From 0ebbdfb263f573400f470cc76ddcf38d89cc059e Mon Sep 17 00:00:00 2001
From: Aaron Francis Fernandes <79958509+aaronfern@users.noreply.github.com>
Date: Tue, 30 Jan 2024 17:07:54 +0530
Subject: [PATCH] Sync with upstream `v1.28.0` (#260)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

* feat(*): add more metrics
* Added the RBAC Permission to Linode.
* CA - Correct Cloudprovider PR labelling to area/provider/
* fix(volcengine): don't build all providers when the volcengine tag exists
* chore: replace `github.com/ghodss/yaml` with `sigs.k8s.io/yaml` At the time of making this commit, the package `github.com/ghodss/yaml` is no longer actively maintained. `sigs.k8s.io/yaml` is a permanent fork of `ghodss/yaml` and is actively maintained by Kubernetes SIG. Signed-off-by: Eng Zer Jun
* Fixed typo and trailing whitespace
* Skip healthiness check for non-existing similar node groups
* BinpackingLimiter interface
* fix comment and list format
* add more logging for balancing similar node groups This change adds some logging at verbosity levels 2 and 3 to help diagnose why the cluster-autoscaler does not consider 2 or more node groups to be similar.
* Update VPA scripts to use v1.
* fix: don't clean `CriticalAddonsOnly` taint from template nodes - this taint leads to unexpected behavior - users expect CA to consider the taint when autoscaling Signed-off-by: vadasambar
* Updated the owners of the civo cloudprovider Signed-off-by: Vishal Anarse
* Bump golang from 1.20.4 to 1.20.5 in /vertical-pod-autoscaler/builder Bumps golang from 1.20.4 to 1.20.5. --- updated-dependencies: - dependency-name: golang dependency-type: direct:production update-type: version-update:semver-patch ... Signed-off-by: dependabot[bot]
* cluster-autoscaler: support Brightbox image pattern cluster-autoscaler/cloudprovider/brightbox Allow scaled workers to be built from an image name pattern as well as an image id.
This deals with long-running clusters where the official image is updated with security changes over time.
* brightbox: set default docker registry Only set the registry value in the local Makefile if it is missing
* Update oci-ip-cluster-autoscaler-w-config.yaml
* Update oci-ip-cluster-autoscaler-w-principals.yaml
* address comments
* golint fix
* Remove print condition for vpa-beta2-crd.
* Improvement: Modified the VPA content for the helm chart.
* Bump the Chart version to 9.29.1 and CA image to 1.27.2
* make no-op binpacking limiter the default + move mark nodegroups to its method
* Drop projected volumes for init containers
* fix zonal gce outage breaking CA when only some of the zones have failed
* Bump version to 0.14.0 as a preparation for release.
* Update vendor to Kubernetes 1.28.0-alpha.2
* Interface fixes after Kubernetes 1.28.0-alpha.2 vendor update
* Execute git commands to show the state of the local clone of the repo.
* Clarify and simplify the "build and stage images" step.
* Mention logs from #5862 in release instructions.
* addressed comments
* chore: remove unused func scaleFromZeroAnnotationsEnabled Signed-off-by: Dinesh B
* add cluster-autoscaler name and version to the user agent This makes it easier to distinguish between various users of the Go SDK.
* Explicitly create and remove buildx builders
* Apply fixes to in-place support VPA AEP Looks like the first PR merged a bit too early, while there were open comments
* Add voelzmo to VPA reviewers voelzmo meets [requirements](https://github.com/kubernetes/community/blob/9504ce87ec14cff9455e794fdcbc5088c52f9dd9/community-membership.md#requirements-1):
  - K8s org member since 2023-02: https://github.com/kubernetes/org/issues/4015
  - Reviewer of 12 merged VPA PRs: https://github.com/kubernetes/autoscaler/pulls?q=is%3Apr+reviewed-by%3Avoelzmo+label%3Avertical-pod-autoscaler+is%3Amerged+
  - Sent 10 merged VPA PRs: https://github.com/kubernetes/autoscaler/pulls?q=is%3Apr+label%3Avertical-pod-autoscaler+author%3Avoelzmo+is%3Amerged
* Bump default VPA version to 0.14.0
* Minor tweaks after preparing VPA 0.14.0 release.
* fix: CA on fargate causing log flood - happens when CA tries to check if the unmanaged fargate node is part of an ASG (it isn't) - and keeps logging an error Signed-off-by: vadasambar
* test: fix node names Signed-off-by: vadasambar
* Sort nodegroups in order of their ID
* Move two util functions from actuator to delete_in_batch, where they are more appropriate
* Add support for atomic scale-down in node group options
* Extract cropNodesToBudgets function out of actuator file
* Support atomic scale-down option for node groups
* Respond to readability-related comments from the review
* Don't pass NodeGroup as a parameter to functions running asynchronously
* Add unit test for group_deletion_scheduler
* Use single AtomicScaling option for scale up and scale down
* address comments
* Address next set of comments
* update agnhost image to pull from registry.k8s.io
* Revert "Add subresource status for vpa" This reverts commit 1384c8bc09cacd2f774872e01a945c4de6697238.
* Bugfix for budget cropping The previous "CropNodes" function of ScaleDownBudgetProcessor assumed that atomically-scaled node groups should be classified as "empty" or "drain" as a whole; however, Cluster Autoscaler may classify some of the nodes from a single group as "empty" and others as "drain".
* Remove unneeded node groups regardless of scale down being in cooldown.
* Update VPA vendor Generated by running:
```
go mod tidy
go mod vendor
```
* Replace `BuildTestContainer` with use of builder
* Quote temp folder name parameter to avoid errors
* Include short unregistered nodes in calculation of incorrect node group sizes
* Add BigDarkClown to Cluster Autoscaler approvers
* Add support for scaling up ZeroToMaxNodesScaling node groups
* Use appropriate logging levels
* Remove unused field in expander and add comment about estimator
* Merge tests for ZeroToMaxNodesScaling into one table-driven test.
* Merged multiple tests into one single table-driven test.
* Fixed some typos.
* Change handling of scale up options for ZeroToMaxNodeScaling in orchestrator
* Started handling scale up options for ZeroToMaxNodeScaling with the existing estimator
* Skip setting similar node groups for the node groups that use ZeroToMaxNodeScaling
* Renamed the autoscaling option from "AtomicScaleUp" to "AtomicScaling"
* Merged multiple tests into one single table-driven test.
* Fixed some typos.
* Rename the autoscaling option
* Renamed the "AtomicScaling" autoscaling option to "ZeroOrMaxNodeScaling" to be clearer about the behavior.
* Record all vpa api versions in recommender metrics Change the tracking of APIVersion from a boolean indicating if the VPA is v1beta1 to the version string and make sure it gets exported in metrics. Add tests for the recommender metrics.
* Add subresource status for vpa Add status field in subresource on crd yaml and add new ClusterRole system:vpa-actor to patch /status subresource. The `metadata.generation` only increases on vpa spec update.
Fix e2e test for patch and create vpa
* Implement threshold interface for use by threshold-based limiter Add EstimationContext to take into account runtime state of the autoscaling for estimations Implement static threshold Implement cluster capacity threshold for Estimation Limiter Implement similar node groups capacity threshold for Estimation Limiter Set default estimation thresholds
* Fix tests
* Add ClusterStateRegistry to the AutoscalingContext. Due to the dependency of the MaxNodeProvisionTimeProvider on the context, the provider was extracted to a dedicated package and injected into the ClusterStateRegistry after context creation.
* Make signature of GetDurationLimit uniform with GetNodeLimit For SNG threshold include capacity of the currently estimated node group (as it is not part of SNG itself) Replaced direct calls with use of getters in cluster capacity threshold Renamed getters removing the verb Get Replace EstimationContext struct with interface Add support for negative threshold value in estimation limiter
* Add support for negative binpacking duration limit in threshold-based estimation limiter
* update RBAC to only use verbs that exist for the resources Signed-off-by: Maximilian Rink
* Move powerState to azure_util, change default to powerStateUnknown
  * renames all PowerState* consts to vmPowerState*
  * moves vmPowerState* consts and helper functions to azure_util.go
  * changes default vmPowerState to vmPowerStateUnknown instead of vmPowerStateStopped when a power state is not set.
* test: fix failing tests - remove non-relevant comment related to rescheduler Signed-off-by: vadasambar
* feat: set `IgnoreDaemonSetsUtilization` per nodegroup Signed-off-by: vadasambar
fix: test cases failing for actuator and scaledown/eligibility - abstract default values into `config` Signed-off-by: vadasambar
refactor: rename global `IgnoreDaemonSetsUtilization` -> `GlobalIgnoreDaemonSetsUtilization` in code - there is no change in the flag name - rename `thresholdGetter` -> `configGetter` and tweak it to accommodate `GetIgnoreDaemonSetsUtilization` Signed-off-by: vadasambar
refactor: reset help text for `ignore-daemonsets-utilization` flag - because per nodegroup override is supported only for AWS ASG tags as of now Signed-off-by: vadasambar
docs: add info about overriding `--ignore-daemonsets-utilization` per ASG - in AWS cloud provider README Signed-off-by: vadasambar
refactor: use a limiting interface in actuator in place of `NodeGroupConfigProcessor` interface - to limit the functions that can be used - since we need it only for `GetIgnoreDaemonSetsUtilization` Signed-off-by: vadasambar
fix: tests failing for actuator - rename `staticNodeGroupConfigProcessor` -> `MockNodeGroupConfigGetter` - move `MockNodeGroupConfigGetter` to test/common so that it can be used in different tests Signed-off-by: vadasambar
fix: go lint errors for `MockNodeGroupConfigGetter` Signed-off-by: vadasambar
test: add tests for `IgnoreDaemonSetsUtilization` in cloud provider dir Signed-off-by: vadasambar
test: update node group config processor tests for `IgnoreDaemonSetsUtilization` Signed-off-by: vadasambar
test: update eligibility test cases for `IgnoreDaemonSetsUtilization` Signed-off-by: vadasambar
test: run actuation tests for 2 NGs - one with `IgnoreDaemonSetsUtilization`: `false` - one with `IgnoreDaemonSetsUtilization`: `true` Signed-off-by: vadasambar
test: add tests for `IgnoreDaemonSetsUtilization` in actuator - add helper to generate multiple ds pods dynamically - get
rid of mock config processor because it is not required Signed-off-by: vadasambar
test: fix failing tests for actuator Signed-off-by: vadasambar
refactor: remove `GlobalIgnoreDaemonSetUtilization` autoscaling option - not required Signed-off-by: vadasambar
fix: warn message `DefaultScaleDownUnreadyTimeKey` -> `DefaultIgnoreDaemonSetsUtilizationKey` Signed-off-by: vadasambar
refactor: use `generateDsPods` instead of `generateDsPod` Signed-off-by: vadasambar
refactor: `globaIgnoreDaemonSetsUtilization` -> `ignoreDaemonSetsUtilization` Signed-off-by: vadasambar
* test: fix merge conflicts in actuator tests Signed-off-by: vadasambar
* refactor: use `actuatorNodeGroupConfigGetter` param in `NewActuator` - instead of passing all the processors (we only need `NodeGroupConfigProcessor`) Signed-off-by: vadasambar
* test: refactor eligibility tests - add suffix to tests with `IgnoreDaemonSetsUtilization` set to `true` and `IgnoreDaemonSetsUtilization` set to `false` Signed-off-by: vadasambar
* refactor: remove comment line (not relevant anymore) Signed-off-by: vadasambar
* fix: dynamic assignment of the scale down threshold flags. Setting maxEmptyBulkDelete and maxScaleDownParallelism to be the larger of the two flags when both are set
* Refactor autoscaler.go and static_autoscaler.go to move declaration of the NodeDeletion option to main.go
* Fixed go:build tags for ovhcloud
* Update the go:build tag for missing cloud providers.
* Adapt FAQ for Pods without controller
* Use strings instead of NodeGroups as map keys in budgets.go
* Delete dead code from budgets.go
* Re-introduce asynchronous node deletion and clean node deletion logic.
* feat: support custom scheduler config for in-tree scheduler plugins (without extenders) Signed-off-by: vadasambar
refactor: rename `--scheduler-config` -> `--scheduler-config-file` to avoid confusion Signed-off-by: vadasambar
fix: `goto` causing infinite loop - abstract out running extenders in a separate function Signed-off-by: vadasambar
refactor: remove code around extenders - we decided not to use scheduler extenders for checking if a pod would fit on a node Signed-off-by: vadasambar
refactor: move scheduler config to a `utils/scheduler` package - use default config as a fallback Signed-off-by: vadasambar
test: fix static_autoscaler test Signed-off-by: vadasambar
refactor: `GetSchedulerConfiguration` fn - remove falling back - add mechanism to detect if the scheduler config file flag was set Signed-off-by: vadasambar
test: wip add tests for `GetSchedulerConfig` - tests are failing now Signed-off-by: vadasambar
test: add tests for `GetSchedulerConfig` - abstract error messages so that we can use them in the tests - set api version explicitly (this is what upstream does as well) Signed-off-by: vadasambar
refactor: do a round of cleanup to make PR ready for review - make import names consistent Signed-off-by: vadasambar
fix: use `pflag` to check if the `--scheduler-config-file` flag was set Signed-off-by: vadasambar
docs: add comments for exported error constants Signed-off-by: vadasambar
refactor: don't export error messages - exporting is not needed Signed-off-by: vadasambar
fix: add underscore in test file name Signed-off-by: vadasambar
test: fix test failing because of no comment on exported `SchedulerConfigFileFlag` Signed-off-by: vadasambar
refactor: change name of flag variable `schedulerConfig` -> `schedulerConfigFile` - avoids confusion Signed-off-by: vadasambar
test: add extra test cases for predicate checker - where the predicate checker uses custom scheduler config Signed-off-by: vadasambar
refactor: remove `setFlags` variable - not needed anymore
Signed-off-by: vadasambar
refactor: abstract custom scheduler configs into `config` package - make them constants Signed-off-by: vadasambar
test: fix linting error Signed-off-by: vadasambar
refactor: introduce a new custom test predicate checker - instead of adding a param to the current one - this is so that we don't have to pass `nil` to the existing test predicate checker in many places Signed-off-by: vadasambar
refactor: rename `NewCustomPredicateChecker` -> `NewTestPredicateCheckerWithCustomConfig` - the latter narrows down the meaning of the function better than the former Signed-off-by: vadasambar
refactor: rename `GetSchedulerConfig` -> `ConfigFromPath` - `scheduler.ConfigFromPath` is shorter and feels less vague than `scheduler.GetSchedulerConfig` - move test config to a new package `test` under `config` package Signed-off-by: vadasambar
docs: add `TODO` for replacing code to parse scheduler config - with upstream function Signed-off-by: vadasambar
* Use fixed version of golang image
* Fix TestBinpackingLimiter flake
* Bump golang from 1.20.5 to 1.20.6 in /vertical-pod-autoscaler/builder Bumps golang from 1.20.5 to 1.20.6. --- updated-dependencies: - dependency-name: golang dependency-type: direct:production update-type: version-update:semver-patch ... Signed-off-by: dependabot[bot]
* Fix: Do not inject fakeNode for instance which has errors on create
* chore: add script to update vendored hcloud-go
* chore(deps): update vendored hcloud-go to 2.0.0 Generated by:
```
UPSTREAM_REF=v2.0.0 hack/update-vendor.sh
```
* fix: balancer RBAC permission to update balancer status
* CA - AWS Cloudprovider OWNERS Update
* Enable parallel drain by default.
* Add BigDarkClown to patch releases schedule
* Update Cluster Autoscaler vendor to K8s 1.28.0-beta.0
* Add EstimationAnalyserFunc to be run at the end of the estimation logic
* Remove ChangeRequirements with `OrEqual`
* Add EvictionRequirements to types
* Run `generate-crd-yaml.sh`
* Add metrics for improved observability:
  * pending_node_deletions
  * failed_gpu_scale_ups_total
* Add requirement for Custom Resources to VPA FAQ
* Clarify Eviction Control for Pods with multiple Containers
* Fix broken hyperlink Co-authored-by: Shubham
* Update vertical-pod-autoscaler/FAQ.md Co-authored-by: Joachim
* Update vertical-pod-autoscaler/FAQ.md Co-authored-by: Joachim
* Reword AND/OR combinations for more clarity
* Fix nil pointer exception for case when node is nil while processing gpuInfo
* feat: add prometheus basic auth Signed-off-by: AhmedGrati
* Add error code for invalid reservations to GCE client
* Bump golang from 1.20.6 to 1.20.7 in /vertical-pod-autoscaler/builder Bumps golang from 1.20.6 to 1.20.7. --- updated-dependencies: - dependency-name: golang dependency-type: direct:production update-type: version-update:semver-patch ... Signed-off-by: dependabot[bot]
* Support ZeroOrMaxNodeScaling node groups when cleaning up unregistered nodes
* Don't pass nil nodes to GetGpuInfoForMetrics
* Revert "Fix nil pointer exception for case when node is nil while processing …"
* Clean up NodeGroupConfigProcessor interface
* docs: add kep to add fswatcher to nanny for automatic nanny configuration Signed-off-by: AhmedGrati
* Allow using an external secret instead of using the one the Helm chart creates
* Remove the MaxNodeProvisioningTimeProvider interface
* Fixed the hyperlink for Node group auto discovery.
* Update ResourcePolicy description and limit control README
* s390x image support
* Bump golang from 1.20.7 to 1.21.0 in /vertical-pod-autoscaler/builder Bumps golang from 1.20.7 to 1.21.0.
--- updated-dependencies: - dependency-name: golang dependency-type: direct:production update-type: version-update:semver-minor ... Signed-off-by: dependabot[bot]
* test
* Set batch size to target size for atomically scaled groups
* a little extra validation
* test with 2 atomic groups
* don't block draining other groups when one group has some empty nodes
* fix: Broken links to testgrid dashboard
* fix: scale down broken for providers not implementing NodeGroup.GetOptions()
* feat(hetzner): use fewer requests while waiting for server create The default is to send a new request every 500ms; this will instead use an exponential backoff while waiting for the server to be created.
* Update in-place updates AEP adding details to consider
* Fix Doc with External gRPC Signed-off-by: ZhengSheng0524
* Add fetch reservations in specific project GCE supports shared reservations where the reservation is in a different project than the project the cluster is in. Add GCE client method to get said reservations so autoscaling can support shared reservations.
* kep: add config file format and structure notes Signed-off-by: AhmedGrati
* CA - 1.28.0 k/k Vendor Update
* Fix duplicate imports in IT
* re-add changes that are part of FORK-CHANGE
* Re-added a fork change command and updated sync change notes
* Update cluster-autoscaler/SYNC-CHANGES/SYNC_CHANGES-1.28.md Co-authored-by: Rishabh Patel <66425093+rishabh-11@users.noreply.github.com>
--------- Signed-off-by: Eng Zer Jun Signed-off-by: vadasambar Signed-off-by: Vishal Anarse Signed-off-by: dependabot[bot] Signed-off-by: Dinesh B Signed-off-by: Maximilian Rink Signed-off-by: vadasambar Signed-off-by: AhmedGrati Signed-off-by: ZhengSheng0524 Co-authored-by: qianlei.qianl Co-authored-by: shubham82 Co-authored-by: Guy Templeton Co-authored-by: Kubernetes Prow Robot Co-authored-by: Eng Zer Jun Co-authored-by: Bartłomiej Wróblewski Co-authored-by: Kushagra Co-authored-by: kei-gnu <61653118+kei-gnu@users.noreply.github.com> Co-authored-by: michael mccune Co-authored-by: vadasambar Co-authored-by: Vishal Anarse Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> Co-authored-by: Neil Wilson Co-authored-by: Sourabh Gupta Co-authored-by: Artur Żyliński Co-authored-by: Damika Gamlath Co-authored-by: Karol Golab Co-authored-by: Dinesh B Co-authored-by: Todd Neal Co-authored-by: Marco Voelz Co-authored-by: Joachim Bartosik Co-authored-by: vadasambar Co-authored-by: Karol Wychowaniec Co-authored-by: Kevin Wiesmueller Co-authored-by: Aleksandra Gacek Co-authored-by: Hakan Bostan Co-authored-by: xiaoqing Co-authored-by: Yuriy Stryuchkov <43517125+ystryuchkov@users.noreply.github.com> Co-authored-by: Daniel Gutowski Co-authored-by: Maximilian Rink Co-authored-by: dom.bozzuto Co-authored-by: bsoghigian Co-authored-by: Krzysztof Siedlecki Co-authored-by: Julian Tölle Co-authored-by: Amir Alavi Co-authored-by: Daniel Kłobuszewski Co-authored-by: droctothorpe Co-authored-by: Marco Voelz Co-authored-by: Jayant Jain Co-authored-by: AhmedGrati Co-authored-by: Mike
Tougeron Co-authored-by: Sachin Tiptur Co-authored-by: Saripalli Lavanya Co-authored-by: Aleksandra Malinowska Co-authored-by: Yash Khare Co-authored-by: Piotr Betkier Co-authored-by: ZhengSheng0524 Co-authored-by: Jessica Chen Co-authored-by: Rishabh Patel <66425093+rishabh-11@users.noreply.github.com> --- .../README.md | 123 + .../5700-nanny-configuration-reload/README.md | 61 + balancer/deploy/controller.yaml | 6 + balancer/proposals/balancer.md | 4 +- builder/Dockerfile | 2 +- charts/cluster-autoscaler/Chart.yaml | 4 +- charts/cluster-autoscaler/README.md | 8 +- charts/cluster-autoscaler/README.md.gotmpl | 5 +- .../templates/clusterrole.yaml | 11 +- .../templates/deployment.yaml | 20 +- charts/cluster-autoscaler/templates/role.yaml | 11 +- charts/cluster-autoscaler/values.yaml | 5 +- cluster-autoscaler/Dockerfile.s390x | 20 + cluster-autoscaler/FAQ.md | 15 +- cluster-autoscaler/Makefile | 2 +- cluster-autoscaler/OWNERS | 1 + cluster-autoscaler/README.md | 12 +- .../SYNC-CHANGES/SYNC_CHANGES-1.28.md | 41 + .../cloudprovider/alicloud/OWNERS | 4 + .../cloudprovider/alicloud/README.md | 8 +- .../sdk/auth/credentials/oidc_credential.go | 37 + .../alibaba-cloud-sdk-go/sdk/auth/signer.go | 4 + .../sdk/auth/signers/signer_oidc.go | 249 + .../alibaba-cloud-sdk-go/sdk/client.go | 19 + .../alibaba-cloud-sdk-go/sdk/client_test.go | 35 + .../services/ecs/client.go | 7 + .../services/ess/client.go | 7 + .../alicloud/alicloud_auto_scaling.go | 9 +- .../alicloud/alicloud_auto_scaling_test.go | 38 + .../alicloud/alicloud_cloud_config.go | 59 +- .../alicloud/alicloud_cloud_config_test.go | 44 + .../alicloud/alicloud_instance_types.go | 7 +- .../alicloud/alicloud_instance_types_test.go | 38 + .../cluster-autoscaler-rrsa-standard.yaml | 203 + cluster-autoscaler/cloudprovider/aws/OWNERS | 6 +- .../cloudprovider/aws/README.md | 2 + .../cloudprovider/aws/auto_scaling_groups.go | 30 +- .../cloudprovider/aws/aws_cloud_provider.go | 12 +- .../aws/aws_cloud_provider_test.go | 135 +- 
.../cloudprovider/aws/aws_manager.go | 42 + .../cloudprovider/aws/aws_manager_test.go | 72 +- .../cloudprovider/aws/aws_sdk_provider.go | 7 +- .../cloudprovider/aws/aws_wrapper.go | 16 +- .../cloudprovider/aws/aws_wrapper_test.go | 19 +- .../cloudprovider/aws/ec2_instance_types.go | 14 + .../cluster-autoscaler-autodiscover.yaml | 6 +- .../cluster-autoscaler-multi-asg.yaml | 6 +- .../examples/cluster-autoscaler-one-asg.yaml | 6 +- ...uster-autoscaler-run-on-control-plane.yaml | 6 +- .../aws/managed_nodegroup_cache.go | 16 +- .../aws/managed_nodegroup_cache_test.go | 134 + cluster-autoscaler/cloudprovider/azure/OWNERS | 4 + .../cloudprovider/azure/azure_scale_set.go | 41 +- .../azure/azure_scale_set_test.go | 74 + .../cloudprovider/azure/azure_util.go | 39 + .../cloudprovider/baiducloud/OWNERS | 4 + .../cloudprovider/bizflycloud/OWNERS | 2 + .../cloudprovider/brightbox/Makefile | 2 +- .../cloudprovider/brightbox/OWNERS | 3 + .../cloudprovider/brightbox/README.md | 4 +- .../brightbox/brightbox_node_group.go | 28 +- .../brightbox/k8ssdk/brightbox_interface.go | 56 + .../brightbox/k8ssdk/cloud_access.go | 3 + .../brightbox/k8ssdk/mocks/CloudAccess.go | 345 +- .../cloudprovider/builder/builder_all.go | 8 +- .../cloudprovider/builder/builder_ovhcloud.go | 6 +- .../builder/builder_volcengine.go | 43 + .../cloudprovider/cherryservers/OWNERS | 3 + cluster-autoscaler/cloudprovider/civo/OWNERS | 18 +- .../cloudprovider/civo/civo_manager.go | 93 +- .../cloudprovider/civo/civo_manager_test.go | 63 + .../cluster-autoscaler-node-pool.yaml | 181 + .../cloudprovider/cloud_provider.go | 2 + .../cloudprovider/cloudstack/OWNERS | 4 + .../cloudprovider/clusterapi/OWNERS | 3 + .../clusterapi/clusterapi_nodegroup.go | 2 +- .../clusterapi/clusterapi_nodegroup_test.go | 25 + .../clusterapi/clusterapi_utils.go | 10 - .../clusterapi/clusterapi_utils_test.go | 41 - .../cloudprovider/digitalocean/OWNERS | 3 +- .../cloudprovider/exoscale/OWNERS | 3 + .../cloudprovider/externalgrpc/OWNERS | 3 + 
.../cloudprovider/externalgrpc/README.md | 2 +- cluster-autoscaler/cloudprovider/gce/OWNERS | 3 + .../gce/autoscaling_gce_client.go | 46 +- .../gce/autoscaling_gce_client_test.go | 12 + cluster-autoscaler/cloudprovider/gce/cache.go | 20 +- .../cloudprovider/gce/gce_manager_test.go | 1 + .../cloudprovider/gce/gce_price_info.go | 21 + .../cloudprovider/gce/mig_info_provider.go | 24 +- .../gce/mig_info_provider_test.go | 141 +- .../cloudprovider/gce/templates.go | 2 +- .../cloudprovider/hetzner/OWNERS | 3 + .../cloudprovider/hetzner/README.md | 17 +- .../hetzner/hack/update-vendor.sh | 28 + .../hetzner/hcloud-go}/LICENSE | 12 +- .../hetzner/hcloud-go/hcloud/action.go | 58 +- .../hetzner/hcloud-go/hcloud/architecture.go | 2 +- .../hetzner/hcloud-go/hcloud/certificate.go | 52 +- .../hetzner/hcloud-go/hcloud/client.go | 19 +- .../hetzner/hcloud-go/hcloud/datacenter.go | 36 +- .../hetzner/hcloud-go/hcloud/deprecation.go | 59 + .../hetzner/hcloud-go/hcloud/error.go | 16 - .../hetzner/hcloud-go/hcloud/firewall.go | 48 +- .../hetzner/hcloud-go/hcloud/floating_ip.go | 32 +- .../hetzner/hcloud-go/hcloud/hcloud.go | 18 +- .../hetzner/hcloud-go/hcloud/helper.go | 16 - .../hetzner/hcloud-go/hcloud/image.go | 28 +- .../internal/instrumentation/metrics.go | 16 - .../hetzner/hcloud-go/hcloud/iso.go | 40 +- .../hetzner/hcloud-go/hcloud/labels.go | 4 +- .../hetzner/hcloud-go/hcloud/load_balancer.go | 54 +- .../hcloud-go/hcloud/load_balancer_type.go | 34 +- .../hetzner/hcloud-go/hcloud/location.go | 34 +- .../hcloud-go/hcloud/metadata/client.go | 20 +- .../hetzner/hcloud-go/hcloud/network.go | 46 +- .../hcloud-go/hcloud/placement_group.go | 28 +- .../hetzner/hcloud-go/hcloud/pricing.go | 16 - .../hetzner/hcloud-go/hcloud/primary_ip.go | 50 +- .../hetzner/hcloud-go/hcloud/rdns.go | 26 +- .../hetzner/hcloud-go/hcloud/resource.go | 18 +- .../hetzner/hcloud-go/hcloud/schema.go | 62 +- .../hetzner/hcloud-go/hcloud/schema/action.go | 20 +- .../hcloud-go/hcloud/schema/certificate.go | 20 +- 
.../hcloud-go/hcloud/schema/datacenter.go | 22 +- .../hcloud-go/hcloud/schema/deprecation.go | 12 + .../hetzner/hcloud-go/hcloud/schema/error.go | 16 - .../hcloud-go/hcloud/schema/firewall.go | 20 +- .../hcloud-go/hcloud/schema/floating_ip.go | 24 +- .../hetzner/hcloud-go/hcloud/schema/image.go | 22 +- .../hetzner/hcloud-go/hcloud/schema/iso.go | 18 +- .../hcloud-go/hcloud/schema/load_balancer.go | 74 +- .../hcloud/schema/load_balancer_type.go | 18 +- .../hcloud-go/hcloud/schema/location.go | 18 +- .../hetzner/hcloud-go/hcloud/schema/meta.go | 16 - .../hcloud-go/hcloud/schema/network.go | 55 +- .../hcloud/schema/placement_group.go | 20 +- .../hcloud-go/hcloud/schema/pricing.go | 20 +- .../hcloud-go/hcloud/schema/primary_ip.go | 11 +- .../hetzner/hcloud-go/hcloud/schema/server.go | 58 +- .../hcloud-go/hcloud/schema/server_type.go | 38 +- .../hcloud-go/hcloud/schema/ssh_key.go | 18 +- .../hetzner/hcloud-go/hcloud/schema/volume.go | 24 +- .../hetzner/hcloud-go/hcloud/server.go | 34 +- .../hetzner/hcloud-go/hcloud/server_type.go | 39 +- .../hetzner/hcloud-go/hcloud/ssh_key.go | 26 +- .../hetzner/hcloud-go/hcloud/testing.go | 24 +- .../hetzner/hcloud-go/hcloud/volume.go | 26 +- .../cloudprovider/hetzner/hetzner_manager.go | 11 +- .../hetzner/hetzner_node_group.go | 2 +- .../hetzner/hetzner_servers_cache.go | 2 +- .../cloudprovider/huaweicloud/OWNERS | 3 + .../cloudprovider/ionoscloud/OWNERS | 3 + .../cloudprovider/kamatera/OWNERS | 3 + .../cloudprovider/kubemark/OWNERS | 3 + .../cloudprovider/linode/OWNERS | 3 + .../cluster-autoscaler-autodiscover.yaml | 4 +- .../cloudprovider/magnum/OWNERS | 3 + .../cloudprovider/magnum/README.md | 2 +- .../magnum/magnum_cloud_provider.go | 4 + cluster-autoscaler/cloudprovider/oci/OWNERS | 3 + .../oci-ip-cluster-autoscaler-w-config.yaml | 2 +- ...ci-ip-cluster-autoscaler-w-principals.yaml | 2 +- .../cloudprovider/ovhcloud/OWNERS | 3 + .../cloudprovider/packet/OWNERS | 3 + .../cloudprovider/rancher/OWNERS | 3 + 
.../cloudprovider/scaleway/OWNERS | 3 + .../cloudprovider/tencentcloud/OWNERS | 3 + .../cloudprovider/test/test_cloud_provider.go | 10 +- .../cloudprovider/volcengine/OWNERS | 6 + .../cloudprovider/volcengine/README.md | 218 + .../cluster-autoscaler-deployment.yaml | 52 + .../examples/cluster-autoscaler-secret.yaml | 11 + .../cluster-autoscaler-svcaccount.yaml | 122 + .../volcengine/volc-sdk-golang/base/aes.go | 67 + .../volcengine/volc-sdk-golang/base/client.go | 357 + .../volcengine/volc-sdk-golang/base/model.go | 160 + .../volcengine/volc-sdk-golang/base/sign.go | 349 + .../volcengine/volc-sdk-golang/base/utils.go | 252 + .../volc-sdk-golang/base/version.go} | 11 +- .../volc-sdk-golang/service/sts/config.go | 97 + .../volc-sdk-golang/service/sts/model.go | 52 + .../volc-sdk-golang/service/sts/wrapper.go | 57 + .../service/sts/wrapper_test.go} | 38 +- .../volcengine-go-sdk/internal/ini/ast.go | 139 + .../internal/ini/comma_token.go} | 19 +- .../internal/ini/comment_token.go | 54 + .../internal/ini/empty_token.go} | 13 +- .../internal/ini/expression.go | 43 + .../volcengine-go-sdk/internal/ini/fuzz.go | 37 + .../volcengine-go-sdk/internal/ini/ini.go | 70 + .../internal/ini/ini_lexer.go | 184 + .../internal/ini/ini_parser.go | 368 + .../internal/ini/literal_tokens.go | 343 + .../internal/ini/newline_token.go | 49 + .../internal/ini/number_helper.go | 171 + .../internal/ini/op_tokens.go | 58 + .../internal/ini/parse_error.go | 62 + .../internal/ini/parse_stack.go | 79 + .../internal/ini/sep_tokens.go | 60 + .../volcengine-go-sdk/internal/ini/skipper.go | 64 + .../internal/ini/statement.go | 54 + .../internal/ini/value_util.go | 303 + .../volcengine-go-sdk/internal/ini/visitor.go | 185 + .../volcengine-go-sdk/internal/ini/walker.go | 44 + .../internal/ini/ws_token.go | 43 + .../volcengine-go-sdk/internal/sdkio/byte.go | 39 + .../internal/sdkmath/floor_go.go | 74 + .../internal/sdkrand/locked_source.go | 48 + .../internal/sdkrand/read.go} | 19 +- 
.../internal/shareddefaults/shared_config.go | 59 + .../private/protocol/host.go | 87 + .../private/protocol/host_prefix.go | 72 + .../private/protocol/idempotency.go | 94 + .../private/protocol/json/jsonutil/build.go | 315 + .../protocol/json/jsonutil/unmarshal.go | 269 + .../private/protocol/jsonvalue.go | 95 + .../private/protocol/payload.go | 100 + .../private/protocol/query/build.go | 53 + .../protocol/query/queryutil/queryutil.go | 271 + .../private/protocol/query/unmarshal.go | 55 + .../private/protocol/query/unmarshal_error.go | 88 + .../private/protocol/rest/build.go | 329 + .../private/protocol/rest/payload.go | 64 + .../private/protocol/rest/unmarshal.go | 256 + .../private/protocol/timestamp.go | 103 + .../private/protocol/unmarshal.go | 40 + .../protocol/volcenginequery/unmarshal.go | 96 + .../private/protocol/xml/xmlutil/build.go | 325 + .../private/protocol/xml/xmlutil/sort.go | 51 + .../private/protocol/xml/xmlutil/unmarshal.go | 310 + .../protocol/xml/xmlutil/xml_to_struct.go | 178 + .../private/util/sort_keys.go | 33 + .../volcengine-go-sdk/private/util/util.go | 128 + .../autoscaling/api_attach_db_instances.go | 202 + .../autoscaling/api_attach_instances.go | 194 + .../autoscaling/api_attach_server_groups.go | 232 + .../api_complete_lifecycle_activity.go | 202 + .../autoscaling/api_create_lifecycle_hook.go | 218 + .../api_create_scaling_configuration.go | 374 ++ .../autoscaling/api_create_scaling_group.go | 296 + .../autoscaling/api_create_scaling_policy.go | 372 ++ .../autoscaling/api_delete_lifecycle_hook.go | 186 + .../api_delete_scaling_configuration.go | 186 + .../autoscaling/api_delete_scaling_group.go | 186 + .../autoscaling/api_delete_scaling_policy.go | 186 + .../api_describe_lifecycle_activities.go | 304 + .../api_describe_lifecycle_hooks.go | 304 + .../api_describe_scaling_activities.go | 438 ++ .../api_describe_scaling_configurations.go | 476 ++ .../api_describe_scaling_groups.go | 430 ++ .../api_describe_scaling_instances.go | 344 + 
.../api_describe_scaling_policies.go | 482 ++ .../autoscaling/api_detach_db_instances.go | 202 + .../autoscaling/api_detach_instances.go | 202 + .../autoscaling/api_detach_server_groups.go | 216 + .../autoscaling/api_disable_scaling_group.go | 186 + .../autoscaling/api_disable_scaling_policy.go | 186 + .../api_enable_scaling_configuration.go | 194 + .../autoscaling/api_enable_scaling_group.go | 186 + .../autoscaling/api_enable_scaling_policy.go | 186 + .../autoscaling/api_modify_lifecycle_hook.go | 210 + .../api_modify_scaling_configuration.go | 374 ++ .../autoscaling/api_modify_scaling_group.go | 250 + .../autoscaling/api_modify_scaling_policy.go | 364 + .../autoscaling/api_remove_instances.go | 194 + .../api_set_instances_protection.go | 248 + .../autoscaling/interface_autoscaling.go | 297 + .../autoscaling/service_autoscaling.go | 105 + .../ecs/api_associate_instances_iam_role.go | 254 + .../service/ecs/api_attach_key_pair.go | 262 + .../service/ecs/api_create_deployment_set.go | 218 + .../service/ecs/api_create_image.go | 210 + .../service/ecs/api_create_key_pair.go | 218 + .../service/ecs/api_create_tags.go | 292 + .../service/ecs/api_delete_deployment_set.go | 178 + .../service/ecs/api_delete_images.go | 246 + .../service/ecs/api_delete_instance.go | 178 + .../service/ecs/api_delete_instances.go | 246 + .../service/ecs/api_delete_key_pairs.go | 246 + .../service/ecs/api_delete_tags.go | 262 + .../ecs/api_describe_available_resource.go | 332 + ...ment_set_supported_instance_type_family.go | 186 + .../ecs/api_describe_deployment_sets.go | 358 + .../api_describe_image_share_permission.go | 248 + .../service/ecs/api_describe_images.go | 434 ++ .../api_describe_instance_ecs_terminal_url.go | 186 + .../api_describe_instance_type_families.go | 232 + .../ecs/api_describe_instance_types.go | 608 ++ .../ecs/api_describe_instance_vnc_url.go | 186 + .../service/ecs/api_describe_instances.go | 820 +++ .../ecs/api_describe_instances_iam_roles.go | 240 + 
.../service/ecs/api_describe_key_pairs.go | 296 + .../service/ecs/api_describe_system_events.go | 418 ++ .../service/ecs/api_describe_tags.go | 302 + .../service/ecs/api_describe_tasks.go | 329 + .../service/ecs/api_describe_user_data.go | 194 + .../service/ecs/api_describe_zones.go | 208 + .../service/ecs/api_detach_key_pair.go | 262 + .../api_disassociate_instances_iam_role.go | 254 + .../service/ecs/api_export_image.go | 202 + .../service/ecs/api_import_image.go | 250 + .../service/ecs/api_import_key_pair.go | 218 + .../api_modify_deployment_set_attribute.go | 194 + .../service/ecs/api_modify_image_attribute.go | 202 + .../ecs/api_modify_image_share_permission.go | 194 + .../ecs/api_modify_instance_attribute.go | 210 + .../ecs/api_modify_instance_charge_type.go | 234 + .../ecs/api_modify_instance_deployment.go | 186 + .../service/ecs/api_modify_instance_spec.go | 210 + .../ecs/api_modify_key_pair_attribute.go | 202 + .../service/ecs/api_reboot_instance.go | 186 + .../service/ecs/api_reboot_instances.go | 254 + .../service/ecs/api_renew_instance.go | 210 + .../service/ecs/api_replace_system_volume.go | 236 + .../service/ecs/api_run_instances.go | 540 ++ .../service/ecs/api_start_instance.go | 178 + .../service/ecs/api_start_instances.go | 246 + .../service/ecs/api_stop_instance.go | 194 + .../service/ecs/api_stop_instances.go | 262 + .../service/ecs/api_update_system_events.go | 278 + .../service/ecs/interface_ecs.go | 449 ++ .../service/ecs/service_ecs.go | 105 + .../volcengine/client/client.go | 122 + .../volcengine/client/default_retryer.go | 195 + .../volcengine/client/logger.go | 284 + .../volcengine/client/metadata/client_info.go | 32 + .../volcengine/client/no_op_retryer.go | 47 + .../volcengine-go-sdk/volcengine/config.go | 668 ++ .../volcengine-go-sdk/volcengine/context.go} | 20 +- .../volcengine/context_background.go | 30 + .../volcengine/context_sleep.go | 43 + .../volcengine/convert_types.go | 951 +++ .../volcengine/corehandlers/custom_req.go | 41 + 
.../volcengine/corehandlers/handlers.go | 249 + .../corehandlers/param_validator.go | 36 + .../volcengine/corehandlers/user_agent.go | 56 + .../volcengine/credentials/chain_provider.go | 104 + .../volcengine/credentials/credentials.go | 337 + .../credentials/endpointcreds/provider.go | 193 + .../volcengine/credentials/env_provider.go | 93 + .../credentials/processcreds/provider.go | 368 + .../shared_credentials_provider.go | 169 + .../volcengine/credentials/static_provider.go | 74 + .../volcengine/credentials/sts_credentials.go | 81 + .../volcengine/credentials/sts_provider.go | 90 + .../volcengine/custom/custom.go | 59 + .../volcengine/custom/interceptor.go | 51 + .../volcengine/defaults/defaults.go | 192 + .../volcengine/defaults/shared_config.go | 46 + .../volcengine/endpoints/endpoints.go | 428 ++ .../volcengine/endpoints/model.go | 327 + .../volcengine-go-sdk/volcengine/errors.go | 32 + .../volcengine-go-sdk/volcengine/jsonvalue.go | 31 + .../volcengine-go-sdk/volcengine/logger.go | 140 + .../request/connection_reset_error.go | 37 + .../volcengine/request/handlers.go | 341 + .../volcengine/request/http_request.go | 43 + .../volcengine/request/offset_reader.go | 84 + .../volcengine/request/request.go | 747 +++ .../volcengine/request/request_context.go | 31 + .../volcengine/request/request_pagination.go | 283 + .../volcengine/request/retryer.go | 295 + .../volcengine/request/timeout_read_closer.go | 113 + .../volcengine/request/validation.go | 333 + .../volcengine/request/waiter.go | 315 + .../response/volcengine_response.go | 44 + .../volcengine/session/credentials.go | 142 + .../volcengine/session/env_config.go | 296 + .../volcengine/session/session.go | 612 ++ .../volcengine/session/shared_config.go | 498 ++ .../volcengine/signer/volc/volc.go | 79 + .../volcengine/special/iot_response.go | 32 + .../volcengine/special/special_conf.go | 33 + .../volcengine-go-sdk/volcengine/types.go | 226 + .../volcengine/universal/universal_client.go | 105 + 
.../volcengine/universal/universal_const.go} | 22 +- .../volcengine/universal/universal_struct.go | 31 + .../volcengine-go-sdk/volcengine/url.go | 29 + .../volcengine-go-sdk/volcengine/version.go | 27 + .../volcengine/volcenginebody/body.go | 140 + .../volcengine/volcengineerr/error.go | 133 + .../volcengine/volcengineerr/types.go | 254 + .../volcengine/volcenginequery/build.go | 62 + .../volcengine/volcenginequery/unmarshal.go | 163 + .../volcenginequery/unmarshal_error.go | 135 + .../volcengine/volcengineutil/copy.go | 127 + .../volcengine/volcengineutil/equal.go | 44 + .../volcengine/volcengineutil/path_value.go | 240 + .../volcengine/volcengineutil/prettify.go | 144 + .../volcengine/volcengineutil/string_value.go | 110 + .../volcengine/volcengineutil/tools.go | 51 + .../volcengine/volcengineutil/trans.go | 77 + .../volcengine/volcengineutil/url.go | 58 + .../volcengine_auto_scaling_cloud_service.go | 197 + .../volcengine_auto_scaling_group.go | 216 + .../volcengine_auto_scaling_groups.go | 95 + .../volcengine/volcengine_cloud_config.go | 84 + .../volcengine/volcengine_cloud_provider.go | 231 + .../volcengine_ecs_cloud_service.go | 61 + .../volcengine/volcengine_manager.go | 238 + .../volcengine/volcengine_util.go | 46 + cluster-autoscaler/cloudprovider/vultr/OWNERS | 9 +- .../cloudprovider/vultr/vultr_node_group.go | 10 +- .../clusterstate/clusterstate.go | 72 +- .../clusterstate/clusterstate_test.go | 382 +- .../max_node_provision_time_provider.go | 72 +- .../utils/node_instances_cache_test.go | 21 +- .../clusterstate/utils/status.go | 3 + .../clusterstate/utils/status_test.go | 11 + .../config/autoscaling_options.go | 13 +- cluster-autoscaler/config/const.go | 16 + cluster-autoscaler/config/test/config.go | 68 + .../context/autoscaling_context.go | 8 +- cluster-autoscaler/core/autoscaler.go | 19 +- .../core/scaledown/actuation/actuator.go | 281 +- .../core/scaledown/actuation/actuator_test.go | 1446 ++-- .../scaledown/actuation/delete_in_batch.go | 97 +- 
.../actuation/delete_in_batch_test.go | 22 +- .../actuation/group_deletion_scheduler.go | 151 + .../group_deletion_scheduler_test.go | 188 + .../core/scaledown/budgets/budgets.go | 219 + .../core/scaledown/budgets/budgets_test.go | 400 ++ .../core/scaledown/eligibility/eligibility.go | 36 +- .../scaledown/eligibility/eligibility_test.go | 83 +- .../core/scaledown/legacy/legacy_test.go | 28 +- .../core/scaledown/planner/planner.go | 8 +- .../core/scaledown/planner/planner_test.go | 191 + .../core/scaledown/unneeded/nodes.go | 8 +- .../core/scaledown/unneeded/nodes_test.go | 5 +- .../core/scaleup/orchestrator/executor.go | 248 + .../scaleup/orchestrator/executor_test.go | 127 + .../core/scaleup/orchestrator/orchestrator.go | 410 +- .../scaleup/orchestrator/orchestrator_test.go | 1052 ++- .../scaleup/orchestrator/skippedreasons.go | 25 +- .../orchestrator/skippedreasons_test.go | 2 +- cluster-autoscaler/core/static_autoscaler.go | 208 +- .../core/static_autoscaler_test.go | 311 +- cluster-autoscaler/core/test/common.go | 150 +- .../estimator/binpacking_estimator.go | 99 +- .../estimator/binpacking_estimator_test.go | 21 +- .../estimator/cluster_capacity_threshold.go | 52 + .../cluster_capacity_threshold_test.go | 68 + .../estimator/decreasing_pod_orderer.go | 87 + .../estimator/decreasing_pod_orderer_test.go | 66 + .../estimator/estimation_context.go | 59 + cluster-autoscaler/estimator/estimator.go | 21 +- .../estimator/sng_capacity_threshold.go | 71 + .../estimator/sng_capacity_threshold_test.go | 92 + .../estimator/static_threshold.go | 45 + cluster-autoscaler/estimator/threshold.go | 30 + .../estimator/threshold_based_limiter.go | 36 +- .../estimator/threshold_based_limiter_test.go | 139 +- cluster-autoscaler/expander/expander.go | 9 +- cluster-autoscaler/go.mod | 161 +- cluster-autoscaler/go.sum | 243 +- .../integration/integration_test.go | 7 +- cluster-autoscaler/main.go | 60 +- cluster-autoscaler/metrics/metrics.go | 30 +- .../binpacking/binpacking_limiter.go | 52 +
.../node_group_config_processor.go | 63 +- .../node_group_config_processor_test.go | 76 +- .../nodegroupset/compare_nodegroups.go | 7 + .../nodes/post_filtering_processor.go | 46 +- cluster-autoscaler/processors/processors.go | 8 +- .../empty_candidates_sorting.go | 4 +- cluster-autoscaler/simulator/drain.go | 40 +- cluster-autoscaler/simulator/drain_test.go | 2 +- .../simulator/drainability/mirror.go | 39 + .../simulator/drainability/mirror_test.go | 66 + .../simulator/drainability/rule.go | 58 +- .../predicatechecker/schedulerbased.go | 23 +- .../predicatechecker/schedulerbased_test.go | 251 +- .../simulator/predicatechecker/testchecker.go | 19 +- .../simulator/utilization/info.go | 9 +- cluster-autoscaler/utils/drain/drain.go | 4 - .../utils/kubernetes/listers.go | 129 +- .../utils/kubernetes/testlisters.go | 19 + .../utils/scheduler/scheduler.go | 41 + .../utils/scheduler/scheduler_test.go | 87 + cluster-autoscaler/utils/taints/taints.go | 46 +- .../utils/taints/taints_test.go | 47 +- cluster-autoscaler/utils/test/test_utils.go | 9 + cluster-autoscaler/utils/utils.go | 10 + cluster-autoscaler/utils/utils_test.go | 4 + .../Azure/azure-sdk-for-go/version/version.go | 2 +- .../Azure/go-autorest/autorest/adal/token.go | 44 +- .../Azure/go-autorest/autorest/autorest.go | 32 +- .../Azure/go-autorest/autorest/azure/azure.go | 2 +- .../Azure/go-autorest/autorest/utility.go | 6 +- .../Microsoft/go-winio/.gitattributes | 1 + .../github.com/Microsoft/go-winio/.gitignore | 9 + .../Microsoft/go-winio/.golangci.yml | 144 + .../github.com/Microsoft/go-winio/README.md | 85 +- .../github.com/Microsoft/go-winio/SECURITY.md | 41 + .../github.com/Microsoft/go-winio/backup.go | 48 +- .../github.com/Microsoft/go-winio/doc.go | 22 + .../github.com/Microsoft/go-winio/ea.go | 8 +- .../github.com/Microsoft/go-winio/file.go | 70 +- .../github.com/Microsoft/go-winio/fileinfo.go | 29 +- .../github.com/Microsoft/go-winio/hvsock.go | 360 +- .../go-winio/internal/socket/rawaddr.go | 20 +
.../go-winio/internal/socket/socket.go | 179 + .../internal/socket/zsyscall_windows.go | 72 + .../github.com/Microsoft/go-winio/pipe.go | 124 +- .../Microsoft/go-winio/pkg/guid/guid.go | 25 +- .../go-winio/pkg/guid/guid_nonwindows.go | 16 + .../go-winio/pkg/guid/guid_windows.go | 13 + .../go-winio/pkg/guid/variant_string.go | 27 + .../pkg/security/grantvmgroupaccess.go | 44 +- .../go-winio/pkg/security/syscall_windows.go | 8 +- .../go-winio/pkg/security/zsyscall_windows.go | 28 +- .../Microsoft/go-winio/privilege.go | 37 +- .../github.com/Microsoft/go-winio/reparse.go | 11 +- .../github.com/Microsoft/go-winio/sd.go | 64 +- .../github.com/Microsoft/go-winio/syscall.go | 4 +- .../github.com/Microsoft/go-winio/tools.go | 5 + .../github.com/Microsoft/go-winio/vhd/vhd.go | 122 +- .../Microsoft/go-winio/vhd/zvhd_windows.go | 56 +- .../Microsoft/go-winio/zsyscall_windows.go | 45 +- .../github.com/Microsoft/hcsshim/hcn/hcn.go | 304 - .../Microsoft/hcsshim/hcn/hcnendpoint.go | 388 -- .../Microsoft/hcsshim/hcn/hcnerrors.go | 164 - .../Microsoft/hcsshim/hcn/hcnglobals.go | 132 - .../Microsoft/hcsshim/hcn/hcnloadbalancer.go | 311 - .../Microsoft/hcsshim/hcn/hcnnamespace.go | 446 -- .../Microsoft/hcsshim/hcn/hcnnetwork.go | 462 -- .../Microsoft/hcsshim/hcn/hcnpolicy.go | 329 - .../Microsoft/hcsshim/hcn/hcnroute.go | 266 - .../Microsoft/hcsshim/hcn/hcnsupport.go | 143 - .../Microsoft/hcsshim/hcn/zsyscall_windows.go | 795 --- .../hcsshim/internal/cni/registry.go | 110 - .../hcsshim/internal/regstate/regstate.go | 288 - .../internal/regstate/zsyscall_windows.go | 51 - .../hcsshim/internal/runhcs/container.go | 71 - .../Microsoft/hcsshim/internal/runhcs/util.go | 16 - .../Microsoft/hcsshim/internal/runhcs/vm.go | 43 - .../github.com/cilium/ebpf/ARCHITECTURE.md | 6 + .../github.com/cilium/ebpf/MAINTAINERS.md | 8 + .../vendor/github.com/cilium/ebpf/Makefile | 83 +- .../vendor/github.com/cilium/ebpf/README.md | 11 +- .../vendor/github.com/cilium/ebpf/asm/func.go | 41 + 
.../github.com/cilium/ebpf/asm/func_string.go | 40 +- .../github.com/cilium/ebpf/asm/instruction.go | 548 +- .../vendor/github.com/cilium/ebpf/asm/jump.go | 74 +- .../github.com/cilium/ebpf/asm/metadata.go | 80 + .../github.com/cilium/ebpf/asm/opcode.go | 116 +- .../cilium/ebpf/asm/opcode_string.go | 18 +- .../github.com/cilium/ebpf/asm/register.go | 1 + .../vendor/github.com/cilium/ebpf/btf/btf.go | 897 +++ .../ebpf/{internal => }/btf/btf_types.go | 82 +- .../{internal => }/btf/btf_types_string.go | 0 .../cilium/ebpf/{internal => }/btf/core.go | 480 +- .../cilium/ebpf/{internal => }/btf/doc.go | 3 - .../github.com/cilium/ebpf/btf/ext_info.go | 721 ++ .../github.com/cilium/ebpf/btf/format.go | 319 + .../github.com/cilium/ebpf/btf/handle.go | 121 + .../github.com/cilium/ebpf/btf/strings.go | 128 + .../github.com/cilium/ebpf/btf/types.go | 1212 ++++ .../github.com/cilium/ebpf/collection.go | 298 +- .../vendor/github.com/cilium/ebpf/doc.go | 9 + .../github.com/cilium/ebpf/elf_reader.go | 638 +- .../github.com/cilium/ebpf/elf_reader_fuzz.go | 22 - .../vendor/github.com/cilium/ebpf/info.go | 158 +- .../cilium/ebpf/internal/btf/btf.go | 798 --- .../cilium/ebpf/internal/btf/ext_info.go | 312 - .../cilium/ebpf/internal/btf/fuzz.go | 50 - .../cilium/ebpf/internal/btf/info.go | 48 - .../cilium/ebpf/internal/btf/strings.go | 54 - .../cilium/ebpf/internal/btf/syscalls.go | 31 - .../cilium/ebpf/internal/btf/types.go | 957 --- .../github.com/cilium/ebpf/internal/elf.go | 34 + .../github.com/cilium/ebpf/internal/endian.go | 29 - .../cilium/ebpf/internal/endian_be.go | 13 + .../cilium/ebpf/internal/endian_le.go | 13 + .../github.com/cilium/ebpf/internal/errors.go | 203 +- .../github.com/cilium/ebpf/internal/fd.go | 69 - .../cilium/ebpf/internal/feature.go | 10 +- .../github.com/cilium/ebpf/internal/io.go | 48 +- .../github.com/cilium/ebpf/internal/output.go | 84 + .../cilium/ebpf/internal/pinning.go | 43 +- .../cilium/ebpf/internal/sys/doc.go | 6 + 
.../github.com/cilium/ebpf/internal/sys/fd.go | 96 + .../cilium/ebpf/internal/{ => sys}/ptr.go | 9 +- .../ebpf/internal/{ => sys}/ptr_32_be.go | 2 +- .../ebpf/internal/{ => sys}/ptr_32_le.go | 2 +- .../cilium/ebpf/internal/{ => sys}/ptr_64.go | 2 +- .../cilium/ebpf/internal/sys/syscall.go | 126 + .../cilium/ebpf/internal/sys/types.go | 1052 +++ .../cilium/ebpf/internal/syscall.go | 304 - .../cilium/ebpf/internal/syscall_string.go | 56 - .../cilium/ebpf/internal/unix/types_linux.go | 26 +- .../cilium/ebpf/internal/unix/types_other.go | 17 +- .../github.com/cilium/ebpf/internal/vdso.go | 150 + .../cilium/ebpf/internal/version.go | 83 +- .../github.com/cilium/ebpf/link/cgroup.go | 14 +- .../github.com/cilium/ebpf/link/freplace.go | 88 - .../github.com/cilium/ebpf/link/iter.go | 39 +- .../github.com/cilium/ebpf/link/kprobe.go | 234 +- .../github.com/cilium/ebpf/link/link.go | 220 +- .../github.com/cilium/ebpf/link/netns.go | 28 +- .../github.com/cilium/ebpf/link/perf_event.go | 260 +- .../github.com/cilium/ebpf/link/program.go | 10 +- .../cilium/ebpf/link/raw_tracepoint.go | 60 +- .../cilium/ebpf/link/socket_filter.go | 40 + .../github.com/cilium/ebpf/link/syscalls.go | 126 +- .../github.com/cilium/ebpf/link/tracepoint.go | 33 +- .../github.com/cilium/ebpf/link/tracing.go | 141 + .../github.com/cilium/ebpf/link/uprobe.go | 201 +- .../vendor/github.com/cilium/ebpf/link/xdp.go | 54 + .../vendor/github.com/cilium/ebpf/linker.go | 293 +- .../vendor/github.com/cilium/ebpf/map.go | 533 +- .../github.com/cilium/ebpf/marshalers.go | 36 +- .../vendor/github.com/cilium/ebpf/prog.go | 610 +- .../vendor/github.com/cilium/ebpf/readme.md | 11 +- .../github.com/cilium/ebpf/run-tests.sh | 62 +- .../vendor/github.com/cilium/ebpf/syscalls.go | 358 +- .../vendor/github.com/cilium/ebpf/types.go | 22 +- .../github.com/cilium/ebpf/types_string.go | 7 +- .../spec/lib/go/csi/csi.pb.go | 1562 ++++- .../github.com/containerd/cgroups/.gitignore | 2 + .../github.com/containerd/cgroups/Makefile | 24 +
.../containerd/cgroups/Protobuild.toml | 46 + .../github.com/containerd/cgroups/README.md | 204 + .../github.com/containerd/cgroups/blkio.go | 361 + .../github.com/containerd/cgroups/cgroup.go | 543 ++ .../github.com/containerd/cgroups/control.go | 99 + .../github.com/containerd/cgroups/cpu.go | 125 + .../github.com/containerd/cgroups/cpuacct.go | 129 + .../github.com/containerd/cgroups/cpuset.go | 158 + .../github.com/containerd/cgroups/devices.go | 92 + .../github.com/containerd/cgroups/errors.go | 47 + .../github.com/containerd/cgroups/freezer.go | 82 + .../containerd/cgroups/hierarchy.go | 20 + .../github.com/containerd/cgroups/hugetlb.go | 109 + .../github.com/containerd/cgroups/memory.go | 480 ++ .../github.com/containerd/cgroups/named.go | 39 + .../github.com/containerd/cgroups/net_cls.go | 61 + .../github.com/containerd/cgroups/net_prio.go | 65 + .../github.com/containerd/cgroups/opts.go | 61 + .../github.com/containerd/cgroups/paths.go | 106 + .../containerd/cgroups/perf_event.go | 37 + .../github.com/containerd/cgroups/pids.go | 85 + .../github.com/containerd/cgroups/rdma.go | 154 + .../github.com/containerd/cgroups/state.go | 28 + .../containerd/cgroups/subsystem.go | 116 + .../github.com/containerd/cgroups/systemd.go | 158 + .../github.com/containerd/cgroups/ticks.go | 26 + .../github.com/containerd/cgroups/utils.go | 391 ++ .../github.com/containerd/cgroups/v1.go | 73 + .../containerd/ttrpc/.gitattributes | 1 + .../github.com/containerd/ttrpc/.gitignore | 2 + .../github.com/containerd/ttrpc/.golangci.yml | 52 + .../github.com/containerd/ttrpc/Makefile | 180 + .../github.com/containerd/ttrpc/PROTOCOL.md | 240 + .../containerd/ttrpc/Protobuild.toml | 28 + .../github.com/containerd/ttrpc/README.md | 9 +- .../github.com/containerd/ttrpc/channel.go | 47 +- .../github.com/containerd/ttrpc/client.go | 517 +- .../github.com/containerd/ttrpc/codec.go | 2 +- .../vendor/github.com/containerd/ttrpc/doc.go | 23 + .../github.com/containerd/ttrpc/errors.go | 34 +
.../github.com/containerd/ttrpc/handshake.go | 2 +- .../containerd/ttrpc/interceptor.go | 17 +- .../github.com/containerd/ttrpc/request.pb.go | 396 ++ .../github.com/containerd/ttrpc/request.proto | 29 + .../github.com/containerd/ttrpc/server.go | 311 +- .../github.com/containerd/ttrpc/services.go | 201 +- .../github.com/containerd/ttrpc/stream.go | 84 + .../containerd/ttrpc/stream_server.go | 22 + .../github.com/containerd/ttrpc/test.proto | 16 + .../github.com/containerd/ttrpc/types.go | 63 - .../containerd/ttrpc/unixcreds_linux.go | 8 +- .../distribution/reference/reference.go | 4 +- .../pkg/aws/apis/aws_provider_spec.go | 11 +- .../pkg/azure/apis/azure_provider_spec.go | 42 +- .../vendor/github.com/ghodss/yaml/.gitignore | 20 - .../vendor/github.com/ghodss/yaml/.travis.yml | 7 - .../vendor/github.com/ghodss/yaml/README.md | 121 - .../vendor/github.com/ghodss/yaml/fields.go | 501 -- .../vendor/github.com/ghodss/yaml/yaml.go | 277 - .../vendor/github.com/gofrs/uuid/.travis.yml | 22 - .../vendor/github.com/gofrs/uuid/README.md | 9 + .../vendor/github.com/gofrs/uuid/codec.go | 244 +- .../vendor/github.com/gofrs/uuid/fuzz.go | 11 +- .../vendor/github.com/gofrs/uuid/generator.go | 209 +- .../vendor/github.com/gofrs/uuid/sql.go | 35 +- .../vendor/github.com/gofrs/uuid/uuid.go | 143 +- .../github.com/golang-jwt/jwt/v4/README.md | 18 +- .../github.com/golang-jwt/jwt/v4/claims.go | 6 +- .../github.com/golang-jwt/jwt/v4/parser.go | 7 + .../github.com/golang-jwt/jwt/v4/token.go | 20 +- .../google/cadvisor/container/crio/handler.go | 16 +- .../google/cadvisor/info/v1/machine.go | 1 + .../github.com/google/cel-go/cel/BUILD.bazel | 4 +- .../github.com/google/cel-go/cel/env.go | 59 +- .../github.com/google/cel-go/cel/library.go | 57 +- .../github.com/google/cel-go/cel/options.go | 9 + .../google/cel-go/checker/BUILD.bazel | 4 +- .../github.com/google/cel-go/checker/cost.go | 42 +- .../google/cel-go/checker/decls/BUILD.bazel | 2 +- .../github.com/google/cel-go/checker/env.go | 27 +-
.../google/cel-go/common/BUILD.bazel | 2 +- .../cel-go/common/containers/BUILD.bazel | 4 +- .../google/cel-go/common/debug/BUILD.bazel | 2 +- .../google/cel-go/common/types/BUILD.bazel | 6 +- .../google/cel-go/common/types/pb/BUILD.bazel | 2 +- .../cel-go/common/types/ref/BUILD.bazel | 2 +- .../cel-go/common/types/ref/reference.go | 11 +- .../github.com/google/cel-go/ext/BUILD.bazel | 6 +- .../github.com/google/cel-go/ext/README.md | 86 +- .../github.com/google/cel-go/ext/bindings.go | 100 + .../github.com/google/cel-go/ext/math.go | 2 +- .../github.com/google/cel-go/ext/sets.go | 138 + .../github.com/google/cel-go/ext/strings.go | 76 +- .../google/cel-go/interpreter/BUILD.bazel | 5 +- .../google/cel-go/interpreter/attributes.go | 70 +- .../cel-go/interpreter/interpretable.go | 113 +- .../google/cel-go/interpreter/planner.go | 17 +- .../google/cel-go/interpreter/prune.go | 352 +- .../google/cel-go/interpreter/runtimecost.go | 56 +- .../google/cel-go/parser/BUILD.bazel | 6 +- .../google/cel-go/parser/gen/BUILD.bazel | 2 +- .../github.com/google/cel-go/parser/helper.go | 3 +- .../google/cel-go/parser/options.go | 16 +- .../github.com/google/cel-go/parser/parser.go | 34 +- .../google/cel-go/parser/unparser.go | 3 + .../{gnostic => gnostic-models}/LICENSE | 0 .../compiler/README.md | 0 .../compiler/context.go | 0 .../compiler/error.go | 0 .../compiler/extensions.go | 2 +- .../compiler/helpers.go | 2 +- .../compiler/main.go | 0 .../compiler/reader.go | 0 .../extensions/README.md | 0 .../extensions/extension.pb.go | 4 +- .../extensions/extension.proto | 0 .../extensions/extensions.go | 0 .../jsonschema/README.md | 0 .../jsonschema/base.go | 15 +- .../jsonschema/display.go | 17 +- .../jsonschema/models.go | 8 +- .../jsonschema/operations.go | 0 .../jsonschema/reader.go | 1 + .../jsonschema/schema.json | 0 .../jsonschema/writer.go | 30 +- .../openapiv2/OpenAPIv2.go | 9 +- .../openapiv2/OpenAPIv2.pb.go | 4 +- .../openapiv2/OpenAPIv2.proto | 0 .../openapiv2/README.md | 0
.../openapiv2/document.go | 2 +- .../openapiv2/openapi-2.0.json | 0 .../openapiv3/OpenAPIv3.go | 9 +- .../openapiv3/OpenAPIv3.pb.go | 13 +- .../openapiv3/OpenAPIv3.proto | 2 +- .../openapiv3/README.md | 4 - .../openapiv3/document.go | 2 +- .../gnostic/openapiv3/annotations.pb.go | 183 - .../gnostic/openapiv3/annotations.proto | 60 - .../google/gnostic/openapiv3/openapi-3.0.json | 1251 ---- .../google/gnostic/openapiv3/openapi-3.1.json | 1250 ---- .../mitchellh/mapstructure/CHANGELOG.md | 96 - .../mitchellh/mapstructure/README.md | 46 - .../mitchellh/mapstructure/decode_hooks.go | 279 - .../mitchellh/mapstructure/error.go | 50 - .../mitchellh/mapstructure/mapstructure.go | 1540 ----- .../libcontainer/cgroups/ebpf/ebpf_linux.go | 2 +- .../runc/libcontainer/cgroups/fs/fs.go | 1 + .../libcontainer/cgroups/systemd/common.go | 78 +- .../libcontainer/cgroups/systemd/cpuset.go | 5 + .../runc/libcontainer/cgroups/systemd/v1.go | 14 +- .../runc/libcontainer/cgroups/systemd/v2.go | 4 +- .../runc/libcontainer/cgroups/utils.go | 6 +- .../configs/validate/validator.go | 5 +- .../runc/libcontainer/container_linux.go | 2 +- .../runc/libcontainer/eaccess_go119.go | 17 + .../runc/libcontainer/eaccess_stub.go | 10 + .../runc/libcontainer/factory_linux.go | 11 +- .../runc/libcontainer/init_linux.go | 5 +- .../runc/libcontainer/rootfs_linux.go | 82 +- .../runc/libcontainer/standard_init_linux.go | 11 +- .../opencontainers/runc/libcontainer/sync.go | 14 +- .../runc/libcontainer/system/linux.go | 19 - .../runc/libcontainer/user/user.go | 14 +- .../collectors/go_collector_latest.go | 2 + .../client_golang/prometheus/counter.go | 26 +- .../client_golang/prometheus/desc.go | 46 +- .../client_golang/prometheus/doc.go | 44 +- .../client_golang/prometheus/gauge.go | 26 +- .../prometheus/go_collector_latest.go | 7 +- .../client_golang/prometheus/histogram.go | 61 +- .../client_golang/prometheus/labels.go | 72 + .../client_golang/prometheus/metric.go | 6 +-
.../client_golang/prometheus/promhttp/http.go | 19 +- .../prometheus/promhttp/instrument_client.go | 26 +- .../prometheus/promhttp/instrument_server.go | 101 +- .../prometheus/promhttp/option.go | 38 +- .../client_golang/prometheus/registry.go | 17 +- .../client_golang/prometheus/summary.go | 39 +- .../prometheus/testutil/promlint/promlint.go | 20 +- .../prometheus/testutil/testutil.go | 1 + .../client_golang/prometheus/timer.go | 28 +- .../client_golang/prometheus/value.go | 10 +- .../client_golang/prometheus/vec.go | 79 +- .../client_golang/prometheus/vnext.go | 23 + .../client_golang/prometheus/wrap.go | 8 +- .../prometheus/client_model/go/metrics.pb.go | 1530 +++-- .../prometheus/common/expfmt/decode.go | 5 +- .../prometheus/common/expfmt/encode.go | 13 +- .../prometheus/common/expfmt/expfmt.go | 26 +- .../prometheus/common/expfmt/text_parse.go | 2 +- .../prometheus/procfs/Makefile.common | 16 +- .../vendor/github.com/prometheus/procfs/fs.go | 9 +- .../prometheus/procfs/fs_statfs_notype.go | 23 + .../prometheus/procfs/fs_statfs_type.go | 33 + .../prometheus/procfs/internal/util/parse.go | 15 + .../prometheus/procfs/mountstats.go | 6 +- .../prometheus/procfs/net_conntrackstat.go | 88 +- .../prometheus/procfs/net_softnet.go | 5 + .../prometheus/procfs/net_wireless.go | 182 + .../github.com/prometheus/procfs/netstat.go | 25 +- .../github.com/prometheus/procfs/proc.go | 22 +- .../github.com/prometheus/procfs/proc_stat.go | 6 +- .../prometheus/procfs/proc_status.go | 32 + .../github.com/prometheus/procfs/thread.go | 9 +- .../seccomp/libseccomp-golang/CHANGELOG | 25 + .../seccomp/libseccomp-golang/README.md | 32 +- .../seccomp/libseccomp-golang/SECURITY.md | 1 + .../seccomp/libseccomp-golang/seccomp.go | 15 +- .../libseccomp-golang/seccomp_internal.go | 17 +- .../vishvananda/netns/.golangci.yml | 2 + .../github.com/vishvananda/netns/README.md | 11 - .../vishvananda/netns/netns_linux.go | 9 +- .../vishvananda/netns/nshandle_linux.go | 2 +- 
.../vishvananda/netns/nshandle_others.go | 2 +- .../go.etcd.io/etcd/api/v3/version/version.go | 2 +- .../etcd/client/pkg/v3/logutil/zap.go | 24 +- .../etcd/client/pkg/v3/tlsutil/versions.go | 47 + .../etcd/client/pkg/v3/transport/listener.go | 33 +- .../vendor/go.etcd.io/etcd/client/v3/doc.go | 4 +- .../client/v3/internal/endpoint/endpoint.go | 13 +- .../vendor/go.etcd.io/etcd/client/v3/txn.go | 17 +- .../vendor/go.etcd.io/etcd/client/v3/watch.go | 2 +- .../vendor/golang.org/x/crypto/hkdf/hkdf.go | 93 + .../ghodss/yaml => golang.org/x/mod}/LICENSE | 25 +- .../vendor/golang.org/x/mod/PATENTS | 22 + .../vendor/golang.org/x/mod/semver/semver.go | 401 ++ .../golang.org/x/oauth2/google/default.go | 9 +- .../golang.org/x/oauth2/internal/oauth2.go | 2 +- .../golang.org/x/oauth2/internal/token.go | 60 +- .../vendor/golang.org/x/oauth2/token.go | 19 +- .../golang.org/x/sys/execabs/execabs.go | 102 + .../golang.org/x/sys/execabs/execabs_go118.go | 18 + .../golang.org/x/sys/execabs/execabs_go119.go | 21 + .../x/tools/cmd/stringer/stringer.go | 657 ++ .../x/tools/go/gcexportdata/gcexportdata.go | 186 + .../x/tools/go/gcexportdata/importer.go | 75 + .../tools/go/internal/packagesdriver/sizes.go | 49 + .../golang.org/x/tools/go/packages/doc.go | 220 + .../x/tools/go/packages/external.go | 101 + .../golang.org/x/tools/go/packages/golist.go | 1183 ++++ .../x/tools/go/packages/golist_overlay.go | 575 ++ .../x/tools/go/packages/loadmode_string.go | 57 + .../x/tools/go/packages/packages.go | 1332 ++++ .../golang.org/x/tools/go/packages/visit.go | 59 + .../x/tools/go/types/objectpath/objectpath.go | 824 +++ .../x/tools/internal/event/core/event.go | 85 + .../x/tools/internal/event/core/export.go | 70 + .../x/tools/internal/event/core/fast.go | 77 + .../golang.org/x/tools/internal/event/doc.go | 7 + .../x/tools/internal/event/event.go | 127 + .../x/tools/internal/event/keys/keys.go | 564 ++ .../x/tools/internal/event/keys/standard.go | 22 + .../x/tools/internal/event/label/label.go | 215 +
.../x/tools/internal/event/tag/tag.go | 59 + .../x/tools/internal/gcimporter/bimport.go | 150 + .../x/tools/internal/gcimporter/exportdata.go | 99 + .../x/tools/internal/gcimporter/gcimporter.go | 274 + .../x/tools/internal/gcimporter/iexport.go | 1322 ++++ .../x/tools/internal/gcimporter/iimport.go | 1083 +++ .../internal/gcimporter/newInterface10.go | 22 + .../internal/gcimporter/newInterface11.go | 14 + .../internal/gcimporter/support_go117.go | 16 + .../internal/gcimporter/support_go118.go | 37 + .../x/tools/internal/gcimporter/unified_no.go | 10 + .../tools/internal/gcimporter/unified_yes.go | 10 + .../x/tools/internal/gcimporter/ureader_no.go | 19 + .../tools/internal/gcimporter/ureader_yes.go | 728 ++ .../x/tools/internal/gocommand/invoke.go | 462 ++ .../x/tools/internal/gocommand/vendor.go | 109 + .../x/tools/internal/gocommand/version.go | 71 + .../internal/packagesinternal/packages.go | 30 + .../x/tools/internal/pkgbits/codes.go | 77 + .../x/tools/internal/pkgbits/decoder.go | 517 ++ .../x/tools/internal/pkgbits/doc.go | 32 + .../x/tools/internal/pkgbits/encoder.go | 383 ++ .../x/tools/internal/pkgbits/flags.go | 9 + .../x/tools/internal/pkgbits/frames_go1.go | 21 + .../x/tools/internal/pkgbits/frames_go17.go | 28 + .../x/tools/internal/pkgbits/reloc.go | 42 + .../x/tools/internal/pkgbits/support.go | 17 + .../x/tools/internal/pkgbits/sync.go | 113 + .../internal/pkgbits/syncmarker_string.go | 89 + .../internal/tokeninternal/tokeninternal.go | 151 + .../tools/internal/typesinternal/errorcode.go | 1560 +++++ .../typesinternal/errorcode_string.go | 179 + .../x/tools/internal/typesinternal/types.go | 68 + .../tools/internal/typesinternal/types_118.go | 19 + .../genproto/googleapis/api}/LICENSE | 0 .../googleapis/api/annotations/client.pb.go | 96 +- .../genproto/googleapis/api/tidyfix.go | 23 + .../genproto/googleapis/rpc/LICENSE | 202 + .../genproto/internal/doc.go | 17 + .../admissionregistration/v1/generated.proto | 4 +-
.../api/admissionregistration/v1/types.go | 4 +- .../v1/types_swagger_doc_generated.go | 4 +- .../v1alpha1/generated.pb.go | 541 +- .../v1alpha1/generated.proto | 92 +- .../admissionregistration/v1alpha1/types.go | 105 +- .../v1alpha1/types_swagger_doc_generated.go | 25 +- .../v1alpha1/zz_generated.deepcopy.go | 33 +- .../v1beta1/generated.pb.go | 5927 ++++++++++++++--- .../v1beta1/generated.proto | 564 +- .../admissionregistration/v1beta1/register.go | 4 + .../admissionregistration/v1beta1/types.go | 594 +- .../v1beta1/types_swagger_doc_generated.go | 178 +- .../v1beta1/zz_generated.deepcopy.go | 448 +- .../zz_generated.prerelease-lifecycle.go | 72 + .../api/apidiscovery/v2beta1/generated.proto | 4 +- .../k8s.io/api/apidiscovery/v2beta1/types.go | 4 +- .../v1alpha1/generated.pb.go | 148 +- .../v1alpha1/generated.proto | 5 + .../api/apiserverinternal/v1alpha1/types.go | 5 + .../v1alpha1/types_swagger_doc_generated.go | 1 + .../v1alpha1/zz_generated.deepcopy.go | 5 + .../vendor/k8s.io/api/apps/v1/types.go | 3 +- .../api/authentication/v1/generated.pb.go | 511 +- .../api/authentication/v1/generated.proto | 20 + .../k8s.io/api/authentication/v1/register.go | 1 + .../k8s.io/api/authentication/v1/types.go | 25 + .../v1/types_swagger_doc_generated.go | 19 + .../v1/zz_generated.deepcopy.go | 44 + .../k8s.io/api/batch/v1/generated.pb.go | 398 +- .../k8s.io/api/batch/v1/generated.proto | 66 +- .../vendor/k8s.io/api/batch/v1/types.go | 96 +- .../batch/v1/types_swagger_doc_generated.go | 9 +- .../api/batch/v1/zz_generated.deepcopy.go | 25 + .../api/core/v1/annotation_key_constants.go | 6 +- .../vendor/k8s.io/api/core/v1/generated.pb.go | 3017 +++++---- .../vendor/k8s.io/api/core/v1/generated.proto | 196 +- .../vendor/k8s.io/api/core/v1/types.go | 238 +- .../core/v1/types_swagger_doc_generated.go | 68 +- .../k8s.io/api/core/v1/well_known_labels.go | 4 + .../api/core/v1/zz_generated.deepcopy.go | 75 +- .../k8s.io/api/discovery/v1/generated.proto | 2 + 
.../vendor/k8s.io/api/discovery/v1/types.go | 2 + .../v1/types_swagger_doc_generated.go | 2 +- .../api/extensions/v1beta1/generated.pb.go | 610 +- .../api/extensions/v1beta1/generated.proto | 17 - .../k8s.io/api/extensions/v1beta1/types.go | 50 +- .../v1beta1/types_swagger_doc_generated.go | 10 - .../v1beta1/zz_generated.deepcopy.go | 24 - .../api/flowcontrol/v1alpha1/generated.pb.go | 477 +- .../api/flowcontrol/v1alpha1/generated.proto | 42 + .../k8s.io/api/flowcontrol/v1alpha1/types.go | 45 + .../v1alpha1/types_swagger_doc_generated.go | 11 + .../v1alpha1/zz_generated.deepcopy.go | 31 + .../api/flowcontrol/v1beta1/generated.pb.go | 476 +- .../api/flowcontrol/v1beta1/generated.proto | 42 + .../k8s.io/api/flowcontrol/v1beta1/types.go | 49 +- .../v1beta1/types_swagger_doc_generated.go | 11 + .../v1beta1/zz_generated.deepcopy.go | 31 + .../api/flowcontrol/v1beta2/generated.pb.go | 477 +- .../api/flowcontrol/v1beta2/generated.proto | 42 + .../k8s.io/api/flowcontrol/v1beta2/types.go | 49 +- .../v1beta2/types_swagger_doc_generated.go | 11 + .../v1beta2/zz_generated.deepcopy.go | 31 + .../api/flowcontrol/v1beta3/generated.pb.go | 475 +- .../api/flowcontrol/v1beta3/generated.proto | 46 +- .../k8s.io/api/flowcontrol/v1beta3/types.go | 53 +- .../v1beta3/types_swagger_doc_generated.go | 13 +- .../v1beta3/zz_generated.deepcopy.go | 31 + .../k8s.io/api/networking/v1/generated.pb.go | 443 +- .../k8s.io/api/networking/v1/generated.proto | 17 - .../vendor/k8s.io/api/networking/v1/types.go | 50 +- .../v1/types_swagger_doc_generated.go | 10 - .../networking/v1/zz_generated.deepcopy.go | 24 - .../vendor/k8s.io/api/rbac/v1/generated.proto | 2 + .../vendor/k8s.io/api/rbac/v1/types.go | 2 + .../rbac/v1/types_swagger_doc_generated.go | 4 +- .../k8s.io/apiextensions-apiserver/LICENSE | 202 + .../pkg/features/OWNERS | 4 + .../pkg/features/kube_features.go | 48 + .../k8s.io/apimachinery/pkg/api/errors/OWNERS | 1 - .../k8s.io/apimachinery/pkg/api/meta/help.go | 83 +- 
.../apimachinery/pkg/api/resource/OWNERS | 1 - .../pkg/apis/meta/v1/generated.proto | 2 - .../apimachinery/pkg/apis/meta/v1/types.go | 22 +- .../apis/meta/v1/unstructured/unstructured.go | 5 + .../meta/v1/unstructured/unstructured_list.go | 9 + .../k8s.io/apimachinery/pkg/runtime/codec.go | 1 - .../apimachinery/pkg/runtime/converter.go | 4 +- .../apimachinery/pkg/runtime/interfaces.go | 5 + .../pkg/runtime/schema/group_version.go | 2 +- .../k8s.io/apimachinery/pkg/runtime/splice.go | 76 + .../apimachinery/pkg/types/namespacedname.go | 3 +- .../apimachinery/pkg/util/cache/expiring.go | 12 +- .../k8s.io/apimachinery/pkg/util/diff/diff.go | 37 +- .../k8s.io/apimachinery/pkg/util/dump/dump.go | 54 + .../pkg/util/httpstream/spdy/roundtripper.go | 4 +- .../pkg/util/httpstream}/wsstream/conn.go | 0 .../pkg/util/httpstream}/wsstream/doc.go | 2 +- .../pkg/util/httpstream}/wsstream/stream.go | 0 .../apimachinery/pkg/util/intstr/intstr.go | 7 +- .../managedfields/internal/fieldmanager.go | 25 +- .../managedfields/internal/skipnonapplied.go | 14 +- .../managedfields/internal/structuredmerge.go | 3 + .../managedfields/internal/versioncheck.go | 52 + .../apimachinery/pkg/util/mergepatch/util.go | 4 +- .../k8s.io/apimachinery/pkg/util/net/util.go | 6 + .../apimachinery/pkg/util/proxy/transport.go | 3 +- .../pkg/util/proxy/upgradeaware.go | 3 +- .../pkg/util/strategicpatch/patch.go | 63 +- .../apimachinery/pkg/util/version/version.go | 5 + .../k8s.io/apimachinery/pkg/util/wait/loop.go | 19 +- .../k8s.io/apimachinery/pkg/util/wait/poll.go | 28 +- .../configuration/mutating_webhook_manager.go | 76 +- .../validating_webhook_manager.go | 79 +- .../pkg/admission/metrics/metrics.go | 67 +- .../pkg/admission/plugin/cel/compile.go | 273 +- .../pkg/admission/plugin/cel/composition.go | 198 + .../pkg/admission/plugin/cel/filter.go | 81 +- .../pkg/admission/plugin/cel/interface.go | 14 +- .../validatingadmissionpolicy/admission.go | 10 +- .../caching_authorizer.go | 133 + 
.../validatingadmissionpolicy/controller.go | 443 +- .../controller_reconcile.go | 292 +- .../validatingadmissionpolicy/interface.go | 32 +- .../validatingadmissionpolicy/matcher.go | 17 +- .../matching/matching.go | 59 +- .../validatingadmissionpolicy/typechecking.go | 330 +- .../validatingadmissionpolicy/validator.go | 23 +- .../pkg/admission/plugin/webhook/accessors.go | 29 +- .../plugin/webhook/generic/webhook.go | 13 +- .../webhook/matchconditions/interface.go | 3 +- .../plugin/webhook/matchconditions/matcher.go | 23 +- .../plugin/webhook/mutating/dispatcher.go | 4 + .../webhook/predicates/namespace/matcher.go | 6 + .../plugin/webhook/validating/dispatcher.go | 4 + .../pkg/apis/flowcontrol/bootstrap/default.go | 4 + .../k8s.io/apiserver/pkg/audit/context.go | 76 +- .../k8s.io/apiserver/pkg/audit/request.go | 29 +- .../request/websocket/protocol.go | 2 +- .../token/cache/cached_token_authenticator.go | 9 +- .../k8s.io/apiserver/pkg/cel/common/values.go | 20 +- .../k8s.io/apiserver/pkg/cel/composited.go | 119 - .../apiserver/pkg/cel/environment/base.go | 119 + .../pkg/cel/environment/environment.go | 274 + .../k8s.io/apiserver/pkg/cel/lazy/lazy.go | 191 + .../k8s.io/apiserver/pkg/cel/library/authz.go | 62 +- .../k8s.io/apiserver/pkg/cel/library/cost.go | 4 +- .../apiserver/pkg/cel/library/quantity.go | 375 ++ .../k8s.io/apiserver/pkg/cel/library/regex.go | 4 +- .../k8s.io/apiserver/pkg/cel/library/test.go | 79 + .../k8s.io/apiserver/pkg/cel/quantity.go | 76 + .../k8s.io/apiserver/pkg/cel/registry.go | 79 - .../vendor/k8s.io/apiserver/pkg/cel/types.go | 160 +- .../apiserver/pkg/endpoints/filters/audit.go | 13 +- .../pkg/endpoints/filters/authentication.go | 6 +- .../pkg/endpoints/filters/authn_audit.go | 4 +- .../pkg/endpoints/filters/authorization.go | 18 +- .../pkg/endpoints/filters/metrics.go | 47 +- .../pkg/endpoints/filters/request_deadline.go | 4 +- .../apiserver/pkg/endpoints/groupversion.go | 5 + .../pkg/endpoints/handlers/create.go | 13 +- 
.../apiserver/pkg/endpoints/handlers/patch.go | 19 +- .../handlers/responsewriters/writers.go | 2 +- .../pkg/endpoints/handlers/update.go | 27 +- .../apiserver/pkg/endpoints/handlers/watch.go | 4 +- .../apiserver/pkg/endpoints/installer.go | 48 +- .../pkg/endpoints/metrics/metrics.go | 2 +- .../apiserver/pkg/features/kube_features.go | 57 +- .../apiserver/pkg/registry/generic/OWNERS | 1 - .../pkg/registry/generic/registry/dryrun.go | 22 +- .../pkg/registry/generic/registry/store.go | 127 +- .../k8s.io/apiserver/pkg/server/config.go | 27 +- .../pkg/server/filters/maxinflight.go | 9 +- .../server/filters/priority-and-fairness.go | 441 +- .../apiserver/pkg/server/genericapiserver.go | 50 +- .../k8s.io/apiserver/pkg/server/handler.go | 4 +- .../apiserver/pkg/server/options/OWNERS | 2 + .../apiserver/pkg/server/options/admission.go | 15 +- .../apiserver/pkg/server/options/audit.go | 10 - .../options/deprecated_insecure_serving.go | 4 +- .../server/options/encryptionconfig/config.go | 286 +- .../encryptionconfig/controller/controller.go | 10 +- .../encryptionconfig/metrics/metrics.go | 86 + .../apiserver/pkg/server/options/etcd.go | 202 +- .../pkg/server/options/recommended.go | 20 +- .../apiserver/pkg/server/options/serving.go | 2 +- .../apiserver/pkg/server/routes/metrics.go | 2 + .../apiserver/pkg/server/routes/openapi.go | 5 +- .../pkg/server/storage/storage_factory.go | 38 +- .../k8s.io/apiserver/pkg/storage/OWNERS | 4 +- .../apiserver/pkg/storage/cacher/cacher.go | 161 +- .../pkg/storage/cacher/caching_object.go | 4 + .../pkg/storage/cacher/lister_watcher.go | 77 + .../pkg/storage/cacher/watch_cache.go | 34 +- .../pkg/storage/cacher/watch_progress.go | 121 + .../pkg/storage/etcd3/healthcheck.go | 1 + .../pkg/storage/etcd3/metrics/metrics.go | 103 +- .../apiserver/pkg/storage/etcd3/store.go | 51 +- .../apiserver/pkg/storage/etcd3/watcher.go | 14 +- .../apiserver/pkg/storage/interfaces.go | 15 + .../pkg/storage/storagebackend/OWNERS | 1 - 
.../storage/storagebackend/factory/etcd3.go | 82 +- .../storage/storagebackend/factory/factory.go | 30 + .../pkg/storage/value/encrypt/aes/aes.go | 86 +- .../value/encrypt/aes/aes_extended_nonce.go | 186 + .../pkg/storage/value/encrypt/aes/cache.go | 91 + .../value/encrypt/envelope/kmsv2/cache.go | 22 +- .../value/encrypt/envelope/kmsv2/envelope.go | 179 +- .../value/encrypt/envelope/kmsv2/v2/api.pb.go | 96 +- .../value/encrypt/envelope/kmsv2/v2/api.proto | 19 +- .../apiserver/pkg/storage/value/metrics.go | 26 +- .../pkg/storage/value/transformer.go | 53 +- .../apiserver/pkg/storageversion/manager.go | 9 +- .../apiserver/pkg/storageversion/updater.go | 11 +- .../apiserver/pkg/util/flowcontrol/OWNERS | 4 +- .../pkg/util/flowcontrol/apf_controller.go | 214 +- .../util/flowcontrol/apf_controller_debug.go | 118 +- .../pkg/util/flowcontrol/apf_filter.go | 4 + .../pkg/util/flowcontrol/debug/dump.go | 22 +- .../flowcontrol/dropped_requests_tracker.go | 234 + .../util/flowcontrol/fairqueuing/interface.go | 12 +- .../fairqueuing/queueset/queueset.go | 89 +- .../flowcontrol/fairqueuing/queueset/types.go | 85 +- .../pkg/util/flowcontrol/max_seats.go | 66 + .../pkg/util/flowcontrol/metrics/metrics.go | 50 +- .../pkg/util/flowcontrol/request/config.go | 9 +- .../request/list_work_estimator.go | 50 +- .../request/mutating_work_estimator.go | 30 +- .../util/flowcontrol/request/seat_seconds.go | 2 +- .../pkg/util/flowcontrol/request/width.go | 30 +- .../pkg/util/peerproxy/metrics/metrics.go | 56 + .../pkg/util/webhook/authentication.go | 1 + .../apiserver/pkg/util/webhook/webhook.go | 2 +- .../v1alpha1/paramref.go | 27 +- .../v1alpha1/validatingadmissionpolicyspec.go | 14 + .../v1alpha1/variable.go | 48 + .../v1beta1/auditannotation.go | 48 + .../v1beta1/expressionwarning.go | 48 + .../v1beta1/matchresources.go | 90 + .../v1beta1/namedrulewithoperations.go | 95 + .../v1beta1/paramkind.go | 48 + .../admissionregistration/v1beta1/paramref.go | 71 + .../v1beta1/typechecking.go | 44 + 
.../v1beta1/validatingadmissionpolicy.go | 256 + .../validatingadmissionpolicybinding.go | 247 + .../validatingadmissionpolicybindingspec.go | 72 + .../v1beta1/validatingadmissionpolicyspec.go | 117 + .../validatingadmissionpolicystatus.go | 66 + .../v1beta1/validation.go | 70 + .../admissionregistration/v1beta1/variable.go | 48 + .../v1alpha1/serverstorageversion.go | 11 + .../applyconfigurations/batch/v1/jobspec.go | 27 + .../applyconfigurations/batch/v1/jobstatus.go | 18 + .../applyconfigurations/core/v1/container.go | 9 + .../core/v1/ephemeralcontainer.go | 8 + .../core/v1/ephemeralcontainercommon.go | 9 + .../applyconfigurations/core/v1/hostip.go | 39 + .../core/v1/persistentvolumeclaimstatus.go | 28 +- .../core/v1/persistentvolumestatus.go | 16 +- .../core/v1/podresourceclaimstatus.go | 48 + .../applyconfigurations/core/v1/podstatus.go | 56 +- .../extensions/v1beta1/networkpolicy.go | 11 +- .../extensions/v1beta1/networkpolicystatus.go | 48 - .../exemptprioritylevelconfiguration.go | 48 + .../prioritylevelconfigurationspec.go | 9 + .../exemptprioritylevelconfiguration.go | 48 + .../v1beta1/prioritylevelconfigurationspec.go | 9 + .../exemptprioritylevelconfiguration.go | 48 + .../v1beta2/prioritylevelconfigurationspec.go | 9 + .../exemptprioritylevelconfiguration.go | 48 + .../v1beta3/prioritylevelconfigurationspec.go | 9 + .../applyconfigurations/internal/internal.go | 440 +- .../networking/v1/networkpolicy.go | 11 +- .../networking/v1/networkpolicystatus.go | 48 - .../discovery/aggregated_discovery.go | 6 +- .../discovery/cached/memory/memcache.go | 2 +- .../client-go/discovery/discovery_client.go | 66 +- .../client-go/discovery/fake/discovery.go | 2 +- .../v1beta1/interface.go | 14 + .../v1beta1/validatingadmissionpolicy.go | 89 + .../validatingadmissionpolicybinding.go | 89 + .../k8s.io/client-go/informers/factory.go | 4 +- .../k8s.io/client-go/informers/generic.go | 4 + .../v1beta1/admissionregistration_client.go | 10 + 
.../fake/fake_admissionregistration_client.go | 8 + .../fake/fake_validatingadmissionpolicy.go | 178 + .../fake_validatingadmissionpolicybinding.go | 145 + .../v1beta1/generated_expansion.go | 4 + .../v1beta1/validatingadmissionpolicy.go | 243 + .../validatingadmissionpolicybinding.go | 197 + .../v1/authentication_client.go | 5 + .../v1/fake/fake_authentication_client.go | 4 + .../v1/fake/fake_selfsubjectreview.go | 46 + .../authentication/v1/generated_expansion.go | 2 + .../authentication/v1/selfsubjectreview.go | 64 + .../v1beta1/fake/fake_networkpolicy.go | 35 - .../typed/extensions/v1beta1/networkpolicy.go | 48 - .../networking/v1/fake/fake_networkpolicy.go | 35 - .../typed/networking/v1/networkpolicy.go | 48 - .../v1beta1/expansion_generated.go | 8 + .../v1beta1/validatingadmissionpolicy.go | 68 + .../validatingadmissionpolicybinding.go | 68 + .../vendor/k8s.io/client-go/openapi/client.go | 7 +- .../k8s.io/client-go/openapi/groupversion.go | 42 +- .../k8s.io/client-go/openapi/typeconverter.go | 48 + .../plugin/pkg/client/auth/exec/exec.go | 6 +- .../vendor/k8s.io/client-go/rest/config.go | 10 +- .../vendor/k8s.io/client-go/rest/request.go | 28 +- .../vendor/k8s.io/client-go/rest/url_utils.go | 4 +- .../k8s.io/client-go/tools/cache/OWNERS | 4 +- .../client-go/tools/cache/controller.go | 4 - .../client-go/tools/cache/object-names.go | 65 + .../k8s.io/client-go/tools/cache/reflector.go | 30 +- .../client-go/tools/cache/shared_informer.go | 37 +- .../k8s.io/client-go/tools/cache/store.go | 31 +- .../client-go/tools/clientcmd/api/types.go | 14 +- .../client-go/tools/clientcmd/loader.go | 24 +- .../tools/leaderelection/leaderelection.go | 5 + .../resourcelock/configmaplock.go | 126 - .../resourcelock/endpointslock.go | 121 - .../leaderelection/resourcelock/interface.go | 42 +- .../k8s.io/client-go/tools/metrics/metrics.go | 48 + .../k8s.io/client-go/tools/pager/pager.go | 36 +- .../k8s.io/client-go/tools/record/event.go | 5 +- 
.../client-go/tools/watch/retrywatcher.go | 7 +- .../k8s.io/client-go/transport/cache.go | 6 + .../vendor/k8s.io/client-go/util/cert/cert.go | 34 +- .../pkg/providers/v1/aws.go | 131 +- .../pkg/providers/v1/aws_fakes.go | 11 - .../k8s.io/cloud-provider/api/retry_error.go | 46 + .../vendor/k8s.io/cloud-provider/cloud.go | 12 +- .../k8s.io/cloud-provider/config/types.go | 2 +- .../cloud-provider/names/controller_names.go | 69 + .../cloud-provider/options/kubecloudshared.go | 8 +- .../k8s.io/cloud-provider/options/options.go | 23 +- .../component-base/logs/api/v1/options.go | 99 +- .../component-base/logs/api/v1/registry.go | 10 + .../component-base/logs/api/v1/types.go | 44 +- .../logs/api/v1/zz_generated.deepcopy.go | 18 + .../k8s.io/component-base/metrics/http.go | 18 +- .../metrics/legacyregistry/registry.go | 8 +- .../metrics/prometheus/feature/metrics.go | 2 +- .../metrics/prometheus/restclient/metrics.go | 60 + .../metrics/prometheus/slis/metrics.go | 4 +- .../k8s.io/component-base/metrics/registry.go | 12 +- .../metrics/testutil/testutil.go | 61 + .../k8s.io/component-base/version/dynamic.go | 77 + .../component-base/version/verflag/verflag.go | 52 +- .../k8s.io/component-base/version/version.go | 2 +- .../corev1/nodeaffinity/nodeaffinity.go | 11 +- .../controller-manager/options/generic.go | 35 +- .../pkg/leadermigration/config/default.go | 6 +- .../cri-api/pkg/apis/runtime/v1/api.pb.go | 2022 ++++-- .../cri-api/pkg/apis/runtime/v1/api.proto | 55 +- .../k8s.io/cri-api/pkg/apis/services.go | 2 + .../k8s.io/cri-api/pkg/errors/errors.go | 10 + .../k8s.io/csi-translation-lib/translate.go | 10 +- .../resourceclaim/resourceclaim.go | 49 +- .../vendor/k8s.io/klog/v2/format.go | 65 + .../klog/v2/internal/serialize/keyvalues.go | 47 +- .../vendor/k8s.io/klog/v2/k8s_references.go | 12 +- .../vendor/k8s.io/klog/v2/klog.go | 13 + .../vendor/k8s.io/kms/apis/v1beta1/api.pb.go | 50 +- .../vendor/k8s.io/kms/apis/v1beta1/api.proto | 8 +- 
.../vendor/k8s.io/kms/apis/v1beta1/v1beta1.go | 1 + .../vendor/k8s.io/kms/apis/v2/api.pb.go | 1 - .../vendor/k8s.io/kms/apis/v2/api.proto | 1 - .../kube-openapi/pkg/builder/openapi.go | 2 +- .../kube-openapi/pkg/builder/parameters.go | 259 + .../k8s.io/kube-openapi/pkg/cached/cache.go | 166 +- .../kube-openapi/pkg/handler/handler.go | 20 +- .../kube-openapi/pkg/handler3/handler.go | 2 +- .../kube-openapi/pkg/util/proto/document.go | 2 +- .../pkg/util/proto/document_v3.go | 2 +- .../pkg/validation/spec/gnostic.go | 2 +- .../pkg/validation/strfmt/format.go | 81 - .../k8s.io/kube-proxy/config/v1alpha1/doc.go | 21 - .../kube-proxy/config/v1alpha1/register.go | 43 - .../kube-proxy/config/v1alpha1/types.go | 201 - .../config/v1alpha1/zz_generated.deepcopy.go | 198 - .../k8s.io/kube-scheduler/config/v1/types.go | 6 + .../kube-scheduler/config/v1beta2/register.go | 50 - .../kube-scheduler/config/v1beta2/types.go | 369 - .../config/v1beta2/types_pluginargs.go | 229 - .../config/v1beta2/zz_generated.deepcopy.go | 614 -- .../k8s.io/kubelet/config/v1beta1/types.go | 19 +- .../pkg/apis/deviceplugin/v1beta1/api.pb.go | 405 +- .../pkg/apis/deviceplugin/v1beta1/api.proto | 11 + .../kubelet/pkg/apis/dra/v1alpha3/api.pb.go | 2134 ++++++ .../kubelet/pkg/apis/dra/v1alpha3/api.proto | 103 + .../kubelet/pkg/apis/stats/v1alpha1/types.go | 22 + .../pkg}/cri/streaming/.import-restrictions | 0 .../pkg}/cri/streaming/errors.go | 0 .../cri/streaming/portforward/constants.go | 0 .../cri/streaming/portforward/httpstream.go | 0 .../cri/streaming/portforward/portforward.go | 2 +- .../cri/streaming/portforward/websocket.go | 2 +- .../cri/streaming/remotecommand/attach.go | 0 .../pkg}/cri/streaming/remotecommand/doc.go | 0 .../pkg}/cri/streaming/remotecommand/exec.go | 0 .../cri/streaming/remotecommand/httpstream.go | 2 +- .../cri/streaming/remotecommand/websocket.go | 2 +- .../pkg}/cri/streaming/request_cache.go | 0 .../pkg}/cri/streaming/server.go | 4 +- .../cmd/kube-proxy/app/conntrack.go | 144 
- .../cmd/kube-proxy/app/init_windows.go | 46 - .../kubernetes/cmd/kube-proxy/app/server.go | 842 --- .../cmd/kube-proxy/app/server_others.go | 581 -- .../cmd/kube-proxy/app/server_windows.go | 172 - .../app/options/globalflags_providers.go | 1 + .../cmd/kubelet/app/options/options.go | 18 +- .../cmd/kubelet/app/plugins_providers.go | 2 - .../kubernetes/cmd/kubelet/app/server.go | 166 +- .../kubernetes/pkg/api/service/warnings.go | 7 + .../kubernetes/pkg/api/v1/resource/helpers.go | 58 +- .../pkg/apis/apps/validation/validation.go | 50 +- .../kubernetes/pkg/apis/autoscaling/OWNERS | 1 - .../k8s.io/kubernetes/pkg/apis/batch/types.go | 92 +- .../pkg/apis/batch/zz_generated.deepcopy.go | 25 + .../pkg/apis/core/helper/helpers.go | 57 +- .../kubernetes/pkg/apis/core/install/OWNERS | 1 - .../kubernetes/pkg/apis/core/pods/helpers.go | 1 + .../k8s.io/kubernetes/pkg/apis/core/types.go | 221 +- .../k8s.io/kubernetes/pkg/apis/core/v1/OWNERS | 1 - .../kubernetes/pkg/apis/core/v1/conversion.go | 1 + .../kubernetes/pkg/apis/core/v1/defaults.go | 17 +- .../pkg/apis/core/v1/helper/helpers.go | 6 - .../apis/core/v1/zz_generated.conversion.go | 76 +- .../pkg/apis/core/validation/OWNERS | 1 - .../pkg/apis/core/validation/validation.go | 464 +- .../pkg/apis/core/zz_generated.deepcopy.go | 75 +- .../kubernetes/pkg/apis/extensions/OWNERS | 2 +- .../kubernetes/pkg/apis/networking/types.go | 41 - .../apis/networking/zz_generated.deepcopy.go | 24 - .../pkg/controller/controller_ref_manager.go | 12 +- .../pkg/controller/controller_utils.go | 137 +- .../controller/daemon/daemon_controller.go | 34 +- .../pkg/controller/daemon/update.go | 5 +- .../deployment/util/deployment_util.go | 12 +- .../kubernetes/pkg/controller/lookup_cache.go | 92 - .../azure/azure_acr_helper.go | 3 +- .../azure/azure_credentials.go | 3 +- .../pkg/credentialprovider/config.go | 7 +- .../pkg/credentialprovider/gcp/metadata.go | 4 +- .../pkg/credentialprovider/plugin/config.go | 5 +- 
.../pkg/credentialprovider/plugin/plugin.go | 2 +- .../kubernetes/pkg/features/kube_features.go | 354 +- .../pkg/kubelet/apis/config/types.go | 17 +- .../apis/config/validation/validation.go | 2 +- .../kubelet/apis/podresources/server_v1.go | 16 +- .../pkg/kubelet/cm/cgroup_manager_linux.go | 11 +- .../pkg/kubelet/cm/container_manager.go | 26 +- .../pkg/kubelet/cm/container_manager_linux.go | 33 +- .../kubelet/cm/container_manager_windows.go | 6 +- .../kubelet/cm/cpumanager/cpu_assignment.go | 2 +- .../pkg/kubelet/cm/cpumanager/cpu_manager.go | 2 +- .../kubelet/cm/cpumanager/fake_cpu_manager.go | 2 +- .../pkg/kubelet/cm/cpumanager/policy.go | 2 +- .../pkg/kubelet/cm/cpumanager/policy_none.go | 2 +- .../kubelet/cm/cpumanager/policy_static.go | 4 +- .../kubelet/cm/cpumanager/state/checkpoint.go | 15 +- .../pkg/kubelet/cm/cpumanager/state/state.go | 6 +- .../cm/cpumanager/state/state_checkpoint.go | 6 +- .../kubelet/cm/cpumanager/state/state_mem.go | 2 +- .../cm/cpumanager/topology/topology.go | 2 +- .../devicemanager/checkpoint/checkpointv1.go | 15 +- .../pkg/kubelet/cm/devicemanager/manager.go | 86 +- .../kubelet/cm/devicemanager/pod_devices.go | 75 + .../pkg/kubelet/cm/devicemanager/types.go | 6 +- .../pkg/kubelet/cm/dra/claiminfo.go | 20 +- .../kubernetes/pkg/kubelet/cm/dra/manager.go | 426 +- .../pkg/kubelet/cm/dra/plugin/client.go | 183 +- .../pkg/kubelet/cm/dra/state/checkpoint.go | 54 + .../kubelet/cm/dra/state/state_checkpoint.go | 34 + .../kubernetes/pkg/kubelet/cm/helpers.go | 32 + .../pkg/kubelet/cm/helpers_linux.go | 2 +- .../cm/node_container_manager_linux.go | 2 +- .../kubelet/cm/qos_container_manager_linux.go | 4 +- .../cm/topologymanager/policy_options.go | 4 +- .../pkg/kubelet/cm/topologymanager/scope.go | 2 + .../cm/topologymanager/scope_container.go | 5 - .../kubelet/cm/topologymanager/scope_none.go | 46 + .../kubelet/cm/topologymanager/scope_pod.go | 5 - .../cm/topologymanager/topology_manager.go | 15 +- .../pkg/kubelet/cm/{dra => util/cdi}/cdi.go 
| 9 +- .../pkg/kubelet/config/file_linux.go | 12 +- .../pkg/kubelet/container/helpers.go | 24 + .../pkg/kubelet/container/runtime.go | 6 - .../kubelet/container/testing/fake_runtime.go | 32 - .../pkg/kubelet/cri/remote/remote_image.go | 26 +- .../pkg/kubelet/cri/remote/remote_runtime.go | 34 +- .../kubernetes/pkg/kubelet/events/event.go | 1 + .../pkg/kubelet/eviction/eviction_manager.go | 4 - .../pkg/kubelet/images/image_manager.go | 56 +- .../kubernetes/pkg/kubelet/images/types.go | 3 - .../k8s.io/kubernetes/pkg/kubelet/kubelet.go | 453 +- .../pkg/kubelet/kubelet_network_linux.go | 115 +- .../kubernetes/pkg/kubelet/kubelet_pods.go | 337 +- .../pkg/kubelet/kuberuntime/helpers.go | 73 +- .../kuberuntime/instrumented_services.go | 9 + .../kuberuntime/kuberuntime_container.go | 250 +- .../kuberuntime_container_linux.go | 164 +- .../pkg/kubelet/kuberuntime/kuberuntime_gc.go | 11 +- .../kuberuntime/kuberuntime_manager.go | 79 +- .../kubelet/kuberuntime/security_context.go | 10 +- .../pkg/kubelet/lifecycle/predicate.go | 18 + .../metrics/collectors/resource_metrics.go | 56 + .../kubernetes/pkg/kubelet/metrics/metrics.go | 25 +- .../kubernetes/pkg/kubelet/network/dns/dns.go | 12 +- .../pkg/kubelet/nodestatus/setters.go | 6 +- .../kubernetes/pkg/kubelet/pleg/evented.go | 10 +- .../kubernetes/pkg/kubelet/pod/pod_manager.go | 79 +- .../kubernetes/pkg/kubelet/pod_workers.go | 41 +- .../pkg/kubelet/preemption/preemption.go | 11 + .../pkg/kubelet/prober/prober_manager.go | 124 +- .../kubelet/prober/testing/fake_manager.go | 2 +- .../kubernetes/pkg/kubelet/prober/worker.go | 32 +- .../k8s.io/kubernetes/pkg/kubelet/runonce.go | 2 +- .../pkg/kubelet/secret/fake_manager.go | 29 +- .../kubernetes/pkg/kubelet/server/server.go | 11 +- .../pkg/kubelet/server/stats/summary.go | 2 + .../kubelet/stats/cadvisor_stats_provider.go | 2 + .../pkg/kubelet/stats/cri_stats_provider.go | 103 +- .../kubelet/stats/cri_stats_provider_linux.go | 105 + .../stats/cri_stats_provider_others.go | 25 +- 
.../stats/cri_stats_provider_windows.go | 159 +- .../kubernetes/pkg/kubelet/stats/helper.go | 24 + .../kubernetes/pkg/kubelet/stats/provider.go | 16 +- .../kubernetes/pkg/kubelet/status/generate.go | 65 +- .../pkg/kubelet/status/status_manager.go | 107 +- .../pkg/kubelet/sysctl/allowlist.go | 7 +- .../kubernetes/pkg/kubelet/types/constants.go | 7 +- .../pkg/kubelet/types/pod_status.go | 4 +- .../pkg/kubelet/types/pod_update.go | 10 + .../pkg/kubelet/userns/userns_manager.go | 8 +- .../util/manager/cache_based_manager.go | 12 +- .../pkg/kubelet/util/manager/manager.go | 13 +- .../util/manager/watch_based_manager.go | 20 +- .../kubernetes/pkg/kubelet/util/util.go | 14 + .../pkg/kubelet/util/util_windows.go | 29 +- .../kubernetes/pkg/kubelet/volume_host.go | 6 - .../cache/actual_state_of_world.go | 65 +- .../cache/desired_state_of_world.go | 2 +- .../desired_state_of_world_populator.go | 39 +- .../reconciler/reconciler_common.go | 48 +- .../reconciler/reconciler_new.go | 10 +- .../reconciler/reconstruct_common.go | 40 +- .../reconciler/reconstruct_new.go | 25 +- .../kubelet/volumemanager/volume_manager.go | 46 +- .../volumemanager/volume_manager_fake.go | 6 +- .../pkg/kubemark/.import-restrictions | 1 - .../kubernetes/pkg/kubemark/hollow_proxy.go | 146 - .../kubernetes/pkg/probe/http/request.go | 2 +- .../vendor/k8s.io/kubernetes/pkg/proxy/OWNERS | 1 + .../kubernetes/pkg/proxy/apis/config/OWNERS | 8 - .../kubernetes/pkg/proxy/apis/config/doc.go | 20 - .../pkg/proxy/apis/config/register.go | 43 - .../pkg/proxy/apis/config/scheme/scheme.go | 43 - .../kubernetes/pkg/proxy/apis/config/types.go | 281 - .../proxy/apis/config/v1alpha1/defaults.go | 137 - .../pkg/proxy/apis/config/v1alpha1/doc.go | 24 - .../proxy/apis/config/v1alpha1/register.go | 43 - .../v1alpha1/zz_generated.conversion.go | 325 - .../config/v1alpha1/zz_generated.defaults.go | 41 - .../apis/config/validation/validation.go | 293 - .../apis/config/zz_generated.deepcopy.go | 220 - 
.../k8s.io/kubernetes/pkg/proxy/config/OWNERS | 1 - .../kubernetes/pkg/proxy/conntrack/cleanup.go | 111 + .../{util => proxy}/conntrack/conntrack.go | 4 +- .../k8s.io/kubernetes/pkg/proxy/endpoints.go | 22 +- .../pkg/proxy/endpointslicecache.go | 12 +- .../pkg/proxy/healthcheck/proxier_health.go | 67 +- .../pkg/proxy/healthcheck/service_health.go | 60 +- .../kubernetes/pkg/proxy/iptables/OWNERS | 8 - .../kubernetes/pkg/proxy/iptables/proxier.go | 1687 ----- .../k8s.io/kubernetes/pkg/proxy/ipvs/OWNERS | 1 + .../pkg/proxy/ipvs/graceful_termination.go | 2 +- .../k8s.io/kubernetes/pkg/proxy/ipvs/ipset.go | 2 +- .../pkg/{util => proxy/ipvs}/ipset/ipset.go | 18 +- .../pkg/{util => proxy/ipvs}/ipset/types.go | 0 .../kubernetes/pkg/proxy/ipvs/netlink.go | 5 + .../pkg/proxy/ipvs/netlink_linux.go | 34 +- .../pkg/proxy/ipvs/netlink_unsupported.go | 5 + .../kubernetes/pkg/proxy/ipvs/proxier.go | 210 +- .../kubernetes/pkg/proxy/ipvs/safe_ipset.go | 2 +- .../{util/ipvs => proxy/ipvs/util}/ipvs.go | 0 .../ipvs => proxy/ipvs/util}/ipvs_linux.go | 2 +- .../ipvs/util}/ipvs_unsupported.go | 0 .../kubernetes/pkg/proxy/metrics/metrics.go | 72 +- .../k8s.io/kubernetes/pkg/proxy/node.go | 27 + .../k8s.io/kubernetes/pkg/proxy/service.go | 42 +- .../k8s.io/kubernetes/pkg/proxy/topology.go | 4 +- .../k8s.io/kubernetes/pkg/proxy/types.go | 2 +- .../pkg/proxy/util/iptables/traffic.go | 6 +- .../kubernetes/pkg/proxy/util/linebuffer.go | 150 + .../pkg/proxy/util/nodeport_addresses.go | 85 +- .../k8s.io/kubernetes/pkg/proxy/util/utils.go | 194 +- .../kubernetes/pkg/proxy/winkernel/OWNERS | 17 - .../kubernetes/pkg/proxy/winkernel/hns.go | 451 -- .../kubernetes/pkg/proxy/winkernel/metrics.go | 39 - .../kubernetes/pkg/proxy/winkernel/proxier.go | 1711 ----- .../core/service/allocator/interfaces.go | 4 - .../scheduler/apis/config/scheme/scheme.go | 3 - .../pkg/scheduler/apis/config/types.go | 10 +- .../apis/config/v1/default_plugins.go | 2 +- .../pkg/scheduler/apis/config/v1/defaults.go | 6 +- 
.../apis/config/v1/zz_generated.conversion.go | 2 + .../apis/config/v1beta2/conversion.go | 117 - .../apis/config/v1beta2/default_plugins.go | 189 - .../scheduler/apis/config/v1beta2/defaults.go | 241 - .../scheduler/apis/config/v1beta2/register.go | 42 - .../config/v1beta2/zz_generated.conversion.go | 945 --- .../config/v1beta2/zz_generated.defaults.go | 73 - .../apis/config/v1beta3/default_plugins.go | 2 +- .../scheduler/apis/config/v1beta3/defaults.go | 6 +- .../config/v1beta3/zz_generated.conversion.go | 1 + .../apis/config/validation/validation.go | 9 +- .../validation/validation_pluginargs.go | 13 +- .../kubernetes/pkg/scheduler/eventhandlers.go | 227 +- .../kubernetes/pkg/scheduler/extender.go | 4 +- .../pkg/scheduler/framework/cycle_state.go | 21 +- .../pkg/scheduler/framework/interface.go | 16 +- .../defaultpreemption/default_preemption.go | 3 +- .../dynamicresources/dynamicresources.go | 294 +- .../framework/plugins/feature/feature.go | 1 + .../plugins/interpodaffinity/filtering.go | 16 +- .../plugins/interpodaffinity/plugin.go | 12 +- .../plugins/interpodaffinity/scoring.go | 26 +- .../plugins/nodeaffinity/node_affinity.go | 16 +- .../framework/plugins/nodename/node_name.go | 10 +- .../framework/plugins/nodeports/node_ports.go | 16 +- .../noderesources/balanced_allocation.go | 2 +- .../framework/plugins/noderesources/fit.go | 48 +- .../noderesources/resource_allocation.go | 18 +- .../nodeunschedulable/node_unschedulable.go | 10 +- .../framework/plugins/nodevolumelimits/csi.go | 46 +- .../plugins/nodevolumelimits/non_csi.go | 39 +- .../plugins/nodevolumelimits/utils.go | 10 +- .../plugins/podtopologyspread/filtering.go | 27 +- .../plugins/podtopologyspread/plugin.go | 8 +- .../plugins/podtopologyspread/scoring.go | 18 +- .../scheduler/framework/plugins/registry.go | 1 + .../schedulinggates/scheduling_gates.go | 6 +- .../tainttoleration/taint_toleration.go | 9 +- .../plugins/volumebinding/assume_cache.go | 2 +- .../framework/plugins/volumebinding/binder.go 
| 18 +- .../plugins/volumebinding/fake_binder.go | 2 +- .../plugins/volumebinding/volume_binding.go | 21 +- .../volumerestrictions/volume_restrictions.go | 40 +- .../plugins/volumezone/volume_zone.go | 22 +- .../framework/preemption/preemption.go | 51 +- .../scheduler/framework/runtime/framework.go | 288 +- .../framework/runtime/instrumented_plugins.go | 83 + .../pkg/scheduler/framework/types.go | 82 +- .../pkg/scheduler/internal/cache/cache.go | 158 +- .../internal/cache/debugger/comparer.go | 10 +- .../internal/cache/debugger/debugger.go | 10 +- .../internal/cache/debugger/dumper.go | 14 +- .../pkg/scheduler/internal/cache/interface.go | 23 +- .../pkg/scheduler/internal/cache/node_tree.go | 22 +- .../pkg/scheduler/internal/cache/snapshot.go | 16 +- .../internal/queue/scheduling_queue.go | 626 +- .../pkg/scheduler/metrics/metrics.go | 11 - .../pkg/scheduler/metrics/profile_metrics.go | 12 +- .../pkg/scheduler/profile/profile.go | 13 +- .../kubernetes/pkg/scheduler/schedule_one.go | 287 +- .../kubernetes/pkg/scheduler/scheduler.go | 106 +- .../kubernetes/pkg/scheduler/util/utils.go | 40 +- .../kubernetes/pkg/util/conntrack/OWNERS | 8 - .../pkg/util/filesystem/defaultfs.go | 2 +- .../pkg/util/filesystem/filesystem.go | 2 +- .../k8s.io/kubernetes/pkg/util/hash/hash.go | 11 +- .../k8s.io/kubernetes/pkg/util/ipset/OWNERS | 11 - .../kubernetes/pkg/util/iptables/OWNERS | 10 +- .../k8s.io/kubernetes/pkg/util/ipvs/OWNERS | 16 - .../kubernetes/pkg/util/parsers/parsers.go | 2 +- .../pkg/volume/configmap/configmap.go | 2 +- .../kubernetes/pkg/volume/csi/csi_attacher.go | 43 +- .../kubernetes/pkg/volume/csi/csi_mounter.go | 2 +- .../kubernetes/pkg/volume/csi/csi_plugin.go | 2 +- .../kubernetes/pkg/volume/csi/csi_util.go | 2 +- .../csi/nodeinfomanager/nodeinfomanager.go | 5 +- .../pkg/volume/csimigration/plugin_manager.go | 2 +- .../pkg/volume/downwardapi/downwardapi.go | 2 +- .../pkg/volume/emptydir/empty_dir.go | 12 +- .../kubernetes/pkg/volume/fc/disk_manager.go | 2 +- 
.../pkg/volume/flexvolume/mounter.go | 2 +- .../k8s.io/kubernetes/pkg/volume/gcepd/OWNERS | 13 - .../kubernetes/pkg/volume/gcepd/attacher.go | 410 -- .../k8s.io/kubernetes/pkg/volume/gcepd/doc.go | 19 - .../kubernetes/pkg/volume/gcepd/gce_pd.go | 568 -- .../pkg/volume/gcepd/gce_pd_block.go | 183 - .../kubernetes/pkg/volume/gcepd/gce_util.go | 368 - .../pkg/volume/git_repo/git_repo.go | 2 +- .../pkg/volume/iscsi/disk_manager.go | 2 +- .../kubernetes/pkg/volume/local/local.go | 2 +- .../k8s.io/kubernetes/pkg/volume/plugins.go | 10 +- .../pkg/volume/portworx/portworx.go | 2 +- .../pkg/volume/projected/projected.go | 2 +- .../kubernetes/pkg/volume/rbd/disk_manager.go | 2 +- .../kubernetes/pkg/volume/secret/secret.go | 2 +- .../pkg/volume/util/atomic_writer.go | 9 +- .../fsquota/common/quota_common_linux_impl.go | 3 +- .../pkg/volume/util/fsquota/project.go | 3 +- .../pkg/volume/util/fsquota/quota_linux.go | 9 +- .../kubernetes/pkg/volume/util/io_util.go | 2 +- .../util/operationexecutor/node_expander.go | 30 +- .../operationexecutor/operation_executor.go | 66 +- .../operationexecutor/operation_generator.go | 41 +- .../kubernetes/pkg/volume/util/resize_util.go | 52 +- .../pkg/volume/util/storageclass.go | 2 +- .../k8s.io/kubernetes/pkg/volume/util/util.go | 9 +- .../volumepathhandler/volume_path_handler.go | 5 +- .../volume_path_handler_linux.go | 3 +- .../kubernetes/pkg/volume/volume_linux.go | 16 +- .../pkg/volume/volume_unsupported.go | 2 +- .../volume/vsphere_volume/vsphere_volume.go | 2 +- .../vsphere_volume_util_windows.go | 4 + .../kubernetes/test/utils/deployment.go | 11 +- .../k8s.io/kubernetes/test/utils/paths.go | 12 +- .../kubernetes/test/utils/pki_helpers.go | 4 +- .../k8s.io/kubernetes/test/utils/runners.go | 49 +- .../legacy-cloud-providers/azure/azure.go | 27 - .../gce/gce_healthchecks.go | 38 - .../gce/gce_loadbalancer_external.go | 9 +- .../vsphere/nodemanager.go | 57 +- .../legacy-cloud-providers/vsphere/vsphere.go | 4 + 
.../k8s.io/mount-utils/mount_helper_unix.go | 55 +- .../vendor/k8s.io/mount-utils/mount_linux.go | 9 +- .../k8s.io/mount-utils/mount_windows.go | 22 +- .../k8s.io/mount-utils/resizefs_linux.go | 3 +- .../pkg/kubelet/cm => utils}/cpuset/OWNERS | 7 +- .../pkg/kubelet/cm => utils}/cpuset/cpuset.go | 2 + cluster-autoscaler/vendor/modules.txt | 283 +- cluster-autoscaler/version/version.go | 2 +- hack/boilerplate/boilerplate.py | 1 + hack/verify-golint.sh | 2 + multidimensional-pod-autoscaler/AEP.md | 542 ++ .../kep-imgs/mpa-action-actuation.png | Bin 0 -> 221661 bytes .../kep-imgs/mpa-design.png | Bin 0 -> 463736 bytes vertical-pod-autoscaler/FAQ.md | 13 + vertical-pod-autoscaler/OWNERS | 1 + vertical-pod-autoscaler/README.md | 19 +- vertical-pod-autoscaler/RELEASE.md | 41 +- vertical-pod-autoscaler/builder/Dockerfile | 2 +- vertical-pod-autoscaler/common/version.go | 2 +- .../admission-controller-deployment.yaml | 2 +- .../deploy/recommender-deployment-high.yaml | 2 +- .../deploy/recommender-deployment-low.yaml | 2 +- .../deploy/recommender-deployment.yaml | 2 +- .../deploy/updater-deployment.yaml | 2 +- vertical-pod-autoscaler/deploy/vpa-rbac.yaml | 26 +- .../deploy/vpa-v1-crd-gen.yaml | 42 +- vertical-pod-autoscaler/e2e/v1/actuation.go | 4 +- vertical-pod-autoscaler/e2e/v1/common.go | 19 +- .../e2e/v1beta2/actuation.go | 4 +- vertical-pod-autoscaler/e2e/v1beta2/common.go | 19 +- .../moby/spdystream/config/v1beta2/doc.go | 5 +- .../api/resource/v1alpha1/generated.proto | 3 +- .../k8s.io/api/resource/v1alpha1/types.go | 3 +- .../4016-in-place-updates-support/README.md | 214 + .../4831-control-eviction-behavior/README.md | 4 +- vertical-pod-autoscaler/go.mod | 8 +- vertical-pod-autoscaler/go.sum | 16 +- .../hack/generate-crd-yaml.sh | 2 +- .../hack/vpa-process-yaml.sh | 2 +- .../hack/vpa-process-yamls.sh | 2 +- .../pkg/admission-controller/Makefile | 16 +- .../pkg/admission-controller/config.go | 1 + .../pkg/apis/autoscaling.k8s.io/v1/types.go | 35 +- 
.../apis/autoscaling.k8s.io/v1beta2/types.go | 1 + .../pkg/recommender/Makefile | 17 +- .../pkg/recommender/input/cluster_feeder.go | 56 +- .../input/history/history_provider.go | 28 +- .../input/metrics/metrics_client.go | 14 +- .../input/metrics/metrics_client_test_util.go | 5 +- .../pkg/recommender/main.go | 67 +- .../pkg/recommender/model/cluster.go | 1 + .../pkg/recommender/model/cluster_test.go | 45 + .../pkg/recommender/model/vpa.go | 23 +- .../pkg/recommender/routines/recommender.go | 39 +- vertical-pod-autoscaler/pkg/updater/Makefile | 17 +- .../pkg/updater/logic/updater_test.go | 3 +- .../priority/priority_processor_test.go | 49 +- .../update_priority_calculator_test.go | 49 +- .../utils/metrics/recommender/recommender.go | 22 +- .../metrics/recommender/recommender_test.go | 322 + .../pkg/utils/test/test_utils.go | 18 +- .../pkg/utils/test/test_vpa.go | 13 + vertical-pod-autoscaler/pkg/utils/vpa/api.go | 6 +- .../pkg/utils/vpa/api_test.go | 4 +- .../pkg/utils/vpa/capping_test.go | 8 +- .../vendor/golang.org/x/net/http2/flow.go | 88 +- .../vendor/golang.org/x/net/http2/frame.go | 11 +- .../golang.org/x/net/http2/hpack/hpack.go | 81 +- .../vendor/golang.org/x/net/http2/server.go | 105 +- .../golang.org/x/net/http2/transport.go | 88 +- .../vendor/golang.org/x/sys/unix/gccgo.go | 4 +- .../vendor/golang.org/x/sys/unix/gccgo_c.c | 4 +- .../vendor/golang.org/x/sys/unix/ioctl.go | 21 +- .../vendor/golang.org/x/sys/unix/ioctl_zos.go | 8 +- .../vendor/golang.org/x/sys/unix/mkall.sh | 22 +- .../golang.org/x/sys/unix/ptrace_darwin.go | 6 + .../golang.org/x/sys/unix/ptrace_ios.go | 6 + .../golang.org/x/sys/unix/syscall_aix.go | 5 +- .../golang.org/x/sys/unix/syscall_bsd.go | 3 +- .../golang.org/x/sys/unix/syscall_darwin.go | 13 +- .../x/sys/unix/syscall_darwin_amd64.go | 1 + .../x/sys/unix/syscall_darwin_arm64.go | 1 + .../x/sys/unix/syscall_dragonfly.go | 2 + .../golang.org/x/sys/unix/syscall_freebsd.go | 44 +- .../x/sys/unix/syscall_freebsd_386.go | 12 +- 
.../x/sys/unix/syscall_freebsd_amd64.go | 12 +- .../x/sys/unix/syscall_freebsd_arm.go | 10 +- .../x/sys/unix/syscall_freebsd_arm64.go | 10 +- .../x/sys/unix/syscall_freebsd_riscv64.go | 10 +- .../golang.org/x/sys/unix/syscall_hurd.go | 30 + .../golang.org/x/sys/unix/syscall_hurd_386.go | 29 + .../golang.org/x/sys/unix/syscall_linux.go | 87 +- .../golang.org/x/sys/unix/syscall_netbsd.go | 20 +- .../golang.org/x/sys/unix/syscall_openbsd.go | 2 + .../x/sys/unix/syscall_openbsd_libc.go | 4 +- .../golang.org/x/sys/unix/syscall_solaris.go | 22 +- .../golang.org/x/sys/unix/syscall_unix.go | 57 +- .../x/sys/unix/syscall_zos_s390x.go | 4 +- .../golang.org/x/sys/unix/timestruct.go | 2 +- .../vendor/golang.org/x/sys/unix/xattr_bsd.go | 9 +- .../golang.org/x/sys/unix/zerrors_linux.go | 40 +- .../x/sys/unix/zerrors_linux_386.go | 1 + .../x/sys/unix/zerrors_linux_amd64.go | 1 + .../x/sys/unix/zerrors_linux_arm.go | 1 + .../x/sys/unix/zerrors_linux_arm64.go | 1 + .../x/sys/unix/zerrors_linux_loong64.go | 1 + .../x/sys/unix/zerrors_linux_mips.go | 1 + .../x/sys/unix/zerrors_linux_mips64.go | 1 + .../x/sys/unix/zerrors_linux_mips64le.go | 1 + .../x/sys/unix/zerrors_linux_mipsle.go | 1 + .../x/sys/unix/zerrors_linux_ppc.go | 1 + .../x/sys/unix/zerrors_linux_ppc64.go | 1 + .../x/sys/unix/zerrors_linux_ppc64le.go | 1 + .../x/sys/unix/zerrors_linux_riscv64.go | 1 + .../x/sys/unix/zerrors_linux_s390x.go | 1 + .../x/sys/unix/zerrors_linux_sparc64.go | 1 + .../x/sys/unix/zerrors_openbsd_386.go | 356 +- .../x/sys/unix/zerrors_openbsd_amd64.go | 189 +- .../x/sys/unix/zerrors_openbsd_arm.go | 348 +- .../x/sys/unix/zerrors_openbsd_arm64.go | 160 +- .../x/sys/unix/zerrors_openbsd_mips64.go | 95 +- .../x/sys/unix/zptrace_armnn_linux.go | 8 +- .../x/sys/unix/zptrace_linux_arm64.go | 4 +- .../x/sys/unix/zptrace_mipsnn_linux.go | 8 +- .../x/sys/unix/zptrace_mipsnnle_linux.go | 8 +- .../x/sys/unix/zptrace_x86_linux.go | 8 +- .../golang.org/x/sys/unix/zsyscall_aix_ppc.go | 10 + 
.../x/sys/unix/zsyscall_aix_ppc64.go | 10 + .../x/sys/unix/zsyscall_aix_ppc64_gc.go | 7 + .../x/sys/unix/zsyscall_aix_ppc64_gccgo.go | 8 + .../x/sys/unix/zsyscall_darwin_amd64.go | 16 + .../x/sys/unix/zsyscall_darwin_arm64.go | 16 + .../x/sys/unix/zsyscall_dragonfly_amd64.go | 20 + .../x/sys/unix/zsyscall_freebsd_386.go | 30 + .../x/sys/unix/zsyscall_freebsd_amd64.go | 30 + .../x/sys/unix/zsyscall_freebsd_arm.go | 30 + .../x/sys/unix/zsyscall_freebsd_arm64.go | 30 + .../x/sys/unix/zsyscall_freebsd_riscv64.go | 30 + .../golang.org/x/sys/unix/zsyscall_linux.go | 21 + .../x/sys/unix/zsyscall_netbsd_386.go | 20 + .../x/sys/unix/zsyscall_netbsd_amd64.go | 20 + .../x/sys/unix/zsyscall_netbsd_arm.go | 20 + .../x/sys/unix/zsyscall_netbsd_arm64.go | 20 + .../x/sys/unix/zsyscall_openbsd_386.go | 22 + .../x/sys/unix/zsyscall_openbsd_386.s | 137 +- .../x/sys/unix/zsyscall_openbsd_amd64.go | 22 + .../x/sys/unix/zsyscall_openbsd_amd64.s | 137 +- .../x/sys/unix/zsyscall_openbsd_arm.go | 22 + .../x/sys/unix/zsyscall_openbsd_arm.s | 137 +- .../x/sys/unix/zsyscall_openbsd_arm64.go | 22 + .../x/sys/unix/zsyscall_openbsd_arm64.s | 137 +- .../x/sys/unix/zsyscall_openbsd_mips64.go | 820 ++- .../x/sys/unix/zsyscall_openbsd_mips64.s | 669 ++ .../x/sys/unix/zsyscall_openbsd_ppc64.go | 22 + .../x/sys/unix/zsyscall_openbsd_ppc64.s | 6 + .../x/sys/unix/zsyscall_openbsd_riscv64.go | 22 + .../x/sys/unix/zsyscall_openbsd_riscv64.s | 137 +- .../x/sys/unix/zsyscall_solaris_amd64.go | 24 + .../x/sys/unix/zsyscall_zos_s390x.go | 10 + .../x/sys/unix/zsysctl_openbsd_386.go | 51 +- .../x/sys/unix/zsysctl_openbsd_amd64.go | 17 +- .../x/sys/unix/zsysctl_openbsd_arm.go | 51 +- .../x/sys/unix/zsysctl_openbsd_arm64.go | 11 +- .../x/sys/unix/zsysctl_openbsd_mips64.go | 3 +- .../x/sys/unix/zsysnum_openbsd_mips64.go | 1 + .../x/sys/unix/ztypes_freebsd_386.go | 2 +- .../x/sys/unix/ztypes_freebsd_amd64.go | 2 +- .../x/sys/unix/ztypes_freebsd_arm.go | 2 +- .../x/sys/unix/ztypes_freebsd_arm64.go | 2 +- 
.../x/sys/unix/ztypes_freebsd_riscv64.go | 2 +- .../golang.org/x/sys/unix/ztypes_linux.go | 349 +- .../golang.org/x/sys/unix/ztypes_linux_386.go | 2 +- .../x/sys/unix/ztypes_linux_amd64.go | 2 +- .../golang.org/x/sys/unix/ztypes_linux_arm.go | 2 +- .../x/sys/unix/ztypes_linux_arm64.go | 2 +- .../x/sys/unix/ztypes_linux_loong64.go | 2 +- .../x/sys/unix/ztypes_linux_mips.go | 2 +- .../x/sys/unix/ztypes_linux_mips64.go | 2 +- .../x/sys/unix/ztypes_linux_mips64le.go | 2 +- .../x/sys/unix/ztypes_linux_mipsle.go | 2 +- .../golang.org/x/sys/unix/ztypes_linux_ppc.go | 2 +- .../x/sys/unix/ztypes_linux_ppc64.go | 2 +- .../x/sys/unix/ztypes_linux_ppc64le.go | 2 +- .../x/sys/unix/ztypes_linux_riscv64.go | 2 +- .../x/sys/unix/ztypes_linux_s390x.go | 2 +- .../x/sys/unix/ztypes_linux_sparc64.go | 2 +- .../x/sys/unix/ztypes_netbsd_386.go | 84 + .../x/sys/unix/ztypes_netbsd_amd64.go | 84 + .../x/sys/unix/ztypes_netbsd_arm.go | 84 + .../x/sys/unix/ztypes_netbsd_arm64.go | 84 + .../x/sys/unix/ztypes_openbsd_386.go | 97 +- .../x/sys/unix/ztypes_openbsd_amd64.go | 33 +- .../x/sys/unix/ztypes_openbsd_arm.go | 9 +- .../x/sys/unix/ztypes_openbsd_arm64.go | 9 +- .../x/sys/unix/ztypes_openbsd_mips64.go | 9 +- .../x/sys/windows/syscall_windows.go | 20 +- .../golang.org/x/sys/windows/types_windows.go | 85 + .../x/sys/windows/zsyscall_windows.go | 27 + .../x/text/unicode/norm/forminfo.go | 2 +- vertical-pod-autoscaler/vendor/modules.txt | 8 +- 1842 files changed, 134451 insertions(+), 47687 deletions(-) create mode 100644 addon-resizer/enhancements/5546-scaling-based-on-container-count/README.md create mode 100644 addon-resizer/enhancements/5700-nanny-configuration-reload/README.md create mode 100644 cluster-autoscaler/Dockerfile.s390x create mode 100644 cluster-autoscaler/SYNC-CHANGES/SYNC_CHANGES-1.28.md create mode 100644 cluster-autoscaler/cloudprovider/alicloud/alibaba-cloud-sdk-go/sdk/auth/credentials/oidc_credential.go create mode 100644 
cluster-autoscaler/cloudprovider/alicloud/alibaba-cloud-sdk-go/sdk/auth/signers/signer_oidc.go create mode 100644 cluster-autoscaler/cloudprovider/alicloud/alibaba-cloud-sdk-go/sdk/client_test.go create mode 100644 cluster-autoscaler/cloudprovider/alicloud/alicloud_auto_scaling_test.go create mode 100644 cluster-autoscaler/cloudprovider/alicloud/alicloud_cloud_config_test.go create mode 100644 cluster-autoscaler/cloudprovider/alicloud/alicloud_instance_types_test.go create mode 100644 cluster-autoscaler/cloudprovider/alicloud/examples/cluster-autoscaler-rrsa-standard.yaml create mode 100644 cluster-autoscaler/cloudprovider/bizflycloud/OWNERS create mode 100644 cluster-autoscaler/cloudprovider/builder/builder_volcengine.go create mode 100644 cluster-autoscaler/cloudprovider/civo/examples/cluster-autoscaler-node-pool.yaml create mode 100755 cluster-autoscaler/cloudprovider/hetzner/hack/update-vendor.sh rename cluster-autoscaler/{vendor/github.com/mitchellh/mapstructure => cloudprovider/hetzner/hcloud-go}/LICENSE (86%) create mode 100644 cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/deprecation.go create mode 100644 cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/schema/deprecation.go create mode 100644 cluster-autoscaler/cloudprovider/linode/OWNERS create mode 100644 cluster-autoscaler/cloudprovider/ovhcloud/OWNERS create mode 100644 cluster-autoscaler/cloudprovider/volcengine/OWNERS create mode 100644 cluster-autoscaler/cloudprovider/volcengine/README.md create mode 100644 cluster-autoscaler/cloudprovider/volcengine/examples/cluster-autoscaler-deployment.yaml create mode 100644 cluster-autoscaler/cloudprovider/volcengine/examples/cluster-autoscaler-secret.yaml create mode 100644 cluster-autoscaler/cloudprovider/volcengine/examples/cluster-autoscaler-svcaccount.yaml create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volc-sdk-golang/base/aes.go create mode 100644 
cluster-autoscaler/cloudprovider/volcengine/volc-sdk-golang/base/client.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volc-sdk-golang/base/model.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volc-sdk-golang/base/sign.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volc-sdk-golang/base/utils.go rename cluster-autoscaler/{vendor/k8s.io/kubernetes/pkg/scheduler/apis/config/v1beta2/zz_generated.deepcopy.go => cloudprovider/volcengine/volc-sdk-golang/base/version.go} (76%) create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volc-sdk-golang/service/sts/config.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volc-sdk-golang/service/sts/model.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volc-sdk-golang/service/sts/wrapper.go rename cluster-autoscaler/{vendor/k8s.io/apiserver/pkg/cel/library/libraries.go => cloudprovider/volcengine/volc-sdk-golang/service/sts/wrapper_test.go} (50%) create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/internal/ini/ast.go rename cluster-autoscaler/{vendor/k8s.io/kube-scheduler/config/v1beta2/doc.go => cloudprovider/volcengine/volcengine-go-sdk/internal/ini/comma_token.go} (62%) create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/internal/ini/comment_token.go rename cluster-autoscaler/{vendor/k8s.io/kubernetes/pkg/proxy/apis/config/v1alpha1/zz_generated.deepcopy.go => cloudprovider/volcengine/volcengine-go-sdk/internal/ini/empty_token.go} (66%) create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/internal/ini/expression.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/internal/ini/fuzz.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/internal/ini/ini.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/internal/ini/ini_lexer.go create mode 100644 
cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/internal/ini/ini_parser.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/internal/ini/literal_tokens.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/internal/ini/newline_token.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/internal/ini/number_helper.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/internal/ini/op_tokens.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/internal/ini/parse_error.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/internal/ini/parse_stack.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/internal/ini/sep_tokens.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/internal/ini/skipper.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/internal/ini/statement.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/internal/ini/value_util.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/internal/ini/visitor.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/internal/ini/walker.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/internal/ini/ws_token.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/internal/sdkio/byte.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/internal/sdkmath/floor_go.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/internal/sdkrand/locked_source.go rename cluster-autoscaler/{vendor/k8s.io/kubernetes/cmd/kube-proxy/app/init_others.go => cloudprovider/volcengine/volcengine-go-sdk/internal/sdkrand/read.go} (67%) create mode 
100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/internal/shareddefaults/shared_config.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/private/protocol/host.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/private/protocol/host_prefix.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/private/protocol/idempotency.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/private/protocol/json/jsonutil/build.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/private/protocol/json/jsonutil/unmarshal.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/private/protocol/jsonvalue.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/private/protocol/payload.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/private/protocol/query/build.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/private/protocol/query/queryutil/queryutil.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/private/protocol/query/unmarshal.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/private/protocol/query/unmarshal_error.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/private/protocol/rest/build.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/private/protocol/rest/payload.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/private/protocol/rest/unmarshal.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/private/protocol/timestamp.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/private/protocol/unmarshal.go create mode 100644 
cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/private/protocol/volcenginequery/unmarshal.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/private/protocol/xml/xmlutil/build.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/private/protocol/xml/xmlutil/sort.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/private/protocol/xml/xmlutil/unmarshal.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/private/protocol/xml/xmlutil/xml_to_struct.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/private/util/sort_keys.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/private/util/util.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_attach_db_instances.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_attach_instances.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_attach_server_groups.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_complete_lifecycle_activity.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_create_lifecycle_hook.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_create_scaling_configuration.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_create_scaling_group.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_create_scaling_policy.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_delete_lifecycle_hook.go create mode 100644 
cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_delete_scaling_configuration.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_delete_scaling_group.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_delete_scaling_policy.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_describe_lifecycle_activities.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_describe_lifecycle_hooks.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_describe_scaling_activities.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_describe_scaling_configurations.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_describe_scaling_groups.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_describe_scaling_instances.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_describe_scaling_policies.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_detach_db_instances.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_detach_instances.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_detach_server_groups.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_disable_scaling_group.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_disable_scaling_policy.go create mode 100644 
cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_enable_scaling_configuration.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_enable_scaling_group.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_enable_scaling_policy.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_modify_lifecycle_hook.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_modify_scaling_configuration.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_modify_scaling_group.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_modify_scaling_policy.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_remove_instances.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_set_instances_protection.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/interface_autoscaling.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/service_autoscaling.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_associate_instances_iam_role.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_attach_key_pair.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_create_deployment_set.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_create_image.go create mode 100644 
cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_create_key_pair.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_create_tags.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_delete_deployment_set.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_delete_images.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_delete_instance.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_delete_instances.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_delete_key_pairs.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_delete_tags.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_describe_available_resource.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_describe_deployment_set_supported_instance_type_family.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_describe_deployment_sets.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_describe_image_share_permission.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_describe_images.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_describe_instance_ecs_terminal_url.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_describe_instance_type_families.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_describe_instance_types.go create mode 100644 
cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_describe_instance_vnc_url.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_describe_instances.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_describe_instances_iam_roles.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_describe_key_pairs.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_describe_system_events.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_describe_tags.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_describe_tasks.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_describe_user_data.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_describe_zones.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_detach_key_pair.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_disassociate_instances_iam_role.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_export_image.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_import_image.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_import_key_pair.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_modify_deployment_set_attribute.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_modify_image_attribute.go create mode 100644 
cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_modify_image_share_permission.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_modify_instance_attribute.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_modify_instance_charge_type.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_modify_instance_deployment.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_modify_instance_spec.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_modify_key_pair_attribute.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_reboot_instance.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_reboot_instances.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_renew_instance.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_replace_system_volume.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_run_instances.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_start_instance.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_start_instances.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_stop_instance.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_stop_instances.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_update_system_events.go create mode 100644 
cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/interface_ecs.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/service_ecs.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/client/client.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/client/default_retryer.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/client/logger.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/client/metadata/client_info.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/client/no_op_retryer.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/config.go rename cluster-autoscaler/{vendor/k8s.io/kubernetes/pkg/scheduler/apis/config/v1beta2/doc.go => cloudprovider/volcengine/volcengine-go-sdk/volcengine/context.go} (55%) create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/context_background.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/context_sleep.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/convert_types.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/corehandlers/custom_req.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/corehandlers/handlers.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/corehandlers/param_validator.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/corehandlers/user_agent.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/credentials/chain_provider.go create mode 100644 
cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/credentials/credentials.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/credentials/endpointcreds/provider.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/credentials/env_provider.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/credentials/processcreds/provider.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/credentials/shared_credentials_provider.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/credentials/static_provider.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/credentials/sts_credentials.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/credentials/sts_provider.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/custom/custom.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/custom/interceptor.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/defaults/defaults.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/defaults/shared_config.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/endpoints/endpoints.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/endpoints/model.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/errors.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/jsonvalue.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/logger.go create mode 100644 
cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request/connection_reset_error.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request/handlers.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request/http_request.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request/offset_reader.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request/request.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request/request_context.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request/request_pagination.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request/retryer.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request/timeout_read_closer.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request/validation.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request/waiter.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/response/volcengine_response.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/session/credentials.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/session/env_config.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/session/session.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/session/shared_config.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/signer/volc/volc.go create mode 100644 
cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/special/iot_response.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/special/special_conf.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/types.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/universal/universal_client.go rename cluster-autoscaler/{vendor/k8s.io/kubernetes/pkg/proxy/apis/well_known_labels.go => cloudprovider/volcengine/volcengine-go-sdk/volcengine/universal/universal_const.go} (70%) create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/universal/universal_struct.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/url.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/version.go create mode 100755 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcenginebody/body.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineerr/error.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineerr/types.go create mode 100755 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcenginequery/build.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcenginequery/unmarshal.go create mode 100755 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcenginequery/unmarshal_error.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil/copy.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil/equal.go create mode 100644 
cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil/path_value.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil/prettify.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil/string_value.go create mode 100755 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil/tools.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil/trans.go create mode 100755 cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil/url.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine_auto_scaling_cloud_service.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine_auto_scaling_group.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine_auto_scaling_groups.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine_cloud_config.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine_cloud_provider.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine_ecs_cloud_service.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine_manager.go create mode 100644 cluster-autoscaler/cloudprovider/volcengine/volcengine_util.go create mode 100644 cluster-autoscaler/config/test/config.go create mode 100644 cluster-autoscaler/core/scaledown/actuation/group_deletion_scheduler.go create mode 100644 cluster-autoscaler/core/scaledown/actuation/group_deletion_scheduler_test.go create mode 100644 cluster-autoscaler/core/scaledown/budgets/budgets.go create mode 100644 cluster-autoscaler/core/scaledown/budgets/budgets_test.go create mode 100644 cluster-autoscaler/core/scaleup/orchestrator/executor.go create mode 100644 cluster-autoscaler/core/scaleup/orchestrator/executor_test.go create mode 
100644 cluster-autoscaler/estimator/cluster_capacity_threshold.go create mode 100644 cluster-autoscaler/estimator/cluster_capacity_threshold_test.go create mode 100644 cluster-autoscaler/estimator/decreasing_pod_orderer.go create mode 100644 cluster-autoscaler/estimator/decreasing_pod_orderer_test.go create mode 100644 cluster-autoscaler/estimator/estimation_context.go create mode 100644 cluster-autoscaler/estimator/sng_capacity_threshold.go create mode 100644 cluster-autoscaler/estimator/sng_capacity_threshold_test.go create mode 100644 cluster-autoscaler/estimator/static_threshold.go create mode 100644 cluster-autoscaler/estimator/threshold.go create mode 100644 cluster-autoscaler/processors/binpacking/binpacking_limiter.go create mode 100644 cluster-autoscaler/simulator/drainability/mirror.go create mode 100644 cluster-autoscaler/simulator/drainability/mirror_test.go create mode 100644 cluster-autoscaler/vendor/github.com/Microsoft/go-winio/.gitattributes create mode 100644 cluster-autoscaler/vendor/github.com/Microsoft/go-winio/.golangci.yml create mode 100644 cluster-autoscaler/vendor/github.com/Microsoft/go-winio/SECURITY.md create mode 100644 cluster-autoscaler/vendor/github.com/Microsoft/go-winio/doc.go create mode 100644 cluster-autoscaler/vendor/github.com/Microsoft/go-winio/internal/socket/rawaddr.go create mode 100644 cluster-autoscaler/vendor/github.com/Microsoft/go-winio/internal/socket/socket.go create mode 100644 cluster-autoscaler/vendor/github.com/Microsoft/go-winio/internal/socket/zsyscall_windows.go create mode 100644 cluster-autoscaler/vendor/github.com/Microsoft/go-winio/pkg/guid/guid_nonwindows.go create mode 100644 cluster-autoscaler/vendor/github.com/Microsoft/go-winio/pkg/guid/guid_windows.go create mode 100644 cluster-autoscaler/vendor/github.com/Microsoft/go-winio/pkg/guid/variant_string.go create mode 100644 cluster-autoscaler/vendor/github.com/Microsoft/go-winio/tools.go delete mode 100644 
cluster-autoscaler/vendor/github.com/Microsoft/hcsshim/hcn/hcn.go delete mode 100644 cluster-autoscaler/vendor/github.com/Microsoft/hcsshim/hcn/hcnendpoint.go delete mode 100644 cluster-autoscaler/vendor/github.com/Microsoft/hcsshim/hcn/hcnerrors.go delete mode 100644 cluster-autoscaler/vendor/github.com/Microsoft/hcsshim/hcn/hcnglobals.go delete mode 100644 cluster-autoscaler/vendor/github.com/Microsoft/hcsshim/hcn/hcnloadbalancer.go delete mode 100644 cluster-autoscaler/vendor/github.com/Microsoft/hcsshim/hcn/hcnnamespace.go delete mode 100644 cluster-autoscaler/vendor/github.com/Microsoft/hcsshim/hcn/hcnnetwork.go delete mode 100644 cluster-autoscaler/vendor/github.com/Microsoft/hcsshim/hcn/hcnpolicy.go delete mode 100644 cluster-autoscaler/vendor/github.com/Microsoft/hcsshim/hcn/hcnroute.go delete mode 100644 cluster-autoscaler/vendor/github.com/Microsoft/hcsshim/hcn/hcnsupport.go delete mode 100644 cluster-autoscaler/vendor/github.com/Microsoft/hcsshim/hcn/zsyscall_windows.go delete mode 100644 cluster-autoscaler/vendor/github.com/Microsoft/hcsshim/internal/cni/registry.go delete mode 100644 cluster-autoscaler/vendor/github.com/Microsoft/hcsshim/internal/regstate/regstate.go delete mode 100644 cluster-autoscaler/vendor/github.com/Microsoft/hcsshim/internal/regstate/zsyscall_windows.go delete mode 100644 cluster-autoscaler/vendor/github.com/Microsoft/hcsshim/internal/runhcs/container.go delete mode 100644 cluster-autoscaler/vendor/github.com/Microsoft/hcsshim/internal/runhcs/util.go delete mode 100644 cluster-autoscaler/vendor/github.com/Microsoft/hcsshim/internal/runhcs/vm.go create mode 100644 cluster-autoscaler/vendor/github.com/cilium/ebpf/MAINTAINERS.md create mode 100644 cluster-autoscaler/vendor/github.com/cilium/ebpf/asm/metadata.go create mode 100644 cluster-autoscaler/vendor/github.com/cilium/ebpf/btf/btf.go rename cluster-autoscaler/vendor/github.com/cilium/ebpf/{internal => }/btf/btf_types.go (73%) rename 
cluster-autoscaler/vendor/github.com/cilium/ebpf/{internal => }/btf/btf_types_string.go (100%) rename cluster-autoscaler/vendor/github.com/cilium/ebpf/{internal => }/btf/core.go (65%) rename cluster-autoscaler/vendor/github.com/cilium/ebpf/{internal => }/btf/doc.go (71%) create mode 100644 cluster-autoscaler/vendor/github.com/cilium/ebpf/btf/ext_info.go create mode 100644 cluster-autoscaler/vendor/github.com/cilium/ebpf/btf/format.go create mode 100644 cluster-autoscaler/vendor/github.com/cilium/ebpf/btf/handle.go create mode 100644 cluster-autoscaler/vendor/github.com/cilium/ebpf/btf/strings.go create mode 100644 cluster-autoscaler/vendor/github.com/cilium/ebpf/btf/types.go delete mode 100644 cluster-autoscaler/vendor/github.com/cilium/ebpf/elf_reader_fuzz.go delete mode 100644 cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/btf/btf.go delete mode 100644 cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/btf/ext_info.go delete mode 100644 cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/btf/fuzz.go delete mode 100644 cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/btf/info.go delete mode 100644 cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/btf/strings.go delete mode 100644 cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/btf/syscalls.go delete mode 100644 cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/btf/types.go delete mode 100644 cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/endian.go create mode 100644 cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/endian_be.go create mode 100644 cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/endian_le.go delete mode 100644 cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/fd.go create mode 100644 cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/output.go create mode 100644 cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/sys/doc.go create mode 100644 
cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/sys/fd.go rename cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/{ => sys}/ptr.go (71%) rename cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/{ => sys}/ptr_32_be.go (93%) rename cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/{ => sys}/ptr_32_le.go (94%) rename cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/{ => sys}/ptr_64.go (95%) create mode 100644 cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/sys/syscall.go create mode 100644 cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/sys/types.go delete mode 100644 cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/syscall.go delete mode 100644 cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/syscall_string.go create mode 100644 cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/vdso.go delete mode 100644 cluster-autoscaler/vendor/github.com/cilium/ebpf/link/freplace.go create mode 100644 cluster-autoscaler/vendor/github.com/cilium/ebpf/link/socket_filter.go create mode 100644 cluster-autoscaler/vendor/github.com/cilium/ebpf/link/tracing.go create mode 100644 cluster-autoscaler/vendor/github.com/cilium/ebpf/link/xdp.go create mode 100644 cluster-autoscaler/vendor/github.com/containerd/cgroups/.gitignore create mode 100644 cluster-autoscaler/vendor/github.com/containerd/cgroups/Makefile create mode 100644 cluster-autoscaler/vendor/github.com/containerd/cgroups/Protobuild.toml create mode 100644 cluster-autoscaler/vendor/github.com/containerd/cgroups/README.md create mode 100644 cluster-autoscaler/vendor/github.com/containerd/cgroups/blkio.go create mode 100644 cluster-autoscaler/vendor/github.com/containerd/cgroups/cgroup.go create mode 100644 cluster-autoscaler/vendor/github.com/containerd/cgroups/control.go create mode 100644 cluster-autoscaler/vendor/github.com/containerd/cgroups/cpu.go create mode 100644 cluster-autoscaler/vendor/github.com/containerd/cgroups/cpuacct.go create 
mode 100644 cluster-autoscaler/vendor/github.com/containerd/cgroups/cpuset.go create mode 100644 cluster-autoscaler/vendor/github.com/containerd/cgroups/devices.go create mode 100644 cluster-autoscaler/vendor/github.com/containerd/cgroups/errors.go create mode 100644 cluster-autoscaler/vendor/github.com/containerd/cgroups/freezer.go create mode 100644 cluster-autoscaler/vendor/github.com/containerd/cgroups/hierarchy.go create mode 100644 cluster-autoscaler/vendor/github.com/containerd/cgroups/hugetlb.go create mode 100644 cluster-autoscaler/vendor/github.com/containerd/cgroups/memory.go create mode 100644 cluster-autoscaler/vendor/github.com/containerd/cgroups/named.go create mode 100644 cluster-autoscaler/vendor/github.com/containerd/cgroups/net_cls.go create mode 100644 cluster-autoscaler/vendor/github.com/containerd/cgroups/net_prio.go create mode 100644 cluster-autoscaler/vendor/github.com/containerd/cgroups/opts.go create mode 100644 cluster-autoscaler/vendor/github.com/containerd/cgroups/paths.go create mode 100644 cluster-autoscaler/vendor/github.com/containerd/cgroups/perf_event.go create mode 100644 cluster-autoscaler/vendor/github.com/containerd/cgroups/pids.go create mode 100644 cluster-autoscaler/vendor/github.com/containerd/cgroups/rdma.go create mode 100644 cluster-autoscaler/vendor/github.com/containerd/cgroups/state.go create mode 100644 cluster-autoscaler/vendor/github.com/containerd/cgroups/subsystem.go create mode 100644 cluster-autoscaler/vendor/github.com/containerd/cgroups/systemd.go create mode 100644 cluster-autoscaler/vendor/github.com/containerd/cgroups/ticks.go create mode 100644 cluster-autoscaler/vendor/github.com/containerd/cgroups/utils.go create mode 100644 cluster-autoscaler/vendor/github.com/containerd/cgroups/v1.go create mode 100644 cluster-autoscaler/vendor/github.com/containerd/ttrpc/.gitattributes create mode 100644 cluster-autoscaler/vendor/github.com/containerd/ttrpc/.golangci.yml create mode 100644 
cluster-autoscaler/vendor/github.com/containerd/ttrpc/Makefile create mode 100644 cluster-autoscaler/vendor/github.com/containerd/ttrpc/PROTOCOL.md create mode 100644 cluster-autoscaler/vendor/github.com/containerd/ttrpc/Protobuild.toml create mode 100644 cluster-autoscaler/vendor/github.com/containerd/ttrpc/doc.go create mode 100644 cluster-autoscaler/vendor/github.com/containerd/ttrpc/errors.go create mode 100644 cluster-autoscaler/vendor/github.com/containerd/ttrpc/request.pb.go create mode 100644 cluster-autoscaler/vendor/github.com/containerd/ttrpc/request.proto create mode 100644 cluster-autoscaler/vendor/github.com/containerd/ttrpc/stream.go create mode 100644 cluster-autoscaler/vendor/github.com/containerd/ttrpc/stream_server.go create mode 100644 cluster-autoscaler/vendor/github.com/containerd/ttrpc/test.proto delete mode 100644 cluster-autoscaler/vendor/github.com/containerd/ttrpc/types.go delete mode 100644 cluster-autoscaler/vendor/github.com/ghodss/yaml/.gitignore delete mode 100644 cluster-autoscaler/vendor/github.com/ghodss/yaml/.travis.yml delete mode 100644 cluster-autoscaler/vendor/github.com/ghodss/yaml/README.md delete mode 100644 cluster-autoscaler/vendor/github.com/ghodss/yaml/fields.go delete mode 100644 cluster-autoscaler/vendor/github.com/ghodss/yaml/yaml.go delete mode 100644 cluster-autoscaler/vendor/github.com/gofrs/uuid/.travis.yml create mode 100644 cluster-autoscaler/vendor/github.com/google/cel-go/ext/bindings.go create mode 100644 cluster-autoscaler/vendor/github.com/google/cel-go/ext/sets.go rename cluster-autoscaler/vendor/github.com/google/{gnostic => gnostic-models}/LICENSE (100%) rename cluster-autoscaler/vendor/github.com/google/{gnostic => gnostic-models}/compiler/README.md (100%) rename cluster-autoscaler/vendor/github.com/google/{gnostic => gnostic-models}/compiler/context.go (100%) rename cluster-autoscaler/vendor/github.com/google/{gnostic => gnostic-models}/compiler/error.go (100%) rename 
cluster-autoscaler/vendor/github.com/google/{gnostic => gnostic-models}/compiler/extensions.go (97%) rename cluster-autoscaler/vendor/github.com/google/{gnostic => gnostic-models}/compiler/helpers.go (99%) rename cluster-autoscaler/vendor/github.com/google/{gnostic => gnostic-models}/compiler/main.go (100%) rename cluster-autoscaler/vendor/github.com/google/{gnostic => gnostic-models}/compiler/reader.go (100%) rename cluster-autoscaler/vendor/github.com/google/{gnostic => gnostic-models}/extensions/README.md (100%) rename cluster-autoscaler/vendor/github.com/google/{gnostic => gnostic-models}/extensions/extension.pb.go (99%) rename cluster-autoscaler/vendor/github.com/google/{gnostic => gnostic-models}/extensions/extension.proto (100%) rename cluster-autoscaler/vendor/github.com/google/{gnostic => gnostic-models}/extensions/extensions.go (100%) rename cluster-autoscaler/vendor/github.com/google/{gnostic => gnostic-models}/jsonschema/README.md (100%) rename cluster-autoscaler/vendor/github.com/google/{gnostic => gnostic-models}/jsonschema/base.go (90%) rename cluster-autoscaler/vendor/github.com/google/{gnostic => gnostic-models}/jsonschema/display.go (92%) rename cluster-autoscaler/vendor/github.com/google/{gnostic => gnostic-models}/jsonschema/models.go (97%) rename cluster-autoscaler/vendor/github.com/google/{gnostic => gnostic-models}/jsonschema/operations.go (100%) rename cluster-autoscaler/vendor/github.com/google/{gnostic => gnostic-models}/jsonschema/reader.go (99%) rename cluster-autoscaler/vendor/github.com/google/{gnostic => gnostic-models}/jsonschema/schema.json (100%) rename cluster-autoscaler/vendor/github.com/google/{gnostic => gnostic-models}/jsonschema/writer.go (92%) rename cluster-autoscaler/vendor/github.com/google/{gnostic => gnostic-models}/openapiv2/OpenAPIv2.go (99%) rename cluster-autoscaler/vendor/github.com/google/{gnostic => gnostic-models}/openapiv2/OpenAPIv2.pb.go (99%) rename cluster-autoscaler/vendor/github.com/google/{gnostic => 
gnostic-models}/openapiv2/OpenAPIv2.proto (100%) rename cluster-autoscaler/vendor/github.com/google/{gnostic => gnostic-models}/openapiv2/README.md (100%) rename cluster-autoscaler/vendor/github.com/google/{gnostic => gnostic-models}/openapiv2/document.go (96%) rename cluster-autoscaler/vendor/github.com/google/{gnostic => gnostic-models}/openapiv2/openapi-2.0.json (100%) rename cluster-autoscaler/vendor/github.com/google/{gnostic => gnostic-models}/openapiv3/OpenAPIv3.go (99%) rename cluster-autoscaler/vendor/github.com/google/{gnostic => gnostic-models}/openapiv3/OpenAPIv3.pb.go (99%) rename cluster-autoscaler/vendor/github.com/google/{gnostic => gnostic-models}/openapiv3/OpenAPIv3.proto (99%) rename cluster-autoscaler/vendor/github.com/google/{gnostic => gnostic-models}/openapiv3/README.md (89%) rename cluster-autoscaler/vendor/github.com/google/{gnostic => gnostic-models}/openapiv3/document.go (96%) delete mode 100644 cluster-autoscaler/vendor/github.com/google/gnostic/openapiv3/annotations.pb.go delete mode 100644 cluster-autoscaler/vendor/github.com/google/gnostic/openapiv3/annotations.proto delete mode 100644 cluster-autoscaler/vendor/github.com/google/gnostic/openapiv3/openapi-3.0.json delete mode 100644 cluster-autoscaler/vendor/github.com/google/gnostic/openapiv3/openapi-3.1.json delete mode 100644 cluster-autoscaler/vendor/github.com/mitchellh/mapstructure/CHANGELOG.md delete mode 100644 cluster-autoscaler/vendor/github.com/mitchellh/mapstructure/README.md delete mode 100644 cluster-autoscaler/vendor/github.com/mitchellh/mapstructure/decode_hooks.go delete mode 100644 cluster-autoscaler/vendor/github.com/mitchellh/mapstructure/error.go delete mode 100644 cluster-autoscaler/vendor/github.com/mitchellh/mapstructure/mapstructure.go create mode 100644 cluster-autoscaler/vendor/github.com/opencontainers/runc/libcontainer/eaccess_go119.go create mode 100644 cluster-autoscaler/vendor/github.com/opencontainers/runc/libcontainer/eaccess_stub.go create mode 100644 
cluster-autoscaler/vendor/github.com/prometheus/client_golang/prometheus/vnext.go create mode 100644 cluster-autoscaler/vendor/github.com/prometheus/procfs/fs_statfs_notype.go create mode 100644 cluster-autoscaler/vendor/github.com/prometheus/procfs/fs_statfs_type.go create mode 100644 cluster-autoscaler/vendor/github.com/prometheus/procfs/net_wireless.go create mode 100644 cluster-autoscaler/vendor/github.com/vishvananda/netns/.golangci.yml create mode 100644 cluster-autoscaler/vendor/go.etcd.io/etcd/client/pkg/v3/tlsutil/versions.go create mode 100644 cluster-autoscaler/vendor/golang.org/x/crypto/hkdf/hkdf.go rename cluster-autoscaler/vendor/{github.com/ghodss/yaml => golang.org/x/mod}/LICENSE (55%) create mode 100644 cluster-autoscaler/vendor/golang.org/x/mod/PATENTS create mode 100644 cluster-autoscaler/vendor/golang.org/x/mod/semver/semver.go create mode 100644 cluster-autoscaler/vendor/golang.org/x/sys/execabs/execabs.go create mode 100644 cluster-autoscaler/vendor/golang.org/x/sys/execabs/execabs_go118.go create mode 100644 cluster-autoscaler/vendor/golang.org/x/sys/execabs/execabs_go119.go create mode 100644 cluster-autoscaler/vendor/golang.org/x/tools/cmd/stringer/stringer.go create mode 100644 cluster-autoscaler/vendor/golang.org/x/tools/go/gcexportdata/gcexportdata.go create mode 100644 cluster-autoscaler/vendor/golang.org/x/tools/go/gcexportdata/importer.go create mode 100644 cluster-autoscaler/vendor/golang.org/x/tools/go/internal/packagesdriver/sizes.go create mode 100644 cluster-autoscaler/vendor/golang.org/x/tools/go/packages/doc.go create mode 100644 cluster-autoscaler/vendor/golang.org/x/tools/go/packages/external.go create mode 100644 cluster-autoscaler/vendor/golang.org/x/tools/go/packages/golist.go create mode 100644 cluster-autoscaler/vendor/golang.org/x/tools/go/packages/golist_overlay.go create mode 100644 cluster-autoscaler/vendor/golang.org/x/tools/go/packages/loadmode_string.go create mode 100644 
cluster-autoscaler/vendor/golang.org/x/tools/go/packages/packages.go create mode 100644 cluster-autoscaler/vendor/golang.org/x/tools/go/packages/visit.go create mode 100644 cluster-autoscaler/vendor/golang.org/x/tools/go/types/objectpath/objectpath.go create mode 100644 cluster-autoscaler/vendor/golang.org/x/tools/internal/event/core/event.go create mode 100644 cluster-autoscaler/vendor/golang.org/x/tools/internal/event/core/export.go create mode 100644 cluster-autoscaler/vendor/golang.org/x/tools/internal/event/core/fast.go create mode 100644 cluster-autoscaler/vendor/golang.org/x/tools/internal/event/doc.go create mode 100644 cluster-autoscaler/vendor/golang.org/x/tools/internal/event/event.go create mode 100644 cluster-autoscaler/vendor/golang.org/x/tools/internal/event/keys/keys.go create mode 100644 cluster-autoscaler/vendor/golang.org/x/tools/internal/event/keys/standard.go create mode 100644 cluster-autoscaler/vendor/golang.org/x/tools/internal/event/label/label.go create mode 100644 cluster-autoscaler/vendor/golang.org/x/tools/internal/event/tag/tag.go create mode 100644 cluster-autoscaler/vendor/golang.org/x/tools/internal/gcimporter/bimport.go create mode 100644 cluster-autoscaler/vendor/golang.org/x/tools/internal/gcimporter/exportdata.go create mode 100644 cluster-autoscaler/vendor/golang.org/x/tools/internal/gcimporter/gcimporter.go create mode 100644 cluster-autoscaler/vendor/golang.org/x/tools/internal/gcimporter/iexport.go create mode 100644 cluster-autoscaler/vendor/golang.org/x/tools/internal/gcimporter/iimport.go create mode 100644 cluster-autoscaler/vendor/golang.org/x/tools/internal/gcimporter/newInterface10.go create mode 100644 cluster-autoscaler/vendor/golang.org/x/tools/internal/gcimporter/newInterface11.go create mode 100644 cluster-autoscaler/vendor/golang.org/x/tools/internal/gcimporter/support_go117.go create mode 100644 cluster-autoscaler/vendor/golang.org/x/tools/internal/gcimporter/support_go118.go create mode 100644 
cluster-autoscaler/vendor/golang.org/x/tools/internal/gcimporter/unified_no.go create mode 100644 cluster-autoscaler/vendor/golang.org/x/tools/internal/gcimporter/unified_yes.go create mode 100644 cluster-autoscaler/vendor/golang.org/x/tools/internal/gcimporter/ureader_no.go create mode 100644 cluster-autoscaler/vendor/golang.org/x/tools/internal/gcimporter/ureader_yes.go create mode 100644 cluster-autoscaler/vendor/golang.org/x/tools/internal/gocommand/invoke.go create mode 100644 cluster-autoscaler/vendor/golang.org/x/tools/internal/gocommand/vendor.go create mode 100644 cluster-autoscaler/vendor/golang.org/x/tools/internal/gocommand/version.go create mode 100644 cluster-autoscaler/vendor/golang.org/x/tools/internal/packagesinternal/packages.go create mode 100644 cluster-autoscaler/vendor/golang.org/x/tools/internal/pkgbits/codes.go create mode 100644 cluster-autoscaler/vendor/golang.org/x/tools/internal/pkgbits/decoder.go create mode 100644 cluster-autoscaler/vendor/golang.org/x/tools/internal/pkgbits/doc.go create mode 100644 cluster-autoscaler/vendor/golang.org/x/tools/internal/pkgbits/encoder.go create mode 100644 cluster-autoscaler/vendor/golang.org/x/tools/internal/pkgbits/flags.go create mode 100644 cluster-autoscaler/vendor/golang.org/x/tools/internal/pkgbits/frames_go1.go create mode 100644 cluster-autoscaler/vendor/golang.org/x/tools/internal/pkgbits/frames_go17.go create mode 100644 cluster-autoscaler/vendor/golang.org/x/tools/internal/pkgbits/reloc.go create mode 100644 cluster-autoscaler/vendor/golang.org/x/tools/internal/pkgbits/support.go create mode 100644 cluster-autoscaler/vendor/golang.org/x/tools/internal/pkgbits/sync.go create mode 100644 cluster-autoscaler/vendor/golang.org/x/tools/internal/pkgbits/syncmarker_string.go create mode 100644 cluster-autoscaler/vendor/golang.org/x/tools/internal/tokeninternal/tokeninternal.go create mode 100644 cluster-autoscaler/vendor/golang.org/x/tools/internal/typesinternal/errorcode.go create mode 100644 
cluster-autoscaler/vendor/golang.org/x/tools/internal/typesinternal/errorcode_string.go create mode 100644 cluster-autoscaler/vendor/golang.org/x/tools/internal/typesinternal/types.go create mode 100644 cluster-autoscaler/vendor/golang.org/x/tools/internal/typesinternal/types_118.go rename cluster-autoscaler/vendor/{k8s.io/kube-proxy => google.golang.org/genproto/googleapis/api}/LICENSE (100%) create mode 100644 cluster-autoscaler/vendor/google.golang.org/genproto/googleapis/api/tidyfix.go create mode 100644 cluster-autoscaler/vendor/google.golang.org/genproto/googleapis/rpc/LICENSE create mode 100644 cluster-autoscaler/vendor/google.golang.org/genproto/internal/doc.go create mode 100644 cluster-autoscaler/vendor/k8s.io/apiextensions-apiserver/LICENSE create mode 100644 cluster-autoscaler/vendor/k8s.io/apiextensions-apiserver/pkg/features/OWNERS create mode 100644 cluster-autoscaler/vendor/k8s.io/apiextensions-apiserver/pkg/features/kube_features.go create mode 100644 cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/runtime/splice.go create mode 100644 cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/util/dump/dump.go rename cluster-autoscaler/vendor/k8s.io/{apiserver/pkg/util => apimachinery/pkg/util/httpstream}/wsstream/conn.go (100%) rename cluster-autoscaler/vendor/k8s.io/{apiserver/pkg/util => apimachinery/pkg/util/httpstream}/wsstream/doc.go (91%) rename cluster-autoscaler/vendor/k8s.io/{apiserver/pkg/util => apimachinery/pkg/util/httpstream}/wsstream/stream.go (100%) create mode 100644 cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/util/managedfields/internal/versioncheck.go create mode 100644 cluster-autoscaler/vendor/k8s.io/apiserver/pkg/admission/plugin/cel/composition.go create mode 100644 cluster-autoscaler/vendor/k8s.io/apiserver/pkg/admission/plugin/validatingadmissionpolicy/caching_authorizer.go delete mode 100644 cluster-autoscaler/vendor/k8s.io/apiserver/pkg/cel/composited.go create mode 100644 
cluster-autoscaler/vendor/k8s.io/apiserver/pkg/cel/environment/base.go create mode 100644 cluster-autoscaler/vendor/k8s.io/apiserver/pkg/cel/environment/environment.go create mode 100644 cluster-autoscaler/vendor/k8s.io/apiserver/pkg/cel/lazy/lazy.go create mode 100644 cluster-autoscaler/vendor/k8s.io/apiserver/pkg/cel/library/quantity.go create mode 100644 cluster-autoscaler/vendor/k8s.io/apiserver/pkg/cel/library/test.go create mode 100644 cluster-autoscaler/vendor/k8s.io/apiserver/pkg/cel/quantity.go delete mode 100644 cluster-autoscaler/vendor/k8s.io/apiserver/pkg/cel/registry.go create mode 100644 cluster-autoscaler/vendor/k8s.io/apiserver/pkg/server/options/encryptionconfig/metrics/metrics.go create mode 100644 cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storage/cacher/lister_watcher.go create mode 100644 cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storage/cacher/watch_progress.go create mode 100644 cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storage/value/encrypt/aes/aes_extended_nonce.go create mode 100644 cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storage/value/encrypt/aes/cache.go create mode 100644 cluster-autoscaler/vendor/k8s.io/apiserver/pkg/util/flowcontrol/dropped_requests_tracker.go create mode 100644 cluster-autoscaler/vendor/k8s.io/apiserver/pkg/util/flowcontrol/max_seats.go create mode 100644 cluster-autoscaler/vendor/k8s.io/apiserver/pkg/util/peerproxy/metrics/metrics.go create mode 100644 cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/admissionregistration/v1alpha1/variable.go create mode 100644 cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/admissionregistration/v1beta1/auditannotation.go create mode 100644 cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/admissionregistration/v1beta1/expressionwarning.go create mode 100644 cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/admissionregistration/v1beta1/matchresources.go create mode 100644 
cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/admissionregistration/v1beta1/namedrulewithoperations.go create mode 100644 cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/admissionregistration/v1beta1/paramkind.go create mode 100644 cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/admissionregistration/v1beta1/paramref.go create mode 100644 cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/admissionregistration/v1beta1/typechecking.go create mode 100644 cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/admissionregistration/v1beta1/validatingadmissionpolicy.go create mode 100644 cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/admissionregistration/v1beta1/validatingadmissionpolicybinding.go create mode 100644 cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/admissionregistration/v1beta1/validatingadmissionpolicybindingspec.go create mode 100644 cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/admissionregistration/v1beta1/validatingadmissionpolicyspec.go create mode 100644 cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/admissionregistration/v1beta1/validatingadmissionpolicystatus.go create mode 100644 cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/admissionregistration/v1beta1/validation.go create mode 100644 cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/admissionregistration/v1beta1/variable.go create mode 100644 cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/core/v1/hostip.go create mode 100644 cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/core/v1/podresourceclaimstatus.go delete mode 100644 cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/extensions/v1beta1/networkpolicystatus.go create mode 100644 cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/flowcontrol/v1alpha1/exemptprioritylevelconfiguration.go create mode 100644 
cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/flowcontrol/v1beta1/exemptprioritylevelconfiguration.go create mode 100644 cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/flowcontrol/v1beta2/exemptprioritylevelconfiguration.go create mode 100644 cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/flowcontrol/v1beta3/exemptprioritylevelconfiguration.go delete mode 100644 cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/networking/v1/networkpolicystatus.go create mode 100644 cluster-autoscaler/vendor/k8s.io/client-go/informers/admissionregistration/v1beta1/validatingadmissionpolicy.go create mode 100644 cluster-autoscaler/vendor/k8s.io/client-go/informers/admissionregistration/v1beta1/validatingadmissionpolicybinding.go create mode 100644 cluster-autoscaler/vendor/k8s.io/client-go/kubernetes/typed/admissionregistration/v1beta1/fake/fake_validatingadmissionpolicy.go create mode 100644 cluster-autoscaler/vendor/k8s.io/client-go/kubernetes/typed/admissionregistration/v1beta1/fake/fake_validatingadmissionpolicybinding.go create mode 100644 cluster-autoscaler/vendor/k8s.io/client-go/kubernetes/typed/admissionregistration/v1beta1/validatingadmissionpolicy.go create mode 100644 cluster-autoscaler/vendor/k8s.io/client-go/kubernetes/typed/admissionregistration/v1beta1/validatingadmissionpolicybinding.go create mode 100644 cluster-autoscaler/vendor/k8s.io/client-go/kubernetes/typed/authentication/v1/fake/fake_selfsubjectreview.go create mode 100644 cluster-autoscaler/vendor/k8s.io/client-go/kubernetes/typed/authentication/v1/selfsubjectreview.go create mode 100644 cluster-autoscaler/vendor/k8s.io/client-go/listers/admissionregistration/v1beta1/validatingadmissionpolicy.go create mode 100644 cluster-autoscaler/vendor/k8s.io/client-go/listers/admissionregistration/v1beta1/validatingadmissionpolicybinding.go create mode 100644 cluster-autoscaler/vendor/k8s.io/client-go/openapi/typeconverter.go create mode 100644 
cluster-autoscaler/vendor/k8s.io/client-go/tools/cache/object-names.go delete mode 100644 cluster-autoscaler/vendor/k8s.io/client-go/tools/leaderelection/resourcelock/configmaplock.go delete mode 100644 cluster-autoscaler/vendor/k8s.io/client-go/tools/leaderelection/resourcelock/endpointslock.go create mode 100644 cluster-autoscaler/vendor/k8s.io/cloud-provider/api/retry_error.go create mode 100644 cluster-autoscaler/vendor/k8s.io/cloud-provider/names/controller_names.go create mode 100644 cluster-autoscaler/vendor/k8s.io/component-base/version/dynamic.go create mode 100644 cluster-autoscaler/vendor/k8s.io/klog/v2/format.go create mode 100644 cluster-autoscaler/vendor/k8s.io/kube-openapi/pkg/builder/parameters.go delete mode 100644 cluster-autoscaler/vendor/k8s.io/kube-proxy/config/v1alpha1/doc.go delete mode 100644 cluster-autoscaler/vendor/k8s.io/kube-proxy/config/v1alpha1/register.go delete mode 100644 cluster-autoscaler/vendor/k8s.io/kube-proxy/config/v1alpha1/types.go delete mode 100644 cluster-autoscaler/vendor/k8s.io/kube-proxy/config/v1alpha1/zz_generated.deepcopy.go delete mode 100644 cluster-autoscaler/vendor/k8s.io/kube-scheduler/config/v1beta2/register.go delete mode 100644 cluster-autoscaler/vendor/k8s.io/kube-scheduler/config/v1beta2/types.go delete mode 100644 cluster-autoscaler/vendor/k8s.io/kube-scheduler/config/v1beta2/types_pluginargs.go delete mode 100644 cluster-autoscaler/vendor/k8s.io/kube-scheduler/config/v1beta2/zz_generated.deepcopy.go create mode 100644 cluster-autoscaler/vendor/k8s.io/kubelet/pkg/apis/dra/v1alpha3/api.pb.go create mode 100644 cluster-autoscaler/vendor/k8s.io/kubelet/pkg/apis/dra/v1alpha3/api.proto rename cluster-autoscaler/vendor/k8s.io/{kubernetes/pkg/kubelet => kubelet/pkg}/cri/streaming/.import-restrictions (100%) rename cluster-autoscaler/vendor/k8s.io/{kubernetes/pkg/kubelet => kubelet/pkg}/cri/streaming/errors.go (100%) rename cluster-autoscaler/vendor/k8s.io/{kubernetes/pkg/kubelet => 
kubelet/pkg}/cri/streaming/portforward/constants.go (100%) rename cluster-autoscaler/vendor/k8s.io/{kubernetes/pkg/kubelet => kubelet/pkg}/cri/streaming/portforward/httpstream.go (100%) rename cluster-autoscaler/vendor/k8s.io/{kubernetes/pkg/kubelet => kubelet/pkg}/cri/streaming/portforward/portforward.go (97%) rename cluster-autoscaler/vendor/k8s.io/{kubernetes/pkg/kubelet => kubelet/pkg}/cri/streaming/portforward/websocket.go (99%) rename cluster-autoscaler/vendor/k8s.io/{kubernetes/pkg/kubelet => kubelet/pkg}/cri/streaming/remotecommand/attach.go (100%) rename cluster-autoscaler/vendor/k8s.io/{kubernetes/pkg/kubelet => kubelet/pkg}/cri/streaming/remotecommand/doc.go (100%) rename cluster-autoscaler/vendor/k8s.io/{kubernetes/pkg/kubelet => kubelet/pkg}/cri/streaming/remotecommand/exec.go (100%) rename cluster-autoscaler/vendor/k8s.io/{kubernetes/pkg/kubelet => kubelet/pkg}/cri/streaming/remotecommand/httpstream.go (99%) rename cluster-autoscaler/vendor/k8s.io/{kubernetes/pkg/kubelet => kubelet/pkg}/cri/streaming/remotecommand/websocket.go (98%) rename cluster-autoscaler/vendor/k8s.io/{kubernetes/pkg/kubelet => kubelet/pkg}/cri/streaming/request_cache.go (100%) rename cluster-autoscaler/vendor/k8s.io/{kubernetes/pkg/kubelet => kubelet/pkg}/cri/streaming/server.go (98%) delete mode 100644 cluster-autoscaler/vendor/k8s.io/kubernetes/cmd/kube-proxy/app/conntrack.go delete mode 100644 cluster-autoscaler/vendor/k8s.io/kubernetes/cmd/kube-proxy/app/init_windows.go delete mode 100644 cluster-autoscaler/vendor/k8s.io/kubernetes/cmd/kube-proxy/app/server.go delete mode 100644 cluster-autoscaler/vendor/k8s.io/kubernetes/cmd/kube-proxy/app/server_others.go delete mode 100644 cluster-autoscaler/vendor/k8s.io/kubernetes/cmd/kube-proxy/app/server_windows.go delete mode 100644 cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/controller/lookup_cache.go create mode 100644 cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/topologymanager/scope_none.go rename 
cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/{dra => util/cdi}/cdi.go (98%) create mode 100644 cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/stats/cri_stats_provider_linux.go delete mode 100644 cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubemark/hollow_proxy.go delete mode 100644 cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/apis/config/OWNERS delete mode 100644 cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/apis/config/doc.go delete mode 100644 cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/apis/config/register.go delete mode 100644 cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/apis/config/scheme/scheme.go delete mode 100644 cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/apis/config/types.go delete mode 100644 cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/apis/config/v1alpha1/defaults.go delete mode 100644 cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/apis/config/v1alpha1/doc.go delete mode 100644 cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/apis/config/v1alpha1/register.go delete mode 100644 cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/apis/config/v1alpha1/zz_generated.conversion.go delete mode 100644 cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/apis/config/v1alpha1/zz_generated.defaults.go delete mode 100644 cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/apis/config/validation/validation.go delete mode 100644 cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/apis/config/zz_generated.deepcopy.go create mode 100644 cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/conntrack/cleanup.go rename cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/{util => proxy}/conntrack/conntrack.go (97%) delete mode 100644 cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/iptables/OWNERS delete mode 100644 cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/iptables/proxier.go rename cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/{util => 
proxy/ipvs}/ipset/ipset.go (94%) rename cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/{util => proxy/ipvs}/ipset/types.go (100%) rename cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/{util/ipvs => proxy/ipvs/util}/ipvs.go (100%) rename cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/{util/ipvs => proxy/ipvs/util}/ipvs_linux.go (99%) rename cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/{util/ipvs => proxy/ipvs/util}/ipvs_unsupported.go (100%) create mode 100644 cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/util/linebuffer.go delete mode 100644 cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/winkernel/OWNERS delete mode 100644 cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/winkernel/hns.go delete mode 100644 cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/winkernel/metrics.go delete mode 100644 cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/winkernel/proxier.go delete mode 100644 cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/apis/config/v1beta2/conversion.go delete mode 100644 cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/apis/config/v1beta2/default_plugins.go delete mode 100644 cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/apis/config/v1beta2/defaults.go delete mode 100644 cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/apis/config/v1beta2/register.go delete mode 100644 cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/apis/config/v1beta2/zz_generated.conversion.go delete mode 100644 cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/apis/config/v1beta2/zz_generated.defaults.go create mode 100644 cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/runtime/instrumented_plugins.go delete mode 100644 cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/util/conntrack/OWNERS delete mode 100644 cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/util/ipset/OWNERS delete mode 100644 cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/util/ipvs/OWNERS delete mode 100644 
cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/gcepd/OWNERS delete mode 100644 cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/gcepd/attacher.go delete mode 100644 cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/gcepd/doc.go delete mode 100644 cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/gcepd/gce_pd.go delete mode 100644 cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/gcepd/gce_pd_block.go delete mode 100644 cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/gcepd/gce_util.go rename cluster-autoscaler/vendor/k8s.io/{kubernetes/pkg/kubelet/cm => utils}/cpuset/OWNERS (57%) rename cluster-autoscaler/vendor/k8s.io/{kubernetes/pkg/kubelet/cm => utils}/cpuset/cpuset.go (98%) create mode 100644 multidimensional-pod-autoscaler/AEP.md create mode 100644 multidimensional-pod-autoscaler/kep-imgs/mpa-action-actuation.png create mode 100644 multidimensional-pod-autoscaler/kep-imgs/mpa-design.png create mode 100644 vertical-pod-autoscaler/enhancements/4016-in-place-updates-support/README.md create mode 100644 vertical-pod-autoscaler/pkg/utils/metrics/recommender/recommender_test.go create mode 100644 vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/syscall_hurd.go create mode 100644 vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/syscall_hurd_386.go create mode 100644 vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_openbsd_mips64.s diff --git a/addon-resizer/enhancements/5546-scaling-based-on-container-count/README.md b/addon-resizer/enhancements/5546-scaling-based-on-container-count/README.md new file mode 100644 index 000000000000..d2de8c89f70a --- /dev/null +++ b/addon-resizer/enhancements/5546-scaling-based-on-container-count/README.md @@ -0,0 +1,123 @@ +# KEP-5546: Scaling based on container count + + +- [Summary](#summary) + - [Goals](#goals) + - [Non-Goals](#non-goals) +- [Proposal](#proposal) + - [Notes](#notes) + - [Risks and Mitigations](#risks-and-mitigations) +- [Design Details](#design-details) + - 
[Test Plan](#test-plan) + + +## Summary + +Currently Addon Resizer supports scaling based on the number of nodes. Some workloads use resources proportionally to +the number of containers in the cluster. Since the number of containers per node varies widely between clusters, +it's more resource-efficient to scale such workloads based directly on the container count. + +### Goals + +- Allow scaling workloads based on the number of containers in a cluster. +- Allow this for Addon Resizer 1.8 ([used by metrics server]). + +### Non-Goals + +- Using both node and container count to scale workloads. +- Bringing this change to the `master` branch of Addon Resizer. + +## Proposal + +Add a flag `--scaling-mode` to Addon Resizer on the [`addon-resizer-release-1.8`] branch. The flag will +have two valid values: + +- `node-proportional` - default, current behavior. +- `container-proportional` - Addon Resizer will set resources using the same algorithm it uses now, but substituting the number + of containers wherever it currently uses the number of nodes. + +### Notes + +Addon Resizer 1.8 assumes in multiple places that it's scaling based on the number of nodes: + +- [Flag descriptions] that directly reference node counts (`--extra-cpu`, `--extra-memory`, `--extra-storage`, and + `--minClusterSize`) will need to be updated to instead refer to cluster size. +- [README] will need to be updated to reference cluster size instead of node count and explain that cluster size refers + to either node count or container count, depending on the value of the `--scaling-mode` flag. +- Many variable names in code which now refer to node count will refer to cluster size and should be renamed accordingly. + +In addition to implementing the feature, we should also clean up the code and documentation. 
+ +### Risks and Mitigations + +One potential risk is that Addon Resizer can obtain cluster size (node count or container count): +- from metrics or +- by querying the cluster API server to list all objects of the appropriate type + +depending on the configuration. There can be many times more containers in a cluster than there are nodes, so listing +all containers could result in a higher load on the cluster API server. Since Addon Resizer requests very few fields, +I don't expect this effect to be noticeable. + +Also, I expect metrics-server to test for this before using the feature, and any other users of Addon Resizer are likely +better off using metrics (which don't have this problem). + +## Design Details + +- Implement function `kubernetesClient.CountContainers()`. It will be analogous to the existing + [`kubernetesClient.CountNodes()`] function. + - If using metrics to determine the number of containers in the cluster: + - Fetch pod metrics (similar to [fetching node metrics] but use the `/pods` URI instead of `/nodes`). + - For each pod, obtain the number of containers (the length of the `containers` field). + - Sum container counts for all pods. + - If using the API server: + - Fetch the list of pods (similar to [listing nodes]). + - Fetch only the [`Spec.InitContainers`], [`Spec.Containers`], and [`Spec.EphemeralContainers`] fields. + - Exclude pods in terminal states ([selector excluding pods in terminal states in VPA]). + - Sum the container count over pods. +- Add the `--scaling-mode` flag, with two valid values: + - `node-proportional` - default, current behavior, scaling based on the cluster's node count, and + - `container-proportional` - new behavior, scaling based on the cluster's container count. +- Pass a value indicating whether we should use node count or container count to the [`updateResources()`] function. +- In `updateResources()`, use node count or container count, depending on that value. 
+ +Check that listing containers directly works. + +Consider listing pods, getting containers only for running pods. + +### Test Plan + +In addition to unit tests, we will run a manual e2e test: + +- Create a config based on [`example.yaml`] but scaling the deployment based on the number of containers in the cluster. +- Create a config starting a deployment with 100 `pause` containers. + +Test the feature by: + +- Starting the deployment scaled by Addon Resizer, based on node count. +- Observe the size of the deployment and that it's stable. +- Start a deployment with 100 `pause` containers. +- Observe the scaled deployment change resources appropriately. + +Test the node-based scaling: + +- Apply [`example.yaml`]. +- Observe the amount and stability of assigned resources. +- Resize the cluster. +- Observe the change in assigned resources. + +Both tests should be performed with both metrics-based and API-based scaling. + +[used by metrics server]: https://github.com/kubernetes-sigs/metrics-server/blob/0c47555e9b49cfe0719db1a0b7fb6c8dcdff3d38/charts/metrics-server/values.yaml#L121 +[`addon-resizer-release-1.8`]: https://github.com/kubernetes/autoscaler/tree/addon-resizer-release-1.8 +[Flag descriptions]: https://github.com/kubernetes/autoscaler/blob/da500188188d275a382be578ad3d0a758c3a170f/addon-resizer/nanny/main/pod_nanny.go#L47 +[README]: https://github.com/kubernetes/autoscaler/blob/da500188188d275a382be578ad3d0a758c3a170f/addon-resizer/README.md?plain=1#L1 +[`kubernetesClient.CountNodes()`]: https://github.com/kubernetes/autoscaler/blob/da500188188d275a382be578ad3d0a758c3a170f/addon-resizer/nanny/kubernetes_client.go#L58 +[fetching node metrics]: https://github.com/kubernetes/autoscaler/blob/da500188188d275a382be578ad3d0a758c3a170f/addon-resizer/nanny/kubernetes_client.go#L150 +[listing nodes]: https://github.com/kubernetes/autoscaler/blob/da500188188d275a382be578ad3d0a758c3a170f/addon-resizer/nanny/kubernetes_client.go#L71 +[`Spec.InitContainers`]: 
https://github.com/kubernetes/api/blob/1528256abbdf8ff2510112b28a6aacd239789a36/core/v1/types.go#L3143 +[`Spec.Containers`]: https://github.com/kubernetes/api/blob/1528256abbdf8ff2510112b28a6aacd239789a36/core/v1/types.go#L3150 +[`Spec.EphemeralContainers`]: https://github.com/kubernetes/api/blob/1528256abbdf8ff2510112b28a6aacd239789a36/core/v1/types.go#L3158 +[`Status.Phase`]: https://github.com/kubernetes/api/blob/1528256abbdf8ff2510112b28a6aacd239789a36/core/v1/types.go#L4011 +[selector excluding pods in terminal states in VPA]: https://github.com/kubernetes/autoscaler/blob/04e5bfc88363b4af9fdeb9dfd06c362ec5831f51/vertical-pod-autoscaler/e2e/v1beta2/common.go#L195 +[`updateResources()`]: https://github.com/kubernetes/autoscaler/blob/da500188188d275a382be578ad3d0a758c3a170f/addon-resizer/nanny/nanny_lib.go#L126 +[`example.yaml`]: https://github.com/kubernetes/autoscaler/blob/c8d612725c4f186d5de205ed0114f21540a8ed39/addon-resizer/deploy/example.yaml \ No newline at end of file diff --git a/addon-resizer/enhancements/5700-nanny-configuration-reload/README.md b/addon-resizer/enhancements/5700-nanny-configuration-reload/README.md new file mode 100644 index 000000000000..c661c57f36a1 --- /dev/null +++ b/addon-resizer/enhancements/5700-nanny-configuration-reload/README.md @@ -0,0 +1,61 @@ +# KEP-5700: Automatic reload of nanny configuration when updated + + +- [Summary](#summary) + - [Goals](#goals) + - [Non-Goals](#non-goals) +- [Proposal](#proposal) + - [Notes](#notes) + - [Risks and Mitigations](#risks-and-mitigations) +- [Design Details](#design-details) + - [Test Plan](#test-plan) + + +## Summary +- **Goals:** The goal of this enhancement is to improve the user experience for applying nanny configuration changes in the addon-resizer 1.8 when used with the metrics server. 
The proposed solution involves automatically reloading the nanny configuration whenever changes occur, eliminating the need for manual intervention and sidecar containers. +- **Non-Goals:** This proposal does not aim to update the functional behavior of the addon-resizer. + +## Proposal +The proposed solution involves updating the addon-resizer with the following steps: +- Create a file system watcher using `fsnotify` under `utils/fswatcher` to watch for nanny configuration changes. It should run as a goroutine in the background. +- Detect changes to the nanny configuration file using the created `fswatcher`, and trigger the reloading process when configuration changes are detected. Events should be sent on a channel. +- Re-execute the method responsible for building the NannyConfiguration, `loadNannyConfiguration`, to apply the updated configuration to the addon-resizer. +- Implement proper error handling to manage scenarios where the configuration file is temporarily inaccessible or contains parsing errors. + +### Risks and Mitigations +- There is a potential risk of filesystem-related issues causing the file watcher to malfunction. Proper testing and error handling should be implemented to handle such scenarios gracefully. +- Errors in the configuration file could lead to unexpected behavior or crashes. The addon-resizer should handle parsing errors and fall back to the previous working configuration if necessary. + +## Design Details +- Create a new package for the `fswatcher` under `utils/fswatcher`. It would contain the `fswatcher` struct, its methods, and unit tests. + - The `FsWatcher` struct would look similar to this: + ```go + type FsWatcher struct { + *fsnotify.Watcher + + Events chan struct{} + ratelimit time.Duration + names []string + paths map[string]struct{} + } + ``` + - Implement the following functions: + - `CreateFsWatcher`: Instantiates a new `FsWatcher` and starts watching the file system. 
+ - `initWatcher`: Initializes the `fsnotify` watcher and the `paths` that will be watched. + - `add`: Adds a new file to watch. + - `reset`: Re-initializes the `FsWatcher`. + - `watch`: Watches the configured files. +- In the main function, we create a new `FsWatcher` and then wait in an infinite loop to receive events indicating +filesystem changes. Based on these changes, we re-execute the `loadNannyConfiguration` function. + +> **Note:** The expected configuration file format is YAML. It has the same structure as the NannyConfiguration CRD. + +### Test Plan +To ensure the proper functioning of the enhanced addon-resizer, the following test plan should be executed: +1. **Unit Tests:** Write unit tests to validate the file watcher's functionality and ensure it triggers events when the configuration file changes. +2. **Manual e2e Tests:** Deploy the addon-resizer with a `BaseMemory` of `300Mi`, then change the `BaseMemory` to `100Mi`. We should observe changes in the behavior of the watched pod. + + +[fsnotify]: https://github.com/fsnotify/fsnotify diff --git a/balancer/deploy/controller.yaml b/balancer/deploy/controller.yaml index 9dfc079ee0fe..a6a8bd4db6ff 100644 --- a/balancer/deploy/controller.yaml +++ b/balancer/deploy/controller.yaml @@ -20,6 +20,12 @@ rules: - watch - patch - update + - apiGroups: + - balancer.x-k8s.io + resources: + - balancers/status + verbs: + - update - apiGroups: - "" resources: diff --git a/balancer/proposals/balancer.md b/balancer/proposals/balancer.md index 5e2fc004efc0..534eaa59f64f 100644 --- a/balancer/proposals/balancer.md +++ b/balancer/proposals/balancer.md @@ -10,7 +10,7 @@ These domains may include: * Cloud provider zones inside a single region, to ensure that the application is still up and running, even if one of the zones has issues. * Different types of Kubernetes nodes. These may involve nodes that are spot/preemptible, or of different machine families. 
-A single Kuberentes deployment may either leave the placement entirely up to the scheduler +A single Kubernetes deployment may either leave the placement entirely up to the scheduler (most likely leading to something not entirely desired, like all pods going to a single domain) or focus on a single domain (thus not achieving the goal of being in two or more domains). @@ -179,4 +179,4 @@ type BalancerStatus struct { // +patchStrategy=merge Conditions []metav1.Condition } -``` \ No newline at end of file +``` diff --git a/builder/Dockerfile b/builder/Dockerfile index cb5802480182..5c24664b3e62 100644 --- a/builder/Dockerfile +++ b/builder/Dockerfile @@ -12,7 +12,7 @@ # See the License for the specific language governing permissions and # limitations under the License. -FROM golang:1.20 +FROM golang:1.20.4 LABEL maintainer="Marcin Wielgus " ENV GOPATH /gopath/ diff --git a/charts/cluster-autoscaler/Chart.yaml b/charts/cluster-autoscaler/Chart.yaml index fd11b4490beb..7bbaffd34b6c 100644 --- a/charts/cluster-autoscaler/Chart.yaml +++ b/charts/cluster-autoscaler/Chart.yaml @@ -1,5 +1,5 @@ apiVersion: v2 -appVersion: 1.26.2 +appVersion: 1.27.2 description: Scales Kubernetes worker nodes within autoscaling groups. engine: gotpl home: https://github.com/kubernetes/autoscaler @@ -11,4 +11,4 @@ name: cluster-autoscaler sources: - https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler type: application -version: 9.28.0 +version: 9.29.2 diff --git a/charts/cluster-autoscaler/README.md b/charts/cluster-autoscaler/README.md index 6580cff5d378..dd6171b780e7 100644 --- a/charts/cluster-autoscaler/README.md +++ b/charts/cluster-autoscaler/README.md @@ -320,7 +320,10 @@ Though enough for the majority of installations, the default PodSecurityPolicy _ ### VerticalPodAutoscaler -The chart can install a [`VerticalPodAutoscaler`](https://github.com/kubernetes/autoscaler/blob/master/vertical-pod-autoscaler/README.md) for the Deployment if needed. 
A VPA can help minimize wasted resources when usage spikes periodically or remediate containers that are being OOMKilled. +The CA Helm Chart can install a [`VerticalPodAutoscaler`](https://github.com/kubernetes/autoscaler/blob/master/vertical-pod-autoscaler/README.md) object from Chart version `9.27.0` +onwards for the Cluster Autoscaler Deployment to scale the CA as appropriate, but for that, we +need to install the VPA to the cluster separately. A VPA can help minimize wasted resources +when usage spikes periodically or remediate containers that are being OOMKilled. The following example snippet can be used to install VPA that allows scaling down from the default recommendations of the deployment template: @@ -383,7 +386,7 @@ vpa: | image.pullPolicy | string | `"IfNotPresent"` | Image pull policy | | image.pullSecrets | list | `[]` | Image pull secrets | | image.repository | string | `"registry.k8s.io/autoscaling/cluster-autoscaler"` | Image repository | -| image.tag | string | `"v1.26.2"` | Image tag | +| image.tag | string | `"v1.27.2"` | Image tag | | kubeTargetVersionOverride | string | `""` | Allow overriding the `.Capabilities.KubeVersion.GitVersion` check. Useful for `helm template` commands. | | magnumCABundlePath | string | `"/etc/kubernetes/ca-bundle.crt"` | Path to the host's CA bundle, from `ca-file` in the cloud-config file. | | magnumClusterName | string | `""` | Cluster name or ID in Magnum. Required if `cloudProvider=magnum` and not setting `autoDiscovery.clusterName`. | @@ -408,6 +411,7 @@ vpa: | rbac.serviceAccount.name | string | `""` | The name of the ServiceAccount to use. If not set and create is `true`, a name is generated using the fullname template. | | replicaCount | int | `1` | Desired number of pods | | resources | object | `{}` | Pod resource requests and limits. 
| +| secretKeyRefNameOverride | string | `""` | Overrides the name of the Secret to use when loading the secretKeyRef for AWS and Azure env variables | | securityContext | object | `{}` | [Security context for pod](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/) | | service.annotations | object | `{}` | Annotations to add to service | | service.create | bool | `true` | If `true`, a Service will be created. | diff --git a/charts/cluster-autoscaler/README.md.gotmpl b/charts/cluster-autoscaler/README.md.gotmpl index b3a042b0de82..611ad6bb121f 100644 --- a/charts/cluster-autoscaler/README.md.gotmpl +++ b/charts/cluster-autoscaler/README.md.gotmpl @@ -320,7 +320,10 @@ Though enough for the majority of installations, the default PodSecurityPolicy _ ### VerticalPodAutoscaler -The chart can install a [`VerticalPodAutoscaler`](https://github.com/kubernetes/autoscaler/blob/master/vertical-pod-autoscaler/README.md) for the Deployment if needed. A VPA can help minimize wasted resources when usage spikes periodically or remediate containers that are being OOMKilled. +The CA Helm Chart can install a [`VerticalPodAutoscaler`](https://github.com/kubernetes/autoscaler/blob/master/vertical-pod-autoscaler/README.md) object from Chart version `9.27.0` +onwards for the Cluster Autoscaler Deployment to scale the CA as appropriate, but for that, we +need to install the VPA to the cluster separately. A VPA can help minimize wasted resources +when usage spikes periodically or remediate containers that are being OOMKilled. 
The following example snippet can be used to install VPA that allows scaling down from the default recommendations of the deployment template: diff --git a/charts/cluster-autoscaler/templates/clusterrole.yaml b/charts/cluster-autoscaler/templates/clusterrole.yaml index e3d36557ffd4..4ee33d81b477 100644 --- a/charts/cluster-autoscaler/templates/clusterrole.yaml +++ b/charts/cluster-autoscaler/templates/clusterrole.yaml @@ -151,7 +151,7 @@ rules: - cluster.x-k8s.io resources: - machinedeployments - - machinedeployments/scale + - machinepools - machines - machinesets verbs: @@ -159,5 +159,14 @@ rules: - list - update - watch + - apiGroups: + - cluster.x-k8s.io + resources: + - machinedeployments/scale + - machinepools/scale + verbs: + - get + - patch + - update {{- end }} {{- end -}} diff --git a/charts/cluster-autoscaler/templates/deployment.yaml b/charts/cluster-autoscaler/templates/deployment.yaml index ea5ba5c41d50..957398a3bd69 100644 --- a/charts/cluster-autoscaler/templates/deployment.yaml +++ b/charts/cluster-autoscaler/templates/deployment.yaml @@ -132,36 +132,36 @@ spec: valueFrom: secretKeyRef: key: AwsAccessKeyId - name: {{ template "cluster-autoscaler.fullname" . }} + name: {{ default (include "cluster-autoscaler.fullname" .) .Values.secretKeyRefNameOverride }} {{- end }} {{- if .Values.awsSecretAccessKey }} - name: AWS_SECRET_ACCESS_KEY valueFrom: secretKeyRef: key: AwsSecretAccessKey - name: {{ template "cluster-autoscaler.fullname" . }} + name: {{ default (include "cluster-autoscaler.fullname" .) .Values.secretKeyRefNameOverride }} {{- end }} {{- else if eq .Values.cloudProvider "azure" }} - name: ARM_SUBSCRIPTION_ID valueFrom: secretKeyRef: key: SubscriptionID - name: {{ template "cluster-autoscaler.fullname" . }} + name: {{ default (include "cluster-autoscaler.fullname" .) .Values.secretKeyRefNameOverride }} - name: ARM_RESOURCE_GROUP valueFrom: secretKeyRef: key: ResourceGroup - name: {{ template "cluster-autoscaler.fullname" . 
}} + name: {{ default (include "cluster-autoscaler.fullname" .) .Values.secretKeyRefNameOverride }} - name: ARM_VM_TYPE valueFrom: secretKeyRef: key: VMType - name: {{ template "cluster-autoscaler.fullname" . }} + name: {{ default (include "cluster-autoscaler.fullname" .) .Values.secretKeyRefNameOverride }} - name: AZURE_CLUSTER_NAME valueFrom: secretKeyRef: key: ClusterName - name: {{ template "cluster-autoscaler.fullname" . }} + name: {{ default (include "cluster-autoscaler.fullname" .) .Values.secretKeyRefNameOverride }} {{- if .Values.azureUseWorkloadIdentityExtension }} - name: ARM_USE_WORKLOAD_IDENTITY_EXTENSION value: "true" @@ -173,22 +173,22 @@ spec: valueFrom: secretKeyRef: key: TenantID - name: {{ template "cluster-autoscaler.fullname" . }} + name: {{ default (include "cluster-autoscaler.fullname" .) .Values.secretKeyRefNameOverride }} - name: ARM_CLIENT_ID valueFrom: secretKeyRef: key: ClientID - name: {{ template "cluster-autoscaler.fullname" . }} + name: {{ default (include "cluster-autoscaler.fullname" .) .Values.secretKeyRefNameOverride }} - name: ARM_CLIENT_SECRET valueFrom: secretKeyRef: key: ClientSecret - name: {{ template "cluster-autoscaler.fullname" . }} + name: {{ default (include "cluster-autoscaler.fullname" .) .Values.secretKeyRefNameOverride }} - name: AZURE_NODE_RESOURCE_GROUP valueFrom: secretKeyRef: key: NodeResourceGroup - name: {{ template "cluster-autoscaler.fullname" . }} + name: {{ default (include "cluster-autoscaler.fullname" .) 
.Values.secretKeyRefNameOverride }} {{- end }} {{- end }} {{- range $key, $value := .Values.extraEnv }} diff --git a/charts/cluster-autoscaler/templates/role.yaml b/charts/cluster-autoscaler/templates/role.yaml index b22fb58be8a4..44b1678af4d8 100644 --- a/charts/cluster-autoscaler/templates/role.yaml +++ b/charts/cluster-autoscaler/templates/role.yaml @@ -49,7 +49,7 @@ rules: - cluster.x-k8s.io resources: - machinedeployments - - machinedeployments/scale + - machinepools - machines - machinesets verbs: @@ -57,6 +57,15 @@ rules: - list - update - watch + - apiGroups: + - cluster.x-k8s.io + resources: + - machinedeployments/scale + - machinepools/scale + verbs: + - get + - patch + - update {{- end }} {{- if ( not .Values.rbac.clusterScoped ) }} - apiGroups: diff --git a/charts/cluster-autoscaler/values.yaml b/charts/cluster-autoscaler/values.yaml index 2f3e5cf609e3..a7deb24a7e70 100644 --- a/charts/cluster-autoscaler/values.yaml +++ b/charts/cluster-autoscaler/values.yaml @@ -230,7 +230,7 @@ image: # image.repository -- Image repository repository: registry.k8s.io/autoscaling/cluster-autoscaler # image.tag -- Image tag - tag: v1.26.2 + tag: v1.27.2 # image.pullPolicy -- Image pull policy pullPolicy: IfNotPresent ## Optionally specify an array of imagePullSecrets. @@ -396,3 +396,6 @@ vpa: updateMode: "Auto" # vpa.containerPolicy -- [ContainerResourcePolicy](https://github.com/kubernetes/autoscaler/blob/vertical-pod-autoscaler/v0.13.0/vertical-pod-autoscaler/pkg/apis/autoscaling.k8s.io/v1/types.go#L159). The containerName is always set to the deployment's container name. This value is required if VPA is enabled.
containerPolicy: {} + +# secretKeyRefNameOverride -- Overrides the name of the Secret to use when loading the secretKeyRef for AWS and Azure env variables +secretKeyRefNameOverride: "" diff --git a/cluster-autoscaler/Dockerfile.s390x b/cluster-autoscaler/Dockerfile.s390x new file mode 100644 index 000000000000..07ed8daf9ded --- /dev/null +++ b/cluster-autoscaler/Dockerfile.s390x @@ -0,0 +1,20 @@ +# Copyright 2016 The Kubernetes Authors. All rights reserved +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +ARG BASEIMAGE=gcr.io/distroless/static:nonroot-s390x +FROM $BASEIMAGE +LABEL maintainer="Marcin Wielgus " + +COPY cluster-autoscaler-s390x /cluster-autoscaler +WORKDIR / +CMD ["/cluster-autoscaler"] diff --git a/cluster-autoscaler/FAQ.md b/cluster-autoscaler/FAQ.md index 52a82fe1814a..b81f4d482f07 100644 --- a/cluster-autoscaler/FAQ.md +++ b/cluster-autoscaler/FAQ.md @@ -95,7 +95,7 @@ Cluster Autoscaler decreases the size of the cluster when some nodes are consist * are not run on the node by default, * * don't have a [pod disruption budget](https://kubernetes.io/docs/concepts/workloads/pods/disruptions/#how-disruption-budgets-work) set or their PDB is too restrictive (since CA 0.6). * Pods that are not backed by a controller object (so not created by deployment, replica set, job, stateful set etc). * -* Pods with local storage. * +* Pods with local storage **. 
* - unless the pod has the following annotation set: ``` "cluster-autoscaler.kubernetes.io/safe-to-evict-local-volumes": "volume-1,volume-2,.." @@ -115,6 +115,13 @@ matching anti-affinity, etc) __Or__ you have overridden this behaviour with one of the relevant flags. [See below for more information on these flags.](#what-are-the-parameters-to-ca) +**Local storage in this case considers a Volume configured with properties making it a local Volume, such as the following examples: + +* [`hostPath`](https://kubernetes.io/docs/concepts/storage/volumes/#hostpath) +* [`emptyDir`](https://kubernetes.io/docs/concepts/storage/volumes/#emptydir) which does **not** use "Memory" for its `emptyDir.medium` field + +ConfigMaps, Secrets, Projected volumes and emptyDir with `medium=Memory` are not considered local storage. + ### Which version of Cluster Autoscaler should I use in my cluster? See [Cluster Autoscaler Releases](https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler#releases). @@ -125,8 +132,8 @@ Since version 1.0.0 we consider CA as GA. It means that: * We have enough confidence that it does what it is expected to do. Each commit goes through a big suite of unit tests with more than 75% coverage (on average). We have a series of e2e tests that validate that CA works well on - [GCE](https://k8s-testgrid.appspot.com/sig-autoscaling#gce-autoscaling) - and [GKE](https://k8s-testgrid.appspot.com/sig-autoscaling#gke-autoscaling). + [GCE](https://testgrid.k8s.io/sig-autoscaling#gce-autoscaling) + and [GKE](https://testgrid.k8s.io/sig-autoscaling#gke-autoscaling). Due to the missing testing infrastructure, AWS (or any other cloud provider) compatibility tests are not part of the standard development or release procedure. However, there are a number of AWS users who run CA in their production environment and submit new code, patches and bug reports.
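As a worked illustration of the `safe-to-evict-local-volumes` annotation quoted in the FAQ hunk above (pod, container, and volume names here are hypothetical), a pod whose `emptyDir` scratch space should not block scale-down could be declared as:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: scratch-worker                # hypothetical name
  annotations:
    # Cluster Autoscaler may evict this pod even though "scratch" is local storage.
    cluster-autoscaler.kubernetes.io/safe-to-evict-local-volumes: "scratch"
spec:
  containers:
    - name: worker
      image: busybox
      command: ["sleep", "infinity"]
      volumeMounts:
        - name: scratch
          mountPath: /scratch
  volumes:
    - name: scratch
      emptyDir: {}                    # non-Memory emptyDir counts as local storage
```

Without the annotation, the non-Memory `emptyDir` would make CA treat this pod as having local storage and refuse to drain its node.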
@@ -862,7 +869,7 @@ Events: ``` This limitation was solved with -[volume topological scheduling](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/storage/volume-topology-scheduling.md) +[volume topological scheduling](https://github.com/kubernetes/design-proposals-archive/blob/main/storage/volume-topology-scheduling.md) introduced as beta in Kubernetes 1.11 and planned for GA in 1.13. To allow CA to take advantage of topological scheduling, use separate node groups per zone. This way CA knows exactly which node group will create nodes in the required zone rather than relying on the cloud provider choosing a zone for a new node in a multi-zone node group. diff --git a/cluster-autoscaler/Makefile b/cluster-autoscaler/Makefile index 70ff86998ef5..883adc2f5fb2 100644 --- a/cluster-autoscaler/Makefile +++ b/cluster-autoscaler/Makefile @@ -1,4 +1,4 @@ -ALL_ARCH = amd64 arm64 +ALL_ARCH = amd64 arm64 s390x all: $(addprefix build-arch-,$(ALL_ARCH)) TAG?=dev diff --git a/cluster-autoscaler/OWNERS b/cluster-autoscaler/OWNERS index 63c9ef54d474..d96aa261a525 100644 --- a/cluster-autoscaler/OWNERS +++ b/cluster-autoscaler/OWNERS @@ -1,4 +1,5 @@ approvers: +- BigDarkClown - feiskyer - towca - x13n diff --git a/cluster-autoscaler/README.md b/cluster-autoscaler/README.md index d8e6b82b98af..3217828c8829 100644 --- a/cluster-autoscaler/README.md +++ b/cluster-autoscaler/README.md @@ -48,6 +48,8 @@ Starting from Kubernetes 1.12, versioning scheme was changed to match Kubernetes | Kubernetes Version | CA Version | |--------|--------| +| 1.28.X | 1.28.X | +| 1.27.X | 1.27.X | | 1.26.X | 1.26.X | | 1.25.X | 1.25.X | | 1.24.X | 1.24.X | @@ -85,10 +87,11 @@ target ETA and the actual releases. 
| Date | Maintainer Preparing Release | Backup Maintainer | |------------|------------------------------|-------------------| | 2023-03-15 | MaciekPytel | gjtempleton | -| 2023-05-17 | gjtempleton | towca | -| 2023-07-19 | towca | x13n | -| 2023-09-13 | x13n | MaciekPytel | -| 2023-11-15 | MaciekPytel | gjtempleton | +| 2023-05-17 | gjtempleton | BigDarkClown | +| 2023-07-19 | BigDarkClown | towca | +| 2023-09-13 | towca | x13n | +| 2023-11-15 | x13n | MaciekPytel | +| 2024-01-17 | MaciekPytel | gjtempleton | Additional patch releases may happen outside of the schedule in case of critical bugs or vulnerabilities. @@ -99,6 +102,7 @@ Starting with Gardener/Autoscaler v1.20, versioning scheme has changed to match | Kubernetes Version | CA Version | Gardener CA Version | |--------------------|------------|---------------------| +| 1.28.X | 1.28.X | 1.28.X | | 1.27.X | 1.27.X | 1.27.X | | 1.26.X | 1.26.X | 1.26.X | | 1.25.X | 1.25.X | 1.25.X | diff --git a/cluster-autoscaler/SYNC-CHANGES/SYNC_CHANGES-1.28.md b/cluster-autoscaler/SYNC-CHANGES/SYNC_CHANGES-1.28.md new file mode 100644 index 000000000000..d682e52830a6 --- /dev/null +++ b/cluster-autoscaler/SYNC-CHANGES/SYNC_CHANGES-1.28.md @@ -0,0 +1,41 @@ + + +- [v1.28.0](#v1280) + - [Synced with which upstream CA](#synced-with-which-upstream-ca) + - [Changes made](#changes-made) + - [During merging](#during-merging) + - [During vendoring k8s](#during-vendoring-k8s) + - [Others](#others) + + +# v1.28.0 + + +## Synced with which upstream CA + +[v1.28.0](https://github.com/kubernetes/autoscaler/tree/cluster-autoscaler-1.28.0/cluster-autoscaler) + +## Changes made + - See general release notes of 1.28.0: https://github.com/kubernetes/autoscaler/releases/tag/cluster-autoscaler-1.28.0 + - New flag added: flag.String(config.SchedulerConfigFileFlag, "", "scheduler-config allows changing configuration of in-tree scheduler plugins acting on PreFilter and Filter extension points") + - The following options have been added per node
group + ``` + // ZeroOrMaxNodeScaling means that a node group should be scaled up to maximum size or down to zero nodes all at once instead of one-by-one. + ZeroOrMaxNodeScaling bool + // IgnoreDaemonSetsUtilization sets if daemonsets utilization should be considered during node scale-down + IgnoreDaemonSetsUtilization bool + ``` + +### During merging + - Log message for the `scale up not possible case` was updated and an integration test that depended on it was updated + +### During vendoring k8s +- mcm v0.50.0 -> 0.50.1 +- mcm-provider-aws v0.17.0 -> v0.19.2 +- mcm-provider-azure v0.10.0 -> v0.11.1 + +### Others +- [Release matrix](../README.md#releases-gardenerautoscaler) of Gardener Autoscaler updated. +- The `max-empty-bulk-delete` flag will be deprecated in k8s version 1.29. Please use `max-scale-down-parallelism` instead. +- `parallelDrain` flag will be removed in future releases. +- Parallel node group scale ups are now supported (ref: https://github.com/gardener/autoscaler/issues/268) \ No newline at end of file diff --git a/cluster-autoscaler/cloudprovider/alicloud/OWNERS b/cluster-autoscaler/cloudprovider/alicloud/OWNERS index 0ef765c6254a..a853ada74cdf 100644 --- a/cluster-autoscaler/cloudprovider/alicloud/OWNERS +++ b/cluster-autoscaler/cloudprovider/alicloud/OWNERS @@ -2,3 +2,7 @@ approvers: - ringtail reviewers: - ringtail + + +labels: +- area/provider/alicloud diff --git a/cluster-autoscaler/cloudprovider/alicloud/README.md b/cluster-autoscaler/cloudprovider/alicloud/README.md index 2878147033fe..f67db568ef09 100644 --- a/cluster-autoscaler/cloudprovider/alicloud/README.md +++ b/cluster-autoscaler/cloudprovider/alicloud/README.md @@ -12,7 +12,7 @@ Cluster autoscaler must run on v1.9.3 or greater. 
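The RRSA (OIDC) authentication that this patch wires into the alicloud provider further down is configured entirely through environment variables (`ALICLOUD_OIDC_PROVIDER_ARN`, `ALICLOUD_OIDC_TOKEN_FILE_PATH`, `ALICLOUD_ROLE_ARN`, `ALICLOUD_SESSION_NAME`, `REGION_ID`). A hypothetical container `env` fragment for the autoscaler deployment might look like this (ARNs and paths are placeholders):

```yaml
env:
  - name: REGION_ID
    value: cn-hangzhou
  - name: ALICLOUD_OIDC_PROVIDER_ARN
    value: "acs:ram::12345:oidc-provider/ack-rrsa-cb123"   # placeholder ARN
  - name: ALICLOUD_ROLE_ARN
    value: "acs:ram::12345:role/autoscaler-role"           # placeholder ARN
  - name: ALICLOUD_OIDC_TOKEN_FILE_PATH
    value: /var/run/secrets/tokens/oidc-token
  - name: ALICLOUD_SESSION_NAME
    value: cluster-autoscaler
```

When all five variables resolve to non-empty values, `cloudConfig.isValid` enables RRSA instead of static AccessKey credentials.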
## ACS Console Deployment -doc: https://www.alibabacloud.com/help/doc-detail/89733.html +doc: https://www.alibabacloud.com/help/en/container-service-for-kubernetes/latest/auto-scaling-of-nodes ## Custom Deployment ### 1.Prepare Identity authentication @@ -119,9 +119,9 @@ spec: key: access-key-secret - name: REGION_ID valueFrom: - secretKeyRef: - name: cloud-config - key: region-id + secretKeyRef: + name: cloud-config + key: region-id volumeMounts: - name: ssl-certs mountPath: /etc/ssl/certs/ca-certificates.crt diff --git a/cluster-autoscaler/cloudprovider/alicloud/alibaba-cloud-sdk-go/sdk/auth/credentials/oidc_credential.go b/cluster-autoscaler/cloudprovider/alicloud/alibaba-cloud-sdk-go/sdk/auth/credentials/oidc_credential.go new file mode 100644 index 000000000000..2d23de3cd98b --- /dev/null +++ b/cluster-autoscaler/cloudprovider/alicloud/alibaba-cloud-sdk-go/sdk/auth/credentials/oidc_credential.go @@ -0,0 +1,37 @@ +/* +Copyright 2018 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. 
+*/ + +package credentials + +// OIDCCredential is a kind of credentials +type OIDCCredential struct { + RoleArn string + OIDCProviderArn string + OIDCTokenFilePath string + RoleSessionName string + RoleSessionExpiration int +} + +// NewOIDCRoleArnCredential returns OIDCCredential +func NewOIDCRoleArnCredential(roleArn, OIDCProviderArn, OIDCTokenFilePath, RoleSessionName string, RoleSessionExpiration int) *OIDCCredential { + return &OIDCCredential{ + RoleArn: roleArn, + OIDCProviderArn: OIDCProviderArn, + OIDCTokenFilePath: OIDCTokenFilePath, + RoleSessionName: RoleSessionName, + RoleSessionExpiration: RoleSessionExpiration, + } +} diff --git a/cluster-autoscaler/cloudprovider/alicloud/alibaba-cloud-sdk-go/sdk/auth/signer.go b/cluster-autoscaler/cloudprovider/alicloud/alibaba-cloud-sdk-go/sdk/auth/signer.go index 424db3b3cd3d..3f732a36f7f5 100644 --- a/cluster-autoscaler/cloudprovider/alicloud/alibaba-cloud-sdk-go/sdk/auth/signer.go +++ b/cluster-autoscaler/cloudprovider/alicloud/alibaba-cloud-sdk-go/sdk/auth/signer.go @@ -61,6 +61,10 @@ func NewSignerWithCredential(credential Credential, commonApi func(request *requ { signer, err = signers.NewEcsRamRoleSigner(instance, commonApi) } + case *credentials.OIDCCredential: + { + signer, err = signers.NewOIDCSigner(instance) + } case *credentials.BaseCredential: // deprecated user interface { signer, err = signers.NewAccessKeySigner(instance.ToAccessKeyCredential()) diff --git a/cluster-autoscaler/cloudprovider/alicloud/alibaba-cloud-sdk-go/sdk/auth/signers/signer_oidc.go b/cluster-autoscaler/cloudprovider/alicloud/alibaba-cloud-sdk-go/sdk/auth/signers/signer_oidc.go new file mode 100644 index 000000000000..930e9612e4dc --- /dev/null +++ b/cluster-autoscaler/cloudprovider/alicloud/alibaba-cloud-sdk-go/sdk/auth/signers/signer_oidc.go @@ -0,0 +1,249 @@ +/* +Copyright 2018 The Kubernetes Authors. 
+ +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package signers + +import ( + "encoding/json" + "fmt" + "github.com/jmespath/go-jmespath" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/alicloud/alibaba-cloud-sdk-go/sdk/auth/credentials" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/alicloud/alibaba-cloud-sdk-go/sdk/errors" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/alicloud/alibaba-cloud-sdk-go/sdk/requests" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/alicloud/alibaba-cloud-sdk-go/sdk/responses" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/alicloud/alibaba-cloud-sdk-go/sdk/utils" + "k8s.io/klog/v2" + "net/http" + "os" + "runtime" + "strconv" + "strings" + "time" +) + +const ( + defaultOIDCDurationSeconds = 3600 +) + +// OIDCSigner is kind of signer +type OIDCSigner struct { + *credentialUpdater + roleSessionName string + sessionCredential *SessionCredential + credential *credentials.OIDCCredential +} + +// NewOIDCSigner returns OIDCSigner +func NewOIDCSigner(credential *credentials.OIDCCredential) (signer *OIDCSigner, err error) { + signer = &OIDCSigner{ + credential: credential, + } + + signer.credentialUpdater = &credentialUpdater{ + credentialExpiration: credential.RoleSessionExpiration, + buildRequestMethod: signer.buildCommonRequest, + responseCallBack: signer.refreshCredential, + refreshApi: signer.refreshApi, + } + + if len(credential.RoleSessionName) > 0 { + signer.roleSessionName = credential.RoleSessionName + } else { + 
signer.roleSessionName = "kubernetes-cluster-autoscaler-" + strconv.FormatInt(time.Now().UnixNano()/1000, 10) + } + if credential.RoleSessionExpiration > 0 { + if credential.RoleSessionExpiration >= 900 && credential.RoleSessionExpiration <= 3600 { + signer.credentialExpiration = credential.RoleSessionExpiration + } else { + err = errors.NewClientError(errors.InvalidParamErrorCode, "Assume Role session duration should be in the range of 15min - 1Hr", nil) + } + } else { + signer.credentialExpiration = defaultOIDCDurationSeconds + } + return +} + +// GetName returns "HMAC-SHA1" +func (*OIDCSigner) GetName() string { + return "HMAC-SHA1" +} + +// GetType returns "" +func (*OIDCSigner) GetType() string { + return "" +} + +// GetVersion returns "1.0" +func (*OIDCSigner) GetVersion() string { + return "1.0" +} + +// GetAccessKeyId returns accessKeyId +func (signer *OIDCSigner) GetAccessKeyId() (accessKeyId string, err error) { + if signer.sessionCredential == nil || signer.needUpdateCredential() { + err = signer.updateCredential() + } + if err != nil && (signer.sessionCredential == nil || len(signer.sessionCredential.AccessKeyId) <= 0) { + return "", err + } + + return signer.sessionCredential.AccessKeyId, nil +} + +// GetExtraParam returns params +func (signer *OIDCSigner) GetExtraParam() map[string]string { + if signer.sessionCredential == nil || signer.needUpdateCredential() { + signer.updateCredential() + } + if signer.sessionCredential == nil || len(signer.sessionCredential.StsToken) <= 0 { + return make(map[string]string) + } + + return map[string]string{"SecurityToken": signer.sessionCredential.StsToken} +} + +// Sign create signer +func (signer *OIDCSigner) Sign(stringToSign, secretSuffix string) string { + secret := signer.sessionCredential.AccessKeySecret + secretSuffix + return ShaHmac1(stringToSign, secret) +} + +func (signer *OIDCSigner) buildCommonRequest() (request *requests.CommonRequest, err error) { + const endpoint = "sts.aliyuncs.com" + const 
stsApiVersion = "2015-04-01" + const action = "AssumeRoleWithOIDC" + request = requests.NewCommonRequest() + request.Scheme = requests.HTTPS + request.Domain = endpoint + request.Method = requests.POST + request.QueryParams["Action"] = action + request.QueryParams["Version"] = stsApiVersion + request.QueryParams["Format"] = "JSON" + request.QueryParams["Timestamp"] = utils.GetTimeInFormatISO8601() + request.QueryParams["SignatureNonce"] = utils.GetUUIDV4() + request.FormParams["RoleArn"] = signer.credential.RoleArn + request.FormParams["OIDCProviderArn"] = signer.credential.OIDCProviderArn + request.FormParams["OIDCToken"] = signer.getOIDCToken(signer.credential.OIDCTokenFilePath) + request.QueryParams["RoleSessionName"] = signer.credential.RoleSessionName + request.Headers["host"] = endpoint + request.Headers["Accept-Encoding"] = "identity" + request.Headers["content-type"] = "application/x-www-form-urlencoded" + request.Headers["user-agent"] = fmt.Sprintf("AlibabaCloud (%s; %s) Golang/%s Core/%s TeaDSL/1 kubernetes-cluster-autoscaler", runtime.GOOS, runtime.GOARCH, strings.Trim(runtime.Version(), "go"), "0.01") + return +} + +func (signer *OIDCSigner) getOIDCToken(OIDCTokenFilePath string) string { + tokenPath := OIDCTokenFilePath + _, err := os.Stat(tokenPath) + if os.IsNotExist(err) { + tokenPath = os.Getenv("ALIBABA_CLOUD_OIDC_TOKEN_FILE") + if tokenPath == "" { + klog.Error("oidc token file path is missing") + return "" + } + } + + token, err := os.ReadFile(tokenPath) + if err != nil { + klog.Errorf("get oidc token from file %s failed: %s", tokenPath, err) + return "" + } + return string(token) +} + +func (signer *OIDCSigner) refreshApi(request *requests.CommonRequest) (response *responses.CommonResponse, err error) { + body := utils.GetUrlFormedMap(request.FormParams) + httpRequest, err := http.NewRequest(request.Method, fmt.Sprintf("%s://%s/?%s", strings.ToLower(request.Scheme), request.Domain, utils.GetUrlFormedMap(request.QueryParams)), 
strings.NewReader(body)) + if err != nil { + klog.Errorf("refresh RRSA token failed: %s", err) + return + } + + httpRequest.Proto = "HTTP/1.1" + httpRequest.Host = request.Domain + for k, v := range request.Headers { + httpRequest.Header.Add(k, v) + } + + httpClient := &http.Client{} + httpResponse, err := httpClient.Do(httpRequest) + if err != nil { + klog.Errorf("refresh RRSA token failed: %s", err) + return + } + + response = responses.NewCommonResponse() + err = responses.Unmarshal(response, httpResponse, "") + + return +} + +func (signer *OIDCSigner) refreshCredential(response *responses.CommonResponse) (err error) { + if response.GetHttpStatus() != http.StatusOK { + message := "refresh RRSA failed" + err = errors.NewServerError(response.GetHttpStatus(), response.GetHttpContentString(), message) + return + } + + var data interface{} + err = json.Unmarshal(response.GetHttpContentBytes(), &data) + if err != nil { + klog.Errorf("refresh RRSA token err, json.Unmarshal fail: %s", err) + return + } + accessKeyId, err := jmespath.Search("Credentials.AccessKeyId", data) + if err != nil { + klog.Errorf("refresh RRSA token err, fail to get AccessKeyId: %s", err) + return + } + accessKeySecret, err := jmespath.Search("Credentials.AccessKeySecret", data) + if err != nil { + klog.Errorf("refresh RRSA token err, fail to get AccessKeySecret: %s", err) + return + } + securityToken, err := jmespath.Search("Credentials.SecurityToken", data) + if err != nil { + klog.Errorf("refresh RRSA token err, fail to get SecurityToken: %s", err) + return + } + expiration, err := jmespath.Search("Credentials.Expiration", data) + if err != nil { + klog.Errorf("refresh RRSA token err, fail to get Expiration: %s", err) + return + } + + if accessKeyId == nil || accessKeySecret == nil || securityToken == nil { + return + } + + expirationTime, err := time.Parse("2006-01-02T15:04:05Z", expiration.(string)) + signer.credentialExpiration = int(expirationTime.Unix() - time.Now().Unix()) + 
signer.sessionCredential = &SessionCredential{ + AccessKeyId: accessKeyId.(string), + AccessKeySecret: accessKeySecret.(string), + StsToken: securityToken.(string), + } + + return +} + +// GetSessionCredential returns SessionCredential +func (signer *OIDCSigner) GetSessionCredential() *SessionCredential { + return signer.sessionCredential +} + +// Shutdown doesn't implement +func (signer *OIDCSigner) Shutdown() {} diff --git a/cluster-autoscaler/cloudprovider/alicloud/alibaba-cloud-sdk-go/sdk/client.go b/cluster-autoscaler/cloudprovider/alicloud/alibaba-cloud-sdk-go/sdk/client.go index 9ba0123d0475..9870f1210587 100644 --- a/cluster-autoscaler/cloudprovider/alicloud/alibaba-cloud-sdk-go/sdk/client.go +++ b/cluster-autoscaler/cloudprovider/alicloud/alibaba-cloud-sdk-go/sdk/client.go @@ -150,6 +150,18 @@ func (client *Client) InitWithEcsRamRole(regionId, roleName string) (err error) return client.InitWithOptions(regionId, config, credential) } +// InitWithRRSA need regionId,roleARN,oidcProviderARN,oidcTokenFilePath and roleSessionName +func (client *Client) InitWithRRSA(regionId, roleARN, oidcProviderARN, oidcTokenFilePath, roleSessionName string) (err error) { + config := client.InitClientConfig() + credential := &credentials.OIDCCredential{ + RoleArn: roleARN, + OIDCProviderArn: oidcProviderARN, + OIDCTokenFilePath: oidcTokenFilePath, + RoleSessionName: roleSessionName, + } + return client.InitWithOptions(regionId, config, credential) +} + // InitClientConfig init client config func (client *Client) InitClientConfig() (config *Config) { if client.config != nil { @@ -395,6 +407,13 @@ func NewClientWithEcsRamRole(regionId string, roleName string) (client *Client, return } +// NewClientWithRRSA create client with RRSA on ECS +func NewClientWithRRSA(regionId, roleARN, oidcProviderARN, oidcTokenFilePath, roleSessionName string) (client *Client, err error) { + client = &Client{} + err = client.InitWithRRSA(regionId, roleARN, oidcProviderARN, oidcTokenFilePath, 
roleSessionName) + return +} + // NewClientWithRsaKeyPair create client with key-pair func NewClientWithRsaKeyPair(regionId string, publicKeyId, privateKey string, sessionExpiration int) (client *Client, err error) { client = &Client{} diff --git a/cluster-autoscaler/cloudprovider/alicloud/alibaba-cloud-sdk-go/sdk/client_test.go b/cluster-autoscaler/cloudprovider/alicloud/alibaba-cloud-sdk-go/sdk/client_test.go new file mode 100644 index 000000000000..5b1ca237b5c1 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/alicloud/alibaba-cloud-sdk-go/sdk/client_test.go @@ -0,0 +1,35 @@ +/* +Copyright 2018 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. 
+*/ + +package sdk + +import ( + "github.com/stretchr/testify/assert" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/alicloud/alibaba-cloud-sdk-go/sdk/auth/signers" + "testing" +) + +func TestRRSAClientInit(t *testing.T) { + oidcProviderARN := "acs:ram::12345:oidc-provider/ack-rrsa-cb123" + oidcTokenFilePath := "/var/run/secrets/tokens/oidc-token" + roleARN := "acs:ram::12345:role/autoscaler-role" + roleSessionName := "session" + regionId := "cn-hangzhou" + + client, err := NewClientWithRRSA(regionId, roleARN, oidcProviderARN, oidcTokenFilePath, roleSessionName) + assert.NoError(t, err) + assert.IsType(t, &signers.OIDCSigner{}, client.signer) +} diff --git a/cluster-autoscaler/cloudprovider/alicloud/alibaba-cloud-sdk-go/services/ecs/client.go b/cluster-autoscaler/cloudprovider/alicloud/alibaba-cloud-sdk-go/services/ecs/client.go index 3d43f1ff45df..170830bfe22f 100644 --- a/cluster-autoscaler/cloudprovider/alicloud/alibaba-cloud-sdk-go/services/ecs/client.go +++ b/cluster-autoscaler/cloudprovider/alicloud/alibaba-cloud-sdk-go/services/ecs/client.go @@ -80,3 +80,10 @@ func NewClientWithRsaKeyPair(regionId string, publicKeyId, privateKey string, se err = client.InitWithRsaKeyPair(regionId, publicKeyId, privateKey, sessionExpiration) return } + +// NewClientWithRRSA is a shortcut to create sdk client with RRSA +func NewClientWithRRSA(regionId, roleARN, oidcProviderARN, oidcTokenFilePath, roleSessionName string) (client *Client, err error) { + client = &Client{} + err = client.InitWithRRSA(regionId, roleARN, oidcProviderARN, oidcTokenFilePath, roleSessionName) + return +} diff --git a/cluster-autoscaler/cloudprovider/alicloud/alibaba-cloud-sdk-go/services/ess/client.go b/cluster-autoscaler/cloudprovider/alicloud/alibaba-cloud-sdk-go/services/ess/client.go index cf19bb4bfdce..3be994bffe65 100755 --- a/cluster-autoscaler/cloudprovider/alicloud/alibaba-cloud-sdk-go/services/ess/client.go +++ 
b/cluster-autoscaler/cloudprovider/alicloud/alibaba-cloud-sdk-go/services/ess/client.go @@ -80,3 +80,10 @@ func NewClientWithRsaKeyPair(regionId string, publicKeyId, privateKey string, se err = client.InitWithRsaKeyPair(regionId, publicKeyId, privateKey, sessionExpiration) return } + +// NewClientWithRRSA is a shortcut to create sdk client with RRSA +func NewClientWithRRSA(regionId, roleARN, oidcProviderARN, oidcTokenFilePath, roleSessionName string) (client *Client, err error) { + client = &Client{} + err = client.InitWithRRSA(regionId, roleARN, oidcProviderARN, oidcTokenFilePath, roleSessionName) + return +} diff --git a/cluster-autoscaler/cloudprovider/alicloud/alicloud_auto_scaling.go b/cluster-autoscaler/cloudprovider/alicloud/alicloud_auto_scaling.go index 8ccadf572478..4f974be0082a 100644 --- a/cluster-autoscaler/cloudprovider/alicloud/alicloud_auto_scaling.go +++ b/cluster-autoscaler/cloudprovider/alicloud/alicloud_auto_scaling.go @@ -52,7 +52,7 @@ func newAutoScalingWrapper(cfg *cloudConfig) (*autoScalingWrapper, error) { asw := &autoScalingWrapper{ cfg: cfg, } - if cfg.STSEnabled == true { + if cfg.STSEnabled { go func(asw *autoScalingWrapper, cfg *cloudConfig) { timer := time.NewTicker(refreshClientInterval) defer timer.Stop() @@ -76,7 +76,7 @@ func newAutoScalingWrapper(cfg *cloudConfig) (*autoScalingWrapper, error) { func getEssClient(cfg *cloudConfig) (client *ess.Client, err error) { region := cfg.getRegion() - if cfg.STSEnabled == true { + if cfg.STSEnabled { auth, err := cfg.getSTSToken() if err != nil { klog.Errorf("Failed to get sts token from metadata,Because of %s", err.Error()) @@ -86,6 +86,11 @@ func getEssClient(cfg *cloudConfig) (client *ess.Client, err error) { if err != nil { klog.Errorf("Failed to create client with sts in metadata because of %s", err.Error()) } + } else if cfg.RRSAEnabled { + client, err = ess.NewClientWithRRSA(region, cfg.RoleARN, cfg.OIDCProviderARN, cfg.OIDCTokenFilePath, cfg.RoleSessionName) + if err != nil { + 
klog.Errorf("Failed to create ess client with RRSA, because of %s", err.Error()) + } } else { client, err = ess.NewClientWithAccessKey(region, cfg.AccessKeyID, cfg.AccessKeySecret) if err != nil { diff --git a/cluster-autoscaler/cloudprovider/alicloud/alicloud_auto_scaling_test.go b/cluster-autoscaler/cloudprovider/alicloud/alicloud_auto_scaling_test.go new file mode 100644 index 000000000000..409fc4871b8a --- /dev/null +++ b/cluster-autoscaler/cloudprovider/alicloud/alicloud_auto_scaling_test.go @@ -0,0 +1,38 @@ +/* +Copyright 2018 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. 
+*/ + +package alicloud + +import ( + "github.com/stretchr/testify/assert" + "testing" +) + +func TestRRSACloudConfigEssClientCreation(t *testing.T) { + t.Setenv(oidcProviderARN, "acs:ram::12345:oidc-provider/ack-rrsa-cb123") + t.Setenv(oidcTokenFilePath, "/var/run/secrets/tokens/oidc-token") + t.Setenv(roleARN, "acs:ram::12345:role/autoscaler-role") + t.Setenv(roleSessionName, "session") + t.Setenv(regionId, "cn-hangzhou") + + cfg := &cloudConfig{} + assert.True(t, cfg.isValid()) + assert.True(t, cfg.RRSAEnabled) + + client, err := getEssClient(cfg) + assert.NoError(t, err) + assert.NotNil(t, client) +} diff --git a/cluster-autoscaler/cloudprovider/alicloud/alicloud_cloud_config.go b/cluster-autoscaler/cloudprovider/alicloud/alicloud_cloud_config.go index 36d518dac520..9a624459163d 100644 --- a/cluster-autoscaler/cloudprovider/alicloud/alicloud_cloud_config.go +++ b/cluster-autoscaler/cloudprovider/alicloud/alicloud_cloud_config.go @@ -18,21 +18,30 @@ package alicloud import ( "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/alicloud/metadata" - klog "k8s.io/klog/v2" + "k8s.io/klog/v2" "os" ) const ( - accessKeyId = "ACCESS_KEY_ID" - accessKeySecret = "ACCESS_KEY_SECRET" - regionId = "REGION_ID" + accessKeyId = "ACCESS_KEY_ID" + accessKeySecret = "ACCESS_KEY_SECRET" + oidcProviderARN = "ALICLOUD_OIDC_PROVIDER_ARN" + oidcTokenFilePath = "ALICLOUD_OIDC_TOKEN_FILE_PATH" + roleARN = "ALICLOUD_ROLE_ARN" + roleSessionName = "ALICLOUD_SESSION_NAME" + regionId = "REGION_ID" ) type cloudConfig struct { - RegionId string - AccessKeyID string - AccessKeySecret string - STSEnabled bool + RegionId string + AccessKeyID string + AccessKeySecret string + OIDCProviderARN string + OIDCTokenFilePath string + RoleARN string + RoleSessionName string + RRSAEnabled bool + STSEnabled bool } func (cc *cloudConfig) isValid() bool { @@ -48,18 +57,40 @@ func (cc *cloudConfig) isValid() bool { cc.RegionId = os.Getenv(regionId) } - if cc.RegionId == "" || cc.AccessKeyID == "" || 
cc.AccessKeySecret == "" { - klog.V(5).Infof("Failed to get AccessKeyId:%s,AccessKeySecret:%s,RegionId:%s from cloudConfig and Env\n", cc.AccessKeyID, cc.AccessKeySecret, cc.RegionId) + if cc.OIDCProviderARN == "" { + cc.OIDCProviderARN = os.Getenv(oidcProviderARN) + } + + if cc.OIDCTokenFilePath == "" { + cc.OIDCTokenFilePath = os.Getenv(oidcTokenFilePath) + } + + if cc.RoleARN == "" { + cc.RoleARN = os.Getenv(roleARN) + } + + if cc.RoleSessionName == "" { + cc.RoleSessionName = os.Getenv(roleSessionName) + } + + if cc.RegionId != "" && cc.AccessKeyID != "" && cc.AccessKeySecret != "" { + klog.V(2).Info("Using AccessKey authentication") + return true + } else if cc.RegionId != "" && cc.OIDCProviderARN != "" && cc.OIDCTokenFilePath != "" && cc.RoleARN != "" && cc.RoleSessionName != "" { + klog.V(2).Info("Using RRSA authentication") + cc.RRSAEnabled = true + return true + } else { + klog.V(5).Infof("Failed to get AccessKeyId:%s,RegionId:%s from cloudConfig and Env\n", cc.AccessKeyID, cc.RegionId) + klog.V(5).Infof("Failed to get OIDCProviderARN:%s,OIDCTokenFilePath:%s,RoleARN:%s,RoleSessionName:%s,RegionId:%s from cloudConfig and Env\n", cc.OIDCProviderARN, cc.OIDCTokenFilePath, cc.RoleARN, cc.RoleSessionName, cc.RegionId) klog.V(5).Infof("Try to use sts token in metadata instead.\n") - if cc.validateSTSToken() == true && cc.getRegion() != "" { + if cc.validateSTSToken() && cc.getRegion() != "" { //if CA is working on ECS with valid role name, use sts token instead. cc.STSEnabled = true return true } - } else { - cc.STSEnabled = false - return true } + return false } diff --git a/cluster-autoscaler/cloudprovider/alicloud/alicloud_cloud_config_test.go b/cluster-autoscaler/cloudprovider/alicloud/alicloud_cloud_config_test.go new file mode 100644 index 000000000000..4c828367c5f0 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/alicloud/alicloud_cloud_config_test.go @@ -0,0 +1,44 @@ +/* +Copyright 2018 The Kubernetes Authors. 
+ +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package alicloud + +import ( + "github.com/stretchr/testify/assert" + "testing" +) + +func TestAccessKeyCloudConfigIsValid(t *testing.T) { + t.Setenv(accessKeyId, "id") + t.Setenv(accessKeySecret, "secret") + t.Setenv(regionId, "cn-hangzhou") + + cfg := &cloudConfig{} + assert.True(t, cfg.isValid()) + assert.False(t, cfg.RRSAEnabled) +} + +func TestRRSACloudConfigIsValid(t *testing.T) { + t.Setenv(oidcProviderARN, "acs:ram::12345:oidc-provider/ack-rrsa-cb123") + t.Setenv(oidcTokenFilePath, "/var/run/secrets/tokens/oidc-token") + t.Setenv(roleARN, "acs:ram::12345:role/autoscaler-role") + t.Setenv(roleSessionName, "session") + t.Setenv(regionId, "cn-hangzhou") + + cfg := &cloudConfig{} + assert.True(t, cfg.isValid()) + assert.True(t, cfg.RRSAEnabled) +} diff --git a/cluster-autoscaler/cloudprovider/alicloud/alicloud_instance_types.go b/cluster-autoscaler/cloudprovider/alicloud/alicloud_instance_types.go index 3f09394d78b9..966d4b6ed00a 100644 --- a/cluster-autoscaler/cloudprovider/alicloud/alicloud_instance_types.go +++ b/cluster-autoscaler/cloudprovider/alicloud/alicloud_instance_types.go @@ -107,7 +107,7 @@ func newInstanceWrapper(cfg *cloudConfig) (*instanceWrapper, error) { return nil, fmt.Errorf("your cloud config is not valid") } iw := &instanceWrapper{} - if cfg.STSEnabled == true { + if cfg.STSEnabled { go func(iw *instanceWrapper, cfg *cloudConfig) { timer := time.NewTicker(refreshClientInterval) defer timer.Stop() @@ -141,6 
+141,11 @@ func getEcsClient(cfg *cloudConfig) (client *ecs.Client, err error) { if err != nil { klog.Errorf("failed to create client with sts in metadata,because of %s", err.Error()) } + } else if cfg.RRSAEnabled { + client, err = ecs.NewClientWithRRSA(region, cfg.RoleARN, cfg.OIDCProviderARN, cfg.OIDCTokenFilePath, cfg.RoleSessionName) + if err != nil { + klog.Errorf("Failed to create ecs client with RRSA, because of %s", err.Error()) + } } else { client, err = ecs.NewClientWithAccessKey(region, cfg.AccessKeyID, cfg.AccessKeySecret) if err != nil { diff --git a/cluster-autoscaler/cloudprovider/alicloud/alicloud_instance_types_test.go b/cluster-autoscaler/cloudprovider/alicloud/alicloud_instance_types_test.go new file mode 100644 index 000000000000..eaf5cdeec516 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/alicloud/alicloud_instance_types_test.go @@ -0,0 +1,38 @@ +/* +Copyright 2018 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. 
+*/ + +package alicloud + +import ( + "github.com/stretchr/testify/assert" + "testing" +) + +func TestRRSACloudConfigEcsClientCreation(t *testing.T) { + t.Setenv(oidcProviderARN, "acs:ram::12345:oidc-provider/ack-rrsa-cb123") + t.Setenv(oidcTokenFilePath, "/var/run/secrets/tokens/oidc-token") + t.Setenv(roleARN, "acs:ram::12345:role/autoscaler-role") + t.Setenv(roleSessionName, "session") + t.Setenv(regionId, "cn-hangzhou") + + cfg := &cloudConfig{} + assert.True(t, cfg.isValid()) + assert.True(t, cfg.RRSAEnabled) + + client, err := getEcsClient(cfg) + assert.NoError(t, err) + assert.NotNil(t, client) +} diff --git a/cluster-autoscaler/cloudprovider/alicloud/examples/cluster-autoscaler-rrsa-standard.yaml b/cluster-autoscaler/cloudprovider/alicloud/examples/cluster-autoscaler-rrsa-standard.yaml new file mode 100644 index 000000000000..bd7d3220ad4c --- /dev/null +++ b/cluster-autoscaler/cloudprovider/alicloud/examples/cluster-autoscaler-rrsa-standard.yaml @@ -0,0 +1,203 @@ +--- +apiVersion: v1 +kind: ServiceAccount +metadata: + labels: + k8s-addon: cluster-autoscaler.addons.k8s.io + k8s-app: cluster-autoscaler + name: cluster-autoscaler + namespace: kube-system +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + name: cluster-autoscaler + labels: + k8s-addon: cluster-autoscaler.addons.k8s.io + k8s-app: cluster-autoscaler +rules: + - apiGroups: [""] + resources: ["events", "endpoints"] + verbs: ["create", "patch"] + - apiGroups: [""] + resources: ["pods/eviction"] + verbs: ["create"] + - apiGroups: [""] + resources: ["pods/status"] + verbs: ["update"] + - apiGroups: [""] + resources: ["endpoints"] + resourceNames: ["cluster-autoscaler"] + verbs: ["get", "update"] + - apiGroups: [""] + resources: ["nodes"] + verbs: ["watch", "list", "get", "update"] + - apiGroups: [""] + resources: + - "namespaces" + - "pods" + - "services" + - "replicationcontrollers" + - "persistentvolumeclaims" + - "persistentvolumes" + verbs: ["watch", "list", "get"] + - 
apiGroups: ["extensions"] + resources: ["replicasets", "daemonsets"] + verbs: ["watch", "list", "get"] + - apiGroups: ["policy"] + resources: ["poddisruptionbudgets"] + verbs: ["watch", "list"] + - apiGroups: ["apps"] + resources: ["statefulsets", "replicasets", "daemonsets"] + verbs: ["watch", "list", "get"] + - apiGroups: ["storage.k8s.io"] + resources: ["storageclasses", "csinodes", "csistoragecapacities", "csidrivers"] + verbs: ["watch", "list", "get"] + - apiGroups: ["batch", "extensions"] + resources: ["jobs"] + verbs: ["get", "list", "watch", "patch"] + - apiGroups: ["coordination.k8s.io"] + resources: ["leases"] + verbs: ["*"] + +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: Role +metadata: + name: cluster-autoscaler + namespace: kube-system + labels: + k8s-addon: cluster-autoscaler.addons.k8s.io + k8s-app: cluster-autoscaler +rules: +- apiGroups: [""] + resources: ["configmaps"] + verbs: ["create","list","watch"] +- apiGroups: [""] + resources: ["configmaps"] + resourceNames: ["cluster-autoscaler-status", "cluster-autoscaler-priority-expander"] + verbs: ["delete","get","update","watch"] + +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + name: cluster-autoscaler + labels: + k8s-addon: cluster-autoscaler.addons.k8s.io + k8s-app: cluster-autoscaler +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: cluster-autoscaler +subjects: + - kind: ServiceAccount + name: cluster-autoscaler + namespace: kube-system + +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: RoleBinding +metadata: + name: cluster-autoscaler + namespace: kube-system + labels: + k8s-addon: cluster-autoscaler.addons.k8s.io + k8s-app: cluster-autoscaler +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: Role + name: cluster-autoscaler +subjects: + - kind: ServiceAccount + name: cluster-autoscaler + namespace: kube-system + +--- +apiVersion: v1 +kind: Secret +metadata: + name: cloud-config + namespace: kube-system +type: 
Opaque +data: + oidc-provider-arn: [YOUR_BASE64_OIDC_PROVIDER_ARN] + oidc-token-file-path: [YOUR_BASE64_OIDC_TOKEN_FILE_PATH] + role-arn: [YOUR_BASE64_ROLE_ARN] + session-name: [YOUR_BASE64_SESSION_NAME] + region-id: [YOUR_BASE64_REGION_ID] + +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: cluster-autoscaler + namespace: kube-system + labels: + app: cluster-autoscaler +spec: + replicas: 1 + selector: + matchLabels: + app: cluster-autoscaler + template: + metadata: + labels: + app: cluster-autoscaler + spec: + priorityClassName: system-cluster-critical + serviceAccountName: cluster-autoscaler + containers: + - image: registry.cn-hangzhou.aliyuncs.com/acs/autoscaler:v1.3.1 + name: cluster-autoscaler + resources: + limits: + cpu: 100m + memory: 300Mi + requests: + cpu: 100m + memory: 300Mi + command: + - ./cluster-autoscaler + - --v=4 + - --stderrthreshold=info + - --cloud-provider=alicloud + - --nodes=[min]:[max]:[ASG_ID] + imagePullPolicy: "Always" + env: + - name: ALICLOUD_OIDC_PROVIDER_ARN + valueFrom: + secretKeyRef: + name: cloud-config + key: oidc-provider-arn + - name: ALICLOUD_OIDC_TOKEN_FILE_PATH + valueFrom: + secretKeyRef: + name: cloud-config + key: oidc-token-file-path + - name: ALICLOUD_ROLE_ARN + valueFrom: + secretKeyRef: + name: cloud-config + key: role-arn + - name: ALICLOUD_SESSION_NAME + valueFrom: + secretKeyRef: + name: cloud-config + key: session-name + - name: REGION_ID + valueFrom: + secretKeyRef: + name: cloud-config + key: region-id + volumeMounts: + - name: oidc-token + mountPath: /var/run/secrets/tokens + volumes: + - name: oidc-token + projected: + sources: + - serviceAccountToken: + path: oidc-token + expirationSeconds: 7200 # The validity period of the OIDC token in seconds. 
+ audience: "sts.aliyuncs.com" diff --git a/cluster-autoscaler/cloudprovider/aws/OWNERS b/cluster-autoscaler/cloudprovider/aws/OWNERS index f7dbbbadc56f..0187682c2f9e 100644 --- a/cluster-autoscaler/cloudprovider/aws/OWNERS +++ b/cluster-autoscaler/cloudprovider/aws/OWNERS @@ -1,10 +1,12 @@ approvers: -- jaypipes - gjtempleton - drmorr0 emeritus_approvers: - Jeffwan -reviewers: - jaypipes +reviewers: - gjtempleton - drmorr0 + +labels: +- area/provider/aws diff --git a/cluster-autoscaler/cloudprovider/aws/README.md b/cluster-autoscaler/cloudprovider/aws/README.md index 4904ad3fd5df..3bf5b745ae83 100644 --- a/cluster-autoscaler/cloudprovider/aws/README.md +++ b/cluster-autoscaler/cloudprovider/aws/README.md @@ -246,6 +246,8 @@ as string). Currently supported autoscaling options (and example values) are: (overrides `--scale-down-unneeded-time` value for that specific ASG) * `k8s.io/cluster-autoscaler/node-template/autoscaling-options/scaledownunreadytime`: `20m0s` (overrides `--scale-down-unready-time` value for that specific ASG) +* `k8s.io/cluster-autoscaler/node-template/autoscaling-options/ignoredaemonsetsutilization`: `true` + (overrides `--ignore-daemonsets-utilization` value for that specific ASG) **NOTE:** It is your responsibility to ensure such labels and/or taints are applied via the node's kubelet configuration at startup. Cluster Autoscaler will not set the node taints for you. 
diff --git a/cluster-autoscaler/cloudprovider/aws/auto_scaling_groups.go b/cluster-autoscaler/cloudprovider/aws/auto_scaling_groups.go index 36eb4d9a1ab0..a2a83fc52bae 100644 --- a/cluster-autoscaler/cloudprovider/aws/auto_scaling_groups.go +++ b/cluster-autoscaler/cloudprovider/aws/auto_scaling_groups.go @@ -40,6 +40,7 @@ type asgCache struct { asgToInstances map[AwsRef][]AwsInstanceRef instanceToAsg map[AwsInstanceRef]*asg instanceStatus map[AwsInstanceRef]*string + instanceLifecycle map[AwsInstanceRef]*string asgInstanceTypeCache *instanceTypeExpirationStore mutex sync.Mutex awsService *awsWrapper @@ -83,6 +84,7 @@ func newASGCache(awsService *awsWrapper, explicitSpecs []string, autoDiscoverySp asgToInstances: make(map[AwsRef][]AwsInstanceRef), instanceToAsg: make(map[AwsInstanceRef]*asg), instanceStatus: make(map[AwsInstanceRef]*string), + instanceLifecycle: make(map[AwsInstanceRef]*string), asgInstanceTypeCache: newAsgInstanceTypeCache(awsService), interrupt: make(chan struct{}), asgAutoDiscoverySpecs: autoDiscoverySpecs, @@ -239,6 +241,14 @@ func (m *asgCache) InstanceStatus(ref AwsInstanceRef) (*string, error) { return nil, fmt.Errorf("could not find instance %v", ref) } +func (m *asgCache) findInstanceLifecycle(ref AwsInstanceRef) (*string, error) { + if lifecycle, found := m.instanceLifecycle[ref]; found { + return lifecycle, nil + } + + return nil, fmt.Errorf("could not find instance %v", ref) +} + func (m *asgCache) SetAsgSize(asg *asg, size int) error { m.mutex.Lock() defer m.mutex.Unlock() @@ -292,7 +302,6 @@ func (m *asgCache) DeleteInstances(instances []*AwsInstanceRef) error { for i, instance := range instances { instanceIds[i] = instance.Name } - return fmt.Errorf("can't delete instances %s as they belong to at least two different ASGs (%s and %s)", strings.Join(instanceIds, ","), commonAsg.Name, asg.Name) } } @@ -305,6 +314,22 @@ func (m *asgCache) DeleteInstances(instances []*AwsInstanceRef) error { "of deleting instance", instance.Name) 
m.decreaseAsgSizeByOneNoLock(commonAsg) } else { + // check if the instance is already terminating - if it is, don't bother terminating again + // as doing so causes unnecessary API calls and can cause the curSize cached value to decrement + // unnecessarily. + lifecycle, err := m.findInstanceLifecycle(*instance) + if err != nil { + return err + } + + if lifecycle != nil && + (*lifecycle == autoscaling.LifecycleStateTerminating || + *lifecycle == autoscaling.LifecycleStateTerminatingWait || + *lifecycle == autoscaling.LifecycleStateTerminatingProceed) { + klog.V(2).Infof("instance %s is already terminating, will skip instead", instance.Name) + continue + } + params := &autoscaling.TerminateInstanceInAutoScalingGroupInput{ InstanceId: aws.String(instance.Name), ShouldDecrementDesiredCapacity: aws.Bool(true), @@ -361,6 +386,7 @@ func (m *asgCache) regenerate() error { newInstanceToAsgCache := make(map[AwsInstanceRef]*asg) newAsgToInstancesCache := make(map[AwsRef][]AwsInstanceRef) newInstanceStatusMap := make(map[AwsInstanceRef]*string) + newInstanceLifecycleMap := make(map[AwsInstanceRef]*string) // Fetch details of all ASGs refreshNames := m.buildAsgNames() @@ -402,6 +428,7 @@ func (m *asgCache) regenerate() error { newInstanceToAsgCache[ref] = asg newAsgToInstancesCache[asg.AwsRef][i] = ref newInstanceStatusMap[ref] = instance.HealthStatus + newInstanceLifecycleMap[ref] = instance.LifecycleState } } @@ -431,6 +458,7 @@ func (m *asgCache) regenerate() error { m.instanceToAsg = newInstanceToAsgCache m.autoscalingOptions = newAutoscalingOptions m.instanceStatus = newInstanceStatusMap + m.instanceLifecycle = newInstanceLifecycleMap return nil } diff --git a/cluster-autoscaler/cloudprovider/aws/aws_cloud_provider.go b/cluster-autoscaler/cloudprovider/aws/aws_cloud_provider.go index b38b70d13a2c..f17bb2cbe098 100644 --- a/cluster-autoscaler/cloudprovider/aws/aws_cloud_provider.go +++ b/cluster-autoscaler/cloudprovider/aws/aws_cloud_provider.go @@ -36,6 +36,8 @@ import ( 
const ( // GPULabel is the label added to nodes with GPU resource. GPULabel = "k8s.amazonaws.com/accelerator" + // nodeNotPresentErr indicates no node with the given identifier present in AWS + nodeNotPresentErr = "node is not present in aws" ) var ( @@ -129,6 +131,14 @@ func (aws *awsCloudProvider) NodeGroupForNode(node *apiv1.Node) (cloudprovider.N // HasInstance returns whether a given node has a corresponding instance in this cloud provider func (aws *awsCloudProvider) HasInstance(node *apiv1.Node) (bool, error) { + // we haven't implemented a way to check if a fargate instance + // exists in the cloud provider + // returning 'true' because we are assuming the node exists in AWS + // this is the default behavior if the check is unimplemented + if strings.HasPrefix(node.GetName(), "fargate") { + return true, cloudprovider.ErrNotImplemented + } + awsRef, err := AwsRefFromProviderId(node.Spec.ProviderID) if err != nil { return false, err @@ -140,7 +150,7 @@ func (aws *awsCloudProvider) HasInstance(node *apiv1.Node) (bool, error) { return true, nil } - return false, fmt.Errorf("node is not present in aws: %v", err) + return false, fmt.Errorf("%s: %v", nodeNotPresentErr, err) } // Pricing returns pricing model for this cloud provider or error if not available. 
diff --git a/cluster-autoscaler/cloudprovider/aws/aws_cloud_provider_test.go b/cluster-autoscaler/cloudprovider/aws/aws_cloud_provider_test.go index 78a4f46cb8c4..10f4cc87cab9 100644 --- a/cluster-autoscaler/cloudprovider/aws/aws_cloud_provider_test.go +++ b/cluster-autoscaler/cloudprovider/aws/aws_cloud_provider_test.go @@ -22,6 +22,7 @@ import ( "github.com/stretchr/testify/assert" "github.com/stretchr/testify/mock" apiv1 "k8s.io/api/core/v1" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" "k8s.io/autoscaler/cluster-autoscaler/cloudprovider" "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/aws" "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/service/autoscaling" @@ -39,9 +40,9 @@ var testAwsManager = &AwsManager{ awsService: testAwsService, } -func newTestAwsManagerWithMockServices(mockAutoScaling autoScalingI, mockEC2 ec2I, mockEKS eksI, autoDiscoverySpecs []asgAutoDiscoveryConfig) *AwsManager { +func newTestAwsManagerWithMockServices(mockAutoScaling autoScalingI, mockEC2 ec2I, mockEKS eksI, autoDiscoverySpecs []asgAutoDiscoveryConfig, instanceStatus map[AwsInstanceRef]*string) *AwsManager { awsService := awsWrapper{mockAutoScaling, mockEC2, mockEKS} - return &AwsManager{ + mgr := &AwsManager{ awsService: awsService, asgCache: &asgCache{ registeredAsgs: make(map[AwsRef]*asg, 0), @@ -55,16 +56,21 @@ func newTestAwsManagerWithMockServices(mockAutoScaling autoScalingI, mockEC2 ec2 autoscalingOptions: make(map[AwsRef]map[string]string), }, } + + if instanceStatus != nil { + mgr.asgCache.instanceStatus = instanceStatus + } + return mgr } func newTestAwsManagerWithAsgs(t *testing.T, mockAutoScaling autoScalingI, mockEC2 ec2I, specs []string) *AwsManager { - m := newTestAwsManagerWithMockServices(mockAutoScaling, mockEC2, nil, nil) + m := newTestAwsManagerWithMockServices(mockAutoScaling, mockEC2, nil, nil, nil) m.asgCache.parseExplicitAsgs(specs) return m } func newTestAwsManagerWithAutoAsgs(t *testing.T, mockAutoScaling 
autoScalingI, mockEC2 ec2I, specs []string, autoDiscoverySpecs []asgAutoDiscoveryConfig) *AwsManager { - m := newTestAwsManagerWithMockServices(mockAutoScaling, mockEC2, nil, autoDiscoverySpecs) + m := newTestAwsManagerWithMockServices(mockAutoScaling, mockEC2, nil, autoDiscoverySpecs, nil) m.asgCache.parseExplicitAsgs(specs) return m } @@ -75,6 +81,7 @@ func testNamedDescribeAutoScalingGroupsOutput(groupName string, desiredCap int64 instances = append(instances, &autoscaling.Instance{ InstanceId: aws.String(id), AvailabilityZone: aws.String("us-east-1a"), + LifecycleState: aws.String(autoscaling.LifecycleStateInService), }) } return &autoscaling.DescribeAutoScalingGroupsOutput{ @@ -91,6 +98,15 @@ func testNamedDescribeAutoScalingGroupsOutput(groupName string, desiredCap int64 } } +func testSetASGInstanceLifecycle(asg *autoscaling.DescribeAutoScalingGroupsOutput, lifecycleState string) *autoscaling.DescribeAutoScalingGroupsOutput { + for _, asg := range asg.AutoScalingGroups { + for _, instance := range asg.Instances { + instance.LifecycleState = aws.String(lifecycleState) + } + } + return asg +} + func testProvider(t *testing.T, m *AwsManager) *awsCloudProvider { resourceLimiter := cloudprovider.NewResourceLimiter( map[string]int64{cloudprovider.ResourceNameCores: 1, cloudprovider.ResourceNameMemory: 10000000}, @@ -440,6 +456,54 @@ func TestDeleteNodes(t *testing.T) { assert.Equal(t, 1, newSize) } +func TestDeleteNodesTerminatingInstances(t *testing.T) { + a := &autoScalingMock{} + provider := testProvider(t, newTestAwsManagerWithAsgs(t, a, nil, []string{"1:5:test-asg"})) + asgs := provider.NodeGroups() + + a.On("TerminateInstanceInAutoScalingGroup", &autoscaling.TerminateInstanceInAutoScalingGroupInput{ + InstanceId: aws.String("test-instance-id"), + ShouldDecrementDesiredCapacity: aws.Bool(true), + }).Return(&autoscaling.TerminateInstanceInAutoScalingGroupOutput{ + Activity: &autoscaling.Activity{Description: aws.String("Deleted instance")}, + }) + + // Look up 
the current number of instances... + var expectedInstancesCount int64 = 2 + a.On("DescribeAutoScalingGroupsPages", + &autoscaling.DescribeAutoScalingGroupsInput{ + AutoScalingGroupNames: aws.StringSlice([]string{"test-asg"}), + MaxRecords: aws.Int64(maxRecordsReturnedByAPI), + }, + mock.AnythingOfType("func(*autoscaling.DescribeAutoScalingGroupsOutput, bool) bool"), + ).Run(func(args mock.Arguments) { + fn := args.Get(1).(func(*autoscaling.DescribeAutoScalingGroupsOutput, bool) bool) + fn(testSetASGInstanceLifecycle(testNamedDescribeAutoScalingGroupsOutput("test-asg", expectedInstancesCount, "test-instance-id", "second-test-instance-id"), autoscaling.LifecycleStateTerminatingWait), false) + // we expect the instance count to be 1 after the call to DeleteNodes + expectedInstancesCount = 1 + }).Return(nil) + + provider.Refresh() + + initialSize, err := asgs[0].TargetSize() + assert.NoError(t, err) + assert.Equal(t, 2, initialSize) + + node := &apiv1.Node{ + Spec: apiv1.NodeSpec{ + ProviderID: "aws:///us-east-1a/test-instance-id", + }, + } + err = asgs[0].DeleteNodes([]*apiv1.Node{node}) + assert.NoError(t, err) + a.AssertNumberOfCalls(t, "TerminateInstanceInAutoScalingGroup", 0) // instances which are terminating don't need to be terminated again + a.AssertNumberOfCalls(t, "DescribeAutoScalingGroupsPages", 1) + + newSize, err := asgs[0].TargetSize() + assert.NoError(t, err) + assert.Equal(t, 2, newSize) +} + func TestDeleteNodesWithPlaceholder(t *testing.T) { a := &autoScalingMock{} provider := testProvider(t, newTestAwsManagerWithAsgs(t, a, nil, []string{"1:5:test-asg"})) @@ -534,7 +598,7 @@ func TestDeleteNodesAfterMultipleRefreshes(t *testing.T) { func TestGetResourceLimiter(t *testing.T) { mockAutoScaling := &autoScalingMock{} mockEC2 := &ec2Mock{} - m := newTestAwsManagerWithMockServices(mockAutoScaling, mockEC2, nil, nil) + m := newTestAwsManagerWithMockServices(mockAutoScaling, mockEC2, nil, nil, nil) provider := testProvider(t, m) _, err := 
provider.GetResourceLimiter() @@ -546,3 +610,64 @@ func TestCleanup(t *testing.T) { err := provider.Cleanup() assert.NoError(t, err) } + +func TestHasInstance(t *testing.T) { + nodeStatus := "Healthy" + mgr := &AwsManager{ + asgCache: &asgCache{ + registeredAsgs: make(map[AwsRef]*asg, 0), + asgToInstances: make(map[AwsRef][]AwsInstanceRef), + instanceToAsg: make(map[AwsInstanceRef]*asg), + interrupt: make(chan struct{}), + awsService: &testAwsService, + instanceStatus: map[AwsInstanceRef]*string{ + { + ProviderID: "aws:///us-east-1a/test-instance-id", + Name: "test-instance-id", + }: &nodeStatus, + }, + }, + awsService: testAwsService, + } + provider := testProvider(t, mgr) + + // Case 1: correct node - present in AWS + node1 := &apiv1.Node{ + ObjectMeta: metav1.ObjectMeta{ + Name: "node-1", + }, + Spec: apiv1.NodeSpec{ + ProviderID: "aws:///us-east-1a/test-instance-id", + }, + } + present, err := provider.HasInstance(node1) + assert.NoError(t, err) + assert.True(t, present) + + // Case 2: incorrect node - fargate is unsupported + node2 := &apiv1.Node{ + ObjectMeta: metav1.ObjectMeta{ + Name: "fargate-1", + }, + Spec: apiv1.NodeSpec{ + ProviderID: "aws:///us-east-1a/test-instance-id", + }, + } + present, err = provider.HasInstance(node2) + assert.Equal(t, cloudprovider.ErrNotImplemented, err) + assert.True(t, present) + + // Case 3: correct node - not present in AWS + node3 := &apiv1.Node{ + ObjectMeta: metav1.ObjectMeta{ + Name: "node-2", + }, + Spec: apiv1.NodeSpec{ + ProviderID: "aws:///us-east-1a/test-instance-id-2", + }, + } + present, err = provider.HasInstance(node3) + assert.ErrorContains(t, err, nodeNotPresentErr) + assert.False(t, present) + +} diff --git a/cluster-autoscaler/cloudprovider/aws/aws_manager.go b/cluster-autoscaler/cloudprovider/aws/aws_manager.go index e0c262e73f07..110185de055a 100644 --- a/cluster-autoscaler/cloudprovider/aws/aws_manager.go +++ b/cluster-autoscaler/cloudprovider/aws/aws_manager.go @@ -245,6 +245,15 @@ func (m *AwsManager) 
GetAsgOptions(asg asg, defaults config.NodeGroupAutoscaling } } + if stringOpt, found := options[config.DefaultIgnoreDaemonSetsUtilizationKey]; found { + if opt, err := strconv.ParseBool(stringOpt); err != nil { + klog.Warningf("failed to convert asg %s %s tag to bool: %v", + asg.Name, config.DefaultIgnoreDaemonSetsUtilizationKey, err) + } else { + defaults.IgnoreDaemonSetsUtilization = opt + } + } + return &defaults } @@ -308,6 +317,18 @@ func (m *AwsManager) buildNodeFromTemplate(asg *asg, template *asgTemplate) (*ap node.Spec.Taints = append(node.Spec.Taints, mngTaints...) klog.V(5).Infof("node.Spec.Taints : %+v\n", node.Spec.Taints) } + + mngTags, err := m.managedNodegroupCache.getManagedNodegroupTags(nodegroupName, clusterName) + if err != nil { + klog.Errorf("Failed to get tags from EKS DescribeNodegroup API for nodegroup %s in cluster %s because %s.", nodegroupName, clusterName, err) + } else if mngTags != nil && len(mngTags) > 0 { + resourcesFromMngTags := extractAllocatableResourcesFromTags(mngTags) + klog.V(5).Infof("Extracted resources from EKS nodegroup tags %v", resourcesFromMngTags) + // ManagedNodeGroup resource-indicating tags override conflicting tags on the ASG if they exist + for resourceName, val := range resourcesFromMngTags { + node.Status.Capacity[apiv1.ResourceName(resourceName)] = *val + } + } } node.Status.Conditions = cloudprovider.BuildReadyConditions() @@ -458,6 +479,27 @@ func extractAllocatableResourcesFromAsg(tags []*autoscaling.TagDescription) map[ return result } +func extractAllocatableResourcesFromTags(tags map[string]string) map[string]*resource.Quantity { + result := make(map[string]*resource.Quantity) + + for k, v := range tags { + splits := strings.Split(k, "k8s.io/cluster-autoscaler/node-template/resources/") + if len(splits) > 1 { + label := splits[1] + if label != "" { + quantity, err := resource.ParseQuantity(v) + if err != nil { + klog.Warningf("Failed to parse resource quantity '%s' for resource '%s'", v, label) + 
continue + } + result[label] = &quantity + } + } + } + + return result +} + func extractTaintsFromAsg(tags []*autoscaling.TagDescription) []apiv1.Taint { taints := make([]apiv1.Taint, 0) diff --git a/cluster-autoscaler/cloudprovider/aws/aws_manager_test.go b/cluster-autoscaler/cloudprovider/aws/aws_manager_test.go index 2b7c5fdc57b5..ddf1b28ddce7 100644 --- a/cluster-autoscaler/cloudprovider/aws/aws_manager_test.go +++ b/cluster-autoscaler/cloudprovider/aws/aws_manager_test.go @@ -103,12 +103,34 @@ func TestExtractAllocatableResourcesFromAsg(t *testing.T) { assert.Equal(t, resource.NewQuantity(5, resource.DecimalSI).String(), labels["custom-resource"].String()) } +func TestExtractAllocatableResourcesFromTags(t *testing.T) { + tags := map[string]string{ + "k8s.io/cluster-autoscaler/node-template/resources/cpu": "100m", + "k8s.io/cluster-autoscaler/node-template/resources/memory": "100M", + "k8s.io/cluster-autoscaler/node-template/resources/ephemeral-storage": "20G", + "k8s.io/cluster-autoscaler/node-template/resources/custom-resource": "5", + "k8s.io/cluster-autoscaler/node-template/resources/error-resource": "GG", + } + + labels := extractAllocatableResourcesFromTags(tags) + + assert.Equal(t, 4, len(labels)) + assert.NotContains(t, labels, "error-resource") + assert.Equal(t, resource.NewMilliQuantity(100, resource.DecimalSI).String(), labels["cpu"].String()) + expectedMemory := resource.MustParse("100M") + assert.Equal(t, (&expectedMemory).String(), labels["memory"].String()) + expectedEphemeralStorage := resource.MustParse("20G") + assert.Equal(t, (&expectedEphemeralStorage).String(), labels["ephemeral-storage"].String()) + assert.Equal(t, resource.NewQuantity(5, resource.DecimalSI).String(), labels["custom-resource"].String()) +} + func TestGetAsgOptions(t *testing.T) { defaultOptions := config.NodeGroupAutoscalingOptions{ ScaleDownUtilizationThreshold: 0.1, ScaleDownGpuUtilizationThreshold: 0.2, ScaleDownUnneededTime: time.Second, ScaleDownUnreadyTime: 
time.Minute, + IgnoreDaemonSetsUtilization: false, } tests := []struct { @@ -124,39 +146,60 @@ func TestGetAsgOptions(t *testing.T) { { description: "keep defaults on invalid tags values", tags: map[string]string{ - "scaledownutilizationthreshold": "not-a-float", - "scaledownunneededtime": "not-a-duration", - "ScaleDownUnreadyTime": "", + config.DefaultScaleDownUtilizationThresholdKey: "not-a-float", + config.DefaultScaleDownUnneededTimeKey: "not-a-duration", + "ScaleDownUnreadyTime": "", + config.DefaultIgnoreDaemonSetsUtilizationKey: "not-a-bool", }, expected: &defaultOptions, }, { description: "use provided tags and fill missing with defaults", tags: map[string]string{ - "scaledownutilizationthreshold": "0.42", - "scaledownunneededtime": "1h", + config.DefaultScaleDownUtilizationThresholdKey: "0.42", + config.DefaultScaleDownUnneededTimeKey: "1h", + config.DefaultIgnoreDaemonSetsUtilizationKey: "true", }, expected: &config.NodeGroupAutoscalingOptions{ ScaleDownUtilizationThreshold: 0.42, ScaleDownGpuUtilizationThreshold: defaultOptions.ScaleDownGpuUtilizationThreshold, ScaleDownUnneededTime: time.Hour, ScaleDownUnreadyTime: defaultOptions.ScaleDownUnreadyTime, + IgnoreDaemonSetsUtilization: true, + }, + }, + { + description: "use provided tags (happy path)", + tags: map[string]string{ + config.DefaultScaleDownUtilizationThresholdKey: "0.42", + config.DefaultScaleDownUnneededTimeKey: "1h", + config.DefaultScaleDownGpuUtilizationThresholdKey: "0.7", + config.DefaultScaleDownUnreadyTimeKey: "25m", + config.DefaultIgnoreDaemonSetsUtilizationKey: "true", + }, + expected: &config.NodeGroupAutoscalingOptions{ + ScaleDownUtilizationThreshold: 0.42, + ScaleDownGpuUtilizationThreshold: 0.7, + ScaleDownUnneededTime: time.Hour, + ScaleDownUnreadyTime: 25 * time.Minute, + IgnoreDaemonSetsUtilization: true, }, }, { description: "ignore unknown tags", tags: map[string]string{ - "scaledownutilizationthreshold": "0.6", - "scaledowngpuutilizationthreshold": "0.7", - 
"scaledownunneededtime": "1m", - "scaledownunreadytime": "1h", - "notyetspecified": "42", + config.DefaultScaleDownUtilizationThresholdKey: "0.6", + config.DefaultScaleDownGpuUtilizationThresholdKey: "0.7", + config.DefaultScaleDownUnneededTimeKey: "1m", + config.DefaultScaleDownUnreadyTimeKey: "1h", + "notyetspecified": "42", }, expected: &config.NodeGroupAutoscalingOptions{ ScaleDownUtilizationThreshold: 0.6, ScaleDownGpuUtilizationThreshold: 0.7, ScaleDownUnneededTime: time.Minute, ScaleDownUnreadyTime: time.Hour, + IgnoreDaemonSetsUtilization: false, }, }, } @@ -212,11 +255,17 @@ func TestBuildNodeFromTemplateWithManagedNodegroup(t *testing.T) { Value: taintValue2, } + ephemeralStorageKey := "ephemeral-storage" + diskSizeGb := 80 + tagKey1 := fmt.Sprintf("k8s.io/cluster-autoscaler/node-template/resources/%s", ephemeralStorageKey) + tagValue1 := fmt.Sprintf("%dGi", diskSizeGb) + err := mngCache.Add(managedNodegroupCachedObject{ name: ngNameLabelValue, clusterName: clusterNameLabelValue, taints: []apiv1.Taint{taint1, taint2}, labels: map[string]string{labelKey1: labelValue1, labelKey2: labelValue2}, + tags: map[string]string{tagKey1: tagValue1}, }) require.NoError(t, err) @@ -239,6 +288,9 @@ func TestBuildNodeFromTemplateWithManagedNodegroup(t *testing.T) { }, }) assert.NoError(t, observedErr) + esValue, esExist := observedNode.Status.Capacity[apiv1.ResourceName(ephemeralStorageKey)] + assert.True(t, esExist) + assert.Equal(t, int64(diskSizeGb*1024*1024*1024), esValue.Value()) assert.GreaterOrEqual(t, len(observedNode.Labels), 4) ngNameValue, ngLabelExist := observedNode.Labels["nodegroup-name"] assert.True(t, ngLabelExist) diff --git a/cluster-autoscaler/cloudprovider/aws/aws_sdk_provider.go b/cluster-autoscaler/cloudprovider/aws/aws_sdk_provider.go index 1b1e64925c5f..b1f33442f71b 100644 --- a/cluster-autoscaler/cloudprovider/aws/aws_sdk_provider.go +++ b/cluster-autoscaler/cloudprovider/aws/aws_sdk_provider.go @@ -27,7 +27,9 @@ import ( 
"k8s.io/autoscaler/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/aws" "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/aws/ec2metadata" "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/aws/endpoints" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/aws/request" "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/aws/session" + "k8s.io/autoscaler/cluster-autoscaler/version" provider_aws "k8s.io/cloud-provider-aws/pkg/providers/v1" "k8s.io/klog/v2" ) @@ -61,11 +63,14 @@ func createAWSSDKProvider(configReader io.Reader) (*awsSDKProvider, error) { } sess, err := session.NewSession(config) - if err != nil { return nil, err } + // add cluster-autoscaler to the user-agent to make it easier to identify + agent := fmt.Sprintf("cluster-autoscaler/v%s", version.ClusterAutoscalerVersion) + sess.Handlers.Build.PushBack(request.MakeAddToUserAgentFreeFormHandler(agent)) + provider := &awsSDKProvider{ session: sess, } diff --git a/cluster-autoscaler/cloudprovider/aws/aws_wrapper.go b/cluster-autoscaler/cloudprovider/aws/aws_wrapper.go index 2408805b9f20..b993e615e011 100644 --- a/cluster-autoscaler/cloudprovider/aws/aws_wrapper.go +++ b/cluster-autoscaler/cloudprovider/aws/aws_wrapper.go @@ -57,7 +57,7 @@ type awsWrapper struct { eksI } -func (m *awsWrapper) getManagedNodegroupInfo(nodegroupName string, clusterName string) ([]apiv1.Taint, map[string]string, error) { +func (m *awsWrapper) getManagedNodegroupInfo(nodegroupName string, clusterName string) ([]apiv1.Taint, map[string]string, map[string]string, error) { params := &eks.DescribeNodegroupInput{ ClusterName: &clusterName, NodegroupName: &nodegroupName, @@ -66,13 +66,14 @@ func (m *awsWrapper) getManagedNodegroupInfo(nodegroupName string, clusterName s r, err := m.DescribeNodegroup(params) observeAWSRequest("DescribeNodegroup", err, start) if err != nil { - return nil, nil, err + return nil, nil, nil, err } klog.V(6).Infof("DescribeNodegroup output : 
%+v\n", r) taints := make([]apiv1.Taint, 0) labels := make(map[string]string) + tags := make(map[string]string) // Labels will include diskSize, amiType, capacityType, version if r.Nodegroup.DiskSize != nil { @@ -104,6 +105,15 @@ func (m *awsWrapper) getManagedNodegroupInfo(nodegroupName string, clusterName s } } + if r.Nodegroup.Tags != nil && len(r.Nodegroup.Tags) > 0 { + tagsMap := r.Nodegroup.Tags + for k, v := range tagsMap { + if v != nil { + tags[k] = *v + } + } + } + if r.Nodegroup.Taints != nil && len(r.Nodegroup.Taints) > 0 { taintList := r.Nodegroup.Taints for _, taint := range taintList { @@ -117,7 +127,7 @@ func (m *awsWrapper) getManagedNodegroupInfo(nodegroupName string, clusterName s } } - return taints, labels, nil + return taints, labels, tags, nil } func (m *awsWrapper) getInstanceTypeByLaunchConfigNames(launchConfigToQuery []*string) (map[string]string, error) { diff --git a/cluster-autoscaler/cloudprovider/aws/aws_wrapper_test.go b/cluster-autoscaler/cloudprovider/aws/aws_wrapper_test.go index c9a4da29307b..126daee7d346 100644 --- a/cluster-autoscaler/cloudprovider/aws/aws_wrapper_test.go +++ b/cluster-autoscaler/cloudprovider/aws/aws_wrapper_test.go @@ -137,6 +137,11 @@ func TestGetManagedNodegroup(t *testing.T) { capacityType := "testCapacityType" k8sVersion := "1.19" + tagKey1 := "tag 1" + tagValue1 := "value 1" + tagKey2 := "tag 2" + tagValue2 := "value 2" + // Create test nodegroup testNodegroup := eks.Nodegroup{ AmiType: &amiType, @@ -147,6 +152,7 @@ func TestGetManagedNodegroup(t *testing.T) { CapacityType: &capacityType, Version: &k8sVersion, Taints: []*eks.Taint{&taint1, &taint2}, + Tags: map[string]*string{tagKey1: &tagValue1, tagKey2: &tagValue2}, } k.On("DescribeNodegroup", &eks.DescribeNodegroupInput{ @@ -154,7 +160,7 @@ func TestGetManagedNodegroup(t *testing.T) { NodegroupName: &nodegroupName, }).Return(&eks.DescribeNodegroupOutput{Nodegroup: &testNodegroup}, nil) - taintList, labelMap, err := 
awsWrapper.getManagedNodegroupInfo(nodegroupName, clusterName) + taintList, labelMap, tagMap, err := awsWrapper.getManagedNodegroupInfo(nodegroupName, clusterName) assert.Nil(t, err) assert.Equal(t, len(taintList), 2) assert.Equal(t, taintList[0].Effect, apiv1.TaintEffect(taintEffect1)) @@ -171,6 +177,9 @@ func TestGetManagedNodegroup(t *testing.T) { assert.Equal(t, labelMap["capacityType"], capacityType) assert.Equal(t, labelMap["k8sVersion"], k8sVersion) assert.Equal(t, labelMap["eks.amazonaws.com/nodegroup"], nodegroupName) + assert.Equal(t, len(tagMap), 2) + assert.Equal(t, tagMap[tagKey1], tagValue1) + assert.Equal(t, tagMap[tagKey2], tagValue2) } func TestGetManagedNodegroupWithNilValues(t *testing.T) { @@ -198,6 +207,7 @@ func TestGetManagedNodegroupWithNilValues(t *testing.T) { CapacityType: &capacityType, Version: &k8sVersion, Taints: nil, + Tags: nil, } k.On("DescribeNodegroup", &eks.DescribeNodegroupInput{ @@ -205,7 +215,7 @@ func TestGetManagedNodegroupWithNilValues(t *testing.T) { NodegroupName: &nodegroupName, }).Return(&eks.DescribeNodegroupOutput{Nodegroup: &testNodegroup}, nil) - taintList, labelMap, err := awsWrapper.getManagedNodegroupInfo(nodegroupName, clusterName) + taintList, labelMap, tagMap, err := awsWrapper.getManagedNodegroupInfo(nodegroupName, clusterName) assert.Nil(t, err) assert.Equal(t, len(taintList), 0) assert.Equal(t, len(labelMap), 4) @@ -213,6 +223,7 @@ func TestGetManagedNodegroupWithNilValues(t *testing.T) { assert.Equal(t, labelMap["capacityType"], capacityType) assert.Equal(t, labelMap["k8sVersion"], k8sVersion) assert.Equal(t, labelMap["eks.amazonaws.com/nodegroup"], nodegroupName) + assert.Equal(t, len(tagMap), 0) } func TestGetManagedNodegroupWithEmptyValues(t *testing.T) { @@ -240,6 +251,7 @@ func TestGetManagedNodegroupWithEmptyValues(t *testing.T) { CapacityType: &capacityType, Version: &k8sVersion, Taints: make([]*eks.Taint, 0), + Tags: make(map[string]*string), } k.On("DescribeNodegroup", 
&eks.DescribeNodegroupInput{ @@ -247,7 +259,7 @@ func TestGetManagedNodegroupWithEmptyValues(t *testing.T) { NodegroupName: &nodegroupName, }).Return(&eks.DescribeNodegroupOutput{Nodegroup: &testNodegroup}, nil) - taintList, labelMap, err := awsWrapper.getManagedNodegroupInfo(nodegroupName, clusterName) + taintList, labelMap, tagMap, err := awsWrapper.getManagedNodegroupInfo(nodegroupName, clusterName) assert.Nil(t, err) assert.Equal(t, len(taintList), 0) assert.Equal(t, len(labelMap), 4) @@ -255,6 +267,7 @@ func TestGetManagedNodegroupWithEmptyValues(t *testing.T) { assert.Equal(t, labelMap["capacityType"], capacityType) assert.Equal(t, labelMap["k8sVersion"], k8sVersion) assert.Equal(t, labelMap["eks.amazonaws.com/nodegroup"], nodegroupName) + assert.Equal(t, len(tagMap), 0) } func TestMoreThen100Groups(t *testing.T) { diff --git a/cluster-autoscaler/cloudprovider/aws/ec2_instance_types.go b/cluster-autoscaler/cloudprovider/aws/ec2_instance_types.go index a9af34c37052..a9dc9937d5ee 100644 --- a/cluster-autoscaler/cloudprovider/aws/ec2_instance_types.go +++ b/cluster-autoscaler/cloudprovider/aws/ec2_instance_types.go @@ -2818,6 +2818,13 @@ var InstanceTypes = map[string]*InstanceType{ GPU: 8, Architecture: "amd64", }, + "p4de.24xlarge": { + InstanceType: "p4de.24xlarge", + VCPU: 96, + MemoryMb: 1179648, + GPU: 8, + Architecture: "amd64", + }, "r3.2xlarge": { InstanceType: "r3.2xlarge", VCPU: 8, @@ -4071,6 +4078,13 @@ var InstanceTypes = map[string]*InstanceType{ GPU: 0, Architecture: "amd64", }, + "trn1n.32xlarge": { + InstanceType: "trn1n.32xlarge", + VCPU: 128, + MemoryMb: 524288, + GPU: 0, + Architecture: "amd64", + }, "u-12tb1.112xlarge": { InstanceType: "u-12tb1.112xlarge", VCPU: 448, diff --git a/cluster-autoscaler/cloudprovider/aws/examples/cluster-autoscaler-autodiscover.yaml b/cluster-autoscaler/cloudprovider/aws/examples/cluster-autoscaler-autodiscover.yaml index 0607705da389..ff1105e43ca7 100644 --- 
a/cluster-autoscaler/cloudprovider/aws/examples/cluster-autoscaler-autodiscover.yaml +++ b/cluster-autoscaler/cloudprovider/aws/examples/cluster-autoscaler-autodiscover.yaml @@ -75,7 +75,7 @@ metadata: rules: - apiGroups: [""] resources: ["configmaps"] - verbs: ["create","list","watch"] + verbs: ["create", "list", "watch"] - apiGroups: [""] resources: ["configmaps"] resourceNames: ["cluster-autoscaler-status", "cluster-autoscaler-priority-expander"] @@ -146,7 +146,7 @@ spec: type: RuntimeDefault serviceAccountName: cluster-autoscaler containers: - - image: registry.k8s.io/autoscaling/cluster-autoscaler:v1.22.2 + - image: registry.k8s.io/autoscaling/cluster-autoscaler:v1.26.2 name: cluster-autoscaler resources: limits: @@ -165,7 +165,7 @@ spec: - --node-group-auto-discovery=asg:tag=k8s.io/cluster-autoscaler/enabled,k8s.io/cluster-autoscaler/ volumeMounts: - name: ssl-certs - mountPath: /etc/ssl/certs/ca-certificates.crt #/etc/ssl/certs/ca-bundle.crt for Amazon Linux Worker Nodes + mountPath: /etc/ssl/certs/ca-certificates.crt # /etc/ssl/certs/ca-bundle.crt for Amazon Linux Worker Nodes readOnly: true imagePullPolicy: "Always" securityContext: diff --git a/cluster-autoscaler/cloudprovider/aws/examples/cluster-autoscaler-multi-asg.yaml b/cluster-autoscaler/cloudprovider/aws/examples/cluster-autoscaler-multi-asg.yaml index 50adbaa26ec8..1eef576adc77 100644 --- a/cluster-autoscaler/cloudprovider/aws/examples/cluster-autoscaler-multi-asg.yaml +++ b/cluster-autoscaler/cloudprovider/aws/examples/cluster-autoscaler-multi-asg.yaml @@ -75,7 +75,7 @@ metadata: rules: - apiGroups: [""] resources: ["configmaps"] - verbs: ["create","list","watch"] + verbs: ["create", "list", "watch"] - apiGroups: [""] resources: ["configmaps"] resourceNames: ["cluster-autoscaler-status", "cluster-autoscaler-priority-expander"] @@ -146,7 +146,7 @@ spec: type: RuntimeDefault serviceAccountName: cluster-autoscaler containers: - - image: registry.k8s.io/autoscaling/cluster-autoscaler:v1.22.2 + - 
image: registry.k8s.io/autoscaling/cluster-autoscaler:v1.26.2 name: cluster-autoscaler resources: limits: @@ -166,7 +166,7 @@ spec: - --nodes=1:3:k8s-worker-asg-2 volumeMounts: - name: ssl-certs - mountPath: /etc/ssl/certs/ca-certificates.crt #/etc/ssl/certs/ca-bundle.crt for Amazon Linux Worker Nodes + mountPath: /etc/ssl/certs/ca-certificates.crt # /etc/ssl/certs/ca-bundle.crt for Amazon Linux Worker Nodes readOnly: true imagePullPolicy: "Always" securityContext: diff --git a/cluster-autoscaler/cloudprovider/aws/examples/cluster-autoscaler-one-asg.yaml b/cluster-autoscaler/cloudprovider/aws/examples/cluster-autoscaler-one-asg.yaml index 3462e6b30656..67d57bc4dcdc 100644 --- a/cluster-autoscaler/cloudprovider/aws/examples/cluster-autoscaler-one-asg.yaml +++ b/cluster-autoscaler/cloudprovider/aws/examples/cluster-autoscaler-one-asg.yaml @@ -75,7 +75,7 @@ metadata: rules: - apiGroups: [""] resources: ["configmaps"] - verbs: ["create","list","watch"] + verbs: ["create", "list", "watch"] - apiGroups: [""] resources: ["configmaps"] resourceNames: ["cluster-autoscaler-status", "cluster-autoscaler-priority-expander"] @@ -146,7 +146,7 @@ spec: type: RuntimeDefault serviceAccountName: cluster-autoscaler containers: - - image: registry.k8s.io/autoscaling/cluster-autoscaler:v1.22.2 + - image: registry.k8s.io/autoscaling/cluster-autoscaler:v1.26.2 name: cluster-autoscaler resources: limits: @@ -164,7 +164,7 @@ spec: - --nodes=1:10:k8s-worker-asg-1 volumeMounts: - name: ssl-certs - mountPath: /etc/ssl/certs/ca-certificates.crt #/etc/ssl/certs/ca-bundle.crt for Amazon Linux Worker Nodes + mountPath: /etc/ssl/certs/ca-certificates.crt # /etc/ssl/certs/ca-bundle.crt for Amazon Linux Worker Nodes readOnly: true imagePullPolicy: "Always" securityContext: diff --git a/cluster-autoscaler/cloudprovider/aws/examples/cluster-autoscaler-run-on-control-plane.yaml b/cluster-autoscaler/cloudprovider/aws/examples/cluster-autoscaler-run-on-control-plane.yaml index 0c581abb33c4..9cda4af03e5b 
100644 --- a/cluster-autoscaler/cloudprovider/aws/examples/cluster-autoscaler-run-on-control-plane.yaml +++ b/cluster-autoscaler/cloudprovider/aws/examples/cluster-autoscaler-run-on-control-plane.yaml @@ -75,7 +75,7 @@ metadata: rules: - apiGroups: [""] resources: ["configmaps"] - verbs: ["create","list","watch"] + verbs: ["create", "list", "watch"] - apiGroups: [""] resources: ["configmaps"] resourceNames: ["cluster-autoscaler-status", "cluster-autoscaler-priority-expander"] @@ -153,7 +153,7 @@ spec: nodeSelector: kubernetes.io/role: master containers: - - image: registry.k8s.io/autoscaling/cluster-autoscaler:v1.22.2 + - image: registry.k8s.io/autoscaling/cluster-autoscaler:v1.26.2 name: cluster-autoscaler resources: limits: @@ -171,7 +171,7 @@ spec: - --nodes={{ node_asg_min }}:{{ node_asg_max }}:{{ name }} volumeMounts: - name: ssl-certs - mountPath: /etc/ssl/certs/ca-certificates.crt #/etc/ssl/certs/ca-bundle.crt for Amazon Linux Worker Nodes + mountPath: /etc/ssl/certs/ca-certificates.crt # /etc/ssl/certs/ca-bundle.crt for Amazon Linux Worker Nodes readOnly: true imagePullPolicy: "Always" securityContext: diff --git a/cluster-autoscaler/cloudprovider/aws/managed_nodegroup_cache.go b/cluster-autoscaler/cloudprovider/aws/managed_nodegroup_cache.go index 33aec5ed0e03..51211c1d8a45 100644 --- a/cluster-autoscaler/cloudprovider/aws/managed_nodegroup_cache.go +++ b/cluster-autoscaler/cloudprovider/aws/managed_nodegroup_cache.go @@ -50,6 +50,7 @@ type managedNodegroupCachedObject struct { clusterName string taints []apiv1.Taint labels map[string]string + tags map[string]string } type mngJitterClock struct { @@ -92,7 +93,7 @@ func (c *mngJitterClock) Since(ts time.Time) time.Duration { } func (m *managedNodegroupCache) getManagedNodegroup(nodegroupName string, clusterName string) (*managedNodegroupCachedObject, error) { - taintList, labelMap, err := m.awsService.getManagedNodegroupInfo(nodegroupName, clusterName) + taintList, labelMap, tagMap, err := 
m.awsService.getManagedNodegroupInfo(nodegroupName, clusterName) if err != nil { // If there's an error cache an empty nodegroup to limit failed calls to the EKS API newEmptyNodegroup := managedNodegroupCachedObject{ @@ -100,6 +101,7 @@ func (m *managedNodegroupCache) getManagedNodegroup(nodegroupName string, cluste clusterName: clusterName, taints: nil, labels: nil, + tags: nil, } m.Add(newEmptyNodegroup) @@ -111,6 +113,7 @@ func (m *managedNodegroupCache) getManagedNodegroup(nodegroupName string, cluste clusterName: clusterName, taints: taintList, labels: labelMap, + tags: tagMap, } m.Add(newNodegroup) @@ -130,7 +133,7 @@ func (m managedNodegroupCache) getManagedNodegroupInfoObject(nodegroupName strin managedNodegroupInfo, err := m.getManagedNodegroup(nodegroupName, clusterName) if err != nil { - klog.Errorf("Failed to query the managed nodegroup %s for the cluster %s while looking for labels/taints: %v", nodegroupName, clusterName, err) + klog.Errorf("Failed to query the managed nodegroup %s for the cluster %s while looking for labels/taints/tags: %v", nodegroupName, clusterName, err) return nil, err } return managedNodegroupInfo, nil @@ -145,6 +148,15 @@ func (m managedNodegroupCache) getManagedNodegroupLabels(nodegroupName string, c return getManagedNodegroupInfoObject.labels, nil } +func (m managedNodegroupCache) getManagedNodegroupTags(nodegroupName string, clusterName string) (map[string]string, error) { + getManagedNodegroupInfoObject, err := m.getManagedNodegroupInfoObject(nodegroupName, clusterName) + if err != nil { + return nil, err + } + + return getManagedNodegroupInfoObject.tags, nil +} + func (m managedNodegroupCache) getManagedNodegroupTaints(nodegroupName string, clusterName string) ([]apiv1.Taint, error) { getManagedNodegroupInfoObject, err := m.getManagedNodegroupInfoObject(nodegroupName, clusterName) if err != nil { diff --git a/cluster-autoscaler/cloudprovider/aws/managed_nodegroup_cache_test.go 
b/cluster-autoscaler/cloudprovider/aws/managed_nodegroup_cache_test.go index 4c3c89e9c51c..3024907d8a10 100644 --- a/cluster-autoscaler/cloudprovider/aws/managed_nodegroup_cache_test.go +++ b/cluster-autoscaler/cloudprovider/aws/managed_nodegroup_cache_test.go @@ -38,6 +38,8 @@ func TestManagedNodegroupCache(t *testing.T) { taintEffect := "effect 1" taintKey := "key 1" taintValue := "value 1" + tagKey := "tag key 1" + tagValue := "tag value 1" taint := apiv1.Taint{ Effect: apiv1.TaintEffect(taintEffect), Key: taintKey, @@ -50,6 +52,7 @@ func TestManagedNodegroupCache(t *testing.T) { clusterName: clusterName, taints: []apiv1.Taint{taint}, labels: map[string]string{labelKey: labelValue}, + tags: map[string]string{tagKey: tagValue}, }) require.NoError(t, err) obj, ok, err := c.GetByKey(nodegroupName) @@ -63,6 +66,8 @@ func TestManagedNodegroupCache(t *testing.T) { assert.Equal(t, apiv1.TaintEffect(taintEffect), obj.(managedNodegroupCachedObject).taints[0].Effect) assert.Equal(t, taintKey, obj.(managedNodegroupCachedObject).taints[0].Key) assert.Equal(t, taintValue, obj.(managedNodegroupCachedObject).taints[0].Value) + assert.Equal(t, len(obj.(managedNodegroupCachedObject).tags), 1) + assert.Equal(t, tagValue, obj.(managedNodegroupCachedObject).tags[tagKey]) } func TestGetManagedNodegroupWithError(t *testing.T) { @@ -111,6 +116,7 @@ func TestGetManagedNodegroupNoTaintsOrLabels(t *testing.T) { CapacityType: &capacityType, Version: &k8sVersion, Taints: nil, + Tags: nil, } k.On("DescribeNodegroup", &eks.DescribeNodegroupInput{ @@ -130,6 +136,7 @@ func TestGetManagedNodegroupNoTaintsOrLabels(t *testing.T) { assert.Equal(t, cacheObj.labels["capacityType"], capacityType) assert.Equal(t, cacheObj.labels["k8sVersion"], k8sVersion) assert.Equal(t, cacheObj.labels["eks.amazonaws.com/nodegroup"], nodegroupName) + assert.Equal(t, len(cacheObj.tags), 0) } func TestGetManagedNodegroupWithTaintsAndLabels(t *testing.T) { @@ -165,6 +172,11 @@ func 
TestGetManagedNodegroupWithTaintsAndLabels(t *testing.T) { Value: &taintValue2, } + tagKey1 := "tagKey 1" + tagKey2 := "tagKey 2" + tagValue1 := "tagValue 1" + tagValue2 := "tagValue 2" + // Create test nodegroup testNodegroup := eks.Nodegroup{ AmiType: &amiType, @@ -175,6 +187,7 @@ func TestGetManagedNodegroupWithTaintsAndLabels(t *testing.T) { CapacityType: &capacityType, Version: &k8sVersion, Taints: []*eks.Taint{&taint1, &taint2}, + Tags: map[string]*string{tagKey1: &tagValue1, tagKey2: &tagValue2}, } k.On("DescribeNodegroup", &eks.DescribeNodegroupInput{ @@ -203,6 +216,9 @@ func TestGetManagedNodegroupWithTaintsAndLabels(t *testing.T) { assert.Equal(t, cacheObj.labels["capacityType"], capacityType) assert.Equal(t, cacheObj.labels["k8sVersion"], k8sVersion) assert.Equal(t, cacheObj.labels["eks.amazonaws.com/nodegroup"], nodegroupName) + assert.Equal(t, len(cacheObj.tags), 2) + assert.Equal(t, cacheObj.tags[tagKey1], tagValue1) + assert.Equal(t, cacheObj.tags[tagKey2], tagValue2) } func TestGetManagedNodegroupInfoObjectWithError(t *testing.T) { @@ -241,6 +257,8 @@ func TestGetManagedNodegroupInfoObjectWithCachedNodegroup(t *testing.T) { Key: taintKey, Value: taintValue, } + tagKey := "tag key 1" + tagValue := "tag value 1" c := newManagedNodeGroupCache(&awsWrapper{nil, nil, k}) err := c.Add(managedNodegroupCachedObject{ @@ -248,6 +266,7 @@ func TestGetManagedNodegroupInfoObjectWithCachedNodegroup(t *testing.T) { clusterName: clusterName, taints: []apiv1.Taint{taint}, labels: map[string]string{labelKey: labelValue}, + tags: map[string]string{tagKey: tagValue}, }) mngInfoObject, err := c.getManagedNodegroupInfoObject(nodegroupName, clusterName) @@ -275,6 +294,11 @@ func TestGetManagedNodegroupInfoObjectNoCachedNodegroup(t *testing.T) { labelValue1 := "testValue 1" labelValue2 := "testValue 2" + tagKey1 := "tagKey 1" + tagKey2 := "tagKey 2" + tagValue1 := "tagValue 1" + tagValue2 := "tagValue 2" + // Create test nodegroup testNodegroup := eks.Nodegroup{ AmiType: 
&amiType, @@ -285,6 +309,7 @@ func TestGetManagedNodegroupInfoObjectNoCachedNodegroup(t *testing.T) { CapacityType: &capacityType, Version: &k8sVersion, Taints: nil, + Tags: map[string]*string{tagKey1: &tagValue1, tagKey2: &tagValue2}, } k.On("DescribeNodegroup", &eks.DescribeNodegroupInput{ @@ -304,6 +329,9 @@ func TestGetManagedNodegroupInfoObjectNoCachedNodegroup(t *testing.T) { assert.Equal(t, mngInfoObject.labels["capacityType"], capacityType) assert.Equal(t, mngInfoObject.labels["k8sVersion"], k8sVersion) assert.Equal(t, mngInfoObject.labels["eks.amazonaws.com/nodegroup"], nodegroupName) + assert.Equal(t, len(mngInfoObject.tags), 2) + assert.Equal(t, mngInfoObject.tags[tagKey1], tagValue1) + assert.Equal(t, mngInfoObject.tags[tagKey2], tagValue2) k.AssertCalled(t, "DescribeNodegroup", &eks.DescribeNodegroupInput{ ClusterName: &clusterName, NodegroupName: &nodegroupName, @@ -325,6 +353,8 @@ func TestGetManagedNodegroupLabelsWithCachedNodegroup(t *testing.T) { Key: taintKey, Value: taintValue, } + tagKey := "tag key 1" + tagValue := "tag value 1" c := newManagedNodeGroupCache(&awsWrapper{nil, nil, k}) err := c.Add(managedNodegroupCachedObject{ @@ -332,6 +362,7 @@ func TestGetManagedNodegroupLabelsWithCachedNodegroup(t *testing.T) { clusterName: clusterName, taints: []apiv1.Taint{taint}, labels: map[string]string{labelKey: labelValue}, + tags: map[string]string{tagKey: tagValue}, }) labelsMap, err := c.getManagedNodegroupLabels(nodegroupName, clusterName) @@ -369,6 +400,7 @@ func TestGetManagedNodegroupLabelsNoCachedNodegroup(t *testing.T) { CapacityType: &capacityType, Version: &k8sVersion, Taints: nil, + Tags: nil, } k.On("DescribeNodegroup", &eks.DescribeNodegroupInput{ @@ -419,6 +451,7 @@ func TestGetManagedNodegroupLabelsWithCachedNodegroupThatExpires(t *testing.T) { CapacityType: &capacityType, Version: &k8sVersion, Taints: nil, + Tags: nil, } k.On("DescribeNodegroup", &eks.DescribeNodegroupInput{ @@ -586,3 +619,104 @@ func 
TestGetManagedNodegroupTaintsNoCachedNodegroup(t *testing.T) { NodegroupName: &nodegroupName, }) } + +func TestGetManagedNodegroupTagsWithCachedNodegroup(t *testing.T) { + k := &eksMock{} + + nodegroupName := "nodegroupName" + clusterName := "clusterName" + labelKey := "label key 1" + labelValue := "label value 1" + taintEffect := "effect 1" + taintKey := "key 1" + taintValue := "value 1" + taint := apiv1.Taint{ + Effect: apiv1.TaintEffect(taintEffect), + Key: taintKey, + Value: taintValue, + } + tagKey := "tag key 1" + tagValue := "tag value 1" + + c := newManagedNodeGroupCache(&awsWrapper{nil, nil, k}) + err := c.Add(managedNodegroupCachedObject{ + name: nodegroupName, + clusterName: clusterName, + taints: []apiv1.Taint{taint}, + labels: map[string]string{labelKey: labelValue}, + tags: map[string]string{tagKey: tagValue}, + }) + + tagsMap, err := c.getManagedNodegroupTags(nodegroupName, clusterName) + require.NoError(t, err) + assert.Equal(t, len(tagsMap), 1) + assert.Equal(t, tagsMap[tagKey], tagValue) + k.AssertNotCalled(t, "DescribeNodegroup", &eks.DescribeNodegroupInput{ + ClusterName: &clusterName, + NodegroupName: &nodegroupName, + }) +} + +func TestGetManagedNodegroupTagsNoCachedNodegroup(t *testing.T) { + k := &eksMock{} + + nodegroupName := "testNodegroup" + clusterName := "testCluster" + amiType := "testAmiType" + capacityType := "testCapacityType" + k8sVersion := "1.19" + diskSize := int64(100) + + taintEffect1 := "effect 1" + taintKey1 := "key 1" + taintValue1 := "value 1" + taint1 := eks.Taint{ + Effect: &taintEffect1, + Key: &taintKey1, + Value: &taintValue1, + } + + taintEffect2 := "effect 2" + taintKey2 := "key 2" + taintValue2 := "value 2" + taint2 := eks.Taint{ + Effect: &taintEffect2, + Key: &taintKey2, + Value: &taintValue2, + } + + tagKey1 := "tagKey 1" + tagKey2 := "tagKey 2" + tagValue1 := "tagValue 1" + tagValue2 := "tagValue 2" + + // Create test nodegroup + testNodegroup := eks.Nodegroup{ + AmiType: &amiType, + ClusterName: &clusterName, 
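Editor's note: the wrapper change in this patch copies `r.Nodegroup.Tags` (a `map[string]*string`, the EKS SDK's tag shape) into a plain `map[string]string`, skipping nil values. A minimal standalone sketch of that conversion, with an illustrative helper name:

```go
package main

import "fmt"

// derefTags copies a map of string pointers (the EKS SDK tag shape) into a
// plain string map, dropping nil values, as getManagedNodegroupInfo does.
func derefTags(in map[string]*string) map[string]string {
	out := make(map[string]string)
	for k, v := range in {
		if v != nil {
			out[k] = *v
		}
	}
	return out
}

func main() {
	v := "value 1"
	tags := derefTags(map[string]*string{"tag 1": &v, "tag 2": nil})
	fmt.Println(len(tags), tags["tag 1"]) // the nil entry is dropped
}
```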
+		DiskSize:      &diskSize,
+		Labels:        nil,
+		NodegroupName: &nodegroupName,
+		CapacityType:  &capacityType,
+		Version:       &k8sVersion,
+		Taints:        []*eks.Taint{&taint1, &taint2},
+		Tags:          map[string]*string{tagKey1: &tagValue1, tagKey2: &tagValue2},
+	}
+
+	k.On("DescribeNodegroup", &eks.DescribeNodegroupInput{
+		ClusterName:   &clusterName,
+		NodegroupName: &nodegroupName,
+	}).Return(&eks.DescribeNodegroupOutput{Nodegroup: &testNodegroup}, nil)
+
+	c := newManagedNodeGroupCache(&awsWrapper{nil, nil, k})
+
+	tagsMap, err := c.getManagedNodegroupTags(nodegroupName, clusterName)
+	require.NoError(t, err)
+	assert.Equal(t, len(tagsMap), 2)
+	assert.Equal(t, tagsMap[tagKey1], tagValue1)
+	assert.Equal(t, tagsMap[tagKey2], tagValue2)
+	k.AssertCalled(t, "DescribeNodegroup", &eks.DescribeNodegroupInput{
+		ClusterName:   &clusterName,
+		NodegroupName: &nodegroupName,
+	})
+}
diff --git a/cluster-autoscaler/cloudprovider/azure/OWNERS b/cluster-autoscaler/cloudprovider/azure/OWNERS
index ab314a5d089e..73f4ae0914d1 100644
--- a/cluster-autoscaler/cloudprovider/azure/OWNERS
+++ b/cluster-autoscaler/cloudprovider/azure/OWNERS
@@ -10,3 +10,7 @@ reviewers:
 - tallaxes
 emeritus_approvers:
 - marwanad
+
+
+labels:
+- area/provider/azure
diff --git a/cluster-autoscaler/cloudprovider/azure/azure_scale_set.go b/cluster-autoscaler/cloudprovider/azure/azure_scale_set.go
index ffeae2b7c04a..3b7978c5868b 100644
--- a/cluster-autoscaler/cloudprovider/azure/azure_scale_set.go
+++ b/cluster-autoscaler/cloudprovider/azure/azure_scale_set.go
@@ -295,7 +295,7 @@ func (scaleSet *ScaleSet) GetScaleSetVms() ([]compute.VirtualMachineScaleSetVM,
 	defer cancel()
 
 	resourceGroup := scaleSet.manager.config.ResourceGroup
-	vmList, rerr := scaleSet.manager.azClient.virtualMachineScaleSetVMsClient.List(ctx, resourceGroup, scaleSet.Name, "")
+	vmList, rerr := scaleSet.manager.azClient.virtualMachineScaleSetVMsClient.List(ctx, resourceGroup, scaleSet.Name, "instanceView")
 
 	klog.V(4).Infof("GetScaleSetVms: scaleSet.Name: %s, vmList: %v", scaleSet.Name, vmList)
 	if rerr != nil {
 		klog.Errorf("VirtualMachineScaleSetVMsClient.List failed for %s: %v", scaleSet.Name, rerr)
@@ -612,18 +612,26 @@ func buildInstanceCache(vmList interface{}) []cloudprovider.Instance {
 	switch vms := vmList.(type) {
 	case []compute.VirtualMachineScaleSetVM:
 		for _, vm := range vms {
-			addInstanceToCache(&instances, vm.ID, vm.ProvisioningState)
+			powerState := vmPowerStateRunning
+			if vm.InstanceView != nil && vm.InstanceView.Statuses != nil {
+				powerState = vmPowerStateFromStatuses(*vm.InstanceView.Statuses)
+			}
+			addInstanceToCache(&instances, vm.ID, vm.ProvisioningState, powerState)
 		}
 	case []compute.VirtualMachine:
 		for _, vm := range vms {
-			addInstanceToCache(&instances, vm.ID, vm.ProvisioningState)
+			powerState := vmPowerStateRunning
+			if vm.InstanceView != nil && vm.InstanceView.Statuses != nil {
+				powerState = vmPowerStateFromStatuses(*vm.InstanceView.Statuses)
+			}
+			addInstanceToCache(&instances, vm.ID, vm.ProvisioningState, powerState)
 		}
 	}
 
 	return instances
 }
 
-func addInstanceToCache(instances *[]cloudprovider.Instance, id *string, provisioningState *string) {
+func addInstanceToCache(instances *[]cloudprovider.Instance, id *string, provisioningState *string, powerState string) {
 	// The resource ID is empty string, which indicates the instance may be in deleting state.
 	if len(*id) == 0 {
 		return
@@ -638,7 +646,7 @@ func addInstanceToCache(instances *[]cloudprovider.Instance, id *string, provisi
 
 	*instances = append(*instances, cloudprovider.Instance{
 		Id:     "azure://" + resourceID,
-		Status: instanceStatusFromProvisioningState(provisioningState),
+		Status: instanceStatusFromProvisioningStateAndPowerState(resourceID, provisioningState, powerState),
 	})
 }
 
@@ -665,13 +673,13 @@ func (scaleSet *ScaleSet) setInstanceStatusByProviderID(providerID string, statu
 	scaleSet.lastInstanceRefresh = time.Now()
 }
 
-// instanceStatusFromProvisioningState converts the VM provisioning state to cloudprovider.InstanceStatus
-func instanceStatusFromProvisioningState(provisioningState *string) *cloudprovider.InstanceStatus {
+// instanceStatusFromProvisioningStateAndPowerState converts the VM provisioning state and power state to cloudprovider.InstanceStatus
+func instanceStatusFromProvisioningStateAndPowerState(resourceId string, provisioningState *string, powerState string) *cloudprovider.InstanceStatus {
 	if provisioningState == nil {
 		return nil
 	}
 
-	klog.V(5).Infof("Getting vm instance provisioning state %s", *provisioningState)
+	klog.V(5).Infof("Getting vm instance provisioning state %s for %s", *provisioningState, resourceId)
 
 	status := &cloudprovider.InstanceStatus{}
 	switch *provisioningState {
@@ -679,6 +687,23 @@ func instanceStatusFromProvisioningState(provisioningState *string) *cloudprovid
 		status.State = cloudprovider.InstanceDeleting
 	case string(compute.ProvisioningStateCreating):
 		status.State = cloudprovider.InstanceCreating
+	case string(compute.ProvisioningStateFailed):
+		// Provisioning can fail both during instance creation or after the instance is running.
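Editor's note: the rule this patch introduces for `ProvisioningStateFailed` (a failed VM counts as a failed scale-up only while it has not started running) can be distilled into a small decision function. This is a sketch for illustration, using string state names instead of the `cloudprovider` constants; `classifyFailedProvisioning` is a hypothetical name, while `isRunningVmPowerState` matches the helper added in `azure_util.go`:

```go
package main

import "fmt"

const (
	vmPowerStateStarting = "PowerState/starting"
	vmPowerStateRunning  = "PowerState/running"
	vmPowerStateStopped  = "PowerState/stopped"
)

// isRunningVmPowerState mirrors the azure_util.go helper: only the running
// and starting power states count as "running".
func isRunningVmPowerState(powerState string) bool {
	return powerState == vmPowerStateRunning || powerState == vmPowerStateStarting
}

// classifyFailedProvisioning sketches the ProvisioningStateFailed branch:
// a failed VM that never started running is reported as a creation error
// (out of resources), while a running VM keeps serving despite the state.
func classifyFailedProvisioning(powerState string) (state string, outOfResources bool) {
	if !isRunningVmPowerState(powerState) {
		return "InstanceCreating", true
	}
	return "InstanceRunning", false
}

func main() {
	fmt.Println(classifyFailedProvisioning(vmPowerStateStopped))
	fmt.Println(classifyFailedProvisioning(vmPowerStateRunning))
}
```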
+		// Per https://learn.microsoft.com/en-us/azure/virtual-machines/states-billing#provisioning-states,
+		// ProvisioningState represents the most recent provisioning state, therefore only report
+		// InstanceCreating errors when the power state indicates the instance has not yet started running
+		if !isRunningVmPowerState(powerState) {
+			klog.V(4).Infof("VM %s reports failed provisioning state with non-running power state: %s", resourceId, powerState)
+			status.State = cloudprovider.InstanceCreating
+			status.ErrorInfo = &cloudprovider.InstanceErrorInfo{
+				ErrorClass:   cloudprovider.OutOfResourcesErrorClass,
+				ErrorCode:    "provisioning-state-failed",
+				ErrorMessage: "Azure failed to provision a node for this node group",
+			}
+		} else {
+			klog.V(5).Infof("VM %s reports a failed provisioning state but is running (%s)", resourceId, powerState)
+			status.State = cloudprovider.InstanceRunning
+		}
 	default:
 		status.State = cloudprovider.InstanceRunning
 	}
diff --git a/cluster-autoscaler/cloudprovider/azure/azure_scale_set_test.go b/cluster-autoscaler/cloudprovider/azure/azure_scale_set_test.go
index cf5a4f6d0cd7..0f14a75a5b93 100644
--- a/cluster-autoscaler/cloudprovider/azure/azure_scale_set_test.go
+++ b/cluster-autoscaler/cloudprovider/azure/azure_scale_set_test.go
@@ -225,6 +225,80 @@ func TestIncreaseSize(t *testing.T) {
 	}
 }
 
+func TestIncreaseSizeOnVMProvisioningFailed(t *testing.T) {
+	testCases := map[string]struct {
+		expectInstanceRunning bool
+		isMissingInstanceView bool
+		statuses              []compute.InstanceViewStatus
+	}{
+		"out of resources when no power state exists": {},
+		"out of resources when VM is stopped": {
+			statuses: []compute.InstanceViewStatus{{Code: to.StringPtr(vmPowerStateStopped)}},
+		},
+		"out of resources when VM reports invalid power state": {
+			statuses: []compute.InstanceViewStatus{{Code: to.StringPtr("PowerState/invalid")}},
+		},
+		"instance running when power state is running": {
+			expectInstanceRunning: true,
+			statuses:              []compute.InstanceViewStatus{{Code: to.StringPtr(vmPowerStateRunning)}},
+		},
+		"instance running if instance view cannot be retrieved": {
+			expectInstanceRunning: true,
+			isMissingInstanceView: true,
+		},
+	}
+	for testName, testCase := range testCases {
+		t.Run(testName, func(t *testing.T) {
+			ctrl := gomock.NewController(t)
+			defer ctrl.Finish()
+
+			manager := newTestAzureManager(t)
+			vmssName := "vmss-failed-upscale"
+
+			expectedScaleSets := newTestVMSSList(3, "vmss-failed-upscale", "eastus", compute.Uniform)
+			expectedVMSSVMs := newTestVMSSVMList(3)
+			expectedVMSSVMs[2].ProvisioningState = to.StringPtr(string(compute.ProvisioningStateFailed))
+			if !testCase.isMissingInstanceView {
+				expectedVMSSVMs[2].InstanceView = &compute.VirtualMachineScaleSetVMInstanceView{Statuses: &testCase.statuses}
+			}
+
+			mockVMSSClient := mockvmssclient.NewMockInterface(ctrl)
+			mockVMSSClient.EXPECT().List(gomock.Any(), manager.config.ResourceGroup).Return(expectedScaleSets, nil)
+			mockVMSSClient.EXPECT().CreateOrUpdateAsync(gomock.Any(), manager.config.ResourceGroup, vmssName, gomock.Any()).Return(nil, nil)
+			mockVMSSClient.EXPECT().WaitForCreateOrUpdateResult(gomock.Any(), gomock.Any(), manager.config.ResourceGroup).Return(&http.Response{StatusCode: http.StatusOK}, nil).AnyTimes()
+			manager.azClient.virtualMachineScaleSetsClient = mockVMSSClient
+			mockVMSSVMClient := mockvmssvmclient.NewMockInterface(ctrl)
+			mockVMSSVMClient.EXPECT().List(gomock.Any(), manager.config.ResourceGroup, "vmss-failed-upscale", gomock.Any()).Return(expectedVMSSVMs, nil).AnyTimes()
+			manager.azClient.virtualMachineScaleSetVMsClient = mockVMSSVMClient
+			manager.explicitlyConfigured["vmss-failed-upscale"] = true
+			registered := manager.RegisterNodeGroup(newTestScaleSet(manager, vmssName))
+			assert.True(t, registered)
+			manager.Refresh()
+
+			provider, err := BuildAzureCloudProvider(manager, nil)
+			assert.NoError(t, err)
+
+			scaleSet, ok := provider.NodeGroups()[0].(*ScaleSet)
+			assert.True(t, ok)
+
+			// Increase size by one, but the new node fails provisioning
+			err = scaleSet.IncreaseSize(1)
+			assert.NoError(t, err)
+
+			nodes, err := scaleSet.Nodes()
+			assert.NoError(t, err)
+
+			assert.Equal(t, 3, len(nodes))
+			if testCase.expectInstanceRunning {
+				assert.Equal(t, cloudprovider.InstanceRunning, nodes[2].Status.State)
+			} else {
+				assert.Equal(t, cloudprovider.InstanceCreating, nodes[2].Status.State)
+				assert.Equal(t, cloudprovider.OutOfResourcesErrorClass, nodes[2].Status.ErrorInfo.ErrorClass)
+			}
+		})
+	}
+}
+
 func TestIncreaseSizeOnVMSSUpdating(t *testing.T) {
 	ctrl := gomock.NewController(t)
 	defer ctrl.Finish()
diff --git a/cluster-autoscaler/cloudprovider/azure/azure_util.go b/cluster-autoscaler/cloudprovider/azure/azure_util.go
index 08d8c749a88d..5a389b738c63 100644
--- a/cluster-autoscaler/cloudprovider/azure/azure_util.go
+++ b/cluster-autoscaler/cloudprovider/azure/azure_util.go
@@ -82,6 +82,16 @@ const (
 	nodeTaintTagName     = "k8s.io_cluster-autoscaler_node-template_taint_"
 	nodeResourcesTagName = "k8s.io_cluster-autoscaler_node-template_resources_"
 	nodeOptionsTagName   = "k8s.io_cluster-autoscaler_node-template_autoscaling-options_"
+
+	// PowerStates reflect the operational state of a VM
+	// From https://learn.microsoft.com/en-us/java/api/com.microsoft.azure.management.compute.powerstate?view=azure-java-stable
+	vmPowerStateStarting     = "PowerState/starting"
+	vmPowerStateRunning      = "PowerState/running"
+	vmPowerStateStopping     = "PowerState/stopping"
+	vmPowerStateStopped      = "PowerState/stopped"
+	vmPowerStateDeallocating = "PowerState/deallocating"
+	vmPowerStateDeallocated  = "PowerState/deallocated"
+	vmPowerStateUnknown      = "PowerState/unknown"
 )
 
 var (
@@ -608,3 +618,32 @@ func isAzureRequestsThrottled(rerr *retry.Error) bool {
 
 	return rerr.HTTPStatusCode == http.StatusTooManyRequests
 }
+
+func isRunningVmPowerState(powerState string) bool {
+	return powerState == vmPowerStateRunning || powerState == vmPowerStateStarting
+}
+
+func isKnownVmPowerState(powerState
string) bool { + knownPowerStates := map[string]bool{ + vmPowerStateStarting: true, + vmPowerStateRunning: true, + vmPowerStateStopping: true, + vmPowerStateStopped: true, + vmPowerStateDeallocating: true, + vmPowerStateDeallocated: true, + vmPowerStateUnknown: true, + } + return knownPowerStates[powerState] +} + +func vmPowerStateFromStatuses(statuses []compute.InstanceViewStatus) string { + for _, status := range statuses { + if status.Code == nil || !isKnownVmPowerState(*status.Code) { + continue + } + return *status.Code + } + + // PowerState is not set if the VM is still creating (or has failed creation) + return vmPowerStateUnknown +} diff --git a/cluster-autoscaler/cloudprovider/baiducloud/OWNERS b/cluster-autoscaler/cloudprovider/baiducloud/OWNERS index 1e733d8af3c1..eac67ffdc30c 100644 --- a/cluster-autoscaler/cloudprovider/baiducloud/OWNERS +++ b/cluster-autoscaler/cloudprovider/baiducloud/OWNERS @@ -2,3 +2,7 @@ approvers: - hello2mao reviewers: - hello2mao + + +labels: +- area/provider/baiducloud diff --git a/cluster-autoscaler/cloudprovider/bizflycloud/OWNERS b/cluster-autoscaler/cloudprovider/bizflycloud/OWNERS new file mode 100644 index 000000000000..1af607113f0e --- /dev/null +++ b/cluster-autoscaler/cloudprovider/bizflycloud/OWNERS @@ -0,0 +1,2 @@ +labels: +- area/provider/bizflycloud diff --git a/cluster-autoscaler/cloudprovider/brightbox/Makefile b/cluster-autoscaler/cloudprovider/brightbox/Makefile index 14c20c9dd550..d7391aad7e5d 100644 --- a/cluster-autoscaler/cloudprovider/brightbox/Makefile +++ b/cluster-autoscaler/cloudprovider/brightbox/Makefile @@ -1,5 +1,5 @@ export BUILD_TAGS=brightbox -export REGISTRY=brightbox +export REGISTRY?=brightbox export GOARCH?=$(shell go env GOARCH) ifndef TAG override TAG=dev diff --git a/cluster-autoscaler/cloudprovider/brightbox/OWNERS b/cluster-autoscaler/cloudprovider/brightbox/OWNERS index 4ba5aa32e36f..27b6b5210196 100644 --- a/cluster-autoscaler/cloudprovider/brightbox/OWNERS +++ 
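The azure_util.go hunk above adds a set of power-state helpers. The sketch below restates their selection logic in a standalone form — plain strings instead of `compute.InstanceViewStatus`, and only a subset of the power-state constants — to show how the first recognised status code wins and how the lookup falls back to `PowerState/unknown` when nothing is reported:

```go
package main

import "fmt"

// Power-state codes copied from the azure_util.go hunk above (subset).
const (
	vmPowerStateStarting = "PowerState/starting"
	vmPowerStateRunning  = "PowerState/running"
	vmPowerStateStopped  = "PowerState/stopped"
	vmPowerStateUnknown  = "PowerState/unknown"
)

// isRunningVmPowerState mirrors the helper added in the patch.
func isRunningVmPowerState(powerState string) bool {
	return powerState == vmPowerStateRunning || powerState == vmPowerStateStarting
}

// vmPowerStateFromStatuses mirrors the patch logic, but takes plain
// status codes instead of compute.InstanceViewStatus to stay self-contained.
func vmPowerStateFromStatuses(codes []string) string {
	known := map[string]bool{
		vmPowerStateStarting: true,
		vmPowerStateRunning:  true,
		vmPowerStateStopped:  true,
	}
	for _, code := range codes {
		if known[code] {
			return code
		}
	}
	// No power state is reported while the VM is still creating
	// (or has failed creation), hence the unknown fallback.
	return vmPowerStateUnknown
}

func main() {
	// Non-power-state codes (e.g. provisioning states) are skipped.
	fmt.Println(vmPowerStateFromStatuses([]string{"ProvisioningState/failed", vmPowerStateStopped})) // PowerState/stopped
	fmt.Println(vmPowerStateFromStatuses(nil))                                                       // PowerState/unknown
}
```

This is why the new `InstanceCreating`/`OutOfResourcesErrorClass` branch only fires when `isRunningVmPowerState` is false: a failed provisioning state on a VM that never reached a running power state is treated as a failed scale-up.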
b/cluster-autoscaler/cloudprovider/brightbox/OWNERS @@ -4,3 +4,6 @@ approvers: reviewers: # - NeilW # - johnl + +labels: +- area/provider/brightbox diff --git a/cluster-autoscaler/cloudprovider/brightbox/README.md b/cluster-autoscaler/cloudprovider/brightbox/README.md index 270fd1a0e5d4..2823105845bf 100644 --- a/cluster-autoscaler/cloudprovider/brightbox/README.md +++ b/cluster-autoscaler/cloudprovider/brightbox/README.md @@ -161,8 +161,8 @@ $ make build This builds an autoscaler containing only the Brightbox Cloud provider, tagged as `brightbox/cluster-autoscaler-brightbox:dev`. To build any -other version add a TAG variable +other version add a TAG variable and/or a REGISTRY variable ``` -make build TAG=1.1x +make build TAG=1.1x REGISTRY=cr.brightbox.com/acc-xxxxx/ ``` diff --git a/cluster-autoscaler/cloudprovider/brightbox/brightbox_node_group.go b/cluster-autoscaler/cloudprovider/brightbox/brightbox_node_group.go index 6ce89bbbf98f..75d03de79caf 100644 --- a/cluster-autoscaler/cloudprovider/brightbox/brightbox_node_group.go +++ b/cluster-autoscaler/cloudprovider/brightbox/brightbox_node_group.go @@ -359,24 +359,32 @@ func makeNodeGroupFromAPIDetails( if mapData["server_group"] == "" { return nil, cloudprovider.ErrIllegalConfiguration } + ng := brightboxNodeGroup{ + id: mapData["server_group"], + minSize: minSize, + maxSize: maxSize, + Cloud: cloudclient, + } + imageID := mapData["image"] + if !(len(imageID) == 9 && strings.HasPrefix(imageID, "img-")) { + image, err := ng.GetImageByName(imageID) + if err != nil || image == nil { + return nil, cloudprovider.ErrIllegalConfiguration + } + imageID = image.Id + } userData := mapData["user_data"] options := &brightbox.ServerOptions{ - Image: mapData["image"], + Image: imageID, Name: &name, ServerType: mapData["type"], Zone: mapData["zone"], UserData: &userData, ServerGroups: mergeServerGroups(mapData), } - result := brightboxNodeGroup{ - id: mapData["server_group"], - minSize: minSize, - maxSize: maxSize, - 
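The image handling added to makeNodeGroupFromAPIDetails above hinges on a small heuristic: a value that is exactly nine characters and starts with `img-` is taken as a literal Brightbox image ID, and anything else is treated as a name pattern to resolve via `GetImageByName`. A minimal standalone sketch of that check (the `isImageID` name is illustrative, not from the patch, which inlines the condition):

```go
package main

import (
	"fmt"
	"strings"
)

// isImageID mirrors the inline check in makeNodeGroupFromAPIDetails:
// Brightbox image identifiers are exactly nine characters and start
// with "img-"; anything else is resolved as a name pattern.
func isImageID(imageID string) bool {
	return len(imageID) == 9 && strings.HasPrefix(imageID, "img-")
}

func main() {
	fmt.Println(isImageID("img-12345"))            // true: literal ID, used as-is
	fmt.Println(isImageID("ubuntu-22.04.*server")) // false: pattern, looked up by name
}
```

Resolving patterns at node-group construction time is what lets long-running clusters keep scaling from the latest official image as older ones are retired.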
serverOptions: options, - Cloud: cloudclient, - } - klog.V(4).Info(result.Debug()) - return &result, nil + ng.serverOptions = options + klog.V(4).Info(ng.Debug()) + return &ng, nil } func mergeServerGroups(data map[string]string) []string { diff --git a/cluster-autoscaler/cloudprovider/brightbox/k8ssdk/brightbox_interface.go b/cluster-autoscaler/cloudprovider/brightbox/k8ssdk/brightbox_interface.go index 5fffa2025143..131e1c5a541b 100644 --- a/cluster-autoscaler/cloudprovider/brightbox/k8ssdk/brightbox_interface.go +++ b/cluster-autoscaler/cloudprovider/brightbox/k8ssdk/brightbox_interface.go @@ -18,6 +18,8 @@ import ( "context" "fmt" "net/http" + "regexp" + "sort" "strings" brightbox "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/brightbox/gobrightbox" @@ -166,6 +168,36 @@ func (c *Cloud) GetServerType(identifier string) (*brightbox.ServerType, error) return client.ServerType(identifier) } +// GetImageByName obtains the most recent available image that matches the supplied pattern +func (c *Cloud) GetImageByName(name string) (*brightbox.Image, error) { + klog.V(4).Infof("GetImageByName %q", name) + client, err := c.CloudClient() + if err != nil { + return nil, err + } + klog.V(6).Info("GetImageByName compiling regexp") + nameRe, err := regexp.Compile(name) + if err != nil { + return nil, err + } + klog.V(6).Info("GetImageByName retrieving images") + images, err := client.Images() + if err != nil { + return nil, err + } + klog.V(6).Info("GetImageByName filtering images") + filteredImages := filter( + images, + func(i brightbox.Image) bool { + return i.Official && + i.Status == status.Available && + nameRe.MatchString(i.Name) + }, + ) + klog.V(6).Infof("GetImageByName images selected (%+v)", filteredImages) + return mostRecent(filteredImages), nil +} + // GetConfigMaps obtains the list of Config Maps on the account func (c *Cloud) GetConfigMaps() ([]brightbox.ConfigMap, error) { klog.V(4).Info("GetConfigMaps") @@ -555,3 +587,27 @@ func 
ErrorIfAcmeNotComplete(acme *brightbox.LoadBalancerAcme) error { } return nil } + +// mostRecent returns the most recent image out of a slice of images, +// or nil if there are none +func mostRecent(items []brightbox.Image) *brightbox.Image { + if len(items) == 0 { + return nil + } + sortedItems := items + sort.Slice(sortedItems, func(i, j int) bool { + return sortedItems[i].CreatedAt.Unix() > sortedItems[j].CreatedAt.Unix() + }) + return &sortedItems[0] +} + +// filter returns a new slice containing the elements of the +// input slice for which the provided predicate function returns true. +func filter[T any](input []T, pred func(T) bool) (output []T) { + for _, v := range input { + if pred(v) { + output = append(output, v) + } + } + return output +} diff --git a/cluster-autoscaler/cloudprovider/brightbox/k8ssdk/cloud_access.go b/cluster-autoscaler/cloudprovider/brightbox/k8ssdk/cloud_access.go index d1ad5d55056b..f544edf39ac6 100644 --- a/cluster-autoscaler/cloudprovider/brightbox/k8ssdk/cloud_access.go +++ b/cluster-autoscaler/cloudprovider/brightbox/k8ssdk/cloud_access.go @@ -92,6 +92,9 @@ type CloudAccess interface { // DestroyCloudIP issues a request to destroy the cloud ip DestroyCloudIP(identifier string) error + // Images retrieves a list of all images + Images() ([]brightbox.Image, error) + // ConfigMaps retrieves a list of all config maps ConfigMaps() ([]brightbox.ConfigMap, error) diff --git a/cluster-autoscaler/cloudprovider/brightbox/k8ssdk/mocks/CloudAccess.go b/cluster-autoscaler/cloudprovider/brightbox/k8ssdk/mocks/CloudAccess.go index 04e09aeb4f35..271f2e728f5a 100644 --- a/cluster-autoscaler/cloudprovider/brightbox/k8ssdk/mocks/CloudAccess.go +++ b/cluster-autoscaler/cloudprovider/brightbox/k8ssdk/mocks/CloudAccess.go @@ -12,10 +12,12 @@ // See the License for the specific language governing permissions and // limitations under the License. +// Code generated by mockery v2.20.0. DO NOT EDIT. 
+ package mocks import ( - brightbox "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/brightbox/gobrightbox" + gobrightbox "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/brightbox/gobrightbox" mock "github.com/stretchr/testify/mock" ) @@ -26,19 +28,22 @@ type CloudAccess struct { } // AddServersToServerGroup provides a mock function with given fields: identifier, serverIds -func (_m *CloudAccess) AddServersToServerGroup(identifier string, serverIds []string) (*brightbox.ServerGroup, error) { +func (_m *CloudAccess) AddServersToServerGroup(identifier string, serverIds []string) (*gobrightbox.ServerGroup, error) { ret := _m.Called(identifier, serverIds) - var r0 *brightbox.ServerGroup - if rf, ok := ret.Get(0).(func(string, []string) *brightbox.ServerGroup); ok { + var r0 *gobrightbox.ServerGroup + var r1 error + if rf, ok := ret.Get(0).(func(string, []string) (*gobrightbox.ServerGroup, error)); ok { + return rf(identifier, serverIds) + } + if rf, ok := ret.Get(0).(func(string, []string) *gobrightbox.ServerGroup); ok { r0 = rf(identifier, serverIds) } else { if ret.Get(0) != nil { - r0 = ret.Get(0).(*brightbox.ServerGroup) + r0 = ret.Get(0).(*gobrightbox.ServerGroup) } } - var r1 error if rf, ok := ret.Get(1).(func(string, []string) error); ok { r1 = rf(identifier, serverIds) } else { @@ -49,19 +54,22 @@ func (_m *CloudAccess) AddServersToServerGroup(identifier string, serverIds []st } // CloudIP provides a mock function with given fields: identifier -func (_m *CloudAccess) CloudIP(identifier string) (*brightbox.CloudIP, error) { +func (_m *CloudAccess) CloudIP(identifier string) (*gobrightbox.CloudIP, error) { ret := _m.Called(identifier) - var r0 *brightbox.CloudIP - if rf, ok := ret.Get(0).(func(string) *brightbox.CloudIP); ok { + var r0 *gobrightbox.CloudIP + var r1 error + if rf, ok := ret.Get(0).(func(string) (*gobrightbox.CloudIP, error)); ok { + return rf(identifier) + } + if rf, ok := ret.Get(0).(func(string) *gobrightbox.CloudIP); ok { r0 = 
rf(identifier) } else { if ret.Get(0) != nil { - r0 = ret.Get(0).(*brightbox.CloudIP) + r0 = ret.Get(0).(*gobrightbox.CloudIP) } } - var r1 error if rf, ok := ret.Get(1).(func(string) error); ok { r1 = rf(identifier) } else { @@ -72,19 +80,22 @@ func (_m *CloudAccess) CloudIP(identifier string) (*brightbox.CloudIP, error) { } // CloudIPs provides a mock function with given fields: -func (_m *CloudAccess) CloudIPs() ([]brightbox.CloudIP, error) { +func (_m *CloudAccess) CloudIPs() ([]gobrightbox.CloudIP, error) { ret := _m.Called() - var r0 []brightbox.CloudIP - if rf, ok := ret.Get(0).(func() []brightbox.CloudIP); ok { + var r0 []gobrightbox.CloudIP + var r1 error + if rf, ok := ret.Get(0).(func() ([]gobrightbox.CloudIP, error)); ok { + return rf() + } + if rf, ok := ret.Get(0).(func() []gobrightbox.CloudIP); ok { r0 = rf() } else { if ret.Get(0) != nil { - r0 = ret.Get(0).([]brightbox.CloudIP) + r0 = ret.Get(0).([]gobrightbox.CloudIP) } } - var r1 error if rf, ok := ret.Get(1).(func() error); ok { r1 = rf() } else { @@ -95,19 +106,22 @@ func (_m *CloudAccess) CloudIPs() ([]brightbox.CloudIP, error) { } // ConfigMap provides a mock function with given fields: identifier -func (_m *CloudAccess) ConfigMap(identifier string) (*brightbox.ConfigMap, error) { +func (_m *CloudAccess) ConfigMap(identifier string) (*gobrightbox.ConfigMap, error) { ret := _m.Called(identifier) - var r0 *brightbox.ConfigMap - if rf, ok := ret.Get(0).(func(string) *brightbox.ConfigMap); ok { + var r0 *gobrightbox.ConfigMap + var r1 error + if rf, ok := ret.Get(0).(func(string) (*gobrightbox.ConfigMap, error)); ok { + return rf(identifier) + } + if rf, ok := ret.Get(0).(func(string) *gobrightbox.ConfigMap); ok { r0 = rf(identifier) } else { if ret.Get(0) != nil { - r0 = ret.Get(0).(*brightbox.ConfigMap) + r0 = ret.Get(0).(*gobrightbox.ConfigMap) } } - var r1 error if rf, ok := ret.Get(1).(func(string) error); ok { r1 = rf(identifier) } else { @@ -118,19 +132,22 @@ func (_m *CloudAccess) 
ConfigMap(identifier string) (*brightbox.ConfigMap, error } // ConfigMaps provides a mock function with given fields: -func (_m *CloudAccess) ConfigMaps() ([]brightbox.ConfigMap, error) { +func (_m *CloudAccess) ConfigMaps() ([]gobrightbox.ConfigMap, error) { ret := _m.Called() - var r0 []brightbox.ConfigMap - if rf, ok := ret.Get(0).(func() []brightbox.ConfigMap); ok { + var r0 []gobrightbox.ConfigMap + var r1 error + if rf, ok := ret.Get(0).(func() ([]gobrightbox.ConfigMap, error)); ok { + return rf() + } + if rf, ok := ret.Get(0).(func() []gobrightbox.ConfigMap); ok { r0 = rf() } else { if ret.Get(0) != nil { - r0 = ret.Get(0).([]brightbox.ConfigMap) + r0 = ret.Get(0).([]gobrightbox.ConfigMap) } } - var r1 error if rf, ok := ret.Get(1).(func() error); ok { r1 = rf() } else { @@ -141,20 +158,23 @@ func (_m *CloudAccess) ConfigMaps() ([]brightbox.ConfigMap, error) { } // CreateCloudIP provides a mock function with given fields: newCloudIP -func (_m *CloudAccess) CreateCloudIP(newCloudIP *brightbox.CloudIPOptions) (*brightbox.CloudIP, error) { +func (_m *CloudAccess) CreateCloudIP(newCloudIP *gobrightbox.CloudIPOptions) (*gobrightbox.CloudIP, error) { ret := _m.Called(newCloudIP) - var r0 *brightbox.CloudIP - if rf, ok := ret.Get(0).(func(*brightbox.CloudIPOptions) *brightbox.CloudIP); ok { + var r0 *gobrightbox.CloudIP + var r1 error + if rf, ok := ret.Get(0).(func(*gobrightbox.CloudIPOptions) (*gobrightbox.CloudIP, error)); ok { + return rf(newCloudIP) + } + if rf, ok := ret.Get(0).(func(*gobrightbox.CloudIPOptions) *gobrightbox.CloudIP); ok { r0 = rf(newCloudIP) } else { if ret.Get(0) != nil { - r0 = ret.Get(0).(*brightbox.CloudIP) + r0 = ret.Get(0).(*gobrightbox.CloudIP) } } - var r1 error - if rf, ok := ret.Get(1).(func(*brightbox.CloudIPOptions) error); ok { + if rf, ok := ret.Get(1).(func(*gobrightbox.CloudIPOptions) error); ok { r1 = rf(newCloudIP) } else { r1 = ret.Error(1) @@ -164,20 +184,23 @@ func (_m *CloudAccess) CreateCloudIP(newCloudIP 
*brightbox.CloudIPOptions) (*bri } // CreateFirewallPolicy provides a mock function with given fields: policyOptions -func (_m *CloudAccess) CreateFirewallPolicy(policyOptions *brightbox.FirewallPolicyOptions) (*brightbox.FirewallPolicy, error) { +func (_m *CloudAccess) CreateFirewallPolicy(policyOptions *gobrightbox.FirewallPolicyOptions) (*gobrightbox.FirewallPolicy, error) { ret := _m.Called(policyOptions) - var r0 *brightbox.FirewallPolicy - if rf, ok := ret.Get(0).(func(*brightbox.FirewallPolicyOptions) *brightbox.FirewallPolicy); ok { + var r0 *gobrightbox.FirewallPolicy + var r1 error + if rf, ok := ret.Get(0).(func(*gobrightbox.FirewallPolicyOptions) (*gobrightbox.FirewallPolicy, error)); ok { + return rf(policyOptions) + } + if rf, ok := ret.Get(0).(func(*gobrightbox.FirewallPolicyOptions) *gobrightbox.FirewallPolicy); ok { r0 = rf(policyOptions) } else { if ret.Get(0) != nil { - r0 = ret.Get(0).(*brightbox.FirewallPolicy) + r0 = ret.Get(0).(*gobrightbox.FirewallPolicy) } } - var r1 error - if rf, ok := ret.Get(1).(func(*brightbox.FirewallPolicyOptions) error); ok { + if rf, ok := ret.Get(1).(func(*gobrightbox.FirewallPolicyOptions) error); ok { r1 = rf(policyOptions) } else { r1 = ret.Error(1) @@ -187,20 +210,23 @@ func (_m *CloudAccess) CreateFirewallPolicy(policyOptions *brightbox.FirewallPol } // CreateFirewallRule provides a mock function with given fields: ruleOptions -func (_m *CloudAccess) CreateFirewallRule(ruleOptions *brightbox.FirewallRuleOptions) (*brightbox.FirewallRule, error) { +func (_m *CloudAccess) CreateFirewallRule(ruleOptions *gobrightbox.FirewallRuleOptions) (*gobrightbox.FirewallRule, error) { ret := _m.Called(ruleOptions) - var r0 *brightbox.FirewallRule - if rf, ok := ret.Get(0).(func(*brightbox.FirewallRuleOptions) *brightbox.FirewallRule); ok { + var r0 *gobrightbox.FirewallRule + var r1 error + if rf, ok := ret.Get(0).(func(*gobrightbox.FirewallRuleOptions) (*gobrightbox.FirewallRule, error)); ok { + return rf(ruleOptions) + } 
+ if rf, ok := ret.Get(0).(func(*gobrightbox.FirewallRuleOptions) *gobrightbox.FirewallRule); ok { r0 = rf(ruleOptions) } else { if ret.Get(0) != nil { - r0 = ret.Get(0).(*brightbox.FirewallRule) + r0 = ret.Get(0).(*gobrightbox.FirewallRule) } } - var r1 error - if rf, ok := ret.Get(1).(func(*brightbox.FirewallRuleOptions) error); ok { + if rf, ok := ret.Get(1).(func(*gobrightbox.FirewallRuleOptions) error); ok { r1 = rf(ruleOptions) } else { r1 = ret.Error(1) @@ -210,20 +236,23 @@ func (_m *CloudAccess) CreateFirewallRule(ruleOptions *brightbox.FirewallRuleOpt } // CreateLoadBalancer provides a mock function with given fields: newDetails -func (_m *CloudAccess) CreateLoadBalancer(newDetails *brightbox.LoadBalancerOptions) (*brightbox.LoadBalancer, error) { +func (_m *CloudAccess) CreateLoadBalancer(newDetails *gobrightbox.LoadBalancerOptions) (*gobrightbox.LoadBalancer, error) { ret := _m.Called(newDetails) - var r0 *brightbox.LoadBalancer - if rf, ok := ret.Get(0).(func(*brightbox.LoadBalancerOptions) *brightbox.LoadBalancer); ok { + var r0 *gobrightbox.LoadBalancer + var r1 error + if rf, ok := ret.Get(0).(func(*gobrightbox.LoadBalancerOptions) (*gobrightbox.LoadBalancer, error)); ok { + return rf(newDetails) + } + if rf, ok := ret.Get(0).(func(*gobrightbox.LoadBalancerOptions) *gobrightbox.LoadBalancer); ok { r0 = rf(newDetails) } else { if ret.Get(0) != nil { - r0 = ret.Get(0).(*brightbox.LoadBalancer) + r0 = ret.Get(0).(*gobrightbox.LoadBalancer) } } - var r1 error - if rf, ok := ret.Get(1).(func(*brightbox.LoadBalancerOptions) error); ok { + if rf, ok := ret.Get(1).(func(*gobrightbox.LoadBalancerOptions) error); ok { r1 = rf(newDetails) } else { r1 = ret.Error(1) @@ -233,20 +262,23 @@ func (_m *CloudAccess) CreateLoadBalancer(newDetails *brightbox.LoadBalancerOpti } // CreateServer provides a mock function with given fields: newServer -func (_m *CloudAccess) CreateServer(newServer *brightbox.ServerOptions) (*brightbox.Server, error) { +func (_m *CloudAccess) 
CreateServer(newServer *gobrightbox.ServerOptions) (*gobrightbox.Server, error) { ret := _m.Called(newServer) - var r0 *brightbox.Server - if rf, ok := ret.Get(0).(func(*brightbox.ServerOptions) *brightbox.Server); ok { + var r0 *gobrightbox.Server + var r1 error + if rf, ok := ret.Get(0).(func(*gobrightbox.ServerOptions) (*gobrightbox.Server, error)); ok { + return rf(newServer) + } + if rf, ok := ret.Get(0).(func(*gobrightbox.ServerOptions) *gobrightbox.Server); ok { r0 = rf(newServer) } else { if ret.Get(0) != nil { - r0 = ret.Get(0).(*brightbox.Server) + r0 = ret.Get(0).(*gobrightbox.Server) } } - var r1 error - if rf, ok := ret.Get(1).(func(*brightbox.ServerOptions) error); ok { + if rf, ok := ret.Get(1).(func(*gobrightbox.ServerOptions) error); ok { r1 = rf(newServer) } else { r1 = ret.Error(1) @@ -256,20 +288,23 @@ func (_m *CloudAccess) CreateServer(newServer *brightbox.ServerOptions) (*bright } // CreateServerGroup provides a mock function with given fields: newServerGroup -func (_m *CloudAccess) CreateServerGroup(newServerGroup *brightbox.ServerGroupOptions) (*brightbox.ServerGroup, error) { +func (_m *CloudAccess) CreateServerGroup(newServerGroup *gobrightbox.ServerGroupOptions) (*gobrightbox.ServerGroup, error) { ret := _m.Called(newServerGroup) - var r0 *brightbox.ServerGroup - if rf, ok := ret.Get(0).(func(*brightbox.ServerGroupOptions) *brightbox.ServerGroup); ok { + var r0 *gobrightbox.ServerGroup + var r1 error + if rf, ok := ret.Get(0).(func(*gobrightbox.ServerGroupOptions) (*gobrightbox.ServerGroup, error)); ok { + return rf(newServerGroup) + } + if rf, ok := ret.Get(0).(func(*gobrightbox.ServerGroupOptions) *gobrightbox.ServerGroup); ok { r0 = rf(newServerGroup) } else { if ret.Get(0) != nil { - r0 = ret.Get(0).(*brightbox.ServerGroup) + r0 = ret.Get(0).(*gobrightbox.ServerGroup) } } - var r1 error - if rf, ok := ret.Get(1).(func(*brightbox.ServerGroupOptions) error); ok { + if rf, ok := ret.Get(1).(func(*gobrightbox.ServerGroupOptions) error); 
ok { r1 = rf(newServerGroup) } else { r1 = ret.Error(1) @@ -349,19 +384,48 @@ func (_m *CloudAccess) DestroyServerGroup(identifier string) error { } // FirewallPolicies provides a mock function with given fields: -func (_m *CloudAccess) FirewallPolicies() ([]brightbox.FirewallPolicy, error) { +func (_m *CloudAccess) FirewallPolicies() ([]gobrightbox.FirewallPolicy, error) { ret := _m.Called() - var r0 []brightbox.FirewallPolicy - if rf, ok := ret.Get(0).(func() []brightbox.FirewallPolicy); ok { + var r0 []gobrightbox.FirewallPolicy + var r1 error + if rf, ok := ret.Get(0).(func() ([]gobrightbox.FirewallPolicy, error)); ok { + return rf() + } + if rf, ok := ret.Get(0).(func() []gobrightbox.FirewallPolicy); ok { r0 = rf() } else { if ret.Get(0) != nil { - r0 = ret.Get(0).([]brightbox.FirewallPolicy) + r0 = ret.Get(0).([]gobrightbox.FirewallPolicy) } } + if rf, ok := ret.Get(1).(func() error); ok { + r1 = rf() + } else { + r1 = ret.Error(1) + } + + return r0, r1 +} + +// Images provides a mock function with given fields: +func (_m *CloudAccess) Images() ([]gobrightbox.Image, error) { + ret := _m.Called() + + var r0 []gobrightbox.Image var r1 error + if rf, ok := ret.Get(0).(func() ([]gobrightbox.Image, error)); ok { + return rf() + } + if rf, ok := ret.Get(0).(func() []gobrightbox.Image); ok { + r0 = rf() + } else { + if ret.Get(0) != nil { + r0 = ret.Get(0).([]gobrightbox.Image) + } + } + if rf, ok := ret.Get(1).(func() error); ok { r1 = rf() } else { @@ -372,19 +436,22 @@ func (_m *CloudAccess) FirewallPolicies() ([]brightbox.FirewallPolicy, error) { } // LoadBalancer provides a mock function with given fields: identifier -func (_m *CloudAccess) LoadBalancer(identifier string) (*brightbox.LoadBalancer, error) { +func (_m *CloudAccess) LoadBalancer(identifier string) (*gobrightbox.LoadBalancer, error) { ret := _m.Called(identifier) - var r0 *brightbox.LoadBalancer - if rf, ok := ret.Get(0).(func(string) *brightbox.LoadBalancer); ok { + var r0 
*gobrightbox.LoadBalancer + var r1 error + if rf, ok := ret.Get(0).(func(string) (*gobrightbox.LoadBalancer, error)); ok { + return rf(identifier) + } + if rf, ok := ret.Get(0).(func(string) *gobrightbox.LoadBalancer); ok { r0 = rf(identifier) } else { if ret.Get(0) != nil { - r0 = ret.Get(0).(*brightbox.LoadBalancer) + r0 = ret.Get(0).(*gobrightbox.LoadBalancer) } } - var r1 error if rf, ok := ret.Get(1).(func(string) error); ok { r1 = rf(identifier) } else { @@ -395,19 +462,22 @@ func (_m *CloudAccess) LoadBalancer(identifier string) (*brightbox.LoadBalancer, } // LoadBalancers provides a mock function with given fields: -func (_m *CloudAccess) LoadBalancers() ([]brightbox.LoadBalancer, error) { +func (_m *CloudAccess) LoadBalancers() ([]gobrightbox.LoadBalancer, error) { ret := _m.Called() - var r0 []brightbox.LoadBalancer - if rf, ok := ret.Get(0).(func() []brightbox.LoadBalancer); ok { + var r0 []gobrightbox.LoadBalancer + var r1 error + if rf, ok := ret.Get(0).(func() ([]gobrightbox.LoadBalancer, error)); ok { + return rf() + } + if rf, ok := ret.Get(0).(func() []gobrightbox.LoadBalancer); ok { r0 = rf() } else { if ret.Get(0) != nil { - r0 = ret.Get(0).([]brightbox.LoadBalancer) + r0 = ret.Get(0).([]gobrightbox.LoadBalancer) } } - var r1 error if rf, ok := ret.Get(1).(func() error); ok { r1 = rf() } else { @@ -432,19 +502,22 @@ func (_m *CloudAccess) MapCloudIP(identifier string, destination string) error { } // RemoveServersFromServerGroup provides a mock function with given fields: identifier, serverIds -func (_m *CloudAccess) RemoveServersFromServerGroup(identifier string, serverIds []string) (*brightbox.ServerGroup, error) { +func (_m *CloudAccess) RemoveServersFromServerGroup(identifier string, serverIds []string) (*gobrightbox.ServerGroup, error) { ret := _m.Called(identifier, serverIds) - var r0 *brightbox.ServerGroup - if rf, ok := ret.Get(0).(func(string, []string) *brightbox.ServerGroup); ok { + var r0 *gobrightbox.ServerGroup + var r1 error + if 
rf, ok := ret.Get(0).(func(string, []string) (*gobrightbox.ServerGroup, error)); ok { + return rf(identifier, serverIds) + } + if rf, ok := ret.Get(0).(func(string, []string) *gobrightbox.ServerGroup); ok { r0 = rf(identifier, serverIds) } else { if ret.Get(0) != nil { - r0 = ret.Get(0).(*brightbox.ServerGroup) + r0 = ret.Get(0).(*gobrightbox.ServerGroup) } } - var r1 error if rf, ok := ret.Get(1).(func(string, []string) error); ok { r1 = rf(identifier, serverIds) } else { @@ -455,19 +528,22 @@ func (_m *CloudAccess) RemoveServersFromServerGroup(identifier string, serverIds } // Server provides a mock function with given fields: identifier -func (_m *CloudAccess) Server(identifier string) (*brightbox.Server, error) { +func (_m *CloudAccess) Server(identifier string) (*gobrightbox.Server, error) { ret := _m.Called(identifier) - var r0 *brightbox.Server - if rf, ok := ret.Get(0).(func(string) *brightbox.Server); ok { + var r0 *gobrightbox.Server + var r1 error + if rf, ok := ret.Get(0).(func(string) (*gobrightbox.Server, error)); ok { + return rf(identifier) + } + if rf, ok := ret.Get(0).(func(string) *gobrightbox.Server); ok { r0 = rf(identifier) } else { if ret.Get(0) != nil { - r0 = ret.Get(0).(*brightbox.Server) + r0 = ret.Get(0).(*gobrightbox.Server) } } - var r1 error if rf, ok := ret.Get(1).(func(string) error); ok { r1 = rf(identifier) } else { @@ -478,19 +554,22 @@ func (_m *CloudAccess) Server(identifier string) (*brightbox.Server, error) { } // ServerGroup provides a mock function with given fields: identifier -func (_m *CloudAccess) ServerGroup(identifier string) (*brightbox.ServerGroup, error) { +func (_m *CloudAccess) ServerGroup(identifier string) (*gobrightbox.ServerGroup, error) { ret := _m.Called(identifier) - var r0 *brightbox.ServerGroup - if rf, ok := ret.Get(0).(func(string) *brightbox.ServerGroup); ok { + var r0 *gobrightbox.ServerGroup + var r1 error + if rf, ok := ret.Get(0).(func(string) (*gobrightbox.ServerGroup, error)); ok { + return 
rf(identifier) + } + if rf, ok := ret.Get(0).(func(string) *gobrightbox.ServerGroup); ok { r0 = rf(identifier) } else { if ret.Get(0) != nil { - r0 = ret.Get(0).(*brightbox.ServerGroup) + r0 = ret.Get(0).(*gobrightbox.ServerGroup) } } - var r1 error if rf, ok := ret.Get(1).(func(string) error); ok { r1 = rf(identifier) } else { @@ -501,19 +580,22 @@ func (_m *CloudAccess) ServerGroup(identifier string) (*brightbox.ServerGroup, e } // ServerGroups provides a mock function with given fields: -func (_m *CloudAccess) ServerGroups() ([]brightbox.ServerGroup, error) { +func (_m *CloudAccess) ServerGroups() ([]gobrightbox.ServerGroup, error) { ret := _m.Called() - var r0 []brightbox.ServerGroup - if rf, ok := ret.Get(0).(func() []brightbox.ServerGroup); ok { + var r0 []gobrightbox.ServerGroup + var r1 error + if rf, ok := ret.Get(0).(func() ([]gobrightbox.ServerGroup, error)); ok { + return rf() + } + if rf, ok := ret.Get(0).(func() []gobrightbox.ServerGroup); ok { r0 = rf() } else { if ret.Get(0) != nil { - r0 = ret.Get(0).([]brightbox.ServerGroup) + r0 = ret.Get(0).([]gobrightbox.ServerGroup) } } - var r1 error if rf, ok := ret.Get(1).(func() error); ok { r1 = rf() } else { @@ -524,19 +606,22 @@ func (_m *CloudAccess) ServerGroups() ([]brightbox.ServerGroup, error) { } // ServerType provides a mock function with given fields: identifier -func (_m *CloudAccess) ServerType(identifier string) (*brightbox.ServerType, error) { +func (_m *CloudAccess) ServerType(identifier string) (*gobrightbox.ServerType, error) { ret := _m.Called(identifier) - var r0 *brightbox.ServerType - if rf, ok := ret.Get(0).(func(string) *brightbox.ServerType); ok { + var r0 *gobrightbox.ServerType + var r1 error + if rf, ok := ret.Get(0).(func(string) (*gobrightbox.ServerType, error)); ok { + return rf(identifier) + } + if rf, ok := ret.Get(0).(func(string) *gobrightbox.ServerType); ok { r0 = rf(identifier) } else { if ret.Get(0) != nil { - r0 = ret.Get(0).(*brightbox.ServerType) + r0 = 
ret.Get(0).(*gobrightbox.ServerType) } } - var r1 error if rf, ok := ret.Get(1).(func(string) error); ok { r1 = rf(identifier) } else { @@ -547,19 +632,22 @@ func (_m *CloudAccess) ServerType(identifier string) (*brightbox.ServerType, err } // ServerTypes provides a mock function with given fields: -func (_m *CloudAccess) ServerTypes() ([]brightbox.ServerType, error) { +func (_m *CloudAccess) ServerTypes() ([]gobrightbox.ServerType, error) { ret := _m.Called() - var r0 []brightbox.ServerType - if rf, ok := ret.Get(0).(func() []brightbox.ServerType); ok { + var r0 []gobrightbox.ServerType + var r1 error + if rf, ok := ret.Get(0).(func() ([]gobrightbox.ServerType, error)); ok { + return rf() + } + if rf, ok := ret.Get(0).(func() []gobrightbox.ServerType); ok { r0 = rf() } else { if ret.Get(0) != nil { - r0 = ret.Get(0).([]brightbox.ServerType) + r0 = ret.Get(0).([]gobrightbox.ServerType) } } - var r1 error if rf, ok := ret.Get(1).(func() error); ok { r1 = rf() } else { @@ -584,20 +672,23 @@ func (_m *CloudAccess) UnMapCloudIP(identifier string) error { } // UpdateFirewallRule provides a mock function with given fields: ruleOptions -func (_m *CloudAccess) UpdateFirewallRule(ruleOptions *brightbox.FirewallRuleOptions) (*brightbox.FirewallRule, error) { +func (_m *CloudAccess) UpdateFirewallRule(ruleOptions *gobrightbox.FirewallRuleOptions) (*gobrightbox.FirewallRule, error) { ret := _m.Called(ruleOptions) - var r0 *brightbox.FirewallRule - if rf, ok := ret.Get(0).(func(*brightbox.FirewallRuleOptions) *brightbox.FirewallRule); ok { + var r0 *gobrightbox.FirewallRule + var r1 error + if rf, ok := ret.Get(0).(func(*gobrightbox.FirewallRuleOptions) (*gobrightbox.FirewallRule, error)); ok { + return rf(ruleOptions) + } + if rf, ok := ret.Get(0).(func(*gobrightbox.FirewallRuleOptions) *gobrightbox.FirewallRule); ok { r0 = rf(ruleOptions) } else { if ret.Get(0) != nil { - r0 = ret.Get(0).(*brightbox.FirewallRule) + r0 = ret.Get(0).(*gobrightbox.FirewallRule) } } - var r1 
error - if rf, ok := ret.Get(1).(func(*brightbox.FirewallRuleOptions) error); ok { + if rf, ok := ret.Get(1).(func(*gobrightbox.FirewallRuleOptions) error); ok { r1 = rf(ruleOptions) } else { r1 = ret.Error(1) @@ -607,20 +698,23 @@ func (_m *CloudAccess) UpdateFirewallRule(ruleOptions *brightbox.FirewallRuleOpt } // UpdateLoadBalancer provides a mock function with given fields: newDetails -func (_m *CloudAccess) UpdateLoadBalancer(newDetails *brightbox.LoadBalancerOptions) (*brightbox.LoadBalancer, error) { +func (_m *CloudAccess) UpdateLoadBalancer(newDetails *gobrightbox.LoadBalancerOptions) (*gobrightbox.LoadBalancer, error) { ret := _m.Called(newDetails) - var r0 *brightbox.LoadBalancer - if rf, ok := ret.Get(0).(func(*brightbox.LoadBalancerOptions) *brightbox.LoadBalancer); ok { + var r0 *gobrightbox.LoadBalancer + var r1 error + if rf, ok := ret.Get(0).(func(*gobrightbox.LoadBalancerOptions) (*gobrightbox.LoadBalancer, error)); ok { + return rf(newDetails) + } + if rf, ok := ret.Get(0).(func(*gobrightbox.LoadBalancerOptions) *gobrightbox.LoadBalancer); ok { r0 = rf(newDetails) } else { if ret.Get(0) != nil { - r0 = ret.Get(0).(*brightbox.LoadBalancer) + r0 = ret.Get(0).(*gobrightbox.LoadBalancer) } } - var r1 error - if rf, ok := ret.Get(1).(func(*brightbox.LoadBalancerOptions) error); ok { + if rf, ok := ret.Get(1).(func(*gobrightbox.LoadBalancerOptions) error); ok { r1 = rf(newDetails) } else { r1 = ret.Error(1) @@ -628,3 +722,18 @@ func (_m *CloudAccess) UpdateLoadBalancer(newDetails *brightbox.LoadBalancerOpti return r0, r1 } + +type mockConstructorTestingTNewCloudAccess interface { + mock.TestingT + Cleanup(func()) +} + +// NewCloudAccess creates a new instance of CloudAccess. It also registers a testing interface on the mock and a cleanup function to assert the mocks expectations. 
+func NewCloudAccess(t mockConstructorTestingTNewCloudAccess) *CloudAccess { + mock := &CloudAccess{} + mock.Mock.Test(t) + + t.Cleanup(func() { mock.AssertExpectations(t) }) + + return mock +} diff --git a/cluster-autoscaler/cloudprovider/builder/builder_all.go b/cluster-autoscaler/cloudprovider/builder/builder_all.go index ede6a00e7bed..ff6aad7b5bf4 100644 --- a/cluster-autoscaler/cloudprovider/builder/builder_all.go +++ b/cluster-autoscaler/cloudprovider/builder/builder_all.go @@ -1,5 +1,5 @@ -//go:build !gce && !aws && !azure && !kubemark && !alicloud && !magnum && !digitalocean && !clusterapi && !huaweicloud && !ionoscloud && !linode && !hetzner && !bizflycloud && !brightbox && !packet && !oci && !vultr && !tencentcloud && !scaleway && !externalgrpc && !civo && !rancher -// +build !gce,!aws,!azure,!kubemark,!alicloud,!magnum,!digitalocean,!clusterapi,!huaweicloud,!ionoscloud,!linode,!hetzner,!bizflycloud,!brightbox,!packet,!oci,!vultr,!tencentcloud,!scaleway,!externalgrpc,!civo,!rancher +//go:build !gce && !aws && !azure && !kubemark && !alicloud && !magnum && !digitalocean && !clusterapi && !huaweicloud && !ionoscloud && !linode && !hetzner && !bizflycloud && !brightbox && !packet && !oci && !vultr && !tencentcloud && !scaleway && !externalgrpc && !civo && !rancher && !volcengine && !baiducloud && !cherry && !cloudstack && !exoscale && !kamatera && !ovhcloud +// +build !gce,!aws,!azure,!kubemark,!alicloud,!magnum,!digitalocean,!clusterapi,!huaweicloud,!ionoscloud,!linode,!hetzner,!bizflycloud,!brightbox,!packet,!oci,!vultr,!tencentcloud,!scaleway,!externalgrpc,!civo,!rancher,!volcengine,!baiducloud,!cherry,!cloudstack,!exoscale,!kamatera,!ovhcloud /* Copyright 2018 The Kubernetes Authors. 
@@ -48,6 +48,7 @@ import ( "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/rancher" "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/scaleway" "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/tencentcloud" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine" "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/vultr" "k8s.io/autoscaler/cluster-autoscaler/config" ) @@ -81,6 +82,7 @@ var AvailableCloudProviders = []string{ cloudprovider.CivoProviderName, cloudprovider.ScalewayProviderName, cloudprovider.RancherProviderName, + cloudprovider.VolcengineProviderName, } // DefaultCloudProvider is GCE. @@ -144,6 +146,8 @@ func buildCloudProvider(opts config.AutoscalingOptions, do cloudprovider.NodeGro return scaleway.BuildScaleway(opts, do, rl) case cloudprovider.RancherProviderName: return rancher.BuildRancher(opts, do, rl) + case cloudprovider.VolcengineProviderName: + return volcengine.BuildVolcengine(opts, do, rl) } return nil } diff --git a/cluster-autoscaler/cloudprovider/builder/builder_ovhcloud.go b/cluster-autoscaler/cloudprovider/builder/builder_ovhcloud.go index 680cdc55c159..413c5bbf3e59 100644 --- a/cluster-autoscaler/cloudprovider/builder/builder_ovhcloud.go +++ b/cluster-autoscaler/cloudprovider/builder/builder_ovhcloud.go @@ -1,5 +1,5 @@ -//go:build exoscale -// +build exoscale +//go:build ovhcloud +// +build ovhcloud /* Copyright 2020 The Kubernetes Authors. @@ -24,7 +24,7 @@ import ( "k8s.io/autoscaler/cluster-autoscaler/config" ) -// AvailableCloudProviders supported by the Hetzner cloud provider builder. +// AvailableCloudProviders supported by the OVHcloud cloud provider builder. 
var AvailableCloudProviders = []string{ cloudprovider.OVHcloudProviderName, } diff --git a/cluster-autoscaler/cloudprovider/builder/builder_volcengine.go b/cluster-autoscaler/cloudprovider/builder/builder_volcengine.go new file mode 100644 index 000000000000..78e728dd1d17 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/builder/builder_volcengine.go @@ -0,0 +1,43 @@ +//go:build volcengine +// +build volcengine + +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package builder + +import ( + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine" + "k8s.io/autoscaler/cluster-autoscaler/config" +) + +// AvailableCloudProviders supported by the cloud provider builder. +var AvailableCloudProviders = []string{ + cloudprovider.VolcengineProviderName, +} + +// DefaultCloudProvider for volcengine-only build is volcengine. 
+const DefaultCloudProvider = cloudprovider.VolcengineProviderName + +func buildCloudProvider(opts config.AutoscalingOptions, do cloudprovider.NodeGroupDiscoveryOptions, rl *cloudprovider.ResourceLimiter) cloudprovider.CloudProvider { + switch opts.CloudProviderName { + case cloudprovider.VolcengineProviderName: + return volcengine.BuildVolcengine(opts, do, rl) + } + + return nil +} diff --git a/cluster-autoscaler/cloudprovider/cherryservers/OWNERS b/cluster-autoscaler/cloudprovider/cherryservers/OWNERS index 9b0c1bddeb8b..db7732c9424e 100644 --- a/cluster-autoscaler/cloudprovider/cherryservers/OWNERS +++ b/cluster-autoscaler/cloudprovider/cherryservers/OWNERS @@ -8,3 +8,6 @@ reviewers: #- zalmarge #- ArturasRa #- Andrius521 + +labels: +- area/provider/cherryservers diff --git a/cluster-autoscaler/cloudprovider/civo/OWNERS b/cluster-autoscaler/cloudprovider/civo/OWNERS index 0cc8aeb9b18f..0878cda602e9 100644 --- a/cluster-autoscaler/cloudprovider/civo/OWNERS +++ b/cluster-autoscaler/cloudprovider/civo/OWNERS @@ -1,10 +1,8 @@ -#approvers: -#- DMajrekar -#- birdiesanders -#- vishalanarase -#- RealHarshThakur -#reviewers: -#- DMajrekar -#- birdiesanders -#- vishalanarase -#- RealHarshThakur +approvers: +- vishalanarase + +reviewers: +- vishalanarase + +labels: +- area/provider/civo diff --git a/cluster-autoscaler/cloudprovider/civo/civo_manager.go b/cluster-autoscaler/cloudprovider/civo/civo_manager.go index f3c8cfded034..52df7823fe8b 100644 --- a/cluster-autoscaler/cloudprovider/civo/civo_manager.go +++ b/cluster-autoscaler/cloudprovider/civo/civo_manager.go @@ -119,51 +119,92 @@ func newManager(configReader io.Reader, discoveryOpts cloudprovider.NodeGroupDis // Refresh refreshes the cache holding the nodegroups. This is called by the CA // based on the `--scan-interval`. By default it's 10 seconds. 
func (m *Manager) Refresh() error { - var minSize int - var maxSize int - var workerConfigFound = false + var ( + minSize int + maxSize int + workerConfigFound = false + poolConfigFound = false + poolGroups []*NodeGroup + workerGroups []*NodeGroup + ) + + pools, err := m.client.ListKubernetesClusterPools(m.clusterID) + if err != nil { + return fmt.Errorf("couldn't list Kubernetes cluster pools: %s", err) + } + + klog.V(4).Infof("refreshing workers node group kubernetes cluster: %q", m.clusterID) + for _, specString := range m.discoveryOpts.NodeGroupSpecs { spec, err := dynamic.SpecFromString(specString, true) if err != nil { return fmt.Errorf("failed to parse node group spec: %v", err) } + if spec.Name == "workers" { minSize = spec.MinSize maxSize = spec.MaxSize workerConfigFound = true klog.V(4).Infof("found configuration for workers node group: min: %d max: %d", minSize, maxSize) + } else { + poolConfigFound = true + pool := m.getNodeGroupConfig(spec, pools) + if pool != nil { + poolGroups = append(poolGroups, pool) + } + klog.V(4).Infof("found configuration for pool node group: min: %d max: %d", spec.MinSize, spec.MaxSize) } } - if !workerConfigFound { - return fmt.Errorf("no workers node group configuration found") - } - pools, err := m.client.ListKubernetesClusterPools(m.clusterID) - if err != nil { - return fmt.Errorf("couldn't list Kubernetes cluster pools: %s", err) + if poolConfigFound { + m.nodeGroups = poolGroups + } else if workerConfigFound { + for _, nodePool := range pools { + np := nodePool + klog.V(4).Infof("adding node pool: %q", nodePool.ID) + + workerGroups = append(workerGroups, &NodeGroup{ + id: nodePool.ID, + clusterID: m.clusterID, + client: m.client, + nodePool: &np, + minSize: minSize, + maxSize: maxSize, + }) + } + m.nodeGroups = workerGroups + } else { + return fmt.Errorf("no workers node group configuration found") } - klog.V(4).Infof("refreshing workers node group kubernetes cluster: %q min: %d max: %d", m.clusterID, minSize, maxSize) - - var
group []*NodeGroup - for _, nodePool := range pools { - np := nodePool - klog.V(4).Infof("adding node pool: %q", nodePool.ID) - - group = append(group, &NodeGroup{ - id: nodePool.ID, - clusterID: m.clusterID, - client: m.client, - nodePool: &np, - minSize: minSize, - maxSize: maxSize, - }) + // If both configs are found, the pool config takes precedence + if poolConfigFound && workerConfigFound { + m.nodeGroups = poolGroups } - if len(group) == 0 { + if len(m.nodeGroups) == 0 { klog.V(4).Info("cluster-autoscaler is disabled. no node pools are configured") } - m.nodeGroups = group + return nil +} + +// getNodeGroupConfig gets the node group configuration from the cluster pool configuration +func (m *Manager) getNodeGroupConfig(spec *dynamic.NodeGroupSpec, pools []civocloud.KubernetesPool) *NodeGroup { + for _, nodePool := range pools { + if spec.Name == nodePool.ID { + np := nodePool + klog.V(4).Infof("adding node pool: %q min: %d max: %d", nodePool.ID, spec.MinSize, spec.MaxSize) + + return &NodeGroup{ + id: nodePool.ID, + clusterID: m.clusterID, + client: m.client, + nodePool: &np, + minSize: spec.MinSize, + maxSize: spec.MaxSize, + } + } + } return nil } diff --git a/cluster-autoscaler/cloudprovider/civo/civo_manager_test.go b/cluster-autoscaler/cloudprovider/civo/civo_manager_test.go index 36edcf785846..8fd2aaabc13c 100644 --- a/cluster-autoscaler/cloudprovider/civo/civo_manager_test.go +++ b/cluster-autoscaler/cloudprovider/civo/civo_manager_test.go @@ -174,3 +174,66 @@ func TestCivoManager_RefreshWithNodeSpec(t *testing.T) { assert.Equal(t, 10, manager.nodeGroups[0].maxSize, "maximum node for node group does not match") }) } + +func TestCivoManager_RefreshWithNodeSpecPool(t *testing.T) { + t.Run("success", func(t *testing.T) { + cfg := `{"cluster_id": "123456", "api_key": "123-123-123", "api_url": "https://api.civo.com", "region": "test"}` + nodeGroupSpecs := []string{"1:5:pool-1", "5:10:pool-2"} + nodeGroupDiscoveryOptions =
cloudprovider.NodeGroupDiscoveryOptions{NodeGroupSpecs: nodeGroupSpecs} + manager, err := newManager(bytes.NewBufferString(cfg), nodeGroupDiscoveryOptions) + assert.NoError(t, err) + + client := &civoClientMock{} + + client.On("ListKubernetesClusterPools", manager.clusterID).Return( + []civocloud.KubernetesPool{ + { + ID: "pool-1", + Count: 2, + Size: "small", + InstanceNames: []string{"test-1", "test-2"}, + Instances: []civocloud.KubernetesInstance{ + { + ID: "1", + Hostname: "test-1", + Status: "ACTIVE", + }, + { + ID: "2", + Hostname: "test-1", + Status: "ACTIVE", + }, + }, + }, + { + ID: "pool-2", + Count: 2, + Size: "small", + InstanceNames: []string{"test-1", "test-2"}, + Instances: []civocloud.KubernetesInstance{ + { + ID: "3", + Hostname: "test-3", + Status: "ACTIVE", + }, + { + ID: "4", + Hostname: "test-4", + Status: "BUILDING", + }, + }, + }, + }, + nil, + ).Once() + + manager.client = client + err = manager.Refresh() + assert.NoError(t, err) + assert.Equal(t, 2, len(manager.nodeGroups), "number of node groups do not match") + assert.Equal(t, 1, manager.nodeGroups[0].minSize, "minimum node for node group does not match") + assert.Equal(t, 5, manager.nodeGroups[0].maxSize, "maximum node for node group does not match") + assert.Equal(t, 5, manager.nodeGroups[1].minSize, "minimum node for node group does not match") + assert.Equal(t, 10, manager.nodeGroups[1].maxSize, "maximum node for node group does not match") + }) +} diff --git a/cluster-autoscaler/cloudprovider/civo/examples/cluster-autoscaler-node-pool.yaml b/cluster-autoscaler/cloudprovider/civo/examples/cluster-autoscaler-node-pool.yaml new file mode 100644 index 000000000000..b56df166304f --- /dev/null +++ b/cluster-autoscaler/cloudprovider/civo/examples/cluster-autoscaler-node-pool.yaml @@ -0,0 +1,181 @@ +--- +apiVersion: v1 +kind: ServiceAccount +metadata: + labels: + k8s-addon: cluster-autoscaler.addons.k8s.io + k8s-app: cluster-autoscaler + name: cluster-autoscaler + namespace: kube-system +--- 
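The `"1:5:pool-1"` specs in the test above and the `--nodes=1:5:` args in the Deployment below both use the autoscaler's `<min>:<max>:<name>` node-group spec format, parsed by `dynamic.SpecFromString`. The following standalone sketch is a simplified illustration of that parsing (the `nodeGroupSpec` type and `specFromString` helper are hypothetical, not the autoscaler's actual implementation):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// nodeGroupSpec mirrors the fields the civo manager reads from a spec string
// (hypothetical simplified type; the real one lives in the dynamic package).
type nodeGroupSpec struct {
	MinSize int
	MaxSize int
	Name    string
}

// specFromString parses a "<min>:<max>:<name>" node-group spec string.
func specFromString(s string) (*nodeGroupSpec, error) {
	parts := strings.SplitN(s, ":", 3)
	if len(parts) != 3 {
		return nil, fmt.Errorf("expected <min>:<max>:<name>, got %q", s)
	}
	min, err := strconv.Atoi(parts[0])
	if err != nil {
		return nil, fmt.Errorf("bad min size: %v", err)
	}
	max, err := strconv.Atoi(parts[1])
	if err != nil {
		return nil, fmt.Errorf("bad max size: %v", err)
	}
	if min > max {
		return nil, fmt.Errorf("min size %d must not exceed max size %d", min, max)
	}
	return &nodeGroupSpec{MinSize: min, MaxSize: max, Name: parts[2]}, nil
}

func main() {
	// The same two specs exercised by TestCivoManager_RefreshWithNodeSpecPool.
	for _, s := range []string{"1:5:pool-1", "5:10:pool-2"} {
		spec, err := specFromString(s)
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s min=%d max=%d\n", spec.Name, spec.MinSize, spec.MaxSize)
	}
}
```

With the new civo pool-spec support, a spec whose name matches a pool ID configures that pool individually, while the reserved name `workers` applies one min/max to all pools.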
+apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + name: cluster-autoscaler + labels: + k8s-addon: cluster-autoscaler.addons.k8s.io + k8s-app: cluster-autoscaler +rules: + - apiGroups: ["storage.k8s.io"] + resources: ["csistoragecapacities", "csidrivers"] + verbs: ["get", "list"] + - apiGroups: [""] + resources: ["events", "endpoints"] + verbs: ["create", "patch"] + - apiGroups: [""] + resources: ["pods/eviction"] + verbs: ["create"] + - apiGroups: [""] + resources: ["pods/status"] + verbs: ["update"] + - apiGroups: [""] + resources: ["endpoints"] + resourceNames: ["cluster-autoscaler"] + verbs: ["get", "update"] + - apiGroups: [""] + resources: ["nodes", "namespaces"] + verbs: ["watch", "list", "get", "update"] + - apiGroups: [""] + resources: + - "pods" + - "services" + - "replicationcontrollers" + - "persistentvolumeclaims" + - "persistentvolumes" + verbs: ["watch", "list", "get"] + - apiGroups: ["extensions"] + resources: ["replicasets", "daemonsets"] + verbs: ["watch", "list", "get"] + - apiGroups: ["policy"] + resources: ["poddisruptionbudgets"] + verbs: ["watch", "list"] + - apiGroups: ["apps"] + resources: ["statefulsets", "replicasets", "daemonsets"] + verbs: ["watch", "list", "get"] + - apiGroups: ["storage.k8s.io"] + resources: ["storageclasses", "csinodes"] + verbs: ["watch", "list", "get"] + - apiGroups: ["batch", "extensions"] + resources: ["jobs"] + verbs: ["get", "list", "watch", "patch"] + - apiGroups: ["coordination.k8s.io"] + resources: ["leases"] + verbs: ["create"] + - apiGroups: ["coordination.k8s.io"] + resourceNames: ["cluster-autoscaler"] + resources: ["leases"] + verbs: ["get", "update"] +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: Role +metadata: + name: cluster-autoscaler + namespace: kube-system + labels: + k8s-addon: cluster-autoscaler.addons.k8s.io + k8s-app: cluster-autoscaler +rules: + - apiGroups: [""] + resources: ["configmaps"] + verbs: ["create", "list", "watch"] + - apiGroups: [""] + resources: 
["configmaps"] + resourceNames: + ["cluster-autoscaler-status", "cluster-autoscaler-priority-expander"] + verbs: ["delete", "get", "update", "watch"] +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + name: cluster-autoscaler + labels: + k8s-addon: cluster-autoscaler.addons.k8s.io + k8s-app: cluster-autoscaler +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: cluster-autoscaler +subjects: + - kind: ServiceAccount + name: cluster-autoscaler + namespace: kube-system +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: RoleBinding +metadata: + name: cluster-autoscaler + namespace: kube-system + labels: + k8s-addon: cluster-autoscaler.addons.k8s.io + k8s-app: cluster-autoscaler +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: Role + name: cluster-autoscaler +subjects: + - kind: ServiceAccount + name: cluster-autoscaler + namespace: kube-system +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: cluster-autoscaler + namespace: kube-system + labels: + app: cluster-autoscaler +spec: + replicas: 1 + selector: + matchLabels: + app: cluster-autoscaler + template: + metadata: + labels: + app: cluster-autoscaler + annotations: + prometheus.io/scrape: "true" + prometheus.io/port: "8085" + spec: + serviceAccountName: cluster-autoscaler + containers: + - image: registry.k8s.io/autoscaling/cluster-autoscaler:v1.25.0 # or your custom image + name: cluster-autoscaler + imagePullPolicy: Always + resources: + limits: + cpu: 100m + memory: 300Mi + requests: + cpu: 100m + memory: 300Mi + command: + - ./cluster-autoscaler + - --v=4 + - --stderrthreshold=info + - --cloud-provider=civo + - --nodes=1:5: + - --nodes=5:10: + - --skip-nodes-with-local-storage=false + - --skip-nodes-with-system-pods=false + env: + - name: CIVO_API_URL + valueFrom: + secretKeyRef: + key: api-url + name: civo-api-access + - name: CIVO_API_KEY + valueFrom: + secretKeyRef: + key: api-key + name: civo-api-access + - name: CIVO_CLUSTER_ID 
+ valueFrom: + secretKeyRef: + key: cluster-id + name: civo-api-access + - name: CIVO_REGION + valueFrom: + secretKeyRef: + key: region + name: civo-api-access diff --git a/cluster-autoscaler/cloudprovider/cloud_provider.go b/cluster-autoscaler/cloudprovider/cloud_provider.go index 2f45221ed640..f3cdc1ccea63 100644 --- a/cluster-autoscaler/cloudprovider/cloud_provider.go +++ b/cluster-autoscaler/cloudprovider/cloud_provider.go @@ -74,6 +74,8 @@ const ( LinodeProviderName = "linode" // ScalewayProviderName gets the provider name of scaleway ScalewayProviderName = "scaleway" + // VolcengineProviderName gets the provider name of volcengine + VolcengineProviderName = "volcengine" // VultrProviderName gets the provider name of vultr VultrProviderName = "vultr" // PacketProviderName gets the provider name of packet diff --git a/cluster-autoscaler/cloudprovider/cloudstack/OWNERS b/cluster-autoscaler/cloudprovider/cloudstack/OWNERS index 8a121c3d3537..a4d2224908e5 100644 --- a/cluster-autoscaler/cloudprovider/cloudstack/OWNERS +++ b/cluster-autoscaler/cloudprovider/cloudstack/OWNERS @@ -1,2 +1,6 @@ maintainers: - davidjumani + + +labels: +- area/provider/cloudstack diff --git a/cluster-autoscaler/cloudprovider/clusterapi/OWNERS b/cluster-autoscaler/cloudprovider/clusterapi/OWNERS index 5e434c972c25..58e6cb8cd6ca 100644 --- a/cluster-autoscaler/cloudprovider/clusterapi/OWNERS +++ b/cluster-autoscaler/cloudprovider/clusterapi/OWNERS @@ -12,3 +12,6 @@ reviewers: - mrajashree - arunmk - shysank + +labels: +- area/provider/clusterapi diff --git a/cluster-autoscaler/cloudprovider/clusterapi/clusterapi_nodegroup.go b/cluster-autoscaler/cloudprovider/clusterapi/clusterapi_nodegroup.go index d5708271ac27..37a4f9ced14b 100644 --- a/cluster-autoscaler/cloudprovider/clusterapi/clusterapi_nodegroup.go +++ b/cluster-autoscaler/cloudprovider/clusterapi/clusterapi_nodegroup.go @@ -276,7 +276,7 @@ func (ng *nodegroup) TemplateNodeInfo() (*schedulerframework.NodeInfo, error) { } func (ng 
*nodegroup) buildTemplateLabels(nodeName string) (map[string]string, error) { - labels := cloudprovider.JoinStringMaps(ng.scalableResource.Labels(), buildGenericLabels(nodeName)) + labels := cloudprovider.JoinStringMaps(buildGenericLabels(nodeName), ng.scalableResource.Labels()) nodes, err := ng.Nodes() if err != nil { diff --git a/cluster-autoscaler/cloudprovider/clusterapi/clusterapi_nodegroup_test.go b/cluster-autoscaler/cloudprovider/clusterapi/clusterapi_nodegroup_test.go index 96aed9fbf1dc..6da06bde3614 100644 --- a/cluster-autoscaler/cloudprovider/clusterapi/clusterapi_nodegroup_test.go +++ b/cluster-autoscaler/cloudprovider/clusterapi/clusterapi_nodegroup_test.go @@ -1342,6 +1342,31 @@ func TestNodeGroupTemplateNodeInfo(t *testing.T) { }, }, }, + { + name: "When the NodeGroup can scale from zero, the label capacity annotations merge with the pre-built node labels and take precedence if the same key is defined in both", + nodeGroupAnnotations: map[string]string{ + memoryKey: "2048Mi", + cpuKey: "2", + gpuTypeKey: gpuapis.ResourceNvidiaGPU, + gpuCountKey: "1", + labelsKey: "kubernetes.io/arch=arm64,my-custom-label=custom-value", + }, + config: testCaseConfig{ + expectedErr: nil, + expectedCapacity: map[corev1.ResourceName]int64{ + corev1.ResourceCPU: 2, + corev1.ResourceMemory: 2048 * 1024 * 1024, + corev1.ResourcePods: 110, + gpuapis.ResourceNvidiaGPU: 1, + }, + expectedNodeLabels: map[string]string{ + "kubernetes.io/os": "linux", + "kubernetes.io/arch": "arm64", + "kubernetes.io/hostname": "random value", + "my-custom-label": "custom-value", + }, + }, + }, { name: "When the NodeGroup can scale from zero and the Node still exists, it includes the known node labels", nodeGroupAnnotations: map[string]string{ diff --git a/cluster-autoscaler/cloudprovider/clusterapi/clusterapi_utils.go b/cluster-autoscaler/cloudprovider/clusterapi/clusterapi_utils.go index 7d007363d090..27292d49e8ea 100644 --- a/cluster-autoscaler/cloudprovider/clusterapi/clusterapi_utils.go +++ 
b/cluster-autoscaler/cloudprovider/clusterapi/clusterapi_utils.go @@ -172,16 +172,6 @@ func normalizedProviderString(s string) normalizedProviderID { return normalizedProviderID(split[len(split)-1]) } -func scaleFromZeroAnnotationsEnabled(annotations map[string]string) bool { - cpu := annotations[cpuKey] - mem := annotations[memoryKey] - - if cpu != "" && mem != "" { - return true - } - return false -} - func parseKey(annotations map[string]string, key string) (resource.Quantity, error) { if val, exists := annotations[key]; exists && val != "" { return resource.ParseQuantity(val) diff --git a/cluster-autoscaler/cloudprovider/clusterapi/clusterapi_utils_test.go b/cluster-autoscaler/cloudprovider/clusterapi/clusterapi_utils_test.go index ab929d8cb870..c4d01c56e365 100644 --- a/cluster-autoscaler/cloudprovider/clusterapi/clusterapi_utils_test.go +++ b/cluster-autoscaler/cloudprovider/clusterapi/clusterapi_utils_test.go @@ -430,47 +430,6 @@ func TestUtilNormalizedProviderID(t *testing.T) { } } -func TestScaleFromZeroEnabled(t *testing.T) { - for _, tc := range []struct { - description string - enabled bool - annotations map[string]string - }{{ - description: "nil annotations", - enabled: false, - }, { - description: "empty annotations", - annotations: map[string]string{}, - enabled: false, - }, { - description: "non-matching annotation", - annotations: map[string]string{"foo": "bar"}, - enabled: false, - }, { - description: "matching key, incomplete annotations", - annotations: map[string]string{ - "foo": "bar", - cpuKey: "1", - }, - enabled: false, - }, { - description: "matching key, complete annotations", - annotations: map[string]string{ - "foo": "bar", - cpuKey: "1", - memoryKey: "2Mi", - }, - enabled: true, - }} { - t.Run(tc.description, func(t *testing.T) { - got := scaleFromZeroAnnotationsEnabled(tc.annotations) - if tc.enabled != got { - t.Errorf("expected %t, got %t", tc.enabled, got) - } - }) - } -} - func TestParseCPUCapacity(t *testing.T) { for _, tc := 
range []struct { description string diff --git a/cluster-autoscaler/cloudprovider/digitalocean/OWNERS b/cluster-autoscaler/cloudprovider/digitalocean/OWNERS index 49c188a01f4c..8dbc63124953 100644 --- a/cluster-autoscaler/cloudprovider/digitalocean/OWNERS +++ b/cluster-autoscaler/cloudprovider/digitalocean/OWNERS @@ -5,4 +5,5 @@ reviewers: - andrewsykim - MorrisLaw - +labels: +- area/provider/digitalocean diff --git a/cluster-autoscaler/cloudprovider/exoscale/OWNERS b/cluster-autoscaler/cloudprovider/exoscale/OWNERS index f90a88af7e2b..534798451037 100644 --- a/cluster-autoscaler/cloudprovider/exoscale/OWNERS +++ b/cluster-autoscaler/cloudprovider/exoscale/OWNERS @@ -2,3 +2,6 @@ approvers: # - pierre-emmanuelJ # - 7fELF # - PhilippeChepy + +labels: +- area/provider/exoscale diff --git a/cluster-autoscaler/cloudprovider/externalgrpc/OWNERS b/cluster-autoscaler/cloudprovider/externalgrpc/OWNERS index 318b2bb4ee85..f6102e50af84 100644 --- a/cluster-autoscaler/cloudprovider/externalgrpc/OWNERS +++ b/cluster-autoscaler/cloudprovider/externalgrpc/OWNERS @@ -2,3 +2,6 @@ approvers: #- dbonfigli reviewers: #- dbonfigli + +labels: +- area/provider/externalgrpc diff --git a/cluster-autoscaler/cloudprovider/externalgrpc/README.md b/cluster-autoscaler/cloudprovider/externalgrpc/README.md index 43561099c0cf..ec3125da26a4 100644 --- a/cluster-autoscaler/cloudprovider/externalgrpc/README.md +++ b/cluster-autoscaler/cloudprovider/externalgrpc/README.md @@ -1,6 +1,6 @@ # External gRPC Cloud Provider -The Exteral gRPC Cloud Provider provides a plugin system to support out-of-tree cloud provider implementations. +The External gRPC Cloud Provider provides a plugin system to support out-of-tree cloud provider implementations. Cluster Autoscaler adds or removes nodes from the cluster by creating or deleting VMs. 
To separate the autoscaling logic (the same for all clouds) from the API calls required to execute it (different for each cloud), the latter are hidden behind an interface, `CloudProvider`. Each supported cloud has its own implementation in this repository and `--cloud-provider` flag determines which one will be used. diff --git a/cluster-autoscaler/cloudprovider/gce/OWNERS b/cluster-autoscaler/cloudprovider/gce/OWNERS index 444bc04c026d..e5a0607703f0 100644 --- a/cluster-autoscaler/cloudprovider/gce/OWNERS +++ b/cluster-autoscaler/cloudprovider/gce/OWNERS @@ -12,3 +12,6 @@ reviewers: - x13n - yaroslava-serdiuk - BigDarkClown + +labels: +- area/provider/gce diff --git a/cluster-autoscaler/cloudprovider/gce/autoscaling_gce_client.go b/cluster-autoscaler/cloudprovider/gce/autoscaling_gce_client.go index f4e4e7f54f21..c16475bcf14a 100644 --- a/cluster-autoscaler/cloudprovider/gce/autoscaling_gce_client.go +++ b/cluster-autoscaler/cloudprovider/gce/autoscaling_gce_client.go @@ -59,6 +59,15 @@ const ( // is facing errors caused by vmExternalIpAccess policy constraint misconfiguration. ErrorCodeVmExternalIpAccessPolicyConstraint = "VM_EXTERNAL_IP_ACCESS_POLICY_CONSTRAINT" + // ErrorInvalidReservation is an error code for InstanceErrorInfo if the node group couldn't + // be scaled up because no reservation was found, or the reservation associated with the MIG + // was invalid. + ErrorInvalidReservation = "INVALID_RESERVATION" + + // ErrorReservationNotReady is an error code for InstanceErrorInfo if the node group couldn't + // be scaled up because the associated reservation was not ready. + ErrorReservationNotReady = "RESERVATION_NOT_READY" + // ErrorCodeOther is an error code used in InstanceErrorInfo if other error occurs. 
ErrorCodeOther = "OTHER" ) @@ -78,6 +87,7 @@ type AutoscalingGceClient interface { FetchZones(region string) ([]string, error) FetchAvailableCpuPlatforms() (map[string][]string, error) FetchReservations() ([]*gce.Reservation, error) + FetchReservationsInProject(projectId string) ([]*gce.Reservation, error) // modifying resources ResizeMig(GceRef, int64) error @@ -372,6 +382,16 @@ func GetErrorInfo(errorCode, errorMessage, instanceStatus string, previousErrorI ErrorClass: cloudprovider.OtherErrorClass, ErrorCode: ErrorCodeVmExternalIpAccessPolicyConstraint, } + } else if isReservationNotReady(errorCode, errorMessage) { + return &cloudprovider.InstanceErrorInfo{ + ErrorClass: cloudprovider.OtherErrorClass, + ErrorCode: ErrorReservationNotReady, + } + } else if isInvalidReservationError(errorCode, errorMessage) { + return &cloudprovider.InstanceErrorInfo{ + ErrorClass: cloudprovider.OtherErrorClass, + ErrorCode: ErrorInvalidReservation, + } } else if isInstanceStatusNotRunningYet(instanceStatus) { if previousErrorInfo != nil { // keep the current error @@ -428,6 +448,26 @@ func isInstanceStatusNotRunningYet(instanceStatus string) bool { return instanceStatus == "" || instanceStatus == "PROVISIONING" || instanceStatus == "STAGING" } +func isReservationNotReady(errorCode, errorMessage string) bool { + return strings.Contains(errorMessage, "it requires reservation to be in READY state") +} + +func isInvalidReservationError(errorCode, errorMessage string) bool { + reservationErrors := []string{ + "Incompatible AggregateReservation VMFamily", + "Could not find the given reservation with the following name", + "must use ReservationAffinity of", + "The reservation must exist in the same project as the instance", + "only compatible with Aggregate Reservations", + } + for _, rErr := range reservationErrors { + if strings.Contains(errorMessage, rErr) { + return true + } + } + return false +} + func generateInstanceName(baseName string, existingNames map[string]bool) string { 
for i := 0; i < 100; i++ { name := fmt.Sprintf("%v-%v", baseName, rand.String(4)) @@ -511,8 +551,12 @@ func (client *autoscalingGceClientV1) FetchMigsWithName(zone string, name *regex } func (client *autoscalingGceClientV1) FetchReservations() ([]*gce.Reservation, error) { + return client.FetchReservationsInProject(client.projectId) +} + +func (client *autoscalingGceClientV1) FetchReservationsInProject(projectId string) ([]*gce.Reservation, error) { reservations := make([]*gce.Reservation, 0) - call := client.gceService.Reservations.AggregatedList(client.projectId) + call := client.gceService.Reservations.AggregatedList(projectId) err := call.Pages(context.TODO(), func(ls *gce.ReservationAggregatedList) error { for _, items := range ls.Items { reservations = append(reservations, items.Reservations...) diff --git a/cluster-autoscaler/cloudprovider/gce/autoscaling_gce_client_test.go b/cluster-autoscaler/cloudprovider/gce/autoscaling_gce_client_test.go index 6b47b9e749dc..d4968591233b 100644 --- a/cluster-autoscaler/cloudprovider/gce/autoscaling_gce_client_test.go +++ b/cluster-autoscaler/cloudprovider/gce/autoscaling_gce_client_test.go @@ -176,6 +176,18 @@ func TestErrors(t *testing.T) { expectedErrorCode: "VM_EXTERNAL_IP_ACCESS_POLICY_CONSTRAINT", expectedErrorClass: cloudprovider.OtherErrorClass, }, + { + errorCodes: []string{"CONDITION_NOT_MET"}, + errorMessage: "Instance 'myinst' creation failed: The reservation must exist in the same project as the instance.", + expectedErrorCode: "INVALID_RESERVATION", + expectedErrorClass: cloudprovider.OtherErrorClass, + }, + { + errorCodes: []string{"CONDITION_NOT_MET"}, + errorMessage: "Cannot insert instance to a reservation with status: CREATING, as it requires reservation to be in READY state.", + expectedErrorCode: "RESERVATION_NOT_READY", + expectedErrorClass: cloudprovider.OtherErrorClass, + }, { errorCodes: []string{"xyz", "abc"}, expectedErrorCode: "OTHER", diff --git a/cluster-autoscaler/cloudprovider/gce/cache.go 
b/cluster-autoscaler/cloudprovider/gce/cache.go index f41f9b4fe3a0..818f5d7aada9 100644 --- a/cluster-autoscaler/cloudprovider/gce/cache.go +++ b/cluster-autoscaler/cloudprovider/gce/cache.go @@ -19,6 +19,7 @@ package gce import ( "reflect" "sync" + "time" "k8s.io/autoscaler/cluster-autoscaler/cloudprovider" @@ -57,6 +58,7 @@ type GceCache struct { // Cache content. migs map[GceRef]Mig instances map[GceRef][]cloudprovider.Instance + instancesUpdateTime map[GceRef]time.Time instancesToMig map[GceRef]GceRef instancesFromUnknownMig map[GceRef]bool resourceLimiter *cloudprovider.ResourceLimiter @@ -73,6 +75,7 @@ func NewGceCache() *GceCache { return &GceCache{ migs: map[GceRef]Mig{}, instances: map[GceRef][]cloudprovider.Instance{}, + instancesUpdateTime: map[GceRef]time.Time{}, instancesToMig: map[GceRef]GceRef{}, instancesFromUnknownMig: map[GceRef]bool{}, autoscalingOptionsCache: map[GceRef]map[string]string{}, @@ -152,6 +155,19 @@ func (gc *GceCache) GetMigInstances(migRef GceRef) ([]cloudprovider.Instance, bo return append([]cloudprovider.Instance{}, instances...), found } +// GetMigInstancesUpdateTime returns the timestamp when the cached instances +// were updated for a given MIG GceRef +func (gc *GceCache) GetMigInstancesUpdateTime(migRef GceRef) (time.Time, bool) { + gc.cacheMutex.Lock() + defer gc.cacheMutex.Unlock() + + timestamp, found := gc.instancesUpdateTime[migRef] + if found { + klog.V(5).Infof("Instances update time cache hit for %s", migRef) + } + return timestamp, found +} + // GetMigForInstance returns the cached MIG for instance GceRef func (gc *GceCache) GetMigForInstance(instanceRef GceRef) (GceRef, bool) { gc.cacheMutex.Lock() @@ -178,12 +194,13 @@ func (gc *GceCache) IsMigUnknownForInstance(instanceRef GceRef) bool { } // SetMigInstances sets instances for a given Mig ref -func (gc *GceCache) SetMigInstances(migRef GceRef, instances []cloudprovider.Instance) error { +func (gc *GceCache) SetMigInstances(migRef GceRef, instances 
[]cloudprovider.Instance, timeNow time.Time) error { gc.cacheMutex.Lock() defer gc.cacheMutex.Unlock() gc.removeMigInstances(migRef) gc.instances[migRef] = append([]cloudprovider.Instance{}, instances...) + gc.instancesUpdateTime[migRef] = timeNow for _, instance := range instances { instanceRef, err := GceRefFromProviderId(instance.Id) if err != nil { @@ -211,6 +228,7 @@ func (gc *GceCache) InvalidateAllMigInstances() { klog.V(5).Infof("Mig instances cache invalidated") gc.instances = make(map[GceRef][]cloudprovider.Instance) + gc.instancesUpdateTime = make(map[GceRef]time.Time) } // InvalidateInstancesToMig clears the instance to mig mapping for a GceRef diff --git a/cluster-autoscaler/cloudprovider/gce/gce_manager_test.go b/cluster-autoscaler/cloudprovider/gce/gce_manager_test.go index b37b1ba005e4..609e6af91f06 100644 --- a/cluster-autoscaler/cloudprovider/gce/gce_manager_test.go +++ b/cluster-autoscaler/cloudprovider/gce/gce_manager_test.go @@ -334,6 +334,7 @@ func newTestGceManager(t *testing.T, testServerURL string, regional bool) *gceMa cache := &GceCache{ migs: make(map[GceRef]Mig), instances: make(map[GceRef][]cloudprovider.Instance), + instancesUpdateTime: make(map[GceRef]time.Time), instancesToMig: make(map[GceRef]GceRef), instancesFromUnknownMig: make(map[GceRef]bool), autoscalingOptionsCache: map[GceRef]map[string]string{}, diff --git a/cluster-autoscaler/cloudprovider/gce/gce_price_info.go b/cluster-autoscaler/cloudprovider/gce/gce_price_info.go index f43d92047995..11a147b99a23 100644 --- a/cluster-autoscaler/cloudprovider/gce/gce_price_info.go +++ b/cluster-autoscaler/cloudprovider/gce/gce_price_info.go @@ -70,6 +70,7 @@ const ( var ( predefinedCpuPricePerHour = map[string]float64{ "a2": 0.031611, + "g2": 0.024988, "c2": 0.03398, "c2d": 0.029563, "c3": 0.03398, @@ -82,6 +83,7 @@ var ( } predefinedMemoryPricePerHourPerGb = map[string]float64{ "a2": 0.004237, + "g2": 0.002927, "c2": 0.00455, "c2d": 0.003959, "c3": 0.00456, @@ -94,6 +96,7 @@ var ( } 
predefinedPreemptibleDiscount = map[string]float64{ "a2": 0.009483 / 0.031611, + "g2": 0.007496 / 0.024988, "c2": 0.00822 / 0.03398, "c2d": 0.007154 / 0.029563, "c3": 0.003086 / 0.03398, @@ -136,6 +139,14 @@ var ( "a2-ultragpu-1g": 5.0688, "a2-ultragpu-2g": 10.1376, "a2-ultragpu-4g": 20.2752, + "g2-standard-4": 0.76, + "g2-standard-8": 0.91, + "g2-standard-12": 1.06, + "g2-standard-16": 1.20, + "g2-standard-24": 2.11, + "g2-standard-32": 1.79, + "g2-standard-48": 4.23, + "g2-standard-96": 8.46, "a2-ultragpu-8g": 40.5504, "c2-standard-4": 0.2088, "c2-standard-8": 0.4176, @@ -309,6 +320,14 @@ var ( "a2-ultragpu-2g": 3.2, "a2-ultragpu-4g": 6.4, "a2-ultragpu-8g": 12.8, + "g2-standard-4": 0.23, + "g2-standard-8": 0.27, + "g2-standard-12": 0.32, + "g2-standard-16": 0.36, + "g2-standard-24": 0.63, + "g2-standard-32": 0.54, + "g2-standard-48": 1.27, + "g2-standard-96": 2.54, "c2-standard-4": 0.0505, "c2-standard-8": 0.1011, "c2-standard-16": 0.2021, @@ -476,6 +495,7 @@ var ( "nvidia-tesla-k80": 0.45, "nvidia-tesla-a100": 0, // price of this gpu is counted into A2 machine-type price "nvidia-a100-80gb": 0, // price of this gpu is counted into A2 machine-type price + "nvidia-l4": 0, // price of this gpu is counted into G2 machine-type price } preemptibleGpuPrices = map[string]float64{ "nvidia-tesla-t4": 0.11, @@ -485,6 +505,7 @@ var ( "nvidia-tesla-k80": 0.037500, "nvidia-tesla-a100": 0, // price of this gpu is counted into A2 machine-type price "nvidia-a100-80gb": 0, // price of this gpu is counted into A2 machine-type price + "nvidia-l4": 0, // price of this gpu is counted into G2 machine-type price } bootDiskPricePerHour = map[string]float64{ "pd-standard": 0.04 / hoursInMonth, diff --git a/cluster-autoscaler/cloudprovider/gce/mig_info_provider.go b/cluster-autoscaler/cloudprovider/gce/mig_info_provider.go index 391d0e44a1d1..abee9c4660ce 100644 --- a/cluster-autoscaler/cloudprovider/gce/mig_info_provider.go +++ b/cluster-autoscaler/cloudprovider/gce/mig_info_provider.go 
@@ -66,7 +66,6 @@ type cachingMigInfoProvider struct { concurrentGceRefreshes int migInstanceMutex sync.Mutex migInstancesMinRefreshWaitTime time.Duration - migInstancesLastRefreshedInfo map[string]time.Time timeProvider timeProvider } @@ -85,7 +84,6 @@ func NewCachingMigInfoProvider(cache *GceCache, migLister MigLister, gceClient A projectId: projectId, concurrentGceRefreshes: concurrentGceRefreshes, migInstancesMinRefreshWaitTime: migInstancesMinRefreshWaitTime, - migInstancesLastRefreshedInfo: make(map[string]time.Time), timeProvider: &realTime{}, } } @@ -175,7 +173,7 @@ func (c *cachingMigInfoProvider) findMigWithMatchingBasename(instanceRef GceRef) } func (c *cachingMigInfoProvider) fillMigInstances(migRef GceRef) error { - if val, ok := c.migInstancesLastRefreshedInfo[migRef.String()]; ok { + if val, ok := c.cache.GetMigInstancesUpdateTime(migRef); ok { // do not regenerate MIG instances cache if last refresh happened recently. if c.timeProvider.Now().Sub(val) < c.migInstancesMinRefreshWaitTime { klog.V(4).Infof("Not regenerating MIG instances cache for %s, as it was refreshed in last MinRefreshWaitTime (%s).", migRef.String(), c.migInstancesMinRefreshWaitTime) @@ -189,8 +187,7 @@ func (c *cachingMigInfoProvider) fillMigInstances(migRef GceRef) error { return err } // only save information for successful calls, given the errors above may be transient. 
- c.migInstancesLastRefreshedInfo[migRef.String()] = c.timeProvider.Now() - return c.cache.SetMigInstances(migRef, instances) + return c.cache.SetMigInstances(migRef, instances, c.timeProvider.Now()) } func (c *cachingMigInfoProvider) GetMigTargetSize(migRef GceRef) (int64, error) { @@ -301,14 +298,29 @@ func (c *cachingMigInfoProvider) fillMigInfoCache() error { migs[piece], errors[piece] = c.gceClient.FetchAllMigs(zones[piece]) }) + failedZones := map[string]error{} + failedZoneCount := 0 for idx, err := range errors { if err != nil { klog.Errorf("Error listing migs from zone %v; err=%v", zones[idx], err) - return fmt.Errorf("%v", errors) + failedZones[zones[idx]] = err + failedZoneCount++ } } + if failedZoneCount > 0 && failedZoneCount == len(zones) { + return fmt.Errorf("%v", errors) + } + registeredMigRefs := c.getRegisteredMigRefs() + + for migRef := range registeredMigRefs { + err, ok := failedZones[migRef.Zone] + if ok { + c.migLister.HandleMigIssue(migRef, err) + } + } + for idx, zone := range zones { for _, zoneMig := range migs[idx] { zoneMigRef := GceRef{ diff --git a/cluster-autoscaler/cloudprovider/gce/mig_info_provider_test.go b/cluster-autoscaler/cloudprovider/gce/mig_info_provider_test.go index ddc31c31afb7..adf2240329fb 100644 --- a/cluster-autoscaler/cloudprovider/gce/mig_info_provider_test.go +++ b/cluster-autoscaler/cloudprovider/gce/mig_info_provider_test.go @@ -105,6 +105,10 @@ func (client *mockAutoscalingGceClient) FetchReservations() ([]*gce.Reservation, return nil, nil } +func (client *mockAutoscalingGceClient) FetchReservationsInProject(_ string) ([]*gce.Reservation, error) { + return nil, nil +} + func (client *mockAutoscalingGceClient) ResizeMig(_ GceRef, _ int64) error { return nil } @@ -117,6 +121,88 @@ func (client *mockAutoscalingGceClient) CreateInstances(_ GceRef, _ string, _ in return nil } +func TestFillMigInstances(t *testing.T) { + migRef := GceRef{Project: "test", Zone: "zone-A", Name: "some-mig"} + oldInstances := 
[]cloudprovider.Instance{ + {Id: "gce://test/zone-A/some-mig-old-instance-1"}, + {Id: "gce://test/zone-A/some-mig-old-instance-2"}, + } + newInstances := []cloudprovider.Instance{ + {Id: "gce://test/zone-A/some-mig-new-instance-1"}, + {Id: "gce://test/zone-A/some-mig-new-instance-2"}, + } + + timeNow := time.Now() + timeRecent := timeNow.Add(-30 * time.Minute) + timeOld := timeNow.Add(-90 * time.Minute) + + testCases := []struct { + name string + cache *GceCache + wantClientCalls int + wantInstances []cloudprovider.Instance + wantUpdateTime time.Time + }{ + { + name: "No instances in cache", + cache: &GceCache{ + instances: map[GceRef][]cloudprovider.Instance{}, + instancesUpdateTime: map[GceRef]time.Time{}, + instancesToMig: map[GceRef]GceRef{}, + }, + wantClientCalls: 1, + wantInstances: newInstances, + wantUpdateTime: timeNow, + }, + { + name: "Old instances in cache", + cache: &GceCache{ + instances: map[GceRef][]cloudprovider.Instance{migRef: oldInstances}, + instancesUpdateTime: map[GceRef]time.Time{migRef: timeOld}, + instancesToMig: map[GceRef]GceRef{}, + }, + wantClientCalls: 1, + wantInstances: newInstances, + wantUpdateTime: timeNow, + }, + { + name: "Recently updated instances in cache", + cache: &GceCache{ + instances: map[GceRef][]cloudprovider.Instance{migRef: oldInstances}, + instancesUpdateTime: map[GceRef]time.Time{migRef: timeRecent}, + instancesToMig: map[GceRef]GceRef{}, + }, + wantClientCalls: 0, + wantInstances: oldInstances, + wantUpdateTime: timeRecent, + }, + } + + for _, tc := range testCases { + t.Run(tc.name, func(t *testing.T) { + callCounter := make(map[GceRef]int) + client := &mockAutoscalingGceClient{ + fetchMigInstances: fetchMigInstancesWithCounter(newInstances, callCounter), + } + + provider, ok := NewCachingMigInfoProvider(tc.cache, NewMigLister(tc.cache), client, mig.GceRef().Project, 1, time.Hour).(*cachingMigInfoProvider) + assert.True(t, ok) + provider.timeProvider = &fakeTime{now: timeNow} + + assert.NoError(t, 
provider.fillMigInstances(migRef)) + assert.Equal(t, tc.wantClientCalls, callCounter[migRef]) + + updateTime, updateTimeFound := tc.cache.GetMigInstancesUpdateTime(migRef) + assert.True(t, updateTimeFound) + assert.Equal(t, tc.wantUpdateTime, updateTime) + + instances, instancesFound := tc.cache.GetMigInstances(migRef) + assert.True(t, instancesFound) + assert.ElementsMatch(t, tc.wantInstances, instances) + }) + } +} + func TestMigInfoProviderGetMigForInstance(t *testing.T) { instance := cloudprovider.Instance{ Id: "gce://project/us-test1/base-instance-name-abcd", @@ -166,10 +252,11 @@ func TestMigInfoProviderGetMigForInstance(t *testing.T) { { name: "mig from cache fill", cache: &GceCache{ - migs: map[GceRef]Mig{mig.GceRef(): mig}, - instances: map[GceRef][]cloudprovider.Instance{}, - instancesToMig: map[GceRef]GceRef{}, - migBaseNameCache: map[GceRef]string{mig.GceRef(): "base-instance-name"}, + migs: map[GceRef]Mig{mig.GceRef(): mig}, + instances: map[GceRef][]cloudprovider.Instance{}, + instancesUpdateTime: map[GceRef]time.Time{}, + instancesToMig: map[GceRef]GceRef{}, + migBaseNameCache: map[GceRef]string{mig.GceRef(): "base-instance-name"}, }, fetchMigInstances: fetchMigInstancesConst([]cloudprovider.Instance{instance}), expectedMig: mig, @@ -179,10 +266,11 @@ func TestMigInfoProviderGetMigForInstance(t *testing.T) { { name: "mig and basename from cache fill", cache: &GceCache{ - migs: map[GceRef]Mig{mig.GceRef(): mig}, - instances: map[GceRef][]cloudprovider.Instance{}, - instancesToMig: map[GceRef]GceRef{}, - migBaseNameCache: map[GceRef]string{}, + migs: map[GceRef]Mig{mig.GceRef(): mig}, + instances: map[GceRef][]cloudprovider.Instance{}, + instancesUpdateTime: map[GceRef]time.Time{}, + instancesToMig: map[GceRef]GceRef{}, + migBaseNameCache: map[GceRef]string{}, }, fetchMigInstances: fetchMigInstancesConst([]cloudprovider.Instance{instance}), fetchMigBasename: fetchMigBasenameConst("base-instance-name"), @@ -195,6 +283,7 @@ func 
TestMigInfoProviderGetMigForInstance(t *testing.T) { cache: &GceCache{ migs: map[GceRef]Mig{mig.GceRef(): mig}, instances: map[GceRef][]cloudprovider.Instance{}, + instancesUpdateTime: map[GceRef]time.Time{}, instancesFromUnknownMig: map[GceRef]bool{}, migBaseNameCache: map[GceRef]string{mig.GceRef(): "base-instance-name"}, }, @@ -258,6 +347,8 @@ func TestMigInfoProviderGetMigForInstance(t *testing.T) { } func TestGetMigInstances(t *testing.T) { + oldRefreshTime := time.Now().Add(-time.Hour) + newRefreshTime := time.Now() instances := []cloudprovider.Instance{ {Id: "gce://project/us-test1/base-instance-name-abcd"}, {Id: "gce://project/us-test1/base-instance-name-efgh"}, @@ -271,34 +362,43 @@ func TestGetMigInstances(t *testing.T) { expectedErr error expectedCachedInstances []cloudprovider.Instance expectedCached bool + expectedRefreshTime time.Time + expectedRefreshed bool }{ { name: "instances in cache", cache: &GceCache{ - migs: map[GceRef]Mig{mig.GceRef(): mig}, - instances: map[GceRef][]cloudprovider.Instance{mig.GceRef(): instances}, + migs: map[GceRef]Mig{mig.GceRef(): mig}, + instances: map[GceRef][]cloudprovider.Instance{mig.GceRef(): instances}, + instancesUpdateTime: map[GceRef]time.Time{mig.GceRef(): oldRefreshTime}, }, expectedInstances: instances, expectedCachedInstances: instances, expectedCached: true, + expectedRefreshTime: oldRefreshTime, + expectedRefreshed: true, }, { name: "instances cache fill", cache: &GceCache{ - migs: map[GceRef]Mig{mig.GceRef(): mig}, - instances: map[GceRef][]cloudprovider.Instance{}, - instancesToMig: map[GceRef]GceRef{}, + migs: map[GceRef]Mig{mig.GceRef(): mig}, + instances: map[GceRef][]cloudprovider.Instance{}, + instancesUpdateTime: map[GceRef]time.Time{}, + instancesToMig: map[GceRef]GceRef{}, }, fetchMigInstances: fetchMigInstancesConst(instances), expectedInstances: instances, expectedCachedInstances: instances, expectedCached: true, + expectedRefreshTime: newRefreshTime, + expectedRefreshed: true, }, { name: 
"error during instances cache fill", cache: &GceCache{ - migs: map[GceRef]Mig{mig.GceRef(): mig}, - instances: map[GceRef][]cloudprovider.Instance{}, + migs: map[GceRef]Mig{mig.GceRef(): mig}, + instances: map[GceRef][]cloudprovider.Instance{}, + instancesUpdateTime: map[GceRef]time.Time{}, }, fetchMigInstances: fetchMigInstancesFail, expectedErr: errFetchMigInstances, @@ -312,15 +412,22 @@ func TestGetMigInstances(t *testing.T) { fetchMigInstances: tc.fetchMigInstances, } migLister := NewMigLister(tc.cache) - provider := NewCachingMigInfoProvider(tc.cache, migLister, client, mig.GceRef().Project, 1, 0*time.Second) + provider, ok := NewCachingMigInfoProvider(tc.cache, migLister, client, mig.GceRef().Project, 1, 0*time.Second).(*cachingMigInfoProvider) + assert.True(t, ok) + provider.timeProvider = &fakeTime{now: newRefreshTime} + instances, err := provider.GetMigInstances(mig.GceRef()) cachedInstances, cached := tc.cache.GetMigInstances(mig.GceRef()) + refreshTime, refreshed := tc.cache.GetMigInstancesUpdateTime(mig.GceRef()) assert.Equal(t, tc.expectedInstances, instances) assert.Equal(t, tc.expectedErr, err) assert.Equal(t, tc.expectedCachedInstances, cachedInstances) assert.Equal(t, tc.expectedCached, cached) + + assert.Equal(t, tc.expectedRefreshTime, refreshTime) + assert.Equal(t, tc.expectedRefreshed, refreshed) }) } } @@ -996,7 +1103,6 @@ func TestMultipleGetMigInstanceCallsLimited(t *testing.T) { projectId: projectId, concurrentGceRefreshes: 1, migInstancesMinRefreshWaitTime: tc.refreshRateDuration, - migInstancesLastRefreshedInfo: make(map[string]time.Time), timeProvider: ft, } ft.now = tc.firstCallTime @@ -1022,6 +1128,7 @@ func emptyCache() *GceCache { return &GceCache{ migs: map[GceRef]Mig{mig.GceRef(): mig}, instances: make(map[GceRef][]cloudprovider.Instance), + instancesUpdateTime: make(map[GceRef]time.Time), migTargetSizeCache: make(map[GceRef]int64), migBaseNameCache: make(map[GceRef]string), instanceTemplateNameCache: make(map[GceRef]string), diff 
--git a/cluster-autoscaler/cloudprovider/gce/templates.go b/cluster-autoscaler/cloudprovider/gce/templates.go index f4ca3b848efc..98f7c931809d 100644 --- a/cluster-autoscaler/cloudprovider/gce/templates.go +++ b/cluster-autoscaler/cloudprovider/gce/templates.go @@ -25,12 +25,12 @@ import ( "strings" "time" - "github.com/ghodss/yaml" gce "google.golang.org/api/compute/v1" apiv1 "k8s.io/api/core/v1" "k8s.io/apimachinery/pkg/api/resource" metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" "k8s.io/klog/v2" + "sigs.k8s.io/yaml" "k8s.io/autoscaler/cluster-autoscaler/cloudprovider" "k8s.io/autoscaler/cluster-autoscaler/utils/gpu" diff --git a/cluster-autoscaler/cloudprovider/hetzner/OWNERS b/cluster-autoscaler/cloudprovider/hetzner/OWNERS index 94eb2469f858..a4b0d9d0e4a7 100644 --- a/cluster-autoscaler/cloudprovider/hetzner/OWNERS +++ b/cluster-autoscaler/cloudprovider/hetzner/OWNERS @@ -6,3 +6,6 @@ reviewers: - apricote #- LKaemmerling #- 4ND3R50N + +labels: +- area/provider/hetzner diff --git a/cluster-autoscaler/cloudprovider/hetzner/README.md b/cluster-autoscaler/cloudprovider/hetzner/README.md index 765636c441c7..066d87b7cddb 100644 --- a/cluster-autoscaler/cloudprovider/hetzner/README.md +++ b/cluster-autoscaler/cloudprovider/hetzner/README.md @@ -2,7 +2,7 @@ The cluster autoscaler for Hetzner Cloud scales worker nodes. -# Configuration +## Configuration `HCLOUD_TOKEN` Required Hetzner Cloud token. @@ -31,10 +31,9 @@ Multiple flags will create multiple node pools. For example: You can find a deployment sample under [examples/cluster-autoscaler-run-on-master.yaml](examples/cluster-autoscaler-run-on-master.yaml). Please be aware that you should change the values within this deployment to reflect your cluster. -# Development +## Development -Make sure you're inside the root path of the [autoscaler -repository](https://github.com/kubernetes/autoscaler) +Make sure you're inside the `cluster-autoscaler` root folder. 1.) 
Build the `cluster-autoscaler` binary: @@ -55,3 +54,13 @@ docker build -t hetzner/cluster-autoscaler:dev . ``` docker push hetzner/cluster-autoscaler:dev ``` + +### Updating vendored hcloud-go + +To update the vendored `hcloud-go` code, navigate to the directory and run the `hack/update-vendor.sh` script: + +``` +cd cluster-autoscaler/cloudprovider/hetzner +UPSTREAM_REF=v2.0.0 hack/update-vendor.sh +git add hcloud-go/ +``` diff --git a/cluster-autoscaler/cloudprovider/hetzner/hack/update-vendor.sh b/cluster-autoscaler/cloudprovider/hetzner/hack/update-vendor.sh new file mode 100755 index 000000000000..c7ce398a2da9 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/hetzner/hack/update-vendor.sh @@ -0,0 +1,28 @@ +#!/usr/bin/env bash +set -o pipefail +set -o nounset +set -o errexit + +UPSTREAM_REPO=${UPSTREAM_REPO:-https://github.com/hetznercloud/hcloud-go.git} +UPSTREAM_REF=${UPSTREAM_REF:-main} + +vendor_path=hcloud-go + +original_module_path="github.com/hetznercloud/hcloud-go/v2/" +vendor_module_path=k8s.io/autoscaler/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/ + + +echo "# Removing existing directory." +rm -rf hcloud-go + +echo "# Cloning repo" +git clone --depth=1 --branch "$UPSTREAM_REF" "$UPSTREAM_REPO" "$vendor_path" + +echo "# Removing unnecessary files" +find "$vendor_path" -type f ! -name "*.go" ! 
-name "LICENSE" -delete +find "$vendor_path" -type f -name "*_test.go" -delete +find "$vendor_path" -type d -empty -delete + +echo "# Rewriting module path" +find "$vendor_path" -type f -exec sed -i "s@${original_module_path}@${vendor_module_path}@g" {} + + diff --git a/cluster-autoscaler/vendor/github.com/mitchellh/mapstructure/LICENSE b/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/LICENSE similarity index 86% rename from cluster-autoscaler/vendor/github.com/mitchellh/mapstructure/LICENSE rename to cluster-autoscaler/cloudprovider/hetzner/hcloud-go/LICENSE index f9c841a51e0d..394ce101f08c 100644 --- a/cluster-autoscaler/vendor/github.com/mitchellh/mapstructure/LICENSE +++ b/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/LICENSE @@ -1,6 +1,6 @@ -The MIT License (MIT) +MIT License -Copyright (c) 2013 Mitchell Hashimoto +Copyright (c) 2018-2020 Hetzner Cloud GmbH Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal @@ -9,13 +9,13 @@ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: -The above copyright notice and this permission notice shall be included in -all copies or substantial portions of the Software. +The above copyright notice and this permission notice shall be included in all +copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN -THE SOFTWARE. 
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +SOFTWARE. diff --git a/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/action.go b/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/action.go index 9502739db527..12b4721e3c7a 100644 --- a/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/action.go +++ b/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/action.go @@ -1,19 +1,3 @@ -/* -Copyright 2018 The Kubernetes Authors. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - package hcloud import ( @@ -27,7 +11,7 @@ import ( // Action represents an action in the Hetzner Cloud. type Action struct { - ID int + ID int64 Status ActionStatus Command string Progress int @@ -50,7 +34,7 @@ const ( // ActionResource references other resources from an action. type ActionResource struct { - ID int + ID int64 Type ActionResourceType } @@ -92,7 +76,7 @@ type ActionClient struct { } // GetByID retrieves an action by its ID. If the action does not exist, nil is returned. -func (c *ActionClient) GetByID(ctx context.Context, id int) (*Action, *Response, error) { +func (c *ActionClient) GetByID(ctx context.Context, id int64) (*Action, *Response, error) { req, err := c.client.NewRequest(ctx, "GET", fmt.Sprintf("/actions/%d", id), nil) if err != nil { return nil, nil, err @@ -112,13 +96,13 @@ func (c *ActionClient) GetByID(ctx context.Context, id int) (*Action, *Response, // ActionListOpts specifies options for listing actions. 
type ActionListOpts struct { ListOpts - ID []int + ID []int64 Status []ActionStatus Sort []string } func (l ActionListOpts) values() url.Values { - vals := l.ListOpts.values() + vals := l.ListOpts.Values() for _, id := range l.ID { vals.Add("id", fmt.Sprintf("%d", id)) } @@ -156,30 +140,12 @@ func (c *ActionClient) List(ctx context.Context, opts ActionListOpts) ([]*Action // All returns all actions. func (c *ActionClient) All(ctx context.Context) ([]*Action, error) { - allActions := []*Action{} - - opts := ActionListOpts{} - opts.PerPage = 50 - - err := c.client.all(func(page int) (*Response, error) { - opts.Page = page - actions, resp, err := c.List(ctx, opts) - if err != nil { - return resp, err - } - allActions = append(allActions, actions...) - return resp, nil - }) - if err != nil { - return nil, err - } - - return allActions, nil + return c.AllWithOpts(ctx, ActionListOpts{ListOpts: ListOpts{PerPage: 50}}) } // AllWithOpts returns all actions for the given options. func (c *ActionClient) AllWithOpts(ctx context.Context, opts ActionListOpts) ([]*Action, error) { - allActions := []*Action{} + var allActions []*Action err := c.client.all(func(page int) (*Response, error) { opts.Page = page @@ -208,7 +174,7 @@ func (c *ActionClient) AllWithOpts(ctx context.Context, opts ActionListOpts) ([] // complete successfully, as well as any errors that happened while // querying the API. // -// By default the method keeps watching until all actions have finished +// By default, the method keeps watching until all actions have finished // processing. If you want to be able to cancel the method or configure a // timeout, use the [context.Context]. Once the method has stopped watching, // both returned channels are closed. 
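The `All` methods in this vendored update are rewritten to delegate to `AllWithOpts` with a default page size of 50, removing a duplicated pagination loop from each client. A minimal, self-contained sketch of that delegation pattern — the types and the fake `fetchPage` call are illustrative stand-ins, not hcloud-go's real API:

```go
package main

import "fmt"

// ListOpts mirrors the shape of the paging options used by the clients.
type ListOpts struct {
	Page    int
	PerPage int
}

type Item struct{ ID int64 }

// fetchPage simulates one paginated API call; it returns the items for the
// requested page and whether more pages remain.
func fetchPage(opts ListOpts) (items []Item, more bool) {
	const total = 5
	start := (opts.Page - 1) * opts.PerPage
	for i := start; i < total && i < start+opts.PerPage; i++ {
		items = append(items, Item{ID: int64(i + 1)})
	}
	return items, start+opts.PerPage < total
}

// allWithOpts accumulates every page, mirroring the AllWithOpts helpers
// in the vendored clients.
func allWithOpts(opts ListOpts) []Item {
	var all []Item
	for page := 1; ; page++ {
		opts.Page = page
		items, more := fetchPage(opts)
		all = append(all, items...)
		if !more {
			return all
		}
	}
}

// all delegates to allWithOpts with a default page size — the same shape
// as the rewritten Action/Certificate/Datacenter All methods.
func all() []Item {
	return allWithOpts(ListOpts{PerPage: 2})
}

func main() {
	fmt.Println(len(all())) // 5
}
```

The delegation keeps `All`'s behavior identical while concentrating the page loop in one place.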
@@ -223,8 +189,8 @@ func (c *ActionClient) WatchOverallProgress(ctx context.Context, actions []*Acti defer close(errCh) defer close(progressCh) - successIDs := make([]int, 0, len(actions)) - watchIDs := make(map[int]struct{}, len(actions)) + successIDs := make([]int64, 0, len(actions)) + watchIDs := make(map[int64]struct{}, len(actions)) for _, action := range actions { watchIDs[action.ID] = struct{}{} } @@ -257,7 +223,7 @@ func (c *ActionClient) WatchOverallProgress(ctx context.Context, actions []*Acti continue case ActionStatusSuccess: delete(watchIDs, a.ID) - successIDs := append(successIDs, a.ID) + successIDs = append(successIDs, a.ID) sendProgress(progressCh, int(float64(len(actions)-len(successIDs))/float64(len(actions))*100)) case ActionStatusError: delete(watchIDs, a.ID) @@ -285,7 +251,7 @@ func (c *ActionClient) WatchOverallProgress(ctx context.Context, actions []*Acti // API, as well as the error of the action if it did not complete // successfully, or nil if it did. // -// By default the method keeps watching until the action has finished +// By default, the method keeps watching until the action has finished // processing. If you want to be able to cancel the method or configure a // timeout, use the [context.Context]. Once the method has stopped watching, // both returned channels are closed. diff --git a/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/architecture.go b/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/architecture.go index 1a2c173becbb..d68e01acf903 100644 --- a/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/architecture.go +++ b/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/architecture.go @@ -1,5 +1,5 @@ /* -Copyright 2018 The Kubernetes Authors. +Copyright 2023 The Kubernetes Authors. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. 
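Beyond the license-header cleanup, this sync widens every resource ID from `int` to `int64` and switches the `Get` helpers from `strconv.Atoi` to `strconv.ParseInt(…, 10, 64)`, so large IDs parse correctly even on 32-bit builds. A hedged sketch of the ID-or-name dispatch — `parseID` is an illustrative helper, not part of hcloud-go; the real clients go on to call `GetByID` or `GetByName`:

```go
package main

import (
	"fmt"
	"strconv"
)

// parseID mirrors the idOrName handling in the updated Get methods: if the
// input parses as a 64-bit integer it is treated as an ID, otherwise as a
// resource name.
func parseID(idOrName string) (int64, bool) {
	id, err := strconv.ParseInt(idOrName, 10, 64)
	return id, err == nil
}

func main() {
	// A value above 2^31-1 would overflow int on 32-bit builds if parsed
	// with Atoi, which is why the vendored client moved IDs to int64.
	id, ok := parseID("3000000000")
	fmt.Println(id, ok) // 3000000000 true

	_, ok = parseID("my-server")
	fmt.Println(ok) // false
}
```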
diff --git a/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/certificate.go b/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/certificate.go index c95b4d1f709e..9a5b26f869f2 100644 --- a/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/certificate.go +++ b/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/certificate.go @@ -1,19 +1,3 @@ -/* -Copyright 2018 The Kubernetes Authors. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - package hcloud import ( @@ -66,7 +50,7 @@ const ( // CertificateUsedByRef points to a resource that uses this certificate. type CertificateUsedByRef struct { - ID int + ID int64 Type CertificateUsedByRefType } @@ -84,9 +68,9 @@ func (st *CertificateStatus) IsFailed() bool { return st.Issuance == CertificateStatusTypeFailed || st.Renewal == CertificateStatusTypeFailed } -// Certificate represents an certificate in the Hetzner Cloud. +// Certificate represents a certificate in the Hetzner Cloud. type Certificate struct { - ID int + ID int64 Name string Labels map[string]string Type CertificateType @@ -112,7 +96,7 @@ type CertificateClient struct { } // GetByID retrieves a Certificate by its ID. If the Certificate does not exist, nil is returned. 
-func (c *CertificateClient) GetByID(ctx context.Context, id int) (*Certificate, *Response, error) { +func (c *CertificateClient) GetByID(ctx context.Context, id int64) (*Certificate, *Response, error) { req, err := c.client.NewRequest(ctx, "GET", fmt.Sprintf("/certificates/%d", id), nil) if err != nil { return nil, nil, err @@ -144,8 +128,8 @@ func (c *CertificateClient) GetByName(ctx context.Context, name string) (*Certif // Get retrieves a Certificate by its ID if the input can be parsed as an integer, otherwise it // retrieves a Certificate by its name. If the Certificate does not exist, nil is returned. func (c *CertificateClient) Get(ctx context.Context, idOrName string) (*Certificate, *Response, error) { - if id, err := strconv.Atoi(idOrName); err == nil { - return c.GetByID(ctx, int(id)) + if id, err := strconv.ParseInt(idOrName, 10, 64); err == nil { + return c.GetByID(ctx, id) } return c.GetByName(ctx, idOrName) } @@ -158,7 +142,7 @@ type CertificateListOpts struct { } func (l CertificateListOpts) values() url.Values { - vals := l.ListOpts.values() + vals := l.ListOpts.Values() if l.Name != "" { vals.Add("name", l.Name) } @@ -193,25 +177,7 @@ func (c *CertificateClient) List(ctx context.Context, opts CertificateListOpts) // All returns all Certificates. func (c *CertificateClient) All(ctx context.Context) ([]*Certificate, error) { - allCertificates := []*Certificate{} - - opts := CertificateListOpts{} - opts.PerPage = 50 - - err := c.client.all(func(page int) (*Response, error) { - opts.Page = page - Certificate, resp, err := c.List(ctx, opts) - if err != nil { - return resp, err - } - allCertificates = append(allCertificates, Certificate...) - return resp, nil - }) - if err != nil { - return nil, err - } - - return allCertificates, nil + return c.AllWithOpts(ctx, CertificateListOpts{ListOpts: ListOpts{PerPage: 50}}) } // AllWithOpts returns all Certificates for the given options. 
@@ -276,7 +242,7 @@ func (o CertificateCreateOpts) validateUploaded() error { return nil } -// Create creates a new certificate uploaded certificate. +// Create creates a new uploaded certificate. // // Create returns an error for certificates of any other type. Use // CreateCertificate to create such certificates. diff --git a/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/client.go b/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/client.go index 9ad4482ddce5..56983ea84d76 100644 --- a/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/client.go +++ b/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/client.go @@ -1,19 +1,3 @@ -/* -Copyright 2018 The Kubernetes Authors. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - package hcloud import ( @@ -461,7 +445,8 @@ type ListOpts struct { LabelSelector string // Label selector for filtering by labels } -func (l ListOpts) values() url.Values { +// Values returns the ListOpts as URL values. 
+func (l ListOpts) Values() url.Values { vals := url.Values{} if l.Page > 0 { vals.Add("page", strconv.Itoa(l.Page)) diff --git a/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/datacenter.go b/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/datacenter.go index 514771b54479..cc473845e632 100644 --- a/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/datacenter.go +++ b/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/datacenter.go @@ -1,19 +1,3 @@ -/* -Copyright 2018 The Kubernetes Authors. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - package hcloud import ( @@ -27,7 +11,7 @@ import ( // Datacenter represents a datacenter in the Hetzner Cloud. type Datacenter struct { - ID int + ID int64 Name string Description string Location *Location @@ -46,7 +30,7 @@ type DatacenterClient struct { } // GetByID retrieves a datacenter by its ID. If the datacenter does not exist, nil is returned. -func (c *DatacenterClient) GetByID(ctx context.Context, id int) (*Datacenter, *Response, error) { +func (c *DatacenterClient) GetByID(ctx context.Context, id int64) (*Datacenter, *Response, error) { req, err := c.client.NewRequest(ctx, "GET", fmt.Sprintf("/datacenters/%d", id), nil) if err != nil { return nil, nil, err @@ -63,7 +47,7 @@ func (c *DatacenterClient) GetByID(ctx context.Context, id int) (*Datacenter, *R return DatacenterFromSchema(body.Datacenter), resp, nil } -// GetByName retrieves an datacenter by its name. 
If the datacenter does not exist, nil is returned. +// GetByName retrieves a datacenter by its name. If the datacenter does not exist, nil is returned. func (c *DatacenterClient) GetByName(ctx context.Context, name string) (*Datacenter, *Response, error) { if name == "" { return nil, nil, nil @@ -78,8 +62,8 @@ func (c *DatacenterClient) GetByName(ctx context.Context, name string) (*Datacen // Get retrieves a datacenter by its ID if the input can be parsed as an integer, otherwise it // retrieves a datacenter by its name. If the datacenter does not exist, nil is returned. func (c *DatacenterClient) Get(ctx context.Context, idOrName string) (*Datacenter, *Response, error) { - if id, err := strconv.Atoi(idOrName); err == nil { - return c.GetByID(ctx, int(id)) + if id, err := strconv.ParseInt(idOrName, 10, 64); err == nil { + return c.GetByID(ctx, id) } return c.GetByName(ctx, idOrName) } @@ -92,7 +76,7 @@ type DatacenterListOpts struct { } func (l DatacenterListOpts) values() url.Values { - vals := l.ListOpts.values() + vals := l.ListOpts.Values() if l.Name != "" { vals.Add("name", l.Name) } @@ -127,10 +111,12 @@ func (c *DatacenterClient) List(ctx context.Context, opts DatacenterListOpts) ([ // All returns all datacenters. func (c *DatacenterClient) All(ctx context.Context) ([]*Datacenter, error) { - allDatacenters := []*Datacenter{} + return c.AllWithOpts(ctx, DatacenterListOpts{ListOpts: ListOpts{PerPage: 50}}) +} - opts := DatacenterListOpts{} - opts.PerPage = 50 +// AllWithOpts returns all datacenters for the given options. 
+func (c *DatacenterClient) AllWithOpts(ctx context.Context, opts DatacenterListOpts) ([]*Datacenter, error) { + var allDatacenters []*Datacenter err := c.client.all(func(page int) (*Response, error) { opts.Page = page diff --git a/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/deprecation.go b/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/deprecation.go new file mode 100644 index 000000000000..17c6949cba37 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/deprecation.go @@ -0,0 +1,59 @@ +package hcloud + +import "time" + +// Deprecatable is a shared interface implemented by all Resources that have a defined deprecation workflow. +type Deprecatable interface { + // IsDeprecated returns true if the resource is marked as deprecated. + IsDeprecated() bool + + // UnavailableAfter returns the time that the deprecated resource will be removed from the API. + // This only returns a valid value if [Deprecatable.IsDeprecated] returned true. + UnavailableAfter() time.Time + + // DeprecationAnnounced returns the time that the deprecation of this resource was announced. + // This only returns a valid value if [Deprecatable.IsDeprecated] returned true. + DeprecationAnnounced() time.Time +} + +// DeprecationInfo contains the information published when a resource is actually deprecated. +type DeprecationInfo struct { + Announced time.Time + UnavailableAfter time.Time +} + +// DeprecatableResource implements the [Deprecatable] interface and can be embedded in structs for Resources that can be +// deprecated. +type DeprecatableResource struct { + Deprecation *DeprecationInfo +} + +// IsDeprecated returns true if the resource is marked as deprecated. +func (d DeprecatableResource) IsDeprecated() bool { + return d.Deprecation != nil +} + +// UnavailableAfter returns the time that the deprecated resource will be removed from the API. +// This only returns a valid value if [Deprecatable.IsDeprecated] returned true. 
+func (d DeprecatableResource) UnavailableAfter() time.Time { + if !d.IsDeprecated() { + // Return "null" time if resource is not deprecated + return time.Unix(0, 0) + } + + return d.Deprecation.UnavailableAfter +} + +// DeprecationAnnounced returns the time that the deprecation of this resource was announced. +// This only returns a valid value if [Deprecatable.IsDeprecated] returned true. +func (d DeprecatableResource) DeprecationAnnounced() time.Time { + if !d.IsDeprecated() { + // Return "null" time if resource is not deprecated + return time.Unix(0, 0) + } + + return d.Deprecation.Announced +} + +// Make sure that all expected Resources actually implement the interface. +var _ Deprecatable = ServerType{} diff --git a/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/error.go b/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/error.go index e77398ec7c6c..ff04d07b229a 100644 --- a/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/error.go +++ b/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/error.go @@ -1,19 +1,3 @@ -/* -Copyright 2018 The Kubernetes Authors. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. 
-*/ - package hcloud import ( diff --git a/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/firewall.go b/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/firewall.go index 4734bb4f6271..968d2045909a 100644 --- a/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/firewall.go +++ b/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/firewall.go @@ -1,19 +1,3 @@ -/* -Copyright 2018 The Kubernetes Authors. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - package hcloud import ( @@ -32,7 +16,7 @@ import ( // Firewall represents a Firewall in the Hetzner Cloud. type Firewall struct { - ID int + ID int64 Name string Labels map[string]string Created time.Time @@ -96,7 +80,7 @@ type FirewallResource struct { // FirewallResourceServer represents a Server to apply a Firewall on. type FirewallResourceServer struct { - ID int + ID int64 } // FirewallResourceLabelSelector represents a LabelSelector to apply a Firewall on. @@ -110,7 +94,7 @@ type FirewallClient struct { } // GetByID retrieves a Firewall by its ID. If the Firewall does not exist, nil is returned. 
-func (c *FirewallClient) GetByID(ctx context.Context, id int) (*Firewall, *Response, error) { +func (c *FirewallClient) GetByID(ctx context.Context, id int64) (*Firewall, *Response, error) { req, err := c.client.NewRequest(ctx, "GET", fmt.Sprintf("/firewalls/%d", id), nil) if err != nil { return nil, nil, err @@ -142,8 +126,8 @@ func (c *FirewallClient) GetByName(ctx context.Context, name string) (*Firewall, // Get retrieves a Firewall by its ID if the input can be parsed as an integer, otherwise it // retrieves a Firewall by its name. If the Firewall does not exist, nil is returned. func (c *FirewallClient) Get(ctx context.Context, idOrName string) (*Firewall, *Response, error) { - if id, err := strconv.Atoi(idOrName); err == nil { - return c.GetByID(ctx, int(id)) + if id, err := strconv.ParseInt(idOrName, 10, 64); err == nil { + return c.GetByID(ctx, id) } return c.GetByName(ctx, idOrName) } @@ -156,7 +140,7 @@ type FirewallListOpts struct { } func (l FirewallListOpts) values() url.Values { - vals := l.ListOpts.values() + vals := l.ListOpts.Values() if l.Name != "" { vals.Add("name", l.Name) } @@ -191,25 +175,7 @@ func (c *FirewallClient) List(ctx context.Context, opts FirewallListOpts) ([]*Fi // All returns all Firewalls. func (c *FirewallClient) All(ctx context.Context) ([]*Firewall, error) { - allFirewalls := []*Firewall{} - - opts := FirewallListOpts{} - opts.PerPage = 50 - - err := c.client.all(func(page int) (*Response, error) { - opts.Page = page - firewalls, resp, err := c.List(ctx, opts) - if err != nil { - return resp, err - } - allFirewalls = append(allFirewalls, firewalls...) - return resp, nil - }) - if err != nil { - return nil, err - } - - return allFirewalls, nil + return c.AllWithOpts(ctx, FirewallListOpts{ListOpts: ListOpts{PerPage: 50}}) } // AllWithOpts returns all Firewalls for the given options. 
diff --git a/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/floating_ip.go b/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/floating_ip.go index 73074805efc8..7e2dc5cad952 100644 --- a/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/floating_ip.go +++ b/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/floating_ip.go @@ -1,19 +1,3 @@ -/* -Copyright 2018 The Kubernetes Authors. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - package hcloud import ( @@ -32,7 +16,7 @@ import ( // FloatingIP represents a Floating IP in the Hetzner Cloud. type FloatingIP struct { - ID int + ID int64 Description string Created time.Time IP net.IP @@ -58,7 +42,7 @@ type FloatingIPProtection struct { Delete bool } -// FloatingIPType represents the type of a Floating IP. +// FloatingIPType represents the type of Floating IP. type FloatingIPType string // Floating IP types. @@ -67,7 +51,7 @@ const ( FloatingIPTypeIPv6 FloatingIPType = "ipv6" ) -// changeDNSPtr changes or resets the reverse DNS pointer for a IP address. +// changeDNSPtr changes or resets the reverse DNS pointer for an IP address. // Pass a nil ptr to reset the reverse DNS pointer to its default value. func (f *FloatingIP) changeDNSPtr(ctx context.Context, client *Client, ip net.IP, ptr *string) (*Action, *Response, error) { reqBody := schema.FloatingIPActionChangeDNSPtrRequest{ @@ -111,7 +95,7 @@ type FloatingIPClient struct { // GetByID retrieves a Floating IP by its ID. 
If the Floating IP does not exist, // nil is returned. -func (c *FloatingIPClient) GetByID(ctx context.Context, id int) (*FloatingIP, *Response, error) { +func (c *FloatingIPClient) GetByID(ctx context.Context, id int64) (*FloatingIP, *Response, error) { req, err := c.client.NewRequest(ctx, "GET", fmt.Sprintf("/floating_ips/%d", id), nil) if err != nil { return nil, nil, err @@ -143,8 +127,8 @@ func (c *FloatingIPClient) GetByName(ctx context.Context, name string) (*Floatin // Get retrieves a Floating IP by its ID if the input can be parsed as an integer, otherwise it // retrieves a Floating IP by its name. If the Floating IP does not exist, nil is returned. func (c *FloatingIPClient) Get(ctx context.Context, idOrName string) (*FloatingIP, *Response, error) { - if id, err := strconv.Atoi(idOrName); err == nil { - return c.GetByID(ctx, int(id)) + if id, err := strconv.ParseInt(idOrName, 10, 64); err == nil { + return c.GetByID(ctx, id) } return c.GetByName(ctx, idOrName) } @@ -157,7 +141,7 @@ type FloatingIPListOpts struct { } func (l FloatingIPListOpts) values() url.Values { - vals := l.ListOpts.values() + vals := l.ListOpts.Values() if l.Name != "" { vals.Add("name", l.Name) } @@ -197,7 +181,7 @@ func (c *FloatingIPClient) All(ctx context.Context) ([]*FloatingIP, error) { // AllWithOpts returns all Floating IPs for the given options. 
func (c *FloatingIPClient) AllWithOpts(ctx context.Context, opts FloatingIPListOpts) ([]*FloatingIP, error) { - allFloatingIPs := []*FloatingIP{} + var allFloatingIPs []*FloatingIP err := c.client.all(func(page int) (*Response, error) { opts.Page = page diff --git a/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/hcloud.go b/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/hcloud.go index 331a8a89c5af..0131c0d9230e 100644 --- a/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/hcloud.go +++ b/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/hcloud.go @@ -1,21 +1,5 @@ -/* -Copyright 2018 The Kubernetes Authors. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - // Package hcloud is a library for the Hetzner Cloud API. package hcloud // Version is the library's version following Semantic Versioning. -const Version = "1.42.0" // x-release-please-version +const Version = "2.0.0" // x-release-please-version diff --git a/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/helper.go b/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/helper.go index 93af1ee7a409..1965609add66 100644 --- a/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/helper.go +++ b/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/helper.go @@ -1,19 +1,3 @@ -/* -Copyright 2018 The Kubernetes Authors. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. 
-You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - package hcloud import "time" diff --git a/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/image.go b/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/image.go index 0e7593c59975..f79489728769 100644 --- a/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/image.go +++ b/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/image.go @@ -1,19 +1,3 @@ -/* -Copyright 2018 The Kubernetes Authors. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - package hcloud import ( @@ -30,7 +14,7 @@ import ( // Image represents an Image in the Hetzner Cloud. type Image struct { - ID int + ID int64 Name string Type ImageType Status ImageStatus @@ -97,7 +81,7 @@ type ImageClient struct { } // GetByID retrieves an image by its ID. If the image does not exist, nil is returned. 
-func (c *ImageClient) GetByID(ctx context.Context, id int) (*Image, *Response, error) { +func (c *ImageClient) GetByID(ctx context.Context, id int64) (*Image, *Response, error) { req, err := c.client.NewRequest(ctx, "GET", fmt.Sprintf("/images/%d", id), nil) if err != nil { return nil, nil, err @@ -148,7 +132,7 @@ func (c *ImageClient) GetByNameAndArchitecture(ctx context.Context, name string, // // Deprecated: Use [ImageClient.GetForArchitecture] instead. func (c *ImageClient) Get(ctx context.Context, idOrName string) (*Image, *Response, error) { - if id, err := strconv.Atoi(idOrName); err == nil { + if id, err := strconv.ParseInt(idOrName, 10, 64); err == nil { return c.GetByID(ctx, id) } return c.GetByName(ctx, idOrName) @@ -160,7 +144,7 @@ func (c *ImageClient) Get(ctx context.Context, idOrName string) (*Image, *Respon // In contrast to [ImageClient.Get], this method also returns deprecated images. Depending on your needs you should // check for this in your calling method. func (c *ImageClient) GetForArchitecture(ctx context.Context, idOrName string, architecture Architecture) (*Image, *Response, error) { - if id, err := strconv.Atoi(idOrName); err == nil { + if id, err := strconv.ParseInt(idOrName, 10, 64); err == nil { return c.GetByID(ctx, id) } return c.GetByNameAndArchitecture(ctx, idOrName, architecture) @@ -179,12 +163,12 @@ type ImageListOpts struct { } func (l ImageListOpts) values() url.Values { - vals := l.ListOpts.values() + vals := l.ListOpts.Values() for _, typ := range l.Type { vals.Add("type", string(typ)) } if l.BoundTo != nil { - vals.Add("bound_to", strconv.Itoa(l.BoundTo.ID)) + vals.Add("bound_to", strconv.FormatInt(l.BoundTo.ID, 10)) } if l.Name != "" { vals.Add("name", l.Name) diff --git a/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/internal/instrumentation/metrics.go b/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/internal/instrumentation/metrics.go index 14af92844aed..69a7165ba868 100644 --- 
a/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/internal/instrumentation/metrics.go +++ b/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/internal/instrumentation/metrics.go @@ -1,19 +1,3 @@ -/* -Copyright 2018 The Kubernetes Authors. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - package instrumentation import ( diff --git a/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/iso.go b/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/iso.go index a7d0fec23950..70807e38fc18 100644 --- a/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/iso.go +++ b/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/iso.go @@ -1,19 +1,3 @@ -/* -Copyright 2018 The Kubernetes Authors. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - package hcloud import ( @@ -28,7 +12,7 @@ import ( // ISO represents an ISO image in the Hetzner Cloud. 
type ISO struct { - ID int + ID int64 Name string Description string Type ISOType @@ -58,7 +42,7 @@ type ISOClient struct { } // GetByID retrieves an ISO by its ID. -func (c *ISOClient) GetByID(ctx context.Context, id int) (*ISO, *Response, error) { +func (c *ISOClient) GetByID(ctx context.Context, id int64) (*ISO, *Response, error) { req, err := c.client.NewRequest(ctx, "GET", fmt.Sprintf("/isos/%d", id), nil) if err != nil { return nil, nil, err @@ -89,8 +73,8 @@ func (c *ISOClient) GetByName(ctx context.Context, name string) (*ISO, *Response // Get retrieves an ISO by its ID if the input can be parsed as an integer, otherwise it retrieves an ISO by its name. func (c *ISOClient) Get(ctx context.Context, idOrName string) (*ISO, *Response, error) { - if id, err := strconv.Atoi(idOrName); err == nil { - return c.GetByID(ctx, int(id)) + if id, err := strconv.ParseInt(idOrName, 10, 64); err == nil { + return c.GetByID(ctx, id) } return c.GetByName(ctx, idOrName) } @@ -105,11 +89,15 @@ type ISOListOpts struct { Architecture []Architecture // IncludeWildcardArchitecture must be set to also return custom ISOs that have no architecture set, if you are // also setting the Architecture field. + // Deprecated: Use [ISOListOpts.IncludeArchitectureWildcard] instead. IncludeWildcardArchitecture bool + // IncludeArchitectureWildcard must be set to also return custom ISOs that have no architecture set, if you are + // also setting the Architecture field.
+ IncludeArchitectureWildcard bool } func (l ISOListOpts) values() url.Values { - vals := l.ListOpts.values() + vals := l.ListOpts.Values() if l.Name != "" { vals.Add("name", l.Name) } @@ -119,7 +107,7 @@ func (l ISOListOpts) values() url.Values { for _, arch := range l.Architecture { vals.Add("architecture", string(arch)) } - if l.IncludeWildcardArchitecture { + if l.IncludeArchitectureWildcard || l.IncludeWildcardArchitecture { vals.Add("include_architecture_wildcard", "true") } return vals @@ -150,10 +138,12 @@ func (c *ISOClient) List(ctx context.Context, opts ISOListOpts) ([]*ISO, *Respon // All returns all ISOs. func (c *ISOClient) All(ctx context.Context) ([]*ISO, error) { - allISOs := []*ISO{} + return c.AllWithOpts(ctx, ISOListOpts{ListOpts: ListOpts{PerPage: 50}}) +} - opts := ISOListOpts{} - opts.PerPage = 50 +// AllWithOpts returns all ISOs for the given options. +func (c *ISOClient) AllWithOpts(ctx context.Context, opts ISOListOpts) ([]*ISO, error) { + allISOs := make([]*ISO, 0) err := c.client.all(func(page int) (*Response, error) { opts.Page = page diff --git a/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/labels.go b/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/labels.go index e22000a8fa67..3dc7d781fdda 100644 --- a/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/labels.go +++ b/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/labels.go @@ -6,8 +6,8 @@ import ( ) var keyRegexp = regexp.MustCompile( - `^([a-z0-9A-Z]((?:[\-_.]|[a-z0-9A-Z]){0,253}[a-z0-9A-Z])?/)?[a-z0-9A-Z]((?:[\-_.]|[a-z0-9A-Z]|){0,62}[a-z0-9A-Z])?$`) -var valueRegexp = regexp.MustCompile(`^(([a-z0-9A-Z](?:[\-_.]|[a-z0-9A-Z]){0,62})?[a-z0-9A-Z]$|$)`) + `^([a-z0-9A-Z]((?:[\-_.]|[a-z0-9A-Z]){0,253}[a-z0-9A-Z])?/)?[a-z0-9A-Z]((?:[\-_.]|[a-z0-9A-Z]|){0,61}[a-z0-9A-Z])?$`) +var valueRegexp = regexp.MustCompile(`^(([a-z0-9A-Z](?:[\-_.]|[a-z0-9A-Z]){0,61})?[a-z0-9A-Z]$|$)`) func ValidateResourceLabels(labels map[string]interface{}) (bool, error) { 
for k, v := range labels { diff --git a/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/load_balancer.go b/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/load_balancer.go index 24f4e8852bd0..020312f19592 100644 --- a/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/load_balancer.go +++ b/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/load_balancer.go @@ -1,19 +1,3 @@ -/* -Copyright 2018 The Kubernetes Authors. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - package hcloud import ( @@ -32,7 +16,7 @@ import ( // LoadBalancer represents a Load Balancer in the Hetzner Cloud. type LoadBalancer struct { - ID int + ID int64 Name string PublicNet LoadBalancerPublicNet PrivateNet []LoadBalancerPrivateNet @@ -119,7 +103,7 @@ type LoadBalancerAlgorithmType string const ( // LoadBalancerAlgorithmTypeRoundRobin is an algorithm which distributes - // requests to targets in a round robin fashion. + // requests to targets in a round-robin fashion. LoadBalancerAlgorithmTypeRoundRobin LoadBalancerAlgorithmType = "round_robin" // LoadBalancerAlgorithmTypeLeastConnections is an algorithm which distributes // requests to targets with the least number of connections. @@ -132,7 +116,7 @@ type LoadBalancerAlgorithm struct { Type LoadBalancerAlgorithmType } -// LoadBalancerTargetType specifies the type of a Load Balancer target. +// LoadBalancerTargetType specifies the type of Load Balancer target. 
type LoadBalancerTargetType string const ( @@ -213,7 +197,7 @@ type LoadBalancerProtection struct { Delete bool } -// changeDNSPtr changes or resets the reverse DNS pointer for a IP address. +// changeDNSPtr changes or resets the reverse DNS pointer for an IP address. // Pass a nil ptr to reset the reverse DNS pointer to its default value. func (lb *LoadBalancer) changeDNSPtr(ctx context.Context, client *Client, ip net.IP, ptr *string) (*Action, *Response, error) { reqBody := schema.LoadBalancerActionChangeDNSPtrRequest{ @@ -257,7 +241,7 @@ type LoadBalancerClient struct { } // GetByID retrieves a Load Balancer by its ID. If the Load Balancer does not exist, nil is returned. -func (c *LoadBalancerClient) GetByID(ctx context.Context, id int) (*LoadBalancer, *Response, error) { +func (c *LoadBalancerClient) GetByID(ctx context.Context, id int64) (*LoadBalancer, *Response, error) { req, err := c.client.NewRequest(ctx, "GET", fmt.Sprintf("/load_balancers/%d", id), nil) if err != nil { return nil, nil, err @@ -289,8 +273,8 @@ func (c *LoadBalancerClient) GetByName(ctx context.Context, name string) (*LoadB // Get retrieves a Load Balancer by its ID if the input can be parsed as an integer, otherwise it // retrieves a Load Balancer by its name. If the Load Balancer does not exist, nil is returned. func (c *LoadBalancerClient) Get(ctx context.Context, idOrName string) (*LoadBalancer, *Response, error) { - if id, err := strconv.Atoi(idOrName); err == nil { - return c.GetByID(ctx, int(id)) + if id, err := strconv.ParseInt(idOrName, 10, 64); err == nil { + return c.GetByID(ctx, id) } return c.GetByName(ctx, idOrName) } @@ -303,7 +287,7 @@ type LoadBalancerListOpts struct { } func (l LoadBalancerListOpts) values() url.Values { - vals := l.ListOpts.values() + vals := l.ListOpts.Values() if l.Name != "" { vals.Add("name", l.Name) } @@ -338,25 +322,7 @@ func (c *LoadBalancerClient) List(ctx context.Context, opts LoadBalancerListOpts // All returns all Load Balancers. 
func (c *LoadBalancerClient) All(ctx context.Context) ([]*LoadBalancer, error) { - allLoadBalancer := []*LoadBalancer{} - - opts := LoadBalancerListOpts{} - opts.PerPage = 50 - - err := c.client.all(func(page int) (*Response, error) { - opts.Page = page - LoadBalancer, resp, err := c.List(ctx, opts) - if err != nil { - return resp, err - } - allLoadBalancer = append(allLoadBalancer, LoadBalancer...) - return resp, nil - }) - if err != nil { - return nil, err - } - - return allLoadBalancer, nil + return c.AllWithOpts(ctx, LoadBalancerListOpts{ListOpts: ListOpts{PerPage: 50}}) } // AllWithOpts returns all Load Balancers for the given options. @@ -681,7 +647,7 @@ type LoadBalancerAddServiceOptsHTTP struct { StickySessions *bool } -// LoadBalancerAddServiceOptsHealthCheck holds options for specifying an health check +// LoadBalancerAddServiceOptsHealthCheck holds options for specifying a health check // when adding a service to a Load Balancer. type LoadBalancerAddServiceOptsHealthCheck struct { Protocol LoadBalancerServiceProtocol diff --git a/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/load_balancer_type.go b/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/load_balancer_type.go index ea9ea9556ebc..2aa7289ac4af 100644 --- a/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/load_balancer_type.go +++ b/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/load_balancer_type.go @@ -1,19 +1,3 @@ -/* -Copyright 2018 The Kubernetes Authors. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
-See the License for the specific language governing permissions and -limitations under the License. -*/ - package hcloud import ( @@ -27,7 +11,7 @@ import ( // LoadBalancerType represents a LoadBalancer type in the Hetzner Cloud. type LoadBalancerType struct { - ID int + ID int64 Name string Description string MaxConnections int @@ -43,7 +27,7 @@ type LoadBalancerTypeClient struct { } // GetByID retrieves a Load Balancer type by its ID. If the Load Balancer type does not exist, nil is returned. -func (c *LoadBalancerTypeClient) GetByID(ctx context.Context, id int) (*LoadBalancerType, *Response, error) { +func (c *LoadBalancerTypeClient) GetByID(ctx context.Context, id int64) (*LoadBalancerType, *Response, error) { req, err := c.client.NewRequest(ctx, "GET", fmt.Sprintf("/load_balancer_types/%d", id), nil) if err != nil { return nil, nil, err @@ -75,8 +59,8 @@ func (c *LoadBalancerTypeClient) GetByName(ctx context.Context, name string) (*L // Get retrieves a Load Balancer type by its ID if the input can be parsed as an integer, otherwise it // retrieves a Load Balancer type by its name. If the Load Balancer type does not exist, nil is returned. func (c *LoadBalancerTypeClient) Get(ctx context.Context, idOrName string) (*LoadBalancerType, *Response, error) { - if id, err := strconv.Atoi(idOrName); err == nil { - return c.GetByID(ctx, int(id)) + if id, err := strconv.ParseInt(idOrName, 10, 64); err == nil { + return c.GetByID(ctx, id) } return c.GetByName(ctx, idOrName) } @@ -89,7 +73,7 @@ type LoadBalancerTypeListOpts struct { } func (l LoadBalancerTypeListOpts) values() url.Values { - vals := l.ListOpts.values() + vals := l.ListOpts.Values() if l.Name != "" { vals.Add("name", l.Name) } @@ -124,10 +108,12 @@ func (c *LoadBalancerTypeClient) List(ctx context.Context, opts LoadBalancerType // All returns all Load Balancer types. 
func (c *LoadBalancerTypeClient) All(ctx context.Context) ([]*LoadBalancerType, error) { - allLoadBalancerTypes := []*LoadBalancerType{} + return c.AllWithOpts(ctx, LoadBalancerTypeListOpts{ListOpts: ListOpts{PerPage: 50}}) +} - opts := LoadBalancerTypeListOpts{} - opts.PerPage = 50 +// AllWithOpts returns all Load Balancer types for the given options. +func (c *LoadBalancerTypeClient) AllWithOpts(ctx context.Context, opts LoadBalancerTypeListOpts) ([]*LoadBalancerType, error) { + var allLoadBalancerTypes []*LoadBalancerType err := c.client.all(func(page int) (*Response, error) { opts.Page = page diff --git a/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/location.go b/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/location.go index fad81fc6b223..58105e820bf4 100644 --- a/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/location.go +++ b/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/location.go @@ -1,19 +1,3 @@ -/* -Copyright 2018 The Kubernetes Authors. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - package hcloud import ( @@ -27,7 +11,7 @@ import ( // Location represents a location in the Hetzner Cloud. type Location struct { - ID int + ID int64 Name string Description string Country string @@ -43,7 +27,7 @@ type LocationClient struct { } // GetByID retrieves a location by its ID. If the location does not exist, nil is returned. 
-func (c *LocationClient) GetByID(ctx context.Context, id int) (*Location, *Response, error) { +func (c *LocationClient) GetByID(ctx context.Context, id int64) (*Location, *Response, error) { req, err := c.client.NewRequest(ctx, "GET", fmt.Sprintf("/locations/%d", id), nil) if err != nil { return nil, nil, err @@ -75,8 +59,8 @@ func (c *LocationClient) GetByName(ctx context.Context, name string) (*Location, // Get retrieves a location by its ID if the input can be parsed as an integer, otherwise it // retrieves a location by its name. If the location does not exist, nil is returned. func (c *LocationClient) Get(ctx context.Context, idOrName string) (*Location, *Response, error) { - if id, err := strconv.Atoi(idOrName); err == nil { - return c.GetByID(ctx, int(id)) + if id, err := strconv.ParseInt(idOrName, 10, 64); err == nil { + return c.GetByID(ctx, id) } return c.GetByName(ctx, idOrName) } @@ -89,7 +73,7 @@ type LocationListOpts struct { } func (l LocationListOpts) values() url.Values { - vals := l.ListOpts.values() + vals := l.ListOpts.Values() if l.Name != "" { vals.Add("name", l.Name) } @@ -124,10 +108,12 @@ func (c *LocationClient) List(ctx context.Context, opts LocationListOpts) ([]*Lo // All returns all locations. func (c *LocationClient) All(ctx context.Context) ([]*Location, error) { - allLocations := []*Location{} + return c.AllWithOpts(ctx, LocationListOpts{ListOpts: ListOpts{PerPage: 50}}) +} - opts := LocationListOpts{} - opts.PerPage = 50 +// AllWithOpts returns all locations for the given options. 
+func (c *LocationClient) AllWithOpts(ctx context.Context, opts LocationListOpts) ([]*Location, error) { + var allLocations []*Location err := c.client.all(func(page int) (*Response, error) { opts.Page = page diff --git a/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/metadata/client.go b/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/metadata/client.go index ff835f5d393f..8b31bfdd8b17 100644 --- a/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/metadata/client.go +++ b/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/metadata/client.go @@ -1,19 +1,3 @@ -/* -Copyright 2018 The Kubernetes Authors. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - package metadata import ( @@ -120,12 +104,12 @@ func (c *Client) Hostname() (string, error) { } // InstanceID returns the ID of the server that did the request to the Metadata server. -func (c *Client) InstanceID() (int, error) { +func (c *Client) InstanceID() (int64, error) { resp, err := c.get("/instance-id") if err != nil { return 0, err } - return strconv.Atoi(resp) + return strconv.ParseInt(resp, 10, 64) } // PublicIPv4 returns the Public IPv4 of the server that did the request to the Metadata server. 
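The repeated `strconv.Atoi` → `strconv.ParseInt(idOrName, 10, 64)` change throughout these hunks is the load-bearing part of the int64 migration: `Atoi` yields a platform-sized `int`, so an ID above 2³¹−1 would overflow on 32-bit builds, while `ParseInt` with bit size 64 always round-trips the full ID range. A small sketch of the `Get(idOrName)` dispatch pattern the diff uses (names hypothetical):

```go
package main

import (
	"fmt"
	"strconv"
)

// parseID mirrors the Get(idOrName) pattern from the diff: try to parse
// the input as a 64-bit ID first; on failure the caller falls back to a
// lookup by name.
func parseID(idOrName string) (int64, bool) {
	id, err := strconv.ParseInt(idOrName, 10, 64)
	return id, err == nil
}

func main() {
	// 3000000000 does not fit in a 32-bit int, but is a valid int64 ID.
	if id, ok := parseID("3000000000"); ok {
		fmt.Println(id)
	}
	if _, ok := parseID("my-network"); !ok {
		fmt.Println("fall back to GetByName")
	}
}
```

The same reasoning applies to the metadata client's `InstanceID`, which now returns `(int64, error)` straight from `ParseInt`.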
diff --git a/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/network.go b/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/network.go index 77a8c65ac614..ccce14208e30 100644 --- a/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/network.go +++ b/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/network.go @@ -1,19 +1,3 @@ -/* -Copyright 2018 The Kubernetes Authors. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - package hcloud import ( @@ -52,7 +36,7 @@ const ( // Network represents a network in the Hetzner Cloud. type Network struct { - ID int + ID int64 Name string Created time.Time IPRange *net.IPNet @@ -61,6 +45,9 @@ type Network struct { Servers []*Server Protection NetworkProtection Labels map[string]string + + // ExposeRoutesToVSwitch indicates if the routes from this network should be exposed to the vSwitch connection. + ExposeRoutesToVSwitch bool } // NetworkSubnet represents a subnet of a network in the Hetzner Cloud. @@ -69,7 +56,7 @@ type NetworkSubnet struct { IPRange *net.IPNet NetworkZone NetworkZone Gateway net.IP - VSwitchID int + VSwitchID int64 } // NetworkRoute represents a route of a network. @@ -89,7 +76,7 @@ type NetworkClient struct { } // GetByID retrieves a network by its ID. If the network does not exist, nil is returned. 
-func (c *NetworkClient) GetByID(ctx context.Context, id int) (*Network, *Response, error) { +func (c *NetworkClient) GetByID(ctx context.Context, id int64) (*Network, *Response, error) { req, err := c.client.NewRequest(ctx, "GET", fmt.Sprintf("/networks/%d", id), nil) if err != nil { return nil, nil, err @@ -121,8 +108,8 @@ func (c *NetworkClient) GetByName(ctx context.Context, name string) (*Network, * // Get retrieves a network by its ID if the input can be parsed as an integer, otherwise it // retrieves a network by its name. If the network does not exist, nil is returned. func (c *NetworkClient) Get(ctx context.Context, idOrName string) (*Network, *Response, error) { - if id, err := strconv.Atoi(idOrName); err == nil { - return c.GetByID(ctx, int(id)) + if id, err := strconv.ParseInt(idOrName, 10, 64); err == nil { + return c.GetByID(ctx, id) } return c.GetByName(ctx, idOrName) } @@ -135,7 +122,7 @@ type NetworkListOpts struct { } func (l NetworkListOpts) values() url.Values { - vals := l.ListOpts.values() + vals := l.ListOpts.Values() if l.Name != "" { vals.Add("name", l.Name) } @@ -206,6 +193,9 @@ func (c *NetworkClient) Delete(ctx context.Context, network *Network) (*Response type NetworkUpdateOpts struct { Name string Labels map[string]string + // ExposeRoutesToVSwitch indicates if the routes from this network should be exposed to the vSwitch connection. + // The exposing only takes effect if a vSwitch connection is active. + ExposeRoutesToVSwitch *bool } // Update updates a network. 
@@ -216,6 +206,10 @@ func (c *NetworkClient) Update(ctx context.Context, network *Network, opts Netwo if opts.Labels != nil { reqBody.Labels = &opts.Labels } + if opts.ExposeRoutesToVSwitch != nil { + reqBody.ExposeRoutesToVSwitch = opts.ExposeRoutesToVSwitch + } + reqBodyData, err := json.Marshal(reqBody) if err != nil { return nil, nil, err @@ -242,6 +236,9 @@ type NetworkCreateOpts struct { Subnets []NetworkSubnet Routes []NetworkRoute Labels map[string]string + // ExposeRoutesToVSwitch indicates if the routes from this network should be exposed to the vSwitch connection. + // The exposing only takes effect if a vSwitch connection is active. + ExposeRoutesToVSwitch bool } // Validate checks if options are valid. @@ -261,8 +258,9 @@ func (c *NetworkClient) Create(ctx context.Context, opts NetworkCreateOpts) (*Ne return nil, nil, err } reqBody := schema.NetworkCreateRequest{ - Name: opts.Name, - IPRange: opts.IPRange.String(), + Name: opts.Name, + IPRange: opts.IPRange.String(), + ExposeRoutesToVSwitch: opts.ExposeRoutesToVSwitch, } for _, subnet := range opts.Subnets { s := schema.NetworkSubnet{ diff --git a/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/placement_group.go b/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/placement_group.go index 508db93e6271..3355c182e0cb 100644 --- a/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/placement_group.go +++ b/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/placement_group.go @@ -1,19 +1,3 @@ -/* -Copyright 2018 The Kubernetes Authors. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
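Note the asymmetry the hunk introduces: `NetworkCreateOpts.ExposeRoutesToVSwitch` is a plain `bool`, but `NetworkUpdateOpts` uses `*bool`. With a pointer, "field not set" (nil) stays distinguishable from an explicit `false`, so a partial update never accidentally flips the flag. A sketch of how that plays out in JSON marshaling (field names hypothetical, not the exact hcloud schema):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// updateRequest sketches why update opts use *bool: with a pointer and
// omitempty, nil is dropped from the body while &false is sent.
type updateRequest struct {
	Name                  string `json:"name,omitempty"`
	ExposeRoutesToVSwitch *bool  `json:"expose_routes_to_vswitch,omitempty"`
}

// marshalUpdate returns the JSON body for a given name and flag pointer.
func marshalUpdate(name string, expose *bool) string {
	b, _ := json.Marshal(updateRequest{Name: name, ExposeRoutesToVSwitch: expose})
	return string(b)
}

func main() {
	f := false
	fmt.Println(marshalUpdate("net-1", nil)) // flag untouched by the update
	fmt.Println(marshalUpdate("net-1", &f))  // flag explicitly disabled
}
```

This is why the `Update` body only copies `opts.ExposeRoutesToVSwitch` into the request when it is non-nil.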
-See the License for the specific language governing permissions and -limitations under the License. -*/ - package hcloud import ( @@ -31,11 +15,11 @@ import ( // PlacementGroup represents a Placement Group in the Hetzner Cloud. type PlacementGroup struct { - ID int + ID int64 Name string Labels map[string]string Created time.Time - Servers []int + Servers []int64 Type PlacementGroupType } @@ -53,7 +37,7 @@ type PlacementGroupClient struct { } // GetByID retrieves a PlacementGroup by its ID. If the PlacementGroup does not exist, nil is returned. -func (c *PlacementGroupClient) GetByID(ctx context.Context, id int) (*PlacementGroup, *Response, error) { +func (c *PlacementGroupClient) GetByID(ctx context.Context, id int64) (*PlacementGroup, *Response, error) { req, err := c.client.NewRequest(ctx, "GET", fmt.Sprintf("/placement_groups/%d", id), nil) if err != nil { return nil, nil, err @@ -85,8 +69,8 @@ func (c *PlacementGroupClient) GetByName(ctx context.Context, name string) (*Pla // Get retrieves a PlacementGroup by its ID if the input can be parsed as an integer, otherwise it // retrieves a PlacementGroup by its name. If the PlacementGroup does not exist, nil is returned. 
func (c *PlacementGroupClient) Get(ctx context.Context, idOrName string) (*PlacementGroup, *Response, error) { - if id, err := strconv.Atoi(idOrName); err == nil { - return c.GetByID(ctx, int(id)) + if id, err := strconv.ParseInt(idOrName, 10, 64); err == nil { + return c.GetByID(ctx, id) } return c.GetByName(ctx, idOrName) } @@ -100,7 +84,7 @@ type PlacementGroupListOpts struct { } func (l PlacementGroupListOpts) values() url.Values { - vals := l.ListOpts.values() + vals := l.ListOpts.Values() if l.Name != "" { vals.Add("name", l.Name) } diff --git a/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/pricing.go b/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/pricing.go index f08b02435dc2..9e9c4a464ed0 100644 --- a/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/pricing.go +++ b/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/pricing.go @@ -1,19 +1,3 @@ -/* -Copyright 2018 The Kubernetes Authors. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - package hcloud import ( diff --git a/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/primary_ip.go b/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/primary_ip.go index aa34cff651c6..8e43b3a3dc1c 100644 --- a/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/primary_ip.go +++ b/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/primary_ip.go @@ -15,7 +15,7 @@ import ( // PrimaryIP defines a Primary IP. 
type PrimaryIP struct { - ID int + ID int64 IP net.IP Network *net.IPNet Labels map[string]string @@ -23,7 +23,7 @@ type PrimaryIP struct { Type PrimaryIPType Protection PrimaryIPProtection DNSPtr map[string]string - AssigneeID int + AssigneeID int64 AssigneeType string AutoDelete bool Blocked bool @@ -43,6 +43,32 @@ type PrimaryIPDNSPTR struct { IP string } +// changeDNSPtr changes or resets the reverse DNS pointer for a IP address. +// Pass a nil ptr to reset the reverse DNS pointer to its default value. +func (p *PrimaryIP) changeDNSPtr(ctx context.Context, client *Client, ip net.IP, ptr *string) (*Action, *Response, error) { + reqBody := schema.PrimaryIPActionChangeDNSPtrRequest{ + IP: ip.String(), + DNSPtr: ptr, + } + reqBodyData, err := json.Marshal(reqBody) + if err != nil { + return nil, nil, err + } + + path := fmt.Sprintf("/primary_ips/%d/actions/change_dns_ptr", p.ID) + req, err := client.NewRequest(ctx, "POST", path, bytes.NewReader(reqBodyData)) + if err != nil { + return nil, nil, err + } + + var respBody PrimaryIPChangeDNSPtrResult + resp, err := client.Do(req, &respBody) + if err != nil { + return nil, resp, err + } + return ActionFromSchema(respBody.Action), resp, nil +} + // GetDNSPtrForIP searches for the dns assigned to the given IP address. // It returns an error if there is no dns set for the given IP address. func (p *PrimaryIP) GetDNSPtrForIP(ip net.IP) (string, error) { @@ -66,7 +92,7 @@ const ( // PrimaryIPCreateOpts defines the request to // create a Primary IP. type PrimaryIPCreateOpts struct { - AssigneeID *int `json:"assignee_id,omitempty"` + AssigneeID *int64 `json:"assignee_id,omitempty"` AssigneeType string `json:"assignee_type"` AutoDelete *bool `json:"auto_delete,omitempty"` Datacenter string `json:"datacenter,omitempty"` @@ -93,8 +119,8 @@ type PrimaryIPUpdateOpts struct { // PrimaryIPAssignOpts defines the request to // assign a Primary IP to an assignee (usually a server). 
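The new `PrimaryIP.changeDNSPtr` takes `ptr *string` where nil means "reset the reverse DNS pointer to its default". Unlike the `omitempty` partial-update case above, a reset must actually reach the API, so the field has to serialize as an explicit `null` rather than disappear. A sketch of that distinction (struct tags here are an assumption, not the verbatim hcloud schema):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// ptrRequest mirrors the change_dns_ptr body shape: DNSPtr has no
// omitempty, so a nil pointer marshals as null instead of vanishing,
// which the API can interpret as "reset to default".
type ptrRequest struct {
	IP     string  `json:"ip"`
	DNSPtr *string `json:"dns_ptr"`
}

// marshalPtr returns the JSON body for setting or resetting a PTR record.
func marshalPtr(ip string, ptr *string) string {
	b, _ := json.Marshal(ptrRequest{IP: ip, DNSPtr: ptr})
	return string(b)
}

func main() {
	ptr := "server01.example.com"
	fmt.Println(marshalPtr("203.0.113.1", &ptr)) // set a PTR record
	fmt.Println(marshalPtr("203.0.113.1", nil))  // reset to the default
}
```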
type PrimaryIPAssignOpts struct { - ID int - AssigneeID int `json:"assignee_id"` + ID int64 + AssigneeID int64 `json:"assignee_id"` AssigneeType string `json:"assignee_type"` } @@ -107,7 +133,7 @@ type PrimaryIPAssignResult struct { // PrimaryIPChangeDNSPtrOpts defines the request to // change a DNS PTR entry from a Primary IP. type PrimaryIPChangeDNSPtrOpts struct { - ID int + ID int64 DNSPtr string `json:"dns_ptr"` IP string `json:"ip"` } @@ -121,7 +147,7 @@ type PrimaryIPChangeDNSPtrResult struct { // PrimaryIPChangeProtectionOpts defines the request to // change protection configuration of a Primary IP. type PrimaryIPChangeProtectionOpts struct { - ID int + ID int64 Delete bool `json:"delete"` } @@ -137,7 +163,7 @@ type PrimaryIPClient struct { } // GetByID retrieves a Primary IP by its ID. If the Primary IP does not exist, nil is returned. -func (c *PrimaryIPClient) GetByID(ctx context.Context, id int) (*PrimaryIP, *Response, error) { +func (c *PrimaryIPClient) GetByID(ctx context.Context, id int64) (*PrimaryIP, *Response, error) { req, err := c.client.NewRequest(ctx, "GET", fmt.Sprintf("/primary_ips/%d", id), nil) if err != nil { return nil, nil, err @@ -181,8 +207,8 @@ func (c *PrimaryIPClient) GetByName(ctx context.Context, name string) (*PrimaryI // Get retrieves a Primary IP by its ID if the input can be parsed as an integer, otherwise it // retrieves a Primary IP by its name. If the Primary IP does not exist, nil is returned. 
func (c *PrimaryIPClient) Get(ctx context.Context, idOrName string) (*PrimaryIP, *Response, error) { - if id, err := strconv.Atoi(idOrName); err == nil { - return c.GetByID(ctx, int(id)) + if id, err := strconv.ParseInt(idOrName, 10, 64); err == nil { + return c.GetByID(ctx, id) } return c.GetByName(ctx, idOrName) } @@ -196,7 +222,7 @@ type PrimaryIPListOpts struct { } func (l PrimaryIPListOpts) values() url.Values { - vals := l.ListOpts.values() + vals := l.ListOpts.Values() if l.Name != "" { vals.Add("name", l.Name) } @@ -337,7 +363,7 @@ func (c *PrimaryIPClient) Assign(ctx context.Context, opts PrimaryIPAssignOpts) } // Unassign a Primary IP from a resource. -func (c *PrimaryIPClient) Unassign(ctx context.Context, id int) (*Action, *Response, error) { +func (c *PrimaryIPClient) Unassign(ctx context.Context, id int64) (*Action, *Response, error) { path := fmt.Sprintf("/primary_ips/%d/actions/unassign", id) req, err := c.client.NewRequest(ctx, "POST", path, bytes.NewReader([]byte{})) if err != nil { diff --git a/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/rdns.go b/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/rdns.go index daf9b4c1c8f9..f53c030dafda 100644 --- a/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/rdns.go +++ b/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/rdns.go @@ -1,19 +1,3 @@ -/* -Copyright 2018 The Kubernetes Authors. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. 
-*/ - package hcloud import ( @@ -23,7 +7,7 @@ import ( ) // RDNSSupporter defines functions to change and lookup reverse dns entries. -// currently implemented by Server, FloatingIP and LoadBalancer. +// currently implemented by Server, FloatingIP, PrimaryIP and LoadBalancer. type RDNSSupporter interface { // changeDNSPtr changes or resets the reverse DNS pointer for a IP address. // Pass a nil ptr to reset the reverse DNS pointer to its default value. @@ -33,7 +17,7 @@ type RDNSSupporter interface { GetDNSPtrForIP(ip net.IP) (string, error) } -// RDNSClient simplifys the handling objects which support reverse dns entries. +// RDNSClient simplifies the handling objects which support reverse dns entries. type RDNSClient struct { client *Client } @@ -60,3 +44,9 @@ func RDNSLookup(i interface{}, ip net.IP) (string, error) { return rdns.GetDNSPtrForIP(ip) } + +// Make sure that all expected Resources actually implement the interface. +var _ RDNSSupporter = &FloatingIP{} +var _ RDNSSupporter = &PrimaryIP{} +var _ RDNSSupporter = &Server{} +var _ RDNSSupporter = &LoadBalancer{} diff --git a/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/resource.go b/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/resource.go index 2f72428ad040..a74b2cf7aefc 100644 --- a/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/resource.go +++ b/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/resource.go @@ -1,23 +1,7 @@ -/* -Copyright 2018 The Kubernetes Authors. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
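The `var _ RDNSSupporter = &FloatingIP{}` lines added at the end of `rdns.go` are the standard Go compile-time assertion idiom: assigning to the blank identifier costs nothing at runtime but fails the build the moment a listed type stops satisfying the interface. A self-contained illustration with stand-in names:

```go
package main

import "fmt"

// rdnsSupporter stands in for the RDNSSupporter interface from the diff.
type rdnsSupporter interface {
	DNSPtr() string
}

type server struct{ ptr string }

func (s *server) DNSPtr() string { return s.ptr }

// The zero-cost compile-time assertion: if *server ever loses DNSPtr,
// this declaration breaks the build instead of a distant call site.
var _ rdnsSupporter = &server{}

func main() {
	var r rdnsSupporter = &server{ptr: "example.com"}
	fmt.Println(r.DNSPtr())
}
```

Grouping the assertions next to the interface, as the diff does, keeps the "who must implement this" list in one auditable place.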
-See the License for the specific language governing permissions and -limitations under the License. -*/ - package hcloud // Resource defines the schema of a resource. type Resource struct { - ID int + ID int64 Type string } diff --git a/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/schema.go b/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/schema.go index 7f45cd294ce9..d03317db8ad3 100644 --- a/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/schema.go +++ b/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/schema.go @@ -1,19 +1,3 @@ -/* -Copyright 2018 The Kubernetes Authors. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - package hcloud import ( @@ -294,15 +278,19 @@ func ServerPrivateNetFromSchema(s schema.ServerPrivateNet) ServerPrivateNet { // ServerTypeFromSchema converts a schema.ServerType to a ServerType. 
func ServerTypeFromSchema(s schema.ServerType) *ServerType { st := &ServerType{ - ID: s.ID, - Name: s.Name, - Description: s.Description, - Cores: s.Cores, - Memory: s.Memory, - Disk: s.Disk, - StorageType: StorageType(s.StorageType), - CPUType: CPUType(s.CPUType), - Architecture: Architecture(s.Architecture), + ID: s.ID, + Name: s.Name, + Description: s.Description, + Cores: s.Cores, + Memory: s.Memory, + Disk: s.Disk, + StorageType: StorageType(s.StorageType), + CPUType: CPUType(s.CPUType), + Architecture: Architecture(s.Architecture), + IncludedTraffic: s.IncludedTraffic, + DeprecatableResource: DeprecatableResource{ + DeprecationFromSchema(s.Deprecation), + }, } for _, price := range s.Prices { st.Pricings = append(st.Pricings, ServerTypeLocationPricing{ @@ -317,6 +305,7 @@ func ServerTypeFromSchema(s schema.ServerType) *ServerType { }, }) } + return st } @@ -414,7 +403,8 @@ func NetworkFromSchema(s schema.Network) *Network { Protection: NetworkProtection{ Delete: s.Protection.Delete, }, - Labels: map[string]string{}, + Labels: map[string]string{}, + ExposeRoutesToVSwitch: s.ExposeRoutesToVSwitch, } _, n.IPRange, _ = net.ParseCIDR(s.IPRange) @@ -903,7 +893,7 @@ func loadBalancerCreateOptsToSchema(opts LoadBalancerCreateOpts) schema.LoadBala } if opts.Location != nil { if opts.Location.ID != 0 { - req.Location = Ptr(strconv.Itoa(opts.Location.ID)) + req.Location = Ptr(strconv.FormatInt(opts.Location.ID, 10)) } else { req.Location = Ptr(opts.Location.Name) } @@ -953,7 +943,7 @@ func loadBalancerCreateOptsToSchema(opts LoadBalancerCreateOpts) schema.LoadBala } } if service.HTTP.Certificates != nil { - certificates := []int{} + certificates := []int64{} for _, certificate := range service.HTTP.Certificates { certificates = append(certificates, certificate.ID) } @@ -1008,7 +998,7 @@ func loadBalancerAddServiceOptsToSchema(opts LoadBalancerAddServiceOpts) schema. 
req.HTTP.CookieLifetime = Ptr(int(opts.HTTP.CookieLifetime.Seconds())) } if opts.HTTP.Certificates != nil { - certificates := []int{} + certificates := []int64{} for _, certificate := range opts.HTTP.Certificates { certificates = append(certificates, certificate.ID) } @@ -1060,7 +1050,7 @@ func loadBalancerUpdateServiceOptsToSchema(opts LoadBalancerUpdateServiceOpts) s req.HTTP.CookieLifetime = Ptr(int(opts.HTTP.CookieLifetime.Seconds())) } if opts.HTTP.Certificates != nil { - certificates := []int{} + certificates := []int64{} for _, certificate := range opts.HTTP.Certificates { certificates = append(certificates, certificate.ID) } @@ -1264,3 +1254,15 @@ func loadBalancerMetricsFromSchema(s *schema.LoadBalancerGetMetricsResponse) (*L return &ms, nil } + +// DeprecationFromSchema converts a [schema.DeprecationInfo] to a [DeprecationInfo]. +func DeprecationFromSchema(s *schema.DeprecationInfo) *DeprecationInfo { + if s == nil { + return nil + } + + return &DeprecationInfo{ + Announced: s.Announced, + UnavailableAfter: s.UnavailableAfter, + } +} diff --git a/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/schema/action.go b/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/schema/action.go index 1de4764ef29a..49ac96a22953 100644 --- a/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/schema/action.go +++ b/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/schema/action.go @@ -1,26 +1,10 @@ -/* -Copyright 2018 The Kubernetes Authors. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
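The new `DeprecationFromSchema` converter is tiny but its nil guard matters: most server types carry no deprecation, and the conversion must propagate that absence as a nil pointer rather than a zero-valued struct with meaningless timestamps. A sketch with stand-in types:

```go
package main

import (
	"fmt"
	"time"
)

// schemaDeprecation / deprecationInfo stand in for the schema and
// domain pair from the diff.
type schemaDeprecation struct {
	Announced        time.Time
	UnavailableAfter time.Time
}

type deprecationInfo struct {
	Announced        time.Time
	UnavailableAfter time.Time
}

// fromSchema converts the wire representation, preserving "not
// deprecated" as nil instead of a zero value.
func fromSchema(s *schemaDeprecation) *deprecationInfo {
	if s == nil {
		return nil // not deprecated: propagate the absence
	}
	return &deprecationInfo{Announced: s.Announced, UnavailableAfter: s.UnavailableAfter}
}

func main() {
	fmt.Println(fromSchema(nil) == nil)
	fmt.Println(fromSchema(&schemaDeprecation{Announced: time.Now()}) != nil)
}
```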
-See the License for the specific language governing permissions and -limitations under the License. -*/ - package schema import "time" // Action defines the schema of an action. type Action struct { - ID int `json:"id"` + ID int64 `json:"id"` Status string `json:"status"` Command string `json:"command"` Progress int `json:"progress"` @@ -32,7 +16,7 @@ type Action struct { // ActionResourceReference defines the schema of an action resource reference. type ActionResourceReference struct { - ID int `json:"id"` + ID int64 `json:"id"` Type string `json:"type"` } diff --git a/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/schema/certificate.go b/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/schema/certificate.go index cafb726619d9..eb7b03ce2138 100644 --- a/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/schema/certificate.go +++ b/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/schema/certificate.go @@ -1,26 +1,10 @@ -/* -Copyright 2018 The Kubernetes Authors. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - package schema import "time" // CertificateUsedByRef defines the schema of a resource using a certificate. type CertificateUsedByRef struct { - ID int `json:"id"` + ID int64 `json:"id"` Type string `json:"type"` } @@ -32,7 +16,7 @@ type CertificateStatusRef struct { // Certificate defines the schema of an certificate. 
type Certificate struct { - ID int `json:"id"` + ID int64 `json:"id"` Name string `json:"name"` Labels map[string]string `json:"labels"` Type string `json:"type"` diff --git a/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/schema/datacenter.go b/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/schema/datacenter.go index 3e939e72002b..eaa12429f834 100644 --- a/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/schema/datacenter.go +++ b/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/schema/datacenter.go @@ -1,30 +1,14 @@ -/* -Copyright 2018 The Kubernetes Authors. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - package schema // Datacenter defines the schema of a datacenter. 
type Datacenter struct { - ID int `json:"id"` + ID int64 `json:"id"` Name string `json:"name"` Description string `json:"description"` Location Location `json:"location"` ServerTypes struct { - Supported []int `json:"supported"` - Available []int `json:"available"` + Supported []int64 `json:"supported"` + Available []int64 `json:"available"` } `json:"server_types"` } diff --git a/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/schema/deprecation.go b/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/schema/deprecation.go new file mode 100644 index 000000000000..87292f78b53e --- /dev/null +++ b/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/schema/deprecation.go @@ -0,0 +1,12 @@ +package schema + +import "time" + +type DeprecationInfo struct { + Announced time.Time `json:"announced"` + UnavailableAfter time.Time `json:"unavailable_after"` +} + +type DeprecatableResource struct { + Deprecation *DeprecationInfo `json:"deprecation"` +} diff --git a/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/schema/error.go b/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/schema/error.go index 8b15a8e99182..2d5cf5ddd833 100644 --- a/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/schema/error.go +++ b/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/schema/error.go @@ -1,19 +1,3 @@ -/* -Copyright 2018 The Kubernetes Authors. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. 
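The new `schema/deprecation.go` file defines `DeprecatableResource` as a struct meant to be embedded (the `ServerTypeFromSchema` hunk earlier wires it into `ServerType`). Embedding lets every deprecatable resource share one check instead of re-implementing it. A sketch of the idea, with a hypothetical `IsDeprecated` helper rather than the exact hcloud-go API:

```go
package main

import (
	"fmt"
	"time"
)

type deprecationInfo struct {
	Announced        time.Time
	UnavailableAfter time.Time
}

// deprecatableResource sketches the embedding pattern: any resource
// that embeds it inherits the same deprecation check.
type deprecatableResource struct {
	Deprecation *deprecationInfo
}

// IsDeprecated is promoted onto every embedding type.
func (d deprecatableResource) IsDeprecated() bool {
	return d.Deprecation != nil
}

// serverType stands in for the real hcloud ServerType.
type serverType struct {
	deprecatableResource
	Name string
}

func main() {
	st := serverType{Name: "cx11"}
	fmt.Println(st.IsDeprecated()) // zero value: not deprecated
}
```

For an autoscaler this is the interesting consumer-side question: a deprecated server type can still run existing nodes but may become unavailable for new scale-ups after `UnavailableAfter`.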
-*/ - package schema import "encoding/json" diff --git a/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/schema/firewall.go b/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/schema/firewall.go index 6c32bb05bf43..371e648f14e1 100644 --- a/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/schema/firewall.go +++ b/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/schema/firewall.go @@ -1,26 +1,10 @@ -/* -Copyright 2018 The Kubernetes Authors. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - package schema import "time" // Firewall defines the schema of a Firewall. type Firewall struct { - ID int `json:"id"` + ID int64 `json:"id"` Name string `json:"name"` Labels map[string]string `json:"labels"` Created time.Time `json:"created"` @@ -70,7 +54,7 @@ type FirewallResourceLabelSelector struct { // FirewallResourceServer defines the schema of a Server to apply a Firewall on. type FirewallResourceServer struct { - ID int `json:"id"` + ID int64 `json:"id"` } // FirewallCreateResponse defines the schema of the response when creating a Firewall. 
diff --git a/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/schema/floating_ip.go b/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/schema/floating_ip.go index d734e3664d0a..6256b0d96f8d 100644 --- a/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/schema/floating_ip.go +++ b/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/schema/floating_ip.go @@ -1,31 +1,15 @@ -/* -Copyright 2018 The Kubernetes Authors. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - package schema import "time" // FloatingIP defines the schema of a Floating IP. 
type FloatingIP struct { - ID int `json:"id"` + ID int64 `json:"id"` Description *string `json:"description"` Created time.Time `json:"created"` IP string `json:"ip"` Type string `json:"type"` - Server *int `json:"server"` + Server *int64 `json:"server"` DNSPtr []FloatingIPDNSPtr `json:"dns_ptr"` HomeLocation Location `json:"home_location"` Blocked bool `json:"blocked"` @@ -75,7 +59,7 @@ type FloatingIPListResponse struct { type FloatingIPCreateRequest struct { Type string `json:"type"` HomeLocation *string `json:"home_location,omitempty"` - Server *int `json:"server,omitempty"` + Server *int64 `json:"server,omitempty"` Description *string `json:"description,omitempty"` Labels *map[string]string `json:"labels,omitempty"` Name *string `json:"name,omitempty"` @@ -91,7 +75,7 @@ type FloatingIPCreateResponse struct { // FloatingIPActionAssignRequest defines the schema of the request to // create an assign Floating IP action. type FloatingIPActionAssignRequest struct { - Server int `json:"server"` + Server int64 `json:"server"` } // FloatingIPActionAssignResponse defines the schema of the response when diff --git a/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/schema/image.go b/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/schema/image.go index 0c773787b8b3..1520935ec7c5 100644 --- a/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/schema/image.go +++ b/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/schema/image.go @@ -1,26 +1,10 @@ -/* -Copyright 2018 The Kubernetes Authors. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
-See the License for the specific language governing permissions and -limitations under the License. -*/ - package schema import "time" // Image defines the schema of an image. type Image struct { - ID int `json:"id"` + ID int64 `json:"id"` Status string `json:"status"` Type string `json:"type"` Name *string `json:"name"` @@ -29,7 +13,7 @@ type Image struct { DiskSize float32 `json:"disk_size"` Created time.Time `json:"created"` CreatedFrom *ImageCreatedFrom `json:"created_from"` - BoundTo *int `json:"bound_to"` + BoundTo *int64 `json:"bound_to"` OSFlavor string `json:"os_flavor"` OSVersion *string `json:"os_version"` Architecture string `json:"architecture"` @@ -47,7 +31,7 @@ type ImageProtection struct { // ImageCreatedFrom defines the schema of the images created from reference. type ImageCreatedFrom struct { - ID int `json:"id"` + ID int64 `json:"id"` Name string `json:"name"` } diff --git a/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/schema/iso.go b/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/schema/iso.go index 943865712f81..4f89dd04627e 100644 --- a/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/schema/iso.go +++ b/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/schema/iso.go @@ -1,26 +1,10 @@ -/* -Copyright 2018 The Kubernetes Authors. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - package schema import "time" // ISO defines the schema of an ISO image. 
type ISO struct { - ID int `json:"id"` + ID int64 `json:"id"` Name string `json:"name"` Description string `json:"description"` Type string `json:"type"` diff --git a/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/schema/load_balancer.go b/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/schema/load_balancer.go index f50b2b592939..7e1c4f5da64b 100644 --- a/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/schema/load_balancer.go +++ b/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/schema/load_balancer.go @@ -1,25 +1,9 @@ -/* -Copyright 2018 The Kubernetes Authors. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. 
-*/ - package schema import "time" type LoadBalancer struct { - ID int `json:"id"` + ID int64 `json:"id"` Name string `json:"name"` PublicNet LoadBalancerPublicNet `json:"public_net"` PrivateNet []LoadBalancerPrivateNet `json:"private_net"` @@ -53,7 +37,7 @@ type LoadBalancerPublicNetIPv6 struct { } type LoadBalancerPrivateNet struct { - Network int `json:"network"` + Network int64 `json:"network"` IP string `json:"ip"` } @@ -75,11 +59,11 @@ type LoadBalancerService struct { } type LoadBalancerServiceHTTP struct { - CookieName string `json:"cookie_name"` - CookieLifetime int `json:"cookie_lifetime"` - Certificates []int `json:"certificates"` - RedirectHTTP bool `json:"redirect_http"` - StickySessions bool `json:"sticky_sessions"` + CookieName string `json:"cookie_name"` + CookieLifetime int `json:"cookie_lifetime"` + Certificates []int64 `json:"certificates"` + RedirectHTTP bool `json:"redirect_http"` + StickySessions bool `json:"sticky_sessions"` } type LoadBalancerServiceHealthCheck struct { @@ -115,7 +99,7 @@ type LoadBalancerTargetHealthStatus struct { } type LoadBalancerTargetServer struct { - ID int `json:"id"` + ID int64 `json:"id"` } type LoadBalancerTargetLabelSelector struct { @@ -143,7 +127,7 @@ type LoadBalancerActionAddTargetRequest struct { } type LoadBalancerActionAddTargetRequestServer struct { - ID int `json:"id"` + ID int64 `json:"id"` } type LoadBalancerActionAddTargetRequestLabelSelector struct { @@ -166,7 +150,7 @@ type LoadBalancerActionRemoveTargetRequest struct { } type LoadBalancerActionRemoveTargetRequestServer struct { - ID int `json:"id"` + ID int64 `json:"id"` } type LoadBalancerActionRemoveTargetRequestLabelSelector struct { @@ -191,11 +175,11 @@ type LoadBalancerActionAddServiceRequest struct { } type LoadBalancerActionAddServiceRequestHTTP struct { - CookieName *string `json:"cookie_name,omitempty"` - CookieLifetime *int `json:"cookie_lifetime,omitempty"` - Certificates *[]int `json:"certificates,omitempty"` - RedirectHTTP *bool 
`json:"redirect_http,omitempty"` - StickySessions *bool `json:"sticky_sessions,omitempty"` + CookieName *string `json:"cookie_name,omitempty"` + CookieLifetime *int `json:"cookie_lifetime,omitempty"` + Certificates *[]int64 `json:"certificates,omitempty"` + RedirectHTTP *bool `json:"redirect_http,omitempty"` + StickySessions *bool `json:"sticky_sessions,omitempty"` } type LoadBalancerActionAddServiceRequestHealthCheck struct { @@ -229,11 +213,11 @@ type LoadBalancerActionUpdateServiceRequest struct { } type LoadBalancerActionUpdateServiceRequestHTTP struct { - CookieName *string `json:"cookie_name,omitempty"` - CookieLifetime *int `json:"cookie_lifetime,omitempty"` - Certificates *[]int `json:"certificates,omitempty"` - RedirectHTTP *bool `json:"redirect_http,omitempty"` - StickySessions *bool `json:"sticky_sessions,omitempty"` + CookieName *string `json:"cookie_name,omitempty"` + CookieLifetime *int `json:"cookie_lifetime,omitempty"` + Certificates *[]int64 `json:"certificates,omitempty"` + RedirectHTTP *bool `json:"redirect_http,omitempty"` + StickySessions *bool `json:"sticky_sessions,omitempty"` } type LoadBalancerActionUpdateServiceRequestHealthCheck struct { @@ -275,7 +259,7 @@ type LoadBalancerCreateRequest struct { Targets []LoadBalancerCreateRequestTarget `json:"targets,omitempty"` Services []LoadBalancerCreateRequestService `json:"services,omitempty"` PublicInterface *bool `json:"public_interface,omitempty"` - Network *int `json:"network,omitempty"` + Network *int64 `json:"network,omitempty"` } type LoadBalancerCreateRequestAlgorithm struct { @@ -291,7 +275,7 @@ type LoadBalancerCreateRequestTarget struct { } type LoadBalancerCreateRequestTargetServer struct { - ID int `json:"id"` + ID int64 `json:"id"` } type LoadBalancerCreateRequestTargetLabelSelector struct { @@ -312,11 +296,11 @@ type LoadBalancerCreateRequestService struct { } type LoadBalancerCreateRequestServiceHTTP struct { - CookieName *string `json:"cookie_name,omitempty"` - CookieLifetime *int 
`json:"cookie_lifetime,omitempty"` - Certificates *[]int `json:"certificates,omitempty"` - RedirectHTTP *bool `json:"redirect_http,omitempty"` - StickySessions *bool `json:"sticky_sessions,omitempty"` + CookieName *string `json:"cookie_name,omitempty"` + CookieLifetime *int `json:"cookie_lifetime,omitempty"` + Certificates *[]int64 `json:"certificates,omitempty"` + RedirectHTTP *bool `json:"redirect_http,omitempty"` + StickySessions *bool `json:"sticky_sessions,omitempty"` } type LoadBalancerCreateRequestServiceHealthCheck struct { @@ -367,7 +351,7 @@ type LoadBalancerActionChangeAlgorithmResponse struct { } type LoadBalancerActionAttachToNetworkRequest struct { - Network int `json:"network"` + Network int64 `json:"network"` IP *string `json:"ip,omitempty"` } @@ -376,7 +360,7 @@ type LoadBalancerActionAttachToNetworkResponse struct { } type LoadBalancerActionDetachFromNetworkRequest struct { - Network int `json:"network"` + Network int64 `json:"network"` } type LoadBalancerActionDetachFromNetworkResponse struct { diff --git a/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/schema/load_balancer_type.go b/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/schema/load_balancer_type.go index 815273841508..09ac43d7a6ce 100644 --- a/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/schema/load_balancer_type.go +++ b/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/schema/load_balancer_type.go @@ -1,24 +1,8 @@ -/* -Copyright 2018 The Kubernetes Authors. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
-See the License for the specific language governing permissions and -limitations under the License. -*/ - package schema // LoadBalancerType defines the schema of a LoadBalancer type. type LoadBalancerType struct { - ID int `json:"id"` + ID int64 `json:"id"` Name string `json:"name"` Description string `json:"description"` MaxConnections int `json:"max_connections"` diff --git a/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/schema/location.go b/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/schema/location.go index 16ea709c8436..e07306071c24 100644 --- a/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/schema/location.go +++ b/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/schema/location.go @@ -1,24 +1,8 @@ -/* -Copyright 2018 The Kubernetes Authors. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - package schema // Location defines the schema of a location. type Location struct { - ID int `json:"id"` + ID int64 `json:"id"` Name string `json:"name"` Description string `json:"description"` Country string `json:"country"` diff --git a/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/schema/meta.go b/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/schema/meta.go index e527a2f45a22..9b06cda8c6d3 100644 --- a/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/schema/meta.go +++ b/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/schema/meta.go @@ -1,19 +1,3 @@ -/* -Copyright 2018 The Kubernetes Authors. 
- -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - package schema // Meta defines the schema of meta information which may be included diff --git a/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/schema/network.go b/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/schema/network.go index 7ca287a1337d..2344aea450a6 100644 --- a/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/schema/network.go +++ b/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/schema/network.go @@ -1,34 +1,19 @@ -/* -Copyright 2018 The Kubernetes Authors. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - package schema import "time" // Network defines the schema of a network. 
type Network struct { - ID int `json:"id"` - Name string `json:"name"` - Created time.Time `json:"created"` - IPRange string `json:"ip_range"` - Subnets []NetworkSubnet `json:"subnets"` - Routes []NetworkRoute `json:"routes"` - Servers []int `json:"servers"` - Protection NetworkProtection `json:"protection"` - Labels map[string]string `json:"labels"` + ID int64 `json:"id"` + Name string `json:"name"` + Created time.Time `json:"created"` + IPRange string `json:"ip_range"` + Subnets []NetworkSubnet `json:"subnets"` + Routes []NetworkRoute `json:"routes"` + Servers []int64 `json:"servers"` + Protection NetworkProtection `json:"protection"` + Labels map[string]string `json:"labels"` + ExposeRoutesToVSwitch bool `json:"expose_routes_to_vswitch"` } // NetworkSubnet represents a subnet of a network. @@ -37,7 +22,7 @@ type NetworkSubnet struct { IPRange string `json:"ip_range"` NetworkZone string `json:"network_zone"` Gateway string `json:"gateway,omitempty"` - VSwitchID int `json:"vswitch_id,omitempty"` + VSwitchID int64 `json:"vswitch_id,omitempty"` } // NetworkRoute represents a route of a network. @@ -53,8 +38,9 @@ type NetworkProtection struct { // NetworkUpdateRequest defines the schema of the request to update a network. type NetworkUpdateRequest struct { - Name string `json:"name,omitempty"` - Labels *map[string]string `json:"labels,omitempty"` + Name string `json:"name,omitempty"` + Labels *map[string]string `json:"labels,omitempty"` + ExposeRoutesToVSwitch *bool `json:"expose_routes_to_vswitch,omitempty"` } // NetworkUpdateResponse defines the schema of the response when updating a network. @@ -76,11 +62,12 @@ type NetworkGetResponse struct { // NetworkCreateRequest defines the schema of the request to create a network. 
type NetworkCreateRequest struct { - Name string `json:"name"` - IPRange string `json:"ip_range"` - Subnets []NetworkSubnet `json:"subnets,omitempty"` - Routes []NetworkRoute `json:"routes,omitempty"` - Labels *map[string]string `json:"labels,omitempty"` + Name string `json:"name"` + IPRange string `json:"ip_range"` + Subnets []NetworkSubnet `json:"subnets,omitempty"` + Routes []NetworkRoute `json:"routes,omitempty"` + Labels *map[string]string `json:"labels,omitempty"` + ExposeRoutesToVSwitch bool `json:"expose_routes_to_vswitch"` } // NetworkCreateResponse defines the schema of the response when @@ -108,7 +95,7 @@ type NetworkActionAddSubnetRequest struct { IPRange string `json:"ip_range,omitempty"` NetworkZone string `json:"network_zone"` Gateway string `json:"gateway"` - VSwitchID int `json:"vswitch_id,omitempty"` + VSwitchID int64 `json:"vswitch_id,omitempty"` } // NetworkActionAddSubnetResponse defines the schema of the response when diff --git a/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/schema/placement_group.go b/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/schema/placement_group.go index a13a8d5b4f31..671bd6bed81d 100644 --- a/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/schema/placement_group.go +++ b/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/schema/placement_group.go @@ -1,29 +1,13 @@ -/* -Copyright 2018 The Kubernetes Authors. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. 
-*/ - package schema import "time" type PlacementGroup struct { - ID int `json:"id"` + ID int64 `json:"id"` Name string `json:"name"` Labels map[string]string `json:"labels"` Created time.Time `json:"created"` - Servers []int `json:"servers"` + Servers []int64 `json:"servers"` Type string `json:"type"` } diff --git a/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/schema/pricing.go b/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/schema/pricing.go index 8fc50eb32e84..192352f5d790 100644 --- a/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/schema/pricing.go +++ b/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/schema/pricing.go @@ -1,19 +1,3 @@ -/* -Copyright 2018 The Kubernetes Authors. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - package schema // Pricing defines the schema for pricing information. @@ -77,7 +61,7 @@ type PricingServerBackup struct { // PricingServerType defines the schema of pricing information for a server type. type PricingServerType struct { - ID int `json:"id"` + ID int64 `json:"id"` Name string `json:"name"` Prices []PricingServerTypePrice `json:"prices"` } @@ -92,7 +76,7 @@ type PricingServerTypePrice struct { // PricingLoadBalancerType defines the schema of pricing information for a Load Balancer type. 
type PricingLoadBalancerType struct { - ID int `json:"id"` + ID int64 `json:"id"` Name string `json:"name"` Prices []PricingLoadBalancerTypePrice `json:"prices"` } diff --git a/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/schema/primary_ip.go b/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/schema/primary_ip.go index d232a732d195..b685c386f9d2 100644 --- a/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/schema/primary_ip.go +++ b/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/schema/primary_ip.go @@ -4,14 +4,14 @@ import "time" // PrimaryIP defines a Primary IP. type PrimaryIP struct { - ID int `json:"id"` + ID int64 `json:"id"` IP string `json:"ip"` Labels map[string]string `json:"labels"` Name string `json:"name"` Type string `json:"type"` Protection PrimaryIPProtection `json:"protection"` DNSPtr []PrimaryIPDNSPTR `json:"dns_ptr"` - AssigneeID int `json:"assignee_id"` + AssigneeID int64 `json:"assignee_id"` AssigneeType string `json:"assignee_type"` AutoDelete bool `json:"auto_delete"` Blocked bool `json:"blocked"` @@ -53,3 +53,10 @@ type PrimaryIPListResult struct { type PrimaryIPUpdateResult struct { PrimaryIP PrimaryIP `json:"primary_ip"` } + +// PrimaryIPActionChangeDNSPtrRequest defines the schema for the request to +// change a Primary IP's reverse DNS pointer. +type PrimaryIPActionChangeDNSPtrRequest struct { + IP string `json:"ip"` + DNSPtr *string `json:"dns_ptr"` +} diff --git a/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/schema/server.go b/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/schema/server.go index 3949ebfbe03e..39a10b064831 100644 --- a/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/schema/server.go +++ b/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/schema/server.go @@ -1,26 +1,10 @@ -/* -Copyright 2018 The Kubernetes Authors. 
- -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - package schema import "time" // Server defines the schema of a server. type Server struct { - ID int `json:"id"` + ID int64 `json:"id"` Name string `json:"name"` Status string `json:"status"` Created time.Time `json:"created"` @@ -38,7 +22,7 @@ type Server struct { Image *Image `json:"image"` Protection ServerProtection `json:"protection"` Labels map[string]string `json:"labels"` - Volumes []int `json:"volumes"` + Volumes []int64 `json:"volumes"` PrimaryDiskSize int `json:"primary_disk_size"` PlacementGroup *PlacementGroup `json:"placement_group"` } @@ -54,14 +38,14 @@ type ServerProtection struct { type ServerPublicNet struct { IPv4 ServerPublicNetIPv4 `json:"ipv4"` IPv6 ServerPublicNetIPv6 `json:"ipv6"` - FloatingIPs []int `json:"floating_ips"` + FloatingIPs []int64 `json:"floating_ips"` Firewalls []ServerFirewall `json:"firewalls"` } // ServerPublicNetIPv4 defines the schema of a server's public // network information for an IPv4. type ServerPublicNetIPv4 struct { - ID int `json:"id"` + ID int64 `json:"id"` IP string `json:"ip"` Blocked bool `json:"blocked"` DNSPtr string `json:"dns_ptr"` @@ -70,7 +54,7 @@ type ServerPublicNetIPv4 struct { // ServerPublicNetIPv6 defines the schema of a server's public // network information for an IPv6. 
type ServerPublicNetIPv6 struct { - ID int `json:"id"` + ID int64 `json:"id"` IP string `json:"ip"` Blocked bool `json:"blocked"` DNSPtr []ServerPublicNetIPv6DNSPtr `json:"dns_ptr"` @@ -86,13 +70,13 @@ type ServerPublicNetIPv6DNSPtr struct { // ServerFirewall defines the schema of a Server's Firewalls on // a certain network interface. type ServerFirewall struct { - ID int `json:"id"` + ID int64 `json:"id"` Status string `json:"status"` } // ServerPrivateNet defines the schema of a server's private network information. type ServerPrivateNet struct { - Network int `json:"network"` + Network int64 `json:"network"` IP string `json:"ip"` AliasIPs []string `json:"alias_ips"` MACAddress string `json:"mac_address"` @@ -116,31 +100,31 @@ type ServerCreateRequest struct { Name string `json:"name"` ServerType interface{} `json:"server_type"` // int or string Image interface{} `json:"image"` // int or string - SSHKeys []int `json:"ssh_keys,omitempty"` + SSHKeys []int64 `json:"ssh_keys,omitempty"` Location string `json:"location,omitempty"` Datacenter string `json:"datacenter,omitempty"` UserData string `json:"user_data,omitempty"` StartAfterCreate *bool `json:"start_after_create,omitempty"` Labels *map[string]string `json:"labels,omitempty"` Automount *bool `json:"automount,omitempty"` - Volumes []int `json:"volumes,omitempty"` - Networks []int `json:"networks,omitempty"` + Volumes []int64 `json:"volumes,omitempty"` + Networks []int64 `json:"networks,omitempty"` Firewalls []ServerCreateFirewalls `json:"firewalls,omitempty"` - PlacementGroup int `json:"placement_group,omitempty"` + PlacementGroup int64 `json:"placement_group,omitempty"` PublicNet *ServerCreatePublicNet `json:"public_net,omitempty"` } // ServerCreatePublicNet defines the public network configuration of a server. 
type ServerCreatePublicNet struct { - EnableIPv4 bool `json:"enable_ipv4"` - EnableIPv6 bool `json:"enable_ipv6"` - IPv4ID int `json:"ipv4,omitempty"` - IPv6ID int `json:"ipv6,omitempty"` + EnableIPv4 bool `json:"enable_ipv4"` + EnableIPv6 bool `json:"enable_ipv6"` + IPv4ID int64 `json:"ipv4,omitempty"` + IPv6ID int64 `json:"ipv6,omitempty"` } // ServerCreateFirewalls defines which Firewalls to apply when creating a Server. type ServerCreateFirewalls struct { - Firewall int `json:"firewall"` + Firewall int64 `json:"firewall"` } // ServerCreateResponse defines the schema of the response when @@ -249,7 +233,7 @@ type ServerActionCreateImageResponse struct { // create a enable_rescue server action. type ServerActionEnableRescueRequest struct { Type *string `json:"type,omitempty"` - SSHKeys []int `json:"ssh_keys,omitempty"` + SSHKeys []int64 `json:"ssh_keys,omitempty"` } // ServerActionEnableRescueResponse defines the schema of the response when @@ -380,7 +364,7 @@ type ServerActionRequestConsoleResponse struct { // ServerActionAttachToNetworkRequest defines the schema for the request to // attach a network to a server. type ServerActionAttachToNetworkRequest struct { - Network int `json:"network"` + Network int64 `json:"network"` IP *string `json:"ip,omitempty"` AliasIPs []*string `json:"alias_ips,omitempty"` } @@ -394,7 +378,7 @@ type ServerActionAttachToNetworkResponse struct { // ServerActionDetachFromNetworkRequest defines the schema for the request to // detach a network from a server. type ServerActionDetachFromNetworkRequest struct { - Network int `json:"network"` + Network int64 `json:"network"` } // ServerActionDetachFromNetworkResponse defines the schema of the response when @@ -406,7 +390,7 @@ type ServerActionDetachFromNetworkResponse struct { // ServerActionChangeAliasIPsRequest defines the schema for the request to // change a server's alias IPs in a network. 
type ServerActionChangeAliasIPsRequest struct { - Network int `json:"network"` + Network int64 `json:"network"` AliasIPs []string `json:"alias_ips"` } @@ -435,7 +419,7 @@ type ServerTimeSeriesVals struct { // ServerActionAddToPlacementGroupRequest defines the schema for the request to // add a server to a placement group. type ServerActionAddToPlacementGroupRequest struct { - PlacementGroup int `json:"placement_group"` + PlacementGroup int64 `json:"placement_group"` } // ServerActionAddToPlacementGroupResponse defines the schema of the response when diff --git a/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/schema/server_type.go b/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/schema/server_type.go index e125d905f65e..0920a5ee16d7 100644 --- a/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/schema/server_type.go +++ b/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/schema/server_type.go @@ -1,33 +1,19 @@ -/* -Copyright 2018 The Kubernetes Authors. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - package schema // ServerType defines the schema of a server type. 
type ServerType struct { - ID int `json:"id"` - Name string `json:"name"` - Description string `json:"description"` - Cores int `json:"cores"` - Memory float32 `json:"memory"` - Disk int `json:"disk"` - StorageType string `json:"storage_type"` - CPUType string `json:"cpu_type"` - Architecture string `json:"architecture"` - Prices []PricingServerTypePrice `json:"prices"` + ID int64 `json:"id"` + Name string `json:"name"` + Description string `json:"description"` + Cores int `json:"cores"` + Memory float32 `json:"memory"` + Disk int `json:"disk"` + StorageType string `json:"storage_type"` + CPUType string `json:"cpu_type"` + Architecture string `json:"architecture"` + IncludedTraffic int64 `json:"included_traffic"` + Prices []PricingServerTypePrice `json:"prices"` + DeprecatableResource } // ServerTypeListResponse defines the schema of the response when diff --git a/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/schema/ssh_key.go b/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/schema/ssh_key.go index b061e0d06541..7e095bc5a7fa 100644 --- a/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/schema/ssh_key.go +++ b/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/schema/ssh_key.go @@ -1,26 +1,10 @@ -/* -Copyright 2018 The Kubernetes Authors. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - package schema import "time" // SSHKey defines the schema of a SSH key. 
type SSHKey struct { - ID int `json:"id"` + ID int64 `json:"id"` Name string `json:"name"` Fingerprint string `json:"fingerprint"` PublicKey string `json:"public_key"` diff --git a/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/schema/volume.go b/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/schema/volume.go index 4cef9aef5ef5..0dd391bccc85 100644 --- a/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/schema/volume.go +++ b/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/schema/volume.go @@ -1,28 +1,12 @@ -/* -Copyright 2018 The Kubernetes Authors. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - package schema import "time" // Volume defines the schema of a volume. type Volume struct { - ID int `json:"id"` + ID int64 `json:"id"` Name string `json:"name"` - Server *int `json:"server"` + Server *int64 `json:"server"` Status string `json:"status"` Location Location `json:"location"` Size int `json:"size"` @@ -37,7 +21,7 @@ type Volume struct { type VolumeCreateRequest struct { Name string `json:"name"` Size int `json:"size"` - Server *int `json:"server,omitempty"` + Server *int64 `json:"server,omitempty"` Location interface{} `json:"location,omitempty"` // int, string, or nil Labels *map[string]string `json:"labels,omitempty"` Automount *bool `json:"automount,omitempty"` @@ -95,7 +79,7 @@ type VolumeActionChangeProtectionResponse struct { // VolumeActionAttachVolumeRequest defines the schema of the request to // attach a volume to a server. 
type VolumeActionAttachVolumeRequest struct { - Server int `json:"server"` + Server int64 `json:"server"` Automount *bool `json:"automount,omitempty"` } diff --git a/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/server.go b/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/server.go index 40b228afab14..9e2071f3b780 100644 --- a/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/server.go +++ b/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/server.go @@ -1,19 +1,3 @@ -/* -Copyright 2018 The Kubernetes Authors. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - package hcloud import ( @@ -33,7 +17,7 @@ import ( // Server represents a server in the Hetzner Cloud. type Server struct { - ID int + ID int64 Name string Status ServerStatus Created time.Time @@ -114,7 +98,7 @@ type ServerPublicNet struct { // ServerPublicNetIPv4 represents a server's public IPv4 address. type ServerPublicNetIPv4 struct { - ID int + ID int64 IP net.IP Blocked bool DNSPtr string @@ -126,7 +110,7 @@ func (n *ServerPublicNetIPv4) IsUnspecified() bool { // ServerPublicNetIPv6 represents a Server's public IPv6 network and address. type ServerPublicNetIPv6 struct { - ID int + ID int64 IP net.IP Network *net.IPNet Blocked bool @@ -210,7 +194,7 @@ type ServerClient struct { } // GetByID retrieves a server by its ID. If the server does not exist, nil is returned. 
-func (c *ServerClient) GetByID(ctx context.Context, id int) (*Server, *Response, error) { +func (c *ServerClient) GetByID(ctx context.Context, id int64) (*Server, *Response, error) { req, err := c.client.NewRequest(ctx, "GET", fmt.Sprintf("/servers/%d", id), nil) if err != nil { return nil, nil, err @@ -242,8 +226,8 @@ func (c *ServerClient) GetByName(ctx context.Context, name string) (*Server, *Re // Get retrieves a server by its ID if the input can be parsed as an integer, otherwise it // retrieves a server by its name. If the server does not exist, nil is returned. func (c *ServerClient) Get(ctx context.Context, idOrName string) (*Server, *Response, error) { - if id, err := strconv.Atoi(idOrName); err == nil { - return c.GetByID(ctx, int(id)) + if id, err := strconv.ParseInt(idOrName, 10, 64); err == nil { + return c.GetByID(ctx, id) } return c.GetByName(ctx, idOrName) } @@ -257,7 +241,7 @@ type ServerListOpts struct { } func (l ServerListOpts) values() url.Values { - vals := l.ListOpts.values() + vals := l.ListOpts.Values() if l.Name != "" { vals.Add("name", l.Name) } @@ -433,14 +417,14 @@ func (c *ServerClient) Create(ctx context.Context, opts ServerCreateOpts) (Serve } if opts.Location != nil { if opts.Location.ID != 0 { - reqBody.Location = strconv.Itoa(opts.Location.ID) + reqBody.Location = strconv.FormatInt(opts.Location.ID, 10) } else { reqBody.Location = opts.Location.Name } } if opts.Datacenter != nil { if opts.Datacenter.ID != 0 { - reqBody.Datacenter = strconv.Itoa(opts.Datacenter.ID) + reqBody.Datacenter = strconv.FormatInt(opts.Datacenter.ID, 10) } else { reqBody.Datacenter = opts.Datacenter.Name } diff --git a/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/server_type.go b/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/server_type.go index 3b45140066f4..aebaebf3c55b 100644 --- a/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/server_type.go +++ 
b/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/server_type.go @@ -1,19 +1,3 @@ -/* -Copyright 2018 The Kubernetes Authors. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - package hcloud import ( @@ -27,7 +11,7 @@ import ( // ServerType represents a server type in the Hetzner Cloud. type ServerType struct { - ID int + ID int64 Name string Description string Cores int @@ -36,7 +20,10 @@ type ServerType struct { StorageType StorageType CPUType CPUType Architecture Architecture - Pricings []ServerTypeLocationPricing + // IncludedTraffic is the free traffic per month in bytes + IncludedTraffic int64 + Pricings []ServerTypeLocationPricing + DeprecatableResource } // StorageType specifies the type of storage. @@ -67,7 +54,7 @@ type ServerTypeClient struct { } // GetByID retrieves a server type by its ID. If the server type does not exist, nil is returned. -func (c *ServerTypeClient) GetByID(ctx context.Context, id int) (*ServerType, *Response, error) { +func (c *ServerTypeClient) GetByID(ctx context.Context, id int64) (*ServerType, *Response, error) { req, err := c.client.NewRequest(ctx, "GET", fmt.Sprintf("/server_types/%d", id), nil) if err != nil { return nil, nil, err @@ -99,8 +86,8 @@ func (c *ServerTypeClient) GetByName(ctx context.Context, name string) (*ServerT // Get retrieves a server type by its ID if the input can be parsed as an integer, otherwise it // retrieves a server type by its name. If the server type does not exist, nil is returned. 
func (c *ServerTypeClient) Get(ctx context.Context, idOrName string) (*ServerType, *Response, error) { - if id, err := strconv.Atoi(idOrName); err == nil { - return c.GetByID(ctx, int(id)) + if id, err := strconv.ParseInt(idOrName, 10, 64); err == nil { + return c.GetByID(ctx, id) } return c.GetByName(ctx, idOrName) } @@ -113,7 +100,7 @@ type ServerTypeListOpts struct { } func (l ServerTypeListOpts) values() url.Values { - vals := l.ListOpts.values() + vals := l.ListOpts.Values() if l.Name != "" { vals.Add("name", l.Name) } @@ -148,10 +135,12 @@ func (c *ServerTypeClient) List(ctx context.Context, opts ServerTypeListOpts) ([ // All returns all server types. func (c *ServerTypeClient) All(ctx context.Context) ([]*ServerType, error) { - allServerTypes := []*ServerType{} + return c.AllWithOpts(ctx, ServerTypeListOpts{ListOpts: ListOpts{PerPage: 50}}) +} - opts := ServerTypeListOpts{} - opts.PerPage = 50 +// AllWithOpts returns all server types for the given options. +func (c *ServerTypeClient) AllWithOpts(ctx context.Context, opts ServerTypeListOpts) ([]*ServerType, error) { + var allServerTypes []*ServerType err := c.client.all(func(page int) (*Response, error) { opts.Page = page diff --git a/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/ssh_key.go b/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/ssh_key.go index 92a980611a7c..40537606c633 100644 --- a/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/ssh_key.go +++ b/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/ssh_key.go @@ -1,19 +1,3 @@ -/* -Copyright 2018 The Kubernetes Authors. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. 
-You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - package hcloud import ( @@ -31,7 +15,7 @@ import ( // SSHKey represents a SSH key in the Hetzner Cloud. type SSHKey struct { - ID int + ID int64 Name string Fingerprint string PublicKey string @@ -45,7 +29,7 @@ type SSHKeyClient struct { } // GetByID retrieves a SSH key by its ID. If the SSH key does not exist, nil is returned. -func (c *SSHKeyClient) GetByID(ctx context.Context, id int) (*SSHKey, *Response, error) { +func (c *SSHKeyClient) GetByID(ctx context.Context, id int64) (*SSHKey, *Response, error) { req, err := c.client.NewRequest(ctx, "GET", fmt.Sprintf("/ssh_keys/%d", id), nil) if err != nil { return nil, nil, err @@ -86,8 +70,8 @@ func (c *SSHKeyClient) GetByFingerprint(ctx context.Context, fingerprint string) // Get retrieves a SSH key by its ID if the input can be parsed as an integer, otherwise it // retrieves a SSH key by its name. If the SSH key does not exist, nil is returned. 
func (c *SSHKeyClient) Get(ctx context.Context, idOrName string) (*SSHKey, *Response, error) { - if id, err := strconv.Atoi(idOrName); err == nil { - return c.GetByID(ctx, int(id)) + if id, err := strconv.ParseInt(idOrName, 10, 64); err == nil { + return c.GetByID(ctx, id) } return c.GetByName(ctx, idOrName) } @@ -101,7 +85,7 @@ type SSHKeyListOpts struct { } func (l SSHKeyListOpts) values() url.Values { - vals := l.ListOpts.values() + vals := l.ListOpts.Values() if l.Name != "" { vals.Add("name", l.Name) } diff --git a/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/testing.go b/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/testing.go index 03e9302dcd45..63cd92f7b8ec 100644 --- a/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/testing.go +++ b/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/testing.go @@ -1,19 +1,3 @@ -/* -Copyright 2018 The Kubernetes Authors. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. 
-*/ - package hcloud import ( @@ -21,14 +5,12 @@ import ( "time" ) -const apiTimestampFormat = "2006-01-02T15:04:05-07:00" - -func mustParseTime(t *testing.T, layout, value string) time.Time { +func mustParseTime(t *testing.T, value string) time.Time { t.Helper() - ts, err := time.Parse(layout, value) + ts, err := time.Parse(time.RFC3339, value) if err != nil { - t.Fatalf("parse time: layout %v: value %v: %v", layout, value, err) + t.Fatalf("parse time: value %v: %v", value, err) } return ts } diff --git a/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/volume.go b/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/volume.go index 9b4ebd63ca1b..025907d401ab 100644 --- a/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/volume.go +++ b/cluster-autoscaler/cloudprovider/hetzner/hcloud-go/hcloud/volume.go @@ -1,19 +1,3 @@ -/* -Copyright 2018 The Kubernetes Authors. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - package hcloud import ( @@ -31,7 +15,7 @@ import ( // Volume represents a volume in the Hetzner Cloud. type Volume struct { - ID int + ID int64 Name string Status VolumeStatus Server *Server @@ -65,7 +49,7 @@ const ( ) // GetByID retrieves a volume by its ID. If the volume does not exist, nil is returned. 
-func (c *VolumeClient) GetByID(ctx context.Context, id int) (*Volume, *Response, error) { +func (c *VolumeClient) GetByID(ctx context.Context, id int64) (*Volume, *Response, error) { req, err := c.client.NewRequest(ctx, "GET", fmt.Sprintf("/volumes/%d", id), nil) if err != nil { return nil, nil, err @@ -97,8 +81,8 @@ func (c *VolumeClient) GetByName(ctx context.Context, name string) (*Volume, *Re // Get retrieves a volume by its ID if the input can be parsed as an integer, otherwise it // retrieves a volume by its name. If the volume does not exist, nil is returned. func (c *VolumeClient) Get(ctx context.Context, idOrName string) (*Volume, *Response, error) { - if id, err := strconv.Atoi(idOrName); err == nil { - return c.GetByID(ctx, int(id)) + if id, err := strconv.ParseInt(idOrName, 10, 64); err == nil { + return c.GetByID(ctx, id) } return c.GetByName(ctx, idOrName) } @@ -112,7 +96,7 @@ type VolumeListOpts struct { } func (l VolumeListOpts) values() url.Values { - vals := l.ListOpts.values() + vals := l.ListOpts.Values() if l.Name != "" { vals.Add("name", l.Name) } diff --git a/cluster-autoscaler/cloudprovider/hetzner/hetzner_manager.go b/cluster-autoscaler/cloudprovider/hetzner/hetzner_manager.go index 7c2f45b3d462..2d2afd44a5c1 100644 --- a/cluster-autoscaler/cloudprovider/hetzner/hetzner_manager.go +++ b/cluster-autoscaler/cloudprovider/hetzner/hetzner_manager.go @@ -63,18 +63,19 @@ func newManager() (*hetznerManager, error) { return nil, errors.New("`HCLOUD_TOKEN` is not specified") } - cloudInitBase64 := os.Getenv("HCLOUD_CLOUD_INIT") - if cloudInitBase64 == "" { - return nil, errors.New("`HCLOUD_CLOUD_INIT` is not specified") - } - client := hcloud.NewClient( hcloud.WithToken(token), hcloud.WithHTTPClient(httpClient), hcloud.WithApplication("cluster-autoscaler", version.ClusterAutoscalerVersion), + hcloud.WithPollBackoffFunc(hcloud.ExponentialBackoff(2, 500*time.Millisecond)), ) ctx := context.Background() + + cloudInitBase64 := 
os.Getenv("HCLOUD_CLOUD_INIT") + if cloudInitBase64 == "" { + return nil, errors.New("`HCLOUD_CLOUD_INIT` is not specified") + } cloudInit, err := base64.StdEncoding.DecodeString(cloudInitBase64) if err != nil { return nil, fmt.Errorf("failed to parse cloud init error: %s", err) diff --git a/cluster-autoscaler/cloudprovider/hetzner/hetzner_node_group.go b/cluster-autoscaler/cloudprovider/hetzner/hetzner_node_group.go index 05cf317ac016..af79f94c8987 100644 --- a/cluster-autoscaler/cloudprovider/hetzner/hetzner_node_group.go +++ b/cluster-autoscaler/cloudprovider/hetzner/hetzner_node_group.go @@ -285,7 +285,7 @@ func toInstance(vm *hcloud.Server) cloudprovider.Instance { } } -func toProviderID(nodeID int) string { +func toProviderID(nodeID int64) string { return fmt.Sprintf("%s%d", providerIDPrefix, nodeID) } diff --git a/cluster-autoscaler/cloudprovider/hetzner/hetzner_servers_cache.go b/cluster-autoscaler/cloudprovider/hetzner/hetzner_servers_cache.go index 34172bcbe6e8..04a2e964d6ff 100644 --- a/cluster-autoscaler/cloudprovider/hetzner/hetzner_servers_cache.go +++ b/cluster-autoscaler/cloudprovider/hetzner/hetzner_servers_cache.go @@ -131,7 +131,7 @@ func (m *serversCache) getServer(nodeIdOrName string) (*hcloud.Server, error) { } for _, server := range servers { - if server.Name == nodeIdOrName || strconv.Itoa(server.ID) == nodeIdOrName { + if server.Name == nodeIdOrName || strconv.FormatInt(server.ID, 10) == nodeIdOrName { return server, nil } } diff --git a/cluster-autoscaler/cloudprovider/huaweicloud/OWNERS b/cluster-autoscaler/cloudprovider/huaweicloud/OWNERS index d9bf03c1a018..517f099df7b5 100644 --- a/cluster-autoscaler/cloudprovider/huaweicloud/OWNERS +++ b/cluster-autoscaler/cloudprovider/huaweicloud/OWNERS @@ -4,3 +4,6 @@ approvers: reviewers: - kevin-wangzefeng - RainbowMango + +labels: +- area/provider/huaweicloud diff --git a/cluster-autoscaler/cloudprovider/ionoscloud/OWNERS b/cluster-autoscaler/cloudprovider/ionoscloud/OWNERS index 
e0edbb8234ff..782d148e9633 100644 --- a/cluster-autoscaler/cloudprovider/ionoscloud/OWNERS +++ b/cluster-autoscaler/cloudprovider/ionoscloud/OWNERS @@ -2,3 +2,6 @@ maintainers: - avorima - schegi - piepmatz + +labels: +- area/provider/ionoscloud diff --git a/cluster-autoscaler/cloudprovider/kamatera/OWNERS b/cluster-autoscaler/cloudprovider/kamatera/OWNERS index fab24c0be3a4..8eeeaabe83b7 100644 --- a/cluster-autoscaler/cloudprovider/kamatera/OWNERS +++ b/cluster-autoscaler/cloudprovider/kamatera/OWNERS @@ -2,3 +2,6 @@ approvers: #- OriHoch reviewers: #- OriHoch + +labels: +- area/provider/kamatera diff --git a/cluster-autoscaler/cloudprovider/kubemark/OWNERS b/cluster-autoscaler/cloudprovider/kubemark/OWNERS index eedd254586aa..2feabd2f2121 100644 --- a/cluster-autoscaler/cloudprovider/kubemark/OWNERS +++ b/cluster-autoscaler/cloudprovider/kubemark/OWNERS @@ -2,3 +2,6 @@ approvers: - ellistarn reviewers: - ellistarn + +labels: +- area/provider/kubemark diff --git a/cluster-autoscaler/cloudprovider/linode/OWNERS b/cluster-autoscaler/cloudprovider/linode/OWNERS new file mode 100644 index 000000000000..a8a368986ed1 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/linode/OWNERS @@ -0,0 +1,3 @@ + +labels: +- area/provider/linode diff --git a/cluster-autoscaler/cloudprovider/linode/examples/cluster-autoscaler-autodiscover.yaml b/cluster-autoscaler/cloudprovider/linode/examples/cluster-autoscaler-autodiscover.yaml index 16d238cce415..1c453e2b447b 100644 --- a/cluster-autoscaler/cloudprovider/linode/examples/cluster-autoscaler-autodiscover.yaml +++ b/cluster-autoscaler/cloudprovider/linode/examples/cluster-autoscaler-autodiscover.yaml @@ -51,7 +51,7 @@ rules: resources: ["statefulsets", "replicasets", "daemonsets"] verbs: ["watch", "list", "get"] - apiGroups: ["storage.k8s.io"] - resources: ["storageclasses", "csinodes"] + resources: ["storageclasses", "csinodes", "csidrivers", "csistoragecapacities"] verbs: ["watch", "list", "get"] - apiGroups: ["batch", 
"extensions"] resources: ["jobs"] @@ -167,4 +167,4 @@ spec: path: "/etc/ssl/certs/ca-certificates.crt" - name: cloud-config secret: - secretName: cluster-autoscaler-cloud-config \ No newline at end of file + secretName: cluster-autoscaler-cloud-config diff --git a/cluster-autoscaler/cloudprovider/magnum/OWNERS b/cluster-autoscaler/cloudprovider/magnum/OWNERS index a5af729a8a86..52a3f93e4130 100644 --- a/cluster-autoscaler/cloudprovider/magnum/OWNERS +++ b/cluster-autoscaler/cloudprovider/magnum/OWNERS @@ -2,3 +2,6 @@ approvers: - tghartland reviewers: - tghartland + +labels: +- area/provider/magnum diff --git a/cluster-autoscaler/cloudprovider/magnum/README.md b/cluster-autoscaler/cloudprovider/magnum/README.md index 77b2accaec13..738e09cca408 100644 --- a/cluster-autoscaler/cloudprovider/magnum/README.md +++ b/cluster-autoscaler/cloudprovider/magnum/README.md @@ -48,7 +48,7 @@ to match your cluster. | --cluster-name | The name of your Kubernetes cluster. If there are multiple clusters sharing the same name then the cluster IDs should be used instead. | | --cloud-provider | Can be omitted if the autoscaler is built with `BUILD_TAGS=magnum`, otherwise use `--cloud-provider=magnum`. | | --nodes | Used to select a specific node group to autoscale and constrain its node count. Of the form `min:max:NodeGroupName`. Can be used multiple times. | -| --node-group-auto-discovery | See below(#node-group-auto-discovery). | +| --node-group-auto-discovery | [See below](#node-group-auto-discovery). 
| #### Deployment with helm diff --git a/cluster-autoscaler/cloudprovider/magnum/magnum_cloud_provider.go b/cluster-autoscaler/cloudprovider/magnum/magnum_cloud_provider.go index 7f254170da31..879e7a464a0e 100644 --- a/cluster-autoscaler/cloudprovider/magnum/magnum_cloud_provider.go +++ b/cluster-autoscaler/cloudprovider/magnum/magnum_cloud_provider.go @@ -124,6 +124,10 @@ func (mcp *magnumCloudProvider) NodeGroupForNode(node *apiv1.Node) (cloudprovide if _, found := node.ObjectMeta.Labels["node-role.kubernetes.io/master"]; found { return nil, nil } + // Ignore control-plane nodes + if _, found := node.ObjectMeta.Labels["node-role.kubernetes.io/control-plane"]; found { + return nil, nil + } ngUUID, err := mcp.magnumManager.nodeGroupForNode(node) if err != nil { diff --git a/cluster-autoscaler/cloudprovider/oci/OWNERS b/cluster-autoscaler/cloudprovider/oci/OWNERS index f6b36ed6bc2c..6555c7ae500e 100644 --- a/cluster-autoscaler/cloudprovider/oci/OWNERS +++ b/cluster-autoscaler/cloudprovider/oci/OWNERS @@ -8,3 +8,6 @@ reviewers: #- trungng92 #- sohan-oracle #- ericrrath + +labels: +- area/provider/oci diff --git a/cluster-autoscaler/cloudprovider/oci/examples/oci-ip-cluster-autoscaler-w-config.yaml b/cluster-autoscaler/cloudprovider/oci/examples/oci-ip-cluster-autoscaler-w-config.yaml index 594f04e96246..d4f3368f4bca 100644 --- a/cluster-autoscaler/cloudprovider/oci/examples/oci-ip-cluster-autoscaler-w-config.yaml +++ b/cluster-autoscaler/cloudprovider/oci/examples/oci-ip-cluster-autoscaler-w-config.yaml @@ -16,7 +16,7 @@ metadata: k8s-app: cluster-autoscaler rules: - apiGroups: ["storage.k8s.io"] - resources: ["csidriver", "csistoragecapacities"] + resources: ["csidrivers", "csistoragecapacities"] verbs: ["watch", "list"] - apiGroups: [""] resources: ["events", "endpoints"] diff --git a/cluster-autoscaler/cloudprovider/oci/examples/oci-ip-cluster-autoscaler-w-principals.yaml b/cluster-autoscaler/cloudprovider/oci/examples/oci-ip-cluster-autoscaler-w-principals.yaml 
index 1f5e7b26e000..39b9f8f4b749 100644 --- a/cluster-autoscaler/cloudprovider/oci/examples/oci-ip-cluster-autoscaler-w-principals.yaml +++ b/cluster-autoscaler/cloudprovider/oci/examples/oci-ip-cluster-autoscaler-w-principals.yaml @@ -16,7 +16,7 @@ metadata: k8s-app: cluster-autoscaler rules: - apiGroups: ["storage.k8s.io"] - resources: ["csidriver", "csistoragecapacities"] + resources: ["csidrivers", "csistoragecapacities"] verbs: ["watch", "list"] - apiGroups: [""] resources: ["events", "endpoints"] diff --git a/cluster-autoscaler/cloudprovider/ovhcloud/OWNERS b/cluster-autoscaler/cloudprovider/ovhcloud/OWNERS new file mode 100644 index 000000000000..167d61951dab --- /dev/null +++ b/cluster-autoscaler/cloudprovider/ovhcloud/OWNERS @@ -0,0 +1,3 @@ + +labels: +- area/provider/ovhcloud diff --git a/cluster-autoscaler/cloudprovider/packet/OWNERS b/cluster-autoscaler/cloudprovider/packet/OWNERS index 3e0dd461bdf9..cbb8c1796985 100644 --- a/cluster-autoscaler/cloudprovider/packet/OWNERS +++ b/cluster-autoscaler/cloudprovider/packet/OWNERS @@ -11,3 +11,6 @@ reviewers: - detiber - displague - v-pap + +labels: +- area/provider/packet diff --git a/cluster-autoscaler/cloudprovider/rancher/OWNERS b/cluster-autoscaler/cloudprovider/rancher/OWNERS index 370bf629914d..96271d62be0e 100644 --- a/cluster-autoscaler/cloudprovider/rancher/OWNERS +++ b/cluster-autoscaler/cloudprovider/rancher/OWNERS @@ -8,3 +8,6 @@ reviewers: #- gajicdev #- pawelkuc #- thirdeyenick + +labels: +- area/provider/rancher diff --git a/cluster-autoscaler/cloudprovider/scaleway/OWNERS b/cluster-autoscaler/cloudprovider/scaleway/OWNERS index 0765b7c70703..188bb8cca804 100644 --- a/cluster-autoscaler/cloudprovider/scaleway/OWNERS +++ b/cluster-autoscaler/cloudprovider/scaleway/OWNERS @@ -4,3 +4,6 @@ reviewers: - remyleone - Sh4d1 + +labels: +- area/provider/scaleway diff --git a/cluster-autoscaler/cloudprovider/tencentcloud/OWNERS b/cluster-autoscaler/cloudprovider/tencentcloud/OWNERS index 
88f18c24ec68..3a4764b5f3e3 100644 --- a/cluster-autoscaler/cloudprovider/tencentcloud/OWNERS +++ b/cluster-autoscaler/cloudprovider/tencentcloud/OWNERS @@ -2,3 +2,6 @@ approvers: # - alphajc reviewers: # - alphajc + +labels: +- area/provider/tencentcloud diff --git a/cluster-autoscaler/cloudprovider/test/test_cloud_provider.go b/cluster-autoscaler/cloudprovider/test/test_cloud_provider.go index 274933428a8f..bcebbfa9db7e 100644 --- a/cluster-autoscaler/cloudprovider/test/test_cloud_provider.go +++ b/cluster-autoscaler/cloudprovider/test/test_cloud_provider.go @@ -443,6 +443,9 @@ func (tng *TestNodeGroup) DeleteNodes(nodes []*apiv1.Node) error { id := tng.id tng.targetSize -= len(nodes) tng.Unlock() + if tng.opts != nil && tng.opts.ZeroOrMaxNodeScaling && tng.targetSize != 0 { + return fmt.Errorf("TestNodeGroup: attempted to partially scale down a node group that should be scaled down atomically") + } for _, node := range nodes { err := tng.cloudProvider.onScaleDown(id, node.Name) if err != nil { @@ -476,7 +479,12 @@ func (tng *TestNodeGroup) Nodes() ([]cloudprovider.Instance, error) { instances := make([]cloudprovider.Instance, 0) for node, nodegroup := range tng.cloudProvider.nodes { if nodegroup == tng.id { - instances = append(instances, cloudprovider.Instance{Id: node}) + instances = append(instances, cloudprovider.Instance{ + Id: node, + Status: &cloudprovider.InstanceStatus{ + State: cloudprovider.InstanceRunning, + }, + }) } } return instances, nil diff --git a/cluster-autoscaler/cloudprovider/volcengine/OWNERS b/cluster-autoscaler/cloudprovider/volcengine/OWNERS new file mode 100644 index 000000000000..303761210458 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/OWNERS @@ -0,0 +1,6 @@ +approvers: +# - dougsong +reviewers: +# - dougsong +labels: +- area/provider/volcengine diff --git a/cluster-autoscaler/cloudprovider/volcengine/README.md b/cluster-autoscaler/cloudprovider/volcengine/README.md new file mode 100644 index 
000000000000..9ee3f32e38bc --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/README.md @@ -0,0 +1,218 @@ +# Cluster Autoscaler on Volcengine + +The Cluster Autoscaler on Volcengine dynamically scales Kubernetes worker nodes. It runs as a deployment within your cluster. This README provides a step-by-step guide for setting up cluster autoscaler on your Kubernetes cluster. + +## Permissions + +### Using Volcengine Credentials + +To use Volcengine credentials, create a `Secret` with your access key and access key secret: + +```yaml +apiVersion: v1 +kind: Secret +metadata: + name: cloud-config + namespace: kube-system +type: Opaque +data: + access-key: [YOUR_BASE64_AK_ID] + secret-key: [YOUR_BASE64_AK_SECRET] + region-id: [YOUR_BASE64_REGION_ID] +``` + +See the [Volcengine Access Key User Manual](https://www.volcengine.com/docs/6291/65568) and [Volcengine Autoscaling Region](https://www.volcengine.com/docs/6617/87001) for more information. + +## Manual Configuration + +### Auto Scaling Group Setup + +1. Create an Auto Scaling Group in the [Volcengine Console](https://console.volcengine.com/as) with valid configurations, and set the desired instance number to zero. + +2. Create a Scaling Configuration for the Scaling Group with valid configurations. In User Data, specify the script to initialize the environment and join this node to the Kubernetes cluster. + +### Cluster Autoscaler Deployment + +1. Create a service account. 
+ +```yaml +--- +apiVersion: v1 +kind: ServiceAccount +metadata: + labels: + k8s-addon: cluster-autoscaler.addons.k8s.io + k8s-app: cluster-autoscaler + name: cluster-autoscaler-account + namespace: kube-system + +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + name: cluster-autoscaler + labels: + k8s-addon: cluster-autoscaler.addons.k8s.io + k8s-app: cluster-autoscaler +rules: + - apiGroups: [""] + resources: ["events", "endpoints"] + verbs: ["create", "patch"] + - apiGroups: [""] + resources: ["pods/eviction"] + verbs: ["create"] + - apiGroups: [""] + resources: ["pods/status"] + verbs: ["update"] + - apiGroups: [""] + resources: ["endpoints"] + resourceNames: ["cluster-autoscaler"] + verbs: ["get", "update"] + - apiGroups: [""] + resources: ["nodes"] + verbs: ["watch", "list", "get", "update", "delete"] + - apiGroups: [""] + resources: + - "namespaces" + - "pods" + - "services" + - "replicationcontrollers" + - "persistentvolumeclaims" + - "persistentvolumes" + verbs: ["watch", "list", "get"] + - apiGroups: ["batch", "extensions"] + resources: ["jobs"] + verbs: ["watch", "list", "get", "patch"] + - apiGroups: [ "policy" ] + resources: [ "poddisruptionbudgets" ] + verbs: [ "watch", "list" ] + - apiGroups: ["apps"] + resources: ["daemonsets", "replicasets", "statefulsets"] + verbs: ["watch", "list", "get"] + - apiGroups: ["storage.k8s.io"] + resources: ["storageclasses", "csinodes", "csidrivers", "csistoragecapacities"] + verbs: ["watch", "list", "get"] + - apiGroups: [""] + resources: ["configmaps"] + verbs: ["create","list","watch"] + - apiGroups: [""] + resources: ["configmaps"] + resourceNames: ["cluster-autoscaler-status", "cluster-autoscaler-priority-expander"] + verbs: ["delete", "get", "update"] + - apiGroups: ["coordination.k8s.io"] + resources: ["leases"] + verbs: ["watch", "list", "get", "create", "update", "patch", "delete", "deletecollection"] + - apiGroups: ["extensions"] + resources: ["replicasets", "daemonsets"] + 
verbs: ["watch", "list", "get"] + +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: Role +metadata: + name: cluster-autoscaler + namespace: kube-system + labels: + k8s-addon: cluster-autoscaler.addons.k8s.io + k8s-app: cluster-autoscaler +rules: + - apiGroups: [""] + resources: ["configmaps"] + verbs: ["create","list","watch"] + - apiGroups: [""] + resources: ["configmaps"] + resourceNames: ["cluster-autoscaler-status", "cluster-autoscaler-priority-expander"] + verbs: ["delete","get","update","watch"] + +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + name: cluster-autoscaler + labels: + k8s-addon: cluster-autoscaler.addons.k8s.io + k8s-app: cluster-autoscaler +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: cluster-autoscaler +subjects: + - kind: ServiceAccount + name: cluster-autoscaler-account + namespace: kube-system + +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: RoleBinding +metadata: + name: cluster-autoscaler + namespace: kube-system + labels: + k8s-addon: cluster-autoscaler.addons.k8s.io + k8s-app: cluster-autoscaler +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: Role + name: cluster-autoscaler +subjects: + - kind: ServiceAccount + name: cluster-autoscaler + namespace: kube-system +``` + +2. Create a deployment. 
+ +```yaml +--- +kind: Deployment +apiVersion: apps/v1 +metadata: + name: cluster-autoscaler + namespace: kube-system + labels: + app: cluster-autoscaler +spec: + replicas: 1 + selector: + matchLabels: + app: cluster-autoscaler + template: + metadata: + namespace: kube-system + labels: + app: cluster-autoscaler + spec: + serviceAccountName: cluster-autoscaler-account + containers: + - name: cluster-autoscaler + image: registry.k8s.io/autoscaling/cluster-autoscaler:latest + imagePullPolicy: Always + command: + - ./cluster-autoscaler + - --alsologtostderr + - --cloud-config=/config/cloud-config + - --cloud-provider=volcengine + - --nodes=[min]:[max]:[ASG_ID] + - --scale-down-delay-after-add=1m0s + - --scale-down-unneeded-time=1m0s + env: + - name: ACCESS_KEY + valueFrom: + secretKeyRef: + name: cloud-config + key: access-key + - name: SECRET_KEY + valueFrom: + secretKeyRef: + name: cloud-config + key: secret-key + - name: REGION_ID + valueFrom: + secretKeyRef: + name: cloud-config + key: region-id +``` + +## Auto-Discovery Setup + +Auto Discovery is not currently supported in Volcengine. 
\ No newline at end of file diff --git a/cluster-autoscaler/cloudprovider/volcengine/examples/cluster-autoscaler-deployment.yaml b/cluster-autoscaler/cloudprovider/volcengine/examples/cluster-autoscaler-deployment.yaml new file mode 100644 index 000000000000..882413a70c41 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/examples/cluster-autoscaler-deployment.yaml @@ -0,0 +1,52 @@ +kind: Deployment +apiVersion: apps/v1 +metadata: + name: cluster-autoscaler + namespace: kube-system + labels: + app: cluster-autoscaler +spec: + replicas: 1 + selector: + matchLabels: + app: cluster-autoscaler + template: + metadata: + namespace: kube-system + labels: + app: cluster-autoscaler + spec: + serviceAccountName: cluster-autoscaler-account + containers: + - name: cluster-autoscaler + image: registry.k8s.io/autoscaling/cluster-autoscaler:latest + imagePullPolicy: Always + command: + - ./cluster-autoscaler + - --alsologtostderr + - --cloud-config=/config/cloud-config + - --cloud-provider=volcengine + - --nodes=[min]:[max]:[ASG_ID] + - --scale-down-delay-after-add=1m0s + - --scale-down-unneeded-time=1m0s + env: + - name: ACCESS_KEY + valueFrom: + secretKeyRef: + name: cloud-config + key: access-key + - name: SECRET_KEY + valueFrom: + secretKeyRef: + name: cloud-config + key: secret-key + - name: REGION_ID + valueFrom: + secretKeyRef: + name: cloud-config + key: region-id + - name: ENDPOINT + valueFrom: + secretKeyRef: + name: cloud-config + key: endpoint \ No newline at end of file diff --git a/cluster-autoscaler/cloudprovider/volcengine/examples/cluster-autoscaler-secret.yaml b/cluster-autoscaler/cloudprovider/volcengine/examples/cluster-autoscaler-secret.yaml new file mode 100644 index 000000000000..9be6ff38f0a0 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/examples/cluster-autoscaler-secret.yaml @@ -0,0 +1,11 @@ +apiVersion: v1 +kind: Secret +metadata: + name: cloud-config + namespace: kube-system +type: Opaque +data: + access-key: 
[YOUR_BASE64_AK_ID] + secret-key: [YOUR_BASE64_AK_SECRET] + region-id: [YOUR_BASE64_REGION_ID] + endpoint: [YOUR_BASE64_ENDPOINT] \ No newline at end of file diff --git a/cluster-autoscaler/cloudprovider/volcengine/examples/cluster-autoscaler-svcaccount.yaml b/cluster-autoscaler/cloudprovider/volcengine/examples/cluster-autoscaler-svcaccount.yaml new file mode 100644 index 000000000000..a28db02a0400 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/examples/cluster-autoscaler-svcaccount.yaml @@ -0,0 +1,122 @@ +--- +apiVersion: v1 +kind: ServiceAccount +metadata: + labels: + k8s-addon: cluster-autoscaler.addons.k8s.io + k8s-app: cluster-autoscaler + name: cluster-autoscaler-account + namespace: kube-system + +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + name: cluster-autoscaler + labels: + k8s-addon: cluster-autoscaler.addons.k8s.io + k8s-app: cluster-autoscaler +rules: + - apiGroups: [""] + resources: ["events", "endpoints"] + verbs: ["create", "patch"] + - apiGroups: [""] + resources: ["pods/eviction"] + verbs: ["create"] + - apiGroups: [""] + resources: ["pods/status"] + verbs: ["update"] + - apiGroups: [""] + resources: ["endpoints"] + resourceNames: ["cluster-autoscaler"] + verbs: ["get", "update"] + - apiGroups: [""] + resources: ["nodes"] + verbs: ["watch", "list", "get", "update", "delete"] + - apiGroups: [""] + resources: + - "namespaces" + - "pods" + - "services" + - "replicationcontrollers" + - "persistentvolumeclaims" + - "persistentvolumes" + verbs: ["watch", "list", "get"] + - apiGroups: ["batch", "extensions"] + resources: ["jobs"] + verbs: ["watch", "list", "get", "patch"] + - apiGroups: [ "policy" ] + resources: [ "poddisruptionbudgets" ] + verbs: [ "watch", "list" ] + - apiGroups: ["apps"] + resources: ["daemonsets", "replicasets", "statefulsets"] + verbs: ["watch", "list", "get"] + - apiGroups: ["storage.k8s.io"] + resources: ["storageclasses", "csinodes", "csidrivers", "csistoragecapacities"] + 
verbs: ["watch", "list", "get"] + - apiGroups: [""] + resources: ["configmaps"] + verbs: ["create","list","watch"] + - apiGroups: [""] + resources: ["configmaps"] + resourceNames: ["cluster-autoscaler-status", "cluster-autoscaler-priority-expander"] + verbs: ["delete", "get", "update"] + - apiGroups: ["coordination.k8s.io"] + resources: ["leases"] + verbs: ["watch", "list", "get", "create", "update", "patch", "delete", "deletecollection"] + - apiGroups: ["extensions"] + resources: ["replicasets", "daemonsets"] + verbs: ["watch", "list", "get"] + +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: Role +metadata: + name: cluster-autoscaler + namespace: kube-system + labels: + k8s-addon: cluster-autoscaler.addons.k8s.io + k8s-app: cluster-autoscaler +rules: + - apiGroups: [""] + resources: ["configmaps"] + verbs: ["create","list","watch"] + - apiGroups: [""] + resources: ["configmaps"] + resourceNames: ["cluster-autoscaler-status", "cluster-autoscaler-priority-expander"] + verbs: ["delete","get","update","watch"] + +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + name: cluster-autoscaler + labels: + k8s-addon: cluster-autoscaler.addons.k8s.io + k8s-app: cluster-autoscaler +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: cluster-autoscaler +subjects: + - kind: ServiceAccount + name: cluster-autoscaler-account + namespace: kube-system + +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: RoleBinding +metadata: + name: cluster-autoscaler + namespace: kube-system + labels: + k8s-addon: cluster-autoscaler.addons.k8s.io + k8s-app: cluster-autoscaler +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: Role + name: cluster-autoscaler +subjects: + - kind: ServiceAccount + name: cluster-autoscaler + namespace: kube-system diff --git a/cluster-autoscaler/cloudprovider/volcengine/volc-sdk-golang/base/aes.go b/cluster-autoscaler/cloudprovider/volcengine/volc-sdk-golang/base/aes.go new file mode 100644 index 
000000000000..e11e0d729690 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volc-sdk-golang/base/aes.go @@ -0,0 +1,67 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package base + +import ( + "bytes" + "crypto/aes" + "crypto/cipher" + "encoding/base64" + "errors" + "fmt" +) + +// AES CBC +func aesEncryptCBC(origData, key []byte) (crypted []byte, err error) { + defer func() { + if r := recover(); r != nil { + crypted = nil + err = errors.New(fmt.Sprintf("%v", r)) + } + }() + block, err := aes.NewCipher(key) + if err != nil { + return + } + + blockSize := block.BlockSize() + origData = zeroPadding(origData, blockSize) + blockMode := cipher.NewCBCEncrypter(block, key[:blockSize]) + crypted = make([]byte, len(origData)) + blockMode.CryptBlocks(crypted, origData) + return +} + +// AES CBC Do a Base64 encryption after encryption +func aesEncryptCBCWithBase64(origData, key []byte) (string, error) { + cbc, err := aesEncryptCBC(origData, key) + if err != nil { + return "", err + } + + return base64.StdEncoding.EncodeToString(cbc), nil +} + +func zeroPadding(ciphertext []byte, blockSize int) []byte { + padding := blockSize - len(ciphertext)%blockSize + if padding == 0 { + return ciphertext + } + + padtext := bytes.Repeat([]byte{byte(0)}, padding) + return append(ciphertext, padtext...) 
+} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volc-sdk-golang/base/client.go b/cluster-autoscaler/cloudprovider/volcengine/volc-sdk-golang/base/client.go new file mode 100644 index 000000000000..92ffaa8b4375 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volc-sdk-golang/base/client.go @@ -0,0 +1,357 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package base + +import ( + "context" + "encoding/base64" + "encoding/json" + "errors" + "fmt" + "io" + "io/ioutil" + "net/http" + "net/url" + "os" + "strings" + "time" + + "github.com/cenkalti/backoff/v4" +) + +const ( + accessKey = "VOLC_ACCESSKEY" + secretKey = "VOLC_SECRETKEY" + + defaultScheme = "http" +) + +var _GlobalClient *http.Client + +func init() { + _GlobalClient = &http.Client{ + Transport: &http.Transport{ + MaxIdleConns: 1000, + MaxIdleConnsPerHost: 100, + IdleConnTimeout: 10 * time.Second, + }, + } +} + +// Client +type Client struct { + Client *http.Client + ServiceInfo *ServiceInfo + ApiInfoList map[string]*ApiInfo +} + +// NewClient +func NewClient(info *ServiceInfo, apiInfoList map[string]*ApiInfo) *Client { + client := &Client{Client: _GlobalClient, ServiceInfo: info.Clone(), ApiInfoList: apiInfoList} + + if client.ServiceInfo.Scheme == "" { + client.ServiceInfo.Scheme = defaultScheme + } + + if os.Getenv(accessKey) != "" && os.Getenv(secretKey) != "" { + client.ServiceInfo.Credentials.AccessKeyID = os.Getenv(accessKey) + 
client.ServiceInfo.Credentials.SecretAccessKey = os.Getenv(secretKey) + } else if _, err := os.Stat(os.Getenv("HOME") + "/.volc/config"); err == nil { + if content, err := ioutil.ReadFile(os.Getenv("HOME") + "/.volc/config"); err == nil { + m := make(map[string]string) + json.Unmarshal(content, &m) + if accessKey, ok := m["ak"]; ok { + client.ServiceInfo.Credentials.AccessKeyID = accessKey + } + if secretKey, ok := m["sk"]; ok { + client.ServiceInfo.Credentials.SecretAccessKey = secretKey + } + } + } + return client +} + +func (serviceInfo *ServiceInfo) Clone() *ServiceInfo { + ret := new(ServiceInfo) + //base info + ret.Timeout = serviceInfo.Timeout + ret.Host = serviceInfo.Host + ret.Scheme = serviceInfo.Scheme + + //credential + ret.Credentials = serviceInfo.Credentials.Clone() + + // header + ret.Header = serviceInfo.Header.Clone() + return ret +} + +func (cred Credentials) Clone() Credentials { + return Credentials{ + Service: cred.Service, + Region: cred.Region, + SecretAccessKey: cred.SecretAccessKey, + AccessKeyID: cred.AccessKeyID, + SessionToken: cred.SessionToken, + } +} + +// SetRetrySettings +func (client *Client) SetRetrySettings(retrySettings *RetrySettings) { + if retrySettings != nil { + client.ServiceInfo.Retry = *retrySettings + } +} + +// SetAccessKey +func (client *Client) SetAccessKey(ak string) { + if ak != "" { + client.ServiceInfo.Credentials.AccessKeyID = ak + } +} + +// SetSecretKey +func (client *Client) SetSecretKey(sk string) { + if sk != "" { + client.ServiceInfo.Credentials.SecretAccessKey = sk + } +} + +// SetSessionToken +func (client *Client) SetSessionToken(token string) { + if token != "" { + client.ServiceInfo.Credentials.SessionToken = token + } +} + +// SetHost +func (client *Client) SetHost(host string) { + if host != "" { + client.ServiceInfo.Host = host + } +} + +func (client *Client) SetScheme(scheme string) { + if scheme != "" { + client.ServiceInfo.Scheme = scheme + } +} + +// SetCredential +func (client *Client) 
SetCredential(c Credentials) { + if c.AccessKeyID != "" { + client.ServiceInfo.Credentials.AccessKeyID = c.AccessKeyID + } + + if c.SecretAccessKey != "" { + client.ServiceInfo.Credentials.SecretAccessKey = c.SecretAccessKey + } + + if c.Region != "" { + client.ServiceInfo.Credentials.Region = c.Region + } + + if c.SessionToken != "" { + client.ServiceInfo.Credentials.SessionToken = c.SessionToken + } + + if c.Service != "" { + client.ServiceInfo.Credentials.Service = c.Service + } +} + +func (client *Client) SetTimeout(timeout time.Duration) { + if timeout > 0 { + client.ServiceInfo.Timeout = timeout + } +} + +// GetSignUrl +func (client *Client) GetSignUrl(api string, query url.Values) (string, error) { + apiInfo := client.ApiInfoList[api] + + if apiInfo == nil { + return "", errors.New("The related api does not exist") + } + + query = mergeQuery(query, apiInfo.Query) + + u := url.URL{ + Scheme: client.ServiceInfo.Scheme, + Host: client.ServiceInfo.Host, + Path: apiInfo.Path, + RawQuery: query.Encode(), + } + req, err := http.NewRequest(strings.ToUpper(apiInfo.Method), u.String(), nil) + + if err != nil { + return "", errors.New("Failed to build request") + } + + return client.ServiceInfo.Credentials.SignUrl(req), nil +} + +// SignSts2 +func (client *Client) SignSts2(inlinePolicy *Policy, expire time.Duration) (*SecurityToken2, error) { + var err error + sts := new(SecurityToken2) + if sts.AccessKeyID, sts.SecretAccessKey, err = createTempAKSK(); err != nil { + return nil, err + } + + if expire < time.Minute { + expire = time.Minute + } + + now := time.Now() + expireTime := now.Add(expire) + sts.CurrentTime = now.Format(time.RFC3339) + sts.ExpiredTime = expireTime.Format(time.RFC3339) + + innerToken, err := createInnerToken(client.ServiceInfo.Credentials, sts, inlinePolicy, expireTime.Unix()) + if err != nil { + return nil, err + } + + b, _ := json.Marshal(innerToken) + sts.SessionToken = "STS2" + base64.StdEncoding.EncodeToString(b) + return sts, nil +} + +// 
Query Initiate a Get query request +func (client *Client) Query(api string, query url.Values) ([]byte, int, error) { + return client.CtxQuery(context.Background(), api, query) +} + +func (client *Client) CtxQuery(ctx context.Context, api string, query url.Values) ([]byte, int, error) { + return client.request(ctx, api, query, "", "") +} + +// Json Initiate a Json post request +func (client *Client) Json(api string, query url.Values, body string) ([]byte, int, error) { + return client.CtxJson(context.Background(), api, query, body) +} + +func (client *Client) CtxJson(ctx context.Context, api string, query url.Values, body string) ([]byte, int, error) { + return client.request(ctx, api, query, body, "application/json") +} +func (client *Client) PostWithContentType(api string, query url.Values, body string, ct string) ([]byte, int, error) { + return client.CtxPostWithContentType(context.Background(), api, query, body, ct) +} + +// CtxPostWithContentType Initiate a post request with a custom Content-Type, Content-Type cannot be empty +func (client *Client) CtxPostWithContentType(ctx context.Context, api string, query url.Values, body string, ct string) ([]byte, int, error) { + return client.request(ctx, api, query, body, ct) +} + +func (client *Client) Post(api string, query url.Values, form url.Values) ([]byte, int, error) { + return client.CtxPost(context.Background(), api, query, form) +} + +// CtxPost Initiate a Post request +func (client *Client) CtxPost(ctx context.Context, api string, query url.Values, form url.Values) ([]byte, int, error) { + apiInfo := client.ApiInfoList[api] + form = mergeQuery(form, apiInfo.Form) + return client.request(ctx, api, query, form.Encode(), "application/x-www-form-urlencoded") +} + +func (client *Client) makeRequest(inputContext context.Context, api string, req *http.Request, timeout time.Duration) ([]byte, int, error, bool) { + req = client.ServiceInfo.Credentials.Sign(req) + + ctx := inputContext + if ctx == nil { + ctx = 
context.Background() + } + + ctx, cancel := context.WithTimeout(ctx, timeout) + defer cancel() + req = req.WithContext(ctx) + + resp, err := client.Client.Do(req) + if err != nil { + // should retry when client sends request error. + return []byte(""), 500, err, true + } + defer resp.Body.Close() + + body, err := ioutil.ReadAll(resp.Body) + if err != nil { + return []byte(""), resp.StatusCode, err, false + } + + if resp.StatusCode < 200 || resp.StatusCode > 299 { + needRetry := false + // should retry when server returns 5xx error. + if resp.StatusCode >= http.StatusInternalServerError { + needRetry = true + } + return body, resp.StatusCode, fmt.Errorf("api %s http code %d body %s", api, resp.StatusCode, string(body)), needRetry + } + + return body, resp.StatusCode, nil, false +} + +func (client *Client) request(ctx context.Context, api string, query url.Values, body string, ct string) ([]byte, int, error) { + apiInfo := client.ApiInfoList[api] + + if apiInfo == nil { + return []byte(""), 500, errors.New("The related api does not exist") + } + timeout := getTimeout(client.ServiceInfo.Timeout, apiInfo.Timeout) + header := mergeHeader(client.ServiceInfo.Header, apiInfo.Header) + query = mergeQuery(query, apiInfo.Query) + retrySettings := getRetrySetting(&client.ServiceInfo.Retry, &apiInfo.Retry) + + u := url.URL{ + Scheme: client.ServiceInfo.Scheme, + Host: client.ServiceInfo.Host, + Path: apiInfo.Path, + RawQuery: query.Encode(), + } + requestBody := strings.NewReader(body) + req, err := http.NewRequest(strings.ToUpper(apiInfo.Method), u.String(), nil) + if err != nil { + return []byte(""), 500, errors.New("Failed to build request") + } + req.Header = header + if ct != "" { + req.Header.Set("Content-Type", ct) + } + + // Service info could be changed by SetRegion, so set the UA header for every request here.
+ req.Header.Set("User-Agent", strings.Join([]string{SDKName, SDKVersion}, "/")) + + var resp []byte + var code int + + err = backoff.Retry(func() error { + _, err = requestBody.Seek(0, io.SeekStart) + if err != nil { + // if seek failed, stop retry. + return backoff.Permanent(err) + } + req.Body = ioutil.NopCloser(requestBody) + var needRetry bool + resp, code, err, needRetry = client.makeRequest(ctx, api, req, timeout) + if needRetry { + return err + } else { + return backoff.Permanent(err) + } + }, backoff.WithMaxRetries(backoff.NewConstantBackOff(*retrySettings.RetryInterval), *retrySettings.RetryTimes)) + return resp, code, err +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volc-sdk-golang/base/model.go b/cluster-autoscaler/cloudprovider/volcengine/volc-sdk-golang/base/model.go new file mode 100644 index 000000000000..26a8858e823b --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volc-sdk-golang/base/model.go @@ -0,0 +1,160 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. 
+*/ + +package base + +import ( + "net/http" + "net/url" + "time" +) + +const ( + RegionCnNorth1 = "cn-north-1" + RegionUsEast1 = "us-east-1" + RegionApSingapore = "ap-singapore-1" + + timeFormatV4 = "20060102T150405Z" +) + +type ServiceInfo struct { + Timeout time.Duration + Scheme string + Host string + Header http.Header + Credentials Credentials + Retry RetrySettings +} + +type ApiInfo struct { + Method string + Path string + Query url.Values + Form url.Values + Timeout time.Duration + Header http.Header + Retry RetrySettings +} + +type Credentials struct { + AccessKeyID string + SecretAccessKey string + Service string + Region string + SessionToken string +} + +type metadata struct { + algorithm string + credentialScope string + signedHeaders string + date string + region string + service string +} + +// Unified JSON return results +type CommonResponse struct { + ResponseMetadata ResponseMetadata + Result interface{} `json:"Result,omitempty"` +} + +type BaseResp struct { + Status string + CreatedTime int64 + UpdatedTime int64 +} + +type ErrorObj struct { + CodeN int + Code string + Message string +} + +type ResponseMetadata struct { + RequestId string + Service string `json:",omitempty"` + Region string `json:",omitempty"` + Action string `json:",omitempty"` + Version string `json:",omitempty"` + Error *ErrorObj `json:",omitempty"` +} + +type Policy struct { + Statement []*Statement +} + +const ( + StatementEffectAllow = "Allow" + StatementEffectDeny = "Deny" +) + +type Statement struct { + Effect string + Action []string + Resource []string + Condition string `json:",omitempty"` +} + +type SecurityToken2 struct { + AccessKeyID string + SecretAccessKey string + SessionToken string + ExpiredTime string + CurrentTime string +} + +type InnerToken struct { + LTAccessKeyId string + AccessKeyId string + SignedSecretAccessKey string + ExpiredTime int64 + PolicyString string + Signature string +} + +type RetrySettings struct { + AutoRetry bool + RetryTimes *uint64 + 
RetryInterval *time.Duration +} + +type RequestParam struct { + IsSignUrl bool + Body []byte + Method string + Date time.Time + Path string + Host string + QueryList url.Values + Headers http.Header +} + +type SignRequest struct { + XDate string + XNotSignBody string + XCredential string + XAlgorithm string + XSignedHeaders string + XSignedQueries string + XSignature string + XSecurityToken string + + Host string + ContentType string + XContentSha256 string + Authorization string +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volc-sdk-golang/base/sign.go b/cluster-autoscaler/cloudprovider/volcengine/volc-sdk-golang/base/sign.go new file mode 100644 index 000000000000..2a97e3e6ee56 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volc-sdk-golang/base/sign.go @@ -0,0 +1,349 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. 
+*/ + +package base + +import ( + "bytes" + "crypto/hmac" + "crypto/md5" + "crypto/sha256" + "encoding/base64" + "encoding/hex" + "fmt" + "io/ioutil" + "net/http" + "net/url" + "sort" + "strings" + "time" +) + +func (c Credentials) Sign(request *http.Request) *http.Request { + query := request.URL.Query() + request.URL.RawQuery = query.Encode() + + if request.URL.Path == "" { + request.URL.Path += "/" + } + requestParam := RequestParam{ + IsSignUrl: false, + Body: readAndReplaceBody(request), + Host: request.Host, + Path: request.URL.Path, + Method: request.Method, + Date: now(), + QueryList: query, + Headers: request.Header, + } + signRequest := GetSignRequest(requestParam, c) + + request.Header.Set("Host", signRequest.Host) + request.Header.Set("Content-Type", signRequest.ContentType) + request.Header.Set("X-Date", signRequest.XDate) + request.Header.Set("X-Content-Sha256", signRequest.XContentSha256) + request.Header.Set("Authorization", signRequest.Authorization) + if signRequest.XSecurityToken != "" { + request.Header.Set("X-Security-Token", signRequest.XSecurityToken) + } + return request +} + +func (c Credentials) SignUrl(request *http.Request) string { + query := request.URL.Query() + + requestParam := RequestParam{ + IsSignUrl: true, + Body: readAndReplaceBody(request), + Host: request.Host, + Path: request.URL.Path, + Method: request.Method, + Date: now(), + QueryList: query, + Headers: request.Header, + } + signRequest := GetSignRequest(requestParam, c) + + query.Set("X-Date", signRequest.XDate) + query.Set("X-NotSignBody", signRequest.XNotSignBody) + query.Set("X-Credential", signRequest.XCredential) + query.Set("X-Algorithm", signRequest.XAlgorithm) + query.Set("X-SignedHeaders", signRequest.XSignedHeaders) + query.Set("X-SignedQueries", signRequest.XSignedQueries) + query.Set("X-Signature", signRequest.XSignature) + if signRequest.XSecurityToken != "" { + query.Set("X-Security-Token", signRequest.XSecurityToken) + } + return query.Encode() +} + +func 
GetSignRequest(requestParam RequestParam, credentials Credentials) SignRequest { + formatDate := appointTimestampV4(requestParam.Date) + meta := getMetaData(credentials, tsDateV4(formatDate)) + + requestSignMap := make(map[string][]string) + if credentials.SessionToken != "" { + requestSignMap["X-Security-Token"] = []string{credentials.SessionToken} + } + signRequest := SignRequest{ + XDate: formatDate, + XSecurityToken: credentials.SessionToken, + } + var bodyHash string + if requestParam.IsSignUrl { + for k, v := range requestParam.QueryList { + requestSignMap[k] = v + } + requestSignMap["X-Date"], requestSignMap["X-NotSignBody"], requestSignMap["X-Credential"], requestSignMap["X-Algorithm"], requestSignMap["X-SignedHeaders"], requestSignMap["X-SignedQueries"] = + []string{formatDate}, []string{""}, []string{credentials.AccessKeyID + "/" + meta.credentialScope}, []string{meta.algorithm}, []string{meta.signedHeaders}, []string{""} + + keys := make([]string, 0, len(requestSignMap)) + for k := range requestSignMap { + keys = append(keys, k) + } + sort.Strings(keys) + requestSignMap["X-SignedQueries"] = []string{strings.Join(keys, ";")} + + signRequest.XNotSignBody, signRequest.XCredential, signRequest.XAlgorithm, signRequest.XSignedHeaders, signRequest.XSignedQueries = + "", credentials.AccessKeyID+"/"+meta.credentialScope, meta.algorithm, meta.signedHeaders, strings.Join(keys, ";") + bodyHash = hashSHA256([]byte{}) + } else { + for k, v := range requestParam.Headers { + requestSignMap[k] = v + } + if requestSignMap["Content-Type"] == nil { + signRequest.ContentType = "application/x-www-form-urlencoded; charset=utf-8" + } else { + signRequest.ContentType = requestSignMap["Content-Type"][0] + } + requestSignMap["X-Date"], requestSignMap["Host"], requestSignMap["Content-Type"] = []string{formatDate}, []string{requestParam.Host}, []string{signRequest.ContentType} + + if len(requestParam.Body) == 0 { + bodyHash = hashSHA256([]byte{}) + } else { + bodyHash = 
hashSHA256(requestParam.Body) + } + requestSignMap["X-Content-Sha256"] = []string{bodyHash} + signRequest.Host, signRequest.XContentSha256 = requestParam.Host, bodyHash + } + + signature := getSignatureStr(requestParam, meta, credentials.SecretAccessKey, formatDate, requestSignMap, bodyHash) + if requestParam.IsSignUrl { + signRequest.XSignature = signature + } else { + signRequest.Authorization = buildAuthHeaderV4(signature, meta, credentials) + } + return signRequest +} + +func getSignatureStr(requestParam RequestParam, meta *metadata, secretAccessKey string, + formatDate string, requestSignMap map[string][]string, bodyHash string) string { + // Task 1 + hashedCanonReq := hashedCanonicalRequestV4(requestParam, meta, requestSignMap, bodyHash) + + // Task 2 + stringToSign := concat("\n", meta.algorithm, formatDate, meta.credentialScope, hashedCanonReq) + + // Task 3 + signingKey := signingKeyV4(secretAccessKey, meta.date, meta.region, meta.service) + return signatureV4(signingKey, stringToSign) +} + +func hashedCanonicalRequestV4(param RequestParam, meta *metadata, requestSignMap map[string][]string, bodyHash string) string { + var canonicalRequest string + if param.IsSignUrl { + queryList := make(url.Values) + for k, v := range requestSignMap { + for i := range v { + queryList.Set(k, v[i]) + } + } + canonicalRequest = concat("\n", param.Method, normuri(param.Path), normquery(queryList), "\n", meta.signedHeaders, bodyHash) + } else { + canonicalHeaders := getCanonicalHeaders(param, meta, requestSignMap) + canonicalRequest = concat("\n", param.Method, normuri(param.Path), normquery(param.QueryList), canonicalHeaders, meta.signedHeaders, bodyHash) + } + return hashSHA256([]byte(canonicalRequest)) +} + +func getCanonicalHeaders(param RequestParam, meta *metadata, requestSignMap map[string][]string) string { + signMap := make(map[string][]string) + signedHeaders := sortHeaders(requestSignMap, signMap) + if !param.IsSignUrl { + meta.signedHeaders = concat(";", 
signedHeaders...) + } + if param.Path == "" { + param.Path = "/" + } + var headersToSign string + for _, key := range signedHeaders { + value := strings.TrimSpace(signMap[key][0]) + if key == "host" { + if strings.Contains(value, ":") { + split := strings.Split(value, ":") + port := split[1] + if port == "80" || port == "443" { + value = split[0] + } + } + } + headersToSign += key + ":" + value + "\n" + } + return headersToSign +} + +func sortHeaders(requestSignMap map[string][]string, signMap map[string][]string) []string { + var sortedHeaderKeys []string + for k, v := range requestSignMap { + signMap[strings.ToLower(k)] = v + switch k { + case "Content-Type", "Content-Md5", "Host", "X-Security-Token": + default: + if !strings.HasPrefix(k, "X-") { + continue + } + } + sortedHeaderKeys = append(sortedHeaderKeys, strings.ToLower(k)) + } + sort.Strings(sortedHeaderKeys) + return sortedHeaderKeys +} + +func getMetaData(credentials Credentials, date string) *metadata { + meta := new(metadata) + meta.date, meta.service, meta.region, meta.signedHeaders, meta.algorithm = date, credentials.Service, credentials.Region, "", "HMAC-SHA256" + meta.credentialScope = concat("/", meta.date, meta.region, meta.service, "request") + return meta +} + +func signatureV4(signingKey []byte, stringToSign string) string { + return hex.EncodeToString(hmacSHA256(signingKey, stringToSign)) +} + +func signingKeyV4(secretKey, date, region, service string) []byte { + kDate := hmacSHA256([]byte(secretKey), date) + kRegion := hmacSHA256(kDate, region) + kService := hmacSHA256(kRegion, service) + kSigning := hmacSHA256(kService, "request") + return kSigning +} + +func buildAuthHeaderV4(signature string, meta *metadata, keys Credentials) string { + credential := keys.AccessKeyID + "/" + meta.credentialScope + + return meta.algorithm + + " Credential=" + credential + + ", SignedHeaders=" + meta.signedHeaders + + ", Signature=" + signature +} + +func timestampV4() string { + return 
now().Format(timeFormatV4) +} + +func appointTimestampV4(date time.Time) string { + return date.Format(timeFormatV4) +} +func tsDateV4(timestamp string) string { + return timestamp[:8] +} + +func hmacSHA256(key []byte, content string) []byte { + mac := hmac.New(sha256.New, key) + mac.Write([]byte(content)) + return mac.Sum(nil) +} + +func hashSHA256(content []byte) string { + h := sha256.New() + h.Write(content) + return fmt.Sprintf("%x", h.Sum(nil)) +} + +func hashMD5(content []byte) string { + h := md5.New() + h.Write(content) + return base64.StdEncoding.EncodeToString(h.Sum(nil)) +} + +func readAndReplaceBody(request *http.Request) []byte { + if request.Body == nil { + return []byte{} + } + payload, _ := ioutil.ReadAll(request.Body) + request.Body = ioutil.NopCloser(bytes.NewReader(payload)) + return payload +} + +func concat(delim string, str ...string) string { + return strings.Join(str, delim) +} + +var now = func() time.Time { + return time.Now().UTC() +} + +func normuri(uri string) string { + parts := strings.Split(uri, "/") + for i := range parts { + parts[i] = encodePathFrag(parts[i]) + } + return strings.Join(parts, "/") +} + +func encodePathFrag(s string) string { + hexCount := 0 + for i := 0; i < len(s); i++ { + c := s[i] + if shouldEscape(c) { + hexCount++ + } + } + t := make([]byte, len(s)+2*hexCount) + j := 0 + for i := 0; i < len(s); i++ { + c := s[i] + if shouldEscape(c) { + t[j] = '%' + t[j+1] = "0123456789ABCDEF"[c>>4] + t[j+2] = "0123456789ABCDEF"[c&15] + j += 3 + } else { + t[j] = c + j++ + } + } + return string(t) +} + +func shouldEscape(c byte) bool { + if 'a' <= c && c <= 'z' || 'A' <= c && c <= 'Z' { + return false + } + if '0' <= c && c <= '9' { + return false + } + if c == '-' || c == '_' || c == '.' 
|| c == '~' { + return false + } + return true +} + +func normquery(v url.Values) string { + queryString := v.Encode() + + return strings.Replace(queryString, "+", "%20", -1) +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volc-sdk-golang/base/utils.go b/cluster-autoscaler/cloudprovider/volcengine/volc-sdk-golang/base/utils.go new file mode 100644 index 000000000000..3665de0eab37 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volc-sdk-golang/base/utils.go @@ -0,0 +1,252 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. 
+*/ + +package base + +import ( + "crypto/md5" + "encoding/base64" + "encoding/hex" + "encoding/json" + "fmt" + "math/rand" + "net/http" + "net/url" + "reflect" + "strconv" + "strings" + "time" + + "github.com/google/uuid" +) + +var ( + letterRunes = []rune("abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ") + defaultRetryTimes uint64 = 2 + defaultRetryInterval = 1 * time.Second +) + +func init() { + rand.Seed(time.Now().Unix()) +} + +func createTempAKSK() (accessKeyId string, plainSk string, err error) { + if accessKeyId, err = generateAccessKeyId("AKTP"); err != nil { + return + } + + plainSk, err = generateSecretKey() + if err != nil { + return + } + return +} + +func generateAccessKeyId(prefix string) (string, error) { + uuid := uuid.New() + + uidBase64 := base64.StdEncoding.EncodeToString([]byte(strings.Replace(uuid.String(), "-", "", -1))) + + s := strings.Replace(uidBase64, "=", "", -1) + s = strings.Replace(s, "/", "", -1) + s = strings.Replace(s, "+", "", -1) + s = strings.Replace(s, "-", "", -1) + return prefix + s, nil +} + +func randStringRunes(n int) string { + b := make([]rune, n) + for i := range b { + b[i] = letterRunes[rand.Intn(len(letterRunes))] + } + return string(b) +} + +func generateSecretKey() (string, error) { + randString32 := randStringRunes(32) + return aesEncryptCBCWithBase64([]byte(randString32), []byte("bytedance-isgood")) +} + +func createInnerToken(credentials Credentials, sts *SecurityToken2, inlinePolicy *Policy, t int64) (*InnerToken, error) { + var err error + innerToken := new(InnerToken) + + innerToken.LTAccessKeyId = credentials.AccessKeyID + innerToken.AccessKeyId = sts.AccessKeyID + innerToken.ExpiredTime = t + + key := md5.Sum([]byte(credentials.SecretAccessKey)) + innerToken.SignedSecretAccessKey, err = aesEncryptCBCWithBase64([]byte(sts.SecretAccessKey), key[:]) + if err != nil { + return nil, err + } + + if inlinePolicy != nil { + b, _ := json.Marshal(inlinePolicy) + innerToken.PolicyString = string(b) + } + + 
signStr := fmt.Sprintf("%s|%s|%d|%s|%s", innerToken.LTAccessKeyId, innerToken.AccessKeyId, innerToken.ExpiredTime, innerToken.SignedSecretAccessKey, innerToken.PolicyString) + + innerToken.Signature = hex.EncodeToString(hmacSHA256(key[:], signStr)) + return innerToken, nil +} + +func getTimeout(serviceTimeout, apiTimeout time.Duration) time.Duration { + timeout := time.Second + if serviceTimeout != time.Duration(0) { + timeout = serviceTimeout + } + if apiTimeout != time.Duration(0) { + timeout = apiTimeout + } + return timeout +} + +func getRetrySetting(serviceRetrySettings, apiRetrySettings *RetrySettings) *RetrySettings { + retrySettings := &RetrySettings{ + AutoRetry: false, + RetryTimes: new(uint64), + RetryInterval: new(time.Duration), + } + if !apiRetrySettings.AutoRetry || !serviceRetrySettings.AutoRetry { + return retrySettings + } + retrySettings.AutoRetry = true + if serviceRetrySettings.RetryTimes != nil { + retrySettings.RetryTimes = serviceRetrySettings.RetryTimes + } else if apiRetrySettings.RetryTimes != nil { + retrySettings.RetryTimes = apiRetrySettings.RetryTimes + } else { + retrySettings.RetryTimes = &defaultRetryTimes + } + if serviceRetrySettings.RetryInterval != nil { + retrySettings.RetryInterval = serviceRetrySettings.RetryInterval + } else if apiRetrySettings.RetryInterval != nil { + retrySettings.RetryInterval = apiRetrySettings.RetryInterval + } else { + retrySettings.RetryInterval = &defaultRetryInterval + } + return retrySettings +} + +func mergeQuery(query1, query2 url.Values) (query url.Values) { + query = url.Values{} + if query1 != nil { + for k, vv := range query1 { + for _, v := range vv { + query.Add(k, v) + } + } + } + + if query2 != nil { + for k, vv := range query2 { + for _, v := range vv { + query.Add(k, v) + } + } + } + return +} + +func mergeHeader(header1, header2 http.Header) (header http.Header) { + header = http.Header{} + if header1 != nil { + for k, v := range header1 { + header.Set(k, strings.Join(v, ";")) + } + } 
+ if header2 != nil { + for k, v := range header2 { + header.Set(k, strings.Join(v, ";")) + } + } + + return +} + +func NewAllowStatement(actions, resources []string) *Statement { + sts := new(Statement) + sts.Effect = "Allow" + sts.Action = actions + sts.Resource = resources + + return sts +} + +func NewDenyStatement(actions, resources []string) *Statement { + sts := new(Statement) + sts.Effect = "Deny" + sts.Action = actions + sts.Resource = resources + + return sts +} + +func ToUrlValues(i interface{}) (values url.Values) { + values = url.Values{} + iVal := reflect.ValueOf(i).Elem() + typ := iVal.Type() + for i := 0; i < iVal.NumField(); i++ { + f := iVal.Field(i) + // You can use tags here... + // tag := typ.Field(i).Tag.Get("tagname") + // Convert each type into a string for the url.Values string map + var v string + switch f.Interface().(type) { + case int, int8, int16, int32, int64: + v = strconv.FormatInt(f.Int(), 10) + case uint, uint8, uint16, uint32, uint64: + v = strconv.FormatUint(f.Uint(), 10) + case float32: + v = strconv.FormatFloat(f.Float(), 'f', 4, 32) + case float64: + v = strconv.FormatFloat(f.Float(), 'f', 4, 64) + case []byte: + v = string(f.Bytes()) + case bool: + v = strconv.FormatBool(f.Bool()) + case string: + if f.Len() == 0 { + continue + } + v = f.String() + } + values.Set(typ.Field(i).Name, v) + } + return +} + +func UnmarshalResultInto(data []byte, result interface{}) error { + resp := new(CommonResponse) + if err := json.Unmarshal(data, resp); err != nil { + return fmt.Errorf("fail to unmarshal response, %v", err) + } + errObj := resp.ResponseMetadata.Error + if errObj != nil && errObj.CodeN != 0 { + return fmt.Errorf("request %s error %s", resp.ResponseMetadata.RequestId, errObj.Message) + } + + data, err := json.Marshal(resp.Result) + if err != nil { + return fmt.Errorf("fail to marshal result, %v", err) + } + if err = json.Unmarshal(data, result); err != nil { + return fmt.Errorf("fail to unmarshal result, %v", err) + } + return
nil +} diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/apis/config/v1beta2/zz_generated.deepcopy.go b/cluster-autoscaler/cloudprovider/volcengine/volc-sdk-golang/base/version.go similarity index 76% rename from cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/apis/config/v1beta2/zz_generated.deepcopy.go rename to cluster-autoscaler/cloudprovider/volcengine/volc-sdk-golang/base/version.go index e38909ba4a91..37ab6a63873b 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/apis/config/v1beta2/zz_generated.deepcopy.go +++ b/cluster-autoscaler/cloudprovider/volcengine/volc-sdk-golang/base/version.go @@ -1,8 +1,5 @@ -//go:build !ignore_autogenerated -// +build !ignore_autogenerated - /* -Copyright The Kubernetes Authors. +Copyright 2023 The Kubernetes Authors. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. @@ -17,6 +14,8 @@ See the License for the specific language governing permissions and limitations under the License. */ -// Code generated by deepcopy-gen. DO NOT EDIT. +package base + +const SDKName = "volc-sdk-golang" -package v1beta2 +const SDKVersion = "v1.0.82" diff --git a/cluster-autoscaler/cloudprovider/volcengine/volc-sdk-golang/service/sts/config.go b/cluster-autoscaler/cloudprovider/volcengine/volc-sdk-golang/service/sts/config.go new file mode 100644 index 000000000000..e1ef66244236 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volc-sdk-golang/service/sts/config.go @@ -0,0 +1,97 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. 
+You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package sts + +import ( + "net/http" + "net/url" + "time" + + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volc-sdk-golang/base" +) + +const ( + DefaultRegion = "cn-north-1" + ServiceVersion20180101 = "2018-01-01" + ServiceName = "sts" +) + +var ( + ServiceInfo = &base.ServiceInfo{ + Timeout: 5 * time.Second, + Host: "open.volcengineapi.com", + Header: http.Header{ + "Accept": []string{"application/json"}, + }, + } + + ApiInfoList = map[string]*base.ApiInfo{ + "AssumeRole": { + Method: http.MethodGet, + Path: "/", + Query: url.Values{ + "Action": []string{"AssumeRole"}, + "Version": []string{ServiceVersion20180101}, + }, + }, + } +) + +// DefaultInstance is the default STS instance +var DefaultInstance = NewInstance() + +// STS is a client for the STS service +type STS struct { + Client *base.Client +} + +// NewInstance creates a new STS instance +func NewInstance() *STS { + instance := &STS{} + instance.Client = base.NewClient(ServiceInfo, ApiInfoList) + instance.Client.ServiceInfo.Credentials.Service = ServiceName + instance.Client.ServiceInfo.Credentials.Region = DefaultRegion + return instance +} + +// GetServiceInfo interface +func (p *STS) GetServiceInfo() *base.ServiceInfo { + return p.Client.ServiceInfo +} + +// GetAPIInfo interface +func (p *STS) GetAPIInfo(api string) *base.ApiInfo { + if apiInfo, ok := ApiInfoList[api]; ok { + return apiInfo + } + return nil +} + +// SetRegion . +func (p *STS) SetRegion(region string) { + p.Client.ServiceInfo.Credentials.Region = region +} + +// SetHost . +func (p *STS) SetHost(host string) { + p.Client.ServiceInfo.Host = host +} + +// SetSchema .
+func (p *STS) SetSchema(schema string) { + p.Client.ServiceInfo.Scheme = schema +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volc-sdk-golang/service/sts/model.go b/cluster-autoscaler/cloudprovider/volcengine/volc-sdk-golang/service/sts/model.go new file mode 100644 index 000000000000..2c3a8422702d --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volc-sdk-golang/service/sts/model.go @@ -0,0 +1,52 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. 
+*/ + +package sts + +import "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volc-sdk-golang/base" + +// AssumeRole +type AssumeRoleResp struct { + ResponseMetadata base.ResponseMetadata + Result *AssumeRoleResult `json:",omitempty"` +} + +type AssumeRoleResult struct { + Credentials *Credentials + AssumedRoleUser *AssumeRoleUser +} + +type AssumeRoleRequest struct { + DurationSeconds int + Policy string + RoleTrn string + RoleSessionName string +} + +type AssumeRoleUser struct { + Trn string + AssumedRoleId string +} + +type Credentials struct { + CurrentTime string + ExpiredTime string + AccessKeyId string + SecretAccessKey string + SessionToken string +} + +// AssumeRole diff --git a/cluster-autoscaler/cloudprovider/volcengine/volc-sdk-golang/service/sts/wrapper.go b/cluster-autoscaler/cloudprovider/volcengine/volc-sdk-golang/service/sts/wrapper.go new file mode 100644 index 000000000000..a83d9243e543 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volc-sdk-golang/service/sts/wrapper.go @@ -0,0 +1,57 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. 
+*/ + +package sts + +import ( + "encoding/json" + "net/url" + "strconv" +) + +func (p *STS) commonHandler(api string, query url.Values, resp interface{}) (int, error) { + respBody, statusCode, err := p.Client.Query(api, query) + if err != nil { + return statusCode, err + } + + if err := json.Unmarshal(respBody, resp); err != nil { + return statusCode, err + } + return statusCode, nil +} + +func (p *STS) AssumeRole(req *AssumeRoleRequest) (*AssumeRoleResp, int, error) { + query := url.Values{} + resp := new(AssumeRoleResp) + + if req.DurationSeconds > 0 { + query.Set("DurationSeconds", strconv.Itoa(req.DurationSeconds)) + } + + if req.Policy != "" { + query.Set("Policy", req.Policy) + } + + query.Set("RoleTrn", req.RoleTrn) + query.Set("RoleSessionName", req.RoleSessionName) + + statusCode, err := p.commonHandler("AssumeRole", query, resp) + if err != nil { + return nil, statusCode, err + } + return resp, statusCode, nil +} diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/cel/library/libraries.go b/cluster-autoscaler/cloudprovider/volcengine/volc-sdk-golang/service/sts/wrapper_test.go similarity index 50% rename from cluster-autoscaler/vendor/k8s.io/apiserver/pkg/cel/library/libraries.go rename to cluster-autoscaler/cloudprovider/volcengine/volc-sdk-golang/service/sts/wrapper_test.go index e2e8fc29bd16..47e0a46339ac 100644 --- a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/cel/library/libraries.go +++ b/cluster-autoscaler/cloudprovider/volcengine/volc-sdk-golang/service/sts/wrapper_test.go @@ -1,5 +1,5 @@ /* -Copyright 2022 The Kubernetes Authors. +Copyright 2023 The Kubernetes Authors. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. @@ -14,22 +14,32 @@ See the License for the specific language governing permissions and limitations under the License. 
*/ -package library +package sts import ( - "github.com/google/cel-go/cel" - "github.com/google/cel-go/ext" - "github.com/google/cel-go/interpreter" + "encoding/json" + "fmt" + "testing" ) -// ExtensionLibs declares the set of CEL extension libraries available everywhere CEL is used in Kubernetes. -var ExtensionLibs = append(k8sExtensionLibs, ext.Strings()) +const ( + testAk = "testAK" + testSk = "testSK" +) -var k8sExtensionLibs = []cel.EnvOption{ - URLs(), - Regex(), - Lists(), - Authz(), +func TestIAM_AssumeRole(t *testing.T) { + DefaultInstance.Client.SetAccessKey(testAk) + DefaultInstance.Client.SetSecretKey(testSk) + + req := &AssumeRoleRequest{ + DurationSeconds: 7200, + Policy: "", + RoleTrn: "testRoleTrn", + RoleSessionName: "test", + } + + list, status, err := DefaultInstance.AssumeRole(req) + fmt.Println(status, err) + b, _ := json.Marshal(list) + fmt.Println(string(b)) } - -var ExtensionLibRegexOptimizations = []*interpreter.RegexOptimization{FindRegexOptimization, FindAllRegexOptimization} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/internal/ini/ast.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/internal/ini/ast.go new file mode 100644 index 000000000000..1fcee0c636fe --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/internal/ini/ast.go @@ -0,0 +1,139 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. 
+*/ + +package ini + +// Copy from https://github.com/aws/aws-sdk-go +// May have been modified by Beijing Volcanoengine Technology Ltd. + +// ASTKind represents different states in the parse table +// and the type of AST that is being constructed +type ASTKind int + +// ASTKind* is used in the parse table to transition between +// the different states +const ( + ASTKindNone = ASTKind(iota) + ASTKindStart + ASTKindExpr + ASTKindEqualExpr + ASTKindStatement + ASTKindSkipStatement + ASTKindExprStatement + ASTKindSectionStatement + ASTKindNestedSectionStatement + ASTKindCompletedNestedSectionStatement + ASTKindCommentStatement + ASTKindCompletedSectionStatement +) + +func (k ASTKind) String() string { + switch k { + case ASTKindNone: + return "none" + case ASTKindStart: + return "start" + case ASTKindExpr: + return "expr" + case ASTKindStatement: + return "stmt" + case ASTKindSectionStatement: + return "section_stmt" + case ASTKindExprStatement: + return "expr_stmt" + case ASTKindCommentStatement: + return "comment" + case ASTKindNestedSectionStatement: + return "nested_section_stmt" + case ASTKindCompletedSectionStatement: + return "completed_stmt" + case ASTKindSkipStatement: + return "skip" + default: + return "" + } +} + +// AST interface allows us to determine what kind of node we +// are on and casting may not need to be necessary. +// +// The root is always the first node in Children +type AST struct { + Kind ASTKind + Root Token + RootToken bool + Children []AST +} + +func newAST(kind ASTKind, root AST, children ...AST) AST { + return AST{ + Kind: kind, + Children: append([]AST{root}, children...), + } +} + +func newASTWithRootToken(kind ASTKind, root Token, children ...AST) AST { + return AST{ + Kind: kind, + Root: root, + RootToken: true, + Children: children, + } +} + +// AppendChild will append to the list of children an AST has. 
+func (a *AST) AppendChild(child AST) { + a.Children = append(a.Children, child) +} + +// GetRoot will return the root AST which can be the first entry +// in the children list or a token. +func (a *AST) GetRoot() AST { + if a.RootToken { + return *a + } + + if len(a.Children) == 0 { + return AST{} + } + + return a.Children[0] +} + +// GetChildren will return the current AST's list of children +func (a *AST) GetChildren() []AST { + if len(a.Children) == 0 { + return []AST{} + } + + if a.RootToken { + return a.Children + } + + return a.Children[1:] +} + +// SetChildren will set and override all children of the AST. +func (a *AST) SetChildren(children []AST) { + if a.RootToken { + a.Children = children + } else { + a.Children = append(a.Children[:1], children...) + } +} + +// Start is used to indicate the starting state of the parse table. +var Start = newAST(ASTKindStart, AST{}) diff --git a/cluster-autoscaler/vendor/k8s.io/kube-scheduler/config/v1beta2/doc.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/internal/ini/comma_token.go similarity index 62% rename from cluster-autoscaler/vendor/k8s.io/kube-scheduler/config/v1beta2/doc.go rename to cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/internal/ini/comma_token.go index c9f5f62ed8ee..573650e9adac 100644 --- a/cluster-autoscaler/vendor/k8s.io/kube-scheduler/config/v1beta2/doc.go +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/internal/ini/comma_token.go @@ -1,5 +1,5 @@ /* -Copyright 2021 The Kubernetes Authors. +Copyright 2023 The Kubernetes Authors. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. @@ -14,8 +14,17 @@ See the License for the specific language governing permissions and limitations under the License. 
*/ -// +k8s:deepcopy-gen=package -// +k8s:openapi-gen=true -// +groupName=kubescheduler.config.k8s.io +package ini -package v1beta2 // import "k8s.io/kube-scheduler/config/v1beta2" +// Copy from https://github.com/aws/aws-sdk-go +// May have been modified by Beijing Volcanoengine Technology Ltd. + +var commaRunes = []rune(",") + +func isComma(b rune) bool { + return b == ',' +} + +func newCommaToken() Token { + return newToken(TokenComma, commaRunes, NoneType) +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/internal/ini/comment_token.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/internal/ini/comment_token.go new file mode 100644 index 000000000000..f6fe914b2bd7 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/internal/ini/comment_token.go @@ -0,0 +1,54 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package ini + +// Copy from https://github.com/aws/aws-sdk-go +// May have been modified by Beijing Volcanoengine Technology Ltd. + +// isComment will return whether or not the next byte(s) is a +// comment. +func isComment(b []rune) bool { + if len(b) == 0 { + return false + } + + switch b[0] { + case ';': + return true + case '#': + return true + } + + return false +} + +// newCommentToken will create a comment token and +// return how many bytes were read. 
+func newCommentToken(b []rune) (Token, int, error) { + i := 0 + for ; i < len(b); i++ { + if b[i] == '\n' { + break + } + + if len(b)-i > 2 && b[i] == '\r' && b[i+1] == '\n' { + break + } + } + + return newToken(TokenComment, b[:i], NoneType), i, nil +} diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/apis/config/v1alpha1/zz_generated.deepcopy.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/internal/ini/empty_token.go similarity index 66% rename from cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/apis/config/v1alpha1/zz_generated.deepcopy.go rename to cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/internal/ini/empty_token.go index 61f6555edfc5..5c2f1794d1f8 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/apis/config/v1alpha1/zz_generated.deepcopy.go +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/internal/ini/empty_token.go @@ -1,8 +1,5 @@ -//go:build !ignore_autogenerated -// +build !ignore_autogenerated - /* -Copyright The Kubernetes Authors. +Copyright 2023 The Kubernetes Authors. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. @@ -17,6 +14,10 @@ See the License for the specific language governing permissions and limitations under the License. */ -// Code generated by deepcopy-gen. DO NOT EDIT. +package ini + +// Copy from https://github.com/aws/aws-sdk-go +// May have been modified by Beijing Volcanoengine Technology Ltd. 
-package v1alpha1 +// emptyToken is used to satisfy the Token interface +var emptyToken = newToken(TokenNone, []rune{}, NoneType) diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/internal/ini/expression.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/internal/ini/expression.go new file mode 100644 index 000000000000..0b905c53b6e2 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/internal/ini/expression.go @@ -0,0 +1,43 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package ini + +// Copy from https://github.com/aws/aws-sdk-go +// May have been modified by Beijing Volcanoengine Technology Ltd. + +// newExpression will return an expression AST. 
+// Expr represents an expression +// +// grammar: +// expr -> string | number +func newExpression(tok Token) AST { + return newASTWithRootToken(ASTKindExpr, tok) +} + +func newEqualExpr(left AST, tok Token) AST { + return newASTWithRootToken(ASTKindEqualExpr, tok, left) +} + +// EqualExprKey will return a LHS value in the equal expr +func EqualExprKey(ast AST) string { + children := ast.GetChildren() + if len(children) == 0 || ast.Kind != ASTKindEqualExpr { + return "" + } + + return string(children[0].Root.Raw()) +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/internal/ini/fuzz.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/internal/ini/fuzz.go new file mode 100644 index 000000000000..4983fcdf2563 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/internal/ini/fuzz.go @@ -0,0 +1,37 @@ +//go:build gofuzz +// +build gofuzz + +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package ini + +// Copy from https://github.com/aws/aws-sdk-go +// May have been modified by Beijing Volcanoengine Technology Ltd. 
+ +import ( + "bytes" +) + +func Fuzz(data []byte) int { + b := bytes.NewReader(data) + + if _, err := Parse(b); err != nil { + return 0 + } + + return 1 +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/internal/ini/ini.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/internal/ini/ini.go new file mode 100644 index 000000000000..a1b01ed4390d --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/internal/ini/ini.go @@ -0,0 +1,70 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package ini + +// Copy from https://github.com/aws/aws-sdk-go +// May have been modified by Beijing Volcanoengine Technology Ltd. + +import ( + "io" + "os" + + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineerr" +) + +// OpenFile takes a path to a given file, and will open and parse +// that file. +func OpenFile(path string) (Sections, error) { + f, err := os.Open(path) + if err != nil { + return Sections{}, volcengineerr.New(ErrCodeUnableToReadFile, "unable to open file", err) + } + defer f.Close() + + return Parse(f) +} + +// Parse will parse the given file using the shared config +// visitor. 
+func Parse(f io.Reader) (Sections, error) { + tree, err := ParseAST(f) + if err != nil { + return Sections{}, err + } + + v := NewDefaultVisitor() + if err = Walk(tree, v); err != nil { + return Sections{}, err + } + + return v.Sections, nil +} + +// ParseBytes will parse the given bytes and return the parsed sections. +func ParseBytes(b []byte) (Sections, error) { + tree, err := ParseASTBytes(b) + if err != nil { + return Sections{}, err + } + + v := NewDefaultVisitor() + if err = Walk(tree, v); err != nil { + return Sections{}, err + } + + return v.Sections, nil +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/internal/ini/ini_lexer.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/internal/ini/ini_lexer.go new file mode 100644 index 000000000000..ce1c04877f74 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/internal/ini/ini_lexer.go @@ -0,0 +1,184 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package ini + +// Copy from https://github.com/aws/aws-sdk-go +// May have been modified by Beijing Volcanoengine Technology Ltd. + +import ( + "bytes" + "io" + "io/ioutil" + + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineerr" +) + +const ( + // ErrCodeUnableToReadFile is used when a file is failed to be + // opened or read from. 
+ ErrCodeUnableToReadFile = "FailedRead" +) + +// TokenType represents the various different tokens types +type TokenType int + +func (t TokenType) String() string { + switch t { + case TokenNone: + return "none" + case TokenLit: + return "literal" + case TokenSep: + return "sep" + case TokenOp: + return "op" + case TokenWS: + return "ws" + case TokenNL: + return "newline" + case TokenComment: + return "comment" + case TokenComma: + return "comma" + default: + return "" + } +} + +// TokenType enums +const ( + TokenNone = TokenType(iota) + TokenLit + TokenSep + TokenComma + TokenOp + TokenWS + TokenNL + TokenComment +) + +type iniLexer struct{} + +// Tokenize will return a list of tokens during lexical analysis of the +// io.Reader. +func (l *iniLexer) Tokenize(r io.Reader) ([]Token, error) { + b, err := ioutil.ReadAll(r) + if err != nil { + return nil, volcengineerr.New(ErrCodeUnableToReadFile, "unable to read file", err) + } + + return l.tokenize(b) +} + +func (l *iniLexer) tokenize(b []byte) ([]Token, error) { + runes := bytes.Runes(b) + var err error + n := 0 + tokenAmount := countTokens(runes) + tokens := make([]Token, tokenAmount) + count := 0 + + for len(runes) > 0 && count < tokenAmount { + switch { + case isWhitespace(runes[0]): + tokens[count], n, err = newWSToken(runes) + case isComma(runes[0]): + tokens[count], n = newCommaToken(), 1 + case isComment(runes): + tokens[count], n, err = newCommentToken(runes) + case isNewline(runes): + tokens[count], n, err = newNewlineToken(runes) + case isSep(runes): + tokens[count], n, err = newSepToken(runes) + case isOp(runes): + tokens[count], n, err = newOpToken(runes) + default: + tokens[count], n, err = newLitToken(runes) + } + + if err != nil { + return nil, err + } + + count++ + + runes = runes[n:] + } + + return tokens[:count], nil +} + +func countTokens(runes []rune) int { + count, n := 0, 0 + var err error + + for len(runes) > 0 { + switch { + case isWhitespace(runes[0]): + _, n, err = newWSToken(runes) + case 
isComma(runes[0]): + _, n = newCommaToken(), 1 + case isComment(runes): + _, n, err = newCommentToken(runes) + case isNewline(runes): + _, n, err = newNewlineToken(runes) + case isSep(runes): + _, n, err = newSepToken(runes) + case isOp(runes): + _, n, err = newOpToken(runes) + default: + _, n, err = newLitToken(runes) + } + + if err != nil { + return 0 + } + + count++ + runes = runes[n:] + } + + return count + 1 +} + +// Token holds metadata about a given value. +type Token struct { + t TokenType + ValueType ValueType + base int + raw []rune +} + +var emptyValue = Value{} + +func newToken(t TokenType, raw []rune, v ValueType) Token { + return Token{ + t: t, + raw: raw, + ValueType: v, + } +} + +// Raw returns the raw runes that were consumed +func (tok Token) Raw() []rune { + return tok.raw +} + +// Type returns the token type +func (tok Token) Type() TokenType { + return tok.t +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/internal/ini/ini_parser.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/internal/ini/ini_parser.go new file mode 100644 index 000000000000..e5339b769a95 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/internal/ini/ini_parser.go @@ -0,0 +1,368 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License.
+ +import ( + "fmt" + "io" +) + +// State enums for the parse table +const ( + InvalidState = iota + // StatementState stmt -> value stmt' + StatementState + // StatementPrimeState stmt' -> MarkComplete | op stmt + StatementPrimeState + // ValueState value -> number | string | boolean | quoted_string + ValueState + // OpenScopeState section -> [ section' + OpenScopeState + // SectionState section' -> value section_close + SectionState + // CloseScopeState section_close -> ] + CloseScopeState + // SkipState will skip (NL WS)+ + SkipState + // SkipTokenState will skip any token and push the previous + // state onto the stack. + SkipTokenState + // CommentState comment -> # comment' | ; comment' + // comment' -> MarkComplete | value + CommentState + // MarkCompleteState MarkComplete state will complete statements and move that + // to the completed AST list + MarkCompleteState + // TerminalState signifies that the tokens have been fully parsed + TerminalState +) + +// parseTable is a state machine to dictate the grammar above. 
+var parseTable = map[ASTKind]map[TokenType]int{ + ASTKindStart: { + TokenLit: StatementState, + TokenSep: OpenScopeState, + TokenWS: SkipTokenState, + TokenNL: SkipTokenState, + TokenComment: CommentState, + TokenNone: TerminalState, + }, + ASTKindCommentStatement: { + TokenLit: StatementState, + TokenSep: OpenScopeState, + TokenWS: SkipTokenState, + TokenNL: SkipTokenState, + TokenComment: CommentState, + TokenNone: MarkCompleteState, + }, + ASTKindExpr: { + TokenOp: StatementPrimeState, + TokenLit: ValueState, + TokenSep: OpenScopeState, + TokenWS: ValueState, + TokenNL: SkipState, + TokenComment: CommentState, + TokenNone: MarkCompleteState, + }, + ASTKindEqualExpr: { + TokenLit: ValueState, + TokenWS: SkipTokenState, + TokenNL: SkipState, + }, + ASTKindStatement: { + TokenLit: SectionState, + TokenSep: CloseScopeState, + TokenWS: SkipTokenState, + TokenNL: SkipTokenState, + TokenComment: CommentState, + TokenNone: MarkCompleteState, + }, + ASTKindExprStatement: { + TokenLit: ValueState, + TokenSep: OpenScopeState, + TokenOp: ValueState, + TokenWS: ValueState, + TokenNL: MarkCompleteState, + TokenComment: CommentState, + TokenNone: TerminalState, + TokenComma: SkipState, + }, + ASTKindSectionStatement: { + TokenLit: SectionState, + TokenOp: SectionState, + TokenSep: CloseScopeState, + TokenWS: SectionState, + TokenNL: SkipTokenState, + }, + ASTKindCompletedSectionStatement: { + TokenWS: SkipTokenState, + TokenNL: SkipTokenState, + TokenLit: StatementState, + TokenSep: OpenScopeState, + TokenComment: CommentState, + TokenNone: MarkCompleteState, + }, + ASTKindSkipStatement: { + TokenLit: StatementState, + TokenSep: OpenScopeState, + TokenWS: SkipTokenState, + TokenNL: SkipTokenState, + TokenComment: CommentState, + TokenNone: TerminalState, + }, +} + +// ParseAST will parse input from an io.Reader using +// an LL(1) parser. 
+func ParseAST(r io.Reader) ([]AST, error) {
+	lexer := iniLexer{}
+	tokens, err := lexer.Tokenize(r)
+	if err != nil {
+		return []AST{}, err
+	}
+
+	return parse(tokens)
+}
+
+// ParseASTBytes will parse input from a byte slice using
+// an LL(1) parser.
+func ParseASTBytes(b []byte) ([]AST, error) {
+	lexer := iniLexer{}
+	tokens, err := lexer.tokenize(b)
+	if err != nil {
+		return []AST{}, err
+	}
+
+	return parse(tokens)
+}
+
+func parse(tokens []Token) ([]AST, error) {
+	start := Start
+	stack := newParseStack(3, len(tokens))
+
+	stack.Push(start)
+	s := newSkipper()
+
+loop:
+	for stack.Len() > 0 {
+		k := stack.Pop()
+
+		var tok Token
+		if len(tokens) == 0 {
+			// this occurs when all the tokens have been processed
+			// but reduction of what's left on the stack needs to
+			// occur.
+			tok = emptyToken
+		} else {
+			tok = tokens[0]
+		}
+
+		step := parseTable[k.Kind][tok.Type()]
+		if s.ShouldSkip(tok) {
+			// being in a skip state with no tokens will break out of
+			// the parse loop since there is nothing left to process.
+			if len(tokens) == 0 {
+				break loop
+			}
+
+			step = SkipTokenState
+		}
+
+		switch step {
+		case TerminalState:
+			// Finished parsing. Push what should be the last
+			// statement to the stack. If there is anything left
+			// on the stack, an error in parsing has occurred.
+			if k.Kind != ASTKindStart {
+				stack.MarkComplete(k)
+			}
+			break loop
+		case SkipTokenState:
+			// When skipping a token, the previous state was popped off the stack.
+			// To maintain the correct state, the previous state will be pushed
+			// onto the stack.
+			stack.Push(k)
+		case StatementState:
+			if k.Kind != ASTKindStart {
+				stack.MarkComplete(k)
+			}
+			expr := newExpression(tok)
+			stack.Push(expr)
+		case StatementPrimeState:
+			if tok.Type() != TokenOp {
+				stack.MarkComplete(k)
+				continue
+			}
+
+			if k.Kind != ASTKindExpr {
+				return nil, NewParseError(
+					fmt.Sprintf("invalid expression: expected Expr type, but found %T type", k),
+				)
+			}
+
+			k = trimSpaces(k)
+			expr := newEqualExpr(k, tok)
+			stack.Push(expr)
+		case ValueState:
+			// ValueState requires the previous state to either be an equal expression
+			// or an expression statement.
+			//
+			// This grammar occurs when the RHS is a number, word, or quoted string.
+			// equal_expr -> lit op equal_expr'
+			// equal_expr' -> number | string | quoted_string
+			// quoted_string -> " quoted_string'
+			// quoted_string' -> string quoted_string_end
+			// quoted_string_end -> "
+			//
+			// otherwise
+			// expr_stmt -> equal_expr (expr_stmt')*
+			// expr_stmt' -> ws S | op S | MarkComplete
+			// S -> equal_expr' expr_stmt'
+			switch k.Kind {
+			case ASTKindEqualExpr:
+				// assigning a value to some key
+				k.AppendChild(newExpression(tok))
+				stack.Push(newExprStatement(k))
+			case ASTKindExpr:
+				k.Root.raw = append(k.Root.raw, tok.Raw()...)
+				stack.Push(k)
+			case ASTKindExprStatement:
+				root := k.GetRoot()
+				children := root.GetChildren()
+				if len(children) == 0 {
+					return nil, NewParseError(
+						fmt.Sprintf("invalid expression: AST contains no children %s", k.Kind),
+					)
+				}
+
+				rhs := children[len(children)-1]
+
+				if rhs.Root.ValueType != QuotedStringType {
+					rhs.Root.ValueType = StringType
+					rhs.Root.raw = append(rhs.Root.raw, tok.Raw()...)
+
+				}
+
+				children[len(children)-1] = rhs
+				k.SetChildren(children)
+
+				stack.Push(k)
+			}
+		case OpenScopeState:
+			if !runeCompare(tok.Raw(), openBrace) {
+				return nil, NewParseError("expected '['")
+			}
+
+			stmt := newStatement()
+			stack.Push(stmt)
+		case CloseScopeState:
+			if !runeCompare(tok.Raw(), closeBrace) {
+				return nil, NewParseError("expected ']'")
+			}
+
+			k = trimSpaces(k)
+			stack.Push(newCompletedSectionStatement(k))
+		case SectionState:
+			var stmt AST
+
+			switch k.Kind {
+			case ASTKindStatement:
+				// If there are multiple literals inside of a scope declaration,
+				// then the current token's raw value will be appended to the Name.
+				//
+				// This handles cases like [ profile default ]
+				//
+				// k will represent a SectionStatement with the children representing
+				// the label of the section
+				stmt = newSectionStatement(tok)
+			case ASTKindSectionStatement:
+				k.Root.raw = append(k.Root.raw, tok.Raw()...)
+				stmt = k
+			default:
+				return nil, NewParseError(
+					fmt.Sprintf("invalid statement: expected statement: %v", k.Kind),
+				)
+			}
+
+			stack.Push(stmt)
+		case MarkCompleteState:
+			if k.Kind != ASTKindStart {
+				stack.MarkComplete(k)
+			}
+
+			if stack.Len() == 0 {
+				stack.Push(start)
+			}
+		case SkipState:
+			stack.Push(newSkipStatement(k))
+			s.Skip()
+		case CommentState:
+			if k.Kind == ASTKindStart {
+				stack.Push(k)
+			} else {
+				stack.MarkComplete(k)
+			}
+
+			stmt := newCommentStatement(tok)
+			stack.Push(stmt)
+		default:
+			return nil, NewParseError(
+				fmt.Sprintf("invalid state with ASTKind %v and TokenType %v",
+					k, tok.Type()))
+		}
+
+		if len(tokens) > 0 {
+			tokens = tokens[1:]
+		}
+	}
+
+	// this occurs when a statement has not been completed
+	if stack.top > 1 {
+		return nil, NewParseError("incomplete ini expression")
+	}
+
+	// returns a sublist which excludes the start symbol
+	return stack.List(), nil
+}
+
+// trimSpaces will trim spaces on the left and right hand side of
+// the literal.
+func trimSpaces(k AST) AST {
+	// trim left hand side of spaces
+	for i := 0; i < len(k.Root.raw); i++ {
+		if !isWhitespace(k.Root.raw[i]) {
+			break
+		}
+
+		k.Root.raw = k.Root.raw[1:]
+		i--
+	}
+
+	// trim right hand side of spaces
+	for i := len(k.Root.raw) - 1; i >= 0; i-- {
+		if !isWhitespace(k.Root.raw[i]) {
+			break
+		}
+
+		k.Root.raw = k.Root.raw[:len(k.Root.raw)-1]
+	}
+
+	return k
+}
diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/internal/ini/literal_tokens.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/internal/ini/literal_tokens.go
new file mode 100644
index 000000000000..58977e9d1271
--- /dev/null
+++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/internal/ini/literal_tokens.go
@@ -0,0 +1,343 @@
+/*
+Copyright 2023 The Kubernetes Authors.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package ini
+
+// Copy from https://github.com/aws/aws-sdk-go
+// May have been modified by Beijing Volcanoengine Technology Ltd.
+
+import (
+	"fmt"
+	"strconv"
+	"strings"
+)
+
+var (
+	runesTrue  = []rune("true")
+	runesFalse = []rune("false")
+)
+
+var literalValues = [][]rune{
+	runesTrue,
+	runesFalse,
+}
+
+func isBoolValue(b []rune) bool {
+	for _, lv := range literalValues {
+		if isLitValue(lv, b) {
+			return true
+		}
+	}
+	return false
+}
+
+func isLitValue(want, have []rune) bool {
+	if len(have) < len(want) {
+		return false
+	}
+
+	for i := 0; i < len(want); i++ {
+		if want[i] != have[i] {
+			return false
+		}
+	}
+
+	return true
+}
+
+// isNumberValue will return whether or not the leading characters in
+// a byte slice are a number. A number is delimited by whitespace or
+// the newline token.
+//
+// A number is defined to be in a binary, octal, decimal (int | float), hex format,
+// or in scientific notation.
+func isNumberValue(b []rune) bool {
+	negativeIndex := 0
+	helper := numberHelper{}
+	needDigit := false
+
+	for i := 0; i < len(b); i++ {
+		negativeIndex++
+
+		switch b[i] {
+		case '-':
+			if helper.IsNegative() || negativeIndex != 1 {
+				return false
+			}
+			helper.Determine(b[i])
+			needDigit = true
+			continue
+		case 'e', 'E':
+			if err := helper.Determine(b[i]); err != nil {
+				return false
+			}
+			negativeIndex = 0
+			needDigit = true
+			continue
+		case 'b':
+			if helper.numberFormat == hex {
+				break
+			}
+			fallthrough
+		case 'o', 'x':
+			needDigit = true
+			if i == 0 {
+				return false
+			}
+
+			fallthrough
+		case '.':
+			if err := helper.Determine(b[i]); err != nil {
+				return false
+			}
+			needDigit = true
+			continue
+		}
+
+		if i > 0 && (isNewline(b[i:]) || isWhitespace(b[i])) {
+			return !needDigit
+		}
+
+		if !helper.CorrectByte(b[i]) {
+			return false
+		}
+		needDigit = false
+	}
+
+	return !needDigit
+}
+
+func isValid(b []rune) (bool, int, error) {
+	if len(b) == 0 {
+		// TODO: should probably return an error
+		return false, 0, nil
+	}
+
+	return isValidRune(b[0]), 1, nil
+}
+
+func isValidRune(r rune) bool {
+	return r != ':' && r != '=' && r != '[' && r != ']' && r != ' ' && r != '\n'
+}
+
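The boolean and literal helpers above do prefix matching: `isLitValue` only checks that `have` begins with `want`, and the caller is responsible for deciding where the token ends. A minimal standalone sketch (re-implemented here for illustration, since this `ini` package is internal and not importable) shows that behavior:

```go
package main

import "fmt"

// Standalone copies of the helpers above, for illustration only.
var literalValues = [][]rune{[]rune("true"), []rune("false")}

// isLitValue reports whether have begins with want.
func isLitValue(want, have []rune) bool {
	if len(have) < len(want) {
		return false
	}
	for i := 0; i < len(want); i++ {
		if want[i] != have[i] {
			return false
		}
	}
	return true
}

// isBoolValue reports whether the input starts with a boolean literal.
func isBoolValue(b []rune) bool {
	for _, lv := range literalValues {
		if isLitValue(lv, b) {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(isBoolValue([]rune("true")))    // exact match
	fmt.Println(isBoolValue([]rune("false\n"))) // prefix match; trailing input ignored
	fmt.Println(isBoolValue([]rune("tru")))     // too short to match either literal
}
```

Because the match is prefix-based, `getBoolValue` (defined later in this patch) returns the matched length so the lexer can advance past exactly the literal.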
+// ValueType is an enum that will signify what type
+// the Value is
+type ValueType int
+
+func (v ValueType) String() string {
+	switch v {
+	case NoneType:
+		return "NONE"
+	case DecimalType:
+		return "FLOAT"
+	case IntegerType:
+		return "INT"
+	case StringType:
+		return "STRING"
+	case QuotedStringType:
+		return "QUOTED STRING"
+	case BoolType:
+		return "BOOL"
+	}
+
+	return ""
+}
+
+// ValueType enums
+const (
+	NoneType = ValueType(iota)
+	DecimalType
+	IntegerType
+	StringType
+	QuotedStringType
+	BoolType
+)
+
+// Value is a union container
+type Value struct {
+	Type ValueType
+	raw  []rune
+
+	integer int64
+	decimal float64
+	boolean bool
+	str     string
+}
+
+func newValue(t ValueType, base int, raw []rune) (Value, error) {
+	v := Value{
+		Type: t,
+		raw:  raw,
+	}
+	var err error
+
+	switch t {
+	case DecimalType:
+		v.decimal, err = strconv.ParseFloat(string(raw), 64)
+	case IntegerType:
+		if base != 10 {
+			raw = raw[2:]
+		}
+
+		v.integer, err = strconv.ParseInt(string(raw), base, 64)
+	case StringType:
+		v.str = string(raw)
+	case QuotedStringType:
+		v.str = string(raw[1 : len(raw)-1])
+	case BoolType:
+		v.boolean = runeCompare(v.raw, runesTrue)
+	}
+
+	// issue 2253
+	//
+	// if the value trying to be parsed is too large, then we will use
+	// the 'StringType' and raw value instead.
+	if nerr, ok := err.(*strconv.NumError); ok && nerr.Err == strconv.ErrRange {
+		v.Type = StringType
+		v.str = string(raw)
+		err = nil
+	}
+
+	return v, err
+}
+
+// Append will append values and change the type to a string
+// type.
+func (v *Value) Append(tok Token) {
+	r := tok.Raw()
+	if v.Type != QuotedStringType {
+		v.Type = StringType
+		r = tok.raw[1 : len(tok.raw)-1]
+	}
+	if tok.Type() != TokenLit {
+		v.raw = append(v.raw, tok.Raw()...)
+	} else {
+		v.raw = append(v.raw, r...)
+	}
+}
+
+func (v Value) String() string {
+	switch v.Type {
+	case DecimalType:
+		return fmt.Sprintf("decimal: %f", v.decimal)
+	case IntegerType:
+		return fmt.Sprintf("integer: %d", v.integer)
+	case StringType:
+		return fmt.Sprintf("string: %s", string(v.raw))
+	case QuotedStringType:
+		return fmt.Sprintf("quoted string: %s", string(v.raw))
+	case BoolType:
+		return fmt.Sprintf("bool: %t", v.boolean)
+	default:
+		return "union not set"
+	}
+}
+
+func newLitToken(b []rune) (Token, int, error) {
+	n := 0
+	var err error
+
+	token := Token{}
+	if b[0] == '"' {
+		n, err = getStringValue(b)
+		if err != nil {
+			return token, n, err
+		}
+
+		token = newToken(TokenLit, b[:n], QuotedStringType)
+	} else if isNumberValue(b) {
+		var base int
+		base, n, err = getNumericalValue(b)
+		if err != nil {
+			return token, 0, err
+		}
+
+		value := b[:n]
+		vType := IntegerType
+		if contains(value, '.') || hasExponent(value) {
+			vType = DecimalType
+		}
+		token = newToken(TokenLit, value, vType)
+		token.base = base
+	} else if isBoolValue(b) {
+		n, err = getBoolValue(b)
+
+		token = newToken(TokenLit, b[:n], BoolType)
+	} else {
+		n, err = getValue(b)
+		token = newToken(TokenLit, b[:n], StringType)
+	}
+
+	return token, n, err
+}
+
+// IntValue returns an integer value
+func (v Value) IntValue() int64 {
+	return v.integer
+}
+
+// FloatValue returns a float value
+func (v Value) FloatValue() float64 {
+	return v.decimal
+}
+
+// BoolValue returns a bool value
+func (v Value) BoolValue() bool {
+	return v.boolean
+}
+
+func isTrimmable(r rune) bool {
+	switch r {
+	case '\n', ' ':
+		return true
+	}
+	return false
+}
+
+// StringValue returns the string value
+func (v Value) StringValue() string {
+	switch v.Type {
+	case StringType:
+		return strings.TrimFunc(string(v.raw), isTrimmable)
+	case QuotedStringType:
+		// preserve all characters in the quotes
+		return string(removeEscapedCharacters(v.raw[1 : len(v.raw)-1]))
+	default:
+		return strings.TrimFunc(string(v.raw), isTrimmable)
+	}
+}
+
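`StringValue` gives quoted and unquoted values different trimming semantics: unquoted values lose leading/trailing spaces and newlines, while quoted values keep everything between the quotes. A minimal standalone sketch of just that behavior (escape handling via `removeEscapedCharacters` is deliberately omitted here, and `stringValue` is a hypothetical helper, not the package's API):

```go
package main

import (
	"fmt"
	"strings"
)

// isTrimmable mirrors the helper above: only newlines and spaces
// are stripped from unquoted values.
func isTrimmable(r rune) bool {
	return r == '\n' || r == ' '
}

// stringValue sketches Value.StringValue for the two common cases:
// quoted values keep their inner characters verbatim, unquoted
// values are trimmed on both ends. Escape sequences are not handled.
func stringValue(raw string, quoted bool) string {
	if quoted {
		// drop only the surrounding quote characters
		return raw[1 : len(raw)-1]
	}
	return strings.TrimFunc(raw, isTrimmable)
}

func main() {
	fmt.Printf("%q\n", stringValue("  us-east-1 \n", false)) // "us-east-1"
	fmt.Printf("%q\n", stringValue(`" padded "`, true))      // " padded "
}
```

This is why `region = us-east-1 ` and `region = " us-east-1 "` parse to different strings from the same profile file.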
+func contains(runes []rune, c rune) bool {
+	for i := 0; i < len(runes); i++ {
+		if runes[i] == c {
+			return true
+		}
+	}
+
+	return false
+}
+
+func runeCompare(v1 []rune, v2 []rune) bool {
+	if len(v1) != len(v2) {
+		return false
+	}
+
+	for i := 0; i < len(v1); i++ {
+		if v1[i] != v2[i] {
+			return false
+		}
+	}
+
+	return true
+}
diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/internal/ini/newline_token.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/internal/ini/newline_token.go
new file mode 100644
index 000000000000..1d5eec124a66
--- /dev/null
+++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/internal/ini/newline_token.go
@@ -0,0 +1,49 @@
+/*
+Copyright 2023 The Kubernetes Authors.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package ini
+
+// Copy from https://github.com/aws/aws-sdk-go
+// May have been modified by Beijing Volcanoengine Technology Ltd.
+
+func isNewline(b []rune) bool {
+	if len(b) == 0 {
+		return false
+	}
+
+	if b[0] == '\n' {
+		return true
+	}
+
+	if len(b) < 2 {
+		return false
+	}
+
+	return b[0] == '\r' && b[1] == '\n'
+}
+
+func newNewlineToken(b []rune) (Token, int, error) {
+	i := 1
+	if b[0] == '\r' && isNewline(b[1:]) {
+		i++
+	}
+
+	if !isNewline(b[:i]) {
+		return emptyToken, 0, NewParseError("invalid new line token")
+	}
+
+	return newToken(TokenNL, b[:i], NoneType), i, nil
+}
diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/internal/ini/number_helper.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/internal/ini/number_helper.go
new file mode 100644
index 000000000000..d4552c56fdef
--- /dev/null
+++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/internal/ini/number_helper.go
@@ -0,0 +1,171 @@
+/*
+Copyright 2023 The Kubernetes Authors.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package ini
+
+// Copy from https://github.com/aws/aws-sdk-go
+// May have been modified by Beijing Volcanoengine Technology Ltd.
+
+import (
+	"bytes"
+	"fmt"
+	"strconv"
+)
+
+const (
+	none = numberFormat(iota)
+	binary
+	octal
+	decimal
+	hex
+	exponent
+)
+
+type numberFormat int
+
+// numberHelper is used to dictate what format a number is in
+// and what to do for negative values. Since -1e-4 is a valid
+// number, we cannot just simply check for duplicate negatives.
+type numberHelper struct {
+	numberFormat numberFormat
+
+	negative         bool
+	negativeExponent bool
+}
+
+func (b numberHelper) Exists() bool {
+	return b.numberFormat != none
+}
+
+func (b numberHelper) IsNegative() bool {
+	return b.negative || b.negativeExponent
+}
+
+func (b *numberHelper) Determine(c rune) error {
+	if b.Exists() {
+		return NewParseError(fmt.Sprintf("multiple number formats: 0%v", string(c)))
+	}
+
+	switch c {
+	case 'b':
+		b.numberFormat = binary
+	case 'o':
+		b.numberFormat = octal
+	case 'x':
+		b.numberFormat = hex
+	case 'e', 'E':
+		b.numberFormat = exponent
+	case '-':
+		if b.numberFormat != exponent {
+			b.negative = true
+		} else {
+			b.negativeExponent = true
+		}
+	case '.':
+		b.numberFormat = decimal
+	default:
+		return NewParseError(fmt.Sprintf("invalid number character: %v", string(c)))
+	}
+
+	return nil
+}
+
+func (b numberHelper) CorrectByte(c rune) bool {
+	switch {
+	case b.numberFormat == binary:
+		if !isBinaryByte(c) {
+			return false
+		}
+	case b.numberFormat == octal:
+		if !isOctalByte(c) {
+			return false
+		}
+	case b.numberFormat == hex:
+		if !isHexByte(c) {
+			return false
+		}
+	case b.numberFormat == decimal:
+		if !isDigit(c) {
+			return false
+		}
+	case b.numberFormat == exponent:
+		if !isDigit(c) {
+			return false
+		}
+	case b.negativeExponent:
+		if !isDigit(c) {
+			return false
+		}
+	case b.negative:
+		if !isDigit(c) {
+			return false
+		}
+	default:
+		if !isDigit(c) {
+			return false
+		}
+	}
+
+	return true
+}
+
+func (b numberHelper) Base() int {
+	switch b.numberFormat {
+	case binary:
+		return 2
+	case octal:
+		return 8
+	case hex:
+		return 16
+	default:
+		return 10
+	}
+}
+
+func (b numberHelper) String() string {
+	buf := bytes.Buffer{}
+	i := 0
+
+	switch b.numberFormat {
+	case binary:
+		i++
+		buf.WriteString(strconv.Itoa(i) + ": binary format\n")
+	case octal:
+		i++
+		buf.WriteString(strconv.Itoa(i) + ": octal format\n")
+	case hex:
+		i++
+		buf.WriteString(strconv.Itoa(i) + ": hex format\n")
+	case exponent:
+		i++
+		buf.WriteString(strconv.Itoa(i) + ": exponent format\n")
+	default:
+		i++
+		buf.WriteString(strconv.Itoa(i) + ": integer format\n")
+	}
+
+	if b.negative {
+		i++
+		buf.WriteString(strconv.Itoa(i) + ": negative format\n")
+	}
+
+	if b.negativeExponent {
+		i++
+		buf.WriteString(strconv.Itoa(i) + ": negative exponent format\n")
+	}
+
+	return buf.String()
+}
diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/internal/ini/op_tokens.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/internal/ini/op_tokens.go
new file mode 100644
index 000000000000..9b026d62feb1
--- /dev/null
+++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/internal/ini/op_tokens.go
@@ -0,0 +1,58 @@
+/*
+Copyright 2023 The Kubernetes Authors.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package ini
+
+// Copy from https://github.com/aws/aws-sdk-go
+// May have been modified by Beijing Volcanoengine Technology Ltd.
+
+import (
+	"fmt"
+)
+
+var (
+	equalOp      = []rune("=")
+	equalColonOp = []rune(":")
+)
+
+func isOp(b []rune) bool {
+	if len(b) == 0 {
+		return false
+	}
+
+	switch b[0] {
+	case '=':
+		return true
+	case ':':
+		return true
+	default:
+		return false
+	}
+}
+
+func newOpToken(b []rune) (Token, int, error) {
+	tok := Token{}
+
+	switch b[0] {
+	case '=':
+		tok = newToken(TokenOp, equalOp, NoneType)
+	case ':':
+		tok = newToken(TokenOp, equalColonOp, NoneType)
+	default:
+		return tok, 0, NewParseError(fmt.Sprintf("unexpected op type, %v", b[0]))
+	}
+	return tok, 1, nil
+}
diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/internal/ini/parse_error.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/internal/ini/parse_error.go
new file mode 100644
index 000000000000..a3a34a1dcaa1
--- /dev/null
+++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/internal/ini/parse_error.go
@@ -0,0 +1,62 @@
+/*
+Copyright 2023 The Kubernetes Authors.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package ini
+
+// Copy from https://github.com/aws/aws-sdk-go
+// May have been modified by Beijing Volcanoengine Technology Ltd.
+
+import "fmt"
+
+const (
+	// ErrCodeParseError is returned when a parsing error
+	// has occurred.
+	ErrCodeParseError = "INIParseError"
+)
+
+// ParseError is an error which is returned during any part of
+// the parsing process.
+type ParseError struct {
+	msg string
+}
+
+// NewParseError will return a new ParseError where message
+// is the description of the error.
+func NewParseError(message string) *ParseError {
+	return &ParseError{
+		msg: message,
+	}
+}
+
+// Code will return the ErrCodeParseError
+func (err *ParseError) Code() string {
+	return ErrCodeParseError
+}
+
+// Message returns the error's message
+func (err *ParseError) Message() string {
+	return err.msg
+}
+
+// OrigError returns nothing since there will never be any
+// original error.
+func (err *ParseError) OrigError() error {
+	return nil
+}
+
+func (err *ParseError) Error() string {
+	return fmt.Sprintf("%s: %s", err.Code(), err.Message())
+}
diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/internal/ini/parse_stack.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/internal/ini/parse_stack.go
new file mode 100644
index 000000000000..2c7071b7b689
--- /dev/null
+++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/internal/ini/parse_stack.go
@@ -0,0 +1,79 @@
+/*
+Copyright 2023 The Kubernetes Authors.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package ini
+
+// Copy from https://github.com/aws/aws-sdk-go
+// May have been modified by Beijing Volcanoengine Technology Ltd.
+
+import (
+	"bytes"
+	"fmt"
+)
+
+// ParseStack is a stack that contains a container, the stack portion,
+// and the list which is the list of ASTs that have been successfully
+// parsed.
+type ParseStack struct {
+	top       int
+	container []AST
+	list      []AST
+	index     int
+}
+
+func newParseStack(sizeContainer, sizeList int) ParseStack {
+	return ParseStack{
+		container: make([]AST, sizeContainer),
+		list:      make([]AST, sizeList),
+	}
+}
+
+// Pop will return and truncate the last container element.
+func (s *ParseStack) Pop() AST {
+	s.top--
+	return s.container[s.top]
+}
+
+// Push will add the new AST to the container
+func (s *ParseStack) Push(ast AST) {
+	s.container[s.top] = ast
+	s.top++
+}
+
+// MarkComplete will append the AST to the list of completed statements
+func (s *ParseStack) MarkComplete(ast AST) {
+	s.list[s.index] = ast
+	s.index++
+}
+
+// List will return the completed statements
+func (s ParseStack) List() []AST {
+	return s.list[:s.index]
+}
+
+// Len will return the length of the container
+func (s *ParseStack) Len() int {
+	return s.top
+}
+
+func (s ParseStack) String() string {
+	buf := bytes.Buffer{}
+	for i, node := range s.list {
+		buf.WriteString(fmt.Sprintf("%d: %v\n", i+1, node))
+	}
+
+	return buf.String()
+}
diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/internal/ini/sep_tokens.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/internal/ini/sep_tokens.go
new file mode 100644
index 000000000000..221a68789e55
--- /dev/null
+++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/internal/ini/sep_tokens.go
@@ -0,0 +1,60 @@
+/*
+Copyright 2023 The Kubernetes Authors.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package ini
+
+// Copy from https://github.com/aws/aws-sdk-go
+// May have been modified by Beijing Volcanoengine Technology Ltd.
+
+import (
+	"fmt"
+)
+
+var (
+	emptyRunes = []rune{}
+)
+
+func isSep(b []rune) bool {
+	if len(b) == 0 {
+		return false
+	}
+
+	switch b[0] {
+	case '[', ']':
+		return true
+	default:
+		return false
+	}
+}
+
+var (
+	openBrace  = []rune("[")
+	closeBrace = []rune("]")
+)
+
+func newSepToken(b []rune) (Token, int, error) {
+	tok := Token{}
+
+	switch b[0] {
+	case '[':
+		tok = newToken(TokenSep, openBrace, NoneType)
+	case ']':
+		tok = newToken(TokenSep, closeBrace, NoneType)
+	default:
+		return tok, 0, NewParseError(fmt.Sprintf("unexpected sep type, %v", b[0]))
+	}
+	return tok, 1, nil
+}
diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/internal/ini/skipper.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/internal/ini/skipper.go
new file mode 100644
index 000000000000..eb08e0167c9c
--- /dev/null
+++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/internal/ini/skipper.go
@@ -0,0 +1,64 @@
+/*
+Copyright 2023 The Kubernetes Authors.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package ini
+
+// Copy from https://github.com/aws/aws-sdk-go
+// May have been modified by Beijing Volcanoengine Technology Ltd.
+
+// skipper is used to skip certain blocks of an ini file.
+// Currently skipper is used to skip nested blocks of ini
+// files. See example below
+//
+//	[ foo ]
+//	nested = ; this section will be skipped
+//		a=b
+//		c=d
+//	bar=baz ; this will be included
+type skipper struct {
+	shouldSkip bool
+	TokenSet   bool
+	prevTok    Token
+}
+
+func newSkipper() skipper {
+	return skipper{
+		prevTok: emptyToken,
+	}
+}
+
+func (s *skipper) ShouldSkip(tok Token) bool {
+	if s.shouldSkip &&
+		s.prevTok.Type() == TokenNL &&
+		tok.Type() != TokenWS {
+
+		s.Continue()
+		return false
+	}
+	s.prevTok = tok
+
+	return s.shouldSkip
+}
+
+func (s *skipper) Skip() {
+	s.shouldSkip = true
+	s.prevTok = emptyToken
+}
+
+func (s *skipper) Continue() {
+	s.shouldSkip = false
+	s.prevTok = emptyToken
+}
diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/internal/ini/statement.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/internal/ini/statement.go
new file mode 100644
index 000000000000..14ae44003574
--- /dev/null
+++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/internal/ini/statement.go
@@ -0,0 +1,54 @@
+/*
+Copyright 2023 The Kubernetes Authors.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package ini
+
+// Copy from https://github.com/aws/aws-sdk-go
+// May have been modified by Beijing Volcanoengine Technology Ltd.
+
+// Statement is an empty AST mostly used for transitioning states.
+func newStatement() AST {
+	return newAST(ASTKindStatement, AST{})
+}
+
+// SectionStatement represents a section AST
+func newSectionStatement(tok Token) AST {
+	return newASTWithRootToken(ASTKindSectionStatement, tok)
+}
+
+// ExprStatement represents a completed expression AST
+func newExprStatement(ast AST) AST {
+	return newAST(ASTKindExprStatement, ast)
+}
+
+// CommentStatement represents a comment in the ini definition.
+//
+// grammar:
+// comment -> #comment' | ;comment'
+// comment' -> epsilon | value
+func newCommentStatement(tok Token) AST {
+	return newAST(ASTKindCommentStatement, newExpression(tok))
+}
+
+// CompletedSectionStatement represents a completed section
+func newCompletedSectionStatement(ast AST) AST {
+	return newAST(ASTKindCompletedSectionStatement, ast)
+}
+
+// SkipStatement is used to skip whole statements
+func newSkipStatement(ast AST) AST {
+	return newAST(ASTKindSkipStatement, ast)
+}
diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/internal/ini/value_util.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/internal/ini/value_util.go
new file mode 100644
index 000000000000..306dce6b019c
--- /dev/null
+++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/internal/ini/value_util.go
@@ -0,0 +1,303 @@
+/*
+Copyright 2023 The Kubernetes Authors.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package ini
+
+// Copy from https://github.com/aws/aws-sdk-go
+// May have been modified by Beijing Volcanoengine Technology Ltd.
+
+import (
+	"fmt"
+)
+
+// getStringValue will return a quoted string and the number
+// of runes read
+//
+// an error will be returned if the string is not properly formatted
+func getStringValue(b []rune) (int, error) {
+	if b[0] != '"' {
+		return 0, NewParseError("strings must start with '\"'")
+	}
+
+	endQuote := false
+	i := 1
+
+	for ; i < len(b) && !endQuote; i++ {
+		if escaped := isEscaped(b[:i], b[i]); b[i] == '"' && !escaped {
+			endQuote = true
+			break
+		} else if escaped {
+			/*c, err := getEscapedByte(b[i])
+			if err != nil {
+				return 0, err
+			}
+
+			b[i-1] = c
+			b = append(b[:i], b[i+1:]...)
+			i--*/
+
+			continue
+		}
+	}
+
+	if !endQuote {
+		return 0, NewParseError("missing '\"' in string value")
+	}
+
+	return i + 1, nil
+}
+
+// getBoolValue will return a boolean and the number
+// of runes read
+//
+// an error will be returned if the boolean is not of a correct
+// value
+func getBoolValue(b []rune) (int, error) {
+	if len(b) < 4 {
+		return 0, NewParseError("invalid boolean value")
+	}
+
+	n := 0
+	for _, lv := range literalValues {
+		if len(lv) > len(b) {
+			continue
+		}
+
+		if isLitValue(lv, b) {
+			n = len(lv)
+		}
+	}
+
+	if n == 0 {
+		return 0, NewParseError("invalid boolean value")
+	}
+
+	return n, nil
+}
+
+// getNumericalValue will return a numerical string, the number
+// of runes read, and the base of the number
+//
+// an error will be returned if the number is not of a correct
+// value
+func getNumericalValue(b []rune) (int, int, error) {
+	if !isDigit(b[0]) {
+		return 0, 0, NewParseError("invalid digit value")
+	}
+
+	i := 0
+	helper := numberHelper{}
+
+loop:
+	for negativeIndex := 0; i < len(b); i++ {
+		negativeIndex++
+
+		if !isDigit(b[i]) {
+			switch b[i] {
+			case '-':
+				if helper.IsNegative() || negativeIndex != 1 {
+					return 0, 0, NewParseError("parse error '-'")
+				}
+
+				n := getNegativeNumber(b[i:])
+				i += (n - 1)
+				helper.Determine(b[i])
+				continue
+			case '.':
+				if err := helper.Determine(b[i]); err != nil {
+					return 0, 0, err
+				}
+			case 'e', 'E':
+				if err := helper.Determine(b[i]); err != nil {
+					return 0, 0, err
+				}
+
+				negativeIndex = 0
+			case 'b':
+				if helper.numberFormat == hex {
+					break
+				}
+				fallthrough
+			case 'o', 'x':
+				if i == 0 && b[i] != '0' {
+					return 0, 0, NewParseError("incorrect base format, expected leading '0'")
+				}
+
+				if i != 1 {
+					return 0, 0, NewParseError(fmt.Sprintf("incorrect base format found %s at %d index", string(b[i]), i))
+				}
+
+				if err := helper.Determine(b[i]); err != nil {
+					return 0, 0, err
+				}
+			default:
+				if isWhitespace(b[i]) {
+					break loop
+				}
+
+				if isNewline(b[i:]) {
+					break loop
+				}
+
+				if !(helper.numberFormat == hex && isHexByte(b[i])) {
+					if i+2 < len(b) && !isNewline(b[i:i+2]) {
+						return 0, 0, NewParseError("invalid numerical character")
+					} else if !isNewline([]rune{b[i]}) {
+						return 0, 0, NewParseError("invalid numerical character")
+					}
+
+					break loop
+				}
+			}
+		}
+	}
+
+	return helper.Base(), i, nil
+}
+
+// isDigit will return whether or not something is a digit
+func isDigit(b rune) bool {
+	return b >= '0' && b <= '9'
+}
+
+func hasExponent(v []rune) bool {
+	return contains(v, 'e') || contains(v, 'E')
+}
+
+func isBinaryByte(b rune) bool {
+	switch b {
+	case '0', '1':
+		return true
+	default:
+		return false
+	}
+}
+
+func isOctalByte(b rune) bool {
+	switch b {
+	case '0', '1', '2', '3', '4', '5', '6', '7':
+		return true
+	default:
+		return false
+	}
+}
+
+func isHexByte(b rune) bool {
+	if isDigit(b) {
+		return true
+	}
+	return (b >= 'A' && b <= 'F') ||
+		(b >= 'a' && b <= 'f')
+}
+
+func getValue(b []rune) (int, error) {
+	i := 0
+
+	for i < len(b) {
+		if isNewline(b[i:]) {
+			break
+		}
+
+		if isOp(b[i:]) {
+			break
+		}
+
+		valid, n, err := isValid(b[i:])
+		if err != nil {
+			return 0, err
+		}
+
+		if !valid {
+			break
+		}
+
+		i += n
+	}
+
+	return i, nil
+}
+
+// getNegativeNumber
will return a negative number from a
+// byte slice. This will iterate through all characters until
+// a non-digit has been found.
+func getNegativeNumber(b []rune) int {
+	if b[0] != '-' {
+		return 0
+	}
+
+	i := 1
+	for ; i < len(b); i++ {
+		if !isDigit(b[i]) {
+			return i
+		}
+	}
+
+	return i
+}
+
+// isEscaped will return whether or not the character is an escaped
+// character.
+func isEscaped(value []rune, b rune) bool {
+	if len(value) == 0 {
+		return false
+	}
+
+	switch b {
+	case '\'': // single quote
+	case '"': // quote
+	case 'n': // newline
+	case 't': // tab
+	case '\\': // backslash
+	default:
+		return false
+	}
+
+	return value[len(value)-1] == '\\'
+}
+
+func getEscapedByte(b rune) (rune, error) {
+	switch b {
+	case '\'': // single quote
+		return '\'', nil
+	case '"': // quote
+		return '"', nil
+	case 'n': // newline
+		return '\n', nil
+	case 't': // tab
+		return '\t', nil
+	case '\\': // backslash
+		return '\\', nil
+	default:
+		return b, NewParseError(fmt.Sprintf("invalid escaped character %c", b))
+	}
+}
+
+func removeEscapedCharacters(b []rune) []rune {
+	for i := 0; i < len(b); i++ {
+		if isEscaped(b[:i], b[i]) {
+			c, err := getEscapedByte(b[i])
+			if err != nil {
+				return b
+			}
+
+			b[i-1] = c
+			b = append(b[:i], b[i+1:]...)
+			i--
+		}
+	}
+
+	return b
+}
diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/internal/ini/visitor.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/internal/ini/visitor.go
new file mode 100644
index 000000000000..aee203fbcda0
--- /dev/null
+++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/internal/ini/visitor.go
@@ -0,0 +1,185 @@
+/*
+Copyright 2023 The Kubernetes Authors.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package ini
+
+// Copy from https://github.com/aws/aws-sdk-go
+// May have been modified by Beijing Volcanoengine Technology Ltd.
+
+import (
+	"fmt"
+	"sort"
+)
+
+// Visitor is an interface used by walkers that will
+// traverse an array of ASTs.
+type Visitor interface {
+	VisitExpr(AST) error
+	VisitStatement(AST) error
+}
+
+// DefaultVisitor is used to visit statements and expressions
+// and ensure that they are both of the correct format.
+// In addition, upon visiting this will build sections and populate
+// the Sections field which can be used to retrieve profile
+// configuration.
+type DefaultVisitor struct {
+	scope    string
+	Sections Sections
+}
+
+// NewDefaultVisitor returns a DefaultVisitor
+func NewDefaultVisitor() *DefaultVisitor {
+	return &DefaultVisitor{
+		Sections: Sections{
+			container: map[string]Section{},
+		},
+	}
+}
+
+// VisitExpr visits expressions...
+func (v *DefaultVisitor) VisitExpr(expr AST) error { + t := v.Sections.container[v.scope] + if t.values == nil { + t.values = values{} + } + + switch expr.Kind { + case ASTKindExprStatement: + opExpr := expr.GetRoot() + switch opExpr.Kind { + case ASTKindEqualExpr: + children := opExpr.GetChildren() + if len(children) <= 1 { + return NewParseError("unexpected token type") + } + + rhs := children[1] + + if rhs.Root.Type() != TokenLit { + return NewParseError("unexpected token type") + } + + key := EqualExprKey(opExpr) + v, err := newValue(rhs.Root.ValueType, rhs.Root.base, rhs.Root.Raw()) + if err != nil { + return err + } + + t.values[key] = v + default: + return NewParseError(fmt.Sprintf("unsupported expression %v", expr)) + } + default: + return NewParseError(fmt.Sprintf("unsupported expression %v", expr)) + } + + v.Sections.container[v.scope] = t + return nil +} + +// VisitStatement visits statements... +func (v *DefaultVisitor) VisitStatement(stmt AST) error { + switch stmt.Kind { + case ASTKindCompletedSectionStatement: + child := stmt.GetRoot() + if child.Kind != ASTKindSectionStatement { + return NewParseError(fmt.Sprintf("unsupported child statement: %T", child)) + } + + name := string(child.Root.Raw()) + v.Sections.container[name] = Section{} + v.scope = name + default: + return NewParseError(fmt.Sprintf("unsupported statement: %s", stmt.Kind)) + } + + return nil +} + +// Sections is a map of Section structures that represent +// a configuration. +type Sections struct { + container map[string]Section +} + +// GetSection will return section p. If section p does not exist, +// false will be returned in the second parameter. +func (t Sections) GetSection(p string) (Section, bool) { + v, ok := t.container[p] + return v, ok +} + +// values represents a map of union values. +type values map[string]Value + +// List will return a list of all sections that were successfully +// parsed. 
+func (t Sections) List() []string {
+	keys := make([]string, len(t.container))
+	i := 0
+	for k := range t.container {
+		keys[i] = k
+		i++
+	}
+
+	sort.Strings(keys)
+	return keys
+}
+
+// Section contains a name and values. This represents
+// a sectioned entry in a configuration file.
+type Section struct {
+	Name   string
+	values values
+}
+
+// Has will return whether or not an entry exists in a given section
+func (t Section) Has(k string) bool {
+	_, ok := t.values[k]
+	return ok
+}
+
+// ValueType will return what type the union is set to. If
+// k was not found, the NoneType will be returned.
+func (t Section) ValueType(k string) (ValueType, bool) {
+	v, ok := t.values[k]
+	return v.Type, ok
+}
+
+// Bool returns a bool value at k
+func (t Section) Bool(k string) bool {
+	return t.values[k].BoolValue()
+}
+
+// Int returns an integer value at k
+func (t Section) Int(k string) int64 {
+	return t.values[k].IntValue()
+}
+
+// Float64 returns a float value at k
+func (t Section) Float64(k string) float64 {
+	return t.values[k].FloatValue()
+}
+
+// String returns the string value at k
+func (t Section) String(k string) string {
+	_, ok := t.values[k]
+	if !ok {
+		return ""
+	}
+	return t.values[k].StringValue()
+}
diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/internal/ini/walker.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/internal/ini/walker.go
new file mode 100644
index 000000000000..dc1a4de08a91
--- /dev/null
+++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/internal/ini/walker.go
@@ -0,0 +1,44 @@
+/*
+Copyright 2023 The Kubernetes Authors.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package ini + +// Copy from https://github.com/aws/aws-sdk-go +// May have been modified by Beijing Volcanoengine Technology Ltd. + +// Walk will traverse the AST using the v, the Visitor. +func Walk(tree []AST, v Visitor) error { + for _, node := range tree { + switch node.Kind { + case ASTKindExpr, + ASTKindExprStatement: + + if err := v.VisitExpr(node); err != nil { + return err + } + case ASTKindStatement, + ASTKindCompletedSectionStatement, + ASTKindNestedSectionStatement, + ASTKindCompletedNestedSectionStatement: + + if err := v.VisitStatement(node); err != nil { + return err + } + } + } + + return nil +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/internal/ini/ws_token.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/internal/ini/ws_token.go new file mode 100644 index 000000000000..c891f775aecb --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/internal/ini/ws_token.go @@ -0,0 +1,43 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. 
+*/
+
+package ini
+
+// Copy from https://github.com/aws/aws-sdk-go
+// May have been modified by Beijing Volcanoengine Technology Ltd.
+
+import (
+	"unicode"
+)
+
+// isWhitespace will return whether or not the character is
+// a whitespace character.
+//
+// Whitespace is any Unicode space character except newline and
+// carriage return, which are tokenized separately.
+func isWhitespace(c rune) bool {
+	return unicode.IsSpace(c) && c != '\n' && c != '\r'
+}
+
+func newWSToken(b []rune) (Token, int, error) {
+	i := 0
+	for ; i < len(b); i++ {
+		if !isWhitespace(b[i]) {
+			break
+		}
+	}
+
+	return newToken(TokenWS, b[:i], NoneType), i, nil
+}
diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/internal/sdkio/byte.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/internal/sdkio/byte.go
new file mode 100644
index 000000000000..f21e56003338
--- /dev/null
+++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/internal/sdkio/byte.go
@@ -0,0 +1,39 @@
+/*
+Copyright 2023 The Kubernetes Authors.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package sdkio
+
+// Copy from https://github.com/aws/aws-sdk-go
+// May have been modified by Beijing Volcanoengine Technology Ltd.
+
+import "io"
+
+const (
+	// Byte is 8 bits
+	Byte int64 = 1
+	// KbByte (KiB) is 1024 Bytes
+	KbByte = Byte * 1024
+	// MbByte (MiB) is 1024 KiB
+	MbByte = KbByte * 1024
+	// GbByte (GiB) is 1024 MiB
+	GbByte = MbByte * 1024
+)
+
+const (
+	SeekStart   = io.SeekStart   // seek relative to the origin of the file
+	SeekCurrent = io.SeekCurrent // seek relative to the current offset
+	SeekEnd     = io.SeekEnd     // seek relative to the end
+)
diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/internal/sdkmath/floor_go.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/internal/sdkmath/floor_go.go
new file mode 100644
index 000000000000..3e8cad9a96f1
--- /dev/null
+++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/internal/sdkmath/floor_go.go
@@ -0,0 +1,74 @@
+/*
+Copyright 2023 The Kubernetes Authors.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package sdkmath

+import "math"
+
+// Copy from https://github.com/aws/aws-sdk-go
+// May have been modified by Beijing Volcanoengine Technology Ltd.
+
+// Copied from the Go standard library's (Go 1.12) math/floor.go for use in
+// Go versions prior to Go 1.10.
+const (
+	uvone    = 0x3FF0000000000000
+	mask     = 0x7FF
+	shift    = 64 - 11 - 1
+	bias     = 1023
+	signMask = 1 << 63
+	fracMask = 1<<shift - 1
+)
+
+// Round returns the nearest integer, rounding half away from zero.
+//
+// Special cases are:
+//	Round(±0) = ±0
+//	Round(±Inf) = ±Inf
+//	Round(NaN) = NaN
+func Round(x float64) float64 {
+	// Round is a faster implementation of:
+	//
+	// func Round(x float64) float64 {
+	//   t := Trunc(x)
+	//   if Abs(x-t) >= 0.5 {
+	//     return t + Copysign(1, x)
+	//   }
+	//   return t
+	// }
+	bits := math.Float64bits(x)
+	e := uint(bits>>shift) & mask
+	if e < bias {
+		// Round abs(x) < 1 including denormals.
+ bits &= signMask // +-0 + if e == bias-1 { + bits |= uvone // +-1 + } + } else if e < bias+shift { + // Round any abs(x) >= 1 containing a fractional component [0,1). + // + // Numbers with larger exponents are returned unchanged since they + // must be either an integer, infinity, or NaN. + const half = 1 << (shift - 1) + e -= bias + bits += half >> e + bits &^= fracMask >> e + } + return math.Float64frombits(bits) +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/internal/sdkrand/locked_source.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/internal/sdkrand/locked_source.go new file mode 100644 index 000000000000..85939be609e6 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/internal/sdkrand/locked_source.go @@ -0,0 +1,48 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package sdkrand + +// Copy from https://github.com/aws/aws-sdk-go +// May have been modified by Beijing Volcanoengine Technology Ltd. 
+ +import ( + "math/rand" + "sync" + "time" +) + +// lockedSource is a thread-safe implementation of rand.Source +type lockedSource struct { + lk sync.Mutex + src rand.Source +} + +func (r *lockedSource) Int63() (n int64) { + r.lk.Lock() + n = r.src.Int63() + r.lk.Unlock() + return +} + +func (r *lockedSource) Seed(seed int64) { + r.lk.Lock() + r.src.Seed(seed) + r.lk.Unlock() +} + +// SeededRand is a new RNG using a thread safe implementation of rand.Source +var SeededRand = rand.New(&lockedSource{src: rand.NewSource(time.Now().UnixNano())}) diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/cmd/kube-proxy/app/init_others.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/internal/sdkrand/read.go similarity index 67% rename from cluster-autoscaler/vendor/k8s.io/kubernetes/cmd/kube-proxy/app/init_others.go rename to cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/internal/sdkrand/read.go index cee69f1a0eb2..50803495f1e7 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/cmd/kube-proxy/app/init_others.go +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/internal/sdkrand/read.go @@ -1,8 +1,5 @@ -//go:build !windows -// +build !windows - /* -Copyright 2018 The Kubernetes Authors. +Copyright 2023 The Kubernetes Authors. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. @@ -17,15 +14,13 @@ See the License for the specific language governing permissions and limitations under the License. */ -package app +package sdkrand -import ( - "github.com/spf13/pflag" -) +// Copy from https://github.com/aws/aws-sdk-go +// May have been modified by Beijing Volcanoengine Technology Ltd. 
-func initForOS(service bool) error { - return nil -} +import "math/rand" -func (o *Options) addOSFlags(fs *pflag.FlagSet) { +func Read(r *rand.Rand, p []byte) (int, error) { + return r.Read(p) } diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/internal/shareddefaults/shared_config.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/internal/shareddefaults/shared_config.go new file mode 100644 index 000000000000..80b094678800 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/internal/shareddefaults/shared_config.go @@ -0,0 +1,59 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package shareddefaults + +// Copy from https://github.com/aws/aws-sdk-go +// May have been modified by Beijing Volcanoengine Technology Ltd. + +import ( + "os" + "path/filepath" + "runtime" +) + +// SharedCredentialsFilename returns the SDK's default file path +// for the shared credentials file. +// +// Builds the shared config file path based on the OS's platform. +// +// - Linux/Unix: $HOME/.volcengine/credentials +// - Windows: %USERPROFILE%\.volcengine\credentials +func SharedCredentialsFilename() string { + return filepath.Join(UserHomeDir(), ".volcengine", "credentials") +} + +// SharedConfigFilename returns the SDK's default file path for +// the shared config file. +// +// Builds the shared config file path based on the OS's platform. 
+// +// - Linux/Unix: $HOME/.volcengine/config +// - Windows: %USERPROFILE%\.volcengine\config +func SharedConfigFilename() string { + return filepath.Join(UserHomeDir(), ".volcengine", "config") +} + +// UserHomeDir returns the home directory for the user the process is +// running under. +func UserHomeDir() string { + if runtime.GOOS == "windows" { // Windows + return os.Getenv("USERPROFILE") + } + + // *nix + return os.Getenv("HOME") +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/private/protocol/host.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/private/protocol/host.go new file mode 100644 index 000000000000..71504d32b931 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/private/protocol/host.go @@ -0,0 +1,87 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package protocol + +// Copy from https://github.com/aws/aws-sdk-go +// May have been modified by Beijing Volcanoengine Technology Ltd. + +import ( + "strings" + + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request" +) + +// ValidateEndpointHostHandler is a request handler that will validate the +// request endpoint's hosts is a valid RFC 3986 host. 
+var ValidateEndpointHostHandler = request.NamedHandler{ + Name: "volcenginesdk.protocol.ValidateEndpointHostHandler", + Fn: func(r *request.Request) { + err := ValidateEndpointHost(r.Operation.Name, r.HTTPRequest.URL.Host) + if err != nil { + r.Error = err + } + }, +} + +// ValidateEndpointHost validates that the host string passed in is a valid RFC +// 3986 host. Returns error if the host is not valid. +func ValidateEndpointHost(opName, host string) error { + paramErrs := request.ErrInvalidParams{Context: opName} + labels := strings.Split(host, ".") + + for i, label := range labels { + if i == len(labels)-1 && len(label) == 0 { + // Allow trailing dot for FQDN hosts. + continue + } + + if !ValidHostLabel(label) { + paramErrs.Add(request.NewErrParamFormat( + "endpoint host label", "[a-zA-Z0-9-]{1,63}", label)) + } + } + + if len(host) > 255 { + paramErrs.Add(request.NewErrParamMaxLen( + "endpoint host", 255, host, + )) + } + + if paramErrs.Len() > 0 { + return paramErrs + } + return nil +} + +// ValidHostLabel returns if the label is a valid RFC 3986 host label. +func ValidHostLabel(label string) bool { + if l := len(label); l == 0 || l > 63 { + return false + } + for _, r := range label { + switch { + case r >= '0' && r <= '9': + case r >= 'A' && r <= 'Z': + case r >= 'a' && r <= 'z': + case r == '-': + default: + return false + } + } + + return true +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/private/protocol/host_prefix.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/private/protocol/host_prefix.go new file mode 100644 index 000000000000..6030c5937616 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/private/protocol/host_prefix.go @@ -0,0 +1,72 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. 
+You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package protocol + +// Copy from https://github.com/aws/aws-sdk-go +// May have been modified by Beijing Volcanoengine Technology Ltd. + +import ( + "strings" + + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request" +) + +// HostPrefixHandlerName is the handler name for the host prefix request +// handler. +const HostPrefixHandlerName = "volcenginesdk.endpoint.HostPrefixHandler" + +// NewHostPrefixHandler constructs a build handler +func NewHostPrefixHandler(prefix string, labelsFn func() map[string]string) request.NamedHandler { + builder := HostPrefixBuilder{ + Prefix: prefix, + LabelsFn: labelsFn, + } + + return request.NamedHandler{ + Name: HostPrefixHandlerName, + Fn: builder.Build, + } +} + +// HostPrefixBuilder provides the request handler to expand and prepend +// the host prefix into the operation's request endpoint host. +type HostPrefixBuilder struct { + Prefix string + LabelsFn func() map[string]string +} + +// Build updates the passed in Request with the HostPrefix template expanded. 
+func (h HostPrefixBuilder) Build(r *request.Request) { + //if volcengine.BoolValue(r.Config.DisableEndpointHostPrefix) { + // return + //} + + var labels map[string]string + if h.LabelsFn != nil { + labels = h.LabelsFn() + } + + prefix := h.Prefix + for name, value := range labels { + prefix = strings.Replace(prefix, "{"+name+"}", value, -1) + } + + r.HTTPRequest.URL.Host = prefix + r.HTTPRequest.URL.Host + if len(r.HTTPRequest.Host) > 0 { + r.HTTPRequest.Host = prefix + r.HTTPRequest.Host + } +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/private/protocol/idempotency.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/private/protocol/idempotency.go new file mode 100644 index 000000000000..0f1c0af3590d --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/private/protocol/idempotency.go @@ -0,0 +1,94 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package protocol + +// Copy from https://github.com/aws/aws-sdk-go +// May have been modified by Beijing Volcanoengine Technology Ltd. + +import ( + "crypto/rand" + "fmt" + "reflect" +) + +// RandReader is the random reader the protocol package will use to read +// random bytes from. This is exported for testing, and should not be used. 
+var RandReader = rand.Reader
+
+const idempotencyTokenFillTag = `idempotencyToken`
+
+// CanSetIdempotencyToken returns true if the struct field should be
+// automatically populated with an Idempotency token.
+//
+// Only *string and string type fields that are tagged with idempotencyToken
+// which are not already set can be auto filled.
+func CanSetIdempotencyToken(v reflect.Value, f reflect.StructField) bool {
+	switch u := v.Interface().(type) {
+	// To auto fill an Idempotency token the field must be a string,
+	// tagged for auto fill, and have a zero value.
+	case *string:
+		return u == nil && len(f.Tag.Get(idempotencyTokenFillTag)) != 0
+	case string:
+		return len(u) == 0 && len(f.Tag.Get(idempotencyTokenFillTag)) != 0
+	}
+
+	return false
+}
+
+// GetIdempotencyToken returns a randomly generated idempotency token.
+func GetIdempotencyToken() string {
+	b := make([]byte, 16)
+	RandReader.Read(b)
+
+	return UUIDVersion4(b)
+}
+
+// SetIdempotencyToken will set the value provided with an Idempotency Token,
+// given that the value can be set. Will panic if the value is not settable.
+func SetIdempotencyToken(v reflect.Value) {
+	if v.Kind() == reflect.Ptr {
+		if v.IsNil() && v.CanSet() {
+			v.Set(reflect.New(v.Type().Elem()))
+		}
+		v = v.Elem()
+	}
+	v = reflect.Indirect(v)
+
+	if !v.CanSet() {
+		panic(fmt.Sprintf("unable to set idempotency token %v", v))
+	}
+
+	b := make([]byte, 16)
+	_, err := rand.Read(b)
+	if err != nil {
+		// TODO handle error
+		return
+	}
+
+	v.Set(reflect.ValueOf(UUIDVersion4(b)))
+}
+
+// UUIDVersion4 returns a Version 4 random UUID from the byte slice provided
+func UUIDVersion4(u []byte) string {
+	// https://en.wikipedia.org/wiki/Universally_unique_identifier#Version_4_.28random.29
+	// 13th character is "4"
+	u[6] = (u[6] | 0x40) & 0x4F
+	// 17th character is "8", "9", "a", or "b"
+	u[8] = (u[8] | 0x80) & 0xBF
+
+	return fmt.Sprintf(`%X-%X-%X-%X-%X`, u[0:4], u[4:6], u[6:8], u[8:10], u[10:])
+}
diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/private/protocol/json/jsonutil/build.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/private/protocol/json/jsonutil/build.go
new file mode 100644
index 000000000000..4f3ea1a430ff
--- /dev/null
+++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/private/protocol/json/jsonutil/build.go
@@ -0,0 +1,315 @@
+/*
+Copyright 2023 The Kubernetes Authors.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+// Package jsonutil provides JSON serialization of VOLCSTACK requests and responses.
+package jsonutil + +// Copy from https://github.com/aws/aws-sdk-go +// May have been modified by Beijing Volcanoengine Technology Ltd. + +import ( + "bytes" + "encoding/base64" + "encoding/json" + "fmt" + "math" + "reflect" + "sort" + "strconv" + "time" + + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/private/protocol" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine" +) + +var timeType = reflect.ValueOf(time.Time{}).Type() +var byteSliceType = reflect.ValueOf([]byte{}).Type() + +// BuildJSON builds a JSON string for a given object v. +func BuildJSON(v interface{}) ([]byte, error) { + var buf bytes.Buffer + + err := buildAny(reflect.ValueOf(v), &buf, "") + return buf.Bytes(), err +} + +func buildAny(value reflect.Value, buf *bytes.Buffer, tag reflect.StructTag) error { + origVal := value + value = reflect.Indirect(value) + if !value.IsValid() { + return nil + } + + vtype := value.Type() + + t := tag.Get("type") + if t == "" { + switch vtype.Kind() { + case reflect.Struct: + // also it can't be a time object + if value.Type() != timeType { + t = "structure" + } + case reflect.Slice: + // also it can't be a byte slice + if _, ok := value.Interface().([]byte); !ok { + t = "list" + } + case reflect.Map: + // cannot be a JSONValue map + if _, ok := value.Interface().(volcengine.JSONValue); !ok { + t = "map" + } + } + } + + switch t { + case "structure": + if field, ok := vtype.FieldByName("_"); ok { + tag = field.Tag + } + return buildStruct(value, buf, tag) + case "list": + return buildList(value, buf, tag) + case "map": + return buildMap(value, buf, tag) + default: + return buildScalar(origVal, buf, tag) + } +} + +func buildStruct(value reflect.Value, buf *bytes.Buffer, tag reflect.StructTag) error { + if !value.IsValid() { + return nil + } + + // unwrap payloads + if payload := tag.Get("payload"); payload != "" { + field, _ := value.Type().FieldByName(payload) + tag = field.Tag + value 
= elemOf(value.FieldByName(payload)) + + if !value.IsValid() { + return nil + } + } + + buf.WriteByte('{') + + t := value.Type() + first := true + for i := 0; i < t.NumField(); i++ { + member := value.Field(i) + + // This allocates the most memory. + // Additionally, we cannot skip nil fields due to + // idempotency auto filling. + field := t.Field(i) + + if field.PkgPath != "" { + continue // ignore unexported fields + } + if field.Tag.Get("json") == "-" { + continue + } + if field.Tag.Get("location") != "" { + continue // ignore non-volcenginebody elements + } + if field.Tag.Get("ignore") != "" { + continue + } + + if protocol.CanSetIdempotencyToken(member, field) { + token := protocol.GetIdempotencyToken() + member = reflect.ValueOf(&token) + } + + if (member.Kind() == reflect.Ptr || member.Kind() == reflect.Slice || member.Kind() == reflect.Map) && member.IsNil() { + continue // ignore unset fields + } + + if first { + first = false + } else { + buf.WriteByte(',') + } + + // figure out what this field is called + name := field.Name + if locName := field.Tag.Get("locationName"); locName != "" { + name = locName + } + + writeString(name, buf) + buf.WriteString(`:`) + + err := buildAny(member, buf, field.Tag) + if err != nil { + return err + } + + } + + buf.WriteString("}") + + return nil +} + +func buildList(value reflect.Value, buf *bytes.Buffer, tag reflect.StructTag) error { + buf.WriteString("[") + + for i := 0; i < value.Len(); i++ { + buildAny(value.Index(i), buf, "") + + if i < value.Len()-1 { + buf.WriteString(",") + } + } + + buf.WriteString("]") + + return nil +} + +type sortedValues []reflect.Value + +func (sv sortedValues) Len() int { return len(sv) } +func (sv sortedValues) Swap(i, j int) { sv[i], sv[j] = sv[j], sv[i] } +func (sv sortedValues) Less(i, j int) bool { return sv[i].String() < sv[j].String() } + +func buildMap(value reflect.Value, buf *bytes.Buffer, tag reflect.StructTag) error { + buf.WriteString("{") + + sv := 
sortedValues(value.MapKeys()) + sort.Sort(sv) + + for i, k := range sv { + if i > 0 { + buf.WriteByte(',') + } + + writeString(k.String(), buf) + buf.WriteString(`:`) + + buildAny(value.MapIndex(k), buf, "") + } + + buf.WriteString("}") + + return nil +} + +func buildScalar(v reflect.Value, buf *bytes.Buffer, tag reflect.StructTag) error { + // prevents allocation on the heap. + scratch := [64]byte{} + switch value := reflect.Indirect(v); value.Kind() { + case reflect.String: + writeString(value.String(), buf) + case reflect.Bool: + if value.Bool() { + buf.WriteString("true") + } else { + buf.WriteString("false") + } + case reflect.Int64: + buf.Write(strconv.AppendInt(scratch[:0], value.Int(), 10)) + case reflect.Float64: + f := value.Float() + if math.IsInf(f, 0) || math.IsNaN(f) { + return &json.UnsupportedValueError{Value: v, Str: strconv.FormatFloat(f, 'f', -1, 64)} + } + buf.Write(strconv.AppendFloat(scratch[:0], f, 'f', -1, 64)) + default: + switch converted := value.Interface().(type) { + case time.Time: + format := tag.Get("timestampFormat") + if len(format) == 0 { + format = protocol.UnixTimeFormatName + } + + ts := protocol.FormatTime(format, converted) + if format != protocol.UnixTimeFormatName { + ts = `"` + ts + `"` + } + + buf.WriteString(ts) + case []byte: + if !value.IsNil() { + buf.WriteByte('"') + if len(converted) < 1024 { + // for small buffers, using Encode directly is much faster. + dst := make([]byte, base64.StdEncoding.EncodedLen(len(converted))) + base64.StdEncoding.Encode(dst, converted) + buf.Write(dst) + } else { + // for large buffers, avoid unnecessary extra temporary + // buffer space. 
+ enc := base64.NewEncoder(base64.StdEncoding, buf) + enc.Write(converted) + enc.Close() + } + buf.WriteByte('"') + } + case volcengine.JSONValue: + str, err := protocol.EncodeJSONValue(converted, protocol.QuotedEscape) + if err != nil { + return fmt.Errorf("unable to encode JSONValue, %v", err) + } + buf.WriteString(str) + default: + return fmt.Errorf("unsupported JSON value %v (%s)", value.Interface(), value.Type()) + } + } + return nil +} + +var hex = "0123456789abcdef" + +func writeString(s string, buf *bytes.Buffer) { + buf.WriteByte('"') + for i := 0; i < len(s); i++ { + if s[i] == '"' { + buf.WriteString(`\"`) + } else if s[i] == '\\' { + buf.WriteString(`\\`) + } else if s[i] == '\b' { + buf.WriteString(`\b`) + } else if s[i] == '\f' { + buf.WriteString(`\f`) + } else if s[i] == '\r' { + buf.WriteString(`\r`) + } else if s[i] == '\t' { + buf.WriteString(`\t`) + } else if s[i] == '\n' { + buf.WriteString(`\n`) + } else if s[i] < 32 { + buf.WriteString("\\u00") + buf.WriteByte(hex[s[i]>>4]) + buf.WriteByte(hex[s[i]&0xF]) + } else { + buf.WriteByte(s[i]) + } + } + buf.WriteByte('"') +} + +// Returns the reflection element of a value, if it is a pointer. +func elemOf(value reflect.Value) reflect.Value { + for value.Kind() == reflect.Ptr { + value = value.Elem() + } + return value +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/private/protocol/json/jsonutil/unmarshal.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/private/protocol/json/jsonutil/unmarshal.go new file mode 100644 index 000000000000..201da81b0c54 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/private/protocol/json/jsonutil/unmarshal.go @@ -0,0 +1,269 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. 
+You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package jsonutil + +// Copy from https://github.com/aws/aws-sdk-go +// May have been modified by Beijing Volcanoengine Technology Ltd. + +import ( + "bytes" + "encoding/base64" + "encoding/json" + "fmt" + "io" + "reflect" + "time" + + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/private/protocol" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineerr" +) + +// UnmarshalJSONError unmarshals the reader's JSON document into the passed-in +// type. The value to unmarshal the JSON document into must be a pointer to the +// type. +func UnmarshalJSONError(v interface{}, stream io.Reader) error { + var errBuf bytes.Buffer + body := io.TeeReader(stream, &errBuf) + + err := json.NewDecoder(body).Decode(v) + if err != nil { + msg := "failed decoding error message" + if err == io.EOF { + msg = "error message missing" + err = nil + } + return volcengineerr.NewUnmarshalError(err, msg, errBuf.Bytes()) + } + + return nil +} + +// UnmarshalJSON reads a stream and unmarshals the results into object v. 
+func UnmarshalJSON(v interface{}, stream io.Reader) error { + var out interface{} + + err := json.NewDecoder(stream).Decode(&out) + if err == io.EOF { + return nil + } else if err != nil { + return err + } + + return unmarshalAny(reflect.ValueOf(v), out, "") +} + +func unmarshalAny(value reflect.Value, data interface{}, tag reflect.StructTag) error { + vtype := value.Type() + if vtype.Kind() == reflect.Ptr { + vtype = vtype.Elem() // check kind of actual element type + } + + t := tag.Get("type") + if t == "" { + switch vtype.Kind() { + case reflect.Struct: + // also it can't be a time object + if _, ok := value.Interface().(*time.Time); !ok { + t = "structure" + } + case reflect.Slice: + // also it can't be a byte slice + if _, ok := value.Interface().([]byte); !ok { + t = "list" + } + case reflect.Map: + // cannot be a JSONValue map + if _, ok := value.Interface().(volcengine.JSONValue); !ok { + t = "map" + } + } + } + + switch t { + case "structure": + if field, ok := vtype.FieldByName("_"); ok { + tag = field.Tag + } + return unmarshalStruct(value, data, tag) + case "list": + return unmarshalList(value, data, tag) + case "map": + return unmarshalMap(value, data, tag) + default: + return unmarshalScalar(value, data, tag) + } +} + +func unmarshalStruct(value reflect.Value, data interface{}, tag reflect.StructTag) error { + if data == nil { + return nil + } + mapData, ok := data.(map[string]interface{}) + if !ok { + return fmt.Errorf("JSON value is not a structure (%#v)", data) + } + + t := value.Type() + if value.Kind() == reflect.Ptr { + if value.IsNil() { // create the structure if it's nil + s := reflect.New(value.Type().Elem()) + value.Set(s) + value = s + } + + value = value.Elem() + t = t.Elem() + } + + // unwrap any payloads + if payload := tag.Get("payload"); payload != "" { + field, _ := t.FieldByName(payload) + return unmarshalAny(value.FieldByName(payload), data, field.Tag) + } + + for i := 0; i < t.NumField(); i++ { + field := t.Field(i) + if 
field.PkgPath != "" { + continue // ignore unexported fields + } + + // figure out what this field is called + name := field.Name + if locName := field.Tag.Get("locationName"); locName != "" { + name = locName + } + + member := value.FieldByIndex(field.Index) + err := unmarshalAny(member, mapData[name], field.Tag) + if err != nil { + return err + } + } + return nil +} + +func unmarshalList(value reflect.Value, data interface{}, tag reflect.StructTag) error { + if data == nil { + return nil + } + listData, ok := data.([]interface{}) + if !ok { + return fmt.Errorf("JSON value is not a list (%#v)", data) + } + + if value.IsNil() { + l := len(listData) + value.Set(reflect.MakeSlice(value.Type(), l, l)) + } + + for i, c := range listData { + err := unmarshalAny(value.Index(i), c, "") + if err != nil { + return err + } + } + + return nil +} + +func unmarshalMap(value reflect.Value, data interface{}, tag reflect.StructTag) error { + if data == nil { + return nil + } + mapData, ok := data.(map[string]interface{}) + if !ok { + return fmt.Errorf("JSON value is not a map (%#v)", data) + } + + if value.IsNil() { + value.Set(reflect.MakeMap(value.Type())) + } + + for k, v := range mapData { + kvalue := reflect.ValueOf(k) + vvalue := reflect.New(value.Type().Elem()).Elem() + + unmarshalAny(vvalue, v, "") + value.SetMapIndex(kvalue, vvalue) + } + + return nil +} + +func unmarshalScalar(value reflect.Value, data interface{}, tag reflect.StructTag) error { + + switch d := data.(type) { + case nil: + return nil // nothing to do here + case string: + switch value.Interface().(type) { + case *string: + value.Set(reflect.ValueOf(&d)) + case []byte: + b, err := base64.StdEncoding.DecodeString(d) + if err != nil { + return err + } + value.Set(reflect.ValueOf(b)) + case *time.Time: + format := tag.Get("timestampFormat") + if len(format) == 0 { + format = protocol.ISO8601TimeFormatName + } + + t, err := protocol.ParseTime(format, d) + if err != nil { + return err + } + 
value.Set(reflect.ValueOf(&t)) + case volcengine.JSONValue: + // No need to use escaping as the value is a non-quoted string. + v, err := protocol.DecodeJSONValue(d, protocol.NoEscape) + if err != nil { + return err + } + value.Set(reflect.ValueOf(v)) + default: + return fmt.Errorf("unsupported value: %v (%s)", value.Interface(), value.Type()) + } + case float64: + switch value.Interface().(type) { + case *int64: + di := int64(d) + value.Set(reflect.ValueOf(&di)) + case *float64: + value.Set(reflect.ValueOf(&d)) + case *time.Time: + // Time unmarshaled from a float64 can only be epoch seconds + t := time.Unix(int64(d), 0).UTC() + value.Set(reflect.ValueOf(&t)) + default: + return fmt.Errorf("unsupported value: %v (%s)", value.Interface(), value.Type()) + } + case bool: + switch value.Interface().(type) { + case *bool: + value.Set(reflect.ValueOf(&d)) + default: + return fmt.Errorf("unsupported value: %v (%s)", value.Interface(), value.Type()) + } + default: + return fmt.Errorf("unsupported JSON value (%v)", data) + } + return nil +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/private/protocol/jsonvalue.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/private/protocol/jsonvalue.go new file mode 100644 index 000000000000..1a2a336dcc0b --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/private/protocol/jsonvalue.go @@ -0,0 +1,95 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+See the License for the specific language governing permissions and +limitations under the License. +*/ + +package protocol + +// Copy from https://github.com/aws/aws-sdk-go +// May have been modified by Beijing Volcanoengine Technology Ltd. + +import ( + "encoding/base64" + "encoding/json" + "fmt" + "strconv" + + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine" +) + +// EscapeMode is the mode that should be used for escaping a value +type EscapeMode uint + +// The modes for escaping a value before it is marshaled and unmarshaled. +const ( + NoEscape EscapeMode = iota + Base64Escape + QuotedEscape +) + +// EncodeJSONValue marshals the value into a JSON string, and optionally base64 +// encodes the string before returning it. +// +// Will panic if the escape mode is unknown. +func EncodeJSONValue(v volcengine.JSONValue, escape EscapeMode) (string, error) { + b, err := json.Marshal(v) + if err != nil { + return "", err + } + + switch escape { + case NoEscape: + return string(b), nil + case Base64Escape: + return base64.StdEncoding.EncodeToString(b), nil + case QuotedEscape: + return strconv.Quote(string(b)), nil + } + + panic(fmt.Sprintf("EncodeJSONValue called with unknown EscapeMode, %v", escape)) +} + +// DecodeJSONValue will attempt to decode the string input as a JSONValue. +// Optionally base64 decodes the value first, before JSON unmarshaling. +// +// Will panic if the escape mode is unknown. 
+func DecodeJSONValue(v string, escape EscapeMode) (volcengine.JSONValue, error) { + var b []byte + var err error + + switch escape { + case NoEscape: + b = []byte(v) + case Base64Escape: + b, err = base64.StdEncoding.DecodeString(v) + case QuotedEscape: + var u string + u, err = strconv.Unquote(v) + b = []byte(u) + default: + panic(fmt.Sprintf("DecodeJSONValue called with unknown EscapeMode, %v", escape)) + } + + if err != nil { + return nil, err + } + + m := volcengine.JSONValue{} + err = json.Unmarshal(b, &m) + if err != nil { + return nil, err + } + + return m, nil +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/private/protocol/payload.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/private/protocol/payload.go new file mode 100644 index 000000000000..b39cfb29ab94 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/private/protocol/payload.go @@ -0,0 +1,100 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package protocol + +// Copy from https://github.com/aws/aws-sdk-go +// May have been modified by Beijing Volcanoengine Technology Ltd. 
+ +import ( + "io" + "io/ioutil" + "net/http" + + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/client/metadata" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request" +) + +// PayloadUnmarshaler provides the interface for unmarshaling a payload's +// reader into an SDK shape. +type PayloadUnmarshaler interface { + UnmarshalPayload(io.Reader, interface{}) error +} + +// HandlerPayloadUnmarshal implements the PayloadUnmarshaler from a +// HandlerList. This provides the support for unmarshaling a payload reader to +// a shape without needing an SDK request first. +type HandlerPayloadUnmarshal struct { + Unmarshalers request.HandlerList +} + +// UnmarshalPayload unmarshals the io.Reader payload into the SDK shape using +// the Unmarshalers HandlerList provided. Returns an error if +// unmarshaling fails. +func (h HandlerPayloadUnmarshal) UnmarshalPayload(r io.Reader, v interface{}) error { + req := &request.Request{ + HTTPRequest: &http.Request{}, + HTTPResponse: &http.Response{ + StatusCode: 200, + Header: http.Header{}, + Body: ioutil.NopCloser(r), + }, + Data: v, + } + + h.Unmarshalers.Run(req) + + return req.Error +} + +// PayloadMarshaler provides the interface for marshaling an SDK shape into an +// io.Writer. +type PayloadMarshaler interface { + MarshalPayload(io.Writer, interface{}) error +} + +// HandlerPayloadMarshal implements the PayloadMarshaler from a HandlerList. +// This provides support for marshaling an SDK shape into an io.Writer without +// needing an SDK request first. +type HandlerPayloadMarshal struct { + Marshalers request.HandlerList +} + +// MarshalPayload marshals the SDK shape into the io.Writer using the +// Marshalers HandlerList provided. Returns an error if marshaling +// fails. 
+func (h HandlerPayloadMarshal) MarshalPayload(w io.Writer, v interface{}) error { + req := request.New( + volcengine.Config{}, + metadata.ClientInfo{}, + request.Handlers{}, + nil, + &request.Operation{HTTPMethod: "GET"}, + v, + nil, + ) + + h.Marshalers.Run(req) + + if req.Error != nil { + return req.Error + } + + io.Copy(w, req.GetBody()) + + return nil +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/private/protocol/query/build.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/private/protocol/query/build.go new file mode 100644 index 000000000000..8c9f07429bd6 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/private/protocol/query/build.go @@ -0,0 +1,53 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +// Package query provides serialization of VOLCSTACK volcenginequery requests, and responses. +package query + +// Copy from https://github.com/aws/aws-sdk-go +// May have been modified by Beijing Volcanoengine Technology Ltd. 
+ +import ( + "net/url" + + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/private/protocol/query/queryutil" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineerr" +) + +// BuildHandler is a named request handler for building volcenginequery protocol requests +var BuildHandler = request.NamedHandler{Name: "awssdk.volcenginequery.Build", Fn: Build} + +// Build builds a request for a VOLCSTACK Query service. +func Build(r *request.Request) { + body := url.Values{ + "Action": {r.Operation.Name}, + "Version": {r.ClientInfo.APIVersion}, + } + if err := queryutil.Parse(body, r.Params, false); err != nil { + r.Error = volcengineerr.New(request.ErrCodeSerialization, "failed encoding Query request", err) + return + } + + if !r.IsPresigned() { + r.HTTPRequest.Method = "POST" + r.HTTPRequest.Header.Set("Content-Type", "application/x-www-form-urlencoded; charset=utf-8") + r.SetBufferBody([]byte(body.Encode())) + } else { // This is a pre-signed request + r.HTTPRequest.Method = "GET" + r.HTTPRequest.URL.RawQuery = body.Encode() + } +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/private/protocol/query/queryutil/queryutil.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/private/protocol/query/queryutil/queryutil.go new file mode 100644 index 000000000000..b121e34af237 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/private/protocol/query/queryutil/queryutil.go @@ -0,0 +1,271 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. 
+You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package queryutil + +import ( + "encoding/base64" + "encoding/json" + "fmt" + "net/url" + "reflect" + "sort" + "strconv" + "strings" + "time" + + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/private/protocol" +) + +// Parse parses an object i and fills a url.Values object. The isEC2 flag +// indicates if this is the EC2 Query sub-protocol. +func Parse(body url.Values, i interface{}, isEC2 bool) error { + q := queryParser{isEC2: isEC2} + return q.parseValue(body, reflect.ValueOf(i), "", "") +} + +func elemOf(value reflect.Value) reflect.Value { + for value.Kind() == reflect.Ptr { + value = value.Elem() + } + return value +} + +type queryParser struct { + isEC2 bool +} + +func (q *queryParser) parseValue(v url.Values, value reflect.Value, prefix string, tag reflect.StructTag) error { + value = elemOf(value) + + // no need to handle zero values + if !value.IsValid() { + return nil + } + + t := tag.Get("type") + if t == "" { + switch value.Kind() { + case reflect.Struct: + t = "structure" + case reflect.Slice: + t = "list" + case reflect.Map: + t = "map" + } + } + + switch t { + case "structure": + return q.parseStruct(v, value, prefix) + case "list": + return q.parseList(v, value, prefix, tag) + case "map": + return q.parseMap(v, value, prefix, tag) + default: + return q.parseScalar(v, value, prefix, tag) + } +} + +func (q *queryParser) parseStruct(v url.Values, value reflect.Value, prefix string) error { + if !value.IsValid() { + return nil + } + + t := value.Type() + for i := 0; i < value.NumField(); i++ { + elemValue := 
elemOf(value.Field(i)) + field := t.Field(i) + + if field.PkgPath != "" { + continue // ignore unexported fields + } + if field.Tag.Get("ignore") != "" { + continue + } + + if protocol.CanSetIdempotencyToken(value.Field(i), field) { + token := protocol.GetIdempotencyToken() + elemValue = reflect.ValueOf(token) + } + + var name string + if q.isEC2 { + name = field.Tag.Get("queryName") + } + if name == "" { + if field.Tag.Get("flattened") != "" && field.Tag.Get("locationNameList") != "" { + name = field.Tag.Get("locationNameList") + } else if locName := field.Tag.Get("locationName"); locName != "" { + name = locName + } + if name != "" && q.isEC2 { + name = strings.ToUpper(name[0:1]) + name[1:] + } + } + if name == "" { + name = field.Name + } + + if prefix != "" { + name = prefix + "." + name + } + + if err := q.parseValue(v, elemValue, name, field.Tag); err != nil { + return err + } + } + return nil +} + +func (q *queryParser) parseList(v url.Values, value reflect.Value, prefix string, tag reflect.StructTag) error { + // If it's empty, generate an empty value + if !value.IsNil() && value.Len() == 0 { + v.Set(prefix, "") + return nil + } + + if _, ok := value.Interface().([]byte); ok { + return q.parseScalar(v, value, prefix, tag) + } + + // check for unflattened list member + //if !q.isEC2 && tag.Get("flattened") == "" { + // if listName := tag.Get("locationNameList"); listName == "" { + // prefix += ".member" + // } else { + // prefix += "." + listName + // } + //} + + for i := 0; i < value.Len(); i++ { + slicePrefix := prefix + if slicePrefix == "" { + slicePrefix = strconv.Itoa(i + 1) + } else { + slicePrefix = slicePrefix + "." 
+ strconv.Itoa(i+1) + } + if err := q.parseValue(v, value.Index(i), slicePrefix, ""); err != nil { + return err + } + } + return nil +} + +func (q *queryParser) parseMap(v url.Values, value reflect.Value, prefix string, tag reflect.StructTag) error { + // If it's empty, generate an empty value + if !value.IsNil() && value.Len() == 0 { + v.Set(prefix, "") + return nil + } + + // check for unflattened list member + if !q.isEC2 && tag.Get("flattened") == "" { + prefix += ".entry" + } + + // sort keys for improved serialization consistency. + // this is not strictly necessary for protocol support. + mapKeyValues := value.MapKeys() + mapKeys := map[string]reflect.Value{} + mapKeyNames := make([]string, len(mapKeyValues)) + for i, mapKey := range mapKeyValues { + name := mapKey.String() + mapKeys[name] = mapKey + mapKeyNames[i] = name + } + sort.Strings(mapKeyNames) + + for i, mapKeyName := range mapKeyNames { + mapKey := mapKeys[mapKeyName] + mapValue := value.MapIndex(mapKey) + + kname := tag.Get("locationNameKey") + if kname == "" { + kname = "key" + } + vname := tag.Get("locationNameValue") + if vname == "" { + vname = "value" + } + + // serialize key + var keyName string + if prefix == "" { + keyName = strconv.Itoa(i+1) + "." + kname + } else { + keyName = prefix + "." + strconv.Itoa(i+1) + "." + kname + } + + if err := q.parseValue(v, mapKey, keyName, ""); err != nil { + return err + } + + // serialize value + var valueName string + if prefix == "" { + valueName = strconv.Itoa(i+1) + "." + vname + } else { + valueName = prefix + "." + strconv.Itoa(i+1) + "." 
+ vname + } + + if err := q.parseValue(v, mapValue, valueName, ""); err != nil { + return err + } + } + + return nil +} + +func (q *queryParser) parseScalar(v url.Values, r reflect.Value, name string, tag reflect.StructTag) error { + switch value := r.Interface().(type) { + case string: + v.Set(name, value) + case []byte: + if !r.IsNil() { + v.Set(name, base64.StdEncoding.EncodeToString(value)) + } + case json.Number: + v.Set(name, value.String()) + case bool: + v.Set(name, strconv.FormatBool(value)) + case int64: + v.Set(name, strconv.FormatInt(value, 10)) + case int32: + v.Set(name, strconv.FormatInt(int64(value), 10)) + case int16: + v.Set(name, strconv.FormatInt(int64(value), 10)) + case int8: + v.Set(name, strconv.FormatInt(int64(value), 10)) + case int: + v.Set(name, strconv.Itoa(value)) + case float64: + v.Set(name, strconv.FormatFloat(value, 'f', -1, 64)) + case float32: + v.Set(name, strconv.FormatFloat(float64(value), 'f', -1, 32)) + case time.Time: + const ISO8601UTC = "2006-01-02T15:04:05Z" + format := tag.Get("timestampFormat") + if len(format) == 0 { + format = protocol.ISO8601TimeFormatName + } + + v.Set(name, protocol.FormatTime(format, value)) + default: + return fmt.Errorf("unsupported value for param %s: %v (%s)", name, r.Interface(), r.Type().Name()) + } + return nil +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/private/protocol/query/unmarshal.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/private/protocol/query/unmarshal.go new file mode 100644 index 000000000000..66b33513705f --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/private/protocol/query/unmarshal.go @@ -0,0 +1,55 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. 
+You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package query + +// This file is modified from https://github.com/aws/aws-sdk-go/blob/main/private/protocol/query/unmarshal.go + +import ( + "encoding/xml" + + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/private/protocol/xml/xmlutil" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineerr" +) + +// UnmarshalHandler is a named request handler for unmarshaling volcenginequery protocol requests +var UnmarshalHandler = request.NamedHandler{Name: "awssdk.volcenginequery.Unmarshal", Fn: Unmarshal} + +// UnmarshalMetaHandler is a named request handler for unmarshaling volcenginequery protocol request metadata +var UnmarshalMetaHandler = request.NamedHandler{Name: "awssdk.volcenginequery.UnmarshalMeta", Fn: UnmarshalMeta} + +// Unmarshal unmarshals a response for a VOLCSTACK Query service. +func Unmarshal(r *request.Request) { + defer r.HTTPResponse.Body.Close() + if r.DataFilled() { + decoder := xml.NewDecoder(r.HTTPResponse.Body) + err := xmlutil.UnmarshalXML(r.Data, decoder, r.Operation.Name+"Result") + if err != nil { + r.Error = volcengineerr.NewRequestFailure( + volcengineerr.New(request.ErrCodeSerialization, "failed decoding Query response", err), + r.HTTPResponse.StatusCode, + r.RequestID, + ) + return + } + } +} + +// UnmarshalMeta unmarshals header response values for a VOLCSTACK Query service. 
+func UnmarshalMeta(r *request.Request) { + r.RequestID = r.HTTPResponse.Header.Get("X-Top-Requestid") +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/private/protocol/query/unmarshal_error.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/private/protocol/query/unmarshal_error.go new file mode 100644 index 000000000000..b2ac8c25508f --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/private/protocol/query/unmarshal_error.go @@ -0,0 +1,88 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package query + +// Copy from https://github.com/aws/aws-sdk-go +// May have been modified by Beijing Volcanoengine Technology Ltd. 
+ +import ( + "encoding/xml" + "fmt" + + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/private/protocol/xml/xmlutil" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineerr" +) + +// UnmarshalErrorHandler is a named request handler to unmarshal request errors +var UnmarshalErrorHandler = request.NamedHandler{Name: "volcenginesdk.volcenginequery.UnmarshalError", Fn: UnmarshalError} + +type xmlErrorResponse struct { + Code string `xml:"Error>Code"` + Message string `xml:"Error>Message"` + RequestID string `xml:"RequestId"` +} + +type xmlResponseError struct { + xmlErrorResponse +} + +func (e *xmlResponseError) UnmarshalXML(d *xml.Decoder, start xml.StartElement) error { + const svcUnavailableTagName = "ServiceUnavailableException" + const errorResponseTagName = "ErrorResponse" + + switch start.Name.Local { + case svcUnavailableTagName: + e.Code = svcUnavailableTagName + e.Message = "service is unavailable" + return d.Skip() + + case errorResponseTagName: + return d.DecodeElement(&e.xmlErrorResponse, &start) + + default: + return fmt.Errorf("unknown error response tag, %v", start) + } +} + +// UnmarshalError unmarshals an error response for a VOLCSTACK Query service. 
+func UnmarshalError(r *request.Request) { + defer r.HTTPResponse.Body.Close() + + var respErr xmlResponseError + err := xmlutil.UnmarshalXMLError(&respErr, r.HTTPResponse.Body) + if err != nil { + r.Error = volcengineerr.NewRequestFailure( + volcengineerr.New(request.ErrCodeSerialization, + "failed to unmarshal error message", err), + r.HTTPResponse.StatusCode, + r.RequestID, + ) + return + } + + reqID := respErr.RequestID + if len(reqID) == 0 { + reqID = r.RequestID + } + + r.Error = volcengineerr.NewRequestFailure( + volcengineerr.New(respErr.Code, respErr.Message, nil), + r.HTTPResponse.StatusCode, + reqID, + ) +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/private/protocol/rest/build.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/private/protocol/rest/build.go new file mode 100644 index 000000000000..f720fdbf39ec --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/private/protocol/rest/build.go @@ -0,0 +1,329 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +// Package rest provides RESTful serialization of VOLCSTACK requests and responses. +package rest + +// Copy from https://github.com/aws/aws-sdk-go +// May have been modified by Beijing Volcanoengine Technology Ltd. 
+ +import ( + "bytes" + "encoding/base64" + "fmt" + "io" + "net/http" + "net/url" + "path" + "reflect" + "strconv" + "strings" + "time" + + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/private/protocol" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineerr" +) + +// Whether the byte value can be sent without escaping in VOLCSTACK URLs +var noEscape [256]bool + +var errValueNotSet = fmt.Errorf("value not set") + +var byteSliceType = reflect.TypeOf([]byte{}) + +func init() { + for i := 0; i < len(noEscape); i++ { + // VOLCSTACK expects every character except these to be escaped + noEscape[i] = (i >= 'A' && i <= 'Z') || + (i >= 'a' && i <= 'z') || + (i >= '0' && i <= '9') || + i == '-' || + i == '.' || + i == '_' || + i == '~' + } +} + +// BuildHandler is a named request handler for building rest protocol requests +var BuildHandler = request.NamedHandler{Name: "awssdk.rest.Build", Fn: Build} + +// Build builds the REST component of a service request. +func Build(r *request.Request) { + if r.ParamsFilled() { + v := reflect.ValueOf(r.Params).Elem() + buildLocationElements(r, v, false) + buildBody(r, v) + } +} + +// BuildAsGET builds the REST component of a service request with the ability to hoist +// data from the volcenginebody. +func BuildAsGET(r *request.Request) { + if r.ParamsFilled() { + v := reflect.ValueOf(r.Params).Elem() + buildLocationElements(r, v, true) + buildBody(r, v) + } +} + +func buildLocationElements(r *request.Request, v reflect.Value, buildGETQuery bool) { + query := r.HTTPRequest.URL.Query() + + // Setup the raw path to match the base path pattern. 
This is needed
+	// so that when the path is mutated a custom escaped version can be
+	// stored in RawPath that will be used by the Go client.
+	r.HTTPRequest.URL.RawPath = r.HTTPRequest.URL.Path
+
+	for i := 0; i < v.NumField(); i++ {
+		m := v.Field(i)
+		if n := v.Type().Field(i).Name; n[0:1] == strings.ToLower(n[0:1]) {
+			continue
+		}
+
+		if m.IsValid() {
+			field := v.Type().Field(i)
+			name := field.Tag.Get("locationName")
+			if name == "" {
+				name = field.Name
+			}
+			if kind := m.Kind(); kind == reflect.Ptr {
+				m = m.Elem()
+			} else if kind == reflect.Interface {
+				if !m.Elem().IsValid() {
+					continue
+				}
+			}
+			if !m.IsValid() {
+				continue
+			}
+			if field.Tag.Get("ignore") != "" {
+				continue
+			}
+
+			// Support the ability to customize values to be marshaled as a
+			// blob even though they were modeled as a string. Required for S3
+			// API operations like SSECustomerKey, which is modeled as a string
+			// but must be base64 encoded in the request.
+			if field.Tag.Get("marshal-as") == "blob" {
+				m = m.Convert(byteSliceType)
+			}
+
+			var err error
+			switch field.Tag.Get("location") {
+			case "headers": // header maps
+				err = buildHeaderMap(&r.HTTPRequest.Header, m, field.Tag)
+			case "header":
+				err = buildHeader(&r.HTTPRequest.Header, m, name, field.Tag)
+			case "uri":
+				err = buildURI(r.HTTPRequest.URL, m, name, field.Tag)
+			case "querystring":
+				err = buildQueryString(query, m, name, field.Tag)
+			default:
+				if buildGETQuery {
+					err = buildQueryString(query, m, name, field.Tag)
+				}
+			}
+			r.Error = err
+		}
+		if r.Error != nil {
+			return
+		}
+	}
+
+	r.HTTPRequest.URL.RawQuery = query.Encode()
+	if !volcengine.BoolValue(r.Config.DisableRestProtocolURICleaning) {
+		cleanPath(r.HTTPRequest.URL)
+	}
+}
+
+func buildBody(r *request.Request, v reflect.Value) {
+	if field, ok := v.Type().FieldByName("_"); ok {
+		if payloadName := field.Tag.Get("payload"); payloadName != "" {
+			pfield, _ := v.Type().FieldByName(payloadName)
+			if ptag := pfield.Tag.Get("type"); ptag != "" && ptag !=
"structure" { + payload := reflect.Indirect(v.FieldByName(payloadName)) + if payload.IsValid() && payload.Interface() != nil { + switch reader := payload.Interface().(type) { + case io.ReadSeeker: + r.SetReaderBody(reader) + case []byte: + r.SetBufferBody(reader) + case string: + r.SetStringBody(reader) + default: + r.Error = volcengineerr.New(request.ErrCodeSerialization, + "failed to encode REST request", + fmt.Errorf("unknown payload type %s", payload.Type())) + } + } + } + } + } +} + +func buildHeader(header *http.Header, v reflect.Value, name string, tag reflect.StructTag) error { + str, err := convertType(v, tag) + if err == errValueNotSet { + return nil + } else if err != nil { + return volcengineerr.New(request.ErrCodeSerialization, "failed to encode REST request", err) + } + + name = strings.TrimSpace(name) + str = strings.TrimSpace(str) + + header.Add(name, str) + + return nil +} + +func buildHeaderMap(header *http.Header, v reflect.Value, tag reflect.StructTag) error { + prefix := tag.Get("locationName") + for _, key := range v.MapKeys() { + str, err := convertType(v.MapIndex(key), tag) + if err == errValueNotSet { + continue + } else if err != nil { + return volcengineerr.New(request.ErrCodeSerialization, "failed to encode REST request", err) + + } + keyStr := strings.TrimSpace(key.String()) + str = strings.TrimSpace(str) + + header.Add(prefix+keyStr, str) + } + return nil +} + +func buildURI(u *url.URL, v reflect.Value, name string, tag reflect.StructTag) error { + value, err := convertType(v, tag) + if err == errValueNotSet { + return nil + } else if err != nil { + return volcengineerr.New(request.ErrCodeSerialization, "failed to encode REST request", err) + } + + u.Path = strings.Replace(u.Path, "{"+name+"}", value, -1) + u.Path = strings.Replace(u.Path, "{"+name+"+}", value, -1) + + u.RawPath = strings.Replace(u.RawPath, "{"+name+"}", EscapePath(value, true), -1) + u.RawPath = strings.Replace(u.RawPath, "{"+name+"+}", EscapePath(value, false), -1) + 
+ return nil +} + +func buildQueryString(query url.Values, v reflect.Value, name string, tag reflect.StructTag) error { + switch value := v.Interface().(type) { + case []*string: + for _, item := range value { + query.Add(name, *item) + } + case map[string]*string: + for key, item := range value { + query.Add(key, *item) + } + case map[string][]*string: + for key, items := range value { + for _, item := range items { + query.Add(key, *item) + } + } + default: + str, err := convertType(v, tag) + if err == errValueNotSet { + return nil + } else if err != nil { + return volcengineerr.New(request.ErrCodeSerialization, "failed to encode REST request", err) + } + query.Set(name, str) + } + + return nil +} + +func cleanPath(u *url.URL) { + hasSlash := strings.HasSuffix(u.Path, "/") + + // clean up path, removing duplicate `/` + u.Path = path.Clean(u.Path) + u.RawPath = path.Clean(u.RawPath) + + if hasSlash && !strings.HasSuffix(u.Path, "/") { + u.Path += "/" + u.RawPath += "/" + } +} + +// EscapePath escapes part of a URL path in style +func EscapePath(path string, encodeSep bool) string { + var buf bytes.Buffer + for i := 0; i < len(path); i++ { + c := path[i] + if noEscape[c] || (c == '/' && !encodeSep) { + buf.WriteByte(c) + } else { + fmt.Fprintf(&buf, "%%%02X", c) + } + } + return buf.String() +} + +func convertType(v reflect.Value, tag reflect.StructTag) (str string, err error) { + v = reflect.Indirect(v) + if !v.IsValid() { + return "", errValueNotSet + } + + switch value := v.Interface().(type) { + case string: + str = value + case []byte: + str = base64.StdEncoding.EncodeToString(value) + case bool: + str = strconv.FormatBool(value) + case int64: + str = strconv.FormatInt(value, 10) + case float64: + str = strconv.FormatFloat(value, 'f', -1, 64) + case time.Time: + format := tag.Get("timestampFormat") + if len(format) == 0 { + format = protocol.RFC822TimeFormatName + if tag.Get("location") == "querystring" { + format = protocol.ISO8601TimeFormatName + } + } + str 
= protocol.FormatTime(format, value) + case volcengine.JSONValue: + if len(value) == 0 { + return "", errValueNotSet + } + escaping := protocol.NoEscape + if tag.Get("location") == "header" { + escaping = protocol.Base64Escape + } + str, err = protocol.EncodeJSONValue(value, escaping) + if err != nil { + return "", fmt.Errorf("unable to encode JSONValue, %v", err) + } + default: + err := fmt.Errorf("unsupported value for param %v (%s)", v.Interface(), v.Type()) + return "", err + } + return str, nil +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/private/protocol/rest/payload.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/private/protocol/rest/payload.go new file mode 100644 index 000000000000..eb9db8178fc2 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/private/protocol/rest/payload.go @@ -0,0 +1,64 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package rest + +// Copy from https://github.com/aws/aws-sdk-go +// May have been modified by Beijing Volcanoengine Technology Ltd. + +import "reflect" + +// PayloadMember returns the payload field member of i if there is one, or nil. 
+func PayloadMember(i interface{}) interface{} { + if i == nil { + return nil + } + + v := reflect.ValueOf(i).Elem() + if !v.IsValid() { + return nil + } + if field, ok := v.Type().FieldByName("_"); ok { + if payloadName := field.Tag.Get("payload"); payloadName != "" { + field, _ := v.Type().FieldByName(payloadName) + if field.Tag.Get("type") != "structure" { + return nil + } + + payload := v.FieldByName(payloadName) + if payload.IsValid() || (payload.Kind() == reflect.Ptr && !payload.IsNil()) { + return payload.Interface() + } + } + } + return nil +} + +// PayloadType returns the type of a payload field member of i if there is one, or "". +func PayloadType(i interface{}) string { + v := reflect.Indirect(reflect.ValueOf(i)) + if !v.IsValid() { + return "" + } + if field, ok := v.Type().FieldByName("_"); ok { + if payloadName := field.Tag.Get("payload"); payloadName != "" { + if member, ok := v.Type().FieldByName(payloadName); ok { + return member.Tag.Get("type") + } + } + } + return "" +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/private/protocol/rest/unmarshal.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/private/protocol/rest/unmarshal.go new file mode 100644 index 000000000000..519861faf470 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/private/protocol/rest/unmarshal.go @@ -0,0 +1,256 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. 
+*/ + +package rest + +// Copy from https://github.com/aws/aws-sdk-go +// May have been modified by Beijing Volcanoengine Technology Ltd. + +import ( + "bytes" + "encoding/base64" + "fmt" + "io" + "io/ioutil" + "net/http" + "reflect" + "strconv" + "strings" + "time" + + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/private/protocol" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineerr" +) + +// UnmarshalHandler is a named request handler for unmarshaling rest protocol requests +var UnmarshalHandler = request.NamedHandler{Name: "volcenginesdk.rest.Unmarshal", Fn: Unmarshal} + +// UnmarshalMetaHandler is a named request handler for unmarshaling rest protocol request metadata +var UnmarshalMetaHandler = request.NamedHandler{Name: "volcenginesdk.rest.UnmarshalMeta", Fn: UnmarshalMeta} + +// Unmarshal unmarshals the REST component of a response in a REST service. 
+func Unmarshal(r *request.Request) { + if r.DataFilled() { + v := reflect.Indirect(reflect.ValueOf(r.Data)) + unmarshalBody(r, v) + } +} + +// UnmarshalMeta unmarshals the REST metadata of a response in a REST service +func UnmarshalMeta(r *request.Request) { + r.RequestID = r.HTTPResponse.Header.Get("X-Top-Requestid") + if r.RequestID == "" { + // Alternative version of request id in the header + r.RequestID = r.HTTPResponse.Header.Get("X-Top-Request-Id") + } + if r.DataFilled() { + v := reflect.Indirect(reflect.ValueOf(r.Data)) + unmarshalLocationElements(r, v) + } +} + +func unmarshalBody(r *request.Request, v reflect.Value) { + if field, ok := v.Type().FieldByName("_"); ok { + if payloadName := field.Tag.Get("payload"); payloadName != "" { + pfield, _ := v.Type().FieldByName(payloadName) + if ptag := pfield.Tag.Get("type"); ptag != "" && ptag != "structure" { + payload := v.FieldByName(payloadName) + if payload.IsValid() { + switch payload.Interface().(type) { + case []byte: + defer r.HTTPResponse.Body.Close() + b, err := ioutil.ReadAll(r.HTTPResponse.Body) + if err != nil { + r.Error = volcengineerr.New(request.ErrCodeSerialization, "failed to decode REST response", err) + } else { + payload.Set(reflect.ValueOf(b)) + } + case *string: + defer r.HTTPResponse.Body.Close() + b, err := ioutil.ReadAll(r.HTTPResponse.Body) + if err != nil { + r.Error = volcengineerr.New(request.ErrCodeSerialization, "failed to decode REST response", err) + } else { + str := string(b) + payload.Set(reflect.ValueOf(&str)) + } + default: + switch payload.Type().String() { + case "io.ReadCloser": + payload.Set(reflect.ValueOf(r.HTTPResponse.Body)) + case "io.ReadSeeker": + b, err := ioutil.ReadAll(r.HTTPResponse.Body) + if err != nil { + r.Error = volcengineerr.New(request.ErrCodeSerialization, + "failed to read response volcenginebody", err) + return + } + payload.Set(reflect.ValueOf(ioutil.NopCloser(bytes.NewReader(b)))) + default: + io.Copy(ioutil.Discard, r.HTTPResponse.Body) + 
defer r.HTTPResponse.Body.Close() + r.Error = volcengineerr.New(request.ErrCodeSerialization, + "failed to decode REST response", + fmt.Errorf("unknown payload type %s", payload.Type())) + } + } + } + } + } + } +} + +func unmarshalLocationElements(r *request.Request, v reflect.Value) { + for i := 0; i < v.NumField(); i++ { + m, field := v.Field(i), v.Type().Field(i) + if n := field.Name; n[0:1] == strings.ToLower(n[0:1]) { + continue + } + + if m.IsValid() { + name := field.Tag.Get("locationName") + if name == "" { + name = field.Name + } + + switch field.Tag.Get("location") { + case "statusCode": + unmarshalStatusCode(m, r.HTTPResponse.StatusCode) + case "header": + err := unmarshalHeader(m, r.HTTPResponse.Header.Get(name), field.Tag) + if err != nil { + r.Error = volcengineerr.New(request.ErrCodeSerialization, "failed to decode REST response", err) + break + } + case "headers": + prefix := field.Tag.Get("locationName") + err := unmarshalHeaderMap(m, r.HTTPResponse.Header, prefix) + if err != nil { + r.Error = volcengineerr.New(request.ErrCodeSerialization, "failed to decode REST response", err) + break + } + } + } + if r.Error != nil { + return + } + } +} + +func unmarshalStatusCode(v reflect.Value, statusCode int) { + if !v.IsValid() { + return + } + + switch v.Interface().(type) { + case *int64: + s := int64(statusCode) + v.Set(reflect.ValueOf(&s)) + } +} + +func unmarshalHeaderMap(r reflect.Value, headers http.Header, prefix string) error { + if len(headers) == 0 { + return nil + } + switch r.Interface().(type) { + case map[string]*string: // we only support string map value types + out := map[string]*string{} + for k, v := range headers { + k = http.CanonicalHeaderKey(k) + if strings.HasPrefix(strings.ToLower(k), strings.ToLower(prefix)) { + out[k[len(prefix):]] = &v[0] + } + } + if len(out) != 0 { + r.Set(reflect.ValueOf(out)) + } + + } + return nil +} + +func unmarshalHeader(v reflect.Value, header string, tag reflect.StructTag) error { + switch 
tag.Get("type") { + case "jsonvalue": + if len(header) == 0 { + return nil + } + case "blob": + if len(header) == 0 { + return nil + } + default: + if !v.IsValid() || (header == "" && v.Elem().Kind() != reflect.String) { + return nil + } + } + + switch v.Interface().(type) { + case *string: + v.Set(reflect.ValueOf(&header)) + case []byte: + b, err := base64.StdEncoding.DecodeString(header) + if err != nil { + return err + } + v.Set(reflect.ValueOf(b)) + case *bool: + b, err := strconv.ParseBool(header) + if err != nil { + return err + } + v.Set(reflect.ValueOf(&b)) + case *int64: + i, err := strconv.ParseInt(header, 10, 64) + if err != nil { + return err + } + v.Set(reflect.ValueOf(&i)) + case *float64: + f, err := strconv.ParseFloat(header, 64) + if err != nil { + return err + } + v.Set(reflect.ValueOf(&f)) + case *time.Time: + format := tag.Get("timestampFormat") + if len(format) == 0 { + format = protocol.RFC822TimeFormatName + } + t, err := protocol.ParseTime(format, header) + if err != nil { + return err + } + v.Set(reflect.ValueOf(&t)) + case volcengine.JSONValue: + escaping := protocol.NoEscape + if tag.Get("location") == "header" { + escaping = protocol.Base64Escape + } + m, err := protocol.DecodeJSONValue(header, escaping) + if err != nil { + return err + } + v.Set(reflect.ValueOf(m)) + default: + err := fmt.Errorf("Unsupported value for param %v (%s)", v.Interface(), v.Type()) + return err + } + return nil +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/private/protocol/timestamp.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/private/protocol/timestamp.go new file mode 100644 index 000000000000..8446c16c1f14 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/private/protocol/timestamp.go @@ -0,0 +1,103 @@ +/* +Copyright 2023 The Kubernetes Authors. 
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package protocol
+
+// Copy from https://github.com/aws/aws-sdk-go
+// May have been modified by Beijing Volcanoengine Technology Ltd.
+
+import (
+	"math"
+	"strconv"
+	"time"
+
+	"k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/internal/sdkmath"
+)
+
+// Names of time formats supported by the SDK
+const (
+	RFC822TimeFormatName  = "rfc822"
+	ISO8601TimeFormatName = "iso8601"
+	UnixTimeFormatName    = "unixTimestamp"
+)
+
+// Time formats supported by the SDK
+// Output time is intended to not contain decimals
+const (
+	// RFC 7231#section-7.1.1.1 timestamp format. e.g. Tue, 29 Apr 2014 18:30:38 GMT
+	RFC822TimeFormat = "Mon, 2 Jan 2006 15:04:05 GMT"
+
+	// Zero-padded-day variant of RFC822TimeFormat used when formatting output
+	RFC822OutputTimeFormat = "Mon, 02 Jan 2006 15:04:05 GMT"
+
+	// RFC3339, a subset of the ISO8601 timestamp format. e.g. 2014-04-29T18:30:38Z
+	ISO8601TimeFormat = "2006-01-02T15:04:05.999999999Z"
+
+	// Output variant of ISO8601TimeFormat without sub-second precision
+	ISO8601OutputTimeFormat = "2006-01-02T15:04:05Z"
+)
+
+// IsKnownTimestampFormat returns whether the timestamp format name
+// is known to the SDK's protocols.
+func IsKnownTimestampFormat(name string) bool {
+	switch name {
+	case RFC822TimeFormatName:
+		fallthrough
+	case ISO8601TimeFormatName:
+		fallthrough
+	case UnixTimeFormatName:
+		return true
+	default:
+		return false
+	}
+}
+
+// FormatTime returns a string value of the time.
+func FormatTime(name string, t time.Time) string { + t = t.UTC() + + switch name { + case RFC822TimeFormatName: + return t.Format(RFC822OutputTimeFormat) + case ISO8601TimeFormatName: + return t.Format(ISO8601OutputTimeFormat) + case UnixTimeFormatName: + return strconv.FormatInt(t.Unix(), 10) + default: + panic("unknown timestamp format name, " + name) + } +} + +// ParseTime attempts to parse the time given the format. Returns +// the time if it was able to be parsed, and fails otherwise. +func ParseTime(formatName, value string) (time.Time, error) { + switch formatName { + case RFC822TimeFormatName: + return time.Parse(RFC822TimeFormat, value) + case ISO8601TimeFormatName: + return time.Parse(ISO8601TimeFormat, value) + case UnixTimeFormatName: + v, err := strconv.ParseFloat(value, 64) + _, dec := math.Modf(v) + dec = sdkmath.Round(dec*1e3) / 1e3 //Rounds 0.1229999 to 0.123 + if err != nil { + return time.Time{}, err + } + return time.Unix(int64(v), int64(dec*(1e9))), nil + default: + panic("unknown timestamp format name, " + formatName) + } +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/private/protocol/unmarshal.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/private/protocol/unmarshal.go new file mode 100644 index 000000000000..81b39f214cee --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/private/protocol/unmarshal.go @@ -0,0 +1,40 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package protocol
+
+// Copy from https://github.com/aws/aws-sdk-go
+// May have been modified by Beijing Volcanoengine Technology Ltd.
+
+import (
+	"io"
+	"io/ioutil"
+
+	"k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request"
+)
+
+// UnmarshalDiscardBodyHandler is a named request handler to empty and close a response's body
+var UnmarshalDiscardBodyHandler = request.NamedHandler{Name: "volcenginesdk.shared.UnmarshalDiscardBody", Fn: UnmarshalDiscardBody}
+
+// UnmarshalDiscardBody is a request handler that empties and closes a response's body.
+func UnmarshalDiscardBody(r *request.Request) {
+	if r.HTTPResponse == nil || r.HTTPResponse.Body == nil {
+		return
+	}
+
+	io.Copy(ioutil.Discard, r.HTTPResponse.Body)
+	r.HTTPResponse.Body.Close()
+}
diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/private/protocol/volcenginequery/unmarshal.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/private/protocol/volcenginequery/unmarshal.go
new file mode 100644
index 000000000000..22af8a77366b
--- /dev/null
+++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/private/protocol/volcenginequery/unmarshal.go
@@ -0,0 +1,96 @@
+/*
+Copyright 2023 The Kubernetes Authors.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/ + +package volcenginequery + +//go:generate go run -tags codegen ../../../private/model/cli/gen-protocol-tests ../../../models/protocol_tests/output/ec2.json unmarshal_test.go + +// Copy from https://github.com/aws/aws-sdk-go +// May have been modified by Beijing Volcanoengine Technology Ltd. + +import ( + "encoding/xml" + + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/private/protocol/xml/xmlutil" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineerr" +) + +// UnmarshalHandler is a named request handler for unmarshaling ec2query protocol requests +var UnmarshalHandler = request.NamedHandler{Name: "volcenginesdk.volcenginequery.Unmarshal", Fn: Unmarshal} + +// UnmarshalMetaHandler is a named request handler for unmarshaling ec2query protocol request metadata +var UnmarshalMetaHandler = request.NamedHandler{Name: "volcenginesdk.volcenginequery.UnmarshalMeta", Fn: UnmarshalMeta} + +// UnmarshalErrorHandler is a named request handler for unmarshaling ec2query protocol request errors +var UnmarshalErrorHandler = request.NamedHandler{Name: "volcenginesdk.volcenginequery.UnmarshalError", Fn: UnmarshalError} + +// Unmarshal unmarshals a response body for the EC2 protocol. +func Unmarshal(r *request.Request) { + defer r.HTTPResponse.Body.Close() + if r.DataFilled() { + decoder := xml.NewDecoder(r.HTTPResponse.Body) + err := xmlutil.UnmarshalXML(r.Data, decoder, "") + if err != nil { + r.Error = volcengineerr.NewRequestFailure( + volcengineerr.New(request.ErrCodeSerialization, + "failed decoding EC2 Query response", err), + r.HTTPResponse.StatusCode, + r.RequestID, + ) + return + } + } +} + +// UnmarshalMeta unmarshals response headers for the EC2 protocol. 
+func UnmarshalMeta(r *request.Request) { + r.RequestID = r.HTTPResponse.Header.Get("X-Amzn-Requestid") + if r.RequestID == "" { + // Alternative version of request id in the header + r.RequestID = r.HTTPResponse.Header.Get("X-Amz-Request-Id") + } +} + +type xmlErrorResponse struct { + XMLName xml.Name `xml:"Response"` + Code string `xml:"Errors>Error>Code"` + Message string `xml:"Errors>Error>Message"` + RequestID string `xml:"RequestID"` +} + +// UnmarshalError unmarshals a response error for the EC2 protocol. +func UnmarshalError(r *request.Request) { + defer r.HTTPResponse.Body.Close() + + var respErr xmlErrorResponse + err := xmlutil.UnmarshalXMLError(&respErr, r.HTTPResponse.Body) + if err != nil { + r.Error = volcengineerr.NewRequestFailure( + volcengineerr.New(request.ErrCodeSerialization, + "failed to unmarshal error message", err), + r.HTTPResponse.StatusCode, + r.RequestID, + ) + return + } + + r.Error = volcengineerr.NewRequestFailure( + volcengineerr.New(respErr.Code, respErr.Message, nil), + r.HTTPResponse.StatusCode, + respErr.RequestID, + ) +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/private/protocol/xml/xmlutil/build.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/private/protocol/xml/xmlutil/build.go new file mode 100644 index 000000000000..7ad1c7bca2ee --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/private/protocol/xml/xmlutil/build.go @@ -0,0 +1,325 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+// Package xmlutil provides XML serialization of VOLCSTACK requests and responses.
+package xmlutil
+
+// Copy from https://github.com/aws/aws-sdk-go
+// May have been modified by Beijing Volcanoengine Technology Ltd.
+
+import (
+	"encoding/base64"
+	"encoding/xml"
+	"fmt"
+	"reflect"
+	"sort"
+	"strconv"
+	"time"
+
+	"k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/private/protocol"
+)
+
+// BuildXML will serialize params into an xml.Encoder. Error will be returned
+// if the serialization of any of the params or nested values fails.
+func BuildXML(params interface{}, e *xml.Encoder) error {
+	return buildXML(params, e, false)
+}
+
+func buildXML(params interface{}, e *xml.Encoder, sorted bool) error {
+	b := xmlBuilder{encoder: e, namespaces: map[string]string{}}
+	root := NewXMLElement(xml.Name{})
+	if err := b.buildValue(reflect.ValueOf(params), root, ""); err != nil {
+		return err
+	}
+	for _, c := range root.Children {
+		for _, v := range c {
+			return StructToXML(e, v, sorted)
+		}
+	}
+	return nil
+}
+
+// Returns the reflection element of a value, if it is a pointer.
+func elemOf(value reflect.Value) reflect.Value {
+	for value.Kind() == reflect.Ptr {
+		value = value.Elem()
+	}
+	return value
+}
+
+// An xmlBuilder serializes values from Go code to XML
+type xmlBuilder struct {
+	encoder    *xml.Encoder
+	namespaces map[string]string
+}
+
+// buildValue is a generic XMLNode builder for any type. It builds the value
+// according to its specific type: struct, list, map, or scalar.
+//
+// Also takes a "type" tag value to set what type a value should be converted
+// to as an XMLNode. If the type is not provided, reflection is used to
+// determine the value's type.
+func (b *xmlBuilder) buildValue(value reflect.Value, current *XMLNode, tag reflect.StructTag) error { + value = elemOf(value) + if !value.IsValid() { // no need to handle zero values + return nil + } else if tag.Get("location") != "" { // don't handle non-volcenginebody location values + return nil + } + + t := tag.Get("type") + if t == "" { + switch value.Kind() { + case reflect.Struct: + t = "structure" + case reflect.Slice: + t = "list" + case reflect.Map: + t = "map" + } + } + + switch t { + case "structure": + if field, ok := value.Type().FieldByName("_"); ok { + tag = tag + reflect.StructTag(" ") + field.Tag + } + return b.buildStruct(value, current, tag) + case "list": + return b.buildList(value, current, tag) + case "map": + return b.buildMap(value, current, tag) + default: + return b.buildScalar(value, current, tag) + } +} + +// buildStruct adds a struct and its fields to the current XMLNode. All fields and any nested +// types are converted to XMLNodes also. +func (b *xmlBuilder) buildStruct(value reflect.Value, current *XMLNode, tag reflect.StructTag) error { + if !value.IsValid() { + return nil + } + + // unwrap payloads + if payload := tag.Get("payload"); payload != "" { + field, _ := value.Type().FieldByName(payload) + tag = field.Tag + value = elemOf(value.FieldByName(payload)) + + if !value.IsValid() { + return nil + } + } + + child := NewXMLElement(xml.Name{Local: tag.Get("locationName")}) + + // there is an xmlNamespace associated with this struct + if prefix, uri := tag.Get("xmlPrefix"), tag.Get("xmlURI"); uri != "" { + ns := xml.Attr{ + Name: xml.Name{Local: "xmlns"}, + Value: uri, + } + if prefix != "" { + b.namespaces[prefix] = uri // register the namespace + ns.Name.Local = "xmlns:" + prefix + } + + child.Attr = append(child.Attr, ns) + } + + var payloadFields, nonPayloadFields int + + t := value.Type() + for i := 0; i < value.NumField(); i++ { + member := elemOf(value.Field(i)) + field := t.Field(i) + + if field.PkgPath != "" { + continue 
// ignore unexported fields + } + if field.Tag.Get("ignore") != "" { + continue + } + + mTag := field.Tag + if mTag.Get("location") != "" { // skip non-volcenginebody members + nonPayloadFields++ + continue + } + payloadFields++ + + if protocol.CanSetIdempotencyToken(value.Field(i), field) { + token := protocol.GetIdempotencyToken() + member = reflect.ValueOf(token) + } + + memberName := mTag.Get("locationName") + if memberName == "" { + memberName = field.Name + mTag = reflect.StructTag(string(mTag) + ` locationName:"` + memberName + `"`) + } + if err := b.buildValue(member, child, mTag); err != nil { + return err + } + } + + // Only case where the child shape is not added is if the shape only contains + // non-payload fields, e.g headers/volcenginequery. + if !(payloadFields == 0 && nonPayloadFields > 0) { + current.AddChild(child) + } + + return nil +} + +// buildList adds the value's list items to the current XMLNode as children nodes. All +// nested values in the list are converted to XMLNodes also. +func (b *xmlBuilder) buildList(value reflect.Value, current *XMLNode, tag reflect.StructTag) error { + if value.IsNil() { // don't build omitted lists + return nil + } + + // check for unflattened list member + flattened := tag.Get("flattened") != "" + + xname := xml.Name{Local: tag.Get("locationName")} + if flattened { + for i := 0; i < value.Len(); i++ { + child := NewXMLElement(xname) + current.AddChild(child) + if err := b.buildValue(value.Index(i), child, ""); err != nil { + return err + } + } + } else { + list := NewXMLElement(xname) + current.AddChild(list) + + for i := 0; i < value.Len(); i++ { + iname := tag.Get("locationNameList") + if iname == "" { + iname = "member" + } + + child := NewXMLElement(xml.Name{Local: iname}) + list.AddChild(child) + if err := b.buildValue(value.Index(i), child, ""); err != nil { + return err + } + } + } + + return nil +} + +// buildMap adds the value's key/value pairs to the current XMLNode as children nodes. 
All
+// nested values in the map are converted to XMLNodes also.
+//
+// Error will be returned if it is unable to build the map's values into XMLNodes
+func (b *xmlBuilder) buildMap(value reflect.Value, current *XMLNode, tag reflect.StructTag) error {
+	if value.IsNil() { // don't build omitted maps
+		return nil
+	}
+
+	maproot := NewXMLElement(xml.Name{Local: tag.Get("locationName")})
+	current.AddChild(maproot)
+	current = maproot
+
+	kname, vname := "key", "value"
+	if n := tag.Get("locationNameKey"); n != "" {
+		kname = n
+	}
+	if n := tag.Get("locationNameValue"); n != "" {
+		vname = n
+	}
+
+	// sorting is not required for compliance, but it makes testing easier
+	keys := make([]string, value.Len())
+	for i, k := range value.MapKeys() {
+		keys[i] = k.String()
+	}
+	sort.Strings(keys)
+
+	for _, k := range keys {
+		v := value.MapIndex(reflect.ValueOf(k))
+
+		mapcur := current
+		if tag.Get("flattened") == "" { // add "entry" tag to non-flat maps
+			child := NewXMLElement(xml.Name{Local: "entry"})
+			mapcur.AddChild(child)
+			mapcur = child
+		}
+
+		kchild := NewXMLElement(xml.Name{Local: kname})
+		kchild.Text = k
+		vchild := NewXMLElement(xml.Name{Local: vname})
+		mapcur.AddChild(kchild)
+		mapcur.AddChild(vchild)
+
+		if err := b.buildValue(v, vchild, ""); err != nil {
+			return err
+		}
+	}
+
+	return nil
+}
+
+// buildScalar will convert the value into a string and append it as an attribute or child
+// of the current XMLNode.
+//
+// The value will be added as an attribute if tag contains an "xmlAttribute" attribute value.
+//
+// Error will be returned if the value type is unsupported.
+func (b *xmlBuilder) buildScalar(value reflect.Value, current *XMLNode, tag reflect.StructTag) error { + var str string + switch converted := value.Interface().(type) { + case string: + str = converted + case []byte: + if !value.IsNil() { + str = base64.StdEncoding.EncodeToString(converted) + } + case bool: + str = strconv.FormatBool(converted) + case int64: + str = strconv.FormatInt(converted, 10) + case int: + str = strconv.Itoa(converted) + case float64: + str = strconv.FormatFloat(converted, 'f', -1, 64) + case float32: + str = strconv.FormatFloat(float64(converted), 'f', -1, 32) + case time.Time: + format := tag.Get("timestampFormat") + if len(format) == 0 { + format = protocol.ISO8601TimeFormatName + } + + str = protocol.FormatTime(format, converted) + default: + return fmt.Errorf("unsupported value for param %s: %v (%s)", + tag.Get("locationName"), value.Interface(), value.Type().Name()) + } + + xname := xml.Name{Local: tag.Get("locationName")} + if tag.Get("xmlAttribute") != "" { // put into current node's attribute list + attr := xml.Attr{Name: xname, Value: str} + current.Attr = append(current.Attr, attr) + } else { // regular text node + current.AddChild(&XMLNode{Name: xname, Text: str}) + } + return nil +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/private/protocol/xml/xmlutil/sort.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/private/protocol/xml/xmlutil/sort.go new file mode 100644 index 000000000000..688d69edbadd --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/private/protocol/xml/xmlutil/sort.go @@ -0,0 +1,51 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. 
+You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package xmlutil + +// Copy from https://github.com/aws/aws-sdk-go +// May have been modified by Beijing Volcanoengine Technology Ltd. + +import ( + "encoding/xml" + "strings" +) + +type xmlAttrSlice []xml.Attr + +func (x xmlAttrSlice) Len() int { + return len(x) +} + +func (x xmlAttrSlice) Less(i, j int) bool { + spaceI, spaceJ := x[i].Name.Space, x[j].Name.Space + localI, localJ := x[i].Name.Local, x[j].Name.Local + valueI, valueJ := x[i].Value, x[j].Value + + spaceCmp := strings.Compare(spaceI, spaceJ) + localCmp := strings.Compare(localI, localJ) + valueCmp := strings.Compare(valueI, valueJ) + + if spaceCmp == -1 || (spaceCmp == 0 && (localCmp == -1 || (localCmp == 0 && valueCmp == -1))) { + return true + } + + return false +} + +func (x xmlAttrSlice) Swap(i, j int) { + x[i], x[j] = x[j], x[i] +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/private/protocol/xml/xmlutil/unmarshal.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/private/protocol/xml/xmlutil/unmarshal.go new file mode 100644 index 000000000000..71a4191658da --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/private/protocol/xml/xmlutil/unmarshal.go @@ -0,0 +1,310 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. 
+You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package xmlutil + +// Copy from https://github.com/aws/aws-sdk-go +// May have been modified by Beijing Volcanoengine Technology Ltd. + +import ( + "bytes" + "encoding/base64" + "encoding/xml" + "fmt" + "io" + "reflect" + "strconv" + "strings" + "time" + + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/private/protocol" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineerr" +) + +// UnmarshalXMLError unmarshals the XML error from the stream into the value +// type specified. The value must be a pointer. If the message fails to +// unmarshal, the message content will be included in the returned error as a +// volcengineerr.UnmarshalError. +func UnmarshalXMLError(v interface{}, stream io.Reader) error { + var errBuf bytes.Buffer + body := io.TeeReader(stream, &errBuf) + + err := xml.NewDecoder(body).Decode(v) + if err != nil && err != io.EOF { + return volcengineerr.NewUnmarshalError(err, + "failed to unmarshal error message", errBuf.Bytes()) + } + + return nil +} + +// UnmarshalXML deserializes an xml.Decoder into the container v. V +// needs to match the shape of the XML expected to be decoded. +// If the shape doesn't match unmarshaling will fail. 
+func UnmarshalXML(v interface{}, d *xml.Decoder, wrapper string) error { + n, err := XMLToStruct(d, nil) + if err != nil { + return err + } + if n.Children != nil { + for _, root := range n.Children { + for _, c := range root { + if wrappedChild, ok := c.Children[wrapper]; ok { + c = wrappedChild[0] // pull out wrapped element + } + + err = parse(reflect.ValueOf(v), c, "") + if err != nil { + if err == io.EOF { + return nil + } + return err + } + } + } + return nil + } + return nil +} + +// parse deserializes any value from the XMLNode. The type tag is used to infer the type, or reflect +// will be used to determine the type from r. +func parse(r reflect.Value, node *XMLNode, tag reflect.StructTag) error { + rtype := r.Type() + if rtype.Kind() == reflect.Ptr { + rtype = rtype.Elem() // check kind of actual element type + } + + t := tag.Get("type") + if t == "" { + switch rtype.Kind() { + case reflect.Struct: + // also it can't be a time object + if _, ok := r.Interface().(*time.Time); !ok { + t = "structure" + } + case reflect.Slice: + // also it can't be a byte slice + if _, ok := r.Interface().([]byte); !ok { + t = "list" + } + case reflect.Map: + t = "map" + } + } + + switch t { + case "structure": + if field, ok := rtype.FieldByName("_"); ok { + tag = field.Tag + } + return parseStruct(r, node, tag) + case "list": + return parseList(r, node, tag) + case "map": + return parseMap(r, node, tag) + default: + return parseScalar(r, node, tag) + } +} + +// parseStruct deserializes a structure and its fields from an XMLNode. Any nested +// types in the structure will also be deserialized. 
+func parseStruct(r reflect.Value, node *XMLNode, tag reflect.StructTag) error { + t := r.Type() + if r.Kind() == reflect.Ptr { + if r.IsNil() { // create the structure if it's nil + s := reflect.New(r.Type().Elem()) + r.Set(s) + r = s + } + + r = r.Elem() + t = t.Elem() + } + + // unwrap any payloads + if payload := tag.Get("payload"); payload != "" { + field, _ := t.FieldByName(payload) + return parseStruct(r.FieldByName(payload), node, field.Tag) + } + + for i := 0; i < t.NumField(); i++ { + field := t.Field(i) + if c := field.Name[0:1]; strings.ToLower(c) == c { + continue // ignore unexported fields + } + + // figure out what this field is called + name := field.Name + if field.Tag.Get("flattened") != "" && field.Tag.Get("locationNameList") != "" { + name = field.Tag.Get("locationNameList") + } else if locName := field.Tag.Get("locationName"); locName != "" { + name = locName + } + + // try to find the field by name in elements + elems := node.Children[name] + + if elems == nil { // try to find the field in attributes + if val, ok := node.findElem(name); ok { + elems = []*XMLNode{{Text: val}} + } + } + + member := r.FieldByName(field.Name) + for _, elem := range elems { + err := parse(member, elem, field.Tag) + if err != nil { + return err + } + } + } + return nil +} + +// parseList deserializes a list of values from an XML node. Each list entry +// will also be deserialized. 
+func parseList(r reflect.Value, node *XMLNode, tag reflect.StructTag) error {
+	t := r.Type()
+
+	if tag.Get("flattened") == "" { // look at all item entries
+		mname := "member"
+		if name := tag.Get("locationNameList"); name != "" {
+			mname = name
+		}
+
+		if Children, ok := node.Children[mname]; ok {
+			if r.IsNil() {
+				r.Set(reflect.MakeSlice(t, len(Children), len(Children)))
+			}
+
+			for i, c := range Children {
+				err := parse(r.Index(i), c, "")
+				if err != nil {
+					return err
+				}
+			}
+		}
+	} else { // flattened list means this is a single element
+		if r.IsNil() {
+			r.Set(reflect.MakeSlice(t, 0, 0))
+		}
+
+		childR := reflect.Zero(t.Elem())
+		r.Set(reflect.Append(r, childR))
+		err := parse(r.Index(r.Len()-1), node, "")
+		if err != nil {
+			return err
+		}
+	}
+
+	return nil
+}
+
+// parseMap deserializes a map from an XMLNode. The direct children of the XMLNode
+// will also be deserialized as map entries.
+func parseMap(r reflect.Value, node *XMLNode, tag reflect.StructTag) error {
+	if r.IsNil() {
+		r.Set(reflect.MakeMap(r.Type()))
+	}
+
+	if tag.Get("flattened") == "" { // look at all child entries
+		for _, entry := range node.Children["entry"] {
+			parseMapEntry(r, entry, tag)
+		}
+	} else { // this element is itself an entry
+		parseMapEntry(r, node, tag)
+	}
+
+	return nil
+}
+
+// parseMapEntry deserializes a map entry from an XML node.
+func parseMapEntry(r reflect.Value, node *XMLNode, tag reflect.StructTag) error {
+	kname, vname := "key", "value"
+	if n := tag.Get("locationNameKey"); n != "" {
+		kname = n
+	}
+	if n := tag.Get("locationNameValue"); n != "" {
+		vname = n
+	}
+
+	keys, ok := node.Children[kname]
+	values := node.Children[vname]
+	if ok {
+		for i, key := range keys {
+			keyR := reflect.ValueOf(key.Text)
+			value := values[i]
+			valueR := reflect.New(r.Type().Elem()).Elem()
+
+			parse(valueR, value, "")
+			r.SetMapIndex(keyR, valueR)
+		}
+	}
+	return nil
+}
+
+// parseScalar deserializes an XMLNode value into a concrete type based on the
+// interface type of r.
+//
+// Error is returned if the deserialization fails due to invalid type conversion,
+// or unsupported interface type.
+func parseScalar(r reflect.Value, node *XMLNode, tag reflect.StructTag) error {
+	switch r.Interface().(type) {
+	case *string:
+		r.Set(reflect.ValueOf(&node.Text))
+		return nil
+	case []byte:
+		b, err := base64.StdEncoding.DecodeString(node.Text)
+		if err != nil {
+			return err
+		}
+		r.Set(reflect.ValueOf(b))
+	case *bool:
+		v, err := strconv.ParseBool(node.Text)
+		if err != nil {
+			return err
+		}
+		r.Set(reflect.ValueOf(&v))
+	case *int64:
+		v, err := strconv.ParseInt(node.Text, 10, 64)
+		if err != nil {
+			return err
+		}
+		r.Set(reflect.ValueOf(&v))
+	case *float64:
+		v, err := strconv.ParseFloat(node.Text, 64)
+		if err != nil {
+			return err
+		}
+		r.Set(reflect.ValueOf(&v))
+	case *time.Time:
+		format := tag.Get("timestampFormat")
+		if len(format) == 0 {
+			format = protocol.ISO8601TimeFormatName
+		}
+
+		t, err := protocol.ParseTime(format, node.Text)
+		if err != nil {
+			return err
+		}
+		r.Set(reflect.ValueOf(&t))
+	default:
+		return fmt.Errorf("unsupported value: %v (%s)", r.Interface(), r.Type())
+	}
+	return nil
+}
diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/private/protocol/xml/xmlutil/xml_to_struct.go
b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/private/protocol/xml/xmlutil/xml_to_struct.go
new file mode 100644
index 000000000000..b029d1ebdfe3
--- /dev/null
+++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/private/protocol/xml/xmlutil/xml_to_struct.go
@@ -0,0 +1,178 @@
+/*
+Copyright 2023 The Kubernetes Authors.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package xmlutil
+
+// Copy from https://github.com/aws/aws-sdk-go
+// May have been modified by Beijing Volcanoengine Technology Ltd.
+
+import (
+	"encoding/xml"
+	"fmt"
+	"io"
+	"sort"
+)
+
+// An XMLNode contains the values to be encoded or decoded.
+type XMLNode struct {
+	Name     xml.Name              `json:",omitempty"`
+	Children map[string][]*XMLNode `json:",omitempty"`
+	Text     string                `json:",omitempty"`
+	Attr     []xml.Attr            `json:",omitempty"`
+
+	namespaces map[string]string
+	parent     *XMLNode
+}
+
+// NewXMLElement returns a pointer to a new XMLNode initialized to default values.
+func NewXMLElement(name xml.Name) *XMLNode {
+	return &XMLNode{
+		Name:     name,
+		Children: map[string][]*XMLNode{},
+		Attr:     []xml.Attr{},
+	}
+}
+
+// AddChild adds child to the XMLNode.
+func (n *XMLNode) AddChild(child *XMLNode) {
+	child.parent = n
+	if _, ok := n.Children[child.Name.Local]; !ok {
+		n.Children[child.Name.Local] = []*XMLNode{}
+	}
+	n.Children[child.Name.Local] = append(n.Children[child.Name.Local], child)
+}
+
+// XMLToStruct converts an xml.Decoder stream to XMLNode with nested values.
+func XMLToStruct(d *xml.Decoder, s *xml.StartElement) (*XMLNode, error) { + out := &XMLNode{} + for { + tok, err := d.Token() + if err != nil { + if err == io.EOF { + break + } else { + return out, err + } + } + + if tok == nil { + break + } + + switch typed := tok.(type) { + case xml.CharData: + out.Text = string(typed.Copy()) + case xml.StartElement: + el := typed.Copy() + out.Attr = el.Attr + if out.Children == nil { + out.Children = map[string][]*XMLNode{} + } + + name := typed.Name.Local + slice := out.Children[name] + if slice == nil { + slice = []*XMLNode{} + } + node, e := XMLToStruct(d, &el) + out.findNamespaces() + if e != nil { + return out, e + } + node.Name = typed.Name + node.findNamespaces() + tempOut := *out + // Save into a temp variable, simply because out gets squashed during + // loop iterations + node.parent = &tempOut + slice = append(slice, node) + out.Children[name] = slice + case xml.EndElement: + if s != nil && s.Name.Local == typed.Name.Local { // matching end token + return out, nil + } + out = &XMLNode{} + } + } + return out, nil +} + +func (n *XMLNode) findNamespaces() { + ns := map[string]string{} + for _, a := range n.Attr { + if a.Name.Space == "xmlns" { + ns[a.Value] = a.Name.Local + } + } + + n.namespaces = ns +} + +func (n *XMLNode) findElem(name string) (string, bool) { + for node := n; node != nil; node = node.parent { + for _, a := range node.Attr { + namespace := a.Name.Space + if v, ok := node.namespaces[namespace]; ok { + namespace = v + } + if name == fmt.Sprintf("%s:%s", namespace, a.Name.Local) { + return a.Value, true + } + } + } + return "", false +} + +// StructToXML writes an XMLNode to a xml.Encoder as tokens. 
+func StructToXML(e *xml.Encoder, node *XMLNode, sorted bool) error {
+	// Sort Attributes
+	attrs := node.Attr
+	if sorted {
+		// Allocate with zero length (and full capacity) so append does not
+		// leave zero-value attributes at the front of the sorted slice.
+		sortedAttrs := make([]xml.Attr, 0, len(attrs))
+		for _, k := range node.Attr {
+			sortedAttrs = append(sortedAttrs, k)
+		}
+		sort.Sort(xmlAttrSlice(sortedAttrs))
+		attrs = sortedAttrs
+	}
+
+	e.EncodeToken(xml.StartElement{Name: node.Name, Attr: attrs})
+
+	if node.Text != "" {
+		e.EncodeToken(xml.CharData([]byte(node.Text)))
+	} else if sorted {
+		sortedNames := []string{}
+		for k := range node.Children {
+			sortedNames = append(sortedNames, k)
+		}
+		sort.Strings(sortedNames)
+
+		for _, k := range sortedNames {
+			for _, v := range node.Children[k] {
+				StructToXML(e, v, sorted)
+			}
+		}
+	} else {
+		for _, c := range node.Children {
+			for _, v := range c {
+				StructToXML(e, v, sorted)
+			}
+		}
+	}
+
+	e.EncodeToken(xml.EndElement{Name: node.Name})
+	return e.Flush()
+}
diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/private/util/sort_keys.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/private/util/sort_keys.go
new file mode 100644
index 000000000000..821948e2bfc6
--- /dev/null
+++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/private/util/sort_keys.go
@@ -0,0 +1,33 @@
+/*
+Copyright 2023 The Kubernetes Authors.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package util
+
+// Copy from https://github.com/aws/aws-sdk-go
+// May have been modified by Beijing Volcanoengine Technology Ltd.
+
+import "sort"
+
+// SortedKeys returns a sorted slice of keys of a map.
+func SortedKeys(m map[string]interface{}) []string {
+	i, sorted := 0, make([]string, len(m))
+	for k := range m {
+		sorted[i] = k
+		i++
+	}
+	sort.Strings(sorted)
+	return sorted
+}
diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/private/util/util.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/private/util/util.go
new file mode 100644
index 000000000000..a1b41890b78b
--- /dev/null
+++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/private/util/util.go
@@ -0,0 +1,128 @@
+/*
+Copyright 2023 The Kubernetes Authors.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package util
+
+// Copy from https://github.com/aws/aws-sdk-go
+// May have been modified by Beijing Volcanoengine Technology Ltd.
+
+import (
+	"bytes"
+	"encoding/xml"
+	"fmt"
+	"go/format"
+	"io"
+	"reflect"
+	"regexp"
+	"strings"
+
+	"k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/private/protocol/xml/xmlutil"
+)
+
+// GoFmt returns the Go formatted string of the input.
+//
+// Panics if the format fails.
+func GoFmt(buf string) string {
+	formatted, err := format.Source([]byte(buf))
+	if err != nil {
+		panic(fmt.Errorf("%s\nOriginal code:\n%s", err.Error(), buf))
+	}
+	return string(formatted)
+}
+
+var reTrim = regexp.MustCompile(`\s{2,}`)
+
+// Trim removes all leading and trailing white space.
+// +// All consecutive spaces will be reduced to a single space. +func Trim(s string) string { + return strings.TrimSpace(reTrim.ReplaceAllString(s, " ")) +} + +// Capitalize capitalizes the first character of the string. +func Capitalize(s string) string { + if len(s) == 1 { + return strings.ToUpper(s) + } + return strings.ToUpper(s[0:1]) + s[1:] +} + +// SortXML sorts the reader's XML elements +func SortXML(r io.Reader) string { + var buf bytes.Buffer + d := xml.NewDecoder(r) + root, _ := xmlutil.XMLToStruct(d, nil) + e := xml.NewEncoder(&buf) + xmlutil.StructToXML(e, root, true) + return buf.String() +} + +// PrettyPrint generates a human readable representation of the value v. +// All values of v are recursively found and pretty printed also. +func PrettyPrint(v interface{}) string { + value := reflect.ValueOf(v) + switch value.Kind() { + case reflect.Struct: + str := fullName(value.Type()) + "{\n" + for i := 0; i < value.NumField(); i++ { + l := string(value.Type().Field(i).Name[0]) + if strings.ToUpper(l) == l { + str += value.Type().Field(i).Name + ": " + str += PrettyPrint(value.Field(i).Interface()) + str += ",\n" + } + } + str += "}" + return str + case reflect.Map: + str := "map[" + fullName(value.Type().Key()) + "]" + fullName(value.Type().Elem()) + "{\n" + for _, k := range value.MapKeys() { + str += "\"" + k.String() + "\": " + str += PrettyPrint(value.MapIndex(k).Interface()) + str += ",\n" + } + str += "}" + return str + case reflect.Ptr: + if e := value.Elem(); e.IsValid() { + return "&" + PrettyPrint(e.Interface()) + } + return "nil" + case reflect.Slice: + str := "[]" + fullName(value.Type().Elem()) + "{\n" + for i := 0; i < value.Len(); i++ { + str += PrettyPrint(value.Index(i).Interface()) + str += ",\n" + } + str += "}" + return str + default: + return fmt.Sprintf("%#v", v) + } +} + +func pkgName(t reflect.Type) string { + pkg := t.PkgPath() + c := strings.Split(pkg, "/") + return c[len(c)-1] +} + +func fullName(t reflect.Type) string { + if 
pkg := pkgName(t); pkg != "" { + return pkg + "." + t.Name() + } + return t.Name() +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_attach_db_instances.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_attach_db_instances.go new file mode 100644 index 000000000000..35c38c1fd9c7 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_attach_db_instances.go @@ -0,0 +1,202 @@ +// Code generated by volcengine with private/model/cli/gen-api/main.go. DO NOT EDIT. + +package autoscaling + +import ( + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/response" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil" +) + +const opAttachDBInstancesCommon = "AttachDBInstances" + +// AttachDBInstancesCommonRequest generates a "volcengine/request.Request" representing the +// client's request for the AttachDBInstancesCommon operation. The "output" return +// value will be populated with the AttachDBInstancesCommon request's response once the request completes +// successfully. +// +// Use "Send" method on the returned AttachDBInstancesCommon Request to send the API call to the service. +// the "output" return value is not valid until after AttachDBInstancesCommon Send returns without error. +// +// See AttachDBInstancesCommon for more information on using the AttachDBInstancesCommon +// API call, and error handling. +// +// // Example sending a request using the AttachDBInstancesCommonRequest method. 
+// req, resp := client.AttachDBInstancesCommonRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *AUTOSCALING) AttachDBInstancesCommonRequest(input *map[string]interface{}) (req *request.Request, output *map[string]interface{}) { + op := &request.Operation{ + Name: opAttachDBInstancesCommon, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &map[string]interface{}{} + } + + output = &map[string]interface{}{} + req = c.newRequest(op, input, output) + + return +} + +// AttachDBInstancesCommon API operation for AUTO_SCALING. +// +// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the VOLCENGINE API reference guide for AUTO_SCALING's +// API operation AttachDBInstancesCommon for usage and error information. +func (c *AUTOSCALING) AttachDBInstancesCommon(input *map[string]interface{}) (*map[string]interface{}, error) { + req, out := c.AttachDBInstancesCommonRequest(input) + return out, req.Send() +} + +// AttachDBInstancesCommonWithContext is the same as AttachDBInstancesCommon with the addition of +// the ability to pass a context and additional request options. +// +// See AttachDBInstancesCommon for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur. +// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *AUTOSCALING) AttachDBInstancesCommonWithContext(ctx volcengine.Context, input *map[string]interface{}, opts ...request.Option) (*map[string]interface{}, error) { + req, out := c.AttachDBInstancesCommonRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+	return out, req.Send()
+}
+
+const opAttachDBInstances = "AttachDBInstances"
+
+// AttachDBInstancesRequest generates a "volcengine/request.Request" representing the
+// client's request for the AttachDBInstances operation. The "output" return
+// value will be populated with the AttachDBInstances request's response once the request completes
+// successfully.
+//
+// Use "Send" method on the returned AttachDBInstances Request to send the API call to the service.
+// the "output" return value is not valid until after AttachDBInstances Send returns without error.
+//
+// See AttachDBInstances for more information on using the AttachDBInstances
+// API call, and error handling.
+//
+//	// Example sending a request using the AttachDBInstancesRequest method.
+//	req, resp := client.AttachDBInstancesRequest(params)
+//
+//	err := req.Send()
+//	if err == nil { // resp is now filled
+//	    fmt.Println(resp)
+//	}
+func (c *AUTOSCALING) AttachDBInstancesRequest(input *AttachDBInstancesInput) (req *request.Request, output *AttachDBInstancesOutput) {
+	op := &request.Operation{
+		Name:       opAttachDBInstances,
+		HTTPMethod: "GET",
+		HTTPPath:   "/",
+	}
+
+	if input == nil {
+		input = &AttachDBInstancesInput{}
+	}
+
+	output = &AttachDBInstancesOutput{}
+	req = c.newRequest(op, input, output)
+
+	return
+}
+
+// AttachDBInstances API operation for AUTO_SCALING.
+//
+// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions
+// with volcengineerr.Error's Code and Message methods to get detailed information about
+// the error.
+//
+// See the VOLCENGINE API reference guide for AUTO_SCALING's
+// API operation AttachDBInstances for usage and error information.
+func (c *AUTOSCALING) AttachDBInstances(input *AttachDBInstancesInput) (*AttachDBInstancesOutput, error) {
+	req, out := c.AttachDBInstancesRequest(input)
+	return out, req.Send()
+}
+
+// AttachDBInstancesWithContext is the same as AttachDBInstances with the addition of
+// the ability to pass a context and additional request options.
+//
+// See AttachDBInstances for details on how to use this API operation.
+//
+// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur.
+// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/
+// for more information on using Contexts.
+func (c *AUTOSCALING) AttachDBInstancesWithContext(ctx volcengine.Context, input *AttachDBInstancesInput, opts ...request.Option) (*AttachDBInstancesOutput, error) {
+	req, out := c.AttachDBInstancesRequest(input)
+	req.SetContext(ctx)
+	req.ApplyOptions(opts...)
+	return out, req.Send()
+}
+
+type AttachDBInstancesInput struct {
+	_ struct{} `type:"structure"`
+
+	DBInstanceIds []*string `type:"list"`
+
+	ForceAttach *bool `type:"boolean"`
+
+	ScalingGroupId *string `type:"string"`
+}
+
+// String returns the string representation
+func (s AttachDBInstancesInput) String() string {
+	return volcengineutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s AttachDBInstancesInput) GoString() string {
+	return s.String()
+}
+
+// SetDBInstanceIds sets the DBInstanceIds field's value.
+func (s *AttachDBInstancesInput) SetDBInstanceIds(v []*string) *AttachDBInstancesInput {
+	s.DBInstanceIds = v
+	return s
+}
+
+// SetForceAttach sets the ForceAttach field's value.
+func (s *AttachDBInstancesInput) SetForceAttach(v bool) *AttachDBInstancesInput {
+	s.ForceAttach = &v
+	return s
+}
+
+// SetScalingGroupId sets the ScalingGroupId field's value.
+func (s *AttachDBInstancesInput) SetScalingGroupId(v string) *AttachDBInstancesInput { + s.ScalingGroupId = &v + return s +} + +type AttachDBInstancesOutput struct { + _ struct{} `type:"structure"` + + Metadata *response.ResponseMetadata + + ScalingGroupId *string `type:"string"` +} + +// String returns the string representation +func (s AttachDBInstancesOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s AttachDBInstancesOutput) GoString() string { + return s.String() +} + +// SetScalingGroupId sets the ScalingGroupId field's value. +func (s *AttachDBInstancesOutput) SetScalingGroupId(v string) *AttachDBInstancesOutput { + s.ScalingGroupId = &v + return s +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_attach_instances.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_attach_instances.go new file mode 100644 index 000000000000..fa0e12837df3 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_attach_instances.go @@ -0,0 +1,194 @@ +// Code generated by volcengine with private/model/cli/gen-api/main.go. DO NOT EDIT. + +package autoscaling + +import ( + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/response" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil" +) + +const opAttachInstancesCommon = "AttachInstances" + +// AttachInstancesCommonRequest generates a "volcengine/request.Request" representing the +// client's request for the AttachInstancesCommon operation. 
The "output" return +// value will be populated with the AttachInstancesCommon request's response once the request completes +// successfully. +// +// Use "Send" method on the returned AttachInstancesCommon Request to send the API call to the service. +// the "output" return value is not valid until after AttachInstancesCommon Send returns without error. +// +// See AttachInstancesCommon for more information on using the AttachInstancesCommon +// API call, and error handling. +// +// // Example sending a request using the AttachInstancesCommonRequest method. +// req, resp := client.AttachInstancesCommonRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *AUTOSCALING) AttachInstancesCommonRequest(input *map[string]interface{}) (req *request.Request, output *map[string]interface{}) { + op := &request.Operation{ + Name: opAttachInstancesCommon, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &map[string]interface{}{} + } + + output = &map[string]interface{}{} + req = c.newRequest(op, input, output) + + return +} + +// AttachInstancesCommon API operation for AUTO_SCALING. +// +// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the VOLCENGINE API reference guide for AUTO_SCALING's +// API operation AttachInstancesCommon for usage and error information. +func (c *AUTOSCALING) AttachInstancesCommon(input *map[string]interface{}) (*map[string]interface{}, error) { + req, out := c.AttachInstancesCommonRequest(input) + return out, req.Send() +} + +// AttachInstancesCommonWithContext is the same as AttachInstancesCommon with the addition of +// the ability to pass a context and additional request options. +// +// See AttachInstancesCommon for details on how to use this API operation. 
+// +// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur. +// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *AUTOSCALING) AttachInstancesCommonWithContext(ctx volcengine.Context, input *map[string]interface{}, opts ...request.Option) (*map[string]interface{}, error) { + req, out := c.AttachInstancesCommonRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opAttachInstances = "AttachInstances" + +// AttachInstancesRequest generates a "volcengine/request.Request" representing the +// client's request for the AttachInstances operation. The "output" return +// value will be populated with the AttachInstancesCommon request's response once the request completes +// successfully. +// +// Use "Send" method on the returned AttachInstancesCommon Request to send the API call to the service. +// the "output" return value is not valid until after AttachInstancesCommon Send returns without error. +// +// See AttachInstances for more information on using the AttachInstances +// API call, and error handling. +// +// // Example sending a request using the AttachInstancesRequest method. +// req, resp := client.AttachInstancesRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *AUTOSCALING) AttachInstancesRequest(input *AttachInstancesInput) (req *request.Request, output *AttachInstancesOutput) { + op := &request.Operation{ + Name: opAttachInstances, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &AttachInstancesInput{} + } + + output = &AttachInstancesOutput{} + req = c.newRequest(op, input, output) + + return +} + +// AttachInstances API operation for AUTO_SCALING. +// +// Returns volcengineerr.Error for service API and SDK errors. 
Use runtime type assertions
+// with volcengineerr.Error's Code and Message methods to get detailed information about
+// the error.
+//
+// See the VOLCENGINE API reference guide for AUTO_SCALING's
+// API operation AttachInstances for usage and error information.
+func (c *AUTOSCALING) AttachInstances(input *AttachInstancesInput) (*AttachInstancesOutput, error) {
+	req, out := c.AttachInstancesRequest(input)
+	return out, req.Send()
+}
+
+// AttachInstancesWithContext is the same as AttachInstances with the addition of
+// the ability to pass a context and additional request options.
+//
+// See AttachInstances for details on how to use this API operation.
+//
+// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur.
+// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/
+// for more information on using Contexts.
+func (c *AUTOSCALING) AttachInstancesWithContext(ctx volcengine.Context, input *AttachInstancesInput, opts ...request.Option) (*AttachInstancesOutput, error) {
+	req, out := c.AttachInstancesRequest(input)
+	req.SetContext(ctx)
+	req.ApplyOptions(opts...)
+	return out, req.Send()
+}
+
+type AttachInstancesInput struct {
+	_ struct{} `type:"structure"`
+
+	Entrusted *bool `type:"boolean"`
+
+	InstanceIds []*string `type:"list"`
+
+	ScalingGroupId *string `type:"string"`
+}
+
+// String returns the string representation
+func (s AttachInstancesInput) String() string {
+	return volcengineutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s AttachInstancesInput) GoString() string {
+	return s.String()
+}
+
+// SetEntrusted sets the Entrusted field's value.
+func (s *AttachInstancesInput) SetEntrusted(v bool) *AttachInstancesInput {
+	s.Entrusted = &v
+	return s
+}
+
+// SetInstanceIds sets the InstanceIds field's value.
+func (s *AttachInstancesInput) SetInstanceIds(v []*string) *AttachInstancesInput { + s.InstanceIds = v + return s +} + +// SetScalingGroupId sets the ScalingGroupId field's value. +func (s *AttachInstancesInput) SetScalingGroupId(v string) *AttachInstancesInput { + s.ScalingGroupId = &v + return s +} + +type AttachInstancesOutput struct { + _ struct{} `type:"structure"` + + Metadata *response.ResponseMetadata +} + +// String returns the string representation +func (s AttachInstancesOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s AttachInstancesOutput) GoString() string { + return s.String() +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_attach_server_groups.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_attach_server_groups.go new file mode 100644 index 000000000000..e31f6ad3f126 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_attach_server_groups.go @@ -0,0 +1,232 @@ +// Code generated by volcengine with private/model/cli/gen-api/main.go. DO NOT EDIT. + +package autoscaling + +import ( + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/response" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil" +) + +const opAttachServerGroupsCommon = "AttachServerGroups" + +// AttachServerGroupsCommonRequest generates a "volcengine/request.Request" representing the +// client's request for the AttachServerGroupsCommon operation. 
The "output" return +// value will be populated with the AttachServerGroupsCommon request's response once the request completes +// successfully. +// +// Use "Send" method on the returned AttachServerGroupsCommon Request to send the API call to the service. +// the "output" return value is not valid until after AttachServerGroupsCommon Send returns without error. +// +// See AttachServerGroupsCommon for more information on using the AttachServerGroupsCommon +// API call, and error handling. +// +// // Example sending a request using the AttachServerGroupsCommonRequest method. +// req, resp := client.AttachServerGroupsCommonRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *AUTOSCALING) AttachServerGroupsCommonRequest(input *map[string]interface{}) (req *request.Request, output *map[string]interface{}) { + op := &request.Operation{ + Name: opAttachServerGroupsCommon, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &map[string]interface{}{} + } + + output = &map[string]interface{}{} + req = c.newRequest(op, input, output) + + return +} + +// AttachServerGroupsCommon API operation for AUTO_SCALING. +// +// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the VOLCENGINE API reference guide for AUTO_SCALING's +// API operation AttachServerGroupsCommon for usage and error information. +func (c *AUTOSCALING) AttachServerGroupsCommon(input *map[string]interface{}) (*map[string]interface{}, error) { + req, out := c.AttachServerGroupsCommonRequest(input) + return out, req.Send() +} + +// AttachServerGroupsCommonWithContext is the same as AttachServerGroupsCommon with the addition of +// the ability to pass a context and additional request options. 
+// +// See AttachServerGroupsCommon for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur. +// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *AUTOSCALING) AttachServerGroupsCommonWithContext(ctx volcengine.Context, input *map[string]interface{}, opts ...request.Option) (*map[string]interface{}, error) { + req, out := c.AttachServerGroupsCommonRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opAttachServerGroups = "AttachServerGroups" + +// AttachServerGroupsRequest generates a "volcengine/request.Request" representing the +// client's request for the AttachServerGroups operation. The "output" return +// value will be populated with the AttachServerGroupsCommon request's response once the request completes +// successfully. +// +// Use "Send" method on the returned AttachServerGroupsCommon Request to send the API call to the service. +// the "output" return value is not valid until after AttachServerGroupsCommon Send returns without error. +// +// See AttachServerGroups for more information on using the AttachServerGroups +// API call, and error handling. +// +// // Example sending a request using the AttachServerGroupsRequest method. 
+// req, resp := client.AttachServerGroupsRequest(params)
+//
+// err := req.Send()
+// if err == nil { // resp is now filled
+// fmt.Println(resp)
+// }
+func (c *AUTOSCALING) AttachServerGroupsRequest(input *AttachServerGroupsInput) (req *request.Request, output *AttachServerGroupsOutput) {
+	op := &request.Operation{
+		Name: opAttachServerGroups,
+		HTTPMethod: "GET",
+		HTTPPath: "/",
+	}
+
+	if input == nil {
+		input = &AttachServerGroupsInput{}
+	}
+
+	output = &AttachServerGroupsOutput{}
+	req = c.newRequest(op, input, output)
+
+	return
+}
+
+// AttachServerGroups API operation for AUTO_SCALING.
+//
+// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions
+// with volcengineerr.Error's Code and Message methods to get detailed information about
+// the error.
+//
+// See the VOLCENGINE API reference guide for AUTO_SCALING's
+// API operation AttachServerGroups for usage and error information.
+func (c *AUTOSCALING) AttachServerGroups(input *AttachServerGroupsInput) (*AttachServerGroupsOutput, error) {
+	req, out := c.AttachServerGroupsRequest(input)
+	return out, req.Send()
+}
+
+// AttachServerGroupsWithContext is the same as AttachServerGroups with the addition of
+// the ability to pass a context and additional request options.
+//
+// See AttachServerGroups for details on how to use this API operation.
+//
+// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur.
+// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/
+// for more information on using Contexts.
+func (c *AUTOSCALING) AttachServerGroupsWithContext(ctx volcengine.Context, input *AttachServerGroupsInput, opts ...request.Option) (*AttachServerGroupsOutput, error) {
+	req, out := c.AttachServerGroupsRequest(input)
+	req.SetContext(ctx)
+	req.ApplyOptions(opts...)
+ return out, req.Send() +} + +type AttachServerGroupsInput struct { + _ struct{} `type:"structure"` + + ScalingGroupId *string `type:"string"` + + ServerGroupAttributes []*ServerGroupAttributeForAttachServerGroupsInput `type:"list"` +} + +// String returns the string representation +func (s AttachServerGroupsInput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s AttachServerGroupsInput) GoString() string { + return s.String() +} + +// SetScalingGroupId sets the ScalingGroupId field's value. +func (s *AttachServerGroupsInput) SetScalingGroupId(v string) *AttachServerGroupsInput { + s.ScalingGroupId = &v + return s +} + +// SetServerGroupAttributes sets the ServerGroupAttributes field's value. +func (s *AttachServerGroupsInput) SetServerGroupAttributes(v []*ServerGroupAttributeForAttachServerGroupsInput) *AttachServerGroupsInput { + s.ServerGroupAttributes = v + return s +} + +type AttachServerGroupsOutput struct { + _ struct{} `type:"structure"` + + Metadata *response.ResponseMetadata + + ScalingGroupId *string `type:"string"` +} + +// String returns the string representation +func (s AttachServerGroupsOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s AttachServerGroupsOutput) GoString() string { + return s.String() +} + +// SetScalingGroupId sets the ScalingGroupId field's value. 
+func (s *AttachServerGroupsOutput) SetScalingGroupId(v string) *AttachServerGroupsOutput { + s.ScalingGroupId = &v + return s +} + +type ServerGroupAttributeForAttachServerGroupsInput struct { + _ struct{} `type:"structure"` + + Port *int32 `type:"int32"` + + ServerGroupId *string `type:"string"` + + Weight *int32 `type:"int32"` +} + +// String returns the string representation +func (s ServerGroupAttributeForAttachServerGroupsInput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s ServerGroupAttributeForAttachServerGroupsInput) GoString() string { + return s.String() +} + +// SetPort sets the Port field's value. +func (s *ServerGroupAttributeForAttachServerGroupsInput) SetPort(v int32) *ServerGroupAttributeForAttachServerGroupsInput { + s.Port = &v + return s +} + +// SetServerGroupId sets the ServerGroupId field's value. +func (s *ServerGroupAttributeForAttachServerGroupsInput) SetServerGroupId(v string) *ServerGroupAttributeForAttachServerGroupsInput { + s.ServerGroupId = &v + return s +} + +// SetWeight sets the Weight field's value. +func (s *ServerGroupAttributeForAttachServerGroupsInput) SetWeight(v int32) *ServerGroupAttributeForAttachServerGroupsInput { + s.Weight = &v + return s +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_complete_lifecycle_activity.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_complete_lifecycle_activity.go new file mode 100644 index 000000000000..04b94eb0b023 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_complete_lifecycle_activity.go @@ -0,0 +1,202 @@ +// Code generated by volcengine with private/model/cli/gen-api/main.go. DO NOT EDIT. 
+ +package autoscaling + +import ( + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/response" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil" +) + +const opCompleteLifecycleActivityCommon = "CompleteLifecycleActivity" + +// CompleteLifecycleActivityCommonRequest generates a "volcengine/request.Request" representing the +// client's request for the CompleteLifecycleActivityCommon operation. The "output" return +// value will be populated with the CompleteLifecycleActivityCommon request's response once the request completes +// successfully. +// +// Use "Send" method on the returned CompleteLifecycleActivityCommon Request to send the API call to the service. +// the "output" return value is not valid until after CompleteLifecycleActivityCommon Send returns without error. +// +// See CompleteLifecycleActivityCommon for more information on using the CompleteLifecycleActivityCommon +// API call, and error handling. +// +// // Example sending a request using the CompleteLifecycleActivityCommonRequest method. +// req, resp := client.CompleteLifecycleActivityCommonRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *AUTOSCALING) CompleteLifecycleActivityCommonRequest(input *map[string]interface{}) (req *request.Request, output *map[string]interface{}) { + op := &request.Operation{ + Name: opCompleteLifecycleActivityCommon, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &map[string]interface{}{} + } + + output = &map[string]interface{}{} + req = c.newRequest(op, input, output) + + return +} + +// CompleteLifecycleActivityCommon API operation for AUTO_SCALING. 
+// +// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the VOLCENGINE API reference guide for AUTO_SCALING's +// API operation CompleteLifecycleActivityCommon for usage and error information. +func (c *AUTOSCALING) CompleteLifecycleActivityCommon(input *map[string]interface{}) (*map[string]interface{}, error) { + req, out := c.CompleteLifecycleActivityCommonRequest(input) + return out, req.Send() +} + +// CompleteLifecycleActivityCommonWithContext is the same as CompleteLifecycleActivityCommon with the addition of +// the ability to pass a context and additional request options. +// +// See CompleteLifecycleActivityCommon for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur. +// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *AUTOSCALING) CompleteLifecycleActivityCommonWithContext(ctx volcengine.Context, input *map[string]interface{}, opts ...request.Option) (*map[string]interface{}, error) { + req, out := c.CompleteLifecycleActivityCommonRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opCompleteLifecycleActivity = "CompleteLifecycleActivity" + +// CompleteLifecycleActivityRequest generates a "volcengine/request.Request" representing the +// client's request for the CompleteLifecycleActivity operation. The "output" return +// value will be populated with the CompleteLifecycleActivityCommon request's response once the request completes +// successfully. +// +// Use "Send" method on the returned CompleteLifecycleActivityCommon Request to send the API call to the service. 
+// the "output" return value is not valid until after CompleteLifecycleActivityCommon Send returns without error. +// +// See CompleteLifecycleActivity for more information on using the CompleteLifecycleActivity +// API call, and error handling. +// +// // Example sending a request using the CompleteLifecycleActivityRequest method. +// req, resp := client.CompleteLifecycleActivityRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *AUTOSCALING) CompleteLifecycleActivityRequest(input *CompleteLifecycleActivityInput) (req *request.Request, output *CompleteLifecycleActivityOutput) { + op := &request.Operation{ + Name: opCompleteLifecycleActivity, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &CompleteLifecycleActivityInput{} + } + + output = &CompleteLifecycleActivityOutput{} + req = c.newRequest(op, input, output) + + return +} + +// CompleteLifecycleActivity API operation for AUTO_SCALING. +// +// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the VOLCENGINE API reference guide for AUTO_SCALING's +// API operation CompleteLifecycleActivity for usage and error information. +func (c *AUTOSCALING) CompleteLifecycleActivity(input *CompleteLifecycleActivityInput) (*CompleteLifecycleActivityOutput, error) { + req, out := c.CompleteLifecycleActivityRequest(input) + return out, req.Send() +} + +// CompleteLifecycleActivityWithContext is the same as CompleteLifecycleActivity with the addition of +// the ability to pass a context and additional request options. +// +// See CompleteLifecycleActivity for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. Ifthe context is nil a panic will occur. 
+// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *AUTOSCALING) CompleteLifecycleActivityWithContext(ctx volcengine.Context, input *CompleteLifecycleActivityInput, opts ...request.Option) (*CompleteLifecycleActivityOutput, error) { + req, out := c.CompleteLifecycleActivityRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +type CompleteLifecycleActivityInput struct { + _ struct{} `type:"structure"` + + LifecycleActivityId *string `type:"string"` + + LifecycleActivityPolicy *string `type:"string"` +} + +// String returns the string representation +func (s CompleteLifecycleActivityInput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s CompleteLifecycleActivityInput) GoString() string { + return s.String() +} + +// SetLifecycleActivityId sets the LifecycleActivityId field's value. +func (s *CompleteLifecycleActivityInput) SetLifecycleActivityId(v string) *CompleteLifecycleActivityInput { + s.LifecycleActivityId = &v + return s +} + +// SetLifecycleActivityPolicy sets the LifecycleActivityPolicy field's value. +func (s *CompleteLifecycleActivityInput) SetLifecycleActivityPolicy(v string) *CompleteLifecycleActivityInput { + s.LifecycleActivityPolicy = &v + return s +} + +type CompleteLifecycleActivityOutput struct { + _ struct{} `type:"structure"` + + Metadata *response.ResponseMetadata + + InstanceId *string `type:"string"` + + LifecycleActivityId *string `type:"string"` +} + +// String returns the string representation +func (s CompleteLifecycleActivityOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s CompleteLifecycleActivityOutput) GoString() string { + return s.String() +} + +// SetInstanceId sets the InstanceId field's value. 
+func (s *CompleteLifecycleActivityOutput) SetInstanceId(v string) *CompleteLifecycleActivityOutput { + s.InstanceId = &v + return s +} + +// SetLifecycleActivityId sets the LifecycleActivityId field's value. +func (s *CompleteLifecycleActivityOutput) SetLifecycleActivityId(v string) *CompleteLifecycleActivityOutput { + s.LifecycleActivityId = &v + return s +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_create_lifecycle_hook.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_create_lifecycle_hook.go new file mode 100644 index 000000000000..c21658fd0a19 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_create_lifecycle_hook.go @@ -0,0 +1,218 @@ +// Code generated by volcengine with private/model/cli/gen-api/main.go. DO NOT EDIT. + +package autoscaling + +import ( + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/response" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil" +) + +const opCreateLifecycleHookCommon = "CreateLifecycleHook" + +// CreateLifecycleHookCommonRequest generates a "volcengine/request.Request" representing the +// client's request for the CreateLifecycleHookCommon operation. The "output" return +// value will be populated with the CreateLifecycleHookCommon request's response once the request completes +// successfully. +// +// Use "Send" method on the returned CreateLifecycleHookCommon Request to send the API call to the service. +// the "output" return value is not valid until after CreateLifecycleHookCommon Send returns without error. 
+// +// See CreateLifecycleHookCommon for more information on using the CreateLifecycleHookCommon +// API call, and error handling. +// +// // Example sending a request using the CreateLifecycleHookCommonRequest method. +// req, resp := client.CreateLifecycleHookCommonRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *AUTOSCALING) CreateLifecycleHookCommonRequest(input *map[string]interface{}) (req *request.Request, output *map[string]interface{}) { + op := &request.Operation{ + Name: opCreateLifecycleHookCommon, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &map[string]interface{}{} + } + + output = &map[string]interface{}{} + req = c.newRequest(op, input, output) + + return +} + +// CreateLifecycleHookCommon API operation for AUTO_SCALING. +// +// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the VOLCENGINE API reference guide for AUTO_SCALING's +// API operation CreateLifecycleHookCommon for usage and error information. +func (c *AUTOSCALING) CreateLifecycleHookCommon(input *map[string]interface{}) (*map[string]interface{}, error) { + req, out := c.CreateLifecycleHookCommonRequest(input) + return out, req.Send() +} + +// CreateLifecycleHookCommonWithContext is the same as CreateLifecycleHookCommon with the addition of +// the ability to pass a context and additional request options. +// +// See CreateLifecycleHookCommon for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur. +// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
+func (c *AUTOSCALING) CreateLifecycleHookCommonWithContext(ctx volcengine.Context, input *map[string]interface{}, opts ...request.Option) (*map[string]interface{}, error) { + req, out := c.CreateLifecycleHookCommonRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opCreateLifecycleHook = "CreateLifecycleHook" + +// CreateLifecycleHookRequest generates a "volcengine/request.Request" representing the +// client's request for the CreateLifecycleHook operation. The "output" return +// value will be populated with the CreateLifecycleHookCommon request's response once the request completes +// successfully. +// +// Use "Send" method on the returned CreateLifecycleHookCommon Request to send the API call to the service. +// the "output" return value is not valid until after CreateLifecycleHookCommon Send returns without error. +// +// See CreateLifecycleHook for more information on using the CreateLifecycleHook +// API call, and error handling. +// +// // Example sending a request using the CreateLifecycleHookRequest method. +// req, resp := client.CreateLifecycleHookRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *AUTOSCALING) CreateLifecycleHookRequest(input *CreateLifecycleHookInput) (req *request.Request, output *CreateLifecycleHookOutput) { + op := &request.Operation{ + Name: opCreateLifecycleHook, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &CreateLifecycleHookInput{} + } + + output = &CreateLifecycleHookOutput{} + req = c.newRequest(op, input, output) + + return +} + +// CreateLifecycleHook API operation for AUTO_SCALING. +// +// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. 
+//
+// See the VOLCENGINE API reference guide for AUTO_SCALING's
+// API operation CreateLifecycleHook for usage and error information.
+func (c *AUTOSCALING) CreateLifecycleHook(input *CreateLifecycleHookInput) (*CreateLifecycleHookOutput, error) {
+	req, out := c.CreateLifecycleHookRequest(input)
+	return out, req.Send()
+}
+
+// CreateLifecycleHookWithContext is the same as CreateLifecycleHook with the addition of
+// the ability to pass a context and additional request options.
+//
+// See CreateLifecycleHook for details on how to use this API operation.
+//
+// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur.
+// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/
+// for more information on using Contexts.
+func (c *AUTOSCALING) CreateLifecycleHookWithContext(ctx volcengine.Context, input *CreateLifecycleHookInput, opts ...request.Option) (*CreateLifecycleHookOutput, error) {
+	req, out := c.CreateLifecycleHookRequest(input)
+	req.SetContext(ctx)
+	req.ApplyOptions(opts...)
+	return out, req.Send()
+}
+
+type CreateLifecycleHookInput struct {
+	_ struct{} `type:"structure"`
+
+	LifecycleHookName *string `type:"string"`
+
+	LifecycleHookPolicy *string `type:"string"`
+
+	LifecycleHookTimeout *int32 `type:"int32"`
+
+	LifecycleHookType *string `type:"string"`
+
+	ScalingGroupId *string `type:"string"`
+}
+
+// String returns the string representation
+func (s CreateLifecycleHookInput) String() string {
+	return volcengineutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s CreateLifecycleHookInput) GoString() string {
+	return s.String()
+}
+
+// SetLifecycleHookName sets the LifecycleHookName field's value.
+func (s *CreateLifecycleHookInput) SetLifecycleHookName(v string) *CreateLifecycleHookInput {
+	s.LifecycleHookName = &v
+	return s
+}
+
+// SetLifecycleHookPolicy sets the LifecycleHookPolicy field's value.
+func (s *CreateLifecycleHookInput) SetLifecycleHookPolicy(v string) *CreateLifecycleHookInput { + s.LifecycleHookPolicy = &v + return s +} + +// SetLifecycleHookTimeout sets the LifecycleHookTimeout field's value. +func (s *CreateLifecycleHookInput) SetLifecycleHookTimeout(v int32) *CreateLifecycleHookInput { + s.LifecycleHookTimeout = &v + return s +} + +// SetLifecycleHookType sets the LifecycleHookType field's value. +func (s *CreateLifecycleHookInput) SetLifecycleHookType(v string) *CreateLifecycleHookInput { + s.LifecycleHookType = &v + return s +} + +// SetScalingGroupId sets the ScalingGroupId field's value. +func (s *CreateLifecycleHookInput) SetScalingGroupId(v string) *CreateLifecycleHookInput { + s.ScalingGroupId = &v + return s +} + +type CreateLifecycleHookOutput struct { + _ struct{} `type:"structure"` + + Metadata *response.ResponseMetadata + + LifecycleHookId *string `type:"string"` +} + +// String returns the string representation +func (s CreateLifecycleHookOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateLifecycleHookOutput) GoString() string { + return s.String() +} + +// SetLifecycleHookId sets the LifecycleHookId field's value. +func (s *CreateLifecycleHookOutput) SetLifecycleHookId(v string) *CreateLifecycleHookOutput { + s.LifecycleHookId = &v + return s +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_create_scaling_configuration.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_create_scaling_configuration.go new file mode 100644 index 000000000000..8a3396b82782 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_create_scaling_configuration.go @@ -0,0 +1,374 @@ +// Code generated by volcengine with private/model/cli/gen-api/main.go. DO NOT EDIT. 
+ +package autoscaling + +import ( + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/response" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil" +) + +const opCreateScalingConfigurationCommon = "CreateScalingConfiguration" + +// CreateScalingConfigurationCommonRequest generates a "volcengine/request.Request" representing the +// client's request for the CreateScalingConfigurationCommon operation. The "output" return +// value will be populated with the CreateScalingConfigurationCommon request's response once the request completes +// successfully. +// +// Use "Send" method on the returned CreateScalingConfigurationCommon Request to send the API call to the service. +// the "output" return value is not valid until after CreateScalingConfigurationCommon Send returns without error. +// +// See CreateScalingConfigurationCommon for more information on using the CreateScalingConfigurationCommon +// API call, and error handling. +// +// // Example sending a request using the CreateScalingConfigurationCommonRequest method. +// req, resp := client.CreateScalingConfigurationCommonRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *AUTOSCALING) CreateScalingConfigurationCommonRequest(input *map[string]interface{}) (req *request.Request, output *map[string]interface{}) { + op := &request.Operation{ + Name: opCreateScalingConfigurationCommon, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &map[string]interface{}{} + } + + output = &map[string]interface{}{} + req = c.newRequest(op, input, output) + + return +} + +// CreateScalingConfigurationCommon API operation for AUTO_SCALING. 
+//
+// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions
+// with volcengineerr.Error's Code and Message methods to get detailed information about
+// the error.
+//
+// See the VOLCENGINE API reference guide for AUTO_SCALING's
+// API operation CreateScalingConfigurationCommon for usage and error information.
+func (c *AUTOSCALING) CreateScalingConfigurationCommon(input *map[string]interface{}) (*map[string]interface{}, error) {
+	req, out := c.CreateScalingConfigurationCommonRequest(input)
+	return out, req.Send()
+}
+
+// CreateScalingConfigurationCommonWithContext is the same as CreateScalingConfigurationCommon with the addition of
+// the ability to pass a context and additional request options.
+//
+// See CreateScalingConfigurationCommon for details on how to use this API operation.
+//
+// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur.
+// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/
+// for more information on using Contexts.
+func (c *AUTOSCALING) CreateScalingConfigurationCommonWithContext(ctx volcengine.Context, input *map[string]interface{}, opts ...request.Option) (*map[string]interface{}, error) {
+	req, out := c.CreateScalingConfigurationCommonRequest(input)
+	req.SetContext(ctx)
+	req.ApplyOptions(opts...)
+	return out, req.Send()
+}
+
+const opCreateScalingConfiguration = "CreateScalingConfiguration"
+
+// CreateScalingConfigurationRequest generates a "volcengine/request.Request" representing the
+// client's request for the CreateScalingConfiguration operation. The "output" return
+// value will be populated with the CreateScalingConfiguration request's response once the request completes
+// successfully.
+//
+// Use "Send" method on the returned CreateScalingConfiguration Request to send the API call to the service.
+// the "output" return value is not valid until after CreateScalingConfiguration Send returns without error.
+//
+// See CreateScalingConfiguration for more information on using the CreateScalingConfiguration
+// API call, and error handling.
+//
+//	// Example sending a request using the CreateScalingConfigurationRequest method.
+//	req, resp := client.CreateScalingConfigurationRequest(params)
+//
+//	err := req.Send()
+//	if err == nil { // resp is now filled
+//	    fmt.Println(resp)
+//	}
+func (c *AUTOSCALING) CreateScalingConfigurationRequest(input *CreateScalingConfigurationInput) (req *request.Request, output *CreateScalingConfigurationOutput) {
+	op := &request.Operation{
+		Name:       opCreateScalingConfiguration,
+		HTTPMethod: "GET",
+		HTTPPath:   "/",
+	}
+
+	if input == nil {
+		input = &CreateScalingConfigurationInput{}
+	}
+
+	output = &CreateScalingConfigurationOutput{}
+	req = c.newRequest(op, input, output)
+
+	return
+}
+
+// CreateScalingConfiguration API operation for AUTO_SCALING.
+//
+// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions
+// with volcengineerr.Error's Code and Message methods to get detailed information about
+// the error.
+//
+// See the VOLCENGINE API reference guide for AUTO_SCALING's
+// API operation CreateScalingConfiguration for usage and error information.
+func (c *AUTOSCALING) CreateScalingConfiguration(input *CreateScalingConfigurationInput) (*CreateScalingConfigurationOutput, error) {
+	req, out := c.CreateScalingConfigurationRequest(input)
+	return out, req.Send()
+}
+
+// CreateScalingConfigurationWithContext is the same as CreateScalingConfiguration with the addition of
+// the ability to pass a context and additional request options.
+//
+// See CreateScalingConfiguration for details on how to use this API operation.
+//
+// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur.
+// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *AUTOSCALING) CreateScalingConfigurationWithContext(ctx volcengine.Context, input *CreateScalingConfigurationInput, opts ...request.Option) (*CreateScalingConfigurationOutput, error) { + req, out := c.CreateScalingConfigurationRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +type CreateScalingConfigurationInput struct { + _ struct{} `type:"structure"` + + Eip *EipForCreateScalingConfigurationInput `type:"structure"` + + HostName *string `type:"string"` + + ImageId *string `type:"string"` + + InstanceDescription *string `type:"string"` + + InstanceName *string `type:"string"` + + InstanceTypes []*string `type:"list"` + + KeyPairName *string `type:"string"` + + Password *string `type:"string"` + + ScalingConfigurationName *string `type:"string"` + + ScalingGroupId *string `type:"string"` + + SecurityEnhancementStrategy *string `type:"string"` + + SecurityGroupIds []*string `type:"list"` + + UserData *string `type:"string"` + + Volumes []*VolumeForCreateScalingConfigurationInput `type:"list"` + + ZoneId *string `type:"string"` +} + +// String returns the string representation +func (s CreateScalingConfigurationInput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateScalingConfigurationInput) GoString() string { + return s.String() +} + +// SetEip sets the Eip field's value. +func (s *CreateScalingConfigurationInput) SetEip(v *EipForCreateScalingConfigurationInput) *CreateScalingConfigurationInput { + s.Eip = v + return s +} + +// SetHostName sets the HostName field's value. +func (s *CreateScalingConfigurationInput) SetHostName(v string) *CreateScalingConfigurationInput { + s.HostName = &v + return s +} + +// SetImageId sets the ImageId field's value. 
+func (s *CreateScalingConfigurationInput) SetImageId(v string) *CreateScalingConfigurationInput { + s.ImageId = &v + return s +} + +// SetInstanceDescription sets the InstanceDescription field's value. +func (s *CreateScalingConfigurationInput) SetInstanceDescription(v string) *CreateScalingConfigurationInput { + s.InstanceDescription = &v + return s +} + +// SetInstanceName sets the InstanceName field's value. +func (s *CreateScalingConfigurationInput) SetInstanceName(v string) *CreateScalingConfigurationInput { + s.InstanceName = &v + return s +} + +// SetInstanceTypes sets the InstanceTypes field's value. +func (s *CreateScalingConfigurationInput) SetInstanceTypes(v []*string) *CreateScalingConfigurationInput { + s.InstanceTypes = v + return s +} + +// SetKeyPairName sets the KeyPairName field's value. +func (s *CreateScalingConfigurationInput) SetKeyPairName(v string) *CreateScalingConfigurationInput { + s.KeyPairName = &v + return s +} + +// SetPassword sets the Password field's value. +func (s *CreateScalingConfigurationInput) SetPassword(v string) *CreateScalingConfigurationInput { + s.Password = &v + return s +} + +// SetScalingConfigurationName sets the ScalingConfigurationName field's value. +func (s *CreateScalingConfigurationInput) SetScalingConfigurationName(v string) *CreateScalingConfigurationInput { + s.ScalingConfigurationName = &v + return s +} + +// SetScalingGroupId sets the ScalingGroupId field's value. +func (s *CreateScalingConfigurationInput) SetScalingGroupId(v string) *CreateScalingConfigurationInput { + s.ScalingGroupId = &v + return s +} + +// SetSecurityEnhancementStrategy sets the SecurityEnhancementStrategy field's value. +func (s *CreateScalingConfigurationInput) SetSecurityEnhancementStrategy(v string) *CreateScalingConfigurationInput { + s.SecurityEnhancementStrategy = &v + return s +} + +// SetSecurityGroupIds sets the SecurityGroupIds field's value. 
+func (s *CreateScalingConfigurationInput) SetSecurityGroupIds(v []*string) *CreateScalingConfigurationInput { + s.SecurityGroupIds = v + return s +} + +// SetUserData sets the UserData field's value. +func (s *CreateScalingConfigurationInput) SetUserData(v string) *CreateScalingConfigurationInput { + s.UserData = &v + return s +} + +// SetVolumes sets the Volumes field's value. +func (s *CreateScalingConfigurationInput) SetVolumes(v []*VolumeForCreateScalingConfigurationInput) *CreateScalingConfigurationInput { + s.Volumes = v + return s +} + +// SetZoneId sets the ZoneId field's value. +func (s *CreateScalingConfigurationInput) SetZoneId(v string) *CreateScalingConfigurationInput { + s.ZoneId = &v + return s +} + +type CreateScalingConfigurationOutput struct { + _ struct{} `type:"structure"` + + Metadata *response.ResponseMetadata + + ScalingConfigurationId *string `type:"string"` +} + +// String returns the string representation +func (s CreateScalingConfigurationOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateScalingConfigurationOutput) GoString() string { + return s.String() +} + +// SetScalingConfigurationId sets the ScalingConfigurationId field's value. +func (s *CreateScalingConfigurationOutput) SetScalingConfigurationId(v string) *CreateScalingConfigurationOutput { + s.ScalingConfigurationId = &v + return s +} + +type EipForCreateScalingConfigurationInput struct { + _ struct{} `type:"structure"` + + Bandwidth *int32 `type:"int32"` + + BillingType *string `type:"string"` + + ISP *string `type:"string"` +} + +// String returns the string representation +func (s EipForCreateScalingConfigurationInput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s EipForCreateScalingConfigurationInput) GoString() string { + return s.String() +} + +// SetBandwidth sets the Bandwidth field's value. 
+func (s *EipForCreateScalingConfigurationInput) SetBandwidth(v int32) *EipForCreateScalingConfigurationInput { + s.Bandwidth = &v + return s +} + +// SetBillingType sets the BillingType field's value. +func (s *EipForCreateScalingConfigurationInput) SetBillingType(v string) *EipForCreateScalingConfigurationInput { + s.BillingType = &v + return s +} + +// SetISP sets the ISP field's value. +func (s *EipForCreateScalingConfigurationInput) SetISP(v string) *EipForCreateScalingConfigurationInput { + s.ISP = &v + return s +} + +type VolumeForCreateScalingConfigurationInput struct { + _ struct{} `type:"structure"` + + DeleteWithInstance *bool `type:"boolean"` + + Size *int32 `type:"int32"` + + VolumeType *string `type:"string"` +} + +// String returns the string representation +func (s VolumeForCreateScalingConfigurationInput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s VolumeForCreateScalingConfigurationInput) GoString() string { + return s.String() +} + +// SetDeleteWithInstance sets the DeleteWithInstance field's value. +func (s *VolumeForCreateScalingConfigurationInput) SetDeleteWithInstance(v bool) *VolumeForCreateScalingConfigurationInput { + s.DeleteWithInstance = &v + return s +} + +// SetSize sets the Size field's value. +func (s *VolumeForCreateScalingConfigurationInput) SetSize(v int32) *VolumeForCreateScalingConfigurationInput { + s.Size = &v + return s +} + +// SetVolumeType sets the VolumeType field's value. 
+func (s *VolumeForCreateScalingConfigurationInput) SetVolumeType(v string) *VolumeForCreateScalingConfigurationInput { + s.VolumeType = &v + return s +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_create_scaling_group.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_create_scaling_group.go new file mode 100644 index 000000000000..ac8724d190cf --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_create_scaling_group.go @@ -0,0 +1,296 @@ +// Code generated by volcengine with private/model/cli/gen-api/main.go. DO NOT EDIT. + +package autoscaling + +import ( + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/response" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil" +) + +const opCreateScalingGroupCommon = "CreateScalingGroup" + +// CreateScalingGroupCommonRequest generates a "volcengine/request.Request" representing the +// client's request for the CreateScalingGroupCommon operation. The "output" return +// value will be populated with the CreateScalingGroupCommon request's response once the request completes +// successfully. +// +// Use "Send" method on the returned CreateScalingGroupCommon Request to send the API call to the service. +// the "output" return value is not valid until after CreateScalingGroupCommon Send returns without error. +// +// See CreateScalingGroupCommon for more information on using the CreateScalingGroupCommon +// API call, and error handling. +// +// // Example sending a request using the CreateScalingGroupCommonRequest method. 
+// req, resp := client.CreateScalingGroupCommonRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *AUTOSCALING) CreateScalingGroupCommonRequest(input *map[string]interface{}) (req *request.Request, output *map[string]interface{}) { + op := &request.Operation{ + Name: opCreateScalingGroupCommon, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &map[string]interface{}{} + } + + output = &map[string]interface{}{} + req = c.newRequest(op, input, output) + + return +} + +// CreateScalingGroupCommon API operation for AUTO_SCALING. +// +// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the VOLCENGINE API reference guide for AUTO_SCALING's +// API operation CreateScalingGroupCommon for usage and error information. +func (c *AUTOSCALING) CreateScalingGroupCommon(input *map[string]interface{}) (*map[string]interface{}, error) { + req, out := c.CreateScalingGroupCommonRequest(input) + return out, req.Send() +} + +// CreateScalingGroupCommonWithContext is the same as CreateScalingGroupCommon with the addition of +// the ability to pass a context and additional request options. +// +// See CreateScalingGroupCommon for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur. +// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *AUTOSCALING) CreateScalingGroupCommonWithContext(ctx volcengine.Context, input *map[string]interface{}, opts ...request.Option) (*map[string]interface{}, error) { + req, out := c.CreateScalingGroupCommonRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+	return out, req.Send()
+}
+
+const opCreateScalingGroup = "CreateScalingGroup"
+
+// CreateScalingGroupRequest generates a "volcengine/request.Request" representing the
+// client's request for the CreateScalingGroup operation. The "output" return
+// value will be populated with the CreateScalingGroup request's response once the request completes
+// successfully.
+//
+// Use "Send" method on the returned CreateScalingGroup Request to send the API call to the service.
+// the "output" return value is not valid until after CreateScalingGroup Send returns without error.
+//
+// See CreateScalingGroup for more information on using the CreateScalingGroup
+// API call, and error handling.
+//
+//	// Example sending a request using the CreateScalingGroupRequest method.
+//	req, resp := client.CreateScalingGroupRequest(params)
+//
+//	err := req.Send()
+//	if err == nil { // resp is now filled
+//	    fmt.Println(resp)
+//	}
+func (c *AUTOSCALING) CreateScalingGroupRequest(input *CreateScalingGroupInput) (req *request.Request, output *CreateScalingGroupOutput) {
+	op := &request.Operation{
+		Name:       opCreateScalingGroup,
+		HTTPMethod: "GET",
+		HTTPPath:   "/",
+	}
+
+	if input == nil {
+		input = &CreateScalingGroupInput{}
+	}
+
+	output = &CreateScalingGroupOutput{}
+	req = c.newRequest(op, input, output)
+
+	return
+}
+
+// CreateScalingGroup API operation for AUTO_SCALING.
+//
+// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions
+// with volcengineerr.Error's Code and Message methods to get detailed information about
+// the error.
+//
+// See the VOLCENGINE API reference guide for AUTO_SCALING's
+// API operation CreateScalingGroup for usage and error information.
+func (c *AUTOSCALING) CreateScalingGroup(input *CreateScalingGroupInput) (*CreateScalingGroupOutput, error) {
+	req, out := c.CreateScalingGroupRequest(input)
+	return out, req.Send()
+}
+
+// CreateScalingGroupWithContext is the same as CreateScalingGroup with the addition of
+// the ability to pass a context and additional request options.
+//
+// See CreateScalingGroup for details on how to use this API operation.
+//
+// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur.
+// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/
+// for more information on using Contexts.
+func (c *AUTOSCALING) CreateScalingGroupWithContext(ctx volcengine.Context, input *CreateScalingGroupInput, opts ...request.Option) (*CreateScalingGroupOutput, error) {
+	req, out := c.CreateScalingGroupRequest(input)
+	req.SetContext(ctx)
+	req.ApplyOptions(opts...)
+	return out, req.Send()
+}
+
+type CreateScalingGroupInput struct {
+	_ struct{} `type:"structure"`
+
+	DBInstanceIds []*string `type:"list"`
+
+	DefaultCooldown *int32 `type:"int32"`
+
+	DesireInstanceNumber *int32 `type:"int32"`
+
+	InstanceTerminatePolicy *string `type:"string"`
+
+	MaxInstanceNumber *int32 `type:"int32"`
+
+	MinInstanceNumber *int32 `type:"int32"`
+
+	MultiAZPolicy *string `type:"string"`
+
+	ScalingGroupName *string `type:"string"`
+
+	ServerGroupAttributes []*ServerGroupAttributeForCreateScalingGroupInput `type:"list"`
+
+	SubnetIds []*string `type:"list"`
+}
+
+// String returns the string representation
+func (s CreateScalingGroupInput) String() string {
+	return volcengineutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s CreateScalingGroupInput) GoString() string {
+	return s.String()
+}
+
+// SetDBInstanceIds sets the DBInstanceIds field's value.
+func (s *CreateScalingGroupInput) SetDBInstanceIds(v []*string) *CreateScalingGroupInput { + s.DBInstanceIds = v + return s +} + +// SetDefaultCooldown sets the DefaultCooldown field's value. +func (s *CreateScalingGroupInput) SetDefaultCooldown(v int32) *CreateScalingGroupInput { + s.DefaultCooldown = &v + return s +} + +// SetDesireInstanceNumber sets the DesireInstanceNumber field's value. +func (s *CreateScalingGroupInput) SetDesireInstanceNumber(v int32) *CreateScalingGroupInput { + s.DesireInstanceNumber = &v + return s +} + +// SetInstanceTerminatePolicy sets the InstanceTerminatePolicy field's value. +func (s *CreateScalingGroupInput) SetInstanceTerminatePolicy(v string) *CreateScalingGroupInput { + s.InstanceTerminatePolicy = &v + return s +} + +// SetMaxInstanceNumber sets the MaxInstanceNumber field's value. +func (s *CreateScalingGroupInput) SetMaxInstanceNumber(v int32) *CreateScalingGroupInput { + s.MaxInstanceNumber = &v + return s +} + +// SetMinInstanceNumber sets the MinInstanceNumber field's value. +func (s *CreateScalingGroupInput) SetMinInstanceNumber(v int32) *CreateScalingGroupInput { + s.MinInstanceNumber = &v + return s +} + +// SetMultiAZPolicy sets the MultiAZPolicy field's value. +func (s *CreateScalingGroupInput) SetMultiAZPolicy(v string) *CreateScalingGroupInput { + s.MultiAZPolicy = &v + return s +} + +// SetScalingGroupName sets the ScalingGroupName field's value. +func (s *CreateScalingGroupInput) SetScalingGroupName(v string) *CreateScalingGroupInput { + s.ScalingGroupName = &v + return s +} + +// SetServerGroupAttributes sets the ServerGroupAttributes field's value. +func (s *CreateScalingGroupInput) SetServerGroupAttributes(v []*ServerGroupAttributeForCreateScalingGroupInput) *CreateScalingGroupInput { + s.ServerGroupAttributes = v + return s +} + +// SetSubnetIds sets the SubnetIds field's value. 
+func (s *CreateScalingGroupInput) SetSubnetIds(v []*string) *CreateScalingGroupInput { + s.SubnetIds = v + return s +} + +type CreateScalingGroupOutput struct { + _ struct{} `type:"structure"` + + Metadata *response.ResponseMetadata + + ScalingGroupId *string `type:"string"` +} + +// String returns the string representation +func (s CreateScalingGroupOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateScalingGroupOutput) GoString() string { + return s.String() +} + +// SetScalingGroupId sets the ScalingGroupId field's value. +func (s *CreateScalingGroupOutput) SetScalingGroupId(v string) *CreateScalingGroupOutput { + s.ScalingGroupId = &v + return s +} + +type ServerGroupAttributeForCreateScalingGroupInput struct { + _ struct{} `type:"structure"` + + Port *int32 `type:"int32"` + + ServerGroupId *string `type:"string"` + + Weight *int32 `type:"int32"` +} + +// String returns the string representation +func (s ServerGroupAttributeForCreateScalingGroupInput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s ServerGroupAttributeForCreateScalingGroupInput) GoString() string { + return s.String() +} + +// SetPort sets the Port field's value. +func (s *ServerGroupAttributeForCreateScalingGroupInput) SetPort(v int32) *ServerGroupAttributeForCreateScalingGroupInput { + s.Port = &v + return s +} + +// SetServerGroupId sets the ServerGroupId field's value. +func (s *ServerGroupAttributeForCreateScalingGroupInput) SetServerGroupId(v string) *ServerGroupAttributeForCreateScalingGroupInput { + s.ServerGroupId = &v + return s +} + +// SetWeight sets the Weight field's value. 
+func (s *ServerGroupAttributeForCreateScalingGroupInput) SetWeight(v int32) *ServerGroupAttributeForCreateScalingGroupInput { + s.Weight = &v + return s +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_create_scaling_policy.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_create_scaling_policy.go new file mode 100644 index 000000000000..628180986577 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_create_scaling_policy.go @@ -0,0 +1,372 @@ +// Code generated by volcengine with private/model/cli/gen-api/main.go. DO NOT EDIT. + +package autoscaling + +import ( + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/response" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil" +) + +const opCreateScalingPolicyCommon = "CreateScalingPolicy" + +// CreateScalingPolicyCommonRequest generates a "volcengine/request.Request" representing the +// client's request for the CreateScalingPolicyCommon operation. The "output" return +// value will be populated with the CreateScalingPolicyCommon request's response once the request completes +// successfully. +// +// Use "Send" method on the returned CreateScalingPolicyCommon Request to send the API call to the service. +// the "output" return value is not valid until after CreateScalingPolicyCommon Send returns without error. +// +// See CreateScalingPolicyCommon for more information on using the CreateScalingPolicyCommon +// API call, and error handling. +// +// // Example sending a request using the CreateScalingPolicyCommonRequest method. 
+// req, resp := client.CreateScalingPolicyCommonRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *AUTOSCALING) CreateScalingPolicyCommonRequest(input *map[string]interface{}) (req *request.Request, output *map[string]interface{}) { + op := &request.Operation{ + Name: opCreateScalingPolicyCommon, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &map[string]interface{}{} + } + + output = &map[string]interface{}{} + req = c.newRequest(op, input, output) + + return +} + +// CreateScalingPolicyCommon API operation for AUTO_SCALING. +// +// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the VOLCENGINE API reference guide for AUTO_SCALING's +// API operation CreateScalingPolicyCommon for usage and error information. +func (c *AUTOSCALING) CreateScalingPolicyCommon(input *map[string]interface{}) (*map[string]interface{}, error) { + req, out := c.CreateScalingPolicyCommonRequest(input) + return out, req.Send() +} + +// CreateScalingPolicyCommonWithContext is the same as CreateScalingPolicyCommon with the addition of +// the ability to pass a context and additional request options. +// +// See CreateScalingPolicyCommon for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur. +// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *AUTOSCALING) CreateScalingPolicyCommonWithContext(ctx volcengine.Context, input *map[string]interface{}, opts ...request.Option) (*map[string]interface{}, error) { + req, out := c.CreateScalingPolicyCommonRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+	return out, req.Send()
+}
+
+const opCreateScalingPolicy = "CreateScalingPolicy"
+
+// CreateScalingPolicyRequest generates a "volcengine/request.Request" representing the
+// client's request for the CreateScalingPolicy operation. The "output" return
+// value will be populated with the CreateScalingPolicy request's response once the request completes
+// successfully.
+//
+// Use "Send" method on the returned CreateScalingPolicy Request to send the API call to the service.
+// the "output" return value is not valid until after CreateScalingPolicy Send returns without error.
+//
+// See CreateScalingPolicy for more information on using the CreateScalingPolicy
+// API call, and error handling.
+//
+//	// Example sending a request using the CreateScalingPolicyRequest method.
+//	req, resp := client.CreateScalingPolicyRequest(params)
+//
+//	err := req.Send()
+//	if err == nil { // resp is now filled
+//	    fmt.Println(resp)
+//	}
+func (c *AUTOSCALING) CreateScalingPolicyRequest(input *CreateScalingPolicyInput) (req *request.Request, output *CreateScalingPolicyOutput) {
+	op := &request.Operation{
+		Name:       opCreateScalingPolicy,
+		HTTPMethod: "GET",
+		HTTPPath:   "/",
+	}
+
+	if input == nil {
+		input = &CreateScalingPolicyInput{}
+	}
+
+	output = &CreateScalingPolicyOutput{}
+	req = c.newRequest(op, input, output)
+
+	return
+}
+
+// CreateScalingPolicy API operation for AUTO_SCALING.
+//
+// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions
+// with volcengineerr.Error's Code and Message methods to get detailed information about
+// the error.
+//
+// See the VOLCENGINE API reference guide for AUTO_SCALING's
+// API operation CreateScalingPolicy for usage and error information.
+func (c *AUTOSCALING) CreateScalingPolicy(input *CreateScalingPolicyInput) (*CreateScalingPolicyOutput, error) {
+	req, out := c.CreateScalingPolicyRequest(input)
+	return out, req.Send()
+}
+
+// CreateScalingPolicyWithContext is the same as CreateScalingPolicy with the addition of
+// the ability to pass a context and additional request options.
+//
+// See CreateScalingPolicy for details on how to use this API operation.
+//
+// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur.
+// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/
+// for more information on using Contexts.
+func (c *AUTOSCALING) CreateScalingPolicyWithContext(ctx volcengine.Context, input *CreateScalingPolicyInput, opts ...request.Option) (*CreateScalingPolicyOutput, error) {
+	req, out := c.CreateScalingPolicyRequest(input)
+	req.SetContext(ctx)
+	req.ApplyOptions(opts...)
+	return out, req.Send()
+}
+
+type AlarmPolicyConditionForCreateScalingPolicyInput struct {
+	_ struct{} `type:"structure"`
+
+	ComparisonOperator *string `type:"string"`
+
+	MetricName *string `type:"string"`
+
+	MetricUnit *string `type:"string"`
+
+	Threshold *string `type:"string"`
+}
+
+// String returns the string representation
+func (s AlarmPolicyConditionForCreateScalingPolicyInput) String() string {
+	return volcengineutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s AlarmPolicyConditionForCreateScalingPolicyInput) GoString() string {
+	return s.String()
+}
+
+// SetComparisonOperator sets the ComparisonOperator field's value.
+func (s *AlarmPolicyConditionForCreateScalingPolicyInput) SetComparisonOperator(v string) *AlarmPolicyConditionForCreateScalingPolicyInput {
+	s.ComparisonOperator = &v
+	return s
+}
+
+// SetMetricName sets the MetricName field's value.
+func (s *AlarmPolicyConditionForCreateScalingPolicyInput) SetMetricName(v string) *AlarmPolicyConditionForCreateScalingPolicyInput { + s.MetricName = &v + return s +} + +// SetMetricUnit sets the MetricUnit field's value. +func (s *AlarmPolicyConditionForCreateScalingPolicyInput) SetMetricUnit(v string) *AlarmPolicyConditionForCreateScalingPolicyInput { + s.MetricUnit = &v + return s +} + +// SetThreshold sets the Threshold field's value. +func (s *AlarmPolicyConditionForCreateScalingPolicyInput) SetThreshold(v string) *AlarmPolicyConditionForCreateScalingPolicyInput { + s.Threshold = &v + return s +} + +type AlarmPolicyForCreateScalingPolicyInput struct { + _ struct{} `type:"structure"` + + Condition *AlarmPolicyConditionForCreateScalingPolicyInput `type:"structure"` + + EvaluationCount *int32 `type:"int32"` + + RuleType *string `type:"string"` +} + +// String returns the string representation +func (s AlarmPolicyForCreateScalingPolicyInput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s AlarmPolicyForCreateScalingPolicyInput) GoString() string { + return s.String() +} + +// SetCondition sets the Condition field's value. +func (s *AlarmPolicyForCreateScalingPolicyInput) SetCondition(v *AlarmPolicyConditionForCreateScalingPolicyInput) *AlarmPolicyForCreateScalingPolicyInput { + s.Condition = v + return s +} + +// SetEvaluationCount sets the EvaluationCount field's value. +func (s *AlarmPolicyForCreateScalingPolicyInput) SetEvaluationCount(v int32) *AlarmPolicyForCreateScalingPolicyInput { + s.EvaluationCount = &v + return s +} + +// SetRuleType sets the RuleType field's value. 
+func (s *AlarmPolicyForCreateScalingPolicyInput) SetRuleType(v string) *AlarmPolicyForCreateScalingPolicyInput { + s.RuleType = &v + return s +} + +type CreateScalingPolicyInput struct { + _ struct{} `type:"structure"` + + AdjustmentType *string `type:"string"` + + AdjustmentValue *int32 `type:"int32"` + + AlarmPolicy *AlarmPolicyForCreateScalingPolicyInput `type:"structure"` + + Cooldown *int32 `type:"int32"` + + ScalingGroupId *string `type:"string"` + + ScalingPolicyName *string `type:"string"` + + ScalingPolicyType *string `type:"string"` + + ScheduledPolicy *ScheduledPolicyForCreateScalingPolicyInput `type:"structure"` +} + +// String returns the string representation +func (s CreateScalingPolicyInput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateScalingPolicyInput) GoString() string { + return s.String() +} + +// SetAdjustmentType sets the AdjustmentType field's value. +func (s *CreateScalingPolicyInput) SetAdjustmentType(v string) *CreateScalingPolicyInput { + s.AdjustmentType = &v + return s +} + +// SetAdjustmentValue sets the AdjustmentValue field's value. +func (s *CreateScalingPolicyInput) SetAdjustmentValue(v int32) *CreateScalingPolicyInput { + s.AdjustmentValue = &v + return s +} + +// SetAlarmPolicy sets the AlarmPolicy field's value. +func (s *CreateScalingPolicyInput) SetAlarmPolicy(v *AlarmPolicyForCreateScalingPolicyInput) *CreateScalingPolicyInput { + s.AlarmPolicy = v + return s +} + +// SetCooldown sets the Cooldown field's value. +func (s *CreateScalingPolicyInput) SetCooldown(v int32) *CreateScalingPolicyInput { + s.Cooldown = &v + return s +} + +// SetScalingGroupId sets the ScalingGroupId field's value. +func (s *CreateScalingPolicyInput) SetScalingGroupId(v string) *CreateScalingPolicyInput { + s.ScalingGroupId = &v + return s +} + +// SetScalingPolicyName sets the ScalingPolicyName field's value. 
+func (s *CreateScalingPolicyInput) SetScalingPolicyName(v string) *CreateScalingPolicyInput { + s.ScalingPolicyName = &v + return s +} + +// SetScalingPolicyType sets the ScalingPolicyType field's value. +func (s *CreateScalingPolicyInput) SetScalingPolicyType(v string) *CreateScalingPolicyInput { + s.ScalingPolicyType = &v + return s +} + +// SetScheduledPolicy sets the ScheduledPolicy field's value. +func (s *CreateScalingPolicyInput) SetScheduledPolicy(v *ScheduledPolicyForCreateScalingPolicyInput) *CreateScalingPolicyInput { + s.ScheduledPolicy = v + return s +} + +type CreateScalingPolicyOutput struct { + _ struct{} `type:"structure"` + + Metadata *response.ResponseMetadata + + ScalingPolicyId *string `type:"string"` +} + +// String returns the string representation +func (s CreateScalingPolicyOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateScalingPolicyOutput) GoString() string { + return s.String() +} + +// SetScalingPolicyId sets the ScalingPolicyId field's value. +func (s *CreateScalingPolicyOutput) SetScalingPolicyId(v string) *CreateScalingPolicyOutput { + s.ScalingPolicyId = &v + return s +} + +type ScheduledPolicyForCreateScalingPolicyInput struct { + _ struct{} `type:"structure"` + + LaunchTime *string `type:"string"` + + RecurrenceEndTime *string `type:"string"` + + RecurrenceType *string `type:"string"` + + RecurrenceValue *string `type:"string"` +} + +// String returns the string representation +func (s ScheduledPolicyForCreateScalingPolicyInput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s ScheduledPolicyForCreateScalingPolicyInput) GoString() string { + return s.String() +} + +// SetLaunchTime sets the LaunchTime field's value. 
+func (s *ScheduledPolicyForCreateScalingPolicyInput) SetLaunchTime(v string) *ScheduledPolicyForCreateScalingPolicyInput { + s.LaunchTime = &v + return s +} + +// SetRecurrenceEndTime sets the RecurrenceEndTime field's value. +func (s *ScheduledPolicyForCreateScalingPolicyInput) SetRecurrenceEndTime(v string) *ScheduledPolicyForCreateScalingPolicyInput { + s.RecurrenceEndTime = &v + return s +} + +// SetRecurrenceType sets the RecurrenceType field's value. +func (s *ScheduledPolicyForCreateScalingPolicyInput) SetRecurrenceType(v string) *ScheduledPolicyForCreateScalingPolicyInput { + s.RecurrenceType = &v + return s +} + +// SetRecurrenceValue sets the RecurrenceValue field's value. +func (s *ScheduledPolicyForCreateScalingPolicyInput) SetRecurrenceValue(v string) *ScheduledPolicyForCreateScalingPolicyInput { + s.RecurrenceValue = &v + return s +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_delete_lifecycle_hook.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_delete_lifecycle_hook.go new file mode 100644 index 000000000000..fa80433d7f4f --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_delete_lifecycle_hook.go @@ -0,0 +1,186 @@ +// Code generated by volcengine with private/model/cli/gen-api/main.go. DO NOT EDIT. 
+ +package autoscaling + +import ( + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/response" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil" +) + +const opDeleteLifecycleHookCommon = "DeleteLifecycleHook" + +// DeleteLifecycleHookCommonRequest generates a "volcengine/request.Request" representing the +// client's request for the DeleteLifecycleHookCommon operation. The "output" return +// value will be populated with the DeleteLifecycleHookCommon request's response once the request completes +// successfully. +// +// Use "Send" method on the returned DeleteLifecycleHookCommon Request to send the API call to the service. +// the "output" return value is not valid until after DeleteLifecycleHookCommon Send returns without error. +// +// See DeleteLifecycleHookCommon for more information on using the DeleteLifecycleHookCommon +// API call, and error handling. +// +// // Example sending a request using the DeleteLifecycleHookCommonRequest method. +// req, resp := client.DeleteLifecycleHookCommonRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *AUTOSCALING) DeleteLifecycleHookCommonRequest(input *map[string]interface{}) (req *request.Request, output *map[string]interface{}) { + op := &request.Operation{ + Name: opDeleteLifecycleHookCommon, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &map[string]interface{}{} + } + + output = &map[string]interface{}{} + req = c.newRequest(op, input, output) + + return +} + +// DeleteLifecycleHookCommon API operation for AUTO_SCALING. +// +// Returns volcengineerr.Error for service API and SDK errors. 
Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the VOLCENGINE API reference guide for AUTO_SCALING's +// API operation DeleteLifecycleHookCommon for usage and error information. +func (c *AUTOSCALING) DeleteLifecycleHookCommon(input *map[string]interface{}) (*map[string]interface{}, error) { + req, out := c.DeleteLifecycleHookCommonRequest(input) + return out, req.Send() +} + +// DeleteLifecycleHookCommonWithContext is the same as DeleteLifecycleHookCommon with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteLifecycleHookCommon for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur. +// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *AUTOSCALING) DeleteLifecycleHookCommonWithContext(ctx volcengine.Context, input *map[string]interface{}, opts ...request.Option) (*map[string]interface{}, error) { + req, out := c.DeleteLifecycleHookCommonRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDeleteLifecycleHook = "DeleteLifecycleHook" + +// DeleteLifecycleHookRequest generates a "volcengine/request.Request" representing the +// client's request for the DeleteLifecycleHook operation. The "output" return +// value will be populated with the DeleteLifecycleHook request's response once the request completes +// successfully. +// +// Use "Send" method on the returned DeleteLifecycleHook Request to send the API call to the service. +// the "output" return value is not valid until after DeleteLifecycleHook Send returns without error. 
+// +// See DeleteLifecycleHook for more information on using the DeleteLifecycleHook +// API call, and error handling. +// +// // Example sending a request using the DeleteLifecycleHookRequest method. +// req, resp := client.DeleteLifecycleHookRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *AUTOSCALING) DeleteLifecycleHookRequest(input *DeleteLifecycleHookInput) (req *request.Request, output *DeleteLifecycleHookOutput) { + op := &request.Operation{ + Name: opDeleteLifecycleHook, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &DeleteLifecycleHookInput{} + } + + output = &DeleteLifecycleHookOutput{} + req = c.newRequest(op, input, output) + + return +} + +// DeleteLifecycleHook API operation for AUTO_SCALING. +// +// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the VOLCENGINE API reference guide for AUTO_SCALING's +// API operation DeleteLifecycleHook for usage and error information. +func (c *AUTOSCALING) DeleteLifecycleHook(input *DeleteLifecycleHookInput) (*DeleteLifecycleHookOutput, error) { + req, out := c.DeleteLifecycleHookRequest(input) + return out, req.Send() +} + +// DeleteLifecycleHookWithContext is the same as DeleteLifecycleHook with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteLifecycleHook for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur. +// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
+func (c *AUTOSCALING) DeleteLifecycleHookWithContext(ctx volcengine.Context, input *DeleteLifecycleHookInput, opts ...request.Option) (*DeleteLifecycleHookOutput, error) { + req, out := c.DeleteLifecycleHookRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +type DeleteLifecycleHookInput struct { + _ struct{} `type:"structure"` + + LifecycleHookId *string `type:"string"` +} + +// String returns the string representation +func (s DeleteLifecycleHookInput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteLifecycleHookInput) GoString() string { + return s.String() +} + +// SetLifecycleHookId sets the LifecycleHookId field's value. +func (s *DeleteLifecycleHookInput) SetLifecycleHookId(v string) *DeleteLifecycleHookInput { + s.LifecycleHookId = &v + return s +} + +type DeleteLifecycleHookOutput struct { + _ struct{} `type:"structure"` + + Metadata *response.ResponseMetadata + + LifecycleHookId *string `type:"string"` +} + +// String returns the string representation +func (s DeleteLifecycleHookOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteLifecycleHookOutput) GoString() string { + return s.String() +} + +// SetLifecycleHookId sets the LifecycleHookId field's value. 
+func (s *DeleteLifecycleHookOutput) SetLifecycleHookId(v string) *DeleteLifecycleHookOutput { + s.LifecycleHookId = &v + return s +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_delete_scaling_configuration.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_delete_scaling_configuration.go new file mode 100644 index 000000000000..aa279e060d30 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_delete_scaling_configuration.go @@ -0,0 +1,186 @@ +// Code generated by volcengine with private/model/cli/gen-api/main.go. DO NOT EDIT. + +package autoscaling + +import ( + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/response" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil" +) + +const opDeleteScalingConfigurationCommon = "DeleteScalingConfiguration" + +// DeleteScalingConfigurationCommonRequest generates a "volcengine/request.Request" representing the +// client's request for the DeleteScalingConfigurationCommon operation. The "output" return +// value will be populated with the DeleteScalingConfigurationCommon request's response once the request completes +// successfully. +// +// Use "Send" method on the returned DeleteScalingConfigurationCommon Request to send the API call to the service. +// the "output" return value is not valid until after DeleteScalingConfigurationCommon Send returns without error. +// +// See DeleteScalingConfigurationCommon for more information on using the DeleteScalingConfigurationCommon +// API call, and error handling. 
+// +// // Example sending a request using the DeleteScalingConfigurationCommonRequest method. +// req, resp := client.DeleteScalingConfigurationCommonRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *AUTOSCALING) DeleteScalingConfigurationCommonRequest(input *map[string]interface{}) (req *request.Request, output *map[string]interface{}) { + op := &request.Operation{ + Name: opDeleteScalingConfigurationCommon, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &map[string]interface{}{} + } + + output = &map[string]interface{}{} + req = c.newRequest(op, input, output) + + return +} + +// DeleteScalingConfigurationCommon API operation for AUTO_SCALING. +// +// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the VOLCENGINE API reference guide for AUTO_SCALING's +// API operation DeleteScalingConfigurationCommon for usage and error information. +func (c *AUTOSCALING) DeleteScalingConfigurationCommon(input *map[string]interface{}) (*map[string]interface{}, error) { + req, out := c.DeleteScalingConfigurationCommonRequest(input) + return out, req.Send() +} + +// DeleteScalingConfigurationCommonWithContext is the same as DeleteScalingConfigurationCommon with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteScalingConfigurationCommon for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur. +// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
+func (c *AUTOSCALING) DeleteScalingConfigurationCommonWithContext(ctx volcengine.Context, input *map[string]interface{}, opts ...request.Option) (*map[string]interface{}, error) { + req, out := c.DeleteScalingConfigurationCommonRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDeleteScalingConfiguration = "DeleteScalingConfiguration" + +// DeleteScalingConfigurationRequest generates a "volcengine/request.Request" representing the +// client's request for the DeleteScalingConfiguration operation. The "output" return +// value will be populated with the DeleteScalingConfiguration request's response once the request completes +// successfully. +// +// Use "Send" method on the returned DeleteScalingConfiguration Request to send the API call to the service. +// the "output" return value is not valid until after DeleteScalingConfiguration Send returns without error. +// +// See DeleteScalingConfiguration for more information on using the DeleteScalingConfiguration +// API call, and error handling. +// +// // Example sending a request using the DeleteScalingConfigurationRequest method. +// req, resp := client.DeleteScalingConfigurationRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *AUTOSCALING) DeleteScalingConfigurationRequest(input *DeleteScalingConfigurationInput) (req *request.Request, output *DeleteScalingConfigurationOutput) { + op := &request.Operation{ + Name: opDeleteScalingConfiguration, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &DeleteScalingConfigurationInput{} + } + + output = &DeleteScalingConfigurationOutput{} + req = c.newRequest(op, input, output) + + return +} + +// DeleteScalingConfiguration API operation for AUTO_SCALING. +// +// Returns volcengineerr.Error for service API and SDK errors. 
Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the VOLCENGINE API reference guide for AUTO_SCALING's +// API operation DeleteScalingConfiguration for usage and error information. +func (c *AUTOSCALING) DeleteScalingConfiguration(input *DeleteScalingConfigurationInput) (*DeleteScalingConfigurationOutput, error) { + req, out := c.DeleteScalingConfigurationRequest(input) + return out, req.Send() +} + +// DeleteScalingConfigurationWithContext is the same as DeleteScalingConfiguration with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteScalingConfiguration for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur. +// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *AUTOSCALING) DeleteScalingConfigurationWithContext(ctx volcengine.Context, input *DeleteScalingConfigurationInput, opts ...request.Option) (*DeleteScalingConfigurationOutput, error) { + req, out := c.DeleteScalingConfigurationRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +type DeleteScalingConfigurationInput struct { + _ struct{} `type:"structure"` + + ScalingConfigurationId *string `type:"string"` +} + +// String returns the string representation +func (s DeleteScalingConfigurationInput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteScalingConfigurationInput) GoString() string { + return s.String() +} + +// SetScalingConfigurationId sets the ScalingConfigurationId field's value. 
+func (s *DeleteScalingConfigurationInput) SetScalingConfigurationId(v string) *DeleteScalingConfigurationInput { + s.ScalingConfigurationId = &v + return s +} + +type DeleteScalingConfigurationOutput struct { + _ struct{} `type:"structure"` + + Metadata *response.ResponseMetadata + + ScalingConfigurationId *string `type:"string"` +} + +// String returns the string representation +func (s DeleteScalingConfigurationOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteScalingConfigurationOutput) GoString() string { + return s.String() +} + +// SetScalingConfigurationId sets the ScalingConfigurationId field's value. +func (s *DeleteScalingConfigurationOutput) SetScalingConfigurationId(v string) *DeleteScalingConfigurationOutput { + s.ScalingConfigurationId = &v + return s +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_delete_scaling_group.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_delete_scaling_group.go new file mode 100644 index 000000000000..9144e7f4dfa1 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_delete_scaling_group.go @@ -0,0 +1,186 @@ +// Code generated by volcengine with private/model/cli/gen-api/main.go. DO NOT EDIT. 
+ +package autoscaling + +import ( + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/response" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil" +) + +const opDeleteScalingGroupCommon = "DeleteScalingGroup" + +// DeleteScalingGroupCommonRequest generates a "volcengine/request.Request" representing the +// client's request for the DeleteScalingGroupCommon operation. The "output" return +// value will be populated with the DeleteScalingGroupCommon request's response once the request completes +// successfully. +// +// Use "Send" method on the returned DeleteScalingGroupCommon Request to send the API call to the service. +// the "output" return value is not valid until after DeleteScalingGroupCommon Send returns without error. +// +// See DeleteScalingGroupCommon for more information on using the DeleteScalingGroupCommon +// API call, and error handling. +// +// // Example sending a request using the DeleteScalingGroupCommonRequest method. +// req, resp := client.DeleteScalingGroupCommonRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *AUTOSCALING) DeleteScalingGroupCommonRequest(input *map[string]interface{}) (req *request.Request, output *map[string]interface{}) { + op := &request.Operation{ + Name: opDeleteScalingGroupCommon, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &map[string]interface{}{} + } + + output = &map[string]interface{}{} + req = c.newRequest(op, input, output) + + return +} + +// DeleteScalingGroupCommon API operation for AUTO_SCALING. +// +// Returns volcengineerr.Error for service API and SDK errors. 
Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the VOLCENGINE API reference guide for AUTO_SCALING's +// API operation DeleteScalingGroupCommon for usage and error information. +func (c *AUTOSCALING) DeleteScalingGroupCommon(input *map[string]interface{}) (*map[string]interface{}, error) { + req, out := c.DeleteScalingGroupCommonRequest(input) + return out, req.Send() +} + +// DeleteScalingGroupCommonWithContext is the same as DeleteScalingGroupCommon with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteScalingGroupCommon for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur. +// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *AUTOSCALING) DeleteScalingGroupCommonWithContext(ctx volcengine.Context, input *map[string]interface{}, opts ...request.Option) (*map[string]interface{}, error) { + req, out := c.DeleteScalingGroupCommonRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDeleteScalingGroup = "DeleteScalingGroup" + +// DeleteScalingGroupRequest generates a "volcengine/request.Request" representing the +// client's request for the DeleteScalingGroup operation. The "output" return +// value will be populated with the DeleteScalingGroup request's response once the request completes +// successfully. +// +// Use "Send" method on the returned DeleteScalingGroup Request to send the API call to the service. +// the "output" return value is not valid until after DeleteScalingGroup Send returns without error. +// +// See DeleteScalingGroup for more information on using the DeleteScalingGroup +// API call, and error handling. 
+// +// // Example sending a request using the DeleteScalingGroupRequest method. +// req, resp := client.DeleteScalingGroupRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *AUTOSCALING) DeleteScalingGroupRequest(input *DeleteScalingGroupInput) (req *request.Request, output *DeleteScalingGroupOutput) { + op := &request.Operation{ + Name: opDeleteScalingGroup, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &DeleteScalingGroupInput{} + } + + output = &DeleteScalingGroupOutput{} + req = c.newRequest(op, input, output) + + return +} + +// DeleteScalingGroup API operation for AUTO_SCALING. +// +// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the VOLCENGINE API reference guide for AUTO_SCALING's +// API operation DeleteScalingGroup for usage and error information. +func (c *AUTOSCALING) DeleteScalingGroup(input *DeleteScalingGroupInput) (*DeleteScalingGroupOutput, error) { + req, out := c.DeleteScalingGroupRequest(input) + return out, req.Send() +} + +// DeleteScalingGroupWithContext is the same as DeleteScalingGroup with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteScalingGroup for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur. +// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *AUTOSCALING) DeleteScalingGroupWithContext(ctx volcengine.Context, input *DeleteScalingGroupInput, opts ...request.Option) (*DeleteScalingGroupOutput, error) { + req, out := c.DeleteScalingGroupRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+ return out, req.Send() +} + +type DeleteScalingGroupInput struct { + _ struct{} `type:"structure"` + + ScalingGroupId *string `type:"string"` +} + +// String returns the string representation +func (s DeleteScalingGroupInput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteScalingGroupInput) GoString() string { + return s.String() +} + +// SetScalingGroupId sets the ScalingGroupId field's value. +func (s *DeleteScalingGroupInput) SetScalingGroupId(v string) *DeleteScalingGroupInput { + s.ScalingGroupId = &v + return s +} + +type DeleteScalingGroupOutput struct { + _ struct{} `type:"structure"` + + Metadata *response.ResponseMetadata + + ScalingGroupId *string `type:"string"` +} + +// String returns the string representation +func (s DeleteScalingGroupOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteScalingGroupOutput) GoString() string { + return s.String() +} + +// SetScalingGroupId sets the ScalingGroupId field's value. +func (s *DeleteScalingGroupOutput) SetScalingGroupId(v string) *DeleteScalingGroupOutput { + s.ScalingGroupId = &v + return s +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_delete_scaling_policy.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_delete_scaling_policy.go new file mode 100644 index 000000000000..0f4c00939d1a --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_delete_scaling_policy.go @@ -0,0 +1,186 @@ +// Code generated by volcengine with private/model/cli/gen-api/main.go. DO NOT EDIT. 
+ +package autoscaling + +import ( + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/response" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil" +) + +const opDeleteScalingPolicyCommon = "DeleteScalingPolicy" + +// DeleteScalingPolicyCommonRequest generates a "volcengine/request.Request" representing the +// client's request for the DeleteScalingPolicyCommon operation. The "output" return +// value will be populated with the DeleteScalingPolicyCommon request's response once the request completes +// successfully. +// +// Use "Send" method on the returned DeleteScalingPolicyCommon Request to send the API call to the service. +// the "output" return value is not valid until after DeleteScalingPolicyCommon Send returns without error. +// +// See DeleteScalingPolicyCommon for more information on using the DeleteScalingPolicyCommon +// API call, and error handling. +// +// // Example sending a request using the DeleteScalingPolicyCommonRequest method. +// req, resp := client.DeleteScalingPolicyCommonRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *AUTOSCALING) DeleteScalingPolicyCommonRequest(input *map[string]interface{}) (req *request.Request, output *map[string]interface{}) { + op := &request.Operation{ + Name: opDeleteScalingPolicyCommon, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &map[string]interface{}{} + } + + output = &map[string]interface{}{} + req = c.newRequest(op, input, output) + + return +} + +// DeleteScalingPolicyCommon API operation for AUTO_SCALING. +// +// Returns volcengineerr.Error for service API and SDK errors. 
Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the VOLCENGINE API reference guide for AUTO_SCALING's +// API operation DeleteScalingPolicyCommon for usage and error information. +func (c *AUTOSCALING) DeleteScalingPolicyCommon(input *map[string]interface{}) (*map[string]interface{}, error) { + req, out := c.DeleteScalingPolicyCommonRequest(input) + return out, req.Send() +} + +// DeleteScalingPolicyCommonWithContext is the same as DeleteScalingPolicyCommon with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteScalingPolicyCommon for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur. +// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *AUTOSCALING) DeleteScalingPolicyCommonWithContext(ctx volcengine.Context, input *map[string]interface{}, opts ...request.Option) (*map[string]interface{}, error) { + req, out := c.DeleteScalingPolicyCommonRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDeleteScalingPolicy = "DeleteScalingPolicy" + +// DeleteScalingPolicyRequest generates a "volcengine/request.Request" representing the +// client's request for the DeleteScalingPolicy operation. The "output" return +// value will be populated with the DeleteScalingPolicy request's response once the request completes +// successfully. +// +// Use "Send" method on the returned DeleteScalingPolicy Request to send the API call to the service. +// the "output" return value is not valid until after DeleteScalingPolicy Send returns without error. 
+// +// See DeleteScalingPolicy for more information on using the DeleteScalingPolicy +// API call, and error handling. +// +// // Example sending a request using the DeleteScalingPolicyRequest method. +// req, resp := client.DeleteScalingPolicyRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *AUTOSCALING) DeleteScalingPolicyRequest(input *DeleteScalingPolicyInput) (req *request.Request, output *DeleteScalingPolicyOutput) { + op := &request.Operation{ + Name: opDeleteScalingPolicy, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &DeleteScalingPolicyInput{} + } + + output = &DeleteScalingPolicyOutput{} + req = c.newRequest(op, input, output) + + return +} + +// DeleteScalingPolicy API operation for AUTO_SCALING. +// +// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the VOLCENGINE API reference guide for AUTO_SCALING's +// API operation DeleteScalingPolicy for usage and error information. +func (c *AUTOSCALING) DeleteScalingPolicy(input *DeleteScalingPolicyInput) (*DeleteScalingPolicyOutput, error) { + req, out := c.DeleteScalingPolicyRequest(input) + return out, req.Send() +} + +// DeleteScalingPolicyWithContext is the same as DeleteScalingPolicy with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteScalingPolicy for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur. +// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
+func (c *AUTOSCALING) DeleteScalingPolicyWithContext(ctx volcengine.Context, input *DeleteScalingPolicyInput, opts ...request.Option) (*DeleteScalingPolicyOutput, error) { + req, out := c.DeleteScalingPolicyRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +type DeleteScalingPolicyInput struct { + _ struct{} `type:"structure"` + + ScalingPolicyId *string `type:"string"` +} + +// String returns the string representation +func (s DeleteScalingPolicyInput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteScalingPolicyInput) GoString() string { + return s.String() +} + +// SetScalingPolicyId sets the ScalingPolicyId field's value. +func (s *DeleteScalingPolicyInput) SetScalingPolicyId(v string) *DeleteScalingPolicyInput { + s.ScalingPolicyId = &v + return s +} + +type DeleteScalingPolicyOutput struct { + _ struct{} `type:"structure"` + + Metadata *response.ResponseMetadata + + ScalingPolicyId *string `type:"string"` +} + +// String returns the string representation +func (s DeleteScalingPolicyOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteScalingPolicyOutput) GoString() string { + return s.String() +} + +// SetScalingPolicyId sets the ScalingPolicyId field's value. 
+func (s *DeleteScalingPolicyOutput) SetScalingPolicyId(v string) *DeleteScalingPolicyOutput { + s.ScalingPolicyId = &v + return s +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_describe_lifecycle_activities.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_describe_lifecycle_activities.go new file mode 100644 index 000000000000..b85355a8ffb7 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_describe_lifecycle_activities.go @@ -0,0 +1,304 @@ +// Code generated by volcengine with private/model/cli/gen-api/main.go. DO NOT EDIT. + +package autoscaling + +import ( + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/response" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil" +) + +const opDescribeLifecycleActivitiesCommon = "DescribeLifecycleActivities" + +// DescribeLifecycleActivitiesCommonRequest generates a "volcengine/request.Request" representing the +// client's request for the DescribeLifecycleActivitiesCommon operation. The "output" return +// value will be populated with the DescribeLifecycleActivitiesCommon request's response once the request completes +// successfully. +// +// Use "Send" method on the returned DescribeLifecycleActivitiesCommon Request to send the API call to the service. +// the "output" return value is not valid until after DescribeLifecycleActivitiesCommon Send returns without error. +// +// See DescribeLifecycleActivitiesCommon for more information on using the DescribeLifecycleActivitiesCommon +// API call, and error handling. 
+// +// // Example sending a request using the DescribeLifecycleActivitiesCommonRequest method. +// req, resp := client.DescribeLifecycleActivitiesCommonRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *AUTOSCALING) DescribeLifecycleActivitiesCommonRequest(input *map[string]interface{}) (req *request.Request, output *map[string]interface{}) { + op := &request.Operation{ + Name: opDescribeLifecycleActivitiesCommon, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &map[string]interface{}{} + } + + output = &map[string]interface{}{} + req = c.newRequest(op, input, output) + + return +} + +// DescribeLifecycleActivitiesCommon API operation for AUTO_SCALING. +// +// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the VOLCENGINE API reference guide for AUTO_SCALING's +// API operation DescribeLifecycleActivitiesCommon for usage and error information. +func (c *AUTOSCALING) DescribeLifecycleActivitiesCommon(input *map[string]interface{}) (*map[string]interface{}, error) { + req, out := c.DescribeLifecycleActivitiesCommonRequest(input) + return out, req.Send() +} + +// DescribeLifecycleActivitiesCommonWithContext is the same as DescribeLifecycleActivitiesCommon with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeLifecycleActivitiesCommon for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur. +// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
+func (c *AUTOSCALING) DescribeLifecycleActivitiesCommonWithContext(ctx volcengine.Context, input *map[string]interface{}, opts ...request.Option) (*map[string]interface{}, error) { + req, out := c.DescribeLifecycleActivitiesCommonRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDescribeLifecycleActivities = "DescribeLifecycleActivities" + +// DescribeLifecycleActivitiesRequest generates a "volcengine/request.Request" representing the +// client's request for the DescribeLifecycleActivities operation. The "output" return +// value will be populated with the DescribeLifecycleActivities request's response once the request completes +// successfully. +// +// Use "Send" method on the returned DescribeLifecycleActivities Request to send the API call to the service. +// the "output" return value is not valid until after DescribeLifecycleActivities Send returns without error. +// +// See DescribeLifecycleActivities for more information on using the DescribeLifecycleActivities +// API call, and error handling. +// +// // Example sending a request using the DescribeLifecycleActivitiesRequest method. +// req, resp := client.DescribeLifecycleActivitiesRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *AUTOSCALING) DescribeLifecycleActivitiesRequest(input *DescribeLifecycleActivitiesInput) (req *request.Request, output *DescribeLifecycleActivitiesOutput) { + op := &request.Operation{ + Name: opDescribeLifecycleActivities, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &DescribeLifecycleActivitiesInput{} + } + + output = &DescribeLifecycleActivitiesOutput{} + req = c.newRequest(op, input, output) + + return +} + +// DescribeLifecycleActivities API operation for AUTO_SCALING. +// +// Returns volcengineerr.Error for service API and SDK errors. 
Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the VOLCENGINE API reference guide for AUTO_SCALING's +// API operation DescribeLifecycleActivities for usage and error information. +func (c *AUTOSCALING) DescribeLifecycleActivities(input *DescribeLifecycleActivitiesInput) (*DescribeLifecycleActivitiesOutput, error) { + req, out := c.DescribeLifecycleActivitiesRequest(input) + return out, req.Send() +} + +// DescribeLifecycleActivitiesWithContext is the same as DescribeLifecycleActivities with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeLifecycleActivities for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur. +// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *AUTOSCALING) DescribeLifecycleActivitiesWithContext(ctx volcengine.Context, input *DescribeLifecycleActivitiesInput, opts ...request.Option) (*DescribeLifecycleActivitiesOutput, error) { + req, out := c.DescribeLifecycleActivitiesRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +type DescribeLifecycleActivitiesInput struct { + _ struct{} `type:"structure"` + + InstanceId *string `type:"string"` + + LifecycleActivityStatus *string `type:"string"` + + PageNumber *int32 `type:"int32"` + + PageSize *int32 `type:"int32"` + + ScalingActivityId *string `type:"string"` +} + +// String returns the string representation +func (s DescribeLifecycleActivitiesInput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeLifecycleActivitiesInput) GoString() string { + return s.String() +} + +// SetInstanceId sets the InstanceId field's value. 
+func (s *DescribeLifecycleActivitiesInput) SetInstanceId(v string) *DescribeLifecycleActivitiesInput { + s.InstanceId = &v + return s +} + +// SetLifecycleActivityStatus sets the LifecycleActivityStatus field's value. +func (s *DescribeLifecycleActivitiesInput) SetLifecycleActivityStatus(v string) *DescribeLifecycleActivitiesInput { + s.LifecycleActivityStatus = &v + return s +} + +// SetPageNumber sets the PageNumber field's value. +func (s *DescribeLifecycleActivitiesInput) SetPageNumber(v int32) *DescribeLifecycleActivitiesInput { + s.PageNumber = &v + return s +} + +// SetPageSize sets the PageSize field's value. +func (s *DescribeLifecycleActivitiesInput) SetPageSize(v int32) *DescribeLifecycleActivitiesInput { + s.PageSize = &v + return s +} + +// SetScalingActivityId sets the ScalingActivityId field's value. +func (s *DescribeLifecycleActivitiesInput) SetScalingActivityId(v string) *DescribeLifecycleActivitiesInput { + s.ScalingActivityId = &v + return s +} + +type DescribeLifecycleActivitiesOutput struct { + _ struct{} `type:"structure"` + + Metadata *response.ResponseMetadata + + LifecycleActivities []*LifecycleActivityForDescribeLifecycleActivitiesOutput `type:"list"` + + PageNumber *int32 `type:"int32"` + + PageSize *int32 `type:"int32"` + + TotalCount *int32 `type:"int32"` +} + +// String returns the string representation +func (s DescribeLifecycleActivitiesOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeLifecycleActivitiesOutput) GoString() string { + return s.String() +} + +// SetLifecycleActivities sets the LifecycleActivities field's value. +func (s *DescribeLifecycleActivitiesOutput) SetLifecycleActivities(v []*LifecycleActivityForDescribeLifecycleActivitiesOutput) *DescribeLifecycleActivitiesOutput { + s.LifecycleActivities = v + return s +} + +// SetPageNumber sets the PageNumber field's value. 
+func (s *DescribeLifecycleActivitiesOutput) SetPageNumber(v int32) *DescribeLifecycleActivitiesOutput { + s.PageNumber = &v + return s +} + +// SetPageSize sets the PageSize field's value. +func (s *DescribeLifecycleActivitiesOutput) SetPageSize(v int32) *DescribeLifecycleActivitiesOutput { + s.PageSize = &v + return s +} + +// SetTotalCount sets the TotalCount field's value. +func (s *DescribeLifecycleActivitiesOutput) SetTotalCount(v int32) *DescribeLifecycleActivitiesOutput { + s.TotalCount = &v + return s +} + +type LifecycleActivityForDescribeLifecycleActivitiesOutput struct { + _ struct{} `type:"structure"` + + InstanceId *string `type:"string"` + + LifecycleActivityId *string `type:"string"` + + LifecycleActivityStatus *string `type:"string"` + + LifecycleHookId *string `type:"string"` + + LifecycleHookPolicy *string `type:"string"` + + ScalingActivityId *string `type:"string"` +} + +// String returns the string representation +func (s LifecycleActivityForDescribeLifecycleActivitiesOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s LifecycleActivityForDescribeLifecycleActivitiesOutput) GoString() string { + return s.String() +} + +// SetInstanceId sets the InstanceId field's value. +func (s *LifecycleActivityForDescribeLifecycleActivitiesOutput) SetInstanceId(v string) *LifecycleActivityForDescribeLifecycleActivitiesOutput { + s.InstanceId = &v + return s +} + +// SetLifecycleActivityId sets the LifecycleActivityId field's value. +func (s *LifecycleActivityForDescribeLifecycleActivitiesOutput) SetLifecycleActivityId(v string) *LifecycleActivityForDescribeLifecycleActivitiesOutput { + s.LifecycleActivityId = &v + return s +} + +// SetLifecycleActivityStatus sets the LifecycleActivityStatus field's value. 
+func (s *LifecycleActivityForDescribeLifecycleActivitiesOutput) SetLifecycleActivityStatus(v string) *LifecycleActivityForDescribeLifecycleActivitiesOutput { + s.LifecycleActivityStatus = &v + return s +} + +// SetLifecycleHookId sets the LifecycleHookId field's value. +func (s *LifecycleActivityForDescribeLifecycleActivitiesOutput) SetLifecycleHookId(v string) *LifecycleActivityForDescribeLifecycleActivitiesOutput { + s.LifecycleHookId = &v + return s +} + +// SetLifecycleHookPolicy sets the LifecycleHookPolicy field's value. +func (s *LifecycleActivityForDescribeLifecycleActivitiesOutput) SetLifecycleHookPolicy(v string) *LifecycleActivityForDescribeLifecycleActivitiesOutput { + s.LifecycleHookPolicy = &v + return s +} + +// SetScalingActivityId sets the ScalingActivityId field's value. +func (s *LifecycleActivityForDescribeLifecycleActivitiesOutput) SetScalingActivityId(v string) *LifecycleActivityForDescribeLifecycleActivitiesOutput { + s.ScalingActivityId = &v + return s +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_describe_lifecycle_hooks.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_describe_lifecycle_hooks.go new file mode 100644 index 000000000000..8a3375b2cfe2 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_describe_lifecycle_hooks.go @@ -0,0 +1,304 @@ +// Code generated by volcengine with private/model/cli/gen-api/main.go. DO NOT EDIT. 
+ +package autoscaling + +import ( + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/response" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil" +) + +const opDescribeLifecycleHooksCommon = "DescribeLifecycleHooks" + +// DescribeLifecycleHooksCommonRequest generates a "volcengine/request.Request" representing the +// client's request for the DescribeLifecycleHooksCommon operation. The "output" return +// value will be populated with the DescribeLifecycleHooksCommon request's response once the request completes +// successfully. +// +// Use "Send" method on the returned DescribeLifecycleHooksCommon Request to send the API call to the service. +// the "output" return value is not valid until after DescribeLifecycleHooksCommon Send returns without error. +// +// See DescribeLifecycleHooksCommon for more information on using the DescribeLifecycleHooksCommon +// API call, and error handling. +// +// // Example sending a request using the DescribeLifecycleHooksCommonRequest method. +// req, resp := client.DescribeLifecycleHooksCommonRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *AUTOSCALING) DescribeLifecycleHooksCommonRequest(input *map[string]interface{}) (req *request.Request, output *map[string]interface{}) { + op := &request.Operation{ + Name: opDescribeLifecycleHooksCommon, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &map[string]interface{}{} + } + + output = &map[string]interface{}{} + req = c.newRequest(op, input, output) + + return +} + +// DescribeLifecycleHooksCommon API operation for AUTO_SCALING. 
+// +// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the VOLCENGINE API reference guide for AUTO_SCALING's +// API operation DescribeLifecycleHooksCommon for usage and error information. +func (c *AUTOSCALING) DescribeLifecycleHooksCommon(input *map[string]interface{}) (*map[string]interface{}, error) { + req, out := c.DescribeLifecycleHooksCommonRequest(input) + return out, req.Send() +} + +// DescribeLifecycleHooksCommonWithContext is the same as DescribeLifecycleHooksCommon with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeLifecycleHooksCommon for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur. +// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *AUTOSCALING) DescribeLifecycleHooksCommonWithContext(ctx volcengine.Context, input *map[string]interface{}, opts ...request.Option) (*map[string]interface{}, error) { + req, out := c.DescribeLifecycleHooksCommonRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDescribeLifecycleHooks = "DescribeLifecycleHooks" + +// DescribeLifecycleHooksRequest generates a "volcengine/request.Request" representing the +// client's request for the DescribeLifecycleHooks operation. The "output" return +// value will be populated with the DescribeLifecycleHooks request's response once the request completes +// successfully. +// +// Use "Send" method on the returned DescribeLifecycleHooks Request to send the API call to the service. +// the "output" return value is not valid until after DescribeLifecycleHooks Send returns without error. 
+// +// See DescribeLifecycleHooks for more information on using the DescribeLifecycleHooks +// API call, and error handling. +// +// // Example sending a request using the DescribeLifecycleHooksRequest method. +// req, resp := client.DescribeLifecycleHooksRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *AUTOSCALING) DescribeLifecycleHooksRequest(input *DescribeLifecycleHooksInput) (req *request.Request, output *DescribeLifecycleHooksOutput) { + op := &request.Operation{ + Name: opDescribeLifecycleHooks, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &DescribeLifecycleHooksInput{} + } + + output = &DescribeLifecycleHooksOutput{} + req = c.newRequest(op, input, output) + + return +} + +// DescribeLifecycleHooks API operation for AUTO_SCALING. +// +// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the VOLCENGINE API reference guide for AUTO_SCALING's +// API operation DescribeLifecycleHooks for usage and error information. +func (c *AUTOSCALING) DescribeLifecycleHooks(input *DescribeLifecycleHooksInput) (*DescribeLifecycleHooksOutput, error) { + req, out := c.DescribeLifecycleHooksRequest(input) + return out, req.Send() +} + +// DescribeLifecycleHooksWithContext is the same as DescribeLifecycleHooks with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeLifecycleHooks for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur. +// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
+func (c *AUTOSCALING) DescribeLifecycleHooksWithContext(ctx volcengine.Context, input *DescribeLifecycleHooksInput, opts ...request.Option) (*DescribeLifecycleHooksOutput, error) { + req, out := c.DescribeLifecycleHooksRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +type DescribeLifecycleHooksInput struct { + _ struct{} `type:"structure"` + + LifecycleHookIds []*string `type:"list"` + + LifecycleHookName *string `type:"string"` + + PageNumber *int32 `type:"int32"` + + PageSize *int32 `type:"int32"` + + ScalingGroupId *string `type:"string"` +} + +// String returns the string representation +func (s DescribeLifecycleHooksInput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeLifecycleHooksInput) GoString() string { + return s.String() +} + +// SetLifecycleHookIds sets the LifecycleHookIds field's value. +func (s *DescribeLifecycleHooksInput) SetLifecycleHookIds(v []*string) *DescribeLifecycleHooksInput { + s.LifecycleHookIds = v + return s +} + +// SetLifecycleHookName sets the LifecycleHookName field's value. +func (s *DescribeLifecycleHooksInput) SetLifecycleHookName(v string) *DescribeLifecycleHooksInput { + s.LifecycleHookName = &v + return s +} + +// SetPageNumber sets the PageNumber field's value. +func (s *DescribeLifecycleHooksInput) SetPageNumber(v int32) *DescribeLifecycleHooksInput { + s.PageNumber = &v + return s +} + +// SetPageSize sets the PageSize field's value. +func (s *DescribeLifecycleHooksInput) SetPageSize(v int32) *DescribeLifecycleHooksInput { + s.PageSize = &v + return s +} + +// SetScalingGroupId sets the ScalingGroupId field's value. 
+func (s *DescribeLifecycleHooksInput) SetScalingGroupId(v string) *DescribeLifecycleHooksInput { + s.ScalingGroupId = &v + return s +} + +type DescribeLifecycleHooksOutput struct { + _ struct{} `type:"structure"` + + Metadata *response.ResponseMetadata + + LifecycleHooks []*LifecycleHookForDescribeLifecycleHooksOutput `type:"list"` + + PageNumber *int32 `type:"int32"` + + PageSize *int32 `type:"int32"` + + TotalCount *int32 `type:"int32"` +} + +// String returns the string representation +func (s DescribeLifecycleHooksOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeLifecycleHooksOutput) GoString() string { + return s.String() +} + +// SetLifecycleHooks sets the LifecycleHooks field's value. +func (s *DescribeLifecycleHooksOutput) SetLifecycleHooks(v []*LifecycleHookForDescribeLifecycleHooksOutput) *DescribeLifecycleHooksOutput { + s.LifecycleHooks = v + return s +} + +// SetPageNumber sets the PageNumber field's value. +func (s *DescribeLifecycleHooksOutput) SetPageNumber(v int32) *DescribeLifecycleHooksOutput { + s.PageNumber = &v + return s +} + +// SetPageSize sets the PageSize field's value. +func (s *DescribeLifecycleHooksOutput) SetPageSize(v int32) *DescribeLifecycleHooksOutput { + s.PageSize = &v + return s +} + +// SetTotalCount sets the TotalCount field's value. 
+func (s *DescribeLifecycleHooksOutput) SetTotalCount(v int32) *DescribeLifecycleHooksOutput { + s.TotalCount = &v + return s +} + +type LifecycleHookForDescribeLifecycleHooksOutput struct { + _ struct{} `type:"structure"` + + LifecycleHookId *string `type:"string"` + + LifecycleHookName *string `type:"string"` + + LifecycleHookPolicy *string `type:"string"` + + LifecycleHookTimeout *int32 `type:"int32"` + + LifecycleHookType *string `type:"string"` + + ScalingGroupId *string `type:"string"` +} + +// String returns the string representation +func (s LifecycleHookForDescribeLifecycleHooksOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s LifecycleHookForDescribeLifecycleHooksOutput) GoString() string { + return s.String() +} + +// SetLifecycleHookId sets the LifecycleHookId field's value. +func (s *LifecycleHookForDescribeLifecycleHooksOutput) SetLifecycleHookId(v string) *LifecycleHookForDescribeLifecycleHooksOutput { + s.LifecycleHookId = &v + return s +} + +// SetLifecycleHookName sets the LifecycleHookName field's value. +func (s *LifecycleHookForDescribeLifecycleHooksOutput) SetLifecycleHookName(v string) *LifecycleHookForDescribeLifecycleHooksOutput { + s.LifecycleHookName = &v + return s +} + +// SetLifecycleHookPolicy sets the LifecycleHookPolicy field's value. +func (s *LifecycleHookForDescribeLifecycleHooksOutput) SetLifecycleHookPolicy(v string) *LifecycleHookForDescribeLifecycleHooksOutput { + s.LifecycleHookPolicy = &v + return s +} + +// SetLifecycleHookTimeout sets the LifecycleHookTimeout field's value. +func (s *LifecycleHookForDescribeLifecycleHooksOutput) SetLifecycleHookTimeout(v int32) *LifecycleHookForDescribeLifecycleHooksOutput { + s.LifecycleHookTimeout = &v + return s +} + +// SetLifecycleHookType sets the LifecycleHookType field's value. 
+func (s *LifecycleHookForDescribeLifecycleHooksOutput) SetLifecycleHookType(v string) *LifecycleHookForDescribeLifecycleHooksOutput { + s.LifecycleHookType = &v + return s +} + +// SetScalingGroupId sets the ScalingGroupId field's value. +func (s *LifecycleHookForDescribeLifecycleHooksOutput) SetScalingGroupId(v string) *LifecycleHookForDescribeLifecycleHooksOutput { + s.ScalingGroupId = &v + return s +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_describe_scaling_activities.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_describe_scaling_activities.go new file mode 100644 index 000000000000..7bc0b0e3a31b --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_describe_scaling_activities.go @@ -0,0 +1,438 @@ +// Code generated by volcengine with private/model/cli/gen-api/main.go. DO NOT EDIT. + +package autoscaling + +import ( + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/response" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil" +) + +const opDescribeScalingActivitiesCommon = "DescribeScalingActivities" + +// DescribeScalingActivitiesCommonRequest generates a "volcengine/request.Request" representing the +// client's request for the DescribeScalingActivitiesCommon operation. The "output" return +// value will be populated with the DescribeScalingActivitiesCommon request's response once the request completes +// successfully. +// +// Use "Send" method on the returned DescribeScalingActivitiesCommon Request to send the API call to the service. 
+// the "output" return value is not valid until after DescribeScalingActivitiesCommon Send returns without error. +// +// See DescribeScalingActivitiesCommon for more information on using the DescribeScalingActivitiesCommon +// API call, and error handling. +// +// // Example sending a request using the DescribeScalingActivitiesCommonRequest method. +// req, resp := client.DescribeScalingActivitiesCommonRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *AUTOSCALING) DescribeScalingActivitiesCommonRequest(input *map[string]interface{}) (req *request.Request, output *map[string]interface{}) { + op := &request.Operation{ + Name: opDescribeScalingActivitiesCommon, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &map[string]interface{}{} + } + + output = &map[string]interface{}{} + req = c.newRequest(op, input, output) + + return +} + +// DescribeScalingActivitiesCommon API operation for AUTO_SCALING. +// +// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the VOLCENGINE API reference guide for AUTO_SCALING's +// API operation DescribeScalingActivitiesCommon for usage and error information. +func (c *AUTOSCALING) DescribeScalingActivitiesCommon(input *map[string]interface{}) (*map[string]interface{}, error) { + req, out := c.DescribeScalingActivitiesCommonRequest(input) + return out, req.Send() +} + +// DescribeScalingActivitiesCommonWithContext is the same as DescribeScalingActivitiesCommon with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeScalingActivitiesCommon for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur. 
+// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/
+// for more information on using Contexts.
+func (c *AUTOSCALING) DescribeScalingActivitiesCommonWithContext(ctx volcengine.Context, input *map[string]interface{}, opts ...request.Option) (*map[string]interface{}, error) {
+	req, out := c.DescribeScalingActivitiesCommonRequest(input)
+	req.SetContext(ctx)
+	req.ApplyOptions(opts...)
+	return out, req.Send()
+}
+
+const opDescribeScalingActivities = "DescribeScalingActivities"
+
+// DescribeScalingActivitiesRequest generates a "volcengine/request.Request" representing the
+// client's request for the DescribeScalingActivities operation. The "output" return
+// value will be populated with the DescribeScalingActivities request's response once the request completes
+// successfully.
+//
+// Use "Send" method on the returned DescribeScalingActivities Request to send the API call to the service.
+// the "output" return value is not valid until after DescribeScalingActivities Send returns without error.
+//
+// See DescribeScalingActivities for more information on using the DescribeScalingActivities
+// API call, and error handling.
+//
+//    // Example sending a request using the DescribeScalingActivitiesRequest method.
+//    req, resp := client.DescribeScalingActivitiesRequest(params)
+//
+//    err := req.Send()
+//    if err == nil { // resp is now filled
+//        fmt.Println(resp)
+//    }
+func (c *AUTOSCALING) DescribeScalingActivitiesRequest(input *DescribeScalingActivitiesInput) (req *request.Request, output *DescribeScalingActivitiesOutput) {
+	op := &request.Operation{
+		Name:       opDescribeScalingActivities,
+		HTTPMethod: "GET",
+		HTTPPath:   "/",
+	}
+
+	if input == nil {
+		input = &DescribeScalingActivitiesInput{}
+	}
+
+	output = &DescribeScalingActivitiesOutput{}
+	req = c.newRequest(op, input, output)
+
+	return
+}
+
+// DescribeScalingActivities API operation for AUTO_SCALING.
+//
+// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions
+// with volcengineerr.Error's Code and Message methods to get detailed information about
+// the error.
+//
+// See the VOLCENGINE API reference guide for AUTO_SCALING's
+// API operation DescribeScalingActivities for usage and error information.
+func (c *AUTOSCALING) DescribeScalingActivities(input *DescribeScalingActivitiesInput) (*DescribeScalingActivitiesOutput, error) {
+	req, out := c.DescribeScalingActivitiesRequest(input)
+	return out, req.Send()
+}
+
+// DescribeScalingActivitiesWithContext is the same as DescribeScalingActivities with the addition of
+// the ability to pass a context and additional request options.
+//
+// See DescribeScalingActivities for details on how to use this API operation.
+//
+// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur.
+// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/
+// for more information on using Contexts.
+func (c *AUTOSCALING) DescribeScalingActivitiesWithContext(ctx volcengine.Context, input *DescribeScalingActivitiesInput, opts ...request.Option) (*DescribeScalingActivitiesOutput, error) {
+	req, out := c.DescribeScalingActivitiesRequest(input)
+	req.SetContext(ctx)
+	req.ApplyOptions(opts...)
+ return out, req.Send() +} + +type DescribeScalingActivitiesInput struct { + _ struct{} `type:"structure"` + + EndTime *string `type:"string"` + + PageNumber *int32 `type:"int32"` + + PageSize *int32 `type:"int32"` + + ScalingActivityIds []*string `type:"list"` + + ScalingGroupId *string `type:"string"` + + StartTime *string `type:"string"` + + StatusCode *string `type:"string"` +} + +// String returns the string representation +func (s DescribeScalingActivitiesInput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeScalingActivitiesInput) GoString() string { + return s.String() +} + +// SetEndTime sets the EndTime field's value. +func (s *DescribeScalingActivitiesInput) SetEndTime(v string) *DescribeScalingActivitiesInput { + s.EndTime = &v + return s +} + +// SetPageNumber sets the PageNumber field's value. +func (s *DescribeScalingActivitiesInput) SetPageNumber(v int32) *DescribeScalingActivitiesInput { + s.PageNumber = &v + return s +} + +// SetPageSize sets the PageSize field's value. +func (s *DescribeScalingActivitiesInput) SetPageSize(v int32) *DescribeScalingActivitiesInput { + s.PageSize = &v + return s +} + +// SetScalingActivityIds sets the ScalingActivityIds field's value. +func (s *DescribeScalingActivitiesInput) SetScalingActivityIds(v []*string) *DescribeScalingActivitiesInput { + s.ScalingActivityIds = v + return s +} + +// SetScalingGroupId sets the ScalingGroupId field's value. +func (s *DescribeScalingActivitiesInput) SetScalingGroupId(v string) *DescribeScalingActivitiesInput { + s.ScalingGroupId = &v + return s +} + +// SetStartTime sets the StartTime field's value. +func (s *DescribeScalingActivitiesInput) SetStartTime(v string) *DescribeScalingActivitiesInput { + s.StartTime = &v + return s +} + +// SetStatusCode sets the StatusCode field's value. 
+func (s *DescribeScalingActivitiesInput) SetStatusCode(v string) *DescribeScalingActivitiesInput { + s.StatusCode = &v + return s +} + +type DescribeScalingActivitiesOutput struct { + _ struct{} `type:"structure"` + + Metadata *response.ResponseMetadata + + PageNumber *int32 `type:"int32"` + + PageSize *int32 `type:"int32"` + + ScalingActivities []*ScalingActivityForDescribeScalingActivitiesOutput `type:"list"` + + TotalCount *int32 `type:"int32"` +} + +// String returns the string representation +func (s DescribeScalingActivitiesOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeScalingActivitiesOutput) GoString() string { + return s.String() +} + +// SetPageNumber sets the PageNumber field's value. +func (s *DescribeScalingActivitiesOutput) SetPageNumber(v int32) *DescribeScalingActivitiesOutput { + s.PageNumber = &v + return s +} + +// SetPageSize sets the PageSize field's value. +func (s *DescribeScalingActivitiesOutput) SetPageSize(v int32) *DescribeScalingActivitiesOutput { + s.PageSize = &v + return s +} + +// SetScalingActivities sets the ScalingActivities field's value. +func (s *DescribeScalingActivitiesOutput) SetScalingActivities(v []*ScalingActivityForDescribeScalingActivitiesOutput) *DescribeScalingActivitiesOutput { + s.ScalingActivities = v + return s +} + +// SetTotalCount sets the TotalCount field's value. 
+func (s *DescribeScalingActivitiesOutput) SetTotalCount(v int32) *DescribeScalingActivitiesOutput { + s.TotalCount = &v + return s +} + +type RelatedInstanceForDescribeScalingActivitiesOutput struct { + _ struct{} `type:"structure"` + + InstanceId *string `type:"string"` + + Message *string `type:"string"` + + OperateType *string `type:"string"` + + Status *string `type:"string"` +} + +// String returns the string representation +func (s RelatedInstanceForDescribeScalingActivitiesOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s RelatedInstanceForDescribeScalingActivitiesOutput) GoString() string { + return s.String() +} + +// SetInstanceId sets the InstanceId field's value. +func (s *RelatedInstanceForDescribeScalingActivitiesOutput) SetInstanceId(v string) *RelatedInstanceForDescribeScalingActivitiesOutput { + s.InstanceId = &v + return s +} + +// SetMessage sets the Message field's value. +func (s *RelatedInstanceForDescribeScalingActivitiesOutput) SetMessage(v string) *RelatedInstanceForDescribeScalingActivitiesOutput { + s.Message = &v + return s +} + +// SetOperateType sets the OperateType field's value. +func (s *RelatedInstanceForDescribeScalingActivitiesOutput) SetOperateType(v string) *RelatedInstanceForDescribeScalingActivitiesOutput { + s.OperateType = &v + return s +} + +// SetStatus sets the Status field's value. 
+func (s *RelatedInstanceForDescribeScalingActivitiesOutput) SetStatus(v string) *RelatedInstanceForDescribeScalingActivitiesOutput { + s.Status = &v + return s +} + +type ScalingActivityForDescribeScalingActivitiesOutput struct { + _ struct{} `type:"structure"` + + ActivityType *string `type:"string"` + + ActualAdjustInstanceNumber *int32 `type:"int32"` + + Cooldown *int32 `type:"int32"` + + CreatedAt *string `type:"string"` + + CurrentInstanceNumber *int32 `type:"int32"` + + ExpectedRunTime *string `type:"string"` + + MaxInstanceNumber *int32 `type:"int32"` + + MinInstanceNumber *int32 `type:"int32"` + + RelatedInstances []*RelatedInstanceForDescribeScalingActivitiesOutput `type:"list"` + + ResultMsg *string `type:"string"` + + ScalingActivityId *string `type:"string"` + + ScalingGroupId *string `type:"string"` + + StatusCode *string `type:"string"` + + StoppedAt *string `type:"string"` + + TaskCategory *string `type:"string"` +} + +// String returns the string representation +func (s ScalingActivityForDescribeScalingActivitiesOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s ScalingActivityForDescribeScalingActivitiesOutput) GoString() string { + return s.String() +} + +// SetActivityType sets the ActivityType field's value. +func (s *ScalingActivityForDescribeScalingActivitiesOutput) SetActivityType(v string) *ScalingActivityForDescribeScalingActivitiesOutput { + s.ActivityType = &v + return s +} + +// SetActualAdjustInstanceNumber sets the ActualAdjustInstanceNumber field's value. +func (s *ScalingActivityForDescribeScalingActivitiesOutput) SetActualAdjustInstanceNumber(v int32) *ScalingActivityForDescribeScalingActivitiesOutput { + s.ActualAdjustInstanceNumber = &v + return s +} + +// SetCooldown sets the Cooldown field's value. 
+func (s *ScalingActivityForDescribeScalingActivitiesOutput) SetCooldown(v int32) *ScalingActivityForDescribeScalingActivitiesOutput { + s.Cooldown = &v + return s +} + +// SetCreatedAt sets the CreatedAt field's value. +func (s *ScalingActivityForDescribeScalingActivitiesOutput) SetCreatedAt(v string) *ScalingActivityForDescribeScalingActivitiesOutput { + s.CreatedAt = &v + return s +} + +// SetCurrentInstanceNumber sets the CurrentInstanceNumber field's value. +func (s *ScalingActivityForDescribeScalingActivitiesOutput) SetCurrentInstanceNumber(v int32) *ScalingActivityForDescribeScalingActivitiesOutput { + s.CurrentInstanceNumber = &v + return s +} + +// SetExpectedRunTime sets the ExpectedRunTime field's value. +func (s *ScalingActivityForDescribeScalingActivitiesOutput) SetExpectedRunTime(v string) *ScalingActivityForDescribeScalingActivitiesOutput { + s.ExpectedRunTime = &v + return s +} + +// SetMaxInstanceNumber sets the MaxInstanceNumber field's value. +func (s *ScalingActivityForDescribeScalingActivitiesOutput) SetMaxInstanceNumber(v int32) *ScalingActivityForDescribeScalingActivitiesOutput { + s.MaxInstanceNumber = &v + return s +} + +// SetMinInstanceNumber sets the MinInstanceNumber field's value. +func (s *ScalingActivityForDescribeScalingActivitiesOutput) SetMinInstanceNumber(v int32) *ScalingActivityForDescribeScalingActivitiesOutput { + s.MinInstanceNumber = &v + return s +} + +// SetRelatedInstances sets the RelatedInstances field's value. +func (s *ScalingActivityForDescribeScalingActivitiesOutput) SetRelatedInstances(v []*RelatedInstanceForDescribeScalingActivitiesOutput) *ScalingActivityForDescribeScalingActivitiesOutput { + s.RelatedInstances = v + return s +} + +// SetResultMsg sets the ResultMsg field's value. 
+func (s *ScalingActivityForDescribeScalingActivitiesOutput) SetResultMsg(v string) *ScalingActivityForDescribeScalingActivitiesOutput { + s.ResultMsg = &v + return s +} + +// SetScalingActivityId sets the ScalingActivityId field's value. +func (s *ScalingActivityForDescribeScalingActivitiesOutput) SetScalingActivityId(v string) *ScalingActivityForDescribeScalingActivitiesOutput { + s.ScalingActivityId = &v + return s +} + +// SetScalingGroupId sets the ScalingGroupId field's value. +func (s *ScalingActivityForDescribeScalingActivitiesOutput) SetScalingGroupId(v string) *ScalingActivityForDescribeScalingActivitiesOutput { + s.ScalingGroupId = &v + return s +} + +// SetStatusCode sets the StatusCode field's value. +func (s *ScalingActivityForDescribeScalingActivitiesOutput) SetStatusCode(v string) *ScalingActivityForDescribeScalingActivitiesOutput { + s.StatusCode = &v + return s +} + +// SetStoppedAt sets the StoppedAt field's value. +func (s *ScalingActivityForDescribeScalingActivitiesOutput) SetStoppedAt(v string) *ScalingActivityForDescribeScalingActivitiesOutput { + s.StoppedAt = &v + return s +} + +// SetTaskCategory sets the TaskCategory field's value. +func (s *ScalingActivityForDescribeScalingActivitiesOutput) SetTaskCategory(v string) *ScalingActivityForDescribeScalingActivitiesOutput { + s.TaskCategory = &v + return s +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_describe_scaling_configurations.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_describe_scaling_configurations.go new file mode 100644 index 000000000000..d19d6256b58e --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_describe_scaling_configurations.go @@ -0,0 +1,476 @@ +// Code generated by volcengine with private/model/cli/gen-api/main.go. DO NOT EDIT. 
+ +package autoscaling + +import ( + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/response" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil" +) + +const opDescribeScalingConfigurationsCommon = "DescribeScalingConfigurations" + +// DescribeScalingConfigurationsCommonRequest generates a "volcengine/request.Request" representing the +// client's request for the DescribeScalingConfigurationsCommon operation. The "output" return +// value will be populated with the DescribeScalingConfigurationsCommon request's response once the request completes +// successfully. +// +// Use "Send" method on the returned DescribeScalingConfigurationsCommon Request to send the API call to the service. +// the "output" return value is not valid until after DescribeScalingConfigurationsCommon Send returns without error. +// +// See DescribeScalingConfigurationsCommon for more information on using the DescribeScalingConfigurationsCommon +// API call, and error handling. +// +// // Example sending a request using the DescribeScalingConfigurationsCommonRequest method. 
+// req, resp := client.DescribeScalingConfigurationsCommonRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *AUTOSCALING) DescribeScalingConfigurationsCommonRequest(input *map[string]interface{}) (req *request.Request, output *map[string]interface{}) { + op := &request.Operation{ + Name: opDescribeScalingConfigurationsCommon, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &map[string]interface{}{} + } + + output = &map[string]interface{}{} + req = c.newRequest(op, input, output) + + return +} + +// DescribeScalingConfigurationsCommon API operation for AUTO_SCALING. +// +// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the VOLCENGINE API reference guide for AUTO_SCALING's +// API operation DescribeScalingConfigurationsCommon for usage and error information. +func (c *AUTOSCALING) DescribeScalingConfigurationsCommon(input *map[string]interface{}) (*map[string]interface{}, error) { + req, out := c.DescribeScalingConfigurationsCommonRequest(input) + return out, req.Send() +} + +// DescribeScalingConfigurationsCommonWithContext is the same as DescribeScalingConfigurationsCommon with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeScalingConfigurationsCommon for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur. +// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
+func (c *AUTOSCALING) DescribeScalingConfigurationsCommonWithContext(ctx volcengine.Context, input *map[string]interface{}, opts ...request.Option) (*map[string]interface{}, error) {
+	req, out := c.DescribeScalingConfigurationsCommonRequest(input)
+	req.SetContext(ctx)
+	req.ApplyOptions(opts...)
+	return out, req.Send()
+}
+
+const opDescribeScalingConfigurations = "DescribeScalingConfigurations"
+
+// DescribeScalingConfigurationsRequest generates a "volcengine/request.Request" representing the
+// client's request for the DescribeScalingConfigurations operation. The "output" return
+// value will be populated with the DescribeScalingConfigurations request's response once the request completes
+// successfully.
+//
+// Use "Send" method on the returned DescribeScalingConfigurations Request to send the API call to the service.
+// the "output" return value is not valid until after DescribeScalingConfigurations Send returns without error.
+//
+// See DescribeScalingConfigurations for more information on using the DescribeScalingConfigurations
+// API call, and error handling.
+//
+//    // Example sending a request using the DescribeScalingConfigurationsRequest method.
+//    req, resp := client.DescribeScalingConfigurationsRequest(params)
+//
+//    err := req.Send()
+//    if err == nil { // resp is now filled
+//        fmt.Println(resp)
+//    }
+func (c *AUTOSCALING) DescribeScalingConfigurationsRequest(input *DescribeScalingConfigurationsInput) (req *request.Request, output *DescribeScalingConfigurationsOutput) {
+	op := &request.Operation{
+		Name:       opDescribeScalingConfigurations,
+		HTTPMethod: "GET",
+		HTTPPath:   "/",
+	}
+
+	if input == nil {
+		input = &DescribeScalingConfigurationsInput{}
+	}
+
+	output = &DescribeScalingConfigurationsOutput{}
+	req = c.newRequest(op, input, output)
+
+	return
+}
+
+// DescribeScalingConfigurations API operation for AUTO_SCALING.
+//
+// Returns volcengineerr.Error for service API and SDK errors.
Use runtime type assertions
+// with volcengineerr.Error's Code and Message methods to get detailed information about
+// the error.
+//
+// See the VOLCENGINE API reference guide for AUTO_SCALING's
+// API operation DescribeScalingConfigurations for usage and error information.
+func (c *AUTOSCALING) DescribeScalingConfigurations(input *DescribeScalingConfigurationsInput) (*DescribeScalingConfigurationsOutput, error) {
+	req, out := c.DescribeScalingConfigurationsRequest(input)
+	return out, req.Send()
+}
+
+// DescribeScalingConfigurationsWithContext is the same as DescribeScalingConfigurations with the addition of
+// the ability to pass a context and additional request options.
+//
+// See DescribeScalingConfigurations for details on how to use this API operation.
+//
+// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur.
+// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/
+// for more information on using Contexts.
+func (c *AUTOSCALING) DescribeScalingConfigurationsWithContext(ctx volcengine.Context, input *DescribeScalingConfigurationsInput, opts ...request.Option) (*DescribeScalingConfigurationsOutput, error) {
+	req, out := c.DescribeScalingConfigurationsRequest(input)
+	req.SetContext(ctx)
+	req.ApplyOptions(opts...)
+ return out, req.Send() +} + +type DescribeScalingConfigurationsInput struct { + _ struct{} `type:"structure"` + + PageNumber *int32 `type:"int32"` + + PageSize *int32 `type:"int32"` + + ScalingConfigurationIds []*string `type:"list"` + + ScalingConfigurationNames []*string `type:"list"` + + ScalingGroupId *string `type:"string"` +} + +// String returns the string representation +func (s DescribeScalingConfigurationsInput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeScalingConfigurationsInput) GoString() string { + return s.String() +} + +// SetPageNumber sets the PageNumber field's value. +func (s *DescribeScalingConfigurationsInput) SetPageNumber(v int32) *DescribeScalingConfigurationsInput { + s.PageNumber = &v + return s +} + +// SetPageSize sets the PageSize field's value. +func (s *DescribeScalingConfigurationsInput) SetPageSize(v int32) *DescribeScalingConfigurationsInput { + s.PageSize = &v + return s +} + +// SetScalingConfigurationIds sets the ScalingConfigurationIds field's value. +func (s *DescribeScalingConfigurationsInput) SetScalingConfigurationIds(v []*string) *DescribeScalingConfigurationsInput { + s.ScalingConfigurationIds = v + return s +} + +// SetScalingConfigurationNames sets the ScalingConfigurationNames field's value. +func (s *DescribeScalingConfigurationsInput) SetScalingConfigurationNames(v []*string) *DescribeScalingConfigurationsInput { + s.ScalingConfigurationNames = v + return s +} + +// SetScalingGroupId sets the ScalingGroupId field's value. 
+func (s *DescribeScalingConfigurationsInput) SetScalingGroupId(v string) *DescribeScalingConfigurationsInput { + s.ScalingGroupId = &v + return s +} + +type DescribeScalingConfigurationsOutput struct { + _ struct{} `type:"structure"` + + Metadata *response.ResponseMetadata + + PageNumber *int32 `type:"int32"` + + PageSize *int32 `type:"int32"` + + ScalingConfigurations []*ScalingConfigurationForDescribeScalingConfigurationsOutput `type:"list"` + + TotalCount *int32 `type:"int32"` +} + +// String returns the string representation +func (s DescribeScalingConfigurationsOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeScalingConfigurationsOutput) GoString() string { + return s.String() +} + +// SetPageNumber sets the PageNumber field's value. +func (s *DescribeScalingConfigurationsOutput) SetPageNumber(v int32) *DescribeScalingConfigurationsOutput { + s.PageNumber = &v + return s +} + +// SetPageSize sets the PageSize field's value. +func (s *DescribeScalingConfigurationsOutput) SetPageSize(v int32) *DescribeScalingConfigurationsOutput { + s.PageSize = &v + return s +} + +// SetScalingConfigurations sets the ScalingConfigurations field's value. +func (s *DescribeScalingConfigurationsOutput) SetScalingConfigurations(v []*ScalingConfigurationForDescribeScalingConfigurationsOutput) *DescribeScalingConfigurationsOutput { + s.ScalingConfigurations = v + return s +} + +// SetTotalCount sets the TotalCount field's value. 
+func (s *DescribeScalingConfigurationsOutput) SetTotalCount(v int32) *DescribeScalingConfigurationsOutput { + s.TotalCount = &v + return s +} + +type EipForDescribeScalingConfigurationsOutput struct { + _ struct{} `type:"structure"` + + Bandwidth *int32 `type:"int32"` + + BillingType *string `type:"string"` + + ISP *string `type:"string"` +} + +// String returns the string representation +func (s EipForDescribeScalingConfigurationsOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s EipForDescribeScalingConfigurationsOutput) GoString() string { + return s.String() +} + +// SetBandwidth sets the Bandwidth field's value. +func (s *EipForDescribeScalingConfigurationsOutput) SetBandwidth(v int32) *EipForDescribeScalingConfigurationsOutput { + s.Bandwidth = &v + return s +} + +// SetBillingType sets the BillingType field's value. +func (s *EipForDescribeScalingConfigurationsOutput) SetBillingType(v string) *EipForDescribeScalingConfigurationsOutput { + s.BillingType = &v + return s +} + +// SetISP sets the ISP field's value. 
+func (s *EipForDescribeScalingConfigurationsOutput) SetISP(v string) *EipForDescribeScalingConfigurationsOutput { + s.ISP = &v + return s +} + +type ScalingConfigurationForDescribeScalingConfigurationsOutput struct { + _ struct{} `type:"structure"` + + CreatedAt *string `type:"string"` + + Eip *EipForDescribeScalingConfigurationsOutput `type:"structure"` + + HostName *string `type:"string"` + + ImageId *string `type:"string"` + + InstanceDescription *string `type:"string"` + + InstanceName *string `type:"string"` + + InstanceTypes []*string `type:"list"` + + KeyPairName *string `type:"string"` + + LifecycleState *string `type:"string"` + + ScalingConfigurationId *string `type:"string"` + + ScalingConfigurationName *string `type:"string"` + + ScalingGroupId *string `type:"string"` + + SecurityEnhancementStrategy *string `type:"string"` + + SecurityGroupIds []*string `type:"list"` + + UpdatedAt *string `type:"string"` + + UserData *string `type:"string"` + + Volumes []*VolumeForDescribeScalingConfigurationsOutput `type:"list"` + + ZoneId *string `type:"string"` +} + +// String returns the string representation +func (s ScalingConfigurationForDescribeScalingConfigurationsOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s ScalingConfigurationForDescribeScalingConfigurationsOutput) GoString() string { + return s.String() +} + +// SetCreatedAt sets the CreatedAt field's value. +func (s *ScalingConfigurationForDescribeScalingConfigurationsOutput) SetCreatedAt(v string) *ScalingConfigurationForDescribeScalingConfigurationsOutput { + s.CreatedAt = &v + return s +} + +// SetEip sets the Eip field's value. +func (s *ScalingConfigurationForDescribeScalingConfigurationsOutput) SetEip(v *EipForDescribeScalingConfigurationsOutput) *ScalingConfigurationForDescribeScalingConfigurationsOutput { + s.Eip = v + return s +} + +// SetHostName sets the HostName field's value. 
+func (s *ScalingConfigurationForDescribeScalingConfigurationsOutput) SetHostName(v string) *ScalingConfigurationForDescribeScalingConfigurationsOutput { + s.HostName = &v + return s +} + +// SetImageId sets the ImageId field's value. +func (s *ScalingConfigurationForDescribeScalingConfigurationsOutput) SetImageId(v string) *ScalingConfigurationForDescribeScalingConfigurationsOutput { + s.ImageId = &v + return s +} + +// SetInstanceDescription sets the InstanceDescription field's value. +func (s *ScalingConfigurationForDescribeScalingConfigurationsOutput) SetInstanceDescription(v string) *ScalingConfigurationForDescribeScalingConfigurationsOutput { + s.InstanceDescription = &v + return s +} + +// SetInstanceName sets the InstanceName field's value. +func (s *ScalingConfigurationForDescribeScalingConfigurationsOutput) SetInstanceName(v string) *ScalingConfigurationForDescribeScalingConfigurationsOutput { + s.InstanceName = &v + return s +} + +// SetInstanceTypes sets the InstanceTypes field's value. +func (s *ScalingConfigurationForDescribeScalingConfigurationsOutput) SetInstanceTypes(v []*string) *ScalingConfigurationForDescribeScalingConfigurationsOutput { + s.InstanceTypes = v + return s +} + +// SetKeyPairName sets the KeyPairName field's value. +func (s *ScalingConfigurationForDescribeScalingConfigurationsOutput) SetKeyPairName(v string) *ScalingConfigurationForDescribeScalingConfigurationsOutput { + s.KeyPairName = &v + return s +} + +// SetLifecycleState sets the LifecycleState field's value. +func (s *ScalingConfigurationForDescribeScalingConfigurationsOutput) SetLifecycleState(v string) *ScalingConfigurationForDescribeScalingConfigurationsOutput { + s.LifecycleState = &v + return s +} + +// SetScalingConfigurationId sets the ScalingConfigurationId field's value. 
+func (s *ScalingConfigurationForDescribeScalingConfigurationsOutput) SetScalingConfigurationId(v string) *ScalingConfigurationForDescribeScalingConfigurationsOutput { + s.ScalingConfigurationId = &v + return s +} + +// SetScalingConfigurationName sets the ScalingConfigurationName field's value. +func (s *ScalingConfigurationForDescribeScalingConfigurationsOutput) SetScalingConfigurationName(v string) *ScalingConfigurationForDescribeScalingConfigurationsOutput { + s.ScalingConfigurationName = &v + return s +} + +// SetScalingGroupId sets the ScalingGroupId field's value. +func (s *ScalingConfigurationForDescribeScalingConfigurationsOutput) SetScalingGroupId(v string) *ScalingConfigurationForDescribeScalingConfigurationsOutput { + s.ScalingGroupId = &v + return s +} + +// SetSecurityEnhancementStrategy sets the SecurityEnhancementStrategy field's value. +func (s *ScalingConfigurationForDescribeScalingConfigurationsOutput) SetSecurityEnhancementStrategy(v string) *ScalingConfigurationForDescribeScalingConfigurationsOutput { + s.SecurityEnhancementStrategy = &v + return s +} + +// SetSecurityGroupIds sets the SecurityGroupIds field's value. +func (s *ScalingConfigurationForDescribeScalingConfigurationsOutput) SetSecurityGroupIds(v []*string) *ScalingConfigurationForDescribeScalingConfigurationsOutput { + s.SecurityGroupIds = v + return s +} + +// SetUpdatedAt sets the UpdatedAt field's value. +func (s *ScalingConfigurationForDescribeScalingConfigurationsOutput) SetUpdatedAt(v string) *ScalingConfigurationForDescribeScalingConfigurationsOutput { + s.UpdatedAt = &v + return s +} + +// SetUserData sets the UserData field's value. +func (s *ScalingConfigurationForDescribeScalingConfigurationsOutput) SetUserData(v string) *ScalingConfigurationForDescribeScalingConfigurationsOutput { + s.UserData = &v + return s +} + +// SetVolumes sets the Volumes field's value. 
+func (s *ScalingConfigurationForDescribeScalingConfigurationsOutput) SetVolumes(v []*VolumeForDescribeScalingConfigurationsOutput) *ScalingConfigurationForDescribeScalingConfigurationsOutput { + s.Volumes = v + return s +} + +// SetZoneId sets the ZoneId field's value. +func (s *ScalingConfigurationForDescribeScalingConfigurationsOutput) SetZoneId(v string) *ScalingConfigurationForDescribeScalingConfigurationsOutput { + s.ZoneId = &v + return s +} + +type VolumeForDescribeScalingConfigurationsOutput struct { + _ struct{} `type:"structure"` + + DeleteWithInstance *bool `type:"boolean"` + + Size *int32 `type:"int32"` + + VolumeType *string `type:"string"` +} + +// String returns the string representation +func (s VolumeForDescribeScalingConfigurationsOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s VolumeForDescribeScalingConfigurationsOutput) GoString() string { + return s.String() +} + +// SetDeleteWithInstance sets the DeleteWithInstance field's value. +func (s *VolumeForDescribeScalingConfigurationsOutput) SetDeleteWithInstance(v bool) *VolumeForDescribeScalingConfigurationsOutput { + s.DeleteWithInstance = &v + return s +} + +// SetSize sets the Size field's value. +func (s *VolumeForDescribeScalingConfigurationsOutput) SetSize(v int32) *VolumeForDescribeScalingConfigurationsOutput { + s.Size = &v + return s +} + +// SetVolumeType sets the VolumeType field's value. 
+func (s *VolumeForDescribeScalingConfigurationsOutput) SetVolumeType(v string) *VolumeForDescribeScalingConfigurationsOutput { + s.VolumeType = &v + return s +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_describe_scaling_groups.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_describe_scaling_groups.go new file mode 100644 index 000000000000..7223e6427027 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_describe_scaling_groups.go @@ -0,0 +1,430 @@ +// Code generated by volcengine with private/model/cli/gen-api/main.go. DO NOT EDIT. + +package autoscaling + +import ( + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/response" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil" +) + +const opDescribeScalingGroupsCommon = "DescribeScalingGroups" + +// DescribeScalingGroupsCommonRequest generates a "volcengine/request.Request" representing the +// client's request for the DescribeScalingGroupsCommon operation. The "output" return +// value will be populated with the DescribeScalingGroupsCommon request's response once the request completes +// successfully. +// +// Use "Send" method on the returned DescribeScalingGroupsCommon Request to send the API call to the service. +// the "output" return value is not valid until after DescribeScalingGroupsCommon Send returns without error. +// +// See DescribeScalingGroupsCommon for more information on using the DescribeScalingGroupsCommon +// API call, and error handling. +// +// // Example sending a request using the DescribeScalingGroupsCommonRequest method. 
+// req, resp := client.DescribeScalingGroupsCommonRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *AUTOSCALING) DescribeScalingGroupsCommonRequest(input *map[string]interface{}) (req *request.Request, output *map[string]interface{}) { + op := &request.Operation{ + Name: opDescribeScalingGroupsCommon, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &map[string]interface{}{} + } + + output = &map[string]interface{}{} + req = c.newRequest(op, input, output) + + return +} + +// DescribeScalingGroupsCommon API operation for AUTO_SCALING. +// +// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the VOLCENGINE API reference guide for AUTO_SCALING's +// API operation DescribeScalingGroupsCommon for usage and error information. +func (c *AUTOSCALING) DescribeScalingGroupsCommon(input *map[string]interface{}) (*map[string]interface{}, error) { + req, out := c.DescribeScalingGroupsCommonRequest(input) + return out, req.Send() +} + +// DescribeScalingGroupsCommonWithContext is the same as DescribeScalingGroupsCommon with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeScalingGroupsCommon for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur. +// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
+func (c *AUTOSCALING) DescribeScalingGroupsCommonWithContext(ctx volcengine.Context, input *map[string]interface{}, opts ...request.Option) (*map[string]interface{}, error) { + req, out := c.DescribeScalingGroupsCommonRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDescribeScalingGroups = "DescribeScalingGroups" + +// DescribeScalingGroupsRequest generates a "volcengine/request.Request" representing the +// client's request for the DescribeScalingGroups operation. The "output" return +// value will be populated with the DescribeScalingGroups request's response once the request completes +// successfully. +// +// Use "Send" method on the returned DescribeScalingGroups Request to send the API call to the service. +// the "output" return value is not valid until after DescribeScalingGroups Send returns without error. +// +// See DescribeScalingGroups for more information on using the DescribeScalingGroups +// API call, and error handling. +// +// // Example sending a request using the DescribeScalingGroupsRequest method. +// req, resp := client.DescribeScalingGroupsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *AUTOSCALING) DescribeScalingGroupsRequest(input *DescribeScalingGroupsInput) (req *request.Request, output *DescribeScalingGroupsOutput) { + op := &request.Operation{ + Name: opDescribeScalingGroups, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &DescribeScalingGroupsInput{} + } + + output = &DescribeScalingGroupsOutput{} + req = c.newRequest(op, input, output) + + return +} + +// DescribeScalingGroups API operation for AUTO_SCALING. +// +// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error.
+// +// See the VOLCENGINE API reference guide for AUTO_SCALING's +// API operation DescribeScalingGroups for usage and error information. +func (c *AUTOSCALING) DescribeScalingGroups(input *DescribeScalingGroupsInput) (*DescribeScalingGroupsOutput, error) { + req, out := c.DescribeScalingGroupsRequest(input) + return out, req.Send() +} + +// DescribeScalingGroupsWithContext is the same as DescribeScalingGroups with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeScalingGroups for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur. +// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *AUTOSCALING) DescribeScalingGroupsWithContext(ctx volcengine.Context, input *DescribeScalingGroupsInput, opts ...request.Option) (*DescribeScalingGroupsOutput, error) { + req, out := c.DescribeScalingGroupsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +type DescribeScalingGroupsInput struct { + _ struct{} `type:"structure"` + + PageNumber *int32 `type:"int32"` + + PageSize *int32 `type:"int32"` + + ScalingGroupIds []*string `type:"list"` + + ScalingGroupNames []*string `type:"list"` +} + +// String returns the string representation +func (s DescribeScalingGroupsInput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeScalingGroupsInput) GoString() string { + return s.String() +} + +// SetPageNumber sets the PageNumber field's value. +func (s *DescribeScalingGroupsInput) SetPageNumber(v int32) *DescribeScalingGroupsInput { + s.PageNumber = &v + return s +} + +// SetPageSize sets the PageSize field's value.
+func (s *DescribeScalingGroupsInput) SetPageSize(v int32) *DescribeScalingGroupsInput { + s.PageSize = &v + return s +} + +// SetScalingGroupIds sets the ScalingGroupIds field's value. +func (s *DescribeScalingGroupsInput) SetScalingGroupIds(v []*string) *DescribeScalingGroupsInput { + s.ScalingGroupIds = v + return s +} + +// SetScalingGroupNames sets the ScalingGroupNames field's value. +func (s *DescribeScalingGroupsInput) SetScalingGroupNames(v []*string) *DescribeScalingGroupsInput { + s.ScalingGroupNames = v + return s +} + +type DescribeScalingGroupsOutput struct { + _ struct{} `type:"structure"` + + Metadata *response.ResponseMetadata + + PageNumber *int32 `type:"int32"` + + PageSize *int32 `type:"int32"` + + ScalingGroups []*ScalingGroupForDescribeScalingGroupsOutput `type:"list"` + + TotalCount *int32 `type:"int32"` +} + +// String returns the string representation +func (s DescribeScalingGroupsOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeScalingGroupsOutput) GoString() string { + return s.String() +} + +// SetPageNumber sets the PageNumber field's value. +func (s *DescribeScalingGroupsOutput) SetPageNumber(v int32) *DescribeScalingGroupsOutput { + s.PageNumber = &v + return s +} + +// SetPageSize sets the PageSize field's value. +func (s *DescribeScalingGroupsOutput) SetPageSize(v int32) *DescribeScalingGroupsOutput { + s.PageSize = &v + return s +} + +// SetScalingGroups sets the ScalingGroups field's value. +func (s *DescribeScalingGroupsOutput) SetScalingGroups(v []*ScalingGroupForDescribeScalingGroupsOutput) *DescribeScalingGroupsOutput { + s.ScalingGroups = v + return s +} + +// SetTotalCount sets the TotalCount field's value. 
+func (s *DescribeScalingGroupsOutput) SetTotalCount(v int32) *DescribeScalingGroupsOutput { + s.TotalCount = &v + return s +} + +type ScalingGroupForDescribeScalingGroupsOutput struct { + _ struct{} `type:"structure"` + + ActiveScalingConfigurationId *string `type:"string"` + + CreatedAt *string `type:"string"` + + DBInstanceIds []*string `type:"list"` + + DefaultCooldown *int32 `type:"int32"` + + DesireInstanceNumber *int32 `type:"int32"` + + InstanceTerminatePolicy *string `type:"string"` + + LifecycleState *string `type:"string"` + + MaxInstanceNumber *int32 `type:"int32"` + + MinInstanceNumber *int32 `type:"int32"` + + MultiAZPolicy *string `type:"string"` + + ScalingGroupId *string `type:"string"` + + ScalingGroupName *string `type:"string"` + + ServerGroupAttributes []*ServerGroupAttributeForDescribeScalingGroupsOutput `type:"list"` + + SubnetIds []*string `type:"list"` + + TotalInstanceCount *int32 `type:"int32"` + + UpdatedAt *string `type:"string"` + + VpcId *string `type:"string"` +} + +// String returns the string representation +func (s ScalingGroupForDescribeScalingGroupsOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s ScalingGroupForDescribeScalingGroupsOutput) GoString() string { + return s.String() +} + +// SetActiveScalingConfigurationId sets the ActiveScalingConfigurationId field's value. +func (s *ScalingGroupForDescribeScalingGroupsOutput) SetActiveScalingConfigurationId(v string) *ScalingGroupForDescribeScalingGroupsOutput { + s.ActiveScalingConfigurationId = &v + return s +} + +// SetCreatedAt sets the CreatedAt field's value. +func (s *ScalingGroupForDescribeScalingGroupsOutput) SetCreatedAt(v string) *ScalingGroupForDescribeScalingGroupsOutput { + s.CreatedAt = &v + return s +} + +// SetDBInstanceIds sets the DBInstanceIds field's value. 
+func (s *ScalingGroupForDescribeScalingGroupsOutput) SetDBInstanceIds(v []*string) *ScalingGroupForDescribeScalingGroupsOutput { + s.DBInstanceIds = v + return s +} + +// SetDefaultCooldown sets the DefaultCooldown field's value. +func (s *ScalingGroupForDescribeScalingGroupsOutput) SetDefaultCooldown(v int32) *ScalingGroupForDescribeScalingGroupsOutput { + s.DefaultCooldown = &v + return s +} + +// SetDesireInstanceNumber sets the DesireInstanceNumber field's value. +func (s *ScalingGroupForDescribeScalingGroupsOutput) SetDesireInstanceNumber(v int32) *ScalingGroupForDescribeScalingGroupsOutput { + s.DesireInstanceNumber = &v + return s +} + +// SetInstanceTerminatePolicy sets the InstanceTerminatePolicy field's value. +func (s *ScalingGroupForDescribeScalingGroupsOutput) SetInstanceTerminatePolicy(v string) *ScalingGroupForDescribeScalingGroupsOutput { + s.InstanceTerminatePolicy = &v + return s +} + +// SetLifecycleState sets the LifecycleState field's value. +func (s *ScalingGroupForDescribeScalingGroupsOutput) SetLifecycleState(v string) *ScalingGroupForDescribeScalingGroupsOutput { + s.LifecycleState = &v + return s +} + +// SetMaxInstanceNumber sets the MaxInstanceNumber field's value. +func (s *ScalingGroupForDescribeScalingGroupsOutput) SetMaxInstanceNumber(v int32) *ScalingGroupForDescribeScalingGroupsOutput { + s.MaxInstanceNumber = &v + return s +} + +// SetMinInstanceNumber sets the MinInstanceNumber field's value. +func (s *ScalingGroupForDescribeScalingGroupsOutput) SetMinInstanceNumber(v int32) *ScalingGroupForDescribeScalingGroupsOutput { + s.MinInstanceNumber = &v + return s +} + +// SetMultiAZPolicy sets the MultiAZPolicy field's value. +func (s *ScalingGroupForDescribeScalingGroupsOutput) SetMultiAZPolicy(v string) *ScalingGroupForDescribeScalingGroupsOutput { + s.MultiAZPolicy = &v + return s +} + +// SetScalingGroupId sets the ScalingGroupId field's value. 
+func (s *ScalingGroupForDescribeScalingGroupsOutput) SetScalingGroupId(v string) *ScalingGroupForDescribeScalingGroupsOutput { + s.ScalingGroupId = &v + return s +} + +// SetScalingGroupName sets the ScalingGroupName field's value. +func (s *ScalingGroupForDescribeScalingGroupsOutput) SetScalingGroupName(v string) *ScalingGroupForDescribeScalingGroupsOutput { + s.ScalingGroupName = &v + return s +} + +// SetServerGroupAttributes sets the ServerGroupAttributes field's value. +func (s *ScalingGroupForDescribeScalingGroupsOutput) SetServerGroupAttributes(v []*ServerGroupAttributeForDescribeScalingGroupsOutput) *ScalingGroupForDescribeScalingGroupsOutput { + s.ServerGroupAttributes = v + return s +} + +// SetSubnetIds sets the SubnetIds field's value. +func (s *ScalingGroupForDescribeScalingGroupsOutput) SetSubnetIds(v []*string) *ScalingGroupForDescribeScalingGroupsOutput { + s.SubnetIds = v + return s +} + +// SetTotalInstanceCount sets the TotalInstanceCount field's value. +func (s *ScalingGroupForDescribeScalingGroupsOutput) SetTotalInstanceCount(v int32) *ScalingGroupForDescribeScalingGroupsOutput { + s.TotalInstanceCount = &v + return s +} + +// SetUpdatedAt sets the UpdatedAt field's value. +func (s *ScalingGroupForDescribeScalingGroupsOutput) SetUpdatedAt(v string) *ScalingGroupForDescribeScalingGroupsOutput { + s.UpdatedAt = &v + return s +} + +// SetVpcId sets the VpcId field's value. 
+func (s *ScalingGroupForDescribeScalingGroupsOutput) SetVpcId(v string) *ScalingGroupForDescribeScalingGroupsOutput { + s.VpcId = &v + return s +} + +type ServerGroupAttributeForDescribeScalingGroupsOutput struct { + _ struct{} `type:"structure"` + + LoadBalancerId *string `type:"string"` + + Port *int32 `type:"int32"` + + ServerGroupId *string `type:"string"` + + Weight *int32 `type:"int32"` +} + +// String returns the string representation +func (s ServerGroupAttributeForDescribeScalingGroupsOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s ServerGroupAttributeForDescribeScalingGroupsOutput) GoString() string { + return s.String() +} + +// SetLoadBalancerId sets the LoadBalancerId field's value. +func (s *ServerGroupAttributeForDescribeScalingGroupsOutput) SetLoadBalancerId(v string) *ServerGroupAttributeForDescribeScalingGroupsOutput { + s.LoadBalancerId = &v + return s +} + +// SetPort sets the Port field's value. +func (s *ServerGroupAttributeForDescribeScalingGroupsOutput) SetPort(v int32) *ServerGroupAttributeForDescribeScalingGroupsOutput { + s.Port = &v + return s +} + +// SetServerGroupId sets the ServerGroupId field's value. +func (s *ServerGroupAttributeForDescribeScalingGroupsOutput) SetServerGroupId(v string) *ServerGroupAttributeForDescribeScalingGroupsOutput { + s.ServerGroupId = &v + return s +} + +// SetWeight sets the Weight field's value. 
+func (s *ServerGroupAttributeForDescribeScalingGroupsOutput) SetWeight(v int32) *ServerGroupAttributeForDescribeScalingGroupsOutput { + s.Weight = &v + return s +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_describe_scaling_instances.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_describe_scaling_instances.go new file mode 100644 index 000000000000..65a3cbeca42e --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_describe_scaling_instances.go @@ -0,0 +1,344 @@ +// Code generated by volcengine with private/model/cli/gen-api/main.go. DO NOT EDIT. + +package autoscaling + +import ( + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/response" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil" +) + +const opDescribeScalingInstancesCommon = "DescribeScalingInstances" + +// DescribeScalingInstancesCommonRequest generates a "volcengine/request.Request" representing the +// client's request for the DescribeScalingInstancesCommon operation. The "output" return +// value will be populated with the DescribeScalingInstancesCommon request's response once the request completes +// successfully. +// +// Use "Send" method on the returned DescribeScalingInstancesCommon Request to send the API call to the service. +// the "output" return value is not valid until after DescribeScalingInstancesCommon Send returns without error. +// +// See DescribeScalingInstancesCommon for more information on using the DescribeScalingInstancesCommon +// API call, and error handling. 
+// +// // Example sending a request using the DescribeScalingInstancesCommonRequest method. +// req, resp := client.DescribeScalingInstancesCommonRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *AUTOSCALING) DescribeScalingInstancesCommonRequest(input *map[string]interface{}) (req *request.Request, output *map[string]interface{}) { + op := &request.Operation{ + Name: opDescribeScalingInstancesCommon, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &map[string]interface{}{} + } + + output = &map[string]interface{}{} + req = c.newRequest(op, input, output) + + return +} + +// DescribeScalingInstancesCommon API operation for AUTO_SCALING. +// +// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the VOLCENGINE API reference guide for AUTO_SCALING's +// API operation DescribeScalingInstancesCommon for usage and error information. +func (c *AUTOSCALING) DescribeScalingInstancesCommon(input *map[string]interface{}) (*map[string]interface{}, error) { + req, out := c.DescribeScalingInstancesCommonRequest(input) + return out, req.Send() +} + +// DescribeScalingInstancesCommonWithContext is the same as DescribeScalingInstancesCommon with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeScalingInstancesCommon for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur. +// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
+func (c *AUTOSCALING) DescribeScalingInstancesCommonWithContext(ctx volcengine.Context, input *map[string]interface{}, opts ...request.Option) (*map[string]interface{}, error) { + req, out := c.DescribeScalingInstancesCommonRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDescribeScalingInstances = "DescribeScalingInstances" + +// DescribeScalingInstancesRequest generates a "volcengine/request.Request" representing the +// client's request for the DescribeScalingInstances operation. The "output" return +// value will be populated with the DescribeScalingInstances request's response once the request completes +// successfully. +// +// Use "Send" method on the returned DescribeScalingInstances Request to send the API call to the service. +// the "output" return value is not valid until after DescribeScalingInstances Send returns without error. +// +// See DescribeScalingInstances for more information on using the DescribeScalingInstances +// API call, and error handling. +// +// // Example sending a request using the DescribeScalingInstancesRequest method. +// req, resp := client.DescribeScalingInstancesRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *AUTOSCALING) DescribeScalingInstancesRequest(input *DescribeScalingInstancesInput) (req *request.Request, output *DescribeScalingInstancesOutput) { + op := &request.Operation{ + Name: opDescribeScalingInstances, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &DescribeScalingInstancesInput{} + } + + output = &DescribeScalingInstancesOutput{} + req = c.newRequest(op, input, output) + + return +} + +// DescribeScalingInstances API operation for AUTO_SCALING. +// +// Returns volcengineerr.Error for service API and SDK errors.
Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the VOLCENGINE API reference guide for AUTO_SCALING's +// API operation DescribeScalingInstances for usage and error information. +func (c *AUTOSCALING) DescribeScalingInstances(input *DescribeScalingInstancesInput) (*DescribeScalingInstancesOutput, error) { + req, out := c.DescribeScalingInstancesRequest(input) + return out, req.Send() +} + +// DescribeScalingInstancesWithContext is the same as DescribeScalingInstances with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeScalingInstances for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur. +// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *AUTOSCALING) DescribeScalingInstancesWithContext(ctx volcengine.Context, input *DescribeScalingInstancesInput, opts ...request.Option) (*DescribeScalingInstancesOutput, error) { + req, out := c.DescribeScalingInstancesRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...)
+ return out, req.Send() +} + +type DescribeScalingInstancesInput struct { + _ struct{} `type:"structure"` + + CreationType *string `type:"string"` + + InstanceIds []*string `type:"list"` + + PageNumber *int32 `type:"int32"` + + PageSize *int32 `type:"int32"` + + ScalingConfigurationId *string `type:"string"` + + ScalingGroupId *string `type:"string"` + + Status *string `type:"string"` +} + +// String returns the string representation +func (s DescribeScalingInstancesInput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeScalingInstancesInput) GoString() string { + return s.String() +} + +// SetCreationType sets the CreationType field's value. +func (s *DescribeScalingInstancesInput) SetCreationType(v string) *DescribeScalingInstancesInput { + s.CreationType = &v + return s +} + +// SetInstanceIds sets the InstanceIds field's value. +func (s *DescribeScalingInstancesInput) SetInstanceIds(v []*string) *DescribeScalingInstancesInput { + s.InstanceIds = v + return s +} + +// SetPageNumber sets the PageNumber field's value. +func (s *DescribeScalingInstancesInput) SetPageNumber(v int32) *DescribeScalingInstancesInput { + s.PageNumber = &v + return s +} + +// SetPageSize sets the PageSize field's value. +func (s *DescribeScalingInstancesInput) SetPageSize(v int32) *DescribeScalingInstancesInput { + s.PageSize = &v + return s +} + +// SetScalingConfigurationId sets the ScalingConfigurationId field's value. +func (s *DescribeScalingInstancesInput) SetScalingConfigurationId(v string) *DescribeScalingInstancesInput { + s.ScalingConfigurationId = &v + return s +} + +// SetScalingGroupId sets the ScalingGroupId field's value. +func (s *DescribeScalingInstancesInput) SetScalingGroupId(v string) *DescribeScalingInstancesInput { + s.ScalingGroupId = &v + return s +} + +// SetStatus sets the Status field's value. 
+func (s *DescribeScalingInstancesInput) SetStatus(v string) *DescribeScalingInstancesInput { + s.Status = &v + return s +} + +type DescribeScalingInstancesOutput struct { + _ struct{} `type:"structure"` + + Metadata *response.ResponseMetadata + + PageNumber *int32 `type:"int32"` + + PageSize *int32 `type:"int32"` + + ScalingInstances []*ScalingInstanceForDescribeScalingInstancesOutput `type:"list"` + + TotalCount *int32 `type:"int32"` +} + +// String returns the string representation +func (s DescribeScalingInstancesOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeScalingInstancesOutput) GoString() string { + return s.String() +} + +// SetPageNumber sets the PageNumber field's value. +func (s *DescribeScalingInstancesOutput) SetPageNumber(v int32) *DescribeScalingInstancesOutput { + s.PageNumber = &v + return s +} + +// SetPageSize sets the PageSize field's value. +func (s *DescribeScalingInstancesOutput) SetPageSize(v int32) *DescribeScalingInstancesOutput { + s.PageSize = &v + return s +} + +// SetScalingInstances sets the ScalingInstances field's value. +func (s *DescribeScalingInstancesOutput) SetScalingInstances(v []*ScalingInstanceForDescribeScalingInstancesOutput) *DescribeScalingInstancesOutput { + s.ScalingInstances = v + return s +} + +// SetTotalCount sets the TotalCount field's value. 
+func (s *DescribeScalingInstancesOutput) SetTotalCount(v int32) *DescribeScalingInstancesOutput { + s.TotalCount = &v + return s +} + +type ScalingInstanceForDescribeScalingInstancesOutput struct { + _ struct{} `type:"structure"` + + CreatedTime *string `type:"string"` + + CreationType *string `type:"string"` + + Entrusted *bool `type:"boolean"` + + InstanceId *string `type:"string"` + + ScalingConfigurationId *string `type:"string"` + + ScalingGroupId *string `type:"string"` + + ScalingPolicyId *string `type:"string"` + + Status *string `type:"string"` + + ZoneId *string `type:"string"` +} + +// String returns the string representation +func (s ScalingInstanceForDescribeScalingInstancesOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s ScalingInstanceForDescribeScalingInstancesOutput) GoString() string { + return s.String() +} + +// SetCreatedTime sets the CreatedTime field's value. +func (s *ScalingInstanceForDescribeScalingInstancesOutput) SetCreatedTime(v string) *ScalingInstanceForDescribeScalingInstancesOutput { + s.CreatedTime = &v + return s +} + +// SetCreationType sets the CreationType field's value. +func (s *ScalingInstanceForDescribeScalingInstancesOutput) SetCreationType(v string) *ScalingInstanceForDescribeScalingInstancesOutput { + s.CreationType = &v + return s +} + +// SetEntrusted sets the Entrusted field's value. +func (s *ScalingInstanceForDescribeScalingInstancesOutput) SetEntrusted(v bool) *ScalingInstanceForDescribeScalingInstancesOutput { + s.Entrusted = &v + return s +} + +// SetInstanceId sets the InstanceId field's value. +func (s *ScalingInstanceForDescribeScalingInstancesOutput) SetInstanceId(v string) *ScalingInstanceForDescribeScalingInstancesOutput { + s.InstanceId = &v + return s +} + +// SetScalingConfigurationId sets the ScalingConfigurationId field's value. 
+func (s *ScalingInstanceForDescribeScalingInstancesOutput) SetScalingConfigurationId(v string) *ScalingInstanceForDescribeScalingInstancesOutput { + s.ScalingConfigurationId = &v + return s +} + +// SetScalingGroupId sets the ScalingGroupId field's value. +func (s *ScalingInstanceForDescribeScalingInstancesOutput) SetScalingGroupId(v string) *ScalingInstanceForDescribeScalingInstancesOutput { + s.ScalingGroupId = &v + return s +} + +// SetScalingPolicyId sets the ScalingPolicyId field's value. +func (s *ScalingInstanceForDescribeScalingInstancesOutput) SetScalingPolicyId(v string) *ScalingInstanceForDescribeScalingInstancesOutput { + s.ScalingPolicyId = &v + return s +} + +// SetStatus sets the Status field's value. +func (s *ScalingInstanceForDescribeScalingInstancesOutput) SetStatus(v string) *ScalingInstanceForDescribeScalingInstancesOutput { + s.Status = &v + return s +} + +// SetZoneId sets the ZoneId field's value. +func (s *ScalingInstanceForDescribeScalingInstancesOutput) SetZoneId(v string) *ScalingInstanceForDescribeScalingInstancesOutput { + s.ZoneId = &v + return s +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_describe_scaling_policies.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_describe_scaling_policies.go new file mode 100644 index 000000000000..a2c9998cc266 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_describe_scaling_policies.go @@ -0,0 +1,482 @@ +// Code generated by volcengine with private/model/cli/gen-api/main.go. DO NOT EDIT. 
+ +package autoscaling + +import ( + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/response" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil" +) + +const opDescribeScalingPoliciesCommon = "DescribeScalingPolicies" + +// DescribeScalingPoliciesCommonRequest generates a "volcengine/request.Request" representing the +// client's request for the DescribeScalingPoliciesCommon operation. The "output" return +// value will be populated with the DescribeScalingPoliciesCommon request's response once the request completes +// successfully. +// +// Use "Send" method on the returned DescribeScalingPoliciesCommon Request to send the API call to the service. +// the "output" return value is not valid until after DescribeScalingPoliciesCommon Send returns without error. +// +// See DescribeScalingPoliciesCommon for more information on using the DescribeScalingPoliciesCommon +// API call, and error handling. +// +// // Example sending a request using the DescribeScalingPoliciesCommonRequest method. +// req, resp := client.DescribeScalingPoliciesCommonRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *AUTOSCALING) DescribeScalingPoliciesCommonRequest(input *map[string]interface{}) (req *request.Request, output *map[string]interface{}) { + op := &request.Operation{ + Name: opDescribeScalingPoliciesCommon, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &map[string]interface{}{} + } + + output = &map[string]interface{}{} + req = c.newRequest(op, input, output) + + return +} + +// DescribeScalingPoliciesCommon API operation for AUTO_SCALING. 
+// +// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the VOLCENGINE API reference guide for AUTO_SCALING's +// API operation DescribeScalingPoliciesCommon for usage and error information. +func (c *AUTOSCALING) DescribeScalingPoliciesCommon(input *map[string]interface{}) (*map[string]interface{}, error) { + req, out := c.DescribeScalingPoliciesCommonRequest(input) + return out, req.Send() +} + +// DescribeScalingPoliciesCommonWithContext is the same as DescribeScalingPoliciesCommon with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeScalingPoliciesCommon for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur. +// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *AUTOSCALING) DescribeScalingPoliciesCommonWithContext(ctx volcengine.Context, input *map[string]interface{}, opts ...request.Option) (*map[string]interface{}, error) { + req, out := c.DescribeScalingPoliciesCommonRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDescribeScalingPolicies = "DescribeScalingPolicies" + +// DescribeScalingPoliciesRequest generates a "volcengine/request.Request" representing the +// client's request for the DescribeScalingPolicies operation. The "output" return +// value will be populated with the DescribeScalingPolicies request's response once the request completes +// successfully. +// +// Use "Send" method on the returned DescribeScalingPolicies Request to send the API call to the service.
+// the "output" return value is not valid until after DescribeScalingPolicies Send returns without error. +// +// See DescribeScalingPolicies for more information on using the DescribeScalingPolicies +// API call, and error handling. +// +// // Example sending a request using the DescribeScalingPoliciesRequest method. +// req, resp := client.DescribeScalingPoliciesRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *AUTOSCALING) DescribeScalingPoliciesRequest(input *DescribeScalingPoliciesInput) (req *request.Request, output *DescribeScalingPoliciesOutput) { + op := &request.Operation{ + Name: opDescribeScalingPolicies, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &DescribeScalingPoliciesInput{} + } + + output = &DescribeScalingPoliciesOutput{} + req = c.newRequest(op, input, output) + + return +} + +// DescribeScalingPolicies API operation for AUTO_SCALING. +// +// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the VOLCENGINE API reference guide for AUTO_SCALING's +// API operation DescribeScalingPolicies for usage and error information. +func (c *AUTOSCALING) DescribeScalingPolicies(input *DescribeScalingPoliciesInput) (*DescribeScalingPoliciesOutput, error) { + req, out := c.DescribeScalingPoliciesRequest(input) + return out, req.Send() +} + +// DescribeScalingPoliciesWithContext is the same as DescribeScalingPolicies with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeScalingPolicies for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur. +// In the future the SDK may create sub-contexts for http.Requests.
See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *AUTOSCALING) DescribeScalingPoliciesWithContext(ctx volcengine.Context, input *DescribeScalingPoliciesInput, opts ...request.Option) (*DescribeScalingPoliciesOutput, error) { + req, out := c.DescribeScalingPoliciesRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +type AlarmPolicyForDescribeScalingPoliciesOutput struct { + _ struct{} `type:"structure"` + + Condition *ConditionForDescribeScalingPoliciesOutput `type:"structure"` + + EvaluationCount *int32 `type:"int32"` + + RuleType *string `type:"string"` +} + +// String returns the string representation +func (s AlarmPolicyForDescribeScalingPoliciesOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s AlarmPolicyForDescribeScalingPoliciesOutput) GoString() string { + return s.String() +} + +// SetCondition sets the Condition field's value. +func (s *AlarmPolicyForDescribeScalingPoliciesOutput) SetCondition(v *ConditionForDescribeScalingPoliciesOutput) *AlarmPolicyForDescribeScalingPoliciesOutput { + s.Condition = v + return s +} + +// SetEvaluationCount sets the EvaluationCount field's value. +func (s *AlarmPolicyForDescribeScalingPoliciesOutput) SetEvaluationCount(v int32) *AlarmPolicyForDescribeScalingPoliciesOutput { + s.EvaluationCount = &v + return s +} + +// SetRuleType sets the RuleType field's value. 
+func (s *AlarmPolicyForDescribeScalingPoliciesOutput) SetRuleType(v string) *AlarmPolicyForDescribeScalingPoliciesOutput { + s.RuleType = &v + return s +} + +type ConditionForDescribeScalingPoliciesOutput struct { + _ struct{} `type:"structure"` + + ComparisonOperator *string `type:"string"` + + MetricName *string `type:"string"` + + MetricUnit *string `type:"string"` + + Threshold *string `type:"string"` +} + +// String returns the string representation +func (s ConditionForDescribeScalingPoliciesOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s ConditionForDescribeScalingPoliciesOutput) GoString() string { + return s.String() +} + +// SetComparisonOperator sets the ComparisonOperator field's value. +func (s *ConditionForDescribeScalingPoliciesOutput) SetComparisonOperator(v string) *ConditionForDescribeScalingPoliciesOutput { + s.ComparisonOperator = &v + return s +} + +// SetMetricName sets the MetricName field's value. +func (s *ConditionForDescribeScalingPoliciesOutput) SetMetricName(v string) *ConditionForDescribeScalingPoliciesOutput { + s.MetricName = &v + return s +} + +// SetMetricUnit sets the MetricUnit field's value. +func (s *ConditionForDescribeScalingPoliciesOutput) SetMetricUnit(v string) *ConditionForDescribeScalingPoliciesOutput { + s.MetricUnit = &v + return s +} + +// SetThreshold sets the Threshold field's value. 
+func (s *ConditionForDescribeScalingPoliciesOutput) SetThreshold(v string) *ConditionForDescribeScalingPoliciesOutput { + s.Threshold = &v + return s +} + +type DescribeScalingPoliciesInput struct { + _ struct{} `type:"structure"` + + PageNumber *int32 `type:"int32"` + + PageSize *int32 `type:"int32"` + + ScalingGroupId *string `type:"string"` + + ScalingPolicyIds []*string `type:"list"` + + ScalingPolicyNames []*string `type:"list"` + + ScalingPolicyType *string `type:"string"` +} + +// String returns the string representation +func (s DescribeScalingPoliciesInput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeScalingPoliciesInput) GoString() string { + return s.String() +} + +// SetPageNumber sets the PageNumber field's value. +func (s *DescribeScalingPoliciesInput) SetPageNumber(v int32) *DescribeScalingPoliciesInput { + s.PageNumber = &v + return s +} + +// SetPageSize sets the PageSize field's value. +func (s *DescribeScalingPoliciesInput) SetPageSize(v int32) *DescribeScalingPoliciesInput { + s.PageSize = &v + return s +} + +// SetScalingGroupId sets the ScalingGroupId field's value. +func (s *DescribeScalingPoliciesInput) SetScalingGroupId(v string) *DescribeScalingPoliciesInput { + s.ScalingGroupId = &v + return s +} + +// SetScalingPolicyIds sets the ScalingPolicyIds field's value. +func (s *DescribeScalingPoliciesInput) SetScalingPolicyIds(v []*string) *DescribeScalingPoliciesInput { + s.ScalingPolicyIds = v + return s +} + +// SetScalingPolicyNames sets the ScalingPolicyNames field's value. +func (s *DescribeScalingPoliciesInput) SetScalingPolicyNames(v []*string) *DescribeScalingPoliciesInput { + s.ScalingPolicyNames = v + return s +} + +// SetScalingPolicyType sets the ScalingPolicyType field's value. 
+func (s *DescribeScalingPoliciesInput) SetScalingPolicyType(v string) *DescribeScalingPoliciesInput { + s.ScalingPolicyType = &v + return s +} + +type DescribeScalingPoliciesOutput struct { + _ struct{} `type:"structure"` + + Metadata *response.ResponseMetadata + + PageNumber *int32 `type:"int32"` + + PageSize *int32 `type:"int32"` + + ScalingPolicies []*ScalingPolicyForDescribeScalingPoliciesOutput `type:"list"` + + TotalCount *int32 `type:"int32"` +} + +// String returns the string representation +func (s DescribeScalingPoliciesOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeScalingPoliciesOutput) GoString() string { + return s.String() +} + +// SetPageNumber sets the PageNumber field's value. +func (s *DescribeScalingPoliciesOutput) SetPageNumber(v int32) *DescribeScalingPoliciesOutput { + s.PageNumber = &v + return s +} + +// SetPageSize sets the PageSize field's value. +func (s *DescribeScalingPoliciesOutput) SetPageSize(v int32) *DescribeScalingPoliciesOutput { + s.PageSize = &v + return s +} + +// SetScalingPolicies sets the ScalingPolicies field's value. +func (s *DescribeScalingPoliciesOutput) SetScalingPolicies(v []*ScalingPolicyForDescribeScalingPoliciesOutput) *DescribeScalingPoliciesOutput { + s.ScalingPolicies = v + return s +} + +// SetTotalCount sets the TotalCount field's value. 
+func (s *DescribeScalingPoliciesOutput) SetTotalCount(v int32) *DescribeScalingPoliciesOutput { + s.TotalCount = &v + return s +} + +type ScalingPolicyForDescribeScalingPoliciesOutput struct { + _ struct{} `type:"structure"` + + AdjustmentType *string `type:"string"` + + AdjustmentValue *int32 `type:"int32"` + + AlarmPolicy *AlarmPolicyForDescribeScalingPoliciesOutput `type:"structure"` + + Cooldown *int32 `type:"int32"` + + ScalingGroupId *string `type:"string"` + + ScalingPolicyId *string `type:"string"` + + ScalingPolicyName *string `type:"string"` + + ScalingPolicyType *string `type:"string"` + + ScheduledPolicy *ScheduledPolicyForDescribeScalingPoliciesOutput `type:"structure"` + + Status *string `type:"string"` +} + +// String returns the string representation +func (s ScalingPolicyForDescribeScalingPoliciesOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s ScalingPolicyForDescribeScalingPoliciesOutput) GoString() string { + return s.String() +} + +// SetAdjustmentType sets the AdjustmentType field's value. +func (s *ScalingPolicyForDescribeScalingPoliciesOutput) SetAdjustmentType(v string) *ScalingPolicyForDescribeScalingPoliciesOutput { + s.AdjustmentType = &v + return s +} + +// SetAdjustmentValue sets the AdjustmentValue field's value. +func (s *ScalingPolicyForDescribeScalingPoliciesOutput) SetAdjustmentValue(v int32) *ScalingPolicyForDescribeScalingPoliciesOutput { + s.AdjustmentValue = &v + return s +} + +// SetAlarmPolicy sets the AlarmPolicy field's value. +func (s *ScalingPolicyForDescribeScalingPoliciesOutput) SetAlarmPolicy(v *AlarmPolicyForDescribeScalingPoliciesOutput) *ScalingPolicyForDescribeScalingPoliciesOutput { + s.AlarmPolicy = v + return s +} + +// SetCooldown sets the Cooldown field's value. 
+func (s *ScalingPolicyForDescribeScalingPoliciesOutput) SetCooldown(v int32) *ScalingPolicyForDescribeScalingPoliciesOutput { + s.Cooldown = &v + return s +} + +// SetScalingGroupId sets the ScalingGroupId field's value. +func (s *ScalingPolicyForDescribeScalingPoliciesOutput) SetScalingGroupId(v string) *ScalingPolicyForDescribeScalingPoliciesOutput { + s.ScalingGroupId = &v + return s +} + +// SetScalingPolicyId sets the ScalingPolicyId field's value. +func (s *ScalingPolicyForDescribeScalingPoliciesOutput) SetScalingPolicyId(v string) *ScalingPolicyForDescribeScalingPoliciesOutput { + s.ScalingPolicyId = &v + return s +} + +// SetScalingPolicyName sets the ScalingPolicyName field's value. +func (s *ScalingPolicyForDescribeScalingPoliciesOutput) SetScalingPolicyName(v string) *ScalingPolicyForDescribeScalingPoliciesOutput { + s.ScalingPolicyName = &v + return s +} + +// SetScalingPolicyType sets the ScalingPolicyType field's value. +func (s *ScalingPolicyForDescribeScalingPoliciesOutput) SetScalingPolicyType(v string) *ScalingPolicyForDescribeScalingPoliciesOutput { + s.ScalingPolicyType = &v + return s +} + +// SetScheduledPolicy sets the ScheduledPolicy field's value. +func (s *ScalingPolicyForDescribeScalingPoliciesOutput) SetScheduledPolicy(v *ScheduledPolicyForDescribeScalingPoliciesOutput) *ScalingPolicyForDescribeScalingPoliciesOutput { + s.ScheduledPolicy = v + return s +} + +// SetStatus sets the Status field's value. 
+func (s *ScalingPolicyForDescribeScalingPoliciesOutput) SetStatus(v string) *ScalingPolicyForDescribeScalingPoliciesOutput { + s.Status = &v + return s +} + +type ScheduledPolicyForDescribeScalingPoliciesOutput struct { + _ struct{} `type:"structure"` + + LaunchTime *string `type:"string"` + + RecurrenceEndTime *string `type:"string"` + + RecurrenceStartTime *string `type:"string"` + + RecurrenceType *string `type:"string"` + + RecurrenceValue *string `type:"string"` +} + +// String returns the string representation +func (s ScheduledPolicyForDescribeScalingPoliciesOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s ScheduledPolicyForDescribeScalingPoliciesOutput) GoString() string { + return s.String() +} + +// SetLaunchTime sets the LaunchTime field's value. +func (s *ScheduledPolicyForDescribeScalingPoliciesOutput) SetLaunchTime(v string) *ScheduledPolicyForDescribeScalingPoliciesOutput { + s.LaunchTime = &v + return s +} + +// SetRecurrenceEndTime sets the RecurrenceEndTime field's value. +func (s *ScheduledPolicyForDescribeScalingPoliciesOutput) SetRecurrenceEndTime(v string) *ScheduledPolicyForDescribeScalingPoliciesOutput { + s.RecurrenceEndTime = &v + return s +} + +// SetRecurrenceStartTime sets the RecurrenceStartTime field's value. +func (s *ScheduledPolicyForDescribeScalingPoliciesOutput) SetRecurrenceStartTime(v string) *ScheduledPolicyForDescribeScalingPoliciesOutput { + s.RecurrenceStartTime = &v + return s +} + +// SetRecurrenceType sets the RecurrenceType field's value. +func (s *ScheduledPolicyForDescribeScalingPoliciesOutput) SetRecurrenceType(v string) *ScheduledPolicyForDescribeScalingPoliciesOutput { + s.RecurrenceType = &v + return s +} + +// SetRecurrenceValue sets the RecurrenceValue field's value. 
+func (s *ScheduledPolicyForDescribeScalingPoliciesOutput) SetRecurrenceValue(v string) *ScheduledPolicyForDescribeScalingPoliciesOutput { + s.RecurrenceValue = &v + return s +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_detach_db_instances.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_detach_db_instances.go new file mode 100644 index 000000000000..506692762e91 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_detach_db_instances.go @@ -0,0 +1,202 @@ +// Code generated by volcengine with private/model/cli/gen-api/main.go. DO NOT EDIT. + +package autoscaling + +import ( + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/response" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil" +) + +const opDetachDBInstancesCommon = "DetachDBInstances" + +// DetachDBInstancesCommonRequest generates a "volcengine/request.Request" representing the +// client's request for the DetachDBInstancesCommon operation. The "output" return +// value will be populated with the DetachDBInstancesCommon request's response once the request completes +// successfully. +// +// Use "Send" method on the returned DetachDBInstancesCommon Request to send the API call to the service. +// the "output" return value is not valid until after DetachDBInstancesCommon Send returns without error. +// +// See DetachDBInstancesCommon for more information on using the DetachDBInstancesCommon +// API call, and error handling. +// +// // Example sending a request using the DetachDBInstancesCommonRequest method. 
+// req, resp := client.DetachDBInstancesCommonRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *AUTOSCALING) DetachDBInstancesCommonRequest(input *map[string]interface{}) (req *request.Request, output *map[string]interface{}) { + op := &request.Operation{ + Name: opDetachDBInstancesCommon, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &map[string]interface{}{} + } + + output = &map[string]interface{}{} + req = c.newRequest(op, input, output) + + return +} + +// DetachDBInstancesCommon API operation for AUTO_SCALING. +// +// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the VOLCENGINE API reference guide for AUTO_SCALING's +// API operation DetachDBInstancesCommon for usage and error information. +func (c *AUTOSCALING) DetachDBInstancesCommon(input *map[string]interface{}) (*map[string]interface{}, error) { + req, out := c.DetachDBInstancesCommonRequest(input) + return out, req.Send() +} + +// DetachDBInstancesCommonWithContext is the same as DetachDBInstancesCommon with the addition of +// the ability to pass a context and additional request options. +// +// See DetachDBInstancesCommon for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur. +// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *AUTOSCALING) DetachDBInstancesCommonWithContext(ctx volcengine.Context, input *map[string]interface{}, opts ...request.Option) (*map[string]interface{}, error) { + req, out := c.DetachDBInstancesCommonRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+ return out, req.Send() +} + +const opDetachDBInstances = "DetachDBInstances" + +// DetachDBInstancesRequest generates a "volcengine/request.Request" representing the +// client's request for the DetachDBInstances operation. The "output" return +// value will be populated with the DetachDBInstances request's response once the request completes +// successfully. +// +// Use "Send" method on the returned DetachDBInstances Request to send the API call to the service. +// the "output" return value is not valid until after DetachDBInstances Send returns without error. +// +// See DetachDBInstances for more information on using the DetachDBInstances +// API call, and error handling. +// +// // Example sending a request using the DetachDBInstancesRequest method. +// req, resp := client.DetachDBInstancesRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *AUTOSCALING) DetachDBInstancesRequest(input *DetachDBInstancesInput) (req *request.Request, output *DetachDBInstancesOutput) { + op := &request.Operation{ + Name: opDetachDBInstances, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &DetachDBInstancesInput{} + } + + output = &DetachDBInstancesOutput{} + req = c.newRequest(op, input, output) + + return +} + +// DetachDBInstances API operation for AUTO_SCALING. +// +// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the VOLCENGINE API reference guide for AUTO_SCALING's +// API operation DetachDBInstances for usage and error information. 
+func (c *AUTOSCALING) DetachDBInstances(input *DetachDBInstancesInput) (*DetachDBInstancesOutput, error) { + req, out := c.DetachDBInstancesRequest(input) + return out, req.Send() +} + +// DetachDBInstancesWithContext is the same as DetachDBInstances with the addition of +// the ability to pass a context and additional request options. +// +// See DetachDBInstances for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur. +// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *AUTOSCALING) DetachDBInstancesWithContext(ctx volcengine.Context, input *DetachDBInstancesInput, opts ...request.Option) (*DetachDBInstancesOutput, error) { + req, out := c.DetachDBInstancesRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +type DetachDBInstancesInput struct { + _ struct{} `type:"structure"` + + DBInstanceIds []*string `type:"list"` + + ForceDetach *bool `type:"boolean"` + + ScalingGroupId *string `type:"string"` +} + +// String returns the string representation +func (s DetachDBInstancesInput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s DetachDBInstancesInput) GoString() string { + return s.String() +} + +// SetDBInstanceIds sets the DBInstanceIds field's value. +func (s *DetachDBInstancesInput) SetDBInstanceIds(v []*string) *DetachDBInstancesInput { + s.DBInstanceIds = v + return s +} + +// SetForceDetach sets the ForceDetach field's value. +func (s *DetachDBInstancesInput) SetForceDetach(v bool) *DetachDBInstancesInput { + s.ForceDetach = &v + return s +} + +// SetScalingGroupId sets the ScalingGroupId field's value. 
+func (s *DetachDBInstancesInput) SetScalingGroupId(v string) *DetachDBInstancesInput { + s.ScalingGroupId = &v + return s +} + +type DetachDBInstancesOutput struct { + _ struct{} `type:"structure"` + + Metadata *response.ResponseMetadata + + ScalingGroupId *string `type:"string"` +} + +// String returns the string representation +func (s DetachDBInstancesOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s DetachDBInstancesOutput) GoString() string { + return s.String() +} + +// SetScalingGroupId sets the ScalingGroupId field's value. +func (s *DetachDBInstancesOutput) SetScalingGroupId(v string) *DetachDBInstancesOutput { + s.ScalingGroupId = &v + return s +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_detach_instances.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_detach_instances.go new file mode 100644 index 000000000000..dca1b2d2093b --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_detach_instances.go @@ -0,0 +1,202 @@ +// Code generated by volcengine with private/model/cli/gen-api/main.go. DO NOT EDIT. + +package autoscaling + +import ( + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/response" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil" +) + +const opDetachInstancesCommon = "DetachInstances" + +// DetachInstancesCommonRequest generates a "volcengine/request.Request" representing the +// client's request for the DetachInstancesCommon operation. 
The "output" return +// value will be populated with the DetachInstancesCommon request's response once the request completes +// successfully. +// +// Use "Send" method on the returned DetachInstancesCommon Request to send the API call to the service. +// the "output" return value is not valid until after DetachInstancesCommon Send returns without error. +// +// See DetachInstancesCommon for more information on using the DetachInstancesCommon +// API call, and error handling. +// +// // Example sending a request using the DetachInstancesCommonRequest method. +// req, resp := client.DetachInstancesCommonRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *AUTOSCALING) DetachInstancesCommonRequest(input *map[string]interface{}) (req *request.Request, output *map[string]interface{}) { + op := &request.Operation{ + Name: opDetachInstancesCommon, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &map[string]interface{}{} + } + + output = &map[string]interface{}{} + req = c.newRequest(op, input, output) + + return +} + +// DetachInstancesCommon API operation for AUTO_SCALING. +// +// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the VOLCENGINE API reference guide for AUTO_SCALING's +// API operation DetachInstancesCommon for usage and error information. +func (c *AUTOSCALING) DetachInstancesCommon(input *map[string]interface{}) (*map[string]interface{}, error) { + req, out := c.DetachInstancesCommonRequest(input) + return out, req.Send() +} + +// DetachInstancesCommonWithContext is the same as DetachInstancesCommon with the addition of +// the ability to pass a context and additional request options. +// +// See DetachInstancesCommon for details on how to use this API operation. 
+// +// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur. +// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *AUTOSCALING) DetachInstancesCommonWithContext(ctx volcengine.Context, input *map[string]interface{}, opts ...request.Option) (*map[string]interface{}, error) { + req, out := c.DetachInstancesCommonRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDetachInstances = "DetachInstances" + +// DetachInstancesRequest generates a "volcengine/request.Request" representing the +// client's request for the DetachInstances operation. The "output" return +// value will be populated with the DetachInstances request's response once the request completes +// successfully. +// +// Use "Send" method on the returned DetachInstances Request to send the API call to the service. +// the "output" return value is not valid until after DetachInstances Send returns without error. +// +// See DetachInstances for more information on using the DetachInstances +// API call, and error handling. +// +// // Example sending a request using the DetachInstancesRequest method. +// req, resp := client.DetachInstancesRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *AUTOSCALING) DetachInstancesRequest(input *DetachInstancesInput) (req *request.Request, output *DetachInstancesOutput) { + op := &request.Operation{ + Name: opDetachInstances, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &DetachInstancesInput{} + } + + output = &DetachInstancesOutput{} + req = c.newRequest(op, input, output) + + return +} + +// DetachInstances API operation for AUTO_SCALING. +// +// Returns volcengineerr.Error for service API and SDK errors. 
Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the VOLCENGINE API reference guide for AUTO_SCALING's +// API operation DetachInstances for usage and error information. +func (c *AUTOSCALING) DetachInstances(input *DetachInstancesInput) (*DetachInstancesOutput, error) { + req, out := c.DetachInstancesRequest(input) + return out, req.Send() +} + +// DetachInstancesWithContext is the same as DetachInstances with the addition of +// the ability to pass a context and additional request options. +// +// See DetachInstances for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur. +// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *AUTOSCALING) DetachInstancesWithContext(ctx volcengine.Context, input *DetachInstancesInput, opts ...request.Option) (*DetachInstancesOutput, error) { + req, out := c.DetachInstancesRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +type DetachInstancesInput struct { + _ struct{} `type:"structure"` + + DecreaseDesiredCapacity *bool `type:"boolean"` + + DetachOption *string `type:"string"` + + InstanceIds []*string `type:"list"` + + ScalingGroupId *string `type:"string"` +} + +// String returns the string representation +func (s DetachInstancesInput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s DetachInstancesInput) GoString() string { + return s.String() +} + +// SetDecreaseDesiredCapacity sets the DecreaseDesiredCapacity field's value. 
+func (s *DetachInstancesInput) SetDecreaseDesiredCapacity(v bool) *DetachInstancesInput { + s.DecreaseDesiredCapacity = &v + return s +} + +// SetDetachOption sets the DetachOption field's value. +func (s *DetachInstancesInput) SetDetachOption(v string) *DetachInstancesInput { + s.DetachOption = &v + return s +} + +// SetInstanceIds sets the InstanceIds field's value. +func (s *DetachInstancesInput) SetInstanceIds(v []*string) *DetachInstancesInput { + s.InstanceIds = v + return s +} + +// SetScalingGroupId sets the ScalingGroupId field's value. +func (s *DetachInstancesInput) SetScalingGroupId(v string) *DetachInstancesInput { + s.ScalingGroupId = &v + return s +} + +type DetachInstancesOutput struct { + _ struct{} `type:"structure"` + + Metadata *response.ResponseMetadata +} + +// String returns the string representation +func (s DetachInstancesOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s DetachInstancesOutput) GoString() string { + return s.String() +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_detach_server_groups.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_detach_server_groups.go new file mode 100644 index 000000000000..f22c18dcce11 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_detach_server_groups.go @@ -0,0 +1,216 @@ +// Code generated by volcengine with private/model/cli/gen-api/main.go. DO NOT EDIT. 
+ +package autoscaling + +import ( + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/response" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil" +) + +const opDetachServerGroupsCommon = "DetachServerGroups" + +// DetachServerGroupsCommonRequest generates a "volcengine/request.Request" representing the +// client's request for the DetachServerGroupsCommon operation. The "output" return +// value will be populated with the DetachServerGroupsCommon request's response once the request completes +// successfully. +// +// Use "Send" method on the returned DetachServerGroupsCommon Request to send the API call to the service. +// the "output" return value is not valid until after DetachServerGroupsCommon Send returns without error. +// +// See DetachServerGroupsCommon for more information on using the DetachServerGroupsCommon +// API call, and error handling. +// +// // Example sending a request using the DetachServerGroupsCommonRequest method. +// req, resp := client.DetachServerGroupsCommonRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *AUTOSCALING) DetachServerGroupsCommonRequest(input *map[string]interface{}) (req *request.Request, output *map[string]interface{}) { + op := &request.Operation{ + Name: opDetachServerGroupsCommon, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &map[string]interface{}{} + } + + output = &map[string]interface{}{} + req = c.newRequest(op, input, output) + + return +} + +// DetachServerGroupsCommon API operation for AUTO_SCALING. +// +// Returns volcengineerr.Error for service API and SDK errors. 
Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the VOLCENGINE API reference guide for AUTO_SCALING's +// API operation DetachServerGroupsCommon for usage and error information. +func (c *AUTOSCALING) DetachServerGroupsCommon(input *map[string]interface{}) (*map[string]interface{}, error) { + req, out := c.DetachServerGroupsCommonRequest(input) + return out, req.Send() +} + +// DetachServerGroupsCommonWithContext is the same as DetachServerGroupsCommon with the addition of +// the ability to pass a context and additional request options. +// +// See DetachServerGroupsCommon for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur. +// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *AUTOSCALING) DetachServerGroupsCommonWithContext(ctx volcengine.Context, input *map[string]interface{}, opts ...request.Option) (*map[string]interface{}, error) { + req, out := c.DetachServerGroupsCommonRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDetachServerGroups = "DetachServerGroups" + +// DetachServerGroupsRequest generates a "volcengine/request.Request" representing the +// client's request for the DetachServerGroups operation. The "output" return +// value will be populated with the DetachServerGroups request's response once the request completes +// successfully. +// +// Use "Send" method on the returned DetachServerGroups Request to send the API call to the service. +// the "output" return value is not valid until after DetachServerGroups Send returns without error. +// +// See DetachServerGroups for more information on using the DetachServerGroups +// API call, and error handling. 
+// +// // Example sending a request using the DetachServerGroupsRequest method. +// req, resp := client.DetachServerGroupsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *AUTOSCALING) DetachServerGroupsRequest(input *DetachServerGroupsInput) (req *request.Request, output *DetachServerGroupsOutput) { + op := &request.Operation{ + Name: opDetachServerGroups, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &DetachServerGroupsInput{} + } + + output = &DetachServerGroupsOutput{} + req = c.newRequest(op, input, output) + + return +} + +// DetachServerGroups API operation for AUTO_SCALING. +// +// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the VOLCENGINE API reference guide for AUTO_SCALING's +// API operation DetachServerGroups for usage and error information. +func (c *AUTOSCALING) DetachServerGroups(input *DetachServerGroupsInput) (*DetachServerGroupsOutput, error) { + req, out := c.DetachServerGroupsRequest(input) + return out, req.Send() +} + +// DetachServerGroupsWithContext is the same as DetachServerGroups with the addition of +// the ability to pass a context and additional request options. +// +// See DetachServerGroups for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur. +// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *AUTOSCALING) DetachServerGroupsWithContext(ctx volcengine.Context, input *DetachServerGroupsInput, opts ...request.Option) (*DetachServerGroupsOutput, error) { + req, out := c.DetachServerGroupsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+ return out, req.Send() +} + +type DetachServerGroupsInput struct { + _ struct{} `type:"structure"` + + ScalingGroupId *string `type:"string"` + + ServerGroupAttributes []*ServerGroupAttributeForDetachServerGroupsInput `type:"list"` +} + +// String returns the string representation +func (s DetachServerGroupsInput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s DetachServerGroupsInput) GoString() string { + return s.String() +} + +// SetScalingGroupId sets the ScalingGroupId field's value. +func (s *DetachServerGroupsInput) SetScalingGroupId(v string) *DetachServerGroupsInput { + s.ScalingGroupId = &v + return s +} + +// SetServerGroupAttributes sets the ServerGroupAttributes field's value. +func (s *DetachServerGroupsInput) SetServerGroupAttributes(v []*ServerGroupAttributeForDetachServerGroupsInput) *DetachServerGroupsInput { + s.ServerGroupAttributes = v + return s +} + +type DetachServerGroupsOutput struct { + _ struct{} `type:"structure"` + + Metadata *response.ResponseMetadata + + ScalingGroupId *string `type:"string"` +} + +// String returns the string representation +func (s DetachServerGroupsOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s DetachServerGroupsOutput) GoString() string { + return s.String() +} + +// SetScalingGroupId sets the ScalingGroupId field's value. 
+func (s *DetachServerGroupsOutput) SetScalingGroupId(v string) *DetachServerGroupsOutput { + s.ScalingGroupId = &v + return s +} + +type ServerGroupAttributeForDetachServerGroupsInput struct { + _ struct{} `type:"structure"` + + ServerGroupId *string `type:"string"` +} + +// String returns the string representation +func (s ServerGroupAttributeForDetachServerGroupsInput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s ServerGroupAttributeForDetachServerGroupsInput) GoString() string { + return s.String() +} + +// SetServerGroupId sets the ServerGroupId field's value. +func (s *ServerGroupAttributeForDetachServerGroupsInput) SetServerGroupId(v string) *ServerGroupAttributeForDetachServerGroupsInput { + s.ServerGroupId = &v + return s +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_disable_scaling_group.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_disable_scaling_group.go new file mode 100644 index 000000000000..592dbd081d8d --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_disable_scaling_group.go @@ -0,0 +1,186 @@ +// Code generated by volcengine with private/model/cli/gen-api/main.go. DO NOT EDIT. 
+ +package autoscaling + +import ( + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/response" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil" +) + +const opDisableScalingGroupCommon = "DisableScalingGroup" + +// DisableScalingGroupCommonRequest generates a "volcengine/request.Request" representing the +// client's request for the DisableScalingGroupCommon operation. The "output" return +// value will be populated with the DisableScalingGroupCommon request's response once the request completes +// successfully. +// +// Use "Send" method on the returned DisableScalingGroupCommon Request to send the API call to the service. +// the "output" return value is not valid until after DisableScalingGroupCommon Send returns without error. +// +// See DisableScalingGroupCommon for more information on using the DisableScalingGroupCommon +// API call, and error handling. +// +// // Example sending a request using the DisableScalingGroupCommonRequest method. +// req, resp := client.DisableScalingGroupCommonRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *AUTOSCALING) DisableScalingGroupCommonRequest(input *map[string]interface{}) (req *request.Request, output *map[string]interface{}) { + op := &request.Operation{ + Name: opDisableScalingGroupCommon, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &map[string]interface{}{} + } + + output = &map[string]interface{}{} + req = c.newRequest(op, input, output) + + return +} + +// DisableScalingGroupCommon API operation for AUTO_SCALING. +// +// Returns volcengineerr.Error for service API and SDK errors. 
Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the VOLCENGINE API reference guide for AUTO_SCALING's +// API operation DisableScalingGroupCommon for usage and error information. +func (c *AUTOSCALING) DisableScalingGroupCommon(input *map[string]interface{}) (*map[string]interface{}, error) { + req, out := c.DisableScalingGroupCommonRequest(input) + return out, req.Send() +} + +// DisableScalingGroupCommonWithContext is the same as DisableScalingGroupCommon with the addition of +// the ability to pass a context and additional request options. +// +// See DisableScalingGroupCommon for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur. +// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *AUTOSCALING) DisableScalingGroupCommonWithContext(ctx volcengine.Context, input *map[string]interface{}, opts ...request.Option) (*map[string]interface{}, error) { + req, out := c.DisableScalingGroupCommonRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDisableScalingGroup = "DisableScalingGroup" + +// DisableScalingGroupRequest generates a "volcengine/request.Request" representing the +// client's request for the DisableScalingGroup operation. The "output" return +// value will be populated with the DisableScalingGroupCommon request's response once the request completes +// successfully. +// +// Use "Send" method on the returned DisableScalingGroupCommon Request to send the API call to the service. +// the "output" return value is not valid until after DisableScalingGroupCommon Send returns without error. 
+//
+// See DisableScalingGroup for more information on using the DisableScalingGroup
+// API call, and error handling.
+//
+// // Example sending a request using the DisableScalingGroupRequest method.
+// req, resp := client.DisableScalingGroupRequest(params)
+//
+// err := req.Send()
+// if err == nil { // resp is now filled
+// fmt.Println(resp)
+// }
+func (c *AUTOSCALING) DisableScalingGroupRequest(input *DisableScalingGroupInput) (req *request.Request, output *DisableScalingGroupOutput) {
+ op := &request.Operation{
+ Name: opDisableScalingGroup,
+ HTTPMethod: "GET",
+ HTTPPath: "/",
+ }
+
+ if input == nil {
+ input = &DisableScalingGroupInput{}
+ }
+
+ output = &DisableScalingGroupOutput{}
+ req = c.newRequest(op, input, output)
+
+ return
+}
+
+// DisableScalingGroup API operation for AUTO_SCALING.
+//
+// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions
+// with volcengineerr.Error's Code and Message methods to get detailed information about
+// the error.
+//
+// See the VOLCENGINE API reference guide for AUTO_SCALING's
+// API operation DisableScalingGroup for usage and error information.
+func (c *AUTOSCALING) DisableScalingGroup(input *DisableScalingGroupInput) (*DisableScalingGroupOutput, error) {
+ req, out := c.DisableScalingGroupRequest(input)
+ return out, req.Send()
+}
+
+// DisableScalingGroupWithContext is the same as DisableScalingGroup with the addition of
+// the ability to pass a context and additional request options.
+//
+// See DisableScalingGroup for details on how to use this API operation.
+//
+// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur.
+// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/
+// for more information on using Contexts.
+func (c *AUTOSCALING) DisableScalingGroupWithContext(ctx volcengine.Context, input *DisableScalingGroupInput, opts ...request.Option) (*DisableScalingGroupOutput, error) { + req, out := c.DisableScalingGroupRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +type DisableScalingGroupInput struct { + _ struct{} `type:"structure"` + + ScalingGroupId *string `type:"string"` +} + +// String returns the string representation +func (s DisableScalingGroupInput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s DisableScalingGroupInput) GoString() string { + return s.String() +} + +// SetScalingGroupId sets the ScalingGroupId field's value. +func (s *DisableScalingGroupInput) SetScalingGroupId(v string) *DisableScalingGroupInput { + s.ScalingGroupId = &v + return s +} + +type DisableScalingGroupOutput struct { + _ struct{} `type:"structure"` + + Metadata *response.ResponseMetadata + + ScalingGroupId *string `type:"string"` +} + +// String returns the string representation +func (s DisableScalingGroupOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s DisableScalingGroupOutput) GoString() string { + return s.String() +} + +// SetScalingGroupId sets the ScalingGroupId field's value. 
+func (s *DisableScalingGroupOutput) SetScalingGroupId(v string) *DisableScalingGroupOutput { + s.ScalingGroupId = &v + return s +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_disable_scaling_policy.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_disable_scaling_policy.go new file mode 100644 index 000000000000..7d6813709550 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_disable_scaling_policy.go @@ -0,0 +1,186 @@ +// Code generated by volcengine with private/model/cli/gen-api/main.go. DO NOT EDIT. + +package autoscaling + +import ( + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/response" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil" +) + +const opDisableScalingPolicyCommon = "DisableScalingPolicy" + +// DisableScalingPolicyCommonRequest generates a "volcengine/request.Request" representing the +// client's request for the DisableScalingPolicyCommon operation. The "output" return +// value will be populated with the DisableScalingPolicyCommon request's response once the request completes +// successfully. +// +// Use "Send" method on the returned DisableScalingPolicyCommon Request to send the API call to the service. +// the "output" return value is not valid until after DisableScalingPolicyCommon Send returns without error. +// +// See DisableScalingPolicyCommon for more information on using the DisableScalingPolicyCommon +// API call, and error handling. +// +// // Example sending a request using the DisableScalingPolicyCommonRequest method. 
+// req, resp := client.DisableScalingPolicyCommonRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *AUTOSCALING) DisableScalingPolicyCommonRequest(input *map[string]interface{}) (req *request.Request, output *map[string]interface{}) { + op := &request.Operation{ + Name: opDisableScalingPolicyCommon, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &map[string]interface{}{} + } + + output = &map[string]interface{}{} + req = c.newRequest(op, input, output) + + return +} + +// DisableScalingPolicyCommon API operation for AUTO_SCALING. +// +// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the VOLCENGINE API reference guide for AUTO_SCALING's +// API operation DisableScalingPolicyCommon for usage and error information. +func (c *AUTOSCALING) DisableScalingPolicyCommon(input *map[string]interface{}) (*map[string]interface{}, error) { + req, out := c.DisableScalingPolicyCommonRequest(input) + return out, req.Send() +} + +// DisableScalingPolicyCommonWithContext is the same as DisableScalingPolicyCommon with the addition of +// the ability to pass a context and additional request options. +// +// See DisableScalingPolicyCommon for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur. +// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *AUTOSCALING) DisableScalingPolicyCommonWithContext(ctx volcengine.Context, input *map[string]interface{}, opts ...request.Option) (*map[string]interface{}, error) { + req, out := c.DisableScalingPolicyCommonRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+ return out, req.Send() +} + +const opDisableScalingPolicy = "DisableScalingPolicy" + +// DisableScalingPolicyRequest generates a "volcengine/request.Request" representing the +// client's request for the DisableScalingPolicy operation. The "output" return +// value will be populated with the DisableScalingPolicyCommon request's response once the request completes +// successfully. +// +// Use "Send" method on the returned DisableScalingPolicyCommon Request to send the API call to the service. +// the "output" return value is not valid until after DisableScalingPolicyCommon Send returns without error. +// +// See DisableScalingPolicy for more information on using the DisableScalingPolicy +// API call, and error handling. +// +// // Example sending a request using the DisableScalingPolicyRequest method. +// req, resp := client.DisableScalingPolicyRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *AUTOSCALING) DisableScalingPolicyRequest(input *DisableScalingPolicyInput) (req *request.Request, output *DisableScalingPolicyOutput) { + op := &request.Operation{ + Name: opDisableScalingPolicy, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &DisableScalingPolicyInput{} + } + + output = &DisableScalingPolicyOutput{} + req = c.newRequest(op, input, output) + + return +} + +// DisableScalingPolicy API operation for AUTO_SCALING. +// +// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the VOLCENGINE API reference guide for AUTO_SCALING's +// API operation DisableScalingPolicy for usage and error information. 
+func (c *AUTOSCALING) DisableScalingPolicy(input *DisableScalingPolicyInput) (*DisableScalingPolicyOutput, error) {
+ req, out := c.DisableScalingPolicyRequest(input)
+ return out, req.Send()
+}
+
+// DisableScalingPolicyWithContext is the same as DisableScalingPolicy with the addition of
+// the ability to pass a context and additional request options.
+//
+// See DisableScalingPolicy for details on how to use this API operation.
+//
+// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur.
+// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/
+// for more information on using Contexts.
+func (c *AUTOSCALING) DisableScalingPolicyWithContext(ctx volcengine.Context, input *DisableScalingPolicyInput, opts ...request.Option) (*DisableScalingPolicyOutput, error) {
+ req, out := c.DisableScalingPolicyRequest(input)
+ req.SetContext(ctx)
+ req.ApplyOptions(opts...)
+ return out, req.Send()
+}
+
+type DisableScalingPolicyInput struct {
+ _ struct{} `type:"structure"`
+
+ ScalingPolicyId *string `type:"string"`
+}
+
+// String returns the string representation
+func (s DisableScalingPolicyInput) String() string {
+ return volcengineutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s DisableScalingPolicyInput) GoString() string {
+ return s.String()
+}
+
+// SetScalingPolicyId sets the ScalingPolicyId field's value.
+func (s *DisableScalingPolicyInput) SetScalingPolicyId(v string) *DisableScalingPolicyInput { + s.ScalingPolicyId = &v + return s +} + +type DisableScalingPolicyOutput struct { + _ struct{} `type:"structure"` + + Metadata *response.ResponseMetadata + + ScalingPolicyId *string `type:"string"` +} + +// String returns the string representation +func (s DisableScalingPolicyOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s DisableScalingPolicyOutput) GoString() string { + return s.String() +} + +// SetScalingPolicyId sets the ScalingPolicyId field's value. +func (s *DisableScalingPolicyOutput) SetScalingPolicyId(v string) *DisableScalingPolicyOutput { + s.ScalingPolicyId = &v + return s +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_enable_scaling_configuration.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_enable_scaling_configuration.go new file mode 100644 index 000000000000..9e1ffeac04ed --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_enable_scaling_configuration.go @@ -0,0 +1,194 @@ +// Code generated by volcengine with private/model/cli/gen-api/main.go. DO NOT EDIT. 
+ +package autoscaling + +import ( + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/response" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil" +) + +const opEnableScalingConfigurationCommon = "EnableScalingConfiguration" + +// EnableScalingConfigurationCommonRequest generates a "volcengine/request.Request" representing the +// client's request for the EnableScalingConfigurationCommon operation. The "output" return +// value will be populated with the EnableScalingConfigurationCommon request's response once the request completes +// successfully. +// +// Use "Send" method on the returned EnableScalingConfigurationCommon Request to send the API call to the service. +// the "output" return value is not valid until after EnableScalingConfigurationCommon Send returns without error. +// +// See EnableScalingConfigurationCommon for more information on using the EnableScalingConfigurationCommon +// API call, and error handling. +// +// // Example sending a request using the EnableScalingConfigurationCommonRequest method. +// req, resp := client.EnableScalingConfigurationCommonRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *AUTOSCALING) EnableScalingConfigurationCommonRequest(input *map[string]interface{}) (req *request.Request, output *map[string]interface{}) { + op := &request.Operation{ + Name: opEnableScalingConfigurationCommon, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &map[string]interface{}{} + } + + output = &map[string]interface{}{} + req = c.newRequest(op, input, output) + + return +} + +// EnableScalingConfigurationCommon API operation for AUTO_SCALING. 
+// +// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the VOLCENGINE API reference guide for AUTO_SCALING's +// API operation EnableScalingConfigurationCommon for usage and error information. +func (c *AUTOSCALING) EnableScalingConfigurationCommon(input *map[string]interface{}) (*map[string]interface{}, error) { + req, out := c.EnableScalingConfigurationCommonRequest(input) + return out, req.Send() +} + +// EnableScalingConfigurationCommonWithContext is the same as EnableScalingConfigurationCommon with the addition of +// the ability to pass a context and additional request options. +// +// See EnableScalingConfigurationCommon for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur. +// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *AUTOSCALING) EnableScalingConfigurationCommonWithContext(ctx volcengine.Context, input *map[string]interface{}, opts ...request.Option) (*map[string]interface{}, error) { + req, out := c.EnableScalingConfigurationCommonRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opEnableScalingConfiguration = "EnableScalingConfiguration" + +// EnableScalingConfigurationRequest generates a "volcengine/request.Request" representing the +// client's request for the EnableScalingConfiguration operation. The "output" return +// value will be populated with the EnableScalingConfigurationCommon request's response once the request completes +// successfully. +// +// Use "Send" method on the returned EnableScalingConfigurationCommon Request to send the API call to the service. 
+// the "output" return value is not valid until after EnableScalingConfigurationCommon Send returns without error.
+//
+// See EnableScalingConfiguration for more information on using the EnableScalingConfiguration
+// API call, and error handling.
+//
+// // Example sending a request using the EnableScalingConfigurationRequest method.
+// req, resp := client.EnableScalingConfigurationRequest(params)
+//
+// err := req.Send()
+// if err == nil { // resp is now filled
+// fmt.Println(resp)
+// }
+func (c *AUTOSCALING) EnableScalingConfigurationRequest(input *EnableScalingConfigurationInput) (req *request.Request, output *EnableScalingConfigurationOutput) {
+ op := &request.Operation{
+ Name: opEnableScalingConfiguration,
+ HTTPMethod: "GET",
+ HTTPPath: "/",
+ }
+
+ if input == nil {
+ input = &EnableScalingConfigurationInput{}
+ }
+
+ output = &EnableScalingConfigurationOutput{}
+ req = c.newRequest(op, input, output)
+
+ return
+}
+
+// EnableScalingConfiguration API operation for AUTO_SCALING.
+//
+// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions
+// with volcengineerr.Error's Code and Message methods to get detailed information about
+// the error.
+//
+// See the VOLCENGINE API reference guide for AUTO_SCALING's
+// API operation EnableScalingConfiguration for usage and error information.
+func (c *AUTOSCALING) EnableScalingConfiguration(input *EnableScalingConfigurationInput) (*EnableScalingConfigurationOutput, error) {
+ req, out := c.EnableScalingConfigurationRequest(input)
+ return out, req.Send()
+}
+
+// EnableScalingConfigurationWithContext is the same as EnableScalingConfiguration with the addition of
+// the ability to pass a context and additional request options.
+//
+// See EnableScalingConfiguration for details on how to use this API operation.
+//
+// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur.
+// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *AUTOSCALING) EnableScalingConfigurationWithContext(ctx volcengine.Context, input *EnableScalingConfigurationInput, opts ...request.Option) (*EnableScalingConfigurationOutput, error) { + req, out := c.EnableScalingConfigurationRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +type EnableScalingConfigurationInput struct { + _ struct{} `type:"structure"` + + ScalingConfigurationId *string `type:"string"` + + ScalingGroupId *string `type:"string"` +} + +// String returns the string representation +func (s EnableScalingConfigurationInput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s EnableScalingConfigurationInput) GoString() string { + return s.String() +} + +// SetScalingConfigurationId sets the ScalingConfigurationId field's value. +func (s *EnableScalingConfigurationInput) SetScalingConfigurationId(v string) *EnableScalingConfigurationInput { + s.ScalingConfigurationId = &v + return s +} + +// SetScalingGroupId sets the ScalingGroupId field's value. +func (s *EnableScalingConfigurationInput) SetScalingGroupId(v string) *EnableScalingConfigurationInput { + s.ScalingGroupId = &v + return s +} + +type EnableScalingConfigurationOutput struct { + _ struct{} `type:"structure"` + + Metadata *response.ResponseMetadata + + ScalingConfigurationId *string `type:"string"` +} + +// String returns the string representation +func (s EnableScalingConfigurationOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s EnableScalingConfigurationOutput) GoString() string { + return s.String() +} + +// SetScalingConfigurationId sets the ScalingConfigurationId field's value. 
+func (s *EnableScalingConfigurationOutput) SetScalingConfigurationId(v string) *EnableScalingConfigurationOutput { + s.ScalingConfigurationId = &v + return s +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_enable_scaling_group.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_enable_scaling_group.go new file mode 100644 index 000000000000..69dc2b178734 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_enable_scaling_group.go @@ -0,0 +1,186 @@ +// Code generated by volcengine with private/model/cli/gen-api/main.go. DO NOT EDIT. + +package autoscaling + +import ( + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/response" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil" +) + +const opEnableScalingGroupCommon = "EnableScalingGroup" + +// EnableScalingGroupCommonRequest generates a "volcengine/request.Request" representing the +// client's request for the EnableScalingGroupCommon operation. The "output" return +// value will be populated with the EnableScalingGroupCommon request's response once the request completes +// successfully. +// +// Use "Send" method on the returned EnableScalingGroupCommon Request to send the API call to the service. +// the "output" return value is not valid until after EnableScalingGroupCommon Send returns without error. +// +// See EnableScalingGroupCommon for more information on using the EnableScalingGroupCommon +// API call, and error handling. +// +// // Example sending a request using the EnableScalingGroupCommonRequest method. 
+// req, resp := client.EnableScalingGroupCommonRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *AUTOSCALING) EnableScalingGroupCommonRequest(input *map[string]interface{}) (req *request.Request, output *map[string]interface{}) { + op := &request.Operation{ + Name: opEnableScalingGroupCommon, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &map[string]interface{}{} + } + + output = &map[string]interface{}{} + req = c.newRequest(op, input, output) + + return +} + +// EnableScalingGroupCommon API operation for AUTO_SCALING. +// +// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the VOLCENGINE API reference guide for AUTO_SCALING's +// API operation EnableScalingGroupCommon for usage and error information. +func (c *AUTOSCALING) EnableScalingGroupCommon(input *map[string]interface{}) (*map[string]interface{}, error) { + req, out := c.EnableScalingGroupCommonRequest(input) + return out, req.Send() +} + +// EnableScalingGroupCommonWithContext is the same as EnableScalingGroupCommon with the addition of +// the ability to pass a context and additional request options. +// +// See EnableScalingGroupCommon for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur. +// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *AUTOSCALING) EnableScalingGroupCommonWithContext(ctx volcengine.Context, input *map[string]interface{}, opts ...request.Option) (*map[string]interface{}, error) { + req, out := c.EnableScalingGroupCommonRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+ return out, req.Send() +} + +const opEnableScalingGroup = "EnableScalingGroup" + +// EnableScalingGroupRequest generates a "volcengine/request.Request" representing the +// client's request for the EnableScalingGroup operation. The "output" return +// value will be populated with the EnableScalingGroupCommon request's response once the request completes +// successfully. +// +// Use "Send" method on the returned EnableScalingGroupCommon Request to send the API call to the service. +// the "output" return value is not valid until after EnableScalingGroupCommon Send returns without error. +// +// See EnableScalingGroup for more information on using the EnableScalingGroup +// API call, and error handling. +// +// // Example sending a request using the EnableScalingGroupRequest method. +// req, resp := client.EnableScalingGroupRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *AUTOSCALING) EnableScalingGroupRequest(input *EnableScalingGroupInput) (req *request.Request, output *EnableScalingGroupOutput) { + op := &request.Operation{ + Name: opEnableScalingGroup, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &EnableScalingGroupInput{} + } + + output = &EnableScalingGroupOutput{} + req = c.newRequest(op, input, output) + + return +} + +// EnableScalingGroup API operation for AUTO_SCALING. +// +// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the VOLCENGINE API reference guide for AUTO_SCALING's +// API operation EnableScalingGroup for usage and error information. 
+func (c *AUTOSCALING) EnableScalingGroup(input *EnableScalingGroupInput) (*EnableScalingGroupOutput, error) {
+ req, out := c.EnableScalingGroupRequest(input)
+ return out, req.Send()
+}
+
+// EnableScalingGroupWithContext is the same as EnableScalingGroup with the addition of
+// the ability to pass a context and additional request options.
+//
+// See EnableScalingGroup for details on how to use this API operation.
+//
+// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur.
+// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/
+// for more information on using Contexts.
+func (c *AUTOSCALING) EnableScalingGroupWithContext(ctx volcengine.Context, input *EnableScalingGroupInput, opts ...request.Option) (*EnableScalingGroupOutput, error) {
+ req, out := c.EnableScalingGroupRequest(input)
+ req.SetContext(ctx)
+ req.ApplyOptions(opts...)
+ return out, req.Send()
+}
+
+type EnableScalingGroupInput struct {
+ _ struct{} `type:"structure"`
+
+ ScalingGroupId *string `type:"string"`
+}
+
+// String returns the string representation
+func (s EnableScalingGroupInput) String() string {
+ return volcengineutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s EnableScalingGroupInput) GoString() string {
+ return s.String()
+}
+
+// SetScalingGroupId sets the ScalingGroupId field's value.
+func (s *EnableScalingGroupInput) SetScalingGroupId(v string) *EnableScalingGroupInput { + s.ScalingGroupId = &v + return s +} + +type EnableScalingGroupOutput struct { + _ struct{} `type:"structure"` + + Metadata *response.ResponseMetadata + + ScalingGroupId *string `type:"string"` +} + +// String returns the string representation +func (s EnableScalingGroupOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s EnableScalingGroupOutput) GoString() string { + return s.String() +} + +// SetScalingGroupId sets the ScalingGroupId field's value. +func (s *EnableScalingGroupOutput) SetScalingGroupId(v string) *EnableScalingGroupOutput { + s.ScalingGroupId = &v + return s +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_enable_scaling_policy.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_enable_scaling_policy.go new file mode 100644 index 000000000000..f27ced36c91b --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_enable_scaling_policy.go @@ -0,0 +1,186 @@ +// Code generated by volcengine with private/model/cli/gen-api/main.go. DO NOT EDIT. + +package autoscaling + +import ( + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/response" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil" +) + +const opEnableScalingPolicyCommon = "EnableScalingPolicy" + +// EnableScalingPolicyCommonRequest generates a "volcengine/request.Request" representing the +// client's request for the EnableScalingPolicyCommon operation. 
The "output" return +// value will be populated with the EnableScalingPolicyCommon request's response once the request completes +// successfully. +// +// Use "Send" method on the returned EnableScalingPolicyCommon Request to send the API call to the service. +// the "output" return value is not valid until after EnableScalingPolicyCommon Send returns without error. +// +// See EnableScalingPolicyCommon for more information on using the EnableScalingPolicyCommon +// API call, and error handling. +// +// // Example sending a request using the EnableScalingPolicyCommonRequest method. +// req, resp := client.EnableScalingPolicyCommonRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *AUTOSCALING) EnableScalingPolicyCommonRequest(input *map[string]interface{}) (req *request.Request, output *map[string]interface{}) { + op := &request.Operation{ + Name: opEnableScalingPolicyCommon, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &map[string]interface{}{} + } + + output = &map[string]interface{}{} + req = c.newRequest(op, input, output) + + return +} + +// EnableScalingPolicyCommon API operation for AUTO_SCALING. +// +// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the VOLCENGINE API reference guide for AUTO_SCALING's +// API operation EnableScalingPolicyCommon for usage and error information. +func (c *AUTOSCALING) EnableScalingPolicyCommon(input *map[string]interface{}) (*map[string]interface{}, error) { + req, out := c.EnableScalingPolicyCommonRequest(input) + return out, req.Send() +} + +// EnableScalingPolicyCommonWithContext is the same as EnableScalingPolicyCommon with the addition of +// the ability to pass a context and additional request options. 
+// +// See EnableScalingPolicyCommon for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur. +// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *AUTOSCALING) EnableScalingPolicyCommonWithContext(ctx volcengine.Context, input *map[string]interface{}, opts ...request.Option) (*map[string]interface{}, error) { + req, out := c.EnableScalingPolicyCommonRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opEnableScalingPolicy = "EnableScalingPolicy" + +// EnableScalingPolicyRequest generates a "volcengine/request.Request" representing the +// client's request for the EnableScalingPolicy operation. The "output" return +// value will be populated with the EnableScalingPolicyCommon request's response once the request completes +// successfully. +// +// Use "Send" method on the returned EnableScalingPolicyCommon Request to send the API call to the service. +// the "output" return value is not valid until after EnableScalingPolicyCommon Send returns without error. +// +// See EnableScalingPolicy for more information on using the EnableScalingPolicy +// API call, and error handling. +// +// // Example sending a request using the EnableScalingPolicyRequest method. 
+// req, resp := client.EnableScalingPolicyRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *AUTOSCALING) EnableScalingPolicyRequest(input *EnableScalingPolicyInput) (req *request.Request, output *EnableScalingPolicyOutput) { + op := &request.Operation{ + Name: opEnableScalingPolicy, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &EnableScalingPolicyInput{} + } + + output = &EnableScalingPolicyOutput{} + req = c.newRequest(op, input, output) + + return +} + +// EnableScalingPolicy API operation for AUTO_SCALING. +// +// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the VOLCENGINE API reference guide for AUTO_SCALING's +// API operation EnableScalingPolicy for usage and error information. +func (c *AUTOSCALING) EnableScalingPolicy(input *EnableScalingPolicyInput) (*EnableScalingPolicyOutput, error) { + req, out := c.EnableScalingPolicyRequest(input) + return out, req.Send() +} + +// EnableScalingPolicyWithContext is the same as EnableScalingPolicy with the addition of +// the ability to pass a context and additional request options. +// +// See EnableScalingPolicy for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur. +// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *AUTOSCALING) EnableScalingPolicyWithContext(ctx volcengine.Context, input *EnableScalingPolicyInput, opts ...request.Option) (*EnableScalingPolicyOutput, error) { + req, out := c.EnableScalingPolicyRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...)
+ return out, req.Send() +} + +type EnableScalingPolicyInput struct { + _ struct{} `type:"structure"` + + ScalingPolicyId *string `type:"string"` +} + +// String returns the string representation +func (s EnableScalingPolicyInput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s EnableScalingPolicyInput) GoString() string { + return s.String() +} + +// SetScalingPolicyId sets the ScalingPolicyId field's value. +func (s *EnableScalingPolicyInput) SetScalingPolicyId(v string) *EnableScalingPolicyInput { + s.ScalingPolicyId = &v + return s +} + +type EnableScalingPolicyOutput struct { + _ struct{} `type:"structure"` + + Metadata *response.ResponseMetadata + + ScalingPolicyId *string `type:"string"` +} + +// String returns the string representation +func (s EnableScalingPolicyOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s EnableScalingPolicyOutput) GoString() string { + return s.String() +} + +// SetScalingPolicyId sets the ScalingPolicyId field's value. +func (s *EnableScalingPolicyOutput) SetScalingPolicyId(v string) *EnableScalingPolicyOutput { + s.ScalingPolicyId = &v + return s +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_modify_lifecycle_hook.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_modify_lifecycle_hook.go new file mode 100644 index 000000000000..992371c08b7d --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_modify_lifecycle_hook.go @@ -0,0 +1,210 @@ +// Code generated by volcengine with private/model/cli/gen-api/main.go. DO NOT EDIT. 
+ +package autoscaling + +import ( + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/response" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil" +) + +const opModifyLifecycleHookCommon = "ModifyLifecycleHook" + +// ModifyLifecycleHookCommonRequest generates a "volcengine/request.Request" representing the +// client's request for the ModifyLifecycleHookCommon operation. The "output" return +// value will be populated with the ModifyLifecycleHookCommon request's response once the request completes +// successfully. +// +// Use "Send" method on the returned ModifyLifecycleHookCommon Request to send the API call to the service. +// the "output" return value is not valid until after ModifyLifecycleHookCommon Send returns without error. +// +// See ModifyLifecycleHookCommon for more information on using the ModifyLifecycleHookCommon +// API call, and error handling. +// +// // Example sending a request using the ModifyLifecycleHookCommonRequest method. +// req, resp := client.ModifyLifecycleHookCommonRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *AUTOSCALING) ModifyLifecycleHookCommonRequest(input *map[string]interface{}) (req *request.Request, output *map[string]interface{}) { + op := &request.Operation{ + Name: opModifyLifecycleHookCommon, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &map[string]interface{}{} + } + + output = &map[string]interface{}{} + req = c.newRequest(op, input, output) + + return +} + +// ModifyLifecycleHookCommon API operation for AUTO_SCALING. +// +// Returns volcengineerr.Error for service API and SDK errors. 
Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the VOLCENGINE API reference guide for AUTO_SCALING's +// API operation ModifyLifecycleHookCommon for usage and error information. +func (c *AUTOSCALING) ModifyLifecycleHookCommon(input *map[string]interface{}) (*map[string]interface{}, error) { + req, out := c.ModifyLifecycleHookCommonRequest(input) + return out, req.Send() +} + +// ModifyLifecycleHookCommonWithContext is the same as ModifyLifecycleHookCommon with the addition of +// the ability to pass a context and additional request options. +// +// See ModifyLifecycleHookCommon for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur. +// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *AUTOSCALING) ModifyLifecycleHookCommonWithContext(ctx volcengine.Context, input *map[string]interface{}, opts ...request.Option) (*map[string]interface{}, error) { + req, out := c.ModifyLifecycleHookCommonRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opModifyLifecycleHook = "ModifyLifecycleHook" + +// ModifyLifecycleHookRequest generates a "volcengine/request.Request" representing the +// client's request for the ModifyLifecycleHook operation. The "output" return +// value will be populated with the ModifyLifecycleHookCommon request's response once the request completes +// successfully. +// +// Use "Send" method on the returned ModifyLifecycleHookCommon Request to send the API call to the service. +// the "output" return value is not valid until after ModifyLifecycleHookCommon Send returns without error. 
+// +// See ModifyLifecycleHook for more information on using the ModifyLifecycleHook +// API call, and error handling. +// +// // Example sending a request using the ModifyLifecycleHookRequest method. +// req, resp := client.ModifyLifecycleHookRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *AUTOSCALING) ModifyLifecycleHookRequest(input *ModifyLifecycleHookInput) (req *request.Request, output *ModifyLifecycleHookOutput) { + op := &request.Operation{ + Name: opModifyLifecycleHook, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &ModifyLifecycleHookInput{} + } + + output = &ModifyLifecycleHookOutput{} + req = c.newRequest(op, input, output) + + return +} + +// ModifyLifecycleHook API operation for AUTO_SCALING. +// +// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the VOLCENGINE API reference guide for AUTO_SCALING's +// API operation ModifyLifecycleHook for usage and error information. +func (c *AUTOSCALING) ModifyLifecycleHook(input *ModifyLifecycleHookInput) (*ModifyLifecycleHookOutput, error) { + req, out := c.ModifyLifecycleHookRequest(input) + return out, req.Send() +} + +// ModifyLifecycleHookWithContext is the same as ModifyLifecycleHook with the addition of +// the ability to pass a context and additional request options. +// +// See ModifyLifecycleHook for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur. +// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts.
+func (c *AUTOSCALING) ModifyLifecycleHookWithContext(ctx volcengine.Context, input *ModifyLifecycleHookInput, opts ...request.Option) (*ModifyLifecycleHookOutput, error) { + req, out := c.ModifyLifecycleHookRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +type ModifyLifecycleHookInput struct { + _ struct{} `type:"structure"` + + LifecycleHookId *string `type:"string"` + + LifecycleHookPolicy *string `type:"string"` + + LifecycleHookTimeout *int32 `type:"int32"` + + LifecycleHookType *string `type:"string"` +} + +// String returns the string representation +func (s ModifyLifecycleHookInput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s ModifyLifecycleHookInput) GoString() string { + return s.String() +} + +// SetLifecycleHookId sets the LifecycleHookId field's value. +func (s *ModifyLifecycleHookInput) SetLifecycleHookId(v string) *ModifyLifecycleHookInput { + s.LifecycleHookId = &v + return s +} + +// SetLifecycleHookPolicy sets the LifecycleHookPolicy field's value. +func (s *ModifyLifecycleHookInput) SetLifecycleHookPolicy(v string) *ModifyLifecycleHookInput { + s.LifecycleHookPolicy = &v + return s +} + +// SetLifecycleHookTimeout sets the LifecycleHookTimeout field's value. +func (s *ModifyLifecycleHookInput) SetLifecycleHookTimeout(v int32) *ModifyLifecycleHookInput { + s.LifecycleHookTimeout = &v + return s +} + +// SetLifecycleHookType sets the LifecycleHookType field's value. 
+func (s *ModifyLifecycleHookInput) SetLifecycleHookType(v string) *ModifyLifecycleHookInput { + s.LifecycleHookType = &v + return s +} + +type ModifyLifecycleHookOutput struct { + _ struct{} `type:"structure"` + + Metadata *response.ResponseMetadata + + LifecycleHookId *string `type:"string"` +} + +// String returns the string representation +func (s ModifyLifecycleHookOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s ModifyLifecycleHookOutput) GoString() string { + return s.String() +} + +// SetLifecycleHookId sets the LifecycleHookId field's value. +func (s *ModifyLifecycleHookOutput) SetLifecycleHookId(v string) *ModifyLifecycleHookOutput { + s.LifecycleHookId = &v + return s +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_modify_scaling_configuration.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_modify_scaling_configuration.go new file mode 100644 index 000000000000..05a5738fe908 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_modify_scaling_configuration.go @@ -0,0 +1,374 @@ +// Code generated by volcengine with private/model/cli/gen-api/main.go. DO NOT EDIT. 
+ +package autoscaling + +import ( + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/response" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil" +) + +const opModifyScalingConfigurationCommon = "ModifyScalingConfiguration" + +// ModifyScalingConfigurationCommonRequest generates a "volcengine/request.Request" representing the +// client's request for the ModifyScalingConfigurationCommon operation. The "output" return +// value will be populated with the ModifyScalingConfigurationCommon request's response once the request completes +// successfully. +// +// Use "Send" method on the returned ModifyScalingConfigurationCommon Request to send the API call to the service. +// the "output" return value is not valid until after ModifyScalingConfigurationCommon Send returns without error. +// +// See ModifyScalingConfigurationCommon for more information on using the ModifyScalingConfigurationCommon +// API call, and error handling. +// +// // Example sending a request using the ModifyScalingConfigurationCommonRequest method. +// req, resp := client.ModifyScalingConfigurationCommonRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *AUTOSCALING) ModifyScalingConfigurationCommonRequest(input *map[string]interface{}) (req *request.Request, output *map[string]interface{}) { + op := &request.Operation{ + Name: opModifyScalingConfigurationCommon, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &map[string]interface{}{} + } + + output = &map[string]interface{}{} + req = c.newRequest(op, input, output) + + return +} + +// ModifyScalingConfigurationCommon API operation for AUTO_SCALING. 
+// +// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the VOLCENGINE API reference guide for AUTO_SCALING's +// API operation ModifyScalingConfigurationCommon for usage and error information. +func (c *AUTOSCALING) ModifyScalingConfigurationCommon(input *map[string]interface{}) (*map[string]interface{}, error) { + req, out := c.ModifyScalingConfigurationCommonRequest(input) + return out, req.Send() +} + +// ModifyScalingConfigurationCommonWithContext is the same as ModifyScalingConfigurationCommon with the addition of +// the ability to pass a context and additional request options. +// +// See ModifyScalingConfigurationCommon for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur. +// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *AUTOSCALING) ModifyScalingConfigurationCommonWithContext(ctx volcengine.Context, input *map[string]interface{}, opts ...request.Option) (*map[string]interface{}, error) { + req, out := c.ModifyScalingConfigurationCommonRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opModifyScalingConfiguration = "ModifyScalingConfiguration" + +// ModifyScalingConfigurationRequest generates a "volcengine/request.Request" representing the +// client's request for the ModifyScalingConfiguration operation. The "output" return +// value will be populated with the ModifyScalingConfigurationCommon request's response once the request completes +// successfully. +// +// Use "Send" method on the returned ModifyScalingConfigurationCommon Request to send the API call to the service. 
+// the "output" return value is not valid until after ModifyScalingConfigurationCommon Send returns without error. +// +// See ModifyScalingConfiguration for more information on using the ModifyScalingConfiguration +// API call, and error handling. +// +// // Example sending a request using the ModifyScalingConfigurationRequest method. +// req, resp := client.ModifyScalingConfigurationRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *AUTOSCALING) ModifyScalingConfigurationRequest(input *ModifyScalingConfigurationInput) (req *request.Request, output *ModifyScalingConfigurationOutput) { + op := &request.Operation{ + Name: opModifyScalingConfiguration, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &ModifyScalingConfigurationInput{} + } + + output = &ModifyScalingConfigurationOutput{} + req = c.newRequest(op, input, output) + + return +} + +// ModifyScalingConfiguration API operation for AUTO_SCALING. +// +// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the VOLCENGINE API reference guide for AUTO_SCALING's +// API operation ModifyScalingConfiguration for usage and error information. +func (c *AUTOSCALING) ModifyScalingConfiguration(input *ModifyScalingConfigurationInput) (*ModifyScalingConfigurationOutput, error) { + req, out := c.ModifyScalingConfigurationRequest(input) + return out, req.Send() +} + +// ModifyScalingConfigurationWithContext is the same as ModifyScalingConfiguration with the addition of +// the ability to pass a context and additional request options. +// +// See ModifyScalingConfiguration for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur.
+// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *AUTOSCALING) ModifyScalingConfigurationWithContext(ctx volcengine.Context, input *ModifyScalingConfigurationInput, opts ...request.Option) (*ModifyScalingConfigurationOutput, error) { + req, out := c.ModifyScalingConfigurationRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +type EipForModifyScalingConfigurationInput struct { + _ struct{} `type:"structure"` + + Bandwidth *int32 `type:"int32"` + + BillingType *string `type:"string"` + + ISP *string `type:"string"` +} + +// String returns the string representation +func (s EipForModifyScalingConfigurationInput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s EipForModifyScalingConfigurationInput) GoString() string { + return s.String() +} + +// SetBandwidth sets the Bandwidth field's value. +func (s *EipForModifyScalingConfigurationInput) SetBandwidth(v int32) *EipForModifyScalingConfigurationInput { + s.Bandwidth = &v + return s +} + +// SetBillingType sets the BillingType field's value. +func (s *EipForModifyScalingConfigurationInput) SetBillingType(v string) *EipForModifyScalingConfigurationInput { + s.BillingType = &v + return s +} + +// SetISP sets the ISP field's value. 
+func (s *EipForModifyScalingConfigurationInput) SetISP(v string) *EipForModifyScalingConfigurationInput { + s.ISP = &v + return s +} + +type ModifyScalingConfigurationInput struct { + _ struct{} `type:"structure"` + + Eip *EipForModifyScalingConfigurationInput `type:"structure"` + + HostName *string `type:"string"` + + ImageId *string `type:"string"` + + InstanceDescription *string `type:"string"` + + InstanceName *string `type:"string"` + + InstanceTypes []*string `type:"list"` + + KeyPairName *string `type:"string"` + + Password *string `type:"string"` + + ScalingConfigurationId *string `type:"string"` + + ScalingConfigurationName *string `type:"string"` + + SecurityEnhancementStrategy *string `type:"string"` + + SecurityGroupIds []*string `type:"list"` + + UserData *string `type:"string"` + + Volumes []*VolumeForModifyScalingConfigurationInput `type:"list"` + + ZoneId *string `type:"string"` +} + +// String returns the string representation +func (s ModifyScalingConfigurationInput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s ModifyScalingConfigurationInput) GoString() string { + return s.String() +} + +// SetEip sets the Eip field's value. +func (s *ModifyScalingConfigurationInput) SetEip(v *EipForModifyScalingConfigurationInput) *ModifyScalingConfigurationInput { + s.Eip = v + return s +} + +// SetHostName sets the HostName field's value. +func (s *ModifyScalingConfigurationInput) SetHostName(v string) *ModifyScalingConfigurationInput { + s.HostName = &v + return s +} + +// SetImageId sets the ImageId field's value. +func (s *ModifyScalingConfigurationInput) SetImageId(v string) *ModifyScalingConfigurationInput { + s.ImageId = &v + return s +} + +// SetInstanceDescription sets the InstanceDescription field's value. 
+func (s *ModifyScalingConfigurationInput) SetInstanceDescription(v string) *ModifyScalingConfigurationInput { + s.InstanceDescription = &v + return s +} + +// SetInstanceName sets the InstanceName field's value. +func (s *ModifyScalingConfigurationInput) SetInstanceName(v string) *ModifyScalingConfigurationInput { + s.InstanceName = &v + return s +} + +// SetInstanceTypes sets the InstanceTypes field's value. +func (s *ModifyScalingConfigurationInput) SetInstanceTypes(v []*string) *ModifyScalingConfigurationInput { + s.InstanceTypes = v + return s +} + +// SetKeyPairName sets the KeyPairName field's value. +func (s *ModifyScalingConfigurationInput) SetKeyPairName(v string) *ModifyScalingConfigurationInput { + s.KeyPairName = &v + return s +} + +// SetPassword sets the Password field's value. +func (s *ModifyScalingConfigurationInput) SetPassword(v string) *ModifyScalingConfigurationInput { + s.Password = &v + return s +} + +// SetScalingConfigurationId sets the ScalingConfigurationId field's value. +func (s *ModifyScalingConfigurationInput) SetScalingConfigurationId(v string) *ModifyScalingConfigurationInput { + s.ScalingConfigurationId = &v + return s +} + +// SetScalingConfigurationName sets the ScalingConfigurationName field's value. +func (s *ModifyScalingConfigurationInput) SetScalingConfigurationName(v string) *ModifyScalingConfigurationInput { + s.ScalingConfigurationName = &v + return s +} + +// SetSecurityEnhancementStrategy sets the SecurityEnhancementStrategy field's value. +func (s *ModifyScalingConfigurationInput) SetSecurityEnhancementStrategy(v string) *ModifyScalingConfigurationInput { + s.SecurityEnhancementStrategy = &v + return s +} + +// SetSecurityGroupIds sets the SecurityGroupIds field's value. +func (s *ModifyScalingConfigurationInput) SetSecurityGroupIds(v []*string) *ModifyScalingConfigurationInput { + s.SecurityGroupIds = v + return s +} + +// SetUserData sets the UserData field's value. 
+func (s *ModifyScalingConfigurationInput) SetUserData(v string) *ModifyScalingConfigurationInput { + s.UserData = &v + return s +} + +// SetVolumes sets the Volumes field's value. +func (s *ModifyScalingConfigurationInput) SetVolumes(v []*VolumeForModifyScalingConfigurationInput) *ModifyScalingConfigurationInput { + s.Volumes = v + return s +} + +// SetZoneId sets the ZoneId field's value. +func (s *ModifyScalingConfigurationInput) SetZoneId(v string) *ModifyScalingConfigurationInput { + s.ZoneId = &v + return s +} + +type ModifyScalingConfigurationOutput struct { + _ struct{} `type:"structure"` + + Metadata *response.ResponseMetadata + + ScalingConfigurationId *string `type:"string"` +} + +// String returns the string representation +func (s ModifyScalingConfigurationOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s ModifyScalingConfigurationOutput) GoString() string { + return s.String() +} + +// SetScalingConfigurationId sets the ScalingConfigurationId field's value. +func (s *ModifyScalingConfigurationOutput) SetScalingConfigurationId(v string) *ModifyScalingConfigurationOutput { + s.ScalingConfigurationId = &v + return s +} + +type VolumeForModifyScalingConfigurationInput struct { + _ struct{} `type:"structure"` + + DeleteWithInstance *bool `type:"boolean"` + + Size *int32 `type:"int32"` + + VolumeType *string `type:"string"` +} + +// String returns the string representation +func (s VolumeForModifyScalingConfigurationInput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s VolumeForModifyScalingConfigurationInput) GoString() string { + return s.String() +} + +// SetDeleteWithInstance sets the DeleteWithInstance field's value. 
+func (s *VolumeForModifyScalingConfigurationInput) SetDeleteWithInstance(v bool) *VolumeForModifyScalingConfigurationInput { + s.DeleteWithInstance = &v + return s +} + +// SetSize sets the Size field's value. +func (s *VolumeForModifyScalingConfigurationInput) SetSize(v int32) *VolumeForModifyScalingConfigurationInput { + s.Size = &v + return s +} + +// SetVolumeType sets the VolumeType field's value. +func (s *VolumeForModifyScalingConfigurationInput) SetVolumeType(v string) *VolumeForModifyScalingConfigurationInput { + s.VolumeType = &v + return s +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_modify_scaling_group.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_modify_scaling_group.go new file mode 100644 index 000000000000..a57fa84e3049 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_modify_scaling_group.go @@ -0,0 +1,250 @@ +// Code generated by volcengine with private/model/cli/gen-api/main.go. DO NOT EDIT. + +package autoscaling + +import ( + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/response" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil" +) + +const opModifyScalingGroupCommon = "ModifyScalingGroup" + +// ModifyScalingGroupCommonRequest generates a "volcengine/request.Request" representing the +// client's request for the ModifyScalingGroupCommon operation. The "output" return +// value will be populated with the ModifyScalingGroupCommon request's response once the request completes +// successfully. 
+// +// Use "Send" method on the returned ModifyScalingGroupCommon Request to send the API call to the service. +// the "output" return value is not valid until after ModifyScalingGroupCommon Send returns without error. +// +// See ModifyScalingGroupCommon for more information on using the ModifyScalingGroupCommon +// API call, and error handling. +// +// // Example sending a request using the ModifyScalingGroupCommonRequest method. +// req, resp := client.ModifyScalingGroupCommonRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *AUTOSCALING) ModifyScalingGroupCommonRequest(input *map[string]interface{}) (req *request.Request, output *map[string]interface{}) { + op := &request.Operation{ + Name: opModifyScalingGroupCommon, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &map[string]interface{}{} + } + + output = &map[string]interface{}{} + req = c.newRequest(op, input, output) + + return +} + +// ModifyScalingGroupCommon API operation for AUTO_SCALING. +// +// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the VOLCENGINE API reference guide for AUTO_SCALING's +// API operation ModifyScalingGroupCommon for usage and error information. +func (c *AUTOSCALING) ModifyScalingGroupCommon(input *map[string]interface{}) (*map[string]interface{}, error) { + req, out := c.ModifyScalingGroupCommonRequest(input) + return out, req.Send() +} + +// ModifyScalingGroupCommonWithContext is the same as ModifyScalingGroupCommon with the addition of +// the ability to pass a context and additional request options. +// +// See ModifyScalingGroupCommon for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur. 
+// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/
+// for more information on using Contexts.
+func (c *AUTOSCALING) ModifyScalingGroupCommonWithContext(ctx volcengine.Context, input *map[string]interface{}, opts ...request.Option) (*map[string]interface{}, error) {
+	req, out := c.ModifyScalingGroupCommonRequest(input)
+	req.SetContext(ctx)
+	req.ApplyOptions(opts...)
+	return out, req.Send()
+}
+
+const opModifyScalingGroup = "ModifyScalingGroup"
+
+// ModifyScalingGroupRequest generates a "volcengine/request.Request" representing the
+// client's request for the ModifyScalingGroup operation. The "output" return
+// value will be populated with the ModifyScalingGroup request's response once the request completes
+// successfully.
+//
+// Use "Send" method on the returned ModifyScalingGroup Request to send the API call to the service.
+// the "output" return value is not valid until after ModifyScalingGroup Send returns without error.
+//
+// See ModifyScalingGroup for more information on using the ModifyScalingGroup
+// API call, and error handling.
+//
+// // Example sending a request using the ModifyScalingGroupRequest method.
+// req, resp := client.ModifyScalingGroupRequest(params)
+//
+// err := req.Send()
+// if err == nil { // resp is now filled
+// fmt.Println(resp)
+// }
+func (c *AUTOSCALING) ModifyScalingGroupRequest(input *ModifyScalingGroupInput) (req *request.Request, output *ModifyScalingGroupOutput) {
+	op := &request.Operation{
+		Name:       opModifyScalingGroup,
+		HTTPMethod: "GET",
+		HTTPPath:   "/",
+	}
+
+	if input == nil {
+		input = &ModifyScalingGroupInput{}
+	}
+
+	output = &ModifyScalingGroupOutput{}
+	req = c.newRequest(op, input, output)
+
+	return
+}
+
+// ModifyScalingGroup API operation for AUTO_SCALING.
+//
+// Returns volcengineerr.Error for service API and SDK errors.
Use runtime type assertions
+// with volcengineerr.Error's Code and Message methods to get detailed information about
+// the error.
+//
+// See the VOLCENGINE API reference guide for AUTO_SCALING's
+// API operation ModifyScalingGroup for usage and error information.
+func (c *AUTOSCALING) ModifyScalingGroup(input *ModifyScalingGroupInput) (*ModifyScalingGroupOutput, error) {
+	req, out := c.ModifyScalingGroupRequest(input)
+	return out, req.Send()
+}
+
+// ModifyScalingGroupWithContext is the same as ModifyScalingGroup with the addition of
+// the ability to pass a context and additional request options.
+//
+// See ModifyScalingGroup for details on how to use this API operation.
+//
+// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur.
+// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/
+// for more information on using Contexts.
+func (c *AUTOSCALING) ModifyScalingGroupWithContext(ctx volcengine.Context, input *ModifyScalingGroupInput, opts ...request.Option) (*ModifyScalingGroupOutput, error) {
+	req, out := c.ModifyScalingGroupRequest(input)
+	req.SetContext(ctx)
+	req.ApplyOptions(opts...)
+ return out, req.Send() +} + +type ModifyScalingGroupInput struct { + _ struct{} `type:"structure"` + + ActiveScalingConfigurationId *string `type:"string"` + + DefaultCooldown *int32 `type:"int32"` + + DesireInstanceNumber *int32 `type:"int32"` + + InstanceTerminatePolicy *string `type:"string"` + + MaxInstanceNumber *int32 `type:"int32"` + + MinInstanceNumber *int32 `type:"int32"` + + ScalingGroupId *string `type:"string"` + + ScalingGroupName *string `type:"string"` + + SubnetIds []*string `type:"list"` +} + +// String returns the string representation +func (s ModifyScalingGroupInput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s ModifyScalingGroupInput) GoString() string { + return s.String() +} + +// SetActiveScalingConfigurationId sets the ActiveScalingConfigurationId field's value. +func (s *ModifyScalingGroupInput) SetActiveScalingConfigurationId(v string) *ModifyScalingGroupInput { + s.ActiveScalingConfigurationId = &v + return s +} + +// SetDefaultCooldown sets the DefaultCooldown field's value. +func (s *ModifyScalingGroupInput) SetDefaultCooldown(v int32) *ModifyScalingGroupInput { + s.DefaultCooldown = &v + return s +} + +// SetDesireInstanceNumber sets the DesireInstanceNumber field's value. +func (s *ModifyScalingGroupInput) SetDesireInstanceNumber(v int32) *ModifyScalingGroupInput { + s.DesireInstanceNumber = &v + return s +} + +// SetInstanceTerminatePolicy sets the InstanceTerminatePolicy field's value. +func (s *ModifyScalingGroupInput) SetInstanceTerminatePolicy(v string) *ModifyScalingGroupInput { + s.InstanceTerminatePolicy = &v + return s +} + +// SetMaxInstanceNumber sets the MaxInstanceNumber field's value. +func (s *ModifyScalingGroupInput) SetMaxInstanceNumber(v int32) *ModifyScalingGroupInput { + s.MaxInstanceNumber = &v + return s +} + +// SetMinInstanceNumber sets the MinInstanceNumber field's value. 
+func (s *ModifyScalingGroupInput) SetMinInstanceNumber(v int32) *ModifyScalingGroupInput { + s.MinInstanceNumber = &v + return s +} + +// SetScalingGroupId sets the ScalingGroupId field's value. +func (s *ModifyScalingGroupInput) SetScalingGroupId(v string) *ModifyScalingGroupInput { + s.ScalingGroupId = &v + return s +} + +// SetScalingGroupName sets the ScalingGroupName field's value. +func (s *ModifyScalingGroupInput) SetScalingGroupName(v string) *ModifyScalingGroupInput { + s.ScalingGroupName = &v + return s +} + +// SetSubnetIds sets the SubnetIds field's value. +func (s *ModifyScalingGroupInput) SetSubnetIds(v []*string) *ModifyScalingGroupInput { + s.SubnetIds = v + return s +} + +type ModifyScalingGroupOutput struct { + _ struct{} `type:"structure"` + + Metadata *response.ResponseMetadata + + ScalingGroupId *string `type:"string"` +} + +// String returns the string representation +func (s ModifyScalingGroupOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s ModifyScalingGroupOutput) GoString() string { + return s.String() +} + +// SetScalingGroupId sets the ScalingGroupId field's value. +func (s *ModifyScalingGroupOutput) SetScalingGroupId(v string) *ModifyScalingGroupOutput { + s.ScalingGroupId = &v + return s +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_modify_scaling_policy.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_modify_scaling_policy.go new file mode 100644 index 000000000000..f30f01795b11 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_modify_scaling_policy.go @@ -0,0 +1,364 @@ +// Code generated by volcengine with private/model/cli/gen-api/main.go. DO NOT EDIT. 
+ +package autoscaling + +import ( + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/response" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil" +) + +const opModifyScalingPolicyCommon = "ModifyScalingPolicy" + +// ModifyScalingPolicyCommonRequest generates a "volcengine/request.Request" representing the +// client's request for the ModifyScalingPolicyCommon operation. The "output" return +// value will be populated with the ModifyScalingPolicyCommon request's response once the request completes +// successfully. +// +// Use "Send" method on the returned ModifyScalingPolicyCommon Request to send the API call to the service. +// the "output" return value is not valid until after ModifyScalingPolicyCommon Send returns without error. +// +// See ModifyScalingPolicyCommon for more information on using the ModifyScalingPolicyCommon +// API call, and error handling. +// +// // Example sending a request using the ModifyScalingPolicyCommonRequest method. +// req, resp := client.ModifyScalingPolicyCommonRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *AUTOSCALING) ModifyScalingPolicyCommonRequest(input *map[string]interface{}) (req *request.Request, output *map[string]interface{}) { + op := &request.Operation{ + Name: opModifyScalingPolicyCommon, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &map[string]interface{}{} + } + + output = &map[string]interface{}{} + req = c.newRequest(op, input, output) + + return +} + +// ModifyScalingPolicyCommon API operation for AUTO_SCALING. +// +// Returns volcengineerr.Error for service API and SDK errors. 
Use runtime type assertions
+// with volcengineerr.Error's Code and Message methods to get detailed information about
+// the error.
+//
+// See the VOLCENGINE API reference guide for AUTO_SCALING's
+// API operation ModifyScalingPolicyCommon for usage and error information.
+func (c *AUTOSCALING) ModifyScalingPolicyCommon(input *map[string]interface{}) (*map[string]interface{}, error) {
+	req, out := c.ModifyScalingPolicyCommonRequest(input)
+	return out, req.Send()
+}
+
+// ModifyScalingPolicyCommonWithContext is the same as ModifyScalingPolicyCommon with the addition of
+// the ability to pass a context and additional request options.
+//
+// See ModifyScalingPolicyCommon for details on how to use this API operation.
+//
+// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur.
+// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/
+// for more information on using Contexts.
+func (c *AUTOSCALING) ModifyScalingPolicyCommonWithContext(ctx volcengine.Context, input *map[string]interface{}, opts ...request.Option) (*map[string]interface{}, error) {
+	req, out := c.ModifyScalingPolicyCommonRequest(input)
+	req.SetContext(ctx)
+	req.ApplyOptions(opts...)
+	return out, req.Send()
+}
+
+const opModifyScalingPolicy = "ModifyScalingPolicy"
+
+// ModifyScalingPolicyRequest generates a "volcengine/request.Request" representing the
+// client's request for the ModifyScalingPolicy operation. The "output" return
+// value will be populated with the ModifyScalingPolicy request's response once the request completes
+// successfully.
+//
+// Use "Send" method on the returned ModifyScalingPolicy Request to send the API call to the service.
+// the "output" return value is not valid until after ModifyScalingPolicy Send returns without error.
+//
+// See ModifyScalingPolicy for more information on using the ModifyScalingPolicy
+// API call, and error handling.
+//
+// // Example sending a request using the ModifyScalingPolicyRequest method.
+// req, resp := client.ModifyScalingPolicyRequest(params)
+//
+// err := req.Send()
+// if err == nil { // resp is now filled
+// fmt.Println(resp)
+// }
+func (c *AUTOSCALING) ModifyScalingPolicyRequest(input *ModifyScalingPolicyInput) (req *request.Request, output *ModifyScalingPolicyOutput) {
+	op := &request.Operation{
+		Name:       opModifyScalingPolicy,
+		HTTPMethod: "GET",
+		HTTPPath:   "/",
+	}
+
+	if input == nil {
+		input = &ModifyScalingPolicyInput{}
+	}
+
+	output = &ModifyScalingPolicyOutput{}
+	req = c.newRequest(op, input, output)
+
+	return
+}
+
+// ModifyScalingPolicy API operation for AUTO_SCALING.
+//
+// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions
+// with volcengineerr.Error's Code and Message methods to get detailed information about
+// the error.
+//
+// See the VOLCENGINE API reference guide for AUTO_SCALING's
+// API operation ModifyScalingPolicy for usage and error information.
+func (c *AUTOSCALING) ModifyScalingPolicy(input *ModifyScalingPolicyInput) (*ModifyScalingPolicyOutput, error) {
+	req, out := c.ModifyScalingPolicyRequest(input)
+	return out, req.Send()
+}
+
+// ModifyScalingPolicyWithContext is the same as ModifyScalingPolicy with the addition of
+// the ability to pass a context and additional request options.
+//
+// See ModifyScalingPolicy for details on how to use this API operation.
+//
+// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur.
+// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/
+// for more information on using Contexts.
+func (c *AUTOSCALING) ModifyScalingPolicyWithContext(ctx volcengine.Context, input *ModifyScalingPolicyInput, opts ...request.Option) (*ModifyScalingPolicyOutput, error) { + req, out := c.ModifyScalingPolicyRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +type AlarmPolicyConditionForModifyScalingPolicyInput struct { + _ struct{} `type:"structure"` + + ComparisonOperator *string `type:"string"` + + MetricName *string `type:"string"` + + MetricUnit *string `type:"string"` + + Threshold *string `type:"string"` +} + +// String returns the string representation +func (s AlarmPolicyConditionForModifyScalingPolicyInput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s AlarmPolicyConditionForModifyScalingPolicyInput) GoString() string { + return s.String() +} + +// SetComparisonOperator sets the ComparisonOperator field's value. +func (s *AlarmPolicyConditionForModifyScalingPolicyInput) SetComparisonOperator(v string) *AlarmPolicyConditionForModifyScalingPolicyInput { + s.ComparisonOperator = &v + return s +} + +// SetMetricName sets the MetricName field's value. +func (s *AlarmPolicyConditionForModifyScalingPolicyInput) SetMetricName(v string) *AlarmPolicyConditionForModifyScalingPolicyInput { + s.MetricName = &v + return s +} + +// SetMetricUnit sets the MetricUnit field's value. +func (s *AlarmPolicyConditionForModifyScalingPolicyInput) SetMetricUnit(v string) *AlarmPolicyConditionForModifyScalingPolicyInput { + s.MetricUnit = &v + return s +} + +// SetThreshold sets the Threshold field's value. 
+func (s *AlarmPolicyConditionForModifyScalingPolicyInput) SetThreshold(v string) *AlarmPolicyConditionForModifyScalingPolicyInput { + s.Threshold = &v + return s +} + +type AlarmPolicyForModifyScalingPolicyInput struct { + _ struct{} `type:"structure"` + + Condition *AlarmPolicyConditionForModifyScalingPolicyInput `type:"structure"` + + EvaluationCount *int32 `type:"int32"` + + RuleType *string `type:"string"` +} + +// String returns the string representation +func (s AlarmPolicyForModifyScalingPolicyInput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s AlarmPolicyForModifyScalingPolicyInput) GoString() string { + return s.String() +} + +// SetCondition sets the Condition field's value. +func (s *AlarmPolicyForModifyScalingPolicyInput) SetCondition(v *AlarmPolicyConditionForModifyScalingPolicyInput) *AlarmPolicyForModifyScalingPolicyInput { + s.Condition = v + return s +} + +// SetEvaluationCount sets the EvaluationCount field's value. +func (s *AlarmPolicyForModifyScalingPolicyInput) SetEvaluationCount(v int32) *AlarmPolicyForModifyScalingPolicyInput { + s.EvaluationCount = &v + return s +} + +// SetRuleType sets the RuleType field's value. 
+func (s *AlarmPolicyForModifyScalingPolicyInput) SetRuleType(v string) *AlarmPolicyForModifyScalingPolicyInput { + s.RuleType = &v + return s +} + +type ModifyScalingPolicyInput struct { + _ struct{} `type:"structure"` + + AdjustmentType *string `type:"string"` + + AdjustmentValue *int32 `type:"int32"` + + AlarmPolicy *AlarmPolicyForModifyScalingPolicyInput `type:"structure"` + + Cooldown *int32 `type:"int32"` + + ScalingPolicyId *string `type:"string"` + + ScalingPolicyName *string `type:"string"` + + ScheduledPolicy *ScheduledPolicyForModifyScalingPolicyInput `type:"structure"` +} + +// String returns the string representation +func (s ModifyScalingPolicyInput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s ModifyScalingPolicyInput) GoString() string { + return s.String() +} + +// SetAdjustmentType sets the AdjustmentType field's value. +func (s *ModifyScalingPolicyInput) SetAdjustmentType(v string) *ModifyScalingPolicyInput { + s.AdjustmentType = &v + return s +} + +// SetAdjustmentValue sets the AdjustmentValue field's value. +func (s *ModifyScalingPolicyInput) SetAdjustmentValue(v int32) *ModifyScalingPolicyInput { + s.AdjustmentValue = &v + return s +} + +// SetAlarmPolicy sets the AlarmPolicy field's value. +func (s *ModifyScalingPolicyInput) SetAlarmPolicy(v *AlarmPolicyForModifyScalingPolicyInput) *ModifyScalingPolicyInput { + s.AlarmPolicy = v + return s +} + +// SetCooldown sets the Cooldown field's value. +func (s *ModifyScalingPolicyInput) SetCooldown(v int32) *ModifyScalingPolicyInput { + s.Cooldown = &v + return s +} + +// SetScalingPolicyId sets the ScalingPolicyId field's value. +func (s *ModifyScalingPolicyInput) SetScalingPolicyId(v string) *ModifyScalingPolicyInput { + s.ScalingPolicyId = &v + return s +} + +// SetScalingPolicyName sets the ScalingPolicyName field's value. 
+func (s *ModifyScalingPolicyInput) SetScalingPolicyName(v string) *ModifyScalingPolicyInput { + s.ScalingPolicyName = &v + return s +} + +// SetScheduledPolicy sets the ScheduledPolicy field's value. +func (s *ModifyScalingPolicyInput) SetScheduledPolicy(v *ScheduledPolicyForModifyScalingPolicyInput) *ModifyScalingPolicyInput { + s.ScheduledPolicy = v + return s +} + +type ModifyScalingPolicyOutput struct { + _ struct{} `type:"structure"` + + Metadata *response.ResponseMetadata + + ScalingPolicyId *string `type:"string"` +} + +// String returns the string representation +func (s ModifyScalingPolicyOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s ModifyScalingPolicyOutput) GoString() string { + return s.String() +} + +// SetScalingPolicyId sets the ScalingPolicyId field's value. +func (s *ModifyScalingPolicyOutput) SetScalingPolicyId(v string) *ModifyScalingPolicyOutput { + s.ScalingPolicyId = &v + return s +} + +type ScheduledPolicyForModifyScalingPolicyInput struct { + _ struct{} `type:"structure"` + + LaunchTime *string `type:"string"` + + RecurrenceEndTime *string `type:"string"` + + RecurrenceType *string `type:"string"` + + RecurrenceValue *string `type:"string"` +} + +// String returns the string representation +func (s ScheduledPolicyForModifyScalingPolicyInput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s ScheduledPolicyForModifyScalingPolicyInput) GoString() string { + return s.String() +} + +// SetLaunchTime sets the LaunchTime field's value. +func (s *ScheduledPolicyForModifyScalingPolicyInput) SetLaunchTime(v string) *ScheduledPolicyForModifyScalingPolicyInput { + s.LaunchTime = &v + return s +} + +// SetRecurrenceEndTime sets the RecurrenceEndTime field's value. 
+func (s *ScheduledPolicyForModifyScalingPolicyInput) SetRecurrenceEndTime(v string) *ScheduledPolicyForModifyScalingPolicyInput { + s.RecurrenceEndTime = &v + return s +} + +// SetRecurrenceType sets the RecurrenceType field's value. +func (s *ScheduledPolicyForModifyScalingPolicyInput) SetRecurrenceType(v string) *ScheduledPolicyForModifyScalingPolicyInput { + s.RecurrenceType = &v + return s +} + +// SetRecurrenceValue sets the RecurrenceValue field's value. +func (s *ScheduledPolicyForModifyScalingPolicyInput) SetRecurrenceValue(v string) *ScheduledPolicyForModifyScalingPolicyInput { + s.RecurrenceValue = &v + return s +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_remove_instances.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_remove_instances.go new file mode 100644 index 000000000000..6dec44eb18d6 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_remove_instances.go @@ -0,0 +1,194 @@ +// Code generated by volcengine with private/model/cli/gen-api/main.go. DO NOT EDIT. + +package autoscaling + +import ( + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/response" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil" +) + +const opRemoveInstancesCommon = "RemoveInstances" + +// RemoveInstancesCommonRequest generates a "volcengine/request.Request" representing the +// client's request for the RemoveInstancesCommon operation. The "output" return +// value will be populated with the RemoveInstancesCommon request's response once the request completes +// successfully. 
+// +// Use "Send" method on the returned RemoveInstancesCommon Request to send the API call to the service. +// the "output" return value is not valid until after RemoveInstancesCommon Send returns without error. +// +// See RemoveInstancesCommon for more information on using the RemoveInstancesCommon +// API call, and error handling. +// +// // Example sending a request using the RemoveInstancesCommonRequest method. +// req, resp := client.RemoveInstancesCommonRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *AUTOSCALING) RemoveInstancesCommonRequest(input *map[string]interface{}) (req *request.Request, output *map[string]interface{}) { + op := &request.Operation{ + Name: opRemoveInstancesCommon, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &map[string]interface{}{} + } + + output = &map[string]interface{}{} + req = c.newRequest(op, input, output) + + return +} + +// RemoveInstancesCommon API operation for AUTO_SCALING. +// +// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the VOLCENGINE API reference guide for AUTO_SCALING's +// API operation RemoveInstancesCommon for usage and error information. +func (c *AUTOSCALING) RemoveInstancesCommon(input *map[string]interface{}) (*map[string]interface{}, error) { + req, out := c.RemoveInstancesCommonRequest(input) + return out, req.Send() +} + +// RemoveInstancesCommonWithContext is the same as RemoveInstancesCommon with the addition of +// the ability to pass a context and additional request options. +// +// See RemoveInstancesCommon for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur. 
+// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/
+// for more information on using Contexts.
+func (c *AUTOSCALING) RemoveInstancesCommonWithContext(ctx volcengine.Context, input *map[string]interface{}, opts ...request.Option) (*map[string]interface{}, error) {
+	req, out := c.RemoveInstancesCommonRequest(input)
+	req.SetContext(ctx)
+	req.ApplyOptions(opts...)
+	return out, req.Send()
+}
+
+const opRemoveInstances = "RemoveInstances"
+
+// RemoveInstancesRequest generates a "volcengine/request.Request" representing the
+// client's request for the RemoveInstances operation. The "output" return
+// value will be populated with the RemoveInstances request's response once the request completes
+// successfully.
+//
+// Use "Send" method on the returned RemoveInstances Request to send the API call to the service.
+// the "output" return value is not valid until after RemoveInstances Send returns without error.
+//
+// See RemoveInstances for more information on using the RemoveInstances
+// API call, and error handling.
+//
+// // Example sending a request using the RemoveInstancesRequest method.
+// req, resp := client.RemoveInstancesRequest(params)
+//
+// err := req.Send()
+// if err == nil { // resp is now filled
+// fmt.Println(resp)
+// }
+func (c *AUTOSCALING) RemoveInstancesRequest(input *RemoveInstancesInput) (req *request.Request, output *RemoveInstancesOutput) {
+	op := &request.Operation{
+		Name:       opRemoveInstances,
+		HTTPMethod: "GET",
+		HTTPPath:   "/",
+	}
+
+	if input == nil {
+		input = &RemoveInstancesInput{}
+	}
+
+	output = &RemoveInstancesOutput{}
+	req = c.newRequest(op, input, output)
+
+	return
+}
+
+// RemoveInstances API operation for AUTO_SCALING.
+//
+// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions
+// with volcengineerr.Error's Code and Message methods to get detailed information about
+// the error.
+//
+// See the VOLCENGINE API reference guide for AUTO_SCALING's
+// API operation RemoveInstances for usage and error information.
+func (c *AUTOSCALING) RemoveInstances(input *RemoveInstancesInput) (*RemoveInstancesOutput, error) {
+	req, out := c.RemoveInstancesRequest(input)
+	return out, req.Send()
+}
+
+// RemoveInstancesWithContext is the same as RemoveInstances with the addition of
+// the ability to pass a context and additional request options.
+//
+// See RemoveInstances for details on how to use this API operation.
+//
+// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur.
+// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/
+// for more information on using Contexts.
+func (c *AUTOSCALING) RemoveInstancesWithContext(ctx volcengine.Context, input *RemoveInstancesInput, opts ...request.Option) (*RemoveInstancesOutput, error) {
+	req, out := c.RemoveInstancesRequest(input)
+	req.SetContext(ctx)
+	req.ApplyOptions(opts...)
+	return out, req.Send()
+}
+
+type RemoveInstancesInput struct {
+	_ struct{} `type:"structure"`
+
+	DecreaseDesiredCapacity *bool `type:"boolean"`
+
+	InstanceIds []*string `type:"list"`
+
+	ScalingGroupId *string `type:"string"`
+}
+
+// String returns the string representation
+func (s RemoveInstancesInput) String() string {
+	return volcengineutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s RemoveInstancesInput) GoString() string {
+	return s.String()
+}
+
+// SetDecreaseDesiredCapacity sets the DecreaseDesiredCapacity field's value.
+func (s *RemoveInstancesInput) SetDecreaseDesiredCapacity(v bool) *RemoveInstancesInput {
+	s.DecreaseDesiredCapacity = &v
+	return s
+}
+
+// SetInstanceIds sets the InstanceIds field's value.
+func (s *RemoveInstancesInput) SetInstanceIds(v []*string) *RemoveInstancesInput { + s.InstanceIds = v + return s +} + +// SetScalingGroupId sets the ScalingGroupId field's value. +func (s *RemoveInstancesInput) SetScalingGroupId(v string) *RemoveInstancesInput { + s.ScalingGroupId = &v + return s +} + +type RemoveInstancesOutput struct { + _ struct{} `type:"structure"` + + Metadata *response.ResponseMetadata +} + +// String returns the string representation +func (s RemoveInstancesOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s RemoveInstancesOutput) GoString() string { + return s.String() +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_set_instances_protection.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_set_instances_protection.go new file mode 100644 index 000000000000..8c1bcafe2deb --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/api_set_instances_protection.go @@ -0,0 +1,248 @@ +// Code generated by volcengine with private/model/cli/gen-api/main.go. DO NOT EDIT. + +package autoscaling + +import ( + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/response" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil" +) + +const opSetInstancesProtectionCommon = "SetInstancesProtection" + +// SetInstancesProtectionCommonRequest generates a "volcengine/request.Request" representing the +// client's request for the SetInstancesProtectionCommon operation. 
The "output" return +// value will be populated with the SetInstancesProtectionCommon request's response once the request completes +// successfully. +// +// Use "Send" method on the returned SetInstancesProtectionCommon Request to send the API call to the service. +// the "output" return value is not valid until after SetInstancesProtectionCommon Send returns without error. +// +// See SetInstancesProtectionCommon for more information on using the SetInstancesProtectionCommon +// API call, and error handling. +// +// // Example sending a request using the SetInstancesProtectionCommonRequest method. +// req, resp := client.SetInstancesProtectionCommonRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *AUTOSCALING) SetInstancesProtectionCommonRequest(input *map[string]interface{}) (req *request.Request, output *map[string]interface{}) { + op := &request.Operation{ + Name: opSetInstancesProtectionCommon, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &map[string]interface{}{} + } + + output = &map[string]interface{}{} + req = c.newRequest(op, input, output) + + return +} + +// SetInstancesProtectionCommon API operation for AUTO_SCALING. +// +// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the VOLCENGINE API reference guide for AUTO_SCALING's +// API operation SetInstancesProtectionCommon for usage and error information. +func (c *AUTOSCALING) SetInstancesProtectionCommon(input *map[string]interface{}) (*map[string]interface{}, error) { + req, out := c.SetInstancesProtectionCommonRequest(input) + return out, req.Send() +} + +// SetInstancesProtectionCommonWithContext is the same as SetInstancesProtectionCommon with the addition of +// the ability to pass a context and additional request options. 
+//
+// See SetInstancesProtectionCommon for details on how to use this API operation.
+//
+// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur.
+// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/
+// for more information on using Contexts.
+func (c *AUTOSCALING) SetInstancesProtectionCommonWithContext(ctx volcengine.Context, input *map[string]interface{}, opts ...request.Option) (*map[string]interface{}, error) {
+	req, out := c.SetInstancesProtectionCommonRequest(input)
+	req.SetContext(ctx)
+	req.ApplyOptions(opts...)
+	return out, req.Send()
+}
+
+const opSetInstancesProtection = "SetInstancesProtection"
+
+// SetInstancesProtectionRequest generates a "volcengine/request.Request" representing the
+// client's request for the SetInstancesProtection operation. The "output" return
+// value will be populated with the SetInstancesProtection request's response once the request completes
+// successfully.
+//
+// Use "Send" method on the returned SetInstancesProtection Request to send the API call to the service.
+// the "output" return value is not valid until after SetInstancesProtection Send returns without error.
+//
+// See SetInstancesProtection for more information on using the SetInstancesProtection
+// API call, and error handling.
+//
+// // Example sending a request using the SetInstancesProtectionRequest method.
+// req, resp := client.SetInstancesProtectionRequest(params)
+//
+// err := req.Send()
+// if err == nil { // resp is now filled
+// fmt.Println(resp)
+// }
+func (c *AUTOSCALING) SetInstancesProtectionRequest(input *SetInstancesProtectionInput) (req *request.Request, output *SetInstancesProtectionOutput) {
+	op := &request.Operation{
+		Name:       opSetInstancesProtection,
+		HTTPMethod: "GET",
+		HTTPPath:   "/",
+	}
+
+	if input == nil {
+		input = &SetInstancesProtectionInput{}
+	}
+
+	output = &SetInstancesProtectionOutput{}
+	req = c.newRequest(op, input, output)
+
+	return
+}
+
+// SetInstancesProtection API operation for AUTO_SCALING.
+//
+// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions
+// with volcengineerr.Error's Code and Message methods to get detailed information about
+// the error.
+//
+// See the VOLCENGINE API reference guide for AUTO_SCALING's
+// API operation SetInstancesProtection for usage and error information.
+func (c *AUTOSCALING) SetInstancesProtection(input *SetInstancesProtectionInput) (*SetInstancesProtectionOutput, error) {
+	req, out := c.SetInstancesProtectionRequest(input)
+	return out, req.Send()
+}
+
+// SetInstancesProtectionWithContext is the same as SetInstancesProtection with the addition of
+// the ability to pass a context and additional request options.
+//
+// See SetInstancesProtection for details on how to use this API operation.
+//
+// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur.
+// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/
+// for more information on using Contexts.
+func (c *AUTOSCALING) SetInstancesProtectionWithContext(ctx volcengine.Context, input *SetInstancesProtectionInput, opts ...request.Option) (*SetInstancesProtectionOutput, error) {
+	req, out := c.SetInstancesProtectionRequest(input)
+	req.SetContext(ctx)
+	req.ApplyOptions(opts...)
+ return out, req.Send() +} + +type InstanceProtectionResultForSetInstancesProtectionOutput struct { + _ struct{} `type:"structure"` + + Code *string `type:"string"` + + InstanceId *string `type:"string"` + + Message *string `type:"string"` + + Result *string `type:"string"` +} + +// String returns the string representation +func (s InstanceProtectionResultForSetInstancesProtectionOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s InstanceProtectionResultForSetInstancesProtectionOutput) GoString() string { + return s.String() +} + +// SetCode sets the Code field's value. +func (s *InstanceProtectionResultForSetInstancesProtectionOutput) SetCode(v string) *InstanceProtectionResultForSetInstancesProtectionOutput { + s.Code = &v + return s +} + +// SetInstanceId sets the InstanceId field's value. +func (s *InstanceProtectionResultForSetInstancesProtectionOutput) SetInstanceId(v string) *InstanceProtectionResultForSetInstancesProtectionOutput { + s.InstanceId = &v + return s +} + +// SetMessage sets the Message field's value. +func (s *InstanceProtectionResultForSetInstancesProtectionOutput) SetMessage(v string) *InstanceProtectionResultForSetInstancesProtectionOutput { + s.Message = &v + return s +} + +// SetResult sets the Result field's value. 
+func (s *InstanceProtectionResultForSetInstancesProtectionOutput) SetResult(v string) *InstanceProtectionResultForSetInstancesProtectionOutput { + s.Result = &v + return s +} + +type SetInstancesProtectionInput struct { + _ struct{} `type:"structure"` + + InstanceIds []*string `type:"list"` + + ProtectedFromScaleIn *bool `type:"boolean"` + + ScalingGroupId *string `type:"string"` +} + +// String returns the string representation +func (s SetInstancesProtectionInput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s SetInstancesProtectionInput) GoString() string { + return s.String() +} + +// SetInstanceIds sets the InstanceIds field's value. +func (s *SetInstancesProtectionInput) SetInstanceIds(v []*string) *SetInstancesProtectionInput { + s.InstanceIds = v + return s +} + +// SetProtectedFromScaleIn sets the ProtectedFromScaleIn field's value. +func (s *SetInstancesProtectionInput) SetProtectedFromScaleIn(v bool) *SetInstancesProtectionInput { + s.ProtectedFromScaleIn = &v + return s +} + +// SetScalingGroupId sets the ScalingGroupId field's value. +func (s *SetInstancesProtectionInput) SetScalingGroupId(v string) *SetInstancesProtectionInput { + s.ScalingGroupId = &v + return s +} + +type SetInstancesProtectionOutput struct { + _ struct{} `type:"structure"` + + Metadata *response.ResponseMetadata + + InstanceProtectionResults []*InstanceProtectionResultForSetInstancesProtectionOutput `type:"list"` +} + +// String returns the string representation +func (s SetInstancesProtectionOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s SetInstancesProtectionOutput) GoString() string { + return s.String() +} + +// SetInstanceProtectionResults sets the InstanceProtectionResults field's value. 
+func (s *SetInstancesProtectionOutput) SetInstanceProtectionResults(v []*InstanceProtectionResultForSetInstancesProtectionOutput) *SetInstancesProtectionOutput {
+	s.InstanceProtectionResults = v
+	return s
+}
diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/interface_autoscaling.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/interface_autoscaling.go
new file mode 100644
index 000000000000..e76747dcf5de
--- /dev/null
+++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/interface_autoscaling.go
@@ -0,0 +1,297 @@
+// Code generated by volcengine with private/model/cli/gen-api/main.go. DO NOT EDIT.
+
+// Package autoscaling provides an interface to enable mocking the AUTO_SCALING service client
+// for testing your code.
+//
+// It is important to note that this interface will have breaking changes
+// when the service model is updated and adds new API operations, paginators,
+// and waiters.
+package autoscaling
+
+import (
+	"k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine"
+	"k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request"
+)
+
+// AUTOSCALINGAPI provides an interface to enable mocking the
+// autoscaling.AUTOSCALING service client's API operations.
+//
+// // volcengine sdk func uses an SDK service client to make a request to
+// // AUTO_SCALING.
+// func myFunc(svc AUTOSCALINGAPI) bool { +// // Make svc.AttachDBInstances request +// } +// +// func main() { +// sess := session.New() +// svc := autoscaling.New(sess) +// +// myFunc(svc) +// } +type AUTOSCALINGAPI interface { + AttachDBInstancesCommon(*map[string]interface{}) (*map[string]interface{}, error) + AttachDBInstancesCommonWithContext(volcengine.Context, *map[string]interface{}, ...request.Option) (*map[string]interface{}, error) + AttachDBInstancesCommonRequest(*map[string]interface{}) (*request.Request, *map[string]interface{}) + + AttachDBInstances(*AttachDBInstancesInput) (*AttachDBInstancesOutput, error) + AttachDBInstancesWithContext(volcengine.Context, *AttachDBInstancesInput, ...request.Option) (*AttachDBInstancesOutput, error) + AttachDBInstancesRequest(*AttachDBInstancesInput) (*request.Request, *AttachDBInstancesOutput) + + AttachInstancesCommon(*map[string]interface{}) (*map[string]interface{}, error) + AttachInstancesCommonWithContext(volcengine.Context, *map[string]interface{}, ...request.Option) (*map[string]interface{}, error) + AttachInstancesCommonRequest(*map[string]interface{}) (*request.Request, *map[string]interface{}) + + AttachInstances(*AttachInstancesInput) (*AttachInstancesOutput, error) + AttachInstancesWithContext(volcengine.Context, *AttachInstancesInput, ...request.Option) (*AttachInstancesOutput, error) + AttachInstancesRequest(*AttachInstancesInput) (*request.Request, *AttachInstancesOutput) + + AttachServerGroupsCommon(*map[string]interface{}) (*map[string]interface{}, error) + AttachServerGroupsCommonWithContext(volcengine.Context, *map[string]interface{}, ...request.Option) (*map[string]interface{}, error) + AttachServerGroupsCommonRequest(*map[string]interface{}) (*request.Request, *map[string]interface{}) + + AttachServerGroups(*AttachServerGroupsInput) (*AttachServerGroupsOutput, error) + AttachServerGroupsWithContext(volcengine.Context, *AttachServerGroupsInput, ...request.Option) (*AttachServerGroupsOutput, 
error) + AttachServerGroupsRequest(*AttachServerGroupsInput) (*request.Request, *AttachServerGroupsOutput) + + CompleteLifecycleActivityCommon(*map[string]interface{}) (*map[string]interface{}, error) + CompleteLifecycleActivityCommonWithContext(volcengine.Context, *map[string]interface{}, ...request.Option) (*map[string]interface{}, error) + CompleteLifecycleActivityCommonRequest(*map[string]interface{}) (*request.Request, *map[string]interface{}) + + CompleteLifecycleActivity(*CompleteLifecycleActivityInput) (*CompleteLifecycleActivityOutput, error) + CompleteLifecycleActivityWithContext(volcengine.Context, *CompleteLifecycleActivityInput, ...request.Option) (*CompleteLifecycleActivityOutput, error) + CompleteLifecycleActivityRequest(*CompleteLifecycleActivityInput) (*request.Request, *CompleteLifecycleActivityOutput) + + CreateLifecycleHookCommon(*map[string]interface{}) (*map[string]interface{}, error) + CreateLifecycleHookCommonWithContext(volcengine.Context, *map[string]interface{}, ...request.Option) (*map[string]interface{}, error) + CreateLifecycleHookCommonRequest(*map[string]interface{}) (*request.Request, *map[string]interface{}) + + CreateLifecycleHook(*CreateLifecycleHookInput) (*CreateLifecycleHookOutput, error) + CreateLifecycleHookWithContext(volcengine.Context, *CreateLifecycleHookInput, ...request.Option) (*CreateLifecycleHookOutput, error) + CreateLifecycleHookRequest(*CreateLifecycleHookInput) (*request.Request, *CreateLifecycleHookOutput) + + CreateScalingConfigurationCommon(*map[string]interface{}) (*map[string]interface{}, error) + CreateScalingConfigurationCommonWithContext(volcengine.Context, *map[string]interface{}, ...request.Option) (*map[string]interface{}, error) + CreateScalingConfigurationCommonRequest(*map[string]interface{}) (*request.Request, *map[string]interface{}) + + CreateScalingConfiguration(*CreateScalingConfigurationInput) (*CreateScalingConfigurationOutput, error) + 
CreateScalingConfigurationWithContext(volcengine.Context, *CreateScalingConfigurationInput, ...request.Option) (*CreateScalingConfigurationOutput, error) + CreateScalingConfigurationRequest(*CreateScalingConfigurationInput) (*request.Request, *CreateScalingConfigurationOutput) + + CreateScalingGroupCommon(*map[string]interface{}) (*map[string]interface{}, error) + CreateScalingGroupCommonWithContext(volcengine.Context, *map[string]interface{}, ...request.Option) (*map[string]interface{}, error) + CreateScalingGroupCommonRequest(*map[string]interface{}) (*request.Request, *map[string]interface{}) + + CreateScalingGroup(*CreateScalingGroupInput) (*CreateScalingGroupOutput, error) + CreateScalingGroupWithContext(volcengine.Context, *CreateScalingGroupInput, ...request.Option) (*CreateScalingGroupOutput, error) + CreateScalingGroupRequest(*CreateScalingGroupInput) (*request.Request, *CreateScalingGroupOutput) + + CreateScalingPolicyCommon(*map[string]interface{}) (*map[string]interface{}, error) + CreateScalingPolicyCommonWithContext(volcengine.Context, *map[string]interface{}, ...request.Option) (*map[string]interface{}, error) + CreateScalingPolicyCommonRequest(*map[string]interface{}) (*request.Request, *map[string]interface{}) + + CreateScalingPolicy(*CreateScalingPolicyInput) (*CreateScalingPolicyOutput, error) + CreateScalingPolicyWithContext(volcengine.Context, *CreateScalingPolicyInput, ...request.Option) (*CreateScalingPolicyOutput, error) + CreateScalingPolicyRequest(*CreateScalingPolicyInput) (*request.Request, *CreateScalingPolicyOutput) + + DeleteLifecycleHookCommon(*map[string]interface{}) (*map[string]interface{}, error) + DeleteLifecycleHookCommonWithContext(volcengine.Context, *map[string]interface{}, ...request.Option) (*map[string]interface{}, error) + DeleteLifecycleHookCommonRequest(*map[string]interface{}) (*request.Request, *map[string]interface{}) + + DeleteLifecycleHook(*DeleteLifecycleHookInput) (*DeleteLifecycleHookOutput, error) + 
DeleteLifecycleHookWithContext(volcengine.Context, *DeleteLifecycleHookInput, ...request.Option) (*DeleteLifecycleHookOutput, error) + DeleteLifecycleHookRequest(*DeleteLifecycleHookInput) (*request.Request, *DeleteLifecycleHookOutput) + + DeleteScalingConfigurationCommon(*map[string]interface{}) (*map[string]interface{}, error) + DeleteScalingConfigurationCommonWithContext(volcengine.Context, *map[string]interface{}, ...request.Option) (*map[string]interface{}, error) + DeleteScalingConfigurationCommonRequest(*map[string]interface{}) (*request.Request, *map[string]interface{}) + + DeleteScalingConfiguration(*DeleteScalingConfigurationInput) (*DeleteScalingConfigurationOutput, error) + DeleteScalingConfigurationWithContext(volcengine.Context, *DeleteScalingConfigurationInput, ...request.Option) (*DeleteScalingConfigurationOutput, error) + DeleteScalingConfigurationRequest(*DeleteScalingConfigurationInput) (*request.Request, *DeleteScalingConfigurationOutput) + + DeleteScalingGroupCommon(*map[string]interface{}) (*map[string]interface{}, error) + DeleteScalingGroupCommonWithContext(volcengine.Context, *map[string]interface{}, ...request.Option) (*map[string]interface{}, error) + DeleteScalingGroupCommonRequest(*map[string]interface{}) (*request.Request, *map[string]interface{}) + + DeleteScalingGroup(*DeleteScalingGroupInput) (*DeleteScalingGroupOutput, error) + DeleteScalingGroupWithContext(volcengine.Context, *DeleteScalingGroupInput, ...request.Option) (*DeleteScalingGroupOutput, error) + DeleteScalingGroupRequest(*DeleteScalingGroupInput) (*request.Request, *DeleteScalingGroupOutput) + + DeleteScalingPolicyCommon(*map[string]interface{}) (*map[string]interface{}, error) + DeleteScalingPolicyCommonWithContext(volcengine.Context, *map[string]interface{}, ...request.Option) (*map[string]interface{}, error) + DeleteScalingPolicyCommonRequest(*map[string]interface{}) (*request.Request, *map[string]interface{}) + + DeleteScalingPolicy(*DeleteScalingPolicyInput) 
(*DeleteScalingPolicyOutput, error) + DeleteScalingPolicyWithContext(volcengine.Context, *DeleteScalingPolicyInput, ...request.Option) (*DeleteScalingPolicyOutput, error) + DeleteScalingPolicyRequest(*DeleteScalingPolicyInput) (*request.Request, *DeleteScalingPolicyOutput) + + DescribeLifecycleActivitiesCommon(*map[string]interface{}) (*map[string]interface{}, error) + DescribeLifecycleActivitiesCommonWithContext(volcengine.Context, *map[string]interface{}, ...request.Option) (*map[string]interface{}, error) + DescribeLifecycleActivitiesCommonRequest(*map[string]interface{}) (*request.Request, *map[string]interface{}) + + DescribeLifecycleActivities(*DescribeLifecycleActivitiesInput) (*DescribeLifecycleActivitiesOutput, error) + DescribeLifecycleActivitiesWithContext(volcengine.Context, *DescribeLifecycleActivitiesInput, ...request.Option) (*DescribeLifecycleActivitiesOutput, error) + DescribeLifecycleActivitiesRequest(*DescribeLifecycleActivitiesInput) (*request.Request, *DescribeLifecycleActivitiesOutput) + + DescribeLifecycleHooksCommon(*map[string]interface{}) (*map[string]interface{}, error) + DescribeLifecycleHooksCommonWithContext(volcengine.Context, *map[string]interface{}, ...request.Option) (*map[string]interface{}, error) + DescribeLifecycleHooksCommonRequest(*map[string]interface{}) (*request.Request, *map[string]interface{}) + + DescribeLifecycleHooks(*DescribeLifecycleHooksInput) (*DescribeLifecycleHooksOutput, error) + DescribeLifecycleHooksWithContext(volcengine.Context, *DescribeLifecycleHooksInput, ...request.Option) (*DescribeLifecycleHooksOutput, error) + DescribeLifecycleHooksRequest(*DescribeLifecycleHooksInput) (*request.Request, *DescribeLifecycleHooksOutput) + + DescribeScalingActivitiesCommon(*map[string]interface{}) (*map[string]interface{}, error) + DescribeScalingActivitiesCommonWithContext(volcengine.Context, *map[string]interface{}, ...request.Option) (*map[string]interface{}, error) + 
DescribeScalingActivitiesCommonRequest(*map[string]interface{}) (*request.Request, *map[string]interface{}) + + DescribeScalingActivities(*DescribeScalingActivitiesInput) (*DescribeScalingActivitiesOutput, error) + DescribeScalingActivitiesWithContext(volcengine.Context, *DescribeScalingActivitiesInput, ...request.Option) (*DescribeScalingActivitiesOutput, error) + DescribeScalingActivitiesRequest(*DescribeScalingActivitiesInput) (*request.Request, *DescribeScalingActivitiesOutput) + + DescribeScalingConfigurationsCommon(*map[string]interface{}) (*map[string]interface{}, error) + DescribeScalingConfigurationsCommonWithContext(volcengine.Context, *map[string]interface{}, ...request.Option) (*map[string]interface{}, error) + DescribeScalingConfigurationsCommonRequest(*map[string]interface{}) (*request.Request, *map[string]interface{}) + + DescribeScalingConfigurations(*DescribeScalingConfigurationsInput) (*DescribeScalingConfigurationsOutput, error) + DescribeScalingConfigurationsWithContext(volcengine.Context, *DescribeScalingConfigurationsInput, ...request.Option) (*DescribeScalingConfigurationsOutput, error) + DescribeScalingConfigurationsRequest(*DescribeScalingConfigurationsInput) (*request.Request, *DescribeScalingConfigurationsOutput) + + DescribeScalingGroupsCommon(*map[string]interface{}) (*map[string]interface{}, error) + DescribeScalingGroupsCommonWithContext(volcengine.Context, *map[string]interface{}, ...request.Option) (*map[string]interface{}, error) + DescribeScalingGroupsCommonRequest(*map[string]interface{}) (*request.Request, *map[string]interface{}) + + DescribeScalingGroups(*DescribeScalingGroupsInput) (*DescribeScalingGroupsOutput, error) + DescribeScalingGroupsWithContext(volcengine.Context, *DescribeScalingGroupsInput, ...request.Option) (*DescribeScalingGroupsOutput, error) + DescribeScalingGroupsRequest(*DescribeScalingGroupsInput) (*request.Request, *DescribeScalingGroupsOutput) + + DescribeScalingInstancesCommon(*map[string]interface{}) 
(*map[string]interface{}, error) + DescribeScalingInstancesCommonWithContext(volcengine.Context, *map[string]interface{}, ...request.Option) (*map[string]interface{}, error) + DescribeScalingInstancesCommonRequest(*map[string]interface{}) (*request.Request, *map[string]interface{}) + + DescribeScalingInstances(*DescribeScalingInstancesInput) (*DescribeScalingInstancesOutput, error) + DescribeScalingInstancesWithContext(volcengine.Context, *DescribeScalingInstancesInput, ...request.Option) (*DescribeScalingInstancesOutput, error) + DescribeScalingInstancesRequest(*DescribeScalingInstancesInput) (*request.Request, *DescribeScalingInstancesOutput) + + DescribeScalingPoliciesCommon(*map[string]interface{}) (*map[string]interface{}, error) + DescribeScalingPoliciesCommonWithContext(volcengine.Context, *map[string]interface{}, ...request.Option) (*map[string]interface{}, error) + DescribeScalingPoliciesCommonRequest(*map[string]interface{}) (*request.Request, *map[string]interface{}) + + DescribeScalingPolicies(*DescribeScalingPoliciesInput) (*DescribeScalingPoliciesOutput, error) + DescribeScalingPoliciesWithContext(volcengine.Context, *DescribeScalingPoliciesInput, ...request.Option) (*DescribeScalingPoliciesOutput, error) + DescribeScalingPoliciesRequest(*DescribeScalingPoliciesInput) (*request.Request, *DescribeScalingPoliciesOutput) + + DetachDBInstancesCommon(*map[string]interface{}) (*map[string]interface{}, error) + DetachDBInstancesCommonWithContext(volcengine.Context, *map[string]interface{}, ...request.Option) (*map[string]interface{}, error) + DetachDBInstancesCommonRequest(*map[string]interface{}) (*request.Request, *map[string]interface{}) + + DetachDBInstances(*DetachDBInstancesInput) (*DetachDBInstancesOutput, error) + DetachDBInstancesWithContext(volcengine.Context, *DetachDBInstancesInput, ...request.Option) (*DetachDBInstancesOutput, error) + DetachDBInstancesRequest(*DetachDBInstancesInput) (*request.Request, *DetachDBInstancesOutput) + + 
DetachInstancesCommon(*map[string]interface{}) (*map[string]interface{}, error) + DetachInstancesCommonWithContext(volcengine.Context, *map[string]interface{}, ...request.Option) (*map[string]interface{}, error) + DetachInstancesCommonRequest(*map[string]interface{}) (*request.Request, *map[string]interface{}) + + DetachInstances(*DetachInstancesInput) (*DetachInstancesOutput, error) + DetachInstancesWithContext(volcengine.Context, *DetachInstancesInput, ...request.Option) (*DetachInstancesOutput, error) + DetachInstancesRequest(*DetachInstancesInput) (*request.Request, *DetachInstancesOutput) + + DetachServerGroupsCommon(*map[string]interface{}) (*map[string]interface{}, error) + DetachServerGroupsCommonWithContext(volcengine.Context, *map[string]interface{}, ...request.Option) (*map[string]interface{}, error) + DetachServerGroupsCommonRequest(*map[string]interface{}) (*request.Request, *map[string]interface{}) + + DetachServerGroups(*DetachServerGroupsInput) (*DetachServerGroupsOutput, error) + DetachServerGroupsWithContext(volcengine.Context, *DetachServerGroupsInput, ...request.Option) (*DetachServerGroupsOutput, error) + DetachServerGroupsRequest(*DetachServerGroupsInput) (*request.Request, *DetachServerGroupsOutput) + + DisableScalingGroupCommon(*map[string]interface{}) (*map[string]interface{}, error) + DisableScalingGroupCommonWithContext(volcengine.Context, *map[string]interface{}, ...request.Option) (*map[string]interface{}, error) + DisableScalingGroupCommonRequest(*map[string]interface{}) (*request.Request, *map[string]interface{}) + + DisableScalingGroup(*DisableScalingGroupInput) (*DisableScalingGroupOutput, error) + DisableScalingGroupWithContext(volcengine.Context, *DisableScalingGroupInput, ...request.Option) (*DisableScalingGroupOutput, error) + DisableScalingGroupRequest(*DisableScalingGroupInput) (*request.Request, *DisableScalingGroupOutput) + + DisableScalingPolicyCommon(*map[string]interface{}) (*map[string]interface{}, error) + 
DisableScalingPolicyCommonWithContext(volcengine.Context, *map[string]interface{}, ...request.Option) (*map[string]interface{}, error) + DisableScalingPolicyCommonRequest(*map[string]interface{}) (*request.Request, *map[string]interface{}) + + DisableScalingPolicy(*DisableScalingPolicyInput) (*DisableScalingPolicyOutput, error) + DisableScalingPolicyWithContext(volcengine.Context, *DisableScalingPolicyInput, ...request.Option) (*DisableScalingPolicyOutput, error) + DisableScalingPolicyRequest(*DisableScalingPolicyInput) (*request.Request, *DisableScalingPolicyOutput) + + EnableScalingConfigurationCommon(*map[string]interface{}) (*map[string]interface{}, error) + EnableScalingConfigurationCommonWithContext(volcengine.Context, *map[string]interface{}, ...request.Option) (*map[string]interface{}, error) + EnableScalingConfigurationCommonRequest(*map[string]interface{}) (*request.Request, *map[string]interface{}) + + EnableScalingConfiguration(*EnableScalingConfigurationInput) (*EnableScalingConfigurationOutput, error) + EnableScalingConfigurationWithContext(volcengine.Context, *EnableScalingConfigurationInput, ...request.Option) (*EnableScalingConfigurationOutput, error) + EnableScalingConfigurationRequest(*EnableScalingConfigurationInput) (*request.Request, *EnableScalingConfigurationOutput) + + EnableScalingGroupCommon(*map[string]interface{}) (*map[string]interface{}, error) + EnableScalingGroupCommonWithContext(volcengine.Context, *map[string]interface{}, ...request.Option) (*map[string]interface{}, error) + EnableScalingGroupCommonRequest(*map[string]interface{}) (*request.Request, *map[string]interface{}) + + EnableScalingGroup(*EnableScalingGroupInput) (*EnableScalingGroupOutput, error) + EnableScalingGroupWithContext(volcengine.Context, *EnableScalingGroupInput, ...request.Option) (*EnableScalingGroupOutput, error) + EnableScalingGroupRequest(*EnableScalingGroupInput) (*request.Request, *EnableScalingGroupOutput) + + 
EnableScalingPolicyCommon(*map[string]interface{}) (*map[string]interface{}, error) + EnableScalingPolicyCommonWithContext(volcengine.Context, *map[string]interface{}, ...request.Option) (*map[string]interface{}, error) + EnableScalingPolicyCommonRequest(*map[string]interface{}) (*request.Request, *map[string]interface{}) + + EnableScalingPolicy(*EnableScalingPolicyInput) (*EnableScalingPolicyOutput, error) + EnableScalingPolicyWithContext(volcengine.Context, *EnableScalingPolicyInput, ...request.Option) (*EnableScalingPolicyOutput, error) + EnableScalingPolicyRequest(*EnableScalingPolicyInput) (*request.Request, *EnableScalingPolicyOutput) + + ModifyLifecycleHookCommon(*map[string]interface{}) (*map[string]interface{}, error) + ModifyLifecycleHookCommonWithContext(volcengine.Context, *map[string]interface{}, ...request.Option) (*map[string]interface{}, error) + ModifyLifecycleHookCommonRequest(*map[string]interface{}) (*request.Request, *map[string]interface{}) + + ModifyLifecycleHook(*ModifyLifecycleHookInput) (*ModifyLifecycleHookOutput, error) + ModifyLifecycleHookWithContext(volcengine.Context, *ModifyLifecycleHookInput, ...request.Option) (*ModifyLifecycleHookOutput, error) + ModifyLifecycleHookRequest(*ModifyLifecycleHookInput) (*request.Request, *ModifyLifecycleHookOutput) + + ModifyScalingConfigurationCommon(*map[string]interface{}) (*map[string]interface{}, error) + ModifyScalingConfigurationCommonWithContext(volcengine.Context, *map[string]interface{}, ...request.Option) (*map[string]interface{}, error) + ModifyScalingConfigurationCommonRequest(*map[string]interface{}) (*request.Request, *map[string]interface{}) + + ModifyScalingConfiguration(*ModifyScalingConfigurationInput) (*ModifyScalingConfigurationOutput, error) + ModifyScalingConfigurationWithContext(volcengine.Context, *ModifyScalingConfigurationInput, ...request.Option) (*ModifyScalingConfigurationOutput, error) + ModifyScalingConfigurationRequest(*ModifyScalingConfigurationInput) 
(*request.Request, *ModifyScalingConfigurationOutput) + + ModifyScalingGroupCommon(*map[string]interface{}) (*map[string]interface{}, error) + ModifyScalingGroupCommonWithContext(volcengine.Context, *map[string]interface{}, ...request.Option) (*map[string]interface{}, error) + ModifyScalingGroupCommonRequest(*map[string]interface{}) (*request.Request, *map[string]interface{}) + + ModifyScalingGroup(*ModifyScalingGroupInput) (*ModifyScalingGroupOutput, error) + ModifyScalingGroupWithContext(volcengine.Context, *ModifyScalingGroupInput, ...request.Option) (*ModifyScalingGroupOutput, error) + ModifyScalingGroupRequest(*ModifyScalingGroupInput) (*request.Request, *ModifyScalingGroupOutput) + + ModifyScalingPolicyCommon(*map[string]interface{}) (*map[string]interface{}, error) + ModifyScalingPolicyCommonWithContext(volcengine.Context, *map[string]interface{}, ...request.Option) (*map[string]interface{}, error) + ModifyScalingPolicyCommonRequest(*map[string]interface{}) (*request.Request, *map[string]interface{}) + + ModifyScalingPolicy(*ModifyScalingPolicyInput) (*ModifyScalingPolicyOutput, error) + ModifyScalingPolicyWithContext(volcengine.Context, *ModifyScalingPolicyInput, ...request.Option) (*ModifyScalingPolicyOutput, error) + ModifyScalingPolicyRequest(*ModifyScalingPolicyInput) (*request.Request, *ModifyScalingPolicyOutput) + + RemoveInstancesCommon(*map[string]interface{}) (*map[string]interface{}, error) + RemoveInstancesCommonWithContext(volcengine.Context, *map[string]interface{}, ...request.Option) (*map[string]interface{}, error) + RemoveInstancesCommonRequest(*map[string]interface{}) (*request.Request, *map[string]interface{}) + + RemoveInstances(*RemoveInstancesInput) (*RemoveInstancesOutput, error) + RemoveInstancesWithContext(volcengine.Context, *RemoveInstancesInput, ...request.Option) (*RemoveInstancesOutput, error) + RemoveInstancesRequest(*RemoveInstancesInput) (*request.Request, *RemoveInstancesOutput) + + 
SetInstancesProtectionCommon(*map[string]interface{}) (*map[string]interface{}, error) + SetInstancesProtectionCommonWithContext(volcengine.Context, *map[string]interface{}, ...request.Option) (*map[string]interface{}, error) + SetInstancesProtectionCommonRequest(*map[string]interface{}) (*request.Request, *map[string]interface{}) + + SetInstancesProtection(*SetInstancesProtectionInput) (*SetInstancesProtectionOutput, error) + SetInstancesProtectionWithContext(volcengine.Context, *SetInstancesProtectionInput, ...request.Option) (*SetInstancesProtectionOutput, error) + SetInstancesProtectionRequest(*SetInstancesProtectionInput) (*request.Request, *SetInstancesProtectionOutput) +} + +var _ AUTOSCALINGAPI = (*AUTOSCALING)(nil) diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/service_autoscaling.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/service_autoscaling.go new file mode 100644 index 000000000000..3419d868bdbf --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling/service_autoscaling.go @@ -0,0 +1,105 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +// Code generated by volcengine with private/model/cli/gen-api/main.go. DO NOT EDIT. 
+
+package autoscaling
+
+import (
+	"k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine"
+	"k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/client"
+	"k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/client/metadata"
+	"k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/corehandlers"
+	"k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request"
+	"k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/signer/volc"
+	"k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcenginequery"
+)
+
+// AUTOSCALING provides the API operation methods for making requests to
+// AUTO_SCALING. See this package's package overview docs
+// for details on the service.
+//
+// AUTOSCALING methods are safe to use concurrently. It is not safe to
+// mutate any of the struct's properties though.
+type AUTOSCALING struct {
+	*client.Client
+}
+
+// Used for custom client initialization logic
+var initClient func(*client.Client)
+
+// Used for custom request initialization logic
+var initRequest func(*request.Request)
+
+// Service information constants
+const (
+	ServiceName = "autoscaling"  // Name of service.
+	EndpointsID = ServiceName    // ID to lookup a service endpoint with.
+	ServiceID   = "auto_scaling" // ServiceID is a unique identifier of a specific service.
+)
+
+// New creates a new AUTOSCALING client from the given ConfigProvider, applying any optional volcengine.Config overrides (for example region or endpoint).
+func New(p client.ConfigProvider, cfgs ...*volcengine.Config) *AUTOSCALING {
+	c := p.ClientConfig(EndpointsID, cfgs...)
+	return newClient(*c.Config, c.Handlers, c.Endpoint, c.SigningRegion, c.SigningName)
+}
+
+// newClient creates, initializes and returns a new service client instance.
+func newClient(cfg volcengine.Config, handlers request.Handlers, endpoint, signingRegion, signingName string) *AUTOSCALING { + svc := &AUTOSCALING{ + Client: client.New( + cfg, + metadata.ClientInfo{ + ServiceName: ServiceName, + ServiceID: ServiceID, + SigningName: signingName, + SigningRegion: signingRegion, + Endpoint: endpoint, + APIVersion: "2020-01-01", + }, + handlers, + ), + } + + // Handlers + svc.Handlers.Build.PushBackNamed(corehandlers.SDKVersionUserAgentHandler) + svc.Handlers.Build.PushBackNamed(corehandlers.AddHostExecEnvUserAgentHandler) + svc.Handlers.Sign.PushBackNamed(volc.SignRequestHandler) + svc.Handlers.Build.PushBackNamed(volcenginequery.BuildHandler) + svc.Handlers.Unmarshal.PushBackNamed(volcenginequery.UnmarshalHandler) + svc.Handlers.UnmarshalMeta.PushBackNamed(volcenginequery.UnmarshalMetaHandler) + svc.Handlers.UnmarshalError.PushBackNamed(volcenginequery.UnmarshalErrorHandler) + + // Run custom client initialization if present + if initClient != nil { + initClient(svc.Client) + } + + return svc +} + +// newRequest creates a new request for a AUTOSCALING operation and runs any +// custom request initialization. +func (c *AUTOSCALING) newRequest(op *request.Operation, params, data interface{}) *request.Request { + req := c.NewRequest(op, params, data) + + // Run custom request initialization if present + if initRequest != nil { + initRequest(req) + } + + return req +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_associate_instances_iam_role.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_associate_instances_iam_role.go new file mode 100644 index 000000000000..9286b99fdc1c --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_associate_instances_iam_role.go @@ -0,0 +1,254 @@ +// Code generated by volcengine with private/model/cli/gen-api/main.go. DO NOT EDIT. 
+ +package ecs + +import ( + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/response" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil" +) + +const opAssociateInstancesIamRoleCommon = "AssociateInstancesIamRole" + +// AssociateInstancesIamRoleCommonRequest generates a "volcengine/request.Request" representing the +// client's request for the AssociateInstancesIamRoleCommon operation. The "output" return +// value will be populated with the AssociateInstancesIamRoleCommon request's response once the request completes +// successfully. +// +// Use "Send" method on the returned AssociateInstancesIamRoleCommon Request to send the API call to the service. +// the "output" return value is not valid until after AssociateInstancesIamRoleCommon Send returns without error. +// +// See AssociateInstancesIamRoleCommon for more information on using the AssociateInstancesIamRoleCommon +// API call, and error handling. +// +// // Example sending a request using the AssociateInstancesIamRoleCommonRequest method. +// req, resp := client.AssociateInstancesIamRoleCommonRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *ECS) AssociateInstancesIamRoleCommonRequest(input *map[string]interface{}) (req *request.Request, output *map[string]interface{}) { + op := &request.Operation{ + Name: opAssociateInstancesIamRoleCommon, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &map[string]interface{}{} + } + + output = &map[string]interface{}{} + req = c.newRequest(op, input, output) + + return +} + +// AssociateInstancesIamRoleCommon API operation for ECS. 
+// +// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the VOLCENGINE API reference guide for ECS's +// API operation AssociateInstancesIamRoleCommon for usage and error information. +func (c *ECS) AssociateInstancesIamRoleCommon(input *map[string]interface{}) (*map[string]interface{}, error) { + req, out := c.AssociateInstancesIamRoleCommonRequest(input) + return out, req.Send() +} + +// AssociateInstancesIamRoleCommonWithContext is the same as AssociateInstancesIamRoleCommon with the addition of +// the ability to pass a context and additional request options. +// +// See AssociateInstancesIamRoleCommon for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur. +// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ECS) AssociateInstancesIamRoleCommonWithContext(ctx volcengine.Context, input *map[string]interface{}, opts ...request.Option) (*map[string]interface{}, error) { + req, out := c.AssociateInstancesIamRoleCommonRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opAssociateInstancesIamRole = "AssociateInstancesIamRole" + +// AssociateInstancesIamRoleRequest generates a "volcengine/request.Request" representing the +// client's request for the AssociateInstancesIamRole operation. The "output" return +// value will be populated with the AssociateInstancesIamRoleCommon request's response once the request completes +// successfully. +// +// Use "Send" method on the returned AssociateInstancesIamRoleCommon Request to send the API call to the service. 
+// the "output" return value is not valid until after AssociateInstancesIamRoleCommon Send returns without error. +// +// See AssociateInstancesIamRole for more information on using the AssociateInstancesIamRole +// API call, and error handling. +// +// // Example sending a request using the AssociateInstancesIamRoleRequest method. +// req, resp := client.AssociateInstancesIamRoleRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *ECS) AssociateInstancesIamRoleRequest(input *AssociateInstancesIamRoleInput) (req *request.Request, output *AssociateInstancesIamRoleOutput) { + op := &request.Operation{ + Name: opAssociateInstancesIamRole, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &AssociateInstancesIamRoleInput{} + } + + output = &AssociateInstancesIamRoleOutput{} + req = c.newRequest(op, input, output) + + return +} + +// AssociateInstancesIamRole API operation for ECS. +// +// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the VOLCENGINE API reference guide for ECS's +// API operation AssociateInstancesIamRole for usage and error information. +func (c *ECS) AssociateInstancesIamRole(input *AssociateInstancesIamRoleInput) (*AssociateInstancesIamRoleOutput, error) { + req, out := c.AssociateInstancesIamRoleRequest(input) + return out, req.Send() +} + +// AssociateInstancesIamRoleWithContext is the same as AssociateInstancesIamRole with the addition of +// the ability to pass a context and additional request options. +// +// See AssociateInstancesIamRole for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. Ifthe context is nil a panic will occur. +// In the future the SDK may create sub-contexts for http.Requests. 
See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ECS) AssociateInstancesIamRoleWithContext(ctx volcengine.Context, input *AssociateInstancesIamRoleInput, opts ...request.Option) (*AssociateInstancesIamRoleOutput, error) { + req, out := c.AssociateInstancesIamRoleRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +type AssociateInstancesIamRoleInput struct { + _ struct{} `type:"structure"` + + IamRoleName *string `type:"string"` + + InstanceIds []*string `type:"list"` +} + +// String returns the string representation +func (s AssociateInstancesIamRoleInput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s AssociateInstancesIamRoleInput) GoString() string { + return s.String() +} + +// SetIamRoleName sets the IamRoleName field's value. +func (s *AssociateInstancesIamRoleInput) SetIamRoleName(v string) *AssociateInstancesIamRoleInput { + s.IamRoleName = &v + return s +} + +// SetInstanceIds sets the InstanceIds field's value. +func (s *AssociateInstancesIamRoleInput) SetInstanceIds(v []*string) *AssociateInstancesIamRoleInput { + s.InstanceIds = v + return s +} + +type AssociateInstancesIamRoleOutput struct { + _ struct{} `type:"structure"` + + Metadata *response.ResponseMetadata + + OperationDetails []*OperationDetailForAssociateInstancesIamRoleOutput `type:"list"` +} + +// String returns the string representation +func (s AssociateInstancesIamRoleOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s AssociateInstancesIamRoleOutput) GoString() string { + return s.String() +} + +// SetOperationDetails sets the OperationDetails field's value. 
+func (s *AssociateInstancesIamRoleOutput) SetOperationDetails(v []*OperationDetailForAssociateInstancesIamRoleOutput) *AssociateInstancesIamRoleOutput { + s.OperationDetails = v + return s +} + +type ErrorForAssociateInstancesIamRoleOutput struct { + _ struct{} `type:"structure"` + + Code *string `type:"string"` + + Message *string `type:"string"` +} + +// String returns the string representation +func (s ErrorForAssociateInstancesIamRoleOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s ErrorForAssociateInstancesIamRoleOutput) GoString() string { + return s.String() +} + +// SetCode sets the Code field's value. +func (s *ErrorForAssociateInstancesIamRoleOutput) SetCode(v string) *ErrorForAssociateInstancesIamRoleOutput { + s.Code = &v + return s +} + +// SetMessage sets the Message field's value. +func (s *ErrorForAssociateInstancesIamRoleOutput) SetMessage(v string) *ErrorForAssociateInstancesIamRoleOutput { + s.Message = &v + return s +} + +type OperationDetailForAssociateInstancesIamRoleOutput struct { + _ struct{} `type:"structure"` + + Error *ErrorForAssociateInstancesIamRoleOutput `type:"structure"` + + InstanceId *string `type:"string"` +} + +// String returns the string representation +func (s OperationDetailForAssociateInstancesIamRoleOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s OperationDetailForAssociateInstancesIamRoleOutput) GoString() string { + return s.String() +} + +// SetError sets the Error field's value. +func (s *OperationDetailForAssociateInstancesIamRoleOutput) SetError(v *ErrorForAssociateInstancesIamRoleOutput) *OperationDetailForAssociateInstancesIamRoleOutput { + s.Error = v + return s +} + +// SetInstanceId sets the InstanceId field's value. 
+func (s *OperationDetailForAssociateInstancesIamRoleOutput) SetInstanceId(v string) *OperationDetailForAssociateInstancesIamRoleOutput { + s.InstanceId = &v + return s +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_attach_key_pair.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_attach_key_pair.go new file mode 100644 index 000000000000..c705bd7fbfb8 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_attach_key_pair.go @@ -0,0 +1,262 @@ +// Code generated by volcengine with private/model/cli/gen-api/main.go. DO NOT EDIT. + +package ecs + +import ( + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/response" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil" +) + +const opAttachKeyPairCommon = "AttachKeyPair" + +// AttachKeyPairCommonRequest generates a "volcengine/request.Request" representing the +// client's request for the AttachKeyPairCommon operation. The "output" return +// value will be populated with the AttachKeyPairCommon request's response once the request completes +// successfully. +// +// Use "Send" method on the returned AttachKeyPairCommon Request to send the API call to the service. +// the "output" return value is not valid until after AttachKeyPairCommon Send returns without error. +// +// See AttachKeyPairCommon for more information on using the AttachKeyPairCommon +// API call, and error handling. +// +// // Example sending a request using the AttachKeyPairCommonRequest method. 
+// req, resp := client.AttachKeyPairCommonRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *ECS) AttachKeyPairCommonRequest(input *map[string]interface{}) (req *request.Request, output *map[string]interface{}) { + op := &request.Operation{ + Name: opAttachKeyPairCommon, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &map[string]interface{}{} + } + + output = &map[string]interface{}{} + req = c.newRequest(op, input, output) + + return +} + +// AttachKeyPairCommon API operation for ECS. +// +// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the VOLCENGINE API reference guide for ECS's +// API operation AttachKeyPairCommon for usage and error information. +func (c *ECS) AttachKeyPairCommon(input *map[string]interface{}) (*map[string]interface{}, error) { + req, out := c.AttachKeyPairCommonRequest(input) + return out, req.Send() +} + +// AttachKeyPairCommonWithContext is the same as AttachKeyPairCommon with the addition of +// the ability to pass a context and additional request options. +// +// See AttachKeyPairCommon for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur. +// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ECS) AttachKeyPairCommonWithContext(ctx volcengine.Context, input *map[string]interface{}, opts ...request.Option) (*map[string]interface{}, error) { + req, out := c.AttachKeyPairCommonRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+ return out, req.Send() +} + +const opAttachKeyPair = "AttachKeyPair" + +// AttachKeyPairRequest generates a "volcengine/request.Request" representing the +// client's request for the AttachKeyPair operation. The "output" return +// value will be populated with the AttachKeyPairCommon request's response once the request completes +// successfully. +// +// Use "Send" method on the returned AttachKeyPairCommon Request to send the API call to the service. +// the "output" return value is not valid until after AttachKeyPairCommon Send returns without error. +// +// See AttachKeyPair for more information on using the AttachKeyPair +// API call, and error handling. +// +// // Example sending a request using the AttachKeyPairRequest method. +// req, resp := client.AttachKeyPairRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *ECS) AttachKeyPairRequest(input *AttachKeyPairInput) (req *request.Request, output *AttachKeyPairOutput) { + op := &request.Operation{ + Name: opAttachKeyPair, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &AttachKeyPairInput{} + } + + output = &AttachKeyPairOutput{} + req = c.newRequest(op, input, output) + + return +} + +// AttachKeyPair API operation for ECS. +// +// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the VOLCENGINE API reference guide for ECS's +// API operation AttachKeyPair for usage and error information. +func (c *ECS) AttachKeyPair(input *AttachKeyPairInput) (*AttachKeyPairOutput, error) { + req, out := c.AttachKeyPairRequest(input) + return out, req.Send() +} + +// AttachKeyPairWithContext is the same as AttachKeyPair with the addition of +// the ability to pass a context and additional request options. +// +// See AttachKeyPair for details on how to use this API operation. 
+// +// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur. +// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ECS) AttachKeyPairWithContext(ctx volcengine.Context, input *AttachKeyPairInput, opts ...request.Option) (*AttachKeyPairOutput, error) { + req, out := c.AttachKeyPairRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +type AttachKeyPairInput struct { + _ struct{} `type:"structure"` + + InstanceIds []*string `type:"list"` + + KeyPairId *string `type:"string"` + + KeyPairName *string `type:"string"` +} + +// String returns the string representation +func (s AttachKeyPairInput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s AttachKeyPairInput) GoString() string { + return s.String() +} + +// SetInstanceIds sets the InstanceIds field's value. +func (s *AttachKeyPairInput) SetInstanceIds(v []*string) *AttachKeyPairInput { + s.InstanceIds = v + return s +} + +// SetKeyPairId sets the KeyPairId field's value. +func (s *AttachKeyPairInput) SetKeyPairId(v string) *AttachKeyPairInput { + s.KeyPairId = &v + return s +} + +// SetKeyPairName sets the KeyPairName field's value. +func (s *AttachKeyPairInput) SetKeyPairName(v string) *AttachKeyPairInput { + s.KeyPairName = &v + return s +} + +type AttachKeyPairOutput struct { + _ struct{} `type:"structure"` + + Metadata *response.ResponseMetadata + + OperationDetails []*OperationDetailForAttachKeyPairOutput `type:"list"` +} + +// String returns the string representation +func (s AttachKeyPairOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s AttachKeyPairOutput) GoString() string { + return s.String() +} + +// SetOperationDetails sets the OperationDetails field's value. 
+func (s *AttachKeyPairOutput) SetOperationDetails(v []*OperationDetailForAttachKeyPairOutput) *AttachKeyPairOutput { + s.OperationDetails = v + return s +} + +type ErrorForAttachKeyPairOutput struct { + _ struct{} `type:"structure"` + + Code *string `type:"string"` + + Message *string `type:"string"` +} + +// String returns the string representation +func (s ErrorForAttachKeyPairOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s ErrorForAttachKeyPairOutput) GoString() string { + return s.String() +} + +// SetCode sets the Code field's value. +func (s *ErrorForAttachKeyPairOutput) SetCode(v string) *ErrorForAttachKeyPairOutput { + s.Code = &v + return s +} + +// SetMessage sets the Message field's value. +func (s *ErrorForAttachKeyPairOutput) SetMessage(v string) *ErrorForAttachKeyPairOutput { + s.Message = &v + return s +} + +type OperationDetailForAttachKeyPairOutput struct { + _ struct{} `type:"structure"` + + Error *ErrorForAttachKeyPairOutput `type:"structure"` + + InstanceId *string `type:"string"` +} + +// String returns the string representation +func (s OperationDetailForAttachKeyPairOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s OperationDetailForAttachKeyPairOutput) GoString() string { + return s.String() +} + +// SetError sets the Error field's value. +func (s *OperationDetailForAttachKeyPairOutput) SetError(v *ErrorForAttachKeyPairOutput) *OperationDetailForAttachKeyPairOutput { + s.Error = v + return s +} + +// SetInstanceId sets the InstanceId field's value. 
+func (s *OperationDetailForAttachKeyPairOutput) SetInstanceId(v string) *OperationDetailForAttachKeyPairOutput { + s.InstanceId = &v + return s +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_create_deployment_set.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_create_deployment_set.go new file mode 100644 index 000000000000..f8cda74b1b93 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_create_deployment_set.go @@ -0,0 +1,218 @@ +// Code generated by volcengine with private/model/cli/gen-api/main.go. DO NOT EDIT. + +package ecs + +import ( + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/response" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil" +) + +const opCreateDeploymentSetCommon = "CreateDeploymentSet" + +// CreateDeploymentSetCommonRequest generates a "volcengine/request.Request" representing the +// client's request for the CreateDeploymentSetCommon operation. The "output" return +// value will be populated with the CreateDeploymentSetCommon request's response once the request completes +// successfully. +// +// Use "Send" method on the returned CreateDeploymentSetCommon Request to send the API call to the service. +// the "output" return value is not valid until after CreateDeploymentSetCommon Send returns without error. +// +// See CreateDeploymentSetCommon for more information on using the CreateDeploymentSetCommon +// API call, and error handling. +// +// // Example sending a request using the CreateDeploymentSetCommonRequest method. 
+// req, resp := client.CreateDeploymentSetCommonRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *ECS) CreateDeploymentSetCommonRequest(input *map[string]interface{}) (req *request.Request, output *map[string]interface{}) { + op := &request.Operation{ + Name: opCreateDeploymentSetCommon, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &map[string]interface{}{} + } + + output = &map[string]interface{}{} + req = c.newRequest(op, input, output) + + return +} + +// CreateDeploymentSetCommon API operation for ECS. +// +// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the VOLCENGINE API reference guide for ECS's +// API operation CreateDeploymentSetCommon for usage and error information. +func (c *ECS) CreateDeploymentSetCommon(input *map[string]interface{}) (*map[string]interface{}, error) { + req, out := c.CreateDeploymentSetCommonRequest(input) + return out, req.Send() +} + +// CreateDeploymentSetCommonWithContext is the same as CreateDeploymentSetCommon with the addition of +// the ability to pass a context and additional request options. +// +// See CreateDeploymentSetCommon for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur. +// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ECS) CreateDeploymentSetCommonWithContext(ctx volcengine.Context, input *map[string]interface{}, opts ...request.Option) (*map[string]interface{}, error) { + req, out := c.CreateDeploymentSetCommonRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+ return out, req.Send() +} + +const opCreateDeploymentSet = "CreateDeploymentSet" + +// CreateDeploymentSetRequest generates a "volcengine/request.Request" representing the +// client's request for the CreateDeploymentSet operation. The "output" return +// value will be populated with the CreateDeploymentSetCommon request's response once the request completes +// successfully. +// +// Use "Send" method on the returned CreateDeploymentSetCommon Request to send the API call to the service. +// the "output" return value is not valid until after CreateDeploymentSetCommon Send returns without error. +// +// See CreateDeploymentSet for more information on using the CreateDeploymentSet +// API call, and error handling. +// +// // Example sending a request using the CreateDeploymentSetRequest method. +// req, resp := client.CreateDeploymentSetRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *ECS) CreateDeploymentSetRequest(input *CreateDeploymentSetInput) (req *request.Request, output *CreateDeploymentSetOutput) { + op := &request.Operation{ + Name: opCreateDeploymentSet, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &CreateDeploymentSetInput{} + } + + output = &CreateDeploymentSetOutput{} + req = c.newRequest(op, input, output) + + return +} + +// CreateDeploymentSet API operation for ECS. +// +// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the VOLCENGINE API reference guide for ECS's +// API operation CreateDeploymentSet for usage and error information. 
+func (c *ECS) CreateDeploymentSet(input *CreateDeploymentSetInput) (*CreateDeploymentSetOutput, error) { + req, out := c.CreateDeploymentSetRequest(input) + return out, req.Send() +} + +// CreateDeploymentSetWithContext is the same as CreateDeploymentSet with the addition of +// the ability to pass a context and additional request options. +// +// See CreateDeploymentSet for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur. +// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ECS) CreateDeploymentSetWithContext(ctx volcengine.Context, input *CreateDeploymentSetInput, opts ...request.Option) (*CreateDeploymentSetOutput, error) { + req, out := c.CreateDeploymentSetRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +type CreateDeploymentSetInput struct { + _ struct{} `type:"structure"` + + ClientToken *string `type:"string"` + + DeploymentSetName *string `type:"string"` + + Description *string `type:"string"` + + Granularity *string `type:"string"` + + Strategy *string `type:"string"` +} + +// String returns the string representation +func (s CreateDeploymentSetInput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateDeploymentSetInput) GoString() string { + return s.String() +} + +// SetClientToken sets the ClientToken field's value. +func (s *CreateDeploymentSetInput) SetClientToken(v string) *CreateDeploymentSetInput { + s.ClientToken = &v + return s +} + +// SetDeploymentSetName sets the DeploymentSetName field's value. +func (s *CreateDeploymentSetInput) SetDeploymentSetName(v string) *CreateDeploymentSetInput { + s.DeploymentSetName = &v + return s +} + +// SetDescription sets the Description field's value. 
+func (s *CreateDeploymentSetInput) SetDescription(v string) *CreateDeploymentSetInput { + s.Description = &v + return s +} + +// SetGranularity sets the Granularity field's value. +func (s *CreateDeploymentSetInput) SetGranularity(v string) *CreateDeploymentSetInput { + s.Granularity = &v + return s +} + +// SetStrategy sets the Strategy field's value. +func (s *CreateDeploymentSetInput) SetStrategy(v string) *CreateDeploymentSetInput { + s.Strategy = &v + return s +} + +type CreateDeploymentSetOutput struct { + _ struct{} `type:"structure"` + + Metadata *response.ResponseMetadata + + DeploymentSetId *string `type:"string"` +} + +// String returns the string representation +func (s CreateDeploymentSetOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateDeploymentSetOutput) GoString() string { + return s.String() +} + +// SetDeploymentSetId sets the DeploymentSetId field's value. +func (s *CreateDeploymentSetOutput) SetDeploymentSetId(v string) *CreateDeploymentSetOutput { + s.DeploymentSetId = &v + return s +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_create_image.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_create_image.go new file mode 100644 index 000000000000..a325179c380b --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_create_image.go @@ -0,0 +1,210 @@ +// Code generated by volcengine with private/model/cli/gen-api/main.go. DO NOT EDIT. 
+ +package ecs + +import ( + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/response" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil" +) + +const opCreateImageCommon = "CreateImage" + +// CreateImageCommonRequest generates a "volcengine/request.Request" representing the +// client's request for the CreateImageCommon operation. The "output" return +// value will be populated with the CreateImageCommon request's response once the request completes +// successfully. +// +// Use "Send" method on the returned CreateImageCommon Request to send the API call to the service. +// the "output" return value is not valid until after CreateImageCommon Send returns without error. +// +// See CreateImageCommon for more information on using the CreateImageCommon +// API call, and error handling. +// +// // Example sending a request using the CreateImageCommonRequest method. +// req, resp := client.CreateImageCommonRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *ECS) CreateImageCommonRequest(input *map[string]interface{}) (req *request.Request, output *map[string]interface{}) { + op := &request.Operation{ + Name: opCreateImageCommon, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &map[string]interface{}{} + } + + output = &map[string]interface{}{} + req = c.newRequest(op, input, output) + + return +} + +// CreateImageCommon API operation for ECS. +// +// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. 
+// +// See the VOLCENGINE API reference guide for ECS's +// API operation CreateImageCommon for usage and error information. +func (c *ECS) CreateImageCommon(input *map[string]interface{}) (*map[string]interface{}, error) { + req, out := c.CreateImageCommonRequest(input) + return out, req.Send() +} + +// CreateImageCommonWithContext is the same as CreateImageCommon with the addition of +// the ability to pass a context and additional request options. +// +// See CreateImageCommon for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur. +// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ECS) CreateImageCommonWithContext(ctx volcengine.Context, input *map[string]interface{}, opts ...request.Option) (*map[string]interface{}, error) { + req, out := c.CreateImageCommonRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opCreateImage = "CreateImage" + +// CreateImageRequest generates a "volcengine/request.Request" representing the +// client's request for the CreateImage operation. The "output" return +// value will be populated with the CreateImageCommon request's response once the request completes +// successfully. +// +// Use "Send" method on the returned CreateImageCommon Request to send the API call to the service. +// the "output" return value is not valid until after CreateImageCommon Send returns without error. +// +// See CreateImage for more information on using the CreateImage +// API call, and error handling. +// +// // Example sending a request using the CreateImageRequest method. 
+// req, resp := client.CreateImageRequest(params)
+//
+// err := req.Send()
+// if err == nil { // resp is now filled
+// fmt.Println(resp)
+// }
+func (c *ECS) CreateImageRequest(input *CreateImageInput) (req *request.Request, output *CreateImageOutput) {
+ op := &request.Operation{
+ Name: opCreateImage,
+ HTTPMethod: "GET",
+ HTTPPath: "/",
+ }
+
+ if input == nil {
+ input = &CreateImageInput{}
+ }
+
+ output = &CreateImageOutput{}
+ req = c.newRequest(op, input, output)
+
+ return
+}
+
+// CreateImage API operation for ECS.
+//
+// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions
+// with volcengineerr.Error's Code and Message methods to get detailed information about
+// the error.
+//
+// See the VOLCENGINE API reference guide for ECS's
+// API operation CreateImage for usage and error information.
+func (c *ECS) CreateImage(input *CreateImageInput) (*CreateImageOutput, error) {
+ req, out := c.CreateImageRequest(input)
+ return out, req.Send()
+}
+
+// CreateImageWithContext is the same as CreateImage with the addition of
+// the ability to pass a context and additional request options.
+//
+// See CreateImage for details on how to use this API operation.
+//
+// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur.
+// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/
+// for more information on using Contexts.
+func (c *ECS) CreateImageWithContext(ctx volcengine.Context, input *CreateImageInput, opts ...request.Option) (*CreateImageOutput, error) {
+ req, out := c.CreateImageRequest(input)
+ req.SetContext(ctx)
+ req.ApplyOptions(opts...)
+ return out, req.Send() +} + +type CreateImageInput struct { + _ struct{} `type:"structure"` + + Description *string `type:"string"` + + ImageName *string `type:"string"` + + InstanceId *string `type:"string"` + + ProjectName *string `type:"string"` +} + +// String returns the string representation +func (s CreateImageInput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateImageInput) GoString() string { + return s.String() +} + +// SetDescription sets the Description field's value. +func (s *CreateImageInput) SetDescription(v string) *CreateImageInput { + s.Description = &v + return s +} + +// SetImageName sets the ImageName field's value. +func (s *CreateImageInput) SetImageName(v string) *CreateImageInput { + s.ImageName = &v + return s +} + +// SetInstanceId sets the InstanceId field's value. +func (s *CreateImageInput) SetInstanceId(v string) *CreateImageInput { + s.InstanceId = &v + return s +} + +// SetProjectName sets the ProjectName field's value. +func (s *CreateImageInput) SetProjectName(v string) *CreateImageInput { + s.ProjectName = &v + return s +} + +type CreateImageOutput struct { + _ struct{} `type:"structure"` + + Metadata *response.ResponseMetadata + + ImageId *string `type:"string"` +} + +// String returns the string representation +func (s CreateImageOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateImageOutput) GoString() string { + return s.String() +} + +// SetImageId sets the ImageId field's value. 
+func (s *CreateImageOutput) SetImageId(v string) *CreateImageOutput { + s.ImageId = &v + return s +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_create_key_pair.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_create_key_pair.go new file mode 100644 index 000000000000..d5675b9fb011 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_create_key_pair.go @@ -0,0 +1,218 @@ +// Code generated by volcengine with private/model/cli/gen-api/main.go. DO NOT EDIT. + +package ecs + +import ( + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/response" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil" +) + +const opCreateKeyPairCommon = "CreateKeyPair" + +// CreateKeyPairCommonRequest generates a "volcengine/request.Request" representing the +// client's request for the CreateKeyPairCommon operation. The "output" return +// value will be populated with the CreateKeyPairCommon request's response once the request completes +// successfully. +// +// Use "Send" method on the returned CreateKeyPairCommon Request to send the API call to the service. +// the "output" return value is not valid until after CreateKeyPairCommon Send returns without error. +// +// See CreateKeyPairCommon for more information on using the CreateKeyPairCommon +// API call, and error handling. +// +// // Example sending a request using the CreateKeyPairCommonRequest method. 
+// req, resp := client.CreateKeyPairCommonRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *ECS) CreateKeyPairCommonRequest(input *map[string]interface{}) (req *request.Request, output *map[string]interface{}) { + op := &request.Operation{ + Name: opCreateKeyPairCommon, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &map[string]interface{}{} + } + + output = &map[string]interface{}{} + req = c.newRequest(op, input, output) + + return +} + +// CreateKeyPairCommon API operation for ECS. +// +// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the VOLCENGINE API reference guide for ECS's +// API operation CreateKeyPairCommon for usage and error information. +func (c *ECS) CreateKeyPairCommon(input *map[string]interface{}) (*map[string]interface{}, error) { + req, out := c.CreateKeyPairCommonRequest(input) + return out, req.Send() +} + +// CreateKeyPairCommonWithContext is the same as CreateKeyPairCommon with the addition of +// the ability to pass a context and additional request options. +// +// See CreateKeyPairCommon for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur. +// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ECS) CreateKeyPairCommonWithContext(ctx volcengine.Context, input *map[string]interface{}, opts ...request.Option) (*map[string]interface{}, error) { + req, out := c.CreateKeyPairCommonRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+ return out, req.Send()
+}
+
+const opCreateKeyPair = "CreateKeyPair"
+
+// CreateKeyPairRequest generates a "volcengine/request.Request" representing the
+// client's request for the CreateKeyPair operation. The "output" return
+// value will be populated with the CreateKeyPair request's response once the request completes
+// successfully.
+//
+// Use "Send" method on the returned CreateKeyPair Request to send the API call to the service.
+// the "output" return value is not valid until after CreateKeyPair Send returns without error.
+//
+// See CreateKeyPair for more information on using the CreateKeyPair
+// API call, and error handling.
+//
+// // Example sending a request using the CreateKeyPairRequest method.
+// req, resp := client.CreateKeyPairRequest(params)
+//
+// err := req.Send()
+// if err == nil { // resp is now filled
+// fmt.Println(resp)
+// }
+func (c *ECS) CreateKeyPairRequest(input *CreateKeyPairInput) (req *request.Request, output *CreateKeyPairOutput) {
+ op := &request.Operation{
+ Name: opCreateKeyPair,
+ HTTPMethod: "GET",
+ HTTPPath: "/",
+ }
+
+ if input == nil {
+ input = &CreateKeyPairInput{}
+ }
+
+ output = &CreateKeyPairOutput{}
+ req = c.newRequest(op, input, output)
+
+ return
+}
+
+// CreateKeyPair API operation for ECS.
+//
+// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions
+// with volcengineerr.Error's Code and Message methods to get detailed information about
+// the error.
+//
+// See the VOLCENGINE API reference guide for ECS's
+// API operation CreateKeyPair for usage and error information.
+func (c *ECS) CreateKeyPair(input *CreateKeyPairInput) (*CreateKeyPairOutput, error) {
+ req, out := c.CreateKeyPairRequest(input)
+ return out, req.Send()
+}
+
+// CreateKeyPairWithContext is the same as CreateKeyPair with the addition of
+// the ability to pass a context and additional request options.
+//
+// See CreateKeyPair for details on how to use this API operation.
+//
+// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur.
+// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/
+// for more information on using Contexts.
+func (c *ECS) CreateKeyPairWithContext(ctx volcengine.Context, input *CreateKeyPairInput, opts ...request.Option) (*CreateKeyPairOutput, error) {
+ req, out := c.CreateKeyPairRequest(input)
+ req.SetContext(ctx)
+ req.ApplyOptions(opts...)
+ return out, req.Send()
+}
+
+type CreateKeyPairInput struct {
+ _ struct{} `type:"structure"`
+
+ Description *string `type:"string"`
+
+ KeyPairName *string `type:"string"`
+}
+
+// String returns the string representation
+func (s CreateKeyPairInput) String() string {
+ return volcengineutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s CreateKeyPairInput) GoString() string {
+ return s.String()
+}
+
+// SetDescription sets the Description field's value.
+func (s *CreateKeyPairInput) SetDescription(v string) *CreateKeyPairInput {
+ s.Description = &v
+ return s
+}
+
+// SetKeyPairName sets the KeyPairName field's value.
+func (s *CreateKeyPairInput) SetKeyPairName(v string) *CreateKeyPairInput {
+ s.KeyPairName = &v
+ return s
+}
+
+type CreateKeyPairOutput struct {
+ _ struct{} `type:"structure"`
+
+ Metadata *response.ResponseMetadata
+
+ FingerPrint *string `type:"string"`
+
+ KeyPairId *string `type:"string"`
+
+ KeyPairName *string `type:"string"`
+
+ PrivateKey *string `type:"string"`
+}
+
+// String returns the string representation
+func (s CreateKeyPairOutput) String() string {
+ return volcengineutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s CreateKeyPairOutput) GoString() string {
+ return s.String()
+}
+
+// SetFingerPrint sets the FingerPrint field's value.
+func (s *CreateKeyPairOutput) SetFingerPrint(v string) *CreateKeyPairOutput { + s.FingerPrint = &v + return s +} + +// SetKeyPairId sets the KeyPairId field's value. +func (s *CreateKeyPairOutput) SetKeyPairId(v string) *CreateKeyPairOutput { + s.KeyPairId = &v + return s +} + +// SetKeyPairName sets the KeyPairName field's value. +func (s *CreateKeyPairOutput) SetKeyPairName(v string) *CreateKeyPairOutput { + s.KeyPairName = &v + return s +} + +// SetPrivateKey sets the PrivateKey field's value. +func (s *CreateKeyPairOutput) SetPrivateKey(v string) *CreateKeyPairOutput { + s.PrivateKey = &v + return s +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_create_tags.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_create_tags.go new file mode 100644 index 000000000000..74feb85437ef --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_create_tags.go @@ -0,0 +1,292 @@ +// Code generated by volcengine with private/model/cli/gen-api/main.go. DO NOT EDIT. + +package ecs + +import ( + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/response" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil" +) + +const opCreateTagsCommon = "CreateTags" + +// CreateTagsCommonRequest generates a "volcengine/request.Request" representing the +// client's request for the CreateTagsCommon operation. The "output" return +// value will be populated with the CreateTagsCommon request's response once the request completes +// successfully. +// +// Use "Send" method on the returned CreateTagsCommon Request to send the API call to the service. 
+// the "output" return value is not valid until after CreateTagsCommon Send returns without error. +// +// See CreateTagsCommon for more information on using the CreateTagsCommon +// API call, and error handling. +// +// // Example sending a request using the CreateTagsCommonRequest method. +// req, resp := client.CreateTagsCommonRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *ECS) CreateTagsCommonRequest(input *map[string]interface{}) (req *request.Request, output *map[string]interface{}) { + op := &request.Operation{ + Name: opCreateTagsCommon, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &map[string]interface{}{} + } + + output = &map[string]interface{}{} + req = c.newRequest(op, input, output) + + return +} + +// CreateTagsCommon API operation for ECS. +// +// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the VOLCENGINE API reference guide for ECS's +// API operation CreateTagsCommon for usage and error information. +func (c *ECS) CreateTagsCommon(input *map[string]interface{}) (*map[string]interface{}, error) { + req, out := c.CreateTagsCommonRequest(input) + return out, req.Send() +} + +// CreateTagsCommonWithContext is the same as CreateTagsCommon with the addition of +// the ability to pass a context and additional request options. +// +// See CreateTagsCommon for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur. +// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
+func (c *ECS) CreateTagsCommonWithContext(ctx volcengine.Context, input *map[string]interface{}, opts ...request.Option) (*map[string]interface{}, error) {
+ req, out := c.CreateTagsCommonRequest(input)
+ req.SetContext(ctx)
+ req.ApplyOptions(opts...)
+ return out, req.Send()
+}
+
+const opCreateTags = "CreateTags"
+
+// CreateTagsRequest generates a "volcengine/request.Request" representing the
+// client's request for the CreateTags operation. The "output" return
+// value will be populated with the CreateTags request's response once the request completes
+// successfully.
+//
+// Use "Send" method on the returned CreateTags Request to send the API call to the service.
+// the "output" return value is not valid until after CreateTags Send returns without error.
+//
+// See CreateTags for more information on using the CreateTags
+// API call, and error handling.
+//
+// // Example sending a request using the CreateTagsRequest method.
+// req, resp := client.CreateTagsRequest(params)
+//
+// err := req.Send()
+// if err == nil { // resp is now filled
+// fmt.Println(resp)
+// }
+func (c *ECS) CreateTagsRequest(input *CreateTagsInput) (req *request.Request, output *CreateTagsOutput) {
+ op := &request.Operation{
+ Name: opCreateTags,
+ HTTPMethod: "GET",
+ HTTPPath: "/",
+ }
+
+ if input == nil {
+ input = &CreateTagsInput{}
+ }
+
+ output = &CreateTagsOutput{}
+ req = c.newRequest(op, input, output)
+
+ return
+}
+
+// CreateTags API operation for ECS.
+//
+// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions
+// with volcengineerr.Error's Code and Message methods to get detailed information about
+// the error.
+//
+// See the VOLCENGINE API reference guide for ECS's
+// API operation CreateTags for usage and error information.
+func (c *ECS) CreateTags(input *CreateTagsInput) (*CreateTagsOutput, error) {
+ req, out := c.CreateTagsRequest(input)
+ return out, req.Send()
+}
+
+// CreateTagsWithContext is the same as CreateTags with the addition of
+// the ability to pass a context and additional request options.
+//
+// See CreateTags for details on how to use this API operation.
+//
+// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur.
+// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/
+// for more information on using Contexts.
+func (c *ECS) CreateTagsWithContext(ctx volcengine.Context, input *CreateTagsInput, opts ...request.Option) (*CreateTagsOutput, error) {
+ req, out := c.CreateTagsRequest(input)
+ req.SetContext(ctx)
+ req.ApplyOptions(opts...)
+ return out, req.Send()
+}
+
+type CreateTagsInput struct {
+ _ struct{} `type:"structure"`
+
+ ResourceIds []*string `type:"list"`
+
+ ResourceType *string `type:"string"`
+
+ Tags []*TagForCreateTagsInput `type:"list"`
+}
+
+// String returns the string representation
+func (s CreateTagsInput) String() string {
+ return volcengineutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s CreateTagsInput) GoString() string {
+ return s.String()
+}
+
+// SetResourceIds sets the ResourceIds field's value.
+func (s *CreateTagsInput) SetResourceIds(v []*string) *CreateTagsInput {
+ s.ResourceIds = v
+ return s
+}
+
+// SetResourceType sets the ResourceType field's value.
+func (s *CreateTagsInput) SetResourceType(v string) *CreateTagsInput {
+ s.ResourceType = &v
+ return s
+}
+
+// SetTags sets the Tags field's value.
+func (s *CreateTagsInput) SetTags(v []*TagForCreateTagsInput) *CreateTagsInput { + s.Tags = v + return s +} + +type CreateTagsOutput struct { + _ struct{} `type:"structure"` + + Metadata *response.ResponseMetadata + + OperationDetails []*OperationDetailForCreateTagsOutput `type:"list"` +} + +// String returns the string representation +func (s CreateTagsOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateTagsOutput) GoString() string { + return s.String() +} + +// SetOperationDetails sets the OperationDetails field's value. +func (s *CreateTagsOutput) SetOperationDetails(v []*OperationDetailForCreateTagsOutput) *CreateTagsOutput { + s.OperationDetails = v + return s +} + +type ErrorForCreateTagsOutput struct { + _ struct{} `type:"structure"` + + Code *string `type:"string"` + + Message *string `type:"string"` +} + +// String returns the string representation +func (s ErrorForCreateTagsOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s ErrorForCreateTagsOutput) GoString() string { + return s.String() +} + +// SetCode sets the Code field's value. +func (s *ErrorForCreateTagsOutput) SetCode(v string) *ErrorForCreateTagsOutput { + s.Code = &v + return s +} + +// SetMessage sets the Message field's value. +func (s *ErrorForCreateTagsOutput) SetMessage(v string) *ErrorForCreateTagsOutput { + s.Message = &v + return s +} + +type OperationDetailForCreateTagsOutput struct { + _ struct{} `type:"structure"` + + Error *ErrorForCreateTagsOutput `type:"structure"` + + ResourceId *string `type:"string"` +} + +// String returns the string representation +func (s OperationDetailForCreateTagsOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s OperationDetailForCreateTagsOutput) GoString() string { + return s.String() +} + +// SetError sets the Error field's value. 
+func (s *OperationDetailForCreateTagsOutput) SetError(v *ErrorForCreateTagsOutput) *OperationDetailForCreateTagsOutput { + s.Error = v + return s +} + +// SetResourceId sets the ResourceId field's value. +func (s *OperationDetailForCreateTagsOutput) SetResourceId(v string) *OperationDetailForCreateTagsOutput { + s.ResourceId = &v + return s +} + +type TagForCreateTagsInput struct { + _ struct{} `type:"structure"` + + Key *string `type:"string"` + + Value *string `type:"string"` +} + +// String returns the string representation +func (s TagForCreateTagsInput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s TagForCreateTagsInput) GoString() string { + return s.String() +} + +// SetKey sets the Key field's value. +func (s *TagForCreateTagsInput) SetKey(v string) *TagForCreateTagsInput { + s.Key = &v + return s +} + +// SetValue sets the Value field's value. +func (s *TagForCreateTagsInput) SetValue(v string) *TagForCreateTagsInput { + s.Value = &v + return s +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_delete_deployment_set.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_delete_deployment_set.go new file mode 100644 index 000000000000..1e853a2a2a0e --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_delete_deployment_set.go @@ -0,0 +1,178 @@ +// Code generated by volcengine with private/model/cli/gen-api/main.go. DO NOT EDIT. 
+ +package ecs + +import ( + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/response" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil" +) + +const opDeleteDeploymentSetCommon = "DeleteDeploymentSet" + +// DeleteDeploymentSetCommonRequest generates a "volcengine/request.Request" representing the +// client's request for the DeleteDeploymentSetCommon operation. The "output" return +// value will be populated with the DeleteDeploymentSetCommon request's response once the request completes +// successfully. +// +// Use "Send" method on the returned DeleteDeploymentSetCommon Request to send the API call to the service. +// the "output" return value is not valid until after DeleteDeploymentSetCommon Send returns without error. +// +// See DeleteDeploymentSetCommon for more information on using the DeleteDeploymentSetCommon +// API call, and error handling. +// +// // Example sending a request using the DeleteDeploymentSetCommonRequest method. +// req, resp := client.DeleteDeploymentSetCommonRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *ECS) DeleteDeploymentSetCommonRequest(input *map[string]interface{}) (req *request.Request, output *map[string]interface{}) { + op := &request.Operation{ + Name: opDeleteDeploymentSetCommon, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &map[string]interface{}{} + } + + output = &map[string]interface{}{} + req = c.newRequest(op, input, output) + + return +} + +// DeleteDeploymentSetCommon API operation for ECS. +// +// Returns volcengineerr.Error for service API and SDK errors. 
Use runtime type assertions
+// with volcengineerr.Error's Code and Message methods to get detailed information about
+// the error.
+//
+// See the VOLCENGINE API reference guide for ECS's
+// API operation DeleteDeploymentSetCommon for usage and error information.
+func (c *ECS) DeleteDeploymentSetCommon(input *map[string]interface{}) (*map[string]interface{}, error) {
+ req, out := c.DeleteDeploymentSetCommonRequest(input)
+ return out, req.Send()
+}
+
+// DeleteDeploymentSetCommonWithContext is the same as DeleteDeploymentSetCommon with the addition of
+// the ability to pass a context and additional request options.
+//
+// See DeleteDeploymentSetCommon for details on how to use this API operation.
+//
+// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur.
+// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/
+// for more information on using Contexts.
+func (c *ECS) DeleteDeploymentSetCommonWithContext(ctx volcengine.Context, input *map[string]interface{}, opts ...request.Option) (*map[string]interface{}, error) {
+ req, out := c.DeleteDeploymentSetCommonRequest(input)
+ req.SetContext(ctx)
+ req.ApplyOptions(opts...)
+ return out, req.Send()
+}
+
+const opDeleteDeploymentSet = "DeleteDeploymentSet"
+
+// DeleteDeploymentSetRequest generates a "volcengine/request.Request" representing the
+// client's request for the DeleteDeploymentSet operation. The "output" return
+// value will be populated with the DeleteDeploymentSet request's response once the request completes
+// successfully.
+//
+// Use "Send" method on the returned DeleteDeploymentSet Request to send the API call to the service.
+// the "output" return value is not valid until after DeleteDeploymentSet Send returns without error.
+//
+// See DeleteDeploymentSet for more information on using the DeleteDeploymentSet
+// API call, and error handling.
+//
+// // Example sending a request using the DeleteDeploymentSetRequest method.
+// req, resp := client.DeleteDeploymentSetRequest(params)
+//
+// err := req.Send()
+// if err == nil { // resp is now filled
+// fmt.Println(resp)
+// }
+func (c *ECS) DeleteDeploymentSetRequest(input *DeleteDeploymentSetInput) (req *request.Request, output *DeleteDeploymentSetOutput) {
+ op := &request.Operation{
+ Name: opDeleteDeploymentSet,
+ HTTPMethod: "GET",
+ HTTPPath: "/",
+ }
+
+ if input == nil {
+ input = &DeleteDeploymentSetInput{}
+ }
+
+ output = &DeleteDeploymentSetOutput{}
+ req = c.newRequest(op, input, output)
+
+ return
+}
+
+// DeleteDeploymentSet API operation for ECS.
+//
+// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions
+// with volcengineerr.Error's Code and Message methods to get detailed information about
+// the error.
+//
+// See the VOLCENGINE API reference guide for ECS's
+// API operation DeleteDeploymentSet for usage and error information.
+func (c *ECS) DeleteDeploymentSet(input *DeleteDeploymentSetInput) (*DeleteDeploymentSetOutput, error) {
+ req, out := c.DeleteDeploymentSetRequest(input)
+ return out, req.Send()
+}
+
+// DeleteDeploymentSetWithContext is the same as DeleteDeploymentSet with the addition of
+// the ability to pass a context and additional request options.
+//
+// See DeleteDeploymentSet for details on how to use this API operation.
+//
+// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur.
+// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/
+// for more information on using Contexts.
+func (c *ECS) DeleteDeploymentSetWithContext(ctx volcengine.Context, input *DeleteDeploymentSetInput, opts ...request.Option) (*DeleteDeploymentSetOutput, error) {
+ req, out := c.DeleteDeploymentSetRequest(input)
+ req.SetContext(ctx)
+ req.ApplyOptions(opts...)
+ return out, req.Send() +} + +type DeleteDeploymentSetInput struct { + _ struct{} `type:"structure"` + + DeploymentSetId *string `type:"string"` +} + +// String returns the string representation +func (s DeleteDeploymentSetInput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteDeploymentSetInput) GoString() string { + return s.String() +} + +// SetDeploymentSetId sets the DeploymentSetId field's value. +func (s *DeleteDeploymentSetInput) SetDeploymentSetId(v string) *DeleteDeploymentSetInput { + s.DeploymentSetId = &v + return s +} + +type DeleteDeploymentSetOutput struct { + _ struct{} `type:"structure"` + + Metadata *response.ResponseMetadata +} + +// String returns the string representation +func (s DeleteDeploymentSetOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteDeploymentSetOutput) GoString() string { + return s.String() +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_delete_images.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_delete_images.go new file mode 100644 index 000000000000..09b66e7ee65c --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_delete_images.go @@ -0,0 +1,246 @@ +// Code generated by volcengine with private/model/cli/gen-api/main.go. DO NOT EDIT. 
+ +package ecs + +import ( + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/response" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil" +) + +const opDeleteImagesCommon = "DeleteImages" + +// DeleteImagesCommonRequest generates a "volcengine/request.Request" representing the +// client's request for the DeleteImagesCommon operation. The "output" return +// value will be populated with the DeleteImagesCommon request's response once the request completes +// successfully. +// +// Use "Send" method on the returned DeleteImagesCommon Request to send the API call to the service. +// the "output" return value is not valid until after DeleteImagesCommon Send returns without error. +// +// See DeleteImagesCommon for more information on using the DeleteImagesCommon +// API call, and error handling. +// +// // Example sending a request using the DeleteImagesCommonRequest method. +// req, resp := client.DeleteImagesCommonRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *ECS) DeleteImagesCommonRequest(input *map[string]interface{}) (req *request.Request, output *map[string]interface{}) { + op := &request.Operation{ + Name: opDeleteImagesCommon, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &map[string]interface{}{} + } + + output = &map[string]interface{}{} + req = c.newRequest(op, input, output) + + return +} + +// DeleteImagesCommon API operation for ECS. +// +// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. 
+//
+// See the VOLCENGINE API reference guide for ECS's
+// API operation DeleteImagesCommon for usage and error information.
+func (c *ECS) DeleteImagesCommon(input *map[string]interface{}) (*map[string]interface{}, error) {
+ req, out := c.DeleteImagesCommonRequest(input)
+ return out, req.Send()
+}
+
+// DeleteImagesCommonWithContext is the same as DeleteImagesCommon with the addition of
+// the ability to pass a context and additional request options.
+//
+// See DeleteImagesCommon for details on how to use this API operation.
+//
+// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur.
+// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/
+// for more information on using Contexts.
+func (c *ECS) DeleteImagesCommonWithContext(ctx volcengine.Context, input *map[string]interface{}, opts ...request.Option) (*map[string]interface{}, error) {
+ req, out := c.DeleteImagesCommonRequest(input)
+ req.SetContext(ctx)
+ req.ApplyOptions(opts...)
+ return out, req.Send()
+}
+
+const opDeleteImages = "DeleteImages"
+
+// DeleteImagesRequest generates a "volcengine/request.Request" representing the
+// client's request for the DeleteImages operation. The "output" return
+// value will be populated with the DeleteImages request's response once the request completes
+// successfully.
+//
+// Use "Send" method on the returned DeleteImages Request to send the API call to the service.
+// the "output" return value is not valid until after DeleteImages Send returns without error.
+//
+// See DeleteImages for more information on using the DeleteImages
+// API call, and error handling.
+//
+// // Example sending a request using the DeleteImagesRequest method.
+// req, resp := client.DeleteImagesRequest(params)
+//
+// err := req.Send()
+// if err == nil { // resp is now filled
+// fmt.Println(resp)
+// }
+func (c *ECS) DeleteImagesRequest(input *DeleteImagesInput) (req *request.Request, output *DeleteImagesOutput) {
+ op := &request.Operation{
+ Name: opDeleteImages,
+ HTTPMethod: "GET",
+ HTTPPath: "/",
+ }
+
+ if input == nil {
+ input = &DeleteImagesInput{}
+ }
+
+ output = &DeleteImagesOutput{}
+ req = c.newRequest(op, input, output)
+
+ return
+}
+
+// DeleteImages API operation for ECS.
+//
+// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions
+// with volcengineerr.Error's Code and Message methods to get detailed information about
+// the error.
+//
+// See the VOLCENGINE API reference guide for ECS's
+// API operation DeleteImages for usage and error information.
+func (c *ECS) DeleteImages(input *DeleteImagesInput) (*DeleteImagesOutput, error) {
+ req, out := c.DeleteImagesRequest(input)
+ return out, req.Send()
+}
+
+// DeleteImagesWithContext is the same as DeleteImages with the addition of
+// the ability to pass a context and additional request options.
+//
+// See DeleteImages for details on how to use this API operation.
+//
+// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur.
+// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/
+// for more information on using Contexts.
+func (c *ECS) DeleteImagesWithContext(ctx volcengine.Context, input *DeleteImagesInput, opts ...request.Option) (*DeleteImagesOutput, error) {
+ req, out := c.DeleteImagesRequest(input)
+ req.SetContext(ctx)
+ req.ApplyOptions(opts...)
+ return out, req.Send() +} + +type DeleteImagesInput struct { + _ struct{} `type:"structure"` + + ImageIds []*string `type:"list"` +} + +// String returns the string representation +func (s DeleteImagesInput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteImagesInput) GoString() string { + return s.String() +} + +// SetImageIds sets the ImageIds field's value. +func (s *DeleteImagesInput) SetImageIds(v []*string) *DeleteImagesInput { + s.ImageIds = v + return s +} + +type DeleteImagesOutput struct { + _ struct{} `type:"structure"` + + Metadata *response.ResponseMetadata + + OperationDetails []*OperationDetailForDeleteImagesOutput `type:"list"` +} + +// String returns the string representation +func (s DeleteImagesOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteImagesOutput) GoString() string { + return s.String() +} + +// SetOperationDetails sets the OperationDetails field's value. +func (s *DeleteImagesOutput) SetOperationDetails(v []*OperationDetailForDeleteImagesOutput) *DeleteImagesOutput { + s.OperationDetails = v + return s +} + +type ErrorForDeleteImagesOutput struct { + _ struct{} `type:"structure"` + + Code *string `type:"string"` + + Message *string `type:"string"` +} + +// String returns the string representation +func (s ErrorForDeleteImagesOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s ErrorForDeleteImagesOutput) GoString() string { + return s.String() +} + +// SetCode sets the Code field's value. +func (s *ErrorForDeleteImagesOutput) SetCode(v string) *ErrorForDeleteImagesOutput { + s.Code = &v + return s +} + +// SetMessage sets the Message field's value. 
+func (s *ErrorForDeleteImagesOutput) SetMessage(v string) *ErrorForDeleteImagesOutput { + s.Message = &v + return s +} + +type OperationDetailForDeleteImagesOutput struct { + _ struct{} `type:"structure"` + + Error *ErrorForDeleteImagesOutput `type:"structure"` + + ImageId *string `type:"string"` +} + +// String returns the string representation +func (s OperationDetailForDeleteImagesOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s OperationDetailForDeleteImagesOutput) GoString() string { + return s.String() +} + +// SetError sets the Error field's value. +func (s *OperationDetailForDeleteImagesOutput) SetError(v *ErrorForDeleteImagesOutput) *OperationDetailForDeleteImagesOutput { + s.Error = v + return s +} + +// SetImageId sets the ImageId field's value. +func (s *OperationDetailForDeleteImagesOutput) SetImageId(v string) *OperationDetailForDeleteImagesOutput { + s.ImageId = &v + return s +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_delete_instance.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_delete_instance.go new file mode 100644 index 000000000000..bae1a86af1cd --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_delete_instance.go @@ -0,0 +1,178 @@ +// Code generated by volcengine with private/model/cli/gen-api/main.go. DO NOT EDIT. 
+ +package ecs + +import ( + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/response" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil" +) + +const opDeleteInstanceCommon = "DeleteInstance" + +// DeleteInstanceCommonRequest generates a "volcengine/request.Request" representing the +// client's request for the DeleteInstanceCommon operation. The "output" return +// value will be populated with the DeleteInstanceCommon request's response once the request completes +// successfully. +// +// Use "Send" method on the returned DeleteInstanceCommon Request to send the API call to the service. +// the "output" return value is not valid until after DeleteInstanceCommon Send returns without error. +// +// See DeleteInstanceCommon for more information on using the DeleteInstanceCommon +// API call, and error handling. +// +// // Example sending a request using the DeleteInstanceCommonRequest method. +// req, resp := client.DeleteInstanceCommonRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *ECS) DeleteInstanceCommonRequest(input *map[string]interface{}) (req *request.Request, output *map[string]interface{}) { + op := &request.Operation{ + Name: opDeleteInstanceCommon, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &map[string]interface{}{} + } + + output = &map[string]interface{}{} + req = c.newRequest(op, input, output) + + return +} + +// DeleteInstanceCommon API operation for ECS. +// +// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. 
+//
+// See the VOLCENGINE API reference guide for ECS's
+// API operation DeleteInstanceCommon for usage and error information.
+func (c *ECS) DeleteInstanceCommon(input *map[string]interface{}) (*map[string]interface{}, error) {
+ req, out := c.DeleteInstanceCommonRequest(input)
+ return out, req.Send()
+}
+
+// DeleteInstanceCommonWithContext is the same as DeleteInstanceCommon with the addition of
+// the ability to pass a context and additional request options.
+//
+// See DeleteInstanceCommon for details on how to use this API operation.
+//
+// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur.
+// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/
+// for more information on using Contexts.
+func (c *ECS) DeleteInstanceCommonWithContext(ctx volcengine.Context, input *map[string]interface{}, opts ...request.Option) (*map[string]interface{}, error) {
+ req, out := c.DeleteInstanceCommonRequest(input)
+ req.SetContext(ctx)
+ req.ApplyOptions(opts...)
+ return out, req.Send()
+}
+
+const opDeleteInstance = "DeleteInstance"
+
+// DeleteInstanceRequest generates a "volcengine/request.Request" representing the
+// client's request for the DeleteInstance operation. The "output" return
+// value will be populated with the DeleteInstance request's response once the request completes
+// successfully.
+//
+// Use "Send" method on the returned DeleteInstance Request to send the API call to the service.
+// the "output" return value is not valid until after DeleteInstance Send returns without error.
+//
+// See DeleteInstance for more information on using the DeleteInstance
+// API call, and error handling.
+//
+// // Example sending a request using the DeleteInstanceRequest method.
+// req, resp := client.DeleteInstanceRequest(params)
+//
+// err := req.Send()
+// if err == nil { // resp is now filled
+// fmt.Println(resp)
+// }
+func (c *ECS) DeleteInstanceRequest(input *DeleteInstanceInput) (req *request.Request, output *DeleteInstanceOutput) {
+ op := &request.Operation{
+ Name: opDeleteInstance,
+ HTTPMethod: "GET",
+ HTTPPath: "/",
+ }
+
+ if input == nil {
+ input = &DeleteInstanceInput{}
+ }
+
+ output = &DeleteInstanceOutput{}
+ req = c.newRequest(op, input, output)
+
+ return
+}
+
+// DeleteInstance API operation for ECS.
+//
+// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions
+// with volcengineerr.Error's Code and Message methods to get detailed information about
+// the error.
+//
+// See the VOLCENGINE API reference guide for ECS's
+// API operation DeleteInstance for usage and error information.
+func (c *ECS) DeleteInstance(input *DeleteInstanceInput) (*DeleteInstanceOutput, error) {
+ req, out := c.DeleteInstanceRequest(input)
+ return out, req.Send()
+}
+
+// DeleteInstanceWithContext is the same as DeleteInstance with the addition of
+// the ability to pass a context and additional request options.
+//
+// See DeleteInstance for details on how to use this API operation.
+//
+// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur.
+// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/
+// for more information on using Contexts.
+func (c *ECS) DeleteInstanceWithContext(ctx volcengine.Context, input *DeleteInstanceInput, opts ...request.Option) (*DeleteInstanceOutput, error) {
+ req, out := c.DeleteInstanceRequest(input)
+ req.SetContext(ctx)
+ req.ApplyOptions(opts...)
+ return out, req.Send() +} + +type DeleteInstanceInput struct { + _ struct{} `type:"structure"` + + InstanceId *string `type:"string"` +} + +// String returns the string representation +func (s DeleteInstanceInput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteInstanceInput) GoString() string { + return s.String() +} + +// SetInstanceId sets the InstanceId field's value. +func (s *DeleteInstanceInput) SetInstanceId(v string) *DeleteInstanceInput { + s.InstanceId = &v + return s +} + +type DeleteInstanceOutput struct { + _ struct{} `type:"structure"` + + Metadata *response.ResponseMetadata +} + +// String returns the string representation +func (s DeleteInstanceOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteInstanceOutput) GoString() string { + return s.String() +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_delete_instances.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_delete_instances.go new file mode 100644 index 000000000000..d73843ed16c3 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_delete_instances.go @@ -0,0 +1,246 @@ +// Code generated by volcengine with private/model/cli/gen-api/main.go. DO NOT EDIT. 
+ +package ecs + +import ( + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/response" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil" +) + +const opDeleteInstancesCommon = "DeleteInstances" + +// DeleteInstancesCommonRequest generates a "volcengine/request.Request" representing the +// client's request for the DeleteInstancesCommon operation. The "output" return +// value will be populated with the DeleteInstancesCommon request's response once the request completes +// successfully. +// +// Use "Send" method on the returned DeleteInstancesCommon Request to send the API call to the service. +// the "output" return value is not valid until after DeleteInstancesCommon Send returns without error. +// +// See DeleteInstancesCommon for more information on using the DeleteInstancesCommon +// API call, and error handling. +// +// // Example sending a request using the DeleteInstancesCommonRequest method. +// req, resp := client.DeleteInstancesCommonRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *ECS) DeleteInstancesCommonRequest(input *map[string]interface{}) (req *request.Request, output *map[string]interface{}) { + op := &request.Operation{ + Name: opDeleteInstancesCommon, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &map[string]interface{}{} + } + + output = &map[string]interface{}{} + req = c.newRequest(op, input, output) + + return +} + +// DeleteInstancesCommon API operation for ECS. +// +// Returns volcengineerr.Error for service API and SDK errors. 
Use runtime type assertions
+// with volcengineerr.Error's Code and Message methods to get detailed information about
+// the error.
+//
+// See the VOLCENGINE API reference guide for ECS's
+// API operation DeleteInstancesCommon for usage and error information.
+func (c *ECS) DeleteInstancesCommon(input *map[string]interface{}) (*map[string]interface{}, error) {
+ req, out := c.DeleteInstancesCommonRequest(input)
+ return out, req.Send()
+}
+
+// DeleteInstancesCommonWithContext is the same as DeleteInstancesCommon with the addition of
+// the ability to pass a context and additional request options.
+//
+// See DeleteInstancesCommon for details on how to use this API operation.
+//
+// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur.
+// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/
+// for more information on using Contexts.
+func (c *ECS) DeleteInstancesCommonWithContext(ctx volcengine.Context, input *map[string]interface{}, opts ...request.Option) (*map[string]interface{}, error) {
+ req, out := c.DeleteInstancesCommonRequest(input)
+ req.SetContext(ctx)
+ req.ApplyOptions(opts...)
+ return out, req.Send()
+}
+
+const opDeleteInstances = "DeleteInstances"
+
+// DeleteInstancesRequest generates a "volcengine/request.Request" representing the
+// client's request for the DeleteInstances operation. The "output" return
+// value will be populated with the DeleteInstances request's response once the request completes
+// successfully.
+//
+// Use "Send" method on the returned DeleteInstances Request to send the API call to the service.
+// the "output" return value is not valid until after DeleteInstances Send returns without error.
+//
+// See DeleteInstances for more information on using the DeleteInstances
+// API call, and error handling.
+//
+// // Example sending a request using the DeleteInstancesRequest method.
+// req, resp := client.DeleteInstancesRequest(params)
+//
+// err := req.Send()
+// if err == nil { // resp is now filled
+// fmt.Println(resp)
+// }
+func (c *ECS) DeleteInstancesRequest(input *DeleteInstancesInput) (req *request.Request, output *DeleteInstancesOutput) {
+ op := &request.Operation{
+ Name: opDeleteInstances,
+ HTTPMethod: "GET",
+ HTTPPath: "/",
+ }
+
+ if input == nil {
+ input = &DeleteInstancesInput{}
+ }
+
+ output = &DeleteInstancesOutput{}
+ req = c.newRequest(op, input, output)
+
+ return
+}
+
+// DeleteInstances API operation for ECS.
+//
+// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions
+// with volcengineerr.Error's Code and Message methods to get detailed information about
+// the error.
+//
+// See the VOLCENGINE API reference guide for ECS's
+// API operation DeleteInstances for usage and error information.
+func (c *ECS) DeleteInstances(input *DeleteInstancesInput) (*DeleteInstancesOutput, error) {
+ req, out := c.DeleteInstancesRequest(input)
+ return out, req.Send()
+}
+
+// DeleteInstancesWithContext is the same as DeleteInstances with the addition of
+// the ability to pass a context and additional request options.
+//
+// See DeleteInstances for details on how to use this API operation.
+//
+// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur.
+// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/
+// for more information on using Contexts.
+func (c *ECS) DeleteInstancesWithContext(ctx volcengine.Context, input *DeleteInstancesInput, opts ...request.Option) (*DeleteInstancesOutput, error) {
+ req, out := c.DeleteInstancesRequest(input)
+ req.SetContext(ctx)
+ req.ApplyOptions(opts...)
+ return out, req.Send() +} + +type DeleteInstancesInput struct { + _ struct{} `type:"structure"` + + InstanceIds []*string `type:"list"` +} + +// String returns the string representation +func (s DeleteInstancesInput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteInstancesInput) GoString() string { + return s.String() +} + +// SetInstanceIds sets the InstanceIds field's value. +func (s *DeleteInstancesInput) SetInstanceIds(v []*string) *DeleteInstancesInput { + s.InstanceIds = v + return s +} + +type DeleteInstancesOutput struct { + _ struct{} `type:"structure"` + + Metadata *response.ResponseMetadata + + OperationDetails []*OperationDetailForDeleteInstancesOutput `type:"list"` +} + +// String returns the string representation +func (s DeleteInstancesOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteInstancesOutput) GoString() string { + return s.String() +} + +// SetOperationDetails sets the OperationDetails field's value. +func (s *DeleteInstancesOutput) SetOperationDetails(v []*OperationDetailForDeleteInstancesOutput) *DeleteInstancesOutput { + s.OperationDetails = v + return s +} + +type ErrorForDeleteInstancesOutput struct { + _ struct{} `type:"structure"` + + Code *string `type:"string"` + + Message *string `type:"string"` +} + +// String returns the string representation +func (s ErrorForDeleteInstancesOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s ErrorForDeleteInstancesOutput) GoString() string { + return s.String() +} + +// SetCode sets the Code field's value. +func (s *ErrorForDeleteInstancesOutput) SetCode(v string) *ErrorForDeleteInstancesOutput { + s.Code = &v + return s +} + +// SetMessage sets the Message field's value. 
+func (s *ErrorForDeleteInstancesOutput) SetMessage(v string) *ErrorForDeleteInstancesOutput { + s.Message = &v + return s +} + +type OperationDetailForDeleteInstancesOutput struct { + _ struct{} `type:"structure"` + + Error *ErrorForDeleteInstancesOutput `type:"structure"` + + InstanceId *string `type:"string"` +} + +// String returns the string representation +func (s OperationDetailForDeleteInstancesOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s OperationDetailForDeleteInstancesOutput) GoString() string { + return s.String() +} + +// SetError sets the Error field's value. +func (s *OperationDetailForDeleteInstancesOutput) SetError(v *ErrorForDeleteInstancesOutput) *OperationDetailForDeleteInstancesOutput { + s.Error = v + return s +} + +// SetInstanceId sets the InstanceId field's value. +func (s *OperationDetailForDeleteInstancesOutput) SetInstanceId(v string) *OperationDetailForDeleteInstancesOutput { + s.InstanceId = &v + return s +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_delete_key_pairs.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_delete_key_pairs.go new file mode 100644 index 000000000000..473d915dfe03 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_delete_key_pairs.go @@ -0,0 +1,246 @@ +// Code generated by volcengine with private/model/cli/gen-api/main.go. DO NOT EDIT. 
+ +package ecs + +import ( + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/response" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil" +) + +const opDeleteKeyPairsCommon = "DeleteKeyPairs" + +// DeleteKeyPairsCommonRequest generates a "volcengine/request.Request" representing the +// client's request for the DeleteKeyPairsCommon operation. The "output" return +// value will be populated with the DeleteKeyPairsCommon request's response once the request completes +// successfully. +// +// Use "Send" method on the returned DeleteKeyPairsCommon Request to send the API call to the service. +// the "output" return value is not valid until after DeleteKeyPairsCommon Send returns without error. +// +// See DeleteKeyPairsCommon for more information on using the DeleteKeyPairsCommon +// API call, and error handling. +// +// // Example sending a request using the DeleteKeyPairsCommonRequest method. +// req, resp := client.DeleteKeyPairsCommonRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *ECS) DeleteKeyPairsCommonRequest(input *map[string]interface{}) (req *request.Request, output *map[string]interface{}) { + op := &request.Operation{ + Name: opDeleteKeyPairsCommon, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &map[string]interface{}{} + } + + output = &map[string]interface{}{} + req = c.newRequest(op, input, output) + + return +} + +// DeleteKeyPairsCommon API operation for ECS. +// +// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. 
+//
+// See the VOLCENGINE API reference guide for ECS's
+// API operation DeleteKeyPairsCommon for usage and error information.
+func (c *ECS) DeleteKeyPairsCommon(input *map[string]interface{}) (*map[string]interface{}, error) {
+ req, out := c.DeleteKeyPairsCommonRequest(input)
+ return out, req.Send()
+}
+
+// DeleteKeyPairsCommonWithContext is the same as DeleteKeyPairsCommon with the addition of
+// the ability to pass a context and additional request options.
+//
+// See DeleteKeyPairsCommon for details on how to use this API operation.
+//
+// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur.
+// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/
+// for more information on using Contexts.
+func (c *ECS) DeleteKeyPairsCommonWithContext(ctx volcengine.Context, input *map[string]interface{}, opts ...request.Option) (*map[string]interface{}, error) {
+ req, out := c.DeleteKeyPairsCommonRequest(input)
+ req.SetContext(ctx)
+ req.ApplyOptions(opts...)
+ return out, req.Send()
+}
+
+const opDeleteKeyPairs = "DeleteKeyPairs"
+
+// DeleteKeyPairsRequest generates a "volcengine/request.Request" representing the
+// client's request for the DeleteKeyPairs operation. The "output" return
+// value will be populated with the DeleteKeyPairs request's response once the request completes
+// successfully.
+//
+// Use "Send" method on the returned DeleteKeyPairs Request to send the API call to the service.
+// the "output" return value is not valid until after DeleteKeyPairs Send returns without error.
+//
+// See DeleteKeyPairs for more information on using the DeleteKeyPairs
+// API call, and error handling.
+//
+// // Example sending a request using the DeleteKeyPairsRequest method.
+// req, resp := client.DeleteKeyPairsRequest(params)
+//
+// err := req.Send()
+// if err == nil { // resp is now filled
+// fmt.Println(resp)
+// }
+func (c *ECS) DeleteKeyPairsRequest(input *DeleteKeyPairsInput) (req *request.Request, output *DeleteKeyPairsOutput) {
+ op := &request.Operation{
+ Name: opDeleteKeyPairs,
+ HTTPMethod: "GET",
+ HTTPPath: "/",
+ }
+
+ if input == nil {
+ input = &DeleteKeyPairsInput{}
+ }
+
+ output = &DeleteKeyPairsOutput{}
+ req = c.newRequest(op, input, output)
+
+ return
+}
+
+// DeleteKeyPairs API operation for ECS.
+//
+// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions
+// with volcengineerr.Error's Code and Message methods to get detailed information about
+// the error.
+//
+// See the VOLCENGINE API reference guide for ECS's
+// API operation DeleteKeyPairs for usage and error information.
+func (c *ECS) DeleteKeyPairs(input *DeleteKeyPairsInput) (*DeleteKeyPairsOutput, error) {
+ req, out := c.DeleteKeyPairsRequest(input)
+ return out, req.Send()
+}
+
+// DeleteKeyPairsWithContext is the same as DeleteKeyPairs with the addition of
+// the ability to pass a context and additional request options.
+//
+// See DeleteKeyPairs for details on how to use this API operation.
+//
+// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur.
+// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/
+// for more information on using Contexts.
+func (c *ECS) DeleteKeyPairsWithContext(ctx volcengine.Context, input *DeleteKeyPairsInput, opts ...request.Option) (*DeleteKeyPairsOutput, error) {
+ req, out := c.DeleteKeyPairsRequest(input)
+ req.SetContext(ctx)
+ req.ApplyOptions(opts...)
+ return out, req.Send() +} + +type DeleteKeyPairsInput struct { + _ struct{} `type:"structure"` + + KeyPairNames []*string `type:"list"` +} + +// String returns the string representation +func (s DeleteKeyPairsInput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteKeyPairsInput) GoString() string { + return s.String() +} + +// SetKeyPairNames sets the KeyPairNames field's value. +func (s *DeleteKeyPairsInput) SetKeyPairNames(v []*string) *DeleteKeyPairsInput { + s.KeyPairNames = v + return s +} + +type DeleteKeyPairsOutput struct { + _ struct{} `type:"structure"` + + Metadata *response.ResponseMetadata + + OperationDetails []*OperationDetailForDeleteKeyPairsOutput `type:"list"` +} + +// String returns the string representation +func (s DeleteKeyPairsOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteKeyPairsOutput) GoString() string { + return s.String() +} + +// SetOperationDetails sets the OperationDetails field's value. +func (s *DeleteKeyPairsOutput) SetOperationDetails(v []*OperationDetailForDeleteKeyPairsOutput) *DeleteKeyPairsOutput { + s.OperationDetails = v + return s +} + +type ErrorForDeleteKeyPairsOutput struct { + _ struct{} `type:"structure"` + + Code *string `type:"string"` + + Message *string `type:"string"` +} + +// String returns the string representation +func (s ErrorForDeleteKeyPairsOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s ErrorForDeleteKeyPairsOutput) GoString() string { + return s.String() +} + +// SetCode sets the Code field's value. +func (s *ErrorForDeleteKeyPairsOutput) SetCode(v string) *ErrorForDeleteKeyPairsOutput { + s.Code = &v + return s +} + +// SetMessage sets the Message field's value. 
+func (s *ErrorForDeleteKeyPairsOutput) SetMessage(v string) *ErrorForDeleteKeyPairsOutput { + s.Message = &v + return s +} + +type OperationDetailForDeleteKeyPairsOutput struct { + _ struct{} `type:"structure"` + + Error *ErrorForDeleteKeyPairsOutput `type:"structure"` + + KeyPairName *string `type:"string"` +} + +// String returns the string representation +func (s OperationDetailForDeleteKeyPairsOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s OperationDetailForDeleteKeyPairsOutput) GoString() string { + return s.String() +} + +// SetError sets the Error field's value. +func (s *OperationDetailForDeleteKeyPairsOutput) SetError(v *ErrorForDeleteKeyPairsOutput) *OperationDetailForDeleteKeyPairsOutput { + s.Error = v + return s +} + +// SetKeyPairName sets the KeyPairName field's value. +func (s *OperationDetailForDeleteKeyPairsOutput) SetKeyPairName(v string) *OperationDetailForDeleteKeyPairsOutput { + s.KeyPairName = &v + return s +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_delete_tags.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_delete_tags.go new file mode 100644 index 000000000000..25b79d8babbd --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_delete_tags.go @@ -0,0 +1,262 @@ +// Code generated by volcengine with private/model/cli/gen-api/main.go. DO NOT EDIT. 
+ +package ecs + +import ( + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/response" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil" +) + +const opDeleteTagsCommon = "DeleteTags" + +// DeleteTagsCommonRequest generates a "volcengine/request.Request" representing the +// client's request for the DeleteTagsCommon operation. The "output" return +// value will be populated with the DeleteTagsCommon request's response once the request completes +// successfully. +// +// Use "Send" method on the returned DeleteTagsCommon Request to send the API call to the service. +// the "output" return value is not valid until after DeleteTagsCommon Send returns without error. +// +// See DeleteTagsCommon for more information on using the DeleteTagsCommon +// API call, and error handling. +// +// // Example sending a request using the DeleteTagsCommonRequest method. +// req, resp := client.DeleteTagsCommonRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *ECS) DeleteTagsCommonRequest(input *map[string]interface{}) (req *request.Request, output *map[string]interface{}) { + op := &request.Operation{ + Name: opDeleteTagsCommon, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &map[string]interface{}{} + } + + output = &map[string]interface{}{} + req = c.newRequest(op, input, output) + + return +} + +// DeleteTagsCommon API operation for ECS. +// +// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. 
+//
+// See the VOLCENGINE API reference guide for ECS's
+// API operation DeleteTagsCommon for usage and error information.
+func (c *ECS) DeleteTagsCommon(input *map[string]interface{}) (*map[string]interface{}, error) {
+ req, out := c.DeleteTagsCommonRequest(input)
+ return out, req.Send()
+}
+
+// DeleteTagsCommonWithContext is the same as DeleteTagsCommon with the addition of
+// the ability to pass a context and additional request options.
+//
+// See DeleteTagsCommon for details on how to use this API operation.
+//
+// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur.
+// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/
+// for more information on using Contexts.
+func (c *ECS) DeleteTagsCommonWithContext(ctx volcengine.Context, input *map[string]interface{}, opts ...request.Option) (*map[string]interface{}, error) {
+ req, out := c.DeleteTagsCommonRequest(input)
+ req.SetContext(ctx)
+ req.ApplyOptions(opts...)
+ return out, req.Send()
+}
+
+const opDeleteTags = "DeleteTags"
+
+// DeleteTagsRequest generates a "volcengine/request.Request" representing the
+// client's request for the DeleteTags operation. The "output" return
+// value will be populated with the DeleteTags request's response once the request completes
+// successfully.
+//
+// Use "Send" method on the returned DeleteTags Request to send the API call to the service.
+// the "output" return value is not valid until after DeleteTags Send returns without error.
+//
+// See DeleteTags for more information on using the DeleteTags
+// API call, and error handling.
+//
+// // Example sending a request using the DeleteTagsRequest method.
+// req, resp := client.DeleteTagsRequest(params)
+//
+// err := req.Send()
+// if err == nil { // resp is now filled
+// fmt.Println(resp)
+// }
+func (c *ECS) DeleteTagsRequest(input *DeleteTagsInput) (req *request.Request, output *DeleteTagsOutput) {
+ op := &request.Operation{
+ Name: opDeleteTags,
+ HTTPMethod: "GET",
+ HTTPPath: "/",
+ }
+
+ if input == nil {
+ input = &DeleteTagsInput{}
+ }
+
+ output = &DeleteTagsOutput{}
+ req = c.newRequest(op, input, output)
+
+ return
+}
+
+// DeleteTags API operation for ECS.
+//
+// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions
+// with volcengineerr.Error's Code and Message methods to get detailed information about
+// the error.
+//
+// See the VOLCENGINE API reference guide for ECS's
+// API operation DeleteTags for usage and error information.
+func (c *ECS) DeleteTags(input *DeleteTagsInput) (*DeleteTagsOutput, error) {
+ req, out := c.DeleteTagsRequest(input)
+ return out, req.Send()
+}
+
+// DeleteTagsWithContext is the same as DeleteTags with the addition of
+// the ability to pass a context and additional request options.
+//
+// See DeleteTags for details on how to use this API operation.
+//
+// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur.
+// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/
+// for more information on using Contexts.
+func (c *ECS) DeleteTagsWithContext(ctx volcengine.Context, input *DeleteTagsInput, opts ...request.Option) (*DeleteTagsOutput, error) {
+ req, out := c.DeleteTagsRequest(input)
+ req.SetContext(ctx)
+ req.ApplyOptions(opts...)
+ return out, req.Send() +} + +type DeleteTagsInput struct { + _ struct{} `type:"structure"` + + ResourceIds []*string `type:"list"` + + ResourceType *string `type:"string"` + + TagKeys []*string `type:"list"` +} + +// String returns the string representation +func (s DeleteTagsInput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteTagsInput) GoString() string { + return s.String() +} + +// SetResourceIds sets the ResourceIds field's value. +func (s *DeleteTagsInput) SetResourceIds(v []*string) *DeleteTagsInput { + s.ResourceIds = v + return s +} + +// SetResourceType sets the ResourceType field's value. +func (s *DeleteTagsInput) SetResourceType(v string) *DeleteTagsInput { + s.ResourceType = &v + return s +} + +// SetTagKeys sets the TagKeys field's value. +func (s *DeleteTagsInput) SetTagKeys(v []*string) *DeleteTagsInput { + s.TagKeys = v + return s +} + +type DeleteTagsOutput struct { + _ struct{} `type:"structure"` + + Metadata *response.ResponseMetadata + + OperationDetails []*OperationDetailForDeleteTagsOutput `type:"list"` +} + +// String returns the string representation +func (s DeleteTagsOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteTagsOutput) GoString() string { + return s.String() +} + +// SetOperationDetails sets the OperationDetails field's value. 
+func (s *DeleteTagsOutput) SetOperationDetails(v []*OperationDetailForDeleteTagsOutput) *DeleteTagsOutput { + s.OperationDetails = v + return s +} + +type ErrorForDeleteTagsOutput struct { + _ struct{} `type:"structure"` + + Code *string `type:"string"` + + Message *string `type:"string"` +} + +// String returns the string representation +func (s ErrorForDeleteTagsOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s ErrorForDeleteTagsOutput) GoString() string { + return s.String() +} + +// SetCode sets the Code field's value. +func (s *ErrorForDeleteTagsOutput) SetCode(v string) *ErrorForDeleteTagsOutput { + s.Code = &v + return s +} + +// SetMessage sets the Message field's value. +func (s *ErrorForDeleteTagsOutput) SetMessage(v string) *ErrorForDeleteTagsOutput { + s.Message = &v + return s +} + +type OperationDetailForDeleteTagsOutput struct { + _ struct{} `type:"structure"` + + Error *ErrorForDeleteTagsOutput `type:"structure"` + + ResourceId *string `type:"string"` +} + +// String returns the string representation +func (s OperationDetailForDeleteTagsOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s OperationDetailForDeleteTagsOutput) GoString() string { + return s.String() +} + +// SetError sets the Error field's value. +func (s *OperationDetailForDeleteTagsOutput) SetError(v *ErrorForDeleteTagsOutput) *OperationDetailForDeleteTagsOutput { + s.Error = v + return s +} + +// SetResourceId sets the ResourceId field's value. 
+func (s *OperationDetailForDeleteTagsOutput) SetResourceId(v string) *OperationDetailForDeleteTagsOutput { + s.ResourceId = &v + return s +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_describe_available_resource.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_describe_available_resource.go new file mode 100644 index 000000000000..2fbcb4925315 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_describe_available_resource.go @@ -0,0 +1,332 @@ +// Code generated by volcengine with private/model/cli/gen-api/main.go. DO NOT EDIT. + +package ecs + +import ( + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/response" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil" +) + +const opDescribeAvailableResourceCommon = "DescribeAvailableResource" + +// DescribeAvailableResourceCommonRequest generates a "volcengine/request.Request" representing the +// client's request for the DescribeAvailableResourceCommon operation. The "output" return +// value will be populated with the DescribeAvailableResourceCommon request's response once the request completes +// successfully. +// +// Use "Send" method on the returned DescribeAvailableResourceCommon Request to send the API call to the service. +// the "output" return value is not valid until after DescribeAvailableResourceCommon Send returns without error. +// +// See DescribeAvailableResourceCommon for more information on using the DescribeAvailableResourceCommon +// API call, and error handling. +// +// // Example sending a request using the DescribeAvailableResourceCommonRequest method. 
+// req, resp := client.DescribeAvailableResourceCommonRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *ECS) DescribeAvailableResourceCommonRequest(input *map[string]interface{}) (req *request.Request, output *map[string]interface{}) { + op := &request.Operation{ + Name: opDescribeAvailableResourceCommon, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &map[string]interface{}{} + } + + output = &map[string]interface{}{} + req = c.newRequest(op, input, output) + + return +} + +// DescribeAvailableResourceCommon API operation for ECS. +// +// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the VOLCENGINE API reference guide for ECS's +// API operation DescribeAvailableResourceCommon for usage and error information. +func (c *ECS) DescribeAvailableResourceCommon(input *map[string]interface{}) (*map[string]interface{}, error) { + req, out := c.DescribeAvailableResourceCommonRequest(input) + return out, req.Send() +} + +// DescribeAvailableResourceCommonWithContext is the same as DescribeAvailableResourceCommon with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeAvailableResourceCommon for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur. +// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
+func (c *ECS) DescribeAvailableResourceCommonWithContext(ctx volcengine.Context, input *map[string]interface{}, opts ...request.Option) (*map[string]interface{}, error) {
+ req, out := c.DescribeAvailableResourceCommonRequest(input)
+ req.SetContext(ctx)
+ req.ApplyOptions(opts...)
+ return out, req.Send()
+}
+
+const opDescribeAvailableResource = "DescribeAvailableResource"
+
+// DescribeAvailableResourceRequest generates a "volcengine/request.Request" representing the
+// client's request for the DescribeAvailableResource operation. The "output" return
+// value will be populated with the DescribeAvailableResource request's response once the request completes
+// successfully.
+//
+// Use "Send" method on the returned DescribeAvailableResource Request to send the API call to the service.
+// the "output" return value is not valid until after DescribeAvailableResource Send returns without error.
+//
+// See DescribeAvailableResource for more information on using the DescribeAvailableResource
+// API call, and error handling.
+//
+// // Example sending a request using the DescribeAvailableResourceRequest method.
+// req, resp := client.DescribeAvailableResourceRequest(params)
+//
+// err := req.Send()
+// if err == nil { // resp is now filled
+// fmt.Println(resp)
+// }
+func (c *ECS) DescribeAvailableResourceRequest(input *DescribeAvailableResourceInput) (req *request.Request, output *DescribeAvailableResourceOutput) {
+ op := &request.Operation{
+ Name: opDescribeAvailableResource,
+ HTTPMethod: "GET",
+ HTTPPath: "/",
+ }
+
+ if input == nil {
+ input = &DescribeAvailableResourceInput{}
+ }
+
+ output = &DescribeAvailableResourceOutput{}
+ req = c.newRequest(op, input, output)
+
+ return
+}
+
+// DescribeAvailableResource API operation for ECS.
+//
+// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions
+// with volcengineerr.Error's Code and Message methods to get detailed information about
+// the error.
+//
+// See the VOLCENGINE API reference guide for ECS's
+// API operation DescribeAvailableResource for usage and error information.
+func (c *ECS) DescribeAvailableResource(input *DescribeAvailableResourceInput) (*DescribeAvailableResourceOutput, error) {
+ req, out := c.DescribeAvailableResourceRequest(input)
+ return out, req.Send()
+}
+
+// DescribeAvailableResourceWithContext is the same as DescribeAvailableResource with the addition of
+// the ability to pass a context and additional request options.
+//
+// See DescribeAvailableResource for details on how to use this API operation.
+//
+// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur.
+// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/
+// for more information on using Contexts.
+func (c *ECS) DescribeAvailableResourceWithContext(ctx volcengine.Context, input *DescribeAvailableResourceInput, opts ...request.Option) (*DescribeAvailableResourceOutput, error) {
+ req, out := c.DescribeAvailableResourceRequest(input)
+ req.SetContext(ctx)
+ req.ApplyOptions(opts...)
+ return out, req.Send()
+}
+
+type AvailableResourceForDescribeAvailableResourceOutput struct {
+ _ struct{} `type:"structure"`
+
+ SupportedResources []*SupportedResourceForDescribeAvailableResourceOutput `type:"list"`
+
+ Type *string `type:"string"`
+}
+
+// String returns the string representation
+func (s AvailableResourceForDescribeAvailableResourceOutput) String() string {
+ return volcengineutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s AvailableResourceForDescribeAvailableResourceOutput) GoString() string {
+ return s.String()
+}
+
+// SetSupportedResources sets the SupportedResources field's value.
+func (s *AvailableResourceForDescribeAvailableResourceOutput) SetSupportedResources(v []*SupportedResourceForDescribeAvailableResourceOutput) *AvailableResourceForDescribeAvailableResourceOutput { + s.SupportedResources = v + return s +} + +// SetType sets the Type field's value. +func (s *AvailableResourceForDescribeAvailableResourceOutput) SetType(v string) *AvailableResourceForDescribeAvailableResourceOutput { + s.Type = &v + return s +} + +type AvailableZoneForDescribeAvailableResourceOutput struct { + _ struct{} `type:"structure"` + + AvailableResources []*AvailableResourceForDescribeAvailableResourceOutput `type:"list"` + + RegionId *string `type:"string"` + + Status *string `type:"string"` + + ZoneId *string `type:"string"` +} + +// String returns the string representation +func (s AvailableZoneForDescribeAvailableResourceOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s AvailableZoneForDescribeAvailableResourceOutput) GoString() string { + return s.String() +} + +// SetAvailableResources sets the AvailableResources field's value. +func (s *AvailableZoneForDescribeAvailableResourceOutput) SetAvailableResources(v []*AvailableResourceForDescribeAvailableResourceOutput) *AvailableZoneForDescribeAvailableResourceOutput { + s.AvailableResources = v + return s +} + +// SetRegionId sets the RegionId field's value. +func (s *AvailableZoneForDescribeAvailableResourceOutput) SetRegionId(v string) *AvailableZoneForDescribeAvailableResourceOutput { + s.RegionId = &v + return s +} + +// SetStatus sets the Status field's value. +func (s *AvailableZoneForDescribeAvailableResourceOutput) SetStatus(v string) *AvailableZoneForDescribeAvailableResourceOutput { + s.Status = &v + return s +} + +// SetZoneId sets the ZoneId field's value. 
+func (s *AvailableZoneForDescribeAvailableResourceOutput) SetZoneId(v string) *AvailableZoneForDescribeAvailableResourceOutput { + s.ZoneId = &v + return s +} + +type DescribeAvailableResourceInput struct { + _ struct{} `type:"structure"` + + DestinationResource *string `type:"string"` + + InstanceChargeType *string `type:"string"` + + InstanceType *string `type:"string"` + + InstanceTypeId *string `type:"string"` + + SpotStrategy *string `type:"string"` + + ZoneId *string `type:"string"` +} + +// String returns the string representation +func (s DescribeAvailableResourceInput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeAvailableResourceInput) GoString() string { + return s.String() +} + +// SetDestinationResource sets the DestinationResource field's value. +func (s *DescribeAvailableResourceInput) SetDestinationResource(v string) *DescribeAvailableResourceInput { + s.DestinationResource = &v + return s +} + +// SetInstanceChargeType sets the InstanceChargeType field's value. +func (s *DescribeAvailableResourceInput) SetInstanceChargeType(v string) *DescribeAvailableResourceInput { + s.InstanceChargeType = &v + return s +} + +// SetInstanceType sets the InstanceType field's value. +func (s *DescribeAvailableResourceInput) SetInstanceType(v string) *DescribeAvailableResourceInput { + s.InstanceType = &v + return s +} + +// SetInstanceTypeId sets the InstanceTypeId field's value. +func (s *DescribeAvailableResourceInput) SetInstanceTypeId(v string) *DescribeAvailableResourceInput { + s.InstanceTypeId = &v + return s +} + +// SetSpotStrategy sets the SpotStrategy field's value. +func (s *DescribeAvailableResourceInput) SetSpotStrategy(v string) *DescribeAvailableResourceInput { + s.SpotStrategy = &v + return s +} + +// SetZoneId sets the ZoneId field's value. 
+func (s *DescribeAvailableResourceInput) SetZoneId(v string) *DescribeAvailableResourceInput { + s.ZoneId = &v + return s +} + +type DescribeAvailableResourceOutput struct { + _ struct{} `type:"structure"` + + Metadata *response.ResponseMetadata + + AvailableZones []*AvailableZoneForDescribeAvailableResourceOutput `type:"list"` +} + +// String returns the string representation +func (s DescribeAvailableResourceOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeAvailableResourceOutput) GoString() string { + return s.String() +} + +// SetAvailableZones sets the AvailableZones field's value. +func (s *DescribeAvailableResourceOutput) SetAvailableZones(v []*AvailableZoneForDescribeAvailableResourceOutput) *DescribeAvailableResourceOutput { + s.AvailableZones = v + return s +} + +type SupportedResourceForDescribeAvailableResourceOutput struct { + _ struct{} `type:"structure"` + + Status *string `type:"string"` + + Value *string `type:"string"` +} + +// String returns the string representation +func (s SupportedResourceForDescribeAvailableResourceOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s SupportedResourceForDescribeAvailableResourceOutput) GoString() string { + return s.String() +} + +// SetStatus sets the Status field's value. +func (s *SupportedResourceForDescribeAvailableResourceOutput) SetStatus(v string) *SupportedResourceForDescribeAvailableResourceOutput { + s.Status = &v + return s +} + +// SetValue sets the Value field's value. 
+func (s *SupportedResourceForDescribeAvailableResourceOutput) SetValue(v string) *SupportedResourceForDescribeAvailableResourceOutput { + s.Value = &v + return s +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_describe_deployment_set_supported_instance_type_family.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_describe_deployment_set_supported_instance_type_family.go new file mode 100644 index 000000000000..86a6f49033a1 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_describe_deployment_set_supported_instance_type_family.go @@ -0,0 +1,186 @@ +// Code generated by volcengine with private/model/cli/gen-api/main.go. DO NOT EDIT. + +package ecs + +import ( + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/response" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil" +) + +const opDescribeDeploymentSetSupportedInstanceTypeFamilyCommon = "DescribeDeploymentSetSupportedInstanceTypeFamily" + +// DescribeDeploymentSetSupportedInstanceTypeFamilyCommonRequest generates a "volcengine/request.Request" representing the +// client's request for the DescribeDeploymentSetSupportedInstanceTypeFamilyCommon operation. The "output" return +// value will be populated with the DescribeDeploymentSetSupportedInstanceTypeFamilyCommon request's response once the request completes +// successfully. +// +// Use "Send" method on the returned DescribeDeploymentSetSupportedInstanceTypeFamilyCommon Request to send the API call to the service. 
+// the "output" return value is not valid until after DescribeDeploymentSetSupportedInstanceTypeFamilyCommon Send returns without error. +// +// See DescribeDeploymentSetSupportedInstanceTypeFamilyCommon for more information on using the DescribeDeploymentSetSupportedInstanceTypeFamilyCommon +// API call, and error handling. +// +// // Example sending a request using the DescribeDeploymentSetSupportedInstanceTypeFamilyCommonRequest method. +// req, resp := client.DescribeDeploymentSetSupportedInstanceTypeFamilyCommonRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *ECS) DescribeDeploymentSetSupportedInstanceTypeFamilyCommonRequest(input *map[string]interface{}) (req *request.Request, output *map[string]interface{}) { + op := &request.Operation{ + Name: opDescribeDeploymentSetSupportedInstanceTypeFamilyCommon, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &map[string]interface{}{} + } + + output = &map[string]interface{}{} + req = c.newRequest(op, input, output) + + return +} + +// DescribeDeploymentSetSupportedInstanceTypeFamilyCommon API operation for ECS. +// +// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the VOLCENGINE API reference guide for ECS's +// API operation DescribeDeploymentSetSupportedInstanceTypeFamilyCommon for usage and error information. 
+func (c *ECS) DescribeDeploymentSetSupportedInstanceTypeFamilyCommon(input *map[string]interface{}) (*map[string]interface{}, error) {
+ req, out := c.DescribeDeploymentSetSupportedInstanceTypeFamilyCommonRequest(input)
+ return out, req.Send()
+}
+
+// DescribeDeploymentSetSupportedInstanceTypeFamilyCommonWithContext is the same as DescribeDeploymentSetSupportedInstanceTypeFamilyCommon with the addition of
+// the ability to pass a context and additional request options.
+//
+// See DescribeDeploymentSetSupportedInstanceTypeFamilyCommon for details on how to use this API operation.
+//
+// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur.
+// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/
+// for more information on using Contexts.
+func (c *ECS) DescribeDeploymentSetSupportedInstanceTypeFamilyCommonWithContext(ctx volcengine.Context, input *map[string]interface{}, opts ...request.Option) (*map[string]interface{}, error) {
+ req, out := c.DescribeDeploymentSetSupportedInstanceTypeFamilyCommonRequest(input)
+ req.SetContext(ctx)
+ req.ApplyOptions(opts...)
+ return out, req.Send()
+}
+
+const opDescribeDeploymentSetSupportedInstanceTypeFamily = "DescribeDeploymentSetSupportedInstanceTypeFamily"
+
+// DescribeDeploymentSetSupportedInstanceTypeFamilyRequest generates a "volcengine/request.Request" representing the
+// client's request for the DescribeDeploymentSetSupportedInstanceTypeFamily operation. The "output" return
+// value will be populated with the DescribeDeploymentSetSupportedInstanceTypeFamily request's response once the request completes
+// successfully.
+//
+// Use "Send" method on the returned DescribeDeploymentSetSupportedInstanceTypeFamily Request to send the API call to the service.
+// the "output" return value is not valid until after DescribeDeploymentSetSupportedInstanceTypeFamily Send returns without error.
+// +// See DescribeDeploymentSetSupportedInstanceTypeFamily for more information on using the DescribeDeploymentSetSupportedInstanceTypeFamily +// API call, and error handling. +// +// // Example sending a request using the DescribeDeploymentSetSupportedInstanceTypeFamilyRequest method. +// req, resp := client.DescribeDeploymentSetSupportedInstanceTypeFamilyRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *ECS) DescribeDeploymentSetSupportedInstanceTypeFamilyRequest(input *DescribeDeploymentSetSupportedInstanceTypeFamilyInput) (req *request.Request, output *DescribeDeploymentSetSupportedInstanceTypeFamilyOutput) { + op := &request.Operation{ + Name: opDescribeDeploymentSetSupportedInstanceTypeFamily, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &DescribeDeploymentSetSupportedInstanceTypeFamilyInput{} + } + + output = &DescribeDeploymentSetSupportedInstanceTypeFamilyOutput{} + req = c.newRequest(op, input, output) + + return +} + +// DescribeDeploymentSetSupportedInstanceTypeFamily API operation for ECS. +// +// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the VOLCENGINE API reference guide for ECS's +// API operation DescribeDeploymentSetSupportedInstanceTypeFamily for usage and error information. +func (c *ECS) DescribeDeploymentSetSupportedInstanceTypeFamily(input *DescribeDeploymentSetSupportedInstanceTypeFamilyInput) (*DescribeDeploymentSetSupportedInstanceTypeFamilyOutput, error) { + req, out := c.DescribeDeploymentSetSupportedInstanceTypeFamilyRequest(input) + return out, req.Send() +} + +// DescribeDeploymentSetSupportedInstanceTypeFamilyWithContext is the same as DescribeDeploymentSetSupportedInstanceTypeFamily with the addition of +// the ability to pass a context and additional request options. 
+//
+// See DescribeDeploymentSetSupportedInstanceTypeFamily for details on how to use this API operation.
+//
+// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur.
+// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/
+// for more information on using Contexts.
+func (c *ECS) DescribeDeploymentSetSupportedInstanceTypeFamilyWithContext(ctx volcengine.Context, input *DescribeDeploymentSetSupportedInstanceTypeFamilyInput, opts ...request.Option) (*DescribeDeploymentSetSupportedInstanceTypeFamilyOutput, error) {
+ req, out := c.DescribeDeploymentSetSupportedInstanceTypeFamilyRequest(input)
+ req.SetContext(ctx)
+ req.ApplyOptions(opts...)
+ return out, req.Send()
+}
+
+type DescribeDeploymentSetSupportedInstanceTypeFamilyInput struct {
+ _ struct{} `type:"structure"`
+}
+
+// String returns the string representation
+func (s DescribeDeploymentSetSupportedInstanceTypeFamilyInput) String() string {
+ return volcengineutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s DescribeDeploymentSetSupportedInstanceTypeFamilyInput) GoString() string {
+ return s.String()
+}
+
+type DescribeDeploymentSetSupportedInstanceTypeFamilyOutput struct {
+ _ struct{} `type:"structure"`
+
+ Metadata *response.ResponseMetadata
+
+ DeploymentSetCreateInstanceTypeFamilies []*string `type:"list"`
+
+ DeploymentSetModifyInstanceTypeFamilies []*string `type:"list"`
+}
+
+// String returns the string representation
+func (s DescribeDeploymentSetSupportedInstanceTypeFamilyOutput) String() string {
+ return volcengineutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s DescribeDeploymentSetSupportedInstanceTypeFamilyOutput) GoString() string {
+ return s.String()
+}
+
+// SetDeploymentSetCreateInstanceTypeFamilies sets the DeploymentSetCreateInstanceTypeFamilies field's value.
+func (s *DescribeDeploymentSetSupportedInstanceTypeFamilyOutput) SetDeploymentSetCreateInstanceTypeFamilies(v []*string) *DescribeDeploymentSetSupportedInstanceTypeFamilyOutput { + s.DeploymentSetCreateInstanceTypeFamilies = v + return s +} + +// SetDeploymentSetModifyInstanceTypeFamilies sets the DeploymentSetModifyInstanceTypeFamilies field's value. +func (s *DescribeDeploymentSetSupportedInstanceTypeFamilyOutput) SetDeploymentSetModifyInstanceTypeFamilies(v []*string) *DescribeDeploymentSetSupportedInstanceTypeFamilyOutput { + s.DeploymentSetModifyInstanceTypeFamilies = v + return s +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_describe_deployment_sets.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_describe_deployment_sets.go new file mode 100644 index 000000000000..e671fd167432 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_describe_deployment_sets.go @@ -0,0 +1,358 @@ +// Code generated by volcengine with private/model/cli/gen-api/main.go. DO NOT EDIT. + +package ecs + +import ( + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/response" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil" +) + +const opDescribeDeploymentSetsCommon = "DescribeDeploymentSets" + +// DescribeDeploymentSetsCommonRequest generates a "volcengine/request.Request" representing the +// client's request for the DescribeDeploymentSetsCommon operation. The "output" return +// value will be populated with the DescribeDeploymentSetsCommon request's response once the request completes +// successfully. 
+// +// Use "Send" method on the returned DescribeDeploymentSetsCommon Request to send the API call to the service. +// the "output" return value is not valid until after DescribeDeploymentSetsCommon Send returns without error. +// +// See DescribeDeploymentSetsCommon for more information on using the DescribeDeploymentSetsCommon +// API call, and error handling. +// +// // Example sending a request using the DescribeDeploymentSetsCommonRequest method. +// req, resp := client.DescribeDeploymentSetsCommonRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *ECS) DescribeDeploymentSetsCommonRequest(input *map[string]interface{}) (req *request.Request, output *map[string]interface{}) { + op := &request.Operation{ + Name: opDescribeDeploymentSetsCommon, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &map[string]interface{}{} + } + + output = &map[string]interface{}{} + req = c.newRequest(op, input, output) + + return +} + +// DescribeDeploymentSetsCommon API operation for ECS. +// +// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the VOLCENGINE API reference guide for ECS's +// API operation DescribeDeploymentSetsCommon for usage and error information. +func (c *ECS) DescribeDeploymentSetsCommon(input *map[string]interface{}) (*map[string]interface{}, error) { + req, out := c.DescribeDeploymentSetsCommonRequest(input) + return out, req.Send() +} + +// DescribeDeploymentSetsCommonWithContext is the same as DescribeDeploymentSetsCommon with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeDeploymentSetsCommon for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. 
If the context is nil a panic will occur. +// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ECS) DescribeDeploymentSetsCommonWithContext(ctx volcengine.Context, input *map[string]interface{}, opts ...request.Option) (*map[string]interface{}, error) { + req, out := c.DescribeDeploymentSetsCommonRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDescribeDeploymentSets = "DescribeDeploymentSets" + +// DescribeDeploymentSetsRequest generates a "volcengine/request.Request" representing the +// client's request for the DescribeDeploymentSets operation. The "output" return +// value will be populated with the DescribeDeploymentSetsCommon request's response once the request completes +// successfully. +// +// Use "Send" method on the returned DescribeDeploymentSetsCommon Request to send the API call to the service. +// the "output" return value is not valid until after DescribeDeploymentSetsCommon Send returns without error. +// +// See DescribeDeploymentSets for more information on using the DescribeDeploymentSets +// API call, and error handling. +// +// // Example sending a request using the DescribeDeploymentSetsRequest method. +// req, resp := client.DescribeDeploymentSetsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *ECS) DescribeDeploymentSetsRequest(input *DescribeDeploymentSetsInput) (req *request.Request, output *DescribeDeploymentSetsOutput) { + op := &request.Operation{ + Name: opDescribeDeploymentSets, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &DescribeDeploymentSetsInput{} + } + + output = &DescribeDeploymentSetsOutput{} + req = c.newRequest(op, input, output) + + return +} + +// DescribeDeploymentSets API operation for ECS. +// +// Returns volcengineerr.Error for service API and SDK errors. 
Use runtime type assertions
+// with volcengineerr.Error's Code and Message methods to get detailed information about
+// the error.
+//
+// See the VOLCENGINE API reference guide for ECS's
+// API operation DescribeDeploymentSets for usage and error information.
+func (c *ECS) DescribeDeploymentSets(input *DescribeDeploymentSetsInput) (*DescribeDeploymentSetsOutput, error) {
+	req, out := c.DescribeDeploymentSetsRequest(input)
+	return out, req.Send()
+}
+
+// DescribeDeploymentSetsWithContext is the same as DescribeDeploymentSets with the addition of
+// the ability to pass a context and additional request options.
+//
+// See DescribeDeploymentSets for details on how to use this API operation.
+//
+// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur.
+// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/
+// for more information on using Contexts.
+func (c *ECS) DescribeDeploymentSetsWithContext(ctx volcengine.Context, input *DescribeDeploymentSetsInput, opts ...request.Option) (*DescribeDeploymentSetsOutput, error) {
+	req, out := c.DescribeDeploymentSetsRequest(input)
+	req.SetContext(ctx)
+	req.ApplyOptions(opts...)
+	return out, req.Send()
+}
+
+type CapacityForDescribeDeploymentSetsOutput struct {
+	_ struct{} `type:"structure"`
+
+	AvailableCount *int32 `type:"int32"`
+
+	UsedCount *int32 `type:"int32"`
+
+	ZoneId *string `type:"string"`
+}
+
+// String returns the string representation
+func (s CapacityForDescribeDeploymentSetsOutput) String() string {
+	return volcengineutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s CapacityForDescribeDeploymentSetsOutput) GoString() string {
+	return s.String()
+}
+
+// SetAvailableCount sets the AvailableCount field's value.
+func (s *CapacityForDescribeDeploymentSetsOutput) SetAvailableCount(v int32) *CapacityForDescribeDeploymentSetsOutput { + s.AvailableCount = &v + return s +} + +// SetUsedCount sets the UsedCount field's value. +func (s *CapacityForDescribeDeploymentSetsOutput) SetUsedCount(v int32) *CapacityForDescribeDeploymentSetsOutput { + s.UsedCount = &v + return s +} + +// SetZoneId sets the ZoneId field's value. +func (s *CapacityForDescribeDeploymentSetsOutput) SetZoneId(v string) *CapacityForDescribeDeploymentSetsOutput { + s.ZoneId = &v + return s +} + +type DeploymentSetForDescribeDeploymentSetsOutput struct { + _ struct{} `type:"structure"` + + Capacities []*CapacityForDescribeDeploymentSetsOutput `type:"list"` + + CreatedAt *string `type:"string"` + + DeploymentSetDescription *string `type:"string"` + + DeploymentSetId *string `type:"string"` + + DeploymentSetName *string `type:"string"` + + Granularity *string `type:"string"` + + InstanceAmount *int32 `type:"int32"` + + InstanceIds []*string `type:"list"` + + Strategy *string `type:"string"` +} + +// String returns the string representation +func (s DeploymentSetForDescribeDeploymentSetsOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeploymentSetForDescribeDeploymentSetsOutput) GoString() string { + return s.String() +} + +// SetCapacities sets the Capacities field's value. +func (s *DeploymentSetForDescribeDeploymentSetsOutput) SetCapacities(v []*CapacityForDescribeDeploymentSetsOutput) *DeploymentSetForDescribeDeploymentSetsOutput { + s.Capacities = v + return s +} + +// SetCreatedAt sets the CreatedAt field's value. +func (s *DeploymentSetForDescribeDeploymentSetsOutput) SetCreatedAt(v string) *DeploymentSetForDescribeDeploymentSetsOutput { + s.CreatedAt = &v + return s +} + +// SetDeploymentSetDescription sets the DeploymentSetDescription field's value. 
+func (s *DeploymentSetForDescribeDeploymentSetsOutput) SetDeploymentSetDescription(v string) *DeploymentSetForDescribeDeploymentSetsOutput { + s.DeploymentSetDescription = &v + return s +} + +// SetDeploymentSetId sets the DeploymentSetId field's value. +func (s *DeploymentSetForDescribeDeploymentSetsOutput) SetDeploymentSetId(v string) *DeploymentSetForDescribeDeploymentSetsOutput { + s.DeploymentSetId = &v + return s +} + +// SetDeploymentSetName sets the DeploymentSetName field's value. +func (s *DeploymentSetForDescribeDeploymentSetsOutput) SetDeploymentSetName(v string) *DeploymentSetForDescribeDeploymentSetsOutput { + s.DeploymentSetName = &v + return s +} + +// SetGranularity sets the Granularity field's value. +func (s *DeploymentSetForDescribeDeploymentSetsOutput) SetGranularity(v string) *DeploymentSetForDescribeDeploymentSetsOutput { + s.Granularity = &v + return s +} + +// SetInstanceAmount sets the InstanceAmount field's value. +func (s *DeploymentSetForDescribeDeploymentSetsOutput) SetInstanceAmount(v int32) *DeploymentSetForDescribeDeploymentSetsOutput { + s.InstanceAmount = &v + return s +} + +// SetInstanceIds sets the InstanceIds field's value. +func (s *DeploymentSetForDescribeDeploymentSetsOutput) SetInstanceIds(v []*string) *DeploymentSetForDescribeDeploymentSetsOutput { + s.InstanceIds = v + return s +} + +// SetStrategy sets the Strategy field's value. 
+func (s *DeploymentSetForDescribeDeploymentSetsOutput) SetStrategy(v string) *DeploymentSetForDescribeDeploymentSetsOutput { + s.Strategy = &v + return s +} + +type DescribeDeploymentSetsInput struct { + _ struct{} `type:"structure"` + + DeploymentSetIds []*string `type:"list"` + + DeploymentSetName *string `type:"string"` + + Granularity *string `type:"string"` + + MaxResults *int32 `type:"int32"` + + NextToken *string `type:"string"` + + Strategy *string `type:"string"` +} + +// String returns the string representation +func (s DescribeDeploymentSetsInput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeDeploymentSetsInput) GoString() string { + return s.String() +} + +// SetDeploymentSetIds sets the DeploymentSetIds field's value. +func (s *DescribeDeploymentSetsInput) SetDeploymentSetIds(v []*string) *DescribeDeploymentSetsInput { + s.DeploymentSetIds = v + return s +} + +// SetDeploymentSetName sets the DeploymentSetName field's value. +func (s *DescribeDeploymentSetsInput) SetDeploymentSetName(v string) *DescribeDeploymentSetsInput { + s.DeploymentSetName = &v + return s +} + +// SetGranularity sets the Granularity field's value. +func (s *DescribeDeploymentSetsInput) SetGranularity(v string) *DescribeDeploymentSetsInput { + s.Granularity = &v + return s +} + +// SetMaxResults sets the MaxResults field's value. +func (s *DescribeDeploymentSetsInput) SetMaxResults(v int32) *DescribeDeploymentSetsInput { + s.MaxResults = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeDeploymentSetsInput) SetNextToken(v string) *DescribeDeploymentSetsInput { + s.NextToken = &v + return s +} + +// SetStrategy sets the Strategy field's value. 
+func (s *DescribeDeploymentSetsInput) SetStrategy(v string) *DescribeDeploymentSetsInput { + s.Strategy = &v + return s +} + +type DescribeDeploymentSetsOutput struct { + _ struct{} `type:"structure"` + + Metadata *response.ResponseMetadata + + DeploymentSets []*DeploymentSetForDescribeDeploymentSetsOutput `type:"list"` + + NextToken *string `type:"string"` +} + +// String returns the string representation +func (s DescribeDeploymentSetsOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeDeploymentSetsOutput) GoString() string { + return s.String() +} + +// SetDeploymentSets sets the DeploymentSets field's value. +func (s *DescribeDeploymentSetsOutput) SetDeploymentSets(v []*DeploymentSetForDescribeDeploymentSetsOutput) *DescribeDeploymentSetsOutput { + s.DeploymentSets = v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeDeploymentSetsOutput) SetNextToken(v string) *DescribeDeploymentSetsOutput { + s.NextToken = &v + return s +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_describe_image_share_permission.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_describe_image_share_permission.go new file mode 100644 index 000000000000..5f51ecd6329e --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_describe_image_share_permission.go @@ -0,0 +1,248 @@ +// Code generated by volcengine with private/model/cli/gen-api/main.go. DO NOT EDIT. 
+ +package ecs + +import ( + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/response" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil" +) + +const opDescribeImageSharePermissionCommon = "DescribeImageSharePermission" + +// DescribeImageSharePermissionCommonRequest generates a "volcengine/request.Request" representing the +// client's request for the DescribeImageSharePermissionCommon operation. The "output" return +// value will be populated with the DescribeImageSharePermissionCommon request's response once the request completes +// successfully. +// +// Use "Send" method on the returned DescribeImageSharePermissionCommon Request to send the API call to the service. +// the "output" return value is not valid until after DescribeImageSharePermissionCommon Send returns without error. +// +// See DescribeImageSharePermissionCommon for more information on using the DescribeImageSharePermissionCommon +// API call, and error handling. +// +// // Example sending a request using the DescribeImageSharePermissionCommonRequest method. +// req, resp := client.DescribeImageSharePermissionCommonRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *ECS) DescribeImageSharePermissionCommonRequest(input *map[string]interface{}) (req *request.Request, output *map[string]interface{}) { + op := &request.Operation{ + Name: opDescribeImageSharePermissionCommon, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &map[string]interface{}{} + } + + output = &map[string]interface{}{} + req = c.newRequest(op, input, output) + + return +} + +// DescribeImageSharePermissionCommon API operation for ECS. 
+// +// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the VOLCENGINE API reference guide for ECS's +// API operation DescribeImageSharePermissionCommon for usage and error information. +func (c *ECS) DescribeImageSharePermissionCommon(input *map[string]interface{}) (*map[string]interface{}, error) { + req, out := c.DescribeImageSharePermissionCommonRequest(input) + return out, req.Send() +} + +// DescribeImageSharePermissionCommonWithContext is the same as DescribeImageSharePermissionCommon with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeImageSharePermissionCommon for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur. +// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ECS) DescribeImageSharePermissionCommonWithContext(ctx volcengine.Context, input *map[string]interface{}, opts ...request.Option) (*map[string]interface{}, error) { + req, out := c.DescribeImageSharePermissionCommonRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDescribeImageSharePermission = "DescribeImageSharePermission" + +// DescribeImageSharePermissionRequest generates a "volcengine/request.Request" representing the +// client's request for the DescribeImageSharePermission operation. The "output" return +// value will be populated with the DescribeImageSharePermissionCommon request's response once the request completes +// successfully. +// +// Use "Send" method on the returned DescribeImageSharePermissionCommon Request to send the API call to the service. 
+// the "output" return value is not valid until after DescribeImageSharePermissionCommon Send returns without error.
+//
+// See DescribeImageSharePermission for more information on using the DescribeImageSharePermission
+// API call, and error handling.
+//
+//    // Example sending a request using the DescribeImageSharePermissionRequest method.
+//    req, resp := client.DescribeImageSharePermissionRequest(params)
+//
+//    err := req.Send()
+//    if err == nil { // resp is now filled
+//        fmt.Println(resp)
+//    }
+func (c *ECS) DescribeImageSharePermissionRequest(input *DescribeImageSharePermissionInput) (req *request.Request, output *DescribeImageSharePermissionOutput) {
+	op := &request.Operation{
+		Name:       opDescribeImageSharePermission,
+		HTTPMethod: "GET",
+		HTTPPath:   "/",
+	}
+
+	if input == nil {
+		input = &DescribeImageSharePermissionInput{}
+	}
+
+	output = &DescribeImageSharePermissionOutput{}
+	req = c.newRequest(op, input, output)
+
+	return
+}
+
+// DescribeImageSharePermission API operation for ECS.
+//
+// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions
+// with volcengineerr.Error's Code and Message methods to get detailed information about
+// the error.
+//
+// See the VOLCENGINE API reference guide for ECS's
+// API operation DescribeImageSharePermission for usage and error information.
+func (c *ECS) DescribeImageSharePermission(input *DescribeImageSharePermissionInput) (*DescribeImageSharePermissionOutput, error) {
+	req, out := c.DescribeImageSharePermissionRequest(input)
+	return out, req.Send()
+}
+
+// DescribeImageSharePermissionWithContext is the same as DescribeImageSharePermission with the addition of
+// the ability to pass a context and additional request options.
+//
+// See DescribeImageSharePermission for details on how to use this API operation.
+//
+// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur.
+// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ECS) DescribeImageSharePermissionWithContext(ctx volcengine.Context, input *DescribeImageSharePermissionInput, opts ...request.Option) (*DescribeImageSharePermissionOutput, error) { + req, out := c.DescribeImageSharePermissionRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +type AccountForDescribeImageSharePermissionOutput struct { + _ struct{} `type:"structure"` + + AccountId *string `type:"string"` +} + +// String returns the string representation +func (s AccountForDescribeImageSharePermissionOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s AccountForDescribeImageSharePermissionOutput) GoString() string { + return s.String() +} + +// SetAccountId sets the AccountId field's value. +func (s *AccountForDescribeImageSharePermissionOutput) SetAccountId(v string) *AccountForDescribeImageSharePermissionOutput { + s.AccountId = &v + return s +} + +type DescribeImageSharePermissionInput struct { + _ struct{} `type:"structure"` + + ImageId *string `type:"string"` + + MaxResults *int32 `type:"int32"` + + NextToken *string `type:"string"` +} + +// String returns the string representation +func (s DescribeImageSharePermissionInput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeImageSharePermissionInput) GoString() string { + return s.String() +} + +// SetImageId sets the ImageId field's value. +func (s *DescribeImageSharePermissionInput) SetImageId(v string) *DescribeImageSharePermissionInput { + s.ImageId = &v + return s +} + +// SetMaxResults sets the MaxResults field's value. 
+func (s *DescribeImageSharePermissionInput) SetMaxResults(v int32) *DescribeImageSharePermissionInput { + s.MaxResults = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeImageSharePermissionInput) SetNextToken(v string) *DescribeImageSharePermissionInput { + s.NextToken = &v + return s +} + +type DescribeImageSharePermissionOutput struct { + _ struct{} `type:"structure"` + + Metadata *response.ResponseMetadata + + Accounts []*AccountForDescribeImageSharePermissionOutput `type:"list"` + + ImageId *string `type:"string"` + + NextToken *string `type:"string"` + + TotalCount *int32 `type:"int32"` +} + +// String returns the string representation +func (s DescribeImageSharePermissionOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeImageSharePermissionOutput) GoString() string { + return s.String() +} + +// SetAccounts sets the Accounts field's value. +func (s *DescribeImageSharePermissionOutput) SetAccounts(v []*AccountForDescribeImageSharePermissionOutput) *DescribeImageSharePermissionOutput { + s.Accounts = v + return s +} + +// SetImageId sets the ImageId field's value. +func (s *DescribeImageSharePermissionOutput) SetImageId(v string) *DescribeImageSharePermissionOutput { + s.ImageId = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeImageSharePermissionOutput) SetNextToken(v string) *DescribeImageSharePermissionOutput { + s.NextToken = &v + return s +} + +// SetTotalCount sets the TotalCount field's value. 
+func (s *DescribeImageSharePermissionOutput) SetTotalCount(v int32) *DescribeImageSharePermissionOutput { + s.TotalCount = &v + return s +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_describe_images.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_describe_images.go new file mode 100644 index 000000000000..ccc1e5ff42ce --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_describe_images.go @@ -0,0 +1,434 @@ +// Code generated by volcengine with private/model/cli/gen-api/main.go. DO NOT EDIT. + +package ecs + +import ( + "encoding/json" + + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/response" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil" +) + +const opDescribeImagesCommon = "DescribeImages" + +// DescribeImagesCommonRequest generates a "volcengine/request.Request" representing the +// client's request for the DescribeImagesCommon operation. The "output" return +// value will be populated with the DescribeImagesCommon request's response once the request completes +// successfully. +// +// Use "Send" method on the returned DescribeImagesCommon Request to send the API call to the service. +// the "output" return value is not valid until after DescribeImagesCommon Send returns without error. +// +// See DescribeImagesCommon for more information on using the DescribeImagesCommon +// API call, and error handling. +// +// // Example sending a request using the DescribeImagesCommonRequest method. 
+// req, resp := client.DescribeImagesCommonRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *ECS) DescribeImagesCommonRequest(input *map[string]interface{}) (req *request.Request, output *map[string]interface{}) { + op := &request.Operation{ + Name: opDescribeImagesCommon, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &map[string]interface{}{} + } + + output = &map[string]interface{}{} + req = c.newRequest(op, input, output) + + return +} + +// DescribeImagesCommon API operation for ECS. +// +// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the VOLCENGINE API reference guide for ECS's +// API operation DescribeImagesCommon for usage and error information. +func (c *ECS) DescribeImagesCommon(input *map[string]interface{}) (*map[string]interface{}, error) { + req, out := c.DescribeImagesCommonRequest(input) + return out, req.Send() +} + +// DescribeImagesCommonWithContext is the same as DescribeImagesCommon with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeImagesCommon for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur. +// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ECS) DescribeImagesCommonWithContext(ctx volcengine.Context, input *map[string]interface{}, opts ...request.Option) (*map[string]interface{}, error) { + req, out := c.DescribeImagesCommonRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+ return out, req.Send() +} + +const opDescribeImages = "DescribeImages" + +// DescribeImagesRequest generates a "volcengine/request.Request" representing the +// client's request for the DescribeImages operation. The "output" return +// value will be populated with the DescribeImagesCommon request's response once the request completes +// successfully. +// +// Use "Send" method on the returned DescribeImagesCommon Request to send the API call to the service. +// the "output" return value is not valid until after DescribeImagesCommon Send returns without error. +// +// See DescribeImages for more information on using the DescribeImages +// API call, and error handling. +// +// // Example sending a request using the DescribeImagesRequest method. +// req, resp := client.DescribeImagesRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *ECS) DescribeImagesRequest(input *DescribeImagesInput) (req *request.Request, output *DescribeImagesOutput) { + op := &request.Operation{ + Name: opDescribeImages, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &DescribeImagesInput{} + } + + output = &DescribeImagesOutput{} + req = c.newRequest(op, input, output) + + return +} + +// DescribeImages API operation for ECS. +// +// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the VOLCENGINE API reference guide for ECS's +// API operation DescribeImages for usage and error information. +func (c *ECS) DescribeImages(input *DescribeImagesInput) (*DescribeImagesOutput, error) { + req, out := c.DescribeImagesRequest(input) + return out, req.Send() +} + +// DescribeImagesWithContext is the same as DescribeImages with the addition of +// the ability to pass a context and additional request options. 
+//
+// See DescribeImages for details on how to use this API operation.
+//
+// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur.
+// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/
+// for more information on using Contexts.
+func (c *ECS) DescribeImagesWithContext(ctx volcengine.Context, input *DescribeImagesInput, opts ...request.Option) (*DescribeImagesOutput, error) {
+	req, out := c.DescribeImagesRequest(input)
+	req.SetContext(ctx)
+	req.ApplyOptions(opts...)
+	return out, req.Send()
+}
+
+type DescribeImagesInput struct {
+	_ struct{} `type:"structure"`
+
+	ImageIds []*string `type:"list"`
+
+	ImageStatus *string `type:"string"`
+
+	InstanceTypeId *string `type:"string"`
+
+	IsSupportCloudInit *bool `type:"boolean"`
+
+	MaxResults *int32 `type:"int32"`
+
+	NextToken *string `type:"string"`
+
+	OsType *string `type:"string"`
+
+	ProjectName *string `type:"string"`
+
+	Status []*string `type:"list"`
+
+	Visibility *string `type:"string"`
+}
+
+// String returns the string representation
+func (s DescribeImagesInput) String() string {
+	return volcengineutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s DescribeImagesInput) GoString() string {
+	return s.String()
+}
+
+// SetImageIds sets the ImageIds field's value.
+func (s *DescribeImagesInput) SetImageIds(v []*string) *DescribeImagesInput {
+	s.ImageIds = v
+	return s
+}
+
+// SetImageStatus sets the ImageStatus field's value.
+func (s *DescribeImagesInput) SetImageStatus(v string) *DescribeImagesInput {
+	s.ImageStatus = &v
+	return s
+}
+
+// SetInstanceTypeId sets the InstanceTypeId field's value.
+func (s *DescribeImagesInput) SetInstanceTypeId(v string) *DescribeImagesInput {
+	s.InstanceTypeId = &v
+	return s
+}
+
+// SetIsSupportCloudInit sets the IsSupportCloudInit field's value.
+func (s *DescribeImagesInput) SetIsSupportCloudInit(v bool) *DescribeImagesInput { + s.IsSupportCloudInit = &v + return s +} + +// SetMaxResults sets the MaxResults field's value. +func (s *DescribeImagesInput) SetMaxResults(v int32) *DescribeImagesInput { + s.MaxResults = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeImagesInput) SetNextToken(v string) *DescribeImagesInput { + s.NextToken = &v + return s +} + +// SetOsType sets the OsType field's value. +func (s *DescribeImagesInput) SetOsType(v string) *DescribeImagesInput { + s.OsType = &v + return s +} + +// SetProjectName sets the ProjectName field's value. +func (s *DescribeImagesInput) SetProjectName(v string) *DescribeImagesInput { + s.ProjectName = &v + return s +} + +// SetStatus sets the Status field's value. +func (s *DescribeImagesInput) SetStatus(v []*string) *DescribeImagesInput { + s.Status = v + return s +} + +// SetVisibility sets the Visibility field's value. +func (s *DescribeImagesInput) SetVisibility(v string) *DescribeImagesInput { + s.Visibility = &v + return s +} + +type DescribeImagesOutput struct { + _ struct{} `type:"structure"` + + Metadata *response.ResponseMetadata + + Images []*ImageForDescribeImagesOutput `type:"list"` + + NextToken *string `type:"string"` + + TotalCount *int32 `type:"int32"` +} + +// String returns the string representation +func (s DescribeImagesOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeImagesOutput) GoString() string { + return s.String() +} + +// SetImages sets the Images field's value. +func (s *DescribeImagesOutput) SetImages(v []*ImageForDescribeImagesOutput) *DescribeImagesOutput { + s.Images = v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeImagesOutput) SetNextToken(v string) *DescribeImagesOutput { + s.NextToken = &v + return s +} + +// SetTotalCount sets the TotalCount field's value. 
+func (s *DescribeImagesOutput) SetTotalCount(v int32) *DescribeImagesOutput { + s.TotalCount = &v + return s +} + +type ImageForDescribeImagesOutput struct { + _ struct{} `type:"structure"` + + Architecture *string `type:"string"` + + CreatedAt *string `type:"string"` + + Description *string `type:"string"` + + ImageId *string `type:"string"` + + ImageName *string `type:"string"` + + ImageOwnerId *string `type:"string"` + + IsSupportCloudInit *bool `type:"boolean"` + + OsName *string `type:"string"` + + OsType *string `type:"string"` + + Platform *string `type:"string"` + + PlatformVersion *string `type:"string"` + + ProjectName *string `type:"string"` + + ShareStatus *string `type:"string"` + + Size *int32 `type:"int32"` + + Status *string `type:"string"` + + UpdatedAt *string `type:"string"` + + VirtualSize *json.Number `type:"json_number"` + + Visibility *string `type:"string"` +} + +// String returns the string representation +func (s ImageForDescribeImagesOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s ImageForDescribeImagesOutput) GoString() string { + return s.String() +} + +// SetArchitecture sets the Architecture field's value. +func (s *ImageForDescribeImagesOutput) SetArchitecture(v string) *ImageForDescribeImagesOutput { + s.Architecture = &v + return s +} + +// SetCreatedAt sets the CreatedAt field's value. +func (s *ImageForDescribeImagesOutput) SetCreatedAt(v string) *ImageForDescribeImagesOutput { + s.CreatedAt = &v + return s +} + +// SetDescription sets the Description field's value. +func (s *ImageForDescribeImagesOutput) SetDescription(v string) *ImageForDescribeImagesOutput { + s.Description = &v + return s +} + +// SetImageId sets the ImageId field's value. +func (s *ImageForDescribeImagesOutput) SetImageId(v string) *ImageForDescribeImagesOutput { + s.ImageId = &v + return s +} + +// SetImageName sets the ImageName field's value. 
+func (s *ImageForDescribeImagesOutput) SetImageName(v string) *ImageForDescribeImagesOutput { + s.ImageName = &v + return s +} + +// SetImageOwnerId sets the ImageOwnerId field's value. +func (s *ImageForDescribeImagesOutput) SetImageOwnerId(v string) *ImageForDescribeImagesOutput { + s.ImageOwnerId = &v + return s +} + +// SetIsSupportCloudInit sets the IsSupportCloudInit field's value. +func (s *ImageForDescribeImagesOutput) SetIsSupportCloudInit(v bool) *ImageForDescribeImagesOutput { + s.IsSupportCloudInit = &v + return s +} + +// SetOsName sets the OsName field's value. +func (s *ImageForDescribeImagesOutput) SetOsName(v string) *ImageForDescribeImagesOutput { + s.OsName = &v + return s +} + +// SetOsType sets the OsType field's value. +func (s *ImageForDescribeImagesOutput) SetOsType(v string) *ImageForDescribeImagesOutput { + s.OsType = &v + return s +} + +// SetPlatform sets the Platform field's value. +func (s *ImageForDescribeImagesOutput) SetPlatform(v string) *ImageForDescribeImagesOutput { + s.Platform = &v + return s +} + +// SetPlatformVersion sets the PlatformVersion field's value. +func (s *ImageForDescribeImagesOutput) SetPlatformVersion(v string) *ImageForDescribeImagesOutput { + s.PlatformVersion = &v + return s +} + +// SetProjectName sets the ProjectName field's value. +func (s *ImageForDescribeImagesOutput) SetProjectName(v string) *ImageForDescribeImagesOutput { + s.ProjectName = &v + return s +} + +// SetShareStatus sets the ShareStatus field's value. +func (s *ImageForDescribeImagesOutput) SetShareStatus(v string) *ImageForDescribeImagesOutput { + s.ShareStatus = &v + return s +} + +// SetSize sets the Size field's value. +func (s *ImageForDescribeImagesOutput) SetSize(v int32) *ImageForDescribeImagesOutput { + s.Size = &v + return s +} + +// SetStatus sets the Status field's value. 
+func (s *ImageForDescribeImagesOutput) SetStatus(v string) *ImageForDescribeImagesOutput { + s.Status = &v + return s +} + +// SetUpdatedAt sets the UpdatedAt field's value. +func (s *ImageForDescribeImagesOutput) SetUpdatedAt(v string) *ImageForDescribeImagesOutput { + s.UpdatedAt = &v + return s +} + +// SetVirtualSize sets the VirtualSize field's value. +func (s *ImageForDescribeImagesOutput) SetVirtualSize(v json.Number) *ImageForDescribeImagesOutput { + s.VirtualSize = &v + return s +} + +// SetVisibility sets the Visibility field's value. +func (s *ImageForDescribeImagesOutput) SetVisibility(v string) *ImageForDescribeImagesOutput { + s.Visibility = &v + return s +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_describe_instance_ecs_terminal_url.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_describe_instance_ecs_terminal_url.go new file mode 100644 index 000000000000..9f9eccdb5288 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_describe_instance_ecs_terminal_url.go @@ -0,0 +1,186 @@ +// Code generated by volcengine with private/model/cli/gen-api/main.go. DO NOT EDIT. + +package ecs + +import ( + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/response" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil" +) + +const opDescribeInstanceECSTerminalUrlCommon = "DescribeInstanceECSTerminalUrl" + +// DescribeInstanceECSTerminalUrlCommonRequest generates a "volcengine/request.Request" representing the +// client's request for the DescribeInstanceECSTerminalUrlCommon operation. 
The "output" return +// value will be populated with the DescribeInstanceECSTerminalUrlCommon request's response once the request completes +// successfully. +// +// Use "Send" method on the returned DescribeInstanceECSTerminalUrlCommon Request to send the API call to the service. +// the "output" return value is not valid until after DescribeInstanceECSTerminalUrlCommon Send returns without error. +// +// See DescribeInstanceECSTerminalUrlCommon for more information on using the DescribeInstanceECSTerminalUrlCommon +// API call, and error handling. +// +// // Example sending a request using the DescribeInstanceECSTerminalUrlCommonRequest method. +// req, resp := client.DescribeInstanceECSTerminalUrlCommonRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *ECS) DescribeInstanceECSTerminalUrlCommonRequest(input *map[string]interface{}) (req *request.Request, output *map[string]interface{}) { + op := &request.Operation{ + Name: opDescribeInstanceECSTerminalUrlCommon, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &map[string]interface{}{} + } + + output = &map[string]interface{}{} + req = c.newRequest(op, input, output) + + return +} + +// DescribeInstanceECSTerminalUrlCommon API operation for ECS. +// +// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the VOLCENGINE API reference guide for ECS's +// API operation DescribeInstanceECSTerminalUrlCommon for usage and error information. 
+func (c *ECS) DescribeInstanceECSTerminalUrlCommon(input *map[string]interface{}) (*map[string]interface{}, error) { + req, out := c.DescribeInstanceECSTerminalUrlCommonRequest(input) + return out, req.Send() +} + +// DescribeInstanceECSTerminalUrlCommonWithContext is the same as DescribeInstanceECSTerminalUrlCommon with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeInstanceECSTerminalUrlCommon for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur. +// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ECS) DescribeInstanceECSTerminalUrlCommonWithContext(ctx volcengine.Context, input *map[string]interface{}, opts ...request.Option) (*map[string]interface{}, error) { + req, out := c.DescribeInstanceECSTerminalUrlCommonRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDescribeInstanceECSTerminalUrl = "DescribeInstanceECSTerminalUrl" + +// DescribeInstanceECSTerminalUrlRequest generates a "volcengine/request.Request" representing the +// client's request for the DescribeInstanceECSTerminalUrl operation. The "output" return +// value will be populated with the DescribeInstanceECSTerminalUrl request's response once the request completes +// successfully. +// +// Use "Send" method on the returned DescribeInstanceECSTerminalUrl Request to send the API call to the service. +// the "output" return value is not valid until after DescribeInstanceECSTerminalUrl Send returns without error. +// +// See DescribeInstanceECSTerminalUrl for more information on using the DescribeInstanceECSTerminalUrl +// API call, and error handling. +// +// // Example sending a request using the DescribeInstanceECSTerminalUrlRequest method.
+// req, resp := client.DescribeInstanceECSTerminalUrlRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *ECS) DescribeInstanceECSTerminalUrlRequest(input *DescribeInstanceECSTerminalUrlInput) (req *request.Request, output *DescribeInstanceECSTerminalUrlOutput) { + op := &request.Operation{ + Name: opDescribeInstanceECSTerminalUrl, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &DescribeInstanceECSTerminalUrlInput{} + } + + output = &DescribeInstanceECSTerminalUrlOutput{} + req = c.newRequest(op, input, output) + + return +} + +// DescribeInstanceECSTerminalUrl API operation for ECS. +// +// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the VOLCENGINE API reference guide for ECS's +// API operation DescribeInstanceECSTerminalUrl for usage and error information. +func (c *ECS) DescribeInstanceECSTerminalUrl(input *DescribeInstanceECSTerminalUrlInput) (*DescribeInstanceECSTerminalUrlOutput, error) { + req, out := c.DescribeInstanceECSTerminalUrlRequest(input) + return out, req.Send() +} + +// DescribeInstanceECSTerminalUrlWithContext is the same as DescribeInstanceECSTerminalUrl with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeInstanceECSTerminalUrl for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur. +// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts.
+func (c *ECS) DescribeInstanceECSTerminalUrlWithContext(ctx volcengine.Context, input *DescribeInstanceECSTerminalUrlInput, opts ...request.Option) (*DescribeInstanceECSTerminalUrlOutput, error) { + req, out := c.DescribeInstanceECSTerminalUrlRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +type DescribeInstanceECSTerminalUrlInput struct { + _ struct{} `type:"structure"` + + InstanceId *string `type:"string"` +} + +// String returns the string representation +func (s DescribeInstanceECSTerminalUrlInput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeInstanceECSTerminalUrlInput) GoString() string { + return s.String() +} + +// SetInstanceId sets the InstanceId field's value. +func (s *DescribeInstanceECSTerminalUrlInput) SetInstanceId(v string) *DescribeInstanceECSTerminalUrlInput { + s.InstanceId = &v + return s +} + +type DescribeInstanceECSTerminalUrlOutput struct { + _ struct{} `type:"structure"` + + Metadata *response.ResponseMetadata + + EcsTerminalUrl *string `type:"string"` +} + +// String returns the string representation +func (s DescribeInstanceECSTerminalUrlOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeInstanceECSTerminalUrlOutput) GoString() string { + return s.String() +} + +// SetEcsTerminalUrl sets the EcsTerminalUrl field's value. 
+func (s *DescribeInstanceECSTerminalUrlOutput) SetEcsTerminalUrl(v string) *DescribeInstanceECSTerminalUrlOutput { + s.EcsTerminalUrl = &v + return s +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_describe_instance_type_families.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_describe_instance_type_families.go new file mode 100644 index 000000000000..0b5c4db6c7d1 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_describe_instance_type_families.go @@ -0,0 +1,232 @@ +// Code generated by volcengine with private/model/cli/gen-api/main.go. DO NOT EDIT. + +package ecs + +import ( + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/response" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil" +) + +const opDescribeInstanceTypeFamiliesCommon = "DescribeInstanceTypeFamilies" + +// DescribeInstanceTypeFamiliesCommonRequest generates a "volcengine/request.Request" representing the +// client's request for the DescribeInstanceTypeFamiliesCommon operation. The "output" return +// value will be populated with the DescribeInstanceTypeFamiliesCommon request's response once the request completes +// successfully. +// +// Use "Send" method on the returned DescribeInstanceTypeFamiliesCommon Request to send the API call to the service. +// the "output" return value is not valid until after DescribeInstanceTypeFamiliesCommon Send returns without error. +// +// See DescribeInstanceTypeFamiliesCommon for more information on using the DescribeInstanceTypeFamiliesCommon +// API call, and error handling. 
+// +// // Example sending a request using the DescribeInstanceTypeFamiliesCommonRequest method. +// req, resp := client.DescribeInstanceTypeFamiliesCommonRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *ECS) DescribeInstanceTypeFamiliesCommonRequest(input *map[string]interface{}) (req *request.Request, output *map[string]interface{}) { + op := &request.Operation{ + Name: opDescribeInstanceTypeFamiliesCommon, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &map[string]interface{}{} + } + + output = &map[string]interface{}{} + req = c.newRequest(op, input, output) + + return +} + +// DescribeInstanceTypeFamiliesCommon API operation for ECS. +// +// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the VOLCENGINE API reference guide for ECS's +// API operation DescribeInstanceTypeFamiliesCommon for usage and error information. +func (c *ECS) DescribeInstanceTypeFamiliesCommon(input *map[string]interface{}) (*map[string]interface{}, error) { + req, out := c.DescribeInstanceTypeFamiliesCommonRequest(input) + return out, req.Send() +} + +// DescribeInstanceTypeFamiliesCommonWithContext is the same as DescribeInstanceTypeFamiliesCommon with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeInstanceTypeFamiliesCommon for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur. +// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
+func (c *ECS) DescribeInstanceTypeFamiliesCommonWithContext(ctx volcengine.Context, input *map[string]interface{}, opts ...request.Option) (*map[string]interface{}, error) { + req, out := c.DescribeInstanceTypeFamiliesCommonRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDescribeInstanceTypeFamilies = "DescribeInstanceTypeFamilies" + +// DescribeInstanceTypeFamiliesRequest generates a "volcengine/request.Request" representing the +// client's request for the DescribeInstanceTypeFamilies operation. The "output" return +// value will be populated with the DescribeInstanceTypeFamilies request's response once the request completes +// successfully. +// +// Use "Send" method on the returned DescribeInstanceTypeFamilies Request to send the API call to the service. +// the "output" return value is not valid until after DescribeInstanceTypeFamilies Send returns without error. +// +// See DescribeInstanceTypeFamilies for more information on using the DescribeInstanceTypeFamilies +// API call, and error handling. +// +// // Example sending a request using the DescribeInstanceTypeFamiliesRequest method. +// req, resp := client.DescribeInstanceTypeFamiliesRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *ECS) DescribeInstanceTypeFamiliesRequest(input *DescribeInstanceTypeFamiliesInput) (req *request.Request, output *DescribeInstanceTypeFamiliesOutput) { + op := &request.Operation{ + Name: opDescribeInstanceTypeFamilies, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &DescribeInstanceTypeFamiliesInput{} + } + + output = &DescribeInstanceTypeFamiliesOutput{} + req = c.newRequest(op, input, output) + + return +} + +// DescribeInstanceTypeFamilies API operation for ECS. +// +// Returns volcengineerr.Error for service API and SDK errors.
Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the VOLCENGINE API reference guide for ECS's +// API operation DescribeInstanceTypeFamilies for usage and error information. +func (c *ECS) DescribeInstanceTypeFamilies(input *DescribeInstanceTypeFamiliesInput) (*DescribeInstanceTypeFamiliesOutput, error) { + req, out := c.DescribeInstanceTypeFamiliesRequest(input) + return out, req.Send() +} + +// DescribeInstanceTypeFamiliesWithContext is the same as DescribeInstanceTypeFamilies with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeInstanceTypeFamilies for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur. +// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ECS) DescribeInstanceTypeFamiliesWithContext(ctx volcengine.Context, input *DescribeInstanceTypeFamiliesInput, opts ...request.Option) (*DescribeInstanceTypeFamiliesOutput, error) { + req, out := c.DescribeInstanceTypeFamiliesRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +type DescribeInstanceTypeFamiliesInput struct { + _ struct{} `type:"structure"` + + Generation *string `type:"string"` + + ZoneId *string `type:"string"` +} + +// String returns the string representation +func (s DescribeInstanceTypeFamiliesInput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeInstanceTypeFamiliesInput) GoString() string { + return s.String() +} + +// SetGeneration sets the Generation field's value.
+func (s *DescribeInstanceTypeFamiliesInput) SetGeneration(v string) *DescribeInstanceTypeFamiliesInput { + s.Generation = &v + return s +} + +// SetZoneId sets the ZoneId field's value. +func (s *DescribeInstanceTypeFamiliesInput) SetZoneId(v string) *DescribeInstanceTypeFamiliesInput { + s.ZoneId = &v + return s +} + +type DescribeInstanceTypeFamiliesOutput struct { + _ struct{} `type:"structure"` + + Metadata *response.ResponseMetadata + + InstanceTypeFamilies []*InstanceTypeFamilyForDescribeInstanceTypeFamiliesOutput `type:"list"` +} + +// String returns the string representation +func (s DescribeInstanceTypeFamiliesOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeInstanceTypeFamiliesOutput) GoString() string { + return s.String() +} + +// SetInstanceTypeFamilies sets the InstanceTypeFamilies field's value. +func (s *DescribeInstanceTypeFamiliesOutput) SetInstanceTypeFamilies(v []*InstanceTypeFamilyForDescribeInstanceTypeFamiliesOutput) *DescribeInstanceTypeFamiliesOutput { + s.InstanceTypeFamilies = v + return s +} + +type InstanceTypeFamilyForDescribeInstanceTypeFamiliesOutput struct { + _ struct{} `type:"structure"` + + Generation *string `type:"string"` + + InstanceTypeFamily *string `type:"string"` + + ZoneIds []*string `type:"list"` +} + +// String returns the string representation +func (s InstanceTypeFamilyForDescribeInstanceTypeFamiliesOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s InstanceTypeFamilyForDescribeInstanceTypeFamiliesOutput) GoString() string { + return s.String() +} + +// SetGeneration sets the Generation field's value. +func (s *InstanceTypeFamilyForDescribeInstanceTypeFamiliesOutput) SetGeneration(v string) *InstanceTypeFamilyForDescribeInstanceTypeFamiliesOutput { + s.Generation = &v + return s +} + +// SetInstanceTypeFamily sets the InstanceTypeFamily field's value. 
+func (s *InstanceTypeFamilyForDescribeInstanceTypeFamiliesOutput) SetInstanceTypeFamily(v string) *InstanceTypeFamilyForDescribeInstanceTypeFamiliesOutput { + s.InstanceTypeFamily = &v + return s +} + +// SetZoneIds sets the ZoneIds field's value. +func (s *InstanceTypeFamilyForDescribeInstanceTypeFamiliesOutput) SetZoneIds(v []*string) *InstanceTypeFamilyForDescribeInstanceTypeFamiliesOutput { + s.ZoneIds = v + return s +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_describe_instance_types.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_describe_instance_types.go new file mode 100644 index 000000000000..ab739df34f18 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_describe_instance_types.go @@ -0,0 +1,608 @@ +// Code generated by volcengine with private/model/cli/gen-api/main.go. DO NOT EDIT. + +package ecs + +import ( + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/response" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil" +) + +const opDescribeInstanceTypesCommon = "DescribeInstanceTypes" + +// DescribeInstanceTypesCommonRequest generates a "volcengine/request.Request" representing the +// client's request for the DescribeInstanceTypesCommon operation. The "output" return +// value will be populated with the DescribeInstanceTypesCommon request's response once the request completes +// successfully. +// +// Use "Send" method on the returned DescribeInstanceTypesCommon Request to send the API call to the service. +// the "output" return value is not valid until after DescribeInstanceTypesCommon Send returns without error. 
+// +// See DescribeInstanceTypesCommon for more information on using the DescribeInstanceTypesCommon +// API call, and error handling. +// +// // Example sending a request using the DescribeInstanceTypesCommonRequest method. +// req, resp := client.DescribeInstanceTypesCommonRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *ECS) DescribeInstanceTypesCommonRequest(input *map[string]interface{}) (req *request.Request, output *map[string]interface{}) { + op := &request.Operation{ + Name: opDescribeInstanceTypesCommon, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &map[string]interface{}{} + } + + output = &map[string]interface{}{} + req = c.newRequest(op, input, output) + + return +} + +// DescribeInstanceTypesCommon API operation for ECS. +// +// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the VOLCENGINE API reference guide for ECS's +// API operation DescribeInstanceTypesCommon for usage and error information. +func (c *ECS) DescribeInstanceTypesCommon(input *map[string]interface{}) (*map[string]interface{}, error) { + req, out := c.DescribeInstanceTypesCommonRequest(input) + return out, req.Send() +} + +// DescribeInstanceTypesCommonWithContext is the same as DescribeInstanceTypesCommon with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeInstanceTypesCommon for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur. +// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
+func (c *ECS) DescribeInstanceTypesCommonWithContext(ctx volcengine.Context, input *map[string]interface{}, opts ...request.Option) (*map[string]interface{}, error) { + req, out := c.DescribeInstanceTypesCommonRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDescribeInstanceTypes = "DescribeInstanceTypes" + +// DescribeInstanceTypesRequest generates a "volcengine/request.Request" representing the +// client's request for the DescribeInstanceTypes operation. The "output" return +// value will be populated with the DescribeInstanceTypes request's response once the request completes +// successfully. +// +// Use "Send" method on the returned DescribeInstanceTypes Request to send the API call to the service. +// the "output" return value is not valid until after DescribeInstanceTypes Send returns without error. +// +// See DescribeInstanceTypes for more information on using the DescribeInstanceTypes +// API call, and error handling. +// +// // Example sending a request using the DescribeInstanceTypesRequest method. +// req, resp := client.DescribeInstanceTypesRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *ECS) DescribeInstanceTypesRequest(input *DescribeInstanceTypesInput) (req *request.Request, output *DescribeInstanceTypesOutput) { + op := &request.Operation{ + Name: opDescribeInstanceTypes, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &DescribeInstanceTypesInput{} + } + + output = &DescribeInstanceTypesOutput{} + req = c.newRequest(op, input, output) + + return +} + +// DescribeInstanceTypes API operation for ECS. +// +// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error.
+// +// See the VOLCENGINE API reference guide for ECS's +// API operation DescribeInstanceTypes for usage and error information. +func (c *ECS) DescribeInstanceTypes(input *DescribeInstanceTypesInput) (*DescribeInstanceTypesOutput, error) { + req, out := c.DescribeInstanceTypesRequest(input) + return out, req.Send() +} + +// DescribeInstanceTypesWithContext is the same as DescribeInstanceTypes with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeInstanceTypes for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur. +// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ECS) DescribeInstanceTypesWithContext(ctx volcengine.Context, input *DescribeInstanceTypesInput, opts ...request.Option) (*DescribeInstanceTypesOutput, error) { + req, out := c.DescribeInstanceTypesRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +type DescribeInstanceTypesInput struct { + _ struct{} `type:"structure"` + + InstanceTypeIds []*string `type:"list"` + + InstanceTypes []*string `type:"list"` + + MaxResults *int32 `type:"int32"` + + NextToken *string `type:"string"` +} + +// String returns the string representation +func (s DescribeInstanceTypesInput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeInstanceTypesInput) GoString() string { + return s.String() +} + +// SetInstanceTypeIds sets the InstanceTypeIds field's value. +func (s *DescribeInstanceTypesInput) SetInstanceTypeIds(v []*string) *DescribeInstanceTypesInput { + s.InstanceTypeIds = v + return s +} + +// SetInstanceTypes sets the InstanceTypes field's value.
+func (s *DescribeInstanceTypesInput) SetInstanceTypes(v []*string) *DescribeInstanceTypesInput { + s.InstanceTypes = v + return s +} + +// SetMaxResults sets the MaxResults field's value. +func (s *DescribeInstanceTypesInput) SetMaxResults(v int32) *DescribeInstanceTypesInput { + s.MaxResults = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeInstanceTypesInput) SetNextToken(v string) *DescribeInstanceTypesInput { + s.NextToken = &v + return s +} + +type DescribeInstanceTypesOutput struct { + _ struct{} `type:"structure"` + + Metadata *response.ResponseMetadata + + InstanceTypes []*InstanceTypeForDescribeInstanceTypesOutput `type:"list"` + + NextToken *string `type:"string"` + + TotalCount *int32 `type:"int32"` +} + +// String returns the string representation +func (s DescribeInstanceTypesOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeInstanceTypesOutput) GoString() string { + return s.String() +} + +// SetInstanceTypes sets the InstanceTypes field's value. +func (s *DescribeInstanceTypesOutput) SetInstanceTypes(v []*InstanceTypeForDescribeInstanceTypesOutput) *DescribeInstanceTypesOutput { + s.InstanceTypes = v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeInstanceTypesOutput) SetNextToken(v string) *DescribeInstanceTypesOutput { + s.NextToken = &v + return s +} + +// SetTotalCount sets the TotalCount field's value. 
+func (s *DescribeInstanceTypesOutput) SetTotalCount(v int32) *DescribeInstanceTypesOutput { + s.TotalCount = &v + return s +} + +type GpuDeviceForDescribeInstanceTypesOutput struct { + _ struct{} `type:"structure"` + + Count *int32 `type:"int32"` + + Memory *MemoryForDescribeInstanceTypesOutput `type:"structure"` + + ProductName *string `type:"string"` +} + +// String returns the string representation +func (s GpuDeviceForDescribeInstanceTypesOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s GpuDeviceForDescribeInstanceTypesOutput) GoString() string { + return s.String() +} + +// SetCount sets the Count field's value. +func (s *GpuDeviceForDescribeInstanceTypesOutput) SetCount(v int32) *GpuDeviceForDescribeInstanceTypesOutput { + s.Count = &v + return s +} + +// SetMemory sets the Memory field's value. +func (s *GpuDeviceForDescribeInstanceTypesOutput) SetMemory(v *MemoryForDescribeInstanceTypesOutput) *GpuDeviceForDescribeInstanceTypesOutput { + s.Memory = v + return s +} + +// SetProductName sets the ProductName field's value. +func (s *GpuDeviceForDescribeInstanceTypesOutput) SetProductName(v string) *GpuDeviceForDescribeInstanceTypesOutput { + s.ProductName = &v + return s +} + +type GpuForDescribeInstanceTypesOutput struct { + _ struct{} `type:"structure"` + + GpuDevices []*GpuDeviceForDescribeInstanceTypesOutput `type:"list"` +} + +// String returns the string representation +func (s GpuForDescribeInstanceTypesOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s GpuForDescribeInstanceTypesOutput) GoString() string { + return s.String() +} + +// SetGpuDevices sets the GpuDevices field's value. 
+func (s *GpuForDescribeInstanceTypesOutput) SetGpuDevices(v []*GpuDeviceForDescribeInstanceTypesOutput) *GpuForDescribeInstanceTypesOutput { + s.GpuDevices = v + return s +} + +type InstanceTypeForDescribeInstanceTypesOutput struct { + _ struct{} `type:"structure"` + + BaselineCredit *int64 `type:"int64"` + + Gpu *GpuForDescribeInstanceTypesOutput `type:"structure"` + + InitialCredit *int64 `type:"int64"` + + InstanceTypeFamily *string `type:"string"` + + InstanceTypeId *string `type:"string"` + + LocalVolumes []*LocalVolumeForDescribeInstanceTypesOutput `type:"list"` + + Memory *MemoryForDescribeInstanceTypesOutput `type:"structure"` + + Network *NetworkForDescribeInstanceTypesOutput `type:"structure"` + + Processor *ProcessorForDescribeInstanceTypesOutput `type:"structure"` + + Rdma *RdmaForDescribeInstanceTypesOutput `type:"structure"` + + Volume *VolumeForDescribeInstanceTypesOutput `type:"structure"` +} + +// String returns the string representation +func (s InstanceTypeForDescribeInstanceTypesOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s InstanceTypeForDescribeInstanceTypesOutput) GoString() string { + return s.String() +} + +// SetBaselineCredit sets the BaselineCredit field's value. +func (s *InstanceTypeForDescribeInstanceTypesOutput) SetBaselineCredit(v int64) *InstanceTypeForDescribeInstanceTypesOutput { + s.BaselineCredit = &v + return s +} + +// SetGpu sets the Gpu field's value. +func (s *InstanceTypeForDescribeInstanceTypesOutput) SetGpu(v *GpuForDescribeInstanceTypesOutput) *InstanceTypeForDescribeInstanceTypesOutput { + s.Gpu = v + return s +} + +// SetInitialCredit sets the InitialCredit field's value. +func (s *InstanceTypeForDescribeInstanceTypesOutput) SetInitialCredit(v int64) *InstanceTypeForDescribeInstanceTypesOutput { + s.InitialCredit = &v + return s +} + +// SetInstanceTypeFamily sets the InstanceTypeFamily field's value. 
+func (s *InstanceTypeForDescribeInstanceTypesOutput) SetInstanceTypeFamily(v string) *InstanceTypeForDescribeInstanceTypesOutput { + s.InstanceTypeFamily = &v + return s +} + +// SetInstanceTypeId sets the InstanceTypeId field's value. +func (s *InstanceTypeForDescribeInstanceTypesOutput) SetInstanceTypeId(v string) *InstanceTypeForDescribeInstanceTypesOutput { + s.InstanceTypeId = &v + return s +} + +// SetLocalVolumes sets the LocalVolumes field's value. +func (s *InstanceTypeForDescribeInstanceTypesOutput) SetLocalVolumes(v []*LocalVolumeForDescribeInstanceTypesOutput) *InstanceTypeForDescribeInstanceTypesOutput { + s.LocalVolumes = v + return s +} + +// SetMemory sets the Memory field's value. +func (s *InstanceTypeForDescribeInstanceTypesOutput) SetMemory(v *MemoryForDescribeInstanceTypesOutput) *InstanceTypeForDescribeInstanceTypesOutput { + s.Memory = v + return s +} + +// SetNetwork sets the Network field's value. +func (s *InstanceTypeForDescribeInstanceTypesOutput) SetNetwork(v *NetworkForDescribeInstanceTypesOutput) *InstanceTypeForDescribeInstanceTypesOutput { + s.Network = v + return s +} + +// SetProcessor sets the Processor field's value. +func (s *InstanceTypeForDescribeInstanceTypesOutput) SetProcessor(v *ProcessorForDescribeInstanceTypesOutput) *InstanceTypeForDescribeInstanceTypesOutput { + s.Processor = v + return s +} + +// SetRdma sets the Rdma field's value. +func (s *InstanceTypeForDescribeInstanceTypesOutput) SetRdma(v *RdmaForDescribeInstanceTypesOutput) *InstanceTypeForDescribeInstanceTypesOutput { + s.Rdma = v + return s +} + +// SetVolume sets the Volume field's value. 
+func (s *InstanceTypeForDescribeInstanceTypesOutput) SetVolume(v *VolumeForDescribeInstanceTypesOutput) *InstanceTypeForDescribeInstanceTypesOutput { + s.Volume = v + return s +} + +type LocalVolumeForDescribeInstanceTypesOutput struct { + _ struct{} `type:"structure"` + + Count *int32 `type:"int32"` + + Size *int32 `type:"int32"` + + VolumeType *string `type:"string"` +} + +// String returns the string representation +func (s LocalVolumeForDescribeInstanceTypesOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s LocalVolumeForDescribeInstanceTypesOutput) GoString() string { + return s.String() +} + +// SetCount sets the Count field's value. +func (s *LocalVolumeForDescribeInstanceTypesOutput) SetCount(v int32) *LocalVolumeForDescribeInstanceTypesOutput { + s.Count = &v + return s +} + +// SetSize sets the Size field's value. +func (s *LocalVolumeForDescribeInstanceTypesOutput) SetSize(v int32) *LocalVolumeForDescribeInstanceTypesOutput { + s.Size = &v + return s +} + +// SetVolumeType sets the VolumeType field's value. +func (s *LocalVolumeForDescribeInstanceTypesOutput) SetVolumeType(v string) *LocalVolumeForDescribeInstanceTypesOutput { + s.VolumeType = &v + return s +} + +type MemoryForDescribeInstanceTypesOutput struct { + _ struct{} `type:"structure"` + + EncryptedSize *int32 `type:"int32"` + + Size *int32 `type:"int32"` +} + +// String returns the string representation +func (s MemoryForDescribeInstanceTypesOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s MemoryForDescribeInstanceTypesOutput) GoString() string { + return s.String() +} + +// SetEncryptedSize sets the EncryptedSize field's value. +func (s *MemoryForDescribeInstanceTypesOutput) SetEncryptedSize(v int32) *MemoryForDescribeInstanceTypesOutput { + s.EncryptedSize = &v + return s +} + +// SetSize sets the Size field's value. 
+func (s *MemoryForDescribeInstanceTypesOutput) SetSize(v int32) *MemoryForDescribeInstanceTypesOutput { + s.Size = &v + return s +} + +type NetworkForDescribeInstanceTypesOutput struct { + _ struct{} `type:"structure"` + + MaximumBandwidthMbps *int32 `type:"int32"` + + MaximumNetworkInterfaces *int32 `type:"int32"` + + MaximumPrivateIpv4AddressesPerNetworkInterface *int32 `type:"int32"` + + MaximumQueuesPerNetworkInterface *int32 `type:"int32"` + + MaximumThroughputKpps *int32 `type:"int32"` +} + +// String returns the string representation +func (s NetworkForDescribeInstanceTypesOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s NetworkForDescribeInstanceTypesOutput) GoString() string { + return s.String() +} + +// SetMaximumBandwidthMbps sets the MaximumBandwidthMbps field's value. +func (s *NetworkForDescribeInstanceTypesOutput) SetMaximumBandwidthMbps(v int32) *NetworkForDescribeInstanceTypesOutput { + s.MaximumBandwidthMbps = &v + return s +} + +// SetMaximumNetworkInterfaces sets the MaximumNetworkInterfaces field's value. +func (s *NetworkForDescribeInstanceTypesOutput) SetMaximumNetworkInterfaces(v int32) *NetworkForDescribeInstanceTypesOutput { + s.MaximumNetworkInterfaces = &v + return s +} + +// SetMaximumPrivateIpv4AddressesPerNetworkInterface sets the MaximumPrivateIpv4AddressesPerNetworkInterface field's value. +func (s *NetworkForDescribeInstanceTypesOutput) SetMaximumPrivateIpv4AddressesPerNetworkInterface(v int32) *NetworkForDescribeInstanceTypesOutput { + s.MaximumPrivateIpv4AddressesPerNetworkInterface = &v + return s +} + +// SetMaximumQueuesPerNetworkInterface sets the MaximumQueuesPerNetworkInterface field's value. 
+func (s *NetworkForDescribeInstanceTypesOutput) SetMaximumQueuesPerNetworkInterface(v int32) *NetworkForDescribeInstanceTypesOutput { + s.MaximumQueuesPerNetworkInterface = &v + return s +} + +// SetMaximumThroughputKpps sets the MaximumThroughputKpps field's value. +func (s *NetworkForDescribeInstanceTypesOutput) SetMaximumThroughputKpps(v int32) *NetworkForDescribeInstanceTypesOutput { + s.MaximumThroughputKpps = &v + return s +} + +type ProcessorForDescribeInstanceTypesOutput struct { + _ struct{} `type:"structure"` + + BaseFrequency *float64 `type:"float"` + + Cpus *int32 `type:"int32"` + + Model *string `type:"string"` + + TurboFrequency *float64 `type:"float"` +} + +// String returns the string representation +func (s ProcessorForDescribeInstanceTypesOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s ProcessorForDescribeInstanceTypesOutput) GoString() string { + return s.String() +} + +// SetBaseFrequency sets the BaseFrequency field's value. +func (s *ProcessorForDescribeInstanceTypesOutput) SetBaseFrequency(v float64) *ProcessorForDescribeInstanceTypesOutput { + s.BaseFrequency = &v + return s +} + +// SetCpus sets the Cpus field's value. +func (s *ProcessorForDescribeInstanceTypesOutput) SetCpus(v int32) *ProcessorForDescribeInstanceTypesOutput { + s.Cpus = &v + return s +} + +// SetModel sets the Model field's value. +func (s *ProcessorForDescribeInstanceTypesOutput) SetModel(v string) *ProcessorForDescribeInstanceTypesOutput { + s.Model = &v + return s +} + +// SetTurboFrequency sets the TurboFrequency field's value. 
+func (s *ProcessorForDescribeInstanceTypesOutput) SetTurboFrequency(v float64) *ProcessorForDescribeInstanceTypesOutput { + s.TurboFrequency = &v + return s +} + +type RdmaForDescribeInstanceTypesOutput struct { + _ struct{} `type:"structure"` + + RdmaNetworkInterfaces *int32 `type:"int32"` +} + +// String returns the string representation +func (s RdmaForDescribeInstanceTypesOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s RdmaForDescribeInstanceTypesOutput) GoString() string { + return s.String() +} + +// SetRdmaNetworkInterfaces sets the RdmaNetworkInterfaces field's value. +func (s *RdmaForDescribeInstanceTypesOutput) SetRdmaNetworkInterfaces(v int32) *RdmaForDescribeInstanceTypesOutput { + s.RdmaNetworkInterfaces = &v + return s +} + +type VolumeForDescribeInstanceTypesOutput struct { + _ struct{} `type:"structure"` + + MaximumCount *int32 `type:"int32"` + + SupportedVolumeTypes []*string `type:"list"` +} + +// String returns the string representation +func (s VolumeForDescribeInstanceTypesOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s VolumeForDescribeInstanceTypesOutput) GoString() string { + return s.String() +} + +// SetMaximumCount sets the MaximumCount field's value. +func (s *VolumeForDescribeInstanceTypesOutput) SetMaximumCount(v int32) *VolumeForDescribeInstanceTypesOutput { + s.MaximumCount = &v + return s +} + +// SetSupportedVolumeTypes sets the SupportedVolumeTypes field's value. 
+func (s *VolumeForDescribeInstanceTypesOutput) SetSupportedVolumeTypes(v []*string) *VolumeForDescribeInstanceTypesOutput { + s.SupportedVolumeTypes = v + return s +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_describe_instance_vnc_url.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_describe_instance_vnc_url.go new file mode 100644 index 000000000000..a5472c7eaebf --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_describe_instance_vnc_url.go @@ -0,0 +1,186 @@ +// Code generated by volcengine with private/model/cli/gen-api/main.go. DO NOT EDIT. + +package ecs + +import ( + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/response" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil" +) + +const opDescribeInstanceVncUrlCommon = "DescribeInstanceVncUrl" + +// DescribeInstanceVncUrlCommonRequest generates a "volcengine/request.Request" representing the +// client's request for the DescribeInstanceVncUrlCommon operation. The "output" return +// value will be populated with the DescribeInstanceVncUrlCommon request's response once the request completes +// successfully. +// +// Use "Send" method on the returned DescribeInstanceVncUrlCommon Request to send the API call to the service. +// the "output" return value is not valid until after DescribeInstanceVncUrlCommon Send returns without error. +// +// See DescribeInstanceVncUrlCommon for more information on using the DescribeInstanceVncUrlCommon +// API call, and error handling. +// +// // Example sending a request using the DescribeInstanceVncUrlCommonRequest method. 
+// req, resp := client.DescribeInstanceVncUrlCommonRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *ECS) DescribeInstanceVncUrlCommonRequest(input *map[string]interface{}) (req *request.Request, output *map[string]interface{}) { + op := &request.Operation{ + Name: opDescribeInstanceVncUrlCommon, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &map[string]interface{}{} + } + + output = &map[string]interface{}{} + req = c.newRequest(op, input, output) + + return +} + +// DescribeInstanceVncUrlCommon API operation for ECS. +// +// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the VOLCENGINE API reference guide for ECS's +// API operation DescribeInstanceVncUrlCommon for usage and error information. +func (c *ECS) DescribeInstanceVncUrlCommon(input *map[string]interface{}) (*map[string]interface{}, error) { + req, out := c.DescribeInstanceVncUrlCommonRequest(input) + return out, req.Send() +} + +// DescribeInstanceVncUrlCommonWithContext is the same as DescribeInstanceVncUrlCommon with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeInstanceVncUrlCommon for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur. +// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ECS) DescribeInstanceVncUrlCommonWithContext(ctx volcengine.Context, input *map[string]interface{}, opts ...request.Option) (*map[string]interface{}, error) { + req, out := c.DescribeInstanceVncUrlCommonRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+ return out, req.Send() +} + +const opDescribeInstanceVncUrl = "DescribeInstanceVncUrl" + +// DescribeInstanceVncUrlRequest generates a "volcengine/request.Request" representing the +// client's request for the DescribeInstanceVncUrl operation. The "output" return +// value will be populated with the DescribeInstanceVncUrl request's response once the request completes +// successfully. +// +// Use "Send" method on the returned DescribeInstanceVncUrl Request to send the API call to the service. +// the "output" return value is not valid until after DescribeInstanceVncUrl Send returns without error. +// +// See DescribeInstanceVncUrl for more information on using the DescribeInstanceVncUrl +// API call, and error handling. +// +// // Example sending a request using the DescribeInstanceVncUrlRequest method. +// req, resp := client.DescribeInstanceVncUrlRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *ECS) DescribeInstanceVncUrlRequest(input *DescribeInstanceVncUrlInput) (req *request.Request, output *DescribeInstanceVncUrlOutput) { + op := &request.Operation{ + Name: opDescribeInstanceVncUrl, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &DescribeInstanceVncUrlInput{} + } + + output = &DescribeInstanceVncUrlOutput{} + req = c.newRequest(op, input, output) + + return +} + +// DescribeInstanceVncUrl API operation for ECS. +// +// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the VOLCENGINE API reference guide for ECS's +// API operation DescribeInstanceVncUrl for usage and error information.
+func (c *ECS) DescribeInstanceVncUrl(input *DescribeInstanceVncUrlInput) (*DescribeInstanceVncUrlOutput, error) { + req, out := c.DescribeInstanceVncUrlRequest(input) + return out, req.Send() +} + +// DescribeInstanceVncUrlWithContext is the same as DescribeInstanceVncUrl with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeInstanceVncUrl for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur. +// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ECS) DescribeInstanceVncUrlWithContext(ctx volcengine.Context, input *DescribeInstanceVncUrlInput, opts ...request.Option) (*DescribeInstanceVncUrlOutput, error) { + req, out := c.DescribeInstanceVncUrlRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +type DescribeInstanceVncUrlInput struct { + _ struct{} `type:"structure"` + + InstanceId *string `type:"string"` +} + +// String returns the string representation +func (s DescribeInstanceVncUrlInput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeInstanceVncUrlInput) GoString() string { + return s.String() +} + +// SetInstanceId sets the InstanceId field's value.
+func (s *DescribeInstanceVncUrlInput) SetInstanceId(v string) *DescribeInstanceVncUrlInput { + s.InstanceId = &v + return s +} + +type DescribeInstanceVncUrlOutput struct { + _ struct{} `type:"structure"` + + Metadata *response.ResponseMetadata + + VncUrl *string `type:"string"` +} + +// String returns the string representation +func (s DescribeInstanceVncUrlOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeInstanceVncUrlOutput) GoString() string { + return s.String() +} + +// SetVncUrl sets the VncUrl field's value. +func (s *DescribeInstanceVncUrlOutput) SetVncUrl(v string) *DescribeInstanceVncUrlOutput { + s.VncUrl = &v + return s +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_describe_instances.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_describe_instances.go new file mode 100644 index 000000000000..3ce6e3e62d43 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_describe_instances.go @@ -0,0 +1,820 @@ +/* +Copyright The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +// Code generated by volcengine with private/model/cli/gen-api/main.go. DO NOT EDIT. 
+ +package ecs + +import ( + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/response" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil" +) + +const opDescribeInstancesCommon = "DescribeInstances" + +// DescribeInstancesCommonRequest generates a "volcengine/request.Request" representing the +// client's request for the DescribeInstancesCommon operation. The "output" return +// value will be populated with the DescribeInstancesCommon request's response once the request completes +// successfully. +// +// Use "Send" method on the returned DescribeInstancesCommon Request to send the API call to the service. +// the "output" return value is not valid until after DescribeInstancesCommon Send returns without error. +// +// See DescribeInstancesCommon for more information on using the DescribeInstancesCommon +// API call, and error handling. +// +// // Example sending a request using the DescribeInstancesCommonRequest method. +// req, resp := client.DescribeInstancesCommonRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *ECS) DescribeInstancesCommonRequest(input *map[string]interface{}) (req *request.Request, output *map[string]interface{}) { + op := &request.Operation{ + Name: opDescribeInstancesCommon, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &map[string]interface{}{} + } + + output = &map[string]interface{}{} + req = c.newRequest(op, input, output) + + return +} + +// DescribeInstancesCommon API operation for ECS. +// +// Returns volcengineerr.Error for service API and SDK errors. 
Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the VOLCENGINE API reference guide for ECS's +// API operation DescribeInstancesCommon for usage and error information. +func (c *ECS) DescribeInstancesCommon(input *map[string]interface{}) (*map[string]interface{}, error) { + req, out := c.DescribeInstancesCommonRequest(input) + return out, req.Send() +} + +// DescribeInstancesCommonWithContext is the same as DescribeInstancesCommon with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeInstancesCommon for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur. +// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ECS) DescribeInstancesCommonWithContext(ctx volcengine.Context, input *map[string]interface{}, opts ...request.Option) (*map[string]interface{}, error) { + req, out := c.DescribeInstancesCommonRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDescribeInstances = "DescribeInstances" + +// DescribeInstancesRequest generates a "volcengine/request.Request" representing the +// client's request for the DescribeInstances operation. The "output" return +// value will be populated with the DescribeInstances request's response once the request completes +// successfully. +// +// Use "Send" method on the returned DescribeInstances Request to send the API call to the service. +// the "output" return value is not valid until after DescribeInstances Send returns without error. +// +// See DescribeInstances for more information on using the DescribeInstances +// API call, and error handling.
+// +// // Example sending a request using the DescribeInstancesRequest method. +// req, resp := client.DescribeInstancesRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *ECS) DescribeInstancesRequest(input *DescribeInstancesInput) (req *request.Request, output *DescribeInstancesOutput) { + op := &request.Operation{ + Name: opDescribeInstances, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &DescribeInstancesInput{} + } + + output = &DescribeInstancesOutput{} + req = c.newRequest(op, input, output) + + return +} + +// DescribeInstances API operation for ECS. +// +// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the VOLCENGINE API reference guide for ECS's +// API operation DescribeInstances for usage and error information. +func (c *ECS) DescribeInstances(input *DescribeInstancesInput) (*DescribeInstancesOutput, error) { + req, out := c.DescribeInstancesRequest(input) + return out, req.Send() +} + +// DescribeInstancesWithContext is the same as DescribeInstances with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeInstances for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur. +// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ECS) DescribeInstancesWithContext(ctx volcengine.Context, input *DescribeInstancesInput, opts ...request.Option) (*DescribeInstancesOutput, error) { + req, out := c.DescribeInstancesRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...)
+ return out, req.Send() +} + +type CpuOptionsForDescribeInstancesOutput struct { + _ struct{} `type:"structure"` + + CoreCount *int32 `type:"int32"` + + ThreadsPerCore *int32 `type:"int32"` +} + +// String returns the string representation +func (s CpuOptionsForDescribeInstancesOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s CpuOptionsForDescribeInstancesOutput) GoString() string { + return s.String() +} + +// SetCoreCount sets the CoreCount field's value. +func (s *CpuOptionsForDescribeInstancesOutput) SetCoreCount(v int32) *CpuOptionsForDescribeInstancesOutput { + s.CoreCount = &v + return s +} + +// SetThreadsPerCore sets the ThreadsPerCore field's value. +func (s *CpuOptionsForDescribeInstancesOutput) SetThreadsPerCore(v int32) *CpuOptionsForDescribeInstancesOutput { + s.ThreadsPerCore = &v + return s +} + +type DescribeInstancesInput struct { + _ struct{} `type:"structure"` + + DeploymentSetIds []*string `type:"list"` + + HpcClusterId *string `type:"string"` + + InstanceChargeType *string `type:"string"` + + InstanceIds []*string `type:"list"` + + InstanceName *string `type:"string"` + + InstanceTypeFamilies []*string `type:"list"` + + InstanceTypeIds []*string `type:"list"` + + InstanceTypes []*string `type:"list"` + + KeyPairName *string `type:"string"` + + MaxResults *int32 `type:"int32"` + + NextToken *string `type:"string"` + + PrimaryIpAddress *string `type:"string"` + + ProjectName *string `type:"string"` + + Status *string `type:"string"` + + TagFilters []*TagFilterForDescribeInstancesInput `type:"list"` + + VpcId *string `type:"string"` + + ZoneId *string `type:"string"` +} + +// String returns the string representation +func (s DescribeInstancesInput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeInstancesInput) GoString() string { + return s.String() +} + +// SetDeploymentSetIds sets the 
DeploymentSetIds field's value. +func (s *DescribeInstancesInput) SetDeploymentSetIds(v []*string) *DescribeInstancesInput { + s.DeploymentSetIds = v + return s +} + +// SetHpcClusterId sets the HpcClusterId field's value. +func (s *DescribeInstancesInput) SetHpcClusterId(v string) *DescribeInstancesInput { + s.HpcClusterId = &v + return s +} + +// SetInstanceChargeType sets the InstanceChargeType field's value. +func (s *DescribeInstancesInput) SetInstanceChargeType(v string) *DescribeInstancesInput { + s.InstanceChargeType = &v + return s +} + +// SetInstanceIds sets the InstanceIds field's value. +func (s *DescribeInstancesInput) SetInstanceIds(v []*string) *DescribeInstancesInput { + s.InstanceIds = v + return s +} + +// SetInstanceName sets the InstanceName field's value. +func (s *DescribeInstancesInput) SetInstanceName(v string) *DescribeInstancesInput { + s.InstanceName = &v + return s +} + +// SetInstanceTypeFamilies sets the InstanceTypeFamilies field's value. +func (s *DescribeInstancesInput) SetInstanceTypeFamilies(v []*string) *DescribeInstancesInput { + s.InstanceTypeFamilies = v + return s +} + +// SetInstanceTypeIds sets the InstanceTypeIds field's value. +func (s *DescribeInstancesInput) SetInstanceTypeIds(v []*string) *DescribeInstancesInput { + s.InstanceTypeIds = v + return s +} + +// SetInstanceTypes sets the InstanceTypes field's value. +func (s *DescribeInstancesInput) SetInstanceTypes(v []*string) *DescribeInstancesInput { + s.InstanceTypes = v + return s +} + +// SetKeyPairName sets the KeyPairName field's value. +func (s *DescribeInstancesInput) SetKeyPairName(v string) *DescribeInstancesInput { + s.KeyPairName = &v + return s +} + +// SetMaxResults sets the MaxResults field's value. +func (s *DescribeInstancesInput) SetMaxResults(v int32) *DescribeInstancesInput { + s.MaxResults = &v + return s +} + +// SetNextToken sets the NextToken field's value. 
+func (s *DescribeInstancesInput) SetNextToken(v string) *DescribeInstancesInput { + s.NextToken = &v + return s +} + +// SetPrimaryIpAddress sets the PrimaryIpAddress field's value. +func (s *DescribeInstancesInput) SetPrimaryIpAddress(v string) *DescribeInstancesInput { + s.PrimaryIpAddress = &v + return s +} + +// SetProjectName sets the ProjectName field's value. +func (s *DescribeInstancesInput) SetProjectName(v string) *DescribeInstancesInput { + s.ProjectName = &v + return s +} + +// SetStatus sets the Status field's value. +func (s *DescribeInstancesInput) SetStatus(v string) *DescribeInstancesInput { + s.Status = &v + return s +} + +// SetTagFilters sets the TagFilters field's value. +func (s *DescribeInstancesInput) SetTagFilters(v []*TagFilterForDescribeInstancesInput) *DescribeInstancesInput { + s.TagFilters = v + return s +} + +// SetVpcId sets the VpcId field's value. +func (s *DescribeInstancesInput) SetVpcId(v string) *DescribeInstancesInput { + s.VpcId = &v + return s +} + +// SetZoneId sets the ZoneId field's value. +func (s *DescribeInstancesInput) SetZoneId(v string) *DescribeInstancesInput { + s.ZoneId = &v + return s +} + +type DescribeInstancesOutput struct { + _ struct{} `type:"structure"` + + Metadata *response.ResponseMetadata + + Instances []*InstanceForDescribeInstancesOutput `type:"list"` + + NextToken *string `type:"string"` + + TotalCount *int32 `type:"int32"` +} + +// String returns the string representation +func (s DescribeInstancesOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeInstancesOutput) GoString() string { + return s.String() +} + +// SetInstances sets the Instances field's value. +func (s *DescribeInstancesOutput) SetInstances(v []*InstanceForDescribeInstancesOutput) *DescribeInstancesOutput { + s.Instances = v + return s +} + +// SetNextToken sets the NextToken field's value. 
+func (s *DescribeInstancesOutput) SetNextToken(v string) *DescribeInstancesOutput { + s.NextToken = &v + return s +} + +// SetTotalCount sets the TotalCount field's value. +func (s *DescribeInstancesOutput) SetTotalCount(v int32) *DescribeInstancesOutput { + s.TotalCount = &v + return s +} + +type EipAddressForDescribeInstancesOutput struct { + _ struct{} `type:"structure"` + + AllocationId *string `type:"string"` +} + +// String returns the string representation +func (s EipAddressForDescribeInstancesOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s EipAddressForDescribeInstancesOutput) GoString() string { + return s.String() +} + +// SetAllocationId sets the AllocationId field's value. +func (s *EipAddressForDescribeInstancesOutput) SetAllocationId(v string) *EipAddressForDescribeInstancesOutput { + s.AllocationId = &v + return s +} + +type InstanceForDescribeInstancesOutput struct { + _ struct{} `type:"structure"` + + CpuOptions *CpuOptionsForDescribeInstancesOutput `type:"structure"` + + Cpus *int32 `type:"int32"` + + CreatedAt *string `type:"string"` + + DeploymentSetId *string `type:"string"` + + Description *string `type:"string"` + + EipAddress *EipAddressForDescribeInstancesOutput `type:"structure"` + + ExpiredAt *string `type:"string"` + + HostName *string `type:"string"` + + Hostname *string `type:"string"` + + ImageId *string `type:"string"` + + InstanceChargeType *string `type:"string"` + + InstanceId *string `type:"string"` + + InstanceName *string `type:"string"` + + InstanceTypeId *string `type:"string"` + + KeyPairId *string `type:"string"` + + KeyPairName *string `type:"string"` + + LocalVolumes []*LocalVolumeForDescribeInstancesOutput `type:"list"` + + MemorySize *int32 `type:"int32"` + + NetworkInterfaces []*NetworkInterfaceForDescribeInstancesOutput `type:"list"` + + OsName *string `type:"string"` + + OsType *string `type:"string"` + + ProjectName *string `type:"string"` 
+ + RdmaIpAddresses []*string `type:"list"` + + SpotStrategy *string `type:"string"` + + Status *string `type:"string"` + + StoppedMode *string `type:"string"` + + Tags []*TagForDescribeInstancesOutput `type:"list"` + + UpdatedAt *string `type:"string"` + + Uuid *string `type:"string"` + + VpcId *string `type:"string"` + + ZoneId *string `type:"string"` +} + +// String returns the string representation +func (s InstanceForDescribeInstancesOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s InstanceForDescribeInstancesOutput) GoString() string { + return s.String() +} + +// SetCpuOptions sets the CpuOptions field's value. +func (s *InstanceForDescribeInstancesOutput) SetCpuOptions(v *CpuOptionsForDescribeInstancesOutput) *InstanceForDescribeInstancesOutput { + s.CpuOptions = v + return s +} + +// SetCpus sets the Cpus field's value. +func (s *InstanceForDescribeInstancesOutput) SetCpus(v int32) *InstanceForDescribeInstancesOutput { + s.Cpus = &v + return s +} + +// SetCreatedAt sets the CreatedAt field's value. +func (s *InstanceForDescribeInstancesOutput) SetCreatedAt(v string) *InstanceForDescribeInstancesOutput { + s.CreatedAt = &v + return s +} + +// SetDeploymentSetId sets the DeploymentSetId field's value. +func (s *InstanceForDescribeInstancesOutput) SetDeploymentSetId(v string) *InstanceForDescribeInstancesOutput { + s.DeploymentSetId = &v + return s +} + +// SetDescription sets the Description field's value. +func (s *InstanceForDescribeInstancesOutput) SetDescription(v string) *InstanceForDescribeInstancesOutput { + s.Description = &v + return s +} + +// SetEipAddress sets the EipAddress field's value. +func (s *InstanceForDescribeInstancesOutput) SetEipAddress(v *EipAddressForDescribeInstancesOutput) *InstanceForDescribeInstancesOutput { + s.EipAddress = v + return s +} + +// SetExpiredAt sets the ExpiredAt field's value. 
+func (s *InstanceForDescribeInstancesOutput) SetExpiredAt(v string) *InstanceForDescribeInstancesOutput { + s.ExpiredAt = &v + return s +} + +// SetHostName sets the HostName field's value. +func (s *InstanceForDescribeInstancesOutput) SetHostName(v string) *InstanceForDescribeInstancesOutput { + s.HostName = &v + return s +} + +// SetHostname sets the Hostname field's value. +func (s *InstanceForDescribeInstancesOutput) SetHostname(v string) *InstanceForDescribeInstancesOutput { + s.Hostname = &v + return s +} + +// SetImageId sets the ImageId field's value. +func (s *InstanceForDescribeInstancesOutput) SetImageId(v string) *InstanceForDescribeInstancesOutput { + s.ImageId = &v + return s +} + +// SetInstanceChargeType sets the InstanceChargeType field's value. +func (s *InstanceForDescribeInstancesOutput) SetInstanceChargeType(v string) *InstanceForDescribeInstancesOutput { + s.InstanceChargeType = &v + return s +} + +// SetInstanceId sets the InstanceId field's value. +func (s *InstanceForDescribeInstancesOutput) SetInstanceId(v string) *InstanceForDescribeInstancesOutput { + s.InstanceId = &v + return s +} + +// SetInstanceName sets the InstanceName field's value. +func (s *InstanceForDescribeInstancesOutput) SetInstanceName(v string) *InstanceForDescribeInstancesOutput { + s.InstanceName = &v + return s +} + +// SetInstanceTypeId sets the InstanceTypeId field's value. +func (s *InstanceForDescribeInstancesOutput) SetInstanceTypeId(v string) *InstanceForDescribeInstancesOutput { + s.InstanceTypeId = &v + return s +} + +// SetKeyPairId sets the KeyPairId field's value. +func (s *InstanceForDescribeInstancesOutput) SetKeyPairId(v string) *InstanceForDescribeInstancesOutput { + s.KeyPairId = &v + return s +} + +// SetKeyPairName sets the KeyPairName field's value. +func (s *InstanceForDescribeInstancesOutput) SetKeyPairName(v string) *InstanceForDescribeInstancesOutput { + s.KeyPairName = &v + return s +} + +// SetLocalVolumes sets the LocalVolumes field's value. 
+func (s *InstanceForDescribeInstancesOutput) SetLocalVolumes(v []*LocalVolumeForDescribeInstancesOutput) *InstanceForDescribeInstancesOutput { + s.LocalVolumes = v + return s +} + +// SetMemorySize sets the MemorySize field's value. +func (s *InstanceForDescribeInstancesOutput) SetMemorySize(v int32) *InstanceForDescribeInstancesOutput { + s.MemorySize = &v + return s +} + +// SetNetworkInterfaces sets the NetworkInterfaces field's value. +func (s *InstanceForDescribeInstancesOutput) SetNetworkInterfaces(v []*NetworkInterfaceForDescribeInstancesOutput) *InstanceForDescribeInstancesOutput { + s.NetworkInterfaces = v + return s +} + +// SetOsName sets the OsName field's value. +func (s *InstanceForDescribeInstancesOutput) SetOsName(v string) *InstanceForDescribeInstancesOutput { + s.OsName = &v + return s +} + +// SetOsType sets the OsType field's value. +func (s *InstanceForDescribeInstancesOutput) SetOsType(v string) *InstanceForDescribeInstancesOutput { + s.OsType = &v + return s +} + +// SetProjectName sets the ProjectName field's value. +func (s *InstanceForDescribeInstancesOutput) SetProjectName(v string) *InstanceForDescribeInstancesOutput { + s.ProjectName = &v + return s +} + +// SetRdmaIpAddresses sets the RdmaIpAddresses field's value. +func (s *InstanceForDescribeInstancesOutput) SetRdmaIpAddresses(v []*string) *InstanceForDescribeInstancesOutput { + s.RdmaIpAddresses = v + return s +} + +// SetSpotStrategy sets the SpotStrategy field's value. +func (s *InstanceForDescribeInstancesOutput) SetSpotStrategy(v string) *InstanceForDescribeInstancesOutput { + s.SpotStrategy = &v + return s +} + +// SetStatus sets the Status field's value. +func (s *InstanceForDescribeInstancesOutput) SetStatus(v string) *InstanceForDescribeInstancesOutput { + s.Status = &v + return s +} + +// SetStoppedMode sets the StoppedMode field's value. 
+func (s *InstanceForDescribeInstancesOutput) SetStoppedMode(v string) *InstanceForDescribeInstancesOutput { + s.StoppedMode = &v + return s +} + +// SetTags sets the Tags field's value. +func (s *InstanceForDescribeInstancesOutput) SetTags(v []*TagForDescribeInstancesOutput) *InstanceForDescribeInstancesOutput { + s.Tags = v + return s +} + +// SetUpdatedAt sets the UpdatedAt field's value. +func (s *InstanceForDescribeInstancesOutput) SetUpdatedAt(v string) *InstanceForDescribeInstancesOutput { + s.UpdatedAt = &v + return s +} + +// SetUuid sets the Uuid field's value. +func (s *InstanceForDescribeInstancesOutput) SetUuid(v string) *InstanceForDescribeInstancesOutput { + s.Uuid = &v + return s +} + +// SetVpcId sets the VpcId field's value. +func (s *InstanceForDescribeInstancesOutput) SetVpcId(v string) *InstanceForDescribeInstancesOutput { + s.VpcId = &v + return s +} + +// SetZoneId sets the ZoneId field's value. +func (s *InstanceForDescribeInstancesOutput) SetZoneId(v string) *InstanceForDescribeInstancesOutput { + s.ZoneId = &v + return s +} + +type LocalVolumeForDescribeInstancesOutput struct { + _ struct{} `type:"structure"` + + Count *int32 `type:"int32"` + + Size *int32 `type:"int32"` + + VolumeType *string `type:"string"` +} + +// String returns the string representation +func (s LocalVolumeForDescribeInstancesOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s LocalVolumeForDescribeInstancesOutput) GoString() string { + return s.String() +} + +// SetCount sets the Count field's value. +func (s *LocalVolumeForDescribeInstancesOutput) SetCount(v int32) *LocalVolumeForDescribeInstancesOutput { + s.Count = &v + return s +} + +// SetSize sets the Size field's value. +func (s *LocalVolumeForDescribeInstancesOutput) SetSize(v int32) *LocalVolumeForDescribeInstancesOutput { + s.Size = &v + return s +} + +// SetVolumeType sets the VolumeType field's value. 
+func (s *LocalVolumeForDescribeInstancesOutput) SetVolumeType(v string) *LocalVolumeForDescribeInstancesOutput { + s.VolumeType = &v + return s +} + +type NetworkInterfaceForDescribeInstancesOutput struct { + _ struct{} `type:"structure"` + + MacAddress *string `type:"string"` + + NetworkInterfaceId *string `type:"string"` + + PrimaryIpAddress *string `type:"string"` + + SubnetId *string `type:"string"` + + Type *string `type:"string"` + + VpcId *string `type:"string"` +} + +// String returns the string representation +func (s NetworkInterfaceForDescribeInstancesOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s NetworkInterfaceForDescribeInstancesOutput) GoString() string { + return s.String() +} + +// SetMacAddress sets the MacAddress field's value. +func (s *NetworkInterfaceForDescribeInstancesOutput) SetMacAddress(v string) *NetworkInterfaceForDescribeInstancesOutput { + s.MacAddress = &v + return s +} + +// SetNetworkInterfaceId sets the NetworkInterfaceId field's value. +func (s *NetworkInterfaceForDescribeInstancesOutput) SetNetworkInterfaceId(v string) *NetworkInterfaceForDescribeInstancesOutput { + s.NetworkInterfaceId = &v + return s +} + +// SetPrimaryIpAddress sets the PrimaryIpAddress field's value. +func (s *NetworkInterfaceForDescribeInstancesOutput) SetPrimaryIpAddress(v string) *NetworkInterfaceForDescribeInstancesOutput { + s.PrimaryIpAddress = &v + return s +} + +// SetSubnetId sets the SubnetId field's value. +func (s *NetworkInterfaceForDescribeInstancesOutput) SetSubnetId(v string) *NetworkInterfaceForDescribeInstancesOutput { + s.SubnetId = &v + return s +} + +// SetType sets the Type field's value. +func (s *NetworkInterfaceForDescribeInstancesOutput) SetType(v string) *NetworkInterfaceForDescribeInstancesOutput { + s.Type = &v + return s +} + +// SetVpcId sets the VpcId field's value. 
+func (s *NetworkInterfaceForDescribeInstancesOutput) SetVpcId(v string) *NetworkInterfaceForDescribeInstancesOutput { + s.VpcId = &v + return s +} + +type TagFilterForDescribeInstancesInput struct { + _ struct{} `type:"structure"` + + Key *string `type:"string"` + + Values []*string `type:"list"` +} + +// String returns the string representation +func (s TagFilterForDescribeInstancesInput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s TagFilterForDescribeInstancesInput) GoString() string { + return s.String() +} + +// SetKey sets the Key field's value. +func (s *TagFilterForDescribeInstancesInput) SetKey(v string) *TagFilterForDescribeInstancesInput { + s.Key = &v + return s +} + +// SetValues sets the Values field's value. +func (s *TagFilterForDescribeInstancesInput) SetValues(v []*string) *TagFilterForDescribeInstancesInput { + s.Values = v + return s +} + +type TagForDescribeInstancesOutput struct { + _ struct{} `type:"structure"` + + Key *string `type:"string"` + + Value *string `type:"string"` +} + +// String returns the string representation +func (s TagForDescribeInstancesOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s TagForDescribeInstancesOutput) GoString() string { + return s.String() +} + +// SetKey sets the Key field's value. +func (s *TagForDescribeInstancesOutput) SetKey(v string) *TagForDescribeInstancesOutput { + s.Key = &v + return s +} + +// SetValue sets the Value field's value. 
+func (s *TagForDescribeInstancesOutput) SetValue(v string) *TagForDescribeInstancesOutput { + s.Value = &v + return s +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_describe_instances_iam_roles.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_describe_instances_iam_roles.go new file mode 100644 index 000000000000..0d3f327e3ab5 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_describe_instances_iam_roles.go @@ -0,0 +1,240 @@ +// Code generated by volcengine with private/model/cli/gen-api/main.go. DO NOT EDIT. + +package ecs + +import ( + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/response" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil" +) + +const opDescribeInstancesIamRolesCommon = "DescribeInstancesIamRoles" + +// DescribeInstancesIamRolesCommonRequest generates a "volcengine/request.Request" representing the +// client's request for the DescribeInstancesIamRolesCommon operation. The "output" return +// value will be populated with the DescribeInstancesIamRolesCommon request's response once the request completes +// successfully. +// +// Use "Send" method on the returned DescribeInstancesIamRolesCommon Request to send the API call to the service. +// the "output" return value is not valid until after DescribeInstancesIamRolesCommon Send returns without error. +// +// See DescribeInstancesIamRolesCommon for more information on using the DescribeInstancesIamRolesCommon +// API call, and error handling. +// +// // Example sending a request using the DescribeInstancesIamRolesCommonRequest method. 
+// req, resp := client.DescribeInstancesIamRolesCommonRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *ECS) DescribeInstancesIamRolesCommonRequest(input *map[string]interface{}) (req *request.Request, output *map[string]interface{}) { + op := &request.Operation{ + Name: opDescribeInstancesIamRolesCommon, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &map[string]interface{}{} + } + + output = &map[string]interface{}{} + req = c.newRequest(op, input, output) + + return +} + +// DescribeInstancesIamRolesCommon API operation for ECS. +// +// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the VOLCENGINE API reference guide for ECS's +// API operation DescribeInstancesIamRolesCommon for usage and error information. +func (c *ECS) DescribeInstancesIamRolesCommon(input *map[string]interface{}) (*map[string]interface{}, error) { + req, out := c.DescribeInstancesIamRolesCommonRequest(input) + return out, req.Send() +} + +// DescribeInstancesIamRolesCommonWithContext is the same as DescribeInstancesIamRolesCommon with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeInstancesIamRolesCommon for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur. +// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
+func (c *ECS) DescribeInstancesIamRolesCommonWithContext(ctx volcengine.Context, input *map[string]interface{}, opts ...request.Option) (*map[string]interface{}, error) {
+ req, out := c.DescribeInstancesIamRolesCommonRequest(input)
+ req.SetContext(ctx)
+ req.ApplyOptions(opts...)
+ return out, req.Send()
+}
+
+const opDescribeInstancesIamRoles = "DescribeInstancesIamRoles"
+
+// DescribeInstancesIamRolesRequest generates a "volcengine/request.Request" representing the
+// client's request for the DescribeInstancesIamRoles operation. The "output" return
+// value will be populated with the DescribeInstancesIamRoles request's response once the request completes
+// successfully.
+//
+// Use "Send" method on the returned DescribeInstancesIamRoles Request to send the API call to the service.
+// the "output" return value is not valid until after DescribeInstancesIamRoles Send returns without error.
+//
+// See DescribeInstancesIamRoles for more information on using the DescribeInstancesIamRoles
+// API call, and error handling.
+//
+// // Example sending a request using the DescribeInstancesIamRolesRequest method.
+// req, resp := client.DescribeInstancesIamRolesRequest(params)
+//
+// err := req.Send()
+// if err == nil { // resp is now filled
+// fmt.Println(resp)
+// }
+func (c *ECS) DescribeInstancesIamRolesRequest(input *DescribeInstancesIamRolesInput) (req *request.Request, output *DescribeInstancesIamRolesOutput) {
+ op := &request.Operation{
+ Name: opDescribeInstancesIamRoles,
+ HTTPMethod: "GET",
+ HTTPPath: "/",
+ }
+
+ if input == nil {
+ input = &DescribeInstancesIamRolesInput{}
+ }
+
+ output = &DescribeInstancesIamRolesOutput{}
+ req = c.newRequest(op, input, output)
+
+ return
+}
+
+// DescribeInstancesIamRoles API operation for ECS.
+//
+// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions
+// with volcengineerr.Error's Code and Message methods to get detailed information about
+// the error. 
+//
+// See the VOLCENGINE API reference guide for ECS's
+// API operation DescribeInstancesIamRoles for usage and error information.
+func (c *ECS) DescribeInstancesIamRoles(input *DescribeInstancesIamRolesInput) (*DescribeInstancesIamRolesOutput, error) {
+ req, out := c.DescribeInstancesIamRolesRequest(input)
+ return out, req.Send()
+}
+
+// DescribeInstancesIamRolesWithContext is the same as DescribeInstancesIamRoles with the addition of
+// the ability to pass a context and additional request options.
+//
+// See DescribeInstancesIamRoles for details on how to use this API operation.
+//
+// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur.
+// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/
+// for more information on using Contexts.
+func (c *ECS) DescribeInstancesIamRolesWithContext(ctx volcengine.Context, input *DescribeInstancesIamRolesInput, opts ...request.Option) (*DescribeInstancesIamRolesOutput, error) {
+ req, out := c.DescribeInstancesIamRolesRequest(input)
+ req.SetContext(ctx)
+ req.ApplyOptions(opts...)
+ return out, req.Send()
+}
+
+type DescribeInstancesIamRolesInput struct {
+ _ struct{} `type:"structure"`
+
+ InstanceIds []*string `type:"list"`
+
+ MaxResults *int32 `type:"int32"`
+
+ NextToken *string `type:"string"`
+}
+
+// String returns the string representation
+func (s DescribeInstancesIamRolesInput) String() string {
+ return volcengineutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s DescribeInstancesIamRolesInput) GoString() string {
+ return s.String()
+}
+
+// SetInstanceIds sets the InstanceIds field's value.
+func (s *DescribeInstancesIamRolesInput) SetInstanceIds(v []*string) *DescribeInstancesIamRolesInput {
+ s.InstanceIds = v
+ return s
+}
+
+// SetMaxResults sets the MaxResults field's value. 
+func (s *DescribeInstancesIamRolesInput) SetMaxResults(v int32) *DescribeInstancesIamRolesInput { + s.MaxResults = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeInstancesIamRolesInput) SetNextToken(v string) *DescribeInstancesIamRolesInput { + s.NextToken = &v + return s +} + +type DescribeInstancesIamRolesOutput struct { + _ struct{} `type:"structure"` + + Metadata *response.ResponseMetadata + + InstancesIamRoles []*InstancesIamRoleForDescribeInstancesIamRolesOutput `type:"list"` + + NextToken *string `type:"string"` +} + +// String returns the string representation +func (s DescribeInstancesIamRolesOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeInstancesIamRolesOutput) GoString() string { + return s.String() +} + +// SetInstancesIamRoles sets the InstancesIamRoles field's value. +func (s *DescribeInstancesIamRolesOutput) SetInstancesIamRoles(v []*InstancesIamRoleForDescribeInstancesIamRolesOutput) *DescribeInstancesIamRolesOutput { + s.InstancesIamRoles = v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeInstancesIamRolesOutput) SetNextToken(v string) *DescribeInstancesIamRolesOutput { + s.NextToken = &v + return s +} + +type InstancesIamRoleForDescribeInstancesIamRolesOutput struct { + _ struct{} `type:"structure"` + + InstanceId *string `type:"string"` + + RoleNames []*string `type:"list"` +} + +// String returns the string representation +func (s InstancesIamRoleForDescribeInstancesIamRolesOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s InstancesIamRoleForDescribeInstancesIamRolesOutput) GoString() string { + return s.String() +} + +// SetInstanceId sets the InstanceId field's value. 
+func (s *InstancesIamRoleForDescribeInstancesIamRolesOutput) SetInstanceId(v string) *InstancesIamRoleForDescribeInstancesIamRolesOutput { + s.InstanceId = &v + return s +} + +// SetRoleNames sets the RoleNames field's value. +func (s *InstancesIamRoleForDescribeInstancesIamRolesOutput) SetRoleNames(v []*string) *InstancesIamRoleForDescribeInstancesIamRolesOutput { + s.RoleNames = v + return s +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_describe_key_pairs.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_describe_key_pairs.go new file mode 100644 index 000000000000..6c64b9c79183 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_describe_key_pairs.go @@ -0,0 +1,296 @@ +// Code generated by volcengine with private/model/cli/gen-api/main.go. DO NOT EDIT. + +package ecs + +import ( + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/response" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil" +) + +const opDescribeKeyPairsCommon = "DescribeKeyPairs" + +// DescribeKeyPairsCommonRequest generates a "volcengine/request.Request" representing the +// client's request for the DescribeKeyPairsCommon operation. The "output" return +// value will be populated with the DescribeKeyPairsCommon request's response once the request completes +// successfully. +// +// Use "Send" method on the returned DescribeKeyPairsCommon Request to send the API call to the service. +// the "output" return value is not valid until after DescribeKeyPairsCommon Send returns without error. 
+// +// See DescribeKeyPairsCommon for more information on using the DescribeKeyPairsCommon +// API call, and error handling. +// +// // Example sending a request using the DescribeKeyPairsCommonRequest method. +// req, resp := client.DescribeKeyPairsCommonRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *ECS) DescribeKeyPairsCommonRequest(input *map[string]interface{}) (req *request.Request, output *map[string]interface{}) { + op := &request.Operation{ + Name: opDescribeKeyPairsCommon, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &map[string]interface{}{} + } + + output = &map[string]interface{}{} + req = c.newRequest(op, input, output) + + return +} + +// DescribeKeyPairsCommon API operation for ECS. +// +// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the VOLCENGINE API reference guide for ECS's +// API operation DescribeKeyPairsCommon for usage and error information. +func (c *ECS) DescribeKeyPairsCommon(input *map[string]interface{}) (*map[string]interface{}, error) { + req, out := c.DescribeKeyPairsCommonRequest(input) + return out, req.Send() +} + +// DescribeKeyPairsCommonWithContext is the same as DescribeKeyPairsCommon with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeKeyPairsCommon for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur. +// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
+func (c *ECS) DescribeKeyPairsCommonWithContext(ctx volcengine.Context, input *map[string]interface{}, opts ...request.Option) (*map[string]interface{}, error) {
+ req, out := c.DescribeKeyPairsCommonRequest(input)
+ req.SetContext(ctx)
+ req.ApplyOptions(opts...)
+ return out, req.Send()
+}
+
+const opDescribeKeyPairs = "DescribeKeyPairs"
+
+// DescribeKeyPairsRequest generates a "volcengine/request.Request" representing the
+// client's request for the DescribeKeyPairs operation. The "output" return
+// value will be populated with the DescribeKeyPairs request's response once the request completes
+// successfully.
+//
+// Use "Send" method on the returned DescribeKeyPairs Request to send the API call to the service.
+// the "output" return value is not valid until after DescribeKeyPairs Send returns without error.
+//
+// See DescribeKeyPairs for more information on using the DescribeKeyPairs
+// API call, and error handling.
+//
+// // Example sending a request using the DescribeKeyPairsRequest method.
+// req, resp := client.DescribeKeyPairsRequest(params)
+//
+// err := req.Send()
+// if err == nil { // resp is now filled
+// fmt.Println(resp)
+// }
+func (c *ECS) DescribeKeyPairsRequest(input *DescribeKeyPairsInput) (req *request.Request, output *DescribeKeyPairsOutput) {
+ op := &request.Operation{
+ Name: opDescribeKeyPairs,
+ HTTPMethod: "GET",
+ HTTPPath: "/",
+ }
+
+ if input == nil {
+ input = &DescribeKeyPairsInput{}
+ }
+
+ output = &DescribeKeyPairsOutput{}
+ req = c.newRequest(op, input, output)
+
+ return
+}
+
+// DescribeKeyPairs API operation for ECS.
+//
+// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions
+// with volcengineerr.Error's Code and Message methods to get detailed information about
+// the error.
+//
+// See the VOLCENGINE API reference guide for ECS's
+// API operation DescribeKeyPairs for usage and error information. 
+func (c *ECS) DescribeKeyPairs(input *DescribeKeyPairsInput) (*DescribeKeyPairsOutput, error) {
+ req, out := c.DescribeKeyPairsRequest(input)
+ return out, req.Send()
+}
+
+// DescribeKeyPairsWithContext is the same as DescribeKeyPairs with the addition of
+// the ability to pass a context and additional request options.
+//
+// See DescribeKeyPairs for details on how to use this API operation.
+//
+// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur.
+// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/
+// for more information on using Contexts.
+func (c *ECS) DescribeKeyPairsWithContext(ctx volcengine.Context, input *DescribeKeyPairsInput, opts ...request.Option) (*DescribeKeyPairsOutput, error) {
+ req, out := c.DescribeKeyPairsRequest(input)
+ req.SetContext(ctx)
+ req.ApplyOptions(opts...)
+ return out, req.Send()
+}
+
+type DescribeKeyPairsInput struct {
+ _ struct{} `type:"structure"`
+
+ FingerPrint *string `type:"string"`
+
+ KeyPairIds []*string `type:"list"`
+
+ KeyPairName *string `type:"string"`
+
+ KeyPairNames []*string `type:"list"`
+
+ MaxResults *int32 `type:"int32"`
+
+ NextToken *string `type:"string"`
+}
+
+// String returns the string representation
+func (s DescribeKeyPairsInput) String() string {
+ return volcengineutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s DescribeKeyPairsInput) GoString() string {
+ return s.String()
+}
+
+// SetFingerPrint sets the FingerPrint field's value.
+func (s *DescribeKeyPairsInput) SetFingerPrint(v string) *DescribeKeyPairsInput {
+ s.FingerPrint = &v
+ return s
+}
+
+// SetKeyPairIds sets the KeyPairIds field's value.
+func (s *DescribeKeyPairsInput) SetKeyPairIds(v []*string) *DescribeKeyPairsInput {
+ s.KeyPairIds = v
+ return s
+}
+
+// SetKeyPairName sets the KeyPairName field's value. 
+func (s *DescribeKeyPairsInput) SetKeyPairName(v string) *DescribeKeyPairsInput { + s.KeyPairName = &v + return s +} + +// SetKeyPairNames sets the KeyPairNames field's value. +func (s *DescribeKeyPairsInput) SetKeyPairNames(v []*string) *DescribeKeyPairsInput { + s.KeyPairNames = v + return s +} + +// SetMaxResults sets the MaxResults field's value. +func (s *DescribeKeyPairsInput) SetMaxResults(v int32) *DescribeKeyPairsInput { + s.MaxResults = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeKeyPairsInput) SetNextToken(v string) *DescribeKeyPairsInput { + s.NextToken = &v + return s +} + +type DescribeKeyPairsOutput struct { + _ struct{} `type:"structure"` + + Metadata *response.ResponseMetadata + + KeyPairs []*KeyPairForDescribeKeyPairsOutput `type:"list"` + + NextToken *string `type:"string"` +} + +// String returns the string representation +func (s DescribeKeyPairsOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeKeyPairsOutput) GoString() string { + return s.String() +} + +// SetKeyPairs sets the KeyPairs field's value. +func (s *DescribeKeyPairsOutput) SetKeyPairs(v []*KeyPairForDescribeKeyPairsOutput) *DescribeKeyPairsOutput { + s.KeyPairs = v + return s +} + +// SetNextToken sets the NextToken field's value. 
+func (s *DescribeKeyPairsOutput) SetNextToken(v string) *DescribeKeyPairsOutput { + s.NextToken = &v + return s +} + +type KeyPairForDescribeKeyPairsOutput struct { + _ struct{} `type:"structure"` + + CreatedAt *string `type:"string"` + + Description *string `type:"string"` + + FingerPrint *string `type:"string"` + + KeyPairId *string `type:"string"` + + KeyPairName *string `type:"string"` + + UpdatedAt *string `type:"string"` +} + +// String returns the string representation +func (s KeyPairForDescribeKeyPairsOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s KeyPairForDescribeKeyPairsOutput) GoString() string { + return s.String() +} + +// SetCreatedAt sets the CreatedAt field's value. +func (s *KeyPairForDescribeKeyPairsOutput) SetCreatedAt(v string) *KeyPairForDescribeKeyPairsOutput { + s.CreatedAt = &v + return s +} + +// SetDescription sets the Description field's value. +func (s *KeyPairForDescribeKeyPairsOutput) SetDescription(v string) *KeyPairForDescribeKeyPairsOutput { + s.Description = &v + return s +} + +// SetFingerPrint sets the FingerPrint field's value. +func (s *KeyPairForDescribeKeyPairsOutput) SetFingerPrint(v string) *KeyPairForDescribeKeyPairsOutput { + s.FingerPrint = &v + return s +} + +// SetKeyPairId sets the KeyPairId field's value. +func (s *KeyPairForDescribeKeyPairsOutput) SetKeyPairId(v string) *KeyPairForDescribeKeyPairsOutput { + s.KeyPairId = &v + return s +} + +// SetKeyPairName sets the KeyPairName field's value. +func (s *KeyPairForDescribeKeyPairsOutput) SetKeyPairName(v string) *KeyPairForDescribeKeyPairsOutput { + s.KeyPairName = &v + return s +} + +// SetUpdatedAt sets the UpdatedAt field's value. 
+func (s *KeyPairForDescribeKeyPairsOutput) SetUpdatedAt(v string) *KeyPairForDescribeKeyPairsOutput { + s.UpdatedAt = &v + return s +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_describe_system_events.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_describe_system_events.go new file mode 100644 index 000000000000..55414a0e6010 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_describe_system_events.go @@ -0,0 +1,418 @@ +// Code generated by volcengine with private/model/cli/gen-api/main.go. DO NOT EDIT. + +package ecs + +import ( + "encoding/json" + + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/response" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil" +) + +const opDescribeSystemEventsCommon = "DescribeSystemEvents" + +// DescribeSystemEventsCommonRequest generates a "volcengine/request.Request" representing the +// client's request for the DescribeSystemEventsCommon operation. The "output" return +// value will be populated with the DescribeSystemEventsCommon request's response once the request completes +// successfully. +// +// Use "Send" method on the returned DescribeSystemEventsCommon Request to send the API call to the service. +// the "output" return value is not valid until after DescribeSystemEventsCommon Send returns without error. +// +// See DescribeSystemEventsCommon for more information on using the DescribeSystemEventsCommon +// API call, and error handling. +// +// // Example sending a request using the DescribeSystemEventsCommonRequest method. 
+// req, resp := client.DescribeSystemEventsCommonRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *ECS) DescribeSystemEventsCommonRequest(input *map[string]interface{}) (req *request.Request, output *map[string]interface{}) { + op := &request.Operation{ + Name: opDescribeSystemEventsCommon, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &map[string]interface{}{} + } + + output = &map[string]interface{}{} + req = c.newRequest(op, input, output) + + return +} + +// DescribeSystemEventsCommon API operation for ECS. +// +// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the VOLCENGINE API reference guide for ECS's +// API operation DescribeSystemEventsCommon for usage and error information. +func (c *ECS) DescribeSystemEventsCommon(input *map[string]interface{}) (*map[string]interface{}, error) { + req, out := c.DescribeSystemEventsCommonRequest(input) + return out, req.Send() +} + +// DescribeSystemEventsCommonWithContext is the same as DescribeSystemEventsCommon with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeSystemEventsCommon for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur. +// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ECS) DescribeSystemEventsCommonWithContext(ctx volcengine.Context, input *map[string]interface{}, opts ...request.Option) (*map[string]interface{}, error) { + req, out := c.DescribeSystemEventsCommonRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+ return out, req.Send()
+}
+
+const opDescribeSystemEvents = "DescribeSystemEvents"
+
+// DescribeSystemEventsRequest generates a "volcengine/request.Request" representing the
+// client's request for the DescribeSystemEvents operation. The "output" return
+// value will be populated with the DescribeSystemEvents request's response once the request completes
+// successfully.
+//
+// Use "Send" method on the returned DescribeSystemEvents Request to send the API call to the service.
+// the "output" return value is not valid until after DescribeSystemEvents Send returns without error.
+//
+// See DescribeSystemEvents for more information on using the DescribeSystemEvents
+// API call, and error handling.
+//
+// // Example sending a request using the DescribeSystemEventsRequest method.
+// req, resp := client.DescribeSystemEventsRequest(params)
+//
+// err := req.Send()
+// if err == nil { // resp is now filled
+// fmt.Println(resp)
+// }
+func (c *ECS) DescribeSystemEventsRequest(input *DescribeSystemEventsInput) (req *request.Request, output *DescribeSystemEventsOutput) {
+ op := &request.Operation{
+ Name: opDescribeSystemEvents,
+ HTTPMethod: "GET",
+ HTTPPath: "/",
+ }
+
+ if input == nil {
+ input = &DescribeSystemEventsInput{}
+ }
+
+ output = &DescribeSystemEventsOutput{}
+ req = c.newRequest(op, input, output)
+
+ return
+}
+
+// DescribeSystemEvents API operation for ECS.
+//
+// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions
+// with volcengineerr.Error's Code and Message methods to get detailed information about
+// the error.
+//
+// See the VOLCENGINE API reference guide for ECS's
+// API operation DescribeSystemEvents for usage and error information. 
+func (c *ECS) DescribeSystemEvents(input *DescribeSystemEventsInput) (*DescribeSystemEventsOutput, error) {
+	req, out := c.DescribeSystemEventsRequest(input)
+	return out, req.Send()
+}
+
+// DescribeSystemEventsWithContext is the same as DescribeSystemEvents with the addition of
+// the ability to pass a context and additional request options.
+//
+// See DescribeSystemEvents for details on how to use this API operation.
+//
+// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur.
+// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/
+// for more information on using Contexts.
+func (c *ECS) DescribeSystemEventsWithContext(ctx volcengine.Context, input *DescribeSystemEventsInput, opts ...request.Option) (*DescribeSystemEventsOutput, error) {
+	req, out := c.DescribeSystemEventsRequest(input)
+	req.SetContext(ctx)
+	req.ApplyOptions(opts...)
+	return out, req.Send()
+}
+
+type DescribeSystemEventsInput struct {
+	_ struct{} `type:"structure"`
+
+	CreatedAtEnd *string `type:"string"`
+
+	CreatedAtStart *string `type:"string"`
+
+	EventIds []*string `type:"list"`
+
+	MaxResults *json.Number `type:"json_number"`
+
+	NextToken *string `type:"string"`
+
+	ResourceIds []*string `type:"list"`
+
+	Status []*string `type:"list"`
+
+	Types []*string `type:"list"`
+}
+
+// String returns the string representation
+func (s DescribeSystemEventsInput) String() string {
+	return volcengineutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s DescribeSystemEventsInput) GoString() string {
+	return s.String()
+}
+
+// SetCreatedAtEnd sets the CreatedAtEnd field's value.
+func (s *DescribeSystemEventsInput) SetCreatedAtEnd(v string) *DescribeSystemEventsInput {
+	s.CreatedAtEnd = &v
+	return s
+}
+
+// SetCreatedAtStart sets the CreatedAtStart field's value.
+func (s *DescribeSystemEventsInput) SetCreatedAtStart(v string) *DescribeSystemEventsInput { + s.CreatedAtStart = &v + return s +} + +// SetEventIds sets the EventIds field's value. +func (s *DescribeSystemEventsInput) SetEventIds(v []*string) *DescribeSystemEventsInput { + s.EventIds = v + return s +} + +// SetMaxResults sets the MaxResults field's value. +func (s *DescribeSystemEventsInput) SetMaxResults(v json.Number) *DescribeSystemEventsInput { + s.MaxResults = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeSystemEventsInput) SetNextToken(v string) *DescribeSystemEventsInput { + s.NextToken = &v + return s +} + +// SetResourceIds sets the ResourceIds field's value. +func (s *DescribeSystemEventsInput) SetResourceIds(v []*string) *DescribeSystemEventsInput { + s.ResourceIds = v + return s +} + +// SetStatus sets the Status field's value. +func (s *DescribeSystemEventsInput) SetStatus(v []*string) *DescribeSystemEventsInput { + s.Status = v + return s +} + +// SetTypes sets the Types field's value. +func (s *DescribeSystemEventsInput) SetTypes(v []*string) *DescribeSystemEventsInput { + s.Types = v + return s +} + +type DescribeSystemEventsOutput struct { + _ struct{} `type:"structure"` + + Metadata *response.ResponseMetadata + + NextToken *string `type:"string"` + + SystemEvents []*SystemEventForDescribeSystemEventsOutput `type:"list"` +} + +// String returns the string representation +func (s DescribeSystemEventsOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeSystemEventsOutput) GoString() string { + return s.String() +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeSystemEventsOutput) SetNextToken(v string) *DescribeSystemEventsOutput { + s.NextToken = &v + return s +} + +// SetSystemEvents sets the SystemEvents field's value. 
+func (s *DescribeSystemEventsOutput) SetSystemEvents(v []*SystemEventForDescribeSystemEventsOutput) *DescribeSystemEventsOutput { + s.SystemEvents = v + return s +} + +type SystemEventForDescribeSystemEventsOutput struct { + _ struct{} `type:"structure"` + + CreatedAt *string `type:"string"` + + Id *string `type:"string"` + + OperatedEndAt *string `type:"string"` + + OperatedStartAt *string `type:"string"` + + ResourceId *string `type:"string"` + + Status *string `type:"string" enum:"StatusForDescribeSystemEventsOutput"` + + Type *string `type:"string" enum:"TypeForDescribeSystemEventsOutput"` + + UpdatedAt *string `type:"string"` +} + +// String returns the string representation +func (s SystemEventForDescribeSystemEventsOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s SystemEventForDescribeSystemEventsOutput) GoString() string { + return s.String() +} + +// SetCreatedAt sets the CreatedAt field's value. +func (s *SystemEventForDescribeSystemEventsOutput) SetCreatedAt(v string) *SystemEventForDescribeSystemEventsOutput { + s.CreatedAt = &v + return s +} + +// SetId sets the Id field's value. +func (s *SystemEventForDescribeSystemEventsOutput) SetId(v string) *SystemEventForDescribeSystemEventsOutput { + s.Id = &v + return s +} + +// SetOperatedEndAt sets the OperatedEndAt field's value. +func (s *SystemEventForDescribeSystemEventsOutput) SetOperatedEndAt(v string) *SystemEventForDescribeSystemEventsOutput { + s.OperatedEndAt = &v + return s +} + +// SetOperatedStartAt sets the OperatedStartAt field's value. +func (s *SystemEventForDescribeSystemEventsOutput) SetOperatedStartAt(v string) *SystemEventForDescribeSystemEventsOutput { + s.OperatedStartAt = &v + return s +} + +// SetResourceId sets the ResourceId field's value. 
+func (s *SystemEventForDescribeSystemEventsOutput) SetResourceId(v string) *SystemEventForDescribeSystemEventsOutput { + s.ResourceId = &v + return s +} + +// SetStatus sets the Status field's value. +func (s *SystemEventForDescribeSystemEventsOutput) SetStatus(v string) *SystemEventForDescribeSystemEventsOutput { + s.Status = &v + return s +} + +// SetType sets the Type field's value. +func (s *SystemEventForDescribeSystemEventsOutput) SetType(v string) *SystemEventForDescribeSystemEventsOutput { + s.Type = &v + return s +} + +// SetUpdatedAt sets the UpdatedAt field's value. +func (s *SystemEventForDescribeSystemEventsOutput) SetUpdatedAt(v string) *SystemEventForDescribeSystemEventsOutput { + s.UpdatedAt = &v + return s +} + +const ( + // StatusForDescribeSystemEventsOutputUnknownStatus is a StatusForDescribeSystemEventsOutput enum value + StatusForDescribeSystemEventsOutputUnknownStatus = "UnknownStatus" + + // StatusForDescribeSystemEventsOutputExecuting is a StatusForDescribeSystemEventsOutput enum value + StatusForDescribeSystemEventsOutputExecuting = "Executing" + + // StatusForDescribeSystemEventsOutputSucceeded is a StatusForDescribeSystemEventsOutput enum value + StatusForDescribeSystemEventsOutputSucceeded = "Succeeded" + + // StatusForDescribeSystemEventsOutputFailed is a StatusForDescribeSystemEventsOutput enum value + StatusForDescribeSystemEventsOutputFailed = "Failed" + + // StatusForDescribeSystemEventsOutputInquiring is a StatusForDescribeSystemEventsOutput enum value + StatusForDescribeSystemEventsOutputInquiring = "Inquiring" + + // StatusForDescribeSystemEventsOutputScheduled is a StatusForDescribeSystemEventsOutput enum value + StatusForDescribeSystemEventsOutputScheduled = "Scheduled" + + // StatusForDescribeSystemEventsOutputRejected is a StatusForDescribeSystemEventsOutput enum value + StatusForDescribeSystemEventsOutputRejected = "Rejected" +) + +const ( + // TypeForDescribeSystemEventsOutputUnknownType is a 
TypeForDescribeSystemEventsOutput enum value + TypeForDescribeSystemEventsOutputUnknownType = "UnknownType" + + // TypeForDescribeSystemEventsOutputSystemFailureStop is a TypeForDescribeSystemEventsOutput enum value + TypeForDescribeSystemEventsOutputSystemFailureStop = "SystemFailure_Stop" + + // TypeForDescribeSystemEventsOutputSystemFailureReboot is a TypeForDescribeSystemEventsOutput enum value + TypeForDescribeSystemEventsOutputSystemFailureReboot = "SystemFailure_Reboot" + + // TypeForDescribeSystemEventsOutputSystemFailurePleaseCheck is a TypeForDescribeSystemEventsOutput enum value + TypeForDescribeSystemEventsOutputSystemFailurePleaseCheck = "SystemFailure_PleaseCheck" + + // TypeForDescribeSystemEventsOutputDiskErrorRedeploy is a TypeForDescribeSystemEventsOutput enum value + TypeForDescribeSystemEventsOutputDiskErrorRedeploy = "DiskError_Redeploy" + + // TypeForDescribeSystemEventsOutputHddbadSectorRedeploy is a TypeForDescribeSystemEventsOutput enum value + TypeForDescribeSystemEventsOutputHddbadSectorRedeploy = "HDDBadSector_Redeploy" + + // TypeForDescribeSystemEventsOutputGpuErrorRedeploy is a TypeForDescribeSystemEventsOutput enum value + TypeForDescribeSystemEventsOutputGpuErrorRedeploy = "GpuError_Redeploy" + + // TypeForDescribeSystemEventsOutputSystemMaintenanceRedeploy is a TypeForDescribeSystemEventsOutput enum value + TypeForDescribeSystemEventsOutputSystemMaintenanceRedeploy = "SystemMaintenance_Redeploy" + + // TypeForDescribeSystemEventsOutputSystemFailureRedeploy is a TypeForDescribeSystemEventsOutput enum value + TypeForDescribeSystemEventsOutputSystemFailureRedeploy = "SystemFailure_Redeploy" + + // TypeForDescribeSystemEventsOutputCreateInstance is a TypeForDescribeSystemEventsOutput enum value + TypeForDescribeSystemEventsOutputCreateInstance = "CreateInstance" + + // TypeForDescribeSystemEventsOutputRunInstance is a TypeForDescribeSystemEventsOutput enum value + TypeForDescribeSystemEventsOutputRunInstance = "RunInstance" + + // 
TypeForDescribeSystemEventsOutputStopInstance is a TypeForDescribeSystemEventsOutput enum value + TypeForDescribeSystemEventsOutputStopInstance = "StopInstance" + + // TypeForDescribeSystemEventsOutputDeleteInstance is a TypeForDescribeSystemEventsOutput enum value + TypeForDescribeSystemEventsOutputDeleteInstance = "DeleteInstance" + + // TypeForDescribeSystemEventsOutputSpotInstanceInterruptionDelete is a TypeForDescribeSystemEventsOutput enum value + TypeForDescribeSystemEventsOutputSpotInstanceInterruptionDelete = "SpotInstanceInterruption_Delete" + + // TypeForDescribeSystemEventsOutputAccountUnbalancedStop is a TypeForDescribeSystemEventsOutput enum value + TypeForDescribeSystemEventsOutputAccountUnbalancedStop = "AccountUnbalanced_Stop" + + // TypeForDescribeSystemEventsOutputAccountUnbalancedDelete is a TypeForDescribeSystemEventsOutput enum value + TypeForDescribeSystemEventsOutputAccountUnbalancedDelete = "AccountUnbalanced_Delete" + + // TypeForDescribeSystemEventsOutputInstanceChargeTypeChange is a TypeForDescribeSystemEventsOutput enum value + TypeForDescribeSystemEventsOutputInstanceChargeTypeChange = "InstanceChargeType_Change" + + // TypeForDescribeSystemEventsOutputInstanceConfigurationChange is a TypeForDescribeSystemEventsOutput enum value + TypeForDescribeSystemEventsOutputInstanceConfigurationChange = "InstanceConfiguration_Change" + + // TypeForDescribeSystemEventsOutputFileSystemReadOnlyChange is a TypeForDescribeSystemEventsOutput enum value + TypeForDescribeSystemEventsOutputFileSystemReadOnlyChange = "FileSystemReadOnly_Change" + + // TypeForDescribeSystemEventsOutputRebootInstance is a TypeForDescribeSystemEventsOutput enum value + TypeForDescribeSystemEventsOutputRebootInstance = "RebootInstance" + + // TypeForDescribeSystemEventsOutputInstanceFailure is a TypeForDescribeSystemEventsOutput enum value + TypeForDescribeSystemEventsOutputInstanceFailure = "InstanceFailure" +) diff --git 
a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_describe_tags.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_describe_tags.go new file mode 100644 index 000000000000..940544f4d302 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_describe_tags.go @@ -0,0 +1,302 @@ +// Code generated by volcengine with private/model/cli/gen-api/main.go. DO NOT EDIT. + +package ecs + +import ( + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/response" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil" +) + +const opDescribeTagsCommon = "DescribeTags" + +// DescribeTagsCommonRequest generates a "volcengine/request.Request" representing the +// client's request for the DescribeTagsCommon operation. The "output" return +// value will be populated with the DescribeTagsCommon request's response once the request completes +// successfully. +// +// Use "Send" method on the returned DescribeTagsCommon Request to send the API call to the service. +// the "output" return value is not valid until after DescribeTagsCommon Send returns without error. +// +// See DescribeTagsCommon for more information on using the DescribeTagsCommon +// API call, and error handling. +// +// // Example sending a request using the DescribeTagsCommonRequest method. 
+// req, resp := client.DescribeTagsCommonRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *ECS) DescribeTagsCommonRequest(input *map[string]interface{}) (req *request.Request, output *map[string]interface{}) { + op := &request.Operation{ + Name: opDescribeTagsCommon, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &map[string]interface{}{} + } + + output = &map[string]interface{}{} + req = c.newRequest(op, input, output) + + return +} + +// DescribeTagsCommon API operation for ECS. +// +// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the VOLCENGINE API reference guide for ECS's +// API operation DescribeTagsCommon for usage and error information. +func (c *ECS) DescribeTagsCommon(input *map[string]interface{}) (*map[string]interface{}, error) { + req, out := c.DescribeTagsCommonRequest(input) + return out, req.Send() +} + +// DescribeTagsCommonWithContext is the same as DescribeTagsCommon with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeTagsCommon for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur. +// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ECS) DescribeTagsCommonWithContext(ctx volcengine.Context, input *map[string]interface{}, opts ...request.Option) (*map[string]interface{}, error) { + req, out := c.DescribeTagsCommonRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+	return out, req.Send()
+}
+
+const opDescribeTags = "DescribeTags"
+
+// DescribeTagsRequest generates a "volcengine/request.Request" representing the
+// client's request for the DescribeTags operation. The "output" return
+// value will be populated with the DescribeTags request's response once the request completes
+// successfully.
+//
+// Use "Send" method on the returned DescribeTags Request to send the API call to the service.
+// the "output" return value is not valid until after DescribeTags Send returns without error.
+//
+// See DescribeTags for more information on using the DescribeTags
+// API call, and error handling.
+//
+//	// Example sending a request using the DescribeTagsRequest method.
+//	req, resp := client.DescribeTagsRequest(params)
+//
+//	err := req.Send()
+//	if err == nil { // resp is now filled
+//	    fmt.Println(resp)
+//	}
+func (c *ECS) DescribeTagsRequest(input *DescribeTagsInput) (req *request.Request, output *DescribeTagsOutput) {
+	op := &request.Operation{
+		Name:       opDescribeTags,
+		HTTPMethod: "GET",
+		HTTPPath:   "/",
+	}
+
+	if input == nil {
+		input = &DescribeTagsInput{}
+	}
+
+	output = &DescribeTagsOutput{}
+	req = c.newRequest(op, input, output)
+
+	return
+}
+
+// DescribeTags API operation for ECS.
+//
+// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions
+// with volcengineerr.Error's Code and Message methods to get detailed information about
+// the error.
+//
+// See the VOLCENGINE API reference guide for ECS's
+// API operation DescribeTags for usage and error information.
+func (c *ECS) DescribeTags(input *DescribeTagsInput) (*DescribeTagsOutput, error) {
+	req, out := c.DescribeTagsRequest(input)
+	return out, req.Send()
+}
+
+// DescribeTagsWithContext is the same as DescribeTags with the addition of
+// the ability to pass a context and additional request options.
+//
+// See DescribeTags for details on how to use this API operation.
+//
+// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur.
+// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/
+// for more information on using Contexts.
+func (c *ECS) DescribeTagsWithContext(ctx volcengine.Context, input *DescribeTagsInput, opts ...request.Option) (*DescribeTagsOutput, error) {
+	req, out := c.DescribeTagsRequest(input)
+	req.SetContext(ctx)
+	req.ApplyOptions(opts...)
+	return out, req.Send()
+}
+
+type DescribeTagsInput struct {
+	_ struct{} `type:"structure"`
+
+	MaxResults *int32 `type:"int32"`
+
+	NextToken *string `type:"string"`
+
+	ResourceIds []*string `type:"list"`
+
+	ResourceType *string `type:"string"`
+
+	TagFilters []*TagFilterForDescribeTagsInput `type:"list"`
+}
+
+// String returns the string representation
+func (s DescribeTagsInput) String() string {
+	return volcengineutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s DescribeTagsInput) GoString() string {
+	return s.String()
+}
+
+// SetMaxResults sets the MaxResults field's value.
+func (s *DescribeTagsInput) SetMaxResults(v int32) *DescribeTagsInput {
+	s.MaxResults = &v
+	return s
+}
+
+// SetNextToken sets the NextToken field's value.
+func (s *DescribeTagsInput) SetNextToken(v string) *DescribeTagsInput {
+	s.NextToken = &v
+	return s
+}
+
+// SetResourceIds sets the ResourceIds field's value.
+func (s *DescribeTagsInput) SetResourceIds(v []*string) *DescribeTagsInput {
+	s.ResourceIds = v
+	return s
+}
+
+// SetResourceType sets the ResourceType field's value.
+func (s *DescribeTagsInput) SetResourceType(v string) *DescribeTagsInput {
+	s.ResourceType = &v
+	return s
+}
+
+// SetTagFilters sets the TagFilters field's value.
+func (s *DescribeTagsInput) SetTagFilters(v []*TagFilterForDescribeTagsInput) *DescribeTagsInput { + s.TagFilters = v + return s +} + +type DescribeTagsOutput struct { + _ struct{} `type:"structure"` + + Metadata *response.ResponseMetadata + + NextToken *string `type:"string"` + + TagResources []*TagResourceForDescribeTagsOutput `type:"list"` +} + +// String returns the string representation +func (s DescribeTagsOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeTagsOutput) GoString() string { + return s.String() +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeTagsOutput) SetNextToken(v string) *DescribeTagsOutput { + s.NextToken = &v + return s +} + +// SetTagResources sets the TagResources field's value. +func (s *DescribeTagsOutput) SetTagResources(v []*TagResourceForDescribeTagsOutput) *DescribeTagsOutput { + s.TagResources = v + return s +} + +type TagFilterForDescribeTagsInput struct { + _ struct{} `type:"structure"` + + Key *string `type:"string"` + + Values []*string `type:"list"` +} + +// String returns the string representation +func (s TagFilterForDescribeTagsInput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s TagFilterForDescribeTagsInput) GoString() string { + return s.String() +} + +// SetKey sets the Key field's value. +func (s *TagFilterForDescribeTagsInput) SetKey(v string) *TagFilterForDescribeTagsInput { + s.Key = &v + return s +} + +// SetValues sets the Values field's value. 
+func (s *TagFilterForDescribeTagsInput) SetValues(v []*string) *TagFilterForDescribeTagsInput { + s.Values = v + return s +} + +type TagResourceForDescribeTagsOutput struct { + _ struct{} `type:"structure"` + + ResourceId *string `type:"string"` + + ResourceType *string `type:"string"` + + TagKey *string `type:"string"` + + TagValue *string `type:"string"` +} + +// String returns the string representation +func (s TagResourceForDescribeTagsOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s TagResourceForDescribeTagsOutput) GoString() string { + return s.String() +} + +// SetResourceId sets the ResourceId field's value. +func (s *TagResourceForDescribeTagsOutput) SetResourceId(v string) *TagResourceForDescribeTagsOutput { + s.ResourceId = &v + return s +} + +// SetResourceType sets the ResourceType field's value. +func (s *TagResourceForDescribeTagsOutput) SetResourceType(v string) *TagResourceForDescribeTagsOutput { + s.ResourceType = &v + return s +} + +// SetTagKey sets the TagKey field's value. +func (s *TagResourceForDescribeTagsOutput) SetTagKey(v string) *TagResourceForDescribeTagsOutput { + s.TagKey = &v + return s +} + +// SetTagValue sets the TagValue field's value. +func (s *TagResourceForDescribeTagsOutput) SetTagValue(v string) *TagResourceForDescribeTagsOutput { + s.TagValue = &v + return s +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_describe_tasks.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_describe_tasks.go new file mode 100644 index 000000000000..2967bd3f0245 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_describe_tasks.go @@ -0,0 +1,329 @@ +// Code generated by volcengine with private/model/cli/gen-api/main.go. DO NOT EDIT. 
+ +package ecs + +import ( + "encoding/json" + + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/response" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil" +) + +const opDescribeTasksCommon = "DescribeTasks" + +// DescribeTasksCommonRequest generates a "volcengine/request.Request" representing the +// client's request for the DescribeTasksCommon operation. The "output" return +// value will be populated with the DescribeTasksCommon request's response once the request completes +// successfully. +// +// Use "Send" method on the returned DescribeTasksCommon Request to send the API call to the service. +// the "output" return value is not valid until after DescribeTasksCommon Send returns without error. +// +// See DescribeTasksCommon for more information on using the DescribeTasksCommon +// API call, and error handling. +// +// // Example sending a request using the DescribeTasksCommonRequest method. +// req, resp := client.DescribeTasksCommonRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *ECS) DescribeTasksCommonRequest(input *map[string]interface{}) (req *request.Request, output *map[string]interface{}) { + op := &request.Operation{ + Name: opDescribeTasksCommon, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &map[string]interface{}{} + } + + output = &map[string]interface{}{} + req = c.newRequest(op, input, output) + + return +} + +// DescribeTasksCommon API operation for ECS. +// +// Returns volcengineerr.Error for service API and SDK errors. 
Use runtime type assertions
+// with volcengineerr.Error's Code and Message methods to get detailed information about
+// the error.
+//
+// See the VOLCENGINE API reference guide for ECS's
+// API operation DescribeTasksCommon for usage and error information.
+func (c *ECS) DescribeTasksCommon(input *map[string]interface{}) (*map[string]interface{}, error) {
+	req, out := c.DescribeTasksCommonRequest(input)
+	return out, req.Send()
+}
+
+// DescribeTasksCommonWithContext is the same as DescribeTasksCommon with the addition of
+// the ability to pass a context and additional request options.
+//
+// See DescribeTasksCommon for details on how to use this API operation.
+//
+// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur.
+// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/
+// for more information on using Contexts.
+func (c *ECS) DescribeTasksCommonWithContext(ctx volcengine.Context, input *map[string]interface{}, opts ...request.Option) (*map[string]interface{}, error) {
+	req, out := c.DescribeTasksCommonRequest(input)
+	req.SetContext(ctx)
+	req.ApplyOptions(opts...)
+	return out, req.Send()
+}
+
+const opDescribeTasks = "DescribeTasks"
+
+// DescribeTasksRequest generates a "volcengine/request.Request" representing the
+// client's request for the DescribeTasks operation. The "output" return
+// value will be populated with the DescribeTasks request's response once the request completes
+// successfully.
+//
+// Use "Send" method on the returned DescribeTasks Request to send the API call to the service.
+// the "output" return value is not valid until after DescribeTasks Send returns without error.
+//
+// See DescribeTasks for more information on using the DescribeTasks
+// API call, and error handling.
+//
+//	// Example sending a request using the DescribeTasksRequest method.
+//	req, resp := client.DescribeTasksRequest(params)
+//
+//	err := req.Send()
+//	if err == nil { // resp is now filled
+//	    fmt.Println(resp)
+//	}
+func (c *ECS) DescribeTasksRequest(input *DescribeTasksInput) (req *request.Request, output *DescribeTasksOutput) {
+	op := &request.Operation{
+		Name:       opDescribeTasks,
+		HTTPMethod: "GET",
+		HTTPPath:   "/",
+	}
+
+	if input == nil {
+		input = &DescribeTasksInput{}
+	}
+
+	output = &DescribeTasksOutput{}
+	req = c.newRequest(op, input, output)
+
+	return
+}
+
+// DescribeTasks API operation for ECS.
+//
+// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions
+// with volcengineerr.Error's Code and Message methods to get detailed information about
+// the error.
+//
+// See the VOLCENGINE API reference guide for ECS's
+// API operation DescribeTasks for usage and error information.
+func (c *ECS) DescribeTasks(input *DescribeTasksInput) (*DescribeTasksOutput, error) {
+	req, out := c.DescribeTasksRequest(input)
+	return out, req.Send()
+}
+
+// DescribeTasksWithContext is the same as DescribeTasks with the addition of
+// the ability to pass a context and additional request options.
+//
+// See DescribeTasks for details on how to use this API operation.
+//
+// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur.
+// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/
+// for more information on using Contexts.
+func (c *ECS) DescribeTasksWithContext(ctx volcengine.Context, input *DescribeTasksInput, opts ...request.Option) (*DescribeTasksOutput, error) {
+	req, out := c.DescribeTasksRequest(input)
+	req.SetContext(ctx)
+	req.ApplyOptions(opts...)
+ return out, req.Send() +} + +type DescribeTasksInput struct { + _ struct{} `type:"structure"` + + MaxResults *json.Number `type:"json_number"` + + NextToken *string `type:"string"` + + ResourceId *string `type:"string"` + + TaskIds []*string `type:"list"` +} + +// String returns the string representation +func (s DescribeTasksInput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeTasksInput) GoString() string { + return s.String() +} + +// SetMaxResults sets the MaxResults field's value. +func (s *DescribeTasksInput) SetMaxResults(v json.Number) *DescribeTasksInput { + s.MaxResults = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeTasksInput) SetNextToken(v string) *DescribeTasksInput { + s.NextToken = &v + return s +} + +// SetResourceId sets the ResourceId field's value. +func (s *DescribeTasksInput) SetResourceId(v string) *DescribeTasksInput { + s.ResourceId = &v + return s +} + +// SetTaskIds sets the TaskIds field's value. +func (s *DescribeTasksInput) SetTaskIds(v []*string) *DescribeTasksInput { + s.TaskIds = v + return s +} + +type DescribeTasksOutput struct { + _ struct{} `type:"structure"` + + Metadata *response.ResponseMetadata + + NextToken *string `type:"string"` + + Tasks []*TaskForDescribeTasksOutput `type:"list"` +} + +// String returns the string representation +func (s DescribeTasksOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeTasksOutput) GoString() string { + return s.String() +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeTasksOutput) SetNextToken(v string) *DescribeTasksOutput { + s.NextToken = &v + return s +} + +// SetTasks sets the Tasks field's value. 
+func (s *DescribeTasksOutput) SetTasks(v []*TaskForDescribeTasksOutput) *DescribeTasksOutput { + s.Tasks = v + return s +} + +type TaskForDescribeTasksOutput struct { + _ struct{} `type:"structure"` + + CreatedAt *string `type:"string"` + + EndAt *string `type:"string"` + + Id *string `type:"string"` + + Process *int64 `type:"int64"` + + ResourceId *string `type:"string"` + + Status *string `type:"string" enum:"StatusForDescribeTasksOutput"` + + Type *string `type:"string" enum:"TypeForDescribeTasksOutput"` + + UpdatedAt *string `type:"string"` +} + +// String returns the string representation +func (s TaskForDescribeTasksOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s TaskForDescribeTasksOutput) GoString() string { + return s.String() +} + +// SetCreatedAt sets the CreatedAt field's value. +func (s *TaskForDescribeTasksOutput) SetCreatedAt(v string) *TaskForDescribeTasksOutput { + s.CreatedAt = &v + return s +} + +// SetEndAt sets the EndAt field's value. +func (s *TaskForDescribeTasksOutput) SetEndAt(v string) *TaskForDescribeTasksOutput { + s.EndAt = &v + return s +} + +// SetId sets the Id field's value. +func (s *TaskForDescribeTasksOutput) SetId(v string) *TaskForDescribeTasksOutput { + s.Id = &v + return s +} + +// SetProcess sets the Process field's value. +func (s *TaskForDescribeTasksOutput) SetProcess(v int64) *TaskForDescribeTasksOutput { + s.Process = &v + return s +} + +// SetResourceId sets the ResourceId field's value. +func (s *TaskForDescribeTasksOutput) SetResourceId(v string) *TaskForDescribeTasksOutput { + s.ResourceId = &v + return s +} + +// SetStatus sets the Status field's value. +func (s *TaskForDescribeTasksOutput) SetStatus(v string) *TaskForDescribeTasksOutput { + s.Status = &v + return s +} + +// SetType sets the Type field's value. 
+func (s *TaskForDescribeTasksOutput) SetType(v string) *TaskForDescribeTasksOutput { + s.Type = &v + return s +} + +// SetUpdatedAt sets the UpdatedAt field's value. +func (s *TaskForDescribeTasksOutput) SetUpdatedAt(v string) *TaskForDescribeTasksOutput { + s.UpdatedAt = &v + return s +} + +const ( + // StatusForDescribeTasksOutputUnknownStatus is a StatusForDescribeTasksOutput enum value + StatusForDescribeTasksOutputUnknownStatus = "UnknownStatus" + + // StatusForDescribeTasksOutputPending is a StatusForDescribeTasksOutput enum value + StatusForDescribeTasksOutputPending = "Pending" + + // StatusForDescribeTasksOutputRunning is a StatusForDescribeTasksOutput enum value + StatusForDescribeTasksOutputRunning = "Running" + + // StatusForDescribeTasksOutputSucceeded is a StatusForDescribeTasksOutput enum value + StatusForDescribeTasksOutputSucceeded = "Succeeded" + + // StatusForDescribeTasksOutputFailed is a StatusForDescribeTasksOutput enum value + StatusForDescribeTasksOutputFailed = "Failed" +) + +const ( + // TypeForDescribeTasksOutputUnknownType is a TypeForDescribeTasksOutput enum value + TypeForDescribeTasksOutputUnknownType = "UnknownType" + + // TypeForDescribeTasksOutputExportImage is a TypeForDescribeTasksOutput enum value + TypeForDescribeTasksOutputExportImage = "ExportImage" + + // TypeForDescribeTasksOutputCopyImage is a TypeForDescribeTasksOutput enum value + TypeForDescribeTasksOutputCopyImage = "CopyImage" + + // TypeForDescribeTasksOutputPreheatImage is a TypeForDescribeTasksOutput enum value + TypeForDescribeTasksOutputPreheatImage = "PreheatImage" +) diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_describe_user_data.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_describe_user_data.go new file mode 100644 index 000000000000..11c2035e64e2 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_describe_user_data.go @@ -0,0 
+1,194 @@ +// Code generated by volcengine with private/model/cli/gen-api/main.go. DO NOT EDIT. + +package ecs + +import ( + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/response" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil" +) + +const opDescribeUserDataCommon = "DescribeUserData" + +// DescribeUserDataCommonRequest generates a "volcengine/request.Request" representing the +// client's request for the DescribeUserDataCommon operation. The "output" return +// value will be populated with the DescribeUserDataCommon request's response once the request completes +// successfully. +// +// Use "Send" method on the returned DescribeUserDataCommon Request to send the API call to the service. +// the "output" return value is not valid until after DescribeUserDataCommon Send returns without error. +// +// See DescribeUserDataCommon for more information on using the DescribeUserDataCommon +// API call, and error handling. +// +// // Example sending a request using the DescribeUserDataCommonRequest method. +// req, resp := client.DescribeUserDataCommonRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *ECS) DescribeUserDataCommonRequest(input *map[string]interface{}) (req *request.Request, output *map[string]interface{}) { + op := &request.Operation{ + Name: opDescribeUserDataCommon, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &map[string]interface{}{} + } + + output = &map[string]interface{}{} + req = c.newRequest(op, input, output) + + return +} + +// DescribeUserDataCommon API operation for ECS. +// +// Returns volcengineerr.Error for service API and SDK errors. 
Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the VOLCENGINE API reference guide for ECS's +// API operation DescribeUserDataCommon for usage and error information. +func (c *ECS) DescribeUserDataCommon(input *map[string]interface{}) (*map[string]interface{}, error) { + req, out := c.DescribeUserDataCommonRequest(input) + return out, req.Send() +} + +// DescribeUserDataCommonWithContext is the same as DescribeUserDataCommon with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeUserDataCommon for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur. +// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ECS) DescribeUserDataCommonWithContext(ctx volcengine.Context, input *map[string]interface{}, opts ...request.Option) (*map[string]interface{}, error) { + req, out := c.DescribeUserDataCommonRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDescribeUserData = "DescribeUserData" + +// DescribeUserDataRequest generates a "volcengine/request.Request" representing the +// client's request for the DescribeUserData operation. The "output" return +// value will be populated with the DescribeUserData request's response once the request completes +// successfully. +// +// Use "Send" method on the returned DescribeUserData Request to send the API call to the service. +// the "output" return value is not valid until after DescribeUserData Send returns without error. +// +// See DescribeUserData for more information on using the DescribeUserData +// API call, and error handling. 
+// +// // Example sending a request using the DescribeUserDataRequest method. +// req, resp := client.DescribeUserDataRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *ECS) DescribeUserDataRequest(input *DescribeUserDataInput) (req *request.Request, output *DescribeUserDataOutput) { + op := &request.Operation{ + Name: opDescribeUserData, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &DescribeUserDataInput{} + } + + output = &DescribeUserDataOutput{} + req = c.newRequest(op, input, output) + + return +} + +// DescribeUserData API operation for ECS. +// +// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the VOLCENGINE API reference guide for ECS's +// API operation DescribeUserData for usage and error information. +func (c *ECS) DescribeUserData(input *DescribeUserDataInput) (*DescribeUserDataOutput, error) { + req, out := c.DescribeUserDataRequest(input) + return out, req.Send() +} + +// DescribeUserDataWithContext is the same as DescribeUserData with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeUserData for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur. +// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ECS) DescribeUserDataWithContext(ctx volcengine.Context, input *DescribeUserDataInput, opts ...request.Option) (*DescribeUserDataOutput, error) { + req, out := c.DescribeUserDataRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+ return out, req.Send() +} + +type DescribeUserDataInput struct { + _ struct{} `type:"structure"` + + InstanceId *string `type:"string"` +} + +// String returns the string representation +func (s DescribeUserDataInput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeUserDataInput) GoString() string { + return s.String() +} + +// SetInstanceId sets the InstanceId field's value. +func (s *DescribeUserDataInput) SetInstanceId(v string) *DescribeUserDataInput { + s.InstanceId = &v + return s +} + +type DescribeUserDataOutput struct { + _ struct{} `type:"structure"` + + Metadata *response.ResponseMetadata + + InstanceId *string `type:"string"` + + UserData *string `type:"string"` +} + +// String returns the string representation +func (s DescribeUserDataOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeUserDataOutput) GoString() string { + return s.String() +} + +// SetInstanceId sets the InstanceId field's value. +func (s *DescribeUserDataOutput) SetInstanceId(v string) *DescribeUserDataOutput { + s.InstanceId = &v + return s +} + +// SetUserData sets the UserData field's value. +func (s *DescribeUserDataOutput) SetUserData(v string) *DescribeUserDataOutput { + s.UserData = &v + return s +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_describe_zones.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_describe_zones.go new file mode 100644 index 000000000000..9b3e35e5832d --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_describe_zones.go @@ -0,0 +1,208 @@ +// Code generated by volcengine with private/model/cli/gen-api/main.go. DO NOT EDIT. 
+ +package ecs + +import ( + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/response" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil" +) + +const opDescribeZonesCommon = "DescribeZones" + +// DescribeZonesCommonRequest generates a "volcengine/request.Request" representing the +// client's request for the DescribeZonesCommon operation. The "output" return +// value will be populated with the DescribeZonesCommon request's response once the request completes +// successfully. +// +// Use "Send" method on the returned DescribeZonesCommon Request to send the API call to the service. +// the "output" return value is not valid until after DescribeZonesCommon Send returns without error. +// +// See DescribeZonesCommon for more information on using the DescribeZonesCommon +// API call, and error handling. +// +// // Example sending a request using the DescribeZonesCommonRequest method. +// req, resp := client.DescribeZonesCommonRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *ECS) DescribeZonesCommonRequest(input *map[string]interface{}) (req *request.Request, output *map[string]interface{}) { + op := &request.Operation{ + Name: opDescribeZonesCommon, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &map[string]interface{}{} + } + + output = &map[string]interface{}{} + req = c.newRequest(op, input, output) + + return +} + +// DescribeZonesCommon API operation for ECS. +// +// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. 
+// +// See the VOLCENGINE API reference guide for ECS's +// API operation DescribeZonesCommon for usage and error information. +func (c *ECS) DescribeZonesCommon(input *map[string]interface{}) (*map[string]interface{}, error) { + req, out := c.DescribeZonesCommonRequest(input) + return out, req.Send() +} + +// DescribeZonesCommonWithContext is the same as DescribeZonesCommon with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeZonesCommon for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur. +// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ECS) DescribeZonesCommonWithContext(ctx volcengine.Context, input *map[string]interface{}, opts ...request.Option) (*map[string]interface{}, error) { + req, out := c.DescribeZonesCommonRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDescribeZones = "DescribeZones" + +// DescribeZonesRequest generates a "volcengine/request.Request" representing the +// client's request for the DescribeZones operation. The "output" return +// value will be populated with the DescribeZones request's response once the request completes +// successfully. +// +// Use "Send" method on the returned DescribeZones Request to send the API call to the service. +// the "output" return value is not valid until after DescribeZones Send returns without error. +// +// See DescribeZones for more information on using the DescribeZones +// API call, and error handling. +// +// // Example sending a request using the DescribeZonesRequest method. 
+// req, resp := client.DescribeZonesRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *ECS) DescribeZonesRequest(input *DescribeZonesInput) (req *request.Request, output *DescribeZonesOutput) { + op := &request.Operation{ + Name: opDescribeZones, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &DescribeZonesInput{} + } + + output = &DescribeZonesOutput{} + req = c.newRequest(op, input, output) + + return +} + +// DescribeZones API operation for ECS. +// +// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the VOLCENGINE API reference guide for ECS's +// API operation DescribeZones for usage and error information. +func (c *ECS) DescribeZones(input *DescribeZonesInput) (*DescribeZonesOutput, error) { + req, out := c.DescribeZonesRequest(input) + return out, req.Send() +} + +// DescribeZonesWithContext is the same as DescribeZones with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeZones for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur. +// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ECS) DescribeZonesWithContext(ctx volcengine.Context, input *DescribeZonesInput, opts ...request.Option) (*DescribeZonesOutput, error) { + req, out := c.DescribeZonesRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+ return out, req.Send() +} + +type DescribeZonesInput struct { + _ struct{} `type:"structure"` + + ZoneIds []*string `type:"list"` +} + +// String returns the string representation +func (s DescribeZonesInput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeZonesInput) GoString() string { + return s.String() +} + +// SetZoneIds sets the ZoneIds field's value. +func (s *DescribeZonesInput) SetZoneIds(v []*string) *DescribeZonesInput { + s.ZoneIds = v + return s +} + +type DescribeZonesOutput struct { + _ struct{} `type:"structure"` + + Metadata *response.ResponseMetadata + + Zones []*ZoneForDescribeZonesOutput `type:"list"` +} + +// String returns the string representation +func (s DescribeZonesOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeZonesOutput) GoString() string { + return s.String() +} + +// SetZones sets the Zones field's value. +func (s *DescribeZonesOutput) SetZones(v []*ZoneForDescribeZonesOutput) *DescribeZonesOutput { + s.Zones = v + return s +} + +type ZoneForDescribeZonesOutput struct { + _ struct{} `type:"structure"` + + ZoneId *string `type:"string"` +} + +// String returns the string representation +func (s ZoneForDescribeZonesOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s ZoneForDescribeZonesOutput) GoString() string { + return s.String() +} + +// SetZoneId sets the ZoneId field's value. 
+func (s *ZoneForDescribeZonesOutput) SetZoneId(v string) *ZoneForDescribeZonesOutput { + s.ZoneId = &v + return s +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_detach_key_pair.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_detach_key_pair.go new file mode 100644 index 000000000000..fa2d40a965d3 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_detach_key_pair.go @@ -0,0 +1,262 @@ +// Code generated by volcengine with private/model/cli/gen-api/main.go. DO NOT EDIT. + +package ecs + +import ( + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/response" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil" +) + +const opDetachKeyPairCommon = "DetachKeyPair" + +// DetachKeyPairCommonRequest generates a "volcengine/request.Request" representing the +// client's request for the DetachKeyPairCommon operation. The "output" return +// value will be populated with the DetachKeyPairCommon request's response once the request completes +// successfully. +// +// Use "Send" method on the returned DetachKeyPairCommon Request to send the API call to the service. +// the "output" return value is not valid until after DetachKeyPairCommon Send returns without error. +// +// See DetachKeyPairCommon for more information on using the DetachKeyPairCommon +// API call, and error handling. +// +// // Example sending a request using the DetachKeyPairCommonRequest method. 
+// req, resp := client.DetachKeyPairCommonRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *ECS) DetachKeyPairCommonRequest(input *map[string]interface{}) (req *request.Request, output *map[string]interface{}) { + op := &request.Operation{ + Name: opDetachKeyPairCommon, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &map[string]interface{}{} + } + + output = &map[string]interface{}{} + req = c.newRequest(op, input, output) + + return +} + +// DetachKeyPairCommon API operation for ECS. +// +// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the VOLCENGINE API reference guide for ECS's +// API operation DetachKeyPairCommon for usage and error information. +func (c *ECS) DetachKeyPairCommon(input *map[string]interface{}) (*map[string]interface{}, error) { + req, out := c.DetachKeyPairCommonRequest(input) + return out, req.Send() +} + +// DetachKeyPairCommonWithContext is the same as DetachKeyPairCommon with the addition of +// the ability to pass a context and additional request options. +// +// See DetachKeyPairCommon for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur. +// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ECS) DetachKeyPairCommonWithContext(ctx volcengine.Context, input *map[string]interface{}, opts ...request.Option) (*map[string]interface{}, error) { + req, out := c.DetachKeyPairCommonRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+ return out, req.Send() +} + +const opDetachKeyPair = "DetachKeyPair" + +// DetachKeyPairRequest generates a "volcengine/request.Request" representing the +// client's request for the DetachKeyPair operation. The "output" return +// value will be populated with the DetachKeyPair request's response once the request completes +// successfully. +// +// Use "Send" method on the returned DetachKeyPair Request to send the API call to the service. +// the "output" return value is not valid until after DetachKeyPair Send returns without error. +// +// See DetachKeyPair for more information on using the DetachKeyPair +// API call, and error handling. +// +// // Example sending a request using the DetachKeyPairRequest method. +// req, resp := client.DetachKeyPairRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *ECS) DetachKeyPairRequest(input *DetachKeyPairInput) (req *request.Request, output *DetachKeyPairOutput) { + op := &request.Operation{ + Name: opDetachKeyPair, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &DetachKeyPairInput{} + } + + output = &DetachKeyPairOutput{} + req = c.newRequest(op, input, output) + + return +} + +// DetachKeyPair API operation for ECS. +// +// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the VOLCENGINE API reference guide for ECS's +// API operation DetachKeyPair for usage and error information. +func (c *ECS) DetachKeyPair(input *DetachKeyPairInput) (*DetachKeyPairOutput, error) { + req, out := c.DetachKeyPairRequest(input) + return out, req.Send() +} + +// DetachKeyPairWithContext is the same as DetachKeyPair with the addition of +// the ability to pass a context and additional request options. +// +// See DetachKeyPair for details on how to use this API operation. 
+// +// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur. +// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ECS) DetachKeyPairWithContext(ctx volcengine.Context, input *DetachKeyPairInput, opts ...request.Option) (*DetachKeyPairOutput, error) { + req, out := c.DetachKeyPairRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +type DetachKeyPairInput struct { + _ struct{} `type:"structure"` + + InstanceIds []*string `type:"list"` + + KeyPairId *string `type:"string"` + + KeyPairName *string `type:"string"` +} + +// String returns the string representation +func (s DetachKeyPairInput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s DetachKeyPairInput) GoString() string { + return s.String() +} + +// SetInstanceIds sets the InstanceIds field's value. +func (s *DetachKeyPairInput) SetInstanceIds(v []*string) *DetachKeyPairInput { + s.InstanceIds = v + return s +} + +// SetKeyPairId sets the KeyPairId field's value. +func (s *DetachKeyPairInput) SetKeyPairId(v string) *DetachKeyPairInput { + s.KeyPairId = &v + return s +} + +// SetKeyPairName sets the KeyPairName field's value. +func (s *DetachKeyPairInput) SetKeyPairName(v string) *DetachKeyPairInput { + s.KeyPairName = &v + return s +} + +type DetachKeyPairOutput struct { + _ struct{} `type:"structure"` + + Metadata *response.ResponseMetadata + + OperationDetails []*OperationDetailForDetachKeyPairOutput `type:"list"` +} + +// String returns the string representation +func (s DetachKeyPairOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s DetachKeyPairOutput) GoString() string { + return s.String() +} + +// SetOperationDetails sets the OperationDetails field's value. 
+func (s *DetachKeyPairOutput) SetOperationDetails(v []*OperationDetailForDetachKeyPairOutput) *DetachKeyPairOutput { + s.OperationDetails = v + return s +} + +type ErrorForDetachKeyPairOutput struct { + _ struct{} `type:"structure"` + + Code *string `type:"string"` + + Message *string `type:"string"` +} + +// String returns the string representation +func (s ErrorForDetachKeyPairOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s ErrorForDetachKeyPairOutput) GoString() string { + return s.String() +} + +// SetCode sets the Code field's value. +func (s *ErrorForDetachKeyPairOutput) SetCode(v string) *ErrorForDetachKeyPairOutput { + s.Code = &v + return s +} + +// SetMessage sets the Message field's value. +func (s *ErrorForDetachKeyPairOutput) SetMessage(v string) *ErrorForDetachKeyPairOutput { + s.Message = &v + return s +} + +type OperationDetailForDetachKeyPairOutput struct { + _ struct{} `type:"structure"` + + Error *ErrorForDetachKeyPairOutput `type:"structure"` + + InstanceId *string `type:"string"` +} + +// String returns the string representation +func (s OperationDetailForDetachKeyPairOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s OperationDetailForDetachKeyPairOutput) GoString() string { + return s.String() +} + +// SetError sets the Error field's value. +func (s *OperationDetailForDetachKeyPairOutput) SetError(v *ErrorForDetachKeyPairOutput) *OperationDetailForDetachKeyPairOutput { + s.Error = v + return s +} + +// SetInstanceId sets the InstanceId field's value. 
+func (s *OperationDetailForDetachKeyPairOutput) SetInstanceId(v string) *OperationDetailForDetachKeyPairOutput { + s.InstanceId = &v + return s +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_disassociate_instances_iam_role.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_disassociate_instances_iam_role.go new file mode 100644 index 000000000000..cee14e6076e0 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_disassociate_instances_iam_role.go @@ -0,0 +1,254 @@ +// Code generated by volcengine with private/model/cli/gen-api/main.go. DO NOT EDIT. + +package ecs + +import ( + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/response" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil" +) + +const opDisassociateInstancesIamRoleCommon = "DisassociateInstancesIamRole" + +// DisassociateInstancesIamRoleCommonRequest generates a "volcengine/request.Request" representing the +// client's request for the DisassociateInstancesIamRoleCommon operation. The "output" return +// value will be populated with the DisassociateInstancesIamRoleCommon request's response once the request completes +// successfully. +// +// Use "Send" method on the returned DisassociateInstancesIamRoleCommon Request to send the API call to the service. +// the "output" return value is not valid until after DisassociateInstancesIamRoleCommon Send returns without error. +// +// See DisassociateInstancesIamRoleCommon for more information on using the DisassociateInstancesIamRoleCommon +// API call, and error handling. 
+// +// // Example sending a request using the DisassociateInstancesIamRoleCommonRequest method. +// req, resp := client.DisassociateInstancesIamRoleCommonRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *ECS) DisassociateInstancesIamRoleCommonRequest(input *map[string]interface{}) (req *request.Request, output *map[string]interface{}) { + op := &request.Operation{ + Name: opDisassociateInstancesIamRoleCommon, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &map[string]interface{}{} + } + + output = &map[string]interface{}{} + req = c.newRequest(op, input, output) + + return +} + +// DisassociateInstancesIamRoleCommon API operation for ECS. +// +// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the VOLCENGINE API reference guide for ECS's +// API operation DisassociateInstancesIamRoleCommon for usage and error information. +func (c *ECS) DisassociateInstancesIamRoleCommon(input *map[string]interface{}) (*map[string]interface{}, error) { + req, out := c.DisassociateInstancesIamRoleCommonRequest(input) + return out, req.Send() +} + +// DisassociateInstancesIamRoleCommonWithContext is the same as DisassociateInstancesIamRoleCommon with the addition of +// the ability to pass a context and additional request options. +// +// See DisassociateInstancesIamRoleCommon for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur. +// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
+func (c *ECS) DisassociateInstancesIamRoleCommonWithContext(ctx volcengine.Context, input *map[string]interface{}, opts ...request.Option) (*map[string]interface{}, error) { + req, out := c.DisassociateInstancesIamRoleCommonRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDisassociateInstancesIamRole = "DisassociateInstancesIamRole" + +// DisassociateInstancesIamRoleRequest generates a "volcengine/request.Request" representing the +// client's request for the DisassociateInstancesIamRole operation. The "output" return +// value will be populated with the DisassociateInstancesIamRole request's response once the request completes +// successfully. +// +// Use "Send" method on the returned DisassociateInstancesIamRole Request to send the API call to the service. +// the "output" return value is not valid until after DisassociateInstancesIamRole Send returns without error. +// +// See DisassociateInstancesIamRole for more information on using the DisassociateInstancesIamRole +// API call, and error handling. +// +// // Example sending a request using the DisassociateInstancesIamRoleRequest method. +// req, resp := client.DisassociateInstancesIamRoleRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *ECS) DisassociateInstancesIamRoleRequest(input *DisassociateInstancesIamRoleInput) (req *request.Request, output *DisassociateInstancesIamRoleOutput) { + op := &request.Operation{ + Name: opDisassociateInstancesIamRole, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &DisassociateInstancesIamRoleInput{} + } + + output = &DisassociateInstancesIamRoleOutput{} + req = c.newRequest(op, input, output) + + return +} + +// DisassociateInstancesIamRole API operation for ECS. +// +// Returns volcengineerr.Error for service API and SDK errors. 
Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the VOLCENGINE API reference guide for ECS's +// API operation DisassociateInstancesIamRole for usage and error information. +func (c *ECS) DisassociateInstancesIamRole(input *DisassociateInstancesIamRoleInput) (*DisassociateInstancesIamRoleOutput, error) { + req, out := c.DisassociateInstancesIamRoleRequest(input) + return out, req.Send() +} + +// DisassociateInstancesIamRoleWithContext is the same as DisassociateInstancesIamRole with the addition of +// the ability to pass a context and additional request options. +// +// See DisassociateInstancesIamRole for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur. +// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ECS) DisassociateInstancesIamRoleWithContext(ctx volcengine.Context, input *DisassociateInstancesIamRoleInput, opts ...request.Option) (*DisassociateInstancesIamRoleOutput, error) { + req, out := c.DisassociateInstancesIamRoleRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +type DisassociateInstancesIamRoleInput struct { + _ struct{} `type:"structure"` + + IamRoleName *string `type:"string"` + + InstanceIds []*string `type:"list"` +} + +// String returns the string representation +func (s DisassociateInstancesIamRoleInput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s DisassociateInstancesIamRoleInput) GoString() string { + return s.String() +} + +// SetIamRoleName sets the IamRoleName field's value. 
+func (s *DisassociateInstancesIamRoleInput) SetIamRoleName(v string) *DisassociateInstancesIamRoleInput { + s.IamRoleName = &v + return s +} + +// SetInstanceIds sets the InstanceIds field's value. +func (s *DisassociateInstancesIamRoleInput) SetInstanceIds(v []*string) *DisassociateInstancesIamRoleInput { + s.InstanceIds = v + return s +} + +type DisassociateInstancesIamRoleOutput struct { + _ struct{} `type:"structure"` + + Metadata *response.ResponseMetadata + + OperationDetails []*OperationDetailForDisassociateInstancesIamRoleOutput `type:"list"` +} + +// String returns the string representation +func (s DisassociateInstancesIamRoleOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s DisassociateInstancesIamRoleOutput) GoString() string { + return s.String() +} + +// SetOperationDetails sets the OperationDetails field's value. +func (s *DisassociateInstancesIamRoleOutput) SetOperationDetails(v []*OperationDetailForDisassociateInstancesIamRoleOutput) *DisassociateInstancesIamRoleOutput { + s.OperationDetails = v + return s +} + +type ErrorForDisassociateInstancesIamRoleOutput struct { + _ struct{} `type:"structure"` + + Code *string `type:"string"` + + Message *string `type:"string"` +} + +// String returns the string representation +func (s ErrorForDisassociateInstancesIamRoleOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s ErrorForDisassociateInstancesIamRoleOutput) GoString() string { + return s.String() +} + +// SetCode sets the Code field's value. +func (s *ErrorForDisassociateInstancesIamRoleOutput) SetCode(v string) *ErrorForDisassociateInstancesIamRoleOutput { + s.Code = &v + return s +} + +// SetMessage sets the Message field's value. 
+func (s *ErrorForDisassociateInstancesIamRoleOutput) SetMessage(v string) *ErrorForDisassociateInstancesIamRoleOutput { + s.Message = &v + return s +} + +type OperationDetailForDisassociateInstancesIamRoleOutput struct { + _ struct{} `type:"structure"` + + Error *ErrorForDisassociateInstancesIamRoleOutput `type:"structure"` + + InstanceId *string `type:"string"` +} + +// String returns the string representation +func (s OperationDetailForDisassociateInstancesIamRoleOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s OperationDetailForDisassociateInstancesIamRoleOutput) GoString() string { + return s.String() +} + +// SetError sets the Error field's value. +func (s *OperationDetailForDisassociateInstancesIamRoleOutput) SetError(v *ErrorForDisassociateInstancesIamRoleOutput) *OperationDetailForDisassociateInstancesIamRoleOutput { + s.Error = v + return s +} + +// SetInstanceId sets the InstanceId field's value. +func (s *OperationDetailForDisassociateInstancesIamRoleOutput) SetInstanceId(v string) *OperationDetailForDisassociateInstancesIamRoleOutput { + s.InstanceId = &v + return s +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_export_image.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_export_image.go new file mode 100644 index 000000000000..e15c82124b65 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_export_image.go @@ -0,0 +1,202 @@ +// Code generated by volcengine with private/model/cli/gen-api/main.go. DO NOT EDIT. 
+ +package ecs + +import ( + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/response" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil" +) + +const opExportImageCommon = "ExportImage" + +// ExportImageCommonRequest generates a "volcengine/request.Request" representing the +// client's request for the ExportImageCommon operation. The "output" return +// value will be populated with the ExportImageCommon request's response once the request completes +// successfully. +// +// Use "Send" method on the returned ExportImageCommon Request to send the API call to the service. +// the "output" return value is not valid until after ExportImageCommon Send returns without error. +// +// See ExportImageCommon for more information on using the ExportImageCommon +// API call, and error handling. +// +// // Example sending a request using the ExportImageCommonRequest method. +// req, resp := client.ExportImageCommonRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *ECS) ExportImageCommonRequest(input *map[string]interface{}) (req *request.Request, output *map[string]interface{}) { + op := &request.Operation{ + Name: opExportImageCommon, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &map[string]interface{}{} + } + + output = &map[string]interface{}{} + req = c.newRequest(op, input, output) + + return +} + +// ExportImageCommon API operation for ECS. +// +// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. 
+// +// See the VOLCENGINE API reference guide for ECS's +// API operation ExportImageCommon for usage and error information. +func (c *ECS) ExportImageCommon(input *map[string]interface{}) (*map[string]interface{}, error) { + req, out := c.ExportImageCommonRequest(input) + return out, req.Send() +} + +// ExportImageCommonWithContext is the same as ExportImageCommon with the addition of +// the ability to pass a context and additional request options. +// +// See ExportImageCommon for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur. +// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ECS) ExportImageCommonWithContext(ctx volcengine.Context, input *map[string]interface{}, opts ...request.Option) (*map[string]interface{}, error) { + req, out := c.ExportImageCommonRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opExportImage = "ExportImage" + +// ExportImageRequest generates a "volcengine/request.Request" representing the +// client's request for the ExportImage operation. The "output" return +// value will be populated with the ExportImage request's response once the request completes +// successfully. +// +// Use "Send" method on the returned ExportImage Request to send the API call to the service. +// the "output" return value is not valid until after ExportImage Send returns without error. +// +// See ExportImage for more information on using the ExportImage +// API call, and error handling. +// +// // Example sending a request using the ExportImageRequest method. 
+// req, resp := client.ExportImageRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *ECS) ExportImageRequest(input *ExportImageInput) (req *request.Request, output *ExportImageOutput) { + op := &request.Operation{ + Name: opExportImage, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &ExportImageInput{} + } + + output = &ExportImageOutput{} + req = c.newRequest(op, input, output) + + return +} + +// ExportImage API operation for ECS. +// +// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the VOLCENGINE API reference guide for ECS's +// API operation ExportImage for usage and error information. +func (c *ECS) ExportImage(input *ExportImageInput) (*ExportImageOutput, error) { + req, out := c.ExportImageRequest(input) + return out, req.Send() +} + +// ExportImageWithContext is the same as ExportImage with the addition of +// the ability to pass a context and additional request options. +// +// See ExportImage for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur. +// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ECS) ExportImageWithContext(ctx volcengine.Context, input *ExportImageInput, opts ...request.Option) (*ExportImageOutput, error) { + req, out := c.ExportImageRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+ return out, req.Send() +} + +type ExportImageInput struct { + _ struct{} `type:"structure"` + + ImageId *string `type:"string"` + + TOSBucket *string `type:"string"` + + TOSPrefix *string `type:"string"` +} + +// String returns the string representation +func (s ExportImageInput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s ExportImageInput) GoString() string { + return s.String() +} + +// SetImageId sets the ImageId field's value. +func (s *ExportImageInput) SetImageId(v string) *ExportImageInput { + s.ImageId = &v + return s +} + +// SetTOSBucket sets the TOSBucket field's value. +func (s *ExportImageInput) SetTOSBucket(v string) *ExportImageInput { + s.TOSBucket = &v + return s +} + +// SetTOSPrefix sets the TOSPrefix field's value. +func (s *ExportImageInput) SetTOSPrefix(v string) *ExportImageInput { + s.TOSPrefix = &v + return s +} + +type ExportImageOutput struct { + _ struct{} `type:"structure"` + + Metadata *response.ResponseMetadata + + TaskId *string `type:"string"` +} + +// String returns the string representation +func (s ExportImageOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s ExportImageOutput) GoString() string { + return s.String() +} + +// SetTaskId sets the TaskId field's value. +func (s *ExportImageOutput) SetTaskId(v string) *ExportImageOutput { + s.TaskId = &v + return s +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_import_image.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_import_image.go new file mode 100644 index 000000000000..6d1aedc69750 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_import_image.go @@ -0,0 +1,250 @@ +// Code generated by volcengine with private/model/cli/gen-api/main.go. DO NOT EDIT. 
+ +package ecs + +import ( + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/response" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil" +) + +const opImportImageCommon = "ImportImage" + +// ImportImageCommonRequest generates a "volcengine/request.Request" representing the +// client's request for the ImportImageCommon operation. The "output" return +// value will be populated with the ImportImageCommon request's response once the request completes +// successfully. +// +// Use "Send" method on the returned ImportImageCommon Request to send the API call to the service. +// the "output" return value is not valid until after ImportImageCommon Send returns without error. +// +// See ImportImageCommon for more information on using the ImportImageCommon +// API call, and error handling. +// +// // Example sending a request using the ImportImageCommonRequest method. +// req, resp := client.ImportImageCommonRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *ECS) ImportImageCommonRequest(input *map[string]interface{}) (req *request.Request, output *map[string]interface{}) { + op := &request.Operation{ + Name: opImportImageCommon, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &map[string]interface{}{} + } + + output = &map[string]interface{}{} + req = c.newRequest(op, input, output) + + return +} + +// ImportImageCommon API operation for ECS. +// +// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. 
+// +// See the VOLCENGINE API reference guide for ECS's +// API operation ImportImageCommon for usage and error information. +func (c *ECS) ImportImageCommon(input *map[string]interface{}) (*map[string]interface{}, error) { + req, out := c.ImportImageCommonRequest(input) + return out, req.Send() +} + +// ImportImageCommonWithContext is the same as ImportImageCommon with the addition of +// the ability to pass a context and additional request options. +// +// See ImportImageCommon for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur. +// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ECS) ImportImageCommonWithContext(ctx volcengine.Context, input *map[string]interface{}, opts ...request.Option) (*map[string]interface{}, error) { + req, out := c.ImportImageCommonRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opImportImage = "ImportImage" + +// ImportImageRequest generates a "volcengine/request.Request" representing the +// client's request for the ImportImage operation. The "output" return +// value will be populated with the ImportImage request's response once the request completes +// successfully. +// +// Use "Send" method on the returned ImportImage Request to send the API call to the service. +// the "output" return value is not valid until after ImportImage Send returns without error. +// +// See ImportImage for more information on using the ImportImage +// API call, and error handling. +// +// // Example sending a request using the ImportImageRequest method. 
+// req, resp := client.ImportImageRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *ECS) ImportImageRequest(input *ImportImageInput) (req *request.Request, output *ImportImageOutput) { + op := &request.Operation{ + Name: opImportImage, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &ImportImageInput{} + } + + output = &ImportImageOutput{} + req = c.newRequest(op, input, output) + + return +} + +// ImportImage API operation for ECS. +// +// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the VOLCENGINE API reference guide for ECS's +// API operation ImportImage for usage and error information. +func (c *ECS) ImportImage(input *ImportImageInput) (*ImportImageOutput, error) { + req, out := c.ImportImageRequest(input) + return out, req.Send() +} + +// ImportImageWithContext is the same as ImportImage with the addition of +// the ability to pass a context and additional request options. +// +// See ImportImage for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur. +// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ECS) ImportImageWithContext(ctx volcengine.Context, input *ImportImageInput, opts ...request.Option) (*ImportImageOutput, error) { + req, out := c.ImportImageRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+ return out, req.Send() +} + +type ImportImageInput struct { + _ struct{} `type:"structure"` + + Architecture *string `type:"string"` + + BootMode *string `type:"string"` + + Description *string `type:"string"` + + ImageName *string `type:"string"` + + OsType *string `type:"string"` + + Platform *string `type:"string"` + + PlatformVersion *string `type:"string"` + + ProjectName *string `type:"string"` + + Url *string `type:"string"` +} + +// String returns the string representation +func (s ImportImageInput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s ImportImageInput) GoString() string { + return s.String() +} + +// SetArchitecture sets the Architecture field's value. +func (s *ImportImageInput) SetArchitecture(v string) *ImportImageInput { + s.Architecture = &v + return s +} + +// SetBootMode sets the BootMode field's value. +func (s *ImportImageInput) SetBootMode(v string) *ImportImageInput { + s.BootMode = &v + return s +} + +// SetDescription sets the Description field's value. +func (s *ImportImageInput) SetDescription(v string) *ImportImageInput { + s.Description = &v + return s +} + +// SetImageName sets the ImageName field's value. +func (s *ImportImageInput) SetImageName(v string) *ImportImageInput { + s.ImageName = &v + return s +} + +// SetOsType sets the OsType field's value. +func (s *ImportImageInput) SetOsType(v string) *ImportImageInput { + s.OsType = &v + return s +} + +// SetPlatform sets the Platform field's value. +func (s *ImportImageInput) SetPlatform(v string) *ImportImageInput { + s.Platform = &v + return s +} + +// SetPlatformVersion sets the PlatformVersion field's value. +func (s *ImportImageInput) SetPlatformVersion(v string) *ImportImageInput { + s.PlatformVersion = &v + return s +} + +// SetProjectName sets the ProjectName field's value. 
+func (s *ImportImageInput) SetProjectName(v string) *ImportImageInput { + s.ProjectName = &v + return s +} + +// SetUrl sets the Url field's value. +func (s *ImportImageInput) SetUrl(v string) *ImportImageInput { + s.Url = &v + return s +} + +type ImportImageOutput struct { + _ struct{} `type:"structure"` + + Metadata *response.ResponseMetadata + + ImageId *string `type:"string"` +} + +// String returns the string representation +func (s ImportImageOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s ImportImageOutput) GoString() string { + return s.String() +} + +// SetImageId sets the ImageId field's value. +func (s *ImportImageOutput) SetImageId(v string) *ImportImageOutput { + s.ImageId = &v + return s +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_import_key_pair.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_import_key_pair.go new file mode 100644 index 000000000000..73d77c56a690 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_import_key_pair.go @@ -0,0 +1,218 @@ +// Code generated by volcengine with private/model/cli/gen-api/main.go. DO NOT EDIT. + +package ecs + +import ( + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/response" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil" +) + +const opImportKeyPairCommon = "ImportKeyPair" + +// ImportKeyPairCommonRequest generates a "volcengine/request.Request" representing the +// client's request for the ImportKeyPairCommon operation. 
The "output" return +// value will be populated with the ImportKeyPairCommon request's response once the request completes +// successfully. +// +// Use "Send" method on the returned ImportKeyPairCommon Request to send the API call to the service. +// the "output" return value is not valid until after ImportKeyPairCommon Send returns without error. +// +// See ImportKeyPairCommon for more information on using the ImportKeyPairCommon +// API call, and error handling. +// +// // Example sending a request using the ImportKeyPairCommonRequest method. +// req, resp := client.ImportKeyPairCommonRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *ECS) ImportKeyPairCommonRequest(input *map[string]interface{}) (req *request.Request, output *map[string]interface{}) { + op := &request.Operation{ + Name: opImportKeyPairCommon, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &map[string]interface{}{} + } + + output = &map[string]interface{}{} + req = c.newRequest(op, input, output) + + return +} + +// ImportKeyPairCommon API operation for ECS. +// +// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the VOLCENGINE API reference guide for ECS's +// API operation ImportKeyPairCommon for usage and error information. +func (c *ECS) ImportKeyPairCommon(input *map[string]interface{}) (*map[string]interface{}, error) { + req, out := c.ImportKeyPairCommonRequest(input) + return out, req.Send() +} + +// ImportKeyPairCommonWithContext is the same as ImportKeyPairCommon with the addition of +// the ability to pass a context and additional request options. +// +// See ImportKeyPairCommon for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. 
If the context is nil a panic will occur. +// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ECS) ImportKeyPairCommonWithContext(ctx volcengine.Context, input *map[string]interface{}, opts ...request.Option) (*map[string]interface{}, error) { + req, out := c.ImportKeyPairCommonRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opImportKeyPair = "ImportKeyPair" + +// ImportKeyPairRequest generates a "volcengine/request.Request" representing the +// client's request for the ImportKeyPair operation. The "output" return +// value will be populated with the ImportKeyPair request's response once the request completes +// successfully. +// +// Use "Send" method on the returned ImportKeyPair Request to send the API call to the service. +// the "output" return value is not valid until after ImportKeyPair Send returns without error. +// +// See ImportKeyPair for more information on using the ImportKeyPair +// API call, and error handling. +// +// // Example sending a request using the ImportKeyPairRequest method. +// req, resp := client.ImportKeyPairRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *ECS) ImportKeyPairRequest(input *ImportKeyPairInput) (req *request.Request, output *ImportKeyPairOutput) { + op := &request.Operation{ + Name: opImportKeyPair, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &ImportKeyPairInput{} + } + + output = &ImportKeyPairOutput{} + req = c.newRequest(op, input, output) + + return +} + +// ImportKeyPair API operation for ECS. +// +// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. 
+// +// See the VOLCENGINE API reference guide for ECS's +// API operation ImportKeyPair for usage and error information. +func (c *ECS) ImportKeyPair(input *ImportKeyPairInput) (*ImportKeyPairOutput, error) { + req, out := c.ImportKeyPairRequest(input) + return out, req.Send() +} + +// ImportKeyPairWithContext is the same as ImportKeyPair with the addition of +// the ability to pass a context and additional request options. +// +// See ImportKeyPair for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur. +// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ECS) ImportKeyPairWithContext(ctx volcengine.Context, input *ImportKeyPairInput, opts ...request.Option) (*ImportKeyPairOutput, error) { + req, out := c.ImportKeyPairRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +type ImportKeyPairInput struct { + _ struct{} `type:"structure"` + + Description *string `type:"string"` + + KeyPairName *string `type:"string"` + + PublicKey *string `type:"string"` +} + +// String returns the string representation +func (s ImportKeyPairInput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s ImportKeyPairInput) GoString() string { + return s.String() +} + +// SetDescription sets the Description field's value. +func (s *ImportKeyPairInput) SetDescription(v string) *ImportKeyPairInput { + s.Description = &v + return s +} + +// SetKeyPairName sets the KeyPairName field's value. +func (s *ImportKeyPairInput) SetKeyPairName(v string) *ImportKeyPairInput { + s.KeyPairName = &v + return s +} + +// SetPublicKey sets the PublicKey field's value. 
+func (s *ImportKeyPairInput) SetPublicKey(v string) *ImportKeyPairInput { + s.PublicKey = &v + return s +} + +type ImportKeyPairOutput struct { + _ struct{} `type:"structure"` + + Metadata *response.ResponseMetadata + + FingerPrint *string `type:"string"` + + KeyPairId *string `type:"string"` + + KeyPairName *string `type:"string"` +} + +// String returns the string representation +func (s ImportKeyPairOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s ImportKeyPairOutput) GoString() string { + return s.String() +} + +// SetFingerPrint sets the FingerPrint field's value. +func (s *ImportKeyPairOutput) SetFingerPrint(v string) *ImportKeyPairOutput { + s.FingerPrint = &v + return s +} + +// SetKeyPairId sets the KeyPairId field's value. +func (s *ImportKeyPairOutput) SetKeyPairId(v string) *ImportKeyPairOutput { + s.KeyPairId = &v + return s +} + +// SetKeyPairName sets the KeyPairName field's value. +func (s *ImportKeyPairOutput) SetKeyPairName(v string) *ImportKeyPairOutput { + s.KeyPairName = &v + return s +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_modify_deployment_set_attribute.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_modify_deployment_set_attribute.go new file mode 100644 index 000000000000..502ab8867a34 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_modify_deployment_set_attribute.go @@ -0,0 +1,194 @@ +// Code generated by volcengine with private/model/cli/gen-api/main.go. DO NOT EDIT. 
+ +package ecs + +import ( + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/response" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil" +) + +const opModifyDeploymentSetAttributeCommon = "ModifyDeploymentSetAttribute" + +// ModifyDeploymentSetAttributeCommonRequest generates a "volcengine/request.Request" representing the +// client's request for the ModifyDeploymentSetAttributeCommon operation. The "output" return +// value will be populated with the ModifyDeploymentSetAttributeCommon request's response once the request completes +// successfully. +// +// Use "Send" method on the returned ModifyDeploymentSetAttributeCommon Request to send the API call to the service. +// the "output" return value is not valid until after ModifyDeploymentSetAttributeCommon Send returns without error. +// +// See ModifyDeploymentSetAttributeCommon for more information on using the ModifyDeploymentSetAttributeCommon +// API call, and error handling. +// +// // Example sending a request using the ModifyDeploymentSetAttributeCommonRequest method. +// req, resp := client.ModifyDeploymentSetAttributeCommonRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *ECS) ModifyDeploymentSetAttributeCommonRequest(input *map[string]interface{}) (req *request.Request, output *map[string]interface{}) { + op := &request.Operation{ + Name: opModifyDeploymentSetAttributeCommon, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &map[string]interface{}{} + } + + output = &map[string]interface{}{} + req = c.newRequest(op, input, output) + + return +} + +// ModifyDeploymentSetAttributeCommon API operation for ECS. 
+// +// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the VOLCENGINE API reference guide for ECS's +// API operation ModifyDeploymentSetAttributeCommon for usage and error information. +func (c *ECS) ModifyDeploymentSetAttributeCommon(input *map[string]interface{}) (*map[string]interface{}, error) { + req, out := c.ModifyDeploymentSetAttributeCommonRequest(input) + return out, req.Send() +} + +// ModifyDeploymentSetAttributeCommonWithContext is the same as ModifyDeploymentSetAttributeCommon with the addition of +// the ability to pass a context and additional request options. +// +// See ModifyDeploymentSetAttributeCommon for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur. +// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ECS) ModifyDeploymentSetAttributeCommonWithContext(ctx volcengine.Context, input *map[string]interface{}, opts ...request.Option) (*map[string]interface{}, error) { + req, out := c.ModifyDeploymentSetAttributeCommonRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opModifyDeploymentSetAttribute = "ModifyDeploymentSetAttribute" + +// ModifyDeploymentSetAttributeRequest generates a "volcengine/request.Request" representing the +// client's request for the ModifyDeploymentSetAttribute operation. The "output" return +// value will be populated with the ModifyDeploymentSetAttribute request's response once the request completes +// successfully. +// +// Use "Send" method on the returned ModifyDeploymentSetAttribute Request to send the API call to the service. 
+// the "output" return value is not valid until after ModifyDeploymentSetAttribute Send returns without error. +// +// See ModifyDeploymentSetAttribute for more information on using the ModifyDeploymentSetAttribute +// API call, and error handling. +// +// // Example sending a request using the ModifyDeploymentSetAttributeRequest method. +// req, resp := client.ModifyDeploymentSetAttributeRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *ECS) ModifyDeploymentSetAttributeRequest(input *ModifyDeploymentSetAttributeInput) (req *request.Request, output *ModifyDeploymentSetAttributeOutput) { + op := &request.Operation{ + Name: opModifyDeploymentSetAttribute, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &ModifyDeploymentSetAttributeInput{} + } + + output = &ModifyDeploymentSetAttributeOutput{} + req = c.newRequest(op, input, output) + + return +} + +// ModifyDeploymentSetAttribute API operation for ECS. +// +// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the VOLCENGINE API reference guide for ECS's +// API operation ModifyDeploymentSetAttribute for usage and error information. +func (c *ECS) ModifyDeploymentSetAttribute(input *ModifyDeploymentSetAttributeInput) (*ModifyDeploymentSetAttributeOutput, error) { + req, out := c.ModifyDeploymentSetAttributeRequest(input) + return out, req.Send() +} + +// ModifyDeploymentSetAttributeWithContext is the same as ModifyDeploymentSetAttribute with the addition of +// the ability to pass a context and additional request options. +// +// See ModifyDeploymentSetAttribute for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur. 
+// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ECS) ModifyDeploymentSetAttributeWithContext(ctx volcengine.Context, input *ModifyDeploymentSetAttributeInput, opts ...request.Option) (*ModifyDeploymentSetAttributeOutput, error) { + req, out := c.ModifyDeploymentSetAttributeRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +type ModifyDeploymentSetAttributeInput struct { + _ struct{} `type:"structure"` + + DeploymentSetId *string `type:"string"` + + DeploymentSetName *string `type:"string"` + + Description *string `type:"string"` +} + +// String returns the string representation +func (s ModifyDeploymentSetAttributeInput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s ModifyDeploymentSetAttributeInput) GoString() string { + return s.String() +} + +// SetDeploymentSetId sets the DeploymentSetId field's value. +func (s *ModifyDeploymentSetAttributeInput) SetDeploymentSetId(v string) *ModifyDeploymentSetAttributeInput { + s.DeploymentSetId = &v + return s +} + +// SetDeploymentSetName sets the DeploymentSetName field's value. +func (s *ModifyDeploymentSetAttributeInput) SetDeploymentSetName(v string) *ModifyDeploymentSetAttributeInput { + s.DeploymentSetName = &v + return s +} + +// SetDescription sets the Description field's value. 
+func (s *ModifyDeploymentSetAttributeInput) SetDescription(v string) *ModifyDeploymentSetAttributeInput { + s.Description = &v + return s +} + +type ModifyDeploymentSetAttributeOutput struct { + _ struct{} `type:"structure"` + + Metadata *response.ResponseMetadata +} + +// String returns the string representation +func (s ModifyDeploymentSetAttributeOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s ModifyDeploymentSetAttributeOutput) GoString() string { + return s.String() +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_modify_image_attribute.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_modify_image_attribute.go new file mode 100644 index 000000000000..2931d3f30d39 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_modify_image_attribute.go @@ -0,0 +1,202 @@ +// Code generated by volcengine with private/model/cli/gen-api/main.go. DO NOT EDIT. + +package ecs + +import ( + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/response" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil" +) + +const opModifyImageAttributeCommon = "ModifyImageAttribute" + +// ModifyImageAttributeCommonRequest generates a "volcengine/request.Request" representing the +// client's request for the ModifyImageAttributeCommon operation. The "output" return +// value will be populated with the ModifyImageAttributeCommon request's response once the request completes +// successfully. +// +// Use "Send" method on the returned ModifyImageAttributeCommon Request to send the API call to the service. 
+// the "output" return value is not valid until after ModifyImageAttributeCommon Send returns without error. +// +// See ModifyImageAttributeCommon for more information on using the ModifyImageAttributeCommon +// API call, and error handling. +// +// // Example sending a request using the ModifyImageAttributeCommonRequest method. +// req, resp := client.ModifyImageAttributeCommonRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *ECS) ModifyImageAttributeCommonRequest(input *map[string]interface{}) (req *request.Request, output *map[string]interface{}) { + op := &request.Operation{ + Name: opModifyImageAttributeCommon, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &map[string]interface{}{} + } + + output = &map[string]interface{}{} + req = c.newRequest(op, input, output) + + return +} + +// ModifyImageAttributeCommon API operation for ECS. +// +// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the VOLCENGINE API reference guide for ECS's +// API operation ModifyImageAttributeCommon for usage and error information. +func (c *ECS) ModifyImageAttributeCommon(input *map[string]interface{}) (*map[string]interface{}, error) { + req, out := c.ModifyImageAttributeCommonRequest(input) + return out, req.Send() +} + +// ModifyImageAttributeCommonWithContext is the same as ModifyImageAttributeCommon with the addition of +// the ability to pass a context and additional request options. +// +// See ModifyImageAttributeCommon for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur. +// In the future the SDK may create sub-contexts for http.Requests. 
See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ECS) ModifyImageAttributeCommonWithContext(ctx volcengine.Context, input *map[string]interface{}, opts ...request.Option) (*map[string]interface{}, error) { + req, out := c.ModifyImageAttributeCommonRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opModifyImageAttribute = "ModifyImageAttribute" + +// ModifyImageAttributeRequest generates a "volcengine/request.Request" representing the +// client's request for the ModifyImageAttribute operation. The "output" return +// value will be populated with the ModifyImageAttribute request's response once the request completes +// successfully. +// +// Use "Send" method on the returned ModifyImageAttribute Request to send the API call to the service. +// the "output" return value is not valid until after ModifyImageAttribute Send returns without error. +// +// See ModifyImageAttribute for more information on using the ModifyImageAttribute +// API call, and error handling. +// +// // Example sending a request using the ModifyImageAttributeRequest method. +// req, resp := client.ModifyImageAttributeRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *ECS) ModifyImageAttributeRequest(input *ModifyImageAttributeInput) (req *request.Request, output *ModifyImageAttributeOutput) { + op := &request.Operation{ + Name: opModifyImageAttribute, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &ModifyImageAttributeInput{} + } + + output = &ModifyImageAttributeOutput{} + req = c.newRequest(op, input, output) + + return +} + +// ModifyImageAttribute API operation for ECS. +// +// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. 
+// +// See the VOLCENGINE API reference guide for ECS's +// API operation ModifyImageAttribute for usage and error information. +func (c *ECS) ModifyImageAttribute(input *ModifyImageAttributeInput) (*ModifyImageAttributeOutput, error) { + req, out := c.ModifyImageAttributeRequest(input) + return out, req.Send() +} + +// ModifyImageAttributeWithContext is the same as ModifyImageAttribute with the addition of +// the ability to pass a context and additional request options. +// +// See ModifyImageAttribute for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur. +// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ECS) ModifyImageAttributeWithContext(ctx volcengine.Context, input *ModifyImageAttributeInput, opts ...request.Option) (*ModifyImageAttributeOutput, error) { + req, out := c.ModifyImageAttributeRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +type ModifyImageAttributeInput struct { + _ struct{} `type:"structure"` + + BootMode *string `type:"string"` + + Description *string `type:"string"` + + ImageId *string `type:"string"` + + ImageName *string `type:"string"` +} + +// String returns the string representation +func (s ModifyImageAttributeInput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s ModifyImageAttributeInput) GoString() string { + return s.String() +} + +// SetBootMode sets the BootMode field's value. +func (s *ModifyImageAttributeInput) SetBootMode(v string) *ModifyImageAttributeInput { + s.BootMode = &v + return s +} + +// SetDescription sets the Description field's value. 
+func (s *ModifyImageAttributeInput) SetDescription(v string) *ModifyImageAttributeInput { + s.Description = &v + return s +} + +// SetImageId sets the ImageId field's value. +func (s *ModifyImageAttributeInput) SetImageId(v string) *ModifyImageAttributeInput { + s.ImageId = &v + return s +} + +// SetImageName sets the ImageName field's value. +func (s *ModifyImageAttributeInput) SetImageName(v string) *ModifyImageAttributeInput { + s.ImageName = &v + return s +} + +type ModifyImageAttributeOutput struct { + _ struct{} `type:"structure"` + + Metadata *response.ResponseMetadata +} + +// String returns the string representation +func (s ModifyImageAttributeOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s ModifyImageAttributeOutput) GoString() string { + return s.String() +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_modify_image_share_permission.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_modify_image_share_permission.go new file mode 100644 index 000000000000..2e2784e24e18 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_modify_image_share_permission.go @@ -0,0 +1,194 @@ +// Code generated by volcengine with private/model/cli/gen-api/main.go. DO NOT EDIT. 
+ +package ecs + +import ( + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/response" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil" +) + +const opModifyImageSharePermissionCommon = "ModifyImageSharePermission" + +// ModifyImageSharePermissionCommonRequest generates a "volcengine/request.Request" representing the +// client's request for the ModifyImageSharePermissionCommon operation. The "output" return +// value will be populated with the ModifyImageSharePermissionCommon request's response once the request completes +// successfully. +// +// Use "Send" method on the returned ModifyImageSharePermissionCommon Request to send the API call to the service. +// the "output" return value is not valid until after ModifyImageSharePermissionCommon Send returns without error. +// +// See ModifyImageSharePermissionCommon for more information on using the ModifyImageSharePermissionCommon +// API call, and error handling. +// +// // Example sending a request using the ModifyImageSharePermissionCommonRequest method. +// req, resp := client.ModifyImageSharePermissionCommonRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *ECS) ModifyImageSharePermissionCommonRequest(input *map[string]interface{}) (req *request.Request, output *map[string]interface{}) { + op := &request.Operation{ + Name: opModifyImageSharePermissionCommon, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &map[string]interface{}{} + } + + output = &map[string]interface{}{} + req = c.newRequest(op, input, output) + + return +} + +// ModifyImageSharePermissionCommon API operation for ECS. 
+// +// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the VOLCENGINE API reference guide for ECS's +// API operation ModifyImageSharePermissionCommon for usage and error information. +func (c *ECS) ModifyImageSharePermissionCommon(input *map[string]interface{}) (*map[string]interface{}, error) { + req, out := c.ModifyImageSharePermissionCommonRequest(input) + return out, req.Send() +} + +// ModifyImageSharePermissionCommonWithContext is the same as ModifyImageSharePermissionCommon with the addition of +// the ability to pass a context and additional request options. +// +// See ModifyImageSharePermissionCommon for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur. +// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ECS) ModifyImageSharePermissionCommonWithContext(ctx volcengine.Context, input *map[string]interface{}, opts ...request.Option) (*map[string]interface{}, error) { + req, out := c.ModifyImageSharePermissionCommonRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opModifyImageSharePermission = "ModifyImageSharePermission" + +// ModifyImageSharePermissionRequest generates a "volcengine/request.Request" representing the +// client's request for the ModifyImageSharePermission operation. The "output" return +// value will be populated with the ModifyImageSharePermission request's response once the request completes +// successfully. +// +// Use "Send" method on the returned ModifyImageSharePermission Request to send the API call to the service. 
+// the "output" return value is not valid until after ModifyImageSharePermission Send returns without error. +// +// See ModifyImageSharePermission for more information on using the ModifyImageSharePermission +// API call, and error handling. +// +// // Example sending a request using the ModifyImageSharePermissionRequest method. +// req, resp := client.ModifyImageSharePermissionRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *ECS) ModifyImageSharePermissionRequest(input *ModifyImageSharePermissionInput) (req *request.Request, output *ModifyImageSharePermissionOutput) { + op := &request.Operation{ + Name: opModifyImageSharePermission, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &ModifyImageSharePermissionInput{} + } + + output = &ModifyImageSharePermissionOutput{} + req = c.newRequest(op, input, output) + + return +} + +// ModifyImageSharePermission API operation for ECS. +// +// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the VOLCENGINE API reference guide for ECS's +// API operation ModifyImageSharePermission for usage and error information. +func (c *ECS) ModifyImageSharePermission(input *ModifyImageSharePermissionInput) (*ModifyImageSharePermissionOutput, error) { + req, out := c.ModifyImageSharePermissionRequest(input) + return out, req.Send() +} + +// ModifyImageSharePermissionWithContext is the same as ModifyImageSharePermission with the addition of +// the ability to pass a context and additional request options. +// +// See ModifyImageSharePermission for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur. +// In the future the SDK may create sub-contexts for http.Requests. 
See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ECS) ModifyImageSharePermissionWithContext(ctx volcengine.Context, input *ModifyImageSharePermissionInput, opts ...request.Option) (*ModifyImageSharePermissionOutput, error) { + req, out := c.ModifyImageSharePermissionRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +type ModifyImageSharePermissionInput struct { + _ struct{} `type:"structure"` + + AddAccounts []*string `type:"list"` + + ImageId *string `type:"string"` + + RemoveAccounts []*string `type:"list"` +} + +// String returns the string representation +func (s ModifyImageSharePermissionInput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s ModifyImageSharePermissionInput) GoString() string { + return s.String() +} + +// SetAddAccounts sets the AddAccounts field's value. +func (s *ModifyImageSharePermissionInput) SetAddAccounts(v []*string) *ModifyImageSharePermissionInput { + s.AddAccounts = v + return s +} + +// SetImageId sets the ImageId field's value. +func (s *ModifyImageSharePermissionInput) SetImageId(v string) *ModifyImageSharePermissionInput { + s.ImageId = &v + return s +} + +// SetRemoveAccounts sets the RemoveAccounts field's value. 
+func (s *ModifyImageSharePermissionInput) SetRemoveAccounts(v []*string) *ModifyImageSharePermissionInput { + s.RemoveAccounts = v + return s +} + +type ModifyImageSharePermissionOutput struct { + _ struct{} `type:"structure"` + + Metadata *response.ResponseMetadata +} + +// String returns the string representation +func (s ModifyImageSharePermissionOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s ModifyImageSharePermissionOutput) GoString() string { + return s.String() +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_modify_instance_attribute.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_modify_instance_attribute.go new file mode 100644 index 000000000000..530562fee61f --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_modify_instance_attribute.go @@ -0,0 +1,210 @@ +// Code generated by volcengine with private/model/cli/gen-api/main.go. DO NOT EDIT. + +package ecs + +import ( + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/response" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil" +) + +const opModifyInstanceAttributeCommon = "ModifyInstanceAttribute" + +// ModifyInstanceAttributeCommonRequest generates a "volcengine/request.Request" representing the +// client's request for the ModifyInstanceAttributeCommon operation. The "output" return +// value will be populated with the ModifyInstanceAttributeCommon request's response once the request completes +// successfully. 
+// +// Use "Send" method on the returned ModifyInstanceAttributeCommon Request to send the API call to the service. +// the "output" return value is not valid until after ModifyInstanceAttributeCommon Send returns without error. +// +// See ModifyInstanceAttributeCommon for more information on using the ModifyInstanceAttributeCommon +// API call, and error handling. +// +// // Example sending a request using the ModifyInstanceAttributeCommonRequest method. +// req, resp := client.ModifyInstanceAttributeCommonRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *ECS) ModifyInstanceAttributeCommonRequest(input *map[string]interface{}) (req *request.Request, output *map[string]interface{}) { + op := &request.Operation{ + Name: opModifyInstanceAttributeCommon, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &map[string]interface{}{} + } + + output = &map[string]interface{}{} + req = c.newRequest(op, input, output) + + return +} + +// ModifyInstanceAttributeCommon API operation for ECS. +// +// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the VOLCENGINE API reference guide for ECS's +// API operation ModifyInstanceAttributeCommon for usage and error information. +func (c *ECS) ModifyInstanceAttributeCommon(input *map[string]interface{}) (*map[string]interface{}, error) { + req, out := c.ModifyInstanceAttributeCommonRequest(input) + return out, req.Send() +} + +// ModifyInstanceAttributeCommonWithContext is the same as ModifyInstanceAttributeCommon with the addition of +// the ability to pass a context and additional request options. +// +// See ModifyInstanceAttributeCommon for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. 
If the context is nil a panic will occur. +// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ECS) ModifyInstanceAttributeCommonWithContext(ctx volcengine.Context, input *map[string]interface{}, opts ...request.Option) (*map[string]interface{}, error) { + req, out := c.ModifyInstanceAttributeCommonRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opModifyInstanceAttribute = "ModifyInstanceAttribute" + +// ModifyInstanceAttributeRequest generates a "volcengine/request.Request" representing the +// client's request for the ModifyInstanceAttribute operation. The "output" return +// value will be populated with the ModifyInstanceAttribute request's response once the request completes +// successfully. +// +// Use "Send" method on the returned ModifyInstanceAttribute Request to send the API call to the service. +// the "output" return value is not valid until after ModifyInstanceAttribute Send returns without error. +// +// See ModifyInstanceAttribute for more information on using the ModifyInstanceAttribute +// API call, and error handling. +// +// // Example sending a request using the ModifyInstanceAttributeRequest method. +// req, resp := client.ModifyInstanceAttributeRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *ECS) ModifyInstanceAttributeRequest(input *ModifyInstanceAttributeInput) (req *request.Request, output *ModifyInstanceAttributeOutput) { + op := &request.Operation{ + Name: opModifyInstanceAttribute, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &ModifyInstanceAttributeInput{} + } + + output = &ModifyInstanceAttributeOutput{} + req = c.newRequest(op, input, output) + + return +} + +// ModifyInstanceAttribute API operation for ECS. 
+// +// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the VOLCENGINE API reference guide for ECS's +// API operation ModifyInstanceAttribute for usage and error information. +func (c *ECS) ModifyInstanceAttribute(input *ModifyInstanceAttributeInput) (*ModifyInstanceAttributeOutput, error) { + req, out := c.ModifyInstanceAttributeRequest(input) + return out, req.Send() +} + +// ModifyInstanceAttributeWithContext is the same as ModifyInstanceAttribute with the addition of +// the ability to pass a context and additional request options. +// +// See ModifyInstanceAttribute for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur. +// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ECS) ModifyInstanceAttributeWithContext(ctx volcengine.Context, input *ModifyInstanceAttributeInput, opts ...request.Option) (*ModifyInstanceAttributeOutput, error) { + req, out := c.ModifyInstanceAttributeRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +type ModifyInstanceAttributeInput struct { + _ struct{} `type:"structure"` + + Description *string `type:"string"` + + InstanceId *string `type:"string"` + + InstanceName *string `type:"string"` + + Password *string `type:"string"` + + UserData *string `type:"string"` +} + +// String returns the string representation +func (s ModifyInstanceAttributeInput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s ModifyInstanceAttributeInput) GoString() string { + return s.String() +} + +// SetDescription sets the Description field's value. 
+func (s *ModifyInstanceAttributeInput) SetDescription(v string) *ModifyInstanceAttributeInput { + s.Description = &v + return s +} + +// SetInstanceId sets the InstanceId field's value. +func (s *ModifyInstanceAttributeInput) SetInstanceId(v string) *ModifyInstanceAttributeInput { + s.InstanceId = &v + return s +} + +// SetInstanceName sets the InstanceName field's value. +func (s *ModifyInstanceAttributeInput) SetInstanceName(v string) *ModifyInstanceAttributeInput { + s.InstanceName = &v + return s +} + +// SetPassword sets the Password field's value. +func (s *ModifyInstanceAttributeInput) SetPassword(v string) *ModifyInstanceAttributeInput { + s.Password = &v + return s +} + +// SetUserData sets the UserData field's value. +func (s *ModifyInstanceAttributeInput) SetUserData(v string) *ModifyInstanceAttributeInput { + s.UserData = &v + return s +} + +type ModifyInstanceAttributeOutput struct { + _ struct{} `type:"structure"` + + Metadata *response.ResponseMetadata +} + +// String returns the string representation +func (s ModifyInstanceAttributeOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s ModifyInstanceAttributeOutput) GoString() string { + return s.String() +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_modify_instance_charge_type.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_modify_instance_charge_type.go new file mode 100644 index 000000000000..12e143f7b5bc --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_modify_instance_charge_type.go @@ -0,0 +1,234 @@ +// Code generated by volcengine with private/model/cli/gen-api/main.go. DO NOT EDIT. 
+ +package ecs + +import ( + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/response" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil" +) + +const opModifyInstanceChargeTypeCommon = "ModifyInstanceChargeType" + +// ModifyInstanceChargeTypeCommonRequest generates a "volcengine/request.Request" representing the +// client's request for the ModifyInstanceChargeTypeCommon operation. The "output" return +// value will be populated with the ModifyInstanceChargeTypeCommon request's response once the request completes +// successfully. +// +// Use "Send" method on the returned ModifyInstanceChargeTypeCommon Request to send the API call to the service. +// the "output" return value is not valid until after ModifyInstanceChargeTypeCommon Send returns without error. +// +// See ModifyInstanceChargeTypeCommon for more information on using the ModifyInstanceChargeTypeCommon +// API call, and error handling. +// +// // Example sending a request using the ModifyInstanceChargeTypeCommonRequest method. +// req, resp := client.ModifyInstanceChargeTypeCommonRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *ECS) ModifyInstanceChargeTypeCommonRequest(input *map[string]interface{}) (req *request.Request, output *map[string]interface{}) { + op := &request.Operation{ + Name: opModifyInstanceChargeTypeCommon, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &map[string]interface{}{} + } + + output = &map[string]interface{}{} + req = c.newRequest(op, input, output) + + return +} + +// ModifyInstanceChargeTypeCommon API operation for ECS. 
+// +// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the VOLCENGINE API reference guide for ECS's +// API operation ModifyInstanceChargeTypeCommon for usage and error information. +func (c *ECS) ModifyInstanceChargeTypeCommon(input *map[string]interface{}) (*map[string]interface{}, error) { + req, out := c.ModifyInstanceChargeTypeCommonRequest(input) + return out, req.Send() +} + +// ModifyInstanceChargeTypeCommonWithContext is the same as ModifyInstanceChargeTypeCommon with the addition of +// the ability to pass a context and additional request options. +// +// See ModifyInstanceChargeTypeCommon for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur. +// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ECS) ModifyInstanceChargeTypeCommonWithContext(ctx volcengine.Context, input *map[string]interface{}, opts ...request.Option) (*map[string]interface{}, error) { + req, out := c.ModifyInstanceChargeTypeCommonRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opModifyInstanceChargeType = "ModifyInstanceChargeType" + +// ModifyInstanceChargeTypeRequest generates a "volcengine/request.Request" representing the +// client's request for the ModifyInstanceChargeType operation. The "output" return +// value will be populated with the ModifyInstanceChargeType request's response once the request completes +// successfully. +// +// Use "Send" method on the returned ModifyInstanceChargeType Request to send the API call to the service. 
+// the "output" return value is not valid until after ModifyInstanceChargeType Send returns without error. +// +// See ModifyInstanceChargeType for more information on using the ModifyInstanceChargeType +// API call, and error handling. +// +// // Example sending a request using the ModifyInstanceChargeTypeRequest method. +// req, resp := client.ModifyInstanceChargeTypeRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *ECS) ModifyInstanceChargeTypeRequest(input *ModifyInstanceChargeTypeInput) (req *request.Request, output *ModifyInstanceChargeTypeOutput) { + op := &request.Operation{ + Name: opModifyInstanceChargeType, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &ModifyInstanceChargeTypeInput{} + } + + output = &ModifyInstanceChargeTypeOutput{} + req = c.newRequest(op, input, output) + + return +} + +// ModifyInstanceChargeType API operation for ECS. +// +// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the VOLCENGINE API reference guide for ECS's +// API operation ModifyInstanceChargeType for usage and error information. +func (c *ECS) ModifyInstanceChargeType(input *ModifyInstanceChargeTypeInput) (*ModifyInstanceChargeTypeOutput, error) { + req, out := c.ModifyInstanceChargeTypeRequest(input) + return out, req.Send() +} + +// ModifyInstanceChargeTypeWithContext is the same as ModifyInstanceChargeType with the addition of +// the ability to pass a context and additional request options. +// +// See ModifyInstanceChargeType for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur. +// In the future the SDK may create sub-contexts for http.Requests. 
See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ECS) ModifyInstanceChargeTypeWithContext(ctx volcengine.Context, input *ModifyInstanceChargeTypeInput, opts ...request.Option) (*ModifyInstanceChargeTypeOutput, error) { + req, out := c.ModifyInstanceChargeTypeRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +type ModifyInstanceChargeTypeInput struct { + _ struct{} `type:"structure"` + + AutoPay *bool `type:"boolean"` + + ClientToken *string `type:"string"` + + IncludeDataVolumes *bool `type:"boolean"` + + InstanceChargeType *string `type:"string"` + + InstanceIds []*string `type:"list"` + + Period *int32 `type:"int32"` + + PeriodUnit *string `type:"string"` +} + +// String returns the string representation +func (s ModifyInstanceChargeTypeInput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s ModifyInstanceChargeTypeInput) GoString() string { + return s.String() +} + +// SetAutoPay sets the AutoPay field's value. +func (s *ModifyInstanceChargeTypeInput) SetAutoPay(v bool) *ModifyInstanceChargeTypeInput { + s.AutoPay = &v + return s +} + +// SetClientToken sets the ClientToken field's value. +func (s *ModifyInstanceChargeTypeInput) SetClientToken(v string) *ModifyInstanceChargeTypeInput { + s.ClientToken = &v + return s +} + +// SetIncludeDataVolumes sets the IncludeDataVolumes field's value. +func (s *ModifyInstanceChargeTypeInput) SetIncludeDataVolumes(v bool) *ModifyInstanceChargeTypeInput { + s.IncludeDataVolumes = &v + return s +} + +// SetInstanceChargeType sets the InstanceChargeType field's value. +func (s *ModifyInstanceChargeTypeInput) SetInstanceChargeType(v string) *ModifyInstanceChargeTypeInput { + s.InstanceChargeType = &v + return s +} + +// SetInstanceIds sets the InstanceIds field's value. 
+func (s *ModifyInstanceChargeTypeInput) SetInstanceIds(v []*string) *ModifyInstanceChargeTypeInput { + s.InstanceIds = v + return s +} + +// SetPeriod sets the Period field's value. +func (s *ModifyInstanceChargeTypeInput) SetPeriod(v int32) *ModifyInstanceChargeTypeInput { + s.Period = &v + return s +} + +// SetPeriodUnit sets the PeriodUnit field's value. +func (s *ModifyInstanceChargeTypeInput) SetPeriodUnit(v string) *ModifyInstanceChargeTypeInput { + s.PeriodUnit = &v + return s +} + +type ModifyInstanceChargeTypeOutput struct { + _ struct{} `type:"structure"` + + Metadata *response.ResponseMetadata + + OrderId *string `type:"string"` +} + +// String returns the string representation +func (s ModifyInstanceChargeTypeOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s ModifyInstanceChargeTypeOutput) GoString() string { + return s.String() +} + +// SetOrderId sets the OrderId field's value. +func (s *ModifyInstanceChargeTypeOutput) SetOrderId(v string) *ModifyInstanceChargeTypeOutput { + s.OrderId = &v + return s +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_modify_instance_deployment.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_modify_instance_deployment.go new file mode 100644 index 000000000000..b4f6b0a83e94 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_modify_instance_deployment.go @@ -0,0 +1,186 @@ +// Code generated by volcengine with private/model/cli/gen-api/main.go. DO NOT EDIT. 
+ +package ecs + +import ( + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/response" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil" +) + +const opModifyInstanceDeploymentCommon = "ModifyInstanceDeployment" + +// ModifyInstanceDeploymentCommonRequest generates a "volcengine/request.Request" representing the +// client's request for the ModifyInstanceDeploymentCommon operation. The "output" return +// value will be populated with the ModifyInstanceDeploymentCommon request's response once the request completes +// successfully. +// +// Use "Send" method on the returned ModifyInstanceDeploymentCommon Request to send the API call to the service. +// the "output" return value is not valid until after ModifyInstanceDeploymentCommon Send returns without error. +// +// See ModifyInstanceDeploymentCommon for more information on using the ModifyInstanceDeploymentCommon +// API call, and error handling. +// +// // Example sending a request using the ModifyInstanceDeploymentCommonRequest method. +// req, resp := client.ModifyInstanceDeploymentCommonRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *ECS) ModifyInstanceDeploymentCommonRequest(input *map[string]interface{}) (req *request.Request, output *map[string]interface{}) { + op := &request.Operation{ + Name: opModifyInstanceDeploymentCommon, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &map[string]interface{}{} + } + + output = &map[string]interface{}{} + req = c.newRequest(op, input, output) + + return +} + +// ModifyInstanceDeploymentCommon API operation for ECS. 
+//
+// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions
+// with volcengineerr.Error's Code and Message methods to get detailed information about
+// the error.
+//
+// See the VOLCENGINE API reference guide for ECS's
+// API operation ModifyInstanceDeploymentCommon for usage and error information.
+func (c *ECS) ModifyInstanceDeploymentCommon(input *map[string]interface{}) (*map[string]interface{}, error) {
+	req, out := c.ModifyInstanceDeploymentCommonRequest(input)
+	return out, req.Send()
+}
+
+// ModifyInstanceDeploymentCommonWithContext is the same as ModifyInstanceDeploymentCommon with the addition of
+// the ability to pass a context and additional request options.
+//
+// See ModifyInstanceDeploymentCommon for details on how to use this API operation.
+//
+// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur.
+// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/
+// for more information on using Contexts.
+func (c *ECS) ModifyInstanceDeploymentCommonWithContext(ctx volcengine.Context, input *map[string]interface{}, opts ...request.Option) (*map[string]interface{}, error) {
+	req, out := c.ModifyInstanceDeploymentCommonRequest(input)
+	req.SetContext(ctx)
+	req.ApplyOptions(opts...)
+	return out, req.Send()
+}
+
+const opModifyInstanceDeployment = "ModifyInstanceDeployment"
+
+// ModifyInstanceDeploymentRequest generates a "volcengine/request.Request" representing the
+// client's request for the ModifyInstanceDeployment operation. The "output" return
+// value will be populated with the ModifyInstanceDeployment request's response once the request completes
+// successfully.
+//
+// Use "Send" method on the returned ModifyInstanceDeployment Request to send the API call to the service.
+// the "output" return value is not valid until after ModifyInstanceDeploymentCommon Send returns without error. +// +// See ModifyInstanceDeployment for more information on using the ModifyInstanceDeployment +// API call, and error handling. +// +// // Example sending a request using the ModifyInstanceDeploymentRequest method. +// req, resp := client.ModifyInstanceDeploymentRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *ECS) ModifyInstanceDeploymentRequest(input *ModifyInstanceDeploymentInput) (req *request.Request, output *ModifyInstanceDeploymentOutput) { + op := &request.Operation{ + Name: opModifyInstanceDeployment, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &ModifyInstanceDeploymentInput{} + } + + output = &ModifyInstanceDeploymentOutput{} + req = c.newRequest(op, input, output) + + return +} + +// ModifyInstanceDeployment API operation for ECS. +// +// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the VOLCENGINE API reference guide for ECS's +// API operation ModifyInstanceDeployment for usage and error information. +func (c *ECS) ModifyInstanceDeployment(input *ModifyInstanceDeploymentInput) (*ModifyInstanceDeploymentOutput, error) { + req, out := c.ModifyInstanceDeploymentRequest(input) + return out, req.Send() +} + +// ModifyInstanceDeploymentWithContext is the same as ModifyInstanceDeployment with the addition of +// the ability to pass a context and additional request options. +// +// See ModifyInstanceDeployment for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. Ifthe context is nil a panic will occur. +// In the future the SDK may create sub-contexts for http.Requests. 
See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ECS) ModifyInstanceDeploymentWithContext(ctx volcengine.Context, input *ModifyInstanceDeploymentInput, opts ...request.Option) (*ModifyInstanceDeploymentOutput, error) { + req, out := c.ModifyInstanceDeploymentRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +type ModifyInstanceDeploymentInput struct { + _ struct{} `type:"structure"` + + DeploymentSetId *string `type:"string"` + + InstanceId *string `type:"string"` +} + +// String returns the string representation +func (s ModifyInstanceDeploymentInput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s ModifyInstanceDeploymentInput) GoString() string { + return s.String() +} + +// SetDeploymentSetId sets the DeploymentSetId field's value. +func (s *ModifyInstanceDeploymentInput) SetDeploymentSetId(v string) *ModifyInstanceDeploymentInput { + s.DeploymentSetId = &v + return s +} + +// SetInstanceId sets the InstanceId field's value. 
+func (s *ModifyInstanceDeploymentInput) SetInstanceId(v string) *ModifyInstanceDeploymentInput { + s.InstanceId = &v + return s +} + +type ModifyInstanceDeploymentOutput struct { + _ struct{} `type:"structure"` + + Metadata *response.ResponseMetadata +} + +// String returns the string representation +func (s ModifyInstanceDeploymentOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s ModifyInstanceDeploymentOutput) GoString() string { + return s.String() +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_modify_instance_spec.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_modify_instance_spec.go new file mode 100644 index 000000000000..659e11bd8db5 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_modify_instance_spec.go @@ -0,0 +1,210 @@ +// Code generated by volcengine with private/model/cli/gen-api/main.go. DO NOT EDIT. + +package ecs + +import ( + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/response" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil" +) + +const opModifyInstanceSpecCommon = "ModifyInstanceSpec" + +// ModifyInstanceSpecCommonRequest generates a "volcengine/request.Request" representing the +// client's request for the ModifyInstanceSpecCommon operation. The "output" return +// value will be populated with the ModifyInstanceSpecCommon request's response once the request completes +// successfully. +// +// Use "Send" method on the returned ModifyInstanceSpecCommon Request to send the API call to the service. 
+// the "output" return value is not valid until after ModifyInstanceSpecCommon Send returns without error. +// +// See ModifyInstanceSpecCommon for more information on using the ModifyInstanceSpecCommon +// API call, and error handling. +// +// // Example sending a request using the ModifyInstanceSpecCommonRequest method. +// req, resp := client.ModifyInstanceSpecCommonRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *ECS) ModifyInstanceSpecCommonRequest(input *map[string]interface{}) (req *request.Request, output *map[string]interface{}) { + op := &request.Operation{ + Name: opModifyInstanceSpecCommon, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &map[string]interface{}{} + } + + output = &map[string]interface{}{} + req = c.newRequest(op, input, output) + + return +} + +// ModifyInstanceSpecCommon API operation for ECS. +// +// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the VOLCENGINE API reference guide for ECS's +// API operation ModifyInstanceSpecCommon for usage and error information. +func (c *ECS) ModifyInstanceSpecCommon(input *map[string]interface{}) (*map[string]interface{}, error) { + req, out := c.ModifyInstanceSpecCommonRequest(input) + return out, req.Send() +} + +// ModifyInstanceSpecCommonWithContext is the same as ModifyInstanceSpecCommon with the addition of +// the ability to pass a context and additional request options. +// +// See ModifyInstanceSpecCommon for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur. +// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
+func (c *ECS) ModifyInstanceSpecCommonWithContext(ctx volcengine.Context, input *map[string]interface{}, opts ...request.Option) (*map[string]interface{}, error) {
+	req, out := c.ModifyInstanceSpecCommonRequest(input)
+	req.SetContext(ctx)
+	req.ApplyOptions(opts...)
+	return out, req.Send()
+}
+
+const opModifyInstanceSpec = "ModifyInstanceSpec"
+
+// ModifyInstanceSpecRequest generates a "volcengine/request.Request" representing the
+// client's request for the ModifyInstanceSpec operation. The "output" return
+// value will be populated with the ModifyInstanceSpec request's response once the request completes
+// successfully.
+//
+// Use "Send" method on the returned ModifyInstanceSpec Request to send the API call to the service.
+// the "output" return value is not valid until after ModifyInstanceSpec Send returns without error.
+//
+// See ModifyInstanceSpec for more information on using the ModifyInstanceSpec
+// API call, and error handling.
+//
+// // Example sending a request using the ModifyInstanceSpecRequest method.
+// req, resp := client.ModifyInstanceSpecRequest(params)
+//
+// err := req.Send()
+// if err == nil { // resp is now filled
+// fmt.Println(resp)
+// }
+func (c *ECS) ModifyInstanceSpecRequest(input *ModifyInstanceSpecInput) (req *request.Request, output *ModifyInstanceSpecOutput) {
+	op := &request.Operation{
+		Name: opModifyInstanceSpec,
+		HTTPMethod: "GET",
+		HTTPPath: "/",
+	}
+
+	if input == nil {
+		input = &ModifyInstanceSpecInput{}
+	}
+
+	output = &ModifyInstanceSpecOutput{}
+	req = c.newRequest(op, input, output)
+
+	return
+}
+
+// ModifyInstanceSpec API operation for ECS.
+//
+// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions
+// with volcengineerr.Error's Code and Message methods to get detailed information about
+// the error.
+//
+// See the VOLCENGINE API reference guide for ECS's
+// API operation ModifyInstanceSpec for usage and error information.
+func (c *ECS) ModifyInstanceSpec(input *ModifyInstanceSpecInput) (*ModifyInstanceSpecOutput, error) {
+	req, out := c.ModifyInstanceSpecRequest(input)
+	return out, req.Send()
+}
+
+// ModifyInstanceSpecWithContext is the same as ModifyInstanceSpec with the addition of
+// the ability to pass a context and additional request options.
+//
+// See ModifyInstanceSpec for details on how to use this API operation.
+//
+// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur.
+// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/
+// for more information on using Contexts.
+func (c *ECS) ModifyInstanceSpecWithContext(ctx volcengine.Context, input *ModifyInstanceSpecInput, opts ...request.Option) (*ModifyInstanceSpecOutput, error) {
+	req, out := c.ModifyInstanceSpecRequest(input)
+	req.SetContext(ctx)
+	req.ApplyOptions(opts...)
+	return out, req.Send()
+}
+
+type ModifyInstanceSpecInput struct {
+	_ struct{} `type:"structure"`
+
+	ClientToken *string `type:"string"`
+
+	InstanceId *string `type:"string"`
+
+	InstanceType *string `type:"string"`
+}
+
+// String returns the string representation
+func (s ModifyInstanceSpecInput) String() string {
+	return volcengineutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s ModifyInstanceSpecInput) GoString() string {
+	return s.String()
+}
+
+// SetClientToken sets the ClientToken field's value.
+func (s *ModifyInstanceSpecInput) SetClientToken(v string) *ModifyInstanceSpecInput {
+	s.ClientToken = &v
+	return s
+}
+
+// SetInstanceId sets the InstanceId field's value.
+func (s *ModifyInstanceSpecInput) SetInstanceId(v string) *ModifyInstanceSpecInput {
+	s.InstanceId = &v
+	return s
+}
+
+// SetInstanceType sets the InstanceType field's value.
+func (s *ModifyInstanceSpecInput) SetInstanceType(v string) *ModifyInstanceSpecInput { + s.InstanceType = &v + return s +} + +type ModifyInstanceSpecOutput struct { + _ struct{} `type:"structure"` + + Metadata *response.ResponseMetadata + + InstanceId *string `type:"string"` + + OrderId *string `type:"string"` +} + +// String returns the string representation +func (s ModifyInstanceSpecOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s ModifyInstanceSpecOutput) GoString() string { + return s.String() +} + +// SetInstanceId sets the InstanceId field's value. +func (s *ModifyInstanceSpecOutput) SetInstanceId(v string) *ModifyInstanceSpecOutput { + s.InstanceId = &v + return s +} + +// SetOrderId sets the OrderId field's value. +func (s *ModifyInstanceSpecOutput) SetOrderId(v string) *ModifyInstanceSpecOutput { + s.OrderId = &v + return s +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_modify_key_pair_attribute.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_modify_key_pair_attribute.go new file mode 100644 index 000000000000..a1e5de791593 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_modify_key_pair_attribute.go @@ -0,0 +1,202 @@ +// Code generated by volcengine with private/model/cli/gen-api/main.go. DO NOT EDIT. 
+ +package ecs + +import ( + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/response" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil" +) + +const opModifyKeyPairAttributeCommon = "ModifyKeyPairAttribute" + +// ModifyKeyPairAttributeCommonRequest generates a "volcengine/request.Request" representing the +// client's request for the ModifyKeyPairAttributeCommon operation. The "output" return +// value will be populated with the ModifyKeyPairAttributeCommon request's response once the request completes +// successfully. +// +// Use "Send" method on the returned ModifyKeyPairAttributeCommon Request to send the API call to the service. +// the "output" return value is not valid until after ModifyKeyPairAttributeCommon Send returns without error. +// +// See ModifyKeyPairAttributeCommon for more information on using the ModifyKeyPairAttributeCommon +// API call, and error handling. +// +// // Example sending a request using the ModifyKeyPairAttributeCommonRequest method. +// req, resp := client.ModifyKeyPairAttributeCommonRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *ECS) ModifyKeyPairAttributeCommonRequest(input *map[string]interface{}) (req *request.Request, output *map[string]interface{}) { + op := &request.Operation{ + Name: opModifyKeyPairAttributeCommon, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &map[string]interface{}{} + } + + output = &map[string]interface{}{} + req = c.newRequest(op, input, output) + + return +} + +// ModifyKeyPairAttributeCommon API operation for ECS. +// +// Returns volcengineerr.Error for service API and SDK errors. 
Use runtime type assertions
+// with volcengineerr.Error's Code and Message methods to get detailed information about
+// the error.
+//
+// See the VOLCENGINE API reference guide for ECS's
+// API operation ModifyKeyPairAttributeCommon for usage and error information.
+func (c *ECS) ModifyKeyPairAttributeCommon(input *map[string]interface{}) (*map[string]interface{}, error) {
+	req, out := c.ModifyKeyPairAttributeCommonRequest(input)
+	return out, req.Send()
+}
+
+// ModifyKeyPairAttributeCommonWithContext is the same as ModifyKeyPairAttributeCommon with the addition of
+// the ability to pass a context and additional request options.
+//
+// See ModifyKeyPairAttributeCommon for details on how to use this API operation.
+//
+// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur.
+// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/
+// for more information on using Contexts.
+func (c *ECS) ModifyKeyPairAttributeCommonWithContext(ctx volcengine.Context, input *map[string]interface{}, opts ...request.Option) (*map[string]interface{}, error) {
+	req, out := c.ModifyKeyPairAttributeCommonRequest(input)
+	req.SetContext(ctx)
+	req.ApplyOptions(opts...)
+	return out, req.Send()
+}
+
+const opModifyKeyPairAttribute = "ModifyKeyPairAttribute"
+
+// ModifyKeyPairAttributeRequest generates a "volcengine/request.Request" representing the
+// client's request for the ModifyKeyPairAttribute operation. The "output" return
+// value will be populated with the ModifyKeyPairAttribute request's response once the request completes
+// successfully.
+//
+// Use "Send" method on the returned ModifyKeyPairAttribute Request to send the API call to the service.
+// the "output" return value is not valid until after ModifyKeyPairAttribute Send returns without error.
+//
+// See ModifyKeyPairAttribute for more information on using the ModifyKeyPairAttribute
+// API call, and error handling.
+//
+// // Example sending a request using the ModifyKeyPairAttributeRequest method.
+// req, resp := client.ModifyKeyPairAttributeRequest(params)
+//
+// err := req.Send()
+// if err == nil { // resp is now filled
+// fmt.Println(resp)
+// }
+func (c *ECS) ModifyKeyPairAttributeRequest(input *ModifyKeyPairAttributeInput) (req *request.Request, output *ModifyKeyPairAttributeOutput) {
+	op := &request.Operation{
+		Name: opModifyKeyPairAttribute,
+		HTTPMethod: "GET",
+		HTTPPath: "/",
+	}
+
+	if input == nil {
+		input = &ModifyKeyPairAttributeInput{}
+	}
+
+	output = &ModifyKeyPairAttributeOutput{}
+	req = c.newRequest(op, input, output)
+
+	return
+}
+
+// ModifyKeyPairAttribute API operation for ECS.
+//
+// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions
+// with volcengineerr.Error's Code and Message methods to get detailed information about
+// the error.
+//
+// See the VOLCENGINE API reference guide for ECS's
+// API operation ModifyKeyPairAttribute for usage and error information.
+func (c *ECS) ModifyKeyPairAttribute(input *ModifyKeyPairAttributeInput) (*ModifyKeyPairAttributeOutput, error) {
+	req, out := c.ModifyKeyPairAttributeRequest(input)
+	return out, req.Send()
+}
+
+// ModifyKeyPairAttributeWithContext is the same as ModifyKeyPairAttribute with the addition of
+// the ability to pass a context and additional request options.
+//
+// See ModifyKeyPairAttribute for details on how to use this API operation.
+//
+// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur.
+// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/
+// for more information on using Contexts.
+func (c *ECS) ModifyKeyPairAttributeWithContext(ctx volcengine.Context, input *ModifyKeyPairAttributeInput, opts ...request.Option) (*ModifyKeyPairAttributeOutput, error) { + req, out := c.ModifyKeyPairAttributeRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +type ModifyKeyPairAttributeInput struct { + _ struct{} `type:"structure"` + + Description *string `type:"string"` + + KeyPairId *string `type:"string"` + + KeyPairName *string `type:"string"` +} + +// String returns the string representation +func (s ModifyKeyPairAttributeInput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s ModifyKeyPairAttributeInput) GoString() string { + return s.String() +} + +// SetDescription sets the Description field's value. +func (s *ModifyKeyPairAttributeInput) SetDescription(v string) *ModifyKeyPairAttributeInput { + s.Description = &v + return s +} + +// SetKeyPairId sets the KeyPairId field's value. +func (s *ModifyKeyPairAttributeInput) SetKeyPairId(v string) *ModifyKeyPairAttributeInput { + s.KeyPairId = &v + return s +} + +// SetKeyPairName sets the KeyPairName field's value. +func (s *ModifyKeyPairAttributeInput) SetKeyPairName(v string) *ModifyKeyPairAttributeInput { + s.KeyPairName = &v + return s +} + +type ModifyKeyPairAttributeOutput struct { + _ struct{} `type:"structure"` + + Metadata *response.ResponseMetadata + + KeyPairName *string `type:"string"` +} + +// String returns the string representation +func (s ModifyKeyPairAttributeOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s ModifyKeyPairAttributeOutput) GoString() string { + return s.String() +} + +// SetKeyPairName sets the KeyPairName field's value. 
+func (s *ModifyKeyPairAttributeOutput) SetKeyPairName(v string) *ModifyKeyPairAttributeOutput { + s.KeyPairName = &v + return s +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_reboot_instance.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_reboot_instance.go new file mode 100644 index 000000000000..f31b4cf6e3a8 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_reboot_instance.go @@ -0,0 +1,186 @@ +// Code generated by volcengine with private/model/cli/gen-api/main.go. DO NOT EDIT. + +package ecs + +import ( + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/response" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil" +) + +const opRebootInstanceCommon = "RebootInstance" + +// RebootInstanceCommonRequest generates a "volcengine/request.Request" representing the +// client's request for the RebootInstanceCommon operation. The "output" return +// value will be populated with the RebootInstanceCommon request's response once the request completes +// successfully. +// +// Use "Send" method on the returned RebootInstanceCommon Request to send the API call to the service. +// the "output" return value is not valid until after RebootInstanceCommon Send returns without error. +// +// See RebootInstanceCommon for more information on using the RebootInstanceCommon +// API call, and error handling. +// +// // Example sending a request using the RebootInstanceCommonRequest method. 
+// req, resp := client.RebootInstanceCommonRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *ECS) RebootInstanceCommonRequest(input *map[string]interface{}) (req *request.Request, output *map[string]interface{}) { + op := &request.Operation{ + Name: opRebootInstanceCommon, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &map[string]interface{}{} + } + + output = &map[string]interface{}{} + req = c.newRequest(op, input, output) + + return +} + +// RebootInstanceCommon API operation for ECS. +// +// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the VOLCENGINE API reference guide for ECS's +// API operation RebootInstanceCommon for usage and error information. +func (c *ECS) RebootInstanceCommon(input *map[string]interface{}) (*map[string]interface{}, error) { + req, out := c.RebootInstanceCommonRequest(input) + return out, req.Send() +} + +// RebootInstanceCommonWithContext is the same as RebootInstanceCommon with the addition of +// the ability to pass a context and additional request options. +// +// See RebootInstanceCommon for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur. +// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ECS) RebootInstanceCommonWithContext(ctx volcengine.Context, input *map[string]interface{}, opts ...request.Option) (*map[string]interface{}, error) { + req, out := c.RebootInstanceCommonRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+	return out, req.Send()
+}
+
+const opRebootInstance = "RebootInstance"
+
+// RebootInstanceRequest generates a "volcengine/request.Request" representing the
+// client's request for the RebootInstance operation. The "output" return
+// value will be populated with the RebootInstance request's response once the request completes
+// successfully.
+//
+// Use "Send" method on the returned RebootInstance Request to send the API call to the service.
+// the "output" return value is not valid until after RebootInstance Send returns without error.
+//
+// See RebootInstance for more information on using the RebootInstance
+// API call, and error handling.
+//
+// // Example sending a request using the RebootInstanceRequest method.
+// req, resp := client.RebootInstanceRequest(params)
+//
+// err := req.Send()
+// if err == nil { // resp is now filled
+// fmt.Println(resp)
+// }
+func (c *ECS) RebootInstanceRequest(input *RebootInstanceInput) (req *request.Request, output *RebootInstanceOutput) {
+	op := &request.Operation{
+		Name: opRebootInstance,
+		HTTPMethod: "GET",
+		HTTPPath: "/",
+	}
+
+	if input == nil {
+		input = &RebootInstanceInput{}
+	}
+
+	output = &RebootInstanceOutput{}
+	req = c.newRequest(op, input, output)
+
+	return
+}
+
+// RebootInstance API operation for ECS.
+//
+// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions
+// with volcengineerr.Error's Code and Message methods to get detailed information about
+// the error.
+//
+// See the VOLCENGINE API reference guide for ECS's
+// API operation RebootInstance for usage and error information.
+func (c *ECS) RebootInstance(input *RebootInstanceInput) (*RebootInstanceOutput, error) {
+	req, out := c.RebootInstanceRequest(input)
+	return out, req.Send()
+}
+
+// RebootInstanceWithContext is the same as RebootInstance with the addition of
+// the ability to pass a context and additional request options.
+//
+// See RebootInstance for details on how to use this API operation.
+//
+// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur.
+// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/
+// for more information on using Contexts.
+func (c *ECS) RebootInstanceWithContext(ctx volcengine.Context, input *RebootInstanceInput, opts ...request.Option) (*RebootInstanceOutput, error) {
+	req, out := c.RebootInstanceRequest(input)
+	req.SetContext(ctx)
+	req.ApplyOptions(opts...)
+	return out, req.Send()
+}
+
+type RebootInstanceInput struct {
+	_ struct{} `type:"structure"`
+
+	ForceStop *bool `type:"boolean"`
+
+	InstanceId *string `type:"string"`
+}
+
+// String returns the string representation
+func (s RebootInstanceInput) String() string {
+	return volcengineutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s RebootInstanceInput) GoString() string {
+	return s.String()
+}
+
+// SetForceStop sets the ForceStop field's value.
+func (s *RebootInstanceInput) SetForceStop(v bool) *RebootInstanceInput {
+	s.ForceStop = &v
+	return s
+}
+
+// SetInstanceId sets the InstanceId field's value.
+func (s *RebootInstanceInput) SetInstanceId(v string) *RebootInstanceInput { + s.InstanceId = &v + return s +} + +type RebootInstanceOutput struct { + _ struct{} `type:"structure"` + + Metadata *response.ResponseMetadata +} + +// String returns the string representation +func (s RebootInstanceOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s RebootInstanceOutput) GoString() string { + return s.String() +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_reboot_instances.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_reboot_instances.go new file mode 100644 index 000000000000..b653149b0661 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_reboot_instances.go @@ -0,0 +1,254 @@ +// Code generated by volcengine with private/model/cli/gen-api/main.go. DO NOT EDIT. + +package ecs + +import ( + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/response" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil" +) + +const opRebootInstancesCommon = "RebootInstances" + +// RebootInstancesCommonRequest generates a "volcengine/request.Request" representing the +// client's request for the RebootInstancesCommon operation. The "output" return +// value will be populated with the RebootInstancesCommon request's response once the request completes +// successfully. +// +// Use "Send" method on the returned RebootInstancesCommon Request to send the API call to the service. +// the "output" return value is not valid until after RebootInstancesCommon Send returns without error. 
+// +// See RebootInstancesCommon for more information on using the RebootInstancesCommon +// API call, and error handling. +// +// // Example sending a request using the RebootInstancesCommonRequest method. +// req, resp := client.RebootInstancesCommonRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *ECS) RebootInstancesCommonRequest(input *map[string]interface{}) (req *request.Request, output *map[string]interface{}) { + op := &request.Operation{ + Name: opRebootInstancesCommon, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &map[string]interface{}{} + } + + output = &map[string]interface{}{} + req = c.newRequest(op, input, output) + + return +} + +// RebootInstancesCommon API operation for ECS. +// +// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the VOLCENGINE API reference guide for ECS's +// API operation RebootInstancesCommon for usage and error information. +func (c *ECS) RebootInstancesCommon(input *map[string]interface{}) (*map[string]interface{}, error) { + req, out := c.RebootInstancesCommonRequest(input) + return out, req.Send() +} + +// RebootInstancesCommonWithContext is the same as RebootInstancesCommon with the addition of +// the ability to pass a context and additional request options. +// +// See RebootInstancesCommon for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur. +// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
+func (c *ECS) RebootInstancesCommonWithContext(ctx volcengine.Context, input *map[string]interface{}, opts ...request.Option) (*map[string]interface{}, error) {
+	req, out := c.RebootInstancesCommonRequest(input)
+	req.SetContext(ctx)
+	req.ApplyOptions(opts...)
+	return out, req.Send()
+}
+
+const opRebootInstances = "RebootInstances"
+
+// RebootInstancesRequest generates a "volcengine/request.Request" representing the
+// client's request for the RebootInstances operation. The "output" return
+// value will be populated with the RebootInstances request's response once the request completes
+// successfully.
+//
+// Use "Send" method on the returned RebootInstances Request to send the API call to the service.
+// the "output" return value is not valid until after RebootInstances Send returns without error.
+//
+// See RebootInstances for more information on using the RebootInstances
+// API call, and error handling.
+//
+//    // Example sending a request using the RebootInstancesRequest method.
+//    req, resp := client.RebootInstancesRequest(params)
+//
+//    err := req.Send()
+//    if err == nil { // resp is now filled
+//        fmt.Println(resp)
+//    }
+func (c *ECS) RebootInstancesRequest(input *RebootInstancesInput) (req *request.Request, output *RebootInstancesOutput) {
+	op := &request.Operation{
+		Name:       opRebootInstances,
+		HTTPMethod: "GET",
+		HTTPPath:   "/",
+	}
+
+	if input == nil {
+		input = &RebootInstancesInput{}
+	}
+
+	output = &RebootInstancesOutput{}
+	req = c.newRequest(op, input, output)
+
+	return
+}
+
+// RebootInstances API operation for ECS.
+//
+// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions
+// with volcengineerr.Error's Code and Message methods to get detailed information about
+// the error.
+//
+// See the VOLCENGINE API reference guide for ECS's
+// API operation RebootInstances for usage and error information.
+func (c *ECS) RebootInstances(input *RebootInstancesInput) (*RebootInstancesOutput, error) {
+	req, out := c.RebootInstancesRequest(input)
+	return out, req.Send()
+}
+
+// RebootInstancesWithContext is the same as RebootInstances with the addition of
+// the ability to pass a context and additional request options.
+//
+// See RebootInstances for details on how to use this API operation.
+//
+// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur.
+// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/
+// for more information on using Contexts.
+func (c *ECS) RebootInstancesWithContext(ctx volcengine.Context, input *RebootInstancesInput, opts ...request.Option) (*RebootInstancesOutput, error) {
+	req, out := c.RebootInstancesRequest(input)
+	req.SetContext(ctx)
+	req.ApplyOptions(opts...)
+	return out, req.Send()
+}
+
+type ErrorForRebootInstancesOutput struct {
+	_ struct{} `type:"structure"`
+
+	Code *string `type:"string"`
+
+	Message *string `type:"string"`
+}
+
+// String returns the string representation
+func (s ErrorForRebootInstancesOutput) String() string {
+	return volcengineutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s ErrorForRebootInstancesOutput) GoString() string {
+	return s.String()
+}
+
+// SetCode sets the Code field's value.
+func (s *ErrorForRebootInstancesOutput) SetCode(v string) *ErrorForRebootInstancesOutput {
+	s.Code = &v
+	return s
+}
+
+// SetMessage sets the Message field's value.
+func (s *ErrorForRebootInstancesOutput) SetMessage(v string) *ErrorForRebootInstancesOutput { + s.Message = &v + return s +} + +type OperationDetailForRebootInstancesOutput struct { + _ struct{} `type:"structure"` + + Error *ErrorForRebootInstancesOutput `type:"structure"` + + InstanceId *string `type:"string"` +} + +// String returns the string representation +func (s OperationDetailForRebootInstancesOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s OperationDetailForRebootInstancesOutput) GoString() string { + return s.String() +} + +// SetError sets the Error field's value. +func (s *OperationDetailForRebootInstancesOutput) SetError(v *ErrorForRebootInstancesOutput) *OperationDetailForRebootInstancesOutput { + s.Error = v + return s +} + +// SetInstanceId sets the InstanceId field's value. +func (s *OperationDetailForRebootInstancesOutput) SetInstanceId(v string) *OperationDetailForRebootInstancesOutput { + s.InstanceId = &v + return s +} + +type RebootInstancesInput struct { + _ struct{} `type:"structure"` + + ForceStop *bool `type:"boolean"` + + InstanceIds []*string `type:"list"` +} + +// String returns the string representation +func (s RebootInstancesInput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s RebootInstancesInput) GoString() string { + return s.String() +} + +// SetForceStop sets the ForceStop field's value. +func (s *RebootInstancesInput) SetForceStop(v bool) *RebootInstancesInput { + s.ForceStop = &v + return s +} + +// SetInstanceIds sets the InstanceIds field's value. 
+func (s *RebootInstancesInput) SetInstanceIds(v []*string) *RebootInstancesInput { + s.InstanceIds = v + return s +} + +type RebootInstancesOutput struct { + _ struct{} `type:"structure"` + + Metadata *response.ResponseMetadata + + OperationDetails []*OperationDetailForRebootInstancesOutput `type:"list"` +} + +// String returns the string representation +func (s RebootInstancesOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s RebootInstancesOutput) GoString() string { + return s.String() +} + +// SetOperationDetails sets the OperationDetails field's value. +func (s *RebootInstancesOutput) SetOperationDetails(v []*OperationDetailForRebootInstancesOutput) *RebootInstancesOutput { + s.OperationDetails = v + return s +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_renew_instance.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_renew_instance.go new file mode 100644 index 000000000000..6c635dedc92b --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_renew_instance.go @@ -0,0 +1,210 @@ +// Code generated by volcengine with private/model/cli/gen-api/main.go. DO NOT EDIT. + +package ecs + +import ( + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/response" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil" +) + +const opRenewInstanceCommon = "RenewInstance" + +// RenewInstanceCommonRequest generates a "volcengine/request.Request" representing the +// client's request for the RenewInstanceCommon operation. 
The "output" return +// value will be populated with the RenewInstanceCommon request's response once the request completes +// successfully. +// +// Use "Send" method on the returned RenewInstanceCommon Request to send the API call to the service. +// the "output" return value is not valid until after RenewInstanceCommon Send returns without error. +// +// See RenewInstanceCommon for more information on using the RenewInstanceCommon +// API call, and error handling. +// +// // Example sending a request using the RenewInstanceCommonRequest method. +// req, resp := client.RenewInstanceCommonRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *ECS) RenewInstanceCommonRequest(input *map[string]interface{}) (req *request.Request, output *map[string]interface{}) { + op := &request.Operation{ + Name: opRenewInstanceCommon, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &map[string]interface{}{} + } + + output = &map[string]interface{}{} + req = c.newRequest(op, input, output) + + return +} + +// RenewInstanceCommon API operation for ECS. +// +// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the VOLCENGINE API reference guide for ECS's +// API operation RenewInstanceCommon for usage and error information. +func (c *ECS) RenewInstanceCommon(input *map[string]interface{}) (*map[string]interface{}, error) { + req, out := c.RenewInstanceCommonRequest(input) + return out, req.Send() +} + +// RenewInstanceCommonWithContext is the same as RenewInstanceCommon with the addition of +// the ability to pass a context and additional request options. +// +// See RenewInstanceCommon for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. 
If the context is nil a panic will occur.
+// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/
+// for more information on using Contexts.
+func (c *ECS) RenewInstanceCommonWithContext(ctx volcengine.Context, input *map[string]interface{}, opts ...request.Option) (*map[string]interface{}, error) {
+	req, out := c.RenewInstanceCommonRequest(input)
+	req.SetContext(ctx)
+	req.ApplyOptions(opts...)
+	return out, req.Send()
+}
+
+const opRenewInstance = "RenewInstance"
+
+// RenewInstanceRequest generates a "volcengine/request.Request" representing the
+// client's request for the RenewInstance operation. The "output" return
+// value will be populated with the RenewInstance request's response once the request completes
+// successfully.
+//
+// Use "Send" method on the returned RenewInstance Request to send the API call to the service.
+// the "output" return value is not valid until after RenewInstance Send returns without error.
+//
+// See RenewInstance for more information on using the RenewInstance
+// API call, and error handling.
+//
+//    // Example sending a request using the RenewInstanceRequest method.
+//    req, resp := client.RenewInstanceRequest(params)
+//
+//    err := req.Send()
+//    if err == nil { // resp is now filled
+//        fmt.Println(resp)
+//    }
+func (c *ECS) RenewInstanceRequest(input *RenewInstanceInput) (req *request.Request, output *RenewInstanceOutput) {
+	op := &request.Operation{
+		Name:       opRenewInstance,
+		HTTPMethod: "GET",
+		HTTPPath:   "/",
+	}
+
+	if input == nil {
+		input = &RenewInstanceInput{}
+	}
+
+	output = &RenewInstanceOutput{}
+	req = c.newRequest(op, input, output)
+
+	return
+}
+
+// RenewInstance API operation for ECS.
+//
+// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions
+// with volcengineerr.Error's Code and Message methods to get detailed information about
+// the error.
+//
+// See the VOLCENGINE API reference guide for ECS's
+// API operation RenewInstance for usage and error information.
+func (c *ECS) RenewInstance(input *RenewInstanceInput) (*RenewInstanceOutput, error) {
+	req, out := c.RenewInstanceRequest(input)
+	return out, req.Send()
+}
+
+// RenewInstanceWithContext is the same as RenewInstance with the addition of
+// the ability to pass a context and additional request options.
+//
+// See RenewInstance for details on how to use this API operation.
+//
+// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur.
+// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/
+// for more information on using Contexts.
+func (c *ECS) RenewInstanceWithContext(ctx volcengine.Context, input *RenewInstanceInput, opts ...request.Option) (*RenewInstanceOutput, error) {
+	req, out := c.RenewInstanceRequest(input)
+	req.SetContext(ctx)
+	req.ApplyOptions(opts...)
+	return out, req.Send()
+}
+
+type RenewInstanceInput struct {
+	_ struct{} `type:"structure"`
+
+	ClientToken *string `type:"string"`
+
+	InstanceId *string `type:"string"`
+
+	Period *int32 `type:"int32"`
+
+	PeriodUnit *string `type:"string"`
+}
+
+// String returns the string representation
+func (s RenewInstanceInput) String() string {
+	return volcengineutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s RenewInstanceInput) GoString() string {
+	return s.String()
+}
+
+// SetClientToken sets the ClientToken field's value.
+func (s *RenewInstanceInput) SetClientToken(v string) *RenewInstanceInput {
+	s.ClientToken = &v
+	return s
+}
+
+// SetInstanceId sets the InstanceId field's value.
+func (s *RenewInstanceInput) SetInstanceId(v string) *RenewInstanceInput {
+	s.InstanceId = &v
+	return s
+}
+
+// SetPeriod sets the Period field's value.
+func (s *RenewInstanceInput) SetPeriod(v int32) *RenewInstanceInput { + s.Period = &v + return s +} + +// SetPeriodUnit sets the PeriodUnit field's value. +func (s *RenewInstanceInput) SetPeriodUnit(v string) *RenewInstanceInput { + s.PeriodUnit = &v + return s +} + +type RenewInstanceOutput struct { + _ struct{} `type:"structure"` + + Metadata *response.ResponseMetadata + + OrderId *string `type:"string"` +} + +// String returns the string representation +func (s RenewInstanceOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s RenewInstanceOutput) GoString() string { + return s.String() +} + +// SetOrderId sets the OrderId field's value. +func (s *RenewInstanceOutput) SetOrderId(v string) *RenewInstanceOutput { + s.OrderId = &v + return s +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_replace_system_volume.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_replace_system_volume.go new file mode 100644 index 000000000000..ded59c106d7a --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_replace_system_volume.go @@ -0,0 +1,236 @@ +// Code generated by volcengine with private/model/cli/gen-api/main.go. DO NOT EDIT. 
+ +package ecs + +import ( + "encoding/json" + + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/response" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil" +) + +const opReplaceSystemVolumeCommon = "ReplaceSystemVolume" + +// ReplaceSystemVolumeCommonRequest generates a "volcengine/request.Request" representing the +// client's request for the ReplaceSystemVolumeCommon operation. The "output" return +// value will be populated with the ReplaceSystemVolumeCommon request's response once the request completes +// successfully. +// +// Use "Send" method on the returned ReplaceSystemVolumeCommon Request to send the API call to the service. +// the "output" return value is not valid until after ReplaceSystemVolumeCommon Send returns without error. +// +// See ReplaceSystemVolumeCommon for more information on using the ReplaceSystemVolumeCommon +// API call, and error handling. +// +// // Example sending a request using the ReplaceSystemVolumeCommonRequest method. +// req, resp := client.ReplaceSystemVolumeCommonRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *ECS) ReplaceSystemVolumeCommonRequest(input *map[string]interface{}) (req *request.Request, output *map[string]interface{}) { + op := &request.Operation{ + Name: opReplaceSystemVolumeCommon, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &map[string]interface{}{} + } + + output = &map[string]interface{}{} + req = c.newRequest(op, input, output) + + return +} + +// ReplaceSystemVolumeCommon API operation for ECS. +// +// Returns volcengineerr.Error for service API and SDK errors. 
Use runtime type assertions
+// with volcengineerr.Error's Code and Message methods to get detailed information about
+// the error.
+//
+// See the VOLCENGINE API reference guide for ECS's
+// API operation ReplaceSystemVolumeCommon for usage and error information.
+func (c *ECS) ReplaceSystemVolumeCommon(input *map[string]interface{}) (*map[string]interface{}, error) {
+	req, out := c.ReplaceSystemVolumeCommonRequest(input)
+	return out, req.Send()
+}
+
+// ReplaceSystemVolumeCommonWithContext is the same as ReplaceSystemVolumeCommon with the addition of
+// the ability to pass a context and additional request options.
+//
+// See ReplaceSystemVolumeCommon for details on how to use this API operation.
+//
+// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur.
+// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/
+// for more information on using Contexts.
+func (c *ECS) ReplaceSystemVolumeCommonWithContext(ctx volcengine.Context, input *map[string]interface{}, opts ...request.Option) (*map[string]interface{}, error) {
+	req, out := c.ReplaceSystemVolumeCommonRequest(input)
+	req.SetContext(ctx)
+	req.ApplyOptions(opts...)
+	return out, req.Send()
+}
+
+const opReplaceSystemVolume = "ReplaceSystemVolume"
+
+// ReplaceSystemVolumeRequest generates a "volcengine/request.Request" representing the
+// client's request for the ReplaceSystemVolume operation. The "output" return
+// value will be populated with the ReplaceSystemVolume request's response once the request completes
+// successfully.
+//
+// Use "Send" method on the returned ReplaceSystemVolume Request to send the API call to the service.
+// the "output" return value is not valid until after ReplaceSystemVolume Send returns without error.
+//
+// See ReplaceSystemVolume for more information on using the ReplaceSystemVolume
+// API call, and error handling.
+//
+//    // Example sending a request using the ReplaceSystemVolumeRequest method.
+//    req, resp := client.ReplaceSystemVolumeRequest(params)
+//
+//    err := req.Send()
+//    if err == nil { // resp is now filled
+//        fmt.Println(resp)
+//    }
+func (c *ECS) ReplaceSystemVolumeRequest(input *ReplaceSystemVolumeInput) (req *request.Request, output *ReplaceSystemVolumeOutput) {
+	op := &request.Operation{
+		Name:       opReplaceSystemVolume,
+		HTTPMethod: "GET",
+		HTTPPath:   "/",
+	}
+
+	if input == nil {
+		input = &ReplaceSystemVolumeInput{}
+	}
+
+	output = &ReplaceSystemVolumeOutput{}
+	req = c.newRequest(op, input, output)
+
+	return
+}
+
+// ReplaceSystemVolume API operation for ECS.
+//
+// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions
+// with volcengineerr.Error's Code and Message methods to get detailed information about
+// the error.
+//
+// See the VOLCENGINE API reference guide for ECS's
+// API operation ReplaceSystemVolume for usage and error information.
+func (c *ECS) ReplaceSystemVolume(input *ReplaceSystemVolumeInput) (*ReplaceSystemVolumeOutput, error) {
+	req, out := c.ReplaceSystemVolumeRequest(input)
+	return out, req.Send()
+}
+
+// ReplaceSystemVolumeWithContext is the same as ReplaceSystemVolume with the addition of
+// the ability to pass a context and additional request options.
+//
+// See ReplaceSystemVolume for details on how to use this API operation.
+//
+// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur.
+// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/
+// for more information on using Contexts.
+func (c *ECS) ReplaceSystemVolumeWithContext(ctx volcengine.Context, input *ReplaceSystemVolumeInput, opts ...request.Option) (*ReplaceSystemVolumeOutput, error) {
+	req, out := c.ReplaceSystemVolumeRequest(input)
+	req.SetContext(ctx)
+	req.ApplyOptions(opts...)
+ return out, req.Send() +} + +type ReplaceSystemVolumeInput struct { + _ struct{} `type:"structure"` + + ClientToken *string `type:"string"` + + ImageId *string `type:"string"` + + InstanceId *string `type:"string"` + + KeepImageCredential *bool `type:"boolean"` + + KeyPairName *string `type:"string"` + + Password *string `type:"string"` + + Size *json.Number `type:"json_number"` + + UserData *string `type:"string"` +} + +// String returns the string representation +func (s ReplaceSystemVolumeInput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s ReplaceSystemVolumeInput) GoString() string { + return s.String() +} + +// SetClientToken sets the ClientToken field's value. +func (s *ReplaceSystemVolumeInput) SetClientToken(v string) *ReplaceSystemVolumeInput { + s.ClientToken = &v + return s +} + +// SetImageId sets the ImageId field's value. +func (s *ReplaceSystemVolumeInput) SetImageId(v string) *ReplaceSystemVolumeInput { + s.ImageId = &v + return s +} + +// SetInstanceId sets the InstanceId field's value. +func (s *ReplaceSystemVolumeInput) SetInstanceId(v string) *ReplaceSystemVolumeInput { + s.InstanceId = &v + return s +} + +// SetKeepImageCredential sets the KeepImageCredential field's value. +func (s *ReplaceSystemVolumeInput) SetKeepImageCredential(v bool) *ReplaceSystemVolumeInput { + s.KeepImageCredential = &v + return s +} + +// SetKeyPairName sets the KeyPairName field's value. +func (s *ReplaceSystemVolumeInput) SetKeyPairName(v string) *ReplaceSystemVolumeInput { + s.KeyPairName = &v + return s +} + +// SetPassword sets the Password field's value. +func (s *ReplaceSystemVolumeInput) SetPassword(v string) *ReplaceSystemVolumeInput { + s.Password = &v + return s +} + +// SetSize sets the Size field's value. +func (s *ReplaceSystemVolumeInput) SetSize(v json.Number) *ReplaceSystemVolumeInput { + s.Size = &v + return s +} + +// SetUserData sets the UserData field's value. 
+func (s *ReplaceSystemVolumeInput) SetUserData(v string) *ReplaceSystemVolumeInput { + s.UserData = &v + return s +} + +type ReplaceSystemVolumeOutput struct { + _ struct{} `type:"structure"` + + Metadata *response.ResponseMetadata +} + +// String returns the string representation +func (s ReplaceSystemVolumeOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s ReplaceSystemVolumeOutput) GoString() string { + return s.String() +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_run_instances.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_run_instances.go new file mode 100644 index 000000000000..2fe24a9ad9a5 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_run_instances.go @@ -0,0 +1,540 @@ +// Code generated by volcengine with private/model/cli/gen-api/main.go. DO NOT EDIT. + +package ecs + +import ( + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/response" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil" +) + +const opRunInstancesCommon = "RunInstances" + +// RunInstancesCommonRequest generates a "volcengine/request.Request" representing the +// client's request for the RunInstancesCommon operation. The "output" return +// value will be populated with the RunInstancesCommon request's response once the request completes +// successfully. +// +// Use "Send" method on the returned RunInstancesCommon Request to send the API call to the service. +// the "output" return value is not valid until after RunInstancesCommon Send returns without error. 
+// +// See RunInstancesCommon for more information on using the RunInstancesCommon +// API call, and error handling. +// +// // Example sending a request using the RunInstancesCommonRequest method. +// req, resp := client.RunInstancesCommonRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *ECS) RunInstancesCommonRequest(input *map[string]interface{}) (req *request.Request, output *map[string]interface{}) { + op := &request.Operation{ + Name: opRunInstancesCommon, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &map[string]interface{}{} + } + + output = &map[string]interface{}{} + req = c.newRequest(op, input, output) + + return +} + +// RunInstancesCommon API operation for ECS. +// +// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the VOLCENGINE API reference guide for ECS's +// API operation RunInstancesCommon for usage and error information. +func (c *ECS) RunInstancesCommon(input *map[string]interface{}) (*map[string]interface{}, error) { + req, out := c.RunInstancesCommonRequest(input) + return out, req.Send() +} + +// RunInstancesCommonWithContext is the same as RunInstancesCommon with the addition of +// the ability to pass a context and additional request options. +// +// See RunInstancesCommon for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur. +// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
+func (c *ECS) RunInstancesCommonWithContext(ctx volcengine.Context, input *map[string]interface{}, opts ...request.Option) (*map[string]interface{}, error) {
+	req, out := c.RunInstancesCommonRequest(input)
+	req.SetContext(ctx)
+	req.ApplyOptions(opts...)
+	return out, req.Send()
+}
+
+const opRunInstances = "RunInstances"
+
+// RunInstancesRequest generates a "volcengine/request.Request" representing the
+// client's request for the RunInstances operation. The "output" return
+// value will be populated with the RunInstances request's response once the request completes
+// successfully.
+//
+// Use "Send" method on the returned RunInstances Request to send the API call to the service.
+// the "output" return value is not valid until after RunInstances Send returns without error.
+//
+// See RunInstances for more information on using the RunInstances
+// API call, and error handling.
+//
+//    // Example sending a request using the RunInstancesRequest method.
+//    req, resp := client.RunInstancesRequest(params)
+//
+//    err := req.Send()
+//    if err == nil { // resp is now filled
+//        fmt.Println(resp)
+//    }
+func (c *ECS) RunInstancesRequest(input *RunInstancesInput) (req *request.Request, output *RunInstancesOutput) {
+	op := &request.Operation{
+		Name:       opRunInstances,
+		HTTPMethod: "GET",
+		HTTPPath:   "/",
+	}
+
+	if input == nil {
+		input = &RunInstancesInput{}
+	}
+
+	output = &RunInstancesOutput{}
+	req = c.newRequest(op, input, output)
+
+	return
+}
+
+// RunInstances API operation for ECS.
+//
+// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions
+// with volcengineerr.Error's Code and Message methods to get detailed information about
+// the error.
+//
+// See the VOLCENGINE API reference guide for ECS's
+// API operation RunInstances for usage and error information.
+func (c *ECS) RunInstances(input *RunInstancesInput) (*RunInstancesOutput, error) {
+	req, out := c.RunInstancesRequest(input)
+	return out, req.Send()
+}
+
+// RunInstancesWithContext is the same as RunInstances with the addition of
+// the ability to pass a context and additional request options.
+//
+// See RunInstances for details on how to use this API operation.
+//
+// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur.
+// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/
+// for more information on using Contexts.
+func (c *ECS) RunInstancesWithContext(ctx volcengine.Context, input *RunInstancesInput, opts ...request.Option) (*RunInstancesOutput, error) {
+	req, out := c.RunInstancesRequest(input)
+	req.SetContext(ctx)
+	req.ApplyOptions(opts...)
+	return out, req.Send()
+}
+
+type NetworkInterfaceForRunInstancesInput struct {
+	_ struct{} `type:"structure"`
+
+	PrimaryIpAddress *string `type:"string"`
+
+	SecurityGroupIds []*string `type:"list"`
+
+	SubnetId *string `type:"string"`
+}
+
+// String returns the string representation
+func (s NetworkInterfaceForRunInstancesInput) String() string {
+	return volcengineutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s NetworkInterfaceForRunInstancesInput) GoString() string {
+	return s.String()
+}
+
+// SetPrimaryIpAddress sets the PrimaryIpAddress field's value.
+func (s *NetworkInterfaceForRunInstancesInput) SetPrimaryIpAddress(v string) *NetworkInterfaceForRunInstancesInput {
+	s.PrimaryIpAddress = &v
+	return s
+}
+
+// SetSecurityGroupIds sets the SecurityGroupIds field's value.
+func (s *NetworkInterfaceForRunInstancesInput) SetSecurityGroupIds(v []*string) *NetworkInterfaceForRunInstancesInput {
+	s.SecurityGroupIds = v
+	return s
+}
+
+// SetSubnetId sets the SubnetId field's value.
+func (s *NetworkInterfaceForRunInstancesInput) SetSubnetId(v string) *NetworkInterfaceForRunInstancesInput { + s.SubnetId = &v + return s +} + +type RunInstancesInput struct { + _ struct{} `type:"structure"` + + AutoRenew *bool `type:"boolean"` + + AutoRenewPeriod *int32 `type:"int32"` + + ClientToken *string `type:"string"` + + Count *int32 `type:"int32"` + + CreditSpecification *string `type:"string"` + + DeploymentSetId *string `type:"string"` + + Description *string `type:"string"` + + DryRun *bool `type:"boolean"` + + HostName *string `type:"string"` + + Hostname *string `type:"string"` + + HpcClusterId *string `type:"string"` + + ImageId *string `type:"string"` + + InstanceChargeType *string `type:"string"` + + InstanceName *string `type:"string"` + + InstanceType *string `type:"string"` + + InstanceTypeId *string `type:"string"` + + KeepImageCredential *bool `type:"boolean"` + + KeyPairName *string `type:"string"` + + MinCount *int32 `type:"int32"` + + NetworkInterfaces []*NetworkInterfaceForRunInstancesInput `type:"list"` + + Password *string `type:"string"` + + Period *int32 `type:"int32"` + + PeriodUnit *string `type:"string"` + + ProjectName *string `type:"string"` + + SecurityEnhancementStrategy *string `type:"string"` + + SpotStrategy *string `type:"string"` + + SuffixIndex *int32 `type:"int32"` + + Tags []*TagForRunInstancesInput `type:"list"` + + UniqueSuffix *bool `type:"boolean"` + + UserData *string `type:"string"` + + Volumes []*VolumeForRunInstancesInput `type:"list"` + + ZoneId *string `type:"string"` +} + +// String returns the string representation +func (s RunInstancesInput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s RunInstancesInput) GoString() string { + return s.String() +} + +// SetAutoRenew sets the AutoRenew field's value. 
+func (s *RunInstancesInput) SetAutoRenew(v bool) *RunInstancesInput { + s.AutoRenew = &v + return s +} + +// SetAutoRenewPeriod sets the AutoRenewPeriod field's value. +func (s *RunInstancesInput) SetAutoRenewPeriod(v int32) *RunInstancesInput { + s.AutoRenewPeriod = &v + return s +} + +// SetClientToken sets the ClientToken field's value. +func (s *RunInstancesInput) SetClientToken(v string) *RunInstancesInput { + s.ClientToken = &v + return s +} + +// SetCount sets the Count field's value. +func (s *RunInstancesInput) SetCount(v int32) *RunInstancesInput { + s.Count = &v + return s +} + +// SetCreditSpecification sets the CreditSpecification field's value. +func (s *RunInstancesInput) SetCreditSpecification(v string) *RunInstancesInput { + s.CreditSpecification = &v + return s +} + +// SetDeploymentSetId sets the DeploymentSetId field's value. +func (s *RunInstancesInput) SetDeploymentSetId(v string) *RunInstancesInput { + s.DeploymentSetId = &v + return s +} + +// SetDescription sets the Description field's value. +func (s *RunInstancesInput) SetDescription(v string) *RunInstancesInput { + s.Description = &v + return s +} + +// SetDryRun sets the DryRun field's value. +func (s *RunInstancesInput) SetDryRun(v bool) *RunInstancesInput { + s.DryRun = &v + return s +} + +// SetHostName sets the HostName field's value. +func (s *RunInstancesInput) SetHostName(v string) *RunInstancesInput { + s.HostName = &v + return s +} + +// SetHostname sets the Hostname field's value. +func (s *RunInstancesInput) SetHostname(v string) *RunInstancesInput { + s.Hostname = &v + return s +} + +// SetHpcClusterId sets the HpcClusterId field's value. +func (s *RunInstancesInput) SetHpcClusterId(v string) *RunInstancesInput { + s.HpcClusterId = &v + return s +} + +// SetImageId sets the ImageId field's value. +func (s *RunInstancesInput) SetImageId(v string) *RunInstancesInput { + s.ImageId = &v + return s +} + +// SetInstanceChargeType sets the InstanceChargeType field's value. 
+func (s *RunInstancesInput) SetInstanceChargeType(v string) *RunInstancesInput { + s.InstanceChargeType = &v + return s +} + +// SetInstanceName sets the InstanceName field's value. +func (s *RunInstancesInput) SetInstanceName(v string) *RunInstancesInput { + s.InstanceName = &v + return s +} + +// SetInstanceType sets the InstanceType field's value. +func (s *RunInstancesInput) SetInstanceType(v string) *RunInstancesInput { + s.InstanceType = &v + return s +} + +// SetInstanceTypeId sets the InstanceTypeId field's value. +func (s *RunInstancesInput) SetInstanceTypeId(v string) *RunInstancesInput { + s.InstanceTypeId = &v + return s +} + +// SetKeepImageCredential sets the KeepImageCredential field's value. +func (s *RunInstancesInput) SetKeepImageCredential(v bool) *RunInstancesInput { + s.KeepImageCredential = &v + return s +} + +// SetKeyPairName sets the KeyPairName field's value. +func (s *RunInstancesInput) SetKeyPairName(v string) *RunInstancesInput { + s.KeyPairName = &v + return s +} + +// SetMinCount sets the MinCount field's value. +func (s *RunInstancesInput) SetMinCount(v int32) *RunInstancesInput { + s.MinCount = &v + return s +} + +// SetNetworkInterfaces sets the NetworkInterfaces field's value. +func (s *RunInstancesInput) SetNetworkInterfaces(v []*NetworkInterfaceForRunInstancesInput) *RunInstancesInput { + s.NetworkInterfaces = v + return s +} + +// SetPassword sets the Password field's value. +func (s *RunInstancesInput) SetPassword(v string) *RunInstancesInput { + s.Password = &v + return s +} + +// SetPeriod sets the Period field's value. +func (s *RunInstancesInput) SetPeriod(v int32) *RunInstancesInput { + s.Period = &v + return s +} + +// SetPeriodUnit sets the PeriodUnit field's value. +func (s *RunInstancesInput) SetPeriodUnit(v string) *RunInstancesInput { + s.PeriodUnit = &v + return s +} + +// SetProjectName sets the ProjectName field's value. 
+func (s *RunInstancesInput) SetProjectName(v string) *RunInstancesInput { + s.ProjectName = &v + return s +} + +// SetSecurityEnhancementStrategy sets the SecurityEnhancementStrategy field's value. +func (s *RunInstancesInput) SetSecurityEnhancementStrategy(v string) *RunInstancesInput { + s.SecurityEnhancementStrategy = &v + return s +} + +// SetSpotStrategy sets the SpotStrategy field's value. +func (s *RunInstancesInput) SetSpotStrategy(v string) *RunInstancesInput { + s.SpotStrategy = &v + return s +} + +// SetSuffixIndex sets the SuffixIndex field's value. +func (s *RunInstancesInput) SetSuffixIndex(v int32) *RunInstancesInput { + s.SuffixIndex = &v + return s +} + +// SetTags sets the Tags field's value. +func (s *RunInstancesInput) SetTags(v []*TagForRunInstancesInput) *RunInstancesInput { + s.Tags = v + return s +} + +// SetUniqueSuffix sets the UniqueSuffix field's value. +func (s *RunInstancesInput) SetUniqueSuffix(v bool) *RunInstancesInput { + s.UniqueSuffix = &v + return s +} + +// SetUserData sets the UserData field's value. +func (s *RunInstancesInput) SetUserData(v string) *RunInstancesInput { + s.UserData = &v + return s +} + +// SetVolumes sets the Volumes field's value. +func (s *RunInstancesInput) SetVolumes(v []*VolumeForRunInstancesInput) *RunInstancesInput { + s.Volumes = v + return s +} + +// SetZoneId sets the ZoneId field's value. +func (s *RunInstancesInput) SetZoneId(v string) *RunInstancesInput { + s.ZoneId = &v + return s +} + +type RunInstancesOutput struct { + _ struct{} `type:"structure"` + + Metadata *response.ResponseMetadata + + InstanceIds []*string `type:"list"` +} + +// String returns the string representation +func (s RunInstancesOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s RunInstancesOutput) GoString() string { + return s.String() +} + +// SetInstanceIds sets the InstanceIds field's value. 
+func (s *RunInstancesOutput) SetInstanceIds(v []*string) *RunInstancesOutput { + s.InstanceIds = v + return s +} + +type TagForRunInstancesInput struct { + _ struct{} `type:"structure"` + + Key *string `type:"string"` + + Value *string `type:"string"` +} + +// String returns the string representation +func (s TagForRunInstancesInput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s TagForRunInstancesInput) GoString() string { + return s.String() +} + +// SetKey sets the Key field's value. +func (s *TagForRunInstancesInput) SetKey(v string) *TagForRunInstancesInput { + s.Key = &v + return s +} + +// SetValue sets the Value field's value. +func (s *TagForRunInstancesInput) SetValue(v string) *TagForRunInstancesInput { + s.Value = &v + return s +} + +type VolumeForRunInstancesInput struct { + _ struct{} `type:"structure"` + + DeleteWithInstance *string `type:"string"` + + Size *int32 `type:"int32"` + + VolumeType *string `type:"string"` +} + +// String returns the string representation +func (s VolumeForRunInstancesInput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s VolumeForRunInstancesInput) GoString() string { + return s.String() +} + +// SetDeleteWithInstance sets the DeleteWithInstance field's value. +func (s *VolumeForRunInstancesInput) SetDeleteWithInstance(v string) *VolumeForRunInstancesInput { + s.DeleteWithInstance = &v + return s +} + +// SetSize sets the Size field's value. +func (s *VolumeForRunInstancesInput) SetSize(v int32) *VolumeForRunInstancesInput { + s.Size = &v + return s +} + +// SetVolumeType sets the VolumeType field's value. 
+func (s *VolumeForRunInstancesInput) SetVolumeType(v string) *VolumeForRunInstancesInput { + s.VolumeType = &v + return s +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_start_instance.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_start_instance.go new file mode 100644 index 000000000000..4dd9770fc4c1 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_start_instance.go @@ -0,0 +1,178 @@ +// Code generated by volcengine with private/model/cli/gen-api/main.go. DO NOT EDIT. + +package ecs + +import ( + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/response" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil" +) + +const opStartInstanceCommon = "StartInstance" + +// StartInstanceCommonRequest generates a "volcengine/request.Request" representing the +// client's request for the StartInstanceCommon operation. The "output" return +// value will be populated with the StartInstanceCommon request's response once the request completes +// successfully. +// +// Use "Send" method on the returned StartInstanceCommon Request to send the API call to the service. +// the "output" return value is not valid until after StartInstanceCommon Send returns without error. +// +// See StartInstanceCommon for more information on using the StartInstanceCommon +// API call, and error handling. +// +// // Example sending a request using the StartInstanceCommonRequest method. 
+// req, resp := client.StartInstanceCommonRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *ECS) StartInstanceCommonRequest(input *map[string]interface{}) (req *request.Request, output *map[string]interface{}) { + op := &request.Operation{ + Name: opStartInstanceCommon, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &map[string]interface{}{} + } + + output = &map[string]interface{}{} + req = c.newRequest(op, input, output) + + return +} + +// StartInstanceCommon API operation for ECS. +// +// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the VOLCENGINE API reference guide for ECS's +// API operation StartInstanceCommon for usage and error information. +func (c *ECS) StartInstanceCommon(input *map[string]interface{}) (*map[string]interface{}, error) { + req, out := c.StartInstanceCommonRequest(input) + return out, req.Send() +} + +// StartInstanceCommonWithContext is the same as StartInstanceCommon with the addition of +// the ability to pass a context and additional request options. +// +// See StartInstanceCommon for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur. +// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ECS) StartInstanceCommonWithContext(ctx volcengine.Context, input *map[string]interface{}, opts ...request.Option) (*map[string]interface{}, error) { + req, out := c.StartInstanceCommonRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+ return out, req.Send() +} + +const opStartInstance = "StartInstance" + +// StartInstanceRequest generates a "volcengine/request.Request" representing the +// client's request for the StartInstance operation. The "output" return +// value will be populated with the StartInstance request's response once the request completes +// successfully. +// +// Use "Send" method on the returned StartInstance Request to send the API call to the service. +// the "output" return value is not valid until after StartInstance Send returns without error. +// +// See StartInstance for more information on using the StartInstance +// API call, and error handling. +// +// // Example sending a request using the StartInstanceRequest method. +// req, resp := client.StartInstanceRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *ECS) StartInstanceRequest(input *StartInstanceInput) (req *request.Request, output *StartInstanceOutput) { + op := &request.Operation{ + Name: opStartInstance, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &StartInstanceInput{} + } + + output = &StartInstanceOutput{} + req = c.newRequest(op, input, output) + + return +} + +// StartInstance API operation for ECS. +// +// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the VOLCENGINE API reference guide for ECS's +// API operation StartInstance for usage and error information. +func (c *ECS) StartInstance(input *StartInstanceInput) (*StartInstanceOutput, error) { + req, out := c.StartInstanceRequest(input) + return out, req.Send() +} + +// StartInstanceWithContext is the same as StartInstance with the addition of +// the ability to pass a context and additional request options. +// +// See StartInstance for details on how to use this API operation.
+// +// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur. +// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ECS) StartInstanceWithContext(ctx volcengine.Context, input *StartInstanceInput, opts ...request.Option) (*StartInstanceOutput, error) { + req, out := c.StartInstanceRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +type StartInstanceInput struct { + _ struct{} `type:"structure"` + + InstanceId *string `type:"string"` +} + +// String returns the string representation +func (s StartInstanceInput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s StartInstanceInput) GoString() string { + return s.String() +} + +// SetInstanceId sets the InstanceId field's value. +func (s *StartInstanceInput) SetInstanceId(v string) *StartInstanceInput { + s.InstanceId = &v + return s +} + +type StartInstanceOutput struct { + _ struct{} `type:"structure"` + + Metadata *response.ResponseMetadata +} + +// String returns the string representation +func (s StartInstanceOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s StartInstanceOutput) GoString() string { + return s.String() +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_start_instances.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_start_instances.go new file mode 100644 index 000000000000..5c69578f5212 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_start_instances.go @@ -0,0 +1,246 @@ +// Code generated by volcengine with private/model/cli/gen-api/main.go. DO NOT EDIT.
+ +package ecs + +import ( + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/response" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil" +) + +const opStartInstancesCommon = "StartInstances" + +// StartInstancesCommonRequest generates a "volcengine/request.Request" representing the +// client's request for the StartInstancesCommon operation. The "output" return +// value will be populated with the StartInstancesCommon request's response once the request completes +// successfully. +// +// Use "Send" method on the returned StartInstancesCommon Request to send the API call to the service. +// the "output" return value is not valid until after StartInstancesCommon Send returns without error. +// +// See StartInstancesCommon for more information on using the StartInstancesCommon +// API call, and error handling. +// +// // Example sending a request using the StartInstancesCommonRequest method. +// req, resp := client.StartInstancesCommonRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *ECS) StartInstancesCommonRequest(input *map[string]interface{}) (req *request.Request, output *map[string]interface{}) { + op := &request.Operation{ + Name: opStartInstancesCommon, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &map[string]interface{}{} + } + + output = &map[string]interface{}{} + req = c.newRequest(op, input, output) + + return +} + +// StartInstancesCommon API operation for ECS. +// +// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. 
+// +// See the VOLCENGINE API reference guide for ECS's +// API operation StartInstancesCommon for usage and error information. +func (c *ECS) StartInstancesCommon(input *map[string]interface{}) (*map[string]interface{}, error) { + req, out := c.StartInstancesCommonRequest(input) + return out, req.Send() +} + +// StartInstancesCommonWithContext is the same as StartInstancesCommon with the addition of +// the ability to pass a context and additional request options. +// +// See StartInstancesCommon for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur. +// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ECS) StartInstancesCommonWithContext(ctx volcengine.Context, input *map[string]interface{}, opts ...request.Option) (*map[string]interface{}, error) { + req, out := c.StartInstancesCommonRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opStartInstances = "StartInstances" + +// StartInstancesRequest generates a "volcengine/request.Request" representing the +// client's request for the StartInstances operation. The "output" return +// value will be populated with the StartInstances request's response once the request completes +// successfully. +// +// Use "Send" method on the returned StartInstances Request to send the API call to the service. +// the "output" return value is not valid until after StartInstances Send returns without error. +// +// See StartInstances for more information on using the StartInstances +// API call, and error handling. +// +// // Example sending a request using the StartInstancesRequest method.
+// req, resp := client.StartInstancesRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *ECS) StartInstancesRequest(input *StartInstancesInput) (req *request.Request, output *StartInstancesOutput) { + op := &request.Operation{ + Name: opStartInstances, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &StartInstancesInput{} + } + + output = &StartInstancesOutput{} + req = c.newRequest(op, input, output) + + return +} + +// StartInstances API operation for ECS. +// +// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the VOLCENGINE API reference guide for ECS's +// API operation StartInstances for usage and error information. +func (c *ECS) StartInstances(input *StartInstancesInput) (*StartInstancesOutput, error) { + req, out := c.StartInstancesRequest(input) + return out, req.Send() +} + +// StartInstancesWithContext is the same as StartInstances with the addition of +// the ability to pass a context and additional request options. +// +// See StartInstances for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur. +// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ECS) StartInstancesWithContext(ctx volcengine.Context, input *StartInstancesInput, opts ...request.Option) (*StartInstancesOutput, error) { + req, out := c.StartInstancesRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...)
+ return out, req.Send() +} + +type ErrorForStartInstancesOutput struct { + _ struct{} `type:"structure"` + + Code *string `type:"string"` + + Message *string `type:"string"` +} + +// String returns the string representation +func (s ErrorForStartInstancesOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s ErrorForStartInstancesOutput) GoString() string { + return s.String() +} + +// SetCode sets the Code field's value. +func (s *ErrorForStartInstancesOutput) SetCode(v string) *ErrorForStartInstancesOutput { + s.Code = &v + return s +} + +// SetMessage sets the Message field's value. +func (s *ErrorForStartInstancesOutput) SetMessage(v string) *ErrorForStartInstancesOutput { + s.Message = &v + return s +} + +type OperationDetailForStartInstancesOutput struct { + _ struct{} `type:"structure"` + + Error *ErrorForStartInstancesOutput `type:"structure"` + + InstanceId *string `type:"string"` +} + +// String returns the string representation +func (s OperationDetailForStartInstancesOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s OperationDetailForStartInstancesOutput) GoString() string { + return s.String() +} + +// SetError sets the Error field's value. +func (s *OperationDetailForStartInstancesOutput) SetError(v *ErrorForStartInstancesOutput) *OperationDetailForStartInstancesOutput { + s.Error = v + return s +} + +// SetInstanceId sets the InstanceId field's value. 
+func (s *OperationDetailForStartInstancesOutput) SetInstanceId(v string) *OperationDetailForStartInstancesOutput { + s.InstanceId = &v + return s +} + +type StartInstancesInput struct { + _ struct{} `type:"structure"` + + InstanceIds []*string `type:"list"` +} + +// String returns the string representation +func (s StartInstancesInput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s StartInstancesInput) GoString() string { + return s.String() +} + +// SetInstanceIds sets the InstanceIds field's value. +func (s *StartInstancesInput) SetInstanceIds(v []*string) *StartInstancesInput { + s.InstanceIds = v + return s +} + +type StartInstancesOutput struct { + _ struct{} `type:"structure"` + + Metadata *response.ResponseMetadata + + OperationDetails []*OperationDetailForStartInstancesOutput `type:"list"` +} + +// String returns the string representation +func (s StartInstancesOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s StartInstancesOutput) GoString() string { + return s.String() +} + +// SetOperationDetails sets the OperationDetails field's value. +func (s *StartInstancesOutput) SetOperationDetails(v []*OperationDetailForStartInstancesOutput) *StartInstancesOutput { + s.OperationDetails = v + return s +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_stop_instance.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_stop_instance.go new file mode 100644 index 000000000000..881c762f779a --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_stop_instance.go @@ -0,0 +1,194 @@ +// Code generated by volcengine with private/model/cli/gen-api/main.go. DO NOT EDIT. 
+ +package ecs + +import ( + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/response" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil" +) + +const opStopInstanceCommon = "StopInstance" + +// StopInstanceCommonRequest generates a "volcengine/request.Request" representing the +// client's request for the StopInstanceCommon operation. The "output" return +// value will be populated with the StopInstanceCommon request's response once the request completes +// successfully. +// +// Use "Send" method on the returned StopInstanceCommon Request to send the API call to the service. +// the "output" return value is not valid until after StopInstanceCommon Send returns without error. +// +// See StopInstanceCommon for more information on using the StopInstanceCommon +// API call, and error handling. +// +// // Example sending a request using the StopInstanceCommonRequest method. +// req, resp := client.StopInstanceCommonRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *ECS) StopInstanceCommonRequest(input *map[string]interface{}) (req *request.Request, output *map[string]interface{}) { + op := &request.Operation{ + Name: opStopInstanceCommon, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &map[string]interface{}{} + } + + output = &map[string]interface{}{} + req = c.newRequest(op, input, output) + + return +} + +// StopInstanceCommon API operation for ECS. +// +// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. 
+// +// See the VOLCENGINE API reference guide for ECS's +// API operation StopInstanceCommon for usage and error information. +func (c *ECS) StopInstanceCommon(input *map[string]interface{}) (*map[string]interface{}, error) { + req, out := c.StopInstanceCommonRequest(input) + return out, req.Send() +} + +// StopInstanceCommonWithContext is the same as StopInstanceCommon with the addition of +// the ability to pass a context and additional request options. +// +// See StopInstanceCommon for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur. +// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ECS) StopInstanceCommonWithContext(ctx volcengine.Context, input *map[string]interface{}, opts ...request.Option) (*map[string]interface{}, error) { + req, out := c.StopInstanceCommonRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opStopInstance = "StopInstance" + +// StopInstanceRequest generates a "volcengine/request.Request" representing the +// client's request for the StopInstance operation. The "output" return +// value will be populated with the StopInstance request's response once the request completes +// successfully. +// +// Use "Send" method on the returned StopInstance Request to send the API call to the service. +// the "output" return value is not valid until after StopInstance Send returns without error. +// +// See StopInstance for more information on using the StopInstance +// API call, and error handling. +// +// // Example sending a request using the StopInstanceRequest method.
+// req, resp := client.StopInstanceRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +func (c *ECS) StopInstanceRequest(input *StopInstanceInput) (req *request.Request, output *StopInstanceOutput) { + op := &request.Operation{ + Name: opStopInstance, + HTTPMethod: "GET", + HTTPPath: "/", + } + + if input == nil { + input = &StopInstanceInput{} + } + + output = &StopInstanceOutput{} + req = c.newRequest(op, input, output) + + return +} + +// StopInstance API operation for ECS. +// +// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions +// with volcengineerr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the VOLCENGINE API reference guide for ECS's +// API operation StopInstance for usage and error information. +func (c *ECS) StopInstance(input *StopInstanceInput) (*StopInstanceOutput, error) { + req, out := c.StopInstanceRequest(input) + return out, req.Send() +} + +// StopInstanceWithContext is the same as StopInstance with the addition of +// the ability to pass a context and additional request options. +// +// See StopInstance for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur. +// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ECS) StopInstanceWithContext(ctx volcengine.Context, input *StopInstanceInput, opts ...request.Option) (*StopInstanceOutput, error) { + req, out := c.StopInstanceRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...)
+ return out, req.Send() +} + +type StopInstanceInput struct { + _ struct{} `type:"structure"` + + ForceStop *bool `type:"boolean"` + + InstanceId *string `type:"string"` + + StoppedMode *string `type:"string"` +} + +// String returns the string representation +func (s StopInstanceInput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s StopInstanceInput) GoString() string { + return s.String() +} + +// SetForceStop sets the ForceStop field's value. +func (s *StopInstanceInput) SetForceStop(v bool) *StopInstanceInput { + s.ForceStop = &v + return s +} + +// SetInstanceId sets the InstanceId field's value. +func (s *StopInstanceInput) SetInstanceId(v string) *StopInstanceInput { + s.InstanceId = &v + return s +} + +// SetStoppedMode sets the StoppedMode field's value. +func (s *StopInstanceInput) SetStoppedMode(v string) *StopInstanceInput { + s.StoppedMode = &v + return s +} + +type StopInstanceOutput struct { + _ struct{} `type:"structure"` + + Metadata *response.ResponseMetadata +} + +// String returns the string representation +func (s StopInstanceOutput) String() string { + return volcengineutil.Prettify(s) +} + +// GoString returns the string representation +func (s StopInstanceOutput) GoString() string { + return s.String() +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_stop_instances.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_stop_instances.go new file mode 100644 index 000000000000..67a7fb95a940 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_stop_instances.go @@ -0,0 +1,262 @@ +// Code generated by volcengine with private/model/cli/gen-api/main.go. DO NOT EDIT. 
+
+package ecs
+
+import (
+	"k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine"
+	"k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request"
+	"k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/response"
+	"k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil"
+)
+
+const opStopInstancesCommon = "StopInstances"
+
+// StopInstancesCommonRequest generates a "volcengine/request.Request" representing the
+// client's request for the StopInstancesCommon operation. The "output" return
+// value will be populated with the StopInstancesCommon request's response once the request completes
+// successfully.
+//
+// Use "Send" method on the returned StopInstancesCommon Request to send the API call to the service.
+// the "output" return value is not valid until after StopInstancesCommon Send returns without error.
+//
+// See StopInstancesCommon for more information on using the StopInstancesCommon
+// API call, and error handling.
+//
+//    // Example sending a request using the StopInstancesCommonRequest method.
+//    req, resp := client.StopInstancesCommonRequest(params)
+//
+//    err := req.Send()
+//    if err == nil { // resp is now filled
+//        fmt.Println(resp)
+//    }
+func (c *ECS) StopInstancesCommonRequest(input *map[string]interface{}) (req *request.Request, output *map[string]interface{}) {
+	op := &request.Operation{
+		Name:       opStopInstancesCommon,
+		HTTPMethod: "GET",
+		HTTPPath:   "/",
+	}
+
+	if input == nil {
+		input = &map[string]interface{}{}
+	}
+
+	output = &map[string]interface{}{}
+	req = c.newRequest(op, input, output)
+
+	return
+}
+
+// StopInstancesCommon API operation for ECS.
+//
+// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions
+// with volcengineerr.Error's Code and Message methods to get detailed information about
+// the error.
+//
+// See the VOLCENGINE API reference guide for ECS's
+// API operation StopInstancesCommon for usage and error information.
+func (c *ECS) StopInstancesCommon(input *map[string]interface{}) (*map[string]interface{}, error) {
+	req, out := c.StopInstancesCommonRequest(input)
+	return out, req.Send()
+}
+
+// StopInstancesCommonWithContext is the same as StopInstancesCommon with the addition of
+// the ability to pass a context and additional request options.
+//
+// See StopInstancesCommon for details on how to use this API operation.
+//
+// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur.
+// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/
+// for more information on using Contexts.
+func (c *ECS) StopInstancesCommonWithContext(ctx volcengine.Context, input *map[string]interface{}, opts ...request.Option) (*map[string]interface{}, error) {
+	req, out := c.StopInstancesCommonRequest(input)
+	req.SetContext(ctx)
+	req.ApplyOptions(opts...)
+	return out, req.Send()
+}
+
+const opStopInstances = "StopInstances"
+
+// StopInstancesRequest generates a "volcengine/request.Request" representing the
+// client's request for the StopInstances operation. The "output" return
+// value will be populated with the StopInstances request's response once the request completes
+// successfully.
+//
+// Use "Send" method on the returned StopInstances Request to send the API call to the service.
+// the "output" return value is not valid until after StopInstances Send returns without error.
+//
+// See StopInstances for more information on using the StopInstances
+// API call, and error handling.
+//
+//    // Example sending a request using the StopInstancesRequest method.
+//    req, resp := client.StopInstancesRequest(params)
+//
+//    err := req.Send()
+//    if err == nil { // resp is now filled
+//        fmt.Println(resp)
+//    }
+func (c *ECS) StopInstancesRequest(input *StopInstancesInput) (req *request.Request, output *StopInstancesOutput) {
+	op := &request.Operation{
+		Name:       opStopInstances,
+		HTTPMethod: "GET",
+		HTTPPath:   "/",
+	}
+
+	if input == nil {
+		input = &StopInstancesInput{}
+	}
+
+	output = &StopInstancesOutput{}
+	req = c.newRequest(op, input, output)
+
+	return
+}
+
+// StopInstances API operation for ECS.
+//
+// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions
+// with volcengineerr.Error's Code and Message methods to get detailed information about
+// the error.
+//
+// See the VOLCENGINE API reference guide for ECS's
+// API operation StopInstances for usage and error information.
+func (c *ECS) StopInstances(input *StopInstancesInput) (*StopInstancesOutput, error) {
+	req, out := c.StopInstancesRequest(input)
+	return out, req.Send()
+}
+
+// StopInstancesWithContext is the same as StopInstances with the addition of
+// the ability to pass a context and additional request options.
+//
+// See StopInstances for details on how to use this API operation.
+//
+// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur.
+// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/
+// for more information on using Contexts.
+func (c *ECS) StopInstancesWithContext(ctx volcengine.Context, input *StopInstancesInput, opts ...request.Option) (*StopInstancesOutput, error) {
+	req, out := c.StopInstancesRequest(input)
+	req.SetContext(ctx)
+	req.ApplyOptions(opts...)
+	return out, req.Send()
+}
+
+type ErrorForStopInstancesOutput struct {
+	_ struct{} `type:"structure"`
+
+	Code *string `type:"string"`
+
+	Message *string `type:"string"`
+}
+
+// String returns the string representation
+func (s ErrorForStopInstancesOutput) String() string {
+	return volcengineutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s ErrorForStopInstancesOutput) GoString() string {
+	return s.String()
+}
+
+// SetCode sets the Code field's value.
+func (s *ErrorForStopInstancesOutput) SetCode(v string) *ErrorForStopInstancesOutput {
+	s.Code = &v
+	return s
+}
+
+// SetMessage sets the Message field's value.
+func (s *ErrorForStopInstancesOutput) SetMessage(v string) *ErrorForStopInstancesOutput {
+	s.Message = &v
+	return s
+}
+
+type OperationDetailForStopInstancesOutput struct {
+	_ struct{} `type:"structure"`
+
+	Error *ErrorForStopInstancesOutput `type:"structure"`
+
+	InstanceId *string `type:"string"`
+}
+
+// String returns the string representation
+func (s OperationDetailForStopInstancesOutput) String() string {
+	return volcengineutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s OperationDetailForStopInstancesOutput) GoString() string {
+	return s.String()
+}
+
+// SetError sets the Error field's value.
+func (s *OperationDetailForStopInstancesOutput) SetError(v *ErrorForStopInstancesOutput) *OperationDetailForStopInstancesOutput {
+	s.Error = v
+	return s
+}
+
+// SetInstanceId sets the InstanceId field's value.
+func (s *OperationDetailForStopInstancesOutput) SetInstanceId(v string) *OperationDetailForStopInstancesOutput {
+	s.InstanceId = &v
+	return s
+}
+
+type StopInstancesInput struct {
+	_ struct{} `type:"structure"`
+
+	ForceStop *bool `type:"boolean"`
+
+	InstanceIds []*string `type:"list"`
+
+	StoppedMode *string `type:"string"`
+}
+
+// String returns the string representation
+func (s StopInstancesInput) String() string {
+	return volcengineutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s StopInstancesInput) GoString() string {
+	return s.String()
+}
+
+// SetForceStop sets the ForceStop field's value.
+func (s *StopInstancesInput) SetForceStop(v bool) *StopInstancesInput {
+	s.ForceStop = &v
+	return s
+}
+
+// SetInstanceIds sets the InstanceIds field's value.
+func (s *StopInstancesInput) SetInstanceIds(v []*string) *StopInstancesInput {
+	s.InstanceIds = v
+	return s
+}
+
+// SetStoppedMode sets the StoppedMode field's value.
+func (s *StopInstancesInput) SetStoppedMode(v string) *StopInstancesInput {
+	s.StoppedMode = &v
+	return s
+}
+
+type StopInstancesOutput struct {
+	_ struct{} `type:"structure"`
+
+	Metadata *response.ResponseMetadata
+
+	OperationDetails []*OperationDetailForStopInstancesOutput `type:"list"`
+}
+
+// String returns the string representation
+func (s StopInstancesOutput) String() string {
+	return volcengineutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s StopInstancesOutput) GoString() string {
+	return s.String()
+}
+
+// SetOperationDetails sets the OperationDetails field's value.
+func (s *StopInstancesOutput) SetOperationDetails(v []*OperationDetailForStopInstancesOutput) *StopInstancesOutput {
+	s.OperationDetails = v
+	return s
+}
diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_update_system_events.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_update_system_events.go
new file mode 100644
index 000000000000..977d73b44783
--- /dev/null
+++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/api_update_system_events.go
@@ -0,0 +1,278 @@
+// Code generated by volcengine with private/model/cli/gen-api/main.go. DO NOT EDIT.
+
+package ecs
+
+import (
+	"k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine"
+	"k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request"
+	"k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/response"
+	"k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil"
+)
+
+const opUpdateSystemEventsCommon = "UpdateSystemEvents"
+
+// UpdateSystemEventsCommonRequest generates a "volcengine/request.Request" representing the
+// client's request for the UpdateSystemEventsCommon operation. The "output" return
+// value will be populated with the UpdateSystemEventsCommon request's response once the request completes
+// successfully.
+//
+// Use "Send" method on the returned UpdateSystemEventsCommon Request to send the API call to the service.
+// the "output" return value is not valid until after UpdateSystemEventsCommon Send returns without error.
+//
+// See UpdateSystemEventsCommon for more information on using the UpdateSystemEventsCommon
+// API call, and error handling.
+//
+//    // Example sending a request using the UpdateSystemEventsCommonRequest method.
+//    req, resp := client.UpdateSystemEventsCommonRequest(params)
+//
+//    err := req.Send()
+//    if err == nil { // resp is now filled
+//        fmt.Println(resp)
+//    }
+func (c *ECS) UpdateSystemEventsCommonRequest(input *map[string]interface{}) (req *request.Request, output *map[string]interface{}) {
+	op := &request.Operation{
+		Name:       opUpdateSystemEventsCommon,
+		HTTPMethod: "GET",
+		HTTPPath:   "/",
+	}
+
+	if input == nil {
+		input = &map[string]interface{}{}
+	}
+
+	output = &map[string]interface{}{}
+	req = c.newRequest(op, input, output)
+
+	return
+}
+
+// UpdateSystemEventsCommon API operation for ECS.
+//
+// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions
+// with volcengineerr.Error's Code and Message methods to get detailed information about
+// the error.
+//
+// See the VOLCENGINE API reference guide for ECS's
+// API operation UpdateSystemEventsCommon for usage and error information.
+func (c *ECS) UpdateSystemEventsCommon(input *map[string]interface{}) (*map[string]interface{}, error) {
+	req, out := c.UpdateSystemEventsCommonRequest(input)
+	return out, req.Send()
+}
+
+// UpdateSystemEventsCommonWithContext is the same as UpdateSystemEventsCommon with the addition of
+// the ability to pass a context and additional request options.
+//
+// See UpdateSystemEventsCommon for details on how to use this API operation.
+//
+// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur.
+// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/
+// for more information on using Contexts.
+func (c *ECS) UpdateSystemEventsCommonWithContext(ctx volcengine.Context, input *map[string]interface{}, opts ...request.Option) (*map[string]interface{}, error) {
+	req, out := c.UpdateSystemEventsCommonRequest(input)
+	req.SetContext(ctx)
+	req.ApplyOptions(opts...)
+	return out, req.Send()
+}
+
+const opUpdateSystemEvents = "UpdateSystemEvents"
+
+// UpdateSystemEventsRequest generates a "volcengine/request.Request" representing the
+// client's request for the UpdateSystemEvents operation. The "output" return
+// value will be populated with the UpdateSystemEvents request's response once the request completes
+// successfully.
+//
+// Use "Send" method on the returned UpdateSystemEvents Request to send the API call to the service.
+// the "output" return value is not valid until after UpdateSystemEvents Send returns without error.
+//
+// See UpdateSystemEvents for more information on using the UpdateSystemEvents
+// API call, and error handling.
+//
+//    // Example sending a request using the UpdateSystemEventsRequest method.
+//    req, resp := client.UpdateSystemEventsRequest(params)
+//
+//    err := req.Send()
+//    if err == nil { // resp is now filled
+//        fmt.Println(resp)
+//    }
+func (c *ECS) UpdateSystemEventsRequest(input *UpdateSystemEventsInput) (req *request.Request, output *UpdateSystemEventsOutput) {
+	op := &request.Operation{
+		Name:       opUpdateSystemEvents,
+		HTTPMethod: "GET",
+		HTTPPath:   "/",
+	}
+
+	if input == nil {
+		input = &UpdateSystemEventsInput{}
+	}
+
+	output = &UpdateSystemEventsOutput{}
+	req = c.newRequest(op, input, output)
+
+	return
+}
+
+// UpdateSystemEvents API operation for ECS.
+//
+// Returns volcengineerr.Error for service API and SDK errors. Use runtime type assertions
+// with volcengineerr.Error's Code and Message methods to get detailed information about
+// the error.
+//
+// See the VOLCENGINE API reference guide for ECS's
+// API operation UpdateSystemEvents for usage and error information.
+func (c *ECS) UpdateSystemEvents(input *UpdateSystemEventsInput) (*UpdateSystemEventsOutput, error) {
+	req, out := c.UpdateSystemEventsRequest(input)
+	return out, req.Send()
+}
+
+// UpdateSystemEventsWithContext is the same as UpdateSystemEvents with the addition of
+// the ability to pass a context and additional request options.
+//
+// See UpdateSystemEvents for details on how to use this API operation.
+//
+// The context must be non-nil and will be used for request cancellation. If the context is nil a panic will occur.
+// In the future the SDK may create sub-contexts for http.Requests. See https://golang.org/pkg/context/
+// for more information on using Contexts.
+func (c *ECS) UpdateSystemEventsWithContext(ctx volcengine.Context, input *UpdateSystemEventsInput, opts ...request.Option) (*UpdateSystemEventsOutput, error) {
+	req, out := c.UpdateSystemEventsRequest(input)
+	req.SetContext(ctx)
+	req.ApplyOptions(opts...)
+	return out, req.Send()
+}
+
+type ErrorForUpdateSystemEventsOutput struct {
+	_ struct{} `type:"structure"`
+
+	Code *string `type:"string"`
+
+	Message *string `type:"string"`
+}
+
+// String returns the string representation
+func (s ErrorForUpdateSystemEventsOutput) String() string {
+	return volcengineutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s ErrorForUpdateSystemEventsOutput) GoString() string {
+	return s.String()
+}
+
+// SetCode sets the Code field's value.
+func (s *ErrorForUpdateSystemEventsOutput) SetCode(v string) *ErrorForUpdateSystemEventsOutput {
+	s.Code = &v
+	return s
+}
+
+// SetMessage sets the Message field's value.
+func (s *ErrorForUpdateSystemEventsOutput) SetMessage(v string) *ErrorForUpdateSystemEventsOutput {
+	s.Message = &v
+	return s
+}
+
+type OperationDetailForUpdateSystemEventsOutput struct {
+	_ struct{} `type:"structure"`
+
+	Error *ErrorForUpdateSystemEventsOutput `type:"structure"`
+
+	EventId *string `type:"string"`
+}
+
+// String returns the string representation
+func (s OperationDetailForUpdateSystemEventsOutput) String() string {
+	return volcengineutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s OperationDetailForUpdateSystemEventsOutput) GoString() string {
+	return s.String()
+}
+
+// SetError sets the Error field's value.
+func (s *OperationDetailForUpdateSystemEventsOutput) SetError(v *ErrorForUpdateSystemEventsOutput) *OperationDetailForUpdateSystemEventsOutput {
+	s.Error = v
+	return s
+}
+
+// SetEventId sets the EventId field's value.
+func (s *OperationDetailForUpdateSystemEventsOutput) SetEventId(v string) *OperationDetailForUpdateSystemEventsOutput {
+	s.EventId = &v
+	return s
+}
+
+type UpdateSystemEventsInput struct {
+	_ struct{} `type:"structure"`
+
+	EventIds []*string `type:"list"`
+
+	OperatedEndAt *string `type:"string"`
+
+	OperatedStartAt *string `type:"string"`
+
+	Status *string `type:"string"`
+
+	UpdatedAt *string `type:"string"`
+}
+
+// String returns the string representation
+func (s UpdateSystemEventsInput) String() string {
+	return volcengineutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s UpdateSystemEventsInput) GoString() string {
+	return s.String()
+}
+
+// SetEventIds sets the EventIds field's value.
+func (s *UpdateSystemEventsInput) SetEventIds(v []*string) *UpdateSystemEventsInput {
+	s.EventIds = v
+	return s
+}
+
+// SetOperatedEndAt sets the OperatedEndAt field's value.
+func (s *UpdateSystemEventsInput) SetOperatedEndAt(v string) *UpdateSystemEventsInput {
+	s.OperatedEndAt = &v
+	return s
+}
+
+// SetOperatedStartAt sets the OperatedStartAt field's value.
+func (s *UpdateSystemEventsInput) SetOperatedStartAt(v string) *UpdateSystemEventsInput {
+	s.OperatedStartAt = &v
+	return s
+}
+
+// SetStatus sets the Status field's value.
+func (s *UpdateSystemEventsInput) SetStatus(v string) *UpdateSystemEventsInput {
+	s.Status = &v
+	return s
+}
+
+// SetUpdatedAt sets the UpdatedAt field's value.
+func (s *UpdateSystemEventsInput) SetUpdatedAt(v string) *UpdateSystemEventsInput {
+	s.UpdatedAt = &v
+	return s
+}
+
+type UpdateSystemEventsOutput struct {
+	_ struct{} `type:"structure"`
+
+	Metadata *response.ResponseMetadata
+
+	OperationDetails []*OperationDetailForUpdateSystemEventsOutput `type:"list"`
+}
+
+// String returns the string representation
+func (s UpdateSystemEventsOutput) String() string {
+	return volcengineutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s UpdateSystemEventsOutput) GoString() string {
+	return s.String()
+}
+
+// SetOperationDetails sets the OperationDetails field's value.
+func (s *UpdateSystemEventsOutput) SetOperationDetails(v []*OperationDetailForUpdateSystemEventsOutput) *UpdateSystemEventsOutput {
+	s.OperationDetails = v
+	return s
+}
diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/interface_ecs.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/interface_ecs.go
new file mode 100644
index 000000000000..a2831143177a
--- /dev/null
+++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/interface_ecs.go
@@ -0,0 +1,449 @@
+// Code generated by volcengine with private/model/cli/gen-api/main.go. DO NOT EDIT.
+
+// Package ecs provides an interface to enable mocking the ECS service client
+// for testing your code.
+//
+// It is important to note that this interface will have breaking changes
+// when the service model is updated and adds new API operations, paginators,
+// and waiters.
+package ecs
+
+import (
+	"k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine"
+	"k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request"
+)
+
+// ECSAPI provides an interface to enable mocking the
+// ecs.ECS service client's API operations.
+//
+//    // myFunc uses an SDK service client to make a request to
+//    // ECS.
+//    func myFunc(svc ECSAPI) bool {
+//        // Make svc.AssociateInstancesIamRole request
+//    }
+//
+//    func main() {
+//        sess := session.New()
+//        svc := ecs.New(sess)
+//
+//        myFunc(svc)
+//    }
+type ECSAPI interface {
+	AssociateInstancesIamRoleCommon(*map[string]interface{}) (*map[string]interface{}, error)
+	AssociateInstancesIamRoleCommonWithContext(volcengine.Context, *map[string]interface{}, ...request.Option) (*map[string]interface{}, error)
+	AssociateInstancesIamRoleCommonRequest(*map[string]interface{}) (*request.Request, *map[string]interface{})
+
+	AssociateInstancesIamRole(*AssociateInstancesIamRoleInput) (*AssociateInstancesIamRoleOutput, error)
+	AssociateInstancesIamRoleWithContext(volcengine.Context, *AssociateInstancesIamRoleInput, ...request.Option) (*AssociateInstancesIamRoleOutput, error)
+	AssociateInstancesIamRoleRequest(*AssociateInstancesIamRoleInput) (*request.Request, *AssociateInstancesIamRoleOutput)
+
+	AttachKeyPairCommon(*map[string]interface{}) (*map[string]interface{}, error)
+	AttachKeyPairCommonWithContext(volcengine.Context, *map[string]interface{}, ...request.Option) (*map[string]interface{}, error)
+	AttachKeyPairCommonRequest(*map[string]interface{}) (*request.Request, *map[string]interface{})
+
+	AttachKeyPair(*AttachKeyPairInput) (*AttachKeyPairOutput, error)
+	AttachKeyPairWithContext(volcengine.Context, *AttachKeyPairInput, ...request.Option) (*AttachKeyPairOutput, error)
+	AttachKeyPairRequest(*AttachKeyPairInput) (*request.Request, *AttachKeyPairOutput)
+
+	CreateDeploymentSetCommon(*map[string]interface{}) (*map[string]interface{}, error)
+	CreateDeploymentSetCommonWithContext(volcengine.Context, *map[string]interface{}, ...request.Option) (*map[string]interface{}, error)
+	CreateDeploymentSetCommonRequest(*map[string]interface{}) (*request.Request, *map[string]interface{})
+
+	CreateDeploymentSet(*CreateDeploymentSetInput) (*CreateDeploymentSetOutput, error)
+	CreateDeploymentSetWithContext(volcengine.Context, *CreateDeploymentSetInput, ...request.Option) (*CreateDeploymentSetOutput, error)
+	CreateDeploymentSetRequest(*CreateDeploymentSetInput) (*request.Request, *CreateDeploymentSetOutput)
+
+	CreateImageCommon(*map[string]interface{}) (*map[string]interface{}, error)
+	CreateImageCommonWithContext(volcengine.Context, *map[string]interface{}, ...request.Option) (*map[string]interface{}, error)
+	CreateImageCommonRequest(*map[string]interface{}) (*request.Request, *map[string]interface{})
+
+	CreateImage(*CreateImageInput) (*CreateImageOutput, error)
+	CreateImageWithContext(volcengine.Context, *CreateImageInput, ...request.Option) (*CreateImageOutput, error)
+	CreateImageRequest(*CreateImageInput) (*request.Request, *CreateImageOutput)
+
+	CreateKeyPairCommon(*map[string]interface{}) (*map[string]interface{}, error)
+	CreateKeyPairCommonWithContext(volcengine.Context, *map[string]interface{}, ...request.Option) (*map[string]interface{}, error)
+	CreateKeyPairCommonRequest(*map[string]interface{}) (*request.Request, *map[string]interface{})
+
+	CreateKeyPair(*CreateKeyPairInput) (*CreateKeyPairOutput, error)
+	CreateKeyPairWithContext(volcengine.Context, *CreateKeyPairInput, ...request.Option) (*CreateKeyPairOutput, error)
+	CreateKeyPairRequest(*CreateKeyPairInput) (*request.Request, *CreateKeyPairOutput)
+
+	CreateTagsCommon(*map[string]interface{}) (*map[string]interface{}, error)
+	CreateTagsCommonWithContext(volcengine.Context, *map[string]interface{}, ...request.Option) (*map[string]interface{}, error)
+	CreateTagsCommonRequest(*map[string]interface{}) (*request.Request, *map[string]interface{})
+
+	CreateTags(*CreateTagsInput) (*CreateTagsOutput, error)
+	CreateTagsWithContext(volcengine.Context, *CreateTagsInput, ...request.Option) (*CreateTagsOutput, error)
+	CreateTagsRequest(*CreateTagsInput) (*request.Request, *CreateTagsOutput)
+
+	DeleteDeploymentSetCommon(*map[string]interface{}) (*map[string]interface{}, error)
+	DeleteDeploymentSetCommonWithContext(volcengine.Context, *map[string]interface{}, ...request.Option) (*map[string]interface{}, error)
+	DeleteDeploymentSetCommonRequest(*map[string]interface{}) (*request.Request, *map[string]interface{})
+
+	DeleteDeploymentSet(*DeleteDeploymentSetInput) (*DeleteDeploymentSetOutput, error)
+	DeleteDeploymentSetWithContext(volcengine.Context, *DeleteDeploymentSetInput, ...request.Option) (*DeleteDeploymentSetOutput, error)
+	DeleteDeploymentSetRequest(*DeleteDeploymentSetInput) (*request.Request, *DeleteDeploymentSetOutput)
+
+	DeleteImagesCommon(*map[string]interface{}) (*map[string]interface{}, error)
+	DeleteImagesCommonWithContext(volcengine.Context, *map[string]interface{}, ...request.Option) (*map[string]interface{}, error)
+	DeleteImagesCommonRequest(*map[string]interface{}) (*request.Request, *map[string]interface{})
+
+	DeleteImages(*DeleteImagesInput) (*DeleteImagesOutput, error)
+	DeleteImagesWithContext(volcengine.Context, *DeleteImagesInput, ...request.Option) (*DeleteImagesOutput, error)
+	DeleteImagesRequest(*DeleteImagesInput) (*request.Request, *DeleteImagesOutput)
+
+	DeleteInstanceCommon(*map[string]interface{}) (*map[string]interface{}, error)
+	DeleteInstanceCommonWithContext(volcengine.Context, *map[string]interface{}, ...request.Option) (*map[string]interface{}, error)
+	DeleteInstanceCommonRequest(*map[string]interface{}) (*request.Request, *map[string]interface{})
+
+	DeleteInstance(*DeleteInstanceInput) (*DeleteInstanceOutput, error)
+	DeleteInstanceWithContext(volcengine.Context, *DeleteInstanceInput, ...request.Option) (*DeleteInstanceOutput, error)
+	DeleteInstanceRequest(*DeleteInstanceInput) (*request.Request, *DeleteInstanceOutput)
+
+	DeleteInstancesCommon(*map[string]interface{}) (*map[string]interface{}, error)
+	DeleteInstancesCommonWithContext(volcengine.Context, *map[string]interface{}, ...request.Option) (*map[string]interface{}, error)
+	DeleteInstancesCommonRequest(*map[string]interface{}) (*request.Request, *map[string]interface{})
+
+	DeleteInstances(*DeleteInstancesInput) (*DeleteInstancesOutput, error)
+	DeleteInstancesWithContext(volcengine.Context, *DeleteInstancesInput, ...request.Option) (*DeleteInstancesOutput, error)
+	DeleteInstancesRequest(*DeleteInstancesInput) (*request.Request, *DeleteInstancesOutput)
+
+	DeleteKeyPairsCommon(*map[string]interface{}) (*map[string]interface{}, error)
+	DeleteKeyPairsCommonWithContext(volcengine.Context, *map[string]interface{}, ...request.Option) (*map[string]interface{}, error)
+	DeleteKeyPairsCommonRequest(*map[string]interface{}) (*request.Request, *map[string]interface{})
+
+	DeleteKeyPairs(*DeleteKeyPairsInput) (*DeleteKeyPairsOutput, error)
+	DeleteKeyPairsWithContext(volcengine.Context, *DeleteKeyPairsInput, ...request.Option) (*DeleteKeyPairsOutput, error)
+	DeleteKeyPairsRequest(*DeleteKeyPairsInput) (*request.Request, *DeleteKeyPairsOutput)
+
+	DeleteTagsCommon(*map[string]interface{}) (*map[string]interface{}, error)
+	DeleteTagsCommonWithContext(volcengine.Context, *map[string]interface{}, ...request.Option) (*map[string]interface{}, error)
+	DeleteTagsCommonRequest(*map[string]interface{}) (*request.Request, *map[string]interface{})
+
+	DeleteTags(*DeleteTagsInput) (*DeleteTagsOutput, error)
+	DeleteTagsWithContext(volcengine.Context, *DeleteTagsInput, ...request.Option) (*DeleteTagsOutput, error)
+	DeleteTagsRequest(*DeleteTagsInput) (*request.Request, *DeleteTagsOutput)
+
+	DescribeAvailableResourceCommon(*map[string]interface{}) (*map[string]interface{}, error)
+	DescribeAvailableResourceCommonWithContext(volcengine.Context, *map[string]interface{}, ...request.Option) (*map[string]interface{}, error)
+	DescribeAvailableResourceCommonRequest(*map[string]interface{}) (*request.Request, *map[string]interface{})
+
+	DescribeAvailableResource(*DescribeAvailableResourceInput) (*DescribeAvailableResourceOutput, error)
+	DescribeAvailableResourceWithContext(volcengine.Context, *DescribeAvailableResourceInput, ...request.Option) (*DescribeAvailableResourceOutput, error)
+	DescribeAvailableResourceRequest(*DescribeAvailableResourceInput) (*request.Request, *DescribeAvailableResourceOutput)
+
+	DescribeDeploymentSetSupportedInstanceTypeFamilyCommon(*map[string]interface{}) (*map[string]interface{}, error)
+	DescribeDeploymentSetSupportedInstanceTypeFamilyCommonWithContext(volcengine.Context, *map[string]interface{}, ...request.Option) (*map[string]interface{}, error)
+	DescribeDeploymentSetSupportedInstanceTypeFamilyCommonRequest(*map[string]interface{}) (*request.Request, *map[string]interface{})
+
+	DescribeDeploymentSetSupportedInstanceTypeFamily(*DescribeDeploymentSetSupportedInstanceTypeFamilyInput) (*DescribeDeploymentSetSupportedInstanceTypeFamilyOutput, error)
+	DescribeDeploymentSetSupportedInstanceTypeFamilyWithContext(volcengine.Context, *DescribeDeploymentSetSupportedInstanceTypeFamilyInput, ...request.Option) (*DescribeDeploymentSetSupportedInstanceTypeFamilyOutput, error)
+	DescribeDeploymentSetSupportedInstanceTypeFamilyRequest(*DescribeDeploymentSetSupportedInstanceTypeFamilyInput) (*request.Request, *DescribeDeploymentSetSupportedInstanceTypeFamilyOutput)
+
+	DescribeDeploymentSetsCommon(*map[string]interface{}) (*map[string]interface{}, error)
+	DescribeDeploymentSetsCommonWithContext(volcengine.Context, *map[string]interface{}, ...request.Option) (*map[string]interface{}, error)
+	DescribeDeploymentSetsCommonRequest(*map[string]interface{}) (*request.Request, *map[string]interface{})
+
+	DescribeDeploymentSets(*DescribeDeploymentSetsInput) (*DescribeDeploymentSetsOutput, error)
+	DescribeDeploymentSetsWithContext(volcengine.Context, *DescribeDeploymentSetsInput, ...request.Option) (*DescribeDeploymentSetsOutput, error)
+	DescribeDeploymentSetsRequest(*DescribeDeploymentSetsInput) (*request.Request, *DescribeDeploymentSetsOutput)
+
+	DescribeImageSharePermissionCommon(*map[string]interface{}) (*map[string]interface{}, error)
+	DescribeImageSharePermissionCommonWithContext(volcengine.Context, *map[string]interface{}, ...request.Option) (*map[string]interface{}, error)
+	DescribeImageSharePermissionCommonRequest(*map[string]interface{}) (*request.Request, *map[string]interface{})
+
+	DescribeImageSharePermission(*DescribeImageSharePermissionInput) (*DescribeImageSharePermissionOutput, error)
+	DescribeImageSharePermissionWithContext(volcengine.Context, *DescribeImageSharePermissionInput, ...request.Option) (*DescribeImageSharePermissionOutput, error)
+	DescribeImageSharePermissionRequest(*DescribeImageSharePermissionInput) (*request.Request, *DescribeImageSharePermissionOutput)
+
+	DescribeImagesCommon(*map[string]interface{}) (*map[string]interface{}, error)
+	DescribeImagesCommonWithContext(volcengine.Context, *map[string]interface{}, ...request.Option) (*map[string]interface{}, error)
+	DescribeImagesCommonRequest(*map[string]interface{}) (*request.Request, *map[string]interface{})
+
+	DescribeImages(*DescribeImagesInput) (*DescribeImagesOutput, error)
+	DescribeImagesWithContext(volcengine.Context, *DescribeImagesInput, ...request.Option) (*DescribeImagesOutput, error)
+	DescribeImagesRequest(*DescribeImagesInput) (*request.Request, *DescribeImagesOutput)
+
+	DescribeInstanceECSTerminalUrlCommon(*map[string]interface{}) (*map[string]interface{}, error)
+	DescribeInstanceECSTerminalUrlCommonWithContext(volcengine.Context, *map[string]interface{}, ...request.Option) (*map[string]interface{}, error)
+	DescribeInstanceECSTerminalUrlCommonRequest(*map[string]interface{}) (*request.Request, *map[string]interface{})
+
+	DescribeInstanceECSTerminalUrl(*DescribeInstanceECSTerminalUrlInput) (*DescribeInstanceECSTerminalUrlOutput, error)
+	DescribeInstanceECSTerminalUrlWithContext(volcengine.Context, *DescribeInstanceECSTerminalUrlInput, ...request.Option) (*DescribeInstanceECSTerminalUrlOutput, error)
+	DescribeInstanceECSTerminalUrlRequest(*DescribeInstanceECSTerminalUrlInput) (*request.Request, *DescribeInstanceECSTerminalUrlOutput)
+
+	DescribeInstanceTypeFamiliesCommon(*map[string]interface{}) (*map[string]interface{}, error)
+	DescribeInstanceTypeFamiliesCommonWithContext(volcengine.Context, *map[string]interface{}, ...request.Option) (*map[string]interface{}, error)
+	DescribeInstanceTypeFamiliesCommonRequest(*map[string]interface{}) (*request.Request, *map[string]interface{})
+
+	DescribeInstanceTypeFamilies(*DescribeInstanceTypeFamiliesInput) (*DescribeInstanceTypeFamiliesOutput, error)
+	DescribeInstanceTypeFamiliesWithContext(volcengine.Context, *DescribeInstanceTypeFamiliesInput, ...request.Option) (*DescribeInstanceTypeFamiliesOutput, error)
+	DescribeInstanceTypeFamiliesRequest(*DescribeInstanceTypeFamiliesInput) (*request.Request, *DescribeInstanceTypeFamiliesOutput)
+
+	DescribeInstanceTypesCommon(*map[string]interface{}) (*map[string]interface{}, error)
+	DescribeInstanceTypesCommonWithContext(volcengine.Context, *map[string]interface{}, ...request.Option) (*map[string]interface{}, error)
+	DescribeInstanceTypesCommonRequest(*map[string]interface{}) (*request.Request, *map[string]interface{})
+
+	DescribeInstanceTypes(*DescribeInstanceTypesInput) (*DescribeInstanceTypesOutput, error)
+	DescribeInstanceTypesWithContext(volcengine.Context, *DescribeInstanceTypesInput, ...request.Option) (*DescribeInstanceTypesOutput, error)
+	DescribeInstanceTypesRequest(*DescribeInstanceTypesInput) (*request.Request, *DescribeInstanceTypesOutput)
+
+	DescribeInstanceVncUrlCommon(*map[string]interface{}) (*map[string]interface{}, error)
+	DescribeInstanceVncUrlCommonWithContext(volcengine.Context, *map[string]interface{}, ...request.Option) (*map[string]interface{}, error)
+	DescribeInstanceVncUrlCommonRequest(*map[string]interface{}) (*request.Request, *map[string]interface{})
+
+	DescribeInstanceVncUrl(*DescribeInstanceVncUrlInput) (*DescribeInstanceVncUrlOutput, error)
+	DescribeInstanceVncUrlWithContext(volcengine.Context, *DescribeInstanceVncUrlInput, ...request.Option) (*DescribeInstanceVncUrlOutput, error)
+	DescribeInstanceVncUrlRequest(*DescribeInstanceVncUrlInput) (*request.Request, *DescribeInstanceVncUrlOutput)
+
+	DescribeInstancesCommon(*map[string]interface{}) (*map[string]interface{}, error)
+	DescribeInstancesCommonWithContext(volcengine.Context, *map[string]interface{}, ...request.Option) (*map[string]interface{}, error)
+	DescribeInstancesCommonRequest(*map[string]interface{}) (*request.Request, *map[string]interface{})
+
+	DescribeInstances(*DescribeInstancesInput) (*DescribeInstancesOutput, error)
+	DescribeInstancesWithContext(volcengine.Context, *DescribeInstancesInput, ...request.Option) (*DescribeInstancesOutput, error)
+	DescribeInstancesRequest(*DescribeInstancesInput) (*request.Request, *DescribeInstancesOutput)
+
+	DescribeInstancesIamRolesCommon(*map[string]interface{}) (*map[string]interface{}, error)
+	DescribeInstancesIamRolesCommonWithContext(volcengine.Context, *map[string]interface{}, ...request.Option) (*map[string]interface{}, error)
+	DescribeInstancesIamRolesCommonRequest(*map[string]interface{}) (*request.Request, *map[string]interface{})
+
+	DescribeInstancesIamRoles(*DescribeInstancesIamRolesInput) (*DescribeInstancesIamRolesOutput, error)
+	DescribeInstancesIamRolesWithContext(volcengine.Context, *DescribeInstancesIamRolesInput, ...request.Option) (*DescribeInstancesIamRolesOutput, error)
+	DescribeInstancesIamRolesRequest(*DescribeInstancesIamRolesInput) (*request.Request, *DescribeInstancesIamRolesOutput)
+
+	DescribeKeyPairsCommon(*map[string]interface{}) (*map[string]interface{}, error)
+	DescribeKeyPairsCommonWithContext(volcengine.Context, *map[string]interface{}, ...request.Option) (*map[string]interface{}, error)
+	DescribeKeyPairsCommonRequest(*map[string]interface{}) (*request.Request, *map[string]interface{})
+
+	DescribeKeyPairs(*DescribeKeyPairsInput) (*DescribeKeyPairsOutput, error)
+	DescribeKeyPairsWithContext(volcengine.Context, *DescribeKeyPairsInput, ...request.Option) (*DescribeKeyPairsOutput, error)
+	DescribeKeyPairsRequest(*DescribeKeyPairsInput) (*request.Request, *DescribeKeyPairsOutput)
+
+	DescribeSystemEventsCommon(*map[string]interface{}) (*map[string]interface{}, error)
+	DescribeSystemEventsCommonWithContext(volcengine.Context, *map[string]interface{}, ...request.Option) (*map[string]interface{}, error)
+	DescribeSystemEventsCommonRequest(*map[string]interface{}) (*request.Request, *map[string]interface{})
+
+	DescribeSystemEvents(*DescribeSystemEventsInput) (*DescribeSystemEventsOutput, error)
+	DescribeSystemEventsWithContext(volcengine.Context, *DescribeSystemEventsInput, ...request.Option) (*DescribeSystemEventsOutput, error)
+	DescribeSystemEventsRequest(*DescribeSystemEventsInput) (*request.Request, *DescribeSystemEventsOutput)
+
+	DescribeTagsCommon(*map[string]interface{}) (*map[string]interface{}, error)
+	DescribeTagsCommonWithContext(volcengine.Context, *map[string]interface{}, ...request.Option) (*map[string]interface{}, error)
+	DescribeTagsCommonRequest(*map[string]interface{}) (*request.Request, *map[string]interface{})
+
+	DescribeTags(*DescribeTagsInput) (*DescribeTagsOutput, error)
+	DescribeTagsWithContext(volcengine.Context, *DescribeTagsInput, ...request.Option) (*DescribeTagsOutput, error)
+	DescribeTagsRequest(*DescribeTagsInput) (*request.Request,
*DescribeTagsOutput) + + DescribeTasksCommon(*map[string]interface{}) (*map[string]interface{}, error) + DescribeTasksCommonWithContext(volcengine.Context, *map[string]interface{}, ...request.Option) (*map[string]interface{}, error) + DescribeTasksCommonRequest(*map[string]interface{}) (*request.Request, *map[string]interface{}) + + DescribeTasks(*DescribeTasksInput) (*DescribeTasksOutput, error) + DescribeTasksWithContext(volcengine.Context, *DescribeTasksInput, ...request.Option) (*DescribeTasksOutput, error) + DescribeTasksRequest(*DescribeTasksInput) (*request.Request, *DescribeTasksOutput) + + DescribeUserDataCommon(*map[string]interface{}) (*map[string]interface{}, error) + DescribeUserDataCommonWithContext(volcengine.Context, *map[string]interface{}, ...request.Option) (*map[string]interface{}, error) + DescribeUserDataCommonRequest(*map[string]interface{}) (*request.Request, *map[string]interface{}) + + DescribeUserData(*DescribeUserDataInput) (*DescribeUserDataOutput, error) + DescribeUserDataWithContext(volcengine.Context, *DescribeUserDataInput, ...request.Option) (*DescribeUserDataOutput, error) + DescribeUserDataRequest(*DescribeUserDataInput) (*request.Request, *DescribeUserDataOutput) + + DescribeZonesCommon(*map[string]interface{}) (*map[string]interface{}, error) + DescribeZonesCommonWithContext(volcengine.Context, *map[string]interface{}, ...request.Option) (*map[string]interface{}, error) + DescribeZonesCommonRequest(*map[string]interface{}) (*request.Request, *map[string]interface{}) + + DescribeZones(*DescribeZonesInput) (*DescribeZonesOutput, error) + DescribeZonesWithContext(volcengine.Context, *DescribeZonesInput, ...request.Option) (*DescribeZonesOutput, error) + DescribeZonesRequest(*DescribeZonesInput) (*request.Request, *DescribeZonesOutput) + + DetachKeyPairCommon(*map[string]interface{}) (*map[string]interface{}, error) + DetachKeyPairCommonWithContext(volcengine.Context, *map[string]interface{}, ...request.Option) 
(*map[string]interface{}, error) + DetachKeyPairCommonRequest(*map[string]interface{}) (*request.Request, *map[string]interface{}) + + DetachKeyPair(*DetachKeyPairInput) (*DetachKeyPairOutput, error) + DetachKeyPairWithContext(volcengine.Context, *DetachKeyPairInput, ...request.Option) (*DetachKeyPairOutput, error) + DetachKeyPairRequest(*DetachKeyPairInput) (*request.Request, *DetachKeyPairOutput) + + DisassociateInstancesIamRoleCommon(*map[string]interface{}) (*map[string]interface{}, error) + DisassociateInstancesIamRoleCommonWithContext(volcengine.Context, *map[string]interface{}, ...request.Option) (*map[string]interface{}, error) + DisassociateInstancesIamRoleCommonRequest(*map[string]interface{}) (*request.Request, *map[string]interface{}) + + DisassociateInstancesIamRole(*DisassociateInstancesIamRoleInput) (*DisassociateInstancesIamRoleOutput, error) + DisassociateInstancesIamRoleWithContext(volcengine.Context, *DisassociateInstancesIamRoleInput, ...request.Option) (*DisassociateInstancesIamRoleOutput, error) + DisassociateInstancesIamRoleRequest(*DisassociateInstancesIamRoleInput) (*request.Request, *DisassociateInstancesIamRoleOutput) + + ExportImageCommon(*map[string]interface{}) (*map[string]interface{}, error) + ExportImageCommonWithContext(volcengine.Context, *map[string]interface{}, ...request.Option) (*map[string]interface{}, error) + ExportImageCommonRequest(*map[string]interface{}) (*request.Request, *map[string]interface{}) + + ExportImage(*ExportImageInput) (*ExportImageOutput, error) + ExportImageWithContext(volcengine.Context, *ExportImageInput, ...request.Option) (*ExportImageOutput, error) + ExportImageRequest(*ExportImageInput) (*request.Request, *ExportImageOutput) + + ImportImageCommon(*map[string]interface{}) (*map[string]interface{}, error) + ImportImageCommonWithContext(volcengine.Context, *map[string]interface{}, ...request.Option) (*map[string]interface{}, error) + ImportImageCommonRequest(*map[string]interface{}) (*request.Request, 
*map[string]interface{}) + + ImportImage(*ImportImageInput) (*ImportImageOutput, error) + ImportImageWithContext(volcengine.Context, *ImportImageInput, ...request.Option) (*ImportImageOutput, error) + ImportImageRequest(*ImportImageInput) (*request.Request, *ImportImageOutput) + + ImportKeyPairCommon(*map[string]interface{}) (*map[string]interface{}, error) + ImportKeyPairCommonWithContext(volcengine.Context, *map[string]interface{}, ...request.Option) (*map[string]interface{}, error) + ImportKeyPairCommonRequest(*map[string]interface{}) (*request.Request, *map[string]interface{}) + + ImportKeyPair(*ImportKeyPairInput) (*ImportKeyPairOutput, error) + ImportKeyPairWithContext(volcengine.Context, *ImportKeyPairInput, ...request.Option) (*ImportKeyPairOutput, error) + ImportKeyPairRequest(*ImportKeyPairInput) (*request.Request, *ImportKeyPairOutput) + + ModifyDeploymentSetAttributeCommon(*map[string]interface{}) (*map[string]interface{}, error) + ModifyDeploymentSetAttributeCommonWithContext(volcengine.Context, *map[string]interface{}, ...request.Option) (*map[string]interface{}, error) + ModifyDeploymentSetAttributeCommonRequest(*map[string]interface{}) (*request.Request, *map[string]interface{}) + + ModifyDeploymentSetAttribute(*ModifyDeploymentSetAttributeInput) (*ModifyDeploymentSetAttributeOutput, error) + ModifyDeploymentSetAttributeWithContext(volcengine.Context, *ModifyDeploymentSetAttributeInput, ...request.Option) (*ModifyDeploymentSetAttributeOutput, error) + ModifyDeploymentSetAttributeRequest(*ModifyDeploymentSetAttributeInput) (*request.Request, *ModifyDeploymentSetAttributeOutput) + + ModifyImageAttributeCommon(*map[string]interface{}) (*map[string]interface{}, error) + ModifyImageAttributeCommonWithContext(volcengine.Context, *map[string]interface{}, ...request.Option) (*map[string]interface{}, error) + ModifyImageAttributeCommonRequest(*map[string]interface{}) (*request.Request, *map[string]interface{}) + + 
ModifyImageAttribute(*ModifyImageAttributeInput) (*ModifyImageAttributeOutput, error) + ModifyImageAttributeWithContext(volcengine.Context, *ModifyImageAttributeInput, ...request.Option) (*ModifyImageAttributeOutput, error) + ModifyImageAttributeRequest(*ModifyImageAttributeInput) (*request.Request, *ModifyImageAttributeOutput) + + ModifyImageSharePermissionCommon(*map[string]interface{}) (*map[string]interface{}, error) + ModifyImageSharePermissionCommonWithContext(volcengine.Context, *map[string]interface{}, ...request.Option) (*map[string]interface{}, error) + ModifyImageSharePermissionCommonRequest(*map[string]interface{}) (*request.Request, *map[string]interface{}) + + ModifyImageSharePermission(*ModifyImageSharePermissionInput) (*ModifyImageSharePermissionOutput, error) + ModifyImageSharePermissionWithContext(volcengine.Context, *ModifyImageSharePermissionInput, ...request.Option) (*ModifyImageSharePermissionOutput, error) + ModifyImageSharePermissionRequest(*ModifyImageSharePermissionInput) (*request.Request, *ModifyImageSharePermissionOutput) + + ModifyInstanceAttributeCommon(*map[string]interface{}) (*map[string]interface{}, error) + ModifyInstanceAttributeCommonWithContext(volcengine.Context, *map[string]interface{}, ...request.Option) (*map[string]interface{}, error) + ModifyInstanceAttributeCommonRequest(*map[string]interface{}) (*request.Request, *map[string]interface{}) + + ModifyInstanceAttribute(*ModifyInstanceAttributeInput) (*ModifyInstanceAttributeOutput, error) + ModifyInstanceAttributeWithContext(volcengine.Context, *ModifyInstanceAttributeInput, ...request.Option) (*ModifyInstanceAttributeOutput, error) + ModifyInstanceAttributeRequest(*ModifyInstanceAttributeInput) (*request.Request, *ModifyInstanceAttributeOutput) + + ModifyInstanceChargeTypeCommon(*map[string]interface{}) (*map[string]interface{}, error) + ModifyInstanceChargeTypeCommonWithContext(volcengine.Context, *map[string]interface{}, ...request.Option) (*map[string]interface{}, 
error) + ModifyInstanceChargeTypeCommonRequest(*map[string]interface{}) (*request.Request, *map[string]interface{}) + + ModifyInstanceChargeType(*ModifyInstanceChargeTypeInput) (*ModifyInstanceChargeTypeOutput, error) + ModifyInstanceChargeTypeWithContext(volcengine.Context, *ModifyInstanceChargeTypeInput, ...request.Option) (*ModifyInstanceChargeTypeOutput, error) + ModifyInstanceChargeTypeRequest(*ModifyInstanceChargeTypeInput) (*request.Request, *ModifyInstanceChargeTypeOutput) + + ModifyInstanceDeploymentCommon(*map[string]interface{}) (*map[string]interface{}, error) + ModifyInstanceDeploymentCommonWithContext(volcengine.Context, *map[string]interface{}, ...request.Option) (*map[string]interface{}, error) + ModifyInstanceDeploymentCommonRequest(*map[string]interface{}) (*request.Request, *map[string]interface{}) + + ModifyInstanceDeployment(*ModifyInstanceDeploymentInput) (*ModifyInstanceDeploymentOutput, error) + ModifyInstanceDeploymentWithContext(volcengine.Context, *ModifyInstanceDeploymentInput, ...request.Option) (*ModifyInstanceDeploymentOutput, error) + ModifyInstanceDeploymentRequest(*ModifyInstanceDeploymentInput) (*request.Request, *ModifyInstanceDeploymentOutput) + + ModifyInstanceSpecCommon(*map[string]interface{}) (*map[string]interface{}, error) + ModifyInstanceSpecCommonWithContext(volcengine.Context, *map[string]interface{}, ...request.Option) (*map[string]interface{}, error) + ModifyInstanceSpecCommonRequest(*map[string]interface{}) (*request.Request, *map[string]interface{}) + + ModifyInstanceSpec(*ModifyInstanceSpecInput) (*ModifyInstanceSpecOutput, error) + ModifyInstanceSpecWithContext(volcengine.Context, *ModifyInstanceSpecInput, ...request.Option) (*ModifyInstanceSpecOutput, error) + ModifyInstanceSpecRequest(*ModifyInstanceSpecInput) (*request.Request, *ModifyInstanceSpecOutput) + + ModifyKeyPairAttributeCommon(*map[string]interface{}) (*map[string]interface{}, error) + ModifyKeyPairAttributeCommonWithContext(volcengine.Context, 
*map[string]interface{}, ...request.Option) (*map[string]interface{}, error) + ModifyKeyPairAttributeCommonRequest(*map[string]interface{}) (*request.Request, *map[string]interface{}) + + ModifyKeyPairAttribute(*ModifyKeyPairAttributeInput) (*ModifyKeyPairAttributeOutput, error) + ModifyKeyPairAttributeWithContext(volcengine.Context, *ModifyKeyPairAttributeInput, ...request.Option) (*ModifyKeyPairAttributeOutput, error) + ModifyKeyPairAttributeRequest(*ModifyKeyPairAttributeInput) (*request.Request, *ModifyKeyPairAttributeOutput) + + RebootInstanceCommon(*map[string]interface{}) (*map[string]interface{}, error) + RebootInstanceCommonWithContext(volcengine.Context, *map[string]interface{}, ...request.Option) (*map[string]interface{}, error) + RebootInstanceCommonRequest(*map[string]interface{}) (*request.Request, *map[string]interface{}) + + RebootInstance(*RebootInstanceInput) (*RebootInstanceOutput, error) + RebootInstanceWithContext(volcengine.Context, *RebootInstanceInput, ...request.Option) (*RebootInstanceOutput, error) + RebootInstanceRequest(*RebootInstanceInput) (*request.Request, *RebootInstanceOutput) + + RebootInstancesCommon(*map[string]interface{}) (*map[string]interface{}, error) + RebootInstancesCommonWithContext(volcengine.Context, *map[string]interface{}, ...request.Option) (*map[string]interface{}, error) + RebootInstancesCommonRequest(*map[string]interface{}) (*request.Request, *map[string]interface{}) + + RebootInstances(*RebootInstancesInput) (*RebootInstancesOutput, error) + RebootInstancesWithContext(volcengine.Context, *RebootInstancesInput, ...request.Option) (*RebootInstancesOutput, error) + RebootInstancesRequest(*RebootInstancesInput) (*request.Request, *RebootInstancesOutput) + + RenewInstanceCommon(*map[string]interface{}) (*map[string]interface{}, error) + RenewInstanceCommonWithContext(volcengine.Context, *map[string]interface{}, ...request.Option) (*map[string]interface{}, error) + RenewInstanceCommonRequest(*map[string]interface{}) 
(*request.Request, *map[string]interface{}) + + RenewInstance(*RenewInstanceInput) (*RenewInstanceOutput, error) + RenewInstanceWithContext(volcengine.Context, *RenewInstanceInput, ...request.Option) (*RenewInstanceOutput, error) + RenewInstanceRequest(*RenewInstanceInput) (*request.Request, *RenewInstanceOutput) + + ReplaceSystemVolumeCommon(*map[string]interface{}) (*map[string]interface{}, error) + ReplaceSystemVolumeCommonWithContext(volcengine.Context, *map[string]interface{}, ...request.Option) (*map[string]interface{}, error) + ReplaceSystemVolumeCommonRequest(*map[string]interface{}) (*request.Request, *map[string]interface{}) + + ReplaceSystemVolume(*ReplaceSystemVolumeInput) (*ReplaceSystemVolumeOutput, error) + ReplaceSystemVolumeWithContext(volcengine.Context, *ReplaceSystemVolumeInput, ...request.Option) (*ReplaceSystemVolumeOutput, error) + ReplaceSystemVolumeRequest(*ReplaceSystemVolumeInput) (*request.Request, *ReplaceSystemVolumeOutput) + + RunInstancesCommon(*map[string]interface{}) (*map[string]interface{}, error) + RunInstancesCommonWithContext(volcengine.Context, *map[string]interface{}, ...request.Option) (*map[string]interface{}, error) + RunInstancesCommonRequest(*map[string]interface{}) (*request.Request, *map[string]interface{}) + + RunInstances(*RunInstancesInput) (*RunInstancesOutput, error) + RunInstancesWithContext(volcengine.Context, *RunInstancesInput, ...request.Option) (*RunInstancesOutput, error) + RunInstancesRequest(*RunInstancesInput) (*request.Request, *RunInstancesOutput) + + StartInstanceCommon(*map[string]interface{}) (*map[string]interface{}, error) + StartInstanceCommonWithContext(volcengine.Context, *map[string]interface{}, ...request.Option) (*map[string]interface{}, error) + StartInstanceCommonRequest(*map[string]interface{}) (*request.Request, *map[string]interface{}) + + StartInstance(*StartInstanceInput) (*StartInstanceOutput, error) + StartInstanceWithContext(volcengine.Context, *StartInstanceInput, 
...request.Option) (*StartInstanceOutput, error) + StartInstanceRequest(*StartInstanceInput) (*request.Request, *StartInstanceOutput) + + StartInstancesCommon(*map[string]interface{}) (*map[string]interface{}, error) + StartInstancesCommonWithContext(volcengine.Context, *map[string]interface{}, ...request.Option) (*map[string]interface{}, error) + StartInstancesCommonRequest(*map[string]interface{}) (*request.Request, *map[string]interface{}) + + StartInstances(*StartInstancesInput) (*StartInstancesOutput, error) + StartInstancesWithContext(volcengine.Context, *StartInstancesInput, ...request.Option) (*StartInstancesOutput, error) + StartInstancesRequest(*StartInstancesInput) (*request.Request, *StartInstancesOutput) + + StopInstanceCommon(*map[string]interface{}) (*map[string]interface{}, error) + StopInstanceCommonWithContext(volcengine.Context, *map[string]interface{}, ...request.Option) (*map[string]interface{}, error) + StopInstanceCommonRequest(*map[string]interface{}) (*request.Request, *map[string]interface{}) + + StopInstance(*StopInstanceInput) (*StopInstanceOutput, error) + StopInstanceWithContext(volcengine.Context, *StopInstanceInput, ...request.Option) (*StopInstanceOutput, error) + StopInstanceRequest(*StopInstanceInput) (*request.Request, *StopInstanceOutput) + + StopInstancesCommon(*map[string]interface{}) (*map[string]interface{}, error) + StopInstancesCommonWithContext(volcengine.Context, *map[string]interface{}, ...request.Option) (*map[string]interface{}, error) + StopInstancesCommonRequest(*map[string]interface{}) (*request.Request, *map[string]interface{}) + + StopInstances(*StopInstancesInput) (*StopInstancesOutput, error) + StopInstancesWithContext(volcengine.Context, *StopInstancesInput, ...request.Option) (*StopInstancesOutput, error) + StopInstancesRequest(*StopInstancesInput) (*request.Request, *StopInstancesOutput) + + UpdateSystemEventsCommon(*map[string]interface{}) (*map[string]interface{}, error) + 
UpdateSystemEventsCommonWithContext(volcengine.Context, *map[string]interface{}, ...request.Option) (*map[string]interface{}, error) + UpdateSystemEventsCommonRequest(*map[string]interface{}) (*request.Request, *map[string]interface{}) + + UpdateSystemEvents(*UpdateSystemEventsInput) (*UpdateSystemEventsOutput, error) + UpdateSystemEventsWithContext(volcengine.Context, *UpdateSystemEventsInput, ...request.Option) (*UpdateSystemEventsOutput, error) + UpdateSystemEventsRequest(*UpdateSystemEventsInput) (*request.Request, *UpdateSystemEventsOutput) +} + +var _ ECSAPI = (*ECS)(nil) diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/service_ecs.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/service_ecs.go new file mode 100644 index 000000000000..a363fca06a42 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs/service_ecs.go @@ -0,0 +1,105 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +// Code generated by volcengine with private/model/cli/gen-api/main.go. DO NOT EDIT. 
+ +package ecs + +import ( + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/client" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/client/metadata" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/corehandlers" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/signer/volc" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcenginequery" +) + +// ECS provides the API operation methods for making requests to +// ECS. See this package's package overview docs +// for details on the service. +// +// ECS methods are safe to use concurrently. It is not safe to +// mutate any of the struct's properties though. +type ECS struct { + *client.Client +} + +// Used for custom client initialization logic +var initClient func(*client.Client) + +// Used for custom request initialization logic +var initRequest func(*request.Request) + +// Service information constants +const ( + ServiceName = "ecs" // Name of service. + EndpointsID = ServiceName // ID to lookup a service endpoint with. + ServiceID = "ecs" // ServiceID is a unique identifier of a specific service. +) + +// New creates a new ECS service client; SSL and the signing region can be +// configured via the optional configs. +func New(p client.ConfigProvider, cfgs ...*volcengine.Config) *ECS { + c := p.ClientConfig(EndpointsID, cfgs...) + return newClient(*c.Config, c.Handlers, c.Endpoint, c.SigningRegion, c.SigningName) +} + +// newClient creates, initializes and returns a new service client instance.
+func newClient(cfg volcengine.Config, handlers request.Handlers, endpoint, signingRegion, signingName string) *ECS { + svc := &ECS{ + Client: client.New( + cfg, + metadata.ClientInfo{ + ServiceName: ServiceName, + ServiceID: ServiceID, + SigningName: signingName, + SigningRegion: signingRegion, + Endpoint: endpoint, + APIVersion: "2020-04-01", + }, + handlers, + ), + } + + // Handlers + svc.Handlers.Build.PushBackNamed(corehandlers.SDKVersionUserAgentHandler) + svc.Handlers.Build.PushBackNamed(corehandlers.AddHostExecEnvUserAgentHandler) + svc.Handlers.Sign.PushBackNamed(volc.SignRequestHandler) + svc.Handlers.Build.PushBackNamed(volcenginequery.BuildHandler) + svc.Handlers.Unmarshal.PushBackNamed(volcenginequery.UnmarshalHandler) + svc.Handlers.UnmarshalMeta.PushBackNamed(volcenginequery.UnmarshalMetaHandler) + svc.Handlers.UnmarshalError.PushBackNamed(volcenginequery.UnmarshalErrorHandler) + + // Run custom client initialization if present + if initClient != nil { + initClient(svc.Client) + } + + return svc +} + +// newRequest creates a new request for an ECS operation and runs any +// custom request initialization. +func (c *ECS) newRequest(op *request.Operation, params, data interface{}) *request.Request { + req := c.NewRequest(op, params, data) + + // Run custom request initialization if present + if initRequest != nil { + initRequest(req) + } + + return req +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/client/client.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/client/client.go new file mode 100644 index 000000000000..2d4d8128ed85 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/client/client.go @@ -0,0 +1,122 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License.
+You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package client + +// Copy from https://github.com/aws/aws-sdk-go +// May have been modified by Beijing Volcanoengine Technology Ltd. + +import ( + "fmt" + + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/client/metadata" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request" +) + +// A Config provides configuration to a service client instance. +type Config struct { + Config *volcengine.Config + Handlers request.Handlers + Endpoint string + SigningRegion string + SigningName string + + // States that the signing name did not come from a modeled source but + // was derived based on other data. Used by service client constructors + // to determine if the signing name can be overridden based on metadata the + // service has. + SigningNameDerived bool +} + +// ConfigProvider provides a generic way for a service client to receive +// the ClientConfig without circular dependencies. +type ConfigProvider interface { + ClientConfig(serviceName string, cfgs ...*volcengine.Config) Config +} + +// ConfigNoResolveEndpointProvider is the same as ConfigProvider except it will not +// resolve the endpoint automatically. The service client's endpoint must be +// provided via the volcengine.Config.Endpoint field.
+type ConfigNoResolveEndpointProvider interface { + ClientConfigNoResolveEndpoint(cfgs ...*volcengine.Config) Config +} + +// A Client implements the base client request and response handling +// used by all service clients. +type Client struct { + request.Retryer + metadata.ClientInfo + + Config volcengine.Config + Handlers request.Handlers +} + +// New will return a pointer to a new initialized service client. +func New(cfg volcengine.Config, info metadata.ClientInfo, handlers request.Handlers, options ...func(*Client)) *Client { + svc := &Client{ + Config: cfg, + ClientInfo: info, + Handlers: handlers.Copy(), + } + + switch retryer, ok := cfg.Retryer.(request.Retryer); { + case ok: + svc.Retryer = retryer + case cfg.Retryer != nil && cfg.Logger != nil: + s := fmt.Sprintf("WARNING: %T does not implement request.Retryer; using DefaultRetryer instead", cfg.Retryer) + cfg.Logger.Log(s) + fallthrough + default: + maxRetries := volcengine.IntValue(cfg.MaxRetries) + if cfg.MaxRetries == nil || maxRetries == volcengine.UseServiceDefaultRetries { + maxRetries = DefaultRetryerMaxNumRetries + } + svc.Retryer = DefaultRetryer{NumMaxRetries: maxRetries} + } + + svc.AddDebugHandlers() + + for _, option := range options { + option(svc) + } + + return svc +} + +// NewRequest returns a new Request pointer for the service API +// operation and parameters. +func (c *Client) NewRequest(operation *request.Operation, params interface{}, data interface{}) *request.Request { + return request.New(c.Config, c.ClientInfo, c.Handlers, c.Retryer, operation, params, data) +} + +// AddDebugHandlers injects debug logging handlers into the service to log request +// debug information. 
+func (c *Client) AddDebugHandlers() { + if !c.Config.LogLevel.AtLeast(volcengine.LogDebug) { + return + } + if c.Config.LogLevel.Matches(volcengine.LogInfoWithInputAndOutput) || + c.Config.LogLevel.Matches(volcengine.LogDebugWithInputAndOutput) { + c.Handlers.Send.PushFrontNamed(LogInputHandler) + c.Handlers.Complete.PushBackNamed(LogOutHandler) + return + } + + c.Handlers.Send.PushFrontNamed(LogHTTPRequestHandler) + c.Handlers.Send.PushBackNamed(LogHTTPResponseHandler) + +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/client/default_retryer.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/client/default_retryer.go new file mode 100644 index 000000000000..b4fa9e973a7d --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/client/default_retryer.go @@ -0,0 +1,195 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package client + +// Copy from https://github.com/aws/aws-sdk-go +// May have been modified by Beijing Volcanoengine Technology Ltd. + +import ( + "math" + "strconv" + "time" + + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/internal/sdkrand" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request" +) + +// DefaultRetryer implements basic retry logic using exponential backoff for +// most services. 
If you want to implement custom retry logic, you can implement the +// request.Retryer interface. +type DefaultRetryer struct { + // NumMaxRetries is the maximum number of retries that will be performed. + // By default, this is zero. + NumMaxRetries int + + // MinRetryDelay is the minimum retry delay after which retry will be performed. + // If not set, the value is 0ns. + MinRetryDelay time.Duration + + // MinThrottleDelay is the minimum retry delay when throttled. + // If not set, the value is 0ns. + MinThrottleDelay time.Duration + + // MaxRetryDelay is the maximum delay before a retry is performed. + // If not set, the value is 0ns. + MaxRetryDelay time.Duration + + // MaxThrottleDelay is the maximum retry delay when throttled. + // If not set, the value is 0ns. + MaxThrottleDelay time.Duration +} + +const ( + // DefaultRetryerMaxNumRetries sets maximum number of retries + DefaultRetryerMaxNumRetries = 3 + + // DefaultRetryerMinRetryDelay sets minimum retry delay + DefaultRetryerMinRetryDelay = 30 * time.Millisecond + + // DefaultRetryerMinThrottleDelay sets minimum delay when throttled + DefaultRetryerMinThrottleDelay = 500 * time.Millisecond + + // DefaultRetryerMaxRetryDelay sets maximum retry delay + DefaultRetryerMaxRetryDelay = 300 * time.Second + + // DefaultRetryerMaxThrottleDelay sets maximum delay when throttled + DefaultRetryerMaxThrottleDelay = 300 * time.Second +) + +// MaxRetries returns the maximum number of retries the service will make for +// an individual API request.
+func (d DefaultRetryer) MaxRetries() int { + return d.NumMaxRetries +} + +// setRetryerDefaults sets the default values of the retryer if not set +func (d *DefaultRetryer) setRetryerDefaults() { + if d.MinRetryDelay == 0 { + d.MinRetryDelay = DefaultRetryerMinRetryDelay + } + if d.MaxRetryDelay == 0 { + d.MaxRetryDelay = DefaultRetryerMaxRetryDelay + } + if d.MinThrottleDelay == 0 { + d.MinThrottleDelay = DefaultRetryerMinThrottleDelay + } + if d.MaxThrottleDelay == 0 { + d.MaxThrottleDelay = DefaultRetryerMaxThrottleDelay + } +} + +// RetryRules returns the delay duration before retrying this request again +func (d DefaultRetryer) RetryRules(r *request.Request) time.Duration { + + // if number of max retries is zero, no retries will be performed. + if d.NumMaxRetries == 0 { + return 0 + } + + // Sets default value for retryer members + d.setRetryerDefaults() + + // minDelay is the minimum retryer delay + minDelay := d.MinRetryDelay + + var initialDelay time.Duration + + isThrottle := r.IsErrorThrottle() + if isThrottle { + if delay, ok := getRetryAfterDelay(r); ok { + initialDelay = delay + } + minDelay = d.MinThrottleDelay + } + + retryCount := r.RetryCount + + // maxDelay is the maximum retryer delay + maxDelay := d.MaxRetryDelay + + if isThrottle { + maxDelay = d.MaxThrottleDelay + } + + var delay time.Duration + + // Logic to cap the retry count based on the minDelay provided + actualRetryCount := int(math.Log2(float64(minDelay))) + 1 + if actualRetryCount < 63-retryCount { + delay = time.Duration(1<<uint64(retryCount)) * getJitterDelay(minDelay) + if delay > maxDelay { + delay = getJitterDelay(maxDelay / 2) + } + } else { + delay = getJitterDelay(maxDelay / 2) + } + return delay + initialDelay +} + +// getJitterDelay returns a jittered delay for retry +func getJitterDelay(duration time.Duration) time.Duration { + return time.Duration(sdkrand.SeededRand.Int63n(int64(duration)) + int64(duration)) +} + +// ShouldRetry returns true if the request should be retried.
+func (d DefaultRetryer) ShouldRetry(r *request.Request) bool { + + // ShouldRetry returns false if number of max retries is 0. + if d.NumMaxRetries == 0 { + return false + } + + // If one of the other handlers already set the retry state + // we don't want to override it based on the service's state + if r.Retryable != nil { + return *r.Retryable + } + return r.IsErrorRetryable() || r.IsErrorThrottle() +} + +// This will look in the Retry-After header, RFC 7231, for how long +// it will wait before attempting another request +func getRetryAfterDelay(r *request.Request) (time.Duration, bool) { + if !canUseRetryAfterHeader(r) { + return 0, false + } + + delayStr := r.HTTPResponse.Header.Get("Retry-After") + if len(delayStr) == 0 { + return 0, false + } + + delay, err := strconv.Atoi(delayStr) + if err != nil { + return 0, false + } + + return time.Duration(delay) * time.Second, true +} + +// Will look at the status code to see if the retry header pertains to +// the status code. +func canUseRetryAfterHeader(r *request.Request) bool { + switch r.HTTPResponse.StatusCode { + case 429: + case 503: + default: + return false + } + + return true +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/client/logger.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/client/logger.go new file mode 100644 index 000000000000..b7d6c39cb2c9 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/client/logger.go @@ -0,0 +1,284 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. 
+You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package client + +// Copy from https://github.com/aws/aws-sdk-go +// May have been modified by Beijing Volcanoengine Technology Ltd. + +import ( + "bytes" + "context" + "fmt" + "io" + "io/ioutil" + "net/http/httputil" + "strings" + + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request" +) + +const logReqMsg = `DEBUG: Request %s/%s Details: +---[ REQUEST POST-SIGN ]----------------------------- +%s +-----------------------------------------------------` + +const logReqErrMsg = `DEBUG ERROR: Request %s/%s: +---[ REQUEST DUMP ERROR ]----------------------------- +%s +------------------------------------------------------` + +type logWriter struct { + // Logger is what we will use to log the payload of a response. + Logger volcengine.Logger + // buf stores the contents of what has been read + buf *bytes.Buffer +} + +func (logger *logWriter) Write(b []byte) (int, error) { + return logger.buf.Write(b) +} + +type teeReaderCloser struct { + // io.Reader will be a tee reader that is used during logging. + // This structure will read from a volcenginebody and write the contents to a logger. + io.Reader + // Source is used just to close when we are done reading. 
+ Source io.ReadCloser +} + +func (reader *teeReaderCloser) Close() error { + return reader.Source.Close() +} + +type LogStruct struct { + Level string + OperationName string + Request interface{} `json:"Request,omitempty"` + Body interface{} `json:"Body,omitempty"` + Response interface{} `json:"Response,omitempty"` + Type string + AccountId string `json:"AccountId,omitempty"` + Context context.Context +} + +func logStructLog(r *request.Request, level string, logStruct LogStruct) { + logStruct.Level = level + if r.IsJsonBody && strings.HasSuffix(logStruct.OperationName, "Request") { + logStruct.Body = r.Params + } + if r.Config.LogAccount != nil { + logStruct.AccountId = *r.Config.LogAccount(r.Context()) + } + //b, _ := json.Marshal(logStruct) + r.Config.Logger.Log(logStruct) +} + +var LogInputHandler = request.NamedHandler{ + Name: "volcenginesdk.client.LogInput", + Fn: logInput, +} + +func logInput(r *request.Request) { + logInfoStruct := r.Config.LogLevel.Matches(volcengine.LogInfoWithInputAndOutput) + logDebugStruct := r.Config.LogLevel.Matches(volcengine.LogDebugWithInputAndOutput) + + logStruct := LogStruct{ + OperationName: r.Operation.Name, + Type: "Request", + Request: r.Input, + Context: r.Context(), + } + + if logInfoStruct { + logStructLog(r, "INFO", logStruct) + } else if logDebugStruct { + logStructLog(r, "DEBUG", logStruct) + } +} + +var LogOutHandler = request.NamedHandler{ + Name: "volcenginesdk.client.LogOutput", + Fn: LogOutput, +} + +func LogOutput(r *request.Request) { + logInfoStruct := r.Config.LogLevel.Matches(volcengine.LogInfoWithInputAndOutput) + logDebugStruct := r.Config.LogLevel.Matches(volcengine.LogDebugWithInputAndOutput) + + logStruct := LogStruct{ + OperationName: r.Operation.Name, + Response: r.Data, + Type: "Response", + Context: r.Context(), + } + + if logInfoStruct { + logStructLog(r, "INFO", logStruct) + } else if logDebugStruct { + logStructLog(r, "DEBUG", logStruct) + } +} + +// LogHTTPRequestHandler is a SDK request 
handler to log the HTTP request sent +// to a service. Will include the HTTP request volcenginebody if the LogLevel of the +// request matches LogDebugWithHTTPBody. +var LogHTTPRequestHandler = request.NamedHandler{ + Name: "volcenginesdk.client.LogRequest", + Fn: logRequest, +} + +func logRequest(r *request.Request) { + logBody := r.Config.LogLevel.Matches(volcengine.LogDebugWithHTTPBody) + bodySeekable := volcengine.IsReaderSeekable(r.Body) + + b, err := httputil.DumpRequestOut(r.HTTPRequest, logBody) + if err != nil { + r.Config.Logger.Log(fmt.Sprintf(logReqErrMsg, + r.ClientInfo.ServiceName, r.Operation.Name, err)) + return + } + + if logBody { + if !bodySeekable { + r.SetReaderBody(volcengine.ReadSeekCloser(r.HTTPRequest.Body)) + } + // Reset the request volcenginebody because dumpRequest will re-wrap the + // r.HTTPRequest's Body as a NoOpCloser and will not be reset after + // read by the HTTP client reader. + if err := r.Error; err != nil { + r.Config.Logger.Log(fmt.Sprintf(logReqErrMsg, + r.ClientInfo.ServiceName, r.Operation.Name, err)) + return + } + } + + r.Config.Logger.Log(fmt.Sprintf(logReqMsg, + r.ClientInfo.ServiceName, r.Operation.Name, string(b))) +} + +// LogHTTPRequestHeaderHandler is a SDK request handler to log the HTTP request sent +// to a service. Will only log the HTTP request's headers. The request payload +// will not be read. 
+var LogHTTPRequestHeaderHandler = request.NamedHandler{ + Name: "volcenginesdk.client.LogRequestHeader", + Fn: logRequestHeader, +} + +func logRequestHeader(r *request.Request) { + b, err := httputil.DumpRequestOut(r.HTTPRequest, false) + if err != nil { + r.Config.Logger.Log(fmt.Sprintf(logReqErrMsg, + r.ClientInfo.ServiceName, r.Operation.Name, err)) + return + } + + r.Config.Logger.Log(fmt.Sprintf(logReqMsg, + r.ClientInfo.ServiceName, r.Operation.Name, string(b))) +} + +const logRespMsg = `DEBUG: Response %s/%s Details: +---[ RESPONSE ]-------------------------------------- +%s +-----------------------------------------------------` + +const logRespErrMsg = `DEBUG ERROR: Response %s/%s: +---[ RESPONSE DUMP ERROR ]----------------------------- +%s +-----------------------------------------------------` + +// LogHTTPResponseHandler is a SDK request handler to log the HTTP response +// received from a service. Will include the HTTP response volcenginebody if the LogLevel +// of the request matches LogDebugWithHTTPBody. 
+var LogHTTPResponseHandler = request.NamedHandler{ + Name: "volcenginesdk.client.LogResponse", + Fn: logResponse, +} + +func logResponse(r *request.Request) { + lw := &logWriter{r.Config.Logger, bytes.NewBuffer(nil)} + + if r.HTTPResponse == nil { + lw.Logger.Log(fmt.Sprintf(logRespErrMsg, + r.ClientInfo.ServiceName, r.Operation.Name, "request's HTTPResponse is nil")) + return + } + + logBody := r.Config.LogLevel.Matches(volcengine.LogDebugWithHTTPBody) + if logBody { + r.HTTPResponse.Body = &teeReaderCloser{ + Reader: io.TeeReader(r.HTTPResponse.Body, lw), + Source: r.HTTPResponse.Body, + } + } + + handlerFn := func(req *request.Request) { + b, err := httputil.DumpResponse(req.HTTPResponse, false) + if err != nil { + lw.Logger.Log(fmt.Sprintf(logRespErrMsg, + req.ClientInfo.ServiceName, req.Operation.Name, err)) + return + } + + lw.Logger.Log(fmt.Sprintf(logRespMsg, + req.ClientInfo.ServiceName, req.Operation.Name, string(b))) + + if logBody { + b, err := ioutil.ReadAll(lw.buf) + if err != nil { + lw.Logger.Log(fmt.Sprintf(logRespErrMsg, + req.ClientInfo.ServiceName, req.Operation.Name, err)) + return + } + + lw.Logger.Log(string(b)) + } + } + + const handlerName = "volcenginesdk.client.LogResponse.ResponseBody" + + r.Handlers.Unmarshal.SetBackNamed(request.NamedHandler{ + Name: handlerName, Fn: handlerFn, + }) + r.Handlers.UnmarshalError.SetBackNamed(request.NamedHandler{ + Name: handlerName, Fn: handlerFn, + }) +} + +// LogHTTPResponseHeaderHandler is a SDK request handler to log the HTTP +// response received from a service. Will only log the HTTP response's headers. +// The response payload will not be read. 
+var LogHTTPResponseHeaderHandler = request.NamedHandler{ + Name: "volcenginesdk.client.LogResponseHeader", + Fn: logResponseHeader, +} + +func logResponseHeader(r *request.Request) { + if r.Config.Logger == nil { + return + } + + b, err := httputil.DumpResponse(r.HTTPResponse, false) + if err != nil { + r.Config.Logger.Log(fmt.Sprintf(logRespErrMsg, + r.ClientInfo.ServiceName, r.Operation.Name, err)) + return + } + + r.Config.Logger.Log(fmt.Sprintf(logRespMsg, + r.ClientInfo.ServiceName, r.Operation.Name, string(b))) +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/client/metadata/client_info.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/client/metadata/client_info.go new file mode 100644 index 000000000000..153ea7d067d7 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/client/metadata/client_info.go @@ -0,0 +1,32 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package metadata + +// Copy from https://github.com/aws/aws-sdk-go +// May have been modified by Beijing Volcanoengine Technology Ltd. + +// ClientInfo wraps immutable data from the client.Client structure. 
+type ClientInfo struct { + ServiceName string + ServiceID string + APIVersion string + Endpoint string + SigningName string + SigningRegion string + JSONVersion string + TargetPrefix string +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/client/no_op_retryer.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/client/no_op_retryer.go new file mode 100644 index 000000000000..86aafc317eeb --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/client/no_op_retryer.go @@ -0,0 +1,47 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package client + +// Copy from https://github.com/aws/aws-sdk-go +// May have been modified by Beijing Volcanoengine Technology Ltd. + +import ( + "time" + + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request" +) + +// NoOpRetryer provides a retryer that performs no retries. +// It should be used when we do not want retries to be performed. +type NoOpRetryer struct{} + +// MaxRetries returns the maximum number of retries the service will use to make +// an individual API request; for NoOpRetryer, MaxRetries will always be zero. +func (d NoOpRetryer) MaxRetries() int { + return 0 +} + +// ShouldRetry will always return false for NoOpRetryer, as it should never retry.
+func (d NoOpRetryer) ShouldRetry(_ *request.Request) bool { + return false +} + +// RetryRules returns the delay duration before retrying this request again; +// since NoOpRetryer does not retry, RetryRules always returns 0. +func (d NoOpRetryer) RetryRules(_ *request.Request) time.Duration { + return 0 +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/config.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/config.go new file mode 100644 index 000000000000..71ef824232b3 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/config.go @@ -0,0 +1,668 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package volcengine + +// Copy from https://github.com/aws/aws-sdk-go +// May have been modified by Beijing Volcanoengine Technology Ltd. + +import ( + "net/http" + "time" + + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/credentials" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/custom" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/endpoints" +) + +// UseServiceDefaultRetries instructs the config to use the service's own +// default number of retries. This will be the default action if +// Config.MaxRetries is nil also. 
+const UseServiceDefaultRetries = -1 + +// RequestRetryer is an alias for a type that implements the request.Retryer +// interface. +type RequestRetryer interface{} + +// A Config provides service configuration for service clients. By default, +// all clients will use the defaults.DefaultConfig structure. +type Config struct { + // Enables verbose error printing of all credential chain errors. + // Should be used when wanting to see all errors while attempting to + // retrieve credentials. + CredentialsChainVerboseErrors *bool + + // The credentials object to use when signing requests. Defaults to a + // chain of credential providers to search for credentials in environment + // variables, shared credential file, and EC2 Instance Roles. + Credentials *credentials.Credentials + + // An optional endpoint URL (hostname only or fully qualified URI) + // that overrides the default generated endpoint for a client. Set this + // to `""` to use the default generated endpoint. + // + // Note: You must still provide a `Region` value when specifying an + // endpoint for a client. + Endpoint *string + + // The resolver to use for looking up endpoints for volcengine service clients + // to use based on region. + EndpointResolver endpoints.Resolver + + // EnforceShouldRetryCheck is used in the AfterRetryHandler to always call + // ShouldRetry regardless of whether or not if request.Retryable is set. + // This will utilize ShouldRetry method of custom retryers. If EnforceShouldRetryCheck + // is not set, then ShouldRetry will only be called if request.Retryable is nil. + // Proper handling of the request.Retryable field is important when setting this field. + EnforceShouldRetryCheck *bool + + // The region to send requests to. This parameter is required and must + // be configured globally or on a per-client basis unless otherwise + // noted. A full list of regions is found in the "Regions and Endpoints" + // document. + // + // Regions and Endpoints. 
+ Region *string + + // Set this to `true` to disable SSL when sending requests. Defaults + // to `false`. + DisableSSL *bool + + // The HTTP client to use when sending requests. Defaults to + // `http.DefaultClient`. + HTTPClient *http.Client + + // An integer value representing the logging level. The default log level + // is zero (LogOff), which represents no logging. To enable logging set + // to a LogLevel Value. + LogLevel *LogLevelType + + // The logger writer interface to write logging messages to. Defaults to + // standard out. + Logger Logger + + // The maximum number of times that a request will be retried for failures. + // Defaults to -1, which defers the max retry setting to the service + // specific configuration. + MaxRetries *int + + // Retryer guides how HTTP requests should be retried in case of + // recoverable failures. + // + // When nil or the value does not implement the request.Retryer interface, + // the client.DefaultRetryer will be used. + // + // When both Retryer and MaxRetries are non-nil, the former is used and + // the latter ignored. + // + // To set the Retryer field in a type-safe manner and with chaining, use + // the request.WithRetryer helper function: + // + // cfg := request.WithRetryer(volcengine.NewConfig(), myRetryer) + // + Retryer RequestRetryer + + // Disables semantic parameter validation, which validates input for + // missing required fields and/or other semantic request input errors + // Temporary notes by xuyaming@bytedance.com because some validate field is relation. + //DisableParamValidation *bool + + // Disables the computation of request and response checksums, e.g., + //DisableComputeChecksums *bool + + // Set this to `true` to enable S3 Accelerate feature. For all operations + // compatible with S3 Accelerate will use the accelerate endpoint for + // requests. Requests not compatible will fall back to normal S3 requests. 
+ // + // The bucket must be enabled for accelerate to be used with S3 client with + // accelerate enabled. If the bucket is not enabled for accelerate an error + // will be returned. The bucket name must be DNS compatible to also work + // with accelerate. + //S3UseAccelerate *bool + + // S3DisableContentMD5Validation config option is temporarily disabled, + // For S3 GetObject API calls, #1837. + // + // Set this to `true` to disable the S3 service client from automatically + // adding the ContentMD5 to S3 Object Put and Upload API calls. This option + // will also disable the SDK from performing object ContentMD5 validation + // on GetObject API calls. + //S3DisableContentMD5Validation *bool + + // Set this to `true` to disable the EC2Metadata client from overriding the + // default http.Client's Timeout. This is helpful if you do not want the + // EC2Metadata client to create a new http.Client. This option is only + // meaningful if you're not already using a custom HTTP client with the + // SDK. Enabled by default. + // + // Must be set and provided to the session.NewSession() in order to disable + // the EC2Metadata overriding the timeout for default credentials chain. + // + // Example: + // sess := session.Must(session.NewSession(volcengine.NewConfig() + // .WithEC2MetadataDisableTimeoutOverride(true))) + // + // svc := s3.New(sess) + // + //EC2MetadataDisableTimeoutOverride *bool + + // Instructs the endpoint to be generated for a service client to + // be the dual stack endpoint. The dual stack endpoint will support + // both IPv4 and IPv6 addressing. + // + // Setting this for a service which does not support dual stack will fail + // to make requests. It is not recommended to set this value on the session + // as it will apply to all service clients created with the session, even + // services which don't support dual stack endpoints. + // + // If the Endpoint config value is also provided the UseDualStack flag + // will be ignored.
+ // + // Only supported with. + // + // sess := session.Must(session.NewSession()) + // + // svc := s3.New(sess, &volcengine.Config{ + // UseDualStack: volcengine.Bool(true), + // }) + //UseDualStack *bool + + // SleepDelay is an override for the func the SDK will call when sleeping + // during the lifecycle of a request. Specifically this will be used for + // request delays. This value should only be used for testing. To adjust + // the delay of a request see the volcengine/client.DefaultRetryer and + // volcengine/request.Retryer. + // + // SleepDelay will prevent any Context from being used for canceling retry + // delay of an API operation. It is recommended to not use SleepDelay at all + // and specify a Retryer instead. + SleepDelay func(time.Duration) + + // DisableRestProtocolURICleaning will not clean the URL path when making rest protocol requests. + // Will default to false. This would only be used for empty directory names in s3 requests. + // + // Example: + // sess := session.Must(session.NewSession(&volcengine.Config{ + // DisableRestProtocolURICleaning: volcengine.Bool(true), + // })) + // + // svc := s3.New(sess) + // out, err := svc.GetObject(&s3.GetObjectInput { + // Bucket: volcengine.String("bucketname"), + // Key: volcengine.String("//foo//bar//moo"), + // }) + DisableRestProtocolURICleaning *bool + + // EnableEndpointDiscovery will allow for endpoint discovery on operations that + // have the definition in its model. By default, endpoint discovery is off. + // + // Example: + // sess := session.Must(session.NewSession(&volcengine.Config{ + // EnableEndpointDiscovery: volcengine.Bool(true), + // })) + // + // svc := s3.New(sess) + // out, err := svc.GetObject(&s3.GetObjectInput { + // Bucket: volcengine.String("bucketname"), + // Key: volcengine.String("/foo/bar/moo"), + // }) + //EnableEndpointDiscovery *bool + + // DisableEndpointHostPrefix will disable the SDK's behavior of prefixing + // request endpoint hosts with modeled information. 
+ // + // Disabling this feature is useful when you want to use local endpoints + // for testing that do not support the modeled host prefix pattern. + //DisableEndpointHostPrefix *bool + + LogSensitives []string + + DynamicCredentials custom.DynamicCredentials + + DynamicCredentialsIncludeError custom.DynamicCredentialsIncludeError + + LogAccount custom.LogAccount + + ExtendHttpRequest custom.ExtendHttpRequest + + ExtendHttpRequestWithMeta custom.ExtendHttpRequestWithMeta + + ExtraHttpParameters custom.ExtraHttpParameters + + ExtraHttpParametersWithMeta custom.ExtraHttpParametersWithMeta + + ExtraHttpJsonBody custom.ExtraHttpJsonBody + + CustomerUnmarshalError custom.CustomerUnmarshalError + + CustomerUnmarshalData custom.CustomerUnmarshalData + + ExtraUserAgent *string + + Interceptors []custom.SdkInterceptor + + SimpleError *bool + + ForceJsonNumberDecode custom.ForceJsonNumberDecode +} + +// NewConfig returns a new Config pointer that can be chained with builder +// methods to set multiple configuration values inline without using pointers. +// +// // Create Session with MaxRetries configuration to be shared by multiple +// // service clients. +// sess := session.Must(session.NewSession(volcengine.NewConfig(). +// WithMaxRetries(3), +// )) +// +// // Create S3 service client with a specific Region. +// svc := s3.New(sess, volcengine.NewConfig(). +// WithRegion("us-west-2"), +// ) +func NewConfig() *Config { + return &Config{} +} + +func (c *Config) AddInterceptor(interceptor custom.SdkInterceptor) *Config { + c.Interceptors = append(c.Interceptors, interceptor) + return c +} + +// WithCredentialsChainVerboseErrors sets a config verbose errors boolean and returning +// a Config pointer. +func (c *Config) WithCredentialsChainVerboseErrors(verboseErrs bool) *Config { + c.CredentialsChainVerboseErrors = &verboseErrs + return c +} + +// WithCredentials sets a config Credentials value returning a Config pointer +// for chaining. 
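The `Config` builder pattern above — pointer-typed optional fields, chainable `With*` setters, and a `MergeIn` that copies only set fields — can be shown in miniature. The `cfg` type and its methods below are a hypothetical reduction, not the SDK's actual `Config`:

```go
package main

import "fmt"

// cfg mimics the Config pattern: optional fields are pointers so "unset" is
// distinguishable from a zero value, and each With* setter returns the
// receiver for chaining.
type cfg struct {
	Region     *string
	MaxRetries *int
}

func (c *cfg) WithRegion(region string) *cfg {
	c.Region = &region
	return c
}

func (c *cfg) WithMaxRetries(max int) *cfg {
	c.MaxRetries = &max
	return c
}

// mergeIn copies only the fields that are non-nil, like Config.MergeIn.
func (c *cfg) mergeIn(other *cfg) {
	if other.Region != nil {
		c.Region = other.Region
	}
	if other.MaxRetries != nil {
		c.MaxRetries = other.MaxRetries
	}
}

func main() {
	base := (&cfg{}).WithRegion("cn-beijing").WithMaxRetries(3)
	base.mergeIn((&cfg{}).WithMaxRetries(5)) // region kept, retries overridden
	fmt.Println(*base.Region, *base.MaxRetries)
}
```

The pointer fields are what let `mergeIn` distinguish "explicitly set to zero" from "never set", which a plain value field could not express.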
+func (c *Config) WithCredentials(creds *credentials.Credentials) *Config { + c.Credentials = creds + return c +} + +// WithAkSk sets a config Credentials value returning a Config pointer +// for chaining. +func (c *Config) WithAkSk(ak, sk string) *Config { + c.Credentials = credentials.NewStaticCredentials(ak, sk, "") + return c +} + +// WithEndpoint sets a config Endpoint value returning a Config pointer for +// chaining. +func (c *Config) WithEndpoint(endpoint string) *Config { + c.Endpoint = &endpoint + return c +} + +func (c *Config) WithSimpleError(simpleError bool) *Config { + c.SimpleError = &simpleError + return c +} + +func (c *Config) WithLogSensitives(sensitives []string) *Config { + c.LogSensitives = sensitives + return c +} + +func (c *Config) WithExtendHttpRequest(extendHttpRequest custom.ExtendHttpRequest) *Config { + c.ExtendHttpRequest = extendHttpRequest + return c +} + +func (c *Config) WithExtendHttpRequestWithMeta(extendHttpRequestWithMeta custom.ExtendHttpRequestWithMeta) *Config { + c.ExtendHttpRequestWithMeta = extendHttpRequestWithMeta + return c +} + +func (c *Config) WithExtraHttpParameters(extraHttpParameters custom.ExtraHttpParameters) *Config { + c.ExtraHttpParameters = extraHttpParameters + return c +} + +func (c *Config) WithExtraHttpParametersWithMeta(extraHttpParametersWithMeta custom.ExtraHttpParametersWithMeta) *Config { + c.ExtraHttpParametersWithMeta = extraHttpParametersWithMeta + return c +} + +func (c *Config) WithExtraHttpJsonBody(extraHttpJsonBody custom.ExtraHttpJsonBody) *Config { + c.ExtraHttpJsonBody = extraHttpJsonBody + return c +} + +func (c *Config) WithExtraUserAgent(extra *string) *Config { + c.ExtraUserAgent = extra + return c +} + +func (c *Config) WithLogAccount(account custom.LogAccount) *Config { + c.LogAccount = account + return c +} + +func (c *Config) WithDynamicCredentials(f custom.DynamicCredentials) *Config { + c.DynamicCredentials = f + return c +} + +// WithDynamicCredentialsIncludeError sets a 
config DynamicCredentialsIncludeError value returning a Config pointer for +// chaining. +func (c *Config) WithDynamicCredentialsIncludeError(f custom.DynamicCredentialsIncludeError) *Config { + c.DynamicCredentialsIncludeError = f + return c +} + +func (c *Config) WithCustomerUnmarshalError(f custom.CustomerUnmarshalError) *Config { + c.CustomerUnmarshalError = f + return c +} + +func (c *Config) WithCustomerUnmarshalData(f custom.CustomerUnmarshalData) *Config { + c.CustomerUnmarshalData = f + return c +} + +func (c *Config) WithForceJsonNumberDecode(f custom.ForceJsonNumberDecode) *Config { + c.ForceJsonNumberDecode = f + return c +} + +// WithEndpointResolver sets a config EndpointResolver value returning a +// Config pointer for chaining. +func (c *Config) WithEndpointResolver(resolver endpoints.Resolver) *Config { + c.EndpointResolver = resolver + return c +} + +// WithRegion sets a config Region value returning a Config pointer for +// chaining. +func (c *Config) WithRegion(region string) *Config { + c.Region = &region + return c +} + +// WithDisableSSL sets a config DisableSSL value returning a Config pointer +// for chaining. +func (c *Config) WithDisableSSL(disable bool) *Config { + c.DisableSSL = &disable + return c +} + +// WithHTTPClient sets a config HTTPClient value returning a Config pointer +// for chaining. +func (c *Config) WithHTTPClient(client *http.Client) *Config { + c.HTTPClient = client + return c +} + +// WithMaxRetries sets a config MaxRetries value returning a Config pointer +// for chaining. +func (c *Config) WithMaxRetries(max int) *Config { + c.MaxRetries = &max + return c +} + +// WithDisableParamValidation sets a config DisableParamValidation value +// returning a Config pointer for chaining +// Temporary notes by xuyaming@bytedance.com because some validate field is relation.
+//func (c *Config) WithDisableParamValidation(disable bool) *Config { +// c.DisableParamValidation = &disable +// return c +//} + +// WithDisableComputeChecksums sets a config DisableComputeChecksums value +// returning a Config pointer for chaining. +//func (c *Config) WithDisableComputeChecksums(disable bool) *Config { +// c.DisableComputeChecksums = &disable +// return c +//} + +// WithLogLevel sets a config LogLevel value returning a Config pointer for +// chaining. +func (c *Config) WithLogLevel(level LogLevelType) *Config { + c.LogLevel = &level + return c +} + +// WithLogger sets a config Logger value returning a Config pointer for +// chaining. +func (c *Config) WithLogger(logger Logger) *Config { + c.Logger = logger + return c +} + +// WithS3UseAccelerate sets a config S3UseAccelerate value returning a Config +// pointer for chaining. +//func (c *Config) WithS3UseAccelerate(enable bool) *Config { +// c.S3UseAccelerate = &enable +// return c +// +//} + +// WithS3DisableContentMD5Validation sets a config +// S3DisableContentMD5Validation value returning a Config pointer for chaining. +//func (c *Config) WithS3DisableContentMD5Validation(enable bool) *Config { +// c.S3DisableContentMD5Validation = &enable +// return c +// +//} + +// WithUseDualStack sets a config UseDualStack value returning a Config +// pointer for chaining. +//func (c *Config) WithUseDualStack(enable bool) *Config { +// c.UseDualStack = &enable +// return c +//} + +// WithEC2MetadataDisableTimeoutOverride sets a config EC2MetadataDisableTimeoutOverride value +// returning a Config pointer for chaining. +//func (c *Config) WithEC2MetadataDisableTimeoutOverride(enable bool) *Config { +// c.EC2MetadataDisableTimeoutOverride = &enable +// return c +//} + +// WithSleepDelay overrides the function used to sleep while waiting for the +// next retry. Defaults to time.Sleep. 
+func (c *Config) WithSleepDelay(fn func(time.Duration)) *Config {
+	c.SleepDelay = fn
+	return c
+}
+
+// WithEndpointDiscovery will set whether or not to use endpoint discovery.
+//func (c *Config) WithEndpointDiscovery(t bool) *Config {
+//	c.EnableEndpointDiscovery = &t
+//	return c
+//}
+
+// WithDisableEndpointHostPrefix will set whether or not to use modeled host prefix
+// when making requests.
+//func (c *Config) WithDisableEndpointHostPrefix(t bool) *Config {
+//	c.DisableEndpointHostPrefix = &t
+//	return c
+//}
+
+// MergeIn merges the passed in configs into the existing config object.
+func (c *Config) MergeIn(cfgs ...*Config) {
+	for _, other := range cfgs {
+		mergeInConfig(c, other)
+	}
+}
+
+func mergeInConfig(dst *Config, other *Config) {
+	if other == nil {
+		return
+	}
+
+	if other.CredentialsChainVerboseErrors != nil {
+		dst.CredentialsChainVerboseErrors = other.CredentialsChainVerboseErrors
+	}
+
+	if other.Credentials != nil {
+		dst.Credentials = other.Credentials
+	}
+
+	if other.Endpoint != nil {
+		dst.Endpoint = other.Endpoint
+	}
+
+	if other.EndpointResolver != nil {
+		dst.EndpointResolver = other.EndpointResolver
+	}
+
+	if other.Region != nil {
+		dst.Region = other.Region
+	}
+
+	if other.DisableSSL != nil {
+		dst.DisableSSL = other.DisableSSL
+	}
+
+	if other.HTTPClient != nil {
+		dst.HTTPClient = other.HTTPClient
+	}
+
+	if other.LogLevel != nil {
+		dst.LogLevel = other.LogLevel
+	}
+
+	if other.Logger != nil {
+		dst.Logger = other.Logger
+	}
+
+	if other.MaxRetries != nil {
+		dst.MaxRetries = other.MaxRetries
+	}
+
+	if other.Retryer != nil {
+		dst.Retryer = other.Retryer
+	}
+
+	if other.ForceJsonNumberDecode != nil {
+		dst.ForceJsonNumberDecode = other.ForceJsonNumberDecode
+	}
+	// Temporary note by xuyaming@bytedance.com: disabled because some validated
+	// fields are interdependent.
+ //if other.DisableParamValidation != nil { + // dst.DisableParamValidation = other.DisableParamValidation + //} + + //if other.DisableComputeChecksums != nil { + // dst.DisableComputeChecksums = other.DisableComputeChecksums + //} + + //if other.S3UseAccelerate != nil { + // dst.S3UseAccelerate = other.S3UseAccelerate + //} + // + //if other.S3DisableContentMD5Validation != nil { + // dst.S3DisableContentMD5Validation = other.S3DisableContentMD5Validation + //} + // + //if other.UseDualStack != nil { + // dst.UseDualStack = other.UseDualStack + //} + // + //if other.EC2MetadataDisableTimeoutOverride != nil { + // dst.EC2MetadataDisableTimeoutOverride = other.EC2MetadataDisableTimeoutOverride + //} + // + if other.SleepDelay != nil { + dst.SleepDelay = other.SleepDelay + } + // + if other.DisableRestProtocolURICleaning != nil { + dst.DisableRestProtocolURICleaning = other.DisableRestProtocolURICleaning + } + // + //if other.EnforceShouldRetryCheck != nil { + // dst.EnforceShouldRetryCheck = other.EnforceShouldRetryCheck + //} + // + //if other.EnableEndpointDiscovery != nil { + // dst.EnableEndpointDiscovery = other.EnableEndpointDiscovery + //} + // + //if other.DisableEndpointHostPrefix != nil { + // dst.DisableEndpointHostPrefix = other.DisableEndpointHostPrefix + //} + + if other.LogSensitives != nil { + dst.LogSensitives = other.LogSensitives + } + + if other.LogAccount != nil { + dst.LogAccount = other.LogAccount + } + + if other.DynamicCredentials != nil { + dst.DynamicCredentials = other.DynamicCredentials + } + + if other.DynamicCredentialsIncludeError != nil { + dst.DynamicCredentialsIncludeError = other.DynamicCredentialsIncludeError + } + + if other.ExtendHttpRequest != nil { + dst.ExtendHttpRequest = other.ExtendHttpRequest + } + + if other.ExtendHttpRequestWithMeta != nil { + dst.ExtendHttpRequestWithMeta = other.ExtendHttpRequestWithMeta + } + + if other.ExtraHttpParameters != nil { + dst.ExtraHttpParameters = other.ExtraHttpParameters + } + + if 
other.ExtraHttpParametersWithMeta != nil { + dst.ExtraHttpParametersWithMeta = other.ExtraHttpParametersWithMeta + } + + if other.ExtraUserAgent != nil { + dst.ExtraUserAgent = other.ExtraUserAgent + } + + if other.SimpleError != nil { + dst.SimpleError = other.SimpleError + } + + if other.ExtraHttpJsonBody != nil { + dst.ExtraHttpJsonBody = other.ExtraHttpJsonBody + } + + if other.CustomerUnmarshalError != nil { + dst.CustomerUnmarshalError = other.CustomerUnmarshalError + } + + if other.CustomerUnmarshalData != nil { + dst.CustomerUnmarshalData = other.CustomerUnmarshalData + } + + dst.Interceptors = other.Interceptors +} + +// Copy will return a shallow copy of the Config object. If any additional +// configurations are provided they will be merged into the new config returned. +func (c *Config) Copy(cfgs ...*Config) *Config { + dst := &Config{} + dst.MergeIn(c) + + for _, cfg := range cfgs { + dst.MergeIn(cfg) + } + + return dst +} diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/apis/config/v1beta2/doc.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/context.go similarity index 55% rename from cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/apis/config/v1beta2/doc.go rename to cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/context.go index 0c8b9e17e84f..1285bd678c02 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/apis/config/v1beta2/doc.go +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/context.go @@ -1,5 +1,5 @@ /* -Copyright 2021 The Kubernetes Authors. +Copyright 2023 The Kubernetes Authors. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. @@ -14,11 +14,15 @@ See the License for the specific language governing permissions and limitations under the License. 
*/ -// +k8s:deepcopy-gen=package -// +k8s:conversion-gen=k8s.io/kubernetes/pkg/scheduler/apis/config -// +k8s:conversion-gen-external-types=k8s.io/kube-scheduler/config/v1beta2 -// +k8s:defaulter-gen=TypeMeta -// +k8s:defaulter-gen-input=k8s.io/kube-scheduler/config/v1beta2 -// +groupName=kubescheduler.config.k8s.io +package volcengine -package v1beta2 // import "k8s.io/kubernetes/pkg/scheduler/apis/config/v1beta2" +// Copy from https://github.com/aws/aws-sdk-go +// May have been modified by Beijing Volcanoengine Technology Ltd. + +import "context" + +// Context is an alias of the Go stdlib's context.Context interface. +// It can be used within the SDK's API operation "WithContext" methods. +// +// See https://golang.org/pkg/context on how to use contexts. +type Context = context.Context diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/context_background.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/context_background.go new file mode 100644 index 000000000000..83169ffa06b0 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/context_background.go @@ -0,0 +1,30 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package volcengine + +// Copy from https://github.com/aws/aws-sdk-go +// May have been modified by Beijing Volcanoengine Technology Ltd. 
+
+import "context"
+
+// BackgroundContext returns a context that will never be canceled, has no
+// values, and no deadline. This context is used by the SDK to provide
+// backwards compatibility with non-context API operations and functionality.
+// See https://golang.org/pkg/context for more information on Contexts.
+func BackgroundContext() Context {
+	return context.Background()
+}
diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/context_sleep.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/context_sleep.go
new file mode 100644
index 000000000000..69e784afe93f
--- /dev/null
+++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/context_sleep.go
@@ -0,0 +1,43 @@
+/*
+Copyright 2023 The Kubernetes Authors.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package volcengine

+// Copy from https://github.com/aws/aws-sdk-go
+// May have been modified by Beijing Volcanoengine Technology Ltd.
+
+import (
+	"time"
+)
+
+// SleepWithContext waits until the timer duration expires or the context is
+// canceled, whichever happens first. If the context is canceled, the context's
+// error is returned.
+//
+// Expects Context to always return a non-nil error if the Done channel is closed.
+func SleepWithContext(ctx Context, dur time.Duration) error { + t := time.NewTimer(dur) + defer t.Stop() + + select { + case <-t.C: + break + case <-ctx.Done(): + return ctx.Err() + } + + return nil +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/convert_types.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/convert_types.go new file mode 100644 index 000000000000..0095366ae58f --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/convert_types.go @@ -0,0 +1,951 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package volcengine + +// Copy from https://github.com/aws/aws-sdk-go +// May have been modified by Beijing Volcanoengine Technology Ltd. + +import ( + "encoding/json" + "time" +) + +// JsonNumber returns a pointer to the json.Number passed in. +func JsonNumber(v string) *json.Number { + jn := json.Number(v) + return &jn +} + +func JsonNumberValue(v *string) json.Number { + jn := json.Number(StringValue(v)) + return jn +} + +// String returns a pointer to the string value passed in. +func String(v string) *string { + return &v +} + +// StringValue returns the value of the string pointer passed in or +// "" if the pointer is nil. 
+func StringValue(v *string) string { + if v != nil { + return *v + } + return "" +} + +// StringSlice converts a slice of string values into a slice of +// string pointers +func StringSlice(src []string) []*string { + dst := make([]*string, len(src)) + for i := 0; i < len(src); i++ { + dst[i] = &(src[i]) + } + return dst +} + +// StringValueSlice converts a slice of string pointers into a slice of +// string values +func StringValueSlice(src []*string) []string { + dst := make([]string, len(src)) + for i := 0; i < len(src); i++ { + if src[i] != nil { + dst[i] = *(src[i]) + } + } + return dst +} + +// StringMap converts a string map of string values into a string +// map of string pointers +func StringMap(src map[string]string) map[string]*string { + dst := make(map[string]*string) + for k, val := range src { + v := val + dst[k] = &v + } + return dst +} + +// StringValueMap converts a string map of string pointers into a string +// map of string values +func StringValueMap(src map[string]*string) map[string]string { + dst := make(map[string]string) + for k, val := range src { + if val != nil { + dst[k] = *val + } + } + return dst +} + +// Bool returns a pointer to the bool value passed in. +func Bool(v bool) *bool { + return &v +} + +// BoolValue returns the value of the bool pointer passed in or +// false if the pointer is nil. 
+func BoolValue(v *bool) bool { + if v != nil { + return *v + } + return false +} + +// BoolSlice converts a slice of bool values into a slice of +// bool pointers +func BoolSlice(src []bool) []*bool { + dst := make([]*bool, len(src)) + for i := 0; i < len(src); i++ { + dst[i] = &(src[i]) + } + return dst +} + +// BoolValueSlice converts a slice of bool pointers into a slice of +// bool values +func BoolValueSlice(src []*bool) []bool { + dst := make([]bool, len(src)) + for i := 0; i < len(src); i++ { + if src[i] != nil { + dst[i] = *(src[i]) + } + } + return dst +} + +// BoolMap converts a string map of bool values into a string +// map of bool pointers +func BoolMap(src map[string]bool) map[string]*bool { + dst := make(map[string]*bool) + for k, val := range src { + v := val + dst[k] = &v + } + return dst +} + +// BoolValueMap converts a string map of bool pointers into a string +// map of bool values +func BoolValueMap(src map[string]*bool) map[string]bool { + dst := make(map[string]bool) + for k, val := range src { + if val != nil { + dst[k] = *val + } + } + return dst +} + +// Int returns a pointer to the int value passed in. +func Int(v int) *int { + return &v +} + +// IntValue returns the value of the int pointer passed in or +// 0 if the pointer is nil. 
+func IntValue(v *int) int { + if v != nil { + return *v + } + return 0 +} + +// IntSlice converts a slice of int values into a slice of +// int pointers +func IntSlice(src []int) []*int { + dst := make([]*int, len(src)) + for i := 0; i < len(src); i++ { + dst[i] = &(src[i]) + } + return dst +} + +// IntValueSlice converts a slice of int pointers into a slice of +// int values +func IntValueSlice(src []*int) []int { + dst := make([]int, len(src)) + for i := 0; i < len(src); i++ { + if src[i] != nil { + dst[i] = *(src[i]) + } + } + return dst +} + +// IntMap converts a string map of int values into a string +// map of int pointers +func IntMap(src map[string]int) map[string]*int { + dst := make(map[string]*int) + for k, val := range src { + v := val + dst[k] = &v + } + return dst +} + +// IntValueMap converts a string map of int pointers into a string +// map of int values +func IntValueMap(src map[string]*int) map[string]int { + dst := make(map[string]int) + for k, val := range src { + if val != nil { + dst[k] = *val + } + } + return dst +} + +// Uint returns a pointer to the uint value passed in. +func Uint(v uint) *uint { + return &v +} + +// UintValue returns the value of the uint pointer passed in or +// 0 if the pointer is nil. 
+func UintValue(v *uint) uint {
+	if v != nil {
+		return *v
+	}
+	return 0
+}
+
+// UintSlice converts a slice of uint values into a slice of
+// uint pointers
+func UintSlice(src []uint) []*uint {
+	dst := make([]*uint, len(src))
+	for i := 0; i < len(src); i++ {
+		dst[i] = &(src[i])
+	}
+	return dst
+}
+
+// UintValueSlice converts a slice of uint pointers into a slice of
+// uint values
+func UintValueSlice(src []*uint) []uint {
+	dst := make([]uint, len(src))
+	for i := 0; i < len(src); i++ {
+		if src[i] != nil {
+			dst[i] = *(src[i])
+		}
+	}
+	return dst
+}
+
+// UintMap converts a string map of uint values into a string
+// map of uint pointers
+func UintMap(src map[string]uint) map[string]*uint {
+	dst := make(map[string]*uint)
+	for k, val := range src {
+		v := val
+		dst[k] = &v
+	}
+	return dst
+}
+
+// UintValueMap converts a string map of uint pointers into a string
+// map of uint values
+func UintValueMap(src map[string]*uint) map[string]uint {
+	dst := make(map[string]uint)
+	for k, val := range src {
+		if val != nil {
+			dst[k] = *val
+		}
+	}
+	return dst
+}
+
+// Int8 returns a pointer to the int8 value passed in.
+func Int8(v int8) *int8 {
+	return &v
+}
+
+// Int8Value returns the value of the int8 pointer passed in or
+// 0 if the pointer is nil.
+func Int8Value(v *int8) int8 { + if v != nil { + return *v + } + return 0 +} + +// Int8Slice converts a slice of int8 values into a slice of +// int8 pointers +func Int8Slice(src []int8) []*int8 { + dst := make([]*int8, len(src)) + for i := 0; i < len(src); i++ { + dst[i] = &(src[i]) + } + return dst +} + +// Int8ValueSlice converts a slice of int8 pointers into a slice of +// int8 values +func Int8ValueSlice(src []*int8) []int8 { + dst := make([]int8, len(src)) + for i := 0; i < len(src); i++ { + if src[i] != nil { + dst[i] = *(src[i]) + } + } + return dst +} + +// Int8Map converts a string map of int8 values into a string +// map of int8 pointers +func Int8Map(src map[string]int8) map[string]*int8 { + dst := make(map[string]*int8) + for k, val := range src { + v := val + dst[k] = &v + } + return dst +} + +// Int8ValueMap converts a string map of int8 pointers into a string +// map of int8 values +func Int8ValueMap(src map[string]*int8) map[string]int8 { + dst := make(map[string]int8) + for k, val := range src { + if val != nil { + dst[k] = *val + } + } + return dst +} + +// Int16 returns a pointer to the int16 value passed in. +func Int16(v int16) *int16 { + return &v +} + +// Int16Value returns the value of the int16 pointer passed in or +// 0 if the pointer is nil. 
+func Int16Value(v *int16) int16 { + if v != nil { + return *v + } + return 0 +} + +// Int16Slice converts a slice of int16 values into a slice of +// int16 pointers +func Int16Slice(src []int16) []*int16 { + dst := make([]*int16, len(src)) + for i := 0; i < len(src); i++ { + dst[i] = &(src[i]) + } + return dst +} + +// Int16ValueSlice converts a slice of int16 pointers into a slice of +// int16 values +func Int16ValueSlice(src []*int16) []int16 { + dst := make([]int16, len(src)) + for i := 0; i < len(src); i++ { + if src[i] != nil { + dst[i] = *(src[i]) + } + } + return dst +} + +// Int16Map converts a string map of int16 values into a string +// map of int16 pointers +func Int16Map(src map[string]int16) map[string]*int16 { + dst := make(map[string]*int16) + for k, val := range src { + v := val + dst[k] = &v + } + return dst +} + +// Int16ValueMap converts a string map of int16 pointers into a string +// map of int16 values +func Int16ValueMap(src map[string]*int16) map[string]int16 { + dst := make(map[string]int16) + for k, val := range src { + if val != nil { + dst[k] = *val + } + } + return dst +} + +// Int32 returns a pointer to the int32 value passed in. +func Int32(v int32) *int32 { + return &v +} + +// Int32Value returns the value of the int32 pointer passed in or +// 0 if the pointer is nil. 
+func Int32Value(v *int32) int32 { + if v != nil { + return *v + } + return 0 +} + +// Int32Slice converts a slice of int32 values into a slice of +// int32 pointers +func Int32Slice(src []int32) []*int32 { + dst := make([]*int32, len(src)) + for i := 0; i < len(src); i++ { + dst[i] = &(src[i]) + } + return dst +} + +// Int32ValueSlice converts a slice of int32 pointers into a slice of +// int32 values +func Int32ValueSlice(src []*int32) []int32 { + dst := make([]int32, len(src)) + for i := 0; i < len(src); i++ { + if src[i] != nil { + dst[i] = *(src[i]) + } + } + return dst +} + +// Int32Map converts a string map of int32 values into a string +// map of int32 pointers +func Int32Map(src map[string]int32) map[string]*int32 { + dst := make(map[string]*int32) + for k, val := range src { + v := val + dst[k] = &v + } + return dst +} + +// Int32ValueMap converts a string map of int32 pointers into a string +// map of int32 values +func Int32ValueMap(src map[string]*int32) map[string]int32 { + dst := make(map[string]int32) + for k, val := range src { + if val != nil { + dst[k] = *val + } + } + return dst +} + +// Int64 returns a pointer to the int64 value passed in. +func Int64(v int64) *int64 { + return &v +} + +// Int64Value returns the value of the int64 pointer passed in or +// 0 if the pointer is nil. 
+func Int64Value(v *int64) int64 { + if v != nil { + return *v + } + return 0 +} + +// Int64Slice converts a slice of int64 values into a slice of +// int64 pointers +func Int64Slice(src []int64) []*int64 { + dst := make([]*int64, len(src)) + for i := 0; i < len(src); i++ { + dst[i] = &(src[i]) + } + return dst +} + +// Int64ValueSlice converts a slice of int64 pointers into a slice of +// int64 values +func Int64ValueSlice(src []*int64) []int64 { + dst := make([]int64, len(src)) + for i := 0; i < len(src); i++ { + if src[i] != nil { + dst[i] = *(src[i]) + } + } + return dst +} + +// Int64Map converts a string map of int64 values into a string +// map of int64 pointers +func Int64Map(src map[string]int64) map[string]*int64 { + dst := make(map[string]*int64) + for k, val := range src { + v := val + dst[k] = &v + } + return dst +} + +// Int64ValueMap converts a string map of int64 pointers into a string +// map of int64 values +func Int64ValueMap(src map[string]*int64) map[string]int64 { + dst := make(map[string]int64) + for k, val := range src { + if val != nil { + dst[k] = *val + } + } + return dst +} + +// Uint8 returns a pointer to the uint8 value passed in. +func Uint8(v uint8) *uint8 { + return &v +} + +// Uint8Value returns the value of the uint8 pointer passed in or +// 0 if the pointer is nil. 
+func Uint8Value(v *uint8) uint8 { + if v != nil { + return *v + } + return 0 +} + +// Uint8Slice converts a slice of uint8 values into a slice of +// uint8 pointers +func Uint8Slice(src []uint8) []*uint8 { + dst := make([]*uint8, len(src)) + for i := 0; i < len(src); i++ { + dst[i] = &(src[i]) + } + return dst +} + +// Uint8ValueSlice converts a slice of uint8 pointers into a slice of +// uint8 values +func Uint8ValueSlice(src []*uint8) []uint8 { + dst := make([]uint8, len(src)) + for i := 0; i < len(src); i++ { + if src[i] != nil { + dst[i] = *(src[i]) + } + } + return dst +} + +// Uint8Map converts a string map of uint8 values into a string +// map of uint8 pointers +func Uint8Map(src map[string]uint8) map[string]*uint8 { + dst := make(map[string]*uint8) + for k, val := range src { + v := val + dst[k] = &v + } + return dst +} + +// Uint8ValueMap converts a string map of uint8 pointers into a string +// map of uint8 values +func Uint8ValueMap(src map[string]*uint8) map[string]uint8 { + dst := make(map[string]uint8) + for k, val := range src { + if val != nil { + dst[k] = *val + } + } + return dst +} + +// Uint16 returns a pointer to the uint16 value passed in. +func Uint16(v uint16) *uint16 { + return &v +} + +// Uint16Value returns the value of the uint16 pointer passed in or +// 0 if the pointer is nil. 
+func Uint16Value(v *uint16) uint16 { + if v != nil { + return *v + } + return 0 +} + +// Uint16Slice converts a slice of uint16 values into a slice of +// uint16 pointers +func Uint16Slice(src []uint16) []*uint16 { + dst := make([]*uint16, len(src)) + for i := 0; i < len(src); i++ { + dst[i] = &(src[i]) + } + return dst +} + +// Uint16ValueSlice converts a slice of uint16 pointers into a slice of +// uint16 values +func Uint16ValueSlice(src []*uint16) []uint16 { + dst := make([]uint16, len(src)) + for i := 0; i < len(src); i++ { + if src[i] != nil { + dst[i] = *(src[i]) + } + } + return dst +} + +// Uint16Map converts a string map of uint16 values into a string +// map of uint16 pointers +func Uint16Map(src map[string]uint16) map[string]*uint16 { + dst := make(map[string]*uint16) + for k, val := range src { + v := val + dst[k] = &v + } + return dst +} + +// Uint16ValueMap converts a string map of uint16 pointers into a string +// map of uint16 values +func Uint16ValueMap(src map[string]*uint16) map[string]uint16 { + dst := make(map[string]uint16) + for k, val := range src { + if val != nil { + dst[k] = *val + } + } + return dst +} + +// Uint32 returns a pointer to the uint32 value passed in. +func Uint32(v uint32) *uint32 { + return &v +} + +// Uint32Value returns the value of the uint32 pointer passed in or +// 0 if the pointer is nil. 
+func Uint32Value(v *uint32) uint32 { + if v != nil { + return *v + } + return 0 +} + +// Uint32Slice converts a slice of uint32 values into a slice of +// uint32 pointers +func Uint32Slice(src []uint32) []*uint32 { + dst := make([]*uint32, len(src)) + for i := 0; i < len(src); i++ { + dst[i] = &(src[i]) + } + return dst +} + +// Uint32ValueSlice converts a slice of uint32 pointers into a slice of +// uint32 values +func Uint32ValueSlice(src []*uint32) []uint32 { + dst := make([]uint32, len(src)) + for i := 0; i < len(src); i++ { + if src[i] != nil { + dst[i] = *(src[i]) + } + } + return dst +} + +// Uint32Map converts a string map of uint32 values into a string +// map of uint32 pointers +func Uint32Map(src map[string]uint32) map[string]*uint32 { + dst := make(map[string]*uint32) + for k, val := range src { + v := val + dst[k] = &v + } + return dst +} + +// Uint32ValueMap converts a string map of uint32 pointers into a string +// map of uint32 values +func Uint32ValueMap(src map[string]*uint32) map[string]uint32 { + dst := make(map[string]uint32) + for k, val := range src { + if val != nil { + dst[k] = *val + } + } + return dst +} + +// Uint64 returns a pointer to the uint64 value passed in. +func Uint64(v uint64) *uint64 { + return &v +} + +// Uint64Value returns the value of the uint64 pointer passed in or +// 0 if the pointer is nil. 
+func Uint64Value(v *uint64) uint64 { + if v != nil { + return *v + } + return 0 +} + +// Uint64Slice converts a slice of uint64 values into a slice of +// uint64 pointers +func Uint64Slice(src []uint64) []*uint64 { + dst := make([]*uint64, len(src)) + for i := 0; i < len(src); i++ { + dst[i] = &(src[i]) + } + return dst +} + +// Uint64ValueSlice converts a slice of uint64 pointers into a slice of +// uint64 values +func Uint64ValueSlice(src []*uint64) []uint64 { + dst := make([]uint64, len(src)) + for i := 0; i < len(src); i++ { + if src[i] != nil { + dst[i] = *(src[i]) + } + } + return dst +} + +// Uint64Map converts a string map of uint64 values into a string +// map of uint64 pointers +func Uint64Map(src map[string]uint64) map[string]*uint64 { + dst := make(map[string]*uint64) + for k, val := range src { + v := val + dst[k] = &v + } + return dst +} + +// Uint64ValueMap converts a string map of uint64 pointers into a string +// map of uint64 values +func Uint64ValueMap(src map[string]*uint64) map[string]uint64 { + dst := make(map[string]uint64) + for k, val := range src { + if val != nil { + dst[k] = *val + } + } + return dst +} + +// Float32 returns a pointer to the float32 value passed in. +func Float32(v float32) *float32 { + return &v +} + +// Float32Value returns the value of the float32 pointer passed in or +// 0 if the pointer is nil. 
+func Float32Value(v *float32) float32 { + if v != nil { + return *v + } + return 0 +} + +// Float32Slice converts a slice of float32 values into a slice of +// float32 pointers +func Float32Slice(src []float32) []*float32 { + dst := make([]*float32, len(src)) + for i := 0; i < len(src); i++ { + dst[i] = &(src[i]) + } + return dst +} + +// Float32ValueSlice converts a slice of float32 pointers into a slice of +// float32 values +func Float32ValueSlice(src []*float32) []float32 { + dst := make([]float32, len(src)) + for i := 0; i < len(src); i++ { + if src[i] != nil { + dst[i] = *(src[i]) + } + } + return dst +} + +// Float32Map converts a string map of float32 values into a string +// map of float32 pointers +func Float32Map(src map[string]float32) map[string]*float32 { + dst := make(map[string]*float32) + for k, val := range src { + v := val + dst[k] = &v + } + return dst +} + +// Float32ValueMap converts a string map of float32 pointers into a string +// map of float32 values +func Float32ValueMap(src map[string]*float32) map[string]float32 { + dst := make(map[string]float32) + for k, val := range src { + if val != nil { + dst[k] = *val + } + } + return dst +} + +// Float64 returns a pointer to the float64 value passed in. +func Float64(v float64) *float64 { + return &v +} + +// Float64Value returns the value of the float64 pointer passed in or +// 0 if the pointer is nil. 
+func Float64Value(v *float64) float64 {
+	if v != nil {
+		return *v
+	}
+	return 0
+}
+
+// Float64Slice converts a slice of float64 values into a slice of
+// float64 pointers
+func Float64Slice(src []float64) []*float64 {
+	dst := make([]*float64, len(src))
+	for i := 0; i < len(src); i++ {
+		dst[i] = &(src[i])
+	}
+	return dst
+}
+
+// Float64ValueSlice converts a slice of float64 pointers into a slice of
+// float64 values
+func Float64ValueSlice(src []*float64) []float64 {
+	dst := make([]float64, len(src))
+	for i := 0; i < len(src); i++ {
+		if src[i] != nil {
+			dst[i] = *(src[i])
+		}
+	}
+	return dst
+}
+
+// Float64Map converts a string map of float64 values into a string
+// map of float64 pointers
+func Float64Map(src map[string]float64) map[string]*float64 {
+	dst := make(map[string]*float64)
+	for k, val := range src {
+		v := val
+		dst[k] = &v
+	}
+	return dst
+}
+
+// Float64ValueMap converts a string map of float64 pointers into a string
+// map of float64 values
+func Float64ValueMap(src map[string]*float64) map[string]float64 {
+	dst := make(map[string]float64)
+	for k, val := range src {
+		if val != nil {
+			dst[k] = *val
+		}
+	}
+	return dst
+}
+
+// Time returns a pointer to the time.Time value passed in.
+func Time(v time.Time) *time.Time {
+	return &v
+}
+
+// TimeValue returns the value of the time.Time pointer passed in or
+// time.Time{} if the pointer is nil.
+func TimeValue(v *time.Time) time.Time {
+	if v != nil {
+		return *v
+	}
+	return time.Time{}
+}
+
+// SecondsTimeValue converts an int64 pointer to a time.Time value
+// representing seconds since Epoch or time.Time{} if the pointer is nil.
+func SecondsTimeValue(v *int64) time.Time {
+	if v != nil {
+		return time.Unix((*v / 1000), 0)
+	}
+	return time.Time{}
+}
+
+// MillisecondsTimeValue converts an int64 pointer to a time.Time value
+// representing milliseconds since Epoch or time.Time{} if the pointer is nil.
+func MillisecondsTimeValue(v *int64) time.Time {
+	if v != nil {
+		return time.Unix(0, (*v * 1000000))
+	}
+	return time.Time{}
+}
+
+// TimeUnixMilli returns a Unix timestamp in milliseconds from "January 1, 1970 UTC".
+// The result is undefined if the Unix time cannot be represented by an int64;
+// this includes calling TimeUnixMilli on the zero Time.
+//
+// This utility is useful for service APIs such as CloudWatch Logs which require
+// their unix time values to be in milliseconds.
+//
+// See Go stdlib https://golang.org/pkg/time/#Time.UnixNano for more information.
+func TimeUnixMilli(t time.Time) int64 {
+	return t.UnixNano() / int64(time.Millisecond/time.Nanosecond)
+}
+
+// TimeSlice converts a slice of time.Time values into a slice of
+// time.Time pointers
+func TimeSlice(src []time.Time) []*time.Time {
+	dst := make([]*time.Time, len(src))
+	for i := 0; i < len(src); i++ {
+		dst[i] = &(src[i])
+	}
+	return dst
+}
+
+// TimeValueSlice converts a slice of time.Time pointers into a slice of
+// time.Time values
+func TimeValueSlice(src []*time.Time) []time.Time {
+	dst := make([]time.Time, len(src))
+	for i := 0; i < len(src); i++ {
+		if src[i] != nil {
+			dst[i] = *(src[i])
+		}
+	}
+	return dst
+}
+
+// TimeMap converts a string map of time.Time values into a string
+// map of time.Time pointers
+func TimeMap(src map[string]time.Time) map[string]*time.Time {
+	dst := make(map[string]*time.Time)
+	for k, val := range src {
+		v := val
+		dst[k] = &v
+	}
+	return dst
+}
+
+// TimeValueMap converts a string map of time.Time pointers into a string
+// map of time.Time values
+func TimeValueMap(src map[string]*time.Time) map[string]time.Time {
+	dst := make(map[string]time.Time)
+	for k, val := range src {
+		if val != nil {
+			dst[k] = *val
+		}
+	}
+	return dst
+}
diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/corehandlers/custom_req.go
b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/corehandlers/custom_req.go new file mode 100644 index 000000000000..957fa8edcd5d --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/corehandlers/custom_req.go @@ -0,0 +1,41 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package corehandlers + +import ( + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/custom" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request" +) + +var CustomerRequestHandler = request.NamedHandler{ + Name: "core.CustomerRequestHandler", + Fn: func(r *request.Request) { + if r.Config.ExtendHttpRequest != nil { + r.Config.ExtendHttpRequest(r.Context(), r.HTTPRequest) + } + + if r.Config.ExtendHttpRequestWithMeta != nil { + r.Config.ExtendHttpRequestWithMeta(r.Context(), r.HTTPRequest, custom.RequestMetadata{ + ServiceName: r.ClientInfo.ServiceName, + Version: r.ClientInfo.APIVersion, + Action: r.Operation.Name, + HttpMethod: r.Operation.HTTPMethod, + Region: *r.Config.Region, + }) + } + }, +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/corehandlers/handlers.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/corehandlers/handlers.go new file mode 100644 index 000000000000..9f199fdb0a1d --- /dev/null +++ 
b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/corehandlers/handlers.go @@ -0,0 +1,249 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package corehandlers + +// Copy from https://github.com/aws/aws-sdk-go +// May have been modified by Beijing Volcanoengine Technology Ltd. + +import ( + "bytes" + "fmt" + "io/ioutil" + "net/http" + "net/url" + "regexp" + "strconv" + "time" + + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/credentials" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineerr" +) + +// Interface for matching types which also have a Len method. +type lener interface { + Len() int +} + +// BuildContentLengthHandler builds the content length of a request based on the body, +// or will use the HTTPRequest.Header's "Content-Length" if defined. If unable +// to determine the request body length and no "Content-Length" was specified it will panic. +// +// The Content-Length will only be added to the request if the length of the body +// is greater than 0. If the body is empty or the current `Content-Length` +// header is <= 0, the header will also be stripped.
+var BuildContentLengthHandler = request.NamedHandler{Name: "core.BuildContentLengthHandler", Fn: func(r *request.Request) { + var length int64 + + if slength := r.HTTPRequest.Header.Get("Content-Length"); slength != "" { + length, _ = strconv.ParseInt(slength, 10, 64) + } else { + if r.Body != nil { + var err error + length, err = volcengine.SeekerLen(r.Body) + if err != nil { + r.Error = volcengineerr.New(request.ErrCodeSerialization, "failed to get request body's length", err) + return + } + } + } + + if length > 0 { + r.HTTPRequest.ContentLength = length + r.HTTPRequest.Header.Set("Content-Length", fmt.Sprintf("%d", length)) + } else { + r.HTTPRequest.ContentLength = 0 + r.HTTPRequest.Header.Del("Content-Length") + } +}} + +var reStatusCode = regexp.MustCompile(`^(\d{3})`) + +// ValidateReqSigHandler is a request handler to ensure that the request's +// signature doesn't expire before it is sent. This can happen when a request +// is built and signed significantly before it is sent, or when significant +// delays occur while retrying requests, which would cause the signature to expire. +var ValidateReqSigHandler = request.NamedHandler{ + Name: "core.ValidateReqSigHandler", + Fn: func(r *request.Request) { + // Unsigned requests are not signed + if r.Config.Credentials == credentials.AnonymousCredentials { + return + } + + signedTime := r.Time + if !r.LastSignedAt.IsZero() { + signedTime = r.LastSignedAt + } + + // 5 minutes to allow for some clock skew/delays in transmission. + // Would be improved with volcengine/volcengine-go-sdk#423 + if signedTime.Add(5 * time.Minute).After(time.Now()) { + return + } + + fmt.Println("request expired, resigning") + r.Sign() + }, +} + +// SendHandler is a request handler to send service request using HTTP client.
+var SendHandler = request.NamedHandler{ + Name: "core.SendHandler", + Fn: func(r *request.Request) { + sender := sendFollowRedirects + if r.DisableFollowRedirects { + sender = sendWithoutFollowRedirects + } + + if request.NoBody == r.HTTPRequest.Body { + // Strip off the request body if the NoBody reader was used as a + // placeholder for a request body. This prevents the SDK from + // making requests with a request body when it would be invalid + // to do so. + // + // Use a shallow copy of the http.Request to ensure the race condition + // of transport on Body will not trigger + reqOrig, reqCopy := r.HTTPRequest, *r.HTTPRequest + reqCopy.Body = nil + r.HTTPRequest = &reqCopy + defer func() { + r.HTTPRequest = reqOrig + }() + } + + var err error + r.HTTPResponse, err = sender(r) + if err != nil { + handleSendError(r, err) + } + }, +} + +func sendFollowRedirects(r *request.Request) (*http.Response, error) { + return r.Config.HTTPClient.Do(r.HTTPRequest) +} + +func sendWithoutFollowRedirects(r *request.Request) (*http.Response, error) { + transport := r.Config.HTTPClient.Transport + if transport == nil { + transport = http.DefaultTransport + } + + return transport.RoundTrip(r.HTTPRequest) +} + +func handleSendError(r *request.Request, err error) { + // Prevent leaking if an HTTPResponse was returned. Clean up + // the body. + if r.HTTPResponse != nil { + r.HTTPResponse.Body.Close() + } + // Capture the case where url.Error is returned for error processing + // response. e.g. 301 without location header comes back as string + // error and r.HTTPResponse is nil. Other URL redirect errors will + // come back in a similar way.
+ if e, ok := err.(*url.Error); ok && e.Err != nil { + if s := reStatusCode.FindStringSubmatch(e.Err.Error()); s != nil { + code, _ := strconv.ParseInt(s[1], 10, 64) + r.HTTPResponse = &http.Response{ + StatusCode: int(code), + Status: http.StatusText(int(code)), + Body: ioutil.NopCloser(bytes.NewReader([]byte{})), + } + return + } + } + if r.HTTPResponse == nil { + // Add a dummy request response object to ensure the HTTPResponse + // value is consistent. + r.HTTPResponse = &http.Response{ + StatusCode: int(0), + Status: http.StatusText(int(0)), + Body: ioutil.NopCloser(bytes.NewReader([]byte{})), + } + } + // Catch all request errors, and let the default retrier determine + // if the error is retryable. + r.Error = volcengineerr.New("RequestError", "send request failed", err) + + // Override the error with a context canceled error, if the context was canceled. + ctx := r.Context() + select { + case <-ctx.Done(): + r.Error = volcengineerr.New(request.CanceledErrorCode, + "request context canceled", ctx.Err()) + r.Retryable = volcengine.Bool(false) + default: + } +} + +// ValidateResponseHandler is a request handler to validate service response. +var ValidateResponseHandler = request.NamedHandler{Name: "core.ValidateResponseHandler", Fn: func(r *request.Request) { + if r.HTTPResponse.StatusCode == 0 || r.HTTPResponse.StatusCode >= 300 { + // this may be replaced by an UnmarshalError handler + r.Error = volcengineerr.New("UnknownError", "unknown error", nil) + } +}} + +// AfterRetryHandler performs final checks to determine if the request should +// be retried and how long to delay.
+var AfterRetryHandler = request.NamedHandler{ + Name: "core.AfterRetryHandler", + Fn: func(r *request.Request) { + // If one of the other handlers already set the retry state + // we don't want to override it based on the service's state + if r.Retryable == nil || volcengine.BoolValue(r.Config.EnforceShouldRetryCheck) { + r.Retryable = volcengine.Bool(r.ShouldRetry(r)) + } + + if r.WillRetry() { + r.RetryDelay = r.RetryRules(r) + + if sleepFn := r.Config.SleepDelay; sleepFn != nil { + // Support SleepDelay for backwards compatibility and testing + sleepFn(r.RetryDelay) + } else if err := volcengine.SleepWithContext(r.Context(), r.RetryDelay); err != nil { + r.Error = volcengineerr.New(request.CanceledErrorCode, + "request context canceled", err) + r.Retryable = volcengine.Bool(false) + return + } + + // when the expired token exception occurs the credentials + // need to be expired locally so that the next request to + // get credentials will trigger a credentials refresh. + if r.IsErrorExpired() { + r.Config.Credentials.Expire() + } + + r.RetryCount++ + r.Error = nil + } + }} + +// ValidateEndpointHandler is a request handler to validate a request had the +// appropriate Region and Endpoint set. Will set r.Error if the endpoint or +// region is not valid. 
+var ValidateEndpointHandler = request.NamedHandler{Name: "core.ValidateEndpointHandler", Fn: func(r *request.Request) { + if r.ClientInfo.SigningRegion == "" && volcengine.StringValue(r.Config.Region) == "" && r.Config.DynamicCredentials == nil { + r.Error = volcengine.ErrMissingRegion + } else if r.ClientInfo.Endpoint == "" { + r.Error = volcengine.ErrMissingEndpoint + } +}} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/corehandlers/param_validator.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/corehandlers/param_validator.go new file mode 100644 index 000000000000..adc113ef23f2 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/corehandlers/param_validator.go @@ -0,0 +1,36 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package corehandlers + +// Copy from https://github.com/aws/aws-sdk-go +// May have been modified by Beijing Volcanoengine Technology Ltd. + +import "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request" + +// ValidateParametersHandler is a request handler to validate the input parameters. +// Validating parameters only has meaning if done prior to the request being sent. 
+var ValidateParametersHandler = request.NamedHandler{Name: "core.ValidateParametersHandler", Fn: func(r *request.Request) { + if !r.ParamsFilled() { + return + } + + if v, ok := r.Params.(request.Validator); ok { + if err := v.Validate(); err != nil { + r.Error = err + } + } +}} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/corehandlers/user_agent.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/corehandlers/user_agent.go new file mode 100644 index 000000000000..34c24637e990 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/corehandlers/user_agent.go @@ -0,0 +1,56 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package corehandlers + +// Copy from https://github.com/aws/aws-sdk-go +// May have been modified by Beijing Volcanoengine Technology Ltd. + +import ( + "os" + "runtime" + + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request" +) + +// SDKVersionUserAgentHandler is a request handler for adding the SDK Version +// to the user agent. 
+var SDKVersionUserAgentHandler = request.NamedHandler{ + Name: "core.SDKVersionUserAgentHandler", + Fn: request.MakeAddToUserAgentHandler(volcengine.SDKName, volcengine.SDKVersion, + runtime.Version(), runtime.GOOS, runtime.GOARCH), +} + +const execEnvVar = `VOLCSTACK_EXECUTION_ENV` +const execEnvUAKey = `exec-env` + +// AddHostExecEnvUserAgentHandler is a request handler appending the SDK's +// execution environment to the user agent. +// +// If the environment variable VOLCSTACK_EXECUTION_ENV is set, its value will be +// appended to the user agent string. +var AddHostExecEnvUserAgentHandler = request.NamedHandler{ + Name: "core.AddHostExecEnvUserAgentHandler", + Fn: func(r *request.Request) { + v := os.Getenv(execEnvVar) + if len(v) == 0 { + return + } + + request.AddToUserAgent(r, execEnvUAKey+"/"+v) + }, +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/credentials/chain_provider.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/credentials/chain_provider.go new file mode 100644 index 000000000000..39cd505a0e08 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/credentials/chain_provider.go @@ -0,0 +1,104 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package credentials + +// Copy from https://github.com/aws/aws-sdk-go +// May have been modified by Beijing Volcanoengine Technology Ltd. 
+ +import ( + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineerr" +) + +var ( + // ErrNoValidProvidersFoundInChain is returned when there are no valid + // providers in the ChainProvider. + // + // This has been deprecated. For verbose error messaging set + // volcengine.Config.CredentialsChainVerboseErrors to true. + ErrNoValidProvidersFoundInChain = volcengineerr.New("NoCredentialProviders", + `no valid providers in chain. Deprecated. + For verbose messaging see volcengine.Config.CredentialsChainVerboseErrors`, + nil) +) + +// A ChainProvider will search for a provider which returns credentials +// and cache that provider until Retrieve is called again. +// +// The ChainProvider provides a way of chaining multiple providers together +// which will pick the first available using priority order of the Providers +// in the list. +// +// If none of the Providers retrieve valid credentials Value, ChainProvider's +// Retrieve() will return the error ErrNoValidProvidersFoundInChain. +// +// If a Provider is found which returns valid credentials Value ChainProvider +// will cache that Provider for all calls to IsExpired(), until Retrieve is +// called again. +// +// Example of ChainProvider to be used with an EnvProvider and EC2RoleProvider. +// In this example EnvProvider will first check if any credentials are available +// via the environment variables. If there are none ChainProvider will check +// the next Provider in the list, EC2RoleProvider in this case. If EC2RoleProvider +// does not return any credentials ChainProvider will return the error +// ErrNoValidProvidersFoundInChain. +type ChainProvider struct { + Providers []Provider + curr Provider + VerboseErrors bool +} + +// NewChainCredentials returns a pointer to a new Credentials object +// wrapping a chain of providers.
+func NewChainCredentials(providers []Provider) *Credentials { + return NewCredentials(&ChainProvider{ + Providers: append([]Provider{}, providers...), + }) +} + +// Retrieve returns the credentials value or error if no provider returned +// without error. +// +// If a provider is found it will be cached and any calls to IsExpired() +// will return the expired state of the cached provider. +func (c *ChainProvider) Retrieve() (Value, error) { + var errs []error + for _, p := range c.Providers { + creds, err := p.Retrieve() + if err == nil { + c.curr = p + return creds, nil + } + errs = append(errs, err) + } + c.curr = nil + + var err error + err = ErrNoValidProvidersFoundInChain + if c.VerboseErrors { + err = volcengineerr.NewBatchError("NoCredentialProviders", "no valid providers in chain", errs) + } + return Value{}, err +} + +// IsExpired will return the expired state of the currently cached provider +// if there is one. If there is no current provider, true will be returned. +func (c *ChainProvider) IsExpired() bool { + if c.curr != nil { + return c.curr.IsExpired() + } + + return true +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/credentials/credentials.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/credentials/credentials.go new file mode 100644 index 000000000000..7a5ee8775e4b --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/credentials/credentials.go @@ -0,0 +1,337 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and +limitations under the License. +*/ + +// Package credentials provides credential retrieval and management +// +// The Credentials is the primary method of getting access to and managing +// credentials Values. Using dependency injection retrieval of the credential +// values is handled by an object which satisfies the Provider interface. +// +// By default the Credentials.Get() will cache the successful result of a +// Provider's Retrieve() until Provider.IsExpired() returns true, at which +// point Credentials will call Provider's Retrieve() to get new credential Value. +// +// The Provider is responsible for determining when credentials Value have expired. +// It is also important to note that Credentials will always call Retrieve the +// first time Credentials.Get() is called. +// +// Example of using the environment variable credentials. +// +// creds := credentials.NewEnvCredentials() +// +// // Retrieve the credentials value +// credValue, err := creds.Get() +// if err != nil { +// // handle error +// } +// +// Example of forcing credentials to expire and be refreshed on the next Get(). +// This may be helpful to proactively expire credentials and refresh them sooner +// than they would naturally expire on their own. +// +// creds := credentials.NewCredentials(&ec2rolecreds.EC2RoleProvider{}) +// creds.Expire() +// credsValue, err := creds.Get() +// // New credentials will be retrieved instead of from cache. +// +// # Custom Provider +// +// Each Provider built into this package also provides a helper method to generate +// a Credentials pointer setup with the provider. To use a custom Provider just +// create a type which satisfies the Provider interface and pass it to the +// NewCredentials method.
+// +// type MyProvider struct{} +// func (m *MyProvider) Retrieve() (Value, error) {...} +// func (m *MyProvider) IsExpired() bool {...} +// +// creds := credentials.NewCredentials(&MyProvider{}) +// credValue, err := creds.Get() +package credentials + +// Copy from https://github.com/aws/aws-sdk-go +// May have been modified by Beijing Volcanoengine Technology Ltd. + +import ( + "fmt" + "sync" + "time" + + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volc-sdk-golang/base" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineerr" +) + +// AnonymousCredentials is an empty Credential object that can be used as +// dummy placeholder credentials for requests that do not need to be signed. +var AnonymousCredentials = NewStaticCredentials("", "", "") + +// A Value is the Volcengine credentials value for individual credential fields. +type Value struct { + // Volcengine Access key ID + AccessKeyID string + + // Volcengine Secret Access Key + SecretAccessKey string + + // Volcengine Session Token + SessionToken string + + // Provider used to get credentials + ProviderName string +} + +// HasKeys returns if the credentials Value has both AccessKeyID and +// SecretAccessKey value set. +func (v Value) HasKeys() bool { + return len(v.AccessKeyID) != 0 && len(v.SecretAccessKey) != 0 +} + +// A Provider is the interface for any component which will provide credentials +// Value. A provider is required to manage its own Expired state, and what it +// means to be expired. +// +// The Provider should not need to implement its own mutexes, because +// that will be managed by Credentials. +type Provider interface { + // Retrieve returns nil if it successfully retrieved the value. + // An error is returned if the value was not obtainable, or empty. + Retrieve() (Value, error) + + // IsExpired returns if the credentials are no longer valid, and need + // to be retrieved.
+ IsExpired() bool +} + +// An Expirer is an interface that Providers can implement to expose the expiration +// time, if known. If the Provider cannot accurately provide this info, +// it should not implement this interface. +type Expirer interface { + // The time at which the credentials are no longer valid + ExpiresAt() time.Time +} + +// An ErrorProvider is a stub credentials provider that always returns an error. +// It is used by the SDK when constructing a known provider is not possible +// due to an error. +type ErrorProvider struct { + // The error to be returned from Retrieve + Err error + + // The provider name to set on the returned Value + ProviderName string +} + +// Retrieve will always return the error that the ErrorProvider was created with. +func (p ErrorProvider) Retrieve() (Value, error) { + return Value{ProviderName: p.ProviderName}, p.Err +} + +// IsExpired will always return not expired. +func (p ErrorProvider) IsExpired() bool { + return false +} + +// An Expiry provides shared expiration logic to be used by credentials +// providers to implement expiry functionality. +// +// The best method to use this struct is as an anonymous field within the +// provider's struct. +// +// Example: +// +// type EC2RoleProvider struct { +// Expiry +// ... +// } +type Expiry struct { + // The date/time when to expire on + expiration time.Time + + // If set will be used by IsExpired to determine the current time. + // Defaults to time.Now if CurrentTime is not set. Available for testing + // to be able to mock out the current time. + CurrentTime func() time.Time +} + +// SetExpiration sets the expiration IsExpired will check when called. +// +// If window is greater than 0 the expiration time will be reduced by the +// window value. +// +// Using a window is helpful to trigger credentials to expire sooner than +// the expiration time given to ensure no requests are made with expired +// tokens.
+func (e *Expiry) SetExpiration(expiration time.Time, window time.Duration) { + e.expiration = expiration + if window > 0 { + e.expiration = e.expiration.Add(-window) + } +} + +// IsExpired returns if the credentials are expired. +func (e *Expiry) IsExpired() bool { + curTime := e.CurrentTime + if curTime == nil { + curTime = time.Now + } + return e.expiration.Before(curTime()) +} + +// ExpiresAt returns the expiration time of the credential +func (e *Expiry) ExpiresAt() time.Time { + return e.expiration +} + +// A Credentials provides concurrency safe retrieval of Volcengine credentials Value. +// Credentials will cache the credentials value until they expire. Once the value +// expires the next Get will attempt to retrieve valid credentials. +// +// Credentials is safe to use across multiple goroutines and will manage the +// synchronous state so the Providers do not need to implement their own +// synchronization. +// +// The first Credentials.Get() will always call Provider.Retrieve() to get the +// first instance of the credentials Value. All calls to Get() after that +// will return the cached credentials Value until IsExpired() returns true. +type Credentials struct { + creds Value + forceRefresh bool + + m sync.RWMutex + + provider Provider +} + +// NewCredentials returns a pointer to a new Credentials with the provider set. +func NewCredentials(provider Provider) *Credentials { + return &Credentials{ + provider: provider, + forceRefresh: true, + } +} + +// NewExpireAbleCredentials returns a pointer to a new Credentials with the provider set and disable forceRefresh. +func NewExpireAbleCredentials(provider Provider) *Credentials { + return &Credentials{ + provider: provider, + } +} + +// Get returns the credentials value, or error if the credentials Value failed +// to be retrieved. +// +// Will return the cached credentials Value if it has not expired. 
If the +// credentials Value has expired the Provider's Retrieve() will be called +// to refresh the credentials. +// +// If Credentials.Expire() was called the credentials Value will be force +// expired, and the next call to Get() will cause them to be refreshed. +func (c *Credentials) Get() (Value, error) { + // Check the cached credentials first with just the read lock. + c.m.RLock() + if !c.isExpired() { + creds := c.creds + c.m.RUnlock() + return creds, nil + } + c.m.RUnlock() + + // Credentials are expired; retrieve the credentials taking the full + // lock. + c.m.Lock() + defer c.m.Unlock() + + if c.isExpired() { + creds, err := c.provider.Retrieve() + if err != nil { + return Value{}, err + } + c.creds = creds + c.forceRefresh = false + } + + return c.creds, nil +} + +// Expire expires the credentials and forces them to be retrieved on the +// next call to Get(). +// +// This will override the Provider's expired state, and force Credentials +// to call the Provider's Retrieve(). +func (c *Credentials) Expire() { + c.m.Lock() + defer c.m.Unlock() + + c.forceRefresh = true +} + +// GetProvider returns the provider instance of the credentials +func (c *Credentials) GetProvider() Provider { + return c.provider +} + +// IsExpired returns if the credentials are no longer valid, and need +// to be retrieved. +// +// If the Credentials were forced to be expired with Expire() this will +// reflect that override. +func (c *Credentials) IsExpired() bool { + c.m.RLock() + defer c.m.RUnlock() + + return c.isExpired() +} + +// isExpired helper method wrapping the definition of expired credentials. +func (c *Credentials) isExpired() bool { + return c.forceRefresh || c.provider.IsExpired() +} + +// ExpiresAt provides access to the functionality of the Expirer interface of +// the underlying Provider, if it supports that interface. Otherwise, it returns
+func (c *Credentials) ExpiresAt() (time.Time, error) { + c.m.RLock() + defer c.m.RUnlock() + + expirer, ok := c.provider.(Expirer) + if !ok { + return time.Time{}, volcengineerr.New("ProviderNotExpirer", + fmt.Sprintf("provider %s does not support ExpiresAt()", c.creds.ProviderName), + nil) + } + if c.forceRefresh { + // set expiration time to the distant past + return time.Time{}, nil + } + return expirer.ExpiresAt(), nil +} + +func (c *Credentials) GetBase(region string, service string) (base.Credentials, error) { + value, err := c.Get() + + if err != nil { + return base.Credentials{}, err + } + + return base.Credentials{ + AccessKeyID: value.AccessKeyID, + SecretAccessKey: value.SecretAccessKey, + SessionToken: value.SessionToken, + Service: service, + Region: region, + }, nil +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/credentials/endpointcreds/provider.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/credentials/endpointcreds/provider.go new file mode 100644 index 000000000000..2bc1c3850ed9 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/credentials/endpointcreds/provider.go @@ -0,0 +1,193 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package endpointcreds + +// Copy from https://github.com/aws/aws-sdk-go +// May have been modified by Beijing Volcanoengine Technology Ltd. 
+ +import ( + "encoding/json" + "time" + + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/private/protocol/json/jsonutil" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/client" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/client/metadata" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/credentials" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineerr" +) + +// ProviderName is the name of the credentials provider. +const ProviderName = `CredentialsEndpointProvider` + +// Provider satisfies the credentials.Provider interface, and is a client to +// retrieve credentials from an arbitrary endpoint. +type Provider struct { + staticCreds bool + credentials.Expiry + + // Requires a Volcengine Client to make HTTP requests to the endpoint with; + // the Endpoint the request will be made to is provided by the volcengine.Config's + // Endpoint value. + Client *client.Client + + // ExpiryWindow will allow the credentials to trigger refreshing prior to + // the credentials actually expiring. This is beneficial so race conditions + // with expiring credentials do not cause requests to fail unexpectedly + // due to ExpiredTokenException exceptions. + // + // So an ExpiryWindow of 10s would cause calls to IsExpired() to return true + // 10 seconds before the credentials are actually expired. + // + // If ExpiryWindow is 0 or less it will be ignored. + ExpiryWindow time.Duration + + // Optional authorization token value if set will be used as the value of + // the Authorization header of the endpoint credential request.
+	AuthorizationToken string
+}
+
+// NewProviderClient returns a credentials Provider for retrieving Volcengine credentials
+// from an arbitrary endpoint.
+func NewProviderClient(cfg volcengine.Config, handlers request.Handlers, endpoint string, options ...func(*Provider)) credentials.Provider {
+	p := &Provider{
+		Client: client.New(
+			cfg,
+			metadata.ClientInfo{
+				ServiceName: "CredentialsEndpoint",
+				Endpoint:    endpoint,
+			},
+			handlers,
+		),
+	}
+
+	p.Client.Handlers.Unmarshal.PushBack(unmarshalHandler)
+	p.Client.Handlers.UnmarshalError.PushBack(unmarshalError)
+	p.Client.Handlers.Validate.Clear()
+	p.Client.Handlers.Validate.PushBack(validateEndpointHandler)
+
+	for _, option := range options {
+		option(p)
+	}
+
+	return p
+}
+
+// NewCredentialsClient returns a pointer to a new Credentials object
+// wrapping the endpoint credentials Provider.
+func NewCredentialsClient(cfg volcengine.Config, handlers request.Handlers, endpoint string, options ...func(*Provider)) *credentials.Credentials {
+	return credentials.NewCredentials(NewProviderClient(cfg, handlers, endpoint, options...))
+}
+
+// IsExpired returns true if the credentials retrieved are expired, or not yet
+// retrieved.
+func (p *Provider) IsExpired() bool {
+	if p.staticCreds {
+		return false
+	}
+	return p.Expiry.IsExpired()
+}
+
+// Retrieve will attempt to request the credentials from the endpoint the Provider
+// was configured for. An error will be returned if the retrieval fails.
+func (p *Provider) Retrieve() (credentials.Value, error) { + resp, err := p.getCredentials() + if err != nil { + return credentials.Value{ProviderName: ProviderName}, + volcengineerr.New("CredentialsEndpointError", "failed to load credentials", err) + } + + if resp.Expiration != nil { + p.SetExpiration(*resp.Expiration, p.ExpiryWindow) + } else { + p.staticCreds = true + } + + return credentials.Value{ + AccessKeyID: resp.AccessKeyID, + SecretAccessKey: resp.SecretAccessKey, + SessionToken: resp.Token, + ProviderName: ProviderName, + }, nil +} + +type getCredentialsOutput struct { + Expiration *time.Time + AccessKeyID string + SecretAccessKey string + Token string +} + +type errorOutput struct { + Code string `json:"code"` + Message string `json:"message"` +} + +func (p *Provider) getCredentials() (*getCredentialsOutput, error) { + op := &request.Operation{ + Name: "GetCredentials", + HTTPMethod: "GET", + } + + out := &getCredentialsOutput{} + req := p.Client.NewRequest(op, nil, out) + req.HTTPRequest.Header.Set("Accept", "application/json") + if authToken := p.AuthorizationToken; len(authToken) != 0 { + req.HTTPRequest.Header.Set("Authorization", authToken) + } + + return out, req.Send() +} + +func validateEndpointHandler(r *request.Request) { + if len(r.ClientInfo.Endpoint) == 0 { + r.Error = volcengine.ErrMissingEndpoint + } +} + +func unmarshalHandler(r *request.Request) { + defer r.HTTPResponse.Body.Close() + + out := r.Data.(*getCredentialsOutput) + if err := json.NewDecoder(r.HTTPResponse.Body).Decode(&out); err != nil { + r.Error = volcengineerr.New(request.ErrCodeSerialization, + "failed to decode endpoint credentials", + err, + ) + } +} + +func unmarshalError(r *request.Request) { + defer r.HTTPResponse.Body.Close() + + var errOut errorOutput + err := jsonutil.UnmarshalJSONError(&errOut, r.HTTPResponse.Body) + if err != nil { + r.Error = volcengineerr.NewRequestFailure( + volcengineerr.New(request.ErrCodeSerialization, + "failed to decode error message", 
err),
+			r.HTTPResponse.StatusCode,
+			r.RequestID,
+		)
+		return
+	}
+
+	// Response body format is not consistent between metadata endpoints.
+	// Grab the error message as a string and include that as the source error.
+	r.Error = volcengineerr.New(errOut.Code, errOut.Message, nil)
+}
diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/credentials/env_provider.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/credentials/env_provider.go
new file mode 100644
index 000000000000..14d4fc3e1749
--- /dev/null
+++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/credentials/env_provider.go
@@ -0,0 +1,93 @@
+/*
+Copyright 2023 The Kubernetes Authors.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package credentials

+// Copy from https://github.com/aws/aws-sdk-go
+// May have been modified by Beijing Volcanoengine Technology Ltd.
+
+import (
+	"os"
+
+	"k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineerr"
+)
+
+// EnvProviderName provides a name of Env provider
+const EnvProviderName = "EnvProvider"
+
+var (
+	// ErrAccessKeyIDNotFound is returned when the Volcengine Access Key ID can't be
+	// found in the process's environment.
+	ErrAccessKeyIDNotFound = volcengineerr.New("EnvAccessKeyNotFound", "VOLCSTACK_ACCESS_KEY_ID or VOLCSTACK_ACCESS_KEY not found in environment", nil)
+
+	// ErrSecretAccessKeyNotFound is returned when the Volcengine Secret Access Key
+	// can't be found in the process's environment.
+	ErrSecretAccessKeyNotFound = volcengineerr.New("EnvSecretNotFound", "VOLCSTACK_SECRET_ACCESS_KEY or VOLCSTACK_SECRET_KEY not found in environment", nil)
+)
+
+// An EnvProvider retrieves credentials from the environment variables of the
+// running process. Environment credentials never expire.
+//
+// Environment variables used:
+//
+// * Access Key ID: VOLCSTACK_ACCESS_KEY_ID or VOLCSTACK_ACCESS_KEY
+//
+// * Secret Access Key: VOLCSTACK_SECRET_ACCESS_KEY or VOLCSTACK_SECRET_KEY
+type EnvProvider struct {
+	retrieved bool
+}
+
+// NewEnvCredentials returns a pointer to a new Credentials object
+// wrapping the environment variable provider.
+func NewEnvCredentials() *Credentials {
+	return NewCredentials(&EnvProvider{})
+}
+
+// Retrieve retrieves the keys from the environment.
+func (e *EnvProvider) Retrieve() (Value, error) {
+	e.retrieved = false
+
+	id := os.Getenv("VOLCSTACK_ACCESS_KEY_ID")
+	if id == "" {
+		id = os.Getenv("VOLCSTACK_ACCESS_KEY")
+	}
+
+	secret := os.Getenv("VOLCSTACK_SECRET_ACCESS_KEY")
+	if secret == "" {
+		secret = os.Getenv("VOLCSTACK_SECRET_KEY")
+	}
+
+	if id == "" {
+		return Value{ProviderName: EnvProviderName}, ErrAccessKeyIDNotFound
+	}
+
+	if secret == "" {
+		return Value{ProviderName: EnvProviderName}, ErrSecretAccessKeyNotFound
+	}
+
+	e.retrieved = true
+	return Value{
+		AccessKeyID:     id,
+		SecretAccessKey: secret,
+		SessionToken:    os.Getenv("VOLCSTACK_SESSION_TOKEN"),
+		ProviderName:    EnvProviderName,
+	}, nil
+}
+
+// IsExpired returns true if the credentials have not yet been retrieved.
+func (e *EnvProvider) IsExpired() bool { + return !e.retrieved +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/credentials/processcreds/provider.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/credentials/processcreds/provider.go new file mode 100644 index 000000000000..a89835618353 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/credentials/processcreds/provider.go @@ -0,0 +1,368 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package processcreds + +// Copy from https://github.com/aws/aws-sdk-go +// May have been modified by Beijing Volcanoengine Technology Ltd. + +import ( + "bytes" + "encoding/json" + "fmt" + "io" + "io/ioutil" + "os" + "os/exec" + "runtime" + "strings" + "time" + + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/credentials" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineerr" +) + +const ( + // ProviderName is the name this credentials provider will label any + // returned credentials Value with. 
+	ProviderName = `ProcessProvider`
+
+	// ErrCodeProcessProviderParse error parsing process output
+	ErrCodeProcessProviderParse = "ProcessProviderParseError"
+
+	// ErrCodeProcessProviderVersion version error in output
+	ErrCodeProcessProviderVersion = "ProcessProviderVersionError"
+
+	// ErrCodeProcessProviderRequired required attribute missing in output
+	ErrCodeProcessProviderRequired = "ProcessProviderRequiredError"
+
+	// ErrCodeProcessProviderExecution execution of command failed
+	ErrCodeProcessProviderExecution = "ProcessProviderExecutionError"
+
+	// errMsgProcessProviderTimeout process took longer than allowed
+	errMsgProcessProviderTimeout = "credential process timed out"
+
+	// errMsgProcessProviderProcess process error
+	errMsgProcessProviderProcess = "error in credential_process"
+
+	// errMsgProcessProviderParse problem parsing output
+	errMsgProcessProviderParse = "parse failed of credential_process output"
+
+	// errMsgProcessProviderVersion version error in output
+	errMsgProcessProviderVersion = "wrong version in process output (not 1)"
+
+	// errMsgProcessProviderMissKey missing access key id in output
+	errMsgProcessProviderMissKey = "missing AccessKeyId in process output"
+
+	// errMsgProcessProviderMissSecret missing secret access key in output
+	errMsgProcessProviderMissSecret = "missing SecretAccessKey in process output"
+
+	// errMsgProcessProviderPrepareCmd prepare of command failed
+	errMsgProcessProviderPrepareCmd = "failed to prepare command"
+
+	// errMsgProcessProviderEmptyCmd command must not be empty
+	errMsgProcessProviderEmptyCmd = "command must not be empty"
+
+	// errMsgProcessProviderPipe failed to initialize pipe
+	errMsgProcessProviderPipe = "failed to initialize pipe"
+
+	// DefaultDuration is the default amount of time (15 minutes) that the
+	// credentials will be valid for.
+	DefaultDuration = time.Duration(15) * time.Minute
+
+	// DefaultBufSize limits buffer size from growing to an enormous
+	// amount due to a faulty process.
+	DefaultBufSize = 1024
+
+	// DefaultTimeout default limit on time a process can run.
+	DefaultTimeout = time.Duration(1) * time.Minute
+)
+
+// ProcessProvider satisfies the credentials.Provider interface, and is a
+// client to retrieve credentials from a process.
+type ProcessProvider struct {
+	staticCreds bool
+	credentials.Expiry
+	originalCommand []string
+
+	// Expiry duration of the credentials. Defaults to 15 minutes if not set.
+	Duration time.Duration
+
+	// ExpiryWindow will allow the credentials to trigger refreshing prior to
+	// the credentials actually expiring. This is beneficial so race conditions
+	// with expiring credentials do not cause requests to fail unexpectedly
+	// due to ExpiredTokenException exceptions.
+	//
+	// So an ExpiryWindow of 10s would cause calls to IsExpired() to return true
+	// 10 seconds before the credentials are actually expired.
+	//
+	// If ExpiryWindow is 0 or less it will be ignored.
+	ExpiryWindow time.Duration
+
+	// The OS command to execute, which should print JSON credential
+	// information to its stdout.
+	command *exec.Cmd
+
+	// MaxBufSize limits memory usage from growing to an enormous
+	// amount due to a faulty process.
+	MaxBufSize int
+
+	// Timeout limits the time a process can run.
+	Timeout time.Duration
+}
+
+// NewCredentials returns a pointer to a new Credentials object wrapping the
+// ProcessProvider. The credentials will expire every 15 minutes by default.
+func NewCredentials(command string, options ...func(*ProcessProvider)) *credentials.Credentials { + p := &ProcessProvider{ + command: exec.Command(command), + Duration: DefaultDuration, + Timeout: DefaultTimeout, + MaxBufSize: DefaultBufSize, + } + + for _, option := range options { + option(p) + } + + return credentials.NewCredentials(p) +} + +// NewCredentialsTimeout returns a pointer to a new Credentials object with +// the specified command and timeout, and default duration and max buffer size. +func NewCredentialsTimeout(command string, timeout time.Duration) *credentials.Credentials { + p := NewCredentials(command, func(opt *ProcessProvider) { + opt.Timeout = timeout + }) + + return p +} + +// NewCredentialsCommand returns a pointer to a new Credentials object with +// the specified command, and default timeout, duration and max buffer size. +func NewCredentialsCommand(command *exec.Cmd, options ...func(*ProcessProvider)) *credentials.Credentials { + p := &ProcessProvider{ + command: command, + Duration: DefaultDuration, + Timeout: DefaultTimeout, + MaxBufSize: DefaultBufSize, + } + + for _, option := range options { + option(p) + } + + return credentials.NewCredentials(p) +} + +type credentialProcessResponse struct { + Version int + AccessKeyID string `json:"AccessKeyId"` + SecretAccessKey string + SessionToken string + Expiration *time.Time +} + +// Retrieve executes the 'credential_process' and returns the credentials. 
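The Retrieve method that follows validates the JSON document printed by the external process. That contract can be exercised standalone; `parseProcessOutput` below is an illustrative re-implementation of the same checks using only `encoding/json`, not SDK code:

```go
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

// processResponse mirrors the fields a credential process must emit.
type processResponse struct {
	Version         int
	AccessKeyID     string `json:"AccessKeyId"`
	SecretAccessKey string
	SessionToken    string
	Expiration      *time.Time // nil means static, non-expiring credentials
}

// parseProcessOutput applies the same validation order as Retrieve:
// parse, then Version, then the two required key fields.
func parseProcessOutput(out []byte) (*processResponse, error) {
	resp := &processResponse{}
	if err := json.Unmarshal(out, resp); err != nil {
		return nil, fmt.Errorf("parse failed of credential_process output: %w", err)
	}
	if resp.Version != 1 {
		return nil, fmt.Errorf("wrong version in process output (not 1)")
	}
	if resp.AccessKeyID == "" {
		return nil, fmt.Errorf("missing AccessKeyId in process output")
	}
	if resp.SecretAccessKey == "" {
		return nil, fmt.Errorf("missing SecretAccessKey in process output")
	}
	return resp, nil
}

func main() {
	out := []byte(`{"Version":1,"AccessKeyId":"AKexample","SecretAccessKey":"secret","Expiration":"2030-01-02T15:04:05Z"}`)
	resp, err := parseProcessOutput(out)
	if err != nil {
		panic(err)
	}
	fmt.Println(resp.AccessKeyID, resp.Expiration == nil)
}
```

A present `Expiration` drives `SetExpiration`, while a nil one marks the credentials static, matching the `staticCreds` handling in Retrieve.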
+func (p *ProcessProvider) Retrieve() (credentials.Value, error) { + out, err := p.executeCredentialProcess() + if err != nil { + return credentials.Value{ProviderName: ProviderName}, err + } + + // Serialize and validate response + resp := &credentialProcessResponse{} + if err = json.Unmarshal(out, resp); err != nil { + return credentials.Value{ProviderName: ProviderName}, volcengineerr.New( + ErrCodeProcessProviderParse, + fmt.Sprintf("%s: %s", errMsgProcessProviderParse, string(out)), + err) + } + + if resp.Version != 1 { + return credentials.Value{ProviderName: ProviderName}, volcengineerr.New( + ErrCodeProcessProviderVersion, + errMsgProcessProviderVersion, + nil) + } + + if len(resp.AccessKeyID) == 0 { + return credentials.Value{ProviderName: ProviderName}, volcengineerr.New( + ErrCodeProcessProviderRequired, + errMsgProcessProviderMissKey, + nil) + } + + if len(resp.SecretAccessKey) == 0 { + return credentials.Value{ProviderName: ProviderName}, volcengineerr.New( + ErrCodeProcessProviderRequired, + errMsgProcessProviderMissSecret, + nil) + } + + // Handle expiration + p.staticCreds = resp.Expiration == nil + if resp.Expiration != nil { + p.SetExpiration(*resp.Expiration, p.ExpiryWindow) + } + + return credentials.Value{ + ProviderName: ProviderName, + AccessKeyID: resp.AccessKeyID, + SecretAccessKey: resp.SecretAccessKey, + SessionToken: resp.SessionToken, + }, nil +} + +// IsExpired returns true if the credentials retrieved are expired, or not yet +// retrieved. +func (p *ProcessProvider) IsExpired() bool { + if p.staticCreds { + return false + } + return p.Expiry.IsExpired() +} + +// prepareCommand prepares the command to be executed. 
+func (p *ProcessProvider) prepareCommand() error { + + var cmdArgs []string + if runtime.GOOS == "windows" { + cmdArgs = []string{"cmd.exe", "/C"} + } else { + cmdArgs = []string{"sh", "-c"} + } + + if len(p.originalCommand) == 0 { + p.originalCommand = make([]string, len(p.command.Args)) + copy(p.originalCommand, p.command.Args) + + // check for empty command because it succeeds + if len(strings.TrimSpace(p.originalCommand[0])) < 1 { + return volcengineerr.New( + ErrCodeProcessProviderExecution, + fmt.Sprintf( + "%s: %s", + errMsgProcessProviderPrepareCmd, + errMsgProcessProviderEmptyCmd), + nil) + } + } + + cmdArgs = append(cmdArgs, p.originalCommand...) + p.command = exec.Command(cmdArgs[0], cmdArgs[1:]...) + p.command.Env = os.Environ() + + return nil +} + +// executeCredentialProcess starts the credential process on the OS and +// returns the results or an error. +func (p *ProcessProvider) executeCredentialProcess() ([]byte, error) { + + if err := p.prepareCommand(); err != nil { + return nil, err + } + + // Setup the pipes + outReadPipe, outWritePipe, err := os.Pipe() + if err != nil { + return nil, volcengineerr.New( + ErrCodeProcessProviderExecution, + errMsgProcessProviderPipe, + err) + } + + p.command.Stderr = os.Stderr // display stderr on console for MFA + p.command.Stdout = outWritePipe // get creds json on process's stdout + p.command.Stdin = os.Stdin // enable stdin for MFA + + output := bytes.NewBuffer(make([]byte, 0, p.MaxBufSize)) + + stdoutCh := make(chan error, 1) + go readInput( + io.LimitReader(outReadPipe, int64(p.MaxBufSize)), + output, + stdoutCh) + + execCh := make(chan error, 1) + go executeCommand(*p.command, execCh) + + finished := false + var errors []error + for !finished { + select { + case readError := <-stdoutCh: + errors = appendError(errors, readError) + finished = true + case execError := <-execCh: + err := outWritePipe.Close() + errors = appendError(errors, err) + errors = appendError(errors, execError) + if errors != nil { + 
return output.Bytes(), volcengineerr.NewBatchError( + ErrCodeProcessProviderExecution, + errMsgProcessProviderProcess, + errors) + } + case <-time.After(p.Timeout): + finished = true + return output.Bytes(), volcengineerr.NewBatchError( + ErrCodeProcessProviderExecution, + errMsgProcessProviderTimeout, + errors) // errors can be nil + } + } + + out := output.Bytes() + + if runtime.GOOS == "windows" { + // windows adds slashes to quotes + out = []byte(strings.Replace(string(out), `\"`, `"`, -1)) + } + + return out, nil +} + +// appendError conveniently checks for nil before appending slice +func appendError(errors []error, err error) []error { + if err != nil { + return append(errors, err) + } + return errors +} + +func executeCommand(cmd exec.Cmd, exec chan error) { + // Start the command + err := cmd.Start() + if err == nil { + err = cmd.Wait() + } + + exec <- err +} + +func readInput(r io.Reader, w io.Writer, read chan error) { + tee := io.TeeReader(r, w) + + _, err := ioutil.ReadAll(tee) + + if err == io.EOF { + err = nil + } + + read <- err // will only arrive here when write end of pipe is closed +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/credentials/shared_credentials_provider.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/credentials/shared_credentials_provider.go new file mode 100644 index 000000000000..ffe9d4d1dd8f --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/credentials/shared_credentials_provider.go @@ -0,0 +1,169 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. 
+You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package credentials + +// Copy from https://github.com/aws/aws-sdk-go +// May have been modified by Beijing Volcanoengine Technology Ltd. + +import ( + "fmt" + "os" + + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/internal/ini" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/internal/shareddefaults" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineerr" +) + +// SharedCredsProviderName provides a name of SharedCreds provider +const SharedCredsProviderName = "SharedCredentialsProvider" + +var ( + // ErrSharedCredentialsHomeNotFound is emitted when the user directory cannot be found. + ErrSharedCredentialsHomeNotFound = volcengineerr.New("UserHomeNotFound", "user home directory not found.", nil) +) + +// A SharedCredentialsProvider retrieves credentials from the current user's home +// directory, and keeps track if those credentials are expired. +// +// Profile ini file example: $HOME/.volcengine/credentials +type SharedCredentialsProvider struct { + // Path to the shared credentials file. + // + // If empty will look for "VOLCSTACK_SHARED_CREDENTIALS_FILE" env variable. If the + // env value is empty will default to current user's home directory. + // Linux/OSX: "$HOME/.volcengine/credentials" + // Windows: "%USERPROFILE%\.volcengine\credentials" + Filename string + + // VOLCSTACK Profile to extract credentials from the shared credentials file. 
If empty
+	// will default to environment variable "VOLCSTACK_PROFILE" or "default" if
+	// the environment variable is also not set.
+	Profile string
+
+	// retrieved states if the credentials have been successfully retrieved.
+	retrieved bool
+}
+
+// NewSharedCredentials returns a pointer to a new Credentials object
+// wrapping the Profile file provider.
+func NewSharedCredentials(filename, profile string) *Credentials {
+	return NewCredentials(&SharedCredentialsProvider{
+		Filename: filename,
+		Profile:  profile,
+	})
+}
+
+// Retrieve reads and extracts the shared credentials from the current
+// user's home directory.
+func (p *SharedCredentialsProvider) Retrieve() (Value, error) {
+	p.retrieved = false
+
+	filename, err := p.filename()
+	if err != nil {
+		return Value{ProviderName: SharedCredsProviderName}, err
+	}
+
+	creds, err := loadProfile(filename, p.profile())
+	if err != nil {
+		return Value{ProviderName: SharedCredsProviderName}, err
+	}
+
+	p.retrieved = true
+	return creds, nil
+}
+
+// IsExpired returns if the shared credentials have expired.
+func (p *SharedCredentialsProvider) IsExpired() bool {
+	return !p.retrieved
+}
+
+// loadProfile loads from the file pointed to by the shared credentials filename for profile.
+// The credentials retrieved from the profile will be returned, or an error will be
+// returned if it fails to read from the file, or the data is invalid.
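loadProfile, defined next, uses the SDK's internal ini package. The file format involved can be illustrated with a deliberately minimal stdlib-only sketch — `parseCredentialsProfile` is hypothetical and skips comments, quoting, and other INI features the real parser handles:

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseCredentialsProfile extracts key=value pairs from one [section]
// of an INI-style credentials file, e.g. $HOME/.volcengine/credentials.
func parseCredentialsProfile(contents, profile string) map[string]string {
	vals := map[string]string{}
	section := ""
	sc := bufio.NewScanner(strings.NewReader(contents))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if strings.HasPrefix(line, "[") && strings.HasSuffix(line, "]") {
			section = strings.Trim(line, "[]")
			continue
		}
		if section != profile {
			continue
		}
		if k, v, ok := strings.Cut(line, "="); ok {
			vals[strings.TrimSpace(k)] = strings.TrimSpace(v)
		}
	}
	return vals
}

func main() {
	creds := `[default]
volcengine_access_key_id = AKexample
volcengine_secret_access_key = secretexample
`
	p := parseCredentialsProfile(creds, "default")
	fmt.Println(p["volcengine_access_key_id"]) // AKexample
}
```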
+func loadProfile(filename, profile string) (Value, error) {
+	config, err := ini.OpenFile(filename)
+	if err != nil {
+		return Value{ProviderName: SharedCredsProviderName}, volcengineerr.New("SharedCredsLoad", "failed to load shared credentials file", err)
+	}
+
+	iniProfile, ok := config.GetSection(profile)
+	if !ok {
+		return Value{ProviderName: SharedCredsProviderName}, volcengineerr.New("SharedCredsLoad", "failed to get profile", nil)
+	}
+
+	id := iniProfile.String("volcengine_access_key_id")
+	if len(id) == 0 {
+		return Value{ProviderName: SharedCredsProviderName}, volcengineerr.New("SharedCredsAccessKey",
+			fmt.Sprintf("shared credentials %s in %s did not contain volcengine_access_key_id", profile, filename),
+			nil)
+	}
+
+	secret := iniProfile.String("volcengine_secret_access_key")
+	if len(secret) == 0 {
+		return Value{ProviderName: SharedCredsProviderName}, volcengineerr.New("SharedCredsSecret",
+			fmt.Sprintf("shared credentials %s in %s did not contain volcengine_secret_access_key", profile, filename),
+			nil)
+	}
+
+	// Default to empty string if not found
+	token := iniProfile.String("volcengine_session_token")
+
+	return Value{
+		AccessKeyID:     id,
+		SecretAccessKey: secret,
+		SessionToken:    token,
+		ProviderName:    SharedCredsProviderName,
+	}, nil
+}
+
+// filename returns the filename to use to read VOLCSTACK shared credentials.
+//
+// Will return an error if the user's home directory path cannot be found.
+func (p *SharedCredentialsProvider) filename() (string, error) {
+	if len(p.Filename) != 0 {
+		return p.Filename, nil
+	}
+
+	if p.Filename = os.Getenv("VOLCSTACK_SHARED_CREDENTIALS_FILE"); len(p.Filename) != 0 {
+		return p.Filename, nil
+	}
+
+	if home := shareddefaults.UserHomeDir(); len(home) == 0 {
+		// Backwards compatibility of the home directory not found error being returned.
+		// This error is too verbose; failure when opening the file would have been
+		// a better error to return.
+ return "", ErrSharedCredentialsHomeNotFound + } + + p.Filename = shareddefaults.SharedCredentialsFilename() + + return p.Filename, nil +} + +// profile returns the VOLCSTACK shared credentials profile. If empty will read +// environment variable "VOLCSTACK_PROFILE". If that is not set profile will +// return "default". +func (p *SharedCredentialsProvider) profile() string { + if p.Profile == "" { + p.Profile = os.Getenv("VOLCSTACK_PROFILE") + } + if p.Profile == "" { + p.Profile = "default" + } + + return p.Profile +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/credentials/static_provider.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/credentials/static_provider.go new file mode 100644 index 000000000000..6441fc7fffdb --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/credentials/static_provider.go @@ -0,0 +1,74 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package credentials + +// Copy from https://github.com/aws/aws-sdk-go +// May have been modified by Beijing Volcanoengine Technology Ltd. + +import ( + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineerr" +) + +// StaticProviderName provides a name of Static provider +const StaticProviderName = "StaticProvider" + +var ( + // ErrStaticCredentialsEmpty is emitted when static credentials are empty. 
+	ErrStaticCredentialsEmpty = volcengineerr.New("EmptyStaticCreds", "static credentials are empty", nil)
+)
+
+// A StaticProvider is a set of credentials which are set programmatically,
+// and will never expire.
+type StaticProvider struct {
+	Value
+}
+
+// NewStaticCredentials returns a pointer to a new Credentials object
+// wrapping a static credentials value provider.
+func NewStaticCredentials(id, secret, token string) *Credentials {
+	return NewCredentials(&StaticProvider{Value: Value{
+		AccessKeyID:     id,
+		SecretAccessKey: secret,
+		SessionToken:    token,
+	}})
+}
+
+// NewStaticCredentialsFromCreds returns a pointer to a new Credentials object
+// wrapping the static credentials value provider. Same as NewStaticCredentials
+// but takes the creds Value instead of individual fields.
+func NewStaticCredentialsFromCreds(creds Value) *Credentials {
+	return NewCredentials(&StaticProvider{Value: creds})
+}
+
+// Retrieve returns the credentials or an error if the credentials are invalid.
+func (s *StaticProvider) Retrieve() (Value, error) {
+	if s.AccessKeyID == "" || s.SecretAccessKey == "" {
+		return Value{ProviderName: StaticProviderName}, ErrStaticCredentialsEmpty
+	}
+
+	if len(s.Value.ProviderName) == 0 {
+		s.Value.ProviderName = StaticProviderName
+	}
+	return s.Value, nil
+}
+
+// IsExpired returns if the credentials are expired.
+//
+// For StaticProvider, the credentials never expire.
+func (s *StaticProvider) IsExpired() bool {
+	return false
+}
diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/credentials/sts_credentials.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/credentials/sts_credentials.go
new file mode 100644
index 000000000000..188e6495b9c8
--- /dev/null
+++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/credentials/sts_credentials.go
@@ -0,0 +1,81 @@
+/*
+Copyright 2023 The Kubernetes Authors.
+ +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package credentials + +import ( + "fmt" + "time" + + "github.com/google/uuid" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volc-sdk-golang/service/sts" +) + +type StsAssumeRoleProvider struct { + AccessKey string + SecurityKey string + RoleName string + AccountId string + Host string + Region string + Timeout time.Duration + DurationSeconds int +} + +type StsAssumeRoleTime struct { + CurrentTime string + ExpiredTime string +} + +func StsAssumeRole(p *StsAssumeRoleProvider) (*Credentials, *StsAssumeRoleTime, error) { + ins := sts.NewInstance() + if p.Region != "" { + ins.SetRegion(p.Region) + } + if p.Host != "" { + ins.SetHost(p.Host) + } + if p.Timeout > 0 { + ins.Client.SetTimeout(p.Timeout) + } + + ins.Client.SetAccessKey(p.AccessKey) + ins.Client.SetSecretKey(p.SecurityKey) + input := &sts.AssumeRoleRequest{ + DurationSeconds: p.DurationSeconds, + RoleTrn: fmt.Sprintf("trn:iam::%s:role/%s", p.AccountId, p.RoleName), + RoleSessionName: uuid.New().String(), + } + output, statusCode, err := ins.AssumeRole(input) + var reqId string + if output != nil { + reqId = output.ResponseMetadata.RequestId + } + if err != nil { + return nil, nil, fmt.Errorf("AssumeRole error,httpcode is %v and reqId is %s error is %s", statusCode, reqId, err.Error()) + } + if statusCode >= 300 || statusCode < 200 { + return nil, nil, fmt.Errorf("AssumeRole error,httpcode is %v and reqId is %s", statusCode, reqId) + } + return 
NewCredentials(&StaticProvider{Value: Value{ + AccessKeyID: output.Result.Credentials.AccessKeyId, + SecretAccessKey: output.Result.Credentials.SecretAccessKey, + SessionToken: output.Result.Credentials.SessionToken, + }}), &StsAssumeRoleTime{ + CurrentTime: output.Result.Credentials.CurrentTime, + ExpiredTime: output.Result.Credentials.ExpiredTime, + }, nil +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/credentials/sts_provider.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/credentials/sts_provider.go new file mode 100644 index 000000000000..c4904c40414f --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/credentials/sts_provider.go @@ -0,0 +1,90 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. 
+*/
+
+package credentials
+
+import (
+	"encoding/json"
+	"fmt"
+	"time"
+
+	"github.com/google/uuid"
+	"k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volc-sdk-golang/service/sts"
+)
+
+type StsValue StsAssumeRoleProvider
+
+type StsProvider struct {
+	Expiry
+	StsValue
+}
+
+func (s *StsProvider) Retrieve() (Value, error) {
+	ins := sts.NewInstance()
+	if s.Region != "" {
+		ins.SetRegion(s.Region)
+	}
+	if s.Host != "" {
+		ins.SetHost(s.Host)
+	}
+	if s.Timeout > 0 {
+		ins.Client.SetTimeout(s.Timeout)
+	}
+	if s.DurationSeconds < 900 {
+		return Value{}, fmt.Errorf("DurationSeconds must be at least 900 seconds")
+	}
+
+	ins.Client.SetAccessKey(s.AccessKey)
+	ins.Client.SetSecretKey(s.SecurityKey)
+	input := &sts.AssumeRoleRequest{
+		DurationSeconds: s.DurationSeconds,
+		RoleTrn:         fmt.Sprintf("trn:iam::%s:role/%s", s.AccountId, s.RoleName),
+		RoleSessionName: uuid.New().String(),
+	}
+	t := time.Now().Add(time.Duration(s.DurationSeconds-60) * time.Second)
+	output, _, err := ins.AssumeRole(input)
+	if err != nil || output.ResponseMetadata.Error != nil {
+		if err == nil {
+			bb, _err := json.Marshal(output.ResponseMetadata.Error)
+			if _err != nil {
+				return Value{}, _err
+			}
+			return Value{}, fmt.Errorf("%s", bb)
+		}
+		return Value{}, err
+	}
+	v := Value{
+		AccessKeyID:     output.Result.Credentials.AccessKeyId,
+		SecretAccessKey: output.Result.Credentials.SecretAccessKey,
+		SessionToken:    output.Result.Credentials.SessionToken,
+		ProviderName:    "StsProvider",
+	}
+	s.SetExpiration(t, 0)
+	return v, nil
+}
+
+func (s *StsProvider) IsExpired() bool {
+	return s.Expiry.IsExpired()
+}
+
+// NewStsCredentials returns a pointer to a new Credentials object wrapping
+// the StsProvider.
+func NewStsCredentials(value StsValue) *Credentials {
+	p := &StsProvider{
+		StsValue: value,
+		Expiry:   Expiry{},
+	}
+	return NewExpireAbleCredentials(p)
+}
diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/custom/custom.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/custom/custom.go
new file mode 100644
index 000000000000..7752c9ee8806 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/custom/custom.go @@ -0,0 +1,59 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package custom + +import ( + "context" + "net/http" + "net/url" + + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/credentials" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/response" +) + +type RequestMetadata struct { + ServiceName string + Version string + Action string + HttpMethod string + Region string + Request *http.Request + RawQuery *url.Values +} + +type ExtendHttpRequest func(ctx context.Context, request *http.Request) + +type ExtendHttpRequestWithMeta func(ctx context.Context, request *http.Request, meta RequestMetadata) + +type ExtraHttpParameters func(ctx context.Context) map[string]string + +type ExtraHttpParametersWithMeta func(ctx context.Context, meta RequestMetadata) map[string]string + +type ExtraHttpJsonBody func(ctx context.Context, input *map[string]interface{}, meta RequestMetadata) + +type LogAccount func(ctx context.Context) *string + +type DynamicCredentials func(ctx context.Context) (*credentials.Credentials, *string) + +// DynamicCredentialsIncludeError returns the credentials info together with an error when one appears +type DynamicCredentialsIncludeError func(ctx context.Context) (*credentials.Credentials, *string,
error) + +type CustomerUnmarshalError func(ctx context.Context, meta RequestMetadata, resp response.VolcengineResponse) error + +type CustomerUnmarshalData func(ctx context.Context, info RequestInfo, resp response.VolcengineResponse) interface{} + +type ForceJsonNumberDecode func(ctx context.Context, info RequestInfo) bool diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/custom/interceptor.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/custom/interceptor.go new file mode 100644 index 000000000000..c080e66c516a --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/custom/interceptor.go @@ -0,0 +1,51 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. 
+*/ + +package custom + +import ( + "context" + "net/http" + "net/url" + + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/client/metadata" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/response" +) + +type SdkInterceptor struct { + Before BeforeCall + After AfterCall +} + +type RequestInfo struct { + Context context.Context + Request *http.Request + Response *http.Response + Name string + Method string + ClientInfo metadata.ClientInfo + URI string + Header http.Header + URL *url.URL + Input interface{} + Output interface{} + Metadata response.ResponseMetadata + Error error +} + +type BeforeCall func(RequestInfo) interface{} + +type AfterCall func(RequestInfo, interface{}) diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/defaults/defaults.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/defaults/defaults.go new file mode 100644 index 000000000000..e5cc9002d6ad --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/defaults/defaults.go @@ -0,0 +1,192 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +// Package defaults is a collection of helpers to retrieve the SDK's default +// configuration and handlers. +// +// Generally this package shouldn't be used directly, but session.Session +// instead. 
This package is useful when you need to reset the defaults +// of a session or service client to the SDK defaults before setting +// additional parameters. +package defaults + +// Copy from https://github.com/aws/aws-sdk-go +// May have been modified by Beijing Volcanoengine Technology Ltd. + +import ( + "fmt" + "net" + "net/http" + "net/url" + "os" + "time" + + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineerr" + + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/corehandlers" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/credentials" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/credentials/endpointcreds" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request" +) + +// A Defaults provides a collection of default values for SDK clients. +type Defaults struct { + Config *volcengine.Config + Handlers request.Handlers +} + +// Get returns the SDK's default values with Config and handlers pre-configured. +func Get() Defaults { + cfg := Config() + handlers := Handlers() + cfg.Credentials = CredChain(cfg, handlers) + + return Defaults{ + Config: cfg, + Handlers: handlers, + } +} + +// Config returns the default configuration without credentials. +// To retrieve a config with credentials also included use +// `defaults.Get().Config` instead. +// +// Generally you shouldn't need to use this method directly, but +// is available if you need to reset the configuration of an +// existing service client or session. +func Config() *volcengine.Config { + return volcengine.NewConfig(). + WithCredentials(credentials.AnonymousCredentials). + WithRegion(os.Getenv("VOLCSTACK_REGION")). + WithHTTPClient(http.DefaultClient). 
+ WithMaxRetries(volcengine.UseServiceDefaultRetries). + WithLogger(volcengine.NewDefaultLogger()). + WithLogLevel(volcengine.LogOff) +} + +// Handlers returns the default request handlers. +// +// Generally you shouldn't need to use this method directly, but +// is available if you need to reset the request handlers of an +// existing service client or session. +func Handlers() request.Handlers { + var handlers request.Handlers + + handlers.Validate.PushBackNamed(corehandlers.ValidateEndpointHandler) + handlers.Validate.AfterEachFn = request.HandlerListStopOnError + handlers.Build.PushBackNamed(corehandlers.CustomerRequestHandler) + handlers.Build.AfterEachFn = request.HandlerListStopOnError + handlers.Sign.PushBackNamed(corehandlers.BuildContentLengthHandler) + handlers.Send.PushBackNamed(corehandlers.ValidateReqSigHandler) + handlers.Send.PushBackNamed(corehandlers.SendHandler) + handlers.AfterRetry.PushBackNamed(corehandlers.AfterRetryHandler) + handlers.ValidateResponse.PushBackNamed(corehandlers.ValidateResponseHandler) + + return handlers +} + +// CredChain returns the default credential chain. +// +// Generally you shouldn't need to use this method directly, but +// is available if you need to reset the credentials of an +// existing service client or session's Config. +func CredChain(cfg *volcengine.Config, handlers request.Handlers) *credentials.Credentials { + return credentials.NewCredentials(&credentials.ChainProvider{ + VerboseErrors: volcengine.BoolValue(cfg.CredentialsChainVerboseErrors), + Providers: CredProviders(cfg, handlers), + }) +} + +// CredProviders returns the slice of providers used in +// the default credential chain. +// +// For applications that need to use some other provider (for example use +// different environment variables for legacy reasons) but still fall back +// on the default chain of providers. 
This allows that default chain to be +// automatically updated. +func CredProviders(cfg *volcengine.Config, handlers request.Handlers) []credentials.Provider { + return []credentials.Provider{ + &credentials.EnvProvider{}, + &credentials.SharedCredentialsProvider{Filename: "", Profile: ""}, + } +} + +const ( + httpProviderAuthorizationEnvVar = "VOLCSTACK_CONTAINER_AUTHORIZATION_TOKEN" + httpProviderEnvVar = "VOLCSTACK_CONTAINER_CREDENTIALS_FULL_URI" +) + +var lookupHostFn = net.LookupHost + +func isLoopbackHost(host string) (bool, error) { + ip := net.ParseIP(host) + if ip != nil { + return ip.IsLoopback(), nil + } + + // Host is not an ip, perform lookup + addrs, err := lookupHostFn(host) + if err != nil { + return false, err + } + for _, addr := range addrs { + if !net.ParseIP(addr).IsLoopback() { + return false, nil + } + } + + return true, nil +} + +func localHTTPCredProvider(cfg volcengine.Config, handlers request.Handlers, u string) credentials.Provider { + var errMsg string + + parsed, err := url.Parse(u) + if err != nil { + errMsg = fmt.Sprintf("invalid URL, %v", err) + } else { + host := volcengine.URLHostname(parsed) + if len(host) == 0 { + errMsg = "unable to parse host from local HTTP cred provider URL" + } else if isLoopback, loopbackErr := isLoopbackHost(host); loopbackErr != nil { + errMsg = fmt.Sprintf("failed to resolve host %q, %v", host, loopbackErr) + } else if !isLoopback { + errMsg = fmt.Sprintf("invalid endpoint host, %q, only loopback hosts are allowed.", host) + } + } + + if len(errMsg) > 0 { + if cfg.Logger != nil { + cfg.Logger.Log("Ignoring, HTTP credential provider", errMsg, err) + } + return credentials.ErrorProvider{ + Err: volcengineerr.New("CredentialsEndpointError", errMsg, err), + ProviderName: endpointcreds.ProviderName, + } + } + + return httpCredProvider(cfg, handlers, u) +} + +func httpCredProvider(cfg volcengine.Config, handlers request.Handlers, u string) credentials.Provider { + return endpointcreds.NewProviderClient(cfg,
handlers, u, + func(p *endpointcreds.Provider) { + p.ExpiryWindow = 5 * time.Minute + p.AuthorizationToken = os.Getenv(httpProviderAuthorizationEnvVar) + }, + ) +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/defaults/shared_config.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/defaults/shared_config.go new file mode 100644 index 000000000000..a368c29b2745 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/defaults/shared_config.go @@ -0,0 +1,46 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package defaults + +// Copy from https://github.com/aws/aws-sdk-go +// May have been modified by Beijing Volcanoengine Technology Ltd. + +import ( + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/internal/shareddefaults" +) + +// SharedCredentialsFilename returns the SDK's default file path +// for the shared credentials file. +// +// Builds the shared config file path based on the OS's platform. +// +// - Linux/Unix: $HOME/.aws/credentials +// - Windows: %USERPROFILE%\.aws\credentials +func SharedCredentialsFilename() string { + return shareddefaults.SharedCredentialsFilename() +} + +// SharedConfigFilename returns the SDK's default file path for +// the shared config file. +// +// Builds the shared config file path based on the OS's platform. 
+// +// - Linux/Unix: $HOME/.aws/config +// - Windows: %USERPROFILE%\.aws\config +func SharedConfigFilename() string { + return shareddefaults.SharedConfigFilename() +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/endpoints/endpoints.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/endpoints/endpoints.go new file mode 100644 index 000000000000..cccdc87a1284 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/endpoints/endpoints.go @@ -0,0 +1,428 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package endpoints + +// Copy from https://github.com/aws/aws-sdk-go +// May have been modified by Beijing Volcanoengine Technology Ltd. + +import ( + "fmt" + "regexp" + + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineerr" +) + +// Options provide the configuration needed to direct how the +// endpoints will be resolved. +type Options struct { + // DisableSSL forces the endpoint to be resolved as HTTP. + // instead of HTTPS if the service supports it. + DisableSSL bool + + // Sets the resolver to resolve the endpoint as a dualstack endpoint + // for the service. If dualstack support for a service is not known and + // StrictMatching is not enabled a dualstack endpoint for the service will + // be returned. This endpoint may not be valid. 
If StrictMatching is + // enabled only services that are known to support dualstack will return + // dualstack endpoints. + UseDualStack bool + + // Enables strict matching of services and regions resolved endpoints. + // If the partition doesn't enumerate the exact service and region an + // error will be returned. This option will prevent returning endpoints + // that look valid, but may not resolve to any real endpoint. + StrictMatching bool + + // Enables resolving a service endpoint based on the region provided if the + // service does not exist. The service endpoint ID will be used as the service + // domain name prefix. By default the endpoint resolver requires the service + // to be known when resolving endpoints. + // + // If resolving an endpoint on the partition list the provided region will + // be used to determine which partition's domain name pattern to combine + // with the service endpoint ID. If both the service and region are unknown and resolving + // the endpoint on partition list an UnknownEndpointError error will be returned. + // + // If resolving an endpoint on a partition-specific resolver that partition's + // domain name pattern will be used with the service endpoint ID. If both + // region and service do not exist when resolving an endpoint on a specific + // partition the partition's domain pattern will be used to combine the + // endpoint and region together. + // + // This option is ignored if StrictMatching is enabled. + ResolveUnknownService bool +} + +// Set combines all of the option functions together. +func (o *Options) Set(optFns ...func(*Options)) { + for _, fn := range optFns { + fn(o) + } +} + +// DisableSSLOption sets the DisableSSL option. Can be used as a functional +// option when resolving endpoints. +func DisableSSLOption(o *Options) { + o.DisableSSL = true +} + +// UseDualStackOption sets the UseDualStack option. Can be used as a functional +// option when resolving endpoints.
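The functional-option pattern used by `Options.Set` can be sketched in isolation. The sketch below is illustrative only; all names (`options`, `set`, `withStrictMatching`, `withDualStack`) are local to the example, not part of the SDK:

```go
package main

import "fmt"

// options mirrors the shape of the resolver Options described above
// (field names are local to this sketch).
type options struct {
	disableSSL     bool
	useDualStack   bool
	strictMatching bool
}

// set applies each functional option in order, as Options.Set does.
func (o *options) set(optFns ...func(*options)) {
	for _, fn := range optFns {
		fn(o)
	}
}

// withStrictMatching and withDualStack play the role of the
// *Option helpers: each flips exactly one field.
func withStrictMatching(o *options) { o.strictMatching = true }
func withDualStack(o *options)      { o.useDualStack = true }

func main() {
	var o options
	o.set(withStrictMatching, withDualStack)
	fmt.Println(o.strictMatching, o.useDualStack, o.disableSSL) // true true false
}
```

The variadic option functions let callers toggle individual settings without the resolver having to grow a new method signature for every combination.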
+func UseDualStackOption(o *Options) { + o.UseDualStack = true +} + +// StrictMatchingOption sets the StrictMatching option. Can be used as a functional +// option when resolving endpoints. +func StrictMatchingOption(o *Options) { + o.StrictMatching = true +} + +// ResolveUnknownServiceOption sets the ResolveUnknownService option. Can be used +// as a functional option when resolving endpoints. +func ResolveUnknownServiceOption(o *Options) { + o.ResolveUnknownService = true +} + +// A Resolver provides the interface for functionality to resolve endpoints. +// The built-in Partition and DefaultResolver return values satisfy this interface. +type Resolver interface { + EndpointFor(service, region string, opts ...func(*Options)) (ResolvedEndpoint, error) +} + +// ResolverFunc is a helper utility that wraps a function so it satisfies the +// Resolver interface. This is useful when you want to add additional endpoint +// resolving logic, or stub out specific endpoints with custom values. +type ResolverFunc func(service, region string, opts ...func(*Options)) (ResolvedEndpoint, error) + +// EndpointFor wraps the ResolverFunc function to satisfy the Resolver interface. +func (fn ResolverFunc) EndpointFor(service, region string, opts ...func(*Options)) (ResolvedEndpoint, error) { + return fn(service, region, opts...) +} + +var schemeRE = regexp.MustCompile("^([^:]+)://") + +// AddScheme adds the HTTP or HTTPS scheme to an endpoint URL if there is no +// scheme. If disableSSL is true, HTTP will be used instead of the default HTTPS. +// +// If disableSSL is set, it will only set the URL's scheme if the URL does not +// contain a scheme.
+func AddScheme(endpoint string, disableSSL bool) string { + if !schemeRE.MatchString(endpoint) { + scheme := "https" + if disableSSL { + scheme = "http" + } + endpoint = fmt.Sprintf("%s://%s", scheme, endpoint) + } + + return endpoint +} + +// EnumPartitions provides a way to retrieve the underlying partitions that +// make up the SDK's default Resolver, or any resolver decoded from a model +// file. +// +// Use this interface with DefaultResolver and DecodeModels to get the list of +// Partitions. +type EnumPartitions interface { + Partitions() []Partition +} + +// A Partition provides the ability to enumerate the partition's regions +// and services. +type Partition struct { + id, dnsSuffix string + p *partition +} + +// DNSSuffix returns the base domain name of the partition. +func (p Partition) DNSSuffix() string { return p.dnsSuffix } + +// ID returns the identifier of the partition. +func (p Partition) ID() string { return p.id } + +// EndpointFor attempts to resolve the endpoint based on service and region. +// See Options for information on configuring how the endpoint is resolved. +// +// If the service cannot be found in the metadata the UnknownServiceError +// error will be returned. This validation will occur regardless if +// StrictMatching is enabled. To enable resolving unknown services set the +// "ResolveUnknownService" option to true. When StrictMatching is disabled +// this option allows the partition resolver to resolve an endpoint based on +// the service endpoint ID provided. +// +// When resolving endpoints you can choose to enable StrictMatching. This will +// require the provided service and region to be known by the partition. +// If the endpoint cannot be strictly resolved an error will be returned. This +// mode is useful to ensure the endpoint resolved is valid. Without +// StrictMatching enabled the endpoint returned may look valid but may not work.
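The scheme-defaulting behavior of `AddScheme` above can be exercised standalone. The sketch below reimplements the same logic outside the SDK package (the lowercase `addScheme` name and example hostnames are local to this illustration):

```go
package main

import (
	"fmt"
	"regexp"
)

// schemeRE mirrors the pattern used by AddScheme: any "<scheme>://" prefix.
var schemeRE = regexp.MustCompile("^([^:]+)://")

// addScheme prefixes https:// (or http:// when SSL is disabled) only when
// the endpoint carries no scheme of its own; an existing scheme is kept as-is.
func addScheme(endpoint string, disableSSL bool) string {
	if !schemeRE.MatchString(endpoint) {
		scheme := "https"
		if disableSSL {
			scheme = "http"
		}
		endpoint = fmt.Sprintf("%s://%s", scheme, endpoint)
	}
	return endpoint
}

func main() {
	fmt.Println(addScheme("open.volcengineapi.com", false)) // https://open.volcengineapi.com
	fmt.Println(addScheme("open.volcengineapi.com", true))  // http://open.volcengineapi.com
	fmt.Println(addScheme("http://localhost:8080", false))  // http://localhost:8080 (kept)
}
```

Note that `disableSSL` never downgrades an endpoint that already declares a scheme; it only influences which default is chosen for scheme-less endpoints.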
+// StrictMatching requires the SDK to be updated if you want to take advantage +// of new regions and services expansions. +// +// Errors that can be returned. +// - UnknownServiceError +// - UnknownEndpointError +func (p Partition) EndpointFor(service, region string, opts ...func(*Options)) (ResolvedEndpoint, error) { + return p.p.EndpointFor(service, region, opts...) +} + +// Regions returns a map of Regions indexed by their ID. This is useful for +// enumerating over the regions in a partition. +func (p Partition) Regions() map[string]Region { + rs := map[string]Region{} + for id, r := range p.p.Regions { + rs[id] = Region{ + id: id, + desc: r.Description, + p: p.p, + } + } + + return rs +} + +// Services returns a map of Service indexed by their ID. This is useful for +// enumerating over the services in a partition. +func (p Partition) Services() map[string]Service { + ss := map[string]Service{} + for id := range p.p.Services { + ss[id] = Service{ + id: id, + p: p.p, + } + } + + return ss +} + +// A Region provides information about a region, and ability to resolve an +// endpoint from the context of a region, given a service. +type Region struct { + id, desc string + p *partition +} + +// ID returns the region's identifier. +func (r Region) ID() string { return r.id } + +// Description returns the region's description. The region description +// is free text, it can be empty, and it may change between SDK releases. +func (r Region) Description() string { return r.desc } + +// ResolveEndpoint resolves an endpoint from the context of the region given +// a service. See Partition.EndpointFor for usage and errors that can be returned. +func (r Region) ResolveEndpoint(service string, opts ...func(*Options)) (ResolvedEndpoint, error) { + return r.p.EndpointFor(service, r.id, opts...) +} + +// Services returns a list of all services that are known to be in this region. 
+func (r Region) Services() map[string]Service { + ss := map[string]Service{} + for id, s := range r.p.Services { + if _, ok := s.Endpoints[r.id]; ok { + ss[id] = Service{ + id: id, + p: r.p, + } + } + } + + return ss +} + +// A Service provides information about a service, and ability to resolve an +// endpoint from the context of a service, given a region. +type Service struct { + id string + p *partition +} + +// ID returns the identifier for the service. +func (s Service) ID() string { return s.id } + +// ResolveEndpoint resolves an endpoint from the context of a service given +// a region. See Partition.EndpointFor for usage and errors that can be returned. +func (s Service) ResolveEndpoint(region string, opts ...func(*Options)) (ResolvedEndpoint, error) { + return s.p.EndpointFor(s.id, region, opts...) +} + +// Regions returns a map of Regions that the service is present in. +// +// A region is the Volcengine region the service exists in. Whereas an Endpoint is +// a URL that can be resolved to an instance of a service. +func (s Service) Regions() map[string]Region { + rs := map[string]Region{} + for id := range s.p.Services[s.id].Endpoints { + if r, ok := s.p.Regions[id]; ok { + rs[id] = Region{ + id: id, + desc: r.Description, + p: s.p, + } + } + } + + return rs +} + +// Endpoints returns a map of Endpoints indexed by their ID for all known +// endpoints for a service. +// +// A region is the Volcengine region the service exists in. Whereas an Endpoint is +// a URL that can be resolved to an instance of a service. +func (s Service) Endpoints() map[string]Endpoint { + es := map[string]Endpoint{} + for id := range s.p.Services[s.id].Endpoints { + es[id] = Endpoint{ + id: id, + serviceID: s.id, + p: s.p, + } + } + + return es +} + +// An Endpoint provides information about endpoints, and provides the ability +// to resolve that endpoint for the service, and the region the endpoint +// represents.
+type Endpoint struct { + id string + serviceID string + p *partition +} + +// ID returns the identifier for an endpoint. +func (e Endpoint) ID() string { return e.id } + +// ServiceID returns the identifier the endpoint belongs to. +func (e Endpoint) ServiceID() string { return e.serviceID } + +// ResolveEndpoint resolves an endpoint from the context of a service and +// region the endpoint represents. See Partition.EndpointFor for usage and +// errors that can be returned. +func (e Endpoint) ResolveEndpoint(opts ...func(*Options)) (ResolvedEndpoint, error) { + return e.p.EndpointFor(e.serviceID, e.id, opts...) +} + +// A ResolvedEndpoint is an endpoint that has been resolved based on a partition, +// service, and region. +type ResolvedEndpoint struct { + // The endpoint URL + URL string + + // The region that should be used for signing requests. + SigningRegion string + + // The service name that should be used for signing requests. + SigningName string + + // States that the signing name for this endpoint was derived from metadata + // passed in, but was not explicitly modeled. + SigningNameDerived bool + + // The signing method that should be used for signing requests. + SigningMethod string +} + +// So that the Error interface type can be included as an anonymous field +// in the requestError struct and not conflict with the error.Error() method. +type volcengineerror volcengineerr.Error + +// An EndpointNotFoundError is returned when in StrictMatching mode, and the +// endpoint for the service and region cannot be found in any of the partitions. +type EndpointNotFoundError struct { + volcengineerror + Partition string + Service string + Region string +} + +// An UnknownServiceError is returned when the service does not resolve to an +// endpoint. Includes a list of all known services for the partition. Returned +// when a partition does not support the service.
+type UnknownServiceError struct { + volcengineerror + Partition string + Service string + Known []string +} + +// NewUnknownServiceError builds and returns UnknownServiceError. +func NewUnknownServiceError(p, s string, known []string) UnknownServiceError { + return UnknownServiceError{ + volcengineerror: volcengineerr.New("UnknownServiceError", + "could not resolve endpoint for unknown service", nil), + Partition: p, + Service: s, + Known: known, + } +} + +// Error returns the string representation of the error. +func (e UnknownServiceError) Error() string { + extra := fmt.Sprintf("partition: %q, service: %q", + e.Partition, e.Service) + if len(e.Known) > 0 { + extra += fmt.Sprintf(", known: %v", e.Known) + } + return volcengineerr.SprintError(e.Code(), e.Message(), extra, e.OrigErr()) +} + +// String returns the string representation of the error. +func (e UnknownServiceError) String() string { + return e.Error() +} + +// An UnknownEndpointError is returned when in StrictMatching mode and the +// service is valid, but the region does not resolve to an endpoint. Includes +// a list of all known endpoints for the service. +type UnknownEndpointError struct { + volcengineerror + Partition string + Service string + Region string + Known []string +} + +// NewUnknownEndpointError builds and returns UnknownEndpointError. +func NewUnknownEndpointError(p, s, r string, known []string) UnknownEndpointError { + return UnknownEndpointError{ + volcengineerror: volcengineerr.New("UnknownEndpointError", + "could not resolve endpoint", nil), + Partition: p, + Service: s, + Region: r, + Known: known, + } +} + +// Error returns the string representation of the error.
+func (e UnknownEndpointError) Error() string { + extra := fmt.Sprintf("partition: %q, service: %q, region: %q", + e.Partition, e.Service, e.Region) + if len(e.Known) > 0 { + extra += fmt.Sprintf(", known: %v", e.Known) + } + return volcengineerr.SprintError(e.Code(), e.Message(), extra, e.OrigErr()) +} + +// String returns the string representation of the error. +func (e UnknownEndpointError) String() string { + return e.Error() +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/endpoints/model.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/endpoints/model.go new file mode 100644 index 000000000000..cd60add5bae2 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/endpoints/model.go @@ -0,0 +1,327 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package endpoints + +// Copy from https://github.com/aws/aws-sdk-go +// May have been modified by Beijing Volcanoengine Technology Ltd. + +import ( + "fmt" + "regexp" + "strconv" + "strings" +) + +type partitions []partition + +func (ps partitions) EndpointFor(service, region string, opts ...func(*Options)) (ResolvedEndpoint, error) { + var opt Options + opt.Set(opts...) + + for i := 0; i < len(ps); i++ { + if !ps[i].canResolveEndpoint(service, region, opt.StrictMatching) { + continue + } + + return ps[i].EndpointFor(service, region, opts...) 
+ } + + // If loose matching fallback to first partition format to use + // when resolving the endpoint. + if !opt.StrictMatching && len(ps) > 0 { + return ps[0].EndpointFor(service, region, opts...) + } + + return ResolvedEndpoint{}, NewUnknownEndpointError("all partitions", service, region, []string{}) +} + +// Partitions satisfies the EnumPartitions interface and returns a list +// of Partitions representing each partition represented in the SDK's +// endpoints model. +func (ps partitions) Partitions() []Partition { + parts := make([]Partition, 0, len(ps)) + for i := 0; i < len(ps); i++ { + parts = append(parts, ps[i].Partition()) + } + + return parts +} + +type partition struct { + ID string `json:"partition"` + Name string `json:"partitionName"` + DNSSuffix string `json:"dnsSuffix"` + RegionRegex regionRegex `json:"regionRegex"` + Defaults endpoint `json:"defaults"` + Regions regions `json:"regions"` + Services services `json:"services"` +} + +func (p partition) Partition() Partition { + return Partition{ + dnsSuffix: p.DNSSuffix, + id: p.ID, + p: &p, + } +} + +func (p partition) canResolveEndpoint(service, region string, strictMatch bool) bool { + s, hasService := p.Services[service] + _, hasEndpoint := s.Endpoints[region] + + if hasEndpoint && hasService { + return true + } + + if strictMatch { + return false + } + + return p.RegionRegex.MatchString(region) +} + +func (p partition) EndpointFor(service, region string, opts ...func(*Options)) (resolved ResolvedEndpoint, err error) { + var opt Options + opt.Set(opts...) + + s, hasService := p.Services[service] + if !(hasService || opt.ResolveUnknownService) { + // Only return error if the resolver will not fallback to creating + // endpoint based on service endpoint ID passed in. 
+ return resolved, NewUnknownServiceError(p.ID, service, serviceList(p.Services)) + } + + e, hasEndpoint := s.endpointForRegion(region) + if !hasEndpoint && opt.StrictMatching { + return resolved, NewUnknownEndpointError(p.ID, service, region, endpointList(s.Endpoints)) + } + + defs := []endpoint{p.Defaults, s.Defaults} + return e.resolve(service, region, p.DNSSuffix, defs, opt), nil +} + +func serviceList(ss services) []string { + list := make([]string, 0, len(ss)) + for k := range ss { + list = append(list, k) + } + return list +} +func endpointList(es endpoints) []string { + list := make([]string, 0, len(es)) + for k := range es { + list = append(list, k) + } + return list +} + +type regionRegex struct { + *regexp.Regexp +} + +func (rr *regionRegex) UnmarshalJSON(b []byte) (err error) { + // Strip leading and trailing quotes + regex, err := strconv.Unquote(string(b)) + if err != nil { + return fmt.Errorf("unable to strip quotes from regex, %v", err) + } + + rr.Regexp, err = regexp.Compile(regex) + if err != nil { + return fmt.Errorf("unable to unmarshal region regex, %v", err) + } + return nil +} + +type regions map[string]region + +type region struct { + Description string `json:"description"` +} + +type services map[string]service + +type service struct { + PartitionEndpoint string `json:"partitionEndpoint"` + IsRegionalized boxedBool `json:"isRegionalized,omitempty"` + Defaults endpoint `json:"defaults"` + Endpoints endpoints `json:"endpoints"` +} + +func (s *service) endpointForRegion(region string) (endpoint, bool) { + if s.IsRegionalized == boxedFalse { + return s.Endpoints[s.PartitionEndpoint], region == s.PartitionEndpoint + } + + if e, ok := s.Endpoints[region]; ok { + return e, true + } + + // Unable to find any matching endpoint, return + // blank that will be used for generic endpoint creation. 
+ return endpoint{}, false +} + +type endpoints map[string]endpoint + +type endpoint struct { + Hostname string `json:"hostname"` + Protocols []string `json:"protocols"` + CredentialScope credentialScope `json:"credentialScope"` + + // Custom fields not modeled + HasDualStack boxedBool `json:"-"` + DualStackHostname string `json:"-"` + + // Signature Version not used + SignatureVersions []string `json:"signatureVersions"` + + // SSLCommonName not used. + SSLCommonName string `json:"sslCommonName"` +} + +const ( + defaultProtocol = "https" + defaultSigner = "v4" +) + +var ( + protocolPriority = []string{"https", "http"} + signerPriority = []string{"v4", "v2"} +) + +func getByPriority(s []string, p []string, def string) string { + if len(s) == 0 { + return def + } + + for i := 0; i < len(p); i++ { + for j := 0; j < len(s); j++ { + if s[j] == p[i] { + return s[j] + } + } + } + + return s[0] +} + +func (e endpoint) resolve(service, region, dnsSuffix string, defs []endpoint, opts Options) ResolvedEndpoint { + var merged endpoint + for _, def := range defs { + merged.mergeIn(def) + } + merged.mergeIn(e) + e = merged + + hostname := e.Hostname + + // Offset the hostname for dualstack if enabled + if opts.UseDualStack && e.HasDualStack == boxedTrue { + hostname = e.DualStackHostname + } + + u := strings.Replace(hostname, "{service}", service, 1) + u = strings.Replace(u, "{region}", region, 1) + u = strings.Replace(u, "{dnsSuffix}", dnsSuffix, 1) + + scheme := getEndpointScheme(e.Protocols, opts.DisableSSL) + u = fmt.Sprintf("%s://%s", scheme, u) + + signingRegion := e.CredentialScope.Region + if len(signingRegion) == 0 { + signingRegion = region + } + + signingName := e.CredentialScope.Service + var signingNameDerived bool + if len(signingName) == 0 { + signingName = service + signingNameDerived = true + } + + return ResolvedEndpoint{ + URL: u, + SigningRegion: signingRegion, + SigningName: signingName, + SigningNameDerived: signingNameDerived, + SigningMethod: 
getByPriority(e.SignatureVersions, signerPriority, defaultSigner), + } +} + +func getEndpointScheme(protocols []string, disableSSL bool) string { + if disableSSL { + return "http" + } + + return getByPriority(protocols, protocolPriority, defaultProtocol) +} + +func (e *endpoint) mergeIn(other endpoint) { + if len(other.Hostname) > 0 { + e.Hostname = other.Hostname + } + if len(other.Protocols) > 0 { + e.Protocols = other.Protocols + } + if len(other.SignatureVersions) > 0 { + e.SignatureVersions = other.SignatureVersions + } + if len(other.CredentialScope.Region) > 0 { + e.CredentialScope.Region = other.CredentialScope.Region + } + if len(other.CredentialScope.Service) > 0 { + e.CredentialScope.Service = other.CredentialScope.Service + } + if len(other.SSLCommonName) > 0 { + e.SSLCommonName = other.SSLCommonName + } + if other.HasDualStack != boxedBoolUnset { + e.HasDualStack = other.HasDualStack + } + if len(other.DualStackHostname) > 0 { + e.DualStackHostname = other.DualStackHostname + } +} + +type credentialScope struct { + Region string `json:"region"` + Service string `json:"service"` +} + +type boxedBool int + +func (b *boxedBool) UnmarshalJSON(buf []byte) error { + v, err := strconv.ParseBool(string(buf)) + if err != nil { + return err + } + + if v { + *b = boxedTrue + } else { + *b = boxedFalse + } + + return nil +} + +const ( + boxedBoolUnset boxedBool = iota + boxedFalse + boxedTrue +) diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/errors.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/errors.go new file mode 100644 index 000000000000..916e883313b6 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/errors.go @@ -0,0 +1,32 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. 
+You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package volcengine + +// Copy from https://github.com/aws/aws-sdk-go +// May have been modified by Beijing Volcanoengine Technology Ltd. + +import "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineerr" + +var ( + // ErrMissingRegion is an error that is returned if region configuration is + // not found. + ErrMissingRegion = volcengineerr.New("MissingRegion", "could not find region configuration", nil) + + // ErrMissingEndpoint is an error that is returned if an endpoint cannot be + // resolved for a service. + ErrMissingEndpoint = volcengineerr.New("MissingEndpoint", "'Endpoint' configuration is required for this service", nil) +) diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/jsonvalue.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/jsonvalue.go new file mode 100644 index 000000000000..b0c4536b657d --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/jsonvalue.go @@ -0,0 +1,31 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+See the License for the specific language governing permissions and +limitations under the License. +*/ + +package volcengine + +// Copy from https://github.com/aws/aws-sdk-go +// May have been modified by Beijing Volcanoengine Technology Ltd. + +// JSONValue is a representation of a grab bag type that will be marshaled +// into a json string. This type can be used just like any other map. +// +// Example: +// +// values := volcengine.JSONValue{ +// "Foo": "Bar", +// } +// values["Baz"] = "Qux" +type JSONValue map[string]interface{} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/logger.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/logger.go new file mode 100644 index 000000000000..226444fd2bf7 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/logger.go @@ -0,0 +1,140 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package volcengine + +// Copy from https://github.com/aws/aws-sdk-go +// May have been modified by Beijing Volcanoengine Technology Ltd. + +import ( + "log" + "os" +) + +// A LogLevelType defines the level logging should be performed at. Used to instruct +// the SDK which statements should be logged. +type LogLevelType uint + +// LogLevel returns the pointer to a LogLevel. Should be used to workaround +// not being able to take the address of a non-composite literal. 
+func LogLevel(l LogLevelType) *LogLevelType { + return &l +} + +// Value returns the LogLevel value or the default value LogOff if the LogLevel +// is nil. Safe to use on nil value LogLevelTypes. +func (l *LogLevelType) Value() LogLevelType { + if l != nil { + return *l + } + return LogOff +} + +// Matches returns true if the v LogLevel is enabled by this LogLevel. Should be +// used with logging sub levels. Is safe to use on nil value LogLevelTypes. If +// LogLevel is nil, will default to LogOff comparison. +func (l *LogLevelType) Matches(v LogLevelType) bool { + c := l.Value() + return c&v == v +} + +// AtLeast returns true if this LogLevel is at least high enough to satisfy v. +// Is safe to use on nil value LogLevelTypes. If LogLevel is nil, will default +// to LogOff comparison. +func (l *LogLevelType) AtLeast(v LogLevelType) bool { + c := l.Value() + return c >= v +} + +const ( + // LogOff states that no logging should be performed by the SDK. This is the + // default state of the SDK, and should be used to disable all logging. + LogOff LogLevelType = iota * 0x1000 + + // LogDebug states that debug output should be logged by the SDK. This should + // be used to inspect requests made and responses received. + LogDebug +) + +// Debug Logging Sub Levels +const ( + // LogDebugWithSigning states that the SDK should log request signing and + // presigning events. This should be used to log the signing details of + // requests for debugging. Will also enable LogDebug. + LogDebugWithSigning LogLevelType = LogDebug | (1 << iota) + + // LogDebugWithHTTPBody states the SDK should log HTTP request and response + // HTTP bodies in addition to the headers and path. This should be used to + // see the body content of requests and responses made while using the SDK. + // Will also enable LogDebug. + LogDebugWithHTTPBody + + // LogDebugWithRequestRetries states the SDK should log when service requests will + // be retried.
Will also enable LogDebug. + LogDebugWithRequestRetries + + // LogDebugWithRequestErrors states the SDK should log when service requests fail + // to build, send, validate, or unmarshal. + LogDebugWithRequestErrors + + // LogDebugWithEventStreamBody states the SDK should log EventStream + // request and response bodies. This should be used to log the EventStream + // wire unmarshaled message content of requests and responses made while + // using the SDK. Will also enable LogDebug. + LogDebugWithEventStreamBody + + // LogInfoWithInputAndOutput states the SDK should log STRUCT input and output. + // Will also enable LogInfo. + LogInfoWithInputAndOutput + + // LogDebugWithInputAndOutput states the SDK should log STRUCT input and output. + // Will also enable LogDebug. + LogDebugWithInputAndOutput +) + +// A Logger is a minimalistic interface for the SDK to log messages to. Should +// be used to provide custom logging writers for the SDK to use. +type Logger interface { + Log(...interface{}) +} + +// A LoggerFunc is a convenience type to convert a function taking a variadic +// list of arguments and wrap it so the Logger interface can be used. +type LoggerFunc func(...interface{}) + +// Log calls the wrapped function with the arguments provided. +func (f LoggerFunc) Log(args ...interface{}) { + f(args...) +} + +// NewDefaultLogger returns a Logger which will write log messages to stdout, and +// uses the same formatting as the stdlib log.Logger. +func NewDefaultLogger() Logger { + return &defaultLogger{ + logger: log.New(os.Stdout, "", log.LstdFlags), + } +} + +// A defaultLogger provides a minimalistic logger satisfying the Logger interface. +type defaultLogger struct { + logger *log.Logger +} + +// Log logs the parameters to the stdlib logger. See log.Println. +func (l defaultLogger) Log(args ...interface{}) { + l.logger.Println(args...)
+} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request/connection_reset_error.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request/connection_reset_error.go new file mode 100644 index 000000000000..ed546d72288b --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request/connection_reset_error.go @@ -0,0 +1,37 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package request + +// Copy from https://github.com/aws/aws-sdk-go +// May have been modified by Beijing Volcanoengine Technology Ltd. + +import ( + "strings" +) + +func isErrConnectionReset(err error) bool { + if strings.Contains(err.Error(), "read: connection reset") { + return false + } + + if strings.Contains(err.Error(), "connection reset") || + strings.Contains(err.Error(), "broken pipe") { + return true + } + + return false +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request/handlers.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request/handlers.go new file mode 100644 index 000000000000..a2b770c0631f --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request/handlers.go @@ -0,0 +1,341 @@ +/* +Copyright 2023 The Kubernetes Authors. 
+ +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package request + +// Copy from https://github.com/aws/aws-sdk-go +// May have been modified by Beijing Volcanoengine Technology Ltd. + +import ( + "fmt" + "strings" +) + +// A Handlers provides a collection of request handlers for various +// stages of handling requests. +type Handlers struct { + Validate HandlerList + Build HandlerList + Sign HandlerList + Send HandlerList + ValidateResponse HandlerList + Unmarshal HandlerList + UnmarshalStream HandlerList + UnmarshalMeta HandlerList + UnmarshalError HandlerList + Retry HandlerList + AfterRetry HandlerList + CompleteAttempt HandlerList + Complete HandlerList +} + +// Copy returns a copy of this handler's lists. +func (h *Handlers) Copy() Handlers { + return Handlers{ + Validate: h.Validate.copy(), + Build: h.Build.copy(), + Sign: h.Sign.copy(), + Send: h.Send.copy(), + ValidateResponse: h.ValidateResponse.copy(), + Unmarshal: h.Unmarshal.copy(), + UnmarshalStream: h.UnmarshalStream.copy(), + UnmarshalError: h.UnmarshalError.copy(), + UnmarshalMeta: h.UnmarshalMeta.copy(), + Retry: h.Retry.copy(), + AfterRetry: h.AfterRetry.copy(), + CompleteAttempt: h.CompleteAttempt.copy(), + Complete: h.Complete.copy(), + } +} + +// Clear removes callback functions for all handlers. 
+func (h *Handlers) Clear() { + h.Validate.Clear() + h.Build.Clear() + h.Send.Clear() + h.Sign.Clear() + h.Unmarshal.Clear() + h.UnmarshalStream.Clear() + h.UnmarshalMeta.Clear() + h.UnmarshalError.Clear() + h.ValidateResponse.Clear() + h.Retry.Clear() + h.AfterRetry.Clear() + h.CompleteAttempt.Clear() + h.Complete.Clear() +} + +// IsEmpty returns true if there are no handlers in any of the handler lists. +func (h *Handlers) IsEmpty() bool { + if h.Validate.Len() != 0 { + return false + } + if h.Build.Len() != 0 { + return false + } + if h.Send.Len() != 0 { + return false + } + if h.Sign.Len() != 0 { + return false + } + if h.Unmarshal.Len() != 0 { + return false + } + if h.UnmarshalStream.Len() != 0 { + return false + } + if h.UnmarshalMeta.Len() != 0 { + return false + } + if h.UnmarshalError.Len() != 0 { + return false + } + if h.ValidateResponse.Len() != 0 { + return false + } + if h.Retry.Len() != 0 { + return false + } + if h.AfterRetry.Len() != 0 { + return false + } + if h.CompleteAttempt.Len() != 0 { + return false + } + if h.Complete.Len() != 0 { + return false + } + + return true +} + +// A HandlerListRunItem represents an entry in the HandlerList which +// is being run. +type HandlerListRunItem struct { + Index int + Handler NamedHandler + Request *Request +} + +// A HandlerList manages zero or more handlers in a list. +type HandlerList struct { + list []NamedHandler + + // Called after each request handler in the list is called. If set + // and the func returns true the HandlerList will continue to iterate + // over the request handlers. If false is returned the HandlerList + // will stop iterating. + // + // Should be used if extra logic is to be performed between each handler + // in the list. This can be used to terminate a list's iteration + // based on a condition, such as an error (HandlerListStopOnError), + // or for logging (HandlerListLogItem).
+ AfterEachFn func(item HandlerListRunItem) bool +} + +// A NamedHandler is a struct that contains a name and function callback. +type NamedHandler struct { + Name string + Fn func(*Request) +} + +// copy creates a copy of the handler list. +func (l *HandlerList) copy() HandlerList { + n := HandlerList{ + AfterEachFn: l.AfterEachFn, + } + if len(l.list) == 0 { + return n + } + + n.list = append(make([]NamedHandler, 0, len(l.list)), l.list...) + return n +} + +// Clear clears the handler list. +func (l *HandlerList) Clear() { + l.list = l.list[0:0] +} + +// Len returns the number of handlers in the list. +func (l *HandlerList) Len() int { + return len(l.list) +} + +// PushBack pushes handler f to the back of the handler list. +func (l *HandlerList) PushBack(f func(*Request)) { + l.PushBackNamed(NamedHandler{"__anonymous", f}) +} + +// PushBackNamed pushes named handler f to the back of the handler list. +func (l *HandlerList) PushBackNamed(n NamedHandler) { + if cap(l.list) == 0 { + l.list = make([]NamedHandler, 0, 5) + } + l.list = append(l.list, n) +} + +// PushFront pushes handler f to the front of the handler list. +func (l *HandlerList) PushFront(f func(*Request)) { + l.PushFrontNamed(NamedHandler{"__anonymous", f}) +} + +// PushFrontNamed pushes named handler f to the front of the handler list. +func (l *HandlerList) PushFrontNamed(n NamedHandler) { + if cap(l.list) == len(l.list) { + // Allocating new list required + l.list = append([]NamedHandler{n}, l.list...) + } else { + // Enough room to prepend into list. + l.list = append(l.list, NamedHandler{}) + copy(l.list[1:], l.list) + l.list[0] = n + } +} + +// Remove removes a NamedHandler n +func (l *HandlerList) Remove(n NamedHandler) { + l.RemoveByName(n.Name) +} + +// RemoveByName removes a NamedHandler by name. 
+func (l *HandlerList) RemoveByName(name string) { + for i := 0; i < len(l.list); i++ { + m := l.list[i] + if m.Name == name { + // Shift array preventing creating new arrays + copy(l.list[i:], l.list[i+1:]) + l.list[len(l.list)-1] = NamedHandler{} + l.list = l.list[:len(l.list)-1] + + // decrement list so next check to length is correct + i-- + } + } +} + +// SwapNamed will swap out any existing handlers with the same name as the +// passed in NamedHandler returning true if handlers were swapped. False is +// returned otherwise. +func (l *HandlerList) SwapNamed(n NamedHandler) (swapped bool) { + for i := 0; i < len(l.list); i++ { + if l.list[i].Name == n.Name { + l.list[i].Fn = n.Fn + swapped = true + } + } + + return swapped +} + +// Swap will swap out all handlers matching the name passed in. The matched +// handlers will be swapped in. True is returned if the handlers were swapped. +func (l *HandlerList) Swap(name string, replace NamedHandler) bool { + var swapped bool + + for i := 0; i < len(l.list); i++ { + if l.list[i].Name == name { + l.list[i] = replace + swapped = true + } + } + + return swapped +} + +// SetBackNamed will replace the named handler if it exists in the handler list. +// If the handler does not exist the handler will be added to the end of the list. +func (l *HandlerList) SetBackNamed(n NamedHandler) { + if !l.SwapNamed(n) { + l.PushBackNamed(n) + } +} + +// SetFrontNamed will replace the named handler if it exists in the handler list. +// If the handler does not exist the handler will be added to the beginning of +// the list. +func (l *HandlerList) SetFrontNamed(n NamedHandler) { + if !l.SwapNamed(n) { + l.PushFrontNamed(n) + } +} + +// Run executes all handlers in the list with a given request object. 
+func (l *HandlerList) Run(r *Request) { + for i, h := range l.list { + h.Fn(r) + item := HandlerListRunItem{ + Index: i, Handler: h, Request: r, + } + if l.AfterEachFn != nil && !l.AfterEachFn(item) { + return + } + } +} + +// HandlerListLogItem logs the request handler and the state of the +// request's Error value. Always returns true to continue iterating +// request handlers in a HandlerList. +func HandlerListLogItem(item HandlerListRunItem) bool { + if item.Request.Config.Logger == nil { + return true + } + item.Request.Config.Logger.Log("DEBUG: RequestHandler", + item.Index, item.Handler.Name, item.Request.Error) + + return true +} + +// HandlerListStopOnError returns false to stop the HandlerList iterating +// over request handlers if Request.Error is not nil. True otherwise +// to continue iterating. +func HandlerListStopOnError(item HandlerListRunItem) bool { + return item.Request.Error == nil +} + +// WithAppendUserAgent will add a string to the user agent prefixed with a +// single white space. +func WithAppendUserAgent(s string) Option { + return func(r *Request) { + r.Handlers.Build.PushBack(func(r2 *Request) { + AddToUserAgent(r, s) + }) + } +} + +// MakeAddToUserAgentHandler will add the name/version pair to the User-Agent request +// header. If the extra parameters are provided they will be added as metadata to the +// name/version pair resulting in the following format. +// "name/version/(extra0; extra1; ...)" +// The user agent part will be concatenated with the current request's user agent string. +func MakeAddToUserAgentHandler(name, version string, extra ...string) func(*Request) { + ua := fmt.Sprintf("%s/%s", name, version) + if len(extra) > 0 { + ua += fmt.Sprintf("/(%s)", strings.Join(extra, "; ")) + } + return func(r *Request) { + AddToUserAgent(r, ua) + } +} + +// MakeAddToUserAgentFreeFormHandler adds the input to the User-Agent request header. +// The input string will be concatenated with the current request's user agent string.
+func MakeAddToUserAgentFreeFormHandler(s string) func(*Request) { + return func(r *Request) { + AddToUserAgent(r, s) + } +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request/http_request.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request/http_request.go new file mode 100644 index 000000000000..3ce6989299ac --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request/http_request.go @@ -0,0 +1,43 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package request + +// Copy from https://github.com/aws/aws-sdk-go +// May have been modified by Beijing Volcanoengine Technology Ltd. 
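The user-agent handlers above build their string once at handler-construction time. The formatting they produce can be sketched standalone; `buildUserAgent` below is an illustrative stand-in (not part of the SDK) that mirrors the `fmt.Sprintf` logic in `MakeAddToUserAgentHandler`:

```go
package main

import (
	"fmt"
	"strings"
)

// buildUserAgent mirrors the formatting of MakeAddToUserAgentHandler:
// "name/version" plus an optional "/(extra0; extra1; ...)" suffix.
func buildUserAgent(name, version string, extra ...string) string {
	ua := fmt.Sprintf("%s/%s", name, version)
	if len(extra) > 0 {
		ua += fmt.Sprintf("/(%s)", strings.Join(extra, "; "))
	}
	return ua
}

func main() {
	// cluster-autoscaler/1.28.0/(linux; amd64)
	fmt.Println(buildUserAgent("cluster-autoscaler", "1.28.0", "linux", "amd64"))
}
```

This is what lets downstream services distinguish cluster-autoscaler traffic from other users of the Go SDK, per the PR's "add cluster-autoscaler name and version to the user agent" change.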
+ +import ( + "io" + "net/http" + "net/url" +) + +func copyHTTPRequest(r *http.Request, body io.ReadCloser) *http.Request { + req := new(http.Request) + *req = *r + req.URL = &url.URL{} + *req.URL = *r.URL + req.Body = body + + req.Header = http.Header{} + for k, v := range r.Header { + for _, vv := range v { + req.Header.Add(k, vv) + } + } + + return req +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request/offset_reader.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request/offset_reader.go new file mode 100644 index 000000000000..417ffd15ead4 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request/offset_reader.go @@ -0,0 +1,84 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package request + +// Copy from https://github.com/aws/aws-sdk-go +// May have been modified by Beijing Volcanoengine Technology Ltd. 
+ +import ( + "io" + "sync" + + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/internal/sdkio" +) + +// offsetReader is a thread-safe io.ReadCloser to prevent racing +// with retrying requests +type offsetReader struct { + buf io.ReadSeeker + lock sync.Mutex + closed bool +} + +func newOffsetReader(buf io.ReadSeeker, offset int64) (*offsetReader, error) { + reader := &offsetReader{} + _, err := buf.Seek(offset, sdkio.SeekStart) + if err != nil { + return nil, err + } + + reader.buf = buf + return reader, nil +} + +// Close will close the instance of the offset reader's access to +// the underlying io.ReadSeeker. +func (o *offsetReader) Close() error { + o.lock.Lock() + defer o.lock.Unlock() + o.closed = true + return nil +} + +// Read is a thread-safe read of the underlying io.ReadSeeker +func (o *offsetReader) Read(p []byte) (int, error) { + o.lock.Lock() + defer o.lock.Unlock() + + if o.closed { + return 0, io.EOF + } + + return o.buf.Read(p) +} + +// Seek is a thread-safe seeking operation. +func (o *offsetReader) Seek(offset int64, whence int) (int64, error) { + o.lock.Lock() + defer o.lock.Unlock() + + return o.buf.Seek(offset, whence) +} + +// CloseAndCopy will return a new offsetReader with a copy of the old buffer +// and close the old buffer. +func (o *offsetReader) CloseAndCopy(offset int64) (*offsetReader, error) { + if err := o.Close(); err != nil { + return nil, err + } + return newOffsetReader(o.buf, offset) +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request/request.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request/request.go new file mode 100644 index 000000000000..0b541c09a6c9 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request/request.go @@ -0,0 +1,747 @@ +/* +Copyright 2023 The Kubernetes Authors. 
+ +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package request + +// Copy from https://github.com/aws/aws-sdk-go +// May have been modified by Beijing Volcanoengine Technology Ltd. +import ( + "bytes" + "fmt" + "io" + "net/http" + "net/url" + "reflect" + "strings" + "time" + + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/custom" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/response" + + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/internal/sdkio" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/client/metadata" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineerr" +) + +const ( + // ErrCodeSerialization is the serialization error code that is received + // during protocol unmarshaling. + ErrCodeSerialization = "SerializationError" + + // ErrCodeRead is an error that is returned during HTTP reads. + ErrCodeRead = "ReadError" + + // ErrCodeResponseTimeout is the connection timeout error that is received + // during body reads.
+ ErrCodeResponseTimeout = "ResponseTimeout" + + // ErrCodeInvalidPresignExpire is returned when the expire time provided to + // presign is invalid. + ErrCodeInvalidPresignExpire = "InvalidPresignExpireError" + + // CanceledErrorCode is the error code that will be returned by an + // API request that was canceled. Requests given a volcengine.Context may + // return this error when canceled. + CanceledErrorCode = "RequestCanceled" +) + +// A Request is the service request to be made. +type Request struct { + Config volcengine.Config + ClientInfo metadata.ClientInfo + Handlers Handlers + + Retryer + AttemptTime time.Time + Time time.Time + Operation *Operation + HTTPRequest *http.Request + HTTPResponse *http.Response + Body io.ReadSeeker + BodyStart int64 // offset from beginning of Body that the request body starts + Params interface{} + Error error + Data interface{} + RequestID string + RetryCount int + Retryable *bool + RetryDelay time.Duration + NotHoist bool + SignedHeaderVals http.Header + LastSignedAt time.Time + DisableFollowRedirects bool + + // Additional API error codes that should be retried. IsErrorRetryable + // will consider these codes in addition to its built in cases. + RetryErrorCodes []string + + // Additional API error codes that should be retried with throttle backoff + // delay. IsErrorThrottle will consider these codes in addition to its + // built in cases. + ThrottleErrorCodes []string + + // A value greater than 0 instructs the request to be signed as a presigned URL. + // You should not set this field directly. Instead use Request's + // Presign or PresignRequest methods. + ExpireTime time.Duration + + context volcengine.Context + + built bool + + // Need to persist an intermediate body between the input Body and HTTP + // request body because the HTTP Client's transport can maintain a reference + // to the HTTP request's body after the client has returned.
This value is + // safe to use concurrently and wrap the input Body for each HTTP request. + safeBody *offsetReader + Input interface{} + IsJsonBody bool + + holders []interface{} + + Metadata response.ResponseMetadata +} + +// An Operation is the service API operation to be made. +type Operation struct { + Name string + HTTPMethod string + HTTPPath string + *Paginator + + BeforePresignFn func(r *Request) error +} + +// New returns a new Request pointer for the service API +// operation and parameters. +// +// Params is any value of input parameters to be the request payload. +// Data is a pointer value to an object which the request's response +// payload will be deserialized to. +func New(cfg volcengine.Config, clientInfo metadata.ClientInfo, handlers Handlers, + retryer Retryer, operation *Operation, params interface{}, data interface{}) *Request { + + method := operation.HTTPMethod + if method == "" { + method = "POST" + } + + httpReq, _ := http.NewRequest(method, "", nil) + + var err error + httpReq.URL, err = url.Parse(clientInfo.Endpoint + operation.HTTPPath) + if err != nil { + httpReq.URL = &url.URL{} + err = volcengineerr.New("InvalidEndpointURL", "invalid endpoint uri", err) + } + + SanitizeHostForHeader(httpReq) + + r := &Request{ + Config: cfg, + ClientInfo: clientInfo, + Handlers: handlers.Copy(), + Retryer: retryer, + Time: time.Now(), + ExpireTime: 0, + Operation: operation, + HTTPRequest: httpReq, + Body: nil, + Params: params, + Error: err, + Data: data, + } + r.SetBufferBody([]byte{}) + + return r +} + +// An Option is a functional option that can augment or modify a request when +// using a WithContext API operation method. +type Option func(*Request) + +// WithGetResponseHeader builds a request Option which will retrieve a single +// header value from the HTTP Response. If there are multiple values for the +// header key use WithGetResponseHeaders instead to access the http.Header +// map directly. The passed in val pointer must be non-nil.
+// +// This Option can be used multiple times with a single API operation. +// +// var id2, versionID string +// svc.PutObjectWithContext(ctx, params, +// request.WithGetResponseHeader("x-top-id-2", &id2), +// request.WithGetResponseHeader("x-top-version-id", &versionID), +// ) +func WithGetResponseHeader(key string, val *string) Option { + return func(r *Request) { + r.Handlers.Complete.PushBack(func(req *Request) { + *val = req.HTTPResponse.Header.Get(key) + }) + } +} + +// WithGetResponseHeaders builds a request Option which will retrieve the +// headers from the HTTP response and assign them to the passed in headers +// variable. The passed in headers pointer must be non-nil. +// +// var headers http.Header +// svc.PutObjectWithContext(ctx, params, request.WithGetResponseHeaders(&headers)) +func WithGetResponseHeaders(headers *http.Header) Option { + return func(r *Request) { + r.Handlers.Complete.PushBack(func(req *Request) { + *headers = req.HTTPResponse.Header + }) + } +} + +// WithLogLevel is a request option that will set the request to use a specific +// log level when the request is made. +// +// svc.PutObjectWithContext(ctx, params, request.WithLogLevel(volcengine.LogDebugWithHTTPBody)) +func WithLogLevel(l volcengine.LogLevelType) Option { + return func(r *Request) { + r.Config.LogLevel = volcengine.LogLevel(l) + } +} + +// ApplyOptions will apply each option to the request calling them in the order +// they were provided. +func (r *Request) ApplyOptions(opts ...Option) { + for _, opt := range opts { + opt(r) + } +} + +// Context will always return a non-nil context. If Request does not have a +// context volcengine.BackgroundContext will be returned. +func (r *Request) Context() volcengine.Context { + if r.context != nil { + return r.context + } + return volcengine.BackgroundContext() +} + +// SetContext adds a Context to the current request that can be used to cancel +// an in-flight request.
The Context value must not be nil, or this method will +// panic. +// +// Unlike http.Request.WithContext, SetContext does not return a copy of the +// Request. It is not safe to use a single Request value for multiple +// requests. A new Request should be created for each API operation request. +func (r *Request) SetContext(ctx volcengine.Context) { + if ctx == nil { + panic("context cannot be nil") + } + setRequestContext(r, ctx) +} + +// NoBody is a http.NoBody reader instructing Go HTTP client to not include +// a body in the HTTP request. +var NoBody = http.NoBody + +// ResetBody rewinds the request body back to its starting position, and +// sets the HTTP Request body reference. When the body is read prior +// to being sent in the HTTP request it will need to be rewound. +// +// ResetBody will automatically be called by the SDK's build handler, but if +// the request is being used directly ResetBody must be called before the request +// is Sent. SetStringBody, SetBufferBody, and SetReaderBody will automatically +// call ResetBody. +// +// Will also set the Go 1.8's http.Request.GetBody member to allow retrying +// PUT/POST redirects. +func (r *Request) ResetBody() { + body, err := r.getNextRequestBody() + if err != nil { + r.Error = volcengineerr.New(ErrCodeSerialization, + "failed to reset request body", err) + return + } + + r.HTTPRequest.Body = body + r.HTTPRequest.GetBody = r.getNextRequestBody +} + +// WillRetry returns if the request can be retried.
+func (r *Request) WillRetry() bool { + if !volcengine.IsReaderSeekable(r.Body) && r.HTTPRequest.Body != NoBody { + return false + } + return r.Error != nil && volcengine.BoolValue(r.Retryable) && r.RetryCount < r.MaxRetries() +} + +func fmtAttemptCount(retryCount, maxRetries int) string { + return fmt.Sprintf("attempt %v/%v", retryCount, maxRetries) +} + +// ParamsFilled returns if the request's parameters have been populated +// and the parameters are valid. False is returned if no parameters are +// provided or invalid. +func (r *Request) ParamsFilled() bool { + return r.Params != nil && reflect.ValueOf(r.Params).Elem().IsValid() +} + +// DataFilled returns true if the request's data for response deserialization +// target has been set and is valid. False is returned if data is not +// set, or is invalid. +func (r *Request) DataFilled() bool { + return r.Data != nil && reflect.ValueOf(r.Data).Elem().IsValid() +} + +// SetBufferBody will set the request's body bytes that will be sent to +// the service API. +func (r *Request) SetBufferBody(buf []byte) { + r.SetReaderBody(bytes.NewReader(buf)) +} + +// SetStringBody sets the body of the request to be backed by a string. +func (r *Request) SetStringBody(s string) { + r.SetReaderBody(strings.NewReader(s)) +} + +// SetReaderBody will set the request's body reader. +func (r *Request) SetReaderBody(reader io.ReadSeeker) { + r.Body = reader + + if volcengine.IsReaderSeekable(reader) { + var err error + // Get the Body's current offset so retries will start from the same + // initial position. + r.BodyStart, err = reader.Seek(0, sdkio.SeekCurrent) + if err != nil { + r.Error = volcengineerr.New(ErrCodeSerialization, + "failed to determine start of request body", err) + return + } + } + r.ResetBody() +} + +// Presign returns the request's signed URL. Error will be returned +// if the signing fails. The expire parameter is only used for presigned +// S3 API requests.
All other Volcengine services will use a fixed expiration +// time of 15 minutes. +// +// It is invalid to create a presigned URL with an expire duration of 0 or less. An +// error is returned if expire duration is 0 or less. +func (r *Request) Presign(expire time.Duration) (string, error) { + r = r.copy() + + // Presign requires all headers be hoisted. There is no way to retrieve + // the signed headers not hoisted without this. Making the presigned URL + // useless. + r.NotHoist = false + + u, _, err := getPresignedURL(r, expire) + return u, err +} + +// PresignRequest behaves just like Presign, with the addition of returning a +// set of headers that were signed. The expire parameter is only used for +// presigned S3 API requests. All other Volcengine services will use a fixed +// expiration time of 15 minutes. +// +// It is invalid to create a presigned URL with an expire duration of 0 or less. An +// error is returned if expire duration is 0 or less. +// +// Returns the URL string for the API operation with signature in the query string, +// and the HTTP headers that were included in the signature. These headers must +// be included in any HTTP request made with the presigned URL. +// +// To prevent hoisting any headers to the query string set NotHoist to true on +// this Request value prior to calling PresignRequest. +func (r *Request) PresignRequest(expire time.Duration) (string, http.Header, error) { + r = r.copy() + return getPresignedURL(r, expire) +} + +// IsPresigned returns true if the request represents a presigned API URL.
+func (r *Request) IsPresigned() bool { + return r.ExpireTime != 0 +} + +func getPresignedURL(r *Request, expire time.Duration) (string, http.Header, error) { + if expire <= 0 { + return "", nil, volcengineerr.New( + ErrCodeInvalidPresignExpire, + "presigned URL requires an expire duration greater than 0", + nil, + ) + } + + r.ExpireTime = expire + + if r.Operation.BeforePresignFn != nil { + if err := r.Operation.BeforePresignFn(r); err != nil { + return "", nil, err + } + } + + if err := r.Sign(); err != nil { + return "", nil, err + } + + return r.HTTPRequest.URL.String(), r.SignedHeaderVals, nil +} + +const ( + notRetrying = "not retrying" +) + +func debugLogReqError(r *Request, stage, retryStr string, err error) { + if !r.Config.LogLevel.Matches(volcengine.LogDebugWithRequestErrors) { + return + } + + r.Config.Logger.Log(fmt.Sprintf("DEBUG: %s %s/%s failed, %s, error %v", + stage, r.ClientInfo.ServiceName, r.Operation.Name, retryStr, err)) +} + +// Build will build the request's object so it can be signed and sent +// to the service. Build will also validate all the request's parameters. +// Any additional build Handlers set on this request will be run +// in the order they were set. +// +// The request will only be built once. Multiple calls to build will have +// no effect. +// +// If any Validate or Build errors occur the build will stop and the error +// which occurred will be returned. +func (r *Request) Build() error { + if !r.built { + r.Handlers.Validate.Run(r) + if r.Error != nil { + debugLogReqError(r, "Validate Request", notRetrying, r.Error) + return r.Error + } + r.Handlers.Build.Run(r) + if r.Error != nil { + debugLogReqError(r, "Build Request", notRetrying, r.Error) + return r.Error + } + r.built = true + } + + return r.Error +} + +// Sign will sign the request, returning error if errors are encountered. +// +// Sign will build the request prior to signing. All Sign Handlers will +// be executed in the order they were set. 
+func (r *Request) Sign() error { + err := r.Build() + if err != nil { + debugLogReqError(r, "Build Request", notRetrying, err) + return err + } + + r.Handlers.Sign.Run(r) + return r.Error +} + +func (r *Request) getNextRequestBody() (body io.ReadCloser, err error) { + if r.safeBody != nil { + _ = r.safeBody.Close() + } + + r.safeBody, err = newOffsetReader(r.Body, r.BodyStart) + if err != nil { + return nil, volcengineerr.New(ErrCodeSerialization, + "failed to get next request body reader", err) + } + + // Go 1.8 tightened and clarified the rules code needs to use when building + // requests with the http package. Go 1.8 removed the automatic detection + // of if the Request.Body was empty, or actually had bytes in it. The SDK + // always sets the Request.Body even if it is empty and should not actually + // be sent. This is incorrect. + // + // Go 1.8 did add a http.NoBody value that the SDK can use to tell the http + // client that the request really should be sent without a body. The + // Request.Body cannot be set to nil, which is preferable, because the + // field is exported and could introduce nil pointer dereferences for users + // of the SDK if they used that field. + // + // Related golang/go#18257 + l, err := volcengine.SeekerLen(r.Body) + if err != nil { + return nil, volcengineerr.New(ErrCodeSerialization, + "failed to compute request body size", err) + } + + if l == 0 { + body = NoBody + } else if l > 0 { + body = r.safeBody + } else { + // Hack to prevent sending bodies for methods where the body + // should be ignored by the server. Sending bodies on these + // methods without an associated ContentLength will cause the + // request to socket timeout because the server does not handle + // Transfer-Encoding: chunked bodies for these methods. + // + // This would only happen if a volcengine.ReaderSeekerCloser was used with + // an io.Reader that was not also an io.Seeker, or did not implement + // Len() method.
+ switch r.Operation.HTTPMethod { + case "GET", "HEAD", "DELETE": + body = NoBody + default: + body = r.safeBody + } + } + + return body, nil +} + +// GetBody will return an io.ReadSeeker of the Request's underlying +// input body with a concurrency safe wrapper. +func (r *Request) GetBody() io.ReadSeeker { + return r.safeBody +} + +// Send will send the request, returning error if errors are encountered. +// +// Send will sign the request prior to sending. All Send Handlers will +// be executed in the order they were set. +// +// Canceling a request is non-deterministic. If a request has been canceled, +// then the transport will choose, randomly, one of the state channels during +// reads or getting the connection. +// +// readLoop() and getConn(req *Request, cm connectMethod) +// https://github.com/golang/go/blob/master/src/net/http/transport.go +// +// Send will not close the request.Request's body. +func (r *Request) Send() error { + interceptorMapping := make(map[int]interface{}) + defer func() { + // Regardless of success or failure of the request trigger the Complete + // request handlers.
+ r.Handlers.Complete.Run(r) + for index, interceptor := range r.Config.Interceptors { + if interceptor.After != nil { + interceptor.After(r.MergeRequestInfo(), interceptorMapping[index]) + } + } + }() + + if err := r.Error; err != nil { + return err + } + + for { + r.Error = nil + r.AttemptTime = time.Now() + + if err := r.Sign(); err != nil { + debugLogReqError(r, "Sign Request", notRetrying, err) + return err + } + + for index, interceptor := range r.Config.Interceptors { + if interceptor.Before != nil { + interceptorMapping[index] = interceptor.Before(r.MergeRequestInfo()) + } else { + interceptorMapping[index] = nil + } + + } + + if err := r.sendRequest(); err == nil { + return nil + } + r.Handlers.Retry.Run(r) + r.Handlers.AfterRetry.Run(r) + + if r.Error != nil || !volcengine.BoolValue(r.Retryable) { + return r.Error + } + + if err := r.prepareRetry(); err != nil { + r.Error = err + return err + } + } +} + +func (r *Request) prepareRetry() error { + if r.Config.LogLevel.Matches(volcengine.LogDebugWithRequestRetries) { + r.Config.Logger.Log(fmt.Sprintf("DEBUG: Retrying Request %s/%s, attempt %d", + r.ClientInfo.ServiceName, r.Operation.Name, r.RetryCount)) + } + + // The previous http.Request will have a reference to the r.Body + // and the HTTP Client's Transport may still be reading from + // the request's body even though the Client's Do returned. + r.HTTPRequest = copyHTTPRequest(r.HTTPRequest, nil) + r.ResetBody() + if err := r.Error; err != nil { + return volcengineerr.New(ErrCodeSerialization, + "failed to prepare body for retry", err) + + } + + // Closing response body to ensure that no response body is leaked + // between retry attempts.
+ if r.HTTPResponse != nil && r.HTTPResponse.Body != nil { + r.HTTPResponse.Body.Close() + } + + return nil +} + +func (r *Request) sendRequest() (sendErr error) { + defer r.Handlers.CompleteAttempt.Run(r) + + r.Retryable = nil + r.Handlers.Send.Run(r) + if r.Error != nil { + debugLogReqError(r, "Send Request", + fmtAttemptCount(r.RetryCount, r.MaxRetries()), + r.Error) + return r.Error + } + + r.Handlers.UnmarshalMeta.Run(r) + r.Handlers.ValidateResponse.Run(r) + if r.Error != nil { + r.Handlers.UnmarshalError.Run(r) + debugLogReqError(r, "Validate Response", + fmtAttemptCount(r.RetryCount, r.MaxRetries()), + r.Error) + return r.Error + } + + r.Handlers.Unmarshal.Run(r) + if r.Error != nil { + debugLogReqError(r, "Unmarshal Response", + fmtAttemptCount(r.RetryCount, r.MaxRetries()), + r.Error) + return r.Error + } + + return nil +} + +// copy will copy a request which will allow for local manipulation of the +// request. +func (r *Request) copy() *Request { + req := &Request{} + *req = *r + req.Handlers = r.Handlers.Copy() + op := *r.Operation + req.Operation = &op + return req +} + +// AddToUserAgent adds the string to the end of the request's current user agent. +func AddToUserAgent(r *Request, s string) { + curUA := r.HTTPRequest.Header.Get("User-Agent") + if len(curUA) > 0 { + s = curUA + " " + s + } + r.HTTPRequest.Header.Set("User-Agent", s) +} + +// SanitizeHostForHeader removes default port from host and updates request.Host +func SanitizeHostForHeader(r *http.Request) { + host := getHost(r) + port := portOnly(host) + if port != "" && isDefaultPort(r.URL.Scheme, port) { + r.Host = stripPort(host) + } +} + +// Returns host from request +func getHost(r *http.Request) string { + if r.Host != "" { + return r.Host + } + + return r.URL.Host +} + +// Hostname returns u.Host, without any port number. +// +// If Host is an IPv6 literal with a port number, Hostname returns the +// IPv6 literal without the square brackets. 
IPv6 literals may include +// a zone identifier. +// +// Copied from the Go 1.8 standard library (net/url) +func stripPort(hostport string) string { + colon := strings.IndexByte(hostport, ':') + if colon == -1 { + return hostport + } + if i := strings.IndexByte(hostport, ']'); i != -1 { + return strings.TrimPrefix(hostport[:i], "[") + } + return hostport[:colon] +} + +// Port returns the port part of u.Host, without the leading colon. +// If u.Host doesn't contain a port, Port returns an empty string. +// +// Copied from the Go 1.8 standard library (net/url) +func portOnly(hostport string) string { + colon := strings.IndexByte(hostport, ':') + if colon == -1 { + return "" + } + if i := strings.Index(hostport, "]:"); i != -1 { + return hostport[i+len("]:"):] + } + if strings.Contains(hostport, "]") { + return "" + } + return hostport[colon+len(":"):] +} + +// Returns true if the specified URI is using the standard port +// (i.e. port 80 for HTTP URIs or 443 for HTTPS URIs) +func isDefaultPort(scheme, port string) bool { + if port == "" { + return true + } + + lowerCaseScheme := strings.ToLower(scheme) + if (lowerCaseScheme == "http" && port == "80") || (lowerCaseScheme == "https" && port == "443") { + return true + } + + return false +} + +func (r *Request) MergeRequestInfo() custom.RequestInfo { + return custom.RequestInfo{ + Context: r.context, + Request: r.HTTPRequest, + Response: r.HTTPResponse, + Name: r.Operation.Name, + Method: r.Operation.HTTPMethod, + ClientInfo: r.ClientInfo, + URI: r.HTTPRequest.RequestURI, + Header: r.HTTPRequest.Header, + URL: r.HTTPRequest.URL, + Input: r.Params, + Output: r.Data, + Metadata: r.Metadata, + Error: r.Error, + } +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request/request_context.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request/request_context.go new file mode 100644 index 000000000000..c65e46b4c0c5 --- /dev/null +++ 
b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request/request_context.go @@ -0,0 +1,31 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package request + +// Copy from https://github.com/aws/aws-sdk-go +// May have been modified by Beijing Volcanoengine Technology Ltd. + +import "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine" + +// setContext updates the Request to use the passed in context for cancellation. +// Context will also be used for request retry delay. +// +// Creates shallow copy of the http.Request with the WithContext method. +func setRequestContext(r *Request, ctx volcengine.Context) { + r.context = ctx + r.HTTPRequest = r.HTTPRequest.WithContext(ctx) +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request/request_pagination.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request/request_pagination.go new file mode 100644 index 000000000000..e240a6e3873e --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request/request_pagination.go @@ -0,0 +1,283 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. 
+You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package request + +// Copy from https://github.com/aws/aws-sdk-go +// May have been modified by Beijing Volcanoengine Technology Ltd. + +import ( + "reflect" + "sync/atomic" + + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil" +) + +// A Pagination provides paginating of SDK API operations which are paginatable. +// Generally you should not use this type directly, but use the "Pages" API +// operations method to automatically perform pagination for you. Such as, +// "S3.ListObjectsPages", and "S3.ListObjectsPagesWithContext" methods. +// +// Pagination differs from a Paginator type in that pagination is the type that +// does the pagination between API operations, and Paginator defines the +// configuration that will be used per page request. +// +// cont := true +// for p.Next() && cont { +// data := p.Page().(*s3.ListObjectsOutput) +// // process the page's data +// } +// return p.Err() +// +// See service client API operation Pages methods for examples how the SDK will +// use the Pagination type. +type Pagination struct { + // Function to return a Request value for each pagination request. + // Any configuration or handlers that need to be applied to the request + // prior to getting the next page should be done here before the request + // returned. + // + // NewRequest should always be built from the same API operations. It is + // undefined if different API operations are returned on subsequent calls. 
+ NewRequest func() (*Request, error) + // EndPageOnSameToken, when enabled, will allow the paginator to stop on + // tokens that are the same as its previous tokens. + EndPageOnSameToken bool + + started bool + prevTokens []interface{} + nextTokens []interface{} + + err error + curPage interface{} +} + +// HasNextPage will return true if Pagination is able to determine that the API +// operation has additional pages. False will be returned if there are no more +// pages remaining. +// +// Will always return true if Next has not been called yet. +func (p *Pagination) HasNextPage() bool { + if !p.started { + return true + } + + hasNextPage := len(p.nextTokens) != 0 + if p.EndPageOnSameToken { + return hasNextPage && !volcengineutil.DeepEqual(p.nextTokens, p.prevTokens) + } + return hasNextPage +} + +// Err returns the error Pagination encountered when retrieving the next page. +func (p *Pagination) Err() error { + return p.err +} + +// Page returns the current page. Page should only be called after a successful +// call to Next. It is undefined what Page will return if Page is called after +// Next returns false. +func (p *Pagination) Page() interface{} { + return p.curPage +} + +// Next will attempt to retrieve the next page for the API operation. When a page +// is retrieved true will be returned. If the page cannot be retrieved, or there +// are no more pages, false will be returned. +// +// Use the Page method to retrieve the current page data. The data will need +// to be cast to the API operation's output type. +// +// Use the Err method to determine if an error occurred if Page returns false.
+func (p *Pagination) Next() bool { + if !p.HasNextPage() { + return false + } + + req, err := p.NewRequest() + if err != nil { + p.err = err + return false + } + + if p.started { + for i, intok := range req.Operation.InputTokens { + volcengineutil.SetValueAtPath(req.Params, intok, p.nextTokens[i]) + } + } + p.started = true + + err = req.Send() + if err != nil { + p.err = err + return false + } + + p.prevTokens = p.nextTokens + p.nextTokens = req.nextPageTokens() + p.curPage = req.Data + + return true +} + +// A Paginator is the configuration data that defines how an API operation +// should be paginated. This type is used by the API service models to define +// the generated pagination config for service APIs. +// +// The Pagination type is what provides iterating between pages of an API. It +// is only used to store the token metadata the SDK should use for performing +// pagination. +type Paginator struct { + InputTokens []string + OutputTokens []string + LimitToken string + TruncationToken string +} + +// nextPageTokens returns the tokens to use when asking for the next page of data. 
+func (r *Request) nextPageTokens() []interface{} { + if r.Operation.Paginator == nil { + return nil + } + if r.Operation.TruncationToken != "" { + tr, _ := volcengineutil.ValuesAtPath(r.Data, r.Operation.TruncationToken) + if len(tr) == 0 { + return nil + } + + switch v := tr[0].(type) { + case *bool: + if !volcengine.BoolValue(v) { + return nil + } + case bool: + if !v { + return nil + } + } + } + + tokens := []interface{}{} + tokenAdded := false + for _, outToken := range r.Operation.OutputTokens { + vs, _ := volcengineutil.ValuesAtPath(r.Data, outToken) + if len(vs) == 0 { + tokens = append(tokens, nil) + continue + } + v := vs[0] + + switch tv := v.(type) { + case *string: + if len(volcengineutil.StringValue(tv)) == 0 { + tokens = append(tokens, nil) + continue + } + case string: + if len(tv) == 0 { + tokens = append(tokens, nil) + continue + } + } + + tokenAdded = true + tokens = append(tokens, v) + } + if !tokenAdded { + return nil + } + + return tokens +} + +// Ensure a deprecated item is only logged once instead of each time it's used. +func logDeprecatedf(logger volcengine.Logger, flag *int32, msg string) { + if logger == nil { + return + } + if atomic.CompareAndSwapInt32(flag, 0, 1) { + logger.Log(msg) + } +} + +var ( + logDeprecatedHasNextPage int32 + logDeprecatedNextPage int32 + logDeprecatedEachPage int32 +) + +// HasNextPage returns true if this request has more pages of data available. +// +// Deprecated Use Pagination type for configurable pagination of API operations +func (r *Request) HasNextPage() bool { + logDeprecatedf(r.Config.Logger, &logDeprecatedHasNextPage, + "Request.HasNextPage deprecated. Use Pagination type for configurable pagination of API operations") + + return len(r.nextPageTokens()) > 0 +} + +// NextPage returns a new Request that can be executed to return the next +// page of result data. Call .Send() on this request to execute it.
+// +// Deprecated Use Pagination type for configurable pagination of API operations +func (r *Request) NextPage() *Request { + logDeprecatedf(r.Config.Logger, &logDeprecatedNextPage, + "Request.NextPage deprecated. Use Pagination type for configurable pagination of API operations") + + tokens := r.nextPageTokens() + if len(tokens) == 0 { + return nil + } + + data := reflect.New(reflect.TypeOf(r.Data).Elem()).Interface() + nr := New(r.Config, r.ClientInfo, r.Handlers, r.Retryer, r.Operation, volcengineutil.CopyOf(r.Params), data) + for i, intok := range nr.Operation.InputTokens { + volcengineutil.SetValueAtPath(nr.Params, intok, tokens[i]) + } + return nr +} + +// EachPage iterates over each page of a paginated request object. The fn +// parameter should be a function with the following sample signature: +// +// func(page *T, lastPage bool) bool { +// return true // return false to stop iterating +// } +// +// Where "T" is the structure type matching the output structure of the given +// operation. For example, a request object generated by +// DynamoDB.ListTablesRequest() would expect to see dynamodb.ListTablesOutput +// as the structure "T". The lastPage value represents whether the page is +// the last page of data or not. The return value of this function should +// return true to keep iterating or false to stop. +// +// Deprecated Use Pagination type for configurable pagination of API operations +func (r *Request) EachPage(fn func(data interface{}, isLastPage bool) (shouldContinue bool)) error { + logDeprecatedf(r.Config.Logger, &logDeprecatedEachPage, + "Request.EachPage deprecated. 
Use Pagination type for configurable pagination of API operations") + + for page := r; page != nil; page = page.NextPage() { + if err := page.Send(); err != nil { + return err + } + if getNextPage := fn(page.Data, !page.HasNextPage()); !getNextPage { + return page.Error + } + } + + return nil +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request/retryer.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request/retryer.go new file mode 100644 index 000000000000..6e4af46cbddb --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request/retryer.go @@ -0,0 +1,295 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package request + +// Copy from https://github.com/aws/aws-sdk-go +// May have been modified by Beijing Volcanoengine Technology Ltd. + +import ( + "net" + "net/url" + "strings" + "time" + + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineerr" +) + +// Retryer provides the interface to drive the SDK's request retry behavior. The +// Retryer implementation is responsible for implementing exponential backoff, +// and determining if a request API error should be retried. +// +// client.DefaultRetryer is the SDK's default implementation of the Retryer.
It +// uses the Request.IsErrorRetryable and Request.IsErrorThrottle +// methods to determine if the request is retried. +type Retryer interface { + // RetryRules return the retry delay that should be used by the SDK before + // making another request attempt for the failed request. + RetryRules(*Request) time.Duration + + // ShouldRetry returns if the failed request is retryable. + // + // Implementations may consider request attempt count when determining if a + // request is retryable, but the SDK will use MaxRetries to limit the + // number of attempts made for a request. + ShouldRetry(*Request) bool + + // MaxRetries is the number of times a request may be retried before + // failing. + MaxRetries() int +} + +// WithRetryer sets a Retryer value to the given Config returning the Config +// value for chaining. +func WithRetryer(cfg *volcengine.Config, retryer Retryer) *volcengine.Config { + cfg.Retryer = retryer + return cfg +} + +// retryableCodes is a collection of service response codes which are retry-able +// without any further action. +var retryableCodes = map[string]struct{}{ + "RequestError": {}, + "RequestTimeout": {}, + ErrCodeResponseTimeout: {}, + "RequestTimeoutException": {}, // Glacier's flavor of RequestTimeout +} + +var throttleCodes = map[string]struct{}{ + "ProvisionedThroughputExceededException": {}, + "Throttling": {}, + "ThrottlingException": {}, + "RequestLimitExceeded": {}, + "RequestThrottled": {}, + "RequestThrottledException": {}, + "TooManyRequestsException": {}, // Lambda functions + "PriorRequestNotComplete": {}, // Route53 + "TransactionInProgressException": {}, +} + +// credsExpiredCodes is a collection of error codes which signify the credentials +// need to be refreshed. Expired tokens require refreshing of credentials, and +// resigning before the request can be retried.
+var credsExpiredCodes = map[string]struct{}{ + "ExpiredToken": {}, + "ExpiredTokenException": {}, + "RequestExpired": {}, // EC2 Only +} + +func isCodeThrottle(code string) bool { + _, ok := throttleCodes[code] + return ok +} + +func isCodeRetryable(code string) bool { + if _, ok := retryableCodes[code]; ok { + return true + } + + return isCodeExpiredCreds(code) +} + +func isCodeExpiredCreds(code string) bool { + _, ok := credsExpiredCodes[code] + return ok +} + +var validParentCodes = map[string]struct{}{ + ErrCodeSerialization: {}, + ErrCodeRead: {}, +} + +func isNestedErrorRetryable(parentErr volcengineerr.Error) bool { + if parentErr == nil { + return false + } + + if _, ok := validParentCodes[parentErr.Code()]; !ok { + return false + } + + err := parentErr.OrigErr() + if err == nil { + return false + } + + if aerr, ok := err.(volcengineerr.Error); ok { + return isCodeRetryable(aerr.Code()) + } + + if t, ok := err.(temporary); ok { + return t.Temporary() || isErrConnectionReset(err) + } + + return isErrConnectionReset(err) +} + +// IsErrorRetryable returns whether the error is retryable, based on its Code. +// Returns false if error is nil. +func IsErrorRetryable(err error) bool { + if err == nil { + return false + } + return shouldRetryError(err) +} + +type temporary interface { + Temporary() bool +} + +func shouldRetryError(origErr error) bool { + switch err := origErr.(type) { + case volcengineerr.Error: + if err.Code() == CanceledErrorCode { + return false + } + if isNestedErrorRetryable(err) { + return true + } + + origErr := err.OrigErr() + var shouldRetry bool + if origErr != nil { + // Assign (not ":=") so the outer shouldRetry is not shadowed; + // with ":=" the value returned below would always be false. + shouldRetry = shouldRetryError(origErr) + if err.Code() == "RequestError" && !shouldRetry { + return false + } + } + if isCodeRetryable(err.Code()) { + return true + } + return shouldRetry + + case *url.Error: + if strings.Contains(err.Error(), "connection refused") { + // Refused connections should be retried as the service may not yet + // be running on the port.
Go TCP dial considers refused + // connections as not temporary. + return true + } + // *url.Error only implements Temporary after golang 1.6 but since + // url.Error only wraps the error: + return shouldRetryError(err.Err) + + case temporary: + if netErr, ok := err.(*net.OpError); ok && netErr.Op == "dial" { + return true + } + // If the error is temporary, we want to allow continuation of the + // retry process + return err.Temporary() || isErrConnectionReset(origErr) + + case nil: + // `volcengineerr.Error.OrigErr()` can be nil, meaning there was an error but + // because we don't know the cause, it is marked as retryable. See + // TestRequest4xxUnretryable for an example. + return true + + default: + switch err.Error() { + case "net/http: request canceled", + "net/http: request canceled while waiting for connection": + // known 1.5 error case when an http request is cancelled + return false + } + // here we don't know the error; so we allow a retry. + return true + } +} + +// IsErrorThrottle returns whether the error is to be throttled based on its code. +// Returns false if error is nil. +func IsErrorThrottle(err error) bool { + if aerr, ok := err.(volcengineerr.Error); ok && aerr != nil { + return isCodeThrottle(aerr.Code()) + } + return false +} + +// IsErrorExpiredCreds returns whether the error code is a credential expiry +// error. Returns false if error is nil. +func IsErrorExpiredCreds(err error) bool { + if aerr, ok := err.(volcengineerr.Error); ok && aerr != nil { + return isCodeExpiredCreds(aerr.Code()) + } + return false +} + +// IsErrorRetryable returns whether the error is retryable, based on its Code. +// Returns false if the request has no Error set. +// +// Alias for the utility function IsErrorRetryable +func (r *Request) IsErrorRetryable() bool { + if isErrCode(r.Error, r.RetryErrorCodes) { + return true + } + + // HTTP response status code 501 should not be retried. 
+ // 501 represents Not Implemented which means the request method is not + // supported by the server and cannot be handled. + if r.HTTPResponse != nil { + // HTTP response status code 500 represents internal server error and + // should be retried without any throttle. + if r.HTTPResponse.StatusCode == 500 { + return true + } + } + return IsErrorRetryable(r.Error) +} + +// IsErrorThrottle returns whether the error is to be throttled based on its +// code. Returns false if the request has no Error set. +// +// Alias for the utility function IsErrorThrottle +func (r *Request) IsErrorThrottle() bool { + if isErrCode(r.Error, r.ThrottleErrorCodes) { + return true + } + + if r.HTTPResponse != nil { + switch r.HTTPResponse.StatusCode { + case + 429, // error caused due to too many requests + 502, // Bad Gateway error should be throttled + 503, // caused when service is unavailable + 504: // error occurred due to gateway timeout + return true + } + } + + return IsErrorThrottle(r.Error) +} + +func isErrCode(err error, codes []string) bool { + if aerr, ok := err.(volcengineerr.Error); ok && aerr != nil { + for _, code := range codes { + if code == aerr.Code() { + return true + } + } + } + + return false +} + +// IsErrorExpired returns whether the error code is a credential expiry error. +// Returns false if the request has no Error set. +// +// Alias for the utility function IsErrorExpiredCreds +func (r *Request) IsErrorExpired() bool { + return IsErrorExpiredCreds(r.Error) +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request/timeout_read_closer.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request/timeout_read_closer.go new file mode 100644 index 000000000000..43026253a614 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request/timeout_read_closer.go @@ -0,0 +1,113 @@ +/* +Copyright 2023 The Kubernetes Authors. 
+ +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package request + +// Copy from https://github.com/aws/aws-sdk-go +// May have been modified by Beijing Volcanoengine Technology Ltd. + +import ( + "io" + "time" + + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineerr" +) + +var timeoutErr = volcengineerr.New( + ErrCodeResponseTimeout, + "read on volcenginebody has reached the timeout limit", + nil, +) + +type readResult struct { + n int + err error +} + +// timeoutReadCloser will handle volcenginebody reads that take too long. +// We will return a ErrReadTimeout error if a timeout occurs. +type timeoutReadCloser struct { + reader io.ReadCloser + duration time.Duration +} + +// Read will spin off a goroutine to call the reader's Read method. We will +// select on the timer's channel or the read's channel. Whoever completes first +// will be returned. +func (r *timeoutReadCloser) Read(b []byte) (int, error) { + timer := time.NewTimer(r.duration) + c := make(chan readResult, 1) + + go func() { + n, err := r.reader.Read(b) + timer.Stop() + c <- readResult{n: n, err: err} + }() + + select { + case data := <-c: + return data.n, data.err + case <-timer.C: + return 0, timeoutErr + } +} + +func (r *timeoutReadCloser) Close() error { + return r.reader.Close() +} + +const ( + // HandlerResponseTimeout is what we use to signify the name of the + // response timeout handler. 
+ HandlerResponseTimeout = "ResponseTimeoutHandler" +) + +// adaptToResponseTimeoutError is a handler that will replace any top-level error +// with ErrCodeResponseTimeout if its wrapped error has that code. +func adaptToResponseTimeoutError(req *Request) { + if err, ok := req.Error.(volcengineerr.Error); ok { + aerr, ok := err.OrigErr().(volcengineerr.Error) + if ok && aerr.Code() == ErrCodeResponseTimeout { + req.Error = aerr + } + } +} + +// WithResponseReadTimeout is a request option that will wrap the volcenginebody in a timeout read closer. +// This will allow for per read timeouts. If a timeout occurs, we will return +// ErrCodeResponseTimeout. +// +// svc.PutObjectWithContext(ctx, params, request.WithResponseReadTimeout(30 * time.Second)) +func WithResponseReadTimeout(duration time.Duration) Option { + return func(r *Request) { + + var timeoutHandler = NamedHandler{ + HandlerResponseTimeout, + func(req *Request) { + req.HTTPResponse.Body = &timeoutReadCloser{ + reader: req.HTTPResponse.Body, + duration: duration, + } + }} + + // remove the handler so we are not stomping over any new durations. + r.Handlers.Send.RemoveByName(HandlerResponseTimeout) + r.Handlers.Send.PushBackNamed(timeoutHandler) + + r.Handlers.Unmarshal.PushBack(adaptToResponseTimeoutError) + r.Handlers.UnmarshalError.PushBack(adaptToResponseTimeoutError) + } +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request/validation.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request/validation.go new file mode 100644 index 000000000000..22b470301570 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request/validation.go @@ -0,0 +1,333 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License.
+You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package request + +// Copy from https://github.com/aws/aws-sdk-go +// May have been modified by Beijing Volcanoengine Technology Ltd. + +import ( + "bytes" + "fmt" + + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineerr" +) + +const ( + // InvalidParameterErrCode is the error code for invalid parameters errors + InvalidParameterErrCode = "InvalidParameter" + // ParamRequiredErrCode is the error code for required parameter errors + ParamRequiredErrCode = "ParamRequiredError" + // ParamMinValueErrCode is the error code for fields with too low of a + // number value. + ParamMinValueErrCode = "ParamMinValueError" + // ParamMaxValueErrCode is the error code for fields with too high of a + // number value. + ParamMaxValueErrCode = "ParamMaxValueError" + // ParamMinLenErrCode is the error code for fields without enough elements. + ParamMinLenErrCode = "ParamMinLenError" + // ParamMaxLenErrCode is the error code for value being too long. + ParamMaxLenErrCode = "ParamMaxLenError" + + // ParamFormatErrCode is the error code for a field with invalid + // format or characters. + ParamFormatErrCode = "ParamFormatInvalidError" +) + +// Validator provides a way for types to perform validation logic on their +// input values that external code can use to determine if a type's values +// are valid. +type Validator interface { + Validate() error +} + +// An ErrInvalidParams provides wrapping of invalid parameter errors found when +// validating API operation input parameters. 
+type ErrInvalidParams struct { + // Context is the base context of the invalid parameter group. + Context string + errs []ErrInvalidParam +} + +// Add adds a new invalid parameter error to the collection of invalid +// parameters. The context of the invalid parameter will be updated to reflect +// this collection. +func (e *ErrInvalidParams) Add(err ErrInvalidParam) { + err.SetContext(e.Context) + e.errs = append(e.errs, err) +} + +// AddNested adds the invalid parameter errors from another ErrInvalidParams +// value into this collection. The nested errors will have their nested context +// updated and base context to reflect the merging. +// +// Use for nested validations errors. +func (e *ErrInvalidParams) AddNested(nestedCtx string, nested ErrInvalidParams) { + for _, err := range nested.errs { + err.SetContext(e.Context) + err.AddNestedContext(nestedCtx) + e.errs = append(e.errs, err) + } +} + +// Len returns the number of invalid parameter errors +func (e ErrInvalidParams) Len() int { + return len(e.errs) +} + +// Code returns the code of the error +func (e ErrInvalidParams) Code() string { + return InvalidParameterErrCode +} + +// Message returns the message of the error +func (e ErrInvalidParams) Message() string { + return fmt.Sprintf("%d validation error(s) found.", len(e.errs)) +} + +// Error returns the string formatted form of the invalid parameters. 
+func (e ErrInvalidParams) Error() string { + w := &bytes.Buffer{} + fmt.Fprintf(w, "%s: %s\n", e.Code(), e.Message()) + + for _, err := range e.errs { + fmt.Fprintf(w, "- %s\n", err.Message()) + } + + return w.String() +} + +// OrigErr returns the invalid parameters as a volcengineerr.BatchedErrors value +func (e ErrInvalidParams) OrigErr() error { + return volcengineerr.NewBatchError( + InvalidParameterErrCode, e.Message(), e.OrigErrs()) +} + +// OrigErrs returns a slice of the invalid parameters +func (e ErrInvalidParams) OrigErrs() []error { + errs := make([]error, len(e.errs)) + for i := 0; i < len(errs); i++ { + errs[i] = e.errs[i] + } + + return errs +} + +// An ErrInvalidParam represents an invalid parameter error type. +type ErrInvalidParam interface { + volcengineerr.Error + + // Field name the error occurred on. + Field() string + + // SetContext updates the context of the error. + SetContext(string) + + // AddNestedContext updates the error's context to include a nested level. + AddNestedContext(string) +} + +type errInvalidParam struct { + context string + nestedContext string + field string + code string + msg string +} + +// Code returns the error code for the type of invalid parameter. +func (e *errInvalidParam) Code() string { + return e.code +} + +// Message returns the reason the parameter was invalid, and its context. +func (e *errInvalidParam) Message() string { + return fmt.Sprintf("%s, %s.", e.msg, e.Field()) +} + +// Error returns the string version of the invalid parameter error. +func (e *errInvalidParam) Error() string { + return fmt.Sprintf("%s: %s", e.code, e.Message()) +} + +// OrigErr returns nil, Implemented for volcengineerr.Error interface. +func (e *errInvalidParam) OrigErr() error { + return nil +} + +// Field Returns the field and context the error occurred. +func (e *errInvalidParam) Field() string { + field := e.context + if len(field) > 0 { + field += "." 
+ } + if len(e.nestedContext) > 0 { + field += fmt.Sprintf("%s.", e.nestedContext) + } + field += e.field + + return field +} + +// SetContext updates the base context of the error. +func (e *errInvalidParam) SetContext(ctx string) { + e.context = ctx +} + +// AddNestedContext prepends a context to the field's path. +func (e *errInvalidParam) AddNestedContext(ctx string) { + if len(e.nestedContext) == 0 { + e.nestedContext = ctx + } else { + e.nestedContext = fmt.Sprintf("%s.%s", ctx, e.nestedContext) + } + +} + +// An ErrParamRequired represents a required parameter error. +type ErrParamRequired struct { + errInvalidParam +} + +// NewErrParamRequired creates a new required parameter error. +func NewErrParamRequired(field string) *ErrParamRequired { + return &ErrParamRequired{ + errInvalidParam{ + code: ParamRequiredErrCode, + field: field, + msg: "missing required field", + }, + } +} + +// An ErrParamMaxValue represents a maximum value parameter error. +type ErrParamMaxValue struct { + errInvalidParam + max float64 +} + +// NewErrParamMaxValue creates a new maximum value parameter error. +func NewErrParamMaxValue(field string, max float64) *ErrParamMaxValue { + return &ErrParamMaxValue{ + errInvalidParam: errInvalidParam{ + code: ParamMaxValueErrCode, + field: field, + msg: fmt.Sprintf("maximum field value of %v", max), + }, + max: max, + } +} + +// MaxValue returns the field's required maximum value. +// +// float64 is returned for both int and float max values. +func (e *ErrParamMaxValue) MaxValue() float64 { + return e.max +} + +// An ErrParamMinValue represents a minimum value parameter error. +type ErrParamMinValue struct { + errInvalidParam + min float64 +} + +// NewErrParamMinValue creates a new minimum value parameter error.
+func NewErrParamMinValue(field string, min float64) *ErrParamMinValue { + return &ErrParamMinValue{ + errInvalidParam: errInvalidParam{ + code: ParamMinValueErrCode, + field: field, + msg: fmt.Sprintf("minimum field value of %v", min), + }, + min: min, + } +} + +// MinValue returns the field's required minimum value. +// +// float64 is returned for both int and float min values. +func (e *ErrParamMinValue) MinValue() float64 { + return e.min +} + +// An ErrParamMinLen represents a minimum length parameter error. +type ErrParamMinLen struct { + errInvalidParam + min int +} + +// NewErrParamMinLen creates a new minimum length parameter error. +func NewErrParamMinLen(field string, min int) *ErrParamMinLen { + return &ErrParamMinLen{ + errInvalidParam: errInvalidParam{ + code: ParamMinLenErrCode, + field: field, + msg: fmt.Sprintf("minimum field size of %v", min), + }, + min: min, + } +} + +// MinLen returns the field's required minimum length. +func (e *ErrParamMinLen) MinLen() int { + return e.min +} + +// An ErrParamMaxLen represents a maximum length parameter error. +type ErrParamMaxLen struct { + errInvalidParam + max int +} + +// NewErrParamMaxLen creates a new maximum length parameter error. +func NewErrParamMaxLen(field string, max int, value string) *ErrParamMaxLen { + return &ErrParamMaxLen{ + errInvalidParam: errInvalidParam{ + code: ParamMaxLenErrCode, + field: field, + msg: fmt.Sprintf("maximum size of %v, %v", max, value), + }, + max: max, + } +} + +// MaxLen returns the field's required maximum length. +func (e *ErrParamMaxLen) MaxLen() int { + return e.max +} + +// An ErrParamFormat represents an invalid format parameter error. +type ErrParamFormat struct { + errInvalidParam + format string +} + +// NewErrParamFormat creates a new invalid format parameter error.
+func NewErrParamFormat(field string, format, value string) *ErrParamFormat { + return &ErrParamFormat{ + errInvalidParam: errInvalidParam{ + code: ParamFormatErrCode, + field: field, + msg: fmt.Sprintf("format %v, %v", format, value), + }, + format: format, + } +} + +// Format returns the field's required format. +func (e *ErrParamFormat) Format() string { + return e.format +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request/waiter.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request/waiter.go new file mode 100644 index 000000000000..b2648791ef63 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request/waiter.go @@ -0,0 +1,315 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package request + +// Copy from https://github.com/aws/aws-sdk-go +// May have been modified by Beijing Volcanoengine Technology Ltd. + +import ( + "fmt" + "time" + + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil" + + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineerr" +) + +// WaiterResourceNotReadyErrorCode is the error code returned by a waiter when +// the waiter's max attempts have been exhausted. 
+const WaiterResourceNotReadyErrorCode = "ResourceNotReady" + +// A WaiterOption is a function that will update the Waiter value's fields to +// configure the waiter. +type WaiterOption func(*Waiter) + +// WithWaiterMaxAttempts returns the maximum number of times the waiter should +// attempt to check the resource for the target state. +func WithWaiterMaxAttempts(max int) WaiterOption { + return func(w *Waiter) { + w.MaxAttempts = max + } +} + +// WaiterDelay will return a delay the waiter should pause between attempts to +// check the resource state. The passed in attempt is the number of times the +// Waiter has checked the resource state. +// +// Attempt is the number of attempts the Waiter has made checking the resource +// state. +type WaiterDelay func(attempt int) time.Duration + +// ConstantWaiterDelay returns a WaiterDelay that will always return a constant +// delay the waiter should use between attempts. It ignores the number of +// attempts made. +func ConstantWaiterDelay(delay time.Duration) WaiterDelay { + return func(attempt int) time.Duration { + return delay + } +} + +// WithWaiterDelay will set the Waiter to use the WaiterDelay passed in. +func WithWaiterDelay(delayer WaiterDelay) WaiterOption { + return func(w *Waiter) { + w.Delay = delayer + } +} + +// WithWaiterLogger returns a waiter option to set the logger a waiter +// should use to log warnings and errors to. +func WithWaiterLogger(logger volcengine.Logger) WaiterOption { + return func(w *Waiter) { + w.Logger = logger + } +} + +// WithWaiterRequestOptions returns a waiter option setting the request +// options for each request the waiter makes. Appends to waiter's request +// options already set. +func WithWaiterRequestOptions(opts ...Option) WaiterOption { + return func(w *Waiter) { + w.RequestOptions = append(w.RequestOptions, opts...) + } +} + +// A Waiter provides the functionality to perform a blocking call which will +// wait for a resource state to be satisfied by a service. 
+// +// This type should not be used directly. The API operations provided in the +// service packages prefixed with "WaitUntil" should be used instead. +type Waiter struct { + Name string + Acceptors []WaiterAcceptor + Logger volcengine.Logger + + MaxAttempts int + Delay WaiterDelay + + RequestOptions []Option + NewRequest func([]Option) (*Request, error) + SleepWithContext func(volcengine.Context, time.Duration) error +} + +// ApplyOptions updates the waiter with the list of waiter options provided. +func (w *Waiter) ApplyOptions(opts ...WaiterOption) { + for _, fn := range opts { + fn(w) + } +} + +// WaiterState are states the waiter uses based on WaiterAcceptor definitions +// to identify if the resource state the waiter is waiting on has occurred. +type WaiterState int + +// String returns the string representation of the waiter state. +func (s WaiterState) String() string { + switch s { + case SuccessWaiterState: + return "success" + case FailureWaiterState: + return "failure" + case RetryWaiterState: + return "retry" + default: + return "unknown waiter state" + } +} + +// States the waiter acceptors will use to identify target resource states. +const ( + SuccessWaiterState WaiterState = iota // waiter successful + FailureWaiterState // waiter failed + RetryWaiterState // waiter needs to be retried +) + +// WaiterMatchMode is the mode that the waiter will use to match the WaiterAcceptor +// definition's Expected attribute. +type WaiterMatchMode int + +// Modes the waiter will use when inspecting API response to identify target +// resource states. +const ( + PathAllWaiterMatch WaiterMatchMode = iota // match on all paths + PathWaiterMatch // match on specific path + PathAnyWaiterMatch // match on any path + PathListWaiterMatch // match on list of paths + StatusWaiterMatch // match on status code + ErrorWaiterMatch // match on error +) + +// String returns the string representation of the waiter match mode. 
+func (m WaiterMatchMode) String() string { + switch m { + case PathAllWaiterMatch: + return "pathAll" + case PathWaiterMatch: + return "path" + case PathAnyWaiterMatch: + return "pathAny" + case PathListWaiterMatch: + return "pathList" + case StatusWaiterMatch: + return "status" + case ErrorWaiterMatch: + return "error" + default: + return "unknown waiter match mode" + } +} + +// WaitWithContext will make requests for the API operation using NewRequest to +// build API requests. The request's response will be compared against the +// Waiter's Acceptors to determine the successful state of the resource the +// waiter is inspecting. +// +// The passed in context must not be nil. If it is nil a panic will occur. The +// Context will be used to cancel the waiter's pending requests and retry delays. +// Use volcengine.BackgroundContext if no context is available. +// +// The waiter will continue until the target state defined by the Acceptors, +// or the max attempts expires. +// +// Will return the WaiterResourceNotReadyErrorCode error code if the waiter's +// retryer ShouldRetry returns false. This normally will happen when the max +// wait attempts expires. +func (w Waiter) WaitWithContext(ctx volcengine.Context) error { + + for attempt := 1; ; attempt++ { + req, err := w.NewRequest(w.RequestOptions) + if err != nil { + waiterLogf(w.Logger, "unable to create request %v", err) + return err + } + req.Handlers.Build.PushBack(MakeAddToUserAgentFreeFormHandler("Waiter")) + err = req.Send() + + // See if any of the acceptors match the request's response, or error + for _, a := range w.Acceptors { + if matched, matchErr := a.match(w.Name, w.Logger, req, err); matched { + return matchErr + } + } + + // The Waiter should only check the resource state MaxAttempts times + // This is here instead of in the for loop above to prevent delaying + // unnecessarily when the waiter will not retry.
+ if attempt == w.MaxAttempts { + break + } + + // Delay to wait before inspecting the resource again + delay := w.Delay(attempt) + if sleepFn := req.Config.SleepDelay; sleepFn != nil { + // Support SleepDelay for backwards compatibility and testing + sleepFn(delay) + } else { + sleepCtxFn := w.SleepWithContext + if sleepCtxFn == nil { + sleepCtxFn = volcengine.SleepWithContext + } + + if err := sleepCtxFn(ctx, delay); err != nil { + return volcengineerr.New(CanceledErrorCode, "waiter context canceled", err) + } + } + } + + return volcengineerr.New(WaiterResourceNotReadyErrorCode, "exceeded wait attempts", nil) +} + +// A WaiterAcceptor provides the information needed to wait for an API operation +// to complete. +type WaiterAcceptor struct { + State WaiterState + Matcher WaiterMatchMode + Argument string + Expected interface{} +} + +// match returns if the acceptor found a match with the passed in request +// or error. True is returned if the acceptor made a match, error is returned +// if there was an error attempting to perform the match. 
+func (a *WaiterAcceptor) match(name string, l volcengine.Logger, req *Request, err error) (bool, error) { + result := false + var vals []interface{} + + switch a.Matcher { + case PathAllWaiterMatch, PathWaiterMatch: + // Require all matches to be equal for result to match + vals, _ = volcengineutil.ValuesAtPath(req.Data, a.Argument) + if len(vals) == 0 { + break + } + result = true + for _, val := range vals { + if !volcengineutil.DeepEqual(val, a.Expected) { + result = false + break + } + } + case PathAnyWaiterMatch: + // Only a single match needs to equal for the result to match + vals, _ = volcengineutil.ValuesAtPath(req.Data, a.Argument) + for _, val := range vals { + if volcengineutil.DeepEqual(val, a.Expected) { + result = true + break + } + } + case PathListWaiterMatch: + // ignored matcher + case StatusWaiterMatch: + s := a.Expected.(int) + result = s == req.HTTPResponse.StatusCode + case ErrorWaiterMatch: + if aerr, ok := err.(volcengineerr.Error); ok { + result = aerr.Code() == a.Expected.(string) + } + default: + waiterLogf(l, "WARNING: Waiter %s encountered unexpected matcher: %s", + name, a.Matcher) + } + + if !result { + // If there was no matching result found there is nothing more to do + // for this response, retry the request. 
+ return false, nil + } + + switch a.State { + case SuccessWaiterState: + // waiter completed + return true, nil + case FailureWaiterState: + // Waiter failure state triggered + return true, volcengineerr.New(WaiterResourceNotReadyErrorCode, + "failed waiting for successful resource state", err) + case RetryWaiterState: + // clear the error and retry the operation + return false, nil + default: + waiterLogf(l, "WARNING: Waiter %s encountered unexpected state: %s", + name, a.State) + return false, nil + } +} + +func waiterLogf(logger volcengine.Logger, msg string, args ...interface{}) { + if logger != nil { + logger.Log(fmt.Sprintf(msg, args...)) + } +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/response/volcengine_response.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/response/volcengine_response.go new file mode 100644 index 000000000000..d65f8f6efd65 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/response/volcengine_response.go @@ -0,0 +1,44 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. 
+*/ + +package response + +type VolcengineResponse struct { + ResponseMetadata *ResponseMetadata + Result interface{} +} + +type ResponseMetadata struct { + RequestId string + Action string + Version string + Service string + Region string + HTTPCode int + Error *Error +} + +type Error struct { + CodeN int + Code string + Message string +} + +type VolcengineSimpleError struct { + HttpCode int `json:"HTTPCode"` + ErrorCode string `json:"errorcode"` + Message string `json:"message"` +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/session/credentials.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/session/credentials.go new file mode 100644 index 000000000000..58c6ef74cfc9 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/session/credentials.go @@ -0,0 +1,142 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package session + +// Copy from https://github.com/aws/aws-sdk-go +// May have been modified by Beijing Volcanoengine Technology Ltd. 
+ +import ( + "fmt" + + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/credentials" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/credentials/processcreds" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineerr" +) + +func resolveCredentials(cfg *volcengine.Config, + envCfg envConfig, sharedCfg sharedConfig, + handlers request.Handlers, + sessOpts Options, +) (*credentials.Credentials, error) { + + switch { + case len(sessOpts.Profile) != 0: + // User explicitly provided an Profile in the session's configuration + // so load that profile from shared config first. + // Github(volcengine/volcengine-go-sdk#2727) + return resolveCredsFromProfile(cfg, envCfg, sharedCfg, handlers, sessOpts) + + case envCfg.Creds.HasKeys(): + // Environment credentials + return credentials.NewStaticCredentialsFromCreds(envCfg.Creds), nil + + default: + // Fallback to the "default" credential resolution chain. + return resolveCredsFromProfile(cfg, envCfg, sharedCfg, handlers, sessOpts) + } +} + +func resolveCredsFromProfile(cfg *volcengine.Config, + envCfg envConfig, sharedCfg sharedConfig, + handlers request.Handlers, + sessOpts Options, +) (creds *credentials.Credentials, err error) { + + switch { + case sharedCfg.SourceProfile != nil: + // Assume IAM role with credentials source from a different profile. + creds, err = resolveCredsFromProfile(cfg, envCfg, + *sharedCfg.SourceProfile, handlers, sessOpts, + ) + + case sharedCfg.Creds.HasKeys(): + // Static Credentials from Shared Config/Credentials file. 
+ creds = credentials.NewStaticCredentialsFromCreds( + sharedCfg.Creds, + ) + + case len(sharedCfg.CredentialProcess) != 0: + // Get credentials from CredentialProcess + creds = processcreds.NewCredentials(sharedCfg.CredentialProcess) + + case len(sharedCfg.CredentialSource) != 0: + creds, err = resolveCredsFromSource(cfg, envCfg, + sharedCfg, handlers, sessOpts, + ) + + default: + // Fallback to default credentials provider, include mock errors for + // the credential chain so user can identify why credentials failed to + // be retrieved. + creds = credentials.NewCredentials(&credentials.ChainProvider{ + VerboseErrors: volcengine.BoolValue(cfg.CredentialsChainVerboseErrors), + Providers: []credentials.Provider{ + &credProviderError{ + Err: volcengineerr.New("EnvAccessKeyNotFound", + "failed to find credentials in the environment.", nil), + }, + &credProviderError{ + Err: volcengineerr.New("SharedCredsLoad", + fmt.Sprintf("failed to load profile, %s.", envCfg.Profile), nil), + }, + }, + }) + } + if err != nil { + return nil, err + } + + return creds, nil +} + +// valid credential source values +const ( + credSourceEc2Metadata = "Ec2InstanceMetadata" + credSourceEnvironment = "Environment" + credSourceECSContainer = "EcsContainer" +) + +func resolveCredsFromSource(cfg *volcengine.Config, + envCfg envConfig, sharedCfg sharedConfig, + handlers request.Handlers, + sessOpts Options, +) (creds *credentials.Credentials, err error) { + + switch sharedCfg.CredentialSource { + + case credSourceEnvironment: + creds = credentials.NewStaticCredentialsFromCreds(envCfg.Creds) + + default: + return nil, ErrSharedConfigInvalidCredSource + } + + return creds, nil +} + +type credProviderError struct { + Err error +} + +func (c credProviderError) Retrieve() (credentials.Value, error) { + return credentials.Value{}, c.Err +} +func (c credProviderError) IsExpired() bool { + return true +} diff --git 
a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/session/env_config.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/session/env_config.go new file mode 100644 index 000000000000..86b7ad526855 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/session/env_config.go @@ -0,0 +1,296 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package session + +// Copy from https://github.com/aws/aws-sdk-go +// May have been modified by Beijing Volcanoengine Technology Ltd. + +import ( + "os" + "strconv" + + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/credentials" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/defaults" +) + +// EnvProviderName provides a name of the provider when config is loaded from environment. +const EnvProviderName = "EnvConfigCredentials" + +// envConfig is a collection of environment values the SDK will read +// setup config from. All environment values are optional. But some values +// such as credentials require multiple values to be complete or the values +// will be ignored. +type envConfig struct { + // Environment configuration values. If set both Access Key ID and Secret Access + // Key must be provided. 
A Session Token may optionally also be provided, but is + // not required. + // + // # Access Key ID + // VOLCSTACK_ACCESS_KEY_ID=AKID + // VOLCSTACK_ACCESS_KEY=AKID # only read if VOLCSTACK_ACCESS_KEY_ID is not set. + // + // # Secret Access Key + // VOLCSTACK_SECRET_ACCESS_KEY=SECRET + // VOLCSTACK_SECRET_KEY=SECRET # only read if VOLCSTACK_SECRET_ACCESS_KEY is not set. + // + // # Session Token + // VOLCSTACK_SESSION_TOKEN=TOKEN + Creds credentials.Value + + // Region value will instruct the SDK where to make service API requests to. If it is + // not provided in the environment the region must be provided before a service + // client request is made. + // + // VOLCSTACK_REGION=us-east-1 + // + // # VOLCSTACK_DEFAULT_REGION is only read if VOLCSTACK_SDK_LOAD_CONFIG is also set, + // # and VOLCSTACK_REGION is not also set. + // VOLCSTACK_DEFAULT_REGION=us-east-1 + Region string + + // Profile name the SDK should use when loading shared configuration from the + // shared configuration files. If not provided "default" will be used as the + // profile name. + // + // VOLCSTACK_PROFILE=my_profile + // + // # VOLCSTACK_DEFAULT_PROFILE is only read if VOLCSTACK_SDK_LOAD_CONFIG is also set, + // # and VOLCSTACK_PROFILE is not also set. + // VOLCSTACK_DEFAULT_PROFILE=my_profile + Profile string + + // SDK load config instructs the SDK to load the shared config in addition to + // shared credentials. This also expands the configuration loaded from the shared + // credentials to have parity with the shared config file. This also enables + // Region and Profile support for the VOLCSTACK_DEFAULT_REGION and VOLCSTACK_DEFAULT_PROFILE + // env values as well. + // + // VOLCSTACK_SDK_LOAD_CONFIG=1 + EnableSharedConfig bool + + // Shared credentials file path can be set to instruct the SDK to use an alternate + // file for the shared credentials. 
If not set the file will be loaded from + // $HOME/.volcengine/credentials on Linux/Unix based systems, and + // %USERPROFILE%\.volcengine\credentials on Windows. + // + // VOLCSTACK_SHARED_CREDENTIALS_FILE=$HOME/my_shared_credentials + SharedCredentialsFile string + + // Shared config file path can be set to instruct the SDK to use an alternate + // file for the shared config. If not set the file will be loaded from + // $HOME/.volcengine/config on Linux/Unix based systems, and + // %USERPROFILE%\.volcengine\config on Windows. + // + // VOLCSTACK_CONFIG_FILE=$HOME/my_shared_config + SharedConfigFile string + + // Sets the path to a custom Certificate Authority (CA) Bundle PEM file + // that the SDK will use instead of the system's root CA bundle. + // Only use this if you want to configure the SDK to use a custom set + // of CAs. + // + // Enabling this option will attempt to merge the Transport + // into the SDK's HTTP client. If the client's Transport is + // not a http.Transport an error will be returned. If the + // Transport's TLS config is set this option will cause the + // SDK to overwrite the Transport's TLS config's RootCAs value. + // + // Setting a custom HTTPClient in the volcengine.Config options will override this setting. + // To use this option and a custom HTTP client, the HTTP client needs to be provided + // when creating the session, not the service client. + // + // VOLCSTACK_CA_BUNDLE=$HOME/my_custom_ca_bundle + CustomCABundle string + + csmEnabled string + CSMEnabled *bool + CSMPort string + CSMHost string + CSMClientID string + + // Enables endpoint discovery via environment variables. + // + // VOLCSTACK_ENABLE_ENDPOINT_DISCOVERY=true + EnableEndpointDiscovery *bool + enableEndpointDiscovery string + + // Specifies the WebIdentity token the SDK should use to assume a role + // with. + // + // VOLCSTACK_WEB_IDENTITY_TOKEN_FILE=file_path + WebIdentityTokenFilePath string + + // Specifies the IAM role ARN to use when assuming a role. 
+ // + // VOLCSTACK_ROLE_ARN=role_arn + RoleARN string + + // Specifies the IAM role session name to use when assuming a role. + // + // VOLCSTACK_ROLE_SESSION_NAME=session_name + RoleSessionName string +} + +var ( + csmEnabledEnvKey = []string{ + "VOLCSTACK_CSM_ENABLED", + } + csmHostEnvKey = []string{ + "VOLCSTACK_CSM_HOST", + } + csmPortEnvKey = []string{ + "VOLCSTACK_CSM_PORT", + } + csmClientIDEnvKey = []string{ + "VOLCSTACK_CSM_CLIENT_ID", + } + credAccessEnvKey = []string{ + "VOLCSTACK_ACCESS_KEY_ID", + "VOLCSTACK_ACCESS_KEY", + } + credSecretEnvKey = []string{ + "VOLCSTACK_SECRET_ACCESS_KEY", + "VOLCSTACK_SECRET_KEY", + } + credSessionEnvKey = []string{ + "VOLCSTACK_SESSION_TOKEN", + } + + enableEndpointDiscoveryEnvKey = []string{ + "VOLCSTACK_ENABLE_ENDPOINT_DISCOVERY", + } + + regionEnvKeys = []string{ + "VOLCSTACK_REGION", + "VOLCSTACK_DEFAULT_REGION", // Only read if VOLCSTACK_SDK_LOAD_CONFIG is also set + } + profileEnvKeys = []string{ + "VOLCSTACK_PROFILE", + "VOLCSTACK_DEFAULT_PROFILE", // Only read if VOLCSTACK_SDK_LOAD_CONFIG is also set + } + sharedCredsFileEnvKey = []string{ + "VOLCSTACK_SHARED_CREDENTIALS_FILE", + } + sharedConfigFileEnvKey = []string{ + "VOLCSTACK_CONFIG_FILE", + } + webIdentityTokenFilePathEnvKey = []string{ + "VOLCSTACK_WEB_IDENTITY_TOKEN_FILE", + } + roleARNEnvKey = []string{ + "VOLCSTACK_ROLE_ARN", + } + roleSessionNameEnvKey = []string{ + "VOLCSTACK_ROLE_SESSION_NAME", + } +) + +// loadEnvConfig retrieves the SDK's environment configuration. +// See `envConfig` for the values that will be retrieved. +// +// If the environment variable `VOLCSTACK_SDK_LOAD_CONFIG` is set to a truthy value +// the shared SDK config will be loaded in addition to the SDK's specific +// configuration values. 
+func loadEnvConfig() envConfig { + enableSharedConfig, _ := strconv.ParseBool(os.Getenv("VOLCSTACK_SDK_LOAD_CONFIG")) + return envConfigLoad(enableSharedConfig) +} + +// loadSharedEnvConfig retrieves the SDK's environment configuration, and the +// SDK shared config. See `envConfig` for the values that will be retrieved. +// +// Loads the shared configuration in addition to the SDK's specific configuration. +// This will load the same values as `loadEnvConfig` if the `VOLCSTACK_SDK_LOAD_CONFIG` +// environment variable is set. +func loadSharedEnvConfig() envConfig { + return envConfigLoad(true) +} + +func envConfigLoad(enableSharedConfig bool) envConfig { + cfg := envConfig{} + + cfg.EnableSharedConfig = enableSharedConfig + + // Static environment credentials + var creds credentials.Value + setFromEnvVal(&creds.AccessKeyID, credAccessEnvKey) + setFromEnvVal(&creds.SecretAccessKey, credSecretEnvKey) + setFromEnvVal(&creds.SessionToken, credSessionEnvKey) + if creds.HasKeys() { + // Require logical grouping of credentials + creds.ProviderName = EnvProviderName + cfg.Creds = creds + } + + // Role Metadata + setFromEnvVal(&cfg.RoleARN, roleARNEnvKey) + setFromEnvVal(&cfg.RoleSessionName, roleSessionNameEnvKey) + + // Web identity environment variables + setFromEnvVal(&cfg.WebIdentityTokenFilePath, webIdentityTokenFilePathEnvKey) + + // CSM environment variables + setFromEnvVal(&cfg.csmEnabled, csmEnabledEnvKey) + setFromEnvVal(&cfg.CSMHost, csmHostEnvKey) + setFromEnvVal(&cfg.CSMPort, csmPortEnvKey) + setFromEnvVal(&cfg.CSMClientID, csmClientIDEnvKey) + + if len(cfg.csmEnabled) != 0 { + v, _ := strconv.ParseBool(cfg.csmEnabled) + cfg.CSMEnabled = &v + } + + regionKeys := regionEnvKeys + profileKeys := profileEnvKeys + if !cfg.EnableSharedConfig { + regionKeys = regionKeys[:1] + profileKeys = profileKeys[:1] + } + + setFromEnvVal(&cfg.Region, regionKeys) + setFromEnvVal(&cfg.Profile, profileKeys) + + // endpoint discovery is in reference to it being enabled. 
+ setFromEnvVal(&cfg.enableEndpointDiscovery, enableEndpointDiscoveryEnvKey) + if len(cfg.enableEndpointDiscovery) > 0 { + cfg.EnableEndpointDiscovery = volcengine.Bool(cfg.enableEndpointDiscovery != "false") + } + + setFromEnvVal(&cfg.SharedCredentialsFile, sharedCredsFileEnvKey) + setFromEnvVal(&cfg.SharedConfigFile, sharedConfigFileEnvKey) + + if len(cfg.SharedCredentialsFile) == 0 { + cfg.SharedCredentialsFile = defaults.SharedCredentialsFilename() + } + if len(cfg.SharedConfigFile) == 0 { + cfg.SharedConfigFile = defaults.SharedConfigFilename() + } + + cfg.CustomCABundle = os.Getenv("VOLCSTACK_CA_BUNDLE") + + return cfg +} + +func setFromEnvVal(dst *string, keys []string) { + for _, k := range keys { + if v := os.Getenv(k); len(v) > 0 { + *dst = v + break + } + } +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/session/session.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/session/session.go new file mode 100644 index 000000000000..2f592874a48e --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/session/session.go @@ -0,0 +1,612 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package session + +// Copy from https://github.com/aws/aws-sdk-go +// May have been modified by Beijing Volcanoengine Technology Ltd. 
+ +import ( + "crypto/tls" + "crypto/x509" + "io" + "io/ioutil" + "net" + "net/http" + "os" + "time" + + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/client" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/corehandlers" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/credentials" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/defaults" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/endpoints" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineerr" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil" +) + +const ( + // ErrCodeSharedConfig represents an error that occurs in the shared + // configuration logic + ErrCodeSharedConfig = "SharedConfigErr" +) + +// ErrSharedConfigSourceCollision will be returned if a section contains both +// source_profile and credential_source +var ErrSharedConfigSourceCollision = volcengineerr.New(ErrCodeSharedConfig, "only source profile or credential source can be specified, not both", nil) + +// ErrSharedConfigECSContainerEnvVarEmpty will be returned if the environment +// variables are empty and Environment was set as the credential source +var ErrSharedConfigECSContainerEnvVarEmpty = volcengineerr.New(ErrCodeSharedConfig, "EcsContainer was specified as the credential_source, but 'volcengine_CONTAINER_CREDENTIALS_RELATIVE_URI' was not set", nil) + +// ErrSharedConfigInvalidCredSource will be returned if an invalid credential source was provided +var ErrSharedConfigInvalidCredSource = 
volcengineerr.New(ErrCodeSharedConfig, "credential source values must be EcsContainer, Ec2InstanceMetadata, or Environment", nil) + +// A Session provides a central location to create service clients from and +// store configurations and request handlers for those services. +// +// It is safe to create service clients from a Session concurrently, but it is +// not safe to mutate the Session concurrently. +// +// The Session satisfies the service client's client.ConfigProvider. +type Session struct { + Config *volcengine.Config + Handlers request.Handlers +} + +// New creates a new instance of the handlers merging in the provided configs +// on top of the SDK's default configurations. Once the Session is created it +// can be mutated to modify the Config or Handlers. The Session is safe to be +// read concurrently, but it should not be written to concurrently. +// +// If the volcengine_SDK_LOAD_CONFIG environment variable is set to a truthy value, the New +// method may encounter an error when loading the configuration. When +// the environment variable is set and an error occurs, New will return a +// session that will fail all requests reporting the error that occurred while +// loading the session. Use NewSession to get the error when creating the +// session. +// +// If the volcengine_SDK_LOAD_CONFIG environment variable is set to a truthy value +// the shared config file (~/.volcengine/config) will also be loaded, in addition to +// the shared credentials file (~/.volcengine/credentials). Values set in both the +// shared config, and shared credentials will be taken from the shared +// credentials file. +// +// Deprecated: Use NewSession functions to create sessions instead. NewSession +// has the same functionality as New except an error can be returned when the +// func is called instead of waiting to receive an error until a request is made. 
+func New(cfgs ...*volcengine.Config) *Session { + // load initial config from environment + envCfg := loadEnvConfig() + + if envCfg.EnableSharedConfig { + var cfg volcengine.Config + cfg.MergeIn(cfgs...) + s, err := NewSessionWithOptions(Options{ + Config: cfg, + SharedConfigState: SharedConfigEnable, + }) + if err != nil { + // Old session.New expected all errors to be discovered when + // a request is made, and would report the errors then. This + // needs to be replicated if an error occurs while creating + // the session. + msg := "failed to create session with volcengine_SDK_LOAD_CONFIG enabled. " + + "Use session.NewSession to handle errors occurring during session creation." + + // Session creation failed, need to report the error and prevent + // any requests from succeeding. + s = &Session{Config: defaults.Config()} + s.Config.MergeIn(cfgs...) + s.Config.Logger.Log("ERROR:", msg, "Error:", err) + s.Handlers.Validate.PushBack(func(r *request.Request) { + r.Error = err + }) + } + + return s + } + + s := deprecatedNewSession(cfgs...) + + return s +} + +// NewSession returns a new Session created from SDK defaults, config files, +// environment, and user provided config files. Once the Session is created +// it can be mutated to modify the Config or Handlers. The Session is safe to +// be read concurrently, but it should not be written to concurrently. +// +// If the volcengine_SDK_LOAD_CONFIG environment variable is set to a truthy value +// the shared config file (~/.volcengine/config) will also be loaded in addition to +// the shared credentials file (~/.volcengine/credentials). Values set in both the +// shared config, and shared credentials will be taken from the shared +// credentials file. Enabling the Shared Config will also allow the Session +// to be built with retrieving credentials with AssumeRole set in the config. 
+// +// See the NewSessionWithOptions func for information on how to override or +// control through code how the Session will be created, such as specifying the +// config profile, and controlling if shared config is enabled or not. +func NewSession(cfgs ...*volcengine.Config) (*Session, error) { + opts := Options{} + opts.Config.MergeIn(cfgs...) + + if opts.Config.Endpoint == nil { + opts.Config.Endpoint = volcengine.String(volcengineutil.NewEndpoint().GetEndpoint()) + } + + //merge config region and endpoint info to sts + if opts.Config.Credentials == nil { + return NewSessionWithOptions(opts) + } else if sts, ok := opts.Config.Credentials.GetProvider().(*credentials.StsProvider); ok { + if sts.Region == "" && opts.Config.Region != nil { + sts.Region = *(opts.Config.Region) + } + if sts.Host == "" && opts.Config.Endpoint != nil { + sts.Host = *(opts.Config.Endpoint) + } + } + return NewSessionWithOptions(opts) +} + +// SharedConfigState provides the ability to optionally override the state +// of the session's creation based on the shared config being enabled or +// disabled. +type SharedConfigState int + +const ( + // SharedConfigStateFromEnv does not override any state of the + // volcengine_SDK_LOAD_CONFIG env var. It is the default value of the + // SharedConfigState type. + SharedConfigStateFromEnv SharedConfigState = iota + + // SharedConfigDisable overrides the volcengine_SDK_LOAD_CONFIG env var value + // and disables the shared config functionality. + SharedConfigDisable + + // SharedConfigEnable overrides the volcengine_SDK_LOAD_CONFIG env var value + // and enables the shared config functionality. + SharedConfigEnable +) + +// Options provides the means to control how a Session is created and what +// configuration values will be loaded. +type Options struct { + // Provides config values for the SDK to use when creating service clients + // and making API requests to services. 
Any value set in this field + // will override the associated value provided by the SDK defaults, + // environment or config files where relevant. + // + // If not set, configuration values from SDK defaults, environment, and + // config files will be used. + Config volcengine.Config + + // Overrides the config profile the Session should be created from. If not + // set the value of the environment variable will be loaded (volcengine_PROFILE, + // or volcengine_DEFAULT_PROFILE if the Shared Config is enabled). + // + // If not set and environment variables are not set the "default" + // (DefaultSharedConfigProfile) will be used as the profile to load the + // session config from. + Profile string + + // Instructs how the Session will be created based on the volcengine_SDK_LOAD_CONFIG + // environment variable. By default a Session will be created using the + // value provided by the volcengine_SDK_LOAD_CONFIG environment variable. + // + // Setting this value to SharedConfigEnable or SharedConfigDisable + // will allow you to override the volcengine_SDK_LOAD_CONFIG environment variable + // and enable or disable the shared config functionality. + SharedConfigState SharedConfigState + + // Ordered list of files the session will load configuration from. + // It will override the environment variables volcengine_SHARED_CREDENTIALS_FILE and volcengine_CONFIG_FILE. + SharedConfigFiles []string + + // When the SDK's shared config is configured to assume a role with MFA + // this option is required in order to provide the mechanism that will + // retrieve the MFA token. There is no default value for this field. If + // it is not set an error will be returned when creating the session. + // + // This token provider will be called whenever the assumed role's + // credentials need to be refreshed. Within the context of service clients + // all sharing the same session the SDK will ensure calls to the token + // provider are atomic. 
When sharing a token provider across multiple + // sessions additional synchronization logic is needed to ensure the + // token providers do not introduce race conditions. It is recommended to + // share the session where possible. + // + // stscreds.StdinTokenProvider is a basic implementation that will prompt + // from stdin for the MFA token code. + // + // This field is only used if the shared configuration is enabled, and + // the config enables assume role with MFA via the mfa_serial field. + AssumeRoleTokenProvider func() (string, error) + + // When the SDK's shared config is configured to assume a role this option + // may be provided to set the expiry duration of the STS credentials. + // Defaults to 15 minutes if not set as documented in the + // stscreds.AssumeRoleProvider. + AssumeRoleDuration time.Duration + + // Reader for a custom Certificate Authority (CA) bundle in PEM format that + // the SDK will use instead of the default system's root CA bundle. Use this + // only if you want to replace the CA bundle the SDK uses for TLS requests. + // + // Enabling this option will attempt to merge the Transport into the SDK's HTTP + // client. If the client's Transport is not a http.Transport an error will be + // returned. If the Transport's TLS config is set this option will cause the SDK + // to overwrite the Transport's TLS config's RootCAs value. If the CA + // bundle reader contains multiple certificates all of them will be loaded. + // + // The Session option CustomCABundle is also available when creating sessions + // to also enable this feature. The CustomCABundle session option field has priority + // over the volcengine_CA_BUNDLE environment variable, and will be used if both are set. + CustomCABundle io.Reader + + // The handlers that the session and all API clients will be created with. + // This must be a complete set of handlers. Use the defaults.Handlers() + // function to initialize this value before changing the handlers to be + // used by the SDK. 
+ Handlers request.Handlers +} + +// NewSessionWithOptions returns a new Session created from SDK defaults, config files, +// environment, and user provided config files. This func uses the Options +// values to configure how the Session is created. +// +// If the volcengine_SDK_LOAD_CONFIG environment variable is set to a truthy value +// the shared config file (~/.volcengine/config) will also be loaded in addition to +// the shared credentials file (~/.volcengine/credentials). Values set in both the +// shared config, and shared credentials will be taken from the shared +// credentials file. Enabling the Shared Config will also allow the Session +// to be built with retrieving credentials with AssumeRole set in the config. +// +// // Equivalent to session.New +// sess := session.Must(session.NewSessionWithOptions(session.Options{})) +// +// // Specify profile to load for the session's config +// sess := session.Must(session.NewSessionWithOptions(session.Options{ +// Profile: "profile_name", +// })) +// +// // Specify profile for config and region for requests +// sess := session.Must(session.NewSessionWithOptions(session.Options{ +// Config: volcengine.Config{Region: volcengine.String("us-east-1")}, +// Profile: "profile_name", +// })) +// +// // Force enable Shared Config support +// sess := session.Must(session.NewSessionWithOptions(session.Options{ +// SharedConfigState: session.SharedConfigEnable, +// })) +func NewSessionWithOptions(opts Options) (*Session, error) { + var envCfg envConfig + if opts.SharedConfigState == SharedConfigEnable { + envCfg = loadSharedEnvConfig() + } else { + envCfg = loadEnvConfig() + } + + if len(opts.Profile) != 0 { + envCfg.Profile = opts.Profile + } + + switch opts.SharedConfigState { + case SharedConfigDisable: + envCfg.EnableSharedConfig = false + case SharedConfigEnable: + envCfg.EnableSharedConfig = true + } + + // Only use volcengine_CA_BUNDLE if session option is not provided. 
+ if len(envCfg.CustomCABundle) != 0 && opts.CustomCABundle == nil { + f, err := os.Open(envCfg.CustomCABundle) + if err != nil { + return nil, volcengineerr.New("LoadCustomCABundleError", + "failed to open custom CA bundle PEM file", err) + } + defer f.Close() + opts.CustomCABundle = f + } + + return newSession(opts, envCfg, &opts.Config) +} + +// Must is a helper function to ensure the Session is valid and there was no +// error when calling a NewSession function. +// +// This helper is intended to be used in variable initialization to load the +// Session and configuration at startup. Such as: +// +// var sess = session.Must(session.NewSession()) +func Must(sess *Session, err error) *Session { + if err != nil { + panic(err) + } + + return sess +} + +func deprecatedNewSession(cfgs ...*volcengine.Config) *Session { + cfg := defaults.Config() + handlers := defaults.Handlers() + + // Apply the passed in configs so the configuration can be applied to the + // default credential chain + cfg.MergeIn(cfgs...) + cfg.Credentials = defaults.CredChain(cfg, handlers) + + // Reapply any passed in configs to override credentials if set + cfg.MergeIn(cfgs...) + + s := &Session{ + Config: cfg, + Handlers: handlers, + } + + initHandlers(s) + return s +} + +func newSession(opts Options, envCfg envConfig, cfgs ...*volcengine.Config) (*Session, error) { + cfg := defaults.Config() + + handlers := opts.Handlers + if handlers.IsEmpty() { + handlers = defaults.Handlers() + } + + // Get a merged version of the user provided config to determine if + // credentials were. + userCfg := &volcengine.Config{} + userCfg.MergeIn(cfgs...) + cfg.MergeIn(userCfg) + + // Ordered config files will be loaded in with later files overwriting + // previous config file values. 
+ var cfgFiles []string + if opts.SharedConfigFiles != nil { + cfgFiles = opts.SharedConfigFiles + } else { + cfgFiles = []string{envCfg.SharedConfigFile, envCfg.SharedCredentialsFile} + if !envCfg.EnableSharedConfig { + // The shared config file (~/.volcengine/config) is only loaded if instructed + // to load via the envConfig.EnableSharedConfig (volcengine_SDK_LOAD_CONFIG). + cfgFiles = cfgFiles[1:] + } + } + + // Load additional config from file(s) + sharedCfg, err := loadSharedConfig(envCfg.Profile, cfgFiles, envCfg.EnableSharedConfig) + if err != nil { + if len(envCfg.Profile) == 0 && !envCfg.EnableSharedConfig && (envCfg.Creds.HasKeys() || userCfg.Credentials != nil) { + // Special case where the user has not explicitly specified a volcengine_PROFILE, + // or session.Options.profile, shared config is not enabled, and the + // environment has credentials, allow the shared config file to fail to + // load since the user has already provided credentials, and nothing else + // is required to be read from file. 
Github(volcengine/volcengine-go-sdk#2455) + } else if _, ok := err.(SharedConfigProfileNotExistsError); !ok { + return nil, err + } + } + + if err := mergeConfigSrcs(cfg, userCfg, envCfg, sharedCfg, handlers, opts); err != nil { + return nil, err + } + + s := &Session{ + Config: cfg, + Handlers: handlers, + } + + initHandlers(s) + + // Setup HTTP client with custom cert bundle if enabled + if opts.CustomCABundle != nil { + if err := loadCustomCABundle(s, opts.CustomCABundle); err != nil { + return nil, err + } + } + + return s, nil +} + +func getCABundleTransport() *http.Transport { + return &http.Transport{ + Proxy: http.ProxyFromEnvironment, + DialContext: (&net.Dialer{ + Timeout: 30 * time.Second, + KeepAlive: 30 * time.Second, + DualStack: true, + }).DialContext, + MaxIdleConns: 100, + IdleConnTimeout: 90 * time.Second, + TLSHandshakeTimeout: 10 * time.Second, + ExpectContinueTimeout: 1 * time.Second, + } +} + +func loadCustomCABundle(s *Session, bundle io.Reader) error { + var t *http.Transport + switch v := s.Config.HTTPClient.Transport.(type) { + case *http.Transport: + t = v + default: + if s.Config.HTTPClient.Transport != nil { + return volcengineerr.New("LoadCustomCABundleError", + "unable to load custom CA bundle, HTTPClient's transport unsupported type", nil) + } + } + if t == nil { + // Nil transport implies `http.DefaultTransport` should be used. Since + // the SDK cannot modify, nor copy the `DefaultTransport`, specifying + // the same values is the next closest behavior. 
+ t = getCABundleTransport() + } + + p, err := loadCertPool(bundle) + if err != nil { + return err + } + if t.TLSClientConfig == nil { + t.TLSClientConfig = &tls.Config{} + } + t.TLSClientConfig.RootCAs = p + + s.Config.HTTPClient.Transport = t + + return nil +} + +func loadCertPool(r io.Reader) (*x509.CertPool, error) { + b, err := ioutil.ReadAll(r) + if err != nil { + return nil, volcengineerr.New("LoadCustomCABundleError", + "failed to read custom CA bundle PEM file", err) + } + + p := x509.NewCertPool() + if !p.AppendCertsFromPEM(b) { + return nil, volcengineerr.New("LoadCustomCABundleError", + "failed to load custom CA bundle PEM file", err) + } + + return p, nil +} + +func mergeConfigSrcs(cfg, userCfg *volcengine.Config, + envCfg envConfig, sharedCfg sharedConfig, + handlers request.Handlers, + sessOpts Options, +) error { + + // Region if not already set by user + if len(volcengine.StringValue(cfg.Region)) == 0 { + if len(envCfg.Region) > 0 { + cfg.WithRegion(envCfg.Region) + } else if envCfg.EnableSharedConfig && len(sharedCfg.Region) > 0 { + cfg.WithRegion(sharedCfg.Region) + } + } + + //if cfg.EnableEndpointDiscovery == nil { + // if envCfg.EnableEndpointDiscovery != nil { + // cfg.WithEndpointDiscovery(*envCfg.EnableEndpointDiscovery) + // } else if envCfg.EnableSharedConfig && sharedCfg.EnableEndpointDiscovery != nil { + // cfg.WithEndpointDiscovery(*sharedCfg.EnableEndpointDiscovery) + // } + //} + + // Configure credentials if not already set by the user when creating the + // Session. + if cfg.Credentials == credentials.AnonymousCredentials && userCfg.Credentials == nil { + creds, err := resolveCredentials(cfg, envCfg, sharedCfg, handlers, sessOpts) + if err != nil { + return err + } + cfg.Credentials = creds + } + + return nil +} + +func initHandlers(s *Session) { + // Add the Validate parameter handler if it is not disabled. 
+ s.Handlers.Validate.Remove(corehandlers.ValidateParametersHandler) + // Temporarily disabled by xuyaming@bytedance.com because some validated fields are relational. + //if !volcengine.BoolValue(s.Config.DisableParamValidation) { + // s.Handlers.Validate.PushBackNamed(corehandlers.ValidateParametersHandler) + //} +} + +// Copy creates and returns a copy of the current Session, copying the config +// and handlers. If any additional configs are provided they will be merged +// on top of the Session's copied config. +// +// // Create a copy of the current Session, configured for the us-west-2 region. +// sess.Copy(&volcengine.Config{Region: volcengine.String("us-west-2")}) +func (s *Session) Copy(cfgs ...*volcengine.Config) *Session { + newSession := &Session{ + Config: s.Config.Copy(cfgs...), + Handlers: s.Handlers.Copy(), + } + + initHandlers(newSession) + + return newSession +} + +// ClientConfig satisfies the client.ConfigProvider interface and is used to +// configure the service client instances. Passing the Session to the service +// client's constructor (New) will use this method to configure the client. +func (s *Session) ClientConfig(serviceName string, cfgs ...*volcengine.Config) client.Config { + // Backwards compatibility, the error will be eaten if the user calls ClientConfig + // directly. All SDK services will use clientConfigWithErr. + cfg, _ := s.clientConfigWithErr(serviceName, cfgs...) + + return cfg +} + +func (s *Session) clientConfigWithErr(serviceName string, cfgs ...*volcengine.Config) (client.Config, error) { + s = s.Copy(cfgs...) 
+ + var resolved endpoints.ResolvedEndpoint + var err error + + region := volcengine.StringValue(s.Config.Region) + + if endpoint := volcengine.StringValue(s.Config.Endpoint); len(endpoint) != 0 { + resolved.URL = endpoints.AddScheme(endpoint, volcengine.BoolValue(s.Config.DisableSSL)) + resolved.SigningRegion = region + } + + return client.Config{ + Config: s.Config, + Handlers: s.Handlers, + Endpoint: resolved.URL, + SigningRegion: resolved.SigningRegion, + SigningNameDerived: resolved.SigningNameDerived, + SigningName: resolved.SigningName, + }, err +} + +// ClientConfigNoResolveEndpoint is the same as ClientConfig with the exception +// that the EndpointResolver will not be used to resolve the endpoint. The only +// endpoint set must come from the volcengine.Config.Endpoint field. +func (s *Session) ClientConfigNoResolveEndpoint(cfgs ...*volcengine.Config) client.Config { + s = s.Copy(cfgs...) + + var resolved endpoints.ResolvedEndpoint + + region := volcengine.StringValue(s.Config.Region) + + if ep := volcengine.StringValue(s.Config.Endpoint); len(ep) > 0 { + resolved.URL = endpoints.AddScheme(ep, volcengine.BoolValue(s.Config.DisableSSL)) + resolved.SigningRegion = region + } + + return client.Config{ + Config: s.Config, + Handlers: s.Handlers, + Endpoint: resolved.URL, + SigningRegion: resolved.SigningRegion, + SigningNameDerived: resolved.SigningNameDerived, + SigningName: resolved.SigningName, + } +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/session/shared_config.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/session/shared_config.go new file mode 100644 index 000000000000..670de38bb21d --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/session/shared_config.go @@ -0,0 +1,498 @@ +/* +Copyright 2023 The Kubernetes Authors. 
+ +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package session + +// Copy from https://github.com/aws/aws-sdk-go +// May have been modified by Beijing Volcanoengine Technology Ltd. + +import ( + "fmt" + + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/internal/ini" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/credentials" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineerr" +) + +const ( + // Static Credentials group + accessKeyIDKey = `volcengine_access_key_id` // group required + secretAccessKey = `volcengine_secret_access_key` // group required + sessionTokenKey = `volcengine_session_token` // optional + + // Assume Role Credentials group + roleArnKey = `role_trn` // group required + sourceProfileKey = `source_profile` // group required (or credential_source) + credentialSourceKey = `credential_source` // group required (or source_profile) + externalIDKey = `external_id` // optional + mfaSerialKey = `mfa_serial` // optional + roleSessionNameKey = `role_session_name` // optional + + // Additional Config fields + regionKey = `region` + + // endpoint discovery group + enableEndpointDiscoveryKey = `endpoint_discovery_enabled` // optional + + // External Credential Process + credentialProcessKey = `credential_process` // optional + + // Web Identity Token File + webIdentityTokenFileKey = `web_identity_token_file` // optional + + // 
DefaultSharedConfigProfile is the default profile to be used when + // loading configuration from the config files if another profile name + // is not provided. + DefaultSharedConfigProfile = `default` +) + +// sharedConfig represents the configuration fields of the SDK config files. +type sharedConfig struct { + // Credentials values from the config file. Both volcengine_access_key_id and + // volcengine_secret_access_key must be provided together in the same file to be + // considered valid. The values will be ignored if not a complete group. + // volcengine_session_token is an optional field that can be provided if both of + // the other two fields are also provided. + // + // volcengine_access_key_id + // volcengine_secret_access_key + // volcengine_session_token + Creds credentials.Value + + CredentialSource string + CredentialProcess string + WebIdentityTokenFile string + + RoleARN string + RoleSessionName string + ExternalID string + MFASerial string + + SourceProfileName string + SourceProfile *sharedConfig + + // Region is the region the SDK should use for looking up VOLCSTACK service + // endpoints and signing requests. + // + // region + Region string + + // EnableEndpointDiscovery can be enabled in the shared config by setting + // endpoint_discovery_enabled to true + // + // endpoint_discovery_enabled = true + EnableEndpointDiscovery *bool + + // CSM Options + CSMEnabled *bool + CSMHost string + CSMPort string + CSMClientID string +} + +type sharedConfigFile struct { + Filename string + IniData ini.Sections +} + +// loadSharedConfig retrieves the configuration from the list of files using +// the profile provided. The order the files are listed will determine +// precedence. Values in subsequent files will overwrite values defined in +// earlier files. +// +// For example, given two files A and B. Both define credentials. If the order +// of the files are A then B, B's credential values will be used instead of +// A's. 
+// +// See sharedConfig.setFromFile for information how the config files +// will be loaded. +func loadSharedConfig(profile string, filenames []string, exOpts bool) (sharedConfig, error) { + if len(profile) == 0 { + profile = DefaultSharedConfigProfile + } + + files, err := loadSharedConfigIniFiles(filenames) + if err != nil { + return sharedConfig{}, err + } + + cfg := sharedConfig{} + profiles := map[string]struct{}{} + if err = cfg.setFromIniFiles(profiles, profile, files, exOpts); err != nil { + return sharedConfig{}, err + } + + return cfg, nil +} + +func loadSharedConfigIniFiles(filenames []string) ([]sharedConfigFile, error) { + files := make([]sharedConfigFile, 0, len(filenames)) + + for _, filename := range filenames { + sections, err := ini.OpenFile(filename) + if aerr, ok := err.(volcengineerr.Error); ok && aerr.Code() == ini.ErrCodeUnableToReadFile { + // Skip files which can't be opened and read for whatever reason + continue + } else if err != nil { + return nil, SharedConfigLoadError{Filename: filename, Err: err} + } + + files = append(files, sharedConfigFile{ + Filename: filename, IniData: sections, + }) + } + + return files, nil +} + +func (cfg *sharedConfig) setFromIniFiles(profiles map[string]struct{}, profile string, files []sharedConfigFile, exOpts bool) error { + // Trim files from the list that don't exist. + var skippedFiles int + var profileNotFoundErr error + for _, f := range files { + if err := cfg.setFromIniFile(profile, f, exOpts); err != nil { + if _, ok := err.(SharedConfigProfileNotExistsError); ok { + // Ignore profiles not defined in individual files. + profileNotFoundErr = err + skippedFiles++ + continue + } + return err + } + } + if skippedFiles == len(files) { + // If all files were skipped because the profile is not found, return + // the original profile not found error. 
+		return profileNotFoundErr
+	}
+
+	if _, ok := profiles[profile]; ok {
+		// If this is the second instance of the profile, the Assume Role
+		// options must be cleared because they are only valid for the
+		// first reference of a profile. The self-linked instance of the
+		// profile only has credential provider options.
+		cfg.clearAssumeRoleOptions()
+	} else {
+		// The first time a profile is seen it must either be an assume role
+		// or credentials. Assert that if the credential type requires a role
+		// ARN, the ARN is also set.
+		if err := cfg.validateCredentialsRequireARN(profile); err != nil {
+			return err
+		}
+	}
+	profiles[profile] = struct{}{}
+
+	if err := cfg.validateCredentialType(); err != nil {
+		return err
+	}
+
+	// Link source profiles for assume roles
+	if len(cfg.SourceProfileName) != 0 {
+		// Profiles linked via source_profile ignore credential provider
+		// options; the source profile must provide the credentials.
+		cfg.clearCredentialOptions()
+
+		srcCfg := &sharedConfig{}
+		err := srcCfg.setFromIniFiles(profiles, cfg.SourceProfileName, files, exOpts)
+		if err != nil {
+			// A SourceProfile that doesn't exist is a configuration error.
+			if _, ok := err.(SharedConfigProfileNotExistsError); ok {
+				err = SharedConfigAssumeRoleError{
+					RoleARN:       cfg.RoleARN,
+					SourceProfile: cfg.SourceProfileName,
+				}
+			}
+			return err
+		}
+
+		if !srcCfg.hasCredentials() {
+			return SharedConfigAssumeRoleError{
+				RoleARN:       cfg.RoleARN,
+				SourceProfile: cfg.SourceProfileName,
+			}
+		}
+
+		cfg.SourceProfile = srcCfg
+	}
+
+	return nil
+}
+
+// setFromIniFile loads the configuration from the file using the profile
+// provided. A sharedConfig pointer type value is used so that multiple config
+// file loadings can be chained.
+//
+// Only complete, logically grouped values are loaded; incomplete groups will
+// not set fields in cfg. Such as credentials.
+// For example, if a config file only includes volcengine_access_key_id but no
+// volcengine_secret_access_key, the volcengine_access_key_id will be ignored.
+func (cfg *sharedConfig) setFromIniFile(profile string, file sharedConfigFile, exOpts bool) error {
+	section, ok := file.IniData.GetSection(profile)
+	if !ok {
+		// Fall back to the alternate profile name: "profile <name>"
+		section, ok = file.IniData.GetSection(fmt.Sprintf("profile %s", profile))
+		if !ok {
+			return SharedConfigProfileNotExistsError{Profile: profile, Err: nil}
+		}
+	}
+
+	if exOpts {
+		// Assume Role Parameters
+		updateString(&cfg.RoleARN, section, roleArnKey)
+		updateString(&cfg.ExternalID, section, externalIDKey)
+		updateString(&cfg.MFASerial, section, mfaSerialKey)
+		updateString(&cfg.RoleSessionName, section, roleSessionNameKey)
+		updateString(&cfg.SourceProfileName, section, sourceProfileKey)
+		updateString(&cfg.CredentialSource, section, credentialSourceKey)
+
+		updateString(&cfg.Region, section, regionKey)
+	}
+
+	updateString(&cfg.CredentialProcess, section, credentialProcessKey)
+	updateString(&cfg.WebIdentityTokenFile, section, webIdentityTokenFileKey)
+
+	// Shared Credentials
+	creds := credentials.Value{
+		AccessKeyID:     section.String(accessKeyIDKey),
+		SecretAccessKey: section.String(secretAccessKey),
+		SessionToken:    section.String(sessionTokenKey),
+		ProviderName:    fmt.Sprintf("SharedConfigCredentials: %s", file.Filename),
+	}
+	if creds.HasKeys() {
+		cfg.Creds = creds
+	}
+
+	// Endpoint discovery
+	updateBoolPtr(&cfg.EnableEndpointDiscovery, section, enableEndpointDiscoveryKey)
+
+	return nil
+}
+
+func (cfg *sharedConfig) validateCredentialsRequireARN(profile string) error {
+	var credSource string
+
+	switch {
+	case len(cfg.SourceProfileName) != 0:
+		credSource = sourceProfileKey
+	case len(cfg.CredentialSource) != 0:
+		credSource = credentialSourceKey
+	case len(cfg.WebIdentityTokenFile) != 0:
+		credSource = webIdentityTokenFileKey
+	}
+
+	if len(credSource) != 0 && len(cfg.RoleARN)
== 0 {
+		return CredentialRequiresARNError{
+			Type:    credSource,
+			Profile: profile,
+		}
+	}
+
+	return nil
+}
+
+func (cfg *sharedConfig) validateCredentialType() error {
+	// Only one or no credential type can be defined.
+	if !oneOrNone(
+		len(cfg.SourceProfileName) != 0,
+		len(cfg.CredentialSource) != 0,
+		len(cfg.CredentialProcess) != 0,
+		len(cfg.WebIdentityTokenFile) != 0,
+	) {
+		return ErrSharedConfigSourceCollision
+	}
+
+	return nil
+}
+
+func (cfg *sharedConfig) hasCredentials() bool {
+	switch {
+	case len(cfg.SourceProfileName) != 0:
+	case len(cfg.CredentialSource) != 0:
+	case len(cfg.CredentialProcess) != 0:
+	case len(cfg.WebIdentityTokenFile) != 0:
+	case cfg.Creds.HasKeys():
+	default:
+		return false
+	}
+
+	return true
+}
+
+func (cfg *sharedConfig) clearCredentialOptions() {
+	cfg.CredentialSource = ""
+	cfg.CredentialProcess = ""
+	cfg.WebIdentityTokenFile = ""
+	cfg.Creds = credentials.Value{}
+}
+
+func (cfg *sharedConfig) clearAssumeRoleOptions() {
+	cfg.RoleARN = ""
+	cfg.ExternalID = ""
+	cfg.MFASerial = ""
+	cfg.RoleSessionName = ""
+	cfg.SourceProfileName = ""
+}
+
+func oneOrNone(bs ...bool) bool {
+	var count int
+
+	for _, b := range bs {
+		if b {
+			count++
+			if count > 1 {
+				return false
+			}
+		}
+	}
+
+	return true
+}
+
+// updateString will only update the dst with the value in the section key if
+// the key is present in the section.
+func updateString(dst *string, section ini.Section, key string) {
+	if !section.Has(key) {
+		return
+	}
+	*dst = section.String(key)
+}
+
+// updateBoolPtr will only update the dst with the value in the section key if
+// the key is present in the section.
+func updateBoolPtr(dst **bool, section ini.Section, key string) {
+	if !section.Has(key) {
+		return
+	}
+	*dst = new(bool)
+	**dst = section.Bool(key)
+}
+
+// SharedConfigLoadError is an error for when the shared config file failed to
+// load.
+type SharedConfigLoadError struct {
+	Filename string
+	Err      error
+}
+
+// Code is the short id of the error.
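For orientation, the profile keys parsed above map onto a shared config/credentials file roughly like the following. This is a purely illustrative sketch: the key names come from the constants defined in this file, while the profile names, region, and TRN value are made up.

```ini
[default]
volcengine_access_key_id     = EXAMPLE_ACCESS_KEY
volcengine_secret_access_key = EXAMPLE_SECRET_KEY
region                       = cn-beijing
endpoint_discovery_enabled   = true

; A hypothetical assume-role profile chained to the static
; credentials above via source_profile.
[profile autoscaler]
role_trn          = trn:iam::0000000000:role/example-role
source_profile    = default
role_session_name = cluster-autoscaler
```

Per the validation logic above, a profile that sets `source_profile`, `credential_source`, or `web_identity_token_file` without `role_trn` would fail with CredentialRequiresARNError.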
+func (e SharedConfigLoadError) Code() string {
+	return "SharedConfigLoadError"
+}
+
+// Message is the description of the error
+func (e SharedConfigLoadError) Message() string {
+	return fmt.Sprintf("failed to load config file, %s", e.Filename)
+}
+
+// OrigErr is the underlying error that caused the failure.
+func (e SharedConfigLoadError) OrigErr() error {
+	return e.Err
+}
+
+// Error satisfies the error interface.
+func (e SharedConfigLoadError) Error() string {
+	return volcengineerr.SprintError(e.Code(), e.Message(), "", e.Err)
+}
+
+// SharedConfigProfileNotExistsError is an error for the shared config when
+// the profile was not found in the config file.
+type SharedConfigProfileNotExistsError struct {
+	Profile string
+	Err     error
+}
+
+// Code is the short id of the error.
+func (e SharedConfigProfileNotExistsError) Code() string {
+	return "SharedConfigProfileNotExistsError"
+}
+
+// Message is the description of the error
+func (e SharedConfigProfileNotExistsError) Message() string {
+	return fmt.Sprintf("failed to get profile, %s", e.Profile)
+}
+
+// OrigErr is the underlying error that caused the failure.
+func (e SharedConfigProfileNotExistsError) OrigErr() error {
+	return e.Err
+}
+
+// Error satisfies the error interface.
+func (e SharedConfigProfileNotExistsError) Error() string {
+	return volcengineerr.SprintError(e.Code(), e.Message(), "", e.Err)
+}
+
+// SharedConfigAssumeRoleError is an error for the shared config when the
+// profile contains assume role information, but that information is invalid
+// or incomplete.
+type SharedConfigAssumeRoleError struct {
+	RoleARN       string
+	SourceProfile string
+}
+
+// Code is the short id of the error.
+func (e SharedConfigAssumeRoleError) Code() string { + return "SharedConfigAssumeRoleError" +} + +// Message is the description of the error +func (e SharedConfigAssumeRoleError) Message() string { + return fmt.Sprintf( + "failed to load assume role for %s, source profile %s has no shared credentials", + e.RoleARN, e.SourceProfile, + ) +} + +// OrigErr is the underlying error that caused the failure. +func (e SharedConfigAssumeRoleError) OrigErr() error { + return nil +} + +// Error satisfies the error interface. +func (e SharedConfigAssumeRoleError) Error() string { + return volcengineerr.SprintError(e.Code(), e.Message(), "", nil) +} + +// CredentialRequiresARNError provides the error for shared config credentials +// that are incorrectly configured in the shared config or credentials file. +type CredentialRequiresARNError struct { + // type of credentials that were configured. + Type string + + // Profile name the credentials were in. + Profile string +} + +// Code is the short id of the error. +func (e CredentialRequiresARNError) Code() string { + return "CredentialRequiresARNError" +} + +// Message is the description of the error +func (e CredentialRequiresARNError) Message() string { + return fmt.Sprintf( + "credential type %s requires role_arn, profile %s", + e.Type, e.Profile, + ) +} + +// OrigErr is the underlying error that caused the failure. +func (e CredentialRequiresARNError) OrigErr() error { + return nil +} + +// Error satisfies the error interface. 
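The mutual-exclusion rule behind ErrSharedConfigSourceCollision can be exercised outside the SDK; this standalone sketch simply mirrors the `oneOrNone` helper defined above (the sample flags stand in for the four credential-source checks).

```go
package main

import "fmt"

// oneOrNone reports whether at most one of the flags is set, mirroring
// the credential-source collision check in validateCredentialType.
func oneOrNone(bs ...bool) bool {
	var count int
	for _, b := range bs {
		if b {
			count++
			if count > 1 {
				return false
			}
		}
	}
	return true
}

func main() {
	fmt.Println(oneOrNone())                  // true: no credential source configured
	fmt.Println(oneOrNone(true, false))       // true: exactly one source configured
	fmt.Println(oneOrNone(true, false, true)) // false: two sources collide
}
```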
+func (e CredentialRequiresARNError) Error() string { + return volcengineerr.SprintError(e.Code(), e.Message(), "", nil) +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/signer/volc/volc.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/signer/volc/volc.go new file mode 100644 index 000000000000..71c3ac9f4b0b --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/signer/volc/volc.go @@ -0,0 +1,79 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. 
+*/ + +package volc + +import ( + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volc-sdk-golang/base" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/credentials" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request" +) + +var SignRequestHandler = request.NamedHandler{ + Name: "volc.SignRequestHandler", Fn: SignSDKRequest, +} + +func SignSDKRequest(req *request.Request) { + + region := req.ClientInfo.SigningRegion + + var ( + dynamicCredentials *credentials.Credentials + dynamicRegion *string + c1 base.Credentials + err error + ) + + if req.Config.DynamicCredentialsIncludeError != nil { + dynamicCredentials, dynamicRegion, err = req.Config.DynamicCredentialsIncludeError(req.Context()) + if err != nil { + req.Error = err + return + } + } else if req.Config.DynamicCredentials != nil { + dynamicCredentials, dynamicRegion = req.Config.DynamicCredentials(req.Context()) + } + + if req.Config.DynamicCredentials != nil || req.Config.DynamicCredentialsIncludeError != nil { + if volcengine.StringValue(dynamicRegion) == "" { + req.Error = volcengine.ErrMissingRegion + return + } + region = volcengine.StringValue(dynamicRegion) + } else if region == "" { + region = volcengine.StringValue(req.Config.Region) + } + + name := req.ClientInfo.SigningName + if name == "" { + name = req.ClientInfo.ServiceID + } + + if dynamicCredentials == nil { + c1, err = req.Config.Credentials.GetBase(region, name) + } else { + c1, err = dynamicCredentials.GetBase(region, name) + } + + if err != nil { + req.Error = err + return + } + + r := c1.Sign(req.HTTPRequest) + req.HTTPRequest.Header = r.Header +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/special/iot_response.go 
b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/special/iot_response.go new file mode 100644 index 000000000000..27083080456b --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/special/iot_response.go @@ -0,0 +1,32 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package special + +import ( + "reflect" + + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/response" +) + +func iotResponse(response response.VolcengineResponse, i interface{}) interface{} { + _, ok1 := reflect.TypeOf(i).Elem().FieldByName("ResponseMetadata") + _, ok2 := reflect.TypeOf(i).Elem().FieldByName("Result") + if ok1 && ok2 { + return response + } + return response.Result +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/special/special_conf.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/special/special_conf.go new file mode 100644 index 000000000000..d47037053924 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/special/special_conf.go @@ -0,0 +1,33 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. 
+You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package special + +import "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/response" + +type ResponseSpecial func(response.VolcengineResponse, interface{}) interface{} + +var responseSpecialMapping map[string]ResponseSpecial + +func init() { + responseSpecialMapping = map[string]ResponseSpecial{ + "iot": iotResponse, + } +} + +func ResponseSpecialMapping() map[string]ResponseSpecial { + return responseSpecialMapping +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/types.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/types.go new file mode 100644 index 000000000000..1efe284684ab --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/types.go @@ -0,0 +1,226 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package volcengine + +// Copy from https://github.com/aws/aws-sdk-go +// May have been modified by Beijing Volcanoengine Technology Ltd. 
+
+import (
+	"io"
+	"sync"
+
+	"k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/internal/sdkio"
+)
+
+// ReadSeekCloser wraps an io.Reader returning a ReaderSeekerCloser. Allows the
+// SDK to accept an io.Reader that is not also an io.Seeker for unsigned
+// streaming payload API operations.
+//
+// A ReadSeekCloser wrapping a nonseekable io.Reader used in an API
+// operation's input will prevent that operation being retried in the case of
+// network errors, and cause operation requests to fail if the operation
+// requires payload signing.
+//
+// Note: If using this with S3 PutObject to stream an object upload, the SDK's
+// S3 Upload manager (s3manager.Uploader) provides support for streaming with
+// the ability to retry network errors.
+func ReadSeekCloser(r io.Reader) ReaderSeekerCloser {
+	return ReaderSeekerCloser{r}
+}
+
+// ReaderSeekerCloser represents a reader that can also delegate io.Seeker and
+// io.Closer interfaces to the underlying object if they are available.
+type ReaderSeekerCloser struct {
+	r io.Reader
+}
+
+// IsReaderSeekable returns if the underlying reader type can be seeked. An
+// io.Reader might not actually be seekable if it is the ReaderSeekerCloser
+// type.
+func IsReaderSeekable(r io.Reader) bool {
+	switch v := r.(type) {
+	case ReaderSeekerCloser:
+		return v.IsSeeker()
+	case *ReaderSeekerCloser:
+		return v.IsSeeker()
+	case io.ReadSeeker:
+		return true
+	default:
+		return false
+	}
+}
+
+// Read reads from the reader up to the size of p. The number of bytes read,
+// and the error if one occurred, will be returned.
+//
+// If the underlying reader is not an io.Reader, zero bytes read and a nil
+// error will be returned.
+// +// Performs the same functionality as io.Reader Read +func (r ReaderSeekerCloser) Read(p []byte) (int, error) { + switch t := r.r.(type) { + case io.Reader: + return t.Read(p) + } + return 0, nil +} + +// Seek sets the offset for the next Read to offset, interpreted according to +// whence: 0 means relative to the origin of the file, 1 means relative to the +// current offset, and 2 means relative to the end. Seek returns the new offset +// and an error, if any. +// +// If the ReaderSeekerCloser is not an io.Seeker nothing will be done. +func (r ReaderSeekerCloser) Seek(offset int64, whence int) (int64, error) { + switch t := r.r.(type) { + case io.Seeker: + return t.Seek(offset, whence) + } + return int64(0), nil +} + +// IsSeeker returns if the underlying reader is also a seeker. +func (r ReaderSeekerCloser) IsSeeker() bool { + _, ok := r.r.(io.Seeker) + return ok +} + +// HasLen returns the length of the underlying reader if the value implements +// the Len() int method. +func (r ReaderSeekerCloser) HasLen() (int, bool) { + type lenner interface { + Len() int + } + + if lr, ok := r.r.(lenner); ok { + return lr.Len(), true + } + + return 0, false +} + +// GetLen returns the length of the bytes remaining in the underlying reader. +// Checks first for Len(), then io.Seeker to determine the size of the +// underlying reader. +// +// Will return -1 if the length cannot be determined. +func (r ReaderSeekerCloser) GetLen() (int64, error) { + if l, ok := r.HasLen(); ok { + return int64(l), nil + } + + if s, ok := r.r.(io.Seeker); ok { + return seekerLen(s) + } + + return -1, nil +} + +// SeekerLen attempts to get the number of bytes remaining at the seeker's +// current position. Returns the number of bytes remaining or error. +func SeekerLen(s io.Seeker) (int64, error) { + // Determine if the seeker is actually seekable. ReaderSeekerCloser + // hides the fact that a io.Readers might not actually be seekable. 
+ switch v := s.(type) { + case ReaderSeekerCloser: + return v.GetLen() + case *ReaderSeekerCloser: + return v.GetLen() + } + + return seekerLen(s) +} + +func seekerLen(s io.Seeker) (int64, error) { + curOffset, err := s.Seek(0, sdkio.SeekCurrent) + if err != nil { + return 0, err + } + + endOffset, err := s.Seek(0, sdkio.SeekEnd) + if err != nil { + return 0, err + } + + _, err = s.Seek(curOffset, sdkio.SeekStart) + if err != nil { + return 0, err + } + + return endOffset - curOffset, nil +} + +// Close closes the ReaderSeekerCloser. +// +// If the ReaderSeekerCloser is not an io.Closer nothing will be done. +func (r ReaderSeekerCloser) Close() error { + switch t := r.r.(type) { + case io.Closer: + return t.Close() + } + return nil +} + +// A WriteAtBuffer provides a in memory buffer supporting the io.WriterAt interface +// Can be used with the s3manager.Downloader to download content to a buffer +// in memory. Safe to use concurrently. +type WriteAtBuffer struct { + buf []byte + m sync.Mutex + + // GrowthCoeff defines the growth rate of the internal buffer. By + // default, the growth rate is 1, where expanding the internal + // buffer will allocate only enough capacity to fit the new expected + // length. + GrowthCoeff float64 +} + +// NewWriteAtBuffer creates a WriteAtBuffer with an internal buffer +// provided by buf. +func NewWriteAtBuffer(buf []byte) *WriteAtBuffer { + return &WriteAtBuffer{buf: buf} +} + +// WriteAt writes a slice of bytes to a buffer starting at the position provided +// The number of bytes written will be returned, or error. Can overwrite previous +// written slices if the write ats overlap. 
+func (b *WriteAtBuffer) WriteAt(p []byte, pos int64) (n int, err error) { + pLen := len(p) + expLen := pos + int64(pLen) + b.m.Lock() + defer b.m.Unlock() + if int64(len(b.buf)) < expLen { + if int64(cap(b.buf)) < expLen { + if b.GrowthCoeff < 1 { + b.GrowthCoeff = 1 + } + newBuf := make([]byte, expLen, int64(b.GrowthCoeff*float64(expLen))) + copy(newBuf, b.buf) + b.buf = newBuf + } + b.buf = b.buf[:expLen] + } + copy(b.buf[pos:], p) + return pLen, nil +} + +// Bytes returns a slice of bytes written to the buffer. +func (b *WriteAtBuffer) Bytes() []byte { + b.m.Lock() + defer b.m.Unlock() + return b.buf +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/universal/universal_client.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/universal/universal_client.go new file mode 100644 index 000000000000..e0c5fdcb8d65 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/universal/universal_client.go @@ -0,0 +1,105 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. 
+*/ + +package universal + +import ( + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/client" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/client/metadata" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/corehandlers" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/session" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/signer/volc" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcenginequery" +) + +func New(session *session.Session) *Universal { + return &Universal{ + Session: session, + } +} + +func (u *Universal) newClient(info RequestUniversal) *client.Client { + config := u.Session.ClientConfig(info.ServiceName) + c := client.New( + *config.Config, + metadata.ClientInfo{ + SigningName: config.SigningName, + SigningRegion: config.SigningRegion, + Endpoint: config.Endpoint, + APIVersion: info.Version, + ServiceName: info.ServiceName, + ServiceID: info.ServiceName, + }, + config.Handlers, + ) + c.Handlers.Build.PushBackNamed(corehandlers.SDKVersionUserAgentHandler) + c.Handlers.Sign.PushBackNamed(volc.SignRequestHandler) + c.Handlers.Build.PushBackNamed(volcenginequery.BuildHandler) + c.Handlers.Unmarshal.PushBackNamed(volcenginequery.UnmarshalHandler) + c.Handlers.UnmarshalMeta.PushBackNamed(volcenginequery.UnmarshalMetaHandler) + c.Handlers.UnmarshalError.PushBackNamed(volcenginequery.UnmarshalErrorHandler) + + return c +} + +func (u *Universal) getMethod(m HttpMethod) string { + switch m { + case GET: + return "GET" + case POST: + return "POST" + case PUT: + return "PUT" + case DELETE: + return "DELETE" + case HEAD: + return "HEAD" + default: + return "GET" + } +} + +func 
getContentType(m ContentType) string { + switch m { + case ApplicationJSON: + return "application/json" + case FormUrlencoded: + return "x-www-form-urlencoded" + default: + return "" + } +} + +func (u *Universal) DoCall(info RequestUniversal, input *map[string]interface{}) (output *map[string]interface{}, err error) { + c := u.newClient(info) + op := &request.Operation{ + HTTPMethod: u.getMethod(info.HttpMethod), + HTTPPath: "/", + Name: info.Action, + } + if input == nil { + input = &map[string]interface{}{} + } + output = &map[string]interface{}{} + req := c.NewRequest(op, input, output) + + if getContentType(info.ContentType) != "" { + req.HTTPRequest.Header.Set("Content-Type", getContentType(info.ContentType)) + } + err = req.Send() + return output, err +} diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/apis/well_known_labels.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/universal/universal_const.go similarity index 70% rename from cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/apis/well_known_labels.go rename to cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/universal/universal_const.go index 84c4b9fa1589..893abf5e8663 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/apis/well_known_labels.go +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/universal/universal_const.go @@ -1,5 +1,5 @@ /* -Copyright 2019 The Kubernetes Authors. +Copyright 2023 The Kubernetes Authors. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. @@ -14,10 +14,22 @@ See the License for the specific language governing permissions and limitations under the License. 
*/ -package apis +package universal + +type HttpMethod int + +const ( + GET HttpMethod = iota + HEAD + POST + PUT + DELETE +) + +type ContentType int const ( - // LabelServiceProxyName indicates that an alternative service - // proxy will implement this Service. - LabelServiceProxyName = "service.kubernetes.io/service-proxy-name" + Default ContentType = iota + FormUrlencoded + ApplicationJSON ) diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/universal/universal_struct.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/universal/universal_struct.go new file mode 100644 index 000000000000..c66adb18583b --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/universal/universal_struct.go @@ -0,0 +1,31 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. 
+*/ + +package universal + +import "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/session" + +type Universal struct { + Session *session.Session +} + +type RequestUniversal struct { + ServiceName string + Action string + Version string + HttpMethod HttpMethod + ContentType ContentType +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/url.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/url.go new file mode 100644 index 000000000000..9c3837ecc18f --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/url.go @@ -0,0 +1,29 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package volcengine + +// Copy from https://github.com/aws/aws-sdk-go +// May have been modified by Beijing Volcanoengine Technology Ltd. + +import "net/url" + +// URLHostname will extract the Hostname without port from the URL value. +// +// Wrapper of net/url#URL.Hostname for backwards Go version compatibility. 
+func URLHostname(url *url.URL) string { + return url.Hostname() +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/version.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/version.go new file mode 100644 index 000000000000..b3a357851d22 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/version.go @@ -0,0 +1,27 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +// Package volcengine provides core functionality for making requests to volcengine services. +package volcengine + +// Copy from https://github.com/aws/aws-sdk-go +// May have been modified by Beijing Volcanoengine Technology Ltd. + +// SDKName is the name of this volcengine SDK +const SDKName = "volcengine-go-sdk" + +// SDKVersion is the version of this SDK +const SDKVersion = "1.0.52" diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcenginebody/body.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcenginebody/body.go new file mode 100755 index 000000000000..723ab4f092d9 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcenginebody/body.go @@ -0,0 +1,140 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. 
+You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package volcenginebody + +// Copy from https://github.com/aws/aws-sdk-go +// May have been modified by Beijing Volcanoengine Technology Ltd. + +import ( + "encoding/json" + "fmt" + "net/url" + "reflect" + "strings" + + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/private/protocol/query/queryutil" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/custom" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineerr" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil" +) + +func BodyParam(body *url.Values, r *request.Request) { + var ( + isForm bool + ) + contentType := r.HTTPRequest.Header.Get("Content-Type") + newBody := body + if len(contentType) > 0 && strings.Contains(strings.ToLower(contentType), "x-www-form-urlencoded") { + isForm = true + newBody = &url.Values{} + } + + if !isForm && len(contentType) > 0 { + r.Error = volcengineerr.New("SerializationError", "not support such content-type", nil) + return + } + + if reflect.TypeOf(r.Params) == reflect.TypeOf(&map[string]interface{}{}) { + m := *(r.Params).(*map[string]interface{}) + for k, v := range m { + if reflect.TypeOf(v).String() == "string" { + newBody.Add(k, v.(string)) + } else { + newBody.Add(k, fmt.Sprintf("%v", v)) + } + } + 
} else if err := queryutil.Parse(*newBody, r.Params, false); err != nil { + r.Error = volcengineerr.New("SerializationError", "failed encoding Query request", err) + return + } + + //extra process + if r.Config.ExtraHttpParameters != nil { + extra := r.Config.ExtraHttpParameters(r.Context()) + if extra != nil { + for k, value := range extra { + newBody.Add(k, value) + } + } + } + if r.Config.ExtraHttpParametersWithMeta != nil { + extra := r.Config.ExtraHttpParametersWithMeta(r.Context(), custom.RequestMetadata{ + ServiceName: r.ClientInfo.ServiceName, + Version: r.ClientInfo.APIVersion, + Action: r.Operation.Name, + HttpMethod: r.Operation.HTTPMethod, + Region: *r.Config.Region, + Request: r.HTTPRequest, + RawQuery: body, + }) + if extra != nil { + for k, value := range extra { + newBody.Add(k, value) + } + } + } + + if isForm { + r.HTTPRequest.URL.RawQuery = body.Encode() + r.HTTPRequest.Header.Set("Content-Type", "application/x-www-form-urlencoded; charset=utf-8") + r.SetBufferBody([]byte(newBody.Encode())) + return + } + + r.Input = volcengineutil.ParameterToMap(body.Encode(), r.Config.LogSensitives, + r.Config.LogLevel.Matches(volcengine.LogInfoWithInputAndOutput) || r.Config.LogLevel.Matches(volcengine.LogDebugWithInputAndOutput)) + + r.HTTPRequest.URL.RawQuery = newBody.Encode() +} + +func BodyJson(body *url.Values, r *request.Request) { + method := strings.ToUpper(r.HTTPRequest.Method) + if v := r.HTTPRequest.Header.Get("Content-Type"); len(v) == 0 { + r.HTTPRequest.Header.Set("Content-Type", "application/json; charset=utf-8") + } + + if v := r.HTTPRequest.Header.Get("Content-Type"); !strings.Contains(strings.ToLower(v), "application/json") || method == "GET" { + return + } + + input := make(map[string]interface{}) + b, _ := json.Marshal(r.Params) + _ = json.Unmarshal(b, &input) + if r.Config.ExtraHttpJsonBody != nil { + r.Config.ExtraHttpJsonBody(r.Context(), &input, custom.RequestMetadata{ + ServiceName: r.ClientInfo.ServiceName, + Version: 
r.ClientInfo.APIVersion, + Action: r.Operation.Name, + HttpMethod: r.Operation.HTTPMethod, + Region: *r.Config.Region, + Request: r.HTTPRequest, + RawQuery: body, + }) + b, _ = json.Marshal(input) + } + r.SetStringBody(string(b)) + + r.HTTPRequest.URL.RawQuery = body.Encode() + r.IsJsonBody = true + + r.Input = volcengineutil.BodyToMap(input, r.Config.LogSensitives, + r.Config.LogLevel.Matches(volcengine.LogInfoWithInputAndOutput) || r.Config.LogLevel.Matches(volcengine.LogDebugWithInputAndOutput)) + r.Params = nil + r.HTTPRequest.Header.Set("Accept", "application/json") +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineerr/error.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineerr/error.go new file mode 100644 index 000000000000..b3387fa31420 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineerr/error.go @@ -0,0 +1,133 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +// Package volcengineerr represents API error interface accessors for the SDK. +package volcengineerr + +// Copy from https://github.com/aws/aws-sdk-go +// May have been modified by Beijing Volcanoengine Technology Ltd. + +// An Error wraps lower level errors with code, message and an original error. 
+// The underlying concrete error type may also satisfy other interfaces which
+// can be used to obtain more specific information about the error.
+//
+// Calling Error() or String() will always include the full information about
+// an error based on its underlying type.
+type Error interface {
+	// Satisfy the generic error interface.
+	error
+
+	// Code Returns the short phrase depicting the classification of the error.
+	Code() string
+
+	// Message Returns the error details message.
+	Message() string
+
+	// OrigErr Returns the original error if one was set. Nil is returned if not set.
+	OrigErr() error
+}
+
+// BatchError is a batch of errors which also wraps lower level errors with
+// code, message, and original errors. Calling Error() will include all errors
+// that occurred in the batch.
+//
+// Deprecated: Replaced with BatchedErrors. Only defined for backwards
+// compatibility.
+type BatchError interface {
+	// Satisfy the generic error interface.
+	error
+
+	// Code Returns the short phrase depicting the classification of the error.
+	Code() string
+
+	// Message Returns the error details message.
+	Message() string
+
+	// OrigErrs Returns the original errors if any were set. An empty slice is returned if not set.
+	OrigErrs() []error
+}
+
+// BatchedErrors is a batch of errors which also wraps lower level errors with
+// code, message, and original errors. Calling Error() will include all errors
+// that occurred in the batch.
+//
+// Replaces BatchError
+type BatchedErrors interface {
+	// Error Satisfy the base Error interface.
+	Error
+
+	// OrigErrs Returns the original errors if any were set. An empty slice is returned if not set.
+	OrigErrs() []error
+}
+
+// New returns an Error object described by the code, message, and origErr.
+//
+// If origErr satisfies the Error interface it will not be wrapped within a new
+// Error object and will instead be returned.
+func New(code, message string, origErr error) Error {
+	var errs []error
+	if origErr != nil {
+		errs = append(errs, origErr)
+	}
+	return newBaseError(code, message, errs)
+}
+
+// NewBatchError returns a BatchedErrors with a collection of errors as an
+// array of errors.
+func NewBatchError(code, message string, errs []error) BatchedErrors {
+	return newBaseError(code, message, errs)
+}
+
+// A RequestFailure is an interface to extract request failure information from
+// an Error such as the request ID of the failed request returned by a service.
+// RequestFailures may not always have a requestID value if the request failed
+// prior to reaching the service, such as a connection error.
+type RequestFailure interface {
+	Error
+
+	// StatusCode The status code of the HTTP response.
+	StatusCode() int
+
+	// RequestID The request ID returned by the service for a request failure. This will
+	// be empty if no request ID is available, such as when the request failed due
+	// to a connection error.
+	RequestID() string
+}
+
+// NewRequestFailure returns a wrapped error with additional information for
+// the request status code and service requestID.
+//
+// Should be used to wrap all requests which involve service requests, even if
+// the request failed without a service response but had an HTTP status code
+// that may be meaningful.
+func NewRequestFailure(err Error, statusCode int, reqID string, simple ...*bool) RequestFailure {
+	return newRequestError(err, statusCode, reqID, simple...)
+}
+
+// UnmarshalError provides the interface for the SDK failing to unmarshal data.
+type UnmarshalError interface {
+	volcengineerror
+	Bytes() []byte
+}
+
+// NewUnmarshalError returns an initialized UnmarshalError error wrapper adding
+// the bytes that fail to unmarshal to the error.
+func NewUnmarshalError(err error, msg string, bytes []byte) UnmarshalError { + return &unmarshalError{ + volcengineerror: New("UnmarshalError", msg, err), + bytes: bytes, + } +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineerr/types.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineerr/types.go new file mode 100644 index 000000000000..af1ae2efbc62 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineerr/types.go @@ -0,0 +1,254 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package volcengineerr + +// Copy from https://github.com/aws/aws-sdk-go +// May have been modified by Beijing Volcanoengine Technology Ltd. + +import ( + "encoding/hex" + "fmt" +) + +// SprintError returns a string of the formatted error code. +// +// Both extra and origErr are optional. If they are included their lines +// will be added, but if they are not included their lines will be ignored. +func SprintError(code, message, extra string, origErr error) string { + msg := fmt.Sprintf("%s: %s", code, message) + if extra != "" { + msg = fmt.Sprintf("%s\n\t%s", msg, extra) + } + if origErr != nil { + msg = fmt.Sprintf("%s\ncaused by: %s", msg, origErr.Error()) + } + return msg +} + +// A baseError wraps the code and message which defines an error. It also +// can be used to wrap an original error object. 
+// +// Should be used as the root for errors satisfying the volcengineerr.Error. Also +// for any error which does not fit into a specific error wrapper type. +type baseError struct { + // Classification of error + code string + + // Detailed information about error + message string + + // Optional original error this error is based off of. Allows building + // chained errors. + errs []error +} + +// newBaseError returns an error object for the code, message, and errors. +// +// code is a short no whitespace phrase depicting the classification of +// the error that is being created. +// +// message is the free flow string containing detailed information about the +// error. +// +// origErrs is the error objects which will be nested under the new errors to +// be returned. +func newBaseError(code, message string, origErrs []error) *baseError { + b := &baseError{ + code: code, + message: message, + errs: origErrs, + } + + return b +} + +// Error returns the string representation of the error. +// +// See ErrorWithExtra for formatting. +// +// Satisfies the error interface. +func (b baseError) Error() string { + size := len(b.errs) + if size > 0 { + return SprintError(b.code, b.message, "", errorList(b.errs)) + } + + return SprintError(b.code, b.message, "", nil) +} + +// String returns the string representation of the error. +// Alias for Error to satisfy the stringer interface. +func (b baseError) String() string { + return b.Error() +} + +// Code returns the short phrase depicting the classification of the error. +func (b baseError) Code() string { + return b.code +} + +// Message returns the error details message. +func (b baseError) Message() string { + return b.message +} + +// OrigErr returns the original error if one was set. Nil is returned if no +// error was set. This only returns the first element in the list. If the full +// list is needed, use BatchedErrors. 
+func (b baseError) OrigErr() error { + switch len(b.errs) { + case 0: + return nil + case 1: + return b.errs[0] + default: + if err, ok := b.errs[0].(Error); ok { + return NewBatchError(err.Code(), err.Message(), b.errs[1:]) + } + return NewBatchError("BatchedErrors", + "multiple errors occurred", b.errs) + } +} + +// OrigErrs returns the original errors if one was set. An empty slice is +// returned if no error was set. +func (b baseError) OrigErrs() []error { + return b.errs +} + +// So that the Error interface type can be included as an anonymous field +// in the requestError struct and not conflict with the error.Error() method. +type volcengineerror Error + +// A requestError wraps a request or service error. +// +// Composed of baseError for code, message, and original error. +type requestError struct { + volcengineerror + statusCode int + requestID string + bytes []byte + simpleError bool +} + +// newRequestError returns a wrapped error with additional information for +// request status code, and service requestID. +// +// Should be used to wrap all request which involve service requests. Even if +// the request failed without a service response, but had an HTTP status code +// that may be meaningful. +// +// Also wraps original errors via the baseError. +func newRequestError(err Error, statusCode int, requestID string, simple ...*bool) *requestError { + if simple == nil || len(simple) != 1 || simple[0] == nil { + return &requestError{ + volcengineerror: err, + statusCode: statusCode, + requestID: requestID, + } + } + return &requestError{ + volcengineerror: err, + statusCode: statusCode, + requestID: requestID, + simpleError: *simple[0], + } + +} + +// Error returns the string representation of the error. +// Satisfies the error interface. 
+func (r requestError) Error() string { + if !r.simpleError { + extra := fmt.Sprintf("status code: %d, request id: %s", + r.statusCode, r.requestID) + return SprintError(r.Code(), r.Message(), extra, r.OrigErr()) + } + return r.Code() + +} + +// String returns the string representation of the error. +// Alias for Error to satisfy the stringer interface. +func (r requestError) String() string { + return r.Error() +} + +// StatusCode returns the wrapped status code for the error +func (r requestError) StatusCode() int { + return r.statusCode +} + +// RequestID returns the wrapped requestID +func (r requestError) RequestID() string { + return r.requestID +} + +// OrigErrs returns the original errors if one was set. An empty slice is +// returned if no error was set. +func (r requestError) OrigErrs() []error { + if b, ok := r.volcengineerror.(BatchedErrors); ok { + return b.OrigErrs() + } + return []error{r.OrigErr()} +} + +type unmarshalError struct { + volcengineerror + bytes []byte +} + +// Error returns the string representation of the error. +// Satisfies the error interface. +func (e unmarshalError) Error() string { + extra := hex.Dump(e.bytes) + return SprintError(e.Code(), e.Message(), extra, e.OrigErr()) +} + +// String returns the string representation of the error. +// Alias for Error to satisfy the stringer interface. +func (e unmarshalError) String() string { + return e.Error() +} + +// Bytes returns the bytes that failed to unmarshal. +func (e unmarshalError) Bytes() []byte { + return e.bytes +} + +// An error list that satisfies the golang interface +type errorList []error + +// Error returns the string representation of the error. +// +// Satisfies the error interface. +func (e errorList) Error() string { + msg := "" + // How do we want to handle the array size being zero + if size := len(e); size > 0 { + for i := 0; i < size; i++ { + msg += e[i].Error() + // We check the next index to see if it is within the slice. 
+ // If it is, then we append a newline. We do this, because unit tests + // could be broken with the additional '\n' + if i+1 < size { + msg += "\n" + } + } + } + return msg +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcenginequery/build.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcenginequery/build.go new file mode 100755 index 000000000000..7bcc2541a9c6 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcenginequery/build.go @@ -0,0 +1,62 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package volcenginequery + +// Copy from https://github.com/aws/aws-sdk-go +// May have been modified by Beijing Volcanoengine Technology Ltd. + +import ( + "net/url" + "strings" + + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcenginebody" +) + +// BuildHandler is a named request handler for building volcenginequery protocol requests +var BuildHandler = request.NamedHandler{Name: "volcenginesdk.volcenginequery.Build", Fn: Build} + +// Build builds a request for a Volcengine Query service. 
+func Build(r *request.Request) { + body := url.Values{ + "Action": {r.Operation.Name}, + "Version": {r.ClientInfo.APIVersion}, + } + //r.HTTPRequest.Header.Add("Accept", "application/json") + //method := strings.ToUpper(r.HTTPRequest.Method) + + if r.Config.ExtraUserAgent != nil && *r.Config.ExtraUserAgent != "" { + if strings.HasPrefix(*r.Config.ExtraUserAgent, "/") { + request.AddToUserAgent(r, *r.Config.ExtraUserAgent) + } else { + request.AddToUserAgent(r, "/"+*r.Config.ExtraUserAgent) + } + + } + r.HTTPRequest.Host = r.HTTPRequest.URL.Host + v := r.HTTPRequest.Header.Get("Content-Type") + if (strings.ToUpper(r.HTTPRequest.Method) == "PUT" || + strings.ToUpper(r.HTTPRequest.Method) == "POST" || + strings.ToUpper(r.HTTPRequest.Method) == "DELETE" || + strings.ToUpper(r.HTTPRequest.Method) == "PATCH") && + strings.Contains(strings.ToLower(v), "application/json") { + r.HTTPRequest.Header.Set("Content-Type", "application/json; charset=utf-8") + volcenginebody.BodyJson(&body, r) + } else { + volcenginebody.BodyParam(&body, r) + } +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcenginequery/unmarshal.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcenginequery/unmarshal.go new file mode 100644 index 000000000000..b4b0f4c2a6d3 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcenginequery/unmarshal.go @@ -0,0 +1,163 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package volcenginequery
+
+// Copy from https://github.com/aws/aws-sdk-go
+// May have been modified by Beijing Volcanoengine Technology Ltd.
+
+import (
+	"bytes"
+	"encoding/json"
+	"fmt"
+	"io/ioutil"
+	"net/http"
+	"reflect"
+
+	"k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request"
+	"k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/response"
+	"k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/special"
+	"k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineerr"
+	"k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil"
+)
+
+// UnmarshalHandler is a named request handler for unmarshaling volcenginequery protocol requests
+var UnmarshalHandler = request.NamedHandler{Name: "volcenginesdk.volcenginequery.Unmarshal", Fn: Unmarshal}
+
+// UnmarshalMetaHandler is a named request handler for unmarshaling volcenginequery protocol request metadata
+var UnmarshalMetaHandler = request.NamedHandler{Name: "volcenginesdk.volcenginequery.UnmarshalMeta", Fn: UnmarshalMeta}
+
+// Unmarshal unmarshals a response for a Volcengine Query service.
+func Unmarshal(r *request.Request) { + defer r.HTTPResponse.Body.Close() + if r.DataFilled() { + body, err := ioutil.ReadAll(r.HTTPResponse.Body) + if err != nil { + fmt.Printf("read volcenginebody err, %v\n", err) + r.Error = err + return + } + + var forceJsonNumberDecoder bool + + if r.Config.ForceJsonNumberDecode != nil { + forceJsonNumberDecoder = r.Config.ForceJsonNumberDecode(r.Context(), r.MergeRequestInfo()) + } + + if reflect.TypeOf(r.Data) == reflect.TypeOf(&map[string]interface{}{}) { + if err = json.Unmarshal(body, &r.Data); err != nil || forceJsonNumberDecoder { + //try next + decoder := json.NewDecoder(bytes.NewReader(body)) + decoder.UseNumber() + if err = decoder.Decode(&r.Data); err != nil { + fmt.Printf("Unmarshal err, %v\n", err) + r.Error = err + return + } + } + var info interface{} + + ptr := r.Data.(*map[string]interface{}) + info, err = volcengineutil.ObtainSdkValue("ResponseMetadata.Error.Code", *ptr) + if err != nil { + r.Error = err + return + } + if info != nil { + if processBodyError(r, &response.VolcengineResponse{}, body, forceJsonNumberDecoder) { + return + } + } + + } else { + volcengineResponse := response.VolcengineResponse{} + if processBodyError(r, &volcengineResponse, body, forceJsonNumberDecoder) { + return + } + + if _, ok := reflect.TypeOf(r.Data).Elem().FieldByName("Metadata"); ok { + if volcengineResponse.ResponseMetadata != nil { + volcengineResponse.ResponseMetadata.HTTPCode = r.HTTPResponse.StatusCode + } + r.Metadata = *(volcengineResponse.ResponseMetadata) + reflect.ValueOf(r.Data).Elem().FieldByName("Metadata").Set(reflect.ValueOf(volcengineResponse.ResponseMetadata)) + } + + var ( + b []byte + source interface{} + ) + + if r.Config.CustomerUnmarshalData != nil { + source = r.Config.CustomerUnmarshalData(r.Context(), r.MergeRequestInfo(), volcengineResponse) + } else { + if sp, ok := special.ResponseSpecialMapping()[r.ClientInfo.ServiceName]; ok { + source = sp(volcengineResponse, r.Data) + } else { + source = 
volcengineResponse.Result + } + } + + if b, err = json.Marshal(source); err != nil { + fmt.Printf("Unmarshal err, %v\n", err) + r.Error = err + return + } + if err = json.Unmarshal(b, &r.Data); err != nil || forceJsonNumberDecoder { + decoder := json.NewDecoder(bytes.NewReader(b)) + decoder.UseNumber() + if err = decoder.Decode(&r.Data); err != nil { + fmt.Printf("Unmarshal err, %v\n", err) + r.Error = err + return + } + } + } + + } +} + +// UnmarshalMeta unmarshals header response values for an VOLCSTACK Query service. +func UnmarshalMeta(r *request.Request) { + +} + +func processBodyError(r *request.Request, volcengineResponse *response.VolcengineResponse, body []byte, forceJsonNumberDecoder bool) bool { + if err := json.Unmarshal(body, &volcengineResponse); err != nil || forceJsonNumberDecoder { + decoder := json.NewDecoder(bytes.NewReader(body)) + decoder.UseNumber() + if err = decoder.Decode(&r.Data); err != nil { + fmt.Printf("Unmarshal err, %v\n", err) + r.Error = err + return true + } + } + if volcengineResponse.ResponseMetadata.Error != nil && volcengineResponse.ResponseMetadata.Error.Code != "" { + r.Error = volcengineerr.NewRequestFailure( + volcengineerr.New(volcengineResponse.ResponseMetadata.Error.Code, volcengineResponse.ResponseMetadata.Error.Message, nil), + http.StatusBadRequest, + volcengineResponse.ResponseMetadata.RequestId, + ) + processUnmarshalError(unmarshalErrorInfo{ + Request: r, + Response: volcengineResponse, + Body: body, + Err: r.Error, + }) + return true + } + return false +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcenginequery/unmarshal_error.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcenginequery/unmarshal_error.go new file mode 100755 index 000000000000..3425f4455af7 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcenginequery/unmarshal_error.go @@ -0,0 +1,135 @@ +/* +Copyright 2023 The Kubernetes 
Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package volcenginequery + +// Copy from https://github.com/aws/aws-sdk-go +// May have been modified by Beijing Volcanoengine Technology Ltd. + +import ( + "encoding/json" + "fmt" + "io/ioutil" + "reflect" + + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/custom" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/request" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/response" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineerr" +) + +// UnmarshalErrorHandler is a named request handler to unmarshal request errors +var UnmarshalErrorHandler = request.NamedHandler{Name: "volcenginesdk.volcenginequery.UnmarshalError", Fn: UnmarshalError} + +// UnmarshalError unmarshals an error response for a VOLCSTACK Query service.
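Editor's note: the query handlers above decode response bodies twice, first with plain `json.Unmarshal` and then, on failure (or when a JSON-number decoder is forced), with a `json.Decoder` that has `UseNumber()` enabled so large integer IDs are not silently converted to `float64`. A standalone sketch of that pattern; the `errorEnvelope` field names mirror the shape the handlers look for but are illustrative, not the SDK's exact types:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
)

// errorEnvelope approximates the response envelope the unmarshal
// handlers inspect (hypothetical simplification of the SDK types).
type errorEnvelope struct {
	ResponseMetadata struct {
		RequestId string
		Error     *struct {
			Code    string
			Message string
		}
	}
}

// decodeEnvelope decodes body with UseNumber so numeric fields keep
// full precision as json.Number instead of float64.
func decodeEnvelope(body []byte) (*errorEnvelope, error) {
	var env errorEnvelope
	dec := json.NewDecoder(bytes.NewReader(body))
	dec.UseNumber()
	if err := dec.Decode(&env); err != nil {
		return nil, err
	}
	return &env, nil
}

func main() {
	body := []byte(`{"ResponseMetadata":{"RequestId":"req-1","Error":{"Code":"InvalidParameter","Message":"bad input"}}}`)
	env, err := decodeEnvelope(body)
	if err != nil {
		panic(err)
	}
	fmt.Println(env.ResponseMetadata.Error.Code) // InvalidParameter
}
```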
+func UnmarshalError(r *request.Request) { + defer r.HTTPResponse.Body.Close() + processUnmarshalError(unmarshalErrorInfo{ + Request: r, + }) +} + +type unmarshalErrorInfo struct { + Request *request.Request + Response *response.VolcengineResponse + Body []byte + Err error +} + +func processUnmarshalError(info unmarshalErrorInfo) { + var ( + body []byte + err error + ) + r := info.Request + if info.Response == nil && info.Body == nil { + info.Response = &response.VolcengineResponse{} + if r.DataFilled() { + body, err = ioutil.ReadAll(r.HTTPResponse.Body) + if err != nil { + fmt.Printf("read volcenginebody err, %v\n", err) + r.Error = err + return + } + info.Body = body + if err = json.Unmarshal(body, info.Response); err != nil { + fmt.Printf("Unmarshal err, %v\n", err) + r.Error = err + return + } + } else { + r.Error = volcengineerr.NewRequestFailure( + volcengineerr.New("ServiceUnavailableException", "service is unavailable", nil), + r.HTTPResponse.StatusCode, + r.RequestID, + ) + return + } + } + + if r.Config.CustomerUnmarshalError != nil { + customerErr := r.Config.CustomerUnmarshalError(r.Context(), custom.RequestMetadata{ + ServiceName: r.ClientInfo.ServiceName, + Version: r.ClientInfo.APIVersion, + Action: r.Operation.Name, + HttpMethod: r.Operation.HTTPMethod, + Region: *r.Config.Region, + }, *info.Response) + if customerErr != nil { + r.Error = customerErr + return + } + } + + if info.Response.ResponseMetadata == nil { + simple := response.VolcengineSimpleError{} + if err = json.Unmarshal(info.Body, &simple); err != nil { + fmt.Printf("Unmarshal err, %v\n", err) + r.Error = err + return + } + info.Response.ResponseMetadata = &response.ResponseMetadata{ + Error: &response.Error{ + Code: simple.ErrorCode, + Message: simple.Message, + }, + } + return + } + + if info.Err != nil { + r.Error = info.Err + } else { + r.Error = volcengineerr.NewRequestFailure( + volcengineerr.New(info.Response.ResponseMetadata.Error.Code, 
info.Response.ResponseMetadata.Error.Message, nil), + r.HTTPResponse.StatusCode, + info.Response.ResponseMetadata.RequestId, + r.Config.SimpleError, + ) + } + if reflect.TypeOf(r.Data) != reflect.TypeOf(&map[string]interface{}{}) { + + if _, ok := reflect.TypeOf(r.Data).Elem().FieldByName("Metadata"); ok { + if info.Response.ResponseMetadata != nil { + info.Response.ResponseMetadata.HTTPCode = r.HTTPResponse.StatusCode + } + r.Metadata = *(info.Response.ResponseMetadata) + reflect.ValueOf(r.Data).Elem().FieldByName("Metadata").Set(reflect.ValueOf(info.Response.ResponseMetadata)) + } + } + return + +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil/copy.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil/copy.go new file mode 100644 index 000000000000..7c23f3f4a87d --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil/copy.go @@ -0,0 +1,127 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package volcengineutil + +// Copy from https://github.com/aws/aws-sdk-go +// May have been modified by Beijing Volcanoengine Technology Ltd. + +import ( + "io" + "reflect" + "time" +) + +// Copy deeply copies a src structure to dst. Useful for copying request and +// response structures. 
+// +// Can copy between structs of different type, but will only copy fields which +// are assignable, and exist in both structs. Fields which are not assignable, +// or do not exist in both structs are ignored. +func Copy(dst, src interface{}) { + dstval := reflect.ValueOf(dst) + if !dstval.IsValid() { + panic("Copy dst cannot be nil") + } + + rcopy(dstval, reflect.ValueOf(src), true) +} + +// CopyOf returns a copy of src while also allocating the memory for dst. +// src must be a pointer type or this operation will fail. +func CopyOf(src interface{}) (dst interface{}) { + dsti := reflect.New(reflect.TypeOf(src).Elem()) + dst = dsti.Interface() + rcopy(dsti, reflect.ValueOf(src), true) + return +} + +// rcopy performs a recursive copy of values from the source to destination. +// +// root is used to skip certain aspects of the copy which are not valid +// for the root node of a object. +func rcopy(dst, src reflect.Value, root bool) { + if !src.IsValid() { + return + } + + switch src.Kind() { + case reflect.Ptr: + if _, ok := src.Interface().(io.Reader); ok { + if dst.Kind() == reflect.Ptr && dst.Elem().CanSet() { + dst.Elem().Set(src) + } else if dst.CanSet() { + dst.Set(src) + } + } else { + e := src.Type().Elem() + if dst.CanSet() && !src.IsNil() { + if _, ok := src.Interface().(*time.Time); !ok { + dst.Set(reflect.New(e)) + } else { + tempValue := reflect.New(e) + tempValue.Elem().Set(src.Elem()) + // Sets time.Time's unexported values + dst.Set(tempValue) + } + } + if src.Elem().IsValid() { + // Keep the current root state since the depth hasn't changed + rcopy(dst.Elem(), src.Elem(), root) + } + } + case reflect.Struct: + t := dst.Type() + for i := 0; i < t.NumField(); i++ { + name := t.Field(i).Name + srcVal := src.FieldByName(name) + dstVal := dst.FieldByName(name) + if srcVal.IsValid() && dstVal.CanSet() { + rcopy(dstVal, srcVal, false) + } + } + case reflect.Slice: + if src.IsNil() { + break + } + + s := reflect.MakeSlice(src.Type(), src.Len(), 
src.Cap()) + dst.Set(s) + for i := 0; i < src.Len(); i++ { + rcopy(dst.Index(i), src.Index(i), false) + } + case reflect.Map: + if src.IsNil() { + break + } + + s := reflect.MakeMap(src.Type()) + dst.Set(s) + for _, k := range src.MapKeys() { + v := src.MapIndex(k) + v2 := reflect.New(v.Type()).Elem() + rcopy(v2, v, false) + dst.SetMapIndex(k, v2) + } + default: + // Assign the value if possible. If its not assignable, the value would + // need to be converted and the impact of that may be unexpected, or is + // not compatible with the dst type. + if src.Type().AssignableTo(dst.Type()) { + dst.Set(src) + } + } +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil/equal.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil/equal.go new file mode 100644 index 000000000000..49884f3fd380 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil/equal.go @@ -0,0 +1,44 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package volcengineutil + +// Copy from https://github.com/aws/aws-sdk-go +// May have been modified by Beijing Volcanoengine Technology Ltd. + +import "reflect" + +// DeepEqual returns if the two values are deeply equal like reflect.DeepEqual. 
+// In addition to this, this method will also dereference the input values if +// possible so the DeepEqual performed will not fail if one parameter is a +// pointer and the other is not. +// +// DeepEqual will not perform indirection of nested values of the input parameters. +func DeepEqual(a, b interface{}) bool { + ra := reflect.Indirect(reflect.ValueOf(a)) + rb := reflect.Indirect(reflect.ValueOf(b)) + + if raValid, rbValid := ra.IsValid(), rb.IsValid(); !raValid && !rbValid { + // If the elements are both nil, and of the same type they are equal + // If they are of different types they are not equal + return reflect.TypeOf(a) == reflect.TypeOf(b) + } else if raValid != rbValid { + // Both values must be valid to be equal + return false + } + + return reflect.DeepEqual(ra.Interface(), rb.Interface()) +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil/path_value.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil/path_value.go new file mode 100644 index 000000000000..9db1ce9a3ca7 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil/path_value.go @@ -0,0 +1,240 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package volcengineutil + +// Copy from https://github.com/aws/aws-sdk-go +// May have been modified by Beijing Volcanoengine Technology Ltd. 
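Editor's note: the `DeepEqual` helper above differs from `reflect.DeepEqual` only in that it first strips one level of pointer indirection from each argument. A self-contained sketch of that behavior (names are ours, not the SDK's):

```go
package main

import (
	"fmt"
	"reflect"
)

// deepEqualIndirect compares a and b like reflect.DeepEqual, but first
// dereferences top-level pointers so *T and T values can compare equal.
func deepEqualIndirect(a, b interface{}) bool {
	ra := reflect.Indirect(reflect.ValueOf(a))
	rb := reflect.Indirect(reflect.ValueOf(b))
	if !ra.IsValid() && !rb.IsValid() {
		// Both nil: equal only when their static types match.
		return reflect.TypeOf(a) == reflect.TypeOf(b)
	}
	if ra.IsValid() != rb.IsValid() {
		// One valid, one nil: never equal.
		return false
	}
	return reflect.DeepEqual(ra.Interface(), rb.Interface())
}

func main() {
	s := "hello"
	fmt.Println(deepEqualIndirect(&s, "hello")) // true
	fmt.Println(deepEqualIndirect(&s, "world")) // false
}
```

Note that, as the doc comment above says, the indirection is applied only at the top level; nested pointer fields are still compared pointer-for-pointer by `reflect.DeepEqual`'s usual rules.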
+ +import ( + "reflect" + "regexp" + "strconv" + "strings" + + "github.com/jmespath/go-jmespath" +) + +var indexRe = regexp.MustCompile(`(.+)\[(-?\d+)?\]$`) + +// rValuesAtPath returns a slice of values found in value v. The values +// in v are explored recursively so all nested values are collected. +func rValuesAtPath(v interface{}, path string, createPath, caseSensitive, nilTerm bool) []reflect.Value { + pathparts := strings.Split(path, "||") + if len(pathparts) > 1 { + for _, pathpart := range pathparts { + vals := rValuesAtPath(v, pathpart, createPath, caseSensitive, nilTerm) + if len(vals) > 0 { + return vals + } + } + return nil + } + + values := []reflect.Value{reflect.Indirect(reflect.ValueOf(v))} + components := strings.Split(path, ".") + for len(values) > 0 && len(components) > 0 { + var index *int64 + var indexStar bool + c := strings.TrimSpace(components[0]) + if c == "" { // no actual component, illegal syntax + return nil + } else if caseSensitive && c != "*" && strings.ToLower(c[0:1]) == c[0:1] { + // TODO normalize case for user + return nil // don't support unexported fields + } + + // parse this component + if m := indexRe.FindStringSubmatch(c); m != nil { + c = m[1] + if m[2] == "" { + index = nil + indexStar = true + } else { + i, _ := strconv.ParseInt(m[2], 10, 32) + index = &i + indexStar = false + } + } + + nextvals := []reflect.Value{} + for _, value := range values { + // pull component name out of struct member + if value.Kind() != reflect.Struct { + continue + } + + if c == "*" { // pull all members + for i := 0; i < value.NumField(); i++ { + if f := reflect.Indirect(value.Field(i)); f.IsValid() { + nextvals = append(nextvals, f) + } + } + continue + } + + value = value.FieldByNameFunc(func(name string) bool { + if c == name { + return true + } else if !caseSensitive && strings.ToLower(name) == strings.ToLower(c) { + return true + } + return false + }) + + if nilTerm && value.Kind() == reflect.Ptr && len(components[1:]) == 0 { + if 
!value.IsNil() { + value.Set(reflect.Zero(value.Type())) + } + return []reflect.Value{value} + } + + if createPath && value.Kind() == reflect.Ptr && value.IsNil() { + // TODO if the value is the terminus it should not be created + // if the value to be set to its position is nil. + value.Set(reflect.New(value.Type().Elem())) + value = value.Elem() + } else { + value = reflect.Indirect(value) + } + + if value.Kind() == reflect.Slice || value.Kind() == reflect.Map { + if !createPath && value.IsNil() { + value = reflect.ValueOf(nil) + } + } + + if value.IsValid() { + nextvals = append(nextvals, value) + } + } + values = nextvals + + if indexStar || index != nil { + nextvals = []reflect.Value{} + for _, valItem := range values { + value := reflect.Indirect(valItem) + if value.Kind() != reflect.Slice { + continue + } + + if indexStar { // grab all indices + for i := 0; i < value.Len(); i++ { + idx := reflect.Indirect(value.Index(i)) + if idx.IsValid() { + nextvals = append(nextvals, idx) + } + } + continue + } + + // pull out index + i := int(*index) + if i >= value.Len() { // check out of bounds + if createPath { + // TODO resize slice + } else { + continue + } + } else if i < 0 { // support negative indexing + i = value.Len() + i + } + value = reflect.Indirect(value.Index(i)) + + if value.Kind() == reflect.Slice || value.Kind() == reflect.Map { + if !createPath && value.IsNil() { + value = reflect.ValueOf(nil) + } + } + + if value.IsValid() { + nextvals = append(nextvals, value) + } + } + values = nextvals + } + + components = components[1:] + } + return values +} + +// ValuesAtPath returns a list of values at the case insensitive lexical +// path inside of a structure. 
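Editor's note: the core of `rValuesAtPath` is case-insensitive struct-field matching via `reflect.Value.FieldByNameFunc`, one dotted path component at a time. A minimal standalone sketch of just that matching step, without the slice indexing, `||` alternatives, or path-creation logic (the `instance` type here is hypothetical):

```go
package main

import (
	"fmt"
	"reflect"
	"strings"
)

type instance struct {
	InstanceId string
	Status     string
}

// fieldByPath walks a dot-separated path through struct fields,
// matching each component case-insensitively.
func fieldByPath(v interface{}, path string) (interface{}, bool) {
	val := reflect.Indirect(reflect.ValueOf(v))
	for _, c := range strings.Split(path, ".") {
		if val.Kind() != reflect.Struct {
			return nil, false
		}
		val = val.FieldByNameFunc(func(name string) bool {
			return strings.EqualFold(name, c)
		})
		if !val.IsValid() {
			return nil, false
		}
		val = reflect.Indirect(val)
	}
	return val.Interface(), true
}

func main() {
	got, ok := fieldByPath(instance{InstanceId: "i-123", Status: "running"}, "instanceid")
	fmt.Println(got, ok) // i-123 true
}
```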
+func ValuesAtPath(i interface{}, path string) ([]interface{}, error) { + result, err := jmespath.Search(path, i) + if err != nil { + return nil, err + } + + v := reflect.ValueOf(result) + if !v.IsValid() || (v.Kind() == reflect.Ptr && v.IsNil()) { + return nil, nil + } + if s, ok := result.([]interface{}); ok { + return s, err + } + if v.Kind() == reflect.Map && v.Len() == 0 { + return nil, nil + } + if v.Kind() == reflect.Slice { + out := make([]interface{}, v.Len()) + for i := 0; i < v.Len(); i++ { + out[i] = v.Index(i).Interface() + } + return out, nil + } + + return []interface{}{result}, nil +} + +// SetValueAtPath sets a value at the case insensitive lexical path inside +// of a structure. +func SetValueAtPath(i interface{}, path string, v interface{}) { + rvals := rValuesAtPath(i, path, true, false, v == nil) + for _, rval := range rvals { + if rval.Kind() == reflect.Ptr && rval.IsNil() { + continue + } + setValue(rval, v) + } +} + +func setValue(dstVal reflect.Value, src interface{}) { + if dstVal.Kind() == reflect.Ptr { + dstVal = reflect.Indirect(dstVal) + } + srcVal := reflect.ValueOf(src) + + if !srcVal.IsValid() { // src is literal nil + if dstVal.CanAddr() { + // Convert to pointer so that pointer's value can be nil'ed + // dstVal = dstVal.Addr() + } + dstVal.Set(reflect.Zero(dstVal.Type())) + + } else if srcVal.Kind() == reflect.Ptr { + if srcVal.IsNil() { + srcVal = reflect.Zero(dstVal.Type()) + } else { + srcVal = reflect.ValueOf(src).Elem() + } + dstVal.Set(srcVal) + } else { + dstVal.Set(srcVal) + } + +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil/prettify.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil/prettify.go new file mode 100644 index 000000000000..87f4c3a7e937 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil/prettify.go @@ -0,0 +1,144 @@ +/* +Copyright 2023 The Kubernetes 
Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package volcengineutil + +// Copy from https://github.com/aws/aws-sdk-go +// May have been modified by Beijing Volcanoengine Technology Ltd. + +import ( + "bytes" + "fmt" + "io" + "reflect" + "strings" +) + +// Prettify returns the string representation of a value. +func Prettify(i interface{}) string { + var buf bytes.Buffer + prettify(reflect.ValueOf(i), 0, &buf) + return buf.String() +} + +// prettify will recursively walk value v to build a textual +// representation of the value. 
+func prettify(v reflect.Value, indent int, buf *bytes.Buffer) { + for v.Kind() == reflect.Ptr { + v = v.Elem() + } + + switch v.Kind() { + case reflect.Struct: + strtype := v.Type().String() + if strtype == "time.Time" { + _, err := fmt.Fprintf(buf, "%s", v.Interface()) + if err != nil { + panic(err) + } + break + } else if strings.HasPrefix(strtype, "io.") { + buf.WriteString("<buffer>") + break + } + + buf.WriteString("{\n") + + var names []string + for i := 0; i < v.Type().NumField(); i++ { + name := v.Type().Field(i).Name + f := v.Field(i) + if name[0:1] == strings.ToLower(name[0:1]) { + continue // ignore unexported fields + } + if (f.Kind() == reflect.Ptr || f.Kind() == reflect.Slice || f.Kind() == reflect.Map) && f.IsNil() { + continue // ignore unset fields + } + names = append(names, name) + } + + for i, n := range names { + val := v.FieldByName(n) + buf.WriteString(strings.Repeat(" ", indent+2)) + buf.WriteString(n + ": ") + prettify(val, indent+2, buf) + + if i < len(names)-1 { + buf.WriteString(",\n") + } + } + + buf.WriteString("\n" + strings.Repeat(" ", indent) + "}") + case reflect.Slice: + strtype := v.Type().String() + if strtype == "[]uint8" { + _, err := fmt.Fprintf(buf, "<binary> len %d", v.Len()) + if err != nil { + panic(err) + } + break + } + + nl, id, id2 := "", "", "" + if v.Len() > 3 { + nl, id, id2 = "\n", strings.Repeat(" ", indent), strings.Repeat(" ", indent+2) + } + buf.WriteString("[" + nl) + for i := 0; i < v.Len(); i++ { + buf.WriteString(id2) + prettify(v.Index(i), indent+2, buf) + + if i < v.Len()-1 { + buf.WriteString("," + nl) + } + } + + buf.WriteString(nl + id + "]") + case reflect.Map: + buf.WriteString("{\n") + + for i, k := range v.MapKeys() { + buf.WriteString(strings.Repeat(" ", indent+2)) + buf.WriteString(k.String() + ": ") + prettify(v.MapIndex(k), indent+2, buf) + + if i < v.Len()-1 { + buf.WriteString(",\n") + } + } + + buf.WriteString("\n" + strings.Repeat(" ", indent) + "}") + default: + if !v.IsValid() { + _, err :=
fmt.Fprint(buf, "<invalid value>") + if err != nil { + panic(err) + } + return + } + format := "%v" + switch v.Interface().(type) { + case string: + format = "%q" + case io.ReadSeeker, io.Reader: + format = "buffer(%p)" + } + _, err := fmt.Fprintf(buf, format, v.Interface()) + if err != nil { + panic(err) + } + } +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil/string_value.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil/string_value.go new file mode 100644 index 000000000000..b1a72d92ee8d --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil/string_value.go @@ -0,0 +1,110 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package volcengineutil + +// Copy from https://github.com/aws/aws-sdk-go +// May have been modified by Beijing Volcanoengine Technology Ltd. + +import ( + "bytes" + "fmt" + "reflect" + "strings" +) + +// StringValue returns the string representation of a value.
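Editor's note: `stringValue` below redacts any struct field carrying a `sensitive:"true"` tag, the same convention aws-sdk-go uses for credentials. A standalone sketch of that tag check; the `credentials` type and the `<sensitive>` placeholder string are illustrative:

```go
package main

import (
	"fmt"
	"reflect"
)

type credentials struct {
	AccessKey string
	SecretKey string `sensitive:"true"`
}

// maskSensitive renders exported fields of a struct, replacing any
// field tagged sensitive:"true" with a redaction placeholder.
func maskSensitive(v interface{}) map[string]string {
	out := map[string]string{}
	rv := reflect.Indirect(reflect.ValueOf(v))
	rt := rv.Type()
	for i := 0; i < rt.NumField(); i++ {
		ft := rt.Field(i)
		if ft.Tag.Get("sensitive") == "true" {
			out[ft.Name] = "<sensitive>"
			continue
		}
		out[ft.Name] = fmt.Sprintf("%v", rv.Field(i).Interface())
	}
	return out
}

func main() {
	fmt.Println(maskSensitive(credentials{AccessKey: "ak", SecretKey: "sk"}))
}
```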
+func StringValue(i interface{}) string { + var buf bytes.Buffer + stringValue(reflect.ValueOf(i), 0, &buf) + return buf.String() +} + +func stringValue(v reflect.Value, indent int, buf *bytes.Buffer) { + for v.Kind() == reflect.Ptr { + v = v.Elem() + } + + switch v.Kind() { + case reflect.Struct: + buf.WriteString("{\n") + + for i := 0; i < v.Type().NumField(); i++ { + ft := v.Type().Field(i) + fv := v.Field(i) + + if ft.Name[0:1] == strings.ToLower(ft.Name[0:1]) { + continue // ignore unexported fields + } + if (fv.Kind() == reflect.Ptr || fv.Kind() == reflect.Slice) && fv.IsNil() { + continue // ignore unset fields + } + + buf.WriteString(strings.Repeat(" ", indent+2)) + buf.WriteString(ft.Name + ": ") + + if tag := ft.Tag.Get("sensitive"); tag == "true" { + buf.WriteString("<sensitive>") + } else { + stringValue(fv, indent+2, buf) + } + + buf.WriteString(",\n") + } + + buf.WriteString("\n" + strings.Repeat(" ", indent) + "}") + case reflect.Slice: + nl, id, id2 := "", "", "" + if v.Len() > 3 { + nl, id, id2 = "\n", strings.Repeat(" ", indent), strings.Repeat(" ", indent+2) + } + buf.WriteString("[" + nl) + for i := 0; i < v.Len(); i++ { + buf.WriteString(id2) + stringValue(v.Index(i), indent+2, buf) + + if i < v.Len()-1 { + buf.WriteString("," + nl) + } + } + + buf.WriteString(nl + id + "]") + case reflect.Map: + buf.WriteString("{\n") + + for i, k := range v.MapKeys() { + buf.WriteString(strings.Repeat(" ", indent+2)) + buf.WriteString(k.String() + ": ") + stringValue(v.MapIndex(k), indent+2, buf) + + if i < v.Len()-1 { + buf.WriteString(",\n") + } + } + + buf.WriteString("\n" + strings.Repeat(" ", indent) + "}") + default: + format := "%v" + switch v.Interface().(type) { + case string: + format = "%q" + } + _, err := fmt.Fprintf(buf, format, v.Interface()) + if err != nil { + panic(err) + } + } +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil/tools.go
b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil/tools.go new file mode 100755 index 000000000000..0e4f91d5ad70 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil/tools.go @@ -0,0 +1,51 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package volcengineutil + +// Copy from https://github.com/aws/aws-sdk-go +// May have been modified by Beijing Volcanoengine Technology Ltd. 
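Editor's note: tools.go's `ObtainSdkValue` walks a dotted key pattern such as `Result.Instances.0.Id` through the `map[string]interface{}` / `[]interface{}` values that `encoding/json` produces. A self-contained sketch of that traversal using a type switch instead of reflection:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// lookupPath walks a dotted key pattern through nested maps and slices,
// returning nil (without error) when a key or index is absent.
func lookupPath(pattern string, obj interface{}) (interface{}, error) {
	cur := obj
	for _, k := range strings.Split(pattern, ".") {
		switch node := cur.(type) {
		case map[string]interface{}:
			cur = node[k]
			if cur == nil {
				return nil, nil
			}
		case []interface{}:
			i, err := strconv.Atoi(k)
			if err != nil {
				return nil, fmt.Errorf("key %q must be a numeric index", k)
			}
			if i < 0 || i >= len(node) {
				return nil, nil
			}
			cur = node[i]
		default:
			return nil, nil
		}
	}
	return cur, nil
}

func main() {
	doc := map[string]interface{}{
		"Result": map[string]interface{}{
			"Instances": []interface{}{
				map[string]interface{}{"Id": "i-abc"},
			},
		},
	}
	v, _ := lookupPath("Result.Instances.0.Id", doc)
	fmt.Println(v) // i-abc
}
```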
+ +import ( + "fmt" + "reflect" + "strconv" + "strings" +) + +func ObtainSdkValue(keyPattern string, obj interface{}) (interface{}, error) { + keys := strings.Split(keyPattern, ".") + root := obj + for index, k := range keys { + if reflect.ValueOf(root).Kind() == reflect.Map { + root = root.(map[string]interface{})[k] + if root == nil { + return root, nil + } + + } else if reflect.ValueOf(root).Kind() == reflect.Slice { + i, err := strconv.Atoi(k) + if err != nil { + return nil, fmt.Errorf("keyPattern %s index %d must be a number", keyPattern, index) + } + if i < 0 || i >= len(root.([]interface{})) { + return nil, nil + } + root = root.([]interface{})[i] + } + } + return root, nil +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil/trans.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil/trans.go new file mode 100644 index 000000000000..589f9c5e8a6e --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil/trans.go @@ -0,0 +1,77 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License.
+*/ + +package volcengineutil + +import "strings" + +func ParameterToMap(body string, sensitive []string, enable bool) map[string]interface{} { + if !enable { + return nil + } + result := make(map[string]interface{}) + params := strings.Split(body, "&") + for _, param := range params { + values := strings.Split(param, "=") + if values[0] == "Action" || values[0] == "Version" { + continue + } + v := values[1] + if sensitive != nil && len(sensitive) > 0 { + for _, s := range sensitive { + if strings.Contains(values[0], s) { + v = "****" + break + } + } + } + result[values[0]] = v + } + return result +} + +func BodyToMap(input map[string]interface{}, sensitive []string, enable bool) map[string]interface{} { + if !enable { + return nil + } + result := make(map[string]interface{}) +loop: + for k, v := range input { + if len(sensitive) > 0 { + for _, s := range sensitive { + if strings.Contains(k, s) { + v = "****" + result[k] = v + continue loop + } + } + } + var ( + next map[string]interface{} + nextPtr *map[string]interface{} + ok bool + ) + + if next, ok = v.(map[string]interface{}); ok { + result[k] = BodyToMap(next, sensitive, enable) + } else if nextPtr, ok = v.(*map[string]interface{}); ok { + result[k] = BodyToMap(*nextPtr, sensitive, enable) + } else { + result[k] = v + } + } + return result +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil/url.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil/url.go new file mode 100755 index 000000000000..46455e2122cd --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/volcengineutil/url.go @@ -0,0 +1,58 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. 
+You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package volcengineutil + +// Copy from https://github.com/aws/aws-sdk-go +// May have been modified by Beijing Volcanoengine Technology Ltd. + +type Endpoint struct { + //UseSSL bool + //Locate bool + //UseInternal bool + CustomerEndpoint string + //CustomerDomainIgnoreService bool + +} + +func NewEndpoint() *Endpoint { + return &Endpoint{} +} + +func (c *Endpoint) WithCustomerEndpoint(customerEndpoint string) *Endpoint { + c.CustomerEndpoint = customerEndpoint + return c +} + +type ServiceInfo struct { + Service string + Region string +} + +const ( + endpoint = "open.volcengineapi.com" + //internalUrl = "open.volcengineapi.com" + //http = "http" + //https = "https" +) + +func (c *Endpoint) GetEndpoint() string { + if c.CustomerEndpoint != "" { + return c.CustomerEndpoint + } else { + return endpoint + } +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine_auto_scaling_cloud_service.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine_auto_scaling_cloud_service.go new file mode 100644 index 000000000000..3b60d1c5c813 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine_auto_scaling_cloud_service.go @@ -0,0 +1,197 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. 
+You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package volcengine + +import ( + "fmt" + "math" + "time" + + "github.com/google/uuid" + "k8s.io/apimachinery/pkg/util/wait" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/autoscaling" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/credentials" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/session" + "k8s.io/klog/v2" +) + +// AutoScalingService is the interface for volcengine auto-scaling service +type AutoScalingService interface { + GetScalingGroupById(groupId string) (*autoscaling.ScalingGroupForDescribeScalingGroupsOutput, error) + ListScalingInstancesByGroupId(groupId string) ([]*autoscaling.ScalingInstanceForDescribeScalingInstancesOutput, error) + GetScalingConfigurationById(configurationId string) (*autoscaling.ScalingConfigurationForDescribeScalingConfigurationsOutput, error) + RemoveInstances(groupId string, instanceIds []string) error + SetAsgTargetSize(groupId string, targetSize int) error + SetAsgDesireCapacity(groupId string, desireCapacity int) error +} + +type autoScalingService struct { + autoscalingClient *autoscaling.AUTOSCALING +} + +func (a *autoScalingService) SetAsgDesireCapacity(groupId string, desireCapacity int) error { + _, err := a.autoscalingClient.ModifyScalingGroupCommon(&map[string]interface{}{ + "ScalingGroupId": groupId, + "DesireInstanceNumber": desireCapacity, + }) + return err +} + +func (a 
*autoScalingService) SetAsgTargetSize(groupId string, targetSize int) error { + uid, err := uuid.NewUUID() + if err != nil { + return err + } + + resp, err := a.autoscalingClient.CreateScalingPolicyCommon(&map[string]interface{}{ + "AdjustmentType": "TotalCapacity", + "AdjustmentValue": targetSize, + "Cooldown": 0, + "ScalingGroupId": groupId, + "ScalingPolicyName": fmt.Sprintf("autoscaler-autogen-scaling-policy-%s", uid.String()), + "ScalingPolicyType": "Scheduled", + "ScheduledPolicy.LaunchTime": time.Now().Add(2 * time.Minute).UTC().Format("2006-01-02T15:04Z"), + }) + if err != nil { + klog.Errorf("failed to create scaling policy, err: %v", err) + return err + } + + scalingPolicyId := (*resp)["Result"].(map[string]interface{})["ScalingPolicyId"].(string) + klog.Infof("create scaling policy response: %v, scalingPolicyId: %s", resp, scalingPolicyId) + + defer func() { + // delete scaling policy + _, err = a.autoscalingClient.DeleteScalingPolicyCommon(&map[string]interface{}{ + "ScalingPolicyId": scalingPolicyId, + }) + if err != nil { + klog.Warningf("failed to delete scaling policy %s, err: %v", scalingPolicyId, err) + } + }() + + _, err = a.autoscalingClient.EnableScalingPolicyCommon(&map[string]interface{}{ + "ScalingPolicyId": scalingPolicyId, + }) + + if err != nil { + klog.Errorf("failed to enable scaling policy %s, err: %v", scalingPolicyId, err) + return err + } + + return wait.Poll(5*time.Second, 30*time.Minute, func() (bool, error) { + // check scaling group status + group, err := a.GetScalingGroupById(groupId) + if err != nil { + return false, err + } + if *group.DesireInstanceNumber == int32(targetSize) { + return true, nil + } + return false, nil + }) +} + +func (a *autoScalingService) RemoveInstances(groupId string, instanceIds []string) error { + if len(instanceIds) == 0 { + return nil + } + + instanceIdGroups := StringSliceInGroupsOf(instanceIds, 20) + for _, instanceIdGroup := range instanceIdGroups { + _, err := 
a.autoscalingClient.RemoveInstances(&autoscaling.RemoveInstancesInput{ + ScalingGroupId: volcengine.String(groupId), + InstanceIds: volcengine.StringSlice(instanceIdGroup), + }) + if err != nil { + return err + } + } + + return nil +} + +func (a *autoScalingService) GetScalingConfigurationById(configurationId string) (*autoscaling.ScalingConfigurationForDescribeScalingConfigurationsOutput, error) { + configurations, err := a.autoscalingClient.DescribeScalingConfigurations(&autoscaling.DescribeScalingConfigurationsInput{ + ScalingConfigurationIds: volcengine.StringSlice([]string{configurationId}), + }) + if err != nil { + return nil, err + } + if len(configurations.ScalingConfigurations) == 0 || + volcengine.StringValue(configurations.ScalingConfigurations[0].ScalingConfigurationId) != configurationId { + return nil, fmt.Errorf("scaling configuration %s not found", configurationId) + } + return configurations.ScalingConfigurations[0], nil +} + +func (a *autoScalingService) ListScalingInstancesByGroupId(groupId string) ([]*autoscaling.ScalingInstanceForDescribeScalingInstancesOutput, error) { + req := &autoscaling.DescribeScalingInstancesInput{ + ScalingGroupId: volcengine.String(groupId), + PageSize: volcengine.Int32(50), + PageNumber: volcengine.Int32(1), + } + resp, err := a.autoscalingClient.DescribeScalingInstances(req) + if err != nil { + return nil, err + } + + total := volcengine.Int32Value(resp.TotalCount) + if total <= 50 { + return resp.ScalingInstances, nil + } + + res := make([]*autoscaling.ScalingInstanceForDescribeScalingInstancesOutput, 0) + res = append(res, resp.ScalingInstances...) + totalNumber := math.Ceil(float64(total) / 50) + for i := 2; i <= int(totalNumber); i++ { + req.PageNumber = volcengine.Int32(int32(i)) + resp, err = a.autoscalingClient.DescribeScalingInstances(req) + if err != nil { + return nil, err + } + res = append(res, resp.ScalingInstances...) 
+ } + + return res, nil +} + +func (a *autoScalingService) GetScalingGroupById(groupId string) (*autoscaling.ScalingGroupForDescribeScalingGroupsOutput, error) { + groups, err := a.autoscalingClient.DescribeScalingGroups(&autoscaling.DescribeScalingGroupsInput{ + ScalingGroupIds: volcengine.StringSlice([]string{groupId}), + }) + if err != nil { + return nil, err + } + if len(groups.ScalingGroups) == 0 { + return nil, fmt.Errorf("scaling group %s not found", groupId) + } + return groups.ScalingGroups[0], nil +} + +func newAutoScalingService(cloudConfig *cloudConfig) AutoScalingService { + config := volcengine.NewConfig(). + WithCredentials(credentials.NewStaticCredentials(cloudConfig.getAccessKey(), cloudConfig.getSecretKey(), "")). + WithRegion(cloudConfig.getRegion()). + WithEndpoint(cloudConfig.getEndpoint()) + sess, _ := session.NewSession(config) + client := autoscaling.New(sess) + return &autoScalingService{ + autoscalingClient: client, + } +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine_auto_scaling_group.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine_auto_scaling_group.go new file mode 100644 index 000000000000..5a63690e7529 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine_auto_scaling_group.go @@ -0,0 +1,216 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. 
+*/ + +package volcengine + +import ( + "fmt" + + apiv1 "k8s.io/api/core/v1" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider" + "k8s.io/autoscaler/cluster-autoscaler/config" + "k8s.io/klog/v2" + schedulerframework "k8s.io/kubernetes/pkg/scheduler/framework" +) + +// AutoScalingGroup represents a Volcengine 'Auto Scaling Group', which can also be treated as a node group. +type AutoScalingGroup struct { + manager VolcengineManager + asgId string + minInstanceNumber int + maxInstanceNumber int +} + +// MaxSize returns maximum size of the node group. +func (asg *AutoScalingGroup) MaxSize() int { + return asg.maxInstanceNumber +} + +// MinSize returns minimum size of the node group. +func (asg *AutoScalingGroup) MinSize() int { + return asg.minInstanceNumber +} + +// TargetSize returns the current target size of the node group. It is possible that the +// number of nodes in Kubernetes is different at the moment but should be equal +// to Size() once everything stabilizes (new nodes finish startup and registration or +// removed nodes are deleted completely). Implementation required. +func (asg *AutoScalingGroup) TargetSize() (int, error) { + return asg.manager.GetAsgDesireCapacity(asg.asgId) +} + +// IncreaseSize increases the size of the node group. To delete a node you need +// to explicitly name it and use DeleteNode. This function should wait until +// node group size is updated. Implementation required. +func (asg *AutoScalingGroup) IncreaseSize(delta int) error { + if delta <= 0 { + return fmt.Errorf("size increase must be positive") + } + size, err := asg.manager.GetAsgDesireCapacity(asg.asgId) + if err != nil { + return err + } + if size+delta > asg.MaxSize() { + return fmt.Errorf("size increase is too large - desired:%d max:%d", size+delta, asg.MaxSize()) + } + return asg.manager.SetAsgTargetSize(asg.asgId, size+delta) +} + +// DeleteNodes deletes nodes from this node group.
Error is returned either on +// failure or if the given node doesn't belong to this node group. This function +// should wait until node group size is updated. Implementation required. +func (asg *AutoScalingGroup) DeleteNodes(nodes []*apiv1.Node) error { + size, err := asg.manager.GetAsgDesireCapacity(asg.asgId) + if err != nil { + klog.Errorf("Failed to get desire capacity for %s: %v", asg.asgId, err) + return err + } + if size <= asg.MinSize() { + klog.Errorf("Failed to delete nodes from %s: min size reached", asg.asgId) + return fmt.Errorf("asg min size reached") + } + + instanceIds := make([]string, 0, len(nodes)) + for _, node := range nodes { + belongs, err := asg.belongs(node) + if err != nil { + return err + } + if !belongs { + return fmt.Errorf("node %s doesn't belong to asg %s", node.Name, asg.asgId) + } + instanceId, err := ecsInstanceFromProviderId(node.Spec.ProviderID) + if err != nil { + return err + } + instanceIds = append(instanceIds, instanceId) + } + return asg.manager.DeleteScalingInstances(asg.asgId, instanceIds) +} + +func (asg *AutoScalingGroup) belongs(node *apiv1.Node) (bool, error) { + instanceId, err := ecsInstanceFromProviderId(node.Spec.ProviderID) + if err != nil { + return false, err + } + targetAsg, err := asg.manager.GetAsgForInstance(instanceId) + if err != nil { + return false, err + } + if targetAsg == nil { + return false, nil + } + return targetAsg.Id() == asg.asgId, nil +} + +// DecreaseTargetSize decreases the target size of the node group. This function +// doesn't permit to delete any existing node and can be used only to reduce the +// request for new nodes that have not been yet fulfilled. Delta should be negative. +// It is assumed that cloud provider will not delete the existing nodes when there +// is an option to just decrease the target. Implementation required. 
+func (asg *AutoScalingGroup) DecreaseTargetSize(delta int) error { + if delta >= 0 { + return fmt.Errorf("size decrease must be negative") + } + desireCapacity, err := asg.manager.GetAsgDesireCapacity(asg.asgId) + if err != nil { + klog.Errorf("Failed to get desire capacity for %s: %v", asg.asgId, err) + return err + } + allNodes, err := asg.manager.GetAsgNodes(asg.asgId) + if err != nil { + klog.Errorf("Failed to get nodes for %s: %v", asg.asgId, err) + return err + } + if desireCapacity+delta < len(allNodes) { + return fmt.Errorf("size decrease is too large, need to delete existing node - newDesiredCapacity:%d currentNodes:%d", desireCapacity+delta, len(allNodes)) + } + + return asg.manager.SetAsgDesireCapacity(asg.asgId, desireCapacity+delta) +} + +// Id returns a unique identifier of the node group. +func (asg *AutoScalingGroup) Id() string { + return asg.asgId +} + +// Debug returns a string containing all information regarding this node group. +func (asg *AutoScalingGroup) Debug() string { + return fmt.Sprintf("%s (%d:%d)", asg.Id(), asg.MinSize(), asg.MaxSize()) +} + +// Nodes returns a list of all nodes that belong to this node group. +// It is required that Instance objects returned by this method have Id field set. +// Other fields are optional. +// This list should include also instances that might have not become a kubernetes node yet. +func (asg *AutoScalingGroup) Nodes() ([]cloudprovider.Instance, error) { + nodes, err := asg.manager.GetAsgNodes(asg.asgId) + if err != nil { + return nil, err + } + return nodes, nil +} + +// TemplateNodeInfo returns a schedulerframework.NodeInfo structure of an empty +// (as if just started) node. This will be used in scale-up simulations to +// predict what would a new node look like if a node group was expanded.
The returned +// NodeInfo is expected to have a fully populated Node object, with all of the labels, +// capacity and allocatable information as well as all pods that are started on +// the node by default, using manifest (most likely only kube-proxy). Implementation optional. +func (asg *AutoScalingGroup) TemplateNodeInfo() (*schedulerframework.NodeInfo, error) { + template, err := asg.manager.getAsgTemplate(asg.asgId) + if err != nil { + return nil, err + } + node, err := asg.manager.buildNodeFromTemplateName(asg.asgId, template) + if err != nil { + return nil, err + } + nodeInfo := schedulerframework.NewNodeInfo(cloudprovider.BuildKubeProxy(asg.asgId)) + nodeInfo.SetNode(node) + return nodeInfo, nil +} + +// Exist checks if the node group really exists on the cloud provider side. Allows to tell the +// theoretical node group from the real one. Implementation required. +func (asg *AutoScalingGroup) Exist() bool { + return true +} + +// Create creates the node group on the cloud provider side. Implementation optional. +func (asg *AutoScalingGroup) Create() (cloudprovider.NodeGroup, error) { + return nil, cloudprovider.ErrNotImplemented +} + +// Delete deletes the node group on the cloud provider side. +// This will be executed only for autoprovisioned node groups, once their size drops to 0. +// Implementation optional. +func (asg *AutoScalingGroup) Delete() error { + return cloudprovider.ErrNotImplemented +} + +// Autoprovisioned returns true if the node group is autoprovisioned. An autoprovisioned group +// was created by CA and can be deleted when scaled to 0. +func (asg *AutoScalingGroup) Autoprovisioned() bool { + return false +} + +// GetOptions returns NodeGroupAutoscalingOptions that should be used for this particular +// NodeGroup. Returning a nil will result in using default options. +// Implementation optional. 
+func (asg *AutoScalingGroup) GetOptions(defaults config.NodeGroupAutoscalingOptions) (*config.NodeGroupAutoscalingOptions, error) { + return nil, cloudprovider.ErrNotImplemented +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine_auto_scaling_groups.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine_auto_scaling_groups.go new file mode 100644 index 000000000000..f8c5274b1bd8 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine_auto_scaling_groups.go @@ -0,0 +1,95 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. 
+*/ + +package volcengine + +import ( + "sync" + "time" + + "k8s.io/apimachinery/pkg/util/wait" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine" + "k8s.io/klog/v2" +) + +type autoScalingGroupsCache struct { + registeredAsgs []*AutoScalingGroup + instanceToAsg map[string]*AutoScalingGroup + cacheMutex sync.Mutex + instancesNotInManagedAsg map[string]struct{} + asgService AutoScalingService +} + +func newAutoScalingGroupsCache(asgService AutoScalingService) *autoScalingGroupsCache { + registry := &autoScalingGroupsCache{ + registeredAsgs: make([]*AutoScalingGroup, 0), + instanceToAsg: make(map[string]*AutoScalingGroup), + instancesNotInManagedAsg: make(map[string]struct{}), + asgService: asgService, + } + + go wait.Forever(func() { + registry.cacheMutex.Lock() + defer registry.cacheMutex.Unlock() + if err := registry.regenerateCache(); err != nil { + klog.Errorf("Error while regenerating ASG cache: %v", err) + } + }, time.Hour) + + return registry +} + +func (c *autoScalingGroupsCache) Register(asg *AutoScalingGroup) { + c.cacheMutex.Lock() + defer c.cacheMutex.Unlock() + c.registeredAsgs = append(c.registeredAsgs, asg) +} + +func (c *autoScalingGroupsCache) FindForInstance(instanceId string) (*AutoScalingGroup, error) { + c.cacheMutex.Lock() + defer c.cacheMutex.Unlock() + if asg, found := c.instanceToAsg[instanceId]; found { + return asg, nil + } + if _, found := c.instancesNotInManagedAsg[instanceId]; found { + return nil, nil + } + if err := c.regenerateCache(); err != nil { + return nil, err + } + if asg, found := c.instanceToAsg[instanceId]; found { + return asg, nil + } + + // instance does not belong to any configured ASG + c.instancesNotInManagedAsg[instanceId] = struct{}{} + return nil, nil +} + +func (c *autoScalingGroupsCache) regenerateCache() error { + newCache := make(map[string]*AutoScalingGroup) + for _, asg := range c.registeredAsgs { + instances, err := 
c.asgService.ListScalingInstancesByGroupId(asg.asgId) + if err != nil { + return err + } + for _, instance := range instances { + newCache[volcengine.StringValue(instance.InstanceId)] = asg + } + } + c.instanceToAsg = newCache + return nil +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine_cloud_config.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine_cloud_config.go new file mode 100644 index 000000000000..bacdd46374d6 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine_cloud_config.go @@ -0,0 +1,84 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. 
+*/ + +package volcengine + +import ( + "os" + + "k8s.io/klog/v2" +) + +const ( + regionId = "REGION_ID" + accessKey = "ACCESS_KEY" + secretKey = "SECRET_KEY" + endpoint = "ENDPOINT" + + defaultEndpoint = "open.volcengineapi.com" +) + +type cloudConfig struct { + regionId string + accessKey string + secretKey string + endpoint string +} + +func (c *cloudConfig) getRegion() string { + return c.regionId +} + +func (c *cloudConfig) getAccessKey() string { + return c.accessKey +} + +func (c *cloudConfig) getSecretKey() string { + return c.secretKey +} + +func (c *cloudConfig) getEndpoint() string { + return c.endpoint +} + +func (c *cloudConfig) validate() bool { + if c.regionId == "" { + c.regionId = os.Getenv(regionId) + } + + if c.accessKey == "" { + c.accessKey = os.Getenv(accessKey) + } + + if c.secretKey == "" { + c.secretKey = os.Getenv(secretKey) + } + + if c.endpoint == "" { + c.endpoint = os.Getenv(endpoint) + } + + if c.endpoint == "" { + c.endpoint = defaultEndpoint + } + + if c.regionId == "" || c.accessKey == "" || c.secretKey == "" || c.endpoint == "" { + // report which configuration is incomplete without logging credential values + klog.V(5).Info("missing RegionId, AccessKey, SecretKey or Endpoint in cloud config and environment") + return false + } + + return true +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine_cloud_provider.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine_cloud_provider.go new file mode 100644 index 000000000000..53ad1ec1139d --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine_cloud_provider.go @@ -0,0 +1,231 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License.
+You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package volcengine + +import ( + "fmt" + "io" + "os" + "strings" + + "gopkg.in/gcfg.v1" + apiv1 "k8s.io/api/core/v1" + "k8s.io/apimachinery/pkg/api/resource" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider" + "k8s.io/autoscaler/cluster-autoscaler/config" + "k8s.io/autoscaler/cluster-autoscaler/config/dynamic" + "k8s.io/autoscaler/cluster-autoscaler/utils/errors" + "k8s.io/autoscaler/cluster-autoscaler/utils/gpu" + "k8s.io/klog/v2" +) + +// volcengineCloudProvider implements CloudProvider interface. +type volcengineCloudProvider struct { + volcengineManager VolcengineManager + resourceLimiter *cloudprovider.ResourceLimiter + scalingGroups []*AutoScalingGroup +} + +// Name returns name of the cloud provider. +func (v *volcengineCloudProvider) Name() string { + return cloudprovider.VolcengineProviderName +} + +// NodeGroups returns all node groups configured for this cloud provider. +func (v *volcengineCloudProvider) NodeGroups() []cloudprovider.NodeGroup { + result := make([]cloudprovider.NodeGroup, 0, len(v.scalingGroups)) + for _, ng := range v.scalingGroups { + result = append(result, ng) + } + return result +} + +// NodeGroupForNode returns the node group for the given node, nil if the node +// should not be processed by cluster autoscaler, or non-nil error if such +// occurred. Must be implemented. 
+func (v *volcengineCloudProvider) NodeGroupForNode(node *apiv1.Node) (cloudprovider.NodeGroup, error) { + instanceId, err := ecsInstanceFromProviderId(node.Spec.ProviderID) + if err != nil { + return nil, err + } + if len(instanceId) == 0 { + klog.Warningf("Node %v has no providerId", node.Name) + return nil, fmt.Errorf("provider id missing from node: %s", node.Name) + } + return v.volcengineManager.GetAsgForInstance(instanceId) +} + +// HasInstance returns whether the node has corresponding instance in cloud provider, +// true if the node has an instance, false if it no longer exists +func (v *volcengineCloudProvider) HasInstance(node *apiv1.Node) (bool, error) { + return true, cloudprovider.ErrNotImplemented +} + +// Pricing returns pricing model for this cloud provider or error if not available. +// Implementation optional. +func (v *volcengineCloudProvider) Pricing() (cloudprovider.PricingModel, errors.AutoscalerError) { + return nil, cloudprovider.ErrNotImplemented +} + +// GetAvailableMachineTypes get all machine types that can be requested from the cloud provider. +// Implementation optional. +func (v *volcengineCloudProvider) GetAvailableMachineTypes() ([]string, error) { + return []string{}, nil +} + +// NewNodeGroup builds a theoretical node group based on the node definition provided. The node group is not automatically +// created on the cloud provider side. The node group is not returned by NodeGroups() until it is created. +// Implementation optional. +func (v *volcengineCloudProvider) NewNodeGroup(machineType string, labels map[string]string, systemLabels map[string]string, taints []apiv1.Taint, extraResources map[string]resource.Quantity) (cloudprovider.NodeGroup, error) { + return nil, cloudprovider.ErrNotImplemented +} + +// GetResourceLimiter returns struct containing limits (max, min) for resources (cores, memory etc.). 
+func (v *volcengineCloudProvider) GetResourceLimiter() (*cloudprovider.ResourceLimiter, error) { + return v.resourceLimiter, nil +} + +// GPULabel returns the label added to nodes with GPU resource. +func (v *volcengineCloudProvider) GPULabel() string { + return "" +} + +// GetAvailableGPUTypes return all available GPU types cloud provider supports. +func (v *volcengineCloudProvider) GetAvailableGPUTypes() map[string]struct{} { + return map[string]struct{}{} +} + +// Cleanup cleans up open resources before the cloud provider is destroyed, i.e. go routines etc. +func (v *volcengineCloudProvider) Cleanup() error { + return nil +} + +// Refresh is called before every main loop and can be used to dynamically update cloud provider state. +// In particular the list of node groups returned by NodeGroups can change as a result of CloudProvider.Refresh(). +func (v *volcengineCloudProvider) Refresh() error { + return nil +} + +// GetNodeGpuConfig returns the label, type and resource name for the GPU added to node. If node doesn't have +// any GPUs, it returns nil. 
+func (v *volcengineCloudProvider) GetNodeGpuConfig(node *apiv1.Node) *cloudprovider.GpuConfig { + return gpu.GetNodeGPUFromCloudProvider(v, node) +} + +func (v *volcengineCloudProvider) addNodeGroup(spec string) error { + group, err := buildScalingGroupFromSpec(v.volcengineManager, spec) + if err != nil { + klog.Errorf("Failed to build scaling group from spec: %v", err) + return err + } + v.addAsg(group) + return nil +} + +func (v *volcengineCloudProvider) addAsg(asg *AutoScalingGroup) { + v.scalingGroups = append(v.scalingGroups, asg) + v.volcengineManager.RegisterAsg(asg) +} + +func buildScalingGroupFromSpec(manager VolcengineManager, spec string) (*AutoScalingGroup, error) { + nodeGroupSpec, err := dynamic.SpecFromString(spec, true) + if err != nil { + return nil, fmt.Errorf("failed to parse node group spec: %v", err) + } + group, err := manager.GetAsgById(nodeGroupSpec.Name) + if err != nil { + klog.Errorf("scaling group %s does not exist", nodeGroupSpec.Name) + return nil, err + } + return &AutoScalingGroup{ + manager: manager, + asgId: nodeGroupSpec.Name, + minInstanceNumber: group.minInstanceNumber, + maxInstanceNumber: group.maxInstanceNumber, + }, nil +} + +// BuildVolcengine builds CloudProvider implementation for Volcengine +func BuildVolcengine(opts config.AutoscalingOptions, do cloudprovider.NodeGroupDiscoveryOptions, rl *cloudprovider.ResourceLimiter) cloudprovider.CloudProvider { + if opts.CloudConfig == "" { + klog.Fatalf("The path to the cloud provider configuration file must be set via the --cloud-config command line parameter") + } + cloudConf, err := readConf(opts.CloudConfig) + if err != nil { + klog.Warningf("Failed to read cloud provider configuration: %v", err) + cloudConf = &cloudConfig{} + } + + if !cloudConf.validate() { + klog.Fatal("Failed to validate cloud provider configuration") + } + + manager, err := CreateVolcengineManager(cloudConf) + if err != nil { + klog.Fatalf("Failed to create volcengine manager: %v", err) + } + + 
provider, err := buildVolcengineProvider(manager, do, rl) + if err != nil { + klog.Fatalf("Failed to create volcengine cloud provider: %v", err) + } + + return provider +} + +func buildVolcengineProvider(manager VolcengineManager, do cloudprovider.NodeGroupDiscoveryOptions, rl *cloudprovider.ResourceLimiter) (cloudprovider.CloudProvider, error) { + if !do.StaticDiscoverySpecified() { + return nil, fmt.Errorf("static discovery configuration must be provided for volcengine cloud provider") + } + + provider := &volcengineCloudProvider{ + volcengineManager: manager, + resourceLimiter: rl, + } + + for _, spec := range do.NodeGroupSpecs { + if err := provider.addNodeGroup(spec); err != nil { + klog.Warningf("Failed to add node group from spec %s: %v", spec, err) + return nil, err + } + } + + return provider, nil +} + +func readConf(confFile string) (*cloudConfig, error) { + var conf io.ReadCloser + conf, err := os.Open(confFile) + if err != nil { + return nil, err + } + defer conf.Close() + + var cloudConfig cloudConfig + if err = gcfg.ReadInto(&cloudConfig, conf); err != nil { + return nil, err + } + + return &cloudConfig, nil +} + +func ecsInstanceFromProviderId(providerId string) (string, error) { + if !strings.HasPrefix(providerId, "volcengine://") { + return "", fmt.Errorf("providerId %q doesn't match prefix %q", providerId, "volcengine://") + } + return strings.TrimPrefix(providerId, "volcengine://"), nil +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine_ecs_cloud_service.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine_ecs_cloud_service.go new file mode 100644 index 000000000000..847537a14211 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine_ecs_cloud_service.go @@ -0,0 +1,61 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. 
+You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package volcengine + +import ( + "fmt" + + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/service/ecs" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/credentials" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine/session" +) + +// EcsService represents the ECS interfaces +type EcsService interface { + GetInstanceTypeById(instanceTypeId string) (*ecs.InstanceTypeForDescribeInstanceTypesOutput, error) +} + +type ecsService struct { + ecsClient *ecs.ECS +} + +// GetInstanceTypeById returns instance type info by given instance type id +func (e *ecsService) GetInstanceTypeById(instanceTypeId string) (*ecs.InstanceTypeForDescribeInstanceTypesOutput, error) { + resp, err := e.ecsClient.DescribeInstanceTypes(&ecs.DescribeInstanceTypesInput{ + InstanceTypeIds: volcengine.StringSlice([]string{instanceTypeId}), + }) + if err != nil { + return nil, err + } + if len(resp.InstanceTypes) == 0 || volcengine.StringValue(resp.InstanceTypes[0].InstanceTypeId) != instanceTypeId { + return nil, fmt.Errorf("instance type %s not found", instanceTypeId) + } + return resp.InstanceTypes[0], nil +} + +func newEcsService(cloudConfig *cloudConfig) EcsService { + config := volcengine.NewConfig(). + WithCredentials(credentials.NewStaticCredentials(cloudConfig.getAccessKey(), cloudConfig.getSecretKey(), "")). + WithRegion(cloudConfig.getRegion()). 
+ WithEndpoint(cloudConfig.getEndpoint()) + sess, _ := session.NewSession(config) + client := ecs.New(sess) + return &ecsService{ + ecsClient: client, + } +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine_manager.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine_manager.go new file mode 100644 index 000000000000..beceb1cc90c6 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine_manager.go @@ -0,0 +1,238 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package volcengine + +import ( + "fmt" + "math/rand" + + apiv1 "k8s.io/api/core/v1" + "k8s.io/apimachinery/pkg/api/resource" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk/volcengine" + "k8s.io/autoscaler/cluster-autoscaler/utils/gpu" + "k8s.io/klog/v2" +) + +// VolcengineManager defines the interface used by the Cloud Provider and Node Group implementations +type VolcengineManager interface { + // RegisterAsg registers the given ASG with the manager. + RegisterAsg(asg *AutoScalingGroup) + + // GetAsgForInstance returns the ASG of the given instance. + GetAsgForInstance(instanceId string) (*AutoScalingGroup, error) + + // GetAsgById returns the ASG of the given id. + GetAsgById(id string) (*AutoScalingGroup, error) + + // GetAsgDesireCapacity returns the desired capacity of the given ASG.
+ GetAsgDesireCapacity(asgId string) (int, error) + + // SetAsgTargetSize sets the target size of the given ASG. + SetAsgTargetSize(asgId string, targetSize int) error + + // DeleteScalingInstances deletes the given instances from the given ASG. + DeleteScalingInstances(asgId string, instanceIds []string) error + + // GetAsgNodes returns the scaling instance ids of the given ASG. + GetAsgNodes(asgId string) ([]cloudprovider.Instance, error) + + // SetAsgDesireCapacity sets the desired capacity of the given ASG. + SetAsgDesireCapacity(groupId string, desireCapacity int) error + + // getAsgTemplate returns the scaling configuration of the given ASG. + getAsgTemplate(groupId string) (*asgTemplate, error) + + // buildNodeFromTemplateName builds a node object from the given template. + buildNodeFromTemplateName(asgName string, template *asgTemplate) (*apiv1.Node, error) +} + +type asgTemplate struct { + vcpu int64 + memInMB int64 + gpu int64 + region string + zone string + instanceType string + tags map[string]string +} + +// volcengineManager handles volcengine service communication. 
+type volcengineManager struct { + cloudConfig *cloudConfig + asgs *autoScalingGroupsCache + + asgService AutoScalingService + ecsService EcsService +} + +func (v *volcengineManager) SetAsgDesireCapacity(groupId string, desireCapacity int) error { + return v.asgService.SetAsgDesireCapacity(groupId, desireCapacity) +} + +func (v *volcengineManager) GetAsgDesireCapacity(asgId string) (int, error) { + group, err := v.asgService.GetScalingGroupById(asgId) + if err != nil { + klog.Errorf("failed to get scaling group by id %s: %v", asgId, err) + return 0, err + } + return int(volcengine.Int32Value(group.DesireInstanceNumber)), nil +} + +func (v *volcengineManager) SetAsgTargetSize(asgId string, targetSize int) error { + return v.asgService.SetAsgTargetSize(asgId, targetSize) +} + +func (v *volcengineManager) DeleteScalingInstances(asgId string, instanceIds []string) error { + if len(instanceIds) == 0 { + klog.Infof("no instances to delete from scaling group %s", asgId) + return nil + } + klog.Infof("deleting instances %v from scaling group %s", instanceIds, asgId) + return v.asgService.RemoveInstances(asgId, instanceIds) +} + +func (v *volcengineManager) GetAsgNodes(asgId string) ([]cloudprovider.Instance, error) { + scalingInstances, err := v.asgService.ListScalingInstancesByGroupId(asgId) + if err != nil { + return nil, err + } + + instances := make([]cloudprovider.Instance, 0, len(scalingInstances)) + for _, scalingInstance := range scalingInstances { + if scalingInstance.InstanceId == nil { + klog.Warningf("scaling instance has no instance id") + continue + } + + instances = append(instances, cloudprovider.Instance{ + Id: getNodeProviderId(volcengine.StringValue(scalingInstance.InstanceId)), + }) + } + return instances, nil +} + +func getNodeProviderId(instanceId string) string { + return fmt.Sprintf("volcengine://%s", instanceId) +} + +func (v *volcengineManager) getAsgTemplate(groupId string) (*asgTemplate, error) { + group, err := 
v.asgService.GetScalingGroupById(groupId) + if err != nil { + klog.Errorf("failed to get scaling group by id %s: %v", groupId, err) + return nil, err + } + + configuration, err := v.asgService.GetScalingConfigurationById(volcengine.StringValue(group.ActiveScalingConfigurationId)) + if err != nil { + klog.Errorf("failed to get scaling configuration by id %s: %v", volcengine.StringValue(group.ActiveScalingConfigurationId), err) + return nil, err + } + + instanceType, err := v.ecsService.GetInstanceTypeById(volcengine.StringValue(configuration.InstanceTypes[0])) + if err != nil { + klog.Errorf("failed to get instance type by id %s: %v", volcengine.StringValue(configuration.InstanceTypes[0]), err) + return nil, err + } + + return &asgTemplate{ + vcpu: int64(volcengine.Int32Value(instanceType.Processor.Cpus)), + memInMB: int64(volcengine.Int32Value(instanceType.Memory.Size)), + region: v.cloudConfig.getRegion(), + instanceType: volcengine.StringValue(instanceType.InstanceTypeId), + tags: map[string]string{}, // TODO read tags from configuration + }, nil +} + +func (v *volcengineManager) buildNodeFromTemplateName(asgName string, template *asgTemplate) (*apiv1.Node, error) { + node := apiv1.Node{} + nodeName := fmt.Sprintf("%s-asg-%d", asgName, rand.Int63()) + + node.ObjectMeta = metav1.ObjectMeta{ + Name: nodeName, + SelfLink: fmt.Sprintf("/api/v1/nodes/%s", nodeName), + Labels: map[string]string{}, + } + + node.Status = apiv1.NodeStatus{ + Capacity: apiv1.ResourceList{}, + } + + node.Status.Capacity[apiv1.ResourcePods] = *resource.NewQuantity(110, resource.DecimalSI) + node.Status.Capacity[apiv1.ResourceCPU] = *resource.NewQuantity(template.vcpu, resource.DecimalSI) + node.Status.Capacity[apiv1.ResourceMemory] = *resource.NewQuantity(template.memInMB*1024*1024, resource.DecimalSI) + node.Status.Capacity[gpu.ResourceNvidiaGPU] = *resource.NewQuantity(template.gpu, resource.DecimalSI) + + node.Status.Allocatable = node.Status.Capacity + + node.Labels = 
cloudprovider.JoinStringMaps(node.Labels, buildGenericLabels(template, nodeName)) + + node.Status.Conditions = cloudprovider.BuildReadyConditions() + return &node, nil +} + +func buildGenericLabels(template *asgTemplate, nodeName string) map[string]string { + result := make(map[string]string) + result[apiv1.LabelArchStable] = cloudprovider.DefaultArch + result[apiv1.LabelOSStable] = cloudprovider.DefaultOS + + result[apiv1.LabelInstanceTypeStable] = template.instanceType + + result[apiv1.LabelTopologyRegion] = template.region + result[apiv1.LabelTopologyZone] = template.zone + result[apiv1.LabelHostname] = nodeName + + // append custom node labels + for key, value := range template.tags { + result[key] = value + } + + return result +} + +func (v *volcengineManager) GetAsgById(id string) (*AutoScalingGroup, error) { + asg, err := v.asgService.GetScalingGroupById(id) + if err != nil { + return nil, err + } + return &AutoScalingGroup{ + manager: v, + asgId: volcengine.StringValue(asg.ScalingGroupId), + minInstanceNumber: int(volcengine.Int32Value(asg.MinInstanceNumber)), + maxInstanceNumber: int(volcengine.Int32Value(asg.MaxInstanceNumber)), + }, nil +} + +func (v *volcengineManager) GetAsgForInstance(instanceId string) (*AutoScalingGroup, error) { + return v.asgs.FindForInstance(instanceId) +} + +func (v *volcengineManager) RegisterAsg(asg *AutoScalingGroup) { + v.asgs.Register(asg) +} + +// CreateVolcengineManager returns the VolcengineManager interface implementation +func CreateVolcengineManager(cloudConfig *cloudConfig) (VolcengineManager, error) { + asgCloudService := newAutoScalingService(cloudConfig) + return &volcengineManager{ + cloudConfig: cloudConfig, + asgs: newAutoScalingGroupsCache(asgCloudService), + asgService: asgCloudService, + ecsService: newEcsService(cloudConfig), + }, nil +} diff --git a/cluster-autoscaler/cloudprovider/volcengine/volcengine_util.go b/cluster-autoscaler/cloudprovider/volcengine/volcengine_util.go new file mode 100644 index 
000000000000..ee0b337f5bf1 --- /dev/null +++ b/cluster-autoscaler/cloudprovider/volcengine/volcengine_util.go @@ -0,0 +1,46 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package volcengine + +// StringSliceInGroupsOf splits arr into groups of at most num elements each +func StringSliceInGroupsOf(arr []string, num int64) [][]string { + if arr == nil { + return nil + } + sliceLen := int64(len(arr)) + if sliceLen <= num { + return [][]string{arr} + } + var quantity int64 + if sliceLen%num == 0 { + quantity = sliceLen / num + } else { + quantity = (sliceLen / num) + 1 + } + var segments = make([][]string, 0) + var start, end, i int64 + for i = 1; i <= quantity; i++ { + end = i * num + if i != quantity { + segments = append(segments, arr[start:end]) + } else { + segments = append(segments, arr[start:]) + } + start = i * num + } + return segments +} diff --git a/cluster-autoscaler/cloudprovider/vultr/OWNERS b/cluster-autoscaler/cloudprovider/vultr/OWNERS index 1248e0bafa81..104fd4653094 100644 --- a/cluster-autoscaler/cloudprovider/vultr/OWNERS +++ b/cluster-autoscaler/cloudprovider/vultr/OWNERS @@ -1,4 +1,9 @@ approvers: -#- ddymko +#- happytreees +#- optik-aper reviewers: -#- ddymko +#- happytreees +#- optik-aper + +labels: +- area/provider/vultr diff --git a/cluster-autoscaler/cloudprovider/vultr/vultr_node_group.go b/cluster-autoscaler/cloudprovider/vultr/vultr_node_group.go index 0b96d76b619d..a5960657a296 100644 ---
a/cluster-autoscaler/cloudprovider/vultr/vultr_node_group.go +++ b/cluster-autoscaler/cloudprovider/vultr/vultr_node_group.go @@ -103,13 +103,13 @@ func (n *NodeGroup) IncreaseSize(delta int) error { func (n *NodeGroup) DeleteNodes(nodes []*apiv1.Node) error { for _, node := range nodes { nodeID, ok := node.Labels[nodeIDLabel] + providerID := node.Spec.ProviderID - //todo review this if !ok { - // CA creates fake node objects to represent upcoming VMs that - // haven't registered as nodes yet. We cannot delete the node at - // this point. - return fmt.Errorf("cannot delete node %q with provider ID %q on node pool %q: node ID label %q is missing", node.Name, node.Spec.ProviderID, n.id, nodeIDLabel) + if providerID == "" { + return fmt.Errorf("cannot delete node %q on node pool %q: missing provider ID and node ID label %q", node.Name, n.id, nodeIDLabel) + } + nodeID = toNodeID(providerID) } err := n.client.DeleteNodePoolInstance(context.Background(), n.clusterID, n.id, nodeID) diff --git a/cluster-autoscaler/clusterstate/clusterstate.go b/cluster-autoscaler/clusterstate/clusterstate.go index 9665d83a431e..a190084c8177 100644 --- a/cluster-autoscaler/clusterstate/clusterstate.go +++ b/cluster-autoscaler/clusterstate/clusterstate.go @@ -28,7 +28,9 @@ import ( "k8s.io/autoscaler/cluster-autoscaler/clusterstate/api" "k8s.io/autoscaler/cluster-autoscaler/clusterstate/utils" "k8s.io/autoscaler/cluster-autoscaler/metrics" + "k8s.io/autoscaler/cluster-autoscaler/processors/nodegroupconfig" "k8s.io/autoscaler/cluster-autoscaler/utils/backoff" + "k8s.io/autoscaler/cluster-autoscaler/utils/gpu" kube_util "k8s.io/autoscaler/cluster-autoscaler/utils/kubernetes" "k8s.io/autoscaler/cluster-autoscaler/utils/taints" @@ -45,10 +47,9 @@ const ( MaxNodeStartupTime = 15 * time.Minute ) -type maxNodeProvisionTimeProvider interface { - // GetMaxNodeProvisionTime returns MaxNodeProvisionTime value that should be used for the given NodeGroup. 
- GetMaxNodeProvisionTime(nodeGroup cloudprovider.NodeGroup) (time.Duration, error) -} +var ( + errMaxNodeProvisionTimeProviderNotSet = errors.New("MaxNodeProvisionTimeProvider was not set in cluster state") +) // ScaleUpRequest contains information about the requested node group scale up. type ScaleUpRequest struct { @@ -135,7 +136,7 @@ type ClusterStateRegistry struct { previousCloudProviderNodeInstances map[string][]cloudprovider.Instance cloudProviderNodeInstancesCache *utils.CloudProviderNodeInstancesCache interrupt chan struct{} - maxNodeProvisionTimeProvider maxNodeProvisionTimeProvider + nodeGroupConfigProcessor nodegroupconfig.NodeGroupConfigProcessor // scaleUpFailures contains information about scale-up failures for each node group. It should be // cleared periodically to avoid unnecessary accumulation. @@ -143,7 +144,7 @@ type ClusterStateRegistry struct { } // NewClusterStateRegistry creates new ClusterStateRegistry. -func NewClusterStateRegistry(cloudProvider cloudprovider.CloudProvider, config ClusterStateRegistryConfig, logRecorder *utils.LogEventRecorder, backoff backoff.Backoff, maxNodeProvisionTimeProvider maxNodeProvisionTimeProvider) *ClusterStateRegistry { +func NewClusterStateRegistry(cloudProvider cloudprovider.CloudProvider, config ClusterStateRegistryConfig, logRecorder *utils.LogEventRecorder, backoff backoff.Backoff, nodeGroupConfigProcessor nodegroupconfig.NodeGroupConfigProcessor) *ClusterStateRegistry { emptyStatus := &api.ClusterAutoscalerStatus{ ClusterwideConditions: make([]api.ClusterAutoscalerCondition, 0), NodeGroupStatuses: make([]api.NodeGroupStatus, 0), @@ -167,7 +168,7 @@ func NewClusterStateRegistry(cloudProvider cloudprovider.CloudProvider, config C cloudProviderNodeInstancesCache: utils.NewCloudProviderNodeInstancesCache(cloudProvider), interrupt: make(chan struct{}), scaleUpFailures: make(map[string][]ScaleUpFailure), - maxNodeProvisionTimeProvider: maxNodeProvisionTimeProvider, + nodeGroupConfigProcessor: 
nodeGroupConfigProcessor, } } @@ -194,12 +195,13 @@ func (csr *ClusterStateRegistry) RegisterOrUpdateScaleUp(nodeGroup cloudprovider } // MaxNodeProvisionTime returns MaxNodeProvisionTime value that should be used for the given NodeGroup. +// TODO(BigDarkClown): remove this method entirely, it is a redundant wrapper func (csr *ClusterStateRegistry) MaxNodeProvisionTime(nodeGroup cloudprovider.NodeGroup) (time.Duration, error) { - return csr.maxNodeProvisionTimeProvider.GetMaxNodeProvisionTime(nodeGroup) + return csr.nodeGroupConfigProcessor.GetMaxNodeProvisionTime(nodeGroup) } func (csr *ClusterStateRegistry) registerOrUpdateScaleUpNoLock(nodeGroup cloudprovider.NodeGroup, delta int, currentTime time.Time) { - maxNodeProvisionTime, err := csr.maxNodeProvisionTimeProvider.GetMaxNodeProvisionTime(nodeGroup) + maxNodeProvisionTime, err := csr.MaxNodeProvisionTime(nodeGroup) if err != nil { klog.Warningf("Couldn't update scale up request: failed to get maxNodeProvisionTime for node group %s: %w", nodeGroup.Id(), err) return @@ -266,7 +268,15 @@ func (csr *ClusterStateRegistry) updateScaleRequests(currentTime time.Time) { csr.logRecorder.Eventf(apiv1.EventTypeWarning, "ScaleUpTimedOut", "Nodes added to group %s failed to register within %v", scaleUpRequest.NodeGroup.Id(), currentTime.Sub(scaleUpRequest.Time)) - csr.registerFailedScaleUpNoLock(scaleUpRequest.NodeGroup, metrics.Timeout, cloudprovider.OtherErrorClass, "timeout", currentTime) + availableGPUTypes := csr.cloudProvider.GetAvailableGPUTypes() + gpuResource, gpuType := "", "" + nodeInfo, err := scaleUpRequest.NodeGroup.TemplateNodeInfo() + if err != nil { + klog.Warningf("Failed to get template node info for a node group: %s", err) + } else { + gpuResource, gpuType = gpu.GetGpuInfoForMetrics(csr.cloudProvider.GetNodeGpuConfig(nodeInfo.Node()), availableGPUTypes, nodeInfo.Node(), scaleUpRequest.NodeGroup) + } + csr.registerFailedScaleUpNoLock(scaleUpRequest.NodeGroup, metrics.Timeout, 
cloudprovider.OtherErrorClass, "timeout", gpuResource, gpuType, currentTime) delete(csr.scaleUpRequests, nodeGroupName) } } @@ -290,15 +300,15 @@ func (csr *ClusterStateRegistry) backoffNodeGroup(nodeGroup cloudprovider.NodeGr // RegisterFailedScaleUp should be called after getting error from cloudprovider // when trying to scale-up node group. It will mark this group as not safe to autoscale // for some time. -func (csr *ClusterStateRegistry) RegisterFailedScaleUp(nodeGroup cloudprovider.NodeGroup, reason metrics.FailedScaleUpReason, currentTime time.Time) { +func (csr *ClusterStateRegistry) RegisterFailedScaleUp(nodeGroup cloudprovider.NodeGroup, reason metrics.FailedScaleUpReason, gpuResourceName, gpuType string, currentTime time.Time) { csr.Lock() defer csr.Unlock() - csr.registerFailedScaleUpNoLock(nodeGroup, reason, cloudprovider.OtherErrorClass, string(reason), currentTime) + csr.registerFailedScaleUpNoLock(nodeGroup, reason, cloudprovider.OtherErrorClass, string(reason), gpuResourceName, gpuType, currentTime) } -func (csr *ClusterStateRegistry) registerFailedScaleUpNoLock(nodeGroup cloudprovider.NodeGroup, reason metrics.FailedScaleUpReason, errorClass cloudprovider.InstanceErrorClass, errorCode string, currentTime time.Time) { +func (csr *ClusterStateRegistry) registerFailedScaleUpNoLock(nodeGroup cloudprovider.NodeGroup, reason metrics.FailedScaleUpReason, errorClass cloudprovider.InstanceErrorClass, errorCode string, gpuResourceName, gpuType string, currentTime time.Time) { csr.scaleUpFailures[nodeGroup.Id()] = append(csr.scaleUpFailures[nodeGroup.Id()], ScaleUpFailure{NodeGroup: nodeGroup, Reason: reason, Time: currentTime}) - metrics.RegisterFailedScaleUp(reason) + metrics.RegisterFailedScaleUp(reason, gpuResourceName, gpuType) csr.backoffNodeGroup(nodeGroup, errorClass, errorCode, currentTime) } @@ -605,7 +615,7 @@ func (csr *ClusterStateRegistry) updateReadinessStats(currentTime time.Time) { continue } perNgCopy := perNodeGroup[nodeGroup.Id()] - 
maxNodeProvisionTime, err := csr.maxNodeProvisionTimeProvider.GetMaxNodeProvisionTime(nodeGroup) + maxNodeProvisionTime, err := csr.MaxNodeProvisionTime(nodeGroup) if err != nil { klog.Warningf("Failed to get maxNodeProvisionTime for node %s in node group %s: %w", unregistered.Node.Name, nodeGroup.Id(), err) continue @@ -648,8 +658,9 @@ func (csr *ClusterStateRegistry) updateIncorrectNodeGroupSizes(currentTime time. } continue } - if len(readiness.Registered) > acceptableRange.MaxNodes || - len(readiness.Registered) < acceptableRange.MinNodes { + unregisteredNodes := len(readiness.Unregistered) + len(readiness.LongUnregistered) + if len(readiness.Registered) > acceptableRange.CurrentTarget || + len(readiness.Registered) < acceptableRange.CurrentTarget-unregisteredNodes { incorrect := IncorrectNodeGroupSize{ CurrentSize: len(readiness.Registered), ExpectedSize: acceptableRange.CurrentTarget, @@ -976,7 +987,9 @@ func (csr *ClusterStateRegistry) getCloudProviderNodeInstances() (map[string][]c return csr.cloudProviderNodeInstancesCache.GetCloudProviderNodeInstances() } -// Calculates which of the existing cloud provider nodes are not registered in Kubernetes. +// Calculates which of the existing cloud provider nodes are not yet registered in Kubernetes. +// As we expect those instances to become Ready soon (O(~minutes)), to speed up the scaling process +// we inject temporary fake nodes and continue scaling based on the in-memory cluster state.
func getNotRegisteredNodes(allNodes []*apiv1.Node, cloudProviderNodeInstances map[string][]cloudprovider.Instance, time time.Time) []UnregisteredNode { registered := sets.NewString() for _, node := range allNodes { @@ -985,9 +998,9 @@ func getNotRegisteredNodes(allNodes []*apiv1.Node, cloudProviderNodeInstances ma notRegistered := make([]UnregisteredNode, 0) for _, instances := range cloudProviderNodeInstances { for _, instance := range instances { - if !registered.Has(instance.Id) { + if !registered.Has(instance.Id) && expectedToRegister(instance) { notRegistered = append(notRegistered, UnregisteredNode{ - Node: fakeNode(instance, cloudprovider.FakeNodeUnregistered), + Node: FakeNode(instance, cloudprovider.FakeNodeUnregistered), UnregisteredSince: time, }) } @@ -996,6 +1009,10 @@ func getNotRegisteredNodes(allNodes []*apiv1.Node, cloudProviderNodeInstances ma return notRegistered } +func expectedToRegister(instance cloudprovider.Instance) bool { + return instance.Status != nil && instance.Status.State != cloudprovider.InstanceDeleting && instance.Status.ErrorInfo == nil +} + // Calculates which of the registered nodes in Kubernetes that do not exist in cloud provider. 
func (csr *ClusterStateRegistry) getCloudProviderDeletedNodes(allNodes []*apiv1.Node) []*apiv1.Node { nodesRemoved := make([]*apiv1.Node, 0) @@ -1085,9 +1102,17 @@ func (csr *ClusterStateRegistry) handleInstanceCreationErrorsForNodeGroup( errorCode, csr.buildErrorMessageEventString(currentUniqueErrorMessagesForErrorCode[errorCode])) + availableGPUTypes := csr.cloudProvider.GetAvailableGPUTypes() + gpuResource, gpuType := "", "" + nodeInfo, err := nodeGroup.TemplateNodeInfo() + if err != nil { + klog.Warningf("Failed to get template node info for a node group: %s", err) + } else { + gpuResource, gpuType = gpu.GetGpuInfoForMetrics(csr.cloudProvider.GetNodeGpuConfig(nodeInfo.Node()), availableGPUTypes, nodeInfo.Node(), nodeGroup) + } // Decrease the scale up request by the number of deleted nodes csr.registerOrUpdateScaleUpNoLock(nodeGroup, -len(unseenInstanceIds), currentTime) - csr.registerFailedScaleUpNoLock(nodeGroup, metrics.FailedScaleUpReason(errorCode.code), errorCode.class, errorCode.code, currentTime) + csr.registerFailedScaleUpNoLock(nodeGroup, metrics.FailedScaleUpReason(errorCode.code), errorCode.class, errorCode.code, gpuResource, gpuType, currentTime) } } } @@ -1156,7 +1181,7 @@ func (csr *ClusterStateRegistry) GetCreatedNodesWithErrors() []*apiv1.Node { _, _, instancesByErrorCode := csr.buildInstanceToErrorCodeMappings(nodeGroupInstances) for _, instances := range instancesByErrorCode { for _, instance := range instances { - nodesWithCreateErrors = append(nodesWithCreateErrors, fakeNode(instance, cloudprovider.FakeNodeCreateError)) + nodesWithCreateErrors = append(nodesWithCreateErrors, FakeNode(instance, cloudprovider.FakeNodeCreateError)) } } } @@ -1173,7 +1198,8 @@ func (csr *ClusterStateRegistry) InvalidateNodeInstancesCacheEntry(nodeGroup clo csr.cloudProviderNodeInstancesCache.InvalidateCacheEntry(nodeGroup) } -func fakeNode(instance cloudprovider.Instance, reason string) *apiv1.Node { +// FakeNode creates a fake node with Name field populated 
and FakeNodeReasonAnnotation added +func FakeNode(instance cloudprovider.Instance, reason string) *apiv1.Node { return &apiv1.Node{ ObjectMeta: metav1.ObjectMeta{ Name: instance.Id, diff --git a/cluster-autoscaler/clusterstate/clusterstate_test.go b/cluster-autoscaler/clusterstate/clusterstate_test.go index d0147009f126..b22a2fd5b7e0 100644 --- a/cluster-autoscaler/clusterstate/clusterstate_test.go +++ b/cluster-autoscaler/clusterstate/clusterstate_test.go @@ -21,7 +21,9 @@ import ( "testing" "time" + "k8s.io/autoscaler/cluster-autoscaler/config" "k8s.io/autoscaler/cluster-autoscaler/metrics" + "k8s.io/autoscaler/cluster-autoscaler/processors/nodegroupconfig" apiv1 "k8s.io/api/core/v1" metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" @@ -72,8 +74,7 @@ func TestOKWithScaleUp(t *testing.T) { clusterstate := NewClusterStateRegistry(provider, ClusterStateRegistryConfig{ MaxTotalUnreadyPercentage: 10, OkTotalUnreadyCount: 1, - }, fakeLogRecorder, newBackoff(), - NewStaticMaxNodeProvisionTimeProvider(time.Minute)) + }, fakeLogRecorder, newBackoff(), nodegroupconfig.NewDefaultNodeGroupConfigProcessor(config.NodeGroupAutoscalingOptions{MaxNodeProvisionTime: time.Minute})) clusterstate.RegisterOrUpdateScaleUp(provider.GetNodeGroup("ng1"), 4, time.Now()) err := clusterstate.UpdateNodes([]*apiv1.Node{ng1_1, ng2_1}, nil, now) assert.NoError(t, err) @@ -114,8 +115,7 @@ func TestEmptyOK(t *testing.T) { clusterstate := NewClusterStateRegistry(provider, ClusterStateRegistryConfig{ MaxTotalUnreadyPercentage: 10, OkTotalUnreadyCount: 1, - }, fakeLogRecorder, newBackoff(), - NewStaticMaxNodeProvisionTimeProvider(time.Minute)) + }, fakeLogRecorder, newBackoff(), nodegroupconfig.NewDefaultNodeGroupConfigProcessor(config.NodeGroupAutoscalingOptions{MaxNodeProvisionTime: time.Minute})) err := clusterstate.UpdateNodes([]*apiv1.Node{}, nil, now.Add(-5*time.Second)) assert.NoError(t, err) assert.True(t, clusterstate.IsClusterHealthy()) @@ -155,7 +155,7 @@ func TestOKOneUnreadyNode(t 
*testing.T) { clusterstate := NewClusterStateRegistry(provider, ClusterStateRegistryConfig{ MaxTotalUnreadyPercentage: 10, OkTotalUnreadyCount: 1, - }, fakeLogRecorder, newBackoff(), NewStaticMaxNodeProvisionTimeProvider(15*time.Minute)) + }, fakeLogRecorder, newBackoff(), nodegroupconfig.NewDefaultNodeGroupConfigProcessor(config.NodeGroupAutoscalingOptions{MaxNodeProvisionTime: 15 * time.Minute})) err := clusterstate.UpdateNodes([]*apiv1.Node{ng1_1, ng2_1}, nil, now) assert.NoError(t, err) assert.True(t, clusterstate.IsClusterHealthy()) @@ -193,8 +193,7 @@ func TestNodeWithoutNodeGroupDontCrash(t *testing.T) { clusterstate := NewClusterStateRegistry(provider, ClusterStateRegistryConfig{ MaxTotalUnreadyPercentage: 10, OkTotalUnreadyCount: 1, - }, fakeLogRecorder, newBackoff(), - NewStaticMaxNodeProvisionTimeProvider(15*time.Minute)) + }, fakeLogRecorder, newBackoff(), nodegroupconfig.NewDefaultNodeGroupConfigProcessor(config.NodeGroupAutoscalingOptions{MaxNodeProvisionTime: 15 * time.Minute})) err := clusterstate.UpdateNodes([]*apiv1.Node{noNgNode}, nil, now) assert.NoError(t, err) assert.Empty(t, clusterstate.GetScaleUpFailures()) @@ -221,8 +220,7 @@ func TestOKOneUnreadyNodeWithScaleDownCandidate(t *testing.T) { clusterstate := NewClusterStateRegistry(provider, ClusterStateRegistryConfig{ MaxTotalUnreadyPercentage: 10, OkTotalUnreadyCount: 1, - }, fakeLogRecorder, newBackoff(), - NewStaticMaxNodeProvisionTimeProvider(15*time.Minute)) + }, fakeLogRecorder, newBackoff(), nodegroupconfig.NewDefaultNodeGroupConfigProcessor(config.NodeGroupAutoscalingOptions{MaxNodeProvisionTime: 15 * time.Minute})) err := clusterstate.UpdateNodes([]*apiv1.Node{ng1_1, ng2_1}, nil, now) clusterstate.UpdateScaleDownCandidates([]*apiv1.Node{ng1_1}, now) @@ -287,8 +285,7 @@ func TestMissingNodes(t *testing.T) { clusterstate := NewClusterStateRegistry(provider, ClusterStateRegistryConfig{ MaxTotalUnreadyPercentage: 10, OkTotalUnreadyCount: 1, - }, fakeLogRecorder, newBackoff(), - 
NewStaticMaxNodeProvisionTimeProvider(15*time.Minute)) + }, fakeLogRecorder, newBackoff(), nodegroupconfig.NewDefaultNodeGroupConfigProcessor(config.NodeGroupAutoscalingOptions{MaxNodeProvisionTime: 15 * time.Minute})) err := clusterstate.UpdateNodes([]*apiv1.Node{ng1_1, ng2_1}, nil, now) assert.NoError(t, err) assert.True(t, clusterstate.IsClusterHealthy()) @@ -330,8 +327,7 @@ func TestTooManyUnready(t *testing.T) { clusterstate := NewClusterStateRegistry(provider, ClusterStateRegistryConfig{ MaxTotalUnreadyPercentage: 10, OkTotalUnreadyCount: 1, - }, fakeLogRecorder, newBackoff(), - NewStaticMaxNodeProvisionTimeProvider(15*time.Minute)) + }, fakeLogRecorder, newBackoff(), nodegroupconfig.NewDefaultNodeGroupConfigProcessor(config.NodeGroupAutoscalingOptions{MaxNodeProvisionTime: 15 * time.Minute})) err := clusterstate.UpdateNodes([]*apiv1.Node{ng1_1, ng2_1}, nil, now) assert.NoError(t, err) assert.False(t, clusterstate.IsClusterHealthy()) @@ -360,8 +356,7 @@ func TestUnreadyLongAfterCreation(t *testing.T) { clusterstate := NewClusterStateRegistry(provider, ClusterStateRegistryConfig{ MaxTotalUnreadyPercentage: 10, OkTotalUnreadyCount: 1, - }, fakeLogRecorder, newBackoff(), - NewStaticMaxNodeProvisionTimeProvider(15*time.Minute)) + }, fakeLogRecorder, newBackoff(), nodegroupconfig.NewDefaultNodeGroupConfigProcessor(config.NodeGroupAutoscalingOptions{MaxNodeProvisionTime: 15 * time.Minute})) err := clusterstate.UpdateNodes([]*apiv1.Node{ng1_1, ng2_1}, nil, now) assert.NoError(t, err) assert.Equal(t, 1, len(clusterstate.GetClusterReadiness().Unready)) @@ -393,8 +388,7 @@ func TestNotStarted(t *testing.T) { clusterstate := NewClusterStateRegistry(provider, ClusterStateRegistryConfig{ MaxTotalUnreadyPercentage: 10, OkTotalUnreadyCount: 1, - }, fakeLogRecorder, newBackoff(), - NewStaticMaxNodeProvisionTimeProvider(15*time.Minute)) + }, fakeLogRecorder, newBackoff(), 
nodegroupconfig.NewDefaultNodeGroupConfigProcessor(config.NodeGroupAutoscalingOptions{MaxNodeProvisionTime: 15 * time.Minute})) err := clusterstate.UpdateNodes([]*apiv1.Node{ng1_1, ng2_1}, nil, now) assert.NoError(t, err) assert.Equal(t, 1, len(clusterstate.GetClusterReadiness().NotStarted)) @@ -431,7 +425,7 @@ func TestExpiredScaleUp(t *testing.T) { clusterstate := NewClusterStateRegistry(provider, ClusterStateRegistryConfig{ MaxTotalUnreadyPercentage: 10, OkTotalUnreadyCount: 1, - }, fakeLogRecorder, newBackoff(), NewStaticMaxNodeProvisionTimeProvider(2*time.Minute)) + }, fakeLogRecorder, newBackoff(), nodegroupconfig.NewDefaultNodeGroupConfigProcessor(config.NodeGroupAutoscalingOptions{MaxNodeProvisionTime: 2 * time.Minute})) clusterstate.RegisterOrUpdateScaleUp(provider.GetNodeGroup("ng1"), 4, now.Add(-3*time.Minute)) err := clusterstate.UpdateNodes([]*apiv1.Node{ng1_1}, nil, now) assert.NoError(t, err) @@ -456,9 +450,7 @@ func TestRegisterScaleDown(t *testing.T) { clusterstate := NewClusterStateRegistry(provider, ClusterStateRegistryConfig{ MaxTotalUnreadyPercentage: 10, OkTotalUnreadyCount: 1, - }, fakeLogRecorder, newBackoff(), - NewStaticMaxNodeProvisionTimeProvider(15*time.Minute)) - + }, fakeLogRecorder, newBackoff(), nodegroupconfig.NewDefaultNodeGroupConfigProcessor(config.NodeGroupAutoscalingOptions{MaxNodeProvisionTime: 15 * time.Minute})) now := time.Now() clusterstate.RegisterScaleDown(&ScaleDownRequest{ @@ -526,8 +518,7 @@ func TestUpcomingNodes(t *testing.T) { clusterstate := NewClusterStateRegistry(provider, ClusterStateRegistryConfig{ MaxTotalUnreadyPercentage: 10, OkTotalUnreadyCount: 1, - }, fakeLogRecorder, newBackoff(), - NewStaticMaxNodeProvisionTimeProvider(15*time.Minute)) + }, fakeLogRecorder, newBackoff(), nodegroupconfig.NewDefaultNodeGroupConfigProcessor(config.NodeGroupAutoscalingOptions{MaxNodeProvisionTime: 15 * time.Minute})) err := clusterstate.UpdateNodes([]*apiv1.Node{ng1_1, ng2_1, ng3_1, ng4_1, ng5_1, ng5_2}, nil, now) 
assert.NoError(t, err) assert.Empty(t, clusterstate.GetScaleUpFailures()) @@ -574,8 +565,7 @@ func TestTaintBasedNodeDeletion(t *testing.T) { clusterstate := NewClusterStateRegistry(provider, ClusterStateRegistryConfig{ MaxTotalUnreadyPercentage: 10, OkTotalUnreadyCount: 1, - }, fakeLogRecorder, newBackoff(), - NewStaticMaxNodeProvisionTimeProvider(15*time.Minute)) + }, fakeLogRecorder, newBackoff(), nodegroupconfig.NewDefaultNodeGroupConfigProcessor(config.NodeGroupAutoscalingOptions{MaxNodeProvisionTime: 15 * time.Minute})) err := clusterstate.UpdateNodes([]*apiv1.Node{ng1_1, ng1_2}, nil, now) assert.NoError(t, err) assert.Empty(t, clusterstate.GetScaleUpFailures()) @@ -596,8 +586,7 @@ func TestIncorrectSize(t *testing.T) { clusterstate := NewClusterStateRegistry(provider, ClusterStateRegistryConfig{ MaxTotalUnreadyPercentage: 10, OkTotalUnreadyCount: 1, - }, fakeLogRecorder, newBackoff(), - NewStaticMaxNodeProvisionTimeProvider(15*time.Minute)) + }, fakeLogRecorder, newBackoff(), nodegroupconfig.NewDefaultNodeGroupConfigProcessor(config.NodeGroupAutoscalingOptions{MaxNodeProvisionTime: 15 * time.Minute})) now := time.Now() clusterstate.UpdateNodes([]*apiv1.Node{ng1_1}, nil, now.Add(-5*time.Minute)) incorrect := clusterstate.incorrectNodeGroupSizes["ng1"] @@ -633,8 +622,7 @@ func TestUnregisteredNodes(t *testing.T) { clusterstate := NewClusterStateRegistry(provider, ClusterStateRegistryConfig{ MaxTotalUnreadyPercentage: 10, OkTotalUnreadyCount: 1, - }, fakeLogRecorder, newBackoff(), - NewStaticMaxNodeProvisionTimeProvider(10*time.Second)) + }, fakeLogRecorder, newBackoff(), nodegroupconfig.NewDefaultNodeGroupConfigProcessor(config.NodeGroupAutoscalingOptions{MaxNodeProvisionTime: 10 * time.Second})) err := clusterstate.UpdateNodes([]*apiv1.Node{ng1_1}, nil, time.Now().Add(-time.Minute)) assert.NoError(t, err) @@ -683,8 +671,7 @@ func TestCloudProviderDeletedNodes(t *testing.T) { clusterstate := NewClusterStateRegistry(provider, ClusterStateRegistryConfig{ 
MaxTotalUnreadyPercentage: 10, OkTotalUnreadyCount: 1, - }, fakeLogRecorder, newBackoff(), - NewStaticMaxNodeProvisionTimeProvider(10*time.Second)) + }, fakeLogRecorder, newBackoff(), nodegroupconfig.NewDefaultNodeGroupConfigProcessor(config.NodeGroupAutoscalingOptions{MaxNodeProvisionTime: 10 * time.Second})) now.Add(time.Minute) err := clusterstate.UpdateNodes([]*apiv1.Node{ng1_1, ng1_2, noNgNode}, nil, now) @@ -885,8 +872,7 @@ func TestScaleUpBackoff(t *testing.T) { clusterstate := NewClusterStateRegistry(provider, ClusterStateRegistryConfig{ MaxTotalUnreadyPercentage: 10, OkTotalUnreadyCount: 1, - }, fakeLogRecorder, newBackoff(), - NewStaticMaxNodeProvisionTimeProvider(120*time.Second)) + }, fakeLogRecorder, newBackoff(), nodegroupconfig.NewDefaultNodeGroupConfigProcessor(config.NodeGroupAutoscalingOptions{MaxNodeProvisionTime: 120 * time.Second})) // After failed scale-up, node group should be still healthy, but should backoff from scale-ups clusterstate.RegisterOrUpdateScaleUp(provider.GetNodeGroup("ng1"), 1, now.Add(-180*time.Second)) @@ -953,8 +939,7 @@ func TestGetClusterSize(t *testing.T) { clusterstate := NewClusterStateRegistry(provider, ClusterStateRegistryConfig{ MaxTotalUnreadyPercentage: 10, OkTotalUnreadyCount: 1, - }, fakeLogRecorder, newBackoff(), - NewStaticMaxNodeProvisionTimeProvider(15*time.Minute)) + }, fakeLogRecorder, newBackoff(), nodegroupconfig.NewDefaultNodeGroupConfigProcessor(config.NodeGroupAutoscalingOptions{MaxNodeProvisionTime: 15 * time.Minute})) // There are 2 actual nodes in 2 node groups with target sizes of 5 and 1. 
clusterstate.UpdateNodes([]*apiv1.Node{ng1_1, ng2_1, notAutoscaledNode}, nil, now) @@ -1001,7 +986,8 @@ func TestUpdateScaleUp(t *testing.T) { }, fakeLogRecorder, newBackoff(), - NewStaticMaxNodeProvisionTimeProvider(10*time.Second)) + nodegroupconfig.NewDefaultNodeGroupConfigProcessor(config.NodeGroupAutoscalingOptions{MaxNodeProvisionTime: 10 * time.Second}), + ) clusterstate.RegisterOrUpdateScaleUp(provider.GetNodeGroup("ng1"), 100, now) assert.Equal(t, clusterstate.scaleUpRequests["ng1"].Increase, 100) @@ -1039,11 +1025,11 @@ func TestScaleUpFailures(t *testing.T) { fakeClient := &fake.Clientset{} fakeLogRecorder, _ := utils.NewStatusMapRecorder(fakeClient, "kube-system", kube_record.NewFakeRecorder(5), false, "my-cool-configmap") - clusterstate := NewClusterStateRegistry(provider, ClusterStateRegistryConfig{}, fakeLogRecorder, newBackoff(), NewStaticMaxNodeProvisionTimeProvider(15*time.Minute)) + clusterstate := NewClusterStateRegistry(provider, ClusterStateRegistryConfig{}, fakeLogRecorder, newBackoff(), nodegroupconfig.NewDefaultNodeGroupConfigProcessor(config.NodeGroupAutoscalingOptions{MaxNodeProvisionTime: 15 * time.Minute})) - clusterstate.RegisterFailedScaleUp(provider.GetNodeGroup("ng1"), metrics.Timeout, now) - clusterstate.RegisterFailedScaleUp(provider.GetNodeGroup("ng2"), metrics.Timeout, now) - clusterstate.RegisterFailedScaleUp(provider.GetNodeGroup("ng1"), metrics.APIError, now.Add(time.Minute)) + clusterstate.RegisterFailedScaleUp(provider.GetNodeGroup("ng1"), metrics.Timeout, "", "", now) + clusterstate.RegisterFailedScaleUp(provider.GetNodeGroup("ng2"), metrics.Timeout, "", "", now) + clusterstate.RegisterFailedScaleUp(provider.GetNodeGroup("ng1"), metrics.APIError, "", "", now.Add(time.Minute)) failures := clusterstate.GetScaleUpFailures() assert.Equal(t, map[string][]ScaleUpFailure{ @@ -1064,3 +1050,319 @@ func newBackoff() backoff.Backoff { return backoff.NewIdBasedExponentialBackoff(5*time.Minute, /*InitialNodeGroupBackoffDuration*/ 
 		30*time.Minute /*MaxNodeGroupBackoffDuration*/, 3*time.Hour /*NodeGroupBackoffResetTimeout*/)
 }
+
+func TestUpdateAcceptableRanges(t *testing.T) {
+	testCases := []struct {
+		name string
+
+		targetSizes      map[string]int
+		readiness        map[string]Readiness
+		scaleUpRequests  map[string]*ScaleUpRequest
+		scaledDownGroups []string
+
+		wantAcceptableRanges map[string]AcceptableRange
+	}{
+		{
+			name: "No scale-ups/scale-downs",
+			targetSizes: map[string]int{
+				"ng1": 10,
+				"ng2": 20,
+			},
+			readiness: map[string]Readiness{
+				"ng1": {Ready: make([]string, 10)},
+				"ng2": {Ready: make([]string, 20)},
+			},
+			wantAcceptableRanges: map[string]AcceptableRange{
+				"ng1": {MinNodes: 10, MaxNodes: 10, CurrentTarget: 10},
+				"ng2": {MinNodes: 20, MaxNodes: 20, CurrentTarget: 20},
+			},
+		},
+		{
+			name: "Ongoing scale-ups",
+			targetSizes: map[string]int{
+				"ng1": 10,
+				"ng2": 20,
+			},
+			readiness: map[string]Readiness{
+				"ng1": {Ready: make([]string, 10)},
+				"ng2": {Ready: make([]string, 20)},
+			},
+			scaleUpRequests: map[string]*ScaleUpRequest{
+				"ng1": {Increase: 3},
+				"ng2": {Increase: 5},
+			},
+			wantAcceptableRanges: map[string]AcceptableRange{
+				"ng1": {MinNodes: 7, MaxNodes: 10, CurrentTarget: 10},
+				"ng2": {MinNodes: 15, MaxNodes: 20, CurrentTarget: 20},
+			},
+		},
+		{
+			name: "Ongoing scale-downs",
+			targetSizes: map[string]int{
+				"ng1": 10,
+				"ng2": 20,
+			},
+			readiness: map[string]Readiness{
+				"ng1": {Ready: make([]string, 10)},
+				"ng2": {Ready: make([]string, 20)},
+			},
+			scaledDownGroups: []string{"ng1", "ng1", "ng2", "ng2", "ng2"},
+			wantAcceptableRanges: map[string]AcceptableRange{
+				"ng1": {MinNodes: 10, MaxNodes: 12, CurrentTarget: 10},
+				"ng2": {MinNodes: 20, MaxNodes: 23, CurrentTarget: 20},
+			},
+		},
+		{
+			name: "Some short unregistered nodes",
+			targetSizes: map[string]int{
+				"ng1": 10,
+				"ng2": 20,
+			},
+			readiness: map[string]Readiness{
+				"ng1": {Ready: make([]string, 8), Unregistered: make([]string, 2)},
+				"ng2": {Ready: make([]string, 17), Unregistered: make([]string, 3)},
+			},
+			wantAcceptableRanges: map[string]AcceptableRange{
+				"ng1": {MinNodes: 10, MaxNodes: 10, CurrentTarget: 10},
+				"ng2": {MinNodes: 20, MaxNodes: 20, CurrentTarget: 20},
+			},
+		},
+		{
+			name: "Some long unregistered nodes",
+			targetSizes: map[string]int{
+				"ng1": 10,
+				"ng2": 20,
+			},
+			readiness: map[string]Readiness{
+				"ng1": {Ready: make([]string, 8), LongUnregistered: make([]string, 2)},
+				"ng2": {Ready: make([]string, 17), LongUnregistered: make([]string, 3)},
+			},
+			wantAcceptableRanges: map[string]AcceptableRange{
+				"ng1": {MinNodes: 8, MaxNodes: 10, CurrentTarget: 10},
+				"ng2": {MinNodes: 17, MaxNodes: 20, CurrentTarget: 20},
+			},
+		},
+		{
+			name: "Everything together",
+			targetSizes: map[string]int{
+				"ng1": 10,
+				"ng2": 20,
+			},
+			readiness: map[string]Readiness{
+				"ng1": {Ready: make([]string, 8), Unregistered: make([]string, 1), LongUnregistered: make([]string, 2)},
+				"ng2": {Ready: make([]string, 17), Unregistered: make([]string, 3), LongUnregistered: make([]string, 4)},
+			},
+			scaleUpRequests: map[string]*ScaleUpRequest{
+				"ng1": {Increase: 3},
+				"ng2": {Increase: 5},
+			},
+			scaledDownGroups: []string{"ng1", "ng1", "ng2", "ng2", "ng2"},
+			wantAcceptableRanges: map[string]AcceptableRange{
+				"ng1": {MinNodes: 5, MaxNodes: 12, CurrentTarget: 10},
+				"ng2": {MinNodes: 11, MaxNodes: 23, CurrentTarget: 20},
+			},
+		},
+	}
+
+	for _, tc := range testCases {
+		t.Run(tc.name, func(t *testing.T) {
+			provider := testprovider.NewTestCloudProvider(nil, nil)
+			for nodeGroupName, targetSize := range tc.targetSizes {
+				provider.AddNodeGroup(nodeGroupName, 0, 1000, targetSize)
+			}
+			var scaleDownRequests []*ScaleDownRequest
+			for _, nodeGroupName := range tc.scaledDownGroups {
+				scaleDownRequests = append(scaleDownRequests, &ScaleDownRequest{
+					NodeGroup: provider.GetNodeGroup(nodeGroupName),
+				})
+			}
+
+			clusterState := &ClusterStateRegistry{
+				cloudProvider:         provider,
+				perNodeGroupReadiness: tc.readiness,
+				scaleUpRequests:       tc.scaleUpRequests,
+				scaleDownRequests:     scaleDownRequests,
+			}
+
+			clusterState.updateAcceptableRanges(tc.targetSizes)
+			assert.Equal(t, tc.wantAcceptableRanges, clusterState.acceptableRanges)
+		})
+	}
+}
+
+func TestUpdateIncorrectNodeGroupSizes(t *testing.T) {
+	timeNow := time.Now()
+	testCases := []struct {
+		name string
+
+		acceptableRanges map[string]AcceptableRange
+		readiness        map[string]Readiness
+		incorrectSizes   map[string]IncorrectNodeGroupSize
+
+		wantIncorrectSizes map[string]IncorrectNodeGroupSize
+	}{
+		{
+			name: "node groups with correct sizes",
+			acceptableRanges: map[string]AcceptableRange{
+				"ng1": {CurrentTarget: 10},
+				"ng2": {CurrentTarget: 20},
+			},
+			readiness: map[string]Readiness{
+				"ng1": {Registered: make([]string, 10)},
+				"ng2": {Registered: make([]string, 20)},
+			},
+			incorrectSizes:     map[string]IncorrectNodeGroupSize{},
+			wantIncorrectSizes: map[string]IncorrectNodeGroupSize{},
+		},
+		{
+			name: "node groups with correct sizes after not being correct sized",
+			acceptableRanges: map[string]AcceptableRange{
+				"ng1": {CurrentTarget: 10},
+				"ng2": {CurrentTarget: 20},
+			},
+			readiness: map[string]Readiness{
+				"ng1": {Registered: make([]string, 10)},
+				"ng2": {Registered: make([]string, 20)},
+			},
+			incorrectSizes: map[string]IncorrectNodeGroupSize{
+				"ng1": {CurrentSize: 8, ExpectedSize: 10, FirstObserved: timeNow.Add(-time.Hour)},
+				"ng2": {CurrentSize: 15, ExpectedSize: 20, FirstObserved: timeNow.Add(-time.Minute)},
+			},
+			wantIncorrectSizes: map[string]IncorrectNodeGroupSize{},
+		},
+		{
+			name: "node groups below the target size",
+			acceptableRanges: map[string]AcceptableRange{
+				"ng1": {CurrentTarget: 10},
+				"ng2": {CurrentTarget: 20},
+			},
+			readiness: map[string]Readiness{
+				"ng1": {Registered: make([]string, 8)},
+				"ng2": {Registered: make([]string, 15)},
+			},
+			incorrectSizes: map[string]IncorrectNodeGroupSize{},
+			wantIncorrectSizes: map[string]IncorrectNodeGroupSize{
+				"ng1": {CurrentSize: 8, ExpectedSize: 10, FirstObserved: timeNow},
+				"ng2": {CurrentSize: 15, ExpectedSize: 20, FirstObserved: timeNow},
+			},
+		},
+		{
+			name: "node groups above the target size",
+			acceptableRanges: map[string]AcceptableRange{
+				"ng1": {CurrentTarget: 10},
+				"ng2": {CurrentTarget: 20},
+			},
+			readiness: map[string]Readiness{
+				"ng1": {Registered: make([]string, 12)},
+				"ng2": {Registered: make([]string, 25)},
+			},
+			incorrectSizes: map[string]IncorrectNodeGroupSize{},
+			wantIncorrectSizes: map[string]IncorrectNodeGroupSize{
+				"ng1": {CurrentSize: 12, ExpectedSize: 10, FirstObserved: timeNow},
+				"ng2": {CurrentSize: 25, ExpectedSize: 20, FirstObserved: timeNow},
+			},
+		},
+		{
+			name: "node groups below the target size with changed delta",
+			acceptableRanges: map[string]AcceptableRange{
+				"ng1": {CurrentTarget: 10},
+				"ng2": {CurrentTarget: 20},
+			},
+			readiness: map[string]Readiness{
+				"ng1": {Registered: make([]string, 8)},
+				"ng2": {Registered: make([]string, 15)},
+			},
+			incorrectSizes: map[string]IncorrectNodeGroupSize{
+				"ng1": {CurrentSize: 7, ExpectedSize: 10, FirstObserved: timeNow.Add(-time.Hour)},
+				"ng2": {CurrentSize: 14, ExpectedSize: 20, FirstObserved: timeNow.Add(-time.Minute)},
+			},
+			wantIncorrectSizes: map[string]IncorrectNodeGroupSize{
+				"ng1": {CurrentSize: 8, ExpectedSize: 10, FirstObserved: timeNow},
+				"ng2": {CurrentSize: 15, ExpectedSize: 20, FirstObserved: timeNow},
+			},
+		},
+		{
+			name: "node groups below the target size with the same delta",
+			acceptableRanges: map[string]AcceptableRange{
+				"ng1": {CurrentTarget: 10},
+				"ng2": {CurrentTarget: 20},
+			},
+			readiness: map[string]Readiness{
+				"ng1": {Registered: make([]string, 8)},
+				"ng2": {Registered: make([]string, 15)},
+			},
+			incorrectSizes: map[string]IncorrectNodeGroupSize{
+				"ng1": {CurrentSize: 8, ExpectedSize: 10, FirstObserved: timeNow.Add(-time.Hour)},
+				"ng2": {CurrentSize: 15, ExpectedSize: 20, FirstObserved: timeNow.Add(-time.Minute)},
+			},
+			wantIncorrectSizes: map[string]IncorrectNodeGroupSize{
+				"ng1": {CurrentSize: 8, ExpectedSize: 10, FirstObserved: timeNow.Add(-time.Hour)},
+				"ng2": {CurrentSize: 15, ExpectedSize: 20, FirstObserved: timeNow.Add(-time.Minute)},
+			},
+		},
+		{
+			name: "node groups below the target size with short unregistered nodes",
+			acceptableRanges: map[string]AcceptableRange{
+				"ng1": {CurrentTarget: 10},
+				"ng2": {CurrentTarget: 20},
+			},
+			readiness: map[string]Readiness{
+				"ng1": {Registered: make([]string, 8), Unregistered: make([]string, 2)},
+				"ng2": {Registered: make([]string, 15), Unregistered: make([]string, 3)},
+			},
+			incorrectSizes: map[string]IncorrectNodeGroupSize{},
+			wantIncorrectSizes: map[string]IncorrectNodeGroupSize{
+				"ng2": {CurrentSize: 15, ExpectedSize: 20, FirstObserved: timeNow},
+			},
+		},
+		{
+			name: "node groups below the target size with long unregistered nodes",
+			acceptableRanges: map[string]AcceptableRange{
+				"ng1": {CurrentTarget: 10},
+				"ng2": {CurrentTarget: 20},
+			},
+			readiness: map[string]Readiness{
+				"ng1": {Registered: make([]string, 8), LongUnregistered: make([]string, 2)},
+				"ng2": {Registered: make([]string, 15), LongUnregistered: make([]string, 3)},
+			},
+			incorrectSizes: map[string]IncorrectNodeGroupSize{},
+			wantIncorrectSizes: map[string]IncorrectNodeGroupSize{
+				"ng2": {CurrentSize: 15, ExpectedSize: 20, FirstObserved: timeNow},
+			},
+		},
+		{
+			name: "node groups below the target size with various unregistered nodes",
+			acceptableRanges: map[string]AcceptableRange{
+				"ng1": {CurrentTarget: 10},
+				"ng2": {CurrentTarget: 20},
+			},
+			readiness: map[string]Readiness{
+				"ng1": {Registered: make([]string, 8), Unregistered: make([]string, 1), LongUnregistered: make([]string, 1)},
+				"ng2": {Registered: make([]string, 15), Unregistered: make([]string, 2), LongUnregistered: make([]string, 2)},
+			},
+			incorrectSizes: map[string]IncorrectNodeGroupSize{},
+			wantIncorrectSizes: map[string]IncorrectNodeGroupSize{
+				"ng2": {CurrentSize: 15, ExpectedSize: 20, FirstObserved: timeNow},
+			},
+		},
+	}
+
+	for _, tc := range testCases {
+		t.Run(tc.name, func(t *testing.T) {
+			provider := testprovider.NewTestCloudProvider(nil, nil)
+			for nodeGroupName, acceptableRange := range tc.acceptableRanges {
+				provider.AddNodeGroup(nodeGroupName, 0, 1000, acceptableRange.CurrentTarget)
+			}
+
+			clusterState := &ClusterStateRegistry{
+				cloudProvider:           provider,
+				acceptableRanges:        tc.acceptableRanges,
+				perNodeGroupReadiness:   tc.readiness,
+				incorrectNodeGroupSizes: tc.incorrectSizes,
+			}
+
+			clusterState.updateIncorrectNodeGroupSizes(timeNow)
+			assert.Equal(t, tc.wantIncorrectSizes, clusterState.incorrectNodeGroupSizes)
+		})
+	}
+}
diff --git a/cluster-autoscaler/clusterstate/max_node_provision_time_provider.go b/cluster-autoscaler/clusterstate/max_node_provision_time_provider.go
index 2c37b9c0f911..d34e143d67c1 100644
--- a/cluster-autoscaler/clusterstate/max_node_provision_time_provider.go
+++ b/cluster-autoscaler/clusterstate/max_node_provision_time_provider.go
@@ -16,39 +16,39 @@ limitations under the License.
 
 package clusterstate
 
-import (
-	"time"
-
-	"k8s.io/autoscaler/cluster-autoscaler/cloudprovider"
-	"k8s.io/autoscaler/cluster-autoscaler/context"
-	"k8s.io/autoscaler/cluster-autoscaler/processors/nodegroupconfig"
-)
-
-// NewDefaultMaxNodeProvisionTimeProvider returns the default maxNodeProvisionTimeProvider which uses the NodeGroupConfigProcessor.
-func NewDefaultMaxNodeProvisionTimeProvider(context *context.AutoscalingContext, nodeGroupConfigProcessor nodegroupconfig.NodeGroupConfigProcessor) maxNodeProvisionTimeProvider {
-	return &defultMaxNodeProvisionTimeProvider{context: context, nodeGroupConfigProcessor: nodeGroupConfigProcessor}
-}
-
-type defultMaxNodeProvisionTimeProvider struct {
-	context                  *context.AutoscalingContext
-	nodeGroupConfigProcessor nodegroupconfig.NodeGroupConfigProcessor
-}
-
-// GetMaxNodeProvisionTime returns MaxNodeProvisionTime value that should be used for the given NodeGroup.
-func (p *defultMaxNodeProvisionTimeProvider) GetMaxNodeProvisionTime(nodeGroup cloudprovider.NodeGroup) (time.Duration, error) {
-	return p.nodeGroupConfigProcessor.GetMaxNodeProvisionTime(p.context, nodeGroup)
-}
-
-// NewStaticMaxNodeProvisionTimeProvider returns static maxNodeProvisionTimeProvider which returns constant MaxNodeProvisionTime for every NodeGroup. Can be used for convenient testing.
-func NewStaticMaxNodeProvisionTimeProvider(maxNodeProvisionTime time.Duration) maxNodeProvisionTimeProvider {
-	return &staticMaxNodeProvisionTimeProvider{maxNodeProvisionTime}
-}
-
-type staticMaxNodeProvisionTimeProvider struct {
-	staticMaxNodeProvisionTime time.Duration
-}
-
-// GetMaxNodeProvisionTime returns constant MaxNodeProvisionTime value that should be used for every NodeGroup.
-func (p *staticMaxNodeProvisionTimeProvider) GetMaxNodeProvisionTime(cloudprovider.NodeGroup) (time.Duration, error) {
-	return p.staticMaxNodeProvisionTime, nil
-}
+// import (
+// 	"time"
+
+// 	"k8s.io/autoscaler/cluster-autoscaler/cloudprovider"
+// 	"k8s.io/autoscaler/cluster-autoscaler/context"
+// 	"k8s.io/autoscaler/cluster-autoscaler/processors/nodegroupconfig"
+// )
+
+// // NewDefaultMaxNodeProvisionTimeProvider returns the default maxNodeProvisionTimeProvider which uses the NodeGroupConfigProcessor.
+// func NewDefaultMaxNodeProvisionTimeProvider(context *context.AutoscalingContext, nodeGroupConfigProcessor nodegroupconfig.NodeGroupConfigProcessor) maxNodeProvisionTimeProvider {
+// 	return &defultMaxNodeProvisionTimeProvider{context: context, nodeGroupConfigProcessor: nodeGroupConfigProcessor}
+// }
+
+// type defultMaxNodeProvisionTimeProvider struct {
+// 	context                  *context.AutoscalingContext
+// 	nodeGroupConfigProcessor nodegroupconfig.NodeGroupConfigProcessor
+// }
+
+// // GetMaxNodeProvisionTime returns MaxNodeProvisionTime value that should be used for the given NodeGroup.
+// func (p *defultMaxNodeProvisionTimeProvider) GetMaxNodeProvisionTime(nodeGroup cloudprovider.NodeGroup) (time.Duration, error) {
+// 	return p.nodeGroupConfigProcessor.GetMaxNodeProvisionTime(p.context, nodeGroup)
+// }
+
+// // NewStaticMaxNodeProvisionTimeProvider returns static maxNodeProvisionTimeProvider which returns constant MaxNodeProvisionTime for every NodeGroup. Can be used for convenient testing.
+// func NewStaticMaxNodeProvisionTimeProvider(maxNodeProvisionTime time.Duration) maxNodeProvisionTimeProvider {
+// 	return &staticMaxNodeProvisionTimeProvider{maxNodeProvisionTime}
+// }
+
+// type staticMaxNodeProvisionTimeProvider struct {
+// 	staticMaxNodeProvisionTime time.Duration
+// }
+
+// // GetMaxNodeProvisionTime returns constant MaxNodeProvisionTime value that should be used for every NodeGroup.
+// func (p *staticMaxNodeProvisionTimeProvider) GetMaxNodeProvisionTime(cloudprovider.NodeGroup) (time.Duration, error) {
+// 	return p.staticMaxNodeProvisionTime, nil
+// }
diff --git a/cluster-autoscaler/clusterstate/utils/node_instances_cache_test.go b/cluster-autoscaler/clusterstate/utils/node_instances_cache_test.go
index 68006a5a1f19..54881fd62974 100644
--- a/cluster-autoscaler/clusterstate/utils/node_instances_cache_test.go
+++ b/cluster-autoscaler/clusterstate/utils/node_instances_cache_test.go
@@ -29,20 +29,20 @@ import (
 func TestCloudProviderNodeInstancesCache(t *testing.T) {
 	// Fresh entry for node group in cache.
 	nodeNg1_1 := BuildTestNode("ng1-1", 1000, 1000)
-	instanceNg1_1 := cloudprovider.Instance{Id: nodeNg1_1.Name}
+	instanceNg1_1 := buildRunningInstance(nodeNg1_1.Name)
 	// Fresh entry for node group in cache - checks Invalidate function.
 	nodeNg2_1 := BuildTestNode("ng2-1", 1000, 1000)
-	instanceNg2_1 := cloudprovider.Instance{Id: nodeNg2_1.Name}
+	instanceNg2_1 := buildRunningInstance(nodeNg2_1.Name)
 	nodeNg2_2 := BuildTestNode("ng2-2", 1000, 1000)
-	instanceNg2_2 := cloudprovider.Instance{Id: nodeNg2_2.Name}
+	instanceNg2_2 := buildRunningInstance(nodeNg2_2.Name)
 	// Stale entry for node group in cache - check Refresh function.
 	nodeNg3_1 := BuildTestNode("ng3-1", 1000, 1000)
-	instanceNg3_1 := cloudprovider.Instance{Id: nodeNg3_1.Name}
+	instanceNg3_1 := buildRunningInstance(nodeNg3_1.Name)
 	nodeNg3_2 := BuildTestNode("ng3-2", 1000, 1000)
-	instanceNg3_2 := cloudprovider.Instance{Id: nodeNg3_2.Name}
+	instanceNg3_2 := buildRunningInstance(nodeNg3_2.Name)
 	// Removed node group.
 	nodeNg4_1 := BuildTestNode("ng4-1", 1000, 1000)
-	instanceNg4_1 := cloudprovider.Instance{Id: nodeNg4_1.Name}
+	instanceNg4_1 := buildRunningInstance(nodeNg4_1.Name)
 
 	provider := testprovider.NewTestCloudProvider(nil, nil)
 	provider.AddNodeGroup("ng1", 1, 10, 1)
@@ -90,3 +90,12 @@ func TestCloudProviderNodeInstancesCache(t *testing.T) {
 	assert.Equal(t, map[string][]cloudprovider.Instance{"ng1": {instanceNg1_1}, "ng2": {instanceNg2_2}, "ng3": {instanceNg3_2}}, results)
 	assert.Equal(t, 3, len(cache.cloudProviderNodeInstances))
 }
+
+func buildRunningInstance(name string) cloudprovider.Instance {
+	return cloudprovider.Instance{
+		Id: name,
+		Status: &cloudprovider.InstanceStatus{
+			State: cloudprovider.InstanceRunning,
+		},
+	}
+}
diff --git a/cluster-autoscaler/clusterstate/utils/status.go b/cluster-autoscaler/clusterstate/utils/status.go
index 3636abf72138..c1917b44b1c9 100644
--- a/cluster-autoscaler/clusterstate/utils/status.go
+++ b/cluster-autoscaler/clusterstate/utils/status.go
@@ -91,6 +91,9 @@ func WriteStatusConfigMap(kubeClient kube_client.Interface, namespace string, ms
 	maps := kubeClient.CoreV1().ConfigMaps(namespace)
 	configMap, getStatusError = maps.Get(context.TODO(), statusConfigMapName, metav1.GetOptions{})
 	if getStatusError == nil {
+		if configMap.Data == nil {
+			configMap.Data = make(map[string]string)
+		}
 		configMap.Data["status"] = statusMsg
 		if configMap.ObjectMeta.Annotations == nil {
 			configMap.ObjectMeta.Annotations = make(map[string]string)
diff --git a/cluster-autoscaler/clusterstate/utils/status_test.go b/cluster-autoscaler/clusterstate/utils/status_test.go
index a0dc5dab82bb..ff96a2f0f00e 100644
--- a/cluster-autoscaler/clusterstate/utils/status_test.go
+++ b/cluster-autoscaler/clusterstate/utils/status_test.go
@@ -95,6 +95,17 @@ func TestWriteStatusConfigMapExisting(t *testing.T) {
 	assert.True(t, ti.getCalled)
 	assert.True(t, ti.updateCalled)
 	assert.False(t, ti.createCalled)
+
+	// Test the case where the existing ConfigMap has nil Data.
+	ti.configMap.Data = nil
+	result, err = WriteStatusConfigMap(ti.client, ti.namespace, "TEST_MSG", nil, "my-cool-configmap")
+	assert.Equal(t, ti.configMap, result)
+	assert.Contains(t, result.Data["status"], "TEST_MSG")
+	assert.Contains(t, result.ObjectMeta.Annotations, ConfigMapLastUpdatedKey)
+	assert.Nil(t, err)
+	assert.True(t, ti.getCalled)
+	assert.True(t, ti.updateCalled)
+	assert.False(t, ti.createCalled)
 }
 
 func TestWriteStatusConfigMapCreate(t *testing.T) {
diff --git a/cluster-autoscaler/config/autoscaling_options.go b/cluster-autoscaler/config/autoscaling_options.go
index 3241d230851d..a1d616286dc2 100644
--- a/cluster-autoscaler/config/autoscaling_options.go
+++ b/cluster-autoscaler/config/autoscaling_options.go
@@ -18,6 +18,8 @@ package config
 
 import (
 	"time"
+
+	scheduler_config "k8s.io/kubernetes/pkg/scheduler/apis/config"
 )
 
 // GpuLimits define lower and upper bound on GPU instances of given type in cluster
@@ -46,6 +48,10 @@ type NodeGroupAutoscalingOptions struct {
 	ScaleDownUnreadyTime time.Duration
 	// Maximum time CA waits for node to be provisioned
 	MaxNodeProvisionTime time.Duration
+	// ZeroOrMaxNodeScaling means that a node group should be scaled up to maximum size or down to zero nodes all at once instead of one-by-one.
+	ZeroOrMaxNodeScaling bool
+	// IgnoreDaemonSetsUtilization sets if daemonsets utilization should be considered during node scale-down
+	IgnoreDaemonSetsUtilization bool
 }
 
 // GCEOptions contain autoscaling options specific to GCE cloud provider.
@@ -123,8 +129,6 @@ type AutoscalingOptions struct {
 	GRPCExpanderCert string
 	// GRPCExpanderURL is the url of the gRPC server when using the gRPC expander
 	GRPCExpanderURL string
-	// IgnoreDaemonSetsUtilization is whether CA will ignore DaemonSet pods when calculating resource utilization for scaling down
-	IgnoreDaemonSetsUtilization bool
 	// IgnoreMirrorPodsUtilization is whether CA will ignore Mirror pods when calculating resource utilization for scaling down
 	IgnoreMirrorPodsUtilization bool
 	// MaxGracefulTerminationSec is maximum number of seconds scale down waits for pods to terminate before
@@ -136,6 +140,8 @@ type AutoscalingOptions struct {
 	OkTotalUnreadyCount int
 	// ScaleUpFromZero defines if CA should scale up when there 0 ready nodes.
 	ScaleUpFromZero bool
+	// ParallelScaleUp defines whether CA can scale up node groups in parallel.
+	ParallelScaleUp bool
 	// CloudConfig is the path to the cloud provider configuration file. Empty string for no configuration file.
 	CloudConfig string
 	// CloudProviderName sets the type of the cloud provider CA is about to run in. Allowed values: gce, aws
@@ -170,6 +176,9 @@ type AutoscalingOptions struct {
 	// ScaleDownSimulationTimeout defines the maximum time that can be
 	// spent on scale down simulation.
 	ScaleDownSimulationTimeout time.Duration
+	// SchedulerConfig allows changing configuration of in-tree
+	// scheduler plugins acting on PreFilter and Filter extension points
+	SchedulerConfig *scheduler_config.KubeSchedulerConfiguration
 	// NodeDeletionDelayTimeout is maximum time CA waits for removing delay-deletion.cluster-autoscaler.kubernetes.io/ annotations before deleting the node.
 	NodeDeletionDelayTimeout time.Duration
 
 	// WriteStatusConfigMap tells if the status information should be written to a ConfigMap
diff --git a/cluster-autoscaler/config/const.go b/cluster-autoscaler/config/const.go
index 532b41a37d69..1025ac9059d7 100644
--- a/cluster-autoscaler/config/const.go
+++ b/cluster-autoscaler/config/const.go
@@ -16,7 +16,13 @@ limitations under the License.
 
 package config
 
+import "time"
+
 const (
+	// SchedulerConfigFileFlag is the name of the flag
+	// for passing in custom scheduler config for in-tree scheduler plugins
+	SchedulerConfigFileFlag = "scheduler-config-file"
+
 	// DefaultMaxClusterCores is the default maximum number of cores in the cluster.
 	DefaultMaxClusterCores = 5000 * 64
 	// DefaultMaxClusterMemory is the default maximum number of gigabytes of memory in cluster.
@@ -32,4 +38,14 @@ const (
 	DefaultScaleDownUnreadyTimeKey = "scaledownunreadytime"
 	// DefaultMaxNodeProvisionTimeKey identifies MaxNodeProvisionTime autoscaling option
 	DefaultMaxNodeProvisionTimeKey = "maxnodeprovisiontime"
+	// DefaultIgnoreDaemonSetsUtilizationKey identifies IgnoreDaemonSetsUtilization autoscaling option
+	DefaultIgnoreDaemonSetsUtilizationKey = "ignoredaemonsetsutilization"
+	// DefaultScaleDownUnneededTime identifies ScaleDownUnneededTime autoscaling option
+	DefaultScaleDownUnneededTime = 10 * time.Minute
+	// DefaultScaleDownUnreadyTime identifies ScaleDownUnreadyTime autoscaling option
+	DefaultScaleDownUnreadyTime = 20 * time.Minute
+	// DefaultScaleDownUtilizationThreshold identifies ScaleDownUtilizationThreshold autoscaling option
+	DefaultScaleDownUtilizationThreshold = 0.5
+	// DefaultScaleDownGpuUtilizationThreshold identifies ScaleDownGpuUtilizationThreshold autoscaling option
+	DefaultScaleDownGpuUtilizationThreshold = 0.5
 )
diff --git a/cluster-autoscaler/config/test/config.go b/cluster-autoscaler/config/test/config.go
new file mode 100644
index 000000000000..c5708ac54efd
--- /dev/null
+++ b/cluster-autoscaler/config/test/config.go
@@ -0,0 +1,68 @@
+/*
+Copyright 2023 The Kubernetes Authors.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package test
+
+const (
+	// Custom scheduler configs for testing
+
+	// SchedulerConfigNodeResourcesFitDisabled is scheduler config
+	// with `NodeResourcesFit` plugin disabled
+	SchedulerConfigNodeResourcesFitDisabled = `
+apiVersion: kubescheduler.config.k8s.io/v1
+kind: KubeSchedulerConfiguration
+profiles:
+- pluginConfig:
+  plugins:
+    multiPoint:
+      disabled:
+      - name: NodeResourcesFit
+        weight: 1
+  schedulerName: custom-scheduler`
+
+	// SchedulerConfigTaintTolerationDisabled is scheduler config
+	// with `TaintToleration` plugin disabled
+	SchedulerConfigTaintTolerationDisabled = `
+apiVersion: kubescheduler.config.k8s.io/v1
+kind: KubeSchedulerConfiguration
+profiles:
+- pluginConfig:
+  plugins:
+    multiPoint:
+      disabled:
+      - name: TaintToleration
+        weight: 1
+  schedulerName: custom-scheduler`
+
+	// SchedulerConfigMinimalCorrect is the minimal
+	// correct scheduler config
+	SchedulerConfigMinimalCorrect = `
+apiVersion: kubescheduler.config.k8s.io/v1
+kind: KubeSchedulerConfiguration`
+
+	// SchedulerConfigDecodeErr is the scheduler config
+	// which throws decoding error when we try to load it
+	SchedulerConfigDecodeErr = `
+kind: KubeSchedulerConfiguration`
+
+	// SchedulerConfigInvalid is invalid scheduler config
+	// because we specify percentageOfNodesToScore > 100
+	SchedulerConfigInvalid = `
+apiVersion: kubescheduler.config.k8s.io/v1
+kind: KubeSchedulerConfiguration
+# percentageOfNodesToScore has to be between 0 and 100
+percentageOfNodesToScore: 130`
+)
diff --git a/cluster-autoscaler/context/autoscaling_context.go b/cluster-autoscaler/context/autoscaling_context.go
index ffee60eca5cb..040d0db8261e 100644
--- a/cluster-autoscaler/context/autoscaling_context.go
+++ b/cluster-autoscaler/context/autoscaling_context.go
@@ -18,6 +18,7 @@ package context
 
 import (
 	"k8s.io/autoscaler/cluster-autoscaler/cloudprovider"
+	"k8s.io/autoscaler/cluster-autoscaler/clusterstate"
 	"k8s.io/autoscaler/cluster-autoscaler/clusterstate/utils"
 	"k8s.io/autoscaler/cluster-autoscaler/config"
 	"k8s.io/autoscaler/cluster-autoscaler/core/scaledown"
@@ -60,6 +61,8 @@ type AutoscalingContext struct {
 	ScaleDownActuator scaledown.Actuator
 	// RemainingPdbTracker tracks the remaining pod disruption budget
 	RemainingPdbTracker pdb.RemainingPdbTracker
+	// ClusterStateRegistry tracks the health of the node groups and pending scale-ups and scale-downs
+	ClusterStateRegistry *clusterstate.ClusterStateRegistry
 }
 
 // AutoscalingKubeClients contains all Kubernetes API clients,
@@ -105,7 +108,9 @@ func NewAutoscalingContext(
 	estimatorBuilder estimator.EstimatorBuilder,
 	processorCallbacks processor_callbacks.ProcessorCallbacks,
 	debuggingSnapshotter debuggingsnapshot.DebuggingSnapshotter,
-	remainingPdbTracker pdb.RemainingPdbTracker) *AutoscalingContext {
+	remainingPdbTracker pdb.RemainingPdbTracker,
+	clusterStateRegistry *clusterstate.ClusterStateRegistry,
+) *AutoscalingContext {
 	return &AutoscalingContext{
 		AutoscalingOptions:   options,
 		CloudProvider:        cloudProvider,
@@ -117,6 +122,7 @@ func NewAutoscalingContext(
 		ProcessorCallbacks:   processorCallbacks,
 		DebuggingSnapshotter: debuggingSnapshotter,
 		RemainingPdbTracker:  remainingPdbTracker,
+		ClusterStateRegistry: clusterStateRegistry,
 	}
 }
 
diff --git a/cluster-autoscaler/core/autoscaler.go b/cluster-autoscaler/core/autoscaler.go
index 39f60257d06b..c8f7075ac448 100644
--- a/cluster-autoscaler/core/autoscaler.go
+++ b/cluster-autoscaler/core/autoscaler.go
@@ -31,6 +31,7 @@ import (
 	"k8s.io/autoscaler/cluster-autoscaler/expander"
 	"k8s.io/autoscaler/cluster-autoscaler/expander/factory"
 	ca_processors "k8s.io/autoscaler/cluster-autoscaler/processors"
+	"k8s.io/autoscaler/cluster-autoscaler/simulator"
 	"k8s.io/autoscaler/cluster-autoscaler/simulator/clustersnapshot"
 	"k8s.io/autoscaler/cluster-autoscaler/simulator/predicatechecker"
 	"k8s.io/autoscaler/cluster-autoscaler/utils/backoff"
@@ -54,6 +55,7 @@ type AutoscalerOptions struct {
 	DebuggingSnapshotter debuggingsnapshot.DebuggingSnapshotter
 	RemainingPdbTracker  pdb.RemainingPdbTracker
 	ScaleUpOrchestrator  scaleup.Orchestrator
+	DeleteOptions        simulator.NodeDeleteOptions
 }
 
 // Autoscaler is the main component of CA which scales up/down node groups according to its configuration
@@ -85,13 +87,14 @@ func NewAutoscaler(opts AutoscalerOptions) (Autoscaler, errors.AutoscalerError)
 		opts.Backoff,
 		opts.DebuggingSnapshotter,
 		opts.RemainingPdbTracker,
-		opts.ScaleUpOrchestrator), nil
+		opts.ScaleUpOrchestrator,
+		opts.DeleteOptions), nil
 }
 
 // Initialize default options if not provided.
 func initializeDefaultOptions(opts *AutoscalerOptions) error {
 	if opts.Processors == nil {
-		opts.Processors = ca_processors.DefaultProcessors()
+		opts.Processors = ca_processors.DefaultProcessors(opts.AutoscalingOptions)
 	}
 	if opts.AutoscalingKubeClients == nil {
 		opts.AutoscalingKubeClients = context.NewAutoscalingKubeClients(opts.AutoscalingOptions, opts.KubeClient, opts.EventsKubeClient)
@@ -115,7 +118,17 @@ func initializeDefaultOptions(opts *AutoscalerOptions) error {
 		opts.ExpanderStrategy = expanderStrategy
 	}
 	if opts.EstimatorBuilder == nil {
-		estimatorBuilder, err := estimator.NewEstimatorBuilder(opts.EstimatorName, estimator.NewThresholdBasedEstimationLimiter(opts.MaxNodesPerScaleUp, opts.MaxNodeGroupBinpackingDuration))
+		thresholds := []estimator.Threshold{
+			estimator.NewStaticThreshold(opts.MaxNodesPerScaleUp, opts.MaxNodeGroupBinpackingDuration),
+			estimator.NewSngCapacityThreshold(),
+			estimator.NewClusterCapacityThreshold(),
+		}
+		estimatorBuilder, err := estimator.NewEstimatorBuilder(
+			opts.EstimatorName,
+			estimator.NewThresholdBasedEstimationLimiter(thresholds),
+			estimator.NewDecreasingPodOrderer(),
+			/* EstimationAnalyserFunc */ nil,
+		)
 		if err != nil {
 			return err
 		}
diff --git a/cluster-autoscaler/core/scaledown/actuation/actuator.go b/cluster-autoscaler/core/scaledown/actuation/actuator.go
index 5dd52498eb5b..08110fd3e030 100644
--- a/cluster-autoscaler/core/scaledown/actuation/actuator.go
+++ b/cluster-autoscaler/core/scaledown/actuation/actuator.go
@@ -17,7 +17,6 @@ limitations under the License.
 
 package actuation
 
 import (
-	"reflect"
 	"strings"
 	"time"
 
@@ -29,6 +28,7 @@ import (
 	"k8s.io/autoscaler/cluster-autoscaler/clusterstate"
 	"k8s.io/autoscaler/cluster-autoscaler/context"
 	"k8s.io/autoscaler/cluster-autoscaler/core/scaledown"
+	"k8s.io/autoscaler/cluster-autoscaler/core/scaledown/budgets"
 	"k8s.io/autoscaler/cluster-autoscaler/core/scaledown/deletiontracker"
 	"k8s.io/autoscaler/cluster-autoscaler/core/scaledown/status"
 	"k8s.io/autoscaler/cluster-autoscaler/core/utils"
@@ -37,32 +37,42 @@ import (
 	"k8s.io/autoscaler/cluster-autoscaler/simulator/clustersnapshot"
 	"k8s.io/autoscaler/cluster-autoscaler/simulator/utilization"
 	"k8s.io/autoscaler/cluster-autoscaler/utils/errors"
-	"k8s.io/autoscaler/cluster-autoscaler/utils/gpu"
 	kube_util "k8s.io/autoscaler/cluster-autoscaler/utils/kubernetes"
 	"k8s.io/autoscaler/cluster-autoscaler/utils/taints"
-	"k8s.io/kubernetes/pkg/scheduler/framework"
 )
 
 // Actuator is responsible for draining and deleting nodes.
 type Actuator struct {
-	ctx                 *context.AutoscalingContext
-	clusterState        *clusterstate.ClusterStateRegistry
-	nodeDeletionTracker *deletiontracker.NodeDeletionTracker
-	nodeDeletionBatcher *NodeDeletionBatcher
-	evictor             Evictor
-	deleteOptions       simulator.NodeDeleteOptions
+	ctx                   *context.AutoscalingContext
+	clusterState          *clusterstate.ClusterStateRegistry
+	nodeDeletionTracker   *deletiontracker.NodeDeletionTracker
+	nodeDeletionScheduler *GroupDeletionScheduler
+	deleteOptions         simulator.NodeDeleteOptions
+	// TODO: Move budget processor to scaledown planner, potentially merge into PostFilteringScaleDownNodeProcessor
+	// This is a larger change to the code structure which impacts some existing actuator unit tests
+	// as well as Cluster Autoscaler implementations that may override ScaleDownSetProcessor
+	budgetProcessor *budgets.ScaleDownBudgetProcessor
+	configGetter    actuatorNodeGroupConfigGetter
+}
+
+// actuatorNodeGroupConfigGetter is an interface to limit the functions that can be used
+// from NodeGroupConfigProcessor interface
+type actuatorNodeGroupConfigGetter interface {
+	// GetIgnoreDaemonSetsUtilization returns IgnoreDaemonSetsUtilization value that should be used for a given NodeGroup.
+	GetIgnoreDaemonSetsUtilization(nodeGroup cloudprovider.NodeGroup) (bool, error)
 }
 
 // NewActuator returns a new instance of Actuator.
-func NewActuator(ctx *context.AutoscalingContext, csr *clusterstate.ClusterStateRegistry, ndt *deletiontracker.NodeDeletionTracker, deleteOptions simulator.NodeDeleteOptions) *Actuator {
-	nbd := NewNodeDeletionBatcher(ctx, csr, ndt, ctx.NodeDeletionBatcherInterval)
+func NewActuator(ctx *context.AutoscalingContext, csr *clusterstate.ClusterStateRegistry, ndt *deletiontracker.NodeDeletionTracker, deleteOptions simulator.NodeDeleteOptions, configGetter actuatorNodeGroupConfigGetter) *Actuator {
+	ndb := NewNodeDeletionBatcher(ctx, csr, ndt, ctx.NodeDeletionBatcherInterval)
 	return &Actuator{
-		ctx:                 ctx,
-		clusterState:        csr,
-		nodeDeletionTracker: ndt,
-		nodeDeletionBatcher: nbd,
-		evictor:             NewDefaultEvictor(deleteOptions, ndt),
-		deleteOptions:       deleteOptions,
+		ctx:                   ctx,
+		clusterState:          csr,
+		nodeDeletionTracker:   ndt,
+		nodeDeletionScheduler: NewGroupDeletionScheduler(ctx, ndt, ndb, NewDefaultEvictor(deleteOptions, ndt)),
+		budgetProcessor:       budgets.NewScaleDownBudgetProcessor(ctx),
+		deleteOptions:         deleteOptions,
+		configGetter:          configGetter,
 	}
 }
 
@@ -78,13 +88,14 @@ func (a *Actuator) ClearResultsNotNewerThan(t time.Time) {
 
 // StartDeletion triggers a new deletion process.
 func (a *Actuator) StartDeletion(empty, drain []*apiv1.Node) (*status.ScaleDownStatus, errors.AutoscalerError) {
+	a.nodeDeletionScheduler.ReportMetrics()
 	deletionStartTime := time.Now()
 	defer func() { metrics.UpdateDuration(metrics.ScaleDownNodeDeletion, time.Now().Sub(deletionStartTime)) }()
 
 	results, ts := a.nodeDeletionTracker.DeletionResults()
 	scaleDownStatus := &status.ScaleDownStatus{NodeDeleteResults: results, NodeDeleteResultsAsOf: ts}
 
-	emptyToDelete, drainToDelete := a.cropNodesToBudgets(empty, drain)
+	emptyToDelete, drainToDelete := a.budgetProcessor.CropNodes(a.nodeDeletionTracker, empty, drain)
 	if len(emptyToDelete) == 0 && len(drainToDelete) == 0 {
 		scaleDownStatus.Result = status.ScaleDownNoNodeDeleted
 		return scaleDownStatus, nil
@@ -97,12 +108,8 @@ func (a *Actuator) StartDeletion(empty, drain []*apiv1.Node) (*status.ScaleDownS
 			return scaleDownStatus, err
 		}
 
-		emptyScaledDown, err := a.deleteAsyncEmpty(emptyToDelete)
+		emptyScaledDown := a.deleteAsyncEmpty(emptyToDelete)
 		scaleDownStatus.ScaledDownNodes = append(scaleDownStatus.ScaledDownNodes, emptyScaledDown...)
-		if err != nil {
-			scaleDownStatus.Result = status.ScaleDownError
-			return scaleDownStatus, err
-		}
 	}
 
 	if len(drainToDelete) > 0 {
@@ -122,116 +129,77 @@ func (a *Actuator) StartDeletion(empty, drain []*apiv1.Node) (*status.ScaleDownS
 	return scaleDownStatus, nil
 }
 
-// cropNodesToBudgets crops the provided node lists to respect scale-down max parallelism budgets.
-func (a *Actuator) cropNodesToBudgets(empty, needDrain []*apiv1.Node) ([]*apiv1.Node, []*apiv1.Node) { - emptyInProgress, drainInProgress := a.nodeDeletionTracker.DeletionsInProgress() - parallelismBudget := a.ctx.MaxScaleDownParallelism - len(emptyInProgress) - len(drainInProgress) - drainBudget := a.ctx.MaxDrainParallelism - len(drainInProgress) - - var emptyToDelete []*apiv1.Node - for _, node := range empty { - if len(emptyToDelete) >= parallelismBudget { - break - } - emptyToDelete = append(emptyToDelete, node) - } - - parallelismBudgetLeft := parallelismBudget - len(emptyToDelete) - drainBudget = min(parallelismBudgetLeft, drainBudget) - - var drainToDelete []*apiv1.Node - for _, node := range needDrain { - if len(drainToDelete) >= drainBudget { - break - } - drainToDelete = append(drainToDelete, node) - } - - return emptyToDelete, drainToDelete -} - // deleteAsyncEmpty immediately starts deletions asynchronously. -// scaledDownNodes return value contains all nodes for which deletion successfully started. It's valid and should be consumed -// even if err != nil. -func (a *Actuator) deleteAsyncEmpty(empty []*apiv1.Node) (scaledDownNodes []*status.ScaleDownNode, err errors.AutoscalerError) { - var groupIds []string - var validNodes []*apiv1.Node - for _, emptyNode := range empty { - klog.V(0).Infof("Scale-down: removing empty node %q", emptyNode.Name) - a.ctx.LogRecorder.Eventf(apiv1.EventTypeNormal, "ScaleDownEmpty", "Scale-down: removing empty node %q", emptyNode.Name) - - nodeGroup, err := a.ctx.CloudProvider.NodeGroupForNode(emptyNode) - if err != nil || nodeGroup == nil || reflect.ValueOf(nodeGroup).IsNil() { - klog.Errorf("Failed to find node group for %s: %v", emptyNode.Name, err) - continue - } +// scaledDownNodes return value contains all nodes for which deletion successfully started. 
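The `cropNodesToBudgets` logic deleted above moves into `budgets.ScaleDownBudgetProcessor.CropNodes`. Its arithmetic can be sketched with plain counts (a simplification assuming simple node counts; the upstream processor also groups nodes into per-node-group buckets): empty nodes consume the overall parallelism budget first, and drain nodes are capped by both the remaining overall budget and the dedicated drain budget.

```go
package main

import "fmt"

// minInt returns the smaller of two ints.
func minInt(a, b int) int {
	if a < b {
		return a
	}
	return b
}

// cropToBudgets mirrors the removed budget math: subtract in-progress
// deletions from the configured parallelism limits, fill the empty-node
// allowance first, then bound drain nodes by whatever budget remains.
func cropToBudgets(maxParallel, maxDrain, emptyInProgress, drainInProgress, empty, drain int) (emptyToDelete, drainToDelete int) {
	parallelismBudget := maxParallel - emptyInProgress - drainInProgress
	if parallelismBudget < 0 {
		parallelismBudget = 0
	}
	drainBudget := maxDrain - drainInProgress
	if drainBudget < 0 {
		drainBudget = 0
	}
	emptyToDelete = minInt(empty, parallelismBudget)
	// Drain deletions are limited by both the leftover overall budget
	// and the drain-specific budget.
	drainBudget = minInt(parallelismBudget-emptyToDelete, drainBudget)
	drainToDelete = minInt(drain, drainBudget)
	return emptyToDelete, drainToDelete
}

func main() {
	// MaxScaleDownParallelism=10, MaxDrainParallelism=5, one empty and one
	// drain deletion already in progress, 8 empty + 5 drain candidates:
	e, d := cropToBudgets(10, 5, 1, 1, 8, 5)
	fmt.Println(e, d) // 8 0
}
```

These counts match the (also removed) `TestCropNodesToBudgets` expectations further down, e.g. "empty&drain nodes exceeding overall limit" yields 8 empty and 2 drain with limits 10/5.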
+func (a *Actuator) deleteAsyncEmpty(NodeGroupViews []*budgets.NodeGroupView) (reportedSDNodes []*status.ScaleDownNode) { + for _, bucket := range NodeGroupViews { + for _, node := range bucket.Nodes { + klog.V(0).Infof("Scale-down: removing empty node %q", node.Name) + a.ctx.LogRecorder.Eventf(apiv1.EventTypeNormal, "ScaleDownEmpty", "Scale-down: removing empty node %q", node.Name) + + if sdNode, err := a.scaleDownNodeToReport(node, false); err == nil { + reportedSDNodes = append(reportedSDNodes, sdNode) + } else { + klog.Errorf("Scale-down: couldn't report scaled down node, err: %v", err) + } - if sdNode, err := a.scaleDownNodeToReport(emptyNode, false); err == nil { - scaledDownNodes = append(scaledDownNodes, sdNode) - } else { - klog.Errorf("Scale-down: couldn't report scaled down node, err: %v", err) + a.nodeDeletionTracker.StartDeletion(bucket.Group.Id(), node.Name) } - - a.nodeDeletionTracker.StartDeletion(nodeGroup.Id(), emptyNode.Name) - groupIds = append(groupIds, nodeGroup.Id()) - validNodes = append(validNodes, emptyNode) } - go a.deleteNodesAsync(validNodes, groupIds, false) + for _, bucket := range NodeGroupViews { + go a.deleteNodesAsync(bucket.Nodes, bucket.Group, false, bucket.BatchSize) + } - return scaledDownNodes, nil + return reportedSDNodes } // taintNodesSync synchronously taints all provided nodes with NoSchedule. If tainting fails for any of the nodes, already // applied taints are cleaned up. -func (a *Actuator) taintNodesSync(nodes []*apiv1.Node) errors.AutoscalerError { +func (a *Actuator) taintNodesSync(NodeGroupViews []*budgets.NodeGroupView) errors.AutoscalerError { var taintedNodes []*apiv1.Node - for _, node := range nodes { - err := a.taintNode(node) - if err != nil { - a.ctx.Recorder.Eventf(node, apiv1.EventTypeWarning, "ScaleDownFailed", "failed to mark the node as toBeDeleted/unschedulable: %v", err) - // Clean up already applied taints in case of issues. 
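The rewritten `deleteAsyncEmpty` fans out one goroutine per `budgets.NodeGroupView` bucket instead of one per node, carrying the bucket's `BatchSize` along so atomic groups can be removed as a unit. A self-contained sketch of that fan-out, including the `batchSize == 0` default applied later in `deleteNodesAsync` (stand-in types; `effectiveBatches` is illustrative, not upstream code):

```go
package main

import (
	"fmt"
	"sync"
)

// NodeGroupView is a simplified stand-in for budgets.NodeGroupView: a group
// id, the nodes picked for deletion, and a BatchSize (0 = whole bucket at once).
type NodeGroupView struct {
	GroupID   string
	Nodes     []string
	BatchSize int
}

// effectiveBatches sketches the per-bucket fan-out: one worker goroutine per
// node group, with the patch's `if batchSize == 0 { batchSize = len(nodes) }`
// default applied inside each worker. It returns the batch size per group.
func effectiveBatches(buckets []*NodeGroupView) map[string]int {
	var (
		mu  sync.Mutex
		out = make(map[string]int)
		wg  sync.WaitGroup
	)
	for _, bucket := range buckets {
		wg.Add(1)
		go func(b *NodeGroupView) {
			defer wg.Done()
			batch := b.BatchSize
			if batch == 0 {
				batch = len(b.Nodes) // delete the whole bucket in one batch
			}
			mu.Lock()
			out[b.GroupID] = batch
			mu.Unlock()
		}(bucket)
	}
	wg.Wait()
	return out
}

func main() {
	got := effectiveBatches([]*NodeGroupView{
		{GroupID: "test-ng", Nodes: []string{"n0", "n1"}},                // default batch
		{GroupID: "atomic-4", Nodes: []string{"n0", "n1"}, BatchSize: 4}, // atomic group
	})
	fmt.Println(got["test-ng"], got["atomic-4"]) // 2 4
}
```

Grouping by bucket also explains why `StartDeletion` no longer returns an error from the empty path: per-node failures are reported through the deletion scheduler instead of aborting the whole batch.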
- for _, taintedNode := range taintedNodes { - _, _ = taints.CleanToBeDeleted(taintedNode, a.ctx.ClientSet, a.ctx.CordonNodeBeforeTerminate) + for _, bucket := range NodeGroupViews { + for _, node := range bucket.Nodes { + err := a.taintNode(node) + if err != nil { + a.ctx.Recorder.Eventf(node, apiv1.EventTypeWarning, "ScaleDownFailed", "failed to mark the node as toBeDeleted/unschedulable: %v", err) + // Clean up already applied taints in case of issues. + for _, taintedNode := range taintedNodes { + _, _ = taints.CleanToBeDeleted(taintedNode, a.ctx.ClientSet, a.ctx.CordonNodeBeforeTerminate) + } + return errors.NewAutoscalerError(errors.ApiCallError, "couldn't taint node %q with ToBeDeleted", node) } - return errors.NewAutoscalerError(errors.ApiCallError, "couldn't taint node %q with ToBeDeleted", node) + taintedNodes = append(taintedNodes, node) } - taintedNodes = append(taintedNodes, node) } return nil } // deleteAsyncDrain asynchronously starts deletions with drain for all provided nodes. scaledDownNodes return value contains all nodes for which // deletion successfully started. 
-func (a *Actuator) deleteAsyncDrain(drain []*apiv1.Node) (scaledDownNodes []*status.ScaleDownNode) { - var groupIds []string - var validNodes []*apiv1.Node - for _, drainNode := range drain { - if sdNode, err := a.scaleDownNodeToReport(drainNode, true); err == nil { - klog.V(0).Infof("Scale-down: removing node %s, utilization: %v, pods to reschedule: %s", drainNode.Name, sdNode.UtilInfo, joinPodNames(sdNode.EvictedPods)) - a.ctx.LogRecorder.Eventf(apiv1.EventTypeNormal, "ScaleDown", "Scale-down: removing node %s, utilization: %v, pods to reschedule: %s", drainNode.Name, sdNode.UtilInfo, joinPodNames(sdNode.EvictedPods)) - scaledDownNodes = append(scaledDownNodes, sdNode) - } else { - klog.Errorf("Scale-down: couldn't report scaled down node, err: %v", err) - } +func (a *Actuator) deleteAsyncDrain(NodeGroupViews []*budgets.NodeGroupView) (reportedSDNodes []*status.ScaleDownNode) { + for _, bucket := range NodeGroupViews { + for _, drainNode := range bucket.Nodes { + if sdNode, err := a.scaleDownNodeToReport(drainNode, true); err == nil { + klog.V(0).Infof("Scale-down: removing node %s, utilization: %v, pods to reschedule: %s", drainNode.Name, sdNode.UtilInfo, joinPodNames(sdNode.EvictedPods)) + a.ctx.LogRecorder.Eventf(apiv1.EventTypeNormal, "ScaleDown", "Scale-down: removing node %s, utilization: %v, pods to reschedule: %s", drainNode.Name, sdNode.UtilInfo, joinPodNames(sdNode.EvictedPods)) + reportedSDNodes = append(reportedSDNodes, sdNode) + } else { + klog.Errorf("Scale-down: couldn't report scaled down node, err: %v", err) + } - nodeGroup, err := a.ctx.CloudProvider.NodeGroupForNode(drainNode) - if err != nil || nodeGroup == nil || reflect.ValueOf(nodeGroup).IsNil() { - klog.Errorf("Failed to find node group for %s: %v", drainNode.Name, err) - continue + a.nodeDeletionTracker.StartDeletionWithDrain(bucket.Group.Id(), drainNode.Name) } - - a.nodeDeletionTracker.StartDeletionWithDrain(nodeGroup.Id(), drainNode.Name) - groupIds = append(groupIds, nodeGroup.Id()) 
- validNodes = append(validNodes, drainNode) } - go a.deleteNodesAsync(validNodes, groupIds, true) + for _, bucket := range NodeGroupViews { + go a.deleteNodesAsync(bucket.Nodes, bucket.Group, true, bucket.BatchSize) + } - return scaledDownNodes + return reportedSDNodes } -func (a *Actuator) deleteNodesAsync(nodes []*apiv1.Node, groupIds []string, drain bool) { +func (a *Actuator) deleteNodesAsync(nodes []*apiv1.Node, nodeGroup cloudprovider.NodeGroup, drain bool, batchSize int) { var pdbs []*policyv1.PodDisruptionBudget var registry kube_util.ListerRegistry @@ -245,12 +213,11 @@ func (a *Actuator) deleteNodesAsync(nodes []*apiv1.Node, groupIds []string, drai } clusterSnapshot, err := a.createSnapshot(nodes) - if err != nil { klog.Errorf("Scale-down: couldn't create delete snapshot, err: %v", err) nodeDeleteResult := status.NodeDeleteResult{ResultType: status.NodeDeleteErrorInternal, Err: errors.NewAutoscalerError(errors.InternalError, "createSnapshot returned error %v", err)} - for i, node := range nodes { - CleanUpAndRecordFailedScaleDownEvent(a.ctx, node, groupIds[i], drain, a.nodeDeletionTracker, "failed to create delete snapshot", nodeDeleteResult) + for _, node := range nodes { + a.nodeDeletionScheduler.AbortNodeDeletion(node, nodeGroup.Id(), drain, "failed to create delete snapshot", nodeDeleteResult) } return } @@ -260,8 +227,8 @@ func (a *Actuator) deleteNodesAsync(nodes []*apiv1.Node, groupIds []string, drai if err != nil { klog.Errorf("Scale-down: couldn't fetch pod disruption budgets, err: %v", err) nodeDeleteResult := status.NodeDeleteResult{ResultType: status.NodeDeleteErrorInternal, Err: errors.NewAutoscalerError(errors.InternalError, "podDisruptionBudgetLister.List returned error %v", err)} - for i, node := range nodes { - CleanUpAndRecordFailedScaleDownEvent(a.ctx, node, groupIds[i], drain, a.nodeDeletionTracker, "failed to fetch pod disruption budgets", nodeDeleteResult) + for _, node := range nodes { + 
a.nodeDeletionScheduler.AbortNodeDeletion(node, nodeGroup.Id(), drain, "failed to fetch pod disruption budgets", nodeDeleteResult) } return } @@ -269,12 +236,16 @@ func (a *Actuator) deleteNodesAsync(nodes []*apiv1.Node, groupIds []string, drai registry = a.ctx.ListerRegistry } - for i, node := range nodes { + if batchSize == 0 { + batchSize = len(nodes) + } + + for _, node := range nodes { nodeInfo, err := clusterSnapshot.NodeInfos().Get(node.Name) if err != nil { klog.Errorf("Scale-down: can't retrieve node %q from snapshot, err: %v", node.Name, err) nodeDeleteResult := status.NodeDeleteResult{ResultType: status.NodeDeleteErrorInternal, Err: errors.NewAutoscalerError(errors.InternalError, "nodeInfos.Get for %q returned error: %v", node.Name, err)} - CleanUpAndRecordFailedScaleDownEvent(a.ctx, node, groupIds[i], drain, a.nodeDeletionTracker, "failed to get node info", nodeDeleteResult) + a.nodeDeletionScheduler.AbortNodeDeletion(node, nodeGroup.Id(), drain, "failed to get node info", nodeDeleteResult) continue } @@ -282,18 +253,18 @@ func (a *Actuator) deleteNodesAsync(nodes []*apiv1.Node, groupIds []string, drai if err != nil { klog.Errorf("Scale-down: couldn't delete node %q, err: %v", node.Name, err) nodeDeleteResult := status.NodeDeleteResult{ResultType: status.NodeDeleteErrorInternal, Err: errors.NewAutoscalerError(errors.InternalError, "GetPodsToMove for %q returned error: %v", node.Name, err)} - CleanUpAndRecordFailedScaleDownEvent(a.ctx, node, groupIds[i], drain, a.nodeDeletionTracker, "failed to get pods to move on node", nodeDeleteResult) + a.nodeDeletionScheduler.AbortNodeDeletion(node, nodeGroup.Id(), drain, "failed to get pods to move on node", nodeDeleteResult) continue } if !drain && len(podsToRemove) != 0 { klog.Errorf("Scale-down: couldn't delete empty node %q, new pods got scheduled", node.Name) nodeDeleteResult := status.NodeDeleteResult{ResultType: status.NodeDeleteErrorInternal, Err: errors.NewAutoscalerError(errors.InternalError, "failed to 
delete empty node %q, new pods scheduled", node.Name)} - CleanUpAndRecordFailedScaleDownEvent(a.ctx, node, groupIds[i], drain, a.nodeDeletionTracker, "node is not empty", nodeDeleteResult) + a.nodeDeletionScheduler.AbortNodeDeletion(node, nodeGroup.Id(), drain, "node is not empty", nodeDeleteResult) continue } - go a.scheduleDeletion(nodeInfo, groupIds[i], drain) + go a.nodeDeletionScheduler.ScheduleDeletion(nodeInfo, nodeGroup, batchSize, drain) } } @@ -306,8 +277,14 @@ func (a *Actuator) scaleDownNodeToReport(node *apiv1.Node, drain bool) (*status. if err != nil { return nil, err } + + ignoreDaemonSetsUtilization, err := a.configGetter.GetIgnoreDaemonSetsUtilization(nodeGroup) + if err != nil { + return nil, err + } + gpuConfig := a.ctx.CloudProvider.GetNodeGpuConfig(node) - utilInfo, err := utilization.Calculate(nodeInfo, a.ctx.IgnoreDaemonSetsUtilization, a.ctx.IgnoreMirrorPodsUtilization, gpuConfig, time.Now()) + utilInfo, err := utilization.Calculate(nodeInfo, ignoreDaemonSetsUtilization, a.ctx.IgnoreMirrorPodsUtilization, gpuConfig, time.Now()) if err != nil { return nil, err } @@ -334,40 +311,6 @@ func (a *Actuator) taintNode(node *apiv1.Node) error { return nil } -func (a *Actuator) prepareNodeForDeletion(nodeInfo *framework.NodeInfo, drain bool) status.NodeDeleteResult { - node := nodeInfo.Node() - if drain { - if evictionResults, err := a.evictor.DrainNode(a.ctx, nodeInfo); err != nil { - return status.NodeDeleteResult{ResultType: status.NodeDeleteErrorFailedToEvictPods, Err: err, PodEvictionResults: evictionResults} - } - } else { - if err := a.evictor.EvictDaemonSetPods(a.ctx, nodeInfo, time.Now()); err != nil { - // Evicting DS pods is best-effort, so proceed with the deletion even if there are errors. 
- klog.Warningf("Error while evicting DS pods from an empty node %q: %v", node.Name, err) - } - } - if err := WaitForDelayDeletion(node, a.ctx.ListerRegistry.AllNodeLister(), a.ctx.AutoscalingOptions.NodeDeletionDelayTimeout); err != nil { - return status.NodeDeleteResult{ResultType: status.NodeDeleteErrorFailedToDelete, Err: err} - } - return status.NodeDeleteResult{ResultType: status.NodeDeleteOk} -} - -// scheduleDeletion schedule the deletion on of the provided node by adding a node to NodeDeletionBatcher. If drain is true, the node is drained before being deleted. -func (a *Actuator) scheduleDeletion(nodeInfo *framework.NodeInfo, nodeGroupId string, drain bool) { - node := nodeInfo.Node() - nodeDeleteResult := a.prepareNodeForDeletion(nodeInfo, drain) - if nodeDeleteResult.Err != nil { - CleanUpAndRecordFailedScaleDownEvent(a.ctx, node, nodeGroupId, drain, a.nodeDeletionTracker, "prepareNodeForDeletion failed", nodeDeleteResult) - return - } - err := a.nodeDeletionBatcher.AddNode(node, drain) - if err != nil { - klog.Errorf("Couldn't add node to nodeDeletionBatcher, err: %v", err) - nodeDeleteResult := status.NodeDeleteResult{ResultType: status.NodeDeleteErrorInternal, Err: errors.NewAutoscalerError(errors.InternalError, "nodeDeletionBatcher.AddNode for %s returned error: %v", node.Name, err)} - CleanUpAndRecordFailedScaleDownEvent(a.ctx, node, nodeGroupId, drain, a.nodeDeletionTracker, "failed add node to the nodeDeletionBatche", nodeDeleteResult) - } -} - func (a *Actuator) createSnapshot(nodes []*apiv1.Node) (clustersnapshot.ClusterSnapshot, error) { knownNodes := make(map[string]bool) snapshot := clustersnapshot.NewBasicClusterSnapshot() @@ -412,37 +355,3 @@ func joinPodNames(pods []*apiv1.Pod) string { } return strings.Join(names, ",") } - -// CleanUpAndRecordFailedScaleDownEvent record failed scale down event and log an error. 
-func CleanUpAndRecordFailedScaleDownEvent(ctx *context.AutoscalingContext, node *apiv1.Node, nodeGroupId string, drain bool, nodeDeletionTracker *deletiontracker.NodeDeletionTracker, errMsg string, status status.NodeDeleteResult) { - if drain { - klog.Errorf("Scale-down: couldn't delete node %q with drain, %v, status error: %v", node.Name, errMsg, status.Err) - ctx.Recorder.Eventf(node, apiv1.EventTypeWarning, "ScaleDownFailed", "failed to drain and delete node: %v", status.Err) - - } else { - klog.Errorf("Scale-down: couldn't delete empty node, %v, status error: %v", errMsg, status.Err) - ctx.Recorder.Eventf(node, apiv1.EventTypeWarning, "ScaleDownFailed", "failed to delete empty node: %v", status.Err) - } - taints.CleanToBeDeleted(node, ctx.ClientSet, ctx.CordonNodeBeforeTerminate) - nodeDeletionTracker.EndDeletion(nodeGroupId, node.Name, status) -} - -// RegisterAndRecordSuccessfulScaleDownEvent register scale down and record successful scale down event. -func RegisterAndRecordSuccessfulScaleDownEvent(ctx *context.AutoscalingContext, csr *clusterstate.ClusterStateRegistry, node *apiv1.Node, nodeGroup cloudprovider.NodeGroup, drain bool, nodeDeletionTracker *deletiontracker.NodeDeletionTracker) { - ctx.Recorder.Eventf(node, apiv1.EventTypeNormal, "ScaleDown", "nodes removed by cluster autoscaler") - csr.RegisterScaleDown(&clusterstate.ScaleDownRequest{ - NodeGroup: nodeGroup, - NodeName: node.Name, - Time: time.Now(), - ExpectedDeleteTime: time.Now().Add(MaxCloudProviderNodeDeletionTime), - }) - gpuConfig := ctx.CloudProvider.GetNodeGpuConfig(node) - metricResourceName, metricGpuType := gpu.GetGpuInfoForMetrics(gpuConfig, ctx.CloudProvider.GetAvailableGPUTypes(), node, nodeGroup) - metrics.RegisterScaleDown(1, metricResourceName, metricGpuType, nodeScaleDownReason(node, drain)) - if drain { - ctx.LogRecorder.Eventf(apiv1.EventTypeNormal, "ScaleDown", "Scale-down: node %s removed with drain", node.Name) - } else { - ctx.LogRecorder.Eventf(apiv1.EventTypeNormal, 
"ScaleDownEmpty", "Scale-down: empty node %s removed", node.Name) - } - nodeDeletionTracker.EndDeletion(nodeGroup.Id(), node.Name, status.NodeDeleteResult{ResultType: status.NodeDeleteOk}) -} diff --git a/cluster-autoscaler/core/scaledown/actuation/actuator_test.go b/cluster-autoscaler/core/scaledown/actuation/actuator_test.go index 6c12da493270..07cad48717e7 100644 --- a/cluster-autoscaler/core/scaledown/actuation/actuator_test.go +++ b/cluster-autoscaler/core/scaledown/actuation/actuator_test.go @@ -35,242 +35,56 @@ import ( "k8s.io/client-go/kubernetes/fake" core "k8s.io/client-go/testing" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider" testprovider "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/test" "k8s.io/autoscaler/cluster-autoscaler/clusterstate" "k8s.io/autoscaler/cluster-autoscaler/config" - "k8s.io/autoscaler/cluster-autoscaler/context" + "k8s.io/autoscaler/cluster-autoscaler/core/scaledown/budgets" "k8s.io/autoscaler/cluster-autoscaler/core/scaledown/deletiontracker" "k8s.io/autoscaler/cluster-autoscaler/core/scaledown/status" . "k8s.io/autoscaler/cluster-autoscaler/core/test" - "k8s.io/autoscaler/cluster-autoscaler/simulator" + "k8s.io/autoscaler/cluster-autoscaler/processors/nodegroupconfig" "k8s.io/autoscaler/cluster-autoscaler/simulator/utilization" kube_util "k8s.io/autoscaler/cluster-autoscaler/utils/kubernetes" "k8s.io/autoscaler/cluster-autoscaler/utils/taints" . "k8s.io/autoscaler/cluster-autoscaler/utils/test" ) -func TestCropNodesToBudgets(t *testing.T) { - for tn, tc := range map[string]struct { - emptyDeletionsInProgress int - drainDeletionsInProgress int - emptyNodes []*apiv1.Node - drainNodes []*apiv1.Node - wantEmpty []*apiv1.Node - wantDrain []*apiv1.Node - }{ - "no nodes": { - emptyNodes: []*apiv1.Node{}, - drainNodes: []*apiv1.Node{}, - wantEmpty: []*apiv1.Node{}, - wantDrain: []*apiv1.Node{}, - }, - // Empty nodes only. 
- "empty nodes within max limit, no deletions in progress": { - emptyNodes: generateNodes(10, "empty"), - wantEmpty: generateNodes(10, "empty"), - }, - "empty nodes exceeding max limit, no deletions in progress": { - emptyNodes: generateNodes(11, "empty"), - wantEmpty: generateNodes(10, "empty"), - }, - "empty nodes with deletions in progress, within budget": { - emptyDeletionsInProgress: 1, - drainDeletionsInProgress: 1, - emptyNodes: generateNodes(8, "empty"), - wantEmpty: generateNodes(8, "empty"), - }, - "empty nodes with deletions in progress, exceeding budget": { - emptyDeletionsInProgress: 1, - drainDeletionsInProgress: 1, - emptyNodes: generateNodes(10, "empty"), - wantEmpty: generateNodes(8, "empty"), - }, - "empty nodes with deletions in progress, 0 budget left": { - emptyDeletionsInProgress: 5, - drainDeletionsInProgress: 5, - emptyNodes: generateNodes(10, "empty"), - wantEmpty: []*apiv1.Node{}, - }, - "empty nodes with deletions in progress, budget exceeded (shouldn't happen, just a sanity check)": { - emptyDeletionsInProgress: 50, - drainDeletionsInProgress: 50, - emptyNodes: generateNodes(10, "empty"), - wantEmpty: []*apiv1.Node{}, - }, - // Drain nodes only. 
- "drain nodes within max limit, no deletions in progress": { - drainNodes: generateNodes(5, "drain"), - wantDrain: generateNodes(5, "drain"), - }, - "drain nodes exceeding max limit, no deletions in progress": { - drainNodes: generateNodes(6, "drain"), - wantDrain: generateNodes(5, "drain"), - }, - "drain nodes with deletions in progress, within budget": { - emptyDeletionsInProgress: 1, - drainDeletionsInProgress: 2, - drainNodes: generateNodes(3, "drain"), - wantDrain: generateNodes(3, "drain"), - }, - "drain nodes with deletions in progress, exceeding drain budget": { - emptyDeletionsInProgress: 1, - drainDeletionsInProgress: 2, - drainNodes: generateNodes(5, "drain"), - wantDrain: generateNodes(3, "drain"), - }, - "drain nodes with deletions in progress, 0 drain budget left": { - emptyDeletionsInProgress: 1, - drainDeletionsInProgress: 5, - drainNodes: generateNodes(5, "drain"), - wantDrain: []*apiv1.Node{}, - }, - "drain nodes with deletions in progress, drain budget exceeded (shouldn't happen, just a sanity check)": { - emptyDeletionsInProgress: 1, - drainDeletionsInProgress: 50, - drainNodes: generateNodes(5, "drain"), - wantDrain: []*apiv1.Node{}, - }, - "drain nodes with deletions in progress, exceeding overall budget": { - emptyDeletionsInProgress: 7, - drainDeletionsInProgress: 1, - drainNodes: generateNodes(4, "drain"), - wantDrain: generateNodes(2, "drain"), - }, - "drain nodes with deletions in progress, 0 overall budget left": { - emptyDeletionsInProgress: 10, - drainNodes: generateNodes(4, "drain"), - wantDrain: []*apiv1.Node{}, - }, - "drain nodes with deletions in progress, overall budget exceeded (shouldn't happen, just a sanity check)": { - emptyDeletionsInProgress: 50, - drainNodes: generateNodes(4, "drain"), - wantDrain: []*apiv1.Node{}, - }, - // Empty and drain nodes together. 
- "empty&drain nodes within max limits, no deletions in progress": { - emptyNodes: generateNodes(5, "empty"), - drainNodes: generateNodes(5, "drain"), - wantDrain: generateNodes(5, "drain"), - wantEmpty: generateNodes(5, "empty"), - }, - "empty&drain nodes exceeding overall limit, no deletions in progress": { - emptyNodes: generateNodes(8, "empty"), - drainNodes: generateNodes(8, "drain"), - wantDrain: generateNodes(2, "drain"), - wantEmpty: generateNodes(8, "empty"), - }, - "empty&drain nodes exceeding drain limit, no deletions in progress": { - emptyNodes: generateNodes(2, "empty"), - drainNodes: generateNodes(8, "drain"), - wantDrain: generateNodes(5, "drain"), - wantEmpty: generateNodes(2, "empty"), - }, - "empty&drain nodes with deletions in progress, 0 overall budget left": { - emptyDeletionsInProgress: 10, - emptyNodes: generateNodes(5, "empty"), - drainNodes: generateNodes(5, "drain"), - wantEmpty: []*apiv1.Node{}, - wantDrain: []*apiv1.Node{}, - }, - "empty&drain nodes with deletions in progress, overall budget exceeded (shouldn't happen, just a sanity check)": { - emptyDeletionsInProgress: 50, - emptyNodes: generateNodes(5, "empty"), - drainNodes: generateNodes(5, "drain"), - wantEmpty: []*apiv1.Node{}, - wantDrain: []*apiv1.Node{}, - }, - "empty&drain nodes with deletions in progress, 0 drain budget left": { - drainDeletionsInProgress: 5, - emptyNodes: generateNodes(5, "empty"), - drainNodes: generateNodes(5, "drain"), - wantEmpty: generateNodes(5, "empty"), - wantDrain: []*apiv1.Node{}, - }, - "empty&drain nodes with deletions in progress, drain budget exceeded (shouldn't happen, just a sanity check)": { - drainDeletionsInProgress: 9, - emptyNodes: generateNodes(5, "empty"), - drainNodes: generateNodes(5, "drain"), - wantEmpty: generateNodes(1, "empty"), - wantDrain: []*apiv1.Node{}, - }, - "empty&drain nodes with deletions in progress, overall budget exceeded, only empty nodes fit": { - emptyDeletionsInProgress: 5, - drainDeletionsInProgress: 3, - 
emptyNodes: generateNodes(5, "empty"), - drainNodes: generateNodes(2, "drain"), - wantEmpty: generateNodes(2, "empty"), - wantDrain: []*apiv1.Node{}, - }, - "empty&drain nodes with deletions in progress, overall budget exceeded, both empty&drain nodes fit": { - emptyDeletionsInProgress: 5, - drainDeletionsInProgress: 3, - emptyNodes: generateNodes(1, "empty"), - drainNodes: generateNodes(2, "drain"), - wantEmpty: generateNodes(1, "empty"), - wantDrain: generateNodes(1, "drain"), - }, - "empty&drain nodes with deletions in progress, drain budget exceeded": { - emptyDeletionsInProgress: 1, - drainDeletionsInProgress: 3, - emptyNodes: generateNodes(4, "empty"), - drainNodes: generateNodes(5, "drain"), - wantEmpty: generateNodes(4, "empty"), - wantDrain: generateNodes(2, "drain"), - }, - } { - t.Run(tn, func(t *testing.T) { - ctx := &context.AutoscalingContext{ - AutoscalingOptions: config.AutoscalingOptions{ - MaxScaleDownParallelism: 10, - MaxDrainParallelism: 5, - NodeDeletionBatcherInterval: 0 * time.Second, - NodeDeleteDelayAfterTaint: 1 * time.Second, - }, - } - deleteOptions := simulator.NodeDeleteOptions{ - SkipNodesWithSystemPods: true, - SkipNodesWithLocalStorage: true, - MinReplicaCount: 0, - SkipNodesWithCustomControllerPods: true, - } - ndr := deletiontracker.NewNodeDeletionTracker(1 * time.Hour) - for i := 0; i < tc.emptyDeletionsInProgress; i++ { - ndr.StartDeletion("ng1", fmt.Sprintf("empty-node-%d", i)) - } - for i := 0; i < tc.drainDeletionsInProgress; i++ { - ndr.StartDeletionWithDrain("ng2", fmt.Sprintf("drain-node-%d", i)) - } - - actuator := NewActuator(ctx, nil, ndr, deleteOptions) - gotEmpty, gotDrain := actuator.cropNodesToBudgets(tc.emptyNodes, tc.drainNodes) - if diff := cmp.Diff(tc.wantEmpty, gotEmpty, cmpopts.EquateEmpty()); diff != "" { - t.Errorf("cropNodesToBudgets empty nodes diff (-want +got):\n%s", diff) - } - if diff := cmp.Diff(tc.wantDrain, gotDrain, cmpopts.EquateEmpty()); diff != "" { - t.Errorf("cropNodesToBudgets drain nodes 
diff (-want +got):\n%s", diff) - } - }) - } +type startDeletionTestCase struct { + emptyNodes []*budgets.NodeGroupView + drainNodes []*budgets.NodeGroupView + pods map[string][]*apiv1.Pod + failedPodDrain map[string]bool + failedNodeDeletion map[string]bool + failedNodeTaint map[string]bool + wantStatus *status.ScaleDownStatus + wantErr error + wantDeletedPods []string + wantDeletedNodes []string + wantTaintUpdates map[string][][]apiv1.Taint + wantNodeDeleteResults map[string]status.NodeDeleteResult } -func TestStartDeletion(t *testing.T) { - testNg := testprovider.NewTestNodeGroup("test-ng", 0, 100, 3, true, false, "n1-standard-2", nil, nil) +func getStartDeletionTestCases(testNg *testprovider.TestNodeGroup, ignoreDaemonSetsUtilization bool, suffix string) map[string]startDeletionTestCase { toBeDeletedTaint := apiv1.Taint{Key: taints.ToBeDeletedTaint, Effect: apiv1.TaintEffectNoSchedule} - for tn, tc := range map[string]struct { - emptyNodes []*apiv1.Node - drainNodes []*apiv1.Node - pods map[string][]*apiv1.Pod - failedPodDrain map[string]bool - failedNodeDeletion map[string]bool - failedNodeTaint map[string]bool - wantStatus *status.ScaleDownStatus - wantErr error - wantDeletedPods []string - wantDeletedNodes []string - wantTaintUpdates map[string][][]apiv1.Taint - wantNodeDeleteResults map[string]status.NodeDeleteResult - }{ + dsUtilInfo := generateUtilInfo(2./8., 2./8.) + + if ignoreDaemonSetsUtilization { + dsUtilInfo = generateUtilInfo(0./8., 0./8.) + } + + atomic2 := sizedNodeGroup("atomic-2", 2, true) + atomic4 := sizedNodeGroup("atomic-4", 4, true) + // We need separate groups since previous test cases *change state of groups*. + // TODO(aleksandra-malinowska): refactor this test to isolate test cases. 
+ atomic2pods := sizedNodeGroup("atomic-2-pods", 2, true) + atomic4taints := sizedNodeGroup("atomic-4-taints", 4, true) + atomic6 := sizedNodeGroup("atomic-6", 6, true) + atomic2mixed := sizedNodeGroup("atomic-2-mixed", 2, true) + atomic2drain := sizedNodeGroup("atomic-2-drain", 2, true) + + testCases := map[string]startDeletionTestCase{ "nothing to delete": { emptyNodes: nil, drainNodes: nil, @@ -279,152 +93,333 @@ func TestStartDeletion(t *testing.T) { }, }, "empty node deletion": { - emptyNodes: generateNodes(2, "empty"), + emptyNodes: generateNodeGroupViewList(testNg, 0, 2), wantStatus: &status.ScaleDownStatus{ Result: status.ScaleDownNodeDeleteStarted, ScaledDownNodes: []*status.ScaleDownNode{ { - Node: generateNode("empty-node-0"), + Node: generateNode("test-node-0"), NodeGroup: testNg, EvictedPods: nil, UtilInfo: generateUtilInfo(0, 0), }, { - Node: generateNode("empty-node-1"), + Node: generateNode("test-node-1"), NodeGroup: testNg, EvictedPods: nil, UtilInfo: generateUtilInfo(0, 0), }, }, }, - wantDeletedNodes: []string{"empty-node-0", "empty-node-1"}, + wantDeletedNodes: []string{"test-node-0", "test-node-1"}, + wantTaintUpdates: map[string][][]apiv1.Taint{ + "test-node-0": { + {toBeDeletedTaint}, + }, + "test-node-1": { + {toBeDeletedTaint}, + }, + }, + wantNodeDeleteResults: map[string]status.NodeDeleteResult{ + "test-node-0": {ResultType: status.NodeDeleteOk}, + "test-node-1": {ResultType: status.NodeDeleteOk}, + }, + }, + "empty atomic node deletion": { + emptyNodes: generateNodeGroupViewList(atomic2, 0, 2), + wantStatus: &status.ScaleDownStatus{ + Result: status.ScaleDownNodeDeleteStarted, + ScaledDownNodes: []*status.ScaleDownNode{ + { + Node: generateNode("atomic-2-node-0"), + NodeGroup: atomic2, + EvictedPods: nil, + UtilInfo: generateUtilInfo(0, 0), + }, + { + Node: generateNode("atomic-2-node-1"), + NodeGroup: atomic2, + EvictedPods: nil, + UtilInfo: generateUtilInfo(0, 0), + }, + }, + }, + wantDeletedNodes: []string{"atomic-2-node-0", 
"atomic-2-node-1"},
 			wantTaintUpdates: map[string][][]apiv1.Taint{
-				"empty-node-0": {
+				"atomic-2-node-0": {
 					{toBeDeletedTaint},
 				},
-				"empty-node-1": {
+				"atomic-2-node-1": {
 					{toBeDeletedTaint},
 				},
 			},
 			wantNodeDeleteResults: map[string]status.NodeDeleteResult{
-				"empty-node-0": {ResultType: status.NodeDeleteOk},
-				"empty-node-1": {ResultType: status.NodeDeleteOk},
+				"atomic-2-node-0": {ResultType: status.NodeDeleteOk},
+				"atomic-2-node-1": {ResultType: status.NodeDeleteOk},
 			},
 		},
 		"deletion with drain": {
-			drainNodes: generateNodes(2, "drain"),
+			drainNodes: generateNodeGroupViewList(testNg, 0, 2),
 			pods: map[string][]*apiv1.Pod{
-				"drain-node-0": removablePods(2, "drain-node-0"),
-				"drain-node-1": removablePods(2, "drain-node-1"),
+				"test-node-0": removablePods(2, "test-node-0"),
+				"test-node-1": removablePods(2, "test-node-1"),
 			},
 			wantStatus: &status.ScaleDownStatus{
 				Result: status.ScaleDownNodeDeleteStarted,
 				ScaledDownNodes: []*status.ScaleDownNode{
 					{
-						Node:        generateNode("drain-node-0"),
+						Node:        generateNode("test-node-0"),
 						NodeGroup:   testNg,
-						EvictedPods: removablePods(2, "drain-node-0"),
+						EvictedPods: removablePods(2, "test-node-0"),
 						UtilInfo:    generateUtilInfo(2./8., 2./8.),
 					},
 					{
-						Node:        generateNode("drain-node-1"),
+						Node:        generateNode("test-node-1"),
 						NodeGroup:   testNg,
-						EvictedPods: removablePods(2, "drain-node-1"),
+						EvictedPods: removablePods(2, "test-node-1"),
 						UtilInfo:    generateUtilInfo(2./8., 2./8.),
 					},
 				},
 			},
-			wantDeletedNodes: []string{"drain-node-0", "drain-node-1"},
-			wantDeletedPods:  []string{"drain-node-0-pod-0", "drain-node-0-pod-1", "drain-node-1-pod-0", "drain-node-1-pod-1"},
+			wantDeletedNodes: []string{"test-node-0", "test-node-1"},
+			wantDeletedPods:  []string{"test-node-0-pod-0", "test-node-0-pod-1", "test-node-1-pod-0", "test-node-1-pod-1"},
 			wantTaintUpdates: map[string][][]apiv1.Taint{
-				"drain-node-0": {
+				"test-node-0": {
 					{toBeDeletedTaint},
 				},
-				"drain-node-1": {
+				"test-node-1": {
 					{toBeDeletedTaint},
 				},
 			},
 			wantNodeDeleteResults: map[string]status.NodeDeleteResult{
-				"drain-node-0": {ResultType: status.NodeDeleteOk},
-				"drain-node-1": {ResultType: status.NodeDeleteOk},
+				"test-node-0": {ResultType: status.NodeDeleteOk},
+				"test-node-1": {ResultType: status.NodeDeleteOk},
 			},
 		},
 		"empty and drain deletion work correctly together": {
-			emptyNodes: generateNodes(2, "empty"),
-			drainNodes: generateNodes(2, "drain"),
+			emptyNodes: generateNodeGroupViewList(testNg, 0, 2),
+			drainNodes: generateNodeGroupViewList(testNg, 2, 4),
 			pods: map[string][]*apiv1.Pod{
-				"drain-node-0": removablePods(2, "drain-node-0"),
-				"drain-node-1": removablePods(2, "drain-node-1"),
+				"test-node-2": removablePods(2, "test-node-2"),
+				"test-node-3": removablePods(2, "test-node-3"),
 			},
 			wantStatus: &status.ScaleDownStatus{
 				Result: status.ScaleDownNodeDeleteStarted,
 				ScaledDownNodes: []*status.ScaleDownNode{
 					{
-						Node:        generateNode("empty-node-0"),
+						Node:        generateNode("test-node-0"),
 						NodeGroup:   testNg,
 						EvictedPods: nil,
 						UtilInfo:    generateUtilInfo(0, 0),
 					},
 					{
-						Node:        generateNode("empty-node-1"),
+						Node:        generateNode("test-node-1"),
 						NodeGroup:   testNg,
 						EvictedPods: nil,
 						UtilInfo:    generateUtilInfo(0, 0),
 					},
 					{
-						Node:        generateNode("drain-node-0"),
+						Node:        generateNode("test-node-2"),
 						NodeGroup:   testNg,
-						EvictedPods: removablePods(2, "drain-node-0"),
+						EvictedPods: removablePods(2, "test-node-2"),
 						UtilInfo:    generateUtilInfo(2./8., 2./8.),
 					},
 					{
-						Node:        generateNode("drain-node-1"),
+						Node:        generateNode("test-node-3"),
 						NodeGroup:   testNg,
-						EvictedPods: removablePods(2, "drain-node-1"),
+						EvictedPods: removablePods(2, "test-node-3"),
 						UtilInfo:    generateUtilInfo(2./8., 2./8.),
 					},
 				},
 			},
-			wantDeletedNodes: []string{"empty-node-0", "empty-node-1", "drain-node-0", "drain-node-1"},
-			wantDeletedPods:  []string{"drain-node-0-pod-0", "drain-node-0-pod-1", "drain-node-1-pod-0", "drain-node-1-pod-1"},
+			wantDeletedNodes: []string{"test-node-0", "test-node-1", "test-node-2", "test-node-3"},
+			wantDeletedPods:  []string{"test-node-2-pod-0", "test-node-2-pod-1", "test-node-3-pod-0", "test-node-3-pod-1"},
 			wantTaintUpdates: map[string][][]apiv1.Taint{
-				"empty-node-0": {
+				"test-node-0": {
 					{toBeDeletedTaint},
 				},
-				"empty-node-1": {
+				"test-node-1": {
 					{toBeDeletedTaint},
 				},
-				"drain-node-0": {
+				"test-node-2": {
 					{toBeDeletedTaint},
 				},
-				"drain-node-1": {
+				"test-node-3": {
 					{toBeDeletedTaint},
 				},
 			},
 			wantNodeDeleteResults: map[string]status.NodeDeleteResult{
-				"empty-node-0": {ResultType: status.NodeDeleteOk},
-				"empty-node-1": {ResultType: status.NodeDeleteOk},
-				"drain-node-0": {ResultType: status.NodeDeleteOk},
-				"drain-node-1": {ResultType: status.NodeDeleteOk},
+				"test-node-0": {ResultType: status.NodeDeleteOk},
+				"test-node-1": {ResultType: status.NodeDeleteOk},
+				"test-node-2": {ResultType: status.NodeDeleteOk},
+				"test-node-3": {ResultType: status.NodeDeleteOk},
+			},
+		},
+		"two atomic groups can be scaled down together": {
+			emptyNodes: generateNodeGroupViewList(atomic2mixed, 1, 2),
+			drainNodes: append(generateNodeGroupViewList(atomic2mixed, 0, 1),
+				generateNodeGroupViewList(atomic2drain, 0, 2)...),
+			pods: map[string][]*apiv1.Pod{
+				"atomic-2-mixed-node-0": removablePods(2, "atomic-2-mixed-node-0"),
+				"atomic-2-drain-node-0": removablePods(1, "atomic-2-drain-node-0"),
+				"atomic-2-drain-node-1": removablePods(2, "atomic-2-drain-node-1"),
+			},
+			wantStatus: &status.ScaleDownStatus{
+				Result: status.ScaleDownNodeDeleteStarted,
+				ScaledDownNodes: []*status.ScaleDownNode{
+					{
+						Node:        generateNode("atomic-2-mixed-node-1"),
+						NodeGroup:   atomic2mixed,
+						EvictedPods: nil,
+						UtilInfo:    generateUtilInfo(0, 0),
+					},
+					{
+						Node:        generateNode("atomic-2-mixed-node-0"),
+						NodeGroup:   atomic2mixed,
+						EvictedPods: removablePods(2, "atomic-2-mixed-node-0"),
+						UtilInfo:    generateUtilInfo(2./8., 2./8.),
+					},
+					{
+						Node:        generateNode("atomic-2-drain-node-0"),
+						NodeGroup:   atomic2drain,
+						EvictedPods: removablePods(1, "atomic-2-drain-node-0"),
+						UtilInfo:    generateUtilInfo(1./8., 1./8.),
+					},
+					{
+						Node:        generateNode("atomic-2-drain-node-1"),
+						NodeGroup:   atomic2drain,
+						EvictedPods: removablePods(2, "atomic-2-drain-node-1"),
+						UtilInfo:    generateUtilInfo(2./8., 2./8.),
+					},
+				},
+			},
+			wantDeletedNodes: []string{
+				"atomic-2-mixed-node-0",
+				"atomic-2-mixed-node-1",
+				"atomic-2-drain-node-0",
+				"atomic-2-drain-node-1",
+			},
+			wantDeletedPods: []string{"atomic-2-mixed-node-0-pod-0", "atomic-2-mixed-node-0-pod-1", "atomic-2-drain-node-0-pod-0", "atomic-2-drain-node-1-pod-0", "atomic-2-drain-node-1-pod-1"},
+			wantTaintUpdates: map[string][][]apiv1.Taint{
+				"atomic-2-mixed-node-0": {
+					{toBeDeletedTaint},
+				},
+				"atomic-2-mixed-node-1": {
+					{toBeDeletedTaint},
+				},
+				"atomic-2-drain-node-0": {
+					{toBeDeletedTaint},
+				},
+				"atomic-2-drain-node-1": {
+					{toBeDeletedTaint},
+				},
+			},
+			wantNodeDeleteResults: map[string]status.NodeDeleteResult{
+				"atomic-2-mixed-node-0": {ResultType: status.NodeDeleteOk},
+				"atomic-2-mixed-node-1": {ResultType: status.NodeDeleteOk},
+				"atomic-2-drain-node-0": {ResultType: status.NodeDeleteOk},
+				"atomic-2-drain-node-1": {ResultType: status.NodeDeleteOk},
+			},
+		},
+		"atomic empty and drain deletion work correctly together": {
+			emptyNodes: generateNodeGroupViewList(atomic4, 0, 2),
+			drainNodes: generateNodeGroupViewList(atomic4, 2, 4),
+			pods: map[string][]*apiv1.Pod{
+				"atomic-4-node-2": removablePods(2, "atomic-4-node-2"),
+				"atomic-4-node-3": removablePods(2, "atomic-4-node-3"),
+			},
+			wantStatus: &status.ScaleDownStatus{
+				Result: status.ScaleDownNodeDeleteStarted,
+				ScaledDownNodes: []*status.ScaleDownNode{
+					{
+						Node:        generateNode("atomic-4-node-0"),
+						NodeGroup:   atomic4,
+						EvictedPods: nil,
+						UtilInfo:    generateUtilInfo(0, 0),
+					},
+					{
+						Node:        generateNode("atomic-4-node-1"),
+						NodeGroup:   atomic4,
+						EvictedPods: nil,
+						UtilInfo:    generateUtilInfo(0, 0),
+					},
+					{
+						Node:        generateNode("atomic-4-node-2"),
+						NodeGroup:   atomic4,
+						EvictedPods: removablePods(2, "atomic-4-node-2"),
+						UtilInfo:    generateUtilInfo(2./8., 2./8.),
+					},
+					{
+						Node:        generateNode("atomic-4-node-3"),
+						NodeGroup:   atomic4,
+						EvictedPods: removablePods(2, "atomic-4-node-3"),
+						UtilInfo:    generateUtilInfo(2./8., 2./8.),
+					},
+				},
+			},
+			wantDeletedNodes: []string{"atomic-4-node-0", "atomic-4-node-1", "atomic-4-node-2", "atomic-4-node-3"},
+			wantDeletedPods:  []string{"atomic-4-node-2-pod-0", "atomic-4-node-2-pod-1", "atomic-4-node-3-pod-0", "atomic-4-node-3-pod-1"},
+			wantTaintUpdates: map[string][][]apiv1.Taint{
+				"atomic-4-node-0": {
+					{toBeDeletedTaint},
+				},
+				"atomic-4-node-1": {
+					{toBeDeletedTaint},
+				},
+				"atomic-4-node-2": {
+					{toBeDeletedTaint},
+				},
+				"atomic-4-node-3": {
+					{toBeDeletedTaint},
+				},
+			},
+			wantNodeDeleteResults: map[string]status.NodeDeleteResult{
+				"atomic-4-node-0": {ResultType: status.NodeDeleteOk},
+				"atomic-4-node-1": {ResultType: status.NodeDeleteOk},
+				"atomic-4-node-2": {ResultType: status.NodeDeleteOk},
+				"atomic-4-node-3": {ResultType: status.NodeDeleteOk},
 			},
 		},
 		"failure to taint empty node stops deletion and cleans already applied taints": {
-			emptyNodes: generateNodes(4, "empty"),
-			drainNodes: generateNodes(1, "drain"),
+			emptyNodes: generateNodeGroupViewList(testNg, 0, 4),
+			drainNodes: generateNodeGroupViewList(testNg, 4, 5),
 			pods: map[string][]*apiv1.Pod{
-				"drain-node-0": removablePods(2, "drain-node-0"),
+				"test-node-4": removablePods(2, "test-node-4"),
 			},
-			failedNodeTaint: map[string]bool{"empty-node-2": true},
+			failedNodeTaint: map[string]bool{"test-node-2": true},
 			wantStatus: &status.ScaleDownStatus{
 				Result:          status.ScaleDownError,
 				ScaledDownNodes: nil,
 			},
 			wantTaintUpdates: map[string][][]apiv1.Taint{
-				"empty-node-0": {
+				"test-node-0": {
 					{toBeDeletedTaint},
 					{},
 				},
-				"empty-node-1": {
+				"test-node-1": {
+					{toBeDeletedTaint},
+					{},
+				},
+			},
+			wantErr: cmpopts.AnyError,
+		},
+		"failure to taint empty atomic node stops deletion and cleans already applied taints": {
+			emptyNodes: generateNodeGroupViewList(atomic4taints, 0, 4),
+			drainNodes: generateNodeGroupViewList(testNg, 4, 5),
+			pods: map[string][]*apiv1.Pod{
+				"test-node-4": removablePods(2, "test-node-4"),
+			},
+			failedNodeTaint: map[string]bool{"atomic-4-taints-node-2": true},
+			wantStatus: &status.ScaleDownStatus{
+				Result:          status.ScaleDownError,
+				ScaledDownNodes: nil,
+			},
+			wantTaintUpdates: map[string][][]apiv1.Taint{
+				"atomic-4-taints-node-0": {
+					{toBeDeletedTaint},
+					{},
+				},
+				"atomic-4-taints-node-1": {
 					{toBeDeletedTaint},
 					{},
 				},
@@ -432,259 +427,408 @@ func TestStartDeletion(t *testing.T) {
 			wantErr: cmpopts.AnyError,
 		},
 		"failure to taint drain node stops further deletion and cleans already applied taints": {
-			emptyNodes: generateNodes(2, "empty"),
-			drainNodes: generateNodes(4, "drain"),
+			emptyNodes: generateNodeGroupViewList(testNg, 0, 2),
+			drainNodes: generateNodeGroupViewList(testNg, 2, 6),
+			pods: map[string][]*apiv1.Pod{
+				"test-node-2": removablePods(2, "test-node-2"),
+				"test-node-3": removablePods(2, "test-node-3"),
+				"test-node-4": removablePods(2, "test-node-4"),
+				"test-node-5": removablePods(2, "test-node-5"),
+			},
+			failedNodeTaint: map[string]bool{"test-node-2": true},
+			wantStatus: &status.ScaleDownStatus{
+				Result: status.ScaleDownError,
+				ScaledDownNodes: []*status.ScaleDownNode{
+					{
+						Node:        generateNode("test-node-0"),
+						NodeGroup:   testNg,
+						EvictedPods: nil,
+						UtilInfo:    generateUtilInfo(0, 0),
+					},
+					{
+						Node:        generateNode("test-node-1"),
+						NodeGroup:   testNg,
+						EvictedPods: nil,
+						UtilInfo:    generateUtilInfo(0, 0),
+					},
+				},
+			},
+			wantDeletedNodes: []string{"test-node-0", "test-node-1"},
+			wantTaintUpdates: map[string][][]apiv1.Taint{
+				"test-node-0": {
+					{toBeDeletedTaint},
+				},
+				"test-node-1": {
+					{toBeDeletedTaint},
+				},
+			},
+			wantNodeDeleteResults: map[string]status.NodeDeleteResult{
+				"test-node-0": {ResultType: status.NodeDeleteOk},
+				"test-node-1": {ResultType: status.NodeDeleteOk},
+			},
+			wantErr: cmpopts.AnyError,
+		},
+		"failure to taint drain atomic node stops further deletion and cleans already applied taints": {
+			emptyNodes: generateNodeGroupViewList(testNg, 0, 2),
+			drainNodes: generateNodeGroupViewList(atomic6, 0, 6),
 			pods: map[string][]*apiv1.Pod{
-				"drain-node-0": removablePods(2, "drain-node-0"),
-				"drain-node-1": removablePods(2, "drain-node-1"),
-				"drain-node-2": removablePods(2, "drain-node-2"),
-				"drain-node-3": removablePods(2, "drain-node-3"),
+				"atomic-6-node-0": removablePods(2, "atomic-6-node-0"),
+				"atomic-6-node-1": removablePods(2, "atomic-6-node-1"),
+				"atomic-6-node-2": removablePods(2, "atomic-6-node-2"),
+				"atomic-6-node-3": removablePods(2, "atomic-6-node-3"),
+				"atomic-6-node-4": removablePods(2, "atomic-6-node-4"),
+				"atomic-6-node-5": removablePods(2, "atomic-6-node-5"),
 			},
-			failedNodeTaint: map[string]bool{"drain-node-2": true},
+			failedNodeTaint: map[string]bool{"atomic-6-node-2": true},
 			wantStatus: &status.ScaleDownStatus{
 				Result: status.ScaleDownError,
 				ScaledDownNodes: []*status.ScaleDownNode{
 					{
-						Node:        generateNode("empty-node-0"),
+						Node:        generateNode("test-node-0"),
 						NodeGroup:   testNg,
 						EvictedPods: nil,
 						UtilInfo:    generateUtilInfo(0, 0),
 					},
 					{
-						Node:        generateNode("empty-node-1"),
+						Node:        generateNode("test-node-1"),
 						NodeGroup:   testNg,
 						EvictedPods: nil,
 						UtilInfo:    generateUtilInfo(0, 0),
 					},
 				},
 			},
-			wantDeletedNodes: []string{"empty-node-0", "empty-node-1"},
+			wantDeletedNodes: []string{"test-node-0", "test-node-1"},
 			wantTaintUpdates: map[string][][]apiv1.Taint{
-				"empty-node-0": {
+				"test-node-0": {
 					{toBeDeletedTaint},
 				},
-				"empty-node-1": {
+				"test-node-1": {
 					{toBeDeletedTaint},
 				},
 			},
 			wantNodeDeleteResults: map[string]status.NodeDeleteResult{
-				"empty-node-0": {ResultType: status.NodeDeleteOk},
-				"empty-node-1": {ResultType: status.NodeDeleteOk},
+				"test-node-0": {ResultType: status.NodeDeleteOk},
+				"test-node-1": {ResultType: status.NodeDeleteOk},
 			},
 			wantErr: cmpopts.AnyError,
 		},
 		"nodes that failed drain are correctly reported in results": {
-			drainNodes: generateNodes(4, "drain"),
+			drainNodes: generateNodeGroupViewList(testNg, 0, 4),
 			pods: map[string][]*apiv1.Pod{
-				"drain-node-0": removablePods(3, "drain-node-0"),
-				"drain-node-1": removablePods(3, "drain-node-1"),
-				"drain-node-2": removablePods(3, "drain-node-2"),
-				"drain-node-3": removablePods(3, "drain-node-3"),
+				"test-node-0": removablePods(3, "test-node-0"),
+				"test-node-1": removablePods(3, "test-node-1"),
+				"test-node-2": removablePods(3, "test-node-2"),
+				"test-node-3": removablePods(3, "test-node-3"),
 			},
 			failedPodDrain: map[string]bool{
-				"drain-node-0-pod-0": true,
-				"drain-node-0-pod-1": true,
-				"drain-node-2-pod-1": true,
+				"test-node-0-pod-0": true,
+				"test-node-0-pod-1": true,
+				"test-node-2-pod-1": true,
 			},
 			wantStatus: &status.ScaleDownStatus{
 				Result: status.ScaleDownNodeDeleteStarted,
 				ScaledDownNodes: []*status.ScaleDownNode{
 					{
-						Node:        generateNode("drain-node-0"),
+						Node:        generateNode("test-node-0"),
 						NodeGroup:   testNg,
-						EvictedPods: removablePods(3, "drain-node-0"),
+						EvictedPods: removablePods(3, "test-node-0"),
 						UtilInfo:    generateUtilInfo(3./8., 3./8.),
 					},
 					{
-						Node:        generateNode("drain-node-1"),
+						Node:        generateNode("test-node-1"),
 						NodeGroup:   testNg,
-						EvictedPods: removablePods(3, "drain-node-1"),
+						EvictedPods: removablePods(3, "test-node-1"),
 						UtilInfo:    generateUtilInfo(3./8., 3./8.),
 					},
 					{
-						Node:        generateNode("drain-node-2"),
+						Node:        generateNode("test-node-2"),
 						NodeGroup:   testNg,
-						EvictedPods: removablePods(3, "drain-node-2"),
+						EvictedPods: removablePods(3, "test-node-2"),
 						UtilInfo:    generateUtilInfo(3./8., 3./8.),
 					},
 					{
-						Node:        generateNode("drain-node-3"),
+						Node:        generateNode("test-node-3"),
 						NodeGroup:   testNg,
-						EvictedPods: removablePods(3, "drain-node-3"),
+						EvictedPods: removablePods(3, "test-node-3"),
 						UtilInfo:    generateUtilInfo(3./8., 3./8.),
 					},
 				},
 			},
-			wantDeletedNodes: []string{"drain-node-1", "drain-node-3"},
+			wantDeletedNodes: []string{"test-node-1", "test-node-3"},
 			wantDeletedPods: []string{
-				"drain-node-0-pod-2",
-				"drain-node-1-pod-0", "drain-node-1-pod-1", "drain-node-1-pod-2",
-				"drain-node-2-pod-0", "drain-node-2-pod-2",
-				"drain-node-3-pod-0", "drain-node-3-pod-1", "drain-node-3-pod-2",
+				"test-node-0-pod-2",
+				"test-node-1-pod-0", "test-node-1-pod-1", "test-node-1-pod-2",
+				"test-node-2-pod-0", "test-node-2-pod-2",
+				"test-node-3-pod-0", "test-node-3-pod-1", "test-node-3-pod-2",
 			},
 			wantTaintUpdates: map[string][][]apiv1.Taint{
-				"drain-node-0": {
+				"test-node-0": {
 					{toBeDeletedTaint},
 					{},
 				},
-				"drain-node-1": {
+				"test-node-1": {
 					{toBeDeletedTaint},
 				},
-				"drain-node-2": {
+				"test-node-2": {
 					{toBeDeletedTaint},
 					{},
 				},
-				"drain-node-3": {
+				"test-node-3": {
 					{toBeDeletedTaint},
 				},
 			},
 			wantNodeDeleteResults: map[string]status.NodeDeleteResult{
-				"drain-node-0": {
+				"test-node-0": {
 					ResultType: status.NodeDeleteErrorFailedToEvictPods,
 					Err:        cmpopts.AnyError,
 					PodEvictionResults: map[string]status.PodEvictionResult{
-						"drain-node-0-pod-0": {Pod: removablePod("drain-node-0-pod-0", "drain-node-0"), Err: cmpopts.AnyError, TimedOut: true},
-						"drain-node-0-pod-1": {Pod: removablePod("drain-node-0-pod-1", "drain-node-0"), Err: cmpopts.AnyError, TimedOut: true},
-						"drain-node-0-pod-2": {Pod: removablePod("drain-node-0-pod-2", "drain-node-0")},
+						"test-node-0-pod-0": {Pod: removablePod("test-node-0-pod-0", "test-node-0"), Err: cmpopts.AnyError, TimedOut: true},
+						"test-node-0-pod-1": {Pod: removablePod("test-node-0-pod-1", "test-node-0"), Err: cmpopts.AnyError, TimedOut: true},
+						"test-node-0-pod-2": {Pod: removablePod("test-node-0-pod-2", "test-node-0")},
 					},
 				},
-				"drain-node-1": {ResultType: status.NodeDeleteOk},
-				"drain-node-2": {
+				"test-node-1": {ResultType: status.NodeDeleteOk},
+				"test-node-2": {
 					ResultType: status.NodeDeleteErrorFailedToEvictPods,
 					Err:        cmpopts.AnyError,
 					PodEvictionResults: map[string]status.PodEvictionResult{
-						"drain-node-2-pod-0": {Pod: removablePod("drain-node-2-pod-0", "drain-node-2")},
-						"drain-node-2-pod-1": {Pod: removablePod("drain-node-2-pod-1", "drain-node-2"), Err: cmpopts.AnyError, TimedOut: true},
-						"drain-node-2-pod-2": {Pod: removablePod("drain-node-2-pod-2", "drain-node-2")},
+						"test-node-2-pod-0": {Pod: removablePod("test-node-2-pod-0", "test-node-2")},
+						"test-node-2-pod-1": {Pod: removablePod("test-node-2-pod-1", "test-node-2"), Err: cmpopts.AnyError, TimedOut: true},
+						"test-node-2-pod-2": {Pod: removablePod("test-node-2-pod-2", "test-node-2")},
 					},
 				},
-				"drain-node-3": {ResultType: status.NodeDeleteOk},
+				"test-node-3": {ResultType: status.NodeDeleteOk},
 			},
 		},
 		"nodes that failed deletion are correctly reported in results": {
-			emptyNodes: generateNodes(2, "empty"),
-			drainNodes: generateNodes(2, "drain"),
+			emptyNodes: generateNodeGroupViewList(testNg, 0, 2),
+			drainNodes: generateNodeGroupViewList(testNg, 2, 4),
 			pods: map[string][]*apiv1.Pod{
-				"drain-node-0": removablePods(2, "drain-node-0"),
-				"drain-node-1": removablePods(2, "drain-node-1"),
+				"test-node-2": removablePods(2, "test-node-2"),
+				"test-node-3": removablePods(2, "test-node-3"),
 			},
 			failedNodeDeletion: map[string]bool{
-				"empty-node-1": true,
-				"drain-node-1": true,
+				"test-node-1": true,
+				"test-node-3": true,
 			},
 			wantStatus: &status.ScaleDownStatus{
 				Result: status.ScaleDownNodeDeleteStarted,
 				ScaledDownNodes: []*status.ScaleDownNode{
 					{
-						Node:        generateNode("empty-node-0"),
+						Node:        generateNode("test-node-0"),
 						NodeGroup:   testNg,
 						EvictedPods: nil,
 						UtilInfo:    generateUtilInfo(0, 0),
 					},
 					{
-						Node:        generateNode("empty-node-1"),
+						Node:        generateNode("test-node-1"),
 						NodeGroup:   testNg,
 						EvictedPods: nil,
 						UtilInfo:    generateUtilInfo(0, 0),
 					},
 					{
-						Node:        generateNode("drain-node-0"),
+						Node:        generateNode("test-node-2"),
 						NodeGroup:   testNg,
-						EvictedPods: removablePods(2, "drain-node-0"),
+						EvictedPods: removablePods(2, "test-node-2"),
 						UtilInfo:    generateUtilInfo(2./8., 2./8.),
 					},
 					{
-						Node:        generateNode("drain-node-1"),
+						Node:        generateNode("test-node-3"),
 						NodeGroup:   testNg,
-						EvictedPods: removablePods(2, "drain-node-1"),
+						EvictedPods: removablePods(2, "test-node-3"),
 						UtilInfo:    generateUtilInfo(2./8., 2./8.),
 					},
 				},
 			},
-			wantDeletedNodes: []string{"empty-node-0", "drain-node-0"},
+			wantDeletedNodes: []string{"test-node-0", "test-node-2"},
 			wantDeletedPods: []string{
-				"drain-node-0-pod-0", "drain-node-0-pod-1",
-				"drain-node-1-pod-0", "drain-node-1-pod-1",
+				"test-node-2-pod-0", "test-node-2-pod-1",
+				"test-node-3-pod-0", "test-node-3-pod-1",
 			},
 			wantTaintUpdates: map[string][][]apiv1.Taint{
-				"empty-node-0": {
+				"test-node-0": {
 					{toBeDeletedTaint},
 				},
-				"empty-node-1": {
+				"test-node-1": {
 					{toBeDeletedTaint},
 					{},
 				},
-				"drain-node-0": {
+				"test-node-2": {
 					{toBeDeletedTaint},
 				},
-				"drain-node-1": {
+				"test-node-3": {
 					{toBeDeletedTaint},
 					{},
 				},
 			},
 			wantNodeDeleteResults: map[string]status.NodeDeleteResult{
-				"empty-node-0": {ResultType: status.NodeDeleteOk},
-				"empty-node-1": {ResultType: status.NodeDeleteErrorFailedToDelete, Err: cmpopts.AnyError},
-				"drain-node-0": {ResultType: status.NodeDeleteOk},
-				"drain-node-1": {ResultType: status.NodeDeleteErrorFailedToDelete, Err: cmpopts.AnyError},
+				"test-node-0": {ResultType: status.NodeDeleteOk},
+				"test-node-1": {ResultType: status.NodeDeleteErrorFailedToDelete, Err: cmpopts.AnyError},
+				"test-node-2": {ResultType: status.NodeDeleteOk},
+				"test-node-3": {ResultType: status.NodeDeleteErrorFailedToDelete, Err: cmpopts.AnyError},
 			},
 		},
 		"DS pods are evicted from empty nodes, but don't block deletion on error": {
-			emptyNodes: generateNodes(2, "empty"),
+			emptyNodes: generateNodeGroupViewList(testNg, 0, 2),
 			pods: map[string][]*apiv1.Pod{
-				"empty-node-0": {generateDsPod("empty-node-0-ds-pod-0", "empty-node-0"), generateDsPod("empty-node-0-ds-pod-1", "empty-node-0")},
-				"empty-node-1": {generateDsPod("empty-node-1-ds-pod-0", "empty-node-1"), generateDsPod("empty-node-1-ds-pod-1", "empty-node-1")},
+				"test-node-0": generateDsPods(2, "test-node-0"),
+				"test-node-1": generateDsPods(2, "test-node-1"),
 			},
-			failedPodDrain: map[string]bool{"empty-node-1-ds-pod-0": true},
+			failedPodDrain: map[string]bool{"test-node-1-ds-pod-0": true},
 			wantStatus: &status.ScaleDownStatus{
 				Result: status.ScaleDownNodeDeleteStarted,
 				ScaledDownNodes: []*status.ScaleDownNode{
 					{
-						Node:        generateNode("empty-node-0"),
+						Node:        generateNode("test-node-0"),
 						NodeGroup:   testNg,
 						EvictedPods: nil,
-						UtilInfo:    generateUtilInfo(2./8., 2./8.),
+						UtilInfo:    dsUtilInfo,
+					},
+					{
+						Node:        generateNode("test-node-1"),
+						NodeGroup:   testNg,
+						EvictedPods: nil,
+						UtilInfo:    dsUtilInfo,
+					},
+				},
+			},
+			wantDeletedNodes: []string{"test-node-0", "test-node-1"},
+			wantDeletedPods:  []string{"test-node-0-ds-pod-0", "test-node-0-ds-pod-1", "test-node-1-ds-pod-1"},
+			wantTaintUpdates: map[string][][]apiv1.Taint{
+				"test-node-0": {
+					{toBeDeletedTaint},
+				},
+				"test-node-1": {
+					{toBeDeletedTaint},
+				},
+			},
+			wantNodeDeleteResults: map[string]status.NodeDeleteResult{
+				"test-node-0": {ResultType: status.NodeDeleteOk},
+				"test-node-1": {ResultType: status.NodeDeleteOk},
+			},
+		},
+		"DS pods and deletion with drain": {
+			drainNodes: generateNodeGroupViewList(testNg, 0, 2),
+			pods: map[string][]*apiv1.Pod{
+				"test-node-0": generateDsPods(2, "test-node-0"),
+				"test-node-1": generateDsPods(2, "test-node-1"),
+			},
+			wantStatus: &status.ScaleDownStatus{
+				Result: status.ScaleDownNodeDeleteStarted,
+				ScaledDownNodes: []*status.ScaleDownNode{
+					{
+						Node:      generateNode("test-node-0"),
+						NodeGroup: testNg,
+						// this is nil because DaemonSetEvictionForOccupiedNodes is
+						// not enabled for drained nodes in this test suite
+						EvictedPods: nil,
+						UtilInfo:    dsUtilInfo,
 					},
 					{
-						Node:        generateNode("empty-node-1"),
+						Node:      generateNode("test-node-1"),
+						NodeGroup: testNg,
+						// this is nil because DaemonSetEvictionForOccupiedNodes is
+						// not enabled for drained nodes in this test suite
+						EvictedPods: nil,
+						UtilInfo:    dsUtilInfo,
+					},
+				},
+			},
+			wantDeletedNodes: []string{"test-node-0", "test-node-1"},
+			// same as evicted pods
+			wantDeletedPods: nil,
+			wantTaintUpdates: map[string][][]apiv1.Taint{
+				"test-node-0": {
+					{toBeDeletedTaint},
+				},
+				"test-node-1": {
+					{toBeDeletedTaint},
+				},
+			},
+			wantNodeDeleteResults: map[string]status.NodeDeleteResult{
+				"test-node-0": {ResultType: status.NodeDeleteOk},
+				"test-node-1": {ResultType: status.NodeDeleteOk},
+			},
+		},
+		"DS pods and empty and drain deletion work correctly together": {
+			emptyNodes: generateNodeGroupViewList(testNg, 0, 2),
+			drainNodes: generateNodeGroupViewList(testNg, 2, 4),
+			pods: map[string][]*apiv1.Pod{
+				"test-node-2": removablePods(2, "test-node-2"),
+				"test-node-3": generateDsPods(2, "test-node-3"),
+			},
+			wantStatus: &status.ScaleDownStatus{
+				Result: status.ScaleDownNodeDeleteStarted,
+				ScaledDownNodes: []*status.ScaleDownNode{
+					{
+						Node:        generateNode("test-node-0"),
 						NodeGroup:   testNg,
 						EvictedPods: nil,
+						UtilInfo:    generateUtilInfo(0, 0),
+					},
+					{
+						Node:        generateNode("test-node-1"),
+						NodeGroup:   testNg,
+						EvictedPods: nil,
+						UtilInfo:    generateUtilInfo(0, 0),
+					},
+					{
+						Node:        generateNode("test-node-2"),
+						NodeGroup:   testNg,
+						EvictedPods: removablePods(2, "test-node-2"),
 						UtilInfo:    generateUtilInfo(2./8., 2./8.),
 					},
+					{
+						Node:      generateNode("test-node-3"),
+						NodeGroup: testNg,
+						// this is nil because DaemonSetEvictionForOccupiedNodes is
+						// not enabled for drained nodes in this test suite
+						EvictedPods: nil,
+						UtilInfo:    dsUtilInfo,
+					},
 				},
 			},
-			wantDeletedNodes: []string{"empty-node-0", "empty-node-1"},
-			wantDeletedPods:  []string{"empty-node-0-ds-pod-0", "empty-node-0-ds-pod-1", "empty-node-1-ds-pod-1"},
+			wantDeletedNodes: []string{"test-node-0", "test-node-1", "test-node-2", "test-node-3"},
+			// same as evicted pods
+			wantDeletedPods: nil,
 			wantTaintUpdates: map[string][][]apiv1.Taint{
-				"empty-node-0": {
+				"test-node-0": {
+					{toBeDeletedTaint},
+				},
+				"test-node-1": {
+					{toBeDeletedTaint},
+				},
+				"test-node-2": {
 					{toBeDeletedTaint},
 				},
-				"empty-node-1": {
+				"test-node-3": {
 					{toBeDeletedTaint},
 				},
 			},
 			wantNodeDeleteResults: map[string]status.NodeDeleteResult{
-				"empty-node-0": {ResultType: status.NodeDeleteOk},
-				"empty-node-1": {ResultType: status.NodeDeleteOk},
+				"test-node-0": {ResultType: status.NodeDeleteOk},
+				"test-node-1": {ResultType: status.NodeDeleteOk},
+				"test-node-2": {ResultType: status.NodeDeleteOk},
+				"test-node-3": {ResultType: status.NodeDeleteOk},
 			},
 		},
 		"nodes with pods are not deleted if the node is passed as empty": {
-			emptyNodes: generateNodes(2, "empty-but-with-pods"),
+			emptyNodes: generateNodeGroupViewList(testNg, 0, 2),
 			pods: map[string][]*apiv1.Pod{
-				"empty-but-with-pods-node-0": removablePods(2, "empty-but-with-pods-node-0"),
-				"empty-but-with-pods-node-1": removablePods(2, "empty-but-with-pods-node-1"),
+				"test-node-0": removablePods(2, "test-node-0"),
+				"test-node-1": removablePods(2, "test-node-1"),
 			},
 			wantStatus: &status.ScaleDownStatus{
 				Result: status.ScaleDownNodeDeleteStarted,
 				ScaledDownNodes: []*status.ScaleDownNode{
 					{
-						Node:        generateNode("empty-but-with-pods-node-0"),
+						Node:        generateNode("test-node-0"),
 						NodeGroup:   testNg,
 						EvictedPods: nil,
 						UtilInfo:    generateUtilInfo(2./8., 2./8.),
 					},
 					{
-						Node:        generateNode("empty-but-with-pods-node-1"),
+						Node:        generateNode("test-node-1"),
 						NodeGroup:   testNg,
 						EvictedPods: nil,
 						UtilInfo:    generateUtilInfo(2./8., 2./8.),
@@ -694,253 +838,371 @@ func TestStartDeletion(t *testing.T) {
 			wantDeletedNodes: nil,
 			wantDeletedPods:  nil,
 			wantTaintUpdates: map[string][][]apiv1.Taint{
-				"empty-but-with-pods-node-0": {
+				"test-node-0": {
 					{toBeDeletedTaint},
 					{},
 				},
-				"empty-but-with-pods-node-1": {
+				"test-node-1": {
 					{toBeDeletedTaint},
 					{},
 				},
 			},
 			wantNodeDeleteResults: map[string]status.NodeDeleteResult{
-				"empty-but-with-pods-node-0": {ResultType: status.NodeDeleteErrorInternal, Err: cmpopts.AnyError},
-				"empty-but-with-pods-node-1": {ResultType: status.NodeDeleteErrorInternal, Err: cmpopts.AnyError},
+				"test-node-0": {ResultType: status.NodeDeleteErrorInternal, Err: cmpopts.AnyError},
+				"test-node-1": {ResultType: status.NodeDeleteErrorInternal, Err: cmpopts.AnyError},
 			},
 		},
-	} {
-		t.Run(tn, func(t *testing.T) {
-			// This is needed because the tested code starts goroutines that can technically live longer than the execution
-			// of a single test case, and the goroutines eventually access tc in fakeClient hooks below.
-			tc := tc
-			// Insert all nodes into a map to support live node updates and GETs.
-			nodesByName := make(map[string]*apiv1.Node)
-			nodesLock := sync.Mutex{}
-			for _, node := range tc.emptyNodes {
-				nodesByName[node.Name] = node
-			}
-			for _, node := range tc.drainNodes {
-				nodesByName[node.Name] = node
-			}
+		"atomic nodes with pods are not deleted if the node is passed as empty": {
+			emptyNodes: append(
+				generateNodeGroupViewList(testNg, 0, 2),
+				generateNodeGroupViewList(atomic2pods, 0, 2)...,
+			),
+			pods: map[string][]*apiv1.Pod{
+				"test-node-1":          removablePods(2, "test-node-1"),
+				"atomic-2-pods-node-1": removablePods(2, "atomic-2-pods-node-1"),
+			},
+			wantStatus: &status.ScaleDownStatus{
+				Result: status.ScaleDownNodeDeleteStarted,
+				ScaledDownNodes: []*status.ScaleDownNode{
+					{
+						Node:        generateNode("test-node-0"),
+						NodeGroup:   testNg,
+						EvictedPods: nil,
+						UtilInfo:    generateUtilInfo(0, 0),
+					},
+					{
+						Node:        generateNode("test-node-1"),
+						NodeGroup:   testNg,
+						EvictedPods: nil,
+						UtilInfo:    generateUtilInfo(2./8., 2./8.),
+					},
+					{
+						Node:        generateNode("atomic-2-pods-node-0"),
+						NodeGroup:   atomic2pods,
+						EvictedPods: nil,
+						UtilInfo:    generateUtilInfo(0, 0),
+					},
+					{
+						Node:        generateNode("atomic-2-pods-node-1"),
+						NodeGroup:   atomic2pods,
+						EvictedPods: nil,
+						UtilInfo:    generateUtilInfo(2./8., 2./8.),
+					},
+				},
+			},
+			wantDeletedNodes: []string{"test-node-0"},
+			wantDeletedPods:  nil,
+			wantTaintUpdates: map[string][][]apiv1.Taint{
+				"test-node-0": {
+					{toBeDeletedTaint},
+				},
+				"test-node-1": {
+					{toBeDeletedTaint},
+					{},
+				},
+				"atomic-2-pods-node-0": {
+					{toBeDeletedTaint},
+					{},
+				},
+				"atomic-2-pods-node-1": {
+					{toBeDeletedTaint},
+					{},
+				},
+			},
+			wantNodeDeleteResults: map[string]status.NodeDeleteResult{
+				"test-node-0":          {ResultType: status.NodeDeleteOk},
+				"test-node-1":          {ResultType: status.NodeDeleteErrorInternal, Err: cmpopts.AnyError},
+				"atomic-2-pods-node-0": {ResultType: status.NodeDeleteErrorFailedToDelete, Err: cmpopts.AnyError},
+				"atomic-2-pods-node-1": {ResultType: status.NodeDeleteErrorInternal, Err: cmpopts.AnyError},
+			},
+		},
+	}
 
-			// Set up a fake k8s client to hook and verify certain actions.
-			fakeClient := &fake.Clientset{}
-			type nodeTaints struct {
-				nodeName string
-				taints   []apiv1.Taint
-			}
-			taintUpdates := make(chan nodeTaints, 10)
-			deletedNodes := make(chan string, 10)
-			deletedPods := make(chan string, 10)
+	testCasesWithNGNames := map[string]startDeletionTestCase{}
+	for k, v := range testCases {
+		testCasesWithNGNames[k+" "+suffix] = v
+	}
+
+	return testCasesWithNGNames
+}
+
+func TestStartDeletion(t *testing.T) {
+	testNg1 := testprovider.NewTestNodeGroup("test", 100, 0, 3, true, false, "n1-standard-2", nil, nil)
+	opts1 := &config.NodeGroupAutoscalingOptions{
+		IgnoreDaemonSetsUtilization: false,
+	}
+	testNg1.SetOptions(opts1)
+	testNg2 := testprovider.NewTestNodeGroup("test", 100, 0, 3, true, false, "n1-standard-2", nil, nil)
+	opts2 := &config.NodeGroupAutoscalingOptions{
+		IgnoreDaemonSetsUtilization: true,
+	}
+	testNg2.SetOptions(opts2)
-			ds := generateDaemonSet()
+	testSets := []map[string]startDeletionTestCase{
+		// IgnoreDaemonSetsUtilization is false
+		getStartDeletionTestCases(testNg1, opts1.IgnoreDaemonSetsUtilization, "testNg1"),
+		// IgnoreDaemonSetsUtilization is true
+		getStartDeletionTestCases(testNg2, opts2.IgnoreDaemonSetsUtilization, "testNg2"),
+	}
 
-			// We're faking the whole k8s client, and some of the code needs to get live nodes and pods, so GET on nodes and pods has to be set up.
-			fakeClient.Fake.AddReactor("get", "nodes", func(action core.Action) (bool, runtime.Object, error) {
-				nodesLock.Lock()
-				defer nodesLock.Unlock()
-				getAction := action.(core.GetAction)
-				node, found := nodesByName[getAction.GetName()]
-				if !found {
-					return true, nil, fmt.Errorf("node %q not found", getAction.GetName())
+	for _, testSet := range testSets {
+		for tn, tc := range testSet {
+			t.Run(tn, func(t *testing.T) {
+				// This is needed because the tested code starts goroutines that can technically live longer than the execution
+				// of a single test case, and the goroutines eventually access tc in fakeClient hooks below.
+				tc := tc
+				// Insert all nodes into a map to support live node updates and GETs.
+				allEmptyNodes, allDrainNodes := []*apiv1.Node{}, []*apiv1.Node{}
+				nodesByName := make(map[string]*apiv1.Node)
+				nodesLock := sync.Mutex{}
+				for _, bucket := range tc.emptyNodes {
+					allEmptyNodes = append(allEmptyNodes, bucket.Nodes...)
+					for _, node := range allEmptyNodes {
+						nodesByName[node.Name] = node
+					}
 				}
-				return true, node, nil
-			})
-			fakeClient.Fake.AddReactor("get", "pods",
-				func(action core.Action) (bool, runtime.Object, error) {
-					return true, nil, errors.NewNotFound(apiv1.Resource("pod"), "whatever")
-				})
-			// Hook node update to gather all taint updates, and to fail the update for certain nodes to simulate errors.
-			fakeClient.Fake.AddReactor("update", "nodes",
-				func(action core.Action) (bool, runtime.Object, error) {
+				for _, bucket := range tc.drainNodes {
+					allDrainNodes = append(allDrainNodes, bucket.Nodes...)
+					for _, node := range bucket.Nodes {
+						nodesByName[node.Name] = node
+					}
+				}
+
+				// Set up a fake k8s client to hook and verify certain actions.
+				fakeClient := &fake.Clientset{}
+				type nodeTaints struct {
+					nodeName string
+					taints   []apiv1.Taint
+				}
+				taintUpdates := make(chan nodeTaints, 10)
+				deletedNodes := make(chan string, 10)
+				deletedPods := make(chan string, 10)
+
+				ds := generateDaemonSet()
+
+				// We're faking the whole k8s client, and some of the code needs to get live nodes and pods, so GET on nodes and pods has to be set up.
+				fakeClient.Fake.AddReactor("get", "nodes", func(action core.Action) (bool, runtime.Object, error) {
 					nodesLock.Lock()
 					defer nodesLock.Unlock()
-					update := action.(core.UpdateAction)
-					obj := update.GetObject().(*apiv1.Node)
-					if tc.failedNodeTaint[obj.Name] {
-						return true, nil, fmt.Errorf("SIMULATED ERROR: won't taint")
+					getAction := action.(core.GetAction)
+					node, found := nodesByName[getAction.GetName()]
+					if !found {
+						return true, nil, fmt.Errorf("node %q not found", getAction.GetName())
 					}
-					nt := nodeTaints{
-						nodeName: obj.Name,
-					}
-					for _, taint := range obj.Spec.Taints {
-						nt.taints = append(nt.taints, taint)
-					}
-					taintUpdates <- nt
-					nodesByName[obj.Name] = obj.DeepCopy()
-					return true, obj, nil
+					return true, node, nil
 				})
-			// Hook eviction creation to gather which pods were evicted, and to fail the eviction for certain pods to simulate errors.
-			fakeClient.Fake.AddReactor("create", "pods",
-				func(action core.Action) (bool, runtime.Object, error) {
-					createAction := action.(core.CreateAction)
-					if createAction == nil {
-						return false, nil, nil
+				fakeClient.Fake.AddReactor("get", "pods",
+					func(action core.Action) (bool, runtime.Object, error) {
+						return true, nil, errors.NewNotFound(apiv1.Resource("pod"), "whatever")
+					})
+				// Hook node update to gather all taint updates, and to fail the update for certain nodes to simulate errors.
+				fakeClient.Fake.AddReactor("update", "nodes",
+					func(action core.Action) (bool, runtime.Object, error) {
+						nodesLock.Lock()
+						defer nodesLock.Unlock()
+						update := action.(core.UpdateAction)
+						obj := update.GetObject().(*apiv1.Node)
+						if tc.failedNodeTaint[obj.Name] {
+							return true, nil, fmt.Errorf("SIMULATED ERROR: won't taint")
+						}
+						nt := nodeTaints{
+							nodeName: obj.Name,
+						}
+						for _, taint := range obj.Spec.Taints {
+							nt.taints = append(nt.taints, taint)
+						}
+						taintUpdates <- nt
+						nodesByName[obj.Name] = obj.DeepCopy()
+						return true, obj, nil
+					})
+				// Hook eviction creation to gather which pods were evicted, and to fail the eviction for certain pods to simulate errors.
+				fakeClient.Fake.AddReactor("create", "pods",
+					func(action core.Action) (bool, runtime.Object, error) {
+						createAction := action.(core.CreateAction)
+						if createAction == nil {
+							return false, nil, nil
+						}
+						eviction := createAction.GetObject().(*policyv1beta1.Eviction)
+						if eviction == nil {
+							return false, nil, nil
+						}
+						if tc.failedPodDrain[eviction.Name] {
+							return true, nil, fmt.Errorf("SIMULATED ERROR: won't evict")
+						}
+						deletedPods <- eviction.Name
+						return true, nil, nil
+					})
+
+				// Hook node deletion at the level of cloud provider, to gather which nodes were deleted, and to fail the deletion for
+				// certain nodes to simulate errors.
+				provider := testprovider.NewTestCloudProvider(nil, func(nodeGroup string, node string) error {
+					if tc.failedNodeDeletion[node] {
+						return fmt.Errorf("SIMULATED ERROR: won't remove node")
 					}
-					eviction := createAction.GetObject().(*policyv1beta1.Eviction)
-					if eviction == nil {
-						return false, nil, nil
+					deletedNodes <- node
+					return nil
+				})
+				for _, bucket := range tc.emptyNodes {
+					bucket.Group.(*testprovider.TestNodeGroup).SetCloudProvider(provider)
+					provider.InsertNodeGroup(bucket.Group)
+					for _, node := range bucket.Nodes {
+						provider.AddNode(bucket.Group.Id(), node)
 					}
-					if tc.failedPodDrain[eviction.Name] {
-						return true, nil, fmt.Errorf("SIMULATED ERROR: won't evict")
+				}
+				for _, bucket := range tc.drainNodes {
+					bucket.Group.(*testprovider.TestNodeGroup).SetCloudProvider(provider)
+					provider.InsertNodeGroup(bucket.Group)
+					for _, node := range bucket.Nodes {
+						provider.AddNode(bucket.Group.Id(), node)
 					}
-					deletedPods <- eviction.Name
-					return true, nil, nil
-				})
-
-			// Hook node deletion at the level of cloud provider, to gather which nodes were deleted, and to fail the deletion for
-			// certain nodes to simulate errors.
-			provider := testprovider.NewTestCloudProvider(nil, func(nodeGroup string, node string) error {
-				if tc.failedNodeDeletion[node] {
-					return fmt.Errorf("SIMULATED ERROR: won't remove node")
 				}
-				deletedNodes <- node
-				return nil
-			})
-			testNg.SetCloudProvider(provider)
-			provider.InsertNodeGroup(testNg)
-			for _, node := range nodesByName {
-				provider.AddNode("test-ng", node)
-			}
-			// Set up other needed structures and options.
-			opts := config.AutoscalingOptions{
-				MaxScaleDownParallelism:        10,
-				MaxDrainParallelism:            5,
-				MaxPodEvictionTime:             0,
-				DaemonSetEvictionForEmptyNodes: true,
-			}
+				// Set up other needed structures and options.
+				opts := config.AutoscalingOptions{
+					MaxScaleDownParallelism:        10,
+					MaxDrainParallelism:            5,
+					MaxPodEvictionTime:             0,
+					DaemonSetEvictionForEmptyNodes: true,
+				}
-			allPods := []*apiv1.Pod{}
+				allPods := []*apiv1.Pod{}
-			for _, pods := range tc.pods {
-				allPods = append(allPods, pods...)
-			}
+				for _, pods := range tc.pods {
+					allPods = append(allPods, pods...)
+				}
-			podLister := kube_util.NewTestPodLister(allPods)
-			pdbLister := kube_util.NewTestPodDisruptionBudgetLister([]*policyv1.PodDisruptionBudget{})
-			dsLister, err := kube_util.NewTestDaemonSetLister([]*appsv1.DaemonSet{ds})
-			if err != nil {
-				t.Fatalf("Couldn't create daemonset lister")
-			}
+				podLister := kube_util.NewTestPodLister(allPods)
+				pdbLister := kube_util.NewTestPodDisruptionBudgetLister([]*policyv1.PodDisruptionBudget{})
+				dsLister, err := kube_util.NewTestDaemonSetLister([]*appsv1.DaemonSet{ds})
+				if err != nil {
+					t.Fatalf("Couldn't create daemonset lister")
+				}
-			registry := kube_util.NewListerRegistry(nil, nil, podLister, nil, pdbLister, dsLister, nil, nil, nil, nil)
-			ctx, err := NewScaleTestAutoscalingContext(opts, fakeClient, registry, provider, nil, nil)
-			if err != nil {
-				t.Fatalf("Couldn't set up autoscaling context: %v", err)
-			}
-			csr := clusterstate.NewClusterStateRegistry(provider, clusterstate.ClusterStateRegistryConfig{}, ctx.LogRecorder, NewBackoff(), clusterstate.NewStaticMaxNodeProvisionTimeProvider(15*time.Minute))
-			for _, node := range tc.emptyNodes {
-				err := ctx.ClusterSnapshot.AddNodeWithPods(node, tc.pods[node.Name])
+				registry := kube_util.NewListerRegistry(nil, nil, podLister, nil, pdbLister, dsLister, nil, nil, nil, nil)
+				ctx, err := NewScaleTestAutoscalingContext(opts, fakeClient, registry, provider, nil, nil)
 				if err != nil {
-					t.Fatalf("Couldn't add node %q to snapshot: %v", node.Name, err)
+					t.Fatalf("Couldn't set up autoscaling context: %v", err)
 				}
-			}
-			for _, node := range tc.drainNodes {
-				pods, found := tc.pods[node.Name]
-				if !found {
-					t.Fatalf("Drain node 
%q doesn't have pods defined in the test case.", node.Name) + csr := clusterstate.NewClusterStateRegistry(provider, clusterstate.ClusterStateRegistryConfig{}, ctx.LogRecorder, NewBackoff(), nodegroupconfig.NewDefaultNodeGroupConfigProcessor(config.NodeGroupAutoscalingOptions{MaxNodeProvisionTime: 15 * time.Minute})) + for _, bucket := range tc.emptyNodes { + for _, node := range bucket.Nodes { + err := ctx.ClusterSnapshot.AddNodeWithPods(node, tc.pods[node.Name]) + if err != nil { + t.Fatalf("Couldn't add node %q to snapshot: %v", node.Name, err) + } + } } - err := ctx.ClusterSnapshot.AddNodeWithPods(node, pods) - if err != nil { - t.Fatalf("Couldn't add node %q to snapshot: %v", node.Name, err) + for _, bucket := range tc.drainNodes { + for _, node := range bucket.Nodes { + pods, found := tc.pods[node.Name] + if !found { + t.Fatalf("Drain node %q doesn't have pods defined in the test case.", node.Name) + } + err := ctx.ClusterSnapshot.AddNodeWithPods(node, pods) + if err != nil { + t.Fatalf("Couldn't add node %q to snapshot: %v", node.Name, err) + } + } } - } - // Create Actuator, run StartDeletion, and verify the error. - ndt := deletiontracker.NewNodeDeletionTracker(0) - actuator := Actuator{ - ctx: &ctx, clusterState: csr, nodeDeletionTracker: ndt, - nodeDeletionBatcher: NewNodeDeletionBatcher(&ctx, csr, ndt, 0*time.Second), - evictor: Evictor{EvictionRetryTime: 0, DsEvictionRetryTime: 0, DsEvictionEmptyNodeTimeout: 0, PodEvictionHeadroom: DefaultPodEvictionHeadroom}, - } - gotStatus, gotErr := actuator.StartDeletion(tc.emptyNodes, tc.drainNodes) - if diff := cmp.Diff(tc.wantErr, gotErr, cmpopts.EquateErrors()); diff != "" { - t.Errorf("StartDeletion error diff (-want +got):\n%s", diff) - } + // Create Actuator, run StartDeletion, and verify the error. 
+ ndt := deletiontracker.NewNodeDeletionTracker(0) + ndb := NewNodeDeletionBatcher(&ctx, csr, ndt, 0*time.Second) + evictor := Evictor{EvictionRetryTime: 0, DsEvictionRetryTime: 0, DsEvictionEmptyNodeTimeout: 0, PodEvictionHeadroom: DefaultPodEvictionHeadroom} + actuator := Actuator{ + ctx: &ctx, clusterState: csr, nodeDeletionTracker: ndt, + nodeDeletionScheduler: NewGroupDeletionScheduler(&ctx, ndt, ndb, evictor), + budgetProcessor: budgets.NewScaleDownBudgetProcessor(&ctx), + configGetter: nodegroupconfig.NewDefaultNodeGroupConfigProcessor(ctx.NodeGroupDefaults), + } + gotStatus, gotErr := actuator.StartDeletion(allEmptyNodes, allDrainNodes) + if diff := cmp.Diff(tc.wantErr, gotErr, cmpopts.EquateErrors()); diff != "" { + t.Errorf("StartDeletion error diff (-want +got):\n%s", diff) + } - // Verify ScaleDownStatus looks as expected. - ignoreSdNodeOrder := cmpopts.SortSlices(func(a, b *status.ScaleDownNode) bool { return a.Node.Name < b.Node.Name }) - ignoreTimestamps := cmpopts.IgnoreFields(status.ScaleDownStatus{}, "NodeDeleteResultsAsOf") - cmpNg := cmp.Comparer(func(a, b *testprovider.TestNodeGroup) bool { return a.Id() == b.Id() }) - statusCmpOpts := cmp.Options{ignoreSdNodeOrder, ignoreTimestamps, cmpNg, cmpopts.EquateEmpty()} - if diff := cmp.Diff(tc.wantStatus, gotStatus, statusCmpOpts); diff != "" { - t.Errorf("StartDeletion status diff (-want +got):\n%s", diff) - } + // Verify ScaleDownStatus looks as expected. 
+ ignoreSdNodeOrder := cmpopts.SortSlices(func(a, b *status.ScaleDownNode) bool { return a.Node.Name < b.Node.Name }) + ignoreTimestamps := cmpopts.IgnoreFields(status.ScaleDownStatus{}, "NodeDeleteResultsAsOf") + cmpNg := cmp.Comparer(func(a, b *testprovider.TestNodeGroup) bool { return a.Id() == b.Id() }) + statusCmpOpts := cmp.Options{ignoreSdNodeOrder, ignoreTimestamps, cmpNg, cmpopts.EquateEmpty()} + if diff := cmp.Diff(tc.wantStatus, gotStatus, statusCmpOpts); diff != "" { + t.Errorf("StartDeletion status diff (-want +got):\n%s", diff) + } - // Verify that all expected nodes were deleted using the cloud provider hook. - var gotDeletedNodes []string - nodesLoop: - for i := 0; i < len(tc.wantDeletedNodes); i++ { - select { - case deletedNode := <-deletedNodes: - gotDeletedNodes = append(gotDeletedNodes, deletedNode) - case <-time.After(3 * time.Second): - t.Errorf("Timeout while waiting for deleted nodes.") - break nodesLoop + // Verify that all expected nodes were deleted using the cloud provider hook. + var gotDeletedNodes []string + nodesLoop: + for i := 0; i < len(tc.wantDeletedNodes); i++ { + select { + case deletedNode := <-deletedNodes: + gotDeletedNodes = append(gotDeletedNodes, deletedNode) + case <-time.After(3 * time.Second): + t.Errorf("Timeout while waiting for deleted nodes.") + break nodesLoop + } + } + ignoreStrOrder := cmpopts.SortSlices(func(a, b string) bool { return a < b }) + if diff := cmp.Diff(tc.wantDeletedNodes, gotDeletedNodes, ignoreStrOrder); diff != "" { + t.Errorf("deletedNodes diff (-want +got):\n%s", diff) } - } - ignoreStrOrder := cmpopts.SortSlices(func(a, b string) bool { return a < b }) - if diff := cmp.Diff(tc.wantDeletedNodes, gotDeletedNodes, ignoreStrOrder); diff != "" { - t.Errorf("deletedNodes diff (-want +got):\n%s", diff) - } - // Verify that all expected pods were deleted using the fake k8s client hook. 
- var gotDeletedPods []string - podsLoop: - for i := 0; i < len(tc.wantDeletedPods); i++ { - select { - case deletedPod := <-deletedPods: - gotDeletedPods = append(gotDeletedPods, deletedPod) - case <-time.After(3 * time.Second): - t.Errorf("Timeout while waiting for deleted pods.") - break podsLoop + // Verify that all expected pods were deleted using the fake k8s client hook. + var gotDeletedPods []string + podsLoop: + for i := 0; i < len(tc.wantDeletedPods); i++ { + select { + case deletedPod := <-deletedPods: + gotDeletedPods = append(gotDeletedPods, deletedPod) + case <-time.After(3 * time.Second): + t.Errorf("Timeout while waiting for deleted pods.") + break podsLoop + } + } + if diff := cmp.Diff(tc.wantDeletedPods, gotDeletedPods, ignoreStrOrder); diff != "" { + t.Errorf("deletedPods diff (-want +got):\n%s", diff) } - } - if diff := cmp.Diff(tc.wantDeletedPods, gotDeletedPods, ignoreStrOrder); diff != "" { - t.Errorf("deletedPods diff (-want +got):\n%s", diff) - } - // Verify that all expected taint updates happened using the fake k8s client hook. - allUpdatesCount := 0 - for _, updates := range tc.wantTaintUpdates { - allUpdatesCount += len(updates) - } - gotTaintUpdates := make(map[string][][]apiv1.Taint) - taintsLoop: - for i := 0; i < allUpdatesCount; i++ { - select { - case taintUpdate := <-taintUpdates: - gotTaintUpdates[taintUpdate.nodeName] = append(gotTaintUpdates[taintUpdate.nodeName], taintUpdate.taints) - case <-time.After(3 * time.Second): - t.Errorf("Timeout while waiting for taint updates.") - break taintsLoop + // Verify that all expected taint updates happened using the fake k8s client hook. 
+ allUpdatesCount := 0 + for _, updates := range tc.wantTaintUpdates { + allUpdatesCount += len(updates) + } + gotTaintUpdates := make(map[string][][]apiv1.Taint) + taintsLoop: + for i := 0; i < allUpdatesCount; i++ { + select { + case taintUpdate := <-taintUpdates: + gotTaintUpdates[taintUpdate.nodeName] = append(gotTaintUpdates[taintUpdate.nodeName], taintUpdate.taints) + case <-time.After(3 * time.Second): + t.Errorf("Timeout while waiting for taint updates.") + break taintsLoop + } + } + ignoreTaintValue := cmpopts.IgnoreFields(apiv1.Taint{}, "Value") + if diff := cmp.Diff(tc.wantTaintUpdates, gotTaintUpdates, ignoreTaintValue, cmpopts.EquateEmpty()); diff != "" { + t.Errorf("taintUpdates diff (-want +got):\n%s", diff) } - } - ignoreTaintValue := cmpopts.IgnoreFields(apiv1.Taint{}, "Value") - if diff := cmp.Diff(tc.wantTaintUpdates, gotTaintUpdates, ignoreTaintValue, cmpopts.EquateEmpty()); diff != "" { - t.Errorf("taintUpdates diff (-want +got):\n%s", diff) - } - // Wait for all expected deletions to be reported in NodeDeletionTracker. Reporting happens shortly after the deletion - // in cloud provider we sync to above and so this will usually not wait at all. However, it can still happen - // that there is a delay between cloud provider deletion and reporting, in which case the results are not there yet - // and we need to wait for them before asserting. - err = waitForDeletionResultsCount(actuator.nodeDeletionTracker, len(tc.wantNodeDeleteResults), 3*time.Second, 200*time.Millisecond) - if err != nil { - t.Errorf("Timeout while waiting for node deletion results") - } + // Wait for all expected deletions to be reported in NodeDeletionTracker. Reporting happens shortly after the deletion + // in cloud provider we sync to above and so this will usually not wait at all. 
However, it can still happen + // that there is a delay between cloud provider deletion and reporting, in which case the results are not there yet + // and we need to wait for them before asserting. + err = waitForDeletionResultsCount(actuator.nodeDeletionTracker, len(tc.wantNodeDeleteResults), 3*time.Second, 200*time.Millisecond) + if err != nil { + t.Errorf("Timeout while waiting for node deletion results") + } - // Run StartDeletion again to gather node deletion results for deletions started in the previous call, and verify - // that they look as expected. - gotNextStatus, gotNextErr := actuator.StartDeletion(nil, nil) - if gotNextErr != nil { - t.Errorf("StartDeletion unexpected error: %v", gotNextErr) - } - if diff := cmp.Diff(tc.wantNodeDeleteResults, gotNextStatus.NodeDeleteResults, cmpopts.EquateEmpty(), cmpopts.EquateErrors()); diff != "" { - t.Errorf("NodeDeleteResults diff (-want +got):\n%s", diff) - } - }) + // Run StartDeletion again to gather node deletion results for deletions started in the previous call, and verify + // that they look as expected. + gotNextStatus, gotNextErr := actuator.StartDeletion(nil, nil) + if gotNextErr != nil { + t.Errorf("StartDeletion unexpected error: %v", gotNextErr) + } + if diff := cmp.Diff(tc.wantNodeDeleteResults, gotNextStatus.NodeDeleteResults, cmpopts.EquateEmpty(), cmpopts.EquateErrors()); diff != "" { + t.Errorf("NodeDeleteResults diff (-want +got):\n%s", diff) + } + }) + } } } @@ -1057,10 +1319,11 @@ func TestStartDeletionInBatchBasic(t *testing.T) { provider.InsertNodeGroup(ng) ng.SetCloudProvider(provider) for i, num := range numNodes { - nodes := generateNodes(num, ng.Id()) - deleteNodes[i] = append(deleteNodes[i], nodes...) - for _, node := range nodes { - provider.AddNode(ng.Id(), node) + singleBucketList := generateNodeGroupViewList(ng, 0, num) + bucket := singleBucketList[0] + deleteNodes[i] = append(deleteNodes[i], bucket.Nodes...) 
+ for _, node := range bucket.Nodes { + provider.AddNode(bucket.Group.Id(), node) } } } @@ -1078,12 +1341,14 @@ func TestStartDeletionInBatchBasic(t *testing.T) { if err != nil { t.Fatalf("Couldn't set up autoscaling context: %v", err) } - csr := clusterstate.NewClusterStateRegistry(provider, clusterstate.ClusterStateRegistryConfig{}, ctx.LogRecorder, NewBackoff(), clusterstate.NewStaticMaxNodeProvisionTimeProvider(15*time.Minute)) + csr := clusterstate.NewClusterStateRegistry(provider, clusterstate.ClusterStateRegistryConfig{}, ctx.LogRecorder, NewBackoff(), nodegroupconfig.NewDefaultNodeGroupConfigProcessor(config.NodeGroupAutoscalingOptions{MaxNodeProvisionTime: 15 * time.Minute})) ndt := deletiontracker.NewNodeDeletionTracker(0) + ndb := NewNodeDeletionBatcher(&ctx, csr, ndt, deleteInterval) + evictor := Evictor{EvictionRetryTime: 0, DsEvictionRetryTime: 0, DsEvictionEmptyNodeTimeout: 0, PodEvictionHeadroom: DefaultPodEvictionHeadroom} actuator := Actuator{ ctx: &ctx, clusterState: csr, nodeDeletionTracker: ndt, - nodeDeletionBatcher: NewNodeDeletionBatcher(&ctx, csr, ndt, deleteInterval), - evictor: Evictor{EvictionRetryTime: 0, DsEvictionRetryTime: 0, DsEvictionEmptyNodeTimeout: 0, PodEvictionHeadroom: DefaultPodEvictionHeadroom}, + nodeDeletionScheduler: NewGroupDeletionScheduler(&ctx, ndt, ndb, evictor), + budgetProcessor: budgets.NewScaleDownBudgetProcessor(&ctx), } for _, nodes := range deleteNodes { @@ -1115,9 +1380,17 @@ func TestStartDeletionInBatchBasic(t *testing.T) { } } -func generateNodes(count int, prefix string) []*apiv1.Node { +func sizedNodeGroup(id string, size int, atomic bool) cloudprovider.NodeGroup { + ng := testprovider.NewTestNodeGroup(id, 10000, 0, size, true, false, "n1-standard-2", nil, nil) + ng.SetOptions(&config.NodeGroupAutoscalingOptions{ + ZeroOrMaxNodeScaling: atomic, + }) + return ng +} + +func generateNodes(from, to int, prefix string) []*apiv1.Node { var result []*apiv1.Node - for i := 0; i < count; i++ { + for i := from; i 
< to; i++ { name := fmt.Sprintf("node-%d", i) if prefix != "" { name = prefix + "-" + name @@ -1127,18 +1400,13 @@ func generateNodes(count int, prefix string) []*apiv1.Node { return result } -func generateNodesAndNodeGroupMap(count int, prefix string) map[string]*testprovider.TestNodeGroup { - result := make(map[string]*testprovider.TestNodeGroup) - for i := 0; i < count; i++ { - name := fmt.Sprintf("node-%d", i) - ngName := fmt.Sprintf("test-ng-%v", i) - if prefix != "" { - name = prefix + "-" + name - ngName = prefix + "-" + ngName - } - result[name] = testprovider.NewTestNodeGroup(ngName, 0, 100, 3, true, false, "n1-standard-2", nil, nil) +func generateNodeGroupViewList(ng cloudprovider.NodeGroup, from, to int) []*budgets.NodeGroupView { + return []*budgets.NodeGroupView{ + { + Group: ng, + Nodes: generateNodes(from, to, ng.Id()), + }, } - return result } func generateNode(name string) *apiv1.Node { @@ -1191,8 +1459,18 @@ func removablePod(name string, node string) *apiv1.Pod { } } +func generateDsPods(count int, node string) []*apiv1.Pod { + + var result []*apiv1.Pod + for i := 0; i < count; i++ { + name := fmt.Sprintf("ds-pod-%d", i) + result = append(result, generateDsPod(name, node)) + } + return result +} + func generateDsPod(name string, node string) *apiv1.Pod { - pod := removablePod(name, node) + pod := removablePod(fmt.Sprintf("%s-%s", node, name), node) pod.OwnerReferences = GenerateOwnerReferences("ds", "DaemonSet", "apps/v1", "some-uid") return pod } diff --git a/cluster-autoscaler/core/scaledown/actuation/delete_in_batch.go b/cluster-autoscaler/core/scaledown/actuation/delete_in_batch.go index a272b2a08def..ad63736ade6b 100644 --- a/cluster-autoscaler/core/scaledown/actuation/delete_in_batch.go +++ b/cluster-autoscaler/core/scaledown/actuation/delete_in_batch.go @@ -18,7 +18,6 @@ package actuation import ( "fmt" - "reflect" "sync" "time" @@ -26,6 +25,7 @@ import ( "k8s.io/autoscaler/cluster-autoscaler/core/scaledown/deletiontracker" 
"k8s.io/autoscaler/cluster-autoscaler/core/scaledown/status" "k8s.io/autoscaler/cluster-autoscaler/metrics" + "k8s.io/autoscaler/cluster-autoscaler/utils/gpu" "k8s.io/autoscaler/cluster-autoscaler/utils/kubernetes" "k8s.io/autoscaler/cluster-autoscaler/utils/taints" "k8s.io/klog/v2" @@ -67,51 +67,52 @@ func NewNodeDeletionBatcher(ctx *context.AutoscalingContext, csr *clusterstate.C } } -// AddNode adds node to delete candidates and schedule deletion. -func (d *NodeDeletionBatcher) AddNode(node *apiv1.Node, drain bool) error { +// AddNodes adds a list of nodes to the delete candidates and schedules their deletion. The deletion is performed asynchronously. +func (d *NodeDeletionBatcher) AddNodes(nodes []*apiv1.Node, nodeGroup cloudprovider.NodeGroup, drain bool) { // If delete interval is 0, than instantly start node deletion. if d.deleteInterval == 0 { - nodeGroup, err := deleteNodesFromCloudProvider(d.ctx, []*apiv1.Node{node}) + go d.deleteNodesAndRegisterStatus(nodes, drain) + return + } + first := d.addNodesToBucket(nodes, nodeGroup, drain) + if first { + // Just in case a node group implementation is not thread-safe, the async "remove" function will obtain a new instance of it to perform deletion. 
+ go func(nodeGroupId string) { + time.Sleep(d.deleteInterval) + d.remove(nodeGroupId) + }(nodeGroup.Id()) + } +} + +func (d *NodeDeletionBatcher) deleteNodesAndRegisterStatus(nodes []*apiv1.Node, drain bool) { + nodeGroup, err := deleteNodesFromCloudProvider(d.ctx, nodes) + for _, node := range nodes { if err != nil { result := status.NodeDeleteResult{ResultType: status.NodeDeleteErrorFailedToDelete, Err: err} CleanUpAndRecordFailedScaleDownEvent(d.ctx, node, nodeGroup.Id(), drain, d.nodeDeletionTracker, "", result) } else { RegisterAndRecordSuccessfulScaleDownEvent(d.ctx, d.clusterState, node, nodeGroup, drain, d.nodeDeletionTracker) } - return nil - } - nodeGroupId, first, err := d.addNodeToBucket(node, drain) - if err != nil { - return err - } - if first { - go func(nodeGroupId string) { - time.Sleep(d.deleteInterval) - d.remove(nodeGroupId) - }(nodeGroupId) } - return nil } // AddToBucket adds node to delete candidates and return if it's a first node in the group. -func (d *NodeDeletionBatcher) addNodeToBucket(node *apiv1.Node, drain bool) (string, bool, error) { +func (d *NodeDeletionBatcher) addNodesToBucket(nodes []*apiv1.Node, nodeGroup cloudprovider.NodeGroup, drain bool) bool { d.Lock() defer d.Unlock() - nodeGroup, err := d.ctx.CloudProvider.NodeGroupForNode(node) - if err != nil { - return "", false, err + for _, node := range nodes { + d.drainedNodeDeletions[node.Name] = drain } - d.drainedNodeDeletions[node.Name] = drain val, ok := d.deletionsPerNodeGroup[nodeGroup.Id()] if !ok || len(val) == 0 { - d.deletionsPerNodeGroup[nodeGroup.Id()] = []*apiv1.Node{node} - return nodeGroup.Id(), true, nil + d.deletionsPerNodeGroup[nodeGroup.Id()] = nodes + return true } - d.deletionsPerNodeGroup[nodeGroup.Id()] = append(d.deletionsPerNodeGroup[nodeGroup.Id()], node) - return nodeGroup.Id(), false, nil + d.deletionsPerNodeGroup[nodeGroup.Id()] = append(d.deletionsPerNodeGroup[nodeGroup.Id()], nodes...) 
+ return false } -// remove delete nodes of a given nodeGroup, if successful, the deletion is recorded in CSR, and an event is emitted on the node. +// remove deletes the queued nodes of a given nodeGroup. If successful, the deletion is recorded in CSR and an event is emitted on the node. func (d *NodeDeletionBatcher) remove(nodeGroupId string) error { d.Lock() defer d.Unlock() @@ -133,11 +134,10 @@ func (d *NodeDeletionBatcher) remove(nodeGroupId string) error { drain := drainedNodeDeletions[node.Name] if err != nil { result = status.NodeDeleteResult{ResultType: status.NodeDeleteErrorFailedToDelete, Err: err} - CleanUpAndRecordFailedScaleDownEvent(d.ctx, node, nodeGroup.Id(), drain, d.nodeDeletionTracker, "", result) + CleanUpAndRecordFailedScaleDownEvent(d.ctx, node, nodeGroupId, drain, d.nodeDeletionTracker, "", result) } else { RegisterAndRecordSuccessfulScaleDownEvent(d.ctx, d.clusterState, node, nodeGroup, drain, d.nodeDeletionTracker) } - } }(nodes, drainedNodeDeletions) return nil @@ -150,11 +150,8 @@ func deleteNodesFromCloudProvider(ctx *context.AutoscalingContext, nodes []*apiv if err != nil { return nodeGroup, errors.NewAutoscalerError(errors.CloudProviderError, "failed to find node group for %s: %v", nodes[0].Name, err) } - if nodeGroup == nil || reflect.ValueOf(nodeGroup).IsNil() { - return nodeGroup, errors.NewAutoscalerError(errors.InternalError, "picked node that doesn't belong to a node group: %s", nodes[0].Name) - } - if err = nodeGroup.DeleteNodes(nodes); err != nil { - return nodeGroup, errors.NewAutoscalerError(errors.CloudProviderError, "failed to delete %s: %v", nodes[0].Name, err) + if err := nodeGroup.DeleteNodes(nodes); err != nil { + return nodeGroup, errors.NewAutoscalerError(errors.CloudProviderError, "failed to delete nodes from group %s: %v", nodeGroup.Id(), err) } return nodeGroup, nil } @@ -180,3 +177,37 @@ func IsNodeBeingDeleted(node *apiv1.Node, timestamp time.Time) bool { deleteTime, _ := taints.GetToBeDeletedTime(node) return deleteTime 
!= nil && (timestamp.Sub(*deleteTime) < MaxCloudProviderNodeDeletionTime || timestamp.Sub(*deleteTime) < MaxKubernetesEmptyNodeDeletionTime) } + +// CleanUpAndRecordFailedScaleDownEvent records a failed scale-down event and logs an error. +func CleanUpAndRecordFailedScaleDownEvent(ctx *context.AutoscalingContext, node *apiv1.Node, nodeGroupId string, drain bool, nodeDeletionTracker *deletiontracker.NodeDeletionTracker, errMsg string, status status.NodeDeleteResult) { + if drain { + klog.Errorf("Scale-down: couldn't delete node %q with drain, %v, status error: %v", node.Name, errMsg, status.Err) + ctx.Recorder.Eventf(node, apiv1.EventTypeWarning, "ScaleDownFailed", "failed to drain and delete node: %v", status.Err) + + } else { + klog.Errorf("Scale-down: couldn't delete empty node, %v, status error: %v", errMsg, status.Err) + ctx.Recorder.Eventf(node, apiv1.EventTypeWarning, "ScaleDownFailed", "failed to delete empty node: %v", status.Err) + } + taints.CleanToBeDeleted(node, ctx.ClientSet, ctx.CordonNodeBeforeTerminate) + nodeDeletionTracker.EndDeletion(nodeGroupId, node.Name, status) +} + +// RegisterAndRecordSuccessfulScaleDownEvent registers the scale-down in CSR and records a successful scale-down event. 
+func RegisterAndRecordSuccessfulScaleDownEvent(ctx *context.AutoscalingContext, csr *clusterstate.ClusterStateRegistry, node *apiv1.Node, nodeGroup cloudprovider.NodeGroup, drain bool, nodeDeletionTracker *deletiontracker.NodeDeletionTracker) { + ctx.Recorder.Eventf(node, apiv1.EventTypeNormal, "ScaleDown", "nodes removed by cluster autoscaler") + csr.RegisterScaleDown(&clusterstate.ScaleDownRequest{ + NodeGroup: nodeGroup, + NodeName: node.Name, + Time: time.Now(), + ExpectedDeleteTime: time.Now().Add(MaxCloudProviderNodeDeletionTime), + }) + gpuConfig := ctx.CloudProvider.GetNodeGpuConfig(node) + metricResourceName, metricGpuType := gpu.GetGpuInfoForMetrics(gpuConfig, ctx.CloudProvider.GetAvailableGPUTypes(), node, nodeGroup) + metrics.RegisterScaleDown(1, metricResourceName, metricGpuType, nodeScaleDownReason(node, drain)) + if drain { + ctx.LogRecorder.Eventf(apiv1.EventTypeNormal, "ScaleDown", "Scale-down: node %s removed with drain", node.Name) + } else { + ctx.LogRecorder.Eventf(apiv1.EventTypeNormal, "ScaleDownEmpty", "Scale-down: empty node %s removed", node.Name) + } + nodeDeletionTracker.EndDeletion(nodeGroup.Id(), node.Name, status.NodeDeleteResult{ResultType: status.NodeDeleteOk}) +} diff --git a/cluster-autoscaler/core/scaledown/actuation/delete_in_batch_test.go b/cluster-autoscaler/core/scaledown/actuation/delete_in_batch_test.go index d60328e3e707..8b0ab49e8a59 100644 --- a/cluster-autoscaler/core/scaledown/actuation/delete_in_batch_test.go +++ b/cluster-autoscaler/core/scaledown/actuation/delete_in_batch_test.go @@ -29,6 +29,7 @@ import ( "k8s.io/autoscaler/cluster-autoscaler/config" "k8s.io/autoscaler/cluster-autoscaler/core/scaledown/deletiontracker" . 
"k8s.io/autoscaler/cluster-autoscaler/core/test" + "k8s.io/autoscaler/cluster-autoscaler/processors/nodegroupconfig" "k8s.io/autoscaler/cluster-autoscaler/utils/taints" "k8s.io/client-go/kubernetes/fake" core "k8s.io/client-go/testing" @@ -43,8 +44,8 @@ func TestAddNodeToBucket(t *testing.T) { } nodeGroup1 := "ng-1" nodeGroup2 := "ng-2" - nodes1 := generateNodes(5, "ng-1") - nodes2 := generateNodes(5, "ng-2") + nodes1 := generateNodes(0, 5, "ng-1") + nodes2 := generateNodes(0, 5, "ng-2") provider.AddNodeGroup(nodeGroup1, 1, 10, 5) provider.AddNodeGroup(nodeGroup2, 1, 10, 5) for _, node := range nodes1 { @@ -91,10 +92,11 @@ func TestAddNodeToBucket(t *testing.T) { } batchCount := 0 for _, node := range test.nodes { - _, first, err := d.addNodeToBucket(node, test.drained) + nodeGroup, err := provider.NodeGroupForNode(node) if err != nil { - t.Errorf("addNodeToBucket return error %q when addidng node %v", err, node) + t.Errorf("couldn't get node info for node %s: %s", node.Name, err) } + first := d.addNodesToBucket([]*apiv1.Node{node}, nodeGroup, test.drained) if first { batchCount += 1 } @@ -161,13 +163,14 @@ func TestRemove(t *testing.T) { }) ctx, err := NewScaleTestAutoscalingContext(config.AutoscalingOptions{}, fakeClient, nil, provider, nil, nil) - clusterStateRegistry := clusterstate.NewClusterStateRegistry(provider, clusterstate.ClusterStateRegistryConfig{}, fakeLogRecorder, NewBackoff(), clusterstate.NewStaticMaxNodeProvisionTimeProvider(15*time.Minute)) + clusterStateRegistry := clusterstate.NewClusterStateRegistry(provider, clusterstate.ClusterStateRegistryConfig{}, fakeLogRecorder, NewBackoff(), nodegroupconfig.NewDefaultNodeGroupConfigProcessor(config.NodeGroupAutoscalingOptions{MaxNodeProvisionTime: 15 * time.Minute})) if err != nil { t.Fatalf("Couldn't set up autoscaling context: %v", err) } ng := "ng" provider.AddNodeGroup(ng, 1, 10, test.numNodes) + nodeGroup := provider.GetNodeGroup(ng) d := NodeDeletionBatcher{ ctx: &ctx, @@ -176,7 +179,7 @@ func 
TestRemove(t *testing.T) { deletionsPerNodeGroup: make(map[string][]*apiv1.Node), drainedNodeDeletions: make(map[string]bool), } - nodes := generateNodes(test.numNodes, ng) + nodes := generateNodes(0, test.numNodes, ng) failedDeletion := test.failedDeletion for _, node := range nodes { if failedDeletion > 0 { @@ -191,14 +194,11 @@ func TestRemove(t *testing.T) { Key: taints.ToBeDeletedTaint, Effect: apiv1.TaintEffectNoSchedule, }) - _, _, err := d.addNodeToBucket(node, true) - if err != nil { - t.Errorf("addNodeToBucket return error %q when addidng node %v", err, node) - } + d.addNodesToBucket([]*apiv1.Node{node}, nodeGroup, true) } } - err = d.remove(ng) + err = d.remove(nodeGroup.Id()) if test.err { if err == nil { t.Errorf("remove() should return error, but return nil") diff --git a/cluster-autoscaler/core/scaledown/actuation/group_deletion_scheduler.go b/cluster-autoscaler/core/scaledown/actuation/group_deletion_scheduler.go new file mode 100644 index 000000000000..e8aa9676932e --- /dev/null +++ b/cluster-autoscaler/core/scaledown/actuation/group_deletion_scheduler.go @@ -0,0 +1,151 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. 
+*/ + +package actuation + +import ( + "sync" + "time" + + apiv1 "k8s.io/api/core/v1" + "k8s.io/klog/v2" + "k8s.io/kubernetes/pkg/scheduler/framework" + + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider" + "k8s.io/autoscaler/cluster-autoscaler/config" + "k8s.io/autoscaler/cluster-autoscaler/context" + "k8s.io/autoscaler/cluster-autoscaler/core/scaledown/deletiontracker" + "k8s.io/autoscaler/cluster-autoscaler/core/scaledown/status" + "k8s.io/autoscaler/cluster-autoscaler/metrics" + "k8s.io/autoscaler/cluster-autoscaler/utils/errors" +) + +type batcher interface { + AddNodes(nodes []*apiv1.Node, nodeGroup cloudprovider.NodeGroup, drain bool) +} + +// GroupDeletionScheduler is a wrapper over NodeDeletionBatcher responsible for grouping nodes for deletion +// and rolling back deletion of all nodes from a group in case deletion fails for any of the other nodes. +type GroupDeletionScheduler struct { + sync.Mutex + ctx *context.AutoscalingContext + nodeDeletionTracker *deletiontracker.NodeDeletionTracker + nodeDeletionBatcher batcher + evictor Evictor + nodeQueue map[string][]*apiv1.Node + failuresForGroup map[string]bool +} + +// NewGroupDeletionScheduler creates an instance of GroupDeletionScheduler. +func NewGroupDeletionScheduler(ctx *context.AutoscalingContext, ndt *deletiontracker.NodeDeletionTracker, b batcher, evictor Evictor) *GroupDeletionScheduler { + return &GroupDeletionScheduler{ + ctx: ctx, + nodeDeletionTracker: ndt, + nodeDeletionBatcher: b, + evictor: evictor, + nodeQueue: map[string][]*apiv1.Node{}, + failuresForGroup: map[string]bool{}, + } +} + +// ReportMetrics should be invoked for GroupDeletionScheduler before each scale-down phase. 
+func (ds *GroupDeletionScheduler) ReportMetrics() { + ds.Lock() + defer ds.Unlock() + pendingNodeDeletions := 0 + for _, nodes := range ds.nodeQueue { + pendingNodeDeletions += len(nodes) + } + // Since the nodes are deleted asynchronously, it's easier to + // monitor the pending ones at the beginning of the next scale-down phase. + metrics.ObservePendingNodeDeletions(pendingNodeDeletions) +} + +// ScheduleDeletion schedules deletion of the node. Nodes that should be deleted in groups are queued until the whole group is scheduled for deletion, +// other nodes are passed over to NodeDeletionBatcher immediately. +func (ds *GroupDeletionScheduler) ScheduleDeletion(nodeInfo *framework.NodeInfo, nodeGroup cloudprovider.NodeGroup, batchSize int, drain bool) { + opts, err := nodeGroup.GetOptions(ds.ctx.NodeGroupDefaults) + if err != nil && err != cloudprovider.ErrNotImplemented { + nodeDeleteResult := status.NodeDeleteResult{ResultType: status.NodeDeleteErrorInternal, Err: errors.NewAutoscalerError(errors.InternalError, "GetOptions returned error %v", err)} + ds.AbortNodeDeletion(nodeInfo.Node(), nodeGroup.Id(), drain, "failed to get autoscaling options for a node group", nodeDeleteResult) + return + } + if opts == nil { + opts = &config.NodeGroupAutoscalingOptions{} + } + + nodeDeleteResult := ds.prepareNodeForDeletion(nodeInfo, drain) + if nodeDeleteResult.Err != nil { + ds.AbortNodeDeletion(nodeInfo.Node(), nodeGroup.Id(), drain, "prepareNodeForDeletion failed", nodeDeleteResult) + return + } + + ds.addToBatcher(nodeInfo, nodeGroup, batchSize, drain, opts.ZeroOrMaxNodeScaling) +} + +// prepareNodeForDeletion is a long-running operation, so it needs to avoid locking the GroupDeletionScheduler object +func (ds *GroupDeletionScheduler) prepareNodeForDeletion(nodeInfo *framework.NodeInfo, drain bool) status.NodeDeleteResult { + node := nodeInfo.Node() + if drain { + if evictionResults, err := ds.evictor.DrainNode(ds.ctx, nodeInfo); err != nil { + return 
status.NodeDeleteResult{ResultType: status.NodeDeleteErrorFailedToEvictPods, Err: err, PodEvictionResults: evictionResults} + } + } else { + if err := ds.evictor.EvictDaemonSetPods(ds.ctx, nodeInfo, time.Now()); err != nil { + // Evicting DS pods is best-effort, so proceed with the deletion even if there are errors. + klog.Warningf("Error while evicting DS pods from an empty node %q: %v", node.Name, err) + } + } + if err := WaitForDelayDeletion(node, ds.ctx.ListerRegistry.AllNodeLister(), ds.ctx.AutoscalingOptions.NodeDeletionDelayTimeout); err != nil { + return status.NodeDeleteResult{ResultType: status.NodeDeleteErrorFailedToDelete, Err: err} + } + return status.NodeDeleteResult{ResultType: status.NodeDeleteOk} +} + +func (ds *GroupDeletionScheduler) addToBatcher(nodeInfo *framework.NodeInfo, nodeGroup cloudprovider.NodeGroup, batchSize int, drain, atomic bool) { + ds.Lock() + defer ds.Unlock() + ds.nodeQueue[nodeGroup.Id()] = append(ds.nodeQueue[nodeGroup.Id()], nodeInfo.Node()) + if atomic { + if ds.failuresForGroup[nodeGroup.Id()] { + nodeDeleteResult := status.NodeDeleteResult{ResultType: status.NodeDeleteErrorFailedToDelete, Err: errors.NewAutoscalerError(errors.TransientError, "couldn't scale down other nodes in this node group")} + CleanUpAndRecordFailedScaleDownEvent(ds.ctx, nodeInfo.Node(), nodeGroup.Id(), drain, ds.nodeDeletionTracker, "scale down failed for node group as a whole", nodeDeleteResult) + delete(ds.nodeQueue, nodeGroup.Id()) + } + if len(ds.nodeQueue[nodeGroup.Id()]) < batchSize { + // The node group should be scaled down atomically, but not all of its nodes have been queued yet. + return + } + } + ds.nodeDeletionBatcher.AddNodes(ds.nodeQueue[nodeGroup.Id()], nodeGroup, drain) + ds.nodeQueue[nodeGroup.Id()] = []*apiv1.Node{} +} + +// AbortNodeDeletion frees up a node that couldn't be deleted successfully. If it was part of a group, the same is applied to the other nodes queued for deletion. 
+func (ds *GroupDeletionScheduler) AbortNodeDeletion(node *apiv1.Node, nodeGroupId string, drain bool, errMsg string, result status.NodeDeleteResult) { + ds.Lock() + defer ds.Unlock() + ds.failuresForGroup[nodeGroupId] = true + CleanUpAndRecordFailedScaleDownEvent(ds.ctx, node, nodeGroupId, drain, ds.nodeDeletionTracker, errMsg, result) + for _, otherNode := range ds.nodeQueue[nodeGroupId] { + if otherNode == node { + continue + } + nodeDeleteResult := status.NodeDeleteResult{ResultType: status.NodeDeleteErrorFailedToDelete, Err: errors.NewAutoscalerError(errors.TransientError, "couldn't scale down other nodes in this node group")} + CleanUpAndRecordFailedScaleDownEvent(ds.ctx, otherNode, nodeGroupId, drain, ds.nodeDeletionTracker, "scale down failed for node group as a whole", nodeDeleteResult) + } + delete(ds.nodeQueue, nodeGroupId) +} diff --git a/cluster-autoscaler/core/scaledown/actuation/group_deletion_scheduler_test.go b/cluster-autoscaler/core/scaledown/actuation/group_deletion_scheduler_test.go new file mode 100644 index 000000000000..481e260c841b --- /dev/null +++ b/cluster-autoscaler/core/scaledown/actuation/group_deletion_scheduler_test.go @@ -0,0 +1,188 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. 
+*/ + +package actuation + +import ( + "fmt" + "testing" + + "github.com/google/go-cmp/cmp" + "github.com/google/go-cmp/cmp/cmpopts" + appsv1 "k8s.io/api/apps/v1" + apiv1 "k8s.io/api/core/v1" + policyv1 "k8s.io/api/policy/v1" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider" + testprovider "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/test" + "k8s.io/autoscaler/cluster-autoscaler/config" + "k8s.io/autoscaler/cluster-autoscaler/core/scaledown/budgets" + "k8s.io/autoscaler/cluster-autoscaler/core/scaledown/deletiontracker" + "k8s.io/autoscaler/cluster-autoscaler/core/scaledown/status" + . "k8s.io/autoscaler/cluster-autoscaler/core/test" + kube_util "k8s.io/autoscaler/cluster-autoscaler/utils/kubernetes" + "k8s.io/client-go/kubernetes/fake" + "k8s.io/kubernetes/pkg/scheduler/framework" + schedulerframework "k8s.io/kubernetes/pkg/scheduler/framework" +) + +func TestScheduleDeletion(t *testing.T) { + testNg := testprovider.NewTestNodeGroup("test", 0, 100, 3, true, false, "n1-standard-2", nil, nil) + atomic2 := sizedNodeGroup("atomic-2", 2, true) + atomic4 := sizedNodeGroup("atomic-4", 4, true) + + testCases := []struct { + name string + toSchedule []*budgets.NodeGroupView + toAbort []*budgets.NodeGroupView + toScheduleAfterAbort []*budgets.NodeGroupView + wantDeleted int + wantNodeDeleteResults map[string]status.NodeDeleteResult + }{ + { + name: "no nodes", + toSchedule: []*budgets.NodeGroupView{}, + }, + { + name: "individual nodes are deleted right away", + toSchedule: generateNodeGroupViewList(testNg, 0, 3), + toAbort: generateNodeGroupViewList(testNg, 3, 6), + toScheduleAfterAbort: generateNodeGroupViewList(testNg, 6, 9), + wantDeleted: 6, + wantNodeDeleteResults: map[string]status.NodeDeleteResult{ + "test-node-3": {ResultType: status.NodeDeleteErrorFailedToDelete, Err: cmpopts.AnyError}, + "test-node-4": {ResultType: status.NodeDeleteErrorFailedToDelete, Err: cmpopts.AnyError}, + "test-node-5": {ResultType: status.NodeDeleteErrorFailedToDelete, Err: 
cmpopts.AnyError}, + }, + }, + { + name: "whole atomic node groups deleted", + toSchedule: mergeLists( + generateNodeGroupViewList(atomic4, 0, 1), + generateNodeGroupViewList(atomic2, 0, 1), + generateNodeGroupViewList(atomic4, 1, 2), + generateNodeGroupViewList(atomic2, 1, 2), + generateNodeGroupViewList(atomic4, 2, 4), + ), + wantDeleted: 6, + }, + { + name: "atomic node group aborted in the process", + toSchedule: mergeLists( + generateNodeGroupViewList(atomic4, 0, 1), + generateNodeGroupViewList(atomic2, 0, 1), + generateNodeGroupViewList(atomic4, 1, 2), + generateNodeGroupViewList(atomic2, 1, 2), + ), + toAbort: generateNodeGroupViewList(atomic4, 2, 3), + toScheduleAfterAbort: generateNodeGroupViewList(atomic4, 3, 4), + wantDeleted: 2, + wantNodeDeleteResults: map[string]status.NodeDeleteResult{ + "atomic-4-node-0": {ResultType: status.NodeDeleteErrorFailedToDelete, Err: cmpopts.AnyError}, + "atomic-4-node-1": {ResultType: status.NodeDeleteErrorFailedToDelete, Err: cmpopts.AnyError}, + "atomic-4-node-2": {ResultType: status.NodeDeleteErrorFailedToDelete, Err: cmpopts.AnyError}, + "atomic-4-node-3": {ResultType: status.NodeDeleteErrorFailedToDelete, Err: cmpopts.AnyError}, + }, + }, + } + for _, tc := range testCases { + tc := tc + t.Run(tc.name, func(t *testing.T) { + provider := testprovider.NewTestCloudProvider(nil, func(nodeGroup string, node string) error { + return nil + }) + for _, bucket := range append(append(tc.toSchedule, tc.toAbort...), tc.toScheduleAfterAbort...) 
{ + bucket.Group.(*testprovider.TestNodeGroup).SetCloudProvider(provider) + provider.InsertNodeGroup(bucket.Group) + for _, node := range bucket.Nodes { + provider.AddNode(bucket.Group.Id(), node) + } + } + + batcher := &countingBatcher{} + tracker := deletiontracker.NewNodeDeletionTracker(0) + opts := config.AutoscalingOptions{} + fakeClient := &fake.Clientset{} + podLister := kube_util.NewTestPodLister([]*apiv1.Pod{}) + pdbLister := kube_util.NewTestPodDisruptionBudgetLister([]*policyv1.PodDisruptionBudget{}) + dsLister, err := kube_util.NewTestDaemonSetLister([]*appsv1.DaemonSet{}) + if err != nil { + t.Fatalf("Couldn't create daemonset lister") + } + registry := kube_util.NewListerRegistry(nil, nil, podLister, nil, pdbLister, dsLister, nil, nil, nil, nil) + ctx, err := NewScaleTestAutoscalingContext(opts, fakeClient, registry, provider, nil, nil) + if err != nil { + t.Fatalf("Couldn't set up autoscaling context: %v", err) + } + scheduler := NewGroupDeletionScheduler(&ctx, tracker, batcher, Evictor{EvictionRetryTime: 0, DsEvictionRetryTime: 0, DsEvictionEmptyNodeTimeout: 0, PodEvictionHeadroom: DefaultPodEvictionHeadroom}) + + if err := scheduleAll(tc.toSchedule, scheduler); err != nil { + t.Fatal(err) + } + for _, bucket := range tc.toAbort { + for _, node := range bucket.Nodes { + nodeDeleteResult := status.NodeDeleteResult{ResultType: status.NodeDeleteErrorFailedToDelete, Err: cmpopts.AnyError} + scheduler.AbortNodeDeletion(node, bucket.Group.Id(), false, "simulated abort", nodeDeleteResult) + } + } + if err := scheduleAll(tc.toScheduleAfterAbort, scheduler); err != nil { + t.Fatal(err) + } + + if batcher.addedNodes != tc.wantDeleted { + t.Errorf("Incorrect number of deleted nodes, want %v but got %v", tc.wantDeleted, batcher.addedNodes) + } + gotDeletionResult, _ := tracker.DeletionResults() + if diff := cmp.Diff(tc.wantNodeDeleteResults, gotDeletionResult, cmpopts.EquateEmpty(), cmpopts.EquateErrors()); diff != "" { + t.Errorf("NodeDeleteResults diff (-want 
+got):\n%s", diff) + } + }) + } +} + +type countingBatcher struct { + addedNodes int +} + +func (b *countingBatcher) AddNodes(nodes []*apiv1.Node, nodeGroup cloudprovider.NodeGroup, drain bool) { + b.addedNodes += len(nodes) +} + +func scheduleAll(toSchedule []*budgets.NodeGroupView, scheduler *GroupDeletionScheduler) error { + for _, bucket := range toSchedule { + bucketSize, err := bucket.Group.TargetSize() + if err != nil { + return fmt.Errorf("failed to get target size for node group %q: %s", bucket.Group.Id(), err) + } + for _, node := range bucket.Nodes { + scheduler.ScheduleDeletion(infoForNode(node), bucket.Group, bucketSize, false) + } + } + return nil +} + +func infoForNode(n *apiv1.Node) *framework.NodeInfo { + info := schedulerframework.NewNodeInfo() + info.SetNode(n) + return info +} + +func mergeLists(lists ...[]*budgets.NodeGroupView) []*budgets.NodeGroupView { + merged := []*budgets.NodeGroupView{} + for _, l := range lists { + merged = append(merged, l...) + } + return merged +} diff --git a/cluster-autoscaler/core/scaledown/budgets/budgets.go b/cluster-autoscaler/core/scaledown/budgets/budgets.go new file mode 100644 index 000000000000..95202850e35c --- /dev/null +++ b/cluster-autoscaler/core/scaledown/budgets/budgets.go @@ -0,0 +1,219 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. 
+*/ + +package budgets + +import ( + "reflect" + + apiv1 "k8s.io/api/core/v1" + "k8s.io/klog/v2" + + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider" + "k8s.io/autoscaler/cluster-autoscaler/context" + "k8s.io/autoscaler/cluster-autoscaler/core/scaledown" +) + +// NodeGroupView is a subset of nodes from a given NodeGroup +type NodeGroupView struct { + Group cloudprovider.NodeGroup + Nodes []*apiv1.Node + // BatchSize allows overriding the number of nodes needed to trigger deletion. + // Useful for node groups which only scale between zero and max size. + BatchSize int +} + +// ScaleDownBudgetProcessor is responsible for keeping the number of nodes deleted in parallel within defined limits. +type ScaleDownBudgetProcessor struct { + ctx *context.AutoscalingContext +} + +// NewScaleDownBudgetProcessor creates a ScaleDownBudgetProcessor instance. +func NewScaleDownBudgetProcessor(ctx *context.AutoscalingContext) *ScaleDownBudgetProcessor { + return &ScaleDownBudgetProcessor{ + ctx: ctx, + } +} + +// CropNodes crops the provided node lists to respect scale-down max parallelism budgets. +// The returned nodes are grouped by a node group. +// This function assumes that each node group may occur at most once in each of the "empty" and "drain" lists. 
+func (bp *ScaleDownBudgetProcessor) CropNodes(as scaledown.ActuationStatus, empty, drain []*apiv1.Node) (emptyToDelete, drainToDelete []*NodeGroupView) { + emptyIndividual, emptyAtomic := bp.categorize(bp.group(empty)) + drainIndividual, drainAtomic := bp.categorize(bp.group(drain)) + + emptyAtomicMap := groupBuckets(emptyAtomic) + drainAtomicMap := groupBuckets(drainAtomic) + + emptyInProgress, drainInProgress := as.DeletionsInProgress() + parallelismBudget := bp.ctx.MaxScaleDownParallelism - len(emptyInProgress) - len(drainInProgress) + drainBudget := bp.ctx.MaxDrainParallelism - len(drainInProgress) + + var err error + canOverflow := true + emptyToDelete, drainToDelete = []*NodeGroupView{}, []*NodeGroupView{} + for _, bucket := range emptyAtomic { + drainNodes := []*apiv1.Node{} + drainBucket, drainFound := drainAtomicMap[bucket.Group.Id()] + if drainFound { + drainNodes = drainBucket.Nodes + } + // Skip an atomically scaled node group if the total number of its empty and drain nodes + // exceeds the parallelism budget, or the number of its drain nodes exceeds the drain budget. + if parallelismBudget < len(bucket.Nodes)+len(drainNodes) || + drainBudget < len(drainNodes) { + // One pod slice can sneak in even if it would exceed the parallelism budget. + // This helps avoid starvation of pod slices by regular nodes; + // larger pod slices would otherwise always exceed the parallelism budget. + if parallelismBudget == 0 || (len(drainNodes) > 0 && drainBudget == 0) || !canOverflow { + break + } + } + var targetSize int + if targetSize, err = bucket.Group.TargetSize(); err != nil { + // Very unlikely to happen, as we've got this far with this group. + klog.Errorf("not scaling atomically scaled group %v: can't get target size, err: %v", bucket.Group.Id(), err) + continue + } + bucket.BatchSize = targetSize + if len(bucket.Nodes)+len(drainNodes) != targetSize { + // We can't partially scale down an atomic group. 
+ klog.Errorf("not scaling atomic group %v because not all nodes are candidates, target size: %v, empty: %v, drainable: %v", bucket.Group.Id(), targetSize, len(bucket.Nodes), len(drainNodes)) + continue + } + emptyToDelete = append(emptyToDelete, bucket) + if drainFound { + drainBucket.BatchSize = bucket.BatchSize + drainToDelete = append(drainToDelete, drainBucket) + } + parallelismBudget -= len(bucket.Nodes) + len(drainNodes) + drainBudget -= len(drainNodes) + canOverflow = false + } + + drainBudget = min(parallelismBudget, drainBudget) + for _, bucket := range drainAtomic { + if _, found := emptyAtomicMap[bucket.Group.Id()]; found { + // This atomically-scaled node group should already have been processed + // in the previous loop. + continue + } + if drainBudget < len(bucket.Nodes) { + // One pod slice can sneak in even if it would exceed the parallelism budget. + // This helps avoid starvation of pod slices by regular nodes; + // larger pod slices would otherwise always exceed the parallelism budget. + if drainBudget == 0 || !canOverflow { + break + } + } + var targetSize int + if targetSize, err = bucket.Group.TargetSize(); err != nil { + // Very unlikely to happen, as we've got this far with this group. + klog.Errorf("not scaling atomically scaled group %v: can't get target size, err: %v", bucket.Group.Id(), err) + continue + } + bucket.BatchSize = targetSize + if len(bucket.Nodes) != targetSize { + // We can't partially scale down an atomic group. 
+ klog.Errorf("not scaling atomic group %v because not all nodes are candidates, target size: %v, empty: none, drainable: %v", bucket.Group.Id(), targetSize, len(bucket.Nodes)) + continue + } + drainToDelete = append(drainToDelete, bucket) + parallelismBudget -= len(bucket.Nodes) + drainBudget -= len(bucket.Nodes) + canOverflow = false + } + + emptyToDelete, allowedCount := cropIndividualNodes(emptyToDelete, emptyIndividual, parallelismBudget) + parallelismBudget -= allowedCount + drainBudget = min(parallelismBudget, drainBudget) + + drainToDelete, _ = cropIndividualNodes(drainToDelete, drainIndividual, drainBudget) + + return emptyToDelete, drainToDelete +} + +func groupBuckets(buckets []*NodeGroupView) map[string]*NodeGroupView { + grouped := map[string]*NodeGroupView{} + for _, bucket := range buckets { + grouped[bucket.Group.Id()] = bucket + } + return grouped +} + +// cropIndividualNodes returns two values: +// * nodes selected for deletion +// * the number of nodes planned for deletion in this invocation +func cropIndividualNodes(toDelete []*NodeGroupView, groups []*NodeGroupView, budget int) ([]*NodeGroupView, int) { + remainingBudget := budget + for _, bucket := range groups { + if remainingBudget < 1 { + break + } + if remainingBudget < len(bucket.Nodes) { + bucket.Nodes = bucket.Nodes[:remainingBudget] + } + toDelete = append(toDelete, bucket) + remainingBudget -= len(bucket.Nodes) + } + return toDelete, budget - remainingBudget +} + +func (bp *ScaleDownBudgetProcessor) group(nodes []*apiv1.Node) []*NodeGroupView { + groupMap := map[string]int{} + grouped := []*NodeGroupView{} + for _, node := range nodes { + nodeGroup, err := bp.ctx.CloudProvider.NodeGroupForNode(node) + if err != nil || nodeGroup == nil || reflect.ValueOf(nodeGroup).IsNil() { + klog.Errorf("Failed to find node group for %s: %v", node.Name, err) + continue + } + if idx, ok := groupMap[nodeGroup.Id()]; ok { + grouped[idx].Nodes = append(grouped[idx].Nodes, node) + } else { + 
groupMap[nodeGroup.Id()] = len(grouped) + grouped = append(grouped, &NodeGroupView{ + Group: nodeGroup, + Nodes: []*apiv1.Node{node}, + }) + } + } + return grouped +} + +func (bp *ScaleDownBudgetProcessor) categorize(groups []*NodeGroupView) (individual, atomic []*NodeGroupView) { + for _, view := range groups { + autoscalingOptions, err := view.Group.GetOptions(bp.ctx.NodeGroupDefaults) + if err != nil && err != cloudprovider.ErrNotImplemented { + klog.Errorf("Failed to get autoscaling options for node group %s: %v", view.Group.Id(), err) + continue + } + if autoscalingOptions != nil && autoscalingOptions.ZeroOrMaxNodeScaling { + atomic = append(atomic, view) + } else { + individual = append(individual, view) + } + } + return individual, atomic +} + +func min(x, y int) int { + if x <= y { + return x + } + return y +} diff --git a/cluster-autoscaler/core/scaledown/budgets/budgets_test.go b/cluster-autoscaler/core/scaledown/budgets/budgets_test.go new file mode 100644 index 000000000000..9227f99109bc --- /dev/null +++ b/cluster-autoscaler/core/scaledown/budgets/budgets_test.go @@ -0,0 +1,400 @@ +/* +Copyright 2022 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. 
+*/ + +package budgets + +import ( + "fmt" + "testing" + "time" + + "github.com/google/go-cmp/cmp" + "github.com/google/go-cmp/cmp/cmpopts" + apiv1 "k8s.io/api/core/v1" + "k8s.io/apimachinery/pkg/api/resource" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider" + testprovider "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/test" + "k8s.io/autoscaler/cluster-autoscaler/config" + "k8s.io/autoscaler/cluster-autoscaler/context" + "k8s.io/autoscaler/cluster-autoscaler/core/scaledown/deletiontracker" +) + +func TestCropNodesToBudgets(t *testing.T) { + testNg := testprovider.NewTestNodeGroup("test-ng", 0, 100, 3, true, false, "n1-standard-2", nil, nil) + testNg2 := testprovider.NewTestNodeGroup("test-ng2", 0, 100, 3, true, false, "n1-standard-2", nil, nil) + atomic3 := sizedNodeGroup("atomic-3", 3, true) + atomic4 := sizedNodeGroup("atomic-4", 4, true) + atomic8 := sizedNodeGroup("atomic-8", 8, true) + atomic11 := sizedNodeGroup("atomic-11", 11, true) + for tn, tc := range map[string]struct { + emptyDeletionsInProgress int + drainDeletionsInProgress int + empty []*NodeGroupView + drain []*NodeGroupView + wantEmpty []*NodeGroupView + wantDrain []*NodeGroupView + }{ + "no nodes": { + empty: []*NodeGroupView{}, + drain: []*NodeGroupView{}, + wantEmpty: []*NodeGroupView{}, + wantDrain: []*NodeGroupView{}, + }, + // Empty nodes only. 
+ "empty nodes within max limit, no deletions in progress": { + empty: generateNodeGroupViewList(testNg, 0, 10), + wantEmpty: generateNodeGroupViewList(testNg, 0, 10), + }, + "empty nodes exceeding max limit, no deletions in progress": { + empty: generateNodeGroupViewList(testNg, 0, 11), + wantEmpty: generateNodeGroupViewList(testNg, 0, 10), + }, + "empty atomic node group exceeding max limit": { + empty: generateNodeGroupViewList(atomic11, 0, 11), + wantEmpty: generateNodeGroupViewList(atomic11, 0, 11), + }, + "empty regular and atomic": { + empty: append(generateNodeGroupViewList(testNg, 0, 8), generateNodeGroupViewList(atomic3, 0, 3)...), + wantEmpty: append(generateNodeGroupViewList(atomic3, 0, 3), generateNodeGroupViewList(testNg, 0, 7)...), + }, + "multiple empty atomic": { + empty: append( + append( + generateNodeGroupViewList(testNg, 0, 3), + generateNodeGroupViewList(atomic8, 0, 8)...), + generateNodeGroupViewList(atomic3, 0, 3)...), + wantEmpty: append(generateNodeGroupViewList(atomic8, 0, 8), generateNodeGroupViewList(testNg, 0, 2)...), + }, + "empty nodes with deletions in progress, within budget": { + emptyDeletionsInProgress: 1, + drainDeletionsInProgress: 1, + empty: generateNodeGroupViewList(testNg, 0, 8), + wantEmpty: generateNodeGroupViewList(testNg, 0, 8), + }, + "empty nodes with deletions in progress, exceeding budget": { + emptyDeletionsInProgress: 1, + drainDeletionsInProgress: 1, + empty: generateNodeGroupViewList(testNg, 0, 10), + wantEmpty: generateNodeGroupViewList(testNg, 0, 8), + }, + "empty atomic nodes with deletions in progress, exceeding budget": { + emptyDeletionsInProgress: 3, + drainDeletionsInProgress: 3, + empty: generateNodeGroupViewList(atomic8, 0, 8), + wantEmpty: generateNodeGroupViewList(atomic8, 0, 8), + }, + "empty nodes with deletions in progress, 0 budget left": { + emptyDeletionsInProgress: 5, + drainDeletionsInProgress: 5, + empty: generateNodeGroupViewList(testNg, 0, 10), + wantEmpty: []*NodeGroupView{}, + }, + 
"empty atomic nodes with deletions in progress, 0 budget left": { + emptyDeletionsInProgress: 5, + drainDeletionsInProgress: 5, + empty: generateNodeGroupViewList(atomic3, 0, 3), + wantEmpty: []*NodeGroupView{}, + }, + "empty nodes with deletions in progress, budget exceeded": { + emptyDeletionsInProgress: 50, + drainDeletionsInProgress: 50, + empty: generateNodeGroupViewList(testNg, 0, 10), + wantEmpty: []*NodeGroupView{}, + }, + // Drain nodes only. + "drain nodes within max limit, no deletions in progress": { + drain: generateNodeGroupViewList(testNg, 0, 5), + wantDrain: generateNodeGroupViewList(testNg, 0, 5), + }, + "multiple drain node groups": { + drain: append(generateNodeGroupViewList(testNg, 0, 5), generateNodeGroupViewList(testNg2, 0, 5)...), + wantDrain: generateNodeGroupViewList(testNg, 0, 5), + }, + "drain nodes exceeding max limit, no deletions in progress": { + drain: generateNodeGroupViewList(testNg, 0, 6), + wantDrain: generateNodeGroupViewList(testNg, 0, 5), + }, + "drain atomic exceeding limit": { + drain: generateNodeGroupViewList(atomic8, 0, 8), + wantDrain: generateNodeGroupViewList(atomic8, 0, 8), + }, + "drain regular and atomic exceeding limit": { + drain: append(generateNodeGroupViewList(testNg, 0, 3), generateNodeGroupViewList(atomic3, 0, 3)...), + wantDrain: append(generateNodeGroupViewList(atomic3, 0, 3), generateNodeGroupViewList(testNg, 0, 2)...), + }, + "multiple drain atomic": { + drain: append( + append( + generateNodeGroupViewList(testNg, 0, 3), + generateNodeGroupViewList(atomic3, 0, 3)...), + generateNodeGroupViewList(atomic4, 0, 4)...), + wantDrain: append(generateNodeGroupViewList(atomic3, 0, 3), generateNodeGroupViewList(testNg, 0, 2)...), + }, + "drain nodes with deletions in progress, within budget": { + emptyDeletionsInProgress: 1, + drainDeletionsInProgress: 2, + drain: generateNodeGroupViewList(testNg, 0, 3), + wantDrain: generateNodeGroupViewList(testNg, 0, 3), + }, + "drain nodes with deletions in progress, exceeding 
drain budget": { + emptyDeletionsInProgress: 1, + drainDeletionsInProgress: 2, + drain: generateNodeGroupViewList(testNg, 0, 5), + wantDrain: generateNodeGroupViewList(testNg, 0, 3), + }, + "drain atomic nodes with deletions in progress, exceeding drain budget": { + emptyDeletionsInProgress: 1, + drainDeletionsInProgress: 2, + drain: generateNodeGroupViewList(atomic4, 0, 4), + wantDrain: generateNodeGroupViewList(atomic4, 0, 4), + }, + "drain nodes with deletions in progress, 0 drain budget left": { + emptyDeletionsInProgress: 1, + drainDeletionsInProgress: 5, + drain: generateNodeGroupViewList(testNg, 0, 5), + wantDrain: []*NodeGroupView{}, + }, + "drain atomic nodes with deletions in progress, 0 drain budget left": { + emptyDeletionsInProgress: 1, + drainDeletionsInProgress: 5, + drain: generateNodeGroupViewList(atomic4, 0, 4), + wantDrain: []*NodeGroupView{}, + }, + "drain nodes with deletions in progress, drain budget exceeded": { + emptyDeletionsInProgress: 1, + drainDeletionsInProgress: 50, + drain: generateNodeGroupViewList(testNg, 0, 5), + wantDrain: []*NodeGroupView{}, + }, + "drain nodes with deletions in progress, exceeding overall budget": { + emptyDeletionsInProgress: 7, + drainDeletionsInProgress: 1, + drain: generateNodeGroupViewList(testNg, 0, 4), + wantDrain: generateNodeGroupViewList(testNg, 0, 2), + }, + "drain nodes with deletions in progress, 0 overall budget left": { + emptyDeletionsInProgress: 10, + drain: generateNodeGroupViewList(testNg, 0, 4), + wantDrain: []*NodeGroupView{}, + }, + "drain nodes with deletions in progress, overall budget exceeded": { + emptyDeletionsInProgress: 50, + drain: generateNodeGroupViewList(testNg, 0, 4), + wantDrain: []*NodeGroupView{}, + }, + // Empty and drain nodes together. 
+ "empty&drain nodes within max limits, no deletions in progress": { + empty: generateNodeGroupViewList(testNg, 0, 5), + drain: generateNodeGroupViewList(testNg, 0, 5), + wantDrain: generateNodeGroupViewList(testNg, 0, 5), + wantEmpty: generateNodeGroupViewList(testNg, 0, 5), + }, + "empty&drain atomic nodes within max limits, no deletions in progress": { + empty: generateNodeGroupViewList(atomic3, 0, 3), + drain: generateNodeGroupViewList(atomic4, 0, 4), + wantEmpty: generateNodeGroupViewList(atomic3, 0, 3), + wantDrain: generateNodeGroupViewList(atomic4, 0, 4), + }, + "empty&drain nodes exceeding overall limit, no deletions in progress": { + empty: generateNodeGroupViewList(testNg, 0, 8), + drain: generateNodeGroupViewList(testNg, 0, 8), + wantDrain: generateNodeGroupViewList(testNg, 0, 2), + wantEmpty: generateNodeGroupViewList(testNg, 0, 8), + }, + "empty&drain atomic nodes exceeding overall limit, no deletions in progress": { + empty: generateNodeGroupViewList(atomic8, 0, 8), + drain: generateNodeGroupViewList(atomic4, 0, 4), + wantEmpty: generateNodeGroupViewList(atomic8, 0, 8), + wantDrain: []*NodeGroupView{}, + }, + "empty&drain atomic nodes exceeding drain limit, no deletions in progress": { + empty: generateNodeGroupViewList(atomic4, 0, 4), + drain: generateNodeGroupViewList(atomic8, 0, 8), + wantEmpty: generateNodeGroupViewList(atomic4, 0, 4), + wantDrain: []*NodeGroupView{}, + }, + "empty&drain atomic and regular nodes exceeding drain limit, no deletions in progress": { + empty: append(generateNodeGroupViewList(testNg, 0, 5), generateNodeGroupViewList(atomic3, 0, 3)...), + drain: generateNodeGroupViewList(atomic8, 0, 8), + wantEmpty: append(generateNodeGroupViewList(atomic3, 0, 3), generateNodeGroupViewList(testNg, 0, 5)...), + wantDrain: []*NodeGroupView{}, + }, + "empty regular and drain atomic nodes exceeding overall limit, no deletions in progress": { + drain: generateNodeGroupViewList(atomic8, 0, 8), + empty: generateNodeGroupViewList(testNg, 0, 
5), + wantDrain: generateNodeGroupViewList(atomic8, 0, 8), + wantEmpty: generateNodeGroupViewList(testNg, 0, 2), + }, + "empty&drain nodes exceeding drain limit, no deletions in progress": { + empty: generateNodeGroupViewList(testNg, 0, 2), + drain: generateNodeGroupViewList(testNg, 0, 8), + wantDrain: generateNodeGroupViewList(testNg, 0, 5), + wantEmpty: generateNodeGroupViewList(testNg, 0, 2), + }, + "empty&drain nodes with deletions in progress, 0 overall budget left": { + emptyDeletionsInProgress: 10, + empty: generateNodeGroupViewList(testNg, 0, 5), + drain: generateNodeGroupViewList(testNg, 0, 5), + wantEmpty: []*NodeGroupView{}, + wantDrain: []*NodeGroupView{}, + }, + "empty&drain nodes with deletions in progress, overall budget exceeded (shouldn't happen, just a sanity check)": { + emptyDeletionsInProgress: 50, + empty: generateNodeGroupViewList(testNg, 0, 5), + drain: generateNodeGroupViewList(testNg, 0, 5), + wantEmpty: []*NodeGroupView{}, + wantDrain: []*NodeGroupView{}, + }, + "empty&drain nodes with deletions in progress, 0 drain budget left": { + drainDeletionsInProgress: 5, + empty: generateNodeGroupViewList(testNg, 0, 5), + drain: generateNodeGroupViewList(testNg, 0, 5), + wantEmpty: generateNodeGroupViewList(testNg, 0, 5), + wantDrain: []*NodeGroupView{}, + }, + "empty&drain nodes with deletions in progress, drain budget exceeded (shouldn't happen, just a sanity check)": { + drainDeletionsInProgress: 9, + empty: generateNodeGroupViewList(testNg, 0, 5), + drain: generateNodeGroupViewList(testNg, 0, 5), + wantEmpty: generateNodeGroupViewList(testNg, 0, 1), + wantDrain: []*NodeGroupView{}, + }, + "empty&drain nodes with deletions in progress, overall budget exceeded, only empty nodes fit": { + emptyDeletionsInProgress: 5, + drainDeletionsInProgress: 3, + empty: generateNodeGroupViewList(testNg, 0, 5), + drain: generateNodeGroupViewList(testNg, 0, 2), + wantEmpty: generateNodeGroupViewList(testNg, 0, 2), + wantDrain: []*NodeGroupView{}, + }, + 
"empty&drain nodes with deletions in progress, overall budget exceeded, both empty&drain nodes fit": { + emptyDeletionsInProgress: 5, + drainDeletionsInProgress: 3, + empty: generateNodeGroupViewList(testNg, 0, 1), + drain: generateNodeGroupViewList(testNg, 0, 2), + wantEmpty: generateNodeGroupViewList(testNg, 0, 1), + wantDrain: generateNodeGroupViewList(testNg, 0, 1), + }, + "empty&drain nodes with deletions in progress, drain budget exceeded": { + emptyDeletionsInProgress: 1, + drainDeletionsInProgress: 3, + empty: generateNodeGroupViewList(testNg, 0, 4), + drain: generateNodeGroupViewList(testNg, 0, 5), + wantEmpty: generateNodeGroupViewList(testNg, 0, 4), + wantDrain: generateNodeGroupViewList(testNg, 0, 2), + }, + } { + t.Run(tn, func(t *testing.T) { + provider := testprovider.NewTestCloudProvider(nil, func(nodeGroup string, node string) error { + return nil + }) + for _, bucket := range append(tc.empty, tc.drain...) { + bucket.Group.(*testprovider.TestNodeGroup).SetCloudProvider(provider) + provider.InsertNodeGroup(bucket.Group) + for _, node := range bucket.Nodes { + provider.AddNode(bucket.Group.Id(), node) + } + } + + ctx := &context.AutoscalingContext{ + AutoscalingOptions: config.AutoscalingOptions{ + MaxScaleDownParallelism: 10, + MaxDrainParallelism: 5, + NodeDeletionBatcherInterval: 0 * time.Second, + NodeDeleteDelayAfterTaint: 1 * time.Second, + }, + CloudProvider: provider, + } + ndt := deletiontracker.NewNodeDeletionTracker(1 * time.Hour) + for i := 0; i < tc.emptyDeletionsInProgress; i++ { + ndt.StartDeletion("ng1", fmt.Sprintf("empty-node-%d", i)) + } + for i := 0; i < tc.drainDeletionsInProgress; i++ { + ndt.StartDeletionWithDrain("ng2", fmt.Sprintf("drain-node-%d", i)) + } + emptyList, drainList := []*apiv1.Node{}, []*apiv1.Node{} + for _, bucket := range tc.empty { + emptyList = append(emptyList, bucket.Nodes...) + } + for _, bucket := range tc.drain { + drainList = append(drainList, bucket.Nodes...) 
+ } + + budgeter := NewScaleDownBudgetProcessor(ctx) + gotEmpty, gotDrain := budgeter.CropNodes(ndt, emptyList, drainList) + if diff := cmp.Diff(tc.wantEmpty, gotEmpty, cmpopts.EquateEmpty(), transformNodeGroupView); diff != "" { + t.Errorf("cropNodesToBudgets empty nodes diff (-want +got):\n%s", diff) + } + if diff := cmp.Diff(tc.wantDrain, gotDrain, cmpopts.EquateEmpty(), transformNodeGroupView); diff != "" { + t.Errorf("cropNodesToBudgets drain nodes diff (-want +got):\n%s", diff) + } + }) + } +} + +// transformNodeGroupView transforms a NodeGroupView to a structure that can be directly compared with other node buckets. +var transformNodeGroupView = cmp.Transformer("transformNodeGroupView", func(b NodeGroupView) interface{} { + return struct { + Group string + Nodes []*apiv1.Node + }{ + Group: b.Group.Id(), + Nodes: b.Nodes, + } +}) + +func sizedNodeGroup(id string, size int, atomic bool) cloudprovider.NodeGroup { + ng := testprovider.NewTestNodeGroup(id, 10000, 0, size, true, false, "n1-standard-2", nil, nil) + ng.SetOptions(&config.NodeGroupAutoscalingOptions{ + ZeroOrMaxNodeScaling: atomic, + }) + return ng +} + +func generateNodes(from, to int, prefix string) []*apiv1.Node { + var result []*apiv1.Node + for i := from; i < to; i++ { + name := fmt.Sprintf("node-%d", i) + if prefix != "" { + name = prefix + "-" + name + } + result = append(result, generateNode(name)) + } + return result +} + +func generateNodeGroupViewList(ng cloudprovider.NodeGroup, from, to int) []*NodeGroupView { + return []*NodeGroupView{ + { + Group: ng, + Nodes: generateNodes(from, to, ng.Id()), + }, + } +} + +func generateNode(name string) *apiv1.Node { + return &apiv1.Node{ + ObjectMeta: metav1.ObjectMeta{Name: name}, + Status: apiv1.NodeStatus{ + Allocatable: apiv1.ResourceList{ + apiv1.ResourceCPU: resource.MustParse("8"), + apiv1.ResourceMemory: resource.MustParse("8G"), + }, + }, + } +} diff --git a/cluster-autoscaler/core/scaledown/eligibility/eligibility.go
b/cluster-autoscaler/core/scaledown/eligibility/eligibility.go index 29cc2b24370b..2687cdffb2cc 100644 --- a/cluster-autoscaler/core/scaledown/eligibility/eligibility.go +++ b/cluster-autoscaler/core/scaledown/eligibility/eligibility.go @@ -41,20 +41,22 @@ const ( // Checker is responsible for deciding which nodes pass the criteria for scale down. type Checker struct { - thresholdGetter utilizationThresholdGetter + configGetter nodeGroupConfigGetter } -type utilizationThresholdGetter interface { +type nodeGroupConfigGetter interface { // GetScaleDownUtilizationThreshold returns ScaleDownUtilizationThreshold value that should be used for a given NodeGroup. - GetScaleDownUtilizationThreshold(context *context.AutoscalingContext, nodeGroup cloudprovider.NodeGroup) (float64, error) + GetScaleDownUtilizationThreshold(nodeGroup cloudprovider.NodeGroup) (float64, error) // GetScaleDownGpuUtilizationThreshold returns ScaleDownGpuUtilizationThreshold value that should be used for a given NodeGroup. - GetScaleDownGpuUtilizationThreshold(context *context.AutoscalingContext, nodeGroup cloudprovider.NodeGroup) (float64, error) + GetScaleDownGpuUtilizationThreshold(nodeGroup cloudprovider.NodeGroup) (float64, error) + // GetIgnoreDaemonSetsUtilization returns IgnoreDaemonSetsUtilization value that should be used for a given NodeGroup. + GetIgnoreDaemonSetsUtilization(nodeGroup cloudprovider.NodeGroup) (bool, error) } // NewChecker creates a new Checker object. 
-func NewChecker(thresholdGetter utilizationThresholdGetter) *Checker { +func NewChecker(configGetter nodeGroupConfigGetter) *Checker { return &Checker{ - thresholdGetter: thresholdGetter, + configGetter: configGetter, } } @@ -118,12 +120,6 @@ func (c *Checker) unremovableReasonAndNodeUtilization(context *context.Autoscali return simulator.ScaleDownDisabledAnnotation, nil } - gpuConfig := context.CloudProvider.GetNodeGpuConfig(node) - utilInfo, err := utilization.Calculate(nodeInfo, context.IgnoreDaemonSetsUtilization, context.IgnoreMirrorPodsUtilization, gpuConfig, timestamp) - if err != nil { - klog.Warningf("Failed to calculate utilization for %s: %v", node.Name, err) - } - nodeGroup, err := context.CloudProvider.NodeGroupForNode(node) if err != nil { klog.Warning("Node group not found for node %v: %v", node.Name, err) @@ -136,6 +132,18 @@ func (c *Checker) unremovableReasonAndNodeUtilization(context *context.Autoscali return simulator.NotAutoscaled, nil } + ignoreDaemonSetsUtilization, err := c.configGetter.GetIgnoreDaemonSetsUtilization(nodeGroup) + if err != nil { + klog.Warningf("Couldn't retrieve `IgnoreDaemonSetsUtilization` option for node %v: %v", node.Name, err) + return simulator.UnexpectedError, nil + } + + gpuConfig := context.CloudProvider.GetNodeGpuConfig(node) + utilInfo, err := utilization.Calculate(nodeInfo, ignoreDaemonSetsUtilization, context.IgnoreMirrorPodsUtilization, gpuConfig, timestamp) + if err != nil { + klog.Warningf("Failed to calculate utilization for %s: %v", node.Name, err) + } + // If scale down of unready nodes is disabled, skip the node if it is unready if !context.ScaleDownUnreadyEnabled { ready, _, _ := kube_util.GetReadinessState(node) @@ -166,12 +174,12 @@ func (c *Checker) isNodeBelowUtilizationThreshold(context *context.AutoscalingCo var err error gpuConfig := context.CloudProvider.GetNodeGpuConfig(node) if gpuConfig != nil { - threshold, err = c.thresholdGetter.GetScaleDownGpuUtilizationThreshold(context, nodeGroup) + 
threshold, err = c.configGetter.GetScaleDownGpuUtilizationThreshold(nodeGroup) if err != nil { return false, err } } else { - threshold, err = c.thresholdGetter.GetScaleDownUtilizationThreshold(context, nodeGroup) + threshold, err = c.configGetter.GetScaleDownUtilizationThreshold(nodeGroup) if err != nil { return false, err } diff --git a/cluster-autoscaler/core/scaledown/eligibility/eligibility_test.go b/cluster-autoscaler/core/scaledown/eligibility/eligibility_test.go index 92cc5bb81571..a40c88e49104 100644 --- a/cluster-autoscaler/core/scaledown/eligibility/eligibility_test.go +++ b/cluster-autoscaler/core/scaledown/eligibility/eligibility_test.go @@ -21,12 +21,11 @@ import ( "testing" "time" - "k8s.io/autoscaler/cluster-autoscaler/cloudprovider" testprovider "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/test" "k8s.io/autoscaler/cluster-autoscaler/config" - "k8s.io/autoscaler/cluster-autoscaler/context" "k8s.io/autoscaler/cluster-autoscaler/core/scaledown/unremovable" . "k8s.io/autoscaler/cluster-autoscaler/core/test" + "k8s.io/autoscaler/cluster-autoscaler/processors/nodegroupconfig" "k8s.io/autoscaler/cluster-autoscaler/simulator/clustersnapshot" "k8s.io/autoscaler/cluster-autoscaler/utils/taints" . 
"k8s.io/autoscaler/cluster-autoscaler/utils/test" @@ -36,8 +35,16 @@ import ( "k8s.io/client-go/kubernetes/fake" ) -func TestFilterOutUnremovable(t *testing.T) { - now := time.Now() +type testCase struct { + desc string + nodes []*apiv1.Node + pods []*apiv1.Pod + want []string + scaleDownUnready bool + ignoreDaemonSetsUtilization bool +} + +func getTestCases(ignoreDaemonSetsUtilization bool, suffix string, now time.Time) []testCase { regularNode := BuildTestNode("regular", 1000, 10) SetNodeReadyState(regularNode, true, time.Time{}) @@ -59,13 +66,10 @@ func TestFilterOutUnremovable(t *testing.T) { smallPod := BuildTestPod("smallPod", 100, 0) smallPod.Spec.NodeName = "regular" - testCases := []struct { - desc string - nodes []*apiv1.Node - pods []*apiv1.Pod - want []string - scaleDownUnready bool - }{ + dsPod := BuildDSTestPod("dsPod", 500, 0) + dsPod.Spec.NodeName = "regular" + + testCases := []testCase{ { desc: "regular node stays", nodes: []*apiv1.Node{regularNode}, @@ -111,15 +115,58 @@ func TestFilterOutUnremovable(t *testing.T) { scaleDownUnready: false, }, } + + finalTestCases := []testCase{} + for _, tc := range testCases { + tc.desc = tc.desc + " " + suffix + if ignoreDaemonSetsUtilization { + tc.ignoreDaemonSetsUtilization = true + } + finalTestCases = append(finalTestCases, tc) + } + + if ignoreDaemonSetsUtilization { + finalTestCases = append(finalTestCases, testCase{ + desc: "high utilization daemonsets node is filtered out", + nodes: []*apiv1.Node{regularNode}, + pods: []*apiv1.Pod{smallPod, dsPod}, + want: []string{}, + scaleDownUnready: true, + ignoreDaemonSetsUtilization: false, + }, + testCase{ + desc: "high utilization daemonsets node stays", + nodes: []*apiv1.Node{regularNode}, + pods: []*apiv1.Pod{smallPod, dsPod}, + want: []string{"regular"}, + scaleDownUnready: true, + ignoreDaemonSetsUtilization: true, + }) + } + + return finalTestCases +} + +func TestFilterOutUnremovable(t *testing.T) { + now := time.Now() + for _, tc := range
append(getTestCases(false, "IgnoreDaemonSetUtilization=false", now), + getTestCases(true, "IgnoreDaemonsetUtilization=true", now)...) { tc := tc t.Run(tc.desc, func(t *testing.T) { t.Parallel() - c := NewChecker(&staticThresholdGetter{0.5}) options := config.AutoscalingOptions{ UnremovableNodeRecheckTimeout: 5 * time.Minute, ScaleDownUnreadyEnabled: tc.scaleDownUnready, + NodeGroupDefaults: config.NodeGroupAutoscalingOptions{ + ScaleDownUtilizationThreshold: config.DefaultScaleDownUtilizationThreshold, + ScaleDownGpuUtilizationThreshold: config.DefaultScaleDownGpuUtilizationThreshold, + ScaleDownUnneededTime: config.DefaultScaleDownUnneededTime, + ScaleDownUnreadyTime: config.DefaultScaleDownUnreadyTime, + IgnoreDaemonSetsUtilization: tc.ignoreDaemonSetsUtilization, + }, } + s := nodegroupconfig.NewDefaultNodeGroupConfigProcessor(options.NodeGroupDefaults) + c := NewChecker(s) provider := testprovider.NewTestCloudProvider(nil, nil) provider.AddNodeGroup("ng1", 1, 10, 2) for _, n := range tc.nodes { @@ -136,15 +183,3 @@ func TestFilterOutUnremovable(t *testing.T) { }) } } - -type staticThresholdGetter struct { - threshold float64 -} - -func (s *staticThresholdGetter) GetScaleDownUtilizationThreshold(_ *context.AutoscalingContext, _ cloudprovider.NodeGroup) (float64, error) { - return s.threshold, nil -} - -func (s *staticThresholdGetter) GetScaleDownGpuUtilizationThreshold(_ *context.AutoscalingContext, _ cloudprovider.NodeGroup) (float64, error) { - return s.threshold, nil -} diff --git a/cluster-autoscaler/core/scaledown/legacy/legacy_test.go b/cluster-autoscaler/core/scaledown/legacy/legacy_test.go index 173df9fdbacb..0b3389b50c00 100644 --- a/cluster-autoscaler/core/scaledown/legacy/legacy_test.go +++ b/cluster-autoscaler/core/scaledown/legacy/legacy_test.go @@ -22,6 +22,7 @@ import ( "testing" "time" + "k8s.io/autoscaler/cluster-autoscaler/processors/nodegroupconfig" "k8s.io/autoscaler/cluster-autoscaler/simulator" 
"k8s.io/autoscaler/cluster-autoscaler/simulator/clustersnapshot" autoscaler_errors "k8s.io/autoscaler/cluster-autoscaler/utils/errors" @@ -146,7 +147,7 @@ func TestFindUnneededNodes(t *testing.T) { context, err := NewScaleTestAutoscalingContext(options, &fake.Clientset{}, registry, provider, nil, nil) assert.NoError(t, err) - clusterStateRegistry := clusterstate.NewClusterStateRegistry(provider, clusterstate.ClusterStateRegistryConfig{}, context.LogRecorder, NewBackoff(), clusterstate.NewStaticMaxNodeProvisionTimeProvider(15*time.Minute)) + clusterStateRegistry := clusterstate.NewClusterStateRegistry(provider, clusterstate.ClusterStateRegistryConfig{}, context.LogRecorder, NewBackoff(), nodegroupconfig.NewDefaultNodeGroupConfigProcessor(config.NodeGroupAutoscalingOptions{MaxNodeProvisionTime: 15 * time.Minute})) wrapper := newWrapperForTesting(&context, clusterStateRegistry, nil) sd := wrapper.sd allNodes := []*apiv1.Node{n1, n2, n3, n4, n5, n7, n8, n9} @@ -277,7 +278,7 @@ func TestFindUnneededGPUNodes(t *testing.T) { context, err := NewScaleTestAutoscalingContext(options, &fake.Clientset{}, registry, provider, nil, nil) assert.NoError(t, err) - clusterStateRegistry := clusterstate.NewClusterStateRegistry(provider, clusterstate.ClusterStateRegistryConfig{}, context.LogRecorder, NewBackoff(), clusterstate.NewStaticMaxNodeProvisionTimeProvider(15*time.Minute)) + clusterStateRegistry := clusterstate.NewClusterStateRegistry(provider, clusterstate.ClusterStateRegistryConfig{}, context.LogRecorder, NewBackoff(), nodegroupconfig.NewDefaultNodeGroupConfigProcessor(config.NodeGroupAutoscalingOptions{MaxNodeProvisionTime: 15 * time.Minute})) wrapper := newWrapperForTesting(&context, clusterStateRegistry, nil) sd := wrapper.sd allNodes := []*apiv1.Node{n1, n2, n3} @@ -392,7 +393,7 @@ func TestFindUnneededWithPerNodeGroupThresholds(t *testing.T) { context, err := NewScaleTestAutoscalingContext(globalOptions, &fake.Clientset{}, registry, provider, nil, nil) assert.NoError(t, 
err) - clusterStateRegistry := clusterstate.NewClusterStateRegistry(provider, clusterstate.ClusterStateRegistryConfig{}, context.LogRecorder, NewBackoff(), clusterstate.NewStaticMaxNodeProvisionTimeProvider(15*time.Minute)) + clusterStateRegistry := clusterstate.NewClusterStateRegistry(provider, clusterstate.ClusterStateRegistryConfig{}, context.LogRecorder, NewBackoff(), nodegroupconfig.NewDefaultNodeGroupConfigProcessor(config.NodeGroupAutoscalingOptions{MaxNodeProvisionTime: 15 * time.Minute})) wrapper := newWrapperForTesting(&context, clusterStateRegistry, nil) sd := wrapper.sd clustersnapshot.InitializeClusterSnapshotOrDie(t, context.ClusterSnapshot, allNodes, allPods) @@ -475,7 +476,7 @@ func TestPodsWithPreemptionsFindUnneededNodes(t *testing.T) { context, err := NewScaleTestAutoscalingContext(options, &fake.Clientset{}, registry, provider, nil, nil) assert.NoError(t, err) - clusterStateRegistry := clusterstate.NewClusterStateRegistry(provider, clusterstate.ClusterStateRegistryConfig{}, context.LogRecorder, NewBackoff(), clusterstate.NewStaticMaxNodeProvisionTimeProvider(15*time.Minute)) + clusterStateRegistry := clusterstate.NewClusterStateRegistry(provider, clusterstate.ClusterStateRegistryConfig{}, context.LogRecorder, NewBackoff(), nodegroupconfig.NewDefaultNodeGroupConfigProcessor(config.NodeGroupAutoscalingOptions{MaxNodeProvisionTime: 15 * time.Minute})) wrapper := newWrapperForTesting(&context, clusterStateRegistry, nil) sd := wrapper.sd @@ -539,7 +540,7 @@ func TestFindUnneededMaxCandidates(t *testing.T) { context, err := NewScaleTestAutoscalingContext(options, &fake.Clientset{}, registry, provider, nil, nil) assert.NoError(t, err) - clusterStateRegistry := clusterstate.NewClusterStateRegistry(provider, clusterstate.ClusterStateRegistryConfig{}, context.LogRecorder, NewBackoff(), clusterstate.NewStaticMaxNodeProvisionTimeProvider(15*time.Minute)) + clusterStateRegistry := clusterstate.NewClusterStateRegistry(provider, 
clusterstate.ClusterStateRegistryConfig{}, context.LogRecorder, NewBackoff(), nodegroupconfig.NewDefaultNodeGroupConfigProcessor(config.NodeGroupAutoscalingOptions{MaxNodeProvisionTime: 15 * time.Minute})) wrapper := newWrapperForTesting(&context, clusterStateRegistry, nil) sd := wrapper.sd @@ -623,7 +624,7 @@ func TestFindUnneededEmptyNodes(t *testing.T) { context, err := NewScaleTestAutoscalingContext(options, &fake.Clientset{}, registry, provider, nil, nil) assert.NoError(t, err) - clusterStateRegistry := clusterstate.NewClusterStateRegistry(provider, clusterstate.ClusterStateRegistryConfig{}, context.LogRecorder, NewBackoff(), clusterstate.NewStaticMaxNodeProvisionTimeProvider(15*time.Minute)) + clusterStateRegistry := clusterstate.NewClusterStateRegistry(provider, clusterstate.ClusterStateRegistryConfig{}, context.LogRecorder, NewBackoff(), nodegroupconfig.NewDefaultNodeGroupConfigProcessor(config.NodeGroupAutoscalingOptions{MaxNodeProvisionTime: 15 * time.Minute})) wrapper := newWrapperForTesting(&context, clusterStateRegistry, nil) sd := wrapper.sd @@ -680,7 +681,7 @@ func TestFindUnneededNodePool(t *testing.T) { context, err := NewScaleTestAutoscalingContext(options, &fake.Clientset{}, registry, provider, nil, nil) assert.NoError(t, err) - clusterStateRegistry := clusterstate.NewClusterStateRegistry(provider, clusterstate.ClusterStateRegistryConfig{}, context.LogRecorder, NewBackoff(), clusterstate.NewStaticMaxNodeProvisionTimeProvider(15*time.Minute)) + clusterStateRegistry := clusterstate.NewClusterStateRegistry(provider, clusterstate.ClusterStateRegistryConfig{}, context.LogRecorder, NewBackoff(), nodegroupconfig.NewDefaultNodeGroupConfigProcessor(config.NodeGroupAutoscalingOptions{MaxNodeProvisionTime: 15 * time.Minute})) wrapper := newWrapperForTesting(&context, clusterStateRegistry, nil) sd := wrapper.sd clustersnapshot.InitializeClusterSnapshotOrDie(t, context.ClusterSnapshot, nodes, pods) @@ -771,7 +772,7 @@ func TestScaleDown(t *testing.T) { 
assert.NoError(t, err) nodes := []*apiv1.Node{n1, n2} - clusterStateRegistry := clusterstate.NewClusterStateRegistry(provider, clusterstate.ClusterStateRegistryConfig{}, context.LogRecorder, NewBackoff(), clusterstate.NewStaticMaxNodeProvisionTimeProvider(15*time.Minute)) + clusterStateRegistry := clusterstate.NewClusterStateRegistry(provider, clusterstate.ClusterStateRegistryConfig{}, context.LogRecorder, NewBackoff(), nodegroupconfig.NewDefaultNodeGroupConfigProcessor(config.NodeGroupAutoscalingOptions{MaxNodeProvisionTime: 15 * time.Minute})) wrapper := newWrapperForTesting(&context, clusterStateRegistry, nil) clustersnapshot.InitializeClusterSnapshotOrDie(t, context.ClusterSnapshot, nodes, []*apiv1.Pod{p1, p2}) autoscalererr = wrapper.UpdateClusterState(nodes, nodes, nil, time.Now().Add(-5*time.Minute)) @@ -1028,7 +1029,7 @@ func simpleScaleDownEmpty(t *testing.T, config *ScaleTestConfig) { context, err := NewScaleTestAutoscalingContext(config.Options, fakeClient, registry, provider, nil, nil) assert.NoError(t, err) - clusterStateRegistry := clusterstate.NewClusterStateRegistry(provider, clusterstate.ClusterStateRegistryConfig{}, context.LogRecorder, NewBackoff(), clusterstate.NewStaticMaxNodeProvisionTimeProvider(15*time.Minute)) + clusterStateRegistry := clusterstate.NewClusterStateRegistry(provider, clusterstate.ClusterStateRegistryConfig{}, context.LogRecorder, NewBackoff(), nodegroupconfig.NewDefaultNodeGroupConfigProcessor(config.Options.NodeGroupDefaults)) wrapper := newWrapperForTesting(&context, clusterStateRegistry, config.NodeDeletionTracker) clustersnapshot.InitializeClusterSnapshotOrDie(t, context.ClusterSnapshot, nodes, []*apiv1.Pod{}) autoscalererr = wrapper.UpdateClusterState(nodes, nodes, nil, time.Now().Add(-5*time.Minute)) @@ -1123,7 +1124,7 @@ func TestNoScaleDownUnready(t *testing.T) { nodes := []*apiv1.Node{n1, n2} // N1 is unready so it requires a bigger unneeded time. 
- clusterStateRegistry := clusterstate.NewClusterStateRegistry(provider, clusterstate.ClusterStateRegistryConfig{}, context.LogRecorder, NewBackoff(), clusterstate.NewStaticMaxNodeProvisionTimeProvider(15*time.Minute)) + clusterStateRegistry := clusterstate.NewClusterStateRegistry(provider, clusterstate.ClusterStateRegistryConfig{}, context.LogRecorder, NewBackoff(), nodegroupconfig.NewDefaultNodeGroupConfigProcessor(config.NodeGroupAutoscalingOptions{MaxNodeProvisionTime: 15 * time.Minute})) wrapper := newWrapperForTesting(&context, clusterStateRegistry, nil) clustersnapshot.InitializeClusterSnapshotOrDie(t, context.ClusterSnapshot, nodes, []*apiv1.Pod{p2}) autoscalererr = wrapper.UpdateClusterState(nodes, nodes, nil, time.Now().Add(-5*time.Minute)) @@ -1237,7 +1238,7 @@ func TestScaleDownNoMove(t *testing.T) { nodes := []*apiv1.Node{n1, n2} - clusterStateRegistry := clusterstate.NewClusterStateRegistry(provider, clusterstate.ClusterStateRegistryConfig{}, context.LogRecorder, NewBackoff(), clusterstate.NewStaticMaxNodeProvisionTimeProvider(15*time.Minute)) + clusterStateRegistry := clusterstate.NewClusterStateRegistry(provider, clusterstate.ClusterStateRegistryConfig{}, context.LogRecorder, NewBackoff(), nodegroupconfig.NewDefaultNodeGroupConfigProcessor(config.NodeGroupAutoscalingOptions{MaxNodeProvisionTime: 15 * time.Minute})) wrapper := newWrapperForTesting(&context, clusterStateRegistry, nil) clustersnapshot.InitializeClusterSnapshotOrDie(t, context.ClusterSnapshot, nodes, []*apiv1.Pod{p1, p2}) autoscalererr = wrapper.UpdateClusterState(nodes, nodes, nil, time.Now().Add(-5*time.Minute)) @@ -1292,7 +1293,8 @@ func newWrapperForTesting(ctx *context.AutoscalingContext, clusterStateRegistry MinReplicaCount: 0, SkipNodesWithCustomControllerPods: true, } - sd := NewScaleDown(ctx, NewTestProcessors(ctx), ndt, deleteOptions) - actuator := actuation.NewActuator(ctx, clusterStateRegistry, ndt, deleteOptions) + processors := NewTestProcessors(ctx) + sd := 
NewScaleDown(ctx, processors, ndt, deleteOptions) + actuator := actuation.NewActuator(ctx, clusterStateRegistry, ndt, deleteOptions, processors.NodeGroupConfigProcessor) return NewScaleDownWrapper(sd, actuator) } diff --git a/cluster-autoscaler/core/scaledown/planner/planner.go b/cluster-autoscaler/core/scaledown/planner/planner.go index 81dde062f013..0cd4002c4af3 100644 --- a/cluster-autoscaler/core/scaledown/planner/planner.go +++ b/cluster-autoscaler/core/scaledown/planner/planner.go @@ -18,6 +18,7 @@ package planner import ( "fmt" + "math" "time" apiv1 "k8s.io/api/core/v1" @@ -132,6 +133,7 @@ func (p *Planner) CleanUpUnneededNodes() { // NodesToDelete returns all Nodes that could be removed right now, according // to the Planner. func (p *Planner) NodesToDelete(_ time.Time) (empty, needDrain []*apiv1.Node) { + empty, needDrain = []*apiv1.Node{}, []*apiv1.Node{} nodes, err := allNodes(p.context.ClusterSnapshot) if err != nil { klog.Errorf("Nothing will scale down, failed to list nodes from ClusterSnapshot: %v", err) @@ -154,7 +156,10 @@ func (p *Planner) NodesToDelete(_ time.Time) (empty, needDrain []*apiv1.Node) { // downs already in progress. If we pass the empty nodes first, they will be first // to get deleted, thus we decrease chances of hitting the limit on non-empty scale down. append(emptyRemovable, needDrainRemovable...), - p.context.AutoscalingOptions.MaxScaleDownParallelism) + // No need to limit the number of nodes, since it will happen later, in the actuation stage. + // It will make a more appropriate decision by using additional information about deletions + // in progress. 
+ math.MaxInt) for _, nodeToRemove := range nodesToRemove { if len(nodeToRemove.PodsToReschedule) > 0 { needDrain = append(needDrain, nodeToRemove.Node) @@ -162,6 +167,7 @@ func (p *Planner) NodesToDelete(_ time.Time) (empty, needDrain []*apiv1.Node) { empty = append(empty, nodeToRemove.Node) } } + return empty, needDrain } diff --git a/cluster-autoscaler/core/scaledown/planner/planner_test.go b/cluster-autoscaler/core/scaledown/planner/planner_test.go index 5e0324e2780b..7d226472acaf 100644 --- a/cluster-autoscaler/core/scaledown/planner/planner_test.go +++ b/cluster-autoscaler/core/scaledown/planner/planner_test.go @@ -27,9 +27,11 @@ import ( policyv1 "k8s.io/api/policy/v1" metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" "k8s.io/apimachinery/pkg/types" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider" testprovider "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/test" "k8s.io/autoscaler/cluster-autoscaler/config" "k8s.io/autoscaler/cluster-autoscaler/context" + "k8s.io/autoscaler/cluster-autoscaler/core/scaledown/deletiontracker" "k8s.io/autoscaler/cluster-autoscaler/core/scaledown/status" "k8s.io/autoscaler/cluster-autoscaler/core/scaledown/unremovable" . 
"k8s.io/autoscaler/cluster-autoscaler/core/test" @@ -620,6 +622,195 @@ func TestUpdateClusterStatUnneededNodesLimit(t *testing.T) { } } +func TestNodesToDelete(t *testing.T) { + testCases := []struct { + name string + nodes map[cloudprovider.NodeGroup][]simulator.NodeToBeRemoved + wantEmpty []*apiv1.Node + wantDrain []*apiv1.Node + }{ + { + name: "empty", + nodes: map[cloudprovider.NodeGroup][]simulator.NodeToBeRemoved{}, + wantEmpty: []*apiv1.Node{}, + wantDrain: []*apiv1.Node{}, + }, + { + name: "single empty", + nodes: map[cloudprovider.NodeGroup][]simulator.NodeToBeRemoved{ + sizedNodeGroup("test-ng", 3, false): { + buildRemovableNode("test-node", 0), + }, + }, + wantEmpty: []*apiv1.Node{ + buildRemovableNode("test-node", 0).Node, + }, + wantDrain: []*apiv1.Node{}, + }, + { + name: "single drain", + nodes: map[cloudprovider.NodeGroup][]simulator.NodeToBeRemoved{ + sizedNodeGroup("test-ng", 3, false): { + buildRemovableNode("test-node", 1), + }, + }, + wantEmpty: []*apiv1.Node{}, + wantDrain: []*apiv1.Node{ + buildRemovableNode("test-node", 1).Node, + }, + }, + { + name: "single empty atomic", + nodes: map[cloudprovider.NodeGroup][]simulator.NodeToBeRemoved{ + sizedNodeGroup("atomic-ng", 3, true): { + buildRemovableNode("node-1", 0), + }, + }, + wantEmpty: []*apiv1.Node{}, + wantDrain: []*apiv1.Node{}, + }, + { + name: "all empty atomic", + nodes: map[cloudprovider.NodeGroup][]simulator.NodeToBeRemoved{ + sizedNodeGroup("atomic-ng", 3, true): { + buildRemovableNode("node-1", 0), + buildRemovableNode("node-2", 0), + buildRemovableNode("node-3", 0), + }, + }, + wantEmpty: []*apiv1.Node{ + buildRemovableNode("node-1", 0).Node, + buildRemovableNode("node-2", 0).Node, + buildRemovableNode("node-3", 0).Node, + }, + wantDrain: []*apiv1.Node{}, + }, + { + name: "some drain atomic", + nodes: map[cloudprovider.NodeGroup][]simulator.NodeToBeRemoved{ + sizedNodeGroup("atomic-ng", 3, true): { + buildRemovableNode("node-1", 0), + buildRemovableNode("node-2", 0), + 
buildRemovableNode("node-3", 1), + }, + }, + wantEmpty: []*apiv1.Node{ + buildRemovableNode("node-1", 0).Node, + buildRemovableNode("node-2", 0).Node, + }, + wantDrain: []*apiv1.Node{ + buildRemovableNode("node-3", 1).Node, + }, + }, + { + name: "different groups", + nodes: map[cloudprovider.NodeGroup][]simulator.NodeToBeRemoved{ + sizedNodeGroup("standard-empty-ng", 3, false): { + buildRemovableNode("node-1", 0), + buildRemovableNode("node-2", 0), + buildRemovableNode("node-3", 0), + }, + sizedNodeGroup("standard-drain-ng", 3, false): { + buildRemovableNode("node-4", 1), + buildRemovableNode("node-5", 2), + buildRemovableNode("node-6", 3), + }, + sizedNodeGroup("standard-mixed-ng", 3, false): { + buildRemovableNode("node-7", 0), + buildRemovableNode("node-8", 1), + buildRemovableNode("node-9", 2), + }, + sizedNodeGroup("atomic-empty-ng", 3, true): { + buildRemovableNode("node-10", 0), + buildRemovableNode("node-11", 0), + buildRemovableNode("node-12", 0), + }, + sizedNodeGroup("atomic-mixed-ng", 3, true): { + buildRemovableNode("node-13", 0), + buildRemovableNode("node-14", 1), + buildRemovableNode("node-15", 2), + }, + sizedNodeGroup("atomic-partial-ng", 3, true): { + buildRemovableNode("node-16", 0), + buildRemovableNode("node-17", 1), + }, + }, + wantEmpty: []*apiv1.Node{ + buildRemovableNode("node-1", 0).Node, + buildRemovableNode("node-2", 0).Node, + buildRemovableNode("node-3", 0).Node, + buildRemovableNode("node-7", 0).Node, + buildRemovableNode("node-10", 0).Node, + buildRemovableNode("node-11", 0).Node, + buildRemovableNode("node-12", 0).Node, + buildRemovableNode("node-13", 0).Node, + }, + wantDrain: []*apiv1.Node{ + buildRemovableNode("node-4", 0).Node, + buildRemovableNode("node-5", 0).Node, + buildRemovableNode("node-6", 0).Node, + buildRemovableNode("node-8", 0).Node, + buildRemovableNode("node-9", 0).Node, + buildRemovableNode("node-14", 0).Node, + buildRemovableNode("node-15", 0).Node, + }, + }, + } + for _, tc := range testCases { + tc := tc + 
t.Run(tc.name, func(t *testing.T) { + t.Parallel() + provider := testprovider.NewTestCloudProvider(nil, nil) + allNodes := []*apiv1.Node{} + allRemovables := []simulator.NodeToBeRemoved{} + for ng, nodes := range tc.nodes { + provider.InsertNodeGroup(ng) + for _, removable := range nodes { + allNodes = append(allNodes, removable.Node) + allRemovables = append(allRemovables, removable) + provider.AddNode(ng.Id(), removable.Node) + } + } + context, err := NewScaleTestAutoscalingContext(config.AutoscalingOptions{ + NodeGroupDefaults: config.NodeGroupAutoscalingOptions{ + ScaleDownUnneededTime: 10 * time.Minute, + ScaleDownUnreadyTime: 0 * time.Minute, + }, + }, &fake.Clientset{}, nil, provider, nil, nil) + assert.NoError(t, err) + clustersnapshot.InitializeClusterSnapshotOrDie(t, context.ClusterSnapshot, allNodes, nil) + deleteOptions := simulator.NodeDeleteOptions{} + p := New(&context, NewTestProcessors(&context), deleteOptions) + p.latestUpdate = time.Now() + p.actuationStatus = deletiontracker.NewNodeDeletionTracker(0 * time.Second) + p.unneededNodes.Update(allRemovables, time.Now().Add(-1*time.Hour)) + p.eligibilityChecker = &fakeEligibilityChecker{eligible: asMap(nodeNames(allNodes))} + empty, drain := p.NodesToDelete(time.Now()) + assert.ElementsMatch(t, tc.wantEmpty, empty) + assert.ElementsMatch(t, tc.wantDrain, drain) + }) + } +} + +func sizedNodeGroup(id string, size int, atomic bool) cloudprovider.NodeGroup { + ng := testprovider.NewTestNodeGroup(id, 10000, 0, size, true, false, "n1-standard-2", nil, nil) + ng.SetOptions(&config.NodeGroupAutoscalingOptions{ + ZeroOrMaxNodeScaling: atomic, + }) + return ng +} + +func buildRemovableNode(name string, podCount int) simulator.NodeToBeRemoved { + podsToReschedule := []*apiv1.Pod{} + for i := 0; i < podCount; i++ { + podsToReschedule = append(podsToReschedule, &apiv1.Pod{}) + } + return simulator.NodeToBeRemoved{ + Node: BuildTestNode(name, 1000, 10), + PodsToReschedule: podsToReschedule, + } +} + func 
generateReplicaSets(name string, replicas int32) []*appsv1.ReplicaSet { return []*appsv1.ReplicaSet{ { diff --git a/cluster-autoscaler/core/scaledown/unneeded/nodes.go b/cluster-autoscaler/core/scaledown/unneeded/nodes.go index 8dbdb3171d9c..47f4797ce2af 100644 --- a/cluster-autoscaler/core/scaledown/unneeded/nodes.go +++ b/cluster-autoscaler/core/scaledown/unneeded/nodes.go @@ -49,9 +49,9 @@ type node struct { type scaleDownTimeGetter interface { // GetScaleDownUnneededTime returns ScaleDownUnneededTime value that should be used for a given NodeGroup. - GetScaleDownUnneededTime(context *context.AutoscalingContext, nodeGroup cloudprovider.NodeGroup) (time.Duration, error) + GetScaleDownUnneededTime(nodeGroup cloudprovider.NodeGroup) (time.Duration, error) // GetScaleDownUnreadyTime returns ScaleDownUnreadyTime value that should be used for a given NodeGroup. - GetScaleDownUnreadyTime(context *context.AutoscalingContext, nodeGroup cloudprovider.NodeGroup) (time.Duration, error) + GetScaleDownUnreadyTime(nodeGroup cloudprovider.NodeGroup) (time.Duration, error) } // NewNodes returns a new initialized Nodes object. @@ -162,7 +162,7 @@ func (n *Nodes) unremovableReason(context *context.AutoscalingContext, v *node, if ready { // Check how long a ready node was underutilized. - unneededTime, err := n.sdtg.GetScaleDownUnneededTime(context, nodeGroup) + unneededTime, err := n.sdtg.GetScaleDownUnneededTime(nodeGroup) if err != nil { klog.Errorf("Error trying to get ScaleDownUnneededTime for node %s (in group: %s)", node.Name, nodeGroup.Id()) return simulator.UnexpectedError @@ -172,7 +172,7 @@ func (n *Nodes) unremovableReason(context *context.AutoscalingContext, v *node, } } else { // Unready nodes may be deleted after a different time than underutilized nodes. 
- unreadyTime, err := n.sdtg.GetScaleDownUnreadyTime(context, nodeGroup) + unreadyTime, err := n.sdtg.GetScaleDownUnreadyTime(nodeGroup) if err != nil { klog.Errorf("Error trying to get ScaleDownUnreadyTime for node %s (in group: %s)", node.Name, nodeGroup.Id()) return simulator.UnexpectedError diff --git a/cluster-autoscaler/core/scaledown/unneeded/nodes_test.go b/cluster-autoscaler/core/scaledown/unneeded/nodes_test.go index da899c75e41f..7088e148ea28 100644 --- a/cluster-autoscaler/core/scaledown/unneeded/nodes_test.go +++ b/cluster-autoscaler/core/scaledown/unneeded/nodes_test.go @@ -25,7 +25,6 @@ import ( "k8s.io/autoscaler/cluster-autoscaler/cloudprovider" testprovider "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/test" "k8s.io/autoscaler/cluster-autoscaler/config" - "k8s.io/autoscaler/cluster-autoscaler/context" "k8s.io/autoscaler/cluster-autoscaler/core/scaledown/resource" "k8s.io/autoscaler/cluster-autoscaler/core/scaledown/status" . "k8s.io/autoscaler/cluster-autoscaler/core/test" @@ -235,10 +234,10 @@ func (f *fakeActuationStatus) DeletionsCount(nodeGroup string) int { type fakeScaleDownTimeGetter struct{} -func (f *fakeScaleDownTimeGetter) GetScaleDownUnneededTime(context *context.AutoscalingContext, nodeGroup cloudprovider.NodeGroup) (time.Duration, error) { +func (f *fakeScaleDownTimeGetter) GetScaleDownUnneededTime(cloudprovider.NodeGroup) (time.Duration, error) { return 0 * time.Second, nil } -func (f *fakeScaleDownTimeGetter) GetScaleDownUnreadyTime(context *context.AutoscalingContext, nodeGroup cloudprovider.NodeGroup) (time.Duration, error) { +func (f *fakeScaleDownTimeGetter) GetScaleDownUnreadyTime(cloudprovider.NodeGroup) (time.Duration, error) { return 0 * time.Second, nil } diff --git a/cluster-autoscaler/core/scaleup/orchestrator/executor.go b/cluster-autoscaler/core/scaleup/orchestrator/executor.go new file mode 100644 index 000000000000..8f787e6cecce --- /dev/null +++ b/cluster-autoscaler/core/scaleup/orchestrator/executor.go @@ 
-0,0 +1,248 @@ +/* +Copyright 2016 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package orchestrator + +import ( + "fmt" + "sort" + "strings" + "sync" + "time" + + apiv1 "k8s.io/api/core/v1" + "k8s.io/klog/v2" + schedulerframework "k8s.io/kubernetes/pkg/scheduler/framework" + + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider" + "k8s.io/autoscaler/cluster-autoscaler/clusterstate" + "k8s.io/autoscaler/cluster-autoscaler/context" + "k8s.io/autoscaler/cluster-autoscaler/metrics" + "k8s.io/autoscaler/cluster-autoscaler/processors/nodegroupset" + "k8s.io/autoscaler/cluster-autoscaler/utils/errors" + "k8s.io/autoscaler/cluster-autoscaler/utils/gpu" +) + +// scaleUpExecutor scales up node groups. +type scaleUpExecutor struct { + autoscalingContext *context.AutoscalingContext + clusterStateRegistry *clusterstate.ClusterStateRegistry +} + +// newScaleUpExecutor returns a new instance of the scale up executor. +func newScaleUpExecutor( + autoscalingContext *context.AutoscalingContext, + clusterStateRegistry *clusterstate.ClusterStateRegistry, +) *scaleUpExecutor { + return &scaleUpExecutor{ + autoscalingContext: autoscalingContext, + clusterStateRegistry: clusterStateRegistry, + } +} + +// ExecuteScaleUps executes the scale ups, based on the provided scale up infos and options. +// May scale up groups concurrently when the autoscaler option is enabled. +// In case of issues returns an error and the node groups that failed to scale up. 
+// If there were multiple concurrent errors one combined error is returned. +func (e *scaleUpExecutor) ExecuteScaleUps( + scaleUpInfos []nodegroupset.ScaleUpInfo, + nodeInfos map[string]*schedulerframework.NodeInfo, + now time.Time, +) (errors.AutoscalerError, []cloudprovider.NodeGroup) { + options := e.autoscalingContext.AutoscalingOptions + if options.ParallelScaleUp { + return e.executeScaleUpsParallel(scaleUpInfos, nodeInfos, now) + } + return e.executeScaleUpsSync(scaleUpInfos, nodeInfos, now) +} + +func (e *scaleUpExecutor) executeScaleUpsSync( + scaleUpInfos []nodegroupset.ScaleUpInfo, + nodeInfos map[string]*schedulerframework.NodeInfo, + now time.Time, +) (errors.AutoscalerError, []cloudprovider.NodeGroup) { + availableGPUTypes := e.autoscalingContext.CloudProvider.GetAvailableGPUTypes() + for _, scaleUpInfo := range scaleUpInfos { + nodeInfo, ok := nodeInfos[scaleUpInfo.Group.Id()] + if !ok { + klog.Errorf("ExecuteScaleUp: failed to get node info for node group %s", scaleUpInfo.Group.Id()) + continue + } + if aErr := e.executeScaleUp(scaleUpInfo, nodeInfo, availableGPUTypes, now); aErr != nil { + return aErr, []cloudprovider.NodeGroup{scaleUpInfo.Group} + } + } + return nil, nil +} + +func (e *scaleUpExecutor) executeScaleUpsParallel( + scaleUpInfos []nodegroupset.ScaleUpInfo, + nodeInfos map[string]*schedulerframework.NodeInfo, + now time.Time, +) (errors.AutoscalerError, []cloudprovider.NodeGroup) { + if err := checkUniqueNodeGroups(scaleUpInfos); err != nil { + return err, extractNodeGroups(scaleUpInfos) + } + type errResult struct { + err errors.AutoscalerError + info *nodegroupset.ScaleUpInfo + } + scaleUpsLen := len(scaleUpInfos) + errResults := make(chan errResult, scaleUpsLen) + var wg sync.WaitGroup + wg.Add(scaleUpsLen) + availableGPUTypes := e.autoscalingContext.CloudProvider.GetAvailableGPUTypes() + for _, scaleUpInfo := range scaleUpInfos { + go func(info nodegroupset.ScaleUpInfo) { + defer wg.Done() + nodeInfo, ok := 
nodeInfos[info.Group.Id()] + if !ok { + klog.Errorf("ExecuteScaleUp: failed to get node info for node group %s", info.Group.Id()) + return + } + if aErr := e.executeScaleUp(info, nodeInfo, availableGPUTypes, now); aErr != nil { + errResults <- errResult{err: aErr, info: &info} + } + }(scaleUpInfo) + } + wg.Wait() + close(errResults) + var results []errResult + for err := range errResults { + results = append(results, err) + } + if len(results) > 0 { + failedNodeGroups := make([]cloudprovider.NodeGroup, len(results)) + scaleUpErrors := make([]errors.AutoscalerError, len(results)) + for i, result := range results { + failedNodeGroups[i] = result.info.Group + scaleUpErrors[i] = result.err + } + return combineConcurrentScaleUpErrors(scaleUpErrors), failedNodeGroups + } + return nil, nil +} + +func (e *scaleUpExecutor) executeScaleUp( + info nodegroupset.ScaleUpInfo, + nodeInfo *schedulerframework.NodeInfo, + availableGPUTypes map[string]struct{}, + now time.Time, +) errors.AutoscalerError { + gpuConfig := e.autoscalingContext.CloudProvider.GetNodeGpuConfig(nodeInfo.Node()) + gpuResourceName, gpuType := gpu.GetGpuInfoForMetrics(gpuConfig, availableGPUTypes, nodeInfo.Node(), nil) + klog.V(0).Infof("Scale-up: setting group %s size to %d", info.Group.Id(), info.NewSize) + e.autoscalingContext.LogRecorder.Eventf(apiv1.EventTypeNormal, "ScaledUpGroup", + "Scale-up: setting group %s size to %d instead of %d (max: %d)", info.Group.Id(), info.NewSize, info.CurrentSize, info.MaxSize) + increase := info.NewSize - info.CurrentSize + if err := info.Group.IncreaseSize(increase); err != nil { + e.autoscalingContext.LogRecorder.Eventf(apiv1.EventTypeWarning, "FailedToScaleUpGroup", "Scale-up failed for group %s: %v", info.Group.Id(), err) + aerr := errors.ToAutoscalerError(errors.CloudProviderError, err).AddPrefix("failed to increase node group size: ") + e.clusterStateRegistry.RegisterFailedScaleUp(info.Group, metrics.FailedScaleUpReason(string(aerr.Type())), gpuResourceName, 
gpuType, now) + return aerr + } + e.clusterStateRegistry.RegisterOrUpdateScaleUp( + info.Group, + increase, + time.Now()) + metrics.RegisterScaleUp(increase, gpuResourceName, gpuType) + e.autoscalingContext.LogRecorder.Eventf(apiv1.EventTypeNormal, "ScaledUpGroup", + "Scale-up: group %s size set to %d instead of %d (max: %d)", info.Group.Id(), info.NewSize, info.CurrentSize, info.MaxSize) + return nil +} + +func combineConcurrentScaleUpErrors(errs []errors.AutoscalerError) errors.AutoscalerError { + if len(errs) == 0 { + return nil + } + if len(errs) == 1 { + return errs[0] + } + uniqueMessages := make(map[string]bool) + uniqueTypes := make(map[errors.AutoscalerErrorType]bool) + for _, err := range errs { + uniqueTypes[err.Type()] = true + uniqueMessages[err.Error()] = true + } + if len(uniqueTypes) == 1 && len(uniqueMessages) == 1 { + return errs[0] + } + // sort to stabilize the results and make log aggregation easier + sort.Slice(errs, func(i, j int) bool { + errA := errs[i] + errB := errs[j] + if errA.Type() == errB.Type() { + return errs[i].Error() < errs[j].Error() + } + return errA.Type() < errB.Type() + }) + firstErr := errs[0] + printErrorTypes := len(uniqueTypes) > 1 + message := formatMessageFromConcurrentErrors(errs, printErrorTypes) + return errors.NewAutoscalerError(firstErr.Type(), message) +} + +func formatMessageFromConcurrentErrors(errs []errors.AutoscalerError, printErrorTypes bool) string { + firstErr := errs[0] + var builder strings.Builder + builder.WriteString(firstErr.Error()) + builder.WriteString(" ...and other concurrent errors: [") + formattedErrs := map[errors.AutoscalerError]bool{ + firstErr: true, + } + for _, err := range errs { + if _, has := formattedErrs[err]; has { + continue + } + formattedErrs[err] = true + var message string + if printErrorTypes { + message = fmt.Sprintf("[%s] %s", err.Type(), err.Error()) + } else { + message = err.Error() + } + if len(formattedErrs) > 2 { + builder.WriteString(", ") + } + 
builder.WriteString(fmt.Sprintf("%q", message)) + } + builder.WriteString("]") + return builder.String() +} + +// Checks if all groups are scaled only once. +// Scaling one group multiple times concurrently may cause problems. +func checkUniqueNodeGroups(scaleUpInfos []nodegroupset.ScaleUpInfo) errors.AutoscalerError { + uniqueGroups := make(map[string]bool) + for _, info := range scaleUpInfos { + if uniqueGroups[info.Group.Id()] { + return errors.NewAutoscalerError( + errors.InternalError, + "assertion failure: detected group double scaling: %s", info.Group.Id(), + ) + } + uniqueGroups[info.Group.Id()] = true + } + return nil +} + +func extractNodeGroups(scaleUpInfos []nodegroupset.ScaleUpInfo) []cloudprovider.NodeGroup { + groups := make([]cloudprovider.NodeGroup, len(scaleUpInfos)) + for i, info := range scaleUpInfos { + groups[i] = info.Group + } + return groups +} diff --git a/cluster-autoscaler/core/scaleup/orchestrator/executor_test.go b/cluster-autoscaler/core/scaleup/orchestrator/executor_test.go new file mode 100644 index 000000000000..a7ef5d60f575 --- /dev/null +++ b/cluster-autoscaler/core/scaleup/orchestrator/executor_test.go @@ -0,0 +1,127 @@ +/* +Copyright 2016 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. 
+*/ + +package orchestrator + +import ( + "testing" + + "k8s.io/autoscaler/cluster-autoscaler/utils/errors" + + "github.com/stretchr/testify/assert" +) + +func TestCombinedConcurrentScaleUpErrors(t *testing.T) { + cloudProviderErr := errors.NewAutoscalerError(errors.CloudProviderError, "provider error") + internalErr := errors.NewAutoscalerError(errors.InternalError, "internal error") + testCases := []struct { + desc string + errors []errors.AutoscalerError + expectedErr errors.AutoscalerError + }{ + { + desc: "no errors", + errors: []errors.AutoscalerError{}, + expectedErr: nil, + }, + { + desc: "single error", + errors: []errors.AutoscalerError{internalErr}, + expectedErr: internalErr, + }, + { + desc: "two duplicated errors", + errors: []errors.AutoscalerError{ + internalErr, + internalErr, + }, + expectedErr: internalErr, + }, + { + desc: "two different errors", + errors: []errors.AutoscalerError{ + cloudProviderErr, + internalErr, + }, + expectedErr: errors.NewAutoscalerError( + errors.CloudProviderError, + "provider error ...and other concurrent errors: [\"[internalError] internal error\"]", + ), + }, + { + desc: "two different errors - reverse alphabetical order", + errors: []errors.AutoscalerError{ + internalErr, + cloudProviderErr, + }, + expectedErr: errors.NewAutoscalerError( + errors.CloudProviderError, + "provider error ...and other concurrent errors: [\"[internalError] internal error\"]", + ), + }, + { + desc: "errors with the same type and different messages", + errors: []errors.AutoscalerError{ + errors.NewAutoscalerError(errors.InternalError, "A"), + errors.NewAutoscalerError(errors.InternalError, "B"), + errors.NewAutoscalerError(errors.InternalError, "C"), + }, + expectedErr: errors.NewAutoscalerError( + errors.InternalError, + "A ...and other concurrent errors: [\"B\", \"C\"]"), + }, + { + desc: "errors with the same type and some duplicated messages", + errors: []errors.AutoscalerError{ + errors.NewAutoscalerError(errors.InternalError, "A"), + 
errors.NewAutoscalerError(errors.InternalError, "B"), + errors.NewAutoscalerError(errors.InternalError, "A"), + }, + expectedErr: errors.NewAutoscalerError( + errors.InternalError, + "A ...and other concurrent errors: [\"B\"]"), + }, + { + desc: "some duplicated errors", + errors: []errors.AutoscalerError{ + errors.NewAutoscalerError(errors.CloudProviderError, "A"), + errors.NewAutoscalerError(errors.CloudProviderError, "A"), + errors.NewAutoscalerError(errors.CloudProviderError, "B"), + errors.NewAutoscalerError(errors.InternalError, "A"), + }, + expectedErr: errors.NewAutoscalerError( + errors.CloudProviderError, + "A ...and other concurrent errors: [\"[cloudProviderError] B\", \"[internalError] A\"]"), + }, + { + desc: "different errors with quotes in messages", + errors: []errors.AutoscalerError{ + errors.NewAutoscalerError(errors.InternalError, "\"first\""), + errors.NewAutoscalerError(errors.InternalError, "\"second\""), + }, + expectedErr: errors.NewAutoscalerError( + errors.InternalError, + "\"first\" ...and other concurrent errors: [\"\\\"second\\\"\"]"), + }, + } + + for _, testCase := range testCases { + t.Run(testCase.desc, func(t *testing.T) { + combinedErr := combineConcurrentScaleUpErrors(testCase.errors) + assert.Equal(t, testCase.expectedErr, combinedErr) + }) + } +} diff --git a/cluster-autoscaler/core/scaleup/orchestrator/orchestrator.go b/cluster-autoscaler/core/scaleup/orchestrator/orchestrator.go index 002cf8414bc7..c7bbe8418ef7 100644 --- a/cluster-autoscaler/core/scaleup/orchestrator/orchestrator.go +++ b/cluster-autoscaler/core/scaleup/orchestrator/orchestrator.go @@ -22,6 +22,10 @@ import ( appsv1 "k8s.io/api/apps/v1" apiv1 "k8s.io/api/core/v1" + "k8s.io/autoscaler/cluster-autoscaler/estimator" + "k8s.io/klog/v2" + schedulerframework "k8s.io/kubernetes/pkg/scheduler/framework" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider" "k8s.io/autoscaler/cluster-autoscaler/clusterstate" "k8s.io/autoscaler/cluster-autoscaler/context" @@ -36,11 
+40,8 @@ import ( "k8s.io/autoscaler/cluster-autoscaler/processors/nodegroupset" "k8s.io/autoscaler/cluster-autoscaler/processors/status" "k8s.io/autoscaler/cluster-autoscaler/utils/errors" - "k8s.io/autoscaler/cluster-autoscaler/utils/gpu" "k8s.io/autoscaler/cluster-autoscaler/utils/klogx" "k8s.io/autoscaler/cluster-autoscaler/utils/taints" - "k8s.io/klog/v2" - schedulerframework "k8s.io/kubernetes/pkg/scheduler/framework" ) // ScaleUpOrchestrator implements scaleup.Orchestrator interface. @@ -49,6 +50,7 @@ type ScaleUpOrchestrator struct { processors *ca_processors.AutoscalingProcessors resourceManager *resource.Manager clusterStateRegistry *clusterstate.ClusterStateRegistry + scaleUpExecutor *scaleUpExecutor taintConfig taints.TaintConfig initialized bool } @@ -72,6 +74,7 @@ func (o *ScaleUpOrchestrator) Initialize( o.clusterStateRegistry = clusterStateRegistry o.taintConfig = taintConfig o.resourceManager = resource.NewManager(processors.CustomResourcesProcessor) + o.scaleUpExecutor = newScaleUpExecutor(autoscalingContext, clusterStateRegistry) o.initialized = true } @@ -100,7 +103,10 @@ func (o *ScaleUpOrchestrator) ScaleUp( klogx.V(1).UpTo(loggingQuota).Infof("Pod %s/%s is unschedulable", pod.Namespace, pod.Name) } klogx.V(1).Over(loggingQuota).Infof("%v other pods are also unschedulable", -loggingQuota.Left()) + + buildPodEquivalenceGroupsStart := time.Now() podEquivalenceGroups := equivalence.BuildPodGroups(unschedulablePods) + metrics.UpdateDurationFromStart(metrics.BuildPodEquivalenceGroups, buildPodEquivalenceGroupsStart) upcomingNodes, aErr := o.UpcomingNodes(nodeInfos) if aErr != nil { @@ -117,59 +123,48 @@ func (o *ScaleUpOrchestrator) ScaleUp( } } + // Initialise binpacking limiter. 
+ o.processors.BinpackingLimiter.InitBinpacking(o.autoscalingContext, nodeGroups) + resourcesLeft, aErr := o.resourceManager.ResourcesLeft(o.autoscalingContext, nodeInfos, nodes) if aErr != nil { return scaleUpError(&status.ScaleUpStatus{}, aErr.AddPrefix("could not compute total resources: ")) } now := time.Now() - expansionOptions := make(map[string]expander.Option, 0) - skippedNodeGroups := map[string]status.Reasons{} - for _, nodeGroup := range nodeGroups { - if skipReason := o.IsNodeGroupReadyToScaleUp(nodeGroup, now); skipReason != nil { - skippedNodeGroups[nodeGroup.Id()] = skipReason - continue - } + // Filter out invalid node groups + validNodeGroups, skippedNodeGroups := o.filterValidScaleUpNodeGroups(nodeGroups, nodeInfos, resourcesLeft, len(nodes)+len(upcomingNodes), now) - currentTargetSize, err := nodeGroup.TargetSize() - if err != nil { - klog.Errorf("Failed to get node group size: %v", err) - skippedNodeGroups[nodeGroup.Id()] = NotReadyReason - continue - } - if currentTargetSize >= nodeGroup.MaxSize() { - //FORK-CHANGE: log level changed to 2 for better debugging. - klog.V(2).Infof("Skipping node group %s - max size reached", nodeGroup.Id()) - skippedNodeGroups[nodeGroup.Id()] = MaxLimitReachedReason - continue - } + // Mark skipped node groups as processed. 
+ for nodegroupID := range skippedNodeGroups { + o.processors.BinpackingLimiter.MarkProcessed(o.autoscalingContext, nodegroupID) + } - nodeInfo, found := nodeInfos[nodeGroup.Id()] - if !found { - klog.Errorf("No node info for: %s", nodeGroup.Id()) - skippedNodeGroups[nodeGroup.Id()] = NotReadyReason - continue - } + // Calculate expansion options + schedulablePods := map[string][]*apiv1.Pod{} + var options []expander.Option - if skipReason := o.IsNodeGroupResourceExceeded(resourcesLeft, nodeGroup, nodeInfo); skipReason != nil { - skippedNodeGroups[nodeGroup.Id()] = skipReason - continue - } + for _, nodeGroup := range validNodeGroups { + schedulablePods[nodeGroup.Id()] = o.SchedulablePods(podEquivalenceGroups, nodeGroup, nodeInfos[nodeGroup.Id()]) + } - option, err := o.ComputeExpansionOption(podEquivalenceGroups, nodeGroup, nodeInfo, upcomingNodes) - if err != nil { - return scaleUpError(&status.ScaleUpStatus{}, errors.ToAutoscalerError(errors.InternalError, err)) - } + for _, nodeGroup := range validNodeGroups { + option := o.ComputeExpansionOption(nodeGroup, schedulablePods, nodeInfos, len(nodes)+len(upcomingNodes), now) + o.processors.BinpackingLimiter.MarkProcessed(o.autoscalingContext, nodeGroup.Id()) - if len(option.Pods) > 0 && option.NodeCount > 0 { - expansionOptions[nodeGroup.Id()] = option - } else { + if len(option.Pods) == 0 || option.NodeCount == 0 { klog.V(4).Infof("No pod can fit to %s", nodeGroup.Id()) + } else { + options = append(options, option) + } + + if o.processors.BinpackingLimiter.StopBinpacking(o.autoscalingContext, options) { + break } } - if len(expansionOptions) == 0 { + if len(options) == 0 { klog.V(1).Info("No expansion options") return &status.ScaleUpStatus{ Result: status.ScaleUpNoOptionsAvailable, @@ -179,10 +174,6 @@ func (o *ScaleUpOrchestrator) ScaleUp( } // Pick some expansion option. 
- options := make([]expander.Option, 0, len(expansionOptions)) - for _, o := range expansionOptions { - options = append(options, o) - } bestOption := o.autoscalingContext.ExpanderStrategy.BestOption(options, nodeInfos) if bestOption == nil || bestOption.NodeCount <= 0 { return &status.ScaleUpStatus{ @@ -219,14 +210,17 @@ func (o *ScaleUpOrchestrator) ScaleUp( mainCreatedNodeInfo, aErr := utils.GetNodeInfoFromTemplate(createNodeGroupResult.MainCreatedNodeGroup, daemonSets, o.taintConfig) if aErr == nil { nodeInfos[createNodeGroupResult.MainCreatedNodeGroup.Id()] = mainCreatedNodeInfo + schedulablePods[createNodeGroupResult.MainCreatedNodeGroup.Id()] = o.SchedulablePods(podEquivalenceGroups, createNodeGroupResult.MainCreatedNodeGroup, mainCreatedNodeInfo) } else { klog.Warningf("Cannot build node info for newly created main node group %v; balancing similar node groups may not work; err=%v", createNodeGroupResult.MainCreatedNodeGroup.Id(), aErr) - // Use node info based on expansion candidate but upadte Id which likely changed when node group was created. - nodeInfos[bestOption.NodeGroup.Id()] = nodeInfos[oldId] + // Use node info based on expansion candidate but update Id which likely changed when node group was created. 
+ nodeInfos[createNodeGroupResult.MainCreatedNodeGroup.Id()] = nodeInfos[oldId] + schedulablePods[createNodeGroupResult.MainCreatedNodeGroup.Id()] = schedulablePods[oldId] } if oldId != createNodeGroupResult.MainCreatedNodeGroup.Id() { delete(nodeInfos, oldId) + delete(schedulablePods, oldId) } for _, nodeGroup := range createNodeGroupResult.ExtraCreatedNodeGroups { @@ -236,15 +230,7 @@ func (o *ScaleUpOrchestrator) ScaleUp( continue } nodeInfos[nodeGroup.Id()] = nodeInfo - - option, err := o.ComputeExpansionOption(podEquivalenceGroups, nodeGroup, nodeInfo, upcomingNodes) - if err != nil { - return scaleUpError(&status.ScaleUpStatus{PodsTriggeredScaleUp: bestOption.Pods}, errors.ToAutoscalerError(errors.InternalError, err)) - } - - if len(option.Pods) > 0 && option.NodeCount > 0 { - expansionOptions[nodeGroup.Id()] = option - } + schedulablePods[nodeGroup.Id()] = o.SchedulablePods(podEquivalenceGroups, nodeGroup, nodeInfo) } // Update ClusterStateRegistry so similar nodegroups rebalancing works. 
@@ -253,6 +239,20 @@ func (o *ScaleUpOrchestrator) ScaleUp( o.clusterStateRegistry.Recalculate() } + // Recompute similar node groups in case they need to be updated + bestOption.SimilarNodeGroups = o.ComputeSimilarNodeGroups(bestOption.NodeGroup, nodeInfos, schedulablePods, now) + if bestOption.SimilarNodeGroups != nil { + // if similar node groups are found, log about them + similarNodeGroupIds := make([]string, 0) + for _, sng := range bestOption.SimilarNodeGroups { + similarNodeGroupIds = append(similarNodeGroupIds, sng.Id()) + } + klog.V(2).Infof("Found %d similar node groups: %v", len(bestOption.SimilarNodeGroups), similarNodeGroupIds) + } else if o.autoscalingContext.BalanceSimilarNodeGroups { + // if no similar node groups are found and the flag is enabled, log about it + klog.V(2).Info("No similar node groups found") + } + nodeInfo, found := nodeInfos[bestOption.NodeGroup.Id()] if !found { // This should never happen, as we already should have retrieved nodeInfo for any considered nodegroup. 
@@ -273,32 +273,16 @@ func (o *ScaleUpOrchestrator) ScaleUp( } targetNodeGroups := []cloudprovider.NodeGroup{bestOption.NodeGroup} - if o.autoscalingContext.BalanceSimilarNodeGroups { - similarNodeGroups, aErr := o.processors.NodeGroupSetProcessor.FindSimilarNodeGroups(o.autoscalingContext, bestOption.NodeGroup, nodeInfos) - if aErr != nil { - return scaleUpError( - &status.ScaleUpStatus{CreateNodeGroupResults: createNodeGroupResults, PodsTriggeredScaleUp: bestOption.Pods}, - aErr.AddPrefix("failed to find matching node groups: ")) - } - - similarNodeGroups = filterNodeGroupsByPods(similarNodeGroups, bestOption.Pods, expansionOptions) - for _, ng := range similarNodeGroups { - if o.clusterStateRegistry.IsNodeGroupSafeToScaleUp(ng, now) { - targetNodeGroups = append(targetNodeGroups, ng) - } else { - // This should never happen, as we will filter out the node group earlier on because of missing - // entry in podsPassingPredicates, but double checking doesn't really cost us anything. - klog.V(2).Infof("Ignoring node group %s when balancing: group is not ready for scaleup", ng.Id()) - } - } + for _, ng := range bestOption.SimilarNodeGroups { + targetNodeGroups = append(targetNodeGroups, ng) + } - if len(targetNodeGroups) > 1 { - var names = []string{} - for _, ng := range targetNodeGroups { - names = append(names, ng.Id()) - } - klog.V(1).Infof("Splitting scale-up between %v similar node groups: {%v}", len(targetNodeGroups), strings.Join(names, ", ")) + if len(targetNodeGroups) > 1 { + var names []string + for _, ng := range targetNodeGroups { + names = append(names, ng.Id()) } + klog.V(1).Infof("Splitting scale-up between %v similar node groups: {%v}", len(targetNodeGroups), strings.Join(names, ", ")) } scaleUpInfos, aErr := o.processors.NodeGroupSetProcessor.BalanceScaleUpBetweenGroups(o.autoscalingContext, targetNodeGroups, newNodes) @@ -309,11 +293,12 @@ func (o *ScaleUpOrchestrator) ScaleUp( } klog.V(1).Infof("Final scale-up plan: %v", scaleUpInfos) - if aErr, 
failedInfo := o.ExecuteScaleUps(scaleUpInfos, nodeInfos, now); aErr != nil { + aErr, failedNodeGroups := o.scaleUpExecutor.ExecuteScaleUps(scaleUpInfos, nodeInfos, now) + if aErr != nil { return scaleUpError( &status.ScaleUpStatus{ CreateNodeGroupResults: createNodeGroupResults, - FailedResizeNodeGroups: []cloudprovider.NodeGroup{failedInfo.Group}, + FailedResizeNodeGroups: failedNodeGroups, PodsTriggeredScaleUp: bestOption.Pods, }, aErr, @@ -381,7 +366,7 @@ func (o *ScaleUpOrchestrator) ScaleUpToNodeGroupMinSize( continue } - if skipReason := o.IsNodeGroupResourceExceeded(resourcesLeft, ng, nodeInfo); skipReason != nil { + if skipReason := o.IsNodeGroupResourceExceeded(resourcesLeft, ng, nodeInfo, 1); skipReason != nil { klog.Warning("ScaleUpToNodeGroupMinSize: node group resource excceded: %v", skipReason) continue } @@ -414,10 +399,11 @@ func (o *ScaleUpOrchestrator) ScaleUpToNodeGroupMinSize( } klog.V(1).Infof("ScaleUpToNodeGroupMinSize: final scale-up plan: %v", scaleUpInfos) - if aErr, failedInfo := o.ExecuteScaleUps(scaleUpInfos, nodeInfos, now); aErr != nil { + aErr, failedNodeGroups := o.scaleUpExecutor.ExecuteScaleUps(scaleUpInfos, nodeInfos, now) + if aErr != nil { return scaleUpError( &status.ScaleUpStatus{ - FailedResizeNodeGroups: []cloudprovider.NodeGroup{failedInfo.Group}, + FailedResizeNodeGroups: failedNodeGroups, }, aErr, ) @@ -431,40 +417,134 @@ func (o *ScaleUpOrchestrator) ScaleUpToNodeGroupMinSize( }, nil } +// filterValidScaleUpNodeGroups filters the node groups that are valid for scale-up +func (o *ScaleUpOrchestrator) filterValidScaleUpNodeGroups( + nodeGroups []cloudprovider.NodeGroup, + nodeInfos map[string]*schedulerframework.NodeInfo, + resourcesLeft resource.Limits, + currentNodeCount int, + now time.Time, +) ([]cloudprovider.NodeGroup, map[string]status.Reasons) { + var validNodeGroups []cloudprovider.NodeGroup + skippedNodeGroups := map[string]status.Reasons{} + + for _, nodeGroup := range nodeGroups { + if skipReason := 
o.IsNodeGroupReadyToScaleUp(nodeGroup, now); skipReason != nil { + skippedNodeGroups[nodeGroup.Id()] = skipReason + continue + } + + currentTargetSize, err := nodeGroup.TargetSize() + if err != nil { + klog.Errorf("Failed to get node group size: %v", err) + skippedNodeGroups[nodeGroup.Id()] = NotReadyReason + continue + } + if currentTargetSize >= nodeGroup.MaxSize() { + // FORK-CHANGE: log level changed to 2 for better debugging. + klog.V(2).Infof("Skipping node group %s - max size reached", nodeGroup.Id()) + skippedNodeGroups[nodeGroup.Id()] = MaxLimitReachedReason + continue + } + autoscalingOptions, err := nodeGroup.GetOptions(o.autoscalingContext.NodeGroupDefaults) + if err != nil { + klog.Errorf("Couldn't get autoscaling options for ng: %v", nodeGroup.Id()) + } + numNodes := 1 + if autoscalingOptions != nil && autoscalingOptions.ZeroOrMaxNodeScaling { + numNodes = nodeGroup.MaxSize() - currentTargetSize + if o.autoscalingContext.MaxNodesTotal != 0 && currentNodeCount+numNodes > o.autoscalingContext.MaxNodesTotal { + klog.V(4).Infof("Skipping node group %s - atomic scale-up exceeds cluster node count limit", nodeGroup.Id()) + skippedNodeGroups[nodeGroup.Id()] = NewSkippedReasons("atomic scale-up exceeds cluster node count limit") + continue + } + } + + nodeInfo, found := nodeInfos[nodeGroup.Id()] + if !found { + klog.Errorf("No node info for: %s", nodeGroup.Id()) + skippedNodeGroups[nodeGroup.Id()] = NotReadyReason + continue + } + if skipReason := o.IsNodeGroupResourceExceeded(resourcesLeft, nodeGroup, nodeInfo, numNodes); skipReason != nil { + skippedNodeGroups[nodeGroup.Id()] = skipReason + continue + } + + validNodeGroups = append(validNodeGroups, nodeGroup) + } + return validNodeGroups, skippedNodeGroups +} + // ComputeExpansionOption computes expansion option based on pending pods and cluster state. 
func (o *ScaleUpOrchestrator) ComputeExpansionOption( - podEquivalenceGroups []*equivalence.PodGroup, nodeGroup cloudprovider.NodeGroup, - nodeInfo *schedulerframework.NodeInfo, - upcomingNodes []*schedulerframework.NodeInfo, -) (expander.Option, error) { - option := expander.Option{ - NodeGroup: nodeGroup, - Pods: make([]*apiv1.Pod, 0), + schedulablePods map[string][]*apiv1.Pod, + nodeInfos map[string]*schedulerframework.NodeInfo, + currentNodeCount int, + now time.Time, +) expander.Option { + option := expander.Option{NodeGroup: nodeGroup} + pods := schedulablePods[nodeGroup.Id()] + nodeInfo := nodeInfos[nodeGroup.Id()] + + if len(pods) == 0 { + return option + } + + option.SimilarNodeGroups = o.ComputeSimilarNodeGroups(nodeGroup, nodeInfos, schedulablePods, now) + + estimateStart := time.Now() + expansionEstimator := o.autoscalingContext.EstimatorBuilder( + o.autoscalingContext.PredicateChecker, + o.autoscalingContext.ClusterSnapshot, + estimator.NewEstimationContext(o.autoscalingContext.MaxNodesTotal, option.SimilarNodeGroups, currentNodeCount), + ) + option.NodeCount, option.Pods = expansionEstimator.Estimate(pods, nodeInfo, nodeGroup) + metrics.UpdateDurationFromStart(metrics.Estimate, estimateStart) + + autoscalingOptions, err := nodeGroup.GetOptions(o.autoscalingContext.NodeGroupDefaults) + if err != nil { + klog.Errorf("Failed to get autoscaling options for node group %s: %v", nodeGroup.Id(), err) + } + if autoscalingOptions != nil && autoscalingOptions.ZeroOrMaxNodeScaling { + if option.NodeCount > 0 && option.NodeCount != nodeGroup.MaxSize() { + option.NodeCount = nodeGroup.MaxSize() + } } + return option +} +// SchedulablePods returns a list of pods that could be scheduled +// in a given node group after a scale up. 
+func (o *ScaleUpOrchestrator) SchedulablePods( + podEquivalenceGroups []*equivalence.PodGroup, + nodeGroup cloudprovider.NodeGroup, + nodeInfo *schedulerframework.NodeInfo, +) []*apiv1.Pod { o.autoscalingContext.ClusterSnapshot.Fork() + defer o.autoscalingContext.ClusterSnapshot.Revert() // Add test node to snapshot. - var pods []*apiv1.Pod + var allPods []*apiv1.Pod for _, podInfo := range nodeInfo.Pods { - pods = append(pods, podInfo.Pod) + allPods = append(allPods, podInfo.Pod) } - if err := o.autoscalingContext.ClusterSnapshot.AddNodeWithPods(nodeInfo.Node(), pods); err != nil { + if err := o.autoscalingContext.ClusterSnapshot.AddNodeWithPods(nodeInfo.Node(), allPods); err != nil { klog.Errorf("Error while adding test Node: %v", err) - o.autoscalingContext.ClusterSnapshot.Revert() - return expander.Option{}, nil + return []*apiv1.Pod{} } + var schedulablePods []*apiv1.Pod for _, eg := range podEquivalenceGroups { samplePod := eg.Pods[0] if err := o.autoscalingContext.PredicateChecker.CheckPredicates(o.autoscalingContext.ClusterSnapshot, samplePod, nodeInfo.Node().Name); err == nil { // Add pods to option. - option.Pods = append(option.Pods, eg.Pods...) + schedulablePods = append(schedulablePods, eg.Pods...) // Mark pod group as (theoretically) schedulable. 
eg.Schedulable = true } else { - klog.V(2).Infof("Pod %s can't be scheduled on %s, predicate checking error: %v", samplePod.Name, nodeGroup.Id(), err.VerboseMessage()) + klog.V(2).Infof("Pod %s/%s can't be scheduled on %s, predicate checking error: %v", samplePod.Namespace, samplePod.Name, nodeGroup.Id(), err.VerboseMessage()) if podCount := len(eg.Pods); podCount > 1 { klog.V(2).Infof("%d other pods similar to %s can't be scheduled on %s", podCount-1, samplePod.Name, nodeGroup.Id()) } @@ -472,14 +552,7 @@ func (o *ScaleUpOrchestrator) ComputeExpansionOption( } } - o.autoscalingContext.ClusterSnapshot.Revert() - - if len(option.Pods) > 0 { - estimator := o.autoscalingContext.EstimatorBuilder(o.autoscalingContext.PredicateChecker, o.autoscalingContext.ClusterSnapshot) - option.NodeCount, option.Pods = estimator.Estimate(option.Pods, nodeInfo, option.NodeGroup) - } - - return option, nil + return schedulablePods } // UpcomingNodes returns a list of nodes that are not ready but should be. @@ -500,7 +573,7 @@ func (o *ScaleUpOrchestrator) UpcomingNodes(nodeInfos map[string]*schedulerframe // IsNodeGroupReadyToScaleUp returns nil if node group is ready to be scaled up, otherwise a reason is provided. func (o *ScaleUpOrchestrator) IsNodeGroupReadyToScaleUp(nodeGroup cloudprovider.NodeGroup, now time.Time) *SkippedReasons { - // Autoprovisioned node groups without nodes are created later so skip check for them. + // Non-existing node groups are created later so skip check for them. if nodeGroup.Exist() && !o.clusterStateRegistry.IsNodeGroupSafeToScaleUp(nodeGroup, now) { // Hack that depends on internals of IsNodeGroupSafeToScaleUp. if !o.clusterStateRegistry.IsNodeGroupHealthy(nodeGroup.Id()) { @@ -514,13 +587,17 @@ func (o *ScaleUpOrchestrator) IsNodeGroupReadyToScaleUp(nodeGroup cloudprovider. } // IsNodeGroupResourceExceeded returns nil if node group resource limits are not exceeded, otherwise a reason is provided. 
-func (o *ScaleUpOrchestrator) IsNodeGroupResourceExceeded(resourcesLeft resource.Limits, nodeGroup cloudprovider.NodeGroup, nodeInfo *schedulerframework.NodeInfo) *SkippedReasons {
+func (o *ScaleUpOrchestrator) IsNodeGroupResourceExceeded(resourcesLeft resource.Limits, nodeGroup cloudprovider.NodeGroup, nodeInfo *schedulerframework.NodeInfo, numNodes int) status.Reasons {
     resourcesDelta, err := o.resourceManager.DeltaForNode(o.autoscalingContext, nodeInfo, nodeGroup)
     if err != nil {
         klog.Errorf("Skipping node group %s; error getting node group resources: %v", nodeGroup.Id(), err)
         return NotReadyReason
     }

+    for resource, delta := range resourcesDelta {
+        resourcesDelta[resource] = delta * int64(numNodes)
+    }
+
     checkResult := resource.CheckDeltaWithinLimits(resourcesLeft, resourcesDelta)
     if checkResult.Exceeded {
         klog.V(4).Infof("Skipping node group %s; maximal limit exceeded for %v", nodeGroup.Id(), checkResult.ExceededResources)
@@ -534,7 +611,7 @@ func (o *ScaleUpOrchestrator) IsNodeGroupResourceExceeded(resourcesLeft resource
                 continue
             }
         }
-        return MaxResourceLimitReached(checkResult.ExceededResources)
+        return NewMaxResourceLimitReached(checkResult.ExceededResources)
     }
     return nil
 }
@@ -552,86 +629,61 @@ func (o *ScaleUpOrchestrator) GetCappedNewNodeCount(newNodeCount, currentNodeCou
     return newNodeCount, nil
 }

-// ExecuteScaleUps executes the scale ups, based on the provided scale up infos.
-// In case of issues returns an error and a scale up info which failed to execute.
-func (o *ScaleUpOrchestrator) ExecuteScaleUps(
-    scaleUpInfos []nodegroupset.ScaleUpInfo,
+// ComputeSimilarNodeGroups finds similar node groups which can schedule the same
+// set of pods as the main node group.
+func (o *ScaleUpOrchestrator) ComputeSimilarNodeGroups(
+    nodeGroup cloudprovider.NodeGroup,
     nodeInfos map[string]*schedulerframework.NodeInfo,
+    schedulablePods map[string][]*apiv1.Pod,
     now time.Time,
-) (errors.AutoscalerError, *nodegroupset.ScaleUpInfo) {
-    availableGPUTypes := o.autoscalingContext.CloudProvider.GetAvailableGPUTypes()
-    for _, info := range scaleUpInfos {
-        nodeInfo, ok := nodeInfos[info.Group.Id()]
-        if !ok {
-            klog.Errorf("ExecuteScaleUp: failed to get node info for node group %s", info.Group.Id())
-            continue
-        }
-        gpuConfig := o.autoscalingContext.CloudProvider.GetNodeGpuConfig(nodeInfo.Node())
-        gpuResourceName, gpuType := gpu.GetGpuInfoForMetrics(gpuConfig, availableGPUTypes, nodeInfo.Node(), nil)
-        if aErr := o.executeScaleUp(info, gpuResourceName, gpuType, now); aErr != nil {
-            return aErr, &info
-        }
+) []cloudprovider.NodeGroup {
+    if !o.autoscalingContext.BalanceSimilarNodeGroups {
+        return nil
     }
-    return nil, nil
-}

-func (o *ScaleUpOrchestrator) executeScaleUp(
-    info nodegroupset.ScaleUpInfo,
-    gpuResourceName, gpuType string,
-    now time.Time,
-) errors.AutoscalerError {
-    klog.V(0).Infof("Scale-up: setting group %s size to %d", info.Group.Id(), info.NewSize)
-    o.autoscalingContext.LogRecorder.Eventf(apiv1.EventTypeNormal, "ScaledUpGroup",
-        "Scale-up: setting group %s size to %d instead of %d (max: %d)", info.Group.Id(), info.NewSize, info.CurrentSize, info.MaxSize)
-    increase := info.NewSize - info.CurrentSize
-    if err := info.Group.IncreaseSize(increase); err != nil {
-        o.autoscalingContext.LogRecorder.Eventf(apiv1.EventTypeWarning, "FailedToScaleUpGroup", "Scale-up failed for group %s: %v", info.Group.Id(), err)
-        aerr := errors.ToAutoscalerError(errors.CloudProviderError, err).AddPrefix("failed to increase node group size: ")
-        o.clusterStateRegistry.RegisterFailedScaleUp(info.Group, metrics.FailedScaleUpReason(string(aerr.Type())), now)
-        return aerr
-    }
-    o.clusterStateRegistry.RegisterOrUpdateScaleUp(
-        info.Group,
-        increase,
-        time.Now())
-    metrics.RegisterScaleUp(increase, gpuResourceName, gpuType)
-    o.autoscalingContext.LogRecorder.Eventf(apiv1.EventTypeNormal, "ScaledUpGroup",
-        "Scale-up: group %s size set to %d instead of %d (max: %d)", info.Group.Id(), info.NewSize, info.CurrentSize, info.MaxSize)
-    return nil
-}
+    autoscalingOptions, err := nodeGroup.GetOptions(o.autoscalingContext.NodeGroupDefaults)
+    if err != nil {
+        klog.Errorf("Failed to get autoscaling options for node group %s: %v", nodeGroup.Id(), err)
+    }
+    if autoscalingOptions != nil && autoscalingOptions.ZeroOrMaxNodeScaling {
+        return nil
+    }

-func filterNodeGroupsByPods(
-    groups []cloudprovider.NodeGroup,
-    podsRequiredToFit []*apiv1.Pod,
-    expansionOptions map[string]expander.Option,
-) []cloudprovider.NodeGroup {
+    groupSchedulablePods, found := schedulablePods[nodeGroup.Id()]
+    if !found || len(groupSchedulablePods) == 0 {
+        return nil
+    }

-    result := make([]cloudprovider.NodeGroup, 0)
+    similarNodeGroups, err := o.processors.NodeGroupSetProcessor.FindSimilarNodeGroups(o.autoscalingContext, nodeGroup, nodeInfos)
+    if err != nil {
+        klog.Errorf("Failed to find similar node groups: %v", err)
+        return nil
+    }

-    for _, group := range groups {
-        option, found := expansionOptions[group.Id()]
-        if !found {
-            klog.V(1).Infof("No info about pods passing predicates found for group %v, skipping it from scale-up consideration", group.Id())
-            continue
-        }
-        fittingPods := make(map[*apiv1.Pod]bool, len(option.Pods))
-        for _, pod := range option.Pods {
-            fittingPods[pod] = true
-        }
-        allFit := true
-        for _, pod := range podsRequiredToFit {
-            if _, found := fittingPods[pod]; !found {
-                klog.V(1).Infof("Group %v, can't fit pod %v/%v, removing from scale-up consideration", group.Id(), pod.Namespace, pod.Name)
-                allFit = false
-                break
-            }
-        }
-        if allFit {
-            result = append(result, group)
+    var validSimilarNodeGroups []cloudprovider.NodeGroup
+    for _, ng := range similarNodeGroups {
+        // Non-existing node groups are created later so skip check for them.
+        if ng.Exist() && !o.clusterStateRegistry.IsNodeGroupSafeToScaleUp(ng, now) {
+            klog.V(2).Infof("Ignoring node group %s when balancing: group is not ready for scaleup", ng.Id())
+        } else if similarSchedulablePods, found := schedulablePods[ng.Id()]; found && matchingSchedulablePods(groupSchedulablePods, similarSchedulablePods) {
+            validSimilarNodeGroups = append(validSimilarNodeGroups, ng)
         }
     }
-    return result
+    return validSimilarNodeGroups
+}
+
+func matchingSchedulablePods(groupSchedulablePods []*apiv1.Pod, similarSchedulablePods []*apiv1.Pod) bool {
+    schedulablePods := make(map[*apiv1.Pod]bool)
+    for _, pod := range similarSchedulablePods {
+        schedulablePods[pod] = true
+    }
+    for _, pod := range groupSchedulablePods {
+        if _, found := schedulablePods[pod]; !found {
+            return false
+        }
+    }
+    return true
 }

 // GetRemainingPods returns information about pods which CA is unable to help
diff --git a/cluster-autoscaler/core/scaleup/orchestrator/orchestrator_test.go b/cluster-autoscaler/core/scaleup/orchestrator/orchestrator_test.go
index 23d9b38d3575..6386b96120d6 100644
--- a/cluster-autoscaler/core/scaleup/orchestrator/orchestrator_test.go
+++ b/cluster-autoscaler/core/scaleup/orchestrator/orchestrator_test.go
@@ -22,19 +22,25 @@ import (
     "net/http/httptest"
     "regexp"
     "strings"
+    "sync/atomic"
     "testing"
     "time"

+    "k8s.io/autoscaler/cluster-autoscaler/processors/nodegroupconfig"
+    kube_record "k8s.io/client-go/tools/record"
+    "k8s.io/component-base/metrics/legacyregistry"
+
     "k8s.io/autoscaler/cluster-autoscaler/cloudprovider"
-    mockprovider "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/mocks"
     testprovider "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/test"
     "k8s.io/autoscaler/cluster-autoscaler/clusterstate"
     "k8s.io/autoscaler/cluster-autoscaler/config"
+    "k8s.io/autoscaler/cluster-autoscaler/context"
     "k8s.io/autoscaler/cluster-autoscaler/core/scaleup/resource"
     . "k8s.io/autoscaler/cluster-autoscaler/core/test"
     "k8s.io/autoscaler/cluster-autoscaler/core/utils"
     "k8s.io/autoscaler/cluster-autoscaler/estimator"
     "k8s.io/autoscaler/cluster-autoscaler/metrics"
+    "k8s.io/autoscaler/cluster-autoscaler/processors"
     "k8s.io/autoscaler/cluster-autoscaler/processors/nodegroupset"
     "k8s.io/autoscaler/cluster-autoscaler/processors/nodeinfosprovider"
     "k8s.io/autoscaler/cluster-autoscaler/processors/status"
@@ -43,8 +49,6 @@ import (
     "k8s.io/autoscaler/cluster-autoscaler/utils/taints"
     . "k8s.io/autoscaler/cluster-autoscaler/utils/test"
     "k8s.io/autoscaler/cluster-autoscaler/utils/units"
-    kube_record "k8s.io/client-go/tools/record"
-    "k8s.io/component-base/metrics/legacyregistry"

     appsv1 "k8s.io/api/apps/v1"
     apiv1 "k8s.io/api/core/v1"
@@ -52,7 +56,6 @@ import (
     schedulerframework "k8s.io/kubernetes/pkg/scheduler/framework"

     "github.com/stretchr/testify/assert"
-    "k8s.io/autoscaler/cluster-autoscaler/expander"
 )

 var defaultOptions = config.AutoscalingOptions{
@@ -65,7 +68,7 @@ var defaultOptions = config.AutoscalingOptions{

 // Scale up scenarios.
 func TestScaleUpOK(t *testing.T) {
-    config := &ScaleTestConfig{
+    config := &ScaleUpTestConfig{
         Nodes: []NodeConfig{
             {Name: "n1", Cpu: 100, Memory: 100, Gpu: 0, Ready: true, Group: "ng1"},
             {Name: "n2", Cpu: 1000, Memory: 1000, Gpu: 0, Ready: true, Group: "ng2"},
@@ -77,8 +80,7 @@ func TestScaleUpOK(t *testing.T) {
         ExtraPods: []PodConfig{
             {Name: "p-new", Cpu: 500, Memory: 0, Gpu: 0, Node: "", ToleratesGpu: false},
         },
-        Options: defaultOptions,
-        ExpansionOptionToChoose: GroupSizeChange{GroupName: "ng2", SizeChange: 1},
+        ExpansionOptionToChoose: &GroupSizeChange{GroupName: "ng2", SizeChange: 1},
     }
     expectedResults := &ScaleTestResults{
         FinalOption: GroupSizeChange{GroupName: "ng2", SizeChange: 1},
@@ -92,7 +94,7 @@ func TestScaleUpOK(t *testing.T) {

 // There are triggering, remaining & awaiting pods.
 func TestMixedScaleUp(t *testing.T) {
-    config := &ScaleTestConfig{
+    config := &ScaleUpTestConfig{
         Nodes: []NodeConfig{
             {Name: "n1", Cpu: 100, Memory: 1000, Gpu: 0, Ready: true, Group: "ng1"},
             {Name: "n2", Cpu: 1000, Memory: 100, Gpu: 0, Ready: true, Group: "ng2"},
@@ -109,8 +111,7 @@ func TestMixedScaleUp(t *testing.T) {
             // can only be scheduled on ng1
             {Name: "awaiting", Cpu: 0, Memory: 200, Gpu: 0, Node: "", ToleratesGpu: false},
         },
-        Options: defaultOptions,
-        ExpansionOptionToChoose: GroupSizeChange{GroupName: "ng2", SizeChange: 1},
+        ExpansionOptionToChoose: &GroupSizeChange{GroupName: "ng2", SizeChange: 1},
     }
     expectedResults := &ScaleTestResults{
         FinalOption: GroupSizeChange{GroupName: "ng2", SizeChange: 1},
@@ -124,10 +125,266 @@ func TestMixedScaleUp(t *testing.T) {
     simpleScaleUpTest(t, config, expectedResults)
 }

+func TestZeroOrMaxNodeScaling(t *testing.T) {
+    options := defaultOptions
+    options.NodeGroupDefaults.ZeroOrMaxNodeScaling = true
+
+    optionsWithLimitedMaxCores := options
+    optionsWithLimitedMaxCores.MaxCoresTotal = 3
+
+    optionsWithLimitedMaxMemory := options
+    optionsWithLimitedMaxMemory.MaxMemoryTotal = 3000
+
+    optionsWithLimitedMaxNodes := options
+    optionsWithLimitedMaxNodes.MaxNodesTotal = 5
+
+    n := BuildTestNode("n", 1000, 1000)
+    SetNodeReadyState(n, true, time.Time{})
+    nodeInfo := schedulerframework.NewNodeInfo()
+    nodeInfo.SetNode(n)
+
+    cases := map[string]struct {
+        testConfig *ScaleUpTestConfig
+        expectedResults *ScaleTestResults
+        isScaleUpOk bool
+    }{
+        "Atomic ScaleUp OK": {
+            testConfig: &ScaleUpTestConfig{
+                Groups: []NodeGroupConfig{
+                    {Name: "ng1", MaxSize: 5},
+                    {Name: "ng2", MaxSize: 5},
+                },
+                Nodes: []NodeConfig{
+                    {Name: "n1", Cpu: 900, Memory: 900, Gpu: 0, Ready: true, Group: "ng1"},
+                },
+                Pods: []PodConfig{
+                    {Name: "p1", Cpu: 900, Memory: 900, Gpu: 0, Node: "n1", ToleratesGpu: false},
+                },
+                ExtraPods: []PodConfig{
+                    {Name: "p-new", Cpu: 1000, Memory: 1000, Gpu: 0, Node: "", ToleratesGpu: false},
+                },
+                // ExpansionOptionToChoose: &GroupSizeChange{GroupName: "ng2", SizeChange: 2},
+                Options: &options,
+                NodeTemplateConfigs: map[string]*NodeTemplateConfig{
+                    "ng2": {
+                        NodeGroupName: "ng2",
+                        MachineType: "ct4p",
+                        NodeInfo: nodeInfo,
+                    },
+                },
+            },
+            expectedResults: &ScaleTestResults{
+                FinalOption: GroupSizeChange{GroupName: "ng2", SizeChange: 5},
+                ScaleUpStatus: ScaleUpStatusInfo{
+                    PodsTriggeredScaleUp: []string{"p-new"},
+                },
+            },
+            isScaleUpOk: true,
+        },
+        "Atomic ScaleUp with similar node groups": {
+            testConfig: &ScaleUpTestConfig{
+                Groups: []NodeGroupConfig{
+                    {Name: "ng1", MaxSize: 5},
+                    {Name: "ng2", MaxSize: 5},
+                    {Name: "ng3", MaxSize: 4},
+                },
+                Nodes: []NodeConfig{
+                    {Name: "n1", Cpu: 900, Memory: 900, Gpu: 0, Ready: true, Group: "ng1"},
+                },
+                Pods: []PodConfig{
+                    {Name: "p1", Cpu: 900, Memory: 900, Gpu: 0, Node: "n1", ToleratesGpu: false},
+                },
+                ExtraPods: []PodConfig{
+                    {Name: "p-new-1", Cpu: 1000, Memory: 1000, Gpu: 0, Node: "", ToleratesGpu: false},
+                    {Name: "p-new-2", Cpu: 1000, Memory: 1000, Gpu: 0, Node: "", ToleratesGpu: false},
+                    {Name: "p-new-3", Cpu: 1000, Memory: 1000, Gpu: 0, Node: "", ToleratesGpu: false},
+                    {Name: "p-new-4", Cpu: 1000, Memory: 1000, Gpu: 0, Node: "", ToleratesGpu: false},
+                },
+                // ExpansionOptionToChoose: &GroupSizeChange{GroupName: "ng2", SizeChange: 2},
+                Options: &options,
+                NodeTemplateConfigs: map[string]*NodeTemplateConfig{
+                    "ng2": {
+                        NodeGroupName: "ng2",
+                        MachineType: "ct4p",
+                        NodeInfo: nodeInfo,
+                    },
+                    "ng3": {
+                        NodeGroupName: "ng3",
+                        MachineType: "ct4p",
+                        NodeInfo: nodeInfo,
+                    },
+                },
+                ExpansionOptionToChoose: &GroupSizeChange{GroupName: "ng3", SizeChange: 4},
+            },
+            expectedResults: &ScaleTestResults{
+                FinalOption: GroupSizeChange{GroupName: "ng3", SizeChange: 4},
+                ScaleUpStatus: ScaleUpStatusInfo{
+                    PodsTriggeredScaleUp: []string{"p-new-1", "p-new-2", "p-new-3", "p-new-4"},
+                },
+            },
+
+            isScaleUpOk: true,
+        },
+        "Atomic ScaleUp Mixed": {
+            testConfig: &ScaleUpTestConfig{
+                Groups: []NodeGroupConfig{
+                    {Name: "ng1", MaxSize: 5},
+                    {Name: "ng2", MaxSize: 5},
+                    {Name: "ng3", MaxSize: 5},
+                },
+                Nodes: []NodeConfig{
+                    {Name: "n1", Cpu: 500, Memory: 2000, Gpu: 0, Ready: true, Group: "ng1"},
+                    {Name: "n2", Cpu: 2000, Memory: 500, Gpu: 0, Ready: true, Group: "ng2"},
+                },
+                Pods: []PodConfig{
+                    {Name: "p1", Cpu: 400, Memory: 1900, Gpu: 0, Node: "n1", ToleratesGpu: false},
+                    {Name: "p2", Cpu: 1900, Memory: 400, Gpu: 0, Node: "n2", ToleratesGpu: false},
+                },
+                ExtraPods: []PodConfig{
+                    {Name: "p-triggering", Cpu: 1000, Memory: 1000, Gpu: 0, Node: "", ToleratesGpu: false},
+                    {Name: "p-remaining", Cpu: 2000, Memory: 2000, Gpu: 0, Node: "", ToleratesGpu: false},
+                    {Name: "p-awaiting", Cpu: 100, Memory: 1800, Gpu: 0, Node: "", ToleratesGpu: false},
+                },
+                ExpansionOptionToChoose: &GroupSizeChange{GroupName: "ng3", SizeChange: 5},
+                Options: &options,
+                NodeTemplateConfigs: map[string]*NodeTemplateConfig{
+                    "ng3": {
+                        NodeGroupName: "ng3",
+                        MachineType: "ct4p",
+                        NodeInfo: nodeInfo,
+                    },
+                },
+            },
+            expectedResults: &ScaleTestResults{
+                FinalOption: GroupSizeChange{GroupName: "ng3", SizeChange: 5},
+                ScaleUpStatus: ScaleUpStatusInfo{
+                    PodsTriggeredScaleUp: []string{"p-triggering"},
+                    PodsRemainUnschedulable: []string{"p-remaining"},
+                    PodsAwaitEvaluation: []string{"p-awaiting"},
+                },
+            },
+            isScaleUpOk: true,
+        },
+        "Atomic ScaleUp max cores limit hit": {
+            testConfig: &ScaleUpTestConfig{
+                Groups: []NodeGroupConfig{
+                    {Name: "ng1", MaxSize: 5},
+                    {Name: "ng2", MaxSize: 3},
+                },
+                Nodes: []NodeConfig{
+                    {Name: "n1", Cpu: 900, Memory: 900, Gpu: 0, Ready: true, Group: "ng1"},
+                },
+                Pods: []PodConfig{
+                    {Name: "p1", Cpu: 900, Memory: 900, Gpu: 0, Node: "n1", ToleratesGpu: false},
+                },
+                ExtraPods: []PodConfig{
+                    {Name: "p-new-1", Cpu: 1000, Memory: 1000, Gpu: 0, Node: "", ToleratesGpu: false},
+                    {Name: "p-new-2", Cpu: 1000, Memory: 1000, Gpu: 0, Node: "", ToleratesGpu: false},
+                },
+                // ExpansionOptionToChoose: &GroupSizeChange{GroupName: "ng2", SizeChange: 2},
+                Options: &optionsWithLimitedMaxCores,
+                NodeTemplateConfigs: map[string]*NodeTemplateConfig{
+                    "ng2": {
+                        NodeGroupName: "ng2",
+                        MachineType: "ct4p",
+                        NodeInfo: nodeInfo,
+                    },
+                },
+            },
+            expectedResults: &ScaleTestResults{
+                NoScaleUpReason: "max cluster cpu limit reached",
+                ScaleUpStatus: ScaleUpStatusInfo{
+                    PodsRemainUnschedulable: []string{"p-new-1", "p-new-2"},
+                },
+            },
+            isScaleUpOk: false,
+        },
+        "Atomic ScaleUp max memory limit hit": {
+            testConfig: &ScaleUpTestConfig{
+                Groups: []NodeGroupConfig{
+                    {Name: "ng1", MaxSize: 5},
+                    {Name: "ng2", MaxSize: 3},
+                },
+                Nodes: []NodeConfig{
+                    {Name: "n1", Cpu: 900, Memory: 900, Gpu: 0, Ready: true, Group: "ng1"},
+                },
+                Pods: []PodConfig{
+                    {Name: "p1", Cpu: 900, Memory: 900, Gpu: 0, Node: "n1", ToleratesGpu: false},
+                },
+                ExtraPods: []PodConfig{
+                    {Name: "p-new-1", Cpu: 1000, Memory: 1000, Gpu: 0, Node: "", ToleratesGpu: false},
+                    {Name: "p-new-2", Cpu: 1000, Memory: 1000, Gpu: 0, Node: "", ToleratesGpu: false},
+                },
+                // ExpansionOptionToChoose: &GroupSizeChange{GroupName: "ng2", SizeChange: 2},
+                Options: &optionsWithLimitedMaxMemory,
+                NodeTemplateConfigs: map[string]*NodeTemplateConfig{
+                    "ng2": {
+                        NodeGroupName: "ng2",
+                        MachineType: "ct4p",
+                        NodeInfo: nodeInfo,
+                    },
+                },
+            },
+            expectedResults: &ScaleTestResults{
+                NoScaleUpReason: "max cluster memory limit reached",
+                ScaleUpStatus: ScaleUpStatusInfo{
+                    PodsRemainUnschedulable: []string{"p-new-1", "p-new-2"},
+                },
+            },
+            isScaleUpOk: false,
+        },
+        "Atomic ScaleUp max nodes count limit hit": {
+            testConfig: &ScaleUpTestConfig{
+                Groups: []NodeGroupConfig{
+                    {Name: "ng1", MaxSize: 2},
+                    {Name: "ng2", MaxSize: 5},
+                },
+                Nodes: []NodeConfig{
+                    {Name: "n1", Cpu: 900, Memory: 900, Gpu: 0, Ready: true, Group: "ng1"},
+                    {Name: "n2", Cpu: 900, Memory: 900, Gpu: 0, Ready: true, Group: "ng1"},
+                },
+                Pods: []PodConfig{
+                    {Name: "p1", Cpu: 900, Memory: 900, Gpu: 0, Node: "n1", ToleratesGpu: false},
+                    {Name: "p2", Cpu: 900, Memory: 900, Gpu: 0, Node: "n1", ToleratesGpu: false},
+                },
+                ExtraPods: []PodConfig{
+                    {Name: "p-new-1", Cpu: 1000, Memory: 1000, Gpu: 0, Node: "", ToleratesGpu: false},
+                    {Name: "p-new-2", Cpu: 1000, Memory: 1000, Gpu: 0, Node: "", ToleratesGpu: false},
+                },
+                // ExpansionOptionToChoose: &GroupSizeChange{GroupName: "ng2", SizeChange: 2},
+                Options: &optionsWithLimitedMaxNodes,
+                NodeTemplateConfigs: map[string]*NodeTemplateConfig{
+                    "ng2": {
+                        NodeGroupName: "ng2",
+                        MachineType: "ct4p",
+                        NodeInfo: nodeInfo,
+                    },
+                },
+            },
+            expectedResults: &ScaleTestResults{
+                NoScaleUpReason: "atomic scale-up exceeds cluster node count limit",
+                ScaleUpStatus: ScaleUpStatusInfo{
+                    PodsRemainUnschedulable: []string{"p-new-1", "p-new-2"},
+                },
+            },
+            isScaleUpOk: false,
+        },
+    }
+    for tn, tc := range cases {
+        t.Run(tn, func(t *testing.T) {
+            if tc.isScaleUpOk {
+                simpleScaleUpTest(t, tc.testConfig, tc.expectedResults)
+            } else {
+                simpleNoScaleUpTest(t, tc.testConfig, tc.expectedResults)
+            }
+        })
+    }
+}
+
 func TestScaleUpMaxCoresLimitHit(t *testing.T) {
     options := defaultOptions
     options.MaxCoresTotal = 9
-    config := &ScaleTestConfig{
+    config := &ScaleUpTestConfig{
         Nodes: []NodeConfig{
             {Name: "n1", Cpu: 2000, Memory: 100, Gpu: 0, Ready: true, Group: "ng1"},
             {Name: "n2", Cpu: 4000, Memory: 1000, Gpu: 0, Ready: true, Group: "ng2"},
@@ -140,8 +397,8 @@ func TestScaleUpMaxCoresLimitHit(t *testing.T) {
             {Name: "p-new-1", Cpu: 2000, Memory: 0, Gpu: 0, Node: "", ToleratesGpu: false},
             {Name: "p-new-2", Cpu: 2000, Memory: 0, Gpu: 0, Node: "", ToleratesGpu: false},
         },
-        ExpansionOptionToChoose: GroupSizeChange{GroupName: "ng1", SizeChange: 2},
-        Options: options,
+        ExpansionOptionToChoose: &GroupSizeChange{GroupName: "ng1", SizeChange: 2},
+        Options: &options,
     }
     results := &ScaleTestResults{
         FinalOption: GroupSizeChange{GroupName: "ng1", SizeChange: 1},
@@ -156,7 +413,7 @@ func TestScaleUpMaxCoresLimitHit(t *testing.T) {
 func TestScaleUpMaxCoresLimitHitWithNotAutoscaledGroup(t *testing.T) {
     options := defaultOptions
     options.MaxCoresTotal = 9
-    config := &ScaleTestConfig{
+    config := &ScaleUpTestConfig{
         Nodes: []NodeConfig{
             {Name: "n1", Cpu: 2000, Memory: 100, Gpu: 0, Ready: true, Group: "ng1"},
             {Name: "n2", Cpu: 4000, Memory: 1000, Gpu: 0, Ready: true, Group: ""},
@@ -169,8 +426,8 @@ func TestScaleUpMaxCoresLimitHitWithNotAutoscaledGroup(t *testing.T) {
             {Name: "p-new-1", Cpu: 2000, Memory: 0, Gpu: 0, Node: "", ToleratesGpu: false},
             {Name: "p-new-2", Cpu: 2000, Memory: 0, Gpu: 0, Node: "", ToleratesGpu: false},
         },
-        ExpansionOptionToChoose: GroupSizeChange{GroupName: "ng1", SizeChange: 2},
-        Options: options,
+        ExpansionOptionToChoose: &GroupSizeChange{GroupName: "ng1", SizeChange: 2},
+        Options: &options,
     }
     results := &ScaleTestResults{
         FinalOption: GroupSizeChange{GroupName: "ng1", SizeChange: 1},
@@ -185,7 +442,7 @@ func TestScaleUpMaxCoresLimitHitWithNotAutoscaledGroup(t *testing.T) {
 func TestScaleUpMaxMemoryLimitHit(t *testing.T) {
     options := defaultOptions
     options.MaxMemoryTotal = 1300 * utils.MiB
-    config := &ScaleTestConfig{
+    config := &ScaleUpTestConfig{
         Nodes: []NodeConfig{
             {Name: "n1", Cpu: 2000, Memory: 100 * utils.MiB, Gpu: 0, Ready: true, Group: "ng1"},
             {Name: "n2", Cpu: 4000, Memory: 1000 * utils.MiB, Gpu: 0, Ready: true, Group: "ng2"},
@@ -199,8 +456,8 @@ func TestScaleUpMaxMemoryLimitHit(t *testing.T) {
             {Name: "p-new-2", Cpu: 2000, Memory: 100 * utils.MiB, Gpu: 0, Node: "", ToleratesGpu: false},
             {Name: "p-new-3", Cpu: 2000, Memory: 100 * utils.MiB, Gpu: 0, Node: "", ToleratesGpu: false},
         },
-        ExpansionOptionToChoose: GroupSizeChange{GroupName: "ng1", SizeChange: 3},
-        Options: options,
+        ExpansionOptionToChoose: &GroupSizeChange{GroupName: "ng1", SizeChange: 3},
+        Options: &options,
     }
     results := &ScaleTestResults{
         FinalOption: GroupSizeChange{GroupName: "ng1", SizeChange: 2},
@@ -215,7 +472,7 @@ func TestScaleUpMaxMemoryLimitHit(t *testing.T) {
 func TestScaleUpMaxMemoryLimitHitWithNotAutoscaledGroup(t *testing.T) {
     options := defaultOptions
     options.MaxMemoryTotal = 1300 * utils.MiB
-    config := &ScaleTestConfig{
+    config := &ScaleUpTestConfig{
         Nodes: []NodeConfig{
             {Name: "n1", Cpu: 2000, Memory: 100 * utils.MiB, Gpu: 0, Ready: true, Group: "ng1"},
             {Name: "n2", Cpu: 4000, Memory: 1000 * utils.MiB, Gpu: 0, Ready: true, Group: ""},
@@ -229,8 +486,8 @@ func TestScaleUpMaxMemoryLimitHitWithNotAutoscaledGroup(t *testing.T) {
             {Name: "p-new-2", Cpu: 2000, Memory: 100 * utils.MiB, Gpu: 0, Node: "", ToleratesGpu: false},
             {Name: "p-new-3", Cpu: 2000, Memory: 100 * utils.MiB, Gpu: 0, Node: "", ToleratesGpu: false},
         },
-        ExpansionOptionToChoose: GroupSizeChange{GroupName: "ng1", SizeChange: 3},
-        Options: options,
+        ExpansionOptionToChoose: &GroupSizeChange{GroupName: "ng1", SizeChange: 3},
+        Options: &options,
     }
     results := &ScaleTestResults{
         FinalOption: GroupSizeChange{GroupName: "ng1", SizeChange: 2},
@@ -242,10 +499,140 @@ func TestScaleUpMaxMemoryLimitHitWithNotAutoscaledGroup(t *testing.T) {
     simpleScaleUpTest(t, config, results)
 }

+func TestScaleUpTwoGroups(t *testing.T) {
+    options := defaultOptions
+    options.BalanceSimilarNodeGroups = true
+    options.ParallelScaleUp = true
+    config := &ScaleUpTestConfig{
+        Nodes: []NodeConfig{
+            {Name: "ng1-n1", Cpu: 1500, Memory: 1000 * utils.MiB, Ready: true, Group: "ng1"},
+            {Name: "ng2-n1", Cpu: 1500, Memory: 1000 * utils.MiB, Ready: true, Group: "ng2"},
+        },
+        Pods: []PodConfig{
+            {Name: "p1", Cpu: 1400, Node: "ng1-n1"},
+            {Name: "p2", Cpu: 1400, Node: "ng2-n1"},
+        },
+        ExtraPods: []PodConfig{
+            {Name: "p3", Cpu: 1400},
+            {Name: "p4", Cpu: 1400},
+        },
+        Options: &options,
+    }
+    testCases := []struct {
+        desc string
+        parallel bool
+    }{
+        {
+            desc: "synchronous scale up",
+            parallel: false,
+        },
+        {
+            desc: "parallel scale up",
+            parallel: true,
+        },
+    }
+    for _, tc := range testCases {
+        t.Run(tc.desc, func(t *testing.T) {
+            config.Options.ParallelScaleUp = tc.parallel
+            result := runSimpleScaleUpTest(t, config)
+            assert.True(t, result.ScaleUpStatus.WasSuccessful())
+            assert.Nil(t, result.ScaleUpError)
+            assert.Equal(t, result.GroupTargetSizes, map[string]int{
+                "ng1": 2,
+                "ng2": 2,
+            })
+            assert.ElementsMatch(t, result.ScaleUpStatus.PodsTriggeredScaleUp, []string{"p3", "p4"})
+        })
+    }
+}
+
+func TestCloudProviderFailingToScaleUpGroups(t *testing.T) {
+    options := defaultOptions
+    options.BalanceSimilarNodeGroups = true
+    config := &ScaleUpTestConfig{
+        Groups: []NodeGroupConfig{
+            {Name: "ng1", MaxSize: 2},
+            {Name: "ng2", MaxSize: 2},
+        },
+        Nodes: []NodeConfig{
+            {Name: "ng1-n1", Cpu: 1500, Memory: 1000 * utils.MiB, Ready: true, Group: "ng1"},
+            {Name: "ng2-n1", Cpu: 1500, Memory: 1000 * utils.MiB, Ready: true, Group: "ng2"},
+        },
+        Pods: []PodConfig{
+            {Name: "p1", Cpu: 1400, Node: "ng1-n1"},
+            {Name: "p2", Cpu: 1400, Node: "ng2-n1"},
+        },
+        ExtraPods: []PodConfig{
+            {Name: "p3", Cpu: 1400},
+            {Name: "p4", Cpu: 1400},
+        },
+        Options: &options,
+    }
+    failAlwaysScaleUp := func(group string, i int) error {
+        return fmt.Errorf("provider error for: %s", group)
+    }
+    failOnceScaleUp := func() testprovider.OnScaleUpFunc {
+        var executed atomic.Bool
+        return func(group string, _ int) error {
+            if !executed.Swap(true) {
+                return fmt.Errorf("provider error for: %s", group)
+            }
+            return nil
+        }
+    }
+    testCases := []struct {
+        desc string
+        parallel bool
+        onScaleUp testprovider.OnScaleUpFunc
+        expectConcurrentErrors bool
+        expectedTotalTargetSizes int
+    }{
+        {
+            desc: "synchronous scale up - two failures",
+            parallel: false,
+            onScaleUp: failAlwaysScaleUp,
+            expectConcurrentErrors: false,
+            expectedTotalTargetSizes: 3, // first error stops scale up process
+        },
+        {
+            desc: "parallel scale up - two failures",
+            parallel: true,
+            onScaleUp: failAlwaysScaleUp,
+            expectConcurrentErrors: true,
+            expectedTotalTargetSizes: 4,
+        },
+        {
+            desc: "synchronous scale up - one failure",
+            parallel: false,
+            onScaleUp: failOnceScaleUp(),
+            expectConcurrentErrors: false,
+            expectedTotalTargetSizes: 3,
+        },
+        {
+            desc: "parallel scale up - one failure",
+            parallel: true,
+            onScaleUp: failOnceScaleUp(),
+            expectConcurrentErrors: false,
+            expectedTotalTargetSizes: 4,
+        },
+    }
+    for _, tc := range testCases {
+        t.Run(tc.desc, func(t *testing.T) {
+            config.Options.ParallelScaleUp = tc.parallel
+            config.OnScaleUp = tc.onScaleUp
+            result := runSimpleScaleUpTest(t, config)
+            assert.False(t, result.ScaleUpStatus.WasSuccessful())
+            assert.Equal(t, errors.CloudProviderError, result.ScaleUpError.Type())
+            assert.Equal(t, tc.expectedTotalTargetSizes, result.GroupTargetSizes["ng1"]+result.GroupTargetSizes["ng2"])
+            assert.Equal(t, tc.expectConcurrentErrors, strings.Contains(result.ScaleUpError.Error(), "...and other concurrent errors"))
+        })
+    }
+}
+
 func TestScaleUpCapToMaxTotalNodesLimit(t *testing.T) {
     options := defaultOptions
     options.MaxNodesTotal = 3
-    config := &ScaleTestConfig{
+    config := &ScaleUpTestConfig{
         Nodes: []NodeConfig{
             {Name: "n1", Cpu: 2000, Memory: 100 * utils.MiB, Gpu: 0, Ready: true, Group: "ng1"},
             {Name: "n2", Cpu: 4000, Memory: 1000 * utils.MiB, Gpu: 0, Ready: true, Group: "ng2"},
@@ -259,8 +646,8 @@ func TestScaleUpCapToMaxTotalNodesLimit(t *testing.T) {
             {Name: "p-new-2", Cpu: 4000, Memory: 100 * utils.MiB, Gpu: 0, Node: "", ToleratesGpu: false},
             {Name: "p-new-3", Cpu: 4000, Memory: 100 * utils.MiB, Gpu: 0, Node: "", ToleratesGpu: false},
         },
-        ExpansionOptionToChoose: GroupSizeChange{GroupName: "ng2", SizeChange: 3},
-        Options: options,
+        ExpansionOptionToChoose: &GroupSizeChange{GroupName: "ng2", SizeChange: 3},
+        Options: &options,
     }
     results := &ScaleTestResults{
         FinalOption: GroupSizeChange{GroupName: "ng2", SizeChange: 1},
@@ -275,7 +662,7 @@ func TestScaleUpCapToMaxTotalNodesLimit(t *testing.T) {
 func TestScaleUpCapToMaxTotalNodesLimitWithNotAutoscaledGroup(t *testing.T) {
     options := defaultOptions
     options.MaxNodesTotal = 3
-    config := &ScaleTestConfig{
+    config := &ScaleUpTestConfig{
         Nodes: []NodeConfig{
             {Name: "n1", Cpu: 2000, Memory: 100 * utils.MiB, Gpu: 0, Ready: true, Group: ""},
             {Name: "n2", Cpu: 4000, Memory: 1000 * utils.MiB, Gpu: 0, Ready: true, Group: "ng2"},
@@ -289,8 +676,8 @@ func TestScaleUpCapToMaxTotalNodesLimitWithNotAutoscaledGroup(t *testing.T) {
             {Name: "p-new-2", Cpu: 4000, Memory: 100 * utils.MiB, Gpu: 0, Node: "", ToleratesGpu: false},
             {Name: "p-new-3", Cpu: 4000, Memory: 100 * utils.MiB, Gpu: 0, Node: "", ToleratesGpu: false},
         },
-        ExpansionOptionToChoose: GroupSizeChange{GroupName: "ng2", SizeChange: 3},
-        Options: options,
+        ExpansionOptionToChoose: &GroupSizeChange{GroupName: "ng2", SizeChange: 3},
+        Options: &options,
     }
     results := &ScaleTestResults{
         FinalOption: GroupSizeChange{GroupName: "ng2", SizeChange: 1},
@@ -305,7 +692,7 @@ func TestScaleUpCapToMaxTotalNodesLimitWithNotAutoscaledGroup(t *testing.T) {
 func TestWillConsiderGpuAndStandardPoolForPodWhichDoesNotRequireGpu(t *testing.T) {
     options := defaultOptions
     options.MaxNodesTotal = 100
-    config := &ScaleTestConfig{
+    config := &ScaleUpTestConfig{
         Nodes: []NodeConfig{
             {Name: "gpu-node-1", Cpu: 2000, Memory: 1000 * utils.MiB, Gpu: 1, Ready: true, Group: "gpu-pool"},
             {Name: "std-node-1", Cpu: 2000, Memory: 1000 * utils.MiB, Gpu: 0, Ready: true, Group: "std-pool"},
@@ -317,8 +704,8 @@ func TestWillConsiderGpuAndStandardPoolForPodWhichDoesNotRequireGpu(t *testing.T
         ExtraPods: []PodConfig{
             {Name: "extra-std-pod", Cpu: 2000, Memory: 1000 * utils.MiB, Gpu: 0, Node: "", ToleratesGpu: true},
         },
-        ExpansionOptionToChoose: GroupSizeChange{GroupName: "std-pool", SizeChange: 1},
-        Options: options,
+        ExpansionOptionToChoose: &GroupSizeChange{GroupName: "std-pool", SizeChange: 1},
+        Options: &options,
     }
     results := &ScaleTestResults{
         FinalOption: GroupSizeChange{GroupName: "std-pool", SizeChange: 1},
@@ -337,7 +724,7 @@ func TestWillConsiderGpuAndStandardPoolForPodWhichDoesNotRequireGpu(t *testing.T
 func TestWillConsiderOnlyGpuPoolForPodWhichDoesRequiresGpu(t *testing.T) {
     options := defaultOptions
     options.MaxNodesTotal = 100
-    config := &ScaleTestConfig{
+    config := &ScaleUpTestConfig{
         Nodes: []NodeConfig{
             {Name: "gpu-node-1", Cpu: 2000, Memory: 1000 * utils.MiB, Gpu: 1, Ready: true, Group: "gpu-pool"},
             {Name: "std-node-1", Cpu: 2000, Memory: 1000 * utils.MiB, Gpu: 0, Ready: true, Group: "std-pool"},
@@ -349,8 +736,8 @@ func TestWillConsiderOnlyGpuPoolForPodWhichDoesRequiresGpu(t *testing.T) {
         ExtraPods: []PodConfig{
             {Name: "extra-gpu-pod", Cpu: 2000, Memory: 1000 * utils.MiB, Gpu: 1, Node: "", ToleratesGpu: true},
         },
-        ExpansionOptionToChoose: GroupSizeChange{GroupName: "gpu-pool", SizeChange: 1},
-        Options: options,
+        ExpansionOptionToChoose: &GroupSizeChange{GroupName: "gpu-pool", SizeChange: 1},
+        Options: &options,
     }
     results := &ScaleTestResults{
         FinalOption: GroupSizeChange{GroupName: "gpu-pool", SizeChange: 1},
@@ -368,7 +755,7 @@ func TestWillConsiderOnlyGpuPoolForPodWhichDoesRequiresGpu(t *testing.T) {
 func TestWillConsiderAllPoolsWhichFitTwoPodsRequiringGpus(t *testing.T) {
     options := defaultOptions
     options.MaxNodesTotal = 100
-    config := &ScaleTestConfig{
+    config := &ScaleUpTestConfig{
         Nodes: []NodeConfig{
             {Name: "gpu-1-node-1", Cpu: 2000, Memory: 1000 * utils.MiB, Gpu: 1, Ready: true, Group: "gpu-1-pool"},
             {Name: "gpu-2-node-1", Cpu: 2000, Memory: 1000 * utils.MiB, Gpu: 2, Ready: true, Group: "gpu-2-pool"},
@@ -386,8 +773,8 @@ func TestWillConsiderAllPoolsWhichFitTwoPodsRequiringGpus(t *testing.T) {
             {Name: "extra-gpu-pod-2", Cpu: 1, Memory: 1 * utils.MiB, Gpu: 1, Node: "", ToleratesGpu: true}, // CPU and mem negligible
             {Name: "extra-gpu-pod-3", Cpu: 1, Memory: 1 * utils.MiB, Gpu: 1, Node: "", ToleratesGpu: true}, // CPU and mem negligible
         },
-        ExpansionOptionToChoose: GroupSizeChange{GroupName: "gpu-1-pool", SizeChange: 3},
-        Options: options,
+        ExpansionOptionToChoose: &GroupSizeChange{GroupName: "gpu-1-pool", SizeChange: 3},
+        Options: &options,
     }
     results := &ScaleTestResults{
         FinalOption: GroupSizeChange{GroupName: "gpu-1-pool", SizeChange: 3},
@@ -409,7 +796,7 @@ func TestNoScaleUpMaxCoresLimitHit(t *testing.T) {
     options := defaultOptions
     options.MaxCoresTotal = 7
     options.MaxMemoryTotal = 1150
-    config := &ScaleTestConfig{
+    config := &ScaleUpTestConfig{
         Nodes: []NodeConfig{
             {Name: "n1", Cpu: 2000, Memory: 100, Gpu: 0, Ready: true, Group: "ng1"},
             {Name: "n2", Cpu: 4000, Memory: 1000, Gpu: 0, Ready: true, Group: "ng2"},
@@ -422,7 +809,7 @@ func TestNoScaleUpMaxCoresLimitHit(t *testing.T) {
             {Name: "p-new-1", Cpu: 2000, Memory: 0, Gpu: 0, Node: "", ToleratesGpu: false},
             {Name: "p-new-2", Cpu: 2000, Memory: 0, Gpu: 0, Node: "", ToleratesGpu: false},
         },
-        Options: options,
+        Options: &options,
     }
     results := &ScaleTestResults{
         NoScaleUpReason: "max cluster cpu, memory limit reached",
@@ -434,152 +821,46 @@ func TestNoScaleUpMaxCoresLimitHit(t *testing.T) {
     simpleNoScaleUpTest(t, config, results)
 }

-// To implement expander.Strategy, BestOption method must have a struct receiver.
-// This prevents it from modifying fields of reportingStrategy, so we need a thin
-// pointer wrapper for mutable parts.
-type expanderResults struct { - inputOptions []GroupSizeChange -} - -type reportingStrategy struct { - initialNodeConfigs []NodeConfig - optionToChoose GroupSizeChange - results *expanderResults - t *testing.T -} - -func (r reportingStrategy) BestOption(options []expander.Option, nodeInfo map[string]*schedulerframework.NodeInfo) *expander.Option { - r.results.inputOptions = expanderOptionsToGroupSizeChanges(options) - for _, option := range options { - GroupSizeChange := expanderOptionToGroupSizeChange(option) - if GroupSizeChange == r.optionToChoose { - return &option - } - } - assert.Fail(r.t, "did not find expansionOptionToChoose %s", r.optionToChoose) - return nil -} - -func expanderOptionsToGroupSizeChanges(options []expander.Option) []GroupSizeChange { - groupSizeChanges := make([]GroupSizeChange, 0, len(options)) - for _, option := range options { - GroupSizeChange := expanderOptionToGroupSizeChange(option) - groupSizeChanges = append(groupSizeChanges, GroupSizeChange) - } - return groupSizeChanges -} - -func expanderOptionToGroupSizeChange(option expander.Option) GroupSizeChange { - groupName := option.NodeGroup.Id() - groupSizeIncrement := option.NodeCount - scaleUpOption := GroupSizeChange{GroupName: groupName, SizeChange: groupSizeIncrement} - return scaleUpOption -} - -func runSimpleScaleUpTest(t *testing.T, config *ScaleTestConfig) *ScaleTestResults { - expandedGroups := make(chan GroupSizeChange, 10) - now := time.Now() - - groups := make(map[string][]*apiv1.Node) - nodes := make([]*apiv1.Node, 0, len(config.Nodes)) - for _, n := range config.Nodes { - node := BuildTestNode(n.Name, n.Cpu, n.Memory) - if n.Gpu > 0 { - AddGpusToNode(node, n.Gpu) - } - SetNodeReadyState(node, n.Ready, now.Add(-2*time.Minute)) - nodes = append(nodes, node) - if n.Group != "" { - groups[n.Group] = append(groups[n.Group], node) +func simpleScaleUpTest(t *testing.T, config *ScaleUpTestConfig, expectedResults *ScaleTestResults) { + results := runSimpleScaleUpTest(t, config) 
+ assert.NotNil(t, results.GroupSizeChanges[0], "Expected scale up event") + assert.Equal(t, expectedResults.FinalOption, results.GroupSizeChanges[0]) + assert.True(t, results.ScaleUpStatus.WasSuccessful()) + nodeEventSeen := false + for _, event := range results.Events { + if strings.Contains(event, "TriggeredScaleUp") && strings.Contains(event, expectedResults.FinalOption.GroupName) { + nodeEventSeen = true } - } - - pods := make([]*apiv1.Pod, 0, len(config.Pods)) - for _, p := range config.Pods { - pod := buildTestPod(p) - pods = append(pods, pod) - } - - podLister := kube_util.NewTestPodLister(pods) - listers := kube_util.NewListerRegistry(nil, nil, podLister, nil, nil, nil, nil, nil, nil, nil) - - provider := testprovider.NewTestCloudProvider(func(nodeGroup string, increase int) error { - expandedGroups <- GroupSizeChange{GroupName: nodeGroup, SizeChange: increase} - return nil - }, nil) - - for name, nodesInGroup := range groups { - provider.AddNodeGroup(name, 1, 10, len(nodesInGroup)) - for _, n := range nodesInGroup { - provider.AddNode(name, n) + if len(expectedResults.ScaleUpStatus.PodsRemainUnschedulable) == 0 { + assert.NotRegexp(t, regexp.MustCompile("NotTriggerScaleUp"), event) } } - - resourceLimiter := cloudprovider.NewResourceLimiter( - map[string]int64{cloudprovider.ResourceNameCores: config.Options.MinCoresTotal, cloudprovider.ResourceNameMemory: config.Options.MinMemoryTotal}, - map[string]int64{cloudprovider.ResourceNameCores: config.Options.MaxCoresTotal, cloudprovider.ResourceNameMemory: config.Options.MaxMemoryTotal}) - provider.SetResourceLimiter(resourceLimiter) - - assert.NotNil(t, provider) - - // Create context with non-random expander strategy. 
- context, err := NewScaleTestAutoscalingContext(config.Options, &fake.Clientset{}, listers, provider, nil, nil) - assert.NoError(t, err) - - expander := reportingStrategy{ - initialNodeConfigs: config.Nodes, - optionToChoose: config.ExpansionOptionToChoose, - results: &expanderResults{}, - t: t, - } - context.ExpanderStrategy = expander - - nodeInfos, _ := nodeinfosprovider.NewDefaultTemplateNodeInfoProvider(nil, false).Process(&context, nodes, []*appsv1.DaemonSet{}, taints.TaintConfig{}, now) - clusterState := clusterstate.NewClusterStateRegistry(provider, clusterstate.ClusterStateRegistryConfig{}, context.LogRecorder, NewBackoff(), clusterstate.NewStaticMaxNodeProvisionTimeProvider(15*time.Minute)) - clusterState.UpdateNodes(nodes, nodeInfos, time.Now()) - - extraPods := make([]*apiv1.Pod, 0, len(config.ExtraPods)) - for _, p := range config.ExtraPods { - pod := buildTestPod(p) - extraPods = append(extraPods, pod) - } - - processors := NewTestProcessors(&context) - suOrchestrator := New() - suOrchestrator.Initialize(&context, processors, clusterState, taints.TaintConfig{}) - scaleUpStatus, err := suOrchestrator.ScaleUp(extraPods, nodes, []*appsv1.DaemonSet{}, nodeInfos) - processors.ScaleUpStatusProcessor.Process(&context, scaleUpStatus) - - assert.NoError(t, err) - - expandedGroup := getGroupSizeChangeFromChan(expandedGroups) - var expandedGroupStruct GroupSizeChange - if expandedGroup != nil { - expandedGroupStruct = *expandedGroup - } - - events := []string{} - for eventsLeft := true; eventsLeft; { - select { - case event := <-context.Recorder.(*kube_record.FakeRecorder).Events: - events = append(events, event) - default: - eventsLeft = false + assert.True(t, nodeEventSeen) + if len(expectedResults.ExpansionOptions) > 0 { + // Empty ExpansionOptions means we do not want to do any assertions + // on contents of actual scaleUp options + if config.ExpansionOptionToChoose != nil { + // Check that option to choose is part of expected options. 
+ assert.Contains(t, expectedResults.ExpansionOptions, *config.ExpansionOptionToChoose, "final expected expansion option must be in expected expansion options") + assert.Contains(t, results.ExpansionOptions, *config.ExpansionOptionToChoose, "final expected expansion option must be in expected expansion options") } + assert.ElementsMatch(t, results.ExpansionOptions, expectedResults.ExpansionOptions, + "actual and expected expansion options should be the same") } - - return &ScaleTestResults{ - ExpansionOptions: expander.results.inputOptions, - FinalOption: expandedGroupStruct, - ScaleUpStatus: simplifyScaleUpStatus(scaleUpStatus), - Events: events, + if expectedResults.GroupTargetSizes != nil { + assert.Equal(t, expectedResults.GroupTargetSizes, results.GroupTargetSizes) } + assert.ElementsMatch(t, results.ScaleUpStatus.PodsTriggeredScaleUp, expectedResults.ScaleUpStatus.PodsTriggeredScaleUp, + "actual and expected triggering pods should be the same") + assert.ElementsMatch(t, results.ScaleUpStatus.PodsRemainUnschedulable, expectedResults.ScaleUpStatus.PodsRemainUnschedulable, + "actual and expected remaining pods should be the same") + assert.ElementsMatch(t, results.ScaleUpStatus.PodsAwaitEvaluation, expectedResults.ScaleUpStatus.PodsAwaitEvaluation, + "actual and expected awaiting evaluation pods should be the same") } -func simpleNoScaleUpTest(t *testing.T, config *ScaleTestConfig, expectedResults *ScaleTestResults) { +func simpleNoScaleUpTest(t *testing.T, config *ScaleUpTestConfig, expectedResults *ScaleTestResults) { results := runSimpleScaleUpTest(t, config) - - assert.Equal(t, GroupSizeChange{}, results.FinalOption) + assert.Nil(t, results.GroupSizeChanges) assert.False(t, results.ScaleUpStatus.WasSuccessful()) noScaleUpEventSeen := false for _, event := range results.Events { @@ -602,50 +883,143 @@ func simpleNoScaleUpTest(t *testing.T, config *ScaleTestConfig, expectedResults "actual and expected awaiting evaluation pods should be the same") } -func 
simpleScaleUpTest(t *testing.T, config *ScaleTestConfig, expectedResults *ScaleTestResults) { - results := runSimpleScaleUpTest(t, config) +func runSimpleScaleUpTest(t *testing.T, config *ScaleUpTestConfig) *ScaleUpTestResult { + now := time.Now() + groupSizeChangesChannel := make(chan GroupSizeChange, 20) + groupNodes := make(map[string][]*apiv1.Node) - assert.NotNil(t, results.FinalOption, "Expected scale up event") - assert.Equal(t, expectedResults.FinalOption, results.FinalOption) - assert.True(t, results.ScaleUpStatus.WasSuccessful()) - nodeEventSeen := false - for _, event := range results.Events { - if strings.Contains(event, "TriggeredScaleUp") && strings.Contains(event, expectedResults.FinalOption.GroupName) { - nodeEventSeen = true + // build nodes + nodes := make([]*apiv1.Node, 0, len(config.Nodes)) + for _, n := range config.Nodes { + node := buildTestNode(n, now) + nodes = append(nodes, node) + if n.Group != "" { + groupNodes[n.Group] = append(groupNodes[n.Group], node) } - if len(expectedResults.ScaleUpStatus.PodsRemainUnschedulable) == 0 { - assert.NotRegexp(t, regexp.MustCompile("NotTriggerScaleUp"), event) + } + + // build and setup pods + pods := make([]*apiv1.Pod, len(config.Pods)) + for i, p := range config.Pods { + pods[i] = buildTestPod(p) + } + extraPods := make([]*apiv1.Pod, len(config.ExtraPods)) + for i, p := range config.ExtraPods { + extraPods[i] = buildTestPod(p) + } + podLister := kube_util.NewTestPodLister(pods) + listers := kube_util.NewListerRegistry(nil, nil, podLister, nil, nil, nil, nil, nil, nil, nil) + + // setup node groups + var provider *testprovider.TestCloudProvider + onScaleUpFunc := func(nodeGroup string, increase int) error { + groupSizeChangesChannel <- GroupSizeChange{GroupName: nodeGroup, SizeChange: increase} + if config.OnScaleUp != nil { + return config.OnScaleUp(nodeGroup, increase) + } + return nil + } + if len(config.NodeTemplateConfigs) > 0 { + machineTypes := []string{} + machineTemplates := 
map[string]*schedulerframework.NodeInfo{} + for _, ntc := range config.NodeTemplateConfigs { + machineTypes = append(machineTypes, ntc.MachineType) + machineTemplates[ntc.NodeGroupName] = ntc.NodeInfo + } + provider = testprovider.NewTestAutoprovisioningCloudProvider(onScaleUpFunc, nil, nil, nil, machineTypes, machineTemplates) + } else { + provider = testprovider.NewTestCloudProvider(onScaleUpFunc, nil) + } + options := defaultOptions + if config.Options != nil { + options = *config.Options + } + resourceLimiter := cloudprovider.NewResourceLimiter( + map[string]int64{cloudprovider.ResourceNameCores: options.MinCoresTotal, cloudprovider.ResourceNameMemory: options.MinMemoryTotal}, + map[string]int64{cloudprovider.ResourceNameCores: options.MaxCoresTotal, cloudprovider.ResourceNameMemory: options.MaxMemoryTotal}) + provider.SetResourceLimiter(resourceLimiter) + groupConfigs := make(map[string]*NodeGroupConfig) + for _, group := range config.Groups { + groupConfigs[group.Name] = &group + } + for name, nodesInGroup := range groupNodes { + groupConfig := groupConfigs[name] + if groupConfig == nil { + groupConfig = &NodeGroupConfig{ + Name: name, + MinSize: 1, + MaxSize: 10, + } + } + provider.AddNodeGroup(name, groupConfig.MinSize, groupConfig.MaxSize, len(nodesInGroup)) + for _, n := range nodesInGroup { + provider.AddNode(name, n) } } - assert.True(t, nodeEventSeen) - if len(expectedResults.ExpansionOptions) > 0 { - // Empty ExpansionOptions means we do not want to do any assertions - // on contents of actual scaleUp options + // Build node groups without any nodes + for name, ng := range groupConfigs { + if provider.GetNodeGroup(name) == nil { + tng := provider.BuildNodeGroup(name, ng.MinSize, ng.MaxSize, 0, false, config.NodeTemplateConfigs[name].MachineType, &options.NodeGroupDefaults) + provider.InsertNodeGroup(tng) + } + } + // build orchestrator + context, err := NewScaleTestAutoscalingContext(options, &fake.Clientset{}, listers, provider, nil, nil) + 
assert.NoError(t, err) + nodeInfos, err := nodeinfosprovider.NewDefaultTemplateNodeInfoProvider(nil, false). + Process(&context, nodes, []*appsv1.DaemonSet{}, taints.TaintConfig{}, now) + assert.NoError(t, err) + clusterState := clusterstate.NewClusterStateRegistry(provider, clusterstate.ClusterStateRegistryConfig{}, context.LogRecorder, NewBackoff(), nodegroupconfig.NewDefaultNodeGroupConfigProcessor(options.NodeGroupDefaults)) + clusterState.UpdateNodes(nodes, nodeInfos, time.Now()) + processors := NewTestProcessors(&context) + orchestrator := New() + orchestrator.Initialize(&context, processors, clusterState, taints.TaintConfig{}) + expander := NewMockRepotingStrategy(t, config.ExpansionOptionToChoose) + context.ExpanderStrategy = expander - // Check that option to choose is part of expected options. - assert.Contains(t, expectedResults.ExpansionOptions, config.ExpansionOptionToChoose, "final expected expansion option must be in expected expansion options") - assert.Contains(t, results.ExpansionOptions, config.ExpansionOptionToChoose, "final expected expansion option must be in expected expansion options") + // scale up + scaleUpStatus, scaleUpErr := orchestrator.ScaleUp(extraPods, nodes, []*appsv1.DaemonSet{}, nodeInfos) + processors.ScaleUpStatusProcessor.Process(&context, scaleUpStatus) - assert.ElementsMatch(t, results.ExpansionOptions, expectedResults.ExpansionOptions, - "actual and expected expansion options should be the same") + // aggregate group size changes + close(groupSizeChangesChannel) + var groupSizeChanges []GroupSizeChange + for change := range groupSizeChangesChannel { + groupSizeChanges = append(groupSizeChanges, change) } - assert.ElementsMatch(t, results.ScaleUpStatus.PodsTriggeredScaleUp, expectedResults.ScaleUpStatus.PodsTriggeredScaleUp, - "actual and expected triggering pods should be the same") - assert.ElementsMatch(t, results.ScaleUpStatus.PodsRemainUnschedulable, expectedResults.ScaleUpStatus.PodsRemainUnschedulable, - "actual and 
expected remaining pods should be the same") - assert.ElementsMatch(t, results.ScaleUpStatus.PodsAwaitEvaluation, expectedResults.ScaleUpStatus.PodsAwaitEvaluation, - "actual and expected awaiting evaluation pods should be the same") + // aggregate events + eventsChannel := context.Recorder.(*kube_record.FakeRecorder).Events + close(eventsChannel) + events := []string{} + for event := range eventsChannel { + events = append(events, event) + } + + // build target sizes + targetSizes := make(map[string]int) + for _, group := range provider.NodeGroups() { + targetSizes[group.Id()], _ = group.TargetSize() + } + + return &ScaleUpTestResult{ + ScaleUpError: scaleUpErr, + ScaleUpStatus: simplifyScaleUpStatus(scaleUpStatus), + GroupSizeChanges: groupSizeChanges, + Events: events, + GroupTargetSizes: targetSizes, + ExpansionOptions: expander.LastInputOptions(), + } } -func getGroupSizeChangeFromChan(c chan GroupSizeChange) *GroupSizeChange { - select { - case val := <-c: - return &val - case <-time.After(100 * time.Millisecond): - return nil +func buildTestNode(n NodeConfig, now time.Time) *apiv1.Node { + node := BuildTestNode(n.Name, n.Cpu, n.Memory) + if n.Gpu > 0 { + AddGpusToNode(node, n.Gpu) } + SetNodeReadyState(node, n.Ready, now.Add(-2*time.Minute)) + return node } func buildTestPod(p PodConfig) *apiv1.Pod { @@ -697,7 +1071,7 @@ func TestScaleUpUnhealthy(t *testing.T) { nodes := []*apiv1.Node{n1, n2} nodeInfos, _ := nodeinfosprovider.NewDefaultTemplateNodeInfoProvider(nil, false).Process(&context, nodes, []*appsv1.DaemonSet{}, taints.TaintConfig{}, now) - clusterState := clusterstate.NewClusterStateRegistry(provider, clusterstate.ClusterStateRegistryConfig{}, context.LogRecorder, NewBackoff(), clusterstate.NewStaticMaxNodeProvisionTimeProvider(15*time.Minute)) + clusterState := clusterstate.NewClusterStateRegistry(provider, clusterstate.ClusterStateRegistryConfig{}, context.LogRecorder, NewBackoff(), 
nodegroupconfig.NewDefaultNodeGroupConfigProcessor(config.NodeGroupAutoscalingOptions{MaxNodeProvisionTime: 15 * time.Minute})) clusterState.UpdateNodes(nodes, nodeInfos, time.Now()) p3 := BuildTestPod("p-new", 550, 0) @@ -711,6 +1085,63 @@ func TestScaleUpUnhealthy(t *testing.T) { assert.False(t, scaleUpStatus.WasSuccessful()) } +func TestBinpackingLimiter(t *testing.T) { + n1 := BuildTestNode("n1", 1000, 1000) + n2 := BuildTestNode("n2", 100000, 100000) + now := time.Now() + + SetNodeReadyState(n1, true, now.Add(-2*time.Minute)) + SetNodeReadyState(n2, true, now.Add(-2*time.Minute)) + + nodes := []*apiv1.Node{n1, n2} + + podLister := kube_util.NewTestPodLister([]*apiv1.Pod{}) + listers := kube_util.NewListerRegistry(nil, nil, podLister, nil, nil, nil, nil, nil, nil, nil) + + provider := testprovider.NewTestCloudProvider(func(nodeGroup string, increase int) error { + return nil + }, nil) + + options := defaultOptions + provider.AddNodeGroup("ng1", 1, 10, 1) + provider.AddNode("ng1", n1) + provider.AddNodeGroup("ng2", 1, 10, 1) + provider.AddNode("ng2", n2) + assert.NotNil(t, provider) + + context, err := NewScaleTestAutoscalingContext(options, &fake.Clientset{}, listers, provider, nil, nil) + assert.NoError(t, err) + + nodeInfos, err := nodeinfosprovider.NewDefaultTemplateNodeInfoProvider(nil, false). + Process(&context, nodes, []*appsv1.DaemonSet{}, taints.TaintConfig{}, now) + assert.NoError(t, err) + + clusterState := clusterstate.NewClusterStateRegistry(provider, clusterstate.ClusterStateRegistryConfig{}, context.LogRecorder, NewBackoff(), nodegroupconfig.NewDefaultNodeGroupConfigProcessor(config.NodeGroupAutoscalingOptions{MaxNodeProvisionTime: 15 * time.Minute})) + clusterState.UpdateNodes(nodes, nodeInfos, time.Now()) + + extraPod := BuildTestPod("p-new", 500, 0) + + processors := NewTestProcessors(&context) + + // We should stop binpacking after finding expansion option from first node group. 
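The test above exercises the new BinpackingLimiter processor: a mock limiter that tells the estimator to stop evaluating further node groups once the first expansion option is found. A simplified standalone sketch of the idea, with illustrative names rather than the autoscaler's actual interface:

```go
package main

import "fmt"

// Limiter decides whether binpacking should keep evaluating node groups.
type Limiter interface {
	StopBinpacking(optionsFound int) bool
}

// noOpLimiter never stops early, mirroring the default no-op behavior.
type noOpLimiter struct{}

func (noOpLimiter) StopBinpacking(int) bool { return false }

// firstOptionLimiter stops as soon as one viable expansion option exists,
// like the mock limiter used in the test.
type firstOptionLimiter struct{}

func (firstOptionLimiter) StopBinpacking(found int) bool { return found >= 1 }

// evaluate walks the node groups, collecting options until the limiter stops it.
func evaluate(groups []string, l Limiter) []string {
	var options []string
	for _, g := range groups {
		options = append(options, g) // pretend every group yields an option
		if l.StopBinpacking(len(options)) {
			break
		}
	}
	return options
}

func main() {
	groups := []string{"ng1", "ng2"}
	fmt.Println(len(evaluate(groups, noOpLimiter{})))        // 2
	fmt.Println(len(evaluate(groups, firstOptionLimiter{}))) // 1
}
```

This matches the test's final assertion: with the limiter installed, only one expansion option reaches the expander instead of two.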
+ processors.BinpackingLimiter = &MockBinpackingLimiter{} + + suOrchestrator := New() + suOrchestrator.Initialize(&context, processors, clusterState, taints.TaintConfig{}) + + expander := NewMockRepotingStrategy(t, nil) + context.ExpanderStrategy = expander + + scaleUpStatus, err := suOrchestrator.ScaleUp([]*apiv1.Pod{extraPod}, nodes, []*appsv1.DaemonSet{}, nodeInfos) + processors.ScaleUpStatusProcessor.Process(&context, scaleUpStatus) + assert.NoError(t, err) + assert.True(t, scaleUpStatus.WasSuccessful()) + + expansionOptions := expander.LastInputOptions() + // Only 1 expansion option should be there. Without BinpackingLimiter there will be 2. + assert.True(t, len(expansionOptions) == 1) +} + func TestScaleUpNoHelp(t *testing.T) { n1 := BuildTestNode("n1", 100, 1000) now := time.Now() @@ -740,7 +1171,7 @@ func TestScaleUpNoHelp(t *testing.T) { nodes := []*apiv1.Node{n1} nodeInfos, _ := nodeinfosprovider.NewDefaultTemplateNodeInfoProvider(nil, false).Process(&context, nodes, []*appsv1.DaemonSet{}, taints.TaintConfig{}, now) - clusterState := clusterstate.NewClusterStateRegistry(provider, clusterstate.ClusterStateRegistryConfig{}, context.LogRecorder, NewBackoff(), clusterstate.NewStaticMaxNodeProvisionTimeProvider(15*time.Minute)) + clusterState := clusterstate.NewClusterStateRegistry(provider, clusterstate.ClusterStateRegistryConfig{}, context.LogRecorder, NewBackoff(), nodegroupconfig.NewDefaultNodeGroupConfigProcessor(config.NodeGroupAutoscalingOptions{MaxNodeProvisionTime: 15 * time.Minute})) clusterState.UpdateNodes(nodes, nodeInfos, time.Now()) p3 := BuildTestPod("p-new", 500, 0) @@ -761,6 +1192,155 @@ func TestScaleUpNoHelp(t *testing.T) { assert.Regexp(t, regexp.MustCompile("NotTriggerScaleUp"), event) } +type constNodeGroupSetProcessor struct { + similarNodeGroups []cloudprovider.NodeGroup +} + +func (p *constNodeGroupSetProcessor) FindSimilarNodeGroups(_ *context.AutoscalingContext, _ cloudprovider.NodeGroup, _ map[string]*schedulerframework.NodeInfo) 
([]cloudprovider.NodeGroup, errors.AutoscalerError) { + return p.similarNodeGroups, nil +} + +func (p *constNodeGroupSetProcessor) BalanceScaleUpBetweenGroups(_ *context.AutoscalingContext, _ []cloudprovider.NodeGroup, _ int) ([]nodegroupset.ScaleUpInfo, errors.AutoscalerError) { + return nil, nil +} + +func (p *constNodeGroupSetProcessor) CleanUp() {} + +func TestComputeSimilarNodeGroups(t *testing.T) { + pod1 := BuildTestPod("p1", 100, 1000) + pod2 := BuildTestPod("p2", 100, 1000) + pod3 := BuildTestPod("p3", 100, 1000) + + testCases := []struct { + name string + nodeGroup string + similarNodeGroups []string + otherNodeGroups []string + balancingEnabled bool + schedulablePods map[string][]*apiv1.Pod + wantSimilarNodeGroups []string + }{ + { + name: "no similar node groups", + nodeGroup: "ng1", + otherNodeGroups: []string{"pg1", "pg2"}, + balancingEnabled: true, + wantSimilarNodeGroups: []string{}, + }, + { + name: "some similar node groups, but no schedulable pods", + nodeGroup: "ng1", + similarNodeGroups: []string{"ng2", "ng3"}, + otherNodeGroups: []string{"pg1", "pg2"}, + balancingEnabled: true, + wantSimilarNodeGroups: []string{}, + }, + { + name: "some similar node groups and same schedulable pods, but balancing disabled", + nodeGroup: "ng1", + similarNodeGroups: []string{"ng2", "ng3"}, + otherNodeGroups: []string{"pg1", "pg2"}, + balancingEnabled: false, + schedulablePods: map[string][]*apiv1.Pod{ + "ng1": {pod1}, + "ng2": {pod1}, + "ng3": {pod1}, + "pg1": {pod1}, + "pg2": {pod1}, + }, + wantSimilarNodeGroups: []string{}, + }, + { + name: "some similar node groups and same schedulable pods", + nodeGroup: "ng1", + similarNodeGroups: []string{"ng2", "ng3"}, + otherNodeGroups: []string{"pg1", "pg2"}, + balancingEnabled: true, + schedulablePods: map[string][]*apiv1.Pod{ + "ng1": {pod1}, + "ng2": {pod1}, + "ng3": {pod1}, + "pg1": {pod1}, + "pg2": {pod1}, + }, + wantSimilarNodeGroups: []string{"ng2", "ng3"}, + }, + { + name: "similar node groups can schedule more 
pods", + nodeGroup: "ng1", + similarNodeGroups: []string{"ng2", "ng3"}, + otherNodeGroups: []string{"pg1", "pg2"}, + balancingEnabled: true, + schedulablePods: map[string][]*apiv1.Pod{ + "ng1": {pod1}, + "ng2": {pod1, pod2}, + "ng3": {pod1, pod2, pod3}, + "pg1": {pod1, pod2}, + "pg2": {pod1, pod2, pod3}, + }, + wantSimilarNodeGroups: []string{"ng2", "ng3"}, + }, + { + name: "similar node groups can schedule different/no pods", + nodeGroup: "ng1", + similarNodeGroups: []string{"ng2", "ng3"}, + otherNodeGroups: []string{"pg1", "pg2"}, + balancingEnabled: true, + schedulablePods: map[string][]*apiv1.Pod{ + "ng1": {pod1, pod2}, + "ng2": {pod1}, + "pg1": {pod1}, + }, + wantSimilarNodeGroups: []string{}, + }, + } + + for _, tc := range testCases { + t.Run(tc.name, func(t *testing.T) { + provider := testprovider.NewTestCloudProvider(func(string, int) error { return nil }, nil) + nodeGroupSetProcessor := &constNodeGroupSetProcessor{} + now := time.Now() + + allNodeGroups := []string{tc.nodeGroup} + allNodeGroups = append(allNodeGroups, tc.similarNodeGroups...) + allNodeGroups = append(allNodeGroups, tc.otherNodeGroups...) 
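The table-driven cases above encode the rule that `ComputeSimilarNodeGroups` applies: with balancing enabled, a candidate group counts as similar only if it can schedule at least the pods the original group can (a superset qualifies, missing pods disqualify), and with balancing disabled no group is similar. A simplified sketch of that filter, with made-up names and pods reduced to strings:

```go
package main

import "fmt"

// covers reports whether candidate can schedule every pod that base can.
func covers(base, candidate []string) bool {
	set := make(map[string]bool, len(candidate))
	for _, p := range candidate {
		set[p] = true
	}
	for _, p := range base {
		if !set[p] {
			return false
		}
	}
	return true
}

// filterSimilar keeps only candidates that cover the base group's pods,
// and returns nothing when balancing is disabled.
func filterSimilar(base []string, candidates map[string][]string, balancing bool) []string {
	if !balancing {
		return nil
	}
	var similar []string
	for name, pods := range candidates {
		if covers(base, pods) {
			similar = append(similar, name)
		}
	}
	return similar
}

func main() {
	base := []string{"p1"}
	candidates := map[string][]string{
		"ng2": {"p1", "p2"}, // schedules more pods: still similar
		"ng3": {},           // cannot schedule p1: not similar
	}
	fmt.Println(filterSimilar(base, candidates, true)) // [ng2]
}
```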
+ + var nodes []*apiv1.Node + for _, ng := range allNodeGroups { + nodeName := fmt.Sprintf("%s-node", ng) + node := BuildTestNode(nodeName, 100, 1000) + SetNodeReadyState(node, true, now.Add(-2*time.Minute)) + nodes = append(nodes, node) + + provider.AddNodeGroup(ng, 0, 10, 1) + provider.AddNode(ng, node) + } + + for _, ng := range tc.similarNodeGroups { + nodeGroupSetProcessor.similarNodeGroups = append(nodeGroupSetProcessor.similarNodeGroups, provider.GetNodeGroup(ng)) + } + + listers := kube_util.NewListerRegistry(nil, nil, kube_util.NewTestPodLister(nil), nil, nil, nil, nil, nil, nil, nil) + ctx, err := NewScaleTestAutoscalingContext(config.AutoscalingOptions{BalanceSimilarNodeGroups: tc.balancingEnabled}, &fake.Clientset{}, listers, provider, nil, nil) + assert.NoError(t, err) + + nodeInfos, _ := nodeinfosprovider.NewDefaultTemplateNodeInfoProvider(nil, false).Process(&ctx, nodes, []*appsv1.DaemonSet{}, taints.TaintConfig{}, now) + clusterState := clusterstate.NewClusterStateRegistry(provider, clusterstate.ClusterStateRegistryConfig{}, ctx.LogRecorder, NewBackoff(), nodegroupconfig.NewDefaultNodeGroupConfigProcessor(config.NodeGroupAutoscalingOptions{MaxNodeProvisionTime: 15 * time.Minute})) + assert.NoError(t, clusterState.UpdateNodes(nodes, nodeInfos, time.Now())) + + suOrchestrator := &ScaleUpOrchestrator{} + suOrchestrator.Initialize(&ctx, &processors.AutoscalingProcessors{NodeGroupSetProcessor: nodeGroupSetProcessor}, clusterState, taints.TaintConfig{}) + similarNodeGroups := suOrchestrator.ComputeSimilarNodeGroups(provider.GetNodeGroup(tc.nodeGroup), nodeInfos, tc.schedulablePods, now) + + var gotSimilarNodeGroups []string + for _, ng := range similarNodeGroups { + gotSimilarNodeGroups = append(gotSimilarNodeGroups, ng.Id()) + } + assert.ElementsMatch(t, gotSimilarNodeGroups, tc.wantSimilarNodeGroups) + }) + } +} + func TestScaleUpBalanceGroups(t *testing.T) { provider := testprovider.NewTestCloudProvider(func(string, int) error { return nil @@ -809,7 
+1389,7 @@ func TestScaleUpBalanceGroups(t *testing.T) { assert.NoError(t, err) nodeInfos, _ := nodeinfosprovider.NewDefaultTemplateNodeInfoProvider(nil, false).Process(&context, nodes, []*appsv1.DaemonSet{}, taints.TaintConfig{}, now) - clusterState := clusterstate.NewClusterStateRegistry(provider, clusterstate.ClusterStateRegistryConfig{}, context.LogRecorder, NewBackoff(), clusterstate.NewStaticMaxNodeProvisionTimeProvider(15*time.Minute)) + clusterState := clusterstate.NewClusterStateRegistry(provider, clusterstate.ClusterStateRegistryConfig{}, context.LogRecorder, NewBackoff(), nodegroupconfig.NewDefaultNodeGroupConfigProcessor(config.NodeGroupAutoscalingOptions{MaxNodeProvisionTime: 15 * time.Minute})) clusterState.UpdateNodes(nodes, nodeInfos, time.Now()) pods := make([]*apiv1.Pod, 0) @@ -871,7 +1451,7 @@ func TestScaleUpAutoprovisionedNodeGroup(t *testing.T) { context, err := NewScaleTestAutoscalingContext(options, fakeClient, listers, provider, nil, nil) assert.NoError(t, err) - clusterState := clusterstate.NewClusterStateRegistry(provider, clusterstate.ClusterStateRegistryConfig{}, context.LogRecorder, NewBackoff(), clusterstate.NewStaticMaxNodeProvisionTimeProvider(15*time.Minute)) + clusterState := clusterstate.NewClusterStateRegistry(provider, clusterstate.ClusterStateRegistryConfig{}, context.LogRecorder, NewBackoff(), nodegroupconfig.NewDefaultNodeGroupConfigProcessor(config.NodeGroupAutoscalingOptions{MaxNodeProvisionTime: 15 * time.Minute})) processors := NewTestProcessors(&context) processors.NodeGroupListProcessor = &MockAutoprovisioningNodeGroupListProcessor{T: t} @@ -926,7 +1506,7 @@ func TestScaleUpBalanceAutoprovisionedNodeGroups(t *testing.T) { context, err := NewScaleTestAutoscalingContext(options, fakeClient, listers, provider, nil, nil) assert.NoError(t, err) - clusterState := clusterstate.NewClusterStateRegistry(provider, clusterstate.ClusterStateRegistryConfig{}, context.LogRecorder, NewBackoff(), 
clusterstate.NewStaticMaxNodeProvisionTimeProvider(15*time.Minute)) + clusterState := clusterstate.NewClusterStateRegistry(provider, clusterstate.ClusterStateRegistryConfig{}, context.LogRecorder, NewBackoff(), nodegroupconfig.NewDefaultNodeGroupConfigProcessor(config.NodeGroupAutoscalingOptions{MaxNodeProvisionTime: 15 * time.Minute})) processors := NewTestProcessors(&context) processors.NodeGroupListProcessor = &MockAutoprovisioningNodeGroupListProcessor{T: t} @@ -987,7 +1567,7 @@ func TestScaleUpToMeetNodeGroupMinSize(t *testing.T) { nodes := []*apiv1.Node{n1, n2} nodeInfos, _ := nodeinfosprovider.NewDefaultTemplateNodeInfoProvider(nil, false).Process(&context, nodes, []*appsv1.DaemonSet{}, taints.TaintConfig{}, time.Now()) processors := NewTestProcessors(&context) - clusterState := clusterstate.NewClusterStateRegistry(provider, clusterstate.ClusterStateRegistryConfig{}, context.LogRecorder, NewBackoff(), clusterstate.NewStaticMaxNodeProvisionTimeProvider(15*time.Minute)) + clusterState := clusterstate.NewClusterStateRegistry(provider, clusterstate.ClusterStateRegistryConfig{}, context.LogRecorder, NewBackoff(), nodegroupconfig.NewDefaultNodeGroupConfigProcessor(config.NodeGroupAutoscalingOptions{MaxNodeProvisionTime: 15 * time.Minute})) clusterState.UpdateNodes(nodes, nodeInfos, time.Now()) suOrchestrator := New() @@ -1049,24 +1629,33 @@ func TestCheckDeltaWithinLimits(t *testing.T) { } } -func TestAuthError(t *testing.T) { +func TestAuthErrorHandling(t *testing.T) { metrics.RegisterAll(false) - context, err := NewScaleTestAutoscalingContext(config.AutoscalingOptions{}, &fake.Clientset{}, nil, nil, nil, nil) - assert.NoError(t, err) - - nodeGroup := &mockprovider.NodeGroup{} - info := nodegroupset.ScaleUpInfo{Group: nodeGroup} - nodeGroup.On("Id").Return("A") - nodeGroup.On("IncreaseSize", 0).Return(errors.NewAutoscalerError(errors.AutoscalerErrorType("abcd"), "")) - - processors := NewTestProcessors(&context) - clusterStateRegistry := 
clusterstate.NewClusterStateRegistry(nil, clusterstate.ClusterStateRegistryConfig{}, context.LogRecorder, NewBackoff(), clusterstate.NewStaticMaxNodeProvisionTimeProvider(15*time.Minute)) - suOrchestrator := New() - suOrchestrator.Initialize(&context, processors, clusterStateRegistry, taints.TaintConfig{}) - scaleUpOrchestrator := suOrchestrator.(*ScaleUpOrchestrator) - aerr := scaleUpOrchestrator.executeScaleUp(info, "", "", time.Now()) - assert.Error(t, aerr) + config := &ScaleUpTestConfig{ + Groups: []NodeGroupConfig{ + {Name: "ng1", MaxSize: 2}, + }, + Nodes: []NodeConfig{ + {Name: "ng1-n1", Cpu: 1500, Memory: 1000 * utils.MiB, Ready: true, Group: "ng1"}, + }, + ExtraPods: []PodConfig{ + {Name: "p1", Cpu: 1000}, + }, + OnScaleUp: func(group string, i int) error { + return errors.NewAutoscalerError(errors.AutoscalerErrorType("authError"), "auth error") + }, + Options: &defaultOptions, + } + results := runSimpleScaleUpTest(t, config) + expected := errors.NewAutoscalerError( + errors.AutoscalerErrorType("authError"), + "failed to increase node group size: auth error", + ) + assert.Equal(t, expected, results.ScaleUpError) + assertLegacyRegistryEntry(t, "cluster_autoscaler_failed_scale_ups_total{reason=\"authError\"} 1") +} +func assertLegacyRegistryEntry(t *testing.T, entry string) { req, err := http.NewRequest("GET", "/", nil) if err != nil { t.Fatal(err) @@ -1074,14 +1663,13 @@ func TestAuthError(t *testing.T) { rr := httptest.NewRecorder() handler := http.HandlerFunc(legacyregistry.Handler().ServeHTTP) handler.ServeHTTP(rr, req) - // Check that the status code is what we expect. if status := rr.Code; status != http.StatusOK { t.Errorf("handler returned wrong status code: got %v want %v", status, http.StatusOK) } // Check that the failed scale up reason is set correctly. 
- assert.Contains(t, rr.Body.String(), "cluster_autoscaler_failed_scale_ups_total{reason=\"abcd\"} 1") + assert.Contains(t, rr.Body.String(), entry) } func simplifyScaleUpStatus(scaleUpStatus *status.ScaleUpStatus) ScaleUpStatusInfo { diff --git a/cluster-autoscaler/core/scaleup/orchestrator/skippedreasons.go b/cluster-autoscaler/core/scaleup/orchestrator/skippedreasons.go index ba8531649277..85537cedc68e 100644 --- a/cluster-autoscaler/core/scaleup/orchestrator/skippedreasons.go +++ b/cluster-autoscaler/core/scaleup/orchestrator/skippedreasons.go @@ -45,7 +45,26 @@ var ( NotReadyReason = NewSkippedReasons("not ready for scale-up") ) -// MaxResourceLimitReached returns a reason describing which cluster wide resource limits were reached. -func MaxResourceLimitReached(resources []string) *SkippedReasons { - return NewSkippedReasons(fmt.Sprintf("max cluster %s limit reached", strings.Join(resources, ", "))) +// MaxResourceLimitReached contains information why given node group was skipped. +type MaxResourceLimitReached struct { + messages []string + resources []string +} + +// Reasons returns a slice of reasons why the node group was not considered for scale up. +func (sr *MaxResourceLimitReached) Reasons() []string { + return sr.messages +} + +// Resources returns a slice of resources which were missing in the node group. +func (sr *MaxResourceLimitReached) Resources() []string { + return sr.resources +} + +// NewMaxResourceLimitReached returns a reason describing which cluster wide resource limits were reached. 
+func NewMaxResourceLimitReached(resources []string) *MaxResourceLimitReached { + return &MaxResourceLimitReached{ + messages: []string{fmt.Sprintf("max cluster %s limit reached", strings.Join(resources, ", "))}, + resources: resources, + } } diff --git a/cluster-autoscaler/core/scaleup/orchestrator/skippedreasons_test.go b/cluster-autoscaler/core/scaleup/orchestrator/skippedreasons_test.go index 8f888d653bc5..0c768e523b3c 100644 --- a/cluster-autoscaler/core/scaleup/orchestrator/skippedreasons_test.go +++ b/cluster-autoscaler/core/scaleup/orchestrator/skippedreasons_test.go @@ -44,7 +44,7 @@ func TestMaxResourceLimitReached(t *testing.T) { } for _, tt := range tests { t.Run(tt.name, func(t *testing.T) { - if got := MaxResourceLimitReached(tt.resources); !reflect.DeepEqual(got.Reasons(), tt.wantReasons) { + if got := NewMaxResourceLimitReached(tt.resources); !reflect.DeepEqual(got.Reasons(), tt.wantReasons) { t.Errorf("MaxResourceLimitReached(%v) = %v, want %v", tt.resources, got.Reasons(), tt.wantReasons) } }) diff --git a/cluster-autoscaler/core/static_autoscaler.go b/cluster-autoscaler/core/static_autoscaler.go index cfe62f9b0ca1..7323f572f43a 100644 --- a/cluster-autoscaler/core/static_autoscaler.go +++ b/cluster-autoscaler/core/static_autoscaler.go @@ -140,8 +140,14 @@ func NewStaticAutoscaler( backoff backoff.Backoff, debuggingSnapshotter debuggingsnapshot.DebuggingSnapshotter, remainingPdbTracker pdb.RemainingPdbTracker, - scaleUpOrchestrator scaleup.Orchestrator) *StaticAutoscaler { + scaleUpOrchestrator scaleup.Orchestrator, + deleteOptions simulator.NodeDeleteOptions) *StaticAutoscaler { + clusterStateConfig := clusterstate.ClusterStateRegistryConfig{ + MaxTotalUnreadyPercentage: opts.MaxTotalUnreadyPercentage, + OkTotalUnreadyCount: opts.OkTotalUnreadyCount, + } + clusterStateRegistry := clusterstate.NewClusterStateRegistry(cloudProvider, clusterStateConfig, autoscalingKubeClients.LogRecorder, backoff, processors.NodeGroupConfigProcessor) 
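The skippedreasons.go change above replaces a function that returned a preformatted string reason with a typed reason that keeps the structured data, which resources hit their limits, alongside the human-readable message. A self-contained sketch of the same shape (the surrounding `SkippedReasons` plumbing is omitted):

```go
package main

import (
	"fmt"
	"strings"
)

// Reason carries both a display message and the structured resource list,
// mirroring the MaxResourceLimitReached type introduced in the diff.
type Reason struct {
	messages  []string
	resources []string
}

// Reasons returns the human-readable messages.
func (r *Reason) Reasons() []string { return r.messages }

// Resources returns the resources whose cluster-wide limits were reached.
func (r *Reason) Resources() []string { return r.resources }

// NewMaxResourceLimitReached builds a reason naming the exhausted resources.
func NewMaxResourceLimitReached(resources []string) *Reason {
	return &Reason{
		messages:  []string{fmt.Sprintf("max cluster %s limit reached", strings.Join(resources, ", "))},
		resources: resources,
	}
}

func main() {
	r := NewMaxResourceLimitReached([]string{"cpu", "memory"})
	fmt.Println(r.Reasons()[0]) // max cluster cpu, memory limit reached
	fmt.Println(r.Resources())  // [cpu memory]
}
```

Callers that only need text keep working through `Reasons()`, while new callers can inspect `Resources()` without parsing the message back apart.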
processorCallbacks := newStaticAutoscalerProcessorCallbacks() autoscalingContext := context.NewAutoscalingContext( opts, @@ -153,24 +159,17 @@ func NewStaticAutoscaler( estimatorBuilder, processorCallbacks, debuggingSnapshotter, - remainingPdbTracker) - - clusterStateConfig := clusterstate.ClusterStateRegistryConfig{ - MaxTotalUnreadyPercentage: opts.MaxTotalUnreadyPercentage, - OkTotalUnreadyCount: opts.OkTotalUnreadyCount, - } + remainingPdbTracker, + clusterStateRegistry) taintConfig := taints.NewTaintConfig(opts) - clusterStateRegistry := clusterstate.NewClusterStateRegistry(autoscalingContext.CloudProvider, clusterStateConfig, autoscalingContext.LogRecorder, backoff, clusterstate.NewDefaultMaxNodeProvisionTimeProvider(autoscalingContext, processors.NodeGroupConfigProcessor)) processors.ScaleDownCandidatesNotifier.Register(clusterStateRegistry) - deleteOptions := simulator.NewNodeDeleteOptions(opts) - // TODO: Populate the ScaleDownActuator/Planner fields in AutoscalingContext // during the struct creation rather than here. 
ndt := deletiontracker.NewNodeDeletionTracker(0 * time.Second) scaleDown := legacy.NewScaleDown(autoscalingContext, processors, ndt, deleteOptions) - actuator := actuation.NewActuator(autoscalingContext, clusterStateRegistry, ndt, deleteOptions) + actuator := actuation.NewActuator(autoscalingContext, clusterStateRegistry, ndt, deleteOptions, processors.NodeGroupConfigProcessor) autoscalingContext.ScaleDownActuator = actuator var scaleDownPlanner scaledown.Planner @@ -282,8 +281,7 @@ func (a *StaticAutoscaler) RunOnce(currentTime time.Time) caerrors.AutoscalerErr a.DebuggingSnapshotter.StartDataCollection() defer a.DebuggingSnapshotter.Flush() - unschedulablePodLister := a.UnschedulablePodLister() - scheduledPodLister := a.ScheduledPodLister() + scheduledAndUnschedulablePodLister := a.ScheduledAndUnschedulablePodLister() autoscalingContext := a.AutoscalingContext klog.V(4).Info("Starting main loop") @@ -296,17 +294,18 @@ func (a *StaticAutoscaler) RunOnce(currentTime time.Time) caerrors.AutoscalerErr klog.Errorf("Failed to get node list: %v", typedErr) return typedErr } - originalScheduledPods, err := scheduledPodLister.List() - if err != nil { - klog.Errorf("Failed to list scheduled pods: %v", err) - return caerrors.ToAutoscalerError(caerrors.ApiCallError, err) - } if abortLoop, err := a.processors.ActionableClusterProcessor.ShouldAbort( a.AutoscalingContext, allNodes, readyNodes, currentTime); abortLoop { return err } + originalScheduledPods, unschedulablePods, err := scheduledAndUnschedulablePodLister.List() + if err != nil { + klog.Errorf("Failed to list scheduled and unschedulable pods: %v", err) + return caerrors.ToAutoscalerError(caerrors.ApiCallError, err) + } + // Update cluster resource usage metrics coresTotal, memoryTotal := calculateCoresMemoryTotal(allNodes, currentTime) metrics.UpdateClusterCPUCurrentCores(coresTotal) @@ -317,7 +316,8 @@ func (a *StaticAutoscaler) RunOnce(currentTime time.Time) caerrors.AutoscalerErr klog.Errorf("Failed to get 
daemonset list: %v", err) return caerrors.ToAutoscalerError(caerrors.ApiCallError, err) } - + // Snapshot scale-down actuation status before cache refresh. + scaleDownActuationStatus := a.scaleDownActuator.CheckStatus() // Call CloudProvider.Refresh before any other calls to cloud provider. refreshStart := time.Now() err = a.AutoscalingContext.CloudProvider.Refresh() @@ -403,7 +403,7 @@ func (a *StaticAutoscaler) RunOnce(currentTime time.Time) caerrors.AutoscalerErr unregisteredNodes := a.clusterStateRegistry.GetUnregisteredNodes() if len(unregisteredNodes) > 0 { klog.V(1).Infof("%d unregistered nodes present", len(unregisteredNodes)) - removedAny, err := removeOldUnregisteredNodes(unregisteredNodes, autoscalingContext, + removedAny, err := a.removeOldUnregisteredNodes(unregisteredNodes, autoscalingContext, a.clusterStateRegistry, currentTime, autoscalingContext.LogRecorder) // There was a problem with removing unregistered nodes. Retry in the next loop. if err != nil { @@ -436,23 +436,18 @@ func (a *StaticAutoscaler) RunOnce(currentTime time.Time) caerrors.AutoscalerErr // TODO: andrewskim - add protection for ready AWS nodes. 
// FORK-CHANGE: Commented this code as it removes `Registered but long not Ready` nodes which causes issues like scaling below minimum size and removing ready nodes during meltdown scenario - //fixedSomething, err := fixNodeGroupSize(autoscalingContext, a.clusterStateRegistry, currentTime) - //if err != nil { - // klog.Errorf("Failed to fix node group sizes: %v", err) - // return errors.ToAutoscalerError(errors.CloudProviderError, err) - //} - //if fixedSomething { - // klog.V(0).Infof("Some node group target size was fixed, skipping the iteration") - // return nil - //} + // fixedSomething, err := fixNodeGroupSize(autoscalingContext, a.clusterStateRegistry, currentTime) + // if err != nil { + // klog.Errorf("Failed to fix node group sizes: %v", err) + // return caerrors.ToAutoscalerError(caerrors.CloudProviderError, err) + // } + // if fixedSomething { + // klog.V(0).Infof("Some node group target size was fixed, skipping the iteration") + // return nil + // } metrics.UpdateLastTime(metrics.Autoscaling, time.Now()) - unschedulablePods, err := unschedulablePodLister.List() - if err != nil { - klog.Errorf("Failed to list unscheduled pods: %v", err) - return caerrors.ToAutoscalerError(caerrors.ApiCallError, err) - } metrics.UpdateUnschedulablePodsCount(len(unschedulablePods)) unschedulablePods = tpu.ClearTPURequests(unschedulablePods) @@ -603,8 +598,7 @@ func (a *StaticAutoscaler) RunOnce(currentTime time.Time) caerrors.AutoscalerErr } } - actuationStatus := a.scaleDownActuator.CheckStatus() - typedErr := a.scaleDownPlanner.UpdateClusterState(podDestinations, scaleDownCandidates, actuationStatus, currentTime) + typedErr := a.scaleDownPlanner.UpdateClusterState(podDestinations, scaleDownCandidates, scaleDownActuationStatus, currentTime) // Update clusterStateRegistry and metrics regardless of whether ScaleDown was successful or not. 
unneededNodes := a.scaleDownPlanner.UnneededNodes() a.processors.ScaleDownCandidatesNotifier.Update(unneededNodes, currentTime) @@ -629,23 +623,27 @@ func (a *StaticAutoscaler) RunOnce(currentTime time.Time) caerrors.AutoscalerErr a.processorCallbacks.disableScaleDownForLoop, scaleDownInCooldown) metrics.UpdateScaleDownInCooldown(scaleDownInCooldown) + // We want to delete unneeded Node Groups only if there is no current delete + // in progress. + _, drained := scaleDownActuationStatus.DeletionsInProgress() + var removedNodeGroups []cloudprovider.NodeGroup + if len(drained) == 0 { + var err error + removedNodeGroups, err = a.processors.NodeGroupManager.RemoveUnneededNodeGroups(autoscalingContext) + if err != nil { + klog.Errorf("Error while removing unneeded node groups: %v", err) + } + scaleDownStatus.RemovedNodeGroups = removedNodeGroups + } + if scaleDownInCooldown { scaleDownStatus.Result = scaledownstatus.ScaleDownInCooldown + if len(removedNodeGroups) > 0 { + a.processors.ScaleDownStatusProcessor.Process(autoscalingContext, scaleDownStatus) + } } else { klog.V(4).Infof("Starting scale down") - // We want to delete unneeded Node Groups only if there was no recent scale up, - // and there is no current delete in progress and there was no recent errors. - _, drained := actuationStatus.DeletionsInProgress() - var removedNodeGroups []cloudprovider.NodeGroup - if len(drained) == 0 { - var err error - removedNodeGroups, err = a.processors.NodeGroupManager.RemoveUnneededNodeGroups(autoscalingContext) - if err != nil { - klog.Errorf("Error while removing unneeded node groups: %v", err) - } - } - scaleDownStart := time.Now() metrics.UpdateLastTime(metrics.ScaleDown, scaleDownStart) empty, needDrain := a.scaleDownPlanner.NodesToDelete(currentTime) @@ -726,14 +724,16 @@ func fixNodeGroupSize(context *context.AutoscalingContext, clusterStateRegistry } // Removes unregistered nodes if needed. Returns true if anything was removed and error if such occurred. 
-func removeOldUnregisteredNodes(unregisteredNodes []clusterstate.UnregisteredNode, context *context.AutoscalingContext, +func (a *StaticAutoscaler) removeOldUnregisteredNodes(allUnregisteredNodes []clusterstate.UnregisteredNode, context *context.AutoscalingContext, csr *clusterstate.ClusterStateRegistry, currentTime time.Time, logRecorder *utils.LogEventRecorder) (bool, error) { - removedAny := false - for _, unregisteredNode := range unregisteredNodes { - nodeGroup, err := context.CloudProvider.NodeGroupForNode(unregisteredNode.Node) + + nodeGroups := a.nodeGroupsById() + nodesToBeDeletedByNodeGroupId := make(map[string][]clusterstate.UnregisteredNode) + for _, unregisteredNode := range allUnregisteredNodes { + nodeGroup, err := a.CloudProvider.NodeGroupForNode(unregisteredNode.Node) if err != nil { klog.Warningf("Failed to get node group for %s: %v", unregisteredNode.Node.Name, err) - return removedAny, err + continue } if nodeGroup == nil || reflect.ValueOf(nodeGroup).IsNil() { klog.Warningf("No node group for node %s, skipping", unregisteredNode.Node.Name) @@ -742,37 +742,80 @@ func removeOldUnregisteredNodes(unregisteredNodes []clusterstate.UnregisteredNod maxNodeProvisionTime, err := csr.MaxNodeProvisionTime(nodeGroup) if err != nil { - return removedAny, fmt.Errorf("failed to retrieve maxNodeProvisionTime for node %s in nodeGroup %s", unregisteredNode.Node.Name, nodeGroup.Id()) + return false, fmt.Errorf("failed to retrieve maxNodeProvisionTime for node %s in nodeGroup %s", unregisteredNode.Node.Name, nodeGroup.Id()) } if unregisteredNode.UnregisteredSince.Add(maxNodeProvisionTime).Before(currentTime) { - klog.V(0).Infof("Removing unregistered node %v", unregisteredNode.Node.Name) - size, err := nodeGroup.TargetSize() + klog.V(0).Infof("Marking unregistered node %v for removal", unregisteredNode.Node.Name) + nodesToBeDeletedByNodeGroupId[nodeGroup.Id()] = append(nodesToBeDeletedByNodeGroupId[nodeGroup.Id()], unregisteredNode) + } + } + + removedAny := false 
+ for nodeGroupId, unregisteredNodesToDelete := range nodesToBeDeletedByNodeGroupId { + nodeGroup := nodeGroups[nodeGroupId] + + klog.V(0).Infof("Removing %v unregistered nodes for node group %v", len(unregisteredNodesToDelete), nodeGroupId) + size, err := nodeGroup.TargetSize() + if err != nil { + klog.Warningf("Failed to get node group size; nodeGroup=%v; err=%v", nodeGroup.Id(), err) + continue + } + possibleToDelete := size - nodeGroup.MinSize() + if possibleToDelete <= 0 { + klog.Warningf("Node group %s min size reached, skipping removal of %v unregistered nodes", nodeGroupId, len(unregisteredNodesToDelete)) + continue + } + if len(unregisteredNodesToDelete) > possibleToDelete { + klog.Warningf("Capping node group %s unregistered node removal to %d nodes, removing all %d would exceed min size constraint", nodeGroupId, possibleToDelete, len(unregisteredNodesToDelete)) + unregisteredNodesToDelete = unregisteredNodesToDelete[:possibleToDelete] + } + nodesToDelete := toNodes(unregisteredNodesToDelete) + + opts, err := nodeGroup.GetOptions(a.NodeGroupDefaults) + if err != nil { + klog.Warningf("Failed to get node group options for %s: %s", nodeGroupId, err) + continue + } + // If a scale-up of "ZeroOrMaxNodeScaling" node group failed, the cleanup + // should stick to the all-or-nothing principle. Deleting all nodes. 
+ if opts != nil && opts.ZeroOrMaxNodeScaling { + instances, err := nodeGroup.Nodes() if err != nil { - klog.Warningf("Failed to get node group size; unregisteredNode=%v; nodeGroup=%v; err=%v", unregisteredNode.Node.Name, nodeGroup.Id(), err) - continue - } - if nodeGroup.MinSize() >= size { - klog.Warningf("Failed to remove node %s: node group min size reached, skipping unregistered node removal", unregisteredNode.Node.Name) + klog.Warningf("Failed to fill in unregistered nodes from group %s based on ZeroOrMaxNodeScaling option: %s", nodeGroupId, err) continue } - err = nodeGroup.DeleteNodes([]*apiv1.Node{unregisteredNode.Node}) - csr.InvalidateNodeInstancesCacheEntry(nodeGroup) - if err != nil { - klog.Warningf("Failed to remove node %s: %v", unregisteredNode.Node.Name, err) + nodesToDelete = instancesToFakeNodes(instances) + } + + err = nodeGroup.DeleteNodes(nodesToDelete) + csr.InvalidateNodeInstancesCacheEntry(nodeGroup) + if err != nil { + klog.Warningf("Failed to remove %v unregistered nodes from node group %s: %v", len(nodesToDelete), nodeGroupId, err) + for _, node := range nodesToDelete { logRecorder.Eventf(apiv1.EventTypeWarning, "DeleteUnregisteredFailed", - "Failed to remove node %s: %v", unregisteredNode.Node.Name, err) - return removedAny, err + "Failed to remove node %s: %v", node.Name, err) } + return removedAny, err + } + for _, node := range nodesToDelete { logRecorder.Eventf(apiv1.EventTypeNormal, "DeleteUnregistered", - "Removed unregistered node %v", unregisteredNode.Node.Name) - metrics.RegisterOldUnregisteredNodesRemoved(1) - removedAny = true + "Removed unregistered node %v", node.Name) } + metrics.RegisterOldUnregisteredNodesRemoved(len(nodesToDelete)) + removedAny = true } return removedAny, nil } +func toNodes(unregisteredNodes []clusterstate.UnregisteredNode) []*apiv1.Node { + nodes := []*apiv1.Node{} + for _, n := range unregisteredNodes { + nodes = append(nodes, n.Node) + } + return nodes +} + func (a *StaticAutoscaler) 
deleteCreatedNodesWithErrors() (bool, error) { // We always schedule deleting of incoming errornous nodes // TODO[lukaszos] Consider adding logic to not retry delete every loop iteration @@ -808,6 +851,21 @@ func (a *StaticAutoscaler) deleteCreatedNodesWithErrors() (bool, error) { if nodeGroup == nil { err = fmt.Errorf("node group %s not found", nodeGroupId) } else { + opts, err := nodeGroup.GetOptions(a.NodeGroupDefaults) + if err != nil { + klog.Warningf("Failed to get node group options for %s: %s", nodeGroupId, err) + continue + } + // If a scale-up of "ZeroOrMaxNodeScaling" node group failed, the cleanup + // should stick to the all-or-nothing principle. Deleting all nodes. + if opts != nil && opts.ZeroOrMaxNodeScaling { + instances, err := nodeGroup.Nodes() + if err != nil { + klog.Warningf("Failed to fill in failed nodes from group %s based on ZeroOrMaxNodeScaling option: %s", nodeGroupId, err) + continue + } + nodesToBeDeleted = instancesToFakeNodes(instances) + } err = nodeGroup.DeleteNodes(nodesToBeDeleted) } @@ -822,6 +880,16 @@ func (a *StaticAutoscaler) deleteCreatedNodesWithErrors() (bool, error) { return deletedAny, nil } +// instancesToFakeNodes returns a list of fake nodes with just names populated, +// so that they can be passed as nodes to delete +func instancesToFakeNodes(instances []cloudprovider.Instance) []*apiv1.Node { + nodes := []*apiv1.Node{} + for _, i := range instances { + nodes = append(nodes, clusterstate.FakeNode(i, "")) + } + return nodes +} + func (a *StaticAutoscaler) nodeGroupsById() map[string]cloudprovider.NodeGroup { nodeGroups := make(map[string]cloudprovider.NodeGroup) for _, nodeGroup := range a.CloudProvider.NodeGroups() { diff --git a/cluster-autoscaler/core/static_autoscaler_test.go b/cluster-autoscaler/core/static_autoscaler_test.go index 30555466a988..2e4f740a8159 100644 --- a/cluster-autoscaler/core/static_autoscaler_test.go +++ b/cluster-autoscaler/core/static_autoscaler_test.go @@ -42,6 +42,7 @@ import ( core_utils 
"k8s.io/autoscaler/cluster-autoscaler/core/utils" "k8s.io/autoscaler/cluster-autoscaler/estimator" ca_processors "k8s.io/autoscaler/cluster-autoscaler/processors" + "k8s.io/autoscaler/cluster-autoscaler/processors/nodegroupconfig" "k8s.io/autoscaler/cluster-autoscaler/simulator" "k8s.io/autoscaler/cluster-autoscaler/simulator/clustersnapshot" "k8s.io/autoscaler/cluster-autoscaler/simulator/utilization" @@ -75,6 +76,15 @@ func (l *podListerMock) List() ([]*apiv1.Pod, error) { return args.Get(0).([]*apiv1.Pod), args.Error(1) } +type scheduledAndUnschedulablePodListerMock struct { + mock.Mock +} + +func (l *scheduledAndUnschedulablePodListerMock) List() (scheduledPods []*apiv1.Pod, unschedulablePods []*apiv1.Pod, err error) { + args := l.Called() + return args.Get(0).([]*apiv1.Pod), args.Get(1).([]*apiv1.Pod), args.Error(2) +} + type podDisruptionBudgetListerMock struct { mock.Mock } @@ -150,14 +160,14 @@ func (m *onNodeGroupDeleteMock) Delete(id string) error { func setUpScaleDownActuator(ctx *context.AutoscalingContext, options config.AutoscalingOptions) { deleteOptions := simulator.NewNodeDeleteOptions(options) - ctx.ScaleDownActuator = actuation.NewActuator(ctx, nil, deletiontracker.NewNodeDeletionTracker(0*time.Second), deleteOptions) + ctx.ScaleDownActuator = actuation.NewActuator(ctx, nil, deletiontracker.NewNodeDeletionTracker(0*time.Second), deleteOptions, NewTestProcessors(ctx).NodeGroupConfigProcessor) } func TestStaticAutoscalerRunOnce(t *testing.T) { readyNodeLister := kubernetes.NewTestNodeLister(nil) allNodeLister := kubernetes.NewTestNodeLister(nil) scheduledPodMock := &podListerMock{} - unschedulablePodMock := &podListerMock{} + scheduledAndUnschedulablePodMock := &scheduledAndUnschedulablePodListerMock{} podDisruptionBudgetListerMock := &podDisruptionBudgetListerMock{} daemonSetListerMock := &daemonSetListerMock{} onScaleUpMock := &onScaleUpMock{} @@ -218,7 +228,7 @@ func TestStaticAutoscalerRunOnce(t *testing.T) { setUpScaleDownActuator(&context, 
options) listerRegistry := kube_util.NewListerRegistry(allNodeLister, readyNodeLister, scheduledPodMock, - unschedulablePodMock, podDisruptionBudgetListerMock, daemonSetListerMock, + scheduledAndUnschedulablePodMock, podDisruptionBudgetListerMock, daemonSetListerMock, nil, nil, nil, nil) context.ListerRegistry = listerRegistry @@ -226,7 +236,7 @@ func TestStaticAutoscalerRunOnce(t *testing.T) { OkTotalUnreadyCount: 1, } processors := NewTestProcessors(&context) - clusterState := clusterstate.NewClusterStateRegistry(provider, clusterStateConfig, context.LogRecorder, NewBackoff(), clusterstate.NewStaticMaxNodeProvisionTimeProvider(options.NodeGroupDefaults.MaxNodeProvisionTime)) + clusterState := clusterstate.NewClusterStateRegistry(provider, clusterStateConfig, context.LogRecorder, NewBackoff(), nodegroupconfig.NewDefaultNodeGroupConfigProcessor(options.NodeGroupDefaults)) sdPlanner, sdActuator := newScaleDownPlannerAndActuator(t, &context, processors, clusterState) suOrchestrator := orchestrator.New() suOrchestrator.Initialize(&context, processors, clusterState, taints.TaintConfig{}) @@ -247,21 +257,21 @@ func TestStaticAutoscalerRunOnce(t *testing.T) { // MaxNodesTotal reached. 
readyNodeLister.SetNodes([]*apiv1.Node{n1}) allNodeLister.SetNodes([]*apiv1.Node{n1}) - scheduledPodMock.On("List").Return([]*apiv1.Pod{p1}, nil).Twice() - unschedulablePodMock.On("List").Return([]*apiv1.Pod{p2}, nil).Once() + scheduledPodMock.On("List").Return([]*apiv1.Pod{p1}, nil).Once() + scheduledAndUnschedulablePodMock.On("List").Return([]*apiv1.Pod{p1}, []*apiv1.Pod{p2}, nil).Once() daemonSetListerMock.On("List", labels.Everything()).Return([]*appsv1.DaemonSet{}, nil).Once() podDisruptionBudgetListerMock.On("List").Return([]*policyv1.PodDisruptionBudget{}, nil).Once() err = autoscaler.RunOnce(time.Now()) assert.NoError(t, err) - mock.AssertExpectationsForObjects(t, scheduledPodMock, unschedulablePodMock, + mock.AssertExpectationsForObjects(t, scheduledPodMock, scheduledAndUnschedulablePodMock, podDisruptionBudgetListerMock, daemonSetListerMock, onScaleUpMock, onScaleDownMock) // Scale up. readyNodeLister.SetNodes([]*apiv1.Node{n1}) allNodeLister.SetNodes([]*apiv1.Node{n1}) - scheduledPodMock.On("List").Return([]*apiv1.Pod{p1}, nil).Times(2) // 1 to get pods + 1 per nodegroup when building nodeInfo map - unschedulablePodMock.On("List").Return([]*apiv1.Pod{p2}, nil).Once() + scheduledPodMock.On("List").Return([]*apiv1.Pod{p1}, nil).Once() + scheduledAndUnschedulablePodMock.On("List").Return([]*apiv1.Pod{p1}, []*apiv1.Pod{p2}, nil).Once() daemonSetListerMock.On("List", labels.Everything()).Return([]*appsv1.DaemonSet{}, nil).Once() podDisruptionBudgetListerMock.On("List").Return([]*policyv1.PodDisruptionBudget{}, nil).Once() onScaleUpMock.On("ScaleUp", "ng1", 1).Return(nil).Once() @@ -269,14 +279,14 @@ func TestStaticAutoscalerRunOnce(t *testing.T) { context.MaxNodesTotal = 10 err = autoscaler.RunOnce(time.Now().Add(time.Hour)) assert.NoError(t, err) - mock.AssertExpectationsForObjects(t, scheduledPodMock, unschedulablePodMock, + mock.AssertExpectationsForObjects(t, scheduledPodMock, scheduledAndUnschedulablePodMock, podDisruptionBudgetListerMock, 
daemonSetListerMock, onScaleUpMock, onScaleDownMock) // Mark unneeded nodes. readyNodeLister.SetNodes([]*apiv1.Node{n1, n2}) allNodeLister.SetNodes([]*apiv1.Node{n1, n2}) - scheduledPodMock.On("List").Return([]*apiv1.Pod{p1}, nil).Twice() - unschedulablePodMock.On("List").Return([]*apiv1.Pod{}, nil).Once() + scheduledPodMock.On("List").Return([]*apiv1.Pod{p1}, nil).Once() + scheduledAndUnschedulablePodMock.On("List").Return([]*apiv1.Pod{p1}, []*apiv1.Pod{}, nil).Once() daemonSetListerMock.On("List", labels.Everything()).Return([]*appsv1.DaemonSet{}, nil).Once() podDisruptionBudgetListerMock.On("List").Return([]*policyv1.PodDisruptionBudget{}, nil).Once() @@ -285,14 +295,14 @@ func TestStaticAutoscalerRunOnce(t *testing.T) { err = autoscaler.RunOnce(time.Now().Add(2 * time.Hour)) assert.NoError(t, err) - mock.AssertExpectationsForObjects(t, scheduledPodMock, unschedulablePodMock, + mock.AssertExpectationsForObjects(t, scheduledPodMock, scheduledAndUnschedulablePodMock, podDisruptionBudgetListerMock, daemonSetListerMock, onScaleUpMock, onScaleDownMock) // Scale down. 
readyNodeLister.SetNodes([]*apiv1.Node{n1, n2}) allNodeLister.SetNodes([]*apiv1.Node{n1, n2}) - scheduledPodMock.On("List").Return([]*apiv1.Pod{p1}, nil).Times(3) - unschedulablePodMock.On("List").Return([]*apiv1.Pod{}, nil).Once() + scheduledPodMock.On("List").Return([]*apiv1.Pod{p1}, nil).Twice() + scheduledAndUnschedulablePodMock.On("List").Return([]*apiv1.Pod{p1}, []*apiv1.Pod{}, nil).Once() daemonSetListerMock.On("List", labels.Everything()).Return([]*appsv1.DaemonSet{}, nil).Once() podDisruptionBudgetListerMock.On("List").Return([]*policyv1.PodDisruptionBudget{}, nil).Twice() onScaleDownMock.On("ScaleDown", "ng1", "n2").Return(nil).Once() @@ -300,14 +310,14 @@ func TestStaticAutoscalerRunOnce(t *testing.T) { err = autoscaler.RunOnce(time.Now().Add(3 * time.Hour)) waitForDeleteToFinish(t, deleteFinished) assert.NoError(t, err) - mock.AssertExpectationsForObjects(t, scheduledPodMock, unschedulablePodMock, + mock.AssertExpectationsForObjects(t, scheduledPodMock, scheduledAndUnschedulablePodMock, podDisruptionBudgetListerMock, daemonSetListerMock, onScaleUpMock, onScaleDownMock) // Mark unregistered nodes. 
readyNodeLister.SetNodes([]*apiv1.Node{n1, n2}) allNodeLister.SetNodes([]*apiv1.Node{n1, n2}) - scheduledPodMock.On("List").Return([]*apiv1.Pod{p1}, nil).Twice() - unschedulablePodMock.On("List").Return([]*apiv1.Pod{p2}, nil).Once() + scheduledPodMock.On("List").Return([]*apiv1.Pod{p1}, nil).Once() + scheduledAndUnschedulablePodMock.On("List").Return([]*apiv1.Pod{p1}, []*apiv1.Pod{p2}, nil).Once() daemonSetListerMock.On("List", labels.Everything()).Return([]*appsv1.DaemonSet{}, nil).Once() podDisruptionBudgetListerMock.On("List").Return([]*policyv1.PodDisruptionBudget{}, nil).Once() @@ -316,14 +326,14 @@ func TestStaticAutoscalerRunOnce(t *testing.T) { err = autoscaler.RunOnce(time.Now().Add(4 * time.Hour)) assert.NoError(t, err) - mock.AssertExpectationsForObjects(t, scheduledPodMock, unschedulablePodMock, + mock.AssertExpectationsForObjects(t, scheduledPodMock, scheduledAndUnschedulablePodMock, podDisruptionBudgetListerMock, daemonSetListerMock, onScaleUpMock, onScaleDownMock) // Remove unregistered nodes. 
readyNodeLister.SetNodes([]*apiv1.Node{n1, n2}) allNodeLister.SetNodes([]*apiv1.Node{n1, n2}) - scheduledPodMock.On("List").Return([]*apiv1.Pod{p1}, nil).Twice() - unschedulablePodMock.On("List").Return([]*apiv1.Pod{p2}, nil).Once() + scheduledPodMock.On("List").Return([]*apiv1.Pod{p1}, nil).Once() + scheduledAndUnschedulablePodMock.On("List").Return([]*apiv1.Pod{p1}, []*apiv1.Pod{p2}, nil).Once() daemonSetListerMock.On("List", labels.Everything()).Return([]*appsv1.DaemonSet{}, nil).Once() onScaleDownMock.On("ScaleDown", "ng2", "n3").Return(nil).Once() podDisruptionBudgetListerMock.On("List").Return([]*policyv1.PodDisruptionBudget{}, nil).Once() @@ -331,14 +341,14 @@ func TestStaticAutoscalerRunOnce(t *testing.T) { err = autoscaler.RunOnce(time.Now().Add(5 * time.Hour)) waitForDeleteToFinish(t, deleteFinished) assert.NoError(t, err) - mock.AssertExpectationsForObjects(t, scheduledPodMock, unschedulablePodMock, + mock.AssertExpectationsForObjects(t, scheduledPodMock, scheduledAndUnschedulablePodMock, podDisruptionBudgetListerMock, daemonSetListerMock, onScaleUpMock, onScaleDownMock) // Scale up to node gorup min size. readyNodeLister.SetNodes([]*apiv1.Node{n4}) allNodeLister.SetNodes([]*apiv1.Node{n4}) scheduledPodMock.On("List").Return([]*apiv1.Pod{}, nil) - unschedulablePodMock.On("List").Return([]*apiv1.Pod{}, nil) + scheduledAndUnschedulablePodMock.On("List").Return([]*apiv1.Pod{}, []*apiv1.Pod{}, nil) daemonSetListerMock.On("List", labels.Everything()).Return([]*appsv1.DaemonSet{}, nil) podDisruptionBudgetListerMock.On("List").Return([]*policyv1.PodDisruptionBudget{}, nil) onScaleUpMock.On("ScaleUp", "ng3", 2).Return(nil).Once() // 2 new nodes are supposed to be scaled up. 
@@ -355,7 +365,7 @@ func TestStaticAutoscalerRunOnceWithAutoprovisionedEnabled(t *testing.T) { readyNodeLister := kubernetes.NewTestNodeLister(nil) allNodeLister := kubernetes.NewTestNodeLister(nil) scheduledPodMock := &podListerMock{} - unschedulablePodMock := &podListerMock{} + scheduledAndUnschedulablePodMock := &scheduledAndUnschedulablePodListerMock{} podDisruptionBudgetListerMock := &podDisruptionBudgetListerMock{} daemonSetListerMock := &daemonSetListerMock{} onScaleUpMock := &onScaleUpMock{} @@ -436,14 +446,14 @@ func TestStaticAutoscalerRunOnceWithAutoprovisionedEnabled(t *testing.T) { processors.NodeGroupListProcessor = nodeGroupListProcessor listerRegistry := kube_util.NewListerRegistry(allNodeLister, readyNodeLister, scheduledPodMock, - unschedulablePodMock, podDisruptionBudgetListerMock, daemonSetListerMock, + scheduledAndUnschedulablePodMock, podDisruptionBudgetListerMock, daemonSetListerMock, nil, nil, nil, nil) context.ListerRegistry = listerRegistry clusterStateConfig := clusterstate.ClusterStateRegistryConfig{ OkTotalUnreadyCount: 0, } - clusterState := clusterstate.NewClusterStateRegistry(provider, clusterStateConfig, context.LogRecorder, NewBackoff(), clusterstate.NewStaticMaxNodeProvisionTimeProvider(options.NodeGroupDefaults.MaxNodeProvisionTime)) + clusterState := clusterstate.NewClusterStateRegistry(provider, clusterStateConfig, context.LogRecorder, NewBackoff(), nodegroupconfig.NewDefaultNodeGroupConfigProcessor(options.NodeGroupDefaults)) sdPlanner, sdActuator := newScaleDownPlannerAndActuator(t, &context, processors, clusterState) suOrchestrator := orchestrator.New() @@ -465,8 +475,8 @@ func TestStaticAutoscalerRunOnceWithAutoprovisionedEnabled(t *testing.T) { // Scale up. 
readyNodeLister.SetNodes([]*apiv1.Node{n1}) allNodeLister.SetNodes([]*apiv1.Node{n1}) - scheduledPodMock.On("List").Return([]*apiv1.Pod{p1}, nil).Twice() - unschedulablePodMock.On("List").Return([]*apiv1.Pod{p2}, nil).Once() + scheduledPodMock.On("List").Return([]*apiv1.Pod{p1}, nil).Once() + scheduledAndUnschedulablePodMock.On("List").Return([]*apiv1.Pod{p1}, []*apiv1.Pod{p2}, nil).Once() podDisruptionBudgetListerMock.On("List").Return([]*policyv1.PodDisruptionBudget{}, nil).Once() daemonSetListerMock.On("List", labels.Everything()).Return([]*appsv1.DaemonSet{}, nil).Once() onNodeGroupCreateMock.On("Create", "autoprovisioned-TN2").Return(nil).Once() @@ -474,7 +484,7 @@ func TestStaticAutoscalerRunOnceWithAutoprovisionedEnabled(t *testing.T) { err = autoscaler.RunOnce(time.Now().Add(time.Hour)) assert.NoError(t, err) - mock.AssertExpectationsForObjects(t, scheduledPodMock, unschedulablePodMock, + mock.AssertExpectationsForObjects(t, scheduledPodMock, scheduledAndUnschedulablePodMock, podDisruptionBudgetListerMock, daemonSetListerMock, onScaleUpMock, onScaleDownMock) // Fix target size. @@ -483,8 +493,8 @@ func TestStaticAutoscalerRunOnceWithAutoprovisionedEnabled(t *testing.T) { // Remove autoprovisioned node group and mark unneeded nodes. 
readyNodeLister.SetNodes([]*apiv1.Node{n1, n2}) allNodeLister.SetNodes([]*apiv1.Node{n1, n2}) - scheduledPodMock.On("List").Return([]*apiv1.Pod{p1}, nil).Twice() - unschedulablePodMock.On("List").Return([]*apiv1.Pod{}, nil).Once() + scheduledPodMock.On("List").Return([]*apiv1.Pod{p1}, nil).Once() + scheduledAndUnschedulablePodMock.On("List").Return([]*apiv1.Pod{p1}, []*apiv1.Pod{}, nil).Once() podDisruptionBudgetListerMock.On("List").Return([]*policyv1.PodDisruptionBudget{}, nil).Once() daemonSetListerMock.On("List", labels.Everything()).Return([]*appsv1.DaemonSet{}, nil).Once() onNodeGroupDeleteMock.On("Delete", "autoprovisioned-TN1").Return(nil).Once() @@ -494,14 +504,14 @@ func TestStaticAutoscalerRunOnceWithAutoprovisionedEnabled(t *testing.T) { err = autoscaler.RunOnce(time.Now().Add(1 * time.Hour)) assert.NoError(t, err) - mock.AssertExpectationsForObjects(t, scheduledPodMock, unschedulablePodMock, + mock.AssertExpectationsForObjects(t, scheduledPodMock, scheduledAndUnschedulablePodMock, podDisruptionBudgetListerMock, daemonSetListerMock, onScaleUpMock, onScaleDownMock) // Scale down. 
readyNodeLister.SetNodes([]*apiv1.Node{n1, n2}) allNodeLister.SetNodes([]*apiv1.Node{n1, n2}) - scheduledPodMock.On("List").Return([]*apiv1.Pod{p1}, nil).Times(3) - unschedulablePodMock.On("List").Return([]*apiv1.Pod{}, nil).Once() + scheduledPodMock.On("List").Return([]*apiv1.Pod{p1}, nil).Twice() + scheduledAndUnschedulablePodMock.On("List").Return([]*apiv1.Pod{p1}, []*apiv1.Pod{}, nil).Once() podDisruptionBudgetListerMock.On("List").Return([]*policyv1.PodDisruptionBudget{}, nil).Twice() daemonSetListerMock.On("List", labels.Everything()).Return([]*appsv1.DaemonSet{}, nil).Once() onNodeGroupDeleteMock.On("Delete", "autoprovisioned-"+ @@ -511,7 +521,7 @@ func TestStaticAutoscalerRunOnceWithAutoprovisionedEnabled(t *testing.T) { err = autoscaler.RunOnce(time.Now().Add(2 * time.Hour)) waitForDeleteToFinish(t, deleteFinished) assert.NoError(t, err) - mock.AssertExpectationsForObjects(t, scheduledPodMock, unschedulablePodMock, + mock.AssertExpectationsForObjects(t, scheduledPodMock, scheduledAndUnschedulablePodMock, podDisruptionBudgetListerMock, daemonSetListerMock, onScaleUpMock, onScaleDownMock) } @@ -519,7 +529,7 @@ func TestStaticAutoscalerRunOnceWithALongUnregisteredNode(t *testing.T) { readyNodeLister := kubernetes.NewTestNodeLister(nil) allNodeLister := kubernetes.NewTestNodeLister(nil) scheduledPodMock := &podListerMock{} - unschedulablePodMock := &podListerMock{} + scheduledAndUnschedulablePodMock := &scheduledAndUnschedulablePodListerMock{} podDisruptionBudgetListerMock := &podDisruptionBudgetListerMock{} daemonSetListerMock := &daemonSetListerMock{} onScaleUpMock := &onScaleUpMock{} @@ -580,14 +590,14 @@ func TestStaticAutoscalerRunOnceWithALongUnregisteredNode(t *testing.T) { setUpScaleDownActuator(&context, options) listerRegistry := kube_util.NewListerRegistry(allNodeLister, readyNodeLister, scheduledPodMock, - unschedulablePodMock, podDisruptionBudgetListerMock, daemonSetListerMock, + scheduledAndUnschedulablePodMock, podDisruptionBudgetListerMock, 
daemonSetListerMock, nil, nil, nil, nil) context.ListerRegistry = listerRegistry clusterStateConfig := clusterstate.ClusterStateRegistryConfig{ OkTotalUnreadyCount: 1, } - clusterState := clusterstate.NewClusterStateRegistry(provider, clusterStateConfig, context.LogRecorder, NewBackoff(), clusterstate.NewStaticMaxNodeProvisionTimeProvider(options.NodeGroupDefaults.MaxNodeProvisionTime)) + clusterState := clusterstate.NewClusterStateRegistry(provider, clusterStateConfig, context.LogRecorder, NewBackoff(), nodegroupconfig.NewDefaultNodeGroupConfigProcessor(options.NodeGroupDefaults)) // broken node detected as unregistered nodes := []*apiv1.Node{n1} @@ -618,15 +628,15 @@ func TestStaticAutoscalerRunOnceWithALongUnregisteredNode(t *testing.T) { // Scale up. readyNodeLister.SetNodes([]*apiv1.Node{n1}) allNodeLister.SetNodes([]*apiv1.Node{n1}) - scheduledPodMock.On("List").Return([]*apiv1.Pod{p1}, nil).Twice() // 1 to get pods + 1 per nodegroup when building nodeInfo map - unschedulablePodMock.On("List").Return([]*apiv1.Pod{p2}, nil).Once() + scheduledPodMock.On("List").Return([]*apiv1.Pod{p1}, nil).Once() + scheduledAndUnschedulablePodMock.On("List").Return([]*apiv1.Pod{p1}, []*apiv1.Pod{p2}, nil).Once() daemonSetListerMock.On("List", labels.Everything()).Return([]*appsv1.DaemonSet{}, nil).Once() podDisruptionBudgetListerMock.On("List").Return([]*policyv1.PodDisruptionBudget{}, nil).Once() onScaleUpMock.On("ScaleUp", "ng1", 1).Return(nil).Once() err = autoscaler.RunOnce(later.Add(time.Hour)) assert.NoError(t, err) - mock.AssertExpectationsForObjects(t, scheduledPodMock, unschedulablePodMock, + mock.AssertExpectationsForObjects(t, scheduledPodMock, scheduledAndUnschedulablePodMock, podDisruptionBudgetListerMock, daemonSetListerMock, onScaleUpMock, onScaleDownMock) // Remove broken node after going over min size @@ -635,8 +645,8 @@ func TestStaticAutoscalerRunOnceWithALongUnregisteredNode(t *testing.T) { readyNodeLister.SetNodes([]*apiv1.Node{n1, n2}) 
allNodeLister.SetNodes([]*apiv1.Node{n1, n2}) - scheduledPodMock.On("List").Return([]*apiv1.Pod{p1}, nil).Twice() - unschedulablePodMock.On("List").Return([]*apiv1.Pod{p2}, nil).Once() + scheduledPodMock.On("List").Return([]*apiv1.Pod{p1}, nil).Once() + scheduledAndUnschedulablePodMock.On("List").Return([]*apiv1.Pod{p1}, []*apiv1.Pod{p2}, nil).Once() onScaleDownMock.On("ScaleDown", "ng1", "broken").Return(nil).Once() daemonSetListerMock.On("List", labels.Everything()).Return([]*appsv1.DaemonSet{}, nil).Once() podDisruptionBudgetListerMock.On("List").Return([]*policyv1.PodDisruptionBudget{}, nil).Once() @@ -644,7 +654,7 @@ func TestStaticAutoscalerRunOnceWithALongUnregisteredNode(t *testing.T) { err = autoscaler.RunOnce(later.Add(2 * time.Hour)) waitForDeleteToFinish(t, deleteFinished) assert.NoError(t, err) - mock.AssertExpectationsForObjects(t, scheduledPodMock, unschedulablePodMock, + mock.AssertExpectationsForObjects(t, scheduledPodMock, scheduledAndUnschedulablePodMock, podDisruptionBudgetListerMock, daemonSetListerMock, onScaleUpMock, onScaleDownMock) } @@ -652,7 +662,7 @@ func TestStaticAutoscalerRunOncePodsWithPriorities(t *testing.T) { readyNodeLister := kubernetes.NewTestNodeLister(nil) allNodeLister := kubernetes.NewTestNodeLister(nil) scheduledPodMock := &podListerMock{} - unschedulablePodMock := &podListerMock{} + scheduledAndUnschedulablePodMock := &scheduledAndUnschedulablePodListerMock{} podDisruptionBudgetListerMock := &podDisruptionBudgetListerMock{} daemonSetListerMock := &daemonSetListerMock{} onScaleUpMock := &onScaleUpMock{} @@ -740,7 +750,7 @@ func TestStaticAutoscalerRunOncePodsWithPriorities(t *testing.T) { setUpScaleDownActuator(&context, options) listerRegistry := kube_util.NewListerRegistry(allNodeLister, readyNodeLister, scheduledPodMock, - unschedulablePodMock, podDisruptionBudgetListerMock, daemonSetListerMock, + scheduledAndUnschedulablePodMock, podDisruptionBudgetListerMock, daemonSetListerMock, nil, nil, nil, nil) 
context.ListerRegistry = listerRegistry @@ -749,7 +759,7 @@ func TestStaticAutoscalerRunOncePodsWithPriorities(t *testing.T) { } processors := NewTestProcessors(&context) - clusterState := clusterstate.NewClusterStateRegistry(provider, clusterStateConfig, context.LogRecorder, NewBackoff(), clusterstate.NewStaticMaxNodeProvisionTimeProvider(options.NodeGroupDefaults.MaxNodeProvisionTime)) + clusterState := clusterstate.NewClusterStateRegistry(provider, clusterStateConfig, context.LogRecorder, NewBackoff(), nodegroupconfig.NewDefaultNodeGroupConfigProcessor(options.NodeGroupDefaults)) sdPlanner, sdActuator := newScaleDownPlannerAndActuator(t, &context, processors, clusterState) suOrchestrator := orchestrator.New() suOrchestrator.Initialize(&context, processors, clusterState, taints.TaintConfig{}) @@ -769,22 +779,22 @@ func TestStaticAutoscalerRunOncePodsWithPriorities(t *testing.T) { // Scale up readyNodeLister.SetNodes([]*apiv1.Node{n1, n2, n3}) allNodeLister.SetNodes([]*apiv1.Node{n1, n2, n3}) - scheduledPodMock.On("List").Return([]*apiv1.Pod{p1, p2, p3}, nil).Times(2) // 1 to get pods + 1 when building nodeInfo map - unschedulablePodMock.On("List").Return([]*apiv1.Pod{p4, p5, p6}, nil).Once() + scheduledPodMock.On("List").Return([]*apiv1.Pod{p1, p2, p3}, nil).Once() + scheduledAndUnschedulablePodMock.On("List").Return([]*apiv1.Pod{p1, p2, p3}, []*apiv1.Pod{p4, p5, p6}, nil).Once() daemonSetListerMock.On("List", labels.Everything()).Return([]*appsv1.DaemonSet{}, nil).Once() podDisruptionBudgetListerMock.On("List").Return([]*policyv1.PodDisruptionBudget{}, nil).Once() onScaleUpMock.On("ScaleUp", "ng2", 1).Return(nil).Once() err = autoscaler.RunOnce(time.Now()) assert.NoError(t, err) - mock.AssertExpectationsForObjects(t, scheduledPodMock, unschedulablePodMock, + mock.AssertExpectationsForObjects(t, scheduledPodMock, scheduledAndUnschedulablePodMock, podDisruptionBudgetListerMock, daemonSetListerMock, onScaleUpMock, onScaleDownMock) // Mark unneeded nodes. 
readyNodeLister.SetNodes([]*apiv1.Node{n1, n2, n3}) allNodeLister.SetNodes([]*apiv1.Node{n1, n2, n3}) - scheduledPodMock.On("List").Return([]*apiv1.Pod{p1, p2, p3}, nil).Times(2) - unschedulablePodMock.On("List").Return([]*apiv1.Pod{p4, p5}, nil).Once() + scheduledPodMock.On("List").Return([]*apiv1.Pod{p1, p2, p3}, nil).Once() + scheduledAndUnschedulablePodMock.On("List").Return([]*apiv1.Pod{p1, p2, p3}, []*apiv1.Pod{p4, p5}, nil).Once() daemonSetListerMock.On("List", labels.Everything()).Return([]*appsv1.DaemonSet{}, nil).Once() podDisruptionBudgetListerMock.On("List").Return([]*policyv1.PodDisruptionBudget{}, nil).Once() @@ -792,14 +802,14 @@ func TestStaticAutoscalerRunOncePodsWithPriorities(t *testing.T) { err = autoscaler.RunOnce(time.Now().Add(2 * time.Hour)) assert.NoError(t, err) - mock.AssertExpectationsForObjects(t, scheduledPodMock, unschedulablePodMock, + mock.AssertExpectationsForObjects(t, scheduledPodMock, scheduledAndUnschedulablePodMock, podDisruptionBudgetListerMock, daemonSetListerMock, onScaleUpMock, onScaleDownMock) // Scale down. 
readyNodeLister.SetNodes([]*apiv1.Node{n1, n2, n3}) allNodeLister.SetNodes([]*apiv1.Node{n1, n2, n3}) - scheduledPodMock.On("List").Return([]*apiv1.Pod{p1, p2, p3, p4}, nil).Times(3) - unschedulablePodMock.On("List").Return([]*apiv1.Pod{p5}, nil).Once() + scheduledPodMock.On("List").Return([]*apiv1.Pod{p1, p2, p3, p4}, nil).Twice() + scheduledAndUnschedulablePodMock.On("List").Return([]*apiv1.Pod{p1, p2, p3, p4}, []*apiv1.Pod{p5}, nil).Once() daemonSetListerMock.On("List", labels.Everything()).Return([]*appsv1.DaemonSet{}, nil).Once() podDisruptionBudgetListerMock.On("List").Return([]*policyv1.PodDisruptionBudget{}, nil).Twice() onScaleDownMock.On("ScaleDown", "ng1", "n1").Return(nil).Once() @@ -809,7 +819,7 @@ func TestStaticAutoscalerRunOncePodsWithPriorities(t *testing.T) { err = autoscaler.RunOnce(time.Now().Add(3 * time.Hour)) waitForDeleteToFinish(t, deleteFinished) assert.NoError(t, err) - mock.AssertExpectationsForObjects(t, scheduledPodMock, unschedulablePodMock, + mock.AssertExpectationsForObjects(t, scheduledPodMock, scheduledAndUnschedulablePodMock, podDisruptionBudgetListerMock, daemonSetListerMock, onScaleUpMock, onScaleDownMock) } @@ -817,7 +827,7 @@ func TestStaticAutoscalerRunOnceWithFilteringOnBinPackingEstimator(t *testing.T) readyNodeLister := kubernetes.NewTestNodeLister(nil) allNodeLister := kubernetes.NewTestNodeLister(nil) scheduledPodMock := &podListerMock{} - unschedulablePodMock := &podListerMock{} + scheduledAndUnschedulablePodMock := &scheduledAndUnschedulablePodListerMock{} podDisruptionBudgetListerMock := &podDisruptionBudgetListerMock{} daemonSetListerMock := &daemonSetListerMock{} onScaleUpMock := &onScaleUpMock{} @@ -875,7 +885,7 @@ func TestStaticAutoscalerRunOnceWithFilteringOnBinPackingEstimator(t *testing.T) setUpScaleDownActuator(&context, options) listerRegistry := kube_util.NewListerRegistry(allNodeLister, readyNodeLister, scheduledPodMock, - unschedulablePodMock, podDisruptionBudgetListerMock, daemonSetListerMock, + 
scheduledAndUnschedulablePodMock, podDisruptionBudgetListerMock, daemonSetListerMock, nil, nil, nil, nil) context.ListerRegistry = listerRegistry @@ -884,7 +894,7 @@ func TestStaticAutoscalerRunOnceWithFilteringOnBinPackingEstimator(t *testing.T) } processors := NewTestProcessors(&context) - clusterState := clusterstate.NewClusterStateRegistry(provider, clusterStateConfig, context.LogRecorder, NewBackoff(), clusterstate.NewStaticMaxNodeProvisionTimeProvider(options.NodeGroupDefaults.MaxNodeProvisionTime)) + clusterState := clusterstate.NewClusterStateRegistry(provider, clusterStateConfig, context.LogRecorder, NewBackoff(), nodegroupconfig.NewDefaultNodeGroupConfigProcessor(options.NodeGroupDefaults)) sdPlanner, sdActuator := newScaleDownPlannerAndActuator(t, &context, processors, clusterState) autoscaler := &StaticAutoscaler{ @@ -901,15 +911,15 @@ func TestStaticAutoscalerRunOnceWithFilteringOnBinPackingEstimator(t *testing.T) // Scale up readyNodeLister.SetNodes([]*apiv1.Node{n1, n2}) allNodeLister.SetNodes([]*apiv1.Node{n1, n2}) - scheduledPodMock.On("List").Return([]*apiv1.Pod{p3, p4}, nil).Times(2) // 1 to get pods + 1 when building nodeInfo map - unschedulablePodMock.On("List").Return([]*apiv1.Pod{p1}, nil).Once() + scheduledPodMock.On("List").Return([]*apiv1.Pod{p3, p4}, nil).Once() + scheduledAndUnschedulablePodMock.On("List").Return([]*apiv1.Pod{p3, p4}, []*apiv1.Pod{p1}, nil).Once() daemonSetListerMock.On("List", labels.Everything()).Return([]*appsv1.DaemonSet{}, nil).Once() podDisruptionBudgetListerMock.On("List").Return([]*policyv1.PodDisruptionBudget{}, nil).Once() err = autoscaler.RunOnce(time.Now()) assert.NoError(t, err) - mock.AssertExpectationsForObjects(t, scheduledPodMock, unschedulablePodMock, + mock.AssertExpectationsForObjects(t, scheduledPodMock, scheduledAndUnschedulablePodMock, podDisruptionBudgetListerMock, daemonSetListerMock, onScaleUpMock, onScaleDownMock) } @@ -917,7 +927,7 @@ func 
TestStaticAutoscalerRunOnceWithFilteringOnUpcomingNodesEnabledNoScaleUp(t * readyNodeLister := kubernetes.NewTestNodeLister(nil) allNodeLister := kubernetes.NewTestNodeLister(nil) scheduledPodMock := &podListerMock{} - unschedulablePodMock := &podListerMock{} + scheduledAndUnschedulablePodMock := &scheduledAndUnschedulablePodListerMock{} podDisruptionBudgetListerMock := &podDisruptionBudgetListerMock{} daemonSetListerMock := &daemonSetListerMock{} onScaleUpMock := &onScaleUpMock{} @@ -975,7 +985,7 @@ func TestStaticAutoscalerRunOnceWithFilteringOnUpcomingNodesEnabledNoScaleUp(t * setUpScaleDownActuator(&context, options) listerRegistry := kube_util.NewListerRegistry(allNodeLister, readyNodeLister, scheduledPodMock, - unschedulablePodMock, podDisruptionBudgetListerMock, daemonSetListerMock, + scheduledAndUnschedulablePodMock, podDisruptionBudgetListerMock, daemonSetListerMock, nil, nil, nil, nil) context.ListerRegistry = listerRegistry @@ -984,7 +994,7 @@ func TestStaticAutoscalerRunOnceWithFilteringOnUpcomingNodesEnabledNoScaleUp(t * } processors := NewTestProcessors(&context) - clusterState := clusterstate.NewClusterStateRegistry(provider, clusterStateConfig, context.LogRecorder, NewBackoff(), clusterstate.NewStaticMaxNodeProvisionTimeProvider(options.NodeGroupDefaults.MaxNodeProvisionTime)) + clusterState := clusterstate.NewClusterStateRegistry(provider, clusterStateConfig, context.LogRecorder, NewBackoff(), nodegroupconfig.NewDefaultNodeGroupConfigProcessor(options.NodeGroupDefaults)) sdPlanner, sdActuator := newScaleDownPlannerAndActuator(t, &context, processors, clusterState) autoscaler := &StaticAutoscaler{ @@ -1001,15 +1011,15 @@ func TestStaticAutoscalerRunOnceWithFilteringOnUpcomingNodesEnabledNoScaleUp(t * // Scale up readyNodeLister.SetNodes([]*apiv1.Node{n2, n3}) allNodeLister.SetNodes([]*apiv1.Node{n2, n3}) - scheduledPodMock.On("List").Return([]*apiv1.Pod{p2, p3}, nil).Times(2) // 1 to get pods + 1 when building nodeInfo map - 
unschedulablePodMock.On("List").Return([]*apiv1.Pod{p1}, nil).Once() + scheduledPodMock.On("List").Return([]*apiv1.Pod{p2, p3}, nil).Once() + scheduledAndUnschedulablePodMock.On("List").Return([]*apiv1.Pod{p2, p3}, []*apiv1.Pod{p1}, nil).Once() daemonSetListerMock.On("List", labels.Everything()).Return([]*appsv1.DaemonSet{}, nil).Once() podDisruptionBudgetListerMock.On("List").Return([]*policyv1.PodDisruptionBudget{}, nil).Once() err = autoscaler.RunOnce(time.Now()) assert.NoError(t, err) - mock.AssertExpectationsForObjects(t, scheduledPodMock, unschedulablePodMock, + mock.AssertExpectationsForObjects(t, scheduledPodMock, scheduledAndUnschedulablePodMock, podDisruptionBudgetListerMock, daemonSetListerMock, onScaleUpMock, onScaleDownMock) } @@ -1041,9 +1051,8 @@ func TestStaticAutoscalerInstanceCreationErrors(t *testing.T) { OkTotalUnreadyCount: 1, } - staticMaxNodeProvisionTimeProvider := clusterstate.NewStaticMaxNodeProvisionTimeProvider(options.NodeGroupDefaults.MaxNodeProvisionTime) - - clusterState := clusterstate.NewClusterStateRegistry(provider, clusterStateConfig, context.LogRecorder, NewBackoff(), staticMaxNodeProvisionTimeProvider) + nodeGroupConfigProcessor := nodegroupconfig.NewDefaultNodeGroupConfigProcessor(options.NodeGroupDefaults) + clusterState := clusterstate.NewClusterStateRegistry(provider, clusterStateConfig, context.LogRecorder, NewBackoff(), nodeGroupConfigProcessor) autoscaler := &StaticAutoscaler{ AutoscalingContext: &context, clusterStateRegistry: clusterState, @@ -1281,7 +1290,7 @@ func TestStaticAutoscalerInstanceCreationErrors(t *testing.T) { return false }, nil) - clusterState = clusterstate.NewClusterStateRegistry(provider, clusterStateConfig, context.LogRecorder, NewBackoff(), staticMaxNodeProvisionTimeProvider) + clusterState = clusterstate.NewClusterStateRegistry(provider, clusterStateConfig, context.LogRecorder, NewBackoff(), nodeGroupConfigProcessor) clusterState.RefreshCloudProviderNodeInstancesCache() 
autoscaler.clusterStateRegistry = clusterState @@ -1293,6 +1302,74 @@ func TestStaticAutoscalerInstanceCreationErrors(t *testing.T) { assert.False(t, removedNodes) assert.Error(t, err) nodeGroupC.AssertNumberOfCalls(t, "DeleteNodes", 0) + + nodeGroupAtomic := &mockprovider.NodeGroup{} + nodeGroupAtomic.On("Exist").Return(true) + nodeGroupAtomic.On("Autoprovisioned").Return(false) + nodeGroupAtomic.On("TargetSize").Return(3, nil) + nodeGroupAtomic.On("Id").Return("D") + nodeGroupAtomic.On("DeleteNodes", mock.Anything).Return(nil) + nodeGroupAtomic.On("GetOptions", options.NodeGroupDefaults).Return( + &config.NodeGroupAutoscalingOptions{ + ZeroOrMaxNodeScaling: true, + }, nil) + nodeGroupAtomic.On("Nodes").Return([]cloudprovider.Instance{ + { + Id: "D1", + Status: &cloudprovider.InstanceStatus{ + State: cloudprovider.InstanceRunning, + }, + }, + { + Id: "D2", + Status: &cloudprovider.InstanceStatus{ + State: cloudprovider.InstanceRunning, + }, + }, + { + Id: "D3", + Status: &cloudprovider.InstanceStatus{ + State: cloudprovider.InstanceCreating, + ErrorInfo: &cloudprovider.InstanceErrorInfo{ + ErrorClass: cloudprovider.OtherErrorClass, + ErrorCode: "OTHER", + }, + }, + }, + }, nil).Twice() + provider = &mockprovider.CloudProvider{} + provider.On("NodeGroups").Return([]cloudprovider.NodeGroup{nodeGroupAtomic}) + provider.On("NodeGroupForNode", mock.Anything).Return( + func(node *apiv1.Node) cloudprovider.NodeGroup { + if strings.HasPrefix(node.Spec.ProviderID, "D") { + return nodeGroupAtomic + } + return nil + }, nil).Times(3) + + clusterState = clusterstate.NewClusterStateRegistry(provider, clusterStateConfig, context.LogRecorder, NewBackoff(), nodeGroupConfigProcessor) + clusterState.RefreshCloudProviderNodeInstancesCache() + autoscaler.CloudProvider = provider + autoscaler.clusterStateRegistry = clusterState + // propagate nodes info in cluster state + clusterState.UpdateNodes([]*apiv1.Node{}, nil, now) + + // delete nodes with create errors + removedNodes, err = 
autoscaler.deleteCreatedNodesWithErrors() + assert.True(t, removedNodes) + assert.NoError(t, err) + + nodeGroupAtomic.AssertCalled(t, "DeleteNodes", mock.MatchedBy( + func(nodes []*apiv1.Node) bool { + if len(nodes) != 3 { + return false + } + names := make(map[string]bool) + for _, node := range nodes { + names[node.Spec.ProviderID] = true + } + return names["D1"] && names["D2"] && names["D3"] + })) } type candidateTrackingFakePlanner struct { @@ -1400,7 +1477,9 @@ func TestStaticAutoscalerUpcomingScaleDownCandidates(t *testing.T) { readyNodeLister := kubernetes.NewTestNodeLister(readyNodes) daemonSetLister, err := kubernetes.NewTestDaemonSetLister(nil) assert.NoError(t, err) - listerRegistry := kube_util.NewListerRegistry(allNodeLister, readyNodeLister, kubernetes.NewTestPodLister(nil), kubernetes.NewTestPodLister(nil), kubernetes.NewTestPodDisruptionBudgetLister(nil), daemonSetLister, nil, nil, nil, nil) + listerRegistry := kube_util.NewListerRegistry(allNodeLister, readyNodeLister, + kubernetes.NewTestPodLister(nil), kubernetes.NewTestScheduledAndUnschedulablePodLister(nil, nil), + kubernetes.NewTestPodDisruptionBudgetLister(nil), daemonSetLister, nil, nil, nil, nil) // Create context with minimal options that guarantee we reach the tested logic. // We're only testing the input to UpdateClusterState which should be called whenever scale-down is enabled, other options shouldn't matter. @@ -1411,10 +1490,10 @@ func TestStaticAutoscalerUpcomingScaleDownCandidates(t *testing.T) { // Create CSR with unhealthy cluster protection effectively disabled, to guarantee we reach the tested logic. 
csrConfig := clusterstate.ClusterStateRegistryConfig{OkTotalUnreadyCount: nodeGroupCount * unreadyNodesCount} - csr := clusterstate.NewClusterStateRegistry(provider, csrConfig, ctx.LogRecorder, NewBackoff(), clusterstate.NewStaticMaxNodeProvisionTimeProvider(15*time.Minute)) + csr := clusterstate.NewClusterStateRegistry(provider, csrConfig, ctx.LogRecorder, NewBackoff(), nodegroupconfig.NewDefaultNodeGroupConfigProcessor(config.NodeGroupAutoscalingOptions{MaxNodeProvisionTime: 15 * time.Minute})) // Setting the Actuator is necessary for testing any scale-down logic, it shouldn't have anything to do in this test. - actuator := actuation.NewActuator(&ctx, csr, deletiontracker.NewNodeDeletionTracker(0*time.Second), simulator.NodeDeleteOptions{}) + actuator := actuation.NewActuator(&ctx, csr, deletiontracker.NewNodeDeletionTracker(0*time.Second), simulator.NodeDeleteOptions{}, NewTestProcessors(&ctx).NodeGroupConfigProcessor) ctx.ScaleDownActuator = actuator // Fake planner that keeps track of the scale-down candidates passed to UpdateClusterState. 
@@ -1508,8 +1587,7 @@ func TestRemoveFixNodeTargetSize(t *testing.T) { clusterState := clusterstate.NewClusterStateRegistry(provider, clusterstate.ClusterStateRegistryConfig{ MaxTotalUnreadyPercentage: 10, OkTotalUnreadyCount: 1, - }, fakeLogRecorder, NewBackoff(), - clusterstate.NewStaticMaxNodeProvisionTimeProvider(context.AutoscalingOptions.NodeGroupDefaults.MaxNodeProvisionTime)) + }, fakeLogRecorder, NewBackoff(), nodegroupconfig.NewDefaultNodeGroupConfigProcessor(context.AutoscalingOptions.NodeGroupDefaults)) err := clusterState.UpdateNodes([]*apiv1.Node{ng1_1}, nil, now.Add(-time.Hour)) assert.NoError(t, err) @@ -1557,27 +1635,96 @@ func TestRemoveOldUnregisteredNodes(t *testing.T) { clusterState := clusterstate.NewClusterStateRegistry(provider, clusterstate.ClusterStateRegistryConfig{ MaxTotalUnreadyPercentage: 10, OkTotalUnreadyCount: 1, - }, fakeLogRecorder, NewBackoff(), - clusterstate.NewStaticMaxNodeProvisionTimeProvider(context.AutoscalingOptions.NodeGroupDefaults.MaxNodeProvisionTime)) + }, fakeLogRecorder, NewBackoff(), nodegroupconfig.NewDefaultNodeGroupConfigProcessor(context.AutoscalingOptions.NodeGroupDefaults)) err := clusterState.UpdateNodes([]*apiv1.Node{ng1_1}, nil, now.Add(-time.Hour)) assert.NoError(t, err) unregisteredNodes := clusterState.GetUnregisteredNodes() assert.Equal(t, 1, len(unregisteredNodes)) + autoscaler := &StaticAutoscaler{ + AutoscalingContext: context, + clusterStateRegistry: clusterState, + } + // Nothing should be removed. The unregistered node is not old enough. - removed, err := removeOldUnregisteredNodes(unregisteredNodes, context, clusterState, now.Add(-50*time.Minute), fakeLogRecorder) + removed, err := autoscaler.removeOldUnregisteredNodes(unregisteredNodes, context, clusterState, now.Add(-50*time.Minute), fakeLogRecorder) assert.NoError(t, err) assert.False(t, removed) // ng1_2 should be removed. 
- removed, err = removeOldUnregisteredNodes(unregisteredNodes, context, clusterState, now, fakeLogRecorder) + removed, err = autoscaler.removeOldUnregisteredNodes(unregisteredNodes, context, clusterState, now, fakeLogRecorder) assert.NoError(t, err) assert.True(t, removed) deletedNode := core_utils.GetStringFromChan(deletedNodes) assert.Equal(t, "ng1/ng1-2", deletedNode) } +func TestRemoveOldUnregisteredNodesAtomic(t *testing.T) { + deletedNodes := make(chan string, 10) + + now := time.Now() + provider := testprovider.NewTestCloudProvider(nil, func(nodegroup string, node string) error { + deletedNodes <- fmt.Sprintf("%s/%s", nodegroup, node) + return nil + }) + provider.AddNodeGroupWithCustomOptions("atomic-ng", 0, 10, 10, &config.NodeGroupAutoscalingOptions{ + MaxNodeProvisionTime: 45 * time.Minute, + ZeroOrMaxNodeScaling: true, + }) + regNode := BuildTestNode("atomic-ng-0", 1000, 1000) + regNode.Spec.ProviderID = "atomic-ng-0" + provider.AddNode("atomic-ng", regNode) + for i := 1; i < 10; i++ { + node := BuildTestNode(fmt.Sprintf("atomic-ng-%v", i), 1000, 1000) + node.Spec.ProviderID = fmt.Sprintf("atomic-ng-%v", i) + provider.AddNode("atomic-ng", node) + } + + fakeClient := &fake.Clientset{} + fakeLogRecorder, _ := clusterstate_utils.NewStatusMapRecorder(fakeClient, "kube-system", kube_record.NewFakeRecorder(5), false, "my-cool-configmap") + + context := &context.AutoscalingContext{ + AutoscalingOptions: config.AutoscalingOptions{ + NodeGroupDefaults: config.NodeGroupAutoscalingOptions{ + MaxNodeProvisionTime: time.Hour, + }, + }, + CloudProvider: provider, + } + clusterState := clusterstate.NewClusterStateRegistry(provider, clusterstate.ClusterStateRegistryConfig{ + MaxTotalUnreadyPercentage: 10, + OkTotalUnreadyCount: 1, + }, fakeLogRecorder, NewBackoff(), nodegroupconfig.NewDefaultNodeGroupConfigProcessor(context.AutoscalingOptions.NodeGroupDefaults)) + err := clusterState.UpdateNodes([]*apiv1.Node{regNode}, nil, now.Add(-time.Hour)) + assert.NoError(t, err) 
+ + unregisteredNodes := clusterState.GetUnregisteredNodes() + assert.Equal(t, 9, len(unregisteredNodes)) + + autoscaler := &StaticAutoscaler{ + AutoscalingContext: context, + clusterStateRegistry: clusterState, + } + + // Nothing should be removed. The unregistered node is not old enough. + removed, err := autoscaler.removeOldUnregisteredNodes(unregisteredNodes, context, clusterState, now.Add(-50*time.Minute), fakeLogRecorder) + assert.NoError(t, err) + assert.False(t, removed) + + // unregNode is long unregistered, so all of the nodes should be removed due to ZeroOrMaxNodeScaling option + removed, err = autoscaler.removeOldUnregisteredNodes(unregisteredNodes, context, clusterState, now, fakeLogRecorder) + assert.NoError(t, err) + assert.True(t, removed) + wantNames, deletedNames := []string{}, []string{} + for i := 0; i < 10; i++ { + deletedNames = append(deletedNames, core_utils.GetStringFromChan(deletedNodes)) + wantNames = append(wantNames, fmt.Sprintf("atomic-ng/atomic-ng-%v", i)) + } + + assert.ElementsMatch(t, wantNames, deletedNames) +} + func TestSubtractNodes(t *testing.T) { ns := make([]*apiv1.Node, 5) for i := 0; i < len(ns); i++ { @@ -1742,7 +1889,7 @@ func newScaleDownPlannerAndActuator(t *testing.T, ctx *context.AutoscalingContex } ndt := deletiontracker.NewNodeDeletionTracker(0 * time.Second) sd := legacy.NewScaleDown(ctx, p, ndt, deleteOptions) - actuator := actuation.NewActuator(ctx, cs, ndt, deleteOptions) + actuator := actuation.NewActuator(ctx, cs, ndt, deleteOptions, p.NodeGroupConfigProcessor) wrapper := legacy.NewScaleDownWrapper(sd, actuator) return wrapper, wrapper } diff --git a/cluster-autoscaler/core/test/common.go b/cluster-autoscaler/core/test/common.go index 6a673d21a4a6..1c67f4b51112 100644 --- a/cluster-autoscaler/core/test/common.go +++ b/cluster-autoscaler/core/test/common.go @@ -22,10 +22,6 @@ import ( "testing" "time" - "k8s.io/autoscaler/cluster-autoscaler/core/scaledown/pdb" - 
"k8s.io/autoscaler/cluster-autoscaler/debuggingsnapshot" - - "k8s.io/apimachinery/pkg/api/resource" "k8s.io/autoscaler/cluster-autoscaler/cloudprovider" testcloudprovider "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/test" "k8s.io/autoscaler/cluster-autoscaler/clusterstate/utils" @@ -33,11 +29,15 @@ import ( "k8s.io/autoscaler/cluster-autoscaler/context" "k8s.io/autoscaler/cluster-autoscaler/core/podlistprocessor" "k8s.io/autoscaler/cluster-autoscaler/core/scaledown/deletiontracker" + "k8s.io/autoscaler/cluster-autoscaler/core/scaledown/pdb" + "k8s.io/autoscaler/cluster-autoscaler/debuggingsnapshot" "k8s.io/autoscaler/cluster-autoscaler/estimator" + "k8s.io/autoscaler/cluster-autoscaler/expander" "k8s.io/autoscaler/cluster-autoscaler/expander/random" "k8s.io/autoscaler/cluster-autoscaler/metrics" "k8s.io/autoscaler/cluster-autoscaler/processors" "k8s.io/autoscaler/cluster-autoscaler/processors/actionablecluster" + "k8s.io/autoscaler/cluster-autoscaler/processors/binpacking" processor_callbacks "k8s.io/autoscaler/cluster-autoscaler/processors/callbacks" "k8s.io/autoscaler/cluster-autoscaler/processors/customresources" "k8s.io/autoscaler/cluster-autoscaler/processors/nodegroupconfig" @@ -50,14 +50,14 @@ import ( "k8s.io/autoscaler/cluster-autoscaler/processors/status" "k8s.io/autoscaler/cluster-autoscaler/simulator/clustersnapshot" "k8s.io/autoscaler/cluster-autoscaler/simulator/predicatechecker" + "k8s.io/autoscaler/cluster-autoscaler/utils/backoff" "k8s.io/autoscaler/cluster-autoscaler/utils/errors" kube_util "k8s.io/autoscaler/cluster-autoscaler/utils/kubernetes" "k8s.io/autoscaler/cluster-autoscaler/utils/labels" "github.com/stretchr/testify/assert" - apiv1 "k8s.io/api/core/v1" - "k8s.io/autoscaler/cluster-autoscaler/utils/backoff" + "k8s.io/apimachinery/pkg/api/resource" kube_client "k8s.io/client-go/kubernetes" kube_record "k8s.io/client-go/tools/record" schedulerframework "k8s.io/kubernetes/pkg/scheduler/framework" @@ -102,9 +102,46 @@ type 
ScaleTestConfig struct { ExpectedScaleDownCount int } +// NodeGroupConfig is a node group config used in tests +type NodeGroupConfig struct { + Name string + MinSize int + MaxSize int +} + +// NodeTemplateConfig is a structure to provide node info in tests +type NodeTemplateConfig struct { + MachineType string + NodeInfo *schedulerframework.NodeInfo + NodeGroupName string +} + +// ScaleUpTestConfig represents a config of a scale test +type ScaleUpTestConfig struct { + Groups []NodeGroupConfig + Nodes []NodeConfig + Pods []PodConfig + ExtraPods []PodConfig + OnScaleUp testcloudprovider.OnScaleUpFunc + ExpansionOptionToChoose *GroupSizeChange + Options *config.AutoscalingOptions + NodeTemplateConfigs map[string]*NodeTemplateConfig +} + +// ScaleUpTestResult represents a node groups scale up result +type ScaleUpTestResult struct { + ScaleUpError errors.AutoscalerError + ScaleUpStatus ScaleUpStatusInfo + GroupSizeChanges []GroupSizeChange + ExpansionOptions []GroupSizeChange + Events []string + GroupTargetSizes map[string]int +} + // ScaleTestResults contains results of a scale test type ScaleTestResults struct { ExpansionOptions []GroupSizeChange + GroupTargetSizes map[string]int FinalOption GroupSizeChange NoScaleUpReason string FinalScaleDowns []string @@ -139,6 +176,7 @@ func NewTestProcessors(context *context.AutoscalingContext) *processors.Autoscal return &processors.AutoscalingProcessors{ PodListProcessor: podlistprocessor.NewDefaultPodListProcessor(context.PredicateChecker), NodeGroupListProcessor: &nodegroups.NoOpNodeGroupListProcessor{}, + BinpackingLimiter: binpacking.NewDefaultBinpackingLimiter(), NodeGroupSetProcessor: nodegroupset.NewDefaultNodeGroupSetProcessor([]string{}, config.NodeGroupDifferenceRatios{}), ScaleDownSetProcessor: nodes.NewPostFilteringScaleDownNodeProcessor(), // TODO(bskiba): change scale up test so that this can be a NoOpProcessor @@ -148,7 +186,7 @@ func NewTestProcessors(context *context.AutoscalingContext) *processors.Autoscal 
NodeGroupManager: nodegroups.NewDefaultNodeGroupManager(), NodeInfoProcessor: nodeinfos.NewDefaultNodeInfoProcessor(), TemplateNodeInfoProvider: nodeinfosprovider.NewDefaultTemplateNodeInfoProvider(nil, false), - NodeGroupConfigProcessor: nodegroupconfig.NewDefaultNodeGroupConfigProcessor(), + NodeGroupConfigProcessor: nodegroupconfig.NewDefaultNodeGroupConfigProcessor(context.NodeGroupDefaults), CustomResourcesProcessor: customresources.NewDefaultCustomResourcesProcessor(), ActionableClusterProcessor: actionablecluster.NewDefaultActionableClusterProcessor(), ScaleDownCandidatesNotifier: scaledowncandidates.NewObserversList(), @@ -159,7 +197,8 @@ func NewTestProcessors(context *context.AutoscalingContext) *processors.Autoscal func NewScaleTestAutoscalingContext( options config.AutoscalingOptions, fakeClient kube_client.Interface, listers kube_util.ListerRegistry, provider cloudprovider.CloudProvider, - processorCallbacks processor_callbacks.ProcessorCallbacks, debuggingSnapshotter debuggingsnapshot.DebuggingSnapshotter) (context.AutoscalingContext, error) { + processorCallbacks processor_callbacks.ProcessorCallbacks, debuggingSnapshotter debuggingsnapshot.DebuggingSnapshotter, +) (context.AutoscalingContext, error) { // Not enough buffer space causes the test to hang without printing any logs. // This is not useful. fakeRecorder := kube_record.NewFakeRecorder(100) @@ -169,7 +208,12 @@ func NewScaleTestAutoscalingContext( } // Ignoring error here is safe - if a test doesn't specify valid estimatorName, // it either doesn't need one, or should fail when it turns out to be nil. 
- estimatorBuilder, _ := estimator.NewEstimatorBuilder(options.EstimatorName, estimator.NewThresholdBasedEstimationLimiter(0, 0)) + estimatorBuilder, _ := estimator.NewEstimatorBuilder( + options.EstimatorName, + estimator.NewThresholdBasedEstimationLimiter(nil), + estimator.NewDecreasingPodOrderer(), + /* EstimationAnalyserFunc */ nil, + ) predicateChecker, err := predicatechecker.NewTestPredicateChecker() if err != nil { return context.AutoscalingContext{}, err @@ -279,8 +323,8 @@ type MockAutoprovisioningNodeGroupListProcessor struct { // Process extends the list of node groups func (p *MockAutoprovisioningNodeGroupListProcessor) Process(context *context.AutoscalingContext, nodeGroups []cloudprovider.NodeGroup, nodeInfos map[string]*schedulerframework.NodeInfo, - unschedulablePods []*apiv1.Pod) ([]cloudprovider.NodeGroup, map[string]*schedulerframework.NodeInfo, error) { - + unschedulablePods []*apiv1.Pod, +) ([]cloudprovider.NodeGroup, map[string]*schedulerframework.NodeInfo, error) { machines, err := context.CloudProvider.GetAvailableMachineTypes() assert.NoError(p.T, err) @@ -300,8 +344,92 @@ func (p *MockAutoprovisioningNodeGroupListProcessor) Process(context *context.Au func (p *MockAutoprovisioningNodeGroupListProcessor) CleanUp() { } +// MockBinpackingLimiter is a fake BinpackingLimiter to be used in tests. +type MockBinpackingLimiter struct { + requiredExpansionOptions int +} + +// InitBinpacking initialises the MockBinpackingLimiter and sets requiredExpansionOptions to 1. +func (p *MockBinpackingLimiter) InitBinpacking(context *context.AutoscalingContext, nodeGroups []cloudprovider.NodeGroup) { + p.requiredExpansionOptions = 1 +} + +// StopBinpacking stops the binpacking early, if we already have requiredExpansionOptions i.e. 1. 
+func (p *MockBinpackingLimiter) StopBinpacking(context *context.AutoscalingContext, evaluatedOptions []expander.Option) bool { + return len(evaluatedOptions) == p.requiredExpansionOptions +} + +// MarkProcessed is here to satisfy the interface. +func (p *MockBinpackingLimiter) MarkProcessed(context *context.AutoscalingContext, nodegroupId string) { + +} + // NewBackoff creates a new backoff object func NewBackoff() backoff.Backoff { return backoff.NewIdBasedExponentialBackoff(5*time.Minute, /*InitialNodeGroupBackoffDuration*/ 30*time.Minute /*MaxNodeGroupBackoffDuration*/, 3*time.Hour /*NodeGroupBackoffResetTimeout*/) } + +// To implement expander.Strategy, BestOption method must have a struct receiver. +// This prevents it from modifying fields of reportingStrategy, so we need a thin +// pointer wrapper for mutable parts. +type expanderResults struct { + inputOptions []GroupSizeChange +} + +// MockReportingStrategy implements expander.Strategy +type MockReportingStrategy struct { + defaultStrategy expander.Strategy + optionToChoose *GroupSizeChange + t *testing.T + results *expanderResults +} + +// NewMockRepotingStrategy creates an expander strategy with reporting and mocking capabilities. +func NewMockRepotingStrategy(t *testing.T, optionToChoose *GroupSizeChange) *MockReportingStrategy { + return &MockReportingStrategy{ + defaultStrategy: random.NewStrategy(), + results: &expanderResults{}, + optionToChoose: optionToChoose, + t: t, + } +} + +// LastInputOptions provides access to expansion options passed as an input in recent strategy execution +func (r *MockReportingStrategy) LastInputOptions() []GroupSizeChange { + return r.results.inputOptions +} + +// BestOption satisfies the Strategy interface. Picks the best option from those passed as an argument. +// When parameter optionToChoose is defined, it's picked as the best one. +// Otherwise, random option is used. 
+func (r *MockReportingStrategy) BestOption(options []expander.Option, nodeInfo map[string]*schedulerframework.NodeInfo) *expander.Option { + r.results.inputOptions = expanderOptionsToGroupSizeChanges(options) + if r.optionToChoose == nil { + return r.defaultStrategy.BestOption(options, nodeInfo) + } + for _, option := range options { + groupSizeChange := expanderOptionToGroupSizeChange(option) + if groupSizeChange == *r.optionToChoose { + return &option + } + } + assert.Fail(r.t, "did not find expansionOptionToChoose %+v", r.optionToChoose) + return nil +} + +func expanderOptionsToGroupSizeChanges(options []expander.Option) []GroupSizeChange { + groupSizeChanges := make([]GroupSizeChange, 0, len(options)) + for _, option := range options { + groupSizeChange := expanderOptionToGroupSizeChange(option) + groupSizeChanges = append(groupSizeChanges, groupSizeChange) + } + return groupSizeChanges +} + +func expanderOptionToGroupSizeChange(option expander.Option) GroupSizeChange { + groupName := option.NodeGroup.Id() + groupSizeIncrement := option.NodeCount + scaleUpOption := GroupSizeChange{GroupName: groupName, SizeChange: groupSizeIncrement} + return scaleUpOption +} diff --git a/cluster-autoscaler/estimator/binpacking_estimator.go b/cluster-autoscaler/estimator/binpacking_estimator.go index 395cef24b9c8..2f7760cbe0a0 100644 --- a/cluster-autoscaler/estimator/binpacking_estimator.go +++ b/cluster-autoscaler/estimator/binpacking_estimator.go @@ -18,10 +18,8 @@ package estimator import ( "fmt" - "sort" apiv1 "k8s.io/api/core/v1" - "k8s.io/apimachinery/pkg/api/resource" "k8s.io/autoscaler/cluster-autoscaler/cloudprovider" "k8s.io/autoscaler/cluster-autoscaler/simulator/clustersnapshot" "k8s.io/autoscaler/cluster-autoscaler/simulator/predicatechecker" @@ -30,32 +28,39 @@ import ( schedulerframework "k8s.io/kubernetes/pkg/scheduler/framework" ) -// podInfo contains Pod and score that corresponds to how important it is to handle the pod first. 
-type podInfo struct { - score float64 - pod *apiv1.Pod -} - // BinpackingNodeEstimator estimates the number of needed nodes to handle the given amount of pods. type BinpackingNodeEstimator struct { - predicateChecker predicatechecker.PredicateChecker - clusterSnapshot clustersnapshot.ClusterSnapshot - limiter EstimationLimiter + predicateChecker predicatechecker.PredicateChecker + clusterSnapshot clustersnapshot.ClusterSnapshot + limiter EstimationLimiter + podOrderer EstimationPodOrderer + context EstimationContext + estimationAnalyserFunc EstimationAnalyserFunc // optional } // NewBinpackingNodeEstimator builds a new BinpackingNodeEstimator. func NewBinpackingNodeEstimator( predicateChecker predicatechecker.PredicateChecker, clusterSnapshot clustersnapshot.ClusterSnapshot, - limiter EstimationLimiter) *BinpackingNodeEstimator { + limiter EstimationLimiter, + podOrderer EstimationPodOrderer, + context EstimationContext, + estimationAnalyserFunc EstimationAnalyserFunc, +) *BinpackingNodeEstimator { return &BinpackingNodeEstimator{ - predicateChecker: predicateChecker, - clusterSnapshot: clusterSnapshot, - limiter: limiter, + predicateChecker: predicateChecker, + clusterSnapshot: clusterSnapshot, + limiter: limiter, + podOrderer: podOrderer, + context: context, + estimationAnalyserFunc: estimationAnalyserFunc, } } -// Estimate implements First Fit Decreasing bin-packing approximation algorithm. +// Estimate implements the First-Fit bin-packing approximation algorithm. +// The ordering of the pods depends on the EstimationPodOrderer; with the +// default DecreasingPodOrderer this becomes the First-Fit Decreasing algorithm. // See https://en.wikipedia.org/wiki/Bin_packing_problem for more details.
// While it is a multi-dimensional bin packing (cpu, mem, ports) in most cases the main dimension // will be cpu thus the estimated overprovisioning of 11/9 * optimal + 6/9 should be @@ -67,11 +72,10 @@ func (e *BinpackingNodeEstimator) Estimate( nodeTemplate *schedulerframework.NodeInfo, nodeGroup cloudprovider.NodeGroup) (int, []*apiv1.Pod) { - e.limiter.StartEstimation(pods, nodeGroup) + e.limiter.StartEstimation(pods, nodeGroup, e.context) defer e.limiter.EndEstimation() - podInfos := calculatePodScore(pods, nodeTemplate) - sort.Slice(podInfos, func(i, j int) bool { return podInfos[i].score > podInfos[j].score }) + pods = e.podOrderer.Order(pods, nodeTemplate, nodeGroup) newNodeNames := make(map[string]bool) newNodesWithPods := make(map[string]bool) @@ -85,19 +89,19 @@ func (e *BinpackingNodeEstimator) Estimate( scheduledPods := []*apiv1.Pod{} lastNodeName := "" - for _, podInfo := range podInfos { + for _, pod := range pods { found := false - nodeName, err := e.predicateChecker.FitsAnyNodeMatching(e.clusterSnapshot, podInfo.pod, func(nodeInfo *schedulerframework.NodeInfo) bool { + nodeName, err := e.predicateChecker.FitsAnyNodeMatching(e.clusterSnapshot, pod, func(nodeInfo *schedulerframework.NodeInfo) bool { return newNodeNames[nodeInfo.Node().Name] }) if err == nil { found = true - if err := e.clusterSnapshot.AddPod(podInfo.pod, nodeName); err != nil { - klog.Errorf("Error adding pod %v.%v to node %v in ClusterSnapshot; %v", podInfo.pod.Namespace, podInfo.pod.Name, nodeName, err) + if err := e.clusterSnapshot.AddPod(pod, nodeName); err != nil { + klog.Errorf("Error adding pod %v.%v to node %v in ClusterSnapshot; %v", pod.Namespace, pod.Name, nodeName, err) return 0, nil } - scheduledPods = append(scheduledPods, podInfo.pod) + scheduledPods = append(scheduledPods, pod) newNodesWithPods[nodeName] = true } @@ -129,17 +133,22 @@ func (e *BinpackingNodeEstimator) Estimate( // Note that this may still fail (ex. 
if topology spreading with zonal topologyKey is used); // in this case we can't help the pending pod. We keep the node in clusterSnapshot to avoid // adding and removing node to snapshot for each such pod. - if err := e.predicateChecker.CheckPredicates(e.clusterSnapshot, podInfo.pod, newNodeName); err != nil { + if err := e.predicateChecker.CheckPredicates(e.clusterSnapshot, pod, newNodeName); err != nil { continue } - if err := e.clusterSnapshot.AddPod(podInfo.pod, newNodeName); err != nil { - klog.Errorf("Error adding pod %v.%v to node %v in ClusterSnapshot; %v", podInfo.pod.Namespace, podInfo.pod.Name, newNodeName, err) + if err := e.clusterSnapshot.AddPod(pod, newNodeName); err != nil { + klog.Errorf("Error adding pod %v.%v to node %v in ClusterSnapshot; %v", pod.Namespace, pod.Name, newNodeName, err) return 0, nil } newNodesWithPods[newNodeName] = true - scheduledPods = append(scheduledPods, podInfo.pod) + scheduledPods = append(scheduledPods, pod) } } + + if e.estimationAnalyserFunc != nil { + e.estimationAnalyserFunc(e.clusterSnapshot, nodeGroup, newNodesWithPods) + } + return len(newNodesWithPods), scheduledPods } @@ -157,37 +166,3 @@ func (e *BinpackingNodeEstimator) addNewNodeToSnapshot( } return newNodeInfo.Node().Name, nil } - -// Calculates score for all pods and returns podInfo structure. -// Score is defined as cpu_sum/node_capacity + mem_sum/node_capacity. -// Pods that have bigger requirements should be processed first, thus have higher scores. 
-func calculatePodScore(pods []*apiv1.Pod, nodeTemplate *schedulerframework.NodeInfo) []*podInfo { - podInfos := make([]*podInfo, 0, len(pods)) - - for _, pod := range pods { - cpuSum := resource.Quantity{} - memorySum := resource.Quantity{} - - for _, container := range pod.Spec.Containers { - if request, ok := container.Resources.Requests[apiv1.ResourceCPU]; ok { - cpuSum.Add(request) - } - if request, ok := container.Resources.Requests[apiv1.ResourceMemory]; ok { - memorySum.Add(request) - } - } - score := float64(0) - if cpuAllocatable, ok := nodeTemplate.Node().Status.Allocatable[apiv1.ResourceCPU]; ok && cpuAllocatable.MilliValue() > 0 { - score += float64(cpuSum.MilliValue()) / float64(cpuAllocatable.MilliValue()) - } - if memAllocatable, ok := nodeTemplate.Node().Status.Allocatable[apiv1.ResourceMemory]; ok && memAllocatable.Value() > 0 { - score += float64(memorySum.Value()) / float64(memAllocatable.Value()) - } - - podInfos = append(podInfos, &podInfo{ - score: score, - pod: pod, - }) - } - return podInfos -} diff --git a/cluster-autoscaler/estimator/binpacking_estimator_test.go b/cluster-autoscaler/estimator/binpacking_estimator_test.go index 3a6a961e6b50..5dc06d06efba 100644 --- a/cluster-autoscaler/estimator/binpacking_estimator_test.go +++ b/cluster-autoscaler/estimator/binpacking_estimator_test.go @@ -105,6 +105,7 @@ func makeNode(cpu int64, mem int64, name string, zone string) *apiv1.Node { } func TestBinpackingEstimate(t *testing.T) { + highResourcePodList := makePods(500, 1000, 0, 0, "", 10) testCases := []struct { name string millicores int64 @@ -114,6 +115,7 @@ func TestBinpackingEstimate(t *testing.T) { topologySpreadingKey string expectNodeCount int expectPodCount int + expectProcessedPods []*apiv1.Pod }{ { name: "simple resource-based binpacking", @@ -148,6 +150,16 @@ func TestBinpackingEstimate(t *testing.T) { expectNodeCount: 5, expectPodCount: 10, }, + { + name: "decreasing ordered pods are processed first", + millicores: 1000, + memory: 
5000, + pods: append(makePods(50, 1000, 0, 0, "", 10), highResourcePodList...), + maxNodes: 5, + expectNodeCount: 5, + expectPodCount: 10, + expectProcessedPods: highResourcePodList, + }, { name: "hostname topology spreading with maxSkew=2 forces 2 pods/node", millicores: 1000, @@ -173,9 +185,9 @@ func TestBinpackingEstimate(t *testing.T) { predicateChecker, err := predicatechecker.NewTestPredicateChecker() assert.NoError(t, err) - limiter := NewThresholdBasedEstimationLimiter(tc.maxNodes, time.Duration(0)) - estimator := NewBinpackingNodeEstimator(predicateChecker, clusterSnapshot, limiter) - + limiter := NewThresholdBasedEstimationLimiter([]Threshold{NewStaticThreshold(tc.maxNodes, time.Duration(0))}) + processor := NewDecreasingPodOrderer() + estimator := NewBinpackingNodeEstimator(predicateChecker, clusterSnapshot, limiter, processor, nil /* EstimationContext */, nil /* EstimationAnalyserFunc */) node := makeNode(tc.millicores, tc.memory, "template", "zone-mars") nodeInfo := schedulerframework.NewNodeInfo() nodeInfo.SetNode(node) @@ -183,6 +195,9 @@ func TestBinpackingEstimate(t *testing.T) { estimatedNodes, estimatedPods := estimator.Estimate(tc.pods, nodeInfo, nil) assert.Equal(t, tc.expectNodeCount, estimatedNodes) assert.Equal(t, tc.expectPodCount, len(estimatedPods)) + if tc.expectProcessedPods != nil { + assert.Equal(t, tc.expectProcessedPods, estimatedPods) + } }) } } diff --git a/cluster-autoscaler/estimator/cluster_capacity_threshold.go b/cluster-autoscaler/estimator/cluster_capacity_threshold.go new file mode 100644 index 000000000000..eeced6886f9d --- /dev/null +++ b/cluster-autoscaler/estimator/cluster_capacity_threshold.go @@ -0,0 +1,52 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. 
+You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package estimator + +import ( + "time" + + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider" +) + +type clusterCapacityThreshold struct { +} + +// NodeLimit returns maximum number of new nodes that can be added to the cluster +// based on its capacity. Possible return values are: +// - -1 when cluster has no available capacity +// - 0 when context or cluster-wide node limit is not set. Return value of 0 means that there is no limit. +// - Any positive number representing maximum possible number of new nodes +func (l *clusterCapacityThreshold) NodeLimit(_ cloudprovider.NodeGroup, context EstimationContext) int { + if context == nil || context.ClusterMaxNodeLimit() == 0 { + return 0 + } + if (context.ClusterMaxNodeLimit() < 0) || (context.ClusterMaxNodeLimit() <= context.CurrentNodeCount()) { + return -1 + } + return context.ClusterMaxNodeLimit() - context.CurrentNodeCount() +} + +// DurationLimit always returns 0 for this threshold, meaning that no limit is set. 
+func (l *clusterCapacityThreshold) DurationLimit(cloudprovider.NodeGroup, EstimationContext) time.Duration { + return 0 +} + +// NewClusterCapacityThreshold returns a Threshold that can be used to limit binpacking +// by available cluster capacity +func NewClusterCapacityThreshold() Threshold { + return &clusterCapacityThreshold{} +} diff --git a/cluster-autoscaler/estimator/cluster_capacity_threshold_test.go b/cluster-autoscaler/estimator/cluster_capacity_threshold_test.go new file mode 100644 index 000000000000..9358fb9a7754 --- /dev/null +++ b/cluster-autoscaler/estimator/cluster_capacity_threshold_test.go @@ -0,0 +1,68 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. 
+*/ + +package estimator + +import ( + "testing" + + "github.com/stretchr/testify/assert" +) + +func TestNewClusterCapacityThreshold(t *testing.T) { + tests := []struct { + name string + wantThreshold int + contextMaxNodes int + contextCurrentNodes int + }{ + { + name: "returns available capacity", + contextMaxNodes: 10, + contextCurrentNodes: 5, + wantThreshold: 5, + }, + { + name: "no threshold is set if cluster capacity is unlimited", + contextMaxNodes: 0, + contextCurrentNodes: 10, + wantThreshold: 0, + }, + { + name: "threshold is negative if cluster has no capacity", + contextMaxNodes: 5, + contextCurrentNodes: 10, + wantThreshold: -1, + }, + { + name: "threshold is negative if cluster node limit is negative", + contextMaxNodes: -5, + contextCurrentNodes: 0, + wantThreshold: -1, + }, + } + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + context := &estimationContext{ + similarNodeGroups: nil, + currentNodeCount: tt.contextCurrentNodes, + clusterMaxNodeLimit: tt.contextMaxNodes, + } + assert.Equal(t, tt.wantThreshold, NewClusterCapacityThreshold().NodeLimit(nil, context)) + assert.True(t, NewClusterCapacityThreshold().DurationLimit(nil, nil) == 0) + }) + } +} diff --git a/cluster-autoscaler/estimator/decreasing_pod_orderer.go b/cluster-autoscaler/estimator/decreasing_pod_orderer.go new file mode 100644 index 000000000000..ac8b930ddd31 --- /dev/null +++ b/cluster-autoscaler/estimator/decreasing_pod_orderer.go @@ -0,0 +1,87 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+See the License for the specific language governing permissions and +limitations under the License. +*/ + +package estimator + +import ( + "sort" + + apiv1 "k8s.io/api/core/v1" + "k8s.io/apimachinery/pkg/api/resource" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider" + "k8s.io/kubernetes/pkg/scheduler/framework" +) + +// podScoreInfo contains Pod and score that corresponds to how important it is to handle the pod first. +type podScoreInfo struct { + score float64 + pod *apiv1.Pod +} + +// DecreasingPodOrderer is the default implementation of the EstimationPodOrderer. +// It sorts pods by their score in decreasing order. +type DecreasingPodOrderer struct { +} + +// NewDecreasingPodOrderer returns a new DecreasingPodOrderer. +func NewDecreasingPodOrderer() *DecreasingPodOrderer { + return &DecreasingPodOrderer{} +} + +// Order sorts the pods based on the size of their resource requests, largest first. +func (d *DecreasingPodOrderer) Order(pods []*apiv1.Pod, nodeTemplate *framework.NodeInfo, _ cloudprovider.NodeGroup) []*apiv1.Pod { + podInfos := make([]*podScoreInfo, 0, len(pods)) + for _, pod := range pods { + podInfos = append(podInfos, d.calculatePodScore(pod, nodeTemplate)) + } + sort.Slice(podInfos, func(i, j int) bool { return podInfos[i].score > podInfos[j].score }) + podList := make([]*apiv1.Pod, 0, len(pods)) + for _, podInfo := range podInfos { + podList = append(podList, podInfo.pod) + } + + return podList +} + +// calculatePodScore computes the score for a pod and returns a podScoreInfo structure. +// Score is defined as cpu_sum/node_capacity + mem_sum/node_capacity. +// Pods that have bigger requirements should be processed first, thus have higher scores.
+func (d *DecreasingPodOrderer) calculatePodScore(pod *apiv1.Pod, nodeTemplate *framework.NodeInfo) *podScoreInfo { + + cpuSum := resource.Quantity{} + memorySum := resource.Quantity{} + + for _, container := range pod.Spec.Containers { + if request, ok := container.Resources.Requests[apiv1.ResourceCPU]; ok { + cpuSum.Add(request) + } + if request, ok := container.Resources.Requests[apiv1.ResourceMemory]; ok { + memorySum.Add(request) + } + } + score := float64(0) + if cpuAllocatable, ok := nodeTemplate.Node().Status.Allocatable[apiv1.ResourceCPU]; ok && cpuAllocatable.MilliValue() > 0 { + score += float64(cpuSum.MilliValue()) / float64(cpuAllocatable.MilliValue()) + } + if memAllocatable, ok := nodeTemplate.Node().Status.Allocatable[apiv1.ResourceMemory]; ok && memAllocatable.Value() > 0 { + score += float64(memorySum.Value()) / float64(memAllocatable.Value()) + } + + return &podScoreInfo{ + score: score, + pod: pod, + } +} diff --git a/cluster-autoscaler/estimator/decreasing_pod_orderer_test.go b/cluster-autoscaler/estimator/decreasing_pod_orderer_test.go new file mode 100644 index 000000000000..07c2e0349075 --- /dev/null +++ b/cluster-autoscaler/estimator/decreasing_pod_orderer_test.go @@ -0,0 +1,66 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. 
+*/ + +package estimator + +import ( + "testing" + + "github.com/stretchr/testify/assert" + apiv1 "k8s.io/api/core/v1" + "k8s.io/autoscaler/cluster-autoscaler/utils/test" + schedulerframework "k8s.io/kubernetes/pkg/scheduler/framework" +) + +func TestPodPriorityProcessor(t *testing.T) { + p1 := test.BuildTestPod("p1", 1, 1) + p2 := test.BuildTestPod("p2", 2, 1) + p3 := test.BuildTestPod("p3", 2, 100) + node := makeNode(4, 600, "node1", "zone-sun") + testCases := map[string]struct { + inputPods []*apiv1.Pod + expectedPods []*apiv1.Pod + }{ + "single pod": { + inputPods: []*apiv1.Pod{p1}, + expectedPods: []*apiv1.Pod{p1}, + }, + "sorted list of pods": { + inputPods: []*apiv1.Pod{p3, p2, p1}, + expectedPods: []*apiv1.Pod{p3, p2, p1}, + }, + "randomised list of pods": { + inputPods: []*apiv1.Pod{p1, p3, p2}, + expectedPods: []*apiv1.Pod{p3, p2, p1}, + }, + "empty pod list": { + inputPods: []*apiv1.Pod{}, + expectedPods: []*apiv1.Pod{}, + }, + } + + for tn, tc := range testCases { + t.Run(tn, func(t *testing.T) { + tc := tc + t.Parallel() + processor := NewDecreasingPodOrderer() + nodeInfo := schedulerframework.NewNodeInfo() + nodeInfo.SetNode(node) + actual := processor.Order(tc.inputPods, nodeInfo, nil) + assert.Equal(t, tc.expectedPods, actual) + }) + } +} diff --git a/cluster-autoscaler/estimator/estimation_context.go b/cluster-autoscaler/estimator/estimation_context.go new file mode 100644 index 000000000000..8187f598aaee --- /dev/null +++ b/cluster-autoscaler/estimator/estimation_context.go @@ -0,0 +1,59 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. 
+You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package estimator + +import ( + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider" +) + +// EstimationContext stores static and runtime state of autoscaling, used by Estimator +type EstimationContext interface { + SimilarNodeGroups() []cloudprovider.NodeGroup + ClusterMaxNodeLimit() int + CurrentNodeCount() int +} + +type estimationContext struct { + similarNodeGroups []cloudprovider.NodeGroup + currentNodeCount int + clusterMaxNodeLimit int +} + +// NewEstimationContext creates a patch for estimation context with runtime properties. +// This patch is used to update existing context. +func NewEstimationContext(clusterMaxNodeLimit int, similarNodeGroups []cloudprovider.NodeGroup, currentNodeCount int) EstimationContext { + return &estimationContext{ + similarNodeGroups: similarNodeGroups, + currentNodeCount: currentNodeCount, + clusterMaxNodeLimit: clusterMaxNodeLimit, + } +} + +// SimilarNodeGroups returns array of similar node groups +func (c *estimationContext) SimilarNodeGroups() []cloudprovider.NodeGroup { + return c.similarNodeGroups +} + +// ClusterMaxNodeLimit returns maximum node number allowed for the cluster +func (c *estimationContext) ClusterMaxNodeLimit() int { + return c.clusterMaxNodeLimit +} + +// CurrentNodeCount returns current number of nodes in the cluster +func (c *estimationContext) CurrentNodeCount() int { + return c.currentNodeCount +} diff --git a/cluster-autoscaler/estimator/estimator.go b/cluster-autoscaler/estimator/estimator.go index 734d4e3f97f2..b4d8e179d54d 100644 --- a/cluster-autoscaler/estimator/estimator.go +++ 
b/cluster-autoscaler/estimator/estimator.go @@ -23,6 +23,7 @@ import ( "k8s.io/autoscaler/cluster-autoscaler/cloudprovider" "k8s.io/autoscaler/cluster-autoscaler/simulator/clustersnapshot" "k8s.io/autoscaler/cluster-autoscaler/simulator/predicatechecker" + "k8s.io/kubernetes/pkg/scheduler/framework" schedulerframework "k8s.io/kubernetes/pkg/scheduler/framework" ) @@ -42,16 +43,20 @@ type Estimator interface { } // EstimatorBuilder creates a new estimator object. -type EstimatorBuilder func(predicatechecker.PredicateChecker, clustersnapshot.ClusterSnapshot) Estimator +type EstimatorBuilder func(predicatechecker.PredicateChecker, clustersnapshot.ClusterSnapshot, EstimationContext) Estimator + +// EstimationAnalyserFunc to be run at the end of the estimation logic. +type EstimationAnalyserFunc func(clustersnapshot.ClusterSnapshot, cloudprovider.NodeGroup, map[string]bool) // NewEstimatorBuilder creates a new estimator object from flag. -func NewEstimatorBuilder(name string, limiter EstimationLimiter) (EstimatorBuilder, error) { +func NewEstimatorBuilder(name string, limiter EstimationLimiter, orderer EstimationPodOrderer, estimationAnalyserFunc EstimationAnalyserFunc) (EstimatorBuilder, error) { switch name { case BinpackingEstimatorName: return func( predicateChecker predicatechecker.PredicateChecker, - clusterSnapshot clustersnapshot.ClusterSnapshot) Estimator { - return NewBinpackingNodeEstimator(predicateChecker, clusterSnapshot, limiter) + clusterSnapshot clustersnapshot.ClusterSnapshot, + context EstimationContext) Estimator { + return NewBinpackingNodeEstimator(predicateChecker, clusterSnapshot, limiter, orderer, context, estimationAnalyserFunc) }, nil } return nil, fmt.Errorf("unknown estimator: %s", name) @@ -62,7 +67,7 @@ func NewEstimatorBuilder(name string, limiter EstimationLimiter) (EstimatorBuild // scale-up is limited by external factors. type EstimationLimiter interface { // StartEstimation is called at the start of estimation. 
- StartEstimation([]*apiv1.Pod, cloudprovider.NodeGroup) + StartEstimation([]*apiv1.Pod, cloudprovider.NodeGroup, EstimationContext) // EndEstimation is called at the end of estimation. EndEstimation() // PermissionToAddNode is called by an estimator when it wants to add additional @@ -72,3 +77,9 @@ type EstimationLimiter interface { // just not expected to add any more nodes. PermissionToAddNode() bool } + +// EstimationPodOrderer is an interface used to determine the order of the pods +// used while binpacking during scale up estimation +type EstimationPodOrderer interface { + Order(pods []*apiv1.Pod, nodeTemplate *framework.NodeInfo, nodeGroup cloudprovider.NodeGroup) []*apiv1.Pod +} diff --git a/cluster-autoscaler/estimator/sng_capacity_threshold.go b/cluster-autoscaler/estimator/sng_capacity_threshold.go new file mode 100644 index 000000000000..0c319449964f --- /dev/null +++ b/cluster-autoscaler/estimator/sng_capacity_threshold.go @@ -0,0 +1,71 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package estimator + +import ( + "time" + + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider" + "k8s.io/klog/v2" +) + +type sngCapacityThreshold struct { +} + +// NodeLimit returns maximum number of new nodes that can be added to the cluster +// based on capacity of current node group and total capacity of similar node groups. 
Possible return values are: +// - -1 when this node group AND similar node groups have no available capacity +// - 0 when context is not set. Return value of 0 means that there is no limit. +// - Any positive number representing maximum possible number of new nodes +func (t *sngCapacityThreshold) NodeLimit(nodeGroup cloudprovider.NodeGroup, context EstimationContext) int { + if context == nil { + return 0 + } + totalAvailableCapacity := t.computeNodeGroupCapacity(nodeGroup) + for _, sng := range context.SimilarNodeGroups() { + totalAvailableCapacity += t.computeNodeGroupCapacity(sng) + } + if totalAvailableCapacity <= 0 { + return -1 + } + return totalAvailableCapacity +} + +func (t *sngCapacityThreshold) computeNodeGroupCapacity(nodeGroup cloudprovider.NodeGroup) int { + nodeGroupTargetSize, err := nodeGroup.TargetSize() + // Should not ever happen as only valid node groups are passed to estimator + if err != nil { + klog.Errorf("Error while computing available capacity of a node group %v: can't get target size of the group: %v", nodeGroup.Id(), err) + return 0 + } + groupCapacity := nodeGroup.MaxSize() - nodeGroupTargetSize + if groupCapacity > 0 { + return groupCapacity + } + return 0 +} + +// DurationLimit always returns 0 for this threshold, meaning that no limit is set. +func (t *sngCapacityThreshold) DurationLimit(cloudprovider.NodeGroup, EstimationContext) time.Duration { + return 0 +} + +// NewSngCapacityThreshold returns a Threshold that can be used to limit binpacking +// by available capacity of similar node groups +func NewSngCapacityThreshold() Threshold { + return &sngCapacityThreshold{} +} diff --git a/cluster-autoscaler/estimator/sng_capacity_threshold_test.go b/cluster-autoscaler/estimator/sng_capacity_threshold_test.go new file mode 100644 index 000000000000..5867152179d5 --- /dev/null +++ b/cluster-autoscaler/estimator/sng_capacity_threshold_test.go @@ -0,0 +1,92 @@ +/* +Copyright 2023 The Kubernetes Authors.
+ +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package estimator + +import ( + "testing" + + "github.com/stretchr/testify/assert" + testprovider "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/test" +) + +func TestSngCapacityThreshold(t *testing.T) { + type nodeGroupConfig struct { + name string + maxNodes int + nodesCount int + } + tests := []struct { + name string + nodeGroupsConfig []nodeGroupConfig + currentNodeGroup nodeGroupConfig + wantThreshold int + }{ + { + name: "returns available capacity", + nodeGroupsConfig: []nodeGroupConfig{ + {name: "ng1", maxNodes: 10, nodesCount: 5}, + {name: "ng2", maxNodes: 100, nodesCount: 50}, + {name: "ng3", maxNodes: 5, nodesCount: 3}, + }, + currentNodeGroup: nodeGroupConfig{name: "main-ng", maxNodes: 20, nodesCount: 10}, + wantThreshold: 67, + }, + { + name: "returns available capacity and skips over-provisioned groups", + nodeGroupsConfig: []nodeGroupConfig{ + {name: "ng1", maxNodes: 10, nodesCount: 5}, + {name: "ng3", maxNodes: 10, nodesCount: 11}, + {name: "ng3", maxNodes: 0, nodesCount: 5}, + }, + currentNodeGroup: nodeGroupConfig{name: "main-ng", maxNodes: 5, nodesCount: 10}, + wantThreshold: 5, + }, + { + name: "threshold is negative if cluster has no capacity", + nodeGroupsConfig: []nodeGroupConfig{ + {name: "ng1", maxNodes: 10, nodesCount: 10}, + {name: "ng2", maxNodes: 100, nodesCount: 100}, + }, + currentNodeGroup: nodeGroupConfig{name: "main-ng", maxNodes: 5, nodesCount: 5}, + wantThreshold: -1, + }, + { + name: "threshold 
is negative if all groups are over-provisioned", + nodeGroupsConfig: []nodeGroupConfig{ + {name: "ng1", maxNodes: 10, nodesCount: 11}, + {name: "ng3", maxNodes: 100, nodesCount: 111}, + {name: "ng3", maxNodes: 0, nodesCount: 5}, + }, + currentNodeGroup: nodeGroupConfig{name: "main-ng", maxNodes: 5, nodesCount: 10}, + wantThreshold: -1, + }, + } + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + provider := testprovider.NewTestCloudProvider(func(string, int) error { return nil }, nil) + for _, ng := range tt.nodeGroupsConfig { + provider.AddNodeGroup(ng.name, 0, ng.maxNodes, ng.nodesCount) + } + // Context must be constructed first to exclude current node group passed from orchestrator + context := estimationContext{similarNodeGroups: provider.NodeGroups()} + provider.AddNodeGroup(tt.currentNodeGroup.name, 0, tt.currentNodeGroup.maxNodes, tt.currentNodeGroup.nodesCount) + currentNodeGroup := provider.GetNodeGroup(tt.currentNodeGroup.name) + assert.Equalf(t, tt.wantThreshold, NewSngCapacityThreshold().NodeLimit(currentNodeGroup, &context), "NewSngCapacityThreshold()") + assert.True(t, NewSngCapacityThreshold().DurationLimit(currentNodeGroup, &context) == 0) + }) + } +} diff --git a/cluster-autoscaler/estimator/static_threshold.go b/cluster-autoscaler/estimator/static_threshold.go new file mode 100644 index 000000000000..8aaf93c4b5d0 --- /dev/null +++ b/cluster-autoscaler/estimator/static_threshold.go @@ -0,0 +1,45 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and +limitations under the License. +*/ + +package estimator + +import ( + "time" + + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider" +) + +type staticThreshold struct { + maxNodes int + maxDuration time.Duration +} + +func (l *staticThreshold) NodeLimit(cloudprovider.NodeGroup, EstimationContext) int { + return l.maxNodes +} + +func (l *staticThreshold) DurationLimit(cloudprovider.NodeGroup, EstimationContext) time.Duration { + return l.maxDuration +} + +// NewStaticThreshold returns a Threshold that should be used to limit +// result and duration of binpacking by given static values +func NewStaticThreshold(maxNodes int, maxDuration time.Duration) Threshold { + return &staticThreshold{ + maxNodes: maxNodes, + maxDuration: maxDuration, + } +} diff --git a/cluster-autoscaler/estimator/threshold.go b/cluster-autoscaler/estimator/threshold.go new file mode 100644 index 000000000000..6246360009b3 --- /dev/null +++ b/cluster-autoscaler/estimator/threshold.go @@ -0,0 +1,30 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package estimator + +import ( + "time" + + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider" +) + +// Threshold provides resources configuration for threshold based estimation limiter. +// Return value of 0 means that no limit is set. 
+type Threshold interface { + NodeLimit(cloudprovider.NodeGroup, EstimationContext) int + DurationLimit(cloudprovider.NodeGroup, EstimationContext) time.Duration +} diff --git a/cluster-autoscaler/estimator/threshold_based_limiter.go b/cluster-autoscaler/estimator/threshold_based_limiter.go index 295ded1c5c4d..57b4e51d5d28 100644 --- a/cluster-autoscaler/estimator/threshold_based_limiter.go +++ b/cluster-autoscaler/estimator/threshold_based_limiter.go @@ -29,22 +29,39 @@ type thresholdBasedEstimationLimiter struct { maxNodes int nodes int start time.Time + thresholds []Threshold } -func (tbel *thresholdBasedEstimationLimiter) StartEstimation([]*apiv1.Pod, cloudprovider.NodeGroup) { +func (tbel *thresholdBasedEstimationLimiter) StartEstimation(_ []*apiv1.Pod, nodeGroup cloudprovider.NodeGroup, context EstimationContext) { tbel.start = time.Now() tbel.nodes = 0 + tbel.maxNodes = 0 + tbel.maxDuration = time.Duration(0) + for _, threshold := range tbel.thresholds { + tbel.maxNodes = getMinLimit(tbel.maxNodes, threshold.NodeLimit(nodeGroup, context)) + tbel.maxDuration = getMinLimit(tbel.maxDuration, threshold.DurationLimit(nodeGroup, context)) + } +} + +func getMinLimit[V int | time.Duration](baseLimit V, targetLimit V) V { + if baseLimit < 0 || targetLimit < 0 { + return -1 + } + if (baseLimit == 0 || baseLimit > targetLimit) && targetLimit > 0 { + return targetLimit + } + return baseLimit } func (*thresholdBasedEstimationLimiter) EndEstimation() {} func (tbel *thresholdBasedEstimationLimiter) PermissionToAddNode() bool { - if tbel.maxNodes > 0 && tbel.nodes >= tbel.maxNodes { + if tbel.maxNodes < 0 || (tbel.maxNodes > 0 && tbel.nodes >= tbel.maxNodes) { klog.V(4).Infof("Capping binpacking after exceeding threshold of %d nodes", tbel.maxNodes) return false } timeDefined := tbel.maxDuration > 0 && tbel.start != time.Time{} - if timeDefined && time.Now().After(tbel.start.Add(tbel.maxDuration)) { + if tbel.maxDuration < 0 || (timeDefined && 
time.Now().After(tbel.start.Add(tbel.maxDuration))) { klog.V(4).Infof("Capping binpacking after exceeding max duration of %v", tbel.maxDuration) return false } @@ -53,12 +70,13 @@ func (tbel *thresholdBasedEstimationLimiter) PermissionToAddNode() bool { } // NewThresholdBasedEstimationLimiter returns an EstimationLimiter that will prevent estimation -// after either a node count- of time-based threshold is reached. This is meant to prevent cases +// after either a node count- or time-based threshold is reached. This is meant to prevent cases // where binpacking of hundreds or thousands of nodes takes extremely long time rendering CA // incredibly slow or even completely crashing it. -func NewThresholdBasedEstimationLimiter(maxNodes int, maxDuration time.Duration) EstimationLimiter { - return &thresholdBasedEstimationLimiter{ - maxNodes: maxNodes, - maxDuration: maxDuration, - } +// Thresholds may return: +// - negative value: no new nodes are allowed to be added if at least one threshold returns negative limit +// - 0: no limit, thresholds with no limits will be ignored in favor of thresholds with positive or negative limits +// - positive value: new nodes can be added and this value represents the limit +func NewThresholdBasedEstimationLimiter(thresholds []Threshold) EstimationLimiter { + return &thresholdBasedEstimationLimiter{thresholds: thresholds} } diff --git a/cluster-autoscaler/estimator/threshold_based_limiter_test.go b/cluster-autoscaler/estimator/threshold_based_limiter_test.go index e80b586f3ebb..ebcfc0c1ddf6 100644 --- a/cluster-autoscaler/estimator/threshold_based_limiter_test.go +++ b/cluster-autoscaler/estimator/threshold_based_limiter_test.go @@ -20,9 +20,9 @@ import ( "testing" "time" - apiv1 "k8s.io/api/core/v1" - "github.com/stretchr/testify/assert" + apiv1 "k8s.io/api/core/v1" + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider" ) type limiterOperation func(*testing.T, EstimationLimiter) @@ -35,9 +35,22 @@ func expectAllow(t *testing.T, l
EstimationLimiter) { assert.Equal(t, true, l.PermissionToAddNode()) } -func resetLimiter(t *testing.T, l EstimationLimiter) { +func resetLimiter(_ *testing.T, l EstimationLimiter) { l.EndEstimation() - l.StartEstimation([]*apiv1.Pod{}, nil) + l.StartEstimation([]*apiv1.Pod{}, nil, nil) +} + +type dynamicThreshold struct { + nodeLimit int +} + +func (d *dynamicThreshold) DurationLimit(cloudprovider.NodeGroup, EstimationContext) time.Duration { + return 0 +} + +func (d *dynamicThreshold) NodeLimit(cloudprovider.NodeGroup, EstimationContext) int { + d.nodeLimit += 1 + return d.nodeLimit } func TestThresholdBasedLimiter(t *testing.T) { @@ -48,6 +61,7 @@ func TestThresholdBasedLimiter(t *testing.T) { startDelta time.Duration operations []limiterOperation expectNodeCount int + thresholds []Threshold }{ { name: "no limiting happens", @@ -60,19 +74,17 @@ func TestThresholdBasedLimiter(t *testing.T) { expectNodeCount: 3, }, { - name: "time based trigger fires", - maxNodes: 20, - maxDuration: 5 * time.Second, - startDelta: -10 * time.Second, + name: "time based trigger fires", + startDelta: -10 * time.Second, operations: []limiterOperation{ expectDeny, expectDeny, }, expectNodeCount: 0, + thresholds: []Threshold{NewStaticThreshold(20, 5*time.Second)}, }, { - name: "sequence of additions works until the threshold is hit", - maxNodes: 3, + name: "sequence of additions works until the threshold is hit", operations: []limiterOperation{ expectAllow, expectAllow, @@ -80,10 +92,32 @@ func TestThresholdBasedLimiter(t *testing.T) { expectDeny, }, expectNodeCount: 3, + thresholds: []Threshold{NewStaticThreshold(3, 0)}, + }, + { + name: "binpacking is stopped if at least one threshold has negative max nodes limit", + operations: []limiterOperation{ + expectDeny, + }, + expectNodeCount: 0, + thresholds: []Threshold{ + NewStaticThreshold(-1, 0), + NewStaticThreshold(10, 0), + }, + }, + { + name: "binpacking is stopped if at least one threshold has negative max duration limit", + 
operations: []limiterOperation{ + expectDeny, + }, + expectNodeCount: 0, + thresholds: []Threshold{ + NewStaticThreshold(100, -1), + NewStaticThreshold(10, 60*time.Minute), + }, }, { - name: "node counter is reset", - maxNodes: 2, + name: "node counter is reset", operations: []limiterOperation{ expectAllow, expectAllow, @@ -92,12 +126,11 @@ func TestThresholdBasedLimiter(t *testing.T) { expectAllow, }, expectNodeCount: 1, + thresholds: []Threshold{NewStaticThreshold(2, 0)}, }, { - name: "timer is reset", - maxNodes: 20, - maxDuration: 5 * time.Second, - startDelta: -10 * time.Second, + name: "timer is reset", + startDelta: -10 * time.Second, operations: []limiterOperation{ expectDeny, resetLimiter, @@ -105,15 +138,42 @@ func TestThresholdBasedLimiter(t *testing.T) { expectAllow, }, expectNodeCount: 2, + thresholds: []Threshold{NewStaticThreshold(20, 5*time.Second)}, + }, + { + name: "handles dynamic limits", + maxNodes: 0, + operations: []limiterOperation{ + expectAllow, + expectAllow, + expectDeny, + resetLimiter, + expectAllow, + expectAllow, + expectAllow, + expectDeny, + }, + expectNodeCount: 3, + thresholds: []Threshold{&dynamicThreshold{1}}, + }, + { + name: "duration limit is set to runtime limit", + operations: []limiterOperation{ + expectDeny, + expectDeny, + resetLimiter, // resets startDelta + expectAllow, + expectAllow, + }, + expectNodeCount: 2, + startDelta: -120 * time.Second, + thresholds: []Threshold{NewStaticThreshold(2, 60*time.Second)}, }, } for _, tc := range testCases { t.Run(tc.name, func(t *testing.T) { - limiter := &thresholdBasedEstimationLimiter{ - maxNodes: tc.maxNodes, - maxDuration: tc.maxDuration, - } - limiter.StartEstimation([]*apiv1.Pod{}, nil) + limiter := NewThresholdBasedEstimationLimiter(tc.thresholds).(*thresholdBasedEstimationLimiter) + limiter.StartEstimation([]*apiv1.Pod{}, nil, nil) if tc.startDelta != time.Duration(0) { limiter.start = limiter.start.Add(tc.startDelta) @@ -122,8 +182,43 @@ func TestThresholdBasedLimiter(t 
*testing.T) { for _, op := range tc.operations { op(t, limiter) } - assert.Equal(t, tc.expectNodeCount, limiter.nodes) + assert.Equalf(t, tc.expectNodeCount, limiter.nodes, "Number of allowed nodes does not match expectation") limiter.EndEstimation() }) } } + +func TestMinLimit(t *testing.T) { + type testCase[V interface{ int | time.Duration }] struct { + name string + baseLimit V + targetLimit V + want V + } + tests := []testCase[int]{ + {name: "At least one negative", baseLimit: -10, targetLimit: 10, want: -1}, + {name: "Negative and not set", baseLimit: -10, targetLimit: 0, want: -1}, + {name: "Both negative", baseLimit: -10, targetLimit: -10, want: -1}, + {name: "Both not set", baseLimit: 0, targetLimit: 0, want: 0}, + {name: "At least one not set", baseLimit: 0, targetLimit: 10, want: 10}, + {name: "Both set", baseLimit: 5, targetLimit: 10, want: 5}, + } + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + assert.Equalf(t, tt.want, getMinLimit(tt.baseLimit, tt.targetLimit), "getMinLimit(%v, %v)", tt.baseLimit, tt.targetLimit) + }) + } + testsTime := []testCase[time.Duration]{ + {name: "At least one negative duration", baseLimit: time.Now().Sub(time.Now().Add(5 * time.Minute)), targetLimit: time.Duration(10), want: -1}, + {name: "Negative and not set durations", baseLimit: time.Now().Sub(time.Now().Add(5 * time.Minute)), targetLimit: time.Duration(0), want: -1}, + {name: "Both negative durations", baseLimit: time.Now().Sub(time.Now().Add(5 * time.Minute)), targetLimit: time.Duration(-10), want: -1}, + {name: "Both not set durations", baseLimit: time.Duration(0), targetLimit: time.Duration(0)}, + {name: "At least one not set duration", baseLimit: time.Duration(0), targetLimit: time.Duration(10), want: 10}, + {name: "Both set durations", baseLimit: time.Duration(5), targetLimit: time.Duration(10), want: 5}, + } + for _, tt := range testsTime { + t.Run(tt.name, func(t *testing.T) { + assert.Equalf(t, tt.want, getMinLimit(tt.baseLimit, 
tt.targetLimit), "getMinLimit(%v, %v)", tt.baseLimit, tt.targetLimit) + }) + } +} diff --git a/cluster-autoscaler/expander/expander.go b/cluster-autoscaler/expander/expander.go index 57a91cfa78e7..9aa2eccb16b1 100644 --- a/cluster-autoscaler/expander/expander.go +++ b/cluster-autoscaler/expander/expander.go @@ -42,10 +42,11 @@ var ( // Option describes an option to expand the cluster. type Option struct { - NodeGroup cloudprovider.NodeGroup - NodeCount int - Debug string - Pods []*apiv1.Pod + NodeGroup cloudprovider.NodeGroup + SimilarNodeGroups []cloudprovider.NodeGroup + NodeCount int + Debug string + Pods []*apiv1.Pod } // Strategy describes an interface for selecting the best option when scaling up diff --git a/cluster-autoscaler/go.mod b/cluster-autoscaler/go.mod index 3fe2ff3fbd63..0cfae7b2aa5e 100644 --- a/cluster-autoscaler/go.mod +++ b/cluster-autoscaler/go.mod @@ -4,20 +4,20 @@ go 1.20 require ( cloud.google.com/go/compute/metadata v0.2.3 - github.com/Azure/azure-sdk-for-go v67.2.0+incompatible - github.com/Azure/go-autorest/autorest v0.11.28 - github.com/Azure/go-autorest/autorest/adal v0.9.21 + github.com/Azure/azure-sdk-for-go v68.0.0+incompatible + github.com/Azure/go-autorest/autorest v0.11.29 + github.com/Azure/go-autorest/autorest/adal v0.9.23 github.com/Azure/go-autorest/autorest/azure/auth v0.5.8 github.com/Azure/go-autorest/autorest/date v0.3.0 github.com/Azure/go-autorest/autorest/to v0.4.0 github.com/Azure/skewer v0.0.14 github.com/aws/aws-sdk-go v1.44.241 + github.com/cenkalti/backoff/v4 v4.2.1 github.com/digitalocean/godo v1.27.0 - github.com/gardener/machine-controller-manager v0.50.0 - github.com/gardener/machine-controller-manager-provider-aws v0.17.0 - github.com/gardener/machine-controller-manager-provider-azure v0.10.0 - github.com/ghodss/yaml v1.0.0 - github.com/gofrs/uuid v4.0.0+incompatible + github.com/gardener/machine-controller-manager v0.50.1 + github.com/gardener/machine-controller-manager-provider-aws v0.19.2 + 
github.com/gardener/machine-controller-manager-provider-azure v0.11.1 + github.com/gofrs/uuid v4.4.0+incompatible github.com/gogo/protobuf v1.3.2 github.com/golang/mock v1.6.0 github.com/google/go-cmp v0.5.9 @@ -28,33 +28,34 @@ require ( github.com/onsi/ginkgo/v2 v2.13.0 github.com/onsi/gomega v1.27.10 github.com/pkg/errors v0.9.1 - github.com/prometheus/client_golang v1.14.0 + github.com/prometheus/client_golang v1.16.0 github.com/satori/go.uuid v1.2.0 github.com/spf13/pflag v1.0.5 github.com/stretchr/testify v1.8.2 golang.org/x/crypto v0.12.0 golang.org/x/net v0.14.0 - golang.org/x/oauth2 v0.7.0 + golang.org/x/oauth2 v0.8.0 golang.org/x/sys v0.12.0 google.golang.org/api v0.114.0 google.golang.org/grpc v1.54.0 google.golang.org/protobuf v1.30.0 gopkg.in/gcfg.v1 v1.2.3 gopkg.in/yaml.v2 v2.4.0 - k8s.io/api v0.27.2 - k8s.io/apimachinery v0.27.2 - k8s.io/apiserver v0.27.2 - k8s.io/client-go v0.27.2 - k8s.io/cloud-provider v0.27.1 - k8s.io/cloud-provider-aws v1.27.1 - k8s.io/component-base v0.27.2 - k8s.io/component-helpers v0.27.1 - k8s.io/klog/v2 v2.90.1 - k8s.io/kubelet v0.27.1 - k8s.io/kubernetes v1.27.1 + k8s.io/api v0.28.0 + k8s.io/apimachinery v0.28.0 + k8s.io/apiserver v0.28.0 + k8s.io/client-go v0.28.0 + k8s.io/cloud-provider v0.28.0 + k8s.io/cloud-provider-aws v1.27.0 + k8s.io/component-base v0.28.0 + k8s.io/component-helpers v0.28.0 + k8s.io/klog/v2 v2.100.1 + k8s.io/kubelet v0.28.0 + k8s.io/kubernetes v1.28.0 k8s.io/legacy-cloud-providers v0.0.0 k8s.io/utils v0.0.0-20230406110748-d93618cff8a2 sigs.k8s.io/cloud-provider-azure v1.26.2 + sigs.k8s.io/yaml v1.3.0 ) require ( @@ -67,7 +68,7 @@ require ( github.com/Azure/go-autorest/tracing v0.6.0 // indirect github.com/GoogleCloudPlatform/k8s-cloud-provider v1.18.1-0.20220218231025-f11817397a1b // indirect github.com/JeffAshton/win_pdh v0.0.0-20161109143554-76bb4ee9f0ab // indirect - github.com/Microsoft/go-winio v0.4.17 // indirect + github.com/Microsoft/go-winio v0.6.0 // indirect github.com/Microsoft/hcsshim 
v0.8.25 // indirect github.com/NYTimes/gziphandler v1.1.1 // indirect github.com/antlr/antlr4/runtime/Go/antlr/v4 v4.0.0-20230321174746-8dcc6526cfb1 // indirect @@ -75,20 +76,19 @@ require ( github.com/asaskevich/govalidator v0.0.0-20230301143203-a9d515a09cc2 // indirect github.com/beorn7/perks v1.0.1 // indirect github.com/blang/semver/v4 v4.0.0 // indirect - github.com/cenkalti/backoff/v4 v4.2.0 // indirect github.com/cespare/xxhash/v2 v2.2.0 // indirect github.com/checkpoint-restore/go-criu/v5 v5.3.0 // indirect - github.com/cilium/ebpf v0.7.0 // indirect - github.com/container-storage-interface/spec v1.7.0 // indirect - github.com/containerd/cgroups v1.0.1 // indirect + github.com/cilium/ebpf v0.9.1 // indirect + github.com/container-storage-interface/spec v1.8.0 // indirect + github.com/containerd/cgroups v1.1.0 // indirect github.com/containerd/console v1.0.3 // indirect - github.com/containerd/ttrpc v1.1.0 // indirect + github.com/containerd/ttrpc v1.2.2 // indirect github.com/coreos/go-semver v0.3.1 // indirect github.com/coreos/go-systemd/v22 v22.5.0 // indirect github.com/cyphar/filepath-securejoin v0.2.3 // indirect github.com/davecgh/go-spew v1.1.1 // indirect github.com/dimchansky/utfbom v1.1.1 // indirect - github.com/docker/distribution v2.8.1+incompatible // indirect + github.com/docker/distribution v2.8.2+incompatible // indirect github.com/docker/go-units v0.5.0 // indirect github.com/emicklei/go-restful/v3 v3.10.2 // indirect github.com/euank/go-kmsg-parser v2.0.0+incompatible // indirect @@ -102,12 +102,12 @@ require ( github.com/go-openapi/swag v0.22.3 // indirect github.com/go-task/slim-sprig v0.0.0-20230315185526-52ccab3ef572 // indirect github.com/godbus/dbus/v5 v5.0.6 // indirect - github.com/golang-jwt/jwt/v4 v4.4.2 // indirect + github.com/golang-jwt/jwt/v4 v4.5.0 // indirect github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da // indirect github.com/golang/protobuf v1.5.3 // indirect - github.com/google/cadvisor v0.47.1 // 
indirect - github.com/google/cel-go v0.14.0 // indirect - github.com/google/gnostic v0.6.9 // indirect + github.com/google/cadvisor v0.47.3 // indirect + github.com/google/cel-go v0.16.0 // indirect + github.com/google/gnostic-models v0.6.8 // indirect github.com/google/gofuzz v1.2.0 // indirect github.com/google/pprof v0.0.0-20210720184732-4bb14d4b1be1 // indirect github.com/googleapis/enterprise-certificate-proxy v0.2.3 // indirect @@ -124,7 +124,6 @@ require ( github.com/matttproud/golang_protobuf_extensions v1.0.4 // indirect github.com/mistifyio/go-zfs v2.1.2-0.20190413222219-f784269be439+incompatible // indirect github.com/mitchellh/go-homedir v1.1.0 // indirect - github.com/mitchellh/mapstructure v1.5.0 // indirect github.com/moby/ipvs v1.1.0 // indirect github.com/moby/spdystream v0.2.0 // indirect github.com/moby/sys/mountinfo v0.6.2 // indirect @@ -135,26 +134,26 @@ require ( github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect github.com/mxk/go-flowrate v0.0.0-20140419014527-cca7078d478f // indirect github.com/opencontainers/go-digest v1.0.0 // indirect - github.com/opencontainers/runc v1.1.4 // indirect + github.com/opencontainers/runc v1.1.7 // indirect github.com/opencontainers/runtime-spec v1.0.3-0.20220909204839-494a5a6aca78 // indirect github.com/opencontainers/selinux v1.10.0 // indirect github.com/pmezard/go-difflib v1.0.0 // indirect - github.com/prometheus/client_model v0.3.0 // indirect - github.com/prometheus/common v0.42.0 // indirect - github.com/prometheus/procfs v0.9.0 // indirect + github.com/prometheus/client_model v0.4.0 // indirect + github.com/prometheus/common v0.44.0 // indirect + github.com/prometheus/procfs v0.10.1 // indirect github.com/rubiojr/go-vhd v0.0.0-20200706105327-02e210299021 // indirect - github.com/seccomp/libseccomp-golang v0.9.2-0.20220502022130-f33da4d89646 // indirect + github.com/seccomp/libseccomp-golang v0.10.0 // indirect github.com/sirupsen/logrus v1.9.0 // indirect 
github.com/spf13/cobra v1.7.0 // indirect github.com/stoewer/go-strcase v1.3.0 // indirect github.com/stretchr/objx v0.5.0 // indirect github.com/syndtr/gocapability v0.0.0-20200815063812-42c35b437635 // indirect github.com/vishvananda/netlink v1.1.0 // indirect - github.com/vishvananda/netns v0.0.2 // indirect + github.com/vishvananda/netns v0.0.4 // indirect github.com/vmware/govmomi v0.30.0 // indirect - go.etcd.io/etcd/api/v3 v3.5.7 // indirect - go.etcd.io/etcd/client/pkg/v3 v3.5.7 // indirect - go.etcd.io/etcd/client/v3 v3.5.7 // indirect + go.etcd.io/etcd/api/v3 v3.5.9 // indirect + go.etcd.io/etcd/client/pkg/v3 v3.5.9 // indirect + go.etcd.io/etcd/client/v3 v3.5.9 // indirect go.opencensus.io v0.24.0 // indirect go.opentelemetry.io/contrib/instrumentation/github.com/emicklei/go-restful/otelrestful v0.35.0 // indirect go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.40.0 // indirect @@ -171,31 +170,33 @@ require ( go.uber.org/multierr v1.11.0 // indirect go.uber.org/zap v1.24.0 // indirect golang.org/x/exp v0.0.0-20230321023759-10a507213a29 // indirect + golang.org/x/mod v0.12.0 // indirect golang.org/x/sync v0.3.0 // indirect golang.org/x/term v0.11.0 // indirect golang.org/x/text v0.12.0 // indirect golang.org/x/time v0.3.0 // indirect golang.org/x/tools v0.12.0 // indirect google.golang.org/appengine v1.6.7 // indirect - google.golang.org/genproto v0.0.0-20230410155749-daa745c078e1 // indirect + google.golang.org/genproto v0.0.0-20230526161137-0005af68ea54 // indirect + google.golang.org/genproto/googleapis/api v0.0.0-20230525234035-dd9d682886f9 // indirect + google.golang.org/genproto/googleapis/rpc v0.0.0-20230525234030-28d5490b6b19 // indirect gopkg.in/inf.v0 v0.9.1 // indirect gopkg.in/natefinch/lumberjack.v2 v2.2.1 // indirect gopkg.in/warnings.v0 v0.1.2 // indirect gopkg.in/yaml.v3 v3.0.1 // indirect - k8s.io/controller-manager v0.27.1 // indirect - k8s.io/cri-api v0.0.0 // indirect + k8s.io/apiextensions-apiserver 
v0.27.2 // indirect + k8s.io/controller-manager v0.28.0 // indirect + k8s.io/cri-api v0.28.0 // indirect k8s.io/csi-translation-lib v0.27.0 // indirect k8s.io/dynamic-resource-allocation v0.0.0 // indirect - k8s.io/kms v0.27.1 // indirect - k8s.io/kube-openapi v0.0.0-20230501164219-8b0f38b5fd1f // indirect - k8s.io/kube-proxy v0.0.0 // indirect + k8s.io/kms v0.28.0 // indirect + k8s.io/kube-openapi v0.0.0-20230717233707-2695361300d9 // indirect k8s.io/kube-scheduler v0.0.0 // indirect k8s.io/kubectl v0.0.0 // indirect k8s.io/mount-utils v0.26.0-alpha.0 // indirect sigs.k8s.io/apiserver-network-proxy/konnectivity-client v0.1.2 // indirect sigs.k8s.io/json v0.0.0-20221116044647-bc3834ca7abd // indirect sigs.k8s.io/structured-merge-diff/v4 v4.2.3 // indirect - sigs.k8s.io/yaml v1.3.0 // indirect ) replace github.com/aws/aws-sdk-go/service/eks => github.com/aws/aws-sdk-go/service/eks v1.38.49 @@ -204,62 +205,64 @@ replace github.com/digitalocean/godo => github.com/digitalocean/godo v1.27.0 replace github.com/rancher/go-rancher => github.com/rancher/go-rancher v0.1.0 -replace k8s.io/api => k8s.io/api v0.27.1 +replace k8s.io/api => k8s.io/api v0.28.0 -replace k8s.io/apiextensions-apiserver => k8s.io/apiextensions-apiserver v0.27.1 +replace k8s.io/apiextensions-apiserver => k8s.io/apiextensions-apiserver v0.28.0 -replace k8s.io/apimachinery => k8s.io/apimachinery v0.28.0-alpha.0 +replace k8s.io/apimachinery => k8s.io/apimachinery v0.28.0 -replace k8s.io/apiserver => k8s.io/apiserver v0.27.1 +replace k8s.io/apiserver => k8s.io/apiserver v0.28.0 -replace k8s.io/cli-runtime => k8s.io/cli-runtime v0.27.1 +replace k8s.io/cli-runtime => k8s.io/cli-runtime v0.28.0 -replace k8s.io/client-go => k8s.io/client-go v0.27.1 +replace k8s.io/client-go => k8s.io/client-go v0.28.0 -replace k8s.io/cloud-provider => k8s.io/cloud-provider v0.27.1 +replace k8s.io/cloud-provider => k8s.io/cloud-provider v0.28.0 -replace k8s.io/cluster-bootstrap => k8s.io/cluster-bootstrap v0.27.1 +replace 
k8s.io/cluster-bootstrap => k8s.io/cluster-bootstrap v0.28.0 -replace k8s.io/code-generator => k8s.io/code-generator v0.27.1 +replace k8s.io/code-generator => k8s.io/code-generator v0.28.0 -replace k8s.io/component-base => k8s.io/component-base v0.27.1 +replace k8s.io/component-base => k8s.io/component-base v0.28.0 -replace k8s.io/component-helpers => k8s.io/component-helpers v0.27.1 +replace k8s.io/component-helpers => k8s.io/component-helpers v0.28.0 -replace k8s.io/controller-manager => k8s.io/controller-manager v0.27.1 +replace k8s.io/controller-manager => k8s.io/controller-manager v0.28.0 -replace k8s.io/cri-api => k8s.io/cri-api v0.28.0-alpha.0 +replace k8s.io/cri-api => k8s.io/cri-api v0.28.0 -replace k8s.io/csi-translation-lib => k8s.io/csi-translation-lib v0.27.1 +replace k8s.io/csi-translation-lib => k8s.io/csi-translation-lib v0.28.0 -replace k8s.io/kube-aggregator => k8s.io/kube-aggregator v0.27.1 +replace k8s.io/kube-aggregator => k8s.io/kube-aggregator v0.28.0 -replace k8s.io/kube-controller-manager => k8s.io/kube-controller-manager v0.27.1 +replace k8s.io/kube-controller-manager => k8s.io/kube-controller-manager v0.28.0 -replace k8s.io/kube-proxy => k8s.io/kube-proxy v0.27.1 +replace k8s.io/kube-proxy => k8s.io/kube-proxy v0.28.0 -replace k8s.io/kube-scheduler => k8s.io/kube-scheduler v0.27.1 +replace k8s.io/kube-scheduler => k8s.io/kube-scheduler v0.28.0 -replace k8s.io/kubectl => k8s.io/kubectl v0.27.1 +replace k8s.io/kubectl => k8s.io/kubectl v0.28.0 -replace k8s.io/kubelet => k8s.io/kubelet v0.27.1 +replace k8s.io/kubelet => k8s.io/kubelet v0.28.0 -replace k8s.io/legacy-cloud-providers => k8s.io/legacy-cloud-providers v0.27.1 +replace k8s.io/legacy-cloud-providers => k8s.io/legacy-cloud-providers v0.28.0 -replace k8s.io/metrics => k8s.io/metrics v0.27.1 +replace k8s.io/metrics => k8s.io/metrics v0.28.0 -replace k8s.io/mount-utils => k8s.io/mount-utils v0.27.3 +replace k8s.io/mount-utils => k8s.io/mount-utils v0.28.0 -replace 
k8s.io/sample-apiserver => k8s.io/sample-apiserver v0.27.1 +replace k8s.io/sample-apiserver => k8s.io/sample-apiserver v0.28.0 -replace k8s.io/sample-cli-plugin => k8s.io/sample-cli-plugin v0.27.1 +replace k8s.io/sample-cli-plugin => k8s.io/sample-cli-plugin v0.28.0 -replace k8s.io/sample-controller => k8s.io/sample-controller v0.27.1 +replace k8s.io/sample-controller => k8s.io/sample-controller v0.28.0 -replace k8s.io/pod-security-admission => k8s.io/pod-security-admission v0.27.1 +replace k8s.io/pod-security-admission => k8s.io/pod-security-admission v0.28.0 -replace k8s.io/dynamic-resource-allocation => k8s.io/dynamic-resource-allocation v0.27.1 +replace k8s.io/dynamic-resource-allocation => k8s.io/dynamic-resource-allocation v0.28.0 -replace k8s.io/kms => k8s.io/kms v0.27.1 +replace k8s.io/kms => k8s.io/kms v0.28.0 -replace k8s.io/noderesourcetopology-api => k8s.io/noderesourcetopology-api v0.26.3 +replace k8s.io/noderesourcetopology-api => k8s.io/noderesourcetopology-api v0.27.0 + +replace k8s.io/endpointslice => k8s.io/endpointslice v0.28.0 diff --git a/cluster-autoscaler/go.sum b/cluster-autoscaler/go.sum index 8fd63a129f16..cc9f91b0a205 100644 --- a/cluster-autoscaler/go.sum +++ b/cluster-autoscaler/go.sum @@ -51,22 +51,22 @@ cloud.google.com/go/storage v1.8.0/go.mod h1:Wv1Oy7z6Yz3DshWRJFhqM/UCfaWIRTdp0RX cloud.google.com/go/storage v1.10.0/go.mod h1:FLPqc6j+Ki4BU591ie1oL6qBQGu2Bl/tZ9ullr3+Kg0= dmitri.shuralyov.com/gpu/mtl v0.0.0-20190408044501-666a987793e9/go.mod h1:H6x//7gZCb22OMCxBHrMx7a5I7Hp++hsVxbQ4BYO7hU= github.com/Azure/azure-sdk-for-go v46.0.0+incompatible/go.mod h1:9XXNKU+eRnpl9moKnB4QOLf1HestfXbmab5FXxiDBjc= -github.com/Azure/azure-sdk-for-go v67.2.0+incompatible h1:Uu/Ww6ernvPTrpq31kITVTIm/I5jlJ1wjtEH/bmSB2k= -github.com/Azure/azure-sdk-for-go v67.2.0+incompatible/go.mod h1:9XXNKU+eRnpl9moKnB4QOLf1HestfXbmab5FXxiDBjc= +github.com/Azure/azure-sdk-for-go v68.0.0+incompatible h1:fcYLmCpyNYRnvJbPerq7U0hS+6+I79yEDJBqVNcqUzU= 
+github.com/Azure/azure-sdk-for-go v68.0.0+incompatible/go.mod h1:9XXNKU+eRnpl9moKnB4QOLf1HestfXbmab5FXxiDBjc= github.com/Azure/go-ansiterm v0.0.0-20210617225240-d185dfc1b5a1/go.mod h1:xomTg63KZ2rFqZQzSB4Vz2SUXa1BpHTVz9L5PTmPC4E= github.com/Azure/go-autorest v14.2.0+incompatible h1:V5VMDjClD3GiElqLWO7mz2MxNAK/vTfRHdAubSIPRgs= github.com/Azure/go-autorest v14.2.0+incompatible/go.mod h1:r+4oMnoxhatjLLJ6zxSWATqVooLgysK6ZNox3g/xq24= github.com/Azure/go-autorest/autorest v0.11.4/go.mod h1:JFgpikqFJ/MleTTxwepExTKnFUKKszPS8UavbQYUMuw= github.com/Azure/go-autorest/autorest v0.11.17/go.mod h1:eipySxLmqSyC5s5k1CLupqet0PSENBEDP93LQ9a8QYw= -github.com/Azure/go-autorest/autorest v0.11.28 h1:ndAExarwr5Y+GaHE6VCaY1kyS/HwwGGyuimVhWsHOEM= -github.com/Azure/go-autorest/autorest v0.11.28/go.mod h1:MrkzG3Y3AH668QyF9KRk5neJnGgmhQ6krbhR8Q5eMvA= +github.com/Azure/go-autorest/autorest v0.11.29 h1:I4+HL/JDvErx2LjyzaVxllw2lRDB5/BT2Bm4g20iqYw= +github.com/Azure/go-autorest/autorest v0.11.29/go.mod h1:ZtEzC4Jy2JDrZLxvWs8LrBWEBycl1hbT1eknI8MtfAs= github.com/Azure/go-autorest/autorest/adal v0.9.0/go.mod h1:/c022QCutn2P7uY+/oQWWNcK9YU+MH96NgK+jErpbcg= github.com/Azure/go-autorest/autorest/adal v0.9.2/go.mod h1:/3SMAM86bP6wC9Ev35peQDUeqFZBMH07vvUOmg4z/fE= github.com/Azure/go-autorest/autorest/adal v0.9.5/go.mod h1:B7KF7jKIeC9Mct5spmyCB/A8CG/sEz1vwIRGv/bbw7A= github.com/Azure/go-autorest/autorest/adal v0.9.11/go.mod h1:nBKAnTomx8gDtl+3ZCJv2v0KACFHWTB2drffI1B68Pk= -github.com/Azure/go-autorest/autorest/adal v0.9.18/go.mod h1:XVVeme+LZwABT8K5Lc3hA4nAe8LDBVle26gTrguhhPQ= -github.com/Azure/go-autorest/autorest/adal v0.9.21 h1:jjQnVFXPfekaqb8vIsv2G1lxshoW+oGv4MDlhRtnYZk= -github.com/Azure/go-autorest/autorest/adal v0.9.21/go.mod h1:zua7mBUaCc5YnSLKYgGJR/w5ePdMDA6H56upLsHzA9U= +github.com/Azure/go-autorest/autorest/adal v0.9.22/go.mod h1:XuAbAEUv2Tta//+voMI038TrJBqjKam0me7qR+L8Cmk= +github.com/Azure/go-autorest/autorest/adal v0.9.23 h1:Yepx8CvFxwNKpH6ja7RZ+sKX+DWYNldbLiALMC3BTz8= 
+github.com/Azure/go-autorest/autorest/adal v0.9.23/go.mod h1:5pcMqFkdPhviJdlEy3kC/v1ZLnQl0MH6XA5YCcMhy4c= github.com/Azure/go-autorest/autorest/azure/auth v0.5.8 h1:TzPg6B6fTZ0G1zBf3T54aI7p3cAT6u//TOXGPmFMOXg= github.com/Azure/go-autorest/autorest/azure/auth v0.5.8/go.mod h1:kxyKZTSfKh8OVFWPAgOgQ/frrJgeYQJPyR5fLFmXko4= github.com/Azure/go-autorest/autorest/azure/cli v0.4.2 h1:dMOmEJfkLKW/7JsokJqkyoYSgmR08hi9KrhjZb+JALY= @@ -96,8 +96,9 @@ github.com/GoogleCloudPlatform/k8s-cloud-provider v1.18.1-0.20220218231025-f1181 github.com/JeffAshton/win_pdh v0.0.0-20161109143554-76bb4ee9f0ab h1:UKkYhof1njT1/xq4SEg5z+VpTgjmNeHwPGRQl7takDI= github.com/JeffAshton/win_pdh v0.0.0-20161109143554-76bb4ee9f0ab/go.mod h1:3VYc5hodBMJ5+l/7J4xAyMeuM2PNuepvHlGs8yilUCA= github.com/Microsoft/go-winio v0.4.15/go.mod h1:tTuCMEN+UleMWgg9dVx4Hu52b1bJo+59jBh3ajtinzw= -github.com/Microsoft/go-winio v0.4.17 h1:iT12IBVClFevaf8PuVyi3UmZOVh4OqnaLxDTW2O6j3w= github.com/Microsoft/go-winio v0.4.17/go.mod h1:JPGBdM1cNvN/6ISo+n8V5iA4v8pBzdOpzfwIujj1a84= +github.com/Microsoft/go-winio v0.6.0 h1:slsWYD/zyx7lCXoZVlvQrj0hPTM1HI4+v1sIda2yDvg= +github.com/Microsoft/go-winio v0.6.0/go.mod h1:cTAf44im0RAYeL23bpB+fzCyDH2MJiz2BO69KH/soAE= github.com/Microsoft/hcsshim v0.8.25 h1:fRMwXiwk3qDwc0P05eHnh+y2v07JdtsfQ1fuAc69m9g= github.com/Microsoft/hcsshim v0.8.25/go.mod h1:4zegtUJth7lAvFyc6cH2gGQ5B3OFQim01nnU2M8jKDg= github.com/NYTimes/gziphandler v1.1.1 h1:ZUDjpQae29j0ryrS0u/B8HZfJBtBQHjqw2rQ2cqUQ3I= @@ -129,10 +130,9 @@ github.com/blang/semver v3.5.1+incompatible h1:cQNTCjp13qL8KC3Nbxr/y2Bqb63oX6wdn github.com/blang/semver v3.5.1+incompatible/go.mod h1:kRBLl5iJ+tD4TcOOxsy/0fnwebNt5EWlYSAyrTnjyyk= github.com/blang/semver/v4 v4.0.0 h1:1PFHFE6yCCTv8C1TeyNNarDzntLi7wMI5i/pzqYIsAM= github.com/blang/semver/v4 v4.0.0/go.mod h1:IbckMUScFkM3pff0VJDNKRiT6TG/YpiHIM2yvyW5YoQ= -github.com/buger/jsonparser v1.1.1/go.mod h1:6RYKKt7H4d4+iWqouImQ9R2FZql3VbhNgx27UK13J/0= github.com/cenkalti/backoff/v4 v4.1.1/go.mod 
h1:scbssz8iZGpm3xbr14ovlUdkxfGXNInqkPWOWmG2CLw= -github.com/cenkalti/backoff/v4 v4.2.0 h1:HN5dHm3WBOgndBH6E8V0q2jIYIR3s9yglV8k/+MN3u4= -github.com/cenkalti/backoff/v4 v4.2.0/go.mod h1:Y3VNntkOUPxTVeUxJ/G5vcM//AlwfmyYozVcomhLiZE= +github.com/cenkalti/backoff/v4 v4.2.1 h1:y4OZtCnogmCPw98Zjyt5a6+QwPLGkiQsYW5oUqylYbM= +github.com/cenkalti/backoff/v4 v4.2.1/go.mod h1:Y3VNntkOUPxTVeUxJ/G5vcM//AlwfmyYozVcomhLiZE= github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU= github.com/cespare/xxhash v1.1.0/go.mod h1:XrSqR1VqqWfGrhpAt58auRo0WTKS1nRRg3ghfAqPWnc= github.com/cespare/xxhash/v2 v2.1.1/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs= @@ -145,8 +145,9 @@ github.com/chzyer/logex v1.1.10/go.mod h1:+Ywpsq7O8HXn0nuIou7OrIPyXbp3wmkHB+jjWR github.com/chzyer/readline v0.0.0-20180603132655-2972be24d48e/go.mod h1:nSuG5e5PlCu98SY8svDHJxuZscDgtXS6KTTbou5AhLI= github.com/chzyer/test v0.0.0-20180213035817-a1ea475d72b1/go.mod h1:Q3SI9o4m/ZMnBNeIyt5eFwwo7qiLfzFZmjNmxjkiQlU= github.com/cilium/ebpf v0.4.0/go.mod h1:4tRaxcgiL706VnOzHOdBlY8IEAIdxINsQBcU4xJJXRs= -github.com/cilium/ebpf v0.7.0 h1:1k/q3ATgxSXRdrmPfH8d7YK0GfqVsEKZAX9dQZvs56k= github.com/cilium/ebpf v0.7.0/go.mod h1:/oI2+1shJiTGAMgl6/RgJr36Eo1jzrRcAWbcXO2usCA= +github.com/cilium/ebpf v0.9.1 h1:64sn2K3UKw8NbP/blsixRpF3nXuyhz/VjRlRzvlBRu4= +github.com/cilium/ebpf v0.9.1/go.mod h1:+OhNOIXx/Fnu1IE8bJz2dzOA+VSfyTfdNUVdlQnxUFY= github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw= github.com/cncf/udpa/go v0.0.0-20191209042840-269d4d468f6f/go.mod h1:M8M6+tZqaGXZJjfX53e64911xZQV5JYwmTeXPW+k8Sc= github.com/cncf/udpa/go v0.0.0-20200629203442-efcf912fb354/go.mod h1:WmhPx2Nbnhtbo57+VJT5O0JRkEi1Wbu0z5j0R8u5Hbk= @@ -157,10 +158,11 @@ github.com/cncf/xds/go v0.0.0-20210805033703-aa0b78936158/go.mod h1:eXthEFrGJvWH github.com/cncf/xds/go v0.0.0-20210922020428-25de7278fc84/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs= 
github.com/cncf/xds/go v0.0.0-20211001041855-01bcc9b48dfe/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs= github.com/cncf/xds/go v0.0.0-20211011173535-cb28da3451f1/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs= -github.com/container-storage-interface/spec v1.7.0 h1:gW8eyFQUZWWrMWa8p1seJ28gwDoN5CVJ4uAbQ+Hdycw= -github.com/container-storage-interface/spec v1.7.0/go.mod h1:JYuzLqr9VVNoDJl44xp/8fmCOvWPDKzuGTwCoklhuqk= -github.com/containerd/cgroups v1.0.1 h1:iJnMvco9XGvKUvNQkv88bE4uJXxRQH18efbKo9w5vHQ= +github.com/container-storage-interface/spec v1.8.0 h1:D0vhF3PLIZwlwZEf2eNbpujGCNwspwTYf2idJRJx4xI= +github.com/container-storage-interface/spec v1.8.0/go.mod h1:ROLik+GhPslwwWRNFF1KasPzroNARibH2rfz1rkg4H0= github.com/containerd/cgroups v1.0.1/go.mod h1:0SJrPIenamHDcZhEcJMNBB85rHcUsw4f25ZfBiPYRkU= +github.com/containerd/cgroups v1.1.0 h1:v8rEWFl6EoqHB+swVNjVoCJE8o3jX7e8nqBGPLaDFBM= +github.com/containerd/cgroups v1.1.0/go.mod h1:6ppBcbh/NOOUU+dMKrykgaBnK9lCIBxHqJDGwsa1mIw= github.com/containerd/console v1.0.1/go.mod h1:XUsP6YE/mKtz6bxc+I8UiKKTP04qjQL4qcS3XoQ5xkw= github.com/containerd/console v1.0.2/go.mod h1:ytZPjGgY2oeTkAONYafi2kSj0aYggsf8acV1PGKCbzQ= github.com/containerd/console v1.0.3 h1:lIr7SlA5PxZyMV30bDW0MGbiOPXwc63yRuCP0ARubLw= @@ -169,8 +171,9 @@ github.com/containerd/containerd v1.4.9/go.mod h1:bC6axHOhabU15QhwfG7w5PipXdVtMX github.com/containerd/continuity v0.1.0/go.mod h1:ICJu0PwR54nI0yPEnJ6jcS+J7CZAUXrLh8lPo2knzsM= github.com/containerd/fifo v1.0.0/go.mod h1:ocF/ME1SX5b1AOlWi9r677YJmCPSwwWnQ9O123vzpE4= github.com/containerd/go-runc v1.0.0/go.mod h1:cNU0ZbCgCQVZK4lgG3P+9tn9/PaJNmoDXPpoJhDR+Ok= -github.com/containerd/ttrpc v1.1.0 h1:GbtyLRxb0gOLR0TYQWt3O6B0NvT8tMdorEHqIQo/lWI= github.com/containerd/ttrpc v1.1.0/go.mod h1:XX4ZTnoOId4HklF4edwc4DcqskFZuvXB1Evzy5KFQpQ= +github.com/containerd/ttrpc v1.2.2 h1:9vqZr0pxwOF5koz6N0N3kJ0zDHokrcPxIR/ZR2YFtOs= +github.com/containerd/ttrpc v1.2.2/go.mod h1:sIT6l32Ph/H9cvnJsfXM5drIVzTr5A2flTf1G5tYZak= 
github.com/containerd/typeurl v1.0.2 h1:Chlt8zIieDbzQFzXzAeBEF92KhExuE4p9p92/QmY7aY= github.com/containerd/typeurl v1.0.2/go.mod h1:9trJWW2sRlGub4wZJRTW83VtbOLS6hwcDZXTn6oPz9s= github.com/coreos/bbolt v1.3.2/go.mod h1:iRUV2dpdMOn7Bo10OQBFzIJO9kkE559Wcmn+qkEiiKk= @@ -202,8 +205,9 @@ github.com/dimchansky/utfbom v1.1.0/go.mod h1:rO41eb7gLfo8SF1jd9F8HplJm1Fewwi4mQ github.com/dimchansky/utfbom v1.1.1 h1:vV6w1AhK4VMnhBno/TPVCoK9U/LP0PkLCS9tbxHdi/U= github.com/dimchansky/utfbom v1.1.1/go.mod h1:SxdoEBH5qIqFocHMyGOXVAybYJdr71b1Q/j0mACtrfE= github.com/dnaeon/go-vcr v1.2.0 h1:zHCHvJYTMh1N7xnV7zf1m1GPBF9Ad0Jk/whtQ1663qI= -github.com/docker/distribution v2.8.1+incompatible h1:Q50tZOPR6T/hjNsyc9g8/syEs6bk8XXApsHjKukMl68= github.com/docker/distribution v2.8.1+incompatible/go.mod h1:J2gT2udsDAN96Uj4KfcMRqY0/ypR+oyYUYmja8H+y+w= +github.com/docker/distribution v2.8.2+incompatible h1:T3de5rq0dB1j30rp0sA2rER+m322EBzniBPB6ZIzuh8= +github.com/docker/distribution v2.8.2+incompatible/go.mod h1:J2gT2udsDAN96Uj4KfcMRqY0/ypR+oyYUYmja8H+y+w= github.com/docker/docker v20.10.21+incompatible h1:UTLdBmHk3bEY+w8qeO5KttOhy6OmXWsl/FEet9Uswog= github.com/docker/docker v20.10.21+incompatible/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk= github.com/docker/go-connections v0.4.0 h1:El9xVISelRB7BuFusrZozjnkIM5YnzCViNKohAFqRJQ= @@ -211,9 +215,8 @@ github.com/docker/go-connections v0.4.0/go.mod h1:Gbd7IOopHjR8Iph03tsViu4nIes5Xh github.com/docker/go-units v0.4.0/go.mod h1:fgPhTUdO+D/Jk86RDLlptpiXQzgHJF7gydDDbaIK4Dk= github.com/docker/go-units v0.5.0 h1:69rxXcBk27SvSaaxTtLh/8llcHD8vYHT7WSdRZ/jvr4= github.com/docker/go-units v0.5.0/go.mod h1:fgPhTUdO+D/Jk86RDLlptpiXQzgHJF7gydDDbaIK4Dk= -github.com/docopt/docopt-go v0.0.0-20180111231733-ee0de3bc6815/go.mod h1:WwZ+bS3ebgob9U8Nd0kOddGdZWjyMGR8Wziv+TBNwSE= -github.com/dustin/go-humanize v1.0.0 h1:VSnTsYCnlFHaM2/igO1h6X3HA71jcobQuxemgkq4zYo= github.com/dustin/go-humanize v1.0.0/go.mod h1:HtrtbFcZ19U5GC7JDqmcUSB87Iq5E25KnS6fMYU6eOk= 
+github.com/dustin/go-humanize v1.0.1 h1:GzkhY7T5VNhEkwH0PVJgjz+fX1rhBrR7pRT3mDkpeCY= github.com/emicklei/go-restful/v3 v3.10.2 h1:hIovbnmBTLjHXkqEBUz3HGpXZdM7ZrE9fJIZIqlJLqE= github.com/emicklei/go-restful/v3 v3.10.2/go.mod h1:6n3XBCmQQb25CM2LCACGz8ukIrRry+4bhvbpWn3mrbc= github.com/envoyproxy/go-control-plane v0.9.0/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4= @@ -232,20 +235,18 @@ github.com/evanphx/json-patch v5.6.0+incompatible h1:jBYDEEiFBPxA0v50tFdvOzQQTCv github.com/evanphx/json-patch v5.6.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk= github.com/felixge/httpsnoop v1.0.3 h1:s/nj+GCswXYzN5v2DpNMuMQYe+0DDwt5WVCU6CWBdXk= github.com/felixge/httpsnoop v1.0.3/go.mod h1:m8KPJKqk1gH5J9DgRY2ASl2lWCfGKXixSwevea8zH2U= -github.com/flowstack/go-jsonschema v0.1.1/go.mod h1:yL7fNggx1o8rm9RlgXv7hTBWxdBM0rVwpMwimd3F3N0= github.com/form3tech-oss/jwt-go v3.2.2+incompatible/go.mod h1:pbq4aXjuKjdthFRnoDwaVPLA+WlJuPGy+QneDUgJi2k= -github.com/frankban/quicktest v1.11.3 h1:8sXhOn0uLys67V8EsXLc6eszDs8VXWxL3iRvebPhedY= github.com/frankban/quicktest v1.11.3/go.mod h1:wRf/ReqHper53s+kmmSZizM8NamnL3IM0I9ntUbOk+k= +github.com/frankban/quicktest v1.14.0 h1:+cqqvzZV87b4adx/5ayVOaYZ2CrvM4ejQvUdBzPPUss= github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo= github.com/fsnotify/fsnotify v1.6.0 h1:n+5WquG0fcWoWp6xPWfHdbskMCQaFnG6PfBrh1Ky4HY= github.com/fsnotify/fsnotify v1.6.0/go.mod h1:sl3t1tCWJFWoRz9R8WJCbQihKKwmorjAbSClcnxKAGw= -github.com/gardener/machine-controller-manager v0.50.0 h1:3dcQjzueFU1TGgprV00adjb3OCR99myTBx8DQGxywks= -github.com/gardener/machine-controller-manager v0.50.0/go.mod h1:RySZ40AgbNV/wMq60G/w49kb+okbj5Xs1A6usz5Pm/I= -github.com/gardener/machine-controller-manager-provider-aws v0.17.0 h1:I4ML6yUOy4aHJ83Gstsryt1D7oRAGiSSR9MNihEUeAk= -github.com/gardener/machine-controller-manager-provider-aws v0.17.0/go.mod h1:GjkJKfEVKoMQmJJVpzRgqftzDitwBt61PWbBH0Vx940= 
-github.com/gardener/machine-controller-manager-provider-azure v0.10.0 h1:P5/SIMAuMwb8EwmfL+r0dyKIsnKE9TYgzWHSAyZwhtw= -github.com/gardener/machine-controller-manager-provider-azure v0.10.0/go.mod h1:SNSgC9bE09B4FA33OeT4JAukXL40aESriA6PTrvFNZg= -github.com/ghodss/yaml v1.0.0 h1:wQHKEahhL6wmXdzwWG11gIVCkOv05bNOh+Rxn0yngAk= +github.com/gardener/machine-controller-manager v0.50.1 h1:lL2q0O+K6jkgYzHPz85wIc9MzASZaiDvLYnTxW7P5ws= +github.com/gardener/machine-controller-manager v0.50.1/go.mod h1:RySZ40AgbNV/wMq60G/w49kb+okbj5Xs1A6usz5Pm/I= +github.com/gardener/machine-controller-manager-provider-aws v0.19.2 h1:OTCRz5bN1vSXqvqWTfBOf4b1SFqda5acbWmxrKR/kj8= +github.com/gardener/machine-controller-manager-provider-aws v0.19.2/go.mod h1:8KZgBD1hM6AdtkuEswWUnU3kv1ePkUXcy6RmUF+32MA= +github.com/gardener/machine-controller-manager-provider-azure v0.11.1 h1:gBOYrJe1FP2EFdRnq5Qt0G7k646WjODrJU4grhRqxsU= +github.com/gardener/machine-controller-manager-provider-azure v0.11.1/go.mod h1:beWt9FY5Bkpt/32Y24Uuj/ApVUH1wCZMh8saqJ98GvI= github.com/ghodss/yaml v1.0.0/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04= github.com/go-gl/glfw v0.0.0-20190409004039-e6da0acd62b1/go.mod h1:vR7hzQXu2zJy9AVAgeJqvqgH9Q5CA+iKCZ2gyEVpxRU= github.com/go-gl/glfw/v3.3/glfw v0.0.0-20191125211704-12ad95a8df72/go.mod h1:tQ2UAYgL5IevRw8kRxooKSPJfGvJ9fJQFa0TUsXzTg8= @@ -279,17 +280,16 @@ github.com/godbus/dbus/v5 v5.0.3/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5x github.com/godbus/dbus/v5 v5.0.4/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA= github.com/godbus/dbus/v5 v5.0.6 h1:mkgN1ofwASrYnJ5W6U/BxG15eXXXjirgZc7CLqkcaro= github.com/godbus/dbus/v5 v5.0.6/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA= -github.com/gofrs/uuid v4.0.0+incompatible h1:1SD/1F5pU8p29ybwgQSwpQk+mwdRrXCYuPhW6m+TnJw= -github.com/gofrs/uuid v4.0.0+incompatible/go.mod h1:b2aQJv3Z4Fp6yNu3cdSllBxTCLRxnplIgP/c0N/04lM= +github.com/gofrs/uuid v4.4.0+incompatible h1:3qXRTX8/NbyulANqlc0lchS1gqAVxRgsuW1YrTJupqA= 
+github.com/gofrs/uuid v4.4.0+incompatible/go.mod h1:b2aQJv3Z4Fp6yNu3cdSllBxTCLRxnplIgP/c0N/04lM= github.com/gogo/googleapis v1.4.1/go.mod h1:2lpHqI5OcWCtVElxXnPt+s8oJvMpySlOyM6xDCrzib4= github.com/gogo/protobuf v1.1.1/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ= github.com/gogo/protobuf v1.2.1/go.mod h1:hp+jE20tsWTFYpLwKvXlhS1hjn+gTNwPg2I6zVXpSg4= github.com/gogo/protobuf v1.3.2 h1:Ov1cvc58UF3b5XjBnZv7+opcTcQFZebYjWzi34vdm4Q= github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q= github.com/golang-jwt/jwt/v4 v4.0.0/go.mod h1:/xlHOz8bRuivTWchD4jCa+NbatV+wEUSzwAxVc6locg= -github.com/golang-jwt/jwt/v4 v4.2.0/go.mod h1:/xlHOz8bRuivTWchD4jCa+NbatV+wEUSzwAxVc6locg= -github.com/golang-jwt/jwt/v4 v4.4.2 h1:rcc4lwaZgFMCZ5jxF9ABolDcIHdBytAFgqFPbSJQAYs= -github.com/golang-jwt/jwt/v4 v4.4.2/go.mod h1:m21LjoU+eqJr34lmDMbreY2eSTRJ1cv77w39/MY0Ch0= +github.com/golang-jwt/jwt/v4 v4.5.0 h1:7cYmW1XlMY7h7ii7UhUyChSgS5wUJEnm9uZVTGqOWzg= +github.com/golang-jwt/jwt/v4 v4.5.0/go.mod h1:m21LjoU+eqJr34lmDMbreY2eSTRJ1cv77w39/MY0Ch0= github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q= github.com/golang/glog v1.0.0 h1:nfP3RFugxnNRyKgeWd4oI1nYvXpxrx8ck8ZrcizshdQ= github.com/golang/glog v1.0.0/go.mod h1:EWib/APOK0SL3dFbYqvxE3UYd8E6s1ouQ7iEp/0LWV4= @@ -332,12 +332,12 @@ github.com/golang/snappy v0.0.3/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEW github.com/google/btree v0.0.0-20180813153112-4030bb1f1f0c/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ= github.com/google/btree v1.0.0/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ= github.com/google/btree v1.0.1 h1:gK4Kx5IaGY9CD5sPJ36FHiBJ6ZXl0kilRiiCj+jdYp4= -github.com/google/cadvisor v0.47.1 h1:YyKnRy/3myRNGOvF1bNF9FFnpjY7Gky5yKi/ZlN+BSo= -github.com/google/cadvisor v0.47.1/go.mod h1:iJdTjcjyKHjLCf7OSTzwP5GxdfrkPusw2x5bwGvuLUw= -github.com/google/cel-go v0.14.0 h1:LFobwuUDslWUHdQ48SXVXvQgPH2X1XVhsgOGNioAEZ4= 
-github.com/google/cel-go v0.14.0/go.mod h1:YzWEoI07MC/a/wj9in8GeVatqfypkldgBlwXh9bCwqY= -github.com/google/gnostic v0.6.9 h1:ZK/5VhkoX835RikCHpSUJV9a+S3e1zLh59YnyWeBW+0= -github.com/google/gnostic v0.6.9/go.mod h1:Nm8234We1lq6iB9OmlgNv3nH91XLLVZHCDayfA3xq+E= +github.com/google/cadvisor v0.47.3 h1:5XKTHBduWlBjmgw07uwEiC+Xa/FRd0MZI37oqlTagO0= +github.com/google/cadvisor v0.47.3/go.mod h1:iJdTjcjyKHjLCf7OSTzwP5GxdfrkPusw2x5bwGvuLUw= +github.com/google/cel-go v0.16.0 h1:DG9YQ8nFCFXAs/FDDwBxmL1tpKNrdlGUM9U3537bX/Y= +github.com/google/cel-go v0.16.0/go.mod h1:HXZKzB0LXqer5lHHgfWAnlYwJaQBDKMjxjulNQzhwhY= +github.com/google/gnostic-models v0.6.8 h1:yo/ABAfM5IMRsS1VnXjTBvUb61tFIHozhlYvRgGre9I= +github.com/google/gnostic-models v0.6.8/go.mod h1:5n7qKqH0f5wFt+aWF8CW6pZLLNOfYuF5OpfBSENuI8U= github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M= github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU= github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU= @@ -439,9 +439,8 @@ github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxv github.com/konsorten/go-windows-terminal-sequences v1.0.3/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ= github.com/kr/logfmt v0.0.0-20140226030751-b84e30acd515/go.mod h1:+0opPa2QZZtGFBFZlji/RkVcI2GknAs/DXo4wKdlNEc= github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo= -github.com/kr/pretty v0.2.0/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI= github.com/kr/pretty v0.2.1/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI= -github.com/kr/pretty v0.3.0 h1:WgNl7dwNpEZ6jJ9k1snq4pZsg7DOEN8hP9Xw0Tsjwk0= +github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE= github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ= github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI= github.com/kr/text v0.2.0 
h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY= @@ -462,8 +461,6 @@ github.com/mistifyio/go-zfs v2.1.2-0.20190413222219-f784269be439+incompatible/go github.com/mitchellh/go-homedir v1.1.0 h1:lukF9ziXFxDFPkA1vsr5zpc1XuPDn/wFntq5mG+4E0Y= github.com/mitchellh/go-homedir v1.1.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0= github.com/mitchellh/mapstructure v1.1.2/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y= -github.com/mitchellh/mapstructure v1.5.0 h1:jeMsZIYE/09sWLaz43PL7Gy6RuMjD2eJVyuac5Z2hdY= -github.com/mitchellh/mapstructure v1.5.0/go.mod h1:bFUtVrKA4DC2yAKiSyO/QUcy7e+RRV2QTWOzhPopBRo= github.com/moby/ipvs v1.1.0 h1:ONN4pGaZQgAx+1Scz5RvWV4Q7Gb+mvfRh3NsPS+1XQQ= github.com/moby/ipvs v1.1.0/go.mod h1:4VJMWuf098bsUMmZEiD4Tjk/O7mOn3l1PTD3s4OoYAs= github.com/moby/spdystream v0.2.0 h1:cjW1zVyyoiM0T7b6UoySUFqzXMoqRckQtXwGPiBhOM8= @@ -499,8 +496,9 @@ github.com/opencontainers/go-digest v1.0.0 h1:apOUWs51W5PlhuyGyz9FCeeBIOUDA/6nW8 github.com/opencontainers/go-digest v1.0.0/go.mod h1:0JzlMkj0TRzQZfJkVvzbP0HBR3IKzErnv2BNG4W4MAM= github.com/opencontainers/image-spec v1.0.2 h1:9yCKha/T5XdGtO0q9Q9a6T5NUCsTn/DrBg0D7ufOcFM= github.com/opencontainers/image-spec v1.0.2/go.mod h1:BtxoFyWECRxE4U/7sNtV5W15zMzWCbyJoFRP3s7yZA0= -github.com/opencontainers/runc v1.1.4 h1:nRCz/8sKg6K6jgYAFLDlXzPeITBZJyX28DBVhWD+5dg= github.com/opencontainers/runc v1.1.4/go.mod h1:1J5XiS+vdZ3wCyZybsuxXZWGrgSr8fFJHLXuG2PsnNg= +github.com/opencontainers/runc v1.1.7 h1:y2EZDS8sNng4Ksf0GUYNhKbTShZJPJg1FiXJNH/uoCk= +github.com/opencontainers/runc v1.1.7/go.mod h1:CbUumNnWCuTGFukNXahoo/RFBZvDAgRh/smNYNOhA50= github.com/opencontainers/runtime-spec v1.0.2/go.mod h1:jwyrGlmzljRJv/Fgzds9SsS/C5hL+LL3ko9hs6T5lQ0= github.com/opencontainers/runtime-spec v1.0.3-0.20200929063507-e6143ca7d51d/go.mod h1:jwyrGlmzljRJv/Fgzds9SsS/C5hL+LL3ko9hs6T5lQ0= github.com/opencontainers/runtime-spec v1.0.3-0.20210326190908-1c3f411f0417/go.mod h1:jwyrGlmzljRJv/Fgzds9SsS/C5hL+LL3ko9hs6T5lQ0= @@ -521,14 +519,16 @@ 
github.com/prometheus/client_golang v1.0.0/go.mod h1:db9x61etRT2tGnBNRi70OPL5Fsn github.com/prometheus/client_golang v1.7.1/go.mod h1:PY5Wy2awLA44sXw4AOSfFBetzPP4j5+D6mVACh+pe2M= github.com/prometheus/client_golang v1.11.0/go.mod h1:Z6t4BnS23TR94PD6BsDNk8yVqroYurpAkEiz0P2BEV0= github.com/prometheus/client_golang v1.12.1/go.mod h1:3Z9XVyYiZYEO+YQWt3RD2R3jrbd179Rt297l4aS6nDY= -github.com/prometheus/client_golang v1.14.0 h1:nJdhIvne2eSX/XRAFV9PcvFFRbrjbcTUj0VP62TMhnw= github.com/prometheus/client_golang v1.14.0/go.mod h1:8vpkKitgIVNcqrRBWh1C4TIUQgYNtG/XQE4E/Zae36Y= +github.com/prometheus/client_golang v1.16.0 h1:yk/hx9hDbrGHovbci4BY+pRMfSuuat626eFsHb7tmT8= +github.com/prometheus/client_golang v1.16.0/go.mod h1:Zsulrv/L9oM40tJ7T815tM89lFEugiJ9HzIqaAx4LKc= github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo= github.com/prometheus/client_model v0.0.0-20190129233127-fd36f4220a90/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA= github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA= github.com/prometheus/client_model v0.2.0/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA= -github.com/prometheus/client_model v0.3.0 h1:UBgGFHqYdG/TPFD1B1ogZywDqEkwp3fBMvqdiQ7Xew4= github.com/prometheus/client_model v0.3.0/go.mod h1:LDGWKZIo7rky3hgvBe+caln+Dr3dPggB5dvjtD7w9+w= +github.com/prometheus/client_model v0.4.0 h1:5lQXD3cAg1OXBf4Wq03gTrXHeaV0TQvGfUooCfx1yqY= +github.com/prometheus/client_model v0.4.0/go.mod h1:oMQmHW1/JoDwqLtg57MGgP/Fb1CJEYF2imWWhWtMkYU= github.com/prometheus/common v0.0.0-20181113130724-41aa239b4cce/go.mod h1:daVV7qP5qjZbuso7PdcryaAu0sAZbrN9i7WWcTMWvro= github.com/prometheus/common v0.4.0/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4= github.com/prometheus/common v0.4.1/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4= @@ -536,8 +536,8 @@ github.com/prometheus/common v0.10.0/go.mod 
h1:Tlit/dnDKsSWFlCLTWaA1cyBgKHSMdTB8 github.com/prometheus/common v0.26.0/go.mod h1:M7rCNAaPfAosfx8veZJCuw84e35h3Cfd9VFqTh1DIvc= github.com/prometheus/common v0.32.1/go.mod h1:vu+V0TpY+O6vW9J44gczi3Ap/oXXR10b+M/gUGO4Hls= github.com/prometheus/common v0.37.0/go.mod h1:phzohg0JFMnBEFGxTDbfu3QyL5GI8gTQJFhYO5B3mfA= -github.com/prometheus/common v0.42.0 h1:EKsfXEYo4JpWMHH5cg+KOUWeuJSov1Id8zGR8eeI1YM= -github.com/prometheus/common v0.42.0/go.mod h1:xBwqVerjNdUDjgODMpudtOMwlOwf2SaTr1yjz4b7Zbc= +github.com/prometheus/common v0.44.0 h1:+5BrQJwiBB9xsMygAB3TNvpQKOwlkc25LbISbrdOOfY= +github.com/prometheus/common v0.44.0/go.mod h1:ofAIvZbQ1e/nugmZGz4/qCb9Ap1VoSTIO7x0VV9VvuY= github.com/prometheus/procfs v0.0.0-20181005140218-185b4288413d/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk= github.com/prometheus/procfs v0.0.0-20190507164030-5867b95ac084/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA= github.com/prometheus/procfs v0.0.2/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA= @@ -545,8 +545,8 @@ github.com/prometheus/procfs v0.1.3/go.mod h1:lV6e/gmhEcM9IjHGsFOCxxuZ+z1YqCvr4O github.com/prometheus/procfs v0.6.0/go.mod h1:cz+aTbrPOrUb4q7XlbU9ygM+/jj0fzG6c1xBZuNvfVA= github.com/prometheus/procfs v0.7.3/go.mod h1:cz+aTbrPOrUb4q7XlbU9ygM+/jj0fzG6c1xBZuNvfVA= github.com/prometheus/procfs v0.8.0/go.mod h1:z7EfXMXOkbkqb9IINtpCn86r/to3BnA0uaxHdg830/4= -github.com/prometheus/procfs v0.9.0 h1:wzCHvIvM5SxWqYvwgVL7yJY8Lz3PKn49KQtpgMYJfhI= -github.com/prometheus/procfs v0.9.0/go.mod h1:+pB4zwohETzFnmlpe6yd2lSc+0/46IYZRB/chUwxUZY= +github.com/prometheus/procfs v0.10.1 h1:kYK1Va/YMlutzCGazswoHKo//tZVlFpKYh+PymziUAg= +github.com/prometheus/procfs v0.10.1/go.mod h1:nwNm2aOCAYw8uTR/9bWRREkZFxAUcWzPHWJq+XBB/FM= github.com/prometheus/tsdb v0.7.1/go.mod h1:qhTCs0VvXwvX/y3TZrWD7rabWM+ijKTux40TwIPHuXU= github.com/rogpeppe/fastuuid v0.0.0-20150106093220-6724a57986af/go.mod h1:XWv6SoW27p1b0cqNHllgS5HIMJraePCO15w5zCzIWYg= github.com/rogpeppe/fastuuid v1.2.0/go.mod 
h1:jVj6XXZzXRy/MSR5jhDC/2q6DgLz+nrA6LYCDYWNEvQ= @@ -558,8 +558,9 @@ github.com/russross/blackfriday/v2 v2.0.1/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQD github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM= github.com/satori/go.uuid v1.2.0 h1:0uYX9dsZ2yD7q2RtLRtPSdGDWzjeM3TbMJP9utgA0ww= github.com/satori/go.uuid v1.2.0/go.mod h1:dA0hQrYB0VpLJoorglMZABFdXlWrHn1NEOzdhQKdks0= -github.com/seccomp/libseccomp-golang v0.9.2-0.20220502022130-f33da4d89646 h1:RpforrEYXWkmGwJHIGnLZ3tTWStkjVVstwzNGqxX2Ds= github.com/seccomp/libseccomp-golang v0.9.2-0.20220502022130-f33da4d89646/go.mod h1:JA8cRccbGaA1s33RQf7Y1+q9gHmZX1yB/z9WDN1C6fg= +github.com/seccomp/libseccomp-golang v0.10.0 h1:aA4bp+/Zzi0BnWZ2F1wgNBs5gTpm+na2rWM6M9YjLpY= +github.com/seccomp/libseccomp-golang v0.10.0/go.mod h1:JA8cRccbGaA1s33RQf7Y1+q9gHmZX1yB/z9WDN1C6fg= github.com/shurcooL/sanitized_anchor_name v1.0.0/go.mod h1:1NzhyTcUVG4SuEtjjoZeVRXNmyL/1OwPU0+IJeTBvfc= github.com/sirupsen/logrus v1.2.0/go.mod h1:LxeOpSwHxABJmUn/MG1IvRgCAasNZTLOkJPxbbu5VWo= github.com/sirupsen/logrus v1.4.1/go.mod h1:ni0Sbl8bgC9z8RoU9G6nDWqqs/fq4eDPysMBDgk/93Q= @@ -583,7 +584,6 @@ github.com/spf13/pflag v1.0.3/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnIn github.com/spf13/pflag v1.0.5 h1:iy+VFUOCP1a+8yFto/drg2CJ5u0yRoB7fZw3DKv/JXA= github.com/spf13/pflag v1.0.5/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg= github.com/spf13/viper v1.4.0/go.mod h1:PTJ7Z/lr49W6bUbkmS1V3by4uWynFiR9p7+dSq/yZzE= -github.com/stoewer/go-strcase v1.2.0/go.mod h1:IBiWB2sKIp3wVVQ3Y035++gc+knqhUQag1KpM8ahLw8= github.com/stoewer/go-strcase v1.3.0 h1:g0eASXYtp+yvN9fK8sH94oCIk0fau9uV1/ZdJ0AVEzs= github.com/stoewer/go-strcase v1.3.0/go.mod h1:fAH5hQ5pehh+j3nZfvwdk2RgEgQjAoM8wodgtPmh1xo= github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME= @@ -612,13 +612,10 @@ github.com/urfave/cli v1.22.2/go.mod h1:Gos4lmkARVdJ6EkW0WaNv/tZAAMe9V7XWyB60NtX github.com/vishvananda/netlink v1.1.0 
h1:1iyaYNBLmP6L0220aDnYQpo1QEV4t4hJ+xEEhhJH8j0= github.com/vishvananda/netlink v1.1.0/go.mod h1:cTgwzPIzzgDAYoQrMm0EdrjRUBkTqKYppBueQtXaqoE= github.com/vishvananda/netns v0.0.0-20191106174202-0a2b9b5464df/go.mod h1:JP3t17pCcGlemwknint6hfoeCVQrEMVwxRLRjXpq+BU= -github.com/vishvananda/netns v0.0.2 h1:Cn05BRLm+iRP/DZxyVSsfVyrzgjDbwHwkVt38qvXnNI= -github.com/vishvananda/netns v0.0.2/go.mod h1:yitZXdAVI+yPFSb4QUe+VW3vOVl4PZPNcBgbPxAtJxw= +github.com/vishvananda/netns v0.0.4 h1:Oeaw1EM2JMxD51g9uhtC0D7erkIjgmj8+JZc26m1YX8= +github.com/vishvananda/netns v0.0.4/go.mod h1:SpkAiCQRtJ6TvvxPnOSyH3BMl6unz3xZlaprSwhNNJM= github.com/vmware/govmomi v0.30.0 h1:Fm8ugPnnlMSTSceDKY9goGvjmqc6eQLPUSUeNXdpeXA= github.com/vmware/govmomi v0.30.0/go.mod h1:F7adsVewLNHsW/IIm7ziFURaXDaHEwcc+ym4r3INMdY= -github.com/xeipuuv/gojsonpointer v0.0.0-20180127040702-4e3ac2762d5f/go.mod h1:N2zxlSyiKSe5eX1tZViRH5QA0qijqEDrYZiPEAiq3wU= -github.com/xeipuuv/gojsonreference v0.0.0-20180127040603-bd5ef7bd5415/go.mod h1:GwrjFmJcFw6At/Gs6z4yjiIwzuJ1/+UwLxMQDVQXShQ= -github.com/xeipuuv/gojsonschema v1.2.0/go.mod h1:anYRn/JVcOK2ZgGU+IjEV4nwlhoK5sQluxsYJ78Id3Y= github.com/xiang90/probing v0.0.0-20190116061207-43a291ad63a2 h1:eY9dn8+vbi4tKz5Qo6v2eYzo7kUS51QINcR5jNpbZS8= github.com/xiang90/probing v0.0.0-20190116061207-43a291ad63a2/go.mod h1:UETIi67q53MR2AWcXfiuqkDkRtnGDLqkBTpCHuJHxtU= github.com/xordataexchange/crypt v0.0.3-0.20170626215501-b2862e3d0a77/go.mod h1:aYKd//L2LvnjZzWKhF00oedf4jCCReLcmhLdhm1A27Q= @@ -629,17 +626,17 @@ github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9dec github.com/yuin/goldmark v1.3.5/go.mod h1:mwnBkeHKe2W/ZEtQ+71ViKU8L12m81fl3OWwC1Zlc8k= github.com/yuin/goldmark v1.4.13/go.mod h1:6yULJ656Px+3vBD8DxQVa3kxgyrAnzto9xy5taEt/CY= go.etcd.io/bbolt v1.3.2/go.mod h1:IbVyRI1SCnLcuJnV2u8VeU0CEYM7e686BmAb1XKL+uU= -go.etcd.io/bbolt v1.3.6 h1:/ecaJf0sk1l4l6V4awd65v2C3ILy7MSj+s/x1ADCIMU= -go.etcd.io/etcd/api/v3 v3.5.7 h1:sbcmosSVesNrWOJ58ZQFitHMdncusIifYcrBfwrlJSY= 
-go.etcd.io/etcd/api/v3 v3.5.7/go.mod h1:9qew1gCdDDLu+VwmeG+iFpL+QlpHTo7iubavdVDgCAA= -go.etcd.io/etcd/client/pkg/v3 v3.5.7 h1:y3kf5Gbp4e4q7egZdn5T7W9TSHUvkClN6u+Rq9mEOmg= -go.etcd.io/etcd/client/pkg/v3 v3.5.7/go.mod h1:o0Abi1MK86iad3YrWhgUsbGx1pmTS+hrORWc2CamuhY= -go.etcd.io/etcd/client/v2 v2.305.7 h1:AELPkjNR3/igjbO7CjyF1fPuVPjrblliiKj+Y6xSGOU= -go.etcd.io/etcd/client/v3 v3.5.7 h1:u/OhpiuCgYY8awOHlhIhmGIGpxfBU/GZBUP3m/3/Iz4= -go.etcd.io/etcd/client/v3 v3.5.7/go.mod h1:sOWmj9DZUMyAngS7QQwCyAXXAL6WhgTOPLNS/NabQgw= -go.etcd.io/etcd/pkg/v3 v3.5.7 h1:obOzeVwerFwZ9trMWapU/VjDcYUJb5OfgC1zqEGWO/0= -go.etcd.io/etcd/raft/v3 v3.5.7 h1:aN79qxLmV3SvIq84aNTliYGmjwsW6NqJSnqmI1HLJKc= -go.etcd.io/etcd/server/v3 v3.5.7 h1:BTBD8IJUV7YFgsczZMHhMTS67XuA4KpRquL0MFOJGRk= +go.etcd.io/bbolt v1.3.7 h1:j+zJOnnEjF/kyHlDDgGnVL/AIqIJPq8UoB2GSNfkUfQ= +go.etcd.io/etcd/api/v3 v3.5.9 h1:4wSsluwyTbGGmyjJktOf3wFQoTBIURXHnq9n/G/JQHs= +go.etcd.io/etcd/api/v3 v3.5.9/go.mod h1:uyAal843mC8uUVSLWz6eHa/d971iDGnCRpmKd2Z+X8k= +go.etcd.io/etcd/client/pkg/v3 v3.5.9 h1:oidDC4+YEuSIQbsR94rY9gur91UPL6DnxDCIYd2IGsE= +go.etcd.io/etcd/client/pkg/v3 v3.5.9/go.mod h1:y+CzeSmkMpWN2Jyu1npecjB9BBnABxGM4pN8cGuJeL4= +go.etcd.io/etcd/client/v2 v2.305.9 h1:YZ2OLi0OvR0H75AcgSUajjd5uqKDKocQUqROTG11jIo= +go.etcd.io/etcd/client/v3 v3.5.9 h1:r5xghnU7CwbUxD/fbUtRyJGaYNfDun8sp/gTr1hew6E= +go.etcd.io/etcd/client/v3 v3.5.9/go.mod h1:i/Eo5LrZ5IKqpbtpPDuaUnDOUv471oDg8cjQaUr2MbA= +go.etcd.io/etcd/pkg/v3 v3.5.9 h1:6R2jg/aWd/zB9+9JxmijDKStGJAPFsX3e6BeJkMi6eQ= +go.etcd.io/etcd/raft/v3 v3.5.9 h1:ZZ1GIHoUlHsn0QVqiRysAm3/81Xx7+i2d7nSdWxlOiI= +go.etcd.io/etcd/server/v3 v3.5.9 h1:vomEmmxeztLtS5OEH7d0hBAg4cjVIu9wXuNzUZx2ZA0= go.opencensus.io v0.21.0/go.mod h1:mSImk1erAIZhrmZN+AvHh14ztQfjbGwt4TtuofqLduU= go.opencensus.io v0.22.0/go.mod h1:+kGneAE2xo2IficOXnaByMWTGM9T73dGwxeWcUqIpI8= go.opencensus.io v0.22.2/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw= @@ -694,6 +691,7 @@ golang.org/x/crypto v0.0.0-20201002170205-7f63de1d35b0/go.mod 
h1:LzIPMQfyMNhhGPh golang.org/x/crypto v0.0.0-20210513164829-c07d793c2f9a/go.mod h1:P+XmwS30IXTQdn5tA2iutPOUgjI07+tq3H3K9MVA1s8= golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc= golang.org/x/crypto v0.0.0-20220722155217-630584e8d5aa/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4= +golang.org/x/crypto v0.6.0/go.mod h1:OFC/31mSvZgRz0V1QTNCzfAI1aIRzbiufJtkMIlEp58= golang.org/x/crypto v0.12.0 h1:tFM/ta59kqch6LlvYnPa0yx5a83cL2nHflFhYKvv9Yk= golang.org/x/crypto v0.12.0/go.mod h1:NF0Gs7EO5K4qLn+Ylc+fih8BSTeIjAP05siRnAh98yw= golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA= @@ -735,6 +733,7 @@ golang.org/x/mod v0.4.1/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= golang.org/x/mod v0.4.2/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4= golang.org/x/mod v0.12.0 h1:rmsUpXtvNzj340zd98LZ4KntptpfRHwpFOHG188oHXc= +golang.org/x/mod v0.12.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs= golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= golang.org/x/net v0.0.0-20181114220301-adae6a3d119a/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= @@ -776,13 +775,13 @@ golang.org/x/net v0.0.0-20210316092652-d523dce5a7f4/go.mod h1:RBQZq4jEuRlivfhVLd golang.org/x/net v0.0.0-20210405180319-a5a99cb37ef4/go.mod h1:p54w0d4576C0XHj96bSt6lcn1PtDYWL6XObtHCRCNQM= golang.org/x/net v0.0.0-20210503060351-7fd8e65b6420/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y= golang.org/x/net v0.0.0-20210525063256-abc453219eb5/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y= -golang.org/x/net v0.0.0-20210805182204-aaa1db679c0d/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y= 
golang.org/x/net v0.0.0-20211112202133-69e39bad7dc2/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y= golang.org/x/net v0.0.0-20220127200216-cd36cc0744dd/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk= golang.org/x/net v0.0.0-20220225172249-27dd8689420f/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk= golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c= golang.org/x/net v0.1.0/go.mod h1:Cx3nUiGt4eDBEyega/BKRp+/AlGL8hYe7U9odMt2Cco= golang.org/x/net v0.4.0/go.mod h1:MBQ8lrhLObU/6UmLb4fmbmk5OcyYmqtbGd/9yIeKjEE= +golang.org/x/net v0.6.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs= golang.org/x/net v0.14.0 h1:BONx9s002vGdD9umnlX1Po8vOZmrgH34qlHcD1MfK14= golang.org/x/net v0.14.0/go.mod h1:PpSgVXXLK0OxS0F31C1/tv6XNguvCrnXIDrFMspZIUI= golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U= @@ -804,8 +803,8 @@ golang.org/x/oauth2 v0.0.0-20210819190943-2bc19b11175f/go.mod h1:KelEdhl1UZF7XfJ golang.org/x/oauth2 v0.0.0-20211005180243-6b3c2da341f1/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A= golang.org/x/oauth2 v0.0.0-20211104180415-d3ed0bb246c8/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A= golang.org/x/oauth2 v0.0.0-20220223155221-ee480838109b/go.mod h1:DAh4E804XQdzx2j+YRIaUnCqCV2RuMz24cGBJ5QYIrc= -golang.org/x/oauth2 v0.7.0 h1:qe6s0zUXlPX80/dITx3440hWZ7GwMwgDDyrSGTPJG/g= -golang.org/x/oauth2 v0.7.0/go.mod h1:hPLQkd9LyjfXTiRohC/41GhcFqxisoUQ99sCUOHO9x4= +golang.org/x/oauth2 v0.8.0 h1:6dkIjl3j3LtZ/O3sTgZTMsLKSftL/B8Zgq4huOIIUu8= +golang.org/x/oauth2 v0.8.0/go.mod h1:yr7u4HXZRm1R1kBWqr/xKNqewf0plRYoB7sla+BCIXE= golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod 
h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= @@ -892,14 +891,15 @@ golang.org/x/sys v0.0.0-20220715151400-c0bba94af5f8/go.mod h1:oPkhp1MJrh7nUepCBc golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.0.0-20220908164124-27713097b956/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.1.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= -golang.org/x/sys v0.2.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.3.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.12.0 h1:CM0HF96J0hcLAwsHPJZjfdNzs0gftsLfgKt57wWHJ0o= golang.org/x/sys v0.12.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo= golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8= golang.org/x/term v0.1.0/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8= golang.org/x/term v0.3.0/go.mod h1:q750SLmJuPmVoN1blW3UFBPREJfb1KmY3vwxfr+nFDA= +golang.org/x/term v0.5.0/go.mod h1:jMB1sMXY+tzblOD4FWmEbocvup2/aLOaQEp7JmGp78k= golang.org/x/term v0.11.0 h1:F9tnn/DA/Im8nCwm+fX+1/eBwi4qFjRT++MhtVC4ZX0= golang.org/x/term v0.11.0/go.mod h1:zC9APTIj3jG3FdV/Ons+XE1riIZXG4aZ4GTHiPZJPIU= golang.org/x/text v0.0.0-20170915032832-14c0d48ead0c/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= @@ -913,6 +913,7 @@ golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ= golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ= golang.org/x/text v0.4.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8= golang.org/x/text v0.5.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8= +golang.org/x/text v0.7.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8= golang.org/x/text 
v0.12.0 h1:k+n5B8goJNdU7hSvEtMUz3d1Q6D/XW4COJSJR6fN0mc= golang.org/x/text v0.12.0/go.mod h1:TvPlkZtksWOMsz7fbANvkp4WM8x/WCo/om8BMLbz+aE= golang.org/x/time v0.0.0-20181108054448-85acf8d2951c/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ= @@ -1080,10 +1081,13 @@ google.golang.org/genproto v0.0.0-20210909211513-a8c4777a87af/go.mod h1:eFjDcFEc google.golang.org/genproto v0.0.0-20210924002016-3dee208752a0/go.mod h1:5CzLGKJ67TSI2B9POpiiyGha0AjJvZIUgRMt1dSmuhc= google.golang.org/genproto v0.0.0-20211021150943-2b146023228c/go.mod h1:5CzLGKJ67TSI2B9POpiiyGha0AjJvZIUgRMt1dSmuhc= google.golang.org/genproto v0.0.0-20211118181313-81c1377c94b1/go.mod h1:5CzLGKJ67TSI2B9POpiiyGha0AjJvZIUgRMt1dSmuhc= -google.golang.org/genproto v0.0.0-20220107163113-42d7afdf6368/go.mod h1:5CzLGKJ67TSI2B9POpiiyGha0AjJvZIUgRMt1dSmuhc= google.golang.org/genproto v0.0.0-20220502173005-c8bf987b8c21/go.mod h1:RAyBrSAP7Fh3Nc84ghnVLDPuV51xc9agzmm4Ph6i0Q4= -google.golang.org/genproto v0.0.0-20230410155749-daa745c078e1 h1:KpwkzHKEF7B9Zxg18WzOa7djJ+Ha5DzthMyZYQfEn2A= -google.golang.org/genproto v0.0.0-20230410155749-daa745c078e1/go.mod h1:nKE/iIaLqn2bQwXBg8f1g2Ylh6r5MN5CmZvuzZCgsCU= +google.golang.org/genproto v0.0.0-20230526161137-0005af68ea54 h1:9NWlQfY2ePejTmfwUH1OWwmznFa+0kKcHGPDvcPza9M= +google.golang.org/genproto v0.0.0-20230526161137-0005af68ea54/go.mod h1:zqTuNwFlFRsw5zIts5VnzLQxSRqh+CGOTVMlYbY0Eyk= +google.golang.org/genproto/googleapis/api v0.0.0-20230525234035-dd9d682886f9 h1:m8v1xLLLzMe1m5P+gCTF8nJB9epwZQUBERm20Oy1poQ= +google.golang.org/genproto/googleapis/api v0.0.0-20230525234035-dd9d682886f9/go.mod h1:vHYtlOoi6TsQ3Uk2yxR7NI5z8uoV+3pZtR4jmHIkRig= +google.golang.org/genproto/googleapis/rpc v0.0.0-20230525234030-28d5490b6b19 h1:0nDDozoAU19Qb2HwhXadU8OcsiO/09cnTqhUtq2MEOM= +google.golang.org/genproto/googleapis/rpc v0.0.0-20230525234030-28d5490b6b19/go.mod h1:66JfowdXAEgad5O9NnYcsNPLCPZJD++2L9X0PCMODrA= google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c= 
google.golang.org/grpc v1.20.1/go.mod h1:10oTOabMzJvdu6/UiuZezV6QK5dSlG84ov/aaiqXj38= google.golang.org/grpc v1.21.0/go.mod h1:oYelfM1adQP15Ek0mdvEgi9Df8B9CZIaU1084ijfRaM= @@ -1160,7 +1164,6 @@ gopkg.in/yaml.v2 v2.3.0/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= gopkg.in/yaml.v2 v2.4.0 h1:D8xgwECY7CYvx+Y2n4sBz93Jn9JRvxdiyyo8CTfuKaY= gopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ= gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= -gopkg.in/yaml.v3 v3.0.0-20200615113413-eeeca48fe776/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA= gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= gotest.tools/v3 v3.0.2/go.mod h1:3SzNCllyD9/Y+b5r9JIKQ474KzkZyqLqEfYqMsX94Bk= @@ -1172,52 +1175,52 @@ honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc/go.mod h1:rf3lG4BRIbNafJWh honnef.co/go/tools v0.0.1-2019.2.3/go.mod h1:a3bituU0lyd329TUQxRnasdCoJDkEUEAqEt0JzvZhAg= honnef.co/go/tools v0.0.1-2020.1.3/go.mod h1:X/FiERA/W4tHapMX5mGpAtMSVEeEUOyHaw9vFzvIQ3k= honnef.co/go/tools v0.0.1-2020.1.4/go.mod h1:X/FiERA/W4tHapMX5mGpAtMSVEeEUOyHaw9vFzvIQ3k= -k8s.io/api v0.27.1 h1:Z6zUGQ1Vd10tJ+gHcNNNgkV5emCyW+v2XTmn+CLjSd0= -k8s.io/api v0.27.1/go.mod h1:z5g/BpAiD+f6AArpqNjkY+cji8ueZDU/WV1jcj5Jk4E= -k8s.io/apimachinery v0.28.0-alpha.0 h1:GZf6I49h9Sjl2Rjc+jY72nEYApr1pCKEHoOP/KxWWrA= -k8s.io/apimachinery v0.28.0-alpha.0/go.mod h1:5ikh59fK3AJ287GUvpUsryoMFtH9zj/ARfWCo3AyXTM= -k8s.io/apiserver v0.27.1 h1:phY+BtXjjzd+ta3a4kYbomC81azQSLa1K8jo9RBw7Lg= -k8s.io/apiserver v0.27.1/go.mod h1:UGrOjLY2KsieA9Fw6lLiTObxTb8Z1xEba4uqSuMY0WU= -k8s.io/client-go v0.27.1 h1:oXsfhW/qncM1wDmWBIuDzRHNS2tLhK3BZv512Nc59W8= -k8s.io/client-go v0.27.1/go.mod h1:f8LHMUkVb3b9N8bWturc+EDtVVVwZ7ueTVquFAJb2vA= -k8s.io/cloud-provider v0.27.1 h1:482W9e2Yp8LDgTUKrXAxT+nH4pHS2TiBElI/CnfGWac= -k8s.io/cloud-provider v0.27.1/go.mod 
h1:oN7Zci2Ls2dorwSNd2fMiW/6DA40+F4o2QL70p63bqo= -k8s.io/cloud-provider-aws v1.27.1 h1:n2IzE5f/aGV+KdQWKTfb/1XR/04eMR+Vx+A721LwM/w= -k8s.io/cloud-provider-aws v1.27.1/go.mod h1:9vUb5mnVnReSRDBWcBxB1b0HOeEc472iOPmrnwpN9SA= -k8s.io/component-base v0.27.1 h1:kEB8p8lzi4gCs5f2SPU242vOumHJ6EOsOnDM3tTuDTM= -k8s.io/component-base v0.27.1/go.mod h1:UGEd8+gxE4YWoigz5/lb3af3Q24w98pDseXcXZjw+E0= -k8s.io/component-helpers v0.27.1 h1:uY63v834MAHuf3fBiKGQGPq/cToU5kY5SW/58Xv0gl4= -k8s.io/component-helpers v0.27.1/go.mod h1:oOpwSYW1AdL+pU7abHADwX1ZcJl+5c8mnIkvoFZNFWA= -k8s.io/controller-manager v0.27.1 h1:+4OGWAzg4JVLEauPSmyQFIfrYrYQoUsC4MbHmRuPaFU= -k8s.io/controller-manager v0.27.1/go.mod h1:oe9vKl0RPiedlCXmeVbhkDV2yX8r7C4K/B8OGaKdYtY= -k8s.io/cri-api v0.28.0-alpha.0 h1:Z8LNc0JDsR+Y/ONTfHYW/xQoT/ZOieY2jBj9M/0eJM4= -k8s.io/cri-api v0.28.0-alpha.0/go.mod h1:+Ts/AVYbIo04S86XbTD73UPp/DkTiYxtsFeOFEu32L0= -k8s.io/csi-translation-lib v0.27.1 h1:D9Hw2iBZzFPriFH0FDyUFdfflYAW6S032P6Yps9sKq8= -k8s.io/csi-translation-lib v0.27.1/go.mod h1:MyBDHVDz24OOSc4FdmSZA2nkfNu+Ysu8BqjdOAcKoT8= -k8s.io/dynamic-resource-allocation v0.27.1 h1:4HsIhgO49Yv+C1Zsw4R18tzXgtuEfwChqOwDIi/AcxE= -k8s.io/dynamic-resource-allocation v0.27.1/go.mod h1:XRA0ZE3wVNd2yYSnM8rFWStrrGGvHUhz9wfmHUXkgGY= +k8s.io/api v0.28.0 h1:3j3VPWmN9tTDI68NETBWlDiA9qOiGJ7sdKeufehBYsM= +k8s.io/api v0.28.0/go.mod h1:0l8NZJzB0i/etuWnIXcwfIv+xnDOhL3lLW919AWYDuY= +k8s.io/apiextensions-apiserver v0.28.0 h1:CszgmBL8CizEnj4sj7/PtLGey6Na3YgWyGCPONv7E9E= +k8s.io/apiextensions-apiserver v0.28.0/go.mod h1:uRdYiwIuu0SyqJKriKmqEN2jThIJPhVmOWETm8ud1VE= +k8s.io/apimachinery v0.28.0 h1:ScHS2AG16UlYWk63r46oU3D5y54T53cVI5mMJwwqFNA= +k8s.io/apimachinery v0.28.0/go.mod h1:X0xh/chESs2hP9koe+SdIAcXWcQ+RM5hy0ZynB+yEvw= +k8s.io/apiserver v0.28.0 h1:wVh7bK6Xj7hq+5ntInysTeQRAOqqFoKGUOW2yj8DXrY= +k8s.io/apiserver v0.28.0/go.mod h1:MvLmtxhQ0Tb1SZk4hfJBjs8iqr5nhYeaFSaoEcz7Lk4= +k8s.io/client-go v0.28.0 h1:ebcPRDZsCjpj62+cMk1eGNX1QkMdRmQ6lmz5BLoFWeM= +k8s.io/client-go 
v0.28.0/go.mod h1:0Asy9Xt3U98RypWJmU1ZrRAGKhP6NqDPmptlAzK2kMc= +k8s.io/cloud-provider v0.28.0 h1:BTIW7b757T+VXB5yqJeajPXsNOmeooopUgfzQueiWvk= +k8s.io/cloud-provider v0.28.0/go.mod h1:u0MGqdlutkTmCJyNrCzIMJ+OhrwQE9x5X8mBTN0R7us= +k8s.io/cloud-provider-aws v1.27.0 h1:PF8YrH8QcN6JoXB3Xxlaz84SBDYMPunJuCc0cPuCWXA= +k8s.io/cloud-provider-aws v1.27.0/go.mod h1:9vUb5mnVnReSRDBWcBxB1b0HOeEc472iOPmrnwpN9SA= +k8s.io/component-base v0.28.0 h1:HQKy1enJrOeJlTlN4a6dU09wtmXaUvThC0irImfqyxI= +k8s.io/component-base v0.28.0/go.mod h1:Yyf3+ZypLfMydVzuLBqJ5V7Kx6WwDr/5cN+dFjw1FNk= +k8s.io/component-helpers v0.28.0 h1:ubHUiEF7H/DOx4471pHHsLlH3EGu8jlEvnld5PS4KdI= +k8s.io/component-helpers v0.28.0/go.mod h1:i7hJ/oFhZImqUWwjLFG/yGkLpJ3KFoirY2DLYIMql6Q= +k8s.io/controller-manager v0.28.0 h1:55rmyzwEOnhAZLsuDdDHwVT2sGzkleFY0SqZFKsLN5U= +k8s.io/controller-manager v0.28.0/go.mod h1:WrABGmrpEWBap27eu533RpW5lBnVT5K+u2oc2bDwcmU= +k8s.io/cri-api v0.28.0 h1:TVidtHNi425IaKF50oDD5hRvQuK7wB4NQAfTVOcr9QA= +k8s.io/cri-api v0.28.0/go.mod h1:xXygwvSOGcT/2KXg8sMYTHns2xFem3949kCQn5IS1k4= +k8s.io/csi-translation-lib v0.28.0 h1:X3Kr5aHvH4xutNg4pgdc6RP0h3FOlJGDeui5CLfBeO4= +k8s.io/csi-translation-lib v0.28.0/go.mod h1:HvnmmGZoTobqMU4MD3yQFJ4U4Dq3PxnCfVbJUjky3K0= +k8s.io/dynamic-resource-allocation v0.28.0 h1:LQjb8kRQcvorEHlJVfHLSGued8taykmqROMGozMpHsY= +k8s.io/dynamic-resource-allocation v0.28.0/go.mod h1:x8xyQvoo22hfBEZlKt3nqkzfoTcMOJxiD9mSgDgrd2w= k8s.io/klog/v2 v2.0.0/go.mod h1:PBfzABfn139FHAV07az/IF9Wp1bkk3vpT2XSJ76fSDE= k8s.io/klog/v2 v2.80.1/go.mod h1:y1WjHnz7Dj687irZUWR/WLkLc5N1YHtjLdmgWjndZn0= -k8s.io/klog/v2 v2.90.1 h1:m4bYOKall2MmOiRaR1J+We67Do7vm9KiQVlT96lnHUw= -k8s.io/klog/v2 v2.90.1/go.mod h1:y1WjHnz7Dj687irZUWR/WLkLc5N1YHtjLdmgWjndZn0= -k8s.io/kms v0.27.1 h1:JTSQbJb+mcobScQwF0bOmZhIwP17k8GvBsiLlA6SQqw= -k8s.io/kms v0.27.1/go.mod h1:VuTsw0uHlSycKLCkypCGxfFCjLfzf/5YMeATECd/zJA= -k8s.io/kube-openapi v0.0.0-20230501164219-8b0f38b5fd1f h1:2kWPakN3i/k81b0gvD5C5FJ2kxm1WrQFanWchyKuqGg= 
-k8s.io/kube-openapi v0.0.0-20230501164219-8b0f38b5fd1f/go.mod h1:byini6yhqGC14c3ebc/QwanvYwhuMWF6yz2F8uwW8eg= -k8s.io/kube-proxy v0.27.1 h1:awlTLXvZhM/A4Nsu0ma34uKR4pHxigj9vhuQ9BHfwUk= -k8s.io/kube-proxy v0.27.1/go.mod h1:6hJ7Fnt3QtD+5cpGN6MgZOOO9KbD6TvF0/BPHk+lYtQ= -k8s.io/kube-scheduler v0.27.1 h1:Tq7ff+jUZaK8fejL4uOy1CC2B+bz2acKQ7Bf7fCtnhs= -k8s.io/kube-scheduler v0.27.1/go.mod h1:NS0RUYehdV7o1YQXO2/Ym/JAq2+nA/zrVABjbVyLJA8= -k8s.io/kubectl v0.27.1 h1:9T5c5KdpburYiW8XKQSH0Uly1kMNE90aGSnbYUZNdcA= -k8s.io/kubectl v0.27.1/go.mod h1:QsAkSmrRsKTPlAFzF8kODGDl4p35BIwQnc9XFhkcsy8= -k8s.io/kubelet v0.27.1 h1:IkfZ0N9CX/g6EDis7nJw8ZsOuHcpFA6cm0pXQx0g5TY= -k8s.io/kubelet v0.27.1/go.mod h1:g3cIhpZPawo/MvsdnmcLmqDJvDPdbUFkzfyLNz03nQg= -k8s.io/kubernetes v1.27.1 h1:DFeW4Lv+kh5DyYcezOzwmQAbC3VqXAxnMyZabALiRSc= -k8s.io/kubernetes v1.27.1/go.mod h1:TTwPjSCKQ+a/NTiFKRGjvOnEaQL8wIG40nsYH8Er4bA= -k8s.io/legacy-cloud-providers v0.27.1 h1:P0bzBX7gSx0yPeG9KDSspiy/M23gvLPLbwe4pYOS9bQ= -k8s.io/legacy-cloud-providers v0.27.1/go.mod h1:Vhh/i+Qt/ayPR40c2q3pMswg4/W8AnHsET45SEokSig= -k8s.io/mount-utils v0.27.3 h1:oubkDKLTZUneW27wgyOmp8a1AAZj04vGmtq+YW8wdvY= -k8s.io/mount-utils v0.27.3/go.mod h1:vmcjYdi2Vg1VTWY7KkhvwJVY6WDHxb/QQhiQKkR8iNs= +k8s.io/klog/v2 v2.100.1 h1:7WCHKK6K8fNhTqfBhISHQ97KrnJNFZMcQvKp7gP/tmg= +k8s.io/klog/v2 v2.100.1/go.mod h1:y1WjHnz7Dj687irZUWR/WLkLc5N1YHtjLdmgWjndZn0= +k8s.io/kms v0.28.0 h1:BwJhU9qPcJhHLUcQjtelOSjYti+1/caJLr+4jHbKzTA= +k8s.io/kms v0.28.0/go.mod h1:CNU792ls92v2Ye7Vn1jn+xLqYtUSezDZNVu6PLbJyrU= +k8s.io/kube-openapi v0.0.0-20230717233707-2695361300d9 h1:LyMgNKD2P8Wn1iAwQU5OhxCKlKJy0sHc+PcDwFB24dQ= +k8s.io/kube-openapi v0.0.0-20230717233707-2695361300d9/go.mod h1:wZK2AVp1uHCp4VamDVgBP2COHZjqD1T68Rf0CM3YjSM= +k8s.io/kube-scheduler v0.28.0 h1:Z9zAKRKOiOHcZwhfRYZrOtq+O6+bfWUKmlFuFHlvwHM= +k8s.io/kube-scheduler v0.28.0/go.mod h1:GMTnTM+SCwDlpRRjAC/0TgiGVgwfUbHi38rtYzqcLfc= +k8s.io/kubectl v0.28.0 h1:qhfju0OaU+JGeBlToPeeIg2UJUWP++QwTkpio6nlPKg= +k8s.io/kubectl 
v0.28.0/go.mod h1:1We+E5nSX3/TVoSQ6y5Bzld5OhTBHZHlKEYl7g/NaTk= +k8s.io/kubelet v0.28.0 h1:H/3JAkLIungVF+WLpqrxhgJ4gzwsbN8VA8LOTYsEX3U= +k8s.io/kubelet v0.28.0/go.mod h1:i8jUg4ltbRusT3ExOhSAeqETuHdoHTZcTT2cPr9RTgc= +k8s.io/kubernetes v1.28.0 h1:p8qq/VoNHnBWinLEi5LO2IvCfzFouN7Jhdz8+L++V+U= +k8s.io/kubernetes v1.28.0/go.mod h1:rBQpjGYlLBV0KuOLw8EG45N5EBCskWiPpi0xy5liHMI= +k8s.io/legacy-cloud-providers v0.28.0 h1:UnIGD5YlU7x4lIbJCPc49ijh0NzZhXh+lE9gWnn4m2U= +k8s.io/legacy-cloud-providers v0.28.0/go.mod h1:sb1fnmuZEVNFJG3Uhs4BO5zN/QA9jG98B0i+zG2Fg+A= +k8s.io/mount-utils v0.28.0 h1:BGYxriZPWTJFCEWDtXsdC1ZPFvI6HbfXCWpjJ42mIw4= +k8s.io/mount-utils v0.28.0/go.mod h1:AyP8LmZSLgpGdFQr+vzHTerlPiGvXUdP99n98Er47jw= k8s.io/utils v0.0.0-20211116205334-6203023598ed/go.mod h1:jPW/WVKK9YHAvNhRxK0md/EJ228hCsBRufyofKtW8HA= k8s.io/utils v0.0.0-20230406110748-d93618cff8a2 h1:qY1Ad8PODbnymg2pRbkyMT/ylpTrCM8P2RJ0yroCyIk= k8s.io/utils v0.0.0-20230406110748-d93618cff8a2/go.mod h1:OLgZIPagt7ERELqWJFomSt595RzquPNLL48iOWgYOg0= diff --git a/cluster-autoscaler/integration/integration_test.go b/cluster-autoscaler/integration/integration_test.go index a4e61208e58c..9d055ea48a1b 100644 --- a/cluster-autoscaler/integration/integration_test.go +++ b/cluster-autoscaler/integration/integration_test.go @@ -3,8 +3,6 @@ package integration import ( "context" "fmt" - . "github.com/onsi/ginkgo/v2" - . "github.com/onsi/gomega" "io/ioutil" metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" "k8s.io/client-go/util/retry" @@ -12,6 +10,9 @@ import ( "os" "regexp" "strings" + + . "github.com/onsi/ginkgo/v2" + . 
"github.com/onsi/gomega" ) var ( @@ -325,7 +326,7 @@ func (driver *Driver) controllerTests() { By("Deploying the workload") Expect(driver.deployLargeWorkload(1, scaleUpWorkload, workerWithThreeZones, false)).To(BeNil()) By("checking that scale up didn't trigger because of no machine satisfying the requirement") - skippedRegexp, _ := regexp.Compile("Pod large-scale-up-pod-.* can't be scheduled on .*, predicate checking error: Insufficient cpu; predicateName=NodeResourcesFit; reasons: Insufficient cpu;") + skippedRegexp, _ := regexp.Compile("Pod default/large-scale-up-pod-.* can't be scheduled on .*, predicate checking error: Insufficient cpu; predicateName=NodeResourcesFit; reasons: Insufficient cpu;") Eventually(func() bool { data, _ := ioutil.ReadFile(CALogFile) return skippedRegexp.Match(data) diff --git a/cluster-autoscaler/main.go b/cluster-autoscaler/main.go index 47f1111ca459..95baf2e71102 100644 --- a/cluster-autoscaler/main.go +++ b/cluster-autoscaler/main.go @@ -30,6 +30,7 @@ import ( "time" "k8s.io/autoscaler/cluster-autoscaler/debuggingsnapshot" + "k8s.io/autoscaler/cluster-autoscaler/simulator" "k8s.io/autoscaler/cluster-autoscaler/simulator/predicatechecker" "github.com/spf13/pflag" @@ -55,6 +56,7 @@ import ( "k8s.io/autoscaler/cluster-autoscaler/simulator/clustersnapshot" "k8s.io/autoscaler/cluster-autoscaler/utils/errors" kube_util "k8s.io/autoscaler/cluster-autoscaler/utils/kubernetes" + scheduler_util "k8s.io/autoscaler/cluster-autoscaler/utils/scheduler" "k8s.io/autoscaler/cluster-autoscaler/utils/units" "k8s.io/autoscaler/cluster-autoscaler/version" kube_client "k8s.io/client-go/kubernetes" @@ -67,6 +69,7 @@ import ( "k8s.io/component-base/config/options" "k8s.io/component-base/metrics/legacyregistry" "k8s.io/klog/v2" + scheduler_config "k8s.io/kubernetes/pkg/scheduler/apis/config" ) // MultiStringFlag is a flag for passing multiple parameters using same flag @@ -107,13 +110,13 @@ var ( "How long after node deletion that scale down evaluation 
resumes, defaults to scanInterval") scaleDownDelayAfterFailure = flag.Duration("scale-down-delay-after-failure", 3*time.Minute, "How long after scale down failure that scale down evaluation resumes") - scaleDownUnneededTime = flag.Duration("scale-down-unneeded-time", 10*time.Minute, + scaleDownUnneededTime = flag.Duration("scale-down-unneeded-time", config.DefaultScaleDownUnneededTime, "How long a node should be unneeded before it is eligible for scale down") - scaleDownUnreadyTime = flag.Duration("scale-down-unready-time", 20*time.Minute, + scaleDownUnreadyTime = flag.Duration("scale-down-unready-time", config.DefaultScaleDownUnreadyTime, "How long an unready node should be unneeded before it is eligible for scale down") - scaleDownUtilizationThreshold = flag.Float64("scale-down-utilization-threshold", 0.5, + scaleDownUtilizationThreshold = flag.Float64("scale-down-utilization-threshold", config.DefaultScaleDownUtilizationThreshold, "Sum of cpu or memory of all pods running on the node divided by node's corresponding allocatable resource, below which a node can be considered for scale down") - scaleDownGpuUtilizationThreshold = flag.Float64("scale-down-gpu-utilization-threshold", 0.5, + scaleDownGpuUtilizationThreshold = flag.Float64("scale-down-gpu-utilization-threshold", config.DefaultScaleDownGpuUtilizationThreshold, "Sum of gpu requests of all pods running on the node divided by node's allocatable resource, below which a node can be considered for scale down."+ "Utilization calculation only cares about gpu resource for accelerator node. 
cpu and memory utilization will be ignored.") scaleDownNonEmptyCandidatesCount = flag.Int("scale-down-non-empty-candidates-count", 30, @@ -132,6 +135,7 @@ var ( "for scale down when some candidates from previous iteration are no longer valid."+ "When calculating the pool size for additional candidates we take"+ "max(#nodes * scale-down-candidates-pool-ratio, scale-down-candidates-pool-min-count).") + schedulerConfigFile = flag.String(config.SchedulerConfigFileFlag, "", "scheduler-config allows changing configuration of in-tree scheduler plugins acting on PreFilter and Filter extension points") nodeDeletionDelayTimeout = flag.Duration("node-deletion-delay-timeout", 2*time.Minute, "Maximum time CA waits for removing delay-deletion.cluster-autoscaler.kubernetes.io/ annotations before deleting the node.") nodeDeletionBatcherInterval = flag.Duration("node-deletion-batcher-interval", 0*time.Second, "How long CA ScaleDown gather nodes to delete them in batch.") scanInterval = flag.Duration("scan-interval", 10*time.Second, "How often cluster is reevaluated for scale up or down") @@ -147,7 +151,8 @@ var ( maxGracefulTerminationFlag = flag.Int("max-graceful-termination-sec", 10*60, "Maximum number of seconds CA waits for pod termination when trying to scale down a node.") maxTotalUnreadyPercentage = flag.Float64("max-total-unready-percentage", 45, "Maximum percentage of unready nodes in the cluster. After this is exceeded, CA halts operations") okTotalUnreadyCount = flag.Int("ok-total-unready-count", 3, "Number of allowed unready nodes, irrespective of max-total-unready-percentage") - scaleUpFromZero = flag.Bool("scale-up-from-zero", true, "Should CA scale up when there 0 ready nodes.") + scaleUpFromZero = flag.Bool("scale-up-from-zero", true, "Should CA scale up when there are 0 ready nodes.") + parallelScaleUp = flag.Bool("parallel-scale-up", false, "Whether to allow parallel node groups scale up. 
Experimental: may not work on some cloud providers, enable at your own risk.") maxNodeProvisionTime = flag.Duration("max-node-provision-time", 15*time.Minute, "The default maximum time CA waits for node to be provisioned - the value can be overridden per node group") maxPodEvictionTime = flag.Duration("max-pod-eviction-time", 2*time.Minute, "Maximum time CA tries to evict a pod before giving up") nodeGroupsFlag = multiStringFlag( @@ -224,13 +229,23 @@ var ( minReplicaCount = flag.Int("min-replica-count", 0, "Minimum number or replicas that a replica set or replication controller should have to allow their pods deletion in scale down") nodeDeleteDelayAfterTaint = flag.Duration("node-delete-delay-after-taint", 5*time.Second, "How long to wait before deleting a node after tainting it") scaleDownSimulationTimeout = flag.Duration("scale-down-simulation-timeout", 30*time.Second, "How long should we run scale down simulation.") - parallelDrain = flag.Bool("parallel-drain", false, "Whether to allow parallel drain of nodes.") + parallelDrain = flag.Bool("parallel-drain", true, "Whether to allow parallel drain of nodes. This flag is deprecated and will be removed in future releases.") maxCapacityMemoryDifferenceRatio = flag.Float64("memory-difference-ratio", config.DefaultMaxCapacityMemoryDifferenceRatio, "Maximum difference in memory capacity between two similar node groups to be considered for balancing. Value is a ratio of the smaller node group's memory capacity.") maxFreeDifferenceRatio = flag.Float64("max-free-difference-ratio", config.DefaultMaxFreeDifferenceRatio, "Maximum difference in free resources between two similar node groups to be considered for balancing. Value is a ratio of the smaller node group's free resource.") maxAllocatableDifferenceRatio = flag.Float64("max-allocatable-difference-ratio", config.DefaultMaxAllocatableDifferenceRatio, "Maximum difference in allocatable resources between two similar node groups to be considered for balancing. 
Value is a ratio of the smaller node group's allocatable resource.") forceDaemonSets = flag.Bool("force-ds", false, "Blocks scale-up of node groups too small for all suitable Daemon Sets pods.") ) +func isFlagPassed(name string) bool { + found := false + flag.Visit(func(f *flag.Flag) { + if f.Name == name { + found = true + } + }) + return found +} + func createAutoscalingOptions() config.AutoscalingOptions { minCoresTotal, maxCoresTotal, err := parseMinMaxFlag(*coresTotal) if err != nil { @@ -251,12 +266,33 @@ func createAutoscalingOptions() config.AutoscalingOptions { if *maxDrainParallelismFlag > 1 && !*parallelDrain { klog.Fatalf("Invalid configuration, could not use --max-drain-parallelism > 1 if --parallel-drain is false") } + + // in order to avoid inconsistent deletion thresholds for the legacy planner and the new actuator, the max-empty-bulk-delete, + // and max-scale-down-parallelism flags must be set to the same value. + if isFlagPassed("max-empty-bulk-delete") && !isFlagPassed("max-scale-down-parallelism") { + *maxScaleDownParallelismFlag = *maxEmptyBulkDeleteFlag + klog.Warning("The max-empty-bulk-delete flag will be deprecated in k8s version 1.29. 
Please use max-scale-down-parallelism instead.") + klog.Infof("Setting max-scale-down-parallelism to %d, based on the max-empty-bulk-delete value %d", *maxScaleDownParallelismFlag, *maxEmptyBulkDeleteFlag) + } else if !isFlagPassed("max-empty-bulk-delete") && isFlagPassed("max-scale-down-parallelism") { + *maxEmptyBulkDeleteFlag = *maxScaleDownParallelismFlag + } + + var parsedSchedConfig *scheduler_config.KubeSchedulerConfiguration + // if scheduler config flag was set by the user + if pflag.CommandLine.Changed(config.SchedulerConfigFileFlag) { + parsedSchedConfig, err = scheduler_util.ConfigFromPath(*schedulerConfigFile) + } + if err != nil { + klog.Fatalf("Failed to get scheduler config: %v", err) + } + return config.AutoscalingOptions{ NodeGroupDefaults: config.NodeGroupAutoscalingOptions{ ScaleDownUtilizationThreshold: *scaleDownUtilizationThreshold, ScaleDownGpuUtilizationThreshold: *scaleDownGpuUtilizationThreshold, ScaleDownUnneededTime: *scaleDownUnneededTime, ScaleDownUnreadyTime: *scaleDownUnreadyTime, + IgnoreDaemonSetsUtilization: *ignoreDaemonSetsUtilization, MaxNodeProvisionTime: *maxNodeProvisionTime, }, CloudConfig: *cloudConfig, @@ -265,11 +301,11 @@ func createAutoscalingOptions() config.AutoscalingOptions { MaxTotalUnreadyPercentage: *maxTotalUnreadyPercentage, OkTotalUnreadyCount: *okTotalUnreadyCount, ScaleUpFromZero: *scaleUpFromZero, + ParallelScaleUp: *parallelScaleUp, EstimatorName: *estimatorFlag, ExpanderNames: *expanderFlag, GRPCExpanderCert: *grpcExpanderCert, GRPCExpanderURL: *grpcExpanderURL, - IgnoreDaemonSetsUtilization: *ignoreDaemonSetsUtilization, IgnoreMirrorPodsUtilization: *ignoreMirrorPodsUtilization, MaxBulkSoftTaintCount: *maxBulkSoftTaintCount, MaxBulkSoftTaintTime: *maxBulkSoftTaintTime, @@ -292,6 +328,7 @@ func createAutoscalingOptions() config.AutoscalingOptions { ScaleDownNonEmptyCandidatesCount: *scaleDownNonEmptyCandidatesCount, ScaleDownCandidatesPoolRatio: *scaleDownCandidatesPoolRatio, 
ScaleDownCandidatesPoolMinCount: *scaleDownCandidatesPoolMinCount, + SchedulerConfig: parsedSchedConfig, WriteStatusConfigMap: *writeStatusConfigMapFlag, StatusConfigMapName: *statusConfigMapName, BalanceSimilarNodeGroups: *balanceSimilarNodeGroupsFlag, @@ -398,10 +435,12 @@ func buildAutoscaler(debuggingSnapshotter debuggingsnapshot.DebuggingSnapshotter eventsKubeClient := createKubeClient(getKubeConfig()) - predicateChecker, err := predicatechecker.NewSchedulerBasedPredicateChecker(kubeClient, make(chan struct{})) + predicateChecker, err := predicatechecker.NewSchedulerBasedPredicateChecker(kubeClient, + autoscalingOptions.SchedulerConfig, make(chan struct{})) if err != nil { return nil, err } + deleteOptions := simulator.NewNodeDeleteOptions(autoscalingOptions) opts := core.AutoscalerOptions{ AutoscalingOptions: autoscalingOptions, @@ -410,16 +449,17 @@ func buildAutoscaler(debuggingSnapshotter debuggingsnapshot.DebuggingSnapshotter EventsKubeClient: eventsKubeClient, DebuggingSnapshotter: debuggingSnapshotter, PredicateChecker: predicateChecker, + DeleteOptions: deleteOptions, } - opts.Processors = ca_processors.DefaultProcessors() + opts.Processors = ca_processors.DefaultProcessors(autoscalingOptions) opts.Processors.TemplateNodeInfoProvider = nodeinfosprovider.NewDefaultTemplateNodeInfoProvider(nodeInfoCacheExpireTime, *forceDaemonSets) opts.Processors.PodListProcessor = podlistprocessor.NewDefaultPodListProcessor(opts.PredicateChecker) scaleDownCandidatesComparers := []scaledowncandidates.CandidatesComparer{} if autoscalingOptions.ParallelDrain { sdCandidatesSorting := previouscandidates.NewPreviousCandidates() scaleDownCandidatesComparers = []scaledowncandidates.CandidatesComparer{ - emptycandidates.NewEmptySortingProcessor(&autoscalingOptions, emptycandidates.NewNodeInfoGetter(opts.ClusterSnapshot)), + emptycandidates.NewEmptySortingProcessor(emptycandidates.NewNodeInfoGetter(opts.ClusterSnapshot), deleteOptions), sdCandidatesSorting, } 
opts.Processors.ScaleDownCandidatesNotifier.Register(sdCandidatesSorting) diff --git a/cluster-autoscaler/metrics/metrics.go b/cluster-autoscaler/metrics/metrics.go index 4b9c8a5f3db5..8f4e0d869ddd 100644 --- a/cluster-autoscaler/metrics/metrics.go +++ b/cluster-autoscaler/metrics/metrics.go @@ -97,6 +97,8 @@ const ( ScaleDownMiscOperations FunctionLabel = "scaleDown:miscOperations" ScaleDownSoftTaintUnneeded FunctionLabel = "scaleDown:softTaintUnneeded" ScaleUp FunctionLabel = "scaleUp" + BuildPodEquivalenceGroups FunctionLabel = "scaleUp:buildPodEquivalenceGroups" + Estimate FunctionLabel = "scaleUp:estimate" FindUnneeded FunctionLabel = "findUnneeded" UpdateState FunctionLabel = "updateClusterState" FilterOutSchedulable FunctionLabel = "filterOutSchedulable" @@ -224,6 +226,14 @@ var ( }, []string{"function"}, ) + pendingNodeDeletions = k8smetrics.NewGauge( + &k8smetrics.GaugeOpts{ + Namespace: caNamespace, + Name: "pending_node_deletions", + Help: "Number of nodes that haven't been removed or aborted after finished scale-down phase.", + }, + ) + /**** Metrics related to autoscaler operations ****/ errorsCount = k8smetrics.NewCounterVec( &k8smetrics.CounterOpts{ @@ -257,6 +267,14 @@ var ( }, []string{"reason"}, ) + failedGPUScaleUpCount = k8smetrics.NewCounterVec( + &k8smetrics.CounterOpts{ + Namespace: caNamespace, + Name: "failed_gpu_scale_ups_total", + Help: "Number of times scale-up operation has failed.", + }, []string{"reason", "gpu_resource_name", "gpu_name"}, + ) + scaleDownCount = k8smetrics.NewCounterVec( &k8smetrics.CounterOpts{ Namespace: caNamespace, @@ -375,6 +393,7 @@ func RegisterAll(emitPerNodeGroupMetrics bool) { legacyregistry.MustRegister(scaleUpCount) legacyregistry.MustRegister(gpuScaleUpCount) legacyregistry.MustRegister(failedScaleUpCount) + legacyregistry.MustRegister(failedGPUScaleUpCount) legacyregistry.MustRegister(scaleDownCount) legacyregistry.MustRegister(gpuScaleDownCount) legacyregistry.MustRegister(evictionsCount) @@ -387,6 
+406,7 @@ func RegisterAll(emitPerNodeGroupMetrics bool) { legacyregistry.MustRegister(napEnabled) legacyregistry.MustRegister(nodeGroupCreationCount) legacyregistry.MustRegister(nodeGroupDeletionCount) + legacyregistry.MustRegister(pendingNodeDeletions) if emitPerNodeGroupMetrics { legacyregistry.MustRegister(nodesGroupMinNodes) @@ -498,8 +518,11 @@ func RegisterScaleUp(nodesCount int, gpuResourceName, gpuType string) { } // RegisterFailedScaleUp records a failed scale-up operation -func RegisterFailedScaleUp(reason FailedScaleUpReason) { +func RegisterFailedScaleUp(reason FailedScaleUpReason, gpuResourceName, gpuType string) { failedScaleUpCount.WithLabelValues(string(reason)).Inc() + if gpuType != gpu.MetricsNoGPU { + failedGPUScaleUpCount.WithLabelValues(string(reason), gpuResourceName, gpuType).Inc() + } } // RegisterScaleDown records number of nodes removed by scale down @@ -587,3 +610,8 @@ func RegisterSkippedScaleUpCPU() { func RegisterSkippedScaleUpMemory() { skippedScaleEventsCount.WithLabelValues(DirectionScaleUp, MemoryResourceLimit).Add(1.0) } + +// ObservePendingNodeDeletions records the current value of the pending_node_deletions metric +func ObservePendingNodeDeletions(value int) { + pendingNodeDeletions.Set(float64(value)) +} diff --git a/cluster-autoscaler/processors/binpacking/binpacking_limiter.go b/cluster-autoscaler/processors/binpacking/binpacking_limiter.go new file mode 100644 index 000000000000..3b3ab61ba3f1 --- /dev/null +++ b/cluster-autoscaler/processors/binpacking/binpacking_limiter.go @@ -0,0 +1,52 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License.
+You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package binpacking + +import ( + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider" + "k8s.io/autoscaler/cluster-autoscaler/context" + "k8s.io/autoscaler/cluster-autoscaler/expander" +) + +// BinpackingLimiter processes expansion options to stop binpacking early. +type BinpackingLimiter interface { + InitBinpacking(context *context.AutoscalingContext, nodeGroups []cloudprovider.NodeGroup) + StopBinpacking(context *context.AutoscalingContext, evaluatedOptions []expander.Option) bool + MarkProcessed(context *context.AutoscalingContext, nodegroupId string) +} + +// NoOpBinpackingLimiter returns false without processing expansion options. +type NoOpBinpackingLimiter struct { +} + +// NewDefaultBinpackingLimiter creates an instance of NoOpBinpackingLimiter. +func NewDefaultBinpackingLimiter() BinpackingLimiter { + return &NoOpBinpackingLimiter{} +} + +// InitBinpacking initialises the BinpackingLimiter. +func (p *NoOpBinpackingLimiter) InitBinpacking(context *context.AutoscalingContext, nodeGroups []cloudprovider.NodeGroup) { +} + +// StopBinpacking is used to make decisions on the evaluated expansion options. +func (p *NoOpBinpackingLimiter) StopBinpacking(context *context.AutoscalingContext, evaluatedOptions []expander.Option) bool { + return false +} + +// MarkProcessed marks the nodegroup as processed.
+func (p *NoOpBinpackingLimiter) MarkProcessed(context *context.AutoscalingContext, nodegroupId string) { +} diff --git a/cluster-autoscaler/processors/nodegroupconfig/node_group_config_processor.go b/cluster-autoscaler/processors/nodegroupconfig/node_group_config_processor.go index 5d5c80aed3ac..b20ab0f9943a 100644 --- a/cluster-autoscaler/processors/nodegroupconfig/node_group_config_processor.go +++ b/cluster-autoscaler/processors/nodegroupconfig/node_group_config_processor.go @@ -20,21 +20,23 @@ import ( "time" "k8s.io/autoscaler/cluster-autoscaler/cloudprovider" - "k8s.io/autoscaler/cluster-autoscaler/context" + "k8s.io/autoscaler/cluster-autoscaler/config" ) // NodeGroupConfigProcessor provides config values for a particular NodeGroup. type NodeGroupConfigProcessor interface { // GetScaleDownUnneededTime returns ScaleDownUnneededTime value that should be used for a given NodeGroup. - GetScaleDownUnneededTime(context *context.AutoscalingContext, nodeGroup cloudprovider.NodeGroup) (time.Duration, error) + GetScaleDownUnneededTime(nodeGroup cloudprovider.NodeGroup) (time.Duration, error) // GetScaleDownUnreadyTime returns ScaleDownUnreadyTime value that should be used for a given NodeGroup. - GetScaleDownUnreadyTime(context *context.AutoscalingContext, nodeGroup cloudprovider.NodeGroup) (time.Duration, error) + GetScaleDownUnreadyTime(nodeGroup cloudprovider.NodeGroup) (time.Duration, error) // GetScaleDownUtilizationThreshold returns ScaleDownUtilizationThreshold value that should be used for a given NodeGroup. - GetScaleDownUtilizationThreshold(context *context.AutoscalingContext, nodeGroup cloudprovider.NodeGroup) (float64, error) + GetScaleDownUtilizationThreshold(nodeGroup cloudprovider.NodeGroup) (float64, error) // GetScaleDownGpuUtilizationThreshold returns ScaleDownGpuUtilizationThreshold value that should be used for a given NodeGroup. 
- GetScaleDownGpuUtilizationThreshold(context *context.AutoscalingContext, nodeGroup cloudprovider.NodeGroup) (float64, error) + GetScaleDownGpuUtilizationThreshold(nodeGroup cloudprovider.NodeGroup) (float64, error) // GetMaxNodeProvisionTime return MaxNodeProvisionTime value that should be used for a given NodeGroup. - GetMaxNodeProvisionTime(context *context.AutoscalingContext, nodeGroup cloudprovider.NodeGroup) (time.Duration, error) + GetMaxNodeProvisionTime(nodeGroup cloudprovider.NodeGroup) (time.Duration, error) + // GetIgnoreDaemonSetsUtilization returns IgnoreDaemonSetsUtilization value that should be used for a given NodeGroup. + GetIgnoreDaemonSetsUtilization(nodeGroup cloudprovider.NodeGroup) (bool, error) // CleanUp cleans up processor's internal structures. CleanUp() } @@ -43,73 +45,88 @@ type NodeGroupConfigProcessor interface { // for each NodeGroup. If NodeGroup doesn't return a value default config is // used instead. type DelegatingNodeGroupConfigProcessor struct { + nodeGroupDefaults config.NodeGroupAutoscalingOptions } // GetScaleDownUnneededTime returns ScaleDownUnneededTime value that should be used for a given NodeGroup. 
-func (p *DelegatingNodeGroupConfigProcessor) GetScaleDownUnneededTime(context *context.AutoscalingContext, nodeGroup cloudprovider.NodeGroup) (time.Duration, error) { - ngConfig, err := nodeGroup.GetOptions(context.NodeGroupDefaults) +func (p *DelegatingNodeGroupConfigProcessor) GetScaleDownUnneededTime(nodeGroup cloudprovider.NodeGroup) (time.Duration, error) { + ngConfig, err := nodeGroup.GetOptions(p.nodeGroupDefaults) if err != nil && err != cloudprovider.ErrNotImplemented { return time.Duration(0), err } if ngConfig == nil || err == cloudprovider.ErrNotImplemented { - return context.NodeGroupDefaults.ScaleDownUnneededTime, nil + return p.nodeGroupDefaults.ScaleDownUnneededTime, nil } return ngConfig.ScaleDownUnneededTime, nil } // GetScaleDownUnreadyTime returns ScaleDownUnreadyTime value that should be used for a given NodeGroup. -func (p *DelegatingNodeGroupConfigProcessor) GetScaleDownUnreadyTime(context *context.AutoscalingContext, nodeGroup cloudprovider.NodeGroup) (time.Duration, error) { - ngConfig, err := nodeGroup.GetOptions(context.NodeGroupDefaults) +func (p *DelegatingNodeGroupConfigProcessor) GetScaleDownUnreadyTime(nodeGroup cloudprovider.NodeGroup) (time.Duration, error) { + ngConfig, err := nodeGroup.GetOptions(p.nodeGroupDefaults) if err != nil && err != cloudprovider.ErrNotImplemented { return time.Duration(0), err } if ngConfig == nil || err == cloudprovider.ErrNotImplemented { - return context.NodeGroupDefaults.ScaleDownUnreadyTime, nil + return p.nodeGroupDefaults.ScaleDownUnreadyTime, nil } return ngConfig.ScaleDownUnreadyTime, nil } // GetScaleDownUtilizationThreshold returns ScaleDownUtilizationThreshold value that should be used for a given NodeGroup. 
-func (p *DelegatingNodeGroupConfigProcessor) GetScaleDownUtilizationThreshold(context *context.AutoscalingContext, nodeGroup cloudprovider.NodeGroup) (float64, error) { - ngConfig, err := nodeGroup.GetOptions(context.NodeGroupDefaults) +func (p *DelegatingNodeGroupConfigProcessor) GetScaleDownUtilizationThreshold(nodeGroup cloudprovider.NodeGroup) (float64, error) { + ngConfig, err := nodeGroup.GetOptions(p.nodeGroupDefaults) if err != nil && err != cloudprovider.ErrNotImplemented { return 0.0, err } if ngConfig == nil || err == cloudprovider.ErrNotImplemented { - return context.NodeGroupDefaults.ScaleDownUtilizationThreshold, nil + return p.nodeGroupDefaults.ScaleDownUtilizationThreshold, nil } return ngConfig.ScaleDownUtilizationThreshold, nil } // GetScaleDownGpuUtilizationThreshold returns ScaleDownGpuUtilizationThreshold value that should be used for a given NodeGroup. -func (p *DelegatingNodeGroupConfigProcessor) GetScaleDownGpuUtilizationThreshold(context *context.AutoscalingContext, nodeGroup cloudprovider.NodeGroup) (float64, error) { - ngConfig, err := nodeGroup.GetOptions(context.NodeGroupDefaults) +func (p *DelegatingNodeGroupConfigProcessor) GetScaleDownGpuUtilizationThreshold(nodeGroup cloudprovider.NodeGroup) (float64, error) { + ngConfig, err := nodeGroup.GetOptions(p.nodeGroupDefaults) if err != nil && err != cloudprovider.ErrNotImplemented { return 0.0, err } if ngConfig == nil || err == cloudprovider.ErrNotImplemented { - return context.NodeGroupDefaults.ScaleDownGpuUtilizationThreshold, nil + return p.nodeGroupDefaults.ScaleDownGpuUtilizationThreshold, nil } return ngConfig.ScaleDownGpuUtilizationThreshold, nil } // GetMaxNodeProvisionTime returns MaxNodeProvisionTime value that should be used for a given NodeGroup. 
-func (p *DelegatingNodeGroupConfigProcessor) GetMaxNodeProvisionTime(context *context.AutoscalingContext, nodeGroup cloudprovider.NodeGroup) (time.Duration, error) { - ngConfig, err := nodeGroup.GetOptions(context.NodeGroupDefaults) +func (p *DelegatingNodeGroupConfigProcessor) GetMaxNodeProvisionTime(nodeGroup cloudprovider.NodeGroup) (time.Duration, error) { + ngConfig, err := nodeGroup.GetOptions(p.nodeGroupDefaults) if err != nil && err != cloudprovider.ErrNotImplemented { return time.Duration(0), err } if ngConfig == nil || err == cloudprovider.ErrNotImplemented { - return context.NodeGroupDefaults.MaxNodeProvisionTime, nil + return p.nodeGroupDefaults.MaxNodeProvisionTime, nil } return ngConfig.MaxNodeProvisionTime, nil } +// GetIgnoreDaemonSetsUtilization returns IgnoreDaemonSetsUtilization value that should be used for a given NodeGroup. +func (p *DelegatingNodeGroupConfigProcessor) GetIgnoreDaemonSetsUtilization(nodeGroup cloudprovider.NodeGroup) (bool, error) { + ngConfig, err := nodeGroup.GetOptions(p.nodeGroupDefaults) + if err != nil && err != cloudprovider.ErrNotImplemented { + return false, err + } + if ngConfig == nil || err == cloudprovider.ErrNotImplemented { + return p.nodeGroupDefaults.IgnoreDaemonSetsUtilization, nil + } + return ngConfig.IgnoreDaemonSetsUtilization, nil +} + // CleanUp cleans up processor's internal structures. func (p *DelegatingNodeGroupConfigProcessor) CleanUp() { } // NewDefaultNodeGroupConfigProcessor returns a default instance of NodeGroupConfigProcessor. 
-func NewDefaultNodeGroupConfigProcessor() NodeGroupConfigProcessor { - return &DelegatingNodeGroupConfigProcessor{} +func NewDefaultNodeGroupConfigProcessor(nodeGroupDefaults config.NodeGroupAutoscalingOptions) NodeGroupConfigProcessor { + return &DelegatingNodeGroupConfigProcessor{ + nodeGroupDefaults: nodeGroupDefaults, + } } diff --git a/cluster-autoscaler/processors/nodegroupconfig/node_group_config_processor_test.go b/cluster-autoscaler/processors/nodegroupconfig/node_group_config_processor_test.go index 4046424afd58..101d538a81d7 100644 --- a/cluster-autoscaler/processors/nodegroupconfig/node_group_config_processor_test.go +++ b/cluster-autoscaler/processors/nodegroupconfig/node_group_config_processor_test.go @@ -22,12 +22,10 @@ import ( "testing" "time" + "github.com/stretchr/testify/assert" "k8s.io/autoscaler/cluster-autoscaler/cloudprovider" "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/mocks" "k8s.io/autoscaler/cluster-autoscaler/config" - "k8s.io/autoscaler/cluster-autoscaler/context" - - "github.com/stretchr/testify/assert" ) // This test covers all Get* methods implemented by @@ -49,6 +47,7 @@ func TestDelegatingNodeGroupConfigProcessor(t *testing.T) { ScaleDownGpuUtilizationThreshold: 0.6, ScaleDownUtilizationThreshold: 0.5, MaxNodeProvisionTime: 15 * time.Minute, + IgnoreDaemonSetsUtilization: true, } ngOpts := &config.NodeGroupAutoscalingOptions{ ScaleDownUnneededTime: 10 * time.Minute, @@ -56,10 +55,11 @@ func TestDelegatingNodeGroupConfigProcessor(t *testing.T) { ScaleDownGpuUtilizationThreshold: 0.85, ScaleDownUtilizationThreshold: 0.75, MaxNodeProvisionTime: 60 * time.Minute, + IgnoreDaemonSetsUtilization: false, } - testUnneededTime := func(t *testing.T, p DelegatingNodeGroupConfigProcessor, c *context.AutoscalingContext, ng cloudprovider.NodeGroup, w Want, we error) { - res, err := p.GetScaleDownUnneededTime(c, ng) + testUnneededTime := func(t *testing.T, p NodeGroupConfigProcessor, ng cloudprovider.NodeGroup, w Want, we error) { + res, 
err := p.GetScaleDownUnneededTime(ng) assert.Equal(t, err, we) results := map[Want]time.Duration{ NIL: time.Duration(0), @@ -68,8 +68,8 @@ func TestDelegatingNodeGroupConfigProcessor(t *testing.T) { } assert.Equal(t, res, results[w]) } - testUnreadyTime := func(t *testing.T, p DelegatingNodeGroupConfigProcessor, c *context.AutoscalingContext, ng cloudprovider.NodeGroup, w Want, we error) { - res, err := p.GetScaleDownUnreadyTime(c, ng) + testUnreadyTime := func(t *testing.T, p NodeGroupConfigProcessor, ng cloudprovider.NodeGroup, w Want, we error) { + res, err := p.GetScaleDownUnreadyTime(ng) assert.Equal(t, err, we) results := map[Want]time.Duration{ NIL: time.Duration(0), @@ -78,8 +78,8 @@ func TestDelegatingNodeGroupConfigProcessor(t *testing.T) { } assert.Equal(t, res, results[w]) } - testUtilizationThreshold := func(t *testing.T, p DelegatingNodeGroupConfigProcessor, c *context.AutoscalingContext, ng cloudprovider.NodeGroup, w Want, we error) { - res, err := p.GetScaleDownUtilizationThreshold(c, ng) + testUtilizationThreshold := func(t *testing.T, p NodeGroupConfigProcessor, ng cloudprovider.NodeGroup, w Want, we error) { + res, err := p.GetScaleDownUtilizationThreshold(ng) assert.Equal(t, err, we) results := map[Want]float64{ NIL: 0.0, @@ -88,8 +88,8 @@ func TestDelegatingNodeGroupConfigProcessor(t *testing.T) { } assert.Equal(t, res, results[w]) } - testGpuThreshold := func(t *testing.T, p DelegatingNodeGroupConfigProcessor, c *context.AutoscalingContext, ng cloudprovider.NodeGroup, w Want, we error) { - res, err := p.GetScaleDownGpuUtilizationThreshold(c, ng) + testGpuThreshold := func(t *testing.T, p NodeGroupConfigProcessor, ng cloudprovider.NodeGroup, w Want, we error) { + res, err := p.GetScaleDownGpuUtilizationThreshold(ng) assert.Equal(t, err, we) results := map[Want]float64{ NIL: 0.0, @@ -98,8 +98,8 @@ func TestDelegatingNodeGroupConfigProcessor(t *testing.T) { } assert.Equal(t, res, results[w]) } - testMaxNodeProvisionTime := func(t *testing.T, p 
DelegatingNodeGroupConfigProcessor, c *context.AutoscalingContext, ng cloudprovider.NodeGroup, w Want, we error) { - res, err := p.GetMaxNodeProvisionTime(c, ng) + testMaxNodeProvisionTime := func(t *testing.T, p NodeGroupConfigProcessor, ng cloudprovider.NodeGroup, w Want, we error) { + res, err := p.GetMaxNodeProvisionTime(ng) assert.Equal(t, err, we) results := map[Want]time.Duration{ NIL: time.Duration(0), @@ -109,25 +109,42 @@ func TestDelegatingNodeGroupConfigProcessor(t *testing.T) { assert.Equal(t, res, results[w]) } - funcs := map[string]func(*testing.T, DelegatingNodeGroupConfigProcessor, *context.AutoscalingContext, cloudprovider.NodeGroup, Want, error){ + // for IgnoreDaemonSetsUtilization + testIgnoreDSUtilization := func(t *testing.T, p NodeGroupConfigProcessor, ng cloudprovider.NodeGroup, w Want, we error) { + res, err := p.GetIgnoreDaemonSetsUtilization(ng) + assert.Equal(t, err, we) + results := map[Want]bool{ + NIL: false, + GLOBAL: true, + NG: false, + } + assert.Equal(t, res, results[w]) + } + + funcs := map[string]func(*testing.T, NodeGroupConfigProcessor, cloudprovider.NodeGroup, Want, error){ "ScaleDownUnneededTime": testUnneededTime, "ScaleDownUnreadyTime": testUnreadyTime, "ScaleDownUtilizationThreshold": testUtilizationThreshold, "ScaleDownGpuUtilizationThreshold": testGpuThreshold, "MaxNodeProvisionTime": testMaxNodeProvisionTime, - "MultipleOptions": func(t *testing.T, p DelegatingNodeGroupConfigProcessor, c *context.AutoscalingContext, ng cloudprovider.NodeGroup, w Want, we error) { - testUnneededTime(t, p, c, ng, w, we) - testUnreadyTime(t, p, c, ng, w, we) - testUtilizationThreshold(t, p, c, ng, w, we) - testGpuThreshold(t, p, c, ng, w, we) - testMaxNodeProvisionTime(t, p, c, ng, w, we) + "IgnoreDaemonSetsUtilization": testIgnoreDSUtilization, + "MultipleOptions": func(t *testing.T, p NodeGroupConfigProcessor, ng cloudprovider.NodeGroup, w Want, we error) { + testUnneededTime(t, p, ng, w, we) + testUnreadyTime(t, p, ng, w, we) + 
testUtilizationThreshold(t, p, ng, w, we) + testGpuThreshold(t, p, ng, w, we) + testMaxNodeProvisionTime(t, p, ng, w, we) + testIgnoreDSUtilization(t, p, ng, w, we) }, - "RepeatingTheSameCallGivesConsistentResults": func(t *testing.T, p DelegatingNodeGroupConfigProcessor, c *context.AutoscalingContext, ng cloudprovider.NodeGroup, w Want, we error) { - testUnneededTime(t, p, c, ng, w, we) - testUnneededTime(t, p, c, ng, w, we) + "RepeatingTheSameCallGivesConsistentResults": func(t *testing.T, p NodeGroupConfigProcessor, ng cloudprovider.NodeGroup, w Want, we error) { + testUnneededTime(t, p, ng, w, we) + testUnneededTime(t, p, ng, w, we) // throw in a different call - testGpuThreshold(t, p, c, ng, w, we) - testUnneededTime(t, p, c, ng, w, we) + testGpuThreshold(t, p, ng, w, we) + testUnneededTime(t, p, ng, w, we) + // throw in another different call + testIgnoreDSUtilization(t, p, ng, w, we) + testUnneededTime(t, p, ng, w, we) }, } @@ -161,15 +178,10 @@ func TestDelegatingNodeGroupConfigProcessor(t *testing.T) { } for tn, tc := range cases { t.Run(fmt.Sprintf("[%s] %s", fname, tn), func(t *testing.T) { - context := &context.AutoscalingContext{ - AutoscalingOptions: config.AutoscalingOptions{ - NodeGroupDefaults: tc.globalOptions, - }, - } ng := &mocks.NodeGroup{} ng.On("GetOptions", tc.globalOptions).Return(tc.ngOptions, tc.ngError) - p := DelegatingNodeGroupConfigProcessor{} - fn(t, p, context, ng, tc.want, tc.wantError) + p := NewDefaultNodeGroupConfigProcessor(tc.globalOptions) + fn(t, p, ng, tc.want, tc.wantError) }) } } diff --git a/cluster-autoscaler/processors/nodegroupset/compare_nodegroups.go b/cluster-autoscaler/processors/nodegroupset/compare_nodegroups.go index 2a46f1c65ea9..02b03e7febd0 100644 --- a/cluster-autoscaler/processors/nodegroupset/compare_nodegroups.go +++ b/cluster-autoscaler/processors/nodegroupset/compare_nodegroups.go @@ -23,6 +23,7 @@ import ( "k8s.io/apimachinery/pkg/api/resource" "k8s.io/autoscaler/cluster-autoscaler/config" 
"k8s.io/autoscaler/cluster-autoscaler/utils/scheduler" + klog "k8s.io/klog/v2" schedulerframework "k8s.io/kubernetes/pkg/scheduler/framework" ) @@ -146,11 +147,13 @@ func IsCloudProviderNodeInfoSimilar( for kind, qtyList := range capacity { if len(qtyList) != 2 { + klog.V(3).Infof("nodes %s and %s are not similar, missing capacity %s", n1.Node().Name, n2.Node().Name, kind) return false } switch kind { case apiv1.ResourceMemory: if !resourceListWithinTolerance(qtyList, ratioOpts.MaxCapacityMemoryDifferenceRatio) { + klog.V(3).Infof("nodes %s and %s are not similar, memory not within tolerance", n1.Node().Name, n2.Node().Name) return false } default: @@ -158,6 +161,7 @@ func IsCloudProviderNodeInfoSimilar( // If this is ever changed, enforcing MaxCoresTotal limits // as it is now may no longer work. if qtyList[0].Cmp(qtyList[1]) != 0 { + klog.V(3).Infof("nodes %s and %s are not similar, %s does not match", n1.Node().Name, n2.Node().Name, kind) return false } } @@ -165,13 +169,16 @@ func IsCloudProviderNodeInfoSimilar( // For allocatable and free we allow resource quantities to be within a few % of each other if !resourceMapsWithinTolerance(allocatable, ratioOpts.MaxAllocatableDifferenceRatio) { + klog.V(3).Infof("nodes %s and %s are not similar, allocatable resources not within tolerance", n1.Node().Name, n2.Node().Name) return false } if !resourceMapsWithinTolerance(free, ratioOpts.MaxFreeDifferenceRatio) { + klog.V(3).Infof("nodes %s and %s are not similar, free resources not within tolerance", n1.Node().Name, n2.Node().Name) return false } if !compareLabels(nodes, ignoredLabels) { + klog.V(3).Infof("nodes %s and %s are not similar, labels do not match", n1.Node().Name, n2.Node().Name) return false } diff --git a/cluster-autoscaler/processors/nodes/post_filtering_processor.go b/cluster-autoscaler/processors/nodes/post_filtering_processor.go index a7dfba82402a..06c59d1451ad 100644 --- a/cluster-autoscaler/processors/nodes/post_filtering_processor.go +++ 
b/cluster-autoscaler/processors/nodes/post_filtering_processor.go @@ -17,8 +17,10 @@ limitations under the License. package nodes import ( + "k8s.io/autoscaler/cluster-autoscaler/cloudprovider" "k8s.io/autoscaler/cluster-autoscaler/context" "k8s.io/autoscaler/cluster-autoscaler/simulator" + klog "k8s.io/klog/v2" ) // PostFilteringScaleDownNodeProcessor selects first maxCount nodes (if possible) to be removed @@ -26,19 +28,57 @@ type PostFilteringScaleDownNodeProcessor struct { } // GetNodesToRemove selects up to maxCount nodes for deletion, by selecting a first maxCount candidates -func (n *PostFilteringScaleDownNodeProcessor) GetNodesToRemove(ctx *context.AutoscalingContext, candidates []simulator.NodeToBeRemoved, maxCount int) []simulator.NodeToBeRemoved { +func (p *PostFilteringScaleDownNodeProcessor) GetNodesToRemove(ctx *context.AutoscalingContext, candidates []simulator.NodeToBeRemoved, maxCount int) []simulator.NodeToBeRemoved { end := len(candidates) if len(candidates) > maxCount { end = maxCount } - return candidates[:end] + return p.filterOutIncompleteAtomicNodeGroups(ctx, candidates[:end]) } // CleanUp is called at CA termination -func (n *PostFilteringScaleDownNodeProcessor) CleanUp() { +func (p *PostFilteringScaleDownNodeProcessor) CleanUp() { } // NewPostFilteringScaleDownNodeProcessor returns a new PostFilteringScaleDownNodeProcessor func NewPostFilteringScaleDownNodeProcessor() *PostFilteringScaleDownNodeProcessor { return &PostFilteringScaleDownNodeProcessor{} } + +func (p *PostFilteringScaleDownNodeProcessor) filterOutIncompleteAtomicNodeGroups(ctx *context.AutoscalingContext, nodes []simulator.NodeToBeRemoved) []simulator.NodeToBeRemoved { + nodesByGroup := map[cloudprovider.NodeGroup][]simulator.NodeToBeRemoved{} + result := []simulator.NodeToBeRemoved{} + for _, node := range nodes { + nodeGroup, err := ctx.CloudProvider.NodeGroupForNode(node.Node) + if err != nil { + klog.Errorf("Node %v will not scale down, failed to get node info: %s", 
node.Node.Name, err) + continue + } + autoscalingOptions, err := nodeGroup.GetOptions(ctx.NodeGroupDefaults) + if err != nil && err != cloudprovider.ErrNotImplemented { + klog.Errorf("Failed to get autoscaling options for node group %s: %v", nodeGroup.Id(), err) + continue + } + if autoscalingOptions != nil && autoscalingOptions.ZeroOrMaxNodeScaling { + klog.V(2).Infof("Considering node %s for atomic scale down", node.Node.Name) + nodesByGroup[nodeGroup] = append(nodesByGroup[nodeGroup], node) + } else { + klog.V(2).Infof("Considering node %s for standard scale down", node.Node.Name) + result = append(result, node) + } + } + for nodeGroup, nodes := range nodesByGroup { + ngSize, err := nodeGroup.TargetSize() + if err != nil { + klog.Errorf("Nodes from group %s will not scale down, failed to get target size: %s", nodeGroup.Id(), err) + continue + } + if ngSize == len(nodes) { + klog.V(2).Infof("Scheduling atomic scale down for all %v nodes from node group %s", len(nodes), nodeGroup.Id()) + result = append(result, nodes...) 
+ } else { + klog.V(2).Infof("Skipping scale down for %v nodes from node group %s, all %v nodes have to be scaled down atomically", len(nodes), nodeGroup.Id(), ngSize) + } + } + return result +} diff --git a/cluster-autoscaler/processors/processors.go b/cluster-autoscaler/processors/processors.go index 78cc56ea13f9..638af953cf8e 100644 --- a/cluster-autoscaler/processors/processors.go +++ b/cluster-autoscaler/processors/processors.go @@ -19,6 +19,7 @@ package processors import ( "k8s.io/autoscaler/cluster-autoscaler/config" "k8s.io/autoscaler/cluster-autoscaler/processors/actionablecluster" + "k8s.io/autoscaler/cluster-autoscaler/processors/binpacking" "k8s.io/autoscaler/cluster-autoscaler/processors/customresources" "k8s.io/autoscaler/cluster-autoscaler/processors/nodegroupconfig" "k8s.io/autoscaler/cluster-autoscaler/processors/nodegroups" @@ -38,6 +39,8 @@ type AutoscalingProcessors struct { PodListProcessor pods.PodListProcessor // NodeGroupListProcessor is used to process list of NodeGroups that can be used in scale-up. NodeGroupListProcessor nodegroups.NodeGroupListProcessor + // BinpackingLimiter processes expansion options to stop binpacking early. + BinpackingLimiter binpacking.BinpackingLimiter // NodeGroupSetProcessor is used to divide scale-up between similar NodeGroups. NodeGroupSetProcessor nodegroupset.NodeGroupSetProcessor // ScaleUpStatusProcessor is used to process the state of the cluster after a scale-up. @@ -67,10 +70,11 @@ type AutoscalingProcessors struct { } // DefaultProcessors returns default set of processors. 
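`filterOutIncompleteAtomicNodeGroups` above keeps a candidate from a `ZeroOrMaxNodeScaling` group only when every node of that group made it into the candidate list; partial scale-downs of atomic groups are skipped. A stripped-down sketch of the same grouping logic (the `group` type and `filterAtomic` are illustrative stand-ins, not the autoscaler API):

```go
package main

import (
	"fmt"
	"sort"
)

// group stands in for a cloudprovider.NodeGroup carrying the
// ZeroOrMaxNodeScaling option and its target size.
type group struct {
	id         string
	atomic     bool
	targetSize int
}

// filterAtomic keeps regular candidates as-is, but keeps candidates from an
// atomic group only if the whole group is present among the candidates.
func filterAtomic(candidates map[string]group) []string {
	byGroup := map[string][]string{}
	sizes := map[string]int{}
	var result []string
	for node, g := range candidates {
		if g.atomic {
			byGroup[g.id] = append(byGroup[g.id], node)
			sizes[g.id] = g.targetSize
		} else {
			result = append(result, node)
		}
	}
	for id, nodes := range byGroup {
		// Every node of the atomic group must be a candidate, or none scale down.
		if len(nodes) == sizes[id] {
			result = append(result, nodes...)
		}
	}
	sort.Strings(result) // deterministic order for the example
	return result
}

func main() {
	reg := group{id: "regular", atomic: false, targetSize: 3}
	atomicFull := group{id: "full", atomic: true, targetSize: 2}
	atomicPartial := group{id: "partial", atomic: true, targetSize: 3}
	got := filterAtomic(map[string]group{
		"r1": reg,
		"a1": atomicFull, "a2": atomicFull, // whole group present: kept
		"p1": atomicPartial, "p2": atomicPartial, // 2 of 3 nodes: dropped
	})
	fmt.Println(got) // [a1 a2 r1]
}
```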
-func DefaultProcessors() *AutoscalingProcessors { +func DefaultProcessors(options config.AutoscalingOptions) *AutoscalingProcessors { return &AutoscalingProcessors{ PodListProcessor: pods.NewDefaultPodListProcessor(), NodeGroupListProcessor: nodegroups.NewDefaultNodeGroupListProcessor(), + BinpackingLimiter: binpacking.NewDefaultBinpackingLimiter(), NodeGroupSetProcessor: nodegroupset.NewDefaultNodeGroupSetProcessor([]string{}, config.NodeGroupDifferenceRatios{ MaxAllocatableDifferenceRatio: config.DefaultMaxAllocatableDifferenceRatio, MaxCapacityMemoryDifferenceRatio: config.DefaultMaxCapacityMemoryDifferenceRatio, @@ -83,7 +87,7 @@ func DefaultProcessors() *AutoscalingProcessors { AutoscalingStatusProcessor: status.NewDefaultAutoscalingStatusProcessor(), NodeGroupManager: nodegroups.NewDefaultNodeGroupManager(), NodeInfoProcessor: nodeinfos.NewDefaultNodeInfoProcessor(), - NodeGroupConfigProcessor: nodegroupconfig.NewDefaultNodeGroupConfigProcessor(), + NodeGroupConfigProcessor: nodegroupconfig.NewDefaultNodeGroupConfigProcessor(options.NodeGroupDefaults), CustomResourcesProcessor: customresources.NewDefaultCustomResourcesProcessor(), ActionableClusterProcessor: actionablecluster.NewDefaultActionableClusterProcessor(), TemplateNodeInfoProvider: nodeinfosprovider.NewDefaultTemplateNodeInfoProvider(nil, false), diff --git a/cluster-autoscaler/processors/scaledowncandidates/emptycandidates/empty_candidates_sorting.go b/cluster-autoscaler/processors/scaledowncandidates/emptycandidates/empty_candidates_sorting.go index 9f9ce8428660..38dbeec3071a 100644 --- a/cluster-autoscaler/processors/scaledowncandidates/emptycandidates/empty_candidates_sorting.go +++ b/cluster-autoscaler/processors/scaledowncandidates/emptycandidates/empty_candidates_sorting.go @@ -20,7 +20,6 @@ import ( "time" apiv1 "k8s.io/api/core/v1" - "k8s.io/autoscaler/cluster-autoscaler/config" "k8s.io/autoscaler/cluster-autoscaler/simulator" "k8s.io/autoscaler/cluster-autoscaler/simulator/clustersnapshot" 
schedulerframework "k8s.io/kubernetes/pkg/scheduler/framework" @@ -50,8 +49,7 @@ type EmptySorting struct { } // NewEmptySortingProcessor return EmptySorting struct. -func NewEmptySortingProcessor(opts *config.AutoscalingOptions, n nodeInfoGetter) *EmptySorting { - deleteOptions := simulator.NewNodeDeleteOptions(*opts) +func NewEmptySortingProcessor(n nodeInfoGetter, deleteOptions simulator.NodeDeleteOptions) *EmptySorting { return &EmptySorting{n, deleteOptions} } diff --git a/cluster-autoscaler/simulator/drain.go b/cluster-autoscaler/simulator/drain.go index d428618bed11..21edf94c1a0c 100644 --- a/cluster-autoscaler/simulator/drain.go +++ b/cluster-autoscaler/simulator/drain.go @@ -54,6 +54,7 @@ func NewNodeDeleteOptions(opts config.AutoscalingOptions) NodeDeleteOptions { SkipNodesWithLocalStorage: opts.SkipNodesWithLocalStorage, MinReplicaCount: opts.MinReplicaCount, SkipNodesWithCustomControllerPods: opts.SkipNodesWithCustomControllerPods, + DrainabilityRules: drainability.DefaultRules(), } } @@ -68,25 +69,28 @@ func NewNodeDeleteOptions(opts config.AutoscalingOptions) NodeDeleteOptions { func GetPodsToMove(nodeInfo *schedulerframework.NodeInfo, deleteOptions NodeDeleteOptions, listers kube_util.ListerRegistry, pdbs []*policyv1.PodDisruptionBudget, timestamp time.Time) (pods []*apiv1.Pod, daemonSetPods []*apiv1.Pod, blockingPod *drain.BlockingPod, err error) { var drainPods, drainDs []*apiv1.Pod + drainabilityRules := deleteOptions.DrainabilityRules + if drainabilityRules == nil { + drainabilityRules = drainability.DefaultRules() + } for _, podInfo := range nodeInfo.Pods { pod := podInfo.Pod - d := drainabilityStatus(pod, deleteOptions.DrainabilityRules) - if d.Matched { - switch d.Reason { - case drain.NoReason: - if pod_util.IsDaemonSetPod(pod) { - drainDs = append(drainDs, pod) - } else { - drainPods = append(drainPods, pod) - } - continue - default: - blockingPod = &drain.BlockingPod{pod, d.Reason} - err = d.Error - return + d := drainabilityStatus(pod, 
drainabilityRules) + switch d.Outcome { + case drainability.UndefinedOutcome: + pods = append(pods, podInfo.Pod) + case drainability.DrainOk: + if pod_util.IsDaemonSetPod(pod) { + drainDs = append(drainDs, pod) + } else { + drainPods = append(drainPods, pod) } + case drainability.BlockDrain: + blockingPod = &drain.BlockingPod{pod, d.BlockingReason} + err = d.Error + return + case drainability.SkipDrain: } - pods = append(pods, podInfo.Pod) } pods, daemonSetPods, blockingPod, err = drain.GetPodsForDeletionOnNodeDrain( pods, @@ -130,9 +134,11 @@ func checkPdbs(pods []*apiv1.Pod, pdbs []*policyv1.PodDisruptionBudget) (*drain. func drainabilityStatus(pod *apiv1.Pod, dr []drainability.Rule) drainability.Status { for _, f := range dr { - if d := f.Drainable(pod); d.Matched { + if d := f.Drainable(pod); d.Outcome != drainability.UndefinedOutcome { return d } } - return drainability.Status{} + return drainability.Status{ + Outcome: drainability.UndefinedOutcome, + } } diff --git a/cluster-autoscaler/simulator/drain_test.go b/cluster-autoscaler/simulator/drain_test.go index 1165a9419c32..02ad0ba372c1 100644 --- a/cluster-autoscaler/simulator/drain_test.go +++ b/cluster-autoscaler/simulator/drain_test.go @@ -339,5 +339,5 @@ func (n neverDrain) Drainable(*apiv1.Pod) drainability.Status { type cantDecide struct{} func (c cantDecide) Drainable(*apiv1.Pod) drainability.Status { - return drainability.NewUnmatchedStatus() + return drainability.NewUndefinedStatus() } diff --git a/cluster-autoscaler/simulator/drainability/mirror.go b/cluster-autoscaler/simulator/drainability/mirror.go new file mode 100644 index 000000000000..668c3993220b --- /dev/null +++ b/cluster-autoscaler/simulator/drainability/mirror.go @@ -0,0 +1,39 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. 
+You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package drainability + +import ( + "k8s.io/autoscaler/cluster-autoscaler/utils/pod" + + apiv1 "k8s.io/api/core/v1" +) + +// MirrorPodRule is a drainability rule on how to handle mirror pods. +type MirrorPodRule struct{} + +// NewMirrorPodRule creates a new MirrorPodRule. +func NewMirrorPodRule() *MirrorPodRule { + return &MirrorPodRule{} +} + +// Drainable decides what to do with mirror pods on node drain. +func (m *MirrorPodRule) Drainable(p *apiv1.Pod) Status { + if pod.IsMirrorPod(p) { + return NewSkipStatus() + } + return NewUndefinedStatus() +} diff --git a/cluster-autoscaler/simulator/drainability/mirror_test.go b/cluster-autoscaler/simulator/drainability/mirror_test.go new file mode 100644 index 000000000000..961b3d925eae --- /dev/null +++ b/cluster-autoscaler/simulator/drainability/mirror_test.go @@ -0,0 +1,66 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. 
+*/ + +package drainability + +import ( + "testing" + + apiv1 "k8s.io/api/core/v1" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/kubernetes/pkg/kubelet/types" +) + +func TestMirrorPodRule(t *testing.T) { + testCases := []struct { + desc string + pod *apiv1.Pod + want Status + }{ + { + desc: "non mirror pod", + pod: &apiv1.Pod{ + ObjectMeta: metav1.ObjectMeta{ + Name: "regularPod", + Namespace: "ns", + }, + }, + want: NewUndefinedStatus(), + }, + { + desc: "mirror pod", + pod: &apiv1.Pod{ + ObjectMeta: metav1.ObjectMeta{ + Name: "manifestPod", + Namespace: "kube-system", + Annotations: map[string]string{ + types.ConfigMirrorAnnotationKey: "something", + }, + }, + }, + want: NewSkipStatus(), + }, + } + for _, tc := range testCases { + t.Run(tc.desc, func(t *testing.T) { + m := NewMirrorPodRule() + got := m.Drainable(tc.pod) + if tc.want != got { + t.Errorf("MirrorPodRule.Drainable(%v) = %v, want %v", tc.pod.Name, got, tc.want) + } + }) + } +} diff --git a/cluster-autoscaler/simulator/drainability/rule.go b/cluster-autoscaler/simulator/drainability/rule.go index c4babe6fc348..f84b33b60edd 100644 --- a/cluster-autoscaler/simulator/drainability/rule.go +++ b/cluster-autoscaler/simulator/drainability/rule.go @@ -22,15 +22,32 @@ import ( apiv1 "k8s.io/api/core/v1" ) -// Status indicates whether a pod can be drained, with an optional error message when not. +// OutcomeType identifies the action that should be taken when it comes to +// draining a pod. +type OutcomeType int + +const ( + // UndefinedOutcome means the Rule did not match and that another one + // has to be applied. + UndefinedOutcome OutcomeType = iota + // DrainOk means that the pod can be drained. + DrainOk + // BlockDrain means that the pod should block drain for its entire node. + BlockDrain + // SkipDrain means that the pod doesn't block drain of other pods, but + // should not be drained itself. + SkipDrain +) + +// Status contains all information about drainability of a single pod. 
// TODO(x13n): Move values from drain.BlockingPodReason to some typed string. type Status struct { - // Matched indicates whether the Rule can be applied to a given pod. - // `false` indicates that the Rule doesn't match and that another one - // has to be applied. - Matched bool - // Reason contains the decision whether to drain the pod or not. - Reason drain.BlockingPodReason + // Outcome indicates what can happen when it comes to draining a + // specific pod. + Outcome OutcomeType + // Reason contains the reason why a pod is blocking node drain. It is + // set only when Outcome is BlockDrain. + BlockingReason drain.BlockingPodReason // Error contains an optional error message. Error error } @@ -38,22 +55,28 @@ type Status struct { // NewDrainableStatus returns a new Status indicating that a pod can be drained. func NewDrainableStatus() Status { return Status{ - Matched: true, - Reason: drain.NoReason, + Outcome: DrainOk, } } // NewBlockedStatus returns a new Status indicating that a pod is blocked and cannot be drained. func NewBlockedStatus(reason drain.BlockingPodReason, err error) Status { return Status{ - Matched: true, - Reason: reason, - Error: err, + Outcome: BlockDrain, + BlockingReason: reason, + Error: err, + } +} + +// NewSkipStatus returns a new Status indicating that a pod should be skipped when draining a node. +func NewSkipStatus() Status { + return Status{ + Outcome: SkipDrain, } } -// NewUnmatchedStatus returns a new Status that doesn't contain a decision. -func NewUnmatchedStatus() Status { +// NewUndefinedStatus returns a new Status that doesn't contain a decision. +func NewUndefinedStatus() Status { return Status{} } @@ -63,3 +86,10 @@ type Rule interface { // the specific Rule. Drainable(*apiv1.Pod) Status } + +// DefaultRules returns the default list of Rules. 
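The `Matched`/`Reason` pair is replaced above by an explicit `OutcomeType`, where `UndefinedOutcome` now means "defer to the next rule" and `SkipDrain` covers pods (like mirror pods) that should neither be drained nor block the node. A toy version of the resulting rule chain, with string pod names instead of `*apiv1.Pod`:

```go
package main

import "fmt"

// Outcome mirrors the shape of the new drainability OutcomeType.
type Outcome int

const (
	Undefined Outcome = iota // rule did not decide; try the next one
	DrainOk
	BlockDrain
	SkipDrain // don't drain this pod, but don't block the node either
)

// Rule is a simplified drainability rule over a pod name.
type Rule func(pod string) Outcome

// decide runs the rules in order and returns the first defined outcome,
// the same shape as drainabilityStatus in simulator/drain.go.
func decide(pod string, rules []Rule) Outcome {
	for _, r := range rules {
		if o := r(pod); o != Undefined {
			return o
		}
	}
	return Undefined
}

func main() {
	mirrorRule := func(pod string) Outcome {
		if pod == "mirror-pod" { // stand-in for pod.IsMirrorPod
			return SkipDrain
		}
		return Undefined
	}
	rules := []Rule{mirrorRule}
	fmt.Println(decide("mirror-pod", rules) == SkipDrain) // true
	fmt.Println(decide("app-pod", rules) == Undefined)    // true
}
```

An `Undefined` result from the whole chain falls through to the legacy `drain.GetPodsForDeletionOnNodeDrain` handling, which is why `GetPodsToMove` appends such pods to `pods` instead of deciding for them.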
+func DefaultRules() []Rule { + return []Rule{ + NewMirrorPodRule(), + } +} diff --git a/cluster-autoscaler/simulator/predicatechecker/schedulerbased.go b/cluster-autoscaler/simulator/predicatechecker/schedulerbased.go index 5686ed7fd7e5..04f90f82e606 100644 --- a/cluster-autoscaler/simulator/predicatechecker/schedulerbased.go +++ b/cluster-autoscaler/simulator/predicatechecker/schedulerbased.go @@ -27,6 +27,7 @@ import ( kube_client "k8s.io/client-go/kubernetes" v1listers "k8s.io/client-go/listers/core/v1" klog "k8s.io/klog/v2" + "k8s.io/kubernetes/pkg/scheduler/apis/config" scheduler_config "k8s.io/kubernetes/pkg/scheduler/apis/config/latest" schedulerframework "k8s.io/kubernetes/pkg/scheduler/framework" scheduler_plugins "k8s.io/kubernetes/pkg/scheduler/framework/plugins" @@ -44,21 +45,26 @@ type SchedulerBasedPredicateChecker struct { } // NewSchedulerBasedPredicateChecker builds scheduler based PredicateChecker. -func NewSchedulerBasedPredicateChecker(kubeClient kube_client.Interface, stop <-chan struct{}) (*SchedulerBasedPredicateChecker, error) { +func NewSchedulerBasedPredicateChecker(kubeClient kube_client.Interface, schedConfig *config.KubeSchedulerConfiguration, stop <-chan struct{}) (*SchedulerBasedPredicateChecker, error) { informerFactory := informers.NewSharedInformerFactory(kubeClient, 0) - config, err := scheduler_config.Default() - if err != nil { - return nil, fmt.Errorf("couldn't create scheduler config: %v", err) + + if schedConfig == nil { + var err error + schedConfig, err = scheduler_config.Default() + if err != nil { + return nil, fmt.Errorf("couldn't create scheduler config: %v", err) + } } - if len(config.Profiles) != 1 || config.Profiles[0].SchedulerName != apiv1.DefaultSchedulerName { - return nil, fmt.Errorf("unexpected scheduler config: expected default scheduler profile only (found %d profiles)", len(config.Profiles)) + + if len(schedConfig.Profiles) != 1 { + return nil, fmt.Errorf("unexpected scheduler config: expected one scheduler 
profile only (found %d profiles)", len(schedConfig.Profiles)) } sharedLister := NewDelegatingSchedulerSharedLister() framework, err := schedulerframeworkruntime.NewFramework( + context.TODO(), scheduler_plugins.NewInTreeRegistry(), - &config.Profiles[0], - stop, + &schedConfig.Profiles[0], schedulerframeworkruntime.WithInformerFactory(informerFactory), schedulerframeworkruntime.WithSnapshotSharedLister(sharedLister), ) @@ -182,6 +188,7 @@ func (p *SchedulerBasedPredicateChecker) CheckPredicates(clusterSnapshot cluster p.buildDebugInfo(filterName, nodeInfo)) } + return nil } diff --git a/cluster-autoscaler/simulator/predicatechecker/schedulerbased_test.go b/cluster-autoscaler/simulator/predicatechecker/schedulerbased_test.go index 4405490880b8..b9d6f8be7cbe 100644 --- a/cluster-autoscaler/simulator/predicatechecker/schedulerbased_test.go +++ b/cluster-autoscaler/simulator/predicatechecker/schedulerbased_test.go @@ -17,10 +17,14 @@ limitations under the License. package predicatechecker import ( + "os" + "path/filepath" "testing" "time" + testconfig "k8s.io/autoscaler/cluster-autoscaler/config/test" "k8s.io/autoscaler/cluster-autoscaler/simulator/clustersnapshot" + scheduler "k8s.io/autoscaler/cluster-autoscaler/utils/scheduler" . 
"k8s.io/autoscaler/cluster-autoscaler/utils/test" "github.com/stretchr/testify/assert" @@ -36,52 +40,114 @@ func TestCheckPredicate(t *testing.T) { n1000 := BuildTestNode("n1000", 1000, 2000000) SetNodeReadyState(n1000, true, time.Time{}) + n1000Unschedulable := BuildTestNode("n1000", 1000, 2000000) + SetNodeReadyState(n1000Unschedulable, true, time.Time{}) + + defaultPredicateChecker, err := NewTestPredicateChecker() + assert.NoError(t, err) + + // temp dir + tmpDir, err := os.MkdirTemp("", "scheduler-configs") + if err != nil { + t.Fatal(err) + } + defer os.RemoveAll(tmpDir) + + customConfigFile := filepath.Join(tmpDir, "custom_config.yaml") + if err := os.WriteFile(customConfigFile, + []byte(testconfig.SchedulerConfigNodeResourcesFitDisabled), + os.FileMode(0600)); err != nil { + t.Fatal(err) + } + + customConfig, err := scheduler.ConfigFromPath(customConfigFile) + assert.NoError(t, err) + customPredicateChecker, err := NewTestPredicateCheckerWithCustomConfig(customConfig) + assert.NoError(t, err) tests := []struct { - name string - node *apiv1.Node - scheduledPods []*apiv1.Pod - testPod *apiv1.Pod - expectError bool + name string + node *apiv1.Node + scheduledPods []*apiv1.Pod + testPod *apiv1.Pod + predicateChecker PredicateChecker + expectError bool }{ + // default predicate checker test cases + { + name: "default - other pod - insuficient cpu", + node: n1000, + scheduledPods: []*apiv1.Pod{p450}, + testPod: p600, + expectError: true, + predicateChecker: defaultPredicateChecker, + }, + { + name: "default - other pod - ok", + node: n1000, + scheduledPods: []*apiv1.Pod{p450}, + testPod: p500, + expectError: false, + predicateChecker: defaultPredicateChecker, + }, + { + name: "default - empty - insuficient cpu", + node: n1000, + scheduledPods: []*apiv1.Pod{}, + testPod: p8000, + expectError: true, + predicateChecker: defaultPredicateChecker, + }, { - name: "other pod - insuficient cpu", - node: n1000, - scheduledPods: []*apiv1.Pod{p450}, - testPod: p600, - 
expectError: true, + name: "default - empty - ok", + node: n1000, + scheduledPods: []*apiv1.Pod{}, + testPod: p600, + expectError: false, + predicateChecker: defaultPredicateChecker, }, + // custom predicate checker test cases { - name: "other pod - ok", - node: n1000, - scheduledPods: []*apiv1.Pod{p450}, - testPod: p500, - expectError: false, + name: "custom - other pod - insufficient cpu - ok", + node: n1000, + scheduledPods: []*apiv1.Pod{p450}, + testPod: p600, + expectError: false, + predicateChecker: customPredicateChecker, }, { - name: "empty - insuficient cpu", - node: n1000, - scheduledPods: []*apiv1.Pod{}, - testPod: p8000, - expectError: true, + name: "custom - other pod - ok", + node: n1000, + scheduledPods: []*apiv1.Pod{p450}, + testPod: p500, + expectError: false, + predicateChecker: customPredicateChecker, }, { - name: "empty - ok", - node: n1000, - scheduledPods: []*apiv1.Pod{}, - testPod: p600, - expectError: false, + name: "custom - empty - insufficient cpu - ok", + node: n1000, + scheduledPods: []*apiv1.Pod{}, + testPod: p8000, + expectError: false, + predicateChecker: customPredicateChecker, + }, + { + name: "custom - empty - ok", + node: n1000, + scheduledPods: []*apiv1.Pod{}, + testPod: p600, + expectError: false, + predicateChecker: customPredicateChecker, }, } for _, tt := range tests { t.Run(tt.name, func(t *testing.T) { var err error - predicateChecker, err := NewTestPredicateChecker() clusterSnapshot := clustersnapshot.NewBasicClusterSnapshot() err = clusterSnapshot.AddNodeWithPods(tt.node, tt.scheduledPods) assert.NoError(t, err) - predicateError := predicateChecker.CheckPredicates(clusterSnapshot, tt.testPod, tt.node.Name) + predicateError := tt.predicateChecker.CheckPredicates(clusterSnapshot, tt.testPod, tt.node.Name) if tt.expectError { assert.NotNil(t, predicateError) assert.Equal(t, NotSchedulablePredicateError, predicateError.ErrorType()) @@ -102,27 +168,99 @@ func TestFitsAnyNode(t *testing.T) { n1000 := BuildTestNode("n1000", 1000, 2000000) n2000 :=
BuildTestNode("n2000", 2000, 2000000) - var err error + defaultPredicateChecker, err := NewTestPredicateChecker() + assert.NoError(t, err) - clusterSnapshot := clustersnapshot.NewBasicClusterSnapshot() - err = clusterSnapshot.AddNode(n1000) + // temp dir + tmpDir, err := os.MkdirTemp("", "scheduler-configs") + if err != nil { + t.Fatal(err) + } + defer os.RemoveAll(tmpDir) + + customConfigFile := filepath.Join(tmpDir, "custom_config.yaml") + if err := os.WriteFile(customConfigFile, + []byte(testconfig.SchedulerConfigNodeResourcesFitDisabled), + os.FileMode(0600)); err != nil { + t.Fatal(err) + } + + customConfig, err := scheduler.ConfigFromPath(customConfigFile) assert.NoError(t, err) - err = clusterSnapshot.AddNode(n2000) + customPredicateChecker, err := NewTestPredicateCheckerWithCustomConfig(customConfig) assert.NoError(t, err) - predicateChecker, err := NewTestPredicateChecker() - assert.NoError(t, err) + testCases := []struct { + name string + predicateChecker PredicateChecker + pod *apiv1.Pod + expectedNodes []string + expectError bool + }{ + // default predicate checker test cases + { + name: "default - small pod - no error", + predicateChecker: defaultPredicateChecker, + pod: p900, + expectedNodes: []string{"n1000", "n2000"}, + expectError: false, + }, + { + name: "default - medium pod - no error", + predicateChecker: defaultPredicateChecker, + pod: p1900, + expectedNodes: []string{"n2000"}, + expectError: false, + }, + { + name: "default - large pod - insufficient cpu", + predicateChecker: defaultPredicateChecker, + pod: p2100, + expectError: true, + }, - nodeName, err := predicateChecker.FitsAnyNode(clusterSnapshot, p900) - assert.NoError(t, err) - assert.True(t, nodeName == "n1000" || nodeName == "n2000") + // custom predicate checker test cases + { + name: "custom - small pod - no error", + predicateChecker: customPredicateChecker, + pod: p900, + expectedNodes: []string{"n1000", "n2000"}, + expectError: false, + }, + { + name: "custom - medium pod - no 
error", + predicateChecker: customPredicateChecker, + pod: p1900, + expectedNodes: []string{"n1000", "n2000"}, + expectError: false, + }, + { + name: "custom - large pod - insufficient cpu", + predicateChecker: customPredicateChecker, + pod: p2100, + expectedNodes: []string{"n1000", "n2000"}, + expectError: false, + }, + } - nodeName, err = predicateChecker.FitsAnyNode(clusterSnapshot, p1900) + clusterSnapshot := clustersnapshot.NewBasicClusterSnapshot() + err = clusterSnapshot.AddNode(n1000) assert.NoError(t, err) - assert.Equal(t, "n2000", nodeName) + err = clusterSnapshot.AddNode(n2000) + assert.NoError(t, err) + + for _, tc := range testCases { + t.Run(tc.name, func(t *testing.T) { + nodeName, err := tc.predicateChecker.FitsAnyNode(clusterSnapshot, tc.pod) + if tc.expectError { + assert.Error(t, err) + } else { + assert.NoError(t, err) + assert.Contains(t, tc.expectedNodes, nodeName) + } + }) + } - nodeName, err = predicateChecker.FitsAnyNode(clusterSnapshot, p2100) - assert.Error(t, err) } func TestDebugInfo(t *testing.T) { @@ -142,16 +280,39 @@ func TestDebugInfo(t *testing.T) { } SetNodeReadyState(node1, true, time.Time{}) - predicateChecker, err := NewTestPredicateChecker() - assert.NoError(t, err) - clusterSnapshot := clustersnapshot.NewBasicClusterSnapshot() - err = clusterSnapshot.AddNode(node1) + err := clusterSnapshot.AddNode(node1) assert.NoError(t, err) - predicateErr := predicateChecker.CheckPredicates(clusterSnapshot, p1, "n1") + // with default predicate checker + defaultPredicateChecker, err := NewTestPredicateChecker() + assert.NoError(t, err) + predicateErr := defaultPredicateChecker.CheckPredicates(clusterSnapshot, p1, "n1") assert.NotNil(t, predicateErr) assert.Equal(t, "node(s) had untolerated taint {SomeTaint: WhyNot?}", predicateErr.Message()) assert.Contains(t, predicateErr.VerboseMessage(), "RandomTaint") + + // with custom predicate checker + + // temp dir + tmpDir, err := os.MkdirTemp("", "scheduler-configs") + if err != nil { + 
t.Fatal(err) + } + defer os.RemoveAll(tmpDir) + + customConfigFile := filepath.Join(tmpDir, "custom_config.yaml") + if err := os.WriteFile(customConfigFile, + []byte(testconfig.SchedulerConfigTaintTolerationDisabled), + os.FileMode(0600)); err != nil { + t.Fatal(err) + } + + customConfig, err := scheduler.ConfigFromPath(customConfigFile) + assert.NoError(t, err) + customPredicateChecker, err := NewTestPredicateCheckerWithCustomConfig(customConfig) + assert.NoError(t, err) + predicateErr = customPredicateChecker.CheckPredicates(clusterSnapshot, p1, "n1") + assert.Nil(t, predicateErr) } diff --git a/cluster-autoscaler/simulator/predicatechecker/testchecker.go b/cluster-autoscaler/simulator/predicatechecker/testchecker.go index ba7730dabd2e..81c3459c88ba 100644 --- a/cluster-autoscaler/simulator/predicatechecker/testchecker.go +++ b/cluster-autoscaler/simulator/predicatechecker/testchecker.go @@ -18,10 +18,27 @@ package predicatechecker import ( clientsetfake "k8s.io/client-go/kubernetes/fake" + "k8s.io/kubernetes/pkg/scheduler/apis/config" + scheduler_config_latest "k8s.io/kubernetes/pkg/scheduler/apis/config/latest" ) // NewTestPredicateChecker builds test version of PredicateChecker. func NewTestPredicateChecker() (PredicateChecker, error) { + schedConfig, err := scheduler_config_latest.Default() + if err != nil { + return nil, err + } + // just call out to NewSchedulerBasedPredicateChecker but use fake kubeClient - return NewSchedulerBasedPredicateChecker(clientsetfake.NewSimpleClientset(), make(chan struct{})) + return NewSchedulerBasedPredicateChecker(clientsetfake.NewSimpleClientset(), schedConfig, make(chan struct{})) +} + +// NewTestPredicateCheckerWithCustomConfig builds test version of PredicateChecker with custom scheduler config. 
+func NewTestPredicateCheckerWithCustomConfig(schedConfig *config.KubeSchedulerConfiguration) (PredicateChecker, error) { + if schedConfig != nil { + // just call out to NewSchedulerBasedPredicateChecker but use fake kubeClient + return NewSchedulerBasedPredicateChecker(clientsetfake.NewSimpleClientset(), schedConfig, make(chan struct{})) + } + + return NewTestPredicateChecker() } diff --git a/cluster-autoscaler/simulator/utilization/info.go b/cluster-autoscaler/simulator/utilization/info.go index f799acc43433..30c78cb95f29 100644 --- a/cluster-autoscaler/simulator/utilization/info.go +++ b/cluster-autoscaler/simulator/utilization/info.go @@ -48,7 +48,7 @@ type Info struct { // returns the individual cpu, memory and gpu utilization. func Calculate(nodeInfo *schedulerframework.NodeInfo, skipDaemonSetPods, skipMirrorPods bool, gpuConfig *cloudprovider.GpuConfig, currentTime time.Time) (utilInfo Info, err error) { if gpuConfig != nil { - gpuUtil, err := calculateUtilizationOfResource(nodeInfo, gpuConfig.ResourceName, skipDaemonSetPods, skipMirrorPods, currentTime) + gpuUtil, err := CalculateUtilizationOfResource(nodeInfo, gpuConfig.ResourceName, skipDaemonSetPods, skipMirrorPods, currentTime) if err != nil { klog.V(3).Infof("node %s has unready GPU resource: %s", nodeInfo.Node().Name, gpuConfig.ResourceName) // Return 0 if GPU is unready. This will guarantee we can still scale down a node with unready GPU. 
@@ -58,11 +58,11 @@ func Calculate(nodeInfo *schedulerframework.NodeInfo, skipDaemonSetPods, skipMir return Info{GpuUtil: gpuUtil, ResourceName: gpuConfig.ResourceName, Utilization: gpuUtil}, err } - cpu, err := calculateUtilizationOfResource(nodeInfo, apiv1.ResourceCPU, skipDaemonSetPods, skipMirrorPods, currentTime) + cpu, err := CalculateUtilizationOfResource(nodeInfo, apiv1.ResourceCPU, skipDaemonSetPods, skipMirrorPods, currentTime) if err != nil { return Info{}, err } - mem, err := calculateUtilizationOfResource(nodeInfo, apiv1.ResourceMemory, skipDaemonSetPods, skipMirrorPods, currentTime) + mem, err := CalculateUtilizationOfResource(nodeInfo, apiv1.ResourceMemory, skipDaemonSetPods, skipMirrorPods, currentTime) if err != nil { return Info{}, err } @@ -80,7 +80,8 @@ func Calculate(nodeInfo *schedulerframework.NodeInfo, skipDaemonSetPods, skipMir return utilization, nil } -func calculateUtilizationOfResource(nodeInfo *schedulerframework.NodeInfo, resourceName apiv1.ResourceName, skipDaemonSetPods, skipMirrorPods bool, currentTime time.Time) (float64, error) { +// CalculateUtilizationOfResource calculates utilization of a given resource for a node. 
+func CalculateUtilizationOfResource(nodeInfo *schedulerframework.NodeInfo, resourceName apiv1.ResourceName, skipDaemonSetPods, skipMirrorPods bool, currentTime time.Time) (float64, error) { nodeAllocatable, found := nodeInfo.Node().Status.Allocatable[resourceName] if !found { return 0, fmt.Errorf("failed to get %v from %s", resourceName, nodeInfo.Node().Name) diff --git a/cluster-autoscaler/utils/drain/drain.go b/cluster-autoscaler/utils/drain/drain.go index 02ddb7a125c5..28802c9a8291 100644 --- a/cluster-autoscaler/utils/drain/drain.go +++ b/cluster-autoscaler/utils/drain/drain.go @@ -97,10 +97,6 @@ func GetPodsForDeletionOnNodeDrain( } for _, pod := range podList { - if pod_util.IsMirrorPod(pod) { - continue - } - // Possibly skip a pod under deletion but only if it was being deleted for long enough // to avoid a situation when we delete the empty node immediately after the pod was marked for // deletion without respecting any graceful termination. diff --git a/cluster-autoscaler/utils/kubernetes/listers.go b/cluster-autoscaler/utils/kubernetes/listers.go index d0033550fad8..198fdfb37cb0 100644 --- a/cluster-autoscaler/utils/kubernetes/listers.go +++ b/cluster-autoscaler/utils/kubernetes/listers.go @@ -39,7 +39,7 @@ type ListerRegistry interface { AllNodeLister() NodeLister ReadyNodeLister() NodeLister ScheduledPodLister() PodLister - UnschedulablePodLister() PodLister + ScheduledAndUnschedulablePodLister() ScheduledAndUnschedulablePodLister PodDisruptionBudgetLister() PodDisruptionBudgetLister DaemonSetLister() v1appslister.DaemonSetLister ReplicationControllerLister() v1lister.ReplicationControllerLister @@ -49,41 +49,41 @@ type ListerRegistry interface { } type listerRegistryImpl struct { - allNodeLister NodeLister - readyNodeLister NodeLister - scheduledPodLister PodLister - unschedulablePodLister PodLister - podDisruptionBudgetLister PodDisruptionBudgetLister - daemonSetLister v1appslister.DaemonSetLister - replicationControllerLister 
v1lister.ReplicationControllerLister - jobLister v1batchlister.JobLister - replicaSetLister v1appslister.ReplicaSetLister - statefulSetLister v1appslister.StatefulSetLister + allNodeLister NodeLister + readyNodeLister NodeLister + scheduledPodLister PodLister + scheduledAndUnschedulablePodLister ScheduledAndUnschedulablePodLister + podDisruptionBudgetLister PodDisruptionBudgetLister + daemonSetLister v1appslister.DaemonSetLister + replicationControllerLister v1lister.ReplicationControllerLister + jobLister v1batchlister.JobLister + replicaSetLister v1appslister.ReplicaSetLister + statefulSetLister v1appslister.StatefulSetLister } // NewListerRegistry returns a registry providing various listers to list pods or nodes matching conditions func NewListerRegistry(allNode NodeLister, readyNode NodeLister, scheduledPod PodLister, - unschedulablePod PodLister, podDisruptionBudgetLister PodDisruptionBudgetLister, + scheduledAndUnschedulablePodLister ScheduledAndUnschedulablePodLister, podDisruptionBudgetLister PodDisruptionBudgetLister, daemonSetLister v1appslister.DaemonSetLister, replicationControllerLister v1lister.ReplicationControllerLister, jobLister v1batchlister.JobLister, replicaSetLister v1appslister.ReplicaSetLister, statefulSetLister v1appslister.StatefulSetLister) ListerRegistry { return listerRegistryImpl{ - allNodeLister: allNode, - readyNodeLister: readyNode, - scheduledPodLister: scheduledPod, - unschedulablePodLister: unschedulablePod, - podDisruptionBudgetLister: podDisruptionBudgetLister, - daemonSetLister: daemonSetLister, - replicationControllerLister: replicationControllerLister, - jobLister: jobLister, - replicaSetLister: replicaSetLister, - statefulSetLister: statefulSetLister, + allNodeLister: allNode, + readyNodeLister: readyNode, + scheduledPodLister: scheduledPod, + scheduledAndUnschedulablePodLister: scheduledAndUnschedulablePodLister, + podDisruptionBudgetLister: podDisruptionBudgetLister, + daemonSetLister: daemonSetLister, + 
replicationControllerLister: replicationControllerLister, + jobLister: jobLister, + replicaSetLister: replicaSetLister, + statefulSetLister: statefulSetLister, } } // NewListerRegistryWithDefaultListers returns a registry filled with listers of the default implementations func NewListerRegistryWithDefaultListers(kubeClient client.Interface, stopChannel <-chan struct{}) ListerRegistry { - unschedulablePodLister := NewUnschedulablePodLister(kubeClient, stopChannel) + scheduledAndUnschedulablePodLister := NewScheduledAndUnschedulablePodLister(kubeClient, stopChannel) scheduledPodLister := NewScheduledPodLister(kubeClient, stopChannel) readyNodeLister := NewReadyNodeLister(kubeClient, stopChannel) allNodeLister := NewAllNodeLister(kubeClient, stopChannel) @@ -94,7 +94,7 @@ func NewListerRegistryWithDefaultListers(kubeClient client.Interface, stopChanne replicaSetLister := NewReplicaSetLister(kubeClient, stopChannel) statefulSetLister := NewStatefulSetLister(kubeClient, stopChannel) return NewListerRegistry(allNodeLister, readyNodeLister, scheduledPodLister, - unschedulablePodLister, podDisruptionBudgetLister, daemonSetLister, + scheduledAndUnschedulablePodLister, podDisruptionBudgetLister, daemonSetLister, replicationControllerLister, jobLister, replicaSetLister, statefulSetLister) } @@ -113,9 +113,9 @@ func (r listerRegistryImpl) ScheduledPodLister() PodLister { return r.scheduledPodLister } -// UnschedulablePodLister returns the UnschedulablePodLister registered to this registry -func (r listerRegistryImpl) UnschedulablePodLister() PodLister { - return r.unschedulablePodLister +// ScheduledAndUnschedulablePodLister returns the ScheduledAndUnschedulablePodLister registered to this registry +func (r listerRegistryImpl) ScheduledAndUnschedulablePodLister() ScheduledAndUnschedulablePodLister { + return r.scheduledAndUnschedulablePodLister } // PodDisruptionBudgetLister returns the podDisruptionBudgetLister registered to this registry @@ -153,67 +153,70 @@ type PodLister 
interface { List() ([]*apiv1.Pod, error) } -// UnschedulablePodLister lists unscheduled pods -type UnschedulablePodLister struct { +// ScheduledPodLister lists scheduled pods. +type ScheduledPodLister struct { podLister v1lister.PodLister } -// List returns all unscheduled pods. -func (unschedulablePodLister *UnschedulablePodLister) List() ([]*apiv1.Pod, error) { - var unschedulablePods []*apiv1.Pod - allPods, err := unschedulablePodLister.podLister.List(labels.Everything()) - if err != nil { - return unschedulablePods, err - } - for _, pod := range allPods { - _, condition := podv1.GetPodCondition(&pod.Status, apiv1.PodScheduled) - if condition != nil && condition.Status == apiv1.ConditionFalse && condition.Reason == apiv1.PodReasonUnschedulable { - unschedulablePods = append(unschedulablePods, pod) - } - } - return unschedulablePods, nil -} - -// NewUnschedulablePodLister returns a lister providing pods that failed to be scheduled. -func NewUnschedulablePodLister(kubeClient client.Interface, stopchannel <-chan struct{}) PodLister { - return NewUnschedulablePodInNamespaceLister(kubeClient, apiv1.NamespaceAll, stopchannel) +// List returns all scheduled pods. +func (lister *ScheduledPodLister) List() ([]*apiv1.Pod, error) { + return lister.podLister.List(labels.Everything()) } -// NewUnschedulablePodInNamespaceLister returns a lister providing pods that failed to be scheduled in the given namespace. 
-func NewUnschedulablePodInNamespaceLister(kubeClient client.Interface, namespace string, stopchannel <-chan struct{}) PodLister { +// NewScheduledPodLister builds ScheduledPodLister +func NewScheduledPodLister(kubeClient client.Interface, stopchannel <-chan struct{}) PodLister { - // watch unscheduled pods + // watch scheduled pods - selector := fields.ParseSelectorOrDie("spec.nodeName==" + "" + ",status.phase!=" + + selector := fields.ParseSelectorOrDie("spec.nodeName!=" + "" + ",status.phase!=" + string(apiv1.PodSucceeded) + ",status.phase!=" + string(apiv1.PodFailed)) - podListWatch := cache.NewListWatchFromClient(kubeClient.CoreV1().RESTClient(), "pods", namespace, selector) + podListWatch := cache.NewListWatchFromClient(kubeClient.CoreV1().RESTClient(), "pods", apiv1.NamespaceAll, selector) store, reflector := cache.NewNamespaceKeyedIndexerAndReflector(podListWatch, &apiv1.Pod{}, time.Hour) podLister := v1lister.NewPodLister(store) go reflector.Run(stopchannel) - return &UnschedulablePodLister{ + + return &ScheduledPodLister{ podLister: podLister, } } -// ScheduledPodLister lists scheduled pods. -type ScheduledPodLister struct { +// ScheduledAndUnschedulablePodLister lists scheduled and unschedulable pods obtained at the same point in time. +type ScheduledAndUnschedulablePodLister interface { + List() (scheduledPods, unschedulablePods []*apiv1.Pod, err error) +} + +// scheduledAndUnschedulablePodLister is the default implementation of ScheduledAndUnschedulablePodLister. +type scheduledAndUnschedulablePodLister struct { podLister v1lister.PodLister } -// List returns all scheduled pods. -func (lister *ScheduledPodLister) List() ([]*apiv1.Pod, error) { - return lister.podLister.List(labels.Everything()) +// List returns all scheduled and unschedulable pods.
+func (lister *scheduledAndUnschedulablePodLister) List() (scheduledPods []*apiv1.Pod, unschedulablePods []*apiv1.Pod, err error) { + allPods, err := lister.podLister.List(labels.Everything()) + if err != nil { + return scheduledPods, unschedulablePods, err + } + for _, pod := range allPods { + if pod.Spec.NodeName != "" { + scheduledPods = append(scheduledPods, pod) + continue + } + _, condition := podv1.GetPodCondition(&pod.Status, apiv1.PodScheduled) + if condition != nil && condition.Status == apiv1.ConditionFalse && condition.Reason == apiv1.PodReasonUnschedulable { + unschedulablePods = append(unschedulablePods, pod) + } + } + return scheduledPods, unschedulablePods, nil } -// NewScheduledPodLister builds ScheduledPodLister -func NewScheduledPodLister(kubeClient client.Interface, stopchannel <-chan struct{}) PodLister { - // watch unscheduled pods - selector := fields.ParseSelectorOrDie("spec.nodeName!=" + "" + ",status.phase!=" + +// NewScheduledAndUnschedulablePodLister builds ScheduledAndUnschedulablePodLister +func NewScheduledAndUnschedulablePodLister(kubeClient client.Interface, stopchannel <-chan struct{}) ScheduledAndUnschedulablePodLister { + selector := fields.ParseSelectorOrDie("status.phase!=" + string(apiv1.PodSucceeded) + ",status.phase!=" + string(apiv1.PodFailed)) podListWatch := cache.NewListWatchFromClient(kubeClient.CoreV1().RESTClient(), "pods", apiv1.NamespaceAll, selector) store, reflector := cache.NewNamespaceKeyedIndexerAndReflector(podListWatch, &apiv1.Pod{}, time.Hour) podLister := v1lister.NewPodLister(store) go reflector.Run(stopchannel) - return &ScheduledPodLister{ + return &scheduledAndUnschedulablePodLister{ podLister: podLister, } } diff --git a/cluster-autoscaler/utils/kubernetes/testlisters.go b/cluster-autoscaler/utils/kubernetes/testlisters.go index 571298484a6c..236d79b0230f 100644 --- a/cluster-autoscaler/utils/kubernetes/testlisters.go +++ b/cluster-autoscaler/utils/kubernetes/testlisters.go @@ -44,6 +44,25 @@ func 
NewTestPodLister(pods []*apiv1.Pod) PodLister { return TestPodLister{pods: pods} } +// TestScheduledAndUnschedulablePodLister is used in tests involving scheduledAndUnschedulablePodListers +type TestScheduledAndUnschedulablePodLister struct { + scheduledPods []*apiv1.Pod + unschedulablePods []*apiv1.Pod +} + +// List returns all scheduled and unschedulable pods in test lister. +func (lister TestScheduledAndUnschedulablePodLister) List() (scheduledPods []*apiv1.Pod, unschedulablePods []*apiv1.Pod, err error) { + return lister.scheduledPods, lister.unschedulablePods, nil +} + +// NewTestScheduledAndUnschedulablePodLister returns a lister that returns provided pods +func NewTestScheduledAndUnschedulablePodLister(scheduledPods []*apiv1.Pod, unschedulablePods []*apiv1.Pod) ScheduledAndUnschedulablePodLister { + return TestScheduledAndUnschedulablePodLister{ + scheduledPods: scheduledPods, + unschedulablePods: unschedulablePods, + } +} + // TestPodDisruptionBudgetLister is used in tests involving listers type TestPodDisruptionBudgetLister struct { pdbs []*policyv1.PodDisruptionBudget diff --git a/cluster-autoscaler/utils/scheduler/scheduler.go b/cluster-autoscaler/utils/scheduler/scheduler.go index 1b9e61259e2d..59d008db64d8 100644 --- a/cluster-autoscaler/utils/scheduler/scheduler.go +++ b/cluster-autoscaler/utils/scheduler/scheduler.go @@ -18,14 +18,25 @@ package scheduler import ( "fmt" + "os" "strings" apiv1 "k8s.io/api/core/v1" "k8s.io/apimachinery/pkg/api/resource" "k8s.io/apimachinery/pkg/util/uuid" + scheduler_config "k8s.io/kubernetes/pkg/scheduler/apis/config" + scheduler_scheme "k8s.io/kubernetes/pkg/scheduler/apis/config/scheme" + scheduler_validation "k8s.io/kubernetes/pkg/scheduler/apis/config/validation" schedulerframework "k8s.io/kubernetes/pkg/scheduler/framework" ) +const ( + schedulerConfigDecodeErr = "couldn't decode scheduler config" + schedulerConfigLoadErr = "couldn't load scheduler config" + schedulerConfigTypeCastErr = "couldn't assert type as
KubeSchedulerConfiguration" + schedulerConfigInvalidErr = "invalid KubeSchedulerConfiguration" +) + // CreateNodeNameToInfoMap obtains a list of pods and pivots that list into a map where the keys are node names // and the values are the aggregated information for that node. Pods waiting lower priority pods preemption // (pod.Status.NominatedNodeName is set) are also added to list of pods for a node. @@ -106,3 +117,33 @@ func ResourceToResourceList(r *schedulerframework.Resource) apiv1.ResourceList { } return result } + +// ConfigFromPath loads scheduler config from a path. +// TODO(vadasambar): replace code to parse scheduler config with upstream function +// once https://github.com/kubernetes/kubernetes/pull/119057 is merged +func ConfigFromPath(path string) (*scheduler_config.KubeSchedulerConfiguration, error) { + data, err := os.ReadFile(path) + if err != nil { + return nil, fmt.Errorf("%s: %v", schedulerConfigLoadErr, err) + } + + obj, gvk, err := scheduler_scheme.Codecs.UniversalDecoder().Decode(data, nil, nil) + if err != nil { + return nil, fmt.Errorf("%s: %v", schedulerConfigDecodeErr, err) + } + + cfgObj, ok := obj.(*scheduler_config.KubeSchedulerConfiguration) + if !ok { + return nil, fmt.Errorf("%s, gvk: %s", schedulerConfigTypeCastErr, gvk) + } + + // this needs to be set explicitly because config's api version is empty after decoding + // check kubernetes/cmd/kube-scheduler/app/options/configfile.go for more info + cfgObj.TypeMeta.APIVersion = gvk.GroupVersion().String() + + if err := scheduler_validation.ValidateKubeSchedulerConfiguration(cfgObj); err != nil { + return nil, fmt.Errorf("%s: %v", schedulerConfigInvalidErr, err) + } + + return cfgObj, nil +} diff --git a/cluster-autoscaler/utils/scheduler/scheduler_test.go b/cluster-autoscaler/utils/scheduler/scheduler_test.go index 54989cb6fe16..59f1aa52d92a 100644 --- a/cluster-autoscaler/utils/scheduler/scheduler_test.go +++ b/cluster-autoscaler/utils/scheduler/scheduler_test.go @@ -18,11 +18,15 @@ 
package scheduler import ( "fmt" + "os" + "path/filepath" "reflect" "testing" "k8s.io/apimachinery/pkg/api/resource" + testconfig "k8s.io/autoscaler/cluster-autoscaler/config/test" . "k8s.io/autoscaler/cluster-autoscaler/utils/test" + "k8s.io/kubernetes/pkg/scheduler/apis/config" schedulerframework "k8s.io/kubernetes/pkg/scheduler/framework" apiv1 "k8s.io/api/core/v1" @@ -102,3 +106,86 @@ func TestResourceList(t *testing.T) { }) } } + +func TestConfigFromPath(t *testing.T) { + // temp dir + tmpDir, err := os.MkdirTemp("", "scheduler-configs") + if err != nil { + t.Fatal(err) + } + defer os.RemoveAll(tmpDir) + + // Note that even if we are passing minimal config like below + // `ConfigFromPath` will set the rest of the default fields + // on its own (including default profile and default plugins) + correctConfigFile := filepath.Join(tmpDir, "correct_config.yaml") + if err := os.WriteFile(correctConfigFile, + []byte(testconfig.SchedulerConfigMinimalCorrect), + os.FileMode(0600)); err != nil { + t.Fatal(err) + } + + decodeErrConfigFile := filepath.Join(tmpDir, "decode_err_no_version_config.yaml") + if err := os.WriteFile(decodeErrConfigFile, + []byte(testconfig.SchedulerConfigDecodeErr), + os.FileMode(0600)); err != nil { + t.Fatal(err) + } + + validationErrConfigFile := filepath.Join(tmpDir, "invalid_percent_node_score_config.yaml") + if err := os.WriteFile(validationErrConfigFile, + []byte(testconfig.SchedulerConfigInvalid), + os.FileMode(0600)); err != nil { + t.Fatal(err) + } + + tests := []struct { + name string + path string + expectedErr error + expectedConfig *config.KubeSchedulerConfiguration + }{ + { + name: "Empty scheduler config file path", + path: "", + expectedErr: fmt.Errorf(schedulerConfigLoadErr), + expectedConfig: nil, + }, + { + name: "Correct scheduler config", + path: correctConfigFile, + expectedErr: nil, + expectedConfig: &config.KubeSchedulerConfiguration{}, + }, + { + name: "Scheduler config with decode error", + path: decodeErrConfigFile, + 
expectedErr: fmt.Errorf(schedulerConfigDecodeErr), + expectedConfig: nil, + }, + { + name: "Invalid scheduler config", + path: validationErrConfigFile, + expectedErr: fmt.Errorf(schedulerConfigInvalidErr), + expectedConfig: nil, + }, + } + + for i, test := range tests { + t.Run(fmt.Sprintf("case_%d: %s", i, test.name), func(t *testing.T) { + cfg, err := ConfigFromPath(test.path) + if test.expectedConfig == nil { + assert.Nil(t, cfg) + } else { + assert.NotNil(t, cfg) + } + + if test.expectedErr == nil { + assert.NoError(t, err) + } else { + assert.ErrorContains(t, err, test.expectedErr.Error()) + } + }) + + } +} diff --git a/cluster-autoscaler/utils/taints/taints.go b/cluster-autoscaler/utils/taints/taints.go index 80383db74ab7..1f0a1ac348ae 100644 --- a/cluster-autoscaler/utils/taints/taints.go +++ b/cluster-autoscaler/utils/taints/taints.go @@ -41,8 +41,6 @@ const ( // DeletionCandidateTaint is a taint used to mark unneeded node as preferably unschedulable. DeletionCandidateTaint = "DeletionCandidateOfClusterAutoscaler" - // ReschedulerTaintKey is the name of the taint created by rescheduler. - ReschedulerTaintKey = "CriticalAddonsOnly" // IgnoreTaintPrefix any taint starting with it will be filtered out from autoscaler template node. IgnoreTaintPrefix = "ignore-taint.cluster-autoscaler.kubernetes.io/" @@ -116,16 +114,26 @@ func getKeyShortName(key string) string { // MarkToBeDeleted sets a taint that makes the node unschedulable. func MarkToBeDeleted(node *apiv1.Node, client kube_client.Interface, cordonNode bool) error { - return AddTaint(node, client, ToBeDeletedTaint, apiv1.TaintEffectNoSchedule, cordonNode) + taint := apiv1.Taint{ + Key: ToBeDeletedTaint, + Value: fmt.Sprint(time.Now().Unix()), + Effect: apiv1.TaintEffectNoSchedule, + } + return AddTaint(node, client, taint, cordonNode) } // MarkDeletionCandidate sets a soft taint that makes the node preferably unschedulable. 
func MarkDeletionCandidate(node *apiv1.Node, client kube_client.Interface) error { - return AddTaint(node, client, DeletionCandidateTaint, apiv1.TaintEffectPreferNoSchedule, false) + taint := apiv1.Taint{ + Key: DeletionCandidateTaint, + Value: fmt.Sprint(time.Now().Unix()), + Effect: apiv1.TaintEffectPreferNoSchedule, + } + return AddTaint(node, client, taint, false) } // AddTaint sets the specified taint on the node. -func AddTaint(node *apiv1.Node, client kube_client.Interface, taintKey string, effect apiv1.TaintEffect, cordonNode bool) error { +func AddTaint(node *apiv1.Node, client kube_client.Interface, taint apiv1.Taint, cordonNode bool) error { retryDeadline := time.Now().Add(maxRetryDeadline) freshNode := node.DeepCopy() var err error @@ -135,12 +143,12 @@ func AddTaint(node *apiv1.Node, client kube_client.Interface, taintKey string, e // Get the newest version of the node. freshNode, err = client.CoreV1().Nodes().Get(context.TODO(), node.Name, metav1.GetOptions{}) if err != nil || freshNode == nil { - klog.Warningf("Error while adding %v taint on node %v: %v", getKeyShortName(taintKey), node.Name, err) + klog.Warningf("Error while adding %v taint on node %v: %v", getKeyShortName(taint.Key), node.Name, err) return fmt.Errorf("failed to get node %v: %v", node.Name, err) } } - if !addTaintToSpec(freshNode, taintKey, effect, cordonNode) { + if !addTaintToSpec(freshNode, taint, cordonNode) { if !refresh { // Make sure we have the latest version before skipping update. 
refresh = true @@ -156,26 +164,22 @@ func AddTaint(node *apiv1.Node, client kube_client.Interface, taintKey string, e } if err != nil { - klog.Warningf("Error while adding %v taint on node %v: %v", getKeyShortName(taintKey), node.Name, err) + klog.Warningf("Error while adding %v taint on node %v: %v", getKeyShortName(taint.Key), node.Name, err) return err } - klog.V(1).Infof("Successfully added %v on node %v", getKeyShortName(taintKey), node.Name) + klog.V(1).Infof("Successfully added %v on node %v", getKeyShortName(taint.Key), node.Name) return nil } } -func addTaintToSpec(node *apiv1.Node, taintKey string, effect apiv1.TaintEffect, cordonNode bool) bool { - for _, taint := range node.Spec.Taints { - if taint.Key == taintKey { - klog.V(2).Infof("%v already present on node %v, taint: %v", taintKey, node.Name, taint) +func addTaintToSpec(node *apiv1.Node, taint apiv1.Taint, cordonNode bool) bool { + for _, t := range node.Spec.Taints { + if t.Key == taint.Key { + klog.V(2).Infof("%v already present on node %v, taint: %v", taint.Key, node.Name, t) return false } } - node.Spec.Taints = append(node.Spec.Taints, apiv1.Taint{ - Key: taintKey, - Value: fmt.Sprint(time.Now().Unix()), - Effect: effect, - }) + node.Spec.Taints = append(node.Spec.Taints, taint) if cordonNode { klog.V(1).Infof("Marking node %v to be cordoned by Cluster Autoscaler", node.Name) node.Spec.Unschedulable = true @@ -323,13 +327,7 @@ func CleanAllTaints(nodes []*apiv1.Node, client kube_client.Interface, recorder func SanitizeTaints(taints []apiv1.Taint, taintConfig TaintConfig) []apiv1.Taint { var newTaints []apiv1.Taint for _, taint := range taints { - // Rescheduler can put this taint on a node while evicting non-critical pods. - // New nodes will not have this taint and so we should strip it when creating - // template node.
switch taint.Key { - case ReschedulerTaintKey: - klog.V(4).Info("Removing rescheduler taint when creating template") - continue case ToBeDeletedTaint: klog.V(4).Infof("Removing autoscaler taint when creating template from node") continue diff --git a/cluster-autoscaler/utils/taints/taints_test.go b/cluster-autoscaler/utils/taints/taints_test.go index ea8032e59528..276cada33eef 100644 --- a/cluster-autoscaler/utils/taints/taints_test.go +++ b/cluster-autoscaler/utils/taints/taints_test.go @@ -66,7 +66,12 @@ func TestSoftMarkNodes(t *testing.T) { func TestCheckNodes(t *testing.T) { defer setConflictRetryInterval(setConflictRetryInterval(time.Millisecond)) node := BuildTestNode("node", 1000, 1000) - addTaintToSpec(node, ToBeDeletedTaint, apiv1.TaintEffectNoSchedule, false) + taint := apiv1.Taint{ + Key: ToBeDeletedTaint, + Value: fmt.Sprint(time.Now().Unix()), + Effect: apiv1.TaintEffectNoSchedule, + } + addTaintToSpec(node, taint, false) fakeClient := buildFakeClientWithConflicts(t, node) updatedNode := getNode(t, fakeClient, "node") @@ -77,7 +82,12 @@ func TestCheckNodes(t *testing.T) { func TestSoftCheckNodes(t *testing.T) { defer setConflictRetryInterval(setConflictRetryInterval(time.Millisecond)) node := BuildTestNode("node", 1000, 1000) - addTaintToSpec(node, DeletionCandidateTaint, apiv1.TaintEffectPreferNoSchedule, false) + taint := apiv1.Taint{ + Key: DeletionCandidateTaint, + Value: fmt.Sprint(time.Now().Unix()), + Effect: apiv1.TaintEffectPreferNoSchedule, + } + addTaintToSpec(node, taint, false) fakeClient := buildFakeClientWithConflicts(t, node) updatedNode := getNode(t, fakeClient, "node") @@ -120,7 +130,12 @@ func TestSoftQueryNodes(t *testing.T) { func TestCleanNodes(t *testing.T) { defer setConflictRetryInterval(setConflictRetryInterval(time.Millisecond)) node := BuildTestNode("node", 1000, 1000) - addTaintToSpec(node, ToBeDeletedTaint, apiv1.TaintEffectNoSchedule, false) + taint := apiv1.Taint{ + Key: ToBeDeletedTaint, + Value: 
fmt.Sprint(time.Now().Unix()), + Effect: apiv1.TaintEffectNoSchedule, + } + addTaintToSpec(node, taint, false) fakeClient := buildFakeClientWithConflicts(t, node) updatedNode := getNode(t, fakeClient, "node") @@ -140,7 +155,12 @@ func TestCleanNodes(t *testing.T) { func TestCleanNodesWithCordon(t *testing.T) { defer setConflictRetryInterval(setConflictRetryInterval(time.Millisecond)) node := BuildTestNode("node", 1000, 1000) - addTaintToSpec(node, ToBeDeletedTaint, apiv1.TaintEffectNoSchedule, true) + taint := apiv1.Taint{ + Key: ToBeDeletedTaint, + Value: fmt.Sprint(time.Now().Unix()), + Effect: apiv1.TaintEffectNoSchedule, + } + addTaintToSpec(node, taint, true) fakeClient := buildFakeClientWithConflicts(t, node) updatedNode := getNode(t, fakeClient, "node") @@ -160,7 +180,12 @@ func TestCleanNodesWithCordon(t *testing.T) { func TestCleanNodesWithCordonOnOff(t *testing.T) { defer setConflictRetryInterval(setConflictRetryInterval(time.Millisecond)) node := BuildTestNode("node", 1000, 1000) - addTaintToSpec(node, ToBeDeletedTaint, apiv1.TaintEffectNoSchedule, true) + taint := apiv1.Taint{ + Key: ToBeDeletedTaint, + Value: fmt.Sprint(time.Now().Unix()), + Effect: apiv1.TaintEffectNoSchedule, + } + addTaintToSpec(node, taint, true) fakeClient := buildFakeClientWithConflicts(t, node) updatedNode := getNode(t, fakeClient, "node") @@ -180,7 +205,12 @@ func TestCleanNodesWithCordonOnOff(t *testing.T) { func TestSoftCleanNodes(t *testing.T) { defer setConflictRetryInterval(setConflictRetryInterval(time.Millisecond)) node := BuildTestNode("node", 1000, 1000) - addTaintToSpec(node, DeletionCandidateTaint, apiv1.TaintEffectPreferNoSchedule, false) + taint := apiv1.Taint{ + Key: DeletionCandidateTaint, + Value: fmt.Sprint(time.Now().Unix()), + Effect: apiv1.TaintEffectPreferNoSchedule, + } + addTaintToSpec(node, taint, false) fakeClient := buildFakeClientWithConflicts(t, node) updatedNode := getNode(t, fakeClient, "node") @@ -455,11 +485,6 @@ func TestSanitizeTaints(t 
*testing.T) { Value: "myValue", Effect: apiv1.TaintEffectNoSchedule, }, - { - Key: ReschedulerTaintKey, - Value: "test1", - Effect: apiv1.TaintEffectNoSchedule, - }, { Key: "test-taint", Value: "test2", diff --git a/cluster-autoscaler/utils/test/test_utils.go b/cluster-autoscaler/utils/test/test_utils.go index b42ec11c6d31..925fa056e6ca 100644 --- a/cluster-autoscaler/utils/test/test_utils.go +++ b/cluster-autoscaler/utils/test/test_utils.go @@ -67,6 +67,15 @@ func BuildTestPod(name string, cpu int64, mem int64) *apiv1.Pod { return pod } +// BuildDSTestPod creates a DaemonSet pod with cpu and memory. +func BuildDSTestPod(name string, cpu int64, mem int64) *apiv1.Pod { + + pod := BuildTestPod(name, cpu, mem) + pod.OwnerReferences = GenerateOwnerReferences("ds", "DaemonSet", "apps/v1", "some-uid") + + return pod +} + // BuildTestPodWithEphemeralStorage creates a pod with cpu, memory and ephemeral storage resources. func BuildTestPodWithEphemeralStorage(name string, cpu, mem, ephemeralStorage int64) *apiv1.Pod { startTime := metav1.Unix(0, 0) diff --git a/cluster-autoscaler/utils/utils.go b/cluster-autoscaler/utils/utils.go index 70444651e8e2..19e017a116e9 100644 --- a/cluster-autoscaler/utils/utils.go +++ b/cluster-autoscaler/utils/utils.go @@ -94,6 +94,16 @@ func dropProjectedVolumesAndMounts(podSpec *apiv1.PodSpec) { } podSpec.Containers[i].VolumeMounts = volumeMounts } + + for i := range podSpec.InitContainers { + var volumeMounts []apiv1.VolumeMount + for _, mount := range podSpec.InitContainers[i].VolumeMounts { + if ok := projectedVolumeNames[mount.Name]; !ok { + volumeMounts = append(volumeMounts, mount) + } + } + podSpec.InitContainers[i].VolumeMounts = volumeMounts + } } func dropHostname(podSpec *apiv1.PodSpec) { diff --git a/cluster-autoscaler/utils/utils_test.go b/cluster-autoscaler/utils/utils_test.go index b90fa4f64b9b..c7046e1b39bb 100644 --- a/cluster-autoscaler/utils/utils_test.go +++ b/cluster-autoscaler/utils/utils_test.go @@ -111,6 +111,8 @@ func 
TestSanitizePodSpec(t *testing.T) { }, Containers: []apiv1.Container{ {Image: "foo/bar", Name: "foobar", VolumeMounts: []apiv1.VolumeMount{{Name: "projected1"}}}, + }, + InitContainers: []apiv1.Container{ {Image: "foo/baz", Name: "foobaz", VolumeMounts: []apiv1.VolumeMount{{Name: "projected2"}}}, }, }, @@ -118,6 +120,8 @@ func TestSanitizePodSpec(t *testing.T) { NodeSelector: map[string]string{"foo": "bar"}, Containers: []apiv1.Container{ {Image: "foo/bar", Name: "foobar"}, + }, + InitContainers: []apiv1.Container{ {Image: "foo/baz", Name: "foobaz"}, }, }, diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/version/version.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/version/version.go index 2dde094414cf..bcfbb15cce0d 100644 --- a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/version/version.go +++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/version/version.go @@ -4,4 +4,4 @@ package version // Licensed under the MIT License. See License.txt in the project root for license information. // Number contains the semantic version of this SDK. -const Number = "v67.2.0" +const Number = "v68.0.0" diff --git a/cluster-autoscaler/vendor/github.com/Azure/go-autorest/autorest/adal/token.go b/cluster-autoscaler/vendor/github.com/Azure/go-autorest/autorest/adal/token.go index 1a9c8ab537f0..2a24ab80cf16 100644 --- a/cluster-autoscaler/vendor/github.com/Azure/go-autorest/autorest/adal/token.go +++ b/cluster-autoscaler/vendor/github.com/Azure/go-autorest/autorest/adal/token.go @@ -127,6 +127,9 @@ type TokenRefreshCallback func(Token) error // TokenRefresh is a type representing a custom callback to refresh a token type TokenRefresh func(ctx context.Context, resource string) (*Token, error) +// JWTCallback is the type representing callback that will be called to get the federated OIDC JWT +type JWTCallback func() (string, error) + // Token encapsulates the access token used to authorize Azure requests. 
// https://docs.microsoft.com/en-us/azure/active-directory/develop/v1-oauth2-client-creds-grant-flow#service-to-service-access-token-response type Token struct { @@ -177,7 +180,7 @@ func (t Token) WillExpireIn(d time.Duration) bool { return !t.Expires().After(time.Now().Add(d)) } -//OAuthToken return the current access token +// OAuthToken return the current access token func (t *Token) OAuthToken() string { return t.AccessToken } @@ -367,14 +370,18 @@ func (secret ServicePrincipalAuthorizationCodeSecret) MarshalJSON() ([]byte, err // ServicePrincipalFederatedSecret implements ServicePrincipalSecret for Federated JWTs. type ServicePrincipalFederatedSecret struct { - jwt string + jwtCallback JWTCallback } // SetAuthenticationValues is a method of the interface ServicePrincipalSecret. // It will populate the form submitted during OAuth Token Acquisition using a JWT signed by an OIDC issuer. -func (secret *ServicePrincipalFederatedSecret) SetAuthenticationValues(spt *ServicePrincipalToken, v *url.Values) error { +func (secret *ServicePrincipalFederatedSecret) SetAuthenticationValues(_ *ServicePrincipalToken, v *url.Values) error { + jwt, err := secret.jwtCallback() + if err != nil { + return err + } - v.Set("client_assertion", secret.jwt) + v.Set("client_assertion", jwt) v.Set("client_assertion_type", "urn:ietf:params:oauth:client-assertion-type:jwt-bearer") return nil } @@ -687,6 +694,8 @@ func NewServicePrincipalTokenFromAuthorizationCode(oauthConfig OAuthConfig, clie } // NewServicePrincipalTokenFromFederatedToken creates a ServicePrincipalToken from the supplied federated OIDC JWT. +// +// Deprecated: Use NewServicePrincipalTokenFromFederatedTokenWithCallback to refresh jwt dynamically. 
func NewServicePrincipalTokenFromFederatedToken(oauthConfig OAuthConfig, clientID string, jwt string, resource string, callbacks ...TokenRefreshCallback) (*ServicePrincipalToken, error) { if err := validateOAuthConfig(oauthConfig); err != nil { return nil, err @@ -700,12 +709,37 @@ func NewServicePrincipalTokenFromFederatedToken(oauthConfig OAuthConfig, clientI if jwt == "" { return nil, fmt.Errorf("parameter 'jwt' cannot be empty") } + return NewServicePrincipalTokenFromFederatedTokenCallback( + oauthConfig, + clientID, + func() (string, error) { + return jwt, nil + }, + resource, + callbacks..., + ) +} + +// NewServicePrincipalTokenFromFederatedTokenCallback creates a ServicePrincipalToken from the supplied federated OIDC JWTCallback. +func NewServicePrincipalTokenFromFederatedTokenCallback(oauthConfig OAuthConfig, clientID string, jwtCallback JWTCallback, resource string, callbacks ...TokenRefreshCallback) (*ServicePrincipalToken, error) { + if err := validateOAuthConfig(oauthConfig); err != nil { + return nil, err + } + if err := validateStringParam(clientID, "clientID"); err != nil { + return nil, err + } + if err := validateStringParam(resource, "resource"); err != nil { + return nil, err + } + if jwtCallback == nil { + return nil, fmt.Errorf("parameter 'jwtCallback' cannot be empty") + } return NewServicePrincipalTokenWithSecret( oauthConfig, clientID, resource, &ServicePrincipalFederatedSecret{ - jwt: jwt, + jwtCallback: jwtCallback, }, callbacks..., ) diff --git a/cluster-autoscaler/vendor/github.com/Azure/go-autorest/autorest/autorest.go b/cluster-autoscaler/vendor/github.com/Azure/go-autorest/autorest/autorest.go index aafdf021fd6f..211c98d1ed04 100644 --- a/cluster-autoscaler/vendor/github.com/Azure/go-autorest/autorest/autorest.go +++ b/cluster-autoscaler/vendor/github.com/Azure/go-autorest/autorest/autorest.go @@ -6,33 +6,33 @@ generated Go code. 
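The federated-secret change above swaps a static `jwt` string for a `JWTCallback`, so the JWT can be re-read (e.g. from a projected service-account token file that kubelet rotates) on every token refresh instead of being frozen at construction time. A stdlib-only sketch of the behavioral difference; the `JWTCallback` type name is taken from the hunk, the rest is illustrative:

```go
package main

import "fmt"

// JWTCallback mirrors the new go-autorest type: a function returning a fresh JWT.
type JWTCallback func() (string, error)

// staticJWT reproduces the old behavior: always return the value captured at
// construction (this is also how the deprecated constructor is shimmed above).
func staticJWT(jwt string) JWTCallback {
	return func() (string, error) { return jwt, nil }
}

func main() {
	current := "token-1"
	// Dynamic callback: re-reads `current` on every invocation.
	dynamic := JWTCallback(func() (string, error) { return current, nil })
	frozen := staticJWT(current)

	current = "token-2" // e.g. the projected token file was rotated

	got, _ := dynamic() // sees the rotated token
	old, _ := frozen()  // still the construction-time token
	fmt.Println(got, old)
}
```

This is why the patch implements `NewServicePrincipalTokenFromFederatedToken` in terms of the new callback-based constructor: the old API becomes a closure over the fixed string.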
The package breaks sending and responding to HTTP requests into three phases: Preparing, Sending, and Responding. A typical pattern is: - req, err := Prepare(&http.Request{}, - token.WithAuthorization()) + req, err := Prepare(&http.Request{}, + token.WithAuthorization()) - resp, err := Send(req, - WithLogging(logger), - DoErrorIfStatusCode(http.StatusInternalServerError), - DoCloseIfError(), - DoRetryForAttempts(5, time.Second)) + resp, err := Send(req, + WithLogging(logger), + DoErrorIfStatusCode(http.StatusInternalServerError), + DoCloseIfError(), + DoRetryForAttempts(5, time.Second)) - err = Respond(resp, - ByDiscardingBody(), - ByClosing()) + err = Respond(resp, + ByDiscardingBody(), + ByClosing()) Each phase relies on decorators to modify and / or manage processing. Decorators may first modify and then pass the data along, pass the data first and then modify the result, or wrap themselves around passing the data (such as a logger might do). Decorators run in the order provided. For example, the following: - req, err := Prepare(&http.Request{}, - WithBaseURL("https://microsoft.com/"), - WithPath("a"), - WithPath("b"), - WithPath("c")) + req, err := Prepare(&http.Request{}, + WithBaseURL("https://microsoft.com/"), + WithPath("a"), + WithPath("b"), + WithPath("c")) will set the URL to: - https://microsoft.com/a/b/c + https://microsoft.com/a/b/c Preparers and Responders may be shared and re-used (assuming the underlying decorators support sharing and re-use). 
Performant use is obtained by creating one or more Preparers and Responders diff --git a/cluster-autoscaler/vendor/github.com/Azure/go-autorest/autorest/azure/azure.go b/cluster-autoscaler/vendor/github.com/Azure/go-autorest/autorest/azure/azure.go index 1328f1764c23..868345db6868 100644 --- a/cluster-autoscaler/vendor/github.com/Azure/go-autorest/autorest/azure/azure.go +++ b/cluster-autoscaler/vendor/github.com/Azure/go-autorest/autorest/azure/azure.go @@ -214,7 +214,7 @@ func (r Resource) String() string { // See https://docs.microsoft.com/en-us/azure/azure-resource-manager/templates/template-functions-resource?tabs=json#resourceid. func ParseResourceID(resourceID string) (Resource, error) { - const resourceIDPatternText = `(?i)subscriptions/(.+)/resourceGroups/(.+)/providers/(.+?)/(.+?)/(.+)` + const resourceIDPatternText = `(?i)^/subscriptions/(.+)/resourceGroups/(.+)/providers/(.+?)/(.+?)/(.+)$` resourceIDPattern := regexp.MustCompile(resourceIDPatternText) match := resourceIDPattern.FindStringSubmatch(resourceID) diff --git a/cluster-autoscaler/vendor/github.com/Azure/go-autorest/autorest/utility.go b/cluster-autoscaler/vendor/github.com/Azure/go-autorest/autorest/utility.go index 3467b8fa6043..d35b3850ab34 100644 --- a/cluster-autoscaler/vendor/github.com/Azure/go-autorest/autorest/utility.go +++ b/cluster-autoscaler/vendor/github.com/Azure/go-autorest/autorest/utility.go @@ -60,9 +60,9 @@ func NewDecoder(encodedAs EncodedAs, r io.Reader) Decoder { // is especially useful if there is a chance the data will fail to decode. // encodedAs specifies the expected encoding, r provides the io.Reader to the data, and v // is the decoding destination. 
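The `ParseResourceID` fix above anchors the pattern with `^/…$`. Without anchors, `FindStringSubmatch` happily matches a resource ID embedded in a longer (malformed) string and can return wrong captures; anchoring makes the whole input validate. A quick demonstration with the two patterns from the hunk:

```go
package main

import (
	"fmt"
	"regexp"
)

// oldPattern is the unanchored regex removed by the patch; anchoredPattern is
// the replacement, which must match the entire input including the leading "/".
var (
	oldPattern      = regexp.MustCompile(`(?i)subscriptions/(.+)/resourceGroups/(.+)/providers/(.+?)/(.+?)/(.+)`)
	anchoredPattern = regexp.MustCompile(`(?i)^/subscriptions/(.+)/resourceGroups/(.+)/providers/(.+?)/(.+?)/(.+)$`)
)

func main() {
	valid := "/subscriptions/sub1/resourceGroups/rg1/providers/Microsoft.Compute/virtualMachines/vm1"
	junkPrefix := "garbage/subscriptions/sub1/resourceGroups/rg1/providers/Microsoft.Compute/virtualMachines/vm1"

	fmt.Println(anchoredPattern.MatchString(valid))      // true
	fmt.Println(oldPattern.MatchString(junkPrefix))      // true: silently accepts malformed input
	fmt.Println(anchoredPattern.MatchString(junkPrefix)) // false: anchoring rejects it
}
```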
-func CopyAndDecode(encodedAs EncodedAs, r io.Reader, v interface{}) (bytes.Buffer, error) { - b := bytes.Buffer{} - return b, NewDecoder(encodedAs, io.TeeReader(r, &b)).Decode(v) +func CopyAndDecode(encodedAs EncodedAs, r io.Reader, v interface{}) (b bytes.Buffer, err error) { + err = NewDecoder(encodedAs, io.TeeReader(r, &b)).Decode(v) + return } // TeeReadCloser returns a ReadCloser that writes to w what it reads from rc. diff --git a/cluster-autoscaler/vendor/github.com/Microsoft/go-winio/.gitattributes b/cluster-autoscaler/vendor/github.com/Microsoft/go-winio/.gitattributes new file mode 100644 index 000000000000..94f480de94e1 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/Microsoft/go-winio/.gitattributes @@ -0,0 +1 @@ +* text=auto eol=lf \ No newline at end of file diff --git a/cluster-autoscaler/vendor/github.com/Microsoft/go-winio/.gitignore b/cluster-autoscaler/vendor/github.com/Microsoft/go-winio/.gitignore index b883f1fdc6d6..815e20660e5d 100644 --- a/cluster-autoscaler/vendor/github.com/Microsoft/go-winio/.gitignore +++ b/cluster-autoscaler/vendor/github.com/Microsoft/go-winio/.gitignore @@ -1 +1,10 @@ +.vscode/ + *.exe + +# testing +testdata + +# go workspaces +go.work +go.work.sum diff --git a/cluster-autoscaler/vendor/github.com/Microsoft/go-winio/.golangci.yml b/cluster-autoscaler/vendor/github.com/Microsoft/go-winio/.golangci.yml new file mode 100644 index 000000000000..af403bb13a08 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/Microsoft/go-winio/.golangci.yml @@ -0,0 +1,144 @@ +run: + skip-dirs: + - pkg/etw/sample + +linters: + enable: + # style + - containedctx # struct contains a context + - dupl # duplicate code + - errname # errors are named correctly + - goconst # strings that should be constants + - godot # comments end in a period + - misspell + - nolintlint # "//nolint" directives are properly explained + - revive # golint replacement + - stylecheck # golint replacement, less configurable than revive + - unconvert # 
unnecessary conversions + - wastedassign + + # bugs, performance, unused, etc ... + - contextcheck # function uses a non-inherited context + - errorlint # errors not wrapped for 1.13 + - exhaustive # check exhaustiveness of enum switch statements + - gofmt # files are gofmt'ed + - gosec # security + - nestif # deeply nested ifs + - nilerr # returns nil even with non-nil error + - prealloc # slices that can be pre-allocated + - structcheck # unused struct fields + - unparam # unused function params + +issues: + exclude-rules: + # err is very often shadowed in nested scopes + - linters: + - govet + text: '^shadow: declaration of "err" shadows declaration' + + # ignore long lines for skip autogen directives + - linters: + - revive + text: "^line-length-limit: " + source: "^//(go:generate|sys) " + + # allow unjustified ignores of error checks in defer statements + - linters: + - nolintlint + text: "^directive `//nolint:errcheck` should provide explanation" + source: '^\s*defer ' + + # allow unjustified ignores of error lints for io.EOF + - linters: + - nolintlint + text: "^directive `//nolint:errorlint` should provide explanation" + source: '[=|!]= io.EOF' + + +linters-settings: + govet: + enable-all: true + disable: + # struct order is often for Win32 compat + # also, ignore pointer bytes/GC issues for now until performance becomes an issue + - fieldalignment + check-shadowing: true + nolintlint: + allow-leading-space: false + require-explanation: true + require-specific: true + revive: + # revive is more configurable than static check, so likely the preferred alternative to static-check + # (once the perf issue is solved: https://github.com/golangci/golangci-lint/issues/2997) + enable-all-rules: + true + # https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md + rules: + # rules with required arguments + - name: argument-limit + disabled: true + - name: banned-characters + disabled: true + - name: cognitive-complexity + disabled: true + - name: 
cyclomatic + disabled: true + - name: file-header + disabled: true + - name: function-length + disabled: true + - name: function-result-limit + disabled: true + - name: max-public-structs + disabled: true + # generally annoying rules + - name: add-constant # complains about any and all strings and integers + disabled: true + - name: confusing-naming # we frequently use "Foo()" and "foo()" together + disabled: true + - name: flag-parameter # excessive, and a common idiom we use + disabled: true + # general config + - name: line-length-limit + arguments: + - 140 + - name: var-naming + arguments: + - [] + - - CID + - CRI + - CTRD + - DACL + - DLL + - DOS + - ETW + - FSCTL + - GCS + - GMSA + - HCS + - HV + - IO + - LCOW + - LDAP + - LPAC + - LTSC + - MMIO + - NT + - OCI + - PMEM + - PWSH + - RX + - SACl + - SID + - SMB + - TX + - VHD + - VHDX + - VMID + - VPCI + - WCOW + - WIM + stylecheck: + checks: + - "all" + - "-ST1003" # use revive's var naming diff --git a/cluster-autoscaler/vendor/github.com/Microsoft/go-winio/README.md b/cluster-autoscaler/vendor/github.com/Microsoft/go-winio/README.md index 5680010575b7..7474b4f0b653 100644 --- a/cluster-autoscaler/vendor/github.com/Microsoft/go-winio/README.md +++ b/cluster-autoscaler/vendor/github.com/Microsoft/go-winio/README.md @@ -1,4 +1,4 @@ -# go-winio +# go-winio [![Build Status](https://github.com/microsoft/go-winio/actions/workflows/ci.yml/badge.svg)](https://github.com/microsoft/go-winio/actions/workflows/ci.yml) This repository contains utilities for efficiently performing Win32 IO operations in Go. Currently, this is focused on accessing named pipes and other file handles, and @@ -11,12 +11,79 @@ package. Please see the LICENSE file for licensing information. -This project has adopted the [Microsoft Open Source Code of -Conduct](https://opensource.microsoft.com/codeofconduct/). 
For more information -see the [Code of Conduct -FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or contact -[opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional -questions or comments. +## Contributing -Thanks to natefinch for the inspiration for this library. See https://github.com/natefinch/npipe -for another named pipe implementation. +This project welcomes contributions and suggestions. +Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that +you have the right to, and actually do, grant us the rights to use your contribution. +For details, visit [Microsoft CLA](https://cla.microsoft.com). + +When you submit a pull request, a CLA-bot will automatically determine whether you need to +provide a CLA and decorate the PR appropriately (e.g., label, comment). +Simply follow the instructions provided by the bot. +You will only need to do this once across all repos using our CLA. + +Additionally, the pull request pipeline requires the following steps to be performed before +merging. + +### Code Sign-Off + +We require that contributors sign their commits using [`git commit --signoff`][git-commit-s] +to certify they either authored the work themselves or otherwise have permission to use it in this project. + +A range of commits can be signed off using [`git rebase --signoff`][git-rebase-s]. + +Please see [the developer certificate](https://developercertificate.org) for more info, +as well as to make sure that you can attest to the rules listed. +Our CI uses the DCO Github app to ensure that all commits in a given PR are signed-off. + +### Linting + +Code must pass a linting stage, which uses [`golangci-lint`][lint]. 
+The linting settings are stored in [`.golangci.yaml`](./.golangci.yaml), and can be run +automatically with VSCode by adding the following to your workspace or folder settings: + +```json + "go.lintTool": "golangci-lint", + "go.lintOnSave": "package", +``` + +Additional editor [integrations options are also available][lint-ide]. + +Alternatively, `golangci-lint` can be [installed locally][lint-install] and run from the repo root: + +```shell +# use . or specify a path to only lint a package +# to show all lint errors, use flags "--max-issues-per-linter=0 --max-same-issues=0" +> golangci-lint run ./... +``` + +### Go Generate + +The pipeline checks that auto-generated code, via `go generate`, are up to date. + +This can be done for the entire repo: + +```shell +> go generate ./... +``` + +## Code of Conduct + +This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/). +For more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or +contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments. + +## Special Thanks + +Thanks to [natefinch][natefinch] for the inspiration for this library. +See [npipe](https://github.com/natefinch/npipe) for another named pipe implementation. 
+ +[lint]: https://golangci-lint.run/ +[lint-ide]: https://golangci-lint.run/usage/integrations/#editor-integration +[lint-install]: https://golangci-lint.run/usage/install/#local-installation + +[git-commit-s]: https://git-scm.com/docs/git-commit#Documentation/git-commit.txt--s +[git-rebase-s]: https://git-scm.com/docs/git-rebase#Documentation/git-rebase.txt---signoff + +[natefinch]: https://github.com/natefinch diff --git a/cluster-autoscaler/vendor/github.com/Microsoft/go-winio/SECURITY.md b/cluster-autoscaler/vendor/github.com/Microsoft/go-winio/SECURITY.md new file mode 100644 index 000000000000..869fdfe2b246 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/Microsoft/go-winio/SECURITY.md @@ -0,0 +1,41 @@ + + +## Security + +Microsoft takes the security of our software products and services seriously, which includes all source code repositories managed through our GitHub organizations, which include [Microsoft](https://github.com/Microsoft), [Azure](https://github.com/Azure), [DotNet](https://github.com/dotnet), [AspNet](https://github.com/aspnet), [Xamarin](https://github.com/xamarin), and [our GitHub organizations](https://opensource.microsoft.com/). + +If you believe you have found a security vulnerability in any Microsoft-owned repository that meets [Microsoft's definition of a security vulnerability](https://aka.ms/opensource/security/definition), please report it to us as described below. + +## Reporting Security Issues + +**Please do not report security vulnerabilities through public GitHub issues.** + +Instead, please report them to the Microsoft Security Response Center (MSRC) at [https://msrc.microsoft.com/create-report](https://aka.ms/opensource/security/create-report). + +If you prefer to submit without logging in, send email to [secure@microsoft.com](mailto:secure@microsoft.com). 
If possible, encrypt your message with our PGP key; please download it from the [Microsoft Security Response Center PGP Key page](https://aka.ms/opensource/security/pgpkey). + +You should receive a response within 24 hours. If for some reason you do not, please follow up via email to ensure we received your original message. Additional information can be found at [microsoft.com/msrc](https://aka.ms/opensource/security/msrc). + +Please include the requested information listed below (as much as you can provide) to help us better understand the nature and scope of the possible issue: + + * Type of issue (e.g. buffer overflow, SQL injection, cross-site scripting, etc.) + * Full paths of source file(s) related to the manifestation of the issue + * The location of the affected source code (tag/branch/commit or direct URL) + * Any special configuration required to reproduce the issue + * Step-by-step instructions to reproduce the issue + * Proof-of-concept or exploit code (if possible) + * Impact of the issue, including how an attacker might exploit the issue + +This information will help us triage your report more quickly. + +If you are reporting for a bug bounty, more complete reports can contribute to a higher bounty award. Please visit our [Microsoft Bug Bounty Program](https://aka.ms/opensource/security/bounty) page for more details about our active programs. + +## Preferred Languages + +We prefer all communications to be in English. + +## Policy + +Microsoft follows the principle of [Coordinated Vulnerability Disclosure](https://aka.ms/opensource/security/cvd). 
+ + diff --git a/cluster-autoscaler/vendor/github.com/Microsoft/go-winio/backup.go b/cluster-autoscaler/vendor/github.com/Microsoft/go-winio/backup.go index 2be34af43106..09621c88463a 100644 --- a/cluster-autoscaler/vendor/github.com/Microsoft/go-winio/backup.go +++ b/cluster-autoscaler/vendor/github.com/Microsoft/go-winio/backup.go @@ -1,3 +1,4 @@ +//go:build windows // +build windows package winio @@ -7,11 +8,12 @@ import ( "errors" "fmt" "io" - "io/ioutil" "os" "runtime" "syscall" "unicode/utf16" + + "golang.org/x/sys/windows" ) //sys backupRead(h syscall.Handle, b []byte, bytesRead *uint32, abort bool, processSecurity bool, context *uintptr) (err error) = BackupRead @@ -24,7 +26,7 @@ const ( BackupAlternateData BackupLink BackupPropertyData - BackupObjectId + BackupObjectId //revive:disable-line:var-naming ID, not Id BackupReparseData BackupSparseBlock BackupTxfsData @@ -34,14 +36,16 @@ const ( StreamSparseAttributes = uint32(8) ) +//nolint:revive // var-naming: ALL_CAPS const ( - WRITE_DAC = 0x40000 - WRITE_OWNER = 0x80000 - ACCESS_SYSTEM_SECURITY = 0x1000000 + WRITE_DAC = windows.WRITE_DAC + WRITE_OWNER = windows.WRITE_OWNER + ACCESS_SYSTEM_SECURITY = windows.ACCESS_SYSTEM_SECURITY ) // BackupHeader represents a backup stream of a file. type BackupHeader struct { + //revive:disable-next-line:var-naming ID, not Id Id uint32 // The backup stream ID Attributes uint32 // Stream attributes Size int64 // The size of the stream in bytes @@ -49,8 +53,8 @@ type BackupHeader struct { Offset int64 // The offset of the stream in the file (for BackupSparseBlock only). } -type win32StreamId struct { - StreamId uint32 +type win32StreamID struct { + StreamID uint32 Attributes uint32 Size uint64 NameSize uint32 @@ -71,7 +75,7 @@ func NewBackupStreamReader(r io.Reader) *BackupStreamReader { // Next returns the next backup stream and prepares for calls to Read(). It skips the remainder of the current stream if // it was not completely read. 
func (r *BackupStreamReader) Next() (*BackupHeader, error) { - if r.bytesLeft > 0 { + if r.bytesLeft > 0 { //nolint:nestif // todo: flatten this if s, ok := r.r.(io.Seeker); ok { // Make sure Seek on io.SeekCurrent sometimes succeeds // before trying the actual seek. @@ -82,16 +86,16 @@ func (r *BackupStreamReader) Next() (*BackupHeader, error) { r.bytesLeft = 0 } } - if _, err := io.Copy(ioutil.Discard, r); err != nil { + if _, err := io.Copy(io.Discard, r); err != nil { return nil, err } } - var wsi win32StreamId + var wsi win32StreamID if err := binary.Read(r.r, binary.LittleEndian, &wsi); err != nil { return nil, err } hdr := &BackupHeader{ - Id: wsi.StreamId, + Id: wsi.StreamID, Attributes: wsi.Attributes, Size: int64(wsi.Size), } @@ -102,7 +106,7 @@ func (r *BackupStreamReader) Next() (*BackupHeader, error) { } hdr.Name = syscall.UTF16ToString(name) } - if wsi.StreamId == BackupSparseBlock { + if wsi.StreamID == BackupSparseBlock { if err := binary.Read(r.r, binary.LittleEndian, &hdr.Offset); err != nil { return nil, err } @@ -147,8 +151,8 @@ func (w *BackupStreamWriter) WriteHeader(hdr *BackupHeader) error { return fmt.Errorf("missing %d bytes", w.bytesLeft) } name := utf16.Encode([]rune(hdr.Name)) - wsi := win32StreamId{ - StreamId: hdr.Id, + wsi := win32StreamID{ + StreamID: hdr.Id, Attributes: hdr.Attributes, Size: uint64(hdr.Size), NameSize: uint32(len(name) * 2), @@ -203,7 +207,7 @@ func (r *BackupFileReader) Read(b []byte) (int, error) { var bytesRead uint32 err := backupRead(syscall.Handle(r.f.Fd()), b, &bytesRead, false, r.includeSecurity, &r.ctx) if err != nil { - return 0, &os.PathError{"BackupRead", r.f.Name(), err} + return 0, &os.PathError{Op: "BackupRead", Path: r.f.Name(), Err: err} } runtime.KeepAlive(r.f) if bytesRead == 0 { @@ -216,7 +220,7 @@ func (r *BackupFileReader) Read(b []byte) (int, error) { // the underlying file. 
func (r *BackupFileReader) Close() error { if r.ctx != 0 { - backupRead(syscall.Handle(r.f.Fd()), nil, nil, true, false, &r.ctx) + _ = backupRead(syscall.Handle(r.f.Fd()), nil, nil, true, false, &r.ctx) runtime.KeepAlive(r.f) r.ctx = 0 } @@ -242,7 +246,7 @@ func (w *BackupFileWriter) Write(b []byte) (int, error) { var bytesWritten uint32 err := backupWrite(syscall.Handle(w.f.Fd()), b, &bytesWritten, false, w.includeSecurity, &w.ctx) if err != nil { - return 0, &os.PathError{"BackupWrite", w.f.Name(), err} + return 0, &os.PathError{Op: "BackupWrite", Path: w.f.Name(), Err: err} } runtime.KeepAlive(w.f) if int(bytesWritten) != len(b) { @@ -255,7 +259,7 @@ func (w *BackupFileWriter) Write(b []byte) (int, error) { // close the underlying file. func (w *BackupFileWriter) Close() error { if w.ctx != 0 { - backupWrite(syscall.Handle(w.f.Fd()), nil, nil, true, false, &w.ctx) + _ = backupWrite(syscall.Handle(w.f.Fd()), nil, nil, true, false, &w.ctx) runtime.KeepAlive(w.f) w.ctx = 0 } @@ -271,7 +275,13 @@ func OpenForBackup(path string, access uint32, share uint32, createmode uint32) if err != nil { return nil, err } - h, err := syscall.CreateFile(&winPath[0], access, share, nil, createmode, syscall.FILE_FLAG_BACKUP_SEMANTICS|syscall.FILE_FLAG_OPEN_REPARSE_POINT, 0) + h, err := syscall.CreateFile(&winPath[0], + access, + share, + nil, + createmode, + syscall.FILE_FLAG_BACKUP_SEMANTICS|syscall.FILE_FLAG_OPEN_REPARSE_POINT, + 0) if err != nil { err = &os.PathError{Op: "open", Path: path, Err: err} return nil, err diff --git a/cluster-autoscaler/vendor/github.com/Microsoft/go-winio/doc.go b/cluster-autoscaler/vendor/github.com/Microsoft/go-winio/doc.go new file mode 100644 index 000000000000..1f5bfe2d548e --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/Microsoft/go-winio/doc.go @@ -0,0 +1,22 @@ +// This package provides utilities for efficiently performing Win32 IO operations in Go. 
+// Currently, this package provides support for general IO and management of +// - named pipes +// - files +// - [Hyper-V sockets] +// +// This code is similar to Go's [net] package, and uses IO completion ports to avoid +// blocking IO on system threads, allowing Go to reuse the thread to schedule other goroutines. +// +// This limits support to Windows Vista and newer operating systems. +// +// Additionally, this package provides support for: +// - creating and managing GUIDs +// - writing to [ETW] +// - opening and managing VHDs +// - parsing [Windows Image files] +// - auto-generating Win32 API code +// +// [Hyper-V sockets]: https://docs.microsoft.com/en-us/virtualization/hyper-v-on-windows/user-guide/make-integration-service +// [ETW]: https://docs.microsoft.com/en-us/windows-hardware/drivers/devtest/event-tracing-for-windows--etw- +// [Windows Image files]: https://docs.microsoft.com/en-us/windows-hardware/manufacture/desktop/work-with-windows-images +package winio diff --git a/cluster-autoscaler/vendor/github.com/Microsoft/go-winio/ea.go b/cluster-autoscaler/vendor/github.com/Microsoft/go-winio/ea.go index 4051c1b33bfe..e104dbdfdf96 100644 --- a/cluster-autoscaler/vendor/github.com/Microsoft/go-winio/ea.go +++ b/cluster-autoscaler/vendor/github.com/Microsoft/go-winio/ea.go @@ -33,7 +33,7 @@ func parseEa(b []byte) (ea ExtendedAttribute, nb []byte, err error) { err = binary.Read(bytes.NewReader(b), binary.LittleEndian, &info) if err != nil { err = errInvalidEaBuffer - return + return ea, nb, err } nameOffset := fileFullEaInformationSize @@ -43,7 +43,7 @@ func parseEa(b []byte) (ea ExtendedAttribute, nb []byte, err error) { nextOffset := int(info.NextEntryOffset) if valueLen+valueOffset > len(b) || nextOffset < 0 || nextOffset > len(b) { err = errInvalidEaBuffer - return + return ea, nb, err } ea.Name = string(b[nameOffset : nameOffset+nameLen]) @@ -52,7 +52,7 @@ func parseEa(b []byte) (ea ExtendedAttribute, nb []byte, err error) { if info.NextEntryOffset
!= 0 { nb = b[info.NextEntryOffset:] } - return + return ea, nb, err } // DecodeExtendedAttributes decodes a list of EAs from a FILE_FULL_EA_INFORMATION @@ -67,7 +67,7 @@ func DecodeExtendedAttributes(b []byte) (eas []ExtendedAttribute, err error) { eas = append(eas, ea) b = nb } - return + return eas, err } func writeEa(buf *bytes.Buffer, ea *ExtendedAttribute, last bool) error { diff --git a/cluster-autoscaler/vendor/github.com/Microsoft/go-winio/file.go b/cluster-autoscaler/vendor/github.com/Microsoft/go-winio/file.go index 0385e4108129..175a99d3f429 100644 --- a/cluster-autoscaler/vendor/github.com/Microsoft/go-winio/file.go +++ b/cluster-autoscaler/vendor/github.com/Microsoft/go-winio/file.go @@ -1,3 +1,4 @@ +//go:build windows // +build windows package winio @@ -10,6 +11,8 @@ import ( "sync/atomic" "syscall" "time" + + "golang.org/x/sys/windows" ) //sys cancelIoEx(file syscall.Handle, o *syscall.Overlapped) (err error) = CancelIoEx @@ -23,6 +26,8 @@ type atomicBool int32 func (b *atomicBool) isSet() bool { return atomic.LoadInt32((*int32)(b)) != 0 } func (b *atomicBool) setFalse() { atomic.StoreInt32((*int32)(b), 0) } func (b *atomicBool) setTrue() { atomic.StoreInt32((*int32)(b), 1) } + +//revive:disable-next-line:predeclared Keep "new" to maintain consistency with "atomic" pkg func (b *atomicBool) swap(new bool) bool { var newInt int32 if new { @@ -31,11 +36,6 @@ func (b *atomicBool) swap(new bool) bool { return atomic.SwapInt32((*int32)(b), newInt) == 1 } -const ( - cFILE_SKIP_COMPLETION_PORT_ON_SUCCESS = 1 - cFILE_SKIP_SET_EVENT_ON_HANDLE = 2 -) - var ( ErrFileClosed = errors.New("file has already been closed") ErrTimeout = &timeoutError{} @@ -43,28 +43,28 @@ var ( type timeoutError struct{} -func (e *timeoutError) Error() string { return "i/o timeout" } -func (e *timeoutError) Timeout() bool { return true } -func (e *timeoutError) Temporary() bool { return true } +func (*timeoutError) Error() string { return "i/o timeout" } +func (*timeoutError) 
Timeout() bool { return true } +func (*timeoutError) Temporary() bool { return true } type timeoutChan chan struct{} var ioInitOnce sync.Once var ioCompletionPort syscall.Handle -// ioResult contains the result of an asynchronous IO operation +// ioResult contains the result of an asynchronous IO operation. type ioResult struct { bytes uint32 err error } -// ioOperation represents an outstanding asynchronous Win32 IO +// ioOperation represents an outstanding asynchronous Win32 IO. type ioOperation struct { o syscall.Overlapped ch chan ioResult } -func initIo() { +func initIO() { h, err := createIoCompletionPort(syscall.InvalidHandle, 0, 0, 0xffffffff) if err != nil { panic(err) @@ -93,15 +93,15 @@ type deadlineHandler struct { timedout atomicBool } -// makeWin32File makes a new win32File from an existing file handle +// makeWin32File makes a new win32File from an existing file handle. func makeWin32File(h syscall.Handle) (*win32File, error) { f := &win32File{handle: h} - ioInitOnce.Do(initIo) + ioInitOnce.Do(initIO) _, err := createIoCompletionPort(h, ioCompletionPort, 0, 0xffffffff) if err != nil { return nil, err } - err = setFileCompletionNotificationModes(h, cFILE_SKIP_COMPLETION_PORT_ON_SUCCESS|cFILE_SKIP_SET_EVENT_ON_HANDLE) + err = setFileCompletionNotificationModes(h, windows.FILE_SKIP_COMPLETION_PORT_ON_SUCCESS|windows.FILE_SKIP_SET_EVENT_ON_HANDLE) if err != nil { return nil, err } @@ -120,14 +120,14 @@ func MakeOpenFile(h syscall.Handle) (io.ReadWriteCloser, error) { return f, nil } -// closeHandle closes the resources associated with a Win32 handle +// closeHandle closes the resources associated with a Win32 handle. func (f *win32File) closeHandle() { f.wgLock.Lock() // Atomically set that we are closing, releasing the resources only once. 
if !f.closing.swap(true) { f.wgLock.Unlock() // cancel all IO and wait for it to complete - cancelIoEx(f.handle, nil) + _ = cancelIoEx(f.handle, nil) f.wg.Wait() // at this point, no new IO can start syscall.Close(f.handle) @@ -143,9 +143,14 @@ func (f *win32File) Close() error { return nil } -// prepareIo prepares for a new IO operation. +// IsClosed checks if the file has been closed. +func (f *win32File) IsClosed() bool { + return f.closing.isSet() +} + +// prepareIO prepares for a new IO operation. // The caller must call f.wg.Done() when the IO is finished, prior to Close() returning. -func (f *win32File) prepareIo() (*ioOperation, error) { +func (f *win32File) prepareIO() (*ioOperation, error) { f.wgLock.RLock() if f.closing.isSet() { f.wgLock.RUnlock() @@ -158,7 +163,7 @@ func (f *win32File) prepareIo() (*ioOperation, error) { return c, nil } -// ioCompletionProcessor processes completed async IOs forever +// ioCompletionProcessor processes completed async IOs forever. func ioCompletionProcessor(h syscall.Handle) { for { var bytes uint32 @@ -172,15 +177,17 @@ func ioCompletionProcessor(h syscall.Handle) { } } -// asyncIo processes the return value from ReadFile or WriteFile, blocking until +// todo: helsaawy - create an asyncIO version that takes a context + +// asyncIO processes the return value from ReadFile or WriteFile, blocking until // the operation has actually completed. 
-func (f *win32File) asyncIo(c *ioOperation, d *deadlineHandler, bytes uint32, err error) (int, error) { - if err != syscall.ERROR_IO_PENDING { +func (f *win32File) asyncIO(c *ioOperation, d *deadlineHandler, bytes uint32, err error) (int, error) { + if err != syscall.ERROR_IO_PENDING { //nolint:errorlint // err is Errno return int(bytes), err } if f.closing.isSet() { - cancelIoEx(f.handle, &c.o) + _ = cancelIoEx(f.handle, &c.o) } var timeout timeoutChan @@ -194,7 +201,7 @@ func (f *win32File) asyncIo(c *ioOperation, d *deadlineHandler, bytes uint32, er select { case r = <-c.ch: err = r.err - if err == syscall.ERROR_OPERATION_ABORTED { + if err == syscall.ERROR_OPERATION_ABORTED { //nolint:errorlint // err is Errno if f.closing.isSet() { err = ErrFileClosed } @@ -204,10 +211,10 @@ func (f *win32File) asyncIo(c *ioOperation, d *deadlineHandler, bytes uint32, er err = wsaGetOverlappedResult(f.handle, &c.o, &bytes, false, &flags) } case <-timeout: - cancelIoEx(f.handle, &c.o) + _ = cancelIoEx(f.handle, &c.o) r = <-c.ch err = r.err - if err == syscall.ERROR_OPERATION_ABORTED { + if err == syscall.ERROR_OPERATION_ABORTED { //nolint:errorlint // err is Errno err = ErrTimeout } } @@ -215,13 +222,14 @@ func (f *win32File) asyncIo(c *ioOperation, d *deadlineHandler, bytes uint32, er // runtime.KeepAlive is needed, as c is passed via native // code to ioCompletionProcessor, c must remain alive // until the channel read is complete. + // todo: (de)allocate *ioOperation via win32 heap functions, instead of needing to KeepAlive? runtime.KeepAlive(c) return int(r.bytes), err } // Read reads from a file handle. 
func (f *win32File) Read(b []byte) (int, error) { - c, err := f.prepareIo() + c, err := f.prepareIO() if err != nil { return 0, err } @@ -233,13 +241,13 @@ func (f *win32File) Read(b []byte) (int, error) { var bytes uint32 err = syscall.ReadFile(f.handle, b, &bytes, &c.o) - n, err := f.asyncIo(c, &f.readDeadline, bytes, err) + n, err := f.asyncIO(c, &f.readDeadline, bytes, err) runtime.KeepAlive(b) // Handle EOF conditions. if err == nil && n == 0 && len(b) != 0 { return 0, io.EOF - } else if err == syscall.ERROR_BROKEN_PIPE { + } else if err == syscall.ERROR_BROKEN_PIPE { //nolint:errorlint // err is Errno return 0, io.EOF } else { return n, err @@ -248,7 +256,7 @@ func (f *win32File) Read(b []byte) (int, error) { // Write writes to a file handle. func (f *win32File) Write(b []byte) (int, error) { - c, err := f.prepareIo() + c, err := f.prepareIO() if err != nil { return 0, err } @@ -260,7 +268,7 @@ func (f *win32File) Write(b []byte) (int, error) { var bytes uint32 err = syscall.WriteFile(f.handle, b, &bytes, &c.o) - n, err := f.asyncIo(c, &f.writeDeadline, bytes, err) + n, err := f.asyncIO(c, &f.writeDeadline, bytes, err) runtime.KeepAlive(b) return n, err } diff --git a/cluster-autoscaler/vendor/github.com/Microsoft/go-winio/fileinfo.go b/cluster-autoscaler/vendor/github.com/Microsoft/go-winio/fileinfo.go index 3ab6bff69c5d..702950e72a49 100644 --- a/cluster-autoscaler/vendor/github.com/Microsoft/go-winio/fileinfo.go +++ b/cluster-autoscaler/vendor/github.com/Microsoft/go-winio/fileinfo.go @@ -1,3 +1,4 @@ +//go:build windows // +build windows package winio @@ -14,13 +15,18 @@ import ( type FileBasicInfo struct { CreationTime, LastAccessTime, LastWriteTime, ChangeTime windows.Filetime FileAttributes uint32 - pad uint32 // padding + _ uint32 // padding } // GetFileBasicInfo retrieves times and attributes for a file. 
func GetFileBasicInfo(f *os.File) (*FileBasicInfo, error) { bi := &FileBasicInfo{} - if err := windows.GetFileInformationByHandleEx(windows.Handle(f.Fd()), windows.FileBasicInfo, (*byte)(unsafe.Pointer(bi)), uint32(unsafe.Sizeof(*bi))); err != nil { + if err := windows.GetFileInformationByHandleEx( + windows.Handle(f.Fd()), + windows.FileBasicInfo, + (*byte)(unsafe.Pointer(bi)), + uint32(unsafe.Sizeof(*bi)), + ); err != nil { return nil, &os.PathError{Op: "GetFileInformationByHandleEx", Path: f.Name(), Err: err} } runtime.KeepAlive(f) @@ -29,7 +35,12 @@ func GetFileBasicInfo(f *os.File) (*FileBasicInfo, error) { // SetFileBasicInfo sets times and attributes for a file. func SetFileBasicInfo(f *os.File, bi *FileBasicInfo) error { - if err := windows.SetFileInformationByHandle(windows.Handle(f.Fd()), windows.FileBasicInfo, (*byte)(unsafe.Pointer(bi)), uint32(unsafe.Sizeof(*bi))); err != nil { + if err := windows.SetFileInformationByHandle( + windows.Handle(f.Fd()), + windows.FileBasicInfo, + (*byte)(unsafe.Pointer(bi)), + uint32(unsafe.Sizeof(*bi)), + ); err != nil { return &os.PathError{Op: "SetFileInformationByHandle", Path: f.Name(), Err: err} } runtime.KeepAlive(f) @@ -48,7 +59,10 @@ type FileStandardInfo struct { // GetFileStandardInfo retrieves extended information for the file. func GetFileStandardInfo(f *os.File) (*FileStandardInfo, error) { si := &FileStandardInfo{} - if err := windows.GetFileInformationByHandleEx(windows.Handle(f.Fd()), windows.FileStandardInfo, (*byte)(unsafe.Pointer(si)), uint32(unsafe.Sizeof(*si))); err != nil { + if err := windows.GetFileInformationByHandleEx(windows.Handle(f.Fd()), + windows.FileStandardInfo, + (*byte)(unsafe.Pointer(si)), + uint32(unsafe.Sizeof(*si))); err != nil { return nil, &os.PathError{Op: "GetFileInformationByHandleEx", Path: f.Name(), Err: err} } runtime.KeepAlive(f) @@ -65,7 +79,12 @@ type FileIDInfo struct { // GetFileID retrieves the unique (volume, file ID) pair for a file.
func GetFileID(f *os.File) (*FileIDInfo, error) { fileID := &FileIDInfo{} - if err := windows.GetFileInformationByHandleEx(windows.Handle(f.Fd()), windows.FileIdInfo, (*byte)(unsafe.Pointer(fileID)), uint32(unsafe.Sizeof(*fileID))); err != nil { + if err := windows.GetFileInformationByHandleEx( + windows.Handle(f.Fd()), + windows.FileIdInfo, + (*byte)(unsafe.Pointer(fileID)), + uint32(unsafe.Sizeof(*fileID)), + ); err != nil { return nil, &os.PathError{Op: "GetFileInformationByHandleEx", Path: f.Name(), Err: err} } runtime.KeepAlive(f) diff --git a/cluster-autoscaler/vendor/github.com/Microsoft/go-winio/hvsock.go b/cluster-autoscaler/vendor/github.com/Microsoft/go-winio/hvsock.go index b632f8f8bb98..52f1c280f6a7 100644 --- a/cluster-autoscaler/vendor/github.com/Microsoft/go-winio/hvsock.go +++ b/cluster-autoscaler/vendor/github.com/Microsoft/go-winio/hvsock.go @@ -1,8 +1,11 @@ +//go:build windows // +build windows package winio import ( + "context" + "errors" "fmt" "io" "net" @@ -11,16 +14,87 @@ import ( "time" "unsafe" + "golang.org/x/sys/windows" + + "github.com/Microsoft/go-winio/internal/socket" "github.com/Microsoft/go-winio/pkg/guid" ) -//sys bind(s syscall.Handle, name unsafe.Pointer, namelen int32) (err error) [failretval==socketError] = ws2_32.bind +const afHVSock = 34 // AF_HYPERV -const ( - afHvSock = 34 // AF_HYPERV +// Well known Service and VM IDs +//https://docs.microsoft.com/en-us/virtualization/hyper-v-on-windows/user-guide/make-integration-service#vmid-wildcards - socketError = ^uintptr(0) -) +// HvsockGUIDWildcard is the wildcard VmId for accepting connections from all partitions. +func HvsockGUIDWildcard() guid.GUID { // 00000000-0000-0000-0000-000000000000 + return guid.GUID{} +} + +// HvsockGUIDBroadcast is the wildcard VmId for broadcasting sends to all partitions. 
+func HvsockGUIDBroadcast() guid.GUID { //ffffffff-ffff-ffff-ffff-ffffffffffff + return guid.GUID{ + Data1: 0xffffffff, + Data2: 0xffff, + Data3: 0xffff, + Data4: [8]uint8{0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff}, + } +} + +// HvsockGUIDLoopback is the Loopback VmId for accepting connections to the same partition as the connector. +func HvsockGUIDLoopback() guid.GUID { // e0e16197-dd56-4a10-9195-5ee7a155a838 + return guid.GUID{ + Data1: 0xe0e16197, + Data2: 0xdd56, + Data3: 0x4a10, + Data4: [8]uint8{0x91, 0x95, 0x5e, 0xe7, 0xa1, 0x55, 0xa8, 0x38}, + } +} + +// HvsockGUIDSiloHost is the address of a silo's host partition: +// - The silo host of a hosted silo is the utility VM. +// - The silo host of a silo on a physical host is the physical host. +func HvsockGUIDSiloHost() guid.GUID { // 36bd0c5c-7276-4223-88ba-7d03b654c568 + return guid.GUID{ + Data1: 0x36bd0c5c, + Data2: 0x7276, + Data3: 0x4223, + Data4: [8]byte{0x88, 0xba, 0x7d, 0x03, 0xb6, 0x54, 0xc5, 0x68}, + } +} + +// HvsockGUIDChildren is the wildcard VmId for accepting connections from the connector's child partitions. +func HvsockGUIDChildren() guid.GUID { // 90db8b89-0d35-4f79-8ce9-49ea0ac8b7cd + return guid.GUID{ + Data1: 0x90db8b89, + Data2: 0xd35, + Data3: 0x4f79, + Data4: [8]uint8{0x8c, 0xe9, 0x49, 0xea, 0xa, 0xc8, 0xb7, 0xcd}, + } +} + +// HvsockGUIDParent is the wildcard VmId for accepting connections from the connector's parent partition. +// Listening on this VmId accepts connection from: +// - Inside silos: silo host partition. +// - Inside hosted silo: host of the VM. +// - Inside VM: VM host. +// - Physical host: Not supported. +func HvsockGUIDParent() guid.GUID { // a42e7cda-d03f-480c-9cc2-a4de20abb878 + return guid.GUID{ + Data1: 0xa42e7cda, + Data2: 0xd03f, + Data3: 0x480c, + Data4: [8]uint8{0x9c, 0xc2, 0xa4, 0xde, 0x20, 0xab, 0xb8, 0x78}, + } +} + +// hvsockVsockServiceTemplate is the Service GUID used for the VSOCK protocol. 
+func hvsockVsockServiceTemplate() guid.GUID { // 00000000-facb-11e6-bd58-64006a7986d3 + return guid.GUID{ + Data2: 0xfacb, + Data3: 0x11e6, + Data4: [8]uint8{0xbd, 0x58, 0x64, 0x00, 0x6a, 0x79, 0x86, 0xd3}, + } +} // An HvsockAddr is an address for a AF_HYPERV socket. type HvsockAddr struct { @@ -35,8 +109,10 @@ type rawHvsockAddr struct { ServiceID guid.GUID } +var _ socket.RawSockaddr = &rawHvsockAddr{} + // Network returns the address's network name, "hvsock". -func (addr *HvsockAddr) Network() string { +func (*HvsockAddr) Network() string { return "hvsock" } @@ -46,14 +122,14 @@ func (addr *HvsockAddr) String() string { // VsockServiceID returns an hvsock service ID corresponding to the specified AF_VSOCK port. func VsockServiceID(port uint32) guid.GUID { - g, _ := guid.FromString("00000000-facb-11e6-bd58-64006a7986d3") + g := hvsockVsockServiceTemplate() // make a copy g.Data1 = port return g } func (addr *HvsockAddr) raw() rawHvsockAddr { return rawHvsockAddr{ - Family: afHvSock, + Family: afHVSock, VMID: addr.VMID, ServiceID: addr.ServiceID, } @@ -64,20 +140,48 @@ func (addr *HvsockAddr) fromRaw(raw *rawHvsockAddr) { addr.ServiceID = raw.ServiceID } +// Sockaddr returns a pointer to and the size of this struct. +// +// Implements the [socket.RawSockaddr] interface, and allows use in +// [socket.Bind] and [socket.ConnectEx]. +func (r *rawHvsockAddr) Sockaddr() (unsafe.Pointer, int32, error) { + return unsafe.Pointer(r), int32(unsafe.Sizeof(rawHvsockAddr{})), nil +} + +// Sockaddr interface allows use with `sockets.Bind()` and `.ConnectEx()`. 
+func (r *rawHvsockAddr) FromBytes(b []byte) error { + n := int(unsafe.Sizeof(rawHvsockAddr{})) + + if len(b) < n { + return fmt.Errorf("got %d, want %d: %w", len(b), n, socket.ErrBufferSize) + } + + copy(unsafe.Slice((*byte)(unsafe.Pointer(r)), n), b[:n]) + if r.Family != afHVSock { + return fmt.Errorf("got %d, want %d: %w", r.Family, afHVSock, socket.ErrAddrFamily) + } + + return nil +} + // HvsockListener is a socket listener for the AF_HYPERV address family. type HvsockListener struct { sock *win32File addr HvsockAddr } +var _ net.Listener = &HvsockListener{} + // HvsockConn is a connected socket of the AF_HYPERV address family. type HvsockConn struct { sock *win32File local, remote HvsockAddr } -func newHvSocket() (*win32File, error) { - fd, err := syscall.Socket(afHvSock, syscall.SOCK_STREAM, 1) +var _ net.Conn = &HvsockConn{} + +func newHVSocket() (*win32File, error) { + fd, err := syscall.Socket(afHVSock, syscall.SOCK_STREAM, 1) if err != nil { return nil, os.NewSyscallError("socket", err) } @@ -93,12 +197,12 @@ func newHvSocket() (*win32File, error) { // ListenHvsock listens for connections on the specified hvsock address. func ListenHvsock(addr *HvsockAddr) (_ *HvsockListener, err error) { l := &HvsockListener{addr: *addr} - sock, err := newHvSocket() + sock, err := newHVSocket() if err != nil { return nil, l.opErr("listen", err) } sa := addr.raw() - err = bind(sock.handle, unsafe.Pointer(&sa), int32(unsafe.Sizeof(sa))) + err = socket.Bind(windows.Handle(sock.handle), &sa) if err != nil { return nil, l.opErr("listen", os.NewSyscallError("socket", err)) } @@ -120,7 +224,7 @@ func (l *HvsockListener) Addr() net.Addr { // Accept waits for the next connection and returns it. 
func (l *HvsockListener) Accept() (_ net.Conn, err error) { - sock, err := newHvSocket() + sock, err := newHVSocket() if err != nil { return nil, l.opErr("accept", err) } @@ -129,27 +233,42 @@ func (l *HvsockListener) Accept() (_ net.Conn, err error) { sock.Close() } }() - c, err := l.sock.prepareIo() + c, err := l.sock.prepareIO() if err != nil { return nil, l.opErr("accept", err) } defer l.sock.wg.Done() // AcceptEx, per documentation, requires an extra 16 bytes per address. + // + // https://docs.microsoft.com/en-us/windows/win32/api/mswsock/nf-mswsock-acceptex const addrlen = uint32(16 + unsafe.Sizeof(rawHvsockAddr{})) var addrbuf [addrlen * 2]byte var bytes uint32 - err = syscall.AcceptEx(l.sock.handle, sock.handle, &addrbuf[0], 0, addrlen, addrlen, &bytes, &c.o) - _, err = l.sock.asyncIo(c, nil, bytes, err) - if err != nil { + err = syscall.AcceptEx(l.sock.handle, sock.handle, &addrbuf[0], 0 /*rxdatalen*/, addrlen, addrlen, &bytes, &c.o) + if _, err = l.sock.asyncIO(c, nil, bytes, err); err != nil { return nil, l.opErr("accept", os.NewSyscallError("acceptex", err)) } + conn := &HvsockConn{ sock: sock, } + // The local address returned in the AcceptEx buffer is the same as the Listener socket's + // address. However, the service GUID reported by GetSockName is different from the Listener's + // socket, and is sometimes the same as the local address of the socket that dialed the + // address, with the service GUID.Data1 incremented, but other times is different. + // todo: does the local address matter? is the listener's address or the actual address appropriate?
conn.local.fromRaw((*rawHvsockAddr)(unsafe.Pointer(&addrbuf[0]))) conn.remote.fromRaw((*rawHvsockAddr)(unsafe.Pointer(&addrbuf[addrlen]))) + + // initialize the accepted socket and update its properties with those of the listening socket + if err = windows.Setsockopt(windows.Handle(sock.handle), + windows.SOL_SOCKET, windows.SO_UPDATE_ACCEPT_CONTEXT, + (*byte)(unsafe.Pointer(&l.sock.handle)), int32(unsafe.Sizeof(l.sock.handle))); err != nil { + return nil, conn.opErr("accept", os.NewSyscallError("setsockopt", err)) + } + sock = nil return conn, nil } @@ -159,43 +278,171 @@ func (l *HvsockListener) Close() error { return l.sock.Close() } -/* Need to finish ConnectEx handling -func DialHvsock(ctx context.Context, addr *HvsockAddr) (*HvsockConn, error) { - sock, err := newHvSocket() +// HvsockDialer configures and dials a Hyper-V Socket (ie, [HvsockConn]). +type HvsockDialer struct { + // Deadline is the time the Dial operation must connect before erroring. + Deadline time.Time + + // Retries is the number of additional connects to try if the connection times out, is refused, + // or the host is unreachable + Retries uint + + // RetryWait is the time to wait after a connection error to retry + RetryWait time.Duration + + rt *time.Timer // redial wait timer +} + +// Dial the Hyper-V socket at addr. +// +// See [HvsockDialer.Dial] for more information. +func Dial(ctx context.Context, addr *HvsockAddr) (conn *HvsockConn, err error) { + return (&HvsockDialer{}).Dial(ctx, addr) +} + +// Dial attempts to connect to the Hyper-V socket at addr, and returns a connection if successful. +// Will attempt (HvsockDialer).Retries if dialing fails, waiting (HvsockDialer).RetryWait between +// retries. +// +// Dialing can be cancelled either by providing (HvsockDialer).Deadline, or cancelling ctx. 
+func (d *HvsockDialer) Dial(ctx context.Context, addr *HvsockAddr) (conn *HvsockConn, err error) { + op := "dial" + // create the conn early to use opErr() + conn = &HvsockConn{ + remote: *addr, + } + + if !d.Deadline.IsZero() { + var cancel context.CancelFunc + ctx, cancel = context.WithDeadline(ctx, d.Deadline) + defer cancel() + } + + // preemptive timeout/cancellation check + if err = ctx.Err(); err != nil { + return nil, conn.opErr(op, err) + } + + sock, err := newHVSocket() if err != nil { - return nil, err + return nil, conn.opErr(op, err) } defer func() { if sock != nil { sock.Close() } }() - c, err := sock.prepareIo() + + sa := addr.raw() + err = socket.Bind(windows.Handle(sock.handle), &sa) if err != nil { - return nil, err + return nil, conn.opErr(op, os.NewSyscallError("bind", err)) + } + + c, err := sock.prepareIO() + if err != nil { + return nil, conn.opErr(op, err) } defer sock.wg.Done() var bytes uint32 - err = windows.ConnectEx(windows.Handle(sock.handle), sa, nil, 0, &bytes, &c.o) - _, err = sock.asyncIo(ctx, c, nil, bytes, err) + for i := uint(0); i <= d.Retries; i++ { + err = socket.ConnectEx( + windows.Handle(sock.handle), + &sa, + nil, // sendBuf + 0, // sendDataLen + &bytes, + (*windows.Overlapped)(unsafe.Pointer(&c.o))) + _, err = sock.asyncIO(c, nil, bytes, err) + if i < d.Retries && canRedial(err) { + if err = d.redialWait(ctx); err == nil { + continue + } + } + break + } if err != nil { - return nil, err + return nil, conn.opErr(op, os.NewSyscallError("connectex", err)) } - conn := &HvsockConn{ - sock: sock, - remote: *addr, + + // update the connection properties, so shutdown can be used + if err = windows.Setsockopt( + windows.Handle(sock.handle), + windows.SOL_SOCKET, + windows.SO_UPDATE_CONNECT_CONTEXT, + nil, // optvalue + 0, // optlen + ); err != nil { + return nil, conn.opErr(op, os.NewSyscallError("setsockopt", err)) + } + + // get the local name + var sal rawHvsockAddr + err = socket.GetSockName(windows.Handle(sock.handle), 
&sal) + if err != nil { + return nil, conn.opErr(op, os.NewSyscallError("getsockname", err)) + } + conn.local.fromRaw(&sal) + + // one last check for timeout, since asyncIO doesn't check the context + if err = ctx.Err(); err != nil { + return nil, conn.opErr(op, err) } + + conn.sock = sock sock = nil + return conn, nil } -*/ + +// redialWait waits before attempting to redial, resetting the timer as appropriate. +func (d *HvsockDialer) redialWait(ctx context.Context) (err error) { + if d.RetryWait == 0 { + return nil + } + + if d.rt == nil { + d.rt = time.NewTimer(d.RetryWait) + } else { + // should already be stopped and drained + d.rt.Reset(d.RetryWait) + } + + select { + case <-ctx.Done(): + case <-d.rt.C: + return nil + } + + // stop and drain the timer + if !d.rt.Stop() { + <-d.rt.C + } + return ctx.Err() +} + +// assumes error is a plain, unwrapped syscall.Errno provided by direct syscall. +func canRedial(err error) bool { + //nolint:errorlint // guaranteed to be an Errno + switch err { + case windows.WSAECONNREFUSED, windows.WSAENETUNREACH, windows.WSAETIMEDOUT, + windows.ERROR_CONNECTION_REFUSED, windows.ERROR_CONNECTION_UNAVAIL: + return true + default: + return false + } +} func (conn *HvsockConn) opErr(op string, err error) error { + // translate from "file closed" to "socket closed" + if errors.Is(err, ErrFileClosed) { + err = socket.ErrSocketClosed + } return &net.OpError{Op: op, Net: "hvsock", Source: &conn.local, Addr: &conn.remote, Err: err} } func (conn *HvsockConn) Read(b []byte) (int, error) { - c, err := conn.sock.prepareIo() + c, err := conn.sock.prepareIO() if err != nil { return 0, conn.opErr("read", err) } @@ -203,10 +450,11 @@ func (conn *HvsockConn) Read(b []byte) (int, error) { buf := syscall.WSABuf{Buf: &b[0], Len: uint32(len(b))} var flags, bytes uint32 err = syscall.WSARecv(conn.sock.handle, &buf, 1, &bytes, &flags, &c.o, nil) - n, err := conn.sock.asyncIo(c, &conn.sock.readDeadline, bytes, err) + n, err := conn.sock.asyncIO(c, 
&conn.sock.readDeadline, bytes, err) if err != nil { - if _, ok := err.(syscall.Errno); ok { - err = os.NewSyscallError("wsarecv", err) + var eno windows.Errno + if errors.As(err, &eno) { + err = os.NewSyscallError("wsarecv", eno) } return 0, conn.opErr("read", err) } else if n == 0 { @@ -229,7 +477,7 @@ func (conn *HvsockConn) Write(b []byte) (int, error) { } func (conn *HvsockConn) write(b []byte) (int, error) { - c, err := conn.sock.prepareIo() + c, err := conn.sock.prepareIO() if err != nil { return 0, conn.opErr("write", err) } @@ -237,10 +485,11 @@ func (conn *HvsockConn) write(b []byte) (int, error) { buf := syscall.WSABuf{Buf: &b[0], Len: uint32(len(b))} var bytes uint32 err = syscall.WSASend(conn.sock.handle, &buf, 1, &bytes, 0, &c.o, nil) - n, err := conn.sock.asyncIo(c, &conn.sock.writeDeadline, bytes, err) + n, err := conn.sock.asyncIO(c, &conn.sock.writeDeadline, bytes, err) if err != nil { - if _, ok := err.(syscall.Errno); ok { - err = os.NewSyscallError("wsasend", err) + var eno windows.Errno + if errors.As(err, &eno) { + err = os.NewSyscallError("wsasend", eno) } return 0, conn.opErr("write", err) } @@ -252,29 +501,43 @@ func (conn *HvsockConn) Close() error { return conn.sock.Close() } +func (conn *HvsockConn) IsClosed() bool { + return conn.sock.IsClosed() +} + +// shutdown disables sending or receiving on a socket. func (conn *HvsockConn) shutdown(how int) error { - err := syscall.Shutdown(conn.sock.handle, syscall.SHUT_RD) + if conn.IsClosed() { + return socket.ErrSocketClosed + } + + err := syscall.Shutdown(conn.sock.handle, how) if err != nil { + // If the connection was closed, shutdowns fail with "not connected" + if errors.Is(err, windows.WSAENOTCONN) || + errors.Is(err, windows.WSAESHUTDOWN) { + err = socket.ErrSocketClosed + } return os.NewSyscallError("shutdown", err) } return nil } -// CloseRead shuts down the read end of the socket. +// CloseRead shuts down the read end of the socket, preventing future read operations. 
func (conn *HvsockConn) CloseRead() error { err := conn.shutdown(syscall.SHUT_RD) if err != nil { - return conn.opErr("close", err) + return conn.opErr("closeread", err) } return nil } -// CloseWrite shuts down the write end of the socket, notifying the other endpoint that -// no more data will be written. +// CloseWrite shuts down the write end of the socket, preventing future write operations and +// notifying the other endpoint that no more data will be written. func (conn *HvsockConn) CloseWrite() error { err := conn.shutdown(syscall.SHUT_WR) if err != nil { - return conn.opErr("close", err) + return conn.opErr("closewrite", err) } return nil } @@ -291,8 +554,13 @@ func (conn *HvsockConn) RemoteAddr() net.Addr { // SetDeadline implements the net.Conn SetDeadline method. func (conn *HvsockConn) SetDeadline(t time.Time) error { - conn.SetReadDeadline(t) - conn.SetWriteDeadline(t) + // todo: implement `SetDeadline` for `win32File` + if err := conn.SetReadDeadline(t); err != nil { + return fmt.Errorf("set read deadline: %w", err) + } + if err := conn.SetWriteDeadline(t); err != nil { + return fmt.Errorf("set write deadline: %w", err) + } return nil } diff --git a/cluster-autoscaler/vendor/github.com/Microsoft/go-winio/internal/socket/rawaddr.go b/cluster-autoscaler/vendor/github.com/Microsoft/go-winio/internal/socket/rawaddr.go new file mode 100644 index 000000000000..7e82f9afa952 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/Microsoft/go-winio/internal/socket/rawaddr.go @@ -0,0 +1,20 @@ +package socket + +import ( + "unsafe" +) + +// RawSockaddr allows structs to be used with [Bind] and [ConnectEx]. The +// struct must meet the Win32 sockaddr requirements specified here: +// https://docs.microsoft.com/en-us/windows/win32/winsock/sockaddr-2 +// +// Specifically, the struct size must be at least as large as an int16 (unsigned short) +// for the address family.
+type RawSockaddr interface { + // Sockaddr returns a pointer to the RawSockaddr and its struct size, allowing + // for the RawSockaddr's data to be overwritten by syscalls (if necessary). + // + // It is the caller's responsibility to validate that the values are valid; invalid + // pointers or size can cause a panic. + Sockaddr() (unsafe.Pointer, int32, error) +} diff --git a/cluster-autoscaler/vendor/github.com/Microsoft/go-winio/internal/socket/socket.go b/cluster-autoscaler/vendor/github.com/Microsoft/go-winio/internal/socket/socket.go new file mode 100644 index 000000000000..39e8c05f8f3f --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/Microsoft/go-winio/internal/socket/socket.go @@ -0,0 +1,179 @@ +//go:build windows + +package socket + +import ( + "errors" + "fmt" + "net" + "sync" + "syscall" + "unsafe" + + "github.com/Microsoft/go-winio/pkg/guid" + "golang.org/x/sys/windows" +) + +//go:generate go run github.com/Microsoft/go-winio/tools/mkwinsyscall -output zsyscall_windows.go socket.go + +//sys getsockname(s windows.Handle, name unsafe.Pointer, namelen *int32) (err error) [failretval==socketError] = ws2_32.getsockname +//sys getpeername(s windows.Handle, name unsafe.Pointer, namelen *int32) (err error) [failretval==socketError] = ws2_32.getpeername +//sys bind(s windows.Handle, name unsafe.Pointer, namelen int32) (err error) [failretval==socketError] = ws2_32.bind + +const socketError = uintptr(^uint32(0)) + +var ( + // todo(helsaawy): create custom error types to store the desired vs actual size and addr family? + + ErrBufferSize = errors.New("buffer size") + ErrAddrFamily = errors.New("address family") + ErrInvalidPointer = errors.New("invalid pointer") + ErrSocketClosed = fmt.Errorf("socket closed: %w", net.ErrClosed) +) + +// todo(helsaawy): replace these with generics, ie: GetSockName[S RawSockaddr](s windows.Handle) (S, error) + +// GetSockName writes the local address of socket s to the [RawSockaddr] rsa.
+// If rsa is not large enough, [windows.WSAEFAULT] is returned. +func GetSockName(s windows.Handle, rsa RawSockaddr) error { + ptr, l, err := rsa.Sockaddr() + if err != nil { + return fmt.Errorf("could not retrieve socket pointer and size: %w", err) + } + + // although getsockname returns WSAEFAULT if the buffer is too small, it does not set + // &l to the correct size, so--apart from doubling the buffer repeatedly--there is no remedy + return getsockname(s, ptr, &l) +} + +// GetPeerName returns the remote address the socket is connected to. +// +// See [GetSockName] for more information. +func GetPeerName(s windows.Handle, rsa RawSockaddr) error { + ptr, l, err := rsa.Sockaddr() + if err != nil { + return fmt.Errorf("could not retrieve socket pointer and size: %w", err) + } + + return getpeername(s, ptr, &l) +} + +func Bind(s windows.Handle, rsa RawSockaddr) (err error) { + ptr, l, err := rsa.Sockaddr() + if err != nil { + return fmt.Errorf("could not retrieve socket pointer and size: %w", err) + } + + return bind(s, ptr, l) +} + +// "golang.org/x/sys/windows".ConnectEx and .Bind only accept internal implementations of +// their sockaddr interface, so they cannot be used with HvsockAddr +// Replicate functionality here from +// https://cs.opensource.google/go/x/sys/+/master:windows/syscall_windows.go + +// The function pointers to `AcceptEx`, `ConnectEx` and `GetAcceptExSockaddrs` must be loaded at +// runtime via a WSAIoctl call: +// https://docs.microsoft.com/en-us/windows/win32/api/Mswsock/nc-mswsock-lpfn_connectex#remarks + +type runtimeFunc struct { + id guid.GUID + once sync.Once + addr uintptr + err error +} + +func (f *runtimeFunc) Load() error { + f.once.Do(func() { + var s windows.Handle + s, f.err = windows.Socket(windows.AF_INET, windows.SOCK_STREAM, windows.IPPROTO_TCP) + if f.err != nil { + return + } + defer windows.CloseHandle(s) //nolint:errcheck + + var n uint32 + f.err = windows.WSAIoctl(s, + windows.SIO_GET_EXTENSION_FUNCTION_POINTER,
+ (*byte)(unsafe.Pointer(&f.id)), + uint32(unsafe.Sizeof(f.id)), + (*byte)(unsafe.Pointer(&f.addr)), + uint32(unsafe.Sizeof(f.addr)), + &n, + nil, //overlapped + 0, //completionRoutine + ) + }) + return f.err +} + +var ( + // todo: add `AcceptEx` and `GetAcceptExSockaddrs` + WSAID_CONNECTEX = guid.GUID{ //revive:disable-line:var-naming ALL_CAPS + Data1: 0x25a207b9, + Data2: 0xddf3, + Data3: 0x4660, + Data4: [8]byte{0x8e, 0xe9, 0x76, 0xe5, 0x8c, 0x74, 0x06, 0x3e}, + } + + connectExFunc = runtimeFunc{id: WSAID_CONNECTEX} +) + +func ConnectEx( + fd windows.Handle, + rsa RawSockaddr, + sendBuf *byte, + sendDataLen uint32, + bytesSent *uint32, + overlapped *windows.Overlapped, +) error { + if err := connectExFunc.Load(); err != nil { + return fmt.Errorf("failed to load ConnectEx function pointer: %w", err) + } + ptr, n, err := rsa.Sockaddr() + if err != nil { + return err + } + return connectEx(fd, ptr, n, sendBuf, sendDataLen, bytesSent, overlapped) +} + +// BOOL LpfnConnectex( +// [in] SOCKET s, +// [in] const sockaddr *name, +// [in] int namelen, +// [in, optional] PVOID lpSendBuffer, +// [in] DWORD dwSendDataLength, +// [out] LPDWORD lpdwBytesSent, +// [in] LPOVERLAPPED lpOverlapped +// ) + +func connectEx( + s windows.Handle, + name unsafe.Pointer, + namelen int32, + sendBuf *byte, + sendDataLen uint32, + bytesSent *uint32, + overlapped *windows.Overlapped, +) (err error) { + // todo: after upgrading to 1.18, switch from syscall.Syscall9 to syscall.SyscallN + r1, _, e1 := syscall.Syscall9(connectExFunc.addr, + 7, + uintptr(s), + uintptr(name), + uintptr(namelen), + uintptr(unsafe.Pointer(sendBuf)), + uintptr(sendDataLen), + uintptr(unsafe.Pointer(bytesSent)), + uintptr(unsafe.Pointer(overlapped)), + 0, + 0) + if r1 == 0 { + if e1 != 0 { + err = error(e1) + } else { + err = syscall.EINVAL + } + } + return err +} diff --git a/cluster-autoscaler/vendor/github.com/Microsoft/go-winio/internal/socket/zsyscall_windows.go 
b/cluster-autoscaler/vendor/github.com/Microsoft/go-winio/internal/socket/zsyscall_windows.go new file mode 100644 index 000000000000..6d2e1a9e4438 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/Microsoft/go-winio/internal/socket/zsyscall_windows.go @@ -0,0 +1,72 @@ +//go:build windows + +// Code generated by 'go generate' using "github.com/Microsoft/go-winio/tools/mkwinsyscall"; DO NOT EDIT. + +package socket + +import ( + "syscall" + "unsafe" + + "golang.org/x/sys/windows" +) + +var _ unsafe.Pointer + +// Do the interface allocations only once for common +// Errno values. +const ( + errnoERROR_IO_PENDING = 997 +) + +var ( + errERROR_IO_PENDING error = syscall.Errno(errnoERROR_IO_PENDING) + errERROR_EINVAL error = syscall.EINVAL +) + +// errnoErr returns common boxed Errno values, to prevent +// allocations at runtime. +func errnoErr(e syscall.Errno) error { + switch e { + case 0: + return errERROR_EINVAL + case errnoERROR_IO_PENDING: + return errERROR_IO_PENDING + } + // TODO: add more here, after collecting data on the common + // error values see on Windows. (perhaps when running + // all.bat?) 
+ return e +} + +var ( + modws2_32 = windows.NewLazySystemDLL("ws2_32.dll") + + procbind = modws2_32.NewProc("bind") + procgetpeername = modws2_32.NewProc("getpeername") + procgetsockname = modws2_32.NewProc("getsockname") +) + +func bind(s windows.Handle, name unsafe.Pointer, namelen int32) (err error) { + r1, _, e1 := syscall.Syscall(procbind.Addr(), 3, uintptr(s), uintptr(name), uintptr(namelen)) + if r1 == socketError { + err = errnoErr(e1) + } + return +} + +func getpeername(s windows.Handle, name unsafe.Pointer, namelen *int32) (err error) { + r1, _, e1 := syscall.Syscall(procgetpeername.Addr(), 3, uintptr(s), uintptr(name), uintptr(unsafe.Pointer(namelen))) + if r1 == socketError { + err = errnoErr(e1) + } + return +} + +func getsockname(s windows.Handle, name unsafe.Pointer, namelen *int32) (err error) { + r1, _, e1 := syscall.Syscall(procgetsockname.Addr(), 3, uintptr(s), uintptr(name), uintptr(unsafe.Pointer(namelen))) + if r1 == socketError { + err = errnoErr(e1) + } + return +} diff --git a/cluster-autoscaler/vendor/github.com/Microsoft/go-winio/pipe.go b/cluster-autoscaler/vendor/github.com/Microsoft/go-winio/pipe.go index 96700a73de25..ca6e38fc0006 100644 --- a/cluster-autoscaler/vendor/github.com/Microsoft/go-winio/pipe.go +++ b/cluster-autoscaler/vendor/github.com/Microsoft/go-winio/pipe.go @@ -1,3 +1,4 @@ +//go:build windows // +build windows package winio @@ -13,6 +14,8 @@ import ( "syscall" "time" "unsafe" + + "golang.org/x/sys/windows" ) //sys connectNamedPipe(pipe syscall.Handle, o *syscall.Overlapped) (err error) = ConnectNamedPipe @@ -21,10 +24,10 @@ import ( //sys getNamedPipeInfo(pipe syscall.Handle, flags *uint32, outSize *uint32, inSize *uint32, maxInstances *uint32) (err error) = GetNamedPipeInfo //sys getNamedPipeHandleState(pipe syscall.Handle, state *uint32, curInstances *uint32, maxCollectionCount *uint32, collectDataTimeout *uint32, userName *uint16, maxUserNameSize uint32) (err error) = GetNamedPipeHandleStateW //sys 
localAlloc(uFlags uint32, length uint32) (ptr uintptr) = LocalAlloc -//sys ntCreateNamedPipeFile(pipe *syscall.Handle, access uint32, oa *objectAttributes, iosb *ioStatusBlock, share uint32, disposition uint32, options uint32, typ uint32, readMode uint32, completionMode uint32, maxInstances uint32, inboundQuota uint32, outputQuota uint32, timeout *int64) (status ntstatus) = ntdll.NtCreateNamedPipeFile -//sys rtlNtStatusToDosError(status ntstatus) (winerr error) = ntdll.RtlNtStatusToDosErrorNoTeb -//sys rtlDosPathNameToNtPathName(name *uint16, ntName *unicodeString, filePart uintptr, reserved uintptr) (status ntstatus) = ntdll.RtlDosPathNameToNtPathName_U -//sys rtlDefaultNpAcl(dacl *uintptr) (status ntstatus) = ntdll.RtlDefaultNpAcl +//sys ntCreateNamedPipeFile(pipe *syscall.Handle, access uint32, oa *objectAttributes, iosb *ioStatusBlock, share uint32, disposition uint32, options uint32, typ uint32, readMode uint32, completionMode uint32, maxInstances uint32, inboundQuota uint32, outputQuota uint32, timeout *int64) (status ntStatus) = ntdll.NtCreateNamedPipeFile +//sys rtlNtStatusToDosError(status ntStatus) (winerr error) = ntdll.RtlNtStatusToDosErrorNoTeb +//sys rtlDosPathNameToNtPathName(name *uint16, ntName *unicodeString, filePart uintptr, reserved uintptr) (status ntStatus) = ntdll.RtlDosPathNameToNtPathName_U +//sys rtlDefaultNpAcl(dacl *uintptr) (status ntStatus) = ntdll.RtlDefaultNpAcl type ioStatusBlock struct { Status, Information uintptr @@ -51,45 +54,22 @@ type securityDescriptor struct { Control uint16 Owner uintptr Group uintptr - Sacl uintptr - Dacl uintptr + Sacl uintptr //revive:disable-line:var-naming SACL, not Sacl + Dacl uintptr //revive:disable-line:var-naming DACL, not Dacl } -type ntstatus int32 +type ntStatus int32 -func (status ntstatus) Err() error { +func (status ntStatus) Err() error { if status >= 0 { return nil } return rtlNtStatusToDosError(status) } -const ( - cERROR_PIPE_BUSY = syscall.Errno(231) - cERROR_NO_DATA = 
syscall.Errno(232) - cERROR_PIPE_CONNECTED = syscall.Errno(535) - cERROR_SEM_TIMEOUT = syscall.Errno(121) - - cSECURITY_SQOS_PRESENT = 0x100000 - cSECURITY_ANONYMOUS = 0 - - cPIPE_TYPE_MESSAGE = 4 - - cPIPE_READMODE_MESSAGE = 2 - - cFILE_OPEN = 1 - cFILE_CREATE = 2 - - cFILE_PIPE_MESSAGE_TYPE = 1 - cFILE_PIPE_REJECT_REMOTE_CLIENTS = 2 - - cSE_DACL_PRESENT = 4 -) - var ( // ErrPipeListenerClosed is returned for pipe operations on listeners that have been closed. - // This error should match net.errClosing since docker takes a dependency on its text. - ErrPipeListenerClosed = errors.New("use of closed network connection") + ErrPipeListenerClosed = net.ErrClosed errPipeWriteClosed = errors.New("pipe has been closed for write") ) @@ -116,9 +96,10 @@ func (f *win32Pipe) RemoteAddr() net.Addr { } func (f *win32Pipe) SetDeadline(t time.Time) error { - f.SetReadDeadline(t) - f.SetWriteDeadline(t) - return nil + if err := f.SetReadDeadline(t); err != nil { + return err + } + return f.SetWriteDeadline(t) } // CloseWrite closes the write side of a message pipe in byte mode. @@ -157,14 +138,14 @@ func (f *win32MessageBytePipe) Read(b []byte) (int, error) { return 0, io.EOF } n, err := f.win32File.Read(b) - if err == io.EOF { + if err == io.EOF { //nolint:errorlint // If this was the result of a zero-byte read, then // it is possible that the read was due to a zero-size // message. Since we are simulating CloseWrite with a // zero-byte message, ensure that all future Read() calls // also return EOF. f.readEOF = true - } else if err == syscall.ERROR_MORE_DATA { + } else if err == syscall.ERROR_MORE_DATA { //nolint:errorlint // err is Errno // ERROR_MORE_DATA indicates that the pipe's read mode is message mode // and the message still has more bytes. Treat this as a success, since // this package presents all named pipes as byte streams. 
@@ -173,7 +154,7 @@ func (f *win32MessageBytePipe) Read(b []byte) (int, error) { return n, err } -func (s pipeAddress) Network() string { +func (pipeAddress) Network() string { return "pipe" } @@ -184,16 +165,21 @@ func (s pipeAddress) String() string { // tryDialPipe attempts to dial the pipe at `path` until `ctx` cancellation or timeout. func tryDialPipe(ctx context.Context, path *string, access uint32) (syscall.Handle, error) { for { - select { case <-ctx.Done(): return syscall.Handle(0), ctx.Err() default: - h, err := createFile(*path, access, 0, nil, syscall.OPEN_EXISTING, syscall.FILE_FLAG_OVERLAPPED|cSECURITY_SQOS_PRESENT|cSECURITY_ANONYMOUS, 0) + h, err := createFile(*path, + access, + 0, + nil, + syscall.OPEN_EXISTING, + windows.FILE_FLAG_OVERLAPPED|windows.SECURITY_SQOS_PRESENT|windows.SECURITY_ANONYMOUS, + 0) if err == nil { return h, nil } - if err != cERROR_PIPE_BUSY { + if err != windows.ERROR_PIPE_BUSY { //nolint:errorlint // err is Errno return h, &os.PathError{Err: err, Op: "open", Path: *path} } // Wait 10 msec and try again. This is a rather simplistic @@ -213,9 +199,10 @@ func DialPipe(path string, timeout *time.Duration) (net.Conn, error) { } else { absTimeout = time.Now().Add(2 * time.Second) } - ctx, _ := context.WithDeadline(context.Background(), absTimeout) + ctx, cancel := context.WithDeadline(context.Background(), absTimeout) + defer cancel() conn, err := DialPipeContext(ctx, path) - if err == context.DeadlineExceeded { + if errors.Is(err, context.DeadlineExceeded) { return nil, ErrTimeout } return conn, err @@ -251,7 +238,7 @@ func DialPipeAccess(ctx context.Context, path string, access uint32) (net.Conn, // If the pipe is in message mode, return a message byte pipe, which // supports CloseWrite(). 
- if flags&cPIPE_TYPE_MESSAGE != 0 { + if flags&windows.PIPE_TYPE_MESSAGE != 0 { return &win32MessageBytePipe{ win32Pipe: win32Pipe{win32File: f, path: path}, }, nil @@ -283,7 +270,11 @@ func makeServerPipeHandle(path string, sd []byte, c *PipeConfig, first bool) (sy oa.Length = unsafe.Sizeof(oa) var ntPath unicodeString - if err := rtlDosPathNameToNtPathName(&path16[0], &ntPath, 0, 0).Err(); err != nil { + if err := rtlDosPathNameToNtPathName(&path16[0], + &ntPath, + 0, + 0, + ).Err(); err != nil { return 0, &os.PathError{Op: "open", Path: path, Err: err} } defer localFree(ntPath.Buffer) @@ -292,8 +283,8 @@ func makeServerPipeHandle(path string, sd []byte, c *PipeConfig, first bool) (sy // The security descriptor is only needed for the first pipe. if first { if sd != nil { - len := uint32(len(sd)) - sdb := localAlloc(0, len) + l := uint32(len(sd)) + sdb := localAlloc(0, l) defer localFree(sdb) copy((*[0xffff]byte)(unsafe.Pointer(sdb))[:], sd) oa.SecurityDescriptor = (*securityDescriptor)(unsafe.Pointer(sdb)) @@ -301,28 +292,28 @@ func makeServerPipeHandle(path string, sd []byte, c *PipeConfig, first bool) (sy // Construct the default named pipe security descriptor. 
var dacl uintptr if err := rtlDefaultNpAcl(&dacl).Err(); err != nil { - return 0, fmt.Errorf("getting default named pipe ACL: %s", err) + return 0, fmt.Errorf("getting default named pipe ACL: %w", err) } defer localFree(dacl) sdb := &securityDescriptor{ Revision: 1, - Control: cSE_DACL_PRESENT, + Control: windows.SE_DACL_PRESENT, Dacl: dacl, } oa.SecurityDescriptor = sdb } } - typ := uint32(cFILE_PIPE_REJECT_REMOTE_CLIENTS) + typ := uint32(windows.FILE_PIPE_REJECT_REMOTE_CLIENTS) if c.MessageMode { - typ |= cFILE_PIPE_MESSAGE_TYPE + typ |= windows.FILE_PIPE_MESSAGE_TYPE } - disposition := uint32(cFILE_OPEN) + disposition := uint32(windows.FILE_OPEN) access := uint32(syscall.GENERIC_READ | syscall.GENERIC_WRITE | syscall.SYNCHRONIZE) if first { - disposition = cFILE_CREATE + disposition = windows.FILE_CREATE // By not asking for read or write access, the named pipe file system // will put this pipe into an initially disconnected state, blocking // client connections until the next call with first == false. 
@@ -335,7 +326,20 @@ func makeServerPipeHandle(path string, sd []byte, c *PipeConfig, first bool) (sy h syscall.Handle iosb ioStatusBlock ) - err = ntCreateNamedPipeFile(&h, access, &oa, &iosb, syscall.FILE_SHARE_READ|syscall.FILE_SHARE_WRITE, disposition, 0, typ, 0, 0, 0xffffffff, uint32(c.InputBufferSize), uint32(c.OutputBufferSize), &timeout).Err() + err = ntCreateNamedPipeFile(&h, + access, + &oa, + &iosb, + syscall.FILE_SHARE_READ|syscall.FILE_SHARE_WRITE, + disposition, + 0, + typ, + 0, + 0, + 0xffffffff, + uint32(c.InputBufferSize), + uint32(c.OutputBufferSize), + &timeout).Err() if err != nil { return 0, &os.PathError{Op: "open", Path: path, Err: err} } @@ -380,7 +384,7 @@ func (l *win32PipeListener) makeConnectedServerPipe() (*win32File, error) { p.Close() p = nil err = <-ch - if err == nil || err == ErrFileClosed { + if err == nil || err == ErrFileClosed { //nolint:errorlint // err is Errno err = ErrPipeListenerClosed } } @@ -402,12 +406,12 @@ func (l *win32PipeListener) listenerRoutine() { p, err = l.makeConnectedServerPipe() // If the connection was immediately closed by the client, try // again. 
- if err != cERROR_NO_DATA { + if err != windows.ERROR_NO_DATA { //nolint:errorlint // err is Errno break } } responseCh <- acceptResponse{p, err} - closed = err == ErrPipeListenerClosed + closed = err == ErrPipeListenerClosed //nolint:errorlint // err is Errno } } syscall.Close(l.firstHandle) @@ -469,15 +473,15 @@ func ListenPipe(path string, c *PipeConfig) (net.Listener, error) { } func connectPipe(p *win32File) error { - c, err := p.prepareIo() + c, err := p.prepareIO() if err != nil { return err } defer p.wg.Done() err = connectNamedPipe(p.handle, &c.o) - _, err = p.asyncIo(c, nil, 0, err) - if err != nil && err != cERROR_PIPE_CONNECTED { + _, err = p.asyncIO(c, nil, 0, err) + if err != nil && err != windows.ERROR_PIPE_CONNECTED { //nolint:errorlint // err is Errno return err } return nil diff --git a/cluster-autoscaler/vendor/github.com/Microsoft/go-winio/pkg/guid/guid.go b/cluster-autoscaler/vendor/github.com/Microsoft/go-winio/pkg/guid/guid.go index f497c0e39178..48ce4e924366 100644 --- a/cluster-autoscaler/vendor/github.com/Microsoft/go-winio/pkg/guid/guid.go +++ b/cluster-autoscaler/vendor/github.com/Microsoft/go-winio/pkg/guid/guid.go @@ -1,5 +1,3 @@ -// +build windows - // Package guid provides a GUID type. The backing structure for a GUID is // identical to that used by the golang.org/x/sys/windows GUID type. // There are two main binary encodings used for a GUID, the big-endian encoding, @@ -9,26 +7,26 @@ package guid import ( "crypto/rand" - "crypto/sha1" + "crypto/sha1" //nolint:gosec // not used for secure application "encoding" "encoding/binary" "fmt" "strconv" - - "golang.org/x/sys/windows" ) +//go:generate go run golang.org/x/tools/cmd/stringer -type=Variant -trimprefix=Variant -linecomment + // Variant specifies which GUID variant (or "type") of the GUID. It determines // how the entirety of the rest of the GUID is interpreted. type Variant uint8 -// The variants specified by RFC 4122. +// The variants specified by RFC 4122 section 4.1.1. 
const ( // VariantUnknown specifies a GUID variant which does not conform to one of // the variant encodings specified in RFC 4122. VariantUnknown Variant = iota VariantNCS - VariantRFC4122 + VariantRFC4122 // RFC 4122 VariantMicrosoft VariantFuture ) @@ -38,16 +36,13 @@ const ( // hash of an input string. type Version uint8 +func (v Version) String() string { + return strconv.FormatUint(uint64(v), 10) +} + var _ = (encoding.TextMarshaler)(GUID{}) var _ = (encoding.TextUnmarshaler)(&GUID{}) -// GUID represents a GUID/UUID. It has the same structure as -// golang.org/x/sys/windows.GUID so that it can be used with functions expecting -// that type. It is defined as its own type so that stringification and -// marshaling can be supported. The representation matches that used by native -// Windows code. -type GUID windows.GUID - // NewV4 returns a new version 4 (pseudorandom) GUID, as defined by RFC 4122. func NewV4() (GUID, error) { var b [16]byte @@ -70,7 +65,7 @@ func NewV4() (GUID, error) { // big-endian UTF16 stream of bytes. If that is desired, the string can be // encoded as such before being passed to this function. func NewV5(namespace GUID, name []byte) (GUID, error) { - b := sha1.New() + b := sha1.New() //nolint:gosec // not used for secure application namespaceBytes := namespace.ToArray() b.Write(namespaceBytes[:]) b.Write(name) diff --git a/cluster-autoscaler/vendor/github.com/Microsoft/go-winio/pkg/guid/guid_nonwindows.go b/cluster-autoscaler/vendor/github.com/Microsoft/go-winio/pkg/guid/guid_nonwindows.go new file mode 100644 index 000000000000..805bd3548424 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/Microsoft/go-winio/pkg/guid/guid_nonwindows.go @@ -0,0 +1,16 @@ +//go:build !windows +// +build !windows + +package guid + +// GUID represents a GUID/UUID. It has the same structure as +// golang.org/x/sys/windows.GUID so that it can be used with functions expecting +// that type. 
It is defined as its own type as that is only available to builds +// targeted at `windows`. The representation matches that used by native Windows +// code. +type GUID struct { + Data1 uint32 + Data2 uint16 + Data3 uint16 + Data4 [8]byte +} diff --git a/cluster-autoscaler/vendor/github.com/Microsoft/go-winio/pkg/guid/guid_windows.go b/cluster-autoscaler/vendor/github.com/Microsoft/go-winio/pkg/guid/guid_windows.go new file mode 100644 index 000000000000..27e45ee5ccf9 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/Microsoft/go-winio/pkg/guid/guid_windows.go @@ -0,0 +1,13 @@ +//go:build windows +// +build windows + +package guid + +import "golang.org/x/sys/windows" + +// GUID represents a GUID/UUID. It has the same structure as +// golang.org/x/sys/windows.GUID so that it can be used with functions expecting +// that type. It is defined as its own type so that stringification and +// marshaling can be supported. The representation matches that used by native +// Windows code. +type GUID windows.GUID diff --git a/cluster-autoscaler/vendor/github.com/Microsoft/go-winio/pkg/guid/variant_string.go b/cluster-autoscaler/vendor/github.com/Microsoft/go-winio/pkg/guid/variant_string.go new file mode 100644 index 000000000000..4076d3132fdd --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/Microsoft/go-winio/pkg/guid/variant_string.go @@ -0,0 +1,27 @@ +// Code generated by "stringer -type=Variant -trimprefix=Variant -linecomment"; DO NOT EDIT. + +package guid + +import "strconv" + +func _() { + // An "invalid array index" compiler error signifies that the constant values have changed. + // Re-run the stringer command to generate them again. 
+ var x [1]struct{} + _ = x[VariantUnknown-0] + _ = x[VariantNCS-1] + _ = x[VariantRFC4122-2] + _ = x[VariantMicrosoft-3] + _ = x[VariantFuture-4] +} + +const _Variant_name = "UnknownNCSRFC 4122MicrosoftFuture" + +var _Variant_index = [...]uint8{0, 7, 10, 18, 27, 33} + +func (i Variant) String() string { + if i >= Variant(len(_Variant_index)-1) { + return "Variant(" + strconv.FormatInt(int64(i), 10) + ")" + } + return _Variant_name[_Variant_index[i]:_Variant_index[i+1]] +} diff --git a/cluster-autoscaler/vendor/github.com/Microsoft/go-winio/pkg/security/grantvmgroupaccess.go b/cluster-autoscaler/vendor/github.com/Microsoft/go-winio/pkg/security/grantvmgroupaccess.go index fca241590cca..6df87b749905 100644 --- a/cluster-autoscaler/vendor/github.com/Microsoft/go-winio/pkg/security/grantvmgroupaccess.go +++ b/cluster-autoscaler/vendor/github.com/Microsoft/go-winio/pkg/security/grantvmgroupaccess.go @@ -1,13 +1,13 @@ +//go:build windows // +build windows package security import ( + "fmt" "os" "syscall" "unsafe" - - "github.com/pkg/errors" ) type ( @@ -21,6 +21,7 @@ type ( trusteeForm uint32 trusteeType uint32 + //nolint:structcheck // structcheck thinks fields are unused, but they are used to pass data to OS explicitAccess struct { accessPermissions accessMask accessMode accessMode @@ -28,6 +29,7 @@ type ( trustee trustee } + //nolint:structcheck,unused // structcheck thinks fields are unused, but they are used to pass data to OS trustee struct { multipleTrustee *trustee multipleTrusteeOperation int32 @@ -45,6 +47,7 @@ const ( desiredAccessReadControl desiredAccess = 0x20000 desiredAccessWriteDac desiredAccess = 0x40000 + //cspell:disable-next-line gvmga = "GrantVmGroupAccess:" inheritModeNoInheritance inheritMode = 0x0 @@ -57,9 +60,9 @@ const ( shareModeRead shareMode = 0x1 shareModeWrite shareMode = 0x2 - sidVmGroup = "S-1-5-83-0" + sidVMGroup = "S-1-5-83-0" - trusteeFormIsSid trusteeForm = 0 + trusteeFormIsSID trusteeForm = 0 trusteeTypeWellKnownGroup trusteeType = 5
) @@ -68,11 +71,13 @@ const ( // include Grant ACE entries for the VM Group SID. This is a golang re- // implementation of the same function in vmcompute, just not exported in // RS5. Which kind of sucks. Sucks a lot :/ +// +//revive:disable-next-line:var-naming VM, not Vm func GrantVmGroupAccess(name string) error { // Stat (to determine if `name` is a directory). s, err := os.Stat(name) if err != nil { - return errors.Wrapf(err, "%s os.Stat %s", gvmga, name) + return fmt.Errorf("%s os.Stat %s: %w", gvmga, name, err) } // Get a handle to the file/directory. Must defer Close on success. @@ -80,7 +85,7 @@ func GrantVmGroupAccess(name string) error { if err != nil { return err // Already wrapped } - defer syscall.CloseHandle(fd) + defer syscall.CloseHandle(fd) //nolint:errcheck // Get the current DACL and Security Descriptor. Must defer LocalFree on success. ot := objectTypeFileObject @@ -88,9 +93,9 @@ func GrantVmGroupAccess(name string) error { sd := uintptr(0) origDACL := uintptr(0) if err := getSecurityInfo(fd, uint32(ot), uint32(si), nil, nil, &origDACL, nil, &sd); err != nil { - return errors.Wrapf(err, "%s GetSecurityInfo %s", gvmga, name) + return fmt.Errorf("%s GetSecurityInfo %s: %w", gvmga, name, err) } - defer syscall.LocalFree((syscall.Handle)(unsafe.Pointer(sd))) + defer syscall.LocalFree((syscall.Handle)(unsafe.Pointer(sd))) //nolint:errcheck // Generate a new DACL which is the current DACL with the required ACEs added. // Must defer LocalFree on success. @@ -98,11 +103,11 @@ func GrantVmGroupAccess(name string) error { if err != nil { return err // Already wrapped } - defer syscall.LocalFree((syscall.Handle)(unsafe.Pointer(newDACL))) + defer syscall.LocalFree((syscall.Handle)(unsafe.Pointer(newDACL))) //nolint:errcheck // And finally use SetSecurityInfo to apply the updated DACL. 
if err := setSecurityInfo(fd, uint32(ot), uint32(si), uintptr(0), uintptr(0), newDACL, uintptr(0)); err != nil { - return errors.Wrapf(err, "%s SetSecurityInfo %s", gvmga, name) + return fmt.Errorf("%s SetSecurityInfo %s: %w", gvmga, name, err) } return nil @@ -111,16 +116,19 @@ func GrantVmGroupAccess(name string) error { // createFile is a helper function to call [Nt]CreateFile to get a handle to // the file or directory. func createFile(name string, isDir bool) (syscall.Handle, error) { - namep := syscall.StringToUTF16(name) + namep, err := syscall.UTF16FromString(name) + if err != nil { + return syscall.InvalidHandle, fmt.Errorf("could not convert name to UTF-16: %w", err) + } da := uint32(desiredAccessReadControl | desiredAccessWriteDac) sm := uint32(shareModeRead | shareModeWrite) fa := uint32(syscall.FILE_ATTRIBUTE_NORMAL) if isDir { - fa = uint32(fa | syscall.FILE_FLAG_BACKUP_SEMANTICS) + fa |= syscall.FILE_FLAG_BACKUP_SEMANTICS } fd, err := syscall.CreateFile(&namep[0], da, sm, nil, syscall.OPEN_EXISTING, fa, 0) if err != nil { - return 0, errors.Wrapf(err, "%s syscall.CreateFile %s", gvmga, name) + return syscall.InvalidHandle, fmt.Errorf("%s syscall.CreateFile %s: %w", gvmga, name, err) } return fd, nil } @@ -129,9 +137,9 @@ func createFile(name string, isDir bool) (syscall.Handle, error) { // The caller is responsible for LocalFree of the returned DACL on success.
func generateDACLWithAcesAdded(name string, isDir bool, origDACL uintptr) (uintptr, error) { // Generate pointers to the SIDs based on the string SIDs - sid, err := syscall.StringToSid(sidVmGroup) + sid, err := syscall.StringToSid(sidVMGroup) if err != nil { - return 0, errors.Wrapf(err, "%s syscall.StringToSid %s %s", gvmga, name, sidVmGroup) + return 0, fmt.Errorf("%s syscall.StringToSid %s %s: %w", gvmga, name, sidVMGroup, err) } inheritance := inheritModeNoInheritance @@ -140,12 +148,12 @@ func generateDACLWithAcesAdded(name string, isDir bool, origDACL uintptr) (uintp } eaArray := []explicitAccess{ - explicitAccess{ + { accessPermissions: accessMaskDesiredPermission, accessMode: accessModeGrant, inheritance: inheritance, trustee: trustee{ - trusteeForm: trusteeFormIsSid, + trusteeForm: trusteeFormIsSID, trusteeType: trusteeTypeWellKnownGroup, name: uintptr(unsafe.Pointer(sid)), }, @@ -154,7 +162,7 @@ func generateDACLWithAcesAdded(name string, isDir bool, origDACL uintptr) (uintp modifiedDACL := uintptr(0) if err := setEntriesInAcl(uintptr(uint32(1)), uintptr(unsafe.Pointer(&eaArray[0])), origDACL, &modifiedDACL); err != nil { - return 0, errors.Wrapf(err, "%s SetEntriesInAcl %s", gvmga, name) + return 0, fmt.Errorf("%s SetEntriesInAcl %s: %w", gvmga, name, err) } return modifiedDACL, nil diff --git a/cluster-autoscaler/vendor/github.com/Microsoft/go-winio/pkg/security/syscall_windows.go b/cluster-autoscaler/vendor/github.com/Microsoft/go-winio/pkg/security/syscall_windows.go index c40c2739b7c6..71326e4e46fe 100644 --- a/cluster-autoscaler/vendor/github.com/Microsoft/go-winio/pkg/security/syscall_windows.go +++ b/cluster-autoscaler/vendor/github.com/Microsoft/go-winio/pkg/security/syscall_windows.go @@ -1,7 +1,7 @@ package security -//go:generate go run mksyscall_windows.go -output zsyscall_windows.go syscall_windows.go +//go:generate go run github.com/Microsoft/go-winio/tools/mkwinsyscall -output zsyscall_windows.go syscall_windows.go -//sys 
getSecurityInfo(handle syscall.Handle, objectType uint32, si uint32, ppsidOwner **uintptr, ppsidGroup **uintptr, ppDacl *uintptr, ppSacl *uintptr, ppSecurityDescriptor *uintptr) (err error) [failretval!=0] = advapi32.GetSecurityInfo -//sys setSecurityInfo(handle syscall.Handle, objectType uint32, si uint32, psidOwner uintptr, psidGroup uintptr, pDacl uintptr, pSacl uintptr) (err error) [failretval!=0] = advapi32.SetSecurityInfo -//sys setEntriesInAcl(count uintptr, pListOfEEs uintptr, oldAcl uintptr, newAcl *uintptr) (err error) [failretval!=0] = advapi32.SetEntriesInAclW +//sys getSecurityInfo(handle syscall.Handle, objectType uint32, si uint32, ppsidOwner **uintptr, ppsidGroup **uintptr, ppDacl *uintptr, ppSacl *uintptr, ppSecurityDescriptor *uintptr) (win32err error) = advapi32.GetSecurityInfo +//sys setSecurityInfo(handle syscall.Handle, objectType uint32, si uint32, psidOwner uintptr, psidGroup uintptr, pDacl uintptr, pSacl uintptr) (win32err error) = advapi32.SetSecurityInfo +//sys setEntriesInAcl(count uintptr, pListOfEEs uintptr, oldAcl uintptr, newAcl *uintptr) (win32err error) = advapi32.SetEntriesInAclW diff --git a/cluster-autoscaler/vendor/github.com/Microsoft/go-winio/pkg/security/zsyscall_windows.go b/cluster-autoscaler/vendor/github.com/Microsoft/go-winio/pkg/security/zsyscall_windows.go index 4a90cb3cc814..26c986b88fea 100644 --- a/cluster-autoscaler/vendor/github.com/Microsoft/go-winio/pkg/security/zsyscall_windows.go +++ b/cluster-autoscaler/vendor/github.com/Microsoft/go-winio/pkg/security/zsyscall_windows.go @@ -1,4 +1,6 @@ -// Code generated by 'go generate'; DO NOT EDIT. +//go:build windows + +// Code generated by 'go generate' using "github.com/Microsoft/go-winio/tools/mkwinsyscall"; DO NOT EDIT. 
package security @@ -45,26 +47,26 @@ var ( procSetSecurityInfo = modadvapi32.NewProc("SetSecurityInfo") ) -func getSecurityInfo(handle syscall.Handle, objectType uint32, si uint32, ppsidOwner **uintptr, ppsidGroup **uintptr, ppDacl *uintptr, ppSacl *uintptr, ppSecurityDescriptor *uintptr) (err error) { - r1, _, e1 := syscall.Syscall9(procGetSecurityInfo.Addr(), 8, uintptr(handle), uintptr(objectType), uintptr(si), uintptr(unsafe.Pointer(ppsidOwner)), uintptr(unsafe.Pointer(ppsidGroup)), uintptr(unsafe.Pointer(ppDacl)), uintptr(unsafe.Pointer(ppSacl)), uintptr(unsafe.Pointer(ppSecurityDescriptor)), 0) - if r1 != 0 { - err = errnoErr(e1) +func getSecurityInfo(handle syscall.Handle, objectType uint32, si uint32, ppsidOwner **uintptr, ppsidGroup **uintptr, ppDacl *uintptr, ppSacl *uintptr, ppSecurityDescriptor *uintptr) (win32err error) { + r0, _, _ := syscall.Syscall9(procGetSecurityInfo.Addr(), 8, uintptr(handle), uintptr(objectType), uintptr(si), uintptr(unsafe.Pointer(ppsidOwner)), uintptr(unsafe.Pointer(ppsidGroup)), uintptr(unsafe.Pointer(ppDacl)), uintptr(unsafe.Pointer(ppSacl)), uintptr(unsafe.Pointer(ppSecurityDescriptor)), 0) + if r0 != 0 { + win32err = syscall.Errno(r0) } return } -func setEntriesInAcl(count uintptr, pListOfEEs uintptr, oldAcl uintptr, newAcl *uintptr) (err error) { - r1, _, e1 := syscall.Syscall6(procSetEntriesInAclW.Addr(), 4, uintptr(count), uintptr(pListOfEEs), uintptr(oldAcl), uintptr(unsafe.Pointer(newAcl)), 0, 0) - if r1 != 0 { - err = errnoErr(e1) +func setEntriesInAcl(count uintptr, pListOfEEs uintptr, oldAcl uintptr, newAcl *uintptr) (win32err error) { + r0, _, _ := syscall.Syscall6(procSetEntriesInAclW.Addr(), 4, uintptr(count), uintptr(pListOfEEs), uintptr(oldAcl), uintptr(unsafe.Pointer(newAcl)), 0, 0) + if r0 != 0 { + win32err = syscall.Errno(r0) } return } -func setSecurityInfo(handle syscall.Handle, objectType uint32, si uint32, psidOwner uintptr, psidGroup uintptr, pDacl uintptr, pSacl uintptr) (err error) { - r1, _, e1 := 
syscall.Syscall9(procSetSecurityInfo.Addr(), 7, uintptr(handle), uintptr(objectType), uintptr(si), uintptr(psidOwner), uintptr(psidGroup), uintptr(pDacl), uintptr(pSacl), 0, 0) - if r1 != 0 { - err = errnoErr(e1) +func setSecurityInfo(handle syscall.Handle, objectType uint32, si uint32, psidOwner uintptr, psidGroup uintptr, pDacl uintptr, pSacl uintptr) (win32err error) { + r0, _, _ := syscall.Syscall9(procSetSecurityInfo.Addr(), 7, uintptr(handle), uintptr(objectType), uintptr(si), uintptr(psidOwner), uintptr(psidGroup), uintptr(pDacl), uintptr(pSacl), 0, 0) + if r0 != 0 { + win32err = syscall.Errno(r0) } return } diff --git a/cluster-autoscaler/vendor/github.com/Microsoft/go-winio/privilege.go b/cluster-autoscaler/vendor/github.com/Microsoft/go-winio/privilege.go index 9c83d36fe533..0ff9dac906d3 100644 --- a/cluster-autoscaler/vendor/github.com/Microsoft/go-winio/privilege.go +++ b/cluster-autoscaler/vendor/github.com/Microsoft/go-winio/privilege.go @@ -1,3 +1,4 @@ +//go:build windows // +build windows package winio @@ -24,19 +25,15 @@ import ( //sys lookupPrivilegeDisplayName(systemName string, name *uint16, buffer *uint16, size *uint32, languageId *uint32) (err error) = advapi32.LookupPrivilegeDisplayNameW const ( - SE_PRIVILEGE_ENABLED = 2 + //revive:disable-next-line:var-naming ALL_CAPS + SE_PRIVILEGE_ENABLED = windows.SE_PRIVILEGE_ENABLED - ERROR_NOT_ALL_ASSIGNED syscall.Errno = 1300 + //revive:disable-next-line:var-naming ALL_CAPS + ERROR_NOT_ALL_ASSIGNED syscall.Errno = windows.ERROR_NOT_ALL_ASSIGNED - SeBackupPrivilege = "SeBackupPrivilege" - SeRestorePrivilege = "SeRestorePrivilege" -) - -const ( - securityAnonymous = iota - securityIdentification - securityImpersonation - securityDelegation + SeBackupPrivilege = "SeBackupPrivilege" + SeRestorePrivilege = "SeRestorePrivilege" + SeSecurityPrivilege = "SeSecurityPrivilege" ) var ( @@ -50,11 +47,9 @@ type PrivilegeError struct { } func (e *PrivilegeError) Error() string { - s := "" + s := "Could not enable 
privilege " if len(e.privileges) > 1 { s = "Could not enable privileges " - } else { - s = "Could not enable privilege " } for i, p := range e.privileges { if i != 0 { @@ -93,7 +88,7 @@ func RunWithPrivileges(names []string, fn func() error) error { } func mapPrivileges(names []string) ([]uint64, error) { - var privileges []uint64 + privileges := make([]uint64, 0, len(names)) privNameMutex.Lock() defer privNameMutex.Unlock() for _, name := range names { @@ -126,7 +121,7 @@ func enableDisableProcessPrivilege(names []string, action uint32) error { return err } - p, _ := windows.GetCurrentProcess() + p := windows.CurrentProcess() var token windows.Token err = windows.OpenProcessToken(p, windows.TOKEN_ADJUST_PRIVILEGES|windows.TOKEN_QUERY, &token) if err != nil { @@ -139,10 +134,10 @@ func enableDisableProcessPrivilege(names []string, action uint32) error { func adjustPrivileges(token windows.Token, privileges []uint64, action uint32) error { var b bytes.Buffer - binary.Write(&b, binary.LittleEndian, uint32(len(privileges))) + _ = binary.Write(&b, binary.LittleEndian, uint32(len(privileges))) for _, p := range privileges { - binary.Write(&b, binary.LittleEndian, p) - binary.Write(&b, binary.LittleEndian, action) + _ = binary.Write(&b, binary.LittleEndian, p) + _ = binary.Write(&b, binary.LittleEndian, action) } prevState := make([]byte, b.Len()) reqSize := uint32(0) @@ -150,7 +145,7 @@ func adjustPrivileges(token windows.Token, privileges []uint64, action uint32) e if !success { return err } - if err == ERROR_NOT_ALL_ASSIGNED { + if err == ERROR_NOT_ALL_ASSIGNED { //nolint:errorlint // err is Errno return &PrivilegeError{privileges} } return nil @@ -176,7 +171,7 @@ func getPrivilegeName(luid uint64) string { } func newThreadToken() (windows.Token, error) { - err := impersonateSelf(securityImpersonation) + err := impersonateSelf(windows.SecurityImpersonation) if err != nil { return 0, err } diff --git a/cluster-autoscaler/vendor/github.com/Microsoft/go-winio/reparse.go 
b/cluster-autoscaler/vendor/github.com/Microsoft/go-winio/reparse.go index fc1ee4d3a3e9..67d1a104a63f 100644 --- a/cluster-autoscaler/vendor/github.com/Microsoft/go-winio/reparse.go +++ b/cluster-autoscaler/vendor/github.com/Microsoft/go-winio/reparse.go @@ -1,3 +1,6 @@ +//go:build windows +// +build windows + package winio import ( @@ -113,16 +116,16 @@ func EncodeReparsePoint(rp *ReparsePoint) []byte { } var b bytes.Buffer - binary.Write(&b, binary.LittleEndian, &data) + _ = binary.Write(&b, binary.LittleEndian, &data) if !rp.IsMountPoint { flags := uint32(0) if relative { flags |= 1 } - binary.Write(&b, binary.LittleEndian, flags) + _ = binary.Write(&b, binary.LittleEndian, flags) } - binary.Write(&b, binary.LittleEndian, ntTarget16) - binary.Write(&b, binary.LittleEndian, target16) + _ = binary.Write(&b, binary.LittleEndian, ntTarget16) + _ = binary.Write(&b, binary.LittleEndian, target16) return b.Bytes() } diff --git a/cluster-autoscaler/vendor/github.com/Microsoft/go-winio/sd.go b/cluster-autoscaler/vendor/github.com/Microsoft/go-winio/sd.go index db1b370a1b57..5550ef6b61ef 100644 --- a/cluster-autoscaler/vendor/github.com/Microsoft/go-winio/sd.go +++ b/cluster-autoscaler/vendor/github.com/Microsoft/go-winio/sd.go @@ -1,23 +1,25 @@ +//go:build windows // +build windows package winio import ( + "errors" "syscall" "unsafe" + + "golang.org/x/sys/windows" ) //sys lookupAccountName(systemName *uint16, accountName string, sid *byte, sidSize *uint32, refDomain *uint16, refDomainSize *uint32, sidNameUse *uint32) (err error) = advapi32.LookupAccountNameW +//sys lookupAccountSid(systemName *uint16, sid *byte, name *uint16, nameSize *uint32, refDomain *uint16, refDomainSize *uint32, sidNameUse *uint32) (err error) = advapi32.LookupAccountSidW //sys convertSidToStringSid(sid *byte, str **uint16) (err error) = advapi32.ConvertSidToStringSidW +//sys convertStringSidToSid(str *uint16, sid **byte) (err error) = advapi32.ConvertStringSidToSidW //sys 
convertStringSecurityDescriptorToSecurityDescriptor(str string, revision uint32, sd *uintptr, size *uint32) (err error) = advapi32.ConvertStringSecurityDescriptorToSecurityDescriptorW //sys convertSecurityDescriptorToStringSecurityDescriptor(sd *byte, revision uint32, secInfo uint32, sddl **uint16, sddlSize *uint32) (err error) = advapi32.ConvertSecurityDescriptorToStringSecurityDescriptorW //sys localFree(mem uintptr) = LocalFree //sys getSecurityDescriptorLength(sd uintptr) (len uint32) = advapi32.GetSecurityDescriptorLength -const ( - cERROR_NONE_MAPPED = syscall.Errno(1332) -) - type AccountLookupError struct { Name string Err error @@ -28,8 +30,10 @@ func (e *AccountLookupError) Error() string { return "lookup account: empty account name specified" } var s string - switch e.Err { - case cERROR_NONE_MAPPED: + switch { + case errors.Is(e.Err, windows.ERROR_INVALID_SID): + s = "the security ID structure is invalid" + case errors.Is(e.Err, windows.ERROR_NONE_MAPPED): s = "not found" default: s = e.Err.Error() @@ -37,6 +41,8 @@ func (e *AccountLookupError) Error() string { return "lookup account " + e.Name + ": " + s } +func (e *AccountLookupError) Unwrap() error { return e.Err } + type SddlConversionError struct { Sddl string Err error @@ -46,15 +52,19 @@ func (e *SddlConversionError) Error() string { return "convert " + e.Sddl + ": " + e.Err.Error() } +func (e *SddlConversionError) Unwrap() error { return e.Err } + // LookupSidByName looks up the SID of an account by name +// +//revive:disable-next-line:var-naming SID, not Sid func LookupSidByName(name string) (sid string, err error) { if name == "" { - return "", &AccountLookupError{name, cERROR_NONE_MAPPED} + return "", &AccountLookupError{name, windows.ERROR_NONE_MAPPED} } var sidSize, sidNameUse, refDomainSize uint32 err = lookupAccountName(nil, name, nil, &sidSize, nil, &refDomainSize, &sidNameUse) - if err != nil && err != syscall.ERROR_INSUFFICIENT_BUFFER { + if err != nil && err != 
syscall.ERROR_INSUFFICIENT_BUFFER { //nolint:errorlint // err is Errno return "", &AccountLookupError{name, err} } sidBuffer := make([]byte, sidSize) @@ -73,6 +83,42 @@ func LookupSidByName(name string) (sid string, err error) { return sid, nil } +// LookupNameBySid looks up the name of an account by SID +// +//revive:disable-next-line:var-naming SID, not Sid +func LookupNameBySid(sid string) (name string, err error) { + if sid == "" { + return "", &AccountLookupError{sid, windows.ERROR_NONE_MAPPED} + } + + sidBuffer, err := windows.UTF16PtrFromString(sid) + if err != nil { + return "", &AccountLookupError{sid, err} + } + + var sidPtr *byte + if err = convertStringSidToSid(sidBuffer, &sidPtr); err != nil { + return "", &AccountLookupError{sid, err} + } + defer localFree(uintptr(unsafe.Pointer(sidPtr))) + + var nameSize, refDomainSize, sidNameUse uint32 + err = lookupAccountSid(nil, sidPtr, nil, &nameSize, nil, &refDomainSize, &sidNameUse) + if err != nil && err != windows.ERROR_INSUFFICIENT_BUFFER { //nolint:errorlint // err is Errno + return "", &AccountLookupError{sid, err} + } + + nameBuffer := make([]uint16, nameSize) + refDomainBuffer := make([]uint16, refDomainSize) + err = lookupAccountSid(nil, sidPtr, &nameBuffer[0], &nameSize, &refDomainBuffer[0], &refDomainSize, &sidNameUse) + if err != nil { + return "", &AccountLookupError{sid, err} + } + + name = windows.UTF16ToString(nameBuffer) + return name, nil +} + func SddlToSecurityDescriptor(sddl string) ([]byte, error) { var sdBuffer uintptr err := convertStringSecurityDescriptorToSecurityDescriptor(sddl, 1, &sdBuffer, nil) @@ -87,7 +133,7 @@ func SddlToSecurityDescriptor(sddl string) ([]byte, error) { func SecurityDescriptorToSddl(sd []byte) (string, error) { var sddl *uint16 - // The returned string length seems to including an aribtrary number of terminating NULs. + // The returned string length seems to include an arbitrary number of terminating NULs. // Don't use it. 
err := convertSecurityDescriptorToStringSecurityDescriptor(&sd[0], 1, 0xff, &sddl, nil) if err != nil { diff --git a/cluster-autoscaler/vendor/github.com/Microsoft/go-winio/syscall.go b/cluster-autoscaler/vendor/github.com/Microsoft/go-winio/syscall.go index 5955c99fdeaf..a6ca111b39c4 100644 --- a/cluster-autoscaler/vendor/github.com/Microsoft/go-winio/syscall.go +++ b/cluster-autoscaler/vendor/github.com/Microsoft/go-winio/syscall.go @@ -1,3 +1,5 @@ +//go:build windows + package winio -//go:generate go run golang.org/x/sys/windows/mkwinsyscall -output zsyscall_windows.go file.go pipe.go sd.go fileinfo.go privilege.go backup.go hvsock.go +//go:generate go run github.com/Microsoft/go-winio/tools/mkwinsyscall -output zsyscall_windows.go ./*.go diff --git a/cluster-autoscaler/vendor/github.com/Microsoft/go-winio/tools.go b/cluster-autoscaler/vendor/github.com/Microsoft/go-winio/tools.go new file mode 100644 index 000000000000..2aa045843ea8 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/Microsoft/go-winio/tools.go @@ -0,0 +1,5 @@ +//go:build tools + +package winio + +import _ "golang.org/x/tools/cmd/stringer" diff --git a/cluster-autoscaler/vendor/github.com/Microsoft/go-winio/vhd/vhd.go b/cluster-autoscaler/vendor/github.com/Microsoft/go-winio/vhd/vhd.go index b03b789e6579..b54cad112703 100644 --- a/cluster-autoscaler/vendor/github.com/Microsoft/go-winio/vhd/vhd.go +++ b/cluster-autoscaler/vendor/github.com/Microsoft/go-winio/vhd/vhd.go @@ -1,3 +1,4 @@ +//go:build windows // +build windows package vhd @@ -7,17 +8,16 @@ import ( "syscall" "github.com/Microsoft/go-winio/pkg/guid" - "github.com/pkg/errors" "golang.org/x/sys/windows" ) -//go:generate go run mksyscall_windows.go -output zvhd_windows.go vhd.go +//go:generate go run github.com/Microsoft/go-winio/tools/mkwinsyscall -output zvhd_windows.go vhd.go -//sys createVirtualDisk(virtualStorageType *VirtualStorageType, path string, virtualDiskAccessMask uint32, securityDescriptor *uintptr, 
createVirtualDiskFlags uint32, providerSpecificFlags uint32, parameters *CreateVirtualDiskParameters, overlapped *syscall.Overlapped, handle *syscall.Handle) (err error) [failretval != 0] = virtdisk.CreateVirtualDisk -//sys openVirtualDisk(virtualStorageType *VirtualStorageType, path string, virtualDiskAccessMask uint32, openVirtualDiskFlags uint32, parameters *OpenVirtualDiskParameters, handle *syscall.Handle) (err error) [failretval != 0] = virtdisk.OpenVirtualDisk -//sys attachVirtualDisk(handle syscall.Handle, securityDescriptor *uintptr, attachVirtualDiskFlag uint32, providerSpecificFlags uint32, parameters *AttachVirtualDiskParameters, overlapped *syscall.Overlapped) (err error) [failretval != 0] = virtdisk.AttachVirtualDisk -//sys detachVirtualDisk(handle syscall.Handle, detachVirtualDiskFlags uint32, providerSpecificFlags uint32) (err error) [failretval != 0] = virtdisk.DetachVirtualDisk -//sys getVirtualDiskPhysicalPath(handle syscall.Handle, diskPathSizeInBytes *uint32, buffer *uint16) (err error) [failretval != 0] = virtdisk.GetVirtualDiskPhysicalPath +//sys createVirtualDisk(virtualStorageType *VirtualStorageType, path string, virtualDiskAccessMask uint32, securityDescriptor *uintptr, createVirtualDiskFlags uint32, providerSpecificFlags uint32, parameters *CreateVirtualDiskParameters, overlapped *syscall.Overlapped, handle *syscall.Handle) (win32err error) = virtdisk.CreateVirtualDisk +//sys openVirtualDisk(virtualStorageType *VirtualStorageType, path string, virtualDiskAccessMask uint32, openVirtualDiskFlags uint32, parameters *openVirtualDiskParameters, handle *syscall.Handle) (win32err error) = virtdisk.OpenVirtualDisk +//sys attachVirtualDisk(handle syscall.Handle, securityDescriptor *uintptr, attachVirtualDiskFlag uint32, providerSpecificFlags uint32, parameters *AttachVirtualDiskParameters, overlapped *syscall.Overlapped) (win32err error) = virtdisk.AttachVirtualDisk +//sys detachVirtualDisk(handle syscall.Handle, detachVirtualDiskFlags uint32, 
providerSpecificFlags uint32) (win32err error) = virtdisk.DetachVirtualDisk +//sys getVirtualDiskPhysicalPath(handle syscall.Handle, diskPathSizeInBytes *uint32, buffer *uint16) (win32err error) = virtdisk.GetVirtualDiskPhysicalPath type ( CreateVirtualDiskFlag uint32 @@ -62,20 +62,35 @@ type OpenVirtualDiskParameters struct { Version2 OpenVersion2 } +// The higher level `OpenVersion2` struct uses `bool`s to refer to `GetInfoOnly` and `ReadOnly` for ease of use. However, +// the internal windows structure uses `BOOL`s aka int32s for these types. `openVersion2` is used for translating +// `OpenVersion2` fields to the correct windows internal field types on the `Open____` methods. +type openVersion2 struct { + getInfoOnly int32 + readOnly int32 + resiliencyGUID guid.GUID +} + +type openVirtualDiskParameters struct { + version uint32 + version2 openVersion2 +} + type AttachVersion2 struct { RestrictedOffset uint64 RestrictedLength uint64 } type AttachVirtualDiskParameters struct { - Version uint32 // Must always be set to 2 + Version uint32 Version2 AttachVersion2 } const ( + //revive:disable-next-line:var-naming ALL_CAPS VIRTUAL_STORAGE_TYPE_DEVICE_VHDX = 0x3 - // Access Mask for opening a VHD + // Access Mask for opening a VHD. VirtualDiskAccessNone VirtualDiskAccessMask = 0x00000000 VirtualDiskAccessAttachRO VirtualDiskAccessMask = 0x00010000 VirtualDiskAccessAttachRW VirtualDiskAccessMask = 0x00020000 @@ -87,7 +102,7 @@ const ( VirtualDiskAccessAll VirtualDiskAccessMask = 0x003f0000 VirtualDiskAccessWritable VirtualDiskAccessMask = 0x00320000 - // Flags for creating a VHD + // Flags for creating a VHD. 
CreateVirtualDiskFlagNone CreateVirtualDiskFlag = 0x0 CreateVirtualDiskFlagFullPhysicalAllocation CreateVirtualDiskFlag = 0x1 CreateVirtualDiskFlagPreventWritesToSourceDisk CreateVirtualDiskFlag = 0x2 @@ -95,12 +110,12 @@ const ( CreateVirtualDiskFlagCreateBackingStorage CreateVirtualDiskFlag = 0x8 CreateVirtualDiskFlagUseChangeTrackingSourceLimit CreateVirtualDiskFlag = 0x10 CreateVirtualDiskFlagPreserveParentChangeTrackingState CreateVirtualDiskFlag = 0x20 - CreateVirtualDiskFlagVhdSetUseOriginalBackingStorage CreateVirtualDiskFlag = 0x40 + CreateVirtualDiskFlagVhdSetUseOriginalBackingStorage CreateVirtualDiskFlag = 0x40 //revive:disable-line:var-naming VHD, not Vhd CreateVirtualDiskFlagSparseFile CreateVirtualDiskFlag = 0x80 - CreateVirtualDiskFlagPmemCompatible CreateVirtualDiskFlag = 0x100 + CreateVirtualDiskFlagPmemCompatible CreateVirtualDiskFlag = 0x100 //revive:disable-line:var-naming PMEM, not Pmem CreateVirtualDiskFlagSupportCompressedVolumes CreateVirtualDiskFlag = 0x200 - // Flags for opening a VHD + // Flags for opening a VHD. OpenVirtualDiskFlagNone VirtualDiskFlag = 0x00000000 OpenVirtualDiskFlagNoParents VirtualDiskFlag = 0x00000001 OpenVirtualDiskFlagBlankFile VirtualDiskFlag = 0x00000002 @@ -113,7 +128,7 @@ const ( OpenVirtualDiskFlagNoWriteHardening VirtualDiskFlag = 0x00000100 OpenVirtualDiskFlagSupportCompressedVolumes VirtualDiskFlag = 0x00000200 - // Flags for attaching a VHD + // Flags for attaching a VHD. AttachVirtualDiskFlagNone AttachVirtualDiskFlag = 0x00000000 AttachVirtualDiskFlagReadOnly AttachVirtualDiskFlag = 0x00000001 AttachVirtualDiskFlagNoDriveLetter AttachVirtualDiskFlag = 0x00000002 @@ -126,12 +141,14 @@ const ( AttachVirtualDiskFlagSinglePartition AttachVirtualDiskFlag = 0x00000100 AttachVirtualDiskFlagRegisterVolume AttachVirtualDiskFlag = 0x00000200 - // Flags for detaching a VHD + // Flags for detaching a VHD. 
DetachVirtualDiskFlagNone DetachVirtualDiskFlag = 0x0 ) // CreateVhdx is a helper function to create a simple vhdx file at the given path using // default values. +// +//revive:disable-next-line:var-naming VHDX, not Vhdx func CreateVhdx(path string, maxSizeInGb, blockSizeInMb uint32) error { params := CreateVirtualDiskParameters{ Version: 2, @@ -146,21 +163,20 @@ func CreateVhdx(path string, maxSizeInGb, blockSizeInMb uint32) error { return err } - if err := syscall.CloseHandle(handle); err != nil { - return err - } - return nil + return syscall.CloseHandle(handle) } // DetachVirtualDisk detaches a virtual hard disk by handle. func DetachVirtualDisk(handle syscall.Handle) (err error) { if err := detachVirtualDisk(handle, 0, 0); err != nil { - return errors.Wrap(err, "failed to detach virtual disk") + return fmt.Errorf("failed to detach virtual disk: %w", err) } return nil } // DetachVhd detaches a vhd found at `path`. +// +//revive:disable-next-line:var-naming VHD, not Vhd func DetachVhd(path string) error { handle, err := OpenVirtualDisk( path, @@ -170,12 +186,16 @@ func DetachVhd(path string) error { if err != nil { return err } - defer syscall.CloseHandle(handle) + defer syscall.CloseHandle(handle) //nolint:errcheck return DetachVirtualDisk(handle) } // AttachVirtualDisk attaches a virtual hard disk for use. -func AttachVirtualDisk(handle syscall.Handle, attachVirtualDiskFlag AttachVirtualDiskFlag, parameters *AttachVirtualDiskParameters) (err error) { +func AttachVirtualDisk( + handle syscall.Handle, + attachVirtualDiskFlag AttachVirtualDiskFlag, + parameters *AttachVirtualDiskParameters, +) (err error) { // Supports both version 1 and 2 of the attach parameters as version 2 wasn't present in RS5. 
if err := attachVirtualDisk( handle, @@ -185,13 +205,15 @@ func AttachVirtualDisk(handle syscall.Handle, attachVirtualDiskFlag AttachVirtua parameters, nil, ); err != nil { - return errors.Wrap(err, "failed to attach virtual disk") + return fmt.Errorf("failed to attach virtual disk: %w", err) } return nil } // AttachVhd attaches a virtual hard disk at `path` for use. Attaches using version 2 // of the ATTACH_VIRTUAL_DISK_PARAMETERS. +// +//revive:disable-next-line:var-naming VHD, not Vhd func AttachVhd(path string) (err error) { handle, err := OpenVirtualDisk( path, @@ -202,20 +224,24 @@ func AttachVhd(path string) (err error) { return err } - defer syscall.CloseHandle(handle) + defer syscall.CloseHandle(handle) //nolint:errcheck params := AttachVirtualDiskParameters{Version: 2} if err := AttachVirtualDisk( handle, AttachVirtualDiskFlagNone, ¶ms, ); err != nil { - return errors.Wrap(err, "failed to attach virtual disk") + return fmt.Errorf("failed to attach virtual disk: %w", err) } return nil } // OpenVirtualDisk obtains a handle to a VHD opened with supplied access mask and flags. -func OpenVirtualDisk(vhdPath string, virtualDiskAccessMask VirtualDiskAccessMask, openVirtualDiskFlags VirtualDiskFlag) (syscall.Handle, error) { +func OpenVirtualDisk( + vhdPath string, + virtualDiskAccessMask VirtualDiskAccessMask, + openVirtualDiskFlags VirtualDiskFlag, +) (syscall.Handle, error) { parameters := OpenVirtualDiskParameters{Version: 2} handle, err := OpenVirtualDiskWithParameters( vhdPath, @@ -230,29 +256,55 @@ func OpenVirtualDisk(vhdPath string, virtualDiskAccessMask VirtualDiskAccessMask } // OpenVirtualDiskWithParameters obtains a handle to a VHD opened with supplied access mask, flags and parameters. 
-func OpenVirtualDiskWithParameters(vhdPath string, virtualDiskAccessMask VirtualDiskAccessMask, openVirtualDiskFlags VirtualDiskFlag, parameters *OpenVirtualDiskParameters) (syscall.Handle, error) { +func OpenVirtualDiskWithParameters( + vhdPath string, + virtualDiskAccessMask VirtualDiskAccessMask, + openVirtualDiskFlags VirtualDiskFlag, + parameters *OpenVirtualDiskParameters, +) (syscall.Handle, error) { var ( handle syscall.Handle defaultType VirtualStorageType + getInfoOnly int32 + readOnly int32 ) if parameters.Version != 2 { return handle, fmt.Errorf("only version 2 VHDs are supported, found version: %d", parameters.Version) } + if parameters.Version2.GetInfoOnly { + getInfoOnly = 1 + } + if parameters.Version2.ReadOnly { + readOnly = 1 + } + params := &openVirtualDiskParameters{ + version: parameters.Version, + version2: openVersion2{ + getInfoOnly, + readOnly, + parameters.Version2.ResiliencyGUID, + }, + } if err := openVirtualDisk( &defaultType, vhdPath, uint32(virtualDiskAccessMask), uint32(openVirtualDiskFlags), - parameters, + params, &handle, ); err != nil { - return 0, errors.Wrap(err, "failed to open virtual disk") + return 0, fmt.Errorf("failed to open virtual disk: %w", err) } return handle, nil } // CreateVirtualDisk creates a virtual harddisk and returns a handle to the disk. 
-func CreateVirtualDisk(path string, virtualDiskAccessMask VirtualDiskAccessMask, createVirtualDiskFlags CreateVirtualDiskFlag, parameters *CreateVirtualDiskParameters) (syscall.Handle, error) { +func CreateVirtualDisk( + path string, + virtualDiskAccessMask VirtualDiskAccessMask, + createVirtualDiskFlags CreateVirtualDiskFlag, + parameters *CreateVirtualDiskParameters, +) (syscall.Handle, error) { var ( handle syscall.Handle defaultType VirtualStorageType @@ -272,7 +324,7 @@ func CreateVirtualDisk(path string, virtualDiskAccessMask VirtualDiskAccessMask, nil, &handle, ); err != nil { - return handle, errors.Wrap(err, "failed to create virtual disk") + return handle, fmt.Errorf("failed to create virtual disk: %w", err) } return handle, nil } @@ -290,12 +342,14 @@ func GetVirtualDiskPhysicalPath(handle syscall.Handle) (_ string, err error) { &diskPathSizeInBytes, &diskPhysicalPathBuf[0], ); err != nil { - return "", errors.Wrap(err, "failed to get disk physical path") + return "", fmt.Errorf("failed to get disk physical path: %w", err) } return windows.UTF16ToString(diskPhysicalPathBuf[:]), nil } // CreateDiffVhd is a helper function to create a differencing virtual disk. +// +//revive:disable-next-line:var-naming VHD, not Vhd func CreateDiffVhd(diffVhdPath, baseVhdPath string, blockSizeInMB uint32) error { // Setting `ParentPath` is how to signal to create a differencing disk. 
createParams := &CreateVirtualDiskParameters{ @@ -314,10 +368,10 @@ func CreateDiffVhd(diffVhdPath, baseVhdPath string, blockSizeInMB uint32) error createParams, ) if err != nil { - return fmt.Errorf("failed to create differencing vhd: %s", err) + return fmt.Errorf("failed to create differencing vhd: %w", err) } if err := syscall.CloseHandle(vhdHandle); err != nil { - return fmt.Errorf("failed to close differencing vhd handle: %s", err) + return fmt.Errorf("failed to close differencing vhd handle: %w", err) } return nil } diff --git a/cluster-autoscaler/vendor/github.com/Microsoft/go-winio/vhd/zvhd_windows.go b/cluster-autoscaler/vendor/github.com/Microsoft/go-winio/vhd/zvhd_windows.go index 572f7b42f105..d0e917d2be37 100644 --- a/cluster-autoscaler/vendor/github.com/Microsoft/go-winio/vhd/zvhd_windows.go +++ b/cluster-autoscaler/vendor/github.com/Microsoft/go-winio/vhd/zvhd_windows.go @@ -1,4 +1,6 @@ -// Code generated by 'go generate'; DO NOT EDIT. +//go:build windows + +// Code generated by 'go generate' using "github.com/Microsoft/go-winio/tools/mkwinsyscall"; DO NOT EDIT. 
package vhd @@ -47,60 +49,60 @@ var ( procOpenVirtualDisk = modvirtdisk.NewProc("OpenVirtualDisk") ) -func attachVirtualDisk(handle syscall.Handle, securityDescriptor *uintptr, attachVirtualDiskFlag uint32, providerSpecificFlags uint32, parameters *AttachVirtualDiskParameters, overlapped *syscall.Overlapped) (err error) { - r1, _, e1 := syscall.Syscall6(procAttachVirtualDisk.Addr(), 6, uintptr(handle), uintptr(unsafe.Pointer(securityDescriptor)), uintptr(attachVirtualDiskFlag), uintptr(providerSpecificFlags), uintptr(unsafe.Pointer(parameters)), uintptr(unsafe.Pointer(overlapped))) - if r1 != 0 { - err = errnoErr(e1) +func attachVirtualDisk(handle syscall.Handle, securityDescriptor *uintptr, attachVirtualDiskFlag uint32, providerSpecificFlags uint32, parameters *AttachVirtualDiskParameters, overlapped *syscall.Overlapped) (win32err error) { + r0, _, _ := syscall.Syscall6(procAttachVirtualDisk.Addr(), 6, uintptr(handle), uintptr(unsafe.Pointer(securityDescriptor)), uintptr(attachVirtualDiskFlag), uintptr(providerSpecificFlags), uintptr(unsafe.Pointer(parameters)), uintptr(unsafe.Pointer(overlapped))) + if r0 != 0 { + win32err = syscall.Errno(r0) } return } -func createVirtualDisk(virtualStorageType *VirtualStorageType, path string, virtualDiskAccessMask uint32, securityDescriptor *uintptr, createVirtualDiskFlags uint32, providerSpecificFlags uint32, parameters *CreateVirtualDiskParameters, overlapped *syscall.Overlapped, handle *syscall.Handle) (err error) { +func createVirtualDisk(virtualStorageType *VirtualStorageType, path string, virtualDiskAccessMask uint32, securityDescriptor *uintptr, createVirtualDiskFlags uint32, providerSpecificFlags uint32, parameters *CreateVirtualDiskParameters, overlapped *syscall.Overlapped, handle *syscall.Handle) (win32err error) { var _p0 *uint16 - _p0, err = syscall.UTF16PtrFromString(path) - if err != nil { + _p0, win32err = syscall.UTF16PtrFromString(path) + if win32err != nil { return } return 
_createVirtualDisk(virtualStorageType, _p0, virtualDiskAccessMask, securityDescriptor, createVirtualDiskFlags, providerSpecificFlags, parameters, overlapped, handle) } -func _createVirtualDisk(virtualStorageType *VirtualStorageType, path *uint16, virtualDiskAccessMask uint32, securityDescriptor *uintptr, createVirtualDiskFlags uint32, providerSpecificFlags uint32, parameters *CreateVirtualDiskParameters, overlapped *syscall.Overlapped, handle *syscall.Handle) (err error) { - r1, _, e1 := syscall.Syscall9(procCreateVirtualDisk.Addr(), 9, uintptr(unsafe.Pointer(virtualStorageType)), uintptr(unsafe.Pointer(path)), uintptr(virtualDiskAccessMask), uintptr(unsafe.Pointer(securityDescriptor)), uintptr(createVirtualDiskFlags), uintptr(providerSpecificFlags), uintptr(unsafe.Pointer(parameters)), uintptr(unsafe.Pointer(overlapped)), uintptr(unsafe.Pointer(handle))) - if r1 != 0 { - err = errnoErr(e1) +func _createVirtualDisk(virtualStorageType *VirtualStorageType, path *uint16, virtualDiskAccessMask uint32, securityDescriptor *uintptr, createVirtualDiskFlags uint32, providerSpecificFlags uint32, parameters *CreateVirtualDiskParameters, overlapped *syscall.Overlapped, handle *syscall.Handle) (win32err error) { + r0, _, _ := syscall.Syscall9(procCreateVirtualDisk.Addr(), 9, uintptr(unsafe.Pointer(virtualStorageType)), uintptr(unsafe.Pointer(path)), uintptr(virtualDiskAccessMask), uintptr(unsafe.Pointer(securityDescriptor)), uintptr(createVirtualDiskFlags), uintptr(providerSpecificFlags), uintptr(unsafe.Pointer(parameters)), uintptr(unsafe.Pointer(overlapped)), uintptr(unsafe.Pointer(handle))) + if r0 != 0 { + win32err = syscall.Errno(r0) } return } -func detachVirtualDisk(handle syscall.Handle, detachVirtualDiskFlags uint32, providerSpecificFlags uint32) (err error) { - r1, _, e1 := syscall.Syscall(procDetachVirtualDisk.Addr(), 3, uintptr(handle), uintptr(detachVirtualDiskFlags), uintptr(providerSpecificFlags)) - if r1 != 0 { - err = errnoErr(e1) +func detachVirtualDisk(handle 
syscall.Handle, detachVirtualDiskFlags uint32, providerSpecificFlags uint32) (win32err error) { + r0, _, _ := syscall.Syscall(procDetachVirtualDisk.Addr(), 3, uintptr(handle), uintptr(detachVirtualDiskFlags), uintptr(providerSpecificFlags)) + if r0 != 0 { + win32err = syscall.Errno(r0) } return } -func getVirtualDiskPhysicalPath(handle syscall.Handle, diskPathSizeInBytes *uint32, buffer *uint16) (err error) { - r1, _, e1 := syscall.Syscall(procGetVirtualDiskPhysicalPath.Addr(), 3, uintptr(handle), uintptr(unsafe.Pointer(diskPathSizeInBytes)), uintptr(unsafe.Pointer(buffer))) - if r1 != 0 { - err = errnoErr(e1) +func getVirtualDiskPhysicalPath(handle syscall.Handle, diskPathSizeInBytes *uint32, buffer *uint16) (win32err error) { + r0, _, _ := syscall.Syscall(procGetVirtualDiskPhysicalPath.Addr(), 3, uintptr(handle), uintptr(unsafe.Pointer(diskPathSizeInBytes)), uintptr(unsafe.Pointer(buffer))) + if r0 != 0 { + win32err = syscall.Errno(r0) } return } -func openVirtualDisk(virtualStorageType *VirtualStorageType, path string, virtualDiskAccessMask uint32, openVirtualDiskFlags uint32, parameters *OpenVirtualDiskParameters, handle *syscall.Handle) (err error) { +func openVirtualDisk(virtualStorageType *VirtualStorageType, path string, virtualDiskAccessMask uint32, openVirtualDiskFlags uint32, parameters *openVirtualDiskParameters, handle *syscall.Handle) (win32err error) { var _p0 *uint16 - _p0, err = syscall.UTF16PtrFromString(path) - if err != nil { + _p0, win32err = syscall.UTF16PtrFromString(path) + if win32err != nil { return } return _openVirtualDisk(virtualStorageType, _p0, virtualDiskAccessMask, openVirtualDiskFlags, parameters, handle) } -func _openVirtualDisk(virtualStorageType *VirtualStorageType, path *uint16, virtualDiskAccessMask uint32, openVirtualDiskFlags uint32, parameters *OpenVirtualDiskParameters, handle *syscall.Handle) (err error) { - r1, _, e1 := syscall.Syscall6(procOpenVirtualDisk.Addr(), 6, uintptr(unsafe.Pointer(virtualStorageType)), 
		uintptr(unsafe.Pointer(path)), uintptr(virtualDiskAccessMask), uintptr(openVirtualDiskFlags), uintptr(unsafe.Pointer(parameters)), uintptr(unsafe.Pointer(handle)))
-	if r1 != 0 {
-		err = errnoErr(e1)
+func _openVirtualDisk(virtualStorageType *VirtualStorageType, path *uint16, virtualDiskAccessMask uint32, openVirtualDiskFlags uint32, parameters *openVirtualDiskParameters, handle *syscall.Handle) (win32err error) {
+	r0, _, _ := syscall.Syscall6(procOpenVirtualDisk.Addr(), 6, uintptr(unsafe.Pointer(virtualStorageType)), uintptr(unsafe.Pointer(path)), uintptr(virtualDiskAccessMask), uintptr(openVirtualDiskFlags), uintptr(unsafe.Pointer(parameters)), uintptr(unsafe.Pointer(handle)))
+	if r0 != 0 {
+		win32err = syscall.Errno(r0)
 	}
 	return
 }
diff --git a/cluster-autoscaler/vendor/github.com/Microsoft/go-winio/zsyscall_windows.go b/cluster-autoscaler/vendor/github.com/Microsoft/go-winio/zsyscall_windows.go
index 176ff75e320c..83f45a1351ba 100644
--- a/cluster-autoscaler/vendor/github.com/Microsoft/go-winio/zsyscall_windows.go
+++ b/cluster-autoscaler/vendor/github.com/Microsoft/go-winio/zsyscall_windows.go
@@ -1,4 +1,6 @@
-// Code generated by 'go generate'; DO NOT EDIT.
+//go:build windows
+
+// Code generated by 'go generate' using "github.com/Microsoft/go-winio/tools/mkwinsyscall"; DO NOT EDIT.
 
 package winio
 
@@ -47,9 +49,11 @@ var (
 	procConvertSecurityDescriptorToStringSecurityDescriptorW = modadvapi32.NewProc("ConvertSecurityDescriptorToStringSecurityDescriptorW")
 	procConvertSidToStringSidW = modadvapi32.NewProc("ConvertSidToStringSidW")
 	procConvertStringSecurityDescriptorToSecurityDescriptorW = modadvapi32.NewProc("ConvertStringSecurityDescriptorToSecurityDescriptorW")
+	procConvertStringSidToSidW = modadvapi32.NewProc("ConvertStringSidToSidW")
 	procGetSecurityDescriptorLength = modadvapi32.NewProc("GetSecurityDescriptorLength")
 	procImpersonateSelf = modadvapi32.NewProc("ImpersonateSelf")
 	procLookupAccountNameW = modadvapi32.NewProc("LookupAccountNameW")
+	procLookupAccountSidW = modadvapi32.NewProc("LookupAccountSidW")
 	procLookupPrivilegeDisplayNameW = modadvapi32.NewProc("LookupPrivilegeDisplayNameW")
 	procLookupPrivilegeNameW = modadvapi32.NewProc("LookupPrivilegeNameW")
 	procLookupPrivilegeValueW = modadvapi32.NewProc("LookupPrivilegeValueW")
@@ -74,7 +78,6 @@ var (
 	procRtlDosPathNameToNtPathName_U = modntdll.NewProc("RtlDosPathNameToNtPathName_U")
 	procRtlNtStatusToDosErrorNoTeb = modntdll.NewProc("RtlNtStatusToDosErrorNoTeb")
 	procWSAGetOverlappedResult = modws2_32.NewProc("WSAGetOverlappedResult")
-	procbind = modws2_32.NewProc("bind")
 )
 
 func adjustTokenPrivileges(token windows.Token, releaseAll bool, input *byte, outputSize uint32, output *byte, requiredSize *uint32) (success bool, err error) {
@@ -123,6 +126,14 @@ func _convertStringSecurityDescriptorToSecurityDescriptor(str *uint16, revision
 	return
 }
 
+func convertStringSidToSid(str *uint16, sid **byte) (err error) {
+	r1, _, e1 := syscall.Syscall(procConvertStringSidToSidW.Addr(), 2, uintptr(unsafe.Pointer(str)), uintptr(unsafe.Pointer(sid)), 0)
+	if r1 == 0 {
+		err = errnoErr(e1)
+	}
+	return
+}
+
 func getSecurityDescriptorLength(sd uintptr) (len uint32) {
 	r0, _, _ := syscall.Syscall(procGetSecurityDescriptorLength.Addr(), 1, uintptr(sd), 0, 0)
 	len = uint32(r0)
@@ -154,6 +165,14 @@ func _lookupAccountName(systemName *uint16, accountName *uint16, sid *byte, sidS
 	return
 }
 
+func lookupAccountSid(systemName *uint16, sid *byte, name *uint16, nameSize *uint32, refDomain *uint16, refDomainSize *uint32, sidNameUse *uint32) (err error) {
+	r1, _, e1 := syscall.Syscall9(procLookupAccountSidW.Addr(), 7, uintptr(unsafe.Pointer(systemName)), uintptr(unsafe.Pointer(sid)), uintptr(unsafe.Pointer(name)), uintptr(unsafe.Pointer(nameSize)), uintptr(unsafe.Pointer(refDomain)), uintptr(unsafe.Pointer(refDomainSize)), uintptr(unsafe.Pointer(sidNameUse)), 0, 0)
+	if r1 == 0 {
+		err = errnoErr(e1)
+	}
+	return
+}
+
 func lookupPrivilegeDisplayName(systemName string, name *uint16, buffer *uint16, size *uint32, languageId *uint32) (err error) {
 	var _p0 *uint16
 	_p0, err = syscall.UTF16PtrFromString(systemName)
@@ -380,25 +399,25 @@ func setFileCompletionNotificationModes(h syscall.Handle, flags uint8) (err erro
 	return
 }
 
-func ntCreateNamedPipeFile(pipe *syscall.Handle, access uint32, oa *objectAttributes, iosb *ioStatusBlock, share uint32, disposition uint32, options uint32, typ uint32, readMode uint32, completionMode uint32, maxInstances uint32, inboundQuota uint32, outputQuota uint32, timeout *int64) (status ntstatus) {
+func ntCreateNamedPipeFile(pipe *syscall.Handle, access uint32, oa *objectAttributes, iosb *ioStatusBlock, share uint32, disposition uint32, options uint32, typ uint32, readMode uint32, completionMode uint32, maxInstances uint32, inboundQuota uint32, outputQuota uint32, timeout *int64) (status ntStatus) {
 	r0, _, _ := syscall.Syscall15(procNtCreateNamedPipeFile.Addr(), 14, uintptr(unsafe.Pointer(pipe)), uintptr(access), uintptr(unsafe.Pointer(oa)), uintptr(unsafe.Pointer(iosb)), uintptr(share), uintptr(disposition), uintptr(options), uintptr(typ), uintptr(readMode), uintptr(completionMode), uintptr(maxInstances), uintptr(inboundQuota), uintptr(outputQuota), uintptr(unsafe.Pointer(timeout)), 0)
-	status = ntstatus(r0)
+	status = ntStatus(r0)
 	return
 }
 
-func rtlDefaultNpAcl(dacl *uintptr) (status ntstatus) {
+func rtlDefaultNpAcl(dacl *uintptr) (status ntStatus) {
 	r0, _, _ := syscall.Syscall(procRtlDefaultNpAcl.Addr(), 1, uintptr(unsafe.Pointer(dacl)), 0, 0)
-	status = ntstatus(r0)
+	status = ntStatus(r0)
 	return
 }
 
-func rtlDosPathNameToNtPathName(name *uint16, ntName *unicodeString, filePart uintptr, reserved uintptr) (status ntstatus) {
+func rtlDosPathNameToNtPathName(name *uint16, ntName *unicodeString, filePart uintptr, reserved uintptr) (status ntStatus) {
 	r0, _, _ := syscall.Syscall6(procRtlDosPathNameToNtPathName_U.Addr(), 4, uintptr(unsafe.Pointer(name)), uintptr(unsafe.Pointer(ntName)), uintptr(filePart), uintptr(reserved), 0, 0)
-	status = ntstatus(r0)
+	status = ntStatus(r0)
 	return
 }
 
-func rtlNtStatusToDosError(status ntstatus) (winerr error) {
+func rtlNtStatusToDosError(status ntStatus) (winerr error) {
 	r0, _, _ := syscall.Syscall(procRtlNtStatusToDosErrorNoTeb.Addr(), 1, uintptr(status), 0, 0)
 	if r0 != 0 {
 		winerr = syscall.Errno(r0)
@@ -417,11 +436,3 @@ func wsaGetOverlappedResult(h syscall.Handle, o *syscall.Overlapped, bytes *uint
 	}
 	return
 }
-
-func bind(s syscall.Handle, name unsafe.Pointer, namelen int32) (err error) {
-	r1, _, e1 := syscall.Syscall(procbind.Addr(), 3, uintptr(s), uintptr(name), uintptr(namelen))
-	if r1 == socketError {
-		err = errnoErr(e1)
-	}
-	return
-}
diff --git a/cluster-autoscaler/vendor/github.com/Microsoft/hcsshim/hcn/hcn.go b/cluster-autoscaler/vendor/github.com/Microsoft/hcsshim/hcn/hcn.go
deleted file mode 100644
index eefd88d8562e..000000000000
--- a/cluster-autoscaler/vendor/github.com/Microsoft/hcsshim/hcn/hcn.go
+++ /dev/null
@@ -1,304 +0,0 @@
-// Package hcn is a shim for the Host Compute Networking (HCN) service, which manages networking for Windows Server
-// containers and Hyper-V containers. Previous to RS5, HCN was referred to as Host Networking Service (HNS).
-package hcn
-
-import (
-	"encoding/json"
-	"fmt"
-	"syscall"
-
-	"github.com/Microsoft/go-winio/pkg/guid"
-)
-
-//go:generate go run ../mksyscall_windows.go -output zsyscall_windows.go hcn.go
-
-/// HNS V1 API
-
-//sys SetCurrentThreadCompartmentId(compartmentId uint32) (hr error) = iphlpapi.SetCurrentThreadCompartmentId
-//sys _hnsCall(method string, path string, object string, response **uint16) (hr error) = vmcompute.HNSCall?
-
-/// HCN V2 API
-
-// Network
-//sys hcnEnumerateNetworks(query string, networks **uint16, result **uint16) (hr error) = computenetwork.HcnEnumerateNetworks?
-//sys hcnCreateNetwork(id *_guid, settings string, network *hcnNetwork, result **uint16) (hr error) = computenetwork.HcnCreateNetwork?
-//sys hcnOpenNetwork(id *_guid, network *hcnNetwork, result **uint16) (hr error) = computenetwork.HcnOpenNetwork?
-//sys hcnModifyNetwork(network hcnNetwork, settings string, result **uint16) (hr error) = computenetwork.HcnModifyNetwork?
-//sys hcnQueryNetworkProperties(network hcnNetwork, query string, properties **uint16, result **uint16) (hr error) = computenetwork.HcnQueryNetworkProperties?
-//sys hcnDeleteNetwork(id *_guid, result **uint16) (hr error) = computenetwork.HcnDeleteNetwork?
-//sys hcnCloseNetwork(network hcnNetwork) (hr error) = computenetwork.HcnCloseNetwork?
-
-// Endpoint
-//sys hcnEnumerateEndpoints(query string, endpoints **uint16, result **uint16) (hr error) = computenetwork.HcnEnumerateEndpoints?
-//sys hcnCreateEndpoint(network hcnNetwork, id *_guid, settings string, endpoint *hcnEndpoint, result **uint16) (hr error) = computenetwork.HcnCreateEndpoint?
-//sys hcnOpenEndpoint(id *_guid, endpoint *hcnEndpoint, result **uint16) (hr error) = computenetwork.HcnOpenEndpoint?
-//sys hcnModifyEndpoint(endpoint hcnEndpoint, settings string, result **uint16) (hr error) = computenetwork.HcnModifyEndpoint?
-//sys hcnQueryEndpointProperties(endpoint hcnEndpoint, query string, properties **uint16, result **uint16) (hr error) = computenetwork.HcnQueryEndpointProperties?
-//sys hcnDeleteEndpoint(id *_guid, result **uint16) (hr error) = computenetwork.HcnDeleteEndpoint?
-//sys hcnCloseEndpoint(endpoint hcnEndpoint) (hr error) = computenetwork.HcnCloseEndpoint?
-
-// Namespace
-//sys hcnEnumerateNamespaces(query string, namespaces **uint16, result **uint16) (hr error) = computenetwork.HcnEnumerateNamespaces?
-//sys hcnCreateNamespace(id *_guid, settings string, namespace *hcnNamespace, result **uint16) (hr error) = computenetwork.HcnCreateNamespace?
-//sys hcnOpenNamespace(id *_guid, namespace *hcnNamespace, result **uint16) (hr error) = computenetwork.HcnOpenNamespace?
-//sys hcnModifyNamespace(namespace hcnNamespace, settings string, result **uint16) (hr error) = computenetwork.HcnModifyNamespace?
-//sys hcnQueryNamespaceProperties(namespace hcnNamespace, query string, properties **uint16, result **uint16) (hr error) = computenetwork.HcnQueryNamespaceProperties?
-//sys hcnDeleteNamespace(id *_guid, result **uint16) (hr error) = computenetwork.HcnDeleteNamespace?
-//sys hcnCloseNamespace(namespace hcnNamespace) (hr error) = computenetwork.HcnCloseNamespace?
-
-// LoadBalancer
-//sys hcnEnumerateLoadBalancers(query string, loadBalancers **uint16, result **uint16) (hr error) = computenetwork.HcnEnumerateLoadBalancers?
-//sys hcnCreateLoadBalancer(id *_guid, settings string, loadBalancer *hcnLoadBalancer, result **uint16) (hr error) = computenetwork.HcnCreateLoadBalancer?
-//sys hcnOpenLoadBalancer(id *_guid, loadBalancer *hcnLoadBalancer, result **uint16) (hr error) = computenetwork.HcnOpenLoadBalancer?
-//sys hcnModifyLoadBalancer(loadBalancer hcnLoadBalancer, settings string, result **uint16) (hr error) = computenetwork.HcnModifyLoadBalancer?
-//sys hcnQueryLoadBalancerProperties(loadBalancer hcnLoadBalancer, query string, properties **uint16, result **uint16) (hr error) = computenetwork.HcnQueryLoadBalancerProperties?
-//sys hcnDeleteLoadBalancer(id *_guid, result **uint16) (hr error) = computenetwork.HcnDeleteLoadBalancer?
-//sys hcnCloseLoadBalancer(loadBalancer hcnLoadBalancer) (hr error) = computenetwork.HcnCloseLoadBalancer?
-
-// SDN Routes
-//sys hcnEnumerateRoutes(query string, routes **uint16, result **uint16) (hr error) = computenetwork.HcnEnumerateSdnRoutes?
-//sys hcnCreateRoute(id *_guid, settings string, route *hcnRoute, result **uint16) (hr error) = computenetwork.HcnCreateSdnRoute?
-//sys hcnOpenRoute(id *_guid, route *hcnRoute, result **uint16) (hr error) = computenetwork.HcnOpenSdnRoute?
-//sys hcnModifyRoute(route hcnRoute, settings string, result **uint16) (hr error) = computenetwork.HcnModifySdnRoute?
-//sys hcnQueryRouteProperties(route hcnRoute, query string, properties **uint16, result **uint16) (hr error) = computenetwork.HcnQuerySdnRouteProperties?
-//sys hcnDeleteRoute(id *_guid, result **uint16) (hr error) = computenetwork.HcnDeleteSdnRoute?
-//sys hcnCloseRoute(route hcnRoute) (hr error) = computenetwork.HcnCloseSdnRoute?
-
-type _guid = guid.GUID
-
-type hcnNetwork syscall.Handle
-type hcnEndpoint syscall.Handle
-type hcnNamespace syscall.Handle
-type hcnLoadBalancer syscall.Handle
-type hcnRoute syscall.Handle
-
-// SchemaVersion for HCN Objects/Queries.
-type SchemaVersion = Version // hcnglobals.go
-
-// HostComputeQueryFlags are passed in to a HostComputeQuery to determine which
-// properties of an object are returned.
-type HostComputeQueryFlags uint32
-
-var (
-	// HostComputeQueryFlagsNone returns an object with the standard properties.
-	HostComputeQueryFlagsNone HostComputeQueryFlags
-	// HostComputeQueryFlagsDetailed returns an object with all properties.
-	HostComputeQueryFlagsDetailed HostComputeQueryFlags = 1
-)
-
-// HostComputeQuery is the format for HCN queries.
-type HostComputeQuery struct {
-	SchemaVersion SchemaVersion `json:""`
-	Flags HostComputeQueryFlags `json:",omitempty"`
-	Filter string `json:",omitempty"`
-}
-
-type ExtraParams struct {
-	Resources json.RawMessage `json:",omitempty"`
-	SharedContainers json.RawMessage `json:",omitempty"`
-	LayeredOn string `json:",omitempty"`
-	SwitchGuid string `json:",omitempty"`
-	UtilityVM string `json:",omitempty"`
-	VirtualMachine string `json:",omitempty"`
-}
-
-type Health struct {
-	Data interface{} `json:",omitempty"`
-	Extra ExtraParams `json:",omitempty"`
-}
-
-// defaultQuery generates HCN Query.
-// Passed into get/enumerate calls to filter results.
-func defaultQuery() HostComputeQuery {
-	query := HostComputeQuery{
-		SchemaVersion: SchemaVersion{
-			Major: 2,
-			Minor: 0,
-		},
-		Flags: HostComputeQueryFlagsNone,
-	}
-	return query
-}
-
-// PlatformDoesNotSupportError happens when users are attempting to use a newer shim on an older OS
-func platformDoesNotSupportError(featureName string) error {
-	return fmt.Errorf("Platform does not support feature %s", featureName)
-}
-
-// V2ApiSupported returns an error if the HCN version does not support the V2 Apis.
-func V2ApiSupported() error {
-	supported, err := GetCachedSupportedFeatures()
-	if err != nil {
-		return err
-	}
-	if supported.Api.V2 {
-		return nil
-	}
-	return platformDoesNotSupportError("V2 Api/Schema")
-}
-
-func V2SchemaVersion() SchemaVersion {
-	return SchemaVersion{
-		Major: 2,
-		Minor: 0,
-	}
-}
-
-// RemoteSubnetSupported returns an error if the HCN version does not support Remote Subnet policies.
-func RemoteSubnetSupported() error {
-	supported, err := GetCachedSupportedFeatures()
-	if err != nil {
-		return err
-	}
-	if supported.RemoteSubnet {
-		return nil
-	}
-	return platformDoesNotSupportError("Remote Subnet")
-}
-
-// HostRouteSupported returns an error if the HCN version does not support Host Route policies.
-func HostRouteSupported() error {
-	supported, err := GetCachedSupportedFeatures()
-	if err != nil {
-		return err
-	}
-	if supported.HostRoute {
-		return nil
-	}
-	return platformDoesNotSupportError("Host Route")
-}
-
-// DSRSupported returns an error if the HCN version does not support Direct Server Return.
-func DSRSupported() error {
-	supported, err := GetCachedSupportedFeatures()
-	if err != nil {
-		return err
-	}
-	if supported.DSR {
-		return nil
-	}
-	return platformDoesNotSupportError("Direct Server Return (DSR)")
-}
-
-// Slash32EndpointPrefixesSupported returns an error if the HCN version does not support configuring endpoints with /32 prefixes.
-func Slash32EndpointPrefixesSupported() error {
-	supported, err := GetCachedSupportedFeatures()
-	if err != nil {
-		return err
-	}
-	if supported.Slash32EndpointPrefixes {
-		return nil
-	}
-	return platformDoesNotSupportError("Slash 32 Endpoint prefixes")
-}
-
-// AclSupportForProtocol252Supported returns an error if the HCN version does not support HNS ACL Policies to support protocol 252 for VXLAN.
-func AclSupportForProtocol252Supported() error {
-	supported, err := GetCachedSupportedFeatures()
-	if err != nil {
-		return err
-	}
-	if supported.AclSupportForProtocol252 {
-		return nil
-	}
-	return platformDoesNotSupportError("HNS ACL Policies to support protocol 252 for VXLAN")
-}
-
-// SessionAffinitySupported returns an error if the HCN version does not support Session Affinity.
-func SessionAffinitySupported() error {
-	supported, err := GetCachedSupportedFeatures()
-	if err != nil {
-		return err
-	}
-	if supported.SessionAffinity {
-		return nil
-	}
-	return platformDoesNotSupportError("Session Affinity")
-}
-
-// IPv6DualStackSupported returns an error if the HCN version does not support IPv6DualStack.
-func IPv6DualStackSupported() error {
-	supported, err := GetCachedSupportedFeatures()
-	if err != nil {
-		return err
-	}
-	if supported.IPv6DualStack {
-		return nil
-	}
-	return platformDoesNotSupportError("IPv6 DualStack")
-}
-
-//L4proxySupported returns an error if the HCN verison does not support L4Proxy
-func L4proxyPolicySupported() error {
-	supported, err := GetCachedSupportedFeatures()
-	if err != nil {
-		return err
-	}
-	if supported.L4Proxy {
-		return nil
-	}
-	return platformDoesNotSupportError("L4ProxyPolicy")
-}
-
-// L4WfpProxySupported returns an error if the HCN verison does not support L4WfpProxy
-func L4WfpProxyPolicySupported() error {
-	supported, err := GetCachedSupportedFeatures()
-	if err != nil {
-		return err
-	}
-	if supported.L4WfpProxy {
-		return nil
-	}
-	return platformDoesNotSupportError("L4WfpProxyPolicy")
-}
-
-// SetPolicySupported returns an error if the HCN version does not support SetPolicy.
-func SetPolicySupported() error {
-	supported, err := GetCachedSupportedFeatures()
-	if err != nil {
-		return err
-	}
-	if supported.SetPolicy {
-		return nil
-	}
-	return platformDoesNotSupportError("SetPolicy")
-}
-
-// VxlanPortSupported returns an error if the HCN version does not support configuring the VXLAN TCP port.
-func VxlanPortSupported() error {
-	supported, err := GetCachedSupportedFeatures()
-	if err != nil {
-		return err
-	}
-	if supported.VxlanPort {
-		return nil
-	}
-	return platformDoesNotSupportError("VXLAN port configuration")
-}
-
-// TierAclPolicySupported returns an error if the HCN version does not support configuring the TierAcl.
-func TierAclPolicySupported() error {
-	supported, err := GetCachedSupportedFeatures()
-	if err != nil {
-		return err
-	}
-	if supported.TierAcl {
-		return nil
-	}
-	return platformDoesNotSupportError("TierAcl")
-}
-
-// RequestType are the different operations performed to settings.
-// Used to update the settings of Endpoint/Namespace objects.
-type RequestType string
-
-var (
-	// RequestTypeAdd adds the provided settings object.
-	RequestTypeAdd RequestType = "Add"
-	// RequestTypeRemove removes the provided settings object.
-	RequestTypeRemove RequestType = "Remove"
-	// RequestTypeUpdate replaces settings with the ones provided.
-	RequestTypeUpdate RequestType = "Update"
-	// RequestTypeRefresh refreshes the settings provided.
-	RequestTypeRefresh RequestType = "Refresh"
-)
diff --git a/cluster-autoscaler/vendor/github.com/Microsoft/hcsshim/hcn/hcnendpoint.go b/cluster-autoscaler/vendor/github.com/Microsoft/hcsshim/hcn/hcnendpoint.go
deleted file mode 100644
index 545e8639d6cf..000000000000
--- a/cluster-autoscaler/vendor/github.com/Microsoft/hcsshim/hcn/hcnendpoint.go
+++ /dev/null
@@ -1,388 +0,0 @@
-package hcn
-
-import (
-	"encoding/json"
-	"errors"
-
-	"github.com/Microsoft/go-winio/pkg/guid"
-	"github.com/Microsoft/hcsshim/internal/interop"
-	"github.com/sirupsen/logrus"
-)
-
-// IpConfig is assoicated with an endpoint
-type IpConfig struct {
-	IpAddress string `json:",omitempty"`
-	PrefixLength uint8 `json:",omitempty"`
-}
-
-// EndpointFlags are special settings on an endpoint.
-type EndpointFlags uint32
-
-var (
-	// EndpointFlagsNone is the default.
-	EndpointFlagsNone EndpointFlags
-	// EndpointFlagsRemoteEndpoint means that an endpoint is on another host.
-	EndpointFlagsRemoteEndpoint EndpointFlags = 1
-)
-
-// HostComputeEndpoint represents a network endpoint
-type HostComputeEndpoint struct {
-	Id string `json:"ID,omitempty"`
-	Name string `json:",omitempty"`
-	HostComputeNetwork string `json:",omitempty"` // GUID
-	HostComputeNamespace string `json:",omitempty"` // GUID
-	Policies []EndpointPolicy `json:",omitempty"`
-	IpConfigurations []IpConfig `json:",omitempty"`
-	Dns Dns `json:",omitempty"`
-	Routes []Route `json:",omitempty"`
-	MacAddress string `json:",omitempty"`
-	Flags EndpointFlags `json:",omitempty"`
-	Health Health `json:",omitempty"`
-	SchemaVersion SchemaVersion `json:",omitempty"`
-}
-
-// EndpointResourceType are the two different Endpoint settings resources.
-type EndpointResourceType string
-
-var (
-	// EndpointResourceTypePolicy is for Endpoint Policies. Ex: ACL, NAT
-	EndpointResourceTypePolicy EndpointResourceType = "Policy"
-	// EndpointResourceTypePort is for Endpoint Port settings.
-	EndpointResourceTypePort EndpointResourceType = "Port"
-)
-
-// ModifyEndpointSettingRequest is the structure used to send request to modify an endpoint.
-// Used to update policy/port on an endpoint.
-type ModifyEndpointSettingRequest struct {
-	ResourceType EndpointResourceType `json:",omitempty"` // Policy, Port
-	RequestType RequestType `json:",omitempty"` // Add, Remove, Update, Refresh
-	Settings json.RawMessage `json:",omitempty"`
-}
-
-// VmEndpointRequest creates a switch port with identifier `PortId`.
-type VmEndpointRequest struct {
-	PortId guid.GUID `json:",omitempty"`
-	VirtualNicName string `json:",omitempty"`
-	VirtualMachineId guid.GUID `json:",omitempty"`
-}
-
-type PolicyEndpointRequest struct {
-	Policies []EndpointPolicy `json:",omitempty"`
-}
-
-func getEndpoint(endpointGuid guid.GUID, query string) (*HostComputeEndpoint, error) {
-	// Open endpoint.
-	var (
-		endpointHandle hcnEndpoint
-		resultBuffer *uint16
-		propertiesBuffer *uint16
-	)
-	hr := hcnOpenEndpoint(&endpointGuid, &endpointHandle, &resultBuffer)
-	if err := checkForErrors("hcnOpenEndpoint", hr, resultBuffer); err != nil {
-		return nil, err
-	}
-	// Query endpoint.
-	hr = hcnQueryEndpointProperties(endpointHandle, query, &propertiesBuffer, &resultBuffer)
-	if err := checkForErrors("hcnQueryEndpointProperties", hr, resultBuffer); err != nil {
-		return nil, err
-	}
-	properties := interop.ConvertAndFreeCoTaskMemString(propertiesBuffer)
-	// Close endpoint.
-	hr = hcnCloseEndpoint(endpointHandle)
-	if err := checkForErrors("hcnCloseEndpoint", hr, nil); err != nil {
-		return nil, err
-	}
-	// Convert output to HostComputeEndpoint
-	var outputEndpoint HostComputeEndpoint
-	if err := json.Unmarshal([]byte(properties), &outputEndpoint); err != nil {
-		return nil, err
-	}
-	return &outputEndpoint, nil
-}
-
-func enumerateEndpoints(query string) ([]HostComputeEndpoint, error) {
-	// Enumerate all Endpoint Guids
-	var (
-		resultBuffer *uint16
-		endpointBuffer *uint16
-	)
-	hr := hcnEnumerateEndpoints(query, &endpointBuffer, &resultBuffer)
-	if err := checkForErrors("hcnEnumerateEndpoints", hr, resultBuffer); err != nil {
-		return nil, err
-	}
-
-	endpoints := interop.ConvertAndFreeCoTaskMemString(endpointBuffer)
-	var endpointIds []guid.GUID
-	err := json.Unmarshal([]byte(endpoints), &endpointIds)
-	if err != nil {
-		return nil, err
-	}
-
-	var outputEndpoints []HostComputeEndpoint
-	for _, endpointGuid := range endpointIds {
-		endpoint, err := getEndpoint(endpointGuid, query)
-		if err != nil {
-			return nil, err
-		}
-		outputEndpoints = append(outputEndpoints, *endpoint)
-	}
-	return outputEndpoints, nil
-}
-
-func createEndpoint(networkId string, endpointSettings string) (*HostComputeEndpoint, error) {
-	networkGuid, err := guid.FromString(networkId)
-	if err != nil {
-		return nil, errInvalidNetworkID
-	}
-	// Open network.
-	var networkHandle hcnNetwork
-	var resultBuffer *uint16
-	hr := hcnOpenNetwork(&networkGuid, &networkHandle, &resultBuffer)
-	if err := checkForErrors("hcnOpenNetwork", hr, resultBuffer); err != nil {
-		return nil, err
-	}
-	// Create endpoint.
-	endpointId := guid.GUID{}
-	var endpointHandle hcnEndpoint
-	hr = hcnCreateEndpoint(networkHandle, &endpointId, endpointSettings, &endpointHandle, &resultBuffer)
-	if err := checkForErrors("hcnCreateEndpoint", hr, resultBuffer); err != nil {
-		return nil, err
-	}
-	// Query endpoint.
-	hcnQuery := defaultQuery()
-	query, err := json.Marshal(hcnQuery)
-	if err != nil {
-		return nil, err
-	}
-	var propertiesBuffer *uint16
-	hr = hcnQueryEndpointProperties(endpointHandle, string(query), &propertiesBuffer, &resultBuffer)
-	if err := checkForErrors("hcnQueryEndpointProperties", hr, resultBuffer); err != nil {
-		return nil, err
-	}
-	properties := interop.ConvertAndFreeCoTaskMemString(propertiesBuffer)
-	// Close endpoint.
-	hr = hcnCloseEndpoint(endpointHandle)
-	if err := checkForErrors("hcnCloseEndpoint", hr, nil); err != nil {
-		return nil, err
-	}
-	// Close network.
-	hr = hcnCloseNetwork(networkHandle)
-	if err := checkForErrors("hcnCloseNetwork", hr, nil); err != nil {
-		return nil, err
-	}
-	// Convert output to HostComputeEndpoint
-	var outputEndpoint HostComputeEndpoint
-	if err := json.Unmarshal([]byte(properties), &outputEndpoint); err != nil {
-		return nil, err
-	}
-	return &outputEndpoint, nil
-}
-
-func modifyEndpoint(endpointId string, settings string) (*HostComputeEndpoint, error) {
-	endpointGuid, err := guid.FromString(endpointId)
-	if err != nil {
-		return nil, errInvalidEndpointID
-	}
-	// Open endpoint
-	var (
-		endpointHandle hcnEndpoint
-		resultBuffer *uint16
-		propertiesBuffer *uint16
-	)
-	hr := hcnOpenEndpoint(&endpointGuid, &endpointHandle, &resultBuffer)
-	if err := checkForErrors("hcnOpenEndpoint", hr, resultBuffer); err != nil {
-		return nil, err
-	}
-	// Modify endpoint
-	hr = hcnModifyEndpoint(endpointHandle, settings, &resultBuffer)
-	if err := checkForErrors("hcnModifyEndpoint", hr, resultBuffer); err != nil {
-		return nil, err
-	}
-	// Query endpoint.
-	hcnQuery := defaultQuery()
-	query, err := json.Marshal(hcnQuery)
-	if err != nil {
-		return nil, err
-	}
-	hr = hcnQueryEndpointProperties(endpointHandle, string(query), &propertiesBuffer, &resultBuffer)
-	if err := checkForErrors("hcnQueryEndpointProperties", hr, resultBuffer); err != nil {
-		return nil, err
-	}
-	properties := interop.ConvertAndFreeCoTaskMemString(propertiesBuffer)
-	// Close endpoint.
-	hr = hcnCloseEndpoint(endpointHandle)
-	if err := checkForErrors("hcnCloseEndpoint", hr, nil); err != nil {
-		return nil, err
-	}
-	// Convert output to HostComputeEndpoint
-	var outputEndpoint HostComputeEndpoint
-	if err := json.Unmarshal([]byte(properties), &outputEndpoint); err != nil {
-		return nil, err
-	}
-	return &outputEndpoint, nil
-}
-
-func deleteEndpoint(endpointId string) error {
-	endpointGuid, err := guid.FromString(endpointId)
-	if err != nil {
-		return errInvalidEndpointID
-	}
-	var resultBuffer *uint16
-	hr := hcnDeleteEndpoint(&endpointGuid, &resultBuffer)
-	if err := checkForErrors("hcnDeleteEndpoint", hr, resultBuffer); err != nil {
-		return err
-	}
-	return nil
-}
-
-// ListEndpoints makes a call to list all available endpoints.
-func ListEndpoints() ([]HostComputeEndpoint, error) {
-	hcnQuery := defaultQuery()
-	endpoints, err := ListEndpointsQuery(hcnQuery)
-	if err != nil {
-		return nil, err
-	}
-	return endpoints, nil
-}
-
-// ListEndpointsQuery makes a call to query the list of available endpoints.
-func ListEndpointsQuery(query HostComputeQuery) ([]HostComputeEndpoint, error) {
-	queryJson, err := json.Marshal(query)
-	if err != nil {
-		return nil, err
-	}
-
-	endpoints, err := enumerateEndpoints(string(queryJson))
-	if err != nil {
-		return nil, err
-	}
-	return endpoints, nil
-}
-
-// ListEndpointsOfNetwork queries the list of endpoints on a network.
-func ListEndpointsOfNetwork(networkId string) ([]HostComputeEndpoint, error) {
-	hcnQuery := defaultQuery()
-	// TODO: Once query can convert schema, change to {HostComputeNetwork:networkId}
-	mapA := map[string]string{"VirtualNetwork": networkId}
-	filter, err := json.Marshal(mapA)
-	if err != nil {
-		return nil, err
-	}
-	hcnQuery.Filter = string(filter)
-
-	return ListEndpointsQuery(hcnQuery)
-}
-
-// GetEndpointByID returns an endpoint specified by Id
-func GetEndpointByID(endpointId string) (*HostComputeEndpoint, error) {
-	hcnQuery := defaultQuery()
-	mapA := map[string]string{"ID": endpointId}
-	filter, err := json.Marshal(mapA)
-	if err != nil {
-		return nil, err
-	}
-	hcnQuery.Filter = string(filter)
-
-	endpoints, err := ListEndpointsQuery(hcnQuery)
-	if err != nil {
-		return nil, err
-	}
-	if len(endpoints) == 0 {
-		return nil, EndpointNotFoundError{EndpointID: endpointId}
-	}
-	return &endpoints[0], err
-}
-
-// GetEndpointByName returns an endpoint specified by Name
-func GetEndpointByName(endpointName string) (*HostComputeEndpoint, error) {
-	hcnQuery := defaultQuery()
-	mapA := map[string]string{"Name": endpointName}
-	filter, err := json.Marshal(mapA)
-	if err != nil {
-		return nil, err
-	}
-	hcnQuery.Filter = string(filter)
-
-	endpoints, err := ListEndpointsQuery(hcnQuery)
-	if err != nil {
-		return nil, err
-	}
-	if len(endpoints) == 0 {
-		return nil, EndpointNotFoundError{EndpointName: endpointName}
-	}
-	return &endpoints[0], err
-}
-
-// Create Endpoint.
-func (endpoint *HostComputeEndpoint) Create() (*HostComputeEndpoint, error) {
-	logrus.Debugf("hcn::HostComputeEndpoint::Create id=%s", endpoint.Id)
-
-	if endpoint.HostComputeNamespace != "" {
-		return nil, errors.New("endpoint create error, endpoint json HostComputeNamespace is read only and should not be set")
-	}
-
-	jsonString, err := json.Marshal(endpoint)
-	if err != nil {
-		return nil, err
-	}
-
-	logrus.Debugf("hcn::HostComputeEndpoint::Create JSON: %s", jsonString)
-	endpoint, hcnErr := createEndpoint(endpoint.HostComputeNetwork, string(jsonString))
-	if hcnErr != nil {
-		return nil, hcnErr
-	}
-	return endpoint, nil
-}
-
-// Delete Endpoint.
-func (endpoint *HostComputeEndpoint) Delete() error {
-	logrus.Debugf("hcn::HostComputeEndpoint::Delete id=%s", endpoint.Id)
-
-	if err := deleteEndpoint(endpoint.Id); err != nil {
-		return err
-	}
-	return nil
-}
-
-// ModifyEndpointSettings updates the Port/Policy of an Endpoint.
-func ModifyEndpointSettings(endpointId string, request *ModifyEndpointSettingRequest) error {
-	logrus.Debugf("hcn::HostComputeEndpoint::ModifyEndpointSettings id=%s", endpointId)
-
-	endpointSettingsRequest, err := json.Marshal(request)
-	if err != nil {
-		return err
-	}
-
-	_, err = modifyEndpoint(endpointId, string(endpointSettingsRequest))
-	if err != nil {
-		return err
-	}
-	return nil
-}
-
-// ApplyPolicy applies a Policy (ex: ACL) on the Endpoint.
-func (endpoint *HostComputeEndpoint) ApplyPolicy(requestType RequestType, endpointPolicy PolicyEndpointRequest) error {
-	logrus.Debugf("hcn::HostComputeEndpoint::ApplyPolicy id=%s", endpoint.Id)
-
-	settingsJson, err := json.Marshal(endpointPolicy)
-	if err != nil {
-		return err
-	}
-	requestMessage := &ModifyEndpointSettingRequest{
-		ResourceType: EndpointResourceTypePolicy,
-		RequestType: requestType,
-		Settings: settingsJson,
-	}
-
-	return ModifyEndpointSettings(endpoint.Id, requestMessage)
-}
-
-// NamespaceAttach modifies a Namespace to add an endpoint.
-func (endpoint *HostComputeEndpoint) NamespaceAttach(namespaceId string) error {
-	return AddNamespaceEndpoint(namespaceId, endpoint.Id)
-}
-
-// NamespaceDetach modifies a Namespace to remove an endpoint.
-func (endpoint *HostComputeEndpoint) NamespaceDetach(namespaceId string) error {
-	return RemoveNamespaceEndpoint(namespaceId, endpoint.Id)
-}
diff --git a/cluster-autoscaler/vendor/github.com/Microsoft/hcsshim/hcn/hcnerrors.go b/cluster-autoscaler/vendor/github.com/Microsoft/hcsshim/hcn/hcnerrors.go
deleted file mode 100644
index ad30d320d97e..000000000000
--- a/cluster-autoscaler/vendor/github.com/Microsoft/hcsshim/hcn/hcnerrors.go
+++ /dev/null
@@ -1,164 +0,0 @@
-// Package hcn is a shim for the Host Compute Networking (HCN) service, which manages networking for Windows Server
-// containers and Hyper-V containers. Previous to RS5, HCN was referred to as Host Networking Service (HNS).
-package hcn
-
-import (
-	"errors"
-	"fmt"
-
-	"github.com/Microsoft/hcsshim/internal/hcs"
-	"github.com/Microsoft/hcsshim/internal/hcserror"
-	"github.com/Microsoft/hcsshim/internal/interop"
-	"github.com/sirupsen/logrus"
-)
-
-var (
-	errInvalidNetworkID = errors.New("invalid network ID")
-	errInvalidEndpointID = errors.New("invalid endpoint ID")
-	errInvalidNamespaceID = errors.New("invalid namespace ID")
-	errInvalidLoadBalancerID = errors.New("invalid load balancer ID")
-	errInvalidRouteID = errors.New("invalid route ID")
-)
-
-func checkForErrors(methodName string, hr error, resultBuffer *uint16) error {
-	errorFound := false
-
-	if hr != nil {
-		errorFound = true
-	}
-
-	result := ""
-	if resultBuffer != nil {
-		result = interop.ConvertAndFreeCoTaskMemString(resultBuffer)
-		if result != "" {
-			errorFound = true
-		}
-	}
-
-	if errorFound {
-		returnError := new(hr, methodName, result)
-		logrus.Debugf(returnError.Error()) // HCN errors logged for debugging.
-		return returnError
-	}
-
-	return nil
-}
-
-type ErrorCode uint32
-
-// For common errors, define the error as it is in windows, so we can quickly determine it later
-const (
-	ERROR_NOT_FOUND = 0x490
-	HCN_E_PORT_ALREADY_EXISTS ErrorCode = 0x803b0013
-)
-
-type HcnError struct {
-	*hcserror.HcsError
-	code ErrorCode
-}
-
-func (e *HcnError) Error() string {
-	return e.HcsError.Error()
-}
-
-func CheckErrorWithCode(err error, code ErrorCode) bool {
-	hcnError, ok := err.(*HcnError)
-	if ok {
-		return hcnError.code == code
-	}
-	return false
-}
-
-func IsElementNotFoundError(err error) bool {
-	return CheckErrorWithCode(err, ERROR_NOT_FOUND)
-}
-
-func IsPortAlreadyExistsError(err error) bool {
-	return CheckErrorWithCode(err, HCN_E_PORT_ALREADY_EXISTS)
-}
-
-func new(hr error, title string, rest string) error {
-	err := &HcnError{}
-	hcsError := hcserror.New(hr, title, rest)
-	err.HcsError = hcsError.(*hcserror.HcsError)
-	err.code = ErrorCode(hcserror.Win32FromError(hr))
-	return err
-}
-
-//
-// Note that the below errors are not errors returned by hcn itself
-// we wish to seperate them as they are shim usage error
-//
-
-// NetworkNotFoundError results from a failed seach for a network by Id or Name
-type NetworkNotFoundError struct {
-	NetworkName string
-	NetworkID string
-}
-
-func (e NetworkNotFoundError) Error() string {
-	if e.NetworkName != "" {
-		return fmt.Sprintf("Network name %q not found", e.NetworkName)
-	}
-	return fmt.Sprintf("Network ID %q not found", e.NetworkID)
-}
-
-// EndpointNotFoundError results from a failed seach for an endpoint by Id or Name
-type EndpointNotFoundError struct {
-	EndpointName string
-	EndpointID string
-}
-
-func (e EndpointNotFoundError) Error() string {
-	if e.EndpointName != "" {
-		return fmt.Sprintf("Endpoint name %q not found", e.EndpointName)
-	}
-	return fmt.Sprintf("Endpoint ID %q not found", e.EndpointID)
-}
-
-// NamespaceNotFoundError results from a failed seach for a namsepace by Id
-type NamespaceNotFoundError struct {
-	NamespaceID string
-}
-
-func (e NamespaceNotFoundError) Error() string {
-	return fmt.Sprintf("Namespace ID %q not found", e.NamespaceID)
-}
-
-// LoadBalancerNotFoundError results from a failed seach for a loadbalancer by Id
-type LoadBalancerNotFoundError struct {
-	LoadBalancerId string
-}
-
-func (e LoadBalancerNotFoundError) Error() string {
-	return fmt.Sprintf("LoadBalancer %q not found", e.LoadBalancerId)
-}
-
-// RouteNotFoundError results from a failed seach for a route by Id
-type RouteNotFoundError struct {
-	RouteId string
-}
-
-func (e RouteNotFoundError) Error() string {
-	return fmt.Sprintf("SDN Route %q not found", e.RouteId)
-}
-
-// IsNotFoundError returns a boolean indicating whether the error was caused by
-// a resource not being found.
-func IsNotFoundError(err error) bool {
-	switch pe := err.(type) {
-	case NetworkNotFoundError:
-		return true
-	case EndpointNotFoundError:
-		return true
-	case NamespaceNotFoundError:
-		return true
-	case LoadBalancerNotFoundError:
-		return true
-	case RouteNotFoundError:
-		return true
-	case *hcserror.HcsError:
-		return pe.Err == hcs.ErrElementNotFound
-	}
-	return false
-}
diff --git a/cluster-autoscaler/vendor/github.com/Microsoft/hcsshim/hcn/hcnglobals.go b/cluster-autoscaler/vendor/github.com/Microsoft/hcsshim/hcn/hcnglobals.go
deleted file mode 100644
index d03c48736da1..000000000000
--- a/cluster-autoscaler/vendor/github.com/Microsoft/hcsshim/hcn/hcnglobals.go
+++ /dev/null
@@ -1,132 +0,0 @@
-package hcn
-
-import (
-	"encoding/json"
-	"fmt"
-	"math"
-
-	"github.com/Microsoft/hcsshim/internal/hcserror"
-	"github.com/Microsoft/hcsshim/internal/interop"
-	"github.com/sirupsen/logrus"
-)
-
-// Globals are all global properties of the HCN Service.
-type Globals struct {
-	Version Version `json:"Version"`
-}
-
-// Version is the HCN Service version.
-type Version struct {
-	Major int `json:"Major"`
-	Minor int `json:"Minor"`
-}
-
-type VersionRange struct {
-	MinVersion Version
-	MaxVersion Version
-}
-
-type VersionRanges []VersionRange
-
-var (
-	// HNSVersion1803 added ACL functionality.
-	HNSVersion1803 = VersionRanges{VersionRange{MinVersion: Version{Major: 7, Minor: 2}, MaxVersion: Version{Major: math.MaxInt32, Minor: math.MaxInt32}}}
-	// V2ApiSupport allows the use of V2 Api calls and V2 Schema.
-	V2ApiSupport = VersionRanges{VersionRange{MinVersion: Version{Major: 9, Minor: 2}, MaxVersion: Version{Major: math.MaxInt32, Minor: math.MaxInt32}}}
-	// Remote Subnet allows for Remote Subnet policies on Overlay networks
-	RemoteSubnetVersion = VersionRanges{VersionRange{MinVersion: Version{Major: 9, Minor: 2}, MaxVersion: Version{Major: math.MaxInt32, Minor: math.MaxInt32}}}
-	// A Host Route policy allows for local container to local host communication Overlay networks
-	HostRouteVersion = VersionRanges{VersionRange{MinVersion: Version{Major: 9, Minor: 2}, MaxVersion: Version{Major: math.MaxInt32, Minor: math.MaxInt32}}}
-	// HNS 9.3 through 10.0 (not included), and 10.2+ allows for Direct Server Return for loadbalancing
-	DSRVersion = VersionRanges{
-		VersionRange{MinVersion: Version{Major: 9, Minor: 3}, MaxVersion: Version{Major: 9, Minor: math.MaxInt32}},
-		VersionRange{MinVersion: Version{Major: 10, Minor: 2}, MaxVersion: Version{Major: math.MaxInt32, Minor: math.MaxInt32}},
-	}
-	// HNS 9.3 through 10.0 (not included) and, 10.4+ provide support for configuring endpoints with /32 prefixes
-	Slash32EndpointPrefixesVersion = VersionRanges{
-		VersionRange{MinVersion: Version{Major: 9, Minor: 3}, MaxVersion: Version{Major: 9, Minor: math.MaxInt32}},
-		VersionRange{MinVersion: Version{Major: 10, Minor: 4}, MaxVersion: Version{Major: math.MaxInt32, Minor: math.MaxInt32}},
-	}
-	// HNS 9.3 through 10.0 (not included) and, 10.4+ allow for HNS ACL Policies to support protocol 252 for VXLAN
-	AclSupportForProtocol252Version = VersionRanges{
-		VersionRange{MinVersion: Version{Major: 11, Minor: 0}, MaxVersion: Version{Major: math.MaxInt32, Minor: math.MaxInt32}},
-	}
-	// HNS 12.0 allows for session affinity for loadbalancing
-	SessionAffinityVersion = VersionRanges{VersionRange{MinVersion: Version{Major: 12, Minor: 0}, MaxVersion: Version{Major: math.MaxInt32, Minor: math.MaxInt32}}}
-	// HNS 11.10+ supports Ipv6 dual stack.
-	IPv6DualStackVersion = VersionRanges{
-		VersionRange{MinVersion: Version{Major: 11, Minor: 10}, MaxVersion: Version{Major: math.MaxInt32, Minor: math.MaxInt32}},
-	}
-	// HNS 13.0 allows for Set Policy support
-	SetPolicyVersion = VersionRanges{VersionRange{MinVersion: Version{Major: 13, Minor: 0}, MaxVersion: Version{Major: math.MaxInt32, Minor: math.MaxInt32}}}
-	// HNS 10.3 allows for VXLAN ports
-	VxlanPortVersion = VersionRanges{VersionRange{MinVersion: Version{Major: 10, Minor: 3}, MaxVersion: Version{Major: math.MaxInt32, Minor: math.MaxInt32}}}
-
-	//HNS 9.5 through 10.0(not included), 10.5 through 11.0(not included), 11.11 through 12.0(not included), 12.1 through 13.0(not included), 13.1+ allows for Network L4Proxy Policy support
-	L4ProxyPolicyVersion = VersionRanges{
-		VersionRange{MinVersion: Version{Major: 9, Minor: 5}, MaxVersion: Version{Major: 9, Minor: math.MaxInt32}},
-		VersionRange{MinVersion: Version{Major: 10, Minor: 5}, MaxVersion: Version{Major: 10, Minor: math.MaxInt32}},
-		VersionRange{MinVersion: Version{Major: 11, Minor: 11}, MaxVersion: Version{Major: 11, Minor: math.MaxInt32}},
-		VersionRange{MinVersion: Version{Major: 12, Minor: 1}, MaxVersion: Version{Major: 12, Minor: math.MaxInt32}},
-		VersionRange{MinVersion: Version{Major: 13, Minor: 1}, MaxVersion: Version{Major: math.MaxInt32, Minor: math.MaxInt32}},
-	}
-
-	//HNS 13.2 allows for L4WfpProxy Policy support
-	L4WfpProxyPolicyVersion = VersionRanges{VersionRange{MinVersion: Version{Major: 13, Minor: 2}, MaxVersion: Version{Major: math.MaxInt32,
Minor: math.MaxInt32}}} - - //HNS 14.0 allows for TierAcl Policy support - TierAclPolicyVersion = VersionRanges{VersionRange{MinVersion: Version{Major: 14, Minor: 0}, MaxVersion: Version{Major: math.MaxInt32, Minor: math.MaxInt32}}} -) - -// GetGlobals returns the global properties of the HCN Service. -func GetGlobals() (*Globals, error) { - var version Version - err := hnsCall("GET", "/globals/version", "", &version) - if err != nil { - return nil, err - } - - globals := &Globals{ - Version: version, - } - - return globals, nil -} - -type hnsResponse struct { - Success bool - Error string - Output json.RawMessage -} - -func hnsCall(method, path, request string, returnResponse interface{}) error { - var responseBuffer *uint16 - logrus.Debugf("[%s]=>[%s] Request : %s", method, path, request) - - err := _hnsCall(method, path, request, &responseBuffer) - if err != nil { - return hcserror.New(err, "hnsCall ", "") - } - response := interop.ConvertAndFreeCoTaskMemString(responseBuffer) - - hnsresponse := &hnsResponse{} - if err = json.Unmarshal([]byte(response), &hnsresponse); err != nil { - return err - } - - if !hnsresponse.Success { - return fmt.Errorf("HNS failed with error : %s", hnsresponse.Error) - } - - if len(hnsresponse.Output) == 0 { - return nil - } - - logrus.Debugf("Network Response : %s", hnsresponse.Output) - err = json.Unmarshal(hnsresponse.Output, returnResponse) - if err != nil { - return err - } - - return nil -} diff --git a/cluster-autoscaler/vendor/github.com/Microsoft/hcsshim/hcn/hcnloadbalancer.go b/cluster-autoscaler/vendor/github.com/Microsoft/hcsshim/hcn/hcnloadbalancer.go deleted file mode 100644 index 1b434b07b3ad..000000000000 --- a/cluster-autoscaler/vendor/github.com/Microsoft/hcsshim/hcn/hcnloadbalancer.go +++ /dev/null @@ -1,311 +0,0 @@ -package hcn - -import ( - "encoding/json" - - "github.com/Microsoft/go-winio/pkg/guid" - "github.com/Microsoft/hcsshim/internal/interop" - "github.com/sirupsen/logrus" -) - -// LoadBalancerPortMapping 
is associated with HostComputeLoadBalancer -type LoadBalancerPortMapping struct { - Protocol uint32 `json:",omitempty"` // EX: TCP = 6, UDP = 17 - InternalPort uint16 `json:",omitempty"` - ExternalPort uint16 `json:",omitempty"` - DistributionType LoadBalancerDistribution `json:",omitempty"` // EX: Distribute per connection = 0, distribute traffic of the same protocol per client IP = 1, distribute per client IP = 2 - Flags LoadBalancerPortMappingFlags `json:",omitempty"` -} - -// HostComputeLoadBalancer represents software load balancer. -type HostComputeLoadBalancer struct { - Id string `json:"ID,omitempty"` - HostComputeEndpoints []string `json:",omitempty"` - SourceVIP string `json:",omitempty"` - FrontendVIPs []string `json:",omitempty"` - PortMappings []LoadBalancerPortMapping `json:",omitempty"` - SchemaVersion SchemaVersion `json:",omitempty"` - Flags LoadBalancerFlags `json:",omitempty"` // 0: None, 1: EnableDirectServerReturn -} - -//LoadBalancerFlags modify settings for a loadbalancer. -type LoadBalancerFlags uint32 - -var ( - // LoadBalancerFlagsNone is the default. - LoadBalancerFlagsNone LoadBalancerFlags = 0 - // LoadBalancerFlagsDSR enables Direct Server Return (DSR) - LoadBalancerFlagsDSR LoadBalancerFlags = 1 - LoadBalancerFlagsIPv6 LoadBalancerFlags = 2 -) - -// LoadBalancerPortMappingFlags are special settings on a loadbalancer. -type LoadBalancerPortMappingFlags uint32 - -var ( - // LoadBalancerPortMappingFlagsNone is the default. - LoadBalancerPortMappingFlagsNone LoadBalancerPortMappingFlags - // LoadBalancerPortMappingFlagsILB enables internal loadbalancing. - LoadBalancerPortMappingFlagsILB LoadBalancerPortMappingFlags = 1 - // LoadBalancerPortMappingFlagsLocalRoutedVIP enables VIP access from the host. - LoadBalancerPortMappingFlagsLocalRoutedVIP LoadBalancerPortMappingFlags = 2 - // LoadBalancerPortMappingFlagsUseMux enables DSR for NodePort access of VIP. 
- LoadBalancerPortMappingFlagsUseMux LoadBalancerPortMappingFlags = 4 - // LoadBalancerPortMappingFlagsPreserveDIP delivers packets with destination IP as the VIP. - LoadBalancerPortMappingFlagsPreserveDIP LoadBalancerPortMappingFlags = 8 -) - -// LoadBalancerDistribution specifies how the loadbalancer distributes traffic. -type LoadBalancerDistribution uint32 - -var ( - // LoadBalancerDistributionNone is the default and loadbalances each connection to the same pod. - LoadBalancerDistributionNone LoadBalancerDistribution - // LoadBalancerDistributionSourceIPProtocol loadbalances all traffic of the same protocol from a client IP to the same pod. - LoadBalancerDistributionSourceIPProtocol LoadBalancerDistribution = 1 - // LoadBalancerDistributionSourceIP loadbalances all traffic from a client IP to the same pod. - LoadBalancerDistributionSourceIP LoadBalancerDistribution = 2 -) - -func getLoadBalancer(loadBalancerGuid guid.GUID, query string) (*HostComputeLoadBalancer, error) { - // Open loadBalancer. - var ( - loadBalancerHandle hcnLoadBalancer - resultBuffer *uint16 - propertiesBuffer *uint16 - ) - hr := hcnOpenLoadBalancer(&loadBalancerGuid, &loadBalancerHandle, &resultBuffer) - if err := checkForErrors("hcnOpenLoadBalancer", hr, resultBuffer); err != nil { - return nil, err - } - // Query loadBalancer. - hr = hcnQueryLoadBalancerProperties(loadBalancerHandle, query, &propertiesBuffer, &resultBuffer) - if err := checkForErrors("hcnQueryLoadBalancerProperties", hr, resultBuffer); err != nil { - return nil, err - } - properties := interop.ConvertAndFreeCoTaskMemString(propertiesBuffer) - // Close loadBalancer. 
- hr = hcnCloseLoadBalancer(loadBalancerHandle) - if err := checkForErrors("hcnCloseLoadBalancer", hr, nil); err != nil { - return nil, err - } - // Convert output to HostComputeLoadBalancer - var outputLoadBalancer HostComputeLoadBalancer - if err := json.Unmarshal([]byte(properties), &outputLoadBalancer); err != nil { - return nil, err - } - return &outputLoadBalancer, nil -} - -func enumerateLoadBalancers(query string) ([]HostComputeLoadBalancer, error) { - // Enumerate all LoadBalancer Guids - var ( - resultBuffer *uint16 - loadBalancerBuffer *uint16 - ) - hr := hcnEnumerateLoadBalancers(query, &loadBalancerBuffer, &resultBuffer) - if err := checkForErrors("hcnEnumerateLoadBalancers", hr, resultBuffer); err != nil { - return nil, err - } - - loadBalancers := interop.ConvertAndFreeCoTaskMemString(loadBalancerBuffer) - var loadBalancerIds []guid.GUID - if err := json.Unmarshal([]byte(loadBalancers), &loadBalancerIds); err != nil { - return nil, err - } - - var outputLoadBalancers []HostComputeLoadBalancer - for _, loadBalancerGuid := range loadBalancerIds { - loadBalancer, err := getLoadBalancer(loadBalancerGuid, query) - if err != nil { - return nil, err - } - outputLoadBalancers = append(outputLoadBalancers, *loadBalancer) - } - return outputLoadBalancers, nil -} - -func createLoadBalancer(settings string) (*HostComputeLoadBalancer, error) { - // Create new loadBalancer. - var ( - loadBalancerHandle hcnLoadBalancer - resultBuffer *uint16 - propertiesBuffer *uint16 - ) - loadBalancerGuid := guid.GUID{} - hr := hcnCreateLoadBalancer(&loadBalancerGuid, settings, &loadBalancerHandle, &resultBuffer) - if err := checkForErrors("hcnCreateLoadBalancer", hr, resultBuffer); err != nil { - return nil, err - } - // Query loadBalancer. 
- hcnQuery := defaultQuery() - query, err := json.Marshal(hcnQuery) - if err != nil { - return nil, err - } - hr = hcnQueryLoadBalancerProperties(loadBalancerHandle, string(query), &propertiesBuffer, &resultBuffer) - if err := checkForErrors("hcnQueryLoadBalancerProperties", hr, resultBuffer); err != nil { - return nil, err - } - properties := interop.ConvertAndFreeCoTaskMemString(propertiesBuffer) - // Close loadBalancer. - hr = hcnCloseLoadBalancer(loadBalancerHandle) - if err := checkForErrors("hcnCloseLoadBalancer", hr, nil); err != nil { - return nil, err - } - // Convert output to HostComputeLoadBalancer - var outputLoadBalancer HostComputeLoadBalancer - if err := json.Unmarshal([]byte(properties), &outputLoadBalancer); err != nil { - return nil, err - } - return &outputLoadBalancer, nil -} - -func deleteLoadBalancer(loadBalancerId string) error { - loadBalancerGuid, err := guid.FromString(loadBalancerId) - if err != nil { - return errInvalidLoadBalancerID - } - var resultBuffer *uint16 - hr := hcnDeleteLoadBalancer(&loadBalancerGuid, &resultBuffer) - if err := checkForErrors("hcnDeleteLoadBalancer", hr, resultBuffer); err != nil { - return err - } - return nil -} - -// ListLoadBalancers makes a call to list all available loadBalancers. -func ListLoadBalancers() ([]HostComputeLoadBalancer, error) { - hcnQuery := defaultQuery() - loadBalancers, err := ListLoadBalancersQuery(hcnQuery) - if err != nil { - return nil, err - } - return loadBalancers, nil -} - -// ListLoadBalancersQuery makes a call to query the list of available loadBalancers. -func ListLoadBalancersQuery(query HostComputeQuery) ([]HostComputeLoadBalancer, error) { - queryJson, err := json.Marshal(query) - if err != nil { - return nil, err - } - - loadBalancers, err := enumerateLoadBalancers(string(queryJson)) - if err != nil { - return nil, err - } - return loadBalancers, nil -} - -// GetLoadBalancerByID returns the LoadBalancer specified by Id. 
-func GetLoadBalancerByID(loadBalancerId string) (*HostComputeLoadBalancer, error) { - hcnQuery := defaultQuery() - mapA := map[string]string{"ID": loadBalancerId} - filter, err := json.Marshal(mapA) - if err != nil { - return nil, err - } - hcnQuery.Filter = string(filter) - - loadBalancers, err := ListLoadBalancersQuery(hcnQuery) - if err != nil { - return nil, err - } - if len(loadBalancers) == 0 { - return nil, LoadBalancerNotFoundError{LoadBalancerId: loadBalancerId} - } - return &loadBalancers[0], err -} - -// Create LoadBalancer. -func (loadBalancer *HostComputeLoadBalancer) Create() (*HostComputeLoadBalancer, error) { - logrus.Debugf("hcn::HostComputeLoadBalancer::Create id=%s", loadBalancer.Id) - - jsonString, err := json.Marshal(loadBalancer) - if err != nil { - return nil, err - } - - logrus.Debugf("hcn::HostComputeLoadBalancer::Create JSON: %s", jsonString) - loadBalancer, hcnErr := createLoadBalancer(string(jsonString)) - if hcnErr != nil { - return nil, hcnErr - } - return loadBalancer, nil -} - -// Delete LoadBalancer. 
-func (loadBalancer *HostComputeLoadBalancer) Delete() error { - logrus.Debugf("hcn::HostComputeLoadBalancer::Delete id=%s", loadBalancer.Id) - - if err := deleteLoadBalancer(loadBalancer.Id); err != nil { - return err - } - return nil -} - -// AddEndpoint add an endpoint to a LoadBalancer -func (loadBalancer *HostComputeLoadBalancer) AddEndpoint(endpoint *HostComputeEndpoint) (*HostComputeLoadBalancer, error) { - logrus.Debugf("hcn::HostComputeLoadBalancer::AddEndpoint loadBalancer=%s endpoint=%s", loadBalancer.Id, endpoint.Id) - - err := loadBalancer.Delete() - if err != nil { - return nil, err - } - - // Add Endpoint to the Existing List - loadBalancer.HostComputeEndpoints = append(loadBalancer.HostComputeEndpoints, endpoint.Id) - - return loadBalancer.Create() -} - -// RemoveEndpoint removes an endpoint from a LoadBalancer -func (loadBalancer *HostComputeLoadBalancer) RemoveEndpoint(endpoint *HostComputeEndpoint) (*HostComputeLoadBalancer, error) { - logrus.Debugf("hcn::HostComputeLoadBalancer::RemoveEndpoint loadBalancer=%s endpoint=%s", loadBalancer.Id, endpoint.Id) - - err := loadBalancer.Delete() - if err != nil { - return nil, err - } - - // Create a list of all the endpoints besides the one being removed - var endpoints []string - for _, endpointReference := range loadBalancer.HostComputeEndpoints { - if endpointReference == endpoint.Id { - continue - } - endpoints = append(endpoints, endpointReference) - } - loadBalancer.HostComputeEndpoints = endpoints - return loadBalancer.Create() -} - -// AddLoadBalancer for the specified endpoints -func AddLoadBalancer(endpoints []HostComputeEndpoint, flags LoadBalancerFlags, portMappingFlags LoadBalancerPortMappingFlags, sourceVIP string, frontendVIPs []string, protocol uint16, internalPort uint16, externalPort uint16) (*HostComputeLoadBalancer, error) { - logrus.Debugf("hcn::HostComputeLoadBalancer::AddLoadBalancer endpointId=%v, LoadBalancerFlags=%v, LoadBalancerPortMappingFlags=%v, sourceVIP=%s, frontendVIPs=%v, 
protocol=%v, internalPort=%v, externalPort=%v", endpoints, flags, portMappingFlags, sourceVIP, frontendVIPs, protocol, internalPort, externalPort) - - loadBalancer := &HostComputeLoadBalancer{ - SourceVIP: sourceVIP, - PortMappings: []LoadBalancerPortMapping{ - { - Protocol: uint32(protocol), - InternalPort: internalPort, - ExternalPort: externalPort, - Flags: portMappingFlags, - }, - }, - FrontendVIPs: frontendVIPs, - SchemaVersion: SchemaVersion{ - Major: 2, - Minor: 0, - }, - Flags: flags, - } - - for _, endpoint := range endpoints { - loadBalancer.HostComputeEndpoints = append(loadBalancer.HostComputeEndpoints, endpoint.Id) - } - - return loadBalancer.Create() -} diff --git a/cluster-autoscaler/vendor/github.com/Microsoft/hcsshim/hcn/hcnnamespace.go b/cluster-autoscaler/vendor/github.com/Microsoft/hcsshim/hcn/hcnnamespace.go deleted file mode 100644 index d2ef2296099a..000000000000 --- a/cluster-autoscaler/vendor/github.com/Microsoft/hcsshim/hcn/hcnnamespace.go +++ /dev/null @@ -1,446 +0,0 @@ -package hcn - -import ( - "encoding/json" - "os" - "syscall" - - "github.com/Microsoft/go-winio/pkg/guid" - icni "github.com/Microsoft/hcsshim/internal/cni" - "github.com/Microsoft/hcsshim/internal/interop" - "github.com/Microsoft/hcsshim/internal/regstate" - "github.com/Microsoft/hcsshim/internal/runhcs" - "github.com/sirupsen/logrus" -) - -// NamespaceResourceEndpoint represents an Endpoint attached to a Namespace. -type NamespaceResourceEndpoint struct { - Id string `json:"ID,"` -} - -// NamespaceResourceContainer represents a Container attached to a Namespace. -type NamespaceResourceContainer struct { - Id string `json:"ID,"` -} - -// NamespaceResourceType determines whether the Namespace resource is a Container or Endpoint. -type NamespaceResourceType string - -var ( - // NamespaceResourceTypeContainer are contianers associated with a Namespace. 
- NamespaceResourceTypeContainer NamespaceResourceType = "Container" - // NamespaceResourceTypeEndpoint are endpoints associated with a Namespace. - NamespaceResourceTypeEndpoint NamespaceResourceType = "Endpoint" -) - -// NamespaceResource is associated with a namespace -type NamespaceResource struct { - Type NamespaceResourceType `json:","` // Container, Endpoint - Data json.RawMessage `json:","` -} - -// NamespaceType determines whether the Namespace is for a Host or Guest -type NamespaceType string - -var ( - // NamespaceTypeHost are host namespaces. - NamespaceTypeHost NamespaceType = "Host" - // NamespaceTypeHostDefault are host namespaces in the default compartment. - NamespaceTypeHostDefault NamespaceType = "HostDefault" - // NamespaceTypeGuest are guest namespaces. - NamespaceTypeGuest NamespaceType = "Guest" - // NamespaceTypeGuestDefault are guest namespaces in the default compartment. - NamespaceTypeGuestDefault NamespaceType = "GuestDefault" -) - -// HostComputeNamespace represents a namespace (AKA compartment) in -type HostComputeNamespace struct { - Id string `json:"ID,omitempty"` - NamespaceId uint32 `json:",omitempty"` - Type NamespaceType `json:",omitempty"` // Host, HostDefault, Guest, GuestDefault - Resources []NamespaceResource `json:",omitempty"` - SchemaVersion SchemaVersion `json:",omitempty"` -} - -// ModifyNamespaceSettingRequest is the structure used to send request to modify a namespace. -// Used to Add/Remove an endpoints and containers to/from a namespace. -type ModifyNamespaceSettingRequest struct { - ResourceType NamespaceResourceType `json:",omitempty"` // Container, Endpoint - RequestType RequestType `json:",omitempty"` // Add, Remove, Update, Refresh - Settings json.RawMessage `json:",omitempty"` -} - -func getNamespace(namespaceGuid guid.GUID, query string) (*HostComputeNamespace, error) { - // Open namespace. 
- var ( - namespaceHandle hcnNamespace - resultBuffer *uint16 - propertiesBuffer *uint16 - ) - hr := hcnOpenNamespace(&namespaceGuid, &namespaceHandle, &resultBuffer) - if err := checkForErrors("hcnOpenNamespace", hr, resultBuffer); err != nil { - return nil, err - } - // Query namespace. - hr = hcnQueryNamespaceProperties(namespaceHandle, query, &propertiesBuffer, &resultBuffer) - if err := checkForErrors("hcnQueryNamespaceProperties", hr, resultBuffer); err != nil { - return nil, err - } - properties := interop.ConvertAndFreeCoTaskMemString(propertiesBuffer) - // Close namespace. - hr = hcnCloseNamespace(namespaceHandle) - if err := checkForErrors("hcnCloseNamespace", hr, nil); err != nil { - return nil, err - } - // Convert output to HostComputeNamespace - var outputNamespace HostComputeNamespace - if err := json.Unmarshal([]byte(properties), &outputNamespace); err != nil { - return nil, err - } - return &outputNamespace, nil -} - -func enumerateNamespaces(query string) ([]HostComputeNamespace, error) { - // Enumerate all Namespace Guids - var ( - resultBuffer *uint16 - namespaceBuffer *uint16 - ) - hr := hcnEnumerateNamespaces(query, &namespaceBuffer, &resultBuffer) - if err := checkForErrors("hcnEnumerateNamespaces", hr, resultBuffer); err != nil { - return nil, err - } - - namespaces := interop.ConvertAndFreeCoTaskMemString(namespaceBuffer) - var namespaceIds []guid.GUID - if err := json.Unmarshal([]byte(namespaces), &namespaceIds); err != nil { - return nil, err - } - - var outputNamespaces []HostComputeNamespace - for _, namespaceGuid := range namespaceIds { - namespace, err := getNamespace(namespaceGuid, query) - if err != nil { - return nil, err - } - outputNamespaces = append(outputNamespaces, *namespace) - } - return outputNamespaces, nil -} - -func createNamespace(settings string) (*HostComputeNamespace, error) { - // Create new namespace. 
- var ( - namespaceHandle hcnNamespace - resultBuffer *uint16 - propertiesBuffer *uint16 - ) - namespaceGuid := guid.GUID{} - hr := hcnCreateNamespace(&namespaceGuid, settings, &namespaceHandle, &resultBuffer) - if err := checkForErrors("hcnCreateNamespace", hr, resultBuffer); err != nil { - return nil, err - } - // Query namespace. - hcnQuery := defaultQuery() - query, err := json.Marshal(hcnQuery) - if err != nil { - return nil, err - } - hr = hcnQueryNamespaceProperties(namespaceHandle, string(query), &propertiesBuffer, &resultBuffer) - if err := checkForErrors("hcnQueryNamespaceProperties", hr, resultBuffer); err != nil { - return nil, err - } - properties := interop.ConvertAndFreeCoTaskMemString(propertiesBuffer) - // Close namespace. - hr = hcnCloseNamespace(namespaceHandle) - if err := checkForErrors("hcnCloseNamespace", hr, nil); err != nil { - return nil, err - } - // Convert output to HostComputeNamespace - var outputNamespace HostComputeNamespace - if err := json.Unmarshal([]byte(properties), &outputNamespace); err != nil { - return nil, err - } - return &outputNamespace, nil -} - -func modifyNamespace(namespaceId string, settings string) (*HostComputeNamespace, error) { - namespaceGuid, err := guid.FromString(namespaceId) - if err != nil { - return nil, errInvalidNamespaceID - } - // Open namespace. - var ( - namespaceHandle hcnNamespace - resultBuffer *uint16 - propertiesBuffer *uint16 - ) - hr := hcnOpenNamespace(&namespaceGuid, &namespaceHandle, &resultBuffer) - if err := checkForErrors("hcnOpenNamespace", hr, resultBuffer); err != nil { - return nil, err - } - // Modify namespace. - hr = hcnModifyNamespace(namespaceHandle, settings, &resultBuffer) - if err := checkForErrors("hcnModifyNamespace", hr, resultBuffer); err != nil { - return nil, err - } - // Query namespace. 
- hcnQuery := defaultQuery() - query, err := json.Marshal(hcnQuery) - if err != nil { - return nil, err - } - hr = hcnQueryNamespaceProperties(namespaceHandle, string(query), &propertiesBuffer, &resultBuffer) - if err := checkForErrors("hcnQueryNamespaceProperties", hr, resultBuffer); err != nil { - return nil, err - } - properties := interop.ConvertAndFreeCoTaskMemString(propertiesBuffer) - // Close namespace. - hr = hcnCloseNamespace(namespaceHandle) - if err := checkForErrors("hcnCloseNamespace", hr, nil); err != nil { - return nil, err - } - // Convert output to Namespace - var outputNamespace HostComputeNamespace - if err := json.Unmarshal([]byte(properties), &outputNamespace); err != nil { - return nil, err - } - return &outputNamespace, nil -} - -func deleteNamespace(namespaceId string) error { - namespaceGuid, err := guid.FromString(namespaceId) - if err != nil { - return errInvalidNamespaceID - } - var resultBuffer *uint16 - hr := hcnDeleteNamespace(&namespaceGuid, &resultBuffer) - if err := checkForErrors("hcnDeleteNamespace", hr, resultBuffer); err != nil { - return err - } - return nil -} - -// ListNamespaces makes a call to list all available namespaces. -func ListNamespaces() ([]HostComputeNamespace, error) { - hcnQuery := defaultQuery() - namespaces, err := ListNamespacesQuery(hcnQuery) - if err != nil { - return nil, err - } - return namespaces, nil -} - -// ListNamespacesQuery makes a call to query the list of available namespaces. -func ListNamespacesQuery(query HostComputeQuery) ([]HostComputeNamespace, error) { - queryJson, err := json.Marshal(query) - if err != nil { - return nil, err - } - - namespaces, err := enumerateNamespaces(string(queryJson)) - if err != nil { - return nil, err - } - return namespaces, nil -} - -// GetNamespaceByID returns the Namespace specified by Id. 
-func GetNamespaceByID(namespaceId string) (*HostComputeNamespace, error) { - hcnQuery := defaultQuery() - mapA := map[string]string{"ID": namespaceId} - filter, err := json.Marshal(mapA) - if err != nil { - return nil, err - } - hcnQuery.Filter = string(filter) - - namespaces, err := ListNamespacesQuery(hcnQuery) - if err != nil { - return nil, err - } - if len(namespaces) == 0 { - return nil, NamespaceNotFoundError{NamespaceID: namespaceId} - } - - return &namespaces[0], err -} - -// GetNamespaceEndpointIds returns the endpoints of the Namespace specified by Id. -func GetNamespaceEndpointIds(namespaceId string) ([]string, error) { - namespace, err := GetNamespaceByID(namespaceId) - if err != nil { - return nil, err - } - var endpointsIds []string - for _, resource := range namespace.Resources { - if resource.Type == "Endpoint" { - var endpointResource NamespaceResourceEndpoint - if err := json.Unmarshal([]byte(resource.Data), &endpointResource); err != nil { - return nil, err - } - endpointsIds = append(endpointsIds, endpointResource.Id) - } - } - return endpointsIds, nil -} - -// GetNamespaceContainerIds returns the containers of the Namespace specified by Id. -func GetNamespaceContainerIds(namespaceId string) ([]string, error) { - namespace, err := GetNamespaceByID(namespaceId) - if err != nil { - return nil, err - } - var containerIds []string - for _, resource := range namespace.Resources { - if resource.Type == "Container" { - var containerResource NamespaceResourceContainer - if err := json.Unmarshal([]byte(resource.Data), &containerResource); err != nil { - return nil, err - } - containerIds = append(containerIds, containerResource.Id) - } - } - return containerIds, nil -} - -// NewNamespace creates a new Namespace object -func NewNamespace(nsType NamespaceType) *HostComputeNamespace { - return &HostComputeNamespace{ - Type: nsType, - SchemaVersion: V2SchemaVersion(), - } -} - -// Create Namespace.
-func (namespace *HostComputeNamespace) Create() (*HostComputeNamespace, error) { - logrus.Debugf("hcn::HostComputeNamespace::Create id=%s", namespace.Id) - - jsonString, err := json.Marshal(namespace) - if err != nil { - return nil, err - } - - logrus.Debugf("hcn::HostComputeNamespace::Create JSON: %s", jsonString) - namespace, hcnErr := createNamespace(string(jsonString)) - if hcnErr != nil { - return nil, hcnErr - } - return namespace, nil -} - -// Delete Namespace. -func (namespace *HostComputeNamespace) Delete() error { - logrus.Debugf("hcn::HostComputeNamespace::Delete id=%s", namespace.Id) - - if err := deleteNamespace(namespace.Id); err != nil { - return err - } - return nil -} - -// Sync Namespace endpoints with the appropriate sandbox container holding the -// network namespace open. If no sandbox container is found for this namespace, -// this method is considered a success and will not return an error in -// this case. If the sandbox container is found and a sync is initiated, any -// failures will be returned via this method. -// -// This call initiates a sync between endpoints and the matching UtilityVM -// hosting those endpoints. It is safe to call for any `NamespaceType` but -// `NamespaceTypeGuest` is the only case when a sync will actually occur. For -// `NamespaceTypeHost` the process container will be automatically synchronized -// when the endpoint is added via `AddNamespaceEndpoint`. -// -// Note: This method syncs both additions and removals of endpoints from a -// `NamespaceTypeGuest` namespace. -func (namespace *HostComputeNamespace) Sync() error { - logrus.WithField("id", namespace.Id).Debugf("hcs::HostComputeNamespace::Sync") - - // We only attempt a sync for namespace guest.
- if namespace.Type != NamespaceTypeGuest { - return nil - } - - // Look in the registry for the key to map from namespace id to pod-id - cfg, err := icni.LoadPersistedNamespaceConfig(namespace.Id) - if err != nil { - if regstate.IsNotFoundError(err) { - return nil - } - return err - } - req := runhcs.VMRequest{ - ID: cfg.ContainerID, - Op: runhcs.OpSyncNamespace, - } - shimPath := runhcs.VMPipePath(cfg.HostUniqueID) - if err := runhcs.IssueVMRequest(shimPath, &req); err != nil { - // The shim is likely gone. Simply ignore the sync as if it didn't exist. - if perr, ok := err.(*os.PathError); ok && perr.Err == syscall.ERROR_FILE_NOT_FOUND { - // Remove the reg key; there is no point in trying again - _ = cfg.Remove() - return nil - } - f := map[string]interface{}{ - "id": namespace.Id, - "container-id": cfg.ContainerID, - } - logrus.WithFields(f). - WithError(err). - Debugf("hcs::HostComputeNamespace::Sync failed to connect to shim pipe: '%s'", shimPath) - return err - } - return nil -} - -// ModifyNamespaceSettings updates the Endpoints/Containers of a Namespace. -func ModifyNamespaceSettings(namespaceId string, request *ModifyNamespaceSettingRequest) error { - logrus.Debugf("hcn::HostComputeNamespace::ModifyNamespaceSettings id=%s", namespaceId) - - namespaceSettings, err := json.Marshal(request) - if err != nil { - return err - } - - _, err = modifyNamespace(namespaceId, string(namespaceSettings)) - if err != nil { - return err - } - return nil -} - -// AddNamespaceEndpoint adds an endpoint to a Namespace.
-func AddNamespaceEndpoint(namespaceId string, endpointId string) error { - logrus.Debugf("hcn::HostComputeEndpoint::AddNamespaceEndpoint id=%s", endpointId) - - mapA := map[string]string{"EndpointId": endpointId} - settingsJson, err := json.Marshal(mapA) - if err != nil { - return err - } - requestMessage := &ModifyNamespaceSettingRequest{ - ResourceType: NamespaceResourceTypeEndpoint, - RequestType: RequestTypeAdd, - Settings: settingsJson, - } - - return ModifyNamespaceSettings(namespaceId, requestMessage) -} - -// RemoveNamespaceEndpoint removes an endpoint from a Namespace. -func RemoveNamespaceEndpoint(namespaceId string, endpointId string) error { - logrus.Debugf("hcn::HostComputeNamespace::RemoveNamespaceEndpoint id=%s", endpointId) - - mapA := map[string]string{"EndpointId": endpointId} - settingsJson, err := json.Marshal(mapA) - if err != nil { - return err - } - requestMessage := &ModifyNamespaceSettingRequest{ - ResourceType: NamespaceResourceTypeEndpoint, - RequestType: RequestTypeRemove, - Settings: settingsJson, - } - - return ModifyNamespaceSettings(namespaceId, requestMessage) -} diff --git a/cluster-autoscaler/vendor/github.com/Microsoft/hcsshim/hcn/hcnnetwork.go b/cluster-autoscaler/vendor/github.com/Microsoft/hcsshim/hcn/hcnnetwork.go deleted file mode 100644 index c36b136387a9..000000000000 --- a/cluster-autoscaler/vendor/github.com/Microsoft/hcsshim/hcn/hcnnetwork.go +++ /dev/null @@ -1,462 +0,0 @@ -package hcn - -import ( - "encoding/json" - "errors" - - "github.com/Microsoft/go-winio/pkg/guid" - "github.com/Microsoft/hcsshim/internal/interop" - "github.com/sirupsen/logrus" -) - -// Route is associated with a subnet. -type Route struct { - NextHop string `json:",omitempty"` - DestinationPrefix string `json:",omitempty"` - Metric uint16 `json:",omitempty"` -} - -// Subnet is associated with a Ipam. 
-type Subnet struct { - IpAddressPrefix string `json:",omitempty"` - Policies []json.RawMessage `json:",omitempty"` - Routes []Route `json:",omitempty"` -} - -// Ipam (Internet Protocol Address Management) is associated with a network -// and represents the address space(s) of a network. -type Ipam struct { - Type string `json:",omitempty"` // Ex: Static, DHCP - Subnets []Subnet `json:",omitempty"` -} - -// MacRange is associated with MacPool and respresents the start and end addresses. -type MacRange struct { - StartMacAddress string `json:",omitempty"` - EndMacAddress string `json:",omitempty"` -} - -// MacPool is associated with a network and represents pool of MacRanges. -type MacPool struct { - Ranges []MacRange `json:",omitempty"` -} - -// Dns (Domain Name System is associated with a network). -type Dns struct { - Domain string `json:",omitempty"` - Search []string `json:",omitempty"` - ServerList []string `json:",omitempty"` - Options []string `json:",omitempty"` -} - -// NetworkType are various networks. -type NetworkType string - -// NetworkType const -const ( - NAT NetworkType = "NAT" - Transparent NetworkType = "Transparent" - L2Bridge NetworkType = "L2Bridge" - L2Tunnel NetworkType = "L2Tunnel" - ICS NetworkType = "ICS" - Private NetworkType = "Private" - Overlay NetworkType = "Overlay" -) - -// NetworkFlags are various network flags. 
-type NetworkFlags uint32 - -// NetworkFlags const -const ( - None NetworkFlags = 0 - EnableNonPersistent NetworkFlags = 8 -) - -// HostComputeNetwork represents a network -type HostComputeNetwork struct { - Id string `json:"ID,omitempty"` - Name string `json:",omitempty"` - Type NetworkType `json:",omitempty"` - Policies []NetworkPolicy `json:",omitempty"` - MacPool MacPool `json:",omitempty"` - Dns Dns `json:",omitempty"` - Ipams []Ipam `json:",omitempty"` - Flags NetworkFlags `json:",omitempty"` // 0: None - Health Health `json:",omitempty"` - SchemaVersion SchemaVersion `json:",omitempty"` -} - -// NetworkResourceType are the 3 different Network settings resources. -type NetworkResourceType string - -var ( - // NetworkResourceTypePolicy is for Network's policies. Ex: RemoteSubnet - NetworkResourceTypePolicy NetworkResourceType = "Policy" - // NetworkResourceTypeDNS is for Network's DNS settings. - NetworkResourceTypeDNS NetworkResourceType = "DNS" - // NetworkResourceTypeExtension is for Network's extension settings. - NetworkResourceTypeExtension NetworkResourceType = "Extension" -) - -// ModifyNetworkSettingRequest is the structure used to send request to modify an network. -// Used to update DNS/extension/policy on an network. -type ModifyNetworkSettingRequest struct { - ResourceType NetworkResourceType `json:",omitempty"` // Policy, DNS, Extension - RequestType RequestType `json:",omitempty"` // Add, Remove, Update, Refresh - Settings json.RawMessage `json:",omitempty"` -} - -type PolicyNetworkRequest struct { - Policies []NetworkPolicy `json:",omitempty"` -} - -func getNetwork(networkGuid guid.GUID, query string) (*HostComputeNetwork, error) { - // Open network. - var ( - networkHandle hcnNetwork - resultBuffer *uint16 - propertiesBuffer *uint16 - ) - hr := hcnOpenNetwork(&networkGuid, &networkHandle, &resultBuffer) - if err := checkForErrors("hcnOpenNetwork", hr, resultBuffer); err != nil { - return nil, err - } - // Query network. 
- hr = hcnQueryNetworkProperties(networkHandle, query, &propertiesBuffer, &resultBuffer) - if err := checkForErrors("hcnQueryNetworkProperties", hr, resultBuffer); err != nil { - return nil, err - } - properties := interop.ConvertAndFreeCoTaskMemString(propertiesBuffer) - // Close network. - hr = hcnCloseNetwork(networkHandle) - if err := checkForErrors("hcnCloseNetwork", hr, nil); err != nil { - return nil, err - } - // Convert output to HostComputeNetwork - var outputNetwork HostComputeNetwork - - // If HNS sets the network type to NAT (i.e. '0' in HNS.Schema.Network.NetworkMode), - // the value will be omitted from the JSON blob. We therefore need to initialize NAT here before - // unmarshaling the JSON blob. - outputNetwork.Type = NAT - - if err := json.Unmarshal([]byte(properties), &outputNetwork); err != nil { - return nil, err - } - return &outputNetwork, nil -} - -func enumerateNetworks(query string) ([]HostComputeNetwork, error) { - // Enumerate all Network Guids - var ( - resultBuffer *uint16 - networkBuffer *uint16 - ) - hr := hcnEnumerateNetworks(query, &networkBuffer, &resultBuffer) - if err := checkForErrors("hcnEnumerateNetworks", hr, resultBuffer); err != nil { - return nil, err - } - - networks := interop.ConvertAndFreeCoTaskMemString(networkBuffer) - var networkIds []guid.GUID - if err := json.Unmarshal([]byte(networks), &networkIds); err != nil { - return nil, err - } - - var outputNetworks []HostComputeNetwork - for _, networkGuid := range networkIds { - network, err := getNetwork(networkGuid, query) - if err != nil { - return nil, err - } - outputNetworks = append(outputNetworks, *network) - } - return outputNetworks, nil -} - -func createNetwork(settings string) (*HostComputeNetwork, error) { - // Create new network. 
- var ( - networkHandle hcnNetwork - resultBuffer *uint16 - propertiesBuffer *uint16 - ) - networkGuid := guid.GUID{} - hr := hcnCreateNetwork(&networkGuid, settings, &networkHandle, &resultBuffer) - if err := checkForErrors("hcnCreateNetwork", hr, resultBuffer); err != nil { - return nil, err - } - // Query network. - hcnQuery := defaultQuery() - query, err := json.Marshal(hcnQuery) - if err != nil { - return nil, err - } - hr = hcnQueryNetworkProperties(networkHandle, string(query), &propertiesBuffer, &resultBuffer) - if err := checkForErrors("hcnQueryNetworkProperties", hr, resultBuffer); err != nil { - return nil, err - } - properties := interop.ConvertAndFreeCoTaskMemString(propertiesBuffer) - // Close network. - hr = hcnCloseNetwork(networkHandle) - if err := checkForErrors("hcnCloseNetwork", hr, nil); err != nil { - return nil, err - } - // Convert output to HostComputeNetwork - var outputNetwork HostComputeNetwork - - // If HNS sets the network type to NAT (i.e. '0' in HNS.Schema.Network.NetworkMode), - // the value will be omitted from the JSON blob. We therefore need to initialize NAT here before - // unmarshaling the JSON blob. - outputNetwork.Type = NAT - - if err := json.Unmarshal([]byte(properties), &outputNetwork); err != nil { - return nil, err - } - return &outputNetwork, nil -} - -func modifyNetwork(networkId string, settings string) (*HostComputeNetwork, error) { - networkGuid, err := guid.FromString(networkId) - if err != nil { - return nil, errInvalidNetworkID - } - // Open Network - var ( - networkHandle hcnNetwork - resultBuffer *uint16 - propertiesBuffer *uint16 - ) - hr := hcnOpenNetwork(&networkGuid, &networkHandle, &resultBuffer) - if err := checkForErrors("hcnOpenNetwork", hr, resultBuffer); err != nil { - return nil, err - } - // Modify Network - hr = hcnModifyNetwork(networkHandle, settings, &resultBuffer) - if err := checkForErrors("hcnModifyNetwork", hr, resultBuffer); err != nil { - return nil, err - } - // Query network. 
- hcnQuery := defaultQuery() - query, err := json.Marshal(hcnQuery) - if err != nil { - return nil, err - } - hr = hcnQueryNetworkProperties(networkHandle, string(query), &propertiesBuffer, &resultBuffer) - if err := checkForErrors("hcnQueryNetworkProperties", hr, resultBuffer); err != nil { - return nil, err - } - properties := interop.ConvertAndFreeCoTaskMemString(propertiesBuffer) - // Close network. - hr = hcnCloseNetwork(networkHandle) - if err := checkForErrors("hcnCloseNetwork", hr, nil); err != nil { - return nil, err - } - // Convert output to HostComputeNetwork - var outputNetwork HostComputeNetwork - - // If HNS sets the network type to NAT (i.e. '0' in HNS.Schema.Network.NetworkMode), - // the value will be omitted from the JSON blob. We therefore need to initialize NAT here before - // unmarshaling the JSON blob. - outputNetwork.Type = NAT - - if err := json.Unmarshal([]byte(properties), &outputNetwork); err != nil { - return nil, err - } - return &outputNetwork, nil -} - -func deleteNetwork(networkId string) error { - networkGuid, err := guid.FromString(networkId) - if err != nil { - return errInvalidNetworkID - } - var resultBuffer *uint16 - hr := hcnDeleteNetwork(&networkGuid, &resultBuffer) - if err := checkForErrors("hcnDeleteNetwork", hr, resultBuffer); err != nil { - return err - } - return nil -} - -// ListNetworks makes a call to list all available networks. -func ListNetworks() ([]HostComputeNetwork, error) { - hcnQuery := defaultQuery() - networks, err := ListNetworksQuery(hcnQuery) - if err != nil { - return nil, err - } - return networks, nil -} - -// ListNetworksQuery makes a call to query the list of available networks. 
-func ListNetworksQuery(query HostComputeQuery) ([]HostComputeNetwork, error) { - queryJson, err := json.Marshal(query) - if err != nil { - return nil, err - } - - networks, err := enumerateNetworks(string(queryJson)) - if err != nil { - return nil, err - } - return networks, nil -} - -// GetNetworkByID returns the network specified by Id. -func GetNetworkByID(networkID string) (*HostComputeNetwork, error) { - hcnQuery := defaultQuery() - mapA := map[string]string{"ID": networkID} - filter, err := json.Marshal(mapA) - if err != nil { - return nil, err - } - hcnQuery.Filter = string(filter) - - networks, err := ListNetworksQuery(hcnQuery) - if err != nil { - return nil, err - } - if len(networks) == 0 { - return nil, NetworkNotFoundError{NetworkID: networkID} - } - return &networks[0], err -} - -// GetNetworkByName returns the network specified by Name. -func GetNetworkByName(networkName string) (*HostComputeNetwork, error) { - hcnQuery := defaultQuery() - mapA := map[string]string{"Name": networkName} - filter, err := json.Marshal(mapA) - if err != nil { - return nil, err - } - hcnQuery.Filter = string(filter) - - networks, err := ListNetworksQuery(hcnQuery) - if err != nil { - return nil, err - } - if len(networks) == 0 { - return nil, NetworkNotFoundError{NetworkName: networkName} - } - return &networks[0], err -} - -// Create Network. 
-func (network *HostComputeNetwork) Create() (*HostComputeNetwork, error) { - logrus.Debugf("hcn::HostComputeNetwork::Create id=%s", network.Id) - for _, ipam := range network.Ipams { - for _, subnet := range ipam.Subnets { - if subnet.IpAddressPrefix != "" { - hasDefault := false - for _, route := range subnet.Routes { - if route.NextHop == "" { - return nil, errors.New("network create error, subnet has address prefix but no gateway specified") - } - if route.DestinationPrefix == "0.0.0.0/0" || route.DestinationPrefix == "::/0" { - hasDefault = true - } - } - if !hasDefault { - return nil, errors.New("network create error, no default gateway") - } - } - } - } - - jsonString, err := json.Marshal(network) - if err != nil { - return nil, err - } - - logrus.Debugf("hcn::HostComputeNetwork::Create JSON: %s", jsonString) - network, hcnErr := createNetwork(string(jsonString)) - if hcnErr != nil { - return nil, hcnErr - } - return network, nil -} - -// Delete Network. -func (network *HostComputeNetwork) Delete() error { - logrus.Debugf("hcn::HostComputeNetwork::Delete id=%s", network.Id) - - if err := deleteNetwork(network.Id); err != nil { - return err - } - return nil -} - -// ModifyNetworkSettings updates the Policy for a network. -func (network *HostComputeNetwork) ModifyNetworkSettings(request *ModifyNetworkSettingRequest) error { - logrus.Debugf("hcn::HostComputeNetwork::ModifyNetworkSettings id=%s", network.Id) - - networkSettingsRequest, err := json.Marshal(request) - if err != nil { - return err - } - - _, err = modifyNetwork(network.Id, string(networkSettingsRequest)) - if err != nil { - return err - } - return nil -} - -// AddPolicy applies a Policy (ex: RemoteSubnet) on the Network. 
-func (network *HostComputeNetwork) AddPolicy(networkPolicy PolicyNetworkRequest) error { - logrus.Debugf("hcn::HostComputeNetwork::AddPolicy id=%s", network.Id) - - settingsJson, err := json.Marshal(networkPolicy) - if err != nil { - return err - } - requestMessage := &ModifyNetworkSettingRequest{ - ResourceType: NetworkResourceTypePolicy, - RequestType: RequestTypeAdd, - Settings: settingsJson, - } - - return network.ModifyNetworkSettings(requestMessage) -} - -// RemovePolicy removes a Policy (ex: RemoteSubnet) from the Network. -func (network *HostComputeNetwork) RemovePolicy(networkPolicy PolicyNetworkRequest) error { - logrus.Debugf("hcn::HostComputeNetwork::RemovePolicy id=%s", network.Id) - - settingsJson, err := json.Marshal(networkPolicy) - if err != nil { - return err - } - requestMessage := &ModifyNetworkSettingRequest{ - ResourceType: NetworkResourceTypePolicy, - RequestType: RequestTypeRemove, - Settings: settingsJson, - } - - return network.ModifyNetworkSettings(requestMessage) -} - -// CreateEndpoint creates an endpoint on the Network. -func (network *HostComputeNetwork) CreateEndpoint(endpoint *HostComputeEndpoint) (*HostComputeEndpoint, error) { - isRemote := endpoint.Flags&EndpointFlagsRemoteEndpoint != 0 - logrus.Debugf("hcn::HostComputeNetwork::CreatEndpoint, networkId=%s remote=%t", network.Id, isRemote) - - endpoint.HostComputeNetwork = network.Id - endpointSettings, err := json.Marshal(endpoint) - if err != nil { - return nil, err - } - newEndpoint, err := createEndpoint(network.Id, string(endpointSettings)) - if err != nil { - return nil, err - } - return newEndpoint, nil -} - -// CreateRemoteEndpoint creates a remote endpoint on the Network. 
-func (network *HostComputeNetwork) CreateRemoteEndpoint(endpoint *HostComputeEndpoint) (*HostComputeEndpoint, error) { - endpoint.Flags = EndpointFlagsRemoteEndpoint | endpoint.Flags - return network.CreateEndpoint(endpoint) -} diff --git a/cluster-autoscaler/vendor/github.com/Microsoft/hcsshim/hcn/hcnpolicy.go b/cluster-autoscaler/vendor/github.com/Microsoft/hcsshim/hcn/hcnpolicy.go deleted file mode 100644 index 29651bb5f14a..000000000000 --- a/cluster-autoscaler/vendor/github.com/Microsoft/hcsshim/hcn/hcnpolicy.go +++ /dev/null @@ -1,329 +0,0 @@ -package hcn - -import ( - "encoding/json" -) - -// EndpointPolicyType are the potential Policies that apply to Endpoints. -type EndpointPolicyType string - -// EndpointPolicyType const -const ( - PortMapping EndpointPolicyType = "PortMapping" - ACL EndpointPolicyType = "ACL" - QOS EndpointPolicyType = "QOS" - L2Driver EndpointPolicyType = "L2Driver" - OutBoundNAT EndpointPolicyType = "OutBoundNAT" - SDNRoute EndpointPolicyType = "SDNRoute" - L4Proxy EndpointPolicyType = "L4Proxy" - L4WFPPROXY EndpointPolicyType = "L4WFPPROXY" - PortName EndpointPolicyType = "PortName" - EncapOverhead EndpointPolicyType = "EncapOverhead" - IOV EndpointPolicyType = "Iov" - // Endpoint and Network have InterfaceConstraint and ProviderAddress - NetworkProviderAddress EndpointPolicyType = "ProviderAddress" - NetworkInterfaceConstraint EndpointPolicyType = "InterfaceConstraint" - TierAcl EndpointPolicyType = "TierAcl" -) - -// EndpointPolicy is a collection of Policy settings for an Endpoint. -type EndpointPolicy struct { - Type EndpointPolicyType `json:""` - Settings json.RawMessage `json:",omitempty"` -} - -// NetworkPolicyType are the potential Policies that apply to Networks. 
-type NetworkPolicyType string - -// NetworkPolicyType const -const ( - SourceMacAddress NetworkPolicyType = "SourceMacAddress" - NetAdapterName NetworkPolicyType = "NetAdapterName" - VSwitchExtension NetworkPolicyType = "VSwitchExtension" - DrMacAddress NetworkPolicyType = "DrMacAddress" - AutomaticDNS NetworkPolicyType = "AutomaticDNS" - InterfaceConstraint NetworkPolicyType = "InterfaceConstraint" - ProviderAddress NetworkPolicyType = "ProviderAddress" - RemoteSubnetRoute NetworkPolicyType = "RemoteSubnetRoute" - VxlanPort NetworkPolicyType = "VxlanPort" - HostRoute NetworkPolicyType = "HostRoute" - SetPolicy NetworkPolicyType = "SetPolicy" - NetworkL4Proxy NetworkPolicyType = "L4Proxy" - LayerConstraint NetworkPolicyType = "LayerConstraint" -) - -// NetworkPolicy is a collection of Policy settings for a Network. -type NetworkPolicy struct { - Type NetworkPolicyType `json:""` - Settings json.RawMessage `json:",omitempty"` -} - -// SubnetPolicyType are the potential Policies that apply to Subnets. -type SubnetPolicyType string - -// SubnetPolicyType const -const ( - VLAN SubnetPolicyType = "VLAN" - VSID SubnetPolicyType = "VSID" -) - -// SubnetPolicy is a collection of Policy settings for a Subnet. -type SubnetPolicy struct { - Type SubnetPolicyType `json:""` - Settings json.RawMessage `json:",omitempty"` -} - -// NatFlags are flags for portmappings. -type NatFlags uint32 - -const ( - NatFlagsNone NatFlags = iota - NatFlagsLocalRoutedVip - NatFlagsIPv6 -) - -/// Endpoint Policy objects - -// PortMappingPolicySetting defines Port Mapping (NAT) -type PortMappingPolicySetting struct { - Protocol uint32 `json:",omitempty"` // EX: TCP = 6, UDP = 17 - InternalPort uint16 `json:",omitempty"` - ExternalPort uint16 `json:",omitempty"` - VIP string `json:",omitempty"` - Flags NatFlags `json:",omitempty"` -} - -// ActionType associated with ACLs. Value is either Allow or Block. -type ActionType string - -// DirectionType associated with ACLs. Value is either In or Out. 
-type DirectionType string - -// RuleType associated with ACLs. Value is either Host (WFP) or Switch (VFP). -type RuleType string - -const ( - // Allow traffic - ActionTypeAllow ActionType = "Allow" - // Block traffic - ActionTypeBlock ActionType = "Block" - // Pass traffic - ActionTypePass ActionType = "Pass" - - // In is traffic coming to the Endpoint - DirectionTypeIn DirectionType = "In" - // Out is traffic leaving the Endpoint - DirectionTypeOut DirectionType = "Out" - - // Host creates WFP (Windows Firewall) rules - RuleTypeHost RuleType = "Host" - // Switch creates VFP (Virtual Filter Platform) rules - RuleTypeSwitch RuleType = "Switch" -) - -// AclPolicySetting creates firewall rules on an endpoint -type AclPolicySetting struct { - Protocols string `json:",omitempty"` // EX: 6 (TCP), 17 (UDP), 1 (ICMPv4), 58 (ICMPv6), 2 (IGMP) - Action ActionType `json:","` - Direction DirectionType `json:","` - LocalAddresses string `json:",omitempty"` - RemoteAddresses string `json:",omitempty"` - LocalPorts string `json:",omitempty"` - RemotePorts string `json:",omitempty"` - RuleType RuleType `json:",omitempty"` - Priority uint16 `json:",omitempty"` -} - -// QosPolicySetting sets Quality of Service bandwidth caps on an Endpoint. -type QosPolicySetting struct { - MaximumOutgoingBandwidthInBytes uint64 -} - -// OutboundNatPolicySetting sets outbound Network Address Translation on an Endpoint. -type OutboundNatPolicySetting struct { - VirtualIP string `json:",omitempty"` - Exceptions []string `json:",omitempty"` - Destinations []string `json:",omitempty"` - Flags NatFlags `json:",omitempty"` -} - -// SDNRoutePolicySetting sets SDN Route on an Endpoint. -type SDNRoutePolicySetting struct { - DestinationPrefix string `json:",omitempty"` - NextHop string `json:",omitempty"` - NeedEncap bool `json:",omitempty"` -} - -// FiveTuple is nested in L4ProxyPolicySetting for WFP support. 
-type FiveTuple struct { - Protocols string `json:",omitempty"` - LocalAddresses string `json:",omitempty"` - RemoteAddresses string `json:",omitempty"` - LocalPorts string `json:",omitempty"` - RemotePorts string `json:",omitempty"` - Priority uint16 `json:",omitempty"` -} - -// ProxyExceptions exempts traffic to IpAddresses and Ports -type ProxyExceptions struct { - IpAddressExceptions []string `json:",omitempty"` - PortExceptions []string `json:",omitempty"` -} - -// L4WfpProxyPolicySetting sets Layer-4 Proxy on an endpoint. -type L4WfpProxyPolicySetting struct { - InboundProxyPort string `json:",omitempty"` - OutboundProxyPort string `json:",omitempty"` - FilterTuple FiveTuple `json:",omitempty"` - UserSID string `json:",omitempty"` - InboundExceptions ProxyExceptions `json:",omitempty"` - OutboundExceptions ProxyExceptions `json:",omitempty"` -} - -// PortnameEndpointPolicySetting sets the port name for an endpoint. -type PortnameEndpointPolicySetting struct { - Name string `json:",omitempty"` -} - -// EncapOverheadEndpointPolicySetting sets the encap overhead for an endpoint. -type EncapOverheadEndpointPolicySetting struct { - Overhead uint16 `json:",omitempty"` -} - -// IovPolicySetting sets the Iov settings for an endpoint. -type IovPolicySetting struct { - IovOffloadWeight uint32 `json:",omitempty"` - QueuePairsRequested uint32 `json:",omitempty"` - InterruptModeration uint32 `json:",omitempty"` -} - -/// Endpoint and Network Policy objects - -// ProviderAddressEndpointPolicySetting sets the PA for an endpoint. -type ProviderAddressEndpointPolicySetting struct { - ProviderAddress string `json:",omitempty"` -} - -// InterfaceConstraintPolicySetting limits an Endpoint or Network to a specific Nic. 
-type InterfaceConstraintPolicySetting struct { - InterfaceGuid string `json:",omitempty"` - InterfaceLuid uint64 `json:",omitempty"` - InterfaceIndex uint32 `json:",omitempty"` - InterfaceMediaType uint32 `json:",omitempty"` - InterfaceAlias string `json:",omitempty"` - InterfaceDescription string `json:",omitempty"` -} - -/// Network Policy objects - -// SourceMacAddressNetworkPolicySetting sets source MAC for a network. -type SourceMacAddressNetworkPolicySetting struct { - SourceMacAddress string `json:",omitempty"` -} - -// NetAdapterNameNetworkPolicySetting sets network adapter of a network. -type NetAdapterNameNetworkPolicySetting struct { - NetworkAdapterName string `json:",omitempty"` -} - -// VSwitchExtensionNetworkPolicySetting enables/disabled VSwitch extensions for a network. -type VSwitchExtensionNetworkPolicySetting struct { - ExtensionID string `json:",omitempty"` - Enable bool `json:",omitempty"` -} - -// DrMacAddressNetworkPolicySetting sets the DR MAC for a network. -type DrMacAddressNetworkPolicySetting struct { - Address string `json:",omitempty"` -} - -// AutomaticDNSNetworkPolicySetting enables/disables automatic DNS on a network. -type AutomaticDNSNetworkPolicySetting struct { - Enable bool `json:",omitempty"` -} - -type LayerConstraintNetworkPolicySetting struct { - LayerId string `json:",omitempty"` -} - -/// Subnet Policy objects - -// VlanPolicySetting isolates a subnet with VLAN tagging. -type VlanPolicySetting struct { - IsolationId uint32 `json:","` -} - -// VsidPolicySetting isolates a subnet with VSID tagging. -type VsidPolicySetting struct { - IsolationId uint32 `json:","` -} - -// RemoteSubnetRoutePolicySetting creates remote subnet route rules on a network -type RemoteSubnetRoutePolicySetting struct { - DestinationPrefix string - IsolationId uint16 - ProviderAddress string - DistributedRouterMacAddress string -} - -// SetPolicyTypes associated with SetPolicy. Value is IPSET. 
-type SetPolicyType string - -const ( - SetPolicyTypeIpSet SetPolicyType = "IPSET" -) - -// SetPolicySetting creates IPSets on network -type SetPolicySetting struct { - Id string - Name string - Type SetPolicyType - Values string -} - -// VxlanPortPolicySetting allows configuring the VXLAN TCP port -type VxlanPortPolicySetting struct { - Port uint16 -} - -// ProtocolType associated with L4ProxyPolicy -type ProtocolType uint32 - -const ( - ProtocolTypeUnknown ProtocolType = 0 - ProtocolTypeICMPv4 ProtocolType = 1 - ProtocolTypeIGMP ProtocolType = 2 - ProtocolTypeTCP ProtocolType = 6 - ProtocolTypeUDP ProtocolType = 17 - ProtocolTypeICMPv6 ProtocolType = 58 -) - -//L4ProxyPolicySetting applies proxy policy on network/endpoint -type L4ProxyPolicySetting struct { - IP string `json:",omitempty"` - Port string `json:",omitempty"` - Protocol ProtocolType `json:",omitempty"` - Exceptions []string `json:",omitempty"` - Destination string - OutboundNAT bool `json:",omitempty"` -} - -// TierAclRule represents an ACL within TierAclPolicySetting -type TierAclRule struct { - Id string `json:",omitempty"` - Protocols string `json:",omitempty"` - TierAclRuleAction ActionType `json:","` - LocalAddresses string `json:",omitempty"` - RemoteAddresses string `json:",omitempty"` - LocalPorts string `json:",omitempty"` - RemotePorts string `json:",omitempty"` - Priority uint16 `json:",omitempty"` -} - -// TierAclPolicySetting represents a Tier containing ACLs -type TierAclPolicySetting struct { - Name string `json:","` - Direction DirectionType `json:","` - Order uint16 `json:""` - TierAclRules []TierAclRule `json:",omitempty"` -} diff --git a/cluster-autoscaler/vendor/github.com/Microsoft/hcsshim/hcn/hcnroute.go b/cluster-autoscaler/vendor/github.com/Microsoft/hcsshim/hcn/hcnroute.go deleted file mode 100644 index d6d27079bcae..000000000000 --- a/cluster-autoscaler/vendor/github.com/Microsoft/hcsshim/hcn/hcnroute.go +++ /dev/null @@ -1,266 +0,0 @@ -package hcn - -import ( - 
"encoding/json" - "errors" - - "github.com/Microsoft/go-winio/pkg/guid" - "github.com/Microsoft/hcsshim/internal/interop" - "github.com/sirupsen/logrus" -) - -// HostComputeRoute represents SDN routes. -type HostComputeRoute struct { - ID string `json:"ID,omitempty"` - HostComputeEndpoints []string `json:",omitempty"` - Setting []SDNRoutePolicySetting `json:",omitempty"` - SchemaVersion SchemaVersion `json:",omitempty"` -} - -// ListRoutes makes a call to list all available routes. -func ListRoutes() ([]HostComputeRoute, error) { - hcnQuery := defaultQuery() - routes, err := ListRoutesQuery(hcnQuery) - if err != nil { - return nil, err - } - return routes, nil -} - -// ListRoutesQuery makes a call to query the list of available routes. -func ListRoutesQuery(query HostComputeQuery) ([]HostComputeRoute, error) { - queryJSON, err := json.Marshal(query) - if err != nil { - return nil, err - } - - routes, err := enumerateRoutes(string(queryJSON)) - if err != nil { - return nil, err - } - return routes, nil -} - -// GetRouteByID returns the route specified by Id. -func GetRouteByID(routeID string) (*HostComputeRoute, error) { - hcnQuery := defaultQuery() - mapA := map[string]string{"ID": routeID} - filter, err := json.Marshal(mapA) - if err != nil { - return nil, err - } - hcnQuery.Filter = string(filter) - - routes, err := ListRoutesQuery(hcnQuery) - if err != nil { - return nil, err - } - if len(routes) == 0 { - return nil, RouteNotFoundError{RouteId: routeID} - } - return &routes[0], err -} - -// Create Route. -func (route *HostComputeRoute) Create() (*HostComputeRoute, error) { - logrus.Debugf("hcn::HostComputeRoute::Create id=%s", route.ID) - - jsonString, err := json.Marshal(route) - if err != nil { - return nil, err - } - - logrus.Debugf("hcn::HostComputeRoute::Create JSON: %s", jsonString) - route, hcnErr := createRoute(string(jsonString)) - if hcnErr != nil { - return nil, hcnErr - } - return route, nil -} - -// Delete Route. 
-func (route *HostComputeRoute) Delete() error { - logrus.Debugf("hcn::HostComputeRoute::Delete id=%s", route.ID) - - existingRoute, _ := GetRouteByID(route.ID) - - if existingRoute != nil { - if err := deleteRoute(route.ID); err != nil { - return err - } - } - - return nil -} - -// AddEndpoint add an endpoint to a route -// Since HCNRoute doesn't implement modify functionality, add operation is essentially delete and add -func (route *HostComputeRoute) AddEndpoint(endpoint *HostComputeEndpoint) (*HostComputeRoute, error) { - logrus.Debugf("hcn::HostComputeRoute::AddEndpoint route=%s endpoint=%s", route.ID, endpoint.Id) - - err := route.Delete() - if err != nil { - return nil, err - } - - // Add Endpoint to the Existing List - route.HostComputeEndpoints = append(route.HostComputeEndpoints, endpoint.Id) - - return route.Create() -} - -// RemoveEndpoint removes an endpoint from a route -// Since HCNRoute doesn't implement modify functionality, remove operation is essentially delete and add -func (route *HostComputeRoute) RemoveEndpoint(endpoint *HostComputeEndpoint) (*HostComputeRoute, error) { - logrus.Debugf("hcn::HostComputeRoute::RemoveEndpoint route=%s endpoint=%s", route.ID, endpoint.Id) - - err := route.Delete() - if err != nil { - return nil, err - } - - // Create a list of all the endpoints besides the one being removed - i := 0 - for index, endpointReference := range route.HostComputeEndpoints { - if endpointReference == endpoint.Id { - i = index - break - } - } - - route.HostComputeEndpoints = append(route.HostComputeEndpoints[0:i], route.HostComputeEndpoints[i+1:]...) 
- return route.Create() -} - -// AddRoute for the specified endpoints and SDN Route setting -func AddRoute(endpoints []HostComputeEndpoint, destinationPrefix string, nextHop string, needEncapsulation bool) (*HostComputeRoute, error) { - logrus.Debugf("hcn::HostComputeRoute::AddRoute endpointId=%v, destinationPrefix=%v, nextHop=%v, needEncapsulation=%v", endpoints, destinationPrefix, nextHop, needEncapsulation) - - if len(endpoints) <= 0 { - return nil, errors.New("Missing endpoints") - } - - route := &HostComputeRoute{ - SchemaVersion: V2SchemaVersion(), - Setting: []SDNRoutePolicySetting{ - { - DestinationPrefix: destinationPrefix, - NextHop: nextHop, - NeedEncap: needEncapsulation, - }, - }, - } - - for _, endpoint := range endpoints { - route.HostComputeEndpoints = append(route.HostComputeEndpoints, endpoint.Id) - } - - return route.Create() -} - -func enumerateRoutes(query string) ([]HostComputeRoute, error) { - // Enumerate all routes Guids - var ( - resultBuffer *uint16 - routeBuffer *uint16 - ) - hr := hcnEnumerateRoutes(query, &routeBuffer, &resultBuffer) - if err := checkForErrors("hcnEnumerateRoutes", hr, resultBuffer); err != nil { - return nil, err - } - - routes := interop.ConvertAndFreeCoTaskMemString(routeBuffer) - var routeIds []guid.GUID - if err := json.Unmarshal([]byte(routes), &routeIds); err != nil { - return nil, err - } - - var outputRoutes []HostComputeRoute - for _, routeGUID := range routeIds { - route, err := getRoute(routeGUID, query) - if err != nil { - return nil, err - } - outputRoutes = append(outputRoutes, *route) - } - return outputRoutes, nil -} - -func getRoute(routeGUID guid.GUID, query string) (*HostComputeRoute, error) { - // Open routes. - var ( - routeHandle hcnRoute - resultBuffer *uint16 - propertiesBuffer *uint16 - ) - hr := hcnOpenRoute(&routeGUID, &routeHandle, &resultBuffer) - if err := checkForErrors("hcnOpenRoute", hr, resultBuffer); err != nil { - return nil, err - } - // Query routes. 
- hr = hcnQueryRouteProperties(routeHandle, query, &propertiesBuffer, &resultBuffer) - if err := checkForErrors("hcnQueryRouteProperties", hr, resultBuffer); err != nil { - return nil, err - } - properties := interop.ConvertAndFreeCoTaskMemString(propertiesBuffer) - // Close routes. - hr = hcnCloseRoute(routeHandle) - if err := checkForErrors("hcnCloseRoute", hr, nil); err != nil { - return nil, err - } - // Convert output to HostComputeRoute - var outputRoute HostComputeRoute - if err := json.Unmarshal([]byte(properties), &outputRoute); err != nil { - return nil, err - } - return &outputRoute, nil -} - -func createRoute(settings string) (*HostComputeRoute, error) { - // Create new route. - var ( - routeHandle hcnRoute - resultBuffer *uint16 - propertiesBuffer *uint16 - ) - routeGUID := guid.GUID{} - hr := hcnCreateRoute(&routeGUID, settings, &routeHandle, &resultBuffer) - if err := checkForErrors("hcnCreateRoute", hr, resultBuffer); err != nil { - return nil, err - } - // Query route. - hcnQuery := defaultQuery() - query, err := json.Marshal(hcnQuery) - if err != nil { - return nil, err - } - hr = hcnQueryRouteProperties(routeHandle, string(query), &propertiesBuffer, &resultBuffer) - if err := checkForErrors("hcnQueryRouteProperties", hr, resultBuffer); err != nil { - return nil, err - } - properties := interop.ConvertAndFreeCoTaskMemString(propertiesBuffer) - // Close Route. 
- hr = hcnCloseRoute(routeHandle) - if err := checkForErrors("hcnCloseRoute", hr, nil); err != nil { - return nil, err - } - // Convert output to HostComputeRoute - var outputRoute HostComputeRoute - if err := json.Unmarshal([]byte(properties), &outputRoute); err != nil { - return nil, err - } - return &outputRoute, nil -} - -func deleteRoute(routeID string) error { - routeGUID, err := guid.FromString(routeID) - if err != nil { - return errInvalidRouteID - } - var resultBuffer *uint16 - hr := hcnDeleteRoute(&routeGUID, &resultBuffer) - if err := checkForErrors("hcnDeleteRoute", hr, resultBuffer); err != nil { - return err - } - return nil -} diff --git a/cluster-autoscaler/vendor/github.com/Microsoft/hcsshim/hcn/hcnsupport.go b/cluster-autoscaler/vendor/github.com/Microsoft/hcsshim/hcn/hcnsupport.go deleted file mode 100644 index 64f9e3728b51..000000000000 --- a/cluster-autoscaler/vendor/github.com/Microsoft/hcsshim/hcn/hcnsupport.go +++ /dev/null @@ -1,143 +0,0 @@ -package hcn - -import ( - "fmt" - "sync" - - "github.com/pkg/errors" - "github.com/sirupsen/logrus" -) - -var ( - // featuresOnce handles assigning the supported features and printing the supported info to stdout only once to avoid unnecessary work - // multiple times. - featuresOnce sync.Once - featuresErr error - supportedFeatures SupportedFeatures -) - -// SupportedFeatures are the features provided by the Service. 
-type SupportedFeatures struct { - Acl AclFeatures `json:"ACL"` - Api ApiSupport `json:"API"` - RemoteSubnet bool `json:"RemoteSubnet"` - HostRoute bool `json:"HostRoute"` - DSR bool `json:"DSR"` - Slash32EndpointPrefixes bool `json:"Slash32EndpointPrefixes"` - AclSupportForProtocol252 bool `json:"AclSupportForProtocol252"` - SessionAffinity bool `json:"SessionAffinity"` - IPv6DualStack bool `json:"IPv6DualStack"` - SetPolicy bool `json:"SetPolicy"` - VxlanPort bool `json:"VxlanPort"` - L4Proxy bool `json:"L4Proxy"` // network policy that applies VFP rules to all endpoints on the network to redirect traffic - L4WfpProxy bool `json:"L4WfpProxy"` // endpoint policy that applies WFP filters to redirect traffic to/from that endpoint - TierAcl bool `json:"TierAcl"` -} - -// AclFeatures are the supported ACL possibilities. -type AclFeatures struct { - AclAddressLists bool `json:"AclAddressLists"` - AclNoHostRulePriority bool `json:"AclHostRulePriority"` - AclPortRanges bool `json:"AclPortRanges"` - AclRuleId bool `json:"AclRuleId"` -} - -// ApiSupport lists the supported API versions. -type ApiSupport struct { - V1 bool `json:"V1"` - V2 bool `json:"V2"` -} - -// GetCachedSupportedFeatures returns the features supported by the Service and an error if the query failed. If this has been called -// before it will return the supported features and error received from the first call. This can be used to optimize if many calls to the -// various hcn.IsXSupported methods need to be made. -func GetCachedSupportedFeatures() (SupportedFeatures, error) { - // Only query the HCN version and features supported once, instead of everytime this is invoked. The logs are useful to - // debug incidents where there's confusion on if a feature is supported on the host machine. The sync.Once helps to avoid redundant - // spam of these anytime a check needs to be made for if an HCN feature is supported. This is a common occurrence in kube-proxy - // for example. 
- featuresOnce.Do(func() { - supportedFeatures, featuresErr = getSupportedFeatures() - }) - - return supportedFeatures, featuresErr -} - -// GetSupportedFeatures returns the features supported by the Service. -// -// Deprecated: Use GetCachedSupportedFeatures instead. -func GetSupportedFeatures() SupportedFeatures { - features, err := GetCachedSupportedFeatures() - if err != nil { - // Expected on pre-1803 builds, all features will be false/unsupported - logrus.WithError(err).Errorf("unable to obtain supported features") - return features - } - return features -} - -func getSupportedFeatures() (SupportedFeatures, error) { - var features SupportedFeatures - globals, err := GetGlobals() - if err != nil { - // It's expected if this fails once, it should always fail. It should fail on pre 1803 builds for example. - return SupportedFeatures{}, errors.Wrap(err, "failed to query HCN version number: this is expected on pre 1803 builds.") - } - features.Acl = AclFeatures{ - AclAddressLists: isFeatureSupported(globals.Version, HNSVersion1803), - AclNoHostRulePriority: isFeatureSupported(globals.Version, HNSVersion1803), - AclPortRanges: isFeatureSupported(globals.Version, HNSVersion1803), - AclRuleId: isFeatureSupported(globals.Version, HNSVersion1803), - } - - features.Api = ApiSupport{ - V2: isFeatureSupported(globals.Version, V2ApiSupport), - V1: true, // HNSCall is still available. 
- } - - features.RemoteSubnet = isFeatureSupported(globals.Version, RemoteSubnetVersion) - features.HostRoute = isFeatureSupported(globals.Version, HostRouteVersion) - features.DSR = isFeatureSupported(globals.Version, DSRVersion) - features.Slash32EndpointPrefixes = isFeatureSupported(globals.Version, Slash32EndpointPrefixesVersion) - features.AclSupportForProtocol252 = isFeatureSupported(globals.Version, AclSupportForProtocol252Version) - features.SessionAffinity = isFeatureSupported(globals.Version, SessionAffinityVersion) - features.IPv6DualStack = isFeatureSupported(globals.Version, IPv6DualStackVersion) - features.SetPolicy = isFeatureSupported(globals.Version, SetPolicyVersion) - features.VxlanPort = isFeatureSupported(globals.Version, VxlanPortVersion) - features.L4Proxy = isFeatureSupported(globals.Version, L4ProxyPolicyVersion) - features.L4WfpProxy = isFeatureSupported(globals.Version, L4WfpProxyPolicyVersion) - features.TierAcl = isFeatureSupported(globals.Version, TierAclPolicyVersion) - - logrus.WithFields(logrus.Fields{ - "version": fmt.Sprintf("%+v", globals.Version), - "supportedFeatures": fmt.Sprintf("%+v", features), - }).Info("HCN feature check") - - return features, nil -} - -func isFeatureSupported(currentVersion Version, versionsSupported VersionRanges) bool { - isFeatureSupported := false - - for _, versionRange := range versionsSupported { - isFeatureSupported = isFeatureSupported || isFeatureInRange(currentVersion, versionRange) - } - - return isFeatureSupported -} - -func isFeatureInRange(currentVersion Version, versionRange VersionRange) bool { - if currentVersion.Major < versionRange.MinVersion.Major { - return false - } - if currentVersion.Major > versionRange.MaxVersion.Major { - return false - } - if currentVersion.Major == versionRange.MinVersion.Major && currentVersion.Minor < versionRange.MinVersion.Minor { - return false - } - if currentVersion.Major == versionRange.MaxVersion.Major && currentVersion.Minor > 
versionRange.MaxVersion.Minor { - return false - } - return true -} diff --git a/cluster-autoscaler/vendor/github.com/Microsoft/hcsshim/hcn/zsyscall_windows.go b/cluster-autoscaler/vendor/github.com/Microsoft/hcsshim/hcn/zsyscall_windows.go deleted file mode 100644 index 7ec5b58b66a8..000000000000 --- a/cluster-autoscaler/vendor/github.com/Microsoft/hcsshim/hcn/zsyscall_windows.go +++ /dev/null @@ -1,795 +0,0 @@ -// Code generated mksyscall_windows.exe DO NOT EDIT - -package hcn - -import ( - "syscall" - "unsafe" - - "golang.org/x/sys/windows" -) - -var _ unsafe.Pointer - -// Do the interface allocations only once for common -// Errno values. -const ( - errnoERROR_IO_PENDING = 997 -) - -var ( - errERROR_IO_PENDING error = syscall.Errno(errnoERROR_IO_PENDING) -) - -// errnoErr returns common boxed Errno values, to prevent -// allocations at runtime. -func errnoErr(e syscall.Errno) error { - switch e { - case 0: - return nil - case errnoERROR_IO_PENDING: - return errERROR_IO_PENDING - } - // TODO: add more here, after collecting data on the common - // error values see on Windows. (perhaps when running - // all.bat?) 
- return e -} - -var ( - modiphlpapi = windows.NewLazySystemDLL("iphlpapi.dll") - modvmcompute = windows.NewLazySystemDLL("vmcompute.dll") - modcomputenetwork = windows.NewLazySystemDLL("computenetwork.dll") - - procSetCurrentThreadCompartmentId = modiphlpapi.NewProc("SetCurrentThreadCompartmentId") - procHNSCall = modvmcompute.NewProc("HNSCall") - procHcnEnumerateNetworks = modcomputenetwork.NewProc("HcnEnumerateNetworks") - procHcnCreateNetwork = modcomputenetwork.NewProc("HcnCreateNetwork") - procHcnOpenNetwork = modcomputenetwork.NewProc("HcnOpenNetwork") - procHcnModifyNetwork = modcomputenetwork.NewProc("HcnModifyNetwork") - procHcnQueryNetworkProperties = modcomputenetwork.NewProc("HcnQueryNetworkProperties") - procHcnDeleteNetwork = modcomputenetwork.NewProc("HcnDeleteNetwork") - procHcnCloseNetwork = modcomputenetwork.NewProc("HcnCloseNetwork") - procHcnEnumerateEndpoints = modcomputenetwork.NewProc("HcnEnumerateEndpoints") - procHcnCreateEndpoint = modcomputenetwork.NewProc("HcnCreateEndpoint") - procHcnOpenEndpoint = modcomputenetwork.NewProc("HcnOpenEndpoint") - procHcnModifyEndpoint = modcomputenetwork.NewProc("HcnModifyEndpoint") - procHcnQueryEndpointProperties = modcomputenetwork.NewProc("HcnQueryEndpointProperties") - procHcnDeleteEndpoint = modcomputenetwork.NewProc("HcnDeleteEndpoint") - procHcnCloseEndpoint = modcomputenetwork.NewProc("HcnCloseEndpoint") - procHcnEnumerateNamespaces = modcomputenetwork.NewProc("HcnEnumerateNamespaces") - procHcnCreateNamespace = modcomputenetwork.NewProc("HcnCreateNamespace") - procHcnOpenNamespace = modcomputenetwork.NewProc("HcnOpenNamespace") - procHcnModifyNamespace = modcomputenetwork.NewProc("HcnModifyNamespace") - procHcnQueryNamespaceProperties = modcomputenetwork.NewProc("HcnQueryNamespaceProperties") - procHcnDeleteNamespace = modcomputenetwork.NewProc("HcnDeleteNamespace") - procHcnCloseNamespace = modcomputenetwork.NewProc("HcnCloseNamespace") - procHcnEnumerateLoadBalancers = 
modcomputenetwork.NewProc("HcnEnumerateLoadBalancers") - procHcnCreateLoadBalancer = modcomputenetwork.NewProc("HcnCreateLoadBalancer") - procHcnOpenLoadBalancer = modcomputenetwork.NewProc("HcnOpenLoadBalancer") - procHcnModifyLoadBalancer = modcomputenetwork.NewProc("HcnModifyLoadBalancer") - procHcnQueryLoadBalancerProperties = modcomputenetwork.NewProc("HcnQueryLoadBalancerProperties") - procHcnDeleteLoadBalancer = modcomputenetwork.NewProc("HcnDeleteLoadBalancer") - procHcnCloseLoadBalancer = modcomputenetwork.NewProc("HcnCloseLoadBalancer") - procHcnEnumerateSdnRoutes = modcomputenetwork.NewProc("HcnEnumerateSdnRoutes") - procHcnCreateSdnRoute = modcomputenetwork.NewProc("HcnCreateSdnRoute") - procHcnOpenSdnRoute = modcomputenetwork.NewProc("HcnOpenSdnRoute") - procHcnModifySdnRoute = modcomputenetwork.NewProc("HcnModifySdnRoute") - procHcnQuerySdnRouteProperties = modcomputenetwork.NewProc("HcnQuerySdnRouteProperties") - procHcnDeleteSdnRoute = modcomputenetwork.NewProc("HcnDeleteSdnRoute") - procHcnCloseSdnRoute = modcomputenetwork.NewProc("HcnCloseSdnRoute") -) - -func SetCurrentThreadCompartmentId(compartmentId uint32) (hr error) { - r0, _, _ := syscall.Syscall(procSetCurrentThreadCompartmentId.Addr(), 1, uintptr(compartmentId), 0, 0) - if int32(r0) < 0 { - if r0&0x1fff0000 == 0x00070000 { - r0 &= 0xffff - } - hr = syscall.Errno(r0) - } - return -} - -func _hnsCall(method string, path string, object string, response **uint16) (hr error) { - var _p0 *uint16 - _p0, hr = syscall.UTF16PtrFromString(method) - if hr != nil { - return - } - var _p1 *uint16 - _p1, hr = syscall.UTF16PtrFromString(path) - if hr != nil { - return - } - var _p2 *uint16 - _p2, hr = syscall.UTF16PtrFromString(object) - if hr != nil { - return - } - return __hnsCall(_p0, _p1, _p2, response) -} - -func __hnsCall(method *uint16, path *uint16, object *uint16, response **uint16) (hr error) { - if hr = procHNSCall.Find(); hr != nil { - return - } - r0, _, _ := 
syscall.Syscall6(procHNSCall.Addr(), 4, uintptr(unsafe.Pointer(method)), uintptr(unsafe.Pointer(path)), uintptr(unsafe.Pointer(object)), uintptr(unsafe.Pointer(response)), 0, 0) - if int32(r0) < 0 { - if r0&0x1fff0000 == 0x00070000 { - r0 &= 0xffff - } - hr = syscall.Errno(r0) - } - return -} - -func hcnEnumerateNetworks(query string, networks **uint16, result **uint16) (hr error) { - var _p0 *uint16 - _p0, hr = syscall.UTF16PtrFromString(query) - if hr != nil { - return - } - return _hcnEnumerateNetworks(_p0, networks, result) -} - -func _hcnEnumerateNetworks(query *uint16, networks **uint16, result **uint16) (hr error) { - if hr = procHcnEnumerateNetworks.Find(); hr != nil { - return - } - r0, _, _ := syscall.Syscall(procHcnEnumerateNetworks.Addr(), 3, uintptr(unsafe.Pointer(query)), uintptr(unsafe.Pointer(networks)), uintptr(unsafe.Pointer(result))) - if int32(r0) < 0 { - if r0&0x1fff0000 == 0x00070000 { - r0 &= 0xffff - } - hr = syscall.Errno(r0) - } - return -} - -func hcnCreateNetwork(id *_guid, settings string, network *hcnNetwork, result **uint16) (hr error) { - var _p0 *uint16 - _p0, hr = syscall.UTF16PtrFromString(settings) - if hr != nil { - return - } - return _hcnCreateNetwork(id, _p0, network, result) -} - -func _hcnCreateNetwork(id *_guid, settings *uint16, network *hcnNetwork, result **uint16) (hr error) { - if hr = procHcnCreateNetwork.Find(); hr != nil { - return - } - r0, _, _ := syscall.Syscall6(procHcnCreateNetwork.Addr(), 4, uintptr(unsafe.Pointer(id)), uintptr(unsafe.Pointer(settings)), uintptr(unsafe.Pointer(network)), uintptr(unsafe.Pointer(result)), 0, 0) - if int32(r0) < 0 { - if r0&0x1fff0000 == 0x00070000 { - r0 &= 0xffff - } - hr = syscall.Errno(r0) - } - return -} - -func hcnOpenNetwork(id *_guid, network *hcnNetwork, result **uint16) (hr error) { - if hr = procHcnOpenNetwork.Find(); hr != nil { - return - } - r0, _, _ := syscall.Syscall(procHcnOpenNetwork.Addr(), 3, uintptr(unsafe.Pointer(id)), uintptr(unsafe.Pointer(network)), 
uintptr(unsafe.Pointer(result))) - if int32(r0) < 0 { - if r0&0x1fff0000 == 0x00070000 { - r0 &= 0xffff - } - hr = syscall.Errno(r0) - } - return -} - -func hcnModifyNetwork(network hcnNetwork, settings string, result **uint16) (hr error) { - var _p0 *uint16 - _p0, hr = syscall.UTF16PtrFromString(settings) - if hr != nil { - return - } - return _hcnModifyNetwork(network, _p0, result) -} - -func _hcnModifyNetwork(network hcnNetwork, settings *uint16, result **uint16) (hr error) { - if hr = procHcnModifyNetwork.Find(); hr != nil { - return - } - r0, _, _ := syscall.Syscall(procHcnModifyNetwork.Addr(), 3, uintptr(network), uintptr(unsafe.Pointer(settings)), uintptr(unsafe.Pointer(result))) - if int32(r0) < 0 { - if r0&0x1fff0000 == 0x00070000 { - r0 &= 0xffff - } - hr = syscall.Errno(r0) - } - return -} - -func hcnQueryNetworkProperties(network hcnNetwork, query string, properties **uint16, result **uint16) (hr error) { - var _p0 *uint16 - _p0, hr = syscall.UTF16PtrFromString(query) - if hr != nil { - return - } - return _hcnQueryNetworkProperties(network, _p0, properties, result) -} - -func _hcnQueryNetworkProperties(network hcnNetwork, query *uint16, properties **uint16, result **uint16) (hr error) { - if hr = procHcnQueryNetworkProperties.Find(); hr != nil { - return - } - r0, _, _ := syscall.Syscall6(procHcnQueryNetworkProperties.Addr(), 4, uintptr(network), uintptr(unsafe.Pointer(query)), uintptr(unsafe.Pointer(properties)), uintptr(unsafe.Pointer(result)), 0, 0) - if int32(r0) < 0 { - if r0&0x1fff0000 == 0x00070000 { - r0 &= 0xffff - } - hr = syscall.Errno(r0) - } - return -} - -func hcnDeleteNetwork(id *_guid, result **uint16) (hr error) { - if hr = procHcnDeleteNetwork.Find(); hr != nil { - return - } - r0, _, _ := syscall.Syscall(procHcnDeleteNetwork.Addr(), 2, uintptr(unsafe.Pointer(id)), uintptr(unsafe.Pointer(result)), 0) - if int32(r0) < 0 { - if r0&0x1fff0000 == 0x00070000 { - r0 &= 0xffff - } - hr = syscall.Errno(r0) - } - return -} - -func 
hcnCloseNetwork(network hcnNetwork) (hr error) { - if hr = procHcnCloseNetwork.Find(); hr != nil { - return - } - r0, _, _ := syscall.Syscall(procHcnCloseNetwork.Addr(), 1, uintptr(network), 0, 0) - if int32(r0) < 0 { - if r0&0x1fff0000 == 0x00070000 { - r0 &= 0xffff - } - hr = syscall.Errno(r0) - } - return -} - -func hcnEnumerateEndpoints(query string, endpoints **uint16, result **uint16) (hr error) { - var _p0 *uint16 - _p0, hr = syscall.UTF16PtrFromString(query) - if hr != nil { - return - } - return _hcnEnumerateEndpoints(_p0, endpoints, result) -} - -func _hcnEnumerateEndpoints(query *uint16, endpoints **uint16, result **uint16) (hr error) { - if hr = procHcnEnumerateEndpoints.Find(); hr != nil { - return - } - r0, _, _ := syscall.Syscall(procHcnEnumerateEndpoints.Addr(), 3, uintptr(unsafe.Pointer(query)), uintptr(unsafe.Pointer(endpoints)), uintptr(unsafe.Pointer(result))) - if int32(r0) < 0 { - if r0&0x1fff0000 == 0x00070000 { - r0 &= 0xffff - } - hr = syscall.Errno(r0) - } - return -} - -func hcnCreateEndpoint(network hcnNetwork, id *_guid, settings string, endpoint *hcnEndpoint, result **uint16) (hr error) { - var _p0 *uint16 - _p0, hr = syscall.UTF16PtrFromString(settings) - if hr != nil { - return - } - return _hcnCreateEndpoint(network, id, _p0, endpoint, result) -} - -func _hcnCreateEndpoint(network hcnNetwork, id *_guid, settings *uint16, endpoint *hcnEndpoint, result **uint16) (hr error) { - if hr = procHcnCreateEndpoint.Find(); hr != nil { - return - } - r0, _, _ := syscall.Syscall6(procHcnCreateEndpoint.Addr(), 5, uintptr(network), uintptr(unsafe.Pointer(id)), uintptr(unsafe.Pointer(settings)), uintptr(unsafe.Pointer(endpoint)), uintptr(unsafe.Pointer(result)), 0) - if int32(r0) < 0 { - if r0&0x1fff0000 == 0x00070000 { - r0 &= 0xffff - } - hr = syscall.Errno(r0) - } - return -} - -func hcnOpenEndpoint(id *_guid, endpoint *hcnEndpoint, result **uint16) (hr error) { - if hr = procHcnOpenEndpoint.Find(); hr != nil { - return - } - r0, _, _ := 
syscall.Syscall(procHcnOpenEndpoint.Addr(), 3, uintptr(unsafe.Pointer(id)), uintptr(unsafe.Pointer(endpoint)), uintptr(unsafe.Pointer(result))) - if int32(r0) < 0 { - if r0&0x1fff0000 == 0x00070000 { - r0 &= 0xffff - } - hr = syscall.Errno(r0) - } - return -} - -func hcnModifyEndpoint(endpoint hcnEndpoint, settings string, result **uint16) (hr error) { - var _p0 *uint16 - _p0, hr = syscall.UTF16PtrFromString(settings) - if hr != nil { - return - } - return _hcnModifyEndpoint(endpoint, _p0, result) -} - -func _hcnModifyEndpoint(endpoint hcnEndpoint, settings *uint16, result **uint16) (hr error) { - if hr = procHcnModifyEndpoint.Find(); hr != nil { - return - } - r0, _, _ := syscall.Syscall(procHcnModifyEndpoint.Addr(), 3, uintptr(endpoint), uintptr(unsafe.Pointer(settings)), uintptr(unsafe.Pointer(result))) - if int32(r0) < 0 { - if r0&0x1fff0000 == 0x00070000 { - r0 &= 0xffff - } - hr = syscall.Errno(r0) - } - return -} - -func hcnQueryEndpointProperties(endpoint hcnEndpoint, query string, properties **uint16, result **uint16) (hr error) { - var _p0 *uint16 - _p0, hr = syscall.UTF16PtrFromString(query) - if hr != nil { - return - } - return _hcnQueryEndpointProperties(endpoint, _p0, properties, result) -} - -func _hcnQueryEndpointProperties(endpoint hcnEndpoint, query *uint16, properties **uint16, result **uint16) (hr error) { - if hr = procHcnQueryEndpointProperties.Find(); hr != nil { - return - } - r0, _, _ := syscall.Syscall6(procHcnQueryEndpointProperties.Addr(), 4, uintptr(endpoint), uintptr(unsafe.Pointer(query)), uintptr(unsafe.Pointer(properties)), uintptr(unsafe.Pointer(result)), 0, 0) - if int32(r0) < 0 { - if r0&0x1fff0000 == 0x00070000 { - r0 &= 0xffff - } - hr = syscall.Errno(r0) - } - return -} - -func hcnDeleteEndpoint(id *_guid, result **uint16) (hr error) { - if hr = procHcnDeleteEndpoint.Find(); hr != nil { - return - } - r0, _, _ := syscall.Syscall(procHcnDeleteEndpoint.Addr(), 2, uintptr(unsafe.Pointer(id)), uintptr(unsafe.Pointer(result)), 0) 
- if int32(r0) < 0 { - if r0&0x1fff0000 == 0x00070000 { - r0 &= 0xffff - } - hr = syscall.Errno(r0) - } - return -} - -func hcnCloseEndpoint(endpoint hcnEndpoint) (hr error) { - if hr = procHcnCloseEndpoint.Find(); hr != nil { - return - } - r0, _, _ := syscall.Syscall(procHcnCloseEndpoint.Addr(), 1, uintptr(endpoint), 0, 0) - if int32(r0) < 0 { - if r0&0x1fff0000 == 0x00070000 { - r0 &= 0xffff - } - hr = syscall.Errno(r0) - } - return -} - -func hcnEnumerateNamespaces(query string, namespaces **uint16, result **uint16) (hr error) { - var _p0 *uint16 - _p0, hr = syscall.UTF16PtrFromString(query) - if hr != nil { - return - } - return _hcnEnumerateNamespaces(_p0, namespaces, result) -} - -func _hcnEnumerateNamespaces(query *uint16, namespaces **uint16, result **uint16) (hr error) { - if hr = procHcnEnumerateNamespaces.Find(); hr != nil { - return - } - r0, _, _ := syscall.Syscall(procHcnEnumerateNamespaces.Addr(), 3, uintptr(unsafe.Pointer(query)), uintptr(unsafe.Pointer(namespaces)), uintptr(unsafe.Pointer(result))) - if int32(r0) < 0 { - if r0&0x1fff0000 == 0x00070000 { - r0 &= 0xffff - } - hr = syscall.Errno(r0) - } - return -} - -func hcnCreateNamespace(id *_guid, settings string, namespace *hcnNamespace, result **uint16) (hr error) { - var _p0 *uint16 - _p0, hr = syscall.UTF16PtrFromString(settings) - if hr != nil { - return - } - return _hcnCreateNamespace(id, _p0, namespace, result) -} - -func _hcnCreateNamespace(id *_guid, settings *uint16, namespace *hcnNamespace, result **uint16) (hr error) { - if hr = procHcnCreateNamespace.Find(); hr != nil { - return - } - r0, _, _ := syscall.Syscall6(procHcnCreateNamespace.Addr(), 4, uintptr(unsafe.Pointer(id)), uintptr(unsafe.Pointer(settings)), uintptr(unsafe.Pointer(namespace)), uintptr(unsafe.Pointer(result)), 0, 0) - if int32(r0) < 0 { - if r0&0x1fff0000 == 0x00070000 { - r0 &= 0xffff - } - hr = syscall.Errno(r0) - } - return -} - -func hcnOpenNamespace(id *_guid, namespace *hcnNamespace, result **uint16) (hr 
error) { - if hr = procHcnOpenNamespace.Find(); hr != nil { - return - } - r0, _, _ := syscall.Syscall(procHcnOpenNamespace.Addr(), 3, uintptr(unsafe.Pointer(id)), uintptr(unsafe.Pointer(namespace)), uintptr(unsafe.Pointer(result))) - if int32(r0) < 0 { - if r0&0x1fff0000 == 0x00070000 { - r0 &= 0xffff - } - hr = syscall.Errno(r0) - } - return -} - -func hcnModifyNamespace(namespace hcnNamespace, settings string, result **uint16) (hr error) { - var _p0 *uint16 - _p0, hr = syscall.UTF16PtrFromString(settings) - if hr != nil { - return - } - return _hcnModifyNamespace(namespace, _p0, result) -} - -func _hcnModifyNamespace(namespace hcnNamespace, settings *uint16, result **uint16) (hr error) { - if hr = procHcnModifyNamespace.Find(); hr != nil { - return - } - r0, _, _ := syscall.Syscall(procHcnModifyNamespace.Addr(), 3, uintptr(namespace), uintptr(unsafe.Pointer(settings)), uintptr(unsafe.Pointer(result))) - if int32(r0) < 0 { - if r0&0x1fff0000 == 0x00070000 { - r0 &= 0xffff - } - hr = syscall.Errno(r0) - } - return -} - -func hcnQueryNamespaceProperties(namespace hcnNamespace, query string, properties **uint16, result **uint16) (hr error) { - var _p0 *uint16 - _p0, hr = syscall.UTF16PtrFromString(query) - if hr != nil { - return - } - return _hcnQueryNamespaceProperties(namespace, _p0, properties, result) -} - -func _hcnQueryNamespaceProperties(namespace hcnNamespace, query *uint16, properties **uint16, result **uint16) (hr error) { - if hr = procHcnQueryNamespaceProperties.Find(); hr != nil { - return - } - r0, _, _ := syscall.Syscall6(procHcnQueryNamespaceProperties.Addr(), 4, uintptr(namespace), uintptr(unsafe.Pointer(query)), uintptr(unsafe.Pointer(properties)), uintptr(unsafe.Pointer(result)), 0, 0) - if int32(r0) < 0 { - if r0&0x1fff0000 == 0x00070000 { - r0 &= 0xffff - } - hr = syscall.Errno(r0) - } - return -} - -func hcnDeleteNamespace(id *_guid, result **uint16) (hr error) { - if hr = procHcnDeleteNamespace.Find(); hr != nil { - return - } - r0, _, _ := 
syscall.Syscall(procHcnDeleteNamespace.Addr(), 2, uintptr(unsafe.Pointer(id)), uintptr(unsafe.Pointer(result)), 0) - if int32(r0) < 0 { - if r0&0x1fff0000 == 0x00070000 { - r0 &= 0xffff - } - hr = syscall.Errno(r0) - } - return -} - -func hcnCloseNamespace(namespace hcnNamespace) (hr error) { - if hr = procHcnCloseNamespace.Find(); hr != nil { - return - } - r0, _, _ := syscall.Syscall(procHcnCloseNamespace.Addr(), 1, uintptr(namespace), 0, 0) - if int32(r0) < 0 { - if r0&0x1fff0000 == 0x00070000 { - r0 &= 0xffff - } - hr = syscall.Errno(r0) - } - return -} - -func hcnEnumerateLoadBalancers(query string, loadBalancers **uint16, result **uint16) (hr error) { - var _p0 *uint16 - _p0, hr = syscall.UTF16PtrFromString(query) - if hr != nil { - return - } - return _hcnEnumerateLoadBalancers(_p0, loadBalancers, result) -} - -func _hcnEnumerateLoadBalancers(query *uint16, loadBalancers **uint16, result **uint16) (hr error) { - if hr = procHcnEnumerateLoadBalancers.Find(); hr != nil { - return - } - r0, _, _ := syscall.Syscall(procHcnEnumerateLoadBalancers.Addr(), 3, uintptr(unsafe.Pointer(query)), uintptr(unsafe.Pointer(loadBalancers)), uintptr(unsafe.Pointer(result))) - if int32(r0) < 0 { - if r0&0x1fff0000 == 0x00070000 { - r0 &= 0xffff - } - hr = syscall.Errno(r0) - } - return -} - -func hcnCreateLoadBalancer(id *_guid, settings string, loadBalancer *hcnLoadBalancer, result **uint16) (hr error) { - var _p0 *uint16 - _p0, hr = syscall.UTF16PtrFromString(settings) - if hr != nil { - return - } - return _hcnCreateLoadBalancer(id, _p0, loadBalancer, result) -} - -func _hcnCreateLoadBalancer(id *_guid, settings *uint16, loadBalancer *hcnLoadBalancer, result **uint16) (hr error) { - if hr = procHcnCreateLoadBalancer.Find(); hr != nil { - return - } - r0, _, _ := syscall.Syscall6(procHcnCreateLoadBalancer.Addr(), 4, uintptr(unsafe.Pointer(id)), uintptr(unsafe.Pointer(settings)), uintptr(unsafe.Pointer(loadBalancer)), uintptr(unsafe.Pointer(result)), 0, 0) - if int32(r0) < 0 { 
- if r0&0x1fff0000 == 0x00070000 { - r0 &= 0xffff - } - hr = syscall.Errno(r0) - } - return -} - -func hcnOpenLoadBalancer(id *_guid, loadBalancer *hcnLoadBalancer, result **uint16) (hr error) { - if hr = procHcnOpenLoadBalancer.Find(); hr != nil { - return - } - r0, _, _ := syscall.Syscall(procHcnOpenLoadBalancer.Addr(), 3, uintptr(unsafe.Pointer(id)), uintptr(unsafe.Pointer(loadBalancer)), uintptr(unsafe.Pointer(result))) - if int32(r0) < 0 { - if r0&0x1fff0000 == 0x00070000 { - r0 &= 0xffff - } - hr = syscall.Errno(r0) - } - return -} - -func hcnModifyLoadBalancer(loadBalancer hcnLoadBalancer, settings string, result **uint16) (hr error) { - var _p0 *uint16 - _p0, hr = syscall.UTF16PtrFromString(settings) - if hr != nil { - return - } - return _hcnModifyLoadBalancer(loadBalancer, _p0, result) -} - -func _hcnModifyLoadBalancer(loadBalancer hcnLoadBalancer, settings *uint16, result **uint16) (hr error) { - if hr = procHcnModifyLoadBalancer.Find(); hr != nil { - return - } - r0, _, _ := syscall.Syscall(procHcnModifyLoadBalancer.Addr(), 3, uintptr(loadBalancer), uintptr(unsafe.Pointer(settings)), uintptr(unsafe.Pointer(result))) - if int32(r0) < 0 { - if r0&0x1fff0000 == 0x00070000 { - r0 &= 0xffff - } - hr = syscall.Errno(r0) - } - return -} - -func hcnQueryLoadBalancerProperties(loadBalancer hcnLoadBalancer, query string, properties **uint16, result **uint16) (hr error) { - var _p0 *uint16 - _p0, hr = syscall.UTF16PtrFromString(query) - if hr != nil { - return - } - return _hcnQueryLoadBalancerProperties(loadBalancer, _p0, properties, result) -} - -func _hcnQueryLoadBalancerProperties(loadBalancer hcnLoadBalancer, query *uint16, properties **uint16, result **uint16) (hr error) { - if hr = procHcnQueryLoadBalancerProperties.Find(); hr != nil { - return - } - r0, _, _ := syscall.Syscall6(procHcnQueryLoadBalancerProperties.Addr(), 4, uintptr(loadBalancer), uintptr(unsafe.Pointer(query)), uintptr(unsafe.Pointer(properties)), uintptr(unsafe.Pointer(result)), 0, 0) - if 
int32(r0) < 0 { - if r0&0x1fff0000 == 0x00070000 { - r0 &= 0xffff - } - hr = syscall.Errno(r0) - } - return -} - -func hcnDeleteLoadBalancer(id *_guid, result **uint16) (hr error) { - if hr = procHcnDeleteLoadBalancer.Find(); hr != nil { - return - } - r0, _, _ := syscall.Syscall(procHcnDeleteLoadBalancer.Addr(), 2, uintptr(unsafe.Pointer(id)), uintptr(unsafe.Pointer(result)), 0) - if int32(r0) < 0 { - if r0&0x1fff0000 == 0x00070000 { - r0 &= 0xffff - } - hr = syscall.Errno(r0) - } - return -} - -func hcnCloseLoadBalancer(loadBalancer hcnLoadBalancer) (hr error) { - if hr = procHcnCloseLoadBalancer.Find(); hr != nil { - return - } - r0, _, _ := syscall.Syscall(procHcnCloseLoadBalancer.Addr(), 1, uintptr(loadBalancer), 0, 0) - if int32(r0) < 0 { - if r0&0x1fff0000 == 0x00070000 { - r0 &= 0xffff - } - hr = syscall.Errno(r0) - } - return -} - -func hcnEnumerateRoutes(query string, routes **uint16, result **uint16) (hr error) { - var _p0 *uint16 - _p0, hr = syscall.UTF16PtrFromString(query) - if hr != nil { - return - } - return _hcnEnumerateRoutes(_p0, routes, result) -} - -func _hcnEnumerateRoutes(query *uint16, routes **uint16, result **uint16) (hr error) { - if hr = procHcnEnumerateSdnRoutes.Find(); hr != nil { - return - } - r0, _, _ := syscall.Syscall(procHcnEnumerateSdnRoutes.Addr(), 3, uintptr(unsafe.Pointer(query)), uintptr(unsafe.Pointer(routes)), uintptr(unsafe.Pointer(result))) - if int32(r0) < 0 { - if r0&0x1fff0000 == 0x00070000 { - r0 &= 0xffff - } - hr = syscall.Errno(r0) - } - return -} - -func hcnCreateRoute(id *_guid, settings string, route *hcnRoute, result **uint16) (hr error) { - var _p0 *uint16 - _p0, hr = syscall.UTF16PtrFromString(settings) - if hr != nil { - return - } - return _hcnCreateRoute(id, _p0, route, result) -} - -func _hcnCreateRoute(id *_guid, settings *uint16, route *hcnRoute, result **uint16) (hr error) { - if hr = procHcnCreateSdnRoute.Find(); hr != nil { - return - } - r0, _, _ := syscall.Syscall6(procHcnCreateSdnRoute.Addr(), 
4, uintptr(unsafe.Pointer(id)), uintptr(unsafe.Pointer(settings)), uintptr(unsafe.Pointer(route)), uintptr(unsafe.Pointer(result)), 0, 0) - if int32(r0) < 0 { - if r0&0x1fff0000 == 0x00070000 { - r0 &= 0xffff - } - hr = syscall.Errno(r0) - } - return -} - -func hcnOpenRoute(id *_guid, route *hcnRoute, result **uint16) (hr error) { - if hr = procHcnOpenSdnRoute.Find(); hr != nil { - return - } - r0, _, _ := syscall.Syscall(procHcnOpenSdnRoute.Addr(), 3, uintptr(unsafe.Pointer(id)), uintptr(unsafe.Pointer(route)), uintptr(unsafe.Pointer(result))) - if int32(r0) < 0 { - if r0&0x1fff0000 == 0x00070000 { - r0 &= 0xffff - } - hr = syscall.Errno(r0) - } - return -} - -func hcnModifyRoute(route hcnRoute, settings string, result **uint16) (hr error) { - var _p0 *uint16 - _p0, hr = syscall.UTF16PtrFromString(settings) - if hr != nil { - return - } - return _hcnModifyRoute(route, _p0, result) -} - -func _hcnModifyRoute(route hcnRoute, settings *uint16, result **uint16) (hr error) { - if hr = procHcnModifySdnRoute.Find(); hr != nil { - return - } - r0, _, _ := syscall.Syscall(procHcnModifySdnRoute.Addr(), 3, uintptr(route), uintptr(unsafe.Pointer(settings)), uintptr(unsafe.Pointer(result))) - if int32(r0) < 0 { - if r0&0x1fff0000 == 0x00070000 { - r0 &= 0xffff - } - hr = syscall.Errno(r0) - } - return -} - -func hcnQueryRouteProperties(route hcnRoute, query string, properties **uint16, result **uint16) (hr error) { - var _p0 *uint16 - _p0, hr = syscall.UTF16PtrFromString(query) - if hr != nil { - return - } - return _hcnQueryRouteProperties(route, _p0, properties, result) -} - -func _hcnQueryRouteProperties(route hcnRoute, query *uint16, properties **uint16, result **uint16) (hr error) { - if hr = procHcnQuerySdnRouteProperties.Find(); hr != nil { - return - } - r0, _, _ := syscall.Syscall6(procHcnQuerySdnRouteProperties.Addr(), 4, uintptr(route), uintptr(unsafe.Pointer(query)), uintptr(unsafe.Pointer(properties)), uintptr(unsafe.Pointer(result)), 0, 0) - if int32(r0) < 0 { - 
if r0&0x1fff0000 == 0x00070000 { - r0 &= 0xffff - } - hr = syscall.Errno(r0) - } - return -} - -func hcnDeleteRoute(id *_guid, result **uint16) (hr error) { - if hr = procHcnDeleteSdnRoute.Find(); hr != nil { - return - } - r0, _, _ := syscall.Syscall(procHcnDeleteSdnRoute.Addr(), 2, uintptr(unsafe.Pointer(id)), uintptr(unsafe.Pointer(result)), 0) - if int32(r0) < 0 { - if r0&0x1fff0000 == 0x00070000 { - r0 &= 0xffff - } - hr = syscall.Errno(r0) - } - return -} - -func hcnCloseRoute(route hcnRoute) (hr error) { - if hr = procHcnCloseSdnRoute.Find(); hr != nil { - return - } - r0, _, _ := syscall.Syscall(procHcnCloseSdnRoute.Addr(), 1, uintptr(route), 0, 0) - if int32(r0) < 0 { - if r0&0x1fff0000 == 0x00070000 { - r0 &= 0xffff - } - hr = syscall.Errno(r0) - } - return -} diff --git a/cluster-autoscaler/vendor/github.com/Microsoft/hcsshim/internal/cni/registry.go b/cluster-autoscaler/vendor/github.com/Microsoft/hcsshim/internal/cni/registry.go deleted file mode 100644 index 4a4fcea843f1..000000000000 --- a/cluster-autoscaler/vendor/github.com/Microsoft/hcsshim/internal/cni/registry.go +++ /dev/null @@ -1,110 +0,0 @@ -package cni - -import ( - "errors" - - "github.com/Microsoft/go-winio/pkg/guid" - "github.com/Microsoft/hcsshim/internal/regstate" -) - -const ( - cniRoot = "cni" - cniKey = "cfg" -) - -// PersistedNamespaceConfig is the registry version of the `NamespaceID` to UVM -// map. -type PersistedNamespaceConfig struct { - namespaceID string - stored bool - - ContainerID string - HostUniqueID guid.GUID -} - -// NewPersistedNamespaceConfig creates an in-memory namespace config that can be -// persisted to the registry. 
-func NewPersistedNamespaceConfig(namespaceID, containerID string, containerHostUniqueID guid.GUID) *PersistedNamespaceConfig { - return &PersistedNamespaceConfig{ - namespaceID: namespaceID, - ContainerID: containerID, - HostUniqueID: containerHostUniqueID, - } -} - -// LoadPersistedNamespaceConfig loads a persisted config from the registry that matches -// `namespaceID`. If not found returns `regstate.NotFoundError` -func LoadPersistedNamespaceConfig(namespaceID string) (*PersistedNamespaceConfig, error) { - sk, err := regstate.Open(cniRoot, false) - if err != nil { - return nil, err - } - defer sk.Close() - - pnc := PersistedNamespaceConfig{ - namespaceID: namespaceID, - stored: true, - } - if err := sk.Get(namespaceID, cniKey, &pnc); err != nil { - return nil, err - } - return &pnc, nil -} - -// Store stores or updates the in-memory config to its registry state. If the -// store fails, the store error is returned. -func (pnc *PersistedNamespaceConfig) Store() error { - if pnc.namespaceID == "" { - return errors.New("invalid namespaceID ''") - } - if pnc.ContainerID == "" { - return errors.New("invalid containerID ''") - } - empty := guid.GUID{} - if pnc.HostUniqueID == empty { - return errors.New("invalid containerHostUniqueID 'empty'") - } - sk, err := regstate.Open(cniRoot, false) - if err != nil { - return err - } - defer sk.Close() - - if pnc.stored { - if err := sk.Set(pnc.namespaceID, cniKey, pnc); err != nil { - return err - } - } else { - if err := sk.Create(pnc.namespaceID, cniKey, pnc); err != nil { - return err - } - } - pnc.stored = true - return nil -} - -// Remove removes any persisted state associated with this config. If the config -// is not found in the registry `Remove` returns no error. 
-func (pnc *PersistedNamespaceConfig) Remove() error { - if pnc.stored { - sk, err := regstate.Open(cniRoot, false) - if err != nil { - if regstate.IsNotFoundError(err) { - pnc.stored = false - return nil - } - return err - } - defer sk.Close() - - if err := sk.Remove(pnc.namespaceID); err != nil { - if regstate.IsNotFoundError(err) { - pnc.stored = false - return nil - } - return err - } - } - pnc.stored = false - return nil -} diff --git a/cluster-autoscaler/vendor/github.com/Microsoft/hcsshim/internal/regstate/regstate.go b/cluster-autoscaler/vendor/github.com/Microsoft/hcsshim/internal/regstate/regstate.go deleted file mode 100644 index 6086c1dc523f..000000000000 --- a/cluster-autoscaler/vendor/github.com/Microsoft/hcsshim/internal/regstate/regstate.go +++ /dev/null @@ -1,288 +0,0 @@ -package regstate - -import ( - "encoding/json" - "fmt" - "net/url" - "os" - "path/filepath" - "reflect" - "syscall" - - "golang.org/x/sys/windows" - "golang.org/x/sys/windows/registry" -) - -//go:generate go run $GOROOT/src/syscall/mksyscall_windows.go -output zsyscall_windows.go regstate.go - -//sys regCreateKeyEx(key syscall.Handle, subkey *uint16, reserved uint32, class *uint16, options uint32, desired uint32, sa *syscall.SecurityAttributes, result *syscall.Handle, disposition *uint32) (regerrno error) = advapi32.RegCreateKeyExW - -const ( - _REG_OPTION_VOLATILE = 1 - - _REG_OPENED_EXISTING_KEY = 2 -) - -type Key struct { - registry.Key - Name string -} - -var localMachine = &Key{registry.LOCAL_MACHINE, "HKEY_LOCAL_MACHINE"} -var localUser = &Key{registry.CURRENT_USER, "HKEY_CURRENT_USER"} - -var rootPath = `SOFTWARE\Microsoft\runhcs` - -type NotFoundError struct { - Id string -} - -func (err *NotFoundError) Error() string { - return fmt.Sprintf("ID '%s' was not found", err.Id) -} - -func IsNotFoundError(err error) bool { - _, ok := err.(*NotFoundError) - return ok -} - -type NoStateError struct { - ID string - Key string -} - -func (err *NoStateError) Error() string { - return 
fmt.Sprintf("state '%s' is not present for ID '%s'", err.Key, err.ID) -} - -func createVolatileKey(k *Key, path string, access uint32) (newk *Key, openedExisting bool, err error) { - var ( - h syscall.Handle - d uint32 - ) - fullpath := filepath.Join(k.Name, path) - pathPtr, _ := windows.UTF16PtrFromString(path) - err = regCreateKeyEx(syscall.Handle(k.Key), pathPtr, 0, nil, _REG_OPTION_VOLATILE, access, nil, &h, &d) - if err != nil { - return nil, false, &os.PathError{Op: "RegCreateKeyEx", Path: fullpath, Err: err} - } - return &Key{registry.Key(h), fullpath}, d == _REG_OPENED_EXISTING_KEY, nil -} - -func hive(perUser bool) *Key { - r := localMachine - if perUser { - r = localUser - } - return r -} - -func Open(root string, perUser bool) (*Key, error) { - k, _, err := createVolatileKey(hive(perUser), rootPath, registry.ALL_ACCESS) - if err != nil { - return nil, err - } - defer k.Close() - - k2, _, err := createVolatileKey(k, url.PathEscape(root), registry.ALL_ACCESS) - if err != nil { - return nil, err - } - return k2, nil -} - -func RemoveAll(root string, perUser bool) error { - k, err := hive(perUser).open(rootPath) - if err != nil { - return err - } - defer k.Close() - r, err := k.open(url.PathEscape(root)) - if err != nil { - return err - } - defer r.Close() - ids, err := r.Enumerate() - if err != nil { - return err - } - for _, id := range ids { - err = r.Remove(id) - if err != nil { - return err - } - } - r.Close() - return k.Remove(root) -} - -func (k *Key) Close() error { - err := k.Key.Close() - k.Key = 0 - return err -} - -func (k *Key) Enumerate() ([]string, error) { - escapedIDs, err := k.ReadSubKeyNames(0) - if err != nil { - return nil, err - } - var ids []string - for _, e := range escapedIDs { - id, err := url.PathUnescape(e) - if err == nil { - ids = append(ids, id) - } - } - return ids, nil -} - -func (k *Key) open(name string) (*Key, error) { - fullpath := filepath.Join(k.Name, name) - nk, err := registry.OpenKey(k.Key, name, 
registry.ALL_ACCESS) - if err != nil { - return nil, &os.PathError{Op: "RegOpenKey", Path: fullpath, Err: err} - } - return &Key{nk, fullpath}, nil -} - -func (k *Key) openid(id string) (*Key, error) { - escaped := url.PathEscape(id) - fullpath := filepath.Join(k.Name, escaped) - nk, err := k.open(escaped) - if perr, ok := err.(*os.PathError); ok && perr.Err == syscall.ERROR_FILE_NOT_FOUND { - return nil, &NotFoundError{id} - } - if err != nil { - return nil, &os.PathError{Op: "RegOpenKey", Path: fullpath, Err: err} - } - return nk, nil -} - -func (k *Key) Remove(id string) error { - escaped := url.PathEscape(id) - err := registry.DeleteKey(k.Key, escaped) - if err != nil { - if err == syscall.ERROR_FILE_NOT_FOUND { - return &NotFoundError{id} - } - return &os.PathError{Op: "RegDeleteKey", Path: filepath.Join(k.Name, escaped), Err: err} - } - return nil -} - -func (k *Key) set(id string, create bool, key string, state interface{}) error { - var sk *Key - var err error - if create { - var existing bool - eid := url.PathEscape(id) - sk, existing, err = createVolatileKey(k, eid, registry.ALL_ACCESS) - if err != nil { - return err - } - defer sk.Close() - if existing { - sk.Close() - return fmt.Errorf("container %s already exists", id) - } - } else { - sk, err = k.openid(id) - if err != nil { - return err - } - defer sk.Close() - } - switch reflect.TypeOf(state).Kind() { - case reflect.Bool: - v := uint32(0) - if state.(bool) { - v = 1 - } - err = sk.SetDWordValue(key, v) - case reflect.Int: - err = sk.SetQWordValue(key, uint64(state.(int))) - case reflect.String: - err = sk.SetStringValue(key, state.(string)) - default: - var js []byte - js, err = json.Marshal(state) - if err != nil { - return err - } - err = sk.SetBinaryValue(key, js) - } - if err != nil { - if err == syscall.ERROR_FILE_NOT_FOUND { - return &NoStateError{id, key} - } - return &os.PathError{Op: "RegSetValueEx", Path: sk.Name + ":" + key, Err: err} - } - return nil -} - -func (k *Key) Create(id, key 
string, state interface{}) error { - return k.set(id, true, key, state) -} - -func (k *Key) Set(id, key string, state interface{}) error { - return k.set(id, false, key, state) -} - -func (k *Key) Clear(id, key string) error { - sk, err := k.openid(id) - if err != nil { - return err - } - defer sk.Close() - err = sk.DeleteValue(key) - if err != nil { - if err == syscall.ERROR_FILE_NOT_FOUND { - return &NoStateError{id, key} - } - return &os.PathError{Op: "RegDeleteValue", Path: sk.Name + ":" + key, Err: err} - } - return nil -} - -func (k *Key) Get(id, key string, state interface{}) error { - sk, err := k.openid(id) - if err != nil { - return err - } - defer sk.Close() - - var js []byte - switch reflect.TypeOf(state).Elem().Kind() { - case reflect.Bool: - var v uint64 - v, _, err = sk.GetIntegerValue(key) - if err == nil { - *state.(*bool) = v != 0 - } - case reflect.Int: - var v uint64 - v, _, err = sk.GetIntegerValue(key) - if err == nil { - *state.(*int) = int(v) - } - case reflect.String: - var v string - v, _, err = sk.GetStringValue(key) - if err == nil { - *state.(*string) = string(v) - } - default: - js, _, err = sk.GetBinaryValue(key) - } - if err != nil { - if err == syscall.ERROR_FILE_NOT_FOUND { - return &NoStateError{id, key} - } - return &os.PathError{Op: "RegQueryValueEx", Path: sk.Name + ":" + key, Err: err} - } - if js != nil { - err = json.Unmarshal(js, state) - } - return err -} diff --git a/cluster-autoscaler/vendor/github.com/Microsoft/hcsshim/internal/regstate/zsyscall_windows.go b/cluster-autoscaler/vendor/github.com/Microsoft/hcsshim/internal/regstate/zsyscall_windows.go deleted file mode 100644 index 4e349ad49849..000000000000 --- a/cluster-autoscaler/vendor/github.com/Microsoft/hcsshim/internal/regstate/zsyscall_windows.go +++ /dev/null @@ -1,51 +0,0 @@ -// Code generated by 'go generate'; DO NOT EDIT. 
- -package regstate - -import ( - "syscall" - "unsafe" - - "golang.org/x/sys/windows" -) - -var _ unsafe.Pointer - -// Do the interface allocations only once for common -// Errno values. -const ( - errnoERROR_IO_PENDING = 997 -) - -var ( - errERROR_IO_PENDING error = syscall.Errno(errnoERROR_IO_PENDING) -) - -// errnoErr returns common boxed Errno values, to prevent -// allocations at runtime. -func errnoErr(e syscall.Errno) error { - switch e { - case 0: - return nil - case errnoERROR_IO_PENDING: - return errERROR_IO_PENDING - } - // TODO: add more here, after collecting data on the common - // error values see on Windows. (perhaps when running - // all.bat?) - return e -} - -var ( - modadvapi32 = windows.NewLazySystemDLL("advapi32.dll") - - procRegCreateKeyExW = modadvapi32.NewProc("RegCreateKeyExW") -) - -func regCreateKeyEx(key syscall.Handle, subkey *uint16, reserved uint32, class *uint16, options uint32, desired uint32, sa *syscall.SecurityAttributes, result *syscall.Handle, disposition *uint32) (regerrno error) { - r0, _, _ := syscall.Syscall9(procRegCreateKeyExW.Addr(), 9, uintptr(key), uintptr(unsafe.Pointer(subkey)), uintptr(reserved), uintptr(unsafe.Pointer(class)), uintptr(options), uintptr(desired), uintptr(unsafe.Pointer(sa)), uintptr(unsafe.Pointer(result)), uintptr(unsafe.Pointer(disposition))) - if r0 != 0 { - regerrno = syscall.Errno(r0) - } - return -} diff --git a/cluster-autoscaler/vendor/github.com/Microsoft/hcsshim/internal/runhcs/container.go b/cluster-autoscaler/vendor/github.com/Microsoft/hcsshim/internal/runhcs/container.go deleted file mode 100644 index a161c204e2fa..000000000000 --- a/cluster-autoscaler/vendor/github.com/Microsoft/hcsshim/internal/runhcs/container.go +++ /dev/null @@ -1,71 +0,0 @@ -package runhcs - -import ( - "bytes" - "errors" - "fmt" - "io" - "io/ioutil" - "os" - "syscall" - "time" - - "github.com/Microsoft/go-winio/pkg/guid" -) - -// ContainerState represents the platform agnostic pieces relating to a -// running 
container's status and state -type ContainerState struct { - // Version is the OCI version for the container - Version string `json:"ociVersion"` - // ID is the container ID - ID string `json:"id"` - // InitProcessPid is the init process id in the parent namespace - InitProcessPid int `json:"pid"` - // Status is the current status of the container, running, paused, ... - Status string `json:"status"` - // Bundle is the path on the filesystem to the bundle - Bundle string `json:"bundle"` - // Rootfs is a path to a directory containing the container's root filesystem. - Rootfs string `json:"rootfs"` - // Created is the unix timestamp for the creation time of the container in UTC - Created time.Time `json:"created"` - // Annotations is the user defined annotations added to the config. - Annotations map[string]string `json:"annotations,omitempty"` - // The owner of the state directory (the owner of the container). - Owner string `json:"owner"` -} - -// GetErrorFromPipe reads from `pipe` and verifies whether the operation -// returned success or an error. If it is an error, it converts it to an `error` and returns it. If -// `p` is not nil, it issues a `Kill` and `Wait`s for exit. -func GetErrorFromPipe(pipe io.Reader, p *os.Process) error { - serr, err := ioutil.ReadAll(pipe) - if err != nil { - return err - } - - if bytes.Equal(serr, ShimSuccess) { - return nil - } - - extra := "" - if p != nil { - _ = p.Kill() - state, err := p.Wait() - if err != nil { - panic(err) - } - extra = fmt.Sprintf(", exit code %d", state.Sys().(syscall.WaitStatus).ExitCode) - } - if len(serr) == 0 { - return fmt.Errorf("unknown shim failure%s", extra) - } - - return errors.New(string(serr)) -} - -// VMPipePath returns the named pipe path for the vm shim. 
-func VMPipePath(hostUniqueID guid.GUID) string { - return SafePipePath("runhcs-vm-" + hostUniqueID.String()) -} diff --git a/cluster-autoscaler/vendor/github.com/Microsoft/hcsshim/internal/runhcs/util.go b/cluster-autoscaler/vendor/github.com/Microsoft/hcsshim/internal/runhcs/util.go deleted file mode 100644 index dcbb1903b8f0..000000000000 --- a/cluster-autoscaler/vendor/github.com/Microsoft/hcsshim/internal/runhcs/util.go +++ /dev/null @@ -1,16 +0,0 @@ -package runhcs - -import "net/url" - -const ( - SafePipePrefix = `\\.\pipe\ProtectedPrefix\Administrators\` -) - -// ShimSuccess is the byte stream returned on a successful operation. -var ShimSuccess = []byte{0, 'O', 'K', 0} - -func SafePipePath(name string) string { - // Use a pipe in the Administrators protected prefixed to prevent malicious - // squatting. - return SafePipePrefix + url.PathEscape(name) -} diff --git a/cluster-autoscaler/vendor/github.com/Microsoft/hcsshim/internal/runhcs/vm.go b/cluster-autoscaler/vendor/github.com/Microsoft/hcsshim/internal/runhcs/vm.go deleted file mode 100644 index 2c8957b88df7..000000000000 --- a/cluster-autoscaler/vendor/github.com/Microsoft/hcsshim/internal/runhcs/vm.go +++ /dev/null @@ -1,43 +0,0 @@ -package runhcs - -import ( - "encoding/json" - - "github.com/Microsoft/go-winio" -) - -// VMRequestOp is an operation that can be issued to a VM shim. -type VMRequestOp string - -const ( - // OpCreateContainer is a create container request. - OpCreateContainer VMRequestOp = "create" - // OpSyncNamespace is a `cni.NamespaceTypeGuest` sync request with the UVM. - OpSyncNamespace VMRequestOp = "sync" - // OpUnmountContainer is a container unmount request. - OpUnmountContainer VMRequestOp = "unmount" - // OpUnmountContainerDiskOnly is a container unmount disk request. - OpUnmountContainerDiskOnly VMRequestOp = "unmount-disk" -) - -// VMRequest is an operation request that is issued to a VM shim. 
-type VMRequest struct { - ID string - Op VMRequestOp -} - -// IssueVMRequest issues a request to a shim at the given pipe. -func IssueVMRequest(pipepath string, req *VMRequest) error { - pipe, err := winio.DialPipe(pipepath, nil) - if err != nil { - return err - } - defer pipe.Close() - if err := json.NewEncoder(pipe).Encode(req); err != nil { - return err - } - if err := GetErrorFromPipe(pipe, nil); err != nil { - return err - } - return nil -} diff --git a/cluster-autoscaler/vendor/github.com/cilium/ebpf/ARCHITECTURE.md b/cluster-autoscaler/vendor/github.com/cilium/ebpf/ARCHITECTURE.md index 6cbb31b6481e..8cd7e2486e70 100644 --- a/cluster-autoscaler/vendor/github.com/cilium/ebpf/ARCHITECTURE.md +++ b/cluster-autoscaler/vendor/github.com/cilium/ebpf/ARCHITECTURE.md @@ -78,3 +78,9 @@ tend to use bpf_link to do so. Older hooks unfortunately use a combination of syscalls, netlink messages, etc. Adding support for a new link type should not pull in large dependencies like netlink, so XDP programs or tracepoints are out of scope. + +Each bpf_link_type has one corresponding Go type, e.g. `link.tracing` corresponds +to BPF_LINK_TRACING. In general, these types should be unexported as long as they +don't export methods outside of the Link interface. Each Go type may have multiple +exported constructors. For example `AttachTracing` and `AttachLSM` create a +tracing link, but are distinct functions since they may require different arguments. 
diff --git a/cluster-autoscaler/vendor/github.com/cilium/ebpf/MAINTAINERS.md b/cluster-autoscaler/vendor/github.com/cilium/ebpf/MAINTAINERS.md new file mode 100644 index 000000000000..9c18e7e76f54 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/cilium/ebpf/MAINTAINERS.md @@ -0,0 +1,8 @@ +# Maintainers + + * [Lorenz Bauer] + * [Timo Beckers] (Isovalent) + + +[Lorenz Bauer]: https://github.com/lmb +[Timo Beckers]: https://github.com/ti-mo diff --git a/cluster-autoscaler/vendor/github.com/cilium/ebpf/Makefile b/cluster-autoscaler/vendor/github.com/cilium/ebpf/Makefile index 0bc15c0810cb..2d5f04c370eb 100644 --- a/cluster-autoscaler/vendor/github.com/cilium/ebpf/Makefile +++ b/cluster-autoscaler/vendor/github.com/cilium/ebpf/Makefile @@ -1,17 +1,27 @@ # The development version of clang is distributed as the 'clang' binary, # while stable/released versions have a version number attached. # Pin the default clang to a stable version. -CLANG ?= clang-12 -CFLAGS := -target bpf -O2 -g -Wall -Werror $(CFLAGS) +CLANG ?= clang-14 +STRIP ?= llvm-strip-14 +OBJCOPY ?= llvm-objcopy-14 +CFLAGS := -O2 -g -Wall -Werror $(CFLAGS) + +CI_KERNEL_URL ?= https://github.com/cilium/ci-kernels/raw/master/ # Obtain an absolute path to the directory of the Makefile. # Assume the Makefile is in the root of the repository. REPODIR := $(shell dirname $(realpath $(firstword $(MAKEFILE_LIST)))) UIDGID := $(shell stat -c '%u:%g' ${REPODIR}) +# Prefer podman if installed, otherwise use docker. +# Note: Setting the var at runtime will always override. 
+CONTAINER_ENGINE ?= $(if $(shell command -v podman), podman, docker) +CONTAINER_RUN_ARGS ?= $(if $(filter ${CONTAINER_ENGINE}, podman), --log-driver=none, --user "${UIDGID}") + IMAGE := $(shell cat ${REPODIR}/testdata/docker/IMAGE) VERSION := $(shell cat ${REPODIR}/testdata/docker/VERSION) + # clang <8 doesn't tag relocs properly (STT_NOTYPE) # clang 9 is the first version emitting BTF TARGETS := \ @@ -26,48 +36,75 @@ TARGETS := \ testdata/strings \ testdata/freplace \ testdata/iproute2_map_compat \ - internal/btf/testdata/relocs + testdata/map_spin_lock \ + testdata/subprog_reloc \ + testdata/fwd_decl \ + btf/testdata/relocs \ + btf/testdata/relocs_read \ + btf/testdata/relocs_read_tgt -.PHONY: all clean docker-all docker-shell +.PHONY: all clean container-all container-shell generate -.DEFAULT_TARGET = docker-all +.DEFAULT_TARGET = container-all -# Build all ELF binaries using a Dockerized LLVM toolchain. -docker-all: - docker run --rm --user "${UIDGID}" \ +# Build all ELF binaries using a containerized LLVM toolchain. +container-all: + ${CONTAINER_ENGINE} run --rm ${CONTAINER_RUN_ARGS} \ -v "${REPODIR}":/ebpf -w /ebpf --env MAKEFLAGS \ --env CFLAGS="-fdebug-prefix-map=/ebpf=." \ + --env HOME="/tmp" \ "${IMAGE}:${VERSION}" \ - make all + $(MAKE) all -# (debug) Drop the user into a shell inside the Docker container as root. -docker-shell: - docker run --rm -ti \ +# (debug) Drop the user into a shell inside the container as root. +container-shell: + ${CONTAINER_ENGINE} run --rm -ti \ -v "${REPODIR}":/ebpf -w /ebpf \ "${IMAGE}:${VERSION}" clean: -$(RM) testdata/*.elf - -$(RM) internal/btf/testdata/*.elf + -$(RM) btf/testdata/*.elf -all: $(addsuffix -el.elf,$(TARGETS)) $(addsuffix -eb.elf,$(TARGETS)) +format: + find . 
-type f -name "*.c" | xargs clang-format -i + +all: format $(addsuffix -el.elf,$(TARGETS)) $(addsuffix -eb.elf,$(TARGETS)) generate ln -srf testdata/loader-$(CLANG)-el.elf testdata/loader-el.elf ln -srf testdata/loader-$(CLANG)-eb.elf testdata/loader-eb.elf +# $BPF_CLANG is used in go:generate invocations. +generate: export BPF_CLANG := $(CLANG) +generate: export BPF_CFLAGS := $(CFLAGS) +generate: + go generate ./cmd/bpf2go/test + go generate ./internal/sys + cd examples/ && go generate ./... + testdata/loader-%-el.elf: testdata/loader.c - $* $(CFLAGS) -mlittle-endian -c $< -o $@ + $* $(CFLAGS) -target bpfel -c $< -o $@ + $(STRIP) -g $@ testdata/loader-%-eb.elf: testdata/loader.c - $* $(CFLAGS) -mbig-endian -c $< -o $@ + $* $(CFLAGS) -target bpfeb -c $< -o $@ + $(STRIP) -g $@ %-el.elf: %.c - $(CLANG) $(CFLAGS) -mlittle-endian -c $< -o $@ + $(CLANG) $(CFLAGS) -target bpfel -c $< -o $@ + $(STRIP) -g $@ %-eb.elf : %.c - $(CLANG) $(CFLAGS) -mbig-endian -c $< -o $@ + $(CLANG) $(CFLAGS) -target bpfeb -c $< -o $@ + $(STRIP) -g $@ -# Usage: make VMLINUX=/path/to/vmlinux vmlinux-btf -.PHONY: vmlinux-btf -vmlinux-btf: internal/btf/testdata/vmlinux-btf.gz -internal/btf/testdata/vmlinux-btf.gz: $(VMLINUX) - objcopy --dump-section .BTF=/dev/stdout "$<" /dev/null | gzip > "$@" +.PHONY: generate-btf +generate-btf: KERNEL_VERSION?=5.18 +generate-btf: + $(eval TMP := $(shell mktemp -d)) + curl -fL "$(CI_KERNEL_URL)/linux-$(KERNEL_VERSION).bz" -o "$(TMP)/bzImage" + ./testdata/extract-vmlinux "$(TMP)/bzImage" > "$(TMP)/vmlinux" + $(OBJCOPY) --dump-section .BTF=/dev/stdout "$(TMP)/vmlinux" /dev/null | gzip > "btf/testdata/vmlinux.btf.gz" + curl -fL "$(CI_KERNEL_URL)/linux-$(KERNEL_VERSION)-selftests-bpf.tgz" -o "$(TMP)/selftests.tgz" + tar -xf "$(TMP)/selftests.tgz" --to-stdout tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.ko | \ + $(OBJCOPY) --dump-section .BTF="btf/testdata/btf_testmod.btf" - /dev/null + $(RM) -r "$(TMP)" diff --git 
a/cluster-autoscaler/vendor/github.com/cilium/ebpf/README.md b/cluster-autoscaler/vendor/github.com/cilium/ebpf/README.md index 01e2fff92bbc..3e490de71101 100644 --- a/cluster-autoscaler/vendor/github.com/cilium/ebpf/README.md +++ b/cluster-autoscaler/vendor/github.com/cilium/ebpf/README.md @@ -45,13 +45,17 @@ This library includes the following packages: `PERF_EVENT_ARRAY` * [ringbuf](https://pkg.go.dev/github.com/cilium/ebpf/ringbuf) allows reading from a `BPF_MAP_TYPE_RINGBUF` map - +* [features](https://pkg.go.dev/github.com/cilium/ebpf/features) implements the equivalent + of `bpftool feature probe` for discovering BPF-related kernel features using native Go. +* [rlimit](https://pkg.go.dev/github.com/cilium/ebpf/rlimit) provides a convenient API to lift + the `RLIMIT_MEMLOCK` constraint on kernels before 5.11. ## Requirements * A version of Go that is [supported by upstream](https://golang.org/doc/devel/release.html#policy) -* Linux >= 4.9. CI is run against LTS releases. +* Linux >= 4.9. CI is run against kernel.org LTS releases. 4.4 should work but is + not tested against. ## Regenerating Testdata @@ -59,6 +63,9 @@ Run `make` in the root of this repository to rebuild testdata in all subpackages. This requires Docker, as it relies on a standardized build environment to keep the build output stable. +It is possible to regenerate data using Podman by overriding the `CONTAINER_*` +variables: `CONTAINER_ENGINE=podman CONTAINER_RUN_ARGS= make`. + The toolchain image build files are kept in [testdata/docker/](testdata/docker/). ## License diff --git a/cluster-autoscaler/vendor/github.com/cilium/ebpf/asm/func.go b/cluster-autoscaler/vendor/github.com/cilium/ebpf/asm/func.go index bfa5d59c976e..a14e9e2c3ce9 100644 --- a/cluster-autoscaler/vendor/github.com/cilium/ebpf/asm/func.go +++ b/cluster-autoscaler/vendor/github.com/cilium/ebpf/asm/func.go @@ -5,6 +5,10 @@ package asm // BuiltinFunc is a built-in eBPF function. 
type BuiltinFunc int32 +func (_ BuiltinFunc) Max() BuiltinFunc { + return maxBuiltinFunc - 1 +} + // eBPF built-in functions // // You can regenerate this list using the following gawk script: @@ -190,6 +194,43 @@ const ( FnSysBpf FnBtfFindByNameKind FnSysClose + FnTimerInit + FnTimerSetCallback + FnTimerStart + FnTimerCancel + FnGetFuncIp + FnGetAttachCookie + FnTaskPtRegs + FnGetBranchSnapshot + FnTraceVprintk + FnSkcToUnixSock + FnKallsymsLookupName + FnFindVma + FnLoop + FnStrncmp + FnGetFuncArg + FnGetFuncRet + FnGetFuncArgCnt + FnGetRetval + FnSetRetval + FnXdpGetBuffLen + FnXdpLoadBytes + FnXdpStoreBytes + FnCopyFromUserTask + FnSkbSetTstamp + FnImaFileHash + FnKptrXchg + FnMapLookupPercpuElem + FnSkcToMptcpSock + FnDynptrFromMem + FnRingbufReserveDynptr + FnRingbufSubmitDynptr + FnRingbufDiscardDynptr + FnDynptrRead + FnDynptrWrite + FnDynptrData + + maxBuiltinFunc ) // Call emits a function call. diff --git a/cluster-autoscaler/vendor/github.com/cilium/ebpf/asm/func_string.go b/cluster-autoscaler/vendor/github.com/cilium/ebpf/asm/func_string.go index 5a0e333639a3..b7431b7f605a 100644 --- a/cluster-autoscaler/vendor/github.com/cilium/ebpf/asm/func_string.go +++ b/cluster-autoscaler/vendor/github.com/cilium/ebpf/asm/func_string.go @@ -177,11 +177,47 @@ func _() { _ = x[FnSysBpf-166] _ = x[FnBtfFindByNameKind-167] _ = x[FnSysClose-168] + _ = x[FnTimerInit-169] + _ = x[FnTimerSetCallback-170] + _ = x[FnTimerStart-171] + _ = x[FnTimerCancel-172] + _ = x[FnGetFuncIp-173] + _ = x[FnGetAttachCookie-174] + _ = x[FnTaskPtRegs-175] + _ = x[FnGetBranchSnapshot-176] + _ = x[FnTraceVprintk-177] + _ = x[FnSkcToUnixSock-178] + _ = x[FnKallsymsLookupName-179] + _ = x[FnFindVma-180] + _ = x[FnLoop-181] + _ = x[FnStrncmp-182] + _ = x[FnGetFuncArg-183] + _ = x[FnGetFuncRet-184] + _ = x[FnGetFuncArgCnt-185] + _ = x[FnGetRetval-186] + _ = x[FnSetRetval-187] + _ = x[FnXdpGetBuffLen-188] + _ = x[FnXdpLoadBytes-189] + _ = x[FnXdpStoreBytes-190] + _ = x[FnCopyFromUserTask-191] + _ 
= x[FnSkbSetTstamp-192] + _ = x[FnImaFileHash-193] + _ = x[FnKptrXchg-194] + _ = x[FnMapLookupPercpuElem-195] + _ = x[FnSkcToMptcpSock-196] + _ = x[FnDynptrFromMem-197] + _ = x[FnRingbufReserveDynptr-198] + _ = x[FnRingbufSubmitDynptr-199] + _ = x[FnRingbufDiscardDynptr-200] + _ = x[FnDynptrRead-201] + _ = x[FnDynptrWrite-202] + _ = x[FnDynptrData-203] + _ = x[maxBuiltinFunc-204] } -const _BuiltinFunc_name = "FnUnspecFnMapLookupElemFnMapUpdateElemFnMapDeleteElemFnProbeReadFnKtimeGetNsFnTracePrintkFnGetPrandomU32FnGetSmpProcessorIdFnSkbStoreBytesFnL3CsumReplaceFnL4CsumReplaceFnTailCallFnCloneRedirectFnGetCurrentPidTgidFnGetCurrentUidGidFnGetCurrentCommFnGetCgroupClassidFnSkbVlanPushFnSkbVlanPopFnSkbGetTunnelKeyFnSkbSetTunnelKeyFnPerfEventReadFnRedirectFnGetRouteRealmFnPerfEventOutputFnSkbLoadBytesFnGetStackidFnCsumDiffFnSkbGetTunnelOptFnSkbSetTunnelOptFnSkbChangeProtoFnSkbChangeTypeFnSkbUnderCgroupFnGetHashRecalcFnGetCurrentTaskFnProbeWriteUserFnCurrentTaskUnderCgroupFnSkbChangeTailFnSkbPullDataFnCsumUpdateFnSetHashInvalidFnGetNumaNodeIdFnSkbChangeHeadFnXdpAdjustHeadFnProbeReadStrFnGetSocketCookieFnGetSocketUidFnSetHashFnSetsockoptFnSkbAdjustRoomFnRedirectMapFnSkRedirectMapFnSockMapUpdateFnXdpAdjustMetaFnPerfEventReadValueFnPerfProgReadValueFnGetsockoptFnOverrideReturnFnSockOpsCbFlagsSetFnMsgRedirectMapFnMsgApplyBytesFnMsgCorkBytesFnMsgPullDataFnBindFnXdpAdjustTailFnSkbGetXfrmStateFnGetStackFnSkbLoadBytesRelativeFnFibLookupFnSockHashUpdateFnMsgRedirectHashFnSkRedirectHashFnLwtPushEncapFnLwtSeg6StoreBytesFnLwtSeg6AdjustSrhFnLwtSeg6ActionFnRcRepeatFnRcKeydownFnSkbCgroupIdFnGetCurrentCgroupIdFnGetLocalStorageFnSkSelectReuseportFnSkbAncestorCgroupIdFnSkLookupTcpFnSkLookupUdpFnSkReleaseFnMapPushElemFnMapPopElemFnMapPeekElemFnMsgPushDataFnMsgPopDataFnRcPointerRelFnSpinLockFnSpinUnlockFnSkFullsockFnTcpSockFnSkbEcnSetCeFnGetListenerSockFnSkcLookupTcpFnTcpCheckSyncookieFnSysctlGetNameFnSysctlGetCurrentValueFnSysctlGetNewValueFnSysctlSetNewValueFnStrtolFnStrtoulFnSkStorageGetF
nSkStorageDeleteFnSendSignalFnTcpGenSyncookieFnSkbOutputFnProbeReadUserFnProbeReadKernelFnProbeReadUserStrFnProbeReadKernelStrFnTcpSendAckFnSendSignalThreadFnJiffies64FnReadBranchRecordsFnGetNsCurrentPidTgidFnXdpOutputFnGetNetnsCookieFnGetCurrentAncestorCgroupIdFnSkAssignFnKtimeGetBootNsFnSeqPrintfFnSeqWriteFnSkCgroupIdFnSkAncestorCgroupIdFnRingbufOutputFnRingbufReserveFnRingbufSubmitFnRingbufDiscardFnRingbufQueryFnCsumLevelFnSkcToTcp6SockFnSkcToTcpSockFnSkcToTcpTimewaitSockFnSkcToTcpRequestSockFnSkcToUdp6SockFnGetTaskStackFnLoadHdrOptFnStoreHdrOptFnReserveHdrOptFnInodeStorageGetFnInodeStorageDeleteFnDPathFnCopyFromUserFnSnprintfBtfFnSeqPrintfBtfFnSkbCgroupClassidFnRedirectNeighFnPerCpuPtrFnThisCpuPtrFnRedirectPeerFnTaskStorageGetFnTaskStorageDeleteFnGetCurrentTaskBtfFnBprmOptsSetFnKtimeGetCoarseNsFnImaInodeHashFnSockFromFileFnCheckMtuFnForEachMapElemFnSnprintfFnSysBpfFnBtfFindByNameKindFnSysClose" +const _BuiltinFunc_name = "FnUnspecFnMapLookupElemFnMapUpdateElemFnMapDeleteElemFnProbeReadFnKtimeGetNsFnTracePrintkFnGetPrandomU32FnGetSmpProcessorIdFnSkbStoreBytesFnL3CsumReplaceFnL4CsumReplaceFnTailCallFnCloneRedirectFnGetCurrentPidTgidFnGetCurrentUidGidFnGetCurrentCommFnGetCgroupClassidFnSkbVlanPushFnSkbVlanPopFnSkbGetTunnelKeyFnSkbSetTunnelKeyFnPerfEventReadFnRedirectFnGetRouteRealmFnPerfEventOutputFnSkbLoadBytesFnGetStackidFnCsumDiffFnSkbGetTunnelOptFnSkbSetTunnelOptFnSkbChangeProtoFnSkbChangeTypeFnSkbUnderCgroupFnGetHashRecalcFnGetCurrentTaskFnProbeWriteUserFnCurrentTaskUnderCgroupFnSkbChangeTailFnSkbPullDataFnCsumUpdateFnSetHashInvalidFnGetNumaNodeIdFnSkbChangeHeadFnXdpAdjustHeadFnProbeReadStrFnGetSocketCookieFnGetSocketUidFnSetHashFnSetsockoptFnSkbAdjustRoomFnRedirectMapFnSkRedirectMapFnSockMapUpdateFnXdpAdjustMetaFnPerfEventReadValueFnPerfProgReadValueFnGetsockoptFnOverrideReturnFnSockOpsCbFlagsSetFnMsgRedirectMapFnMsgApplyBytesFnMsgCorkBytesFnMsgPullDataFnBindFnXdpAdjustTailFnSkbGetXfrmStateFnGetStackFnSkbLoadBytesRelativeFnFibLookupFnSockHashUpdateFnMsgRedire
ctHashFnSkRedirectHashFnLwtPushEncapFnLwtSeg6StoreBytesFnLwtSeg6AdjustSrhFnLwtSeg6ActionFnRcRepeatFnRcKeydownFnSkbCgroupIdFnGetCurrentCgroupIdFnGetLocalStorageFnSkSelectReuseportFnSkbAncestorCgroupIdFnSkLookupTcpFnSkLookupUdpFnSkReleaseFnMapPushElemFnMapPopElemFnMapPeekElemFnMsgPushDataFnMsgPopDataFnRcPointerRelFnSpinLockFnSpinUnlockFnSkFullsockFnTcpSockFnSkbEcnSetCeFnGetListenerSockFnSkcLookupTcpFnTcpCheckSyncookieFnSysctlGetNameFnSysctlGetCurrentValueFnSysctlGetNewValueFnSysctlSetNewValueFnStrtolFnStrtoulFnSkStorageGetFnSkStorageDeleteFnSendSignalFnTcpGenSyncookieFnSkbOutputFnProbeReadUserFnProbeReadKernelFnProbeReadUserStrFnProbeReadKernelStrFnTcpSendAckFnSendSignalThreadFnJiffies64FnReadBranchRecordsFnGetNsCurrentPidTgidFnXdpOutputFnGetNetnsCookieFnGetCurrentAncestorCgroupIdFnSkAssignFnKtimeGetBootNsFnSeqPrintfFnSeqWriteFnSkCgroupIdFnSkAncestorCgroupIdFnRingbufOutputFnRingbufReserveFnRingbufSubmitFnRingbufDiscardFnRingbufQueryFnCsumLevelFnSkcToTcp6SockFnSkcToTcpSockFnSkcToTcpTimewaitSockFnSkcToTcpRequestSockFnSkcToUdp6SockFnGetTaskStackFnLoadHdrOptFnStoreHdrOptFnReserveHdrOptFnInodeStorageGetFnInodeStorageDeleteFnDPathFnCopyFromUserFnSnprintfBtfFnSeqPrintfBtfFnSkbCgroupClassidFnRedirectNeighFnPerCpuPtrFnThisCpuPtrFnRedirectPeerFnTaskStorageGetFnTaskStorageDeleteFnGetCurrentTaskBtfFnBprmOptsSetFnKtimeGetCoarseNsFnImaInodeHashFnSockFromFileFnCheckMtuFnForEachMapElemFnSnprintfFnSysBpfFnBtfFindByNameKindFnSysCloseFnTimerInitFnTimerSetCallbackFnTimerStartFnTimerCancelFnGetFuncIpFnGetAttachCookieFnTaskPtRegsFnGetBranchSnapshotFnTraceVprintkFnSkcToUnixSockFnKallsymsLookupNameFnFindVmaFnLoopFnStrncmpFnGetFuncArgFnGetFuncRetFnGetFuncArgCntFnGetRetvalFnSetRetvalFnXdpGetBuffLenFnXdpLoadBytesFnXdpStoreBytesFnCopyFromUserTaskFnSkbSetTstampFnImaFileHashFnKptrXchgFnMapLookupPercpuElemFnSkcToMptcpSockFnDynptrFromMemFnRingbufReserveDynptrFnRingbufSubmitDynptrFnRingbufDiscardDynptrFnDynptrReadFnDynptrWriteFnDynptrDatamaxBuiltinFunc" -var _BuiltinFunc_index = [...]uint16{0, 8, 
23, 38, 53, 64, 76, 89, 104, 123, 138, 153, 168, 178, 193, 212, 230, 246, 264, 277, 289, 306, 323, 338, 348, 363, 380, 394, 406, 416, 433, 450, 466, 481, 497, 512, 528, 544, 568, 583, 596, 608, 624, 639, 654, 669, 683, 700, 714, 723, 735, 750, 763, 778, 793, 808, 828, 847, 859, 875, 894, 910, 925, 939, 952, 958, 973, 990, 1000, 1022, 1033, 1049, 1066, 1082, 1096, 1115, 1133, 1148, 1158, 1169, 1182, 1202, 1219, 1238, 1259, 1272, 1285, 1296, 1309, 1321, 1334, 1347, 1359, 1373, 1383, 1395, 1407, 1416, 1429, 1446, 1460, 1479, 1494, 1517, 1536, 1555, 1563, 1572, 1586, 1603, 1615, 1632, 1643, 1658, 1675, 1693, 1713, 1725, 1743, 1754, 1773, 1794, 1805, 1821, 1849, 1859, 1875, 1886, 1896, 1908, 1928, 1943, 1959, 1974, 1990, 2004, 2015, 2030, 2044, 2066, 2087, 2102, 2116, 2128, 2141, 2156, 2173, 2193, 2200, 2214, 2227, 2241, 2259, 2274, 2285, 2297, 2311, 2327, 2346, 2365, 2378, 2396, 2410, 2424, 2434, 2450, 2460, 2468, 2487, 2497} +var _BuiltinFunc_index = [...]uint16{0, 8, 23, 38, 53, 64, 76, 89, 104, 123, 138, 153, 168, 178, 193, 212, 230, 246, 264, 277, 289, 306, 323, 338, 348, 363, 380, 394, 406, 416, 433, 450, 466, 481, 497, 512, 528, 544, 568, 583, 596, 608, 624, 639, 654, 669, 683, 700, 714, 723, 735, 750, 763, 778, 793, 808, 828, 847, 859, 875, 894, 910, 925, 939, 952, 958, 973, 990, 1000, 1022, 1033, 1049, 1066, 1082, 1096, 1115, 1133, 1148, 1158, 1169, 1182, 1202, 1219, 1238, 1259, 1272, 1285, 1296, 1309, 1321, 1334, 1347, 1359, 1373, 1383, 1395, 1407, 1416, 1429, 1446, 1460, 1479, 1494, 1517, 1536, 1555, 1563, 1572, 1586, 1603, 1615, 1632, 1643, 1658, 1675, 1693, 1713, 1725, 1743, 1754, 1773, 1794, 1805, 1821, 1849, 1859, 1875, 1886, 1896, 1908, 1928, 1943, 1959, 1974, 1990, 2004, 2015, 2030, 2044, 2066, 2087, 2102, 2116, 2128, 2141, 2156, 2173, 2193, 2200, 2214, 2227, 2241, 2259, 2274, 2285, 2297, 2311, 2327, 2346, 2365, 2378, 2396, 2410, 2424, 2434, 2450, 2460, 2468, 2487, 2497, 2508, 2526, 2538, 2551, 2562, 2579, 2591, 2610, 2624, 2639, 2659, 2668, 2674, 2683, 
2695, 2707, 2722, 2733, 2744, 2759, 2773, 2788, 2806, 2820, 2833, 2843, 2864, 2880, 2895, 2917, 2938, 2960, 2972, 2985, 2997, 3011} func (i BuiltinFunc) String() string { if i < 0 || i >= BuiltinFunc(len(_BuiltinFunc_index)-1) { diff --git a/cluster-autoscaler/vendor/github.com/cilium/ebpf/asm/instruction.go b/cluster-autoscaler/vendor/github.com/cilium/ebpf/asm/instruction.go index 64d717d156d5..f17d88b51869 100644 --- a/cluster-autoscaler/vendor/github.com/cilium/ebpf/asm/instruction.go +++ b/cluster-autoscaler/vendor/github.com/cilium/ebpf/asm/instruction.go @@ -8,8 +8,10 @@ import ( "fmt" "io" "math" + "sort" "strings" + "github.com/cilium/ebpf/internal/sys" "github.com/cilium/ebpf/internal/unix" ) @@ -19,6 +21,10 @@ const InstructionSize = 8 // RawInstructionOffset is an offset in units of raw BPF instructions. type RawInstructionOffset uint64 +var ErrUnreferencedSymbol = errors.New("unreferenced symbol") +var ErrUnsatisfiedMapReference = errors.New("unsatisfied map reference") +var ErrUnsatisfiedProgramReference = errors.New("unsatisfied program reference") + // Bytes returns the offset of an instruction in bytes. func (rio RawInstructionOffset) Bytes() uint64 { return uint64(rio) * InstructionSize @@ -26,50 +32,57 @@ func (rio RawInstructionOffset) Bytes() uint64 { // Instruction is a single eBPF instruction. type Instruction struct { - OpCode OpCode - Dst Register - Src Register - Offset int16 - Constant int64 - Reference string - Symbol string -} - -// Sym creates a symbol. -func (ins Instruction) Sym(name string) Instruction { - ins.Symbol = name - return ins + OpCode OpCode + Dst Register + Src Register + Offset int16 + Constant int64 + + // Metadata contains optional metadata about this instruction. + Metadata Metadata } // Unmarshal decodes a BPF instruction. 
func (ins *Instruction) Unmarshal(r io.Reader, bo binary.ByteOrder) (uint64, error) { - var bi bpfInstruction - err := binary.Read(r, bo, &bi) - if err != nil { + data := make([]byte, InstructionSize) + if _, err := io.ReadFull(r, data); err != nil { return 0, err } - ins.OpCode = bi.OpCode - ins.Offset = bi.Offset - ins.Constant = int64(bi.Constant) - ins.Dst, ins.Src, err = bi.Registers.Unmarshal(bo) - if err != nil { - return 0, fmt.Errorf("can't unmarshal registers: %s", err) + ins.OpCode = OpCode(data[0]) + + regs := data[1] + switch bo { + case binary.LittleEndian: + ins.Dst, ins.Src = Register(regs&0xF), Register(regs>>4) + case binary.BigEndian: + ins.Dst, ins.Src = Register(regs>>4), Register(regs&0xf) } - if !bi.OpCode.IsDWordLoad() { + ins.Offset = int16(bo.Uint16(data[2:4])) + // Convert to int32 before widening to int64 + // to ensure the signed bit is carried over. + ins.Constant = int64(int32(bo.Uint32(data[4:8]))) + + if !ins.OpCode.IsDWordLoad() { return InstructionSize, nil } - var bi2 bpfInstruction - if err := binary.Read(r, bo, &bi2); err != nil { + // Pull another instruction from the stream to retrieve the second + // half of the 64-bit immediate value. + if _, err := io.ReadFull(r, data); err != nil { // No Wrap, to avoid io.EOF clash return 0, errors.New("64bit immediate is missing second half") } - if bi2.OpCode != 0 || bi2.Offset != 0 || bi2.Registers != 0 { + + // Require that all fields other than the value are zero. 
+ if bo.Uint32(data[0:4]) != 0 { return 0, errors.New("64bit immediate has non-zero fields") } - ins.Constant = int64(uint64(uint32(bi2.Constant))<<32 | uint64(uint32(bi.Constant))) + + cons1 := uint32(ins.Constant) + cons2 := int32(bo.Uint32(data[4:8])) + ins.Constant = int64(cons2)<<32 | int64(cons1) return 2 * InstructionSize, nil } @@ -93,14 +106,12 @@ func (ins Instruction) Marshal(w io.Writer, bo binary.ByteOrder) (uint64, error) return 0, fmt.Errorf("can't marshal registers: %s", err) } - bpfi := bpfInstruction{ - ins.OpCode, - regs, - ins.Offset, - cons, - } - - if err := binary.Write(w, bo, &bpfi); err != nil { + data := make([]byte, InstructionSize) + data[0] = byte(ins.OpCode) + data[1] = byte(regs) + bo.PutUint16(data[2:4], uint16(ins.Offset)) + bo.PutUint32(data[4:8], uint32(cons)) + if _, err := w.Write(data); err != nil { return 0, err } @@ -108,42 +119,76 @@ func (ins Instruction) Marshal(w io.Writer, bo binary.ByteOrder) (uint64, error) return InstructionSize, nil } - bpfi = bpfInstruction{ - Constant: int32(ins.Constant >> 32), - } - - if err := binary.Write(w, bo, &bpfi); err != nil { + // The first half of the second part of a double-wide instruction + // must be zero. The second half carries the value. + bo.PutUint32(data[0:4], 0) + bo.PutUint32(data[4:8], uint32(ins.Constant>>32)) + if _, err := w.Write(data); err != nil { return 0, err } return 2 * InstructionSize, nil } +// AssociateMap associates a Map with this Instruction. +// +// Implicitly clears the Instruction's Reference field. +// +// Returns an error if the Instruction is not a map load. +func (ins *Instruction) AssociateMap(m FDer) error { + if !ins.IsLoadFromMap() { + return errors.New("not a load from a map") + } + + ins.Metadata.Set(referenceMeta{}, nil) + ins.Metadata.Set(mapMeta{}, m) + + return nil +} + // RewriteMapPtr changes an instruction to use a new map fd. // // Returns an error if the instruction doesn't load a map. +// +// Deprecated: use AssociateMap instead. 
If you cannot provide a Map, +// wrap an fd in a type implementing FDer. func (ins *Instruction) RewriteMapPtr(fd int) error { - if !ins.OpCode.IsDWordLoad() { - return fmt.Errorf("%s is not a 64 bit load", ins.OpCode) - } - - if ins.Src != PseudoMapFD && ins.Src != PseudoMapValue { + if !ins.IsLoadFromMap() { return errors.New("not a load from a map") } + ins.encodeMapFD(fd) + + return nil +} + +func (ins *Instruction) encodeMapFD(fd int) { // Preserve the offset value for direct map loads. offset := uint64(ins.Constant) & (math.MaxUint32 << 32) rawFd := uint64(uint32(fd)) ins.Constant = int64(offset | rawFd) - return nil } // MapPtr returns the map fd for this instruction. // // The result is undefined if the instruction is not a load from a map, // see IsLoadFromMap. +// +// Deprecated: use Map() instead. func (ins *Instruction) MapPtr() int { - return int(int32(uint64(ins.Constant) & math.MaxUint32)) + // If there is a map associated with the instruction, return its FD. + if fd := ins.Metadata.Get(mapMeta{}); fd != nil { + return fd.(FDer).FD() + } + + // Fall back to the fd stored in the Constant field + return ins.mapFd() +} + +// mapFd returns the map file descriptor stored in the 32 least significant +// bits of ins' Constant field. +func (ins *Instruction) mapFd() int { + return int(int32(ins.Constant)) } // RewriteMapOffset changes the offset of a direct load from a map. @@ -181,6 +226,18 @@ func (ins *Instruction) IsFunctionCall() bool { return ins.OpCode.JumpOp() == Call && ins.Src == PseudoCall } +// IsLoadOfFunctionPointer returns true if the instruction loads a function pointer. +func (ins *Instruction) IsLoadOfFunctionPointer() bool { + return ins.OpCode.IsDWordLoad() && ins.Src == PseudoFunc +} + +// IsFunctionReference returns true if the instruction references another BPF +// function, either by invoking a Call jump operation or by loading a function +// pointer. 
+func (ins *Instruction) IsFunctionReference() bool { + return ins.IsFunctionCall() || ins.IsLoadOfFunctionPointer() +} + // IsBuiltinCall returns true if the instruction is a built-in call, i.e. BPF helper call. func (ins *Instruction) IsBuiltinCall() bool { return ins.OpCode.JumpOp() == Call && ins.Src == R0 && ins.Dst == R0 @@ -213,21 +270,30 @@ func (ins Instruction) Format(f fmt.State, c rune) { } if ins.IsLoadFromMap() { - fd := ins.MapPtr() + fd := ins.mapFd() + m := ins.Map() switch ins.Src { case PseudoMapFD: - fmt.Fprintf(f, "LoadMapPtr dst: %s fd: %d", ins.Dst, fd) + if m != nil { + fmt.Fprintf(f, "LoadMapPtr dst: %s map: %s", ins.Dst, m) + } else { + fmt.Fprintf(f, "LoadMapPtr dst: %s fd: %d", ins.Dst, fd) + } case PseudoMapValue: - fmt.Fprintf(f, "LoadMapValue dst: %s, fd: %d off: %d", ins.Dst, fd, ins.mapOffset()) + if m != nil { + fmt.Fprintf(f, "LoadMapValue dst: %s, map: %s off: %d", ins.Dst, m, ins.mapOffset()) + } else { + fmt.Fprintf(f, "LoadMapValue dst: %s, fd: %d off: %d", ins.Dst, fd, ins.mapOffset()) + } } goto ref } fmt.Fprintf(f, "%v ", op) - switch cls := op.Class(); cls { - case LdClass, LdXClass, StClass, StXClass: + switch cls := op.Class(); { + case cls.isLoadOrStore(): switch op.Mode() { case ImmMode: fmt.Fprintf(f, "dst: %s imm: %d", ins.Dst, ins.Constant) @@ -241,7 +307,7 @@ func (ins Instruction) Format(f fmt.State, c rune) { fmt.Fprintf(f, "dst: %s src: %s", ins.Dst, ins.Src) } - case ALU64Class, ALUClass: + case cls.IsALU(): fmt.Fprintf(f, "dst: %s ", ins.Dst) if op.ALUOp() == Swap || op.Source() == ImmSource { fmt.Fprintf(f, "imm: %d", ins.Constant) @@ -249,7 +315,7 @@ func (ins Instruction) Format(f fmt.State, c rune) { fmt.Fprintf(f, "src: %s", ins.Src) } - case JumpClass: + case cls.IsJump(): switch jop := op.JumpOp(); jop { case Call: if ins.Src == PseudoCall { @@ -270,42 +336,212 @@ func (ins Instruction) Format(f fmt.State, c rune) { } ref: - if ins.Reference != "" { - fmt.Fprintf(f, " <%s>", ins.Reference) + if 
ins.Reference() != "" { + fmt.Fprintf(f, " <%s>", ins.Reference()) } } +func (ins Instruction) equal(other Instruction) bool { + return ins.OpCode == other.OpCode && + ins.Dst == other.Dst && + ins.Src == other.Src && + ins.Offset == other.Offset && + ins.Constant == other.Constant +} + +// Size returns the amount of bytes ins would occupy in binary form. +func (ins Instruction) Size() uint64 { + return uint64(InstructionSize * ins.OpCode.rawInstructions()) +} + +type symbolMeta struct{} + +// WithSymbol marks the Instruction as a Symbol, which other Instructions +// can point to using corresponding calls to WithReference. +func (ins Instruction) WithSymbol(name string) Instruction { + ins.Metadata.Set(symbolMeta{}, name) + return ins +} + +// Sym creates a symbol. +// +// Deprecated: use WithSymbol instead. +func (ins Instruction) Sym(name string) Instruction { + return ins.WithSymbol(name) +} + +// Symbol returns the value ins has been marked with using WithSymbol, +// otherwise returns an empty string. A symbol is often an Instruction +// at the start of a function body. +func (ins Instruction) Symbol() string { + sym, _ := ins.Metadata.Get(symbolMeta{}).(string) + return sym +} + +type referenceMeta struct{} + +// WithReference makes ins reference another Symbol or map by name. +func (ins Instruction) WithReference(ref string) Instruction { + ins.Metadata.Set(referenceMeta{}, ref) + return ins +} + +// Reference returns the Symbol or map name referenced by ins, if any. +func (ins Instruction) Reference() string { + ref, _ := ins.Metadata.Get(referenceMeta{}).(string) + return ref +} + +type mapMeta struct{} + +// Map returns the Map referenced by ins, if any. +// An Instruction will contain a Map if e.g. it references an existing, +// pinned map that was opened during ELF loading. 
+func (ins Instruction) Map() FDer { + fd, _ := ins.Metadata.Get(mapMeta{}).(FDer) + return fd +} + +type sourceMeta struct{} + +// WithSource adds source information about the Instruction. +func (ins Instruction) WithSource(src fmt.Stringer) Instruction { + ins.Metadata.Set(sourceMeta{}, src) + return ins +} + +// Source returns source information about the Instruction. The field is +// present when the compiler emits BTF line info about the Instruction and +// usually contains the line of source code responsible for it. +func (ins Instruction) Source() fmt.Stringer { + str, _ := ins.Metadata.Get(sourceMeta{}).(fmt.Stringer) + return str +} + +// A Comment can be passed to Instruction.WithSource to add a comment +// to an instruction. +type Comment string + +func (s Comment) String() string { + return string(s) +} + +// FDer represents a resource tied to an underlying file descriptor. +// Used as a stand-in for e.g. ebpf.Map since that type cannot be +// imported here and FD() is the only method we rely on. +type FDer interface { + FD() int +} + // Instructions is an eBPF program. type Instructions []Instruction +// Unmarshal unmarshals an Instructions from a binary instruction stream. +// All instructions in insns are replaced by instructions decoded from r. +func (insns *Instructions) Unmarshal(r io.Reader, bo binary.ByteOrder) error { + if len(*insns) > 0 { + *insns = nil + } + + var offset uint64 + for { + var ins Instruction + n, err := ins.Unmarshal(r, bo) + if errors.Is(err, io.EOF) { + break + } + if err != nil { + return fmt.Errorf("offset %d: %w", offset, err) + } + + *insns = append(*insns, ins) + offset += n + } + + return nil +} + +// Name returns the name of the function insns belongs to, if any. +func (insns Instructions) Name() string { + if len(insns) == 0 { + return "" + } + return insns[0].Symbol() +} + func (insns Instructions) String() string { return fmt.Sprint(insns) } +// Size returns the amount of bytes insns would occupy in binary form. 
+func (insns Instructions) Size() uint64 { + var sum uint64 + for _, ins := range insns { + sum += ins.Size() + } + return sum +} + +// AssociateMap updates all Instructions that Reference the given symbol +// to point to an existing Map m instead. +// +// Returns ErrUnreferencedSymbol error if no references to symbol are found +// in insns. If symbol is anything else than the symbol name of map (e.g. +// a bpf2bpf subprogram), an error is returned. +func (insns Instructions) AssociateMap(symbol string, m FDer) error { + if symbol == "" { + return errors.New("empty symbol") + } + + var found bool + for i := range insns { + ins := &insns[i] + if ins.Reference() != symbol { + continue + } + + if err := ins.AssociateMap(m); err != nil { + return err + } + + found = true + } + + if !found { + return fmt.Errorf("symbol %s: %w", symbol, ErrUnreferencedSymbol) + } + + return nil +} + // RewriteMapPtr rewrites all loads of a specific map pointer to a new fd. // -// Returns an error if the symbol isn't used, see IsUnreferencedSymbol. +// Returns ErrUnreferencedSymbol if the symbol isn't used. +// +// Deprecated: use AssociateMap instead. 
func (insns Instructions) RewriteMapPtr(symbol string, fd int) error { if symbol == "" { return errors.New("empty symbol") } - found := false + var found bool for i := range insns { ins := &insns[i] - if ins.Reference != symbol { + if ins.Reference() != symbol { continue } - if err := ins.RewriteMapPtr(fd); err != nil { - return err + if !ins.IsLoadFromMap() { + return errors.New("not a load from a map") } + ins.encodeMapFD(fd) + found = true } if !found { - return &unreferencedSymbolError{symbol} + return fmt.Errorf("symbol %s: %w", symbol, ErrUnreferencedSymbol) } return nil @@ -317,31 +553,61 @@ func (insns Instructions) SymbolOffsets() (map[string]int, error) { offsets := make(map[string]int) for i, ins := range insns { - if ins.Symbol == "" { + if ins.Symbol() == "" { continue } - if _, ok := offsets[ins.Symbol]; ok { - return nil, fmt.Errorf("duplicate symbol %s", ins.Symbol) + if _, ok := offsets[ins.Symbol()]; ok { + return nil, fmt.Errorf("duplicate symbol %s", ins.Symbol()) } - offsets[ins.Symbol] = i + offsets[ins.Symbol()] = i } return offsets, nil } +// FunctionReferences returns a set of symbol names these Instructions make +// bpf-to-bpf calls to. +func (insns Instructions) FunctionReferences() []string { + calls := make(map[string]struct{}) + for _, ins := range insns { + if ins.Constant != -1 { + // BPF-to-BPF calls have -1 constants. + continue + } + + if ins.Reference() == "" { + continue + } + + if !ins.IsFunctionReference() { + continue + } + + calls[ins.Reference()] = struct{}{} + } + + result := make([]string, 0, len(calls)) + for call := range calls { + result = append(result, call) + } + + sort.Strings(result) + return result +} + // ReferenceOffsets returns the set of references and their offset in // the instructions. 
func (insns Instructions) ReferenceOffsets() map[string][]int { offsets := make(map[string][]int) for i, ins := range insns { - if ins.Reference == "" { + if ins.Reference() == "" { continue } - offsets[ins.Reference] = append(offsets[ins.Reference], i) + offsets[ins.Reference()] = append(offsets[ins.Reference()], i) } return offsets @@ -392,18 +658,36 @@ func (insns Instructions) Format(f fmt.State, c rune) { iter := insns.Iterate() for iter.Next() { - if iter.Ins.Symbol != "" { - fmt.Fprintf(f, "%s%s:\n", symIndent, iter.Ins.Symbol) + if iter.Ins.Symbol() != "" { + fmt.Fprintf(f, "%s%s:\n", symIndent, iter.Ins.Symbol()) + } + if src := iter.Ins.Source(); src != nil { + line := strings.TrimSpace(src.String()) + if line != "" { + fmt.Fprintf(f, "%s%*s; %s\n", indent, offsetWidth, " ", line) + } } fmt.Fprintf(f, "%s%*d: %v\n", indent, offsetWidth, iter.Offset, iter.Ins) } } // Marshal encodes a BPF program into the kernel format. +// +// insns may be modified if there are unresolved jumps or bpf2bpf calls. +// +// Returns ErrUnsatisfiedProgramReference if there is a Reference Instruction +// without a matching Symbol Instruction within insns. func (insns Instructions) Marshal(w io.Writer, bo binary.ByteOrder) error { + if err := insns.encodeFunctionReferences(); err != nil { + return err + } + + if err := insns.encodeMapPointers(); err != nil { + return err + } + for i, ins := range insns { - _, err := ins.Marshal(w, bo) - if err != nil { + if _, err := ins.Marshal(w, bo); err != nil { return fmt.Errorf("instruction %d: %w", i, err) } } @@ -429,6 +713,95 @@ func (insns Instructions) Tag(bo binary.ByteOrder) (string, error) { return hex.EncodeToString(h.Sum(nil)[:unix.BPF_TAG_SIZE]), nil } +// encodeFunctionReferences populates the Offset (or Constant, depending on +// the instruction type) field of instructions with a Reference field to point +// to the offset of the corresponding instruction with a matching Symbol field. 
+// +// Only Reference Instructions that are either jumps or BPF function references +// (calls or function pointer loads) are populated. +// +// Returns ErrUnsatisfiedProgramReference if there is a Reference Instruction +// without at least one corresponding Symbol Instruction within insns. +func (insns Instructions) encodeFunctionReferences() error { + // Index the offsets of instructions tagged as a symbol. + symbolOffsets := make(map[string]RawInstructionOffset) + iter := insns.Iterate() + for iter.Next() { + ins := iter.Ins + + if ins.Symbol() == "" { + continue + } + + if _, ok := symbolOffsets[ins.Symbol()]; ok { + return fmt.Errorf("duplicate symbol %s", ins.Symbol()) + } + + symbolOffsets[ins.Symbol()] = iter.Offset + } + + // Find all instructions tagged as references to other symbols. + // Depending on the instruction type, populate their constant or offset + // fields to point to the symbol they refer to within the insn stream. + iter = insns.Iterate() + for iter.Next() { + i := iter.Index + offset := iter.Offset + ins := iter.Ins + + if ins.Reference() == "" { + continue + } + + switch { + case ins.IsFunctionReference() && ins.Constant == -1: + symOffset, ok := symbolOffsets[ins.Reference()] + if !ok { + return fmt.Errorf("%s at insn %d: symbol %q: %w", ins.OpCode, i, ins.Reference(), ErrUnsatisfiedProgramReference) + } + + ins.Constant = int64(symOffset - offset - 1) + + case ins.OpCode.Class().IsJump() && ins.Offset == -1: + symOffset, ok := symbolOffsets[ins.Reference()] + if !ok { + return fmt.Errorf("%s at insn %d: symbol %q: %w", ins.OpCode, i, ins.Reference(), ErrUnsatisfiedProgramReference) + } + + ins.Offset = int16(symOffset - offset - 1) + } + } + + return nil +} + +// encodeMapPointers finds all Map Instructions and encodes their FDs +// into their Constant fields. 
+func (insns Instructions) encodeMapPointers() error { + iter := insns.Iterate() + for iter.Next() { + ins := iter.Ins + + if !ins.IsLoadFromMap() { + continue + } + + m := ins.Map() + if m == nil { + continue + } + + fd := m.FD() + if fd < 0 { + return fmt.Errorf("map %s: %w", m, sys.ErrClosedFd) + } + + ins.encodeMapFD(m.FD()) + } + + return nil +} + // Iterate allows iterating a BPF program while keeping track of // various offsets. // @@ -464,13 +837,6 @@ func (iter *InstructionIterator) Next() bool { return true } -type bpfInstruction struct { - OpCode OpCode - Registers bpfRegisters - Offset int16 - Constant int32 -} - type bpfRegisters uint8 func newBPFRegisters(dst, src Register, bo binary.ByteOrder) (bpfRegisters, error) { @@ -484,28 +850,10 @@ func newBPFRegisters(dst, src Register, bo binary.ByteOrder) (bpfRegisters, erro } } -func (r bpfRegisters) Unmarshal(bo binary.ByteOrder) (dst, src Register, err error) { - switch bo { - case binary.LittleEndian: - return Register(r & 0xF), Register(r >> 4), nil - case binary.BigEndian: - return Register(r >> 4), Register(r & 0xf), nil - default: - return 0, 0, fmt.Errorf("unrecognized ByteOrder %T", bo) - } -} - -type unreferencedSymbolError struct { - symbol string -} - -func (use *unreferencedSymbolError) Error() string { - return fmt.Sprintf("unreferenced symbol %s", use.symbol) -} - // IsUnreferencedSymbol returns true if err was caused by // an unreferenced symbol. +// +// Deprecated: use errors.Is(err, asm.ErrUnreferencedSymbol). 
func IsUnreferencedSymbol(err error) bool { - _, ok := err.(*unreferencedSymbolError) - return ok + return errors.Is(err, ErrUnreferencedSymbol) } diff --git a/cluster-autoscaler/vendor/github.com/cilium/ebpf/asm/jump.go b/cluster-autoscaler/vendor/github.com/cilium/ebpf/asm/jump.go index 7757179de649..e31e42cac52c 100644 --- a/cluster-autoscaler/vendor/github.com/cilium/ebpf/asm/jump.go +++ b/cluster-autoscaler/vendor/github.com/cilium/ebpf/asm/jump.go @@ -60,50 +60,68 @@ func (op JumpOp) Op(source Source) OpCode { return OpCode(JumpClass).SetJumpOp(op).SetSource(source) } -// Imm compares dst to value, and adjusts PC by offset if the condition is fulfilled. +// Imm compares 64 bit dst to 64 bit value (sign extended), and adjusts PC by offset if the condition is fulfilled. func (op JumpOp) Imm(dst Register, value int32, label string) Instruction { - if op == Exit || op == Call || op == Ja { - return Instruction{OpCode: InvalidOpCode} - } + return Instruction{ + OpCode: op.opCode(JumpClass, ImmSource), + Dst: dst, + Offset: -1, + Constant: int64(value), + }.WithReference(label) +} +// Imm32 compares 32 bit dst to 32 bit value, and adjusts PC by offset if the condition is fulfilled. +// Requires kernel 5.1. +func (op JumpOp) Imm32(dst Register, value int32, label string) Instruction { return Instruction{ - OpCode: OpCode(JumpClass).SetJumpOp(op).SetSource(ImmSource), - Dst: dst, - Offset: -1, - Constant: int64(value), - Reference: label, - } + OpCode: op.opCode(Jump32Class, ImmSource), + Dst: dst, + Offset: -1, + Constant: int64(value), + }.WithReference(label) } -// Reg compares dst to src, and adjusts PC by offset if the condition is fulfilled. +// Reg compares 64 bit dst to 64 bit src, and adjusts PC by offset if the condition is fulfilled. 
func (op JumpOp) Reg(dst, src Register, label string) Instruction { - if op == Exit || op == Call || op == Ja { - return Instruction{OpCode: InvalidOpCode} - } + return Instruction{ + OpCode: op.opCode(JumpClass, RegSource), + Dst: dst, + Src: src, + Offset: -1, + }.WithReference(label) +} +// Reg32 compares 32 bit dst to 32 bit src, and adjusts PC by offset if the condition is fulfilled. +// Requires kernel 5.1. +func (op JumpOp) Reg32(dst, src Register, label string) Instruction { return Instruction{ - OpCode: OpCode(JumpClass).SetJumpOp(op).SetSource(RegSource), - Dst: dst, - Src: src, - Offset: -1, - Reference: label, + OpCode: op.opCode(Jump32Class, RegSource), + Dst: dst, + Src: src, + Offset: -1, + }.WithReference(label) +} + +func (op JumpOp) opCode(class Class, source Source) OpCode { + if op == Exit || op == Call || op == Ja { + return InvalidOpCode } + + return OpCode(class).SetJumpOp(op).SetSource(source) } // Label adjusts PC to the address of the label. func (op JumpOp) Label(label string) Instruction { if op == Call { return Instruction{ - OpCode: OpCode(JumpClass).SetJumpOp(Call), - Src: PseudoCall, - Constant: -1, - Reference: label, - } + OpCode: OpCode(JumpClass).SetJumpOp(Call), + Src: PseudoCall, + Constant: -1, + }.WithReference(label) } return Instruction{ - OpCode: OpCode(JumpClass).SetJumpOp(op), - Offset: -1, - Reference: label, - } + OpCode: OpCode(JumpClass).SetJumpOp(op), + Offset: -1, + }.WithReference(label) } diff --git a/cluster-autoscaler/vendor/github.com/cilium/ebpf/asm/metadata.go b/cluster-autoscaler/vendor/github.com/cilium/ebpf/asm/metadata.go new file mode 100644 index 000000000000..dd368a936036 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/cilium/ebpf/asm/metadata.go @@ -0,0 +1,80 @@ +package asm + +// Metadata contains metadata about an instruction. +type Metadata struct { + head *metaElement +} + +type metaElement struct { + next *metaElement + key, value interface{} +} + +// Find the element containing key. 
+// +// Returns nil if there is no such element. +func (m *Metadata) find(key interface{}) *metaElement { + for e := m.head; e != nil; e = e.next { + if e.key == key { + return e + } + } + return nil +} + +// Remove an element from the linked list. +// +// Copies as many elements of the list as necessary to remove r, but doesn't +// perform a full copy. +func (m *Metadata) remove(r *metaElement) { + current := &m.head + for e := m.head; e != nil; e = e.next { + if e == r { + // We've found the element we want to remove. + *current = e.next + + // No need to copy the tail. + return + } + + // There is another element in front of the one we want to remove. + // We have to copy it to be able to change metaElement.next. + cpy := &metaElement{key: e.key, value: e.value} + *current = cpy + current = &cpy.next + } +} + +// Set a key to a value. +// +// If value is nil, the key is removed. Avoids modifying old metadata by +// copying if necessary. +func (m *Metadata) Set(key, value interface{}) { + if e := m.find(key); e != nil { + if e.value == value { + // Key is present and the value is the same. Nothing to do. + return + } + + // Key is present with a different value. Create a copy of the list + // which doesn't have the element in it. + m.remove(e) + } + + // m.head is now a linked list that doesn't contain key. + if value == nil { + return + } + + m.head = &metaElement{key: key, value: value, next: m.head} +} + +// Get the value of a key. +// +// Returns nil if no value with the given key is present. 
+func (m *Metadata) Get(key interface{}) interface{} { + if e := m.find(key); e != nil { + return e.value + } + return nil +} diff --git a/cluster-autoscaler/vendor/github.com/cilium/ebpf/asm/opcode.go b/cluster-autoscaler/vendor/github.com/cilium/ebpf/asm/opcode.go index 6edc3cf5917d..b11917e18bbc 100644 --- a/cluster-autoscaler/vendor/github.com/cilium/ebpf/asm/opcode.go +++ b/cluster-autoscaler/vendor/github.com/cilium/ebpf/asm/opcode.go @@ -7,14 +7,6 @@ import ( //go:generate stringer -output opcode_string.go -type=Class -type encoding int - -const ( - unknownEncoding encoding = iota - loadOrStore - jumpOrALU -) - // Class of operations // // msb lsb @@ -26,31 +18,52 @@ type Class uint8 const classMask OpCode = 0x07 const ( - // LdClass load memory + // LdClass loads immediate values into registers. + // Also used for non-standard load operations from cBPF. LdClass Class = 0x00 - // LdXClass load memory from constant + // LdXClass loads memory into registers. LdXClass Class = 0x01 - // StClass load register from memory + // StClass stores immediate values to memory. StClass Class = 0x02 - // StXClass load register from constant + // StXClass stores registers to memory. StXClass Class = 0x03 - // ALUClass arithmetic operators + // ALUClass describes arithmetic operators. ALUClass Class = 0x04 - // JumpClass jump operators + // JumpClass describes jump operators. JumpClass Class = 0x05 - // ALU64Class arithmetic in 64 bit mode + // Jump32Class describes jump operators with 32-bit comparisons. + // Requires kernel 5.1. + Jump32Class Class = 0x06 + // ALU64Class describes arithmetic operators in 64-bit mode. ALU64Class Class = 0x07 ) -func (cls Class) encoding() encoding { - switch cls { - case LdClass, LdXClass, StClass, StXClass: - return loadOrStore - case ALU64Class, ALUClass, JumpClass: - return jumpOrALU - default: - return unknownEncoding - } +// IsLoad checks if this is either LdClass or LdXClass. 
+func (cls Class) IsLoad() bool { + return cls == LdClass || cls == LdXClass +} + +// IsStore checks if this is either StClass or StXClass. +func (cls Class) IsStore() bool { + return cls == StClass || cls == StXClass +} + +func (cls Class) isLoadOrStore() bool { + return cls.IsLoad() || cls.IsStore() +} + +// IsALU checks if this is either ALUClass or ALU64Class. +func (cls Class) IsALU() bool { + return cls == ALUClass || cls == ALU64Class +} + +// IsJump checks if this is either JumpClass or Jump32Class. +func (cls Class) IsJump() bool { + return cls == JumpClass || cls == Jump32Class +} + +func (cls Class) isJumpOrALU() bool { + return cls.IsJump() || cls.IsALU() } // OpCode is a packed eBPF opcode. @@ -86,7 +99,7 @@ func (op OpCode) Class() Class { // Mode returns the mode for load and store operations. func (op OpCode) Mode() Mode { - if op.Class().encoding() != loadOrStore { + if !op.Class().isLoadOrStore() { return InvalidMode } return Mode(op & modeMask) @@ -94,7 +107,7 @@ func (op OpCode) Mode() Mode { // Size returns the size for load and store operations. func (op OpCode) Size() Size { - if op.Class().encoding() != loadOrStore { + if !op.Class().isLoadOrStore() { return InvalidSize } return Size(op & sizeMask) @@ -102,7 +115,7 @@ func (op OpCode) Size() Size { // Source returns the source for branch and ALU operations. func (op OpCode) Source() Source { - if op.Class().encoding() != jumpOrALU || op.ALUOp() == Swap { + if !op.Class().isJumpOrALU() || op.ALUOp() == Swap { return InvalidSource } return Source(op & sourceMask) @@ -110,7 +123,7 @@ func (op OpCode) Source() Source { // ALUOp returns the ALUOp. func (op OpCode) ALUOp() ALUOp { - if op.Class().encoding() != jumpOrALU { + if !op.Class().IsALU() { return InvalidALUOp } return ALUOp(op & aluMask) @@ -125,18 +138,27 @@ func (op OpCode) Endianness() Endianness { } // JumpOp returns the JumpOp. +// Returns InvalidJumpOp if it doesn't encode a jump. 
func (op OpCode) JumpOp() JumpOp { - if op.Class().encoding() != jumpOrALU { + if !op.Class().IsJump() { return InvalidJumpOp } - return JumpOp(op & jumpMask) + + jumpOp := JumpOp(op & jumpMask) + + // Some JumpOps are only supported by JumpClass, not Jump32Class. + if op.Class() == Jump32Class && (jumpOp == Exit || jumpOp == Call || jumpOp == Ja) { + return InvalidJumpOp + } + + return jumpOp } // SetMode sets the mode on load and store operations. // // Returns InvalidOpCode if op is of the wrong class. func (op OpCode) SetMode(mode Mode) OpCode { - if op.Class().encoding() != loadOrStore || !valid(OpCode(mode), modeMask) { + if !op.Class().isLoadOrStore() || !valid(OpCode(mode), modeMask) { return InvalidOpCode } return (op & ^modeMask) | OpCode(mode) @@ -146,7 +168,7 @@ func (op OpCode) SetMode(mode Mode) OpCode { // // Returns InvalidOpCode if op is of the wrong class. func (op OpCode) SetSize(size Size) OpCode { - if op.Class().encoding() != loadOrStore || !valid(OpCode(size), sizeMask) { + if !op.Class().isLoadOrStore() || !valid(OpCode(size), sizeMask) { return InvalidOpCode } return (op & ^sizeMask) | OpCode(size) @@ -156,7 +178,7 @@ func (op OpCode) SetSize(size Size) OpCode { // // Returns InvalidOpCode if op is of the wrong class. func (op OpCode) SetSource(source Source) OpCode { - if op.Class().encoding() != jumpOrALU || !valid(OpCode(source), sourceMask) { + if !op.Class().isJumpOrALU() || !valid(OpCode(source), sourceMask) { return InvalidOpCode } return (op & ^sourceMask) | OpCode(source) @@ -166,8 +188,7 @@ func (op OpCode) SetSource(source Source) OpCode { // // Returns InvalidOpCode if op is of the wrong class. 
func (op OpCode) SetALUOp(alu ALUOp) OpCode { - class := op.Class() - if (class != ALUClass && class != ALU64Class) || !valid(OpCode(alu), aluMask) { + if !op.Class().IsALU() || !valid(OpCode(alu), aluMask) { return InvalidOpCode } return (op & ^aluMask) | OpCode(alu) @@ -177,17 +198,25 @@ func (op OpCode) SetALUOp(alu ALUOp) OpCode { // // Returns InvalidOpCode if op is of the wrong class. func (op OpCode) SetJumpOp(jump JumpOp) OpCode { - if op.Class() != JumpClass || !valid(OpCode(jump), jumpMask) { + if !op.Class().IsJump() || !valid(OpCode(jump), jumpMask) { + return InvalidOpCode + } + + newOp := (op & ^jumpMask) | OpCode(jump) + + // Check newOp is legal. + if newOp.JumpOp() == InvalidJumpOp { return InvalidOpCode } - return (op & ^jumpMask) | OpCode(jump) + + return newOp } func (op OpCode) String() string { var f strings.Builder - switch class := op.Class(); class { - case LdClass, LdXClass, StClass, StXClass: + switch class := op.Class(); { + case class.isLoadOrStore(): f.WriteString(strings.TrimSuffix(class.String(), "Class")) mode := op.Mode() @@ -204,7 +233,7 @@ func (op OpCode) String() string { f.WriteString("B") } - case ALU64Class, ALUClass: + case class.IsALU(): f.WriteString(op.ALUOp().String()) if op.ALUOp() == Swap { @@ -218,8 +247,13 @@ func (op OpCode) String() string { f.WriteString(strings.TrimSuffix(op.Source().String(), "Source")) } - case JumpClass: + case class.IsJump(): f.WriteString(op.JumpOp().String()) + + if class == Jump32Class { + f.WriteString("32") + } + if jop := op.JumpOp(); jop != Exit && jop != Call { f.WriteString(strings.TrimSuffix(op.Source().String(), "Source")) } diff --git a/cluster-autoscaler/vendor/github.com/cilium/ebpf/asm/opcode_string.go b/cluster-autoscaler/vendor/github.com/cilium/ebpf/asm/opcode_string.go index 079ce1db0b83..58bc3e7e7f08 100644 --- a/cluster-autoscaler/vendor/github.com/cilium/ebpf/asm/opcode_string.go +++ b/cluster-autoscaler/vendor/github.com/cilium/ebpf/asm/opcode_string.go @@ -14,25 
+14,17 @@ func _() { _ = x[StXClass-3] _ = x[ALUClass-4] _ = x[JumpClass-5] + _ = x[Jump32Class-6] _ = x[ALU64Class-7] } -const ( - _Class_name_0 = "LdClassLdXClassStClassStXClassALUClassJumpClass" - _Class_name_1 = "ALU64Class" -) +const _Class_name = "LdClassLdXClassStClassStXClassALUClassJumpClassJump32ClassALU64Class" -var ( - _Class_index_0 = [...]uint8{0, 7, 15, 22, 30, 38, 47} -) +var _Class_index = [...]uint8{0, 7, 15, 22, 30, 38, 47, 58, 68} func (i Class) String() string { - switch { - case 0 <= i && i <= 5: - return _Class_name_0[_Class_index_0[i]:_Class_index_0[i+1]] - case i == 7: - return _Class_name_1 - default: + if i >= Class(len(_Class_index)-1) { return "Class(" + strconv.FormatInt(int64(i), 10) + ")" } + return _Class_name[_Class_index[i]:_Class_index[i+1]] } diff --git a/cluster-autoscaler/vendor/github.com/cilium/ebpf/asm/register.go b/cluster-autoscaler/vendor/github.com/cilium/ebpf/asm/register.go index 76cb44bffc71..dd5d44f1c191 100644 --- a/cluster-autoscaler/vendor/github.com/cilium/ebpf/asm/register.go +++ b/cluster-autoscaler/vendor/github.com/cilium/ebpf/asm/register.go @@ -38,6 +38,7 @@ const ( PseudoMapFD = R1 // BPF_PSEUDO_MAP_FD PseudoMapValue = R2 // BPF_PSEUDO_MAP_VALUE PseudoCall = R1 // BPF_PSEUDO_CALL + PseudoFunc = R4 // BPF_PSEUDO_FUNC ) func (r Register) String() string { diff --git a/cluster-autoscaler/vendor/github.com/cilium/ebpf/btf/btf.go b/cluster-autoscaler/vendor/github.com/cilium/ebpf/btf/btf.go new file mode 100644 index 000000000000..a5969332aaa8 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/cilium/ebpf/btf/btf.go @@ -0,0 +1,897 @@ +package btf + +import ( + "bufio" + "bytes" + "debug/elf" + "encoding/binary" + "errors" + "fmt" + "io" + "math" + "os" + "reflect" + + "github.com/cilium/ebpf/internal" + "github.com/cilium/ebpf/internal/sys" + "github.com/cilium/ebpf/internal/unix" +) + +const btfMagic = 0xeB9F + +// Errors returned by BTF functions. 
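The regenerated `Class.String` above collapses two name tables into one because adding `Jump32Class` makes the enum values contiguous (0..7). The single-table stringer technique, sketched with an illustrative enum rather than the real `Class` values:

```go
package main

import (
	"fmt"
	"strconv"
)

// Contiguous stringer table: all names concatenated into one string,
// with an index array giving each name's start offset. The enum and
// names here are made up for illustration.
type color int

const (
	red color = iota
	green
	blue
)

const colorName = "redgreenblue"

var colorIndex = [...]uint8{0, 3, 8, 12}

func (c color) String() string {
	// Out-of-range values fall back to a numeric form, like Class(9).
	if c < 0 || int(c) >= len(colorIndex)-1 {
		return "color(" + strconv.Itoa(int(c)) + ")"
	}
	return colorName[colorIndex[c]:colorIndex[c+1]]
}

func main() {
	fmt.Println(red, green, blue, color(7))
	// → red green blue color(7)
}
```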
+var ( + ErrNotSupported = internal.ErrNotSupported + ErrNotFound = errors.New("not found") + ErrNoExtendedInfo = errors.New("no extended info") +) + +// ID represents the unique ID of a BTF object. +type ID = sys.BTFID + +// Spec represents decoded BTF. +type Spec struct { + // Data from .BTF. + rawTypes []rawType + strings *stringTable + + // All types contained by the spec. For the base type, the position of + // a type in the slice is its ID. + types types + + // Type IDs indexed by type. + typeIDs map[Type]TypeID + + // Types indexed by essential name. + // Includes all struct flavors and types with the same name. + namedTypes map[essentialName][]Type + + byteOrder binary.ByteOrder +} + +type btfHeader struct { + Magic uint16 + Version uint8 + Flags uint8 + HdrLen uint32 + + TypeOff uint32 + TypeLen uint32 + StringOff uint32 + StringLen uint32 +} + +// typeStart returns the offset from the beginning of the .BTF section +// to the start of its type entries. +func (h *btfHeader) typeStart() int64 { + return int64(h.HdrLen + h.TypeOff) +} + +// stringStart returns the offset from the beginning of the .BTF section +// to the start of its string table. +func (h *btfHeader) stringStart() int64 { + return int64(h.HdrLen + h.StringOff) +} + +// LoadSpec opens file and calls LoadSpecFromReader on it. +func LoadSpec(file string) (*Spec, error) { + fh, err := os.Open(file) + if err != nil { + return nil, err + } + defer fh.Close() + + return LoadSpecFromReader(fh) +} + +// LoadSpecFromReader reads from an ELF or a raw BTF blob. +// +// Returns ErrNotFound if reading from an ELF which contains no BTF. ExtInfos +// may be nil. +func LoadSpecFromReader(rd io.ReaderAt) (*Spec, error) { + file, err := internal.NewSafeELFFile(rd) + if err != nil { + if bo := guessRawBTFByteOrder(rd); bo != nil { + // Try to parse a naked BTF blob. This will return an error if + // we encounter a Datasec, since we can't fix it up. 
+ spec, err := loadRawSpec(io.NewSectionReader(rd, 0, math.MaxInt64), bo, nil, nil) + return spec, err + } + + return nil, err + } + + return loadSpecFromELF(file) +} + +// LoadSpecAndExtInfosFromReader reads from an ELF. +// +// ExtInfos may be nil if the ELF doesn't contain section metadata. +// Returns ErrNotFound if the ELF contains no BTF. +func LoadSpecAndExtInfosFromReader(rd io.ReaderAt) (*Spec, *ExtInfos, error) { + file, err := internal.NewSafeELFFile(rd) + if err != nil { + return nil, nil, err + } + + spec, err := loadSpecFromELF(file) + if err != nil { + return nil, nil, err + } + + extInfos, err := loadExtInfosFromELF(file, spec.types, spec.strings) + if err != nil && !errors.Is(err, ErrNotFound) { + return nil, nil, err + } + + return spec, extInfos, nil +} + +// variableOffsets extracts all symbol offsets from an ELF and indexes them by +// section and variable name. +// +// References to variables in BTF data sections carry unsigned 32-bit offsets. +// Some ELF symbols (e.g. in vmlinux) may point to virtual memory that is well +// beyond this range. Since these symbols cannot be described by BTF info, +// ignore them here. +func variableOffsets(file *internal.SafeELFFile) (map[variable]uint32, error) { + symbols, err := file.Symbols() + if err != nil { + return nil, fmt.Errorf("can't read symbols: %v", err) + } + + variableOffsets := make(map[variable]uint32) + for _, symbol := range symbols { + if idx := symbol.Section; idx >= elf.SHN_LORESERVE && idx <= elf.SHN_HIRESERVE { + // Ignore things like SHN_ABS + continue + } + + if symbol.Value > math.MaxUint32 { + // VarSecinfo offset is u32, cannot reference symbols in higher regions.
+ continue + } + + if int(symbol.Section) >= len(file.Sections) { + return nil, fmt.Errorf("symbol %s: invalid section %d", symbol.Name, symbol.Section) + } + + secName := file.Sections[symbol.Section].Name + variableOffsets[variable{secName, symbol.Name}] = uint32(symbol.Value) + } + + return variableOffsets, nil +} + +func loadSpecFromELF(file *internal.SafeELFFile) (*Spec, error) { + var ( + btfSection *elf.Section + sectionSizes = make(map[string]uint32) + ) + + for _, sec := range file.Sections { + switch sec.Name { + case ".BTF": + btfSection = sec + default: + if sec.Type != elf.SHT_PROGBITS && sec.Type != elf.SHT_NOBITS { + break + } + + if sec.Size > math.MaxUint32 { + return nil, fmt.Errorf("section %s exceeds maximum size", sec.Name) + } + + sectionSizes[sec.Name] = uint32(sec.Size) + } + } + + if btfSection == nil { + return nil, fmt.Errorf("btf: %w", ErrNotFound) + } + + vars, err := variableOffsets(file) + if err != nil { + return nil, err + } + + if btfSection.ReaderAt == nil { + return nil, fmt.Errorf("compressed BTF is not supported") + } + + rawTypes, rawStrings, err := parseBTF(btfSection.ReaderAt, file.ByteOrder, nil) + if err != nil { + return nil, err + } + + err = fixupDatasec(rawTypes, rawStrings, sectionSizes, vars) + if err != nil { + return nil, err + } + + return inflateSpec(rawTypes, rawStrings, file.ByteOrder, nil) +} + +func loadRawSpec(btf io.ReaderAt, bo binary.ByteOrder, + baseTypes types, baseStrings *stringTable) (*Spec, error) { + + rawTypes, rawStrings, err := parseBTF(btf, bo, baseStrings) + if err != nil { + return nil, err + } + + return inflateSpec(rawTypes, rawStrings, bo, baseTypes) +} + +func inflateSpec(rawTypes []rawType, rawStrings *stringTable, bo binary.ByteOrder, + baseTypes types) (*Spec, error) { + + types, err := inflateRawTypes(rawTypes, baseTypes, rawStrings) + if err != nil { + return nil, err + } + + typeIDs, typesByName := indexTypes(types, TypeID(len(baseTypes))) + + return &Spec{ + rawTypes: rawTypes, + 
namedTypes: typesByName, + typeIDs: typeIDs, + types: types, + strings: rawStrings, + byteOrder: bo, + }, nil +} + +func indexTypes(types []Type, typeIDOffset TypeID) (map[Type]TypeID, map[essentialName][]Type) { + namedTypes := 0 + for _, typ := range types { + if typ.TypeName() != "" { + // Do a pre-pass to figure out how big types by name has to be. + // Most types have unique names, so it's OK to ignore essentialName + // here. + namedTypes++ + } + } + + typeIDs := make(map[Type]TypeID, len(types)) + typesByName := make(map[essentialName][]Type, namedTypes) + + for i, typ := range types { + if name := newEssentialName(typ.TypeName()); name != "" { + typesByName[name] = append(typesByName[name], typ) + } + typeIDs[typ] = TypeID(i) + typeIDOffset + } + + return typeIDs, typesByName +} + +// LoadKernelSpec returns the current kernel's BTF information. +// +// Defaults to /sys/kernel/btf/vmlinux and falls back to scanning the file system +// for vmlinux ELFs. Returns an error wrapping ErrNotSupported if BTF is not enabled. +func LoadKernelSpec() (*Spec, error) { + fh, err := os.Open("/sys/kernel/btf/vmlinux") + if err == nil { + defer fh.Close() + + return loadRawSpec(fh, internal.NativeEndian, nil, nil) + } + + file, err := findVMLinux() + if err != nil { + return nil, err + } + defer file.Close() + + return loadSpecFromELF(file) +} + +// findVMLinux scans multiple well-known paths for vmlinux kernel images. 
+func findVMLinux() (*internal.SafeELFFile, error) { + release, err := internal.KernelRelease() + if err != nil { + return nil, err + } + + // use same list of locations as libbpf + // https://github.com/libbpf/libbpf/blob/9a3a42608dbe3731256a5682a125ac1e23bced8f/src/btf.c#L3114-L3122 + locations := []string{ + "/boot/vmlinux-%s", + "/lib/modules/%s/vmlinux-%[1]s", + "/lib/modules/%s/build/vmlinux", + "/usr/lib/modules/%s/kernel/vmlinux", + "/usr/lib/debug/boot/vmlinux-%s", + "/usr/lib/debug/boot/vmlinux-%s.debug", + "/usr/lib/debug/lib/modules/%s/vmlinux", + } + + for _, loc := range locations { + file, err := internal.OpenSafeELFFile(fmt.Sprintf(loc, release)) + if errors.Is(err, os.ErrNotExist) { + continue + } + return file, err + } + + return nil, fmt.Errorf("no BTF found for kernel version %s: %w", release, internal.ErrNotSupported) +} + +// parseBTFHeader parses the header of the .BTF section. +func parseBTFHeader(r io.Reader, bo binary.ByteOrder) (*btfHeader, error) { + var header btfHeader + if err := binary.Read(r, bo, &header); err != nil { + return nil, fmt.Errorf("can't read header: %v", err) + } + + if header.Magic != btfMagic { + return nil, fmt.Errorf("incorrect magic value %v", header.Magic) + } + + if header.Version != 1 { + return nil, fmt.Errorf("unexpected version %v", header.Version) + } + + if header.Flags != 0 { + return nil, fmt.Errorf("unsupported flags %v", header.Flags) + } + + remainder := int64(header.HdrLen) - int64(binary.Size(&header)) + if remainder < 0 { + return nil, errors.New("header length shorter than btfHeader size") + } + + if _, err := io.CopyN(internal.DiscardZeroes{}, r, remainder); err != nil { + return nil, fmt.Errorf("header padding: %v", err) + } + + return &header, nil +} + +func guessRawBTFByteOrder(r io.ReaderAt) binary.ByteOrder { + buf := new(bufio.Reader) + for _, bo := range []binary.ByteOrder{ + binary.LittleEndian, + binary.BigEndian, + } { + buf.Reset(io.NewSectionReader(r, 0, math.MaxInt64)) + if _, err := 
parseBTFHeader(buf, bo); err == nil { + return bo + } + } + + return nil +} + +// parseBTF reads a .BTF section into memory and parses it into a list of +// raw types and a string table. +func parseBTF(btf io.ReaderAt, bo binary.ByteOrder, baseStrings *stringTable) ([]rawType, *stringTable, error) { + buf := internal.NewBufferedSectionReader(btf, 0, math.MaxInt64) + header, err := parseBTFHeader(buf, bo) + if err != nil { + return nil, nil, fmt.Errorf("parsing .BTF header: %v", err) + } + + rawStrings, err := readStringTable(io.NewSectionReader(btf, header.stringStart(), int64(header.StringLen)), + baseStrings) + if err != nil { + return nil, nil, fmt.Errorf("can't read type names: %w", err) + } + + buf.Reset(io.NewSectionReader(btf, header.typeStart(), int64(header.TypeLen))) + rawTypes, err := readTypes(buf, bo, header.TypeLen) + if err != nil { + return nil, nil, fmt.Errorf("can't read types: %w", err) + } + + return rawTypes, rawStrings, nil +} + +type variable struct { + section string + name string +} + +func fixupDatasec(rawTypes []rawType, rawStrings *stringTable, sectionSizes map[string]uint32, variableOffsets map[variable]uint32) error { + for i, rawType := range rawTypes { + if rawType.Kind() != kindDatasec { + continue + } + + name, err := rawStrings.Lookup(rawType.NameOff) + if err != nil { + return err + } + + if name == ".kconfig" || name == ".ksyms" { + return fmt.Errorf("reference to %s: %w", name, ErrNotSupported) + } + + if rawTypes[i].SizeType != 0 { + continue + } + + size, ok := sectionSizes[name] + if !ok { + return fmt.Errorf("data section %s: missing size", name) + } + + rawTypes[i].SizeType = size + + secinfos := rawType.data.([]btfVarSecinfo) + for j, secInfo := range secinfos { + id := int(secInfo.Type - 1) + if id >= len(rawTypes) { + return fmt.Errorf("data section %s: invalid type id %d for variable %d", name, id, j) + } + + varName, err := rawStrings.Lookup(rawTypes[id].NameOff) + if err != nil { + return fmt.Errorf("data section %s: 
can't get name for type %d: %w", name, id, err) + } + + offset, ok := variableOffsets[variable{name, varName}] + if !ok { + return fmt.Errorf("data section %s: missing offset for variable %s", name, varName) + } + + secinfos[j].Offset = offset + } + } + + return nil +} + +// Copy creates a copy of Spec. +func (s *Spec) Copy() *Spec { + types := copyTypes(s.types, nil) + + typeIDOffset := TypeID(0) + if len(s.types) != 0 { + typeIDOffset = s.typeIDs[s.types[0]] + } + typeIDs, typesByName := indexTypes(types, typeIDOffset) + + // NB: Other parts of spec are not copied since they are immutable. + return &Spec{ + s.rawTypes, + s.strings, + types, + typeIDs, + typesByName, + s.byteOrder, + } +} + +type marshalOpts struct { + ByteOrder binary.ByteOrder + StripFuncLinkage bool +} + +func (s *Spec) marshal(opts marshalOpts) ([]byte, error) { + var ( + buf bytes.Buffer + header = new(btfHeader) + headerLen = binary.Size(header) + ) + + // Reserve space for the header. We have to write it last since + // we don't know the size of the type section yet. + _, _ = buf.Write(make([]byte, headerLen)) + + // Write type section, just after the header. + for _, raw := range s.rawTypes { + switch { + case opts.StripFuncLinkage && raw.Kind() == kindFunc: + raw.SetLinkage(StaticFunc) + } + + if err := raw.Marshal(&buf, opts.ByteOrder); err != nil { + return nil, fmt.Errorf("can't marshal BTF: %w", err) + } + } + + typeLen := uint32(buf.Len() - headerLen) + + // Write string section after type section. + stringsLen := s.strings.Length() + buf.Grow(stringsLen) + if err := s.strings.Marshal(&buf); err != nil { + return nil, err + } + + // Fill out the header, and write it out. 
+ header = &btfHeader{ + Magic: btfMagic, + Version: 1, + Flags: 0, + HdrLen: uint32(headerLen), + TypeOff: 0, + TypeLen: typeLen, + StringOff: typeLen, + StringLen: uint32(stringsLen), + } + + raw := buf.Bytes() + err := binary.Write(sliceWriter(raw[:headerLen]), opts.ByteOrder, header) + if err != nil { + return nil, fmt.Errorf("can't write header: %v", err) + } + + return raw, nil +} + +type sliceWriter []byte + +func (sw sliceWriter) Write(p []byte) (int, error) { + if len(p) != len(sw) { + return 0, errors.New("size doesn't match") + } + + return copy(sw, p), nil +} + +// TypeByID returns the BTF Type with the given type ID. +// +// Returns an error wrapping ErrNotFound if a Type with the given ID +// does not exist in the Spec. +func (s *Spec) TypeByID(id TypeID) (Type, error) { + return s.types.ByID(id) +} + +// TypeID returns the ID for a given Type. +// +// Returns an error wrapping ErrNotFound if the type isn't part of the Spec. +func (s *Spec) TypeID(typ Type) (TypeID, error) { + if _, ok := typ.(*Void); ok { + // Equality is weird for void, since it is a zero sized type. + return 0, nil + } + + id, ok := s.typeIDs[typ] + if !ok { + return 0, fmt.Errorf("no ID for type %s: %w", typ, ErrNotFound) + } + + return id, nil +} + +// AnyTypesByName returns a list of BTF Types with the given name. +// +// If the BTF blob describes multiple compilation units like vmlinux, multiple +// Types with the same name and kind can exist, but might not describe the same +// data structure. +// +// Returns an error wrapping ErrNotFound if no matching Type exists in the Spec. +func (s *Spec) AnyTypesByName(name string) ([]Type, error) { + types := s.namedTypes[newEssentialName(name)] + if len(types) == 0 { + return nil, fmt.Errorf("type name %s: %w", name, ErrNotFound) + } + + // Return a copy to prevent changes to namedTypes.
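`Spec.marshal` above reserves zeroed space for the header, writes the type and string sections, and only then patches the header into the front of the buffer through `sliceWriter`, which refuses any write that isn't exactly the reserved size. A minimal sketch of that "reserve, then patch in place" pattern (the `header` struct and `marshal` helper here are illustrative, not the real `btfHeader`):

```go
package main

import (
	"bytes"
	"encoding/binary"
	"errors"
	"fmt"
)

// sliceWriter writes into a fixed-size prefix of an existing byte slice,
// failing if the write isn't exactly that size.
type sliceWriter []byte

func (sw sliceWriter) Write(p []byte) (int, error) {
	if len(p) != len(sw) {
		return 0, errors.New("size doesn't match")
	}
	return copy(sw, p), nil
}

// Illustrative fixed-layout header; only its length is known up front.
type header struct {
	Magic   uint16
	BodyLen uint16
}

func marshal(body []byte) ([]byte, error) {
	var buf bytes.Buffer
	hdrLen := binary.Size(header{})
	buf.Write(make([]byte, hdrLen)) // placeholder for the header
	buf.Write(body)

	// Now that the body length is known, patch the header in place.
	raw := buf.Bytes()
	h := header{Magic: 0xeB9F, BodyLen: uint16(len(body))}
	if err := binary.Write(sliceWriter(raw[:hdrLen]), binary.LittleEndian, &h); err != nil {
		return nil, err
	}
	return raw, nil
}

func main() {
	raw, err := marshal([]byte("hello"))
	fmt.Println(raw, err)
}
```

`binary.Write` encodes the whole struct into a single `Write` call, which is what lets `sliceWriter`'s exact-size check double as a safety net against a header/layout mismatch.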
+ result := make([]Type, 0, len(types)) + for _, t := range types { + // Match against the full name, not just the essential one + // in case the type being looked up is a struct flavor. + if t.TypeName() == name { + result = append(result, t) + } + } + return result, nil +} + +// AnyTypeByName returns a Type with the given name. +// +// Returns an error if multiple types of that name exist. +func (s *Spec) AnyTypeByName(name string) (Type, error) { + types, err := s.AnyTypesByName(name) + if err != nil { + return nil, err + } + + if len(types) > 1 { + return nil, fmt.Errorf("found multiple types: %v", types) + } + + return types[0], nil +} + +// TypeByName searches for a Type with a specific name. Since multiple +// Types with the same name can exist, the parameter typ is taken to +// narrow down the search in case of a clash. +// +// typ must be a non-nil pointer to an implementation of a Type. +// On success, the address of the found Type will be copied to typ. +// +// Returns an error wrapping ErrNotFound if no matching +// Type exists in the Spec. If multiple candidates are found, +// an error is returned. 
+func (s *Spec) TypeByName(name string, typ interface{}) error { + typValue := reflect.ValueOf(typ) + if typValue.Kind() != reflect.Ptr { + return fmt.Errorf("%T is not a pointer", typ) + } + + typPtr := typValue.Elem() + if !typPtr.CanSet() { + return fmt.Errorf("%T cannot be set", typ) + } + + wanted := typPtr.Type() + if !wanted.AssignableTo(reflect.TypeOf((*Type)(nil)).Elem()) { + return fmt.Errorf("%T does not satisfy Type interface", typ) + } + + types, err := s.AnyTypesByName(name) + if err != nil { + return err + } + + var candidate Type + for _, typ := range types { + if reflect.TypeOf(typ) != wanted { + continue + } + + if candidate != nil { + return fmt.Errorf("type %s: multiple candidates for %T", name, typ) + } + + candidate = typ + } + + if candidate == nil { + return fmt.Errorf("type %s: %w", name, ErrNotFound) + } + + typPtr.Set(reflect.ValueOf(candidate)) + + return nil +} + +// LoadSplitSpecFromReader loads split BTF from a reader. +// +// Types from base are used to resolve references in the split BTF. +// The returned Spec only contains types from the split BTF, not from the base. +func LoadSplitSpecFromReader(r io.ReaderAt, base *Spec) (*Spec, error) { + return loadRawSpec(r, internal.NativeEndian, base.types, base.strings) +} + +// TypesIterator iterates over types of a given spec. +type TypesIterator struct { + spec *Spec + index int + // The last visited type in the spec. + Type Type +} + +// Iterate returns the types iterator. +func (s *Spec) Iterate() *TypesIterator { + return &TypesIterator{spec: s, index: 0} +} + +// Next returns true as long as there are any remaining types. +func (iter *TypesIterator) Next() bool { + if len(iter.spec.types) <= iter.index { + return false + } + + iter.Type = iter.spec.types[iter.index] + iter.index++ + return true +} + +// Handle is a reference to BTF loaded into the kernel. +type Handle struct { + fd *sys.FD + + // Size of the raw BTF in bytes. 
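`TypeByName` above uses a reflection idiom worth calling out: the caller passes a non-nil pointer to a concrete implementation of the `Type` interface, and the function fills it in only when a candidate of exactly that dynamic type exists. A standalone sketch of the idiom with an illustrative `Shape` interface (not the library's types):

```go
package main

import (
	"fmt"
	"reflect"
)

type Shape interface{ Name() string }

type Circle struct{}

func (*Circle) Name() string { return "circle" }

type Square struct{}

func (*Square) Name() string { return "square" }

var shapes = []Shape{&Circle{}, &Square{}}

// shapeByType fills *out with the shape whose dynamic type matches out's
// element type, mirroring the checks TypeByName performs.
func shapeByType(out interface{}) error {
	v := reflect.ValueOf(out)
	if v.Kind() != reflect.Ptr {
		return fmt.Errorf("%T is not a pointer", out)
	}

	ptr := v.Elem()
	if !ptr.CanSet() {
		return fmt.Errorf("%T cannot be set", out)
	}

	wanted := ptr.Type()
	if !wanted.AssignableTo(reflect.TypeOf((*Shape)(nil)).Elem()) {
		return fmt.Errorf("%T does not satisfy Shape", out)
	}

	for _, s := range shapes {
		if reflect.TypeOf(s) == wanted {
			ptr.Set(reflect.ValueOf(s))
			return nil
		}
	}
	return fmt.Errorf("no shape of type %s", wanted)
}

func main() {
	var c *Circle
	fmt.Println(shapeByType(&c), c.Name()) // → <nil> circle
}
```

The `AssignableTo` check against the interface's reflected type is what turns a silent type mismatch into an early, descriptive error.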
+ size uint32 +} + +// NewHandle loads BTF into the kernel. +// +// Returns ErrNotSupported if BTF is not supported. +func NewHandle(spec *Spec) (*Handle, error) { + if err := haveBTF(); err != nil { + return nil, err + } + + if spec.byteOrder != internal.NativeEndian { + return nil, fmt.Errorf("can't load %s BTF on %s", spec.byteOrder, internal.NativeEndian) + } + + btf, err := spec.marshal(marshalOpts{ + ByteOrder: internal.NativeEndian, + StripFuncLinkage: haveFuncLinkage() != nil, + }) + if err != nil { + return nil, fmt.Errorf("can't marshal BTF: %w", err) + } + + if uint64(len(btf)) > math.MaxUint32 { + return nil, errors.New("BTF exceeds the maximum size") + } + + attr := &sys.BtfLoadAttr{ + Btf: sys.NewSlicePointer(btf), + BtfSize: uint32(len(btf)), + } + + fd, err := sys.BtfLoad(attr) + if err != nil { + logBuf := make([]byte, 64*1024) + attr.BtfLogBuf = sys.NewSlicePointer(logBuf) + attr.BtfLogSize = uint32(len(logBuf)) + attr.BtfLogLevel = 1 + // NB: The syscall will never return ENOSPC as of 5.18-rc4. + _, _ = sys.BtfLoad(attr) + return nil, internal.ErrorWithLog(err, logBuf) + } + + return &Handle{fd, attr.BtfSize}, nil +} + +// NewHandleFromID returns the BTF handle for a given id. +// +// Prefer calling [ebpf.Program.Handle] or [ebpf.Map.Handle] if possible. +// +// Returns ErrNotExist, if there is no BTF with the given id. +// +// Requires CAP_SYS_ADMIN. +func NewHandleFromID(id ID) (*Handle, error) { + fd, err := sys.BtfGetFdById(&sys.BtfGetFdByIdAttr{ + Id: uint32(id), + }) + if err != nil { + return nil, fmt.Errorf("get FD for ID %d: %w", id, err) + } + + info, err := newHandleInfoFromFD(fd) + if err != nil { + _ = fd.Close() + return nil, err + } + + return &Handle{fd, info.size}, nil +} + +// Spec parses the kernel BTF into Go types. +// +// base is used to decode split BTF and may be nil. 
+func (h *Handle) Spec(base *Spec) (*Spec, error) { + var btfInfo sys.BtfInfo + btfBuffer := make([]byte, h.size) + btfInfo.Btf, btfInfo.BtfSize = sys.NewSlicePointerLen(btfBuffer) + + if err := sys.ObjInfo(h.fd, &btfInfo); err != nil { + return nil, err + } + + var baseTypes types + var baseStrings *stringTable + if base != nil { + baseTypes = base.types + baseStrings = base.strings + } + + return loadRawSpec(bytes.NewReader(btfBuffer), internal.NativeEndian, baseTypes, baseStrings) +} + +// Close destroys the handle. +// +// Subsequent calls to FD will return an invalid value. +func (h *Handle) Close() error { + if h == nil { + return nil + } + + return h.fd.Close() +} + +// FD returns the file descriptor for the handle. +func (h *Handle) FD() int { + return h.fd.Int() +} + +// Info returns metadata about the handle. +func (h *Handle) Info() (*HandleInfo, error) { + return newHandleInfoFromFD(h.fd) +} + +func marshalBTF(types interface{}, strings []byte, bo binary.ByteOrder) []byte { + const minHeaderLength = 24 + + typesLen := uint32(binary.Size(types)) + header := btfHeader{ + Magic: btfMagic, + Version: 1, + HdrLen: minHeaderLength, + TypeOff: 0, + TypeLen: typesLen, + StringOff: typesLen, + StringLen: uint32(len(strings)), + } + + buf := new(bytes.Buffer) + _ = binary.Write(buf, bo, &header) + _ = binary.Write(buf, bo, types) + buf.Write(strings) + + return buf.Bytes() +} + +var haveBTF = internal.FeatureTest("BTF", "5.1", func() error { + var ( + types struct { + Integer btfType + Var btfType + btfVar struct{ Linkage uint32 } + } + strings = []byte{0, 'a', 0} + ) + + // We use a BTF_KIND_VAR here, to make sure that + // the kernel understands BTF at least as well as we + // do. BTF_KIND_VAR was introduced ~5.1. 
+ types.Integer.SetKind(kindPointer) + types.Var.NameOff = 1 + types.Var.SetKind(kindVar) + types.Var.SizeType = 1 + + btf := marshalBTF(&types, strings, internal.NativeEndian) + + fd, err := sys.BtfLoad(&sys.BtfLoadAttr{ + Btf: sys.NewSlicePointer(btf), + BtfSize: uint32(len(btf)), + }) + if errors.Is(err, unix.EINVAL) || errors.Is(err, unix.EPERM) { + // Treat both EINVAL and EPERM as not supported: loading the program + // might still succeed without BTF. + return internal.ErrNotSupported + } + if err != nil { + return err + } + + fd.Close() + return nil +}) + +var haveFuncLinkage = internal.FeatureTest("BTF func linkage", "5.6", func() error { + if err := haveBTF(); err != nil { + return err + } + + var ( + types struct { + FuncProto btfType + Func btfType + } + strings = []byte{0, 'a', 0} + ) + + types.FuncProto.SetKind(kindFuncProto) + types.Func.SetKind(kindFunc) + types.Func.SizeType = 1 // aka FuncProto + types.Func.NameOff = 1 + types.Func.SetLinkage(GlobalFunc) + + btf := marshalBTF(&types, strings, internal.NativeEndian) + + fd, err := sys.BtfLoad(&sys.BtfLoadAttr{ + Btf: sys.NewSlicePointer(btf), + BtfSize: uint32(len(btf)), + }) + if errors.Is(err, unix.EINVAL) { + return internal.ErrNotSupported + } + if err != nil { + return err + } + + fd.Close() + return nil +}) diff --git a/cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/btf/btf_types.go b/cluster-autoscaler/vendor/github.com/cilium/ebpf/btf/btf_types.go similarity index 73% rename from cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/btf/btf_types.go rename to cluster-autoscaler/vendor/github.com/cilium/ebpf/btf/btf_types.go index d98c73ca594a..4810180494e6 100644 --- a/cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/btf/btf_types.go +++ b/cluster-autoscaler/vendor/github.com/cilium/ebpf/btf/btf_types.go @@ -130,13 +130,22 @@ func mask(len uint32) uint32 { return (1 << len) - 1 } +func readBits(value, len, shift uint32) uint32 { + return (value >> shift) & mask(len) 
+} + +func writeBits(value, len, shift, new uint32) uint32 { + value &^= mask(len) << shift + value |= (new & mask(len)) << shift + return value +} + func (bt *btfType) info(len, shift uint32) uint32 { - return (bt.Info >> shift) & mask(len) + return readBits(bt.Info, len, shift) } func (bt *btfType) setInfo(value, len, shift uint32) { - bt.Info &^= mask(len) << shift - bt.Info |= (value & mask(len)) << shift + bt.Info = writeBits(bt.Info, len, shift, value) } func (bt *btfType) Kind() btfKind { @@ -177,6 +186,10 @@ func (bt *btfType) Size() uint32 { return bt.SizeType } +func (bt *btfType) SetSize(size uint32) { + bt.SizeType = size +} + type rawType struct { btfType data interface{} @@ -194,6 +207,50 @@ func (rt *rawType) Marshal(w io.Writer, bo binary.ByteOrder) error { return binary.Write(w, bo, rt.data) } +// btfInt encodes additional data for integers. +// +// ? ? ? ? e e e e o o o o o o o o ? ? ? ? ? ? ? ? b b b b b b b b +// ? = undefined +// e = encoding +// o = offset (bitfields?) 
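The new `readBits`/`writeBits` helpers above factor the shift-and-mask arithmetic out of `btfType.info`/`setInfo` so the `btfInt` accessors can reuse it. A self-contained copy applied to the `btfInt` layout defined below (encoding in bits 24-27, offset in bits 16-23, bit-size in bits 0-7 of one uint32); the example values are illustrative:

```go
package main

import "fmt"

func mask(len uint32) uint32 { return (1 << len) - 1 }

// readBits extracts a len-bit field starting at shift.
func readBits(value, len, shift uint32) uint32 {
	return (value >> shift) & mask(len)
}

// writeBits clears a len-bit field starting at shift, then stores new there.
func writeBits(value, len, shift, new uint32) uint32 {
	value &^= mask(len) << shift
	value |= (new & mask(len)) << shift
	return value
}

func main() {
	var raw uint32
	raw = writeBits(raw, 4, 24, 1) // encoding field (bits 24-27)
	raw = writeBits(raw, 8, 0, 32) // bit-size field (bits 0-7)
	fmt.Printf("raw=%#x encoding=%d bits=%d\n",
		raw, readBits(raw, 4, 24), readBits(raw, 8, 0))
}
```

Masking `new` before shifting means an oversized value is truncated rather than allowed to clobber neighbouring fields.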
+// b = bits (bitfields) +type btfInt struct { + Raw uint32 +} + +const ( + btfIntEncodingLen = 4 + btfIntEncodingShift = 24 + btfIntOffsetLen = 8 + btfIntOffsetShift = 16 + btfIntBitsLen = 8 + btfIntBitsShift = 0 +) + +func (bi btfInt) Encoding() IntEncoding { + return IntEncoding(readBits(bi.Raw, btfIntEncodingLen, btfIntEncodingShift)) +} + +func (bi *btfInt) SetEncoding(e IntEncoding) { + bi.Raw = writeBits(uint32(bi.Raw), btfIntEncodingLen, btfIntEncodingShift, uint32(e)) +} + +func (bi btfInt) Offset() Bits { + return Bits(readBits(bi.Raw, btfIntOffsetLen, btfIntOffsetShift)) +} + +func (bi *btfInt) SetOffset(offset uint32) { + bi.Raw = writeBits(bi.Raw, btfIntOffsetLen, btfIntOffsetShift, offset) +} + +func (bi btfInt) Bits() Bits { + return Bits(readBits(bi.Raw, btfIntBitsLen, btfIntBitsShift)) +} + +func (bi *btfInt) SetBits(bits byte) { + bi.Raw = writeBits(bi.Raw, btfIntBitsLen, btfIntBitsShift, uint32(bits)) +} + type btfArray struct { Type TypeID IndexType TypeID @@ -226,11 +283,14 @@ type btfParam struct { Type TypeID } -func readTypes(r io.Reader, bo binary.ByteOrder) ([]rawType, error) { - var ( - header btfType - types []rawType - ) +func readTypes(r io.Reader, bo binary.ByteOrder, typeLen uint32) ([]rawType, error) { + var header btfType + // Because of the interleaving between types and struct members it is + // difficult to precompute the number of raw types this will parse. + // This "guess" is a good first estimate. + sizeOfbtfType := uintptr(binary.Size(btfType{})) + tyMaxCount := uintptr(typeLen) / sizeOfbtfType / 2 + types := make([]rawType, 0, tyMaxCount) for id := TypeID(1); ; id++ { if err := binary.Read(r, bo, &header); err == io.EOF { @@ -242,7 +302,7 @@ func readTypes(r io.Reader, bo binary.ByteOrder) ([]rawType, error) { var data interface{} switch header.Kind() { case kindInt: - data = new(uint32) + data = new(btfInt) case kindPointer: case kindArray: data = new(btfArray) @@ -281,7 +341,3 @@ func readTypes(r io.Reader, bo
binary.ByteOrder) ([]rawType, error) { types = append(types, rawType{header, data}) } } - -func intEncoding(raw uint32) (IntEncoding, uint32, byte) { - return IntEncoding((raw & 0x0f000000) >> 24), (raw & 0x00ff0000) >> 16, byte(raw & 0x000000ff) -} diff --git a/cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/btf/btf_types_string.go b/cluster-autoscaler/vendor/github.com/cilium/ebpf/btf/btf_types_string.go similarity index 100% rename from cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/btf/btf_types_string.go rename to cluster-autoscaler/vendor/github.com/cilium/ebpf/btf/btf_types_string.go diff --git a/cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/btf/core.go b/cluster-autoscaler/vendor/github.com/cilium/ebpf/btf/core.go similarity index 65% rename from cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/btf/core.go rename to cluster-autoscaler/vendor/github.com/cilium/ebpf/btf/core.go index d02df9d50bb3..c48754809355 100644 --- a/cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/btf/core.go +++ b/cluster-autoscaler/vendor/github.com/cilium/ebpf/btf/core.go @@ -1,11 +1,11 @@ package btf import ( + "encoding/binary" "errors" "fmt" "math" "reflect" - "sort" "strconv" "strings" @@ -17,50 +17,58 @@ import ( // COREFixup is the result of computing a CO-RE relocation for a target. type COREFixup struct { - Kind COREKind - Local uint32 - Target uint32 - Poison bool + kind coreKind + local uint32 + target uint32 + // True if there is no valid fixup. The instruction is replaced with an + // invalid dummy. + poison bool + // True if the validation of the local value should be skipped. Used by + // some kinds of bitfield relocations. 
+ skipLocalValidation bool } -func (f COREFixup) equal(other COREFixup) bool { - return f.Local == other.Local && f.Target == other.Target +func (f *COREFixup) equal(other COREFixup) bool { + return f.local == other.local && f.target == other.target } -func (f COREFixup) String() string { - if f.Poison { - return fmt.Sprintf("%s=poison", f.Kind) +func (f *COREFixup) String() string { + if f.poison { + return fmt.Sprintf("%s=poison", f.kind) } - return fmt.Sprintf("%s=%d->%d", f.Kind, f.Local, f.Target) + return fmt.Sprintf("%s=%d->%d", f.kind, f.local, f.target) } -func (f COREFixup) apply(ins *asm.Instruction) error { - if f.Poison { - return errors.New("can't poison individual instruction") +func (f *COREFixup) Apply(ins *asm.Instruction) error { + if f.poison { + const badRelo = 0xbad2310 + + *ins = asm.BuiltinFunc(badRelo).Call() + return nil } switch class := ins.OpCode.Class(); class { case asm.LdXClass, asm.StClass, asm.StXClass: - if want := int16(f.Local); want != ins.Offset { - return fmt.Errorf("invalid offset %d, expected %d", ins.Offset, want) + if want := int16(f.local); !f.skipLocalValidation && want != ins.Offset { + return fmt.Errorf("invalid offset %d, expected %d", ins.Offset, f.local) } - if f.Target > math.MaxInt16 { - return fmt.Errorf("offset %d exceeds MaxInt16", f.Target) + if f.target > math.MaxInt16 { + return fmt.Errorf("offset %d exceeds MaxInt16", f.target) } - ins.Offset = int16(f.Target) + ins.Offset = int16(f.target) case asm.LdClass: if !ins.IsConstantLoad(asm.DWord) { return fmt.Errorf("not a dword-sized immediate load") } - if want := int64(f.Local); want != ins.Constant { - return fmt.Errorf("invalid immediate %d, expected %d", ins.Constant, want) + if want := int64(f.local); !f.skipLocalValidation && want != ins.Constant { + return fmt.Errorf("invalid immediate %d, expected %d (fixup: %v)", ins.Constant, want, f) } - ins.Constant = int64(f.Target) + ins.Constant = int64(f.target) case asm.ALUClass: if ins.OpCode.ALUOp() == 
asm.Swap { @@ -74,15 +82,15 @@ func (f COREFixup) apply(ins *asm.Instruction) error { return fmt.Errorf("invalid source %s", src) } - if want := int64(f.Local); want != ins.Constant { - return fmt.Errorf("invalid immediate %d, expected %d", ins.Constant, want) + if want := int64(f.local); !f.skipLocalValidation && want != ins.Constant { + return fmt.Errorf("invalid immediate %d, expected %d (fixup: %v, kind: %v, ins: %v)", ins.Constant, want, f, f.kind, ins) } - if f.Target > math.MaxInt32 { - return fmt.Errorf("immediate %d exceeds MaxInt32", f.Target) + if f.target > math.MaxInt32 { + return fmt.Errorf("immediate %d exceeds MaxInt32", f.target) } - ins.Constant = int64(f.Target) + ins.Constant = int64(f.target) default: return fmt.Errorf("invalid class %s", class) @@ -92,57 +100,14 @@ func (f COREFixup) apply(ins *asm.Instruction) error { } func (f COREFixup) isNonExistant() bool { - return f.Kind.checksForExistence() && f.Target == 0 -} - -type COREFixups map[uint64]COREFixup - -// Apply a set of CO-RE relocations to a BPF program. -func (fs COREFixups) Apply(insns asm.Instructions) (asm.Instructions, error) { - if len(fs) == 0 { - cpy := make(asm.Instructions, len(insns)) - copy(cpy, insns) - return insns, nil - } - - cpy := make(asm.Instructions, 0, len(insns)) - iter := insns.Iterate() - for iter.Next() { - fixup, ok := fs[iter.Offset.Bytes()] - if !ok { - cpy = append(cpy, *iter.Ins) - continue - } - - ins := *iter.Ins - if fixup.Poison { - const badRelo = asm.BuiltinFunc(0xbad2310) - - cpy = append(cpy, badRelo.Call()) - if ins.OpCode.IsDWordLoad() { - // 64 bit constant loads occupy two raw bpf instructions, so - // we need to add another instruction as padding. 
- cpy = append(cpy, badRelo.Call()) - } - - continue - } - - if err := fixup.apply(&ins); err != nil { - return nil, fmt.Errorf("instruction %d, offset %d: %s: %w", iter.Index, iter.Offset.Bytes(), fixup.Kind, err) - } - - cpy = append(cpy, ins) - } - - return cpy, nil + return f.kind.checksForExistence() && f.target == 0 } -// COREKind is the type of CO-RE relocation -type COREKind uint32 +// coreKind is the type of CO-RE relocation as specified in BPF source code. +type coreKind uint32 const ( - reloFieldByteOffset COREKind = iota /* field byte offset */ + reloFieldByteOffset coreKind = iota /* field byte offset */ reloFieldByteSize /* field size in bytes */ reloFieldExists /* field existence in target kernel */ reloFieldSigned /* field signedness (0 - unsigned, 1 - signed) */ @@ -156,7 +121,11 @@ const ( reloEnumvalValue /* enum value integer value */ ) -func (k COREKind) String() string { +func (k coreKind) checksForExistence() bool { + return k == reloEnumvalExists || k == reloTypeExists || k == reloFieldExists +} + +func (k coreKind) String() string { switch k { case reloFieldByteOffset: return "byte_off" @@ -187,19 +156,28 @@ func (k COREKind) String() string { } } -func (k COREKind) checksForExistence() bool { - return k == reloEnumvalExists || k == reloTypeExists || k == reloFieldExists -} - -func coreRelocate(local, target *Spec, relos coreRelos) (COREFixups, error) { +// CORERelocate calculates the difference in types between local and target. +// +// Returns a list of fixups which can be applied to instructions to make them +// match the target type(s). +// +// Fixups are returned in the order of relos, e.g. fixup[i] is the solution +// for relos[i]. 
+func CORERelocate(local, target *Spec, relos []*CORERelocation) ([]COREFixup, error) { if local.byteOrder != target.byteOrder { return nil, fmt.Errorf("can't relocate %s against %s", local.byteOrder, target.byteOrder) } - var ids []TypeID - relosByID := make(map[TypeID]coreRelos) - result := make(COREFixups, len(relos)) - for _, relo := range relos { + type reloGroup struct { + relos []*CORERelocation + // Position of each relocation in relos. + indices []int + } + + // Split relocations into per Type lists. + relosByType := make(map[Type]*reloGroup) + result := make([]COREFixup, len(relos)) + for i, relo := range relos { if relo.kind == reloTypeIDLocal { // Filtering out reloTypeIDLocal here makes our lives a lot easier // down the line, since it doesn't have a target at all. @@ -207,47 +185,42 @@ func coreRelocate(local, target *Spec, relos coreRelos) (COREFixups, error) { return nil, fmt.Errorf("%s: unexpected accessor %v", relo.kind, relo.accessor) } - result[uint64(relo.insnOff)] = COREFixup{ - relo.kind, - uint32(relo.typeID), - uint32(relo.typeID), - false, + id, err := local.TypeID(relo.typ) + if err != nil { + return nil, fmt.Errorf("%s: %w", relo.kind, err) + } + + result[i] = COREFixup{ + kind: relo.kind, + local: uint32(id), + target: uint32(id), } continue } - relos, ok := relosByID[relo.typeID] + group, ok := relosByType[relo.typ] if !ok { - ids = append(ids, relo.typeID) + group = &reloGroup{} + relosByType[relo.typ] = group } - relosByID[relo.typeID] = append(relos, relo) + group.relos = append(group.relos, relo) + group.indices = append(group.indices, i) } - // Ensure we work on relocations in a deterministic order. 
- sort.Slice(ids, func(i, j int) bool { - return ids[i] < ids[j] - }) - - for _, id := range ids { - if int(id) >= len(local.types) { - return nil, fmt.Errorf("invalid type id %d", id) - } - - localType := local.types[id] - named, ok := localType.(NamedType) - if !ok || named.TypeName() == "" { + for localType, group := range relosByType { + localTypeName := localType.TypeName() + if localTypeName == "" { return nil, fmt.Errorf("relocate unnamed or anonymous type %s: %w", localType, ErrNotSupported) } - relos := relosByID[id] - targets := target.namedTypes[essentialName(named.TypeName())] - fixups, err := coreCalculateFixups(localType, targets, relos) + targets := target.namedTypes[newEssentialName(localTypeName)] + fixups, err := coreCalculateFixups(local, target, localType, targets, group.relos) if err != nil { return nil, fmt.Errorf("relocate %s: %w", localType, err) } - for i, relo := range relos { - result[uint64(relo.insnOff)] = fixups[i] + for j, index := range group.indices { + result[index] = fixups[j] } } @@ -262,30 +235,30 @@ var errImpossibleRelocation = errors.New("impossible relocation") // // The best target is determined by scoring: the less poisoning we have to do // the better the target is. 
-func coreCalculateFixups(local Type, targets []NamedType, relos coreRelos) ([]COREFixup, error) { - localID := local.ID() - local, err := copyType(local, skipQualifierAndTypedef) +func coreCalculateFixups(localSpec, targetSpec *Spec, local Type, targets []Type, relos []*CORERelocation) ([]COREFixup, error) { + localID, err := localSpec.TypeID(local) if err != nil { - return nil, err + return nil, fmt.Errorf("local type ID: %w", err) } + local = Copy(local, UnderlyingType) bestScore := len(relos) var bestFixups []COREFixup for i := range targets { - targetID := targets[i].ID() - target, err := copyType(targets[i], skipQualifierAndTypedef) + targetID, err := targetSpec.TypeID(targets[i]) if err != nil { - return nil, err + return nil, fmt.Errorf("target type ID: %w", err) } + target := Copy(targets[i], UnderlyingType) score := 0 // lower is better fixups := make([]COREFixup, 0, len(relos)) for _, relo := range relos { - fixup, err := coreCalculateFixup(local, localID, target, targetID, relo) + fixup, err := coreCalculateFixup(localSpec.byteOrder, local, localID, target, targetID, relo) if err != nil { return nil, fmt.Errorf("target %s: %w", target, err) } - if fixup.Poison || fixup.isNonExistant() { + if fixup.poison || fixup.isNonExistant() { score++ } fixups = append(fixups, fixup) @@ -307,17 +280,23 @@ func coreCalculateFixups(local Type, targets []NamedType, relos coreRelos) ([]CO // the fixups agree with each other. for i, fixup := range bestFixups { if !fixup.equal(fixups[i]) { - return nil, fmt.Errorf("%s: multiple types match: %w", fixup.Kind, errAmbiguousRelocation) + return nil, fmt.Errorf("%s: multiple types match: %w", fixup.kind, errAmbiguousRelocation) } } } if bestFixups == nil { // Nothing at all matched, probably because there are no suitable - // targets at all. Poison everything! + // targets at all. + // + // Poison everything except checksForExistence. 
bestFixups = make([]COREFixup, len(relos)) for i, relo := range relos { - bestFixups[i] = COREFixup{Kind: relo.kind, Poison: true} + if relo.kind.checksForExistence() { + bestFixups[i] = COREFixup{kind: relo.kind, local: 1, target: 0} + } else { + bestFixups[i] = COREFixup{kind: relo.kind, poison: true} + } } } @@ -326,15 +305,18 @@ func coreCalculateFixups(local Type, targets []NamedType, relos coreRelos) ([]CO // coreCalculateFixup calculates the fixup for a single local type, target type // and relocation. -func coreCalculateFixup(local Type, localID TypeID, target Type, targetID TypeID, relo coreRelo) (COREFixup, error) { +func coreCalculateFixup(byteOrder binary.ByteOrder, local Type, localID TypeID, target Type, targetID TypeID, relo *CORERelocation) (COREFixup, error) { fixup := func(local, target uint32) (COREFixup, error) { - return COREFixup{relo.kind, local, target, false}, nil + return COREFixup{kind: relo.kind, local: local, target: target}, nil + } + fixupWithoutValidation := func(local, target uint32) (COREFixup, error) { + return COREFixup{kind: relo.kind, local: local, target: target, skipLocalValidation: true}, nil } poison := func() (COREFixup, error) { if relo.kind.checksForExistence() { return fixup(1, 0) } - return COREFixup{relo.kind, 0, 0, true}, nil + return COREFixup{kind: relo.kind, poison: true}, nil } zero := COREFixup{} @@ -390,7 +372,20 @@ func coreCalculateFixup(local Type, localID TypeID, target Type, targetID TypeID return fixup(uint32(localValue.Value), uint32(targetValue.Value)) } - case reloFieldByteOffset, reloFieldByteSize, reloFieldExists: + case reloFieldSigned: + switch local.(type) { + case *Enum: + return fixup(1, 1) + case *Int: + return fixup( + uint32(local.(*Int).Encoding&Signed), + uint32(target.(*Int).Encoding&Signed), + ) + default: + return fixupWithoutValidation(0, 0) + } + + case reloFieldByteOffset, reloFieldByteSize, reloFieldExists, reloFieldLShiftU64, reloFieldRShiftU64: if _, ok := target.(*Fwd); ok { // We 
can't relocate fields using a forward declaration, so // skip it. If a non-forward declaration is present in the BTF @@ -406,12 +401,17 @@ func coreCalculateFixup(local Type, localID TypeID, target Type, targetID TypeID return zero, fmt.Errorf("target %s: %w", target, err) } + maybeSkipValidation := func(f COREFixup, err error) (COREFixup, error) { + f.skipLocalValidation = localField.bitfieldSize > 0 + return f, err + } + switch relo.kind { case reloFieldExists: return fixup(1, 1) case reloFieldByteOffset: - return fixup(localField.offset/8, targetField.offset/8) + return maybeSkipValidation(fixup(localField.offset, targetField.offset)) case reloFieldByteSize: localSize, err := Sizeof(localField.Type) @@ -423,9 +423,34 @@ func coreCalculateFixup(local Type, localID TypeID, target Type, targetID TypeID if err != nil { return zero, err } + return maybeSkipValidation(fixup(uint32(localSize), uint32(targetSize))) + + case reloFieldLShiftU64: + var target uint32 + if byteOrder == binary.LittleEndian { + targetSize, err := targetField.sizeBits() + if err != nil { + return zero, err + } - return fixup(uint32(localSize), uint32(targetSize)) + target = uint32(64 - targetField.bitfieldOffset - targetSize) + } else { + loadWidth, err := Sizeof(targetField.Type) + if err != nil { + return zero, err + } + + target = uint32(64 - Bits(loadWidth*8) + targetField.bitfieldOffset) + } + return fixupWithoutValidation(0, target) + + case reloFieldRShiftU64: + targetSize, err := targetField.sizeBits() + if err != nil { + return zero, err + } + return fixupWithoutValidation(0, uint32(64-targetSize)) } } @@ -462,7 +487,7 @@ func coreCalculateFixup(local Type, localID TypeID, target Type, targetID TypeID */ type coreAccessor []int -func parseCoreAccessor(accessor string) (coreAccessor, error) { +func parseCOREAccessor(accessor string) (coreAccessor, error) { if accessor == "" { return nil, fmt.Errorf("empty accessor") } @@ -508,18 +533,73 @@ func (ca coreAccessor) enumValue(t Type) 
(*EnumValue, error) { return &e.Values[i], nil } +// coreField represents the position of a "child" of a composite type from the +// start of that type. +// +// /- start of composite +// | offset * 8 | bitfieldOffset | bitfieldSize | ... | +// \- start of field end of field -/ type coreField struct { - Type Type + Type Type + + // The position of the field from the start of the composite type in bytes. offset uint32 + + // The offset of the bitfield in bits from the start of the field. + bitfieldOffset Bits + + // The size of the bitfield in bits. + // + // Zero if the field is not a bitfield. + bitfieldSize Bits +} + +func (cf *coreField) adjustOffsetToNthElement(n int) error { + size, err := Sizeof(cf.Type) + if err != nil { + return err + } + + cf.offset += uint32(n) * uint32(size) + return nil } -func adjustOffset(base uint32, t Type, n int) (uint32, error) { - size, err := Sizeof(t) +func (cf *coreField) adjustOffsetBits(offset Bits) error { + align, err := alignof(cf.Type) if err != nil { - return 0, err + return err + } + + // We can compute the load offset by: + // 1) converting the bit offset to bytes with a flooring division. + // 2) dividing and multiplying that offset by the alignment, yielding the + // load size aligned offset. + offsetBytes := uint32(offset/8) / uint32(align) * uint32(align) + + // The number of bits remaining is the bit offset less the number of bits + // we can "skip" with the aligned offset. + cf.bitfieldOffset = offset - Bits(offsetBytes*8) + + // We know that cf.offset is aligned to at least align since we get it + // from the compiler via BTF. Adding an aligned offsetBytes preserves the + // alignment. + cf.offset += offsetBytes + return nil +} + +func (cf *coreField) sizeBits() (Bits, error) { + if cf.bitfieldSize > 0 { + return cf.bitfieldSize, nil } - return base + (uint32(n) * uint32(size) * 8), nil + // Someone is trying to access a non-bitfield via a bit shift relocation. 
+ // This happens when a field changes from a bitfield to a regular field + // between kernel versions. Synthesise the size to make the shifts work. + size, err := Sizeof(cf.Type) + if err != nil { + return 0, err + } + return Bits(size * 8), nil } // coreFindField descends into the local type using the accessor and tries to @@ -527,32 +607,33 @@ func adjustOffset(base uint32, t Type, n int) (uint32, error) { // // Returns the field and the offset of the field from the start of // target in bits. -func coreFindField(local Type, localAcc coreAccessor, target Type) (_, _ coreField, _ error) { +func coreFindField(localT Type, localAcc coreAccessor, targetT Type) (coreField, coreField, error) { + local := coreField{Type: localT} + target := coreField{Type: targetT} + // The first index is used to offset a pointer of the base type like // when accessing an array. - localOffset, err := adjustOffset(0, local, localAcc[0]) - if err != nil { + if err := local.adjustOffsetToNthElement(localAcc[0]); err != nil { return coreField{}, coreField{}, err } - targetOffset, err := adjustOffset(0, target, localAcc[0]) - if err != nil { + if err := target.adjustOffsetToNthElement(localAcc[0]); err != nil { return coreField{}, coreField{}, err } - if err := coreAreMembersCompatible(local, target); err != nil { + if err := coreAreMembersCompatible(local.Type, target.Type); err != nil { return coreField{}, coreField{}, fmt.Errorf("fields: %w", err) } var localMaybeFlex, targetMaybeFlex bool - for _, acc := range localAcc[1:] { - switch localType := local.(type) { + for i, acc := range localAcc[1:] { + switch localType := local.Type.(type) { case composite: // For composite types acc is used to find the field in the local type, // and then we try to find a field in target with the same name. 
localMembers := localType.members() if acc >= len(localMembers) { - return coreField{}, coreField{}, fmt.Errorf("invalid accessor %d for %s", acc, local) + return coreField{}, coreField{}, fmt.Errorf("invalid accessor %d for %s", acc, localType) } localMember := localMembers[acc] @@ -563,13 +644,15 @@ func coreFindField(local Type, localAcc coreAccessor, target Type) (_, _ coreFie } // This is an anonymous struct or union, ignore it. - local = localMember.Type - localOffset += localMember.OffsetBits + local = coreField{ + Type: localMember.Type, + offset: local.offset + localMember.Offset.Bytes(), + } localMaybeFlex = false continue } - targetType, ok := target.(composite) + targetType, ok := target.Type.(composite) if !ok { return coreField{}, coreField{}, fmt.Errorf("target not composite: %w", errImpossibleRelocation) } @@ -579,20 +662,43 @@ func coreFindField(local Type, localAcc coreAccessor, target Type) (_, _ coreFie return coreField{}, coreField{}, err } - if targetMember.BitfieldSize > 0 { - return coreField{}, coreField{}, fmt.Errorf("field %q is a bitfield: %w", targetMember.Name, ErrNotSupported) + local = coreField{ + Type: localMember.Type, + offset: local.offset, + bitfieldSize: localMember.BitfieldSize, } - - local = localMember.Type localMaybeFlex = acc == len(localMembers)-1 - localOffset += localMember.OffsetBits - target = targetMember.Type + + target = coreField{ + Type: targetMember.Type, + offset: target.offset, + bitfieldSize: targetMember.BitfieldSize, + } targetMaybeFlex = last - targetOffset += targetMember.OffsetBits + + if local.bitfieldSize == 0 && target.bitfieldSize == 0 { + local.offset += localMember.Offset.Bytes() + target.offset += targetMember.Offset.Bytes() + break + } + + // Either of the members is a bitfield. Make sure we're at the + // end of the accessor. 
+ if next := i + 1; next < len(localAcc[1:]) { + return coreField{}, coreField{}, fmt.Errorf("can't descend into bitfield") + } + + if err := local.adjustOffsetBits(localMember.Offset); err != nil { + return coreField{}, coreField{}, err + } + + if err := target.adjustOffsetBits(targetMember.Offset); err != nil { + return coreField{}, coreField{}, err + } case *Array: // For arrays, acc is the index in the target. - targetType, ok := target.(*Array) + targetType, ok := target.Type.(*Array) if !ok { return coreField{}, coreField{}, fmt.Errorf("target not array: %w", errImpossibleRelocation) } @@ -611,17 +717,23 @@ func coreFindField(local Type, localAcc coreAccessor, target Type) (_, _ coreFie return coreField{}, coreField{}, fmt.Errorf("out of bounds access of target: %w", errImpossibleRelocation) } - local = localType.Type + local = coreField{ + Type: localType.Type, + offset: local.offset, + } localMaybeFlex = false - localOffset, err = adjustOffset(localOffset, local, acc) - if err != nil { + + if err := local.adjustOffsetToNthElement(acc); err != nil { return coreField{}, coreField{}, err } - target = targetType.Type + target = coreField{ + Type: targetType.Type, + offset: target.offset, + } targetMaybeFlex = false - targetOffset, err = adjustOffset(targetOffset, target, acc) - if err != nil { + + if err := target.adjustOffsetToNthElement(acc); err != nil { return coreField{}, coreField{}, err } @@ -629,12 +741,12 @@ func coreFindField(local Type, localAcc coreAccessor, target Type) (_, _ coreFie return coreField{}, coreField{}, fmt.Errorf("relocate field of %T: %w", localType, ErrNotSupported) } - if err := coreAreMembersCompatible(local, target); err != nil { + if err := coreAreMembersCompatible(local.Type, target.Type); err != nil { return coreField{}, coreField{}, err } } - return coreField{local, localOffset}, coreField{target, targetOffset}, nil + return local, target, nil } // coreFindMember finds a member in a composite type while handling anonymous @@ 
-646,7 +758,7 @@ func coreFindMember(typ composite, name string) (Member, bool, error) { type offsetTarget struct { composite - offset uint32 + offset Bits } targets := []offsetTarget{{typ, 0}} @@ -670,7 +782,7 @@ func coreFindMember(typ composite, name string) (Member, bool, error) { for j, member := range members { if member.Name == name { // NB: This is safe because member is a copy. - member.OffsetBits += target.offset + member.Offset += target.offset return member, j == len(members)-1, nil } @@ -685,7 +797,7 @@ func coreFindMember(typ composite, name string) (Member, bool, error) { return Member{}, false, fmt.Errorf("anonymous non-composite type %T not allowed", member.Type) } - targets = append(targets, offsetTarget{comp, target.offset + member.OffsetBits}) + targets = append(targets, offsetTarget{comp, target.offset + member.Offset}) } } @@ -704,9 +816,9 @@ func coreFindEnumValue(local Type, localAcc coreAccessor, target Type) (localVal return nil, nil, errImpossibleRelocation } - localName := essentialName(localValue.Name) + localName := newEssentialName(localValue.Name) for i, targetValue := range targetEnum.Values { - if essentialName(targetValue.Name) != localName { + if newEssentialName(targetValue.Name) != localName { continue } @@ -759,15 +871,9 @@ func coreAreTypesCompatible(localType Type, targetType Type) error { } switch lv := (localType).(type) { - case *Void, *Struct, *Union, *Enum, *Fwd: + case *Void, *Struct, *Union, *Enum, *Fwd, *Int: // Nothing to do here - case *Int: - tv := targetType.(*Int) - if lv.isBitfield() || tv.isBitfield() { - return fmt.Errorf("bitfield: %w", errImpossibleRelocation) - } - case *Pointer, *Array: depth++ localType.walk(&localTs) @@ -831,7 +937,7 @@ func coreAreMembersCompatible(localType Type, targetType Type) error { return nil } - if essentialName(a) == essentialName(b) { + if newEssentialName(a) == newEssentialName(b) { return nil } @@ -849,7 +955,7 @@ func coreAreMembersCompatible(localType Type, targetType 
Type) error { } switch lv := localType.(type) { - case *Array, *Pointer, *Float: + case *Array, *Pointer, *Float, *Int: return nil case *Enum: @@ -860,29 +966,7 @@ func coreAreMembersCompatible(localType Type, targetType Type) error { tv := targetType.(*Fwd) return doNamesMatch(lv.Name, tv.Name) - case *Int: - tv := targetType.(*Int) - if lv.isBitfield() || tv.isBitfield() { - return fmt.Errorf("bitfield: %w", errImpossibleRelocation) - } - return nil - default: return fmt.Errorf("type %s: %w", localType, ErrNotSupported) } } - -func skipQualifierAndTypedef(typ Type) (Type, error) { - result := typ - for depth := 0; depth <= maxTypeDepth; depth++ { - switch v := (result).(type) { - case qualifier: - result = v.qualify() - case *Typedef: - result = v.Type - default: - return result, nil - } - } - return nil, errors.New("exceeded type depth") -} diff --git a/cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/btf/doc.go b/cluster-autoscaler/vendor/github.com/cilium/ebpf/btf/doc.go similarity index 71% rename from cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/btf/doc.go rename to cluster-autoscaler/vendor/github.com/cilium/ebpf/btf/doc.go index ad2576cb23c4..b1f4b1fc3eb5 100644 --- a/cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/btf/doc.go +++ b/cluster-autoscaler/vendor/github.com/cilium/ebpf/btf/doc.go @@ -2,7 +2,4 @@ // // The canonical documentation lives in the Linux kernel repository and is // available at https://www.kernel.org/doc/html/latest/bpf/btf.html -// -// The API is very much unstable. You should only use this via the main -// ebpf library. 
package btf diff --git a/cluster-autoscaler/vendor/github.com/cilium/ebpf/btf/ext_info.go b/cluster-autoscaler/vendor/github.com/cilium/ebpf/btf/ext_info.go new file mode 100644 index 000000000000..2c0e1afe299a --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/cilium/ebpf/btf/ext_info.go @@ -0,0 +1,721 @@ +package btf + +import ( + "bytes" + "encoding/binary" + "errors" + "fmt" + "io" + "math" + "sort" + + "github.com/cilium/ebpf/asm" + "github.com/cilium/ebpf/internal" +) + +// ExtInfos contains ELF section metadata. +type ExtInfos struct { + // The slices are sorted by offset in ascending order. + funcInfos map[string][]funcInfo + lineInfos map[string][]lineInfo + relocationInfos map[string][]coreRelocationInfo +} + +// loadExtInfosFromELF parses ext infos from the .BTF.ext section in an ELF. +// +// Returns an error wrapping ErrNotFound if no ext infos are present. +func loadExtInfosFromELF(file *internal.SafeELFFile, ts types, strings *stringTable) (*ExtInfos, error) { + section := file.Section(".BTF.ext") + if section == nil { + return nil, fmt.Errorf("btf ext infos: %w", ErrNotFound) + } + + if section.ReaderAt == nil { + return nil, fmt.Errorf("compressed ext_info is not supported") + } + + return loadExtInfos(section.ReaderAt, file.ByteOrder, ts, strings) +} + +// loadExtInfos parses bare ext infos. +func loadExtInfos(r io.ReaderAt, bo binary.ByteOrder, ts types, strings *stringTable) (*ExtInfos, error) { + // Open unbuffered section reader. binary.Read() calls io.ReadFull on + // the header structs, resulting in one syscall per header. 
+ headerRd := io.NewSectionReader(r, 0, math.MaxInt64) + extHeader, err := parseBTFExtHeader(headerRd, bo) + if err != nil { + return nil, fmt.Errorf("parsing BTF extension header: %w", err) + } + + coreHeader, err := parseBTFExtCOREHeader(headerRd, bo, extHeader) + if err != nil { + return nil, fmt.Errorf("parsing BTF CO-RE header: %w", err) + } + + buf := internal.NewBufferedSectionReader(r, extHeader.funcInfoStart(), int64(extHeader.FuncInfoLen)) + btfFuncInfos, err := parseFuncInfos(buf, bo, strings) + if err != nil { + return nil, fmt.Errorf("parsing BTF function info: %w", err) + } + + funcInfos := make(map[string][]funcInfo, len(btfFuncInfos)) + for section, bfis := range btfFuncInfos { + funcInfos[section], err = newFuncInfos(bfis, ts) + if err != nil { + return nil, fmt.Errorf("section %s: func infos: %w", section, err) + } + } + + buf = internal.NewBufferedSectionReader(r, extHeader.lineInfoStart(), int64(extHeader.LineInfoLen)) + btfLineInfos, err := parseLineInfos(buf, bo, strings) + if err != nil { + return nil, fmt.Errorf("parsing BTF line info: %w", err) + } + + lineInfos := make(map[string][]lineInfo, len(btfLineInfos)) + for section, blis := range btfLineInfos { + lineInfos[section], err = newLineInfos(blis, strings) + if err != nil { + return nil, fmt.Errorf("section %s: line infos: %w", section, err) + } + } + + if coreHeader == nil || coreHeader.COREReloLen == 0 { + return &ExtInfos{funcInfos, lineInfos, nil}, nil + } + + var btfCORERelos map[string][]bpfCORERelo + buf = internal.NewBufferedSectionReader(r, extHeader.coreReloStart(coreHeader), int64(coreHeader.COREReloLen)) + btfCORERelos, err = parseCORERelos(buf, bo, strings) + if err != nil { + return nil, fmt.Errorf("parsing CO-RE relocation info: %w", err) + } + + coreRelos := make(map[string][]coreRelocationInfo, len(btfCORERelos)) + for section, brs := range btfCORERelos { + coreRelos[section], err = newRelocationInfos(brs, ts, strings) + if err != nil { + return nil, fmt.Errorf("section 
%s: CO-RE relocations: %w", section, err) + } + } + + return &ExtInfos{funcInfos, lineInfos, coreRelos}, nil +} + +type funcInfoMeta struct{} +type coreRelocationMeta struct{} + +// Assign per-section metadata from BTF to a section's instructions. +func (ei *ExtInfos) Assign(insns asm.Instructions, section string) { + funcInfos := ei.funcInfos[section] + lineInfos := ei.lineInfos[section] + reloInfos := ei.relocationInfos[section] + + iter := insns.Iterate() + for iter.Next() { + if len(funcInfos) > 0 && funcInfos[0].offset == iter.Offset { + iter.Ins.Metadata.Set(funcInfoMeta{}, funcInfos[0].fn) + funcInfos = funcInfos[1:] + } + + if len(lineInfos) > 0 && lineInfos[0].offset == iter.Offset { + *iter.Ins = iter.Ins.WithSource(lineInfos[0].line) + lineInfos = lineInfos[1:] + } + + if len(reloInfos) > 0 && reloInfos[0].offset == iter.Offset { + iter.Ins.Metadata.Set(coreRelocationMeta{}, reloInfos[0].relo) + reloInfos = reloInfos[1:] + } + } +} + +// MarshalExtInfos encodes function and line info embedded in insns into kernel +// wire format. +func MarshalExtInfos(insns asm.Instructions, typeID func(Type) (TypeID, error)) (funcInfos, lineInfos []byte, _ error) { + iter := insns.Iterate() + var fiBuf, liBuf bytes.Buffer + for iter.Next() { + if fn := FuncMetadata(iter.Ins); fn != nil { + fi := &funcInfo{ + fn: fn, + offset: iter.Offset, + } + if err := fi.marshal(&fiBuf, typeID); err != nil { + return nil, nil, fmt.Errorf("write func info: %w", err) + } + } + + if line, ok := iter.Ins.Source().(*Line); ok { + li := &lineInfo{ + line: line, + offset: iter.Offset, + } + if err := li.marshal(&liBuf); err != nil { + return nil, nil, fmt.Errorf("write line info: %w", err) + } + } + } + return fiBuf.Bytes(), liBuf.Bytes(), nil +} + +// btfExtHeader is found at the start of the .BTF.ext section. 
+type btfExtHeader struct { + Magic uint16 + Version uint8 + Flags uint8 + + // HdrLen is larger than the size of struct btfExtHeader when it is + // immediately followed by a btfExtCOREHeader. + HdrLen uint32 + + FuncInfoOff uint32 + FuncInfoLen uint32 + LineInfoOff uint32 + LineInfoLen uint32 +} + +// parseBTFExtHeader parses the header of the .BTF.ext section. +func parseBTFExtHeader(r io.Reader, bo binary.ByteOrder) (*btfExtHeader, error) { + var header btfExtHeader + if err := binary.Read(r, bo, &header); err != nil { + return nil, fmt.Errorf("can't read header: %v", err) + } + + if header.Magic != btfMagic { + return nil, fmt.Errorf("incorrect magic value %v", header.Magic) + } + + if header.Version != 1 { + return nil, fmt.Errorf("unexpected version %v", header.Version) + } + + if header.Flags != 0 { + return nil, fmt.Errorf("unsupported flags %v", header.Flags) + } + + if int64(header.HdrLen) < int64(binary.Size(&header)) { + return nil, fmt.Errorf("header length shorter than btfExtHeader size") + } + + return &header, nil +} + +// funcInfoStart returns the offset from the beginning of the .BTF.ext section +// to the start of its func_info entries. +func (h *btfExtHeader) funcInfoStart() int64 { + return int64(h.HdrLen + h.FuncInfoOff) +} + +// lineInfoStart returns the offset from the beginning of the .BTF.ext section +// to the start of its line_info entries. +func (h *btfExtHeader) lineInfoStart() int64 { + return int64(h.HdrLen + h.LineInfoOff) +} + +// coreReloStart returns the offset from the beginning of the .BTF.ext section +// to the start of its CO-RE relocation entries. +func (h *btfExtHeader) coreReloStart(ch *btfExtCOREHeader) int64 { + return int64(h.HdrLen + ch.COREReloOff) +} + +// btfExtCOREHeader is found right after the btfExtHeader when its HdrLen +// field is larger than its size. +type btfExtCOREHeader struct { + COREReloOff uint32 + COREReloLen uint32 +} + +// parseBTFExtCOREHeader parses the tail of the .BTF.ext header. 
If additional +// header bytes are present, extHeader.HdrLen will be larger than the struct, +// indicating the presence of a CO-RE extension header. +func parseBTFExtCOREHeader(r io.Reader, bo binary.ByteOrder, extHeader *btfExtHeader) (*btfExtCOREHeader, error) { + extHdrSize := int64(binary.Size(&extHeader)) + remainder := int64(extHeader.HdrLen) - extHdrSize + + if remainder == 0 { + return nil, nil + } + + var coreHeader btfExtCOREHeader + if err := binary.Read(r, bo, &coreHeader); err != nil { + return nil, fmt.Errorf("can't read header: %v", err) + } + + return &coreHeader, nil +} + +type btfExtInfoSec struct { + SecNameOff uint32 + NumInfo uint32 +} + +// parseExtInfoSec parses a btf_ext_info_sec header within .BTF.ext, +// appearing within func_info and line_info sub-sections. +// These headers appear once for each program section in the ELF and are +// followed by one or more func/line_info records for the section. +func parseExtInfoSec(r io.Reader, bo binary.ByteOrder, strings *stringTable) (string, *btfExtInfoSec, error) { + var infoHeader btfExtInfoSec + if err := binary.Read(r, bo, &infoHeader); err != nil { + return "", nil, fmt.Errorf("read ext info header: %w", err) + } + + secName, err := strings.Lookup(infoHeader.SecNameOff) + if err != nil { + return "", nil, fmt.Errorf("get section name: %w", err) + } + if secName == "" { + return "", nil, fmt.Errorf("extinfo header refers to empty section name") + } + + if infoHeader.NumInfo == 0 { + return "", nil, fmt.Errorf("section %s has zero records", secName) + } + + return secName, &infoHeader, nil +} + +// parseExtInfoRecordSize parses the uint32 at the beginning of a func_infos +// or line_infos segment that describes the length of all extInfoRecords in +// that segment. 
+func parseExtInfoRecordSize(r io.Reader, bo binary.ByteOrder) (uint32, error) {
+	const maxRecordSize = 256
+
+	var recordSize uint32
+	if err := binary.Read(r, bo, &recordSize); err != nil {
+		return 0, fmt.Errorf("can't read record size: %v", err)
+	}
+
+	if recordSize < 4 {
+		// Need at least InsnOff worth of bytes per record.
+		return 0, errors.New("record size too short")
+	}
+	if recordSize > maxRecordSize {
+		return 0, fmt.Errorf("record size %v exceeds %v", recordSize, maxRecordSize)
+	}
+
+	return recordSize, nil
+}
+
+// FuncInfoSize is the size of a FuncInfo in BTF wire format.
+var FuncInfoSize = uint32(binary.Size(bpfFuncInfo{}))
+
+type funcInfo struct {
+	fn     *Func
+	offset asm.RawInstructionOffset
+}
+
+type bpfFuncInfo struct {
+	// Instruction offset of the function within an ELF section.
+	InsnOff uint32
+	TypeID  TypeID
+}
+
+func newFuncInfo(fi bpfFuncInfo, ts types) (*funcInfo, error) {
+	typ, err := ts.ByID(fi.TypeID)
+	if err != nil {
+		return nil, err
+	}
+
+	fn, ok := typ.(*Func)
+	if !ok {
+		return nil, fmt.Errorf("type ID %d is a %T, but expected a Func", fi.TypeID, typ)
+	}
+
+	// C doesn't have anonymous functions, but check just in case.
+	if fn.Name == "" {
+		return nil, fmt.Errorf("func with type ID %d doesn't have a name", fi.TypeID)
+	}
+
+	return &funcInfo{
+		fn,
+		asm.RawInstructionOffset(fi.InsnOff),
+	}, nil
+}
+
+func newFuncInfos(bfis []bpfFuncInfo, ts types) ([]funcInfo, error) {
+	fis := make([]funcInfo, 0, len(bfis))
+	for _, bfi := range bfis {
+		fi, err := newFuncInfo(bfi, ts)
+		if err != nil {
+			return nil, fmt.Errorf("offset %d: %w", bfi.InsnOff, err)
+		}
+		fis = append(fis, *fi)
+	}
+	sort.Slice(fis, func(i, j int) bool {
+		return fis[i].offset < fis[j].offset
+	})
+	return fis, nil
+}
+
+// marshal writes the funcInfo into the BTF wire format.
+func (fi *funcInfo) marshal(w io.Writer, typeID func(Type) (TypeID, error)) error {
+	id, err := typeID(fi.fn)
+	if err != nil {
+		return err
+	}
+	bfi := bpfFuncInfo{
+		InsnOff: uint32(fi.offset),
+		TypeID:  id,
+	}
+	return binary.Write(w, internal.NativeEndian, &bfi)
+}
+
+// parseFuncInfos parses a func_info sub-section within .BTF.ext into a map of
+// func infos indexed by section name.
+func parseFuncInfos(r io.Reader, bo binary.ByteOrder, strings *stringTable) (map[string][]bpfFuncInfo, error) {
+	recordSize, err := parseExtInfoRecordSize(r, bo)
+	if err != nil {
+		return nil, err
+	}
+
+	result := make(map[string][]bpfFuncInfo)
+	for {
+		secName, infoHeader, err := parseExtInfoSec(r, bo, strings)
+		if errors.Is(err, io.EOF) {
+			return result, nil
+		}
+		if err != nil {
+			return nil, err
+		}
+
+		records, err := parseFuncInfoRecords(r, bo, recordSize, infoHeader.NumInfo)
+		if err != nil {
+			return nil, fmt.Errorf("section %v: %w", secName, err)
+		}
+
+		result[secName] = records
+	}
+}
+
+// parseFuncInfoRecords parses a stream of func_infos into a []bpfFuncInfo.
+// These records appear after a btf_ext_info_sec header in the func_info
+// sub-section of .BTF.ext.
+func parseFuncInfoRecords(r io.Reader, bo binary.ByteOrder, recordSize uint32, recordNum uint32) ([]bpfFuncInfo, error) {
+	var out []bpfFuncInfo
+	var fi bpfFuncInfo
+
+	if exp, got := FuncInfoSize, recordSize; exp != got {
+		// The BTF blob's record size differs from the one we know how to parse.
+		return nil, fmt.Errorf("expected FuncInfo record size %d, but BTF blob contains %d", exp, got)
+	}
+
+	for i := uint32(0); i < recordNum; i++ {
+		if err := binary.Read(r, bo, &fi); err != nil {
+			return nil, fmt.Errorf("can't read function info: %v", err)
+		}
+
+		if fi.InsnOff%asm.InstructionSize != 0 {
+			return nil, fmt.Errorf("offset %v is not aligned with instruction size", fi.InsnOff)
+		}
+
+		// ELF tracks offset in bytes, the kernel expects raw BPF instructions.
+		// Convert as early as possible.
+		fi.InsnOff /= asm.InstructionSize
+
+		out = append(out, fi)
+	}
+
+	return out, nil
+}
+
+// LineInfoSize is the size of a LineInfo in BTF wire format.
+var LineInfoSize = uint32(binary.Size(bpfLineInfo{}))
+
+// Line represents the location and contents of a single line of source
+// code a BPF ELF was compiled from.
+type Line struct {
+	fileName   string
+	line       string
+	lineNumber uint32
+	lineColumn uint32
+
+	// TODO: We should get rid of the fields below, but for that we need to be
+	// able to write BTF.
+
+	fileNameOff uint32
+	lineOff     uint32
+}
+
+func (li *Line) FileName() string {
+	return li.fileName
+}
+
+func (li *Line) Line() string {
+	return li.line
+}
+
+func (li *Line) LineNumber() uint32 {
+	return li.lineNumber
+}
+
+func (li *Line) LineColumn() uint32 {
+	return li.lineColumn
+}
+
+func (li *Line) String() string {
+	return li.line
+}
+
+type lineInfo struct {
+	line   *Line
+	offset asm.RawInstructionOffset
+}
+
+// Constants for the format of bpfLineInfo.LineCol.
+const (
+	bpfLineShift = 10
+	bpfLineMax   = (1 << (32 - bpfLineShift)) - 1
+	bpfColumnMax = (1 << bpfLineShift) - 1
+)
+
+type bpfLineInfo struct {
+	// Instruction offset of the line within the whole instruction stream, in instructions.
+	InsnOff     uint32
+	FileNameOff uint32
+	LineOff     uint32
+	LineCol     uint32
+}
+
+func newLineInfo(li bpfLineInfo, strings *stringTable) (*lineInfo, error) {
+	line, err := strings.Lookup(li.LineOff)
+	if err != nil {
+		return nil, fmt.Errorf("lookup of line: %w", err)
+	}
+
+	fileName, err := strings.Lookup(li.FileNameOff)
+	if err != nil {
+		return nil, fmt.Errorf("lookup of filename: %w", err)
+	}
+
+	lineNumber := li.LineCol >> bpfLineShift
+	lineColumn := li.LineCol & bpfColumnMax
+
+	return &lineInfo{
+		&Line{
+			fileName,
+			line,
+			lineNumber,
+			lineColumn,
+			li.FileNameOff,
+			li.LineOff,
+		},
+		asm.RawInstructionOffset(li.InsnOff),
+	}, nil
+}
+
+func newLineInfos(blis []bpfLineInfo, strings *stringTable) ([]lineInfo, error) {
+	lis := make([]lineInfo, 0, len(blis))
+	for _, bli := range blis {
+		li, err := newLineInfo(bli, strings)
+		if err != nil {
+			return nil, fmt.Errorf("offset %d: %w", bli.InsnOff, err)
+		}
+		lis = append(lis, *li)
+	}
+	sort.Slice(lis, func(i, j int) bool {
+		return lis[i].offset < lis[j].offset
+	})
+	return lis, nil
+}
+
+// marshal writes the binary representation of the LineInfo to w.
+func (li *lineInfo) marshal(w io.Writer) error {
+	line := li.line
+	if line.lineNumber > bpfLineMax {
+		return fmt.Errorf("line %d exceeds %d", line.lineNumber, bpfLineMax)
+	}
+
+	if line.lineColumn > bpfColumnMax {
+		return fmt.Errorf("column %d exceeds %d", line.lineColumn, bpfColumnMax)
+	}
+
+	bli := bpfLineInfo{
+		uint32(li.offset),
+		line.fileNameOff,
+		line.lineOff,
+		(line.lineNumber << bpfLineShift) | line.lineColumn,
+	}
+	return binary.Write(w, internal.NativeEndian, &bli)
+}
+
+// parseLineInfos parses a line_info sub-section within .BTF.ext into a map of
+// line infos indexed by section name.
+func parseLineInfos(r io.Reader, bo binary.ByteOrder, strings *stringTable) (map[string][]bpfLineInfo, error) {
+	recordSize, err := parseExtInfoRecordSize(r, bo)
+	if err != nil {
+		return nil, err
+	}
+
+	result := make(map[string][]bpfLineInfo)
+	for {
+		secName, infoHeader, err := parseExtInfoSec(r, bo, strings)
+		if errors.Is(err, io.EOF) {
+			return result, nil
+		}
+		if err != nil {
+			return nil, err
+		}
+
+		records, err := parseLineInfoRecords(r, bo, recordSize, infoHeader.NumInfo)
+		if err != nil {
+			return nil, fmt.Errorf("section %v: %w", secName, err)
+		}
+
+		result[secName] = records
+	}
+}
+
+// parseLineInfoRecords parses a stream of line_infos into a []bpfLineInfo.
+// These records appear after a btf_ext_info_sec header in the line_info
+// sub-section of .BTF.ext.
+func parseLineInfoRecords(r io.Reader, bo binary.ByteOrder, recordSize uint32, recordNum uint32) ([]bpfLineInfo, error) {
+	var out []bpfLineInfo
+	var li bpfLineInfo
+
+	if exp, got := uint32(binary.Size(li)), recordSize; exp != got {
+		// The BTF blob's record size differs from the one we know how to parse.
+		return nil, fmt.Errorf("expected LineInfo record size %d, but BTF blob contains %d", exp, got)
+	}
+
+	for i := uint32(0); i < recordNum; i++ {
+		if err := binary.Read(r, bo, &li); err != nil {
+			return nil, fmt.Errorf("can't read line info: %v", err)
+		}
+
+		if li.InsnOff%asm.InstructionSize != 0 {
+			return nil, fmt.Errorf("offset %v is not aligned with instruction size", li.InsnOff)
+		}
+
+		// ELF tracks offset in bytes, the kernel expects raw BPF instructions.
+		// Convert as early as possible.
+		li.InsnOff /= asm.InstructionSize
+
+		out = append(out, li)
+	}
+
+	return out, nil
+}
+
+// bpfCORERelo matches the kernel's struct bpf_core_relo.
+type bpfCORERelo struct {
+	InsnOff      uint32
+	TypeID       TypeID
+	AccessStrOff uint32
+	Kind         coreKind
+}
+
+// CORERelocation describes a CO-RE relocation recorded in .BTF.ext.
+type CORERelocation struct {
+	typ      Type
+	accessor coreAccessor
+	kind     coreKind
+}
+
+// CORERelocationMetadata returns the CO-RE relocation attached to an
+// instruction's metadata, if any.
+func CORERelocationMetadata(ins *asm.Instruction) *CORERelocation {
+	relo, _ := ins.Metadata.Get(coreRelocationMeta{}).(*CORERelocation)
+	return relo
+}
+
+type coreRelocationInfo struct {
+	relo   *CORERelocation
+	offset asm.RawInstructionOffset
+}
+
+func newRelocationInfo(relo bpfCORERelo, ts types, strings *stringTable) (*coreRelocationInfo, error) {
+	typ, err := ts.ByID(relo.TypeID)
+	if err != nil {
+		return nil, err
+	}
+
+	accessorStr, err := strings.Lookup(relo.AccessStrOff)
+	if err != nil {
+		return nil, err
+	}
+
+	accessor, err := parseCOREAccessor(accessorStr)
+	if err != nil {
+		return nil, fmt.Errorf("accessor %q: %s", accessorStr, err)
+	}
+
+	return &coreRelocationInfo{
+		&CORERelocation{
+			typ,
+			accessor,
+			relo.Kind,
+		},
+		asm.RawInstructionOffset(relo.InsnOff),
+	}, nil
+}
+
+func newRelocationInfos(brs []bpfCORERelo, ts types, strings *stringTable) ([]coreRelocationInfo, error) {
+	rs := make([]coreRelocationInfo, 0, len(brs))
+	for _, br := range brs {
+		relo, err := newRelocationInfo(br, ts, strings)
+		if err != nil {
+			return nil, fmt.Errorf("offset %d: %w", br.InsnOff, err)
+		}
+		rs = append(rs, *relo)
+	}
+	sort.Slice(rs, func(i, j int) bool {
+		return rs[i].offset < rs[j].offset
+	})
+	return rs, nil
+}
+
+var extInfoReloSize = binary.Size(bpfCORERelo{})
+
+// parseCORERelos parses a core_relos sub-section within .BTF.ext into a map of
+// CO-RE relocations indexed by section name.
+func parseCORERelos(r io.Reader, bo binary.ByteOrder, strings *stringTable) (map[string][]bpfCORERelo, error) {
+	recordSize, err := parseExtInfoRecordSize(r, bo)
+	if err != nil {
+		return nil, err
+	}
+
+	if recordSize != uint32(extInfoReloSize) {
+		return nil, fmt.Errorf("expected record size %d, got %d", extInfoReloSize, recordSize)
+	}
+
+	result := make(map[string][]bpfCORERelo)
+	for {
+		secName, infoHeader, err := parseExtInfoSec(r, bo, strings)
+		if errors.Is(err, io.EOF) {
+			return result, nil
+		}
+		if err != nil {
+			return nil, err
+		}
+
+		records, err := parseCOREReloRecords(r, bo, recordSize, infoHeader.NumInfo)
+		if err != nil {
+			return nil, fmt.Errorf("section %v: %w", secName, err)
+		}
+
+		result[secName] = records
+	}
+}
+
+// parseCOREReloRecords parses a stream of CO-RE relocation entries into a
+// []bpfCORERelo. These records appear after a btf_ext_info_sec header in the
+// core_relos sub-section of .BTF.ext.
+func parseCOREReloRecords(r io.Reader, bo binary.ByteOrder, recordSize uint32, recordNum uint32) ([]bpfCORERelo, error) {
+	var out []bpfCORERelo
+
+	var relo bpfCORERelo
+	for i := uint32(0); i < recordNum; i++ {
+		if err := binary.Read(r, bo, &relo); err != nil {
+			return nil, fmt.Errorf("can't read CO-RE relocation: %v", err)
+		}
+
+		if relo.InsnOff%asm.InstructionSize != 0 {
+			return nil, fmt.Errorf("offset %v is not aligned with instruction size", relo.InsnOff)
+		}
+
+		// ELF tracks offset in bytes, the kernel expects raw BPF instructions.
+		// Convert as early as possible.
+		relo.InsnOff /= asm.InstructionSize
+
+		out = append(out, relo)
+	}
+
+	return out, nil
+}
diff --git a/cluster-autoscaler/vendor/github.com/cilium/ebpf/btf/format.go b/cluster-autoscaler/vendor/github.com/cilium/ebpf/btf/format.go
new file mode 100644
index 000000000000..e7688a2a6e83
--- /dev/null
+++ b/cluster-autoscaler/vendor/github.com/cilium/ebpf/btf/format.go
@@ -0,0 +1,319 @@
+package btf
+
+import (
+	"errors"
+	"fmt"
+	"strings"
+)
+
+var errNestedTooDeep = errors.New("nested too deep")
+
+// GoFormatter converts a Type to Go syntax.
+//
+// A zero GoFormatter is valid to use.
+type GoFormatter struct {
+	w strings.Builder
+
+	// Types present in this map are referred to using the given name if they
+	// are encountered when outputting another type.
+	Names map[Type]string
+
+	// Identifier is called for each field of struct-like types. By default the
+	// field name is used as is.
+	Identifier func(string) string
+
+	// EnumIdentifier is called for each element of an enum. By default the
+	// name of the enum type is concatenated with Identifier(element).
+	EnumIdentifier func(name, element string) string
+}
+
+// TypeDeclaration generates a Go type declaration for a BTF type.
+func (gf *GoFormatter) TypeDeclaration(name string, typ Type) (string, error) {
+	gf.w.Reset()
+	if err := gf.writeTypeDecl(name, typ); err != nil {
+		return "", err
+	}
+	return gf.w.String(), nil
+}
+
+func (gf *GoFormatter) identifier(s string) string {
+	if gf.Identifier != nil {
+		return gf.Identifier(s)
+	}
+
+	return s
+}
+
+func (gf *GoFormatter) enumIdentifier(name, element string) string {
+	if gf.EnumIdentifier != nil {
+		return gf.EnumIdentifier(name, element)
+	}
+
+	return name + gf.identifier(element)
+}
+
+// writeTypeDecl outputs a declaration of the given type.
+//
+// It encodes https://golang.org/ref/spec#Type_declarations:
+//
+//	type foo struct { bar uint32; }
+//	type bar int32
+func (gf *GoFormatter) writeTypeDecl(name string, typ Type) error {
+	if name == "" {
+		return fmt.Errorf("need a name for type %s", typ)
+	}
+
+	switch v := skipQualifiers(typ).(type) {
+	case *Enum:
+		fmt.Fprintf(&gf.w, "type %s ", name)
+		switch v.Size {
+		case 1:
+			gf.w.WriteString("int8")
+		case 2:
+			gf.w.WriteString("int16")
+		case 4:
+			gf.w.WriteString("int32")
+		case 8:
+			gf.w.WriteString("int64")
+		default:
+			return fmt.Errorf("%s: invalid enum size %d", typ, v.Size)
+		}
+
+		if len(v.Values) == 0 {
+			return nil
+		}
+
+		gf.w.WriteString("; const ( ")
+		for _, ev := range v.Values {
+			id := gf.enumIdentifier(name, ev.Name)
+			fmt.Fprintf(&gf.w, "%s %s = %d; ", id, name, ev.Value)
+		}
+		gf.w.WriteString(")")
+
+		return nil
+
+	default:
+		fmt.Fprintf(&gf.w, "type %s ", name)
+		return gf.writeTypeLit(v, 0)
+	}
+}
+
+// writeType outputs the name of a named type or a literal describing the type.
+//
+// It encodes https://golang.org/ref/spec#Types.
+//
+//	foo (if foo is a named type)
+//	uint32
+func (gf *GoFormatter) writeType(typ Type, depth int) error {
+	typ = skipQualifiers(typ)
+
+	name := gf.Names[typ]
+	if name != "" {
+		gf.w.WriteString(name)
+		return nil
+	}
+
+	return gf.writeTypeLit(typ, depth)
+}
+
+// writeTypeLit outputs a literal describing the type.
+//
+// The function ignores named types.
+//
+// It encodes https://golang.org/ref/spec#TypeLit.
+//
+//	struct { bar uint32; }
+//	uint32
+func (gf *GoFormatter) writeTypeLit(typ Type, depth int) error {
+	depth++
+	if depth > maxTypeDepth {
+		return errNestedTooDeep
+	}
+
+	var err error
+	switch v := skipQualifiers(typ).(type) {
+	case *Int:
+		gf.writeIntLit(v)
+
+	case *Enum:
+		gf.w.WriteString("int32")
+
+	case *Typedef:
+		err = gf.writeType(v.Type, depth)
+
+	case *Array:
+		fmt.Fprintf(&gf.w, "[%d]", v.Nelems)
+		err = gf.writeType(v.Type, depth)
+
+	case *Struct:
+		err = gf.writeStructLit(v.Size, v.Members, depth)
+
+	case *Union:
+		// Always choose the first member to represent the union in Go.
+		err = gf.writeStructLit(v.Size, v.Members[:1], depth)
+
+	case *Datasec:
+		err = gf.writeDatasecLit(v, depth)
+
+	default:
+		return fmt.Errorf("type %T: %w", v, ErrNotSupported)
+	}
+
+	if err != nil {
+		return fmt.Errorf("%s: %w", typ, err)
+	}
+
+	return nil
+}
+
+func (gf *GoFormatter) writeIntLit(i *Int) {
+	// NB: Encoding.IsChar is ignored.
+	if i.Encoding.IsBool() && i.Size == 1 {
+		gf.w.WriteString("bool")
+		return
+	}
+
+	bits := i.Size * 8
+	if i.Encoding.IsSigned() {
+		fmt.Fprintf(&gf.w, "int%d", bits)
+	} else {
+		fmt.Fprintf(&gf.w, "uint%d", bits)
+	}
+}
+
+func (gf *GoFormatter) writeStructLit(size uint32, members []Member, depth int) error {
+	gf.w.WriteString("struct { ")
+
+	prevOffset := uint32(0)
+	skippedBitfield := false
+	for i, m := range members {
+		if m.BitfieldSize > 0 {
+			skippedBitfield = true
+			continue
+		}
+
+		offset := m.Offset.Bytes()
+		if n := offset - prevOffset; skippedBitfield && n > 0 {
+			fmt.Fprintf(&gf.w, "_ [%d]byte /* unsupported bitfield */; ", n)
+		} else {
+			gf.writePadding(n)
+		}
+
+		size, err := Sizeof(m.Type)
+		if err != nil {
+			return fmt.Errorf("field %d: %w", i, err)
+		}
+		prevOffset = offset + uint32(size)
+
+		if err := gf.writeStructField(m, depth); err != nil {
+			return fmt.Errorf("field %d: %w", i, err)
+		}
+	}
+
+	gf.writePadding(size - prevOffset)
+	gf.w.WriteString("}")
+	return nil
+}
+
+func (gf *GoFormatter) writeStructField(m Member, depth int) error {
+	if m.BitfieldSize > 0 {
+		return fmt.Errorf("bitfields are not supported")
+	}
+	if m.Offset%8 != 0 {
+		return fmt.Errorf("unsupported offset %d", m.Offset)
+	}
+
+	if m.Name == "" {
+		// Special case a nested anonymous union like
+		// struct foo { union { int bar; int baz }; }
+		// by replacing the whole union with its first member.
+		union, ok := m.Type.(*Union)
+		if !ok {
+			return fmt.Errorf("anonymous fields are not supported")
+		}
+
+		if len(union.Members) == 0 {
+			return errors.New("empty anonymous union")
+		}
+
+		depth++
+		if depth > maxTypeDepth {
+			return errNestedTooDeep
+		}
+
+		m := union.Members[0]
+		size, err := Sizeof(m.Type)
+		if err != nil {
+			return err
+		}
+
+		if err := gf.writeStructField(m, depth); err != nil {
+			return err
+		}
+
+		gf.writePadding(union.Size - uint32(size))
+		return nil
+	}
+
+	fmt.Fprintf(&gf.w, "%s ", gf.identifier(m.Name))
+
+	if err := gf.writeType(m.Type, depth); err != nil {
+		return err
+	}
+
+	gf.w.WriteString("; ")
+	return nil
+}
+
+func (gf *GoFormatter) writeDatasecLit(ds *Datasec, depth int) error {
+	gf.w.WriteString("struct { ")
+
+	prevOffset := uint32(0)
+	for i, vsi := range ds.Vars {
+		v := vsi.Type.(*Var)
+		if v.Linkage != GlobalVar {
+			// Ignore static, extern, etc. for now.
+			continue
+		}
+
+		if v.Name == "" {
+			return fmt.Errorf("variable %d: empty name", i)
+		}
+
+		gf.writePadding(vsi.Offset - prevOffset)
+		prevOffset = vsi.Offset + vsi.Size
+
+		fmt.Fprintf(&gf.w, "%s ", gf.identifier(v.Name))
+
+		if err := gf.writeType(v.Type, depth); err != nil {
+			return fmt.Errorf("variable %d: %w", i, err)
+		}
+
+		gf.w.WriteString("; ")
+	}
+
+	gf.writePadding(ds.Size - prevOffset)
+	gf.w.WriteString("}")
+	return nil
+}
+
+func (gf *GoFormatter) writePadding(bytes uint32) {
+	if bytes > 0 {
+		fmt.Fprintf(&gf.w, "_ [%d]byte; ", bytes)
+	}
+}
+
+func skipQualifiers(typ Type) Type {
+	result := typ
+	for depth := 0; depth <= maxTypeDepth; depth++ {
+		switch v := (result).(type) {
+		case qualifier:
+			result = v.qualify()
+		default:
+			return result
+		}
+	}
+	return &cycle{typ}
+}
diff --git a/cluster-autoscaler/vendor/github.com/cilium/ebpf/btf/handle.go b/cluster-autoscaler/vendor/github.com/cilium/ebpf/btf/handle.go
new file mode 100644
index 000000000000..128e9b35cf3b
--- /dev/null
+++ b/cluster-autoscaler/vendor/github.com/cilium/ebpf/btf/handle.go
@@ -0,0 +1,121 @@
+package btf
+
+import (
+	"errors"
+	"fmt"
+	"os"
+
+	"github.com/cilium/ebpf/internal/sys"
+	"github.com/cilium/ebpf/internal/unix"
+)
+
+// HandleInfo describes a Handle.
+type HandleInfo struct {
+	// ID of this handle in the kernel. The ID is only valid as long as the
+	// associated handle is kept alive.
+	ID ID
+
+	// Name is an identifying name for the BTF, currently only used by the
+	// kernel.
+	Name string
+
+	// IsKernel is true if the BTF originated with the kernel and not
+	// userspace.
+	IsKernel bool
+
+	// Size of the raw BTF in bytes.
+	size uint32
+}
+
+func newHandleInfoFromFD(fd *sys.FD) (*HandleInfo, error) {
+	// We invoke the syscall once with empty BTF and name buffers to get size
+	// information to allocate buffers. Then we invoke it a second time with
+	// buffers to receive the data.
+	var btfInfo sys.BtfInfo
+	if err := sys.ObjInfo(fd, &btfInfo); err != nil {
+		return nil, fmt.Errorf("get BTF info for fd %s: %w", fd, err)
+	}
+
+	if btfInfo.NameLen > 0 {
+		// NameLen doesn't account for the terminating NUL.
+		btfInfo.NameLen++
+	}
+
+	// Don't pull raw BTF by default, since it may be quite large.
+	btfSize := btfInfo.BtfSize
+	btfInfo.BtfSize = 0
+
+	nameBuffer := make([]byte, btfInfo.NameLen)
+	btfInfo.Name, btfInfo.NameLen = sys.NewSlicePointerLen(nameBuffer)
+	if err := sys.ObjInfo(fd, &btfInfo); err != nil {
+		return nil, err
+	}
+
+	return &HandleInfo{
+		ID:       ID(btfInfo.Id),
+		Name:     unix.ByteSliceToString(nameBuffer),
+		IsKernel: btfInfo.KernelBtf != 0,
+		size:     btfSize,
+	}, nil
+}
+
+// IsVmlinux returns true if the BTF is for the kernel itself.
+func (i *HandleInfo) IsVmlinux() bool {
+	return i.IsKernel && i.Name == "vmlinux"
+}
+
+// IsModule returns true if the BTF is for a kernel module.
+func (i *HandleInfo) IsModule() bool {
+	return i.IsKernel && i.Name != "vmlinux"
+}
+
+// HandleIterator allows enumerating BTF blobs loaded into the kernel.
+type HandleIterator struct {
+	// The ID of the last retrieved handle. Only valid after a call to Next.
+	ID  ID
+	err error
+}
+
+// Next retrieves a handle for the next BTF blob.
+//
+// [Handle.Close] is called if *handle is non-nil to avoid leaking fds.
+//
+// Returns true if another BTF blob was found. Call [HandleIterator.Err] after
+// the function returns false.
+func (it *HandleIterator) Next(handle **Handle) bool {
+	if *handle != nil {
+		(*handle).Close()
+		*handle = nil
+	}
+
+	id := it.ID
+	for {
+		attr := &sys.BtfGetNextIdAttr{Id: id}
+		err := sys.BtfGetNextId(attr)
+		if errors.Is(err, os.ErrNotExist) {
+			// There are no more BTF objects.
+			return false
+		} else if err != nil {
+			it.err = fmt.Errorf("get next BTF ID: %w", err)
+			return false
+		}
+
+		id = attr.NextId
+		*handle, err = NewHandleFromID(id)
+		if errors.Is(err, os.ErrNotExist) {
+			// Try again with the next ID.
+			continue
+		} else if err != nil {
+			it.err = fmt.Errorf("retrieve handle for ID %d: %w", id, err)
+			return false
+		}
+
+		it.ID = id
+		return true
+	}
+}
+
+// Err returns an error if iteration failed for some reason.
+func (it *HandleIterator) Err() error {
+	return it.err
+}
diff --git a/cluster-autoscaler/vendor/github.com/cilium/ebpf/btf/strings.go b/cluster-autoscaler/vendor/github.com/cilium/ebpf/btf/strings.go
new file mode 100644
index 000000000000..67626e0dd172
--- /dev/null
+++ b/cluster-autoscaler/vendor/github.com/cilium/ebpf/btf/strings.go
@@ -0,0 +1,128 @@
+package btf
+
+import (
+	"bufio"
+	"bytes"
+	"errors"
+	"fmt"
+	"io"
+)
+
+type stringTable struct {
+	base    *stringTable
+	offsets []uint32
+	strings []string
+}
+
+// sizedReader is implemented by bytes.Reader, io.SectionReader, strings.Reader, etc.
+type sizedReader interface {
+	io.Reader
+	Size() int64
+}
+
+func readStringTable(r sizedReader, base *stringTable) (*stringTable, error) {
+	// When parsing split BTF's string table, the first entry offset is derived
+	// from the last entry offset of the base BTF.
+	firstStringOffset := uint32(0)
+	if base != nil {
+		idx := len(base.offsets) - 1
+		firstStringOffset = base.offsets[idx] + uint32(len(base.strings[idx])) + 1
+	}
+
+	// Derived from vmlinux BTF.
+	const averageStringLength = 16
+
+	n := int(r.Size() / averageStringLength)
+	offsets := make([]uint32, 0, n)
+	strings := make([]string, 0, n)
+
+	offset := firstStringOffset
+	scanner := bufio.NewScanner(r)
+	scanner.Split(splitNull)
+	for scanner.Scan() {
+		str := scanner.Text()
+		offsets = append(offsets, offset)
+		strings = append(strings, str)
+		offset += uint32(len(str)) + 1
+	}
+	if err := scanner.Err(); err != nil {
+		return nil, err
+	}
+
+	if len(strings) == 0 {
+		return nil, errors.New("string table is empty")
+	}
+
+	if firstStringOffset == 0 && strings[0] != "" {
+		return nil, errors.New("first item in string table is non-empty")
+	}
+
+	return &stringTable{base, offsets, strings}, nil
+}
+
+func splitNull(data []byte, atEOF bool) (advance int, token []byte, err error) {
+	i := bytes.IndexByte(data, 0)
+	if i == -1 {
+		if atEOF && len(data) > 0 {
+			return 0, nil, errors.New("string table isn't null terminated")
+		}
+		return 0, nil, nil
+	}
+
+	return i + 1, data[:i], nil
+}
+
+func (st *stringTable) Lookup(offset uint32) (string, error) {
+	if st.base != nil && offset <= st.base.offsets[len(st.base.offsets)-1] {
+		return st.base.lookup(offset)
+	}
+	return st.lookup(offset)
+}
+
+func (st *stringTable) lookup(offset uint32) (string, error) {
+	i := search(st.offsets, offset)
+	if i == len(st.offsets) || st.offsets[i] != offset {
+		return "", fmt.Errorf("offset %d isn't start of a string", offset)
+	}
+
+	return st.strings[i], nil
+}
+
+func (st *stringTable) Length() int {
+	last := len(st.offsets) - 1
+	return int(st.offsets[last]) + len(st.strings[last]) + 1
+}
+
+func (st *stringTable) Marshal(w io.Writer) error {
+	for _, str := range st.strings {
+		_, err := io.WriteString(w, str)
+		if err != nil {
+			return err
+		}
+		_, err = w.Write([]byte{0})
+		if err != nil {
+			return err
+		}
+	}
+	return nil
+}
+
+// search is a copy of sort.Search specialised for uint32.
+//
+// Licensed under https://go.dev/LICENSE
+func search(ints []uint32, needle uint32) int {
+	// Define f(-1) == false and f(n) == true.
+	// Invariant: f(i-1) == false, f(j) == true.
+	i, j := 0, len(ints)
+	for i < j {
+		h := int(uint(i+j) >> 1) // avoid overflow when computing h
+		// i ≤ h < j
+		if !(ints[h] >= needle) {
+			i = h + 1 // preserves f(i-1) == false
+		} else {
+			j = h // preserves f(j) == true
+		}
+	}
+	// i == j, f(i-1) == false, and f(j) (= f(i)) == true => answer is i.
+	return i
+}
diff --git a/cluster-autoscaler/vendor/github.com/cilium/ebpf/btf/types.go b/cluster-autoscaler/vendor/github.com/cilium/ebpf/btf/types.go
new file mode 100644
index 000000000000..402a363c28ac
--- /dev/null
+++ b/cluster-autoscaler/vendor/github.com/cilium/ebpf/btf/types.go
@@ -0,0 +1,1212 @@
+package btf
+
+import (
+	"fmt"
+	"io"
+	"math"
+	"reflect"
+	"strings"
+
+	"github.com/cilium/ebpf/asm"
+)
+
+const maxTypeDepth = 32
+
+// TypeID identifies a type in a BTF section.
+type TypeID uint32
+
+// Type represents a type described by BTF.
+type Type interface {
+	// Type can be formatted using the %s and %v verbs. %s outputs only the
+	// identity of the type, without any detail. %v outputs additional detail.
+	//
+	// Use the '+' flag to include the address of the type.
+	//
+	// Use the width to specify how many levels of detail to output, for example
+	// %1v will output detail for the root type and a short description of its
+	// children. %2v would output details of the root type and its children
+	// as well as a short description of the grandchildren.
+	fmt.Formatter
+
+	// Name of the type, empty for anonymous types and types that cannot
+	// carry a name, like Void and Pointer.
+	TypeName() string
+
+	// Make a copy of the type, without copying Type members.
+	copy() Type
+
+	// Enumerate all nested Types. Repeated calls must visit nested
+	// types in the same order.
+	walk(*typeDeque)
+}
+
+var (
+	_ Type = (*Int)(nil)
+	_ Type = (*Struct)(nil)
+	_ Type = (*Union)(nil)
+	_ Type = (*Enum)(nil)
+	_ Type = (*Fwd)(nil)
+	_ Type = (*Func)(nil)
+	_ Type = (*Typedef)(nil)
+	_ Type = (*Var)(nil)
+	_ Type = (*Datasec)(nil)
+	_ Type = (*Float)(nil)
+)
+
+// types is a list of Type.
+//
+// The order determines the ID of a type.
+type types []Type
+
+func (ts types) ByID(id TypeID) (Type, error) {
+	if int(id) >= len(ts) {
+		return nil, fmt.Errorf("type ID %d: %w", id, ErrNotFound)
+	}
+	return ts[id], nil
+}
+
+// Void is the unit type of BTF.
+type Void struct{}
+
+func (v *Void) Format(fs fmt.State, verb rune) { formatType(fs, verb, v) }
+func (v *Void) TypeName() string               { return "" }
+func (v *Void) size() uint32                   { return 0 }
+func (v *Void) copy() Type                     { return (*Void)(nil) }
+func (v *Void) walk(*typeDeque)                {}
+
+type IntEncoding byte
+
+const (
+	Signed IntEncoding = 1 << iota
+	Char
+	Bool
+)
+
+func (ie IntEncoding) IsSigned() bool {
+	return ie&Signed != 0
+}
+
+func (ie IntEncoding) IsChar() bool {
+	return ie&Char != 0
+}
+
+func (ie IntEncoding) IsBool() bool {
+	return ie&Bool != 0
+}
+
+func (ie IntEncoding) String() string {
+	switch {
+	case ie.IsChar() && ie.IsSigned():
+		return "char"
+	case ie.IsChar() && !ie.IsSigned():
+		return "uchar"
+	case ie.IsBool():
+		return "bool"
+	case ie.IsSigned():
+		return "signed"
+	default:
+		return "unsigned"
+	}
+}
+
+// Int is an integer of a given length.
+//
+// See https://www.kernel.org/doc/html/latest/bpf/btf.html#btf-kind-int
+type Int struct {
+	Name string
+
+	// The size of the integer in bytes.
+	Size     uint32
+	Encoding IntEncoding
+}
+
+func (i *Int) Format(fs fmt.State, verb rune) {
+	formatType(fs, verb, i, i.Encoding, "size=", i.Size*8)
+}
+
+func (i *Int) TypeName() string { return i.Name }
+func (i *Int) size() uint32     { return i.Size }
+func (i *Int) walk(*typeDeque)  {}
+func (i *Int) copy() Type {
+	cpy := *i
+	return &cpy
+}
+
+// Pointer is a pointer to another type.
+type Pointer struct {
+	Target Type
+}
+
+func (p *Pointer) Format(fs fmt.State, verb rune) {
+	formatType(fs, verb, p, "target=", p.Target)
+}
+
+func (p *Pointer) TypeName() string    { return "" }
+func (p *Pointer) size() uint32        { return 8 }
+func (p *Pointer) walk(tdq *typeDeque) { tdq.push(&p.Target) }
+func (p *Pointer) copy() Type {
+	cpy := *p
+	return &cpy
+}
+
+// Array is an array with a fixed number of elements.
+type Array struct {
+	Index  Type
+	Type   Type
+	Nelems uint32
+}
+
+func (arr *Array) Format(fs fmt.State, verb rune) {
+	formatType(fs, verb, arr, "index=", arr.Index, "type=", arr.Type, "n=", arr.Nelems)
+}
+
+func (arr *Array) TypeName() string { return "" }
+
+func (arr *Array) walk(tdq *typeDeque) {
+	tdq.push(&arr.Index)
+	tdq.push(&arr.Type)
+}
+
+func (arr *Array) copy() Type {
+	cpy := *arr
+	return &cpy
+}
+
+// Struct is a compound type of consecutive members.
+type Struct struct {
+	Name string
+	// The size of the struct including padding, in bytes.
+	Size    uint32
+	Members []Member
+}
+
+func (s *Struct) Format(fs fmt.State, verb rune) {
+	formatType(fs, verb, s, "fields=", len(s.Members))
+}
+
+func (s *Struct) TypeName() string { return s.Name }
+
+func (s *Struct) size() uint32 { return s.Size }
+
+func (s *Struct) walk(tdq *typeDeque) {
+	for i := range s.Members {
+		tdq.push(&s.Members[i].Type)
+	}
+}
+
+func (s *Struct) copy() Type {
+	cpy := *s
+	cpy.Members = copyMembers(s.Members)
+	return &cpy
+}
+
+func (s *Struct) members() []Member {
+	return s.Members
+}
+
+// Union is a compound type where members occupy the same memory.
+type Union struct {
+	Name string
+	// The size of the union including padding, in bytes.
+	Size    uint32
+	Members []Member
+}
+
+func (u *Union) Format(fs fmt.State, verb rune) {
+	formatType(fs, verb, u, "fields=", len(u.Members))
+}
+
+func (u *Union) TypeName() string { return u.Name }
+
+func (u *Union) size() uint32 { return u.Size }
+
+func (u *Union) walk(tdq *typeDeque) {
+	for i := range u.Members {
+		tdq.push(&u.Members[i].Type)
+	}
+}
+
+func (u *Union) copy() Type {
+	cpy := *u
+	cpy.Members = copyMembers(u.Members)
+	return &cpy
+}
+
+func (u *Union) members() []Member {
+	return u.Members
+}
+
+func copyMembers(orig []Member) []Member {
+	cpy := make([]Member, len(orig))
+	copy(cpy, orig)
+	return cpy
+}
+
+type composite interface {
+	members() []Member
+}
+
+var (
+	_ composite = (*Struct)(nil)
+	_ composite = (*Union)(nil)
+)
+
+// A value in bits.
+type Bits uint32
+
+// Bytes converts a bit value into bytes.
+func (b Bits) Bytes() uint32 {
+	return uint32(b / 8)
+}
+
+// Member is part of a Struct or Union.
+//
+// It is not a valid Type.
+type Member struct {
+	Name         string
+	Type         Type
+	Offset       Bits
+	BitfieldSize Bits
+}
+
+// Enum lists possible values.
+type Enum struct {
+	Name string
+	// Size of the enum value in bytes.
+	Size   uint32
+	Values []EnumValue
+}
+
+func (e *Enum) Format(fs fmt.State, verb rune) {
+	formatType(fs, verb, e, "size=", e.Size, "values=", len(e.Values))
+}
+
+func (e *Enum) TypeName() string { return e.Name }
+
+// EnumValue is part of an Enum.
+//
+// It is not a valid Type.
+type EnumValue struct {
+	Name  string
+	Value int32
+}
+
+func (e *Enum) size() uint32    { return e.Size }
+func (e *Enum) walk(*typeDeque) {}
+func (e *Enum) copy() Type {
+	cpy := *e
+	cpy.Values = make([]EnumValue, len(e.Values))
+	copy(cpy.Values, e.Values)
+	return &cpy
+}
+
+// FwdKind is the type of forward declaration.
+type FwdKind int
+
+// Valid types of forward declaration.
+const ( + FwdStruct FwdKind = iota + FwdUnion +) + +func (fk FwdKind) String() string { + switch fk { + case FwdStruct: + return "struct" + case FwdUnion: + return "union" + default: + return fmt.Sprintf("%T(%d)", fk, int(fk)) + } +} + +// Fwd is a forward declaration of a Type. +type Fwd struct { + Name string + Kind FwdKind +} + +func (f *Fwd) Format(fs fmt.State, verb rune) { + formatType(fs, verb, f, f.Kind) +} + +func (f *Fwd) TypeName() string { return f.Name } + +func (f *Fwd) walk(*typeDeque) {} +func (f *Fwd) copy() Type { + cpy := *f + return &cpy +} + +// Typedef is an alias of a Type. +type Typedef struct { + Name string + Type Type +} + +func (td *Typedef) Format(fs fmt.State, verb rune) { + formatType(fs, verb, td, td.Type) +} + +func (td *Typedef) TypeName() string { return td.Name } + +func (td *Typedef) walk(tdq *typeDeque) { tdq.push(&td.Type) } +func (td *Typedef) copy() Type { + cpy := *td + return &cpy +} + +// Volatile is a qualifier. +type Volatile struct { + Type Type +} + +func (v *Volatile) Format(fs fmt.State, verb rune) { + formatType(fs, verb, v, v.Type) +} + +func (v *Volatile) TypeName() string { return "" } + +func (v *Volatile) qualify() Type { return v.Type } +func (v *Volatile) walk(tdq *typeDeque) { tdq.push(&v.Type) } +func (v *Volatile) copy() Type { + cpy := *v + return &cpy +} + +// Const is a qualifier. +type Const struct { + Type Type +} + +func (c *Const) Format(fs fmt.State, verb rune) { + formatType(fs, verb, c, c.Type) +} + +func (c *Const) TypeName() string { return "" } + +func (c *Const) qualify() Type { return c.Type } +func (c *Const) walk(tdq *typeDeque) { tdq.push(&c.Type) } +func (c *Const) copy() Type { + cpy := *c + return &cpy +} + +// Restrict is a qualifier. 
+type Restrict struct { + Type Type +} + +func (r *Restrict) Format(fs fmt.State, verb rune) { + formatType(fs, verb, r, r.Type) +} + +func (r *Restrict) TypeName() string { return "" } + +func (r *Restrict) qualify() Type { return r.Type } +func (r *Restrict) walk(tdq *typeDeque) { tdq.push(&r.Type) } +func (r *Restrict) copy() Type { + cpy := *r + return &cpy +} + +// Func is a function definition. +type Func struct { + Name string + Type Type + Linkage FuncLinkage +} + +func FuncMetadata(ins *asm.Instruction) *Func { + fn, _ := ins.Metadata.Get(funcInfoMeta{}).(*Func) + return fn +} + +func (f *Func) Format(fs fmt.State, verb rune) { + formatType(fs, verb, f, f.Linkage, "proto=", f.Type) +} + +func (f *Func) TypeName() string { return f.Name } + +func (f *Func) walk(tdq *typeDeque) { tdq.push(&f.Type) } +func (f *Func) copy() Type { + cpy := *f + return &cpy +} + +// FuncProto is a function declaration. +type FuncProto struct { + Return Type + Params []FuncParam +} + +func (fp *FuncProto) Format(fs fmt.State, verb rune) { + formatType(fs, verb, fp, "args=", len(fp.Params), "return=", fp.Return) +} + +func (fp *FuncProto) TypeName() string { return "" } + +func (fp *FuncProto) walk(tdq *typeDeque) { + tdq.push(&fp.Return) + for i := range fp.Params { + tdq.push(&fp.Params[i].Type) + } +} + +func (fp *FuncProto) copy() Type { + cpy := *fp + cpy.Params = make([]FuncParam, len(fp.Params)) + copy(cpy.Params, fp.Params) + return &cpy +} + +type FuncParam struct { + Name string + Type Type +} + +// Var is a global variable. +type Var struct { + Name string + Type Type + Linkage VarLinkage +} + +func (v *Var) Format(fs fmt.State, verb rune) { + formatType(fs, verb, v, v.Linkage) +} + +func (v *Var) TypeName() string { return v.Name } + +func (v *Var) walk(tdq *typeDeque) { tdq.push(&v.Type) } +func (v *Var) copy() Type { + cpy := *v + return &cpy +} + +// Datasec is a global program section containing data. 
+type Datasec struct {
+ Name string
+ Size uint32
+ Vars []VarSecinfo
+}
+
+func (ds *Datasec) Format(fs fmt.State, verb rune) {
+ formatType(fs, verb, ds)
+}
+
+func (ds *Datasec) TypeName() string { return ds.Name }
+
+func (ds *Datasec) size() uint32 { return ds.Size }
+
+func (ds *Datasec) walk(tdq *typeDeque) {
+ for i := range ds.Vars {
+ tdq.push(&ds.Vars[i].Type)
+ }
+}
+
+func (ds *Datasec) copy() Type {
+ cpy := *ds
+ cpy.Vars = make([]VarSecinfo, len(ds.Vars))
+ copy(cpy.Vars, ds.Vars)
+ return &cpy
+}
+
+// VarSecinfo describes a variable in a Datasec.
+//
+// It is not a valid Type.
+type VarSecinfo struct {
+ Type Type
+ Offset uint32
+ Size uint32
+}
+
+// Float is a float of a given length.
+type Float struct {
+ Name string
+
+ // The size of the float in bytes.
+ Size uint32
+}
+
+func (f *Float) Format(fs fmt.State, verb rune) {
+ formatType(fs, verb, f, "size=", f.Size*8)
+}
+
+func (f *Float) TypeName() string { return f.Name }
+func (f *Float) size() uint32 { return f.Size }
+func (f *Float) walk(*typeDeque) {}
+func (f *Float) copy() Type {
+ cpy := *f
+ return &cpy
+}
+
+// cycle is a type which had to be elided since it exceeded maxTypeDepth.
+type cycle struct {
+ root Type
+}
+
+func (c *cycle) ID() TypeID { return math.MaxUint32 }
+func (c *cycle) Format(fs fmt.State, verb rune) { formatType(fs, verb, c, "root=", c.root) }
+func (c *cycle) TypeName() string { return "" }
+func (c *cycle) walk(*typeDeque) {}
+func (c *cycle) copy() Type {
+ cpy := *c
+ return &cpy
+}
+
+type sizer interface {
+ size() uint32
+}
+
+var (
+ _ sizer = (*Int)(nil)
+ _ sizer = (*Pointer)(nil)
+ _ sizer = (*Struct)(nil)
+ _ sizer = (*Union)(nil)
+ _ sizer = (*Enum)(nil)
+ _ sizer = (*Datasec)(nil)
+)
+
+type qualifier interface {
+ qualify() Type
+}
+
+var (
+ _ qualifier = (*Const)(nil)
+ _ qualifier = (*Restrict)(nil)
+ _ qualifier = (*Volatile)(nil)
+)
+
+// Sizeof returns the size of a type in bytes.
+//
+// Returns an error if the size can't be computed.
+func Sizeof(typ Type) (int, error) {
+ var (
+ n = int64(1)
+ elem int64
+ )
+
+ for i := 0; i < maxTypeDepth; i++ {
+ switch v := typ.(type) {
+ case *Array:
+ if n > 0 && int64(v.Nelems) > math.MaxInt64/n {
+ return 0, fmt.Errorf("type %s: overflow", typ)
+ }
+
+ // Arrays may be of zero length, which allows
+ // n to be zero as well.
+ n *= int64(v.Nelems)
+ typ = v.Type
+ continue
+
+ case sizer:
+ elem = int64(v.size())
+
+ case *Typedef:
+ typ = v.Type
+ continue
+
+ case qualifier:
+ typ = v.qualify()
+ continue
+
+ default:
+ return 0, fmt.Errorf("unsized type %T", typ)
+ }
+
+ if n > 0 && elem > math.MaxInt64/n {
+ return 0, fmt.Errorf("type %s: overflow", typ)
+ }
+
+ size := n * elem
+ if int64(int(size)) != size {
+ return 0, fmt.Errorf("type %s: overflow", typ)
+ }
+
+ return int(size), nil
+ }
+
+ return 0, fmt.Errorf("type %s: exceeded type depth", typ)
+}
+
+// alignof returns the alignment of a type.
+//
+// Currently only supports the subset of types necessary for bitfield relocations.
+func alignof(typ Type) (int, error) {
+ switch t := UnderlyingType(typ).(type) {
+ case *Enum:
+ return int(t.size()), nil
+ case *Int:
+ return int(t.Size), nil
+ default:
+ return 0, fmt.Errorf("can't calculate alignment of %T", t)
+ }
+}
+
+// Transformer modifies a given Type and returns the result.
+//
+// For example, UnderlyingType removes any qualifiers or typedefs from a type.
+// See the example on Copy for how to use a transform.
+type Transformer func(Type) Type
+
+// Copy a Type recursively.
+//
+// typ may form a cycle. If transform is not nil, it is called with the
+// type to be copied, and the returned value is copied instead.
+func Copy(typ Type, transform Transformer) Type {
+ copies := make(copier)
+ copies.copy(&typ, transform)
+ return typ
+}
+
+// copy a slice of Types recursively.
+//
+// See Copy for the semantics.
+func copyTypes(types []Type, transform Transformer) []Type {
+ result := make([]Type, len(types))
+ copy(result, types)
+
+ copies := make(copier)
+ for i := range result {
+ copies.copy(&result[i], transform)
+ }
+
+ return result
+}
+
+type copier map[Type]Type
+
+func (c copier) copy(typ *Type, transform Transformer) {
+ var work typeDeque
+ for t := typ; t != nil; t = work.pop() {
+ // *t is the identity of the type.
+ if cpy := c[*t]; cpy != nil {
+ *t = cpy
+ continue
+ }
+
+ var cpy Type
+ if transform != nil {
+ cpy = transform(*t).copy()
+ } else {
+ cpy = (*t).copy()
+ }
+
+ c[*t] = cpy
+ *t = cpy
+
+ // Mark any nested types for copying.
+ cpy.walk(&work)
+ }
+}
+
+// typeDeque keeps track of pointers to types which still
+// need to be visited.
+type typeDeque struct {
+ types []*Type
+ read, write uint64
+ mask uint64
+}
+
+func (dq *typeDeque) empty() bool {
+ return dq.read == dq.write
+}
+
+// push adds a type to the stack.
+func (dq *typeDeque) push(t *Type) {
+ if dq.write-dq.read < uint64(len(dq.types)) {
+ dq.types[dq.write&dq.mask] = t
+ dq.write++
+ return
+ }
+
+ new := len(dq.types) * 2
+ if new == 0 {
+ new = 8
+ }
+
+ types := make([]*Type, new)
+ pivot := dq.read & dq.mask
+ n := copy(types, dq.types[pivot:])
+ n += copy(types[n:], dq.types[:pivot])
+ types[n] = t
+
+ dq.types = types
+ dq.mask = uint64(new) - 1
+ dq.read, dq.write = 0, uint64(n+1)
+}
+
+// shift returns the first element or nil.
+func (dq *typeDeque) shift() *Type {
+ if dq.empty() {
+ return nil
+ }
+
+ index := dq.read & dq.mask
+ t := dq.types[index]
+ dq.types[index] = nil
+ dq.read++
+ return t
+}
+
+// pop returns the last element or nil.
+func (dq *typeDeque) pop() *Type {
+ if dq.empty() {
+ return nil
+ }
+
+ dq.write--
+ index := dq.write & dq.mask
+ t := dq.types[index]
+ dq.types[index] = nil
+ return t
+}
+
+// all returns all elements.
+//
+// The deque is empty after calling this method.
+func (dq *typeDeque) all() []*Type { + length := dq.write - dq.read + types := make([]*Type, 0, length) + for t := dq.shift(); t != nil; t = dq.shift() { + types = append(types, t) + } + return types +} + +// inflateRawTypes takes a list of raw btf types linked via type IDs, and turns +// it into a graph of Types connected via pointers. +// +// If baseTypes are provided, then the raw types are +// considered to be of a split BTF (e.g., a kernel module). +// +// Returns a slice of types indexed by TypeID. Since BTF ignores compilation +// units, multiple types may share the same name. A Type may form a cyclic graph +// by pointing at itself. +func inflateRawTypes(rawTypes []rawType, baseTypes types, rawStrings *stringTable) ([]Type, error) { + types := make([]Type, 0, len(rawTypes)+1) // +1 for Void added to base types + + typeIDOffset := TypeID(1) // Void is TypeID(0), so the rest starts from TypeID(1) + + if baseTypes == nil { + // Void is defined to always be type ID 0, and is thus omitted from BTF. + types = append(types, (*Void)(nil)) + } else { + // For split BTF, the next ID is max base BTF type ID + 1 + typeIDOffset = TypeID(len(baseTypes)) + } + + type fixupDef struct { + id TypeID + typ *Type + } + + var fixups []fixupDef + fixup := func(id TypeID, typ *Type) { + if id < TypeID(len(baseTypes)) { + *typ = baseTypes[id] + return + } + + idx := id + if baseTypes != nil { + idx = id - TypeID(len(baseTypes)) + } + if idx < TypeID(len(types)) { + // We've already inflated this type, fix it up immediately. + *typ = types[idx] + return + } + fixups = append(fixups, fixupDef{id, typ}) + } + + type assertion struct { + typ *Type + want reflect.Type + } + + var assertions []assertion + assert := func(typ *Type, want reflect.Type) error { + if *typ != nil { + // The type has already been fixed up, check the type immediately. 
+ if reflect.TypeOf(*typ) != want {
+ return fmt.Errorf("expected %s, got %T", want, *typ)
+ }
+ return nil
+ }
+ assertions = append(assertions, assertion{typ, want})
+ return nil
+ }
+
+ type bitfieldFixupDef struct {
+ id TypeID
+ m *Member
+ }
+
+ var (
+ legacyBitfields = make(map[TypeID][2]Bits) // offset, size
+ bitfieldFixups []bitfieldFixupDef
+ )
+ convertMembers := func(raw []btfMember, kindFlag bool) ([]Member, error) {
+ // NB: The fixup below relies on pre-allocating this array to
+ // work, since otherwise append might re-allocate members.
+ members := make([]Member, 0, len(raw))
+ for i, btfMember := range raw {
+ name, err := rawStrings.Lookup(btfMember.NameOff)
+ if err != nil {
+ return nil, fmt.Errorf("can't get name for member %d: %w", i, err)
+ }
+
+ members = append(members, Member{
+ Name: name,
+ Offset: Bits(btfMember.Offset),
+ })
+
+ m := &members[i]
+ fixup(raw[i].Type, &m.Type)
+
+ if kindFlag {
+ m.BitfieldSize = Bits(btfMember.Offset >> 24)
+ m.Offset &= 0xffffff
+ // We ignore legacy bitfield definitions if the current composite
+ // is a new-style bitfield. This is kind of safe since offset and
+ // size on the type of the member must be zero if kindFlag is set
+ // according to spec.
+ continue
+ }
+
+ // This may be a legacy bitfield, try to fix it up.
+ data, ok := legacyBitfields[raw[i].Type]
+ if ok {
+ // Bingo!
+ m.Offset += data[0]
+ m.BitfieldSize = data[1]
+ continue
+ }
+
+ if m.Type != nil {
+ // We couldn't find a legacy bitfield, but we know that the member's
+ // type has already been inflated. Hence we know that it can't be
+ // a legacy bitfield and there is nothing left to do.
+ continue
+ }
+
+ // We don't have fixup data, and the type we're pointing
+ // at hasn't been inflated yet. No choice but to defer
+ // the fixup.
+ bitfieldFixups = append(bitfieldFixups, bitfieldFixupDef{ + raw[i].Type, + m, + }) + } + return members, nil + } + + for i, raw := range rawTypes { + var ( + id = typeIDOffset + TypeID(i) + typ Type + ) + + name, err := rawStrings.Lookup(raw.NameOff) + if err != nil { + return nil, fmt.Errorf("get name for type id %d: %w", id, err) + } + + switch raw.Kind() { + case kindInt: + size := raw.Size() + bi := raw.data.(*btfInt) + if bi.Offset() > 0 || bi.Bits().Bytes() != size { + legacyBitfields[id] = [2]Bits{bi.Offset(), bi.Bits()} + } + typ = &Int{name, raw.Size(), bi.Encoding()} + + case kindPointer: + ptr := &Pointer{nil} + fixup(raw.Type(), &ptr.Target) + typ = ptr + + case kindArray: + btfArr := raw.data.(*btfArray) + arr := &Array{nil, nil, btfArr.Nelems} + fixup(btfArr.IndexType, &arr.Index) + fixup(btfArr.Type, &arr.Type) + typ = arr + + case kindStruct: + members, err := convertMembers(raw.data.([]btfMember), raw.KindFlag()) + if err != nil { + return nil, fmt.Errorf("struct %s (id %d): %w", name, id, err) + } + typ = &Struct{name, raw.Size(), members} + + case kindUnion: + members, err := convertMembers(raw.data.([]btfMember), raw.KindFlag()) + if err != nil { + return nil, fmt.Errorf("union %s (id %d): %w", name, id, err) + } + typ = &Union{name, raw.Size(), members} + + case kindEnum: + rawvals := raw.data.([]btfEnum) + vals := make([]EnumValue, 0, len(rawvals)) + for i, btfVal := range rawvals { + name, err := rawStrings.Lookup(btfVal.NameOff) + if err != nil { + return nil, fmt.Errorf("get name for enum value %d: %s", i, err) + } + vals = append(vals, EnumValue{ + Name: name, + Value: btfVal.Val, + }) + } + typ = &Enum{name, raw.Size(), vals} + + case kindForward: + if raw.KindFlag() { + typ = &Fwd{name, FwdUnion} + } else { + typ = &Fwd{name, FwdStruct} + } + + case kindTypedef: + typedef := &Typedef{name, nil} + fixup(raw.Type(), &typedef.Type) + typ = typedef + + case kindVolatile: + volatile := &Volatile{nil} + fixup(raw.Type(), &volatile.Type) + 
typ = volatile + + case kindConst: + cnst := &Const{nil} + fixup(raw.Type(), &cnst.Type) + typ = cnst + + case kindRestrict: + restrict := &Restrict{nil} + fixup(raw.Type(), &restrict.Type) + typ = restrict + + case kindFunc: + fn := &Func{name, nil, raw.Linkage()} + fixup(raw.Type(), &fn.Type) + if err := assert(&fn.Type, reflect.TypeOf((*FuncProto)(nil))); err != nil { + return nil, err + } + typ = fn + + case kindFuncProto: + rawparams := raw.data.([]btfParam) + params := make([]FuncParam, 0, len(rawparams)) + for i, param := range rawparams { + name, err := rawStrings.Lookup(param.NameOff) + if err != nil { + return nil, fmt.Errorf("get name for func proto parameter %d: %s", i, err) + } + params = append(params, FuncParam{ + Name: name, + }) + } + for i := range params { + fixup(rawparams[i].Type, ¶ms[i].Type) + } + + fp := &FuncProto{nil, params} + fixup(raw.Type(), &fp.Return) + typ = fp + + case kindVar: + variable := raw.data.(*btfVariable) + v := &Var{name, nil, VarLinkage(variable.Linkage)} + fixup(raw.Type(), &v.Type) + typ = v + + case kindDatasec: + btfVars := raw.data.([]btfVarSecinfo) + vars := make([]VarSecinfo, 0, len(btfVars)) + for _, btfVar := range btfVars { + vars = append(vars, VarSecinfo{ + Offset: btfVar.Offset, + Size: btfVar.Size, + }) + } + for i := range vars { + fixup(btfVars[i].Type, &vars[i].Type) + if err := assert(&vars[i].Type, reflect.TypeOf((*Var)(nil))); err != nil { + return nil, err + } + } + typ = &Datasec{name, raw.SizeType, vars} + + case kindFloat: + typ = &Float{name, raw.Size()} + + default: + return nil, fmt.Errorf("type id %d: unknown kind: %v", id, raw.Kind()) + } + + types = append(types, typ) + } + + for _, fixup := range fixups { + i := int(fixup.id) + if i >= len(types)+len(baseTypes) { + return nil, fmt.Errorf("reference to invalid type id: %d", fixup.id) + } + if i < len(baseTypes) { + return nil, fmt.Errorf("fixup for base type id %d is not expected", i) + } + + *fixup.typ = types[i-len(baseTypes)] + } + + for 
_, bitfieldFixup := range bitfieldFixups { + if bitfieldFixup.id < TypeID(len(baseTypes)) { + return nil, fmt.Errorf("bitfield fixup from split to base types is not expected") + } + + data, ok := legacyBitfields[bitfieldFixup.id] + if ok { + // This is indeed a legacy bitfield, fix it up. + bitfieldFixup.m.Offset += data[0] + bitfieldFixup.m.BitfieldSize = data[1] + } + } + + for _, assertion := range assertions { + if reflect.TypeOf(*assertion.typ) != assertion.want { + return nil, fmt.Errorf("expected %s, got %T", assertion.want, *assertion.typ) + } + } + + return types, nil +} + +// essentialName represents the name of a BTF type stripped of any flavor +// suffixes after a ___ delimiter. +type essentialName string + +// newEssentialName returns name without a ___ suffix. +// +// CO-RE has the concept of 'struct flavors', which are used to deal with +// changes in kernel data structures. Anything after three underscores +// in a type name is ignored for the purpose of finding a candidate type +// in the kernel's BTF. +func newEssentialName(name string) essentialName { + if name == "" { + return "" + } + lastIdx := strings.LastIndex(name, "___") + if lastIdx > 0 { + return essentialName(name[:lastIdx]) + } + return essentialName(name) +} + +// UnderlyingType skips qualifiers and Typedefs. +func UnderlyingType(typ Type) Type { + result := typ + for depth := 0; depth <= maxTypeDepth; depth++ { + switch v := (result).(type) { + case qualifier: + result = v.qualify() + case *Typedef: + result = v.Type + default: + return result + } + } + return &cycle{typ} +} + +type formatState struct { + fmt.State + depth int +} + +// formattableType is a subset of Type, to ease unit testing of formatType. +type formattableType interface { + fmt.Formatter + TypeName() string +} + +// formatType formats a type in a canonical form. +// +// Handles cyclical types by only printing cycles up to a certain depth. 
Elements +// in extra are separated by spaces unless the preceding element is a string +// ending in '='. +func formatType(f fmt.State, verb rune, t formattableType, extra ...interface{}) { + if verb != 'v' && verb != 's' { + fmt.Fprintf(f, "{UNRECOGNIZED: %c}", verb) + return + } + + // This is the same as %T, but elides the package name. Assumes that + // formattableType is implemented by a pointer receiver. + goTypeName := reflect.TypeOf(t).Elem().Name() + _, _ = io.WriteString(f, goTypeName) + + if name := t.TypeName(); name != "" { + // Output BTF type name if present. + fmt.Fprintf(f, ":%q", name) + } + + if f.Flag('+') { + // Output address if requested. + fmt.Fprintf(f, ":%#p", t) + } + + if verb == 's' { + // %s omits details. + return + } + + var depth int + if ps, ok := f.(*formatState); ok { + depth = ps.depth + f = ps.State + } + + maxDepth, ok := f.Width() + if !ok { + maxDepth = 0 + } + + if depth > maxDepth { + // We've reached the maximum depth. This avoids infinite recursion even + // for cyclical types. 
+ return + } + + if len(extra) == 0 { + return + } + + wantSpace := false + _, _ = io.WriteString(f, "[") + for _, arg := range extra { + if wantSpace { + _, _ = io.WriteString(f, " ") + } + + switch v := arg.(type) { + case string: + _, _ = io.WriteString(f, v) + wantSpace = len(v) > 0 && v[len(v)-1] != '=' + continue + + case formattableType: + v.Format(&formatState{f, depth + 1}, verb) + + default: + fmt.Fprint(f, arg) + } + + wantSpace = true + } + _, _ = io.WriteString(f, "]") +} diff --git a/cluster-autoscaler/vendor/github.com/cilium/ebpf/collection.go b/cluster-autoscaler/vendor/github.com/cilium/ebpf/collection.go index 2ededc87a05c..8c2ddc380214 100644 --- a/cluster-autoscaler/vendor/github.com/cilium/ebpf/collection.go +++ b/cluster-autoscaler/vendor/github.com/cilium/ebpf/collection.go @@ -4,14 +4,11 @@ import ( "encoding/binary" "errors" "fmt" - "io" - "math" "reflect" "strings" "github.com/cilium/ebpf/asm" - "github.com/cilium/ebpf/internal" - "github.com/cilium/ebpf/internal/btf" + "github.com/cilium/ebpf/btf" ) // CollectionOptions control loading a collection into the kernel. @@ -20,6 +17,17 @@ import ( type CollectionOptions struct { Maps MapOptions Programs ProgramOptions + + // MapReplacements takes a set of Maps that will be used instead of + // creating new ones when loading the CollectionSpec. + // + // For each given Map, there must be a corresponding MapSpec in + // CollectionSpec.Maps, and its type, key/value size, max entries and flags + // must match the values of the MapSpec. + // + // The given Maps are Clone()d before being used in the Collection, so the + // caller can Close() them freely when they are no longer needed. + MapReplacements map[string]*Map } // CollectionSpec describes a collection. @@ -27,6 +35,10 @@ type CollectionSpec struct { Maps map[string]*MapSpec Programs map[string]*ProgramSpec + // Types holds type information about Maps and Programs. + // Modifications to Types are currently undefined behaviour. 
+ Types *btf.Spec + // ByteOrder specifies whether the ELF was compiled for // big-endian or little-endian architectures. ByteOrder binary.ByteOrder @@ -42,6 +54,7 @@ func (cs *CollectionSpec) Copy() *CollectionSpec { Maps: make(map[string]*MapSpec, len(cs.Maps)), Programs: make(map[string]*ProgramSpec, len(cs.Programs)), ByteOrder: cs.ByteOrder, + Types: cs.Types, } for name, spec := range cs.Maps { @@ -61,19 +74,21 @@ func (cs *CollectionSpec) Copy() *CollectionSpec { // when calling NewCollection. Any named maps are removed from CollectionSpec.Maps. // // Returns an error if a named map isn't used in at least one program. +// +// Deprecated: Pass CollectionOptions.MapReplacements when loading the Collection +// instead. func (cs *CollectionSpec) RewriteMaps(maps map[string]*Map) error { for symbol, m := range maps { // have we seen a program that uses this symbol / map seen := false - fd := m.FD() for progName, progSpec := range cs.Programs { - err := progSpec.Instructions.RewriteMapPtr(symbol, fd) + err := progSpec.Instructions.AssociateMap(symbol, m) switch { case err == nil: seen = true - case asm.IsUnreferencedSymbol(err): + case errors.Is(err, asm.ErrUnreferencedSymbol): // Not all programs need to use the map default: @@ -107,34 +122,67 @@ func (cs *CollectionSpec) RewriteMaps(maps map[string]*Map) error { // // Returns an error if a constant doesn't exist. 
func (cs *CollectionSpec) RewriteConstants(consts map[string]interface{}) error { - rodata := cs.Maps[".rodata"] - if rodata == nil { - return errors.New("missing .rodata section") - } + replaced := make(map[string]bool) - if rodata.BTF == nil { - return errors.New(".rodata section has no BTF") - } + for name, spec := range cs.Maps { + if !strings.HasPrefix(name, ".rodata") { + continue + } - if n := len(rodata.Contents); n != 1 { - return fmt.Errorf("expected one key in .rodata, found %d", n) - } + b, ds, err := spec.dataSection() + if errors.Is(err, errMapNoBTFValue) { + // Data sections without a BTF Datasec are valid, but don't support + // constant replacements. + continue + } + if err != nil { + return fmt.Errorf("map %s: %w", name, err) + } + + // MapSpec.Copy() performs a shallow copy. Fully copy the byte slice + // to avoid any changes affecting other copies of the MapSpec. + cpy := make([]byte, len(b)) + copy(cpy, b) + + for _, v := range ds.Vars { + vname := v.Type.TypeName() + replacement, ok := consts[vname] + if !ok { + continue + } + + if replaced[vname] { + return fmt.Errorf("section %s: duplicate variable %s", name, vname) + } + + if int(v.Offset+v.Size) > len(cpy) { + return fmt.Errorf("section %s: offset %d(+%d) for variable %s is out of bounds", name, v.Offset, v.Size, vname) + } - kv := rodata.Contents[0] - value, ok := kv.Value.([]byte) - if !ok { - return fmt.Errorf("first value in .rodata is %T not []byte", kv.Value) + b, err := marshalBytes(replacement, int(v.Size)) + if err != nil { + return fmt.Errorf("marshaling constant replacement %s: %w", vname, err) + } + + copy(cpy[v.Offset:v.Offset+v.Size], b) + + replaced[vname] = true + } + + spec.Contents[0] = MapKV{Key: uint32(0), Value: cpy} } - buf := make([]byte, len(value)) - copy(buf, value) + var missing []string + for c := range consts { + if !replaced[c] { + missing = append(missing, c) + } + } - err := patchValue(buf, rodata.BTF.Value, consts) - if err != nil { - return err + if 
len(missing) != 0 { + return fmt.Errorf("spec is missing one or more constants: %s", strings.Join(missing, ",")) } - rodata.Contents[0] = MapKV{kv.Key, buf} return nil } @@ -187,6 +235,9 @@ func (cs *CollectionSpec) Assign(to interface{}) error { // LoadAndAssign loads Maps and Programs into the kernel and assigns them // to a struct. // +// Omitting Map/Program.Close() during application shutdown is an error. +// See the package documentation for details around Map and Program lifecycle. +// // This function is a shortcut to manually checking the presence // of maps and programs in a CollectionSpec. Consider using bpf2go // if this sounds useful. @@ -209,15 +260,21 @@ func (cs *CollectionSpec) Assign(to interface{}) error { // Returns an error if any of the fields can't be found, or // if the same Map or Program is assigned multiple times. func (cs *CollectionSpec) LoadAndAssign(to interface{}, opts *CollectionOptions) error { - loader := newCollectionLoader(cs, opts) - defer loader.cleanup() + loader, err := newCollectionLoader(cs, opts) + if err != nil { + return err + } + defer loader.close() // Support assigning Programs and Maps, lazy-loading the required objects. assignedMaps := make(map[string]bool) + assignedProgs := make(map[string]bool) + getValue := func(typ reflect.Type, name string) (interface{}, error) { switch typ { case reflect.TypeOf((*Program)(nil)): + assignedProgs[name] = true return loader.loadProgram(name) case reflect.TypeOf((*Map)(nil)): @@ -244,15 +301,26 @@ func (cs *CollectionSpec) LoadAndAssign(to interface{}, opts *CollectionOptions) switch m.typ { case ProgramArray: // Require all lazy-loaded ProgramArrays to be assigned to the given object. - // Without any references, they will be closed on the first GC and all tail - // calls into them will miss. - if !assignedMaps[n] { + // The kernel empties a ProgramArray once the last user space reference + // to it closes, which leads to failed tail calls. 
Combined with the library + // closing map fds via GC finalizers this can lead to surprising behaviour. + // Only allow unassigned ProgramArrays when the library hasn't pre-populated + // any entries from static value declarations. At this point, we know the map + // is empty and there's no way for the caller to interact with the map going + // forward. + if !assignedMaps[n] && len(cs.Maps[n].Contents) > 0 { return fmt.Errorf("ProgramArray %s must be assigned to prevent missed tail calls", n) } } } - loader.finalize() + // Prevent loader.cleanup() from closing assigned Maps and Programs. + for m := range assignedMaps { + delete(loader.maps, m) + } + for p := range assignedProgs { + delete(loader.programs, p) + } return nil } @@ -264,15 +332,26 @@ type Collection struct { Maps map[string]*Map } -// NewCollection creates a Collection from a specification. +// NewCollection creates a Collection from the given spec, creating and +// loading its declared resources into the kernel. +// +// Omitting Collection.Close() during application shutdown is an error. +// See the package documentation for details around Map and Program lifecycle. func NewCollection(spec *CollectionSpec) (*Collection, error) { return NewCollectionWithOptions(spec, CollectionOptions{}) } -// NewCollectionWithOptions creates a Collection from a specification. +// NewCollectionWithOptions creates a Collection from the given spec using +// options, creating and loading its declared resources into the kernel. +// +// Omitting Collection.Close() during application shutdown is an error. +// See the package documentation for details around Map and Program lifecycle. 
func NewCollectionWithOptions(spec *CollectionSpec, opts CollectionOptions) (*Collection, error) { - loader := newCollectionLoader(spec, &opts) - defer loader.cleanup() + loader, err := newCollectionLoader(spec, &opts) + if err != nil { + return nil, err + } + defer loader.close() // Create maps first, as their fds need to be linked into programs. for mapName := range spec.Maps { @@ -281,7 +360,11 @@ func NewCollectionWithOptions(spec *CollectionSpec, opts CollectionOptions) (*Co } } - for progName := range spec.Programs { + for progName, prog := range spec.Programs { + if prog.Type == UnspecifiedProgram { + continue + } + if _, err := loader.loadProgram(progName); err != nil { return nil, err } @@ -293,9 +376,9 @@ func NewCollectionWithOptions(spec *CollectionSpec, opts CollectionOptions) (*Co return nil, err } + // Prevent loader.cleanup from closing maps and programs. maps, progs := loader.maps, loader.programs - - loader.finalize() + loader.maps, loader.programs = nil, nil return &Collection{ progs, @@ -305,13 +388,11 @@ func NewCollectionWithOptions(spec *CollectionSpec, opts CollectionOptions) (*Co type handleCache struct { btfHandles map[*btf.Spec]*btf.Handle - btfSpecs map[io.ReaderAt]*btf.Spec } func newHandleCache() *handleCache { return &handleCache{ btfHandles: make(map[*btf.Spec]*btf.Handle), - btfSpecs: make(map[io.ReaderAt]*btf.Spec), } } @@ -329,20 +410,6 @@ func (hc handleCache) btfHandle(spec *btf.Spec) (*btf.Handle, error) { return handle, nil } -func (hc handleCache) btfSpec(rd io.ReaderAt) (*btf.Spec, error) { - if hc.btfSpecs[rd] != nil { - return hc.btfSpecs[rd], nil - } - - spec, err := btf.LoadSpecFromReader(rd) - if err != nil { - return nil, err - } - - hc.btfSpecs[rd] = spec - return spec, nil -} - func (hc handleCache) close() { for _, handle := range hc.btfHandles { handle.Close() @@ -357,30 +424,34 @@ type collectionLoader struct { handles *handleCache } -func newCollectionLoader(coll *CollectionSpec, opts *CollectionOptions) 
*collectionLoader { +func newCollectionLoader(coll *CollectionSpec, opts *CollectionOptions) (*collectionLoader, error) { if opts == nil { opts = &CollectionOptions{} } + // Check for existing MapSpecs in the CollectionSpec for all provided replacement maps. + for name, m := range opts.MapReplacements { + spec, ok := coll.Maps[name] + if !ok { + return nil, fmt.Errorf("replacement map %s not found in CollectionSpec", name) + } + + if err := spec.checkCompatibility(m); err != nil { + return nil, fmt.Errorf("using replacement map %s: %w", spec.Name, err) + } + } + return &collectionLoader{ coll, opts, make(map[string]*Map), make(map[string]*Program), newHandleCache(), - } -} - -// finalize should be called when all the collectionLoader's resources -// have been successfully loaded into the kernel and populated with values. -func (cl *collectionLoader) finalize() { - cl.maps, cl.programs = nil, nil + }, nil } -// cleanup cleans up all resources left over in the collectionLoader. -// Call finalize() when Map and Program creation/population is successful -// to prevent them from getting closed. -func (cl *collectionLoader) cleanup() { +// close all resources left over in the collectionLoader. +func (cl *collectionLoader) close() { cl.handles.close() for _, m := range cl.maps { m.Close() @@ -400,6 +471,21 @@ func (cl *collectionLoader) loadMap(mapName string) (*Map, error) { return nil, fmt.Errorf("missing map %s", mapName) } + if mapSpec.BTF != nil && cl.coll.Types != mapSpec.BTF { + return nil, fmt.Errorf("map %s: BTF doesn't match collection", mapName) + } + + if replaceMap, ok := cl.opts.MapReplacements[mapName]; ok { + // Clone the map to avoid closing user's map later on. 
+ m, err := replaceMap.Clone() + if err != nil { + return nil, err + } + + cl.maps[mapName] = m + return m, nil + } + m, err := newMapWithOptions(mapSpec, cl.opts.Maps, cl.handles) if err != nil { return nil, fmt.Errorf("map %s: %w", mapName, err) @@ -419,33 +505,41 @@ func (cl *collectionLoader) loadProgram(progName string) (*Program, error) { return nil, fmt.Errorf("unknown program %s", progName) } + // Bail out early if we know the kernel is going to reject the program. + // This skips loading map dependencies, saving some cleanup work later. + if progSpec.Type == UnspecifiedProgram { + return nil, fmt.Errorf("cannot load program %s: program type is unspecified", progName) + } + + if progSpec.BTF != nil && cl.coll.Types != progSpec.BTF { + return nil, fmt.Errorf("program %s: BTF doesn't match collection", progName) + } + progSpec = progSpec.Copy() - // Rewrite any reference to a valid map. + // Rewrite any reference to a valid map in the program's instructions, + // which includes all of its dependencies. for i := range progSpec.Instructions { ins := &progSpec.Instructions[i] - if !ins.IsLoadFromMap() || ins.Reference == "" { + if !ins.IsLoadFromMap() || ins.Reference() == "" { continue } - if uint32(ins.Constant) != math.MaxUint32 { - // Don't overwrite maps already rewritten, users can - // rewrite programs in the spec themselves + // Don't overwrite map loads containing non-zero map fd's, + // they can be manually included by the caller. + // Map FDs/IDs are placed in the lower 32 bits of Constant. 
+ if int32(ins.Constant) > 0 { continue } - m, err := cl.loadMap(ins.Reference) + m, err := cl.loadMap(ins.Reference()) if err != nil { return nil, fmt.Errorf("program %s: %w", progName, err) } - fd := m.FD() - if fd < 0 { - return nil, fmt.Errorf("map %s: %w", ins.Reference, internal.ErrClosedFd) - } - if err := ins.RewriteMapPtr(m.FD()); err != nil { - return nil, fmt.Errorf("program %s: map %s: %w", progName, ins.Reference, err) + if err := ins.AssociateMap(m); err != nil { + return nil, fmt.Errorf("program %s: map %s: %w", progName, ins.Reference(), err) } } @@ -467,24 +561,30 @@ func (cl *collectionLoader) populateMaps() error { mapSpec = mapSpec.Copy() - // Replace any object stubs with loaded objects. + // MapSpecs that refer to inner maps or programs within the same + // CollectionSpec do so using strings. These strings are used as the key + // to look up the respective object in the Maps or Programs fields. + // Resolve those references to actual Map or Program resources that + // have been loaded into the kernel. for i, kv := range mapSpec.Contents { - switch v := kv.Value.(type) { - case programStub: - // loadProgram is idempotent and could return an existing Program. - prog, err := cl.loadProgram(string(v)) - if err != nil { - return fmt.Errorf("loading program %s, for map %s: %w", v, mapName, err) + if objName, ok := kv.Value.(string); ok { + switch mapSpec.Type { + case ProgramArray: + // loadProgram is idempotent and could return an existing Program. + prog, err := cl.loadProgram(objName) + if err != nil { + return fmt.Errorf("loading program %s, for map %s: %w", objName, mapName, err) + } + mapSpec.Contents[i] = MapKV{kv.Key, prog} + + case ArrayOfMaps, HashOfMaps: + // loadMap is idempotent and could return an existing Map. 
+ innerMap, err := cl.loadMap(objName) + if err != nil { + return fmt.Errorf("loading inner map %s, for map %s: %w", objName, mapName, err) + } + mapSpec.Contents[i] = MapKV{kv.Key, innerMap} } - mapSpec.Contents[i] = MapKV{kv.Key, prog} - - case mapStub: - // loadMap is idempotent and could return an existing Map. - innerMap, err := cl.loadMap(string(v)) - if err != nil { - return fmt.Errorf("loading inner map %s, for map %s: %w", v, mapName, err) - } - mapSpec.Contents[i] = MapKV{kv.Key, innerMap} } } @@ -497,7 +597,11 @@ func (cl *collectionLoader) populateMaps() error { return nil } -// LoadCollection parses an object file and converts it to a collection. +// LoadCollection reads an object file and creates and loads its declared +// resources into the kernel. +// +// Omitting Collection.Close() during application shutdown is an error. +// See the package documentation for details around Map and Program lifecycle. func LoadCollection(file string) (*Collection, error) { spec, err := LoadCollectionSpec(file) if err != nil { diff --git a/cluster-autoscaler/vendor/github.com/cilium/ebpf/doc.go b/cluster-autoscaler/vendor/github.com/cilium/ebpf/doc.go index f7f34da8f44c..396b3394d33d 100644 --- a/cluster-autoscaler/vendor/github.com/cilium/ebpf/doc.go +++ b/cluster-autoscaler/vendor/github.com/cilium/ebpf/doc.go @@ -13,4 +13,13 @@ // your application as any other resource. // // Use the link subpackage to attach a loaded program to a hook in the kernel. +// +// Note that losing all references to Map and Program resources will cause +// their underlying file descriptors to be closed, potentially removing those +// objects from the kernel. Always retain a reference by e.g. deferring a +// Close() of a Collection or LoadAndAssign object until application exit. 
+// +// Special care needs to be taken when handling maps of type ProgramArray, +// as the kernel erases its contents when the last userspace or bpffs +// reference disappears, regardless of the map being in active use. package ebpf diff --git a/cluster-autoscaler/vendor/github.com/cilium/ebpf/elf_reader.go b/cluster-autoscaler/vendor/github.com/cilium/ebpf/elf_reader.go index 42010f43e583..df278895c631 100644 --- a/cluster-autoscaler/vendor/github.com/cilium/ebpf/elf_reader.go +++ b/cluster-autoscaler/vendor/github.com/cilium/ebpf/elf_reader.go @@ -13,8 +13,8 @@ import ( "strings" "github.com/cilium/ebpf/asm" + "github.com/cilium/ebpf/btf" "github.com/cilium/ebpf/internal" - "github.com/cilium/ebpf/internal/btf" "github.com/cilium/ebpf/internal/unix" ) @@ -26,6 +26,7 @@ type elfCode struct { license string version uint32 btf *btf.Spec + extInfo *btf.ExtInfos } // LoadCollectionSpec parses an ELF file into a CollectionSpec. @@ -49,7 +50,6 @@ func LoadCollectionSpecFromReader(rd io.ReaderAt) (*CollectionSpec, error) { if err != nil { return nil, err } - defer f.Close() var ( licenseSection *elf.Section @@ -95,77 +95,29 @@ func LoadCollectionSpecFromReader(rd io.ReaderAt) (*CollectionSpec, error) { return nil, fmt.Errorf("load version: %w", err) } - btfSpec, err := btf.LoadSpecFromReader(rd) + btfSpec, btfExtInfo, err := btf.LoadSpecAndExtInfosFromReader(rd) if err != nil && !errors.Is(err, btf.ErrNotFound) { return nil, fmt.Errorf("load BTF: %w", err) } - // Assign symbols to all the sections we're interested in. - symbols, err := f.Symbols() - if err != nil { - return nil, fmt.Errorf("load symbols: %v", err) - } - - for _, symbol := range symbols { - idx := symbol.Section - symType := elf.ST_TYPE(symbol.Info) - - section := sections[idx] - if section == nil { - continue - } - - // Older versions of LLVM don't tag symbols correctly, so keep - // all NOTYPE ones. 
- keep := symType == elf.STT_NOTYPE - switch section.kind { - case mapSection, btfMapSection, dataSection: - keep = keep || symType == elf.STT_OBJECT - case programSection: - keep = keep || symType == elf.STT_FUNC - } - if !keep || symbol.Name == "" { - continue - } - - section.symbols[symbol.Value] = symbol - } - ec := &elfCode{ SafeELFFile: f, sections: sections, license: license, version: version, btf: btfSpec, + extInfo: btfExtInfo, } - // Go through relocation sections, and parse the ones for sections we're - // interested in. Make sure that relocations point at valid sections. - for idx, relSection := range relSections { - section := sections[idx] - if section == nil { - continue - } - - rels, err := ec.loadRelocations(relSection, symbols) - if err != nil { - return nil, fmt.Errorf("relocation for section %q: %w", section.Name, err) - } - - for _, rel := range rels { - target := sections[rel.Section] - if target == nil { - return nil, fmt.Errorf("section %q: reference to %q in section %s: %w", section.Name, rel.Name, rel.Section, ErrNotSupported) - } - - if target.Flags&elf.SHF_STRINGS > 0 { - return nil, fmt.Errorf("section %q: string is not stack allocated: %w", section.Name, ErrNotSupported) - } + symbols, err := f.Symbols() + if err != nil { + return nil, fmt.Errorf("load symbols: %v", err) + } - target.references++ - } + ec.assignSymbols(symbols) - section.relocations = rels + if err := ec.loadRelocations(relSections, symbols); err != nil { + return nil, fmt.Errorf("load relocations: %w", err) } // Collect all the various ways to define maps. @@ -183,12 +135,12 @@ func LoadCollectionSpecFromReader(rd io.ReaderAt) (*CollectionSpec, error) { } // Finally, collect programs and link them. 
- progs, err := ec.loadPrograms() + progs, err := ec.loadProgramSections() if err != nil { return nil, fmt.Errorf("load programs: %w", err) } - return &CollectionSpec{maps, progs, ec.ByteOrder}, nil + return &CollectionSpec{maps, progs, btfSpec, ec.ByteOrder}, nil } func loadLicense(sec *elf.Section) (string, error) { @@ -247,12 +199,91 @@ func newElfSection(section *elf.Section, kind elfSectionKind) *elfSection { } } -func (ec *elfCode) loadPrograms() (map[string]*ProgramSpec, error) { - var ( - progs []*ProgramSpec - libs []*ProgramSpec - ) +// assignSymbols takes a list of symbols and assigns them to their +// respective sections, indexed by name. +func (ec *elfCode) assignSymbols(symbols []elf.Symbol) { + for _, symbol := range symbols { + symType := elf.ST_TYPE(symbol.Info) + symSection := ec.sections[symbol.Section] + if symSection == nil { + continue + } + + // Anonymous symbols only occur in debug sections which we don't process + // relocations for. Anonymous symbols are not referenced from other sections. + if symbol.Name == "" { + continue + } + + // Older versions of LLVM don't tag symbols correctly, so keep + // all NOTYPE ones. + switch symSection.kind { + case mapSection, btfMapSection, dataSection: + if symType != elf.STT_NOTYPE && symType != elf.STT_OBJECT { + continue + } + case programSection: + if symType != elf.STT_NOTYPE && symType != elf.STT_FUNC { + continue + } + // LLVM emits LBB_ (Local Basic Block) symbols that seem to be jump + // targets within sections, but BPF has no use for them. + if symType == elf.STT_NOTYPE && elf.ST_BIND(symbol.Info) == elf.STB_LOCAL && + strings.HasPrefix(symbol.Name, "LBB") { + continue + } + // Only collect symbols that occur in program/maps/data sections. + default: + continue + } + symSection.symbols[symbol.Value] = symbol + } +} + +// loadRelocations iterates .rel* sections and extracts relocation entries for +// sections of interest. Makes sure relocations point at valid sections. 
+func (ec *elfCode) loadRelocations(relSections map[elf.SectionIndex]*elf.Section, symbols []elf.Symbol) error { + for idx, relSection := range relSections { + section := ec.sections[idx] + if section == nil { + continue + } + + rels, err := ec.loadSectionRelocations(relSection, symbols) + if err != nil { + return fmt.Errorf("relocation for section %q: %w", section.Name, err) + } + + for _, rel := range rels { + target := ec.sections[rel.Section] + if target == nil { + return fmt.Errorf("section %q: reference to %q in section %s: %w", section.Name, rel.Name, rel.Section, ErrNotSupported) + } + + if target.Flags&elf.SHF_STRINGS > 0 { + return fmt.Errorf("section %q: string is not stack allocated: %w", section.Name, ErrNotSupported) + } + + target.references++ + } + + section.relocations = rels + } + + return nil +} + +// loadProgramSections iterates ec's sections and emits a ProgramSpec +// for each function it finds. +// +// The resulting map is indexed by function name. +func (ec *elfCode) loadProgramSections() (map[string]*ProgramSpec, error) { + + progs := make(map[string]*ProgramSpec) + + // Generate a ProgramSpec for each function found in each program section. 
+ var export []string for _, sec := range ec.sections { if sec.kind != programSection { continue @@ -262,86 +293,144 @@ func (ec *elfCode) loadPrograms() (map[string]*ProgramSpec, error) { return nil, fmt.Errorf("section %v: missing symbols", sec.Name) } - funcSym, ok := sec.symbols[0] - if !ok { - return nil, fmt.Errorf("section %v: no label at start", sec.Name) - } - - insns, length, err := ec.loadInstructions(sec) + funcs, err := ec.loadFunctions(sec) if err != nil { - return nil, fmt.Errorf("program %s: %w", funcSym.Name, err) + return nil, fmt.Errorf("section %v: %w", sec.Name, err) } progType, attachType, progFlags, attachTo := getProgType(sec.Name) - spec := &ProgramSpec{ - Name: funcSym.Name, - Type: progType, - Flags: progFlags, - AttachType: attachType, - AttachTo: attachTo, - License: ec.license, - KernelVersion: ec.version, - Instructions: insns, - ByteOrder: ec.ByteOrder, - } + for name, insns := range funcs { + spec := &ProgramSpec{ + Name: name, + Type: progType, + Flags: progFlags, + AttachType: attachType, + AttachTo: attachTo, + SectionName: sec.Name, + License: ec.license, + KernelVersion: ec.version, + Instructions: insns, + ByteOrder: ec.ByteOrder, + BTF: ec.btf, + } - if ec.btf != nil { - spec.BTF, err = ec.btf.Program(sec.Name, length) - if err != nil && !errors.Is(err, btf.ErrNoExtendedInfo) { - return nil, fmt.Errorf("program %s: %w", funcSym.Name, err) + // Function names must be unique within a single ELF blob. + if progs[name] != nil { + return nil, fmt.Errorf("duplicate program name %s", name) } - } + progs[name] = spec - if spec.Type == UnspecifiedProgram { - // There is no single name we can use for "library" sections, - // since they may contain multiple functions. We'll decode the - // labels they contain later on, and then link sections that way. 
- libs = append(libs, spec) - } else { - progs = append(progs, spec) + if spec.SectionName != ".text" { + export = append(export, name) + } } } - res := make(map[string]*ProgramSpec, len(progs)) - for _, prog := range progs { - err := link(prog, libs) - if err != nil { - return nil, fmt.Errorf("program %s: %w", prog.Name, err) + flattenPrograms(progs, export) + + // Hide programs (e.g. library functions) that were not explicitly emitted + // to an ELF section. These could be exposed in a separate CollectionSpec + // field later to allow them to be modified. + for n, p := range progs { + if p.SectionName == ".text" { + delete(progs, n) } - res[prog.Name] = prog } - return res, nil + return progs, nil } -func (ec *elfCode) loadInstructions(section *elfSection) (asm.Instructions, uint64, error) { - var ( - r = bufio.NewReader(section.Open()) - insns asm.Instructions - offset uint64 - ) - for { - var ins asm.Instruction - n, err := ins.Unmarshal(r, ec.ByteOrder) - if err == io.EOF { - return insns, offset, nil - } - if err != nil { - return nil, 0, fmt.Errorf("offset %d: %w", offset, err) - } +// loadFunctions extracts instruction streams from the given program section +// starting at each symbol in the section. The section's symbols must already +// be narrowed down to STT_NOTYPE (emitted by clang <8) or STT_FUNC. +// +// The resulting map is indexed by function name. +func (ec *elfCode) loadFunctions(section *elfSection) (map[string]asm.Instructions, error) { + r := bufio.NewReader(section.Open()) + + // Decode the section's instruction stream. 
+ var insns asm.Instructions + if err := insns.Unmarshal(r, ec.ByteOrder); err != nil { + return nil, fmt.Errorf("decoding instructions for section %s: %w", section.Name, err) + } + if len(insns) == 0 { + return nil, fmt.Errorf("no instructions found in section %s", section.Name) + } - ins.Symbol = section.symbols[offset].Name + iter := insns.Iterate() + for iter.Next() { + ins := iter.Ins + offset := iter.Offset.Bytes() + // Tag Symbol Instructions. + if sym, ok := section.symbols[offset]; ok { + *ins = ins.WithSymbol(sym.Name) + } + + // Apply any relocations for the current instruction. + // If no relocation is present, resolve any section-relative function calls. if rel, ok := section.relocations[offset]; ok { - if err = ec.relocateInstruction(&ins, rel); err != nil { - return nil, 0, fmt.Errorf("offset %d: relocate instruction: %w", offset, err) + if err := ec.relocateInstruction(ins, rel); err != nil { + return nil, fmt.Errorf("offset %d: relocating instruction: %w", offset, err) + } + } else { + if err := referenceRelativeJump(ins, offset, section.symbols); err != nil { + return nil, fmt.Errorf("offset %d: resolving relative jump: %w", offset, err) } } + } - insns = append(insns, ins) - offset += n + if ec.extInfo != nil { + ec.extInfo.Assign(insns, section.Name) } + + return splitSymbols(insns) +} + +// referenceRelativeJump turns a relative jump to another bpf subprogram within +// the same ELF section into a Reference Instruction. +// +// Up to LLVM 9, calls to subprograms within the same ELF section are sometimes +// encoded using relative jumps instead of relocation entries. These jumps go +// out of bounds of the current program, so their targets must be memoized +// before the section's instruction stream is split. +// +// The relative jump Constant is blinded to -1 and the target Symbol is set as +// the Instruction's Reference so it can be resolved by the linker. 
+func referenceRelativeJump(ins *asm.Instruction, offset uint64, symbols map[uint64]elf.Symbol) error { + if !ins.IsFunctionReference() || ins.Constant == -1 { + return nil + } + + tgt := jumpTarget(offset, *ins) + sym := symbols[tgt].Name + if sym == "" { + return fmt.Errorf("no jump target found at offset %d", tgt) + } + + *ins = ins.WithReference(sym) + ins.Constant = -1 + + return nil +} + +// jumpTarget takes ins' offset within an instruction stream (in bytes) +// and returns its absolute jump destination (in bytes) within the +// instruction stream. +func jumpTarget(offset uint64, ins asm.Instruction) uint64 { + // A relative jump instruction describes the amount of raw BPF instructions + // to jump, convert the offset into bytes. + dest := ins.Constant * asm.InstructionSize + + // The starting point of the jump is the end of the current instruction. + dest += int64(offset + asm.InstructionSize) + + if dest < 0 { + return 0 + } + + return uint64(dest) } func (ec *elfCode) relocateInstruction(ins *asm.Instruction, rel elf.Symbol) error { @@ -367,18 +456,12 @@ func (ec *elfCode) relocateInstruction(ins *asm.Instruction, rel elf.Symbol) err ins.Src = asm.PseudoMapFD - // Mark the instruction as needing an update when creating the - // collection. 
- if err := ins.RewriteMapPtr(-1); err != nil { - return err - } - case dataSection: var offset uint32 switch typ { case elf.STT_SECTION: if bind != elf.STB_LOCAL { - return fmt.Errorf("direct load: %s: unsupported relocation %s", name, bind) + return fmt.Errorf("direct load: %s: unsupported section relocation %s", name, bind) } // This is really a reference to a static symbol, which clang doesn't @@ -387,8 +470,17 @@ func (ec *elfCode) relocateInstruction(ins *asm.Instruction, rel elf.Symbol) err offset = uint32(uint64(ins.Constant)) case elf.STT_OBJECT: - if bind != elf.STB_GLOBAL { - return fmt.Errorf("direct load: %s: unsupported relocation %s", name, bind) + // LLVM 9 emits OBJECT-LOCAL symbols for anonymous constants. + if bind != elf.STB_GLOBAL && bind != elf.STB_LOCAL { + return fmt.Errorf("direct load: %s: unsupported object relocation %s", name, bind) + } + + offset = uint32(rel.Value) + + case elf.STT_NOTYPE: + // LLVM 7 emits NOTYPE-LOCAL symbols for anonymous constants. + if bind != elf.STB_LOCAL { + return fmt.Errorf("direct load: %s: unsupported untyped relocation %s", name, bind) } offset = uint32(rel.Value) @@ -406,51 +498,71 @@ func (ec *elfCode) relocateInstruction(ins *asm.Instruction, rel elf.Symbol) err ins.Constant = int64(uint64(offset) << 32) ins.Src = asm.PseudoMapValue - // Mark the instruction as needing an update when creating the - // collection. 
- if err := ins.RewriteMapPtr(-1); err != nil { - return err - } - case programSection: - if ins.OpCode.JumpOp() != asm.Call { - return fmt.Errorf("not a call instruction: %s", ins) - } + switch opCode := ins.OpCode; { + case opCode.JumpOp() == asm.Call: + if ins.Src != asm.PseudoCall { + return fmt.Errorf("call: %s: incorrect source register", name) + } - if ins.Src != asm.PseudoCall { - return fmt.Errorf("call: %s: incorrect source register", name) - } + switch typ { + case elf.STT_NOTYPE, elf.STT_FUNC: + if bind != elf.STB_GLOBAL { + return fmt.Errorf("call: %s: unsupported binding: %s", name, bind) + } - switch typ { - case elf.STT_NOTYPE, elf.STT_FUNC: - if bind != elf.STB_GLOBAL { - return fmt.Errorf("call: %s: unsupported binding: %s", name, bind) - } + case elf.STT_SECTION: + if bind != elf.STB_LOCAL { + return fmt.Errorf("call: %s: unsupported binding: %s", name, bind) + } - case elf.STT_SECTION: - if bind != elf.STB_LOCAL { - return fmt.Errorf("call: %s: unsupported binding: %s", name, bind) + // The function we want to call is in the indicated section, + // at the offset encoded in the instruction itself. Reverse + // the calculation to find the real function we're looking for. + // A value of -1 references the first instruction in the section. + offset := int64(int32(ins.Constant)+1) * asm.InstructionSize + sym, ok := target.symbols[uint64(offset)] + if !ok { + return fmt.Errorf("call: no symbol at offset %d", offset) + } + + name = sym.Name + ins.Constant = -1 + + default: + return fmt.Errorf("call: %s: invalid symbol type %s", name, typ) } + case opCode.IsDWordLoad(): + switch typ { + case elf.STT_FUNC: + if bind != elf.STB_GLOBAL { + return fmt.Errorf("load: %s: unsupported binding: %s", name, bind) + } + + case elf.STT_SECTION: + if bind != elf.STB_LOCAL { + return fmt.Errorf("load: %s: unsupported binding: %s", name, bind) + } - // The function we want to call is in the indicated section, - // at the offset encoded in the instruction itself. 
Reverse - // the calculation to find the real function we're looking for. - // A value of -1 references the first instruction in the section. - offset := int64(int32(ins.Constant)+1) * asm.InstructionSize - if offset < 0 { - return fmt.Errorf("call: %s: invalid offset %d", name, offset) + // ins.Constant already contains the offset in bytes from the + // start of the section. This is different than a call to a + // static function. + + default: + return fmt.Errorf("load: %s: invalid symbol type %s", name, typ) } - sym, ok := target.symbols[uint64(offset)] + sym, ok := target.symbols[uint64(ins.Constant)] if !ok { - return fmt.Errorf("call: %s: no symbol at offset %d", name, offset) + return fmt.Errorf("load: no symbol at offset %d", ins.Constant) } - ins.Constant = -1 name = sym.Name + ins.Constant = -1 + ins.Src = asm.PseudoFunc default: - return fmt.Errorf("call: %s: invalid symbol type %s", name, typ) + return fmt.Errorf("neither a call nor a load instruction: %v", ins) } case undefSection: @@ -468,7 +580,7 @@ func (ec *elfCode) relocateInstruction(ins *asm.Instruction, rel elf.Symbol) err return fmt.Errorf("relocation to %q: %w", target.Name, ErrNotSupported) } - ins.Reference = name + *ins = ins.WithReference(name) return nil } @@ -525,7 +637,7 @@ func (ec *elfCode) loadMaps(maps map[string]*MapSpec) error { return fmt.Errorf("map %s: reading map tail: %w", mapName, err) } if len(extra) > 0 { - spec.Extra = *bytes.NewReader(extra) + spec.Extra = bytes.NewReader(extra) } if err := spec.clampPerfEventArraySize(); err != nil { @@ -554,7 +666,7 @@ func (ec *elfCode) loadBTFMaps(maps map[string]*MapSpec) error { // Each section must appear as a DataSec in the ELF's BTF blob. 
var ds *btf.Datasec - if err := ec.btf.FindType(sec.Name, &ds); err != nil { + if err := ec.btf.TypeByName(sec.Name, &ds); err != nil { return fmt.Errorf("cannot find section '%s' in BTF: %w", sec.Name, err) } @@ -617,14 +729,6 @@ func (ec *elfCode) loadBTFMaps(maps map[string]*MapSpec) error { return nil } -// A programStub is a placeholder for a Program to be inserted at a certain map key. -// It needs to be resolved into a Program later on in the loader process. -type programStub string - -// A mapStub is a placeholder for a Map to be inserted at a certain map key. -// It needs to be resolved into a Map later on in the loader process. -type mapStub string - // mapSpecFromBTF produces a MapSpec based on a btf.Struct def representing // a BTF map definition. The name and spec arguments will be copied to the // resulting MapSpec, and inner must be true on any resursive invocations. @@ -811,7 +915,9 @@ func mapSpecFromBTF(es *elfSection, vs *btf.VarSecinfo, def *btf.Struct, spec *b ValueSize: valueSize, MaxEntries: maxEntries, Flags: flags, - BTF: &btf.Map{Spec: spec, Key: key, Value: value}, + Key: key, + Value: value, + BTF: spec, Pinning: pinType, InnerMap: innerMapSpec, Contents: contents, @@ -863,7 +969,7 @@ func resolveBTFValuesContents(es *elfSection, vs *btf.VarSecinfo, member btf.Mem // The offset of the 'values' member within the _struct_ (in bits) // is the starting point of the array. Convert to bytes. Add VarSecinfo // offset to get the absolute position in the ELF blob. - start := (member.OffsetBits / 8) + vs.Offset + start := member.Offset.Bytes() + vs.Offset // 'values' is encoded in BTF as a zero (variable) length struct // member, and its contents run until the end of the VarSecinfo. // Add VarSecinfo offset to get the absolute position in the ELF blob. @@ -898,9 +1004,9 @@ func resolveBTFValuesContents(es *elfSection, vs *btf.VarSecinfo, member btf.Mem // skipped here. 
switch t := elf.ST_TYPE(r.Info); t { case elf.STT_FUNC: - contents = append(contents, MapKV{uint32(k), programStub(r.Name)}) + contents = append(contents, MapKV{uint32(k), r.Name}) case elf.STT_OBJECT: - contents = append(contents, MapKV{uint32(k), mapStub(r.Name)}) + contents = append(contents, MapKV{uint32(k), r.Name}) default: return nil, fmt.Errorf("unknown relocation type %v", t) } @@ -921,15 +1027,6 @@ func (ec *elfCode) loadDataSections(maps map[string]*MapSpec) error { continue } - if ec.btf == nil { - return errors.New("data sections require BTF, make sure all consts are marked as static") - } - - var datasec *btf.Datasec - if err := ec.btf.FindType(sec.Name, &datasec); err != nil { - return fmt.Errorf("data section %s: can't get BTF: %w", sec.Name, err) - } - data, err := sec.Data() if err != nil { return fmt.Errorf("data section %s: can't get contents: %w", sec.Name, err) @@ -946,14 +1043,25 @@ func (ec *elfCode) loadDataSections(maps map[string]*MapSpec) error { ValueSize: uint32(len(data)), MaxEntries: 1, Contents: []MapKV{{uint32(0), data}}, - BTF: &btf.Map{Spec: ec.btf, Key: &btf.Void{}, Value: datasec}, } - switch sec.Name { - case ".rodata": + // It is possible for a data section to exist without a corresponding BTF Datasec + // if it only contains anonymous values like macro-defined arrays. + if ec.btf != nil { + var ds *btf.Datasec + if ec.btf.TypeByName(sec.Name, &ds) == nil { + // Assign the spec's key and BTF only if the Datasec lookup was successful. 
+ mapSpec.BTF = ec.btf + mapSpec.Key = &btf.Void{} + mapSpec.Value = ds + } + } + + switch n := sec.Name; { + case strings.HasPrefix(n, ".rodata"): mapSpec.Flags = unix.BPF_F_RDONLY_PROG mapSpec.Freeze = true - case ".bss": + case n == ".bss": // The kernel already zero-initializes the map mapSpec.Contents = nil } @@ -964,91 +1072,103 @@ func (ec *elfCode) loadDataSections(maps map[string]*MapSpec) error { } func getProgType(sectionName string) (ProgramType, AttachType, uint32, string) { - types := map[string]struct { + types := []struct { + prefix string progType ProgramType attachType AttachType progFlags uint32 }{ - // From https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/tools/lib/bpf/libbpf.c - "socket": {SocketFilter, AttachNone, 0}, - "sk_reuseport/migrate": {SkReuseport, AttachSkReuseportSelectOrMigrate, 0}, - "sk_reuseport": {SkReuseport, AttachSkReuseportSelect, 0}, - "seccomp": {SocketFilter, AttachNone, 0}, - "kprobe/": {Kprobe, AttachNone, 0}, - "uprobe/": {Kprobe, AttachNone, 0}, - "kretprobe/": {Kprobe, AttachNone, 0}, - "uretprobe/": {Kprobe, AttachNone, 0}, - "tracepoint/": {TracePoint, AttachNone, 0}, - "raw_tracepoint/": {RawTracepoint, AttachNone, 0}, - "raw_tp/": {RawTracepoint, AttachNone, 0}, - "tp_btf/": {Tracing, AttachTraceRawTp, 0}, - "xdp": {XDP, AttachNone, 0}, - "perf_event": {PerfEvent, AttachNone, 0}, - "lwt_in": {LWTIn, AttachNone, 0}, - "lwt_out": {LWTOut, AttachNone, 0}, - "lwt_xmit": {LWTXmit, AttachNone, 0}, - "lwt_seg6local": {LWTSeg6Local, AttachNone, 0}, - "sockops": {SockOps, AttachCGroupSockOps, 0}, - "sk_skb/stream_parser": {SkSKB, AttachSkSKBStreamParser, 0}, - "sk_skb/stream_verdict": {SkSKB, AttachSkSKBStreamParser, 0}, - "sk_msg": {SkMsg, AttachSkSKBStreamVerdict, 0}, - "lirc_mode2": {LircMode2, AttachLircMode2, 0}, - "flow_dissector": {FlowDissector, AttachFlowDissector, 0}, - "iter/": {Tracing, AttachTraceIter, 0}, - "fentry/": {Tracing, AttachTraceFEntry, 0}, - "fmod_ret/": {Tracing, 
AttachModifyReturn, 0}, - "fexit/": {Tracing, AttachTraceFExit, 0}, - "fentry.s/": {Tracing, AttachTraceFEntry, unix.BPF_F_SLEEPABLE}, - "fmod_ret.s/": {Tracing, AttachModifyReturn, unix.BPF_F_SLEEPABLE}, - "fexit.s/": {Tracing, AttachTraceFExit, unix.BPF_F_SLEEPABLE}, - "sk_lookup/": {SkLookup, AttachSkLookup, 0}, - "freplace/": {Extension, AttachNone, 0}, - "lsm/": {LSM, AttachLSMMac, 0}, - "lsm.s/": {LSM, AttachLSMMac, unix.BPF_F_SLEEPABLE}, - - "cgroup_skb/ingress": {CGroupSKB, AttachCGroupInetIngress, 0}, - "cgroup_skb/egress": {CGroupSKB, AttachCGroupInetEgress, 0}, - "cgroup/dev": {CGroupDevice, AttachCGroupDevice, 0}, - "cgroup/skb": {CGroupSKB, AttachNone, 0}, - "cgroup/sock": {CGroupSock, AttachCGroupInetSockCreate, 0}, - "cgroup/post_bind4": {CGroupSock, AttachCGroupInet4PostBind, 0}, - "cgroup/post_bind6": {CGroupSock, AttachCGroupInet6PostBind, 0}, - "cgroup/bind4": {CGroupSockAddr, AttachCGroupInet4Bind, 0}, - "cgroup/bind6": {CGroupSockAddr, AttachCGroupInet6Bind, 0}, - "cgroup/connect4": {CGroupSockAddr, AttachCGroupInet4Connect, 0}, - "cgroup/connect6": {CGroupSockAddr, AttachCGroupInet6Connect, 0}, - "cgroup/sendmsg4": {CGroupSockAddr, AttachCGroupUDP4Sendmsg, 0}, - "cgroup/sendmsg6": {CGroupSockAddr, AttachCGroupUDP6Sendmsg, 0}, - "cgroup/recvmsg4": {CGroupSockAddr, AttachCGroupUDP4Recvmsg, 0}, - "cgroup/recvmsg6": {CGroupSockAddr, AttachCGroupUDP6Recvmsg, 0}, - "cgroup/sysctl": {CGroupSysctl, AttachCGroupSysctl, 0}, - "cgroup/getsockopt": {CGroupSockopt, AttachCGroupGetsockopt, 0}, - "cgroup/setsockopt": {CGroupSockopt, AttachCGroupSetsockopt, 0}, - "classifier": {SchedCLS, AttachNone, 0}, - "action": {SchedACT, AttachNone, 0}, - - "cgroup/getsockname4": {CGroupSockAddr, AttachCgroupInet4GetSockname, 0}, - "cgroup/getsockname6": {CGroupSockAddr, AttachCgroupInet6GetSockname, 0}, - "cgroup/getpeername4": {CGroupSockAddr, AttachCgroupInet4GetPeername, 0}, - "cgroup/getpeername6": {CGroupSockAddr, AttachCgroupInet6GetPeername, 0}, + // Please 
update the types from libbpf.c and follow the order of it. + // https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/tools/lib/bpf/libbpf.c + {"socket", SocketFilter, AttachNone, 0}, + {"sk_reuseport/migrate", SkReuseport, AttachSkReuseportSelectOrMigrate, 0}, + {"sk_reuseport", SkReuseport, AttachSkReuseportSelect, 0}, + {"kprobe/", Kprobe, AttachNone, 0}, + {"uprobe/", Kprobe, AttachNone, 0}, + {"kretprobe/", Kprobe, AttachNone, 0}, + {"uretprobe/", Kprobe, AttachNone, 0}, + {"tc", SchedCLS, AttachNone, 0}, + {"classifier", SchedCLS, AttachNone, 0}, + {"action", SchedACT, AttachNone, 0}, + {"tracepoint/", TracePoint, AttachNone, 0}, + {"tp/", TracePoint, AttachNone, 0}, + {"raw_tracepoint/", RawTracepoint, AttachNone, 0}, + {"raw_tp/", RawTracepoint, AttachNone, 0}, + {"raw_tracepoint.w/", RawTracepointWritable, AttachNone, 0}, + {"raw_tp.w/", RawTracepointWritable, AttachNone, 0}, + {"tp_btf/", Tracing, AttachTraceRawTp, 0}, + {"fentry/", Tracing, AttachTraceFEntry, 0}, + {"fmod_ret/", Tracing, AttachModifyReturn, 0}, + {"fexit/", Tracing, AttachTraceFExit, 0}, + {"fentry.s/", Tracing, AttachTraceFEntry, unix.BPF_F_SLEEPABLE}, + {"fmod_ret.s/", Tracing, AttachModifyReturn, unix.BPF_F_SLEEPABLE}, + {"fexit.s/", Tracing, AttachTraceFExit, unix.BPF_F_SLEEPABLE}, + {"freplace/", Extension, AttachNone, 0}, + {"lsm/", LSM, AttachLSMMac, 0}, + {"lsm.s/", LSM, AttachLSMMac, unix.BPF_F_SLEEPABLE}, + {"iter/", Tracing, AttachTraceIter, 0}, + {"syscall", Syscall, AttachNone, 0}, + {"xdp_devmap/", XDP, AttachXDPDevMap, 0}, + {"xdp_cpumap/", XDP, AttachXDPCPUMap, 0}, + {"xdp", XDP, AttachNone, 0}, + {"perf_event", PerfEvent, AttachNone, 0}, + {"lwt_in", LWTIn, AttachNone, 0}, + {"lwt_out", LWTOut, AttachNone, 0}, + {"lwt_xmit", LWTXmit, AttachNone, 0}, + {"lwt_seg6local", LWTSeg6Local, AttachNone, 0}, + {"cgroup_skb/ingress", CGroupSKB, AttachCGroupInetIngress, 0}, + {"cgroup_skb/egress", CGroupSKB, AttachCGroupInetEgress, 0}, + {"cgroup/skb", CGroupSKB, 
AttachNone, 0}, + {"cgroup/sock_create", CGroupSock, AttachCGroupInetSockCreate, 0}, + {"cgroup/sock_release", CGroupSock, AttachCgroupInetSockRelease, 0}, + {"cgroup/sock", CGroupSock, AttachCGroupInetSockCreate, 0}, + {"cgroup/post_bind4", CGroupSock, AttachCGroupInet4PostBind, 0}, + {"cgroup/post_bind6", CGroupSock, AttachCGroupInet6PostBind, 0}, + {"cgroup/dev", CGroupDevice, AttachCGroupDevice, 0}, + {"sockops", SockOps, AttachCGroupSockOps, 0}, + {"sk_skb/stream_parser", SkSKB, AttachSkSKBStreamParser, 0}, + {"sk_skb/stream_verdict", SkSKB, AttachSkSKBStreamVerdict, 0}, + {"sk_skb", SkSKB, AttachNone, 0}, + {"sk_msg", SkMsg, AttachSkMsgVerdict, 0}, + {"lirc_mode2", LircMode2, AttachLircMode2, 0}, + {"flow_dissector", FlowDissector, AttachFlowDissector, 0}, + {"cgroup/bind4", CGroupSockAddr, AttachCGroupInet4Bind, 0}, + {"cgroup/bind6", CGroupSockAddr, AttachCGroupInet6Bind, 0}, + {"cgroup/connect4", CGroupSockAddr, AttachCGroupInet4Connect, 0}, + {"cgroup/connect6", CGroupSockAddr, AttachCGroupInet6Connect, 0}, + {"cgroup/sendmsg4", CGroupSockAddr, AttachCGroupUDP4Sendmsg, 0}, + {"cgroup/sendmsg6", CGroupSockAddr, AttachCGroupUDP6Sendmsg, 0}, + {"cgroup/recvmsg4", CGroupSockAddr, AttachCGroupUDP4Recvmsg, 0}, + {"cgroup/recvmsg6", CGroupSockAddr, AttachCGroupUDP6Recvmsg, 0}, + {"cgroup/getpeername4", CGroupSockAddr, AttachCgroupInet4GetPeername, 0}, + {"cgroup/getpeername6", CGroupSockAddr, AttachCgroupInet6GetPeername, 0}, + {"cgroup/getsockname4", CGroupSockAddr, AttachCgroupInet4GetSockname, 0}, + {"cgroup/getsockname6", CGroupSockAddr, AttachCgroupInet6GetSockname, 0}, + {"cgroup/sysctl", CGroupSysctl, AttachCGroupSysctl, 0}, + {"cgroup/getsockopt", CGroupSockopt, AttachCGroupGetsockopt, 0}, + {"cgroup/setsockopt", CGroupSockopt, AttachCGroupSetsockopt, 0}, + {"struct_ops+", StructOps, AttachNone, 0}, + {"sk_lookup/", SkLookup, AttachSkLookup, 0}, + + {"seccomp", SocketFilter, AttachNone, 0}, } - for prefix, t := range types { - if 
!strings.HasPrefix(sectionName, prefix) { + for _, t := range types { + if !strings.HasPrefix(sectionName, t.prefix) { continue } - if !strings.HasSuffix(prefix, "/") { + if !strings.HasSuffix(t.prefix, "/") { return t.progType, t.attachType, t.progFlags, "" } - return t.progType, t.attachType, t.progFlags, sectionName[len(prefix):] + return t.progType, t.attachType, t.progFlags, sectionName[len(t.prefix):] } return UnspecifiedProgram, AttachNone, 0, "" } -func (ec *elfCode) loadRelocations(sec *elf.Section, symbols []elf.Symbol) (map[uint64]elf.Symbol, error) { +func (ec *elfCode) loadSectionRelocations(sec *elf.Section, symbols []elf.Symbol) (map[uint64]elf.Symbol, error) { rels := make(map[uint64]elf.Symbol) if sec.Entsize < 16 { diff --git a/cluster-autoscaler/vendor/github.com/cilium/ebpf/elf_reader_fuzz.go b/cluster-autoscaler/vendor/github.com/cilium/ebpf/elf_reader_fuzz.go deleted file mode 100644 index 5f4e0a0ad023..000000000000 --- a/cluster-autoscaler/vendor/github.com/cilium/ebpf/elf_reader_fuzz.go +++ /dev/null @@ -1,22 +0,0 @@ -//go:build gofuzz -// +build gofuzz - -// Use with https://github.com/dvyukov/go-fuzz - -package ebpf - -import "bytes" - -func FuzzLoadCollectionSpec(data []byte) int { - spec, err := LoadCollectionSpecFromReader(bytes.NewReader(data)) - if err != nil { - if spec != nil { - panic("spec is not nil") - } - return 0 - } - if spec == nil { - panic("spec is nil") - } - return 1 -} diff --git a/cluster-autoscaler/vendor/github.com/cilium/ebpf/info.go b/cluster-autoscaler/vendor/github.com/cilium/ebpf/info.go index 65fa4d7d8502..ae77bc6197f2 100644 --- a/cluster-autoscaler/vendor/github.com/cilium/ebpf/info.go +++ b/cluster-autoscaler/vendor/github.com/cilium/ebpf/info.go @@ -2,6 +2,7 @@ package ebpf import ( "bufio" + "bytes" "encoding/hex" "errors" "fmt" @@ -10,9 +11,13 @@ import ( "strings" "syscall" "time" + "unsafe" + "github.com/cilium/ebpf/asm" + "github.com/cilium/ebpf/btf" "github.com/cilium/ebpf/internal" - 
"github.com/cilium/ebpf/internal/btf" + "github.com/cilium/ebpf/internal/sys" + "github.com/cilium/ebpf/internal/unix" ) // MapInfo describes a map. @@ -23,12 +28,13 @@ type MapInfo struct { ValueSize uint32 MaxEntries uint32 Flags uint32 - // Name as supplied by user space at load time. + // Name as supplied by user space at load time. Available from 4.15. Name string } -func newMapInfoFromFd(fd *internal.FD) (*MapInfo, error) { - info, err := bpfGetMapInfoByFD(fd) +func newMapInfoFromFd(fd *sys.FD) (*MapInfo, error) { + var info sys.MapInfo + err := sys.ObjInfo(fd, &info) if errors.Is(err, syscall.EINVAL) { return newMapInfoFromProc(fd) } @@ -37,18 +43,17 @@ func newMapInfoFromFd(fd *internal.FD) (*MapInfo, error) { } return &MapInfo{ - MapType(info.map_type), - MapID(info.id), - info.key_size, - info.value_size, - info.max_entries, - info.map_flags, - // name is available from 4.15. - internal.CString(info.name[:]), + MapType(info.Type), + MapID(info.Id), + info.KeySize, + info.ValueSize, + info.MaxEntries, + info.MapFlags, + unix.ByteSliceToString(info.Name[:]), }, nil } -func newMapInfoFromProc(fd *internal.FD) (*MapInfo, error) { +func newMapInfoFromProc(fd *sys.FD) (*MapInfo, error) { var mi MapInfo err := scanFdInfo(fd, map[string]interface{}{ "map_type": &mi.Type, @@ -84,20 +89,21 @@ type programStats struct { type ProgramInfo struct { Type ProgramType id ProgramID - // Truncated hash of the BPF bytecode. + // Truncated hash of the BPF bytecode. Available from 4.13. Tag string - // Name as supplied by user space at load time. + // Name as supplied by user space at load time. Available from 4.15. Name string - // BTF for the program. - btf btf.ID - // IDS map ids related to program. 
- ids []MapID + btf btf.ID stats *programStats + + maps []MapID + insns []byte } -func newProgramInfoFromFd(fd *internal.FD) (*ProgramInfo, error) { - info, err := bpfGetProgInfoByFD(fd, nil) +func newProgramInfoFromFd(fd *sys.FD) (*ProgramInfo, error) { + var info sys.ProgInfo + err := sys.ObjInfo(fd, &info) if errors.Is(err, syscall.EINVAL) { return newProgramInfoFromProc(fd) } @@ -105,32 +111,43 @@ func newProgramInfoFromFd(fd *internal.FD) (*ProgramInfo, error) { return nil, err } - var mapIDs []MapID - if info.nr_map_ids > 0 { - mapIDs = make([]MapID, info.nr_map_ids) - info, err = bpfGetProgInfoByFD(fd, mapIDs) - if err != nil { + pi := ProgramInfo{ + Type: ProgramType(info.Type), + id: ProgramID(info.Id), + Tag: hex.EncodeToString(info.Tag[:]), + Name: unix.ByteSliceToString(info.Name[:]), + btf: btf.ID(info.BtfId), + stats: &programStats{ + runtime: time.Duration(info.RunTimeNs), + runCount: info.RunCnt, + }, + } + + // Start with a clean struct for the second call, otherwise we may get EFAULT. + var info2 sys.ProgInfo + + if info.NrMapIds > 0 { + pi.maps = make([]MapID, info.NrMapIds) + info2.NrMapIds = info.NrMapIds + info2.MapIds = sys.NewPointer(unsafe.Pointer(&pi.maps[0])) + } + + if info.XlatedProgLen > 0 { + pi.insns = make([]byte, info.XlatedProgLen) + info2.XlatedProgLen = info.XlatedProgLen + info2.XlatedProgInsns = sys.NewSlicePointer(pi.insns) + } + + if info.NrMapIds > 0 || info.XlatedProgLen > 0 { + if err := sys.ObjInfo(fd, &info2); err != nil { return nil, err } } - return &ProgramInfo{ - Type: ProgramType(info.prog_type), - id: ProgramID(info.id), - // tag is available if the kernel supports BPF_PROG_GET_INFO_BY_FD. - Tag: hex.EncodeToString(info.tag[:]), - // name is available from 4.15. 
- Name: internal.CString(info.name[:]), - btf: btf.ID(info.btf_id), - ids: mapIDs, - stats: &programStats{ - runtime: time.Duration(info.run_time_ns), - runCount: info.run_cnt, - }, - }, nil + return &pi, nil } -func newProgramInfoFromProc(fd *internal.FD) (*ProgramInfo, error) { +func newProgramInfoFromProc(fd *sys.FD) (*ProgramInfo, error) { var info ProgramInfo err := scanFdInfo(fd, map[string]interface{}{ "prog_type": &info.Type, @@ -160,6 +177,7 @@ func (pi *ProgramInfo) ID() (ProgramID, bool) { // BTFID returns the BTF ID associated with the program. // +// The ID is only valid as long as the associated program is kept alive. // Available from 5.0. // // The bool return value indicates whether this optional field is available and @@ -191,20 +209,50 @@ func (pi *ProgramInfo) Runtime() (time.Duration, bool) { return time.Duration(0), false } +// Instructions returns the 'xlated' instruction stream of the program +// after it has been verified and rewritten by the kernel. These instructions +// cannot be loaded back into the kernel as-is, this is mainly used for +// inspecting loaded programs for troubleshooting, dumping, etc. +// +// For example, map accesses are made to reference their kernel map IDs, +// not the FDs they had when the program was inserted. Note that before +// the introduction of bpf_insn_prepare_dump in kernel 4.16, xlated +// instructions were not sanitized, making the output even less reusable +// and less likely to round-trip or evaluate to the same program Tag. +// +// The first instruction is marked as a symbol using the Program's name. +// +// Available from 4.13. Requires CAP_BPF or equivalent. +func (pi *ProgramInfo) Instructions() (asm.Instructions, error) { + // If the calling process is not BPF-capable or if the kernel doesn't + // support getting xlated instructions, the field will be zero. 
+ if len(pi.insns) == 0 { + return nil, fmt.Errorf("insufficient permissions or unsupported kernel: %w", ErrNotSupported) + } + + r := bytes.NewReader(pi.insns) + var insns asm.Instructions + if err := insns.Unmarshal(r, internal.NativeEndian); err != nil { + return nil, fmt.Errorf("unmarshaling instructions: %w", err) + } + + // Tag the first instruction with the name of the program, if available. + insns[0] = insns[0].WithSymbol(pi.Name) + + return insns, nil +} + // MapIDs returns the maps related to the program. // +// Available from 4.15. +// // The bool return value indicates whether this optional field is available. func (pi *ProgramInfo) MapIDs() ([]MapID, bool) { - return pi.ids, pi.ids != nil + return pi.maps, pi.maps != nil } -func scanFdInfo(fd *internal.FD, fields map[string]interface{}) error { - raw, err := fd.Value() - if err != nil { - return err - } - - fh, err := os.Open(fmt.Sprintf("/proc/self/fdinfo/%d", raw)) +func scanFdInfo(fd *sys.FD, fields map[string]interface{}) error { + fh, err := os.Open(fmt.Sprintf("/proc/self/fdinfo/%d", fd.Int())) if err != nil { return err } @@ -247,6 +295,10 @@ func scanFdInfoReader(r io.Reader, fields map[string]interface{}) error { return err } + if len(fields) > 0 && scanned == 0 { + return ErrNotSupported + } + if scanned != len(fields) { return errMissingFields } @@ -261,11 +313,9 @@ func scanFdInfoReader(r io.Reader, fields map[string]interface{}) error { // // Requires at least 5.8. 
func EnableStats(which uint32) (io.Closer, error) { - attr := internal.BPFEnableStatsAttr{ - StatsType: which, - } - - fd, err := internal.BPFEnableStats(&attr) + fd, err := sys.EnableStats(&sys.EnableStatsAttr{ + Type: which, + }) if err != nil { return nil, err } diff --git a/cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/btf/btf.go b/cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/btf/btf.go deleted file mode 100644 index 2b5f6d226a4f..000000000000 --- a/cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/btf/btf.go +++ /dev/null @@ -1,798 +0,0 @@ -package btf - -import ( - "bytes" - "debug/elf" - "encoding/binary" - "errors" - "fmt" - "io" - "math" - "os" - "reflect" - "sync" - "unsafe" - - "github.com/cilium/ebpf/internal" - "github.com/cilium/ebpf/internal/unix" -) - -const btfMagic = 0xeB9F - -// Errors returned by BTF functions. -var ( - ErrNotSupported = internal.ErrNotSupported - ErrNotFound = errors.New("not found") - ErrNoExtendedInfo = errors.New("no extended info") -) - -// ID represents the unique ID of a BTF object. -type ID uint32 - -// Spec represents decoded BTF. -type Spec struct { - rawTypes []rawType - strings stringTable - types []Type - namedTypes map[string][]NamedType - funcInfos map[string]extInfo - lineInfos map[string]extInfo - coreRelos map[string]coreRelos - byteOrder binary.ByteOrder -} - -type btfHeader struct { - Magic uint16 - Version uint8 - Flags uint8 - HdrLen uint32 - - TypeOff uint32 - TypeLen uint32 - StringOff uint32 - StringLen uint32 -} - -// LoadSpecFromReader reads BTF sections from an ELF. -// -// Returns ErrNotFound if the reader contains no BTF. 
-func LoadSpecFromReader(rd io.ReaderAt) (*Spec, error) { - file, err := internal.NewSafeELFFile(rd) - if err != nil { - return nil, err - } - defer file.Close() - - symbols, err := file.Symbols() - if err != nil { - return nil, fmt.Errorf("can't read symbols: %v", err) - } - - variableOffsets := make(map[variable]uint32) - for _, symbol := range symbols { - if idx := symbol.Section; idx >= elf.SHN_LORESERVE && idx <= elf.SHN_HIRESERVE { - // Ignore things like SHN_ABS - continue - } - - if int(symbol.Section) >= len(file.Sections) { - return nil, fmt.Errorf("symbol %s: invalid section %d", symbol.Name, symbol.Section) - } - - secName := file.Sections[symbol.Section].Name - if symbol.Value > math.MaxUint32 { - return nil, fmt.Errorf("section %s: symbol %s: size exceeds maximum", secName, symbol.Name) - } - - variableOffsets[variable{secName, symbol.Name}] = uint32(symbol.Value) - } - - return loadSpecFromELF(file, variableOffsets) -} - -func loadSpecFromELF(file *internal.SafeELFFile, variableOffsets map[variable]uint32) (*Spec, error) { - var ( - btfSection *elf.Section - btfExtSection *elf.Section - sectionSizes = make(map[string]uint32) - ) - - for _, sec := range file.Sections { - switch sec.Name { - case ".BTF": - btfSection = sec - case ".BTF.ext": - btfExtSection = sec - default: - if sec.Type != elf.SHT_PROGBITS && sec.Type != elf.SHT_NOBITS { - break - } - - if sec.Size > math.MaxUint32 { - return nil, fmt.Errorf("section %s exceeds maximum size", sec.Name) - } - - sectionSizes[sec.Name] = uint32(sec.Size) - } - } - - if btfSection == nil { - return nil, fmt.Errorf("btf: %w", ErrNotFound) - } - - spec, err := loadRawSpec(btfSection.Open(), file.ByteOrder, sectionSizes, variableOffsets) - if err != nil { - return nil, err - } - - if btfExtSection == nil { - return spec, nil - } - - spec.funcInfos, spec.lineInfos, spec.coreRelos, err = parseExtInfos(btfExtSection.Open(), file.ByteOrder, spec.strings) - if err != nil { - return nil, fmt.Errorf("can't read ext 
info: %w", err) - } - - return spec, nil -} - -// LoadRawSpec reads a blob of BTF data that isn't wrapped in an ELF file. -// -// Prefer using LoadSpecFromReader, since this function only supports a subset -// of BTF. -func LoadRawSpec(btf io.Reader, bo binary.ByteOrder) (*Spec, error) { - // This will return an error if we encounter a Datasec, since we can't fix - // it up. - return loadRawSpec(btf, bo, nil, nil) -} - -func loadRawSpec(btf io.Reader, bo binary.ByteOrder, sectionSizes map[string]uint32, variableOffsets map[variable]uint32) (*Spec, error) { - rawTypes, rawStrings, err := parseBTF(btf, bo) - if err != nil { - return nil, err - } - - err = fixupDatasec(rawTypes, rawStrings, sectionSizes, variableOffsets) - if err != nil { - return nil, err - } - - types, typesByName, err := inflateRawTypes(rawTypes, rawStrings) - if err != nil { - return nil, err - } - - return &Spec{ - rawTypes: rawTypes, - namedTypes: typesByName, - types: types, - strings: rawStrings, - byteOrder: bo, - }, nil -} - -var kernelBTF struct { - sync.Mutex - *Spec -} - -// LoadKernelSpec returns the current kernel's BTF information. -// -// Requires a >= 5.5 kernel with CONFIG_DEBUG_INFO_BTF enabled. Returns -// ErrNotSupported if BTF is not enabled. 
-func LoadKernelSpec() (*Spec, error) { - kernelBTF.Lock() - defer kernelBTF.Unlock() - - if kernelBTF.Spec != nil { - return kernelBTF.Spec, nil - } - - var err error - kernelBTF.Spec, err = loadKernelSpec() - return kernelBTF.Spec, err -} - -func loadKernelSpec() (*Spec, error) { - release, err := unix.KernelRelease() - if err != nil { - return nil, fmt.Errorf("can't read kernel release number: %w", err) - } - - fh, err := os.Open("/sys/kernel/btf/vmlinux") - if err == nil { - defer fh.Close() - - return LoadRawSpec(fh, internal.NativeEndian) - } - - // use same list of locations as libbpf - // https://github.com/libbpf/libbpf/blob/9a3a42608dbe3731256a5682a125ac1e23bced8f/src/btf.c#L3114-L3122 - locations := []string{ - "/boot/vmlinux-%s", - "/lib/modules/%s/vmlinux-%[1]s", - "/lib/modules/%s/build/vmlinux", - "/usr/lib/modules/%s/kernel/vmlinux", - "/usr/lib/debug/boot/vmlinux-%s", - "/usr/lib/debug/boot/vmlinux-%s.debug", - "/usr/lib/debug/lib/modules/%s/vmlinux", - } - - for _, loc := range locations { - path := fmt.Sprintf(loc, release) - - fh, err := os.Open(path) - if err != nil { - continue - } - defer fh.Close() - - file, err := internal.NewSafeELFFile(fh) - if err != nil { - return nil, err - } - defer file.Close() - - return loadSpecFromELF(file, nil) - } - - return nil, fmt.Errorf("no BTF for kernel version %s: %w", release, internal.ErrNotSupported) -} - -func parseBTF(btf io.Reader, bo binary.ByteOrder) ([]rawType, stringTable, error) { - rawBTF, err := io.ReadAll(btf) - if err != nil { - return nil, nil, fmt.Errorf("can't read BTF: %v", err) - } - - rd := bytes.NewReader(rawBTF) - - var header btfHeader - if err := binary.Read(rd, bo, &header); err != nil { - return nil, nil, fmt.Errorf("can't read header: %v", err) - } - - if header.Magic != btfMagic { - return nil, nil, fmt.Errorf("incorrect magic value %v", header.Magic) - } - - if header.Version != 1 { - return nil, nil, fmt.Errorf("unexpected version %v", header.Version) - } - - if header.Flags 
!= 0 { - return nil, nil, fmt.Errorf("unsupported flags %v", header.Flags) - } - - remainder := int64(header.HdrLen) - int64(binary.Size(&header)) - if remainder < 0 { - return nil, nil, errors.New("header is too short") - } - - if _, err := io.CopyN(internal.DiscardZeroes{}, rd, remainder); err != nil { - return nil, nil, fmt.Errorf("header padding: %v", err) - } - - if _, err := rd.Seek(int64(header.HdrLen+header.StringOff), io.SeekStart); err != nil { - return nil, nil, fmt.Errorf("can't seek to start of string section: %v", err) - } - - rawStrings, err := readStringTable(io.LimitReader(rd, int64(header.StringLen))) - if err != nil { - return nil, nil, fmt.Errorf("can't read type names: %w", err) - } - - if _, err := rd.Seek(int64(header.HdrLen+header.TypeOff), io.SeekStart); err != nil { - return nil, nil, fmt.Errorf("can't seek to start of type section: %v", err) - } - - rawTypes, err := readTypes(io.LimitReader(rd, int64(header.TypeLen)), bo) - if err != nil { - return nil, nil, fmt.Errorf("can't read types: %w", err) - } - - return rawTypes, rawStrings, nil -} - -type variable struct { - section string - name string -} - -func fixupDatasec(rawTypes []rawType, rawStrings stringTable, sectionSizes map[string]uint32, variableOffsets map[variable]uint32) error { - for i, rawType := range rawTypes { - if rawType.Kind() != kindDatasec { - continue - } - - name, err := rawStrings.Lookup(rawType.NameOff) - if err != nil { - return err - } - - if name == ".kconfig" || name == ".ksyms" { - return fmt.Errorf("reference to %s: %w", name, ErrNotSupported) - } - - if rawTypes[i].SizeType != 0 { - continue - } - - size, ok := sectionSizes[name] - if !ok { - return fmt.Errorf("data section %s: missing size", name) - } - - rawTypes[i].SizeType = size - - secinfos := rawType.data.([]btfVarSecinfo) - for j, secInfo := range secinfos { - id := int(secInfo.Type - 1) - if id >= len(rawTypes) { - return fmt.Errorf("data section %s: invalid type id %d for variable %d", name, id, j) 
- } - - varName, err := rawStrings.Lookup(rawTypes[id].NameOff) - if err != nil { - return fmt.Errorf("data section %s: can't get name for type %d: %w", name, id, err) - } - - offset, ok := variableOffsets[variable{name, varName}] - if !ok { - return fmt.Errorf("data section %s: missing offset for variable %s", name, varName) - } - - secinfos[j].Offset = offset - } - } - - return nil -} - -// Copy creates a copy of Spec. -func (s *Spec) Copy() *Spec { - types, _ := copyTypes(s.types, nil) - namedTypes := make(map[string][]NamedType) - for _, typ := range types { - if named, ok := typ.(NamedType); ok { - name := essentialName(named.TypeName()) - namedTypes[name] = append(namedTypes[name], named) - } - } - - // NB: Other parts of spec are not copied since they are immutable. - return &Spec{ - s.rawTypes, - s.strings, - types, - namedTypes, - s.funcInfos, - s.lineInfos, - s.coreRelos, - s.byteOrder, - } -} - -type marshalOpts struct { - ByteOrder binary.ByteOrder - StripFuncLinkage bool -} - -func (s *Spec) marshal(opts marshalOpts) ([]byte, error) { - var ( - buf bytes.Buffer - header = new(btfHeader) - headerLen = binary.Size(header) - ) - - // Reserve space for the header. We have to write it last since - // we don't know the size of the type section yet. - _, _ = buf.Write(make([]byte, headerLen)) - - // Write type section, just after the header. - for _, raw := range s.rawTypes { - switch { - case opts.StripFuncLinkage && raw.Kind() == kindFunc: - raw.SetLinkage(StaticFunc) - } - - if err := raw.Marshal(&buf, opts.ByteOrder); err != nil { - return nil, fmt.Errorf("can't marshal BTF: %w", err) - } - } - - typeLen := uint32(buf.Len() - headerLen) - - // Write string section after type section. - _, _ = buf.Write(s.strings) - - // Fill out the header, and write it out. 
- header = &btfHeader{ - Magic: btfMagic, - Version: 1, - Flags: 0, - HdrLen: uint32(headerLen), - TypeOff: 0, - TypeLen: typeLen, - StringOff: typeLen, - StringLen: uint32(len(s.strings)), - } - - raw := buf.Bytes() - err := binary.Write(sliceWriter(raw[:headerLen]), opts.ByteOrder, header) - if err != nil { - return nil, fmt.Errorf("can't write header: %v", err) - } - - return raw, nil -} - -type sliceWriter []byte - -func (sw sliceWriter) Write(p []byte) (int, error) { - if len(p) != len(sw) { - return 0, errors.New("size doesn't match") - } - - return copy(sw, p), nil -} - -// Program finds the BTF for a specific section. -// -// Length is the number of bytes in the raw BPF instruction stream. -// -// Returns an error which may wrap ErrNoExtendedInfo if the Spec doesn't -// contain extended BTF info. -func (s *Spec) Program(name string, length uint64) (*Program, error) { - if length == 0 { - return nil, errors.New("length musn't be zero") - } - - if s.funcInfos == nil && s.lineInfos == nil && s.coreRelos == nil { - return nil, fmt.Errorf("BTF for section %s: %w", name, ErrNoExtendedInfo) - } - - funcInfos, funcOK := s.funcInfos[name] - lineInfos, lineOK := s.lineInfos[name] - relos, coreOK := s.coreRelos[name] - - if !funcOK && !lineOK && !coreOK { - return nil, fmt.Errorf("no extended BTF info for section %s", name) - } - - return &Program{s, length, funcInfos, lineInfos, relos}, nil -} - -// FindType searches for a type with a specific name. -// -// Called T a type that satisfies Type, typ must be a non-nil **T. -// On success, the address of the found type will be copied in typ. -// -// Returns an error wrapping ErrNotFound if no matching -// type exists in spec. 
-func (s *Spec) FindType(name string, typ interface{}) error { - typValue := reflect.ValueOf(typ) - if typValue.Kind() != reflect.Ptr { - return fmt.Errorf("%T is not a pointer", typ) - } - - typPtr := typValue.Elem() - if !typPtr.CanSet() { - return fmt.Errorf("%T cannot be set", typ) - } - - wanted := typPtr.Type() - if !wanted.AssignableTo(reflect.TypeOf((*Type)(nil)).Elem()) { - return fmt.Errorf("%T does not satisfy Type interface", typ) - } - - var candidate Type - for _, typ := range s.namedTypes[essentialName(name)] { - if reflect.TypeOf(typ) != wanted { - continue - } - - // Match against the full name, not just the essential one. - if typ.TypeName() != name { - continue - } - - if candidate != nil { - return fmt.Errorf("type %s: multiple candidates for %T", name, typ) - } - - candidate = typ - } - - if candidate == nil { - return fmt.Errorf("type %s: %w", name, ErrNotFound) - } - - typPtr.Set(reflect.ValueOf(candidate)) - - return nil -} - -// Handle is a reference to BTF loaded into the kernel. -type Handle struct { - spec *Spec - fd *internal.FD -} - -// NewHandle loads BTF into the kernel. -// -// Returns ErrNotSupported if BTF is not supported. 
-func NewHandle(spec *Spec) (*Handle, error) { - if err := haveBTF(); err != nil { - return nil, err - } - - if spec.byteOrder != internal.NativeEndian { - return nil, fmt.Errorf("can't load %s BTF on %s", spec.byteOrder, internal.NativeEndian) - } - - btf, err := spec.marshal(marshalOpts{ - ByteOrder: internal.NativeEndian, - StripFuncLinkage: haveFuncLinkage() != nil, - }) - if err != nil { - return nil, fmt.Errorf("can't marshal BTF: %w", err) - } - - if uint64(len(btf)) > math.MaxUint32 { - return nil, errors.New("BTF exceeds the maximum size") - } - - attr := &bpfLoadBTFAttr{ - btf: internal.NewSlicePointer(btf), - btfSize: uint32(len(btf)), - } - - fd, err := bpfLoadBTF(attr) - if err != nil { - logBuf := make([]byte, 64*1024) - attr.logBuf = internal.NewSlicePointer(logBuf) - attr.btfLogSize = uint32(len(logBuf)) - attr.btfLogLevel = 1 - _, logErr := bpfLoadBTF(attr) - return nil, internal.ErrorWithLog(err, logBuf, logErr) - } - - return &Handle{spec.Copy(), fd}, nil -} - -// NewHandleFromID returns the BTF handle for a given id. -// -// Returns ErrNotExist, if there is no BTF with the given id. -// -// Requires CAP_SYS_ADMIN. -func NewHandleFromID(id ID) (*Handle, error) { - fd, err := internal.BPFObjGetFDByID(internal.BPF_BTF_GET_FD_BY_ID, uint32(id)) - if err != nil { - return nil, fmt.Errorf("get BTF by id: %w", err) - } - - info, err := newInfoFromFd(fd) - if err != nil { - _ = fd.Close() - return nil, fmt.Errorf("get BTF spec for handle: %w", err) - } - - return &Handle{info.BTF, fd}, nil -} - -// Spec returns the Spec that defined the BTF loaded into the kernel. -func (h *Handle) Spec() *Spec { - return h.spec -} - -// Close destroys the handle. -// -// Subsequent calls to FD will return an invalid value. -func (h *Handle) Close() error { - return h.fd.Close() -} - -// FD returns the file descriptor for the handle. 
-func (h *Handle) FD() int { - value, err := h.fd.Value() - if err != nil { - return -1 - } - - return int(value) -} - -// Map is the BTF for a map. -type Map struct { - Spec *Spec - Key, Value Type -} - -// Program is the BTF information for a stream of instructions. -type Program struct { - spec *Spec - length uint64 - funcInfos, lineInfos extInfo - coreRelos coreRelos -} - -// Spec returns the BTF spec of this program. -func (p *Program) Spec() *Spec { - return p.spec -} - -// Append the information from other to the Program. -func (p *Program) Append(other *Program) error { - if other.spec != p.spec { - return fmt.Errorf("can't append program with different BTF specs") - } - - funcInfos, err := p.funcInfos.append(other.funcInfos, p.length) - if err != nil { - return fmt.Errorf("func infos: %w", err) - } - - lineInfos, err := p.lineInfos.append(other.lineInfos, p.length) - if err != nil { - return fmt.Errorf("line infos: %w", err) - } - - p.funcInfos = funcInfos - p.lineInfos = lineInfos - p.coreRelos = p.coreRelos.append(other.coreRelos, p.length) - p.length += other.length - return nil -} - -// FuncInfos returns the binary form of BTF function infos. -func (p *Program) FuncInfos() (recordSize uint32, bytes []byte, err error) { - bytes, err = p.funcInfos.MarshalBinary() - if err != nil { - return 0, nil, fmt.Errorf("func infos: %w", err) - } - - return p.funcInfos.recordSize, bytes, nil -} - -// LineInfos returns the binary form of BTF line infos. -func (p *Program) LineInfos() (recordSize uint32, bytes []byte, err error) { - bytes, err = p.lineInfos.MarshalBinary() - if err != nil { - return 0, nil, fmt.Errorf("line infos: %w", err) - } - - return p.lineInfos.recordSize, bytes, nil -} - -// Fixups returns the changes required to adjust the program to the target. -// -// Passing a nil target will relocate against the running kernel. 
-func (p *Program) Fixups(target *Spec) (COREFixups, error) { - if len(p.coreRelos) == 0 { - return nil, nil - } - - if target == nil { - var err error - target, err = LoadKernelSpec() - if err != nil { - return nil, err - } - } - - return coreRelocate(p.spec, target, p.coreRelos) -} - -type bpfLoadBTFAttr struct { - btf internal.Pointer - logBuf internal.Pointer - btfSize uint32 - btfLogSize uint32 - btfLogLevel uint32 -} - -func bpfLoadBTF(attr *bpfLoadBTFAttr) (*internal.FD, error) { - fd, err := internal.BPF(internal.BPF_BTF_LOAD, unsafe.Pointer(attr), unsafe.Sizeof(*attr)) - if err != nil { - return nil, err - } - - return internal.NewFD(uint32(fd)), nil -} - -func marshalBTF(types interface{}, strings []byte, bo binary.ByteOrder) []byte { - const minHeaderLength = 24 - - typesLen := uint32(binary.Size(types)) - header := btfHeader{ - Magic: btfMagic, - Version: 1, - HdrLen: minHeaderLength, - TypeOff: 0, - TypeLen: typesLen, - StringOff: typesLen, - StringLen: uint32(len(strings)), - } - - buf := new(bytes.Buffer) - _ = binary.Write(buf, bo, &header) - _ = binary.Write(buf, bo, types) - buf.Write(strings) - - return buf.Bytes() -} - -var haveBTF = internal.FeatureTest("BTF", "5.1", func() error { - var ( - types struct { - Integer btfType - Var btfType - btfVar struct{ Linkage uint32 } - } - strings = []byte{0, 'a', 0} - ) - - // We use a BTF_KIND_VAR here, to make sure that - // the kernel understands BTF at least as well as we - // do. BTF_KIND_VAR was introduced ~5.1. - types.Integer.SetKind(kindPointer) - types.Var.NameOff = 1 - types.Var.SetKind(kindVar) - types.Var.SizeType = 1 - - btf := marshalBTF(&types, strings, internal.NativeEndian) - - fd, err := bpfLoadBTF(&bpfLoadBTFAttr{ - btf: internal.NewSlicePointer(btf), - btfSize: uint32(len(btf)), - }) - if errors.Is(err, unix.EINVAL) || errors.Is(err, unix.EPERM) { - // Treat both EINVAL and EPERM as not supported: loading the program - // might still succeed without BTF. 
- return internal.ErrNotSupported - } - if err != nil { - return err - } - - fd.Close() - return nil -}) - -var haveFuncLinkage = internal.FeatureTest("BTF func linkage", "5.6", func() error { - if err := haveBTF(); err != nil { - return err - } - - var ( - types struct { - FuncProto btfType - Func btfType - } - strings = []byte{0, 'a', 0} - ) - - types.FuncProto.SetKind(kindFuncProto) - types.Func.SetKind(kindFunc) - types.Func.SizeType = 1 // aka FuncProto - types.Func.NameOff = 1 - types.Func.SetLinkage(GlobalFunc) - - btf := marshalBTF(&types, strings, internal.NativeEndian) - - fd, err := bpfLoadBTF(&bpfLoadBTFAttr{ - btf: internal.NewSlicePointer(btf), - btfSize: uint32(len(btf)), - }) - if errors.Is(err, unix.EINVAL) { - return internal.ErrNotSupported - } - if err != nil { - return err - } - - fd.Close() - return nil -}) diff --git a/cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/btf/ext_info.go b/cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/btf/ext_info.go deleted file mode 100644 index cdae2ec40827..000000000000 --- a/cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/btf/ext_info.go +++ /dev/null @@ -1,312 +0,0 @@ -package btf - -import ( - "bufio" - "bytes" - "encoding/binary" - "errors" - "fmt" - "io" - - "github.com/cilium/ebpf/asm" - "github.com/cilium/ebpf/internal" -) - -type btfExtHeader struct { - Magic uint16 - Version uint8 - Flags uint8 - HdrLen uint32 - - FuncInfoOff uint32 - FuncInfoLen uint32 - LineInfoOff uint32 - LineInfoLen uint32 -} - -type btfExtCoreHeader struct { - CoreReloOff uint32 - CoreReloLen uint32 -} - -func parseExtInfos(r io.ReadSeeker, bo binary.ByteOrder, strings stringTable) (funcInfo, lineInfo map[string]extInfo, relos map[string]coreRelos, err error) { - var header btfExtHeader - var coreHeader btfExtCoreHeader - if err := binary.Read(r, bo, &header); err != nil { - return nil, nil, nil, fmt.Errorf("can't read header: %v", err) - } - - if header.Magic != btfMagic { - return nil, nil, nil, 
fmt.Errorf("incorrect magic value %v", header.Magic) - } - - if header.Version != 1 { - return nil, nil, nil, fmt.Errorf("unexpected version %v", header.Version) - } - - if header.Flags != 0 { - return nil, nil, nil, fmt.Errorf("unsupported flags %v", header.Flags) - } - - remainder := int64(header.HdrLen) - int64(binary.Size(&header)) - if remainder < 0 { - return nil, nil, nil, errors.New("header is too short") - } - - coreHdrSize := int64(binary.Size(&coreHeader)) - if remainder >= coreHdrSize { - if err := binary.Read(r, bo, &coreHeader); err != nil { - return nil, nil, nil, fmt.Errorf("can't read CO-RE relocation header: %v", err) - } - remainder -= coreHdrSize - } - - // Of course, the .BTF.ext header has different semantics than the - // .BTF ext header. We need to ignore non-null values. - _, err = io.CopyN(io.Discard, r, remainder) - if err != nil { - return nil, nil, nil, fmt.Errorf("header padding: %v", err) - } - - if _, err := r.Seek(int64(header.HdrLen+header.FuncInfoOff), io.SeekStart); err != nil { - return nil, nil, nil, fmt.Errorf("can't seek to function info section: %v", err) - } - - buf := bufio.NewReader(io.LimitReader(r, int64(header.FuncInfoLen))) - funcInfo, err = parseExtInfo(buf, bo, strings) - if err != nil { - return nil, nil, nil, fmt.Errorf("function info: %w", err) - } - - if _, err := r.Seek(int64(header.HdrLen+header.LineInfoOff), io.SeekStart); err != nil { - return nil, nil, nil, fmt.Errorf("can't seek to line info section: %v", err) - } - - buf = bufio.NewReader(io.LimitReader(r, int64(header.LineInfoLen))) - lineInfo, err = parseExtInfo(buf, bo, strings) - if err != nil { - return nil, nil, nil, fmt.Errorf("line info: %w", err) - } - - if coreHeader.CoreReloOff > 0 && coreHeader.CoreReloLen > 0 { - if _, err := r.Seek(int64(header.HdrLen+coreHeader.CoreReloOff), io.SeekStart); err != nil { - return nil, nil, nil, fmt.Errorf("can't seek to CO-RE relocation section: %v", err) - } - - relos, err = 
parseExtInfoRelos(io.LimitReader(r, int64(coreHeader.CoreReloLen)), bo, strings) - if err != nil { - return nil, nil, nil, fmt.Errorf("CO-RE relocation info: %w", err) - } - } - - return funcInfo, lineInfo, relos, nil -} - -type btfExtInfoSec struct { - SecNameOff uint32 - NumInfo uint32 -} - -type extInfoRecord struct { - InsnOff uint64 - Opaque []byte -} - -type extInfo struct { - byteOrder binary.ByteOrder - recordSize uint32 - records []extInfoRecord -} - -func (ei extInfo) append(other extInfo, offset uint64) (extInfo, error) { - if other.byteOrder != ei.byteOrder { - return extInfo{}, fmt.Errorf("ext_info byte order mismatch, want %v (got %v)", ei.byteOrder, other.byteOrder) - } - - if other.recordSize != ei.recordSize { - return extInfo{}, fmt.Errorf("ext_info record size mismatch, want %d (got %d)", ei.recordSize, other.recordSize) - } - - records := make([]extInfoRecord, 0, len(ei.records)+len(other.records)) - records = append(records, ei.records...) - for _, info := range other.records { - records = append(records, extInfoRecord{ - InsnOff: info.InsnOff + offset, - Opaque: info.Opaque, - }) - } - return extInfo{ei.byteOrder, ei.recordSize, records}, nil -} - -func (ei extInfo) MarshalBinary() ([]byte, error) { - if ei.byteOrder != internal.NativeEndian { - return nil, fmt.Errorf("%s is not the native byte order", ei.byteOrder) - } - - if len(ei.records) == 0 { - return nil, nil - } - - buf := bytes.NewBuffer(make([]byte, 0, int(ei.recordSize)*len(ei.records))) - for _, info := range ei.records { - // The kernel expects offsets in number of raw bpf instructions, - // while the ELF tracks it in bytes. 
- insnOff := uint32(info.InsnOff / asm.InstructionSize) - if err := binary.Write(buf, internal.NativeEndian, insnOff); err != nil { - return nil, fmt.Errorf("can't write instruction offset: %v", err) - } - - buf.Write(info.Opaque) - } - - return buf.Bytes(), nil -} - -func parseExtInfo(r io.Reader, bo binary.ByteOrder, strings stringTable) (map[string]extInfo, error) { - const maxRecordSize = 256 - - var recordSize uint32 - if err := binary.Read(r, bo, &recordSize); err != nil { - return nil, fmt.Errorf("can't read record size: %v", err) - } - - if recordSize < 4 { - // Need at least insnOff - return nil, errors.New("record size too short") - } - if recordSize > maxRecordSize { - return nil, fmt.Errorf("record size %v exceeds %v", recordSize, maxRecordSize) - } - - result := make(map[string]extInfo) - for { - secName, infoHeader, err := parseExtInfoHeader(r, bo, strings) - if errors.Is(err, io.EOF) { - return result, nil - } - - var records []extInfoRecord - for i := uint32(0); i < infoHeader.NumInfo; i++ { - var byteOff uint32 - if err := binary.Read(r, bo, &byteOff); err != nil { - return nil, fmt.Errorf("section %v: can't read extended info offset: %v", secName, err) - } - - buf := make([]byte, int(recordSize-4)) - if _, err := io.ReadFull(r, buf); err != nil { - return nil, fmt.Errorf("section %v: can't read record: %v", secName, err) - } - - if byteOff%asm.InstructionSize != 0 { - return nil, fmt.Errorf("section %v: offset %v is not aligned with instruction size", secName, byteOff) - } - - records = append(records, extInfoRecord{uint64(byteOff), buf}) - } - - result[secName] = extInfo{ - bo, - recordSize, - records, - } - } -} - -// bpfCoreRelo matches `struct bpf_core_relo` from the kernel -type bpfCoreRelo struct { - InsnOff uint32 - TypeID TypeID - AccessStrOff uint32 - Kind COREKind -} - -type coreRelo struct { - insnOff uint32 - typeID TypeID - accessor coreAccessor - kind COREKind -} - -type coreRelos []coreRelo - -// append two slices of extInfoRelo to 
each other. The InsnOff of b are adjusted -// by offset. -func (r coreRelos) append(other coreRelos, offset uint64) coreRelos { - result := make([]coreRelo, 0, len(r)+len(other)) - result = append(result, r...) - for _, relo := range other { - relo.insnOff += uint32(offset) - result = append(result, relo) - } - return result -} - -var extInfoReloSize = binary.Size(bpfCoreRelo{}) - -func parseExtInfoRelos(r io.Reader, bo binary.ByteOrder, strings stringTable) (map[string]coreRelos, error) { - var recordSize uint32 - if err := binary.Read(r, bo, &recordSize); err != nil { - return nil, fmt.Errorf("read record size: %v", err) - } - - if recordSize != uint32(extInfoReloSize) { - return nil, fmt.Errorf("expected record size %d, got %d", extInfoReloSize, recordSize) - } - - result := make(map[string]coreRelos) - for { - secName, infoHeader, err := parseExtInfoHeader(r, bo, strings) - if errors.Is(err, io.EOF) { - return result, nil - } - - var relos coreRelos - for i := uint32(0); i < infoHeader.NumInfo; i++ { - var relo bpfCoreRelo - if err := binary.Read(r, bo, &relo); err != nil { - return nil, fmt.Errorf("section %v: read record: %v", secName, err) - } - - if relo.InsnOff%asm.InstructionSize != 0 { - return nil, fmt.Errorf("section %v: offset %v is not aligned with instruction size", secName, relo.InsnOff) - } - - accessorStr, err := strings.Lookup(relo.AccessStrOff) - if err != nil { - return nil, err - } - - accessor, err := parseCoreAccessor(accessorStr) - if err != nil { - return nil, fmt.Errorf("accessor %q: %s", accessorStr, err) - } - - relos = append(relos, coreRelo{ - relo.InsnOff, - relo.TypeID, - accessor, - relo.Kind, - }) - } - - result[secName] = relos - } -} - -func parseExtInfoHeader(r io.Reader, bo binary.ByteOrder, strings stringTable) (string, *btfExtInfoSec, error) { - var infoHeader btfExtInfoSec - if err := binary.Read(r, bo, &infoHeader); err != nil { - return "", nil, fmt.Errorf("read ext info header: %w", err) - } - - secName, err := 
strings.Lookup(infoHeader.SecNameOff) - if err != nil { - return "", nil, fmt.Errorf("get section name: %w", err) - } - - if infoHeader.NumInfo == 0 { - return "", nil, fmt.Errorf("section %s has zero records", secName) - } - - return secName, &infoHeader, nil -} diff --git a/cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/btf/fuzz.go b/cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/btf/fuzz.go deleted file mode 100644 index 220b285afe00..000000000000 --- a/cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/btf/fuzz.go +++ /dev/null @@ -1,50 +0,0 @@ -//go:build gofuzz -// +build gofuzz - -// Use with https://github.com/dvyukov/go-fuzz - -package btf - -import ( - "bytes" - "encoding/binary" - - "github.com/cilium/ebpf/internal" -) - -func FuzzSpec(data []byte) int { - if len(data) < binary.Size(btfHeader{}) { - return -1 - } - - spec, err := loadNakedSpec(bytes.NewReader(data), internal.NativeEndian, nil, nil) - if err != nil { - if spec != nil { - panic("spec is not nil") - } - return 0 - } - if spec == nil { - panic("spec is nil") - } - return 1 -} - -func FuzzExtInfo(data []byte) int { - if len(data) < binary.Size(btfExtHeader{}) { - return -1 - } - - table := stringTable("\x00foo\x00barfoo\x00") - info, err := parseExtInfo(bytes.NewReader(data), internal.NativeEndian, table) - if err != nil { - if info != nil { - panic("info is not nil") - } - return 0 - } - if info == nil { - panic("info is nil") - } - return 1 -} diff --git a/cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/btf/info.go b/cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/btf/info.go deleted file mode 100644 index 6a9b5d2e0ba3..000000000000 --- a/cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/btf/info.go +++ /dev/null @@ -1,48 +0,0 @@ -package btf - -import ( - "bytes" - - "github.com/cilium/ebpf/internal" -) - -// info describes a BTF object. 
-type info struct { - BTF *Spec - ID ID - // Name is an identifying name for the BTF, currently only used by the - // kernel. - Name string - // KernelBTF is true if the BTf originated with the kernel and not - // userspace. - KernelBTF bool -} - -func newInfoFromFd(fd *internal.FD) (*info, error) { - // We invoke the syscall once with a empty BTF and name buffers to get size - // information to allocate buffers. Then we invoke it a second time with - // buffers to receive the data. - bpfInfo, err := bpfGetBTFInfoByFD(fd, nil, nil) - if err != nil { - return nil, err - } - - btfBuffer := make([]byte, bpfInfo.btfSize) - nameBuffer := make([]byte, bpfInfo.nameLen) - bpfInfo, err = bpfGetBTFInfoByFD(fd, btfBuffer, nameBuffer) - if err != nil { - return nil, err - } - - spec, err := loadRawSpec(bytes.NewReader(btfBuffer), internal.NativeEndian, nil, nil) - if err != nil { - return nil, err - } - - return &info{ - BTF: spec, - ID: ID(bpfInfo.id), - Name: internal.CString(nameBuffer), - KernelBTF: bpfInfo.kernelBTF != 0, - }, nil -} diff --git a/cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/btf/strings.go b/cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/btf/strings.go deleted file mode 100644 index 9876aa227c0a..000000000000 --- a/cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/btf/strings.go +++ /dev/null @@ -1,54 +0,0 @@ -package btf - -import ( - "bytes" - "errors" - "fmt" - "io" -) - -type stringTable []byte - -func readStringTable(r io.Reader) (stringTable, error) { - contents, err := io.ReadAll(r) - if err != nil { - return nil, fmt.Errorf("can't read string table: %v", err) - } - - if len(contents) < 1 { - return nil, errors.New("string table is empty") - } - - if contents[0] != '\x00' { - return nil, errors.New("first item in string table is non-empty") - } - - if contents[len(contents)-1] != '\x00' { - return nil, errors.New("string table isn't null terminated") - } - - return stringTable(contents), nil -} - -func (st 
stringTable) Lookup(offset uint32) (string, error) { - if int64(offset) > int64(^uint(0)>>1) { - return "", fmt.Errorf("offset %d overflows int", offset) - } - - pos := int(offset) - if pos >= len(st) { - return "", fmt.Errorf("offset %d is out of bounds", offset) - } - - if pos > 0 && st[pos-1] != '\x00' { - return "", fmt.Errorf("offset %d isn't start of a string", offset) - } - - str := st[pos:] - end := bytes.IndexByte(str, '\x00') - if end == -1 { - return "", fmt.Errorf("offset %d isn't null terminated", offset) - } - - return string(str[:end]), nil -} diff --git a/cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/btf/syscalls.go b/cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/btf/syscalls.go deleted file mode 100644 index a4f80abd011f..000000000000 --- a/cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/btf/syscalls.go +++ /dev/null @@ -1,31 +0,0 @@ -package btf - -import ( - "fmt" - "unsafe" - - "github.com/cilium/ebpf/internal" -) - -type bpfBTFInfo struct { - btf internal.Pointer - btfSize uint32 - id uint32 - name internal.Pointer - nameLen uint32 - kernelBTF uint32 -} - -func bpfGetBTFInfoByFD(fd *internal.FD, btf, name []byte) (*bpfBTFInfo, error) { - info := bpfBTFInfo{ - btf: internal.NewSlicePointer(btf), - btfSize: uint32(len(btf)), - name: internal.NewSlicePointer(name), - nameLen: uint32(len(name)), - } - if err := internal.BPFObjGetInfoByFD(fd, unsafe.Pointer(&info), unsafe.Sizeof(info)); err != nil { - return nil, fmt.Errorf("can't get program info: %w", err) - } - - return &info, nil -} diff --git a/cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/btf/types.go b/cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/btf/types.go deleted file mode 100644 index 5c8e7c6e59da..000000000000 --- a/cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/btf/types.go +++ /dev/null @@ -1,957 +0,0 @@ -package btf - -import ( - "fmt" - "math" - "strings" -) - -const maxTypeDepth = 32 - -// TypeID identifies a 
type in a BTF section. -type TypeID uint32 - -// ID implements part of the Type interface. -func (tid TypeID) ID() TypeID { - return tid -} - -// Type represents a type described by BTF. -type Type interface { - ID() TypeID - - String() string - - // Make a copy of the type, without copying Type members. - copy() Type - - // Enumerate all nested Types. Repeated calls must visit nested - // types in the same order. - walk(*typeDeque) -} - -// NamedType is a type with a name. -type NamedType interface { - Type - - // Name of the type, empty for anonymous types. - TypeName() string -} - -var ( - _ NamedType = (*Int)(nil) - _ NamedType = (*Struct)(nil) - _ NamedType = (*Union)(nil) - _ NamedType = (*Enum)(nil) - _ NamedType = (*Fwd)(nil) - _ NamedType = (*Func)(nil) - _ NamedType = (*Typedef)(nil) - _ NamedType = (*Var)(nil) - _ NamedType = (*Datasec)(nil) - _ NamedType = (*Float)(nil) -) - -// Void is the unit type of BTF. -type Void struct{} - -func (v *Void) ID() TypeID { return 0 } -func (v *Void) String() string { return "void#0" } -func (v *Void) size() uint32 { return 0 } -func (v *Void) copy() Type { return (*Void)(nil) } -func (v *Void) walk(*typeDeque) {} - -type IntEncoding byte - -const ( - Signed IntEncoding = 1 << iota - Char - Bool -) - -// Int is an integer of a given length. -type Int struct { - TypeID - Name string - - // The size of the integer in bytes. - Size uint32 - Encoding IntEncoding - // OffsetBits is the starting bit offset. Currently always 0. 
- // See https://www.kernel.org/doc/html/latest/bpf/btf.html#btf-kind-int - OffsetBits uint32 - Bits byte -} - -func (i *Int) String() string { - var s strings.Builder - - switch { - case i.Encoding&Char != 0: - s.WriteString("char") - case i.Encoding&Bool != 0: - s.WriteString("bool") - default: - if i.Encoding&Signed == 0 { - s.WriteRune('u') - } - s.WriteString("int") - fmt.Fprintf(&s, "%d", i.Size*8) - } - - fmt.Fprintf(&s, "#%d", i.TypeID) - - if i.Bits > 0 { - fmt.Fprintf(&s, "[bits=%d]", i.Bits) - } - - return s.String() -} - -func (i *Int) TypeName() string { return i.Name } -func (i *Int) size() uint32 { return i.Size } -func (i *Int) walk(*typeDeque) {} -func (i *Int) copy() Type { - cpy := *i - return &cpy -} - -func (i *Int) isBitfield() bool { - return i.OffsetBits > 0 -} - -// Pointer is a pointer to another type. -type Pointer struct { - TypeID - Target Type -} - -func (p *Pointer) String() string { - return fmt.Sprintf("pointer#%d[target=#%d]", p.TypeID, p.Target.ID()) -} - -func (p *Pointer) size() uint32 { return 8 } -func (p *Pointer) walk(tdq *typeDeque) { tdq.push(&p.Target) } -func (p *Pointer) copy() Type { - cpy := *p - return &cpy -} - -// Array is an array with a fixed number of elements. -type Array struct { - TypeID - Type Type - Nelems uint32 -} - -func (arr *Array) String() string { - return fmt.Sprintf("array#%d[type=#%d n=%d]", arr.TypeID, arr.Type.ID(), arr.Nelems) -} - -func (arr *Array) walk(tdq *typeDeque) { tdq.push(&arr.Type) } -func (arr *Array) copy() Type { - cpy := *arr - return &cpy -} - -// Struct is a compound type of consecutive members. 
-type Struct struct { - TypeID - Name string - // The size of the struct including padding, in bytes - Size uint32 - Members []Member -} - -func (s *Struct) String() string { - return fmt.Sprintf("struct#%d[%q]", s.TypeID, s.Name) -} - -func (s *Struct) TypeName() string { return s.Name } - -func (s *Struct) size() uint32 { return s.Size } - -func (s *Struct) walk(tdq *typeDeque) { - for i := range s.Members { - tdq.push(&s.Members[i].Type) - } -} - -func (s *Struct) copy() Type { - cpy := *s - cpy.Members = copyMembers(s.Members) - return &cpy -} - -func (s *Struct) members() []Member { - return s.Members -} - -// Union is a compound type where members occupy the same memory. -type Union struct { - TypeID - Name string - // The size of the union including padding, in bytes. - Size uint32 - Members []Member -} - -func (u *Union) String() string { - return fmt.Sprintf("union#%d[%q]", u.TypeID, u.Name) -} - -func (u *Union) TypeName() string { return u.Name } - -func (u *Union) size() uint32 { return u.Size } - -func (u *Union) walk(tdq *typeDeque) { - for i := range u.Members { - tdq.push(&u.Members[i].Type) - } -} - -func (u *Union) copy() Type { - cpy := *u - cpy.Members = copyMembers(u.Members) - return &cpy -} - -func (u *Union) members() []Member { - return u.Members -} - -func copyMembers(orig []Member) []Member { - cpy := make([]Member, len(orig)) - copy(cpy, orig) - return cpy -} - -type composite interface { - members() []Member -} - -var ( - _ composite = (*Struct)(nil) - _ composite = (*Union)(nil) -) - -// Member is part of a Struct or Union. -// -// It is not a valid Type. -type Member struct { - Name string - Type Type - // OffsetBits is the bit offset of this member. - OffsetBits uint32 - BitfieldSize uint32 -} - -// Enum lists possible values. 
-type Enum struct { - TypeID - Name string - Values []EnumValue -} - -func (e *Enum) String() string { - return fmt.Sprintf("enum#%d[%q]", e.TypeID, e.Name) -} - -func (e *Enum) TypeName() string { return e.Name } - -// EnumValue is part of an Enum -// -// Is is not a valid Type -type EnumValue struct { - Name string - Value int32 -} - -func (e *Enum) size() uint32 { return 4 } -func (e *Enum) walk(*typeDeque) {} -func (e *Enum) copy() Type { - cpy := *e - cpy.Values = make([]EnumValue, len(e.Values)) - copy(cpy.Values, e.Values) - return &cpy -} - -// FwdKind is the type of forward declaration. -type FwdKind int - -// Valid types of forward declaration. -const ( - FwdStruct FwdKind = iota - FwdUnion -) - -func (fk FwdKind) String() string { - switch fk { - case FwdStruct: - return "struct" - case FwdUnion: - return "union" - default: - return fmt.Sprintf("%T(%d)", fk, int(fk)) - } -} - -// Fwd is a forward declaration of a Type. -type Fwd struct { - TypeID - Name string - Kind FwdKind -} - -func (f *Fwd) String() string { - return fmt.Sprintf("fwd#%d[%s %q]", f.TypeID, f.Kind, f.Name) -} - -func (f *Fwd) TypeName() string { return f.Name } - -func (f *Fwd) walk(*typeDeque) {} -func (f *Fwd) copy() Type { - cpy := *f - return &cpy -} - -// Typedef is an alias of a Type. -type Typedef struct { - TypeID - Name string - Type Type -} - -func (td *Typedef) String() string { - return fmt.Sprintf("typedef#%d[%q #%d]", td.TypeID, td.Name, td.Type.ID()) -} - -func (td *Typedef) TypeName() string { return td.Name } - -func (td *Typedef) walk(tdq *typeDeque) { tdq.push(&td.Type) } -func (td *Typedef) copy() Type { - cpy := *td - return &cpy -} - -// Volatile is a qualifier. 
-type Volatile struct { - TypeID - Type Type -} - -func (v *Volatile) String() string { - return fmt.Sprintf("volatile#%d[#%d]", v.TypeID, v.Type.ID()) -} - -func (v *Volatile) qualify() Type { return v.Type } -func (v *Volatile) walk(tdq *typeDeque) { tdq.push(&v.Type) } -func (v *Volatile) copy() Type { - cpy := *v - return &cpy -} - -// Const is a qualifier. -type Const struct { - TypeID - Type Type -} - -func (c *Const) String() string { - return fmt.Sprintf("const#%d[#%d]", c.TypeID, c.Type.ID()) -} - -func (c *Const) qualify() Type { return c.Type } -func (c *Const) walk(tdq *typeDeque) { tdq.push(&c.Type) } -func (c *Const) copy() Type { - cpy := *c - return &cpy -} - -// Restrict is a qualifier. -type Restrict struct { - TypeID - Type Type -} - -func (r *Restrict) String() string { - return fmt.Sprintf("restrict#%d[#%d]", r.TypeID, r.Type.ID()) -} - -func (r *Restrict) qualify() Type { return r.Type } -func (r *Restrict) walk(tdq *typeDeque) { tdq.push(&r.Type) } -func (r *Restrict) copy() Type { - cpy := *r - return &cpy -} - -// Func is a function definition. -type Func struct { - TypeID - Name string - Type Type - Linkage FuncLinkage -} - -func (f *Func) String() string { - return fmt.Sprintf("func#%d[%s %q proto=#%d]", f.TypeID, f.Linkage, f.Name, f.Type.ID()) -} - -func (f *Func) TypeName() string { return f.Name } - -func (f *Func) walk(tdq *typeDeque) { tdq.push(&f.Type) } -func (f *Func) copy() Type { - cpy := *f - return &cpy -} - -// FuncProto is a function declaration. 
-type FuncProto struct { - TypeID - Return Type - Params []FuncParam -} - -func (fp *FuncProto) String() string { - var s strings.Builder - fmt.Fprintf(&s, "proto#%d[", fp.TypeID) - for _, param := range fp.Params { - fmt.Fprintf(&s, "%q=#%d, ", param.Name, param.Type.ID()) - } - fmt.Fprintf(&s, "return=#%d]", fp.Return.ID()) - return s.String() -} - -func (fp *FuncProto) walk(tdq *typeDeque) { - tdq.push(&fp.Return) - for i := range fp.Params { - tdq.push(&fp.Params[i].Type) - } -} - -func (fp *FuncProto) copy() Type { - cpy := *fp - cpy.Params = make([]FuncParam, len(fp.Params)) - copy(cpy.Params, fp.Params) - return &cpy -} - -type FuncParam struct { - Name string - Type Type -} - -// Var is a global variable. -type Var struct { - TypeID - Name string - Type Type - Linkage VarLinkage -} - -func (v *Var) String() string { - return fmt.Sprintf("var#%d[%s %q]", v.TypeID, v.Linkage, v.Name) -} - -func (v *Var) TypeName() string { return v.Name } - -func (v *Var) walk(tdq *typeDeque) { tdq.push(&v.Type) } -func (v *Var) copy() Type { - cpy := *v - return &cpy -} - -// Datasec is a global program section containing data. -type Datasec struct { - TypeID - Name string - Size uint32 - Vars []VarSecinfo -} - -func (ds *Datasec) String() string { - return fmt.Sprintf("section#%d[%q]", ds.TypeID, ds.Name) -} - -func (ds *Datasec) TypeName() string { return ds.Name } - -func (ds *Datasec) size() uint32 { return ds.Size } - -func (ds *Datasec) walk(tdq *typeDeque) { - for i := range ds.Vars { - tdq.push(&ds.Vars[i].Type) - } -} - -func (ds *Datasec) copy() Type { - cpy := *ds - cpy.Vars = make([]VarSecinfo, len(ds.Vars)) - copy(cpy.Vars, ds.Vars) - return &cpy -} - -// VarSecinfo describes variable in a Datasec. -// -// It is not a valid Type. -type VarSecinfo struct { - Type Type - Offset uint32 - Size uint32 -} - -// Float is a float of a given length. -type Float struct { - TypeID - Name string - - // The size of the float in bytes. 
- Size uint32 -} - -func (f *Float) String() string { - return fmt.Sprintf("float%d#%d[%q]", f.Size*8, f.TypeID, f.Name) -} - -func (f *Float) TypeName() string { return f.Name } -func (f *Float) size() uint32 { return f.Size } -func (f *Float) walk(*typeDeque) {} -func (f *Float) copy() Type { - cpy := *f - return &cpy -} - -type sizer interface { - size() uint32 -} - -var ( - _ sizer = (*Int)(nil) - _ sizer = (*Pointer)(nil) - _ sizer = (*Struct)(nil) - _ sizer = (*Union)(nil) - _ sizer = (*Enum)(nil) - _ sizer = (*Datasec)(nil) -) - -type qualifier interface { - qualify() Type -} - -var ( - _ qualifier = (*Const)(nil) - _ qualifier = (*Restrict)(nil) - _ qualifier = (*Volatile)(nil) -) - -// Sizeof returns the size of a type in bytes. -// -// Returns an error if the size can't be computed. -func Sizeof(typ Type) (int, error) { - var ( - n = int64(1) - elem int64 - ) - - for i := 0; i < maxTypeDepth; i++ { - switch v := typ.(type) { - case *Array: - if n > 0 && int64(v.Nelems) > math.MaxInt64/n { - return 0, fmt.Errorf("type %s: overflow", typ) - } - - // Arrays may be of zero length, which allows - // n to be zero as well. - n *= int64(v.Nelems) - typ = v.Type - continue - - case sizer: - elem = int64(v.size()) - - case *Typedef: - typ = v.Type - continue - - case qualifier: - typ = v.qualify() - continue - - default: - return 0, fmt.Errorf("unsized type %T", typ) - } - - if n > 0 && elem > math.MaxInt64/n { - return 0, fmt.Errorf("type %s: overflow", typ) - } - - size := n * elem - if int64(int(size)) != size { - return 0, fmt.Errorf("type %s: overflow", typ) - } - - return int(size), nil - } - - return 0, fmt.Errorf("type %s: exceeded type depth", typ) -} - -// copy a Type recursively. -// -// typ may form a cycle. -// -// Returns any errors from transform verbatim. -func copyType(typ Type, transform func(Type) (Type, error)) (Type, error) { - copies := make(copier) - return typ, copies.copy(&typ, transform) -} - -// copy a slice of Types recursively. 
-// -// Types may form a cycle. -// -// Returns any errors from transform verbatim. -func copyTypes(types []Type, transform func(Type) (Type, error)) ([]Type, error) { - result := make([]Type, len(types)) - copy(result, types) - - copies := make(copier) - for i := range result { - if err := copies.copy(&result[i], transform); err != nil { - return nil, err - } - } - - return result, nil -} - -type copier map[Type]Type - -func (c copier) copy(typ *Type, transform func(Type) (Type, error)) error { - var work typeDeque - for t := typ; t != nil; t = work.pop() { - // *t is the identity of the type. - if cpy := c[*t]; cpy != nil { - *t = cpy - continue - } - - var cpy Type - if transform != nil { - tf, err := transform(*t) - if err != nil { - return fmt.Errorf("copy %s: %w", *t, err) - } - cpy = tf.copy() - } else { - cpy = (*t).copy() - } - - c[*t] = cpy - *t = cpy - - // Mark any nested types for copying. - cpy.walk(&work) - } - - return nil -} - -// typeDeque keeps track of pointers to types which still -// need to be visited. -type typeDeque struct { - types []*Type - read, write uint64 - mask uint64 -} - -func (dq *typeDeque) empty() bool { - return dq.read == dq.write -} - -// push adds a type to the stack. -func (dq *typeDeque) push(t *Type) { - if dq.write-dq.read < uint64(len(dq.types)) { - dq.types[dq.write&dq.mask] = t - dq.write++ - return - } - - new := len(dq.types) * 2 - if new == 0 { - new = 8 - } - - types := make([]*Type, new) - pivot := dq.read & dq.mask - n := copy(types, dq.types[pivot:]) - n += copy(types[n:], dq.types[:pivot]) - types[n] = t - - dq.types = types - dq.mask = uint64(new) - 1 - dq.read, dq.write = 0, uint64(n+1) -} - -// shift returns the first element or null. -func (dq *typeDeque) shift() *Type { - if dq.empty() { - return nil - } - - index := dq.read & dq.mask - t := dq.types[index] - dq.types[index] = nil - dq.read++ - return t -} - -// pop returns the last element or null. 
-func (dq *typeDeque) pop() *Type { - if dq.empty() { - return nil - } - - dq.write-- - index := dq.write & dq.mask - t := dq.types[index] - dq.types[index] = nil - return t -} - -// all returns all elements. -// -// The deque is empty after calling this method. -func (dq *typeDeque) all() []*Type { - length := dq.write - dq.read - types := make([]*Type, 0, length) - for t := dq.shift(); t != nil; t = dq.shift() { - types = append(types, t) - } - return types -} - -// inflateRawTypes takes a list of raw btf types linked via type IDs, and turns -// it into a graph of Types connected via pointers. -// -// Returns a map of named types (so, where NameOff is non-zero) and a slice of types -// indexed by TypeID. Since BTF ignores compilation units, multiple types may share -// the same name. A Type may form a cyclic graph by pointing at itself. -func inflateRawTypes(rawTypes []rawType, rawStrings stringTable) (types []Type, namedTypes map[string][]NamedType, err error) { - type fixupDef struct { - id TypeID - expectedKind btfKind - typ *Type - } - - var fixups []fixupDef - fixup := func(id TypeID, expectedKind btfKind, typ *Type) { - fixups = append(fixups, fixupDef{id, expectedKind, typ}) - } - - convertMembers := func(raw []btfMember, kindFlag bool) ([]Member, error) { - // NB: The fixup below relies on pre-allocating this array to - // work, since otherwise append might re-allocate members. 
- members := make([]Member, 0, len(raw)) - for i, btfMember := range raw { - name, err := rawStrings.Lookup(btfMember.NameOff) - if err != nil { - return nil, fmt.Errorf("can't get name for member %d: %w", i, err) - } - m := Member{ - Name: name, - OffsetBits: btfMember.Offset, - } - if kindFlag { - m.BitfieldSize = btfMember.Offset >> 24 - m.OffsetBits &= 0xffffff - } - members = append(members, m) - } - for i := range members { - fixup(raw[i].Type, kindUnknown, &members[i].Type) - } - return members, nil - } - - types = make([]Type, 0, len(rawTypes)) - types = append(types, (*Void)(nil)) - namedTypes = make(map[string][]NamedType) - - for i, raw := range rawTypes { - var ( - // Void is defined to always be type ID 0, and is thus - // omitted from BTF. - id = TypeID(i + 1) - typ Type - ) - - name, err := rawStrings.Lookup(raw.NameOff) - if err != nil { - return nil, nil, fmt.Errorf("get name for type id %d: %w", id, err) - } - - switch raw.Kind() { - case kindInt: - encoding, offset, bits := intEncoding(*raw.data.(*uint32)) - typ = &Int{id, name, raw.Size(), encoding, offset, bits} - - case kindPointer: - ptr := &Pointer{id, nil} - fixup(raw.Type(), kindUnknown, &ptr.Target) - typ = ptr - - case kindArray: - btfArr := raw.data.(*btfArray) - - // IndexType is unused according to btf.rst. - // Don't make it available right now. 
- arr := &Array{id, nil, btfArr.Nelems} - fixup(btfArr.Type, kindUnknown, &arr.Type) - typ = arr - - case kindStruct: - members, err := convertMembers(raw.data.([]btfMember), raw.KindFlag()) - if err != nil { - return nil, nil, fmt.Errorf("struct %s (id %d): %w", name, id, err) - } - typ = &Struct{id, name, raw.Size(), members} - - case kindUnion: - members, err := convertMembers(raw.data.([]btfMember), raw.KindFlag()) - if err != nil { - return nil, nil, fmt.Errorf("union %s (id %d): %w", name, id, err) - } - typ = &Union{id, name, raw.Size(), members} - - case kindEnum: - rawvals := raw.data.([]btfEnum) - vals := make([]EnumValue, 0, len(rawvals)) - for i, btfVal := range rawvals { - name, err := rawStrings.Lookup(btfVal.NameOff) - if err != nil { - return nil, nil, fmt.Errorf("get name for enum value %d: %s", i, err) - } - vals = append(vals, EnumValue{ - Name: name, - Value: btfVal.Val, - }) - } - typ = &Enum{id, name, vals} - - case kindForward: - if raw.KindFlag() { - typ = &Fwd{id, name, FwdUnion} - } else { - typ = &Fwd{id, name, FwdStruct} - } - - case kindTypedef: - typedef := &Typedef{id, name, nil} - fixup(raw.Type(), kindUnknown, &typedef.Type) - typ = typedef - - case kindVolatile: - volatile := &Volatile{id, nil} - fixup(raw.Type(), kindUnknown, &volatile.Type) - typ = volatile - - case kindConst: - cnst := &Const{id, nil} - fixup(raw.Type(), kindUnknown, &cnst.Type) - typ = cnst - - case kindRestrict: - restrict := &Restrict{id, nil} - fixup(raw.Type(), kindUnknown, &restrict.Type) - typ = restrict - - case kindFunc: - fn := &Func{id, name, nil, raw.Linkage()} - fixup(raw.Type(), kindFuncProto, &fn.Type) - typ = fn - - case kindFuncProto: - rawparams := raw.data.([]btfParam) - params := make([]FuncParam, 0, len(rawparams)) - for i, param := range rawparams { - name, err := rawStrings.Lookup(param.NameOff) - if err != nil { - return nil, nil, fmt.Errorf("get name for func proto parameter %d: %s", i, err) - } - params = append(params, FuncParam{ - 
Name: name, - }) - } - for i := range params { - fixup(rawparams[i].Type, kindUnknown, ¶ms[i].Type) - } - - fp := &FuncProto{id, nil, params} - fixup(raw.Type(), kindUnknown, &fp.Return) - typ = fp - - case kindVar: - variable := raw.data.(*btfVariable) - v := &Var{id, name, nil, VarLinkage(variable.Linkage)} - fixup(raw.Type(), kindUnknown, &v.Type) - typ = v - - case kindDatasec: - btfVars := raw.data.([]btfVarSecinfo) - vars := make([]VarSecinfo, 0, len(btfVars)) - for _, btfVar := range btfVars { - vars = append(vars, VarSecinfo{ - Offset: btfVar.Offset, - Size: btfVar.Size, - }) - } - for i := range vars { - fixup(btfVars[i].Type, kindVar, &vars[i].Type) - } - typ = &Datasec{id, name, raw.SizeType, vars} - - case kindFloat: - typ = &Float{id, name, raw.Size()} - - default: - return nil, nil, fmt.Errorf("type id %d: unknown kind: %v", id, raw.Kind()) - } - - types = append(types, typ) - - if named, ok := typ.(NamedType); ok { - if name := essentialName(named.TypeName()); name != "" { - namedTypes[name] = append(namedTypes[name], named) - } - } - } - - for _, fixup := range fixups { - i := int(fixup.id) - if i >= len(types) { - return nil, nil, fmt.Errorf("reference to invalid type id: %d", fixup.id) - } - - // Default void (id 0) to unknown - rawKind := kindUnknown - if i > 0 { - rawKind = rawTypes[i-1].Kind() - } - - if expected := fixup.expectedKind; expected != kindUnknown && rawKind != expected { - return nil, nil, fmt.Errorf("expected type id %d to have kind %s, found %s", fixup.id, expected, rawKind) - } - - *fixup.typ = types[i] - } - - return types, namedTypes, nil -} - -// essentialName returns name without a ___ suffix. 
-func essentialName(name string) string { - lastIdx := strings.LastIndex(name, "___") - if lastIdx > 0 { - return name[:lastIdx] - } - return name -} diff --git a/cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/elf.go b/cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/elf.go index 54a4313130a4..011581938d99 100644 --- a/cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/elf.go +++ b/cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/elf.go @@ -35,6 +35,29 @@ func NewSafeELFFile(r io.ReaderAt) (safe *SafeELFFile, err error) { return &SafeELFFile{file}, nil } +// OpenSafeELFFile reads an ELF from a file. +// +// It works like NewSafeELFFile, with the exception that safe.Close will +// close the underlying file. +func OpenSafeELFFile(path string) (safe *SafeELFFile, err error) { + defer func() { + r := recover() + if r == nil { + return + } + + safe = nil + err = fmt.Errorf("reading ELF file panicked: %s", r) + }() + + file, err := elf.Open(path) + if err != nil { + return nil, err + } + + return &SafeELFFile{file}, nil +} + // Symbols is the safe version of elf.File.Symbols. func (se *SafeELFFile) Symbols() (syms []elf.Symbol, err error) { defer func() { @@ -66,3 +89,14 @@ func (se *SafeELFFile) DynamicSymbols() (syms []elf.Symbol, err error) { syms, err = se.File.DynamicSymbols() return } + +// SectionsByType returns all sections in the file with the specified section type. 
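The `essentialName` helper in this hunk strips the `___` suffix that BTF uses to distinguish "flavors" of a type (for example `task_struct___v5`), so that lookups match on the base name. A runnable sketch of the same logic:

```go
package main

import (
	"fmt"
	"strings"
)

// essentialName mirrors the helper above: drop everything from the last
// "___" onward, but keep a name that merely starts with "___" intact
// (lastIdx must be strictly positive).
func essentialName(name string) string {
	if i := strings.LastIndex(name, "___"); i > 0 {
		return name[:i]
	}
	return name
}

func main() {
	fmt.Println(essentialName("task_struct___v5")) // task_struct
	fmt.Println(essentialName("___anon"))          // ___anon
}
```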
+func (se *SafeELFFile) SectionsByType(typ elf.SectionType) []*elf.Section { + sections := make([]*elf.Section, 0, 1) + for _, section := range se.Sections { + if section.Type == typ { + sections = append(sections, section) + } + } + return sections +} diff --git a/cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/endian.go b/cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/endian.go deleted file mode 100644 index 6ae99fcd5f3b..000000000000 --- a/cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/endian.go +++ /dev/null @@ -1,29 +0,0 @@ -package internal - -import ( - "encoding/binary" - "unsafe" -) - -// NativeEndian is set to either binary.BigEndian or binary.LittleEndian, -// depending on the host's endianness. -var NativeEndian binary.ByteOrder - -// Clang is set to either "el" or "eb" depending on the host's endianness. -var ClangEndian string - -func init() { - if isBigEndian() { - NativeEndian = binary.BigEndian - ClangEndian = "eb" - } else { - NativeEndian = binary.LittleEndian - ClangEndian = "el" - } -} - -func isBigEndian() (ret bool) { - i := int(0x1) - bs := (*[int(unsafe.Sizeof(i))]byte)(unsafe.Pointer(&i)) - return bs[0] == 0 -} diff --git a/cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/endian_be.go b/cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/endian_be.go new file mode 100644 index 000000000000..ad33cda8511b --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/endian_be.go @@ -0,0 +1,13 @@ +//go:build armbe || arm64be || mips || mips64 || mips64p32 || ppc64 || s390 || s390x || sparc || sparc64 +// +build armbe arm64be mips mips64 mips64p32 ppc64 s390 s390x sparc sparc64 + +package internal + +import "encoding/binary" + +// NativeEndian is set to either binary.BigEndian or binary.LittleEndian, +// depending on the host's endianness. +var NativeEndian binary.ByteOrder = binary.BigEndian + +// ClangEndian is set to either "el" or "eb" depending on the host's endianness. 
+const ClangEndian = "eb" diff --git a/cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/endian_le.go b/cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/endian_le.go new file mode 100644 index 000000000000..41a68224c833 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/endian_le.go @@ -0,0 +1,13 @@ +//go:build 386 || amd64 || amd64p32 || arm || arm64 || mipsle || mips64le || mips64p32le || ppc64le || riscv64 +// +build 386 amd64 amd64p32 arm arm64 mipsle mips64le mips64p32le ppc64le riscv64 + +package internal + +import "encoding/binary" + +// NativeEndian is set to either binary.BigEndian or binary.LittleEndian, +// depending on the host's endianness. +var NativeEndian binary.ByteOrder = binary.LittleEndian + +// ClangEndian is set to either "el" or "eb" depending on the host's endianness. +const ClangEndian = "el" diff --git a/cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/errors.go b/cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/errors.go index 877bd72ee266..b5ccdd7d0531 100644 --- a/cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/errors.go +++ b/cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/errors.go @@ -2,50 +2,205 @@ package internal import ( "bytes" - "errors" "fmt" + "io" "strings" - - "github.com/cilium/ebpf/internal/unix" ) -// ErrorWithLog returns an error that includes logs from the -// kernel verifier. +// ErrorWithLog returns an error which includes logs from the kernel verifier. +// +// The default error output is a summary of the full log. The latter can be +// accessed via VerifierError.Log or by formatting the error, see Format. // -// logErr should be the error returned by the syscall that generated -// the log. It is used to check for truncation of the output. 
-func ErrorWithLog(err error, log []byte, logErr error) error { - logStr := strings.Trim(CString(log), "\t\r\n ") - if errors.Is(logErr, unix.ENOSPC) { - logStr += " (truncated...)" +// A set of heuristics is used to determine whether the log has been truncated. +func ErrorWithLog(err error, log []byte) *VerifierError { + const whitespace = "\t\r\v\n " + + // Convert verifier log C string by truncating it on the first 0 byte + // and trimming trailing whitespace before interpreting as a Go string. + truncated := false + if i := bytes.IndexByte(log, 0); i != -1 { + if i == len(log)-1 && !bytes.HasSuffix(log[:i], []byte{'\n'}) { + // The null byte is at the end of the buffer and it's not preceded + // by a newline character. Most likely the buffer was too short. + truncated = true + } + + log = log[:i] + } else if len(log) > 0 { + // No null byte? Dodgy! + truncated = true + } + + log = bytes.Trim(log, whitespace) + logLines := bytes.Split(log, []byte{'\n'}) + lines := make([]string, 0, len(logLines)) + for _, line := range logLines { + // Don't remove leading white space on individual lines. We rely on it + // when outputting logs. + lines = append(lines, string(bytes.TrimRight(line, whitespace))) } - return &VerifierError{err, logStr} + return &VerifierError{err, lines, truncated} } // VerifierError includes information from the eBPF verifier. +// +// It summarises the log output, see Format if you want to output the full contents. type VerifierError struct { - cause error - log string + // The error which caused this error. + Cause error + // The verifier output split into lines. + Log []string + // Whether the log output is truncated, based on several heuristics. 
+ Truncated bool } func (le *VerifierError) Unwrap() error { - return le.cause + return le.Cause } func (le *VerifierError) Error() string { - if le.log == "" { - return le.cause.Error() + log := le.Log + if n := len(log); n > 0 && strings.HasPrefix(log[n-1], "processed ") { + // Get rid of "processed 39 insns (limit 1000000) ..." from summary. + log = log[:n-1] + } + + n := len(log) + if n == 0 { + return le.Cause.Error() + } + + lines := log[n-1:] + if n >= 2 && (includePreviousLine(log[n-1]) || le.Truncated) { + // Add one more line of context if it aids understanding the error. + lines = log[n-2:] + } + + var b strings.Builder + fmt.Fprintf(&b, "%s: ", le.Cause.Error()) + + for i, line := range lines { + b.WriteString(strings.TrimSpace(line)) + if i != len(lines)-1 { + b.WriteString(": ") + } + } + + omitted := len(le.Log) - len(lines) + if omitted == 0 && !le.Truncated { + return b.String() + } + + b.WriteString(" (") + if le.Truncated { + b.WriteString("truncated") + } + + if omitted > 0 { + if le.Truncated { + b.WriteString(", ") + } + fmt.Fprintf(&b, "%d line(s) omitted", omitted) + } + b.WriteString(")") + + return b.String() +} + +// includePreviousLine returns true if the given line likely is better +// understood with additional context from the preceding line. +func includePreviousLine(line string) bool { + // We need to find a good trade off between understandable error messages + // and too much complexity here. Checking the string prefix is ok, requiring + // regular expressions to do it is probably overkill. 
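`ErrorWithLog` above treats the verifier log as a C string: cut at the first NUL, flag truncation when the NUL sits at the very end without a preceding newline (or is missing entirely), then split into right-trimmed lines. A self-contained sketch of those heuristics (the helper name `splitVerifierLog` is ours):

```go
package main

import (
	"bytes"
	"fmt"
)

// splitVerifierLog applies the same heuristics as ErrorWithLog above.
func splitVerifierLog(log []byte) (lines []string, truncated bool) {
	const whitespace = "\t\r\v\n "
	if i := bytes.IndexByte(log, 0); i != -1 {
		if i == len(log)-1 && !bytes.HasSuffix(log[:i], []byte{'\n'}) {
			// NUL at the very end without a trailing newline: the
			// buffer was most likely too short for the full log.
			truncated = true
		}
		log = log[:i]
	} else if len(log) > 0 {
		// No NUL terminator at all: dodgy, assume truncation.
		truncated = true
	}
	for _, l := range bytes.Split(bytes.Trim(log, whitespace), []byte{'\n'}) {
		lines = append(lines, string(bytes.TrimRight(l, whitespace)))
	}
	return lines, truncated
}

func main() {
	lines, trunc := splitVerifierLog([]byte("R1 type=ctx\nprocessed 2 insns\x00"))
	fmt.Println(len(lines), trunc) // 2 true
}
```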
+ + if strings.HasPrefix(line, "\t") { + // [13] STRUCT drm_rect size=16 vlen=4 + // \tx1 type_id=2 + return true + } + + if len(line) >= 2 && line[0] == 'R' && line[1] >= '0' && line[1] <= '9' { + // 0: (95) exit + // R0 !read_ok + return true } - return fmt.Sprintf("%s: %s", le.cause, le.log) + if strings.HasPrefix(line, "invalid bpf_context access") { + // 0: (79) r6 = *(u64 *)(r1 +0) + // func '__x64_sys_recvfrom' arg0 type FWD is not a struct + // invalid bpf_context access off=0 size=8 + return true + } + + return false } -// CString turns a NUL / zero terminated byte buffer into a string. -func CString(in []byte) string { - inLen := bytes.IndexByte(in, 0) - if inLen == -1 { - return "" +// Format the error. +// +// Understood verbs are %s and %v, which are equivalent to calling Error(). %v +// allows outputting additional information using the following flags: +// +// + Output the first lines, or all lines if no width is given. +// - Output the last lines, or all lines if no width is given. +// +// Use width to specify how many lines to output. Use the '-' flag to output +// lines from the end of the log instead of the beginning. +func (le *VerifierError) Format(f fmt.State, verb rune) { + switch verb { + case 's': + _, _ = io.WriteString(f, le.Error()) + + case 'v': + n, haveWidth := f.Width() + if !haveWidth || n > len(le.Log) { + n = len(le.Log) + } + + if !f.Flag('+') && !f.Flag('-') { + if haveWidth { + _, _ = io.WriteString(f, "%!v(BADWIDTH)") + return + } + + _, _ = io.WriteString(f, le.Error()) + return + } + + if f.Flag('+') && f.Flag('-') { + _, _ = io.WriteString(f, "%!v(BADFLAG)") + return + } + + fmt.Fprintf(f, "%s:", le.Cause.Error()) + + omitted := len(le.Log) - n + lines := le.Log[:n] + if f.Flag('-') { + // Print last instead of first lines. 
+ lines = le.Log[len(le.Log)-n:] + if omitted > 0 { + fmt.Fprintf(f, "\n\t(%d line(s) omitted)", omitted) + } + } + + for _, line := range lines { + fmt.Fprintf(f, "\n\t%s", line) + } + + if !f.Flag('-') { + if omitted > 0 { + fmt.Fprintf(f, "\n\t(%d line(s) omitted)", omitted) + } + } + + if le.Truncated { + fmt.Fprintf(f, "\n\t(truncated)") + } + + default: + fmt.Fprintf(f, "%%!%c(BADVERB)", verb) } - return string(in[:inLen]) } diff --git a/cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/fd.go b/cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/fd.go deleted file mode 100644 index af04955bd531..000000000000 --- a/cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/fd.go +++ /dev/null @@ -1,69 +0,0 @@ -package internal - -import ( - "errors" - "fmt" - "os" - "runtime" - "strconv" - - "github.com/cilium/ebpf/internal/unix" -) - -var ErrClosedFd = errors.New("use of closed file descriptor") - -type FD struct { - raw int64 -} - -func NewFD(value uint32) *FD { - fd := &FD{int64(value)} - runtime.SetFinalizer(fd, (*FD).Close) - return fd -} - -func (fd *FD) String() string { - return strconv.FormatInt(fd.raw, 10) -} - -func (fd *FD) Value() (uint32, error) { - if fd.raw < 0 { - return 0, ErrClosedFd - } - - return uint32(fd.raw), nil -} - -func (fd *FD) Close() error { - if fd.raw < 0 { - return nil - } - - value := int(fd.raw) - fd.raw = -1 - - fd.Forget() - return unix.Close(value) -} - -func (fd *FD) Forget() { - runtime.SetFinalizer(fd, nil) -} - -func (fd *FD) Dup() (*FD, error) { - if fd.raw < 0 { - return nil, ErrClosedFd - } - - dup, err := unix.FcntlInt(uintptr(fd.raw), unix.F_DUPFD_CLOEXEC, 0) - if err != nil { - return nil, fmt.Errorf("can't dup fd: %v", err) - } - - return NewFD(uint32(dup)), nil -} - -func (fd *FD) File(name string) *os.File { - fd.Forget() - return os.NewFile(uintptr(fd.raw), name) -} diff --git a/cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/feature.go 
b/cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/feature.go index c94a2e1ee018..0a6c2d1d528c 100644 --- a/cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/feature.go +++ b/cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/feature.go @@ -54,11 +54,6 @@ type FeatureTestFn func() error // // Returns an error wrapping ErrNotSupported if the feature is not supported. func FeatureTest(name, version string, fn FeatureTestFn) func() error { - v, err := NewVersion(version) - if err != nil { - return func() error { return err } - } - ft := new(featureTest) return func() error { ft.RLock() @@ -79,6 +74,11 @@ func FeatureTest(name, version string, fn FeatureTestFn) func() error { err := fn() switch { case errors.Is(err, ErrNotSupported): + v, err := NewVersion(version) + if err != nil { + return err + } + ft.result = &UnsupportedFeatureError{ MinimumVersion: v, Name: name, diff --git a/cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/io.go b/cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/io.go index fa7402782d7a..30b6641f0761 100644 --- a/cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/io.go +++ b/cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/io.go @@ -1,6 +1,35 @@ package internal -import "errors" +import ( + "bufio" + "compress/gzip" + "errors" + "io" + "os" +) + +// NewBufferedSectionReader wraps an io.ReaderAt in an appropriately-sized +// buffered reader. It is a convenience function for reading subsections of +// ELF sections while minimizing the amount of read() syscalls made. +// +// Syscall overhead is non-negligible in continuous integration context +// where ELFs might be accessed over virtual filesystems with poor random +// access performance. Buffering reads makes sense because (sub)sections +// end up being read completely anyway. +// +// Use instead of the r.Seek() + io.LimitReader() pattern. 
+func NewBufferedSectionReader(ra io.ReaderAt, off, n int64) *bufio.Reader { + // Clamp the size of the buffer to one page to avoid slurping large parts + // of a file into memory. bufio.NewReader uses a hardcoded default buffer + // of 4096. Allow arches with larger pages to allocate more, but don't + // allocate a fixed 4k buffer if we only need to read a small segment. + buf := n + if ps := int64(os.Getpagesize()); n > ps { + buf = ps + } + + return bufio.NewReaderSize(io.NewSectionReader(ra, off, n), int(buf)) +} // DiscardZeroes makes sure that all written bytes are zero // before discarding them. @@ -14,3 +43,20 @@ func (DiscardZeroes) Write(p []byte) (int, error) { } return len(p), nil } + +// ReadAllCompressed decompresses a gzipped file into memory. +func ReadAllCompressed(file string) ([]byte, error) { + fh, err := os.Open(file) + if err != nil { + return nil, err + } + defer fh.Close() + + gz, err := gzip.NewReader(fh) + if err != nil { + return nil, err + } + defer gz.Close() + + return io.ReadAll(gz) +} diff --git a/cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/output.go b/cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/output.go new file mode 100644 index 000000000000..aeab37fcfafe --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/output.go @@ -0,0 +1,84 @@ +package internal + +import ( + "bytes" + "errors" + "go/format" + "go/scanner" + "io" + "strings" + "unicode" +) + +// Identifier turns a C style type or field name into an exportable Go equivalent. +func Identifier(str string) string { + prev := rune(-1) + return strings.Map(func(r rune) rune { + // See https://golang.org/ref/spec#Identifiers + switch { + case unicode.IsLetter(r): + if prev == -1 { + r = unicode.ToUpper(r) + } + + case r == '_': + switch { + // The previous rune was deleted, or we are at the + // beginning of the string. + case prev == -1: + fallthrough + + // The previous rune is a lower case letter or a digit. 
+ case unicode.IsDigit(prev) || (unicode.IsLetter(prev) && unicode.IsLower(prev)): + // delete the current rune, and force the + // next character to be uppercased. + r = -1 + } + + case unicode.IsDigit(r): + + default: + // Delete the current rune. prev is unchanged. + return -1 + } + + prev = r + return r + }, str) +} + +// WriteFormatted outputs a formatted src into out. +// +// If formatting fails it returns an informative error message. +func WriteFormatted(src []byte, out io.Writer) error { + formatted, err := format.Source(src) + if err == nil { + _, err = out.Write(formatted) + return err + } + + var el scanner.ErrorList + if !errors.As(err, &el) { + return err + } + + var nel scanner.ErrorList + for _, err := range el { + if !err.Pos.IsValid() { + nel = append(nel, err) + continue + } + + buf := src[err.Pos.Offset:] + nl := bytes.IndexRune(buf, '\n') + if nl == -1 { + nel = append(nel, err) + continue + } + + err.Msg += ": " + string(buf[:nl]) + nel = append(nel, err) + } + + return nel +} diff --git a/cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/pinning.go b/cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/pinning.go index 5329b432d72e..c711353c3ea5 100644 --- a/cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/pinning.go +++ b/cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/pinning.go @@ -4,24 +4,54 @@ import ( "errors" "fmt" "os" + "path/filepath" + "runtime" + "unsafe" + "github.com/cilium/ebpf/internal/sys" "github.com/cilium/ebpf/internal/unix" ) -func Pin(currentPath, newPath string, fd *FD) error { +func Pin(currentPath, newPath string, fd *sys.FD) error { + const bpfFSType = 0xcafe4a11 + if newPath == "" { return errors.New("given pinning path cannot be empty") } if currentPath == newPath { return nil } + + var statfs unix.Statfs_t + if err := unix.Statfs(filepath.Dir(newPath), &statfs); err != nil { + return err + } + + fsType := int64(statfs.Type) + if unsafe.Sizeof(statfs.Type) == 4 { + // We're on a 32 
bit arch, where statfs.Type is int32. bpfFSType is a + // negative number when interpreted as int32 so we need to cast via + // uint32 to avoid sign extension. + fsType = int64(uint32(statfs.Type)) + } + + if fsType != bpfFSType { + return fmt.Errorf("%s is not on a bpf filesystem", newPath) + } + + defer runtime.KeepAlive(fd) + if currentPath == "" { - return BPFObjPin(newPath, fd) + return sys.ObjPin(&sys.ObjPinAttr{ + Pathname: sys.NewStringPointer(newPath), + BpfFd: fd.Uint(), + }) } - var err error + // Renameat2 is used instead of os.Rename to disallow the new path replacing // an existing path. - if err = unix.Renameat2(unix.AT_FDCWD, currentPath, unix.AT_FDCWD, newPath, unix.RENAME_NOREPLACE); err == nil { + err := unix.Renameat2(unix.AT_FDCWD, currentPath, unix.AT_FDCWD, newPath, unix.RENAME_NOREPLACE) + if err == nil { // Object is now moved to the new pinning path. return nil } @@ -29,7 +59,10 @@ func Pin(currentPath, newPath string, fd *FD) error { return fmt.Errorf("unable to move pinned object to new path %v: %w", newPath, err) } // Internal state not in sync with the file system so let's fix it. - return BPFObjPin(newPath, fd) + return sys.ObjPin(&sys.ObjPinAttr{ + Pathname: sys.NewStringPointer(newPath), + BpfFd: fd.Uint(), + }) } func Unpin(pinnedPath string) error { diff --git a/cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/sys/doc.go b/cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/sys/doc.go new file mode 100644 index 000000000000..dfe174448e1c --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/sys/doc.go @@ -0,0 +1,6 @@ +// Package sys contains bindings for the BPF syscall. +package sys + +// Regenerate types.go by invoking go generate in the current directory. 
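`Pin` above verifies that the target directory lives on a bpf filesystem by comparing the statfs magic `0xcafe4a11`, casting through `uint32` because on 32-bit arches `statfs.Type` is an `int32` and the magic is negative when sign-extended. A sketch of just that comparison (`fsTypeMatches` is our name, not the library's):

```go
package main

import "fmt"

const bpfFSType = 0xcafe4a11

// fsTypeMatches shows the cast used in Pin above: comparing a
// sign-extended int32 magic directly against bpfFSType fails, so the
// value is first narrowed to uint32 to undo the sign extension.
func fsTypeMatches(raw int32) bool {
	return int64(uint32(raw)) == bpfFSType
}

func main() {
	raw := int32(-0x3501b5ef) // 0xcafe4a11 as seen in a 32-bit statfs.Type
	fmt.Println(int64(raw) == bpfFSType, fsTypeMatches(raw)) // false true
}
```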
+ +//go:generate go run github.com/cilium/ebpf/internal/cmd/gentypes ../../btf/testdata/vmlinux.btf.gz diff --git a/cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/sys/fd.go b/cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/sys/fd.go new file mode 100644 index 000000000000..65517d45e26a --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/sys/fd.go @@ -0,0 +1,96 @@ +package sys + +import ( + "fmt" + "math" + "os" + "runtime" + "strconv" + + "github.com/cilium/ebpf/internal/unix" +) + +var ErrClosedFd = unix.EBADF + +type FD struct { + raw int +} + +func newFD(value int) *FD { + fd := &FD{value} + runtime.SetFinalizer(fd, (*FD).Close) + return fd +} + +// NewFD wraps a raw fd with a finalizer. +// +// You must not use the raw fd after calling this function, since the underlying +// file descriptor number may change. This is because the BPF UAPI assumes that +// zero is not a valid fd value. +func NewFD(value int) (*FD, error) { + if value < 0 { + return nil, fmt.Errorf("invalid fd %d", value) + } + + fd := newFD(value) + if value != 0 { + return fd, nil + } + + dup, err := fd.Dup() + _ = fd.Close() + return dup, err +} + +func (fd *FD) String() string { + return strconv.FormatInt(int64(fd.raw), 10) +} + +func (fd *FD) Int() int { + return fd.raw +} + +func (fd *FD) Uint() uint32 { + if fd.raw < 0 || int64(fd.raw) > math.MaxUint32 { + // Best effort: this is the number most likely to be an invalid file + // descriptor. It is equal to -1 (on two's complement arches). 
+ return math.MaxUint32 + } + return uint32(fd.raw) +} + +func (fd *FD) Close() error { + if fd.raw < 0 { + return nil + } + + value := int(fd.raw) + fd.raw = -1 + + fd.Forget() + return unix.Close(value) +} + +func (fd *FD) Forget() { + runtime.SetFinalizer(fd, nil) +} + +func (fd *FD) Dup() (*FD, error) { + if fd.raw < 0 { + return nil, ErrClosedFd + } + + // Always require the fd to be larger than zero: the BPF API treats the value + // as "no argument provided". + dup, err := unix.FcntlInt(uintptr(fd.raw), unix.F_DUPFD_CLOEXEC, 1) + if err != nil { + return nil, fmt.Errorf("can't dup fd: %v", err) + } + + return newFD(dup), nil +} + +func (fd *FD) File(name string) *os.File { + fd.Forget() + return os.NewFile(uintptr(fd.raw), name) +} diff --git a/cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/ptr.go b/cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/sys/ptr.go similarity index 71% rename from cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/ptr.go rename to cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/sys/ptr.go index f295de72cfeb..a221006888d0 100644 --- a/cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/ptr.go +++ b/cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/sys/ptr.go @@ -1,4 +1,4 @@ -package internal +package sys import ( "unsafe" ) @@ -20,6 +20,13 @@ func NewSlicePointer(buf []byte) Pointer { return Pointer{ptr: unsafe.Pointer(&buf[0])} } +// NewSlicePointerLen creates a 64-bit pointer from a byte slice. +// +// Useful to assign both the pointer and the length in one go. +func NewSlicePointerLen(buf []byte) (Pointer, uint32) { + return NewSlicePointer(buf), uint32(len(buf)) +} + // NewStringPointer creates a 64-bit pointer from a string.
func NewStringPointer(str string) Pointer { p, err := unix.BytePtrFromString(str) diff --git a/cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/ptr_32_be.go b/cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/sys/ptr_32_be.go similarity index 93% rename from cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/ptr_32_be.go rename to cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/sys/ptr_32_be.go index 8c114ddf4763..df903d780b15 100644 --- a/cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/ptr_32_be.go +++ b/cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/sys/ptr_32_be.go @@ -1,7 +1,7 @@ //go:build armbe || mips || mips64p32 // +build armbe mips mips64p32 -package internal +package sys import ( "unsafe" diff --git a/cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/ptr_32_le.go b/cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/sys/ptr_32_le.go similarity index 94% rename from cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/ptr_32_le.go rename to cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/sys/ptr_32_le.go index e65a61e45d34..a6a51edb6e10 100644 --- a/cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/ptr_32_le.go +++ b/cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/sys/ptr_32_le.go @@ -1,7 +1,7 @@ //go:build 386 || amd64p32 || arm || mipsle || mips64p32le // +build 386 amd64p32 arm mipsle mips64p32le -package internal +package sys import ( "unsafe" diff --git a/cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/ptr_64.go b/cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/sys/ptr_64.go similarity index 95% rename from cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/ptr_64.go rename to cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/sys/ptr_64.go index 71a3afe307b3..7c0279e487c7 100644 --- a/cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/ptr_64.go +++ 
b/cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/sys/ptr_64.go @@ -1,7 +1,7 @@ //go:build !386 && !amd64p32 && !arm && !mipsle && !mips64p32le && !armbe && !mips && !mips64p32 // +build !386,!amd64p32,!arm,!mipsle,!mips64p32le,!armbe,!mips,!mips64p32 -package internal +package sys import ( "unsafe" diff --git a/cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/sys/syscall.go b/cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/sys/syscall.go new file mode 100644 index 000000000000..2a5935dc9123 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/sys/syscall.go @@ -0,0 +1,126 @@ +package sys + +import ( + "runtime" + "syscall" + "unsafe" + + "github.com/cilium/ebpf/internal/unix" +) + +// BPF wraps SYS_BPF. +// +// Any pointers contained in attr must use the Pointer type from this package. +func BPF(cmd Cmd, attr unsafe.Pointer, size uintptr) (uintptr, error) { + for { + r1, _, errNo := unix.Syscall(unix.SYS_BPF, uintptr(cmd), uintptr(attr), size) + runtime.KeepAlive(attr) + + // As of ~4.20 the verifier can be interrupted by a signal, + // and returns EAGAIN in that case. + if errNo == unix.EAGAIN && cmd == BPF_PROG_LOAD { + continue + } + + var err error + if errNo != 0 { + err = wrappedErrno{errNo} + } + + return r1, err + } +} + +// Info is implemented by all structs that can be passed to the ObjInfo syscall. 
+// +// MapInfo +// ProgInfo +// LinkInfo +// BtfInfo +type Info interface { + info() (unsafe.Pointer, uint32) +} + +var _ Info = (*MapInfo)(nil) + +func (i *MapInfo) info() (unsafe.Pointer, uint32) { + return unsafe.Pointer(i), uint32(unsafe.Sizeof(*i)) +} + +var _ Info = (*ProgInfo)(nil) + +func (i *ProgInfo) info() (unsafe.Pointer, uint32) { + return unsafe.Pointer(i), uint32(unsafe.Sizeof(*i)) +} + +var _ Info = (*LinkInfo)(nil) + +func (i *LinkInfo) info() (unsafe.Pointer, uint32) { + return unsafe.Pointer(i), uint32(unsafe.Sizeof(*i)) +} + +var _ Info = (*BtfInfo)(nil) + +func (i *BtfInfo) info() (unsafe.Pointer, uint32) { + return unsafe.Pointer(i), uint32(unsafe.Sizeof(*i)) +} + +// ObjInfo retrieves information about a BPF Fd. +// +// info may be one of MapInfo, ProgInfo, LinkInfo and BtfInfo. +func ObjInfo(fd *FD, info Info) error { + ptr, len := info.info() + err := ObjGetInfoByFd(&ObjGetInfoByFdAttr{ + BpfFd: fd.Uint(), + InfoLen: len, + Info: NewPointer(ptr), + }) + runtime.KeepAlive(fd) + return err +} + +// ObjName is a null-terminated string made up of +// 'A-Za-z0-9_' characters. +type ObjName [unix.BPF_OBJ_NAME_LEN]byte + +// NewObjName truncates the result if it is too long. +func NewObjName(name string) ObjName { + var result ObjName + copy(result[:unix.BPF_OBJ_NAME_LEN-1], name) + return result +} + +// LinkID uniquely identifies a bpf_link. +type LinkID uint32 + +// BTFID uniquely identifies a BTF blob loaded into the kernel. +type BTFID uint32 + +// wrappedErrno wraps syscall.Errno to prevent direct comparisons with +// syscall.E* or unix.E* constants. +// +// You should never export an error of this type.
+type wrappedErrno struct { + syscall.Errno +} + +func (we wrappedErrno) Unwrap() error { + return we.Errno +} + +type syscallError struct { + error + errno syscall.Errno +} + +func Error(err error, errno syscall.Errno) error { + return &syscallError{err, errno} +} + +func (se *syscallError) Is(target error) bool { + return target == se.error +} + +func (se *syscallError) Unwrap() error { + return se.errno +} diff --git a/cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/sys/types.go b/cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/sys/types.go new file mode 100644 index 000000000000..291e3a6196cb --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/sys/types.go @@ -0,0 +1,1052 @@ +// Code generated by internal/cmd/gentypes; DO NOT EDIT. + +package sys + +import ( + "unsafe" +) + +type AdjRoomMode int32 + +const ( + BPF_ADJ_ROOM_NET AdjRoomMode = 0 + BPF_ADJ_ROOM_MAC AdjRoomMode = 1 +) + +type AttachType int32 + +const ( + BPF_CGROUP_INET_INGRESS AttachType = 0 + BPF_CGROUP_INET_EGRESS AttachType = 1 + BPF_CGROUP_INET_SOCK_CREATE AttachType = 2 + BPF_CGROUP_SOCK_OPS AttachType = 3 + BPF_SK_SKB_STREAM_PARSER AttachType = 4 + BPF_SK_SKB_STREAM_VERDICT AttachType = 5 + BPF_CGROUP_DEVICE AttachType = 6 + BPF_SK_MSG_VERDICT AttachType = 7 + BPF_CGROUP_INET4_BIND AttachType = 8 + BPF_CGROUP_INET6_BIND AttachType = 9 + BPF_CGROUP_INET4_CONNECT AttachType = 10 + BPF_CGROUP_INET6_CONNECT AttachType = 11 + BPF_CGROUP_INET4_POST_BIND AttachType = 12 + BPF_CGROUP_INET6_POST_BIND AttachType = 13 + BPF_CGROUP_UDP4_SENDMSG AttachType = 14 + BPF_CGROUP_UDP6_SENDMSG AttachType = 15 + BPF_LIRC_MODE2 AttachType = 16 + BPF_FLOW_DISSECTOR AttachType = 17 + BPF_CGROUP_SYSCTL AttachType = 18 + BPF_CGROUP_UDP4_RECVMSG AttachType = 19 + BPF_CGROUP_UDP6_RECVMSG AttachType = 20 + BPF_CGROUP_GETSOCKOPT AttachType = 21 + BPF_CGROUP_SETSOCKOPT AttachType = 22 + BPF_TRACE_RAW_TP AttachType = 23 + BPF_TRACE_FENTRY AttachType = 24 + BPF_TRACE_FEXIT 
AttachType = 25 + BPF_MODIFY_RETURN AttachType = 26 + BPF_LSM_MAC AttachType = 27 + BPF_TRACE_ITER AttachType = 28 + BPF_CGROUP_INET4_GETPEERNAME AttachType = 29 + BPF_CGROUP_INET6_GETPEERNAME AttachType = 30 + BPF_CGROUP_INET4_GETSOCKNAME AttachType = 31 + BPF_CGROUP_INET6_GETSOCKNAME AttachType = 32 + BPF_XDP_DEVMAP AttachType = 33 + BPF_CGROUP_INET_SOCK_RELEASE AttachType = 34 + BPF_XDP_CPUMAP AttachType = 35 + BPF_SK_LOOKUP AttachType = 36 + BPF_XDP AttachType = 37 + BPF_SK_SKB_VERDICT AttachType = 38 + BPF_SK_REUSEPORT_SELECT AttachType = 39 + BPF_SK_REUSEPORT_SELECT_OR_MIGRATE AttachType = 40 + BPF_PERF_EVENT AttachType = 41 + BPF_TRACE_KPROBE_MULTI AttachType = 42 + __MAX_BPF_ATTACH_TYPE AttachType = 43 +) + +type Cmd int32 + +const ( + BPF_MAP_CREATE Cmd = 0 + BPF_MAP_LOOKUP_ELEM Cmd = 1 + BPF_MAP_UPDATE_ELEM Cmd = 2 + BPF_MAP_DELETE_ELEM Cmd = 3 + BPF_MAP_GET_NEXT_KEY Cmd = 4 + BPF_PROG_LOAD Cmd = 5 + BPF_OBJ_PIN Cmd = 6 + BPF_OBJ_GET Cmd = 7 + BPF_PROG_ATTACH Cmd = 8 + BPF_PROG_DETACH Cmd = 9 + BPF_PROG_TEST_RUN Cmd = 10 + BPF_PROG_RUN Cmd = 10 + BPF_PROG_GET_NEXT_ID Cmd = 11 + BPF_MAP_GET_NEXT_ID Cmd = 12 + BPF_PROG_GET_FD_BY_ID Cmd = 13 + BPF_MAP_GET_FD_BY_ID Cmd = 14 + BPF_OBJ_GET_INFO_BY_FD Cmd = 15 + BPF_PROG_QUERY Cmd = 16 + BPF_RAW_TRACEPOINT_OPEN Cmd = 17 + BPF_BTF_LOAD Cmd = 18 + BPF_BTF_GET_FD_BY_ID Cmd = 19 + BPF_TASK_FD_QUERY Cmd = 20 + BPF_MAP_LOOKUP_AND_DELETE_ELEM Cmd = 21 + BPF_MAP_FREEZE Cmd = 22 + BPF_BTF_GET_NEXT_ID Cmd = 23 + BPF_MAP_LOOKUP_BATCH Cmd = 24 + BPF_MAP_LOOKUP_AND_DELETE_BATCH Cmd = 25 + BPF_MAP_UPDATE_BATCH Cmd = 26 + BPF_MAP_DELETE_BATCH Cmd = 27 + BPF_LINK_CREATE Cmd = 28 + BPF_LINK_UPDATE Cmd = 29 + BPF_LINK_GET_FD_BY_ID Cmd = 30 + BPF_LINK_GET_NEXT_ID Cmd = 31 + BPF_ENABLE_STATS Cmd = 32 + BPF_ITER_CREATE Cmd = 33 + BPF_LINK_DETACH Cmd = 34 + BPF_PROG_BIND_MAP Cmd = 35 +) + +type FunctionId int32 + +const ( + BPF_FUNC_unspec FunctionId = 0 + BPF_FUNC_map_lookup_elem FunctionId = 1 + BPF_FUNC_map_update_elem FunctionId 
= 2 + BPF_FUNC_map_delete_elem FunctionId = 3 + BPF_FUNC_probe_read FunctionId = 4 + BPF_FUNC_ktime_get_ns FunctionId = 5 + BPF_FUNC_trace_printk FunctionId = 6 + BPF_FUNC_get_prandom_u32 FunctionId = 7 + BPF_FUNC_get_smp_processor_id FunctionId = 8 + BPF_FUNC_skb_store_bytes FunctionId = 9 + BPF_FUNC_l3_csum_replace FunctionId = 10 + BPF_FUNC_l4_csum_replace FunctionId = 11 + BPF_FUNC_tail_call FunctionId = 12 + BPF_FUNC_clone_redirect FunctionId = 13 + BPF_FUNC_get_current_pid_tgid FunctionId = 14 + BPF_FUNC_get_current_uid_gid FunctionId = 15 + BPF_FUNC_get_current_comm FunctionId = 16 + BPF_FUNC_get_cgroup_classid FunctionId = 17 + BPF_FUNC_skb_vlan_push FunctionId = 18 + BPF_FUNC_skb_vlan_pop FunctionId = 19 + BPF_FUNC_skb_get_tunnel_key FunctionId = 20 + BPF_FUNC_skb_set_tunnel_key FunctionId = 21 + BPF_FUNC_perf_event_read FunctionId = 22 + BPF_FUNC_redirect FunctionId = 23 + BPF_FUNC_get_route_realm FunctionId = 24 + BPF_FUNC_perf_event_output FunctionId = 25 + BPF_FUNC_skb_load_bytes FunctionId = 26 + BPF_FUNC_get_stackid FunctionId = 27 + BPF_FUNC_csum_diff FunctionId = 28 + BPF_FUNC_skb_get_tunnel_opt FunctionId = 29 + BPF_FUNC_skb_set_tunnel_opt FunctionId = 30 + BPF_FUNC_skb_change_proto FunctionId = 31 + BPF_FUNC_skb_change_type FunctionId = 32 + BPF_FUNC_skb_under_cgroup FunctionId = 33 + BPF_FUNC_get_hash_recalc FunctionId = 34 + BPF_FUNC_get_current_task FunctionId = 35 + BPF_FUNC_probe_write_user FunctionId = 36 + BPF_FUNC_current_task_under_cgroup FunctionId = 37 + BPF_FUNC_skb_change_tail FunctionId = 38 + BPF_FUNC_skb_pull_data FunctionId = 39 + BPF_FUNC_csum_update FunctionId = 40 + BPF_FUNC_set_hash_invalid FunctionId = 41 + BPF_FUNC_get_numa_node_id FunctionId = 42 + BPF_FUNC_skb_change_head FunctionId = 43 + BPF_FUNC_xdp_adjust_head FunctionId = 44 + BPF_FUNC_probe_read_str FunctionId = 45 + BPF_FUNC_get_socket_cookie FunctionId = 46 + BPF_FUNC_get_socket_uid FunctionId = 47 + BPF_FUNC_set_hash FunctionId = 48 + BPF_FUNC_setsockopt 
FunctionId = 49 + BPF_FUNC_skb_adjust_room FunctionId = 50 + BPF_FUNC_redirect_map FunctionId = 51 + BPF_FUNC_sk_redirect_map FunctionId = 52 + BPF_FUNC_sock_map_update FunctionId = 53 + BPF_FUNC_xdp_adjust_meta FunctionId = 54 + BPF_FUNC_perf_event_read_value FunctionId = 55 + BPF_FUNC_perf_prog_read_value FunctionId = 56 + BPF_FUNC_getsockopt FunctionId = 57 + BPF_FUNC_override_return FunctionId = 58 + BPF_FUNC_sock_ops_cb_flags_set FunctionId = 59 + BPF_FUNC_msg_redirect_map FunctionId = 60 + BPF_FUNC_msg_apply_bytes FunctionId = 61 + BPF_FUNC_msg_cork_bytes FunctionId = 62 + BPF_FUNC_msg_pull_data FunctionId = 63 + BPF_FUNC_bind FunctionId = 64 + BPF_FUNC_xdp_adjust_tail FunctionId = 65 + BPF_FUNC_skb_get_xfrm_state FunctionId = 66 + BPF_FUNC_get_stack FunctionId = 67 + BPF_FUNC_skb_load_bytes_relative FunctionId = 68 + BPF_FUNC_fib_lookup FunctionId = 69 + BPF_FUNC_sock_hash_update FunctionId = 70 + BPF_FUNC_msg_redirect_hash FunctionId = 71 + BPF_FUNC_sk_redirect_hash FunctionId = 72 + BPF_FUNC_lwt_push_encap FunctionId = 73 + BPF_FUNC_lwt_seg6_store_bytes FunctionId = 74 + BPF_FUNC_lwt_seg6_adjust_srh FunctionId = 75 + BPF_FUNC_lwt_seg6_action FunctionId = 76 + BPF_FUNC_rc_repeat FunctionId = 77 + BPF_FUNC_rc_keydown FunctionId = 78 + BPF_FUNC_skb_cgroup_id FunctionId = 79 + BPF_FUNC_get_current_cgroup_id FunctionId = 80 + BPF_FUNC_get_local_storage FunctionId = 81 + BPF_FUNC_sk_select_reuseport FunctionId = 82 + BPF_FUNC_skb_ancestor_cgroup_id FunctionId = 83 + BPF_FUNC_sk_lookup_tcp FunctionId = 84 + BPF_FUNC_sk_lookup_udp FunctionId = 85 + BPF_FUNC_sk_release FunctionId = 86 + BPF_FUNC_map_push_elem FunctionId = 87 + BPF_FUNC_map_pop_elem FunctionId = 88 + BPF_FUNC_map_peek_elem FunctionId = 89 + BPF_FUNC_msg_push_data FunctionId = 90 + BPF_FUNC_msg_pop_data FunctionId = 91 + BPF_FUNC_rc_pointer_rel FunctionId = 92 + BPF_FUNC_spin_lock FunctionId = 93 + BPF_FUNC_spin_unlock FunctionId = 94 + BPF_FUNC_sk_fullsock FunctionId = 95 + BPF_FUNC_tcp_sock 
FunctionId = 96 + BPF_FUNC_skb_ecn_set_ce FunctionId = 97 + BPF_FUNC_get_listener_sock FunctionId = 98 + BPF_FUNC_skc_lookup_tcp FunctionId = 99 + BPF_FUNC_tcp_check_syncookie FunctionId = 100 + BPF_FUNC_sysctl_get_name FunctionId = 101 + BPF_FUNC_sysctl_get_current_value FunctionId = 102 + BPF_FUNC_sysctl_get_new_value FunctionId = 103 + BPF_FUNC_sysctl_set_new_value FunctionId = 104 + BPF_FUNC_strtol FunctionId = 105 + BPF_FUNC_strtoul FunctionId = 106 + BPF_FUNC_sk_storage_get FunctionId = 107 + BPF_FUNC_sk_storage_delete FunctionId = 108 + BPF_FUNC_send_signal FunctionId = 109 + BPF_FUNC_tcp_gen_syncookie FunctionId = 110 + BPF_FUNC_skb_output FunctionId = 111 + BPF_FUNC_probe_read_user FunctionId = 112 + BPF_FUNC_probe_read_kernel FunctionId = 113 + BPF_FUNC_probe_read_user_str FunctionId = 114 + BPF_FUNC_probe_read_kernel_str FunctionId = 115 + BPF_FUNC_tcp_send_ack FunctionId = 116 + BPF_FUNC_send_signal_thread FunctionId = 117 + BPF_FUNC_jiffies64 FunctionId = 118 + BPF_FUNC_read_branch_records FunctionId = 119 + BPF_FUNC_get_ns_current_pid_tgid FunctionId = 120 + BPF_FUNC_xdp_output FunctionId = 121 + BPF_FUNC_get_netns_cookie FunctionId = 122 + BPF_FUNC_get_current_ancestor_cgroup_id FunctionId = 123 + BPF_FUNC_sk_assign FunctionId = 124 + BPF_FUNC_ktime_get_boot_ns FunctionId = 125 + BPF_FUNC_seq_printf FunctionId = 126 + BPF_FUNC_seq_write FunctionId = 127 + BPF_FUNC_sk_cgroup_id FunctionId = 128 + BPF_FUNC_sk_ancestor_cgroup_id FunctionId = 129 + BPF_FUNC_ringbuf_output FunctionId = 130 + BPF_FUNC_ringbuf_reserve FunctionId = 131 + BPF_FUNC_ringbuf_submit FunctionId = 132 + BPF_FUNC_ringbuf_discard FunctionId = 133 + BPF_FUNC_ringbuf_query FunctionId = 134 + BPF_FUNC_csum_level FunctionId = 135 + BPF_FUNC_skc_to_tcp6_sock FunctionId = 136 + BPF_FUNC_skc_to_tcp_sock FunctionId = 137 + BPF_FUNC_skc_to_tcp_timewait_sock FunctionId = 138 + BPF_FUNC_skc_to_tcp_request_sock FunctionId = 139 + BPF_FUNC_skc_to_udp6_sock FunctionId = 140 + 
BPF_FUNC_get_task_stack FunctionId = 141 + BPF_FUNC_load_hdr_opt FunctionId = 142 + BPF_FUNC_store_hdr_opt FunctionId = 143 + BPF_FUNC_reserve_hdr_opt FunctionId = 144 + BPF_FUNC_inode_storage_get FunctionId = 145 + BPF_FUNC_inode_storage_delete FunctionId = 146 + BPF_FUNC_d_path FunctionId = 147 + BPF_FUNC_copy_from_user FunctionId = 148 + BPF_FUNC_snprintf_btf FunctionId = 149 + BPF_FUNC_seq_printf_btf FunctionId = 150 + BPF_FUNC_skb_cgroup_classid FunctionId = 151 + BPF_FUNC_redirect_neigh FunctionId = 152 + BPF_FUNC_per_cpu_ptr FunctionId = 153 + BPF_FUNC_this_cpu_ptr FunctionId = 154 + BPF_FUNC_redirect_peer FunctionId = 155 + BPF_FUNC_task_storage_get FunctionId = 156 + BPF_FUNC_task_storage_delete FunctionId = 157 + BPF_FUNC_get_current_task_btf FunctionId = 158 + BPF_FUNC_bprm_opts_set FunctionId = 159 + BPF_FUNC_ktime_get_coarse_ns FunctionId = 160 + BPF_FUNC_ima_inode_hash FunctionId = 161 + BPF_FUNC_sock_from_file FunctionId = 162 + BPF_FUNC_check_mtu FunctionId = 163 + BPF_FUNC_for_each_map_elem FunctionId = 164 + BPF_FUNC_snprintf FunctionId = 165 + BPF_FUNC_sys_bpf FunctionId = 166 + BPF_FUNC_btf_find_by_name_kind FunctionId = 167 + BPF_FUNC_sys_close FunctionId = 168 + BPF_FUNC_timer_init FunctionId = 169 + BPF_FUNC_timer_set_callback FunctionId = 170 + BPF_FUNC_timer_start FunctionId = 171 + BPF_FUNC_timer_cancel FunctionId = 172 + BPF_FUNC_get_func_ip FunctionId = 173 + BPF_FUNC_get_attach_cookie FunctionId = 174 + BPF_FUNC_task_pt_regs FunctionId = 175 + BPF_FUNC_get_branch_snapshot FunctionId = 176 + BPF_FUNC_trace_vprintk FunctionId = 177 + BPF_FUNC_skc_to_unix_sock FunctionId = 178 + BPF_FUNC_kallsyms_lookup_name FunctionId = 179 + BPF_FUNC_find_vma FunctionId = 180 + BPF_FUNC_loop FunctionId = 181 + BPF_FUNC_strncmp FunctionId = 182 + BPF_FUNC_get_func_arg FunctionId = 183 + BPF_FUNC_get_func_ret FunctionId = 184 + BPF_FUNC_get_func_arg_cnt FunctionId = 185 + BPF_FUNC_get_retval FunctionId = 186 + BPF_FUNC_set_retval FunctionId = 187 + 
BPF_FUNC_xdp_get_buff_len FunctionId = 188 + BPF_FUNC_xdp_load_bytes FunctionId = 189 + BPF_FUNC_xdp_store_bytes FunctionId = 190 + BPF_FUNC_copy_from_user_task FunctionId = 191 + BPF_FUNC_skb_set_tstamp FunctionId = 192 + BPF_FUNC_ima_file_hash FunctionId = 193 + __BPF_FUNC_MAX_ID FunctionId = 194 +) + +type HdrStartOff int32 + +const ( + BPF_HDR_START_MAC HdrStartOff = 0 + BPF_HDR_START_NET HdrStartOff = 1 +) + +type LinkType int32 + +const ( + BPF_LINK_TYPE_UNSPEC LinkType = 0 + BPF_LINK_TYPE_RAW_TRACEPOINT LinkType = 1 + BPF_LINK_TYPE_TRACING LinkType = 2 + BPF_LINK_TYPE_CGROUP LinkType = 3 + BPF_LINK_TYPE_ITER LinkType = 4 + BPF_LINK_TYPE_NETNS LinkType = 5 + BPF_LINK_TYPE_XDP LinkType = 6 + BPF_LINK_TYPE_PERF_EVENT LinkType = 7 + BPF_LINK_TYPE_KPROBE_MULTI LinkType = 8 + MAX_BPF_LINK_TYPE LinkType = 9 +) + +type MapType int32 + +const ( + BPF_MAP_TYPE_UNSPEC MapType = 0 + BPF_MAP_TYPE_HASH MapType = 1 + BPF_MAP_TYPE_ARRAY MapType = 2 + BPF_MAP_TYPE_PROG_ARRAY MapType = 3 + BPF_MAP_TYPE_PERF_EVENT_ARRAY MapType = 4 + BPF_MAP_TYPE_PERCPU_HASH MapType = 5 + BPF_MAP_TYPE_PERCPU_ARRAY MapType = 6 + BPF_MAP_TYPE_STACK_TRACE MapType = 7 + BPF_MAP_TYPE_CGROUP_ARRAY MapType = 8 + BPF_MAP_TYPE_LRU_HASH MapType = 9 + BPF_MAP_TYPE_LRU_PERCPU_HASH MapType = 10 + BPF_MAP_TYPE_LPM_TRIE MapType = 11 + BPF_MAP_TYPE_ARRAY_OF_MAPS MapType = 12 + BPF_MAP_TYPE_HASH_OF_MAPS MapType = 13 + BPF_MAP_TYPE_DEVMAP MapType = 14 + BPF_MAP_TYPE_SOCKMAP MapType = 15 + BPF_MAP_TYPE_CPUMAP MapType = 16 + BPF_MAP_TYPE_XSKMAP MapType = 17 + BPF_MAP_TYPE_SOCKHASH MapType = 18 + BPF_MAP_TYPE_CGROUP_STORAGE MapType = 19 + BPF_MAP_TYPE_REUSEPORT_SOCKARRAY MapType = 20 + BPF_MAP_TYPE_PERCPU_CGROUP_STORAGE MapType = 21 + BPF_MAP_TYPE_QUEUE MapType = 22 + BPF_MAP_TYPE_STACK MapType = 23 + BPF_MAP_TYPE_SK_STORAGE MapType = 24 + BPF_MAP_TYPE_DEVMAP_HASH MapType = 25 + BPF_MAP_TYPE_STRUCT_OPS MapType = 26 + BPF_MAP_TYPE_RINGBUF MapType = 27 + BPF_MAP_TYPE_INODE_STORAGE MapType = 28 + 
BPF_MAP_TYPE_TASK_STORAGE MapType = 29 + BPF_MAP_TYPE_BLOOM_FILTER MapType = 30 +) + +type ProgType int32 + +const ( + BPF_PROG_TYPE_UNSPEC ProgType = 0 + BPF_PROG_TYPE_SOCKET_FILTER ProgType = 1 + BPF_PROG_TYPE_KPROBE ProgType = 2 + BPF_PROG_TYPE_SCHED_CLS ProgType = 3 + BPF_PROG_TYPE_SCHED_ACT ProgType = 4 + BPF_PROG_TYPE_TRACEPOINT ProgType = 5 + BPF_PROG_TYPE_XDP ProgType = 6 + BPF_PROG_TYPE_PERF_EVENT ProgType = 7 + BPF_PROG_TYPE_CGROUP_SKB ProgType = 8 + BPF_PROG_TYPE_CGROUP_SOCK ProgType = 9 + BPF_PROG_TYPE_LWT_IN ProgType = 10 + BPF_PROG_TYPE_LWT_OUT ProgType = 11 + BPF_PROG_TYPE_LWT_XMIT ProgType = 12 + BPF_PROG_TYPE_SOCK_OPS ProgType = 13 + BPF_PROG_TYPE_SK_SKB ProgType = 14 + BPF_PROG_TYPE_CGROUP_DEVICE ProgType = 15 + BPF_PROG_TYPE_SK_MSG ProgType = 16 + BPF_PROG_TYPE_RAW_TRACEPOINT ProgType = 17 + BPF_PROG_TYPE_CGROUP_SOCK_ADDR ProgType = 18 + BPF_PROG_TYPE_LWT_SEG6LOCAL ProgType = 19 + BPF_PROG_TYPE_LIRC_MODE2 ProgType = 20 + BPF_PROG_TYPE_SK_REUSEPORT ProgType = 21 + BPF_PROG_TYPE_FLOW_DISSECTOR ProgType = 22 + BPF_PROG_TYPE_CGROUP_SYSCTL ProgType = 23 + BPF_PROG_TYPE_RAW_TRACEPOINT_WRITABLE ProgType = 24 + BPF_PROG_TYPE_CGROUP_SOCKOPT ProgType = 25 + BPF_PROG_TYPE_TRACING ProgType = 26 + BPF_PROG_TYPE_STRUCT_OPS ProgType = 27 + BPF_PROG_TYPE_EXT ProgType = 28 + BPF_PROG_TYPE_LSM ProgType = 29 + BPF_PROG_TYPE_SK_LOOKUP ProgType = 30 + BPF_PROG_TYPE_SYSCALL ProgType = 31 +) + +type RetCode int32 + +const ( + BPF_OK RetCode = 0 + BPF_DROP RetCode = 2 + BPF_REDIRECT RetCode = 7 + BPF_LWT_REROUTE RetCode = 128 +) + +type SkAction int32 + +const ( + SK_DROP SkAction = 0 + SK_PASS SkAction = 1 +) + +type StackBuildIdStatus int32 + +const ( + BPF_STACK_BUILD_ID_EMPTY StackBuildIdStatus = 0 + BPF_STACK_BUILD_ID_VALID StackBuildIdStatus = 1 + BPF_STACK_BUILD_ID_IP StackBuildIdStatus = 2 +) + +type StatsType int32 + +const ( + BPF_STATS_RUN_TIME StatsType = 0 +) + +type XdpAction int32 + +const ( + XDP_ABORTED XdpAction = 0 + XDP_DROP XdpAction = 1 + XDP_PASS 
XdpAction = 2 + XDP_TX XdpAction = 3 + XDP_REDIRECT XdpAction = 4 +) + +type BtfInfo struct { + Btf Pointer + BtfSize uint32 + Id BTFID + Name Pointer + NameLen uint32 + KernelBtf uint32 +} + +type FuncInfo struct { + InsnOff uint32 + TypeId uint32 +} + +type LineInfo struct { + InsnOff uint32 + FileNameOff uint32 + LineOff uint32 + LineCol uint32 +} + +type LinkInfo struct { + Type LinkType + Id LinkID + ProgId uint32 + _ [4]byte + Extra [16]uint8 +} + +type MapInfo struct { + Type uint32 + Id uint32 + KeySize uint32 + ValueSize uint32 + MaxEntries uint32 + MapFlags uint32 + Name ObjName + Ifindex uint32 + BtfVmlinuxValueTypeId uint32 + NetnsDev uint64 + NetnsIno uint64 + BtfId uint32 + BtfKeyTypeId uint32 + BtfValueTypeId uint32 + _ [4]byte + MapExtra uint64 +} + +type ProgInfo struct { + Type uint32 + Id uint32 + Tag [8]uint8 + JitedProgLen uint32 + XlatedProgLen uint32 + JitedProgInsns uint64 + XlatedProgInsns Pointer + LoadTime uint64 + CreatedByUid uint32 + NrMapIds uint32 + MapIds Pointer + Name ObjName + Ifindex uint32 + _ [4]byte /* unsupported bitfield */ + NetnsDev uint64 + NetnsIno uint64 + NrJitedKsyms uint32 + NrJitedFuncLens uint32 + JitedKsyms uint64 + JitedFuncLens uint64 + BtfId uint32 + FuncInfoRecSize uint32 + FuncInfo uint64 + NrFuncInfo uint32 + NrLineInfo uint32 + LineInfo uint64 + JitedLineInfo uint64 + NrJitedLineInfo uint32 + LineInfoRecSize uint32 + JitedLineInfoRecSize uint32 + NrProgTags uint32 + ProgTags uint64 + RunTimeNs uint64 + RunCnt uint64 + RecursionMisses uint64 + VerifiedInsns uint32 + _ [4]byte +} + +type SkLookup struct { + Cookie uint64 + Family uint32 + Protocol uint32 + RemoteIp4 [4]uint8 + RemoteIp6 [16]uint8 + RemotePort uint16 + _ [2]byte + LocalIp4 [4]uint8 + LocalIp6 [16]uint8 + LocalPort uint32 + IngressIfindex uint32 + _ [4]byte +} + +type XdpMd struct { + Data uint32 + DataEnd uint32 + DataMeta uint32 + IngressIfindex uint32 + RxQueueIndex uint32 + EgressIfindex uint32 +} + +type BtfGetFdByIdAttr struct{ Id uint32 
} + +func BtfGetFdById(attr *BtfGetFdByIdAttr) (*FD, error) { + fd, err := BPF(BPF_BTF_GET_FD_BY_ID, unsafe.Pointer(attr), unsafe.Sizeof(*attr)) + if err != nil { + return nil, err + } + return NewFD(int(fd)) +} + +type BtfGetNextIdAttr struct { + Id BTFID + NextId BTFID +} + +func BtfGetNextId(attr *BtfGetNextIdAttr) error { + _, err := BPF(BPF_BTF_GET_NEXT_ID, unsafe.Pointer(attr), unsafe.Sizeof(*attr)) + return err +} + +type BtfLoadAttr struct { + Btf Pointer + BtfLogBuf Pointer + BtfSize uint32 + BtfLogSize uint32 + BtfLogLevel uint32 + _ [4]byte +} + +func BtfLoad(attr *BtfLoadAttr) (*FD, error) { + fd, err := BPF(BPF_BTF_LOAD, unsafe.Pointer(attr), unsafe.Sizeof(*attr)) + if err != nil { + return nil, err + } + return NewFD(int(fd)) +} + +type EnableStatsAttr struct{ Type uint32 } + +func EnableStats(attr *EnableStatsAttr) (*FD, error) { + fd, err := BPF(BPF_ENABLE_STATS, unsafe.Pointer(attr), unsafe.Sizeof(*attr)) + if err != nil { + return nil, err + } + return NewFD(int(fd)) +} + +type IterCreateAttr struct { + LinkFd uint32 + Flags uint32 +} + +func IterCreate(attr *IterCreateAttr) (*FD, error) { + fd, err := BPF(BPF_ITER_CREATE, unsafe.Pointer(attr), unsafe.Sizeof(*attr)) + if err != nil { + return nil, err + } + return NewFD(int(fd)) +} + +type LinkCreateAttr struct { + ProgFd uint32 + TargetFd uint32 + AttachType AttachType + Flags uint32 + TargetBtfId uint32 + _ [28]byte +} + +func LinkCreate(attr *LinkCreateAttr) (*FD, error) { + fd, err := BPF(BPF_LINK_CREATE, unsafe.Pointer(attr), unsafe.Sizeof(*attr)) + if err != nil { + return nil, err + } + return NewFD(int(fd)) +} + +type LinkCreateIterAttr struct { + ProgFd uint32 + TargetFd uint32 + AttachType AttachType + Flags uint32 + IterInfo Pointer + IterInfoLen uint32 + _ [20]byte +} + +func LinkCreateIter(attr *LinkCreateIterAttr) (*FD, error) { + fd, err := BPF(BPF_LINK_CREATE, unsafe.Pointer(attr), unsafe.Sizeof(*attr)) + if err != nil { + return nil, err + } + return NewFD(int(fd)) +} + +type 
LinkCreatePerfEventAttr struct { + ProgFd uint32 + TargetFd uint32 + AttachType AttachType + Flags uint32 + BpfCookie uint64 + _ [24]byte +} + +func LinkCreatePerfEvent(attr *LinkCreatePerfEventAttr) (*FD, error) { + fd, err := BPF(BPF_LINK_CREATE, unsafe.Pointer(attr), unsafe.Sizeof(*attr)) + if err != nil { + return nil, err + } + return NewFD(int(fd)) +} + +type LinkUpdateAttr struct { + LinkFd uint32 + NewProgFd uint32 + Flags uint32 + OldProgFd uint32 +} + +func LinkUpdate(attr *LinkUpdateAttr) error { + _, err := BPF(BPF_LINK_UPDATE, unsafe.Pointer(attr), unsafe.Sizeof(*attr)) + return err +} + +type MapCreateAttr struct { + MapType MapType + KeySize uint32 + ValueSize uint32 + MaxEntries uint32 + MapFlags uint32 + InnerMapFd uint32 + NumaNode uint32 + MapName ObjName + MapIfindex uint32 + BtfFd uint32 + BtfKeyTypeId uint32 + BtfValueTypeId uint32 + BtfVmlinuxValueTypeId uint32 + MapExtra uint64 +} + +func MapCreate(attr *MapCreateAttr) (*FD, error) { + fd, err := BPF(BPF_MAP_CREATE, unsafe.Pointer(attr), unsafe.Sizeof(*attr)) + if err != nil { + return nil, err + } + return NewFD(int(fd)) +} + +type MapDeleteBatchAttr struct { + InBatch Pointer + OutBatch Pointer + Keys Pointer + Values Pointer + Count uint32 + MapFd uint32 + ElemFlags uint64 + Flags uint64 +} + +func MapDeleteBatch(attr *MapDeleteBatchAttr) error { + _, err := BPF(BPF_MAP_DELETE_BATCH, unsafe.Pointer(attr), unsafe.Sizeof(*attr)) + return err +} + +type MapDeleteElemAttr struct { + MapFd uint32 + _ [4]byte + Key Pointer + Value Pointer + Flags uint64 +} + +func MapDeleteElem(attr *MapDeleteElemAttr) error { + _, err := BPF(BPF_MAP_DELETE_ELEM, unsafe.Pointer(attr), unsafe.Sizeof(*attr)) + return err +} + +type MapFreezeAttr struct{ MapFd uint32 } + +func MapFreeze(attr *MapFreezeAttr) error { + _, err := BPF(BPF_MAP_FREEZE, unsafe.Pointer(attr), unsafe.Sizeof(*attr)) + return err +} + +type MapGetFdByIdAttr struct{ Id uint32 } + +func MapGetFdById(attr *MapGetFdByIdAttr) (*FD, error) { + fd, 
err := BPF(BPF_MAP_GET_FD_BY_ID, unsafe.Pointer(attr), unsafe.Sizeof(*attr)) + if err != nil { + return nil, err + } + return NewFD(int(fd)) +} + +type MapGetNextIdAttr struct { + Id uint32 + NextId uint32 +} + +func MapGetNextId(attr *MapGetNextIdAttr) error { + _, err := BPF(BPF_MAP_GET_NEXT_ID, unsafe.Pointer(attr), unsafe.Sizeof(*attr)) + return err +} + +type MapGetNextKeyAttr struct { + MapFd uint32 + _ [4]byte + Key Pointer + NextKey Pointer +} + +func MapGetNextKey(attr *MapGetNextKeyAttr) error { + _, err := BPF(BPF_MAP_GET_NEXT_KEY, unsafe.Pointer(attr), unsafe.Sizeof(*attr)) + return err +} + +type MapLookupAndDeleteBatchAttr struct { + InBatch Pointer + OutBatch Pointer + Keys Pointer + Values Pointer + Count uint32 + MapFd uint32 + ElemFlags uint64 + Flags uint64 +} + +func MapLookupAndDeleteBatch(attr *MapLookupAndDeleteBatchAttr) error { + _, err := BPF(BPF_MAP_LOOKUP_AND_DELETE_BATCH, unsafe.Pointer(attr), unsafe.Sizeof(*attr)) + return err +} + +type MapLookupAndDeleteElemAttr struct { + MapFd uint32 + _ [4]byte + Key Pointer + Value Pointer + Flags uint64 +} + +func MapLookupAndDeleteElem(attr *MapLookupAndDeleteElemAttr) error { + _, err := BPF(BPF_MAP_LOOKUP_AND_DELETE_ELEM, unsafe.Pointer(attr), unsafe.Sizeof(*attr)) + return err +} + +type MapLookupBatchAttr struct { + InBatch Pointer + OutBatch Pointer + Keys Pointer + Values Pointer + Count uint32 + MapFd uint32 + ElemFlags uint64 + Flags uint64 +} + +func MapLookupBatch(attr *MapLookupBatchAttr) error { + _, err := BPF(BPF_MAP_LOOKUP_BATCH, unsafe.Pointer(attr), unsafe.Sizeof(*attr)) + return err +} + +type MapLookupElemAttr struct { + MapFd uint32 + _ [4]byte + Key Pointer + Value Pointer + Flags uint64 +} + +func MapLookupElem(attr *MapLookupElemAttr) error { + _, err := BPF(BPF_MAP_LOOKUP_ELEM, unsafe.Pointer(attr), unsafe.Sizeof(*attr)) + return err +} + +type MapUpdateBatchAttr struct { + InBatch Pointer + OutBatch Pointer + Keys Pointer + Values Pointer + Count uint32 + MapFd uint32 + 
ElemFlags uint64 + Flags uint64 +} + +func MapUpdateBatch(attr *MapUpdateBatchAttr) error { + _, err := BPF(BPF_MAP_UPDATE_BATCH, unsafe.Pointer(attr), unsafe.Sizeof(*attr)) + return err +} + +type MapUpdateElemAttr struct { + MapFd uint32 + _ [4]byte + Key Pointer + Value Pointer + Flags uint64 +} + +func MapUpdateElem(attr *MapUpdateElemAttr) error { + _, err := BPF(BPF_MAP_UPDATE_ELEM, unsafe.Pointer(attr), unsafe.Sizeof(*attr)) + return err +} + +type ObjGetAttr struct { + Pathname Pointer + BpfFd uint32 + FileFlags uint32 +} + +func ObjGet(attr *ObjGetAttr) (*FD, error) { + fd, err := BPF(BPF_OBJ_GET, unsafe.Pointer(attr), unsafe.Sizeof(*attr)) + if err != nil { + return nil, err + } + return NewFD(int(fd)) +} + +type ObjGetInfoByFdAttr struct { + BpfFd uint32 + InfoLen uint32 + Info Pointer +} + +func ObjGetInfoByFd(attr *ObjGetInfoByFdAttr) error { + _, err := BPF(BPF_OBJ_GET_INFO_BY_FD, unsafe.Pointer(attr), unsafe.Sizeof(*attr)) + return err +} + +type ObjPinAttr struct { + Pathname Pointer + BpfFd uint32 + FileFlags uint32 +} + +func ObjPin(attr *ObjPinAttr) error { + _, err := BPF(BPF_OBJ_PIN, unsafe.Pointer(attr), unsafe.Sizeof(*attr)) + return err +} + +type ProgAttachAttr struct { + TargetFd uint32 + AttachBpfFd uint32 + AttachType uint32 + AttachFlags uint32 + ReplaceBpfFd uint32 +} + +func ProgAttach(attr *ProgAttachAttr) error { + _, err := BPF(BPF_PROG_ATTACH, unsafe.Pointer(attr), unsafe.Sizeof(*attr)) + return err +} + +type ProgBindMapAttr struct { + ProgFd uint32 + MapFd uint32 + Flags uint32 +} + +func ProgBindMap(attr *ProgBindMapAttr) error { + _, err := BPF(BPF_PROG_BIND_MAP, unsafe.Pointer(attr), unsafe.Sizeof(*attr)) + return err +} + +type ProgDetachAttr struct { + TargetFd uint32 + AttachBpfFd uint32 + AttachType uint32 +} + +func ProgDetach(attr *ProgDetachAttr) error { + _, err := BPF(BPF_PROG_DETACH, unsafe.Pointer(attr), unsafe.Sizeof(*attr)) + return err +} + +type ProgGetFdByIdAttr struct{ Id uint32 } + +func ProgGetFdById(attr 
*ProgGetFdByIdAttr) (*FD, error) { + fd, err := BPF(BPF_PROG_GET_FD_BY_ID, unsafe.Pointer(attr), unsafe.Sizeof(*attr)) + if err != nil { + return nil, err + } + return NewFD(int(fd)) +} + +type ProgGetNextIdAttr struct { + Id uint32 + NextId uint32 +} + +func ProgGetNextId(attr *ProgGetNextIdAttr) error { + _, err := BPF(BPF_PROG_GET_NEXT_ID, unsafe.Pointer(attr), unsafe.Sizeof(*attr)) + return err +} + +type ProgLoadAttr struct { + ProgType ProgType + InsnCnt uint32 + Insns Pointer + License Pointer + LogLevel uint32 + LogSize uint32 + LogBuf Pointer + KernVersion uint32 + ProgFlags uint32 + ProgName ObjName + ProgIfindex uint32 + ExpectedAttachType AttachType + ProgBtfFd uint32 + FuncInfoRecSize uint32 + FuncInfo Pointer + FuncInfoCnt uint32 + LineInfoRecSize uint32 + LineInfo Pointer + LineInfoCnt uint32 + AttachBtfId uint32 + AttachProgFd uint32 + CoreReloCnt uint32 + FdArray Pointer + CoreRelos Pointer + CoreReloRecSize uint32 + _ [4]byte +} + +func ProgLoad(attr *ProgLoadAttr) (*FD, error) { + fd, err := BPF(BPF_PROG_LOAD, unsafe.Pointer(attr), unsafe.Sizeof(*attr)) + if err != nil { + return nil, err + } + return NewFD(int(fd)) +} + +type ProgRunAttr struct { + ProgFd uint32 + Retval uint32 + DataSizeIn uint32 + DataSizeOut uint32 + DataIn Pointer + DataOut Pointer + Repeat uint32 + Duration uint32 + CtxSizeIn uint32 + CtxSizeOut uint32 + CtxIn Pointer + CtxOut Pointer + Flags uint32 + Cpu uint32 + BatchSize uint32 + _ [4]byte +} + +func ProgRun(attr *ProgRunAttr) error { + _, err := BPF(BPF_PROG_TEST_RUN, unsafe.Pointer(attr), unsafe.Sizeof(*attr)) + return err +} + +type RawTracepointOpenAttr struct { + Name Pointer + ProgFd uint32 + _ [4]byte +} + +func RawTracepointOpen(attr *RawTracepointOpenAttr) (*FD, error) { + fd, err := BPF(BPF_RAW_TRACEPOINT_OPEN, unsafe.Pointer(attr), unsafe.Sizeof(*attr)) + if err != nil { + return nil, err + } + return NewFD(int(fd)) +} + +type CgroupLinkInfo struct { + CgroupId uint64 + AttachType AttachType + _ [4]byte +} + 
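The fixed-size `ObjName` fields threaded through these attribute structs (`MapName` in `MapCreateAttr`, `ProgName` in `ProgLoadAttr`) follow the kernel's convention of a NUL-terminated byte array of at most `BPF_OBJ_NAME_LEN` (16) bytes; the deleted `NewBPFObjName` helper later in this diff shows the truncation trick. A minimal standalone sketch of that pattern (names and the 16-byte length mirror the vendored code, but this is illustrative, not the vendored implementation):

```go
package main

import "fmt"

// objNameLen mirrors the kernel's BPF_OBJ_NAME_LEN (16 bytes).
const objNameLen = 16

// ObjName is a NUL-terminated string of 'A-Za-z0-9_' characters,
// as used for map and program names in the bpf(2) syscall.
type ObjName [objNameLen]byte

// NewObjName truncates the input so the final byte is always NUL:
// copy stops at objNameLen-1, leaving the array's zero value in the
// last slot, which is exactly what the deleted NewBPFObjName did.
func NewObjName(name string) ObjName {
	var result ObjName
	copy(result[:objNameLen-1], name)
	return result
}

func main() {
	n := NewObjName("a_very_long_map_name_over_16_bytes")
	fmt.Println(string(n[:15]))       // a_very_long_map
	fmt.Println(n[objNameLen-1] == 0) // true: terminator is guaranteed
}
```

Silent truncation (rather than an error) matches the kernel's own behavior of rejecting only names with invalid characters, not overlong ones passed through this helper.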
+type IterLinkInfo struct { + TargetName Pointer + TargetNameLen uint32 +} + +type NetNsLinkInfo struct { + NetnsIno uint32 + AttachType AttachType +} + +type RawTracepointLinkInfo struct { + TpName Pointer + TpNameLen uint32 + _ [4]byte +} + +type TracingLinkInfo struct { + AttachType AttachType + TargetObjId uint32 + TargetBtfId uint32 +} + +type XDPLinkInfo struct{ Ifindex uint32 } diff --git a/cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/syscall.go b/cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/syscall.go deleted file mode 100644 index b75037bb9d68..000000000000 --- a/cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/syscall.go +++ /dev/null @@ -1,304 +0,0 @@ -package internal - -import ( - "errors" - "fmt" - "path/filepath" - "runtime" - "syscall" - "unsafe" - - "github.com/cilium/ebpf/internal/unix" -) - -//go:generate stringer -output syscall_string.go -type=BPFCmd - -// BPFCmd identifies a subcommand of the bpf syscall. -type BPFCmd int - -// Well known BPF commands. -const ( - BPF_MAP_CREATE BPFCmd = iota - BPF_MAP_LOOKUP_ELEM - BPF_MAP_UPDATE_ELEM - BPF_MAP_DELETE_ELEM - BPF_MAP_GET_NEXT_KEY - BPF_PROG_LOAD - BPF_OBJ_PIN - BPF_OBJ_GET - BPF_PROG_ATTACH - BPF_PROG_DETACH - BPF_PROG_TEST_RUN - BPF_PROG_GET_NEXT_ID - BPF_MAP_GET_NEXT_ID - BPF_PROG_GET_FD_BY_ID - BPF_MAP_GET_FD_BY_ID - BPF_OBJ_GET_INFO_BY_FD - BPF_PROG_QUERY - BPF_RAW_TRACEPOINT_OPEN - BPF_BTF_LOAD - BPF_BTF_GET_FD_BY_ID - BPF_TASK_FD_QUERY - BPF_MAP_LOOKUP_AND_DELETE_ELEM - BPF_MAP_FREEZE - BPF_BTF_GET_NEXT_ID - BPF_MAP_LOOKUP_BATCH - BPF_MAP_LOOKUP_AND_DELETE_BATCH - BPF_MAP_UPDATE_BATCH - BPF_MAP_DELETE_BATCH - BPF_LINK_CREATE - BPF_LINK_UPDATE - BPF_LINK_GET_FD_BY_ID - BPF_LINK_GET_NEXT_ID - BPF_ENABLE_STATS - BPF_ITER_CREATE -) - -// BPF wraps SYS_BPF. -// -// Any pointers contained in attr must use the Pointer type from this package. 
-func BPF(cmd BPFCmd, attr unsafe.Pointer, size uintptr) (uintptr, error) { - r1, _, errNo := unix.Syscall(unix.SYS_BPF, uintptr(cmd), uintptr(attr), size) - runtime.KeepAlive(attr) - - var err error - if errNo != 0 { - err = wrappedErrno{errNo} - } - - return r1, err -} - -type BPFProgLoadAttr struct { - ProgType uint32 - InsCount uint32 - Instructions Pointer - License Pointer - LogLevel uint32 - LogSize uint32 - LogBuf Pointer - KernelVersion uint32 // since 4.1 2541517c32be - ProgFlags uint32 // since 4.11 e07b98d9bffe - ProgName BPFObjName // since 4.15 067cae47771c - ProgIfIndex uint32 // since 4.15 1f6f4cb7ba21 - ExpectedAttachType uint32 // since 4.17 5e43f899b03a - ProgBTFFd uint32 - FuncInfoRecSize uint32 - FuncInfo Pointer - FuncInfoCnt uint32 - LineInfoRecSize uint32 - LineInfo Pointer - LineInfoCnt uint32 - AttachBTFID uint32 - AttachProgFd uint32 -} - -// BPFProgLoad wraps BPF_PROG_LOAD. -func BPFProgLoad(attr *BPFProgLoadAttr) (*FD, error) { - for { - fd, err := BPF(BPF_PROG_LOAD, unsafe.Pointer(attr), unsafe.Sizeof(*attr)) - // As of ~4.20 the verifier can be interrupted by a signal, - // and returns EAGAIN in that case. 
- if errors.Is(err, unix.EAGAIN) { - continue - } - - if err != nil { - return nil, err - } - - return NewFD(uint32(fd)), nil - } -} - -type BPFProgAttachAttr struct { - TargetFd uint32 - AttachBpfFd uint32 - AttachType uint32 - AttachFlags uint32 - ReplaceBpfFd uint32 -} - -func BPFProgAttach(attr *BPFProgAttachAttr) error { - _, err := BPF(BPF_PROG_ATTACH, unsafe.Pointer(attr), unsafe.Sizeof(*attr)) - return err -} - -type BPFProgDetachAttr struct { - TargetFd uint32 - AttachBpfFd uint32 - AttachType uint32 -} - -func BPFProgDetach(attr *BPFProgDetachAttr) error { - _, err := BPF(BPF_PROG_DETACH, unsafe.Pointer(attr), unsafe.Sizeof(*attr)) - return err -} - -type BPFEnableStatsAttr struct { - StatsType uint32 -} - -func BPFEnableStats(attr *BPFEnableStatsAttr) (*FD, error) { - ptr, err := BPF(BPF_ENABLE_STATS, unsafe.Pointer(attr), unsafe.Sizeof(*attr)) - if err != nil { - return nil, fmt.Errorf("enable stats: %w", err) - } - return NewFD(uint32(ptr)), nil - -} - -type bpfObjAttr struct { - fileName Pointer - fd uint32 - fileFlags uint32 -} - -const bpfFSType = 0xcafe4a11 - -// BPFObjPin wraps BPF_OBJ_PIN. -func BPFObjPin(fileName string, fd *FD) error { - dirName := filepath.Dir(fileName) - var statfs unix.Statfs_t - if err := unix.Statfs(dirName, &statfs); err != nil { - return err - } - if uint64(statfs.Type) != bpfFSType { - return fmt.Errorf("%s is not on a bpf filesystem", fileName) - } - - value, err := fd.Value() - if err != nil { - return err - } - - attr := bpfObjAttr{ - fileName: NewStringPointer(fileName), - fd: value, - } - _, err = BPF(BPF_OBJ_PIN, unsafe.Pointer(&attr), unsafe.Sizeof(attr)) - if err != nil { - return fmt.Errorf("pin object %s: %w", fileName, err) - } - return nil -} - -// BPFObjGet wraps BPF_OBJ_GET. 
-func BPFObjGet(fileName string, flags uint32) (*FD, error) { - attr := bpfObjAttr{ - fileName: NewStringPointer(fileName), - fileFlags: flags, - } - ptr, err := BPF(BPF_OBJ_GET, unsafe.Pointer(&attr), unsafe.Sizeof(attr)) - if err != nil { - return nil, fmt.Errorf("get object %s: %w", fileName, err) - } - return NewFD(uint32(ptr)), nil -} - -type bpfObjGetInfoByFDAttr struct { - fd uint32 - infoLen uint32 - info Pointer -} - -// BPFObjGetInfoByFD wraps BPF_OBJ_GET_INFO_BY_FD. -// -// Available from 4.13. -func BPFObjGetInfoByFD(fd *FD, info unsafe.Pointer, size uintptr) error { - value, err := fd.Value() - if err != nil { - return err - } - - attr := bpfObjGetInfoByFDAttr{ - fd: value, - infoLen: uint32(size), - info: NewPointer(info), - } - _, err = BPF(BPF_OBJ_GET_INFO_BY_FD, unsafe.Pointer(&attr), unsafe.Sizeof(attr)) - if err != nil { - return fmt.Errorf("fd %v: %w", fd, err) - } - return nil -} - -type bpfGetFDByIDAttr struct { - id uint32 - next uint32 -} - -// BPFObjGetInfoByFD wraps BPF_*_GET_FD_BY_ID. -// -// Available from 4.13. -func BPFObjGetFDByID(cmd BPFCmd, id uint32) (*FD, error) { - attr := bpfGetFDByIDAttr{ - id: id, - } - ptr, err := BPF(cmd, unsafe.Pointer(&attr), unsafe.Sizeof(attr)) - return NewFD(uint32(ptr)), err -} - -// BPFObjName is a null-terminated string made up of -// 'A-Za-z0-9_' characters. -type BPFObjName [unix.BPF_OBJ_NAME_LEN]byte - -// NewBPFObjName truncates the result if it is too long. 
-func NewBPFObjName(name string) BPFObjName { - var result BPFObjName - copy(result[:unix.BPF_OBJ_NAME_LEN-1], name) - return result -} - -type BPFMapCreateAttr struct { - MapType uint32 - KeySize uint32 - ValueSize uint32 - MaxEntries uint32 - Flags uint32 - InnerMapFd uint32 // since 4.12 56f668dfe00d - NumaNode uint32 // since 4.14 96eabe7a40aa - MapName BPFObjName // since 4.15 ad5b177bd73f - MapIfIndex uint32 - BTFFd uint32 - BTFKeyTypeID uint32 - BTFValueTypeID uint32 -} - -func BPFMapCreate(attr *BPFMapCreateAttr) (*FD, error) { - fd, err := BPF(BPF_MAP_CREATE, unsafe.Pointer(attr), unsafe.Sizeof(*attr)) - if err != nil { - return nil, err - } - - return NewFD(uint32(fd)), nil -} - -// wrappedErrno wraps syscall.Errno to prevent direct comparisons with -// syscall.E* or unix.E* constants. -// -// You should never export an error of this type. -type wrappedErrno struct { - syscall.Errno -} - -func (we wrappedErrno) Unwrap() error { - return we.Errno -} - -type syscallError struct { - error - errno syscall.Errno -} - -func SyscallError(err error, errno syscall.Errno) error { - return &syscallError{err, errno} -} - -func (se *syscallError) Is(target error) bool { - return target == se.error -} - -func (se *syscallError) Unwrap() error { - return se.errno -} diff --git a/cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/syscall_string.go b/cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/syscall_string.go deleted file mode 100644 index 85df04779731..000000000000 --- a/cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/syscall_string.go +++ /dev/null @@ -1,56 +0,0 @@ -// Code generated by "stringer -output syscall_string.go -type=BPFCmd"; DO NOT EDIT. - -package internal - -import "strconv" - -func _() { - // An "invalid array index" compiler error signifies that the constant values have changed. - // Re-run the stringer command to generate them again. 
- var x [1]struct{} - _ = x[BPF_MAP_CREATE-0] - _ = x[BPF_MAP_LOOKUP_ELEM-1] - _ = x[BPF_MAP_UPDATE_ELEM-2] - _ = x[BPF_MAP_DELETE_ELEM-3] - _ = x[BPF_MAP_GET_NEXT_KEY-4] - _ = x[BPF_PROG_LOAD-5] - _ = x[BPF_OBJ_PIN-6] - _ = x[BPF_OBJ_GET-7] - _ = x[BPF_PROG_ATTACH-8] - _ = x[BPF_PROG_DETACH-9] - _ = x[BPF_PROG_TEST_RUN-10] - _ = x[BPF_PROG_GET_NEXT_ID-11] - _ = x[BPF_MAP_GET_NEXT_ID-12] - _ = x[BPF_PROG_GET_FD_BY_ID-13] - _ = x[BPF_MAP_GET_FD_BY_ID-14] - _ = x[BPF_OBJ_GET_INFO_BY_FD-15] - _ = x[BPF_PROG_QUERY-16] - _ = x[BPF_RAW_TRACEPOINT_OPEN-17] - _ = x[BPF_BTF_LOAD-18] - _ = x[BPF_BTF_GET_FD_BY_ID-19] - _ = x[BPF_TASK_FD_QUERY-20] - _ = x[BPF_MAP_LOOKUP_AND_DELETE_ELEM-21] - _ = x[BPF_MAP_FREEZE-22] - _ = x[BPF_BTF_GET_NEXT_ID-23] - _ = x[BPF_MAP_LOOKUP_BATCH-24] - _ = x[BPF_MAP_LOOKUP_AND_DELETE_BATCH-25] - _ = x[BPF_MAP_UPDATE_BATCH-26] - _ = x[BPF_MAP_DELETE_BATCH-27] - _ = x[BPF_LINK_CREATE-28] - _ = x[BPF_LINK_UPDATE-29] - _ = x[BPF_LINK_GET_FD_BY_ID-30] - _ = x[BPF_LINK_GET_NEXT_ID-31] - _ = x[BPF_ENABLE_STATS-32] - _ = x[BPF_ITER_CREATE-33] -} - -const _BPFCmd_name = "BPF_MAP_CREATEBPF_MAP_LOOKUP_ELEMBPF_MAP_UPDATE_ELEMBPF_MAP_DELETE_ELEMBPF_MAP_GET_NEXT_KEYBPF_PROG_LOADBPF_OBJ_PINBPF_OBJ_GETBPF_PROG_ATTACHBPF_PROG_DETACHBPF_PROG_TEST_RUNBPF_PROG_GET_NEXT_IDBPF_MAP_GET_NEXT_IDBPF_PROG_GET_FD_BY_IDBPF_MAP_GET_FD_BY_IDBPF_OBJ_GET_INFO_BY_FDBPF_PROG_QUERYBPF_RAW_TRACEPOINT_OPENBPF_BTF_LOADBPF_BTF_GET_FD_BY_IDBPF_TASK_FD_QUERYBPF_MAP_LOOKUP_AND_DELETE_ELEMBPF_MAP_FREEZEBPF_BTF_GET_NEXT_IDBPF_MAP_LOOKUP_BATCHBPF_MAP_LOOKUP_AND_DELETE_BATCHBPF_MAP_UPDATE_BATCHBPF_MAP_DELETE_BATCHBPF_LINK_CREATEBPF_LINK_UPDATEBPF_LINK_GET_FD_BY_IDBPF_LINK_GET_NEXT_IDBPF_ENABLE_STATSBPF_ITER_CREATE" - -var _BPFCmd_index = [...]uint16{0, 14, 33, 52, 71, 91, 104, 115, 126, 141, 156, 173, 193, 212, 233, 253, 275, 289, 312, 324, 344, 361, 391, 405, 424, 444, 475, 495, 515, 530, 545, 566, 586, 602, 617} - -func (i BPFCmd) String() string { - if i < 0 || i >= 
BPFCmd(len(_BPFCmd_index)-1) { - return "BPFCmd(" + strconv.FormatInt(int64(i), 10) + ")" - } - return _BPFCmd_name[_BPFCmd_index[i]:_BPFCmd_index[i+1]] -} diff --git a/cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/unix/types_linux.go b/cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/unix/types_linux.go index 9aa70fa78c2a..db4a1f5bf9e7 100644 --- a/cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/unix/types_linux.go +++ b/cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/unix/types_linux.go @@ -4,7 +4,6 @@ package unix import ( - "bytes" "syscall" linux "golang.org/x/sys/unix" @@ -23,6 +22,8 @@ const ( ENODEV = linux.ENODEV EBADF = linux.EBADF E2BIG = linux.E2BIG + EFAULT = linux.EFAULT + EACCES = linux.EACCES // ENOTSUPP is not the same as ENOTSUP or EOPNOTSUP ENOTSUPP = syscall.Errno(0x20c) @@ -66,11 +67,16 @@ const ( PERF_RECORD_SAMPLE = linux.PERF_RECORD_SAMPLE AT_FDCWD = linux.AT_FDCWD RENAME_NOREPLACE = linux.RENAME_NOREPLACE + SO_ATTACH_BPF = linux.SO_ATTACH_BPF + SO_DETACH_BPF = linux.SO_DETACH_BPF + SOL_SOCKET = linux.SOL_SOCKET ) // Statfs_t is a wrapper type Statfs_t = linux.Statfs_t +type Stat_t = linux.Stat_t + // Rlimit is a wrapper type Rlimit = linux.Rlimit @@ -191,18 +197,14 @@ func Renameat2(olddirfd int, oldpath string, newdirfd int, newpath string, flags return linux.Renameat2(olddirfd, oldpath, newdirfd, newpath, flags) } -func KernelRelease() (string, error) { - var uname Utsname - err := Uname(&uname) - if err != nil { - return "", err - } +func Prlimit(pid, resource int, new, old *Rlimit) error { + return linux.Prlimit(pid, resource, new, old) +} - end := bytes.IndexByte(uname.Release[:], 0) - release := string(uname.Release[:end]) - return release, nil +func Open(path string, mode int, perm uint32) (int, error) { + return linux.Open(path, mode, perm) } -func Prlimit(pid, resource int, new, old *Rlimit) error { - return linux.Prlimit(pid, resource, new, old) +func Fstat(fd int, stat *Stat_t) error { + 
return linux.Fstat(fd, stat) } diff --git a/cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/unix/types_other.go b/cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/unix/types_other.go index 4f50d896ebb8..133c267dbc3d 100644 --- a/cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/unix/types_other.go +++ b/cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/unix/types_other.go @@ -23,6 +23,8 @@ const ( ENODEV = syscall.ENODEV EBADF = syscall.Errno(0) E2BIG = syscall.Errno(0) + EFAULT = syscall.EFAULT + EACCES = syscall.Errno(0) // ENOTSUPP is not the same as ENOTSUP or EOPNOTSUP ENOTSUPP = syscall.Errno(0x20c) @@ -67,6 +69,9 @@ const ( PERF_RECORD_SAMPLE = 9 AT_FDCWD = -0x2 RENAME_NOREPLACE = 0x1 + SO_ATTACH_BPF = 0x32 + SO_DETACH_BPF = 0x1b + SOL_SOCKET = 0x1 ) // Statfs_t is a wrapper @@ -85,6 +90,8 @@ type Statfs_t struct { Spare [4]int64 } +type Stat_t struct{} + // Rlimit is a wrapper type Rlimit struct { Cur uint64 @@ -258,10 +265,14 @@ func Renameat2(olddirfd int, oldpath string, newdirfd int, newpath string, flags return errNonLinux } -func KernelRelease() (string, error) { - return "", errNonLinux +func Prlimit(pid, resource int, new, old *Rlimit) error { + return errNonLinux } -func Prlimit(pid, resource int, new, old *Rlimit) error { +func Open(path string, mode int, perm uint32) (int, error) { + return -1, errNonLinux +} + +func Fstat(fd int, stat *Stat_t) error { return errNonLinux } diff --git a/cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/vdso.go b/cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/vdso.go new file mode 100644 index 000000000000..ae4821de20c4 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/vdso.go @@ -0,0 +1,150 @@ +package internal + +import ( + "debug/elf" + "encoding/binary" + "errors" + "fmt" + "io" + "math" + "os" + + "github.com/cilium/ebpf/internal/unix" +) + +var ( + errAuxvNoVDSO = errors.New("no vdso address found in auxv") +) + +// vdsoVersion 
returns the LINUX_VERSION_CODE embedded in the vDSO library +// linked into the current process image. +func vdsoVersion() (uint32, error) { + // Read data from the auxiliary vector, which is normally passed directly + // to the process. Go does not expose that data, so we must read it from procfs. + // https://man7.org/linux/man-pages/man3/getauxval.3.html + av, err := os.Open("/proc/self/auxv") + if err != nil { + return 0, fmt.Errorf("opening auxv: %w", err) + } + defer av.Close() + + vdsoAddr, err := vdsoMemoryAddress(av) + if err != nil { + return 0, fmt.Errorf("finding vDSO memory address: %w", err) + } + + // Use /proc/self/mem rather than unsafe.Pointer tricks. + mem, err := os.Open("/proc/self/mem") + if err != nil { + return 0, fmt.Errorf("opening mem: %w", err) + } + defer mem.Close() + + // Open ELF at provided memory address, as offset into /proc/self/mem. + c, err := vdsoLinuxVersionCode(io.NewSectionReader(mem, int64(vdsoAddr), math.MaxInt64)) + if err != nil { + return 0, fmt.Errorf("reading linux version code: %w", err) + } + + return c, nil +} + +// vdsoMemoryAddress returns the memory address of the vDSO library +// linked into the current process image. r is an io.Reader into an auxv blob. +func vdsoMemoryAddress(r io.Reader) (uint64, error) { + const ( + _AT_NULL = 0 // End of vector + _AT_SYSINFO_EHDR = 33 // Offset to vDSO blob in process image + ) + + // Loop through all tag/value pairs in auxv until we find `AT_SYSINFO_EHDR`, + // the address of a page containing the virtual Dynamic Shared Object (vDSO). + aux := struct{ Tag, Val uint64 }{} + for { + if err := binary.Read(r, NativeEndian, &aux); err != nil { + return 0, fmt.Errorf("reading auxv entry: %w", err) + } + + switch aux.Tag { + case _AT_SYSINFO_EHDR: + if aux.Val != 0 { + return aux.Val, nil + } + return 0, fmt.Errorf("invalid vDSO address in auxv") + // _AT_NULL is always the last tag/val pair in the aux vector + // and can be treated like EOF. 
+ case _AT_NULL: + return 0, errAuxvNoVDSO + } + } +} + +// format described at https://www.man7.org/linux/man-pages/man5/elf.5.html in section 'Notes (Nhdr)' +type elfNoteHeader struct { + NameSize int32 + DescSize int32 + Type int32 +} + +// vdsoLinuxVersionCode returns the LINUX_VERSION_CODE embedded in +// the ELF notes section of the binary provided by the reader. +func vdsoLinuxVersionCode(r io.ReaderAt) (uint32, error) { + hdr, err := NewSafeELFFile(r) + if err != nil { + return 0, fmt.Errorf("reading vDSO ELF: %w", err) + } + + sections := hdr.SectionsByType(elf.SHT_NOTE) + if len(sections) == 0 { + return 0, fmt.Errorf("no note section found in vDSO ELF") + } + + for _, sec := range sections { + sr := sec.Open() + var n elfNoteHeader + + // Read notes until we find one named 'Linux'. + for { + if err := binary.Read(sr, hdr.ByteOrder, &n); err != nil { + if errors.Is(err, io.EOF) { + // We looked at all the notes in this section + break + } + return 0, fmt.Errorf("reading note header: %w", err) + } + + // If a note name is defined, it follows the note header. + var name string + if n.NameSize > 0 { + // Read the note name, aligned to 4 bytes. + buf := make([]byte, Align(int(n.NameSize), 4)) + if err := binary.Read(sr, hdr.ByteOrder, &buf); err != nil { + return 0, fmt.Errorf("reading note name: %w", err) + } + + // Read nul-terminated string. + name = unix.ByteSliceToString(buf[:n.NameSize]) + } + + // If a note descriptor is defined, it follows the name. + // It is possible for a note to have a descriptor but not a name. + if n.DescSize > 0 { + // LINUX_VERSION_CODE is a uint32 value. + if name == "Linux" && n.DescSize == 4 && n.Type == 0 { + var version uint32 + if err := binary.Read(sr, hdr.ByteOrder, &version); err != nil { + return 0, fmt.Errorf("reading note descriptor: %w", err) + } + return version, nil + } + + // Discard the note descriptor if it exists but we're not interested in it. 
+ if _, err := io.CopyN(io.Discard, sr, int64(Align(int(n.DescSize), 4))); err != nil { + return 0, err + } + } + } + } + + return 0, fmt.Errorf("no Linux note in ELF") +} diff --git a/cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/version.go b/cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/version.go index 4915e583767a..370e01e4447c 100644 --- a/cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/version.go +++ b/cluster-autoscaler/vendor/github.com/cilium/ebpf/internal/version.go @@ -2,8 +2,6 @@ package internal import ( "fmt" - "os" - "regexp" "sync" "github.com/cilium/ebpf/internal/unix" @@ -18,12 +16,6 @@ const ( ) var ( - // Match between one and three decimals separated by dots, with the last - // segment (patch level) being optional on some kernels. - // The x.y.z string must appear at the start of a string or right after - // whitespace to prevent sequences like 'x.y.z-a.b.c' from matching 'a.b.c'. - rgxKernelVersion = regexp.MustCompile(`(?:\A|\s)\d{1,3}\.\d{1,3}(?:\.\d{1,3})?`) - kernelVersion = struct { once sync.Once version Version @@ -46,6 +38,15 @@ func NewVersion(ver string) (Version, error) { return Version{major, minor, patch}, nil } +// NewVersionFromCode creates a version from a LINUX_VERSION_CODE. +func NewVersionFromCode(code uint32) Version { + return Version{ + uint16(uint8(code >> 16)), + uint16(uint8(code >> 8)), + uint16(uint8(code)), + } +} + func (v Version) String() string { if v[2] == 0 { return fmt.Sprintf("v%d.%d", v[0], v[1]) @@ -98,66 +99,24 @@ func KernelVersion() (Version, error) { return kernelVersion.version, nil } -// detectKernelVersion returns the version of the running kernel. It scans the -// following sources in order: /proc/version_signature, uname -v, uname -r. -// In each of those locations, the last-appearing x.y(.z) value is selected -// for parsing. The first location that yields a usable version number is -// returned. +// detectKernelVersion returns the version of the running kernel. 
func detectKernelVersion() (Version, error) { - - // Try reading /proc/version_signature for Ubuntu compatibility. - // Example format: Ubuntu 4.15.0-91.92-generic 4.15.18 - // This method exists in the kernel itself, see d18acd15c - // ("perf tools: Fix kernel version error in ubuntu"). - if pvs, err := os.ReadFile("/proc/version_signature"); err == nil { - // If /proc/version_signature exists, failing to parse it is an error. - // It only exists on Ubuntu, where the real patch level is not obtainable - // through any other method. - v, err := findKernelVersion(string(pvs)) - if err != nil { - return Version{}, err - } - return v, nil - } - - var uname unix.Utsname - if err := unix.Uname(&uname); err != nil { - return Version{}, fmt.Errorf("calling uname: %w", err) - } - - // Debian puts the version including the patch level in uname.Version. - // It is not an error if there's no version number in uname.Version, - // as most distributions don't use it. Parsing can continue on uname.Release. - // Example format: #1 SMP Debian 4.19.37-5+deb10u2 (2019-08-08) - if v, err := findKernelVersion(unix.ByteSliceToString(uname.Version[:])); err == nil { - return v, nil - } - - // Most other distributions have the full kernel version including patch - // level in uname.Release. - // Example format: 4.19.0-5-amd64, 5.5.10-arch1-1 - v, err := findKernelVersion(unix.ByteSliceToString(uname.Release[:])) + vc, err := vdsoVersion() if err != nil { return Version{}, err } - - return v, nil + return NewVersionFromCode(vc), nil } -// findKernelVersion matches s against rgxKernelVersion and parses the result -// into a Version. If s contains multiple matches, the last entry is selected. -func findKernelVersion(s string) (Version, error) { - m := rgxKernelVersion.FindAllString(s, -1) - if m == nil { - return Version{}, fmt.Errorf("no kernel version in string: %s", s) - } - // Pick the last match of the string in case there are multiple. 
- s = m[len(m)-1] - - v, err := NewVersion(s) - if err != nil { - return Version{}, fmt.Errorf("parsing version string %s: %w", s, err) +// KernelRelease returns the release string of the running kernel. +// Its format depends on the Linux distribution and corresponds to directory +// names in /lib/modules by convention. Some examples are 5.15.17-1-lts and +// 4.19.0-16-amd64. +func KernelRelease() (string, error) { + var uname unix.Utsname + if err := unix.Uname(&uname); err != nil { + return "", fmt.Errorf("uname failed: %w", err) } - return v, nil + return unix.ByteSliceToString(uname.Release[:]), nil } diff --git a/cluster-autoscaler/vendor/github.com/cilium/ebpf/link/cgroup.go b/cluster-autoscaler/vendor/github.com/cilium/ebpf/link/cgroup.go index 5540bb068cd2..003b0638e89d 100644 --- a/cluster-autoscaler/vendor/github.com/cilium/ebpf/link/cgroup.go +++ b/cluster-autoscaler/vendor/github.com/cilium/ebpf/link/cgroup.go @@ -56,16 +56,6 @@ func AttachCgroup(opts CgroupOptions) (Link, error) { return cg, nil } -// LoadPinnedCgroup loads a pinned cgroup from a bpffs. 
-func LoadPinnedCgroup(fileName string, opts *ebpf.LoadPinOptions) (Link, error) { - link, err := LoadPinnedRawLink(fileName, CgroupType, opts) - if err != nil { - return nil, err - } - - return &linkCgroup{*link}, nil -} - type progAttachCgroup struct { cgroup *os.File current *ebpf.Program @@ -151,6 +141,10 @@ func (cg *progAttachCgroup) Unpin() error { return fmt.Errorf("can't pin cgroup: %w", ErrNotSupported) } +func (cg *progAttachCgroup) Info() (*Info, error) { + return nil, fmt.Errorf("can't get cgroup info: %w", ErrNotSupported) +} + type linkCgroup struct { RawLink } diff --git a/cluster-autoscaler/vendor/github.com/cilium/ebpf/link/freplace.go b/cluster-autoscaler/vendor/github.com/cilium/ebpf/link/freplace.go deleted file mode 100644 index a698e1a9d30c..000000000000 --- a/cluster-autoscaler/vendor/github.com/cilium/ebpf/link/freplace.go +++ /dev/null @@ -1,88 +0,0 @@ -package link - -import ( - "fmt" - - "github.com/cilium/ebpf" - "github.com/cilium/ebpf/internal/btf" -) - -type FreplaceLink struct { - RawLink -} - -// AttachFreplace attaches the given eBPF program to the function it replaces. -// -// The program and name can either be provided at link time, or can be provided -// at program load time. If they were provided at load time, they should be nil -// and empty respectively here, as they will be ignored by the kernel. 
-// Examples: -// -// AttachFreplace(dispatcher, "function", replacement) -// AttachFreplace(nil, "", replacement) -func AttachFreplace(targetProg *ebpf.Program, name string, prog *ebpf.Program) (*FreplaceLink, error) { - if (name == "") != (targetProg == nil) { - return nil, fmt.Errorf("must provide both or neither of name and targetProg: %w", errInvalidInput) - } - if prog == nil { - return nil, fmt.Errorf("prog cannot be nil: %w", errInvalidInput) - } - if prog.Type() != ebpf.Extension { - return nil, fmt.Errorf("eBPF program type %s is not an Extension: %w", prog.Type(), errInvalidInput) - } - - var ( - target int - typeID btf.TypeID - ) - if targetProg != nil { - info, err := targetProg.Info() - if err != nil { - return nil, err - } - btfID, ok := info.BTFID() - if !ok { - return nil, fmt.Errorf("could not get BTF ID for program %s: %w", info.Name, errInvalidInput) - } - btfHandle, err := btf.NewHandleFromID(btfID) - if err != nil { - return nil, err - } - defer btfHandle.Close() - - var function *btf.Func - if err := btfHandle.Spec().FindType(name, &function); err != nil { - return nil, err - } - - target = targetProg.FD() - typeID = function.ID() - } - - link, err := AttachRawLink(RawLinkOptions{ - Target: target, - Program: prog, - Attach: ebpf.AttachNone, - BTF: typeID, - }) - if err != nil { - return nil, err - } - - return &FreplaceLink{*link}, nil -} - -// Update implements the Link interface. -func (f *FreplaceLink) Update(new *ebpf.Program) error { - return fmt.Errorf("freplace update: %w", ErrNotSupported) -} - -// LoadPinnedFreplace loads a pinned iterator from a bpffs. 
-func LoadPinnedFreplace(fileName string, opts *ebpf.LoadPinOptions) (*FreplaceLink, error) { - link, err := LoadPinnedRawLink(fileName, TracingType, opts) - if err != nil { - return nil, err - } - - return &FreplaceLink{*link}, err -} diff --git a/cluster-autoscaler/vendor/github.com/cilium/ebpf/link/iter.go b/cluster-autoscaler/vendor/github.com/cilium/ebpf/link/iter.go index 654d34ef8482..d2b32ef331cd 100644 --- a/cluster-autoscaler/vendor/github.com/cilium/ebpf/link/iter.go +++ b/cluster-autoscaler/vendor/github.com/cilium/ebpf/link/iter.go @@ -6,7 +6,7 @@ import ( "unsafe" "github.com/cilium/ebpf" - "github.com/cilium/ebpf/internal" + "github.com/cilium/ebpf/internal/sys" ) type IterOptions struct { @@ -31,26 +31,26 @@ func AttachIter(opts IterOptions) (*Iter, error) { progFd := opts.Program.FD() if progFd < 0 { - return nil, fmt.Errorf("invalid program: %s", internal.ErrClosedFd) + return nil, fmt.Errorf("invalid program: %s", sys.ErrClosedFd) } var info bpfIterLinkInfoMap if opts.Map != nil { mapFd := opts.Map.FD() if mapFd < 0 { - return nil, fmt.Errorf("invalid map: %w", internal.ErrClosedFd) + return nil, fmt.Errorf("invalid map: %w", sys.ErrClosedFd) } info.map_fd = uint32(mapFd) } - attr := bpfLinkCreateIterAttr{ - prog_fd: uint32(progFd), - attach_type: ebpf.AttachTraceIter, - iter_info: internal.NewPointer(unsafe.Pointer(&info)), - iter_info_len: uint32(unsafe.Sizeof(info)), + attr := sys.LinkCreateIterAttr{ + ProgFd: uint32(progFd), + AttachType: sys.AttachType(ebpf.AttachTraceIter), + IterInfo: sys.NewPointer(unsafe.Pointer(&info)), + IterInfoLen: uint32(unsafe.Sizeof(info)), } - fd, err := bpfLinkCreateIter(&attr) + fd, err := sys.LinkCreateIter(&attr) if err != nil { return nil, fmt.Errorf("can't link iterator: %w", err) } @@ -58,16 +58,6 @@ func AttachIter(opts IterOptions) (*Iter, error) { return &Iter{RawLink{fd, ""}}, err } -// LoadPinnedIter loads a pinned iterator from a bpffs. 
-func LoadPinnedIter(fileName string, opts *ebpf.LoadPinOptions) (*Iter, error) { - link, err := LoadPinnedRawLink(fileName, IterType, opts) - if err != nil { - return nil, err - } - - return &Iter{*link}, err -} - // Iter represents an attached bpf_iter. type Iter struct { RawLink @@ -77,16 +67,11 @@ type Iter struct { // // Reading from the returned reader triggers the BPF program. func (it *Iter) Open() (io.ReadCloser, error) { - linkFd, err := it.fd.Value() - if err != nil { - return nil, err - } - - attr := &bpfIterCreateAttr{ - linkFd: linkFd, + attr := &sys.IterCreateAttr{ + LinkFd: it.fd.Uint(), } - fd, err := bpfIterCreate(attr) + fd, err := sys.IterCreate(attr) if err != nil { return nil, fmt.Errorf("can't create iterator: %w", err) } diff --git a/cluster-autoscaler/vendor/github.com/cilium/ebpf/link/kprobe.go b/cluster-autoscaler/vendor/github.com/cilium/ebpf/link/kprobe.go index b6577b5a9925..fdf622a0c07f 100644 --- a/cluster-autoscaler/vendor/github.com/cilium/ebpf/link/kprobe.go +++ b/cluster-autoscaler/vendor/github.com/cilium/ebpf/link/kprobe.go @@ -8,11 +8,13 @@ import ( "os" "path/filepath" "runtime" + "strings" "sync" + "syscall" "unsafe" "github.com/cilium/ebpf" - "github.com/cilium/ebpf/internal" + "github.com/cilium/ebpf/internal/sys" "github.com/cilium/ebpf/internal/unix" ) @@ -28,6 +30,27 @@ var ( type probeType uint8 +type probeArgs struct { + symbol, group, path string + offset, refCtrOffset, cookie uint64 + pid int + ret bool +} + +// KprobeOptions defines additional parameters that will be used +// when loading Kprobes. +type KprobeOptions struct { + // Arbitrary value that can be fetched from an eBPF program + // via `bpf_get_attach_cookie()`. + // + // Needs kernel 5.15+. + Cookie uint64 + // Offset of the kprobe relative to the traced symbol. + // Can be used to insert kprobes at arbitrary offsets in kernel functions, + // e.g. in places where functions have been inlined. 
+ Offset uint64 +} + const ( kprobeType probeType = iota uprobeType @@ -71,70 +94,109 @@ func (pt probeType) RetprobeBit() (uint64, error) { // given kernel symbol starts executing. See /proc/kallsyms for available // symbols. For example, printk(): // -// kp, err := Kprobe("printk", prog) +// kp, err := Kprobe("printk", prog, nil) // // Losing the reference to the resulting Link (kp) will close the Kprobe // and prevent further execution of prog. The Link must be Closed during // program shutdown to avoid leaking system resources. -func Kprobe(symbol string, prog *ebpf.Program) (Link, error) { - k, err := kprobe(symbol, prog, false) +func Kprobe(symbol string, prog *ebpf.Program, opts *KprobeOptions) (Link, error) { + k, err := kprobe(symbol, prog, opts, false) if err != nil { return nil, err } - err = k.attach(prog) + lnk, err := attachPerfEvent(k, prog) if err != nil { k.Close() return nil, err } - return k, nil + return lnk, nil } // Kretprobe attaches the given eBPF program to a perf event that fires right // before the given kernel symbol exits, with the function stack left intact. // See /proc/kallsyms for available symbols. For example, printk(): // -// kp, err := Kretprobe("printk", prog) +// kp, err := Kretprobe("printk", prog, nil) // // Losing the reference to the resulting Link (kp) will close the Kretprobe // and prevent further execution of prog. The Link must be Closed during // program shutdown to avoid leaking system resources. -func Kretprobe(symbol string, prog *ebpf.Program) (Link, error) { - k, err := kprobe(symbol, prog, true) +func Kretprobe(symbol string, prog *ebpf.Program, opts *KprobeOptions) (Link, error) { + k, err := kprobe(symbol, prog, opts, true) if err != nil { return nil, err } - err = k.attach(prog) + lnk, err := attachPerfEvent(k, prog) if err != nil { k.Close() return nil, err } - return k, nil + return lnk, nil +} + +// isValidKprobeSymbol implements the equivalent of a regex match +// against "^[a-zA-Z_][0-9a-zA-Z_.]*$". 
+func isValidKprobeSymbol(s string) bool { + if len(s) < 1 { + return false + } + + for i, c := range []byte(s) { + switch { + case c >= 'a' && c <= 'z': + case c >= 'A' && c <= 'Z': + case c == '_': + case i > 0 && c >= '0' && c <= '9': + + // Allow `.` in symbol name. GCC-compiled kernel may change symbol name + // to have a `.isra.$n` suffix, like `udp_send_skb.isra.52`. + // See: https://gcc.gnu.org/gcc-10/changes.html + case i > 0 && c == '.': + + default: + return false + } + } + + return true } // kprobe opens a perf event on the given symbol and attaches prog to it. // If ret is true, create a kretprobe. -func kprobe(symbol string, prog *ebpf.Program, ret bool) (*perfEvent, error) { +func kprobe(symbol string, prog *ebpf.Program, opts *KprobeOptions, ret bool) (*perfEvent, error) { if symbol == "" { return nil, fmt.Errorf("symbol name cannot be empty: %w", errInvalidInput) } if prog == nil { return nil, fmt.Errorf("prog cannot be nil: %w", errInvalidInput) } - if !rgxTraceEvent.MatchString(symbol) { - return nil, fmt.Errorf("symbol '%s' must be alphanumeric or underscore: %w", symbol, errInvalidInput) + if !isValidKprobeSymbol(symbol) { + return nil, fmt.Errorf("symbol '%s' must be a valid symbol in /proc/kallsyms: %w", symbol, errInvalidInput) } if prog.Type() != ebpf.Kprobe { return nil, fmt.Errorf("eBPF program type %s is not a Kprobe: %w", prog.Type(), errInvalidInput) } + args := probeArgs{ + pid: perfAllThreads, + symbol: symbol, + ret: ret, + } + + if opts != nil { + args.cookie = opts.Cookie + args.offset = opts.Offset + } + // Use kprobe PMU if the kernel has it available. 
- tp, err := pmuKprobe(platformPrefix(symbol), ret) + tp, err := pmuKprobe(args) if errors.Is(err, os.ErrNotExist) { - tp, err = pmuKprobe(symbol, ret) + args.symbol = platformPrefix(symbol) + tp, err = pmuKprobe(args) } if err == nil { return tp, nil @@ -144,9 +206,11 @@ func kprobe(symbol string, prog *ebpf.Program, ret bool) (*perfEvent, error) { } // Use tracefs if kprobe PMU is missing. - tp, err = tracefsKprobe(platformPrefix(symbol), ret) + args.symbol = symbol + tp, err = tracefsKprobe(args) if errors.Is(err, os.ErrNotExist) { - tp, err = tracefsKprobe(symbol, ret) + args.symbol = platformPrefix(symbol) + tp, err = tracefsKprobe(args) } if err != nil { return nil, fmt.Errorf("creating trace event '%s' in tracefs: %w", symbol, err) @@ -157,8 +221,8 @@ func kprobe(symbol string, prog *ebpf.Program, ret bool) (*perfEvent, error) { // pmuKprobe opens a perf event based on the kprobe PMU. // Returns os.ErrNotExist if the given symbol does not exist in the kernel. -func pmuKprobe(symbol string, ret bool) (*perfEvent, error) { - return pmuProbe(kprobeType, symbol, "", 0, perfAllThreads, ret) +func pmuKprobe(args probeArgs) (*perfEvent, error) { + return pmuProbe(kprobeType, args) } // pmuProbe opens a perf event based on a Performance Monitoring Unit. @@ -168,7 +232,7 @@ func pmuKprobe(symbol string, ret bool) (*perfEvent, error) { // 33ea4b24277b "perf/core: Implement the 'perf_uprobe' PMU" // // Returns ErrNotSupported if the kernel doesn't support perf_[k,u]probe PMU -func pmuProbe(typ probeType, symbol, path string, offset uint64, pid int, ret bool) (*perfEvent, error) { +func pmuProbe(typ probeType, args probeArgs) (*perfEvent, error) { // Getting the PMU type will fail if the kernel doesn't support // the perf_[k,u]probe PMU. 
et, err := getPMUEventType(typ) @@ -177,7 +241,7 @@ func pmuProbe(typ probeType, symbol, path string, offset uint64, pid int, ret bo } var config uint64 - if ret { + if args.ret { bit, err := typ.RetprobeBit() if err != nil { return nil, err @@ -192,22 +256,30 @@ func pmuProbe(typ probeType, symbol, path string, offset uint64, pid int, ret bo switch typ { case kprobeType: // Create a pointer to a NUL-terminated string for the kernel. - sp, err = unsafeStringPtr(symbol) + sp, err = unsafeStringPtr(args.symbol) if err != nil { return nil, err } attr = unix.PerfEventAttr{ + // The minimum size required for PMU kprobes is PERF_ATTR_SIZE_VER1, + // since it added the config2 (Ext2) field. Use Ext2 as probe_offset. + Size: unix.PERF_ATTR_SIZE_VER1, Type: uint32(et), // PMU event type read from sysfs Ext1: uint64(uintptr(sp)), // Kernel symbol to trace + Ext2: args.offset, // Kernel symbol offset Config: config, // Retprobe flag } case uprobeType: - sp, err = unsafeStringPtr(path) + sp, err = unsafeStringPtr(args.path) if err != nil { return nil, err } + if args.refCtrOffset != 0 { + config |= args.refCtrOffset << uprobeRefCtrOffsetShift + } + attr = unix.PerfEventAttr{ // The minimum size required for PMU uprobes is PERF_ATTR_SIZE_VER1, // since it added the config2 (Ext2) field. The Size field controls the @@ -216,23 +288,34 @@ func pmuProbe(typ probeType, symbol, path string, offset uint64, pid int, ret bo Size: unix.PERF_ATTR_SIZE_VER1, Type: uint32(et), // PMU event type read from sysfs Ext1: uint64(uintptr(sp)), // Uprobe path - Ext2: offset, // Uprobe offset - Config: config, // Retprobe flag + Ext2: args.offset, // Uprobe offset + Config: config, // RefCtrOffset, Retprobe flag } } - fd, err := unix.PerfEventOpen(&attr, pid, 0, -1, unix.PERF_FLAG_FD_CLOEXEC) + rawFd, err := unix.PerfEventOpen(&attr, args.pid, 0, -1, unix.PERF_FLAG_FD_CLOEXEC) + // On some old kernels, kprobe PMU doesn't allow `.` in symbol names and + // return -EINVAL. 
Return ErrNotSupported to allow falling back to tracefs. + // https://github.com/torvalds/linux/blob/94710cac0ef4/kernel/trace/trace_kprobe.c#L340-L343 + if errors.Is(err, unix.EINVAL) && strings.Contains(args.symbol, ".") { + return nil, fmt.Errorf("symbol '%s+%#x': older kernels don't accept dots: %w", args.symbol, args.offset, ErrNotSupported) + } // Since commit 97c753e62e6c, ENOENT is correctly returned instead of EINVAL // when trying to create a kretprobe for a missing symbol. Make sure ENOENT // is returned to the caller. if errors.Is(err, os.ErrNotExist) || errors.Is(err, unix.EINVAL) { - return nil, fmt.Errorf("symbol '%s' not found: %w", symbol, os.ErrNotExist) + return nil, fmt.Errorf("symbol '%s+%#x' not found: %w", args.symbol, args.offset, os.ErrNotExist) + } + // Since commit ab105a4fb894, -EILSEQ is returned when a kprobe sym+offset is resolved + // to an invalid insn boundary. + if errors.Is(err, syscall.EILSEQ) { + return nil, fmt.Errorf("symbol '%s+%#x' not found (bad insn boundary): %w", args.symbol, args.offset, os.ErrNotExist) } // Since at least commit cb9a19fe4aa51, ENOTSUPP is returned // when attempting to set a uprobe on a trap instruction. if errors.Is(err, unix.ENOTSUPP) { - return nil, fmt.Errorf("failed setting uprobe on offset %#x (possible trap insn): %w", offset, err) + return nil, fmt.Errorf("failed setting uprobe on offset %#x (possible trap insn): %w", args.offset, err) } if err != nil { return nil, fmt.Errorf("opening perf event: %w", err) @@ -241,18 +324,24 @@ func pmuProbe(typ probeType, symbol, path string, offset uint64, pid int, ret bo // Ensure the string pointer is not collected before PerfEventOpen returns. runtime.KeepAlive(sp) + fd, err := sys.NewFD(rawFd) + if err != nil { + return nil, err + } + // Kernel has perf_[k,u]probe PMU available, initialize perf event. 
return &perfEvent{ - fd: internal.NewFD(uint32(fd)), - pmuID: et, - name: symbol, - typ: typ.PerfEventType(ret), + typ: typ.PerfEventType(args.ret), + name: args.symbol, + pmuID: et, + cookie: args.cookie, + fd: fd, }, nil } // tracefsKprobe creates a Kprobe tracefs entry. -func tracefsKprobe(symbol string, ret bool) (*perfEvent, error) { - return tracefsProbe(kprobeType, symbol, "", 0, perfAllThreads, ret) +func tracefsKprobe(args probeArgs) (*perfEvent, error) { + return tracefsProbe(kprobeType, args) } // tracefsProbe creates a trace event by writing an entry to /[k,u]probe_events. @@ -261,7 +350,7 @@ func tracefsKprobe(symbol string, ret bool) (*perfEvent, error) { // Path and offset are only set in the case of uprobe(s) and are used to set // the executable/library path on the filesystem and the offset where the probe is inserted. // A perf event is then opened on the newly-created trace event and returned to the caller. -func tracefsProbe(typ probeType, symbol, path string, offset uint64, pid int, ret bool) (*perfEvent, error) { +func tracefsProbe(typ probeType, args probeArgs) (_ *perfEvent, err error) { // Generate a random string for each trace event we attempt to create. // This value is used as the 'group' token in tracefs to allow creating // multiple kprobe trace events with the same name. @@ -269,42 +358,53 @@ func tracefsProbe(typ probeType, symbol, path string, offset uint64, pid int, re if err != nil { return nil, fmt.Errorf("randomizing group name: %w", err) } + args.group = group // Before attempting to create a trace event through tracefs, // check if an event with the same group and name already exists. // Kernels 4.x and earlier don't return os.ErrExist on writing a duplicate // entry, so we need to rely on reads for detecting uniqueness. 
- _, err = getTraceEventID(group, symbol) + _, err = getTraceEventID(group, args.symbol) if err == nil { - return nil, fmt.Errorf("trace event already exists: %s/%s", group, symbol) + return nil, fmt.Errorf("trace event already exists: %s/%s", group, args.symbol) } if err != nil && !errors.Is(err, os.ErrNotExist) { - return nil, fmt.Errorf("checking trace event %s/%s: %w", group, symbol, err) + return nil, fmt.Errorf("checking trace event %s/%s: %w", group, args.symbol, err) } // Create the [k,u]probe trace event using tracefs. - if err := createTraceFSProbeEvent(typ, group, symbol, path, offset, ret); err != nil { + if err := createTraceFSProbeEvent(typ, args); err != nil { return nil, fmt.Errorf("creating probe entry on tracefs: %w", err) } + defer func() { + if err != nil { + // Make sure we clean up the created tracefs event when we return error. + // If a livepatch handler is already active on the symbol, the write to + // tracefs will succeed, a trace event will show up, but creating the + // perf event will fail with EBUSY. + _ = closeTraceFSProbeEvent(typ, args.group, args.symbol) + } + }() // Get the newly-created trace event's id. - tid, err := getTraceEventID(group, symbol) + tid, err := getTraceEventID(group, args.symbol) if err != nil { return nil, fmt.Errorf("getting trace event id: %w", err) } // Kprobes are ephemeral tracepoints and share the same perf event type. - fd, err := openTracepointPerfEvent(tid, pid) + fd, err := openTracepointPerfEvent(tid, args.pid) if err != nil { return nil, err } return &perfEvent{ - fd: fd, + typ: typ.PerfEventType(args.ret), group: group, - name: symbol, + name: args.symbol, tracefsID: tid, - typ: typ.PerfEventType(ret), + cookie: args.cookie, + fd: fd, }, nil } @@ -312,7 +412,7 @@ func tracefsProbe(typ probeType, symbol, path string, offset uint64, pid int, re // /[k,u]probe_events. Returns os.ErrNotExist if symbol is not a valid // kernel symbol, or if it is not traceable with kprobes. 
Returns os.ErrExist // if a probe with the same group and symbol already exists. -func createTraceFSProbeEvent(typ probeType, group, symbol, path string, offset uint64, ret bool) error { +func createTraceFSProbeEvent(typ probeType, args probeArgs) error { // Open the kprobe_events file in tracefs. f, err := os.OpenFile(typ.EventsPath(), os.O_APPEND|os.O_WRONLY, 0666) if err != nil { @@ -320,7 +420,7 @@ func createTraceFSProbeEvent(typ probeType, group, symbol, path string, offset u } defer f.Close() - var pe string + var pe, token string switch typ { case kprobeType: // The kprobe_events syntax is as follows (see Documentation/trace/kprobetrace.txt): @@ -337,7 +437,8 @@ func createTraceFSProbeEvent(typ probeType, group, symbol, path string, offset u // subsampling or rate limiting logic can be more accurately implemented in // the eBPF program itself. // See Documentation/kprobes.txt for more details. - pe = fmt.Sprintf("%s:%s/%s %s", probePrefix(ret), group, symbol, symbol) + token = kprobeToken(args) + pe = fmt.Sprintf("%s:%s/%s %s", probePrefix(args.ret), args.group, sanitizeSymbol(args.symbol), token) case uprobeType: // The uprobe_events syntax is as follows: // p[:[GRP/]EVENT] PATH:OFFSET [FETCHARGS] : Set a probe @@ -346,18 +447,30 @@ func createTraceFSProbeEvent(typ probeType, group, symbol, path string, offset u // // Some examples: // r:ebpf_1234/readline /bin/bash:0x12345 - // p:ebpf_5678/main_mySymbol /bin/mybin:0x12345 + // p:ebpf_5678/main_mySymbol /bin/mybin:0x12345(0x123) // // See Documentation/trace/uprobetracer.txt for more details. - pathOffset := uprobePathOffset(path, offset) - pe = fmt.Sprintf("%s:%s/%s %s", probePrefix(ret), group, symbol, pathOffset) + token = uprobeToken(args) + pe = fmt.Sprintf("%s:%s/%s %s", probePrefix(args.ret), args.group, args.symbol, token) } _, err = f.WriteString(pe) // Since commit 97c753e62e6c, ENOENT is correctly returned instead of EINVAL // when trying to create a kretprobe for a missing symbol. 
Make sure ENOENT // is returned to the caller. + // EINVAL is also returned on pre-5.2 kernels when the `SYM[+offs]` token + // is resolved to an invalid insn boundary. if errors.Is(err, os.ErrNotExist) || errors.Is(err, unix.EINVAL) { - return fmt.Errorf("symbol %s not found: %w", symbol, os.ErrNotExist) + return fmt.Errorf("token %s: %w", token, os.ErrNotExist) + } + // Since commit ab105a4fb894, -EILSEQ is returned when a kprobe sym+offset is resolved + // to an invalid insn boundary. + if errors.Is(err, syscall.EILSEQ) { + return fmt.Errorf("token %s: bad insn boundary: %w", token, os.ErrNotExist) + } + // ERANGE is returned when the `SYM[+offs]` token is too big and cannot + // be resolved. + if errors.Is(err, syscall.ERANGE) { + return fmt.Errorf("token %s: offset too big: %w", token, os.ErrNotExist) } if err != nil { return fmt.Errorf("writing '%s' to '%s': %w", pe, typ.EventsPath(), err) @@ -377,7 +490,7 @@ func closeTraceFSProbeEvent(typ probeType, group, symbol string) error { // See [k,u]probe_events syntax above. The probe type does not need to be specified // for removals. - pe := fmt.Sprintf("-:%s/%s", group, symbol) + pe := fmt.Sprintf("-:%s/%s", group, sanitizeSymbol(symbol)) if _, err = f.WriteString(pe); err != nil { return fmt.Errorf("writing '%s' to '%s': %w", pe, typ.EventsPath(), err) } @@ -388,9 +501,9 @@ func closeTraceFSProbeEvent(typ probeType, group, symbol string) error { // randomGroup generates a pseudorandom string for use as a tracefs group name. // Returns an error when the output string would exceed 63 characters (kernel // limitation), when rand.Read() fails or when prefix contains characters not -// allowed by rgxTraceEvent. +// allowed by isValidTraceID. 
func randomGroup(prefix string) (string, error) { - if !rgxTraceEvent.MatchString(prefix) { + if !isValidTraceID(prefix) { return "", fmt.Errorf("prefix '%s' must be alphanumeric or underscore: %w", prefix, errInvalidInput) } @@ -442,3 +555,14 @@ func kretprobeBit() (uint64, error) { }) return kprobeRetprobeBit.value, kprobeRetprobeBit.err } + +// kprobeToken creates the SYM[+offs] token for the tracefs api. +func kprobeToken(args probeArgs) string { + po := args.symbol + + if args.offset != 0 { + po += fmt.Sprintf("+%#x", args.offset) + } + + return po +} diff --git a/cluster-autoscaler/vendor/github.com/cilium/ebpf/link/link.go b/cluster-autoscaler/vendor/github.com/cilium/ebpf/link/link.go index 4926584696bb..067d0101aa9f 100644 --- a/cluster-autoscaler/vendor/github.com/cilium/ebpf/link/link.go +++ b/cluster-autoscaler/vendor/github.com/cilium/ebpf/link/link.go @@ -1,12 +1,14 @@ package link import ( + "bytes" + "encoding/binary" "fmt" - "unsafe" "github.com/cilium/ebpf" + "github.com/cilium/ebpf/btf" "github.com/cilium/ebpf/internal" - "github.com/cilium/ebpf/internal/btf" + "github.com/cilium/ebpf/internal/sys" ) var ErrNotSupported = internal.ErrNotSupported @@ -35,12 +37,53 @@ type Link interface { // not called. Close() error + // Info returns metadata on a link. + // + // May return an error wrapping ErrNotSupported. + Info() (*Info, error) + // Prevent external users from implementing this interface. isLink() } +// LoadPinnedLink loads a link that was persisted into a bpffs. +func LoadPinnedLink(fileName string, opts *ebpf.LoadPinOptions) (Link, error) { + raw, err := loadPinnedRawLink(fileName, opts) + if err != nil { + return nil, err + } + + return wrapRawLink(raw) +} + +// wrap a RawLink in a more specific type if possible. +// +// The function takes ownership of raw and closes it on error. 
+func wrapRawLink(raw *RawLink) (Link, error) { + info, err := raw.Info() + if err != nil { + raw.Close() + return nil, err + } + + switch info.Type { + case RawTracepointType: + return &rawTracepoint{*raw}, nil + case TracingType: + return &tracing{*raw}, nil + case CgroupType: + return &linkCgroup{*raw}, nil + case IterType: + return &Iter{*raw}, nil + case NetNsType: + return &NetNsLink{*raw}, nil + default: + return raw, nil + } +} + // ID uniquely identifies a BPF link. -type ID uint32 +type ID = sys.LinkID // RawLinkOptions control the creation of a raw link. type RawLinkOptions struct { @@ -52,13 +95,53 @@ type RawLinkOptions struct { Attach ebpf.AttachType // BTF is the BTF of the attachment target. BTF btf.TypeID + // Flags control the attach behaviour. + Flags uint32 } -// RawLinkInfo contains metadata on a link. -type RawLinkInfo struct { +// Info contains metadata on a link. +type Info struct { Type Type ID ID Program ebpf.ProgramID + extra interface{} +} + +type TracingInfo sys.TracingLinkInfo +type CgroupInfo sys.CgroupLinkInfo +type NetNsInfo sys.NetNsLinkInfo +type XDPInfo sys.XDPLinkInfo + +// Tracing returns tracing type-specific link info. +// +// Returns nil if the type-specific link info isn't available. +func (r Info) Tracing() *TracingInfo { + e, _ := r.extra.(*TracingInfo) + return e +} + +// Cgroup returns cgroup type-specific link info. +// +// Returns nil if the type-specific link info isn't available. +func (r Info) Cgroup() *CgroupInfo { + e, _ := r.extra.(*CgroupInfo) + return e +} + +// NetNs returns netns type-specific link info. +// +// Returns nil if the type-specific link info isn't available. +func (r Info) NetNs() *NetNsInfo { + e, _ := r.extra.(*NetNsInfo) + return e +} + +// ExtraNetNs returns XDP type-specific link info. +// +// Returns nil if the type-specific link info isn't available. +func (r Info) XDP() *XDPInfo { + e, _ := r.extra.(*XDPInfo) + return e } // RawLink is the low-level API to bpf_link. 
@@ -66,7 +149,7 @@ type RawLinkInfo struct { // You should consider using the higher level interfaces in this // package instead. type RawLink struct { - fd *internal.FD + fd *sys.FD pinnedPath string } @@ -77,21 +160,22 @@ func AttachRawLink(opts RawLinkOptions) (*RawLink, error) { } if opts.Target < 0 { - return nil, fmt.Errorf("invalid target: %s", internal.ErrClosedFd) + return nil, fmt.Errorf("invalid target: %s", sys.ErrClosedFd) } progFd := opts.Program.FD() if progFd < 0 { - return nil, fmt.Errorf("invalid program: %s", internal.ErrClosedFd) + return nil, fmt.Errorf("invalid program: %s", sys.ErrClosedFd) } - attr := bpfLinkCreateAttr{ - targetFd: uint32(opts.Target), - progFd: uint32(progFd), - attachType: opts.Attach, - targetBTFID: uint32(opts.BTF), + attr := sys.LinkCreateAttr{ + TargetFd: uint32(opts.Target), + ProgFd: uint32(progFd), + AttachType: sys.AttachType(opts.Attach), + TargetBtfId: uint32(opts.BTF), + Flags: opts.Flags, } - fd, err := bpfLinkCreate(&attr) + fd, err := sys.LinkCreate(&attr) if err != nil { return nil, fmt.Errorf("can't create link: %s", err) } @@ -99,44 +183,23 @@ func AttachRawLink(opts RawLinkOptions) (*RawLink, error) { return &RawLink{fd, ""}, nil } -// LoadPinnedRawLink loads a persisted link from a bpffs. -// -// Returns an error if the pinned link type doesn't match linkType. Pass -// UnspecifiedType to disable this behaviour. 
-func LoadPinnedRawLink(fileName string, linkType Type, opts *ebpf.LoadPinOptions) (*RawLink, error) { - fd, err := internal.BPFObjGet(fileName, opts.Marshal()) +func loadPinnedRawLink(fileName string, opts *ebpf.LoadPinOptions) (*RawLink, error) { + fd, err := sys.ObjGet(&sys.ObjGetAttr{ + Pathname: sys.NewStringPointer(fileName), + FileFlags: opts.Marshal(), + }) if err != nil { return nil, fmt.Errorf("load pinned link: %w", err) } - link := &RawLink{fd, fileName} - if linkType == UnspecifiedType { - return link, nil - } - - info, err := link.Info() - if err != nil { - link.Close() - return nil, fmt.Errorf("get pinned link info: %s", err) - } - - if info.Type != linkType { - link.Close() - return nil, fmt.Errorf("link type %v doesn't match %v", info.Type, linkType) - } - - return link, nil + return &RawLink{fd, fileName}, nil } func (l *RawLink) isLink() {} // FD returns the raw file descriptor. func (l *RawLink) FD() int { - fd, err := l.fd.Value() - if err != nil { - return -1 - } - return int(fd) + return l.fd.Int() } // Close breaks the link. 
@@ -185,49 +248,66 @@ type RawLinkUpdateOptions struct { func (l *RawLink) UpdateArgs(opts RawLinkUpdateOptions) error { newFd := opts.New.FD() if newFd < 0 { - return fmt.Errorf("invalid program: %s", internal.ErrClosedFd) + return fmt.Errorf("invalid program: %s", sys.ErrClosedFd) } var oldFd int if opts.Old != nil { oldFd = opts.Old.FD() if oldFd < 0 { - return fmt.Errorf("invalid replacement program: %s", internal.ErrClosedFd) + return fmt.Errorf("invalid replacement program: %s", sys.ErrClosedFd) } } - linkFd, err := l.fd.Value() - if err != nil { - return fmt.Errorf("can't update link: %s", err) - } - - attr := bpfLinkUpdateAttr{ - linkFd: linkFd, - newProgFd: uint32(newFd), - oldProgFd: uint32(oldFd), - flags: opts.Flags, + attr := sys.LinkUpdateAttr{ + LinkFd: l.fd.Uint(), + NewProgFd: uint32(newFd), + OldProgFd: uint32(oldFd), + Flags: opts.Flags, } - return bpfLinkUpdate(&attr) -} - -// struct bpf_link_info -type bpfLinkInfo struct { - typ uint32 - id uint32 - prog_id uint32 + return sys.LinkUpdate(&attr) } // Info returns metadata about the link. 
-func (l *RawLink) Info() (*RawLinkInfo, error) { - var info bpfLinkInfo - err := internal.BPFObjGetInfoByFD(l.fd, unsafe.Pointer(&info), unsafe.Sizeof(info)) - if err != nil { +func (l *RawLink) Info() (*Info, error) { + var info sys.LinkInfo + + if err := sys.ObjInfo(l.fd, &info); err != nil { return nil, fmt.Errorf("link info: %s", err) } - return &RawLinkInfo{ - Type(info.typ), - ID(info.id), - ebpf.ProgramID(info.prog_id), + var extra interface{} + switch info.Type { + case CgroupType: + extra = &CgroupInfo{} + case IterType: + // not supported + case NetNsType: + extra = &NetNsInfo{} + case RawTracepointType: + // not supported + case TracingType: + extra = &TracingInfo{} + case XDPType: + extra = &XDPInfo{} + case PerfEventType: + // no extra + default: + return nil, fmt.Errorf("unknown link info type: %d", info.Type) + } + + if info.Type != RawTracepointType && info.Type != IterType && info.Type != PerfEventType { + buf := bytes.NewReader(info.Extra[:]) + err := binary.Read(buf, internal.NativeEndian, extra) + if err != nil { + return nil, fmt.Errorf("can not read extra link info: %w", err) + } + } + + return &Info{ + info.Type, + info.Id, + ebpf.ProgramID(info.ProgId), + extra, }, nil } diff --git a/cluster-autoscaler/vendor/github.com/cilium/ebpf/link/netns.go b/cluster-autoscaler/vendor/github.com/cilium/ebpf/link/netns.go index 37e5b84c4ddf..344ecced6bea 100644 --- a/cluster-autoscaler/vendor/github.com/cilium/ebpf/link/netns.go +++ b/cluster-autoscaler/vendor/github.com/cilium/ebpf/link/netns.go @@ -6,14 +6,9 @@ import ( "github.com/cilium/ebpf" ) -// NetNsInfo contains metadata about a network namespace link. -type NetNsInfo struct { - RawLinkInfo -} - // NetNsLink is a program attached to a network namespace. type NetNsLink struct { - *RawLink + RawLink } // AttachNetNs attaches a program to a network namespace. 
@@ -37,24 +32,5 @@ func AttachNetNs(ns int, prog *ebpf.Program) (*NetNsLink, error) { return nil, err } - return &NetNsLink{link}, nil -} - -// LoadPinnedNetNs loads a network namespace link from bpffs. -func LoadPinnedNetNs(fileName string, opts *ebpf.LoadPinOptions) (*NetNsLink, error) { - link, err := LoadPinnedRawLink(fileName, NetNsType, opts) - if err != nil { - return nil, err - } - - return &NetNsLink{link}, nil -} - -// Info returns information about the link. -func (nns *NetNsLink) Info() (*NetNsInfo, error) { - info, err := nns.RawLink.Info() - if err != nil { - return nil, err - } - return &NetNsInfo{*info}, nil + return &NetNsLink{*link}, nil } diff --git a/cluster-autoscaler/vendor/github.com/cilium/ebpf/link/perf_event.go b/cluster-autoscaler/vendor/github.com/cilium/ebpf/link/perf_event.go index 7e0443a75cb0..0e5bd47911bb 100644 --- a/cluster-autoscaler/vendor/github.com/cilium/ebpf/link/perf_event.go +++ b/cluster-autoscaler/vendor/github.com/cilium/ebpf/link/perf_event.go @@ -6,14 +6,15 @@ import ( "fmt" "os" "path/filepath" - "regexp" "runtime" "strconv" "strings" "unsafe" "github.com/cilium/ebpf" + "github.com/cilium/ebpf/asm" "github.com/cilium/ebpf/internal" + "github.com/cilium/ebpf/internal/sys" "github.com/cilium/ebpf/internal/unix" ) @@ -43,11 +44,6 @@ import ( var ( tracefsPath = "/sys/kernel/debug/tracing" - // Trace event groups, names and kernel symbols must adhere to this set - // of characters. Non-empty, first character must not be a number, all - // characters must be alphanumeric or underscore. - rgxTraceEvent = regexp.MustCompile("^[a-zA-Z_][0-9a-zA-Z_]*$") - errInvalidInput = errors.New("invalid input") ) @@ -69,6 +65,8 @@ const ( // can be attached to it. It is created based on a tracefs trace event or a // Performance Monitoring Unit (PMU). type perfEvent struct { + // The event type determines the types of programs that can be attached. + typ perfEventType // Group and name of the tracepoint/kprobe/uprobe. 
group string @@ -79,53 +77,15 @@ type perfEvent struct { // ID of the trace event read from tracefs. Valid IDs are non-zero. tracefsID uint64 - // The event type determines the types of programs that can be attached. - typ perfEventType + // User provided arbitrary value. + cookie uint64 - fd *internal.FD -} - -func (pe *perfEvent) isLink() {} - -func (pe *perfEvent) Pin(string) error { - return fmt.Errorf("pin perf event: %w", ErrNotSupported) -} - -func (pe *perfEvent) Unpin() error { - return fmt.Errorf("unpin perf event: %w", ErrNotSupported) -} - -// Since 4.15 (e87c6bc3852b "bpf: permit multiple bpf attachments for a single perf event"), -// calling PERF_EVENT_IOC_SET_BPF appends the given program to a prog_array -// owned by the perf event, which means multiple programs can be attached -// simultaneously. -// -// Before 4.15, calling PERF_EVENT_IOC_SET_BPF more than once on a perf event -// returns EEXIST. -// -// Detaching a program from a perf event is currently not possible, so a -// program replacement mechanism cannot be implemented for perf events. -func (pe *perfEvent) Update(prog *ebpf.Program) error { - return fmt.Errorf("can't replace eBPF program in perf event: %w", ErrNotSupported) + // This is the perf event FD. + fd *sys.FD } func (pe *perfEvent) Close() error { - if pe.fd == nil { - return nil - } - - pfd, err := pe.fd.Value() - if err != nil { - return fmt.Errorf("getting perf event fd: %w", err) - } - - err = unix.IoctlSetInt(int(pfd), unix.PERF_EVENT_IOC_DISABLE, 0) - if err != nil { - return fmt.Errorf("disabling perf event: %w", err) - } - - err = pe.fd.Close() - if err != nil { + if err := pe.fd.Close(); err != nil { return fmt.Errorf("closing perf event fd: %w", err) } @@ -148,49 +108,150 @@ func (pe *perfEvent) Close() error { return nil } +// perfEventLink represents a bpf perf link. 
+type perfEventLink struct { + RawLink + pe *perfEvent +} + +func (pl *perfEventLink) isLink() {} + +// Pinning requires the underlying perf event FD to stay open. +// +// | PerfEvent FD | BpfLink FD | Works | +// |--------------|------------|-------| +// | Open | Open | Yes | +// | Closed | Open | No | +// | Open | Closed | No (Pin() -> EINVAL) | +// | Closed | Closed | No (Pin() -> EINVAL) | +// +// There is currently no pretty way to recover the perf event FD +// when loading a pinned link, so leave as not supported for now. +func (pl *perfEventLink) Pin(string) error { + return fmt.Errorf("perf event link pin: %w", ErrNotSupported) +} + +func (pl *perfEventLink) Unpin() error { + return fmt.Errorf("perf event link unpin: %w", ErrNotSupported) +} + +func (pl *perfEventLink) Close() error { + if err := pl.pe.Close(); err != nil { + return fmt.Errorf("perf event link close: %w", err) + } + return pl.fd.Close() +} + +func (pl *perfEventLink) Update(prog *ebpf.Program) error { + return fmt.Errorf("perf event link update: %w", ErrNotSupported) +} + +// perfEventIoctl implements Link and handles the perf event lifecycle +// via ioctl(). +type perfEventIoctl struct { + *perfEvent +} + +func (pi *perfEventIoctl) isLink() {} + +// Since 4.15 (e87c6bc3852b "bpf: permit multiple bpf attachments for a single perf event"), +// calling PERF_EVENT_IOC_SET_BPF appends the given program to a prog_array +// owned by the perf event, which means multiple programs can be attached +// simultaneously. +// +// Before 4.15, calling PERF_EVENT_IOC_SET_BPF more than once on a perf event +// returns EEXIST. +// +// Detaching a program from a perf event is currently not possible, so a +// program replacement mechanism cannot be implemented for perf events. 
+func (pi *perfEventIoctl) Update(prog *ebpf.Program) error { + return fmt.Errorf("perf event ioctl update: %w", ErrNotSupported) +} + +func (pi *perfEventIoctl) Pin(string) error { + return fmt.Errorf("perf event ioctl pin: %w", ErrNotSupported) +} + +func (pi *perfEventIoctl) Unpin() error { + return fmt.Errorf("perf event ioctl unpin: %w", ErrNotSupported) +} + +func (pi *perfEventIoctl) Info() (*Info, error) { + return nil, fmt.Errorf("perf event ioctl info: %w", ErrNotSupported) +} + // attach the given eBPF prog to the perf event stored in pe. // pe must contain a valid perf event fd. // prog's type must match the program type stored in pe. -func (pe *perfEvent) attach(prog *ebpf.Program) error { +func attachPerfEvent(pe *perfEvent, prog *ebpf.Program) (Link, error) { if prog == nil { - return errors.New("cannot attach a nil program") - } - if pe.fd == nil { - return errors.New("cannot attach to nil perf event") + return nil, errors.New("cannot attach a nil program") } if prog.FD() < 0 { - return fmt.Errorf("invalid program: %w", internal.ErrClosedFd) + return nil, fmt.Errorf("invalid program: %w", sys.ErrClosedFd) } + switch pe.typ { case kprobeEvent, kretprobeEvent, uprobeEvent, uretprobeEvent: if t := prog.Type(); t != ebpf.Kprobe { - return fmt.Errorf("invalid program type (expected %s): %s", ebpf.Kprobe, t) + return nil, fmt.Errorf("invalid program type (expected %s): %s", ebpf.Kprobe, t) } case tracepointEvent: if t := prog.Type(); t != ebpf.TracePoint { - return fmt.Errorf("invalid program type (expected %s): %s", ebpf.TracePoint, t) + return nil, fmt.Errorf("invalid program type (expected %s): %s", ebpf.TracePoint, t) } default: - return fmt.Errorf("unknown perf event type: %d", pe.typ) + return nil, fmt.Errorf("unknown perf event type: %d", pe.typ) + } + + if err := haveBPFLinkPerfEvent(); err == nil { + return attachPerfEventLink(pe, prog) } + return attachPerfEventIoctl(pe, prog) +} - // The ioctl below will fail when the fd is invalid. 
- kfd, _ := pe.fd.Value() +func attachPerfEventIoctl(pe *perfEvent, prog *ebpf.Program) (*perfEventIoctl, error) { + if pe.cookie != 0 { + return nil, fmt.Errorf("cookies are not supported: %w", ErrNotSupported) + } // Assign the eBPF program to the perf event. - err := unix.IoctlSetInt(int(kfd), unix.PERF_EVENT_IOC_SET_BPF, prog.FD()) + err := unix.IoctlSetInt(pe.fd.Int(), unix.PERF_EVENT_IOC_SET_BPF, prog.FD()) if err != nil { - return fmt.Errorf("setting perf event bpf program: %w", err) + return nil, fmt.Errorf("setting perf event bpf program: %w", err) } // PERF_EVENT_IOC_ENABLE and _DISABLE ignore their given values. - if err := unix.IoctlSetInt(int(kfd), unix.PERF_EVENT_IOC_ENABLE, 0); err != nil { - return fmt.Errorf("enable perf event: %s", err) + if err := unix.IoctlSetInt(pe.fd.Int(), unix.PERF_EVENT_IOC_ENABLE, 0); err != nil { + return nil, fmt.Errorf("enable perf event: %s", err) } + pi := &perfEventIoctl{pe} + // Close the perf event when its reference is lost to avoid leaking system resources. - runtime.SetFinalizer(pe, (*perfEvent).Close) - return nil + runtime.SetFinalizer(pi, (*perfEventIoctl).Close) + return pi, nil +} + +// Use the bpf api to attach the perf event (BPF_LINK_TYPE_PERF_EVENT, 5.15+). +// +// https://github.com/torvalds/linux/commit/b89fbfbb854c9afc3047e8273cc3a694650b802e +func attachPerfEventLink(pe *perfEvent, prog *ebpf.Program) (*perfEventLink, error) { + fd, err := sys.LinkCreatePerfEvent(&sys.LinkCreatePerfEventAttr{ + ProgFd: uint32(prog.FD()), + TargetFd: pe.fd.Uint(), + AttachType: sys.BPF_PERF_EVENT, + BpfCookie: pe.cookie, + }) + if err != nil { + return nil, fmt.Errorf("cannot create bpf perf link: %v", err) + } + + pl := &perfEventLink{RawLink{fd: fd}, pe} + + // Close the perf event when its reference is lost to avoid leaking system resources. + runtime.SetFinalizer(pl, (*perfEventLink).Close) + return pl, nil } // unsafeStringPtr returns an unsafe.Pointer to a NUL-terminated copy of str. 
@@ -203,8 +264,12 @@ func unsafeStringPtr(str string) (unsafe.Pointer, error) { } // getTraceEventID reads a trace event's ID from tracefs given its group and name. -// group and name must be alphanumeric or underscore, as required by the kernel. +// The kernel requires group and name to be alphanumeric or underscore. +// +// name automatically has its invalid symbols converted to underscores so the caller +// can pass a raw symbol name, e.g. a kernel symbol containing dots. func getTraceEventID(group, name string) (uint64, error) { + name = sanitizeSymbol(name) tid, err := uint64FromFile(tracefsPath, "events", group, name, "id") if errors.Is(err, os.ErrNotExist) { return 0, fmt.Errorf("trace event %s/%s: %w", group, name, os.ErrNotExist) @@ -235,7 +300,7 @@ func getPMUEventType(typ probeType) (uint64, error) { // openTracepointPerfEvent opens a tracepoint-type perf event. System-wide // [k,u]probes created by writing to /[k,u]probe_events are tracepoints // behind the scenes, and can be attached to using these perf events. -func openTracepointPerfEvent(tid uint64, pid int) (*internal.FD, error) { +func openTracepointPerfEvent(tid uint64, pid int) (*sys.FD, error) { attr := unix.PerfEventAttr{ Type: unix.PERF_TYPE_TRACEPOINT, Config: tid, @@ -249,7 +314,7 @@ func openTracepointPerfEvent(tid uint64, pid int) (*internal.FD, error) { return nil, fmt.Errorf("opening tracepoint perf event: %w", err) } - return internal.NewFD(uint32(fd)), nil + return sys.NewFD(fd) } // uint64FromFile reads a uint64 from a file. All elements of path are sanitized @@ -270,3 +335,60 @@ func uint64FromFile(base string, path ...string) (uint64, error) { et := bytes.TrimSpace(data) return strconv.ParseUint(string(et), 10, 64) } + +// Probe BPF perf link. 
+// +// https://elixir.bootlin.com/linux/v5.16.8/source/kernel/bpf/syscall.c#L4307 +// https://github.com/torvalds/linux/commit/b89fbfbb854c9afc3047e8273cc3a694650b802e +var haveBPFLinkPerfEvent = internal.FeatureTest("bpf_link_perf_event", "5.15", func() error { + prog, err := ebpf.NewProgram(&ebpf.ProgramSpec{ + Name: "probe_bpf_perf_link", + Type: ebpf.Kprobe, + Instructions: asm.Instructions{ + asm.Mov.Imm(asm.R0, 0), + asm.Return(), + }, + License: "MIT", + }) + if err != nil { + return err + } + defer prog.Close() + + _, err = sys.LinkCreatePerfEvent(&sys.LinkCreatePerfEventAttr{ + ProgFd: uint32(prog.FD()), + AttachType: sys.BPF_PERF_EVENT, + }) + if errors.Is(err, unix.EINVAL) { + return internal.ErrNotSupported + } + if errors.Is(err, unix.EBADF) { + return nil + } + return err +}) + +// isValidTraceID implements the equivalent of a regex match +// against "^[a-zA-Z_][0-9a-zA-Z_]*$". +// +// Trace event groups, names and kernel symbols must adhere to this set +// of characters. Non-empty, first character must not be a number, all +// characters must be alphanumeric or underscore. 
+func isValidTraceID(s string) bool { + if len(s) < 1 { + return false + } + for i, c := range []byte(s) { + switch { + case c >= 'a' && c <= 'z': + case c >= 'A' && c <= 'Z': + case c == '_': + case i > 0 && c >= '0' && c <= '9': + + default: + return false + } + } + + return true +} diff --git a/cluster-autoscaler/vendor/github.com/cilium/ebpf/link/program.go b/cluster-autoscaler/vendor/github.com/cilium/ebpf/link/program.go index b90c4574676e..ea31817377fc 100644 --- a/cluster-autoscaler/vendor/github.com/cilium/ebpf/link/program.go +++ b/cluster-autoscaler/vendor/github.com/cilium/ebpf/link/program.go @@ -4,7 +4,7 @@ import ( "fmt" "github.com/cilium/ebpf" - "github.com/cilium/ebpf/internal" + "github.com/cilium/ebpf/internal/sys" ) type RawAttachProgramOptions struct { @@ -34,7 +34,7 @@ func RawAttachProgram(opts RawAttachProgramOptions) error { replaceFd = uint32(opts.Replace.FD()) } - attr := internal.BPFProgAttachAttr{ + attr := sys.ProgAttachAttr{ TargetFd: uint32(opts.Target), AttachBpfFd: uint32(opts.Program.FD()), ReplaceBpfFd: replaceFd, @@ -42,7 +42,7 @@ func RawAttachProgram(opts RawAttachProgramOptions) error { AttachFlags: uint32(opts.Flags), } - if err := internal.BPFProgAttach(&attr); err != nil { + if err := sys.ProgAttach(&attr); err != nil { return fmt.Errorf("can't attach program: %w", err) } return nil @@ -63,12 +63,12 @@ func RawDetachProgram(opts RawDetachProgramOptions) error { return err } - attr := internal.BPFProgDetachAttr{ + attr := sys.ProgDetachAttr{ TargetFd: uint32(opts.Target), AttachBpfFd: uint32(opts.Program.FD()), AttachType: uint32(opts.Attach), } - if err := internal.BPFProgDetach(&attr); err != nil { + if err := sys.ProgDetach(&attr); err != nil { return fmt.Errorf("can't detach program: %w", err) } diff --git a/cluster-autoscaler/vendor/github.com/cilium/ebpf/link/raw_tracepoint.go b/cluster-autoscaler/vendor/github.com/cilium/ebpf/link/raw_tracepoint.go index f4beb1e07863..925e621cbbc7 100644 --- 
a/cluster-autoscaler/vendor/github.com/cilium/ebpf/link/raw_tracepoint.go +++ b/cluster-autoscaler/vendor/github.com/cilium/ebpf/link/raw_tracepoint.go @@ -1,10 +1,11 @@ package link import ( + "errors" "fmt" "github.com/cilium/ebpf" - "github.com/cilium/ebpf/internal" + "github.com/cilium/ebpf/internal/sys" ) type RawTracepointOptions struct { @@ -22,40 +23,65 @@ func AttachRawTracepoint(opts RawTracepointOptions) (Link, error) { return nil, fmt.Errorf("invalid program type %s, expected RawTracepoint(Writable)", t) } if opts.Program.FD() < 0 { - return nil, fmt.Errorf("invalid program: %w", internal.ErrClosedFd) + return nil, fmt.Errorf("invalid program: %w", sys.ErrClosedFd) } - fd, err := bpfRawTracepointOpen(&bpfRawTracepointOpenAttr{ - name: internal.NewStringPointer(opts.Name), - fd: uint32(opts.Program.FD()), + fd, err := sys.RawTracepointOpen(&sys.RawTracepointOpenAttr{ + Name: sys.NewStringPointer(opts.Name), + ProgFd: uint32(opts.Program.FD()), }) if err != nil { return nil, err } - return &progAttachRawTracepoint{fd: fd}, nil + err = haveBPFLink() + if errors.Is(err, ErrNotSupported) { + // Prior to commit 70ed506c3bbc ("bpf: Introduce pinnable bpf_link abstraction") + // raw_tracepoints are just a plain fd. 
+ return &simpleRawTracepoint{fd}, nil + } + + if err != nil { + return nil, err + } + + return &rawTracepoint{RawLink{fd: fd}}, nil } -type progAttachRawTracepoint struct { - fd *internal.FD +type simpleRawTracepoint struct { + fd *sys.FD } -var _ Link = (*progAttachRawTracepoint)(nil) +var _ Link = (*simpleRawTracepoint)(nil) -func (rt *progAttachRawTracepoint) isLink() {} +func (frt *simpleRawTracepoint) isLink() {} -func (rt *progAttachRawTracepoint) Close() error { - return rt.fd.Close() +func (frt *simpleRawTracepoint) Close() error { + return frt.fd.Close() } -func (rt *progAttachRawTracepoint) Update(_ *ebpf.Program) error { - return fmt.Errorf("can't update raw_tracepoint: %w", ErrNotSupported) +func (frt *simpleRawTracepoint) Update(_ *ebpf.Program) error { + return fmt.Errorf("update raw_tracepoint: %w", ErrNotSupported) } -func (rt *progAttachRawTracepoint) Pin(_ string) error { - return fmt.Errorf("can't pin raw_tracepoint: %w", ErrNotSupported) +func (frt *simpleRawTracepoint) Pin(string) error { + return fmt.Errorf("pin raw_tracepoint: %w", ErrNotSupported) } -func (rt *progAttachRawTracepoint) Unpin() error { +func (frt *simpleRawTracepoint) Unpin() error { return fmt.Errorf("unpin raw_tracepoint: %w", ErrNotSupported) } + +func (frt *simpleRawTracepoint) Info() (*Info, error) { + return nil, fmt.Errorf("can't get raw_tracepoint info: %w", ErrNotSupported) +} + +type rawTracepoint struct { + RawLink +} + +var _ Link = (*rawTracepoint)(nil) + +func (rt *rawTracepoint) Update(_ *ebpf.Program) error { + return fmt.Errorf("update raw_tracepoint: %w", ErrNotSupported) +} diff --git a/cluster-autoscaler/vendor/github.com/cilium/ebpf/link/socket_filter.go b/cluster-autoscaler/vendor/github.com/cilium/ebpf/link/socket_filter.go new file mode 100644 index 000000000000..94f3958cc4d8 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/cilium/ebpf/link/socket_filter.go @@ -0,0 +1,40 @@ +package link + +import ( + "syscall" + + "github.com/cilium/ebpf" + 
"github.com/cilium/ebpf/internal/unix" +) + +// AttachSocketFilter attaches a SocketFilter BPF program to a socket. +func AttachSocketFilter(conn syscall.Conn, program *ebpf.Program) error { + rawConn, err := conn.SyscallConn() + if err != nil { + return err + } + var ssoErr error + err = rawConn.Control(func(fd uintptr) { + ssoErr = syscall.SetsockoptInt(int(fd), unix.SOL_SOCKET, unix.SO_ATTACH_BPF, program.FD()) + }) + if ssoErr != nil { + return ssoErr + } + return err +} + +// DetachSocketFilter detaches a SocketFilter BPF program from a socket. +func DetachSocketFilter(conn syscall.Conn) error { + rawConn, err := conn.SyscallConn() + if err != nil { + return err + } + var ssoErr error + err = rawConn.Control(func(fd uintptr) { + ssoErr = syscall.SetsockoptInt(int(fd), unix.SOL_SOCKET, unix.SO_DETACH_BPF, 0) + }) + if ssoErr != nil { + return ssoErr + } + return err +} diff --git a/cluster-autoscaler/vendor/github.com/cilium/ebpf/link/syscalls.go b/cluster-autoscaler/vendor/github.com/cilium/ebpf/link/syscalls.go index a61499438b22..a661395b360b 100644 --- a/cluster-autoscaler/vendor/github.com/cilium/ebpf/link/syscalls.go +++ b/cluster-autoscaler/vendor/github.com/cilium/ebpf/link/syscalls.go @@ -2,35 +2,33 @@ package link import ( "errors" - "unsafe" "github.com/cilium/ebpf" "github.com/cilium/ebpf/asm" "github.com/cilium/ebpf/internal" + "github.com/cilium/ebpf/internal/sys" "github.com/cilium/ebpf/internal/unix" ) // Type is the kind of link. -type Type uint32 +type Type = sys.LinkType // Valid link types. -// -// Equivalent to enum bpf_link_type. 
const ( - UnspecifiedType Type = iota - RawTracepointType - TracingType - CgroupType - IterType - NetNsType - XDPType + UnspecifiedType = sys.BPF_LINK_TYPE_UNSPEC + RawTracepointType = sys.BPF_LINK_TYPE_RAW_TRACEPOINT + TracingType = sys.BPF_LINK_TYPE_TRACING + CgroupType = sys.BPF_LINK_TYPE_CGROUP + IterType = sys.BPF_LINK_TYPE_ITER + NetNsType = sys.BPF_LINK_TYPE_NETNS + XDPType = sys.BPF_LINK_TYPE_XDP + PerfEventType = sys.BPF_LINK_TYPE_PERF_EVENT ) var haveProgAttach = internal.FeatureTest("BPF_PROG_ATTACH", "4.10", func() error { prog, err := ebpf.NewProgram(&ebpf.ProgramSpec{ - Type: ebpf.CGroupSKB, - AttachType: ebpf.AttachCGroupInetIngress, - License: "MIT", + Type: ebpf.CGroupSKB, + License: "MIT", Instructions: asm.Instructions{ asm.Mov.Imm(asm.R0, 0), asm.Return(), @@ -69,7 +67,7 @@ var haveProgAttachReplace = internal.FeatureTest("BPF_PROG_ATTACH atomic replace // We know that we have BPF_PROG_ATTACH since we can load CGroupSKB programs. // If passing BPF_F_REPLACE gives us EINVAL we know that the feature isn't // present. - attr := internal.BPFProgAttachAttr{ + attr := sys.ProgAttachAttr{ // We rely on this being checked after attachFlags. 
TargetFd: ^uint32(0), AttachBpfFd: uint32(prog.FD()), @@ -77,7 +75,7 @@ var haveProgAttachReplace = internal.FeatureTest("BPF_PROG_ATTACH atomic replace AttachFlags: uint32(flagReplace), } - err = internal.BPFProgAttach(&attr) + err = sys.ProgAttach(&attr) if errors.Is(err, unix.EINVAL) { return internal.ErrNotSupported } @@ -87,73 +85,14 @@ var haveProgAttachReplace = internal.FeatureTest("BPF_PROG_ATTACH atomic replace return err }) -type bpfLinkCreateAttr struct { - progFd uint32 - targetFd uint32 - attachType ebpf.AttachType - flags uint32 - targetBTFID uint32 -} - -func bpfLinkCreate(attr *bpfLinkCreateAttr) (*internal.FD, error) { - ptr, err := internal.BPF(internal.BPF_LINK_CREATE, unsafe.Pointer(attr), unsafe.Sizeof(*attr)) - if err != nil { - return nil, err - } - return internal.NewFD(uint32(ptr)), nil -} - -type bpfLinkCreateIterAttr struct { - prog_fd uint32 - target_fd uint32 - attach_type ebpf.AttachType - flags uint32 - iter_info internal.Pointer - iter_info_len uint32 -} - -func bpfLinkCreateIter(attr *bpfLinkCreateIterAttr) (*internal.FD, error) { - ptr, err := internal.BPF(internal.BPF_LINK_CREATE, unsafe.Pointer(attr), unsafe.Sizeof(*attr)) - if err != nil { - return nil, err - } - return internal.NewFD(uint32(ptr)), nil -} - -type bpfLinkUpdateAttr struct { - linkFd uint32 - newProgFd uint32 - flags uint32 - oldProgFd uint32 -} - -func bpfLinkUpdate(attr *bpfLinkUpdateAttr) error { - _, err := internal.BPF(internal.BPF_LINK_UPDATE, unsafe.Pointer(attr), unsafe.Sizeof(*attr)) - return err -} - var haveBPFLink = internal.FeatureTest("bpf_link", "5.7", func() error { - prog, err := ebpf.NewProgram(&ebpf.ProgramSpec{ - Type: ebpf.CGroupSKB, - AttachType: ebpf.AttachCGroupInetIngress, - License: "MIT", - Instructions: asm.Instructions{ - asm.Mov.Imm(asm.R0, 0), - asm.Return(), - }, - }) - if err != nil { - return internal.ErrNotSupported - } - defer prog.Close() - - attr := bpfLinkCreateAttr{ + attr := sys.LinkCreateAttr{ // This is a hopefully 
invalid file descriptor, which triggers EBADF. - targetFd: ^uint32(0), - progFd: uint32(prog.FD()), - attachType: ebpf.AttachCGroupInetIngress, + TargetFd: ^uint32(0), + ProgFd: ^uint32(0), + AttachType: sys.AttachType(ebpf.AttachCGroupInetIngress), } - _, err = bpfLinkCreate(&attr) + _, err := sys.LinkCreate(&attr) if errors.Is(err, unix.EINVAL) { return internal.ErrNotSupported } @@ -162,30 +101,3 @@ var haveBPFLink = internal.FeatureTest("bpf_link", "5.7", func() error { } return err }) - -type bpfIterCreateAttr struct { - linkFd uint32 - flags uint32 -} - -func bpfIterCreate(attr *bpfIterCreateAttr) (*internal.FD, error) { - ptr, err := internal.BPF(internal.BPF_ITER_CREATE, unsafe.Pointer(attr), unsafe.Sizeof(*attr)) - if err == nil { - return internal.NewFD(uint32(ptr)), nil - } - return nil, err -} - -type bpfRawTracepointOpenAttr struct { - name internal.Pointer - fd uint32 - _ uint32 -} - -func bpfRawTracepointOpen(attr *bpfRawTracepointOpenAttr) (*internal.FD, error) { - ptr, err := internal.BPF(internal.BPF_RAW_TRACEPOINT_OPEN, unsafe.Pointer(attr), unsafe.Sizeof(*attr)) - if err == nil { - return internal.NewFD(uint32(ptr)), nil - } - return nil, err -} diff --git a/cluster-autoscaler/vendor/github.com/cilium/ebpf/link/tracepoint.go b/cluster-autoscaler/vendor/github.com/cilium/ebpf/link/tracepoint.go index 7423df86b137..a59ef9d1c527 100644 --- a/cluster-autoscaler/vendor/github.com/cilium/ebpf/link/tracepoint.go +++ b/cluster-autoscaler/vendor/github.com/cilium/ebpf/link/tracepoint.go @@ -6,12 +6,22 @@ import ( "github.com/cilium/ebpf" ) +// TracepointOptions defines additional parameters that will be used +// when loading Tracepoints. +type TracepointOptions struct { + // Arbitrary value that can be fetched from an eBPF program + // via `bpf_get_attach_cookie()`. + // + // Needs kernel 5.15+. + Cookie uint64 +} + // Tracepoint attaches the given eBPF program to the tracepoint with the given // group and name. 
See /sys/kernel/debug/tracing/events to find available // tracepoints. The top-level directory is the group, the event's subdirectory // is the name. Example: // -// tp, err := Tracepoint("syscalls", "sys_enter_fork", prog) +// tp, err := Tracepoint("syscalls", "sys_enter_fork", prog, nil) // // Losing the reference to the resulting Link (tp) will close the Tracepoint // and prevent further execution of prog. The Link must be Closed during @@ -19,14 +29,14 @@ import ( // // Note that attaching eBPF programs to syscalls (sys_enter_*/sys_exit_*) is // only possible as of kernel 4.14 (commit cf5f5ce). -func Tracepoint(group, name string, prog *ebpf.Program) (Link, error) { +func Tracepoint(group, name string, prog *ebpf.Program, opts *TracepointOptions) (Link, error) { if group == "" || name == "" { return nil, fmt.Errorf("group and name cannot be empty: %w", errInvalidInput) } if prog == nil { return nil, fmt.Errorf("prog cannot be nil: %w", errInvalidInput) } - if !rgxTraceEvent.MatchString(group) || !rgxTraceEvent.MatchString(name) { + if !isValidTraceID(group) || !isValidTraceID(name) { return nil, fmt.Errorf("group and name '%s/%s' must be alphanumeric or underscore: %w", group, name, errInvalidInput) } if prog.Type() != ebpf.TracePoint { @@ -43,18 +53,25 @@ func Tracepoint(group, name string, prog *ebpf.Program) (Link, error) { return nil, err } + var cookie uint64 + if opts != nil { + cookie = opts.Cookie + } + pe := &perfEvent{ - fd: fd, - tracefsID: tid, + typ: tracepointEvent, group: group, name: name, - typ: tracepointEvent, + tracefsID: tid, + cookie: cookie, + fd: fd, } - if err := pe.attach(prog); err != nil { + lnk, err := attachPerfEvent(pe, prog) + if err != nil { pe.Close() return nil, err } - return pe, nil + return lnk, nil } diff --git a/cluster-autoscaler/vendor/github.com/cilium/ebpf/link/tracing.go b/cluster-autoscaler/vendor/github.com/cilium/ebpf/link/tracing.go new file mode 100644 index 000000000000..e47e61a3b843 --- /dev/null +++ 
b/cluster-autoscaler/vendor/github.com/cilium/ebpf/link/tracing.go @@ -0,0 +1,141 @@ +package link + +import ( + "fmt" + + "github.com/cilium/ebpf" + "github.com/cilium/ebpf/btf" + "github.com/cilium/ebpf/internal/sys" +) + +type tracing struct { + RawLink +} + +func (f *tracing) Update(new *ebpf.Program) error { + return fmt.Errorf("tracing update: %w", ErrNotSupported) +} + +// AttachFreplace attaches the given eBPF program to the function it replaces. +// +// The program and name can either be provided at link time, or can be provided +// at program load time. If they were provided at load time, they should be nil +// and empty respectively here, as they will be ignored by the kernel. +// Examples: +// +// AttachFreplace(dispatcher, "function", replacement) +// AttachFreplace(nil, "", replacement) +func AttachFreplace(targetProg *ebpf.Program, name string, prog *ebpf.Program) (Link, error) { + if (name == "") != (targetProg == nil) { + return nil, fmt.Errorf("must provide both or neither of name and targetProg: %w", errInvalidInput) + } + if prog == nil { + return nil, fmt.Errorf("prog cannot be nil: %w", errInvalidInput) + } + if prog.Type() != ebpf.Extension { + return nil, fmt.Errorf("eBPF program type %s is not an Extension: %w", prog.Type(), errInvalidInput) + } + + var ( + target int + typeID btf.TypeID + ) + if targetProg != nil { + btfHandle, err := targetProg.Handle() + if err != nil { + return nil, err + } + defer btfHandle.Close() + + spec, err := btfHandle.Spec(nil) + if err != nil { + return nil, err + } + + var function *btf.Func + if err := spec.TypeByName(name, &function); err != nil { + return nil, err + } + + target = targetProg.FD() + typeID, err = spec.TypeID(function) + if err != nil { + return nil, err + } + } + + link, err := AttachRawLink(RawLinkOptions{ + Target: target, + Program: prog, + Attach: ebpf.AttachNone, + BTF: typeID, + }) + if err != nil { + return nil, err + } + + return &tracing{*link}, nil +} + +type TracingOptions struct 
{ + // Program must be of type Tracing with attach type + // AttachTraceFEntry/AttachTraceFExit/AttachModifyReturn or + // AttachTraceRawTp. + Program *ebpf.Program +} + +type LSMOptions struct { + // Program must be of type LSM with attach type + // AttachLSMMac. + Program *ebpf.Program +} + +// attachBTFID links all BPF program types (Tracing/LSM) that they attach to a btf_id. +func attachBTFID(program *ebpf.Program) (Link, error) { + if program.FD() < 0 { + return nil, fmt.Errorf("invalid program %w", sys.ErrClosedFd) + } + + fd, err := sys.RawTracepointOpen(&sys.RawTracepointOpenAttr{ + ProgFd: uint32(program.FD()), + }) + if err != nil { + return nil, err + } + + raw := RawLink{fd: fd} + info, err := raw.Info() + if err != nil { + raw.Close() + return nil, err + } + + if info.Type == RawTracepointType { + // Sadness upon sadness: a Tracing program with AttachRawTp returns + // a raw_tracepoint link. Other types return a tracing link. + return &rawTracepoint{raw}, nil + } + + return &tracing{RawLink: RawLink{fd: fd}}, nil +} + +// AttachTracing links a tracing (fentry/fexit/fmod_ret) BPF program or +// a BTF-powered raw tracepoint (tp_btf) BPF Program to a BPF hook defined +// in kernel modules. +func AttachTracing(opts TracingOptions) (Link, error) { + if t := opts.Program.Type(); t != ebpf.Tracing { + return nil, fmt.Errorf("invalid program type %s, expected Tracing", t) + } + + return attachBTFID(opts.Program) +} + +// AttachLSM links a Linux security module (LSM) BPF Program to a BPF +// hook defined in kernel modules. 
+func AttachLSM(opts LSMOptions) (Link, error) { + if t := opts.Program.Type(); t != ebpf.LSM { + return nil, fmt.Errorf("invalid program type %s, expected LSM", t) + } + + return attachBTFID(opts.Program) +} diff --git a/cluster-autoscaler/vendor/github.com/cilium/ebpf/link/uprobe.go b/cluster-autoscaler/vendor/github.com/cilium/ebpf/link/uprobe.go index 59170ce0468d..edf925b57020 100644 --- a/cluster-autoscaler/vendor/github.com/cilium/ebpf/link/uprobe.go +++ b/cluster-autoscaler/vendor/github.com/cilium/ebpf/link/uprobe.go @@ -6,7 +6,7 @@ import ( "fmt" "os" "path/filepath" - "regexp" + "strings" "sync" "github.com/cilium/ebpf" @@ -16,16 +16,23 @@ import ( var ( uprobeEventsPath = filepath.Join(tracefsPath, "uprobe_events") - // rgxUprobeSymbol is used to strip invalid characters from the uprobe symbol - // as they are not allowed to be used as the EVENT token in tracefs. - rgxUprobeSymbol = regexp.MustCompile("[^a-zA-Z0-9]+") - uprobeRetprobeBit = struct { once sync.Once value uint64 err error }{} + uprobeRefCtrOffsetPMUPath = "/sys/bus/event_source/devices/uprobe/format/ref_ctr_offset" + // elixir.bootlin.com/linux/v5.15-rc7/source/kernel/events/core.c#L9799 + uprobeRefCtrOffsetShift = 32 + haveRefCtrOffsetPMU = internal.FeatureTest("RefCtrOffsetPMU", "4.20", func() error { + _, err := os.Stat(uprobeRefCtrOffsetPMUPath) + if err != nil { + return internal.ErrNotSupported + } + return nil + }) + // ErrNoSymbol indicates that the given symbol was not found // in the ELF symbols table. ErrNoSymbol = errors.New("not found") @@ -35,24 +42,46 @@ var ( type Executable struct { // Path of the executable on the filesystem. path string - // Parsed ELF symbols and dynamic symbols offsets. - offsets map[string]uint64 + // Parsed ELF and dynamic symbols' addresses. + addresses map[string]uint64 } // UprobeOptions defines additional parameters that will be used // when loading Uprobes. type UprobeOptions struct { - // Symbol offset. 
Must be provided in case of external symbols (shared libs). - // If set, overrides the offset eventually parsed from the executable. + // Symbol address. Must be provided in case of external symbols (shared libs). + // If set, overrides the address eventually parsed from the executable. + Address uint64 + // The offset relative to given symbol. Useful when tracing an arbitrary point + // inside the frame of given symbol. + // + // Note: this field changed from being an absolute offset to being relative + // to Address. Offset uint64 // Only set the uprobe on the given process ID. Useful when tracing // shared library calls or programs that have many running instances. PID int + // Automatically manage SDT reference counts (semaphores). + // + // If this field is set, the Kernel will increment/decrement the + // semaphore located in the process memory at the provided address on + // probe attach/detach. + // + // See also: + // sourceware.org/systemtap/wiki/UserSpaceProbeImplementation (Semaphore Handling) + // github.com/torvalds/linux/commit/1cc33161a83d + // github.com/torvalds/linux/commit/a6ca88b241d5 + RefCtrOffset uint64 + // Arbitrary value that can be fetched from an eBPF program + // via `bpf_get_attach_cookie()`. + // + // Needs kernel 5.15+. + Cookie uint64 } // To open a new Executable, use: // -// OpenExecutable("/bin/bash") +// OpenExecutable("/bin/bash") // // The returned value can then be used to open Uprobe(s). func OpenExecutable(path string) (*Executable, error) { @@ -77,8 +106,8 @@ func OpenExecutable(path string) (*Executable, error) { } ex := Executable{ - path: path, - offsets: make(map[string]uint64), + path: path, + addresses: make(map[string]uint64), } if err := ex.load(se); err != nil { @@ -107,7 +136,7 @@ func (ex *Executable) load(f *internal.SafeELFFile) error { continue } - off := s.Value + address := s.Value // Loop over ELF segments. 
for _, prog := range f.Progs { @@ -123,32 +152,42 @@ func (ex *Executable) load(f *internal.SafeELFFile) error { // fn symbol offset = fn symbol VA - .text VA + .text offset // // stackoverflow.com/a/40249502 - off = s.Value - prog.Vaddr + prog.Off + address = s.Value - prog.Vaddr + prog.Off break } } - ex.offsets[s.Name] = off + ex.addresses[s.Name] = address } return nil } -func (ex *Executable) offset(symbol string) (uint64, error) { - if off, ok := ex.offsets[symbol]; ok { - // Symbols with location 0 from section undef are shared library calls and - // are relocated before the binary is executed. Dynamic linking is not - // implemented by the library, so mark this as unsupported for now. - // - // Since only offset values are stored and not elf.Symbol, if the value is 0, - // assume it's an external symbol. - if off == 0 { - return 0, fmt.Errorf("cannot resolve %s library call '%s', "+ - "consider providing the offset via options: %w", ex.path, symbol, ErrNotSupported) - } - return off, nil +// address calculates the address of a symbol in the executable. +// +// opts must not be nil. +func (ex *Executable) address(symbol string, opts *UprobeOptions) (uint64, error) { + if opts.Address > 0 { + return opts.Address + opts.Offset, nil } - return 0, fmt.Errorf("symbol %s: %w", symbol, ErrNoSymbol) + + address, ok := ex.addresses[symbol] + if !ok { + return 0, fmt.Errorf("symbol %s: %w", symbol, ErrNoSymbol) + } + + // Symbols with location 0 from section undef are shared library calls and + // are relocated before the binary is executed. Dynamic linking is not + // implemented by the library, so mark this as unsupported for now. + // + // Since only offset values are stored and not elf.Symbol, if the value is 0, + // assume it's an external symbol. 
+ if address == 0 { + return 0, fmt.Errorf("cannot resolve %s library call '%s': %w "+ + "(consider providing UprobeOptions.Address)", ex.path, symbol, ErrNotSupported) + } + + return address + opts.Offset, nil } // Uprobe attaches the given eBPF program to a perf event that fires when the @@ -161,7 +200,9 @@ func (ex *Executable) offset(symbol string) (uint64, error) { // When using symbols which belongs to shared libraries, // an offset must be provided via options: // -// up, err := ex.Uprobe("main", prog, &UprobeOptions{Offset: 0x123}) +// up, err := ex.Uprobe("main", prog, &UprobeOptions{Offset: 0x123}) +// +// Note: Setting the Offset field in the options supersedes the symbol's offset. // // Losing the reference to the resulting Link (up) will close the Uprobe // and prevent further execution of prog. The Link must be Closed during @@ -175,13 +216,13 @@ func (ex *Executable) Uprobe(symbol string, prog *ebpf.Program, opts *UprobeOpti return nil, err } - err = u.attach(prog) + lnk, err := attachPerfEvent(u, prog) if err != nil { u.Close() return nil, err } - return u, nil + return lnk, nil } // Uretprobe attaches the given eBPF program to a perf event that fires right @@ -193,7 +234,9 @@ func (ex *Executable) Uprobe(symbol string, prog *ebpf.Program, opts *UprobeOpti // When using symbols which belongs to shared libraries, // an offset must be provided via options: // -// up, err := ex.Uretprobe("main", prog, &UprobeOptions{Offset: 0x123}) +// up, err := ex.Uretprobe("main", prog, &UprobeOptions{Offset: 0x123}) +// +// Note: Setting the Offset field in the options supersedes the symbol's offset. // // Losing the reference to the resulting Link (up) will close the Uprobe // and prevent further execution of prog. 
The Link must be Closed during @@ -207,13 +250,13 @@ func (ex *Executable) Uretprobe(symbol string, prog *ebpf.Program, opts *UprobeO return nil, err } - err = u.attach(prog) + lnk, err := attachPerfEvent(u, prog) if err != nil { u.Close() return nil, err } - return u, nil + return lnk, nil } // uprobe opens a perf event for the given binary/symbol and attaches prog to it. @@ -225,25 +268,38 @@ func (ex *Executable) uprobe(symbol string, prog *ebpf.Program, opts *UprobeOpti if prog.Type() != ebpf.Kprobe { return nil, fmt.Errorf("eBPF program type %s is not Kprobe: %w", prog.Type(), errInvalidInput) } + if opts == nil { + opts = &UprobeOptions{} + } - var offset uint64 - if opts != nil && opts.Offset != 0 { - offset = opts.Offset - } else { - off, err := ex.offset(symbol) - if err != nil { - return nil, err + offset, err := ex.address(symbol, opts) + if err != nil { + return nil, err + } + + pid := opts.PID + if pid == 0 { + pid = perfAllThreads + } + + if opts.RefCtrOffset != 0 { + if err := haveRefCtrOffsetPMU(); err != nil { + return nil, fmt.Errorf("uprobe ref_ctr_offset: %w", err) } - offset = off } - pid := perfAllThreads - if opts != nil && opts.PID != 0 { - pid = opts.PID + args := probeArgs{ + symbol: symbol, + path: ex.path, + offset: offset, + pid: pid, + refCtrOffset: opts.RefCtrOffset, + ret: ret, + cookie: opts.Cookie, } // Use uprobe PMU if the kernel has it available. - tp, err := pmuUprobe(symbol, ex.path, offset, pid, ret) + tp, err := pmuUprobe(args) if err == nil { return tp, nil } @@ -252,7 +308,8 @@ func (ex *Executable) uprobe(symbol string, prog *ebpf.Program, opts *UprobeOpti } // Use tracefs if uprobe PMU is missing. 
- tp, err = tracefsUprobe(uprobeSanitizedSymbol(symbol), ex.path, offset, pid, ret) + args.symbol = sanitizeSymbol(symbol) + tp, err = tracefsUprobe(args) if err != nil { return nil, fmt.Errorf("creating trace event '%s:%s' in tracefs: %w", ex.path, symbol, err) } @@ -261,23 +318,51 @@ func (ex *Executable) uprobe(symbol string, prog *ebpf.Program, opts *UprobeOpti } // pmuUprobe opens a perf event based on the uprobe PMU. -func pmuUprobe(symbol, path string, offset uint64, pid int, ret bool) (*perfEvent, error) { - return pmuProbe(uprobeType, symbol, path, offset, pid, ret) +func pmuUprobe(args probeArgs) (*perfEvent, error) { + return pmuProbe(uprobeType, args) } // tracefsUprobe creates a Uprobe tracefs entry. -func tracefsUprobe(symbol, path string, offset uint64, pid int, ret bool) (*perfEvent, error) { - return tracefsProbe(uprobeType, symbol, path, offset, pid, ret) +func tracefsUprobe(args probeArgs) (*perfEvent, error) { + return tracefsProbe(uprobeType, args) } -// uprobeSanitizedSymbol replaces every invalid characted for the tracefs api with an underscore. -func uprobeSanitizedSymbol(symbol string) string { - return rgxUprobeSymbol.ReplaceAllString(symbol, "_") +// sanitizeSymbol replaces every invalid character for the tracefs api with an underscore. +// It is equivalent to calling regexp.MustCompile("[^a-zA-Z0-9]+").ReplaceAllString("_"). +func sanitizeSymbol(s string) string { + var b strings.Builder + b.Grow(len(s)) + var skip bool + for _, c := range []byte(s) { + switch { + case c >= 'a' && c <= 'z', + c >= 'A' && c <= 'Z', + c >= '0' && c <= '9': + skip = false + b.WriteByte(c) + + default: + if !skip { + b.WriteByte('_') + skip = true + } + } + } + + return b.String() } -// uprobePathOffset creates the PATH:OFFSET token for the tracefs api. -func uprobePathOffset(path string, offset uint64) string { - return fmt.Sprintf("%s:%#x", path, offset) +// uprobeToken creates the PATH:OFFSET(REF_CTR_OFFSET) token for the tracefs api. 
+func uprobeToken(args probeArgs) string { + po := fmt.Sprintf("%s:%#x", args.path, args.offset) + + if args.refCtrOffset != 0 { + // This is not documented in Documentation/trace/uprobetracer.txt. + // elixir.bootlin.com/linux/v5.15-rc7/source/kernel/trace/trace.c#L5564 + po += fmt.Sprintf("(%#x)", args.refCtrOffset) + } + + return po } func uretprobeBit() (uint64, error) { diff --git a/cluster-autoscaler/vendor/github.com/cilium/ebpf/link/xdp.go b/cluster-autoscaler/vendor/github.com/cilium/ebpf/link/xdp.go new file mode 100644 index 000000000000..aa8dd3a4cb39 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/cilium/ebpf/link/xdp.go @@ -0,0 +1,54 @@ +package link + +import ( + "fmt" + + "github.com/cilium/ebpf" +) + +// XDPAttachFlags represents how XDP program will be attached to interface. +type XDPAttachFlags uint32 + +const ( + // XDPGenericMode (SKB) links XDP BPF program for drivers which do + // not yet support native XDP. + XDPGenericMode XDPAttachFlags = 1 << (iota + 1) + // XDPDriverMode links XDP BPF program into the driver’s receive path. + XDPDriverMode + // XDPOffloadMode offloads the entire XDP BPF program into hardware. + XDPOffloadMode +) + +type XDPOptions struct { + // Program must be an XDP BPF program. + Program *ebpf.Program + + // Interface is the interface index to attach program to. + Interface int + + // Flags is one of XDPAttachFlags (optional). + // + // Only one XDP mode should be set, without flag defaults + // to driver/generic mode (best effort). + Flags XDPAttachFlags +} + +// AttachXDP links an XDP BPF program to an XDP hook. 
+func AttachXDP(opts XDPOptions) (Link, error) { + if t := opts.Program.Type(); t != ebpf.XDP { + return nil, fmt.Errorf("invalid program type %s, expected XDP", t) + } + + if opts.Interface < 1 { + return nil, fmt.Errorf("invalid interface index: %d", opts.Interface) + } + + rawLink, err := AttachRawLink(RawLinkOptions{ + Program: opts.Program, + Attach: ebpf.AttachXDP, + Target: opts.Interface, + Flags: uint32(opts.Flags), + }) + + return rawLink, err +} diff --git a/cluster-autoscaler/vendor/github.com/cilium/ebpf/linker.go b/cluster-autoscaler/vendor/github.com/cilium/ebpf/linker.go index f3b1629e70a8..e6276b1829b2 100644 --- a/cluster-autoscaler/vendor/github.com/cilium/ebpf/linker.go +++ b/cluster-autoscaler/vendor/github.com/cilium/ebpf/linker.go @@ -1,159 +1,238 @@ package ebpf import ( + "errors" "fmt" + "sync" "github.com/cilium/ebpf/asm" + "github.com/cilium/ebpf/btf" ) -// link resolves bpf-to-bpf calls. +// splitSymbols splits insns into subsections delimited by Symbol Instructions. +// insns cannot be empty and must start with a Symbol Instruction. // -// Each library may contain multiple functions / labels, and is only linked -// if prog references one of these functions. -// -// Libraries also linked. -func link(prog *ProgramSpec, libs []*ProgramSpec) error { - var ( - linked = make(map[*ProgramSpec]bool) - pending = []asm.Instructions{prog.Instructions} - insns asm.Instructions - ) - for len(pending) > 0 { - insns, pending = pending[0], pending[1:] - for _, lib := range libs { - if linked[lib] { - continue - } +// The resulting map is indexed by Symbol name. 
+func splitSymbols(insns asm.Instructions) (map[string]asm.Instructions, error) { + if len(insns) == 0 { + return nil, errors.New("insns is empty") + } - needed, err := needSection(insns, lib.Instructions) - if err != nil { - return fmt.Errorf("linking %s: %w", lib.Name, err) - } + if insns[0].Symbol() == "" { + return nil, errors.New("insns must start with a Symbol") + } - if !needed { - continue + var name string + progs := make(map[string]asm.Instructions) + for _, ins := range insns { + if sym := ins.Symbol(); sym != "" { + if progs[sym] != nil { + return nil, fmt.Errorf("insns contains duplicate Symbol %s", sym) } + name = sym + } - linked[lib] = true - prog.Instructions = append(prog.Instructions, lib.Instructions...) - pending = append(pending, lib.Instructions) + progs[name] = append(progs[name], ins) + } - if prog.BTF != nil && lib.BTF != nil { - if err := prog.BTF.Append(lib.BTF); err != nil { - return fmt.Errorf("linking BTF of %s: %w", lib.Name, err) - } - } + return progs, nil +} + +// The linker is responsible for resolving bpf-to-bpf calls between programs +// within an ELF. Each BPF program must be a self-contained binary blob, +// so when an instruction in one ELF program section wants to jump to +// a function in another, the linker needs to pull in the bytecode +// (and BTF info) of the target function and concatenate the instruction +// streams. +// +// Later on in the pipeline, all call sites are fixed up with relative jumps +// within this newly-created instruction stream to then finally hand off to +// the kernel with BPF_PROG_LOAD. +// +// Each function is denoted by an ELF symbol and the compiler takes care of +// register setup before each jump instruction. + +// hasFunctionReferences returns true if insns contains one or more bpf2bpf +// function references. 
+func hasFunctionReferences(insns asm.Instructions) bool { + for _, i := range insns { + if i.IsFunctionReference() { + return true } } - - return nil + return false } -func needSection(insns, section asm.Instructions) (bool, error) { - // A map of symbols to the libraries which contain them. - symbols, err := section.SymbolOffsets() +// applyRelocations collects and applies any CO-RE relocations in insns. +// +// Passing a nil target will relocate against the running kernel. insns are +// modified in place. +func applyRelocations(insns asm.Instructions, local, target *btf.Spec) error { + var relos []*btf.CORERelocation + var reloInsns []*asm.Instruction + iter := insns.Iterate() + for iter.Next() { + if relo := btf.CORERelocationMetadata(iter.Ins); relo != nil { + relos = append(relos, relo) + reloInsns = append(reloInsns, iter.Ins) + } + } + + if len(relos) == 0 { + return nil + } + + target, err := maybeLoadKernelBTF(target) if err != nil { - return false, err + return err } - for _, ins := range insns { - if ins.Reference == "" { - continue - } + fixups, err := btf.CORERelocate(local, target, relos) + if err != nil { + return err + } - if ins.OpCode.JumpOp() != asm.Call || ins.Src != asm.PseudoCall { - continue + for i, fixup := range fixups { + if err := fixup.Apply(reloInsns[i]); err != nil { + return fmt.Errorf("apply fixup %s: %w", &fixup, err) } + } - if ins.Constant != -1 { - // This is already a valid call, no need to link again. - continue - } + return nil +} - if _, ok := symbols[ins.Reference]; !ok { - // Symbol isn't available in this section - continue - } +// flattenPrograms resolves bpf-to-bpf calls for a set of programs. +// +// Links all programs in names by modifying their ProgramSpec in progs. +func flattenPrograms(progs map[string]*ProgramSpec, names []string) { + // Pre-calculate all function references. 
+ refs := make(map[*ProgramSpec][]string) + for _, prog := range progs { + refs[prog] = prog.Instructions.FunctionReferences() + } - // At this point we know that at least one function in the - // library is called from insns, so we have to link it. - return true, nil + // Create a flattened instruction stream, but don't modify progs yet to + // avoid linking multiple times. + flattened := make([]asm.Instructions, 0, len(names)) + for _, name := range names { + flattened = append(flattened, flattenInstructions(name, progs, refs)) } - // None of the functions in the section are called. - return false, nil + // Finally, assign the flattened instructions. + for i, name := range names { + progs[name].Instructions = flattened[i] + } } -func fixupJumpsAndCalls(insns asm.Instructions) error { - symbolOffsets := make(map[string]asm.RawInstructionOffset) - iter := insns.Iterate() - for iter.Next() { - ins := iter.Ins +// flattenInstructions resolves bpf-to-bpf calls for a single program. +// +// Flattens the instructions of prog by concatenating the instructions of all +// direct and indirect dependencies. +// +// progs contains all referenceable programs, while refs contain the direct +// dependencies of each program. +func flattenInstructions(name string, progs map[string]*ProgramSpec, refs map[*ProgramSpec][]string) asm.Instructions { + prog := progs[name] + + insns := make(asm.Instructions, len(prog.Instructions)) + copy(insns, prog.Instructions) + + // Add all direct references of prog to the list of to be linked programs. + pending := make([]string, len(refs[prog])) + copy(pending, refs[prog]) + + // All references for which we've appended instructions. + linked := make(map[string]bool) + + // Iterate all pending references. We can't use a range since pending is + // modified in the body below. 
+ for len(pending) > 0 { + var ref string + ref, pending = pending[0], pending[1:] - if ins.Symbol == "" { + if linked[ref] { + // We've already linked this ref, don't append instructions again. continue } - if _, ok := symbolOffsets[ins.Symbol]; ok { - return fmt.Errorf("duplicate symbol %s", ins.Symbol) + progRef := progs[ref] + if progRef == nil { + // We don't have instructions that go with this reference. This + // happens when calling extern functions. + continue } - symbolOffsets[ins.Symbol] = iter.Offset + insns = append(insns, progRef.Instructions...) + linked[ref] = true + + // Make sure we link indirect references. + pending = append(pending, refs[progRef]...) } - iter = insns.Iterate() + return insns +} + +// fixupAndValidate is called by the ELF reader right before marshaling the +// instruction stream. It performs last-minute adjustments to the program and +// runs some sanity checks before sending it off to the kernel. +func fixupAndValidate(insns asm.Instructions) error { + iter := insns.Iterate() for iter.Next() { - i := iter.Index - offset := iter.Offset ins := iter.Ins - if ins.Reference == "" { - continue + // Map load was tagged with a Reference, but does not contain a Map pointer. 
+ if ins.IsLoadFromMap() && ins.Reference() != "" && ins.Map() == nil { + return fmt.Errorf("instruction %d: map %s: %w", iter.Index, ins.Reference(), asm.ErrUnsatisfiedMapReference) } - switch { - case ins.IsFunctionCall() && ins.Constant == -1: - // Rewrite bpf to bpf call - callOffset, ok := symbolOffsets[ins.Reference] - if !ok { - return fmt.Errorf("call at %d: reference to missing symbol %q", i, ins.Reference) - } + fixupProbeReadKernel(ins) + } - ins.Constant = int64(callOffset - offset - 1) + return nil +} - case ins.OpCode.Class() == asm.JumpClass && ins.Offset == -1: - // Rewrite jump to label - jumpOffset, ok := symbolOffsets[ins.Reference] - if !ok { - return fmt.Errorf("jump at %d: reference to missing symbol %q", i, ins.Reference) - } +// fixupProbeReadKernel replaces calls to bpf_probe_read_{kernel,user}(_str) +// with bpf_probe_read(_str) on kernels that don't support it yet. +func fixupProbeReadKernel(ins *asm.Instruction) { + if !ins.IsBuiltinCall() { + return + } - ins.Offset = int16(jumpOffset - offset - 1) + // Kernel supports bpf_probe_read_kernel, nothing to do. 
+ if haveProbeReadKernel() == nil { + return + } - case ins.IsLoadFromMap() && ins.MapPtr() == -1: - return fmt.Errorf("map %s: %w", ins.Reference, errUnsatisfiedReference) - } + switch asm.BuiltinFunc(ins.Constant) { + case asm.FnProbeReadKernel, asm.FnProbeReadUser: + ins.Constant = int64(asm.FnProbeRead) + case asm.FnProbeReadKernelStr, asm.FnProbeReadUserStr: + ins.Constant = int64(asm.FnProbeReadStr) } +} - // fixupBPFCalls replaces bpf_probe_read_{kernel,user}[_str] with bpf_probe_read[_str] on older kernels - // https://github.com/libbpf/libbpf/blob/master/src/libbpf.c#L6009 - iter = insns.Iterate() - for iter.Next() { - ins := iter.Ins - if !ins.IsBuiltinCall() { - continue - } - switch asm.BuiltinFunc(ins.Constant) { - case asm.FnProbeReadKernel, asm.FnProbeReadUser: - if err := haveProbeReadKernel(); err != nil { - ins.Constant = int64(asm.FnProbeRead) - } - case asm.FnProbeReadKernelStr, asm.FnProbeReadUserStr: - if err := haveProbeReadKernel(); err != nil { - ins.Constant = int64(asm.FnProbeReadStr) - } - } +var kernelBTF struct { + sync.Mutex + spec *btf.Spec +} + +// maybeLoadKernelBTF loads the current kernel's BTF if spec is nil, otherwise +// it returns spec unchanged. +// +// The kernel BTF is cached for the lifetime of the process. 
+func maybeLoadKernelBTF(spec *btf.Spec) (*btf.Spec, error) { + if spec != nil { + return spec, nil } - return nil + kernelBTF.Lock() + defer kernelBTF.Unlock() + + if kernelBTF.spec != nil { + return kernelBTF.spec, nil + } + + var err error + kernelBTF.spec, err = btf.LoadKernelSpec() + return kernelBTF.spec, err } diff --git a/cluster-autoscaler/vendor/github.com/cilium/ebpf/map.go b/cluster-autoscaler/vendor/github.com/cilium/ebpf/map.go index cca387ead019..e4a6c87e924b 100644 --- a/cluster-autoscaler/vendor/github.com/cilium/ebpf/map.go +++ b/cluster-autoscaler/vendor/github.com/cilium/ebpf/map.go @@ -5,12 +5,15 @@ import ( "errors" "fmt" "io" + "math/rand" "path/filepath" "reflect" - "strings" + "time" + "unsafe" + "github.com/cilium/ebpf/btf" "github.com/cilium/ebpf/internal" - "github.com/cilium/ebpf/internal/btf" + "github.com/cilium/ebpf/internal/sys" "github.com/cilium/ebpf/internal/unix" ) @@ -19,7 +22,8 @@ var ( ErrKeyNotExist = errors.New("key does not exist") ErrKeyExist = errors.New("key already exists") ErrIterationAborted = errors.New("iteration aborted") - ErrMapIncompatible = errors.New("map's spec is incompatible with pinned map") + ErrMapIncompatible = errors.New("map spec is incompatible with existing map") + errMapNoBTFValue = errors.New("map spec does not contain a BTF Value") ) // MapOptions control loading a map into the kernel. @@ -67,12 +71,15 @@ type MapSpec struct { InnerMap *MapSpec // Extra trailing bytes found in the ELF map definition when using structs - // larger than libbpf's bpf_map_def. Must be empty before instantiating - // the MapSpec into a Map. - Extra bytes.Reader + // larger than libbpf's bpf_map_def. nil if no trailing bytes were present. + // Must be nil or empty before instantiating the MapSpec into a Map. + Extra *bytes.Reader + + // The key and value type of this map. May be nil. + Key, Value btf.Type // The BTF associated with this map. 
- BTF *btf.Map + BTF *btf.Spec } func (ms *MapSpec) String() string { @@ -97,6 +104,12 @@ func (ms *MapSpec) Copy() *MapSpec { return &cpy } +// hasBTF returns true if the MapSpec has a valid BTF spec and if its +// map type supports associated BTF metadata in the kernel. +func (ms *MapSpec) hasBTF() bool { + return ms.BTF != nil && ms.Type.hasBTF() +} + func (ms *MapSpec) clampPerfEventArraySize() error { if ms.Type != PerfEventArray { return nil @@ -114,6 +127,31 @@ func (ms *MapSpec) clampPerfEventArraySize() error { return nil } +// dataSection returns the contents and BTF Datasec descriptor of the spec. +func (ms *MapSpec) dataSection() ([]byte, *btf.Datasec, error) { + + if ms.Value == nil { + return nil, nil, errMapNoBTFValue + } + + ds, ok := ms.Value.(*btf.Datasec) + if !ok { + return nil, nil, fmt.Errorf("map value BTF is a %T, not a *btf.Datasec", ms.Value) + } + + if n := len(ms.Contents); n != 1 { + return nil, nil, fmt.Errorf("expected one key, found %d", n) + } + + kv := ms.Contents[0] + value, ok := kv.Value.([]byte) + if !ok { + return nil, nil, fmt.Errorf("value at first map key is %T, not []byte", kv.Value) + } + + return value, ds, nil +} + // MapKV is used to initialize the contents of a Map. type MapKV struct { Key interface{} @@ -131,7 +169,8 @@ func (ms *MapSpec) checkCompatibility(m *Map) error { case m.valueSize != ms.ValueSize: return fmt.Errorf("expected value size %v, got %v: %w", ms.ValueSize, m.valueSize, ErrMapIncompatible) - case m.maxEntries != ms.MaxEntries: + case !(ms.Type == PerfEventArray && ms.MaxEntries == 0) && + m.maxEntries != ms.MaxEntries: return fmt.Errorf("expected max entries %v, got %v: %w", ms.MaxEntries, m.maxEntries, ErrMapIncompatible) case m.flags != ms.Flags: @@ -151,7 +190,7 @@ func (ms *MapSpec) checkCompatibility(m *Map) error { // if you require custom encoding. 
type Map struct { name string - fd *internal.FD + fd *sys.FD typ MapType keySize uint32 valueSize uint32 @@ -166,18 +205,19 @@ type Map struct { // // You should not use fd after calling this function. func NewMapFromFD(fd int) (*Map, error) { - if fd < 0 { - return nil, errors.New("invalid fd") + f, err := sys.NewFD(fd) + if err != nil { + return nil, err } - return newMapFromFD(internal.NewFD(uint32(fd))) + return newMapFromFD(f) } -func newMapFromFD(fd *internal.FD) (*Map, error) { +func newMapFromFD(fd *sys.FD) (*Map, error) { info, err := newMapInfoFromFd(fd) if err != nil { fd.Close() - return nil, fmt.Errorf("get map info: %s", err) + return nil, fmt.Errorf("get map info: %w", err) } return newMap(fd, info.Name, info.Type, info.KeySize, info.ValueSize, info.MaxEntries, info.Flags) @@ -209,8 +249,8 @@ func NewMapWithOptions(spec *MapSpec, opts MapOptions) (*Map, error) { return nil, fmt.Errorf("creating map: %w", err) } - err = m.finalize(spec) - if err != nil { + if err := m.finalize(spec); err != nil { + m.Close() return nil, fmt.Errorf("populating map: %w", err) } @@ -257,7 +297,7 @@ func newMapWithOptions(spec *MapSpec, opts MapOptions, handles *handleCache) (_ return nil, fmt.Errorf("pin type %d: %w", int(spec.Pinning), ErrNotSupported) } - var innerFd *internal.FD + var innerFd *sys.FD if spec.Type == ArrayOfMaps || spec.Type == HashOfMaps { if spec.InnerMap == nil { return nil, fmt.Errorf("%s requires InnerMap", spec.Type) @@ -288,7 +328,7 @@ func newMapWithOptions(spec *MapSpec, opts MapOptions, handles *handleCache) (_ if spec.Pinning == PinByName { path := filepath.Join(opts.PinPath, spec.Name) if err := m.Pin(path); err != nil { - return nil, fmt.Errorf("pin map: %s", err) + return nil, fmt.Errorf("pin map: %w", err) } } @@ -297,7 +337,7 @@ func newMapWithOptions(spec *MapSpec, opts MapOptions, handles *handleCache) (_ // createMap validates the spec's properties and creates the map in the kernel // using the given opts. 
It does not populate or freeze the map. -func (spec *MapSpec) createMap(inner *internal.FD, opts MapOptions, handles *handleCache) (_ *Map, err error) { +func (spec *MapSpec) createMap(inner *sys.FD, opts MapOptions, handles *handleCache) (_ *Map, err error) { closeOnError := func(closer io.Closer) { if err != nil { closer.Close() @@ -310,8 +350,10 @@ func (spec *MapSpec) createMap(inner *internal.FD, opts MapOptions, handles *han // additional 'inner_map_idx' and later 'numa_node' fields. // In order to support loading these definitions, tolerate the presence of // extra bytes, but require them to be zeroes. - if _, err := io.Copy(internal.DiscardZeroes{}, &spec.Extra); err != nil { - return nil, errors.New("extra contains unhandled non-zero bytes, drain before creating map") + if spec.Extra != nil { + if _, err := io.Copy(internal.DiscardZeroes{}, spec.Extra); err != nil { + return nil, errors.New("extra contains unhandled non-zero bytes, drain before creating map") + } } switch spec.Type { @@ -360,51 +402,63 @@ func (spec *MapSpec) createMap(inner *internal.FD, opts MapOptions, handles *han return nil, fmt.Errorf("map create: %w", err) } } + if spec.Flags&unix.BPF_F_NO_PREALLOC > 0 { + if err := haveNoPreallocMaps(); err != nil { + return nil, fmt.Errorf("map create: %w", err) + } + } - attr := internal.BPFMapCreateAttr{ - MapType: uint32(spec.Type), + attr := sys.MapCreateAttr{ + MapType: sys.MapType(spec.Type), KeySize: spec.KeySize, ValueSize: spec.ValueSize, MaxEntries: spec.MaxEntries, - Flags: spec.Flags, + MapFlags: spec.Flags, NumaNode: spec.NumaNode, } if inner != nil { - var err error - attr.InnerMapFd, err = inner.Value() - if err != nil { - return nil, fmt.Errorf("map create: %w", err) - } + attr.InnerMapFd = inner.Uint() } if haveObjName() == nil { - attr.MapName = internal.NewBPFObjName(spec.Name) + attr.MapName = sys.NewObjName(spec.Name) } - var btfDisabled bool - if spec.BTF != nil { - handle, err := handles.btfHandle(spec.BTF.Spec) - 
btfDisabled = errors.Is(err, btf.ErrNotSupported) - if err != nil && !btfDisabled { + if spec.hasBTF() { + handle, err := handles.btfHandle(spec.BTF) + if err != nil && !errors.Is(err, btf.ErrNotSupported) { return nil, fmt.Errorf("load BTF: %w", err) } if handle != nil { - attr.BTFFd = uint32(handle.FD()) - attr.BTFKeyTypeID = uint32(spec.BTF.Key.ID()) - attr.BTFValueTypeID = uint32(spec.BTF.Value.ID()) + keyTypeID, err := spec.BTF.TypeID(spec.Key) + if err != nil { + return nil, err + } + + valueTypeID, err := spec.BTF.TypeID(spec.Value) + if err != nil { + return nil, err + } + + attr.BtfFd = uint32(handle.FD()) + attr.BtfKeyTypeId = uint32(keyTypeID) + attr.BtfValueTypeId = uint32(valueTypeID) } } - fd, err := internal.BPFMapCreate(&attr) + fd, err := sys.MapCreate(&attr) if err != nil { if errors.Is(err, unix.EPERM) { - return nil, fmt.Errorf("map create: %w (MEMLOCK bay be too low, consider rlimit.RemoveMemlock)", err) + return nil, fmt.Errorf("map create: %w (MEMLOCK may be too low, consider rlimit.RemoveMemlock)", err) } - if btfDisabled { + if !spec.hasBTF() { return nil, fmt.Errorf("map create without BTF: %w", err) } + if errors.Is(err, unix.EINVAL) && attr.MaxEntries == 0 { + return nil, fmt.Errorf("map create: %w (MaxEntries may be incorrectly set to zero)", err) + } return nil, fmt.Errorf("map create: %w", err) } defer closeOnError(fd) @@ -419,7 +473,7 @@ func (spec *MapSpec) createMap(inner *internal.FD, opts MapOptions, handles *han // newMap allocates and returns a new Map structure. // Sets the fullValueSize on per-CPU maps. -func newMap(fd *internal.FD, name string, typ MapType, keySize, valueSize, maxEntries, flags uint32) (*Map, error) { +func newMap(fd *sys.FD, name string, typ MapType, keySize, valueSize, maxEntries, flags uint32) (*Map, error) { m := &Map{ name, fd, @@ -482,6 +536,12 @@ func (m *Map) Info() (*MapInfo, error) { return newMapInfoFromFd(m.fd) } +// MapLookupFlags controls the behaviour of the map lookup calls. 
+type MapLookupFlags uint64 + +// LookupLock look up the value of a spin-locked map. +const LookupLock MapLookupFlags = 4 + // Lookup retrieves a value from a Map. // // Calls Close() on valueOut if it is of type **Map or **Program, @@ -490,39 +550,58 @@ func (m *Map) Info() (*MapInfo, error) { // Returns an error if the key doesn't exist, see ErrKeyNotExist. func (m *Map) Lookup(key, valueOut interface{}) error { valuePtr, valueBytes := makeBuffer(valueOut, m.fullValueSize) - if err := m.lookup(key, valuePtr); err != nil { + if err := m.lookup(key, valuePtr, 0); err != nil { return err } return m.unmarshalValue(valueOut, valueBytes) } -// LookupAndDelete retrieves and deletes a value from a Map. +// LookupWithFlags retrieves a value from a Map with flags. // -// Returns ErrKeyNotExist if the key doesn't exist. -func (m *Map) LookupAndDelete(key, valueOut interface{}) error { +// Passing LookupLock flag will look up the value of a spin-locked +// map without returning the lock. This must be specified if the +// elements contain a spinlock. +// +// Calls Close() on valueOut if it is of type **Map or **Program, +// and *valueOut is not nil. +// +// Returns an error if the key doesn't exist, see ErrKeyNotExist. +func (m *Map) LookupWithFlags(key, valueOut interface{}, flags MapLookupFlags) error { valuePtr, valueBytes := makeBuffer(valueOut, m.fullValueSize) - - keyPtr, err := m.marshalKey(key) - if err != nil { - return fmt.Errorf("can't marshal key: %w", err) - } - - if err := bpfMapLookupAndDelete(m.fd, keyPtr, valuePtr); err != nil { - return fmt.Errorf("lookup and delete failed: %w", err) + if err := m.lookup(key, valuePtr, flags); err != nil { + return err } return m.unmarshalValue(valueOut, valueBytes) } +// LookupAndDelete retrieves and deletes a value from a Map. +// +// Returns ErrKeyNotExist if the key doesn't exist. 
+func (m *Map) LookupAndDelete(key, valueOut interface{}) error { + return m.lookupAndDelete(key, valueOut, 0) +} + +// LookupAndDeleteWithFlags retrieves and deletes a value from a Map. +// +// Passing LookupLock flag will look up and delete the value of a spin-locked +// map without returning the lock. This must be specified if the elements +// contain a spinlock. +// +// Returns ErrKeyNotExist if the key doesn't exist. +func (m *Map) LookupAndDeleteWithFlags(key, valueOut interface{}, flags MapLookupFlags) error { + return m.lookupAndDelete(key, valueOut, flags) +} + // LookupBytes gets a value from Map. // // Returns a nil value if a key doesn't exist. func (m *Map) LookupBytes(key interface{}) ([]byte, error) { valueBytes := make([]byte, m.fullValueSize) - valuePtr := internal.NewSlicePointer(valueBytes) + valuePtr := sys.NewSlicePointer(valueBytes) - err := m.lookup(key, valuePtr) + err := m.lookup(key, valuePtr, 0) if errors.Is(err, ErrKeyNotExist) { return nil, nil } @@ -530,18 +609,47 @@ func (m *Map) LookupBytes(key interface{}) ([]byte, error) { return valueBytes, err } -func (m *Map) lookup(key interface{}, valueOut internal.Pointer) error { +func (m *Map) lookup(key interface{}, valueOut sys.Pointer, flags MapLookupFlags) error { keyPtr, err := m.marshalKey(key) if err != nil { return fmt.Errorf("can't marshal key: %w", err) } - if err = bpfMapLookupElem(m.fd, keyPtr, valueOut); err != nil { - return fmt.Errorf("lookup failed: %w", err) + attr := sys.MapLookupElemAttr{ + MapFd: m.fd.Uint(), + Key: keyPtr, + Value: valueOut, + Flags: uint64(flags), + } + + if err = sys.MapLookupElem(&attr); err != nil { + return fmt.Errorf("lookup: %w", wrapMapError(err)) } return nil } +func (m *Map) lookupAndDelete(key, valueOut interface{}, flags MapLookupFlags) error { + valuePtr, valueBytes := makeBuffer(valueOut, m.fullValueSize) + + keyPtr, err := m.marshalKey(key) + if err != nil { + return fmt.Errorf("can't marshal key: %w", err) + } + + attr := 
sys.MapLookupAndDeleteElemAttr{ + MapFd: m.fd.Uint(), + Key: keyPtr, + Value: valuePtr, + Flags: uint64(flags), + } + + if err := sys.MapLookupAndDeleteElem(&attr); err != nil { + return fmt.Errorf("lookup and delete: %w", wrapMapError(err)) + } + + return m.unmarshalValue(valueOut, valueBytes) +} + // MapUpdateFlags controls the behaviour of the Map.Update call. // // The exact semantics depend on the specific MapType. @@ -554,6 +662,8 @@ const ( UpdateNoExist MapUpdateFlags = 1 << (iota - 1) // UpdateExist updates an existing element. UpdateExist + // UpdateLock updates elements under bpf_spin_lock. + UpdateLock ) // Put replaces or creates a value in map. @@ -575,8 +685,15 @@ func (m *Map) Update(key, value interface{}, flags MapUpdateFlags) error { return fmt.Errorf("can't marshal value: %w", err) } - if err = bpfMapUpdateElem(m.fd, keyPtr, valuePtr, uint64(flags)); err != nil { - return fmt.Errorf("update failed: %w", err) + attr := sys.MapUpdateElemAttr{ + MapFd: m.fd.Uint(), + Key: keyPtr, + Value: valuePtr, + Flags: uint64(flags), + } + + if err = sys.MapUpdateElem(&attr); err != nil { + return fmt.Errorf("update: %w", wrapMapError(err)) } return nil @@ -591,8 +708,13 @@ func (m *Map) Delete(key interface{}) error { return fmt.Errorf("can't marshal key: %w", err) } - if err = bpfMapDeleteElem(m.fd, keyPtr); err != nil { - return fmt.Errorf("delete failed: %w", err) + attr := sys.MapDeleteElemAttr{ + MapFd: m.fd.Uint(), + Key: keyPtr, + } + + if err = sys.MapDeleteElem(&attr); err != nil { + return fmt.Errorf("delete: %w", wrapMapError(err)) } return nil } @@ -624,7 +746,7 @@ func (m *Map) NextKey(key, nextKeyOut interface{}) error { // Returns nil if there are no more keys. 
func (m *Map) NextKeyBytes(key interface{}) ([]byte, error) { nextKey := make([]byte, m.keySize) - nextKeyPtr := internal.NewSlicePointer(nextKey) + nextKeyPtr := sys.NewSlicePointer(nextKey) err := m.nextKey(key, nextKeyPtr) if errors.Is(err, ErrKeyNotExist) { @@ -634,9 +756,9 @@ func (m *Map) NextKeyBytes(key interface{}) ([]byte, error) { return nextKey, err } -func (m *Map) nextKey(key interface{}, nextKeyOut internal.Pointer) error { +func (m *Map) nextKey(key interface{}, nextKeyOut sys.Pointer) error { var ( - keyPtr internal.Pointer + keyPtr sys.Pointer err error ) @@ -647,12 +769,77 @@ func (m *Map) nextKey(key interface{}, nextKeyOut internal.Pointer) error { } } - if err = bpfMapGetNextKey(m.fd, keyPtr, nextKeyOut); err != nil { - return fmt.Errorf("next key failed: %w", err) + attr := sys.MapGetNextKeyAttr{ + MapFd: m.fd.Uint(), + Key: keyPtr, + NextKey: nextKeyOut, + } + + if err = sys.MapGetNextKey(&attr); err != nil { + // Kernels 4.4.131 and earlier return EFAULT instead of a pointer to the + // first map element when a nil key pointer is specified. + if key == nil && errors.Is(err, unix.EFAULT) { + var guessKey []byte + guessKey, err = m.guessNonExistentKey() + if err != nil { + return err + } + + // Retry the syscall with a valid non-existing key. + attr.Key = sys.NewSlicePointer(guessKey) + if err = sys.MapGetNextKey(&attr); err == nil { + return nil + } + } + + return fmt.Errorf("next key: %w", wrapMapError(err)) } + return nil } +// guessNonExistentKey attempts to perform a map lookup that returns ENOENT. +// This is necessary on kernels before 4.4.132, since those don't support +// iterating maps from the start by providing an invalid key pointer. +func (m *Map) guessNonExistentKey() ([]byte, error) { + // Provide an invalid value pointer to prevent a copy on the kernel side. 
+ valuePtr := sys.NewPointer(unsafe.Pointer(^uintptr(0))) + randKey := make([]byte, int(m.keySize)) + + for i := 0; i < 4; i++ { + switch i { + // For hash maps, the 0 key is less likely to be occupied. They're often + // used for storing data related to pointers, and their access pattern is + // generally scattered across the keyspace. + case 0: + // An all-0xff key is guaranteed to be out of bounds of any array, since + // those have a fixed key size of 4 bytes. The only corner case being + // arrays with 2^32 max entries, but those are prohibitively expensive + // in many environments. + case 1: + for r := range randKey { + randKey[r] = 0xff + } + // Inspired by BCC, 0x55 is an alternating binary pattern (0101), so + // is unlikely to be taken. + case 2: + for r := range randKey { + randKey[r] = 0x55 + } + // Last ditch effort, generate a random key. + case 3: + rand.New(rand.NewSource(time.Now().UnixNano())).Read(randKey) + } + + err := m.lookup(randKey, valuePtr, 0) + if errors.Is(err, ErrKeyNotExist) { + return randKey, nil + } + } + + return nil, errors.New("couldn't find non-existing key") +} + // BatchLookup looks up many elements in a map at once. // // "keysOut" and "valuesOut" must be of type slice, a pointer @@ -664,7 +851,7 @@ func (m *Map) nextKey(key interface{}, nextKeyOut internal.Pointer) error { // the end of all possible results, even when partial results // are returned. It should be used to evaluate when lookup is "done". 
func (m *Map) BatchLookup(prevKey, nextKeyOut, keysOut, valuesOut interface{}, opts *BatchOptions) (int, error) { - return m.batchLookup(internal.BPF_MAP_LOOKUP_BATCH, prevKey, nextKeyOut, keysOut, valuesOut, opts) + return m.batchLookup(sys.BPF_MAP_LOOKUP_BATCH, prevKey, nextKeyOut, keysOut, valuesOut, opts) } // BatchLookupAndDelete looks up many elements in a map at once, @@ -679,10 +866,10 @@ func (m *Map) BatchLookup(prevKey, nextKeyOut, keysOut, valuesOut interface{}, o // the end of all possible results, even when partial results // are returned. It should be used to evaluate when lookup is "done". func (m *Map) BatchLookupAndDelete(prevKey, nextKeyOut, keysOut, valuesOut interface{}, opts *BatchOptions) (int, error) { - return m.batchLookup(internal.BPF_MAP_LOOKUP_AND_DELETE_BATCH, prevKey, nextKeyOut, keysOut, valuesOut, opts) + return m.batchLookup(sys.BPF_MAP_LOOKUP_AND_DELETE_BATCH, prevKey, nextKeyOut, keysOut, valuesOut, opts) } -func (m *Map) batchLookup(cmd internal.BPFCmd, startKey, nextKeyOut, keysOut, valuesOut interface{}, opts *BatchOptions) (int, error) { +func (m *Map) batchLookup(cmd sys.Cmd, startKey, nextKeyOut, keysOut, valuesOut interface{}, opts *BatchOptions) (int, error) { if err := haveBatchAPI(); err != nil { return 0, err } @@ -702,29 +889,36 @@ func (m *Map) batchLookup(cmd internal.BPFCmd, startKey, nextKeyOut, keysOut, va return 0, fmt.Errorf("keysOut and valuesOut must be the same length") } keyBuf := make([]byte, count*int(m.keySize)) - keyPtr := internal.NewSlicePointer(keyBuf) + keyPtr := sys.NewSlicePointer(keyBuf) valueBuf := make([]byte, count*int(m.fullValueSize)) - valuePtr := internal.NewSlicePointer(valueBuf) + valuePtr := sys.NewSlicePointer(valueBuf) + nextPtr, nextBuf := makeBuffer(nextKeyOut, int(m.keySize)) - var ( - startPtr internal.Pointer - err error - retErr error - ) + attr := sys.MapLookupBatchAttr{ + MapFd: m.fd.Uint(), + Keys: keyPtr, + Values: valuePtr, + Count: uint32(count), + OutBatch: nextPtr, + } + 
+ if opts != nil { + attr.ElemFlags = opts.ElemFlags + attr.Flags = opts.Flags + } + + var err error if startKey != nil { - startPtr, err = marshalPtr(startKey, int(m.keySize)) + attr.InBatch, err = marshalPtr(startKey, int(m.keySize)) if err != nil { return 0, err } } - nextPtr, nextBuf := makeBuffer(nextKeyOut, int(m.keySize)) - ct, err := bpfMapBatch(cmd, m.fd, startPtr, nextPtr, keyPtr, valuePtr, uint32(count), opts) - if err != nil { - if !errors.Is(err, ErrKeyNotExist) { - return 0, err - } - retErr = ErrKeyNotExist + _, sysErr := sys.BPF(cmd, unsafe.Pointer(&attr), unsafe.Sizeof(attr)) + sysErr = wrapMapError(sysErr) + if sysErr != nil && !errors.Is(sysErr, unix.ENOENT) { + return 0, sysErr } err = m.unmarshalKey(nextKeyOut, nextBuf) @@ -737,9 +931,10 @@ func (m *Map) batchLookup(cmd internal.BPFCmd, startKey, nextKeyOut, keysOut, va } err = unmarshalBytes(valuesOut, valueBuf) if err != nil { - retErr = err + return 0, err } - return int(ct), retErr + + return int(attr.Count), sysErr } // BatchUpdate updates the map with multiple keys and values @@ -763,7 +958,7 @@ func (m *Map) BatchUpdate(keys, values interface{}, opts *BatchOptions) (int, er } var ( count = keysValue.Len() - valuePtr internal.Pointer + valuePtr sys.Pointer err error ) if count != valuesValue.Len() { @@ -777,9 +972,24 @@ func (m *Map) BatchUpdate(keys, values interface{}, opts *BatchOptions) (int, er if err != nil { return 0, err } - var nilPtr internal.Pointer - ct, err := bpfMapBatch(internal.BPF_MAP_UPDATE_BATCH, m.fd, nilPtr, nilPtr, keyPtr, valuePtr, uint32(count), opts) - return int(ct), err + + attr := sys.MapUpdateBatchAttr{ + MapFd: m.fd.Uint(), + Keys: keyPtr, + Values: valuePtr, + Count: uint32(count), + } + if opts != nil { + attr.ElemFlags = opts.ElemFlags + attr.Flags = opts.Flags + } + + err = sys.MapUpdateBatch(&attr) + if err != nil { + return int(attr.Count), fmt.Errorf("batch update: %w", wrapMapError(err)) + } + + return int(attr.Count), nil } // BatchDelete batch 
deletes entries in the map by keys. @@ -800,9 +1010,23 @@ func (m *Map) BatchDelete(keys interface{}, opts *BatchOptions) (int, error) { if err != nil { return 0, fmt.Errorf("cannot marshal keys: %v", err) } - var nilPtr internal.Pointer - ct, err := bpfMapBatch(internal.BPF_MAP_DELETE_BATCH, m.fd, nilPtr, nilPtr, keyPtr, nilPtr, uint32(count), opts) - return int(ct), err + + attr := sys.MapDeleteBatchAttr{ + MapFd: m.fd.Uint(), + Keys: keyPtr, + Count: uint32(count), + } + + if opts != nil { + attr.ElemFlags = opts.ElemFlags + attr.Flags = opts.Flags + } + + if err = sys.MapDeleteBatch(&attr); err != nil { + return int(attr.Count), fmt.Errorf("batch delete: %w", wrapMapError(err)) + } + + return int(attr.Count), nil } // Iterate traverses a map. @@ -815,7 +1039,8 @@ func (m *Map) Iterate() *MapIterator { return newMapIterator(m) } -// Close removes a Map +// Close the Map's underlying file descriptor, which could unload the +// Map from the kernel if it is not pinned or in use by a loaded Program. func (m *Map) Close() error { if m == nil { // This makes it easier to clean up when iterating maps @@ -830,14 +1055,7 @@ func (m *Map) Close() error { // // Calling this function is invalid after Close has been called. func (m *Map) FD() int { - fd, err := m.fd.Value() - if err != nil { - // Best effort: -1 is the number most likely to be an - // invalid file descriptor. - return -1 - } - - return int(fd) + return m.fd.Int() } // Clone creates a duplicate of the Map. 
@@ -912,7 +1130,11 @@ func (m *Map) Freeze() error { return fmt.Errorf("can't freeze map: %w", err) } - if err := bpfMapFreeze(m.fd); err != nil { + attr := sys.MapFreezeAttr{ + MapFd: m.fd.Uint(), + } + + if err := sys.MapFreeze(&attr); err != nil { return fmt.Errorf("can't freeze map: %w", err) } return nil @@ -936,13 +1158,13 @@ func (m *Map) finalize(spec *MapSpec) error { return nil } -func (m *Map) marshalKey(data interface{}) (internal.Pointer, error) { +func (m *Map) marshalKey(data interface{}) (sys.Pointer, error) { if data == nil { if m.keySize == 0 { // Queues have a key length of zero, so passing nil here is valid. - return internal.NewPointer(nil), nil + return sys.NewPointer(nil), nil } - return internal.Pointer{}, errors.New("can't use nil as key of map") + return sys.Pointer{}, errors.New("can't use nil as key of map") } return marshalPtr(data, int(m.keySize)) @@ -957,7 +1179,7 @@ func (m *Map) unmarshalKey(data interface{}, buf []byte) error { return unmarshalBytes(data, buf) } -func (m *Map) marshalValue(data interface{}) (internal.Pointer, error) { +func (m *Map) marshalValue(data interface{}) (sys.Pointer, error) { if m.typ.hasPerCPUValue() { return marshalPerCPUValue(data, int(m.valueSize)) } @@ -970,13 +1192,13 @@ func (m *Map) marshalValue(data interface{}) (internal.Pointer, error) { switch value := data.(type) { case *Map: if !m.typ.canStoreMap() { - return internal.Pointer{}, fmt.Errorf("can't store map in %s", m.typ) + return sys.Pointer{}, fmt.Errorf("can't store map in %s", m.typ) } buf, err = marshalMap(value, int(m.valueSize)) case *Program: if !m.typ.canStoreProgram() { - return internal.Pointer{}, fmt.Errorf("can't store program in %s", m.typ) + return sys.Pointer{}, fmt.Errorf("can't store program in %s", m.typ) } buf, err = marshalProgram(value, int(m.valueSize)) @@ -985,10 +1207,10 @@ func (m *Map) marshalValue(data interface{}) (internal.Pointer, error) { } if err != nil { - return internal.Pointer{}, err + return 
sys.Pointer{}, err } - return internal.NewSlicePointer(buf), nil + return sys.NewSlicePointer(buf), nil } func (m *Map) unmarshalValue(value interface{}, buf []byte) error { @@ -1052,7 +1274,10 @@ func (m *Map) unmarshalValue(value interface{}, buf []byte) error { // LoadPinnedMap loads a Map from a BPF file. func LoadPinnedMap(fileName string, opts *LoadPinOptions) (*Map, error) { - fd, err := internal.BPFObjGet(fileName, opts.Marshal()) + fd, err := sys.ObjGet(&sys.ObjGetAttr{ + Pathname: sys.NewStringPointer(fileName), + FileFlags: opts.Marshal(), + }) if err != nil { return nil, err } @@ -1081,70 +1306,11 @@ func marshalMap(m *Map, length int) ([]byte, error) { return nil, fmt.Errorf("can't marshal map to %d bytes", length) } - fd, err := m.fd.Value() - if err != nil { - return nil, err - } - buf := make([]byte, 4) - internal.NativeEndian.PutUint32(buf, fd) + internal.NativeEndian.PutUint32(buf, m.fd.Uint()) return buf, nil } -func patchValue(value []byte, typ btf.Type, replacements map[string]interface{}) error { - replaced := make(map[string]bool) - replace := func(name string, offset, size int, replacement interface{}) error { - if offset+size > len(value) { - return fmt.Errorf("%s: offset %d(+%d) is out of bounds", name, offset, size) - } - - buf, err := marshalBytes(replacement, size) - if err != nil { - return fmt.Errorf("marshal %s: %w", name, err) - } - - copy(value[offset:offset+size], buf) - replaced[name] = true - return nil - } - - switch parent := typ.(type) { - case *btf.Datasec: - for _, secinfo := range parent.Vars { - name := string(secinfo.Type.(*btf.Var).Name) - replacement, ok := replacements[name] - if !ok { - continue - } - - err := replace(name, int(secinfo.Offset), int(secinfo.Size), replacement) - if err != nil { - return err - } - } - - default: - return fmt.Errorf("patching %T is not supported", typ) - } - - if len(replaced) == len(replacements) { - return nil - } - - var missing []string - for name := range replacements { - if 
!replaced[name] { - missing = append(missing, name) - } - } - - if len(missing) == 1 { - return fmt.Errorf("unknown field: %s", missing[0]) - } - - return fmt.Errorf("unknown fields: %s", strings.Join(missing, ",")) -} - // MapIterator iterates a Map. // // See Map.Iterate. @@ -1239,29 +1405,20 @@ func (mi *MapIterator) Err() error { // // Returns ErrNotExist, if there is no next eBPF map. func MapGetNextID(startID MapID) (MapID, error) { - id, err := objGetNextID(internal.BPF_MAP_GET_NEXT_ID, uint32(startID)) - return MapID(id), err + attr := &sys.MapGetNextIdAttr{Id: uint32(startID)} + return MapID(attr.NextId), sys.MapGetNextId(attr) } // NewMapFromID returns the map for a given id. // // Returns ErrNotExist, if there is no eBPF map with the given id. func NewMapFromID(id MapID) (*Map, error) { - fd, err := internal.BPFObjGetFDByID(internal.BPF_MAP_GET_FD_BY_ID, uint32(id)) + fd, err := sys.MapGetFdById(&sys.MapGetFdByIdAttr{ + Id: uint32(id), + }) if err != nil { return nil, err } return newMapFromFD(fd) } - -// ID returns the systemwide unique ID of the map. -// -// Deprecated: use MapInfo.ID() instead. -func (m *Map) ID() (MapID, error) { - info, err := bpfGetMapInfoByFD(m.fd) - if err != nil { - return MapID(0), err - } - return MapID(info.id), nil -} diff --git a/cluster-autoscaler/vendor/github.com/cilium/ebpf/marshalers.go b/cluster-autoscaler/vendor/github.com/cilium/ebpf/marshalers.go index e461d673d701..544d17f35e1a 100644 --- a/cluster-autoscaler/vendor/github.com/cilium/ebpf/marshalers.go +++ b/cluster-autoscaler/vendor/github.com/cilium/ebpf/marshalers.go @@ -12,6 +12,7 @@ import ( "unsafe" "github.com/cilium/ebpf/internal" + "github.com/cilium/ebpf/internal/sys" ) // marshalPtr converts an arbitrary value into a pointer suitable @@ -19,17 +20,17 @@ import ( // // As an optimization, it returns the original value if it is an // unsafe.Pointer. 
-func marshalPtr(data interface{}, length int) (internal.Pointer, error) { +func marshalPtr(data interface{}, length int) (sys.Pointer, error) { if ptr, ok := data.(unsafe.Pointer); ok { - return internal.NewPointer(ptr), nil + return sys.NewPointer(ptr), nil } buf, err := marshalBytes(data, length) if err != nil { - return internal.Pointer{}, err + return sys.Pointer{}, err } - return internal.NewSlicePointer(buf), nil + return sys.NewSlicePointer(buf), nil } // marshalBytes converts an arbitrary value into a byte buffer. @@ -73,13 +74,13 @@ func marshalBytes(data interface{}, length int) (buf []byte, err error) { return buf, nil } -func makeBuffer(dst interface{}, length int) (internal.Pointer, []byte) { +func makeBuffer(dst interface{}, length int) (sys.Pointer, []byte) { if ptr, ok := dst.(unsafe.Pointer); ok { - return internal.NewPointer(ptr), nil + return sys.NewPointer(ptr), nil } buf := make([]byte, length) - return internal.NewSlicePointer(buf), buf + return sys.NewSlicePointer(buf), buf } var bytesReaderPool = sync.Pool{ @@ -98,14 +99,7 @@ var bytesReaderPool = sync.Pool{ func unmarshalBytes(data interface{}, buf []byte) error { switch value := data.(type) { case unsafe.Pointer: - var dst []byte - // Use unsafe.Slice when we drop support for pre1.17 (https://github.com/golang/go/issues/19367) - // We could opt for removing unsafe.Pointer support in the lib as well - sh := (*reflect.SliceHeader)(unsafe.Pointer(&dst)) - sh.Data = uintptr(value) - sh.Len = len(buf) - sh.Cap = len(buf) - + dst := unsafe.Slice((*byte)(value), len(buf)) copy(dst, buf) runtime.KeepAlive(value) return nil @@ -164,21 +158,21 @@ func unmarshalBytes(data interface{}, buf []byte) error { // Values are initialized to zero if the slice has less elements than CPUs. // // slice must have a type like []elementType. 
-func marshalPerCPUValue(slice interface{}, elemLength int) (internal.Pointer, error) { +func marshalPerCPUValue(slice interface{}, elemLength int) (sys.Pointer, error) { sliceType := reflect.TypeOf(slice) if sliceType.Kind() != reflect.Slice { - return internal.Pointer{}, errors.New("per-CPU value requires slice") + return sys.Pointer{}, errors.New("per-CPU value requires slice") } possibleCPUs, err := internal.PossibleCPUs() if err != nil { - return internal.Pointer{}, err + return sys.Pointer{}, err } sliceValue := reflect.ValueOf(slice) sliceLen := sliceValue.Len() if sliceLen > possibleCPUs { - return internal.Pointer{}, fmt.Errorf("per-CPU value exceeds number of CPUs") + return sys.Pointer{}, fmt.Errorf("per-CPU value exceeds number of CPUs") } alignedElemLength := internal.Align(elemLength, 8) @@ -188,14 +182,14 @@ func marshalPerCPUValue(slice interface{}, elemLength int) (internal.Pointer, er elem := sliceValue.Index(i).Interface() elemBytes, err := marshalBytes(elem, elemLength) if err != nil { - return internal.Pointer{}, err + return sys.Pointer{}, err } offset := i * alignedElemLength copy(buf[offset:offset+elemLength], elemBytes) } - return internal.NewSlicePointer(buf), nil + return sys.NewSlicePointer(buf), nil } // unmarshalPerCPUValue decodes a buffer into a slice containing one value per diff --git a/cluster-autoscaler/vendor/github.com/cilium/ebpf/prog.go b/cluster-autoscaler/vendor/github.com/cilium/ebpf/prog.go index 3549a3fe3f0c..675edc711d74 100644 --- a/cluster-autoscaler/vendor/github.com/cilium/ebpf/prog.go +++ b/cluster-autoscaler/vendor/github.com/cilium/ebpf/prog.go @@ -5,23 +5,22 @@ import ( "encoding/binary" "errors" "fmt" - "io" "math" "path/filepath" + "runtime" "strings" "time" "github.com/cilium/ebpf/asm" + "github.com/cilium/ebpf/btf" "github.com/cilium/ebpf/internal" - "github.com/cilium/ebpf/internal/btf" + "github.com/cilium/ebpf/internal/sys" "github.com/cilium/ebpf/internal/unix" ) // ErrNotSupported is returned whenever 
the kernel doesn't support a feature. var ErrNotSupported = internal.ErrNotSupported -var errUnsatisfiedReference = errors.New("unsatisfied reference") - // ProgramID represents the unique ID of an eBPF program. type ProgramID uint32 @@ -44,12 +43,13 @@ type ProgramOptions struct { // Controls the output buffer size for the verifier. Defaults to // DefaultVerifierLogSize. LogSize int - // An ELF containing the target BTF for this program. It is used both to - // find the correct function to trace and to apply CO-RE relocations. + // Type information used for CO-RE relocations and when attaching to + // kernel functions. + // // This is useful in environments where the kernel BTF is not available // (containers) or where it is in a non-standard location. Defaults to - // use the kernel BTF from a well-known location. - TargetBTF io.ReaderAt + // use the kernel BTF from a well-known location if nil. + KernelTypes *btf.Spec } // ProgramSpec defines a Program. @@ -59,13 +59,24 @@ type ProgramSpec struct { Name string // Type determines at which hook in the kernel a program will run. - Type ProgramType + Type ProgramType + + // AttachType of the program, needed to differentiate allowed context + // accesses in some newer program types like CGroupSockAddr. + // + // Available on kernels 4.17 and later. AttachType AttachType + // Name of a kernel data structure or function to attach to. Its // interpretation depends on Type and AttachType. AttachTo string + // The program to attach to. Must be provided manually. AttachTarget *Program + + // The name of the ELF section this program originated from. + SectionName string + Instructions asm.Instructions // Flags is passed to the kernel and specifies additional program
- BTF *btf.Program + BTF *btf.Spec // The byte order this program was compiled for, may be nil. ByteOrder binary.ByteOrder @@ -112,6 +123,8 @@ func (ps *ProgramSpec) Tag() (string, error) { return ps.Instructions.Tag(internal.NativeEndian) } +type VerifierError = internal.VerifierError + // Program represents BPF program loaded into the kernel. // // It is not safe to close a Program which is used by other goroutines. @@ -120,7 +133,7 @@ type Program struct { // otherwise it is empty. VerifierLog string - fd *internal.FD + fd *sys.FD name string pinnedPath string typ ProgramType @@ -128,8 +141,7 @@ type Program struct { // NewProgram creates a new Program. // -// Loading a program for the first time will perform -// feature detection by loading small, temporary programs. +// See NewProgramWithOptions for details. func NewProgram(spec *ProgramSpec) (*Program, error) { return NewProgramWithOptions(spec, ProgramOptions{}) } @@ -138,12 +150,19 @@ func NewProgram(spec *ProgramSpec) (*Program, error) { // // Loading a program for the first time will perform // feature detection by loading small, temporary programs. +// +// Returns an error wrapping VerifierError if the program or its BTF is rejected +// by the kernel. 
func NewProgramWithOptions(spec *ProgramSpec, opts ProgramOptions) (*Program, error) { + if spec == nil { + return nil, errors.New("can't load a program from a nil spec") + } + handles := newHandleCache() defer handles.close() prog, err := newProgramWithOptions(spec, opts, handles) - if errors.Is(err, errUnsatisfiedReference) { + if errors.Is(err, asm.ErrUnsatisfiedMapReference) { return nil, fmt.Errorf("cannot load program without loading its whole collection: %w", err) } return prog, err @@ -154,6 +173,10 @@ func newProgramWithOptions(spec *ProgramSpec, opts ProgramOptions, handles *hand return nil, errors.New("instructions cannot be empty") } + if spec.Type == UnspecifiedProgram { + return nil, errors.New("can't load program of unspecified type") + } + if spec.ByteOrder != nil && spec.ByteOrder != internal.NativeEndian { return nil, fmt.Errorf("can't load %s program on %s", spec.ByteOrder, internal.NativeEndian) } @@ -171,114 +194,85 @@ func newProgramWithOptions(spec *ProgramSpec, opts ProgramOptions, handles *hand kv = v.Kernel() } - attr := &internal.BPFProgLoadAttr{ - ProgType: uint32(spec.Type), + attr := &sys.ProgLoadAttr{ + ProgType: sys.ProgType(spec.Type), ProgFlags: spec.Flags, - ExpectedAttachType: uint32(spec.AttachType), - License: internal.NewStringPointer(spec.License), - KernelVersion: kv, + ExpectedAttachType: sys.AttachType(spec.AttachType), + License: sys.NewStringPointer(spec.License), + KernVersion: kv, } if haveObjName() == nil { - attr.ProgName = internal.NewBPFObjName(spec.Name) + attr.ProgName = sys.NewObjName(spec.Name) } - var err error - var targetBTF *btf.Spec - if opts.TargetBTF != nil { - targetBTF, err = handles.btfSpec(opts.TargetBTF) - if err != nil { - return nil, fmt.Errorf("load target BTF: %w", err) - } - } + kernelTypes := opts.KernelTypes + + insns := make(asm.Instructions, len(spec.Instructions)) + copy(insns, spec.Instructions) var btfDisabled bool - var core btf.COREFixups if spec.BTF != nil { - core, err = 
spec.BTF.Fixups(targetBTF) - if err != nil { - return nil, fmt.Errorf("CO-RE relocations: %w", err) + if err := applyRelocations(insns, spec.BTF, kernelTypes); err != nil { + return nil, fmt.Errorf("apply CO-RE relocations: %w", err) } - handle, err := handles.btfHandle(spec.BTF.Spec()) + handle, err := handles.btfHandle(spec.BTF) btfDisabled = errors.Is(err, btf.ErrNotSupported) if err != nil && !btfDisabled { return nil, fmt.Errorf("load BTF: %w", err) } if handle != nil { - attr.ProgBTFFd = uint32(handle.FD()) + attr.ProgBtfFd = uint32(handle.FD()) - recSize, bytes, err := spec.BTF.LineInfos() + fib, lib, err := btf.MarshalExtInfos(insns, spec.BTF.TypeID) if err != nil { - return nil, fmt.Errorf("get BTF line infos: %w", err) + return nil, err } - attr.LineInfoRecSize = recSize - attr.LineInfoCnt = uint32(uint64(len(bytes)) / uint64(recSize)) - attr.LineInfo = internal.NewSlicePointer(bytes) - recSize, bytes, err = spec.BTF.FuncInfos() - if err != nil { - return nil, fmt.Errorf("get BTF function infos: %w", err) - } - attr.FuncInfoRecSize = recSize - attr.FuncInfoCnt = uint32(uint64(len(bytes)) / uint64(recSize)) - attr.FuncInfo = internal.NewSlicePointer(bytes) - } - } + attr.FuncInfoRecSize = btf.FuncInfoSize + attr.FuncInfoCnt = uint32(len(fib)) / btf.FuncInfoSize + attr.FuncInfo = sys.NewSlicePointer(fib) - insns, err := core.Apply(spec.Instructions) - if err != nil { - return nil, fmt.Errorf("CO-RE fixup: %w", err) + attr.LineInfoRecSize = btf.LineInfoSize + attr.LineInfoCnt = uint32(len(lib)) / btf.LineInfoSize + attr.LineInfo = sys.NewSlicePointer(lib) + } } - if err := fixupJumpsAndCalls(insns); err != nil { + if err := fixupAndValidate(insns); err != nil { return nil, err } - buf := bytes.NewBuffer(make([]byte, 0, len(spec.Instructions)*asm.InstructionSize)) - err = insns.Marshal(buf, internal.NativeEndian) + buf := bytes.NewBuffer(make([]byte, 0, insns.Size())) + err := insns.Marshal(buf, internal.NativeEndian) if err != nil { return nil, err } 
bytecode := buf.Bytes() - attr.Instructions = internal.NewSlicePointer(bytecode) - attr.InsCount = uint32(len(bytecode) / asm.InstructionSize) - - if spec.AttachTo != "" { - if spec.AttachTarget != nil { - info, err := spec.AttachTarget.Info() - if err != nil { - return nil, fmt.Errorf("load target BTF: %w", err) - } - - btfID, ok := info.BTFID() - if !ok { - return nil, fmt.Errorf("load target BTF: no BTF info available") - } - btfHandle, err := btf.NewHandleFromID(btfID) - if err != nil { - return nil, fmt.Errorf("load target BTF: %w", err) - } - defer btfHandle.Close() - - targetBTF = btfHandle.Spec() - if err != nil { - return nil, fmt.Errorf("load target BTF: %w", err) - } - } + attr.Insns = sys.NewSlicePointer(bytecode) + attr.InsnCnt = uint32(len(bytecode) / asm.InstructionSize) - target, err := resolveBTFType(targetBTF, spec.AttachTo, spec.Type, spec.AttachType) + if spec.AttachTarget != nil { + targetID, err := findTargetInProgram(spec.AttachTarget, spec.AttachTo, spec.Type, spec.AttachType) if err != nil { - return nil, err + return nil, fmt.Errorf("attach %s/%s: %w", spec.Type, spec.AttachType, err) } - if target != nil { - attr.AttachBTFID = uint32(target.ID()) - } - if spec.AttachTarget != nil { - attr.AttachProgFd = uint32(spec.AttachTarget.FD()) + + attr.AttachBtfId = uint32(targetID) + attr.AttachProgFd = uint32(spec.AttachTarget.FD()) + defer runtime.KeepAlive(spec.AttachTarget) + } else if spec.AttachTo != "" { + targetID, err := findTargetInKernel(kernelTypes, spec.AttachTo, spec.Type, spec.AttachType) + if err != nil && !errors.Is(err, errUnrecognizedAttachType) { + // We ignore errUnrecognizedAttachType since AttachTo may be non-empty + // for programs that don't attach anywhere. 
+ return nil, fmt.Errorf("attach %s/%s: %w", spec.Type, spec.AttachType, err) } + + attr.AttachBtfId = uint32(targetID) } logSize := DefaultVerifierLogSize @@ -291,37 +285,44 @@ func newProgramWithOptions(spec *ProgramSpec, opts ProgramOptions, handles *hand logBuf = make([]byte, logSize) attr.LogLevel = opts.LogLevel attr.LogSize = uint32(len(logBuf)) - attr.LogBuf = internal.NewSlicePointer(logBuf) + attr.LogBuf = sys.NewSlicePointer(logBuf) } - fd, err := internal.BPFProgLoad(attr) + fd, err := sys.ProgLoad(attr) if err == nil { - return &Program{internal.CString(logBuf), fd, spec.Name, "", spec.Type}, nil + return &Program{unix.ByteSliceToString(logBuf), fd, spec.Name, "", spec.Type}, nil } - logErr := err if opts.LogLevel == 0 && opts.LogSize >= 0 { // Re-run with the verifier enabled to get better error messages. logBuf = make([]byte, logSize) attr.LogLevel = 1 attr.LogSize = uint32(len(logBuf)) - attr.LogBuf = internal.NewSlicePointer(logBuf) + attr.LogBuf = sys.NewSlicePointer(logBuf) + _, _ = sys.ProgLoad(attr) + } - fd, logErr = internal.BPFProgLoad(attr) - if logErr == nil { - fd.Close() + switch { + case errors.Is(err, unix.EPERM): + if len(logBuf) > 0 && logBuf[0] == 0 { + // EPERM due to RLIMIT_MEMLOCK happens before the verifier, so we can + // check that the log is empty to reduce false positives. + return nil, fmt.Errorf("load program: %w (MEMLOCK may be too low, consider rlimit.RemoveMemlock)", err) } - } - if errors.Is(logErr, unix.EPERM) && logBuf[0] == 0 { - // EPERM due to RLIMIT_MEMLOCK happens before the verifier, so we can - // check that the log is empty to reduce false positives. 
- return nil, fmt.Errorf("load program: %w (MEMLOCK bay be too low, consider rlimit.RemoveMemlock)", logErr) + fallthrough + + case errors.Is(err, unix.EINVAL): + if hasFunctionReferences(spec.Instructions) { + if err := haveBPFToBPFCalls(); err != nil { + return nil, fmt.Errorf("load program: %w", err) + } + } } - err = internal.ErrorWithLog(err, logBuf, logErr) + err = internal.ErrorWithLog(err, logBuf) if btfDisabled { - return nil, fmt.Errorf("load program without BTF: %w", err) + return nil, fmt.Errorf("load program: %w (BTF disabled)", err) } return nil, fmt.Errorf("load program: %w", err) } @@ -332,18 +333,21 @@ func newProgramWithOptions(spec *ProgramSpec, opts ProgramOptions, handles *hand // // Requires at least Linux 4.10. func NewProgramFromFD(fd int) (*Program, error) { - if fd < 0 { - return nil, errors.New("invalid fd") + f, err := sys.NewFD(fd) + if err != nil { + return nil, err } - return newProgramFromFD(internal.NewFD(uint32(fd))) + return newProgramFromFD(f) } // NewProgramFromID returns the program for a given id. // // Returns ErrNotExist, if there is no eBPF program with the given id. func NewProgramFromID(id ProgramID) (*Program, error) { - fd, err := internal.BPFObjGetFDByID(internal.BPF_PROG_GET_FD_BY_ID, uint32(id)) + fd, err := sys.ProgGetFdById(&sys.ProgGetFdByIdAttr{ + Id: uint32(id), + }) if err != nil { return nil, fmt.Errorf("get program by id: %w", err) } @@ -351,7 +355,7 @@ func NewProgramFromID(id ProgramID) (*Program, error) { return newProgramFromFD(fd) } -func newProgramFromFD(fd *internal.FD) (*Program, error) { +func newProgramFromFD(fd *sys.FD) (*Program, error) { info, err := newProgramInfoFromFd(fd) if err != nil { fd.Close() @@ -380,18 +384,29 @@ func (p *Program) Info() (*ProgramInfo, error) { return newProgramInfoFromFd(p.fd) } -// FD gets the file descriptor of the Program. +// Handle returns a reference to the program's type information in the kernel. 
// -// It is invalid to call this function after Close has been called. -func (p *Program) FD() int { - fd, err := p.fd.Value() +// Returns ErrNotSupported if the kernel has no BTF support, or if there is no +// BTF associated with the program. +func (p *Program) Handle() (*btf.Handle, error) { + info, err := p.Info() if err != nil { - // Best effort: -1 is the number most likely to be an - // invalid file descriptor. - return -1 + return nil, err + } + + id, ok := info.BTFID() + if !ok { + return nil, fmt.Errorf("program %s: retrieve BTF ID: %w", p, ErrNotSupported) } - return int(fd) + return btf.NewHandleFromID(id) +} + +// FD gets the file descriptor of the Program. +// +// It is invalid to call this function after Close has been called. +func (p *Program) FD() int { + return p.fd.Int() } // Clone creates a duplicate of the Program. @@ -445,7 +460,9 @@ func (p *Program) IsPinned() bool { return p.pinnedPath != "" } -// Close unloads the program from the kernel. +// Close the Program's underlying file descriptor, which could unload +// the program from the kernel if it is not pinned or attached to a +// kernel hook. func (p *Program) Close() error { if p == nil { return nil @@ -454,6 +471,28 @@ func (p *Program) Close() error { return p.fd.Close() } +// Various options for Run'ing a Program +type RunOptions struct { + // Program's data input. Required field. + Data []byte + // Program's data after Program has run. Caller must allocate. Optional field. + DataOut []byte + // Program's context input. Optional field. + Context interface{} + // Program's context after Program has run. Must be a pointer or slice. Optional field. + ContextOut interface{} + // Number of times to run Program. Optional field. Defaults to 1. + Repeat uint32 + // Optional flags. + Flags uint32 + // CPU to run Program on. Optional field. + // Note not all program types support this field. 
+ CPU uint32 + // Called whenever the syscall is interrupted, and should be set to testing.B.ResetTimer + // or similar. Typically used during benchmarking. Optional field. + Reset func() +} + // Test runs the Program in the kernel with the given input and returns the // value returned by the eBPF program. outLen may be zero. // @@ -462,11 +501,38 @@ func (p *Program) Close() error { // // This function requires at least Linux 4.12. func (p *Program) Test(in []byte) (uint32, []byte, error) { - ret, out, _, err := p.testRun(in, 1, nil) + // Older kernels ignore the dataSizeOut argument when copying to user space. + // Combined with things like bpf_xdp_adjust_head() we don't really know what the final + // size will be. Hence we allocate an output buffer which we hope will always be large + // enough, and panic if the kernel wrote past the end of the allocation. + // See https://patchwork.ozlabs.org/cover/1006822/ + var out []byte + if len(in) > 0 { + out = make([]byte, len(in)+outputPad) + } + + opts := RunOptions{ + Data: in, + DataOut: out, + Repeat: 1, + } + + ret, _, err := p.testRun(&opts) if err != nil { return ret, nil, fmt.Errorf("can't test program: %w", err) } - return ret, out, nil + return ret, opts.DataOut, nil +} + +// Run runs the Program in kernel with given RunOptions. +// +// Note: the same restrictions from Test apply. +func (p *Program) Run(opts *RunOptions) (uint32, error) { + ret, _, err := p.testRun(opts) + if err != nil { + return ret, fmt.Errorf("can't test program: %w", err) + } + return ret, nil } // Benchmark runs the Program with the given input for a number of times @@ -481,7 +547,17 @@ func (p *Program) Test(in []byte) (uint32, []byte, error) { // // This function requires at least Linux 4.12. 
func (p *Program) Benchmark(in []byte, repeat int, reset func()) (uint32, time.Duration, error) { - ret, _, total, err := p.testRun(in, repeat, reset) + if uint(repeat) > math.MaxUint32 { + return 0, 0, fmt.Errorf("repeat is too high") + } + + opts := RunOptions{ + Data: in, + Repeat: uint32(repeat), + Reset: reset, + } + + ret, total, err := p.testRun(&opts) if err != nil { return ret, total, fmt.Errorf("can't benchmark program: %w", err) } @@ -490,6 +566,7 @@ func (p *Program) Benchmark(in []byte, repeat int, reset func()) (uint32, time.D var haveProgTestRun = internal.FeatureTest("BPF_PROG_TEST_RUN", "4.12", func() error { prog, err := NewProgram(&ProgramSpec{ + // SocketFilter does not require privileges on newer kernels. Type: SocketFilter, Instructions: asm.Instructions{ asm.LoadImm(asm.R0, 0, asm.DWord), @@ -505,88 +582,109 @@ var haveProgTestRun = internal.FeatureTest("BPF_PROG_TEST_RUN", "4.12", func() e // Programs require at least 14 bytes input in := make([]byte, 14) - attr := bpfProgTestRunAttr{ - fd: uint32(prog.FD()), - dataSizeIn: uint32(len(in)), - dataIn: internal.NewSlicePointer(in), + attr := sys.ProgRunAttr{ + ProgFd: uint32(prog.FD()), + DataSizeIn: uint32(len(in)), + DataIn: sys.NewSlicePointer(in), } - err = bpfProgTestRun(&attr) - if errors.Is(err, unix.EINVAL) { + err = sys.ProgRun(&attr) + switch { + case errors.Is(err, unix.EINVAL): // Check for EINVAL specifically, rather than err != nil since we // otherwise misdetect due to insufficient permissions. return internal.ErrNotSupported - } - if errors.Is(err, unix.EINTR) { + + case errors.Is(err, unix.EINTR): // We know that PROG_TEST_RUN is supported if we get EINTR. 
return nil - } - return err -}) -func (p *Program) testRun(in []byte, repeat int, reset func()) (uint32, []byte, time.Duration, error) { - if uint(repeat) > math.MaxUint32 { - return 0, nil, 0, fmt.Errorf("repeat is too high") + case errors.Is(err, unix.ENOTSUPP): + // The first PROG_TEST_RUN patches shipped in 4.12 didn't include + // a test runner for SocketFilter. ENOTSUPP means PROG_TEST_RUN is + // supported, but not for the program type used in the probe. + return nil } - if len(in) == 0 { - return 0, nil, 0, fmt.Errorf("missing input") - } + return err +}) - if uint(len(in)) > math.MaxUint32 { - return 0, nil, 0, fmt.Errorf("input is too long") +func (p *Program) testRun(opts *RunOptions) (uint32, time.Duration, error) { + if uint(len(opts.Data)) > math.MaxUint32 { + return 0, 0, fmt.Errorf("input is too long") } if err := haveProgTestRun(); err != nil { - return 0, nil, 0, err + return 0, 0, err } - // Older kernels ignore the dataSizeOut argument when copying to user space. - // Combined with things like bpf_xdp_adjust_head() we don't really know what the final - // size will be. Hence we allocate an output buffer which we hope will always be large - // enough, and panic if the kernel wrote past the end of the allocation. 
- // See https://patchwork.ozlabs.org/cover/1006822/ - out := make([]byte, len(in)+outputPad) + var ctxBytes []byte + if opts.Context != nil { + ctx := new(bytes.Buffer) + if err := binary.Write(ctx, internal.NativeEndian, opts.Context); err != nil { + return 0, 0, fmt.Errorf("cannot serialize context: %v", err) + } + ctxBytes = ctx.Bytes() + } - fd, err := p.fd.Value() - if err != nil { - return 0, nil, 0, err + var ctxOut []byte + if opts.ContextOut != nil { + ctxOut = make([]byte, binary.Size(opts.ContextOut)) } - attr := bpfProgTestRunAttr{ - fd: fd, - dataSizeIn: uint32(len(in)), - dataSizeOut: uint32(len(out)), - dataIn: internal.NewSlicePointer(in), - dataOut: internal.NewSlicePointer(out), - repeat: uint32(repeat), + attr := sys.ProgRunAttr{ + ProgFd: p.fd.Uint(), + DataSizeIn: uint32(len(opts.Data)), + DataSizeOut: uint32(len(opts.DataOut)), + DataIn: sys.NewSlicePointer(opts.Data), + DataOut: sys.NewSlicePointer(opts.DataOut), + Repeat: uint32(opts.Repeat), + CtxSizeIn: uint32(len(ctxBytes)), + CtxSizeOut: uint32(len(ctxOut)), + CtxIn: sys.NewSlicePointer(ctxBytes), + CtxOut: sys.NewSlicePointer(ctxOut), + Flags: opts.Flags, + Cpu: opts.CPU, } for { - err = bpfProgTestRun(&attr) + err := sys.ProgRun(&attr) if err == nil { break } if errors.Is(err, unix.EINTR) { - if reset != nil { - reset() + if opts.Reset != nil { + opts.Reset() } continue } - return 0, nil, 0, fmt.Errorf("can't run test: %w", err) + if errors.Is(err, unix.ENOTSUPP) { + return 0, 0, fmt.Errorf("kernel doesn't support testing program type %s: %w", p.Type(), ErrNotSupported) + } + + return 0, 0, fmt.Errorf("can't run test: %w", err) } - if int(attr.dataSizeOut) > cap(out) { - // Houston, we have a problem. The program created more data than we allocated, - // and the kernel wrote past the end of our buffer. - panic("kernel wrote past end of output buffer") + if opts.DataOut != nil { + if int(attr.DataSizeOut) > cap(opts.DataOut) { + // Houston, we have a problem. 
The program created more data than we allocated, + // and the kernel wrote past the end of our buffer. + panic("kernel wrote past end of output buffer") + } + opts.DataOut = opts.DataOut[:int(attr.DataSizeOut)] } - out = out[:int(attr.dataSizeOut)] - total := time.Duration(attr.duration) * time.Nanosecond - return attr.retval, out, total, nil + if len(ctxOut) != 0 { + b := bytes.NewReader(ctxOut) + if err := binary.Read(b, internal.NativeEndian, opts.ContextOut); err != nil { + return 0, 0, fmt.Errorf("failed to decode ContextOut: %v", err) + } + } + + total := time.Duration(attr.Duration) * time.Nanosecond + return attr.Retval, total, nil } func unmarshalProgram(buf []byte) (*Program, error) { @@ -605,70 +703,19 @@ func marshalProgram(p *Program, length int) ([]byte, error) { return nil, fmt.Errorf("can't marshal program to %d bytes", length) } - value, err := p.fd.Value() - if err != nil { - return nil, err - } - buf := make([]byte, 4) - internal.NativeEndian.PutUint32(buf, value) + internal.NativeEndian.PutUint32(buf, p.fd.Uint()) return buf, nil } -// Attach a Program. -// -// Deprecated: use link.RawAttachProgram instead. -func (p *Program) Attach(fd int, typ AttachType, flags AttachFlags) error { - if fd < 0 { - return errors.New("invalid fd") - } - - pfd, err := p.fd.Value() - if err != nil { - return err - } - - attr := internal.BPFProgAttachAttr{ - TargetFd: uint32(fd), - AttachBpfFd: pfd, - AttachType: uint32(typ), - AttachFlags: uint32(flags), - } - - return internal.BPFProgAttach(&attr) -} - -// Detach a Program. -// -// Deprecated: use link.RawDetachProgram instead. 
-func (p *Program) Detach(fd int, typ AttachType, flags AttachFlags) error { - if fd < 0 { - return errors.New("invalid fd") - } - - if flags != 0 { - return errors.New("flags must be zero") - } - - pfd, err := p.fd.Value() - if err != nil { - return err - } - - attr := internal.BPFProgDetachAttr{ - TargetFd: uint32(fd), - AttachBpfFd: pfd, - AttachType: uint32(typ), - } - - return internal.BPFProgDetach(&attr) -} - // LoadPinnedProgram loads a Program from a BPF file. // // Requires at least Linux 4.11. func LoadPinnedProgram(fileName string, opts *LoadPinOptions) (*Program, error) { - fd, err := internal.BPFObjGet(fileName, opts.Marshal()) + fd, err := sys.ObjGet(&sys.ObjGetAttr{ + Pathname: sys.NewStringPointer(fileName), + FileFlags: opts.Marshal(), + }) if err != nil { return nil, err } @@ -702,28 +749,42 @@ func SanitizeName(name string, replacement rune) string { // // Returns ErrNotExist, if there is no next eBPF program. func ProgramGetNextID(startID ProgramID) (ProgramID, error) { - id, err := objGetNextID(internal.BPF_PROG_GET_NEXT_ID, uint32(startID)) - return ProgramID(id), err + attr := &sys.ProgGetNextIdAttr{Id: uint32(startID)} + return ProgramID(attr.NextId), sys.ProgGetNextId(attr) } -// ID returns the systemwide unique ID of the program. +// BindMap binds map to the program and is only released once program is released. // -// Deprecated: use ProgramInfo.ID() instead. -func (p *Program) ID() (ProgramID, error) { - info, err := bpfGetProgInfoByFD(p.fd, nil) - if err != nil { - return ProgramID(0), err +// This may be used in cases where metadata should be associated with the program +// which otherwise does not contain any references to the map. 
+func (p *Program) BindMap(m *Map) error { + attr := &sys.ProgBindMapAttr{ + ProgFd: uint32(p.FD()), + MapFd: uint32(m.FD()), } - return ProgramID(info.id), nil + + return sys.ProgBindMap(attr) } -func resolveBTFType(spec *btf.Spec, name string, progType ProgramType, attachType AttachType) (btf.Type, error) { +var errUnrecognizedAttachType = errors.New("unrecognized attach type") + +// find an attach target type in the kernel. +// +// spec may be nil and defaults to the canonical kernel BTF. name together with +// progType and attachType determine which type we need to attach to. +// +// Returns errUnrecognizedAttachType. +func findTargetInKernel(spec *btf.Spec, name string, progType ProgramType, attachType AttachType) (btf.TypeID, error) { type match struct { p ProgramType a AttachType } - var typeName, featureName string + var ( + typeName, featureName string + isBTFTypeFunc = true + ) + switch (match{progType, attachType}) { case match{LSM, AttachLSMMac}: typeName = "bpf_lsm_" + name @@ -731,31 +792,84 @@ func resolveBTFType(spec *btf.Spec, name string, progType ProgramType, attachTyp case match{Tracing, AttachTraceIter}: typeName = "bpf_iter_" + name featureName = name + " iterator" - case match{Extension, AttachNone}: + case match{Tracing, AttachTraceFEntry}: + typeName = name + featureName = fmt.Sprintf("fentry %s", name) + case match{Tracing, AttachTraceFExit}: + typeName = name + featureName = fmt.Sprintf("fexit %s", name) + case match{Tracing, AttachModifyReturn}: typeName = name - featureName = fmt.Sprintf("freplace %s", name) + featureName = fmt.Sprintf("fmod_ret %s", name) + case match{Tracing, AttachTraceRawTp}: + typeName = fmt.Sprintf("btf_trace_%s", name) + featureName = fmt.Sprintf("raw_tp %s", name) + isBTFTypeFunc = false default: - return nil, nil + return 0, errUnrecognizedAttachType } - if spec == nil { - var err error - spec, err = btf.LoadKernelSpec() - if err != nil { - return nil, fmt.Errorf("load kernel spec: %w", err) - } + spec, err := 
maybeLoadKernelBTF(spec) + if err != nil { + return 0, fmt.Errorf("load kernel spec: %w", err) } - var target *btf.Func - err := spec.FindType(typeName, &target) - if errors.Is(err, btf.ErrNotFound) { - return nil, &internal.UnsupportedFeatureError{ - Name: featureName, + var target btf.Type + if isBTFTypeFunc { + var targetFunc *btf.Func + err = spec.TypeByName(typeName, &targetFunc) + target = targetFunc + } else { + var targetTypedef *btf.Typedef + err = spec.TypeByName(typeName, &targetTypedef) + target = targetTypedef + } + + if err != nil { + if errors.Is(err, btf.ErrNotFound) { + return 0, &internal.UnsupportedFeatureError{ + Name: featureName, + } } + return 0, fmt.Errorf("find target for %s: %w", featureName, err) + } + + return spec.TypeID(target) +} + +// find an attach target type in a program. +// +// Returns errUnrecognizedAttachType. +func findTargetInProgram(prog *Program, name string, progType ProgramType, attachType AttachType) (btf.TypeID, error) { + type match struct { + p ProgramType + a AttachType + } + + var typeName string + switch (match{progType, attachType}) { + case match{Extension, AttachNone}: + typeName = name + default: + return 0, errUnrecognizedAttachType } + + btfHandle, err := prog.Handle() + if err != nil { + return 0, fmt.Errorf("load target BTF: %w", err) + } + defer btfHandle.Close() + + spec, err := btfHandle.Spec(nil) + if err != nil { + return 0, err + } + + var targetFunc *btf.Func + err = spec.TypeByName(typeName, &targetFunc) if err != nil { - return nil, fmt.Errorf("resolve BTF for %s: %w", featureName, err) + return 0, fmt.Errorf("find target %s: %w", typeName, err) } - return target, nil + return spec.TypeID(targetFunc) } diff --git a/cluster-autoscaler/vendor/github.com/cilium/ebpf/readme.md b/cluster-autoscaler/vendor/github.com/cilium/ebpf/readme.md index 01e2fff92bbc..3e490de71101 100644 --- a/cluster-autoscaler/vendor/github.com/cilium/ebpf/readme.md +++ 
b/cluster-autoscaler/vendor/github.com/cilium/ebpf/readme.md @@ -45,13 +45,17 @@ This library includes the following packages: `PERF_EVENT_ARRAY` * [ringbuf](https://pkg.go.dev/github.com/cilium/ebpf/ringbuf) allows reading from a `BPF_MAP_TYPE_RINGBUF` map - +* [features](https://pkg.go.dev/github.com/cilium/ebpf/features) implements the equivalent + of `bpftool feature probe` for discovering BPF-related kernel features using native Go. +* [rlimit](https://pkg.go.dev/github.com/cilium/ebpf/rlimit) provides a convenient API to lift + the `RLIMIT_MEMLOCK` constraint on kernels before 5.11. ## Requirements * A version of Go that is [supported by upstream](https://golang.org/doc/devel/release.html#policy) -* Linux >= 4.9. CI is run against LTS releases. +* Linux >= 4.9. CI is run against kernel.org LTS releases. 4.4 should work but is + not tested against. ## Regenerating Testdata @@ -59,6 +63,9 @@ Run `make` in the root of this repository to rebuild testdata in all subpackages. This requires Docker, as it relies on a standardized build environment to keep the build output stable. +It is possible to regenerate data using Podman by overriding the `CONTAINER_*` +variables: `CONTAINER_ENGINE=podman CONTAINER_RUN_ARGS= make`. + The toolchain image build files are kept in [testdata/docker/](testdata/docker/). ## License diff --git a/cluster-autoscaler/vendor/github.com/cilium/ebpf/run-tests.sh b/cluster-autoscaler/vendor/github.com/cilium/ebpf/run-tests.sh index a079edc7e18c..c21cca9e5e7d 100644 --- a/cluster-autoscaler/vendor/github.com/cilium/ebpf/run-tests.sh +++ b/cluster-autoscaler/vendor/github.com/cilium/ebpf/run-tests.sh @@ -1,4 +1,4 @@ -#!/bin/bash +#!/usr/bin/env bash # Test the current package under a different kernel. # Requires virtme and qemu to be installed. # Examples: @@ -48,21 +48,31 @@ if [[ "${1:-}" = "--exec-vm" ]]; then rm "${output}/fake-stdin" fi - if ! 
$sudo virtme-run --kimg "${input}/bzImage" --memory 768M --pwd \ - --rwdir="${testdir}=${testdir}" \ - --rodir=/run/input="${input}" \ - --rwdir=/run/output="${output}" \ - --script-sh "PATH=\"$PATH\" \"$script\" --exec-test $cmd" \ - --kopt possible_cpus=2; then # need at least two CPUs for some tests - exit 23 - fi + for ((i = 0; i < 3; i++)); do + if ! $sudo virtme-run --kimg "${input}/bzImage" --memory 768M --pwd \ + --rwdir="${testdir}=${testdir}" \ + --rodir=/run/input="${input}" \ + --rwdir=/run/output="${output}" \ + --script-sh "PATH=\"$PATH\" CI_MAX_KERNEL_VERSION="${CI_MAX_KERNEL_VERSION:-}" \"$script\" --exec-test $cmd" \ + --kopt possible_cpus=2; then # need at least two CPUs for some tests + exit 23 + fi + + if [[ -e "${output}/status" ]]; then + break + fi + + if [[ -v CI ]]; then + echo "Retrying test run due to qemu crash" + continue + fi - if [[ ! -e "${output}/success" ]]; then exit 42 - fi + done + rc=$(<"${output}/status") $sudo rm -r "$output" - exit 0 + exit $rc elif [[ "${1:-}" = "--exec-test" ]]; then shift @@ -73,13 +83,16 @@ elif [[ "${1:-}" = "--exec-test" ]]; then export KERNEL_SELFTESTS="/run/input/bpf" fi - dmesg -C - if ! "$@"; then - dmesg - exit 1 # this return code is "swallowed" by qemu + if [[ -f "/run/input/bpf/bpf_testmod/bpf_testmod.ko" ]]; then + insmod "/run/input/bpf/bpf_testmod/bpf_testmod.ko" fi - touch "/run/output/success" - exit 0 + + dmesg --clear + rc=0 + "$@" || rc=$? 
+ dmesg + echo $rc > "/run/output/status" + exit $rc # this return code is "swallowed" by qemu fi readonly kernel_version="${1:-}" @@ -90,22 +103,27 @@ fi shift readonly kernel="linux-${kernel_version}.bz" -readonly selftests="linux-${kernel_version}-selftests-bpf.bz" +readonly selftests="linux-${kernel_version}-selftests-bpf.tgz" readonly input="$(mktemp -d)" readonly tmp_dir="${TMPDIR:-/tmp}" readonly branch="${BRANCH:-master}" fetch() { echo Fetching "${1}" - wget -nv -N -P "${tmp_dir}" "https://github.com/cilium/ci-kernels/raw/${branch}/${1}" + pushd "${tmp_dir}" > /dev/null + curl -s -L -O --fail --etag-compare "${1}.etag" --etag-save "${1}.etag" "https://github.com/cilium/ci-kernels/raw/${branch}/${1}" + local ret=$? + popd > /dev/null + return $ret } fetch "${kernel}" cp "${tmp_dir}/${kernel}" "${input}/bzImage" if fetch "${selftests}"; then + echo "Decompressing selftests" mkdir "${input}/bpf" - tar --strip-components=4 -xjf "${tmp_dir}/${selftests}" -C "${input}/bpf" + tar --strip-components=4 -xf "${tmp_dir}/${selftests}" -C "${input}/bpf" else echo "No selftests found, disabling" fi @@ -117,6 +135,8 @@ fi export GOFLAGS=-mod=readonly export CGO_ENABLED=0 +# LINUX_VERSION_CODE test compares this to discovered value. +export KERNEL_VERSION="${kernel_version}" echo Testing on "${kernel_version}" go test -exec "$script --exec-vm $input" "${args[@]}" diff --git a/cluster-autoscaler/vendor/github.com/cilium/ebpf/syscalls.go b/cluster-autoscaler/vendor/github.com/cilium/ebpf/syscalls.go index f8cb5f0e0cd6..e5c270a55851 100644 --- a/cluster-autoscaler/vendor/github.com/cilium/ebpf/syscalls.go +++ b/cluster-autoscaler/vendor/github.com/cilium/ebpf/syscalls.go @@ -4,19 +4,13 @@ import ( "bytes" "errors" "fmt" - "os" - "unsafe" "github.com/cilium/ebpf/asm" "github.com/cilium/ebpf/internal" + "github.com/cilium/ebpf/internal/sys" "github.com/cilium/ebpf/internal/unix" ) -// ErrNotExist is returned when loading a non-existing map or program. 
-// -// Deprecated: use os.ErrNotExist instead. -var ErrNotExist = os.ErrNotExist - // invalidBPFObjNameChar returns true if char may not appear in // a BPF object name. func invalidBPFObjNameChar(char rune) bool { @@ -38,108 +32,24 @@ func invalidBPFObjNameChar(char rune) bool { } } -type bpfMapOpAttr struct { - mapFd uint32 - padding uint32 - key internal.Pointer - value internal.Pointer - flags uint64 -} - -type bpfBatchMapOpAttr struct { - inBatch internal.Pointer - outBatch internal.Pointer - keys internal.Pointer - values internal.Pointer - count uint32 - mapFd uint32 - elemFlags uint64 - flags uint64 -} - -type bpfMapInfo struct { - map_type uint32 // since 4.12 1e2709769086 - id uint32 - key_size uint32 - value_size uint32 - max_entries uint32 - map_flags uint32 - name internal.BPFObjName // since 4.15 ad5b177bd73f - ifindex uint32 // since 4.16 52775b33bb50 - btf_vmlinux_value_type_id uint32 // since 5.6 85d33df357b6 - netns_dev uint64 // since 4.16 52775b33bb50 - netns_ino uint64 - btf_id uint32 // since 4.18 78958fca7ead - btf_key_type_id uint32 // since 4.18 9b2cf328b2ec - btf_value_type_id uint32 -} - -type bpfProgInfo struct { - prog_type uint32 - id uint32 - tag [unix.BPF_TAG_SIZE]byte - jited_prog_len uint32 - xlated_prog_len uint32 - jited_prog_insns internal.Pointer - xlated_prog_insns internal.Pointer - load_time uint64 // since 4.15 cb4d2b3f03d8 - created_by_uid uint32 - nr_map_ids uint32 // since 4.15 cb4d2b3f03d8 - map_ids internal.Pointer - name internal.BPFObjName // since 4.15 067cae47771c - ifindex uint32 - gpl_compatible uint32 - netns_dev uint64 - netns_ino uint64 - nr_jited_ksyms uint32 - nr_jited_func_lens uint32 - jited_ksyms internal.Pointer - jited_func_lens internal.Pointer - btf_id uint32 - func_info_rec_size uint32 - func_info internal.Pointer - nr_func_info uint32 - nr_line_info uint32 - line_info internal.Pointer - jited_line_info internal.Pointer - nr_jited_line_info uint32 - line_info_rec_size uint32 - 
jited_line_info_rec_size uint32 - nr_prog_tags uint32 - prog_tags internal.Pointer - run_time_ns uint64 - run_cnt uint64 -} - -type bpfProgTestRunAttr struct { - fd uint32 - retval uint32 - dataSizeIn uint32 - dataSizeOut uint32 - dataIn internal.Pointer - dataOut internal.Pointer - repeat uint32 - duration uint32 -} - -type bpfMapFreezeAttr struct { - mapFd uint32 -} - -type bpfObjGetNextIDAttr struct { - startID uint32 - nextID uint32 - openFlags uint32 -} +func progLoad(insns asm.Instructions, typ ProgramType, license string) (*sys.FD, error) { + buf := bytes.NewBuffer(make([]byte, 0, insns.Size())) + if err := insns.Marshal(buf, internal.NativeEndian); err != nil { + return nil, err + } + bytecode := buf.Bytes() -func bpfProgTestRun(attr *bpfProgTestRunAttr) error { - _, err := internal.BPF(internal.BPF_PROG_TEST_RUN, unsafe.Pointer(attr), unsafe.Sizeof(*attr)) - return err + return sys.ProgLoad(&sys.ProgLoadAttr{ + ProgType: sys.ProgType(typ), + License: sys.NewStringPointer(license), + Insns: sys.NewSlicePointer(bytecode), + InsnCnt: uint32(len(bytecode) / asm.InstructionSize), + }) } var haveNestedMaps = internal.FeatureTest("nested maps", "4.12", func() error { - _, err := internal.BPFMapCreate(&internal.BPFMapCreateAttr{ - MapType: uint32(ArrayOfMaps), + _, err := sys.MapCreate(&sys.MapCreateAttr{ + MapType: sys.MapType(ArrayOfMaps), KeySize: 4, ValueSize: 4, MaxEntries: 1, @@ -158,12 +68,12 @@ var haveNestedMaps = internal.FeatureTest("nested maps", "4.12", func() error { var haveMapMutabilityModifiers = internal.FeatureTest("read- and write-only maps", "5.2", func() error { // This checks BPF_F_RDONLY_PROG and BPF_F_WRONLY_PROG. Since // BPF_MAP_FREEZE appeared in 5.2 as well we don't do a separate check. 
- m, err := internal.BPFMapCreate(&internal.BPFMapCreateAttr{ - MapType: uint32(Array), + m, err := sys.MapCreate(&sys.MapCreateAttr{ + MapType: sys.MapType(Array), KeySize: 4, ValueSize: 4, MaxEntries: 1, - Flags: unix.BPF_F_RDONLY_PROG, + MapFlags: unix.BPF_F_RDONLY_PROG, }) if err != nil { return internal.ErrNotSupported @@ -174,12 +84,12 @@ var haveMapMutabilityModifiers = internal.FeatureTest("read- and write-only maps var haveMmapableMaps = internal.FeatureTest("mmapable maps", "5.5", func() error { // This checks BPF_F_MMAPABLE, which appeared in 5.5 for array maps. - m, err := internal.BPFMapCreate(&internal.BPFMapCreateAttr{ - MapType: uint32(Array), + m, err := sys.MapCreate(&sys.MapCreateAttr{ + MapType: sys.MapType(Array), KeySize: 4, ValueSize: 4, MaxEntries: 1, - Flags: unix.BPF_F_MMAPABLE, + MapFlags: unix.BPF_F_MMAPABLE, }) if err != nil { return internal.ErrNotSupported @@ -190,12 +100,12 @@ var haveMmapableMaps = internal.FeatureTest("mmapable maps", "5.5", func() error var haveInnerMaps = internal.FeatureTest("inner maps", "5.10", func() error { // This checks BPF_F_INNER_MAP, which appeared in 5.10. 
- m, err := internal.BPFMapCreate(&internal.BPFMapCreateAttr{ - MapType: uint32(Array), + m, err := sys.MapCreate(&sys.MapCreateAttr{ + MapType: sys.MapType(Array), KeySize: 4, ValueSize: 4, MaxEntries: 1, - Flags: unix.BPF_F_INNER_MAP, + MapFlags: unix.BPF_F_INNER_MAP, }) if err != nil { return internal.ErrNotSupported @@ -204,111 +114,21 @@ var haveInnerMaps = internal.FeatureTest("inner maps", "5.10", func() error { return nil }) -func bpfMapLookupElem(m *internal.FD, key, valueOut internal.Pointer) error { - fd, err := m.Value() - if err != nil { - return err - } - - attr := bpfMapOpAttr{ - mapFd: fd, - key: key, - value: valueOut, - } - _, err = internal.BPF(internal.BPF_MAP_LOOKUP_ELEM, unsafe.Pointer(&attr), unsafe.Sizeof(attr)) - return wrapMapError(err) -} - -func bpfMapLookupAndDelete(m *internal.FD, key, valueOut internal.Pointer) error { - fd, err := m.Value() - if err != nil { - return err - } - - attr := bpfMapOpAttr{ - mapFd: fd, - key: key, - value: valueOut, - } - _, err = internal.BPF(internal.BPF_MAP_LOOKUP_AND_DELETE_ELEM, unsafe.Pointer(&attr), unsafe.Sizeof(attr)) - return wrapMapError(err) -} - -func bpfMapUpdateElem(m *internal.FD, key, valueOut internal.Pointer, flags uint64) error { - fd, err := m.Value() - if err != nil { - return err - } - - attr := bpfMapOpAttr{ - mapFd: fd, - key: key, - value: valueOut, - flags: flags, - } - _, err = internal.BPF(internal.BPF_MAP_UPDATE_ELEM, unsafe.Pointer(&attr), unsafe.Sizeof(attr)) - return wrapMapError(err) -} - -func bpfMapDeleteElem(m *internal.FD, key internal.Pointer) error { - fd, err := m.Value() - if err != nil { - return err - } - - attr := bpfMapOpAttr{ - mapFd: fd, - key: key, - } - _, err = internal.BPF(internal.BPF_MAP_DELETE_ELEM, unsafe.Pointer(&attr), unsafe.Sizeof(attr)) - return wrapMapError(err) -} - -func bpfMapGetNextKey(m *internal.FD, key, nextKeyOut internal.Pointer) error { - fd, err := m.Value() - if err != nil { - return err - } - - attr := bpfMapOpAttr{ - mapFd: fd, - 
key: key, - value: nextKeyOut, - } - _, err = internal.BPF(internal.BPF_MAP_GET_NEXT_KEY, unsafe.Pointer(&attr), unsafe.Sizeof(attr)) - return wrapMapError(err) -} - -func objGetNextID(cmd internal.BPFCmd, start uint32) (uint32, error) { - attr := bpfObjGetNextIDAttr{ - startID: start, - } - _, err := internal.BPF(cmd, unsafe.Pointer(&attr), unsafe.Sizeof(attr)) - return attr.nextID, err -} - -func bpfMapBatch(cmd internal.BPFCmd, m *internal.FD, inBatch, outBatch, keys, values internal.Pointer, count uint32, opts *BatchOptions) (uint32, error) { - fd, err := m.Value() +var haveNoPreallocMaps = internal.FeatureTest("prealloc maps", "4.6", func() error { + // This checks BPF_F_NO_PREALLOC, which appeared in 4.6. + m, err := sys.MapCreate(&sys.MapCreateAttr{ + MapType: sys.MapType(Hash), + KeySize: 4, + ValueSize: 4, + MaxEntries: 1, + MapFlags: unix.BPF_F_NO_PREALLOC, + }) if err != nil { - return 0, err - } - - attr := bpfBatchMapOpAttr{ - inBatch: inBatch, - outBatch: outBatch, - keys: keys, - values: values, - count: count, - mapFd: fd, - } - if opts != nil { - attr.elemFlags = opts.ElemFlags - attr.flags = opts.Flags + return internal.ErrNotSupported } - _, err = internal.BPF(cmd, unsafe.Pointer(&attr), unsafe.Sizeof(attr)) - // always return count even on an error, as things like update might partially be fulfilled. 
- return attr.count, wrapMapError(err) -} + _ = m.Close() + return nil +}) func wrapMapError(err error) error { if err == nil { @@ -316,15 +136,15 @@ func wrapMapError(err error) error { } if errors.Is(err, unix.ENOENT) { - return internal.SyscallError(ErrKeyNotExist, unix.ENOENT) + return sys.Error(ErrKeyNotExist, unix.ENOENT) } if errors.Is(err, unix.EEXIST) { - return internal.SyscallError(ErrKeyExist, unix.EEXIST) + return sys.Error(ErrKeyExist, unix.EEXIST) } if errors.Is(err, unix.ENOTSUPP) { - return internal.SyscallError(ErrNotSupported, unix.ENOTSUPP) + return sys.Error(ErrNotSupported, unix.ENOTSUPP) } if errors.Is(err, unix.E2BIG) { @@ -334,51 +154,16 @@ func wrapMapError(err error) error { return err } -func bpfMapFreeze(m *internal.FD) error { - fd, err := m.Value() - if err != nil { - return err - } - - attr := bpfMapFreezeAttr{ - mapFd: fd, - } - _, err = internal.BPF(internal.BPF_MAP_FREEZE, unsafe.Pointer(&attr), unsafe.Sizeof(attr)) - return err -} - -func bpfGetProgInfoByFD(fd *internal.FD, ids []MapID) (*bpfProgInfo, error) { - var info bpfProgInfo - if len(ids) > 0 { - info.nr_map_ids = uint32(len(ids)) - info.map_ids = internal.NewPointer(unsafe.Pointer(&ids[0])) - } - - if err := internal.BPFObjGetInfoByFD(fd, unsafe.Pointer(&info), unsafe.Sizeof(info)); err != nil { - return nil, fmt.Errorf("can't get program info: %w", err) - } - return &info, nil -} - -func bpfGetMapInfoByFD(fd *internal.FD) (*bpfMapInfo, error) { - var info bpfMapInfo - err := internal.BPFObjGetInfoByFD(fd, unsafe.Pointer(&info), unsafe.Sizeof(info)) - if err != nil { - return nil, fmt.Errorf("can't get map info: %w", err) - } - return &info, nil -} - var haveObjName = internal.FeatureTest("object names", "4.15", func() error { - attr := internal.BPFMapCreateAttr{ - MapType: uint32(Array), + attr := sys.MapCreateAttr{ + MapType: sys.MapType(Array), KeySize: 4, ValueSize: 4, MaxEntries: 1, - MapName: internal.NewBPFObjName("feature_test"), + MapName: 
sys.NewObjName("feature_test"), } - fd, err := internal.BPFMapCreate(&attr) + fd, err := sys.MapCreate(&attr) if err != nil { return internal.ErrNotSupported } @@ -392,15 +177,15 @@ var objNameAllowsDot = internal.FeatureTest("dot in object names", "5.2", func() return err } - attr := internal.BPFMapCreateAttr{ - MapType: uint32(Array), + attr := sys.MapCreateAttr{ + MapType: sys.MapType(Array), KeySize: 4, ValueSize: 4, MaxEntries: 1, - MapName: internal.NewBPFObjName(".test"), + MapName: sys.NewObjName(".test"), } - fd, err := internal.BPFMapCreate(&attr) + fd, err := sys.MapCreate(&attr) if err != nil { return internal.ErrNotSupported } @@ -411,24 +196,30 @@ var objNameAllowsDot = internal.FeatureTest("dot in object names", "5.2", func() var haveBatchAPI = internal.FeatureTest("map batch api", "5.6", func() error { var maxEntries uint32 = 2 - attr := internal.BPFMapCreateAttr{ - MapType: uint32(Hash), + attr := sys.MapCreateAttr{ + MapType: sys.MapType(Hash), KeySize: 4, ValueSize: 4, MaxEntries: maxEntries, } - fd, err := internal.BPFMapCreate(&attr) + fd, err := sys.MapCreate(&attr) if err != nil { return internal.ErrNotSupported } defer fd.Close() + keys := []uint32{1, 2} values := []uint32{3, 4} kp, _ := marshalPtr(keys, 8) vp, _ := marshalPtr(values, 8) - nilPtr := internal.NewPointer(nil) - _, err = bpfMapBatch(internal.BPF_MAP_UPDATE_BATCH, fd, nilPtr, nilPtr, kp, vp, maxEntries, nil) + + err = sys.MapUpdateBatch(&sys.MapUpdateBatchAttr{ + MapFd: fd.Uint(), + Keys: kp, + Values: vp, + Count: maxEntries, + }) if err != nil { return internal.ErrNotSupported } @@ -444,21 +235,30 @@ var haveProbeReadKernel = internal.FeatureTest("bpf_probe_read_kernel", "5.5", f asm.FnProbeReadKernel.Call(), asm.Return(), } - buf := bytes.NewBuffer(make([]byte, 0, len(insns)*asm.InstructionSize)) - if err := insns.Marshal(buf, internal.NativeEndian); err != nil { - return err - } - bytecode := buf.Bytes() - fd, err := internal.BPFProgLoad(&internal.BPFProgLoadAttr{ - 
ProgType: uint32(Kprobe), - License: internal.NewStringPointer("GPL"), - Instructions: internal.NewSlicePointer(bytecode), - InsCount: uint32(len(bytecode) / asm.InstructionSize), - }) + fd, err := progLoad(insns, Kprobe, "GPL") if err != nil { return internal.ErrNotSupported } _ = fd.Close() return nil }) + +var haveBPFToBPFCalls = internal.FeatureTest("bpf2bpf calls", "4.16", func() error { + insns := asm.Instructions{ + asm.Call.Label("prog2").WithSymbol("prog1"), + asm.Return(), + asm.Mov.Imm(asm.R0, 0).WithSymbol("prog2"), + asm.Return(), + } + + fd, err := progLoad(insns, SocketFilter, "MIT") + if errors.Is(err, unix.EINVAL) { + return internal.ErrNotSupported + } + if err != nil { + return err + } + _ = fd.Close() + return nil +}) diff --git a/cluster-autoscaler/vendor/github.com/cilium/ebpf/types.go b/cluster-autoscaler/vendor/github.com/cilium/ebpf/types.go index 84b83f9f9855..a27b44247454 100644 --- a/cluster-autoscaler/vendor/github.com/cilium/ebpf/types.go +++ b/cluster-autoscaler/vendor/github.com/cilium/ebpf/types.go @@ -11,7 +11,7 @@ import ( type MapType uint32 // Max returns the latest supported MapType. -func (_ MapType) Max() MapType { +func (MapType) Max() MapType { return maxMapType - 1 } @@ -103,12 +103,6 @@ const ( maxMapType ) -// Deprecated: StructOpts was a typo, use StructOpsMap instead. -// -// Declared as a variable to prevent stringer from picking it up -// as an enum value. -var StructOpts MapType = StructOpsMap - // hasPerCPUValue returns true if the Map stores a value per CPU. func (mt MapType) hasPerCPUValue() bool { return mt == PerCPUHash || mt == PerCPUArray || mt == LRUCPUHash || mt == PerCPUCGroupStorage @@ -126,11 +120,22 @@ func (mt MapType) canStoreProgram() bool { return mt == ProgramArray } +// hasBTF returns true if the map type supports BTF key/value metadata. 
+func (mt MapType) hasBTF() bool { + switch mt { + case PerfEventArray, CGroupArray, StackTrace, ArrayOfMaps, HashOfMaps, DevMap, + DevMapHash, CPUMap, XSKMap, SockMap, SockHash, Queue, Stack, RingBuf: + return false + default: + return true + } +} + // ProgramType of the eBPF program type ProgramType uint32 // Max return the latest supported ProgramType. -func (_ ProgramType) Max() ProgramType { +func (ProgramType) Max() ProgramType { return maxProgramType - 1 } @@ -167,6 +172,7 @@ const ( Extension LSM SkLookup + Syscall maxProgramType ) diff --git a/cluster-autoscaler/vendor/github.com/cilium/ebpf/types_string.go b/cluster-autoscaler/vendor/github.com/cilium/ebpf/types_string.go index 81cbc9efde0c..e80b948b0960 100644 --- a/cluster-autoscaler/vendor/github.com/cilium/ebpf/types_string.go +++ b/cluster-autoscaler/vendor/github.com/cilium/ebpf/types_string.go @@ -86,12 +86,13 @@ func _() { _ = x[Extension-28] _ = x[LSM-29] _ = x[SkLookup-30] - _ = x[maxProgramType-31] + _ = x[Syscall-31] + _ = x[maxProgramType-32] } -const _ProgramType_name = "UnspecifiedProgramSocketFilterKprobeSchedCLSSchedACTTracePointXDPPerfEventCGroupSKBCGroupSockLWTInLWTOutLWTXmitSockOpsSkSKBCGroupDeviceSkMsgRawTracepointCGroupSockAddrLWTSeg6LocalLircMode2SkReuseportFlowDissectorCGroupSysctlRawTracepointWritableCGroupSockoptTracingStructOpsExtensionLSMSkLookupmaxProgramType" +const _ProgramType_name = "UnspecifiedProgramSocketFilterKprobeSchedCLSSchedACTTracePointXDPPerfEventCGroupSKBCGroupSockLWTInLWTOutLWTXmitSockOpsSkSKBCGroupDeviceSkMsgRawTracepointCGroupSockAddrLWTSeg6LocalLircMode2SkReuseportFlowDissectorCGroupSysctlRawTracepointWritableCGroupSockoptTracingStructOpsExtensionLSMSkLookupSyscallmaxProgramType" -var _ProgramType_index = [...]uint16{0, 18, 30, 36, 44, 52, 62, 65, 74, 83, 93, 98, 104, 111, 118, 123, 135, 140, 153, 167, 179, 188, 199, 212, 224, 245, 258, 265, 274, 283, 286, 294, 308} +var _ProgramType_index = [...]uint16{0, 18, 30, 36, 44, 52, 62, 65, 74, 83, 93, 98, 104, 
111, 118, 123, 135, 140, 153, 167, 179, 188, 199, 212, 224, 245, 258, 265, 274, 283, 286, 294, 301, 315} func (i ProgramType) String() string { if i >= ProgramType(len(_ProgramType_index)-1) { diff --git a/cluster-autoscaler/vendor/github.com/container-storage-interface/spec/lib/go/csi/csi.pb.go b/cluster-autoscaler/vendor/github.com/container-storage-interface/spec/lib/go/csi/csi.pb.go index d889edb2c8c0..fa010c376b77 100644 --- a/cluster-autoscaler/vendor/github.com/container-storage-interface/spec/lib/go/csi/csi.pb.go +++ b/cluster-autoscaler/vendor/github.com/container-storage-interface/spec/lib/go/csi/csi.pb.go @@ -47,18 +47,28 @@ const ( // returned by NodeGetInfo to ensure that a given volume is // accessible from a given node when scheduling workloads. PluginCapability_Service_VOLUME_ACCESSIBILITY_CONSTRAINTS PluginCapability_Service_Type = 2 + // GROUP_CONTROLLER_SERVICE indicates that the Plugin provides + // RPCs for operating on groups of volumes. Plugins MAY provide + // this capability. + // The presence of this capability determines whether the CO will + // attempt to invoke the REQUIRED GroupController service RPCs, as + // well as specific RPCs as indicated by + // GroupControllerGetCapabilities. + PluginCapability_Service_GROUP_CONTROLLER_SERVICE PluginCapability_Service_Type = 3 ) var PluginCapability_Service_Type_name = map[int32]string{ 0: "UNKNOWN", 1: "CONTROLLER_SERVICE", 2: "VOLUME_ACCESSIBILITY_CONSTRAINTS", + 3: "GROUP_CONTROLLER_SERVICE", } var PluginCapability_Service_Type_value = map[string]int32{ "UNKNOWN": 0, "CONTROLLER_SERVICE": 1, "VOLUME_ACCESSIBILITY_CONSTRAINTS": 2, + "GROUP_CONTROLLER_SERVICE": 3, } func (x PluginCapability_Service_Type) String() string { @@ -85,17 +95,20 @@ const ( // expansion of node-published volume via NodeExpandVolume. // // Example 1: Given a shared filesystem volume (e.g. 
GlusterFs), - // the Plugin may set the ONLINE volume expansion capability and - // implement ControllerExpandVolume but not NodeExpandVolume. + // + // the Plugin may set the ONLINE volume expansion capability and + // implement ControllerExpandVolume but not NodeExpandVolume. // // Example 2: Given a block storage volume type (e.g. EBS), the - // Plugin may set the ONLINE volume expansion capability and - // implement both ControllerExpandVolume and NodeExpandVolume. + // + // Plugin may set the ONLINE volume expansion capability and + // implement both ControllerExpandVolume and NodeExpandVolume. // // Example 3: Given a Plugin that supports volume expansion only - // upon a node, the Plugin may set the ONLINE volume - // expansion capability and implement NodeExpandVolume but not - // ControllerExpandVolume. + // + // upon a node, the Plugin may set the ONLINE volume + // expansion capability and implement NodeExpandVolume but not + // ControllerExpandVolume. PluginCapability_VolumeExpansion_ONLINE PluginCapability_VolumeExpansion_Type = 1 // OFFLINE indicates that volumes currently published and // available on a node SHALL NOT be expanded via @@ -105,10 +118,11 @@ const ( // the EXPAND_VOLUME node capability. // // Example 1: Given a block storage volume type (e.g. Azure Disk) - // that does not support expansion of "node-attached" (i.e. - // controller-published) volumes, the Plugin may indicate - // OFFLINE volume expansion support and implement both - // ControllerExpandVolume and NodeExpandVolume. + // + // that does not support expansion of "node-attached" (i.e. + // controller-published) volumes, the Plugin may indicate + // OFFLINE volume expansion support and implement both + // ControllerExpandVolume and NodeExpandVolume. 
PluginCapability_VolumeExpansion_OFFLINE PluginCapability_VolumeExpansion_Type = 2 ) @@ -385,6 +399,34 @@ func (NodeServiceCapability_RPC_Type) EnumDescriptor() ([]byte, []int) { return fileDescriptor_9cdb00adce470e01, []int{55, 0, 0} } +type GroupControllerServiceCapability_RPC_Type int32 + +const ( + GroupControllerServiceCapability_RPC_UNKNOWN GroupControllerServiceCapability_RPC_Type = 0 + // Indicates that the group controller plugin supports + // creating, deleting, and getting details of a volume + // group snapshot. + GroupControllerServiceCapability_RPC_CREATE_DELETE_GET_VOLUME_GROUP_SNAPSHOT GroupControllerServiceCapability_RPC_Type = 1 +) + +var GroupControllerServiceCapability_RPC_Type_name = map[int32]string{ + 0: "UNKNOWN", + 1: "CREATE_DELETE_GET_VOLUME_GROUP_SNAPSHOT", +} + +var GroupControllerServiceCapability_RPC_Type_value = map[string]int32{ + "UNKNOWN": 0, + "CREATE_DELETE_GET_VOLUME_GROUP_SNAPSHOT": 1, +} + +func (x GroupControllerServiceCapability_RPC_Type) String() string { + return proto.EnumName(GroupControllerServiceCapability_RPC_Type_name, int32(x)) +} + +func (GroupControllerServiceCapability_RPC_Type) EnumDescriptor() ([]byte, []int) { + return fileDescriptor_9cdb00adce470e01, []int{62, 0, 0} +} + type GetPluginInfoRequest struct { XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` @@ -555,6 +597,7 @@ func (m *GetPluginCapabilitiesResponse) GetCapabilities() []*PluginCapability { // Specifies a capability of the plugin. type PluginCapability struct { // Types that are valid to be assigned to Type: + // // *PluginCapability_Service_ // *PluginCapability_VolumeExpansion_ Type isPluginCapability_Type `protobuf_oneof:"type"` @@ -748,16 +791,16 @@ type ProbeResponse struct { // and it is important for a CO to distinguish between the following // cases: // - // 1) The plugin is in an unhealthy state and MAY need restarting. In - // this case a gRPC error code SHALL be returned. 
- // 2) The plugin is still initializing, but is otherwise perfectly - // healthy. In this case a successful response SHALL be returned - // with a readiness value of `false`. Calls to the plugin's - // Controller and/or Node services MAY fail due to an incomplete - // initialization state. - // 3) The plugin has finished initializing and is ready to service - // calls to its Controller and/or Node services. A successful - // response is returned with a readiness value of `true`. + // 1. The plugin is in an unhealthy state and MAY need restarting. In + // this case a gRPC error code SHALL be returned. + // 2. The plugin is still initializing, but is otherwise perfectly + // healthy. In this case a successful response SHALL be returned + // with a readiness value of `false`. Calls to the plugin's + // Controller and/or Node services MAY fail due to an incomplete + // initialization state. + // 3. The plugin has finished initializing and is ready to service + // calls to its Controller and/or Node services. A successful + // response is returned with a readiness value of `true`. // // This field is OPTIONAL. If not present, the caller SHALL assume // that the plugin is in a ready state and is accepting calls to its @@ -804,26 +847,27 @@ func (m *ProbeResponse) GetReady() *wrappers.BoolValue { type CreateVolumeRequest struct { // The suggested name for the storage space. This field is REQUIRED. // It serves two purposes: - // 1) Idempotency - This name is generated by the CO to achieve - // idempotency. The Plugin SHOULD ensure that multiple - // `CreateVolume` calls for the same name do not result in more - // than one piece of storage provisioned corresponding to that - // name. If a Plugin is unable to enforce idempotency, the CO's - // error recovery logic could result in multiple (unused) volumes - // being provisioned. 
- // In the case of error, the CO MUST handle the gRPC error codes - // per the recovery behavior defined in the "CreateVolume Errors" - // section below. - // The CO is responsible for cleaning up volumes it provisioned - // that it no longer needs. If the CO is uncertain whether a volume - // was provisioned or not when a `CreateVolume` call fails, the CO - // MAY call `CreateVolume` again, with the same name, to ensure the - // volume exists and to retrieve the volume's `volume_id` (unless - // otherwise prohibited by "CreateVolume Errors"). - // 2) Suggested name - Some storage systems allow callers to specify - // an identifier by which to refer to the newly provisioned - // storage. If a storage system supports this, it can optionally - // use this name as the identifier for the new volume. + // 1. Idempotency - This name is generated by the CO to achieve + // idempotency. The Plugin SHOULD ensure that multiple + // `CreateVolume` calls for the same name do not result in more + // than one piece of storage provisioned corresponding to that + // name. If a Plugin is unable to enforce idempotency, the CO's + // error recovery logic could result in multiple (unused) volumes + // being provisioned. + // In the case of error, the CO MUST handle the gRPC error codes + // per the recovery behavior defined in the "CreateVolume Errors" + // section below. + // The CO is responsible for cleaning up volumes it provisioned + // that it no longer needs. If the CO is uncertain whether a volume + // was provisioned or not when a `CreateVolume` call fails, the CO + // MAY call `CreateVolume` again, with the same name, to ensure the + // volume exists and to retrieve the volume's `volume_id` (unless + // otherwise prohibited by "CreateVolume Errors"). + // 2. Suggested name - Some storage systems allow callers to specify + // an identifier by which to refer to the newly provisioned + // storage. 
If a storage system supports this, it can optionally + // use this name as the identifier for the new volume. + // // Any Unicode string that conforms to the length limit is allowed // except those containing the following banned characters: // U+0000-U+0008, U+000B, U+000C, U+000E-U+001F, U+007F-U+009F. @@ -957,6 +1001,7 @@ func (m *CreateVolumeRequest) GetAccessibilityRequirements() *TopologyRequiremen // type fields MUST be specified. type VolumeContentSource struct { // Types that are valid to be assigned to Type: + // // *VolumeContentSource_Snapshot // *VolumeContentSource_Volume Type isVolumeContentSource_Type `protobuf_oneof:"type"` @@ -1168,6 +1213,7 @@ type VolumeCapability struct { // following fields MUST be specified. // // Types that are valid to be assigned to AccessType: + // // *VolumeCapability_Block // *VolumeCapability_Mount AccessType isVolumeCapability_AccessType `protobuf_oneof:"access_type"` @@ -1509,14 +1555,18 @@ type Volume struct { // node. // // Example 1: - // accessible_topology = {"region": "R1", "zone": "Z2"} + // + // accessible_topology = {"region": "R1", "zone": "Z2"} + // // Indicates a volume accessible only from the "region" "R1" and the // "zone" "Z2". // // Example 2: - // accessible_topology = - // {"region": "R1", "zone": "Z2"}, - // {"region": "R1", "zone": "Z3"} + // + // accessible_topology = + // {"region": "R1", "zone": "Z2"}, + // {"region": "R1", "zone": "Z3"} + // // Indicates a volume accessible from both "zone" "Z2" and "zone" "Z3" // in the "region" "R1". AccessibleTopology []*Topology `protobuf:"bytes,5,rep,name=accessible_topology,json=accessibleTopology,proto3" json:"accessible_topology,omitempty"` @@ -1595,21 +1645,27 @@ type TopologyRequirement struct { // accessible from at least one of the requisite topologies. 
// // Given - // x = number of topologies provisioned volume is accessible from - // n = number of requisite topologies + // + // x = number of topologies provisioned volume is accessible from + // n = number of requisite topologies + // // The CO MUST ensure n >= 1. The SP MUST ensure x >= 1 // If x==n, then the SP MUST make the provisioned volume available to // all topologies from the list of requisite topologies. If it is // unable to do so, the SP MUST fail the CreateVolume call. // For example, if a volume should be accessible from a single zone, // and requisite = - // {"region": "R1", "zone": "Z2"} + // + // {"region": "R1", "zone": "Z2"} + // // then the provisioned volume MUST be accessible from the "region" // "R1" and the "zone" "Z2". // Similarly, if a volume should be accessible from two zones, and // requisite = - // {"region": "R1", "zone": "Z2"}, - // {"region": "R1", "zone": "Z3"} + // + // {"region": "R1", "zone": "Z2"}, + // {"region": "R1", "zone": "Z3"} + // // then the provisioned volume MUST be accessible from the "region" // "R1" and both "zone" "Z2" and "zone" "Z3". // @@ -1618,18 +1674,23 @@ type TopologyRequirement struct { // the CreateVolume call. // For example, if a volume should be accessible from a single zone, // and requisite = - // {"region": "R1", "zone": "Z2"}, - // {"region": "R1", "zone": "Z3"} + // + // {"region": "R1", "zone": "Z2"}, + // {"region": "R1", "zone": "Z3"} + // // then the SP may choose to make the provisioned volume available in // either the "zone" "Z2" or the "zone" "Z3" in the "region" "R1". // Similarly, if a volume should be accessible from two zones, and // requisite = - // {"region": "R1", "zone": "Z2"}, - // {"region": "R1", "zone": "Z3"}, - // {"region": "R1", "zone": "Z4"} + // + // {"region": "R1", "zone": "Z2"}, + // {"region": "R1", "zone": "Z3"}, + // {"region": "R1", "zone": "Z4"} + // // then the provisioned volume MUST be accessible from any combination // of two unique topologies: e.g. 
"R1/Z2" and "R1/Z3", or "R1/Z2" and - // "R1/Z4", or "R1/Z3" and "R1/Z4". + // + // "R1/Z4", or "R1/Z3" and "R1/Z4". // // If x>n, then the SP MUST make the provisioned volume available from // all topologies from the list of requisite topologies and MAY choose @@ -1638,7 +1699,9 @@ type TopologyRequirement struct { // CreateVolume call. // For example, if a volume should be accessible from two zones, and // requisite = - // {"region": "R1", "zone": "Z2"} + // + // {"region": "R1", "zone": "Z2"} + // // then the provisioned volume MUST be accessible from the "region" // "R1" and the "zone" "Z2" and the SP may select the second zone // independently, e.g. "R1/Z4". @@ -1667,10 +1730,14 @@ type TopologyRequirement struct { // Example 1: // Given a volume should be accessible from a single zone, and // requisite = - // {"region": "R1", "zone": "Z2"}, - // {"region": "R1", "zone": "Z3"} + // + // {"region": "R1", "zone": "Z2"}, + // {"region": "R1", "zone": "Z3"} + // // preferred = - // {"region": "R1", "zone": "Z3"} + // + // {"region": "R1", "zone": "Z3"} + // // then the SP SHOULD first attempt to make the provisioned volume // available from "zone" "Z3" in the "region" "R1" and fall back to // "zone" "Z2" in the "region" "R1" if that is not possible. 
@@ -1678,13 +1745,17 @@ type TopologyRequirement struct { // Example 2: // Given a volume should be accessible from a single zone, and // requisite = - // {"region": "R1", "zone": "Z2"}, - // {"region": "R1", "zone": "Z3"}, - // {"region": "R1", "zone": "Z4"}, - // {"region": "R1", "zone": "Z5"} + // + // {"region": "R1", "zone": "Z2"}, + // {"region": "R1", "zone": "Z3"}, + // {"region": "R1", "zone": "Z4"}, + // {"region": "R1", "zone": "Z5"} + // // preferred = - // {"region": "R1", "zone": "Z4"}, - // {"region": "R1", "zone": "Z2"} + // + // {"region": "R1", "zone": "Z4"}, + // {"region": "R1", "zone": "Z2"} + // // then the SP SHOULD first attempt to make the provisioned volume // accessible from "zone" "Z4" in the "region" "R1" and fall back to // "zone" "Z2" in the "region" "R1" if that is not possible. If that @@ -1697,13 +1768,17 @@ type TopologyRequirement struct { // the volume is accessible from two zones, aka synchronously // replicated), and // requisite = - // {"region": "R1", "zone": "Z2"}, - // {"region": "R1", "zone": "Z3"}, - // {"region": "R1", "zone": "Z4"}, - // {"region": "R1", "zone": "Z5"} + // + // {"region": "R1", "zone": "Z2"}, + // {"region": "R1", "zone": "Z3"}, + // {"region": "R1", "zone": "Z4"}, + // {"region": "R1", "zone": "Z5"} + // // preferred = - // {"region": "R1", "zone": "Z5"}, - // {"region": "R1", "zone": "Z3"} + // + // {"region": "R1", "zone": "Z5"}, + // {"region": "R1", "zone": "Z3"} + // // then the SP SHOULD first attempt to make the provisioned volume // accessible from the combination of the two "zones" "Z5" and "Z3" in // the "region" "R1". If that's not possible, it should fall back to @@ -2972,6 +3047,7 @@ func (m *ControllerGetCapabilitiesResponse) GetCapabilities() []*ControllerServi // Specifies a capability of the controller service. 
type ControllerServiceCapability struct { // Types that are valid to be assigned to Type: + // // *ControllerServiceCapability_Rpc Type isControllerServiceCapability_Type `protobuf_oneof:"type"` XXX_NoUnkeyedLiteral struct{} `json:"-"` @@ -3093,12 +3169,12 @@ type CreateSnapshotRequest struct { // This field is OPTIONAL. The Plugin is responsible for parsing and // validating these parameters. COs will treat these as opaque. // Use cases for opaque parameters: - // - Specify a policy to automatically clean up the snapshot. - // - Specify an expiration date for the snapshot. - // - Specify whether the snapshot is readonly or read/write. - // - Specify if the snapshot should be replicated to some place. - // - Specify primary or secondary for replication systems that - // support snapshotting only on primary. + // - Specify a policy to automatically clean up the snapshot. + // - Specify an expiration date for the snapshot. + // - Specify whether the snapshot is readonly or read/write. + // - Specify if the snapshot should be replicated to some place. + // - Specify primary or secondary for replication systems that + // support snapshotting only on primary. Parameters map[string]string `protobuf:"bytes,4,rep,name=parameters,proto3" json:"parameters,omitempty" protobuf_key:"bytes,1,opt,name=key,proto3" protobuf_val:"bytes,2,opt,name=value,proto3"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` @@ -3230,7 +3306,21 @@ type Snapshot struct { // Indicates if a snapshot is ready to use as a // `volume_content_source` in a `CreateVolumeRequest`. The default // value is false. This field is REQUIRED. - ReadyToUse bool `protobuf:"varint,5,opt,name=ready_to_use,json=readyToUse,proto3" json:"ready_to_use,omitempty"` + ReadyToUse bool `protobuf:"varint,5,opt,name=ready_to_use,json=readyToUse,proto3" json:"ready_to_use,omitempty"` + // The ID of the volume group snapshot that this snapshot is part of. 
+ // It uniquely identifies the group snapshot on the storage system. + // This field is OPTIONAL. + // If this snapshot is a member of a volume group snapshot, and it + // MUST NOT be deleted as a stand alone snapshot, then the SP + // MUST provide the ID of the volume group snapshot in this field. + // If provided, CO MUST use this field in subsequent volume group + // snapshot operations to indicate that this snapshot is part of the + // specified group snapshot. + // If not provided, CO SHALL treat the snapshot as independent, + // and SP SHALL allow it to be deleted separately. + // If this message is inside a VolumeGroupSnapshot message, the value + // MUST be the same as the group_snapshot_id in that message. + GroupSnapshotId string `protobuf:"bytes,6,opt,name=group_snapshot_id,json=groupSnapshotId,proto3" json:"group_snapshot_id,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_unrecognized []byte `json:"-"` XXX_sizecache int32 `json:"-"` @@ -3296,6 +3386,13 @@ func (m *Snapshot) GetReadyToUse() bool { return false } +func (m *Snapshot) GetGroupSnapshotId() string { + if m != nil { + return m.GroupSnapshotId + } + return "" +} + type DeleteSnapshotRequest struct { // The ID of the snapshot to be deleted. // This field is REQUIRED. @@ -4500,6 +4597,7 @@ func (m *NodeGetCapabilitiesResponse) GetCapabilities() []*NodeServiceCapability // Specifies a capability of the node service. type NodeServiceCapability struct { // Types that are valid to be assigned to Type: + // // *NodeServiceCapability_Rpc Type isNodeServiceCapability_Type `protobuf_oneof:"type"` XXX_NoUnkeyedLiteral struct{} `json:"-"` @@ -4666,8 +4764,10 @@ type NodeGetInfoResponse struct { // no topological constraints declared for V. // // Example 1: - // accessible_topology = - // {"region": "R1", "zone": "Z2"} + // + // accessible_topology = + // {"region": "R1", "zone": "Z2"} + // // Indicates the node exists within the "region" "R1" and the "zone" // "Z2". 
AccessibleTopology *Topology `protobuf:"bytes,3,opt,name=accessible_topology,json=accessibleTopology,proto3" json:"accessible_topology,omitempty"` @@ -4874,6 +4974,608 @@ func (m *NodeExpandVolumeResponse) GetCapacityBytes() int64 { return 0 } +type GroupControllerGetCapabilitiesRequest struct { + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` +} + +func (m *GroupControllerGetCapabilitiesRequest) Reset() { *m = GroupControllerGetCapabilitiesRequest{} } +func (m *GroupControllerGetCapabilitiesRequest) String() string { return proto.CompactTextString(m) } +func (*GroupControllerGetCapabilitiesRequest) ProtoMessage() {} +func (*GroupControllerGetCapabilitiesRequest) Descriptor() ([]byte, []int) { + return fileDescriptor_9cdb00adce470e01, []int{60} +} + +func (m *GroupControllerGetCapabilitiesRequest) XXX_Unmarshal(b []byte) error { + return xxx_messageInfo_GroupControllerGetCapabilitiesRequest.Unmarshal(m, b) +} +func (m *GroupControllerGetCapabilitiesRequest) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + return xxx_messageInfo_GroupControllerGetCapabilitiesRequest.Marshal(b, m, deterministic) +} +func (m *GroupControllerGetCapabilitiesRequest) XXX_Merge(src proto.Message) { + xxx_messageInfo_GroupControllerGetCapabilitiesRequest.Merge(m, src) +} +func (m *GroupControllerGetCapabilitiesRequest) XXX_Size() int { + return xxx_messageInfo_GroupControllerGetCapabilitiesRequest.Size(m) +} +func (m *GroupControllerGetCapabilitiesRequest) XXX_DiscardUnknown() { + xxx_messageInfo_GroupControllerGetCapabilitiesRequest.DiscardUnknown(m) +} + +var xxx_messageInfo_GroupControllerGetCapabilitiesRequest proto.InternalMessageInfo + +type GroupControllerGetCapabilitiesResponse struct { + // All the capabilities that the group controller service supports. + // This field is OPTIONAL. 
+ Capabilities []*GroupControllerServiceCapability `protobuf:"bytes,1,rep,name=capabilities,proto3" json:"capabilities,omitempty"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` +} + +func (m *GroupControllerGetCapabilitiesResponse) Reset() { + *m = GroupControllerGetCapabilitiesResponse{} +} +func (m *GroupControllerGetCapabilitiesResponse) String() string { return proto.CompactTextString(m) } +func (*GroupControllerGetCapabilitiesResponse) ProtoMessage() {} +func (*GroupControllerGetCapabilitiesResponse) Descriptor() ([]byte, []int) { + return fileDescriptor_9cdb00adce470e01, []int{61} +} + +func (m *GroupControllerGetCapabilitiesResponse) XXX_Unmarshal(b []byte) error { + return xxx_messageInfo_GroupControllerGetCapabilitiesResponse.Unmarshal(m, b) +} +func (m *GroupControllerGetCapabilitiesResponse) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + return xxx_messageInfo_GroupControllerGetCapabilitiesResponse.Marshal(b, m, deterministic) +} +func (m *GroupControllerGetCapabilitiesResponse) XXX_Merge(src proto.Message) { + xxx_messageInfo_GroupControllerGetCapabilitiesResponse.Merge(m, src) +} +func (m *GroupControllerGetCapabilitiesResponse) XXX_Size() int { + return xxx_messageInfo_GroupControllerGetCapabilitiesResponse.Size(m) +} +func (m *GroupControllerGetCapabilitiesResponse) XXX_DiscardUnknown() { + xxx_messageInfo_GroupControllerGetCapabilitiesResponse.DiscardUnknown(m) +} + +var xxx_messageInfo_GroupControllerGetCapabilitiesResponse proto.InternalMessageInfo + +func (m *GroupControllerGetCapabilitiesResponse) GetCapabilities() []*GroupControllerServiceCapability { + if m != nil { + return m.Capabilities + } + return nil +} + +// Specifies a capability of the group controller service. 
+type GroupControllerServiceCapability struct { + // Types that are valid to be assigned to Type: + // + // *GroupControllerServiceCapability_Rpc + Type isGroupControllerServiceCapability_Type `protobuf_oneof:"type"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` +} + +func (m *GroupControllerServiceCapability) Reset() { *m = GroupControllerServiceCapability{} } +func (m *GroupControllerServiceCapability) String() string { return proto.CompactTextString(m) } +func (*GroupControllerServiceCapability) ProtoMessage() {} +func (*GroupControllerServiceCapability) Descriptor() ([]byte, []int) { + return fileDescriptor_9cdb00adce470e01, []int{62} +} + +func (m *GroupControllerServiceCapability) XXX_Unmarshal(b []byte) error { + return xxx_messageInfo_GroupControllerServiceCapability.Unmarshal(m, b) +} +func (m *GroupControllerServiceCapability) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + return xxx_messageInfo_GroupControllerServiceCapability.Marshal(b, m, deterministic) +} +func (m *GroupControllerServiceCapability) XXX_Merge(src proto.Message) { + xxx_messageInfo_GroupControllerServiceCapability.Merge(m, src) +} +func (m *GroupControllerServiceCapability) XXX_Size() int { + return xxx_messageInfo_GroupControllerServiceCapability.Size(m) +} +func (m *GroupControllerServiceCapability) XXX_DiscardUnknown() { + xxx_messageInfo_GroupControllerServiceCapability.DiscardUnknown(m) +} + +var xxx_messageInfo_GroupControllerServiceCapability proto.InternalMessageInfo + +type isGroupControllerServiceCapability_Type interface { + isGroupControllerServiceCapability_Type() +} + +type GroupControllerServiceCapability_Rpc struct { + Rpc *GroupControllerServiceCapability_RPC `protobuf:"bytes,1,opt,name=rpc,proto3,oneof"` +} + +func (*GroupControllerServiceCapability_Rpc) isGroupControllerServiceCapability_Type() {} + +func (m *GroupControllerServiceCapability) GetType() 
isGroupControllerServiceCapability_Type { + if m != nil { + return m.Type + } + return nil +} + +func (m *GroupControllerServiceCapability) GetRpc() *GroupControllerServiceCapability_RPC { + if x, ok := m.GetType().(*GroupControllerServiceCapability_Rpc); ok { + return x.Rpc + } + return nil +} + +// XXX_OneofWrappers is for the internal use of the proto package. +func (*GroupControllerServiceCapability) XXX_OneofWrappers() []interface{} { + return []interface{}{ + (*GroupControllerServiceCapability_Rpc)(nil), + } +} + +type GroupControllerServiceCapability_RPC struct { + Type GroupControllerServiceCapability_RPC_Type `protobuf:"varint,1,opt,name=type,proto3,enum=csi.v1.GroupControllerServiceCapability_RPC_Type" json:"type,omitempty"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` +} + +func (m *GroupControllerServiceCapability_RPC) Reset() { *m = GroupControllerServiceCapability_RPC{} } +func (m *GroupControllerServiceCapability_RPC) String() string { return proto.CompactTextString(m) } +func (*GroupControllerServiceCapability_RPC) ProtoMessage() {} +func (*GroupControllerServiceCapability_RPC) Descriptor() ([]byte, []int) { + return fileDescriptor_9cdb00adce470e01, []int{62, 0} +} + +func (m *GroupControllerServiceCapability_RPC) XXX_Unmarshal(b []byte) error { + return xxx_messageInfo_GroupControllerServiceCapability_RPC.Unmarshal(m, b) +} +func (m *GroupControllerServiceCapability_RPC) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + return xxx_messageInfo_GroupControllerServiceCapability_RPC.Marshal(b, m, deterministic) +} +func (m *GroupControllerServiceCapability_RPC) XXX_Merge(src proto.Message) { + xxx_messageInfo_GroupControllerServiceCapability_RPC.Merge(m, src) +} +func (m *GroupControllerServiceCapability_RPC) XXX_Size() int { + return xxx_messageInfo_GroupControllerServiceCapability_RPC.Size(m) +} +func (m *GroupControllerServiceCapability_RPC) XXX_DiscardUnknown() { + 
xxx_messageInfo_GroupControllerServiceCapability_RPC.DiscardUnknown(m) +} + +var xxx_messageInfo_GroupControllerServiceCapability_RPC proto.InternalMessageInfo + +func (m *GroupControllerServiceCapability_RPC) GetType() GroupControllerServiceCapability_RPC_Type { + if m != nil { + return m.Type + } + return GroupControllerServiceCapability_RPC_UNKNOWN +} + +type CreateVolumeGroupSnapshotRequest struct { + // The suggested name for the group snapshot. This field is REQUIRED + // for idempotency. + // Any Unicode string that conforms to the length limit is allowed + // except those containing the following banned characters: + // U+0000-U+0008, U+000B, U+000C, U+000E-U+001F, U+007F-U+009F. + // (These are control characters other than commonly used whitespace.) + Name string `protobuf:"bytes,1,opt,name=name,proto3" json:"name,omitempty"` + // volume IDs of the source volumes to be snapshotted together. + // This field is REQUIRED. + SourceVolumeIds []string `protobuf:"bytes,2,rep,name=source_volume_ids,json=sourceVolumeIds,proto3" json:"source_volume_ids,omitempty"` + // Secrets required by plugin to complete + // ControllerCreateVolumeGroupSnapshot request. + // This field is OPTIONAL. Refer to the `Secrets Requirements` + // section on how to use this field. + // The secrets provided in this field SHOULD be the same for + // all group snapshot operations on the same group snapshot. + Secrets map[string]string `protobuf:"bytes,3,rep,name=secrets,proto3" json:"secrets,omitempty" protobuf_key:"bytes,1,opt,name=key,proto3" protobuf_val:"bytes,2,opt,name=value,proto3"` + // Plugin specific parameters passed in as opaque key-value pairs. + // This field is OPTIONAL. The Plugin is responsible for parsing and + // validating these parameters. COs will treat these as opaque. 
+ Parameters map[string]string `protobuf:"bytes,4,rep,name=parameters,proto3" json:"parameters,omitempty" protobuf_key:"bytes,1,opt,name=key,proto3" protobuf_val:"bytes,2,opt,name=value,proto3"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` +} + +func (m *CreateVolumeGroupSnapshotRequest) Reset() { *m = CreateVolumeGroupSnapshotRequest{} } +func (m *CreateVolumeGroupSnapshotRequest) String() string { return proto.CompactTextString(m) } +func (*CreateVolumeGroupSnapshotRequest) ProtoMessage() {} +func (*CreateVolumeGroupSnapshotRequest) Descriptor() ([]byte, []int) { + return fileDescriptor_9cdb00adce470e01, []int{63} +} + +func (m *CreateVolumeGroupSnapshotRequest) XXX_Unmarshal(b []byte) error { + return xxx_messageInfo_CreateVolumeGroupSnapshotRequest.Unmarshal(m, b) +} +func (m *CreateVolumeGroupSnapshotRequest) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + return xxx_messageInfo_CreateVolumeGroupSnapshotRequest.Marshal(b, m, deterministic) +} +func (m *CreateVolumeGroupSnapshotRequest) XXX_Merge(src proto.Message) { + xxx_messageInfo_CreateVolumeGroupSnapshotRequest.Merge(m, src) +} +func (m *CreateVolumeGroupSnapshotRequest) XXX_Size() int { + return xxx_messageInfo_CreateVolumeGroupSnapshotRequest.Size(m) +} +func (m *CreateVolumeGroupSnapshotRequest) XXX_DiscardUnknown() { + xxx_messageInfo_CreateVolumeGroupSnapshotRequest.DiscardUnknown(m) +} + +var xxx_messageInfo_CreateVolumeGroupSnapshotRequest proto.InternalMessageInfo + +func (m *CreateVolumeGroupSnapshotRequest) GetName() string { + if m != nil { + return m.Name + } + return "" +} + +func (m *CreateVolumeGroupSnapshotRequest) GetSourceVolumeIds() []string { + if m != nil { + return m.SourceVolumeIds + } + return nil +} + +func (m *CreateVolumeGroupSnapshotRequest) GetSecrets() map[string]string { + if m != nil { + return m.Secrets + } + return nil +} + +func (m *CreateVolumeGroupSnapshotRequest) GetParameters() 
map[string]string { + if m != nil { + return m.Parameters + } + return nil +} + +type CreateVolumeGroupSnapshotResponse struct { + // Contains all attributes of the newly created group snapshot. + // This field is REQUIRED. + GroupSnapshot *VolumeGroupSnapshot `protobuf:"bytes,1,opt,name=group_snapshot,json=groupSnapshot,proto3" json:"group_snapshot,omitempty"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` +} + +func (m *CreateVolumeGroupSnapshotResponse) Reset() { *m = CreateVolumeGroupSnapshotResponse{} } +func (m *CreateVolumeGroupSnapshotResponse) String() string { return proto.CompactTextString(m) } +func (*CreateVolumeGroupSnapshotResponse) ProtoMessage() {} +func (*CreateVolumeGroupSnapshotResponse) Descriptor() ([]byte, []int) { + return fileDescriptor_9cdb00adce470e01, []int{64} +} + +func (m *CreateVolumeGroupSnapshotResponse) XXX_Unmarshal(b []byte) error { + return xxx_messageInfo_CreateVolumeGroupSnapshotResponse.Unmarshal(m, b) +} +func (m *CreateVolumeGroupSnapshotResponse) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + return xxx_messageInfo_CreateVolumeGroupSnapshotResponse.Marshal(b, m, deterministic) +} +func (m *CreateVolumeGroupSnapshotResponse) XXX_Merge(src proto.Message) { + xxx_messageInfo_CreateVolumeGroupSnapshotResponse.Merge(m, src) +} +func (m *CreateVolumeGroupSnapshotResponse) XXX_Size() int { + return xxx_messageInfo_CreateVolumeGroupSnapshotResponse.Size(m) +} +func (m *CreateVolumeGroupSnapshotResponse) XXX_DiscardUnknown() { + xxx_messageInfo_CreateVolumeGroupSnapshotResponse.DiscardUnknown(m) +} + +var xxx_messageInfo_CreateVolumeGroupSnapshotResponse proto.InternalMessageInfo + +func (m *CreateVolumeGroupSnapshotResponse) GetGroupSnapshot() *VolumeGroupSnapshot { + if m != nil { + return m.GroupSnapshot + } + return nil +} + +type VolumeGroupSnapshot struct { + // The identifier for this group snapshot, generated by the plugin. 
+ // This field MUST contain enough information to uniquely identify + // this specific snapshot vs all other group snapshots supported by + // this plugin. + // This field SHALL be used by the CO in subsequent calls to refer to + // this group snapshot. + // The SP is NOT responsible for global uniqueness of + // group_snapshot_id across multiple SPs. + // This field is REQUIRED. + GroupSnapshotId string `protobuf:"bytes,1,opt,name=group_snapshot_id,json=groupSnapshotId,proto3" json:"group_snapshot_id,omitempty"` + // A list of snapshots belonging to this group. + // This field is REQUIRED. + Snapshots []*Snapshot `protobuf:"bytes,2,rep,name=snapshots,proto3" json:"snapshots,omitempty"` + // Timestamp of when the volume group snapshot was taken. + // This field is REQUIRED. + CreationTime *timestamp.Timestamp `protobuf:"bytes,3,opt,name=creation_time,json=creationTime,proto3" json:"creation_time,omitempty"` + // Indicates if all individual snapshots in the group snapshot + // are ready to use as a `volume_content_source` in a + // `CreateVolumeRequest`. The default value is false. + // If any snapshot in the list of snapshots in this message have + // ready_to_use set to false, the SP MUST set this field to false. + // If all of the snapshots in the list of snapshots in this message + // have ready_to_use set to true, the SP SHOULD set this field to + // true. + // This field is REQUIRED. 
+ ReadyToUse bool `protobuf:"varint,4,opt,name=ready_to_use,json=readyToUse,proto3" json:"ready_to_use,omitempty"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` +} + +func (m *VolumeGroupSnapshot) Reset() { *m = VolumeGroupSnapshot{} } +func (m *VolumeGroupSnapshot) String() string { return proto.CompactTextString(m) } +func (*VolumeGroupSnapshot) ProtoMessage() {} +func (*VolumeGroupSnapshot) Descriptor() ([]byte, []int) { + return fileDescriptor_9cdb00adce470e01, []int{65} +} + +func (m *VolumeGroupSnapshot) XXX_Unmarshal(b []byte) error { + return xxx_messageInfo_VolumeGroupSnapshot.Unmarshal(m, b) +} +func (m *VolumeGroupSnapshot) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + return xxx_messageInfo_VolumeGroupSnapshot.Marshal(b, m, deterministic) +} +func (m *VolumeGroupSnapshot) XXX_Merge(src proto.Message) { + xxx_messageInfo_VolumeGroupSnapshot.Merge(m, src) +} +func (m *VolumeGroupSnapshot) XXX_Size() int { + return xxx_messageInfo_VolumeGroupSnapshot.Size(m) +} +func (m *VolumeGroupSnapshot) XXX_DiscardUnknown() { + xxx_messageInfo_VolumeGroupSnapshot.DiscardUnknown(m) +} + +var xxx_messageInfo_VolumeGroupSnapshot proto.InternalMessageInfo + +func (m *VolumeGroupSnapshot) GetGroupSnapshotId() string { + if m != nil { + return m.GroupSnapshotId + } + return "" +} + +func (m *VolumeGroupSnapshot) GetSnapshots() []*Snapshot { + if m != nil { + return m.Snapshots + } + return nil +} + +func (m *VolumeGroupSnapshot) GetCreationTime() *timestamp.Timestamp { + if m != nil { + return m.CreationTime + } + return nil +} + +func (m *VolumeGroupSnapshot) GetReadyToUse() bool { + if m != nil { + return m.ReadyToUse + } + return false +} + +type DeleteVolumeGroupSnapshotRequest struct { + // The ID of the group snapshot to be deleted. + // This field is REQUIRED. 
+ GroupSnapshotId string `protobuf:"bytes,1,opt,name=group_snapshot_id,json=groupSnapshotId,proto3" json:"group_snapshot_id,omitempty"` + // A list of snapshot IDs that are part of this group snapshot. + // If SP does not need to rely on this field to delete the snapshots + // in the group, it SHOULD check this field and report an error + // if it has the ability to detect a mismatch. + // Some SPs require this list to delete the snapshots in the group. + // If SP needs to use this field to delete the snapshots in the + // group, it MUST report an error if it has the ability to detect + // a mismatch. + // This field is REQUIRED. + SnapshotIds []string `protobuf:"bytes,2,rep,name=snapshot_ids,json=snapshotIds,proto3" json:"snapshot_ids,omitempty"` + // Secrets required by plugin to complete group snapshot deletion + // request. + // This field is OPTIONAL. Refer to the `Secrets Requirements` + // section on how to use this field. + // The secrets provided in this field SHOULD be the same for + // all group snapshot operations on the same group snapshot. 
+ Secrets map[string]string `protobuf:"bytes,3,rep,name=secrets,proto3" json:"secrets,omitempty" protobuf_key:"bytes,1,opt,name=key,proto3" protobuf_val:"bytes,2,opt,name=value,proto3"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` +} + +func (m *DeleteVolumeGroupSnapshotRequest) Reset() { *m = DeleteVolumeGroupSnapshotRequest{} } +func (m *DeleteVolumeGroupSnapshotRequest) String() string { return proto.CompactTextString(m) } +func (*DeleteVolumeGroupSnapshotRequest) ProtoMessage() {} +func (*DeleteVolumeGroupSnapshotRequest) Descriptor() ([]byte, []int) { + return fileDescriptor_9cdb00adce470e01, []int{66} +} + +func (m *DeleteVolumeGroupSnapshotRequest) XXX_Unmarshal(b []byte) error { + return xxx_messageInfo_DeleteVolumeGroupSnapshotRequest.Unmarshal(m, b) +} +func (m *DeleteVolumeGroupSnapshotRequest) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + return xxx_messageInfo_DeleteVolumeGroupSnapshotRequest.Marshal(b, m, deterministic) +} +func (m *DeleteVolumeGroupSnapshotRequest) XXX_Merge(src proto.Message) { + xxx_messageInfo_DeleteVolumeGroupSnapshotRequest.Merge(m, src) +} +func (m *DeleteVolumeGroupSnapshotRequest) XXX_Size() int { + return xxx_messageInfo_DeleteVolumeGroupSnapshotRequest.Size(m) +} +func (m *DeleteVolumeGroupSnapshotRequest) XXX_DiscardUnknown() { + xxx_messageInfo_DeleteVolumeGroupSnapshotRequest.DiscardUnknown(m) +} + +var xxx_messageInfo_DeleteVolumeGroupSnapshotRequest proto.InternalMessageInfo + +func (m *DeleteVolumeGroupSnapshotRequest) GetGroupSnapshotId() string { + if m != nil { + return m.GroupSnapshotId + } + return "" +} + +func (m *DeleteVolumeGroupSnapshotRequest) GetSnapshotIds() []string { + if m != nil { + return m.SnapshotIds + } + return nil +} + +func (m *DeleteVolumeGroupSnapshotRequest) GetSecrets() map[string]string { + if m != nil { + return m.Secrets + } + return nil +} + +type DeleteVolumeGroupSnapshotResponse struct { + 
XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` +} + +func (m *DeleteVolumeGroupSnapshotResponse) Reset() { *m = DeleteVolumeGroupSnapshotResponse{} } +func (m *DeleteVolumeGroupSnapshotResponse) String() string { return proto.CompactTextString(m) } +func (*DeleteVolumeGroupSnapshotResponse) ProtoMessage() {} +func (*DeleteVolumeGroupSnapshotResponse) Descriptor() ([]byte, []int) { + return fileDescriptor_9cdb00adce470e01, []int{67} +} + +func (m *DeleteVolumeGroupSnapshotResponse) XXX_Unmarshal(b []byte) error { + return xxx_messageInfo_DeleteVolumeGroupSnapshotResponse.Unmarshal(m, b) +} +func (m *DeleteVolumeGroupSnapshotResponse) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + return xxx_messageInfo_DeleteVolumeGroupSnapshotResponse.Marshal(b, m, deterministic) +} +func (m *DeleteVolumeGroupSnapshotResponse) XXX_Merge(src proto.Message) { + xxx_messageInfo_DeleteVolumeGroupSnapshotResponse.Merge(m, src) +} +func (m *DeleteVolumeGroupSnapshotResponse) XXX_Size() int { + return xxx_messageInfo_DeleteVolumeGroupSnapshotResponse.Size(m) +} +func (m *DeleteVolumeGroupSnapshotResponse) XXX_DiscardUnknown() { + xxx_messageInfo_DeleteVolumeGroupSnapshotResponse.DiscardUnknown(m) +} + +var xxx_messageInfo_DeleteVolumeGroupSnapshotResponse proto.InternalMessageInfo + +type GetVolumeGroupSnapshotRequest struct { + // The ID of the group snapshot to fetch current group snapshot + // information for. + // This field is REQUIRED. + GroupSnapshotId string `protobuf:"bytes,1,opt,name=group_snapshot_id,json=groupSnapshotId,proto3" json:"group_snapshot_id,omitempty"` + // A list of snapshot IDs that are part of this group snapshot. + // If SP does not need to rely on this field to get the snapshots + // in the group, it SHOULD check this field and report an error + // if it has the ability to detect a mismatch. + // Some SPs require this list to get the snapshots in the group. 
+ // If SP needs to use this field to get the snapshots in the + // group, it MUST report an error if it has the ability to detect + // a mismatch. + // This field is REQUIRED. + SnapshotIds []string `protobuf:"bytes,2,rep,name=snapshot_ids,json=snapshotIds,proto3" json:"snapshot_ids,omitempty"` + // Secrets required by plugin to complete + // GetVolumeGroupSnapshot request. + // This field is OPTIONAL. Refer to the `Secrets Requirements` + // section on how to use this field. + // The secrets provided in this field SHOULD be the same for + // all group snapshot operations on the same group snapshot. + Secrets map[string]string `protobuf:"bytes,3,rep,name=secrets,proto3" json:"secrets,omitempty" protobuf_key:"bytes,1,opt,name=key,proto3" protobuf_val:"bytes,2,opt,name=value,proto3"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` +} + +func (m *GetVolumeGroupSnapshotRequest) Reset() { *m = GetVolumeGroupSnapshotRequest{} } +func (m *GetVolumeGroupSnapshotRequest) String() string { return proto.CompactTextString(m) } +func (*GetVolumeGroupSnapshotRequest) ProtoMessage() {} +func (*GetVolumeGroupSnapshotRequest) Descriptor() ([]byte, []int) { + return fileDescriptor_9cdb00adce470e01, []int{68} +} + +func (m *GetVolumeGroupSnapshotRequest) XXX_Unmarshal(b []byte) error { + return xxx_messageInfo_GetVolumeGroupSnapshotRequest.Unmarshal(m, b) +} +func (m *GetVolumeGroupSnapshotRequest) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + return xxx_messageInfo_GetVolumeGroupSnapshotRequest.Marshal(b, m, deterministic) +} +func (m *GetVolumeGroupSnapshotRequest) XXX_Merge(src proto.Message) { + xxx_messageInfo_GetVolumeGroupSnapshotRequest.Merge(m, src) +} +func (m *GetVolumeGroupSnapshotRequest) XXX_Size() int { + return xxx_messageInfo_GetVolumeGroupSnapshotRequest.Size(m) +} +func (m *GetVolumeGroupSnapshotRequest) XXX_DiscardUnknown() { + 
xxx_messageInfo_GetVolumeGroupSnapshotRequest.DiscardUnknown(m) +} + +var xxx_messageInfo_GetVolumeGroupSnapshotRequest proto.InternalMessageInfo + +func (m *GetVolumeGroupSnapshotRequest) GetGroupSnapshotId() string { + if m != nil { + return m.GroupSnapshotId + } + return "" +} + +func (m *GetVolumeGroupSnapshotRequest) GetSnapshotIds() []string { + if m != nil { + return m.SnapshotIds + } + return nil +} + +func (m *GetVolumeGroupSnapshotRequest) GetSecrets() map[string]string { + if m != nil { + return m.Secrets + } + return nil +} + +type GetVolumeGroupSnapshotResponse struct { + // This field is REQUIRED + GroupSnapshot *VolumeGroupSnapshot `protobuf:"bytes,1,opt,name=group_snapshot,json=groupSnapshot,proto3" json:"group_snapshot,omitempty"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` +} + +func (m *GetVolumeGroupSnapshotResponse) Reset() { *m = GetVolumeGroupSnapshotResponse{} } +func (m *GetVolumeGroupSnapshotResponse) String() string { return proto.CompactTextString(m) } +func (*GetVolumeGroupSnapshotResponse) ProtoMessage() {} +func (*GetVolumeGroupSnapshotResponse) Descriptor() ([]byte, []int) { + return fileDescriptor_9cdb00adce470e01, []int{69} +} + +func (m *GetVolumeGroupSnapshotResponse) XXX_Unmarshal(b []byte) error { + return xxx_messageInfo_GetVolumeGroupSnapshotResponse.Unmarshal(m, b) +} +func (m *GetVolumeGroupSnapshotResponse) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + return xxx_messageInfo_GetVolumeGroupSnapshotResponse.Marshal(b, m, deterministic) +} +func (m *GetVolumeGroupSnapshotResponse) XXX_Merge(src proto.Message) { + xxx_messageInfo_GetVolumeGroupSnapshotResponse.Merge(m, src) +} +func (m *GetVolumeGroupSnapshotResponse) XXX_Size() int { + return xxx_messageInfo_GetVolumeGroupSnapshotResponse.Size(m) +} +func (m *GetVolumeGroupSnapshotResponse) XXX_DiscardUnknown() { + xxx_messageInfo_GetVolumeGroupSnapshotResponse.DiscardUnknown(m) +} + +var 
xxx_messageInfo_GetVolumeGroupSnapshotResponse proto.InternalMessageInfo + +func (m *GetVolumeGroupSnapshotResponse) GetGroupSnapshot() *VolumeGroupSnapshot { + if m != nil { + return m.GroupSnapshot + } + return nil +} + var E_AlphaEnum = &proto.ExtensionDesc{ ExtendedType: (*descriptor.EnumOptions)(nil), ExtensionType: (*bool)(nil), @@ -4944,6 +5646,7 @@ func init() { proto.RegisterEnum("csi.v1.ControllerServiceCapability_RPC_Type", ControllerServiceCapability_RPC_Type_name, ControllerServiceCapability_RPC_Type_value) proto.RegisterEnum("csi.v1.VolumeUsage_Unit", VolumeUsage_Unit_name, VolumeUsage_Unit_value) proto.RegisterEnum("csi.v1.NodeServiceCapability_RPC_Type", NodeServiceCapability_RPC_Type_name, NodeServiceCapability_RPC_Type_value) + proto.RegisterEnum("csi.v1.GroupControllerServiceCapability_RPC_Type", GroupControllerServiceCapability_RPC_Type_name, GroupControllerServiceCapability_RPC_Type_value) proto.RegisterType((*GetPluginInfoRequest)(nil), "csi.v1.GetPluginInfoRequest") proto.RegisterType((*GetPluginInfoResponse)(nil), "csi.v1.GetPluginInfoResponse") proto.RegisterMapType((map[string]string)(nil), "csi.v1.GetPluginInfoResponse.ManifestEntry") @@ -5046,6 +5749,21 @@ func init() { proto.RegisterType((*NodeExpandVolumeRequest)(nil), "csi.v1.NodeExpandVolumeRequest") proto.RegisterMapType((map[string]string)(nil), "csi.v1.NodeExpandVolumeRequest.SecretsEntry") proto.RegisterType((*NodeExpandVolumeResponse)(nil), "csi.v1.NodeExpandVolumeResponse") + proto.RegisterType((*GroupControllerGetCapabilitiesRequest)(nil), "csi.v1.GroupControllerGetCapabilitiesRequest") + proto.RegisterType((*GroupControllerGetCapabilitiesResponse)(nil), "csi.v1.GroupControllerGetCapabilitiesResponse") + proto.RegisterType((*GroupControllerServiceCapability)(nil), "csi.v1.GroupControllerServiceCapability") + proto.RegisterType((*GroupControllerServiceCapability_RPC)(nil), "csi.v1.GroupControllerServiceCapability.RPC") + 
proto.RegisterType((*CreateVolumeGroupSnapshotRequest)(nil), "csi.v1.CreateVolumeGroupSnapshotRequest") + proto.RegisterMapType((map[string]string)(nil), "csi.v1.CreateVolumeGroupSnapshotRequest.ParametersEntry") + proto.RegisterMapType((map[string]string)(nil), "csi.v1.CreateVolumeGroupSnapshotRequest.SecretsEntry") + proto.RegisterType((*CreateVolumeGroupSnapshotResponse)(nil), "csi.v1.CreateVolumeGroupSnapshotResponse") + proto.RegisterType((*VolumeGroupSnapshot)(nil), "csi.v1.VolumeGroupSnapshot") + proto.RegisterType((*DeleteVolumeGroupSnapshotRequest)(nil), "csi.v1.DeleteVolumeGroupSnapshotRequest") + proto.RegisterMapType((map[string]string)(nil), "csi.v1.DeleteVolumeGroupSnapshotRequest.SecretsEntry") + proto.RegisterType((*DeleteVolumeGroupSnapshotResponse)(nil), "csi.v1.DeleteVolumeGroupSnapshotResponse") + proto.RegisterType((*GetVolumeGroupSnapshotRequest)(nil), "csi.v1.GetVolumeGroupSnapshotRequest") + proto.RegisterMapType((map[string]string)(nil), "csi.v1.GetVolumeGroupSnapshotRequest.SecretsEntry") + proto.RegisterType((*GetVolumeGroupSnapshotResponse)(nil), "csi.v1.GetVolumeGroupSnapshotResponse") proto.RegisterExtension(E_AlphaEnum) proto.RegisterExtension(E_AlphaEnumValue) proto.RegisterExtension(E_CsiSecret) @@ -5060,245 +5778,269 @@ func init() { } var fileDescriptor_9cdb00adce470e01 = []byte{ - // 3796 bytes of a gzipped FileDescriptorProto - 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xe4, 0x3b, 0x4b, 0x6c, 0x23, 0x47, - 0x76, 0x6a, 0xfe, 0x24, 0x3d, 0x4a, 0x1a, 0xaa, 0x28, 0x69, 0x38, 0x2d, 0x69, 0xa4, 0xe9, 0xf1, - 0x78, 0xe5, 0xf1, 0x0c, 0x67, 0xad, 0xb5, 0x8d, 0x58, 0x1e, 0xef, 0x9a, 0xa4, 0x38, 0x12, 0x77, - 0x28, 0x52, 0x6e, 0x52, 0x33, 0x3b, 0x93, 0x18, 0xed, 0x16, 0x59, 0xe2, 0x34, 0x4c, 0x76, 0xd3, - 0xdd, 0x4d, 0x45, 0xda, 0x4b, 0x82, 0x04, 0x39, 0x04, 0xb9, 0xe4, 0xb6, 0xce, 0x29, 0x8b, 0x24, - 0xc7, 0x5d, 0xec, 0x21, 0x08, 0x72, 0x0c, 0x90, 0x5b, 0x02, 0xe4, 0x73, 0x4b, 0x90, 0xcb, 0x1e, - 0x02, 0xe4, 0x60, 0x24, 
0x80, 0xcf, 0x39, 0x04, 0x41, 0x57, 0x55, 0x37, 0xfb, 0xcb, 0xcf, 0x48, - 0x03, 0x1f, 0xf6, 0x24, 0xf6, 0xab, 0xf7, 0x5e, 0xbd, 0xaa, 0x7a, 0xef, 0xd5, 0xfb, 0x94, 0xe0, - 0x83, 0x8e, 0x62, 0xbe, 0x1a, 0x9c, 0xe6, 0x5b, 0x5a, 0xef, 0x51, 0x4b, 0x53, 0x4d, 0x59, 0x51, - 0xb1, 0xfe, 0xd0, 0x30, 0x35, 0x5d, 0xee, 0xe0, 0x87, 0x8a, 0x6a, 0x62, 0xfd, 0x4c, 0x6e, 0xe1, - 0x47, 0x46, 0x1f, 0xb7, 0x1e, 0xb5, 0x0c, 0x25, 0xdf, 0xd7, 0x35, 0x53, 0x43, 0x29, 0xeb, 0xe7, - 0xf9, 0x7b, 0xfc, 0x76, 0x47, 0xd3, 0x3a, 0x5d, 0xfc, 0x88, 0x40, 0x4f, 0x07, 0x67, 0x8f, 0xda, - 0xd8, 0x68, 0xe9, 0x4a, 0xdf, 0xd4, 0x74, 0x8a, 0xc9, 0x6f, 0xf9, 0x31, 0x4c, 0xa5, 0x87, 0x0d, - 0x53, 0xee, 0xf5, 0x19, 0xc2, 0x6d, 0x3f, 0xc2, 0xef, 0xea, 0x72, 0xbf, 0x8f, 0x75, 0x83, 0x8e, - 0x0b, 0x6b, 0xb0, 0x72, 0x80, 0xcd, 0xe3, 0xee, 0xa0, 0xa3, 0xa8, 0x15, 0xf5, 0x4c, 0x13, 0xf1, - 0x57, 0x03, 0x6c, 0x98, 0xc2, 0xbf, 0x73, 0xb0, 0xea, 0x1b, 0x30, 0xfa, 0x9a, 0x6a, 0x60, 0x84, - 0x20, 0xa1, 0xca, 0x3d, 0x9c, 0xe3, 0xb6, 0xb9, 0x9d, 0x79, 0x91, 0xfc, 0x46, 0xf7, 0x60, 0xe9, - 0x1c, 0xab, 0x6d, 0x4d, 0x97, 0xce, 0xb1, 0x6e, 0x28, 0x9a, 0x9a, 0x8b, 0x91, 0xd1, 0x45, 0x0a, - 0x7d, 0x46, 0x81, 0xe8, 0x00, 0xe6, 0x7a, 0xb2, 0xaa, 0x9c, 0x61, 0xc3, 0xcc, 0xc5, 0xb7, 0xe3, - 0x3b, 0xe9, 0xdd, 0x77, 0xf3, 0x74, 0xa9, 0xf9, 0xd0, 0xb9, 0xf2, 0x47, 0x0c, 0xbb, 0xac, 0x9a, - 0xfa, 0xa5, 0xe8, 0x10, 0xf3, 0x1f, 0xc3, 0xa2, 0x67, 0x08, 0x65, 0x20, 0xfe, 0x25, 0xbe, 0x64, - 0x32, 0x59, 0x3f, 0xd1, 0x0a, 0x24, 0xcf, 0xe5, 0xee, 0x00, 0x33, 0x49, 0xe8, 0xc7, 0x5e, 0xec, - 0xb7, 0x38, 0xe1, 0x36, 0x6c, 0x38, 0xb3, 0x95, 0xe4, 0xbe, 0x7c, 0xaa, 0x74, 0x15, 0x53, 0xc1, - 0x86, 0xbd, 0xf4, 0xcf, 0x61, 0x33, 0x62, 0x9c, 0xed, 0xc0, 0x63, 0x58, 0x68, 0xb9, 0xe0, 0x39, - 0x8e, 0x2c, 0x25, 0x67, 0x2f, 0xc5, 0x47, 0x79, 0x29, 0x7a, 0xb0, 0x85, 0x7f, 0x8e, 0x43, 0xc6, - 0x8f, 0x82, 0x1e, 0xc3, 0xac, 0x81, 0xf5, 0x73, 0xa5, 0x45, 0xf7, 0x35, 0xbd, 0xbb, 0x1d, 0xc5, - 0x2d, 0xdf, 0xa0, 0x78, 0x87, 0x33, 0xa2, 0x4d, 0x82, 0x4e, 
0x20, 0x73, 0xae, 0x75, 0x07, 0x3d, - 0x2c, 0xe1, 0x8b, 0xbe, 0xac, 0x3a, 0x07, 0x90, 0xde, 0xdd, 0x89, 0x64, 0xf3, 0x8c, 0x10, 0x94, - 0x6d, 0xfc, 0xc3, 0x19, 0xf1, 0xc6, 0xb9, 0x17, 0xc4, 0xff, 0x8c, 0x83, 0x59, 0x36, 0x1b, 0xfa, - 0x08, 0x12, 0xe6, 0x65, 0x9f, 0x4a, 0xb7, 0xb4, 0x7b, 0x6f, 0x9c, 0x74, 0xf9, 0xe6, 0x65, 0x1f, - 0x8b, 0x84, 0x44, 0xf8, 0x0c, 0x12, 0xd6, 0x17, 0x4a, 0xc3, 0xec, 0x49, 0xed, 0x69, 0xad, 0xfe, - 0xbc, 0x96, 0x99, 0x41, 0x6b, 0x80, 0x4a, 0xf5, 0x5a, 0x53, 0xac, 0x57, 0xab, 0x65, 0x51, 0x6a, - 0x94, 0xc5, 0x67, 0x95, 0x52, 0x39, 0xc3, 0xa1, 0xb7, 0x60, 0xfb, 0x59, 0xbd, 0x7a, 0x72, 0x54, - 0x96, 0x0a, 0xa5, 0x52, 0xb9, 0xd1, 0xa8, 0x14, 0x2b, 0xd5, 0x4a, 0xf3, 0x85, 0x54, 0xaa, 0xd7, - 0x1a, 0x4d, 0xb1, 0x50, 0xa9, 0x35, 0x1b, 0x99, 0x18, 0xff, 0x07, 0x1c, 0xdc, 0xf0, 0x2d, 0x00, - 0x15, 0x3c, 0x12, 0x3e, 0x9c, 0x74, 0xe1, 0x6e, 0x49, 0x1f, 0x84, 0x49, 0x0a, 0x90, 0xaa, 0xd7, - 0xaa, 0x95, 0x9a, 0x25, 0x5d, 0x1a, 0x66, 0xeb, 0x4f, 0x9e, 0x90, 0x8f, 0x58, 0x31, 0x45, 0x27, - 0x14, 0x96, 0x60, 0xe1, 0x58, 0xd7, 0x4e, 0xb1, 0xad, 0x3f, 0x05, 0x58, 0x64, 0xdf, 0x4c, 0x5f, - 0xbe, 0x0f, 0x49, 0x1d, 0xcb, 0xed, 0x4b, 0x76, 0xb4, 0x7c, 0x9e, 0xda, 0x64, 0xde, 0xb6, 0xc9, - 0x7c, 0x51, 0xd3, 0xba, 0xcf, 0x2c, 0xfd, 0x14, 0x29, 0xa2, 0xf0, 0x6d, 0x02, 0xb2, 0x25, 0x1d, - 0xcb, 0x26, 0xa6, 0xd2, 0x32, 0xd6, 0xa1, 0xb6, 0xf7, 0x18, 0x96, 0x2c, 0xfd, 0x6a, 0x29, 0xe6, - 0xa5, 0xa4, 0xcb, 0x6a, 0x07, 0xb3, 0xa3, 0x5f, 0xb5, 0x77, 0xa0, 0xc4, 0x46, 0x45, 0x6b, 0x50, - 0x5c, 0x6c, 0xb9, 0x3f, 0x51, 0x05, 0xb2, 0x4c, 0x75, 0x3c, 0x2a, 0x1d, 0xf7, 0xaa, 0x34, 0x95, - 0xc2, 0xa5, 0xd2, 0xe8, 0xdc, 0x0b, 0x51, 0xb0, 0x81, 0x9e, 0x02, 0xf4, 0x65, 0x5d, 0xee, 0x61, - 0x13, 0xeb, 0x46, 0x2e, 0xe1, 0xb5, 0xef, 0x90, 0xd5, 0xe4, 0x8f, 0x1d, 0x6c, 0x6a, 0xdf, 0x2e, - 0x72, 0x74, 0x60, 0x19, 0x44, 0x4b, 0xc7, 0xa6, 0x91, 0x4b, 0x12, 0x4e, 0x3b, 0xa3, 0x38, 0x35, - 0x28, 0x2a, 0x61, 0x53, 0x8c, 0x7f, 0x5d, 0xe4, 0x44, 0x9b, 0x1a, 0xd5, 0x61, 0xd5, 0x5e, 0xa0, - 
0xa6, 0x9a, 0x58, 0x35, 0x25, 0x43, 0x1b, 0xe8, 0x2d, 0x9c, 0x4b, 0x91, 0x5d, 0x5a, 0xf7, 0x2d, - 0x91, 0xe2, 0x34, 0x08, 0x8a, 0xc8, 0xb6, 0xc6, 0x03, 0x44, 0x2f, 0x81, 0x97, 0x5b, 0x2d, 0x6c, - 0x18, 0x0a, 0xdd, 0x0b, 0x49, 0xc7, 0x5f, 0x0d, 0x14, 0x1d, 0xf7, 0xb0, 0x6a, 0x1a, 0xb9, 0x59, - 0x2f, 0xd7, 0xa6, 0xd6, 0xd7, 0xba, 0x5a, 0xe7, 0x52, 0x1c, 0xe2, 0x88, 0xb7, 0x3c, 0xe4, 0xae, - 0x11, 0x83, 0xff, 0x04, 0x6e, 0xf8, 0x36, 0x65, 0x1a, 0xcf, 0xc6, 0xef, 0xc1, 0x82, 0x7b, 0x27, - 0xa6, 0xf2, 0x8a, 0x7f, 0x12, 0x83, 0x6c, 0xc8, 0x1e, 0xa0, 0x43, 0x98, 0x33, 0x54, 0xb9, 0x6f, - 0xbc, 0xd2, 0x4c, 0xa6, 0xbf, 0xf7, 0x47, 0x6c, 0x59, 0xbe, 0xc1, 0x70, 0xe9, 0xe7, 0xe1, 0x8c, - 0xe8, 0x50, 0xa3, 0x22, 0xa4, 0xe8, 0x7e, 0xfa, 0x7d, 0x53, 0x18, 0x1f, 0x0a, 0x73, 0xb8, 0x30, - 0x4a, 0xfe, 0x3d, 0x58, 0xf2, 0xce, 0x80, 0xb6, 0x20, 0x6d, 0xcf, 0x20, 0x29, 0x6d, 0xb6, 0x56, - 0xb0, 0x41, 0x95, 0x36, 0xff, 0x2e, 0x2c, 0xb8, 0x99, 0xa1, 0x75, 0x98, 0x67, 0x0a, 0xe1, 0xa0, - 0xcf, 0x51, 0x40, 0xa5, 0xed, 0xd8, 0xf4, 0x0f, 0x61, 0xc5, 0xab, 0x67, 0xcc, 0x94, 0xdf, 0x76, - 0xd6, 0x40, 0xf7, 0x62, 0xc9, 0xbb, 0x06, 0x5b, 0x4e, 0xe1, 0xcf, 0x93, 0x90, 0xf1, 0x1b, 0x0d, - 0x7a, 0x0c, 0xc9, 0xd3, 0xae, 0xd6, 0xfa, 0x92, 0xd1, 0xbe, 0x15, 0x65, 0x5d, 0xf9, 0xa2, 0x85, - 0x45, 0xa1, 0x87, 0x33, 0x22, 0x25, 0xb2, 0xa8, 0x7b, 0xda, 0x40, 0x35, 0xd9, 0xee, 0x45, 0x53, - 0x1f, 0x59, 0x58, 0x43, 0x6a, 0x42, 0x84, 0xf6, 0x21, 0x4d, 0xd5, 0x4e, 0xea, 0x69, 0x6d, 0x9c, - 0x8b, 0x13, 0x1e, 0x77, 0x23, 0x79, 0x14, 0x08, 0xee, 0x91, 0xd6, 0xc6, 0x22, 0xc8, 0xce, 0x6f, - 0x7e, 0x11, 0xd2, 0x2e, 0xd9, 0xf8, 0x01, 0xa4, 0x5d, 0x93, 0xa1, 0x9b, 0x30, 0x7b, 0x66, 0x48, - 0x8e, 0x13, 0x9e, 0x17, 0x53, 0x67, 0x06, 0xf1, 0xa7, 0x5b, 0x90, 0x26, 0x52, 0x48, 0x67, 0x5d, - 0xb9, 0x63, 0xe4, 0x62, 0xdb, 0x71, 0xeb, 0x8c, 0x08, 0xe8, 0x89, 0x05, 0x41, 0x0f, 0x80, 0x39, - 0x14, 0x89, 0xe2, 0x75, 0x74, 0x6d, 0xd0, 0x27, 0x42, 0xce, 0x8b, 0xec, 0x6a, 0x23, 0x13, 0x1d, - 0x58, 0x70, 0xfe, 0xaf, 0x63, 0x00, 
0x43, 0x01, 0xd1, 0x63, 0x48, 0x90, 0x35, 0x51, 0xc7, 0xbf, - 0x33, 0xc1, 0x9a, 0xf2, 0x64, 0x61, 0x84, 0x4a, 0xf8, 0x2f, 0x0e, 0x12, 0x84, 0x8d, 0xff, 0x7a, - 0x6a, 0x54, 0x6a, 0x07, 0xd5, 0xb2, 0x54, 0xab, 0xef, 0x97, 0xa5, 0xe7, 0x62, 0xa5, 0x59, 0x16, - 0x33, 0x1c, 0x5a, 0x87, 0x9b, 0x6e, 0xb8, 0x58, 0x2e, 0xec, 0x97, 0x45, 0xa9, 0x5e, 0xab, 0xbe, - 0xc8, 0xc4, 0x10, 0x0f, 0x6b, 0x47, 0x27, 0xd5, 0x66, 0x25, 0x38, 0x16, 0x47, 0x1b, 0x90, 0x73, - 0x8d, 0x31, 0x1e, 0x8c, 0x6d, 0xc2, 0x62, 0xeb, 0x1a, 0xa5, 0x3f, 0xd9, 0x60, 0x12, 0x09, 0x70, - 0xcb, 0x3d, 0xa7, 0x97, 0x36, 0xc5, 0xc7, 0x7f, 0x5e, 0xe4, 0xd0, 0x1d, 0xc8, 0xb9, 0x71, 0x3c, - 0x1c, 0x66, 0x09, 0x4a, 0x71, 0xd1, 0xd1, 0x00, 0xa2, 0xe1, 0xcf, 0x61, 0xd1, 0x73, 0x31, 0x58, - 0x31, 0x1c, 0xf3, 0x64, 0x6d, 0xe9, 0xf4, 0xd2, 0x24, 0x71, 0x0d, 0xb7, 0x13, 0x17, 0x17, 0x6d, - 0x68, 0xd1, 0x02, 0x5a, 0x67, 0xd9, 0x55, 0x7a, 0x8a, 0xc9, 0x70, 0x62, 0x04, 0x07, 0x08, 0x88, - 0x20, 0x08, 0xbf, 0x8e, 0x41, 0x8a, 0x29, 0xc4, 0x3d, 0xd7, 0xd5, 0xe4, 0x61, 0x69, 0x43, 0x29, - 0x4b, 0x8f, 0x45, 0xc6, 0xbc, 0x16, 0x89, 0x0e, 0x61, 0xc9, 0xed, 0xbf, 0x2f, 0xec, 0xc8, 0xf1, - 0x8e, 0xf7, 0x9c, 0xdd, 0x4e, 0xe4, 0x82, 0xc5, 0x8b, 0x8b, 0xe7, 0x6e, 0x18, 0x2a, 0xc2, 0x92, - 0xef, 0x0a, 0x48, 0x8c, 0xbf, 0x02, 0x16, 0x5b, 0x1e, 0x6f, 0x58, 0x80, 0xac, 0xed, 0xbd, 0xbb, - 0x58, 0x32, 0x99, 0x77, 0x67, 0x57, 0x54, 0x26, 0xe0, 0xf5, 0xd1, 0x10, 0xd9, 0x86, 0xf1, 0x9f, - 0x02, 0x0a, 0xca, 0x3a, 0x95, 0xab, 0x1e, 0x40, 0x36, 0xe4, 0x5e, 0x41, 0x79, 0x98, 0x27, 0x47, - 0x65, 0x28, 0x26, 0x66, 0x31, 0x69, 0x50, 0xa2, 0x21, 0x8a, 0x85, 0xdf, 0xd7, 0xf1, 0x19, 0xd6, - 0x75, 0xdc, 0x26, 0x36, 0x19, 0x8a, 0xef, 0xa0, 0x08, 0x7f, 0xc8, 0xc1, 0x9c, 0x0d, 0x47, 0x7b, - 0x30, 0x67, 0xe0, 0x0e, 0xbd, 0xf3, 0xe8, 0x5c, 0xb7, 0xfd, 0xb4, 0xf9, 0x06, 0x43, 0x60, 0xd1, - 0xbb, 0x8d, 0x6f, 0x45, 0xef, 0x9e, 0xa1, 0xa9, 0x16, 0xff, 0xb7, 0x1c, 0x64, 0xf7, 0x71, 0x17, - 0xfb, 0x43, 0xa3, 0x51, 0x6e, 0xdd, 0x1d, 0x4d, 0xc4, 0xbc, 0xd1, 0x44, 
0x08, 0xab, 0x11, 0xd1, - 0xc4, 0x95, 0x6e, 0xd8, 0x35, 0x58, 0xf1, 0xce, 0x46, 0xef, 0x14, 0xe1, 0x7f, 0xe2, 0x70, 0xdb, - 0xd2, 0x05, 0x5d, 0xeb, 0x76, 0xb1, 0x7e, 0x3c, 0x38, 0xed, 0x2a, 0xc6, 0xab, 0x29, 0x16, 0x77, - 0x13, 0x66, 0x55, 0xad, 0xed, 0x32, 0x9e, 0x94, 0xf5, 0x59, 0x69, 0xa3, 0x32, 0x2c, 0xfb, 0x63, - 0xbb, 0x4b, 0xe6, 0xf9, 0xa3, 0x23, 0xbb, 0xcc, 0xb9, 0xff, 0xda, 0xe2, 0x61, 0xce, 0x8a, 0x4a, - 0x35, 0xb5, 0x7b, 0x49, 0x2c, 0x66, 0x4e, 0x74, 0xbe, 0x91, 0xe8, 0x0f, 0xd3, 0x7e, 0xe0, 0x84, - 0x69, 0x23, 0x57, 0x34, 0x2a, 0x62, 0xfb, 0x22, 0x60, 0xf1, 0x29, 0xc2, 0xfa, 0xa3, 0x09, 0x59, - 0x8f, 0xf5, 0x04, 0x57, 0x39, 0xc5, 0x6b, 0x30, 0xdf, 0x7f, 0xe4, 0x60, 0x2b, 0x72, 0x09, 0x2c, - 0xce, 0x68, 0xc3, 0x8d, 0x3e, 0x1d, 0x70, 0x36, 0x81, 0x5a, 0xd9, 0xc7, 0x63, 0x37, 0x81, 0xa5, - 0xce, 0x0c, 0xea, 0xd9, 0x86, 0xa5, 0xbe, 0x07, 0xc8, 0x17, 0x20, 0x1b, 0x82, 0x36, 0xd5, 0x62, - 0xbe, 0xe1, 0x60, 0x7b, 0x28, 0xca, 0x89, 0xda, 0xbf, 0x3e, 0xf5, 0x6d, 0x0e, 0x75, 0x8b, 0xba, - 0xfc, 0x0f, 0x82, 0x6b, 0x0f, 0x9f, 0xf0, 0x4d, 0x59, 0xf0, 0x5d, 0xb8, 0x33, 0x62, 0x6a, 0x66, - 0xce, 0xbf, 0x4e, 0xc0, 0x9d, 0x67, 0x72, 0x57, 0x69, 0x3b, 0xd1, 0x63, 0x48, 0x91, 0x61, 0xf4, - 0x96, 0xb4, 0x02, 0x16, 0x40, 0xbd, 0xd6, 0x63, 0xc7, 0x6a, 0xc7, 0xf1, 0x9f, 0xe0, 0x3a, 0xbc, - 0xc6, 0xcc, 0xef, 0x45, 0x48, 0xe6, 0xf7, 0xd1, 0xe4, 0xb2, 0x8e, 0xca, 0x03, 0x4f, 0xfc, 0x0e, - 0xe6, 0xc3, 0xc9, 0xf9, 0x8e, 0xd0, 0x82, 0x2b, 0x5b, 0xf1, 0x77, 0x99, 0xaa, 0xfd, 0x7d, 0x02, - 0x84, 0x51, 0xab, 0x67, 0x3e, 0x44, 0x84, 0xf9, 0x96, 0xa6, 0x9e, 0x29, 0x7a, 0x0f, 0xb7, 0x59, - 0xca, 0xf1, 0xfe, 0x24, 0x9b, 0xc7, 0x1c, 0x48, 0xc9, 0xa6, 0x15, 0x87, 0x6c, 0x50, 0x0e, 0x66, - 0x7b, 0xd8, 0x30, 0xe4, 0x8e, 0x2d, 0x96, 0xfd, 0xc9, 0xff, 0x32, 0x0e, 0xf3, 0x0e, 0x09, 0x52, - 0x03, 0x1a, 0x4c, 0xdd, 0xd7, 0xc1, 0xeb, 0x08, 0xf0, 0xfa, 0xca, 0x1c, 0x7b, 0x0d, 0x65, 0x6e, - 0x7b, 0x94, 0x99, 0x9a, 0xc3, 0xfe, 0x6b, 0x89, 0x3d, 0x42, 0xaf, 0xbf, 0x73, 0x05, 0x14, 0x7e, - 0x07, 0x50, 
0x55, 0x31, 0x58, 0xea, 0xe6, 0xb8, 0x25, 0x2b, 0x53, 0x93, 0x2f, 0x24, 0xac, 0x9a, - 0xba, 0xc2, 0xc2, 0xf5, 0xa4, 0x08, 0x3d, 0xf9, 0xa2, 0x4c, 0x21, 0x56, 0x48, 0x6f, 0x98, 0xb2, - 0x6e, 0x2a, 0x6a, 0x47, 0x32, 0xb5, 0x2f, 0xb1, 0x53, 0xe9, 0xb5, 0xa1, 0x4d, 0x0b, 0x28, 0xfc, - 0x77, 0x0c, 0xb2, 0x1e, 0xf6, 0x4c, 0x27, 0x3f, 0x86, 0xd9, 0x21, 0x6f, 0x4f, 0x18, 0x1f, 0x82, - 0x9d, 0xa7, 0xdb, 0x66, 0x53, 0xa0, 0x4d, 0x00, 0x15, 0x5f, 0x98, 0x9e, 0x79, 0xe7, 0x2d, 0x08, - 0x99, 0x93, 0xff, 0x23, 0xce, 0xc9, 0xf4, 0x4d, 0xd9, 0x1c, 0x90, 0xac, 0x92, 0xb9, 0x68, 0xdc, - 0x96, 0xd8, 0x1d, 0x43, 0xe7, 0x9d, 0x17, 0x33, 0xce, 0x48, 0x8d, 0xdc, 0x36, 0x06, 0x3a, 0x70, - 0x8a, 0xa8, 0x2d, 0x4d, 0x6d, 0x2b, 0xe6, 0xb0, 0x88, 0x7a, 0x33, 0x90, 0x20, 0xd0, 0xe1, 0xa2, - 0x95, 0x57, 0xd9, 0x65, 0x53, 0x07, 0xca, 0x7f, 0x05, 0x49, 0x7a, 0x1c, 0x13, 0x16, 0x0b, 0xd0, - 0xa7, 0x90, 0x32, 0x88, 0xc4, 0xfe, 0xc2, 0x48, 0xd8, 0x9e, 0xb8, 0x57, 0x28, 0x32, 0x3a, 0xe1, - 0x87, 0xc0, 0x0f, 0x2f, 0xa6, 0x03, 0x6c, 0x4e, 0x7e, 0xfd, 0xee, 0x59, 0x6b, 0x10, 0x7e, 0x16, - 0x83, 0xf5, 0x50, 0x06, 0xd3, 0x95, 0x3d, 0xd0, 0xa1, 0x6f, 0x25, 0xdf, 0x0f, 0xde, 0xd8, 0x01, - 0xe6, 0xa1, 0x2b, 0xe2, 0x7f, 0xff, 0x6a, 0x87, 0x59, 0x9c, 0xfa, 0x30, 0x03, 0xe7, 0x48, 0x77, - 0xe6, 0x97, 0x31, 0x40, 0x07, 0xd8, 0x74, 0x52, 0x65, 0xb6, 0xa5, 0x11, 0xfe, 0x86, 0x7b, 0x0d, - 0x7f, 0xf3, 0x63, 0x8f, 0xbf, 0xa1, 0x1e, 0xeb, 0xbe, 0xab, 0x2d, 0xe2, 0x9b, 0x7a, 0xe4, 0x6d, - 0x19, 0x91, 0x9e, 0xd2, 0x98, 0x7f, 0xb2, 0xf4, 0xf4, 0x8a, 0x6e, 0xe5, 0x3f, 0x39, 0xc8, 0x7a, - 0x84, 0x66, 0x1a, 0xf4, 0x10, 0x90, 0x7c, 0x2e, 0x2b, 0x5d, 0xd9, 0x12, 0xcc, 0x4e, 0xff, 0x59, - 0x39, 0x60, 0xd9, 0x19, 0xb1, 0xc9, 0xd0, 0x53, 0xc8, 0xf6, 0xe4, 0x0b, 0xa5, 0x37, 0xe8, 0x49, - 0x6c, 0x9f, 0x0d, 0xe5, 0xa7, 0x76, 0xe1, 0x70, 0x3d, 0x50, 0x40, 0xaf, 0xa8, 0xe6, 0x87, 0xef, - 0xd3, 0x0a, 0xfa, 0x32, 0xa3, 0x63, 0xca, 0xa3, 0xfc, 0x14, 0xa3, 0x63, 0xc8, 0xf6, 0x14, 0x35, - 0xc0, 0x2c, 0x3e, 0x96, 0x19, 0x35, 0xf0, 0x65, 
0x46, 0x3c, 0xe4, 0x28, 0x08, 0xee, 0xa0, 0x97, - 0x2d, 0xd7, 0xdf, 0x46, 0xea, 0xba, 0x83, 0xc5, 0x00, 0x0e, 0xdb, 0x96, 0x83, 0xd0, 0x56, 0xd2, - 0xdd, 0xa0, 0xd9, 0xb0, 0xbe, 0x4a, 0x64, 0x57, 0xe9, 0xff, 0xe2, 0x6e, 0x0b, 0x0e, 0x60, 0xa3, - 0x8f, 0x21, 0xae, 0xf7, 0x5b, 0xcc, 0x7c, 0xbf, 0x37, 0x01, 0xff, 0xbc, 0x78, 0x5c, 0x3a, 0x9c, - 0x11, 0x2d, 0x2a, 0xfe, 0xcf, 0xe2, 0x10, 0x17, 0x8f, 0x4b, 0xe8, 0x53, 0x4f, 0x8b, 0xe5, 0xc1, - 0x84, 0x5c, 0xdc, 0x1d, 0x96, 0x7f, 0x89, 0x85, 0xb5, 0x58, 0x72, 0xb0, 0x52, 0x12, 0xcb, 0x85, - 0x66, 0x59, 0xda, 0x2f, 0x57, 0xcb, 0xcd, 0xb2, 0x44, 0x5b, 0x40, 0x19, 0x0e, 0x6d, 0x40, 0xee, - 0xf8, 0xa4, 0x58, 0xad, 0x34, 0x0e, 0xa5, 0x93, 0x9a, 0xfd, 0x8b, 0x8d, 0xc6, 0x50, 0x06, 0x16, - 0xaa, 0x95, 0x46, 0x93, 0x01, 0x1a, 0x99, 0xb8, 0x05, 0x39, 0x28, 0x37, 0xa5, 0x52, 0xe1, 0xb8, - 0x50, 0xaa, 0x34, 0x5f, 0x64, 0x12, 0x88, 0x87, 0x35, 0x2f, 0xef, 0x46, 0xad, 0x70, 0xdc, 0x38, - 0xac, 0x37, 0x33, 0x49, 0x84, 0x60, 0x89, 0xd0, 0xdb, 0xa0, 0x46, 0x26, 0x65, 0x71, 0x28, 0x55, - 0xeb, 0x35, 0x47, 0x86, 0x59, 0xb4, 0x02, 0x19, 0x7b, 0x66, 0xb1, 0x5c, 0xd8, 0x27, 0x05, 0xbd, - 0x39, 0xb4, 0x0c, 0x8b, 0xe5, 0x9f, 0x1c, 0x17, 0x6a, 0xfb, 0x36, 0xe2, 0x3c, 0xda, 0x86, 0x0d, - 0xb7, 0x38, 0x12, 0xa3, 0x2a, 0xef, 0x93, 0xa2, 0x5c, 0x23, 0x03, 0xe8, 0x16, 0x64, 0x58, 0x77, - 0xab, 0x54, 0xaf, 0xed, 0x57, 0x9a, 0x95, 0x7a, 0x2d, 0x93, 0xa6, 0x15, 0xbc, 0x2c, 0x80, 0x25, - 0x39, 0x63, 0xb6, 0x30, 0xbe, 0xac, 0xb7, 0x48, 0xcb, 0x7a, 0x76, 0xc5, 0xfa, 0x9b, 0x18, 0xac, - 0xd2, 0x92, 0xb5, 0x5d, 0x20, 0xb7, 0x7d, 0xd5, 0x0e, 0x64, 0x68, 0xbd, 0x4b, 0xf2, 0xdf, 0x02, - 0x4b, 0x14, 0xfe, 0xcc, 0xce, 0x3b, 0xec, 0xf6, 0x52, 0xcc, 0xd5, 0x5e, 0xaa, 0xf8, 0xb3, 0xb0, - 0xfb, 0xde, 0x46, 0x8c, 0x6f, 0xb6, 0x51, 0x89, 0xfd, 0x51, 0x48, 0x9a, 0xf0, 0x70, 0x34, 0xb7, - 0x51, 0x21, 0xd4, 0x55, 0xb2, 0xf8, 0x2b, 0x7a, 0xb9, 0x27, 0xb0, 0xe6, 0x97, 0x97, 0x19, 0xf4, - 0x83, 0x40, 0xbb, 0xc4, 0x71, 0xbb, 0x0e, 0xae, 0x83, 0x21, 0xfc, 0x1b, 0x07, 0x73, 
0x36, 0xd8, - 0x0a, 0x6f, 0x2c, 0xbf, 0xe4, 0xa9, 0x94, 0xce, 0x5b, 0x10, 0xa7, 0xf0, 0xea, 0x6e, 0x74, 0xc4, - 0xfc, 0x8d, 0x8e, 0xd0, 0x73, 0x8e, 0x87, 0x9e, 0xf3, 0x8f, 0x60, 0xb1, 0x65, 0x89, 0xaf, 0x68, - 0xaa, 0x64, 0x2a, 0x3d, 0xbb, 0x10, 0x1a, 0x6c, 0x4c, 0x36, 0xed, 0xd7, 0x04, 0xe2, 0x82, 0x4d, - 0x60, 0x81, 0xd0, 0x36, 0x2c, 0x90, 0x46, 0xa5, 0x64, 0x6a, 0xd2, 0xc0, 0xc0, 0xb9, 0x24, 0x29, - 0x0b, 0x01, 0x81, 0x35, 0xb5, 0x13, 0x03, 0x0b, 0x7f, 0xc7, 0xc1, 0x2a, 0xad, 0x76, 0xf9, 0xd5, - 0x71, 0x5c, 0xc3, 0xc6, 0xad, 0x71, 0xbe, 0xdb, 0x30, 0x94, 0xe1, 0x9b, 0x4a, 0xf6, 0x73, 0xb0, - 0xe6, 0x9f, 0x8f, 0x65, 0xf8, 0xbf, 0x8a, 0xc1, 0x8a, 0x15, 0x9a, 0xd9, 0x03, 0xd7, 0x1d, 0x3d, - 0x4f, 0x71, 0x92, 0xbe, 0xcd, 0x4c, 0x04, 0x36, 0xf3, 0xd0, 0x9f, 0x3f, 0xbf, 0xe3, 0x0e, 0x2e, - 0xfd, 0x2b, 0x78, 0x53, 0x7b, 0xf9, 0x0b, 0x0e, 0x56, 0x7d, 0xf3, 0x31, 0x7b, 0xf9, 0xc4, 0x9f, - 0x10, 0xdc, 0x8d, 0x90, 0xef, 0xb5, 0x52, 0x82, 0x0f, 0xec, 0x50, 0x7c, 0x3a, 0xb3, 0xfc, 0xd7, - 0x18, 0x6c, 0x0e, 0x2f, 0x35, 0xf2, 0x54, 0xa0, 0x3d, 0x45, 0x45, 0xeb, 0x6a, 0x1d, 0xf9, 0xcf, - 0xfc, 0x0e, 0x77, 0x37, 0x78, 0xcf, 0x86, 0x88, 0x34, 0xca, 0xf1, 0x86, 0x16, 0x82, 0x13, 0xd3, - 0x16, 0x82, 0xaf, 0xa4, 0x01, 0xbf, 0xe7, 0xae, 0x71, 0x7b, 0xc5, 0x67, 0x9a, 0x30, 0x61, 0xb3, - 0xe8, 0x43, 0xb8, 0x49, 0xa2, 0x7f, 0xe7, 0xa5, 0x8b, 0xdd, 0x7f, 0xa7, 0x2e, 0x71, 0x4e, 0x5c, - 0xb5, 0x86, 0x9d, 0xe7, 0x1d, 0xac, 0x41, 0xd2, 0x16, 0xbe, 0x4d, 0xc0, 0x9a, 0x95, 0x1d, 0x34, - 0x4c, 0xb9, 0x33, 0x4d, 0xeb, 0xe0, 0xb7, 0x83, 0x95, 0xd8, 0x98, 0xf7, 0x58, 0xc2, 0xb9, 0x4e, - 0x52, 0x80, 0x45, 0x79, 0xc8, 0x1a, 0xa6, 0xdc, 0x21, 0xee, 0x40, 0xd6, 0x3b, 0xd8, 0x94, 0xfa, - 0xb2, 0xf9, 0x8a, 0xd9, 0xfa, 0x32, 0x1b, 0x6a, 0x92, 0x91, 0x63, 0xd9, 0x7c, 0x75, 0x4d, 0x07, - 0x89, 0x7e, 0xec, 0x77, 0x0a, 0xef, 0x8e, 0x59, 0xcb, 0x08, 0xdd, 0xfa, 0x49, 0x44, 0xb5, 0xfe, - 0xbd, 0x31, 0x2c, 0xc7, 0x57, 0xe9, 0xaf, 0x5e, 0x9d, 0xfe, 0x8e, 0x0b, 0xfd, 0xb7, 0xe0, 0x66, - 0x60, 0xf1, 0xec, 0x0a, 
0xe9, 0x40, 0xce, 0x1a, 0x3a, 0x51, 0x8d, 0x29, 0xd5, 0x31, 0x42, 0x63, - 0x62, 0x11, 0x1a, 0x23, 0xac, 0xc3, 0xad, 0x90, 0x89, 0x98, 0x14, 0x7f, 0x93, 0xa4, 0x62, 0x4c, - 0xdf, 0x73, 0xfa, 0x3c, 0xca, 0x2a, 0xde, 0x77, 0x1f, 0x7b, 0x68, 0x7b, 0xe6, 0x4d, 0xd8, 0xc5, - 0x16, 0xa4, 0xdd, 0x78, 0xec, 0x1a, 0x34, 0xc7, 0x18, 0x4e, 0xf2, 0x4a, 0xad, 0xb0, 0x94, 0xaf, - 0x15, 0x56, 0x1d, 0x1a, 0xd5, 0xac, 0x37, 0xb4, 0x8d, 0xdc, 0x8a, 0x11, 0x66, 0xf5, 0x32, 0x60, - 0x56, 0x73, 0xde, 0xfe, 0x5a, 0x24, 0xd3, 0xdf, 0x00, 0xc3, 0x62, 0x4a, 0x1d, 0xda, 0xf8, 0x12, - 0x5e, 0x02, 0x4f, 0x35, 0x7e, 0xfa, 0x56, 0x94, 0x4f, 0x8d, 0x62, 0x7e, 0x35, 0x12, 0x36, 0x61, - 0x3d, 0x94, 0x37, 0x9b, 0xfa, 0x8f, 0x39, 0x2a, 0x98, 0x53, 0xe3, 0x6a, 0x98, 0xb2, 0x69, 0x4c, - 0x3a, 0x35, 0x1b, 0x74, 0x4f, 0x4d, 0x41, 0x44, 0x83, 0xa7, 0x34, 0x09, 0xe1, 0x4f, 0x39, 0xba, - 0x0f, 0x7e, 0x59, 0xd8, 0x6d, 0xfb, 0x0e, 0x24, 0x07, 0xa4, 0x8c, 0x4f, 0xa3, 0xae, 0xac, 0xd7, - 0x08, 0x4e, 0xac, 0x21, 0x91, 0x62, 0x5c, 0x5b, 0x61, 0x54, 0xf8, 0x15, 0x07, 0x69, 0x17, 0x7f, - 0xb4, 0x01, 0xf3, 0x4e, 0xe5, 0xc7, 0xce, 0x77, 0x1c, 0x80, 0x75, 0xfc, 0xa6, 0x66, 0xca, 0x5d, - 0xf6, 0xc4, 0x84, 0x7e, 0x58, 0x29, 0xea, 0xc0, 0xc0, 0x34, 0x1c, 0x8e, 0x8b, 0xe4, 0x37, 0x7a, - 0x00, 0x89, 0x81, 0xaa, 0x98, 0xc4, 0xec, 0x97, 0xfc, 0xf6, 0x4c, 0xa6, 0xca, 0x9f, 0xa8, 0x8a, - 0x29, 0x12, 0x2c, 0xe1, 0x3e, 0x24, 0xac, 0x2f, 0x6f, 0x05, 0x62, 0x1e, 0x92, 0xc5, 0x17, 0xcd, - 0x72, 0x23, 0xc3, 0x21, 0x80, 0x54, 0x85, 0xe6, 0xeb, 0x31, 0xa1, 0x6a, 0x3f, 0x33, 0x75, 0x16, - 0x61, 0xb9, 0x00, 0xf9, 0x54, 0xd5, 0xf4, 0x9e, 0xdc, 0x25, 0x32, 0xcf, 0x89, 0xce, 0x77, 0x74, - 0x77, 0x84, 0xd6, 0x12, 0x37, 0x9c, 0x13, 0x09, 0xab, 0x17, 0x7d, 0x41, 0x75, 0x2b, 0xaa, 0x52, - 0x54, 0x08, 0xad, 0x14, 0x6d, 0x7a, 0x6e, 0xd9, 0x31, 0x35, 0xa2, 0x7f, 0x88, 0xc1, 0x6a, 0x28, - 0x1e, 0xfa, 0xc0, 0x5d, 0x1d, 0xba, 0x33, 0x92, 0xa7, 0xbb, 0x2e, 0xf4, 0x2d, 0x47, 0xeb, 0x42, - 0x7b, 0x9e, 0xba, 0xd0, 0xdb, 0x63, 0xe9, 0xdd, 0x15, 0xa1, 
0x5f, 0x70, 0x11, 0x15, 0xa1, 0x46, - 0xb3, 0x70, 0x50, 0x96, 0x4e, 0x6a, 0xf4, 0xaf, 0x53, 0x11, 0x5a, 0x81, 0xcc, 0xb0, 0x4e, 0x22, - 0x35, 0x9a, 0x85, 0x66, 0x23, 0x13, 0x0b, 0x56, 0x63, 0xe2, 0xa1, 0xb5, 0x96, 0xc4, 0xf8, 0xb2, - 0x4a, 0x92, 0xa2, 0xac, 0x01, 0x62, 0xd4, 0x47, 0xf5, 0x93, 0x5a, 0x53, 0x3a, 0x10, 0xeb, 0x27, - 0xc7, 0x99, 0x94, 0x53, 0x6e, 0x59, 0x01, 0xc4, 0x4e, 0xcb, 0xfd, 0x6a, 0xfe, 0x2f, 0x38, 0xc8, - 0x7a, 0xc0, 0xec, 0xf0, 0x5c, 0x3d, 0x6e, 0xce, 0xd3, 0xe3, 0x7e, 0x04, 0x2b, 0x56, 0xc6, 0x48, - 0x2d, 0xc5, 0x90, 0xfa, 0x58, 0x27, 0xb5, 0x6d, 0xa6, 0xf3, 0xcb, 0x3d, 0xf9, 0x82, 0xd5, 0xff, - 0x8f, 0xb1, 0x6e, 0x31, 0xbe, 0x86, 0x0a, 0xaf, 0xf0, 0x75, 0x9c, 0xc6, 0x25, 0x53, 0xe7, 0x35, - 0x63, 0x7d, 0x54, 0x30, 0xf1, 0x89, 0x4f, 0x91, 0xf8, 0x44, 0x78, 0xb8, 0xc4, 0x54, 0xc1, 0xf0, - 0xf4, 0x77, 0x7a, 0x6d, 0x78, 0x6f, 0xd3, 0xc8, 0xf5, 0x81, 0x5b, 0x7f, 0xc7, 0x66, 0x5a, 0xa9, - 0xaf, 0x8b, 0xdc, 0xcf, 0xaf, 0x2b, 0x4f, 0x2e, 0xd0, 0x78, 0xec, 0x0a, 0xf9, 0xd1, 0xee, 0xff, - 0x72, 0x30, 0x57, 0x69, 0x63, 0xd5, 0xa4, 0x6b, 0x5b, 0xf4, 0xfc, 0x63, 0x05, 0xda, 0x88, 0xf8, - 0x7f, 0x0b, 0xb2, 0x30, 0x7e, 0x73, 0xe4, 0x7f, 0x63, 0x08, 0x33, 0xe8, 0xcc, 0xf5, 0x4f, 0x21, - 0x9e, 0x26, 0xc6, 0x5b, 0x01, 0xca, 0x10, 0x17, 0xc7, 0xdf, 0x1b, 0x83, 0xe5, 0xcc, 0xf3, 0x21, - 0x24, 0xc9, 0x13, 0x7a, 0xb4, 0xe2, 0x3c, 0xe3, 0x77, 0xbd, 0xb0, 0xe7, 0x57, 0x7d, 0x50, 0x9b, - 0x6e, 0xf7, 0x9f, 0xe6, 0x01, 0x86, 0x69, 0x26, 0x7a, 0x0a, 0x0b, 0xee, 0x57, 0xbc, 0x68, 0x7d, - 0xc4, 0x1b, 0x72, 0x7e, 0x23, 0x7c, 0xd0, 0x91, 0xe9, 0x29, 0x2c, 0xb8, 0x9f, 0x6f, 0x0d, 0x99, - 0x85, 0x3c, 0x21, 0x1b, 0x32, 0x0b, 0x7d, 0xf1, 0x35, 0x83, 0xba, 0x70, 0x33, 0xe2, 0x01, 0x0f, - 0x7a, 0x7b, 0xb2, 0x67, 0x4e, 0xfc, 0xf7, 0x26, 0x7c, 0x09, 0x24, 0xcc, 0x20, 0x1d, 0x6e, 0x45, - 0xbe, 0x5b, 0x41, 0x3b, 0x93, 0xbe, 0xaa, 0xe1, 0xdf, 0x99, 0x00, 0xd3, 0x99, 0x73, 0x00, 0x7c, - 0x74, 0xb3, 0x1c, 0xbd, 0x33, 0xf1, 0x2b, 0x0e, 0xfe, 0xfe, 0xe4, 0xbd, 0x77, 0x61, 0x06, 0x1d, - 
0x42, 0xda, 0xd5, 0x35, 0x45, 0x7c, 0x68, 0x2b, 0x95, 0x32, 0x5e, 0x1f, 0xd1, 0x66, 0xa5, 0x9c, - 0x5c, 0x8d, 0xac, 0x21, 0xa7, 0x60, 0x4b, 0x6e, 0xc8, 0x29, 0xa4, 0xf3, 0xe5, 0xdf, 0x7e, 0xdf, - 0xfd, 0x1e, 0xb6, 0xfd, 0xe1, 0x01, 0x42, 0xd8, 0xf6, 0x47, 0x04, 0x0b, 0xc2, 0x0c, 0xfa, 0x0c, - 0x96, 0xbc, 0x15, 0x6a, 0xb4, 0x39, 0xb2, 0xd2, 0xce, 0xdf, 0x8e, 0x1a, 0x76, 0xb3, 0xf4, 0x16, - 0x44, 0x87, 0x2c, 0x43, 0x0b, 0xb3, 0x43, 0x96, 0x11, 0x75, 0xd4, 0x19, 0xcb, 0x3f, 0x79, 0xca, - 0x7c, 0x43, 0xff, 0x14, 0x56, 0x9d, 0x1c, 0xfa, 0xa7, 0xd0, 0xda, 0xa0, 0x30, 0x83, 0x14, 0x58, - 0x0b, 0xaf, 0x32, 0xa1, 0x7b, 0x13, 0x15, 0xd1, 0xf8, 0xb7, 0xc7, 0xa1, 0x39, 0x53, 0xb5, 0x20, - 0x1b, 0xd2, 0xd4, 0x46, 0xc2, 0xc8, 0x8e, 0x37, 0x9d, 0xe4, 0xee, 0x04, 0x5d, 0x71, 0xc1, 0x8a, - 0x42, 0x76, 0xff, 0x23, 0x09, 0x09, 0x72, 0xed, 0x37, 0xe1, 0x86, 0xaf, 0x94, 0x80, 0x6e, 0x8f, - 0x2e, 0xb0, 0xf0, 0x5b, 0x91, 0xe3, 0xce, 0x1a, 0x5e, 0xc2, 0x72, 0xa0, 0x38, 0x80, 0xb6, 0xdd, - 0x74, 0x61, 0x05, 0x0a, 0xfe, 0xce, 0x08, 0x0c, 0x3f, 0x6f, 0xaf, 0x6f, 0xdb, 0x1e, 0x97, 0xbd, - 0x7a, 0x79, 0x47, 0xf9, 0xb3, 0x2f, 0x68, 0x94, 0xe5, 0xf7, 0x64, 0x82, 0x57, 0xae, 0x50, 0x1f, - 0x76, 0x77, 0x24, 0x8e, 0x33, 0xc3, 0xe7, 0x4e, 0x78, 0xe7, 0x4a, 0x9e, 0x90, 0x47, 0xb8, 0xd0, - 0x24, 0x8f, 0x17, 0x46, 0xa1, 0x38, 0xec, 0x9f, 0x43, 0xc6, 0x7f, 0xcf, 0xa3, 0xad, 0x31, 0x61, - 0x07, 0xbf, 0x1d, 0x8d, 0xe0, 0xdf, 0x19, 0xbf, 0x93, 0xf1, 0x4b, 0x15, 0xe6, 0x5e, 0xee, 0x8e, - 0xc4, 0x71, 0xbb, 0x45, 0x57, 0x84, 0x3b, 0x74, 0x8b, 0xc1, 0x68, 0x78, 0xe8, 0x16, 0x43, 0x42, - 0x62, 0x61, 0x66, 0xef, 0x31, 0x80, 0xdc, 0xed, 0xbf, 0x92, 0x25, 0xac, 0x0e, 0x7a, 0x68, 0x23, - 0xd0, 0x7c, 0x2a, 0xab, 0x83, 0x5e, 0xbd, 0x6f, 0x25, 0x5d, 0x46, 0xee, 0xaf, 0xe6, 0x48, 0xaa, - 0x35, 0x4f, 0x08, 0xac, 0x81, 0xbd, 0x2a, 0x64, 0x86, 0xd4, 0x12, 0x09, 0xa1, 0xd0, 0x9d, 0x50, - 0x1e, 0xa4, 0x95, 0xef, 0x63, 0xb4, 0xe4, 0x30, 0x22, 0xa3, 0x7b, 0x9f, 0x00, 0xb4, 0x0c, 0x45, - 0xa2, 0x31, 0x1c, 0xda, 0x0c, 0xf0, 
0x79, 0xa2, 0xe0, 0x6e, 0xdb, 0xe6, 0xf1, 0x97, 0x4c, 0x98, - 0x96, 0xa1, 0xd0, 0x48, 0x6f, 0xef, 0x47, 0x90, 0xa6, 0xc2, 0x9c, 0x59, 0x78, 0xe3, 0xe8, 0x99, - 0x0c, 0x74, 0xf5, 0x64, 0x64, 0xaf, 0x0c, 0x8b, 0x94, 0x01, 0x4b, 0x18, 0xd1, 0x56, 0x80, 0xc5, - 0x11, 0x1d, 0xf1, 0x31, 0x59, 0x20, 0x64, 0x6c, 0x6c, 0xaf, 0x08, 0x0b, 0x36, 0x1b, 0xf3, 0x95, - 0xd6, 0x46, 0xb7, 0x43, 0xb8, 0x58, 0x03, 0x3e, 0x26, 0x69, 0xc6, 0xc4, 0x1a, 0x1a, 0x8a, 0x62, - 0xff, 0x77, 0x69, 0x50, 0x14, 0x96, 0xd4, 0x85, 0x8a, 0xc2, 0xc6, 0x8a, 0xc9, 0x97, 0xf1, 0x96, - 0xa1, 0x9c, 0xa6, 0x08, 0xd1, 0x0f, 0xfe, 0x3f, 0x00, 0x00, 0xff, 0xff, 0x6f, 0x0a, 0xad, 0x10, - 0x0a, 0x3d, 0x00, 0x00, + // 4182 bytes of a gzipped FileDescriptorProto + 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xe4, 0x5c, 0x4d, 0x6c, 0x1b, 0x49, + 0x76, 0x56, 0xf3, 0x4f, 0xd2, 0xa3, 0x24, 0x53, 0xa5, 0x1f, 0xd3, 0x2d, 0x59, 0x96, 0xda, 0xe3, + 0x19, 0x8d, 0xc7, 0xa6, 0x67, 0xbc, 0x33, 0x83, 0x1d, 0x8d, 0x67, 0x77, 0x48, 0x89, 0x96, 0xb8, + 0xa6, 0x49, 0x6d, 0x93, 0xf2, 0xac, 0x9d, 0x0c, 0x7a, 0x5a, 0x64, 0x49, 0x6e, 0x0c, 0xd9, 0xcd, + 0xe9, 0x6e, 0x2a, 0xd6, 0xe6, 0x90, 0x64, 0x83, 0x20, 0x1b, 0xe4, 0x12, 0x24, 0x87, 0x4c, 0x4e, + 0x59, 0x24, 0x39, 0xee, 0x62, 0x0f, 0x41, 0x10, 0x20, 0x97, 0x00, 0xb9, 0x25, 0x40, 0x90, 0x1c, + 0x93, 0x5c, 0xf6, 0x10, 0x20, 0x87, 0x45, 0x02, 0x4c, 0x2e, 0x39, 0xe4, 0x10, 0x04, 0x5d, 0x55, + 0xfd, 0xff, 0x43, 0xd2, 0x92, 0x33, 0x01, 0xf6, 0x64, 0x75, 0xd5, 0xab, 0x57, 0xaf, 0xaa, 0xde, + 0x7b, 0xf5, 0xde, 0xf7, 0x8a, 0x86, 0xf7, 0x4e, 0x15, 0xf3, 0xf9, 0xf0, 0xb8, 0xd4, 0xd1, 0xfa, + 0xf7, 0x3a, 0x9a, 0x6a, 0xca, 0x8a, 0x8a, 0xf5, 0xbb, 0x86, 0xa9, 0xe9, 0xf2, 0x29, 0xbe, 0xab, + 0xa8, 0x26, 0xd6, 0x4f, 0xe4, 0x0e, 0xbe, 0x67, 0x0c, 0x70, 0xe7, 0x5e, 0xc7, 0x50, 0x4a, 0x03, + 0x5d, 0x33, 0x35, 0x94, 0xb3, 0xfe, 0x3c, 0x7b, 0x87, 0xdf, 0x3c, 0xd5, 0xb4, 0xd3, 0x1e, 0xbe, + 0x47, 0x5a, 0x8f, 0x87, 0x27, 0xf7, 0xba, 0xd8, 0xe8, 0xe8, 0xca, 0xc0, 0xd4, 0x74, 0x4a, 0xc9, + 
0xdf, 0x08, 0x52, 0x98, 0x4a, 0x1f, 0x1b, 0xa6, 0xdc, 0x1f, 0x30, 0x82, 0x8d, 0x20, 0xc1, 0xaf, + 0xe8, 0xf2, 0x60, 0x80, 0x75, 0x83, 0xf6, 0x0b, 0xab, 0xb0, 0xbc, 0x8f, 0xcd, 0xc3, 0xde, 0xf0, + 0x54, 0x51, 0x6b, 0xea, 0x89, 0x26, 0xe2, 0x2f, 0x86, 0xd8, 0x30, 0x85, 0x7f, 0xe2, 0x60, 0x25, + 0xd0, 0x61, 0x0c, 0x34, 0xd5, 0xc0, 0x08, 0x41, 0x46, 0x95, 0xfb, 0xb8, 0xc8, 0x6d, 0x72, 0xdb, + 0xb3, 0x22, 0xf9, 0x1b, 0xdd, 0x82, 0x85, 0x33, 0xac, 0x76, 0x35, 0x5d, 0x3a, 0xc3, 0xba, 0xa1, + 0x68, 0x6a, 0x31, 0x45, 0x7a, 0xe7, 0x69, 0xeb, 0x13, 0xda, 0x88, 0xf6, 0x61, 0xa6, 0x2f, 0xab, + 0xca, 0x09, 0x36, 0xcc, 0x62, 0x7a, 0x33, 0xbd, 0x9d, 0xbf, 0xff, 0x56, 0x89, 0x2e, 0xb5, 0x14, + 0x39, 0x57, 0xe9, 0x31, 0xa3, 0xae, 0xaa, 0xa6, 0x7e, 0x2e, 0x3a, 0x83, 0xf9, 0x0f, 0x61, 0xde, + 0xd7, 0x85, 0x0a, 0x90, 0xfe, 0x1c, 0x9f, 0x33, 0x99, 0xac, 0x3f, 0xd1, 0x32, 0x64, 0xcf, 0xe4, + 0xde, 0x10, 0x33, 0x49, 0xe8, 0xc7, 0x4e, 0xea, 0x9b, 0x9c, 0xb0, 0x01, 0xeb, 0xce, 0x6c, 0xbb, + 0xf2, 0x40, 0x3e, 0x56, 0x7a, 0x8a, 0xa9, 0x60, 0xc3, 0x5e, 0xfa, 0xa7, 0x70, 0x3d, 0xa6, 0x9f, + 0xed, 0xc0, 0x03, 0x98, 0xeb, 0x78, 0xda, 0x8b, 0x1c, 0x59, 0x4a, 0xd1, 0x5e, 0x4a, 0x60, 0xe4, + 0xb9, 0xe8, 0xa3, 0x16, 0xfe, 0x33, 0x0d, 0x85, 0x20, 0x09, 0x7a, 0x00, 0xd3, 0x06, 0xd6, 0xcf, + 0x94, 0x0e, 0xdd, 0xd7, 0xfc, 0xfd, 0xcd, 0x38, 0x6e, 0xa5, 0x16, 0xa5, 0x3b, 0x98, 0x12, 0xed, + 0x21, 0xe8, 0x08, 0x0a, 0x67, 0x5a, 0x6f, 0xd8, 0xc7, 0x12, 0x7e, 0x31, 0x90, 0x55, 0xe7, 0x00, + 0xf2, 0xf7, 0xb7, 0x63, 0xd9, 0x3c, 0x21, 0x03, 0xaa, 0x36, 0xfd, 0xc1, 0x94, 0x78, 0xe5, 0xcc, + 0xdf, 0xc4, 0xff, 0x15, 0x07, 0xd3, 0x6c, 0x36, 0xf4, 0x01, 0x64, 0xcc, 0xf3, 0x01, 0x95, 0x6e, + 0xe1, 0xfe, 0xad, 0x51, 0xd2, 0x95, 0xda, 0xe7, 0x03, 0x2c, 0x92, 0x21, 0x82, 0x09, 0x19, 0xeb, + 0x0b, 0xe5, 0x61, 0xfa, 0xa8, 0xf1, 0xa8, 0xd1, 0xfc, 0xa4, 0x51, 0x98, 0x42, 0xab, 0x80, 0x76, + 0x9b, 0x8d, 0xb6, 0xd8, 0xac, 0xd7, 0xab, 0xa2, 0xd4, 0xaa, 0x8a, 0x4f, 0x6a, 0xbb, 0xd5, 0x02, + 0x87, 0x5e, 0x83, 0xcd, 0x27, 0xcd, 
0xfa, 0xd1, 0xe3, 0xaa, 0x54, 0xde, 0xdd, 0xad, 0xb6, 0x5a, + 0xb5, 0x4a, 0xad, 0x5e, 0x6b, 0x3f, 0x95, 0x76, 0x9b, 0x8d, 0x56, 0x5b, 0x2c, 0xd7, 0x1a, 0xed, + 0x56, 0x21, 0x85, 0xb6, 0xa0, 0xb8, 0x2f, 0x36, 0x8f, 0x0e, 0xa5, 0x08, 0x1e, 0x69, 0x3e, 0xfd, + 0xa3, 0x0a, 0xc7, 0xff, 0x80, 0x83, 0x2b, 0x81, 0x35, 0xa2, 0xb2, 0x6f, 0x11, 0x77, 0xc7, 0xdd, + 0x1b, 0xef, 0x62, 0xee, 0x44, 0x2d, 0x06, 0x20, 0xd7, 0x6c, 0xd4, 0x6b, 0x0d, 0x6b, 0x01, 0x79, + 0x98, 0x6e, 0x3e, 0x7c, 0x48, 0x3e, 0x52, 0x95, 0x1c, 0x9d, 0x50, 0x58, 0x80, 0xb9, 0x43, 0x5d, + 0x3b, 0xc6, 0xb6, 0x8a, 0x95, 0x61, 0x9e, 0x7d, 0x33, 0x95, 0x7a, 0x1b, 0xb2, 0x3a, 0x96, 0xbb, + 0xe7, 0xec, 0xf4, 0xf9, 0x12, 0x35, 0xdb, 0x92, 0x6d, 0xb6, 0xa5, 0x8a, 0xa6, 0xf5, 0x9e, 0x58, + 0x2a, 0x2c, 0x52, 0x42, 0xe1, 0xab, 0x0c, 0x2c, 0xed, 0xea, 0x58, 0x36, 0x31, 0x95, 0x96, 0xb1, + 0x8e, 0x34, 0xcf, 0x07, 0xb0, 0x60, 0xa9, 0x60, 0x47, 0x31, 0xcf, 0x25, 0x5d, 0x56, 0x4f, 0x31, + 0xd3, 0x8e, 0x15, 0x7b, 0x07, 0x76, 0x59, 0xaf, 0x68, 0x75, 0x8a, 0xf3, 0x1d, 0xef, 0x27, 0xaa, + 0xc1, 0x12, 0xd3, 0x2e, 0x9f, 0xd6, 0xa7, 0xfd, 0x5a, 0x4f, 0xa5, 0xf0, 0x68, 0x3d, 0x3a, 0xf3, + 0xb7, 0x28, 0xd8, 0x40, 0x8f, 0x00, 0x06, 0xb2, 0x2e, 0xf7, 0xb1, 0x89, 0x75, 0xa3, 0x98, 0xf1, + 0xbb, 0x80, 0x88, 0xd5, 0x94, 0x0e, 0x1d, 0x6a, 0xea, 0x02, 0x3c, 0xc3, 0xd1, 0xbe, 0x65, 0x33, + 0x1d, 0x1d, 0x9b, 0x46, 0x31, 0x4b, 0x38, 0x6d, 0x27, 0x71, 0x6a, 0x51, 0x52, 0xc2, 0xa6, 0x92, + 0xfe, 0xb2, 0xc2, 0x89, 0xf6, 0x68, 0xd4, 0x84, 0x15, 0x7b, 0x81, 0x9a, 0x6a, 0x62, 0xd5, 0x94, + 0x0c, 0x6d, 0xa8, 0x77, 0x70, 0x31, 0x47, 0x76, 0x69, 0x2d, 0xb0, 0x44, 0x4a, 0xd3, 0x22, 0x24, + 0x22, 0xdb, 0x1a, 0x5f, 0x23, 0x7a, 0x06, 0xbc, 0xdc, 0xe9, 0x60, 0xc3, 0x50, 0xe8, 0x5e, 0x48, + 0x3a, 0xfe, 0x62, 0xa8, 0xe8, 0xb8, 0x8f, 0x55, 0xd3, 0x28, 0x4e, 0xfb, 0xb9, 0xb6, 0xb5, 0x81, + 0xd6, 0xd3, 0x4e, 0xcf, 0x45, 0x97, 0x46, 0xbc, 0xe6, 0x1b, 0xee, 0xe9, 0x31, 0xf8, 0x8f, 0xe0, + 0x4a, 0x60, 0x53, 0x26, 0x71, 0x7e, 0xfc, 0x0e, 0xcc, 0x79, 0x77, 0x62, 
0x22, 0xc7, 0xf9, 0xbb, + 0x29, 0x58, 0x8a, 0xd8, 0x03, 0x74, 0x00, 0x33, 0x86, 0x2a, 0x0f, 0x8c, 0xe7, 0x9a, 0xc9, 0xf4, + 0xf7, 0x76, 0xc2, 0x96, 0x95, 0x5a, 0x8c, 0x96, 0x7e, 0x1e, 0x4c, 0x89, 0xce, 0x68, 0x54, 0x81, + 0x1c, 0xdd, 0xcf, 0xa0, 0xfb, 0x8a, 0xe2, 0x43, 0xdb, 0x1c, 0x2e, 0x6c, 0x24, 0xff, 0x0e, 0x2c, + 0xf8, 0x67, 0x40, 0x37, 0x20, 0x6f, 0xcf, 0x20, 0x29, 0x5d, 0xb6, 0x56, 0xb0, 0x9b, 0x6a, 0x5d, + 0xfe, 0x2d, 0x98, 0xf3, 0x32, 0x43, 0x6b, 0x30, 0xcb, 0x14, 0xc2, 0x21, 0x9f, 0xa1, 0x0d, 0xb5, + 0xae, 0x63, 0xd3, 0xdf, 0x82, 0x65, 0xbf, 0x9e, 0x31, 0x53, 0x7e, 0xdd, 0x59, 0x03, 0xdd, 0x8b, + 0x05, 0xff, 0x1a, 0x6c, 0x39, 0x85, 0x3f, 0xce, 0x42, 0x21, 0x68, 0x34, 0xe8, 0x01, 0x64, 0x8f, + 0x7b, 0x5a, 0xe7, 0x73, 0x36, 0xf6, 0xb5, 0x38, 0xeb, 0x2a, 0x55, 0x2c, 0x2a, 0xda, 0x7a, 0x30, + 0x25, 0xd2, 0x41, 0xd6, 0xe8, 0xbe, 0x36, 0x54, 0x4d, 0xb6, 0x7b, 0xf1, 0xa3, 0x1f, 0x5b, 0x54, + 0xee, 0x68, 0x32, 0x08, 0xed, 0x41, 0x9e, 0xaa, 0x9d, 0xd4, 0xd7, 0xba, 0xb8, 0x98, 0x26, 0x3c, + 0x6e, 0xc6, 0xf2, 0x28, 0x13, 0xda, 0xc7, 0x5a, 0x17, 0x8b, 0x20, 0x3b, 0x7f, 0xf3, 0xf3, 0x90, + 0xf7, 0xc8, 0xc6, 0x0f, 0x21, 0xef, 0x99, 0x0c, 0x5d, 0x85, 0xe9, 0x13, 0x43, 0x72, 0x9c, 0xf0, + 0xac, 0x98, 0x3b, 0x31, 0x88, 0x3f, 0xbd, 0x01, 0x79, 0x22, 0x85, 0x74, 0xd2, 0x93, 0x4f, 0x8d, + 0x62, 0x6a, 0x33, 0x6d, 0x9d, 0x11, 0x69, 0x7a, 0x68, 0xb5, 0xa0, 0x3b, 0xc0, 0x1c, 0x8a, 0x44, + 0xe9, 0x4e, 0x75, 0x6d, 0x38, 0x20, 0x42, 0xce, 0x8a, 0xec, 0xf6, 0x23, 0x13, 0xed, 0x5b, 0xed, + 0xfc, 0x9f, 0xa7, 0x00, 0x5c, 0x01, 0xd1, 0x03, 0xc8, 0x90, 0x35, 0x51, 0xc7, 0xbf, 0x3d, 0xc6, + 0x9a, 0x4a, 0x64, 0x61, 0x64, 0x94, 0xf0, 0x6f, 0x1c, 0x64, 0x08, 0x9b, 0xe0, 0x0d, 0xd6, 0xaa, + 0x35, 0xf6, 0xeb, 0x55, 0xa9, 0xd1, 0xdc, 0xab, 0x4a, 0x9f, 0x88, 0xb5, 0x76, 0x55, 0x2c, 0x70, + 0x68, 0x0d, 0xae, 0x7a, 0xdb, 0xc5, 0x6a, 0x79, 0xaf, 0x2a, 0x4a, 0xcd, 0x46, 0xfd, 0x69, 0x21, + 0x85, 0x78, 0x58, 0x7d, 0x7c, 0x54, 0x6f, 0xd7, 0xc2, 0x7d, 0x69, 0xb4, 0x0e, 0x45, 0x4f, 0x1f, + 0xe3, 0xc1, 
0xd8, 0x66, 0x2c, 0xb6, 0x9e, 0x5e, 0xfa, 0x27, 0xeb, 0xcc, 0x22, 0x01, 0xae, 0x79, + 0xe7, 0xf4, 0x8f, 0xcd, 0x91, 0x0b, 0xd1, 0xba, 0x33, 0xbd, 0x34, 0x3e, 0x0e, 0xd3, 0x84, 0xa4, + 0x32, 0xef, 0x68, 0x00, 0xd1, 0xf0, 0x4f, 0x60, 0xde, 0x77, 0x31, 0x58, 0x61, 0x1e, 0xf3, 0x64, + 0x5d, 0xe9, 0xf8, 0xdc, 0x24, 0xa1, 0x0f, 0xb7, 0x9d, 0x16, 0xe7, 0xed, 0xd6, 0x8a, 0xd5, 0x68, + 0x9d, 0x65, 0x4f, 0xe9, 0x2b, 0x26, 0xa3, 0x49, 0x11, 0x1a, 0x20, 0x4d, 0x84, 0x40, 0xf8, 0x59, + 0x0a, 0x72, 0x4c, 0x21, 0x6e, 0x79, 0xae, 0x26, 0x1f, 0x4b, 0xbb, 0x95, 0xb2, 0xf4, 0x59, 0x64, + 0xca, 0x6f, 0x91, 0xe8, 0x00, 0x16, 0xbc, 0xfe, 0xfb, 0x85, 0x1d, 0x5c, 0x6e, 0xf9, 0xcf, 0xd9, + 0xeb, 0x44, 0x5e, 0xb0, 0x90, 0x72, 0xfe, 0xcc, 0xdb, 0x86, 0x2a, 0xb0, 0x10, 0xb8, 0x02, 0x32, + 0xa3, 0xaf, 0x80, 0xf9, 0x8e, 0xcf, 0x1b, 0x96, 0x61, 0xc9, 0xf6, 0xde, 0x3d, 0x2c, 0x99, 0xcc, + 0xbb, 0xb3, 0x2b, 0xaa, 0x10, 0xf2, 0xfa, 0xc8, 0x25, 0xb6, 0xdb, 0xf8, 0x8f, 0x01, 0x85, 0x65, + 0x9d, 0xc8, 0x55, 0x0f, 0x61, 0x29, 0xe2, 0x5e, 0x41, 0x25, 0x98, 0x25, 0x47, 0x65, 0x28, 0x26, + 0x66, 0x61, 0x6b, 0x58, 0x22, 0x97, 0xc4, 0xa2, 0x1f, 0xe8, 0xf8, 0x04, 0xeb, 0x3a, 0xee, 0x12, + 0x9b, 0x8c, 0xa4, 0x77, 0x48, 0x84, 0xdf, 0xe4, 0x60, 0xc6, 0x6e, 0x47, 0x3b, 0x30, 0x63, 0xe0, + 0x53, 0x7a, 0xe7, 0xd1, 0xb9, 0x36, 0x82, 0x63, 0x4b, 0x2d, 0x46, 0xc0, 0x02, 0x7c, 0x9b, 0xde, + 0x0a, 0xf0, 0x7d, 0x5d, 0x13, 0x2d, 0xfe, 0x2f, 0x39, 0x58, 0xda, 0xc3, 0x3d, 0x1c, 0x0c, 0x8d, + 0x92, 0xdc, 0xba, 0x37, 0x9a, 0x48, 0xf9, 0xa3, 0x89, 0x08, 0x56, 0x09, 0xd1, 0xc4, 0x85, 0x6e, + 0xd8, 0x55, 0x58, 0xf6, 0xcf, 0x46, 0xef, 0x14, 0xe1, 0x3f, 0xd2, 0xb0, 0x61, 0xe9, 0x82, 0xae, + 0xf5, 0x7a, 0x58, 0x3f, 0x1c, 0x1e, 0xf7, 0x14, 0xe3, 0xf9, 0x04, 0x8b, 0xbb, 0x0a, 0xd3, 0xaa, + 0xd6, 0xf5, 0x18, 0x4f, 0xce, 0xfa, 0xac, 0x75, 0x51, 0x15, 0x16, 0x83, 0xb1, 0xdd, 0x39, 0xf3, + 0xfc, 0xf1, 0x91, 0x5d, 0xe1, 0x2c, 0x78, 0x6d, 0xf1, 0x30, 0x63, 0x45, 0xa5, 0x9a, 0xda, 0x3b, + 0x27, 0x16, 0x33, 0x23, 0x3a, 0xdf, 0x48, 0x0c, 
0x86, 0x69, 0xdf, 0x70, 0xc2, 0xb4, 0xc4, 0x15, + 0x25, 0x45, 0x6c, 0x9f, 0x85, 0x2c, 0x3e, 0x47, 0x58, 0x7f, 0x30, 0x26, 0xeb, 0x91, 0x9e, 0xe0, + 0x22, 0xa7, 0x78, 0x09, 0xe6, 0xfb, 0x77, 0x1c, 0xdc, 0x88, 0x5d, 0x02, 0x8b, 0x33, 0xba, 0x70, + 0x65, 0x40, 0x3b, 0x9c, 0x4d, 0xa0, 0x56, 0xf6, 0xe1, 0xc8, 0x4d, 0x60, 0xd9, 0x35, 0x6b, 0xf5, + 0x6d, 0xc3, 0xc2, 0xc0, 0xd7, 0xc8, 0x97, 0x61, 0x29, 0x82, 0x6c, 0xa2, 0xc5, 0xfc, 0x9c, 0x83, + 0x4d, 0x57, 0x94, 0x23, 0x75, 0x70, 0x79, 0xea, 0xdb, 0x76, 0x75, 0x8b, 0xba, 0xfc, 0xf7, 0xc2, + 0x6b, 0x8f, 0x9e, 0xf0, 0x55, 0x59, 0xf0, 0x4d, 0xd8, 0x4a, 0x98, 0x9a, 0x99, 0xf3, 0xcf, 0x32, + 0xb0, 0xf5, 0x44, 0xee, 0x29, 0x5d, 0x27, 0x7a, 0x8c, 0xc0, 0x21, 0x92, 0xb7, 0xa4, 0x13, 0xb2, + 0x00, 0xea, 0xb5, 0x1e, 0x38, 0x56, 0x3b, 0x8a, 0xff, 0x18, 0xd7, 0xe1, 0x25, 0x66, 0x7e, 0x4f, + 0x23, 0x32, 0xbf, 0x0f, 0xc6, 0x97, 0x35, 0x29, 0x0f, 0x3c, 0x0a, 0x3a, 0x98, 0xf7, 0xc7, 0xe7, + 0x9b, 0xa0, 0x05, 0x17, 0xb6, 0xe2, 0xaf, 0x33, 0x55, 0xfb, 0x9b, 0x0c, 0x08, 0x49, 0xab, 0x67, + 0x3e, 0x44, 0x84, 0xd9, 0x8e, 0xa6, 0x9e, 0x28, 0x7a, 0x1f, 0x77, 0x59, 0xca, 0xf1, 0xee, 0x38, + 0x9b, 0xc7, 0x1c, 0xc8, 0xae, 0x3d, 0x56, 0x74, 0xd9, 0xa0, 0x22, 0x4c, 0xf7, 0xb1, 0x61, 0xc8, + 0xa7, 0xb6, 0x58, 0xf6, 0x27, 0xff, 0x93, 0x34, 0xcc, 0x3a, 0x43, 0x90, 0x1a, 0xd2, 0x60, 0xea, + 0xbe, 0xf6, 0x5f, 0x46, 0x80, 0x97, 0x57, 0xe6, 0xd4, 0x4b, 0x28, 0x73, 0xd7, 0xa7, 0xcc, 0xd4, + 0x1c, 0xf6, 0x5e, 0x4a, 0xec, 0x04, 0xbd, 0xfe, 0xda, 0x15, 0x50, 0xf8, 0x65, 0x40, 0x75, 0xc5, + 0x60, 0xa9, 0x9b, 0xe3, 0x96, 0xac, 0x4c, 0x4d, 0x7e, 0x21, 0x61, 0xd5, 0xd4, 0x15, 0x16, 0xae, + 0x67, 0x45, 0xe8, 0xcb, 0x2f, 0xaa, 0xb4, 0xc5, 0x0a, 0xe9, 0x0d, 0x53, 0xd6, 0x4d, 0x45, 0x3d, + 0x95, 0x4c, 0xed, 0x73, 0xec, 0x80, 0xc1, 0x76, 0x6b, 0xdb, 0x6a, 0x14, 0xfe, 0x3d, 0x05, 0x4b, + 0x3e, 0xf6, 0x4c, 0x27, 0x3f, 0x84, 0x69, 0x97, 0xb7, 0x2f, 0x8c, 0x8f, 0xa0, 0x2e, 0xd1, 0x6d, + 0xb3, 0x47, 0xa0, 0xeb, 0x00, 0x2a, 0x7e, 0x61, 0xfa, 0xe6, 0x9d, 0xb5, 0x5a, 0xc8, 
0x9c, 0xfc, + 0x6f, 0x71, 0x4e, 0xa6, 0x6f, 0xca, 0xe6, 0x90, 0x64, 0x95, 0xcc, 0x45, 0xe3, 0xae, 0xc4, 0xee, + 0x18, 0x3a, 0xef, 0xac, 0x58, 0x70, 0x7a, 0x1a, 0xe4, 0xb6, 0x31, 0xd0, 0xbe, 0x83, 0xb3, 0x76, + 0x34, 0xb5, 0xab, 0x98, 0x2e, 0xce, 0x7a, 0x35, 0x94, 0x20, 0xd0, 0xee, 0x8a, 0x95, 0x57, 0xd9, + 0xc8, 0xaa, 0xd3, 0xca, 0x7f, 0x01, 0x59, 0x7a, 0x1c, 0x63, 0x82, 0x05, 0xe8, 0x63, 0xc8, 0x19, + 0x44, 0xe2, 0x20, 0x30, 0x12, 0xb5, 0x27, 0xde, 0x15, 0x8a, 0x6c, 0x9c, 0xf0, 0x2d, 0xe0, 0xdd, + 0x8b, 0x69, 0x1f, 0x9b, 0xe3, 0x5f, 0xbf, 0x3b, 0xd6, 0x1a, 0x84, 0x3f, 0x4c, 0xc1, 0x5a, 0x24, + 0x83, 0xc9, 0x60, 0x0f, 0x74, 0x10, 0x58, 0xc9, 0xdb, 0xe1, 0x1b, 0x3b, 0xc4, 0x3c, 0x72, 0x45, + 0xfc, 0xaf, 0x5f, 0xec, 0x30, 0x2b, 0x13, 0x1f, 0x66, 0xe8, 0x1c, 0xe9, 0xce, 0xfc, 0x24, 0x05, + 0x68, 0x1f, 0x9b, 0x4e, 0xaa, 0xcc, 0xb6, 0x34, 0xc6, 0xdf, 0x70, 0x2f, 0xe1, 0x6f, 0xbe, 0xe3, + 0xf3, 0x37, 0xd4, 0x63, 0xdd, 0xf6, 0x54, 0x4e, 0x02, 0x53, 0x27, 0xde, 0x96, 0x31, 0xe9, 0x29, + 0x8d, 0xf9, 0xc7, 0x4b, 0x4f, 0x2f, 0xe8, 0x56, 0xfe, 0x95, 0x83, 0x25, 0x9f, 0xd0, 0x4c, 0x83, + 0xee, 0x02, 0x92, 0xcf, 0x64, 0xa5, 0x27, 0x5b, 0x82, 0xd9, 0xe9, 0x3f, 0x83, 0x03, 0x16, 0x9d, + 0x1e, 0x7b, 0x18, 0x7a, 0x04, 0x4b, 0x7d, 0xf9, 0x85, 0xd2, 0x1f, 0xf6, 0x25, 0xb6, 0xcf, 0x86, + 0xf2, 0x7d, 0x1b, 0x38, 0x5c, 0x0b, 0x01, 0xe8, 0x35, 0xd5, 0x7c, 0xff, 0x5d, 0x8a, 0xa0, 0x2f, + 0xb2, 0x71, 0x4c, 0x79, 0x94, 0xef, 0x63, 0x74, 0x08, 0x4b, 0x7d, 0x45, 0x0d, 0x31, 0x4b, 0x8f, + 0x64, 0x46, 0x0d, 0x7c, 0x91, 0x0d, 0x76, 0x39, 0x0a, 0x82, 0x37, 0xe8, 0x65, 0xcb, 0x0d, 0x56, + 0x9a, 0x7a, 0xde, 0x60, 0x31, 0x44, 0xc3, 0xb6, 0x65, 0x3f, 0xb2, 0xda, 0x74, 0x33, 0x6c, 0x36, + 0xac, 0xf4, 0x12, 0x5b, 0x78, 0xfa, 0x9f, 0xb4, 0xd7, 0x82, 0x43, 0xd4, 0xe8, 0x43, 0x48, 0xeb, + 0x83, 0x0e, 0x33, 0xdf, 0x37, 0xc6, 0xe0, 0x5f, 0x12, 0x0f, 0x77, 0x0f, 0xa6, 0x44, 0x6b, 0x14, + 0xff, 0x47, 0x69, 0x48, 0x8b, 0x87, 0xbb, 0xe8, 0x63, 0x5f, 0x89, 0xe5, 0xce, 0x98, 0x5c, 0xbc, + 0x15, 0x96, 0x7f, 0x48, 
0x45, 0x95, 0x58, 0x8a, 0xb0, 0xbc, 0x2b, 0x56, 0xcb, 0xed, 0xaa, 0xb4, + 0x57, 0xad, 0x57, 0xdb, 0x55, 0x89, 0x56, 0x89, 0x0a, 0x1c, 0x5a, 0x87, 0xe2, 0xe1, 0x51, 0xa5, + 0x5e, 0x6b, 0x1d, 0x48, 0x47, 0x0d, 0xfb, 0x2f, 0xd6, 0x9b, 0x42, 0x05, 0x98, 0xab, 0xd7, 0x5a, + 0x6d, 0xd6, 0xd0, 0x2a, 0xa4, 0xad, 0x96, 0xfd, 0x6a, 0x5b, 0xda, 0x2d, 0x1f, 0x96, 0x77, 0x6b, + 0xed, 0xa7, 0x85, 0x0c, 0xe2, 0x61, 0xd5, 0xcf, 0xbb, 0xd5, 0x28, 0x1f, 0xb6, 0x0e, 0x9a, 0xed, + 0x42, 0x16, 0x21, 0x58, 0x20, 0xe3, 0xed, 0xa6, 0x56, 0x21, 0x67, 0x71, 0xd8, 0xad, 0x37, 0x1b, + 0x8e, 0x0c, 0xd3, 0x68, 0x19, 0x0a, 0xf6, 0xcc, 0x62, 0xb5, 0xbc, 0x47, 0x00, 0xbd, 0x19, 0xb4, + 0x08, 0xf3, 0xd5, 0xef, 0x1d, 0x96, 0x1b, 0x7b, 0x36, 0xe1, 0x2c, 0xda, 0x84, 0x75, 0xaf, 0x38, + 0x12, 0x1b, 0x55, 0xdd, 0x23, 0xa0, 0x5c, 0xab, 0x00, 0xe8, 0x1a, 0x14, 0x58, 0x01, 0x6c, 0xb7, + 0xd9, 0xd8, 0xab, 0xb5, 0x6b, 0xcd, 0x46, 0x21, 0x4f, 0x11, 0xbc, 0x25, 0x00, 0x4b, 0x72, 0xc6, + 0x6c, 0x6e, 0x34, 0xac, 0x37, 0x4f, 0x61, 0x3d, 0x1b, 0xb1, 0xfe, 0x79, 0x0a, 0x56, 0x28, 0x64, + 0x6d, 0x03, 0xe4, 0xb6, 0xaf, 0xda, 0x86, 0x02, 0xc5, 0xbb, 0xa4, 0xe0, 0x2d, 0xb0, 0x40, 0xdb, + 0x9f, 0xd8, 0x79, 0x87, 0x5d, 0x5e, 0x4a, 0x79, 0xca, 0x4b, 0xb5, 0x60, 0x16, 0x76, 0xdb, 0x5f, + 0x88, 0x09, 0xcc, 0x96, 0x94, 0xd8, 0x3f, 0x8e, 0x48, 0x13, 0xee, 0x26, 0x73, 0x4b, 0x0a, 0xa1, + 0x2e, 0x92, 0xc5, 0x5f, 0xd0, 0xcb, 0x3d, 0x84, 0xd5, 0xa0, 0xbc, 0xcc, 0xa0, 0xef, 0x84, 0xca, + 0x25, 0x8e, 0xdb, 0x75, 0x68, 0x1d, 0x0a, 0xe1, 0x87, 0x29, 0x98, 0xb1, 0x9b, 0xad, 0xf0, 0xc6, + 0xf2, 0x4b, 0x3e, 0xa4, 0x74, 0xd6, 0x6a, 0x71, 0x80, 0x57, 0x6f, 0xa1, 0x23, 0x15, 0x2c, 0x74, + 0x44, 0x9e, 0x73, 0x3a, 0xf2, 0x9c, 0xbf, 0x0d, 0xf3, 0x1d, 0x4b, 0x7c, 0x45, 0x53, 0x25, 0x53, + 0xe9, 0xdb, 0x40, 0x68, 0xb8, 0x30, 0xd9, 0xb6, 0x1f, 0x1c, 0x88, 0x73, 0xf6, 0x00, 0xab, 0x09, + 0x6d, 0xc2, 0x1c, 0x29, 0x54, 0x4a, 0xa6, 0x26, 0x0d, 0x0d, 0x5c, 0xcc, 0x12, 0x58, 0x08, 0x48, + 0x5b, 0x5b, 0x3b, 0x32, 0x30, 0xba, 0x07, 0x8b, 0x04, 0xc4, 
0x97, 0xbc, 0x32, 0xe7, 0x2c, 0x69, + 0x58, 0xd4, 0x44, 0x7a, 0x5b, 0x8e, 0xf4, 0xc2, 0x5f, 0x73, 0xb0, 0x42, 0xe1, 0xb1, 0xa0, 0xfe, + 0x8e, 0xaa, 0xf0, 0x78, 0x55, 0x34, 0x70, 0x7d, 0x46, 0x32, 0x7c, 0x55, 0xe8, 0x40, 0x11, 0x56, + 0x83, 0xf3, 0x31, 0x48, 0xe0, 0xa7, 0x29, 0x58, 0xb6, 0x62, 0x39, 0xbb, 0xe3, 0xb2, 0xc3, 0xed, + 0x09, 0x8e, 0x3e, 0xb0, 0x99, 0x99, 0xd0, 0x66, 0x1e, 0x04, 0x13, 0xee, 0x37, 0xbd, 0xd1, 0x68, + 0x70, 0x05, 0xaf, 0x6a, 0x2f, 0x7f, 0xcc, 0xc1, 0x4a, 0x60, 0x3e, 0x66, 0x60, 0x1f, 0x05, 0x33, + 0x88, 0x9b, 0x31, 0xf2, 0xbd, 0x54, 0x0e, 0xf1, 0x9e, 0x1d, 0xbb, 0x4f, 0x66, 0xc7, 0xff, 0x98, + 0x82, 0xeb, 0xee, 0x2d, 0x48, 0xde, 0x16, 0x74, 0x27, 0x80, 0xc0, 0x2e, 0x56, 0xc2, 0xff, 0x6e, + 0xd0, 0x43, 0xdf, 0x0f, 0x5f, 0xcc, 0x11, 0x22, 0x25, 0x79, 0xea, 0x48, 0xe4, 0x38, 0x33, 0x29, + 0x72, 0x7c, 0x21, 0x0d, 0xf8, 0x35, 0x2f, 0x28, 0xee, 0x17, 0x9f, 0x69, 0xc2, 0x98, 0xd5, 0xa5, + 0xf7, 0xe1, 0x2a, 0x49, 0x17, 0x9c, 0xd7, 0x33, 0x76, 0xc1, 0x9e, 0xfa, 0xd0, 0x19, 0x71, 0xc5, + 0xea, 0x76, 0xde, 0x83, 0xb0, 0x8a, 0x4a, 0x57, 0xf8, 0x2a, 0x03, 0xab, 0x56, 0x3a, 0xd1, 0x32, + 0xe5, 0xd3, 0x49, 0x6a, 0x0d, 0xbf, 0x14, 0x86, 0x6e, 0x53, 0xfe, 0x63, 0x89, 0xe6, 0x3a, 0x0e, + 0x62, 0x8b, 0x4a, 0xb0, 0x64, 0x98, 0xf2, 0x29, 0x71, 0x07, 0xb2, 0x7e, 0x8a, 0x4d, 0x69, 0x20, + 0x9b, 0xcf, 0x99, 0xad, 0x2f, 0xb2, 0xae, 0x36, 0xe9, 0x39, 0x94, 0xcd, 0xe7, 0x97, 0x74, 0x90, + 0xe8, 0x3b, 0x41, 0xa7, 0xf0, 0xd6, 0x88, 0xb5, 0x24, 0xe8, 0xd6, 0xf7, 0x62, 0xe0, 0xfd, 0x77, + 0x46, 0xb0, 0x1c, 0x0d, 0xeb, 0x5f, 0x1c, 0xce, 0xfe, 0x9a, 0x2b, 0x03, 0xd7, 0xe0, 0x6a, 0x68, + 0xf1, 0xec, 0x0a, 0x39, 0x85, 0xa2, 0xd5, 0x75, 0xa4, 0x1a, 0x13, 0xaa, 0x63, 0x8c, 0xc6, 0xa4, + 0x62, 0x34, 0x46, 0x58, 0x83, 0x6b, 0x11, 0x13, 0x31, 0x29, 0xfe, 0x22, 0x4b, 0xc5, 0x98, 0xbc, + 0x48, 0xf5, 0x69, 0x9c, 0x55, 0xbc, 0xeb, 0x3d, 0xf6, 0xc8, 0x7a, 0xce, 0xab, 0xb0, 0x8b, 0x1b, + 0x90, 0xf7, 0xd2, 0xb1, 0x6b, 0xd0, 0x1c, 0x61, 0x38, 0xd9, 0x0b, 0xd5, 0xce, 0x72, 0x81, 0xda, + 
0x59, 0xdd, 0x35, 0xaa, 0x69, 0x7f, 0x2c, 0x1c, 0xbb, 0x15, 0x09, 0x66, 0xf5, 0x2c, 0x64, 0x56, + 0x33, 0xfe, 0x82, 0x5c, 0x2c, 0xd3, 0x5f, 0x00, 0xc3, 0x62, 0x4a, 0x1d, 0x59, 0x29, 0x13, 0x9e, + 0x01, 0x4f, 0x35, 0x7e, 0xf2, 0xda, 0x55, 0x40, 0x8d, 0x52, 0x41, 0x35, 0x12, 0xae, 0xc3, 0x5a, + 0x24, 0x6f, 0x36, 0xf5, 0xef, 0x70, 0x54, 0x30, 0x07, 0x14, 0x6b, 0x99, 0xb2, 0x69, 0x8c, 0x3b, + 0x35, 0xeb, 0xf4, 0x4e, 0x4d, 0x9b, 0x88, 0x06, 0x4f, 0x68, 0x12, 0xc2, 0xef, 0x71, 0x74, 0x1f, + 0x82, 0xb2, 0xb0, 0xdb, 0xf6, 0x4d, 0xc8, 0x0e, 0x09, 0xee, 0x4f, 0xa3, 0xae, 0x25, 0xbf, 0x11, + 0x1c, 0x59, 0x5d, 0x22, 0xa5, 0xb8, 0x34, 0x24, 0x55, 0xf8, 0x29, 0x07, 0x79, 0x0f, 0x7f, 0xb4, + 0x0e, 0xb3, 0x0e, 0x54, 0x64, 0x27, 0x48, 0x4e, 0x83, 0x75, 0xfc, 0xa6, 0x66, 0xca, 0x3d, 0xf6, + 0x26, 0x85, 0x7e, 0x58, 0x39, 0xed, 0xd0, 0xc0, 0x34, 0x1c, 0x4e, 0x8b, 0xe4, 0x6f, 0x74, 0x07, + 0x32, 0x43, 0x55, 0x31, 0x89, 0xd9, 0x2f, 0x04, 0xed, 0x99, 0x4c, 0x55, 0x3a, 0x52, 0x15, 0x53, + 0x24, 0x54, 0xc2, 0x6d, 0xc8, 0x58, 0x5f, 0x7e, 0xc8, 0x62, 0x16, 0xb2, 0x95, 0xa7, 0xed, 0x6a, + 0xab, 0xc0, 0x21, 0x80, 0x5c, 0x8d, 0x26, 0xf8, 0x29, 0xa1, 0x6e, 0xbf, 0x4b, 0x75, 0x16, 0x61, + 0xb9, 0x00, 0xf9, 0x58, 0xd5, 0xf4, 0xbe, 0xdc, 0x23, 0x32, 0xcf, 0x88, 0xce, 0x77, 0x7c, 0x39, + 0x85, 0x82, 0x8f, 0xeb, 0xce, 0x89, 0x44, 0x01, 0x4c, 0x9f, 0x51, 0xdd, 0x8a, 0x83, 0x96, 0xca, + 0x91, 0xd0, 0xd2, 0x75, 0xdf, 0x2d, 0x3b, 0x02, 0x54, 0xfa, 0xdb, 0x14, 0xac, 0x44, 0xd2, 0xa1, + 0xf7, 0xbc, 0x70, 0xd2, 0x56, 0x22, 0x4f, 0x2f, 0x90, 0xf4, 0x15, 0x47, 0x81, 0xa4, 0x1d, 0x1f, + 0x90, 0xf4, 0xfa, 0xc8, 0xf1, 0x5e, 0x08, 0xe9, 0xc7, 0x5c, 0x0c, 0x84, 0xd4, 0x6a, 0x97, 0xf7, + 0xab, 0xd2, 0x51, 0x83, 0xfe, 0xeb, 0x40, 0x48, 0xcb, 0x50, 0x70, 0x81, 0x15, 0xa9, 0xd5, 0x2e, + 0x93, 0x47, 0xc6, 0x21, 0xf8, 0x26, 0x1d, 0x09, 0xce, 0x64, 0x46, 0xe3, 0x30, 0x59, 0x4a, 0xb2, + 0x0a, 0x88, 0x8d, 0x7e, 0xdc, 0x3c, 0x6a, 0xb4, 0x25, 0xf2, 0x84, 0xb9, 0x90, 0x73, 0xf0, 0x99, + 0x65, 0x40, 0xec, 0xb4, 0xbc, 0x2f, 
0xf1, 0xff, 0x84, 0x83, 0x25, 0x5f, 0x33, 0x3b, 0x3c, 0x4f, + 0x51, 0x9c, 0xf3, 0x15, 0xc5, 0xef, 0xc1, 0xb2, 0x95, 0x31, 0x52, 0x4b, 0x31, 0xa4, 0x01, 0xd6, + 0x09, 0x18, 0xce, 0x74, 0x7e, 0xb1, 0x2f, 0xbf, 0x60, 0x05, 0x83, 0x43, 0xac, 0x5b, 0x8c, 0x2f, + 0x01, 0x12, 0x16, 0xbe, 0x4c, 0xd3, 0xb8, 0x64, 0xe2, 0xbc, 0x66, 0xa4, 0x8f, 0x0a, 0x27, 0x3e, + 0xe9, 0x09, 0x12, 0x9f, 0x18, 0x0f, 0x97, 0x99, 0x28, 0x18, 0x9e, 0xfc, 0x4e, 0x6f, 0xb8, 0xf7, + 0x36, 0x8d, 0x5c, 0xef, 0x78, 0xf5, 0x77, 0x64, 0xa6, 0x95, 0xfb, 0xb2, 0xc2, 0xfd, 0xe8, 0xb2, + 0xf2, 0xe4, 0x32, 0x8d, 0xc7, 0x2e, 0x90, 0x1f, 0x09, 0x77, 0xe0, 0x16, 0x79, 0x56, 0x39, 0x0a, + 0xd0, 0xa6, 0x2e, 0xe9, 0x57, 0xe1, 0xf5, 0x51, 0xd4, 0x6c, 0xfa, 0x7a, 0xa4, 0xff, 0x71, 0x6a, + 0x5b, 0x01, 0x2e, 0x23, 0x5c, 0x11, 0x9d, 0xfc, 0xb7, 0x53, 0xb0, 0x39, 0x6a, 0x1c, 0xfa, 0xd8, + 0xeb, 0x9a, 0xee, 0x8c, 0x3b, 0x9d, 0xd7, 0x4b, 0xfd, 0x01, 0xf3, 0x52, 0x55, 0x9f, 0x97, 0x7a, + 0x67, 0x12, 0x56, 0x5e, 0x87, 0x55, 0x8d, 0xf2, 0x57, 0x6f, 0xc3, 0x1b, 0x7e, 0x58, 0xda, 0xe3, + 0xa3, 0xe8, 0xaf, 0x1f, 0x1c, 0x9c, 0x9a, 0x23, 0x0e, 0x66, 0xc7, 0x87, 0xf6, 0xfe, 0x7e, 0x1a, + 0x36, 0xbd, 0x0f, 0x94, 0xf7, 0xbd, 0x68, 0x5a, 0xd2, 0xaf, 0x05, 0x6e, 0xc3, 0x62, 0x10, 0x29, + 0xb2, 0x1f, 0xe4, 0x5e, 0xf1, 0x43, 0x45, 0x46, 0xd2, 0x03, 0x9c, 0x11, 0x53, 0x27, 0xe7, 0x7f, + 0x61, 0x14, 0xf8, 0x9b, 0x63, 0x33, 0xfe, 0xff, 0x09, 0x08, 0x53, 0xf5, 0xec, 0xc1, 0x56, 0x82, + 0xfc, 0xcc, 0x2c, 0x2a, 0xb0, 0xe0, 0x07, 0x46, 0x99, 0xa6, 0x06, 0x5e, 0xa1, 0xfa, 0x07, 0xcf, + 0xfb, 0xd0, 0x52, 0x3a, 0xdb, 0x3f, 0x73, 0xf6, 0x83, 0x7d, 0x1f, 0xad, 0x75, 0xc2, 0x61, 0xe4, + 0x95, 0x2e, 0x22, 0x08, 0xba, 0xa2, 0x12, 0xcc, 0xda, 0x54, 0x46, 0xf0, 0x09, 0xa8, 0x33, 0xb9, + 0x4b, 0x12, 0x06, 0x8e, 0xd3, 0x17, 0x04, 0x8e, 0x33, 0x41, 0xe0, 0x98, 0xae, 0xed, 0x87, 0x29, + 0xd8, 0xf4, 0xbe, 0x95, 0x8c, 0x54, 0xef, 0x49, 0x16, 0xba, 0x05, 0x73, 0x1e, 0x2a, 0x5b, 0xe3, + 0xf3, 0x2e, 0xee, 0x99, 0xa4, 0xed, 0xa3, 0x24, 0x79, 0x45, 0x20, 0x28, 
0xdd, 0x8a, 0x6d, 0xd8, + 0x4a, 0x98, 0x9f, 0x2a, 0x15, 0xa5, 0xfc, 0x41, 0x8a, 0xfc, 0xb6, 0xed, 0xff, 0x6e, 0xc7, 0xe2, + 0x81, 0xc7, 0x44, 0x31, 0x5e, 0xe9, 0x76, 0x29, 0xb0, 0x11, 0x37, 0xf9, 0x25, 0x1b, 0xe0, 0xfd, + 0xff, 0xe6, 0x60, 0xa6, 0xd6, 0xc5, 0xaa, 0x49, 0x83, 0x82, 0x79, 0xdf, 0xaf, 0x1c, 0xd1, 0x7a, + 0xcc, 0x8f, 0x1f, 0xc9, 0x16, 0xf0, 0xd7, 0x13, 0x7f, 0x1a, 0x29, 0x4c, 0xa1, 0x13, 0xcf, 0x2f, + 0x34, 0x7d, 0xcf, 0x05, 0x5e, 0x0b, 0x8d, 0x8c, 0xb8, 0xab, 0xf9, 0x5b, 0x23, 0xa8, 0x9c, 0x79, + 0xde, 0x87, 0x2c, 0xf9, 0xb1, 0x1a, 0x5a, 0x76, 0x7e, 0x30, 0xe7, 0xf9, 0x2d, 0x1b, 0xbf, 0x12, + 0x68, 0xb5, 0xc7, 0xdd, 0xff, 0xfb, 0x59, 0x00, 0xf7, 0x0e, 0x44, 0x8f, 0x60, 0xce, 0xeb, 0xfa, + 0xd0, 0x5a, 0xc2, 0xaf, 0xb5, 0xf8, 0xf5, 0xe8, 0x4e, 0x47, 0xa6, 0x47, 0x30, 0xe7, 0x55, 0x79, + 0x97, 0x59, 0xc4, 0x63, 0x6d, 0x97, 0x59, 0xe4, 0xdb, 0xea, 0x29, 0xd4, 0x83, 0xab, 0x31, 0x4f, + 0x65, 0xd1, 0xeb, 0xe3, 0x3d, 0x28, 0xe6, 0xdf, 0x18, 0xf3, 0xcd, 0xad, 0x30, 0x85, 0x74, 0xb8, + 0x16, 0xfb, 0x42, 0x14, 0x6d, 0x8f, 0xfb, 0x7e, 0x95, 0x7f, 0x73, 0x0c, 0x4a, 0x67, 0xce, 0x21, + 0xf0, 0xf1, 0xcf, 0xd2, 0xd0, 0x9b, 0x63, 0xbf, 0x97, 0xe4, 0x6f, 0x8f, 0xff, 0xca, 0x4d, 0x98, + 0x42, 0x07, 0x90, 0xf7, 0xbc, 0x4f, 0x42, 0x7c, 0xe4, 0xa3, 0x25, 0xca, 0x78, 0x2d, 0xe1, 0x41, + 0x13, 0xe5, 0xe4, 0x79, 0x32, 0xe2, 0x72, 0x0a, 0x3f, 0x7e, 0x71, 0x39, 0x45, 0xbc, 0x31, 0x09, + 0x6e, 0x7f, 0x20, 0x30, 0x8d, 0xda, 0xfe, 0xe8, 0x48, 0x37, 0x6a, 0xfb, 0x63, 0xa2, 0x5c, 0x61, + 0x0a, 0x7d, 0x17, 0x16, 0xfc, 0xb5, 0x60, 0x74, 0x3d, 0xb1, 0xa6, 0xcd, 0x6f, 0xc4, 0x75, 0x7b, + 0x59, 0xfa, 0x2b, 0x89, 0x2e, 0xcb, 0xc8, 0x8a, 0xa6, 0xcb, 0x32, 0xa6, 0x00, 0x39, 0x65, 0xf9, + 0x27, 0x5f, 0x7d, 0xcc, 0xf5, 0x4f, 0x51, 0x65, 0x3d, 0xd7, 0x3f, 0x45, 0x16, 0xd5, 0x84, 0x29, + 0xa4, 0xc0, 0x6a, 0x74, 0x79, 0x06, 0xdd, 0x1a, 0xab, 0xfa, 0xc4, 0xbf, 0x3e, 0x8a, 0xcc, 0x99, + 0xaa, 0x03, 0x4b, 0x11, 0xcf, 0xc7, 0x90, 0x90, 0xf8, 0xb6, 0x8c, 0x4e, 0x72, 0x73, 0x8c, 0xf7, + 0x67, 0x02, 
0x71, 0xe6, 0xff, 0x95, 0x86, 0x2b, 0x81, 0xc0, 0x1e, 0xfd, 0x06, 0x07, 0x1b, 0xc9, + 0xc9, 0x0e, 0xba, 0x1b, 0x93, 0x14, 0xc4, 0x28, 0x56, 0x69, 0x5c, 0x72, 0x8f, 0x71, 0x5f, 0x8b, + 0x8d, 0x29, 0xd1, 0xf6, 0xb8, 0x61, 0xb3, 0x47, 0xa3, 0x47, 0x05, 0xa8, 0x64, 0x3b, 0xac, 0x69, + 0x63, 0xa3, 0x0e, 0xb4, 0x3d, 0x6e, 0x60, 0xe4, 0x4e, 0x3b, 0x32, 0x84, 0xa1, 0xd3, 0xf6, 0x60, + 0x35, 0xfa, 0xf6, 0x46, 0xb7, 0xc6, 0x0a, 0x2d, 0x5c, 0xad, 0x4a, 0x0e, 0x02, 0xc8, 0x6c, 0x24, + 0xad, 0xba, 0xff, 0x2f, 0x59, 0xc8, 0x10, 0xa0, 0xa4, 0x0d, 0x57, 0x02, 0xc5, 0x17, 0xb4, 0x91, + 0x5c, 0x92, 0xe2, 0x6f, 0xc4, 0xf6, 0x3b, 0xe7, 0xf7, 0x0c, 0x16, 0x43, 0xe5, 0x14, 0xb4, 0xe9, + 0x1d, 0x17, 0x55, 0xd2, 0xe1, 0xb7, 0x12, 0x28, 0x82, 0xbc, 0xfd, 0x97, 0xda, 0xe6, 0x28, 0xbc, + 0xdf, 0xcf, 0x3b, 0xee, 0x22, 0xfb, 0x8c, 0xe2, 0x52, 0xc1, 0x2b, 0x4c, 0xf0, 0xcb, 0x15, 0x79, + 0x79, 0xdd, 0x4c, 0xa4, 0x71, 0x66, 0xf8, 0xd4, 0x01, 0xc4, 0x3c, 0x70, 0x33, 0xf2, 0x09, 0x17, + 0x09, 0x8b, 0xf3, 0x42, 0x12, 0x89, 0xc3, 0xfe, 0x13, 0x28, 0x04, 0x91, 0x11, 0x74, 0x63, 0x04, + 0x50, 0xc3, 0x6f, 0xc6, 0x13, 0x04, 0x77, 0x26, 0xe8, 0x09, 0x82, 0x52, 0x45, 0x99, 0xff, 0xcd, + 0x44, 0x1a, 0xef, 0x7d, 0xe8, 0xc1, 0x04, 0xdd, 0xfb, 0x30, 0x8c, 0x1f, 0xba, 0xf7, 0x61, 0x04, + 0x88, 0x28, 0x4c, 0xed, 0x3c, 0x00, 0x90, 0x7b, 0x83, 0xe7, 0xb2, 0x84, 0xd5, 0x61, 0x1f, 0xad, + 0x87, 0xd2, 0xb4, 0xaa, 0x3a, 0xec, 0x37, 0x07, 0x56, 0x76, 0x66, 0x14, 0xff, 0x6c, 0x86, 0xe4, + 0x62, 0xb3, 0x64, 0x80, 0xd5, 0xb1, 0x53, 0x87, 0x82, 0x3b, 0x5a, 0x22, 0x81, 0x36, 0xda, 0x8a, + 0xe4, 0x41, 0x5e, 0x4b, 0x06, 0x18, 0x2d, 0x38, 0x8c, 0x48, 0xef, 0xce, 0x47, 0x00, 0x1d, 0x43, + 0x91, 0x68, 0xa4, 0x8f, 0xae, 0x87, 0xf8, 0x3c, 0x54, 0x70, 0xaf, 0x6b, 0xf3, 0xf8, 0x53, 0x26, + 0x4c, 0xc7, 0x50, 0x68, 0x3e, 0xb0, 0xf3, 0x6d, 0xc8, 0x53, 0x61, 0x4e, 0x2c, 0xba, 0x51, 0xe3, + 0x99, 0x0c, 0x74, 0xf5, 0xa4, 0x67, 0xa7, 0x0a, 0xf3, 0x94, 0x01, 0x83, 0xd8, 0xd1, 0x8d, 0x10, + 0x8b, 0xc7, 0xb4, 0x27, 0xc0, 0x64, 0x8e, 0x0c, 
0x63, 0x7d, 0x3b, 0x15, 0x98, 0xb3, 0xd9, 0x98, + 0xcf, 0xb5, 0x2e, 0xda, 0x88, 0xe0, 0x62, 0x75, 0x04, 0x98, 0xe4, 0x19, 0x13, 0xab, 0xcb, 0x15, + 0xc5, 0xfe, 0x3f, 0x3e, 0xc2, 0xa2, 0x30, 0x54, 0x29, 0x52, 0x14, 0xd6, 0x57, 0xc9, 0x3e, 0x4b, + 0x77, 0x0c, 0xe5, 0x38, 0x47, 0x06, 0x7d, 0xe3, 0x7f, 0x03, 0x00, 0x00, 0xff, 0xff, 0x0c, 0x79, + 0xd8, 0xd8, 0x90, 0x46, 0x00, 0x00, } // Reference imports to suppress errors if they are not otherwise used. @@ -5957,6 +6699,186 @@ var _Controller_serviceDesc = grpc.ServiceDesc{ Metadata: "github.com/container-storage-interface/spec/csi.proto", } +// GroupControllerClient is the client API for GroupController service. +// +// For semantics around ctx use and closing/ending streaming RPCs, please refer to https://godoc.org/google.golang.org/grpc#ClientConn.NewStream. +type GroupControllerClient interface { + GroupControllerGetCapabilities(ctx context.Context, in *GroupControllerGetCapabilitiesRequest, opts ...grpc.CallOption) (*GroupControllerGetCapabilitiesResponse, error) + CreateVolumeGroupSnapshot(ctx context.Context, in *CreateVolumeGroupSnapshotRequest, opts ...grpc.CallOption) (*CreateVolumeGroupSnapshotResponse, error) + DeleteVolumeGroupSnapshot(ctx context.Context, in *DeleteVolumeGroupSnapshotRequest, opts ...grpc.CallOption) (*DeleteVolumeGroupSnapshotResponse, error) + GetVolumeGroupSnapshot(ctx context.Context, in *GetVolumeGroupSnapshotRequest, opts ...grpc.CallOption) (*GetVolumeGroupSnapshotResponse, error) +} + +type groupControllerClient struct { + cc *grpc.ClientConn +} + +func NewGroupControllerClient(cc *grpc.ClientConn) GroupControllerClient { + return &groupControllerClient{cc} +} + +func (c *groupControllerClient) GroupControllerGetCapabilities(ctx context.Context, in *GroupControllerGetCapabilitiesRequest, opts ...grpc.CallOption) (*GroupControllerGetCapabilitiesResponse, error) { + out := new(GroupControllerGetCapabilitiesResponse) + err := c.cc.Invoke(ctx, 
"/csi.v1.GroupController/GroupControllerGetCapabilities", in, out, opts...) + if err != nil { + return nil, err + } + return out, nil +} + +func (c *groupControllerClient) CreateVolumeGroupSnapshot(ctx context.Context, in *CreateVolumeGroupSnapshotRequest, opts ...grpc.CallOption) (*CreateVolumeGroupSnapshotResponse, error) { + out := new(CreateVolumeGroupSnapshotResponse) + err := c.cc.Invoke(ctx, "/csi.v1.GroupController/CreateVolumeGroupSnapshot", in, out, opts...) + if err != nil { + return nil, err + } + return out, nil +} + +func (c *groupControllerClient) DeleteVolumeGroupSnapshot(ctx context.Context, in *DeleteVolumeGroupSnapshotRequest, opts ...grpc.CallOption) (*DeleteVolumeGroupSnapshotResponse, error) { + out := new(DeleteVolumeGroupSnapshotResponse) + err := c.cc.Invoke(ctx, "/csi.v1.GroupController/DeleteVolumeGroupSnapshot", in, out, opts...) + if err != nil { + return nil, err + } + return out, nil +} + +func (c *groupControllerClient) GetVolumeGroupSnapshot(ctx context.Context, in *GetVolumeGroupSnapshotRequest, opts ...grpc.CallOption) (*GetVolumeGroupSnapshotResponse, error) { + out := new(GetVolumeGroupSnapshotResponse) + err := c.cc.Invoke(ctx, "/csi.v1.GroupController/GetVolumeGroupSnapshot", in, out, opts...) + if err != nil { + return nil, err + } + return out, nil +} + +// GroupControllerServer is the server API for GroupController service. 
+type GroupControllerServer interface { + GroupControllerGetCapabilities(context.Context, *GroupControllerGetCapabilitiesRequest) (*GroupControllerGetCapabilitiesResponse, error) + CreateVolumeGroupSnapshot(context.Context, *CreateVolumeGroupSnapshotRequest) (*CreateVolumeGroupSnapshotResponse, error) + DeleteVolumeGroupSnapshot(context.Context, *DeleteVolumeGroupSnapshotRequest) (*DeleteVolumeGroupSnapshotResponse, error) + GetVolumeGroupSnapshot(context.Context, *GetVolumeGroupSnapshotRequest) (*GetVolumeGroupSnapshotResponse, error) +} + +// UnimplementedGroupControllerServer can be embedded to have forward compatible implementations. +type UnimplementedGroupControllerServer struct { +} + +func (*UnimplementedGroupControllerServer) GroupControllerGetCapabilities(ctx context.Context, req *GroupControllerGetCapabilitiesRequest) (*GroupControllerGetCapabilitiesResponse, error) { + return nil, status.Errorf(codes.Unimplemented, "method GroupControllerGetCapabilities not implemented") +} +func (*UnimplementedGroupControllerServer) CreateVolumeGroupSnapshot(ctx context.Context, req *CreateVolumeGroupSnapshotRequest) (*CreateVolumeGroupSnapshotResponse, error) { + return nil, status.Errorf(codes.Unimplemented, "method CreateVolumeGroupSnapshot not implemented") +} +func (*UnimplementedGroupControllerServer) DeleteVolumeGroupSnapshot(ctx context.Context, req *DeleteVolumeGroupSnapshotRequest) (*DeleteVolumeGroupSnapshotResponse, error) { + return nil, status.Errorf(codes.Unimplemented, "method DeleteVolumeGroupSnapshot not implemented") +} +func (*UnimplementedGroupControllerServer) GetVolumeGroupSnapshot(ctx context.Context, req *GetVolumeGroupSnapshotRequest) (*GetVolumeGroupSnapshotResponse, error) { + return nil, status.Errorf(codes.Unimplemented, "method GetVolumeGroupSnapshot not implemented") +} + +func RegisterGroupControllerServer(s *grpc.Server, srv GroupControllerServer) { + s.RegisterService(&_GroupController_serviceDesc, srv) +} + +func 
_GroupController_GroupControllerGetCapabilities_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { + in := new(GroupControllerGetCapabilitiesRequest) + if err := dec(in); err != nil { + return nil, err + } + if interceptor == nil { + return srv.(GroupControllerServer).GroupControllerGetCapabilities(ctx, in) + } + info := &grpc.UnaryServerInfo{ + Server: srv, + FullMethod: "/csi.v1.GroupController/GroupControllerGetCapabilities", + } + handler := func(ctx context.Context, req interface{}) (interface{}, error) { + return srv.(GroupControllerServer).GroupControllerGetCapabilities(ctx, req.(*GroupControllerGetCapabilitiesRequest)) + } + return interceptor(ctx, in, info, handler) +} + +func _GroupController_CreateVolumeGroupSnapshot_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { + in := new(CreateVolumeGroupSnapshotRequest) + if err := dec(in); err != nil { + return nil, err + } + if interceptor == nil { + return srv.(GroupControllerServer).CreateVolumeGroupSnapshot(ctx, in) + } + info := &grpc.UnaryServerInfo{ + Server: srv, + FullMethod: "/csi.v1.GroupController/CreateVolumeGroupSnapshot", + } + handler := func(ctx context.Context, req interface{}) (interface{}, error) { + return srv.(GroupControllerServer).CreateVolumeGroupSnapshot(ctx, req.(*CreateVolumeGroupSnapshotRequest)) + } + return interceptor(ctx, in, info, handler) +} + +func _GroupController_DeleteVolumeGroupSnapshot_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { + in := new(DeleteVolumeGroupSnapshotRequest) + if err := dec(in); err != nil { + return nil, err + } + if interceptor == nil { + return srv.(GroupControllerServer).DeleteVolumeGroupSnapshot(ctx, in) + } + info := &grpc.UnaryServerInfo{ + Server: srv, + FullMethod: 
"/csi.v1.GroupController/DeleteVolumeGroupSnapshot", + } + handler := func(ctx context.Context, req interface{}) (interface{}, error) { + return srv.(GroupControllerServer).DeleteVolumeGroupSnapshot(ctx, req.(*DeleteVolumeGroupSnapshotRequest)) + } + return interceptor(ctx, in, info, handler) +} + +func _GroupController_GetVolumeGroupSnapshot_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { + in := new(GetVolumeGroupSnapshotRequest) + if err := dec(in); err != nil { + return nil, err + } + if interceptor == nil { + return srv.(GroupControllerServer).GetVolumeGroupSnapshot(ctx, in) + } + info := &grpc.UnaryServerInfo{ + Server: srv, + FullMethod: "/csi.v1.GroupController/GetVolumeGroupSnapshot", + } + handler := func(ctx context.Context, req interface{}) (interface{}, error) { + return srv.(GroupControllerServer).GetVolumeGroupSnapshot(ctx, req.(*GetVolumeGroupSnapshotRequest)) + } + return interceptor(ctx, in, info, handler) +} + +var _GroupController_serviceDesc = grpc.ServiceDesc{ + ServiceName: "csi.v1.GroupController", + HandlerType: (*GroupControllerServer)(nil), + Methods: []grpc.MethodDesc{ + { + MethodName: "GroupControllerGetCapabilities", + Handler: _GroupController_GroupControllerGetCapabilities_Handler, + }, + { + MethodName: "CreateVolumeGroupSnapshot", + Handler: _GroupController_CreateVolumeGroupSnapshot_Handler, + }, + { + MethodName: "DeleteVolumeGroupSnapshot", + Handler: _GroupController_DeleteVolumeGroupSnapshot_Handler, + }, + { + MethodName: "GetVolumeGroupSnapshot", + Handler: _GroupController_GetVolumeGroupSnapshot_Handler, + }, + }, + Streams: []grpc.StreamDesc{}, + Metadata: "github.com/container-storage-interface/spec/csi.proto", +} + // NodeClient is the client API for Node service. // // For semantics around ctx use and closing/ending streaming RPCs, please refer to https://godoc.org/google.golang.org/grpc#ClientConn.NewStream. 
diff --git a/cluster-autoscaler/vendor/github.com/containerd/cgroups/.gitignore b/cluster-autoscaler/vendor/github.com/containerd/cgroups/.gitignore new file mode 100644 index 000000000000..3465c14cf778 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/containerd/cgroups/.gitignore @@ -0,0 +1,2 @@ +example/example +cmd/cgctl/cgctl diff --git a/cluster-autoscaler/vendor/github.com/containerd/cgroups/Makefile b/cluster-autoscaler/vendor/github.com/containerd/cgroups/Makefile new file mode 100644 index 000000000000..19e6607561d5 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/containerd/cgroups/Makefile @@ -0,0 +1,24 @@ +# Copyright The containerd Authors. + +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at + +# http://www.apache.org/licenses/LICENSE-2.0 + +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +PACKAGES=$(shell go list ./... | grep -v /vendor/) + +all: cgutil + go build -v + +cgutil: + cd cmd/cgctl && go build -v + +proto: + protobuild --quiet ${PACKAGES} diff --git a/cluster-autoscaler/vendor/github.com/containerd/cgroups/Protobuild.toml b/cluster-autoscaler/vendor/github.com/containerd/cgroups/Protobuild.toml new file mode 100644 index 000000000000..1c4c802fe12b --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/containerd/cgroups/Protobuild.toml @@ -0,0 +1,46 @@ +version = "unstable" +generator = "gogoctrd" +plugins = ["grpc"] + +# Control protoc include paths. Below are usually some good defaults, but feel +# free to try it without them if it works for your project. +[includes] + # Include paths that will be added before all others. 
Typically, you want to + # treat the root of the project as an include, but this may not be necessary. + # before = ["."] + + # Paths that should be treated as include roots in relation to the vendor + # directory. These will be calculated with the vendor directory nearest the + # target package. + # vendored = ["github.com/gogo/protobuf"] + packages = ["github.com/gogo/protobuf"] + + # Paths that will be added untouched to the end of the includes. We use + # `/usr/local/include` to pick up the common install location of protobuf. + # This is the default. + after = ["/usr/local/include", "/usr/include"] + +# This section maps protobuf imports to Go packages. These will become +# `-M` directives in the call to the go protobuf generator. +[packages] + "gogoproto/gogo.proto" = "github.com/gogo/protobuf/gogoproto" + "google/protobuf/any.proto" = "github.com/gogo/protobuf/types" + "google/protobuf/descriptor.proto" = "github.com/gogo/protobuf/protoc-gen-gogo/descriptor" + "google/protobuf/field_mask.proto" = "github.com/gogo/protobuf/types" + "google/protobuf/timestamp.proto" = "github.com/gogo/protobuf/types" + +# Aggregate the API descriptors to lock down API changes. 
+[[descriptors]] +prefix = "github.com/containerd/cgroups/stats/v1" +target = "stats/v1/metrics.pb.txt" +ignore_files = [ + "google/protobuf/descriptor.proto", + "gogoproto/gogo.proto" +] +[[descriptors]] +prefix = "github.com/containerd/cgroups/v2/stats" +target = "v2/stats/metrics.pb.txt" +ignore_files = [ + "google/protobuf/descriptor.proto", + "gogoproto/gogo.proto" +] diff --git a/cluster-autoscaler/vendor/github.com/containerd/cgroups/README.md b/cluster-autoscaler/vendor/github.com/containerd/cgroups/README.md new file mode 100644 index 000000000000..d2073af3abc8 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/containerd/cgroups/README.md @@ -0,0 +1,204 @@ +# cgroups + +[![Build Status](https://github.com/containerd/cgroups/workflows/CI/badge.svg)](https://github.com/containerd/cgroups/actions?query=workflow%3ACI) +[![codecov](https://codecov.io/gh/containerd/cgroups/branch/main/graph/badge.svg)](https://codecov.io/gh/containerd/cgroups) +[![GoDoc](https://godoc.org/github.com/containerd/cgroups?status.svg)](https://godoc.org/github.com/containerd/cgroups) +[![Go Report Card](https://goreportcard.com/badge/github.com/containerd/cgroups)](https://goreportcard.com/report/github.com/containerd/cgroups) + +Go package for creating, managing, inspecting, and destroying cgroups. +The resources format for settings on the cgroup uses the OCI runtime-spec found +[here](https://github.com/opencontainers/runtime-spec). + +## Examples (v1) + +### Create a new cgroup + +This creates a new cgroup using a static path for all subsystems under `/test`. + +* /sys/fs/cgroup/cpu/test +* /sys/fs/cgroup/memory/test +* etc.... + +It uses a single hierarchy and specifies cpu shares as a resource constraint and +uses the v1 implementation of cgroups. 
+ + +```go +shares := uint64(100) +control, err := cgroups.New(cgroups.V1, cgroups.StaticPath("/test"), &specs.LinuxResources{ + CPU: &specs.LinuxCPU{ + Shares: &shares, + }, +}) +defer control.Delete() +``` + +### Create with systemd slice support + + +```go +control, err := cgroups.New(cgroups.Systemd, cgroups.Slice("system.slice", "runc-test"), &specs.LinuxResources{ + CPU: &specs.CPU{ + Shares: &shares, + }, +}) + +``` + +### Load an existing cgroup + +```go +control, err = cgroups.Load(cgroups.V1, cgroups.StaticPath("/test")) +``` + +### Add a process to the cgroup + +```go +if err := control.Add(cgroups.Process{Pid:1234}); err != nil { +} +``` + +### Update the cgroup + +To update the resources applied in the cgroup + +```go +shares = uint64(200) +if err := control.Update(&specs.LinuxResources{ + CPU: &specs.LinuxCPU{ + Shares: &shares, + }, +}); err != nil { +} +``` + +### Freeze and Thaw the cgroup + +```go +if err := control.Freeze(); err != nil { +} +if err := control.Thaw(); err != nil { +} +``` + +### List all processes in the cgroup or recursively + +```go +processes, err := control.Processes(cgroups.Devices, recursive) +``` + +### Get Stats on the cgroup + +```go +stats, err := control.Stat() +``` + +By adding `cgroups.IgnoreNotExist` all non-existent files will be ignored, e.g. swap memory stats without swap enabled +```go +stats, err := control.Stat(cgroups.IgnoreNotExist) +``` + +### Move process across cgroups + +This allows you to take processes from one cgroup and move them to another. + +```go +err := control.MoveTo(destination) +``` + +### Create subcgroup + +```go +subCgroup, err := control.New("child", resources) +``` + +### Registering for memory events + +This allows you to get notified by an eventfd for v1 memory cgroups events. 
+ +```go +event := cgroups.MemoryThresholdEvent(50 * 1024 * 1024, false) +efd, err := control.RegisterMemoryEvent(event) +``` + +```go +event := cgroups.MemoryPressureEvent(cgroups.MediumPressure, cgroups.DefaultMode) +efd, err := control.RegisterMemoryEvent(event) +``` + +```go +efd, err := control.OOMEventFD() +// or by using RegisterMemoryEvent +event := cgroups.OOMEvent() +efd, err := control.RegisterMemoryEvent(event) +``` + +## Examples (v2/unified) + +### Check that the current system is running cgroups v2 + +```go +var cgroupV2 bool +if cgroups.Mode() == cgroups.Unified { + cgroupV2 = true +} +``` + +### Create a new cgroup + +This creates a new systemd v2 cgroup slice. Systemd slices consider ["-" a special character](https://www.freedesktop.org/software/systemd/man/systemd.slice.html), +so the resulting slice would be located here on disk: + +* /sys/fs/cgroup/my.slice/my-cgroup.slice/my-cgroup-abc.slice + +```go +import ( + cgroupsv2 "github.com/containerd/cgroups/v2" + specs "github.com/opencontainers/runtime-spec/specs-go" +) + +res := cgroupsv2.Resources{} +// dummy PID of -1 is used for creating a "general slice" to be used as a parent cgroup. +// see https://github.com/containerd/cgroups/blob/1df78138f1e1e6ee593db155c6b369466f577651/v2/manager.go#L732-L735 +m, err := cgroupsv2.NewSystemd("/", "my-cgroup-abc.slice", -1, &res) +if err != nil { + return err +} +``` + +### Load an existing cgroup + +```go +m, err := cgroupsv2.LoadSystemd("/", "my-cgroup-abc.slice") +if err != nil { + return err +} +``` + +### Delete a cgroup + +```go +m, err := cgroupsv2.LoadSystemd("/", "my-cgroup-abc.slice") +if err != nil { + return err +} +err = m.DeleteSystemd() +if err != nil { + return err +} +``` + +### Attention + +All static path should not include `/sys/fs/cgroup/` prefix, it should start with your own cgroups name + +## Project details + +Cgroups is a containerd sub-project, licensed under the [Apache 2.0 license](./LICENSE). 
+As a containerd sub-project, you will find the: + + * [Project governance](https://github.com/containerd/project/blob/main/GOVERNANCE.md), + * [Maintainers](https://github.com/containerd/project/blob/main/MAINTAINERS), + * and [Contributing guidelines](https://github.com/containerd/project/blob/main/CONTRIBUTING.md) + +information in our [`containerd/project`](https://github.com/containerd/project) repository. diff --git a/cluster-autoscaler/vendor/github.com/containerd/cgroups/blkio.go b/cluster-autoscaler/vendor/github.com/containerd/cgroups/blkio.go new file mode 100644 index 000000000000..f59c75bb5d1f --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/containerd/cgroups/blkio.go @@ -0,0 +1,361 @@ +/* + Copyright The containerd Authors. + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. +*/ + +package cgroups + +import ( + "bufio" + "fmt" + "io" + "os" + "path/filepath" + "strconv" + "strings" + + v1 "github.com/containerd/cgroups/stats/v1" + specs "github.com/opencontainers/runtime-spec/specs-go" +) + +// NewBlkio returns a Blkio controller given the root folder of cgroups. 
+// It may optionally accept other configuration options, such as ProcRoot(path) +func NewBlkio(root string, options ...func(controller *blkioController)) *blkioController { + ctrl := &blkioController{ + root: filepath.Join(root, string(Blkio)), + procRoot: "/proc", + } + for _, opt := range options { + opt(ctrl) + } + return ctrl +} + +// ProcRoot overrides the default location of the "/proc" filesystem +func ProcRoot(path string) func(controller *blkioController) { + return func(c *blkioController) { + c.procRoot = path + } +} + +type blkioController struct { + root string + procRoot string +} + +func (b *blkioController) Name() Name { + return Blkio +} + +func (b *blkioController) Path(path string) string { + return filepath.Join(b.root, path) +} + +func (b *blkioController) Create(path string, resources *specs.LinuxResources) error { + if err := os.MkdirAll(b.Path(path), defaultDirPerm); err != nil { + return err + } + if resources.BlockIO == nil { + return nil + } + for _, t := range createBlkioSettings(resources.BlockIO) { + if t.value != nil { + if err := retryingWriteFile( + filepath.Join(b.Path(path), "blkio."+t.name), + t.format(t.value), + defaultFilePerm, + ); err != nil { + return err + } + } + } + return nil +} + +func (b *blkioController) Update(path string, resources *specs.LinuxResources) error { + return b.Create(path, resources) +} + +func (b *blkioController) Stat(path string, stats *v1.Metrics) error { + stats.Blkio = &v1.BlkIOStat{} + + var settings []blkioStatSettings + + // Try to read CFQ stats available on all CFQ enabled kernels first + if _, err := os.Lstat(filepath.Join(b.Path(path), "blkio.io_serviced_recursive")); err == nil { + settings = []blkioStatSettings{ + { + name: "sectors_recursive", + entry: &stats.Blkio.SectorsRecursive, + }, + { + name: "io_service_bytes_recursive", + entry: &stats.Blkio.IoServiceBytesRecursive, + }, + { + name: "io_serviced_recursive", + entry: &stats.Blkio.IoServicedRecursive, + }, + { + name: 
"io_queued_recursive", + entry: &stats.Blkio.IoQueuedRecursive, + }, + { + name: "io_service_time_recursive", + entry: &stats.Blkio.IoServiceTimeRecursive, + }, + { + name: "io_wait_time_recursive", + entry: &stats.Blkio.IoWaitTimeRecursive, + }, + { + name: "io_merged_recursive", + entry: &stats.Blkio.IoMergedRecursive, + }, + { + name: "time_recursive", + entry: &stats.Blkio.IoTimeRecursive, + }, + } + } + + f, err := os.Open(filepath.Join(b.procRoot, "partitions")) + if err != nil { + return err + } + defer f.Close() + + devices, err := getDevices(f) + if err != nil { + return err + } + + var size int + for _, t := range settings { + if err := b.readEntry(devices, path, t.name, t.entry); err != nil { + return err + } + size += len(*t.entry) + } + if size > 0 { + return nil + } + + // Even if the kernel is compiled with the CFQ scheduler, the cgroup may not use + // block devices with the CFQ scheduler. If so, we should fall back to throttle.* files. + settings = []blkioStatSettings{ + { + name: "throttle.io_serviced", + entry: &stats.Blkio.IoServicedRecursive, + }, + { + name: "throttle.io_service_bytes", + entry: &stats.Blkio.IoServiceBytesRecursive, + }, + } + for _, t := range settings { + if err := b.readEntry(devices, path, t.name, t.entry); err != nil { + return err + } + } + return nil +} + +func (b *blkioController) readEntry(devices map[deviceKey]string, path, name string, entry *[]*v1.BlkIOEntry) error { + f, err := os.Open(filepath.Join(b.Path(path), "blkio."+name)) + if err != nil { + return err + } + defer f.Close() + sc := bufio.NewScanner(f) + for sc.Scan() { + // format: dev type amount + fields := strings.FieldsFunc(sc.Text(), splitBlkIOStatLine) + if len(fields) < 3 { + if len(fields) == 2 && fields[0] == "Total" { + // skip total line + continue + } else { + return fmt.Errorf("invalid line found while parsing %s: %s", path, sc.Text()) + } + } + major, err := strconv.ParseUint(fields[0], 10, 64) + if err != nil { + return err + } + minor, err :=
strconv.ParseUint(fields[1], 10, 64) + if err != nil { + return err + } + op := "" + valueField := 2 + if len(fields) == 4 { + op = fields[2] + valueField = 3 + } + v, err := strconv.ParseUint(fields[valueField], 10, 64) + if err != nil { + return err + } + *entry = append(*entry, &v1.BlkIOEntry{ + Device: devices[deviceKey{major, minor}], + Major: major, + Minor: minor, + Op: op, + Value: v, + }) + } + return sc.Err() +} + +func createBlkioSettings(blkio *specs.LinuxBlockIO) []blkioSettings { + settings := []blkioSettings{} + + if blkio.Weight != nil { + settings = append(settings, + blkioSettings{ + name: "weight", + value: blkio.Weight, + format: uintf, + }) + } + if blkio.LeafWeight != nil { + settings = append(settings, + blkioSettings{ + name: "leaf_weight", + value: blkio.LeafWeight, + format: uintf, + }) + } + for _, wd := range blkio.WeightDevice { + if wd.Weight != nil { + settings = append(settings, + blkioSettings{ + name: "weight_device", + value: wd, + format: weightdev, + }) + } + if wd.LeafWeight != nil { + settings = append(settings, + blkioSettings{ + name: "leaf_weight_device", + value: wd, + format: weightleafdev, + }) + } + } + for _, t := range []struct { + name string + list []specs.LinuxThrottleDevice + }{ + { + name: "throttle.read_bps_device", + list: blkio.ThrottleReadBpsDevice, + }, + { + name: "throttle.read_iops_device", + list: blkio.ThrottleReadIOPSDevice, + }, + { + name: "throttle.write_bps_device", + list: blkio.ThrottleWriteBpsDevice, + }, + { + name: "throttle.write_iops_device", + list: blkio.ThrottleWriteIOPSDevice, + }, + } { + for _, td := range t.list { + settings = append(settings, blkioSettings{ + name: t.name, + value: td, + format: throttleddev, + }) + } + } + return settings +} + +type blkioSettings struct { + name string + value interface{} + format func(v interface{}) []byte +} + +type blkioStatSettings struct { + name string + entry *[]*v1.BlkIOEntry +} + +func uintf(v interface{}) []byte { + return 
[]byte(strconv.FormatUint(uint64(*v.(*uint16)), 10)) +} + +func weightdev(v interface{}) []byte { + wd := v.(specs.LinuxWeightDevice) + return []byte(fmt.Sprintf("%d:%d %d", wd.Major, wd.Minor, *wd.Weight)) +} + +func weightleafdev(v interface{}) []byte { + wd := v.(specs.LinuxWeightDevice) + return []byte(fmt.Sprintf("%d:%d %d", wd.Major, wd.Minor, *wd.LeafWeight)) +} + +func throttleddev(v interface{}) []byte { + td := v.(specs.LinuxThrottleDevice) + return []byte(fmt.Sprintf("%d:%d %d", td.Major, td.Minor, td.Rate)) +} + +func splitBlkIOStatLine(r rune) bool { + return r == ' ' || r == ':' +} + +type deviceKey struct { + major, minor uint64 +} + +// getDevices makes a best effort attempt to read all the devices into a map +// keyed by major and minor number. Since devices may be mapped multiple times, +// we err on taking the first occurrence. +func getDevices(r io.Reader) (map[deviceKey]string, error) { + + var ( + s = bufio.NewScanner(r) + devices = make(map[deviceKey]string) + ) + for i := 0; s.Scan(); i++ { + if i < 2 { + continue + } + fields := strings.Fields(s.Text()) + major, err := strconv.Atoi(fields[0]) + if err != nil { + return nil, err + } + minor, err := strconv.Atoi(fields[1]) + if err != nil { + return nil, err + } + key := deviceKey{ + major: uint64(major), + minor: uint64(minor), + } + if _, ok := devices[key]; ok { + continue + } + devices[key] = filepath.Join("/dev", fields[3]) + } + return devices, s.Err() +} diff --git a/cluster-autoscaler/vendor/github.com/containerd/cgroups/cgroup.go b/cluster-autoscaler/vendor/github.com/containerd/cgroups/cgroup.go new file mode 100644 index 000000000000..c0eac5bf19d1 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/containerd/cgroups/cgroup.go @@ -0,0 +1,543 @@ +/* + Copyright The containerd Authors. + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. 
+ You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. +*/ + +package cgroups + +import ( + "errors" + "fmt" + "os" + "path/filepath" + "strconv" + "strings" + "sync" + + v1 "github.com/containerd/cgroups/stats/v1" + + "github.com/opencontainers/runtime-spec/specs-go" +) + +// New returns a new control via the cgroup cgroups interface +func New(hierarchy Hierarchy, path Path, resources *specs.LinuxResources, opts ...InitOpts) (Cgroup, error) { + config := newInitConfig() + for _, o := range opts { + if err := o(config); err != nil { + return nil, err + } + } + subsystems, err := hierarchy() + if err != nil { + return nil, err + } + var active []Subsystem + for _, s := range subsystems { + // check if subsystem exists + if err := initializeSubsystem(s, path, resources); err != nil { + if err == ErrControllerNotActive { + if config.InitCheck != nil { + if skerr := config.InitCheck(s, path, err); skerr != nil { + if skerr != ErrIgnoreSubsystem { + return nil, skerr + } + } + } + continue + } + return nil, err + } + active = append(active, s) + } + return &cgroup{ + path: path, + subsystems: active, + }, nil +} + +// Load will load an existing cgroup and allow it to be controlled +// All static path should not include `/sys/fs/cgroup/` prefix, it should start with your own cgroups name +func Load(hierarchy Hierarchy, path Path, opts ...InitOpts) (Cgroup, error) { + config := newInitConfig() + for _, o := range opts { + if err := o(config); err != nil { + return nil, err + } + } + var activeSubsystems []Subsystem + subsystems, err := hierarchy() + if err != nil { + return nil, err + } + // check that the subsystems still 
exist, and keep only those that actually exist + for _, s := range pathers(subsystems) { + p, err := path(s.Name()) + if err != nil { + if errors.Is(err, os.ErrNotExist) { + return nil, ErrCgroupDeleted + } + if err == ErrControllerNotActive { + if config.InitCheck != nil { + if skerr := config.InitCheck(s, path, err); skerr != nil { + if skerr != ErrIgnoreSubsystem { + return nil, skerr + } + } + } + continue + } + return nil, err + } + if _, err := os.Lstat(s.Path(p)); err != nil { + if os.IsNotExist(err) { + continue + } + return nil, err + } + activeSubsystems = append(activeSubsystems, s) + } + // if we do not have any active systems then the cgroup is deleted + if len(activeSubsystems) == 0 { + return nil, ErrCgroupDeleted + } + return &cgroup{ + path: path, + subsystems: activeSubsystems, + }, nil +} + +type cgroup struct { + path Path + + subsystems []Subsystem + mu sync.Mutex + err error +} + +// New returns a new sub cgroup +func (c *cgroup) New(name string, resources *specs.LinuxResources) (Cgroup, error) { + c.mu.Lock() + defer c.mu.Unlock() + if c.err != nil { + return nil, c.err + } + path := subPath(c.path, name) + for _, s := range c.subsystems { + if err := initializeSubsystem(s, path, resources); err != nil { + return nil, err + } + } + return &cgroup{ + path: path, + subsystems: c.subsystems, + }, nil +} + +// Subsystems returns all the subsystems that are currently being +// consumed by the group +func (c *cgroup) Subsystems() []Subsystem { + return c.subsystems +} + +func (c *cgroup) subsystemsFilter(subsystems ...Name) []Subsystem { + if len(subsystems) == 0 { + return c.subsystems + } + + var filteredSubsystems = []Subsystem{} + for _, s := range c.subsystems { + for _, f := range subsystems { + if s.Name() == f { + filteredSubsystems = append(filteredSubsystems, s) + break + } + } + } + + return filteredSubsystems +} + +// Add moves the provided process into the new cgroup. 
+// Without additional arguments, the process is added to all the cgroup subsystems. +// When giving Add a list of subsystem names, the process is only added to those +// subsystems, provided that they are active in the targeted cgroup. +func (c *cgroup) Add(process Process, subsystems ...Name) error { + return c.add(process, cgroupProcs, subsystems...) +} + +// AddProc moves the provided process id into the new cgroup. +// Without additional arguments, the process with the given id is added to all +// the cgroup subsystems. When giving AddProc a list of subsystem names, the process +// id is only added to those subsystems, provided that they are active in the targeted +// cgroup. +func (c *cgroup) AddProc(pid uint64, subsystems ...Name) error { + return c.add(Process{Pid: int(pid)}, cgroupProcs, subsystems...) +} + +// AddTask moves the provided tasks (threads) into the new cgroup. +// Without additional arguments, the task is added to all the cgroup subsystems. +// When giving AddTask a list of subsystem names, the task is only added to those +// subsystems, provided that they are active in the targeted cgroup. +func (c *cgroup) AddTask(process Process, subsystems ...Name) error { + return c.add(process, cgroupTasks, subsystems...) 
+} + +func (c *cgroup) add(process Process, pType procType, subsystems ...Name) error { + if process.Pid <= 0 { + return ErrInvalidPid + } + c.mu.Lock() + defer c.mu.Unlock() + if c.err != nil { + return c.err + } + for _, s := range pathers(c.subsystemsFilter(subsystems...)) { + p, err := c.path(s.Name()) + if err != nil { + return err + } + err = retryingWriteFile( + filepath.Join(s.Path(p), pType), + []byte(strconv.Itoa(process.Pid)), + defaultFilePerm, + ) + if err != nil { + return err + } + } + return nil +} + +// Delete will remove the control group from each of the subsystems registered +func (c *cgroup) Delete() error { + c.mu.Lock() + defer c.mu.Unlock() + if c.err != nil { + return c.err + } + var errs []string + for _, s := range c.subsystems { + // kernel prevents cgroups with running process from being removed, check the tree is empty + procs, err := c.processes(s.Name(), true, cgroupProcs) + if err != nil { + return err + } + if len(procs) > 0 { + errs = append(errs, fmt.Sprintf("%s (contains running processes)", string(s.Name()))) + continue + } + if d, ok := s.(deleter); ok { + sp, err := c.path(s.Name()) + if err != nil { + return err + } + if err := d.Delete(sp); err != nil { + errs = append(errs, string(s.Name())) + } + continue + } + if p, ok := s.(pather); ok { + sp, err := c.path(s.Name()) + if err != nil { + return err + } + path := p.Path(sp) + if err := remove(path); err != nil { + errs = append(errs, path) + } + continue + } + } + if len(errs) > 0 { + return fmt.Errorf("cgroups: unable to remove paths %s", strings.Join(errs, ", ")) + } + c.err = ErrCgroupDeleted + return nil +} + +// Stat returns the current metrics for the cgroup +func (c *cgroup) Stat(handlers ...ErrorHandler) (*v1.Metrics, error) { + c.mu.Lock() + defer c.mu.Unlock() + if c.err != nil { + return nil, c.err + } + if len(handlers) == 0 { + handlers = append(handlers, errPassthrough) + } + var ( + stats = &v1.Metrics{ + CPU: &v1.CPUStat{ + Throttling: &v1.Throttle{}, + 
Usage: &v1.CPUUsage{}, + }, + } + wg = &sync.WaitGroup{} + errs = make(chan error, len(c.subsystems)) + ) + for _, s := range c.subsystems { + if ss, ok := s.(stater); ok { + sp, err := c.path(s.Name()) + if err != nil { + return nil, err + } + wg.Add(1) + go func() { + defer wg.Done() + if err := ss.Stat(sp, stats); err != nil { + for _, eh := range handlers { + if herr := eh(err); herr != nil { + errs <- herr + } + } + } + }() + } + } + wg.Wait() + close(errs) + for err := range errs { + return nil, err + } + return stats, nil +} + +// Update updates the cgroup with the new resource values provided +// +// Be prepared to handle EBUSY when trying to update a cgroup with +// live processes and other operations like Stats being performed at the +// same time +func (c *cgroup) Update(resources *specs.LinuxResources) error { + c.mu.Lock() + defer c.mu.Unlock() + if c.err != nil { + return c.err + } + for _, s := range c.subsystems { + if u, ok := s.(updater); ok { + sp, err := c.path(s.Name()) + if err != nil { + return err + } + if err := u.Update(sp, resources); err != nil { + return err + } + } + } + return nil +} + +// Processes returns the processes running inside the cgroup along +// with the subsystem used, pid, and path +func (c *cgroup) Processes(subsystem Name, recursive bool) ([]Process, error) { + c.mu.Lock() + defer c.mu.Unlock() + if c.err != nil { + return nil, c.err + } + return c.processes(subsystem, recursive, cgroupProcs) +} + +// Tasks returns the tasks running inside the cgroup along +// with the subsystem used, pid, and path +func (c *cgroup) Tasks(subsystem Name, recursive bool) ([]Task, error) { + c.mu.Lock() + defer c.mu.Unlock() + if c.err != nil { + return nil, c.err + } + return c.processes(subsystem, recursive, cgroupTasks) +} + +func (c *cgroup) processes(subsystem Name, recursive bool, pType procType) ([]Process, error) { + s := c.getSubsystem(subsystem) + sp, err := c.path(subsystem) + if err != nil { + return nil, err + } + if s == nil 
{ + return nil, fmt.Errorf("cgroups: %s doesn't exist in %s subsystem", sp, subsystem) + } + path := s.(pather).Path(sp) + var processes []Process + err = filepath.Walk(path, func(p string, info os.FileInfo, err error) error { + if err != nil { + return err + } + if !recursive && info.IsDir() { + if p == path { + return nil + } + return filepath.SkipDir + } + dir, name := filepath.Split(p) + if name != pType { + return nil + } + procs, err := readPids(dir, subsystem, pType) + if err != nil { + return err + } + processes = append(processes, procs...) + return nil + }) + return processes, err +} + +// Freeze freezes the entire cgroup and all the processes inside it +func (c *cgroup) Freeze() error { + c.mu.Lock() + defer c.mu.Unlock() + if c.err != nil { + return c.err + } + s := c.getSubsystem(Freezer) + if s == nil { + return ErrFreezerNotSupported + } + sp, err := c.path(Freezer) + if err != nil { + return err + } + return s.(*freezerController).Freeze(sp) +} + +// Thaw thaws out the cgroup and all the processes inside it +func (c *cgroup) Thaw() error { + c.mu.Lock() + defer c.mu.Unlock() + if c.err != nil { + return c.err + } + s := c.getSubsystem(Freezer) + if s == nil { + return ErrFreezerNotSupported + } + sp, err := c.path(Freezer) + if err != nil { + return err + } + return s.(*freezerController).Thaw(sp) +} + +// OOMEventFD returns the memory cgroup's out of memory event fd that triggers +// when processes inside the cgroup receive an oom event. Returns +// ErrMemoryNotSupported if memory cgroups is not supported. +func (c *cgroup) OOMEventFD() (uintptr, error) { + c.mu.Lock() + defer c.mu.Unlock() + if c.err != nil { + return 0, c.err + } + s := c.getSubsystem(Memory) + if s == nil { + return 0, ErrMemoryNotSupported + } + sp, err := c.path(Memory) + if err != nil { + return 0, err + } + return s.(*memoryController).memoryEvent(sp, OOMEvent()) +} + +// RegisterMemoryEvent allows the ability to register for all v1 memory cgroups +// notifications. 
+func (c *cgroup) RegisterMemoryEvent(event MemoryEvent) (uintptr, error) { + c.mu.Lock() + defer c.mu.Unlock() + if c.err != nil { + return 0, c.err + } + s := c.getSubsystem(Memory) + if s == nil { + return 0, ErrMemoryNotSupported + } + sp, err := c.path(Memory) + if err != nil { + return 0, err + } + return s.(*memoryController).memoryEvent(sp, event) +} + +// State returns the state of the cgroup and its processes +func (c *cgroup) State() State { + c.mu.Lock() + defer c.mu.Unlock() + c.checkExists() + if c.err != nil && c.err == ErrCgroupDeleted { + return Deleted + } + s := c.getSubsystem(Freezer) + if s == nil { + return Thawed + } + sp, err := c.path(Freezer) + if err != nil { + return Unknown + } + state, err := s.(*freezerController).state(sp) + if err != nil { + return Unknown + } + return state +} + +// MoveTo does a recursive move subsystem by subsystem of all the processes +// inside the group +func (c *cgroup) MoveTo(destination Cgroup) error { + c.mu.Lock() + defer c.mu.Unlock() + if c.err != nil { + return c.err + } + for _, s := range c.subsystems { + processes, err := c.processes(s.Name(), true, cgroupProcs) + if err != nil { + return err + } + for _, p := range processes { + if err := destination.Add(p); err != nil { + if strings.Contains(err.Error(), "no such process") { + continue + } + return err + } + } + } + return nil +} + +func (c *cgroup) getSubsystem(n Name) Subsystem { + for _, s := range c.subsystems { + if s.Name() == n { + return s + } + } + return nil +} + +func (c *cgroup) checkExists() { + for _, s := range pathers(c.subsystems) { + p, err := c.path(s.Name()) + if err != nil { + return + } + if _, err := os.Lstat(s.Path(p)); err != nil { + if os.IsNotExist(err) { + c.err = ErrCgroupDeleted + return + } + } + } +} diff --git a/cluster-autoscaler/vendor/github.com/containerd/cgroups/control.go b/cluster-autoscaler/vendor/github.com/containerd/cgroups/control.go new file mode 100644 index 000000000000..5fcaac6d0566 --- /dev/null 
+++ b/cluster-autoscaler/vendor/github.com/containerd/cgroups/control.go @@ -0,0 +1,99 @@ +/* + Copyright The containerd Authors. + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. +*/ + +package cgroups + +import ( + "os" + + v1 "github.com/containerd/cgroups/stats/v1" + specs "github.com/opencontainers/runtime-spec/specs-go" +) + +type procType = string + +const ( + cgroupProcs procType = "cgroup.procs" + cgroupTasks procType = "tasks" + defaultDirPerm = 0755 +) + +// defaultFilePerm is a var so that the test framework can change the filemode +// of all files created when the tests are running. The difference between the +// tests and real world use is that files like "cgroup.procs" will exist when writing +// to a real cgroup filesystem and do not exist prior when running in the tests. +// This is set to a non-zero value in the test code +var defaultFilePerm = os.FileMode(0) + +type Process struct { + // Subsystem is the name of the subsystem that the process / task is in. + Subsystem Name + // Pid is the process id of the process / task. + Pid int + // Path is the full path of the subsystem and location that the process / task is in.
+ Path string +} + +type Task = Process + +// Cgroup handles interactions with the individual groups to perform +// actions on them as the main interface to this cgroup package +type Cgroup interface { + // New creates a new cgroup under the calling cgroup + New(string, *specs.LinuxResources) (Cgroup, error) + // Add adds a process to the cgroup (cgroup.procs). Without additional arguments, + // the process is added to all the cgroup subsystems. When giving Add a list of + // subsystem names, the process is only added to those subsystems, provided that + // they are active in the targeted cgroup. + Add(Process, ...Name) error + // AddProc adds the process with the given id to the cgroup (cgroup.procs). + // Without additional arguments, the process with the given id is added to all + // the cgroup subsystems. When giving AddProc a list of subsystem names, the process + // id is only added to those subsystems, provided that they are active in the targeted + // cgroup. + AddProc(uint64, ...Name) error + // AddTask adds a process to the cgroup (tasks). Without additional arguments, the + // task is added to all the cgroup subsystems. When giving AddTask a list of subsystem + // names, the task is only added to those subsystems, provided that they are active in + // the targeted cgroup.
+ AddTask(Process, ...Name) error + // Delete removes the cgroup as a whole + Delete() error + // MoveTo moves all the processes under the calling cgroup to the provided one + // subsystems are moved one at a time + MoveTo(Cgroup) error + // Stat returns the stats for all subsystems in the cgroup + Stat(...ErrorHandler) (*v1.Metrics, error) + // Update updates all the subsystems with the provided resource changes + Update(resources *specs.LinuxResources) error + // Processes returns all the processes in a select subsystem for the cgroup + Processes(Name, bool) ([]Process, error) + // Tasks returns all the tasks in a select subsystem for the cgroup + Tasks(Name, bool) ([]Task, error) + // Freeze freezes or pauses all processes inside the cgroup + Freeze() error + // Thaw thaws or resumes all processes inside the cgroup + Thaw() error + // OOMEventFD returns the memory subsystem's event fd for OOM events + OOMEventFD() (uintptr, error) + // RegisterMemoryEvent returns the memory subsystem's event fd for whatever memory event was + // registered for. Can alternatively register for the oom event with this method. + RegisterMemoryEvent(MemoryEvent) (uintptr, error) + // State returns the cgroup's current state + State() State + // Subsystems returns all the subsystems in the cgroup + Subsystems() []Subsystem +} diff --git a/cluster-autoscaler/vendor/github.com/containerd/cgroups/cpu.go b/cluster-autoscaler/vendor/github.com/containerd/cgroups/cpu.go new file mode 100644 index 000000000000..27024f17b826 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/containerd/cgroups/cpu.go @@ -0,0 +1,125 @@ +/* + Copyright The containerd Authors. + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. +*/ + +package cgroups + +import ( + "bufio" + "os" + "path/filepath" + "strconv" + + v1 "github.com/containerd/cgroups/stats/v1" + specs "github.com/opencontainers/runtime-spec/specs-go" +) + +func NewCpu(root string) *cpuController { + return &cpuController{ + root: filepath.Join(root, string(Cpu)), + } +} + +type cpuController struct { + root string +} + +func (c *cpuController) Name() Name { + return Cpu +} + +func (c *cpuController) Path(path string) string { + return filepath.Join(c.root, path) +} + +func (c *cpuController) Create(path string, resources *specs.LinuxResources) error { + if err := os.MkdirAll(c.Path(path), defaultDirPerm); err != nil { + return err + } + if cpu := resources.CPU; cpu != nil { + for _, t := range []struct { + name string + ivalue *int64 + uvalue *uint64 + }{ + { + name: "rt_period_us", + uvalue: cpu.RealtimePeriod, + }, + { + name: "rt_runtime_us", + ivalue: cpu.RealtimeRuntime, + }, + { + name: "shares", + uvalue: cpu.Shares, + }, + { + name: "cfs_period_us", + uvalue: cpu.Period, + }, + { + name: "cfs_quota_us", + ivalue: cpu.Quota, + }, + } { + var value []byte + if t.uvalue != nil { + value = []byte(strconv.FormatUint(*t.uvalue, 10)) + } else if t.ivalue != nil { + value = []byte(strconv.FormatInt(*t.ivalue, 10)) + } + if value != nil { + if err := retryingWriteFile( + filepath.Join(c.Path(path), "cpu."+t.name), + value, + defaultFilePerm, + ); err != nil { + return err + } + } + } + } + return nil +} + +func (c *cpuController) Update(path string, resources *specs.LinuxResources) error { + return c.Create(path, resources) +} + 
+func (c *cpuController) Stat(path string, stats *v1.Metrics) error { + f, err := os.Open(filepath.Join(c.Path(path), "cpu.stat")) + if err != nil { + return err + } + defer f.Close() + // get or create the cpu field because cpuacct can also set values on this struct + sc := bufio.NewScanner(f) + for sc.Scan() { + key, v, err := parseKV(sc.Text()) + if err != nil { + return err + } + switch key { + case "nr_periods": + stats.CPU.Throttling.Periods = v + case "nr_throttled": + stats.CPU.Throttling.ThrottledPeriods = v + case "throttled_time": + stats.CPU.Throttling.ThrottledTime = v + } + } + return sc.Err() +} diff --git a/cluster-autoscaler/vendor/github.com/containerd/cgroups/cpuacct.go b/cluster-autoscaler/vendor/github.com/containerd/cgroups/cpuacct.go new file mode 100644 index 000000000000..1022fa379b59 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/containerd/cgroups/cpuacct.go @@ -0,0 +1,129 @@ +/* + Copyright The containerd Authors. + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. 
+*/ + +package cgroups + +import ( + "bufio" + "fmt" + "os" + "path/filepath" + "strconv" + "strings" + + v1 "github.com/containerd/cgroups/stats/v1" +) + +const nanosecondsInSecond = 1000000000 + +var clockTicks = getClockTicks() + +func NewCpuacct(root string) *cpuacctController { + return &cpuacctController{ + root: filepath.Join(root, string(Cpuacct)), + } +} + +type cpuacctController struct { + root string +} + +func (c *cpuacctController) Name() Name { + return Cpuacct +} + +func (c *cpuacctController) Path(path string) string { + return filepath.Join(c.root, path) +} + +func (c *cpuacctController) Stat(path string, stats *v1.Metrics) error { + user, kernel, err := c.getUsage(path) + if err != nil { + return err + } + total, err := readUint(filepath.Join(c.Path(path), "cpuacct.usage")) + if err != nil { + return err + } + percpu, err := c.percpuUsage(path) + if err != nil { + return err + } + stats.CPU.Usage.Total = total + stats.CPU.Usage.User = user + stats.CPU.Usage.Kernel = kernel + stats.CPU.Usage.PerCPU = percpu + return nil +} + +func (c *cpuacctController) percpuUsage(path string) ([]uint64, error) { + var usage []uint64 + data, err := os.ReadFile(filepath.Join(c.Path(path), "cpuacct.usage_percpu")) + if err != nil { + return nil, err + } + for _, v := range strings.Fields(string(data)) { + u, err := strconv.ParseUint(v, 10, 64) + if err != nil { + return nil, err + } + usage = append(usage, u) + } + return usage, nil +} + +func (c *cpuacctController) getUsage(path string) (user uint64, kernel uint64, err error) { + statPath := filepath.Join(c.Path(path), "cpuacct.stat") + f, err := os.Open(statPath) + if err != nil { + return 0, 0, err + } + defer f.Close() + var ( + raw = make(map[string]uint64) + sc = bufio.NewScanner(f) + ) + for sc.Scan() { + key, v, err := parseKV(sc.Text()) + if err != nil { + return 0, 0, err + } + raw[key] = v + } + if err := sc.Err(); err != nil { + return 0, 0, err + } + for _, t := range []struct { + name string + value 
*uint64 + }{ + { + name: "user", + value: &user, + }, + { + name: "system", + value: &kernel, + }, + } { + v, ok := raw[t.name] + if !ok { + return 0, 0, fmt.Errorf("expected field %q but not found in %q", t.name, statPath) + } + *t.value = v + } + return (user * nanosecondsInSecond) / clockTicks, (kernel * nanosecondsInSecond) / clockTicks, nil +} diff --git a/cluster-autoscaler/vendor/github.com/containerd/cgroups/cpuset.go b/cluster-autoscaler/vendor/github.com/containerd/cgroups/cpuset.go new file mode 100644 index 000000000000..8b56d3dbaa00 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/containerd/cgroups/cpuset.go @@ -0,0 +1,158 @@ +/* + Copyright The containerd Authors. + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. 
+*/ + +package cgroups + +import ( + "bytes" + "fmt" + "os" + "path/filepath" + + specs "github.com/opencontainers/runtime-spec/specs-go" +) + +func NewCpuset(root string) *cpusetController { + return &cpusetController{ + root: filepath.Join(root, string(Cpuset)), + } +} + +type cpusetController struct { + root string +} + +func (c *cpusetController) Name() Name { + return Cpuset +} + +func (c *cpusetController) Path(path string) string { + return filepath.Join(c.root, path) +} + +func (c *cpusetController) Create(path string, resources *specs.LinuxResources) error { + if err := c.ensureParent(c.Path(path), c.root); err != nil { + return err + } + if err := os.MkdirAll(c.Path(path), defaultDirPerm); err != nil { + return err + } + if err := c.copyIfNeeded(c.Path(path), filepath.Dir(c.Path(path))); err != nil { + return err + } + if resources.CPU != nil { + for _, t := range []struct { + name string + value string + }{ + { + name: "cpus", + value: resources.CPU.Cpus, + }, + { + name: "mems", + value: resources.CPU.Mems, + }, + } { + if t.value != "" { + if err := retryingWriteFile( + filepath.Join(c.Path(path), "cpuset."+t.name), + []byte(t.value), + defaultFilePerm, + ); err != nil { + return err + } + } + } + } + return nil +} + +func (c *cpusetController) Update(path string, resources *specs.LinuxResources) error { + return c.Create(path, resources) +} + +func (c *cpusetController) getValues(path string) (cpus []byte, mems []byte, err error) { + if cpus, err = os.ReadFile(filepath.Join(path, "cpuset.cpus")); err != nil && !os.IsNotExist(err) { + return + } + if mems, err = os.ReadFile(filepath.Join(path, "cpuset.mems")); err != nil && !os.IsNotExist(err) { + return + } + return cpus, mems, nil +} + +// ensureParent makes sure that the parent directory of current is created +// and populated with the proper cpus and mems files copied from +// its parent.
+func (c *cpusetController) ensureParent(current, root string) error { + parent := filepath.Dir(current) + if _, err := filepath.Rel(root, parent); err != nil { + return nil + } + // Avoid infinite recursion. + if parent == current { + return fmt.Errorf("cpuset: cgroup parent path outside cgroup root") + } + if cleanPath(parent) != root { + if err := c.ensureParent(parent, root); err != nil { + return err + } + } + if err := os.MkdirAll(current, defaultDirPerm); err != nil { + return err + } + return c.copyIfNeeded(current, parent) +} + +// copyIfNeeded copies the cpuset.cpus and cpuset.mems from the parent +// directory to the current directory if the file's contents are 0 +func (c *cpusetController) copyIfNeeded(current, parent string) error { + var ( + err error + currentCpus, currentMems []byte + parentCpus, parentMems []byte + ) + if currentCpus, currentMems, err = c.getValues(current); err != nil { + return err + } + if parentCpus, parentMems, err = c.getValues(parent); err != nil { + return err + } + if isEmpty(currentCpus) { + if err := retryingWriteFile( + filepath.Join(current, "cpuset.cpus"), + parentCpus, + defaultFilePerm, + ); err != nil { + return err + } + } + if isEmpty(currentMems) { + if err := retryingWriteFile( + filepath.Join(current, "cpuset.mems"), + parentMems, + defaultFilePerm, + ); err != nil { + return err + } + } + return nil +} + +func isEmpty(b []byte) bool { + return len(bytes.Trim(b, "\n")) == 0 +} diff --git a/cluster-autoscaler/vendor/github.com/containerd/cgroups/devices.go b/cluster-autoscaler/vendor/github.com/containerd/cgroups/devices.go new file mode 100644 index 000000000000..7792566d5ee4 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/containerd/cgroups/devices.go @@ -0,0 +1,92 @@ +/* + Copyright The containerd Authors. + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. 
+ You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. +*/ + +package cgroups + +import ( + "fmt" + "os" + "path/filepath" + + specs "github.com/opencontainers/runtime-spec/specs-go" +) + +const ( + allowDeviceFile = "devices.allow" + denyDeviceFile = "devices.deny" + wildcard = -1 +) + +func NewDevices(root string) *devicesController { + return &devicesController{ + root: filepath.Join(root, string(Devices)), + } +} + +type devicesController struct { + root string +} + +func (d *devicesController) Name() Name { + return Devices +} + +func (d *devicesController) Path(path string) string { + return filepath.Join(d.root, path) +} + +func (d *devicesController) Create(path string, resources *specs.LinuxResources) error { + if err := os.MkdirAll(d.Path(path), defaultDirPerm); err != nil { + return err + } + for _, device := range resources.Devices { + file := denyDeviceFile + if device.Allow { + file = allowDeviceFile + } + if device.Type == "" { + device.Type = "a" + } + if err := retryingWriteFile( + filepath.Join(d.Path(path), file), + []byte(deviceString(device)), + defaultFilePerm, + ); err != nil { + return err + } + } + return nil +} + +func (d *devicesController) Update(path string, resources *specs.LinuxResources) error { + return d.Create(path, resources) +} + +func deviceString(device specs.LinuxDeviceCgroup) string { + return fmt.Sprintf("%s %s:%s %s", + device.Type, + deviceNumber(device.Major), + deviceNumber(device.Minor), + device.Access, + ) +} + +func deviceNumber(number *int64) string { + if number == nil || *number == wildcard { + return "*" + } + return fmt.Sprint(*number) +} diff --git 
a/cluster-autoscaler/vendor/github.com/containerd/cgroups/errors.go b/cluster-autoscaler/vendor/github.com/containerd/cgroups/errors.go new file mode 100644 index 000000000000..f1ad8315c81e --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/containerd/cgroups/errors.go @@ -0,0 +1,47 @@ +/* + Copyright The containerd Authors. + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. +*/ + +package cgroups + +import ( + "errors" + "os" +) + +var ( + ErrInvalidPid = errors.New("cgroups: pid must be greater than 0") + ErrMountPointNotExist = errors.New("cgroups: cgroup mountpoint does not exist") + ErrInvalidFormat = errors.New("cgroups: parsing file with invalid format failed") + ErrFreezerNotSupported = errors.New("cgroups: freezer cgroup not supported on this system") + ErrMemoryNotSupported = errors.New("cgroups: memory cgroup not supported on this system") + ErrCgroupDeleted = errors.New("cgroups: cgroup deleted") + ErrNoCgroupMountDestination = errors.New("cgroups: cannot find cgroup mount destination") +) + +// ErrorHandler is a function that handles and acts on errors +type ErrorHandler func(err error) error + +// IgnoreNotExist ignores any errors that are for not existing files +func IgnoreNotExist(err error) error { + if os.IsNotExist(err) { + return nil + } + return err +} + +func errPassthrough(err error) error { + return err +} diff --git a/cluster-autoscaler/vendor/github.com/containerd/cgroups/freezer.go b/cluster-autoscaler/vendor/github.com/containerd/cgroups/freezer.go 
new file mode 100644 index 000000000000..5783f0dcc741 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/containerd/cgroups/freezer.go @@ -0,0 +1,82 @@ +/* + Copyright The containerd Authors. + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. +*/ + +package cgroups + +import ( + "os" + "path/filepath" + "strings" + "time" +) + +func NewFreezer(root string) *freezerController { + return &freezerController{ + root: filepath.Join(root, string(Freezer)), + } +} + +type freezerController struct { + root string +} + +func (f *freezerController) Name() Name { + return Freezer +} + +func (f *freezerController) Path(path string) string { + return filepath.Join(f.root, path) +} + +func (f *freezerController) Freeze(path string) error { + return f.waitState(path, Frozen) +} + +func (f *freezerController) Thaw(path string) error { + return f.waitState(path, Thawed) +} + +func (f *freezerController) changeState(path string, state State) error { + return retryingWriteFile( + filepath.Join(f.root, path, "freezer.state"), + []byte(strings.ToUpper(string(state))), + defaultFilePerm, + ) +} + +func (f *freezerController) state(path string) (State, error) { + current, err := os.ReadFile(filepath.Join(f.root, path, "freezer.state")) + if err != nil { + return "", err + } + return State(strings.ToLower(strings.TrimSpace(string(current)))), nil +} + +func (f *freezerController) waitState(path string, state State) error { + for { + if err := f.changeState(path, state); err != nil { + return err + } + 
current, err := f.state(path) + if err != nil { + return err + } + if current == state { + return nil + } + time.Sleep(1 * time.Millisecond) + } +} diff --git a/cluster-autoscaler/vendor/github.com/containerd/cgroups/hierarchy.go b/cluster-autoscaler/vendor/github.com/containerd/cgroups/hierarchy.go new file mode 100644 index 000000000000..ca3f1b9380b7 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/containerd/cgroups/hierarchy.go @@ -0,0 +1,20 @@ +/* + Copyright The containerd Authors. + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. +*/ + +package cgroups + +// Hierarchy enables both unified and split hierarchy for cgroups +type Hierarchy func() ([]Subsystem, error) diff --git a/cluster-autoscaler/vendor/github.com/containerd/cgroups/hugetlb.go b/cluster-autoscaler/vendor/github.com/containerd/cgroups/hugetlb.go new file mode 100644 index 000000000000..c0eb03b24da3 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/containerd/cgroups/hugetlb.go @@ -0,0 +1,109 @@ +/* + Copyright The containerd Authors. + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+ See the License for the specific language governing permissions and + limitations under the License. +*/ + +package cgroups + +import ( + "os" + "path/filepath" + "strconv" + "strings" + + v1 "github.com/containerd/cgroups/stats/v1" + specs "github.com/opencontainers/runtime-spec/specs-go" +) + +func NewHugetlb(root string) (*hugetlbController, error) { + sizes, err := hugePageSizes() + if err != nil { + return nil, err + } + + return &hugetlbController{ + root: filepath.Join(root, string(Hugetlb)), + sizes: sizes, + }, nil +} + +type hugetlbController struct { + root string + sizes []string +} + +func (h *hugetlbController) Name() Name { + return Hugetlb +} + +func (h *hugetlbController) Path(path string) string { + return filepath.Join(h.root, path) +} + +func (h *hugetlbController) Create(path string, resources *specs.LinuxResources) error { + if err := os.MkdirAll(h.Path(path), defaultDirPerm); err != nil { + return err + } + for _, limit := range resources.HugepageLimits { + if err := retryingWriteFile( + filepath.Join(h.Path(path), strings.Join([]string{"hugetlb", limit.Pagesize, "limit_in_bytes"}, ".")), + []byte(strconv.FormatUint(limit.Limit, 10)), + defaultFilePerm, + ); err != nil { + return err + } + } + return nil +} + +func (h *hugetlbController) Stat(path string, stats *v1.Metrics) error { + for _, size := range h.sizes { + s, err := h.readSizeStat(path, size) + if err != nil { + return err + } + stats.Hugetlb = append(stats.Hugetlb, s) + } + return nil +} + +func (h *hugetlbController) readSizeStat(path, size string) (*v1.HugetlbStat, error) { + s := v1.HugetlbStat{ + Pagesize: size, + } + for _, t := range []struct { + name string + value *uint64 + }{ + { + name: "usage_in_bytes", + value: &s.Usage, + }, + { + name: "max_usage_in_bytes", + value: &s.Max, + }, + { + name: "failcnt", + value: &s.Failcnt, + }, + } { + v, err := readUint(filepath.Join(h.Path(path), strings.Join([]string{"hugetlb", size, t.name}, "."))) + if err != nil { + return nil, 
err + } + *t.value = v + } + return &s, nil +} diff --git a/cluster-autoscaler/vendor/github.com/containerd/cgroups/memory.go b/cluster-autoscaler/vendor/github.com/containerd/cgroups/memory.go new file mode 100644 index 000000000000..e271866ef94f --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/containerd/cgroups/memory.go @@ -0,0 +1,480 @@ +/* + Copyright The containerd Authors. + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. +*/ + +package cgroups + +import ( + "bufio" + "fmt" + "io" + "os" + "path/filepath" + "strconv" + "strings" + + v1 "github.com/containerd/cgroups/stats/v1" + specs "github.com/opencontainers/runtime-spec/specs-go" + "golang.org/x/sys/unix" +) + +// MemoryEvent is an interface that V1 memory Cgroup notifications implement. Arg returns the +// file name whose fd should be written to "cgroups.event_control". EventFile returns the name of +// the file that supports the notification api e.g. "memory.usage_in_bytes". +type MemoryEvent interface { + Arg() string + EventFile() string +} + +type memoryThresholdEvent struct { + threshold uint64 + swap bool +} + +// MemoryThresholdEvent returns a new memory threshold event to be used with RegisterMemoryEvent. 
+// If swap is true, the event will be registered using memory.memsw.usage_in_bytes +func MemoryThresholdEvent(threshold uint64, swap bool) MemoryEvent { + return &memoryThresholdEvent{ + threshold, + swap, + } +} + +func (m *memoryThresholdEvent) Arg() string { + return strconv.FormatUint(m.threshold, 10) +} + +func (m *memoryThresholdEvent) EventFile() string { + if m.swap { + return "memory.memsw.usage_in_bytes" + } + return "memory.usage_in_bytes" +} + +type oomEvent struct{} + +// OOMEvent returns a new oom event to be used with RegisterMemoryEvent. +func OOMEvent() MemoryEvent { + return &oomEvent{} +} + +func (oom *oomEvent) Arg() string { + return "" +} + +func (oom *oomEvent) EventFile() string { + return "memory.oom_control" +} + +type memoryPressureEvent struct { + pressureLevel MemoryPressureLevel + hierarchy EventNotificationMode +} + +// MemoryPressureEvent returns a new memory pressure event to be used with RegisterMemoryEvent. +func MemoryPressureEvent(pressureLevel MemoryPressureLevel, hierarchy EventNotificationMode) MemoryEvent { + return &memoryPressureEvent{ + pressureLevel, + hierarchy, + } +} + +func (m *memoryPressureEvent) Arg() string { + return string(m.pressureLevel) + "," + string(m.hierarchy) +} + +func (m *memoryPressureEvent) EventFile() string { + return "memory.pressure_level" +} + +// MemoryPressureLevel corresponds to the memory pressure levels defined +// for memory cgroups. +type MemoryPressureLevel string + +// The three memory pressure levels are as follows. +// - The "low" level means that the system is reclaiming memory for new +// allocations. Monitoring this reclaiming activity might be useful for +// maintaining cache level. Upon notification, the program (typically +// "Activity Manager") might analyze vmstat and act in advance (i.e. +// prematurely shutdown unimportant services). 
+// - The "medium" level means that the system is experiencing medium memory +// pressure, the system might be making swap, paging out active file caches, +// etc. Upon this event applications may decide to further analyze +// vmstat/zoneinfo/memcg or internal memory usage statistics and free any +// resources that can be easily reconstructed or re-read from a disk. +// - The "critical" level means that the system is actively thrashing, it is +// about to out of memory (OOM) or even the in-kernel OOM killer is on its +// way to trigger. Applications should do whatever they can to help the +// system. It might be too late to consult with vmstat or any other +// statistics, so it is advisable to take an immediate action. +// "https://www.kernel.org/doc/Documentation/cgroup-v1/memory.txt" Section 11 +const ( + LowPressure MemoryPressureLevel = "low" + MediumPressure MemoryPressureLevel = "medium" + CriticalPressure MemoryPressureLevel = "critical" +) + +// EventNotificationMode corresponds to the notification modes +// for the memory cgroups pressure level notifications. +type EventNotificationMode string + +// There are three optional modes that specify different propagation behavior: +// - "default": this is the default behavior specified above. This mode is the +// same as omitting the optional mode parameter, preserved by backwards +// compatibility. +// - "hierarchy": events always propagate up to the root, similar to the default +// behavior, except that propagation continues regardless of whether there are +// event listeners at each level, with the "hierarchy" mode. In the above +// example, groups A, B, and C will receive notification of memory pressure. +// - "local": events are pass-through, i.e. they only receive notifications when +// memory pressure is experienced in the memcg for which the notification is +// registered. 
In the above example, group C will receive notification if +// registered for "local" notification and the group experiences memory +// pressure. However, group B will never receive notification, regardless if +// there is an event listener for group C or not, if group B is registered for +// local notification. +// "https://www.kernel.org/doc/Documentation/cgroup-v1/memory.txt" Section 11 +const ( + DefaultMode EventNotificationMode = "default" + LocalMode EventNotificationMode = "local" + HierarchyMode EventNotificationMode = "hierarchy" +) + +// NewMemory returns a Memory controller given the root folder of cgroups. +// It may optionally accept other configuration options, such as IgnoreModules(...) +func NewMemory(root string, options ...func(*memoryController)) *memoryController { + mc := &memoryController{ + root: filepath.Join(root, string(Memory)), + ignored: map[string]struct{}{}, + } + for _, opt := range options { + opt(mc) + } + return mc +} + +// IgnoreModules configure the memory controller to not read memory metrics for some +// module names (e.g. 
passing "memsw" would avoid all the memory.memsw.* entries) +func IgnoreModules(names ...string) func(*memoryController) { + return func(mc *memoryController) { + for _, name := range names { + mc.ignored[name] = struct{}{} + } + } +} + +// OptionalSwap allows the memory controller to not fail if cgroups is not accounting +// Swap memory (there are no memory.memsw.* entries) +func OptionalSwap() func(*memoryController) { + return func(mc *memoryController) { + _, err := os.Stat(filepath.Join(mc.root, "memory.memsw.usage_in_bytes")) + if os.IsNotExist(err) { + mc.ignored["memsw"] = struct{}{} + } + } +} + +type memoryController struct { + root string + ignored map[string]struct{} +} + +func (m *memoryController) Name() Name { + return Memory +} + +func (m *memoryController) Path(path string) string { + return filepath.Join(m.root, path) +} + +func (m *memoryController) Create(path string, resources *specs.LinuxResources) error { + if err := os.MkdirAll(m.Path(path), defaultDirPerm); err != nil { + return err + } + if resources.Memory == nil { + return nil + } + return m.set(path, getMemorySettings(resources)) +} + +func (m *memoryController) Update(path string, resources *specs.LinuxResources) error { + if resources.Memory == nil { + return nil + } + g := func(v *int64) bool { + return v != nil && *v > 0 + } + settings := getMemorySettings(resources) + if g(resources.Memory.Limit) && g(resources.Memory.Swap) { + // if the updated swap value is larger than the current memory limit set the swap changes first + // then set the memory limit as swap must always be larger than the current limit + current, err := readUint(filepath.Join(m.Path(path), "memory.limit_in_bytes")) + if err != nil { + return err + } + if current < uint64(*resources.Memory.Swap) { + settings[0], settings[1] = settings[1], settings[0] + } + } + return m.set(path, settings) +} + +func (m *memoryController) Stat(path string, stats *v1.Metrics) error { + fMemStat, err := 
os.Open(filepath.Join(m.Path(path), "memory.stat")) + if err != nil { + return err + } + defer fMemStat.Close() + stats.Memory = &v1.MemoryStat{ + Usage: &v1.MemoryEntry{}, + Swap: &v1.MemoryEntry{}, + Kernel: &v1.MemoryEntry{}, + KernelTCP: &v1.MemoryEntry{}, + } + if err := m.parseStats(fMemStat, stats.Memory); err != nil { + return err + } + + fMemOomControl, err := os.Open(filepath.Join(m.Path(path), "memory.oom_control")) + if err != nil { + return err + } + defer fMemOomControl.Close() + stats.MemoryOomControl = &v1.MemoryOomControl{} + if err := m.parseOomControlStats(fMemOomControl, stats.MemoryOomControl); err != nil { + return err + } + for _, t := range []struct { + module string + entry *v1.MemoryEntry + }{ + { + module: "", + entry: stats.Memory.Usage, + }, + { + module: "memsw", + entry: stats.Memory.Swap, + }, + { + module: "kmem", + entry: stats.Memory.Kernel, + }, + { + module: "kmem.tcp", + entry: stats.Memory.KernelTCP, + }, + } { + if _, ok := m.ignored[t.module]; ok { + continue + } + for _, tt := range []struct { + name string + value *uint64 + }{ + { + name: "usage_in_bytes", + value: &t.entry.Usage, + }, + { + name: "max_usage_in_bytes", + value: &t.entry.Max, + }, + { + name: "failcnt", + value: &t.entry.Failcnt, + }, + { + name: "limit_in_bytes", + value: &t.entry.Limit, + }, + } { + parts := []string{"memory"} + if t.module != "" { + parts = append(parts, t.module) + } + parts = append(parts, tt.name) + v, err := readUint(filepath.Join(m.Path(path), strings.Join(parts, "."))) + if err != nil { + return err + } + *tt.value = v + } + } + return nil +} + +func (m *memoryController) parseStats(r io.Reader, stat *v1.MemoryStat) error { + var ( + raw = make(map[string]uint64) + sc = bufio.NewScanner(r) + line int + ) + for sc.Scan() { + key, v, err := parseKV(sc.Text()) + if err != nil { + return fmt.Errorf("%d: %v", line, err) + } + raw[key] = v + line++ + } + if err := sc.Err(); err != nil { + return err + } + stat.Cache = raw["cache"] + 
stat.RSS = raw["rss"] + stat.RSSHuge = raw["rss_huge"] + stat.MappedFile = raw["mapped_file"] + stat.Dirty = raw["dirty"] + stat.Writeback = raw["writeback"] + stat.PgPgIn = raw["pgpgin"] + stat.PgPgOut = raw["pgpgout"] + stat.PgFault = raw["pgfault"] + stat.PgMajFault = raw["pgmajfault"] + stat.InactiveAnon = raw["inactive_anon"] + stat.ActiveAnon = raw["active_anon"] + stat.InactiveFile = raw["inactive_file"] + stat.ActiveFile = raw["active_file"] + stat.Unevictable = raw["unevictable"] + stat.HierarchicalMemoryLimit = raw["hierarchical_memory_limit"] + stat.HierarchicalSwapLimit = raw["hierarchical_memsw_limit"] + stat.TotalCache = raw["total_cache"] + stat.TotalRSS = raw["total_rss"] + stat.TotalRSSHuge = raw["total_rss_huge"] + stat.TotalMappedFile = raw["total_mapped_file"] + stat.TotalDirty = raw["total_dirty"] + stat.TotalWriteback = raw["total_writeback"] + stat.TotalPgPgIn = raw["total_pgpgin"] + stat.TotalPgPgOut = raw["total_pgpgout"] + stat.TotalPgFault = raw["total_pgfault"] + stat.TotalPgMajFault = raw["total_pgmajfault"] + stat.TotalInactiveAnon = raw["total_inactive_anon"] + stat.TotalActiveAnon = raw["total_active_anon"] + stat.TotalInactiveFile = raw["total_inactive_file"] + stat.TotalActiveFile = raw["total_active_file"] + stat.TotalUnevictable = raw["total_unevictable"] + return nil +} + +func (m *memoryController) parseOomControlStats(r io.Reader, stat *v1.MemoryOomControl) error { + var ( + raw = make(map[string]uint64) + sc = bufio.NewScanner(r) + line int + ) + for sc.Scan() { + key, v, err := parseKV(sc.Text()) + if err != nil { + return fmt.Errorf("%d: %v", line, err) + } + raw[key] = v + line++ + } + if err := sc.Err(); err != nil { + return err + } + stat.OomKillDisable = raw["oom_kill_disable"] + stat.UnderOom = raw["under_oom"] + stat.OomKill = raw["oom_kill"] + return nil +} + +func (m *memoryController) set(path string, settings []memorySettings) error { + for _, t := range settings { + if t.value != nil { + if err := 
retryingWriteFile( + filepath.Join(m.Path(path), "memory."+t.name), + []byte(strconv.FormatInt(*t.value, 10)), + defaultFilePerm, + ); err != nil { + return err + } + } + } + return nil +} + +type memorySettings struct { + name string + value *int64 +} + +func getMemorySettings(resources *specs.LinuxResources) []memorySettings { + mem := resources.Memory + var swappiness *int64 + if mem.Swappiness != nil { + v := int64(*mem.Swappiness) + swappiness = &v + } + return []memorySettings{ + { + name: "limit_in_bytes", + value: mem.Limit, + }, + { + name: "soft_limit_in_bytes", + value: mem.Reservation, + }, + { + name: "memsw.limit_in_bytes", + value: mem.Swap, + }, + { + name: "kmem.limit_in_bytes", + value: mem.Kernel, + }, + { + name: "kmem.tcp.limit_in_bytes", + value: mem.KernelTCP, + }, + { + name: "oom_control", + value: getOomControlValue(mem), + }, + { + name: "swappiness", + value: swappiness, + }, + } +} + +func getOomControlValue(mem *specs.LinuxMemory) *int64 { + if mem.DisableOOMKiller != nil && *mem.DisableOOMKiller { + i := int64(1) + return &i + } + return nil +} + +func (m *memoryController) memoryEvent(path string, event MemoryEvent) (uintptr, error) { + root := m.Path(path) + efd, err := unix.Eventfd(0, unix.EFD_CLOEXEC) + if err != nil { + return 0, err + } + evtFile, err := os.Open(filepath.Join(root, event.EventFile())) + if err != nil { + unix.Close(efd) + return 0, err + } + defer evtFile.Close() + data := fmt.Sprintf("%d %d %s", efd, evtFile.Fd(), event.Arg()) + evctlPath := filepath.Join(root, "cgroup.event_control") + if err := retryingWriteFile(evctlPath, []byte(data), 0700); err != nil { + unix.Close(efd) + return 0, err + } + return uintptr(efd), nil +} diff --git a/cluster-autoscaler/vendor/github.com/containerd/cgroups/named.go b/cluster-autoscaler/vendor/github.com/containerd/cgroups/named.go new file mode 100644 index 000000000000..06b16c3b1573 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/containerd/cgroups/named.go @@ -0,0 
+1,39 @@ +/* + Copyright The containerd Authors. + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. +*/ + +package cgroups + +import "path/filepath" + +func NewNamed(root string, name Name) *namedController { + return &namedController{ + root: root, + name: name, + } +} + +type namedController struct { + root string + name Name +} + +func (n *namedController) Name() Name { + return n.name +} + +func (n *namedController) Path(path string) string { + return filepath.Join(n.root, string(n.name), path) +} diff --git a/cluster-autoscaler/vendor/github.com/containerd/cgroups/net_cls.go b/cluster-autoscaler/vendor/github.com/containerd/cgroups/net_cls.go new file mode 100644 index 000000000000..839b06de080b --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/containerd/cgroups/net_cls.go @@ -0,0 +1,61 @@ +/* + Copyright The containerd Authors. + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. 
+*/ + +package cgroups + +import ( + "os" + "path/filepath" + "strconv" + + specs "github.com/opencontainers/runtime-spec/specs-go" +) + +func NewNetCls(root string) *netclsController { + return &netclsController{ + root: filepath.Join(root, string(NetCLS)), + } +} + +type netclsController struct { + root string +} + +func (n *netclsController) Name() Name { + return NetCLS +} + +func (n *netclsController) Path(path string) string { + return filepath.Join(n.root, path) +} + +func (n *netclsController) Create(path string, resources *specs.LinuxResources) error { + if err := os.MkdirAll(n.Path(path), defaultDirPerm); err != nil { + return err + } + if resources.Network != nil && resources.Network.ClassID != nil && *resources.Network.ClassID > 0 { + return retryingWriteFile( + filepath.Join(n.Path(path), "net_cls.classid"), + []byte(strconv.FormatUint(uint64(*resources.Network.ClassID), 10)), + defaultFilePerm, + ) + } + return nil +} + +func (n *netclsController) Update(path string, resources *specs.LinuxResources) error { + return n.Create(path, resources) +} diff --git a/cluster-autoscaler/vendor/github.com/containerd/cgroups/net_prio.go b/cluster-autoscaler/vendor/github.com/containerd/cgroups/net_prio.go new file mode 100644 index 000000000000..6362fd084f71 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/containerd/cgroups/net_prio.go @@ -0,0 +1,65 @@ +/* + Copyright The containerd Authors. + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. 
+*/ + +package cgroups + +import ( + "fmt" + "os" + "path/filepath" + + specs "github.com/opencontainers/runtime-spec/specs-go" +) + +func NewNetPrio(root string) *netprioController { + return &netprioController{ + root: filepath.Join(root, string(NetPrio)), + } +} + +type netprioController struct { + root string +} + +func (n *netprioController) Name() Name { + return NetPrio +} + +func (n *netprioController) Path(path string) string { + return filepath.Join(n.root, path) +} + +func (n *netprioController) Create(path string, resources *specs.LinuxResources) error { + if err := os.MkdirAll(n.Path(path), defaultDirPerm); err != nil { + return err + } + if resources.Network != nil { + for _, prio := range resources.Network.Priorities { + if err := retryingWriteFile( + filepath.Join(n.Path(path), "net_prio.ifpriomap"), + formatPrio(prio.Name, prio.Priority), + defaultFilePerm, + ); err != nil { + return err + } + } + } + return nil +} + +func formatPrio(name string, prio uint32) []byte { + return []byte(fmt.Sprintf("%s %d", name, prio)) +} diff --git a/cluster-autoscaler/vendor/github.com/containerd/cgroups/opts.go b/cluster-autoscaler/vendor/github.com/containerd/cgroups/opts.go new file mode 100644 index 000000000000..1e428d048079 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/containerd/cgroups/opts.go @@ -0,0 +1,61 @@ +/* + Copyright The containerd Authors. + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. 
+*/ + +package cgroups + +import ( + "errors" +) + +var ( + // ErrIgnoreSubsystem allows the specific subsystem to be skipped + ErrIgnoreSubsystem = errors.New("skip subsystem") + // ErrDevicesRequired is returned when the devices subsystem is required but + // does not exist or is not active + ErrDevicesRequired = errors.New("devices subsystem is required") +) + +// InitOpts allows configuration for the creation or loading of a cgroup +type InitOpts func(*InitConfig) error + +// InitConfig provides configuration options for the creation +// or loading of a cgroup and its subsystems +type InitConfig struct { + // InitCheck can be used to check initialization errors from the subsystem + InitCheck InitCheck +} + +func newInitConfig() *InitConfig { + return &InitConfig{ + InitCheck: RequireDevices, + } +} + +// InitCheck allows subsystems errors to be checked when initialized or loaded +type InitCheck func(Subsystem, Path, error) error + +// AllowAny allows any subsystem errors to be skipped +func AllowAny(_ Subsystem, _ Path, _ error) error { + return ErrIgnoreSubsystem +} + +// RequireDevices requires the device subsystem but no others +func RequireDevices(s Subsystem, _ Path, _ error) error { + if s.Name() == Devices { + return ErrDevicesRequired + } + return ErrIgnoreSubsystem +} diff --git a/cluster-autoscaler/vendor/github.com/containerd/cgroups/paths.go b/cluster-autoscaler/vendor/github.com/containerd/cgroups/paths.go new file mode 100644 index 000000000000..bddc4e9cdcd7 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/containerd/cgroups/paths.go @@ -0,0 +1,106 @@ +/* + Copyright The containerd Authors. + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. 
+ You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. +*/ + +package cgroups + +import ( + "errors" + "fmt" + "path/filepath" +) + +type Path func(subsystem Name) (string, error) + +func RootPath(subsystem Name) (string, error) { + return "/", nil +} + +// StaticPath returns a static path to use for all cgroups +func StaticPath(path string) Path { + return func(_ Name) (string, error) { + return path, nil + } +} + +// NestedPath will nest the cgroups based on the calling processes cgroup +// placing its child processes inside its own path +func NestedPath(suffix string) Path { + paths, err := ParseCgroupFile("/proc/self/cgroup") + if err != nil { + return errorPath(err) + } + return existingPath(paths, suffix) +} + +// PidPath will return the correct cgroup paths for an existing process running inside a cgroup +// This is commonly used for the Load function to restore an existing container +func PidPath(pid int) Path { + p := fmt.Sprintf("/proc/%d/cgroup", pid) + paths, err := ParseCgroupFile(p) + if err != nil { + return errorPath(fmt.Errorf("parse cgroup file %s: %w", p, err)) + } + return existingPath(paths, "") +} + +// ErrControllerNotActive is returned when a controller is not supported or enabled +var ErrControllerNotActive = errors.New("controller is not supported") + +func existingPath(paths map[string]string, suffix string) Path { + // localize the paths based on the root mount dest for nested cgroups + for n, p := range paths { + dest, err := getCgroupDestination(n) + if err != nil { + return errorPath(err) + } + rel, err := filepath.Rel(dest, p) + if err != nil { + return errorPath(err) + } + if rel == "." 
{ + rel = dest + } + paths[n] = filepath.Join("/", rel) + } + return func(name Name) (string, error) { + root, ok := paths[string(name)] + if !ok { + if root, ok = paths["name="+string(name)]; !ok { + return "", ErrControllerNotActive + } + } + if suffix != "" { + return filepath.Join(root, suffix), nil + } + return root, nil + } +} + +func subPath(path Path, subName string) Path { + return func(name Name) (string, error) { + p, err := path(name) + if err != nil { + return "", err + } + return filepath.Join(p, subName), nil + } +} + +func errorPath(err error) Path { + return func(_ Name) (string, error) { + return "", err + } +} diff --git a/cluster-autoscaler/vendor/github.com/containerd/cgroups/perf_event.go b/cluster-autoscaler/vendor/github.com/containerd/cgroups/perf_event.go new file mode 100644 index 000000000000..648786db68f6 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/containerd/cgroups/perf_event.go @@ -0,0 +1,37 @@ +/* + Copyright The containerd Authors. + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. 
+*/ + +package cgroups + +import "path/filepath" + +func NewPerfEvent(root string) *PerfEventController { + return &PerfEventController{ + root: filepath.Join(root, string(PerfEvent)), + } +} + +type PerfEventController struct { + root string +} + +func (p *PerfEventController) Name() Name { + return PerfEvent +} + +func (p *PerfEventController) Path(path string) string { + return filepath.Join(p.root, path) +} diff --git a/cluster-autoscaler/vendor/github.com/containerd/cgroups/pids.go b/cluster-autoscaler/vendor/github.com/containerd/cgroups/pids.go new file mode 100644 index 000000000000..66a1b6b44708 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/containerd/cgroups/pids.go @@ -0,0 +1,85 @@ +/* + Copyright The containerd Authors. + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. 
+*/ + +package cgroups + +import ( + "os" + "path/filepath" + "strconv" + "strings" + + v1 "github.com/containerd/cgroups/stats/v1" + specs "github.com/opencontainers/runtime-spec/specs-go" +) + +func NewPids(root string) *pidsController { + return &pidsController{ + root: filepath.Join(root, string(Pids)), + } +} + +type pidsController struct { + root string +} + +func (p *pidsController) Name() Name { + return Pids +} + +func (p *pidsController) Path(path string) string { + return filepath.Join(p.root, path) +} + +func (p *pidsController) Create(path string, resources *specs.LinuxResources) error { + if err := os.MkdirAll(p.Path(path), defaultDirPerm); err != nil { + return err + } + if resources.Pids != nil && resources.Pids.Limit > 0 { + return retryingWriteFile( + filepath.Join(p.Path(path), "pids.max"), + []byte(strconv.FormatInt(resources.Pids.Limit, 10)), + defaultFilePerm, + ) + } + return nil +} + +func (p *pidsController) Update(path string, resources *specs.LinuxResources) error { + return p.Create(path, resources) +} + +func (p *pidsController) Stat(path string, stats *v1.Metrics) error { + current, err := readUint(filepath.Join(p.Path(path), "pids.current")) + if err != nil { + return err + } + var max uint64 + maxData, err := os.ReadFile(filepath.Join(p.Path(path), "pids.max")) + if err != nil { + return err + } + if maxS := strings.TrimSpace(string(maxData)); maxS != "max" { + if max, err = parseUint(maxS, 10, 64); err != nil { + return err + } + } + stats.Pids = &v1.PidsStat{ + Current: current, + Limit: max, + } + return nil +} diff --git a/cluster-autoscaler/vendor/github.com/containerd/cgroups/rdma.go b/cluster-autoscaler/vendor/github.com/containerd/cgroups/rdma.go new file mode 100644 index 000000000000..9d414203e42e --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/containerd/cgroups/rdma.go @@ -0,0 +1,154 @@ +/* + Copyright The containerd Authors. 
+ + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. +*/ + +package cgroups + +import ( + "math" + "os" + "path/filepath" + "strconv" + "strings" + + v1 "github.com/containerd/cgroups/stats/v1" + specs "github.com/opencontainers/runtime-spec/specs-go" +) + +type rdmaController struct { + root string +} + +func (p *rdmaController) Name() Name { + return Rdma +} + +func (p *rdmaController) Path(path string) string { + return filepath.Join(p.root, path) +} + +func NewRdma(root string) *rdmaController { + return &rdmaController{ + root: filepath.Join(root, string(Rdma)), + } +} + +func createCmdString(device string, limits *specs.LinuxRdma) string { + var cmdString string + + cmdString = device + if limits.HcaHandles != nil { + cmdString = cmdString + " " + "hca_handle=" + strconv.FormatUint(uint64(*limits.HcaHandles), 10) + } + + if limits.HcaObjects != nil { + cmdString = cmdString + " " + "hca_object=" + strconv.FormatUint(uint64(*limits.HcaObjects), 10) + } + return cmdString +} + +func (p *rdmaController) Create(path string, resources *specs.LinuxResources) error { + if err := os.MkdirAll(p.Path(path), defaultDirPerm); err != nil { + return err + } + + for device, limit := range resources.Rdma { + if device != "" && (limit.HcaHandles != nil || limit.HcaObjects != nil) { + limit := limit + return retryingWriteFile( + filepath.Join(p.Path(path), "rdma.max"), + []byte(createCmdString(device, &limit)), + defaultFilePerm, + ) + } + } + return nil +} + +func (p *rdmaController) Update(path 
string, resources *specs.LinuxResources) error { + return p.Create(path, resources) +} + +func parseRdmaKV(raw string, entry *v1.RdmaEntry) { + var value uint64 + var err error + + parts := strings.Split(raw, "=") + switch len(parts) { + case 2: + if parts[1] == "max" { + value = math.MaxUint32 + } else { + value, err = parseUint(parts[1], 10, 32) + if err != nil { + return + } + } + if parts[0] == "hca_handle" { + entry.HcaHandles = uint32(value) + } else if parts[0] == "hca_object" { + entry.HcaObjects = uint32(value) + } + } +} + +func toRdmaEntry(strEntries []string) []*v1.RdmaEntry { + var rdmaEntries []*v1.RdmaEntry + for i := range strEntries { + parts := strings.Fields(strEntries[i]) + switch len(parts) { + case 3: + entry := new(v1.RdmaEntry) + entry.Device = parts[0] + parseRdmaKV(parts[1], entry) + parseRdmaKV(parts[2], entry) + + rdmaEntries = append(rdmaEntries, entry) + default: + continue + } + } + return rdmaEntries +} + +func (p *rdmaController) Stat(path string, stats *v1.Metrics) error { + + currentData, err := os.ReadFile(filepath.Join(p.Path(path), "rdma.current")) + if err != nil { + return err + } + currentPerDevices := strings.Split(string(currentData), "\n") + + maxData, err := os.ReadFile(filepath.Join(p.Path(path), "rdma.max")) + if err != nil { + return err + } + maxPerDevices := strings.Split(string(maxData), "\n") + + // If device got removed between reading two files, ignore returning + // stats. 
+ if len(currentPerDevices) != len(maxPerDevices) { + return nil + } + + currentEntries := toRdmaEntry(currentPerDevices) + maxEntries := toRdmaEntry(maxPerDevices) + + stats.Rdma = &v1.RdmaStat{ + Current: currentEntries, + Limit: maxEntries, + } + return nil +} diff --git a/cluster-autoscaler/vendor/github.com/containerd/cgroups/state.go b/cluster-autoscaler/vendor/github.com/containerd/cgroups/state.go new file mode 100644 index 000000000000..cfeabbbc60b3 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/containerd/cgroups/state.go @@ -0,0 +1,28 @@ +/* + Copyright The containerd Authors. + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. +*/ + +package cgroups + +// State is a type that represents the state of the current cgroup +type State string + +const ( + Unknown State = "" + Thawed State = "thawed" + Frozen State = "frozen" + Freezing State = "freezing" + Deleted State = "deleted" +) diff --git a/cluster-autoscaler/vendor/github.com/containerd/cgroups/subsystem.go b/cluster-autoscaler/vendor/github.com/containerd/cgroups/subsystem.go new file mode 100644 index 000000000000..b2f41854d2c5 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/containerd/cgroups/subsystem.go @@ -0,0 +1,116 @@ +/* + Copyright The containerd Authors. + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. 
+ You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. +*/ + +package cgroups + +import ( + "fmt" + "os" + + v1 "github.com/containerd/cgroups/stats/v1" + specs "github.com/opencontainers/runtime-spec/specs-go" +) + +// Name is a typed name for a cgroup subsystem +type Name string + +const ( + Devices Name = "devices" + Hugetlb Name = "hugetlb" + Freezer Name = "freezer" + Pids Name = "pids" + NetCLS Name = "net_cls" + NetPrio Name = "net_prio" + PerfEvent Name = "perf_event" + Cpuset Name = "cpuset" + Cpu Name = "cpu" + Cpuacct Name = "cpuacct" + Memory Name = "memory" + Blkio Name = "blkio" + Rdma Name = "rdma" +) + +// Subsystems returns a complete list of the default cgroups +// available on most linux systems +func Subsystems() []Name { + n := []Name{ + Freezer, + Pids, + NetCLS, + NetPrio, + PerfEvent, + Cpuset, + Cpu, + Cpuacct, + Memory, + Blkio, + Rdma, + } + if !RunningInUserNS() { + n = append(n, Devices) + } + if _, err := os.Stat("/sys/kernel/mm/hugepages"); err == nil { + n = append(n, Hugetlb) + } + return n +} + +type Subsystem interface { + Name() Name +} + +type pather interface { + Subsystem + Path(path string) string +} + +type creator interface { + Subsystem + Create(path string, resources *specs.LinuxResources) error +} + +type deleter interface { + Subsystem + Delete(path string) error +} + +type stater interface { + Subsystem + Stat(path string, stats *v1.Metrics) error +} + +type updater interface { + Subsystem + Update(path string, resources *specs.LinuxResources) error +} + +// SingleSubsystem returns a single cgroup subsystem within the base Hierarchy +func SingleSubsystem(baseHierarchy 
Hierarchy, subsystem Name) Hierarchy { + return func() ([]Subsystem, error) { + subsystems, err := baseHierarchy() + if err != nil { + return nil, err + } + for _, s := range subsystems { + if s.Name() == subsystem { + return []Subsystem{ + s, + }, nil + } + } + return nil, fmt.Errorf("unable to find subsystem %s", subsystem) + } +} diff --git a/cluster-autoscaler/vendor/github.com/containerd/cgroups/systemd.go b/cluster-autoscaler/vendor/github.com/containerd/cgroups/systemd.go new file mode 100644 index 000000000000..4da57cb4b00b --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/containerd/cgroups/systemd.go @@ -0,0 +1,158 @@ +/* + Copyright The containerd Authors. + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. 
+*/ + +package cgroups + +import ( + "context" + "path/filepath" + "strings" + "sync" + + systemdDbus "github.com/coreos/go-systemd/v22/dbus" + "github.com/godbus/dbus/v5" + specs "github.com/opencontainers/runtime-spec/specs-go" +) + +const ( + SystemdDbus Name = "systemd" + defaultSlice = "system.slice" +) + +var ( + canDelegate bool + once sync.Once +) + +func Systemd() ([]Subsystem, error) { + root, err := v1MountPoint() + if err != nil { + return nil, err + } + defaultSubsystems, err := defaults(root) + if err != nil { + return nil, err + } + s, err := NewSystemd(root) + if err != nil { + return nil, err + } + // make sure the systemd controller is added first + return append([]Subsystem{s}, defaultSubsystems...), nil +} + +func Slice(slice, name string) Path { + if slice == "" { + slice = defaultSlice + } + return func(subsystem Name) (string, error) { + return filepath.Join(slice, name), nil + } +} + +func NewSystemd(root string) (*SystemdController, error) { + return &SystemdController{ + root: root, + }, nil +} + +type SystemdController struct { + mu sync.Mutex + root string +} + +func (s *SystemdController) Name() Name { + return SystemdDbus +} + +func (s *SystemdController) Create(path string, _ *specs.LinuxResources) error { + ctx := context.TODO() + conn, err := systemdDbus.NewWithContext(ctx) + if err != nil { + return err + } + defer conn.Close() + slice, name := splitName(path) + // We need to see if systemd can handle the delegate property + // Systemd will return an error if it cannot handle delegate regardless + // of its bool setting. + checkDelegate := func() { + canDelegate = true + dlSlice := newProperty("Delegate", true) + if _, err := conn.StartTransientUnitContext(ctx, slice, "testdelegate", []systemdDbus.Property{dlSlice}, nil); err != nil { + if dbusError, ok := err.(dbus.Error); ok { + // Starting with systemd v237, Delegate is not even a property of slices anymore, + // so the D-Bus call fails with "InvalidArgs" error. 
+ if strings.Contains(dbusError.Name, "org.freedesktop.DBus.Error.PropertyReadOnly") || strings.Contains(dbusError.Name, "org.freedesktop.DBus.Error.InvalidArgs") { + canDelegate = false + } + } + } + + _, _ = conn.StopUnitContext(ctx, slice, "testDelegate", nil) + } + once.Do(checkDelegate) + properties := []systemdDbus.Property{ + systemdDbus.PropDescription("cgroup " + name), + systemdDbus.PropWants(slice), + newProperty("DefaultDependencies", false), + newProperty("MemoryAccounting", true), + newProperty("CPUAccounting", true), + newProperty("BlockIOAccounting", true), + } + + // If we can delegate, we add the property back in + if canDelegate { + properties = append(properties, newProperty("Delegate", true)) + } + + ch := make(chan string) + _, err = conn.StartTransientUnitContext(ctx, name, "replace", properties, ch) + if err != nil { + return err + } + <-ch + return nil +} + +func (s *SystemdController) Delete(path string) error { + ctx := context.TODO() + conn, err := systemdDbus.NewWithContext(ctx) + if err != nil { + return err + } + defer conn.Close() + _, name := splitName(path) + ch := make(chan string) + _, err = conn.StopUnitContext(ctx, name, "replace", ch) + if err != nil { + return err + } + <-ch + return nil +} + +func newProperty(name string, units interface{}) systemdDbus.Property { + return systemdDbus.Property{ + Name: name, + Value: dbus.MakeVariant(units), + } +} + +func splitName(path string) (slice string, unit string) { + slice, unit = filepath.Split(path) + return strings.TrimSuffix(slice, "/"), unit +} diff --git a/cluster-autoscaler/vendor/github.com/containerd/cgroups/ticks.go b/cluster-autoscaler/vendor/github.com/containerd/cgroups/ticks.go new file mode 100644 index 000000000000..84dc38d0cc33 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/containerd/cgroups/ticks.go @@ -0,0 +1,26 @@ +/* + Copyright The containerd Authors. 
+ + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. +*/ + +package cgroups + +func getClockTicks() uint64 { + // The value comes from `C.sysconf(C._SC_CLK_TCK)`, and + // on Linux it's a constant which is safe to be hard coded, + // so we can avoid using cgo here. + // See https://github.com/containerd/cgroups/pull/12 for + // more details. + return 100 +} diff --git a/cluster-autoscaler/vendor/github.com/containerd/cgroups/utils.go b/cluster-autoscaler/vendor/github.com/containerd/cgroups/utils.go new file mode 100644 index 000000000000..c17a3a41423a --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/containerd/cgroups/utils.go @@ -0,0 +1,391 @@ +/* + Copyright The containerd Authors. + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. 
+*/ + +package cgroups + +import ( + "bufio" + "errors" + "fmt" + "io" + "os" + "path/filepath" + "strconv" + "strings" + "sync" + "syscall" + "time" + + units "github.com/docker/go-units" + specs "github.com/opencontainers/runtime-spec/specs-go" + "golang.org/x/sys/unix" +) + +var ( + nsOnce sync.Once + inUserNS bool + checkMode sync.Once + cgMode CGMode +) + +const unifiedMountpoint = "/sys/fs/cgroup" + +// CGMode is the cgroups mode of the host system +type CGMode int + +const ( + // Unavailable cgroup mountpoint + Unavailable CGMode = iota + // Legacy cgroups v1 + Legacy + // Hybrid with cgroups v1 and v2 controllers mounted + Hybrid + // Unified with only cgroups v2 mounted + Unified +) + +// Mode returns the cgroups mode running on the host +func Mode() CGMode { + checkMode.Do(func() { + var st unix.Statfs_t + if err := unix.Statfs(unifiedMountpoint, &st); err != nil { + cgMode = Unavailable + return + } + switch st.Type { + case unix.CGROUP2_SUPER_MAGIC: + cgMode = Unified + default: + cgMode = Legacy + if err := unix.Statfs(filepath.Join(unifiedMountpoint, "unified"), &st); err != nil { + return + } + if st.Type == unix.CGROUP2_SUPER_MAGIC { + cgMode = Hybrid + } + } + }) + return cgMode +} + +// RunningInUserNS detects whether we are currently running in a user namespace. +// Copied from github.com/lxc/lxd/shared/util.go +func RunningInUserNS() bool { + nsOnce.Do(func() { + file, err := os.Open("/proc/self/uid_map") + if err != nil { + // This kernel-provided file only exists if user namespaces are supported + return + } + defer file.Close() + + buf := bufio.NewReader(file) + l, _, err := buf.ReadLine() + if err != nil { + return + } + + line := string(l) + var a, b, c int64 + fmt.Sscanf(line, "%d %d %d", &a, &b, &c) + + /* + * We assume we are in the initial user namespace if we have a full + * range - 4294967295 uids starting at uid 0. 
+ */ + if a == 0 && b == 0 && c == 4294967295 { + return + } + inUserNS = true + }) + return inUserNS +} + +// defaults returns all known groups +func defaults(root string) ([]Subsystem, error) { + h, err := NewHugetlb(root) + if err != nil && !os.IsNotExist(err) { + return nil, err + } + s := []Subsystem{ + NewNamed(root, "systemd"), + NewFreezer(root), + NewPids(root), + NewNetCls(root), + NewNetPrio(root), + NewPerfEvent(root), + NewCpuset(root), + NewCpu(root), + NewCpuacct(root), + NewMemory(root), + NewBlkio(root), + NewRdma(root), + } + // only add the devices cgroup if we are not in a user namespace + // because modifications are not allowed + if !RunningInUserNS() { + s = append(s, NewDevices(root)) + } + // add the hugetlb cgroup if error wasn't due to missing hugetlb + // cgroup support on the host + if err == nil { + s = append(s, h) + } + return s, nil +} + +// remove will remove a cgroup path handling EAGAIN and EBUSY errors and +// retrying the remove after a exp timeout +func remove(path string) error { + delay := 10 * time.Millisecond + for i := 0; i < 5; i++ { + if i != 0 { + time.Sleep(delay) + delay *= 2 + } + if err := os.RemoveAll(path); err == nil { + return nil + } + } + return fmt.Errorf("cgroups: unable to remove path %q", path) +} + +// readPids will read all the pids of processes or tasks in a cgroup by the provided path +func readPids(path string, subsystem Name, pType procType) ([]Process, error) { + f, err := os.Open(filepath.Join(path, pType)) + if err != nil { + return nil, err + } + defer f.Close() + var ( + out []Process + s = bufio.NewScanner(f) + ) + for s.Scan() { + if t := s.Text(); t != "" { + pid, err := strconv.Atoi(t) + if err != nil { + return nil, err + } + out = append(out, Process{ + Pid: pid, + Subsystem: subsystem, + Path: path, + }) + } + } + if err := s.Err(); err != nil { + // failed to read all pids? 
+ return nil, err + } + return out, nil +} + +func hugePageSizes() ([]string, error) { + var ( + pageSizes []string + sizeList = []string{"B", "KB", "MB", "GB", "TB", "PB"} + ) + files, err := os.ReadDir("/sys/kernel/mm/hugepages") + if err != nil { + return nil, err + } + for _, st := range files { + nameArray := strings.Split(st.Name(), "-") + pageSize, err := units.RAMInBytes(nameArray[1]) + if err != nil { + return nil, err + } + pageSizes = append(pageSizes, units.CustomSize("%g%s", float64(pageSize), 1024.0, sizeList)) + } + return pageSizes, nil +} + +func readUint(path string) (uint64, error) { + v, err := os.ReadFile(path) + if err != nil { + return 0, err + } + return parseUint(strings.TrimSpace(string(v)), 10, 64) +} + +func parseUint(s string, base, bitSize int) (uint64, error) { + v, err := strconv.ParseUint(s, base, bitSize) + if err != nil { + intValue, intErr := strconv.ParseInt(s, base, bitSize) + // 1. Handle negative values greater than MinInt64 (and) + // 2. Handle negative values lesser than MinInt64 + if intErr == nil && intValue < 0 { + return 0, nil + } else if intErr != nil && + intErr.(*strconv.NumError).Err == strconv.ErrRange && + intValue < 0 { + return 0, nil + } + return 0, err + } + return v, nil +} + +func parseKV(raw string) (string, uint64, error) { + parts := strings.Fields(raw) + switch len(parts) { + case 2: + v, err := parseUint(parts[1], 10, 64) + if err != nil { + return "", 0, err + } + return parts[0], v, nil + default: + return "", 0, ErrInvalidFormat + } +} + +// ParseCgroupFile parses the given cgroup file, typically /proc/self/cgroup +// or /proc//cgroup, into a map of subsystems to cgroup paths, e.g. +// "cpu": "/user.slice/user-1000.slice" +// "pids": "/user.slice/user-1000.slice" +// etc. +// +// The resulting map does not have an element for cgroup v2 unified hierarchy. +// Use ParseCgroupFileUnified to get the unified path. 
+func ParseCgroupFile(path string) (map[string]string, error) { + x, _, err := ParseCgroupFileUnified(path) + return x, err +} + +// ParseCgroupFileUnified returns legacy subsystem paths as the first value, +// and returns the unified path as the second value. +func ParseCgroupFileUnified(path string) (map[string]string, string, error) { + f, err := os.Open(path) + if err != nil { + return nil, "", err + } + defer f.Close() + return parseCgroupFromReaderUnified(f) +} + +func parseCgroupFromReaderUnified(r io.Reader) (map[string]string, string, error) { + var ( + cgroups = make(map[string]string) + unified = "" + s = bufio.NewScanner(r) + ) + for s.Scan() { + var ( + text = s.Text() + parts = strings.SplitN(text, ":", 3) + ) + if len(parts) < 3 { + return nil, unified, fmt.Errorf("invalid cgroup entry: %q", text) + } + for _, subs := range strings.Split(parts[1], ",") { + if subs == "" { + unified = parts[2] + } else { + cgroups[subs] = parts[2] + } + } + } + if err := s.Err(); err != nil { + return nil, unified, err + } + return cgroups, unified, nil +} + +func getCgroupDestination(subsystem string) (string, error) { + f, err := os.Open("/proc/self/mountinfo") + if err != nil { + return "", err + } + defer f.Close() + s := bufio.NewScanner(f) + for s.Scan() { + fields := strings.Split(s.Text(), " ") + if len(fields) < 10 { + // broken mountinfo? 
+ continue + } + if fields[len(fields)-3] != "cgroup" { + continue + } + for _, opt := range strings.Split(fields[len(fields)-1], ",") { + if opt == subsystem { + return fields[3], nil + } + } + } + if err := s.Err(); err != nil { + return "", err + } + return "", ErrNoCgroupMountDestination +} + +func pathers(subystems []Subsystem) []pather { + var out []pather + for _, s := range subystems { + if p, ok := s.(pather); ok { + out = append(out, p) + } + } + return out +} + +func initializeSubsystem(s Subsystem, path Path, resources *specs.LinuxResources) error { + if c, ok := s.(creator); ok { + p, err := path(s.Name()) + if err != nil { + return err + } + if err := c.Create(p, resources); err != nil { + return err + } + } else if c, ok := s.(pather); ok { + p, err := path(s.Name()) + if err != nil { + return err + } + // do the default create if the group does not have a custom one + if err := os.MkdirAll(c.Path(p), defaultDirPerm); err != nil { + return err + } + } + return nil +} + +func cleanPath(path string) string { + if path == "" { + return "" + } + path = filepath.Clean(path) + if !filepath.IsAbs(path) { + path, _ = filepath.Rel(string(os.PathSeparator), filepath.Clean(string(os.PathSeparator)+path)) + } + return path +} + +func retryingWriteFile(path string, data []byte, mode os.FileMode) error { + // Retry writes on EINTR; see: + // https://github.com/golang/go/issues/38033 + for { + err := os.WriteFile(path, data, mode) + if err == nil { + return nil + } else if !errors.Is(err, syscall.EINTR) { + return err + } + } +} diff --git a/cluster-autoscaler/vendor/github.com/containerd/cgroups/v1.go b/cluster-autoscaler/vendor/github.com/containerd/cgroups/v1.go new file mode 100644 index 000000000000..2ec215c06f42 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/containerd/cgroups/v1.go @@ -0,0 +1,73 @@ +/* + Copyright The containerd Authors. 
+ + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. +*/ + +package cgroups + +import ( + "bufio" + "fmt" + "os" + "path/filepath" + "strings" +) + +// V1 returns all the groups in the default cgroups mountpoint in a single hierarchy +func V1() ([]Subsystem, error) { + root, err := v1MountPoint() + if err != nil { + return nil, err + } + subsystems, err := defaults(root) + if err != nil { + return nil, err + } + var enabled []Subsystem + for _, s := range pathers(subsystems) { + // check and remove the default groups that do not exist + if _, err := os.Lstat(s.Path("/")); err == nil { + enabled = append(enabled, s) + } + } + return enabled, nil +} + +// v1MountPoint returns the mount point where the cgroup +// mountpoints are mounted in a single hierarchy +func v1MountPoint() (string, error) { + f, err := os.Open("/proc/self/mountinfo") + if err != nil { + return "", err + } + defer f.Close() + scanner := bufio.NewScanner(f) + for scanner.Scan() { + var ( + text = scanner.Text() + fields = strings.Split(text, " ") + numFields = len(fields) + ) + if numFields < 10 { + return "", fmt.Errorf("mountinfo: bad entry %q", text) + } + if fields[numFields-3] == "cgroup" { + return filepath.Dir(fields[4]), nil + } + } + if err := scanner.Err(); err != nil { + return "", err + } + return "", ErrMountPointNotExist +} diff --git a/cluster-autoscaler/vendor/github.com/containerd/ttrpc/.gitattributes b/cluster-autoscaler/vendor/github.com/containerd/ttrpc/.gitattributes new file mode 100644 index
000000000000..d207b1802b20 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/containerd/ttrpc/.gitattributes @@ -0,0 +1 @@ +*.go text eol=lf diff --git a/cluster-autoscaler/vendor/github.com/containerd/ttrpc/.gitignore b/cluster-autoscaler/vendor/github.com/containerd/ttrpc/.gitignore index ea58090bd21e..88ceb2764bd1 100644 --- a/cluster-autoscaler/vendor/github.com/containerd/ttrpc/.gitignore +++ b/cluster-autoscaler/vendor/github.com/containerd/ttrpc/.gitignore @@ -1,4 +1,5 @@ # Binaries for programs and plugins +/bin/ *.exe *.dll *.so @@ -9,3 +10,4 @@ # Output of the go coverage tool, specifically when used with LiteIDE *.out +coverage.txt diff --git a/cluster-autoscaler/vendor/github.com/containerd/ttrpc/.golangci.yml b/cluster-autoscaler/vendor/github.com/containerd/ttrpc/.golangci.yml new file mode 100644 index 000000000000..6462e52f66ff --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/containerd/ttrpc/.golangci.yml @@ -0,0 +1,52 @@ +linters: + enable: + - staticcheck + - unconvert + - gofmt + - goimports + - revive + - ineffassign + - vet + - unused + - misspell + disable: + - errcheck + +linters-settings: + revive: + ignore-generated-headers: true + rules: + - name: blank-imports + - name: context-as-argument + - name: context-keys-type + - name: dot-imports + - name: error-return + - name: error-strings + - name: error-naming + - name: exported + - name: if-return + - name: increment-decrement + - name: var-naming + arguments: [["UID", "GID"], []] + - name: var-declaration + - name: package-comments + - name: range + - name: receiver-naming + - name: time-naming + - name: unexported-return + - name: indent-error-flow + - name: errorf + - name: empty-block + - name: superfluous-else + - name: unused-parameter + - name: unreachable-code + - name: redefines-builtin-id + +issues: + include: + - EXC0002 + +run: + timeout: 8m + skip-dirs: + - example diff --git a/cluster-autoscaler/vendor/github.com/containerd/ttrpc/Makefile 
b/cluster-autoscaler/vendor/github.com/containerd/ttrpc/Makefile new file mode 100644 index 000000000000..c3a497dcac01 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/containerd/ttrpc/Makefile @@ -0,0 +1,180 @@ +# Copyright The containerd Authors. + +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at + +# http://www.apache.org/licenses/LICENSE-2.0 + +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + + +# Go command to use for build +GO ?= go +INSTALL ?= install + +# Root directory of the project (absolute path). +ROOTDIR=$(dir $(abspath $(lastword $(MAKEFILE_LIST)))) + +WHALE = "🇩" +ONI = "👹" + +# Project binaries. +COMMANDS=protoc-gen-go-ttrpc protoc-gen-gogottrpc + +ifdef BUILDTAGS + GO_BUILDTAGS = ${BUILDTAGS} +endif +GO_BUILDTAGS ?= +GO_TAGS=$(if $(GO_BUILDTAGS),-tags "$(strip $(GO_BUILDTAGS))",) + +# Project packages. +PACKAGES=$(shell $(GO) list ${GO_TAGS} ./... | grep -v /example) +TESTPACKAGES=$(shell $(GO) list ${GO_TAGS} ./... 
| grep -v /cmd | grep -v /integration | grep -v /example) +BINPACKAGES=$(addprefix ./cmd/,$(COMMANDS)) + +#Replaces ":" (*nix), ";" (windows) with newline for easy parsing +GOPATHS=$(shell echo ${GOPATH} | tr ":" "\n" | tr ";" "\n") + +TESTFLAGS_RACE= +GO_BUILD_FLAGS= +# See Golang issue re: '-trimpath': https://github.com/golang/go/issues/13809 +GO_GCFLAGS=$(shell \ + set -- ${GOPATHS}; \ + echo "-gcflags=-trimpath=$${1}/src"; \ + ) + +BINARIES=$(addprefix bin/,$(COMMANDS)) + +# Flags passed to `go test` +TESTFLAGS ?= $(TESTFLAGS_RACE) $(EXTRA_TESTFLAGS) +TESTFLAGS_PARALLEL ?= 8 + +# Use this to replace `go test` with, for instance, `gotestsum` +GOTEST ?= $(GO) test + +.PHONY: clean all AUTHORS build binaries test integration generate protos check-protos coverage ci check help install vendor install-protobuf install-protobuild +.DEFAULT: default + +# Forcibly set the default goal to all, in case an include above brought in a rule definition. +.DEFAULT_GOAL := all + +all: binaries + +check: proto-fmt ## run all linters + @echo "$(WHALE) $@" + GOGC=75 golangci-lint run + +ci: check binaries check-protos coverage # coverage-integration ## to be used by the CI + +AUTHORS: .mailmap .git/HEAD + git log --format='%aN <%aE>' | sort -fu > $@ + +generate: protos + @echo "$(WHALE) $@" + @PATH="${ROOTDIR}/bin:${PATH}" $(GO) generate -x ${PACKAGES} + +protos: bin/protoc-gen-gogottrpc bin/protoc-gen-go-ttrpc ## generate protobuf + @echo "$(WHALE) $@" + @(PATH="${ROOTDIR}/bin:${PATH}" protobuild --quiet ${PACKAGES}) + +check-protos: protos ## check if protobufs need to be generated again + @echo "$(WHALE) $@" + @test -z "$$(git status --short | grep ".pb.go" | tee /dev/stderr)" || \ + ((git diff | cat) && \ + (echo "$(ONI) please run 'make protos' when making changes to proto files" && false)) + +check-api-descriptors: protos ## check that protobuf changes aren't present.
+ @echo "$(WHALE) $@" + @test -z "$$(git status --short | grep ".pb.txt" | tee /dev/stderr)" || \ + ((git diff $$(find . -name '*.pb.txt') | cat) && \ + (echo "$(ONI) please run 'make protos' when making changes to proto files and check-in the generated descriptor file changes" && false)) + +proto-fmt: ## check format of proto files + @echo "$(WHALE) $@" + @test -z "$$(find . -name '*.proto' -type f -exec grep -Hn -e "^ " {} \; | tee /dev/stderr)" || \ + (echo "$(ONI) please indent proto files with tabs only" && false) + @test -z "$$(find . -name '*.proto' -type f -exec grep -Hn "Meta meta = " {} \; | grep -v '(gogoproto.nullable) = false' | tee /dev/stderr)" || \ + (echo "$(ONI) meta fields in proto files must have option (gogoproto.nullable) = false" && false) + +build: ## build the go packages + @echo "$(WHALE) $@" + @$(GO) build ${DEBUG_GO_GCFLAGS} ${GO_GCFLAGS} ${GO_BUILD_FLAGS} ${EXTRA_FLAGS} ${PACKAGES} + +test: ## run tests, except integration tests and tests that require root + @echo "$(WHALE) $@" + @$(GOTEST) ${TESTFLAGS} ${TESTPACKAGES} + +integration: ## run integration tests + @echo "$(WHALE) $@" + @cd "${ROOTDIR}/integration" && $(GOTEST) -v ${TESTFLAGS} -parallel ${TESTFLAGS_PARALLEL} . + +benchmark: ## run benchmark tests + @echo "$(WHALE) $@" + @$(GO) test ${TESTFLAGS} -bench . -run Benchmark + +FORCE: + +define BUILD_BINARY +@echo "$(WHALE) $@" +@$(GO) build ${DEBUG_GO_GCFLAGS} ${GO_GCFLAGS} ${GO_BUILD_FLAGS} -o $@ ${GO_TAGS} ./$< +endef + +# Build a binary from a cmd.
+bin/%: cmd/% FORCE + $(call BUILD_BINARY) + +binaries: $(BINARIES) ## build binaries + @echo "$(WHALE) $@" + +clean: ## clean up binaries + @echo "$(WHALE) $@" + @rm -f $(BINARIES) + +install: ## install binaries + @echo "$(WHALE) $@ $(BINPACKAGES)" + @$(GO) install $(BINPACKAGES) + +install-protobuf: + @echo "$(WHALE) $@" + @script/install-protobuf + +install-protobuild: + @echo "$(WHALE) $@" + @$(GO) install google.golang.org/protobuf/cmd/protoc-gen-go@v1.28.1 + @$(GO) install github.com/containerd/protobuild@14832ccc41429f5c4f81028e5af08aa233a219cf + +coverage: ## generate coverprofiles from the unit tests, except tests that require root + @echo "$(WHALE) $@" + @rm -f coverage.txt + @$(GO) test ${TESTFLAGS} ${TESTPACKAGES} 2> /dev/null + @( for pkg in ${PACKAGES}; do \ + $(GO) test ${TESTFLAGS} \ + -cover \ + -coverprofile=profile.out \ + -covermode=atomic $$pkg || exit; \ + if [ -f profile.out ]; then \ + cat profile.out >> coverage.txt; \ + rm profile.out; \ + fi; \ + done ) + +vendor: ## ensure all the go.mod/go.sum files are up-to-date + @echo "$(WHALE) $@" + @$(GO) mod tidy + @$(GO) mod verify + +verify-vendor: ## verify if all the go.mod/go.sum files are up-to-date + @echo "$(WHALE) $@" + @$(GO) mod tidy + @$(GO) mod verify + @test -z "$$(git status --short | grep "go.sum" | tee /dev/stderr)" || \ + ((git diff | cat) && \ + (echo "$(ONI) make sure to checkin changes after go mod tidy" && false)) + +help: ## this help + @awk 'BEGIN {FS = ":.*?## "} /^[a-zA-Z_-]+:.*?## / {printf "\033[36m%-30s\033[0m %s\n", $$1, $$2}' $(MAKEFILE_LIST) | sort diff --git a/cluster-autoscaler/vendor/github.com/containerd/ttrpc/PROTOCOL.md b/cluster-autoscaler/vendor/github.com/containerd/ttrpc/PROTOCOL.md new file mode 100644 index 000000000000..12b43f6bd6e3 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/containerd/ttrpc/PROTOCOL.md @@ -0,0 +1,240 @@ +# Protocol Specification + +The ttrpc protocol is client/server protocol to support multiple request streams +over a 
single connection with lightweight framing. The client represents the +process which initiated the underlying connection and the server is the process +which accepted the connection. The protocol is currently defined as +asymmetrical, with clients sending requests and servers sending responses. Both +clients and servers are able to send stream data. The roles are also used in +determining the stream identifiers, with client initiated streams using odd +number identifiers and server initiated using even numbers. The protocol may be +extended in the future to support server initiated streams, but that is not +supported in the latest version. + +## Purpose + +The ttrpc protocol is designed to be lightweight and optimized for low latency +and reliable connections between processes on the same host. The protocol does +not include features for handling unreliable connections such as handshakes, +resets, pings, or flow control. The protocol is designed to make low-overhead +implementations as simple as possible. It is not intended as a suitable +replacement for HTTP2/3 over the network. + +## Message Frame + +Each Message Frame consists of a 10-byte message header followed +by message data. The data length and stream ID are both big-endian +4-byte unsigned integers. The message type is an unsigned 1-byte +integer. The flags are also an unsigned 1-byte integer whose +use is defined by the message type. + + +---------------------------------------------------------------+ + | Data Length (32) | + +---------------------------------------------------------------+ + | Stream ID (32) | + +---------------+-----------------------------------------------+ + | Msg Type (8) | + +---------------+ + | Flags (8) | + +---------------+-----------------------------------------------+ + | Data (*) | + +---------------------------------------------------------------+ + +The Data Length field represents the number of bytes in the Data field.
The +total frame size will always be Data Length + 10 bytes. The maximum data length +is 4MB and any larger size should be rejected. Due to the maximum data size +being less than 16MB, the first frame byte should always be zero. This first +byte should be considered reserved for future use. + +The Stream ID must be odd for client initiated streams and even for server +initiated streams. Server initiated streams are not currently supported. + +## Message Types + +| Message Type | Name | Description | +|--------------|----------|----------------------------------| +| 0x01 | Request | Initiates stream | +| 0x02 | Response | Final stream data and terminates | +| 0x03 | Data | Stream data | + +### Request + +The request message is used to initiate a stream and send along request data for +properly routing and handling the stream. The stream may indicate unary without +any inbound or outbound stream data, with only a response expected on the +stream. The request may also indicate the stream is still open for more data and +no response is expected until data is finished. If the remote indicates the +stream is closed, the request may be considered non-unary but without any more +stream data sent. In the case of `remote closed`, the remote still expects to +receive a response or stream data. For compatibility with non-streaming clients, +a request with empty flags indicates a unary request. + +#### Request Flags + +| Flag | Name | Description | +|------|-----------------|--------------------------------------------------| +| 0x01 | `remote closed` | Non-unary, but no more data expected from remote | +| 0x02 | `remote open` | Non-unary, remote is still sending data | + +### Response + +The response message is used to end a stream with data, an empty response, or +an error. A response message is the only expected message after a unary request. +A non-unary request does not require a response message if the server is sending +back stream data.
A non-unary stream may return a single response message but no +other stream data may follow. + +#### Response Flags + +No response flags are defined at this time; flags should be empty. + +### Data + +The data message is used to send data on an already initialized stream. Either +client or server may send data. A data message is not allowed on a unary stream. +A data message should not be sent after indicating `remote closed` to the peer. +The last data message on a stream must set the `remote closed` flag. + +The `no data` flag is used to indicate that the data message does not include +any data. This is normally used with the `remote closed` flag to indicate the +stream is now closed without transmitting any data. Since ttrpc normally +transmits a single object per message, a zero length data message may be +interpreted as an empty object. For example, transmitting the number zero as a +protobuf message ends up with a data length of zero, but the message is still +considered data and should be processed. + +#### Data Flags + +| Flag | Name | Description | +|------|-----------------|-----------------------------------| +| 0x01 | `remote closed` | No more data expected from remote | +| 0x04 | `no data` | This message does not have data | + +## Streaming + +All ttrpc requests use streams to transfer data. Unary streams will only have +two messages sent per stream, a request from a client and a response from the +server. Non-unary streams, however, may send any number of messages from the +client and the server. This makes stream management more complicated than unary +streams since both client and server need to track additional state. To keep +this management as simple as possible, ttrpc minimizes the number of states and +uses two flags instead of control frames. Each stream has two states while a +stream is still alive: `local closed` and `remote closed`.
Each peer considers +local and remote from their own perspective and sets flags from the other peer's +perspective. For example, if a client sends a data frame with the +`remote closed` flag, that is indicating that the client is now `local closed` +and the server will be `remote closed`. A unary operation does not need to send +these flags since each received message always indicates `remote closed`. Once a +peer is both `local closed` and `remote closed`, the stream is considered +finished and may be cleaned up. + +Due to the asymmetric nature of the current protocol, a client should +always be in the `local closed` state before `remote closed` and a server should +always be in the `remote closed` state before `local closed`. This happens +because the client is always initiating requests and a client always expects a +final response back from a server to indicate the initiated request has been +fulfilled. This may mean the server sends a final empty response to finish a stream +even after it has already completed sending data before the client.
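The 10-byte frame header described in the Message Frame section is simple enough to sketch directly. The following is an illustrative Go encoder/decoder derived only from the layout in this specification (big-endian 4-byte data length and stream ID, 1-byte type, 1-byte flags); the `frameHeader`, `encodeHeader`, and `decodeHeader` names are hypothetical and are not the ttrpc library's actual API:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// frameHeader mirrors the 10-byte layout in the spec: a 4-byte
// big-endian data length, a 4-byte big-endian stream ID, a 1-byte
// message type, and a 1-byte flags field.
type frameHeader struct {
	Length   uint32
	StreamID uint32
	Type     uint8
	Flags    uint8
}

// encodeHeader serializes the header into its 10-byte wire form.
func encodeHeader(h frameHeader) [10]byte {
	var b [10]byte
	binary.BigEndian.PutUint32(b[0:4], h.Length)
	binary.BigEndian.PutUint32(b[4:8], h.StreamID)
	b[8] = h.Type
	b[9] = h.Flags
	return b
}

// decodeHeader parses the 10-byte wire form back into a header.
func decodeHeader(b [10]byte) frameHeader {
	return frameHeader{
		Length:   binary.BigEndian.Uint32(b[0:4]),
		StreamID: binary.BigEndian.Uint32(b[4:8]),
		Type:     b[8],
		Flags:    b[9],
	}
}

func main() {
	// A client-initiated (odd stream ID) unary request: type 0x01,
	// empty flags. Since the maximum data length is 4MB, well under
	// 16MB, the first encoded byte is always zero.
	h := frameHeader{Length: 512, StreamID: 1, Type: 0x01, Flags: 0}
	b := encodeHeader(h)
	fmt.Println(b[0] == 0, decodeHeader(b) == h) // prints: true true
}
```

The round trip relies only on the field widths and byte order stated above, which is why the spec can reserve the first byte for future use without changing the frame size.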
+ +### Unary State Diagram + + +--------+ +--------+ + | Client | | Server | + +---+----+ +----+---+ + | +---------+ | + local >---------------+ Request +--------------------> remote + closed | +---------+ | closed + | | + | +----------+ | + finished <--------------+ Response +--------------------< finished + | +----------+ | + | | + +### Non-Unary State Diagrams + +RC: `remote closed` flag +RO: `remote open` flag + + +--------+ +--------+ + | Client | | Server | + +---+----+ +----+---+ + | +--------------+ | + >-------------+ Request [RO] +-----------------> + | +--------------+ | + | | + | +------+ | + >-----------------+ Data +---------------------> + | +------+ | + | | + | +-----------+ | + local >---------------+ Data [RC] +------------------> remote + closed | +-----------+ | closed + | | + | +----------+ | + finished <--------------+ Response +--------------------< finished + | +----------+ | + | | + + +--------+ +--------+ + | Client | | Server | + +---+----+ +----+---+ + | +--------------+ | + local >-------------+ Request [RC] +-----------------> remote + closed | +--------------+ | closed + | | + | +------+ | + <-----------------+ Data +---------------------< + | +------+ | + | | + | +-----------+ | + finished <---------------+ Data [RC] +------------------< finished + | +-----------+ | + | | + + +--------+ +--------+ + | Client | | Server | + +---+----+ +----+---+ + | +--------------+ | + >-------------+ Request [RO] +-----------------> + | +--------------+ | + | | + | +------+ | + >-----------------+ Data +---------------------> + | +------+ | + | | + | +------+ | + <-----------------+ Data +---------------------< + | +------+ | + | | + | +------+ | + >-----------------+ Data +---------------------> + | +------+ | + | | + | +-----------+ | + local >---------------+ Data [RC] +------------------> remote + closed | +-----------+ | closed + | | + | +------+ | + <-----------------+ Data +---------------------< + | +------+ | + | | + | +-----------+ | + 
finished <---------------+ Data [RC] +------------------< finished + | +-----------+ | + | | + +## RPC + +While this protocol is defined primarily to support Remote Procedure Calls, the +protocol does not define the request and response types beyond the messages +defined in the protocol. The implementation provides a default protobuf +definition of request and response which may be used for cross language rpc. +All implementations should at least define a request type which support +routing by procedure name and a response type which supports call status. + +## Version History + +| Version | Features | +|---------|---------------------| +| 1.0 | Unary requests only | +| 1.2 | Streaming support | diff --git a/cluster-autoscaler/vendor/github.com/containerd/ttrpc/Protobuild.toml b/cluster-autoscaler/vendor/github.com/containerd/ttrpc/Protobuild.toml new file mode 100644 index 000000000000..0f6ccbd1e81a --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/containerd/ttrpc/Protobuild.toml @@ -0,0 +1,28 @@ +version = "2" +generators = ["go"] + +# Control protoc include paths. Below are usually some good defaults, but feel +# free to try it without them if it works for your project. +[includes] + # Include paths that will be added before all others. Typically, you want to + # treat the root of the project as an include, but this may not be necessary. + before = ["."] + + # Paths that will be added untouched to the end of the includes. We use + # `/usr/local/include` to pickup the common install location of protobuf. + # This is the default. + after = ["/usr/local/include"] + +# This section maps protobuf imports to Go packages. These will become +# `-M` directives in the call to the go protobuf generator. 
+[packages] + "google/protobuf/any.proto" = "github.com/gogo/protobuf/types" + "proto/status.proto" = "google.golang.org/genproto/googleapis/rpc/status" + +[[overrides]] +# enable ttrpc and disable fieldpath and grpc for the shim +prefixes = ["github.com/containerd/ttrpc/integration/streaming"] +generators = ["go", "go-ttrpc"] + +[overrides.parameters.go-ttrpc] +prefix = "TTRPC" diff --git a/cluster-autoscaler/vendor/github.com/containerd/ttrpc/README.md b/cluster-autoscaler/vendor/github.com/containerd/ttrpc/README.md index 547a1297df7d..675a5179ef83 100644 --- a/cluster-autoscaler/vendor/github.com/containerd/ttrpc/README.md +++ b/cluster-autoscaler/vendor/github.com/containerd/ttrpc/README.md @@ -1,7 +1,6 @@ # ttrpc [![Build Status](https://github.com/containerd/ttrpc/workflows/CI/badge.svg)](https://github.com/containerd/ttrpc/actions?query=workflow%3ACI) -[![codecov](https://codecov.io/gh/containerd/ttrpc/branch/main/graph/badge.svg)](https://codecov.io/gh/containerd/ttrpc) GRPC for low-memory environments. @@ -20,13 +19,17 @@ Please note that while this project supports generating either end of the protocol, the generated service definitions will be incompatible with regular GRPC services, as they do not speak the same protocol. +# Protocol + +See the [protocol specification](./PROTOCOL.md). + # Usage Create a gogo vanity binary (see [`cmd/protoc-gen-gogottrpc/main.go`](cmd/protoc-gen-gogottrpc/main.go) for an example with the ttrpc plugin enabled. -It's recommended to use [`protobuild`](https://github.com//stevvooe/protobuild) +It's recommended to use [`protobuild`](https://github.com/containerd/protobuild) to build the protobufs for this project, but this will work with protoc directly, if required. @@ -37,13 +40,11 @@ directly, if required. - The client and server interface are identical whereas in GRPC there is a client and server interface that are different. - The Go stdlib context package is used instead. -- No support for streams yet. 
# Status TODO: -- [ ] Document protocol layout - [ ] Add testing under concurrent load to ensure - [ ] Verify connection error handling diff --git a/cluster-autoscaler/vendor/github.com/containerd/ttrpc/channel.go b/cluster-autoscaler/vendor/github.com/containerd/ttrpc/channel.go index 81116a5e23fc..feafd9a6b57c 100644 --- a/cluster-autoscaler/vendor/github.com/containerd/ttrpc/channel.go +++ b/cluster-autoscaler/vendor/github.com/containerd/ttrpc/channel.go @@ -38,6 +38,26 @@ type messageType uint8 const ( messageTypeRequest messageType = 0x1 messageTypeResponse messageType = 0x2 + messageTypeData messageType = 0x3 +) + +func (mt messageType) String() string { + switch mt { + case messageTypeRequest: + return "request" + case messageTypeResponse: + return "response" + case messageTypeData: + return "data" + default: + return "unknown" + } +} + +const ( + flagRemoteClosed uint8 = 0x1 + flagRemoteOpen uint8 = 0x2 + flagNoData uint8 = 0x4 ) // messageHeader represents the fixed-length message header of 10 bytes sent @@ -46,7 +66,7 @@ type messageHeader struct { Length uint32 // length excluding this header. b[:4] StreamID uint32 // identifies which request stream message is a part of. 
b[4:8] Type messageType // message type b[8] - Flags uint8 // reserved b[9] + Flags uint8 // type specific flags b[9] } func readMessageHeader(p []byte, r io.Reader) (messageHeader, error) { @@ -111,22 +131,31 @@ func (ch *channel) recv() (messageHeader, []byte, error) { return mh, nil, status.Errorf(codes.ResourceExhausted, "message length %v exceed maximum message size of %v", mh.Length, messageLengthMax) } - p := ch.getmbuf(int(mh.Length)) - if _, err := io.ReadFull(ch.br, p); err != nil { - return messageHeader{}, nil, fmt.Errorf("failed reading message: %w", err) + var p []byte + if mh.Length > 0 { + p = ch.getmbuf(int(mh.Length)) + if _, err := io.ReadFull(ch.br, p); err != nil { + return messageHeader{}, nil, fmt.Errorf("failed reading message: %w", err) + } } return mh, p, nil } -func (ch *channel) send(streamID uint32, t messageType, p []byte) error { - if err := writeMessageHeader(ch.bw, ch.hwbuf[:], messageHeader{Length: uint32(len(p)), StreamID: streamID, Type: t}); err != nil { +func (ch *channel) send(streamID uint32, t messageType, flags uint8, p []byte) error { + // TODO: Error on send rather than on recv + //if len(p) > messageLengthMax { + // return status.Errorf(codes.InvalidArgument, "refusing to send, message length %v exceed maximum message size of %v", len(p), messageLengthMax) + //} + if err := writeMessageHeader(ch.bw, ch.hwbuf[:], messageHeader{Length: uint32(len(p)), StreamID: streamID, Type: t, Flags: flags}); err != nil { return err } - _, err := ch.bw.Write(p) - if err != nil { - return err + if len(p) > 0 { + _, err := ch.bw.Write(p) + if err != nil { + return err + } } return ch.bw.Flush() diff --git a/cluster-autoscaler/vendor/github.com/containerd/ttrpc/client.go b/cluster-autoscaler/vendor/github.com/containerd/ttrpc/client.go index 26c3dd2a9819..4b1e1e709baa 100644 --- a/cluster-autoscaler/vendor/github.com/containerd/ttrpc/client.go +++ b/cluster-autoscaler/vendor/github.com/containerd/ttrpc/client.go @@ -19,30 +19,30 @@ package 
ttrpc import ( "context" "errors" + "fmt" "io" "net" - "os" "strings" "sync" "syscall" "time" - "github.com/gogo/protobuf/proto" "github.com/sirupsen/logrus" "google.golang.org/grpc/codes" "google.golang.org/grpc/status" + "google.golang.org/protobuf/proto" ) -// ErrClosed is returned by client methods when the underlying connection is -// closed. -var ErrClosed = errors.New("ttrpc: closed") - // Client for a ttrpc server type Client struct { codec codec conn net.Conn channel *channel - calls chan *callRequest + + streamLock sync.RWMutex + streams map[streamID]*stream + nextStreamID streamID + sendLock sync.Mutex ctx context.Context closed func() @@ -51,8 +51,6 @@ type Client struct { userCloseFunc func() userCloseWaitCh chan struct{} - errOnce sync.Once - err error interceptor UnaryClientInterceptor } @@ -73,13 +71,16 @@ func WithUnaryClientInterceptor(i UnaryClientInterceptor) ClientOpts { } } +// NewClient creates a new ttrpc client using the given connection func NewClient(conn net.Conn, opts ...ClientOpts) *Client { ctx, cancel := context.WithCancel(context.Background()) + channel := newChannel(conn) c := &Client{ codec: codec{}, conn: conn, - channel: newChannel(conn), - calls: make(chan *callRequest), + channel: channel, + streams: make(map[streamID]*stream), + nextStreamID: 1, closed: cancel, ctx: ctx, userCloseFunc: func() {}, @@ -95,13 +96,13 @@ func NewClient(conn net.Conn, opts ...ClientOpts) *Client { return c } -type callRequest struct { - ctx context.Context - req *Request - resp *Response // response will be written back here - errs chan error // error written here on completion +func (c *Client) send(sid uint32, mt messageType, flags uint8, b []byte) error { + c.sendLock.Lock() + defer c.sendLock.Unlock() + return c.channel.send(sid, mt, flags, b) } +// Call makes a unary request and returns with response func (c *Client) Call(ctx context.Context, service, method string, req, resp interface{}) error { payload, err := c.codec.Marshal(req) if err != 
nil { @@ -113,6 +114,7 @@ func (c *Client) Call(ctx context.Context, service, method string, req, resp int Service: service, Method: method, Payload: payload, + // TODO: metadata from context } cresp = &Response{} @@ -123,7 +125,7 @@ func (c *Client) Call(ctx context.Context, service, method string, req, resp int } if dl, ok := ctx.Deadline(); ok { - creq.TimeoutNano = dl.Sub(time.Now()).Nanoseconds() + creq.TimeoutNano = time.Until(dl).Nanoseconds() } info := &UnaryClientInfo{ @@ -143,36 +145,143 @@ func (c *Client) Call(ctx context.Context, service, method string, req, resp int return nil } -func (c *Client) dispatch(ctx context.Context, req *Request, resp *Response) error { - errs := make(chan error, 1) - call := &callRequest{ - ctx: ctx, - req: req, - resp: resp, - errs: errs, +// StreamDesc describes the stream properties, whether the stream has +// a streaming client, a streaming server, or both +type StreamDesc struct { + StreamingClient bool + StreamingServer bool +} + +// ClientStream is used to send or recv messages on the underlying stream +type ClientStream interface { + CloseSend() error + SendMsg(m interface{}) error + RecvMsg(m interface{}) error +} + +type clientStream struct { + ctx context.Context + s *stream + c *Client + desc *StreamDesc + localClosed bool + remoteClosed bool +} + +func (cs *clientStream) CloseSend() error { + if !cs.desc.StreamingClient { + return fmt.Errorf("%w: cannot close non-streaming client", ErrProtocol) } + if cs.localClosed { + return ErrStreamClosed + } + err := cs.s.send(messageTypeData, flagRemoteClosed|flagNoData, nil) + if err != nil { + return filterCloseErr(err) + } + cs.localClosed = true + return nil +} - select { - case <-ctx.Done(): - return ctx.Err() - case c.calls <- call: - case <-c.ctx.Done(): - return c.error() +func (cs *clientStream) SendMsg(m interface{}) error { + if !cs.desc.StreamingClient { + return fmt.Errorf("%w: cannot send data from non-streaming client", ErrProtocol) + } + if cs.localClosed 
{ + return ErrStreamClosed } - select { - case <-ctx.Done(): - return ctx.Err() - case err := <-errs: + var ( + payload []byte + err error + ) + if m != nil { + payload, err = cs.c.codec.Marshal(m) + if err != nil { + return err + } + } + + err = cs.s.send(messageTypeData, 0, payload) + if err != nil { return filterCloseErr(err) - case <-c.ctx.Done(): - return c.error() } + + return nil +} + +func (cs *clientStream) RecvMsg(m interface{}) error { + if cs.remoteClosed { + return io.EOF + } + + var msg *streamMessage + select { + case <-cs.ctx.Done(): + return cs.ctx.Err() + case <-cs.s.recvClose: + // If recv has a pending message, process that first + select { + case msg = <-cs.s.recv: + default: + return cs.s.recvErr + } + case msg = <-cs.s.recv: + } + + if msg.header.Type == messageTypeResponse { + resp := &Response{} + err := proto.Unmarshal(msg.payload[:msg.header.Length], resp) + // return the payload buffer for reuse + cs.c.channel.putmbuf(msg.payload) + if err != nil { + return err + } + + if err := cs.c.codec.Unmarshal(resp.Payload, m); err != nil { + return err + } + + if resp.Status != nil && resp.Status.Code != int32(codes.OK) { + return status.ErrorProto(resp.Status) + } + + cs.c.deleteStream(cs.s) + cs.remoteClosed = true + + return nil + } else if msg.header.Type == messageTypeData { + if !cs.desc.StreamingServer { + cs.c.deleteStream(cs.s) + cs.remoteClosed = true + return fmt.Errorf("received data from non-streaming server: %w", ErrProtocol) + } + if msg.header.Flags&flagRemoteClosed == flagRemoteClosed { + cs.c.deleteStream(cs.s) + cs.remoteClosed = true + + if msg.header.Flags&flagNoData == flagNoData { + return io.EOF + } + } + + err := cs.c.codec.Unmarshal(msg.payload[:msg.header.Length], m) + cs.c.channel.putmbuf(msg.payload) + if err != nil { + return err + } + return nil + } + + return fmt.Errorf("unexpected %q message received: %w", msg.header.Type, ErrProtocol) } +// Close closes the ttrpc connection and underlying connection func (c 
*Client) Close() error { c.closeOnce.Do(func() { c.closed() + + c.conn.Close() }) return nil } @@ -188,194 +297,105 @@ func (c *Client) UserOnCloseWait(ctx context.Context) error { } } -type message struct { - messageHeader - p []byte - err error -} +func (c *Client) run() { + err := c.receiveLoop() + c.Close() + c.cleanupStreams(err) -// callMap provides access to a map of active calls, guarded by a mutex. -type callMap struct { - m sync.Mutex - activeCalls map[uint32]*callRequest - closeErr error + c.userCloseFunc() + close(c.userCloseWaitCh) } -// newCallMap returns a new callMap with an empty set of active calls. -func newCallMap() *callMap { - return &callMap{ - activeCalls: make(map[uint32]*callRequest), - } -} +func (c *Client) receiveLoop() error { + for { + select { + case <-c.ctx.Done(): + return ErrClosed + default: + var ( + msg = &streamMessage{} + err error + ) + + msg.header, msg.payload, err = c.channel.recv() + if err != nil { + _, ok := status.FromError(err) + if !ok { + // treat all errors that are not an rpc status as terminal. + // all others poison the connection. + return filterCloseErr(err) + } + } + sid := streamID(msg.header.StreamID) + s := c.getStream(sid) + if s == nil { + logrus.WithField("stream", sid).Errorf("ttrpc: received message on inactive stream") + continue + } -// set adds a call entry to the map with the given streamID key. -func (cm *callMap) set(streamID uint32, cr *callRequest) error { - cm.m.Lock() - defer cm.m.Unlock() - if cm.closeErr != nil { - return cm.closeErr + if err != nil { + s.closeWithError(err) + } else { + if err := s.receive(c.ctx, msg); err != nil { + logrus.WithError(err).WithField("stream", sid).Errorf("ttrpc: failed to handle message") + } + } + } } - cm.activeCalls[streamID] = cr - return nil } -// get looks up the call entry for the given streamID key, then removes it -// from the map and returns it. 
-func (cm *callMap) get(streamID uint32) (cr *callRequest, ok bool, err error) { - cm.m.Lock() - defer cm.m.Unlock() - if cm.closeErr != nil { - return nil, false, cm.closeErr - } - cr, ok = cm.activeCalls[streamID] - if ok { - delete(cm.activeCalls, streamID) - } - return -} +// createStream creates a new stream and registers it with the client +// Introduce stream types for multiple or single response +func (c *Client) createStream(flags uint8, b []byte) (*stream, error) { + c.streamLock.Lock() -// abort sends the given error to each active call, and clears the map. -// Once abort has been called, any subsequent calls to the callMap will return the error passed to abort. -func (cm *callMap) abort(err error) error { - cm.m.Lock() - defer cm.m.Unlock() - if cm.closeErr != nil { - return cm.closeErr - } - for streamID, call := range cm.activeCalls { - call.errs <- err - delete(cm.activeCalls, streamID) + // Check if closed since lock acquired to prevent adding + // anything after cleanup completes + select { + case <-c.ctx.Done(): + c.streamLock.Unlock() + return nil, ErrClosed + default: } - cm.closeErr = err - return nil -} -func (c *Client) run() { - var ( - waiters = newCallMap() - receiverDone = make(chan struct{}) - ) + // Stream ID should be allocated at same time + s := newStream(c.nextStreamID, c) + c.streams[s.id] = s + c.nextStreamID = c.nextStreamID + 2 - // Sender goroutine - // Receives calls from dispatch, adds them to the set of active calls, and sends them - // to the server. - go func() { - var streamID uint32 = 1 - for { - select { - case <-c.ctx.Done(): - return - case call := <-c.calls: - id := streamID - streamID += 2 // enforce odd client initiated request ids - if err := waiters.set(id, call); err != nil { - call.errs <- err // errs is buffered so should not block. - continue - } - if err := c.send(id, messageTypeRequest, call.req); err != nil { - call.errs <- err // errs is buffered so should not block. 
- waiters.get(id) // remove from waiters set - } - } - } - }() - - // Receiver goroutine - // Receives responses from the server, looks up the call info in the set of active calls, - // and notifies the caller of the response. - go func() { - defer close(receiverDone) - for { - select { - case <-c.ctx.Done(): - c.setError(c.ctx.Err()) - return - default: - mh, p, err := c.channel.recv() - if err != nil { - _, ok := status.FromError(err) - if !ok { - // treat all errors that are not an rpc status as terminal. - // all others poison the connection. - c.setError(filterCloseErr(err)) - return - } - } - msg := &message{ - messageHeader: mh, - p: p[:mh.Length], - err: err, - } - call, ok, err := waiters.get(mh.StreamID) - if err != nil { - logrus.Errorf("ttrpc: failed to look up active call: %s", err) - continue - } - if !ok { - logrus.Errorf("ttrpc: received message for unknown channel %v", mh.StreamID) - continue - } - call.errs <- c.recv(call.resp, msg) - } - } - }() - - defer func() { - c.conn.Close() - c.userCloseFunc() - close(c.userCloseWaitCh) - }() + c.sendLock.Lock() + defer c.sendLock.Unlock() + c.streamLock.Unlock() - for { - select { - case <-receiverDone: - // The receiver has exited. - // don't return out, let the close of the context trigger the abort of waiters - c.Close() - case <-c.ctx.Done(): - // Abort all active calls. This will also prevent any new calls from being added - // to waiters. 
- waiters.abort(c.error()) - return - } + if err := c.channel.send(uint32(s.id), messageTypeRequest, flags, b); err != nil { + return s, filterCloseErr(err) } -} -func (c *Client) error() error { - c.errOnce.Do(func() { - if c.err == nil { - c.err = ErrClosed - } - }) - return c.err + return s, nil } -func (c *Client) setError(err error) { - c.errOnce.Do(func() { - c.err = err - }) +func (c *Client) deleteStream(s *stream) { + c.streamLock.Lock() + delete(c.streams, s.id) + c.streamLock.Unlock() + s.closeWithError(nil) } -func (c *Client) send(streamID uint32, mtype messageType, msg interface{}) error { - p, err := c.codec.Marshal(msg) - if err != nil { - return err - } - - return c.channel.send(streamID, mtype, p) +func (c *Client) getStream(sid streamID) *stream { + c.streamLock.RLock() + s := c.streams[sid] + c.streamLock.RUnlock() + return s } -func (c *Client) recv(resp *Response, msg *message) error { - if msg.err != nil { - return msg.err - } +func (c *Client) cleanupStreams(err error) { + c.streamLock.Lock() + defer c.streamLock.Unlock() - if msg.Type != messageTypeResponse { - return errors.New("unknown message type received") + for sid, s := range c.streams { + s.closeWithError(err) + delete(c.streams, sid) } - - defer c.channel.putmbuf(msg.p) - return proto.Unmarshal(msg.p, resp) } // filterCloseErr rewrites EOF and EPIPE errors to ErrClosed. 
Use when @@ -388,6 +408,8 @@ func filterCloseErr(err error) error { return nil case err == io.EOF: return ErrClosed + case errors.Is(err, io.ErrClosedPipe): + return ErrClosed case errors.Is(err, io.EOF): return ErrClosed case strings.Contains(err.Error(), "use of closed network connection"): @@ -395,11 +417,9 @@ func filterCloseErr(err error) error { default: // if we have an epipe on a write or econnreset on a read , we cast to errclosed var oerr *net.OpError - if errors.As(err, &oerr) && (oerr.Op == "write" || oerr.Op == "read") { - serr, sok := oerr.Err.(*os.SyscallError) - if sok && ((serr.Err == syscall.EPIPE && oerr.Op == "write") || - (serr.Err == syscall.ECONNRESET && oerr.Op == "read")) { - + if errors.As(err, &oerr) { + if (oerr.Op == "write" && errors.Is(err, syscall.EPIPE)) || + (oerr.Op == "read" && errors.Is(err, syscall.ECONNRESET)) { return ErrClosed } } @@ -407,3 +427,86 @@ func filterCloseErr(err error) error { return err } + +// NewStream creates a new stream with the given stream descriptor to the +// specified service and method. If not a streaming client, the request object +// may be provided. 
+func (c *Client) NewStream(ctx context.Context, desc *StreamDesc, service, method string, req interface{}) (ClientStream, error) { + var payload []byte + if req != nil { + var err error + payload, err = c.codec.Marshal(req) + if err != nil { + return nil, err + } + } + + request := &Request{ + Service: service, + Method: method, + Payload: payload, + // TODO: metadata from context + } + p, err := c.codec.Marshal(request) + if err != nil { + return nil, err + } + + var flags uint8 + if desc.StreamingClient { + flags = flagRemoteOpen + } else { + flags = flagRemoteClosed + } + s, err := c.createStream(flags, p) + if err != nil { + return nil, err + } + + return &clientStream{ + ctx: ctx, + s: s, + c: c, + desc: desc, + }, nil +} + +func (c *Client) dispatch(ctx context.Context, req *Request, resp *Response) error { + p, err := c.codec.Marshal(req) + if err != nil { + return err + } + + s, err := c.createStream(0, p) + if err != nil { + return err + } + defer c.deleteStream(s) + + var msg *streamMessage + select { + case <-ctx.Done(): + return ctx.Err() + case <-c.ctx.Done(): + return ErrClosed + case <-s.recvClose: + // If recv has a pending message, process that first + select { + case msg = <-s.recv: + default: + return s.recvErr + } + case msg = <-s.recv: + } + + if msg.header.Type == messageTypeResponse { + err = proto.Unmarshal(msg.payload[:msg.header.Length], resp) + } else { + err = fmt.Errorf("unexpected %q message received: %w", msg.header.Type, ErrProtocol) + } + + // return the payload buffer for reuse + c.channel.putmbuf(msg.payload) + + return err +} diff --git a/cluster-autoscaler/vendor/github.com/containerd/ttrpc/codec.go b/cluster-autoscaler/vendor/github.com/containerd/ttrpc/codec.go index 880634c27e39..3e82722a4245 100644 --- a/cluster-autoscaler/vendor/github.com/containerd/ttrpc/codec.go +++ b/cluster-autoscaler/vendor/github.com/containerd/ttrpc/codec.go @@ -19,7 +19,7 @@ package ttrpc import ( "fmt" - "github.com/gogo/protobuf/proto" + 
"google.golang.org/protobuf/proto" ) type codec struct{} diff --git a/cluster-autoscaler/vendor/github.com/containerd/ttrpc/doc.go b/cluster-autoscaler/vendor/github.com/containerd/ttrpc/doc.go new file mode 100644 index 000000000000..d80cd424cc8b --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/containerd/ttrpc/doc.go @@ -0,0 +1,23 @@ +/* + Copyright The containerd Authors. + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. +*/ + +/* +package ttrpc defines and implements a low level simple transfer protocol +optimized for low latency and reliable connections between processes on the same +host. The protocol uses simple framing for sending requests, responses, and data +using multiple streams. +*/ +package ttrpc diff --git a/cluster-autoscaler/vendor/github.com/containerd/ttrpc/errors.go b/cluster-autoscaler/vendor/github.com/containerd/ttrpc/errors.go new file mode 100644 index 000000000000..ec14b7952bfb --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/containerd/ttrpc/errors.go @@ -0,0 +1,34 @@ +/* + Copyright The containerd Authors. + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+ See the License for the specific language governing permissions and + limitations under the License. +*/ + +package ttrpc + +import "errors" + +var ( + // ErrProtocol is a general error in the handling the protocol. + ErrProtocol = errors.New("protocol error") + + // ErrClosed is returned by client methods when the underlying connection is + // closed. + ErrClosed = errors.New("ttrpc: closed") + + // ErrServerClosed is returned when the Server has closed its connection. + ErrServerClosed = errors.New("ttrpc: server closed") + + // ErrStreamClosed is when the streaming connection is closed. + ErrStreamClosed = errors.New("ttrpc: stream closed") +) diff --git a/cluster-autoscaler/vendor/github.com/containerd/ttrpc/handshake.go b/cluster-autoscaler/vendor/github.com/containerd/ttrpc/handshake.go index a424b67a4976..3c6b610d35d4 100644 --- a/cluster-autoscaler/vendor/github.com/containerd/ttrpc/handshake.go +++ b/cluster-autoscaler/vendor/github.com/containerd/ttrpc/handshake.go @@ -45,6 +45,6 @@ func (fn handshakerFunc) Handshake(ctx context.Context, conn net.Conn) (net.Conn return fn(ctx, conn) } -func noopHandshake(ctx context.Context, conn net.Conn) (net.Conn, interface{}, error) { +func noopHandshake(_ context.Context, conn net.Conn) (net.Conn, interface{}, error) { return conn, nil, nil } diff --git a/cluster-autoscaler/vendor/github.com/containerd/ttrpc/interceptor.go b/cluster-autoscaler/vendor/github.com/containerd/ttrpc/interceptor.go index c1219dac65f6..7ff5e9d33f23 100644 --- a/cluster-autoscaler/vendor/github.com/containerd/ttrpc/interceptor.go +++ b/cluster-autoscaler/vendor/github.com/containerd/ttrpc/interceptor.go @@ -28,6 +28,13 @@ type UnaryClientInfo struct { FullMethod string } +// StreamServerInfo provides information about the server request +type StreamServerInfo struct { + FullMethod string + StreamingClient bool + StreamingServer bool +} + // Unmarshaler contains the server request data and allows it to be unmarshaled // into a concrete type 
type Unmarshaler func(interface{}) error @@ -41,10 +48,18 @@ type UnaryServerInterceptor func(context.Context, Unmarshaler, *UnaryServerInfo, // UnaryClientInterceptor specifies the interceptor function for client request/response type UnaryClientInterceptor func(context.Context, *Request, *Response, *UnaryClientInfo, Invoker) error -func defaultServerInterceptor(ctx context.Context, unmarshal Unmarshaler, info *UnaryServerInfo, method Method) (interface{}, error) { +func defaultServerInterceptor(ctx context.Context, unmarshal Unmarshaler, _ *UnaryServerInfo, method Method) (interface{}, error) { return method(ctx, unmarshal) } func defaultClientInterceptor(ctx context.Context, req *Request, resp *Response, _ *UnaryClientInfo, invoker Invoker) error { return invoker(ctx, req, resp) } + +type StreamServerInterceptor func(context.Context, StreamServer, *StreamServerInfo, StreamHandler) (interface{}, error) + +func defaultStreamServerInterceptor(ctx context.Context, ss StreamServer, _ *StreamServerInfo, stream StreamHandler) (interface{}, error) { + return stream(ctx, ss) +} + +type StreamClientInterceptor func(context.Context) diff --git a/cluster-autoscaler/vendor/github.com/containerd/ttrpc/request.pb.go b/cluster-autoscaler/vendor/github.com/containerd/ttrpc/request.pb.go new file mode 100644 index 000000000000..3921ae5a356c --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/containerd/ttrpc/request.pb.go @@ -0,0 +1,396 @@ +// Code generated by protoc-gen-go. DO NOT EDIT. +// versions: +// protoc-gen-go v1.28.1 +// protoc v3.20.1 +// source: github.com/containerd/ttrpc/request.proto + +package ttrpc + +import ( + status "google.golang.org/genproto/googleapis/rpc/status" + protoreflect "google.golang.org/protobuf/reflect/protoreflect" + protoimpl "google.golang.org/protobuf/runtime/protoimpl" + reflect "reflect" + sync "sync" +) + +const ( + // Verify that this generated code is sufficiently up-to-date. 
+ _ = protoimpl.EnforceVersion(20 - protoimpl.MinVersion) + // Verify that runtime/protoimpl is sufficiently up-to-date. + _ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20) +) + +type Request struct { + state protoimpl.MessageState + sizeCache protoimpl.SizeCache + unknownFields protoimpl.UnknownFields + + Service string `protobuf:"bytes,1,opt,name=service,proto3" json:"service,omitempty"` + Method string `protobuf:"bytes,2,opt,name=method,proto3" json:"method,omitempty"` + Payload []byte `protobuf:"bytes,3,opt,name=payload,proto3" json:"payload,omitempty"` + TimeoutNano int64 `protobuf:"varint,4,opt,name=timeout_nano,json=timeoutNano,proto3" json:"timeout_nano,omitempty"` + Metadata []*KeyValue `protobuf:"bytes,5,rep,name=metadata,proto3" json:"metadata,omitempty"` +} + +func (x *Request) Reset() { + *x = Request{} + if protoimpl.UnsafeEnabled { + mi := &file_github_com_containerd_ttrpc_request_proto_msgTypes[0] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) + } +} + +func (x *Request) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*Request) ProtoMessage() {} + +func (x *Request) ProtoReflect() protoreflect.Message { + mi := &file_github_com_containerd_ttrpc_request_proto_msgTypes[0] + if protoimpl.UnsafeEnabled && x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use Request.ProtoReflect.Descriptor instead. 
+func (*Request) Descriptor() ([]byte, []int) { + return file_github_com_containerd_ttrpc_request_proto_rawDescGZIP(), []int{0} +} + +func (x *Request) GetService() string { + if x != nil { + return x.Service + } + return "" +} + +func (x *Request) GetMethod() string { + if x != nil { + return x.Method + } + return "" +} + +func (x *Request) GetPayload() []byte { + if x != nil { + return x.Payload + } + return nil +} + +func (x *Request) GetTimeoutNano() int64 { + if x != nil { + return x.TimeoutNano + } + return 0 +} + +func (x *Request) GetMetadata() []*KeyValue { + if x != nil { + return x.Metadata + } + return nil +} + +type Response struct { + state protoimpl.MessageState + sizeCache protoimpl.SizeCache + unknownFields protoimpl.UnknownFields + + Status *status.Status `protobuf:"bytes,1,opt,name=status,proto3" json:"status,omitempty"` + Payload []byte `protobuf:"bytes,2,opt,name=payload,proto3" json:"payload,omitempty"` +} + +func (x *Response) Reset() { + *x = Response{} + if protoimpl.UnsafeEnabled { + mi := &file_github_com_containerd_ttrpc_request_proto_msgTypes[1] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) + } +} + +func (x *Response) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*Response) ProtoMessage() {} + +func (x *Response) ProtoReflect() protoreflect.Message { + mi := &file_github_com_containerd_ttrpc_request_proto_msgTypes[1] + if protoimpl.UnsafeEnabled && x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use Response.ProtoReflect.Descriptor instead. 
+func (*Response) Descriptor() ([]byte, []int) { + return file_github_com_containerd_ttrpc_request_proto_rawDescGZIP(), []int{1} +} + +func (x *Response) GetStatus() *status.Status { + if x != nil { + return x.Status + } + return nil +} + +func (x *Response) GetPayload() []byte { + if x != nil { + return x.Payload + } + return nil +} + +type StringList struct { + state protoimpl.MessageState + sizeCache protoimpl.SizeCache + unknownFields protoimpl.UnknownFields + + List []string `protobuf:"bytes,1,rep,name=list,proto3" json:"list,omitempty"` +} + +func (x *StringList) Reset() { + *x = StringList{} + if protoimpl.UnsafeEnabled { + mi := &file_github_com_containerd_ttrpc_request_proto_msgTypes[2] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) + } +} + +func (x *StringList) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*StringList) ProtoMessage() {} + +func (x *StringList) ProtoReflect() protoreflect.Message { + mi := &file_github_com_containerd_ttrpc_request_proto_msgTypes[2] + if protoimpl.UnsafeEnabled && x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use StringList.ProtoReflect.Descriptor instead. 
+func (*StringList) Descriptor() ([]byte, []int) { + return file_github_com_containerd_ttrpc_request_proto_rawDescGZIP(), []int{2} +} + +func (x *StringList) GetList() []string { + if x != nil { + return x.List + } + return nil +} + +type KeyValue struct { + state protoimpl.MessageState + sizeCache protoimpl.SizeCache + unknownFields protoimpl.UnknownFields + + Key string `protobuf:"bytes,1,opt,name=key,proto3" json:"key,omitempty"` + Value string `protobuf:"bytes,2,opt,name=value,proto3" json:"value,omitempty"` +} + +func (x *KeyValue) Reset() { + *x = KeyValue{} + if protoimpl.UnsafeEnabled { + mi := &file_github_com_containerd_ttrpc_request_proto_msgTypes[3] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) + } +} + +func (x *KeyValue) String() string { + return protoimpl.X.MessageStringOf(x) +} + +func (*KeyValue) ProtoMessage() {} + +func (x *KeyValue) ProtoReflect() protoreflect.Message { + mi := &file_github_com_containerd_ttrpc_request_proto_msgTypes[3] + if protoimpl.UnsafeEnabled && x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) +} + +// Deprecated: Use KeyValue.ProtoReflect.Descriptor instead. 
+func (*KeyValue) Descriptor() ([]byte, []int) { + return file_github_com_containerd_ttrpc_request_proto_rawDescGZIP(), []int{3} +} + +func (x *KeyValue) GetKey() string { + if x != nil { + return x.Key + } + return "" +} + +func (x *KeyValue) GetValue() string { + if x != nil { + return x.Value + } + return "" +} + +var File_github_com_containerd_ttrpc_request_proto protoreflect.FileDescriptor + +var file_github_com_containerd_ttrpc_request_proto_rawDesc = []byte{ + 0x0a, 0x29, 0x67, 0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x63, 0x6f, 0x6e, + 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x64, 0x2f, 0x74, 0x74, 0x72, 0x70, 0x63, 0x2f, 0x72, 0x65, + 0x71, 0x75, 0x65, 0x73, 0x74, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x12, 0x05, 0x74, 0x74, 0x72, + 0x70, 0x63, 0x1a, 0x12, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x2f, 0x73, 0x74, 0x61, 0x74, 0x75, 0x73, + 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x22, 0xa5, 0x01, 0x0a, 0x07, 0x52, 0x65, 0x71, 0x75, 0x65, + 0x73, 0x74, 0x12, 0x18, 0x0a, 0x07, 0x73, 0x65, 0x72, 0x76, 0x69, 0x63, 0x65, 0x18, 0x01, 0x20, + 0x01, 0x28, 0x09, 0x52, 0x07, 0x73, 0x65, 0x72, 0x76, 0x69, 0x63, 0x65, 0x12, 0x16, 0x0a, 0x06, + 0x6d, 0x65, 0x74, 0x68, 0x6f, 0x64, 0x18, 0x02, 0x20, 0x01, 0x28, 0x09, 0x52, 0x06, 0x6d, 0x65, + 0x74, 0x68, 0x6f, 0x64, 0x12, 0x18, 0x0a, 0x07, 0x70, 0x61, 0x79, 0x6c, 0x6f, 0x61, 0x64, 0x18, + 0x03, 0x20, 0x01, 0x28, 0x0c, 0x52, 0x07, 0x70, 0x61, 0x79, 0x6c, 0x6f, 0x61, 0x64, 0x12, 0x21, + 0x0a, 0x0c, 0x74, 0x69, 0x6d, 0x65, 0x6f, 0x75, 0x74, 0x5f, 0x6e, 0x61, 0x6e, 0x6f, 0x18, 0x04, + 0x20, 0x01, 0x28, 0x03, 0x52, 0x0b, 0x74, 0x69, 0x6d, 0x65, 0x6f, 0x75, 0x74, 0x4e, 0x61, 0x6e, + 0x6f, 0x12, 0x2b, 0x0a, 0x08, 0x6d, 0x65, 0x74, 0x61, 0x64, 0x61, 0x74, 0x61, 0x18, 0x05, 0x20, + 0x03, 0x28, 0x0b, 0x32, 0x0f, 0x2e, 0x74, 0x74, 0x72, 0x70, 0x63, 0x2e, 0x4b, 0x65, 0x79, 0x56, + 0x61, 0x6c, 0x75, 0x65, 0x52, 0x08, 0x6d, 0x65, 0x74, 0x61, 0x64, 0x61, 0x74, 0x61, 0x22, 0x45, + 0x0a, 0x08, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 
0x65, 0x12, 0x1f, 0x0a, 0x06, 0x73, 0x74, + 0x61, 0x74, 0x75, 0x73, 0x18, 0x01, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x07, 0x2e, 0x53, 0x74, 0x61, + 0x74, 0x75, 0x73, 0x52, 0x06, 0x73, 0x74, 0x61, 0x74, 0x75, 0x73, 0x12, 0x18, 0x0a, 0x07, 0x70, + 0x61, 0x79, 0x6c, 0x6f, 0x61, 0x64, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0c, 0x52, 0x07, 0x70, 0x61, + 0x79, 0x6c, 0x6f, 0x61, 0x64, 0x22, 0x20, 0x0a, 0x0a, 0x53, 0x74, 0x72, 0x69, 0x6e, 0x67, 0x4c, + 0x69, 0x73, 0x74, 0x12, 0x12, 0x0a, 0x04, 0x6c, 0x69, 0x73, 0x74, 0x18, 0x01, 0x20, 0x03, 0x28, + 0x09, 0x52, 0x04, 0x6c, 0x69, 0x73, 0x74, 0x22, 0x32, 0x0a, 0x08, 0x4b, 0x65, 0x79, 0x56, 0x61, + 0x6c, 0x75, 0x65, 0x12, 0x10, 0x0a, 0x03, 0x6b, 0x65, 0x79, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, + 0x52, 0x03, 0x6b, 0x65, 0x79, 0x12, 0x14, 0x0a, 0x05, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x18, 0x02, + 0x20, 0x01, 0x28, 0x09, 0x52, 0x05, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x42, 0x1d, 0x5a, 0x1b, 0x67, + 0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x63, 0x6f, 0x6e, 0x74, 0x61, 0x69, + 0x6e, 0x65, 0x72, 0x64, 0x2f, 0x74, 0x74, 0x72, 0x70, 0x63, 0x62, 0x06, 0x70, 0x72, 0x6f, 0x74, + 0x6f, 0x33, +} + +var ( + file_github_com_containerd_ttrpc_request_proto_rawDescOnce sync.Once + file_github_com_containerd_ttrpc_request_proto_rawDescData = file_github_com_containerd_ttrpc_request_proto_rawDesc +) + +func file_github_com_containerd_ttrpc_request_proto_rawDescGZIP() []byte { + file_github_com_containerd_ttrpc_request_proto_rawDescOnce.Do(func() { + file_github_com_containerd_ttrpc_request_proto_rawDescData = protoimpl.X.CompressGZIP(file_github_com_containerd_ttrpc_request_proto_rawDescData) + }) + return file_github_com_containerd_ttrpc_request_proto_rawDescData +} + +var file_github_com_containerd_ttrpc_request_proto_msgTypes = make([]protoimpl.MessageInfo, 4) +var file_github_com_containerd_ttrpc_request_proto_goTypes = []interface{}{ + (*Request)(nil), // 0: ttrpc.Request + (*Response)(nil), // 1: ttrpc.Response + (*StringList)(nil), // 2: 
ttrpc.StringList + (*KeyValue)(nil), // 3: ttrpc.KeyValue + (*status.Status)(nil), // 4: Status +} +var file_github_com_containerd_ttrpc_request_proto_depIdxs = []int32{ + 3, // 0: ttrpc.Request.metadata:type_name -> ttrpc.KeyValue + 4, // 1: ttrpc.Response.status:type_name -> Status + 2, // [2:2] is the sub-list for method output_type + 2, // [2:2] is the sub-list for method input_type + 2, // [2:2] is the sub-list for extension type_name + 2, // [2:2] is the sub-list for extension extendee + 0, // [0:2] is the sub-list for field type_name +} + +func init() { file_github_com_containerd_ttrpc_request_proto_init() } +func file_github_com_containerd_ttrpc_request_proto_init() { + if File_github_com_containerd_ttrpc_request_proto != nil { + return + } + if !protoimpl.UnsafeEnabled { + file_github_com_containerd_ttrpc_request_proto_msgTypes[0].Exporter = func(v interface{}, i int) interface{} { + switch v := v.(*Request); i { + case 0: + return &v.state + case 1: + return &v.sizeCache + case 2: + return &v.unknownFields + default: + return nil + } + } + file_github_com_containerd_ttrpc_request_proto_msgTypes[1].Exporter = func(v interface{}, i int) interface{} { + switch v := v.(*Response); i { + case 0: + return &v.state + case 1: + return &v.sizeCache + case 2: + return &v.unknownFields + default: + return nil + } + } + file_github_com_containerd_ttrpc_request_proto_msgTypes[2].Exporter = func(v interface{}, i int) interface{} { + switch v := v.(*StringList); i { + case 0: + return &v.state + case 1: + return &v.sizeCache + case 2: + return &v.unknownFields + default: + return nil + } + } + file_github_com_containerd_ttrpc_request_proto_msgTypes[3].Exporter = func(v interface{}, i int) interface{} { + switch v := v.(*KeyValue); i { + case 0: + return &v.state + case 1: + return &v.sizeCache + case 2: + return &v.unknownFields + default: + return nil + } + } + } + type x struct{} + out := protoimpl.TypeBuilder{ + File: protoimpl.DescBuilder{ + GoPackagePath: 
reflect.TypeOf(x{}).PkgPath(), + RawDescriptor: file_github_com_containerd_ttrpc_request_proto_rawDesc, + NumEnums: 0, + NumMessages: 4, + NumExtensions: 0, + NumServices: 0, + }, + GoTypes: file_github_com_containerd_ttrpc_request_proto_goTypes, + DependencyIndexes: file_github_com_containerd_ttrpc_request_proto_depIdxs, + MessageInfos: file_github_com_containerd_ttrpc_request_proto_msgTypes, + }.Build() + File_github_com_containerd_ttrpc_request_proto = out.File + file_github_com_containerd_ttrpc_request_proto_rawDesc = nil + file_github_com_containerd_ttrpc_request_proto_goTypes = nil + file_github_com_containerd_ttrpc_request_proto_depIdxs = nil +} diff --git a/cluster-autoscaler/vendor/github.com/containerd/ttrpc/request.proto b/cluster-autoscaler/vendor/github.com/containerd/ttrpc/request.proto new file mode 100644 index 000000000000..37da334fc2a9 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/containerd/ttrpc/request.proto @@ -0,0 +1,29 @@ +syntax = "proto3"; + +package ttrpc; + +import "proto/status.proto"; + +option go_package = "github.com/containerd/ttrpc"; + +message Request { + string service = 1; + string method = 2; + bytes payload = 3; + int64 timeout_nano = 4; + repeated KeyValue metadata = 5; +} + +message Response { + Status status = 1; + bytes payload = 2; +} + +message StringList { + repeated string list = 1; +} + +message KeyValue { + string key = 1; + string value = 2; +} diff --git a/cluster-autoscaler/vendor/github.com/containerd/ttrpc/server.go b/cluster-autoscaler/vendor/github.com/containerd/ttrpc/server.go index b0e48073e4d9..7af59f828e52 100644 --- a/cluster-autoscaler/vendor/github.com/containerd/ttrpc/server.go +++ b/cluster-autoscaler/vendor/github.com/containerd/ttrpc/server.go @@ -24,6 +24,7 @@ import ( "net" "sync" "sync/atomic" + "syscall" "time" "github.com/sirupsen/logrus" @@ -31,10 +32,6 @@ import ( "google.golang.org/grpc/status" ) -var ( - ErrServerClosed = errors.New("ttrpc: server closed") -) - type Server 
struct { config *serverConfig services *serviceSet @@ -66,8 +63,14 @@ func NewServer(opts ...ServerOpt) (*Server, error) { }, nil } +// Register registers a map of methods to method handlers +// TODO: Remove in 2.0, does not support streams func (s *Server) Register(name string, methods map[string]Method) { - s.services.register(name, methods) + s.services.register(name, &ServiceDesc{Methods: methods}) +} + +func (s *Server) RegisterService(name string, desc *ServiceDesc) { + s.services.register(name, desc) } func (s *Server) Serve(ctx context.Context, l net.Listener) error { @@ -118,12 +121,18 @@ func (s *Server) Serve(ctx context.Context, l net.Listener) error { approved, handshake, err := handshaker.Handshake(ctx, conn) if err != nil { - logrus.WithError(err).Errorf("ttrpc: refusing connection after handshake") + logrus.WithError(err).Error("ttrpc: refusing connection after handshake") + conn.Close() + continue + } + + sc, err := s.newConn(approved, handshake) + if err != nil { + logrus.WithError(err).Error("ttrpc: create connection failed") conn.Close() continue } - sc := s.newConn(approved, handshake) go sc.run(ctx) } } @@ -142,15 +151,20 @@ func (s *Server) Shutdown(ctx context.Context) error { ticker := time.NewTicker(200 * time.Millisecond) defer ticker.Stop() for { - if s.closeIdleConns() { - return lnerr + s.closeIdleConns() + + if s.countConnection() == 0 { + break } + select { case <-ctx.Done(): return ctx.Err() case <-ticker.C: } } + + return lnerr } // Close the server without waiting for active connections. 
@@ -202,11 +216,18 @@ func (s *Server) closeListeners() error { return err } -func (s *Server) addConnection(c *serverConn) { +func (s *Server) addConnection(c *serverConn) error { s.mu.Lock() defer s.mu.Unlock() + select { + case <-s.done: + return ErrServerClosed + default: + } + s.connections[c] = struct{}{} + return nil } func (s *Server) delConnection(c *serverConn) { @@ -223,20 +244,17 @@ func (s *Server) countConnection() int { return len(s.connections) } -func (s *Server) closeIdleConns() bool { +func (s *Server) closeIdleConns() { s.mu.Lock() defer s.mu.Unlock() - quiescent := true + for c := range s.connections { - st, ok := c.getState() - if !ok || st != connStateIdle { - quiescent = false + if st, ok := c.getState(); !ok || st == connStateActive { continue } c.close() delete(s.connections, c) } - return quiescent } type connState int @@ -260,7 +278,7 @@ func (cs connState) String() string { } } -func (s *Server) newConn(conn net.Conn, handshake interface{}) *serverConn { +func (s *Server) newConn(conn net.Conn, handshake interface{}) (*serverConn, error) { c := &serverConn{ server: s, conn: conn, @@ -268,8 +286,11 @@ func (s *Server) newConn(conn net.Conn, handshake interface{}) *serverConn { shutdown: make(chan struct{}), } c.setState(connStateIdle) - s.addConnection(c) - return c + if err := s.addConnection(c); err != nil { + c.close() + return nil, err + } + return c, nil } type serverConn struct { @@ -301,27 +322,25 @@ func (c *serverConn) close() error { func (c *serverConn) run(sctx context.Context) { type ( - request struct { - id uint32 - req *Request - } - response struct { - id uint32 - resp *Response + id uint32 + status *status.Status + data []byte + closeStream bool + streaming bool } ) var ( - ch = newChannel(c.conn) - ctx, cancel = context.WithCancel(sctx) - active int - state connState = connStateIdle - responses = make(chan response) - requests = make(chan request) - recvErr = make(chan error, 1) - shutdown = c.shutdown - done = 
make(chan struct{}) + ch = newChannel(c.conn) + ctx, cancel = context.WithCancel(sctx) + state connState = connStateIdle + responses = make(chan response) + recvErr = make(chan error, 1) + done = make(chan struct{}) + streams = sync.Map{} + active int32 + lastStreamID uint32 ) defer c.conn.Close() @@ -329,27 +348,26 @@ func (c *serverConn) run(sctx context.Context) { defer close(done) defer c.server.delConnection(c) - go func(recvErr chan error) { - defer close(recvErr) - sendImmediate := func(id uint32, st *status.Status) bool { - select { - case responses <- response{ - // even though we've had an invalid stream id, we send it - // back on the same stream id so the client knows which - // stream id was bad. - id: id, - resp: &Response{ - Status: st.Proto(), - }, - }: - return true - case <-c.shutdown: - return false - case <-done: - return false - } + sendStatus := func(id uint32, st *status.Status) bool { + select { + case responses <- response{ + // even though we've had an invalid stream id, we send it + // back on the same stream id so the client knows which + // stream id was bad. + id: id, + status: st, + closeStream: true, + }: + return true + case <-c.shutdown: + return false + case <-done: + return false } + } + go func(recvErr chan error) { + defer close(recvErr) for { select { case <-c.shutdown: @@ -369,112 +387,173 @@ func (c *serverConn) run(sctx context.Context) { // in this case, we send an error for that particular message // when the status is defined. - if !sendImmediate(mh.StreamID, status) { + if !sendStatus(mh.StreamID, status) { return } continue } - if mh.Type != messageTypeRequest { - // we must ignore this for future compat. 
- continue - } - - var req Request - if err := c.server.codec.Unmarshal(p, &req); err != nil { - ch.putmbuf(p) - if !sendImmediate(mh.StreamID, status.Newf(codes.InvalidArgument, "unmarshal request error: %v", err)) { - return - } - continue - } - ch.putmbuf(p) - if mh.StreamID%2 != 1 { // enforce odd client initiated identifiers. - if !sendImmediate(mh.StreamID, status.Newf(codes.InvalidArgument, "StreamID must be odd for client initiated streams")) { + if !sendStatus(mh.StreamID, status.Newf(codes.InvalidArgument, "StreamID must be odd for client initiated streams")) { return } continue } - // Forward the request to the main loop. We don't wait on s.done - // because we have already accepted the client request. - select { - case requests <- request{ - id: mh.StreamID, - req: &req, - }: - case <-done: - return + if mh.Type == messageTypeData { + i, ok := streams.Load(mh.StreamID) + if !ok { + if !sendStatus(mh.StreamID, status.Newf(codes.InvalidArgument, "StreamID is no longer active")) { + return + } + } + sh := i.(*streamHandler) + if mh.Flags&flagNoData != flagNoData { + unmarshal := func(obj interface{}) error { + err := protoUnmarshal(p, obj) + ch.putmbuf(p) + return err + } + + if err := sh.data(unmarshal); err != nil { + if !sendStatus(mh.StreamID, status.Newf(codes.InvalidArgument, "data handling error: %v", err)) { + return + } + } + } + + if mh.Flags&flagRemoteClosed == flagRemoteClosed { + sh.closeSend() + if len(p) > 0 { + if !sendStatus(mh.StreamID, status.Newf(codes.InvalidArgument, "data close message cannot include data")) { + return + } + } + } + } else if mh.Type == messageTypeRequest { + if mh.StreamID <= lastStreamID { + // enforce odd client initiated identifiers. 
+ if !sendStatus(mh.StreamID, status.Newf(codes.InvalidArgument, "StreamID cannot be re-used and must increment")) { + return + } + continue + + } + lastStreamID = mh.StreamID + + // TODO: Make request type configurable + // Unmarshaller which takes in a byte array and returns an interface? + var req Request + if err := c.server.codec.Unmarshal(p, &req); err != nil { + ch.putmbuf(p) + if !sendStatus(mh.StreamID, status.Newf(codes.InvalidArgument, "unmarshal request error: %v", err)) { + return + } + continue + } + ch.putmbuf(p) + + id := mh.StreamID + respond := func(status *status.Status, data []byte, streaming, closeStream bool) error { + select { + case responses <- response{ + id: id, + status: status, + data: data, + closeStream: closeStream, + streaming: streaming, + }: + case <-done: + return ErrClosed + } + return nil + } + sh, err := c.server.services.handle(ctx, &req, respond) + if err != nil { + status, _ := status.FromError(err) + if !sendStatus(mh.StreamID, status) { + return + } + continue + } + + streams.Store(id, sh) + atomic.AddInt32(&active, 1) } + // TODO: else we must ignore this for future compat. log this? 
} }(recvErr) for { - newstate := state - switch { - case active > 0: + var ( + newstate connState + shutdown chan struct{} + ) + + activeN := atomic.LoadInt32(&active) + if activeN > 0 { newstate = connStateActive shutdown = nil - case active == 0: + } else { newstate = connStateIdle shutdown = c.shutdown // only enable this branch in idle mode } - if newstate != state { c.setState(newstate) state = newstate } select { - case request := <-requests: - active++ - go func(id uint32) { - ctx, cancel := getRequestContext(ctx, request.req) - defer cancel() - - p, status := c.server.services.call(ctx, request.req.Service, request.req.Method, request.req.Payload) - resp := &Response{ - Status: status.Proto(), - Payload: p, + case response := <-responses: + if !response.streaming || response.status.Code() != codes.OK { + p, err := c.server.codec.Marshal(&Response{ + Status: response.status.Proto(), + Payload: response.data, + }) + if err != nil { + logrus.WithError(err).Error("failed marshaling response") + return } - select { - case responses <- response{ - id: id, - resp: resp, - }: - case <-done: + if err := ch.send(response.id, messageTypeResponse, 0, p); err != nil { + logrus.WithError(err).Error("failed sending message on channel") + return + } + } else { + var flags uint8 + if response.closeStream { + flags = flagRemoteClosed + } + if response.data == nil { + flags = flags | flagNoData + } + if err := ch.send(response.id, messageTypeData, flags, response.data); err != nil { + logrus.WithError(err).Error("failed sending message on channel") + return } - }(request.id) - case response := <-responses: - p, err := c.server.codec.Marshal(response.resp) - if err != nil { - logrus.WithError(err).Error("failed marshaling response") - return } - if err := ch.send(response.id, messageTypeResponse, p); err != nil { - logrus.WithError(err).Error("failed sending message on channel") - return + if response.closeStream { + // The ttrpc protocol currently does not support the case 
where + // the server is localClosed but not remoteClosed. Once the server + // is closing, the whole stream may be considered finished + streams.Delete(response.id) + atomic.AddInt32(&active, -1) } - - active-- case err := <-recvErr: // TODO(stevvooe): Not wildly clear what we should do in this // branch. Basically, it means that we are no longer receiving // requests due to a terminal error. recvErr = nil // connection is now "closing" - if err == io.EOF || err == io.ErrUnexpectedEOF { + if errors.Is(err, io.EOF) || errors.Is(err, io.ErrUnexpectedEOF) || errors.Is(err, syscall.ECONNRESET) { // The client went away and we should stop processing // requests, so that the client connection is closed return } - if err != nil { - logrus.WithError(err).Error("error receiving message") - } + logrus.WithError(err).Error("error receiving message") + // else, initiate shutdown case <-shutdown: return } diff --git a/cluster-autoscaler/vendor/github.com/containerd/ttrpc/services.go b/cluster-autoscaler/vendor/github.com/containerd/ttrpc/services.go index f359e9611f9e..6aabfbb4d18d 100644 --- a/cluster-autoscaler/vendor/github.com/containerd/ttrpc/services.go +++ b/cluster-autoscaler/vendor/github.com/containerd/ttrpc/services.go @@ -25,43 +25,62 @@ import ( "path" "unsafe" - "github.com/gogo/protobuf/proto" "google.golang.org/grpc/codes" "google.golang.org/grpc/status" + "google.golang.org/protobuf/proto" ) type Method func(ctx context.Context, unmarshal func(interface{}) error) (interface{}, error) +type StreamHandler func(context.Context, StreamServer) (interface{}, error) + +type Stream struct { + Handler StreamHandler + StreamingClient bool + StreamingServer bool +} + type ServiceDesc struct { Methods map[string]Method - - // TODO(stevvooe): Add stream support. 
+ Streams map[string]Stream } type serviceSet struct { - services map[string]ServiceDesc - interceptor UnaryServerInterceptor + services map[string]*ServiceDesc + unaryInterceptor UnaryServerInterceptor + streamInterceptor StreamServerInterceptor } func newServiceSet(interceptor UnaryServerInterceptor) *serviceSet { return &serviceSet{ - services: make(map[string]ServiceDesc), - interceptor: interceptor, + services: make(map[string]*ServiceDesc), + unaryInterceptor: interceptor, + streamInterceptor: defaultStreamServerInterceptor, } } -func (s *serviceSet) register(name string, methods map[string]Method) { +func (s *serviceSet) register(name string, desc *ServiceDesc) { if _, ok := s.services[name]; ok { panic(fmt.Errorf("duplicate service %v registered", name)) } - s.services[name] = ServiceDesc{ - Methods: methods, - } + s.services[name] = desc } -func (s *serviceSet) call(ctx context.Context, serviceName, methodName string, p []byte) ([]byte, *status.Status) { - p, err := s.dispatch(ctx, serviceName, methodName, p) +func (s *serviceSet) unaryCall(ctx context.Context, method Method, info *UnaryServerInfo, data []byte) (p []byte, st *status.Status) { + unmarshal := func(obj interface{}) error { + return protoUnmarshal(data, obj) + } + + resp, err := s.unaryInterceptor(ctx, unmarshal, info, method) + if err == nil { + if isNil(resp) { + err = errors.New("ttrpc: marshal called with nil") + } else { + p, err = protoMarshal(resp) + } + } + st, ok := status.FromError(err) if !ok { st = status.New(convertCode(err), err.Error()) @@ -70,38 +89,142 @@ func (s *serviceSet) call(ctx context.Context, serviceName, methodName string, p return p, st } -func (s *serviceSet) dispatch(ctx context.Context, serviceName, methodName string, p []byte) ([]byte, error) { - method, err := s.resolve(serviceName, methodName) - if err != nil { - return nil, err +func (s *serviceSet) streamCall(ctx context.Context, stream StreamHandler, info *StreamServerInfo, ss StreamServer) (p []byte, st 
*status.Status) { + resp, err := s.streamInterceptor(ctx, ss, info, stream) + if err == nil { + p, err = protoMarshal(resp) + } + st, ok := status.FromError(err) + if !ok { + st = status.New(convertCode(err), err.Error()) } + return +} - unmarshal := func(obj interface{}) error { - switch v := obj.(type) { - case proto.Message: - if err := proto.Unmarshal(p, v); err != nil { - return status.Errorf(codes.Internal, "ttrpc: error unmarshalling payload: %v", err.Error()) +func (s *serviceSet) handle(ctx context.Context, req *Request, respond func(*status.Status, []byte, bool, bool) error) (*streamHandler, error) { + srv, ok := s.services[req.Service] + if !ok { + return nil, status.Errorf(codes.Unimplemented, "service %v", req.Service) + } + + if method, ok := srv.Methods[req.Method]; ok { + go func() { + ctx, cancel := getRequestContext(ctx, req) + defer cancel() + + info := &UnaryServerInfo{ + FullMethod: fullPath(req.Service, req.Method), } - default: - return status.Errorf(codes.Internal, "ttrpc: error unsupported request type: %T", v) + p, st := s.unaryCall(ctx, method, info, req.Payload) + + respond(st, p, false, true) + }() + return nil, nil + } + if stream, ok := srv.Streams[req.Method]; ok { + ctx, cancel := getRequestContext(ctx, req) + info := &StreamServerInfo{ + FullMethod: fullPath(req.Service, req.Method), + StreamingClient: stream.StreamingClient, + StreamingServer: stream.StreamingServer, } - return nil + sh := &streamHandler{ + ctx: ctx, + respond: respond, + recv: make(chan Unmarshaler, 5), + info: info, + } + go func() { + defer cancel() + p, st := s.streamCall(ctx, stream.Handler, info, sh) + respond(st, p, stream.StreamingServer, true) + }() + + if req.Payload != nil { + unmarshal := func(obj interface{}) error { + return protoUnmarshal(req.Payload, obj) + } + if err := sh.data(unmarshal); err != nil { + return nil, err + } + } + + return sh, nil } + return nil, status.Errorf(codes.Unimplemented, "method %v", req.Method) +} + +type streamHandler 
struct { + ctx context.Context + respond func(*status.Status, []byte, bool, bool) error + recv chan Unmarshaler + info *StreamServerInfo + + remoteClosed bool + localClosed bool +} - info := &UnaryServerInfo{ - FullMethod: fullPath(serviceName, methodName), +func (s *streamHandler) closeSend() { + if !s.remoteClosed { + s.remoteClosed = true + close(s.recv) + } +} + +func (s *streamHandler) data(unmarshal Unmarshaler) error { + if s.remoteClosed { + return ErrStreamClosed + } + select { + case s.recv <- unmarshal: + return nil + case <-s.ctx.Done(): + return s.ctx.Err() } +} - resp, err := s.interceptor(ctx, unmarshal, info, method) +func (s *streamHandler) SendMsg(m interface{}) error { + if s.localClosed { + return ErrStreamClosed + } + p, err := protoMarshal(m) if err != nil { - return nil, err + return err } + return s.respond(nil, p, true, false) +} + +func (s *streamHandler) RecvMsg(m interface{}) error { + select { + case unmarshal, ok := <-s.recv: + if !ok { + return io.EOF + } + return unmarshal(m) + case <-s.ctx.Done(): + return s.ctx.Err() - if isNil(resp) { - return nil, errors.New("ttrpc: marshal called with nil") } +} - switch v := resp.(type) { +func protoUnmarshal(p []byte, obj interface{}) error { + switch v := obj.(type) { + case proto.Message: + if err := proto.Unmarshal(p, v); err != nil { + return status.Errorf(codes.Internal, "ttrpc: error unmarshalling payload: %v", err.Error()) + } + default: + return status.Errorf(codes.Internal, "ttrpc: error unsupported request type: %T", v) + } + return nil +} + +func protoMarshal(obj interface{}) ([]byte, error) { + if obj == nil { + return nil, nil + } + + switch v := obj.(type) { case proto.Message: r, err := proto.Marshal(v) if err != nil { @@ -114,20 +237,6 @@ func (s *serviceSet) dispatch(ctx context.Context, serviceName, methodName strin } } -func (s *serviceSet) resolve(service, method string) (Method, error) { - srv, ok := s.services[service] - if !ok { - return nil, 
status.Errorf(codes.Unimplemented, "service %v", service) - } - - mthd, ok := srv.Methods[method] - if !ok { - return nil, status.Errorf(codes.Unimplemented, "method %v", method) - } - - return mthd, nil -} - // convertCode maps stdlib go errors into grpc space. // // This is ripped from the grpc-go code base. diff --git a/cluster-autoscaler/vendor/github.com/containerd/ttrpc/stream.go b/cluster-autoscaler/vendor/github.com/containerd/ttrpc/stream.go new file mode 100644 index 000000000000..739a4c9675b7 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/containerd/ttrpc/stream.go @@ -0,0 +1,84 @@ +/* + Copyright The containerd Authors. + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. 
+*/ + +package ttrpc + +import ( + "context" + "sync" +) + +type streamID uint32 + +type streamMessage struct { + header messageHeader + payload []byte +} + +type stream struct { + id streamID + sender sender + recv chan *streamMessage + + closeOnce sync.Once + recvErr error + recvClose chan struct{} +} + +func newStream(id streamID, send sender) *stream { + return &stream{ + id: id, + sender: send, + recv: make(chan *streamMessage, 1), + recvClose: make(chan struct{}), + } +} + +func (s *stream) closeWithError(err error) error { + s.closeOnce.Do(func() { + if err != nil { + s.recvErr = err + } else { + s.recvErr = ErrClosed + } + close(s.recvClose) + }) + return nil +} + +func (s *stream) send(mt messageType, flags uint8, b []byte) error { + return s.sender.send(uint32(s.id), mt, flags, b) +} + +func (s *stream) receive(ctx context.Context, msg *streamMessage) error { + select { + case <-s.recvClose: + return s.recvErr + default: + } + select { + case <-s.recvClose: + return s.recvErr + case s.recv <- msg: + return nil + case <-ctx.Done(): + return ctx.Err() + } +} + +type sender interface { + send(uint32, messageType, uint8, []byte) error +} diff --git a/cluster-autoscaler/vendor/github.com/containerd/ttrpc/stream_server.go b/cluster-autoscaler/vendor/github.com/containerd/ttrpc/stream_server.go new file mode 100644 index 000000000000..b6d1ba720a43 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/containerd/ttrpc/stream_server.go @@ -0,0 +1,22 @@ +/* + Copyright The containerd Authors. + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+ See the License for the specific language governing permissions and + limitations under the License. +*/ + +package ttrpc + +type StreamServer interface { + SendMsg(m interface{}) error + RecvMsg(m interface{}) error +} diff --git a/cluster-autoscaler/vendor/github.com/containerd/ttrpc/test.proto b/cluster-autoscaler/vendor/github.com/containerd/ttrpc/test.proto new file mode 100644 index 000000000000..0e114d556886 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/containerd/ttrpc/test.proto @@ -0,0 +1,16 @@ +syntax = "proto3"; + +package ttrpc; + +option go_package = "github.com/containerd/ttrpc/internal"; + +message TestPayload { + string foo = 1; + int64 deadline = 2; + string metadata = 3; +} + +message EchoPayload { + int64 seq = 1; + string msg = 2; +} diff --git a/cluster-autoscaler/vendor/github.com/containerd/ttrpc/types.go b/cluster-autoscaler/vendor/github.com/containerd/ttrpc/types.go deleted file mode 100644 index 9a1c19a7238d..000000000000 --- a/cluster-autoscaler/vendor/github.com/containerd/ttrpc/types.go +++ /dev/null @@ -1,63 +0,0 @@ -/* - Copyright The containerd Authors. - - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. 
-*/ - -package ttrpc - -import ( - "fmt" - - spb "google.golang.org/genproto/googleapis/rpc/status" -) - -type Request struct { - Service string `protobuf:"bytes,1,opt,name=service,proto3"` - Method string `protobuf:"bytes,2,opt,name=method,proto3"` - Payload []byte `protobuf:"bytes,3,opt,name=payload,proto3"` - TimeoutNano int64 `protobuf:"varint,4,opt,name=timeout_nano,proto3"` - Metadata []*KeyValue `protobuf:"bytes,5,rep,name=metadata,proto3"` -} - -func (r *Request) Reset() { *r = Request{} } -func (r *Request) String() string { return fmt.Sprintf("%+#v", r) } -func (r *Request) ProtoMessage() {} - -type Response struct { - Status *spb.Status `protobuf:"bytes,1,opt,name=status,proto3"` - Payload []byte `protobuf:"bytes,2,opt,name=payload,proto3"` -} - -func (r *Response) Reset() { *r = Response{} } -func (r *Response) String() string { return fmt.Sprintf("%+#v", r) } -func (r *Response) ProtoMessage() {} - -type StringList struct { - List []string `protobuf:"bytes,1,rep,name=list,proto3"` -} - -func (r *StringList) Reset() { *r = StringList{} } -func (r *StringList) String() string { return fmt.Sprintf("%+#v", r) } -func (r *StringList) ProtoMessage() {} - -func makeStringList(item ...string) StringList { return StringList{List: item} } - -type KeyValue struct { - Key string `protobuf:"bytes,1,opt,name=key,proto3"` - Value string `protobuf:"bytes,2,opt,name=value,proto3"` -} - -func (m *KeyValue) Reset() { *m = KeyValue{} } -func (*KeyValue) ProtoMessage() {} -func (m *KeyValue) String() string { return fmt.Sprintf("%+#v", m) } diff --git a/cluster-autoscaler/vendor/github.com/containerd/ttrpc/unixcreds_linux.go b/cluster-autoscaler/vendor/github.com/containerd/ttrpc/unixcreds_linux.go index a59dad60cd55..c82c9f9d4c74 100644 --- a/cluster-autoscaler/vendor/github.com/containerd/ttrpc/unixcreds_linux.go +++ b/cluster-autoscaler/vendor/github.com/containerd/ttrpc/unixcreds_linux.go @@ -29,7 +29,7 @@ import ( type UnixCredentialsFunc func(*unix.Ucred) error -func 
(fn UnixCredentialsFunc) Handshake(ctx context.Context, conn net.Conn) (net.Conn, interface{}, error) { +func (fn UnixCredentialsFunc) Handshake(_ context.Context, conn net.Conn) (net.Conn, interface{}, error) { uc, err := requireUnixSocket(conn) if err != nil { return nil, nil, fmt.Errorf("ttrpc.UnixCredentialsFunc: require unix socket: %w", err) @@ -50,7 +50,7 @@ func (fn UnixCredentialsFunc) Handshake(ctx context.Context, conn net.Conn) (net } if ucredErr != nil { - return nil, nil, fmt.Errorf("ttrpc.UnixCredentialsFunc: failed to retrieve socket peer credentials: %w", err) + return nil, nil, fmt.Errorf("ttrpc.UnixCredentialsFunc: failed to retrieve socket peer credentials: %w", ucredErr) } if err := fn(ucred); err != nil { @@ -88,10 +88,6 @@ func UnixSocketRequireSameUser() UnixCredentialsFunc { return UnixSocketRequireUidGid(euid, egid) } -func requireRoot(ucred *unix.Ucred) error { - return requireUidGid(ucred, 0, 0) -} - func requireUidGid(ucred *unix.Ucred, uid, gid int) error { if (uid != -1 && uint32(uid) != ucred.Uid) || (gid != -1 && uint32(gid) != ucred.Gid) { return fmt.Errorf("ttrpc: invalid credentials: %v", syscall.EPERM) diff --git a/cluster-autoscaler/vendor/github.com/docker/distribution/reference/reference.go b/cluster-autoscaler/vendor/github.com/docker/distribution/reference/reference.go index 8c0c23b2fe1b..b7cd00b0d68e 100644 --- a/cluster-autoscaler/vendor/github.com/docker/distribution/reference/reference.go +++ b/cluster-autoscaler/vendor/github.com/docker/distribution/reference/reference.go @@ -3,13 +3,13 @@ // // Grammar // -// reference := name [ ":" tag ] [ "@" digest ] +// reference := name [ ":" tag ] [ "@" digest ] // name := [domain '/'] path-component ['/' path-component]* // domain := domain-component ['.' 
domain-component]* [':' port-number] // domain-component := /([a-zA-Z0-9]|[a-zA-Z0-9][a-zA-Z0-9-]*[a-zA-Z0-9])/ // port-number := /[0-9]+/ // path-component := alpha-numeric [separator alpha-numeric]* -// alpha-numeric := /[a-z0-9]+/ +// alpha-numeric := /[a-z0-9]+/ // separator := /[_.]|__|[-]*/ // // tag := /[\w][\w.-]{0,127}/ diff --git a/cluster-autoscaler/vendor/github.com/gardener/machine-controller-manager-provider-aws/pkg/aws/apis/aws_provider_spec.go b/cluster-autoscaler/vendor/github.com/gardener/machine-controller-manager-provider-aws/pkg/aws/apis/aws_provider_spec.go index ef98d71316fc..b9398465f4b3 100644 --- a/cluster-autoscaler/vendor/github.com/gardener/machine-controller-manager-provider-aws/pkg/aws/apis/aws_provider_spec.go +++ b/cluster-autoscaler/vendor/github.com/gardener/machine-controller-manager-provider-aws/pkg/aws/apis/aws_provider_spec.go @@ -134,14 +134,17 @@ type AWSBlockDeviceMappingSpec struct { VirtualName string `json:"virtualName,omitempty"` } -// AWSCapacityReservationTargetSpec allows to target an AWS Capacity Reservation directly or -// indirectly using an AWS Resource Group +// AWSCapacityReservationTargetSpec allows targeting an AWS Capacity Reservation directly, or indirectly via an AWS resource group. +// See https://docs.aws.amazon.com/sdk-for-go/api/service/ec2/#CapacityReservationSpecification for additional information. type AWSCapacityReservationTargetSpec struct { - // The ID of the Capacity Reservation in which to run the instance. + // CapacityReservationPreference indicates the instance's Capacity Reservation preferences (possible values are 'open' or 'none'). + CapacityReservationPreference *string `json:"capacityReservationPreference,omitempty"` + + // CapacityReservationID is the ID of the Capacity Reservation in which to run the instance. CapacityReservationID *string `json:"capacityReservationId,omitempty"` - // The ARN of the Capacity Reservation resource group in which to run the instance. 
+ // CapacityReservationResourceGroupArn is the ARN of the Capacity Reservation resource group in which to run the instance. CapacityReservationResourceGroupArn *string `json:"capacityReservationResourceGroupArn,omitempty"` } diff --git a/cluster-autoscaler/vendor/github.com/gardener/machine-controller-manager-provider-azure/pkg/azure/apis/azure_provider_spec.go b/cluster-autoscaler/vendor/github.com/gardener/machine-controller-manager-provider-azure/pkg/azure/apis/azure_provider_spec.go index 331cd2b63647..2a49d5181d5d 100644 --- a/cluster-autoscaler/vendor/github.com/gardener/machine-controller-manager-provider-azure/pkg/azure/apis/azure_provider_spec.go +++ b/cluster-autoscaler/vendor/github.com/gardener/machine-controller-manager-provider-azure/pkg/azure/apis/azure_provider_spec.go @@ -28,9 +28,9 @@ const ( // AzureAlternativeTenantID is a constant for a key name of a secret containing the Azure credentials (tenant id). AzureAlternativeTenantID = "tenantID" - // MachineSetKindAvailabilitySet is the machine set kind for AvailabilitySet + // MachineSetKindAvailabilitySet is the machine set kind for AvailabilitySet. MachineSetKindAvailabilitySet string = "availabilityset" - // MachineSetKindVMO is the machine set kind for VirtualMachineScaleSet Orchestration Mode VM (VMO) + // MachineSetKindVMO is the machine set kind for VirtualMachineScaleSet Orchestration Mode VM (VMO). MachineSetKindVMO string = "vmo" ) @@ -43,7 +43,7 @@ type AzureProviderSpec struct { SubnetInfo AzureSubnetInfo `json:"subnetInfo,omitempty"` } -// AzureVirtualMachineProperties is describes the properties of a Virtual Machine. +// AzureVirtualMachineProperties describes the properties of a Virtual Machine. 
type AzureVirtualMachineProperties struct { HardwareProfile AzureHardwareProfile `json:"hardwareProfile,omitempty"` StorageProfile AzureStorageProfile `json:"storageProfile,omitempty"` @@ -55,31 +55,31 @@ type AzureVirtualMachineProperties struct { MachineSet *AzureMachineSetConfig `json:"machineSet,omitempty"` } -// AzureHardwareProfile is specifies the hardware settings for the virtual machine. -// Refer github.com/Azure/azure-sdk-for-go/arm/compute/models.go for VMSizes +// AzureHardwareProfile specifies the hardware settings for the virtual machine. +// Refer to the [azure-sdk-for-go repository](https://github.com/Azure/azure-sdk-for-go/blob/main/sdk/resourcemanager/compute/armcompute/models.go) for VMSizes. type AzureHardwareProfile struct { VMSize string `json:"vmSize,omitempty"` } -// AzureMachineSetConfig contains the information about the machine set +// AzureMachineSetConfig contains the information about the machine set. type AzureMachineSetConfig struct { ID string `json:"id"` Kind string `json:"kind"` } -// AzureStorageProfile is specifies the storage settings for the virtual machine disks. +// AzureStorageProfile specifies the storage settings for the virtual machine disks. type AzureStorageProfile struct { ImageReference AzureImageReference `json:"imageReference,omitempty"` OsDisk AzureOSDisk `json:"osDisk,omitempty"` DataDisks []AzureDataDisk `json:"dataDisks,omitempty"` } -// AzureImageReference is specifies information about the image to use. You can specify information about platform images, +// AzureImageReference specifies information about the image to use. You can specify information about platform images, // marketplace images, community images, shared gallery images or virtual machine images. This element is required when you want to use a platform image, // marketplace image, community image, shared gallery image or virtual machine image, but is not used in other creation operations. 
type AzureImageReference struct { ID string `json:"id,omitempty"` - // Uniform Resource Name of the OS image to be used , it has the format 'publisher:offer:sku:version' + // Uniform Resource Name of the OS image to be used, it has the format 'publisher:offer:sku:version' URN *string `json:"urn,omitempty"` // CommunityGalleryImageID is the id of the OS image to be used, hosted within an Azure Community Image Gallery. CommunityGalleryImageID *string `json:"communityGalleryImageID,omitempty"` @@ -87,9 +87,9 @@ type AzureImageReference struct { SharedGalleryImageID *string `json:"sharedGalleryImageID,omitempty"` } -// AzureOSDisk is specifies information about the operating system disk used by the virtual machine. For more -// information about disks, see [About disks and VHDs for Azure virtual -// machines](https://docs.microsoft.com/azure/virtual-machines/virtual-machines-windows-about-disks-vhds?toc=%2fazure%2fvirtual-machines%2fwindows%2ftoc.json). +// AzureOSDisk specifies information about the operating system disk used by the virtual machine. For more +// information about disks, see [Introduction to Azure Managed +// Disks](https://learn.microsoft.com/en-us/azure/virtual-machines/managed-disks-overview). type AzureOSDisk struct { Name string `json:"name,omitempty"` Caching string `json:"caching,omitempty"` @@ -113,7 +113,7 @@ type AzureManagedDiskParameters struct { StorageAccountType string `json:"storageAccountType,omitempty"` } -// AzureOSProfile is specifies the operating system settings for the virtual machine. +// AzureOSProfile specifies the operating system settings for the virtual machine. type AzureOSProfile struct { ComputerName string `json:"computerName,omitempty"` AdminUsername string `json:"adminUsername,omitempty"` @@ -122,17 +122,15 @@ type AzureOSProfile struct { LinuxConfiguration AzureLinuxConfiguration `json:"linuxConfiguration,omitempty"` } -// AzureLinuxConfiguration is specifies the Linux operating system settings on the virtual machine. For a list of +// AzureLinuxConfiguration specifies the Linux operating system settings on the virtual machine. For a list of // supported Linux distributions, see [Linux on Azure-Endorsed -// Distributions](https://docs.microsoft.com/azure/virtual-machines/virtual-machines-linux-endorsed-distros?toc=%2fazure%2fvirtual-machines%2flinux%2ftoc.json) -// For running non-endorsed distributions, see [Information for Non-Endorsed -// Distributions](https://docs.microsoft.com/azure/virtual-machines/virtual-machines-linux-create-upload-generic?toc=%2fazure%2fvirtual-machines%2flinux%2ftoc.json). +// Distributions](https://learn.microsoft.com/en-us/azure/virtual-machines/linux/endorsed-distros). type AzureLinuxConfiguration struct { DisablePasswordAuthentication bool `json:"disablePasswordAuthentication,omitempty"` SSH AzureSSHConfiguration `json:"ssh,omitempty"` } -// AzureSSHConfiguration is SSH configuration for Linux based VMs running on Azure +// AzureSSHConfiguration is SSH configuration for Linux based VMs running on Azure. type AzureSSHConfiguration struct { PublicKeys AzureSSHPublicKey `json:"publicKeys,omitempty"` } @@ -144,19 +142,19 @@ type AzureSSHPublicKey struct { KeyData string `json:"keyData,omitempty"` } -// AzureNetworkProfile is specifies the network interfaces of the virtual machine. +// AzureNetworkProfile specifies the network interfaces of the virtual machine. type AzureNetworkProfile struct { NetworkInterfaces AzureNetworkInterfaceReference `json:"networkInterfaces,omitempty"` AcceleratedNetworking *bool `json:"acceleratedNetworking,omitempty"` } -// AzureNetworkInterfaceReference is describes a network interface reference. +// AzureNetworkInterfaceReference describes a network interface reference. type AzureNetworkInterfaceReference struct { ID string `json:"id,omitempty"` *AzureNetworkInterfaceReferenceProperties `json:"properties,omitempty"` } -// AzureNetworkInterfaceReferenceProperties is describes a network interface reference properties. +// AzureNetworkInterfaceReferenceProperties describes the properties of a network interface reference.
type AzureNetworkInterfaceReferenceProperties struct { Primary bool `json:"primary,omitempty"` } @@ -166,7 +164,7 @@ type AzureSubResource struct { ID string `json:"id,omitempty"` } -// AzureSubnetInfo is the information containing the subnet details +// AzureSubnetInfo is the information containing the subnet details. type AzureSubnetInfo struct { VnetName string `json:"vnetName,omitempty"` VnetResourceGroup *string `json:"vnetResourceGroup,omitempty"` diff --git a/cluster-autoscaler/vendor/github.com/ghodss/yaml/.gitignore b/cluster-autoscaler/vendor/github.com/ghodss/yaml/.gitignore deleted file mode 100644 index e256a31e00a5..000000000000 --- a/cluster-autoscaler/vendor/github.com/ghodss/yaml/.gitignore +++ /dev/null @@ -1,20 +0,0 @@ -# OSX leaves these everywhere on SMB shares -._* - -# Eclipse files -.classpath -.project -.settings/** - -# Emacs save files -*~ - -# Vim-related files -[._]*.s[a-w][a-z] -[._]s[a-w][a-z] -*.un~ -Session.vim -.netrwhist - -# Go test binaries -*.test diff --git a/cluster-autoscaler/vendor/github.com/ghodss/yaml/.travis.yml b/cluster-autoscaler/vendor/github.com/ghodss/yaml/.travis.yml deleted file mode 100644 index 0e9d6edc010a..000000000000 --- a/cluster-autoscaler/vendor/github.com/ghodss/yaml/.travis.yml +++ /dev/null @@ -1,7 +0,0 @@ -language: go -go: - - 1.3 - - 1.4 -script: - - go test - - go build diff --git a/cluster-autoscaler/vendor/github.com/ghodss/yaml/README.md b/cluster-autoscaler/vendor/github.com/ghodss/yaml/README.md deleted file mode 100644 index 0200f75b4d12..000000000000 --- a/cluster-autoscaler/vendor/github.com/ghodss/yaml/README.md +++ /dev/null @@ -1,121 +0,0 @@ -# YAML marshaling and unmarshaling support for Go - -[![Build Status](https://travis-ci.org/ghodss/yaml.svg)](https://travis-ci.org/ghodss/yaml) - -## Introduction - -A wrapper around [go-yaml](https://github.com/go-yaml/yaml) designed to enable a better way of handling YAML when marshaling to and from structs. 
- -In short, this library first converts YAML to JSON using go-yaml and then uses `json.Marshal` and `json.Unmarshal` to convert to or from the struct. This means that it effectively reuses the JSON struct tags as well as the custom JSON methods `MarshalJSON` and `UnmarshalJSON` unlike go-yaml. For a detailed overview of the rationale behind this method, [see this blog post](http://ghodss.com/2014/the-right-way-to-handle-yaml-in-golang/). - -## Compatibility - -This package uses [go-yaml](https://github.com/go-yaml/yaml) and therefore supports [everything go-yaml supports](https://github.com/go-yaml/yaml#compatibility). - -## Caveats - -**Caveat #1:** When using `yaml.Marshal` and `yaml.Unmarshal`, binary data should NOT be preceded with the `!!binary` YAML tag. If you do, go-yaml will convert the binary data from base64 to native binary data, which is not compatible with JSON. You can still use binary in your YAML files though - just store them without the `!!binary` tag and decode the base64 in your code (e.g. in the custom JSON methods `MarshalJSON` and `UnmarshalJSON`). This also has the benefit that your YAML and your JSON binary data will be decoded exactly the same way. As an example: - -``` -BAD: - exampleKey: !!binary gIGC - -GOOD: - exampleKey: gIGC -... and decode the base64 data in your code. -``` - -**Caveat #2:** When using `YAMLToJSON` directly, maps with keys that are maps will result in an error since this is not supported by JSON. This error will occur in `Unmarshal` as well since you can't unmarshal map keys anyways since struct fields can't be keys. - -## Installation and usage - -To install, run: - -``` -$ go get github.com/ghodss/yaml -``` - -And import using: - -``` -import "github.com/ghodss/yaml" -``` - -Usage is very similar to the JSON library: - -```go -package main - -import ( - "fmt" - - "github.com/ghodss/yaml" -) - -type Person struct { - Name string `json:"name"` // Affects YAML field names too. 
- Age int `json:"age"` -} - -func main() { - // Marshal a Person struct to YAML. - p := Person{"John", 30} - y, err := yaml.Marshal(p) - if err != nil { - fmt.Printf("err: %v\n", err) - return - } - fmt.Println(string(y)) - /* Output: - age: 30 - name: John - */ - - // Unmarshal the YAML back into a Person struct. - var p2 Person - err = yaml.Unmarshal(y, &p2) - if err != nil { - fmt.Printf("err: %v\n", err) - return - } - fmt.Println(p2) - /* Output: - {John 30} - */ -} -``` - -`yaml.YAMLToJSON` and `yaml.JSONToYAML` methods are also available: - -```go -package main - -import ( - "fmt" - - "github.com/ghodss/yaml" -) - -func main() { - j := []byte(`{"name": "John", "age": 30}`) - y, err := yaml.JSONToYAML(j) - if err != nil { - fmt.Printf("err: %v\n", err) - return - } - fmt.Println(string(y)) - /* Output: - name: John - age: 30 - */ - j2, err := yaml.YAMLToJSON(y) - if err != nil { - fmt.Printf("err: %v\n", err) - return - } - fmt.Println(string(j2)) - /* Output: - {"age":30,"name":"John"} - */ -} -``` diff --git a/cluster-autoscaler/vendor/github.com/ghodss/yaml/fields.go b/cluster-autoscaler/vendor/github.com/ghodss/yaml/fields.go deleted file mode 100644 index 58600740266c..000000000000 --- a/cluster-autoscaler/vendor/github.com/ghodss/yaml/fields.go +++ /dev/null @@ -1,501 +0,0 @@ -// Copyright 2013 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. -package yaml - -import ( - "bytes" - "encoding" - "encoding/json" - "reflect" - "sort" - "strings" - "sync" - "unicode" - "unicode/utf8" -) - -// indirect walks down v allocating pointers as needed, -// until it gets to a non-pointer. -// if it encounters an Unmarshaler, indirect stops and returns that. -// if decodingNull is true, indirect stops at the last pointer so it can be set to nil. 
-func indirect(v reflect.Value, decodingNull bool) (json.Unmarshaler, encoding.TextUnmarshaler, reflect.Value) { - // If v is a named type and is addressable, - // start with its address, so that if the type has pointer methods, - // we find them. - if v.Kind() != reflect.Ptr && v.Type().Name() != "" && v.CanAddr() { - v = v.Addr() - } - for { - // Load value from interface, but only if the result will be - // usefully addressable. - if v.Kind() == reflect.Interface && !v.IsNil() { - e := v.Elem() - if e.Kind() == reflect.Ptr && !e.IsNil() && (!decodingNull || e.Elem().Kind() == reflect.Ptr) { - v = e - continue - } - } - - if v.Kind() != reflect.Ptr { - break - } - - if v.Elem().Kind() != reflect.Ptr && decodingNull && v.CanSet() { - break - } - if v.IsNil() { - if v.CanSet() { - v.Set(reflect.New(v.Type().Elem())) - } else { - v = reflect.New(v.Type().Elem()) - } - } - if v.Type().NumMethod() > 0 { - if u, ok := v.Interface().(json.Unmarshaler); ok { - return u, nil, reflect.Value{} - } - if u, ok := v.Interface().(encoding.TextUnmarshaler); ok { - return nil, u, reflect.Value{} - } - } - v = v.Elem() - } - return nil, nil, v -} - -// A field represents a single field found in a struct. -type field struct { - name string - nameBytes []byte // []byte(name) - equalFold func(s, t []byte) bool // bytes.EqualFold or equivalent - - tag bool - index []int - typ reflect.Type - omitEmpty bool - quoted bool -} - -func fillField(f field) field { - f.nameBytes = []byte(f.name) - f.equalFold = foldFunc(f.nameBytes) - return f -} - -// byName sorts field by name, breaking ties with depth, -// then breaking ties with "name came from json tag", then -// breaking ties with index sequence. 
-type byName []field - -func (x byName) Len() int { return len(x) } - -func (x byName) Swap(i, j int) { x[i], x[j] = x[j], x[i] } - -func (x byName) Less(i, j int) bool { - if x[i].name != x[j].name { - return x[i].name < x[j].name - } - if len(x[i].index) != len(x[j].index) { - return len(x[i].index) < len(x[j].index) - } - if x[i].tag != x[j].tag { - return x[i].tag - } - return byIndex(x).Less(i, j) -} - -// byIndex sorts field by index sequence. -type byIndex []field - -func (x byIndex) Len() int { return len(x) } - -func (x byIndex) Swap(i, j int) { x[i], x[j] = x[j], x[i] } - -func (x byIndex) Less(i, j int) bool { - for k, xik := range x[i].index { - if k >= len(x[j].index) { - return false - } - if xik != x[j].index[k] { - return xik < x[j].index[k] - } - } - return len(x[i].index) < len(x[j].index) -} - -// typeFields returns a list of fields that JSON should recognize for the given type. -// The algorithm is breadth-first search over the set of structs to include - the top struct -// and then any reachable anonymous structs. -func typeFields(t reflect.Type) []field { - // Anonymous fields to explore at the current level and the next. - current := []field{} - next := []field{{typ: t}} - - // Count of queued names for current level and the next. - count := map[reflect.Type]int{} - nextCount := map[reflect.Type]int{} - - // Types already visited at an earlier level. - visited := map[reflect.Type]bool{} - - // Fields found. - var fields []field - - for len(next) > 0 { - current, next = next, current[:0] - count, nextCount = nextCount, map[reflect.Type]int{} - - for _, f := range current { - if visited[f.typ] { - continue - } - visited[f.typ] = true - - // Scan f.typ for fields to include. 
- for i := 0; i < f.typ.NumField(); i++ { - sf := f.typ.Field(i) - if sf.PkgPath != "" { // unexported - continue - } - tag := sf.Tag.Get("json") - if tag == "-" { - continue - } - name, opts := parseTag(tag) - if !isValidTag(name) { - name = "" - } - index := make([]int, len(f.index)+1) - copy(index, f.index) - index[len(f.index)] = i - - ft := sf.Type - if ft.Name() == "" && ft.Kind() == reflect.Ptr { - // Follow pointer. - ft = ft.Elem() - } - - // Record found field and index sequence. - if name != "" || !sf.Anonymous || ft.Kind() != reflect.Struct { - tagged := name != "" - if name == "" { - name = sf.Name - } - fields = append(fields, fillField(field{ - name: name, - tag: tagged, - index: index, - typ: ft, - omitEmpty: opts.Contains("omitempty"), - quoted: opts.Contains("string"), - })) - if count[f.typ] > 1 { - // If there were multiple instances, add a second, - // so that the annihilation code will see a duplicate. - // It only cares about the distinction between 1 or 2, - // so don't bother generating any more copies. - fields = append(fields, fields[len(fields)-1]) - } - continue - } - - // Record new anonymous struct to explore in next round. - nextCount[ft]++ - if nextCount[ft] == 1 { - next = append(next, fillField(field{name: ft.Name(), index: index, typ: ft})) - } - } - } - } - - sort.Sort(byName(fields)) - - // Delete all fields that are hidden by the Go rules for embedded fields, - // except that fields with JSON tags are promoted. - - // The fields are sorted in primary order of name, secondary order - // of field index length. Loop over names; for each name, delete - // hidden fields by choosing the one dominant field that survives. - out := fields[:0] - for advance, i := 0, 0; i < len(fields); i += advance { - // One iteration per name. - // Find the sequence of fields with the name of this first field. 
- fi := fields[i] - name := fi.name - for advance = 1; i+advance < len(fields); advance++ { - fj := fields[i+advance] - if fj.name != name { - break - } - } - if advance == 1 { // Only one field with this name - out = append(out, fi) - continue - } - dominant, ok := dominantField(fields[i : i+advance]) - if ok { - out = append(out, dominant) - } - } - - fields = out - sort.Sort(byIndex(fields)) - - return fields -} - -// dominantField looks through the fields, all of which are known to -// have the same name, to find the single field that dominates the -// others using Go's embedding rules, modified by the presence of -// JSON tags. If there are multiple top-level fields, the boolean -// will be false: This condition is an error in Go and we skip all -// the fields. -func dominantField(fields []field) (field, bool) { - // The fields are sorted in increasing index-length order. The winner - // must therefore be one with the shortest index length. Drop all - // longer entries, which is easy: just truncate the slice. - length := len(fields[0].index) - tagged := -1 // Index of first tagged field. - for i, f := range fields { - if len(f.index) > length { - fields = fields[:i] - break - } - if f.tag { - if tagged >= 0 { - // Multiple tagged fields at the same level: conflict. - // Return no field. - return field{}, false - } - tagged = i - } - } - if tagged >= 0 { - return fields[tagged], true - } - // All remaining fields have the same length. If there's more than one, - // we have a conflict (two fields named "X" at the same level) and we - // return no field. - if len(fields) > 1 { - return field{}, false - } - return fields[0], true -} - -var fieldCache struct { - sync.RWMutex - m map[reflect.Type][]field -} - -// cachedTypeFields is like typeFields but uses a cache to avoid repeated work. 
-func cachedTypeFields(t reflect.Type) []field { - fieldCache.RLock() - f := fieldCache.m[t] - fieldCache.RUnlock() - if f != nil { - return f - } - - // Compute fields without lock. - // Might duplicate effort but won't hold other computations back. - f = typeFields(t) - if f == nil { - f = []field{} - } - - fieldCache.Lock() - if fieldCache.m == nil { - fieldCache.m = map[reflect.Type][]field{} - } - fieldCache.m[t] = f - fieldCache.Unlock() - return f -} - -func isValidTag(s string) bool { - if s == "" { - return false - } - for _, c := range s { - switch { - case strings.ContainsRune("!#$%&()*+-./:<=>?@[]^_{|}~ ", c): - // Backslash and quote chars are reserved, but - // otherwise any punctuation chars are allowed - // in a tag name. - default: - if !unicode.IsLetter(c) && !unicode.IsDigit(c) { - return false - } - } - } - return true -} - -const ( - caseMask = ^byte(0x20) // Mask to ignore case in ASCII. - kelvin = '\u212a' - smallLongEss = '\u017f' -) - -// foldFunc returns one of four different case folding equivalence -// functions, from most general (and slow) to fastest: -// -// 1) bytes.EqualFold, if the key s contains any non-ASCII UTF-8 -// 2) equalFoldRight, if s contains special folding ASCII ('k', 'K', 's', 'S') -// 3) asciiEqualFold, no special, but includes non-letters (including _) -// 4) simpleLetterEqualFold, no specials, no non-letters. -// -// The letters S and K are special because they map to 3 runes, not just 2: -// * S maps to s and to U+017F 'ſ' Latin small letter long s -// * k maps to K and to U+212A 'K' Kelvin sign -// See http://play.golang.org/p/tTxjOc0OGo -// -// The returned function is specialized for matching against s and -// should only be given s. It's not curried for performance reasons. 
-func foldFunc(s []byte) func(s, t []byte) bool { - nonLetter := false - special := false // special letter - for _, b := range s { - if b >= utf8.RuneSelf { - return bytes.EqualFold - } - upper := b & caseMask - if upper < 'A' || upper > 'Z' { - nonLetter = true - } else if upper == 'K' || upper == 'S' { - // See above for why these letters are special. - special = true - } - } - if special { - return equalFoldRight - } - if nonLetter { - return asciiEqualFold - } - return simpleLetterEqualFold -} - -// equalFoldRight is a specialization of bytes.EqualFold when s is -// known to be all ASCII (including punctuation), but contains an 's', -// 'S', 'k', or 'K', requiring a Unicode fold on the bytes in t. -// See comments on foldFunc. -func equalFoldRight(s, t []byte) bool { - for _, sb := range s { - if len(t) == 0 { - return false - } - tb := t[0] - if tb < utf8.RuneSelf { - if sb != tb { - sbUpper := sb & caseMask - if 'A' <= sbUpper && sbUpper <= 'Z' { - if sbUpper != tb&caseMask { - return false - } - } else { - return false - } - } - t = t[1:] - continue - } - // sb is ASCII and t is not. t must be either kelvin - // sign or long s; sb must be s, S, k, or K. - tr, size := utf8.DecodeRune(t) - switch sb { - case 's', 'S': - if tr != smallLongEss { - return false - } - case 'k', 'K': - if tr != kelvin { - return false - } - default: - return false - } - t = t[size:] - - } - if len(t) > 0 { - return false - } - return true -} - -// asciiEqualFold is a specialization of bytes.EqualFold for use when -// s is all ASCII (but may contain non-letters) and contains no -// special-folding letters. -// See comments on foldFunc. 
-func asciiEqualFold(s, t []byte) bool { - if len(s) != len(t) { - return false - } - for i, sb := range s { - tb := t[i] - if sb == tb { - continue - } - if ('a' <= sb && sb <= 'z') || ('A' <= sb && sb <= 'Z') { - if sb&caseMask != tb&caseMask { - return false - } - } else { - return false - } - } - return true -} - -// simpleLetterEqualFold is a specialization of bytes.EqualFold for -// use when s is all ASCII letters (no underscores, etc) and also -// doesn't contain 'k', 'K', 's', or 'S'. -// See comments on foldFunc. -func simpleLetterEqualFold(s, t []byte) bool { - if len(s) != len(t) { - return false - } - for i, b := range s { - if b&caseMask != t[i]&caseMask { - return false - } - } - return true -} - -// tagOptions is the string following a comma in a struct field's "json" -// tag, or the empty string. It does not include the leading comma. -type tagOptions string - -// parseTag splits a struct field's json tag into its name and -// comma-separated options. -func parseTag(tag string) (string, tagOptions) { - if idx := strings.Index(tag, ","); idx != -1 { - return tag[:idx], tagOptions(tag[idx+1:]) - } - return tag, tagOptions("") -} - -// Contains reports whether a comma-separated list of options -// contains a particular substr flag. substr must be surrounded by a -// string boundary or commas. 
-func (o tagOptions) Contains(optionName string) bool { - if len(o) == 0 { - return false - } - s := string(o) - for s != "" { - var next string - i := strings.Index(s, ",") - if i >= 0 { - s, next = s[:i], s[i+1:] - } - if s == optionName { - return true - } - s = next - } - return false -} diff --git a/cluster-autoscaler/vendor/github.com/ghodss/yaml/yaml.go b/cluster-autoscaler/vendor/github.com/ghodss/yaml/yaml.go deleted file mode 100644 index 4fb4054a8b74..000000000000 --- a/cluster-autoscaler/vendor/github.com/ghodss/yaml/yaml.go +++ /dev/null @@ -1,277 +0,0 @@ -package yaml - -import ( - "bytes" - "encoding/json" - "fmt" - "reflect" - "strconv" - - "gopkg.in/yaml.v2" -) - -// Marshals the object into JSON then converts JSON to YAML and returns the -// YAML. -func Marshal(o interface{}) ([]byte, error) { - j, err := json.Marshal(o) - if err != nil { - return nil, fmt.Errorf("error marshaling into JSON: %v", err) - } - - y, err := JSONToYAML(j) - if err != nil { - return nil, fmt.Errorf("error converting JSON to YAML: %v", err) - } - - return y, nil -} - -// Converts YAML to JSON then uses JSON to unmarshal into an object. -func Unmarshal(y []byte, o interface{}) error { - vo := reflect.ValueOf(o) - j, err := yamlToJSON(y, &vo) - if err != nil { - return fmt.Errorf("error converting YAML to JSON: %v", err) - } - - err = json.Unmarshal(j, o) - if err != nil { - return fmt.Errorf("error unmarshaling JSON: %v", err) - } - - return nil -} - -// Convert JSON to YAML. -func JSONToYAML(j []byte) ([]byte, error) { - // Convert the JSON to an object. - var jsonObj interface{} - // We are using yaml.Unmarshal here (instead of json.Unmarshal) because the - // Go JSON library doesn't try to pick the right number type (int, float, - // etc.) when unmarshalling to interface{}, it just picks float64 - // universally. go-yaml does go through the effort of picking the right - // number type, so we can preserve number type throughout this process. 
- err := yaml.Unmarshal(j, &jsonObj) - if err != nil { - return nil, err - } - - // Marshal this object into YAML. - return yaml.Marshal(jsonObj) -} - -// Convert YAML to JSON. Since JSON is a subset of YAML, passing JSON through -// this method should be a no-op. -// -// Things YAML can do that are not supported by JSON: -// * In YAML you can have binary and null keys in your maps. These are invalid -// in JSON. (int and float keys are converted to strings.) -// * Binary data in YAML with the !!binary tag is not supported. If you want to -// use binary data with this library, encode the data as base64 as usual but do -// not use the !!binary tag in your YAML. This will ensure the original base64 -// encoded data makes it all the way through to the JSON. -func YAMLToJSON(y []byte) ([]byte, error) { - return yamlToJSON(y, nil) -} - -func yamlToJSON(y []byte, jsonTarget *reflect.Value) ([]byte, error) { - // Convert the YAML to an object. - var yamlObj interface{} - err := yaml.Unmarshal(y, &yamlObj) - if err != nil { - return nil, err - } - - // YAML objects are not completely compatible with JSON objects (e.g. you - // can have non-string keys in YAML). So, convert the YAML-compatible object - // to a JSON-compatible object, failing with an error if irrecoverable - // incompatibilties happen along the way. - jsonObj, err := convertToJSONableObject(yamlObj, jsonTarget) - if err != nil { - return nil, err - } - - // Convert this object to JSON and return the data. - return json.Marshal(jsonObj) -} - -func convertToJSONableObject(yamlObj interface{}, jsonTarget *reflect.Value) (interface{}, error) { - var err error - - // Resolve jsonTarget to a concrete value (i.e. not a pointer or an - // interface). We pass decodingNull as false because we're not actually - // decoding into the value, we're just checking if the ultimate target is a - // string. 
- if jsonTarget != nil { - ju, tu, pv := indirect(*jsonTarget, false) - // We have a JSON or Text Umarshaler at this level, so we can't be trying - // to decode into a string. - if ju != nil || tu != nil { - jsonTarget = nil - } else { - jsonTarget = &pv - } - } - - // If yamlObj is a number or a boolean, check if jsonTarget is a string - - // if so, coerce. Else return normal. - // If yamlObj is a map or array, find the field that each key is - // unmarshaling to, and when you recurse pass the reflect.Value for that - // field back into this function. - switch typedYAMLObj := yamlObj.(type) { - case map[interface{}]interface{}: - // JSON does not support arbitrary keys in a map, so we must convert - // these keys to strings. - // - // From my reading of go-yaml v2 (specifically the resolve function), - // keys can only have the types string, int, int64, float64, binary - // (unsupported), or null (unsupported). - strMap := make(map[string]interface{}) - for k, v := range typedYAMLObj { - // Resolve the key to a string first. - var keyString string - switch typedKey := k.(type) { - case string: - keyString = typedKey - case int: - keyString = strconv.Itoa(typedKey) - case int64: - // go-yaml will only return an int64 as a key if the system - // architecture is 32-bit and the key's value is between 32-bit - // and 64-bit. Otherwise the key type will simply be int. - keyString = strconv.FormatInt(typedKey, 10) - case float64: - // Stolen from go-yaml to use the same conversion to string as - // the go-yaml library uses to convert float to string when - // Marshaling. 
- s := strconv.FormatFloat(typedKey, 'g', -1, 32) - switch s { - case "+Inf": - s = ".inf" - case "-Inf": - s = "-.inf" - case "NaN": - s = ".nan" - } - keyString = s - case bool: - if typedKey { - keyString = "true" - } else { - keyString = "false" - } - default: - return nil, fmt.Errorf("Unsupported map key of type: %s, key: %+#v, value: %+#v", - reflect.TypeOf(k), k, v) - } - - // jsonTarget should be a struct or a map. If it's a struct, find - // the field it's going to map to and pass its reflect.Value. If - // it's a map, find the element type of the map and pass the - // reflect.Value created from that type. If it's neither, just pass - // nil - JSON conversion will error for us if it's a real issue. - if jsonTarget != nil { - t := *jsonTarget - if t.Kind() == reflect.Struct { - keyBytes := []byte(keyString) - // Find the field that the JSON library would use. - var f *field - fields := cachedTypeFields(t.Type()) - for i := range fields { - ff := &fields[i] - if bytes.Equal(ff.nameBytes, keyBytes) { - f = ff - break - } - // Do case-insensitive comparison. - if f == nil && ff.equalFold(ff.nameBytes, keyBytes) { - f = ff - } - } - if f != nil { - // Find the reflect.Value of the most preferential - // struct field. - jtf := t.Field(f.index[0]) - strMap[keyString], err = convertToJSONableObject(v, &jtf) - if err != nil { - return nil, err - } - continue - } - } else if t.Kind() == reflect.Map { - // Create a zero value of the map's element type to use as - // the JSON target. - jtv := reflect.Zero(t.Type().Elem()) - strMap[keyString], err = convertToJSONableObject(v, &jtv) - if err != nil { - return nil, err - } - continue - } - } - strMap[keyString], err = convertToJSONableObject(v, nil) - if err != nil { - return nil, err - } - } - return strMap, nil - case []interface{}: - // We need to recurse into arrays in case there are any - // map[interface{}]interface{}'s inside and to convert any - // numbers to strings. 
- - // If jsonTarget is a slice (which it really should be), find the - // thing it's going to map to. If it's not a slice, just pass nil - // - JSON conversion will error for us if it's a real issue. - var jsonSliceElemValue *reflect.Value - if jsonTarget != nil { - t := *jsonTarget - if t.Kind() == reflect.Slice { - // By default slices point to nil, but we need a reflect.Value - // pointing to a value of the slice type, so we create one here. - ev := reflect.Indirect(reflect.New(t.Type().Elem())) - jsonSliceElemValue = &ev - } - } - - // Make and use a new array. - arr := make([]interface{}, len(typedYAMLObj)) - for i, v := range typedYAMLObj { - arr[i], err = convertToJSONableObject(v, jsonSliceElemValue) - if err != nil { - return nil, err - } - } - return arr, nil - default: - // If the target type is a string and the YAML type is a number, - // convert the YAML type to a string. - if jsonTarget != nil && (*jsonTarget).Kind() == reflect.String { - // Based on my reading of go-yaml, it may return int, int64, - // float64, or uint64. 
- var s string - switch typedVal := typedYAMLObj.(type) { - case int: - s = strconv.FormatInt(int64(typedVal), 10) - case int64: - s = strconv.FormatInt(typedVal, 10) - case float64: - s = strconv.FormatFloat(typedVal, 'g', -1, 32) - case uint64: - s = strconv.FormatUint(typedVal, 10) - case bool: - if typedVal { - s = "true" - } else { - s = "false" - } - } - if len(s) > 0 { - yamlObj = interface{}(s) - } - } - return yamlObj, nil - } - - return nil, nil -} diff --git a/cluster-autoscaler/vendor/github.com/gofrs/uuid/.travis.yml b/cluster-autoscaler/vendor/github.com/gofrs/uuid/.travis.yml deleted file mode 100644 index 0783aaa9c4cf..000000000000 --- a/cluster-autoscaler/vendor/github.com/gofrs/uuid/.travis.yml +++ /dev/null @@ -1,22 +0,0 @@ -language: go -sudo: false -go: - - 1.7.x - - 1.8.x - - 1.9.x - - 1.10.x - - 1.11.x - - 1.12.x - - tip -matrix: - allow_failures: - - go: tip - fast_finish: true -before_install: - - go get golang.org/x/tools/cmd/cover -script: - - go test ./... -race -coverprofile=coverage.txt -covermode=atomic -after_success: - - bash <(curl -s https://codecov.io/bash) -notifications: - email: false diff --git a/cluster-autoscaler/vendor/github.com/gofrs/uuid/README.md b/cluster-autoscaler/vendor/github.com/gofrs/uuid/README.md index 2685a832e384..4f73bec82c09 100644 --- a/cluster-autoscaler/vendor/github.com/gofrs/uuid/README.md +++ b/cluster-autoscaler/vendor/github.com/gofrs/uuid/README.md @@ -16,6 +16,14 @@ This package supports the following UUID versions: * Version 4, based on random numbers (RFC-4122) * Version 5, based on SHA-1 hashing of a named value (RFC-4122) +This package also supports experimental Universally Unique Identifier implementations based on a +[draft RFC](https://www.ietf.org/archive/id/draft-peabody-dispatch-new-uuid-format-04.html) that updates RFC-4122 +* Version 6, a k-sortable id based on timestamp, and field-compatible with v1 (draft-peabody-dispatch-new-uuid-format, RFC-4122) +* Version 7, a k-sortable id 
based on timestamp (draft-peabody-dispatch-new-uuid-format, RFC-4122) + +The v6 and v7 IDs are **not** considered a part of the stable API, and may be subject to behavior or API changes as part of minor releases +to this package. They will be updated as the draft RFC changes, and will become stable if and when the draft RFC is accepted. + ## Project History This project was originally forked from the @@ -106,3 +114,4 @@ func main() { * [RFC-4122](https://tools.ietf.org/html/rfc4122) * [DCE 1.1: Authentication and Security Services](http://pubs.opengroup.org/onlinepubs/9696989899/chap5.htm#tagcjh_08_02_01_01) +* [New UUID Formats RFC Draft (Peabody) Rev 04](https://www.ietf.org/archive/id/draft-peabody-dispatch-new-uuid-format-04.html#) diff --git a/cluster-autoscaler/vendor/github.com/gofrs/uuid/codec.go b/cluster-autoscaler/vendor/github.com/gofrs/uuid/codec.go index e3014c68c663..665026414c30 100644 --- a/cluster-autoscaler/vendor/github.com/gofrs/uuid/codec.go +++ b/cluster-autoscaler/vendor/github.com/gofrs/uuid/codec.go @@ -22,8 +22,7 @@ package uuid import ( - "bytes" - "encoding/hex" + "errors" "fmt" ) @@ -45,11 +44,77 @@ func FromBytesOrNil(input []byte) UUID { return uuid } +var errInvalidFormat = errors.New("uuid: invalid UUID format") + +func fromHexChar(c byte) byte { + switch { + case '0' <= c && c <= '9': + return c - '0' + case 'a' <= c && c <= 'f': + return c - 'a' + 10 + case 'A' <= c && c <= 'F': + return c - 'A' + 10 + } + return 255 +} + +// Parse parses the UUID stored in the string text. Parsing and supported +// formats are the same as UnmarshalText. 
+func (u *UUID) Parse(s string) error { + switch len(s) { + case 32: // hash + case 36: // canonical + case 34, 38: + if s[0] != '{' || s[len(s)-1] != '}' { + return fmt.Errorf("uuid: incorrect UUID format in string %q", s) + } + s = s[1 : len(s)-1] + case 41, 45: + if s[:9] != "urn:uuid:" { + return fmt.Errorf("uuid: incorrect UUID format in string %q", s[:9]) + } + s = s[9:] + default: + return fmt.Errorf("uuid: incorrect UUID length %d in string %q", len(s), s) + } + // canonical + if len(s) == 36 { + if s[8] != '-' || s[13] != '-' || s[18] != '-' || s[23] != '-' { + return fmt.Errorf("uuid: incorrect UUID format in string %q", s) + } + for i, x := range [16]byte{ + 0, 2, 4, 6, + 9, 11, + 14, 16, + 19, 21, + 24, 26, 28, 30, 32, 34, + } { + v1 := fromHexChar(s[x]) + v2 := fromHexChar(s[x+1]) + if v1|v2 == 255 { + return errInvalidFormat + } + u[i] = (v1 << 4) | v2 + } + return nil + } + // hash like + for i := 0; i < 32; i += 2 { + v1 := fromHexChar(s[i]) + v2 := fromHexChar(s[i+1]) + if v1|v2 == 255 { + return errInvalidFormat + } + u[i/2] = (v1 << 4) | v2 + } + return nil +} + // FromString returns a UUID parsed from the input string. // Input is expected in a form accepted by UnmarshalText. -func FromString(input string) (UUID, error) { - u := UUID{} - err := u.UnmarshalText([]byte(input)) +func FromString(text string) (UUID, error) { + var u UUID + err := u.Parse(text) return u, err } @@ -66,133 +131,90 @@ func FromStringOrNil(input string) UUID { // MarshalText implements the encoding.TextMarshaler interface. // The encoding is the same as returned by the String() method. func (u UUID) MarshalText() ([]byte, error) { - return []byte(u.String()), nil + var buf [36]byte + encodeCanonical(buf[:], u) + return buf[:], nil } // UnmarshalText implements the encoding.TextUnmarshaler interface. 
// Following formats are supported: // -// "6ba7b810-9dad-11d1-80b4-00c04fd430c8", -// "{6ba7b810-9dad-11d1-80b4-00c04fd430c8}", -// "urn:uuid:6ba7b810-9dad-11d1-80b4-00c04fd430c8" -// "6ba7b8109dad11d180b400c04fd430c8" -// "{6ba7b8109dad11d180b400c04fd430c8}", -// "urn:uuid:6ba7b8109dad11d180b400c04fd430c8" +// "6ba7b810-9dad-11d1-80b4-00c04fd430c8", +// "{6ba7b810-9dad-11d1-80b4-00c04fd430c8}", +// "urn:uuid:6ba7b810-9dad-11d1-80b4-00c04fd430c8" +// "6ba7b8109dad11d180b400c04fd430c8" +// "{6ba7b8109dad11d180b400c04fd430c8}", +// "urn:uuid:6ba7b8109dad11d180b400c04fd430c8" // // ABNF for supported UUID text representation follows: // -// URN := 'urn' -// UUID-NID := 'uuid' -// -// hexdig := '0' | '1' | '2' | '3' | '4' | '5' | '6' | '7' | '8' | '9' | -// 'a' | 'b' | 'c' | 'd' | 'e' | 'f' | -// 'A' | 'B' | 'C' | 'D' | 'E' | 'F' +// URN := 'urn' +// UUID-NID := 'uuid' // -// hexoct := hexdig hexdig -// 2hexoct := hexoct hexoct -// 4hexoct := 2hexoct 2hexoct -// 6hexoct := 4hexoct 2hexoct -// 12hexoct := 6hexoct 6hexoct +// hexdig := '0' | '1' | '2' | '3' | '4' | '5' | '6' | '7' | '8' | '9' | +// 'a' | 'b' | 'c' | 'd' | 'e' | 'f' | +// 'A' | 'B' | 'C' | 'D' | 'E' | 'F' // -// hashlike := 12hexoct -// canonical := 4hexoct '-' 2hexoct '-' 2hexoct '-' 6hexoct +// hexoct := hexdig hexdig +// 2hexoct := hexoct hexoct +// 4hexoct := 2hexoct 2hexoct +// 6hexoct := 4hexoct 2hexoct +// 12hexoct := 6hexoct 6hexoct // -// plain := canonical | hashlike -// uuid := canonical | hashlike | braced | urn +// hashlike := 12hexoct +// canonical := 4hexoct '-' 2hexoct '-' 2hexoct '-' 6hexoct // -// braced := '{' plain '}' | '{' hashlike '}' -// urn := URN ':' UUID-NID ':' plain +// plain := canonical | hashlike +// uuid := canonical | hashlike | braced | urn // -func (u *UUID) UnmarshalText(text []byte) error { - switch len(text) { - case 32: - return u.decodeHashLike(text) +// braced := '{' plain '}' | '{' hashlike '}' +// urn := URN ':' UUID-NID ':' plain +func (u *UUID) 
UnmarshalText(b []byte) error { + switch len(b) { + case 32: // hash + case 36: // canonical case 34, 38: - return u.decodeBraced(text) - case 36: - return u.decodeCanonical(text) + if b[0] != '{' || b[len(b)-1] != '}' { + return fmt.Errorf("uuid: incorrect UUID format in string %q", b) + } + b = b[1 : len(b)-1] case 41, 45: - return u.decodeURN(text) + if string(b[:9]) != "urn:uuid:" { + return fmt.Errorf("uuid: incorrect UUID format in string %q", b[:9]) + } + b = b[9:] default: - return fmt.Errorf("uuid: incorrect UUID length %d in string %q", len(text), text) - } -} - -// decodeCanonical decodes UUID strings that are formatted as defined in RFC-4122 (section 3): -// "6ba7b810-9dad-11d1-80b4-00c04fd430c8". -func (u *UUID) decodeCanonical(t []byte) error { - if t[8] != '-' || t[13] != '-' || t[18] != '-' || t[23] != '-' { - return fmt.Errorf("uuid: incorrect UUID format in string %q", t) + return fmt.Errorf("uuid: incorrect UUID length %d in string %q", len(b), b) } - - src := t - dst := u[:] - - for i, byteGroup := range byteGroups { - if i > 0 { - src = src[1:] // skip dash + if len(b) == 36 { + if b[8] != '-' || b[13] != '-' || b[18] != '-' || b[23] != '-' { + return fmt.Errorf("uuid: incorrect UUID format in string %q", b) } - _, err := hex.Decode(dst[:byteGroup/2], src[:byteGroup]) - if err != nil { - return err + for i, x := range [16]byte{ + 0, 2, 4, 6, + 9, 11, + 14, 16, + 19, 21, + 24, 26, 28, 30, 32, 34, + } { + v1 := fromHexChar(b[x]) + v2 := fromHexChar(b[x+1]) + if v1|v2 == 255 { + return errInvalidFormat + } + u[i] = (v1 << 4) | v2 } - src = src[byteGroup:] - dst = dst[byteGroup/2:] - } - - return nil -} - -// decodeHashLike decodes UUID strings that are using the following format: -// "6ba7b8109dad11d180b400c04fd430c8". 
-func (u *UUID) decodeHashLike(t []byte) error { - src := t[:] - dst := u[:] - - _, err := hex.Decode(dst, src) - return err -} - -// decodeBraced decodes UUID strings that are using the following formats: -// "{6ba7b810-9dad-11d1-80b4-00c04fd430c8}" -// "{6ba7b8109dad11d180b400c04fd430c8}". -func (u *UUID) decodeBraced(t []byte) error { - l := len(t) - - if t[0] != '{' || t[l-1] != '}' { - return fmt.Errorf("uuid: incorrect UUID format in string %q", t) + return nil } - - return u.decodePlain(t[1 : l-1]) -} - -// decodeURN decodes UUID strings that are using the following formats: -// "urn:uuid:6ba7b810-9dad-11d1-80b4-00c04fd430c8" -// "urn:uuid:6ba7b8109dad11d180b400c04fd430c8". -func (u *UUID) decodeURN(t []byte) error { - total := len(t) - - urnUUIDPrefix := t[:9] - - if !bytes.Equal(urnUUIDPrefix, urnPrefix) { - return fmt.Errorf("uuid: incorrect UUID format in string %q", t) - } - - return u.decodePlain(t[9:total]) -} - -// decodePlain decodes UUID strings that are using the following formats: -// "6ba7b810-9dad-11d1-80b4-00c04fd430c8" or in hash-like format -// "6ba7b8109dad11d180b400c04fd430c8". -func (u *UUID) decodePlain(t []byte) error { - switch len(t) { - case 32: - return u.decodeHashLike(t) - case 36: - return u.decodeCanonical(t) - default: - return fmt.Errorf("uuid: incorrect UUID length %d in string %q", len(t), t) + for i := 0; i < 32; i += 2 { + v1 := fromHexChar(b[i]) + v2 := fromHexChar(b[i+1]) + if v1|v2 == 255 { + return errInvalidFormat + } + u[i/2] = (v1 << 4) | v2 } + return nil } // MarshalBinary implements the encoding.BinaryMarshaler interface. 
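The rewritten codec above drops `hex.Decode` in favor of a fixed table of byte offsets and a `fromHexChar` lookup that signals any invalid digit pair via a single `v1|v2 == 255` check. A minimal, self-contained sketch of the same canonical-format decoding approach (standalone helpers for illustration only, not the vendored API):

```go
package main

import "fmt"

// fromHexChar mirrors the helper in the patch: it returns 255 for any
// non-hex byte, so one `v1|v2 == 255` test catches an invalid digit pair.
func fromHexChar(c byte) byte {
	switch {
	case '0' <= c && c <= '9':
		return c - '0'
	case 'a' <= c && c <= 'f':
		return c - 'a' + 10
	case 'A' <= c && c <= 'F':
		return c - 'A' + 10
	}
	return 255
}

// parseCanonical decodes "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" into 16
// bytes using the same fixed offset table as the vendored Parse/UnmarshalText.
func parseCanonical(s string) ([16]byte, error) {
	var u [16]byte
	if len(s) != 36 || s[8] != '-' || s[13] != '-' || s[18] != '-' || s[23] != '-' {
		return u, fmt.Errorf("invalid canonical UUID %q", s)
	}
	// Each entry is the string position of the first hex digit of one
	// output byte; the four dashes are skipped by construction.
	offsets := [16]byte{0, 2, 4, 6, 9, 11, 14, 16, 19, 21, 24, 26, 28, 30, 32, 34}
	for i, x := range offsets {
		v1, v2 := fromHexChar(s[x]), fromHexChar(s[x+1])
		if v1|v2 == 255 {
			return u, fmt.Errorf("invalid hex digit in %q", s)
		}
		u[i] = v1<<4 | v2
	}
	return u, nil
}

func main() {
	u, err := parseCanonical("6ba7b810-9dad-11d1-80b4-00c04fd430c8")
	fmt.Println(u, err)
}
```

Avoiding `hex.Decode` here removes per-group slicing and error wrapping; the offset table walks the string once with no allocations.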
diff --git a/cluster-autoscaler/vendor/github.com/gofrs/uuid/fuzz.go b/cluster-autoscaler/vendor/github.com/gofrs/uuid/fuzz.go index afaefbc8e399..ccf8d4ca2960 100644 --- a/cluster-autoscaler/vendor/github.com/gofrs/uuid/fuzz.go +++ b/cluster-autoscaler/vendor/github.com/gofrs/uuid/fuzz.go @@ -19,6 +19,7 @@ // OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION // WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. +//go:build gofuzz // +build gofuzz package uuid @@ -27,15 +28,15 @@ package uuid // // To run: // -// $ go get github.com/dvyukov/go-fuzz/... -// $ cd $GOPATH/src/github.com/gofrs/uuid -// $ go-fuzz-build github.com/gofrs/uuid -// $ go-fuzz -bin=uuid-fuzz.zip -workdir=./testdata +// $ go get github.com/dvyukov/go-fuzz/... +// $ cd $GOPATH/src/github.com/gofrs/uuid +// $ go-fuzz-build github.com/gofrs/uuid +// $ go-fuzz -bin=uuid-fuzz.zip -workdir=./testdata // // If you make significant changes to FromString / UnmarshalText and add // new cases to fromStringTests (in codec_test.go), please run // -// $ go test -seed_fuzz_corpus +// $ go test -seed_fuzz_corpus // // to seed the corpus with the new interesting inputs, then run the fuzzer. func Fuzz(data []byte) int { diff --git a/cluster-autoscaler/vendor/github.com/gofrs/uuid/generator.go b/cluster-autoscaler/vendor/github.com/gofrs/uuid/generator.go index 2783d9e7787f..44be9e15854e 100644 --- a/cluster-autoscaler/vendor/github.com/gofrs/uuid/generator.go +++ b/cluster-autoscaler/vendor/github.com/gofrs/uuid/generator.go @@ -38,7 +38,8 @@ import ( // UUID epoch (October 15, 1582) and Unix epoch (January 1, 1970). const epochStart = 122192928000000000 -type epochFunc func() time.Time +// EpochFunc is the function type used to provide the current time. +type EpochFunc func() time.Time // HWAddrFunc is the function type used to provide hardware (MAC) addresses. 
type HWAddrFunc func() (net.HardwareAddr, error) @@ -66,12 +67,39 @@ func NewV5(ns UUID, name string) UUID { return DefaultGenerator.NewV5(ns, name) } +// NewV6 returns a k-sortable UUID based on a timestamp and 48 bits of +// pseudorandom data. The timestamp in a V6 UUID is the same as V1, with the bit +// order being adjusted to allow the UUID to be k-sortable. +// +// This is implemented based on revision 03 of the Peabody UUID draft, and may +// be subject to change pending further revisions. Until the final specification +// revision is finished, changes required to implement updates to the spec will +// not be considered a breaking change. They will happen as a minor version +// releases until the spec is final. +func NewV6() (UUID, error) { + return DefaultGenerator.NewV6() +} + +// NewV7 returns a k-sortable UUID based on the current millisecond precision +// UNIX epoch and 74 bits of pseudorandom data. It supports single-node batch generation (multiple UUIDs in the same timestamp) with a Monotonic Random counter. +// +// This is implemented based on revision 04 of the Peabody UUID draft, and may +// be subject to change pending further revisions. Until the final specification +// revision is finished, changes required to implement updates to the spec will +// not be considered a breaking change. They will happen as a minor version +// releases until the spec is final. +func NewV7() (UUID, error) { + return DefaultGenerator.NewV7() +} + // Generator provides an interface for generating UUIDs. 
type Generator interface { NewV1() (UUID, error) NewV3(ns UUID, name string) UUID NewV4() (UUID, error) NewV5(ns UUID, name string) UUID + NewV6() (UUID, error) + NewV7() (UUID, error) } // Gen is a reference UUID generator based on the specifications laid out in @@ -92,13 +120,16 @@ type Gen struct { rand io.Reader - epochFunc epochFunc + epochFunc EpochFunc hwAddrFunc HWAddrFunc lastTime uint64 clockSequence uint16 hardwareAddr [6]byte } +// GenOption is a function type that can be used to configure a Gen generator. +type GenOption func(*Gen) + // interface check -- build will fail if *Gen doesn't satisfy Generator var _ Generator = (*Gen)(nil) @@ -120,18 +151,82 @@ func NewGen() *Gen { // MAC address being used, you'll need to create a new generator using this // function. func NewGenWithHWAF(hwaf HWAddrFunc) *Gen { - return &Gen{ + return NewGenWithOptions(WithHWAddrFunc(hwaf)) +} + +// NewGenWithOptions returns a new instance of Gen with the options provided. +// Most people should use NewGen() or NewGenWithHWAF() instead. +// +// To customize the generator, you can pass in one or more GenOption functions. +// For example: +// +// gen := NewGenWithOptions( +// WithHWAddrFunc(myHWAddrFunc), +// WithEpochFunc(myEpochFunc), +// WithRandomReader(myRandomReader), +// ) +// +// NewGenWithOptions(WithHWAddrFunc(myHWAddrFunc)) is equivalent to calling +// NewGenWithHWAF(myHWAddrFunc) +// NewGenWithOptions() is equivalent to calling NewGen() +func NewGenWithOptions(opts ...GenOption) *Gen { + gen := &Gen{ epochFunc: time.Now, - hwAddrFunc: hwaf, + hwAddrFunc: defaultHWAddrFunc, rand: rand.Reader, } + + for _, opt := range opts { + opt(gen) + } + + return gen +} + +// WithHWAddrFunc is a GenOption that allows you to provide your own HWAddrFunc +// function. +// When this option is nil, the defaultHWAddrFunc is used. 
+func WithHWAddrFunc(hwaf HWAddrFunc) GenOption { + return func(gen *Gen) { + if hwaf == nil { + hwaf = defaultHWAddrFunc + } + + gen.hwAddrFunc = hwaf + } +} + +// WithEpochFunc is a GenOption that allows you to provide your own EpochFunc +// function. +// When this option is nil, time.Now is used. +func WithEpochFunc(epochf EpochFunc) GenOption { + return func(gen *Gen) { + if epochf == nil { + epochf = time.Now + } + + gen.epochFunc = epochf + } +} + +// WithRandomReader is a GenOption that allows you to provide your own random +// reader. +// When this option is nil, the default rand.Reader is used. +func WithRandomReader(reader io.Reader) GenOption { + return func(gen *Gen) { + if reader == nil { + reader = rand.Reader + } + + gen.rand = reader + } } // NewV1 returns a UUID based on the current timestamp and MAC address. func (g *Gen) NewV1() (UUID, error) { u := UUID{} - timeNow, clockSeq, err := g.getClockSequence() + timeNow, clockSeq, err := g.getClockSequence(false) if err != nil { return Nil, err } @@ -182,8 +277,44 @@ func (g *Gen) NewV5(ns UUID, name string) UUID { return u } -// getClockSequence returns the epoch and clock sequence. -func (g *Gen) getClockSequence() (uint64, uint16, error) { +// NewV6 returns a k-sortable UUID based on a timestamp and 48 bits of +// pseudorandom data. The timestamp in a V6 UUID is the same as V1, with the bit +// order being adjusted to allow the UUID to be k-sortable. +// +// This is implemented based on revision 03 of the Peabody UUID draft, and may +// be subject to change pending further revisions. Until the final specification +// revision is finished, changes required to implement updates to the spec will +// not be considered a breaking change. They will happen as a minor version +// releases until the spec is final. 
+func (g *Gen) NewV6() (UUID, error) { + var u UUID + + if _, err := io.ReadFull(g.rand, u[10:]); err != nil { + return Nil, err + } + + timeNow, clockSeq, err := g.getClockSequence(false) + if err != nil { + return Nil, err + } + + binary.BigEndian.PutUint32(u[0:], uint32(timeNow>>28)) // set time_high + binary.BigEndian.PutUint16(u[4:], uint16(timeNow>>12)) // set time_mid + binary.BigEndian.PutUint16(u[6:], uint16(timeNow&0xfff)) // set time_low (minus four version bits) + binary.BigEndian.PutUint16(u[8:], clockSeq&0x3fff) // set clk_seq_hi_res (minus two variant bits) + + u.SetVersion(V6) + u.SetVariant(VariantRFC4122) + + return u, nil +} + +// getClockSequence returns the epoch and clock sequence for V1, V6, and V7 UUIDs. +// +// When useUnixTSMs is false, it uses Coordinated Universal Time (UTC) as a count of 100-nanosecond intervals since 00:00:00.00, 15 October 1582 (the date of Gregorian reform to the Christian calendar). +func (g *Gen) getClockSequence(useUnixTSMs bool) (uint64, uint16, error) { var err error g.clockSequenceOnce.Do(func() { buf := make([]byte, 2) @@ -199,7 +330,12 @@ func (g *Gen) getClockSequence() (uint64, uint16, error) { g.storageMutex.Lock() defer g.storageMutex.Unlock() - timeNow := g.getEpoch() + var timeNow uint64 + if useUnixTSMs { + timeNow = uint64(g.epochFunc().UnixMilli()) + } else { + timeNow = g.getEpoch() + } // Clock didn't change since last UUID generation. // Should increase clock sequence. if timeNow <= g.lastTime { @@ -210,6 +346,59 @@ func (g *Gen) getClockSequence() (uint64, uint16, error) { return timeNow, g.clockSequence, nil } + +// NewV7 returns a k-sortable UUID based on the current millisecond precision +// UNIX epoch and 74 bits of pseudorandom data. +// +// This is implemented based on revision 04 of the Peabody UUID draft, and may +// be subject to change pending further revisions.
Until the final specification +// revision is finished, changes required to implement updates to the spec will +// not be considered a breaking change. They will happen as a minor version +// releases until the spec is final. +func (g *Gen) NewV7() (UUID, error) { + var u UUID + /* https://www.ietf.org/archive/id/draft-peabody-dispatch-new-uuid-format-04.html#name-uuid-version-7 + 0 1 2 3 + 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 + +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ + | unix_ts_ms | + +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ + | unix_ts_ms | ver | rand_a | + +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ + |var| rand_b | + +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ + | rand_b | + +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ */ + + ms, clockSeq, err := g.getClockSequence(true) + if err != nil { + return Nil, err + } + //UUIDv7 features a 48 bit timestamp. First 32bit (4bytes) represents seconds since 1970, followed by 2 bytes for the ms granularity. + u[0] = byte(ms >> 40) //1-6 bytes: big-endian unsigned number of Unix epoch timestamp + u[1] = byte(ms >> 32) + u[2] = byte(ms >> 24) + u[3] = byte(ms >> 16) + u[4] = byte(ms >> 8) + u[5] = byte(ms) + + //support batching by using a monotonic pseudo-random sequence +//The 6th byte contains the version and partially rand_a data. +//We will lose the most significant bits from the clockSeq (with SetVersion), but it is ok, we need the least significant that contains the counter to ensure the monotonic property + binary.BigEndian.PutUint16(u[6:8], clockSeq) // set rand_a with clock seq which is random and monotonic + + //override first 4bits of u[6].
+ u.SetVersion(V7) + + //set rand_b 64bits of pseudo-random bits (first 2 will be overridden) + if _, err = io.ReadFull(g.rand, u[8:16]); err != nil { + return Nil, err + } + //override first 2 bits of byte[8] for the variant + u.SetVariant(VariantRFC4122) + + return u, nil +} + // Returns the hardware address. func (g *Gen) getHardwareAddr() ([]byte, error) { var err error @@ -250,9 +439,11 @@ func newFromHash(h hash.Hash, ns UUID, name string) UUID { return u } +var netInterfaces = net.Interfaces + // Returns the hardware address. func defaultHWAddrFunc() (net.HardwareAddr, error) { - ifaces, err := net.Interfaces() + ifaces, err := netInterfaces() if err != nil { return []byte{}, err } diff --git a/cluster-autoscaler/vendor/github.com/gofrs/uuid/sql.go b/cluster-autoscaler/vendor/github.com/gofrs/uuid/sql.go index 6f254a4fd107..01d5d88496ce 100644 --- a/cluster-autoscaler/vendor/github.com/gofrs/uuid/sql.go +++ b/cluster-autoscaler/vendor/github.com/gofrs/uuid/sql.go @@ -22,12 +22,14 @@ package uuid import ( - "bytes" + "database/sql" "database/sql/driver" - "encoding/json" "fmt" ) +var _ driver.Valuer = UUID{} +var _ sql.Scanner = (*UUID)(nil) + // Value implements the driver.Valuer interface. 
func (u UUID) Value() (driver.Value, error) { return u.String(), nil @@ -49,7 +51,9 @@ func (u *UUID) Scan(src interface{}) error { return u.UnmarshalText(src) case string: - return u.UnmarshalText([]byte(src)) + uu, err := FromString(src) + *u = uu + return err } return fmt.Errorf("uuid: cannot convert %T to UUID", src) @@ -83,27 +87,30 @@ func (u *NullUUID) Scan(src interface{}) error { return u.UUID.Scan(src) } +var nullJSON = []byte("null") + // MarshalJSON marshals the NullUUID as null or the nested UUID func (u NullUUID) MarshalJSON() ([]byte, error) { if !u.Valid { - return json.Marshal(nil) + return nullJSON, nil } - - return json.Marshal(u.UUID) + var buf [38]byte + buf[0] = '"' + encodeCanonical(buf[1:37], u.UUID) + buf[37] = '"' + return buf[:], nil } // UnmarshalJSON unmarshals a NullUUID func (u *NullUUID) UnmarshalJSON(b []byte) error { - if bytes.Equal(b, []byte("null")) { + if string(b) == "null" { u.UUID, u.Valid = Nil, false return nil } - - if err := json.Unmarshal(b, &u.UUID); err != nil { - return err + if n := len(b); n >= 2 && b[0] == '"' { + b = b[1 : n-1] } - - u.Valid = true - - return nil + err := u.UUID.UnmarshalText(b) + u.Valid = (err == nil) + return err } diff --git a/cluster-autoscaler/vendor/github.com/gofrs/uuid/uuid.go b/cluster-autoscaler/vendor/github.com/gofrs/uuid/uuid.go index 78aed6e2539b..5320fb53894b 100644 --- a/cluster-autoscaler/vendor/github.com/gofrs/uuid/uuid.go +++ b/cluster-autoscaler/vendor/github.com/gofrs/uuid/uuid.go @@ -20,11 +20,13 @@ // WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. // Package uuid provides implementations of the Universally Unique Identifier -// (UUID), as specified in RFC-4122, +// (UUID), as specified in RFC-4122 and the Peabody RFC Draft (revision 03). // -// RFC-4122[1] provides the specification for versions 1, 3, 4, and 5. +// RFC-4122[1] provides the specification for versions 1, 3, 4, and 5. 
The +// Peabody UUID RFC Draft[2] provides the specification for the new k-sortable +// UUIDs, versions 6 and 7. // -// DCE 1.1[2] provides the specification for version 2, but version 2 support +// DCE 1.1[3] provides the specification for version 2, but version 2 support // was removed from this package in v4 due to some concerns with the // specification itself. Reading the spec, it seems that it would result in // generating UUIDs that aren't very unique. In having read the spec it seemed @@ -34,15 +36,14 @@ // ensure we were understanding the specification correctly. // // [1] https://tools.ietf.org/html/rfc4122 -// [2] http://pubs.opengroup.org/onlinepubs/9696989899/chap5.htm#tagcjh_08_02_01_01 +// [2] https://datatracker.ietf.org/doc/html/draft-peabody-dispatch-new-uuid-format-03 +// [3] http://pubs.opengroup.org/onlinepubs/9696989899/chap5.htm#tagcjh_08_02_01_01 package uuid import ( "encoding/binary" "encoding/hex" "fmt" - "io" - "strings" "time" ) @@ -60,6 +61,9 @@ const ( V3 // Version 3 (namespace name-based) V4 // Version 4 (random) V5 // Version 5 (namespace name-based) + V6 // Version 6 (k-sortable timestamp and random data, field-compatible with v1) [peabody draft] + V7 // Version 7 (k-sortable timestamp and random data) [peabody draft] + _ // Version 8 (k-sortable timestamp, meant for custom implementations) [peabody draft] [not implemented] ) // UUID layout variants. 
@@ -88,6 +92,7 @@ const _100nsPerSecond = 10000000 func (t Timestamp) Time() (time.Time, error) { secs := uint64(t) / _100nsPerSecond nsecs := 100 * (uint64(t) % _100nsPerSecond) + return time.Unix(int64(secs)-(epochStart/_100nsPerSecond), int64(nsecs)), nil } @@ -98,17 +103,33 @@ func TimestampFromV1(u UUID) (Timestamp, error) { err := fmt.Errorf("uuid: %s is version %d, not version 1", u, u.Version()) return 0, err } + low := binary.BigEndian.Uint32(u[0:4]) mid := binary.BigEndian.Uint16(u[4:6]) hi := binary.BigEndian.Uint16(u[6:8]) & 0xfff + return Timestamp(uint64(low) + (uint64(mid) << 32) + (uint64(hi) << 48)), nil } -// String parse helpers. -var ( - urnPrefix = []byte("urn:uuid:") - byteGroups = []int{8, 4, 4, 4, 12} -) +// TimestampFromV6 returns the Timestamp embedded within a V6 UUID. This +// function returns an error if the UUID is any version other than 6. +// +// This is implemented based on revision 03 of the Peabody UUID draft, and may +// be subject to change pending further revisions. Until the final specification +// revision is finished, changes required to implement updates to the spec will +// not be considered a breaking change. They will happen as a minor version +// releases until the spec is final. +func TimestampFromV6(u UUID) (Timestamp, error) { + if u.Version() != 6 { + return 0, fmt.Errorf("uuid: %s is version %d, not version 6", u, u.Version()) + } + + hi := binary.BigEndian.Uint32(u[0:4]) + mid := binary.BigEndian.Uint16(u[4:6]) + low := binary.BigEndian.Uint16(u[6:8]) & 0xfff + + return Timestamp(uint64(low) + (uint64(mid) << 12) + (uint64(hi) << 28)), nil +} // Nil is the nil UUID, as specified in RFC-4122, that has all 128 bits set to // zero. @@ -122,6 +143,11 @@ var ( NamespaceX500 = Must(FromString("6ba7b814-9dad-11d1-80b4-00c04fd430c8")) ) +// IsNil returns if the UUID is equal to the nil UUID +func (u UUID) IsNil() bool { + return u == Nil +} + // Version returns the algorithm version used to generate the UUID. 
func (u UUID) Version() byte { return u[6] >> 4 @@ -148,22 +174,33 @@ func (u UUID) Bytes() []byte { return u[:] } +// encodeCanonical encodes the canonical RFC-4122 form of UUID u into the +// first 36 bytes dst. +func encodeCanonical(dst []byte, u UUID) { + const hextable = "0123456789abcdef" + dst[8] = '-' + dst[13] = '-' + dst[18] = '-' + dst[23] = '-' + for i, x := range [16]byte{ + 0, 2, 4, 6, + 9, 11, + 14, 16, + 19, 21, + 24, 26, 28, 30, 32, 34, + } { + c := u[i] + dst[x] = hextable[c>>4] + dst[x+1] = hextable[c&0x0f] + } +} + // String returns a canonical RFC-4122 string representation of the UUID: // xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx. func (u UUID) String() string { - buf := make([]byte, 36) - - hex.Encode(buf[0:8], u[0:4]) - buf[8] = '-' - hex.Encode(buf[9:13], u[4:6]) - buf[13] = '-' - hex.Encode(buf[14:18], u[6:8]) - buf[18] = '-' - hex.Encode(buf[19:23], u[8:10]) - buf[23] = '-' - hex.Encode(buf[24:], u[10:]) - - return string(buf) + var buf [36]byte + encodeCanonical(buf[:], u) + return string(buf[:]) } // Format implements fmt.Formatter for UUID values. @@ -176,52 +213,41 @@ func (u UUID) String() string { // All other verbs not handled directly by the fmt package (like '%p') are unsupported and will return // "%!verb(uuid.UUID=value)" as recommended by the fmt package. 
func (u UUID) Format(f fmt.State, c rune) { + if c == 'v' && f.Flag('#') { + fmt.Fprintf(f, "%#v", [Size]byte(u)) + return + } switch c { case 'x', 'X': - s := hex.EncodeToString(u.Bytes()) + b := make([]byte, 32) + hex.Encode(b, u[:]) if c == 'X' { - s = strings.Map(toCapitalHexDigits, s) - } - _, _ = io.WriteString(f, s) - case 'v': - var s string - if f.Flag('#') { - s = fmt.Sprintf("%#v", [Size]byte(u)) - } else { - s = u.String() + toUpperHex(b) } - _, _ = io.WriteString(f, s) - case 's', 'S': - s := u.String() + _, _ = f.Write(b) + case 'v', 's', 'S': + b, _ := u.MarshalText() if c == 'S' { - s = strings.Map(toCapitalHexDigits, s) + toUpperHex(b) } - _, _ = io.WriteString(f, s) + _, _ = f.Write(b) case 'q': - _, _ = io.WriteString(f, `"`+u.String()+`"`) + b := make([]byte, 38) + b[0] = '"' + encodeCanonical(b[1:], u) + b[37] = '"' + _, _ = f.Write(b) default: // invalid/unsupported format verb fmt.Fprintf(f, "%%!%c(uuid.UUID=%s)", c, u.String()) } } -func toCapitalHexDigits(ch rune) rune { - // convert a-f hex digits to A-F - switch ch { - case 'a': - return 'A' - case 'b': - return 'B' - case 'c': - return 'C' - case 'd': - return 'D' - case 'e': - return 'E' - case 'f': - return 'F' - default: - return ch +func toUpperHex(b []byte) { + for i, c := range b { + if 'a' <= c && c <= 'f' { + b[i] = c - ('a' - 'A') + } } } @@ -249,7 +275,8 @@ func (u *UUID) SetVariant(v byte) { // Must is a helper that wraps a call to a function returning (UUID, error) // and panics if the error is non-nil. 
It is intended for use in variable // initializations such as -// var packageUUID = uuid.Must(uuid.FromString("123e4567-e89b-12d3-a456-426655440000")) +// +// var packageUUID = uuid.Must(uuid.FromString("123e4567-e89b-12d3-a456-426655440000")) func Must(u UUID, err error) UUID { if err != nil { panic(err) diff --git a/cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v4/README.md b/cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v4/README.md index f5d551ca8fd8..30f2f2a6f70c 100644 --- a/cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v4/README.md +++ b/cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v4/README.md @@ -54,9 +54,9 @@ import "github.com/golang-jwt/jwt/v4" See [the project documentation](https://pkg.go.dev/github.com/golang-jwt/jwt/v4) for examples of usage: -* [Simple example of parsing and validating a token](https://pkg.go.dev/github.com/golang-jwt/jwt#example-Parse-Hmac) -* [Simple example of building and signing a token](https://pkg.go.dev/github.com/golang-jwt/jwt#example-New-Hmac) -* [Directory of Examples](https://pkg.go.dev/github.com/golang-jwt/jwt#pkg-examples) +* [Simple example of parsing and validating a token](https://pkg.go.dev/github.com/golang-jwt/jwt/v4#example-Parse-Hmac) +* [Simple example of building and signing a token](https://pkg.go.dev/github.com/golang-jwt/jwt/v4#example-New-Hmac) +* [Directory of Examples](https://pkg.go.dev/github.com/golang-jwt/jwt/v4#pkg-examples) ## Extensions @@ -96,7 +96,7 @@ A token is simply a JSON object that is signed by its author. this tells you exa * The author of the token was in the possession of the signing secret * The data has not been modified since it was signed -It's important to know that JWT does not provide encryption, which means anyone who has access to the token can read its contents. If you need to protect (encrypt) the data, there is a companion spec, `JWE`, that provides this functionality. JWE is currently outside the scope of this library. 
+It's important to know that JWT does not provide encryption, which means anyone who has access to the token can read its contents. If you need to protect (encrypt) the data, there is a companion spec, `JWE`, that provides this functionality. The companion project https://github.com/golang-jwt/jwe aims at a (very) experimental implementation of the JWE standard. ### Choosing a Signing Method @@ -110,10 +110,10 @@ Asymmetric signing methods, such as RSA, use different keys for signing and veri Each signing method expects a different object type for its signing keys. See the package documentation for details. Here are the most common ones: -* The [HMAC signing method](https://pkg.go.dev/github.com/golang-jwt/jwt#SigningMethodHMAC) (`HS256`,`HS384`,`HS512`) expect `[]byte` values for signing and validation -* The [RSA signing method](https://pkg.go.dev/github.com/golang-jwt/jwt#SigningMethodRSA) (`RS256`,`RS384`,`RS512`) expect `*rsa.PrivateKey` for signing and `*rsa.PublicKey` for validation -* The [ECDSA signing method](https://pkg.go.dev/github.com/golang-jwt/jwt#SigningMethodECDSA) (`ES256`,`ES384`,`ES512`) expect `*ecdsa.PrivateKey` for signing and `*ecdsa.PublicKey` for validation -* The [EdDSA signing method](https://pkg.go.dev/github.com/golang-jwt/jwt#SigningMethodEd25519) (`Ed25519`) expect `ed25519.PrivateKey` for signing and `ed25519.PublicKey` for validation +* The [HMAC signing method](https://pkg.go.dev/github.com/golang-jwt/jwt/v4#SigningMethodHMAC) (`HS256`,`HS384`,`HS512`) expect `[]byte` values for signing and validation +* The [RSA signing method](https://pkg.go.dev/github.com/golang-jwt/jwt/v4#SigningMethodRSA) (`RS256`,`RS384`,`RS512`) expect `*rsa.PrivateKey` for signing and `*rsa.PublicKey` for validation +* The [ECDSA signing method](https://pkg.go.dev/github.com/golang-jwt/jwt/v4#SigningMethodECDSA) (`ES256`,`ES384`,`ES512`) expect `*ecdsa.PrivateKey` for signing and `*ecdsa.PublicKey` for validation +* The [EdDSA signing 
method](https://pkg.go.dev/github.com/golang-jwt/jwt/v4#SigningMethodEd25519) (`Ed25519`) expect `ed25519.PrivateKey` for signing and `ed25519.PublicKey` for validation ### JWT and OAuth @@ -131,7 +131,7 @@ This library uses descriptive error messages whenever possible. If you are not g ## More -Documentation can be found [on pkg.go.dev](https://pkg.go.dev/github.com/golang-jwt/jwt). +Documentation can be found [on pkg.go.dev](https://pkg.go.dev/github.com/golang-jwt/jwt/v4). The command line utility included in this project (cmd/jwt) provides a straightforward example of token creation and parsing as well as a useful tool for debugging your own integration. You'll also find several implementation examples in the documentation. diff --git a/cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v4/claims.go b/cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v4/claims.go index 9d95cad2bf27..364cec8773cd 100644 --- a/cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v4/claims.go +++ b/cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v4/claims.go @@ -265,9 +265,5 @@ func verifyIss(iss string, cmp string, required bool) bool { if iss == "" { return !required } - if subtle.ConstantTimeCompare([]byte(iss), []byte(cmp)) != 0 { - return true - } else { - return false - } + return subtle.ConstantTimeCompare([]byte(iss), []byte(cmp)) != 0 } diff --git a/cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v4/parser.go b/cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v4/parser.go index 2f61a69d7fcb..c0a6f6927917 100644 --- a/cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v4/parser.go +++ b/cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v4/parser.go @@ -42,6 +42,13 @@ func (p *Parser) Parse(tokenString string, keyFunc Keyfunc) (*Token, error) { return p.ParseWithClaims(tokenString, MapClaims{}, keyFunc) } +// ParseWithClaims parses, validates, and verifies like Parse, but supplies a default object implementing the Claims +// interface. 
This provides default values which can be overridden and allows a caller to use their own type, rather +// than the default MapClaims implementation of Claims. +// +// Note: If you provide a custom claim implementation that embeds one of the standard claims (such as RegisteredClaims), +// make sure that a) you either embed a non-pointer version of the claims or b) if you are using a pointer, allocate the +// proper memory for it before passing in the overall claims, otherwise you might run into a panic. func (p *Parser) ParseWithClaims(tokenString string, claims Claims, keyFunc Keyfunc) (*Token, error) { token, parts, err := p.ParseUnverified(tokenString, claims) if err != nil { diff --git a/cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v4/token.go b/cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v4/token.go index 3cb0f3f0e4c0..786b275ce03a 100644 --- a/cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v4/token.go +++ b/cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v4/token.go @@ -14,6 +14,12 @@ import ( // To use the non-recommended decoding, set this boolean to `true` prior to using this package. var DecodePaddingAllowed bool +// DecodeStrict will switch the codec used for decoding JWTs into strict mode. +// In this mode, the decoder requires that trailing padding bits are zero, as described in RFC 4648 section 3.5. +// Note that this is a global variable, and updating it will change the behavior on a package level, and is also NOT go-routine safe. +// To use strict decoding, set this boolean to `true` prior to using this package. +var DecodeStrict bool + // TimeFunc provides the current time when parsing token to validate "exp" claim (expiration time). // You can override it to use another time value. This is useful for testing or if your // server uses a different time zone than your tokens. 
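The `DecodeStrict` flag introduced above switches the package onto `encoding/base64`'s `Strict()` codec, which rejects segments whose unused trailing padding bits are non-zero (RFC 4648 section 3.5). A small stdlib-only sketch of that difference — the `decodeSegment` helper below mirrors the flag handling but is our own illustration, not the library's API:

```go
package main

import (
	"encoding/base64"
	"fmt"
	"strings"
)

// decodeSegment mimics how the jwt package picks a base64 codec:
// allowing padding switches to URLEncoding (after re-padding the
// segment), and strict mode rejects non-zero trailing padding bits.
func decodeSegment(seg string, paddingAllowed, strict bool) ([]byte, error) {
	enc := base64.RawURLEncoding
	if paddingAllowed {
		if l := len(seg) % 4; l > 0 {
			seg += strings.Repeat("=", 4-l)
		}
		enc = base64.URLEncoding
	}
	if strict {
		enc = enc.Strict()
	}
	return enc.DecodeString(seg)
}

func main() {
	// "AQ" and "AR" both decode to the byte 0x01 under a lenient decoder,
	// but "AR" carries a non-zero trailing bit, so strict mode rejects it.
	for _, seg := range []string{"AQ", "AR"} {
		lenient, _ := decodeSegment(seg, false, false)
		_, strictErr := decodeSegment(seg, false, true)
		fmt.Printf("%s -> %v (strict error: %v)\n", seg, lenient, strictErr)
	}
}
```

Note that, as the doc comment warns, the real flags are package-level globals and not goroutine safe; they must be set before the package is used.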
@@ -99,6 +105,11 @@ func Parse(tokenString string, keyFunc Keyfunc, options ...ParserOption) (*Token return NewParser(options...).Parse(tokenString, keyFunc) } +// ParseWithClaims is a shortcut for NewParser().ParseWithClaims(). +// +// Note: If you provide a custom claim implementation that embeds one of the standard claims (such as RegisteredClaims), +// make sure that a) you either embed a non-pointer version of the claims or b) if you are using a pointer, allocate the +// proper memory for it before passing in the overall claims, otherwise you might run into a panic. func ParseWithClaims(tokenString string, claims Claims, keyFunc Keyfunc, options ...ParserOption) (*Token, error) { return NewParser(options...).ParseWithClaims(tokenString, claims, keyFunc) } @@ -116,12 +127,17 @@ func EncodeSegment(seg []byte) string { // Deprecated: In a future release, we will demote this function to a non-exported function, since it // should only be used internally func DecodeSegment(seg string) ([]byte, error) { + encoding := base64.RawURLEncoding + if DecodePaddingAllowed { if l := len(seg) % 4; l > 0 { seg += strings.Repeat("=", 4-l) } - return base64.URLEncoding.DecodeString(seg) + encoding = base64.URLEncoding } - return base64.RawURLEncoding.DecodeString(seg) + if DecodeStrict { + encoding = encoding.Strict() + } + return encoding.DecodeString(seg) } diff --git a/cluster-autoscaler/vendor/github.com/google/cadvisor/container/crio/handler.go b/cluster-autoscaler/vendor/github.com/google/cadvisor/container/crio/handler.go index 9a06cac35331..08857785653a 100644 --- a/cluster-autoscaler/vendor/github.com/google/cadvisor/container/crio/handler.go +++ b/cluster-autoscaler/vendor/github.com/google/cadvisor/container/crio/handler.go @@ -305,14 +305,20 @@ func (h *crioContainerHandler) GetStats() (*info.ContainerStats, error) { if err != nil { return stats, err } - // Clean up stats for containers that don't have their own network - this - // includes containers running in 
Kubernetes pods that use the network of the - // infrastructure container. This stops metrics being reported multiple times - // for each container in a pod. + if !h.needNet() { + // Clean up stats for containers that don't have their own network - this + // includes containers running in Kubernetes pods that use the network of the + // infrastructure container. This stops metrics being reported multiple times + // for each container in a pod. stats.Network = info.NetworkStats{} + } else if len(stats.Network.Interfaces) == 0 { + // No network related information indicates that the pid of the + // container is no longer valid and we need to ask crio to + // provide the pid of another container from that pod + h.pidKnown = false + return stats, nil } - // Get filesystem stats. err = h.getFsStats(stats) if err != nil { diff --git a/cluster-autoscaler/vendor/github.com/google/cadvisor/info/v1/machine.go b/cluster-autoscaler/vendor/github.com/google/cadvisor/info/v1/machine.go index 47a43604053c..cb3bb0e16f7f 100644 --- a/cluster-autoscaler/vendor/github.com/google/cadvisor/info/v1/machine.go +++ b/cluster-autoscaler/vendor/github.com/google/cadvisor/info/v1/machine.go @@ -263,6 +263,7 @@ func (m *MachineInfo) Clone() *MachineInfo { NumSockets: m.NumSockets, CpuFrequency: m.CpuFrequency, MemoryCapacity: m.MemoryCapacity, + SwapCapacity: m.SwapCapacity, MemoryByType: memoryByType, NVMInfo: m.NVMInfo, HugePages: m.HugePages, diff --git a/cluster-autoscaler/vendor/github.com/google/cel-go/cel/BUILD.bazel b/cluster-autoscaler/vendor/github.com/google/cel-go/cel/BUILD.bazel index ddddbd2804e2..4331321139ec 100644 --- a/cluster-autoscaler/vendor/github.com/google/cel-go/cel/BUILD.bazel +++ b/cluster-autoscaler/vendor/github.com/google/cel-go/cel/BUILD.bazel @@ -32,7 +32,7 @@ go_library( "//interpreter:go_default_library", "//interpreter/functions:go_default_library", "//parser:go_default_library", - 
"@org_golang_google_genproto//googleapis/api/expr/v1alpha1:go_default_library", + "@org_golang_google_genproto_googleapis_api//expr/v1alpha1:go_default_library", "@org_golang_google_protobuf//proto:go_default_library", "@org_golang_google_protobuf//reflect/protodesc:go_default_library", "@org_golang_google_protobuf//reflect/protoreflect:go_default_library", @@ -70,7 +70,7 @@ go_test( "//test/proto2pb:go_default_library", "//test/proto3pb:go_default_library", "@io_bazel_rules_go//proto/wkt:descriptor_go_proto", - "@org_golang_google_genproto//googleapis/api/expr/v1alpha1:go_default_library", + "@org_golang_google_genproto_googleapis_api//expr/v1alpha1:go_default_library", "@org_golang_google_protobuf//proto:go_default_library", "@org_golang_google_protobuf//types/known/structpb:go_default_library", ], diff --git a/cluster-autoscaler/vendor/github.com/google/cel-go/cel/env.go b/cluster-autoscaler/vendor/github.com/google/cel-go/cel/env.go index 8cf442ee714b..d9c2ef63f27a 100644 --- a/cluster-autoscaler/vendor/github.com/google/cel-go/cel/env.go +++ b/cluster-autoscaler/vendor/github.com/google/cel-go/cel/env.go @@ -109,10 +109,11 @@ type Env struct { prsrOpts []parser.Option // Internal checker representation - chk *checker.Env - chkErr error - chkOnce sync.Once - chkOpts []checker.Option + chkMutex sync.Mutex + chk *checker.Env + chkErr error + chkOnce sync.Once + chkOpts []checker.Option // Program options tied to the environment progOpts []ProgramOption @@ -178,14 +179,14 @@ func (e *Env) Check(ast *Ast) (*Ast, *Issues) { pe, _ := AstToParsedExpr(ast) // Construct the internal checker env, erroring if there is an issue adding the declarations. 
- err := e.initChecker() + chk, err := e.initChecker() if err != nil { errs := common.NewErrors(ast.Source()) - errs.ReportError(common.NoLocation, e.chkErr.Error()) + errs.ReportError(common.NoLocation, err.Error()) return nil, NewIssues(errs) } - res, errs := checker.Check(pe, ast.Source(), e.chk) + res, errs := checker.Check(pe, ast.Source(), chk) if len(errs.GetErrors()) > 0 { return nil, NewIssues(errs) } @@ -239,8 +240,9 @@ func (e *Env) CompileSource(src Source) (*Ast, *Issues) { // TypeProvider are immutable, or that their underlying implementations are based on the // ref.TypeRegistry which provides a Copy method which will be invoked by this method. func (e *Env) Extend(opts ...EnvOption) (*Env, error) { - if e.chkErr != nil { - return nil, e.chkErr + chk, chkErr := e.getCheckerOrError() + if chkErr != nil { + return nil, chkErr } prsrOptsCopy := make([]parser.Option, len(e.prsrOpts)) @@ -254,10 +256,10 @@ func (e *Env) Extend(opts ...EnvOption) (*Env, error) { // Copy the declarations if needed. decsCopy := []*exprpb.Decl{} - if e.chk != nil { + if chk != nil { // If the type-checker has already been instantiated, then the e.declarations have been // validated within the chk instance. - chkOptsCopy = append(chkOptsCopy, checker.ValidatedDeclarations(e.chk)) + chkOptsCopy = append(chkOptsCopy, checker.ValidatedDeclarations(chk)) } else { // If the type-checker has not been instantiated, ensure the unvalidated declarations are // provided to the extended Env instance. @@ -460,12 +462,12 @@ func (e *Env) ResidualAst(a *Ast, details *EvalDetails) (*Ast, error) { // EstimateCost estimates the cost of a type checked CEL expression using the length estimates of input data and // extension functions provided by estimator. 
-func (e *Env) EstimateCost(ast *Ast, estimator checker.CostEstimator) (checker.CostEstimate, error) { +func (e *Env) EstimateCost(ast *Ast, estimator checker.CostEstimator, opts ...checker.CostOption) (checker.CostEstimate, error) { checked, err := AstToCheckedExpr(ast) if err != nil { return checker.CostEstimate{}, fmt.Errorf("EsimateCost could not inspect Ast: %v", err) } - return checker.Cost(checked, estimator), nil + return checker.Cost(checked, estimator, opts...) } // configure applies a series of EnvOptions to the current environment. @@ -509,7 +511,7 @@ func (e *Env) configure(opts []EnvOption) (*Env, error) { // Ensure that the checker init happens eagerly rather than lazily. if e.HasFeature(featureEagerlyValidateDeclarations) { - err := e.initChecker() + _, err := e.initChecker() if err != nil { return nil, err } @@ -518,7 +520,7 @@ func (e *Env) configure(opts []EnvOption) (*Env, error) { return e, nil } -func (e *Env) initChecker() error { +func (e *Env) initChecker() (*checker.Env, error) { e.chkOnce.Do(func() { chkOpts := []checker.Option{} chkOpts = append(chkOpts, e.chkOpts...) @@ -530,32 +532,47 @@ func (e *Env) initChecker() error { ce, err := checker.NewEnv(e.Container, e.provider, chkOpts...) if err != nil { - e.chkErr = err + e.setCheckerOrError(nil, err) return } // Add the statically configured declarations. err = ce.Add(e.declarations...) if err != nil { - e.chkErr = err + e.setCheckerOrError(nil, err) return } // Add the function declarations which are derived from the FunctionDecl instances. for _, fn := range e.functions { fnDecl, err := functionDeclToExprDecl(fn) if err != nil { - e.chkErr = err + e.setCheckerOrError(nil, err) return } err = ce.Add(fnDecl) if err != nil { - e.chkErr = err + e.setCheckerOrError(nil, err) return } } // Add function declarations here separately. 
- e.chk = ce + e.setCheckerOrError(ce, nil) }) - return e.chkErr + return e.getCheckerOrError() +} + +// setCheckerOrError sets the checker.Env or error state in a concurrency-safe manner +func (e *Env) setCheckerOrError(chk *checker.Env, chkErr error) { + e.chkMutex.Lock() + e.chk = chk + e.chkErr = chkErr + e.chkMutex.Unlock() +} + +// getCheckerOrError gets the checker.Env or error state in a concurrency-safe manner +func (e *Env) getCheckerOrError() (*checker.Env, error) { + e.chkMutex.Lock() + defer e.chkMutex.Unlock() + return e.chk, e.chkErr } // maybeApplyFeature determines whether the feature-guarded option is enabled, and if so applies diff --git a/cluster-autoscaler/vendor/github.com/google/cel-go/cel/library.go b/cluster-autoscaler/vendor/github.com/google/cel-go/cel/library.go index 072cec30e6ff..bcfd44f78a9f 100644 --- a/cluster-autoscaler/vendor/github.com/google/cel-go/cel/library.go +++ b/cluster-autoscaler/vendor/github.com/google/cel-go/cel/library.go @@ -20,6 +20,7 @@ import ( "time" "github.com/google/cel-go/checker" + "github.com/google/cel-go/common" "github.com/google/cel-go/common/operators" "github.com/google/cel-go/common/overloads" "github.com/google/cel-go/common/types" @@ -28,6 +29,18 @@ import ( "github.com/google/cel-go/interpreter" "github.com/google/cel-go/interpreter/functions" "github.com/google/cel-go/parser" + + exprpb "google.golang.org/genproto/googleapis/api/expr/v1alpha1" +) + +const ( + optMapMacro = "optMap" + hasValueFunc = "hasValue" + optionalNoneFunc = "optional.none" + optionalOfFunc = "optional.of" + optionalOfNonZeroValueFunc = "optional.ofNonZeroValue" + valueFunc = "value" + unusedIterVar = "#unused" ) // Library provides a collection of EnvOption and ProgramOption values used to configure a CEL @@ -130,13 +143,16 @@ func (optionalLibrary) CompileOptions() []EnvOption { // Introduce the optional type. Types(types.OptionalType), + // Configure the optMap macro. 
+ Macros(NewReceiverMacro(optMapMacro, 2, optMap)), + // Global and member functions for working with optional values. - Function("optional.of", + Function(optionalOfFunc, Overload("optional_of", []*Type{paramTypeV}, optionalTypeV, UnaryBinding(func(value ref.Val) ref.Val { return types.OptionalOf(value) }))), - Function("optional.ofNonZeroValue", + Function(optionalOfNonZeroValueFunc, Overload("optional_ofNonZeroValue", []*Type{paramTypeV}, optionalTypeV, UnaryBinding(func(value ref.Val) ref.Val { v, isZeroer := value.(traits.Zeroer) @@ -145,18 +161,18 @@ func (optionalLibrary) CompileOptions() []EnvOption { } return types.OptionalNone }))), - Function("optional.none", + Function(optionalNoneFunc, Overload("optional_none", []*Type{}, optionalTypeV, FunctionBinding(func(values ...ref.Val) ref.Val { return types.OptionalNone }))), - Function("value", + Function(valueFunc, MemberOverload("optional_value", []*Type{optionalTypeV}, paramTypeV, UnaryBinding(func(value ref.Val) ref.Val { opt := value.(*types.Optional) return opt.GetValue() }))), - Function("hasValue", + Function(hasValueFunc, MemberOverload("optional_hasValue", []*Type{optionalTypeV}, BoolType, UnaryBinding(func(value ref.Val) ref.Val { opt := value.(*types.Optional) @@ -190,6 +206,37 @@ func (optionalLibrary) CompileOptions() []EnvOption { } } +func optMap(meh MacroExprHelper, target *exprpb.Expr, args []*exprpb.Expr) (*exprpb.Expr, *common.Error) { + varIdent := args[0] + varName := "" + switch varIdent.GetExprKind().(type) { + case *exprpb.Expr_IdentExpr: + varName = varIdent.GetIdentExpr().GetName() + default: + return nil, &common.Error{ + Message: "optMap() variable name must be a simple identifier", + Location: meh.OffsetLocation(varIdent.GetId()), + } + } + mapExpr := args[1] + return meh.GlobalCall( + operators.Conditional, + meh.ReceiverCall(hasValueFunc, target), + meh.GlobalCall(optionalOfFunc, + meh.Fold( + unusedIterVar, + meh.NewList(), + varName, + meh.ReceiverCall(valueFunc, target), + 
meh.LiteralBool(false), + meh.Ident(varName), + mapExpr, + ), + ), + meh.GlobalCall(optionalNoneFunc), + ), nil +} + // ProgramOptions implements the Library interface method. func (optionalLibrary) ProgramOptions() []ProgramOption { return []ProgramOption{ diff --git a/cluster-autoscaler/vendor/github.com/google/cel-go/cel/options.go b/cluster-autoscaler/vendor/github.com/google/cel-go/cel/options.go index 63321b548170..07f3d6c71611 100644 --- a/cluster-autoscaler/vendor/github.com/google/cel-go/cel/options.go +++ b/cluster-autoscaler/vendor/github.com/google/cel-go/cel/options.go @@ -568,3 +568,12 @@ func ParserRecursionLimit(limit int) EnvOption { return e, nil } } + +// ParserExpressionSizeLimit adjusts the number of code points the expression parser is allowed to parse. +// Defaults defined in the parser package. +func ParserExpressionSizeLimit(limit int) EnvOption { + return func(e *Env) (*Env, error) { + e.prsrOpts = append(e.prsrOpts, parser.ExpressionSizeCodePointLimit(limit)) + return e, nil + } +} diff --git a/cluster-autoscaler/vendor/github.com/google/cel-go/checker/BUILD.bazel b/cluster-autoscaler/vendor/github.com/google/cel-go/checker/BUILD.bazel index bec40b6e695b..1c6ddb7f7da6 100644 --- a/cluster-autoscaler/vendor/github.com/google/cel-go/checker/BUILD.bazel +++ b/cluster-autoscaler/vendor/github.com/google/cel-go/checker/BUILD.bazel @@ -30,7 +30,7 @@ go_library( "//common/types/pb:go_default_library", "//common/types/ref:go_default_library", "//parser:go_default_library", - "@org_golang_google_genproto//googleapis/api/expr/v1alpha1:go_default_library", + "@org_golang_google_genproto_googleapis_api//expr/v1alpha1:go_default_library", "@org_golang_google_protobuf//proto:go_default_library", "@org_golang_google_protobuf//types/known/emptypb:go_default_library", "@org_golang_google_protobuf//types/known/structpb:go_default_library", @@ -54,7 +54,7 @@ go_test( "//test:go_default_library", "//test/proto2pb:go_default_library", 
"//test/proto3pb:go_default_library", - "@com_github_antlr_antlr4_runtime_go_antlr//:go_default_library", + "@com_github_antlr_antlr4_runtime_go_antlr_v4//:go_default_library", "@org_golang_google_protobuf//proto:go_default_library", ], ) diff --git a/cluster-autoscaler/vendor/github.com/google/cel-go/checker/cost.go b/cluster-autoscaler/vendor/github.com/google/cel-go/checker/cost.go index 6cf8c4fea095..8ae8d18bfce7 100644 --- a/cluster-autoscaler/vendor/github.com/google/cel-go/checker/cost.go +++ b/cluster-autoscaler/vendor/github.com/google/cel-go/checker/cost.go @@ -261,6 +261,8 @@ type coster struct { computedSizes map[int64]SizeEstimate checkedExpr *exprpb.CheckedExpr estimator CostEstimator + // presenceTestCost will either be a zero or one based on whether has() macros count against cost computations. + presenceTestCost CostEstimate } // Use a stack of iterVar -> iterRange Expr Ids to handle shadowed variable names. @@ -283,16 +285,39 @@ func (vs iterRangeScopes) peek(varName string) (int64, bool) { return 0, false } +// CostOption configures flags which affect cost computations. +type CostOption func(*coster) error + +// PresenceTestHasCost determines whether presence testing has a cost of one or zero. +// Defaults to presence test has a cost of one. +func PresenceTestHasCost(hasCost bool) CostOption { + return func(c *coster) error { + if hasCost { + c.presenceTestCost = selectAndIdentCost + return nil + } + c.presenceTestCost = CostEstimate{Min: 0, Max: 0} + return nil + } +} + // Cost estimates the cost of the parsed and type checked CEL expression. 
-func Cost(checker *exprpb.CheckedExpr, estimator CostEstimator) CostEstimate { - c := coster{ - checkedExpr: checker, - estimator: estimator, - exprPath: map[int64][]string{}, - iterRanges: map[string][]int64{}, - computedSizes: map[int64]SizeEstimate{}, +func Cost(checker *exprpb.CheckedExpr, estimator CostEstimator, opts ...CostOption) (CostEstimate, error) { + c := &coster{ + checkedExpr: checker, + estimator: estimator, + exprPath: map[int64][]string{}, + iterRanges: map[string][]int64{}, + computedSizes: map[int64]SizeEstimate{}, + presenceTestCost: CostEstimate{Min: 1, Max: 1}, + } + for _, opt := range opts { + err := opt(c) + if err != nil { + return CostEstimate{}, err + } } - return c.cost(checker.GetExpr()) + return c.cost(checker.GetExpr()), nil } func (c *coster) cost(e *exprpb.Expr) CostEstimate { @@ -347,6 +372,7 @@ func (c *coster) costSelect(e *exprpb.Expr) CostEstimate { // this is equivalent to how evalTestOnly increments the runtime cost counter // but does not add any additional cost for the qualifier, except here we do // the reverse (ident adds cost) + sum = sum.Add(c.presenceTestCost) sum = sum.Add(c.cost(sel.GetOperand())) return sum } diff --git a/cluster-autoscaler/vendor/github.com/google/cel-go/checker/decls/BUILD.bazel b/cluster-autoscaler/vendor/github.com/google/cel-go/checker/decls/BUILD.bazel index 5a24f1da8092..9384be4507c2 100644 --- a/cluster-autoscaler/vendor/github.com/google/cel-go/checker/decls/BUILD.bazel +++ b/cluster-autoscaler/vendor/github.com/google/cel-go/checker/decls/BUILD.bazel @@ -13,7 +13,7 @@ go_library( ], importpath = "github.com/google/cel-go/checker/decls", deps = [ - "@org_golang_google_genproto//googleapis/api/expr/v1alpha1:go_default_library", + "@org_golang_google_genproto_googleapis_api//expr/v1alpha1:go_default_library", "@org_golang_google_protobuf//types/known/emptypb:go_default_library", "@org_golang_google_protobuf//types/known/structpb:go_default_library", ], diff --git 
a/cluster-autoscaler/vendor/github.com/google/cel-go/checker/env.go b/cluster-autoscaler/vendor/github.com/google/cel-go/checker/env.go index c7eeb04eee6e..be89d2d68d76 100644 --- a/cluster-autoscaler/vendor/github.com/google/cel-go/checker/env.go +++ b/cluster-autoscaler/vendor/github.com/google/cel-go/checker/env.go @@ -226,7 +226,7 @@ func (e *Env) setFunction(decl *exprpb.Decl) []errorMsg { newOverloads := []*exprpb.Decl_FunctionDecl_Overload{} for _, overload := range overloads { existing, found := existingOverloads[overload.GetOverloadId()] - if !found || !proto.Equal(existing, overload) { + if !found || !overloadsEqual(existing, overload) { newOverloads = append(newOverloads, overload) } } @@ -264,6 +264,31 @@ func (e *Env) isOverloadDisabled(overloadID string) bool { return found } +// overloadsEqual returns whether two overloads have identical signatures. +// +// type parameter names are ignored as they may be specified in any order and have no bearing on overload +// equivalence +func overloadsEqual(o1, o2 *exprpb.Decl_FunctionDecl_Overload) bool { + return o1.GetOverloadId() == o2.GetOverloadId() && + o1.GetIsInstanceFunction() == o2.GetIsInstanceFunction() && + paramsEqual(o1.GetParams(), o2.GetParams()) && + proto.Equal(o1.GetResultType(), o2.GetResultType()) +} + +// paramsEqual returns whether two lists have equal length and all types are equal +func paramsEqual(p1, p2 []*exprpb.Type) bool { + if len(p1) != len(p2) { + return false + } + for i, a := range p1 { + b := p2[i] + if !proto.Equal(a, b) { + return false + } + } + return true +} + // sanitizeFunction replaces well-known types referenced by message name with their equivalent // CEL built-in type instances. 
func sanitizeFunction(decl *exprpb.Decl) *exprpb.Decl { diff --git a/cluster-autoscaler/vendor/github.com/google/cel-go/common/BUILD.bazel b/cluster-autoscaler/vendor/github.com/google/cel-go/common/BUILD.bazel index a0058aebe07f..d6165b13af05 100644 --- a/cluster-autoscaler/vendor/github.com/google/cel-go/common/BUILD.bazel +++ b/cluster-autoscaler/vendor/github.com/google/cel-go/common/BUILD.bazel @@ -17,7 +17,7 @@ go_library( importpath = "github.com/google/cel-go/common", deps = [ "//common/runes:go_default_library", - "@org_golang_google_genproto//googleapis/api/expr/v1alpha1:go_default_library", + "@org_golang_google_genproto_googleapis_api//expr/v1alpha1:go_default_library", "@org_golang_x_text//width:go_default_library", ], ) diff --git a/cluster-autoscaler/vendor/github.com/google/cel-go/common/containers/BUILD.bazel b/cluster-autoscaler/vendor/github.com/google/cel-go/common/containers/BUILD.bazel index 18142d94ef6c..3f3f07887124 100644 --- a/cluster-autoscaler/vendor/github.com/google/cel-go/common/containers/BUILD.bazel +++ b/cluster-autoscaler/vendor/github.com/google/cel-go/common/containers/BUILD.bazel @@ -12,7 +12,7 @@ go_library( ], importpath = "github.com/google/cel-go/common/containers", deps = [ - "@org_golang_google_genproto//googleapis/api/expr/v1alpha1:go_default_library", + "@org_golang_google_genproto_googleapis_api//expr/v1alpha1:go_default_library", ], ) @@ -26,6 +26,6 @@ go_test( ":go_default_library", ], deps = [ - "@org_golang_google_genproto//googleapis/api/expr/v1alpha1:go_default_library", + "@org_golang_google_genproto_googleapis_api//expr/v1alpha1:go_default_library", ], ) diff --git a/cluster-autoscaler/vendor/github.com/google/cel-go/common/debug/BUILD.bazel b/cluster-autoscaler/vendor/github.com/google/cel-go/common/debug/BUILD.bazel index cf5c5d246760..1f029839c713 100644 --- a/cluster-autoscaler/vendor/github.com/google/cel-go/common/debug/BUILD.bazel +++ 
b/cluster-autoscaler/vendor/github.com/google/cel-go/common/debug/BUILD.bazel @@ -13,6 +13,6 @@ go_library( importpath = "github.com/google/cel-go/common/debug", deps = [ "//common:go_default_library", - "@org_golang_google_genproto//googleapis/api/expr/v1alpha1:go_default_library", + "@org_golang_google_genproto_googleapis_api//expr/v1alpha1:go_default_library", ], ) diff --git a/cluster-autoscaler/vendor/github.com/google/cel-go/common/types/BUILD.bazel b/cluster-autoscaler/vendor/github.com/google/cel-go/common/types/BUILD.bazel index f56700de5d62..89c4feacbfc7 100644 --- a/cluster-autoscaler/vendor/github.com/google/cel-go/common/types/BUILD.bazel +++ b/cluster-autoscaler/vendor/github.com/google/cel-go/common/types/BUILD.bazel @@ -39,8 +39,8 @@ go_library( "//common/types/ref:go_default_library", "//common/types/traits:go_default_library", "@com_github_stoewer_go_strcase//:go_default_library", - "@org_golang_google_genproto//googleapis/api/expr/v1alpha1:go_default_library", - "@org_golang_google_genproto//googleapis/rpc/status:go_default_library", + "@org_golang_google_genproto_googleapis_api//expr/v1alpha1:go_default_library", + "@org_golang_google_genproto_googleapis_rpc//status:go_default_library", "@org_golang_google_protobuf//encoding/protojson:go_default_library", "@org_golang_google_protobuf//proto:go_default_library", "@org_golang_google_protobuf//reflect/protoreflect:go_default_library", @@ -80,7 +80,7 @@ go_test( "//common/types/ref:go_default_library", "//test:go_default_library", "//test/proto3pb:test_all_types_go_proto", - "@org_golang_google_genproto//googleapis/api/expr/v1alpha1:go_default_library", + "@org_golang_google_genproto_googleapis_api//expr/v1alpha1:go_default_library", "@org_golang_google_protobuf//encoding/protojson:go_default_library", "@org_golang_google_protobuf//types/known/anypb:go_default_library", "@org_golang_google_protobuf//types/known/durationpb:go_default_library", diff --git 
a/cluster-autoscaler/vendor/github.com/google/cel-go/common/types/pb/BUILD.bazel b/cluster-autoscaler/vendor/github.com/google/cel-go/common/types/pb/BUILD.bazel index f23ac9c0e252..e2b9d37b569e 100644 --- a/cluster-autoscaler/vendor/github.com/google/cel-go/common/types/pb/BUILD.bazel +++ b/cluster-autoscaler/vendor/github.com/google/cel-go/common/types/pb/BUILD.bazel @@ -17,7 +17,7 @@ go_library( ], importpath = "github.com/google/cel-go/common/types/pb", deps = [ - "@org_golang_google_genproto//googleapis/api/expr/v1alpha1:go_default_library", + "@org_golang_google_genproto_googleapis_api//expr/v1alpha1:go_default_library", "@org_golang_google_protobuf//encoding/protowire:go_default_library", "@org_golang_google_protobuf//proto:go_default_library", "@org_golang_google_protobuf//reflect/protoreflect:go_default_library", diff --git a/cluster-autoscaler/vendor/github.com/google/cel-go/common/types/ref/BUILD.bazel b/cluster-autoscaler/vendor/github.com/google/cel-go/common/types/ref/BUILD.bazel index 1d0f468993ba..79330c332163 100644 --- a/cluster-autoscaler/vendor/github.com/google/cel-go/common/types/ref/BUILD.bazel +++ b/cluster-autoscaler/vendor/github.com/google/cel-go/common/types/ref/BUILD.bazel @@ -13,7 +13,7 @@ go_library( ], importpath = "github.com/google/cel-go/common/types/ref", deps = [ - "@org_golang_google_genproto//googleapis/api/expr/v1alpha1:go_default_library", + "@org_golang_google_genproto_googleapis_api//expr/v1alpha1:go_default_library", "@org_golang_google_protobuf//proto:go_default_library", "@org_golang_google_protobuf//reflect/protoreflect:go_default_library", ], diff --git a/cluster-autoscaler/vendor/github.com/google/cel-go/common/types/ref/reference.go b/cluster-autoscaler/vendor/github.com/google/cel-go/common/types/ref/reference.go index 5921ffd81f36..e0d58145cdcd 100644 --- a/cluster-autoscaler/vendor/github.com/google/cel-go/common/types/ref/reference.go +++ 
b/cluster-autoscaler/vendor/github.com/google/cel-go/common/types/ref/reference.go @@ -37,9 +37,18 @@ type Type interface { type Val interface { // ConvertToNative converts the Value to a native Go struct according to the // reflected type description, or error if the conversion is not feasible. + // + // The ConvertToNative method is intended to be used to support conversion between CEL types + // and native types during object creation expressions or by clients who need to adapt the + // returned CEL value into an equivalent Go value instance. + // + // When implementing or using ConvertToNative, the following guidelines apply: + // - Use ConvertToNative when marshalling CEL evaluation results to native types. + // - Do not use ConvertToNative within CEL extension functions. + // - Document whether your implementation supports non-CEL field types, such as Go or Protobuf. ConvertToNative(typeDesc reflect.Type) (any, error) - // ConvertToType supports type conversions between value types supported by the expression language. + // ConvertToType supports type conversions between CEL value types supported by the expression language. ConvertToType(typeValue Type) Val // Equal returns true if the `other` value has the same type and content as the implementing struct. 
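The cel-go env.go hunk earlier in this patch replaces direct reads of `e.chk`/`e.chkErr` with `setCheckerOrError`/`getCheckerOrError` so the lazily initialized checker state can be consulted from concurrent goroutines. A minimal stdlib sketch of that sync.Once-plus-mutex pattern (the `lazy` type and its field names are illustrative, not cel-go's):

```go
package main

import (
	"fmt"
	"sync"
)

// lazy caches the result of a one-time initialization. sync.Once
// guarantees the computation runs at most once, while the mutex makes
// the cached value/error pair safe to read and write concurrently.
type lazy struct {
	once sync.Once
	mu   sync.Mutex
	val  string
	err  error
}

func (l *lazy) set(v string, err error) {
	l.mu.Lock()
	defer l.mu.Unlock()
	l.val, l.err = v, err
}

func (l *lazy) get() (string, error) {
	l.mu.Lock()
	defer l.mu.Unlock()
	return l.val, l.err
}

// init runs compute at most once and always returns the cached outcome,
// mirroring how initChecker now returns (checker, error) from shared state.
func (l *lazy) init(compute func() (string, error)) (string, error) {
	l.once.Do(func() { l.set(compute()) })
	return l.get()
}

func main() {
	var l lazy
	var wg sync.WaitGroup
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			v, err := l.init(func() (string, error) { return "checker-env", nil })
			fmt.Println(v, err)
		}()
	}
	wg.Wait()
}
```

The mutex matters because `Once.Do` only synchronizes callers of `Do` itself; any path that reads the fields without going through it (as `Extend` did via `e.chkErr`) needs its own lock.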
diff --git a/cluster-autoscaler/vendor/github.com/google/cel-go/ext/BUILD.bazel b/cluster-autoscaler/vendor/github.com/google/cel-go/ext/BUILD.bazel index 2f1003aba88a..4bcf8a283eaa 100644 --- a/cluster-autoscaler/vendor/github.com/google/cel-go/ext/BUILD.bazel +++ b/cluster-autoscaler/vendor/github.com/google/cel-go/ext/BUILD.bazel @@ -12,6 +12,7 @@ go_library( "math.go", "native.go", "protos.go", + "sets.go", "strings.go", ], importpath = "github.com/google/cel-go/ext", @@ -26,7 +27,7 @@ go_library( "//common/types/ref:go_default_library", "//common/types/traits:go_default_library", "//interpreter:go_default_library", - "@org_golang_google_genproto//googleapis/api/expr/v1alpha1:go_default_library", + "@org_golang_google_genproto_googleapis_api//expr/v1alpha1:go_default_library", "@org_golang_google_protobuf//proto:go_default_library", "@org_golang_google_protobuf//reflect/protoreflect:go_default_library", "@org_golang_google_protobuf//types/known/structpb", @@ -43,6 +44,7 @@ go_test( "math_test.go", "native_test.go", "protos_test.go", + "sets_test.go", "strings_test.go", ], embed = [ @@ -58,7 +60,7 @@ go_test( "//test:go_default_library", "//test/proto2pb:go_default_library", "//test/proto3pb:go_default_library", - "@org_golang_google_genproto//googleapis/api/expr/v1alpha1:go_default_library", + "@org_golang_google_genproto_googleapis_api//expr/v1alpha1:go_default_library", "@org_golang_google_protobuf//proto:go_default_library", "@org_golang_google_protobuf//types/known/wrapperspb:go_default_library", "@org_golang_google_protobuf//encoding/protojson:go_default_library", diff --git a/cluster-autoscaler/vendor/github.com/google/cel-go/ext/README.md b/cluster-autoscaler/vendor/github.com/google/cel-go/ext/README.md index c4faf59ab1d5..ef0eb2ab7f53 100644 --- a/cluster-autoscaler/vendor/github.com/google/cel-go/ext/README.md +++ b/cluster-autoscaler/vendor/github.com/google/cel-go/ext/README.md @@ -3,6 +3,30 @@ CEL extensions are a related set of constants, 
functions, macros, or other features which may not be covered by the core CEL spec. +## Bindings + +Returns a cel.EnvOption to configure support for local variable bindings +in expressions. + +# Cel.Bind + +Binds a simple identifier to an initialization expression which may be used +in a subsequenct result expression. Bindings may also be nested within each +other. + + cel.bind(, , ) + +Examples: + + cel.bind(a, 'hello', + cel.bind(b, 'world', a + b + b + a)) // "helloworldworldhello" + + // Avoid a list allocation within the exists comprehension. + cel.bind(valid_values, [a, b, c], + [d, e, f].exists(elem, elem in valid_values)) + +Local bindings are not guaranteed to be evaluated before use. + ## Encoders Encoding utilies for marshalling data into standardized representations. @@ -33,8 +57,7 @@ Example: ## Math -Math returns a cel.EnvOption to configure namespaced math helper macros and -functions. +Math helper macros and functions. Note, all macros use the 'math' namespace; however, at the time of macro expansion the namespace looks just like any other identifier. If you are @@ -96,8 +119,7 @@ Examples: ## Protos -Protos returns a cel.EnvOption to configure extended macros and functions for -proto manipulation. +Protos configure extended macros and functions for proto manipulation. Note, all macros use the 'proto' namespace; however, at the time of macro expansion the namespace looks just like any other identifier. If you are @@ -127,6 +149,62 @@ Example: proto.hasExt(msg, google.expr.proto2.test.int32_ext) // returns true || false +## Sets + +Sets provides set relationship tests. + +There is no set type within CEL, and while one may be introduced in the +future, there are cases where a `list` type is known to behave like a set. +For such cases, this library provides some basic functionality for +determining set containment, equivalence, and intersection. 
+ +### Sets.Contains + +Returns whether the first list argument contains all elements in the second +list argument. The list may contain elements of any type and standard CEL +equality is used to determine whether a value exists in both lists. If the +second list is empty, the result will always return true. + + sets.contains(list(T), list(T)) -> bool + +Examples: + + sets.contains([], []) // true + sets.contains([], [1]) // false + sets.contains([1, 2, 3, 4], [2, 3]) // true + sets.contains([1, 2.0, 3u], [1.0, 2u, 3]) // true + +### Sets.Equivalent + +Returns whether the first and second list are set equivalent. Lists are set +equivalent if for every item in the first list, there is an element in the +second which is equal. The lists may not be of the same size as they do not +guarantee the elements within them are unique, so size does not factor into +the computation. + + sets.equivalent(list(T), list(T)) -> bool + +Examples: + + sets.equivalent([], []) // true + sets.equivalent([1], [1, 1]) // true + sets.equivalent([1], [1u, 1.0]) // true + sets.equivalent([1, 2, 3], [3u, 2.0, 1]) // true + +### Sets.Intersects + +Returns whether the first list has at least one element whose value is equal +to an element in the second list. If either list is empty, the result will +be false. + + sets.intersects(list(T), list(T)) -> bool + +Examples: + + sets.intersects([1], []) // false + sets.intersects([1], [1, 2]) // true + sets.intersects([[1], [2, 3]], [[1, 2], [2, 3.0]]) // true + ## Strings Extended functions for string manipulation. 
As a general note, all indices are diff --git a/cluster-autoscaler/vendor/github.com/google/cel-go/ext/bindings.go b/cluster-autoscaler/vendor/github.com/google/cel-go/ext/bindings.go new file mode 100644 index 000000000000..9cc3c3efe585 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/google/cel-go/ext/bindings.go @@ -0,0 +1,100 @@ +// Copyright 2023 Google LLC +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package ext + +import ( + "github.com/google/cel-go/cel" + "github.com/google/cel-go/common" + + exprpb "google.golang.org/genproto/googleapis/api/expr/v1alpha1" +) + +// Bindings returns a cel.EnvOption to configure support for local variable +// bindings in expressions. +// +// # Cel.Bind +// +// Binds a simple identifier to an initialization expression which may be used +// in a subsequenct result expression. Bindings may also be nested within each +// other. +// +// cel.bind(, , ) +// +// Examples: +// +// cel.bind(a, 'hello', +// cel.bind(b, 'world', a + b + b + a)) // "helloworldworldhello" +// +// // Avoid a list allocation within the exists comprehension. +// cel.bind(valid_values, [a, b, c], +// [d, e, f].exists(elem, elem in valid_values)) +// +// Local bindings are not guaranteed to be evaluated before use. 
+func Bindings() cel.EnvOption { + return cel.Lib(celBindings{}) +} + +const ( + celNamespace = "cel" + bindMacro = "bind" + unusedIterVar = "#unused" +) + +type celBindings struct{} + +func (celBindings) LibraryName() string { + return "cel.lib.ext.cel.bindings" +} + +func (celBindings) CompileOptions() []cel.EnvOption { + return []cel.EnvOption{ + cel.Macros( + // cel.bind(var, , ) + cel.NewReceiverMacro(bindMacro, 3, celBind), + ), + } +} + +func (celBindings) ProgramOptions() []cel.ProgramOption { + return []cel.ProgramOption{} +} + +func celBind(meh cel.MacroExprHelper, target *exprpb.Expr, args []*exprpb.Expr) (*exprpb.Expr, *common.Error) { + if !macroTargetMatchesNamespace(celNamespace, target) { + return nil, nil + } + varIdent := args[0] + varName := "" + switch varIdent.GetExprKind().(type) { + case *exprpb.Expr_IdentExpr: + varName = varIdent.GetIdentExpr().GetName() + default: + return nil, &common.Error{ + Message: "cel.bind() variable names must be simple identifers", + Location: meh.OffsetLocation(varIdent.GetId()), + } + } + varInit := args[1] + resultExpr := args[2] + return meh.Fold( + unusedIterVar, + meh.NewList(), + varName, + varInit, + meh.LiteralBool(false), + meh.Ident(varName), + resultExpr, + ), nil +} diff --git a/cluster-autoscaler/vendor/github.com/google/cel-go/ext/math.go b/cluster-autoscaler/vendor/github.com/google/cel-go/ext/math.go index 79b0c8bf948d..1c8ad585a176 100644 --- a/cluster-autoscaler/vendor/github.com/google/cel-go/ext/math.go +++ b/cluster-autoscaler/vendor/github.com/google/cel-go/ext/math.go @@ -91,7 +91,7 @@ func Math() cel.EnvOption { return cel.Lib(mathLib{}) } -var ( +const ( mathNamespace = "math" leastMacro = "least" greatestMacro = "greatest" diff --git a/cluster-autoscaler/vendor/github.com/google/cel-go/ext/sets.go b/cluster-autoscaler/vendor/github.com/google/cel-go/ext/sets.go new file mode 100644 index 000000000000..4820d6199e66 --- /dev/null +++ 
b/cluster-autoscaler/vendor/github.com/google/cel-go/ext/sets.go @@ -0,0 +1,138 @@ +// Copyright 2023 Google LLC +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package ext + +import ( + "github.com/google/cel-go/cel" + "github.com/google/cel-go/common/types" + "github.com/google/cel-go/common/types/ref" + "github.com/google/cel-go/common/types/traits" +) + +// Sets returns a cel.EnvOption to configure namespaced set relationship +// functions. +// +// There is no set type within CEL, and while one may be introduced in the +// future, there are cases where a `list` type is known to behave like a set. +// For such cases, this library provides some basic functionality for +// determining set containment, equivalence, and intersection. +// +// # Sets.Contains +// +// Returns whether the first list argument contains all elements in the second +// list argument. The list may contain elements of any type and standard CEL +// equality is used to determine whether a value exists in both lists. If the +// second list is empty, the result will always return true. +// +// sets.contains(list(T), list(T)) -> bool +// +// Examples: +// +// sets.contains([], []) // true +// sets.contains([], [1]) // false +// sets.contains([1, 2, 3, 4], [2, 3]) // true +// sets.contains([1, 2.0, 3u], [1.0, 2u, 3]) // true +// +// # Sets.Equivalent +// +// Returns whether the first and second list are set equivalent. 
Lists are set +// equivalent if for every item in the first list, there is an element in the +// second which is equal. The lists may not be of the same size as they do not +// guarantee the elements within them are unique, so size does not factor into +// the computation. +// +// Examples: +// +// sets.equivalent([], []) // true +// sets.equivalent([1], [1, 1]) // true +// sets.equivalent([1], [1u, 1.0]) // true +// sets.equivalent([1, 2, 3], [3u, 2.0, 1]) // true +// +// # Sets.Intersects +// +// Returns whether the first list has at least one element whose value is equal +// to an element in the second list. If either list is empty, the result will +// be false. +// +// Examples: +// +// sets.intersects([1], []) // false +// sets.intersects([1], [1, 2]) // true +// sets.intersects([[1], [2, 3]], [[1, 2], [2, 3.0]]) // true +func Sets() cel.EnvOption { + return cel.Lib(setsLib{}) +} + +type setsLib struct{} + +// LibraryName implements the SingletonLibrary interface method. +func (setsLib) LibraryName() string { + return "cel.lib.ext.sets" +} + +// CompileOptions implements the Library interface method. +func (setsLib) CompileOptions() []cel.EnvOption { + listType := cel.ListType(cel.TypeParamType("T")) + return []cel.EnvOption{ + cel.Function("sets.contains", + cel.Overload("list_sets_contains_list", []*cel.Type{listType, listType}, cel.BoolType, + cel.BinaryBinding(setsContains))), + cel.Function("sets.equivalent", + cel.Overload("list_sets_equivalent_list", []*cel.Type{listType, listType}, cel.BoolType, + cel.BinaryBinding(setsEquivalent))), + cel.Function("sets.intersects", + cel.Overload("list_sets_intersects_list", []*cel.Type{listType, listType}, cel.BoolType, + cel.BinaryBinding(setsIntersects))), + } +} + +// ProgramOptions implements the Library interface method. 
+func (setsLib) ProgramOptions() []cel.ProgramOption { + return []cel.ProgramOption{} +} + +func setsIntersects(listA, listB ref.Val) ref.Val { + lA := listA.(traits.Lister) + lB := listB.(traits.Lister) + it := lA.Iterator() + for it.HasNext() == types.True { + exists := lB.Contains(it.Next()) + if exists == types.True { + return types.True + } + } + return types.False +} + +func setsContains(list, sublist ref.Val) ref.Val { + l := list.(traits.Lister) + sub := sublist.(traits.Lister) + it := sub.Iterator() + for it.HasNext() == types.True { + exists := l.Contains(it.Next()) + if exists != types.True { + return exists + } + } + return types.True +} + +func setsEquivalent(listA, listB ref.Val) ref.Val { + aContainsB := setsContains(listA, listB) + if aContainsB != types.True { + return aContainsB + } + return setsContains(listB, listA) +} diff --git a/cluster-autoscaler/vendor/github.com/google/cel-go/ext/strings.go b/cluster-autoscaler/vendor/github.com/google/cel-go/ext/strings.go index 2962eb663071..8455d5829093 100644 --- a/cluster-autoscaler/vendor/github.com/google/cel-go/ext/strings.go +++ b/cluster-autoscaler/vendor/github.com/google/cel-go/ext/strings.go @@ -81,7 +81,7 @@ const ( // `%X` - same as above, but with A-F capitalized. // `%o` - substitutes an integer with its equivalent in octal. 
// -// .format()` -> +// .format() -> // // Examples: // @@ -431,30 +431,12 @@ func (sl *stringLib) CompileOptions() []cel.EnvOption { s := str.(types.String) return stringOrError(upperASCII(string(s))) }))), - cel.Function("join", - cel.MemberOverload("list_join", []*cel.Type{cel.ListType(cel.StringType)}, cel.StringType, - cel.UnaryBinding(func(list ref.Val) ref.Val { - l, err := list.ConvertToNative(stringListType) - if err != nil { - return types.NewErr(err.Error()) - } - return stringOrError(join(l.([]string))) - })), - cel.MemberOverload("list_join_string", []*cel.Type{cel.ListType(cel.StringType), cel.StringType}, cel.StringType, - cel.BinaryBinding(func(list, delim ref.Val) ref.Val { - l, err := list.ConvertToNative(stringListType) - if err != nil { - return types.NewErr(err.Error()) - } - d := delim.(types.String) - return stringOrError(joinSeparator(l.([]string), string(d))) - }))), } if sl.version >= 1 { opts = append(opts, cel.Function("format", cel.MemberOverload("string_format", []*cel.Type{cel.StringType, cel.ListType(cel.DynType)}, cel.StringType, cel.FunctionBinding(func(args ...ref.Val) ref.Val { - s := args[0].(types.String).Value().(string) + s := string(args[0].(types.String)) formatArgs := args[1].(traits.Lister) return stringOrError(interpreter.ParseFormatString(s, &stringFormatter{}, &stringArgList{formatArgs}, formatLocale)) }))), @@ -465,6 +447,43 @@ func (sl *stringLib) CompileOptions() []cel.EnvOption { })))) } + if sl.version >= 2 { + opts = append(opts, + cel.Function("join", + cel.MemberOverload("list_join", []*cel.Type{cel.ListType(cel.StringType)}, cel.StringType, + cel.UnaryBinding(func(list ref.Val) ref.Val { + l := list.(traits.Lister) + return stringOrError(joinValSeparator(l, "")) + })), + cel.MemberOverload("list_join_string", []*cel.Type{cel.ListType(cel.StringType), cel.StringType}, cel.StringType, + cel.BinaryBinding(func(list, delim ref.Val) ref.Val { + l := list.(traits.Lister) + d := delim.(types.String) + return 
stringOrError(joinValSeparator(l, string(d))) + }))), + ) + } else { + opts = append(opts, + cel.Function("join", + cel.MemberOverload("list_join", []*cel.Type{cel.ListType(cel.StringType)}, cel.StringType, + cel.UnaryBinding(func(list ref.Val) ref.Val { + l, err := list.ConvertToNative(stringListType) + if err != nil { + return types.NewErr(err.Error()) + } + return stringOrError(join(l.([]string))) + })), + cel.MemberOverload("list_join_string", []*cel.Type{cel.ListType(cel.StringType), cel.StringType}, cel.StringType, + cel.BinaryBinding(func(list, delim ref.Val) ref.Val { + l, err := list.ConvertToNative(stringListType) + if err != nil { + return types.NewErr(err.Error()) + } + d := delim.(types.String) + return stringOrError(joinSeparator(l.([]string), string(d))) + }))), + ) + } return opts } @@ -623,6 +642,23 @@ func join(strs []string) (string, error) { return strings.Join(strs, ""), nil } +func joinValSeparator(strs traits.Lister, separator string) (string, error) { + sz := strs.Size().(types.Int) + var sb strings.Builder + for i := types.Int(0); i < sz; i++ { + if i != 0 { + sb.WriteString(separator) + } + elem := strs.Get(i) + str, ok := elem.(types.String) + if !ok { + return "", fmt.Errorf("join: invalid input: %v", elem) + } + sb.WriteString(string(str)) + } + return sb.String(), nil +} + type clauseImpl func(ref.Val, string) (string, error) func clauseForType(argType ref.Type) (clauseImpl, error) { diff --git a/cluster-autoscaler/vendor/github.com/google/cel-go/interpreter/BUILD.bazel b/cluster-autoscaler/vendor/github.com/google/cel-go/interpreter/BUILD.bazel index c6a620656bf6..b6d04e000317 100644 --- a/cluster-autoscaler/vendor/github.com/google/cel-go/interpreter/BUILD.bazel +++ b/cluster-autoscaler/vendor/github.com/google/cel-go/interpreter/BUILD.bazel @@ -32,7 +32,7 @@ go_library( "//common/types/ref:go_default_library", "//common/types/traits:go_default_library", "//interpreter/functions:go_default_library", - 
"@org_golang_google_genproto//googleapis/api/expr/v1alpha1:go_default_library", + "@org_golang_google_genproto_googleapis_api//expr/v1alpha1:go_default_library", "@org_golang_google_protobuf//proto:go_default_library", "@org_golang_google_protobuf//types/known/durationpb:go_default_library", "@org_golang_google_protobuf//types/known/structpb:go_default_library", @@ -49,6 +49,7 @@ go_test( "attributes_test.go", "interpreter_test.go", "prune_test.go", + "runtimecost_test.go", ], embed = [ ":go_default_library", @@ -65,7 +66,7 @@ go_test( "//test:go_default_library", "//test/proto2pb:go_default_library", "//test/proto3pb:go_default_library", - "@org_golang_google_genproto//googleapis/api/expr/v1alpha1:go_default_library", + "@org_golang_google_genproto_googleapis_api//expr/v1alpha1:go_default_library", "@org_golang_google_protobuf//proto:go_default_library", "@org_golang_google_protobuf//types/known/anypb:go_default_library", ], diff --git a/cluster-autoscaler/vendor/github.com/google/cel-go/interpreter/attributes.go b/cluster-autoscaler/vendor/github.com/google/cel-go/interpreter/attributes.go index 5c8107ab7cbe..1b19dc2b57b8 100644 --- a/cluster-autoscaler/vendor/github.com/google/cel-go/interpreter/attributes.go +++ b/cluster-autoscaler/vendor/github.com/google/cel-go/interpreter/attributes.go @@ -16,6 +16,7 @@ package interpreter import ( "fmt" + "strings" "github.com/google/cel-go/common/containers" "github.com/google/cel-go/common/types" @@ -231,7 +232,11 @@ type absoluteAttribute struct { // ID implements the Attribute interface method. 
func (a *absoluteAttribute) ID() int64 { - return a.id + qualCount := len(a.qualifiers) + if qualCount == 0 { + return a.id + } + return a.qualifiers[qualCount-1].ID() } // IsOptional returns trivially false for an attribute as the attribute represents a fully @@ -289,7 +294,11 @@ func (a *absoluteAttribute) Resolve(vars Activation) (any, error) { return nil, err } if isOpt { - return types.OptionalOf(a.adapter.NativeToValue(obj)), nil + val := a.adapter.NativeToValue(obj) + if types.IsUnknown(val) { + return val, nil + } + return types.OptionalOf(val), nil } return obj, nil } @@ -301,7 +310,14 @@ func (a *absoluteAttribute) Resolve(vars Activation) (any, error) { } } } - return nil, missingAttribute(a.String()) + var attrNames strings.Builder + for i, nm := range a.namespaceNames { + if i != 0 { + attrNames.WriteString(", ") + } + attrNames.WriteString(nm) + } + return nil, missingAttribute(attrNames.String()) } type conditionalAttribute struct { @@ -315,6 +331,11 @@ type conditionalAttribute struct { // ID is an implementation of the Attribute interface method. func (a *conditionalAttribute) ID() int64 { + // There's a field access after the conditional. + if a.truthy.ID() == a.falsy.ID() { + return a.truthy.ID() + } + // Otherwise return the conditional id as the consistent id being tracked. return a.id } @@ -379,7 +400,7 @@ type maybeAttribute struct { // ID is an implementation of the Attribute interface method. func (a *maybeAttribute) ID() int64 { - return a.id + return a.attrs[0].ID() } // IsOptional returns trivially false for an attribute as the attribute represents a fully @@ -436,7 +457,9 @@ func (a *maybeAttribute) AddQualifier(qual Qualifier) (Attribute, error) { } } // Next, ensure the most specific variable / type reference is searched first. - a.attrs = append([]NamespacedAttribute{a.fac.AbsoluteAttribute(qual.ID(), augmentedNames...)}, a.attrs...) 
+ if len(augmentedNames) != 0 { + a.attrs = append([]NamespacedAttribute{a.fac.AbsoluteAttribute(qual.ID(), augmentedNames...)}, a.attrs...) + } return a, nil } @@ -494,7 +517,11 @@ type relativeAttribute struct { // ID is an implementation of the Attribute interface method. func (a *relativeAttribute) ID() int64 { - return a.id + qualCount := len(a.qualifiers) + if qualCount == 0 { + return a.id + } + return a.qualifiers[qualCount-1].ID() } // IsOptional returns trivially false for an attribute as the attribute represents a fully @@ -535,7 +562,11 @@ func (a *relativeAttribute) Resolve(vars Activation) (any, error) { return nil, err } if isOpt { - return types.OptionalOf(a.adapter.NativeToValue(obj)), nil + val := a.adapter.NativeToValue(obj) + if types.IsUnknown(val) { + return val, nil + } + return types.OptionalOf(val), nil } return obj, nil } @@ -1148,6 +1179,9 @@ func applyQualifiers(vars Activation, obj any, qualifiers []Qualifier) (any, boo return nil, false, err } if !present { + // We return optional none here with a presence of 'false' as the layers + // above will attempt to call types.OptionalOf() on a present value if any + // of the qualifiers is optional. return types.OptionalNone, false, nil } } else { @@ -1200,6 +1234,8 @@ func refQualify(adapter ref.TypeAdapter, obj any, idx ref.Val, presenceTest, pre return nil, false, v case traits.Mapper: val, found := v.Find(idx) + // If the index is of the wrong type for the map, then it is possible + // for the Find call to produce an error. if types.IsError(val) { return nil, false, val.(*types.Err) } @@ -1211,6 +1247,8 @@ func refQualify(adapter ref.TypeAdapter, obj any, idx ref.Val, presenceTest, pre } return nil, false, missingKey(idx) case traits.Lister: + // If the index argument is not a valid numeric type, then it is possible + // for the index operation to produce an error. 
i, err := types.IndexOrError(idx) if err != nil { return nil, false, err @@ -1231,6 +1269,8 @@ func refQualify(adapter ref.TypeAdapter, obj any, idx ref.Val, presenceTest, pre if types.IsError(presence) { return nil, false, presence.(*types.Err) } + // If not found or presence only test, then return. + // Otherwise, if found, obtain the value later on. if presenceOnly || presence == types.False { return nil, presence == types.True, nil } @@ -1288,7 +1328,7 @@ func (e *resolutionError) Error() string { return fmt.Sprintf("index out of bounds: %v", e.missingIndex) } if e.missingAttribute != "" { - return fmt.Sprintf("no such attribute: %s", e.missingAttribute) + return fmt.Sprintf("no such attribute(s): %s", e.missingAttribute) } return "invalid attribute" } @@ -1297,17 +1337,3 @@ func (e *resolutionError) Error() string { func (e *resolutionError) Is(err error) bool { return err.Error() == e.Error() } - -func findMin(x, y int64) int64 { - if x < y { - return x - } - return y -} - -func findMax(x, y int64) int64 { - if x > y { - return x - } - return y -} diff --git a/cluster-autoscaler/vendor/github.com/google/cel-go/interpreter/interpretable.go b/cluster-autoscaler/vendor/github.com/google/cel-go/interpreter/interpretable.go index 840779175e31..32e2bcb7deaa 100644 --- a/cluster-autoscaler/vendor/github.com/google/cel-go/interpreter/interpretable.go +++ b/cluster-autoscaler/vendor/github.com/google/cel-go/interpreter/interpretable.go @@ -15,6 +15,8 @@ package interpreter import ( + "fmt" + "github.com/google/cel-go/common/operators" "github.com/google/cel-go/common/overloads" "github.com/google/cel-go/common/types" @@ -69,6 +71,7 @@ type InterpretableAttribute interface { // to whether the qualifier is present. QualifyIfPresent(vars Activation, obj any, presenceOnly bool) (any, bool, error) + // IsOptional indicates whether the resulting value is an optional type. IsOptional() bool // Resolve returns the value of the Attribute given the current Activation. 
@@ -108,10 +111,8 @@ type InterpretableConstructor interface { // Core Interpretable implementations used during the program planning phase. type evalTestOnly struct { - id int64 - attr InterpretableAttribute - qual Qualifier - field types.String + id int64 + InterpretableAttribute } // ID implements the Interpretable interface method. @@ -121,28 +122,55 @@ func (test *evalTestOnly) ID() int64 { // Eval implements the Interpretable interface method. func (test *evalTestOnly) Eval(ctx Activation) ref.Val { - val, err := test.attr.Resolve(ctx) + val, err := test.Resolve(ctx) + // Return an error if the resolve step fails if err != nil { - return types.NewErr(err.Error()) + return types.WrapErr(err) } - optVal, isOpt := val.(*types.Optional) - if isOpt { - if !optVal.HasValue() { - return types.False - } - val = optVal.GetValue() + if optVal, isOpt := val.(*types.Optional); isOpt { + return types.Bool(optVal.HasValue()) } - out, found, err := test.qual.QualifyIfPresent(ctx, val, true) + return test.Adapter().NativeToValue(val) +} + +// AddQualifier appends a qualifier that will always and only perform a presence test. +func (test *evalTestOnly) AddQualifier(q Qualifier) (Attribute, error) { + cq, ok := q.(ConstantQualifier) + if !ok { + return nil, fmt.Errorf("test only expressions must have constant qualifiers: %v", q) + } + return test.InterpretableAttribute.AddQualifier(&testOnlyQualifier{ConstantQualifier: cq}) +} + +type testOnlyQualifier struct { + ConstantQualifier +} + +// Qualify determines whether the test-only qualifier is present on the input object. 
+func (q *testOnlyQualifier) Qualify(vars Activation, obj any) (any, error) { + out, present, err := q.ConstantQualifier.QualifyIfPresent(vars, obj, true) if err != nil { - return types.NewErr(err.Error()) + return nil, err } if unk, isUnk := out.(types.Unknown); isUnk { - return unk + return unk, nil } - if found { - return types.True + if opt, isOpt := out.(types.Optional); isOpt { + return opt.HasValue(), nil } - return types.False + return present, nil +} + +// QualifyIfPresent returns whether the target field in the test-only expression is present. +func (q *testOnlyQualifier) QualifyIfPresent(vars Activation, obj any, presenceOnly bool) (any, bool, error) { + // Only ever test for presence. + return q.ConstantQualifier.QualifyIfPresent(vars, obj, true) +} + +// QualifierValueEquals determines whether the test-only constant qualifier equals the input value. +func (q *testOnlyQualifier) QualifierValueEquals(value any) bool { + // The input qualifier will always be of type string + return q.ConstantQualifier.Value().Value() == value } // NewConstValue creates a new constant valued Interpretable. @@ -802,6 +830,9 @@ func (e *evalSetMembership) ID() int64 { // Eval implements the Interpretable interface method. func (e *evalSetMembership) Eval(ctx Activation) ref.Val { val := e.arg.Eval(ctx) + if types.IsUnknownOrError(val) { + return val + } if ret, found := e.valueSet[val]; found { return ret } @@ -872,7 +903,7 @@ func (e *evalWatchConstQual) Qualify(vars Activation, obj any) (any, error) { out, err := e.ConstantQualifier.Qualify(vars, obj) var val ref.Val if err != nil { - val = types.NewErr(err.Error()) + val = types.WrapErr(err) } else { val = e.adapter.NativeToValue(out) } @@ -880,6 +911,23 @@ func (e *evalWatchConstQual) Qualify(vars Activation, obj any) (any, error) { return out, err } +// QualifyIfPresent conditionally qualifies the variable and only records a value if one is present. 
+func (e *evalWatchConstQual) QualifyIfPresent(vars Activation, obj any, presenceOnly bool) (any, bool, error) { + out, present, err := e.ConstantQualifier.QualifyIfPresent(vars, obj, presenceOnly) + var val ref.Val + if err != nil { + val = types.WrapErr(err) + } else if out != nil { + val = e.adapter.NativeToValue(out) + } else if presenceOnly { + val = types.Bool(present) + } + if present || presenceOnly { + e.observer(e.ID(), e.ConstantQualifier, val) + } + return out, present, err +} + // QualifierValueEquals tests whether the incoming value is equal to the qualifying constant. func (e *evalWatchConstQual) QualifierValueEquals(value any) bool { qve, ok := e.ConstantQualifier.(qualifierValueEquator) @@ -898,7 +946,7 @@ func (e *evalWatchQual) Qualify(vars Activation, obj any) (any, error) { out, err := e.Qualifier.Qualify(vars, obj) var val ref.Val if err != nil { - val = types.NewErr(err.Error()) + val = types.WrapErr(err) } else { val = e.adapter.NativeToValue(out) } @@ -906,6 +954,23 @@ func (e *evalWatchQual) Qualify(vars Activation, obj any) (any, error) { return out, err } +// QualifyIfPresent conditionally qualifies the variable and only records a value if one is present. +func (e *evalWatchQual) QualifyIfPresent(vars Activation, obj any, presenceOnly bool) (any, bool, error) { + out, present, err := e.Qualifier.QualifyIfPresent(vars, obj, presenceOnly) + var val ref.Val + if err != nil { + val = types.WrapErr(err) + } else if out != nil { + val = e.adapter.NativeToValue(out) + } else if presenceOnly { + val = types.Bool(present) + } + if present || presenceOnly { + e.observer(e.ID(), e.Qualifier, val) + } + return out, present, err +} + // evalWatchConst describes a watcher of an instConst Interpretable. 
type evalWatchConst struct { InterpretableConst @@ -1025,12 +1090,12 @@ func (cond *evalExhaustiveConditional) Eval(ctx Activation) ref.Val { } if cBool { if tErr != nil { - return types.NewErr(tErr.Error()) + return types.WrapErr(tErr) } return cond.adapter.NativeToValue(tVal) } if fErr != nil { - return types.NewErr(fErr.Error()) + return types.WrapErr(fErr) } return cond.adapter.NativeToValue(fVal) } @@ -1042,6 +1107,8 @@ type evalAttr struct { optional bool } +var _ InterpretableAttribute = &evalAttr{} + // ID of the attribute instruction. func (a *evalAttr) ID() int64 { return a.attr.ID() @@ -1068,7 +1135,7 @@ func (a *evalAttr) Adapter() ref.TypeAdapter { func (a *evalAttr) Eval(ctx Activation) ref.Val { v, err := a.attr.Resolve(ctx) if err != nil { - return types.NewErr(err.Error()) + return types.WrapErr(err) } return a.adapter.NativeToValue(v) } diff --git a/cluster-autoscaler/vendor/github.com/google/cel-go/interpreter/planner.go b/cluster-autoscaler/vendor/github.com/google/cel-go/interpreter/planner.go index 9cf8e4e5c02f..0b65d0fa90d2 100644 --- a/cluster-autoscaler/vendor/github.com/google/cel-go/interpreter/planner.go +++ b/cluster-autoscaler/vendor/github.com/google/cel-go/interpreter/planner.go @@ -20,7 +20,6 @@ import ( "github.com/google/cel-go/common/containers" "github.com/google/cel-go/common/operators" - "github.com/google/cel-go/common/types" "github.com/google/cel-go/common/types/ref" "github.com/google/cel-go/interpreter/functions" @@ -217,18 +216,14 @@ func (p *planner) planSelect(expr *exprpb.Expr) (Interpretable, error) { if err != nil { return nil, err } - - // Return the test only eval expression. + // Modify the attribute to be test-only. if sel.GetTestOnly() { - return &evalTestOnly{ - id: expr.GetId(), - field: types.String(sel.GetField()), - attr: attr, - qual: qual, - }, nil + attr = &evalTestOnly{ + id: expr.GetId(), + InterpretableAttribute: attr, + } } - - // Otherwise, append the qualifier on the attribute. 
+    // Append the qualifier on the attribute.
     _, err = attr.AddQualifier(qual)
     return attr, err
 }
diff --git a/cluster-autoscaler/vendor/github.com/google/cel-go/interpreter/prune.go b/cluster-autoscaler/vendor/github.com/google/cel-go/interpreter/prune.go
index b8b015a7a65b..d1b5d6bd6bf4 100644
--- a/cluster-autoscaler/vendor/github.com/google/cel-go/interpreter/prune.go
+++ b/cluster-autoscaler/vendor/github.com/google/cel-go/interpreter/prune.go
@@ -16,6 +16,7 @@ package interpreter
 
 import (
     "github.com/google/cel-go/common/operators"
+    "github.com/google/cel-go/common/overloads"
     "github.com/google/cel-go/common/types"
     "github.com/google/cel-go/common/types/ref"
     "github.com/google/cel-go/common/types/traits"
@@ -67,11 +68,16 @@ type astPruner struct {
 // fold(and thus cache results of) some external calls, then they can prepare
 // the overloads accordingly.
 func PruneAst(expr *exprpb.Expr, macroCalls map[int64]*exprpb.Expr, state EvalState) *exprpb.ParsedExpr {
+    pruneState := NewEvalState()
+    for _, id := range state.IDs() {
+        v, _ := state.Value(id)
+        pruneState.SetValue(id, v)
+    }
     pruner := &astPruner{
         expr:       expr,
         macroCalls: macroCalls,
-        state:      state,
-        nextExprID: 1}
+        state:      pruneState,
+        nextExprID: getMaxID(expr)}
     newExpr, _ := pruner.maybePrune(expr)
     return &exprpb.ParsedExpr{
         Expr: newExpr,
@@ -89,28 +95,50 @@ func (p *astPruner) createLiteral(id int64, val *exprpb.Constant) *exprpb.Expr {
 }
 
 func (p *astPruner) maybeCreateLiteral(id int64, val ref.Val) (*exprpb.Expr, bool) {
-    switch val.Type() {
-    case types.BoolType:
+    switch v := val.(type) {
+    case types.Bool:
+        p.state.SetValue(id, val)
         return p.createLiteral(id,
-            &exprpb.Constant{ConstantKind: &exprpb.Constant_BoolValue{BoolValue: val.Value().(bool)}}), true
-    case types.IntType:
+            &exprpb.Constant{ConstantKind: &exprpb.Constant_BoolValue{BoolValue: bool(v)}}), true
+    case types.Bytes:
+        p.state.SetValue(id, val)
         return p.createLiteral(id,
-            &exprpb.Constant{ConstantKind: &exprpb.Constant_Int64Value{Int64Value: val.Value().(int64)}}), true
-    case types.UintType:
+            &exprpb.Constant{ConstantKind: &exprpb.Constant_BytesValue{BytesValue: []byte(v)}}), true
+    case types.Double:
+        p.state.SetValue(id, val)
         return p.createLiteral(id,
-            &exprpb.Constant{ConstantKind: &exprpb.Constant_Uint64Value{Uint64Value: val.Value().(uint64)}}), true
-    case types.StringType:
+            &exprpb.Constant{ConstantKind: &exprpb.Constant_DoubleValue{DoubleValue: float64(v)}}), true
+    case types.Duration:
+        p.state.SetValue(id, val)
+        durationString := string(v.ConvertToType(types.StringType).(types.String))
+        return &exprpb.Expr{
+            Id: id,
+            ExprKind: &exprpb.Expr_CallExpr{
+                CallExpr: &exprpb.Expr_Call{
+                    Function: overloads.TypeConvertDuration,
+                    Args: []*exprpb.Expr{
+                        p.createLiteral(p.nextID(),
+                            &exprpb.Constant{ConstantKind: &exprpb.Constant_StringValue{StringValue: durationString}}),
+                    },
+                },
+            },
+        }, true
+    case types.Int:
+        p.state.SetValue(id, val)
         return p.createLiteral(id,
-            &exprpb.Constant{ConstantKind: &exprpb.Constant_StringValue{StringValue: val.Value().(string)}}), true
-    case types.DoubleType:
+            &exprpb.Constant{ConstantKind: &exprpb.Constant_Int64Value{Int64Value: int64(v)}}), true
+    case types.Uint:
+        p.state.SetValue(id, val)
         return p.createLiteral(id,
-            &exprpb.Constant{ConstantKind: &exprpb.Constant_DoubleValue{DoubleValue: val.Value().(float64)}}), true
-    case types.BytesType:
+            &exprpb.Constant{ConstantKind: &exprpb.Constant_Uint64Value{Uint64Value: uint64(v)}}), true
+    case types.String:
+        p.state.SetValue(id, val)
         return p.createLiteral(id,
-            &exprpb.Constant{ConstantKind: &exprpb.Constant_BytesValue{BytesValue: val.Value().([]byte)}}), true
-    case types.NullType:
+            &exprpb.Constant{ConstantKind: &exprpb.Constant_StringValue{StringValue: string(v)}}), true
+    case types.Null:
+        p.state.SetValue(id, val)
         return p.createLiteral(id,
-            &exprpb.Constant{ConstantKind: &exprpb.Constant_NullValue{NullValue: val.Value().(structpb.NullValue)}}), true
+            &exprpb.Constant{ConstantKind: &exprpb.Constant_NullValue{NullValue: v.Value().(structpb.NullValue)}}), true
     }
 
     // Attempt to build a list literal.
@@ -128,6 +156,7 @@ func (p *astPruner) maybeCreateLiteral(id int64, val ref.Val) (*exprpb.Expr, boo
         }
         elemExprs[i] = elemExpr
     }
+    p.state.SetValue(id, val)
     return &exprpb.Expr{
         Id: id,
         ExprKind: &exprpb.Expr_ListExpr{
@@ -167,6 +196,7 @@ func (p *astPruner) maybeCreateLiteral(id int64, val ref.Val) (*exprpb.Expr, boo
         entries[i] = entry
         i++
     }
+    p.state.SetValue(id, val)
    return &exprpb.Expr{
         Id: id,
         ExprKind: &exprpb.Expr_StructExpr{
@@ -182,75 +212,144 @@ func (p *astPruner) maybeCreateLiteral(id int64, val ref.Val) (*exprpb.Expr, boo
     return nil, false
 }
 
-func (p *astPruner) maybePruneAndOr(node *exprpb.Expr) (*exprpb.Expr, bool) {
-    if !p.existsWithUnknownValue(node.GetId()) {
+func (p *astPruner) maybePruneOptional(elem *exprpb.Expr) (*exprpb.Expr, bool) {
+    elemVal, found := p.value(elem.GetId())
+    if found && elemVal.Type() == types.OptionalType {
+        opt := elemVal.(*types.Optional)
+        if !opt.HasValue() {
+            return nil, true
+        }
+        if newElem, pruned := p.maybeCreateLiteral(elem.GetId(), opt.GetValue()); pruned {
+            return newElem, true
+        }
+    }
+    return elem, false
+}
+
+func (p *astPruner) maybePruneIn(node *exprpb.Expr) (*exprpb.Expr, bool) {
+    // elem in list
+    call := node.GetCallExpr()
+    val, exists := p.maybeValue(call.GetArgs()[1].GetId())
+    if !exists {
         return nil, false
     }
+    if sz, ok := val.(traits.Sizer); ok && sz.Size() == types.IntZero {
+        return p.maybeCreateLiteral(node.GetId(), types.False)
+    }
+    return nil, false
+}
+
+func (p *astPruner) maybePruneLogicalNot(node *exprpb.Expr) (*exprpb.Expr, bool) {
+    call := node.GetCallExpr()
+    arg := call.GetArgs()[0]
+    val, exists := p.maybeValue(arg.GetId())
+    if !exists {
+        return nil, false
+    }
+    if b, ok := val.(types.Bool); ok {
+        return p.maybeCreateLiteral(node.GetId(), !b)
+    }
+    return nil, false
+}
+func (p *astPruner) maybePruneOr(node *exprpb.Expr) (*exprpb.Expr, bool) {
     call := node.GetCallExpr()
     // We know result is unknown, so we have at least one unknown arg
     // and if one side is a known value, we know we can ignore it.
-    if p.existsWithKnownValue(call.Args[0].GetId()) {
-        return call.Args[1], true
+    if v, exists := p.maybeValue(call.GetArgs()[0].GetId()); exists {
+        if v == types.True {
+            return p.maybeCreateLiteral(node.GetId(), types.True)
+        }
+        return call.GetArgs()[1], true
     }
-    if p.existsWithKnownValue(call.Args[1].GetId()) {
-        return call.Args[0], true
+    if v, exists := p.maybeValue(call.GetArgs()[1].GetId()); exists {
+        if v == types.True {
+            return p.maybeCreateLiteral(node.GetId(), types.True)
+        }
+        return call.GetArgs()[0], true
     }
     return nil, false
 }
 
-func (p *astPruner) maybePruneConditional(node *exprpb.Expr) (*exprpb.Expr, bool) {
-    if !p.existsWithUnknownValue(node.GetId()) {
-        return nil, false
+func (p *astPruner) maybePruneAnd(node *exprpb.Expr) (*exprpb.Expr, bool) {
+    call := node.GetCallExpr()
+    // We know result is unknown, so we have at least one unknown arg
+    // and if one side is a known value, we know we can ignore it.
+    if v, exists := p.maybeValue(call.GetArgs()[0].GetId()); exists {
+        if v == types.False {
+            return p.maybeCreateLiteral(node.GetId(), types.False)
+        }
+        return call.GetArgs()[1], true
     }
+    if v, exists := p.maybeValue(call.GetArgs()[1].GetId()); exists {
+        if v == types.False {
+            return p.maybeCreateLiteral(node.GetId(), types.False)
+        }
+        return call.GetArgs()[0], true
+    }
+    return nil, false
+}
+func (p *astPruner) maybePruneConditional(node *exprpb.Expr) (*exprpb.Expr, bool) {
     call := node.GetCallExpr()
-    condVal, condValueExists := p.value(call.Args[0].GetId())
-    if !condValueExists || types.IsUnknownOrError(condVal) {
+    cond, exists := p.maybeValue(call.GetArgs()[0].GetId())
+    if !exists {
         return nil, false
     }
-
-    if condVal.Value().(bool) {
-        return call.Args[1], true
+    if cond.Value().(bool) {
+        return call.GetArgs()[1], true
     }
-    return call.Args[2], true
+    return call.GetArgs()[2], true
 }
 
 func (p *astPruner) maybePruneFunction(node *exprpb.Expr) (*exprpb.Expr, bool) {
+    if _, exists := p.value(node.GetId()); !exists {
+        return nil, false
+    }
     call := node.GetCallExpr()
-    if call.Function == operators.LogicalOr || call.Function == operators.LogicalAnd {
-        return p.maybePruneAndOr(node)
+    if call.Function == operators.LogicalOr {
+        return p.maybePruneOr(node)
+    }
+    if call.Function == operators.LogicalAnd {
+        return p.maybePruneAnd(node)
     }
     if call.Function == operators.Conditional {
        return p.maybePruneConditional(node)
     }
-
+    if call.Function == operators.In {
+        return p.maybePruneIn(node)
+    }
+    if call.Function == operators.LogicalNot {
+        return p.maybePruneLogicalNot(node)
+    }
     return nil, false
 }
 
 func (p *astPruner) maybePrune(node *exprpb.Expr) (*exprpb.Expr, bool) {
-    out, pruned := p.prune(node)
-    if pruned {
-        delete(p.macroCalls, node.GetId())
-    }
-    return out, pruned
+    return p.prune(node)
 }
 
 func (p *astPruner) prune(node *exprpb.Expr) (*exprpb.Expr, bool) {
     if node == nil {
         return node, false
     }
-    val, valueExists := p.value(node.GetId())
-    if valueExists && !types.IsUnknownOrError(val) {
+    val, valueExists := p.maybeValue(node.GetId())
+    if valueExists {
         if newNode, ok := p.maybeCreateLiteral(node.GetId(), val); ok {
+            delete(p.macroCalls, node.GetId())
             return newNode, true
         }
     }
+    if macro, found := p.macroCalls[node.GetId()]; found {
+        // prune the expression in terms of the macro call instead of the expanded form.
+        if newMacro, pruned := p.prune(macro); pruned {
+            p.macroCalls[node.GetId()] = newMacro
+        }
+    }
     // We have either an unknown/error value, or something we don't want to
     // transform, or expression was not evaluated. If possible, drill down
     // more.
-
     switch node.GetExprKind().(type) {
     case *exprpb.Expr_SelectExpr:
         if operand, pruned := p.maybePrune(node.GetSelectExpr().GetOperand()); pruned {
@@ -266,10 +365,6 @@ func (p *astPruner) prune(node *exprpb.Expr) (*exprpb.Expr, bool) {
             }, true
         }
     case *exprpb.Expr_CallExpr:
-        if newExpr, pruned := p.maybePruneFunction(node); pruned {
-            newExpr, _ = p.maybePrune(newExpr)
-            return newExpr, true
-        }
         var prunedCall bool
         call := node.GetCallExpr()
         args := call.GetArgs()
@@ -290,31 +385,66 @@
             prunedCall = true
             newCall.Target = newTarget
         }
+        newNode := &exprpb.Expr{
+            Id: node.GetId(),
+            ExprKind: &exprpb.Expr_CallExpr{
+                CallExpr: newCall,
+            },
+        }
+        if newExpr, pruned := p.maybePruneFunction(newNode); pruned {
+            newExpr, _ = p.maybePrune(newExpr)
+            return newExpr, true
+        }
         if prunedCall {
-            return &exprpb.Expr{
-                Id: node.GetId(),
-                ExprKind: &exprpb.Expr_CallExpr{
-                    CallExpr: newCall,
-                },
-            }, true
+            return newNode, true
         }
     case *exprpb.Expr_ListExpr:
         elems := node.GetListExpr().GetElements()
-        newElems := make([]*exprpb.Expr, len(elems))
+        optIndices := node.GetListExpr().GetOptionalIndices()
+        optIndexMap := map[int32]bool{}
+        for _, i := range optIndices {
+            optIndexMap[i] = true
+        }
+        newOptIndexMap := make(map[int32]bool, len(optIndexMap))
+        newElems := make([]*exprpb.Expr, 0, len(elems))
         var prunedList bool
+
+        prunedIdx := 0
         for i, elem := range elems {
-            newElems[i] = elem
+            _, isOpt := optIndexMap[int32(i)]
+            if isOpt {
+                newElem, pruned := p.maybePruneOptional(elem)
+                if pruned {
+                    prunedList = true
+                    if newElem != nil {
+                        newElems = append(newElems, newElem)
+                        prunedIdx++
+                    }
+                    continue
+                }
+                newOptIndexMap[int32(prunedIdx)] = true
+            }
             if newElem, prunedElem := p.maybePrune(elem); prunedElem {
-                newElems[i] = newElem
+                newElems = append(newElems, newElem)
                 prunedList = true
+            } else {
+                newElems = append(newElems, elem)
             }
+            prunedIdx++
+        }
+        optIndices = make([]int32, len(newOptIndexMap))
+        idx := 0
+        for i := range newOptIndexMap {
+            optIndices[idx] = i
+            idx++
         }
         if prunedList {
             return &exprpb.Expr{
                 Id: node.GetId(),
                 ExprKind: &exprpb.Expr_ListExpr{
                     ListExpr: &exprpb.Expr_CreateList{
-                        Elements: newElems,
+                        Elements:        newElems,
+                        OptionalIndices: optIndices,
                     },
                 },
             }, true
@@ -344,6 +474,7 @@ func (p *astPruner) prune(node *exprpb.Expr) (*exprpb.Expr, bool) {
                     MapKey: newKey,
                 }
             }
+            newEntry.OptionalEntry = entry.GetOptionalEntry()
             newEntries[i] = newEntry
         }
         if prunedStruct {
@@ -357,27 +488,6 @@ func (p *astPruner) prune(node *exprpb.Expr) (*exprpb.Expr, bool) {
                 },
             }, true
         }
-    case *exprpb.Expr_ComprehensionExpr:
-        compre := node.GetComprehensionExpr()
-        // Only the range of the comprehension is pruned since the state tracking only records
-        // the last iteration of the comprehension and not each step in the evaluation which
-        // means that the any residuals computed in between might be inaccurate.
-        if newRange, pruned := p.maybePrune(compre.GetIterRange()); pruned {
-            return &exprpb.Expr{
-                Id: node.GetId(),
-                ExprKind: &exprpb.Expr_ComprehensionExpr{
-                    ComprehensionExpr: &exprpb.Expr_Comprehension{
-                        IterVar:       compre.GetIterVar(),
-                        IterRange:     newRange,
-                        AccuVar:       compre.GetAccuVar(),
-                        AccuInit:      compre.GetAccuInit(),
-                        LoopCondition: compre.GetLoopCondition(),
-                        LoopStep:      compre.GetLoopStep(),
-                        Result:        compre.GetResult(),
-                    },
-                },
-            }, true
-        }
     }
     return node, false
 }
@@ -387,24 +497,82 @@ func (p *astPruner) value(id int64) (ref.Val, bool) {
     return val, (found && val != nil)
 }
 
-func (p *astPruner) existsWithUnknownValue(id int64) bool {
-    val, valueExists := p.value(id)
-    return valueExists && types.IsUnknown(val)
+func (p *astPruner) maybeValue(id int64) (ref.Val, bool) {
+    val, found := p.value(id)
+    if !found || types.IsUnknownOrError(val) {
+        return nil, false
+    }
+    return val, true
 }
 
-func (p *astPruner) existsWithKnownValue(id int64) bool {
-    val, valueExists := p.value(id)
-    return valueExists && !types.IsUnknown(val)
+func (p *astPruner) nextID() int64 {
+    next := p.nextExprID
+    p.nextExprID++
+    return next
 }
 
-func (p *astPruner) nextID() int64 {
-    for {
-        _, found := p.state.Value(p.nextExprID)
-        if !found {
-            next := p.nextExprID
-            p.nextExprID++
-            return next
+type astVisitor struct {
+    // visitExpr is called on every expr node, including those within a map/struct entry.
+    visitExpr func(expr *exprpb.Expr)
+    // visitEntry is called before entering the key, value of a map/struct entry.
+    visitEntry func(entry *exprpb.Expr_CreateStruct_Entry)
+}
+
+func getMaxID(expr *exprpb.Expr) int64 {
+    maxID := int64(1)
+    visit(expr, maxIDVisitor(&maxID))
+    return maxID
+}
+
+func maxIDVisitor(maxID *int64) astVisitor {
+    return astVisitor{
+        visitExpr: func(e *exprpb.Expr) {
+            if e.GetId() >= *maxID {
+                *maxID = e.GetId() + 1
+            }
+        },
+        visitEntry: func(e *exprpb.Expr_CreateStruct_Entry) {
+            if e.GetId() >= *maxID {
+                *maxID = e.GetId() + 1
+            }
+        },
+    }
+}
+
+func visit(expr *exprpb.Expr, visitor astVisitor) {
+    exprs := []*exprpb.Expr{expr}
+    for len(exprs) != 0 {
+        e := exprs[0]
+        visitor.visitExpr(e)
+        exprs = exprs[1:]
+        switch e.GetExprKind().(type) {
+        case *exprpb.Expr_SelectExpr:
+            exprs = append(exprs, e.GetSelectExpr().GetOperand())
+        case *exprpb.Expr_CallExpr:
+            call := e.GetCallExpr()
+            if call.GetTarget() != nil {
+                exprs = append(exprs, call.GetTarget())
+            }
+            exprs = append(exprs, call.GetArgs()...)
+        case *exprpb.Expr_ComprehensionExpr:
+            compre := e.GetComprehensionExpr()
+            exprs = append(exprs,
+                compre.GetIterRange(),
+                compre.GetAccuInit(),
+                compre.GetLoopCondition(),
+                compre.GetLoopStep(),
+                compre.GetResult())
+        case *exprpb.Expr_ListExpr:
+            list := e.GetListExpr()
+            exprs = append(exprs, list.GetElements()...)
+        case *exprpb.Expr_StructExpr:
+            for _, entry := range e.GetStructExpr().GetEntries() {
+                visitor.visitEntry(entry)
+                if entry.GetMapKey() != nil {
+                    exprs = append(exprs, entry.GetMapKey())
+                }
+                exprs = append(exprs, entry.GetValue())
+            }
         }
-        p.nextExprID++
     }
 }
diff --git a/cluster-autoscaler/vendor/github.com/google/cel-go/interpreter/runtimecost.go b/cluster-autoscaler/vendor/github.com/google/cel-go/interpreter/runtimecost.go
index e7daf011fc71..80e7f6134496 100644
--- a/cluster-autoscaler/vendor/github.com/google/cel-go/interpreter/runtimecost.go
+++ b/cluster-autoscaler/vendor/github.com/google/cel-go/interpreter/runtimecost.go
@@ -53,6 +53,11 @@ func CostObserver(tracker *CostTracker) EvalObserver {
             tracker.stack.drop(t.Attr().ID())
             tracker.cost += common.SelectAndIdentCost
         }
+        if !tracker.presenceTestHasCost {
+            if _, isTestOnly := programStep.(*evalTestOnly); isTestOnly {
+                tracker.cost -= common.SelectAndIdentCost
+            }
+        }
     case *evalExhaustiveConditional:
         // Ternary has no direct cost. All cost is from the conditional and the true/false branch expressions.
         tracker.stack.drop(t.attr.falsy.ID(), t.attr.truthy.ID(), t.attr.expr.ID())
@@ -69,8 +74,6 @@ func CostObserver(tracker *CostTracker) EvalObserver {
         tracker.stack.drop(t.rhs.ID(), t.lhs.ID())
     case *evalFold:
         tracker.stack.drop(t.iterRange.ID())
-    case *evalTestOnly:
-        tracker.cost += common.SelectAndIdentCost
     case Qualifier:
         tracker.cost++
     case InterpretableCall:
@@ -97,21 +100,58 @@
     return observer
 }
 
-// CostTracker represents the information needed for tacking runtime cost
+// CostTrackerOption configures the behavior of CostTracker objects.
+type CostTrackerOption func(*CostTracker) error
+
+// CostTrackerLimit sets the runtime limit on the evaluation cost during execution and will terminate the expression
+// evaluation if the limit is exceeded.
+func CostTrackerLimit(limit uint64) CostTrackerOption {
+    return func(tracker *CostTracker) error {
+        tracker.Limit = &limit
+        return nil
+    }
+}
+
+// PresenceTestHasCost determines whether presence testing has a cost of one or zero.
+// Defaults to presence test has a cost of one.
+func PresenceTestHasCost(hasCost bool) CostTrackerOption {
+    return func(tracker *CostTracker) error {
+        tracker.presenceTestHasCost = hasCost
+        return nil
+    }
+}
+
+// NewCostTracker creates a new CostTracker with a given estimator and a set of functional CostTrackerOption values.
+func NewCostTracker(estimator ActualCostEstimator, opts ...CostTrackerOption) (*CostTracker, error) {
+    tracker := &CostTracker{
+        Estimator:           estimator,
+        presenceTestHasCost: true,
+    }
+    for _, opt := range opts {
+        err := opt(tracker)
+        if err != nil {
+            return nil, err
+        }
+    }
+    return tracker, nil
+}
+
+// CostTracker represents the information needed for tracking runtime cost.
 type CostTracker struct {
-    Estimator ActualCostEstimator
-    Limit     *uint64
+    Estimator           ActualCostEstimator
+    Limit               *uint64
+    presenceTestHasCost bool
 
     cost  uint64
     stack refValStack
 }
 
 // ActualCost returns the runtime cost
-func (c CostTracker) ActualCost() uint64 {
+func (c *CostTracker) ActualCost() uint64 {
     return c.cost
 }
 
-func (c CostTracker) costCall(call InterpretableCall, argValues []ref.Val, result ref.Val) uint64 {
+func (c *CostTracker) costCall(call InterpretableCall, argValues []ref.Val, result ref.Val) uint64 {
     var cost uint64
     if c.Estimator != nil {
         callCost := c.Estimator.CallCost(call.Function(), call.OverloadID(), argValues, result)
@@ -181,7 +221,7 @@ func (c CostTracker) costCall(call InterpretableCall, argValues []ref.Val, resul
 }
 
 // actualSize returns the size of value
-func (c CostTracker) actualSize(value ref.Val) uint64 {
+func (c *CostTracker) actualSize(value ref.Val) uint64 {
     if sz, ok := value.(traits.Sizer); ok {
         return uint64(sz.Size().(types.Int))
     }
diff --git a/cluster-autoscaler/vendor/github.com/google/cel-go/parser/BUILD.bazel b/cluster-autoscaler/vendor/github.com/google/cel-go/parser/BUILD.bazel
index b5c15fa570da..67ecc9554397 100644
--- a/cluster-autoscaler/vendor/github.com/google/cel-go/parser/BUILD.bazel
+++ b/cluster-autoscaler/vendor/github.com/google/cel-go/parser/BUILD.bazel
@@ -23,8 +23,8 @@ go_library(
         "//common/operators:go_default_library",
         "//common/runes:go_default_library",
         "//parser/gen:go_default_library",
-        "@com_github_antlr_antlr4_runtime_go_antlr//:go_default_library",
-        "@org_golang_google_genproto//googleapis/api/expr/v1alpha1:go_default_library",
+        "@com_github_antlr_antlr4_runtime_go_antlr_v4//:go_default_library",
+        "@org_golang_google_genproto_googleapis_api//expr/v1alpha1:go_default_library",
         "@org_golang_google_protobuf//proto:go_default_library",
         "@org_golang_google_protobuf//types/known/structpb:go_default_library",
     ],
@@ -46,7 +46,7 @@ go_test(
         "//common/debug:go_default_library",
         "//parser/gen:go_default_library",
         "//test:go_default_library",
-        "@com_github_antlr_antlr4_runtime_go_antlr//:go_default_library",
+        "@com_github_antlr_antlr4_runtime_go_antlr_v4//:go_default_library",
         "@org_golang_google_protobuf//proto:go_default_library",
         "@org_golang_google_protobuf//testing/protocmp:go_default_library",
     ],
diff --git a/cluster-autoscaler/vendor/github.com/google/cel-go/parser/gen/BUILD.bazel b/cluster-autoscaler/vendor/github.com/google/cel-go/parser/gen/BUILD.bazel
index 22711310ce0f..654d1de7aad3 100644
--- a/cluster-autoscaler/vendor/github.com/google/cel-go/parser/gen/BUILD.bazel
+++ b/cluster-autoscaler/vendor/github.com/google/cel-go/parser/gen/BUILD.bazel
@@ -21,6 +21,6 @@ go_library(
     ],
     importpath = "github.com/google/cel-go/parser/gen",
     deps = [
-        "@com_github_antlr_antlr4_runtime_go_antlr//:go_default_library",
+        "@com_github_antlr_antlr4_runtime_go_antlr_v4//:go_default_library",
     ],
 )
diff --git a/cluster-autoscaler/vendor/github.com/google/cel-go/parser/helper.go b/cluster-autoscaler/vendor/github.com/google/cel-go/parser/helper.go
index 52d7fd8bbeb4..8f8f478ed12e 100644
--- a/cluster-autoscaler/vendor/github.com/google/cel-go/parser/helper.go
+++ b/cluster-autoscaler/vendor/github.com/google/cel-go/parser/helper.go
@@ -259,7 +259,8 @@ func (p *parserHelper) buildMacroCallArg(expr *exprpb.Expr) *exprpb.Expr {
             Id: expr.GetId(),
             ExprKind: &exprpb.Expr_ListExpr{
                 ListExpr: &exprpb.Expr_CreateList{
-                    Elements: macroListArgs,
+                    Elements:        macroListArgs,
+                    OptionalIndices: listExpr.GetOptionalIndices(),
                 },
             },
         }
diff --git a/cluster-autoscaler/vendor/github.com/google/cel-go/parser/options.go b/cluster-autoscaler/vendor/github.com/google/cel-go/parser/options.go
index 8bfdae55b919..674c697c5cd7 100644
--- a/cluster-autoscaler/vendor/github.com/google/cel-go/parser/options.go
+++ b/cluster-autoscaler/vendor/github.com/google/cel-go/parser/options.go
@@ -18,6 +18,7 @@ import "fmt"
 
 type options struct {
     maxRecursionDepth                int
+    errorReportingLimit              int
     errorRecoveryTokenLookaheadLimit int
     errorRecoveryLimit               int
     expressionSizeCodePointLimit     int
@@ -46,7 +47,7 @@ func MaxRecursionDepth(limit int) Option {
 // successfully resume. In some pathological cases, the parser can look through quite a large set of input which
 // in turn generates a lot of back-tracking and performance degradation.
 //
-// The limit must be > 1, and is recommended to be less than the default of 256.
+// The limit must be >= 1, and is recommended to be less than the default of 256.
 func ErrorRecoveryLookaheadTokenLimit(limit int) Option {
     return func(opts *options) error {
         if limit < 1 {
@@ -68,6 +69,19 @@
     }
 }
 
+// ErrorReportingLimit limits the number of syntax error reports before terminating parsing.
+//
+// The limit must be at least 1. If unset, the limit will be 100.
+func ErrorReportingLimit(limit int) Option {
+    return func(opts *options) error {
+        if limit < 1 {
+            return fmt.Errorf("error reporting limit must be at least 1: %d", limit)
+        }
+        opts.errorReportingLimit = limit
+        return nil
+    }
+}
+
 // ExpressionSizeCodePointLimit is an option which limits the maximum code point count of an
 // expression.
 func ExpressionSizeCodePointLimit(expressionSizeCodePointLimit int) Option {
diff --git a/cluster-autoscaler/vendor/github.com/google/cel-go/parser/parser.go b/cluster-autoscaler/vendor/github.com/google/cel-go/parser/parser.go
index da4481776f18..e6f70f9060e0 100644
--- a/cluster-autoscaler/vendor/github.com/google/cel-go/parser/parser.go
+++ b/cluster-autoscaler/vendor/github.com/google/cel-go/parser/parser.go
@@ -47,6 +47,9 @@ func NewParser(opts ...Option) (*Parser, error) {
             return nil, err
         }
     }
+    if p.errorReportingLimit == 0 {
+        p.errorReportingLimit = 100
+    }
     if p.maxRecursionDepth == 0 {
         p.maxRecursionDepth = 250
     }
@@ -91,6 +94,7 @@ func (p *Parser) Parse(source common.Source) (*exprpb.ParsedExpr, *common.Errors
         helper:                           newParserHelper(source),
         macros:                           p.macros,
         maxRecursionDepth:                p.maxRecursionDepth,
+        errorReportingLimit:              p.errorReportingLimit,
         errorRecoveryLimit:               p.errorRecoveryLimit,
         errorRecoveryLookaheadTokenLimit: p.errorRecoveryTokenLookaheadLimit,
         populateMacroCalls:               p.populateMacroCalls,
@@ -200,6 +204,16 @@ func (rl *recursionListener) ExitEveryRule(ctx antlr.ParserRuleContext) {
 
 var _ antlr.ParseTreeListener = &recursionListener{}
 
+type tooManyErrors struct {
+    errorReportingLimit int
+}
+
+func (t *tooManyErrors) Error() string {
+    return fmt.Sprintf("More than %d syntax errors", t.errorReportingLimit)
+}
+
+var _ error = &tooManyErrors{}
+
 type recoveryLimitError struct {
     message string
 }
@@ -274,7 +288,9 @@ type parser struct {
     helper                           *parserHelper
     macros                           map[string]Macro
     recursionDepth                   int
+    errorReports                     int
     maxRecursionDepth                int
+    errorReportingLimit              int
     errorRecoveryLimit               int
     errorRecoveryLookaheadTokenLimit int
     populateMacroCalls               bool
@@ -306,14 +322,14 @@ func (p *parser) parse(expr runes.Buffer, desc string) *exprpb.Expr {
     lexer := lexerPool.Get().(*gen.CELLexer)
     prsr := parserPool.Get().(*gen.CELParser)
 
-    // Unfortunately ANTLR Go runtime is missing (*antlr.BaseParser).RemoveParseListeners, so this is
-    // good enough until that is exported.
     prsrListener := &recursionListener{
         maxDepth:      p.maxRecursionDepth,
         ruleTypeDepth: map[int]*int{},
     }
 
     defer func() {
+        // Unfortunately ANTLR Go runtime is missing (*antlr.BaseParser).RemoveParseListeners,
+        // so this is good enough until that is exported.
         // Reset the lexer and parser before putting them back in the pool.
         lexer.RemoveErrorListeners()
         prsr.RemoveParseListener(prsrListener)
@@ -344,6 +360,8 @@
             p.errors.ReportError(common.NoLocation, err.Error())
         case *recursionError:
             p.errors.ReportError(common.NoLocation, err.Error())
+        case *tooManyErrors:
+            // do nothing
         case *recoveryLimitError:
             // do nothing, listeners already notified and error reported.
         default:
@@ -866,7 +884,6 @@ func (p *parser) reportError(ctx any, format string, args ...any) *exprpb.Expr {
 
 // ANTLR Parse listener implementations
 func (p *parser) SyntaxError(recognizer antlr.Recognizer, offendingSymbol any, line, column int, msg string, e antlr.RecognitionException) {
-    // TODO: Snippet
     l := p.helper.source.NewLocation(line, column)
     // Hack to keep existing error messages consistent with previous versions of CEL when a reserved word
     // is used as an identifier. This behavior needs to be overhauled to provide consistent, normalized error
@@ -874,7 +891,16 @@ func (p *parser) SyntaxError(recognizer antlr.Recognizer, offendingSymbol any, l
     if strings.Contains(msg, "no viable alternative") {
         msg = reservedIdentifier.ReplaceAllString(msg, mismatchedReservedIdentifier)
     }
-    p.errors.syntaxError(l, msg)
+    // Ensure that no more than 100 syntax errors are reported as this will halt attempts to recover from a
+    // seriously broken expression.
+    if p.errorReports < p.errorReportingLimit {
+        p.errorReports++
+        p.errors.syntaxError(l, msg)
+    } else {
+        tme := &tooManyErrors{errorReportingLimit: p.errorReportingLimit}
+        p.errors.syntaxError(l, tme.Error())
+        panic(tme)
+    }
 }
 
 func (p *parser) ReportAmbiguity(recognizer antlr.Parser, dfa *antlr.DFA, startIndex, stopIndex int, exact bool, ambigAlts *antlr.BitSet, configs antlr.ATNConfigSet) {
diff --git a/cluster-autoscaler/vendor/github.com/google/cel-go/parser/unparser.go b/cluster-autoscaler/vendor/github.com/google/cel-go/parser/unparser.go
index 5ff979aa6009..c3c40a0dd36a 100644
--- a/cluster-autoscaler/vendor/github.com/google/cel-go/parser/unparser.go
+++ b/cluster-autoscaler/vendor/github.com/google/cel-go/parser/unparser.go
@@ -276,6 +276,9 @@ func (un *unparser) visitConst(expr *exprpb.Expr) error {
         // represent the float using the minimum required digits
         d := strconv.FormatFloat(c.GetDoubleValue(), 'g', -1, 64)
         un.str.WriteString(d)
+        if !strings.Contains(d, ".") {
+            un.str.WriteString(".0")
+        }
     case *exprpb.Constant_Int64Value:
         i := strconv.FormatInt(c.GetInt64Value(), 10)
         un.str.WriteString(i)
diff --git a/cluster-autoscaler/vendor/github.com/google/gnostic/LICENSE b/cluster-autoscaler/vendor/github.com/google/gnostic-models/LICENSE
similarity index 100%
rename from cluster-autoscaler/vendor/github.com/google/gnostic/LICENSE
rename to cluster-autoscaler/vendor/github.com/google/gnostic-models/LICENSE
diff --git a/cluster-autoscaler/vendor/github.com/google/gnostic/compiler/README.md b/cluster-autoscaler/vendor/github.com/google/gnostic-models/compiler/README.md
similarity index 100%
rename from cluster-autoscaler/vendor/github.com/google/gnostic/compiler/README.md
rename to cluster-autoscaler/vendor/github.com/google/gnostic-models/compiler/README.md
diff --git a/cluster-autoscaler/vendor/github.com/google/gnostic/compiler/context.go b/cluster-autoscaler/vendor/github.com/google/gnostic-models/compiler/context.go
similarity index 100%
rename from cluster-autoscaler/vendor/github.com/google/gnostic/compiler/context.go
rename to cluster-autoscaler/vendor/github.com/google/gnostic-models/compiler/context.go
diff --git a/cluster-autoscaler/vendor/github.com/google/gnostic/compiler/error.go b/cluster-autoscaler/vendor/github.com/google/gnostic-models/compiler/error.go
similarity index 100%
rename from cluster-autoscaler/vendor/github.com/google/gnostic/compiler/error.go
rename to cluster-autoscaler/vendor/github.com/google/gnostic-models/compiler/error.go
diff --git a/cluster-autoscaler/vendor/github.com/google/gnostic/compiler/extensions.go b/cluster-autoscaler/vendor/github.com/google/gnostic-models/compiler/extensions.go
similarity index 97%
rename from cluster-autoscaler/vendor/github.com/google/gnostic/compiler/extensions.go
rename to cluster-autoscaler/vendor/github.com/google/gnostic-models/compiler/extensions.go
index 5b5a916d2ece..250c81e8c854 100644
--- a/cluster-autoscaler/vendor/github.com/google/gnostic/compiler/extensions.go
+++ b/cluster-autoscaler/vendor/github.com/google/gnostic-models/compiler/extensions.go
@@ -24,7 +24,7 @@ import (
     "github.com/golang/protobuf/ptypes/any"
     yaml "gopkg.in/yaml.v3"
 
-    extensions "github.com/google/gnostic/extensions"
+    extensions "github.com/google/gnostic-models/extensions"
 )
 
 // ExtensionHandler describes a binary that is called by the compiler to handle specification extensions.
diff --git a/cluster-autoscaler/vendor/github.com/google/gnostic/compiler/helpers.go b/cluster-autoscaler/vendor/github.com/google/gnostic-models/compiler/helpers.go
similarity index 99%
rename from cluster-autoscaler/vendor/github.com/google/gnostic/compiler/helpers.go
rename to cluster-autoscaler/vendor/github.com/google/gnostic-models/compiler/helpers.go
index 97ffaa5131ae..975d65e8f8de 100644
--- a/cluster-autoscaler/vendor/github.com/google/gnostic/compiler/helpers.go
+++ b/cluster-autoscaler/vendor/github.com/google/gnostic-models/compiler/helpers.go
@@ -22,7 +22,7 @@ import (
 
     "gopkg.in/yaml.v3"
 
-    "github.com/google/gnostic/jsonschema"
+    "github.com/google/gnostic-models/jsonschema"
 )
 
 // compiler helper functions, usually called from generated code
diff --git a/cluster-autoscaler/vendor/github.com/google/gnostic/compiler/main.go b/cluster-autoscaler/vendor/github.com/google/gnostic-models/compiler/main.go
similarity index 100%
rename from cluster-autoscaler/vendor/github.com/google/gnostic/compiler/main.go
rename to cluster-autoscaler/vendor/github.com/google/gnostic-models/compiler/main.go
diff --git a/cluster-autoscaler/vendor/github.com/google/gnostic/compiler/reader.go b/cluster-autoscaler/vendor/github.com/google/gnostic-models/compiler/reader.go
similarity index 100%
rename from cluster-autoscaler/vendor/github.com/google/gnostic/compiler/reader.go
rename to cluster-autoscaler/vendor/github.com/google/gnostic-models/compiler/reader.go
diff --git a/cluster-autoscaler/vendor/github.com/google/gnostic/extensions/README.md b/cluster-autoscaler/vendor/github.com/google/gnostic-models/extensions/README.md
similarity index 100%
rename from cluster-autoscaler/vendor/github.com/google/gnostic/extensions/README.md
rename to cluster-autoscaler/vendor/github.com/google/gnostic-models/extensions/README.md
diff --git a/cluster-autoscaler/vendor/github.com/google/gnostic/extensions/extension.pb.go b/cluster-autoscaler/vendor/github.com/google/gnostic-models/extensions/extension.pb.go
similarity index 99%
rename from cluster-autoscaler/vendor/github.com/google/gnostic/extensions/extension.pb.go
rename to cluster-autoscaler/vendor/github.com/google/gnostic-models/extensions/extension.pb.go
index a6a4ccca6cf6..a71df8abecc6 100644
--- a/cluster-autoscaler/vendor/github.com/google/gnostic/extensions/extension.pb.go
+++ b/cluster-autoscaler/vendor/github.com/google/gnostic-models/extensions/extension.pb.go
@@ -14,8 +14,8 @@
 // Code generated by protoc-gen-go. DO NOT EDIT.
 // versions:
-// protoc-gen-go v1.26.0
-// protoc v3.18.1
+// protoc-gen-go v1.27.1
+// protoc v3.19.3
 // source: extensions/extension.proto
 
 package gnostic_extension_v1
diff --git a/cluster-autoscaler/vendor/github.com/google/gnostic/extensions/extension.proto b/cluster-autoscaler/vendor/github.com/google/gnostic-models/extensions/extension.proto
similarity index 100%
rename from cluster-autoscaler/vendor/github.com/google/gnostic/extensions/extension.proto
rename to cluster-autoscaler/vendor/github.com/google/gnostic-models/extensions/extension.proto
diff --git a/cluster-autoscaler/vendor/github.com/google/gnostic/extensions/extensions.go b/cluster-autoscaler/vendor/github.com/google/gnostic-models/extensions/extensions.go
similarity index 100%
rename from cluster-autoscaler/vendor/github.com/google/gnostic/extensions/extensions.go
rename to cluster-autoscaler/vendor/github.com/google/gnostic-models/extensions/extensions.go
diff --git a/cluster-autoscaler/vendor/github.com/google/gnostic/jsonschema/README.md b/cluster-autoscaler/vendor/github.com/google/gnostic-models/jsonschema/README.md
similarity index 100%
rename from cluster-autoscaler/vendor/github.com/google/gnostic/jsonschema/README.md
rename to cluster-autoscaler/vendor/github.com/google/gnostic-models/jsonschema/README.md
diff --git a/cluster-autoscaler/vendor/github.com/google/gnostic/jsonschema/base.go b/cluster-autoscaler/vendor/github.com/google/gnostic-models/jsonschema/base.go
similarity index 90%
rename from cluster-autoscaler/vendor/github.com/google/gnostic/jsonschema/base.go
rename to cluster-autoscaler/vendor/github.com/google/gnostic-models/jsonschema/base.go
index 0af8b148b9c0..5fcc4885a03c 100644
--- a/cluster-autoscaler/vendor/github.com/google/gnostic/jsonschema/base.go
+++ b/cluster-autoscaler/vendor/github.com/google/gnostic-models/jsonschema/base.go
@@ -1,3 +1,16 @@
+// Copyright 2017 Google LLC. All Rights Reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//    http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
 // THIS FILE IS AUTOMATICALLY GENERATED.
@@ -81,4 +94,4 @@ YXkiIH0sCiAgICAgICAgImFueU9mIjogeyAiJHJlZiI6ICIjL2RlZmluaXRpb25zL3NjaGVtYUFycmF5 IiB9LAogICAgICAgICJvbmVPZiI6IHsgIiRyZWYiOiAiIy9kZWZpbml0aW9ucy9zY2hlbWFBcnJheSIg fSwKICAgICAgICAibm90IjogeyAiJHJlZiI6ICIjIiB9CiAgICB9LAogICAgImRlcGVuZGVuY2llcyI6 IHsKICAgICAgICAiZXhjbHVzaXZlTWF4aW11bSI6IFsgIm1heGltdW0iIF0sCiAgICAgICAgImV4Y2x1 -c2l2ZU1pbmltdW0iOiBbICJtaW5pbXVtIiBdCiAgICB9LAogICAgImRlZmF1bHQiOiB7fQp9Cg==`)} \ No newline at end of file +c2l2ZU1pbmltdW0iOiBbICJtaW5pbXVtIiBdCiAgICB9LAogICAgImRlZmF1bHQiOiB7fQp9Cg==`)} diff --git a/cluster-autoscaler/vendor/github.com/google/gnostic/jsonschema/display.go b/cluster-autoscaler/vendor/github.com/google/gnostic-models/jsonschema/display.go similarity index 92% rename from cluster-autoscaler/vendor/github.com/google/gnostic/jsonschema/display.go rename to cluster-autoscaler/vendor/github.com/google/gnostic-models/jsonschema/display.go index 8677ed49a0ea..028a760a91b5 100644 --- a/cluster-autoscaler/vendor/github.com/google/gnostic/jsonschema/display.go +++ b/cluster-autoscaler/vendor/github.com/google/gnostic-models/jsonschema/display.go @@ -46,23 +46,8 @@ func (schema *Schema) describeSchema(indent string) string { if schema.Schema != nil { result += indent + "$schema: " + *(schema.Schema) + "\n" } - if schema.ReadOnly != nil && *schema.ReadOnly { - result += indent + fmt.Sprintf("readOnly: %+v\n", *(schema.ReadOnly)) - } - if schema.WriteOnly != nil && *schema.WriteOnly { - result += indent + fmt.Sprintf("writeOnly: %+v\n", *(schema.WriteOnly)) - } if schema.ID != nil { - switch strings.TrimSuffix(*schema.Schema, "#") { - case "http://json-schema.org/draft-04/schema#": - fallthrough - case "#": - fallthrough - case "": - result += indent + "id: " + *(schema.ID) + "\n" - default: - result += indent + "$id: " + *(schema.ID) + "\n" - } + result += indent + "id: " + *(schema.ID) + "\n" } if schema.MultipleOf != nil { result += indent + fmt.Sprintf("multipleOf: %+v\n", *(schema.MultipleOf)) diff --git 
a/cluster-autoscaler/vendor/github.com/google/gnostic/jsonschema/models.go b/cluster-autoscaler/vendor/github.com/google/gnostic-models/jsonschema/models.go similarity index 97% rename from cluster-autoscaler/vendor/github.com/google/gnostic/jsonschema/models.go rename to cluster-autoscaler/vendor/github.com/google/gnostic-models/jsonschema/models.go index 0d877249abb1..4781bdc5f500 100644 --- a/cluster-autoscaler/vendor/github.com/google/gnostic/jsonschema/models.go +++ b/cluster-autoscaler/vendor/github.com/google/gnostic-models/jsonschema/models.go @@ -23,11 +23,9 @@ import "gopkg.in/yaml.v3" // All fields are pointers and are nil if the associated values // are not specified. type Schema struct { - Schema *string // $schema - ID *string // id keyword used for $ref resolution scope - Ref *string // $ref, i.e. JSON Pointers - ReadOnly *bool - WriteOnly *bool + Schema *string // $schema + ID *string // id keyword used for $ref resolution scope + Ref *string // $ref, i.e. JSON Pointers // http://json-schema.org/latest/json-schema-validation.html // 5.1. 
Validation keywords for numeric instances (number and integer) diff --git a/cluster-autoscaler/vendor/github.com/google/gnostic/jsonschema/operations.go b/cluster-autoscaler/vendor/github.com/google/gnostic-models/jsonschema/operations.go similarity index 100% rename from cluster-autoscaler/vendor/github.com/google/gnostic/jsonschema/operations.go rename to cluster-autoscaler/vendor/github.com/google/gnostic-models/jsonschema/operations.go diff --git a/cluster-autoscaler/vendor/github.com/google/gnostic/jsonschema/reader.go b/cluster-autoscaler/vendor/github.com/google/gnostic-models/jsonschema/reader.go similarity index 99% rename from cluster-autoscaler/vendor/github.com/google/gnostic/jsonschema/reader.go rename to cluster-autoscaler/vendor/github.com/google/gnostic-models/jsonschema/reader.go index a909a34128b7..b8583d466023 100644 --- a/cluster-autoscaler/vendor/github.com/google/gnostic/jsonschema/reader.go +++ b/cluster-autoscaler/vendor/github.com/google/gnostic-models/jsonschema/reader.go @@ -165,6 +165,7 @@ func NewSchemaFromObject(jsonData *yaml.Node) *Schema { default: fmt.Printf("schemaValue: unexpected node %+v\n", jsonData) + return nil } return nil diff --git a/cluster-autoscaler/vendor/github.com/google/gnostic/jsonschema/schema.json b/cluster-autoscaler/vendor/github.com/google/gnostic-models/jsonschema/schema.json similarity index 100% rename from cluster-autoscaler/vendor/github.com/google/gnostic/jsonschema/schema.json rename to cluster-autoscaler/vendor/github.com/google/gnostic-models/jsonschema/schema.json diff --git a/cluster-autoscaler/vendor/github.com/google/gnostic/jsonschema/writer.go b/cluster-autoscaler/vendor/github.com/google/gnostic-models/jsonschema/writer.go similarity index 92% rename from cluster-autoscaler/vendor/github.com/google/gnostic/jsonschema/writer.go rename to cluster-autoscaler/vendor/github.com/google/gnostic-models/jsonschema/writer.go index 15b1f905067a..340dc5f93306 100644 --- 
a/cluster-autoscaler/vendor/github.com/google/gnostic/jsonschema/writer.go +++ b/cluster-autoscaler/vendor/github.com/google/gnostic-models/jsonschema/writer.go @@ -16,7 +16,6 @@ package jsonschema import ( "fmt" - "strings" "gopkg.in/yaml.v3" ) @@ -34,11 +33,7 @@ func renderMappingNode(node *yaml.Node, indent string) (result string) { value := node.Content[i+1] switch value.Kind { case yaml.ScalarNode: - if value.Tag == "!!bool" { - result += value.Value - } else { - result += "\"" + value.Value + "\"" - } + result += "\"" + value.Value + "\"" case yaml.MappingNode: result += renderMappingNode(value, innerIndent) case yaml.SequenceNode: @@ -63,11 +58,7 @@ func renderSequenceNode(node *yaml.Node, indent string) (result string) { item := node.Content[i] switch item.Kind { case yaml.ScalarNode: - if item.Tag == "!!bool" { - result += innerIndent + item.Value - } else { - result += innerIndent + "\"" + item.Value + "\"" - } + result += innerIndent + "\"" + item.Value + "\"" case yaml.MappingNode: result += innerIndent + renderMappingNode(item, innerIndent) + "" default: @@ -269,26 +260,11 @@ func (schema *Schema) nodeValue() *yaml.Node { content = appendPair(content, "title", nodeForString(*schema.Title)) } if schema.ID != nil { - switch strings.TrimSuffix(*schema.Schema, "#") { - case "http://json-schema.org/draft-04/schema": - fallthrough - case "#": - fallthrough - case "": - content = appendPair(content, "id", nodeForString(*schema.ID)) - default: - content = appendPair(content, "$id", nodeForString(*schema.ID)) - } + content = appendPair(content, "id", nodeForString(*schema.ID)) } if schema.Schema != nil { content = appendPair(content, "$schema", nodeForString(*schema.Schema)) } - if schema.ReadOnly != nil && *schema.ReadOnly { - content = appendPair(content, "readOnly", nodeForBoolean(*schema.ReadOnly)) - } - if schema.WriteOnly != nil && *schema.WriteOnly { - content = appendPair(content, "writeOnly", nodeForBoolean(*schema.WriteOnly)) - } if schema.Type != nil 
{ content = appendPair(content, "type", schema.Type.nodeValue()) } diff --git a/cluster-autoscaler/vendor/github.com/google/gnostic/openapiv2/OpenAPIv2.go b/cluster-autoscaler/vendor/github.com/google/gnostic-models/openapiv2/OpenAPIv2.go similarity index 99% rename from cluster-autoscaler/vendor/github.com/google/gnostic/openapiv2/OpenAPIv2.go rename to cluster-autoscaler/vendor/github.com/google/gnostic-models/openapiv2/OpenAPIv2.go index 28c2777d5115..d71fe6d5451e 100644 --- a/cluster-autoscaler/vendor/github.com/google/gnostic/openapiv2/OpenAPIv2.go +++ b/cluster-autoscaler/vendor/github.com/google/gnostic-models/openapiv2/OpenAPIv2.go @@ -23,7 +23,7 @@ import ( "gopkg.in/yaml.v3" - "github.com/google/gnostic/compiler" + "github.com/google/gnostic-models/compiler" ) // Version returns the package name (and OpenAPI version). @@ -7887,12 +7887,7 @@ func (m *Oauth2Scopes) ToRawInfo() *yaml.Node { if m == nil { return info } - if m.AdditionalProperties != nil { - for _, item := range m.AdditionalProperties { - info.Content = append(info.Content, compiler.NewScalarNodeForString(item.Name)) - info.Content = append(info.Content, compiler.NewScalarNodeForString(item.Value)) - } - } + // &{Name:additionalProperties Type:NamedString StringEnumValues:[] MapType:string Repeated:true Pattern: Implicit:true Description:} return info } diff --git a/cluster-autoscaler/vendor/github.com/google/gnostic/openapiv2/OpenAPIv2.pb.go b/cluster-autoscaler/vendor/github.com/google/gnostic-models/openapiv2/OpenAPIv2.pb.go similarity index 99% rename from cluster-autoscaler/vendor/github.com/google/gnostic/openapiv2/OpenAPIv2.pb.go rename to cluster-autoscaler/vendor/github.com/google/gnostic-models/openapiv2/OpenAPIv2.pb.go index 06b60157c144..65c4c913ce70 100644 --- a/cluster-autoscaler/vendor/github.com/google/gnostic/openapiv2/OpenAPIv2.pb.go +++ b/cluster-autoscaler/vendor/github.com/google/gnostic-models/openapiv2/OpenAPIv2.pb.go @@ -16,8 +16,8 @@ // Code generated by protoc-gen-go. 
DO NOT EDIT. // versions: -// protoc-gen-go v1.26.0 -// protoc v3.18.1 +// protoc-gen-go v1.27.1 +// protoc v3.19.3 // source: openapiv2/OpenAPIv2.proto package openapi_v2 diff --git a/cluster-autoscaler/vendor/github.com/google/gnostic/openapiv2/OpenAPIv2.proto b/cluster-autoscaler/vendor/github.com/google/gnostic-models/openapiv2/OpenAPIv2.proto similarity index 100% rename from cluster-autoscaler/vendor/github.com/google/gnostic/openapiv2/OpenAPIv2.proto rename to cluster-autoscaler/vendor/github.com/google/gnostic-models/openapiv2/OpenAPIv2.proto diff --git a/cluster-autoscaler/vendor/github.com/google/gnostic/openapiv2/README.md b/cluster-autoscaler/vendor/github.com/google/gnostic-models/openapiv2/README.md similarity index 100% rename from cluster-autoscaler/vendor/github.com/google/gnostic/openapiv2/README.md rename to cluster-autoscaler/vendor/github.com/google/gnostic-models/openapiv2/README.md diff --git a/cluster-autoscaler/vendor/github.com/google/gnostic/openapiv2/document.go b/cluster-autoscaler/vendor/github.com/google/gnostic-models/openapiv2/document.go similarity index 96% rename from cluster-autoscaler/vendor/github.com/google/gnostic/openapiv2/document.go rename to cluster-autoscaler/vendor/github.com/google/gnostic-models/openapiv2/document.go index 0021ae871a6f..e96ac0d6dac2 100644 --- a/cluster-autoscaler/vendor/github.com/google/gnostic/openapiv2/document.go +++ b/cluster-autoscaler/vendor/github.com/google/gnostic-models/openapiv2/document.go @@ -17,7 +17,7 @@ package openapi_v2 import ( "gopkg.in/yaml.v3" - "github.com/google/gnostic/compiler" + "github.com/google/gnostic-models/compiler" ) // ParseDocument reads an OpenAPI v2 description from a YAML/JSON representation. 
diff --git a/cluster-autoscaler/vendor/github.com/google/gnostic/openapiv2/openapi-2.0.json b/cluster-autoscaler/vendor/github.com/google/gnostic-models/openapiv2/openapi-2.0.json similarity index 100% rename from cluster-autoscaler/vendor/github.com/google/gnostic/openapiv2/openapi-2.0.json rename to cluster-autoscaler/vendor/github.com/google/gnostic-models/openapiv2/openapi-2.0.json diff --git a/cluster-autoscaler/vendor/github.com/google/gnostic/openapiv3/OpenAPIv3.go b/cluster-autoscaler/vendor/github.com/google/gnostic-models/openapiv3/OpenAPIv3.go similarity index 99% rename from cluster-autoscaler/vendor/github.com/google/gnostic/openapiv3/OpenAPIv3.go rename to cluster-autoscaler/vendor/github.com/google/gnostic-models/openapiv3/OpenAPIv3.go index d54a84db7c0e..4b1131ce1c2e 100644 --- a/cluster-autoscaler/vendor/github.com/google/gnostic/openapiv3/OpenAPIv3.go +++ b/cluster-autoscaler/vendor/github.com/google/gnostic-models/openapiv3/OpenAPIv3.go @@ -23,7 +23,7 @@ import ( "gopkg.in/yaml.v3" - "github.com/google/gnostic/compiler" + "github.com/google/gnostic-models/compiler" ) // Version returns the package name (and OpenAPI version). 
@@ -8560,12 +8560,7 @@ func (m *Strings) ToRawInfo() *yaml.Node { if m == nil { return info } - if m.AdditionalProperties != nil { - for _, item := range m.AdditionalProperties { - info.Content = append(info.Content, compiler.NewScalarNodeForString(item.Name)) - info.Content = append(info.Content, compiler.NewScalarNodeForString(item.Value)) - } - } + // &{Name:additionalProperties Type:NamedString StringEnumValues:[] MapType:string Repeated:true Pattern: Implicit:true Description:} return info } diff --git a/cluster-autoscaler/vendor/github.com/google/gnostic/openapiv3/OpenAPIv3.pb.go b/cluster-autoscaler/vendor/github.com/google/gnostic-models/openapiv3/OpenAPIv3.pb.go similarity index 99% rename from cluster-autoscaler/vendor/github.com/google/gnostic/openapiv3/OpenAPIv3.pb.go rename to cluster-autoscaler/vendor/github.com/google/gnostic-models/openapiv3/OpenAPIv3.pb.go index 90a56f5526b2..945b8d11ff59 100644 --- a/cluster-autoscaler/vendor/github.com/google/gnostic/openapiv3/OpenAPIv3.pb.go +++ b/cluster-autoscaler/vendor/github.com/google/gnostic-models/openapiv3/OpenAPIv3.pb.go @@ -16,8 +16,8 @@ // Code generated by protoc-gen-go. DO NOT EDIT. 
// versions: -// protoc-gen-go v1.28.0 -// protoc v3.19.4 +// protoc-gen-go v1.27.1 +// protoc v3.19.3 // source: openapiv3/OpenAPIv3.proto package openapi_v3 @@ -6760,13 +6760,12 @@ var file_openapiv3_OpenAPIv3_proto_rawDesc = []byte{ 0x5f, 0x65, 0x78, 0x74, 0x65, 0x6e, 0x73, 0x69, 0x6f, 0x6e, 0x18, 0x06, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x14, 0x2e, 0x6f, 0x70, 0x65, 0x6e, 0x61, 0x70, 0x69, 0x2e, 0x76, 0x33, 0x2e, 0x4e, 0x61, 0x6d, 0x65, 0x64, 0x41, 0x6e, 0x79, 0x52, 0x16, 0x73, 0x70, 0x65, 0x63, 0x69, 0x66, 0x69, 0x63, - 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x45, 0x78, 0x74, 0x65, 0x6e, 0x73, 0x69, 0x6f, 0x6e, 0x42, 0x56, + 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x45, 0x78, 0x74, 0x65, 0x6e, 0x73, 0x69, 0x6f, 0x6e, 0x42, 0x3e, 0x0a, 0x0e, 0x6f, 0x72, 0x67, 0x2e, 0x6f, 0x70, 0x65, 0x6e, 0x61, 0x70, 0x69, 0x5f, 0x76, 0x33, 0x42, 0x0c, 0x4f, 0x70, 0x65, 0x6e, 0x41, 0x50, 0x49, 0x50, 0x72, 0x6f, 0x74, 0x6f, 0x50, 0x01, - 0x5a, 0x2e, 0x67, 0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x67, 0x6f, 0x6f, - 0x67, 0x6c, 0x65, 0x2f, 0x67, 0x6e, 0x6f, 0x73, 0x74, 0x69, 0x63, 0x2f, 0x6f, 0x70, 0x65, 0x6e, - 0x61, 0x70, 0x69, 0x76, 0x33, 0x3b, 0x6f, 0x70, 0x65, 0x6e, 0x61, 0x70, 0x69, 0x5f, 0x76, 0x33, - 0xa2, 0x02, 0x03, 0x4f, 0x41, 0x53, 0x62, 0x06, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x33, + 0x5a, 0x16, 0x2e, 0x2f, 0x6f, 0x70, 0x65, 0x6e, 0x61, 0x70, 0x69, 0x76, 0x33, 0x3b, 0x6f, 0x70, + 0x65, 0x6e, 0x61, 0x70, 0x69, 0x5f, 0x76, 0x33, 0xa2, 0x02, 0x03, 0x4f, 0x41, 0x53, 0x62, 0x06, + 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x33, } var ( diff --git a/cluster-autoscaler/vendor/github.com/google/gnostic/openapiv3/OpenAPIv3.proto b/cluster-autoscaler/vendor/github.com/google/gnostic-models/openapiv3/OpenAPIv3.proto similarity index 99% rename from cluster-autoscaler/vendor/github.com/google/gnostic/openapiv3/OpenAPIv3.proto rename to cluster-autoscaler/vendor/github.com/google/gnostic-models/openapiv3/OpenAPIv3.proto index 7aede5ed9091..1be335b89ba0 100644 --- 
a/cluster-autoscaler/vendor/github.com/google/gnostic/openapiv3/OpenAPIv3.proto +++ b/cluster-autoscaler/vendor/github.com/google/gnostic-models/openapiv3/OpenAPIv3.proto @@ -42,7 +42,7 @@ option java_package = "org.openapi_v3"; option objc_class_prefix = "OAS"; // The Go package name. -option go_package = "github.com/google/gnostic/openapiv3;openapi_v3"; +option go_package = "./openapiv3;openapi_v3"; message AdditionalPropertiesItem { oneof oneof { diff --git a/cluster-autoscaler/vendor/github.com/google/gnostic/openapiv3/README.md b/cluster-autoscaler/vendor/github.com/google/gnostic-models/openapiv3/README.md similarity index 89% rename from cluster-autoscaler/vendor/github.com/google/gnostic/openapiv3/README.md rename to cluster-autoscaler/vendor/github.com/google/gnostic-models/openapiv3/README.md index 83603b82aab7..5ee12d92e24e 100644 --- a/cluster-autoscaler/vendor/github.com/google/gnostic/openapiv3/README.md +++ b/cluster-autoscaler/vendor/github.com/google/gnostic-models/openapiv3/README.md @@ -19,7 +19,3 @@ for OpenAPI. The schema-generator directory contains support code which generates openapi-3.1.json from the OpenAPI 3.1 specification document (Markdown). - -### How to rebuild - -`protoc -I=. -I=third_party --go_out=. 
--go_opt=paths=source_relative openapiv3/*.proto` \ No newline at end of file diff --git a/cluster-autoscaler/vendor/github.com/google/gnostic/openapiv3/document.go b/cluster-autoscaler/vendor/github.com/google/gnostic-models/openapiv3/document.go similarity index 96% rename from cluster-autoscaler/vendor/github.com/google/gnostic/openapiv3/document.go rename to cluster-autoscaler/vendor/github.com/google/gnostic-models/openapiv3/document.go index ef10d1d90964..1cee4677350e 100644 --- a/cluster-autoscaler/vendor/github.com/google/gnostic/openapiv3/document.go +++ b/cluster-autoscaler/vendor/github.com/google/gnostic-models/openapiv3/document.go @@ -17,7 +17,7 @@ package openapi_v3 import ( "gopkg.in/yaml.v3" - "github.com/google/gnostic/compiler" + "github.com/google/gnostic-models/compiler" ) // ParseDocument reads an OpenAPI v3 description from a YAML/JSON representation. diff --git a/cluster-autoscaler/vendor/github.com/google/gnostic/openapiv3/annotations.pb.go b/cluster-autoscaler/vendor/github.com/google/gnostic/openapiv3/annotations.pb.go deleted file mode 100644 index ae242f304315..000000000000 --- a/cluster-autoscaler/vendor/github.com/google/gnostic/openapiv3/annotations.pb.go +++ /dev/null @@ -1,183 +0,0 @@ -// Copyright 2022 Google LLC. All Rights Reserved. -// -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. -// You may obtain a copy of the License at -// -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -// See the License for the specific language governing permissions and -// limitations under the License. - -// Code generated by protoc-gen-go. DO NOT EDIT. 
-// versions: -// protoc-gen-go v1.28.0 -// protoc v3.19.4 -// source: openapiv3/annotations.proto - -package openapi_v3 - -import ( - protoreflect "google.golang.org/protobuf/reflect/protoreflect" - protoimpl "google.golang.org/protobuf/runtime/protoimpl" - descriptorpb "google.golang.org/protobuf/types/descriptorpb" - reflect "reflect" -) - -const ( - // Verify that this generated code is sufficiently up-to-date. - _ = protoimpl.EnforceVersion(20 - protoimpl.MinVersion) - // Verify that runtime/protoimpl is sufficiently up-to-date. - _ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20) -) - -var file_openapiv3_annotations_proto_extTypes = []protoimpl.ExtensionInfo{ - { - ExtendedType: (*descriptorpb.FileOptions)(nil), - ExtensionType: (*Document)(nil), - Field: 1143, - Name: "openapi.v3.document", - Tag: "bytes,1143,opt,name=document", - Filename: "openapiv3/annotations.proto", - }, - { - ExtendedType: (*descriptorpb.MethodOptions)(nil), - ExtensionType: (*Operation)(nil), - Field: 1143, - Name: "openapi.v3.operation", - Tag: "bytes,1143,opt,name=operation", - Filename: "openapiv3/annotations.proto", - }, - { - ExtendedType: (*descriptorpb.MessageOptions)(nil), - ExtensionType: (*Schema)(nil), - Field: 1143, - Name: "openapi.v3.schema", - Tag: "bytes,1143,opt,name=schema", - Filename: "openapiv3/annotations.proto", - }, - { - ExtendedType: (*descriptorpb.FieldOptions)(nil), - ExtensionType: (*Schema)(nil), - Field: 1143, - Name: "openapi.v3.property", - Tag: "bytes,1143,opt,name=property", - Filename: "openapiv3/annotations.proto", - }, -} - -// Extension fields to descriptorpb.FileOptions. -var ( - // optional openapi.v3.Document document = 1143; - E_Document = &file_openapiv3_annotations_proto_extTypes[0] -) - -// Extension fields to descriptorpb.MethodOptions. -var ( - // optional openapi.v3.Operation operation = 1143; - E_Operation = &file_openapiv3_annotations_proto_extTypes[1] -) - -// Extension fields to descriptorpb.MessageOptions. 
-var ( - // optional openapi.v3.Schema schema = 1143; - E_Schema = &file_openapiv3_annotations_proto_extTypes[2] -) - -// Extension fields to descriptorpb.FieldOptions. -var ( - // optional openapi.v3.Schema property = 1143; - E_Property = &file_openapiv3_annotations_proto_extTypes[3] -) - -var File_openapiv3_annotations_proto protoreflect.FileDescriptor - -var file_openapiv3_annotations_proto_rawDesc = []byte{ - 0x0a, 0x1b, 0x6f, 0x70, 0x65, 0x6e, 0x61, 0x70, 0x69, 0x76, 0x33, 0x2f, 0x61, 0x6e, 0x6e, 0x6f, - 0x74, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x12, 0x0a, 0x6f, - 0x70, 0x65, 0x6e, 0x61, 0x70, 0x69, 0x2e, 0x76, 0x33, 0x1a, 0x19, 0x6f, 0x70, 0x65, 0x6e, 0x61, - 0x70, 0x69, 0x76, 0x33, 0x2f, 0x4f, 0x70, 0x65, 0x6e, 0x41, 0x50, 0x49, 0x76, 0x33, 0x2e, 0x70, - 0x72, 0x6f, 0x74, 0x6f, 0x1a, 0x20, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2f, 0x70, 0x72, 0x6f, - 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2f, 0x64, 0x65, 0x73, 0x63, 0x72, 0x69, 0x70, 0x74, 0x6f, 0x72, - 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x3a, 0x4f, 0x0a, 0x08, 0x64, 0x6f, 0x63, 0x75, 0x6d, 0x65, - 0x6e, 0x74, 0x12, 0x1c, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, - 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x46, 0x69, 0x6c, 0x65, 0x4f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x73, - 0x18, 0xf7, 0x08, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x14, 0x2e, 0x6f, 0x70, 0x65, 0x6e, 0x61, 0x70, - 0x69, 0x2e, 0x76, 0x33, 0x2e, 0x44, 0x6f, 0x63, 0x75, 0x6d, 0x65, 0x6e, 0x74, 0x52, 0x08, 0x64, - 0x6f, 0x63, 0x75, 0x6d, 0x65, 0x6e, 0x74, 0x3a, 0x54, 0x0a, 0x09, 0x6f, 0x70, 0x65, 0x72, 0x61, - 0x74, 0x69, 0x6f, 0x6e, 0x12, 0x1e, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, - 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x4d, 0x65, 0x74, 0x68, 0x6f, 0x64, 0x4f, 0x70, 0x74, - 0x69, 0x6f, 0x6e, 0x73, 0x18, 0xf7, 0x08, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x15, 0x2e, 0x6f, 0x70, - 0x65, 0x6e, 0x61, 0x70, 0x69, 0x2e, 0x76, 0x33, 0x2e, 0x4f, 0x70, 0x65, 0x72, 0x61, 0x74, 0x69, - 0x6f, 0x6e, 
0x52, 0x09, 0x6f, 0x70, 0x65, 0x72, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x3a, 0x4c, 0x0a, - 0x06, 0x73, 0x63, 0x68, 0x65, 0x6d, 0x61, 0x12, 0x1f, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, - 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x4d, 0x65, 0x73, 0x73, 0x61, 0x67, - 0x65, 0x4f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x18, 0xf7, 0x08, 0x20, 0x01, 0x28, 0x0b, 0x32, - 0x12, 0x2e, 0x6f, 0x70, 0x65, 0x6e, 0x61, 0x70, 0x69, 0x2e, 0x76, 0x33, 0x2e, 0x53, 0x63, 0x68, - 0x65, 0x6d, 0x61, 0x52, 0x06, 0x73, 0x63, 0x68, 0x65, 0x6d, 0x61, 0x3a, 0x4e, 0x0a, 0x08, 0x70, - 0x72, 0x6f, 0x70, 0x65, 0x72, 0x74, 0x79, 0x12, 0x1d, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, - 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x46, 0x69, 0x65, 0x6c, 0x64, 0x4f, - 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x18, 0xf7, 0x08, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x12, 0x2e, - 0x6f, 0x70, 0x65, 0x6e, 0x61, 0x70, 0x69, 0x2e, 0x76, 0x33, 0x2e, 0x53, 0x63, 0x68, 0x65, 0x6d, - 0x61, 0x52, 0x08, 0x70, 0x72, 0x6f, 0x70, 0x65, 0x72, 0x74, 0x79, 0x42, 0x5a, 0x0a, 0x0e, 0x6f, - 0x72, 0x67, 0x2e, 0x6f, 0x70, 0x65, 0x6e, 0x61, 0x70, 0x69, 0x5f, 0x76, 0x33, 0x42, 0x10, 0x41, - 0x6e, 0x6e, 0x6f, 0x74, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x50, 0x72, 0x6f, 0x74, 0x6f, 0x50, - 0x01, 0x5a, 0x2e, 0x67, 0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x67, 0x6f, - 0x6f, 0x67, 0x6c, 0x65, 0x2f, 0x67, 0x6e, 0x6f, 0x73, 0x74, 0x69, 0x63, 0x2f, 0x6f, 0x70, 0x65, - 0x6e, 0x61, 0x70, 0x69, 0x76, 0x33, 0x3b, 0x6f, 0x70, 0x65, 0x6e, 0x61, 0x70, 0x69, 0x5f, 0x76, - 0x33, 0xa2, 0x02, 0x03, 0x4f, 0x41, 0x53, 0x62, 0x06, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x33, -} - -var file_openapiv3_annotations_proto_goTypes = []interface{}{ - (*descriptorpb.FileOptions)(nil), // 0: google.protobuf.FileOptions - (*descriptorpb.MethodOptions)(nil), // 1: google.protobuf.MethodOptions - (*descriptorpb.MessageOptions)(nil), // 2: google.protobuf.MessageOptions - (*descriptorpb.FieldOptions)(nil), // 3: 
google.protobuf.FieldOptions - (*Document)(nil), // 4: openapi.v3.Document - (*Operation)(nil), // 5: openapi.v3.Operation - (*Schema)(nil), // 6: openapi.v3.Schema -} -var file_openapiv3_annotations_proto_depIdxs = []int32{ - 0, // 0: openapi.v3.document:extendee -> google.protobuf.FileOptions - 1, // 1: openapi.v3.operation:extendee -> google.protobuf.MethodOptions - 2, // 2: openapi.v3.schema:extendee -> google.protobuf.MessageOptions - 3, // 3: openapi.v3.property:extendee -> google.protobuf.FieldOptions - 4, // 4: openapi.v3.document:type_name -> openapi.v3.Document - 5, // 5: openapi.v3.operation:type_name -> openapi.v3.Operation - 6, // 6: openapi.v3.schema:type_name -> openapi.v3.Schema - 6, // 7: openapi.v3.property:type_name -> openapi.v3.Schema - 8, // [8:8] is the sub-list for method output_type - 8, // [8:8] is the sub-list for method input_type - 4, // [4:8] is the sub-list for extension type_name - 0, // [0:4] is the sub-list for extension extendee - 0, // [0:0] is the sub-list for field type_name -} - -func init() { file_openapiv3_annotations_proto_init() } -func file_openapiv3_annotations_proto_init() { - if File_openapiv3_annotations_proto != nil { - return - } - file_openapiv3_OpenAPIv3_proto_init() - type x struct{} - out := protoimpl.TypeBuilder{ - File: protoimpl.DescBuilder{ - GoPackagePath: reflect.TypeOf(x{}).PkgPath(), - RawDescriptor: file_openapiv3_annotations_proto_rawDesc, - NumEnums: 0, - NumMessages: 0, - NumExtensions: 4, - NumServices: 0, - }, - GoTypes: file_openapiv3_annotations_proto_goTypes, - DependencyIndexes: file_openapiv3_annotations_proto_depIdxs, - ExtensionInfos: file_openapiv3_annotations_proto_extTypes, - }.Build() - File_openapiv3_annotations_proto = out.File - file_openapiv3_annotations_proto_rawDesc = nil - file_openapiv3_annotations_proto_goTypes = nil - file_openapiv3_annotations_proto_depIdxs = nil -} diff --git a/cluster-autoscaler/vendor/github.com/google/gnostic/openapiv3/annotations.proto 
b/cluster-autoscaler/vendor/github.com/google/gnostic/openapiv3/annotations.proto deleted file mode 100644 index 0bd87810db60..000000000000 --- a/cluster-autoscaler/vendor/github.com/google/gnostic/openapiv3/annotations.proto +++ /dev/null @@ -1,60 +0,0 @@ -// Copyright 2022 Google LLC. All Rights Reserved. -// -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. -// You may obtain a copy of the License at -// -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -// See the License for the specific language governing permissions and -// limitations under the License. - -syntax = "proto3"; - -package openapi.v3; - -import "openapiv3/OpenAPIv3.proto"; -import "google/protobuf/descriptor.proto"; - -// This option lets the proto compiler generate Java code inside the package -// name (see below) instead of inside an outer class. It creates a simpler -// developer experience by reducing one-level of name nesting and be -// consistent with most programming languages that don't support outer classes. -option java_multiple_files = true; - -// The Java outer classname should be the filename in UpperCamelCase. This -// class is only used to hold proto descriptor, so developers don't need to -// work with it directly. -option java_outer_classname = "AnnotationsProto"; - -// The Java package name must be proto package name with proper prefix. -option java_package = "org.openapi_v3"; - -// A reasonable prefix for the Objective-C symbols generated from the package. -// It should at a minimum be 3 characters long, all uppercase, and convention -// is to use an abbreviation of the package name. 
Something short, but -// hopefully unique enough to not conflict with things that may come along in -// the future. 'GPB' is reserved for the protocol buffer implementation itself. -option objc_class_prefix = "OAS"; - -// The Go package name. -option go_package = "github.com/google/gnostic/openapiv3;openapi_v3"; - -extend google.protobuf.FileOptions { - Document document = 1143; -} - -extend google.protobuf.MethodOptions { - Operation operation = 1143; -} - -extend google.protobuf.MessageOptions { - Schema schema = 1143; -} - -extend google.protobuf.FieldOptions { - Schema property = 1143; -} \ No newline at end of file diff --git a/cluster-autoscaler/vendor/github.com/google/gnostic/openapiv3/openapi-3.0.json b/cluster-autoscaler/vendor/github.com/google/gnostic/openapiv3/openapi-3.0.json deleted file mode 100644 index d5caed162d8f..000000000000 --- a/cluster-autoscaler/vendor/github.com/google/gnostic/openapiv3/openapi-3.0.json +++ /dev/null @@ -1,1251 +0,0 @@ -{ - "title": "A JSON Schema for OpenAPI 3.0.", - "id": "http://openapis.org/v3/schema.json#", - "$schema": "http://json-schema.org/draft-04/schema#", - "type": "object", - "description": "This is the root document object of the OpenAPI document.", - "required": [ - "openapi", - "info", - "paths" - ], - "additionalProperties": false, - "patternProperties": { - "^x-": { - "$ref": "#/definitions/specificationExtension" - } - }, - "properties": { - "openapi": { - "type": "string" - }, - "info": { - "$ref": "#/definitions/info" - }, - "servers": { - "type": "array", - "items": { - "$ref": "#/definitions/server" - }, - "uniqueItems": true - }, - "paths": { - "$ref": "#/definitions/paths" - }, - "components": { - "$ref": "#/definitions/components" - }, - "security": { - "type": "array", - "items": { - "$ref": "#/definitions/securityRequirement" - }, - "uniqueItems": true - }, - "tags": { - "type": "array", - "items": { - "$ref": "#/definitions/tag" - }, - "uniqueItems": true - }, - "externalDocs": { - "$ref": 
"#/definitions/externalDocs" - } - }, - "definitions": { - "info": { - "type": "object", - "description": "The object provides metadata about the API. The metadata MAY be used by the clients if needed, and MAY be presented in editing or documentation generation tools for convenience.", - "required": [ - "title", - "version" - ], - "additionalProperties": false, - "patternProperties": { - "^x-": { - "$ref": "#/definitions/specificationExtension" - } - }, - "properties": { - "title": { - "type": "string" - }, - "description": { - "type": "string" - }, - "termsOfService": { - "type": "string" - }, - "contact": { - "$ref": "#/definitions/contact" - }, - "license": { - "$ref": "#/definitions/license" - }, - "version": { - "type": "string" - } - } - }, - "contact": { - "type": "object", - "description": "Contact information for the exposed API.", - "additionalProperties": false, - "patternProperties": { - "^x-": { - "$ref": "#/definitions/specificationExtension" - } - }, - "properties": { - "name": { - "type": "string" - }, - "url": { - "type": "string", - "format": "uri" - }, - "email": { - "type": "string", - "format": "email" - } - } - }, - "license": { - "type": "object", - "description": "License information for the exposed API.", - "required": [ - "name" - ], - "additionalProperties": false, - "patternProperties": { - "^x-": { - "$ref": "#/definitions/specificationExtension" - } - }, - "properties": { - "name": { - "type": "string" - }, - "url": { - "type": "string" - } - } - }, - "server": { - "type": "object", - "description": "An object representing a Server.", - "required": [ - "url" - ], - "additionalProperties": false, - "patternProperties": { - "^x-": { - "$ref": "#/definitions/specificationExtension" - } - }, - "properties": { - "url": { - "type": "string" - }, - "description": { - "type": "string" - }, - "variables": { - "$ref": "#/definitions/serverVariables" - } - } - }, - "serverVariable": { - "type": "object", - "description": "An object representing a 
Server Variable for server URL template substitution.", - "required": [ - "default" - ], - "additionalProperties": false, - "patternProperties": { - "^x-": { - "$ref": "#/definitions/specificationExtension" - } - }, - "properties": { - "enum": { - "type": "array", - "items": { - "type": "string" - }, - "uniqueItems": true - }, - "default": { - "type": "string" - }, - "description": { - "type": "string" - } - } - }, - "components": { - "type": "object", - "description": "Holds a set of reusable objects for different aspects of the OAS. All objects defined within the components object will have no effect on the API unless they are explicitly referenced from properties outside the components object.", - "additionalProperties": false, - "patternProperties": { - "^x-": { - "$ref": "#/definitions/specificationExtension" - } - }, - "properties": { - "schemas": { - "$ref": "#/definitions/schemasOrReferences" - }, - "responses": { - "$ref": "#/definitions/responsesOrReferences" - }, - "parameters": { - "$ref": "#/definitions/parametersOrReferences" - }, - "examples": { - "$ref": "#/definitions/examplesOrReferences" - }, - "requestBodies": { - "$ref": "#/definitions/requestBodiesOrReferences" - }, - "headers": { - "$ref": "#/definitions/headersOrReferences" - }, - "securitySchemes": { - "$ref": "#/definitions/securitySchemesOrReferences" - }, - "links": { - "$ref": "#/definitions/linksOrReferences" - }, - "callbacks": { - "$ref": "#/definitions/callbacksOrReferences" - } - } - }, - "paths": { - "type": "object", - "description": "Holds the relative paths to the individual endpoints and their operations. The path is appended to the URL from the `Server Object` in order to construct the full URL. 
The Paths MAY be empty, due to ACL constraints.", - "additionalProperties": false, - "patternProperties": { - "^/": { - "$ref": "#/definitions/pathItem" - }, - "^x-": { - "$ref": "#/definitions/specificationExtension" - } - } - }, - "pathItem": { - "type": "object", - "description": "Describes the operations available on a single path. A Path Item MAY be empty, due to ACL constraints. The path itself is still exposed to the documentation viewer but they will not know which operations and parameters are available.", - "additionalProperties": false, - "patternProperties": { - "^x-": { - "$ref": "#/definitions/specificationExtension" - } - }, - "properties": { - "$ref": { - "type": "string" - }, - "summary": { - "type": "string" - }, - "description": { - "type": "string" - }, - "get": { - "$ref": "#/definitions/operation" - }, - "put": { - "$ref": "#/definitions/operation" - }, - "post": { - "$ref": "#/definitions/operation" - }, - "delete": { - "$ref": "#/definitions/operation" - }, - "options": { - "$ref": "#/definitions/operation" - }, - "head": { - "$ref": "#/definitions/operation" - }, - "patch": { - "$ref": "#/definitions/operation" - }, - "trace": { - "$ref": "#/definitions/operation" - }, - "servers": { - "type": "array", - "items": { - "$ref": "#/definitions/server" - }, - "uniqueItems": true - }, - "parameters": { - "type": "array", - "items": { - "$ref": "#/definitions/parameterOrReference" - }, - "uniqueItems": true - } - } - }, - "operation": { - "type": "object", - "description": "Describes a single API operation on a path.", - "required": [ - "responses" - ], - "additionalProperties": false, - "patternProperties": { - "^x-": { - "$ref": "#/definitions/specificationExtension" - } - }, - "properties": { - "tags": { - "type": "array", - "items": { - "type": "string" - }, - "uniqueItems": true - }, - "summary": { - "type": "string" - }, - "description": { - "type": "string" - }, - "externalDocs": { - "$ref": "#/definitions/externalDocs" - }, - 
"operationId": { - "type": "string" - }, - "parameters": { - "type": "array", - "items": { - "$ref": "#/definitions/parameterOrReference" - }, - "uniqueItems": true - }, - "requestBody": { - "$ref": "#/definitions/requestBodyOrReference" - }, - "responses": { - "$ref": "#/definitions/responses" - }, - "callbacks": { - "$ref": "#/definitions/callbacksOrReferences" - }, - "deprecated": { - "type": "boolean" - }, - "security": { - "type": "array", - "items": { - "$ref": "#/definitions/securityRequirement" - }, - "uniqueItems": true - }, - "servers": { - "type": "array", - "items": { - "$ref": "#/definitions/server" - }, - "uniqueItems": true - } - } - }, - "externalDocs": { - "type": "object", - "description": "Allows referencing an external resource for extended documentation.", - "required": [ - "url" - ], - "additionalProperties": false, - "patternProperties": { - "^x-": { - "$ref": "#/definitions/specificationExtension" - } - }, - "properties": { - "description": { - "type": "string" - }, - "url": { - "type": "string" - } - } - }, - "parameter": { - "type": "object", - "description": "Describes a single operation parameter. 
A unique parameter is defined by a combination of a name and location.", - "required": [ - "name", - "in" - ], - "additionalProperties": false, - "patternProperties": { - "^x-": { - "$ref": "#/definitions/specificationExtension" - } - }, - "properties": { - "name": { - "type": "string" - }, - "in": { - "type": "string" - }, - "description": { - "type": "string" - }, - "required": { - "type": "boolean" - }, - "deprecated": { - "type": "boolean" - }, - "allowEmptyValue": { - "type": "boolean" - }, - "style": { - "type": "string" - }, - "explode": { - "type": "boolean" - }, - "allowReserved": { - "type": "boolean" - }, - "schema": { - "$ref": "#/definitions/schemaOrReference" - }, - "example": { - "$ref": "#/definitions/any" - }, - "examples": { - "$ref": "#/definitions/examplesOrReferences" - }, - "content": { - "$ref": "#/definitions/mediaTypes" - } - } - }, - "requestBody": { - "type": "object", - "description": "Describes a single request body.", - "required": [ - "content" - ], - "additionalProperties": false, - "patternProperties": { - "^x-": { - "$ref": "#/definitions/specificationExtension" - } - }, - "properties": { - "description": { - "type": "string" - }, - "content": { - "$ref": "#/definitions/mediaTypes" - }, - "required": { - "type": "boolean" - } - } - }, - "mediaType": { - "type": "object", - "description": "Each Media Type Object provides schema and examples for the media type identified by its key.", - "additionalProperties": false, - "patternProperties": { - "^x-": { - "$ref": "#/definitions/specificationExtension" - } - }, - "properties": { - "schema": { - "$ref": "#/definitions/schemaOrReference" - }, - "example": { - "$ref": "#/definitions/any" - }, - "examples": { - "$ref": "#/definitions/examplesOrReferences" - }, - "encoding": { - "$ref": "#/definitions/encodings" - } - } - }, - "encoding": { - "type": "object", - "description": "A single encoding definition applied to a single schema property.", - "additionalProperties": false, - 
"patternProperties": { - "^x-": { - "$ref": "#/definitions/specificationExtension" - } - }, - "properties": { - "contentType": { - "type": "string" - }, - "headers": { - "$ref": "#/definitions/headersOrReferences" - }, - "style": { - "type": "string" - }, - "explode": { - "type": "boolean" - }, - "allowReserved": { - "type": "boolean" - } - } - }, - "responses": { - "type": "object", - "description": "A container for the expected responses of an operation. The container maps a HTTP response code to the expected response. The documentation is not necessarily expected to cover all possible HTTP response codes because they may not be known in advance. However, documentation is expected to cover a successful operation response and any known errors. The `default` MAY be used as a default response object for all HTTP codes that are not covered individually by the specification. The `Responses Object` MUST contain at least one response code, and it SHOULD be the response for a successful operation call.", - "additionalProperties": false, - "patternProperties": { - "^([0-9X]{3})$": { - "$ref": "#/definitions/responseOrReference" - }, - "^x-": { - "$ref": "#/definitions/specificationExtension" - } - }, - "properties": { - "default": { - "$ref": "#/definitions/responseOrReference" - } - } - }, - "response": { - "type": "object", - "description": "Describes a single response from an API Operation, including design-time, static `links` to operations based on the response.", - "required": [ - "description" - ], - "additionalProperties": false, - "patternProperties": { - "^x-": { - "$ref": "#/definitions/specificationExtension" - } - }, - "properties": { - "description": { - "type": "string" - }, - "headers": { - "$ref": "#/definitions/headersOrReferences" - }, - "content": { - "$ref": "#/definitions/mediaTypes" - }, - "links": { - "$ref": "#/definitions/linksOrReferences" - } - } - }, - "callback": { - "type": "object", - "description": "A map of possible out-of band callbacks 
related to the parent operation. Each value in the map is a Path Item Object that describes a set of requests that may be initiated by the API provider and the expected responses. The key value used to identify the callback object is an expression, evaluated at runtime, that identifies a URL to use for the callback operation.", - "additionalProperties": false, - "patternProperties": { - "^": { - "$ref": "#/definitions/pathItem" - }, - "^x-": { - "$ref": "#/definitions/specificationExtension" - } - } - }, - "example": { - "type": "object", - "description": "", - "additionalProperties": false, - "patternProperties": { - "^x-": { - "$ref": "#/definitions/specificationExtension" - } - }, - "properties": { - "summary": { - "type": "string" - }, - "description": { - "type": "string" - }, - "value": { - "$ref": "#/definitions/any" - }, - "externalValue": { - "type": "string" - } - } - }, - "link": { - "type": "object", - "description": "The `Link object` represents a possible design-time link for a response. The presence of a link does not guarantee the caller's ability to successfully invoke it, rather it provides a known relationship and traversal mechanism between responses and other operations. Unlike _dynamic_ links (i.e. links provided **in** the response payload), the OAS linking mechanism does not require link information in the runtime response. 
For computing links, and providing instructions to execute them, a runtime expression is used for accessing values in an operation and using them as parameters while invoking the linked operation.", - "additionalProperties": false, - "patternProperties": { - "^x-": { - "$ref": "#/definitions/specificationExtension" - } - }, - "properties": { - "operationRef": { - "type": "string" - }, - "operationId": { - "type": "string" - }, - "parameters": { - "$ref": "#/definitions/anysOrExpressions" - }, - "requestBody": { - "$ref": "#/definitions/anyOrExpression" - }, - "description": { - "type": "string" - }, - "server": { - "$ref": "#/definitions/server" - } - } - }, - "header": { - "type": "object", - "description": "The Header Object follows the structure of the Parameter Object with the following changes: 1. `name` MUST NOT be specified, it is given in the corresponding `headers` map. 1. `in` MUST NOT be specified, it is implicitly in `header`. 1. All traits that are affected by the location MUST be applicable to a location of `header` (for example, `style`).", - "additionalProperties": false, - "patternProperties": { - "^x-": { - "$ref": "#/definitions/specificationExtension" - } - }, - "properties": { - "description": { - "type": "string" - }, - "required": { - "type": "boolean" - }, - "deprecated": { - "type": "boolean" - }, - "allowEmptyValue": { - "type": "boolean" - }, - "style": { - "type": "string" - }, - "explode": { - "type": "boolean" - }, - "allowReserved": { - "type": "boolean" - }, - "schema": { - "$ref": "#/definitions/schemaOrReference" - }, - "example": { - "$ref": "#/definitions/any" - }, - "examples": { - "$ref": "#/definitions/examplesOrReferences" - }, - "content": { - "$ref": "#/definitions/mediaTypes" - } - } - }, - "tag": { - "type": "object", - "description": "Adds metadata to a single tag that is used by the Operation Object. 
It is not mandatory to have a Tag Object per tag defined in the Operation Object instances.", - "required": [ - "name" - ], - "additionalProperties": false, - "patternProperties": { - "^x-": { - "$ref": "#/definitions/specificationExtension" - } - }, - "properties": { - "name": { - "type": "string" - }, - "description": { - "type": "string" - }, - "externalDocs": { - "$ref": "#/definitions/externalDocs" - } - } - }, - "reference": { - "type": "object", - "description": "A simple object to allow referencing other components in the specification, internally and externally. The Reference Object is defined by JSON Reference and follows the same structure, behavior and rules. For this specification, reference resolution is accomplished as defined by the JSON Reference specification and not by the JSON Schema specification.", - "required": [ - "$ref" - ], - "additionalProperties": false, - "properties": { - "$ref": { - "type": "string" - }, - "summary": { - "type": "string" - }, - "description": { - "type": "string" - } - } - }, - "schema": { - "type": "object", - "description": "The Schema Object allows the definition of input and output data types. These types can be objects, but also primitives and arrays. This object is an extended subset of the JSON Schema Specification Wright Draft 00. For more information about the properties, see JSON Schema Core and JSON Schema Validation. 
Unless stated otherwise, the property definitions follow the JSON Schema.", - "additionalProperties": false, - "patternProperties": { - "^x-": { - "$ref": "#/definitions/specificationExtension" - } - }, - "properties": { - "nullable": { - "type": "boolean" - }, - "discriminator": { - "$ref": "#/definitions/discriminator" - }, - "readOnly": { - "type": "boolean" - }, - "writeOnly": { - "type": "boolean" - }, - "xml": { - "$ref": "#/definitions/xml" - }, - "externalDocs": { - "$ref": "#/definitions/externalDocs" - }, - "example": { - "$ref": "#/definitions/any" - }, - "deprecated": { - "type": "boolean" - }, - "title": { - "$ref": "http://json-schema.org/draft-04/schema#/properties/title" - }, - "multipleOf": { - "$ref": "http://json-schema.org/draft-04/schema#/properties/multipleOf" - }, - "maximum": { - "$ref": "http://json-schema.org/draft-04/schema#/properties/maximum" - }, - "exclusiveMaximum": { - "$ref": "http://json-schema.org/draft-04/schema#/properties/exclusiveMaximum" - }, - "minimum": { - "$ref": "http://json-schema.org/draft-04/schema#/properties/minimum" - }, - "exclusiveMinimum": { - "$ref": "http://json-schema.org/draft-04/schema#/properties/exclusiveMinimum" - }, - "maxLength": { - "$ref": "http://json-schema.org/draft-04/schema#/properties/maxLength" - }, - "minLength": { - "$ref": "http://json-schema.org/draft-04/schema#/properties/minLength" - }, - "pattern": { - "$ref": "http://json-schema.org/draft-04/schema#/properties/pattern" - }, - "maxItems": { - "$ref": "http://json-schema.org/draft-04/schema#/properties/maxItems" - }, - "minItems": { - "$ref": "http://json-schema.org/draft-04/schema#/properties/minItems" - }, - "uniqueItems": { - "$ref": "http://json-schema.org/draft-04/schema#/properties/uniqueItems" - }, - "maxProperties": { - "$ref": "http://json-schema.org/draft-04/schema#/properties/maxProperties" - }, - "minProperties": { - "$ref": "http://json-schema.org/draft-04/schema#/properties/minProperties" - }, - "required": { - "$ref": 
"http://json-schema.org/draft-04/schema#/properties/required" - }, - "enum": { - "$ref": "http://json-schema.org/draft-04/schema#/properties/enum" - }, - "type": { - "type": "string" - }, - "allOf": { - "type": "array", - "items": { - "$ref": "#/definitions/schemaOrReference" - }, - "minItems": 1 - }, - "oneOf": { - "type": "array", - "items": { - "$ref": "#/definitions/schemaOrReference" - }, - "minItems": 1 - }, - "anyOf": { - "type": "array", - "items": { - "$ref": "#/definitions/schemaOrReference" - }, - "minItems": 1 - }, - "not": { - "$ref": "#/definitions/schema" - }, - "items": { - "anyOf": [ - { - "$ref": "#/definitions/schemaOrReference" - }, - { - "type": "array", - "items": { - "$ref": "#/definitions/schemaOrReference" - }, - "minItems": 1 - } - ] - }, - "properties": { - "type": "object", - "additionalProperties": { - "$ref": "#/definitions/schemaOrReference" - } - }, - "additionalProperties": { - "oneOf": [ - { - "$ref": "#/definitions/schemaOrReference" - }, - { - "type": "boolean" - } - ] - }, - "default": { - "$ref": "#/definitions/defaultType" - }, - "description": { - "type": "string" - }, - "format": { - "type": "string" - } - } - }, - "discriminator": { - "type": "object", - "description": "When request bodies or response payloads may be one of a number of different schemas, a `discriminator` object can be used to aid in serialization, deserialization, and validation. The discriminator is a specific object in a schema which is used to inform the consumer of the specification of an alternative schema based on the value associated with it. When using the discriminator, _inline_ schemas will not be considered.", - "required": [ - "propertyName" - ], - "additionalProperties": false, - "properties": { - "propertyName": { - "type": "string" - }, - "mapping": { - "$ref": "#/definitions/strings" - } - } - }, - "xml": { - "type": "object", - "description": "A metadata object that allows for more fine-tuned XML model definitions. 
When using arrays, XML element names are *not* inferred (for singular/plural forms) and the `name` property SHOULD be used to add that information. See examples for expected behavior.", - "additionalProperties": false, - "patternProperties": { - "^x-": { - "$ref": "#/definitions/specificationExtension" - } - }, - "properties": { - "name": { - "type": "string" - }, - "namespace": { - "type": "string" - }, - "prefix": { - "type": "string" - }, - "attribute": { - "type": "boolean" - }, - "wrapped": { - "type": "boolean" - } - } - }, - "securityScheme": { - "type": "object", - "description": "Defines a security scheme that can be used by the operations. Supported schemes are HTTP authentication, an API key (either as a header or as a query parameter), OAuth2's common flows (implicit, password, application and access code) as defined in RFC6749, and OpenID Connect Discovery.", - "required": [ - "type" - ], - "additionalProperties": false, - "patternProperties": { - "^x-": { - "$ref": "#/definitions/specificationExtension" - } - }, - "properties": { - "type": { - "type": "string" - }, - "description": { - "type": "string" - }, - "name": { - "type": "string" - }, - "in": { - "type": "string" - }, - "scheme": { - "type": "string" - }, - "bearerFormat": { - "type": "string" - }, - "flows": { - "$ref": "#/definitions/oauthFlows" - }, - "openIdConnectUrl": { - "type": "string" - } - } - }, - "oauthFlows": { - "type": "object", - "description": "Allows configuration of the supported OAuth Flows.", - "additionalProperties": false, - "patternProperties": { - "^x-": { - "$ref": "#/definitions/specificationExtension" - } - }, - "properties": { - "implicit": { - "$ref": "#/definitions/oauthFlow" - }, - "password": { - "$ref": "#/definitions/oauthFlow" - }, - "clientCredentials": { - "$ref": "#/definitions/oauthFlow" - }, - "authorizationCode": { - "$ref": "#/definitions/oauthFlow" - } - } - }, - "oauthFlow": { - "type": "object", - "description": "Configuration details for a 
supported OAuth Flow", - "additionalProperties": false, - "patternProperties": { - "^x-": { - "$ref": "#/definitions/specificationExtension" - } - }, - "properties": { - "authorizationUrl": { - "type": "string" - }, - "tokenUrl": { - "type": "string" - }, - "refreshUrl": { - "type": "string" - }, - "scopes": { - "$ref": "#/definitions/strings" - } - } - }, - "securityRequirement": { - "type": "object", - "description": "Lists the required security schemes to execute this operation. The name used for each property MUST correspond to a security scheme declared in the Security Schemes under the Components Object. Security Requirement Objects that contain multiple schemes require that all schemes MUST be satisfied for a request to be authorized. This enables support for scenarios where multiple query parameters or HTTP headers are required to convey security information. When a list of Security Requirement Objects is defined on the Open API object or Operation Object, only one of Security Requirement Objects in the list needs to be satisfied to authorize the request.", - "additionalProperties": false, - "patternProperties": { - "^[a-zA-Z0-9\\.\\-_]+$": { - "type": "array", - "items": { - "type": "string" - }, - "uniqueItems": true - } - } - }, - "anyOrExpression": { - "oneOf": [ - { - "$ref": "#/definitions/any" - }, - { - "$ref": "#/definitions/expression" - } - ] - }, - "callbackOrReference": { - "oneOf": [ - { - "$ref": "#/definitions/callback" - }, - { - "$ref": "#/definitions/reference" - } - ] - }, - "exampleOrReference": { - "oneOf": [ - { - "$ref": "#/definitions/example" - }, - { - "$ref": "#/definitions/reference" - } - ] - }, - "headerOrReference": { - "oneOf": [ - { - "$ref": "#/definitions/header" - }, - { - "$ref": "#/definitions/reference" - } - ] - }, - "linkOrReference": { - "oneOf": [ - { - "$ref": "#/definitions/link" - }, - { - "$ref": "#/definitions/reference" - } - ] - }, - "parameterOrReference": { - "oneOf": [ - { - "$ref": 
"#/definitions/parameter" - }, - { - "$ref": "#/definitions/reference" - } - ] - }, - "requestBodyOrReference": { - "oneOf": [ - { - "$ref": "#/definitions/requestBody" - }, - { - "$ref": "#/definitions/reference" - } - ] - }, - "responseOrReference": { - "oneOf": [ - { - "$ref": "#/definitions/response" - }, - { - "$ref": "#/definitions/reference" - } - ] - }, - "schemaOrReference": { - "oneOf": [ - { - "$ref": "#/definitions/schema" - }, - { - "$ref": "#/definitions/reference" - } - ] - }, - "securitySchemeOrReference": { - "oneOf": [ - { - "$ref": "#/definitions/securityScheme" - }, - { - "$ref": "#/definitions/reference" - } - ] - }, - "anysOrExpressions": { - "type": "object", - "additionalProperties": { - "$ref": "#/definitions/anyOrExpression" - } - }, - "callbacksOrReferences": { - "type": "object", - "additionalProperties": { - "$ref": "#/definitions/callbackOrReference" - } - }, - "encodings": { - "type": "object", - "additionalProperties": { - "$ref": "#/definitions/encoding" - } - }, - "examplesOrReferences": { - "type": "object", - "additionalProperties": { - "$ref": "#/definitions/exampleOrReference" - } - }, - "headersOrReferences": { - "type": "object", - "additionalProperties": { - "$ref": "#/definitions/headerOrReference" - } - }, - "linksOrReferences": { - "type": "object", - "additionalProperties": { - "$ref": "#/definitions/linkOrReference" - } - }, - "mediaTypes": { - "type": "object", - "additionalProperties": { - "$ref": "#/definitions/mediaType" - } - }, - "parametersOrReferences": { - "type": "object", - "additionalProperties": { - "$ref": "#/definitions/parameterOrReference" - } - }, - "requestBodiesOrReferences": { - "type": "object", - "additionalProperties": { - "$ref": "#/definitions/requestBodyOrReference" - } - }, - "responsesOrReferences": { - "type": "object", - "additionalProperties": { - "$ref": "#/definitions/responseOrReference" - } - }, - "schemasOrReferences": { - "type": "object", - "additionalProperties": { - "$ref": 
"#/definitions/schemaOrReference" - } - }, - "securitySchemesOrReferences": { - "type": "object", - "additionalProperties": { - "$ref": "#/definitions/securitySchemeOrReference" - } - }, - "serverVariables": { - "type": "object", - "additionalProperties": { - "$ref": "#/definitions/serverVariable" - } - }, - "strings": { - "type": "object", - "additionalProperties": { - "type": "string" - } - }, - "object": { - "type": "object", - "additionalProperties": true - }, - "any": { - "additionalProperties": true - }, - "expression": { - "type": "object", - "additionalProperties": true - }, - "specificationExtension": { - "description": "Any property starting with x- is valid.", - "oneOf": [ - { - "type": "null" - }, - { - "type": "number" - }, - { - "type": "boolean" - }, - { - "type": "string" - }, - { - "type": "object" - }, - { - "type": "array" - } - ] - }, - "defaultType": { - "oneOf": [ - { - "type": "null" - }, - { - "type": "array" - }, - { - "type": "object" - }, - { - "type": "number" - }, - { - "type": "boolean" - }, - { - "type": "string" - } - ] - } - } -} diff --git a/cluster-autoscaler/vendor/github.com/google/gnostic/openapiv3/openapi-3.1.json b/cluster-autoscaler/vendor/github.com/google/gnostic/openapiv3/openapi-3.1.json deleted file mode 100644 index ed0b83adf4dc..000000000000 --- a/cluster-autoscaler/vendor/github.com/google/gnostic/openapiv3/openapi-3.1.json +++ /dev/null @@ -1,1250 +0,0 @@ -{ - "title": "A JSON Schema for OpenAPI 3.0.", - "id": "http://openapis.org/v3/schema.json#", - "$schema": "http://json-schema.org/draft-04/schema#", - "type": "object", - "description": "This is the root document object of the OpenAPI document.", - "required": [ - "openapi", - "info", - "paths" - ], - "additionalProperties": false, - "patternProperties": { - "^x-": { - "$ref": "#/definitions/specificationExtension" - } - }, - "properties": { - "openapi": { - "type": "string" - }, - "info": { - "$ref": "#/definitions/info" - }, - "servers": { - "type": "array", - 
"items": { - "$ref": "#/definitions/server" - }, - "uniqueItems": true - }, - "paths": { - "$ref": "#/definitions/paths" - }, - "components": { - "$ref": "#/definitions/components" - }, - "security": { - "type": "array", - "items": { - "$ref": "#/definitions/securityRequirement" - }, - "uniqueItems": true - }, - "tags": { - "type": "array", - "items": { - "$ref": "#/definitions/tag" - }, - "uniqueItems": true - }, - "externalDocs": { - "$ref": "#/definitions/externalDocs" - } - }, - "definitions": { - "info": { - "type": "object", - "description": "The object provides metadata about the API. The metadata MAY be used by the clients if needed, and MAY be presented in editing or documentation generation tools for convenience.", - "required": [ - "title", - "version" - ], - "additionalProperties": false, - "patternProperties": { - "^x-": { - "$ref": "#/definitions/specificationExtension" - } - }, - "properties": { - "title": { - "type": "string" - }, - "description": { - "type": "string" - }, - "termsOfService": { - "type": "string" - }, - "contact": { - "$ref": "#/definitions/contact" - }, - "license": { - "$ref": "#/definitions/license" - }, - "version": { - "type": "string" - }, - "summary": { - "type": "string" - } - } - }, - "contact": { - "type": "object", - "description": "Contact information for the exposed API.", - "additionalProperties": false, - "patternProperties": { - "^x-": { - "$ref": "#/definitions/specificationExtension" - } - }, - "properties": { - "name": { - "type": "string" - }, - "url": { - "type": "string", - "format": "uri" - }, - "email": { - "type": "string", - "format": "email" - } - } - }, - "license": { - "type": "object", - "description": "License information for the exposed API.", - "required": [ - "name" - ], - "additionalProperties": false, - "patternProperties": { - "^x-": { - "$ref": "#/definitions/specificationExtension" - } - }, - "properties": { - "name": { - "type": "string" - }, - "url": { - "type": "string" - } - } - }, - 
"server": { - "type": "object", - "description": "An object representing a Server.", - "required": [ - "url" - ], - "additionalProperties": false, - "patternProperties": { - "^x-": { - "$ref": "#/definitions/specificationExtension" - } - }, - "properties": { - "url": { - "type": "string" - }, - "description": { - "type": "string" - }, - "variables": { - "$ref": "#/definitions/serverVariables" - } - } - }, - "serverVariable": { - "type": "object", - "description": "An object representing a Server Variable for server URL template substitution.", - "required": [ - "default" - ], - "additionalProperties": false, - "patternProperties": { - "^x-": { - "$ref": "#/definitions/specificationExtension" - } - }, - "properties": { - "enum": { - "type": "array", - "items": { - "type": "string" - }, - "uniqueItems": true - }, - "default": { - "type": "string" - }, - "description": { - "type": "string" - } - } - }, - "components": { - "type": "object", - "description": "Holds a set of reusable objects for different aspects of the OAS. 
All objects defined within the components object will have no effect on the API unless they are explicitly referenced from properties outside the components object.", - "additionalProperties": false, - "patternProperties": { - "^x-": { - "$ref": "#/definitions/specificationExtension" - } - }, - "properties": { - "schemas": { - "$ref": "#/definitions/schemasOrReferences" - }, - "responses": { - "$ref": "#/definitions/responsesOrReferences" - }, - "parameters": { - "$ref": "#/definitions/parametersOrReferences" - }, - "examples": { - "$ref": "#/definitions/examplesOrReferences" - }, - "requestBodies": { - "$ref": "#/definitions/requestBodiesOrReferences" - }, - "headers": { - "$ref": "#/definitions/headersOrReferences" - }, - "securitySchemes": { - "$ref": "#/definitions/securitySchemesOrReferences" - }, - "links": { - "$ref": "#/definitions/linksOrReferences" - }, - "callbacks": { - "$ref": "#/definitions/callbacksOrReferences" - } - } - }, - "paths": { - "type": "object", - "description": "Holds the relative paths to the individual endpoints and their operations. The path is appended to the URL from the `Server Object` in order to construct the full URL. The Paths MAY be empty, due to ACL constraints.", - "additionalProperties": false, - "patternProperties": { - "^/": { - "$ref": "#/definitions/pathItem" - }, - "^x-": { - "$ref": "#/definitions/specificationExtension" - } - } - }, - "pathItem": { - "type": "object", - "description": "Describes the operations available on a single path. A Path Item MAY be empty, due to ACL constraints. 
The path itself is still exposed to the documentation viewer but they will not know which operations and parameters are available.", - "additionalProperties": false, - "patternProperties": { - "^x-": { - "$ref": "#/definitions/specificationExtension" - } - }, - "properties": { - "$ref": { - "type": "string" - }, - "summary": { - "type": "string" - }, - "description": { - "type": "string" - }, - "get": { - "$ref": "#/definitions/operation" - }, - "put": { - "$ref": "#/definitions/operation" - }, - "post": { - "$ref": "#/definitions/operation" - }, - "delete": { - "$ref": "#/definitions/operation" - }, - "options": { - "$ref": "#/definitions/operation" - }, - "head": { - "$ref": "#/definitions/operation" - }, - "patch": { - "$ref": "#/definitions/operation" - }, - "trace": { - "$ref": "#/definitions/operation" - }, - "servers": { - "type": "array", - "items": { - "$ref": "#/definitions/server" - }, - "uniqueItems": true - }, - "parameters": { - "type": "array", - "items": { - "$ref": "#/definitions/parameterOrReference" - }, - "uniqueItems": true - } - } - }, - "operation": { - "type": "object", - "description": "Describes a single API operation on a path.", - "required": [ - "responses" - ], - "additionalProperties": false, - "patternProperties": { - "^x-": { - "$ref": "#/definitions/specificationExtension" - } - }, - "properties": { - "tags": { - "type": "array", - "items": { - "type": "string" - }, - "uniqueItems": true - }, - "summary": { - "type": "string" - }, - "description": { - "type": "string" - }, - "externalDocs": { - "$ref": "#/definitions/externalDocs" - }, - "operationId": { - "type": "string" - }, - "parameters": { - "type": "array", - "items": { - "$ref": "#/definitions/parameterOrReference" - }, - "uniqueItems": true - }, - "requestBody": { - "$ref": "#/definitions/requestBodyOrReference" - }, - "responses": { - "$ref": "#/definitions/responses" - }, - "callbacks": { - "$ref": "#/definitions/callbacksOrReferences" - }, - "deprecated": { - "type": 
"boolean" - }, - "security": { - "type": "array", - "items": { - "$ref": "#/definitions/securityRequirement" - }, - "uniqueItems": true - }, - "servers": { - "type": "array", - "items": { - "$ref": "#/definitions/server" - }, - "uniqueItems": true - } - } - }, - "externalDocs": { - "type": "object", - "description": "Allows referencing an external resource for extended documentation.", - "required": [ - "url" - ], - "additionalProperties": false, - "patternProperties": { - "^x-": { - "$ref": "#/definitions/specificationExtension" - } - }, - "properties": { - "description": { - "type": "string" - }, - "url": { - "type": "string" - } - } - }, - "parameter": { - "type": "object", - "description": "Describes a single operation parameter. A unique parameter is defined by a combination of a name and location.", - "required": [ - "name", - "in" - ], - "additionalProperties": false, - "patternProperties": { - "^x-": { - "$ref": "#/definitions/specificationExtension" - } - }, - "properties": { - "name": { - "type": "string" - }, - "in": { - "type": "string" - }, - "description": { - "type": "string" - }, - "required": { - "type": "boolean" - }, - "deprecated": { - "type": "boolean" - }, - "allowEmptyValue": { - "type": "boolean" - }, - "style": { - "type": "string" - }, - "explode": { - "type": "boolean" - }, - "allowReserved": { - "type": "boolean" - }, - "schema": { - "$ref": "#/definitions/schemaOrReference" - }, - "example": { - "$ref": "#/definitions/any" - }, - "examples": { - "$ref": "#/definitions/examplesOrReferences" - }, - "content": { - "$ref": "#/definitions/mediaTypes" - } - } - }, - "requestBody": { - "type": "object", - "description": "Describes a single request body.", - "required": [ - "content" - ], - "additionalProperties": false, - "patternProperties": { - "^x-": { - "$ref": "#/definitions/specificationExtension" - } - }, - "properties": { - "description": { - "type": "string" - }, - "content": { - "$ref": "#/definitions/mediaTypes" - }, - "required": { 
- "type": "boolean" - } - } - }, - "mediaType": { - "type": "object", - "description": "Each Media Type Object provides schema and examples for the media type identified by its key.", - "additionalProperties": false, - "patternProperties": { - "^x-": { - "$ref": "#/definitions/specificationExtension" - } - }, - "properties": { - "schema": { - "$ref": "#/definitions/schemaOrReference" - }, - "example": { - "$ref": "#/definitions/any" - }, - "examples": { - "$ref": "#/definitions/examplesOrReferences" - }, - "encoding": { - "$ref": "#/definitions/encodings" - } - } - }, - "encoding": { - "type": "object", - "description": "A single encoding definition applied to a single schema property.", - "additionalProperties": false, - "patternProperties": { - "^x-": { - "$ref": "#/definitions/specificationExtension" - } - }, - "properties": { - "contentType": { - "type": "string" - }, - "headers": { - "$ref": "#/definitions/headersOrReferences" - }, - "style": { - "type": "string" - }, - "explode": { - "type": "boolean" - }, - "allowReserved": { - "type": "boolean" - } - } - }, - "responses": { - "type": "object", - "description": "A container for the expected responses of an operation. The container maps a HTTP response code to the expected response. The documentation is not necessarily expected to cover all possible HTTP response codes because they may not be known in advance. However, documentation is expected to cover a successful operation response and any known errors. The `default` MAY be used as a default response object for all HTTP codes that are not covered individually by the specification. 
The `Responses Object` MUST contain at least one response code, and it SHOULD be the response for a successful operation call.", - "additionalProperties": false, - "patternProperties": { - "^([0-9X]{3})$": { - "$ref": "#/definitions/responseOrReference" - }, - "^x-": { - "$ref": "#/definitions/specificationExtension" - } - }, - "properties": { - "default": { - "$ref": "#/definitions/responseOrReference" - } - } - }, - "response": { - "type": "object", - "description": "Describes a single response from an API Operation, including design-time, static `links` to operations based on the response.", - "required": [ - "description" - ], - "additionalProperties": false, - "patternProperties": { - "^x-": { - "$ref": "#/definitions/specificationExtension" - } - }, - "properties": { - "description": { - "type": "string" - }, - "headers": { - "$ref": "#/definitions/headersOrReferences" - }, - "content": { - "$ref": "#/definitions/mediaTypes" - }, - "links": { - "$ref": "#/definitions/linksOrReferences" - } - } - }, - "callback": { - "type": "object", - "description": "A map of possible out-of band callbacks related to the parent operation. Each value in the map is a Path Item Object that describes a set of requests that may be initiated by the API provider and the expected responses. 
The key value used to identify the callback object is an expression, evaluated at runtime, that identifies a URL to use for the callback operation.", - "additionalProperties": false, - "patternProperties": { - "^": { - "$ref": "#/definitions/pathItem" - }, - "^x-": { - "$ref": "#/definitions/specificationExtension" - } - } - }, - "example": { - "type": "object", - "description": "", - "additionalProperties": false, - "patternProperties": { - "^x-": { - "$ref": "#/definitions/specificationExtension" - } - }, - "properties": { - "summary": { - "type": "string" - }, - "description": { - "type": "string" - }, - "value": { - "$ref": "#/definitions/any" - }, - "externalValue": { - "type": "string" - } - } - }, - "link": { - "type": "object", - "description": "The `Link object` represents a possible design-time link for a response. The presence of a link does not guarantee the caller's ability to successfully invoke it, rather it provides a known relationship and traversal mechanism between responses and other operations. Unlike _dynamic_ links (i.e. links provided **in** the response payload), the OAS linking mechanism does not require link information in the runtime response. 
For computing links, and providing instructions to execute them, a runtime expression is used for accessing values in an operation and using them as parameters while invoking the linked operation.", - "additionalProperties": false, - "patternProperties": { - "^x-": { - "$ref": "#/definitions/specificationExtension" - } - }, - "properties": { - "operationRef": { - "type": "string" - }, - "operationId": { - "type": "string" - }, - "parameters": { - "$ref": "#/definitions/anyOrExpression" - }, - "requestBody": { - "$ref": "#/definitions/anyOrExpression" - }, - "description": { - "type": "string" - }, - "server": { - "$ref": "#/definitions/server" - } - } - }, - "header": { - "type": "object", - "description": "The Header Object follows the structure of the Parameter Object with the following changes: 1. `name` MUST NOT be specified, it is given in the corresponding `headers` map. 1. `in` MUST NOT be specified, it is implicitly in `header`. 1. All traits that are affected by the location MUST be applicable to a location of `header` (for example, `style`).", - "additionalProperties": false, - "patternProperties": { - "^x-": { - "$ref": "#/definitions/specificationExtension" - } - }, - "properties": { - "description": { - "type": "string" - }, - "required": { - "type": "boolean" - }, - "deprecated": { - "type": "boolean" - }, - "allowEmptyValue": { - "type": "boolean" - }, - "style": { - "type": "string" - }, - "explode": { - "type": "boolean" - }, - "allowReserved": { - "type": "boolean" - }, - "schema": { - "$ref": "#/definitions/schemaOrReference" - }, - "example": { - "$ref": "#/definitions/any" - }, - "examples": { - "$ref": "#/definitions/examplesOrReferences" - }, - "content": { - "$ref": "#/definitions/mediaTypes" - } - } - }, - "tag": { - "type": "object", - "description": "Adds metadata to a single tag that is used by the Operation Object. 
It is not mandatory to have a Tag Object per tag defined in the Operation Object instances.", - "required": [ - "name" - ], - "additionalProperties": false, - "patternProperties": { - "^x-": { - "$ref": "#/definitions/specificationExtension" - } - }, - "properties": { - "name": { - "type": "string" - }, - "description": { - "type": "string" - }, - "externalDocs": { - "$ref": "#/definitions/externalDocs" - } - } - }, - "reference": { - "type": "object", - "description": "A simple object to allow referencing other components in the specification, internally and externally. The Reference Object is defined by JSON Reference and follows the same structure, behavior and rules. For this specification, reference resolution is accomplished as defined by the JSON Reference specification and not by the JSON Schema specification.", - "required": [ - "$ref" - ], - "additionalProperties": false, - "properties": { - "$ref": { - "type": "string" - }, - "summary": { - "type": "string" - }, - "description": { - "type": "string" - } - } - }, - "schema": { - "type": "object", - "description": "The Schema Object allows the definition of input and output data types. These types can be objects, but also primitives and arrays. This object is an extended subset of the JSON Schema Specification Wright Draft 00. For more information about the properties, see JSON Schema Core and JSON Schema Validation. 
Unless stated otherwise, the property definitions follow the JSON Schema.", - "additionalProperties": false, - "patternProperties": { - "^x-": { - "$ref": "#/definitions/specificationExtension" - } - }, - "properties": { - "nullable": { - "type": "boolean" - }, - "discriminator": { - "$ref": "#/definitions/discriminator" - }, - "readOnly": { - "type": "boolean" - }, - "writeOnly": { - "type": "boolean" - }, - "xml": { - "$ref": "#/definitions/xml" - }, - "externalDocs": { - "$ref": "#/definitions/externalDocs" - }, - "example": { - "$ref": "#/definitions/any" - }, - "deprecated": { - "type": "boolean" - }, - "title": { - "$ref": "http://json-schema.org/draft-04/schema#/properties/title" - }, - "multipleOf": { - "$ref": "http://json-schema.org/draft-04/schema#/properties/multipleOf" - }, - "maximum": { - "$ref": "http://json-schema.org/draft-04/schema#/properties/maximum" - }, - "exclusiveMaximum": { - "$ref": "http://json-schema.org/draft-04/schema#/properties/exclusiveMaximum" - }, - "minimum": { - "$ref": "http://json-schema.org/draft-04/schema#/properties/minimum" - }, - "exclusiveMinimum": { - "$ref": "http://json-schema.org/draft-04/schema#/properties/exclusiveMinimum" - }, - "maxLength": { - "$ref": "http://json-schema.org/draft-04/schema#/properties/maxLength" - }, - "minLength": { - "$ref": "http://json-schema.org/draft-04/schema#/properties/minLength" - }, - "pattern": { - "$ref": "http://json-schema.org/draft-04/schema#/properties/pattern" - }, - "maxItems": { - "$ref": "http://json-schema.org/draft-04/schema#/properties/maxItems" - }, - "minItems": { - "$ref": "http://json-schema.org/draft-04/schema#/properties/minItems" - }, - "uniqueItems": { - "$ref": "http://json-schema.org/draft-04/schema#/properties/uniqueItems" - }, - "maxProperties": { - "$ref": "http://json-schema.org/draft-04/schema#/properties/maxProperties" - }, - "minProperties": { - "$ref": "http://json-schema.org/draft-04/schema#/properties/minProperties" - }, - "required": { - "$ref": 
"http://json-schema.org/draft-04/schema#/properties/required" - }, - "enum": { - "$ref": "http://json-schema.org/draft-04/schema#/properties/enum" - }, - "type": { - "type": "string" - }, - "allOf": { - "type": "array", - "items": { - "$ref": "#/definitions/schemaOrReference" - }, - "minItems": 1 - }, - "oneOf": { - "type": "array", - "items": { - "$ref": "#/definitions/schemaOrReference" - }, - "minItems": 1 - }, - "anyOf": { - "type": "array", - "items": { - "$ref": "#/definitions/schemaOrReference" - }, - "minItems": 1 - }, - "not": { - "$ref": "#/definitions/schema" - }, - "items": { - "anyOf": [ - { - "$ref": "#/definitions/schemaOrReference" - }, - { - "type": "array", - "items": { - "$ref": "#/definitions/schemaOrReference" - }, - "minItems": 1 - } - ] - }, - "properties": { - "type": "object", - "additionalProperties": { - "$ref": "#/definitions/schemaOrReference" - } - }, - "additionalProperties": { - "oneOf": [ - { - "$ref": "#/definitions/schemaOrReference" - }, - { - "type": "boolean" - } - ] - }, - "default": { - "$ref": "#/definitions/defaultType" - }, - "description": { - "type": "string" - }, - "format": { - "type": "string" - } - } - }, - "discriminator": { - "type": "object", - "description": "When request bodies or response payloads may be one of a number of different schemas, a `discriminator` object can be used to aid in serialization, deserialization, and validation. The discriminator is a specific object in a schema which is used to inform the consumer of the specification of an alternative schema based on the value associated with it. 
When using the discriminator, _inline_ schemas will not be considered.", - "required": [ - "propertyName" - ], - "additionalProperties": false, - "patternProperties": { - "^x-": { - "$ref": "#/definitions/specificationExtension" - } - }, - "properties": { - "propertyName": { - "type": "string" - }, - "mapping": { - "$ref": "#/definitions/strings" - } - } - }, - "xml": { - "type": "object", - "description": "A metadata object that allows for more fine-tuned XML model definitions. When using arrays, XML element names are *not* inferred (for singular/plural forms) and the `name` property SHOULD be used to add that information. See examples for expected behavior.", - "additionalProperties": false, - "patternProperties": { - "^x-": { - "$ref": "#/definitions/specificationExtension" - } - }, - "properties": { - "name": { - "type": "string" - }, - "namespace": { - "type": "string" - }, - "prefix": { - "type": "string" - }, - "attribute": { - "type": "boolean" - }, - "wrapped": { - "type": "boolean" - } - } - }, - "securityScheme": { - "type": "object", - "description": "Defines a security scheme that can be used by the operations. Supported schemes are HTTP authentication, an API key (either as a header, a cookie parameter or as a query parameter), mutual TLS (use of a client certificate), OAuth2's common flows (implicit, password, application and access code) as defined in RFC6749, and OpenID Connect. Please note that currently (2019) the implicit flow is about to be deprecated OAuth 2.0 Security Best Current Practice. 
Recommended for most use case is Authorization Code Grant flow with PKCE.", - "required": [ - "type" - ], - "additionalProperties": false, - "patternProperties": { - "^x-": { - "$ref": "#/definitions/specificationExtension" - } - }, - "properties": { - "type": { - "type": "string" - }, - "description": { - "type": "string" - }, - "name": { - "type": "string" - }, - "in": { - "type": "string" - }, - "scheme": { - "type": "string" - }, - "bearerFormat": { - "type": "string" - }, - "flows": { - "$ref": "#/definitions/oauthFlows" - }, - "openIdConnectUrl": { - "type": "string" - } - } - }, - "oauthFlows": { - "type": "object", - "description": "Allows configuration of the supported OAuth Flows.", - "additionalProperties": false, - "patternProperties": { - "^x-": { - "$ref": "#/definitions/specificationExtension" - } - }, - "properties": { - "implicit": { - "$ref": "#/definitions/oauthFlow" - }, - "password": { - "$ref": "#/definitions/oauthFlow" - }, - "clientCredentials": { - "$ref": "#/definitions/oauthFlow" - }, - "authorizationCode": { - "$ref": "#/definitions/oauthFlow" - } - } - }, - "oauthFlow": { - "type": "object", - "description": "Configuration details for a supported OAuth Flow", - "additionalProperties": false, - "patternProperties": { - "^x-": { - "$ref": "#/definitions/specificationExtension" - } - }, - "properties": { - "authorizationUrl": { - "type": "string" - }, - "tokenUrl": { - "type": "string" - }, - "refreshUrl": { - "type": "string" - }, - "scopes": { - "$ref": "#/definitions/strings" - } - } - }, - "securityRequirement": { - "type": "object", - "description": "Lists the required security schemes to execute this operation. The name used for each property MUST correspond to a security scheme declared in the Security Schemes under the Components Object. Security Requirement Objects that contain multiple schemes require that all schemes MUST be satisfied for a request to be authorized. 
This enables support for scenarios where multiple query parameters or HTTP headers are required to convey security information. When a list of Security Requirement Objects is defined on the OpenAPI Object or Operation Object, only one of the Security Requirement Objects in the list needs to be satisfied to authorize the request.", - "additionalProperties": { - "type": "array", - "items": { - "type": "string" - }, - "uniqueItems": true - } - }, - "anyOrExpression": { - "oneOf": [ - { - "$ref": "#/definitions/any" - }, - { - "$ref": "#/definitions/expression" - } - ] - }, - "callbackOrReference": { - "oneOf": [ - { - "$ref": "#/definitions/callback" - }, - { - "$ref": "#/definitions/reference" - } - ] - }, - "exampleOrReference": { - "oneOf": [ - { - "$ref": "#/definitions/example" - }, - { - "$ref": "#/definitions/reference" - } - ] - }, - "headerOrReference": { - "oneOf": [ - { - "$ref": "#/definitions/header" - }, - { - "$ref": "#/definitions/reference" - } - ] - }, - "linkOrReference": { - "oneOf": [ - { - "$ref": "#/definitions/link" - }, - { - "$ref": "#/definitions/reference" - } - ] - }, - "parameterOrReference": { - "oneOf": [ - { - "$ref": "#/definitions/parameter" - }, - { - "$ref": "#/definitions/reference" - } - ] - }, - "requestBodyOrReference": { - "oneOf": [ - { - "$ref": "#/definitions/requestBody" - }, - { - "$ref": "#/definitions/reference" - } - ] - }, - "responseOrReference": { - "oneOf": [ - { - "$ref": "#/definitions/response" - }, - { - "$ref": "#/definitions/reference" - } - ] - }, - "schemaOrReference": { - "oneOf": [ - { - "$ref": "#/definitions/schema" - }, - { - "$ref": "#/definitions/reference" - } - ] - }, - "securitySchemeOrReference": { - "oneOf": [ - { - "$ref": "#/definitions/securityScheme" - }, - { - "$ref": "#/definitions/reference" - } - ] - }, - "callbacksOrReferences": { - "type": "object", - "additionalProperties": { - "$ref": "#/definitions/callbackOrReference" - } - }, - "encodings": { - "type": "object", - 
"additionalProperties": { - "$ref": "#/definitions/encoding" - } - }, - "examplesOrReferences": { - "type": "object", - "additionalProperties": { - "$ref": "#/definitions/exampleOrReference" - } - }, - "headersOrReferences": { - "type": "object", - "additionalProperties": { - "$ref": "#/definitions/headerOrReference" - } - }, - "linksOrReferences": { - "type": "object", - "additionalProperties": { - "$ref": "#/definitions/linkOrReference" - } - }, - "mediaTypes": { - "type": "object", - "additionalProperties": { - "$ref": "#/definitions/mediaType" - } - }, - "parametersOrReferences": { - "type": "object", - "additionalProperties": { - "$ref": "#/definitions/parameterOrReference" - } - }, - "requestBodiesOrReferences": { - "type": "object", - "additionalProperties": { - "$ref": "#/definitions/requestBodyOrReference" - } - }, - "responsesOrReferences": { - "type": "object", - "additionalProperties": { - "$ref": "#/definitions/responseOrReference" - } - }, - "schemasOrReferences": { - "type": "object", - "additionalProperties": { - "$ref": "#/definitions/schemaOrReference" - } - }, - "securitySchemesOrReferences": { - "type": "object", - "additionalProperties": { - "$ref": "#/definitions/securitySchemeOrReference" - } - }, - "serverVariables": { - "type": "object", - "additionalProperties": { - "$ref": "#/definitions/serverVariable" - } - }, - "strings": { - "type": "object", - "additionalProperties": { - "type": "string" - } - }, - "object": { - "type": "object", - "additionalProperties": true - }, - "any": { - "additionalProperties": true - }, - "expression": { - "type": "object", - "additionalProperties": true - }, - "specificationExtension": { - "description": "Any property starting with x- is valid.", - "oneOf": [ - { - "type": "null" - }, - { - "type": "number" - }, - { - "type": "boolean" - }, - { - "type": "string" - }, - { - "type": "object" - }, - { - "type": "array" - } - ] - }, - "defaultType": { - "oneOf": [ - { - "type": "null" - }, - { - "type": "array" 
- }, - { - "type": "object" - }, - { - "type": "number" - }, - { - "type": "boolean" - }, - { - "type": "string" - } - ] - } - } -} diff --git a/cluster-autoscaler/vendor/github.com/mitchellh/mapstructure/CHANGELOG.md b/cluster-autoscaler/vendor/github.com/mitchellh/mapstructure/CHANGELOG.md deleted file mode 100644 index c758234904ec..000000000000 --- a/cluster-autoscaler/vendor/github.com/mitchellh/mapstructure/CHANGELOG.md +++ /dev/null @@ -1,96 +0,0 @@ -## 1.5.0 - -* New option `IgnoreUntaggedFields` to ignore decoding to any fields - without `mapstructure` (or the configured tag name) set [GH-277] -* New option `ErrorUnset` which makes it an error if any fields - in a target struct are not set by the decoding process. [GH-225] -* New function `OrComposeDecodeHookFunc` to help compose decode hooks. [GH-240] -* Decoding to slice from array no longer crashes [GH-265] -* Decode nested struct pointers to map [GH-271] -* Fix issue where `,squash` was ignored if `Squash` option was set. [GH-280] -* Fix issue where fields with `,omitempty` would sometimes decode - into a map with an empty string key [GH-281] - -## 1.4.3 - -* Fix cases where `json.Number` didn't decode properly [GH-261] - -## 1.4.2 - -* Custom name matchers to support any sort of casing, formatting, etc. for - field names. [GH-250] -* Fix possible panic in ComposeDecodeHookFunc [GH-251] - -## 1.4.1 - -* Fix regression where `*time.Time` value would be set to empty and not be sent - to decode hooks properly [GH-232] - -## 1.4.0 - -* A new decode hook type `DecodeHookFuncValue` has been added that has - access to the full values. 
[GH-183] -* Squash is now supported with embedded fields that are struct pointers [GH-205] -* Empty strings will convert to 0 for all numeric types when weakly decoding [GH-206] - -## 1.3.3 - -* Decoding maps from maps creates a settable value for decode hooks [GH-203] - -## 1.3.2 - -* Decode into interface type with a struct value is supported [GH-187] - -## 1.3.1 - -* Squash should only squash embedded structs. [GH-194] - -## 1.3.0 - -* Added `",omitempty"` support. This will ignore zero values in the source - structure when encoding. [GH-145] - -## 1.2.3 - -* Fix duplicate entries in Keys list with pointer values. [GH-185] - -## 1.2.2 - -* Do not add unsettable (unexported) values to the unused metadata key - or "remain" value. [GH-150] - -## 1.2.1 - -* Go modules checksum mismatch fix - -## 1.2.0 - -* Added support to capture unused values in a field using the `",remain"` value - in the mapstructure tag. There is an example to showcase usage. -* Added `DecoderConfig` option to always squash embedded structs -* `json.Number` can decode into `uint` types -* Empty slices are preserved and not replaced with nil slices -* Fix panic that can occur in when decoding a map into a nil slice of structs -* Improved package documentation for godoc - -## 1.1.2 - -* Fix error when decode hook decodes interface implementation into interface - type. [GH-140] - -## 1.1.1 - -* Fix panic that can happen in `decodePtr` - -## 1.1.0 - -* Added `StringToIPHookFunc` to convert `string` to `net.IP` and `net.IPNet` [GH-133] -* Support struct to struct decoding [GH-137] -* If source map value is nil, then destination map value is nil (instead of empty) -* If source slice value is nil, then destination slice value is nil (instead of empty) -* If source pointer is nil, then destination pointer is set to nil (instead of - allocated zero value of type) - -## 1.0.0 - -* Initial tagged stable release. 
diff --git a/cluster-autoscaler/vendor/github.com/mitchellh/mapstructure/README.md b/cluster-autoscaler/vendor/github.com/mitchellh/mapstructure/README.md deleted file mode 100644 index 0018dc7d9f94..000000000000 --- a/cluster-autoscaler/vendor/github.com/mitchellh/mapstructure/README.md +++ /dev/null @@ -1,46 +0,0 @@ -# mapstructure [![Godoc](https://godoc.org/github.com/mitchellh/mapstructure?status.svg)](https://godoc.org/github.com/mitchellh/mapstructure) - -mapstructure is a Go library for decoding generic map values to structures -and vice versa, while providing helpful error handling. - -This library is most useful when decoding values from some data stream (JSON, -Gob, etc.) where you don't _quite_ know the structure of the underlying data -until you read a part of it. You can therefore read a `map[string]interface{}` -and use this library to decode it into the proper underlying native Go -structure. - -## Installation - -Standard `go get`: - -``` -$ go get github.com/mitchellh/mapstructure -``` - -## Usage & Example - -For usage and examples see the [Godoc](http://godoc.org/github.com/mitchellh/mapstructure). - -The `Decode` function has examples associated with it there. - -## But Why?! - -Go offers fantastic standard libraries for decoding formats such as JSON. -The standard method is to have a struct pre-created, and populate that struct -from the bytes of the encoded format. This is great, but the problem is if -you have configuration or an encoding that changes slightly depending on -specific fields. For example, consider this JSON: - -```json -{ - "type": "person", - "name": "Mitchell" -} -``` - -Perhaps we can't populate a specific structure without first reading -the "type" field from the JSON. We could always do two passes over the -decoding of the JSON (reading the "type" first, and the rest later). 
-However, it is much simpler to just decode this into a `map[string]interface{}` -structure, read the "type" key, then use something like this library -to decode it into the proper structure. diff --git a/cluster-autoscaler/vendor/github.com/mitchellh/mapstructure/decode_hooks.go b/cluster-autoscaler/vendor/github.com/mitchellh/mapstructure/decode_hooks.go deleted file mode 100644 index 3a754ca72484..000000000000 --- a/cluster-autoscaler/vendor/github.com/mitchellh/mapstructure/decode_hooks.go +++ /dev/null @@ -1,279 +0,0 @@ -package mapstructure - -import ( - "encoding" - "errors" - "fmt" - "net" - "reflect" - "strconv" - "strings" - "time" -) - -// typedDecodeHook takes a raw DecodeHookFunc (an interface{}) and turns -// it into the proper DecodeHookFunc type, such as DecodeHookFuncType. -func typedDecodeHook(h DecodeHookFunc) DecodeHookFunc { - // Create variables here so we can reference them with the reflect pkg - var f1 DecodeHookFuncType - var f2 DecodeHookFuncKind - var f3 DecodeHookFuncValue - - // Fill in the variables into this interface and the rest is done - // automatically using the reflect package. - potential := []interface{}{f1, f2, f3} - - v := reflect.ValueOf(h) - vt := v.Type() - for _, raw := range potential { - pt := reflect.ValueOf(raw).Type() - if vt.ConvertibleTo(pt) { - return v.Convert(pt).Interface() - } - } - - return nil -} - -// DecodeHookExec executes the given decode hook. This should be used -// since it'll naturally degrade to the older backwards compatible DecodeHookFunc -// that took reflect.Kind instead of reflect.Type. 
-func DecodeHookExec( - raw DecodeHookFunc, - from reflect.Value, to reflect.Value) (interface{}, error) { - - switch f := typedDecodeHook(raw).(type) { - case DecodeHookFuncType: - return f(from.Type(), to.Type(), from.Interface()) - case DecodeHookFuncKind: - return f(from.Kind(), to.Kind(), from.Interface()) - case DecodeHookFuncValue: - return f(from, to) - default: - return nil, errors.New("invalid decode hook signature") - } -} - -// ComposeDecodeHookFunc creates a single DecodeHookFunc that -// automatically composes multiple DecodeHookFuncs. -// -// The composed funcs are called in order, with the result of the -// previous transformation. -func ComposeDecodeHookFunc(fs ...DecodeHookFunc) DecodeHookFunc { - return func(f reflect.Value, t reflect.Value) (interface{}, error) { - var err error - data := f.Interface() - - newFrom := f - for _, f1 := range fs { - data, err = DecodeHookExec(f1, newFrom, t) - if err != nil { - return nil, err - } - newFrom = reflect.ValueOf(data) - } - - return data, nil - } -} - -// OrComposeDecodeHookFunc executes all input hook functions until one of them returns no error. In that case its value is returned. -// If all hooks return an error, OrComposeDecodeHookFunc returns an error concatenating all error messages. -func OrComposeDecodeHookFunc(ff ...DecodeHookFunc) DecodeHookFunc { - return func(a, b reflect.Value) (interface{}, error) { - var allErrs string - var out interface{} - var err error - - for _, f := range ff { - out, err = DecodeHookExec(f, a, b) - if err != nil { - allErrs += err.Error() + "\n" - continue - } - - return out, nil - } - - return nil, errors.New(allErrs) - } -} - -// StringToSliceHookFunc returns a DecodeHookFunc that converts -// string to []string by splitting on the given sep. 
-func StringToSliceHookFunc(sep string) DecodeHookFunc { - return func( - f reflect.Kind, - t reflect.Kind, - data interface{}) (interface{}, error) { - if f != reflect.String || t != reflect.Slice { - return data, nil - } - - raw := data.(string) - if raw == "" { - return []string{}, nil - } - - return strings.Split(raw, sep), nil - } -} - -// StringToTimeDurationHookFunc returns a DecodeHookFunc that converts -// strings to time.Duration. -func StringToTimeDurationHookFunc() DecodeHookFunc { - return func( - f reflect.Type, - t reflect.Type, - data interface{}) (interface{}, error) { - if f.Kind() != reflect.String { - return data, nil - } - if t != reflect.TypeOf(time.Duration(5)) { - return data, nil - } - - // Convert it by parsing - return time.ParseDuration(data.(string)) - } -} - -// StringToIPHookFunc returns a DecodeHookFunc that converts -// strings to net.IP -func StringToIPHookFunc() DecodeHookFunc { - return func( - f reflect.Type, - t reflect.Type, - data interface{}) (interface{}, error) { - if f.Kind() != reflect.String { - return data, nil - } - if t != reflect.TypeOf(net.IP{}) { - return data, nil - } - - // Convert it by parsing - ip := net.ParseIP(data.(string)) - if ip == nil { - return net.IP{}, fmt.Errorf("failed parsing ip %v", data) - } - - return ip, nil - } -} - -// StringToIPNetHookFunc returns a DecodeHookFunc that converts -// strings to net.IPNet -func StringToIPNetHookFunc() DecodeHookFunc { - return func( - f reflect.Type, - t reflect.Type, - data interface{}) (interface{}, error) { - if f.Kind() != reflect.String { - return data, nil - } - if t != reflect.TypeOf(net.IPNet{}) { - return data, nil - } - - // Convert it by parsing - _, net, err := net.ParseCIDR(data.(string)) - return net, err - } -} - -// StringToTimeHookFunc returns a DecodeHookFunc that converts -// strings to time.Time. 
-func StringToTimeHookFunc(layout string) DecodeHookFunc { - return func( - f reflect.Type, - t reflect.Type, - data interface{}) (interface{}, error) { - if f.Kind() != reflect.String { - return data, nil - } - if t != reflect.TypeOf(time.Time{}) { - return data, nil - } - - // Convert it by parsing - return time.Parse(layout, data.(string)) - } -} - -// WeaklyTypedHook is a DecodeHookFunc which adds support for weak typing to -// the decoder. -// -// Note that this is significantly different from the WeaklyTypedInput option -// of the DecoderConfig. -func WeaklyTypedHook( - f reflect.Kind, - t reflect.Kind, - data interface{}) (interface{}, error) { - dataVal := reflect.ValueOf(data) - switch t { - case reflect.String: - switch f { - case reflect.Bool: - if dataVal.Bool() { - return "1", nil - } - return "0", nil - case reflect.Float32: - return strconv.FormatFloat(dataVal.Float(), 'f', -1, 64), nil - case reflect.Int: - return strconv.FormatInt(dataVal.Int(), 10), nil - case reflect.Slice: - dataType := dataVal.Type() - elemKind := dataType.Elem().Kind() - if elemKind == reflect.Uint8 { - return string(dataVal.Interface().([]uint8)), nil - } - case reflect.Uint: - return strconv.FormatUint(dataVal.Uint(), 10), nil - } - } - - return data, nil -} - -func RecursiveStructToMapHookFunc() DecodeHookFunc { - return func(f reflect.Value, t reflect.Value) (interface{}, error) { - if f.Kind() != reflect.Struct { - return f.Interface(), nil - } - - var i interface{} = struct{}{} - if t.Type() != reflect.TypeOf(&i).Elem() { - return f.Interface(), nil - } - - m := make(map[string]interface{}) - t.Set(reflect.ValueOf(m)) - - return f.Interface(), nil - } -} - -// TextUnmarshallerHookFunc returns a DecodeHookFunc that applies -// strings to the UnmarshalText function, when the target type -// implements the encoding.TextUnmarshaler interface -func TextUnmarshallerHookFunc() DecodeHookFuncType { - return func( - f reflect.Type, - t reflect.Type, - data interface{}) 
(interface{}, error) { - if f.Kind() != reflect.String { - return data, nil - } - result := reflect.New(t).Interface() - unmarshaller, ok := result.(encoding.TextUnmarshaler) - if !ok { - return data, nil - } - if err := unmarshaller.UnmarshalText([]byte(data.(string))); err != nil { - return nil, err - } - return result, nil - } -} diff --git a/cluster-autoscaler/vendor/github.com/mitchellh/mapstructure/error.go b/cluster-autoscaler/vendor/github.com/mitchellh/mapstructure/error.go deleted file mode 100644 index 47a99e5af3f1..000000000000 --- a/cluster-autoscaler/vendor/github.com/mitchellh/mapstructure/error.go +++ /dev/null @@ -1,50 +0,0 @@ -package mapstructure - -import ( - "errors" - "fmt" - "sort" - "strings" -) - -// Error implements the error interface and can represent multiple -// errors that occur in the course of a single decode. -type Error struct { - Errors []string -} - -func (e *Error) Error() string { - points := make([]string, len(e.Errors)) - for i, err := range e.Errors { - points[i] = fmt.Sprintf("* %s", err) - } - - sort.Strings(points) - return fmt.Sprintf( - "%d error(s) decoding:\n\n%s", - len(e.Errors), strings.Join(points, "\n")) -} - -// WrappedErrors implements the errwrap.Wrapper interface to make this -// return value more useful with the errwrap and go-multierror libraries. -func (e *Error) WrappedErrors() []error { - if e == nil { - return nil - } - - result := make([]error, len(e.Errors)) - for i, e := range e.Errors { - result[i] = errors.New(e) - } - - return result -} - -func appendErrors(errors []string, err error) []string { - switch e := err.(type) { - case *Error: - return append(errors, e.Errors...)
- default: - return append(errors, e.Error()) - } -} diff --git a/cluster-autoscaler/vendor/github.com/mitchellh/mapstructure/mapstructure.go b/cluster-autoscaler/vendor/github.com/mitchellh/mapstructure/mapstructure.go deleted file mode 100644 index 1efb22ac3610..000000000000 --- a/cluster-autoscaler/vendor/github.com/mitchellh/mapstructure/mapstructure.go +++ /dev/null @@ -1,1540 +0,0 @@ -// Package mapstructure exposes functionality to convert one arbitrary -// Go type into another, typically to convert a map[string]interface{} -// into a native Go structure. -// -// The Go structure can be arbitrarily complex, containing slices, -// other structs, etc. and the decoder will properly decode nested -// maps and so on into the proper structures in the native Go struct. -// See the examples to see what the decoder is capable of. -// -// The simplest function to start with is Decode. -// -// Field Tags -// -// When decoding to a struct, mapstructure will use the field name by -// default to perform the mapping. For example, if a struct has a field -// "Username" then mapstructure will look for a key in the source value -// of "username" (case insensitive). -// -// type User struct { -// Username string -// } -// -// You can change the behavior of mapstructure by using struct tags. -// The default struct tag that mapstructure looks for is "mapstructure" -// but you can customize it using DecoderConfig. -// -// Renaming Fields -// -// To rename the key that mapstructure looks for, use the "mapstructure" -// tag and set a value directly. For example, to change the "username" example -// above to "user": -// -// type User struct { -// Username string `mapstructure:"user"` -// } -// -// Embedded Structs and Squashing -// -// Embedded structs are treated as if they're another field with that name. 
-// By default, the two structs below are equivalent when decoding with -// mapstructure: -// -// type Person struct { -// Name string -// } -// -// type Friend struct { -// Person -// } -// -// type Friend struct { -// Person Person -// } -// -// This would require an input that looks like below: -// -// map[string]interface{}{ -// "person": map[string]interface{}{"name": "alice"}, -// } -// -// If your "person" value is NOT nested, then you can append ",squash" to -// your tag value and mapstructure will treat it as if the embedded struct -// were part of the struct directly. Example: -// -// type Friend struct { -// Person `mapstructure:",squash"` -// } -// -// Now the following input would be accepted: -// -// map[string]interface{}{ -// "name": "alice", -// } -// -// When decoding from a struct to a map, the squash tag squashes the struct -// fields into a single map. Using the example structs from above: -// -// Friend{Person: Person{Name: "alice"}} -// -// Will be decoded into a map: -// -// map[string]interface{}{ -// "name": "alice", -// } -// -// DecoderConfig has a field that changes the behavior of mapstructure -// to always squash embedded structs. -// -// Remainder Values -// -// If there are any unmapped keys in the source value, mapstructure by -// default will silently ignore them. You can error by setting ErrorUnused -// in DecoderConfig. If you're using Metadata you can also maintain a slice -// of the unused keys. -// -// You can also use the ",remain" suffix on your tag to collect all unused -// values in a map. The field with this tag MUST be a map type and should -// probably be a "map[string]interface{}" or "map[interface{}]interface{}". 
-// See example below: - -// type Friend struct { -// Name string -// Other map[string]interface{} `mapstructure:",remain"` -// } - -// Given the input below, Other would be populated with the other -// values that weren't used (everything but "name"): - -// map[string]interface{}{ -// "name": "bob", -// "address": "123 Maple St.", -// } - -// Omit Empty Values - -// When decoding from a struct to any other value, you may use the -// ",omitempty" suffix on your tag to omit that value if it equates to -// the zero value. The zero value of all types is specified in the Go -// specification. - -// For example, the zero value of a numeric type is zero ("0"). If the struct -// field value is zero and a numeric type, the field is empty, and it won't -// be encoded into the destination type. - -// type Source struct { -// Age int `mapstructure:",omitempty"` -// } - -// Unexported fields - -// Since unexported (private) struct fields cannot be set outside the package -// where they are defined, the decoder will simply skip them. - -// For this output type definition: - -// type Exported struct { -// private string // this unexported field will be skipped -// Public string -// } - -// Using this map as input: - -// map[string]interface{}{ -// "private": "I will be ignored", -// "Public": "I made it through!", -// } - -// The following struct will be decoded: - -// type Exported struct { -// private: "" // field is left with an empty string (zero value) -// Public: "I made it through!" -// } - -// Other Configuration - -// mapstructure is highly configurable. See the DecoderConfig struct -// for other features and options that are supported. -package mapstructure - -import ( - "encoding/json" - "errors" - "fmt" - "reflect" - "sort" - "strconv" - "strings" -) - -// DecodeHookFunc is the callback function that can be used for -// data transformations. See "DecodeHook" in the DecoderConfig -// struct.
-// -// The type must be one of DecodeHookFuncType, DecodeHookFuncKind, or -// DecodeHookFuncValue. -// Values are a superset of Types (Values can return types), and Types are a -// superset of Kinds (Types can return Kinds) and are generally a richer thing -// to use, but Kinds are simpler if you only need those. -// -// The reason DecodeHookFunc is multi-typed is for backwards compatibility: -// we started with Kinds and then realized Types were the better solution, -// but have a promise to not break backwards compat so we now support -// both. -type DecodeHookFunc interface{} - -// DecodeHookFuncType is a DecodeHookFunc which has complete information about -// the source and target types. -type DecodeHookFuncType func(reflect.Type, reflect.Type, interface{}) (interface{}, error) - -// DecodeHookFuncKind is a DecodeHookFunc which knows only the Kinds of the -// source and target types. -type DecodeHookFuncKind func(reflect.Kind, reflect.Kind, interface{}) (interface{}, error) - -// DecodeHookFuncValue is a DecodeHookFunc which has complete access to both the source and target -// values. -type DecodeHookFuncValue func(from reflect.Value, to reflect.Value) (interface{}, error) - -// DecoderConfig is the configuration that is used to create a new decoder -// and allows customization of various aspects of decoding. -type DecoderConfig struct { - // DecodeHook, if set, will be called before any decoding and any - // type conversion (if WeaklyTypedInput is on). This lets you modify - // the values before they're set down onto the resulting struct. The - // DecodeHook is called for every map and value in the input. This means - // that if a struct has embedded fields with squash tags the decode hook - // is called only once with all of the input data, not once for each - // embedded struct. - // - // If an error is returned, the entire decode will fail with that error. 
- DecodeHook DecodeHookFunc - - // If ErrorUnused is true, then it is an error for there to exist - // keys in the original map that were unused in the decoding process - // (extra keys). - ErrorUnused bool - - // If ErrorUnset is true, then it is an error for there to exist - // fields in the result that were not set in the decoding process - // (extra fields). This only applies to decoding to a struct. This - // will affect all nested structs as well. - ErrorUnset bool - - // ZeroFields, if set to true, will zero fields before writing them. - // For example, a map will be emptied before decoded values are put in - // it. If this is false, a map will be merged. - ZeroFields bool - - // If WeaklyTypedInput is true, the decoder will make the following - // "weak" conversions: - // - // - bools to string (true = "1", false = "0") - // - numbers to string (base 10) - // - bools to int/uint (true = 1, false = 0) - // - strings to int/uint (base implied by prefix) - // - int to bool (true if value != 0) - // - string to bool (accepts: 1, t, T, TRUE, true, True, 0, f, F, - // FALSE, false, False. Anything else is an error) - // - empty array = empty map and vice versa - // - negative numbers to overflowed uint values (base 10) - // - slice of maps to a merged map - // - single values are converted to slices if required. Each - // element is weakly decoded. For example: "4" can become []int{4} - // if the target type is an int slice. - // - WeaklyTypedInput bool - - // Squash will squash embedded structs. A squash tag may also be - // added to an individual struct field using a tag. For example: - // - // type Parent struct { - // Child `mapstructure:",squash"` - // } - Squash bool - - // Metadata is the struct that will contain extra metadata about - // the decoding. If this is nil, then no metadata will be tracked. - Metadata *Metadata - - // Result is a pointer to the struct that will contain the decoded - // value. 
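One detail in the WeaklyTypedInput list above — "strings to int/uint (base implied by prefix)" — plus the empty-string-as-zero behavior seen later in `decodeInt`, comes down to calling `strconv.ParseInt` with base 0. A small stdlib sketch (`weakParseInt` is an illustrative name, not the vendored code):

```go
package main

import (
	"fmt"
	"strconv"
)

// weakParseInt mirrors the decoder's weakly typed string handling:
// an empty string becomes zero, and the numeric base is implied by the
// prefix ("0x" → hex, leading "0" → octal) because ParseInt gets base 0.
func weakParseInt(s string) (int64, error) {
	if s == "" {
		s = "0"
	}
	return strconv.ParseInt(s, 0, 64)
}

func main() {
	for _, s := range []string{"", "42", "0x2a", "052"} {
		n, _ := weakParseInt(s)
		fmt.Println(s, "->", n)
	}
}
```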
- Result interface{} - - // The tag name that mapstructure reads for field names. This - // defaults to "mapstructure" - TagName string - - // IgnoreUntaggedFields ignores all struct fields without explicit - // TagName, comparable to `mapstructure:"-"` as default behaviour. - IgnoreUntaggedFields bool - - // MatchName is the function used to match the map key to the struct - // field name or tag. Defaults to `strings.EqualFold`. This can be used - // to implement case-sensitive tag values, support snake casing, etc. - MatchName func(mapKey, fieldName string) bool -} - -// A Decoder takes a raw interface value and turns it into structured -// data, keeping track of rich error information along the way in case -// anything goes wrong. Unlike the basic top-level Decode method, you can -// more finely control how the Decoder behaves using the DecoderConfig -// structure. The top-level Decode method is just a convenience that sets -// up the most basic Decoder. -type Decoder struct { - config *DecoderConfig -} - -// Metadata contains information about decoding a structure that -// is tedious or difficult to get otherwise. -type Metadata struct { - // Keys are the keys of the structure which were successfully decoded - Keys []string - - // Unused is a slice of keys that were found in the raw value but - // weren't decoded since there was no matching field in the result interface - Unused []string - - // Unset is a slice of field names that were found in the result interface - // but weren't set in the decoding process since there was no matching value - // in the input - Unset []string -} - -// Decode takes an input structure and uses reflection to translate it to -// the output structure. output must be a pointer to a map or struct. 
-func Decode(input interface{}, output interface{}) error { - config := &DecoderConfig{ - Metadata: nil, - Result: output, - } - - decoder, err := NewDecoder(config) - if err != nil { - return err - } - - return decoder.Decode(input) -} - -// WeakDecode is the same as Decode but is shorthand to enable -// WeaklyTypedInput. See DecoderConfig for more info. -func WeakDecode(input, output interface{}) error { - config := &DecoderConfig{ - Metadata: nil, - Result: output, - WeaklyTypedInput: true, - } - - decoder, err := NewDecoder(config) - if err != nil { - return err - } - - return decoder.Decode(input) -} - -// DecodeMetadata is the same as Decode, but is shorthand to -// enable metadata collection. See DecoderConfig for more info. -func DecodeMetadata(input interface{}, output interface{}, metadata *Metadata) error { - config := &DecoderConfig{ - Metadata: metadata, - Result: output, - } - - decoder, err := NewDecoder(config) - if err != nil { - return err - } - - return decoder.Decode(input) -} - -// WeakDecodeMetadata is the same as Decode, but is shorthand to -// enable both WeaklyTypedInput and metadata collection. See -// DecoderConfig for more info. -func WeakDecodeMetadata(input interface{}, output interface{}, metadata *Metadata) error { - config := &DecoderConfig{ - Metadata: metadata, - Result: output, - WeaklyTypedInput: true, - } - - decoder, err := NewDecoder(config) - if err != nil { - return err - } - - return decoder.Decode(input) -} - -// NewDecoder returns a new decoder for the given configuration. Once -// a decoder has been returned, the same configuration must not be used -// again. 
-func NewDecoder(config *DecoderConfig) (*Decoder, error) { - val := reflect.ValueOf(config.Result) - if val.Kind() != reflect.Ptr { - return nil, errors.New("result must be a pointer") - } - - val = val.Elem() - if !val.CanAddr() { - return nil, errors.New("result must be addressable (a pointer)") - } - - if config.Metadata != nil { - if config.Metadata.Keys == nil { - config.Metadata.Keys = make([]string, 0) - } - - if config.Metadata.Unused == nil { - config.Metadata.Unused = make([]string, 0) - } - - if config.Metadata.Unset == nil { - config.Metadata.Unset = make([]string, 0) - } - } - - if config.TagName == "" { - config.TagName = "mapstructure" - } - - if config.MatchName == nil { - config.MatchName = strings.EqualFold - } - - result := &Decoder{ - config: config, - } - - return result, nil -} - -// Decode decodes the given raw interface to the target pointer specified -// by the configuration. -func (d *Decoder) Decode(input interface{}) error { - return d.decode("", input, reflect.ValueOf(d.config.Result).Elem()) -} - -// Decodes an unknown data type into a specific reflection value. -func (d *Decoder) decode(name string, input interface{}, outVal reflect.Value) error { - var inputVal reflect.Value - if input != nil { - inputVal = reflect.ValueOf(input) - - // We need to check here if input is a typed nil. Typed nils won't - // match the "input == nil" below so we check that here. - if inputVal.Kind() == reflect.Ptr && inputVal.IsNil() { - input = nil - } - } - - if input == nil { - // If the data is nil, then we don't set anything, unless ZeroFields is set - // to true. - if d.config.ZeroFields { - outVal.Set(reflect.Zero(outVal.Type())) - - if d.config.Metadata != nil && name != "" { - d.config.Metadata.Keys = append(d.config.Metadata.Keys, name) - } - } - return nil - } - - if !inputVal.IsValid() { - // If the input value is invalid, then we just set the value - // to be the zero value. 
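`NewDecoder` above rejects any `Result` that is not a pointer, since the decoder must be able to write through it. That validation can be reproduced in isolation; `checkResult` below is a hypothetical stand-in for those two checks, not the vendored function:

```go
package main

import (
	"errors"
	"fmt"
	"reflect"
)

// checkResult reproduces NewDecoder's up-front validation of the decode
// target: it must be a pointer, and the pointed-to value must be
// addressable so the decoder can set fields on it.
func checkResult(result interface{}) error {
	val := reflect.ValueOf(result)
	if val.Kind() != reflect.Ptr {
		return errors.New("result must be a pointer")
	}
	if !val.Elem().CanAddr() {
		return errors.New("result must be addressable (a pointer)")
	}
	return nil
}

func main() {
	var s struct{ Name string }
	fmt.Println(checkResult(&s)) // <nil>
	fmt.Println(checkResult(s))  // result must be a pointer
}
```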
- outVal.Set(reflect.Zero(outVal.Type())) - if d.config.Metadata != nil && name != "" { - d.config.Metadata.Keys = append(d.config.Metadata.Keys, name) - } - return nil - } - - if d.config.DecodeHook != nil { - // We have a DecodeHook, so let's pre-process the input. - var err error - input, err = DecodeHookExec(d.config.DecodeHook, inputVal, outVal) - if err != nil { - return fmt.Errorf("error decoding '%s': %s", name, err) - } - } - - var err error - outputKind := getKind(outVal) - addMetaKey := true - switch outputKind { - case reflect.Bool: - err = d.decodeBool(name, input, outVal) - case reflect.Interface: - err = d.decodeBasic(name, input, outVal) - case reflect.String: - err = d.decodeString(name, input, outVal) - case reflect.Int: - err = d.decodeInt(name, input, outVal) - case reflect.Uint: - err = d.decodeUint(name, input, outVal) - case reflect.Float32: - err = d.decodeFloat(name, input, outVal) - case reflect.Struct: - err = d.decodeStruct(name, input, outVal) - case reflect.Map: - err = d.decodeMap(name, input, outVal) - case reflect.Ptr: - addMetaKey, err = d.decodePtr(name, input, outVal) - case reflect.Slice: - err = d.decodeSlice(name, input, outVal) - case reflect.Array: - err = d.decodeArray(name, input, outVal) - case reflect.Func: - err = d.decodeFunc(name, input, outVal) - default: - // If we reached this point then we weren't able to decode it - return fmt.Errorf("%s: unsupported type: %s", name, outputKind) - } - - // If we reached here, then we successfully decoded SOMETHING, so - // mark the key as used if we're tracking metainput. - if addMetaKey && d.config.Metadata != nil && name != "" { - d.config.Metadata.Keys = append(d.config.Metadata.Keys, name) - } - - return err -} - -// This decodes a basic type (bool, int, string, etc.) and sets the -// value to "data" of that type. 
-func (d *Decoder) decodeBasic(name string, data interface{}, val reflect.Value) error { - if val.IsValid() && val.Elem().IsValid() { - elem := val.Elem() - - // If we can't address this element, then it's not writable. Instead, - // we make a copy of the value (which is a pointer and therefore - // writable), decode into that, and replace the whole value. - copied := false - if !elem.CanAddr() { - copied = true - - // Make *T - copy := reflect.New(elem.Type()) - - // *T = elem - copy.Elem().Set(elem) - - // Set elem so we decode into it - elem = copy - } - - // Decode. If we have an error then return. We also return right - // away if we're not a copy because that means we decoded directly. - if err := d.decode(name, data, elem); err != nil || !copied { - return err - } - - // If we're a copy, we need to set the final result - val.Set(elem.Elem()) - return nil - } - - dataVal := reflect.ValueOf(data) - - // If the input data is a pointer, and the assigned type is the dereference - // of that exact pointer, then indirect it so that we can assign it.
- // Example: *string to string - if dataVal.Kind() == reflect.Ptr && dataVal.Type().Elem() == val.Type() { - dataVal = reflect.Indirect(dataVal) - } - - if !dataVal.IsValid() { - dataVal = reflect.Zero(val.Type()) - } - - dataValType := dataVal.Type() - if !dataValType.AssignableTo(val.Type()) { - return fmt.Errorf( - "'%s' expected type '%s', got '%s'", - name, val.Type(), dataValType) - } - - val.Set(dataVal) - return nil -} - -func (d *Decoder) decodeString(name string, data interface{}, val reflect.Value) error { - dataVal := reflect.Indirect(reflect.ValueOf(data)) - dataKind := getKind(dataVal) - - converted := true - switch { - case dataKind == reflect.String: - val.SetString(dataVal.String()) - case dataKind == reflect.Bool && d.config.WeaklyTypedInput: - if dataVal.Bool() { - val.SetString("1") - } else { - val.SetString("0") - } - case dataKind == reflect.Int && d.config.WeaklyTypedInput: - val.SetString(strconv.FormatInt(dataVal.Int(), 10)) - case dataKind == reflect.Uint && d.config.WeaklyTypedInput: - val.SetString(strconv.FormatUint(dataVal.Uint(), 10)) - case dataKind == reflect.Float32 && d.config.WeaklyTypedInput: - val.SetString(strconv.FormatFloat(dataVal.Float(), 'f', -1, 64)) - case dataKind == reflect.Slice && d.config.WeaklyTypedInput, - dataKind == reflect.Array && d.config.WeaklyTypedInput: - dataType := dataVal.Type() - elemKind := dataType.Elem().Kind() - switch elemKind { - case reflect.Uint8: - var uints []uint8 - if dataKind == reflect.Array { - uints = make([]uint8, dataVal.Len(), dataVal.Len()) - for i := range uints { - uints[i] = dataVal.Index(i).Interface().(uint8) - } - } else { - uints = dataVal.Interface().([]uint8) - } - val.SetString(string(uints)) - default: - converted = false - } - default: - converted = false - } - - if !converted { - return fmt.Errorf( - "'%s' expected type '%s', got unconvertible type '%s', value: '%v'", - name, val.Type(), dataVal.Type(), data) - } - - return nil -} - -func (d *Decoder) decodeInt(name 
string, data interface{}, val reflect.Value) error { - dataVal := reflect.Indirect(reflect.ValueOf(data)) - dataKind := getKind(dataVal) - dataType := dataVal.Type() - - switch { - case dataKind == reflect.Int: - val.SetInt(dataVal.Int()) - case dataKind == reflect.Uint: - val.SetInt(int64(dataVal.Uint())) - case dataKind == reflect.Float32: - val.SetInt(int64(dataVal.Float())) - case dataKind == reflect.Bool && d.config.WeaklyTypedInput: - if dataVal.Bool() { - val.SetInt(1) - } else { - val.SetInt(0) - } - case dataKind == reflect.String && d.config.WeaklyTypedInput: - str := dataVal.String() - if str == "" { - str = "0" - } - - i, err := strconv.ParseInt(str, 0, val.Type().Bits()) - if err == nil { - val.SetInt(i) - } else { - return fmt.Errorf("cannot parse '%s' as int: %s", name, err) - } - case dataType.PkgPath() == "encoding/json" && dataType.Name() == "Number": - jn := data.(json.Number) - i, err := jn.Int64() - if err != nil { - return fmt.Errorf( - "error decoding json.Number into %s: %s", name, err) - } - val.SetInt(i) - default: - return fmt.Errorf( - "'%s' expected type '%s', got unconvertible type '%s', value: '%v'", - name, val.Type(), dataVal.Type(), data) - } - - return nil -} - -func (d *Decoder) decodeUint(name string, data interface{}, val reflect.Value) error { - dataVal := reflect.Indirect(reflect.ValueOf(data)) - dataKind := getKind(dataVal) - dataType := dataVal.Type() - - switch { - case dataKind == reflect.Int: - i := dataVal.Int() - if i < 0 && !d.config.WeaklyTypedInput { - return fmt.Errorf("cannot parse '%s', %d overflows uint", - name, i) - } - val.SetUint(uint64(i)) - case dataKind == reflect.Uint: - val.SetUint(dataVal.Uint()) - case dataKind == reflect.Float32: - f := dataVal.Float() - if f < 0 && !d.config.WeaklyTypedInput { - return fmt.Errorf("cannot parse '%s', %f overflows uint", - name, f) - } - val.SetUint(uint64(f)) - case dataKind == reflect.Bool && d.config.WeaklyTypedInput: - if dataVal.Bool() { - val.SetUint(1) - } else 
{ - val.SetUint(0) - } - case dataKind == reflect.String && d.config.WeaklyTypedInput: - str := dataVal.String() - if str == "" { - str = "0" - } - - i, err := strconv.ParseUint(str, 0, val.Type().Bits()) - if err == nil { - val.SetUint(i) - } else { - return fmt.Errorf("cannot parse '%s' as uint: %s", name, err) - } - case dataType.PkgPath() == "encoding/json" && dataType.Name() == "Number": - jn := data.(json.Number) - i, err := strconv.ParseUint(string(jn), 0, 64) - if err != nil { - return fmt.Errorf( - "error decoding json.Number into %s: %s", name, err) - } - val.SetUint(i) - default: - return fmt.Errorf( - "'%s' expected type '%s', got unconvertible type '%s', value: '%v'", - name, val.Type(), dataVal.Type(), data) - } - - return nil -} - -func (d *Decoder) decodeBool(name string, data interface{}, val reflect.Value) error { - dataVal := reflect.Indirect(reflect.ValueOf(data)) - dataKind := getKind(dataVal) - - switch { - case dataKind == reflect.Bool: - val.SetBool(dataVal.Bool()) - case dataKind == reflect.Int && d.config.WeaklyTypedInput: - val.SetBool(dataVal.Int() != 0) - case dataKind == reflect.Uint && d.config.WeaklyTypedInput: - val.SetBool(dataVal.Uint() != 0) - case dataKind == reflect.Float32 && d.config.WeaklyTypedInput: - val.SetBool(dataVal.Float() != 0) - case dataKind == reflect.String && d.config.WeaklyTypedInput: - b, err := strconv.ParseBool(dataVal.String()) - if err == nil { - val.SetBool(b) - } else if dataVal.String() == "" { - val.SetBool(false) - } else { - return fmt.Errorf("cannot parse '%s' as bool: %s", name, err) - } - default: - return fmt.Errorf( - "'%s' expected type '%s', got unconvertible type '%s', value: '%v'", - name, val.Type(), dataVal.Type(), data) - } - - return nil -} - -func (d *Decoder) decodeFloat(name string, data interface{}, val reflect.Value) error { - dataVal := reflect.Indirect(reflect.ValueOf(data)) - dataKind := getKind(dataVal) - dataType := dataVal.Type() - - switch { - case dataKind == reflect.Int: - 
val.SetFloat(float64(dataVal.Int())) - case dataKind == reflect.Uint: - val.SetFloat(float64(dataVal.Uint())) - case dataKind == reflect.Float32: - val.SetFloat(dataVal.Float()) - case dataKind == reflect.Bool && d.config.WeaklyTypedInput: - if dataVal.Bool() { - val.SetFloat(1) - } else { - val.SetFloat(0) - } - case dataKind == reflect.String && d.config.WeaklyTypedInput: - str := dataVal.String() - if str == "" { - str = "0" - } - - f, err := strconv.ParseFloat(str, val.Type().Bits()) - if err == nil { - val.SetFloat(f) - } else { - return fmt.Errorf("cannot parse '%s' as float: %s", name, err) - } - case dataType.PkgPath() == "encoding/json" && dataType.Name() == "Number": - jn := data.(json.Number) - i, err := jn.Float64() - if err != nil { - return fmt.Errorf( - "error decoding json.Number into %s: %s", name, err) - } - val.SetFloat(i) - default: - return fmt.Errorf( - "'%s' expected type '%s', got unconvertible type '%s', value: '%v'", - name, val.Type(), dataVal.Type(), data) - } - - return nil -} - -func (d *Decoder) decodeMap(name string, data interface{}, val reflect.Value) error { - valType := val.Type() - valKeyType := valType.Key() - valElemType := valType.Elem() - - // By default we overwrite keys in the current map - valMap := val - - // If the map is nil or we're purposely zeroing fields, make a new map - if valMap.IsNil() || d.config.ZeroFields { - // Make a new map to hold our result - mapType := reflect.MapOf(valKeyType, valElemType) - valMap = reflect.MakeMap(mapType) - } - - // Check input type and based on the input type jump to the proper func - dataVal := reflect.Indirect(reflect.ValueOf(data)) - switch dataVal.Kind() { - case reflect.Map: - return d.decodeMapFromMap(name, dataVal, val, valMap) - - case reflect.Struct: - return d.decodeMapFromStruct(name, dataVal, val, valMap) - - case reflect.Array, reflect.Slice: - if d.config.WeaklyTypedInput { - return d.decodeMapFromSlice(name, dataVal, val, valMap) - } - - fallthrough - - default: - 
return fmt.Errorf("'%s' expected a map, got '%s'", name, dataVal.Kind()) - } -} - -func (d *Decoder) decodeMapFromSlice(name string, dataVal reflect.Value, val reflect.Value, valMap reflect.Value) error { - // Special case for BC reasons (covered by tests) - if dataVal.Len() == 0 { - val.Set(valMap) - return nil - } - - for i := 0; i < dataVal.Len(); i++ { - err := d.decode( - name+"["+strconv.Itoa(i)+"]", - dataVal.Index(i).Interface(), val) - if err != nil { - return err - } - } - - return nil -} - -func (d *Decoder) decodeMapFromMap(name string, dataVal reflect.Value, val reflect.Value, valMap reflect.Value) error { - valType := val.Type() - valKeyType := valType.Key() - valElemType := valType.Elem() - - // Accumulate errors - errors := make([]string, 0) - - // If the input data is empty, then we just match what the input data is. - if dataVal.Len() == 0 { - if dataVal.IsNil() { - if !val.IsNil() { - val.Set(dataVal) - } - } else { - // Set to empty allocated value - val.Set(valMap) - } - - return nil - } - - for _, k := range dataVal.MapKeys() { - fieldName := name + "[" + k.String() + "]" - - // First decode the key into the proper type - currentKey := reflect.Indirect(reflect.New(valKeyType)) - if err := d.decode(fieldName, k.Interface(), currentKey); err != nil { - errors = appendErrors(errors, err) - continue - } - - // Next decode the data into the proper type - v := dataVal.MapIndex(k).Interface() - currentVal := reflect.Indirect(reflect.New(valElemType)) - if err := d.decode(fieldName, v, currentVal); err != nil { - errors = appendErrors(errors, err) - continue - } - - valMap.SetMapIndex(currentKey, currentVal) - } - - // Set the built up map to the value - val.Set(valMap) - - // If we had errors, return those - if len(errors) > 0 { - return &Error{errors} - } - - return nil -} - -func (d *Decoder) decodeMapFromStruct(name string, dataVal reflect.Value, val reflect.Value, valMap reflect.Value) error { - typ := dataVal.Type() - for i := 0; i < 
typ.NumField(); i++ { - // Get the StructField first since this is a cheap operation. If the - // field is unexported, then ignore it. - f := typ.Field(i) - if f.PkgPath != "" { - continue - } - - // Next get the actual value of this field and verify it is assignable - // to the map value. - v := dataVal.Field(i) - if !v.Type().AssignableTo(valMap.Type().Elem()) { - return fmt.Errorf("cannot assign type '%s' to map value field of type '%s'", v.Type(), valMap.Type().Elem()) - } - - tagValue := f.Tag.Get(d.config.TagName) - keyName := f.Name - - if tagValue == "" && d.config.IgnoreUntaggedFields { - continue - } - - // If Squash is set in the config, we squash the field down. - squash := d.config.Squash && v.Kind() == reflect.Struct && f.Anonymous - - v = dereferencePtrToStructIfNeeded(v, d.config.TagName) - - // Determine the name of the key in the map - if index := strings.Index(tagValue, ","); index != -1 { - if tagValue[:index] == "-" { - continue - } - // If "omitempty" is specified in the tag, it ignores empty values. - if strings.Index(tagValue[index+1:], "omitempty") != -1 && isEmptyValue(v) { - continue - } - - // If "squash" is specified in the tag, we squash the field down. - squash = squash || strings.Index(tagValue[index+1:], "squash") != -1 - if squash { - // When squashing, the embedded type can be a pointer to a struct. 
-        if v.Kind() == reflect.Ptr && v.Elem().Kind() == reflect.Struct {
-          v = v.Elem()
-        }
-
-        // The final type must be a struct
-        if v.Kind() != reflect.Struct {
-          return fmt.Errorf("cannot squash non-struct type '%s'", v.Type())
-        }
-      }
-      if keyNameTagValue := tagValue[:index]; keyNameTagValue != "" {
-        keyName = keyNameTagValue
-      }
-    } else if len(tagValue) > 0 {
-      if tagValue == "-" {
-        continue
-      }
-      keyName = tagValue
-    }
-
-    switch v.Kind() {
-    // this is an embedded struct, so handle it differently
-    case reflect.Struct:
-      x := reflect.New(v.Type())
-      x.Elem().Set(v)
-
-      vType := valMap.Type()
-      vKeyType := vType.Key()
-      vElemType := vType.Elem()
-      mType := reflect.MapOf(vKeyType, vElemType)
-      vMap := reflect.MakeMap(mType)
-
-      // Creating a pointer to a map so that other methods can completely
-      // overwrite the map if need be (looking at you decodeMapFromMap). The
-      // indirection allows the underlying map to be settable (CanSet() == true)
-      // where as reflect.MakeMap returns an unsettable map.
-      addrVal := reflect.New(vMap.Type())
-      reflect.Indirect(addrVal).Set(vMap)
-
-      err := d.decode(keyName, x.Interface(), reflect.Indirect(addrVal))
-      if err != nil {
-        return err
-      }
-
-      // the underlying map may have been completely overwritten so pull
-      // it indirectly out of the enclosing value.
-      vMap = reflect.Indirect(addrVal)
-
-      if squash {
-        for _, k := range vMap.MapKeys() {
-          valMap.SetMapIndex(k, vMap.MapIndex(k))
-        }
-      } else {
-        valMap.SetMapIndex(reflect.ValueOf(keyName), vMap)
-      }
-
-    default:
-      valMap.SetMapIndex(reflect.ValueOf(keyName), v)
-    }
-  }
-
-  if val.CanAddr() {
-    val.Set(valMap)
-  }
-
-  return nil
-}
-
-func (d *Decoder) decodePtr(name string, data interface{}, val reflect.Value) (bool, error) {
-  // If the input data is nil, then we want to just set the output
-  // pointer to be nil as well.
-  isNil := data == nil
-  if !isNil {
-    switch v := reflect.Indirect(reflect.ValueOf(data)); v.Kind() {
-    case reflect.Chan,
-      reflect.Func,
-      reflect.Interface,
-      reflect.Map,
-      reflect.Ptr,
-      reflect.Slice:
-      isNil = v.IsNil()
-    }
-  }
-  if isNil {
-    if !val.IsNil() && val.CanSet() {
-      nilValue := reflect.New(val.Type()).Elem()
-      val.Set(nilValue)
-    }
-
-    return true, nil
-  }
-
-  // Create an element of the concrete (non pointer) type and decode
-  // into that. Then set the value of the pointer to this type.
-  valType := val.Type()
-  valElemType := valType.Elem()
-  if val.CanSet() {
-    realVal := val
-    if realVal.IsNil() || d.config.ZeroFields {
-      realVal = reflect.New(valElemType)
-    }
-
-    if err := d.decode(name, data, reflect.Indirect(realVal)); err != nil {
-      return false, err
-    }
-
-    val.Set(realVal)
-  } else {
-    if err := d.decode(name, data, reflect.Indirect(val)); err != nil {
-      return false, err
-    }
-  }
-  return false, nil
-}
-
-func (d *Decoder) decodeFunc(name string, data interface{}, val reflect.Value) error {
-  // Create an element of the concrete (non pointer) type and decode
-  // into that. Then set the value of the pointer to this type.
-  dataVal := reflect.Indirect(reflect.ValueOf(data))
-  if val.Type() != dataVal.Type() {
-    return fmt.Errorf(
-      "'%s' expected type '%s', got unconvertible type '%s', value: '%v'",
-      name, val.Type(), dataVal.Type(), data)
-  }
-  val.Set(dataVal)
-  return nil
-}
-
-func (d *Decoder) decodeSlice(name string, data interface{}, val reflect.Value) error {
-  dataVal := reflect.Indirect(reflect.ValueOf(data))
-  dataValKind := dataVal.Kind()
-  valType := val.Type()
-  valElemType := valType.Elem()
-  sliceType := reflect.SliceOf(valElemType)
-
-  // If we have a non array/slice type then we first attempt to convert.
-  if dataValKind != reflect.Array && dataValKind != reflect.Slice {
-    if d.config.WeaklyTypedInput {
-      switch {
-      // Slice and array we use the normal logic
-      case dataValKind == reflect.Slice, dataValKind == reflect.Array:
-        break
-
-      // Empty maps turn into empty slices
-      case dataValKind == reflect.Map:
-        if dataVal.Len() == 0 {
-          val.Set(reflect.MakeSlice(sliceType, 0, 0))
-          return nil
-        }
-        // Create slice of maps of other sizes
-        return d.decodeSlice(name, []interface{}{data}, val)
-
-      case dataValKind == reflect.String && valElemType.Kind() == reflect.Uint8:
-        return d.decodeSlice(name, []byte(dataVal.String()), val)
-
-      // All other types we try to convert to the slice type
-      // and "lift" it into it. i.e. a string becomes a string slice.
-      default:
-        // Just re-try this function with data as a slice.
-        return d.decodeSlice(name, []interface{}{data}, val)
-      }
-    }
-
-    return fmt.Errorf(
-      "'%s': source data must be an array or slice, got %s", name, dataValKind)
-  }
-
-  // If the input value is nil, then don't allocate since empty != nil
-  if dataValKind != reflect.Array && dataVal.IsNil() {
-    return nil
-  }
-
-  valSlice := val
-  if valSlice.IsNil() || d.config.ZeroFields {
-    // Make a new slice to hold our result, same size as the original data.
-    valSlice = reflect.MakeSlice(sliceType, dataVal.Len(), dataVal.Len())
-  }
-
-  // Accumulate any errors
-  errors := make([]string, 0)
-
-  for i := 0; i < dataVal.Len(); i++ {
-    currentData := dataVal.Index(i).Interface()
-    for valSlice.Len() <= i {
-      valSlice = reflect.Append(valSlice, reflect.Zero(valElemType))
-    }
-    currentField := valSlice.Index(i)
-
-    fieldName := name + "[" + strconv.Itoa(i) + "]"
-    if err := d.decode(fieldName, currentData, currentField); err != nil {
-      errors = appendErrors(errors, err)
-    }
-  }
-
-  // Finally, set the value to the slice we built up
-  val.Set(valSlice)
-
-  // If there were errors, we return those
-  if len(errors) > 0 {
-    return &Error{errors}
-  }
-
-  return nil
-}
-
-func (d *Decoder) decodeArray(name string, data interface{}, val reflect.Value) error {
-  dataVal := reflect.Indirect(reflect.ValueOf(data))
-  dataValKind := dataVal.Kind()
-  valType := val.Type()
-  valElemType := valType.Elem()
-  arrayType := reflect.ArrayOf(valType.Len(), valElemType)
-
-  valArray := val
-
-  if valArray.Interface() == reflect.Zero(valArray.Type()).Interface() || d.config.ZeroFields {
-    // Check input type
-    if dataValKind != reflect.Array && dataValKind != reflect.Slice {
-      if d.config.WeaklyTypedInput {
-        switch {
-        // Empty maps turn into empty arrays
-        case dataValKind == reflect.Map:
-          if dataVal.Len() == 0 {
-            val.Set(reflect.Zero(arrayType))
-            return nil
-          }
-
-        // All other types we try to convert to the array type
-        // and "lift" it into it. i.e. a string becomes a string array.
-        default:
-          // Just re-try this function with data as a slice.
-          return d.decodeArray(name, []interface{}{data}, val)
-        }
-      }
-
-      return fmt.Errorf(
-        "'%s': source data must be an array or slice, got %s", name, dataValKind)
-
-    }
-    if dataVal.Len() > arrayType.Len() {
-      return fmt.Errorf(
-        "'%s': expected source data to have length less or equal to %d, got %d", name, arrayType.Len(), dataVal.Len())
-
-    }
-
-    // Make a new array to hold our result, same size as the original data.
-    valArray = reflect.New(arrayType).Elem()
-  }
-
-  // Accumulate any errors
-  errors := make([]string, 0)
-
-  for i := 0; i < dataVal.Len(); i++ {
-    currentData := dataVal.Index(i).Interface()
-    currentField := valArray.Index(i)
-
-    fieldName := name + "[" + strconv.Itoa(i) + "]"
-    if err := d.decode(fieldName, currentData, currentField); err != nil {
-      errors = appendErrors(errors, err)
-    }
-  }
-
-  // Finally, set the value to the array we built up
-  val.Set(valArray)
-
-  // If there were errors, we return those
-  if len(errors) > 0 {
-    return &Error{errors}
-  }
-
-  return nil
-}
-
-func (d *Decoder) decodeStruct(name string, data interface{}, val reflect.Value) error {
-  dataVal := reflect.Indirect(reflect.ValueOf(data))
-
-  // If the type of the value to write to and the data match directly,
-  // then we just set it directly instead of recursing into the structure.
-  if dataVal.Type() == val.Type() {
-    val.Set(dataVal)
-    return nil
-  }
-
-  dataValKind := dataVal.Kind()
-  switch dataValKind {
-  case reflect.Map:
-    return d.decodeStructFromMap(name, dataVal, val)
-
-  case reflect.Struct:
-    // Not the most efficient way to do this but we can optimize later if
-    // we want to. To convert from struct to struct we go to map first
-    // as an intermediary.
-
-    // Make a new map to hold our result
-    mapType := reflect.TypeOf((map[string]interface{})(nil))
-    mval := reflect.MakeMap(mapType)
-
-    // Creating a pointer to a map so that other methods can completely
-    // overwrite the map if need be (looking at you decodeMapFromMap). The
-    // indirection allows the underlying map to be settable (CanSet() == true)
-    // where as reflect.MakeMap returns an unsettable map.
-    addrVal := reflect.New(mval.Type())
-
-    reflect.Indirect(addrVal).Set(mval)
-    if err := d.decodeMapFromStruct(name, dataVal, reflect.Indirect(addrVal), mval); err != nil {
-      return err
-    }
-
-    result := d.decodeStructFromMap(name, reflect.Indirect(addrVal), val)
-    return result
-
-  default:
-    return fmt.Errorf("'%s' expected a map, got '%s'", name, dataVal.Kind())
-  }
-}
-
-func (d *Decoder) decodeStructFromMap(name string, dataVal, val reflect.Value) error {
-  dataValType := dataVal.Type()
-  if kind := dataValType.Key().Kind(); kind != reflect.String && kind != reflect.Interface {
-    return fmt.Errorf(
-      "'%s' needs a map with string keys, has '%s' keys",
-      name, dataValType.Key().Kind())
-  }
-
-  dataValKeys := make(map[reflect.Value]struct{})
-  dataValKeysUnused := make(map[interface{}]struct{})
-  for _, dataValKey := range dataVal.MapKeys() {
-    dataValKeys[dataValKey] = struct{}{}
-    dataValKeysUnused[dataValKey.Interface()] = struct{}{}
-  }
-
-  targetValKeysUnused := make(map[interface{}]struct{})
-  errors := make([]string, 0)
-
-  // This slice will keep track of all the structs we'll be decoding.
-  // There can be more than one struct if there are embedded structs
-  // that are squashed.
-  structs := make([]reflect.Value, 1, 5)
-  structs[0] = val
-
-  // Compile the list of all the fields that we're going to be decoding
-  // from all the structs.
-  type field struct {
-    field reflect.StructField
-    val   reflect.Value
-  }
-
-  // remainField is set to a valid field set with the "remain" tag if
-  // we are keeping track of remaining values.
-  var remainField *field
-
-  fields := []field{}
-  for len(structs) > 0 {
-    structVal := structs[0]
-    structs = structs[1:]
-
-    structType := structVal.Type()
-
-    for i := 0; i < structType.NumField(); i++ {
-      fieldType := structType.Field(i)
-      fieldVal := structVal.Field(i)
-      if fieldVal.Kind() == reflect.Ptr && fieldVal.Elem().Kind() == reflect.Struct {
-        // Handle embedded struct pointers as embedded structs.
-        fieldVal = fieldVal.Elem()
-      }
-
-      // If "squash" is specified in the tag, we squash the field down.
-      squash := d.config.Squash && fieldVal.Kind() == reflect.Struct && fieldType.Anonymous
-      remain := false
-
-      // We always parse the tags cause we're looking for other tags too
-      tagParts := strings.Split(fieldType.Tag.Get(d.config.TagName), ",")
-      for _, tag := range tagParts[1:] {
-        if tag == "squash" {
-          squash = true
-          break
-        }
-
-        if tag == "remain" {
-          remain = true
-          break
-        }
-      }
-
-      if squash {
-        if fieldVal.Kind() != reflect.Struct {
-          errors = appendErrors(errors,
-            fmt.Errorf("%s: unsupported type for squash: %s", fieldType.Name, fieldVal.Kind()))
-        } else {
-          structs = append(structs, fieldVal)
-        }
-        continue
-      }
-
-      // Build our field
-      if remain {
-        remainField = &field{fieldType, fieldVal}
-      } else {
-        // Normal struct field, store it away
-        fields = append(fields, field{fieldType, fieldVal})
-      }
-    }
-  }
-
-  // for fieldType, field := range fields {
-  for _, f := range fields {
-    field, fieldValue := f.field, f.val
-    fieldName := field.Name
-
-    tagValue := field.Tag.Get(d.config.TagName)
-    tagValue = strings.SplitN(tagValue, ",", 2)[0]
-    if tagValue != "" {
-      fieldName = tagValue
-    }
-
-    rawMapKey := reflect.ValueOf(fieldName)
-    rawMapVal := dataVal.MapIndex(rawMapKey)
-    if !rawMapVal.IsValid() {
-      // Do a slower search by iterating over each key and
-      // doing case-insensitive search.
-      for dataValKey := range dataValKeys {
-        mK, ok := dataValKey.Interface().(string)
-        if !ok {
-          // Not a string key
-          continue
-        }
-
-        if d.config.MatchName(mK, fieldName) {
-          rawMapKey = dataValKey
-          rawMapVal = dataVal.MapIndex(dataValKey)
-          break
-        }
-      }
-
-      if !rawMapVal.IsValid() {
-        // There was no matching key in the map for the value in
-        // the struct. Remember it for potential errors and metadata.
-        targetValKeysUnused[fieldName] = struct{}{}
-        continue
-      }
-    }
-
-    if !fieldValue.IsValid() {
-      // This should never happen
-      panic("field is not valid")
-    }
-
-    // If we can't set the field, then it is unexported or something,
-    // and we just continue onwards.
-    if !fieldValue.CanSet() {
-      continue
-    }
-
-    // Delete the key we're using from the unused map so we stop tracking
-    delete(dataValKeysUnused, rawMapKey.Interface())
-
-    // If the name is empty string, then we're at the root, and we
-    // don't dot-join the fields.
-    if name != "" {
-      fieldName = name + "." + fieldName
-    }
-
-    if err := d.decode(fieldName, rawMapVal.Interface(), fieldValue); err != nil {
-      errors = appendErrors(errors, err)
-    }
-  }
-
-  // If we have a "remain"-tagged field and we have unused keys then
-  // we put the unused keys directly into the remain field.
-  if remainField != nil && len(dataValKeysUnused) > 0 {
-    // Build a map of only the unused values
-    remain := map[interface{}]interface{}{}
-    for key := range dataValKeysUnused {
-      remain[key] = dataVal.MapIndex(reflect.ValueOf(key)).Interface()
-    }
-
-    // Decode it as-if we were just decoding this map onto our map.
-    if err := d.decodeMap(name, remain, remainField.val); err != nil {
-      errors = appendErrors(errors, err)
-    }
-
-    // Set the map to nil so we have none so that the next check will
-    // not error (ErrorUnused)
-    dataValKeysUnused = nil
-  }
-
-  if d.config.ErrorUnused && len(dataValKeysUnused) > 0 {
-    keys := make([]string, 0, len(dataValKeysUnused))
-    for rawKey := range dataValKeysUnused {
-      keys = append(keys, rawKey.(string))
-    }
-    sort.Strings(keys)
-
-    err := fmt.Errorf("'%s' has invalid keys: %s", name, strings.Join(keys, ", "))
-    errors = appendErrors(errors, err)
-  }
-
-  if d.config.ErrorUnset && len(targetValKeysUnused) > 0 {
-    keys := make([]string, 0, len(targetValKeysUnused))
-    for rawKey := range targetValKeysUnused {
-      keys = append(keys, rawKey.(string))
-    }
-    sort.Strings(keys)
-
-    err := fmt.Errorf("'%s' has unset fields: %s", name, strings.Join(keys, ", "))
-    errors = appendErrors(errors, err)
-  }
-
-  if len(errors) > 0 {
-    return &Error{errors}
-  }
-
-  // Add the unused keys to the list of unused keys if we're tracking metadata
-  if d.config.Metadata != nil {
-    for rawKey := range dataValKeysUnused {
-      key := rawKey.(string)
-      if name != "" {
-        key = name + "." + key
-      }
-
-      d.config.Metadata.Unused = append(d.config.Metadata.Unused, key)
-    }
-    for rawKey := range targetValKeysUnused {
-      key := rawKey.(string)
-      if name != "" {
-        key = name + "." + key
-      }
-
-      d.config.Metadata.Unset = append(d.config.Metadata.Unset, key)
-    }
-  }
-
-  return nil
-}
-
-func isEmptyValue(v reflect.Value) bool {
-  switch getKind(v) {
-  case reflect.Array, reflect.Map, reflect.Slice, reflect.String:
-    return v.Len() == 0
-  case reflect.Bool:
-    return !v.Bool()
-  case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
-    return v.Int() == 0
-  case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr:
-    return v.Uint() == 0
-  case reflect.Float32, reflect.Float64:
-    return v.Float() == 0
-  case reflect.Interface, reflect.Ptr:
-    return v.IsNil()
-  }
-  return false
-}
-
-func getKind(val reflect.Value) reflect.Kind {
-  kind := val.Kind()
-
-  switch {
-  case kind >= reflect.Int && kind <= reflect.Int64:
-    return reflect.Int
-  case kind >= reflect.Uint && kind <= reflect.Uint64:
-    return reflect.Uint
-  case kind >= reflect.Float32 && kind <= reflect.Float64:
-    return reflect.Float32
-  default:
-    return kind
-  }
-}
-
-func isStructTypeConvertibleToMap(typ reflect.Type, checkMapstructureTags bool, tagName string) bool {
-  for i := 0; i < typ.NumField(); i++ {
-    f := typ.Field(i)
-    if f.PkgPath == "" && !checkMapstructureTags { // check for unexported fields
-      return true
-    }
-    if checkMapstructureTags && f.Tag.Get(tagName) != "" { // check for mapstructure tags inside
-      return true
-    }
-  }
-  return false
-}
-
-func dereferencePtrToStructIfNeeded(v reflect.Value, tagName string) reflect.Value {
-  if v.Kind() != reflect.Ptr || v.Elem().Kind() != reflect.Struct {
-    return v
-  }
-  deref := v.Elem()
-  derefT := deref.Type()
-  if isStructTypeConvertibleToMap(derefT, true, tagName) {
-    return deref
-  }
-  return v
-}
diff --git a/cluster-autoscaler/vendor/github.com/opencontainers/runc/libcontainer/cgroups/ebpf/ebpf_linux.go b/cluster-autoscaler/vendor/github.com/opencontainers/runc/libcontainer/cgroups/ebpf/ebpf_linux.go
index 104c74a890f8..35b00aaf0552 100644
--- a/cluster-autoscaler/vendor/github.com/opencontainers/runc/libcontainer/cgroups/ebpf/ebpf_linux.go
+++ b/cluster-autoscaler/vendor/github.com/opencontainers/runc/libcontainer/cgroups/ebpf/ebpf_linux.go
@@ -93,7 +93,7 @@ var (
 )
 
 // Loosely based on the BPF_F_REPLACE support check in
-// .
+// https://github.com/cilium/ebpf/blob/v0.6.0/link/syscalls.go.
 //
 // TODO: move this logic to cilium/ebpf
 func haveBpfProgReplace() bool {
diff --git a/cluster-autoscaler/vendor/github.com/opencontainers/runc/libcontainer/cgroups/fs/fs.go b/cluster-autoscaler/vendor/github.com/opencontainers/runc/libcontainer/cgroups/fs/fs.go
index fb4fcc7f75bb..9e2f0ec04c8a 100644
--- a/cluster-autoscaler/vendor/github.com/opencontainers/runc/libcontainer/cgroups/fs/fs.go
+++ b/cluster-autoscaler/vendor/github.com/opencontainers/runc/libcontainer/cgroups/fs/fs.go
@@ -28,6 +28,7 @@ var subsystems = []subsystem{
   &FreezerGroup{},
   &RdmaGroup{},
   &NameGroup{GroupName: "name=systemd", Join: true},
+  &NameGroup{GroupName: "misc", Join: true},
 }
 
 var errSubsystemDoesNotExist = errors.New("cgroup: subsystem does not exist")
diff --git a/cluster-autoscaler/vendor/github.com/opencontainers/runc/libcontainer/cgroups/systemd/common.go b/cluster-autoscaler/vendor/github.com/opencontainers/runc/libcontainer/cgroups/systemd/common.go
index 45744c15c0a1..5d561facebcb 100644
--- a/cluster-autoscaler/vendor/github.com/opencontainers/runc/libcontainer/cgroups/systemd/common.go
+++ b/cluster-autoscaler/vendor/github.com/opencontainers/runc/libcontainer/cgroups/systemd/common.go
@@ -177,7 +177,7 @@ func allowAllDevices() []systemdDbus.Property {
 
 // generateDeviceProperties takes the configured device rules and generates a
 // corresponding set of systemd properties to configure the devices correctly.
-func generateDeviceProperties(r *configs.Resources) ([]systemdDbus.Property, error) {
+func generateDeviceProperties(r *configs.Resources, sdVer int) ([]systemdDbus.Property, error) {
   if r.SkipDevices {
     return nil, nil
   }
@@ -238,9 +238,10 @@ func generateDeviceProperties(r *configs.Resources) ([]systemdDbus.Property, err
   // trickery to convert things:
   //
   //  * Concrete rules with non-wildcard major/minor numbers have to use
-  //    /dev/{block,char} paths. This is slightly odd because it means
-  //    that we cannot add whitelist rules for devices that don't exist,
-  //    but there's not too much we can do about that.
+  //    /dev/{block,char}/MAJOR:minor paths. Before v240, systemd uses
+  //    stat(2) on such paths to look up device properties, meaning we
+  //    cannot add whitelist rules for devices that don't exist. Since v240,
+  //    device properties are parsed from the path string.
   //
   //    However, path globbing is not support for path-based rules so we
   //    need to handle wildcards in some other manner.
@@ -288,13 +289,16 @@ func generateDeviceProperties(r *configs.Resources) ([]systemdDbus.Property, err
     case devices.CharDevice:
       entry.Path = fmt.Sprintf("/dev/char/%d:%d", rule.Major, rule.Minor)
     }
-    // systemd will issue a warning if the path we give here doesn't exist.
-    // Since all of this logic is best-effort anyway (we manually set these
-    // rules separately to systemd) we can safely skip entries that don't
-    // have a corresponding path.
-    if _, err := os.Stat(entry.Path); err != nil {
-      logrus.Debugf("skipping device %s for systemd: %s", entry.Path, err)
-      continue
+    if sdVer < 240 {
+      // Old systemd versions use stat(2) on path to find out device major:minor
+      // numbers and type. If the path doesn't exist, it will not add the rule,
+      // emitting a warning instead.
+      // Since all of this logic is best-effort anyway (we manually set these
+      // rules separately to systemd) we can safely skip entries that don't
+      // have a corresponding path.
+      if _, err := os.Stat(entry.Path); err != nil {
+        continue
+      }
     }
   }
   deviceAllowList = append(deviceAllowList, entry)
@@ -343,32 +347,52 @@ func isUnitExists(err error) bool {
   return isDbusError(err, "org.freedesktop.systemd1.UnitExists")
 }
 
-func startUnit(cm *dbusConnManager, unitName string, properties []systemdDbus.Property) error {
+func startUnit(cm *dbusConnManager, unitName string, properties []systemdDbus.Property, ignoreExist bool) error {
   statusChan := make(chan string, 1)
+  retry := true
+
+retry:
   err := cm.retryOnDisconnect(func(c *systemdDbus.Conn) error {
     _, err := c.StartTransientUnitContext(context.TODO(), unitName, "replace", properties, statusChan)
     return err
   })
-  if err == nil {
-    timeout := time.NewTimer(30 * time.Second)
-    defer timeout.Stop()
-
-    select {
-    case s := <-statusChan:
-      close(statusChan)
-      // Please refer to https://pkg.go.dev/github.com/coreos/go-systemd/v22/dbus#Conn.StartUnit
-      if s != "done" {
-        resetFailedUnit(cm, unitName)
-        return fmt.Errorf("error creating systemd unit `%s`: got `%s`", unitName, s)
-      }
-    case <-timeout.C:
+  if err != nil {
+    if !isUnitExists(err) {
+      return err
+    }
+    if ignoreExist {
+      // TODO: remove this hack.
+      // This is kubelet making sure a slice exists (see
+      // https://github.com/opencontainers/runc/pull/1124).
+      return nil
+    }
+    if retry {
+      // In case a unit with the same name exists, this may
+      // be a leftover failed unit. Reset it, so systemd can
+      // remove it, and retry once.
       resetFailedUnit(cm, unitName)
-      return errors.New("Timeout waiting for systemd to create " + unitName)
+      retry = false
+      goto retry
     }
-  } else if !isUnitExists(err) {
     return err
   }
+  timeout := time.NewTimer(30 * time.Second)
+  defer timeout.Stop()
+
+  select {
+  case s := <-statusChan:
+    close(statusChan)
+    // Please refer to https://pkg.go.dev/github.com/coreos/go-systemd/v22/dbus#Conn.StartUnit
+    if s != "done" {
+      resetFailedUnit(cm, unitName)
+      return fmt.Errorf("error creating systemd unit `%s`: got `%s`", unitName, s)
+    }
+  case <-timeout.C:
+    resetFailedUnit(cm, unitName)
+    return errors.New("Timeout waiting for systemd to create " + unitName)
+  }
+  return nil
 }
diff --git a/cluster-autoscaler/vendor/github.com/opencontainers/runc/libcontainer/cgroups/systemd/cpuset.go b/cluster-autoscaler/vendor/github.com/opencontainers/runc/libcontainer/cgroups/systemd/cpuset.go
index 83d10dd705fd..dd474cf1b168 100644
--- a/cluster-autoscaler/vendor/github.com/opencontainers/runc/libcontainer/cgroups/systemd/cpuset.go
+++ b/cluster-autoscaler/vendor/github.com/opencontainers/runc/libcontainer/cgroups/systemd/cpuset.go
@@ -51,5 +51,10 @@ func RangeToBits(str string) ([]byte, error) {
     // do not allow empty values
     return nil, errors.New("empty value")
   }
+
+  // fit cpuset parsing order in systemd
+  for l, r := 0, len(ret)-1; l < r; l, r = l+1, r-1 {
+    ret[l], ret[r] = ret[r], ret[l]
+  }
   return ret, nil
 }
diff --git a/cluster-autoscaler/vendor/github.com/opencontainers/runc/libcontainer/cgroups/systemd/v1.go b/cluster-autoscaler/vendor/github.com/opencontainers/runc/libcontainer/cgroups/systemd/v1.go
index a74a05a5cd05..fe036b3bda5d 100644
--- a/cluster-autoscaler/vendor/github.com/opencontainers/runc/libcontainer/cgroups/systemd/v1.go
+++ b/cluster-autoscaler/vendor/github.com/opencontainers/runc/libcontainer/cgroups/systemd/v1.go
@@ -71,12 +71,13 @@ var legacySubsystems = []subsystem{
   &fs.NetClsGroup{},
   &fs.NameGroup{GroupName: "name=systemd"},
   &fs.RdmaGroup{},
+  &fs.NameGroup{GroupName: "misc"},
 }
 
 func genV1ResourcesProperties(r *configs.Resources, cm *dbusConnManager) ([]systemdDbus.Property, error) {
   var properties []systemdDbus.Property
 
-  deviceProperties, err := generateDeviceProperties(r)
+  deviceProperties, err := generateDeviceProperties(r, systemdVersion(cm))
   if err != nil {
     return nil, err
   }
@@ -206,7 +207,7 @@ func (m *legacyManager) Apply(pid int) error {
 
   properties = append(properties, c.SystemdProps...)
 
-  if err := startUnit(m.dbus, unitName, properties); err != nil {
+  if err := startUnit(m.dbus, unitName, properties, pid == -1); err != nil {
     return err
   }
 
@@ -273,14 +274,7 @@ func getSubsystemPath(slice, unit, subsystem string) (string, error) {
     return "", err
   }
 
-  initPath, err := cgroups.GetInitCgroup(subsystem)
-  if err != nil {
-    return "", err
-  }
-  // if pid 1 is systemd 226 or later, it will be in init.scope, not the root
-  initPath = strings.TrimSuffix(filepath.Clean(initPath), "init.scope")
-
-  return filepath.Join(mountpoint, initPath, slice, unit), nil
+  return filepath.Join(mountpoint, slice, unit), nil
 }
 
 func (m *legacyManager) Freeze(state configs.FreezerState) error {
diff --git a/cluster-autoscaler/vendor/github.com/opencontainers/runc/libcontainer/cgroups/systemd/v2.go b/cluster-autoscaler/vendor/github.com/opencontainers/runc/libcontainer/cgroups/systemd/v2.go
index de0cb974d460..919e5632f343 100644
--- a/cluster-autoscaler/vendor/github.com/opencontainers/runc/libcontainer/cgroups/systemd/v2.go
+++ b/cluster-autoscaler/vendor/github.com/opencontainers/runc/libcontainer/cgroups/systemd/v2.go
@@ -182,7 +182,7 @@ func genV2ResourcesProperties(r *configs.Resources, cm *dbusConnManager) ([]syst
   // aren't the end of the world, but it is a bit concerning. However
   // it's unclear if systemd removes all eBPF programs attached when
   // doing SetUnitProperties...
-  deviceProperties, err := generateDeviceProperties(r)
+  deviceProperties, err := generateDeviceProperties(r, systemdVersion(cm))
   if err != nil {
     return nil, err
   }
@@ -284,7 +284,7 @@ func (m *unifiedManager) Apply(pid int) error {
 
   properties = append(properties, c.SystemdProps...)
 
-  if err := startUnit(m.dbus, unitName, properties); err != nil {
+  if err := startUnit(m.dbus, unitName, properties, pid == -1); err != nil {
     return fmt.Errorf("unable to start unit %q (properties %+v): %w", unitName, properties, err)
   }
diff --git a/cluster-autoscaler/vendor/github.com/opencontainers/runc/libcontainer/cgroups/utils.go b/cluster-autoscaler/vendor/github.com/opencontainers/runc/libcontainer/cgroups/utils.go
index b32af4ee5302..fc4ae44a4853 100644
--- a/cluster-autoscaler/vendor/github.com/opencontainers/runc/libcontainer/cgroups/utils.go
+++ b/cluster-autoscaler/vendor/github.com/opencontainers/runc/libcontainer/cgroups/utils.go
@@ -162,8 +162,10 @@ func readProcsFile(dir string) ([]int, error) {
 
 // ParseCgroupFile parses the given cgroup file, typically /proc/self/cgroup
 // or /proc//cgroup, into a map of subsystems to cgroup paths, e.g.
-//   "cpu": "/user.slice/user-1000.slice"
-//   "pids": "/user.slice/user-1000.slice"
+//
+//	"cpu": "/user.slice/user-1000.slice"
+//	"pids": "/user.slice/user-1000.slice"
+//
 // etc.
 //
 // Note that for cgroup v2 unified hierarchy, there are no per-controller
diff --git a/cluster-autoscaler/vendor/github.com/opencontainers/runc/libcontainer/configs/validate/validator.go b/cluster-autoscaler/vendor/github.com/opencontainers/runc/libcontainer/configs/validate/validator.go
index 627621a58d86..4fbd308dadd6 100644
--- a/cluster-autoscaler/vendor/github.com/opencontainers/runc/libcontainer/configs/validate/validator.go
+++ b/cluster-autoscaler/vendor/github.com/opencontainers/runc/libcontainer/configs/validate/validator.go
@@ -131,9 +131,8 @@ func (v *ConfigValidator) cgroupnamespace(config *configs.Config) error {
 // convertSysctlVariableToDotsSeparator can return sysctl variables in dots separator format.
 // The '/' separator is also accepted in place of a '.'.
 // Convert the sysctl variables to dots separator format for validation.
-// More info:
-// https://man7.org/linux/man-pages/man8/sysctl.8.html
-// https://man7.org/linux/man-pages/man5/sysctl.d.5.html
+// More info: sysctl(8), sysctl.d(5).
+//
 // For example:
 // Input sysctl variable "net/ipv4/conf/eno2.100.rp_filter"
 // will return the converted value "net.ipv4.conf.eno2/100.rp_filter"
diff --git a/cluster-autoscaler/vendor/github.com/opencontainers/runc/libcontainer/container_linux.go b/cluster-autoscaler/vendor/github.com/opencontainers/runc/libcontainer/container_linux.go
index 9df830d8cdbb..dd61dfd3c90c 100644
--- a/cluster-autoscaler/vendor/github.com/opencontainers/runc/libcontainer/container_linux.go
+++ b/cluster-autoscaler/vendor/github.com/opencontainers/runc/libcontainer/container_linux.go
@@ -926,7 +926,7 @@ func (c *linuxContainer) criuSupportsExtNS(t configs.NamespaceType) bool {
 }
 
 func criuNsToKey(t configs.NamespaceType) string {
-  return "extRoot" + strings.Title(configs.NsName(t)) + "NS"
+  return "extRoot" + strings.Title(configs.NsName(t)) + "NS" //nolint:staticcheck // SA1019: strings.Title is deprecated
 }
 
 func (c *linuxContainer) handleCheckpointingExternalNamespaces(rpcOpts *criurpc.CriuOpts, t configs.NamespaceType) error {
diff --git a/cluster-autoscaler/vendor/github.com/opencontainers/runc/libcontainer/eaccess_go119.go b/cluster-autoscaler/vendor/github.com/opencontainers/runc/libcontainer/eaccess_go119.go
new file mode 100644
index 000000000000..cc1e2079a795
--- /dev/null
+++ b/cluster-autoscaler/vendor/github.com/opencontainers/runc/libcontainer/eaccess_go119.go
@@ -0,0 +1,17 @@
+//go:build !go1.20
+// +build !go1.20
+
+package libcontainer
+
+import "golang.org/x/sys/unix"
+
+func eaccess(path string) error {
+  // This check is similar to access(2) with X_OK except for
+  // setuid/setgid binaries where it checks against the effective
+  // (rather than real) uid and gid. It is not needed in go 1.20
+  // and beyond and will be removed later.
+
+  // Relies on code added in https://go-review.googlesource.com/c/sys/+/468877
+  // and older CLs linked from there.
+  return unix.Faccessat(unix.AT_FDCWD, path, unix.X_OK, unix.AT_EACCESS)
+}
diff --git a/cluster-autoscaler/vendor/github.com/opencontainers/runc/libcontainer/eaccess_stub.go b/cluster-autoscaler/vendor/github.com/opencontainers/runc/libcontainer/eaccess_stub.go
new file mode 100644
index 000000000000..7c049fd7aa02
--- /dev/null
+++ b/cluster-autoscaler/vendor/github.com/opencontainers/runc/libcontainer/eaccess_stub.go
@@ -0,0 +1,10 @@
+//go:build go1.20
+
+package libcontainer
+
+func eaccess(path string) error {
+  // Not needed in Go 1.20+ as the functionality is already in there
+  // (added by https://go.dev/cl/416115, https://go.dev/cl/414824,
+  // and fixed in Go 1.20.2 by https://go.dev/cl/469956).
+  return nil
+}
diff --git a/cluster-autoscaler/vendor/github.com/opencontainers/runc/libcontainer/factory_linux.go b/cluster-autoscaler/vendor/github.com/opencontainers/runc/libcontainer/factory_linux.go
index e6c71ac34e37..a1fa7de2d249 100644
--- a/cluster-autoscaler/vendor/github.com/opencontainers/runc/libcontainer/factory_linux.go
+++ b/cluster-autoscaler/vendor/github.com/opencontainers/runc/libcontainer/factory_linux.go
@@ -179,6 +179,12 @@ func (l *LinuxFactory) Create(id string, config *configs.Config) (Container, err
     return nil, fmt.Errorf("unable to get cgroup PIDs: %w", err)
   }
   if len(pids) != 0 {
+    if config.Cgroups.Systemd {
+      // systemd cgroup driver can't add a pid to an
+      // existing systemd unit and will return an
+      // error anyway, so let's error out early.
+      return nil, fmt.Errorf("container's cgroup is not empty: %d process(es) found", len(pids))
+    }
     // TODO: return an error.
     logrus.Warnf("container's cgroup is not empty: %d process(es) found", len(pids))
     logrus.Warn("DEPRECATED: running container in a non-empty cgroup won't be supported in runc 1.2; https://github.com/opencontainers/runc/issues/3132")
@@ -338,10 +344,9 @@ func (l *LinuxFactory) StartInitialization() (err error) {
 
   defer func() {
     if e := recover(); e != nil {
-      if e, ok := e.(error); ok {
-        err = fmt.Errorf("panic from initialization: %w, %s", e, debug.Stack())
+      if ee, ok := e.(error); ok {
+        err = fmt.Errorf("panic from initialization: %w, %s", ee, debug.Stack())
       } else {
-        //nolint:errorlint // here e is not of error type
         err = fmt.Errorf("panic from initialization: %v, %s", e, debug.Stack())
       }
     }
diff --git a/cluster-autoscaler/vendor/github.com/opencontainers/runc/libcontainer/init_linux.go b/cluster-autoscaler/vendor/github.com/opencontainers/runc/libcontainer/init_linux.go
index 1e5c394c3e06..2e4c59353c83 100644
--- a/cluster-autoscaler/vendor/github.com/opencontainers/runc/libcontainer/init_linux.go
+++ b/cluster-autoscaler/vendor/github.com/opencontainers/runc/libcontainer/init_linux.go
@@ -411,8 +411,9 @@ func fixStdioPermissions(u *user.ExecUser) error {
       return &os.PathError{Op: "fstat", Path: file.Name(), Err: err}
     }
 
-    // Skip chown if uid is already the one we want.
-    if int(s.Uid) == u.Uid {
+    // Skip chown if uid is already the one we want or any of the STDIO descriptors
+    // were redirected to /dev/null.
+		if int(s.Uid) == u.Uid || s.Rdev == null.Rdev {
 			continue
 		}
diff --git a/cluster-autoscaler/vendor/github.com/opencontainers/runc/libcontainer/rootfs_linux.go b/cluster-autoscaler/vendor/github.com/opencontainers/runc/libcontainer/rootfs_linux.go
index ec7638e4d512..c3f88fc7038b 100644
--- a/cluster-autoscaler/vendor/github.com/opencontainers/runc/libcontainer/rootfs_linux.go
+++ b/cluster-autoscaler/vendor/github.com/opencontainers/runc/libcontainer/rootfs_linux.go
@@ -329,26 +329,41 @@ func mountCgroupV2(m *configs.Mount, c *mountConfig) error {
 	if err := os.MkdirAll(dest, 0o755); err != nil {
 		return err
 	}
-	return utils.WithProcfd(c.root, m.Destination, func(procfd string) error {
-		if err := mount(m.Source, m.Destination, procfd, "cgroup2", uintptr(m.Flags), m.Data); err != nil {
-			// when we are in UserNS but CgroupNS is not unshared, we cannot mount cgroup2 (#2158)
-			if errors.Is(err, unix.EPERM) || errors.Is(err, unix.EBUSY) {
-				src := fs2.UnifiedMountpoint
-				if c.cgroupns && c.cgroup2Path != "" {
-					// Emulate cgroupns by bind-mounting
-					// the container cgroup path rather than
-					// the whole /sys/fs/cgroup.
-					src = c.cgroup2Path
-				}
-				err = mount(src, m.Destination, procfd, "", uintptr(m.Flags)|unix.MS_BIND, "")
-				if c.rootlessCgroups && errors.Is(err, unix.ENOENT) {
-					err = nil
-				}
-			}
-			return err
-		}
-		return nil
+	err = utils.WithProcfd(c.root, m.Destination, func(procfd string) error {
+		return mount(m.Source, m.Destination, procfd, "cgroup2", uintptr(m.Flags), m.Data)
 	})
+	if err == nil || !(errors.Is(err, unix.EPERM) || errors.Is(err, unix.EBUSY)) {
+		return err
+	}
+
+	// When we are in UserNS but CgroupNS is not unshared, we cannot mount
+	// cgroup2 (#2158), so fall back to bind mount.
+	bindM := &configs.Mount{
+		Device:           "bind",
+		Source:           fs2.UnifiedMountpoint,
+		Destination:      m.Destination,
+		Flags:            unix.MS_BIND | m.Flags,
+		PropagationFlags: m.PropagationFlags,
+	}
+	if c.cgroupns && c.cgroup2Path != "" {
+		// Emulate cgroupns by bind-mounting the container cgroup path
+		// rather than the whole /sys/fs/cgroup.
+		bindM.Source = c.cgroup2Path
+	}
+	// mountToRootfs() handles remounting for MS_RDONLY.
+	// No need to set c.fd here, because mountToRootfs() calls utils.WithProcfd() by itself in mountPropagate().
+	err = mountToRootfs(bindM, c)
+	if c.rootlessCgroups && errors.Is(err, unix.ENOENT) {
+		// ENOENT (for `src = c.cgroup2Path`) happens when rootless runc is being executed
+		// outside the userns+mountns.
+		//
+		// Mask `/sys/fs/cgroup` to ensure it is read-only, even when `/sys` is mounted
+		// with `rbind,ro` (`runc spec --rootless` produces `rbind,ro` for `/sys`).
+		err = utils.WithProcfd(c.root, m.Destination, func(procfd string) error {
+			return maskPath(procfd, c.label)
+		})
+	}
+	return err
 }
 
 func doTmpfsCopyUp(m *configs.Mount, rootfs, mountLabel string) (Err error) {
@@ -398,32 +413,43 @@ func doTmpfsCopyUp(m *configs.Mount, rootfs, mountLabel string) (Err error) {
 func mountToRootfs(m *configs.Mount, c *mountConfig) error {
 	rootfs := c.root
-	mountLabel := c.label
-	mountFd := c.fd
-	dest, err := securejoin.SecureJoin(rootfs, m.Destination)
-	if err != nil {
-		return err
-	}
 
+	// procfs and sysfs are special because we need to ensure they are actually
+	// mounted on a specific path in a container without any funny business.
 	switch m.Device {
 	case "proc", "sysfs":
 		// If the destination already exists and is not a directory, we bail
-		// out This is to avoid mounting through a symlink or similar -- which
+		// out. This is to avoid mounting through a symlink or similar -- which
 		// has been a "fun" attack scenario in the past.
 		// TODO: This won't be necessary once we switch to libpathrs and we can
 		// stop all of these symlink-exchange attacks.
+		dest := filepath.Clean(m.Destination)
+		if !strings.HasPrefix(dest, rootfs) {
+			// Do not use securejoin as it resolves symlinks.
+			dest = filepath.Join(rootfs, dest)
+		}
 		if fi, err := os.Lstat(dest); err != nil {
 			if !os.IsNotExist(err) {
 				return err
 			}
-		} else if fi.Mode()&os.ModeDir == 0 {
+		} else if !fi.IsDir() {
 			return fmt.Errorf("filesystem %q must be mounted on ordinary directory", m.Device)
 		}
 		if err := os.MkdirAll(dest, 0o755); err != nil {
 			return err
 		}
-		// Selinux kernels do not support labeling of /proc or /sys
+		// Selinux kernels do not support labeling of /proc or /sys.
 		return mountPropagate(m, rootfs, "", nil)
+	}
+
+	mountLabel := c.label
+	mountFd := c.fd
+	dest, err := securejoin.SecureJoin(rootfs, m.Destination)
+	if err != nil {
+		return err
+	}
+
+	switch m.Device {
 	case "mqueue":
 		if err := os.MkdirAll(dest, 0o755); err != nil {
 			return err
diff --git a/cluster-autoscaler/vendor/github.com/opencontainers/runc/libcontainer/standard_init_linux.go b/cluster-autoscaler/vendor/github.com/opencontainers/runc/libcontainer/standard_init_linux.go
index 081d1503a3f3..c09a7bed30ea 100644
--- a/cluster-autoscaler/vendor/github.com/opencontainers/runc/libcontainer/standard_init_linux.go
+++ b/cluster-autoscaler/vendor/github.com/opencontainers/runc/libcontainer/standard_init_linux.go
@@ -198,11 +198,12 @@ func (l *linuxStandardInit) Init() error {
 	if err != nil {
 		return err
 	}
-	// exec.LookPath might return no error for an executable residing on a
-	// file system mounted with noexec flag, so perform this extra check
-	// now while we can still return a proper error.
-	if err := system.Eaccess(name); err != nil {
-		return &os.PathError{Op: "exec", Path: name, Err: err}
+	// exec.LookPath in Go < 1.20 might return no error for an executable
+	// residing on a file system mounted with noexec flag, so perform this
+	// extra check now while we can still return a proper error.
+	// TODO: remove this once go < 1.20 is not supported.
+	if err := eaccess(name); err != nil {
+		return &os.PathError{Op: "eaccess", Path: name, Err: err}
 	}
 
 	// Set seccomp as close to execve as possible, so as few syscalls take
diff --git a/cluster-autoscaler/vendor/github.com/opencontainers/runc/libcontainer/sync.go b/cluster-autoscaler/vendor/github.com/opencontainers/runc/libcontainer/sync.go
index c9a23ef3a760..25dc28630710 100644
--- a/cluster-autoscaler/vendor/github.com/opencontainers/runc/libcontainer/sync.go
+++ b/cluster-autoscaler/vendor/github.com/opencontainers/runc/libcontainer/sync.go
@@ -15,16 +15,16 @@ type syncType string
 // during container setup. They come in pairs (with procError being a generic
 // response which is followed by an &initError).
 //
-//     [ child ] <-> [ parent ]
+//	[ child ] <-> [ parent ]
 //
-//     procHooks --> [run hooks]
-//               <-- procResume
+//	procHooks --> [run hooks]
+//	          <-- procResume
 //
-//     procReady --> [final setup]
-//               <-- procRun
+//	procReady --> [final setup]
+//	          <-- procRun
 //
-//     procSeccomp --> [pick up seccomp fd with pidfd_getfd()]
-//                 <-- procSeccompDone
+//	procSeccomp --> [pick up seccomp fd with pidfd_getfd()]
+//	            <-- procSeccompDone
 const (
 	procError   syncType = "procError"
 	procReady   syncType = "procReady"
diff --git a/cluster-autoscaler/vendor/github.com/opencontainers/runc/libcontainer/system/linux.go b/cluster-autoscaler/vendor/github.com/opencontainers/runc/libcontainer/system/linux.go
index 039059a444cc..e1d6eb18034c 100644
--- a/cluster-autoscaler/vendor/github.com/opencontainers/runc/libcontainer/system/linux.go
+++ b/cluster-autoscaler/vendor/github.com/opencontainers/runc/libcontainer/system/linux.go
@@ -31,25 +31,6 @@ func (p ParentDeathSignal) Set() error {
 	return SetParentDeathSignal(uintptr(p))
 }
 
-// Eaccess is similar to unix.Access except for setuid/setgid binaries
-// it checks against the effective (rather than real) uid and gid.
-func Eaccess(path string) error {
-	err := unix.Faccessat2(unix.AT_FDCWD, path, unix.X_OK, unix.AT_EACCESS)
-	if err != unix.ENOSYS && err != unix.EPERM { //nolint:errorlint // unix errors are bare
-		return err
-	}
-
-	// Faccessat2() not available; check if we are a set[ug]id binary.
-	if os.Getuid() == os.Geteuid() && os.Getgid() == os.Getegid() {
-		// For a non-set[ug]id binary, use access(2).
-		return unix.Access(path, unix.X_OK)
-	}
-
-	// For a setuid/setgid binary, there is no fallback way
-	// so assume we can execute the binary.
-	return nil
-}
-
 func Execv(cmd string, args []string, env []string) error {
 	name, err := exec.LookPath(cmd)
 	if err != nil {
diff --git a/cluster-autoscaler/vendor/github.com/opencontainers/runc/libcontainer/user/user.go b/cluster-autoscaler/vendor/github.com/opencontainers/runc/libcontainer/user/user.go
index 2473c5eaddce..a1e216683d90 100644
--- a/cluster-autoscaler/vendor/github.com/opencontainers/runc/libcontainer/user/user.go
+++ b/cluster-autoscaler/vendor/github.com/opencontainers/runc/libcontainer/user/user.go
@@ -280,13 +280,13 @@ func GetExecUserPath(userSpec string, defaults *ExecUser, passwdPath, groupPath
 // found in any entry in passwd and group respectively.
 //
 // Examples of valid user specifications are:
-//     * ""
-//     * "user"
-//     * "uid"
-//     * "user:group"
-//     * "uid:gid
-//     * "user:gid"
-//     * "uid:group"
+//   - ""
+//   - "user"
+//   - "uid"
+//   - "user:group"
+//   - "uid:gid
+//   - "user:gid"
+//   - "uid:group"
 //
 // It should be noted that if you specify a numeric user or group id, they will
 // not be evaluated as usernames (only the metadata will be filled). So attempting
diff --git a/cluster-autoscaler/vendor/github.com/prometheus/client_golang/prometheus/collectors/go_collector_latest.go b/cluster-autoscaler/vendor/github.com/prometheus/client_golang/prometheus/collectors/go_collector_latest.go
index 246c5ea943c8..2f5616894e7d 100644
--- a/cluster-autoscaler/vendor/github.com/prometheus/client_golang/prometheus/collectors/go_collector_latest.go
+++ b/cluster-autoscaler/vendor/github.com/prometheus/client_golang/prometheus/collectors/go_collector_latest.go
@@ -28,6 +28,8 @@ var (
 	MetricsAll = GoRuntimeMetricsRule{regexp.MustCompile("/.*")}
 	// MetricsGC allows only GC metrics to be collected from Go runtime.
 	// e.g. go_gc_cycles_automatic_gc_cycles_total
+	// NOTE: This does not include new class of "/cpu/classes/gc/..." metrics.
+	// Use custom metric rule to access those.
 	MetricsGC = GoRuntimeMetricsRule{regexp.MustCompile(`^/gc/.*`)}
 	// MetricsMemory allows only memory metrics to be collected from Go runtime.
 	// e.g. go_memory_classes_heap_free_bytes
diff --git a/cluster-autoscaler/vendor/github.com/prometheus/client_golang/prometheus/counter.go b/cluster-autoscaler/vendor/github.com/prometheus/client_golang/prometheus/counter.go
index a912b75a05b7..62de4dc59aae 100644
--- a/cluster-autoscaler/vendor/github.com/prometheus/client_golang/prometheus/counter.go
+++ b/cluster-autoscaler/vendor/github.com/prometheus/client_golang/prometheus/counter.go
@@ -59,6 +59,18 @@ type ExemplarAdder interface {
 // CounterOpts is an alias for Opts. See there for doc comments.
 type CounterOpts Opts
 
+// CounterVecOpts bundles the options to create a CounterVec metric.
+// It is mandatory to set CounterOpts, see there for mandatory fields. VariableLabels
+// is optional and can safely be left to its default value.
+type CounterVecOpts struct {
+	CounterOpts
+
+	// VariableLabels are used to partition the metric vector by the given set
+	// of labels. Each label value will be constrained with the optional Contraint
+	// function, if provided.
+	VariableLabels ConstrainableLabels
+}
+
 // NewCounter creates a new Counter based on the provided CounterOpts.
 //
 // The returned implementation also implements ExemplarAdder. It is safe to
@@ -174,16 +186,24 @@ type CounterVec struct {
 // NewCounterVec creates a new CounterVec based on the provided CounterOpts and
 // partitioned by the given label names.
 func NewCounterVec(opts CounterOpts, labelNames []string) *CounterVec {
-	desc := NewDesc(
+	return V2.NewCounterVec(CounterVecOpts{
+		CounterOpts:    opts,
+		VariableLabels: UnconstrainedLabels(labelNames),
+	})
+}
+
+// NewCounterVec creates a new CounterVec based on the provided CounterVecOpts.
+func (v2) NewCounterVec(opts CounterVecOpts) *CounterVec {
+	desc := V2.NewDesc(
 		BuildFQName(opts.Namespace, opts.Subsystem, opts.Name),
 		opts.Help,
-		labelNames,
+		opts.VariableLabels,
 		opts.ConstLabels,
 	)
 	return &CounterVec{
 		MetricVec: NewMetricVec(desc, func(lvs ...string) Metric {
 			if len(lvs) != len(desc.variableLabels) {
-				panic(makeInconsistentCardinalityError(desc.fqName, desc.variableLabels, lvs))
+				panic(makeInconsistentCardinalityError(desc.fqName, desc.variableLabels.labelNames(), lvs))
 			}
 			result := &counter{desc: desc, labelPairs: MakeLabelPairs(desc, lvs), now: time.Now}
 			result.init(result) // Init self-collection.
diff --git a/cluster-autoscaler/vendor/github.com/prometheus/client_golang/prometheus/desc.go b/cluster-autoscaler/vendor/github.com/prometheus/client_golang/prometheus/desc.go
index 8bc5e44e2fc4..deedc2dfbe75 100644
--- a/cluster-autoscaler/vendor/github.com/prometheus/client_golang/prometheus/desc.go
+++ b/cluster-autoscaler/vendor/github.com/prometheus/client_golang/prometheus/desc.go
@@ -14,20 +14,16 @@
 package prometheus
 
 import (
-	"errors"
 	"fmt"
 	"sort"
 	"strings"
 
 	"github.com/cespare/xxhash/v2"
-
-	"github.com/prometheus/client_golang/prometheus/internal"
-
-	//nolint:staticcheck // Ignore SA1019. Need to keep deprecated package for compatibility.
-	"github.com/golang/protobuf/proto"
+	dto "github.com/prometheus/client_model/go"
 	"github.com/prometheus/common/model"
+	"google.golang.org/protobuf/proto"
 
-	dto "github.com/prometheus/client_model/go"
+	"github.com/prometheus/client_golang/prometheus/internal"
 )
 
 // Desc is the descriptor used by every Prometheus Metric. It is essentially
@@ -54,9 +50,9 @@ type Desc struct {
 	// constLabelPairs contains precalculated DTO label pairs based on
 	// the constant labels.
 	constLabelPairs []*dto.LabelPair
-	// variableLabels contains names of labels for which the metric
-	// maintains variable values.
-	variableLabels []string
+	// variableLabels contains names of labels and normalization function for
+	// which the metric maintains variable values.
+	variableLabels ConstrainedLabels
 	// id is a hash of the values of the ConstLabels and fqName. This
 	// must be unique among all registered descriptors and can therefore be
 	// used as an identifier of the descriptor.
@@ -80,10 +76,24 @@ type Desc struct {
 // For constLabels, the label values are constant. Therefore, they are fully
 // specified in the Desc. See the Collector example for a usage pattern.
 func NewDesc(fqName, help string, variableLabels []string, constLabels Labels) *Desc {
+	return V2.NewDesc(fqName, help, UnconstrainedLabels(variableLabels), constLabels)
+}
+
+// NewDesc allocates and initializes a new Desc. Errors are recorded in the Desc
+// and will be reported on registration time. variableLabels and constLabels can
+// be nil if no such labels should be set. fqName must not be empty.
+//
+// variableLabels only contain the label names and normalization functions. Their
+// label values are variable and therefore not part of the Desc. (They are managed
+// within the Metric.)
+//
+// For constLabels, the label values are constant. Therefore, they are fully
+// specified in the Desc. See the Collector example for a usage pattern.
+func (v2) NewDesc(fqName, help string, variableLabels ConstrainableLabels, constLabels Labels) *Desc {
 	d := &Desc{
 		fqName:         fqName,
 		help:           help,
-		variableLabels: variableLabels,
+		variableLabels: variableLabels.constrainedLabels(),
 	}
 	if !model.IsValidMetricName(model.LabelValue(fqName)) {
 		d.err = fmt.Errorf("%q is not a valid metric name", fqName)
@@ -93,7 +103,7 @@ func NewDesc(fqName, help string, variableLabels []string, constLabels Labels) *
 	// their sorted label names) plus the fqName (at position 0).
 	labelValues := make([]string, 1, len(constLabels)+1)
 	labelValues[0] = fqName
-	labelNames := make([]string, 0, len(constLabels)+len(variableLabels))
+	labelNames := make([]string, 0, len(constLabels)+len(d.variableLabels))
 	labelNameSet := map[string]struct{}{}
 	// First add only the const label names and sort them...
 	for labelName := range constLabels {
@@ -118,16 +128,16 @@ func NewDesc(fqName, help string, variableLabels []string, constLabels Labels) *
 	// Now add the variable label names, but prefix them with something that
 	// cannot be in a regular label name. That prevents matching the label
 	// dimension with a different mix between preset and variable labels.
-	for _, labelName := range variableLabels {
-		if !checkLabelName(labelName) {
-			d.err = fmt.Errorf("%q is not a valid label name for metric %q", labelName, fqName)
+	for _, label := range d.variableLabels {
+		if !checkLabelName(label.Name) {
+			d.err = fmt.Errorf("%q is not a valid label name for metric %q", label.Name, fqName)
 			return d
 		}
-		labelNames = append(labelNames, "$"+labelName)
-		labelNameSet[labelName] = struct{}{}
+		labelNames = append(labelNames, "$"+label.Name)
+		labelNameSet[label.Name] = struct{}{}
 	}
 	if len(labelNames) != len(labelNameSet) {
-		d.err = errors.New("duplicate label names")
+		d.err = fmt.Errorf("duplicate label names in constant and variable labels for metric %q", fqName)
 		return d
 	}
diff --git a/cluster-autoscaler/vendor/github.com/prometheus/client_golang/prometheus/doc.go b/cluster-autoscaler/vendor/github.com/prometheus/client_golang/prometheus/doc.go
index 811072cbd54f..962608f02c65 100644
--- a/cluster-autoscaler/vendor/github.com/prometheus/client_golang/prometheus/doc.go
+++ b/cluster-autoscaler/vendor/github.com/prometheus/client_golang/prometheus/doc.go
@@ -37,35 +37,35 @@
 //
 //	type metrics struct {
 //		cpuTemp prometheus.Gauge
-//		hdFailures *prometheus.CounterVec
+//		hdFailures *prometheus.CounterVec
 //	}
 //
 //	func NewMetrics(reg prometheus.Registerer) *metrics {
-//		m := &metrics{
-//			cpuTemp: prometheus.NewGauge(prometheus.GaugeOpts{
-//				Name: "cpu_temperature_celsius",
-//				Help: "Current temperature of the CPU.",
-//			}),
-//			hdFailures: prometheus.NewCounterVec(
-//				prometheus.CounterOpts{
-//					Name: "hd_errors_total",
-//					Help: "Number of hard-disk errors.",
-//				},
-//				[]string{"device"},
-//			),
-//		}
-//		reg.MustRegister(m.cpuTemp)
-//		reg.MustRegister(m.hdFailures)
-//		return m
+//	m := &metrics{
+//		cpuTemp: prometheus.NewGauge(prometheus.GaugeOpts{
+//			Name: "cpu_temperature_celsius",
+//			Help: "Current temperature of the CPU.",
+//		}),
+//		hdFailures: prometheus.NewCounterVec(
+//			prometheus.CounterOpts{
+//				Name: "hd_errors_total",
+//				Help: "Number of hard-disk errors.",
+//			},
+//			[]string{"device"},
+//		),
+//	}
+//	reg.MustRegister(m.cpuTemp)
+//	reg.MustRegister(m.hdFailures)
+//	return m
 //	}
 //
 //	func main() {
-//		// Create a non-global registry.
-//		reg := prometheus.NewRegistry()
+//	// Create a non-global registry.
+//	reg := prometheus.NewRegistry()
 //
-//		// Create new metrics and register them using the custom registry.
-//		m := NewMetrics(reg)
-//		// Set values for the new created metrics.
+//	// Create new metrics and register them using the custom registry.
+//	m := NewMetrics(reg)
+//	// Set values for the new created metrics.
 //	m.cpuTemp.Set(65.3)
 //	m.hdFailures.With(prometheus.Labels{"device":"/dev/sda"}).Inc()
 //
diff --git a/cluster-autoscaler/vendor/github.com/prometheus/client_golang/prometheus/gauge.go b/cluster-autoscaler/vendor/github.com/prometheus/client_golang/prometheus/gauge.go
index 21271a5bb462..f1ea6c76f756 100644
--- a/cluster-autoscaler/vendor/github.com/prometheus/client_golang/prometheus/gauge.go
+++ b/cluster-autoscaler/vendor/github.com/prometheus/client_golang/prometheus/gauge.go
@@ -55,6 +55,18 @@ type Gauge interface {
 // GaugeOpts is an alias for Opts. See there for doc comments.
 type GaugeOpts Opts
 
+// GaugeVecOpts bundles the options to create a GaugeVec metric.
+// It is mandatory to set GaugeOpts, see there for mandatory fields. VariableLabels
+// is optional and can safely be left to its default value.
+type GaugeVecOpts struct {
+	GaugeOpts
+
+	// VariableLabels are used to partition the metric vector by the given set
+	// of labels. Each label value will be constrained with the optional Contraint
+	// function, if provided.
+	VariableLabels ConstrainableLabels
+}
+
 // NewGauge creates a new Gauge based on the provided GaugeOpts.
 //
 // The returned implementation is optimized for a fast Set method. If you have a
@@ -138,16 +150,24 @@ type GaugeVec struct {
 // NewGaugeVec creates a new GaugeVec based on the provided GaugeOpts and
 // partitioned by the given label names.
 func NewGaugeVec(opts GaugeOpts, labelNames []string) *GaugeVec {
-	desc := NewDesc(
+	return V2.NewGaugeVec(GaugeVecOpts{
+		GaugeOpts:      opts,
+		VariableLabels: UnconstrainedLabels(labelNames),
+	})
+}
+
+// NewGaugeVec creates a new GaugeVec based on the provided GaugeVecOpts.
+func (v2) NewGaugeVec(opts GaugeVecOpts) *GaugeVec {
+	desc := V2.NewDesc(
 		BuildFQName(opts.Namespace, opts.Subsystem, opts.Name),
 		opts.Help,
-		labelNames,
+		opts.VariableLabels,
 		opts.ConstLabels,
 	)
 	return &GaugeVec{
 		MetricVec: NewMetricVec(desc, func(lvs ...string) Metric {
 			if len(lvs) != len(desc.variableLabels) {
-				panic(makeInconsistentCardinalityError(desc.fqName, desc.variableLabels, lvs))
+				panic(makeInconsistentCardinalityError(desc.fqName, desc.variableLabels.labelNames(), lvs))
 			}
 			result := &gauge{desc: desc, labelPairs: MakeLabelPairs(desc, lvs)}
 			result.init(result) // Init self-collection.
diff --git a/cluster-autoscaler/vendor/github.com/prometheus/client_golang/prometheus/go_collector_latest.go b/cluster-autoscaler/vendor/github.com/prometheus/client_golang/prometheus/go_collector_latest.go
index 3a2d55e84b1e..2d8d9f64f430 100644
--- a/cluster-autoscaler/vendor/github.com/prometheus/client_golang/prometheus/go_collector_latest.go
+++ b/cluster-autoscaler/vendor/github.com/prometheus/client_golang/prometheus/go_collector_latest.go
@@ -23,11 +23,10 @@ import (
 	"strings"
 	"sync"
 
-	//nolint:staticcheck // Ignore SA1019. Need to keep deprecated package for compatibility.
-	"github.com/golang/protobuf/proto"
-	dto "github.com/prometheus/client_model/go"
-
 	"github.com/prometheus/client_golang/prometheus/internal"
+
+	dto "github.com/prometheus/client_model/go"
+	"google.golang.org/protobuf/proto"
 )
 
 const (
diff --git a/cluster-autoscaler/vendor/github.com/prometheus/client_golang/prometheus/histogram.go b/cluster-autoscaler/vendor/github.com/prometheus/client_golang/prometheus/histogram.go
index 4c873a01c3d3..8d818afe90d7 100644
--- a/cluster-autoscaler/vendor/github.com/prometheus/client_golang/prometheus/histogram.go
+++ b/cluster-autoscaler/vendor/github.com/prometheus/client_golang/prometheus/histogram.go
@@ -22,10 +22,9 @@ import (
 	"sync/atomic"
 	"time"
 
-	//nolint:staticcheck // Ignore SA1019. Need to keep deprecated package for compatibility.
-	"github.com/golang/protobuf/proto"
-
 	dto "github.com/prometheus/client_model/go"
+
+	"google.golang.org/protobuf/proto"
 )
 
 // nativeHistogramBounds for the frac of observed values. Only relevant for
@@ -402,7 +401,7 @@ type HistogramOpts struct {
 	// Histogram by a Prometheus server with that feature enabled (requires
 	// Prometheus v2.40+). Sparse buckets are exponential buckets covering
 	// the whole float64 range (with the exception of the “zero” bucket, see
-	// SparseBucketsZeroThreshold below). From any one bucket to the next,
+	// NativeHistogramZeroThreshold below). From any one bucket to the next,
 	// the width of the bucket grows by a constant
 	// factor. NativeHistogramBucketFactor provides an upper bound for this
 	// factor (exception see below). The smaller
@@ -433,7 +432,7 @@ type HistogramOpts struct {
 	// bucket. For best results, this should be close to a bucket
 	// boundary. This is usually the case if picking a power of two. If
 	// NativeHistogramZeroThreshold is left at zero,
-	// DefSparseBucketsZeroThreshold is used as the threshold. To configure
+	// DefNativeHistogramZeroThreshold is used as the threshold. To configure
 	// a zero bucket with an actual threshold of zero (i.e. only
 	// observations of precisely zero will go into the zero bucket), set
 	// NativeHistogramZeroThreshold to the NativeHistogramZeroThresholdZero
@@ -469,6 +468,18 @@ type HistogramOpts struct {
 	NativeHistogramMaxZeroThreshold float64
 }
 
+// HistogramVecOpts bundles the options to create a HistogramVec metric.
+// It is mandatory to set HistogramOpts, see there for mandatory fields. VariableLabels
+// is optional and can safely be left to its default value.
+type HistogramVecOpts struct {
+	HistogramOpts
+
+	// VariableLabels are used to partition the metric vector by the given set
+	// of labels. Each label value will be constrained with the optional Contraint
+	// function, if provided.
+	VariableLabels ConstrainableLabels
+}
+
 // NewHistogram creates a new Histogram based on the provided HistogramOpts. It
 // panics if the buckets in HistogramOpts are not in strictly increasing order.
 //
@@ -489,11 +500,11 @@ func NewHistogram(opts HistogramOpts) Histogram {
 
 func newHistogram(desc *Desc, opts HistogramOpts, labelValues ...string) Histogram {
 	if len(desc.variableLabels) != len(labelValues) {
-		panic(makeInconsistentCardinalityError(desc.fqName, desc.variableLabels, labelValues))
+		panic(makeInconsistentCardinalityError(desc.fqName, desc.variableLabels.labelNames(), labelValues))
 	}
 
 	for _, n := range desc.variableLabels {
-		if n == bucketLabel {
+		if n.Name == bucketLabel {
 			panic(errBucketLabelNotAllowed)
 		}
 	}
@@ -544,16 +555,12 @@ func newHistogram(desc *Desc, opts HistogramOpts, labelValues ...string) Histogr
 	}
 	// Finally we know the final length of h.upperBounds and can make buckets
 	// for both counts as well as exemplars:
-	h.counts[0] = &histogramCounts{
-		buckets: make([]uint64, len(h.upperBounds)),
-		nativeHistogramZeroThresholdBits: math.Float64bits(h.nativeHistogramZeroThreshold),
-		nativeHistogramSchema: h.nativeHistogramSchema,
-	}
-	h.counts[1] = &histogramCounts{
-		buckets: make([]uint64, len(h.upperBounds)),
-		nativeHistogramZeroThresholdBits: math.Float64bits(h.nativeHistogramZeroThreshold),
-		nativeHistogramSchema: h.nativeHistogramSchema,
-	}
+	h.counts[0] = &histogramCounts{buckets: make([]uint64, len(h.upperBounds))}
+	atomic.StoreUint64(&h.counts[0].nativeHistogramZeroThresholdBits, math.Float64bits(h.nativeHistogramZeroThreshold))
+	atomic.StoreInt32(&h.counts[0].nativeHistogramSchema, h.nativeHistogramSchema)
+	h.counts[1] = &histogramCounts{buckets: make([]uint64, len(h.upperBounds))}
+	atomic.StoreUint64(&h.counts[1].nativeHistogramZeroThresholdBits, math.Float64bits(h.nativeHistogramZeroThreshold))
+	atomic.StoreInt32(&h.counts[1].nativeHistogramSchema, h.nativeHistogramSchema)
 	h.exemplars = make([]atomic.Value, len(h.upperBounds)+1)
 
 	h.init(h) // Init self-collection.
@@ -632,8 +639,8 @@ func (hc *histogramCounts) observe(v float64, bucket int, doSparse bool) {
 		if frac == 0.5 {
 			key--
 		}
-		div := 1 << -schema
-		key = (key + div - 1) / div
+		offset := (1 << -schema) - 1
+		key = (key + offset) >> -schema
 	}
 	if isInf {
 		key++
@@ -810,7 +817,7 @@ func (h *histogram) observe(v float64, bucket int) {
 	}
 }
 
-// limitSparsebuckets applies a strategy to limit the number of populated sparse
+// limitBuckets applies a strategy to limit the number of populated sparse
 // buckets. It's generally best effort, and there are situations where the
 // number can go higher (if even the lowest resolution isn't enough to reduce
 // the number sufficiently, or if the provided counts aren't fully updated yet
@@ -1034,15 +1041,23 @@ type HistogramVec struct {
 // NewHistogramVec creates a new HistogramVec based on the provided HistogramOpts and
 // partitioned by the given label names.
 func NewHistogramVec(opts HistogramOpts, labelNames []string) *HistogramVec {
-	desc := NewDesc(
+	return V2.NewHistogramVec(HistogramVecOpts{
+		HistogramOpts:  opts,
+		VariableLabels: UnconstrainedLabels(labelNames),
+	})
+}
+
+// NewHistogramVec creates a new HistogramVec based on the provided HistogramVecOpts.
+func (v2) NewHistogramVec(opts HistogramVecOpts) *HistogramVec {
+	desc := V2.NewDesc(
 		BuildFQName(opts.Namespace, opts.Subsystem, opts.Name),
 		opts.Help,
-		labelNames,
+		opts.VariableLabels,
 		opts.ConstLabels,
 	)
 	return &HistogramVec{
 		MetricVec: NewMetricVec(desc, func(lvs ...string) Metric {
-			return newHistogram(desc, opts, lvs...)
+			return newHistogram(desc, opts.HistogramOpts, lvs...)
 		}),
 	}
 }
diff --git a/cluster-autoscaler/vendor/github.com/prometheus/client_golang/prometheus/labels.go b/cluster-autoscaler/vendor/github.com/prometheus/client_golang/prometheus/labels.go
index c1b8fad36aeb..63ff8683ce52 100644
--- a/cluster-autoscaler/vendor/github.com/prometheus/client_golang/prometheus/labels.go
+++ b/cluster-autoscaler/vendor/github.com/prometheus/client_golang/prometheus/labels.go
@@ -32,6 +32,78 @@ import (
 // create a Desc.
 type Labels map[string]string
 
+// ConstrainedLabels represents a label name and its constrain function
+// to normalize label values. This type is commonly used when constructing
+// metric vector Collectors.
+type ConstrainedLabel struct {
+	Name       string
+	Constraint func(string) string
+}
+
+func (cl ConstrainedLabel) Constrain(v string) string {
+	if cl.Constraint == nil {
+		return v
+	}
+	return cl.Constraint(v)
+}
+
+// ConstrainableLabels is an interface that allows creating of labels that can
+// be optionally constrained.
+//
+//	prometheus.V2().NewCounterVec(CounterVecOpts{
+//	  CounterOpts: {...}, // Usual CounterOpts fields
+//	  VariableLabels: []ConstrainedLabels{
+//	    {Name: "A"},
+//	    {Name: "B", Constraint: func(v string) string { ... }},
+//	  },
+//	})
+type ConstrainableLabels interface {
+	constrainedLabels() ConstrainedLabels
+	labelNames() []string
+}
+
+// ConstrainedLabels represents a collection of label name -> constrain function
+// to normalize label values. This type is commonly used when constructing
+// metric vector Collectors.
+type ConstrainedLabels []ConstrainedLabel
+
+func (cls ConstrainedLabels) constrainedLabels() ConstrainedLabels {
+	return cls
+}
+
+func (cls ConstrainedLabels) labelNames() []string {
+	names := make([]string, len(cls))
+	for i, label := range cls {
+		names[i] = label.Name
+	}
+	return names
+}
+
+// UnconstrainedLabels represents collection of label without any constraint on
+// their value. Thus, it is simply a collection of label names.
+//
+//	UnconstrainedLabels([]string{ "A", "B" })
+//
+// is equivalent to
+//
+//	ConstrainedLabels {
+//	  { Name: "A" },
+//	  { Name: "B" },
+//	}
+type UnconstrainedLabels []string
+
+func (uls UnconstrainedLabels) constrainedLabels() ConstrainedLabels {
+	constrainedLabels := make([]ConstrainedLabel, len(uls))
+	for i, l := range uls {
+		constrainedLabels[i] = ConstrainedLabel{Name: l}
+	}
+	return constrainedLabels
+}
+
+func (uls UnconstrainedLabels) labelNames() []string {
+	return uls
+}
+
 // reservedLabelPrefix is a prefix which is not legal in user-supplied
 // label names.
 const reservedLabelPrefix = "__"
diff --git a/cluster-autoscaler/vendor/github.com/prometheus/client_golang/prometheus/metric.go b/cluster-autoscaler/vendor/github.com/prometheus/client_golang/prometheus/metric.go
index b5119c50410e..07bbc9d76871 100644
--- a/cluster-autoscaler/vendor/github.com/prometheus/client_golang/prometheus/metric.go
+++ b/cluster-autoscaler/vendor/github.com/prometheus/client_golang/prometheus/metric.go
@@ -20,11 +20,9 @@ import (
 	"strings"
 	"time"
 
-	//nolint:staticcheck // Ignore SA1019. Need to keep deprecated package for compatibility.
-	"github.com/golang/protobuf/proto"
-	"github.com/prometheus/common/model"
-
 	dto "github.com/prometheus/client_model/go"
+	"github.com/prometheus/common/model"
+	"google.golang.org/protobuf/proto"
 )
 
 var separatorByteSlice = []byte{model.SeparatorByte} // For convenient use with xxhash.
diff --git a/cluster-autoscaler/vendor/github.com/prometheus/client_golang/prometheus/promhttp/http.go b/cluster-autoscaler/vendor/github.com/prometheus/client_golang/prometheus/promhttp/http.go
index a4cc9810b072..09b8d2fbead0 100644
--- a/cluster-autoscaler/vendor/github.com/prometheus/client_golang/prometheus/promhttp/http.go
+++ b/cluster-autoscaler/vendor/github.com/prometheus/client_golang/prometheus/promhttp/http.go
@@ -37,6 +37,7 @@ import (
 	"fmt"
 	"io"
 	"net/http"
+	"strconv"
 	"strings"
 	"sync"
 	"time"
@@ -47,9 +48,10 @@ import (
 )
 
 const (
-	contentTypeHeader     = "Content-Type"
-	contentEncodingHeader = "Content-Encoding"
-	acceptEncodingHeader  = "Accept-Encoding"
+	contentTypeHeader      = "Content-Type"
+	contentEncodingHeader  = "Content-Encoding"
+	acceptEncodingHeader   = "Accept-Encoding"
+	processStartTimeHeader = "Process-Start-Time-Unix"
 )
 
 var gzipPool = sync.Pool{
@@ -121,6 +123,9 @@ func HandlerForTransactional(reg prometheus.TransactionalGatherer, opts HandlerO
 	}
 
 	h := http.HandlerFunc(func(rsp http.ResponseWriter, req *http.Request) {
+		if !opts.ProcessStartTime.IsZero() {
+			rsp.Header().Set(processStartTimeHeader, strconv.FormatInt(opts.ProcessStartTime.Unix(), 10))
+		}
 		if inFlightSem != nil {
 			select {
 			case inFlightSem <- struct{}{}: // All good, carry on.
@@ -366,6 +371,14 @@ type HandlerOpts struct {
 	// (which changes the identity of the resulting series on the Prometheus
 	// server).
 	EnableOpenMetrics bool
+	// ProcessStartTime allows setting process start timevalue that will be exposed
+	// with "Process-Start-Time-Unix" response header along with the metrics
+	// payload. This allow callers to have efficient transformations to cumulative
+	// counters (e.g. OpenTelemetry) or generally _created timestamp estimation per
+	// scrape target.
+	// NOTE: This feature is experimental and not covered by OpenMetrics or Prometheus
+	// exposition format.
+	ProcessStartTime time.Time
 }
 
 // gzipAccepted returns whether the client will accept gzip-encoded content.
diff --git a/cluster-autoscaler/vendor/github.com/prometheus/client_golang/prometheus/promhttp/instrument_client.go b/cluster-autoscaler/vendor/github.com/prometheus/client_golang/prometheus/promhttp/instrument_client.go index 21086781621f..d3482c40ca76 100644 --- a/cluster-autoscaler/vendor/github.com/prometheus/client_golang/prometheus/promhttp/instrument_client.go +++ b/cluster-autoscaler/vendor/github.com/prometheus/client_golang/prometheus/promhttp/instrument_client.go @@ -68,16 +68,17 @@ func InstrumentRoundTripperCounter(counter *prometheus.CounterVec, next http.Rou o.apply(rtOpts) } - code, method := checkLabels(counter) + // Curry the counter with dynamic labels before checking the remaining labels. + code, method := checkLabels(counter.MustCurryWith(rtOpts.emptyDynamicLabels())) return func(r *http.Request) (*http.Response, error) { resp, err := next.RoundTrip(r) if err == nil { - addWithExemplar( - counter.With(labels(code, method, r.Method, resp.StatusCode, rtOpts.extraMethods...)), - 1, - rtOpts.getExemplarFn(r.Context()), - ) + l := labels(code, method, r.Method, resp.StatusCode, rtOpts.extraMethods...) + for label, resolve := range rtOpts.extraLabelsFromCtx { + l[label] = resolve(resp.Request.Context()) + } + addWithExemplar(counter.With(l), 1, rtOpts.getExemplarFn(r.Context())) } return resp, err } @@ -110,17 +111,18 @@ func InstrumentRoundTripperDuration(obs prometheus.ObserverVec, next http.RoundT o.apply(rtOpts) } - code, method := checkLabels(obs) + // Curry the observer with dynamic labels before checking the remaining labels. 
+ code, method := checkLabels(obs.MustCurryWith(rtOpts.emptyDynamicLabels())) return func(r *http.Request) (*http.Response, error) { start := time.Now() resp, err := next.RoundTrip(r) if err == nil { - observeWithExemplar( - obs.With(labels(code, method, r.Method, resp.StatusCode, rtOpts.extraMethods...)), - time.Since(start).Seconds(), - rtOpts.getExemplarFn(r.Context()), - ) + l := labels(code, method, r.Method, resp.StatusCode, rtOpts.extraMethods...) + for label, resolve := range rtOpts.extraLabelsFromCtx { + l[label] = resolve(resp.Request.Context()) + } + observeWithExemplar(obs.With(l), time.Since(start).Seconds(), rtOpts.getExemplarFn(r.Context())) } return resp, err } diff --git a/cluster-autoscaler/vendor/github.com/prometheus/client_golang/prometheus/promhttp/instrument_server.go b/cluster-autoscaler/vendor/github.com/prometheus/client_golang/prometheus/promhttp/instrument_server.go index cca67a78a90d..3793036ad09b 100644 --- a/cluster-autoscaler/vendor/github.com/prometheus/client_golang/prometheus/promhttp/instrument_server.go +++ b/cluster-autoscaler/vendor/github.com/prometheus/client_golang/prometheus/promhttp/instrument_server.go @@ -87,7 +87,8 @@ func InstrumentHandlerDuration(obs prometheus.ObserverVec, next http.Handler, op o.apply(hOpts) } - code, method := checkLabels(obs) + // Curry the observer with dynamic labels before checking the remaining labels. + code, method := checkLabels(obs.MustCurryWith(hOpts.emptyDynamicLabels())) if code { return func(w http.ResponseWriter, r *http.Request) { @@ -95,23 +96,22 @@ func InstrumentHandlerDuration(obs prometheus.ObserverVec, next http.Handler, op d := newDelegator(w, nil) next.ServeHTTP(d, r) - observeWithExemplar( - obs.With(labels(code, method, r.Method, d.Status(), hOpts.extraMethods...)), - time.Since(now).Seconds(), - hOpts.getExemplarFn(r.Context()), - ) + l := labels(code, method, r.Method, d.Status(), hOpts.extraMethods...) 
+ for label, resolve := range hOpts.extraLabelsFromCtx { + l[label] = resolve(r.Context()) + } + observeWithExemplar(obs.With(l), time.Since(now).Seconds(), hOpts.getExemplarFn(r.Context())) } } return func(w http.ResponseWriter, r *http.Request) { now := time.Now() next.ServeHTTP(w, r) - - observeWithExemplar( - obs.With(labels(code, method, r.Method, 0, hOpts.extraMethods...)), - time.Since(now).Seconds(), - hOpts.getExemplarFn(r.Context()), - ) + l := labels(code, method, r.Method, 0, hOpts.extraMethods...) + for label, resolve := range hOpts.extraLabelsFromCtx { + l[label] = resolve(r.Context()) + } + observeWithExemplar(obs.With(l), time.Since(now).Seconds(), hOpts.getExemplarFn(r.Context())) } } @@ -138,28 +138,30 @@ func InstrumentHandlerCounter(counter *prometheus.CounterVec, next http.Handler, o.apply(hOpts) } - code, method := checkLabels(counter) + // Curry the counter with dynamic labels before checking the remaining labels. + code, method := checkLabels(counter.MustCurryWith(hOpts.emptyDynamicLabels())) if code { return func(w http.ResponseWriter, r *http.Request) { d := newDelegator(w, nil) next.ServeHTTP(d, r) - addWithExemplar( - counter.With(labels(code, method, r.Method, d.Status(), hOpts.extraMethods...)), - 1, - hOpts.getExemplarFn(r.Context()), - ) + l := labels(code, method, r.Method, d.Status(), hOpts.extraMethods...) + for label, resolve := range hOpts.extraLabelsFromCtx { + l[label] = resolve(r.Context()) + } + addWithExemplar(counter.With(l), 1, hOpts.getExemplarFn(r.Context())) } } return func(w http.ResponseWriter, r *http.Request) { next.ServeHTTP(w, r) - addWithExemplar( - counter.With(labels(code, method, r.Method, 0, hOpts.extraMethods...)), - 1, - hOpts.getExemplarFn(r.Context()), - ) + + l := labels(code, method, r.Method, 0, hOpts.extraMethods...) 
+ for label, resolve := range hOpts.extraLabelsFromCtx { + l[label] = resolve(r.Context()) + } + addWithExemplar(counter.With(l), 1, hOpts.getExemplarFn(r.Context())) } } @@ -191,16 +193,17 @@ func InstrumentHandlerTimeToWriteHeader(obs prometheus.ObserverVec, next http.Ha o.apply(hOpts) } - code, method := checkLabels(obs) + // Curry the observer with dynamic labels before checking the remaining labels. + code, method := checkLabels(obs.MustCurryWith(hOpts.emptyDynamicLabels())) return func(w http.ResponseWriter, r *http.Request) { now := time.Now() d := newDelegator(w, func(status int) { - observeWithExemplar( - obs.With(labels(code, method, r.Method, status, hOpts.extraMethods...)), - time.Since(now).Seconds(), - hOpts.getExemplarFn(r.Context()), - ) + l := labels(code, method, r.Method, status, hOpts.extraMethods...) + for label, resolve := range hOpts.extraLabelsFromCtx { + l[label] = resolve(r.Context()) + } + observeWithExemplar(obs.With(l), time.Since(now).Seconds(), hOpts.getExemplarFn(r.Context())) }) next.ServeHTTP(d, r) } @@ -231,28 +234,32 @@ func InstrumentHandlerRequestSize(obs prometheus.ObserverVec, next http.Handler, o.apply(hOpts) } - code, method := checkLabels(obs) + // Curry the observer with dynamic labels before checking the remaining labels. + code, method := checkLabels(obs.MustCurryWith(hOpts.emptyDynamicLabels())) + if code { return func(w http.ResponseWriter, r *http.Request) { d := newDelegator(w, nil) next.ServeHTTP(d, r) size := computeApproximateRequestSize(r) - observeWithExemplar( - obs.With(labels(code, method, r.Method, d.Status(), hOpts.extraMethods...)), - float64(size), - hOpts.getExemplarFn(r.Context()), - ) + + l := labels(code, method, r.Method, d.Status(), hOpts.extraMethods...) 
+ for label, resolve := range hOpts.extraLabelsFromCtx { + l[label] = resolve(r.Context()) + } + observeWithExemplar(obs.With(l), float64(size), hOpts.getExemplarFn(r.Context())) } } return func(w http.ResponseWriter, r *http.Request) { next.ServeHTTP(w, r) size := computeApproximateRequestSize(r) - observeWithExemplar( - obs.With(labels(code, method, r.Method, 0, hOpts.extraMethods...)), - float64(size), - hOpts.getExemplarFn(r.Context()), - ) + + l := labels(code, method, r.Method, 0, hOpts.extraMethods...) + for label, resolve := range hOpts.extraLabelsFromCtx { + l[label] = resolve(r.Context()) + } + observeWithExemplar(obs.With(l), float64(size), hOpts.getExemplarFn(r.Context())) } } @@ -281,16 +288,18 @@ func InstrumentHandlerResponseSize(obs prometheus.ObserverVec, next http.Handler o.apply(hOpts) } - code, method := checkLabels(obs) + // Curry the observer with dynamic labels before checking the remaining labels. + code, method := checkLabels(obs.MustCurryWith(hOpts.emptyDynamicLabels())) return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { d := newDelegator(w, nil) next.ServeHTTP(d, r) - observeWithExemplar( - obs.With(labels(code, method, r.Method, d.Status(), hOpts.extraMethods...)), - float64(d.Written()), - hOpts.getExemplarFn(r.Context()), - ) + + l := labels(code, method, r.Method, d.Status(), hOpts.extraMethods...) 
+ for label, resolve := range hOpts.extraLabelsFromCtx { + l[label] = resolve(r.Context()) + } + observeWithExemplar(obs.With(l), float64(d.Written()), hOpts.getExemplarFn(r.Context())) }) } diff --git a/cluster-autoscaler/vendor/github.com/prometheus/client_golang/prometheus/promhttp/option.go b/cluster-autoscaler/vendor/github.com/prometheus/client_golang/prometheus/promhttp/option.go index c590d912c947..5d4383aa14a3 100644 --- a/cluster-autoscaler/vendor/github.com/prometheus/client_golang/prometheus/promhttp/option.go +++ b/cluster-autoscaler/vendor/github.com/prometheus/client_golang/prometheus/promhttp/option.go @@ -24,14 +24,32 @@ type Option interface { apply(*options) } +// LabelValueFromCtx are used to compute the label value from request context. +// Context can be filled with values from request through middleware. +type LabelValueFromCtx func(ctx context.Context) string + // options store options for both a handler or round tripper. type options struct { - extraMethods []string - getExemplarFn func(requestCtx context.Context) prometheus.Labels + extraMethods []string + getExemplarFn func(requestCtx context.Context) prometheus.Labels + extraLabelsFromCtx map[string]LabelValueFromCtx } func defaultOptions() *options { - return &options{getExemplarFn: func(ctx context.Context) prometheus.Labels { return nil }} + return &options{ + getExemplarFn: func(ctx context.Context) prometheus.Labels { return nil }, + extraLabelsFromCtx: map[string]LabelValueFromCtx{}, + } +} + +func (o *options) emptyDynamicLabels() prometheus.Labels { + labels := prometheus.Labels{} + + for label := range o.extraLabelsFromCtx { + labels[label] = "" + } + + return labels } type optionApplyFunc func(*options) @@ -48,11 +66,19 @@ func WithExtraMethods(methods ...string) Option { }) } -// WithExemplarFromContext adds allows to put a hook to all counter and histogram metrics. 
-// If the hook function returns non-nil labels, exemplars will be added for that request, otherwise metric -// will get instrumented without exemplar. +// WithExemplarFromContext allows injecting a function that will get an exemplar from the context that will be put to counter and histogram metrics. +// If the function returns nil labels or the metric does not support exemplars, no exemplar will be added (noop), but the +// metric will continue to observe/increment. func WithExemplarFromContext(getExemplarFn func(requestCtx context.Context) prometheus.Labels) Option { return optionApplyFunc(func(o *options) { o.getExemplarFn = getExemplarFn }) } + +// WithLabelFromCtx registers a label for dynamic resolution with access to context. +// See the example for ExampleInstrumentHandlerWithLabelResolver for example usage. +func WithLabelFromCtx(name string, valueFn LabelValueFromCtx) Option { + return optionApplyFunc(func(o *options) { + o.extraLabelsFromCtx[name] = valueFn + }) +} diff --git a/cluster-autoscaler/vendor/github.com/prometheus/client_golang/prometheus/registry.go b/cluster-autoscaler/vendor/github.com/prometheus/client_golang/prometheus/registry.go index 09e34d307c97..44da9433beef 100644 --- a/cluster-autoscaler/vendor/github.com/prometheus/client_golang/prometheus/registry.go +++ b/cluster-autoscaler/vendor/github.com/prometheus/client_golang/prometheus/registry.go @@ -21,18 +21,17 @@ import ( "path/filepath" "runtime" "sort" + "strconv" "strings" "sync" "unicode/utf8" - "github.com/cespare/xxhash/v2" - //nolint:staticcheck // Ignore SA1019. Need to keep deprecated package for compatibility.
- "github.com/golang/protobuf/proto" - "github.com/prometheus/common/expfmt" + "github.com/prometheus/client_golang/prometheus/internal" + "github.com/cespare/xxhash/v2" dto "github.com/prometheus/client_model/go" - - "github.com/prometheus/client_golang/prometheus/internal" + "github.com/prometheus/common/expfmt" + "google.golang.org/protobuf/proto" ) const ( @@ -933,6 +932,10 @@ func checkMetricConsistency( h.WriteString(lp.GetValue()) h.Write(separatorByteSlice) } + if dtoMetric.TimestampMs != nil { + h.WriteString(strconv.FormatInt(*(dtoMetric.TimestampMs), 10)) + h.Write(separatorByteSlice) + } hSum := h.Sum64() if _, exists := metricHashes[hSum]; exists { return fmt.Errorf( @@ -962,7 +965,7 @@ func checkDescConsistency( copy(lpsFromDesc, desc.constLabelPairs) for _, l := range desc.variableLabels { lpsFromDesc = append(lpsFromDesc, &dto.LabelPair{ - Name: proto.String(l), + Name: proto.String(l.Name), }) } if len(lpsFromDesc) != len(dtoMetric.Label) { diff --git a/cluster-autoscaler/vendor/github.com/prometheus/client_golang/prometheus/summary.go b/cluster-autoscaler/vendor/github.com/prometheus/client_golang/prometheus/summary.go index 7bc448a89394..dd359264e592 100644 --- a/cluster-autoscaler/vendor/github.com/prometheus/client_golang/prometheus/summary.go +++ b/cluster-autoscaler/vendor/github.com/prometheus/client_golang/prometheus/summary.go @@ -22,11 +22,10 @@ import ( "sync/atomic" "time" - "github.com/beorn7/perks/quantile" - //nolint:staticcheck // Ignore SA1019. Need to keep deprecated package for compatibility. - "github.com/golang/protobuf/proto" - dto "github.com/prometheus/client_model/go" + + "github.com/beorn7/perks/quantile" + "google.golang.org/protobuf/proto" ) // quantileLabel is used for the label that defines the quantile in a @@ -148,6 +147,18 @@ type SummaryOpts struct { BufCap uint32 } +// SummaryVecOpts bundles the options to create a SummaryVec metric. +// It is mandatory to set SummaryOpts, see there for mandatory fields. 
VariableLabels +// is optional and can safely be left to its default value. +type SummaryVecOpts struct { + SummaryOpts + + // VariableLabels are used to partition the metric vector by the given set + // of labels. Each label value will be constrained with the optional Constraint + // function, if provided. + VariableLabels ConstrainableLabels +} + // Problem with the sliding-window decay algorithm... The Merge method of // perk/quantile is actually not working as advertised - and it might be // unfixable, as the underlying algorithm is apparently not capable of merging @@ -178,11 +189,11 @@ func NewSummary(opts SummaryOpts) Summary { func newSummary(desc *Desc, opts SummaryOpts, labelValues ...string) Summary { if len(desc.variableLabels) != len(labelValues) { - panic(makeInconsistentCardinalityError(desc.fqName, desc.variableLabels, labelValues)) + panic(makeInconsistentCardinalityError(desc.fqName, desc.variableLabels.labelNames(), labelValues)) } for _, n := range desc.variableLabels { - if n == quantileLabel { + if n.Name == quantileLabel { panic(errQuantileLabelNotAllowed) } } @@ -530,20 +541,28 @@ type SummaryVec struct { // it is handled by the Prometheus server internally, “quantile” is an illegal // label name. NewSummaryVec will panic if this label name is used. func NewSummaryVec(opts SummaryOpts, labelNames []string) *SummaryVec { - for _, ln := range labelNames { + return V2.NewSummaryVec(SummaryVecOpts{ + SummaryOpts: opts, + VariableLabels: UnconstrainedLabels(labelNames), + }) +} + +// NewSummaryVec creates a new SummaryVec based on the provided SummaryVecOpts.
+func (v2) NewSummaryVec(opts SummaryVecOpts) *SummaryVec { + for _, ln := range opts.VariableLabels.labelNames() { if ln == quantileLabel { panic(errQuantileLabelNotAllowed) } } - desc := NewDesc( + desc := V2.NewDesc( BuildFQName(opts.Namespace, opts.Subsystem, opts.Name), opts.Help, - labelNames, + opts.VariableLabels, opts.ConstLabels, ) return &SummaryVec{ MetricVec: NewMetricVec(desc, func(lvs ...string) Metric { - return newSummary(desc, opts, lvs...) + return newSummary(desc, opts.SummaryOpts, lvs...) }), } } diff --git a/cluster-autoscaler/vendor/github.com/prometheus/client_golang/prometheus/testutil/promlint/promlint.go b/cluster-autoscaler/vendor/github.com/prometheus/client_golang/prometheus/testutil/promlint/promlint.go index a20f159b78e9..c8864b6c3f5f 100644 --- a/cluster-autoscaler/vendor/github.com/prometheus/client_golang/prometheus/testutil/promlint/promlint.go +++ b/cluster-autoscaler/vendor/github.com/prometheus/client_golang/prometheus/testutil/promlint/promlint.go @@ -287,17 +287,15 @@ func lintUnitAbbreviations(mf *dto.MetricFamily) []Problem { func metricUnits(m string) (unit, base string, ok bool) { ss := strings.Split(m, "_") - for unit, base := range units { - // Also check for "no prefix". - for _, p := range append(unitPrefixes, "") { - for _, s := range ss { - // Attempt to explicitly match a known unit with a known prefix, - // as some words may look like "units" when matching suffix. - // - // As an example, "thermometers" should not match "meters", but - // "kilometers" should. 
- if s == p+unit { - return p + unit, base, true + for _, s := range ss { + if base, found := units[s]; found { + return s, base, true + } + + for _, p := range unitPrefixes { + if strings.HasPrefix(s, p) { + if base, found := units[s[len(p):]]; found { + return s, base, true } } } diff --git a/cluster-autoscaler/vendor/github.com/prometheus/client_golang/prometheus/testutil/testutil.go b/cluster-autoscaler/vendor/github.com/prometheus/client_golang/prometheus/testutil/testutil.go index 91b83b5285db..82d4a5436b62 100644 --- a/cluster-autoscaler/vendor/github.com/prometheus/client_golang/prometheus/testutil/testutil.go +++ b/cluster-autoscaler/vendor/github.com/prometheus/client_golang/prometheus/testutil/testutil.go @@ -238,6 +238,7 @@ func convertReaderToMetricFamily(reader io.Reader) ([]*dto.MetricFamily, error) func compareMetricFamilies(got, expected []*dto.MetricFamily, metricNames ...string) error { if metricNames != nil { got = filterMetrics(got, metricNames) + expected = filterMetrics(expected, metricNames) } return compare(got, expected) diff --git a/cluster-autoscaler/vendor/github.com/prometheus/client_golang/prometheus/timer.go b/cluster-autoscaler/vendor/github.com/prometheus/client_golang/prometheus/timer.go index f28a76f3a62a..52344fef53f5 100644 --- a/cluster-autoscaler/vendor/github.com/prometheus/client_golang/prometheus/timer.go +++ b/cluster-autoscaler/vendor/github.com/prometheus/client_golang/prometheus/timer.go @@ -23,7 +23,9 @@ type Timer struct { } // NewTimer creates a new Timer. The provided Observer is used to observe a -// duration in seconds. Timer is usually used to time a function call in the +// duration in seconds. If the Observer implements ExemplarObserver, passing exemplar +// later on will be also supported. +// Timer is usually used to time a function call in the // following way: // // func TimeMe() { @@ -31,6 +33,14 @@ type Timer struct { // defer timer.ObserveDuration() // // Do actual work. 
// } +// +// or +// +// func TimeMeWithExemplar() { +// timer := NewTimer(myHistogram) +// defer timer.ObserveDurationWithExemplar(exemplar) +// // Do actual work. +// } func NewTimer(o Observer) *Timer { return &Timer{ begin: time.Now(), @@ -53,3 +63,19 @@ func (t *Timer) ObserveDuration() time.Duration { } return d } + +// ObserveDurationWithExemplar is like ObserveDuration, but it will also +// observe the exemplar with the duration unless the exemplar is nil or the provided Observer cannot +// be cast to ExemplarObserver. +func (t *Timer) ObserveDurationWithExemplar(exemplar Labels) time.Duration { + d := time.Since(t.begin) + eo, ok := t.observer.(ExemplarObserver) + if ok && exemplar != nil { + eo.ObserveWithExemplar(d.Seconds(), exemplar) + return d + } + if t.observer != nil { + t.observer.Observe(d.Seconds()) + } + return d +} diff --git a/cluster-autoscaler/vendor/github.com/prometheus/client_golang/prometheus/value.go b/cluster-autoscaler/vendor/github.com/prometheus/client_golang/prometheus/value.go index 2d3abc1cbd68..5f6bb80014de 100644 --- a/cluster-autoscaler/vendor/github.com/prometheus/client_golang/prometheus/value.go +++ b/cluster-autoscaler/vendor/github.com/prometheus/client_golang/prometheus/value.go @@ -19,13 +19,11 @@ import ( "time" "unicode/utf8" - //nolint:staticcheck // Ignore SA1019. Need to keep deprecated package for compatibility. - "github.com/golang/protobuf/proto" - "google.golang.org/protobuf/types/known/timestamppb" - "github.com/prometheus/client_golang/prometheus/internal" dto "github.com/prometheus/client_model/go" + "google.golang.org/protobuf/proto" + "google.golang.org/protobuf/types/known/timestamppb" ) // ValueType is an enumeration of metric types that represent a simple value.
@@ -188,9 +186,9 @@ func MakeLabelPairs(desc *Desc, labelValues []string) []*dto.LabelPair { return desc.constLabelPairs } labelPairs := make([]*dto.LabelPair, 0, totalLen) - for i, n := range desc.variableLabels { + for i, l := range desc.variableLabels { labelPairs = append(labelPairs, &dto.LabelPair{ - Name: proto.String(n), + Name: proto.String(l.Name), Value: proto.String(labelValues[i]), }) } diff --git a/cluster-autoscaler/vendor/github.com/prometheus/client_golang/prometheus/vec.go b/cluster-autoscaler/vendor/github.com/prometheus/client_golang/prometheus/vec.go index 7ae322590c86..f0d0015a0ff9 100644 --- a/cluster-autoscaler/vendor/github.com/prometheus/client_golang/prometheus/vec.go +++ b/cluster-autoscaler/vendor/github.com/prometheus/client_golang/prometheus/vec.go @@ -20,6 +20,24 @@ import ( "github.com/prometheus/common/model" ) +var labelsPool = &sync.Pool{ + New: func() interface{} { + return make(Labels) + }, +} + +func getLabelsFromPool() Labels { + return labelsPool.Get().(Labels) +} + +func putLabelsToPool(labels Labels) { + for k := range labels { + delete(labels, k) + } + + labelsPool.Put(labels) +} + // MetricVec is a Collector to bundle metrics of the same name that differ in // their label values. MetricVec is not used directly but as a building block // for implementations of vectors of a given metric type, like GaugeVec, @@ -72,6 +90,7 @@ func NewMetricVec(desc *Desc, newMetric func(lvs ...string) Metric) *MetricVec { // with a performance overhead (for creating and processing the Labels map). // See also the CounterVec example. func (m *MetricVec) DeleteLabelValues(lvs ...string) bool { + lvs = constrainLabelValues(m.desc, lvs, m.curry) h, err := m.hashLabelValues(lvs) if err != nil { return false @@ -91,6 +110,9 @@ func (m *MetricVec) DeleteLabelValues(lvs ...string) bool { // This method is used for the same purpose as DeleteLabelValues(...string). See // there for pros and cons of the two methods. 
func (m *MetricVec) Delete(labels Labels) bool { + labels = constrainLabels(m.desc, labels) + defer putLabelsToPool(labels) + h, err := m.hashLabels(labels) if err != nil { return false @@ -106,6 +128,9 @@ func (m *MetricVec) Delete(labels Labels) bool { // Note that curried labels will never be matched if deleting from the curried vector. // To match curried labels with DeletePartialMatch, it must be called on the base vector. func (m *MetricVec) DeletePartialMatch(labels Labels) int { + labels = constrainLabels(m.desc, labels) + defer putLabelsToPool(labels) + return m.metricMap.deleteByLabels(labels, m.curry) } @@ -145,10 +170,10 @@ func (m *MetricVec) CurryWith(labels Labels) (*MetricVec, error) { iCurry int ) for i, label := range m.desc.variableLabels { - val, ok := labels[label] + val, ok := labels[label.Name] if iCurry < len(oldCurry) && oldCurry[iCurry].index == i { if ok { - return nil, fmt.Errorf("label name %q is already curried", label) + return nil, fmt.Errorf("label name %q is already curried", label.Name) } newCurry = append(newCurry, oldCurry[iCurry]) iCurry++ @@ -156,7 +181,7 @@ func (m *MetricVec) CurryWith(labels Labels) (*MetricVec, error) { if !ok { continue // Label stays uncurried. } - newCurry = append(newCurry, curriedLabelValue{i, val}) + newCurry = append(newCurry, curriedLabelValue{i, label.Constrain(val)}) } } if l := len(oldCurry) + len(labels) - len(newCurry); l > 0 { @@ -199,6 +224,7 @@ func (m *MetricVec) CurryWith(labels Labels) (*MetricVec, error) { // a wrapper around MetricVec, implementing a vector for a specific Metric // implementation, for example GaugeVec. 
func (m *MetricVec) GetMetricWithLabelValues(lvs ...string) (Metric, error) { + lvs = constrainLabelValues(m.desc, lvs, m.curry) h, err := m.hashLabelValues(lvs) if err != nil { return nil, err @@ -224,6 +250,9 @@ func (m *MetricVec) GetMetricWithLabelValues(lvs ...string) (Metric, error) { // around MetricVec, implementing a vector for a specific Metric implementation, // for example GaugeVec. func (m *MetricVec) GetMetricWith(labels Labels) (Metric, error) { + labels = constrainLabels(m.desc, labels) + defer putLabelsToPool(labels) + h, err := m.hashLabels(labels) if err != nil { return nil, err @@ -266,16 +295,16 @@ func (m *MetricVec) hashLabels(labels Labels) (uint64, error) { iCurry int ) for i, label := range m.desc.variableLabels { - val, ok := labels[label] + val, ok := labels[label.Name] if iCurry < len(curry) && curry[iCurry].index == i { if ok { - return 0, fmt.Errorf("label name %q is already curried", label) + return 0, fmt.Errorf("label name %q is already curried", label.Name) } h = m.hashAdd(h, curry[iCurry].value) iCurry++ } else { if !ok { - return 0, fmt.Errorf("label name %q missing in label map", label) + return 0, fmt.Errorf("label name %q missing in label map", label.Name) } h = m.hashAdd(h, val) } @@ -453,7 +482,7 @@ func valueMatchesVariableOrCurriedValue(targetValue string, index int, values [] func matchPartialLabels(desc *Desc, values []string, labels Labels, curry []curriedLabelValue) bool { for l, v := range labels { // Check if the target label exists in our metrics and get the index. - varLabelIndex, validLabel := indexOf(l, desc.variableLabels) + varLabelIndex, validLabel := indexOf(l, desc.variableLabels.labelNames()) if validLabel { // Check the value of that label against the target value. // We don't consider curried values in partial matches. 
@@ -605,7 +634,7 @@ func matchLabels(desc *Desc, values []string, labels Labels, curry []curriedLabe iCurry++ continue } - if values[i] != labels[k] { + if values[i] != labels[k.Name] { return false } } @@ -621,7 +650,7 @@ func extractLabelValues(desc *Desc, labels Labels, curry []curriedLabelValue) [] iCurry++ continue } - labelValues[i] = labels[k] + labelValues[i] = labels[k.Name] } return labelValues } @@ -640,3 +669,35 @@ func inlineLabelValues(lvs []string, curry []curriedLabelValue) []string { } return labelValues } + +func constrainLabels(desc *Desc, labels Labels) Labels { + constrainedLabels := getLabelsFromPool() + for l, v := range labels { + if i, ok := indexOf(l, desc.variableLabels.labelNames()); ok { + v = desc.variableLabels[i].Constrain(v) + } + + constrainedLabels[l] = v + } + + return constrainedLabels +} + +func constrainLabelValues(desc *Desc, lvs []string, curry []curriedLabelValue) []string { + constrainedValues := make([]string, len(lvs)) + var iCurry, iLVs int + for i := 0; i < len(lvs)+len(curry); i++ { + if iCurry < len(curry) && curry[iCurry].index == i { + iCurry++ + continue + } + + if i < len(desc.variableLabels) { + constrainedValues[iLVs] = desc.variableLabels[i].Constrain(lvs[iLVs]) + } else { + constrainedValues[iLVs] = lvs[iLVs] + } + iLVs++ + } + return constrainedValues +} diff --git a/cluster-autoscaler/vendor/github.com/prometheus/client_golang/prometheus/vnext.go b/cluster-autoscaler/vendor/github.com/prometheus/client_golang/prometheus/vnext.go new file mode 100644 index 000000000000..42bc3a8f0661 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/prometheus/client_golang/prometheus/vnext.go @@ -0,0 +1,23 @@ +// Copyright 2022 The Prometheus Authors +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. 
+// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package prometheus + +type v2 struct{} + +// V2 is a struct that can be referenced to access experimental API that might +// be present in v2 of client golang someday. It offers extended functionality +// of v1 with slightly changed API. It is acceptable to use some pieces from v1 +// and e.g `prometheus.NewGauge` and some from v2 e.g. `prometheus.V2.NewDesc` +// in the same codebase. +var V2 = v2{} diff --git a/cluster-autoscaler/vendor/github.com/prometheus/client_golang/prometheus/wrap.go b/cluster-autoscaler/vendor/github.com/prometheus/client_golang/prometheus/wrap.go index 1498ee144cb0..25da157f152c 100644 --- a/cluster-autoscaler/vendor/github.com/prometheus/client_golang/prometheus/wrap.go +++ b/cluster-autoscaler/vendor/github.com/prometheus/client_golang/prometheus/wrap.go @@ -17,12 +17,10 @@ import ( "fmt" "sort" - //nolint:staticcheck // Ignore SA1019. Need to keep deprecated package for compatibility. - "github.com/golang/protobuf/proto" + "github.com/prometheus/client_golang/prometheus/internal" dto "github.com/prometheus/client_model/go" - - "github.com/prometheus/client_golang/prometheus/internal" + "google.golang.org/protobuf/proto" ) // WrapRegistererWith returns a Registerer wrapping the provided @@ -206,7 +204,7 @@ func wrapDesc(desc *Desc, prefix string, labels Labels) *Desc { constLabels[ln] = lv } // NewDesc will do remaining validations. 
- newDesc := NewDesc(prefix+desc.fqName, desc.help, desc.variableLabels, constLabels) + newDesc := V2.NewDesc(prefix+desc.fqName, desc.help, desc.variableLabels, constLabels) // Propagate errors if there was any. This will override any errer // created by NewDesc above, i.e. earlier errors get precedence. if desc.err != nil { diff --git a/cluster-autoscaler/vendor/github.com/prometheus/client_model/go/metrics.pb.go b/cluster-autoscaler/vendor/github.com/prometheus/client_model/go/metrics.pb.go index 35904ea19861..2b5bca4b999a 100644 --- a/cluster-autoscaler/vendor/github.com/prometheus/client_model/go/metrics.pb.go +++ b/cluster-autoscaler/vendor/github.com/prometheus/client_model/go/metrics.pb.go @@ -1,25 +1,38 @@ +// Copyright 2013 Prometheus Team +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + // Code generated by protoc-gen-go. DO NOT EDIT. +// versions: +// protoc-gen-go v1.30.0 +// protoc v3.20.3 // source: io/prometheus/client/metrics.proto package io_prometheus_client import ( - fmt "fmt" - proto "github.com/golang/protobuf/proto" - timestamp "github.com/golang/protobuf/ptypes/timestamp" - math "math" + protoreflect "google.golang.org/protobuf/reflect/protoreflect" + protoimpl "google.golang.org/protobuf/runtime/protoimpl" + timestamppb "google.golang.org/protobuf/types/known/timestamppb" + reflect "reflect" + sync "sync" ) -// Reference imports to suppress errors if they are not otherwise used. 
-var _ = proto.Marshal -var _ = fmt.Errorf -var _ = math.Inf - -// This is a compile-time assertion to ensure that this generated file -// is compatible with the proto package it is being compiled against. -// A compilation error at this line likely means your copy of the -// proto package needs to be updated. -const _ = proto.ProtoPackageIsVersion3 // please upgrade the proto package +const ( + // Verify that this generated code is sufficiently up-to-date. + _ = protoimpl.EnforceVersion(20 - protoimpl.MinVersion) + // Verify that runtime/protoimpl is sufficiently up-to-date. + _ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20) +) type MetricType int32 @@ -38,23 +51,25 @@ const ( MetricType_GAUGE_HISTOGRAM MetricType = 5 ) -var MetricType_name = map[int32]string{ - 0: "COUNTER", - 1: "GAUGE", - 2: "SUMMARY", - 3: "UNTYPED", - 4: "HISTOGRAM", - 5: "GAUGE_HISTOGRAM", -} - -var MetricType_value = map[string]int32{ - "COUNTER": 0, - "GAUGE": 1, - "SUMMARY": 2, - "UNTYPED": 3, - "HISTOGRAM": 4, - "GAUGE_HISTOGRAM": 5, -} +// Enum value maps for MetricType. 
+var ( + MetricType_name = map[int32]string{ + 0: "COUNTER", + 1: "GAUGE", + 2: "SUMMARY", + 3: "UNTYPED", + 4: "HISTOGRAM", + 5: "GAUGE_HISTOGRAM", + } + MetricType_value = map[string]int32{ + "COUNTER": 0, + "GAUGE": 1, + "SUMMARY": 2, + "UNTYPED": 3, + "HISTOGRAM": 4, + "GAUGE_HISTOGRAM": 5, + } +) func (x MetricType) Enum() *MetricType { p := new(MetricType) @@ -63,449 +78,519 @@ func (x MetricType) Enum() *MetricType { } func (x MetricType) String() string { - return proto.EnumName(MetricType_name, int32(x)) + return protoimpl.X.EnumStringOf(x.Descriptor(), protoreflect.EnumNumber(x)) } -func (x *MetricType) UnmarshalJSON(data []byte) error { - value, err := proto.UnmarshalJSONEnum(MetricType_value, data, "MetricType") +func (MetricType) Descriptor() protoreflect.EnumDescriptor { + return file_io_prometheus_client_metrics_proto_enumTypes[0].Descriptor() +} + +func (MetricType) Type() protoreflect.EnumType { + return &file_io_prometheus_client_metrics_proto_enumTypes[0] +} + +func (x MetricType) Number() protoreflect.EnumNumber { + return protoreflect.EnumNumber(x) +} + +// Deprecated: Do not use. +func (x *MetricType) UnmarshalJSON(b []byte) error { + num, err := protoimpl.X.UnmarshalJSONEnum(x.Descriptor(), b) if err != nil { return err } - *x = MetricType(value) + *x = MetricType(num) return nil } +// Deprecated: Use MetricType.Descriptor instead. 
func (MetricType) EnumDescriptor() ([]byte, []int) { - return fileDescriptor_d1e5ddb18987a258, []int{0} + return file_io_prometheus_client_metrics_proto_rawDescGZIP(), []int{0} } type LabelPair struct { - Name *string `protobuf:"bytes,1,opt,name=name" json:"name,omitempty"` - Value *string `protobuf:"bytes,2,opt,name=value" json:"value,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` -} + state protoimpl.MessageState + sizeCache protoimpl.SizeCache + unknownFields protoimpl.UnknownFields -func (m *LabelPair) Reset() { *m = LabelPair{} } -func (m *LabelPair) String() string { return proto.CompactTextString(m) } -func (*LabelPair) ProtoMessage() {} -func (*LabelPair) Descriptor() ([]byte, []int) { - return fileDescriptor_d1e5ddb18987a258, []int{0} + Name *string `protobuf:"bytes,1,opt,name=name" json:"name,omitempty"` + Value *string `protobuf:"bytes,2,opt,name=value" json:"value,omitempty"` } -func (m *LabelPair) XXX_Unmarshal(b []byte) error { - return xxx_messageInfo_LabelPair.Unmarshal(m, b) -} -func (m *LabelPair) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { - return xxx_messageInfo_LabelPair.Marshal(b, m, deterministic) -} -func (m *LabelPair) XXX_Merge(src proto.Message) { - xxx_messageInfo_LabelPair.Merge(m, src) +func (x *LabelPair) Reset() { + *x = LabelPair{} + if protoimpl.UnsafeEnabled { + mi := &file_io_prometheus_client_metrics_proto_msgTypes[0] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) + } } -func (m *LabelPair) XXX_Size() int { - return xxx_messageInfo_LabelPair.Size(m) + +func (x *LabelPair) String() string { + return protoimpl.X.MessageStringOf(x) } -func (m *LabelPair) XXX_DiscardUnknown() { - xxx_messageInfo_LabelPair.DiscardUnknown(m) + +func (*LabelPair) ProtoMessage() {} + +func (x *LabelPair) ProtoReflect() protoreflect.Message { + mi := &file_io_prometheus_client_metrics_proto_msgTypes[0] + if 
protoimpl.UnsafeEnabled && x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) } -var xxx_messageInfo_LabelPair proto.InternalMessageInfo +// Deprecated: Use LabelPair.ProtoReflect.Descriptor instead. +func (*LabelPair) Descriptor() ([]byte, []int) { + return file_io_prometheus_client_metrics_proto_rawDescGZIP(), []int{0} +} -func (m *LabelPair) GetName() string { - if m != nil && m.Name != nil { - return *m.Name +func (x *LabelPair) GetName() string { + if x != nil && x.Name != nil { + return *x.Name } return "" } -func (m *LabelPair) GetValue() string { - if m != nil && m.Value != nil { - return *m.Value +func (x *LabelPair) GetValue() string { + if x != nil && x.Value != nil { + return *x.Value } return "" } type Gauge struct { - Value *float64 `protobuf:"fixed64,1,opt,name=value" json:"value,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` -} + state protoimpl.MessageState + sizeCache protoimpl.SizeCache + unknownFields protoimpl.UnknownFields -func (m *Gauge) Reset() { *m = Gauge{} } -func (m *Gauge) String() string { return proto.CompactTextString(m) } -func (*Gauge) ProtoMessage() {} -func (*Gauge) Descriptor() ([]byte, []int) { - return fileDescriptor_d1e5ddb18987a258, []int{1} + Value *float64 `protobuf:"fixed64,1,opt,name=value" json:"value,omitempty"` } -func (m *Gauge) XXX_Unmarshal(b []byte) error { - return xxx_messageInfo_Gauge.Unmarshal(m, b) -} -func (m *Gauge) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { - return xxx_messageInfo_Gauge.Marshal(b, m, deterministic) -} -func (m *Gauge) XXX_Merge(src proto.Message) { - xxx_messageInfo_Gauge.Merge(m, src) +func (x *Gauge) Reset() { + *x = Gauge{} + if protoimpl.UnsafeEnabled { + mi := &file_io_prometheus_client_metrics_proto_msgTypes[1] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) 
+ ms.StoreMessageInfo(mi) + } } -func (m *Gauge) XXX_Size() int { - return xxx_messageInfo_Gauge.Size(m) + +func (x *Gauge) String() string { + return protoimpl.X.MessageStringOf(x) } -func (m *Gauge) XXX_DiscardUnknown() { - xxx_messageInfo_Gauge.DiscardUnknown(m) + +func (*Gauge) ProtoMessage() {} + +func (x *Gauge) ProtoReflect() protoreflect.Message { + mi := &file_io_prometheus_client_metrics_proto_msgTypes[1] + if protoimpl.UnsafeEnabled && x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) } -var xxx_messageInfo_Gauge proto.InternalMessageInfo +// Deprecated: Use Gauge.ProtoReflect.Descriptor instead. +func (*Gauge) Descriptor() ([]byte, []int) { + return file_io_prometheus_client_metrics_proto_rawDescGZIP(), []int{1} +} -func (m *Gauge) GetValue() float64 { - if m != nil && m.Value != nil { - return *m.Value +func (x *Gauge) GetValue() float64 { + if x != nil && x.Value != nil { + return *x.Value } return 0 } type Counter struct { - Value *float64 `protobuf:"fixed64,1,opt,name=value" json:"value,omitempty"` - Exemplar *Exemplar `protobuf:"bytes,2,opt,name=exemplar" json:"exemplar,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` -} + state protoimpl.MessageState + sizeCache protoimpl.SizeCache + unknownFields protoimpl.UnknownFields -func (m *Counter) Reset() { *m = Counter{} } -func (m *Counter) String() string { return proto.CompactTextString(m) } -func (*Counter) ProtoMessage() {} -func (*Counter) Descriptor() ([]byte, []int) { - return fileDescriptor_d1e5ddb18987a258, []int{2} + Value *float64 `protobuf:"fixed64,1,opt,name=value" json:"value,omitempty"` + Exemplar *Exemplar `protobuf:"bytes,2,opt,name=exemplar" json:"exemplar,omitempty"` } -func (m *Counter) XXX_Unmarshal(b []byte) error { - return xxx_messageInfo_Counter.Unmarshal(m, b) -} -func (m *Counter) 
XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { - return xxx_messageInfo_Counter.Marshal(b, m, deterministic) -} -func (m *Counter) XXX_Merge(src proto.Message) { - xxx_messageInfo_Counter.Merge(m, src) +func (x *Counter) Reset() { + *x = Counter{} + if protoimpl.UnsafeEnabled { + mi := &file_io_prometheus_client_metrics_proto_msgTypes[2] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) + } } -func (m *Counter) XXX_Size() int { - return xxx_messageInfo_Counter.Size(m) + +func (x *Counter) String() string { + return protoimpl.X.MessageStringOf(x) } -func (m *Counter) XXX_DiscardUnknown() { - xxx_messageInfo_Counter.DiscardUnknown(m) + +func (*Counter) ProtoMessage() {} + +func (x *Counter) ProtoReflect() protoreflect.Message { + mi := &file_io_prometheus_client_metrics_proto_msgTypes[2] + if protoimpl.UnsafeEnabled && x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) } -var xxx_messageInfo_Counter proto.InternalMessageInfo +// Deprecated: Use Counter.ProtoReflect.Descriptor instead. 
+func (*Counter) Descriptor() ([]byte, []int) { + return file_io_prometheus_client_metrics_proto_rawDescGZIP(), []int{2} +} -func (m *Counter) GetValue() float64 { - if m != nil && m.Value != nil { - return *m.Value +func (x *Counter) GetValue() float64 { + if x != nil && x.Value != nil { + return *x.Value } return 0 } -func (m *Counter) GetExemplar() *Exemplar { - if m != nil { - return m.Exemplar +func (x *Counter) GetExemplar() *Exemplar { + if x != nil { + return x.Exemplar } return nil } type Quantile struct { - Quantile *float64 `protobuf:"fixed64,1,opt,name=quantile" json:"quantile,omitempty"` - Value *float64 `protobuf:"fixed64,2,opt,name=value" json:"value,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` -} + state protoimpl.MessageState + sizeCache protoimpl.SizeCache + unknownFields protoimpl.UnknownFields -func (m *Quantile) Reset() { *m = Quantile{} } -func (m *Quantile) String() string { return proto.CompactTextString(m) } -func (*Quantile) ProtoMessage() {} -func (*Quantile) Descriptor() ([]byte, []int) { - return fileDescriptor_d1e5ddb18987a258, []int{3} + Quantile *float64 `protobuf:"fixed64,1,opt,name=quantile" json:"quantile,omitempty"` + Value *float64 `protobuf:"fixed64,2,opt,name=value" json:"value,omitempty"` } -func (m *Quantile) XXX_Unmarshal(b []byte) error { - return xxx_messageInfo_Quantile.Unmarshal(m, b) -} -func (m *Quantile) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { - return xxx_messageInfo_Quantile.Marshal(b, m, deterministic) -} -func (m *Quantile) XXX_Merge(src proto.Message) { - xxx_messageInfo_Quantile.Merge(m, src) +func (x *Quantile) Reset() { + *x = Quantile{} + if protoimpl.UnsafeEnabled { + mi := &file_io_prometheus_client_metrics_proto_msgTypes[3] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) + } } -func (m *Quantile) XXX_Size() int { - return xxx_messageInfo_Quantile.Size(m) + +func (x 
*Quantile) String() string { + return protoimpl.X.MessageStringOf(x) } -func (m *Quantile) XXX_DiscardUnknown() { - xxx_messageInfo_Quantile.DiscardUnknown(m) + +func (*Quantile) ProtoMessage() {} + +func (x *Quantile) ProtoReflect() protoreflect.Message { + mi := &file_io_prometheus_client_metrics_proto_msgTypes[3] + if protoimpl.UnsafeEnabled && x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) } -var xxx_messageInfo_Quantile proto.InternalMessageInfo +// Deprecated: Use Quantile.ProtoReflect.Descriptor instead. +func (*Quantile) Descriptor() ([]byte, []int) { + return file_io_prometheus_client_metrics_proto_rawDescGZIP(), []int{3} +} -func (m *Quantile) GetQuantile() float64 { - if m != nil && m.Quantile != nil { - return *m.Quantile +func (x *Quantile) GetQuantile() float64 { + if x != nil && x.Quantile != nil { + return *x.Quantile } return 0 } -func (m *Quantile) GetValue() float64 { - if m != nil && m.Value != nil { - return *m.Value +func (x *Quantile) GetValue() float64 { + if x != nil && x.Value != nil { + return *x.Value } return 0 } type Summary struct { - SampleCount *uint64 `protobuf:"varint,1,opt,name=sample_count,json=sampleCount" json:"sample_count,omitempty"` - SampleSum *float64 `protobuf:"fixed64,2,opt,name=sample_sum,json=sampleSum" json:"sample_sum,omitempty"` - Quantile []*Quantile `protobuf:"bytes,3,rep,name=quantile" json:"quantile,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` -} + state protoimpl.MessageState + sizeCache protoimpl.SizeCache + unknownFields protoimpl.UnknownFields -func (m *Summary) Reset() { *m = Summary{} } -func (m *Summary) String() string { return proto.CompactTextString(m) } -func (*Summary) ProtoMessage() {} -func (*Summary) Descriptor() ([]byte, []int) { - return fileDescriptor_d1e5ddb18987a258, []int{4} + 
SampleCount *uint64 `protobuf:"varint,1,opt,name=sample_count,json=sampleCount" json:"sample_count,omitempty"` + SampleSum *float64 `protobuf:"fixed64,2,opt,name=sample_sum,json=sampleSum" json:"sample_sum,omitempty"` + Quantile []*Quantile `protobuf:"bytes,3,rep,name=quantile" json:"quantile,omitempty"` } -func (m *Summary) XXX_Unmarshal(b []byte) error { - return xxx_messageInfo_Summary.Unmarshal(m, b) -} -func (m *Summary) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { - return xxx_messageInfo_Summary.Marshal(b, m, deterministic) -} -func (m *Summary) XXX_Merge(src proto.Message) { - xxx_messageInfo_Summary.Merge(m, src) +func (x *Summary) Reset() { + *x = Summary{} + if protoimpl.UnsafeEnabled { + mi := &file_io_prometheus_client_metrics_proto_msgTypes[4] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) + } } -func (m *Summary) XXX_Size() int { - return xxx_messageInfo_Summary.Size(m) + +func (x *Summary) String() string { + return protoimpl.X.MessageStringOf(x) } -func (m *Summary) XXX_DiscardUnknown() { - xxx_messageInfo_Summary.DiscardUnknown(m) + +func (*Summary) ProtoMessage() {} + +func (x *Summary) ProtoReflect() protoreflect.Message { + mi := &file_io_prometheus_client_metrics_proto_msgTypes[4] + if protoimpl.UnsafeEnabled && x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) } -var xxx_messageInfo_Summary proto.InternalMessageInfo +// Deprecated: Use Summary.ProtoReflect.Descriptor instead. 
+func (*Summary) Descriptor() ([]byte, []int) { + return file_io_prometheus_client_metrics_proto_rawDescGZIP(), []int{4} +} -func (m *Summary) GetSampleCount() uint64 { - if m != nil && m.SampleCount != nil { - return *m.SampleCount +func (x *Summary) GetSampleCount() uint64 { + if x != nil && x.SampleCount != nil { + return *x.SampleCount } return 0 } -func (m *Summary) GetSampleSum() float64 { - if m != nil && m.SampleSum != nil { - return *m.SampleSum +func (x *Summary) GetSampleSum() float64 { + if x != nil && x.SampleSum != nil { + return *x.SampleSum } return 0 } -func (m *Summary) GetQuantile() []*Quantile { - if m != nil { - return m.Quantile +func (x *Summary) GetQuantile() []*Quantile { + if x != nil { + return x.Quantile } return nil } type Untyped struct { - Value *float64 `protobuf:"fixed64,1,opt,name=value" json:"value,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` -} + state protoimpl.MessageState + sizeCache protoimpl.SizeCache + unknownFields protoimpl.UnknownFields -func (m *Untyped) Reset() { *m = Untyped{} } -func (m *Untyped) String() string { return proto.CompactTextString(m) } -func (*Untyped) ProtoMessage() {} -func (*Untyped) Descriptor() ([]byte, []int) { - return fileDescriptor_d1e5ddb18987a258, []int{5} + Value *float64 `protobuf:"fixed64,1,opt,name=value" json:"value,omitempty"` } -func (m *Untyped) XXX_Unmarshal(b []byte) error { - return xxx_messageInfo_Untyped.Unmarshal(m, b) -} -func (m *Untyped) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { - return xxx_messageInfo_Untyped.Marshal(b, m, deterministic) -} -func (m *Untyped) XXX_Merge(src proto.Message) { - xxx_messageInfo_Untyped.Merge(m, src) +func (x *Untyped) Reset() { + *x = Untyped{} + if protoimpl.UnsafeEnabled { + mi := &file_io_prometheus_client_metrics_proto_msgTypes[5] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) + } } -func (m *Untyped) 
XXX_Size() int { - return xxx_messageInfo_Untyped.Size(m) + +func (x *Untyped) String() string { + return protoimpl.X.MessageStringOf(x) } -func (m *Untyped) XXX_DiscardUnknown() { - xxx_messageInfo_Untyped.DiscardUnknown(m) + +func (*Untyped) ProtoMessage() {} + +func (x *Untyped) ProtoReflect() protoreflect.Message { + mi := &file_io_prometheus_client_metrics_proto_msgTypes[5] + if protoimpl.UnsafeEnabled && x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) } -var xxx_messageInfo_Untyped proto.InternalMessageInfo +// Deprecated: Use Untyped.ProtoReflect.Descriptor instead. +func (*Untyped) Descriptor() ([]byte, []int) { + return file_io_prometheus_client_metrics_proto_rawDescGZIP(), []int{5} +} -func (m *Untyped) GetValue() float64 { - if m != nil && m.Value != nil { - return *m.Value +func (x *Untyped) GetValue() float64 { + if x != nil && x.Value != nil { + return *x.Value } return 0 } type Histogram struct { + state protoimpl.MessageState + sizeCache protoimpl.SizeCache + unknownFields protoimpl.UnknownFields + SampleCount *uint64 `protobuf:"varint,1,opt,name=sample_count,json=sampleCount" json:"sample_count,omitempty"` - SampleCountFloat *float64 `protobuf:"fixed64,4,opt,name=sample_count_float,json=sampleCountFloat" json:"sample_count_float,omitempty"` + SampleCountFloat *float64 `protobuf:"fixed64,4,opt,name=sample_count_float,json=sampleCountFloat" json:"sample_count_float,omitempty"` // Overrides sample_count if > 0. SampleSum *float64 `protobuf:"fixed64,2,opt,name=sample_sum,json=sampleSum" json:"sample_sum,omitempty"` // Buckets for the conventional histogram. - Bucket []*Bucket `protobuf:"bytes,3,rep,name=bucket" json:"bucket,omitempty"` + Bucket []*Bucket `protobuf:"bytes,3,rep,name=bucket" json:"bucket,omitempty"` // Ordered in increasing order of upper_bound, +Inf bucket is optional. // schema defines the bucket schema. 
Currently, valid numbers are -4 <= n <= 8. // They are all for base-2 bucket schemas, where 1 is a bucket boundary in each case, and // then each power of two is divided into 2^n logarithmic buckets. // Or in other words, each bucket boundary is the previous boundary times 2^(2^-n). // In the future, more bucket schemas may be added using numbers < -4 or > 8. Schema *int32 `protobuf:"zigzag32,5,opt,name=schema" json:"schema,omitempty"` - ZeroThreshold *float64 `protobuf:"fixed64,6,opt,name=zero_threshold,json=zeroThreshold" json:"zero_threshold,omitempty"` - ZeroCount *uint64 `protobuf:"varint,7,opt,name=zero_count,json=zeroCount" json:"zero_count,omitempty"` - ZeroCountFloat *float64 `protobuf:"fixed64,8,opt,name=zero_count_float,json=zeroCountFloat" json:"zero_count_float,omitempty"` + ZeroThreshold *float64 `protobuf:"fixed64,6,opt,name=zero_threshold,json=zeroThreshold" json:"zero_threshold,omitempty"` // Breadth of the zero bucket. + ZeroCount *uint64 `protobuf:"varint,7,opt,name=zero_count,json=zeroCount" json:"zero_count,omitempty"` // Count in zero bucket. + ZeroCountFloat *float64 `protobuf:"fixed64,8,opt,name=zero_count_float,json=zeroCountFloat" json:"zero_count_float,omitempty"` // Overrides sb_zero_count if > 0. // Negative buckets for the native histogram. NegativeSpan []*BucketSpan `protobuf:"bytes,9,rep,name=negative_span,json=negativeSpan" json:"negative_span,omitempty"` // Use either "negative_delta" or "negative_count", the former for // regular histograms with integer counts, the latter for float // histograms. 
- NegativeDelta []int64 `protobuf:"zigzag64,10,rep,name=negative_delta,json=negativeDelta" json:"negative_delta,omitempty"` - NegativeCount []float64 `protobuf:"fixed64,11,rep,name=negative_count,json=negativeCount" json:"negative_count,omitempty"` + NegativeDelta []int64 `protobuf:"zigzag64,10,rep,name=negative_delta,json=negativeDelta" json:"negative_delta,omitempty"` // Count delta of each bucket compared to previous one (or to zero for 1st bucket). + NegativeCount []float64 `protobuf:"fixed64,11,rep,name=negative_count,json=negativeCount" json:"negative_count,omitempty"` // Absolute count of each bucket. // Positive buckets for the native histogram. PositiveSpan []*BucketSpan `protobuf:"bytes,12,rep,name=positive_span,json=positiveSpan" json:"positive_span,omitempty"` // Use either "positive_delta" or "positive_count", the former for // regular histograms with integer counts, the latter for float // histograms. - PositiveDelta []int64 `protobuf:"zigzag64,13,rep,name=positive_delta,json=positiveDelta" json:"positive_delta,omitempty"` - PositiveCount []float64 `protobuf:"fixed64,14,rep,name=positive_count,json=positiveCount" json:"positive_count,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + PositiveDelta []int64 `protobuf:"zigzag64,13,rep,name=positive_delta,json=positiveDelta" json:"positive_delta,omitempty"` // Count delta of each bucket compared to previous one (or to zero for 1st bucket). + PositiveCount []float64 `protobuf:"fixed64,14,rep,name=positive_count,json=positiveCount" json:"positive_count,omitempty"` // Absolute count of each bucket. 
} -func (m *Histogram) Reset() { *m = Histogram{} } -func (m *Histogram) String() string { return proto.CompactTextString(m) } -func (*Histogram) ProtoMessage() {} -func (*Histogram) Descriptor() ([]byte, []int) { - return fileDescriptor_d1e5ddb18987a258, []int{6} +func (x *Histogram) Reset() { + *x = Histogram{} + if protoimpl.UnsafeEnabled { + mi := &file_io_prometheus_client_metrics_proto_msgTypes[6] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) + } } -func (m *Histogram) XXX_Unmarshal(b []byte) error { - return xxx_messageInfo_Histogram.Unmarshal(m, b) +func (x *Histogram) String() string { + return protoimpl.X.MessageStringOf(x) } -func (m *Histogram) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { - return xxx_messageInfo_Histogram.Marshal(b, m, deterministic) -} -func (m *Histogram) XXX_Merge(src proto.Message) { - xxx_messageInfo_Histogram.Merge(m, src) -} -func (m *Histogram) XXX_Size() int { - return xxx_messageInfo_Histogram.Size(m) -} -func (m *Histogram) XXX_DiscardUnknown() { - xxx_messageInfo_Histogram.DiscardUnknown(m) + +func (*Histogram) ProtoMessage() {} + +func (x *Histogram) ProtoReflect() protoreflect.Message { + mi := &file_io_prometheus_client_metrics_proto_msgTypes[6] + if protoimpl.UnsafeEnabled && x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) } -var xxx_messageInfo_Histogram proto.InternalMessageInfo +// Deprecated: Use Histogram.ProtoReflect.Descriptor instead. 
+func (*Histogram) Descriptor() ([]byte, []int) { + return file_io_prometheus_client_metrics_proto_rawDescGZIP(), []int{6} +} -func (m *Histogram) GetSampleCount() uint64 { - if m != nil && m.SampleCount != nil { - return *m.SampleCount +func (x *Histogram) GetSampleCount() uint64 { + if x != nil && x.SampleCount != nil { + return *x.SampleCount } return 0 } -func (m *Histogram) GetSampleCountFloat() float64 { - if m != nil && m.SampleCountFloat != nil { - return *m.SampleCountFloat +func (x *Histogram) GetSampleCountFloat() float64 { + if x != nil && x.SampleCountFloat != nil { + return *x.SampleCountFloat } return 0 } -func (m *Histogram) GetSampleSum() float64 { - if m != nil && m.SampleSum != nil { - return *m.SampleSum +func (x *Histogram) GetSampleSum() float64 { + if x != nil && x.SampleSum != nil { + return *x.SampleSum } return 0 } -func (m *Histogram) GetBucket() []*Bucket { - if m != nil { - return m.Bucket +func (x *Histogram) GetBucket() []*Bucket { + if x != nil { + return x.Bucket } return nil } -func (m *Histogram) GetSchema() int32 { - if m != nil && m.Schema != nil { - return *m.Schema +func (x *Histogram) GetSchema() int32 { + if x != nil && x.Schema != nil { + return *x.Schema } return 0 } -func (m *Histogram) GetZeroThreshold() float64 { - if m != nil && m.ZeroThreshold != nil { - return *m.ZeroThreshold +func (x *Histogram) GetZeroThreshold() float64 { + if x != nil && x.ZeroThreshold != nil { + return *x.ZeroThreshold } return 0 } -func (m *Histogram) GetZeroCount() uint64 { - if m != nil && m.ZeroCount != nil { - return *m.ZeroCount +func (x *Histogram) GetZeroCount() uint64 { + if x != nil && x.ZeroCount != nil { + return *x.ZeroCount } return 0 } -func (m *Histogram) GetZeroCountFloat() float64 { - if m != nil && m.ZeroCountFloat != nil { - return *m.ZeroCountFloat +func (x *Histogram) GetZeroCountFloat() float64 { + if x != nil && x.ZeroCountFloat != nil { + return *x.ZeroCountFloat } return 0 } -func (m *Histogram) GetNegativeSpan() 
[]*BucketSpan { - if m != nil { - return m.NegativeSpan +func (x *Histogram) GetNegativeSpan() []*BucketSpan { + if x != nil { + return x.NegativeSpan } return nil } -func (m *Histogram) GetNegativeDelta() []int64 { - if m != nil { - return m.NegativeDelta +func (x *Histogram) GetNegativeDelta() []int64 { + if x != nil { + return x.NegativeDelta } return nil } -func (m *Histogram) GetNegativeCount() []float64 { - if m != nil { - return m.NegativeCount +func (x *Histogram) GetNegativeCount() []float64 { + if x != nil { + return x.NegativeCount } return nil } -func (m *Histogram) GetPositiveSpan() []*BucketSpan { - if m != nil { - return m.PositiveSpan +func (x *Histogram) GetPositiveSpan() []*BucketSpan { + if x != nil { + return x.PositiveSpan } return nil } -func (m *Histogram) GetPositiveDelta() []int64 { - if m != nil { - return m.PositiveDelta +func (x *Histogram) GetPositiveDelta() []int64 { + if x != nil { + return x.PositiveDelta } return nil } -func (m *Histogram) GetPositiveCount() []float64 { - if m != nil { - return m.PositiveCount +func (x *Histogram) GetPositiveCount() []float64 { + if x != nil { + return x.PositiveCount } return nil } @@ -513,64 +598,72 @@ func (m *Histogram) GetPositiveCount() []float64 { // A Bucket of a conventional histogram, each of which is treated as // an individual counter-like time series by Prometheus. 
type Bucket struct { - CumulativeCount *uint64 `protobuf:"varint,1,opt,name=cumulative_count,json=cumulativeCount" json:"cumulative_count,omitempty"` - CumulativeCountFloat *float64 `protobuf:"fixed64,4,opt,name=cumulative_count_float,json=cumulativeCountFloat" json:"cumulative_count_float,omitempty"` - UpperBound *float64 `protobuf:"fixed64,2,opt,name=upper_bound,json=upperBound" json:"upper_bound,omitempty"` + state protoimpl.MessageState + sizeCache protoimpl.SizeCache + unknownFields protoimpl.UnknownFields + + CumulativeCount *uint64 `protobuf:"varint,1,opt,name=cumulative_count,json=cumulativeCount" json:"cumulative_count,omitempty"` // Cumulative in increasing order. + CumulativeCountFloat *float64 `protobuf:"fixed64,4,opt,name=cumulative_count_float,json=cumulativeCountFloat" json:"cumulative_count_float,omitempty"` // Overrides cumulative_count if > 0. + UpperBound *float64 `protobuf:"fixed64,2,opt,name=upper_bound,json=upperBound" json:"upper_bound,omitempty"` // Inclusive. Exemplar *Exemplar `protobuf:"bytes,3,opt,name=exemplar" json:"exemplar,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` } -func (m *Bucket) Reset() { *m = Bucket{} } -func (m *Bucket) String() string { return proto.CompactTextString(m) } -func (*Bucket) ProtoMessage() {} -func (*Bucket) Descriptor() ([]byte, []int) { - return fileDescriptor_d1e5ddb18987a258, []int{7} +func (x *Bucket) Reset() { + *x = Bucket{} + if protoimpl.UnsafeEnabled { + mi := &file_io_prometheus_client_metrics_proto_msgTypes[7] + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + ms.StoreMessageInfo(mi) + } } -func (m *Bucket) XXX_Unmarshal(b []byte) error { - return xxx_messageInfo_Bucket.Unmarshal(m, b) -} -func (m *Bucket) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { - return xxx_messageInfo_Bucket.Marshal(b, m, deterministic) -} -func (m *Bucket) XXX_Merge(src proto.Message) { - xxx_messageInfo_Bucket.Merge(m, 
src) -} -func (m *Bucket) XXX_Size() int { - return xxx_messageInfo_Bucket.Size(m) +func (x *Bucket) String() string { + return protoimpl.X.MessageStringOf(x) } -func (m *Bucket) XXX_DiscardUnknown() { - xxx_messageInfo_Bucket.DiscardUnknown(m) + +func (*Bucket) ProtoMessage() {} + +func (x *Bucket) ProtoReflect() protoreflect.Message { + mi := &file_io_prometheus_client_metrics_proto_msgTypes[7] + if protoimpl.UnsafeEnabled && x != nil { + ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) + if ms.LoadMessageInfo() == nil { + ms.StoreMessageInfo(mi) + } + return ms + } + return mi.MessageOf(x) } -var xxx_messageInfo_Bucket proto.InternalMessageInfo +// Deprecated: Use Bucket.ProtoReflect.Descriptor instead. +func (*Bucket) Descriptor() ([]byte, []int) { + return file_io_prometheus_client_metrics_proto_rawDescGZIP(), []int{7} +} -func (m *Bucket) GetCumulativeCount() uint64 { - if m != nil && m.CumulativeCount != nil { - return *m.CumulativeCount +func (x *Bucket) GetCumulativeCount() uint64 { + if x != nil && x.CumulativeCount != nil { + return *x.CumulativeCount } return 0 } -func (m *Bucket) GetCumulativeCountFloat() float64 { - if m != nil && m.CumulativeCountFloat != nil { - return *m.CumulativeCountFloat +func (x *Bucket) GetCumulativeCountFloat() float64 { + if x != nil && x.CumulativeCountFloat != nil { + return *x.CumulativeCountFloat } return 0 } -func (m *Bucket) GetUpperBound() float64 { - if m != nil && m.UpperBound != nil { - return *m.UpperBound +func (x *Bucket) GetUpperBound() float64 { + if x != nil && x.UpperBound != nil { + return *x.UpperBound } return 0 } -func (m *Bucket) GetExemplar() *Exemplar { - if m != nil { - return m.Exemplar +func (x *Bucket) GetExemplar() *Exemplar { + if x != nil { + return x.Exemplar } return nil } @@ -582,333 +675,658 @@ func (m *Bucket) GetExemplar() *Exemplar { // structured here (with all the buckets in a single array separate // from the Spans). 
type BucketSpan struct {
- Offset *int32 `protobuf:"zigzag32,1,opt,name=offset" json:"offset,omitempty"`
- Length *uint32 `protobuf:"varint,2,opt,name=length" json:"length,omitempty"`
- XXX_NoUnkeyedLiteral struct{} `json:"-"`
- XXX_unrecognized []byte `json:"-"`
- XXX_sizecache int32 `json:"-"`
-}
+ state protoimpl.MessageState
+ sizeCache protoimpl.SizeCache
+ unknownFields protoimpl.UnknownFields

-func (m *BucketSpan) Reset() { *m = BucketSpan{} }
-func (m *BucketSpan) String() string { return proto.CompactTextString(m) }
-func (*BucketSpan) ProtoMessage() {}
-func (*BucketSpan) Descriptor() ([]byte, []int) {
- return fileDescriptor_d1e5ddb18987a258, []int{8}
+ Offset *int32 `protobuf:"zigzag32,1,opt,name=offset" json:"offset,omitempty"` // Gap to previous span, or starting point for 1st span (which can be negative).
+ Length *uint32 `protobuf:"varint,2,opt,name=length" json:"length,omitempty"` // Length of consecutive buckets.
}

-func (m *BucketSpan) XXX_Unmarshal(b []byte) error {
- return xxx_messageInfo_BucketSpan.Unmarshal(m, b)
-}
-func (m *BucketSpan) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
- return xxx_messageInfo_BucketSpan.Marshal(b, m, deterministic)
-}
-func (m *BucketSpan) XXX_Merge(src proto.Message) {
- xxx_messageInfo_BucketSpan.Merge(m, src)
+func (x *BucketSpan) Reset() {
+ *x = BucketSpan{}
+ if protoimpl.UnsafeEnabled {
+ mi := &file_io_prometheus_client_metrics_proto_msgTypes[8]
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ ms.StoreMessageInfo(mi)
+ }
}

-func (m *BucketSpan) XXX_Size() int {
- return xxx_messageInfo_BucketSpan.Size(m)
+
+func (x *BucketSpan) String() string {
+ return protoimpl.X.MessageStringOf(x)
}

-func (m *BucketSpan) XXX_DiscardUnknown() {
- xxx_messageInfo_BucketSpan.DiscardUnknown(m)
+
+func (*BucketSpan) ProtoMessage() {}
+
+func (x *BucketSpan) ProtoReflect() protoreflect.Message {
+ mi := &file_io_prometheus_client_metrics_proto_msgTypes[8]
+ if protoimpl.UnsafeEnabled && x != nil {
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ if ms.LoadMessageInfo() == nil {
+ ms.StoreMessageInfo(mi)
+ }
+ return ms
+ }
+ return mi.MessageOf(x)
}

-var xxx_messageInfo_BucketSpan proto.InternalMessageInfo
+// Deprecated: Use BucketSpan.ProtoReflect.Descriptor instead.
+func (*BucketSpan) Descriptor() ([]byte, []int) {
+ return file_io_prometheus_client_metrics_proto_rawDescGZIP(), []int{8}
+}

-func (m *BucketSpan) GetOffset() int32 {
- if m != nil && m.Offset != nil {
- return *m.Offset
+func (x *BucketSpan) GetOffset() int32 {
+ if x != nil && x.Offset != nil {
+ return *x.Offset
 }
 return 0
}

-func (m *BucketSpan) GetLength() uint32 {
- if m != nil && m.Length != nil {
- return *m.Length
+func (x *BucketSpan) GetLength() uint32 {
+ if x != nil && x.Length != nil {
+ return *x.Length
 }
 return 0
}

type Exemplar struct {
- Label []*LabelPair `protobuf:"bytes,1,rep,name=label" json:"label,omitempty"`
- Value *float64 `protobuf:"fixed64,2,opt,name=value" json:"value,omitempty"`
- Timestamp *timestamp.Timestamp `protobuf:"bytes,3,opt,name=timestamp" json:"timestamp,omitempty"`
- XXX_NoUnkeyedLiteral struct{} `json:"-"`
- XXX_unrecognized []byte `json:"-"`
- XXX_sizecache int32 `json:"-"`
-}
+ state protoimpl.MessageState
+ sizeCache protoimpl.SizeCache
+ unknownFields protoimpl.UnknownFields

-func (m *Exemplar) Reset() { *m = Exemplar{} }
-func (m *Exemplar) String() string { return proto.CompactTextString(m) }
-func (*Exemplar) ProtoMessage() {}
-func (*Exemplar) Descriptor() ([]byte, []int) {
- return fileDescriptor_d1e5ddb18987a258, []int{9}
+ Label []*LabelPair `protobuf:"bytes,1,rep,name=label" json:"label,omitempty"`
+ Value *float64 `protobuf:"fixed64,2,opt,name=value" json:"value,omitempty"`
+ Timestamp *timestamppb.Timestamp `protobuf:"bytes,3,opt,name=timestamp" json:"timestamp,omitempty"` // OpenMetrics-style.
}

-func (m *Exemplar) XXX_Unmarshal(b []byte) error {
- return xxx_messageInfo_Exemplar.Unmarshal(m, b)
-}
-func (m *Exemplar) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
- return xxx_messageInfo_Exemplar.Marshal(b, m, deterministic)
-}
-func (m *Exemplar) XXX_Merge(src proto.Message) {
- xxx_messageInfo_Exemplar.Merge(m, src)
+func (x *Exemplar) Reset() {
+ *x = Exemplar{}
+ if protoimpl.UnsafeEnabled {
+ mi := &file_io_prometheus_client_metrics_proto_msgTypes[9]
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ ms.StoreMessageInfo(mi)
+ }
}

-func (m *Exemplar) XXX_Size() int {
- return xxx_messageInfo_Exemplar.Size(m)
+
+func (x *Exemplar) String() string {
+ return protoimpl.X.MessageStringOf(x)
}

-func (m *Exemplar) XXX_DiscardUnknown() {
- xxx_messageInfo_Exemplar.DiscardUnknown(m)
+
+func (*Exemplar) ProtoMessage() {}
+
+func (x *Exemplar) ProtoReflect() protoreflect.Message {
+ mi := &file_io_prometheus_client_metrics_proto_msgTypes[9]
+ if protoimpl.UnsafeEnabled && x != nil {
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ if ms.LoadMessageInfo() == nil {
+ ms.StoreMessageInfo(mi)
+ }
+ return ms
+ }
+ return mi.MessageOf(x)
}

-var xxx_messageInfo_Exemplar proto.InternalMessageInfo
+// Deprecated: Use Exemplar.ProtoReflect.Descriptor instead.
+func (*Exemplar) Descriptor() ([]byte, []int) {
+ return file_io_prometheus_client_metrics_proto_rawDescGZIP(), []int{9}
+}

-func (m *Exemplar) GetLabel() []*LabelPair {
- if m != nil {
- return m.Label
+func (x *Exemplar) GetLabel() []*LabelPair {
+ if x != nil {
+ return x.Label
 }
 return nil
}

-func (m *Exemplar) GetValue() float64 {
- if m != nil && m.Value != nil {
- return *m.Value
+func (x *Exemplar) GetValue() float64 {
+ if x != nil && x.Value != nil {
+ return *x.Value
 }
 return 0
}

-func (m *Exemplar) GetTimestamp() *timestamp.Timestamp {
- if m != nil {
- return m.Timestamp
+func (x *Exemplar) GetTimestamp() *timestamppb.Timestamp {
+ if x != nil {
+ return x.Timestamp
 }
 return nil
}

type Metric struct {
- Label []*LabelPair `protobuf:"bytes,1,rep,name=label" json:"label,omitempty"`
- Gauge *Gauge `protobuf:"bytes,2,opt,name=gauge" json:"gauge,omitempty"`
- Counter *Counter `protobuf:"bytes,3,opt,name=counter" json:"counter,omitempty"`
- Summary *Summary `protobuf:"bytes,4,opt,name=summary" json:"summary,omitempty"`
- Untyped *Untyped `protobuf:"bytes,5,opt,name=untyped" json:"untyped,omitempty"`
- Histogram *Histogram `protobuf:"bytes,7,opt,name=histogram" json:"histogram,omitempty"`
- TimestampMs *int64 `protobuf:"varint,6,opt,name=timestamp_ms,json=timestampMs" json:"timestamp_ms,omitempty"`
- XXX_NoUnkeyedLiteral struct{} `json:"-"`
- XXX_unrecognized []byte `json:"-"`
- XXX_sizecache int32 `json:"-"`
-}
-
-func (m *Metric) Reset() { *m = Metric{} }
-func (m *Metric) String() string { return proto.CompactTextString(m) }
-func (*Metric) ProtoMessage() {}
-func (*Metric) Descriptor() ([]byte, []int) {
- return fileDescriptor_d1e5ddb18987a258, []int{10}
+ state protoimpl.MessageState
+ sizeCache protoimpl.SizeCache
+ unknownFields protoimpl.UnknownFields
+
+ Label []*LabelPair `protobuf:"bytes,1,rep,name=label" json:"label,omitempty"`
+ Gauge *Gauge `protobuf:"bytes,2,opt,name=gauge" json:"gauge,omitempty"`
+ Counter *Counter `protobuf:"bytes,3,opt,name=counter" json:"counter,omitempty"`
+ Summary *Summary `protobuf:"bytes,4,opt,name=summary" json:"summary,omitempty"`
+ Untyped *Untyped `protobuf:"bytes,5,opt,name=untyped" json:"untyped,omitempty"`
+ Histogram *Histogram `protobuf:"bytes,7,opt,name=histogram" json:"histogram,omitempty"`
+ TimestampMs *int64 `protobuf:"varint,6,opt,name=timestamp_ms,json=timestampMs" json:"timestamp_ms,omitempty"`
+}
+
+func (x *Metric) Reset() {
+ *x = Metric{}
+ if protoimpl.UnsafeEnabled {
+ mi := &file_io_prometheus_client_metrics_proto_msgTypes[10]
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ ms.StoreMessageInfo(mi)
+ }
}

-func (m *Metric) XXX_Unmarshal(b []byte) error {
- return xxx_messageInfo_Metric.Unmarshal(m, b)
+func (x *Metric) String() string {
+ return protoimpl.X.MessageStringOf(x)
}

-func (m *Metric) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
- return xxx_messageInfo_Metric.Marshal(b, m, deterministic)
-}
-func (m *Metric) XXX_Merge(src proto.Message) {
- xxx_messageInfo_Metric.Merge(m, src)
-}
-func (m *Metric) XXX_Size() int {
- return xxx_messageInfo_Metric.Size(m)
-}
-func (m *Metric) XXX_DiscardUnknown() {
- xxx_messageInfo_Metric.DiscardUnknown(m)
+
+func (*Metric) ProtoMessage() {}
+
+func (x *Metric) ProtoReflect() protoreflect.Message {
+ mi := &file_io_prometheus_client_metrics_proto_msgTypes[10]
+ if protoimpl.UnsafeEnabled && x != nil {
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ if ms.LoadMessageInfo() == nil {
+ ms.StoreMessageInfo(mi)
+ }
+ return ms
+ }
+ return mi.MessageOf(x)
}

-var xxx_messageInfo_Metric proto.InternalMessageInfo
+// Deprecated: Use Metric.ProtoReflect.Descriptor instead.
+func (*Metric) Descriptor() ([]byte, []int) {
+ return file_io_prometheus_client_metrics_proto_rawDescGZIP(), []int{10}
+}

-func (m *Metric) GetLabel() []*LabelPair {
- if m != nil {
- return m.Label
+func (x *Metric) GetLabel() []*LabelPair {
+ if x != nil {
+ return x.Label
 }
 return nil
}

-func (m *Metric) GetGauge() *Gauge {
- if m != nil {
- return m.Gauge
+func (x *Metric) GetGauge() *Gauge {
+ if x != nil {
+ return x.Gauge
 }
 return nil
}

-func (m *Metric) GetCounter() *Counter {
- if m != nil {
- return m.Counter
+func (x *Metric) GetCounter() *Counter {
+ if x != nil {
+ return x.Counter
 }
 return nil
}

-func (m *Metric) GetSummary() *Summary {
- if m != nil {
- return m.Summary
+func (x *Metric) GetSummary() *Summary {
+ if x != nil {
+ return x.Summary
 }
 return nil
}

-func (m *Metric) GetUntyped() *Untyped {
- if m != nil {
- return m.Untyped
+func (x *Metric) GetUntyped() *Untyped {
+ if x != nil {
+ return x.Untyped
 }
 return nil
}

-func (m *Metric) GetHistogram() *Histogram {
- if m != nil {
- return m.Histogram
+func (x *Metric) GetHistogram() *Histogram {
+ if x != nil {
+ return x.Histogram
 }
 return nil
}

-func (m *Metric) GetTimestampMs() int64 {
- if m != nil && m.TimestampMs != nil {
- return *m.TimestampMs
+func (x *Metric) GetTimestampMs() int64 {
+ if x != nil && x.TimestampMs != nil {
+ return *x.TimestampMs
 }
 return 0
}

type MetricFamily struct {
- Name *string `protobuf:"bytes,1,opt,name=name" json:"name,omitempty"`
- Help *string `protobuf:"bytes,2,opt,name=help" json:"help,omitempty"`
- Type *MetricType `protobuf:"varint,3,opt,name=type,enum=io.prometheus.client.MetricType" json:"type,omitempty"`
- Metric []*Metric `protobuf:"bytes,4,rep,name=metric" json:"metric,omitempty"`
- XXX_NoUnkeyedLiteral struct{} `json:"-"`
- XXX_unrecognized []byte `json:"-"`
- XXX_sizecache int32 `json:"-"`
-}
-
-func (m *MetricFamily) Reset() { *m = MetricFamily{} }
-func (m *MetricFamily) String() string { return proto.CompactTextString(m) }
-func (*MetricFamily) ProtoMessage() {}
-func (*MetricFamily) Descriptor() ([]byte, []int) {
- return fileDescriptor_d1e5ddb18987a258, []int{11}
+ state protoimpl.MessageState
+ sizeCache protoimpl.SizeCache
+ unknownFields protoimpl.UnknownFields
+
+ Name *string `protobuf:"bytes,1,opt,name=name" json:"name,omitempty"`
+ Help *string `protobuf:"bytes,2,opt,name=help" json:"help,omitempty"`
+ Type *MetricType `protobuf:"varint,3,opt,name=type,enum=io.prometheus.client.MetricType" json:"type,omitempty"`
+ Metric []*Metric `protobuf:"bytes,4,rep,name=metric" json:"metric,omitempty"`
+}
+
+func (x *MetricFamily) Reset() {
+ *x = MetricFamily{}
+ if protoimpl.UnsafeEnabled {
+ mi := &file_io_prometheus_client_metrics_proto_msgTypes[11]
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ ms.StoreMessageInfo(mi)
+ }
}

-func (m *MetricFamily) XXX_Unmarshal(b []byte) error {
- return xxx_messageInfo_MetricFamily.Unmarshal(m, b)
-}
-func (m *MetricFamily) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
- return xxx_messageInfo_MetricFamily.Marshal(b, m, deterministic)
-}
-func (m *MetricFamily) XXX_Merge(src proto.Message) {
- xxx_messageInfo_MetricFamily.Merge(m, src)
-}
-func (m *MetricFamily) XXX_Size() int {
- return xxx_messageInfo_MetricFamily.Size(m)
+func (x *MetricFamily) String() string {
+ return protoimpl.X.MessageStringOf(x)
}

-func (m *MetricFamily) XXX_DiscardUnknown() {
- xxx_messageInfo_MetricFamily.DiscardUnknown(m)
+
+func (*MetricFamily) ProtoMessage() {}
+
+func (x *MetricFamily) ProtoReflect() protoreflect.Message {
+ mi := &file_io_prometheus_client_metrics_proto_msgTypes[11]
+ if protoimpl.UnsafeEnabled && x != nil {
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ if ms.LoadMessageInfo() == nil {
+ ms.StoreMessageInfo(mi)
+ }
+ return ms
+ }
+ return mi.MessageOf(x)
}

-var xxx_messageInfo_MetricFamily proto.InternalMessageInfo
+// Deprecated: Use MetricFamily.ProtoReflect.Descriptor instead.
+func (*MetricFamily) Descriptor() ([]byte, []int) {
+ return file_io_prometheus_client_metrics_proto_rawDescGZIP(), []int{11}
+}

-func (m *MetricFamily) GetName() string {
- if m != nil && m.Name != nil {
- return *m.Name
+func (x *MetricFamily) GetName() string {
+ if x != nil && x.Name != nil {
+ return *x.Name
 }
 return ""
}

-func (m *MetricFamily) GetHelp() string {
- if m != nil && m.Help != nil {
- return *m.Help
+func (x *MetricFamily) GetHelp() string {
+ if x != nil && x.Help != nil {
+ return *x.Help
 }
 return ""
}

-func (m *MetricFamily) GetType() MetricType {
- if m != nil && m.Type != nil {
- return *m.Type
+func (x *MetricFamily) GetType() MetricType {
+ if x != nil && x.Type != nil {
+ return *x.Type
 }
 return MetricType_COUNTER
}

-func (m *MetricFamily) GetMetric() []*Metric {
- if m != nil {
- return m.Metric
+func (x *MetricFamily) GetMetric() []*Metric {
+ if x != nil {
+ return x.Metric
 }
 return nil
}

-func init() {
- proto.RegisterEnum("io.prometheus.client.MetricType", MetricType_name, MetricType_value)
- proto.RegisterType((*LabelPair)(nil), "io.prometheus.client.LabelPair")
- proto.RegisterType((*Gauge)(nil), "io.prometheus.client.Gauge")
- proto.RegisterType((*Counter)(nil), "io.prometheus.client.Counter")
- proto.RegisterType((*Quantile)(nil), "io.prometheus.client.Quantile")
- proto.RegisterType((*Summary)(nil), "io.prometheus.client.Summary")
- proto.RegisterType((*Untyped)(nil), "io.prometheus.client.Untyped")
- proto.RegisterType((*Histogram)(nil), "io.prometheus.client.Histogram")
- proto.RegisterType((*Bucket)(nil), "io.prometheus.client.Bucket")
- proto.RegisterType((*BucketSpan)(nil), "io.prometheus.client.BucketSpan")
- proto.RegisterType((*Exemplar)(nil), "io.prometheus.client.Exemplar")
- proto.RegisterType((*Metric)(nil), "io.prometheus.client.Metric")
- proto.RegisterType((*MetricFamily)(nil), "io.prometheus.client.MetricFamily")
-}
-
-func init() {
- proto.RegisterFile("io/prometheus/client/metrics.proto",
fileDescriptor_d1e5ddb18987a258) -} - -var fileDescriptor_d1e5ddb18987a258 = []byte{ - // 896 bytes of a gzipped FileDescriptorProto - 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x9c, 0x56, 0xdd, 0x8e, 0xdb, 0x44, - 0x18, 0xc5, 0x9b, 0x5f, 0x7f, 0xd9, 0x6c, 0xd3, 0x61, 0x55, 0x59, 0x0b, 0xcb, 0x06, 0x4b, 0x48, - 0x0b, 0x42, 0x8e, 0x40, 0x5b, 0x81, 0x0a, 0x5c, 0xec, 0xb6, 0xe9, 0x16, 0x89, 0xb4, 0x65, 0x92, - 0x5c, 0x14, 0x2e, 0xac, 0x49, 0x32, 0xeb, 0x58, 0x78, 0x3c, 0xc6, 0x1e, 0x57, 0x2c, 0x2f, 0xc0, - 0x35, 0xaf, 0xc0, 0xc3, 0xf0, 0x22, 0x3c, 0x08, 0x68, 0xfe, 0xec, 0xdd, 0xe2, 0x94, 0xd2, 0x3b, - 0x7f, 0x67, 0xce, 0xf7, 0xcd, 0x39, 0xe3, 0xc9, 0x71, 0xc0, 0x8f, 0xf9, 0x24, 0xcb, 0x39, 0xa3, - 0x62, 0x4b, 0xcb, 0x62, 0xb2, 0x4e, 0x62, 0x9a, 0x8a, 0x09, 0xa3, 0x22, 0x8f, 0xd7, 0x45, 0x90, - 0xe5, 0x5c, 0x70, 0x74, 0x18, 0xf3, 0xa0, 0xe6, 0x04, 0x9a, 0x73, 0x74, 0x12, 0x71, 0x1e, 0x25, - 0x74, 0xa2, 0x38, 0xab, 0xf2, 0x6a, 0x22, 0x62, 0x46, 0x0b, 0x41, 0x58, 0xa6, 0xdb, 0xfc, 0xfb, - 0xe0, 0x7e, 0x47, 0x56, 0x34, 0x79, 0x4e, 0xe2, 0x1c, 0x21, 0x68, 0xa7, 0x84, 0x51, 0xcf, 0x19, - 0x3b, 0xa7, 0x2e, 0x56, 0xcf, 0xe8, 0x10, 0x3a, 0x2f, 0x49, 0x52, 0x52, 0x6f, 0x4f, 0x81, 0xba, - 0xf0, 0x8f, 0xa1, 0x73, 0x49, 0xca, 0xe8, 0xc6, 0xb2, 0xec, 0x71, 0xec, 0xf2, 0x8f, 0xd0, 0x7b, - 0xc8, 0xcb, 0x54, 0xd0, 0xbc, 0x99, 0x80, 0x1e, 0x40, 0x9f, 0xfe, 0x42, 0x59, 0x96, 0x90, 0x5c, - 0x0d, 0x1e, 0x7c, 0xfe, 0x41, 0xd0, 0x64, 0x20, 0x98, 0x1a, 0x16, 0xae, 0xf8, 0xfe, 0xd7, 0xd0, - 0xff, 0xbe, 0x24, 0xa9, 0x88, 0x13, 0x8a, 0x8e, 0xa0, 0xff, 0xb3, 0x79, 0x36, 0x1b, 0x54, 0xf5, - 0x6d, 0xe5, 0x95, 0xb4, 0xdf, 0x1c, 0xe8, 0xcd, 0x4b, 0xc6, 0x48, 0x7e, 0x8d, 0x3e, 0x84, 0xfd, - 0x82, 0xb0, 0x2c, 0xa1, 0xe1, 0x5a, 0xaa, 0x55, 0x13, 0xda, 0x78, 0xa0, 0x31, 0x65, 0x00, 0x1d, - 0x03, 0x18, 0x4a, 0x51, 0x32, 0x33, 0xc9, 0xd5, 0xc8, 0xbc, 0x64, 0xd2, 0x47, 0xb5, 0x7f, 0x6b, - 0xdc, 0xda, 0xed, 0xc3, 0x2a, 0xae, 0xf5, 0xf9, 0x27, 0xd0, 0x5b, 0xa6, 0xe2, 0x3a, 0xa3, 0x9b, - 
0x1d, 0xa7, 0xf8, 0x57, 0x1b, 0xdc, 0x27, 0x71, 0x21, 0x78, 0x94, 0x13, 0xf6, 0x26, 0x62, 0x3f, - 0x05, 0x74, 0x93, 0x12, 0x5e, 0x25, 0x9c, 0x08, 0xaf, 0xad, 0x66, 0x8e, 0x6e, 0x10, 0x1f, 0x4b, - 0xfc, 0xbf, 0xac, 0x9d, 0x41, 0x77, 0x55, 0xae, 0x7f, 0xa2, 0xc2, 0x18, 0x7b, 0xbf, 0xd9, 0xd8, - 0x85, 0xe2, 0x60, 0xc3, 0x45, 0xf7, 0xa0, 0x5b, 0xac, 0xb7, 0x94, 0x11, 0xaf, 0x33, 0x76, 0x4e, - 0xef, 0x62, 0x53, 0xa1, 0x8f, 0xe0, 0xe0, 0x57, 0x9a, 0xf3, 0x50, 0x6c, 0x73, 0x5a, 0x6c, 0x79, - 0xb2, 0xf1, 0xba, 0x6a, 0xc3, 0xa1, 0x44, 0x17, 0x16, 0x94, 0x9a, 0x14, 0x4d, 0x5b, 0xec, 0x29, - 0x8b, 0xae, 0x44, 0xb4, 0xc1, 0x53, 0x18, 0xd5, 0xcb, 0xc6, 0x5e, 0x5f, 0xcd, 0x39, 0xa8, 0x48, - 0xda, 0xdc, 0x14, 0x86, 0x29, 0x8d, 0x88, 0x88, 0x5f, 0xd2, 0xb0, 0xc8, 0x48, 0xea, 0xb9, 0xca, - 0xc4, 0xf8, 0x75, 0x26, 0xe6, 0x19, 0x49, 0xf1, 0xbe, 0x6d, 0x93, 0x95, 0x94, 0x5d, 0x8d, 0xd9, - 0xd0, 0x44, 0x10, 0x0f, 0xc6, 0xad, 0x53, 0x84, 0xab, 0xe1, 0x8f, 0x24, 0x78, 0x8b, 0xa6, 0xa5, - 0x0f, 0xc6, 0x2d, 0xe9, 0xce, 0xa2, 0x5a, 0xfe, 0x14, 0x86, 0x19, 0x2f, 0xe2, 0x5a, 0xd4, 0xfe, - 0x9b, 0x8a, 0xb2, 0x6d, 0x56, 0x54, 0x35, 0x46, 0x8b, 0x1a, 0x6a, 0x51, 0x16, 0xad, 0x44, 0x55, - 0x34, 0x2d, 0xea, 0x40, 0x8b, 0xb2, 0xa8, 0x12, 0xe5, 0xff, 0xe9, 0x40, 0x57, 0x6f, 0x85, 0x3e, - 0x86, 0xd1, 0xba, 0x64, 0x65, 0x72, 0xd3, 0x88, 0xbe, 0x66, 0x77, 0x6a, 0x5c, 0x5b, 0x39, 0x83, - 0x7b, 0xaf, 0x52, 0x6f, 0x5d, 0xb7, 0xc3, 0x57, 0x1a, 0xf4, 0x5b, 0x39, 0x81, 0x41, 0x99, 0x65, - 0x34, 0x0f, 0x57, 0xbc, 0x4c, 0x37, 0xe6, 0xce, 0x81, 0x82, 0x2e, 0x24, 0x72, 0x2b, 0x17, 0x5a, - 0xff, 0x3b, 0x17, 0xa0, 0x3e, 0x32, 0x79, 0x11, 0xf9, 0xd5, 0x55, 0x41, 0xb5, 0x83, 0xbb, 0xd8, - 0x54, 0x12, 0x4f, 0x68, 0x1a, 0x89, 0xad, 0xda, 0x7d, 0x88, 0x4d, 0xe5, 0xff, 0xee, 0x40, 0xdf, - 0x0e, 0x45, 0xf7, 0xa1, 0x93, 0xc8, 0x54, 0xf4, 0x1c, 0xf5, 0x82, 0x4e, 0x9a, 0x35, 0x54, 0xc1, - 0x89, 0x35, 0xbb, 0x39, 0x71, 0xd0, 0x97, 0xe0, 0x56, 0xa9, 0x6b, 0x4c, 0x1d, 0x05, 0x3a, 0x97, - 0x03, 0x9b, 0xcb, 0xc1, 0xc2, 0x32, 
0x70, 0x4d, 0xf6, 0xff, 0xde, 0x83, 0xee, 0x4c, 0xa5, 0xfc, - 0xdb, 0x2a, 0xfa, 0x0c, 0x3a, 0x91, 0xcc, 0x69, 0x13, 0xb2, 0xef, 0x35, 0xb7, 0xa9, 0x28, 0xc7, - 0x9a, 0x89, 0xbe, 0x80, 0xde, 0x5a, 0x67, 0xb7, 0x11, 0x7b, 0xdc, 0xdc, 0x64, 0x02, 0x1e, 0x5b, - 0xb6, 0x6c, 0x2c, 0x74, 0xb0, 0xaa, 0x3b, 0xb0, 0xb3, 0xd1, 0xa4, 0x2f, 0xb6, 0x6c, 0xd9, 0x58, - 0xea, 0x20, 0x54, 0xa1, 0xb1, 0xb3, 0xd1, 0xa4, 0x25, 0xb6, 0x6c, 0xf4, 0x0d, 0xb8, 0x5b, 0x9b, - 0x8f, 0x2a, 0x2c, 0x76, 0x1e, 0x4c, 0x15, 0xa3, 0xb8, 0xee, 0x90, 0x89, 0x5a, 0x9d, 0x75, 0xc8, - 0x0a, 0x95, 0x48, 0x2d, 0x3c, 0xa8, 0xb0, 0x59, 0xe1, 0xff, 0xe1, 0xc0, 0xbe, 0x7e, 0x03, 0x8f, - 0x09, 0x8b, 0x93, 0xeb, 0xc6, 0x4f, 0x24, 0x82, 0xf6, 0x96, 0x26, 0x99, 0xf9, 0x42, 0xaa, 0x67, - 0x74, 0x06, 0x6d, 0xa9, 0x51, 0x1d, 0xe1, 0xc1, 0xae, 0x5f, 0xb8, 0x9e, 0xbc, 0xb8, 0xce, 0x28, - 0x56, 0x6c, 0x99, 0xb9, 0xfa, 0xab, 0xee, 0xb5, 0x5f, 0x97, 0xb9, 0xba, 0x0f, 0x1b, 0xee, 0x27, - 0x2b, 0x80, 0x7a, 0x12, 0x1a, 0x40, 0xef, 0xe1, 0xb3, 0xe5, 0xd3, 0xc5, 0x14, 0x8f, 0xde, 0x41, - 0x2e, 0x74, 0x2e, 0xcf, 0x97, 0x97, 0xd3, 0x91, 0x23, 0xf1, 0xf9, 0x72, 0x36, 0x3b, 0xc7, 0x2f, - 0x46, 0x7b, 0xb2, 0x58, 0x3e, 0x5d, 0xbc, 0x78, 0x3e, 0x7d, 0x34, 0x6a, 0xa1, 0x21, 0xb8, 0x4f, - 0xbe, 0x9d, 0x2f, 0x9e, 0x5d, 0xe2, 0xf3, 0xd9, 0xa8, 0x8d, 0xde, 0x85, 0x3b, 0xaa, 0x27, 0xac, - 0xc1, 0xce, 0x05, 0x86, 0xc6, 0x3f, 0x18, 0x3f, 0x3c, 0x88, 0x62, 0xb1, 0x2d, 0x57, 0xc1, 0x9a, - 0xb3, 0x7f, 0xff, 0x45, 0x09, 0x19, 0xdf, 0xd0, 0x64, 0x12, 0xf1, 0xaf, 0x62, 0x1e, 0xd6, 0xab, - 0xa1, 0x5e, 0xfd, 0x27, 0x00, 0x00, 0xff, 0xff, 0x16, 0x77, 0x81, 0x98, 0xd7, 0x08, 0x00, 0x00, +var File_io_prometheus_client_metrics_proto protoreflect.FileDescriptor + +var file_io_prometheus_client_metrics_proto_rawDesc = []byte{ + 0x0a, 0x22, 0x69, 0x6f, 0x2f, 0x70, 0x72, 0x6f, 0x6d, 0x65, 0x74, 0x68, 0x65, 0x75, 0x73, 0x2f, + 0x63, 0x6c, 0x69, 0x65, 0x6e, 0x74, 0x2f, 0x6d, 0x65, 0x74, 0x72, 0x69, 0x63, 0x73, 0x2e, 0x70, + 0x72, 0x6f, 0x74, 0x6f, 0x12, 0x14, 
0x69, 0x6f, 0x2e, 0x70, 0x72, 0x6f, 0x6d, 0x65, 0x74, 0x68, + 0x65, 0x75, 0x73, 0x2e, 0x63, 0x6c, 0x69, 0x65, 0x6e, 0x74, 0x1a, 0x1f, 0x67, 0x6f, 0x6f, 0x67, + 0x6c, 0x65, 0x2f, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2f, 0x74, 0x69, 0x6d, 0x65, + 0x73, 0x74, 0x61, 0x6d, 0x70, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x22, 0x35, 0x0a, 0x09, 0x4c, + 0x61, 0x62, 0x65, 0x6c, 0x50, 0x61, 0x69, 0x72, 0x12, 0x12, 0x0a, 0x04, 0x6e, 0x61, 0x6d, 0x65, + 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x04, 0x6e, 0x61, 0x6d, 0x65, 0x12, 0x14, 0x0a, 0x05, + 0x76, 0x61, 0x6c, 0x75, 0x65, 0x18, 0x02, 0x20, 0x01, 0x28, 0x09, 0x52, 0x05, 0x76, 0x61, 0x6c, + 0x75, 0x65, 0x22, 0x1d, 0x0a, 0x05, 0x47, 0x61, 0x75, 0x67, 0x65, 0x12, 0x14, 0x0a, 0x05, 0x76, + 0x61, 0x6c, 0x75, 0x65, 0x18, 0x01, 0x20, 0x01, 0x28, 0x01, 0x52, 0x05, 0x76, 0x61, 0x6c, 0x75, + 0x65, 0x22, 0x5b, 0x0a, 0x07, 0x43, 0x6f, 0x75, 0x6e, 0x74, 0x65, 0x72, 0x12, 0x14, 0x0a, 0x05, + 0x76, 0x61, 0x6c, 0x75, 0x65, 0x18, 0x01, 0x20, 0x01, 0x28, 0x01, 0x52, 0x05, 0x76, 0x61, 0x6c, + 0x75, 0x65, 0x12, 0x3a, 0x0a, 0x08, 0x65, 0x78, 0x65, 0x6d, 0x70, 0x6c, 0x61, 0x72, 0x18, 0x02, + 0x20, 0x01, 0x28, 0x0b, 0x32, 0x1e, 0x2e, 0x69, 0x6f, 0x2e, 0x70, 0x72, 0x6f, 0x6d, 0x65, 0x74, + 0x68, 0x65, 0x75, 0x73, 0x2e, 0x63, 0x6c, 0x69, 0x65, 0x6e, 0x74, 0x2e, 0x45, 0x78, 0x65, 0x6d, + 0x70, 0x6c, 0x61, 0x72, 0x52, 0x08, 0x65, 0x78, 0x65, 0x6d, 0x70, 0x6c, 0x61, 0x72, 0x22, 0x3c, + 0x0a, 0x08, 0x51, 0x75, 0x61, 0x6e, 0x74, 0x69, 0x6c, 0x65, 0x12, 0x1a, 0x0a, 0x08, 0x71, 0x75, + 0x61, 0x6e, 0x74, 0x69, 0x6c, 0x65, 0x18, 0x01, 0x20, 0x01, 0x28, 0x01, 0x52, 0x08, 0x71, 0x75, + 0x61, 0x6e, 0x74, 0x69, 0x6c, 0x65, 0x12, 0x14, 0x0a, 0x05, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x18, + 0x02, 0x20, 0x01, 0x28, 0x01, 0x52, 0x05, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x22, 0x87, 0x01, 0x0a, + 0x07, 0x53, 0x75, 0x6d, 0x6d, 0x61, 0x72, 0x79, 0x12, 0x21, 0x0a, 0x0c, 0x73, 0x61, 0x6d, 0x70, + 0x6c, 0x65, 0x5f, 0x63, 0x6f, 0x75, 0x6e, 0x74, 0x18, 0x01, 0x20, 0x01, 
0x28, 0x04, 0x52, 0x0b, + 0x73, 0x61, 0x6d, 0x70, 0x6c, 0x65, 0x43, 0x6f, 0x75, 0x6e, 0x74, 0x12, 0x1d, 0x0a, 0x0a, 0x73, + 0x61, 0x6d, 0x70, 0x6c, 0x65, 0x5f, 0x73, 0x75, 0x6d, 0x18, 0x02, 0x20, 0x01, 0x28, 0x01, 0x52, + 0x09, 0x73, 0x61, 0x6d, 0x70, 0x6c, 0x65, 0x53, 0x75, 0x6d, 0x12, 0x3a, 0x0a, 0x08, 0x71, 0x75, + 0x61, 0x6e, 0x74, 0x69, 0x6c, 0x65, 0x18, 0x03, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x1e, 0x2e, 0x69, + 0x6f, 0x2e, 0x70, 0x72, 0x6f, 0x6d, 0x65, 0x74, 0x68, 0x65, 0x75, 0x73, 0x2e, 0x63, 0x6c, 0x69, + 0x65, 0x6e, 0x74, 0x2e, 0x51, 0x75, 0x61, 0x6e, 0x74, 0x69, 0x6c, 0x65, 0x52, 0x08, 0x71, 0x75, + 0x61, 0x6e, 0x74, 0x69, 0x6c, 0x65, 0x22, 0x1f, 0x0a, 0x07, 0x55, 0x6e, 0x74, 0x79, 0x70, 0x65, + 0x64, 0x12, 0x14, 0x0a, 0x05, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x18, 0x01, 0x20, 0x01, 0x28, 0x01, + 0x52, 0x05, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x22, 0xe3, 0x04, 0x0a, 0x09, 0x48, 0x69, 0x73, 0x74, + 0x6f, 0x67, 0x72, 0x61, 0x6d, 0x12, 0x21, 0x0a, 0x0c, 0x73, 0x61, 0x6d, 0x70, 0x6c, 0x65, 0x5f, + 0x63, 0x6f, 0x75, 0x6e, 0x74, 0x18, 0x01, 0x20, 0x01, 0x28, 0x04, 0x52, 0x0b, 0x73, 0x61, 0x6d, + 0x70, 0x6c, 0x65, 0x43, 0x6f, 0x75, 0x6e, 0x74, 0x12, 0x2c, 0x0a, 0x12, 0x73, 0x61, 0x6d, 0x70, + 0x6c, 0x65, 0x5f, 0x63, 0x6f, 0x75, 0x6e, 0x74, 0x5f, 0x66, 0x6c, 0x6f, 0x61, 0x74, 0x18, 0x04, + 0x20, 0x01, 0x28, 0x01, 0x52, 0x10, 0x73, 0x61, 0x6d, 0x70, 0x6c, 0x65, 0x43, 0x6f, 0x75, 0x6e, + 0x74, 0x46, 0x6c, 0x6f, 0x61, 0x74, 0x12, 0x1d, 0x0a, 0x0a, 0x73, 0x61, 0x6d, 0x70, 0x6c, 0x65, + 0x5f, 0x73, 0x75, 0x6d, 0x18, 0x02, 0x20, 0x01, 0x28, 0x01, 0x52, 0x09, 0x73, 0x61, 0x6d, 0x70, + 0x6c, 0x65, 0x53, 0x75, 0x6d, 0x12, 0x34, 0x0a, 0x06, 0x62, 0x75, 0x63, 0x6b, 0x65, 0x74, 0x18, + 0x03, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x1c, 0x2e, 0x69, 0x6f, 0x2e, 0x70, 0x72, 0x6f, 0x6d, 0x65, + 0x74, 0x68, 0x65, 0x75, 0x73, 0x2e, 0x63, 0x6c, 0x69, 0x65, 0x6e, 0x74, 0x2e, 0x42, 0x75, 0x63, + 0x6b, 0x65, 0x74, 0x52, 0x06, 0x62, 0x75, 0x63, 0x6b, 0x65, 0x74, 0x12, 0x16, 0x0a, 0x06, 0x73, + 0x63, 0x68, 
0x65, 0x6d, 0x61, 0x18, 0x05, 0x20, 0x01, 0x28, 0x11, 0x52, 0x06, 0x73, 0x63, 0x68, + 0x65, 0x6d, 0x61, 0x12, 0x25, 0x0a, 0x0e, 0x7a, 0x65, 0x72, 0x6f, 0x5f, 0x74, 0x68, 0x72, 0x65, + 0x73, 0x68, 0x6f, 0x6c, 0x64, 0x18, 0x06, 0x20, 0x01, 0x28, 0x01, 0x52, 0x0d, 0x7a, 0x65, 0x72, + 0x6f, 0x54, 0x68, 0x72, 0x65, 0x73, 0x68, 0x6f, 0x6c, 0x64, 0x12, 0x1d, 0x0a, 0x0a, 0x7a, 0x65, + 0x72, 0x6f, 0x5f, 0x63, 0x6f, 0x75, 0x6e, 0x74, 0x18, 0x07, 0x20, 0x01, 0x28, 0x04, 0x52, 0x09, + 0x7a, 0x65, 0x72, 0x6f, 0x43, 0x6f, 0x75, 0x6e, 0x74, 0x12, 0x28, 0x0a, 0x10, 0x7a, 0x65, 0x72, + 0x6f, 0x5f, 0x63, 0x6f, 0x75, 0x6e, 0x74, 0x5f, 0x66, 0x6c, 0x6f, 0x61, 0x74, 0x18, 0x08, 0x20, + 0x01, 0x28, 0x01, 0x52, 0x0e, 0x7a, 0x65, 0x72, 0x6f, 0x43, 0x6f, 0x75, 0x6e, 0x74, 0x46, 0x6c, + 0x6f, 0x61, 0x74, 0x12, 0x45, 0x0a, 0x0d, 0x6e, 0x65, 0x67, 0x61, 0x74, 0x69, 0x76, 0x65, 0x5f, + 0x73, 0x70, 0x61, 0x6e, 0x18, 0x09, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x20, 0x2e, 0x69, 0x6f, 0x2e, + 0x70, 0x72, 0x6f, 0x6d, 0x65, 0x74, 0x68, 0x65, 0x75, 0x73, 0x2e, 0x63, 0x6c, 0x69, 0x65, 0x6e, + 0x74, 0x2e, 0x42, 0x75, 0x63, 0x6b, 0x65, 0x74, 0x53, 0x70, 0x61, 0x6e, 0x52, 0x0c, 0x6e, 0x65, + 0x67, 0x61, 0x74, 0x69, 0x76, 0x65, 0x53, 0x70, 0x61, 0x6e, 0x12, 0x25, 0x0a, 0x0e, 0x6e, 0x65, + 0x67, 0x61, 0x74, 0x69, 0x76, 0x65, 0x5f, 0x64, 0x65, 0x6c, 0x74, 0x61, 0x18, 0x0a, 0x20, 0x03, + 0x28, 0x12, 0x52, 0x0d, 0x6e, 0x65, 0x67, 0x61, 0x74, 0x69, 0x76, 0x65, 0x44, 0x65, 0x6c, 0x74, + 0x61, 0x12, 0x25, 0x0a, 0x0e, 0x6e, 0x65, 0x67, 0x61, 0x74, 0x69, 0x76, 0x65, 0x5f, 0x63, 0x6f, + 0x75, 0x6e, 0x74, 0x18, 0x0b, 0x20, 0x03, 0x28, 0x01, 0x52, 0x0d, 0x6e, 0x65, 0x67, 0x61, 0x74, + 0x69, 0x76, 0x65, 0x43, 0x6f, 0x75, 0x6e, 0x74, 0x12, 0x45, 0x0a, 0x0d, 0x70, 0x6f, 0x73, 0x69, + 0x74, 0x69, 0x76, 0x65, 0x5f, 0x73, 0x70, 0x61, 0x6e, 0x18, 0x0c, 0x20, 0x03, 0x28, 0x0b, 0x32, + 0x20, 0x2e, 0x69, 0x6f, 0x2e, 0x70, 0x72, 0x6f, 0x6d, 0x65, 0x74, 0x68, 0x65, 0x75, 0x73, 0x2e, + 0x63, 0x6c, 0x69, 0x65, 0x6e, 0x74, 0x2e, 0x42, 
0x75, 0x63, 0x6b, 0x65, 0x74, 0x53, 0x70, 0x61, + 0x6e, 0x52, 0x0c, 0x70, 0x6f, 0x73, 0x69, 0x74, 0x69, 0x76, 0x65, 0x53, 0x70, 0x61, 0x6e, 0x12, + 0x25, 0x0a, 0x0e, 0x70, 0x6f, 0x73, 0x69, 0x74, 0x69, 0x76, 0x65, 0x5f, 0x64, 0x65, 0x6c, 0x74, + 0x61, 0x18, 0x0d, 0x20, 0x03, 0x28, 0x12, 0x52, 0x0d, 0x70, 0x6f, 0x73, 0x69, 0x74, 0x69, 0x76, + 0x65, 0x44, 0x65, 0x6c, 0x74, 0x61, 0x12, 0x25, 0x0a, 0x0e, 0x70, 0x6f, 0x73, 0x69, 0x74, 0x69, + 0x76, 0x65, 0x5f, 0x63, 0x6f, 0x75, 0x6e, 0x74, 0x18, 0x0e, 0x20, 0x03, 0x28, 0x01, 0x52, 0x0d, + 0x70, 0x6f, 0x73, 0x69, 0x74, 0x69, 0x76, 0x65, 0x43, 0x6f, 0x75, 0x6e, 0x74, 0x22, 0xc6, 0x01, + 0x0a, 0x06, 0x42, 0x75, 0x63, 0x6b, 0x65, 0x74, 0x12, 0x29, 0x0a, 0x10, 0x63, 0x75, 0x6d, 0x75, + 0x6c, 0x61, 0x74, 0x69, 0x76, 0x65, 0x5f, 0x63, 0x6f, 0x75, 0x6e, 0x74, 0x18, 0x01, 0x20, 0x01, + 0x28, 0x04, 0x52, 0x0f, 0x63, 0x75, 0x6d, 0x75, 0x6c, 0x61, 0x74, 0x69, 0x76, 0x65, 0x43, 0x6f, + 0x75, 0x6e, 0x74, 0x12, 0x34, 0x0a, 0x16, 0x63, 0x75, 0x6d, 0x75, 0x6c, 0x61, 0x74, 0x69, 0x76, + 0x65, 0x5f, 0x63, 0x6f, 0x75, 0x6e, 0x74, 0x5f, 0x66, 0x6c, 0x6f, 0x61, 0x74, 0x18, 0x04, 0x20, + 0x01, 0x28, 0x01, 0x52, 0x14, 0x63, 0x75, 0x6d, 0x75, 0x6c, 0x61, 0x74, 0x69, 0x76, 0x65, 0x43, + 0x6f, 0x75, 0x6e, 0x74, 0x46, 0x6c, 0x6f, 0x61, 0x74, 0x12, 0x1f, 0x0a, 0x0b, 0x75, 0x70, 0x70, + 0x65, 0x72, 0x5f, 0x62, 0x6f, 0x75, 0x6e, 0x64, 0x18, 0x02, 0x20, 0x01, 0x28, 0x01, 0x52, 0x0a, + 0x75, 0x70, 0x70, 0x65, 0x72, 0x42, 0x6f, 0x75, 0x6e, 0x64, 0x12, 0x3a, 0x0a, 0x08, 0x65, 0x78, + 0x65, 0x6d, 0x70, 0x6c, 0x61, 0x72, 0x18, 0x03, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x1e, 0x2e, 0x69, + 0x6f, 0x2e, 0x70, 0x72, 0x6f, 0x6d, 0x65, 0x74, 0x68, 0x65, 0x75, 0x73, 0x2e, 0x63, 0x6c, 0x69, + 0x65, 0x6e, 0x74, 0x2e, 0x45, 0x78, 0x65, 0x6d, 0x70, 0x6c, 0x61, 0x72, 0x52, 0x08, 0x65, 0x78, + 0x65, 0x6d, 0x70, 0x6c, 0x61, 0x72, 0x22, 0x3c, 0x0a, 0x0a, 0x42, 0x75, 0x63, 0x6b, 0x65, 0x74, + 0x53, 0x70, 0x61, 0x6e, 0x12, 0x16, 0x0a, 0x06, 0x6f, 0x66, 0x66, 0x73, 0x65, 0x74, 
0x18, 0x01, + 0x20, 0x01, 0x28, 0x11, 0x52, 0x06, 0x6f, 0x66, 0x66, 0x73, 0x65, 0x74, 0x12, 0x16, 0x0a, 0x06, + 0x6c, 0x65, 0x6e, 0x67, 0x74, 0x68, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0d, 0x52, 0x06, 0x6c, 0x65, + 0x6e, 0x67, 0x74, 0x68, 0x22, 0x91, 0x01, 0x0a, 0x08, 0x45, 0x78, 0x65, 0x6d, 0x70, 0x6c, 0x61, + 0x72, 0x12, 0x35, 0x0a, 0x05, 0x6c, 0x61, 0x62, 0x65, 0x6c, 0x18, 0x01, 0x20, 0x03, 0x28, 0x0b, + 0x32, 0x1f, 0x2e, 0x69, 0x6f, 0x2e, 0x70, 0x72, 0x6f, 0x6d, 0x65, 0x74, 0x68, 0x65, 0x75, 0x73, + 0x2e, 0x63, 0x6c, 0x69, 0x65, 0x6e, 0x74, 0x2e, 0x4c, 0x61, 0x62, 0x65, 0x6c, 0x50, 0x61, 0x69, + 0x72, 0x52, 0x05, 0x6c, 0x61, 0x62, 0x65, 0x6c, 0x12, 0x14, 0x0a, 0x05, 0x76, 0x61, 0x6c, 0x75, + 0x65, 0x18, 0x02, 0x20, 0x01, 0x28, 0x01, 0x52, 0x05, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x12, 0x38, + 0x0a, 0x09, 0x74, 0x69, 0x6d, 0x65, 0x73, 0x74, 0x61, 0x6d, 0x70, 0x18, 0x03, 0x20, 0x01, 0x28, + 0x0b, 0x32, 0x1a, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, + 0x62, 0x75, 0x66, 0x2e, 0x54, 0x69, 0x6d, 0x65, 0x73, 0x74, 0x61, 0x6d, 0x70, 0x52, 0x09, 0x74, + 0x69, 0x6d, 0x65, 0x73, 0x74, 0x61, 0x6d, 0x70, 0x22, 0xff, 0x02, 0x0a, 0x06, 0x4d, 0x65, 0x74, + 0x72, 0x69, 0x63, 0x12, 0x35, 0x0a, 0x05, 0x6c, 0x61, 0x62, 0x65, 0x6c, 0x18, 0x01, 0x20, 0x03, + 0x28, 0x0b, 0x32, 0x1f, 0x2e, 0x69, 0x6f, 0x2e, 0x70, 0x72, 0x6f, 0x6d, 0x65, 0x74, 0x68, 0x65, + 0x75, 0x73, 0x2e, 0x63, 0x6c, 0x69, 0x65, 0x6e, 0x74, 0x2e, 0x4c, 0x61, 0x62, 0x65, 0x6c, 0x50, + 0x61, 0x69, 0x72, 0x52, 0x05, 0x6c, 0x61, 0x62, 0x65, 0x6c, 0x12, 0x31, 0x0a, 0x05, 0x67, 0x61, + 0x75, 0x67, 0x65, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x1b, 0x2e, 0x69, 0x6f, 0x2e, 0x70, + 0x72, 0x6f, 0x6d, 0x65, 0x74, 0x68, 0x65, 0x75, 0x73, 0x2e, 0x63, 0x6c, 0x69, 0x65, 0x6e, 0x74, + 0x2e, 0x47, 0x61, 0x75, 0x67, 0x65, 0x52, 0x05, 0x67, 0x61, 0x75, 0x67, 0x65, 0x12, 0x37, 0x0a, + 0x07, 0x63, 0x6f, 0x75, 0x6e, 0x74, 0x65, 0x72, 0x18, 0x03, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x1d, + 0x2e, 0x69, 0x6f, 0x2e, 
0x70, 0x72, 0x6f, 0x6d, 0x65, 0x74, 0x68, 0x65, 0x75, 0x73, 0x2e, 0x63, + 0x6c, 0x69, 0x65, 0x6e, 0x74, 0x2e, 0x43, 0x6f, 0x75, 0x6e, 0x74, 0x65, 0x72, 0x52, 0x07, 0x63, + 0x6f, 0x75, 0x6e, 0x74, 0x65, 0x72, 0x12, 0x37, 0x0a, 0x07, 0x73, 0x75, 0x6d, 0x6d, 0x61, 0x72, + 0x79, 0x18, 0x04, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x1d, 0x2e, 0x69, 0x6f, 0x2e, 0x70, 0x72, 0x6f, + 0x6d, 0x65, 0x74, 0x68, 0x65, 0x75, 0x73, 0x2e, 0x63, 0x6c, 0x69, 0x65, 0x6e, 0x74, 0x2e, 0x53, + 0x75, 0x6d, 0x6d, 0x61, 0x72, 0x79, 0x52, 0x07, 0x73, 0x75, 0x6d, 0x6d, 0x61, 0x72, 0x79, 0x12, + 0x37, 0x0a, 0x07, 0x75, 0x6e, 0x74, 0x79, 0x70, 0x65, 0x64, 0x18, 0x05, 0x20, 0x01, 0x28, 0x0b, + 0x32, 0x1d, 0x2e, 0x69, 0x6f, 0x2e, 0x70, 0x72, 0x6f, 0x6d, 0x65, 0x74, 0x68, 0x65, 0x75, 0x73, + 0x2e, 0x63, 0x6c, 0x69, 0x65, 0x6e, 0x74, 0x2e, 0x55, 0x6e, 0x74, 0x79, 0x70, 0x65, 0x64, 0x52, + 0x07, 0x75, 0x6e, 0x74, 0x79, 0x70, 0x65, 0x64, 0x12, 0x3d, 0x0a, 0x09, 0x68, 0x69, 0x73, 0x74, + 0x6f, 0x67, 0x72, 0x61, 0x6d, 0x18, 0x07, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x1f, 0x2e, 0x69, 0x6f, + 0x2e, 0x70, 0x72, 0x6f, 0x6d, 0x65, 0x74, 0x68, 0x65, 0x75, 0x73, 0x2e, 0x63, 0x6c, 0x69, 0x65, + 0x6e, 0x74, 0x2e, 0x48, 0x69, 0x73, 0x74, 0x6f, 0x67, 0x72, 0x61, 0x6d, 0x52, 0x09, 0x68, 0x69, + 0x73, 0x74, 0x6f, 0x67, 0x72, 0x61, 0x6d, 0x12, 0x21, 0x0a, 0x0c, 0x74, 0x69, 0x6d, 0x65, 0x73, + 0x74, 0x61, 0x6d, 0x70, 0x5f, 0x6d, 0x73, 0x18, 0x06, 0x20, 0x01, 0x28, 0x03, 0x52, 0x0b, 0x74, + 0x69, 0x6d, 0x65, 0x73, 0x74, 0x61, 0x6d, 0x70, 0x4d, 0x73, 0x22, 0xa2, 0x01, 0x0a, 0x0c, 0x4d, + 0x65, 0x74, 0x72, 0x69, 0x63, 0x46, 0x61, 0x6d, 0x69, 0x6c, 0x79, 0x12, 0x12, 0x0a, 0x04, 0x6e, + 0x61, 0x6d, 0x65, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x04, 0x6e, 0x61, 0x6d, 0x65, 0x12, + 0x12, 0x0a, 0x04, 0x68, 0x65, 0x6c, 0x70, 0x18, 0x02, 0x20, 0x01, 0x28, 0x09, 0x52, 0x04, 0x68, + 0x65, 0x6c, 0x70, 0x12, 0x34, 0x0a, 0x04, 0x74, 0x79, 0x70, 0x65, 0x18, 0x03, 0x20, 0x01, 0x28, + 0x0e, 0x32, 0x20, 0x2e, 0x69, 0x6f, 0x2e, 0x70, 0x72, 0x6f, 
0x6d, 0x65, 0x74, 0x68, 0x65, 0x75, + 0x73, 0x2e, 0x63, 0x6c, 0x69, 0x65, 0x6e, 0x74, 0x2e, 0x4d, 0x65, 0x74, 0x72, 0x69, 0x63, 0x54, + 0x79, 0x70, 0x65, 0x52, 0x04, 0x74, 0x79, 0x70, 0x65, 0x12, 0x34, 0x0a, 0x06, 0x6d, 0x65, 0x74, + 0x72, 0x69, 0x63, 0x18, 0x04, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x1c, 0x2e, 0x69, 0x6f, 0x2e, 0x70, + 0x72, 0x6f, 0x6d, 0x65, 0x74, 0x68, 0x65, 0x75, 0x73, 0x2e, 0x63, 0x6c, 0x69, 0x65, 0x6e, 0x74, + 0x2e, 0x4d, 0x65, 0x74, 0x72, 0x69, 0x63, 0x52, 0x06, 0x6d, 0x65, 0x74, 0x72, 0x69, 0x63, 0x2a, + 0x62, 0x0a, 0x0a, 0x4d, 0x65, 0x74, 0x72, 0x69, 0x63, 0x54, 0x79, 0x70, 0x65, 0x12, 0x0b, 0x0a, + 0x07, 0x43, 0x4f, 0x55, 0x4e, 0x54, 0x45, 0x52, 0x10, 0x00, 0x12, 0x09, 0x0a, 0x05, 0x47, 0x41, + 0x55, 0x47, 0x45, 0x10, 0x01, 0x12, 0x0b, 0x0a, 0x07, 0x53, 0x55, 0x4d, 0x4d, 0x41, 0x52, 0x59, + 0x10, 0x02, 0x12, 0x0b, 0x0a, 0x07, 0x55, 0x4e, 0x54, 0x59, 0x50, 0x45, 0x44, 0x10, 0x03, 0x12, + 0x0d, 0x0a, 0x09, 0x48, 0x49, 0x53, 0x54, 0x4f, 0x47, 0x52, 0x41, 0x4d, 0x10, 0x04, 0x12, 0x13, + 0x0a, 0x0f, 0x47, 0x41, 0x55, 0x47, 0x45, 0x5f, 0x48, 0x49, 0x53, 0x54, 0x4f, 0x47, 0x52, 0x41, + 0x4d, 0x10, 0x05, 0x42, 0x52, 0x0a, 0x14, 0x69, 0x6f, 0x2e, 0x70, 0x72, 0x6f, 0x6d, 0x65, 0x74, + 0x68, 0x65, 0x75, 0x73, 0x2e, 0x63, 0x6c, 0x69, 0x65, 0x6e, 0x74, 0x5a, 0x3a, 0x67, 0x69, 0x74, + 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x70, 0x72, 0x6f, 0x6d, 0x65, 0x74, 0x68, 0x65, + 0x75, 0x73, 0x2f, 0x63, 0x6c, 0x69, 0x65, 0x6e, 0x74, 0x5f, 0x6d, 0x6f, 0x64, 0x65, 0x6c, 0x2f, + 0x67, 0x6f, 0x3b, 0x69, 0x6f, 0x5f, 0x70, 0x72, 0x6f, 0x6d, 0x65, 0x74, 0x68, 0x65, 0x75, 0x73, + 0x5f, 0x63, 0x6c, 0x69, 0x65, 0x6e, 0x74, +} + +var ( + file_io_prometheus_client_metrics_proto_rawDescOnce sync.Once + file_io_prometheus_client_metrics_proto_rawDescData = file_io_prometheus_client_metrics_proto_rawDesc +) + +func file_io_prometheus_client_metrics_proto_rawDescGZIP() []byte { + file_io_prometheus_client_metrics_proto_rawDescOnce.Do(func() { + 
file_io_prometheus_client_metrics_proto_rawDescData = protoimpl.X.CompressGZIP(file_io_prometheus_client_metrics_proto_rawDescData) + }) + return file_io_prometheus_client_metrics_proto_rawDescData +} + +var file_io_prometheus_client_metrics_proto_enumTypes = make([]protoimpl.EnumInfo, 1) +var file_io_prometheus_client_metrics_proto_msgTypes = make([]protoimpl.MessageInfo, 12) +var file_io_prometheus_client_metrics_proto_goTypes = []interface{}{ + (MetricType)(0), // 0: io.prometheus.client.MetricType + (*LabelPair)(nil), // 1: io.prometheus.client.LabelPair + (*Gauge)(nil), // 2: io.prometheus.client.Gauge + (*Counter)(nil), // 3: io.prometheus.client.Counter + (*Quantile)(nil), // 4: io.prometheus.client.Quantile + (*Summary)(nil), // 5: io.prometheus.client.Summary + (*Untyped)(nil), // 6: io.prometheus.client.Untyped + (*Histogram)(nil), // 7: io.prometheus.client.Histogram + (*Bucket)(nil), // 8: io.prometheus.client.Bucket + (*BucketSpan)(nil), // 9: io.prometheus.client.BucketSpan + (*Exemplar)(nil), // 10: io.prometheus.client.Exemplar + (*Metric)(nil), // 11: io.prometheus.client.Metric + (*MetricFamily)(nil), // 12: io.prometheus.client.MetricFamily + (*timestamppb.Timestamp)(nil), // 13: google.protobuf.Timestamp +} +var file_io_prometheus_client_metrics_proto_depIdxs = []int32{ + 10, // 0: io.prometheus.client.Counter.exemplar:type_name -> io.prometheus.client.Exemplar + 4, // 1: io.prometheus.client.Summary.quantile:type_name -> io.prometheus.client.Quantile + 8, // 2: io.prometheus.client.Histogram.bucket:type_name -> io.prometheus.client.Bucket + 9, // 3: io.prometheus.client.Histogram.negative_span:type_name -> io.prometheus.client.BucketSpan + 9, // 4: io.prometheus.client.Histogram.positive_span:type_name -> io.prometheus.client.BucketSpan + 10, // 5: io.prometheus.client.Bucket.exemplar:type_name -> io.prometheus.client.Exemplar + 1, // 6: io.prometheus.client.Exemplar.label:type_name -> io.prometheus.client.LabelPair + 13, // 7: 
io.prometheus.client.Exemplar.timestamp:type_name -> google.protobuf.Timestamp + 1, // 8: io.prometheus.client.Metric.label:type_name -> io.prometheus.client.LabelPair + 2, // 9: io.prometheus.client.Metric.gauge:type_name -> io.prometheus.client.Gauge + 3, // 10: io.prometheus.client.Metric.counter:type_name -> io.prometheus.client.Counter + 5, // 11: io.prometheus.client.Metric.summary:type_name -> io.prometheus.client.Summary + 6, // 12: io.prometheus.client.Metric.untyped:type_name -> io.prometheus.client.Untyped + 7, // 13: io.prometheus.client.Metric.histogram:type_name -> io.prometheus.client.Histogram + 0, // 14: io.prometheus.client.MetricFamily.type:type_name -> io.prometheus.client.MetricType + 11, // 15: io.prometheus.client.MetricFamily.metric:type_name -> io.prometheus.client.Metric + 16, // [16:16] is the sub-list for method output_type + 16, // [16:16] is the sub-list for method input_type + 16, // [16:16] is the sub-list for extension type_name + 16, // [16:16] is the sub-list for extension extendee + 0, // [0:16] is the sub-list for field type_name +} + +func init() { file_io_prometheus_client_metrics_proto_init() } +func file_io_prometheus_client_metrics_proto_init() { + if File_io_prometheus_client_metrics_proto != nil { + return + } + if !protoimpl.UnsafeEnabled { + file_io_prometheus_client_metrics_proto_msgTypes[0].Exporter = func(v interface{}, i int) interface{} { + switch v := v.(*LabelPair); i { + case 0: + return &v.state + case 1: + return &v.sizeCache + case 2: + return &v.unknownFields + default: + return nil + } + } + file_io_prometheus_client_metrics_proto_msgTypes[1].Exporter = func(v interface{}, i int) interface{} { + switch v := v.(*Gauge); i { + case 0: + return &v.state + case 1: + return &v.sizeCache + case 2: + return &v.unknownFields + default: + return nil + } + } + file_io_prometheus_client_metrics_proto_msgTypes[2].Exporter = func(v interface{}, i int) interface{} { + switch v := v.(*Counter); i { + case 0: + return 
&v.state + case 1: + return &v.sizeCache + case 2: + return &v.unknownFields + default: + return nil + } + } + file_io_prometheus_client_metrics_proto_msgTypes[3].Exporter = func(v interface{}, i int) interface{} { + switch v := v.(*Quantile); i { + case 0: + return &v.state + case 1: + return &v.sizeCache + case 2: + return &v.unknownFields + default: + return nil + } + } + file_io_prometheus_client_metrics_proto_msgTypes[4].Exporter = func(v interface{}, i int) interface{} { + switch v := v.(*Summary); i { + case 0: + return &v.state + case 1: + return &v.sizeCache + case 2: + return &v.unknownFields + default: + return nil + } + } + file_io_prometheus_client_metrics_proto_msgTypes[5].Exporter = func(v interface{}, i int) interface{} { + switch v := v.(*Untyped); i { + case 0: + return &v.state + case 1: + return &v.sizeCache + case 2: + return &v.unknownFields + default: + return nil + } + } + file_io_prometheus_client_metrics_proto_msgTypes[6].Exporter = func(v interface{}, i int) interface{} { + switch v := v.(*Histogram); i { + case 0: + return &v.state + case 1: + return &v.sizeCache + case 2: + return &v.unknownFields + default: + return nil + } + } + file_io_prometheus_client_metrics_proto_msgTypes[7].Exporter = func(v interface{}, i int) interface{} { + switch v := v.(*Bucket); i { + case 0: + return &v.state + case 1: + return &v.sizeCache + case 2: + return &v.unknownFields + default: + return nil + } + } + file_io_prometheus_client_metrics_proto_msgTypes[8].Exporter = func(v interface{}, i int) interface{} { + switch v := v.(*BucketSpan); i { + case 0: + return &v.state + case 1: + return &v.sizeCache + case 2: + return &v.unknownFields + default: + return nil + } + } + file_io_prometheus_client_metrics_proto_msgTypes[9].Exporter = func(v interface{}, i int) interface{} { + switch v := v.(*Exemplar); i { + case 0: + return &v.state + case 1: + return &v.sizeCache + case 2: + return &v.unknownFields + default: + return nil + } + } + 
file_io_prometheus_client_metrics_proto_msgTypes[10].Exporter = func(v interface{}, i int) interface{} { + switch v := v.(*Metric); i { + case 0: + return &v.state + case 1: + return &v.sizeCache + case 2: + return &v.unknownFields + default: + return nil + } + } + file_io_prometheus_client_metrics_proto_msgTypes[11].Exporter = func(v interface{}, i int) interface{} { + switch v := v.(*MetricFamily); i { + case 0: + return &v.state + case 1: + return &v.sizeCache + case 2: + return &v.unknownFields + default: + return nil + } + } + } + type x struct{} + out := protoimpl.TypeBuilder{ + File: protoimpl.DescBuilder{ + GoPackagePath: reflect.TypeOf(x{}).PkgPath(), + RawDescriptor: file_io_prometheus_client_metrics_proto_rawDesc, + NumEnums: 1, + NumMessages: 12, + NumExtensions: 0, + NumServices: 0, + }, + GoTypes: file_io_prometheus_client_metrics_proto_goTypes, + DependencyIndexes: file_io_prometheus_client_metrics_proto_depIdxs, + EnumInfos: file_io_prometheus_client_metrics_proto_enumTypes, + MessageInfos: file_io_prometheus_client_metrics_proto_msgTypes, + }.Build() + File_io_prometheus_client_metrics_proto = out.File + file_io_prometheus_client_metrics_proto_rawDesc = nil + file_io_prometheus_client_metrics_proto_goTypes = nil + file_io_prometheus_client_metrics_proto_depIdxs = nil } diff --git a/cluster-autoscaler/vendor/github.com/prometheus/common/expfmt/decode.go b/cluster-autoscaler/vendor/github.com/prometheus/common/expfmt/decode.go index f4fc88455221..906397815138 100644 --- a/cluster-autoscaler/vendor/github.com/prometheus/common/expfmt/decode.go +++ b/cluster-autoscaler/vendor/github.com/prometheus/common/expfmt/decode.go @@ -132,7 +132,10 @@ func (d *textDecoder) Decode(v *dto.MetricFamily) error { } // Pick off one MetricFamily per Decode until there's nothing left. 
for key, fam := range d.fams { - *v = *fam + v.Name = fam.Name + v.Help = fam.Help + v.Type = fam.Type + v.Metric = fam.Metric delete(d.fams, key) return nil } diff --git a/cluster-autoscaler/vendor/github.com/prometheus/common/expfmt/encode.go b/cluster-autoscaler/vendor/github.com/prometheus/common/expfmt/encode.go index 64dc0eb40c28..7f611ffaad7c 100644 --- a/cluster-autoscaler/vendor/github.com/prometheus/common/expfmt/encode.go +++ b/cluster-autoscaler/vendor/github.com/prometheus/common/expfmt/encode.go @@ -18,9 +18,9 @@ import ( "io" "net/http" - "github.com/golang/protobuf/proto" //nolint:staticcheck // Ignore SA1019. Need to keep deprecated package for compatibility. "github.com/matttproud/golang_protobuf_extensions/pbutil" "github.com/prometheus/common/internal/bitbucket.org/ww/goautoneg" + "google.golang.org/protobuf/encoding/prototext" dto "github.com/prometheus/client_model/go" ) @@ -99,8 +99,11 @@ func NegotiateIncludingOpenMetrics(h http.Header) Format { if ac.Type == "text" && ac.SubType == "plain" && (ver == TextVersion || ver == "") { return FmtText } - if ac.Type+"/"+ac.SubType == OpenMetricsType && (ver == OpenMetricsVersion || ver == "") { - return FmtOpenMetrics + if ac.Type+"/"+ac.SubType == OpenMetricsType && (ver == OpenMetricsVersion_0_0_1 || ver == OpenMetricsVersion_1_0_0 || ver == "") { + if ver == OpenMetricsVersion_1_0_0 { + return FmtOpenMetrics_1_0_0 + } + return FmtOpenMetrics_0_0_1 } } return FmtText @@ -133,7 +136,7 @@ func NewEncoder(w io.Writer, format Format) Encoder { case FmtProtoText: return encoderCloser{ encode: func(v *dto.MetricFamily) error { - _, err := fmt.Fprintln(w, proto.MarshalTextString(v)) + _, err := fmt.Fprintln(w, prototext.Format(v)) return err }, close: func() error { return nil }, @@ -146,7 +149,7 @@ func NewEncoder(w io.Writer, format Format) Encoder { }, close: func() error { return nil }, } - case FmtOpenMetrics: + case FmtOpenMetrics_0_0_1, FmtOpenMetrics_1_0_0: return encoderCloser{ encode: func(v 
*dto.MetricFamily) error { _, err := MetricFamilyToOpenMetrics(w, v) diff --git a/cluster-autoscaler/vendor/github.com/prometheus/common/expfmt/expfmt.go b/cluster-autoscaler/vendor/github.com/prometheus/common/expfmt/expfmt.go index 0f176fa64f25..c4cb20f0d3ef 100644 --- a/cluster-autoscaler/vendor/github.com/prometheus/common/expfmt/expfmt.go +++ b/cluster-autoscaler/vendor/github.com/prometheus/common/expfmt/expfmt.go @@ -19,20 +19,22 @@ type Format string // Constants to assemble the Content-Type values for the different wire protocols. const ( - TextVersion = "0.0.4" - ProtoType = `application/vnd.google.protobuf` - ProtoProtocol = `io.prometheus.client.MetricFamily` - ProtoFmt = ProtoType + "; proto=" + ProtoProtocol + ";" - OpenMetricsType = `application/openmetrics-text` - OpenMetricsVersion = "0.0.1" + TextVersion = "0.0.4" + ProtoType = `application/vnd.google.protobuf` + ProtoProtocol = `io.prometheus.client.MetricFamily` + ProtoFmt = ProtoType + "; proto=" + ProtoProtocol + ";" + OpenMetricsType = `application/openmetrics-text` + OpenMetricsVersion_0_0_1 = "0.0.1" + OpenMetricsVersion_1_0_0 = "1.0.0" // The Content-Type values for the different wire protocols. 
- FmtUnknown Format = `` - FmtText Format = `text/plain; version=` + TextVersion + `; charset=utf-8` - FmtProtoDelim Format = ProtoFmt + ` encoding=delimited` - FmtProtoText Format = ProtoFmt + ` encoding=text` - FmtProtoCompact Format = ProtoFmt + ` encoding=compact-text` - FmtOpenMetrics Format = OpenMetricsType + `; version=` + OpenMetricsVersion + `; charset=utf-8` + FmtUnknown Format = `` + FmtText Format = `text/plain; version=` + TextVersion + `; charset=utf-8` + FmtProtoDelim Format = ProtoFmt + ` encoding=delimited` + FmtProtoText Format = ProtoFmt + ` encoding=text` + FmtProtoCompact Format = ProtoFmt + ` encoding=compact-text` + FmtOpenMetrics_1_0_0 Format = OpenMetricsType + `; version=` + OpenMetricsVersion_1_0_0 + `; charset=utf-8` + FmtOpenMetrics_0_0_1 Format = OpenMetricsType + `; version=` + OpenMetricsVersion_0_0_1 + `; charset=utf-8` ) const ( diff --git a/cluster-autoscaler/vendor/github.com/prometheus/common/expfmt/text_parse.go b/cluster-autoscaler/vendor/github.com/prometheus/common/expfmt/text_parse.go index ac2482782c7b..35db1cc9d73c 100644 --- a/cluster-autoscaler/vendor/github.com/prometheus/common/expfmt/text_parse.go +++ b/cluster-autoscaler/vendor/github.com/prometheus/common/expfmt/text_parse.go @@ -24,8 +24,8 @@ import ( dto "github.com/prometheus/client_model/go" - "github.com/golang/protobuf/proto" //nolint:staticcheck // Ignore SA1019. Need to keep deprecated package for compatibility. "github.com/prometheus/common/model" + "google.golang.org/protobuf/proto" ) // A stateFn is a function that represents a state in a state machine. 
By diff --git a/cluster-autoscaler/vendor/github.com/prometheus/procfs/Makefile.common b/cluster-autoscaler/vendor/github.com/prometheus/procfs/Makefile.common index e358db69c5d3..b111d2562000 100644 --- a/cluster-autoscaler/vendor/github.com/prometheus/procfs/Makefile.common +++ b/cluster-autoscaler/vendor/github.com/prometheus/procfs/Makefile.common @@ -61,7 +61,7 @@ PROMU_URL := https://github.com/prometheus/promu/releases/download/v$(PROMU_ SKIP_GOLANGCI_LINT := GOLANGCI_LINT := GOLANGCI_LINT_OPTS ?= -GOLANGCI_LINT_VERSION ?= v1.49.0 +GOLANGCI_LINT_VERSION ?= v1.51.2 # golangci-lint only supports linux, darwin and windows platforms on i386/amd64. # windows isn't included here because of the path separator being different. ifeq ($(GOHOSTOS),$(filter $(GOHOSTOS),linux darwin)) @@ -91,6 +91,8 @@ BUILD_DOCKER_ARCHS = $(addprefix common-docker-,$(DOCKER_ARCHS)) PUBLISH_DOCKER_ARCHS = $(addprefix common-docker-publish-,$(DOCKER_ARCHS)) TAG_DOCKER_ARCHS = $(addprefix common-docker-tag-latest-,$(DOCKER_ARCHS)) +SANITIZED_DOCKER_IMAGE_TAG := $(subst +,-,$(DOCKER_IMAGE_TAG)) + ifeq ($(GOHOSTARCH),amd64) ifeq ($(GOHOSTOS),$(filter $(GOHOSTOS),linux freebsd darwin windows)) # Only supported on amd64 @@ -205,7 +207,7 @@ common-tarball: promu .PHONY: common-docker $(BUILD_DOCKER_ARCHS) common-docker: $(BUILD_DOCKER_ARCHS) $(BUILD_DOCKER_ARCHS): common-docker-%: - docker build -t "$(DOCKER_REPO)/$(DOCKER_IMAGE_NAME)-linux-$*:$(DOCKER_IMAGE_TAG)" \ + docker build -t "$(DOCKER_REPO)/$(DOCKER_IMAGE_NAME)-linux-$*:$(SANITIZED_DOCKER_IMAGE_TAG)" \ -f $(DOCKERFILE_PATH) \ --build-arg ARCH="$*" \ --build-arg OS="linux" \ @@ -214,19 +216,19 @@ $(BUILD_DOCKER_ARCHS): common-docker-%: .PHONY: common-docker-publish $(PUBLISH_DOCKER_ARCHS) common-docker-publish: $(PUBLISH_DOCKER_ARCHS) $(PUBLISH_DOCKER_ARCHS): common-docker-publish-%: - docker push "$(DOCKER_REPO)/$(DOCKER_IMAGE_NAME)-linux-$*:$(DOCKER_IMAGE_TAG)" + docker push 
"$(DOCKER_REPO)/$(DOCKER_IMAGE_NAME)-linux-$*:$(SANITIZED_DOCKER_IMAGE_TAG)" DOCKER_MAJOR_VERSION_TAG = $(firstword $(subst ., ,$(shell cat VERSION))) .PHONY: common-docker-tag-latest $(TAG_DOCKER_ARCHS) common-docker-tag-latest: $(TAG_DOCKER_ARCHS) $(TAG_DOCKER_ARCHS): common-docker-tag-latest-%: - docker tag "$(DOCKER_REPO)/$(DOCKER_IMAGE_NAME)-linux-$*:$(DOCKER_IMAGE_TAG)" "$(DOCKER_REPO)/$(DOCKER_IMAGE_NAME)-linux-$*:latest" - docker tag "$(DOCKER_REPO)/$(DOCKER_IMAGE_NAME)-linux-$*:$(DOCKER_IMAGE_TAG)" "$(DOCKER_REPO)/$(DOCKER_IMAGE_NAME)-linux-$*:v$(DOCKER_MAJOR_VERSION_TAG)" + docker tag "$(DOCKER_REPO)/$(DOCKER_IMAGE_NAME)-linux-$*:$(SANITIZED_DOCKER_IMAGE_TAG)" "$(DOCKER_REPO)/$(DOCKER_IMAGE_NAME)-linux-$*:latest" + docker tag "$(DOCKER_REPO)/$(DOCKER_IMAGE_NAME)-linux-$*:$(SANITIZED_DOCKER_IMAGE_TAG)" "$(DOCKER_REPO)/$(DOCKER_IMAGE_NAME)-linux-$*:v$(DOCKER_MAJOR_VERSION_TAG)" .PHONY: common-docker-manifest common-docker-manifest: - DOCKER_CLI_EXPERIMENTAL=enabled docker manifest create -a "$(DOCKER_REPO)/$(DOCKER_IMAGE_NAME):$(DOCKER_IMAGE_TAG)" $(foreach ARCH,$(DOCKER_ARCHS),$(DOCKER_REPO)/$(DOCKER_IMAGE_NAME)-linux-$(ARCH):$(DOCKER_IMAGE_TAG)) - DOCKER_CLI_EXPERIMENTAL=enabled docker manifest push "$(DOCKER_REPO)/$(DOCKER_IMAGE_NAME):$(DOCKER_IMAGE_TAG)" + DOCKER_CLI_EXPERIMENTAL=enabled docker manifest create -a "$(DOCKER_REPO)/$(DOCKER_IMAGE_NAME):$(SANITIZED_DOCKER_IMAGE_TAG)" $(foreach ARCH,$(DOCKER_ARCHS),$(DOCKER_REPO)/$(DOCKER_IMAGE_NAME)-linux-$(ARCH):$(SANITIZED_DOCKER_IMAGE_TAG)) + DOCKER_CLI_EXPERIMENTAL=enabled docker manifest push "$(DOCKER_REPO)/$(DOCKER_IMAGE_NAME):$(SANITIZED_DOCKER_IMAGE_TAG)" .PHONY: promu promu: $(PROMU) diff --git a/cluster-autoscaler/vendor/github.com/prometheus/procfs/fs.go b/cluster-autoscaler/vendor/github.com/prometheus/procfs/fs.go index 0102ab0fd856..60c551e026bf 100644 --- a/cluster-autoscaler/vendor/github.com/prometheus/procfs/fs.go +++ b/cluster-autoscaler/vendor/github.com/prometheus/procfs/fs.go @@ -21,6 
+21,7 @@ import ( // kernel data structures. type FS struct { proc fs.FS + real bool } // DefaultMountPoint is the common mount point of the proc filesystem. @@ -39,5 +40,11 @@ func NewFS(mountPoint string) (FS, error) { if err != nil { return FS{}, err } - return FS{fs}, nil + + real, err := isRealProc(mountPoint) + if err != nil { + return FS{}, err + } + + return FS{fs, real}, nil } diff --git a/cluster-autoscaler/vendor/github.com/prometheus/procfs/fs_statfs_notype.go b/cluster-autoscaler/vendor/github.com/prometheus/procfs/fs_statfs_notype.go new file mode 100644 index 000000000000..800576968966 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/prometheus/procfs/fs_statfs_notype.go @@ -0,0 +1,23 @@ +// Copyright 2018 The Prometheus Authors +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. 
+ +//go:build netbsd || openbsd || solaris || windows +// +build netbsd openbsd solaris windows + +package procfs + +// isRealProc returns true on architectures that don't have a Type argument +// in their Statfs_t struct +func isRealProc(mountPoint string) (bool, error) { + return true, nil +} diff --git a/cluster-autoscaler/vendor/github.com/prometheus/procfs/fs_statfs_type.go b/cluster-autoscaler/vendor/github.com/prometheus/procfs/fs_statfs_type.go new file mode 100644 index 000000000000..6233217ad292 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/prometheus/procfs/fs_statfs_type.go @@ -0,0 +1,33 @@ +// Copyright 2018 The Prometheus Authors +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +//go:build !netbsd && !openbsd && !solaris && !windows +// +build !netbsd,!openbsd,!solaris,!windows + +package procfs + +import ( + "syscall" +) + +// isRealProc determines whether supplied mountpoint is really a proc filesystem. 
+func isRealProc(mountPoint string) (bool, error) { + stat := syscall.Statfs_t{} + err := syscall.Statfs(mountPoint, &stat) + if err != nil { + return false, err + } + + // 0x9fa0 is PROC_SUPER_MAGIC: https://elixir.bootlin.com/linux/v6.1/source/include/uapi/linux/magic.h#L87 + return stat.Type == 0x9fa0, nil +} diff --git a/cluster-autoscaler/vendor/github.com/prometheus/procfs/internal/util/parse.go b/cluster-autoscaler/vendor/github.com/prometheus/procfs/internal/util/parse.go index b030951faf98..14272dc78857 100644 --- a/cluster-autoscaler/vendor/github.com/prometheus/procfs/internal/util/parse.go +++ b/cluster-autoscaler/vendor/github.com/prometheus/procfs/internal/util/parse.go @@ -64,6 +64,21 @@ func ParsePInt64s(ss []string) ([]*int64, error) { return us, nil } +// Parses a uint64 from given hex in string. +func ParseHexUint64s(ss []string) ([]*uint64, error) { + us := make([]*uint64, 0, len(ss)) + for _, s := range ss { + u, err := strconv.ParseUint(s, 16, 64) + if err != nil { + return nil, err + } + + us = append(us, &u) + } + + return us, nil +} + // ReadUintFromFile reads a file and attempts to parse a uint64 from it. func ReadUintFromFile(path string) (uint64, error) { data, err := os.ReadFile(path) diff --git a/cluster-autoscaler/vendor/github.com/prometheus/procfs/mountstats.go b/cluster-autoscaler/vendor/github.com/prometheus/procfs/mountstats.go index 0c482c18ccfe..7f68890cff16 100644 --- a/cluster-autoscaler/vendor/github.com/prometheus/procfs/mountstats.go +++ b/cluster-autoscaler/vendor/github.com/prometheus/procfs/mountstats.go @@ -186,6 +186,8 @@ type NFSOperationStats struct { CumulativeTotalResponseMilliseconds uint64 // Duration from when a request was enqueued to when it was completely handled. CumulativeTotalRequestMilliseconds uint64 + // The average time from the point the client sends RPC requests until it receives the response. + AverageRTTMilliseconds float64 // The count of operations that complete with tk_status < 0. 
These statuses usually indicate error conditions. Errors uint64 } @@ -534,7 +536,6 @@ func parseNFSOperationStats(s *bufio.Scanner) ([]NFSOperationStats, error) { ns = append(ns, n) } - opStats := NFSOperationStats{ Operation: strings.TrimSuffix(ss[0], ":"), Requests: ns[0], @@ -546,6 +547,9 @@ func parseNFSOperationStats(s *bufio.Scanner) ([]NFSOperationStats, error) { CumulativeTotalResponseMilliseconds: ns[6], CumulativeTotalRequestMilliseconds: ns[7], } + if ns[0] != 0 { + opStats.AverageRTTMilliseconds = float64(ns[6]) / float64(ns[0]) + } if len(ns) > 8 { opStats.Errors = ns[8] diff --git a/cluster-autoscaler/vendor/github.com/prometheus/procfs/net_conntrackstat.go b/cluster-autoscaler/vendor/github.com/prometheus/procfs/net_conntrackstat.go index 8300daca0545..64a0e946068c 100644 --- a/cluster-autoscaler/vendor/github.com/prometheus/procfs/net_conntrackstat.go +++ b/cluster-autoscaler/vendor/github.com/prometheus/procfs/net_conntrackstat.go @@ -18,7 +18,6 @@ import ( "bytes" "fmt" "io" - "strconv" "strings" "github.com/prometheus/procfs/internal/util" @@ -28,9 +27,13 @@ import ( // and contains netfilter conntrack statistics at one CPU core. type ConntrackStatEntry struct { Entries uint64 + Searched uint64 Found uint64 + New uint64 Invalid uint64 Ignore uint64 + Delete uint64 + DeleteList uint64 Insert uint64 InsertFailed uint64 Drop uint64 @@ -81,73 +84,34 @@ func parseConntrackStat(r io.Reader) ([]ConntrackStatEntry, error) { // Parses a ConntrackStatEntry from given array of fields. 
func parseConntrackStatEntry(fields []string) (*ConntrackStatEntry, error) { - if len(fields) != 17 { - return nil, fmt.Errorf("invalid conntrackstat entry, missing fields") - } - entry := &ConntrackStatEntry{} - - entries, err := parseConntrackStatField(fields[0]) - if err != nil { - return nil, err - } - entry.Entries = entries - - found, err := parseConntrackStatField(fields[2]) - if err != nil { - return nil, err - } - entry.Found = found - - invalid, err := parseConntrackStatField(fields[4]) - if err != nil { - return nil, err - } - entry.Invalid = invalid - - ignore, err := parseConntrackStatField(fields[5]) - if err != nil { - return nil, err - } - entry.Ignore = ignore - - insert, err := parseConntrackStatField(fields[8]) + entries, err := util.ParseHexUint64s(fields) if err != nil { - return nil, err + return nil, fmt.Errorf("invalid conntrackstat entry, couldn't parse fields: %s", err) } - entry.Insert = insert - - insertFailed, err := parseConntrackStatField(fields[9]) - if err != nil { - return nil, err + numEntries := len(entries) + if numEntries < 16 || numEntries > 17 { + return nil, fmt.Errorf("invalid conntrackstat entry, invalid number of fields: %d", numEntries) } - entry.InsertFailed = insertFailed - drop, err := parseConntrackStatField(fields[10]) - if err != nil { - return nil, err + stats := &ConntrackStatEntry{ + Entries: *entries[0], + Searched: *entries[1], + Found: *entries[2], + New: *entries[3], + Invalid: *entries[4], + Ignore: *entries[5], + Delete: *entries[6], + DeleteList: *entries[7], + Insert: *entries[8], + InsertFailed: *entries[9], + Drop: *entries[10], + EarlyDrop: *entries[11], } - entry.Drop = drop - earlyDrop, err := parseConntrackStatField(fields[11]) - if err != nil { - return nil, err + // Ignore missing search_restart on Linux < 2.6.35. 
+ if numEntries == 17 { + stats.SearchRestart = *entries[16] } - entry.EarlyDrop = earlyDrop - searchRestart, err := parseConntrackStatField(fields[16]) - if err != nil { - return nil, err - } - entry.SearchRestart = searchRestart - - return entry, nil -} - -// Parses a uint64 from given hex in string. -func parseConntrackStatField(field string) (uint64, error) { - val, err := strconv.ParseUint(field, 16, 64) - if err != nil { - return 0, fmt.Errorf("couldn't parse %q field: %w", field, err) - } - return val, err + return stats, nil } diff --git a/cluster-autoscaler/vendor/github.com/prometheus/procfs/net_softnet.go b/cluster-autoscaler/vendor/github.com/prometheus/procfs/net_softnet.go index 06b7b8f21638..540cea52c6f7 100644 --- a/cluster-autoscaler/vendor/github.com/prometheus/procfs/net_softnet.go +++ b/cluster-autoscaler/vendor/github.com/prometheus/procfs/net_softnet.go @@ -76,6 +76,7 @@ func parseSoftnet(r io.Reader) ([]SoftnetStat, error) { s := bufio.NewScanner(r) var stats []SoftnetStat + cpuIndex := 0 for s.Scan() { columns := strings.Fields(s.Text()) width := len(columns) @@ -127,9 +128,13 @@ func parseSoftnet(r io.Reader) ([]SoftnetStat, error) { softnetStat.SoftnetBacklogLen = us[0] softnetStat.Index = us[1] + } else { + // For older kernels, create the Index based on the scan line number. + softnetStat.Index = uint32(cpuIndex) } softnetStat.Width = width stats = append(stats, softnetStat) + cpuIndex++ } return stats, nil diff --git a/cluster-autoscaler/vendor/github.com/prometheus/procfs/net_wireless.go b/cluster-autoscaler/vendor/github.com/prometheus/procfs/net_wireless.go new file mode 100644 index 000000000000..c80fb154247c --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/prometheus/procfs/net_wireless.go @@ -0,0 +1,182 @@ +// Copyright 2023 The Prometheus Authors +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. 
+// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package procfs + +import ( + "bufio" + "bytes" + "fmt" + "io" + "strconv" + "strings" + + "github.com/prometheus/procfs/internal/util" +) + +// Wireless models the content of /proc/net/wireless. +type Wireless struct { + Name string + + // Status is the current 4-digit hex value status of the interface. + Status uint64 + + // QualityLink is the link quality. + QualityLink int + + // QualityLevel is the signal gain (dBm). + QualityLevel int + + // QualityNoise is the signal noise baseline (dBm). + QualityNoise int + + // DiscardedNwid is the number of discarded packets with wrong nwid/essid. + DiscardedNwid int + + // DiscardedCrypt is the number of discarded packets with wrong code/decode (WEP). + DiscardedCrypt int + + // DiscardedFrag is the number of discarded packets that can't perform MAC reassembly. + DiscardedFrag int + + // DiscardedRetry is the number of discarded packets that reached max MAC retries. + DiscardedRetry int + + // DiscardedMisc is the number of discarded packets for other reasons. + DiscardedMisc int + + // MissedBeacon is the number of missed beacons/superframe. + MissedBeacon int +} + +// Wireless returns kernel wireless statistics. +func (fs FS) Wireless() ([]*Wireless, error) { + b, err := util.ReadFileNoStat(fs.proc.Path("net/wireless")) + if err != nil { + return nil, err + } + + m, err := parseWireless(bytes.NewReader(b)) + if err != nil { + return nil, fmt.Errorf("failed to parse wireless: %w", err) + } + + return m, nil +} + +// parseWireless parses the contents of /proc/net/wireless. 
+/* +Inter-| sta-| Quality | Discarded packets | Missed | WE +face | tus | link level noise | nwid crypt frag retry misc | beacon | 22 + eth1: 0000 5. -256. -10. 0 1 0 3 0 0 + eth2: 0000 5. -256. -20. 0 2 0 4 0 0 +*/ +func parseWireless(r io.Reader) ([]*Wireless, error) { + var ( + interfaces []*Wireless + scanner = bufio.NewScanner(r) + ) + + for n := 0; scanner.Scan(); n++ { + // Skip the 2 header lines. + if n < 2 { + continue + } + + line := scanner.Text() + + parts := strings.Split(line, ":") + if len(parts) != 2 { + return nil, fmt.Errorf("expected 2 parts after splitting line by ':', got %d for line %q", len(parts), line) + } + + name := strings.TrimSpace(parts[0]) + stats := strings.Fields(parts[1]) + + if len(stats) < 10 { + return nil, fmt.Errorf("invalid number of fields in line %d, expected at least 10, got %d: %q", n, len(stats), line) + } + + status, err := strconv.ParseUint(stats[0], 16, 16) + if err != nil { + return nil, fmt.Errorf("invalid status in line %d: %q", n, line) + } + + qlink, err := strconv.Atoi(strings.TrimSuffix(stats[1], ".")) + if err != nil { + return nil, fmt.Errorf("failed to parse Quality:link as integer %q: %w", qlink, err) + } + + qlevel, err := strconv.Atoi(strings.TrimSuffix(stats[2], ".")) + if err != nil { + return nil, fmt.Errorf("failed to parse Quality:level as integer %q: %w", qlevel, err) + } + + qnoise, err := strconv.Atoi(strings.TrimSuffix(stats[3], ".")) + if err != nil { + return nil, fmt.Errorf("failed to parse Quality:noise as integer %q: %w", qnoise, err) + } + + dnwid, err := strconv.Atoi(stats[4]) + if err != nil { + return nil, fmt.Errorf("failed to parse Discarded:nwid as integer %q: %w", dnwid, err) + } + + dcrypt, err := strconv.Atoi(stats[5]) + if err != nil { + return nil, fmt.Errorf("failed to parse Discarded:crypt as integer %q: %w", dcrypt, err) + } + + dfrag, err := strconv.Atoi(stats[6]) + if err != nil { + return nil, fmt.Errorf("failed to parse Discarded:frag as integer %q: %w", dfrag, err) + } 
+ + dretry, err := strconv.Atoi(stats[7]) + if err != nil { + return nil, fmt.Errorf("failed to parse Discarded:retry as integer %q: %w", dretry, err) + } + + dmisc, err := strconv.Atoi(stats[8]) + if err != nil { + return nil, fmt.Errorf("failed to parse Discarded:misc as integer %q: %w", dmisc, err) + } + + mbeacon, err := strconv.Atoi(stats[9]) + if err != nil { + return nil, fmt.Errorf("failed to parse Missed:beacon as integer %q: %w", mbeacon, err) + } + + w := &Wireless{ + Name: name, + Status: status, + QualityLink: qlink, + QualityLevel: qlevel, + QualityNoise: qnoise, + DiscardedNwid: dnwid, + DiscardedCrypt: dcrypt, + DiscardedFrag: dfrag, + DiscardedRetry: dretry, + DiscardedMisc: dmisc, + MissedBeacon: mbeacon, + } + + interfaces = append(interfaces, w) + } + + if err := scanner.Err(); err != nil { + return nil, fmt.Errorf("failed to scan /proc/net/wireless: %w", err) + } + + return interfaces, nil +} diff --git a/cluster-autoscaler/vendor/github.com/prometheus/procfs/netstat.go b/cluster-autoscaler/vendor/github.com/prometheus/procfs/netstat.go index 5cc40aef55bf..742dff453ba8 100644 --- a/cluster-autoscaler/vendor/github.com/prometheus/procfs/netstat.go +++ b/cluster-autoscaler/vendor/github.com/prometheus/procfs/netstat.go @@ -15,7 +15,6 @@ package procfs import ( "bufio" - "io" "os" "path/filepath" "strconv" @@ -38,12 +37,7 @@ func (fs FS) NetStat() ([]NetStat, error) { var netStatsTotal []NetStat for _, filePath := range statFiles { - file, err := os.Open(filePath) - if err != nil { - return nil, err - } - - procNetstat, err := parseNetstat(file) + procNetstat, err := parseNetstat(filePath) if err != nil { return nil, err } @@ -56,14 +50,17 @@ func (fs FS) NetStat() ([]NetStat, error) { // parseNetstat parses the metrics from `/proc/net/stat/` file // and returns a NetStat structure. 
-func parseNetstat(r io.Reader) (NetStat, error) { - var ( - scanner = bufio.NewScanner(r) - netStat = NetStat{ - Stats: make(map[string][]uint64), - } - ) +func parseNetstat(filePath string) (NetStat, error) { + netStat := NetStat{ + Stats: make(map[string][]uint64), + } + file, err := os.Open(filePath) + if err != nil { + return netStat, err + } + defer file.Close() + scanner := bufio.NewScanner(file) scanner.Scan() // First string is always a header for stats diff --git a/cluster-autoscaler/vendor/github.com/prometheus/procfs/proc.go b/cluster-autoscaler/vendor/github.com/prometheus/procfs/proc.go index c30223af72ad..48f39dafd2aa 100644 --- a/cluster-autoscaler/vendor/github.com/prometheus/procfs/proc.go +++ b/cluster-autoscaler/vendor/github.com/prometheus/procfs/proc.go @@ -21,7 +21,6 @@ import ( "strconv" "strings" - "github.com/prometheus/procfs/internal/fs" "github.com/prometheus/procfs/internal/util" ) @@ -30,7 +29,7 @@ type Proc struct { // The process ID. PID int - fs fs.FS + fs FS } // Procs represents a list of Proc structs. @@ -92,7 +91,7 @@ func (fs FS) Proc(pid int) (Proc, error) { if _, err := os.Stat(fs.proc.Path(strconv.Itoa(pid))); err != nil { return Proc{}, err } - return Proc{PID: pid, fs: fs.proc}, nil + return Proc{PID: pid, fs: fs}, nil } // AllProcs returns a list of all currently available processes. @@ -114,7 +113,7 @@ func (fs FS) AllProcs() (Procs, error) { if err != nil { continue } - p = append(p, Proc{PID: int(pid), fs: fs.proc}) + p = append(p, Proc{PID: int(pid), fs: fs}) } return p, nil @@ -237,6 +236,19 @@ func (p Proc) FileDescriptorTargets() ([]string, error) { // FileDescriptorsLen returns the number of currently open file descriptors of // a process. 
func (p Proc) FileDescriptorsLen() (int, error) { + // Use fast path if available (Linux v6.2): https://github.com/torvalds/linux/commit/f1f1f2569901 + if p.fs.real { + stat, err := os.Stat(p.path("fd")) + if err != nil { + return 0, err + } + + size := stat.Size() + if size > 0 { + return int(size), nil + } + } + fds, err := p.fileDescriptors() if err != nil { return 0, err @@ -285,7 +297,7 @@ func (p Proc) fileDescriptors() ([]string, error) { } func (p Proc) path(pa ...string) string { - return p.fs.Path(append([]string{strconv.Itoa(p.PID)}, pa...)...) + return p.fs.proc.Path(append([]string{strconv.Itoa(p.PID)}, pa...)...) } // FileDescriptorsInfo retrieves information about all file descriptors of diff --git a/cluster-autoscaler/vendor/github.com/prometheus/procfs/proc_stat.go b/cluster-autoscaler/vendor/github.com/prometheus/procfs/proc_stat.go index b278eb2c2df7..14b249f4fc66 100644 --- a/cluster-autoscaler/vendor/github.com/prometheus/procfs/proc_stat.go +++ b/cluster-autoscaler/vendor/github.com/prometheus/procfs/proc_stat.go @@ -18,7 +18,6 @@ import ( "fmt" "os" - "github.com/prometheus/procfs/internal/fs" "github.com/prometheus/procfs/internal/util" ) @@ -112,7 +111,7 @@ type ProcStat struct { // Aggregated block I/O delays, measured in clock ticks (centiseconds). DelayAcctBlkIOTicks uint64 - proc fs.FS + proc FS } // NewStat returns the current status information of the process. @@ -210,8 +209,7 @@ func (s ProcStat) ResidentMemory() int { // StartTime returns the unix timestamp of the process in seconds. 
func (s ProcStat) StartTime() (float64, error) { - fs := FS{proc: s.proc} - stat, err := fs.Stat() + stat, err := s.proc.Stat() if err != nil { return 0, err } diff --git a/cluster-autoscaler/vendor/github.com/prometheus/procfs/proc_status.go b/cluster-autoscaler/vendor/github.com/prometheus/procfs/proc_status.go index 3d8c06439a93..c055d075db00 100644 --- a/cluster-autoscaler/vendor/github.com/prometheus/procfs/proc_status.go +++ b/cluster-autoscaler/vendor/github.com/prometheus/procfs/proc_status.go @@ -15,6 +15,7 @@ package procfs import ( "bytes" + "sort" "strconv" "strings" @@ -76,6 +77,9 @@ type ProcStatus struct { UIDs [4]string // GIDs of the process (Real, effective, saved set, and filesystem GIDs) GIDs [4]string + + // CpusAllowedList: List of cpu cores processes are allowed to run on. + CpusAllowedList []uint64 } // NewStatus returns the current status information of the process. @@ -161,10 +165,38 @@ func (s *ProcStatus) fillStatus(k string, vString string, vUint uint64, vUintByt s.VoluntaryCtxtSwitches = vUint case "nonvoluntary_ctxt_switches": s.NonVoluntaryCtxtSwitches = vUint + case "Cpus_allowed_list": + s.CpusAllowedList = calcCpusAllowedList(vString) } + } // TotalCtxtSwitches returns the total context switch. 
func (s ProcStatus) TotalCtxtSwitches() uint64 { return s.VoluntaryCtxtSwitches + s.NonVoluntaryCtxtSwitches } + +func calcCpusAllowedList(cpuString string) []uint64 { + s := strings.Split(cpuString, ",") + + var g []uint64 + + for _, cpu := range s { + // parse cpu ranges, example: 1-3=[1,2,3] + if l := strings.Split(strings.TrimSpace(cpu), "-"); len(l) > 1 { + startCPU, _ := strconv.ParseUint(l[0], 10, 64) + endCPU, _ := strconv.ParseUint(l[1], 10, 64) + + for i := startCPU; i <= endCPU; i++ { + g = append(g, i) + } + } else if len(l) == 1 { + cpu, _ := strconv.ParseUint(l[0], 10, 64) + g = append(g, cpu) + } + + } + + sort.Slice(g, func(i, j int) bool { return g[i] < g[j] }) + return g +} diff --git a/cluster-autoscaler/vendor/github.com/prometheus/procfs/thread.go b/cluster-autoscaler/vendor/github.com/prometheus/procfs/thread.go index f08bfc769db1..490c14708d43 100644 --- a/cluster-autoscaler/vendor/github.com/prometheus/procfs/thread.go +++ b/cluster-autoscaler/vendor/github.com/prometheus/procfs/thread.go @@ -54,7 +54,8 @@ func (fs FS) AllThreads(pid int) (Procs, error) { if err != nil { continue } - t = append(t, Proc{PID: int(tid), fs: fsi.FS(taskPath)}) + + t = append(t, Proc{PID: int(tid), fs: FS{fsi.FS(taskPath), fs.real}}) } return t, nil @@ -66,13 +67,13 @@ func (fs FS) Thread(pid, tid int) (Proc, error) { if _, err := os.Stat(taskPath); err != nil { return Proc{}, err } - return Proc{PID: tid, fs: fsi.FS(taskPath)}, nil + return Proc{PID: tid, fs: FS{fsi.FS(taskPath), fs.real}}, nil } // Thread returns a process for a given TID of Proc. 
func (proc Proc) Thread(tid int) (Proc, error) { - tfs := fsi.FS(proc.path("task")) - if _, err := os.Stat(tfs.Path(strconv.Itoa(tid))); err != nil { + tfs := FS{fsi.FS(proc.path("task")), proc.fs.real} + if _, err := os.Stat(tfs.proc.Path(strconv.Itoa(tid))); err != nil { return Proc{}, err } return Proc{PID: tid, fs: tfs}, nil diff --git a/cluster-autoscaler/vendor/github.com/seccomp/libseccomp-golang/CHANGELOG b/cluster-autoscaler/vendor/github.com/seccomp/libseccomp-golang/CHANGELOG index a01d9a722d91..905a9b5cdc88 100644 --- a/cluster-autoscaler/vendor/github.com/seccomp/libseccomp-golang/CHANGELOG +++ b/cluster-autoscaler/vendor/github.com/seccomp/libseccomp-golang/CHANGELOG @@ -2,6 +2,31 @@ libseccomp-golang: Releases =============================================================================== https://github.com/seccomp/libseccomp-golang +* Version 0.10.0 - June 9, 2022 +- Minimum supported version of libseccomp bumped to v2.3.1 +- Add seccomp userspace notification API (ActNotify, filter.*Notif*) +- Add filter.{Get,Set}SSB (to support SCMP_FLTATR_CTL_SSB) +- Add filter.{Get,Set}Optimize (to support SCMP_FLTATR_CTL_OPTIMIZE) +- Add filter.{Get,Set}RawRC (to support SCMP_FLTATR_API_SYSRAWRC) +- Add ArchPARISC, ArchPARISC64, ArchRISCV64 +- Add ActKillProcess and ActKillThread; deprecate ActKill +- Add go module support +- Return ErrSyscallDoesNotExist when unable to resolve a syscall +- Fix some functions to check for both kernel level API and libseccomp version +- Fix MakeCondition to use sanitizeCompareOp +- Fix AddRule to handle EACCES (from libseccomp >= 2.5.0) +- Updated the main docs and converted to README.md +- Added CONTRIBUTING.md, SECURITY.md, and administrative docs under doc/admin +- Add GitHub action CI, enable more linters +- test: test against various libseccomp versions +- test: fix and simplify execInSubprocess +- test: fix APILevelIsSupported +- Refactor the Errno(-1 * retCode) pattern +- Refactor/unify libseccomp version / API level 
checks +- Code cleanups (linter, formatting, spelling fixes) +- Cleanup: use errors.New instead of fmt.Errorf where appropriate +- Cleanup: remove duplicated cgo stuff, redundant linux build tag + * Version 0.9.1 - May 21, 2019 - Minimum supported version of libseccomp bumped to v2.2.0 - Use Libseccomp's `seccomp_version` API to retrieve library version diff --git a/cluster-autoscaler/vendor/github.com/seccomp/libseccomp-golang/README.md b/cluster-autoscaler/vendor/github.com/seccomp/libseccomp-golang/README.md index 6430f1c9e257..312135ee59e6 100644 --- a/cluster-autoscaler/vendor/github.com/seccomp/libseccomp-golang/README.md +++ b/cluster-autoscaler/vendor/github.com/seccomp/libseccomp-golang/README.md @@ -22,19 +22,37 @@ The library source repository currently lives on GitHub at the following URLs: * https://github.com/seccomp/libseccomp-golang * https://github.com/seccomp/libseccomp -The project mailing list is currently hosted on Google Groups at the URL below, -please note that a Google account is not required to subscribe to the mailing -list. - -* https://groups.google.com/d/forum/libseccomp - Documentation for this package is also available at: * https://pkg.go.dev/github.com/seccomp/libseccomp-golang +## Verifying Releases + +Starting with libseccomp-golang v0.10.0, the git tag corresponding to each +release should be signed by one of the libseccomp-golang maintainers. 
It is +recommended that before use you verify the release tags using the following +command: + + % git tag -v + +At present, only the following keys, specified via the fingerprints below, are +authorized to sign official libseccomp-golang release tags: + + Paul Moore + 7100 AADF AE6E 6E94 0D2E 0AD6 55E4 5A5A E8CA 7C8A + + Tom Hromatka + 47A6 8FCE 37C7 D702 4FD6 5E11 356C E62C 2B52 4099 + + Kir Kolyshkin + C242 8CD7 5720 FACD CF76 B6EA 17DE 5ECB 75A1 100E + +More information on GnuPG and git tag verification can be found at their +respective websites: https://git-scm.com/docs/git and https://gnupg.org. + ## Installing the package - # go get github.com/seccomp/libseccomp-golang + % go get github.com/seccomp/libseccomp-golang ## Contributing diff --git a/cluster-autoscaler/vendor/github.com/seccomp/libseccomp-golang/SECURITY.md b/cluster-autoscaler/vendor/github.com/seccomp/libseccomp-golang/SECURITY.md index c448faa8e809..f645d4efec99 100644 --- a/cluster-autoscaler/vendor/github.com/seccomp/libseccomp-golang/SECURITY.md +++ b/cluster-autoscaler/vendor/github.com/seccomp/libseccomp-golang/SECURITY.md @@ -22,6 +22,7 @@ window. * Paul Moore, paul@paul-moore.com * Tom Hromatka, tom.hromatka@oracle.com +* Kir Kolyshkin, kolyshkin@gmail.com ### Resolving Sensitive Security Issues diff --git a/cluster-autoscaler/vendor/github.com/seccomp/libseccomp-golang/seccomp.go b/cluster-autoscaler/vendor/github.com/seccomp/libseccomp-golang/seccomp.go index 8dad12fdbb9b..c23406754c6b 100644 --- a/cluster-autoscaler/vendor/github.com/seccomp/libseccomp-golang/seccomp.go +++ b/cluster-autoscaler/vendor/github.com/seccomp/libseccomp-golang/seccomp.go @@ -7,6 +7,7 @@ package seccomp import ( + "errors" "fmt" "os" "runtime" @@ -245,8 +246,8 @@ const ( ) // ErrSyscallDoesNotExist represents an error condition where -// libseccomp is unable to resolve the syscall -var ErrSyscallDoesNotExist = fmt.Errorf("could not resolve syscall name") +// libseccomp is unable to resolve the syscall. 
+var ErrSyscallDoesNotExist = errors.New("could not resolve syscall name") const ( // Userspace notification response flags @@ -556,7 +557,7 @@ func MakeCondition(arg uint, comparison ScmpCompareOp, values ...uint64) (ScmpCo } else if len(values) > 2 { return condStruct, fmt.Errorf("conditions can have at most 2 arguments (%d given)", len(values)) } else if len(values) == 0 { - return condStruct, fmt.Errorf("must provide at least one value to compare against") + return condStruct, errors.New("must provide at least one value to compare against") } condStruct.Argument = arg @@ -611,7 +612,7 @@ func NewFilter(defaultAction ScmpAction) (*ScmpFilter, error) { fPtr := C.seccomp_init(defaultAction.toNative()) if fPtr == nil { - return nil, fmt.Errorf("could not create filter") + return nil, errors.New("could not create filter") } filter := new(ScmpFilter) @@ -623,7 +624,7 @@ func NewFilter(defaultAction ScmpAction) (*ScmpFilter, error) { // If the kernel does not support TSYNC, allow us to continue without error. 
if err := filter.setFilterAttr(filterAttrTsync, 0x1); err != nil && err != syscall.ENOTSUP { filter.Release() - return nil, fmt.Errorf("could not create filter - error setting tsync bit: %v", err) + return nil, fmt.Errorf("could not create filter: error setting tsync bit: %w", err) } return filter, nil @@ -695,14 +696,14 @@ func (f *ScmpFilter) Merge(src *ScmpFilter) error { defer src.lock.Unlock() if !src.valid || !f.valid { - return fmt.Errorf("one or more of the filter contexts is invalid or uninitialized") + return errors.New("one or more of the filter contexts is invalid or uninitialized") } // Merge the filters if retCode := C.seccomp_merge(f.filterCtx, src.filterCtx); retCode != 0 { e := errRc(retCode) if e == syscall.EINVAL { - return fmt.Errorf("filters could not be merged due to a mismatch in attributes or invalid filter") + return fmt.Errorf("filters could not be merged due to a mismatch in attributes or invalid filter: %w", e) } return e } diff --git a/cluster-autoscaler/vendor/github.com/seccomp/libseccomp-golang/seccomp_internal.go b/cluster-autoscaler/vendor/github.com/seccomp/libseccomp-golang/seccomp_internal.go index df4dfb7eba87..0a7fd34f510d 100644 --- a/cluster-autoscaler/vendor/github.com/seccomp/libseccomp-golang/seccomp_internal.go +++ b/cluster-autoscaler/vendor/github.com/seccomp/libseccomp-golang/seccomp_internal.go @@ -340,7 +340,7 @@ func ensureSupportedVersion() error { func getAPI() (uint, error) { api := C.seccomp_api_get() if api == 0 { - return 0, fmt.Errorf("API level operations are not supported") + return 0, errors.New("API level operations are not supported") } return uint(api), nil @@ -349,11 +349,12 @@ func getAPI() (uint, error) { // Set the API level func setAPI(api uint) error { if retCode := C.seccomp_api_set(C.uint(api)); retCode != 0 { - if errRc(retCode) == syscall.EOPNOTSUPP { - return fmt.Errorf("API level operations are not supported") + e := errRc(retCode) + if e == syscall.EOPNOTSUPP { + return errors.New("API 
level operations are not supported") } - return fmt.Errorf("could not set API level: %v", retCode) + return fmt.Errorf("could not set API level: %w", e) } return nil @@ -411,7 +412,7 @@ func (f *ScmpFilter) setFilterAttr(attr scmpFilterAttr, value C.uint32_t) error // Wrapper for seccomp_rule_add_... functions func (f *ScmpFilter) addRuleWrapper(call ScmpSyscall, action ScmpAction, exact bool, length C.uint, cond C.scmp_cast_t) error { if length != 0 && cond == nil { - return fmt.Errorf("null conditions list, but length is nonzero") + return errors.New("null conditions list, but length is nonzero") } var retCode C.int @@ -430,7 +431,7 @@ func (f *ScmpFilter) addRuleWrapper(call ScmpSyscall, action ScmpAction, exact b case syscall.EPERM, syscall.EACCES: return errDefAction case syscall.EINVAL: - return fmt.Errorf("two checks on same syscall argument") + return errors.New("two checks on same syscall argument") default: return e } @@ -455,7 +456,7 @@ func (f *ScmpFilter) addRuleGeneric(call ScmpSyscall, action ScmpAction, exact b } else { argsArr := C.make_arg_cmp_array(C.uint(len(conds))) if argsArr == nil { - return fmt.Errorf("error allocating memory for conditions") + return errors.New("error allocating memory for conditions") } defer C.free(argsArr) @@ -495,7 +496,7 @@ func sanitizeAction(in ScmpAction) error { } if inTmp != ActTrace && inTmp != ActErrno && (in&0xFFFF0000) != 0 { - return fmt.Errorf("highest 16 bits must be zeroed except for Trace and Errno") + return errors.New("highest 16 bits must be zeroed except for Trace and Errno") } return nil diff --git a/cluster-autoscaler/vendor/github.com/vishvananda/netns/.golangci.yml b/cluster-autoscaler/vendor/github.com/vishvananda/netns/.golangci.yml new file mode 100644 index 000000000000..600bef78e2b7 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/vishvananda/netns/.golangci.yml @@ -0,0 +1,2 @@ +run: + timeout: 5m diff --git a/cluster-autoscaler/vendor/github.com/vishvananda/netns/README.md 
b/cluster-autoscaler/vendor/github.com/vishvananda/netns/README.md index 91057c10a5ee..bdfedbe81fa5 100644 --- a/cluster-autoscaler/vendor/github.com/vishvananda/netns/README.md +++ b/cluster-autoscaler/vendor/github.com/vishvananda/netns/README.md @@ -49,14 +49,3 @@ func main() { } ``` - -## NOTE - -The library can be safely used only with Go >= 1.10 due to [golang/go#20676](https://github.com/golang/go/issues/20676). - -After locking a goroutine to its current OS thread with `runtime.LockOSThread()` -and changing its network namespace, any new subsequent goroutine won't be -scheduled on that thread while it's locked. Therefore, the new goroutine -will run in a different namespace leading to unexpected results. - -See [here](https://www.weave.works/blog/linux-namespaces-golang-followup) for more details. diff --git a/cluster-autoscaler/vendor/github.com/vishvananda/netns/netns_linux.go b/cluster-autoscaler/vendor/github.com/vishvananda/netns/netns_linux.go index ae0558d2903d..2ed7c7e2fa40 100644 --- a/cluster-autoscaler/vendor/github.com/vishvananda/netns/netns_linux.go +++ b/cluster-autoscaler/vendor/github.com/vishvananda/netns/netns_linux.go @@ -2,7 +2,6 @@ package netns import ( "fmt" - "io/ioutil" "os" "path" "path/filepath" @@ -136,7 +135,7 @@ func GetFromDocker(id string) (NsHandle, error) { // borrowed from docker/utils/utils.go func findCgroupMountpoint(cgroupType string) (int, string, error) { - output, err := ioutil.ReadFile("/proc/mounts") + output, err := os.ReadFile("/proc/mounts") if err != nil { return -1, "", err } @@ -166,7 +165,7 @@ func findCgroupMountpoint(cgroupType string) (int, string, error) { // borrowed from docker/utils/utils.go // modified to get the docker pid instead of using /proc/self func getDockerCgroup(cgroupVer int, cgroupType string) (string, error) { - dockerpid, err := ioutil.ReadFile("/var/run/docker.pid") + dockerpid, err := os.ReadFile("/var/run/docker.pid") if err != nil { return "", err } @@ -178,7 +177,7 @@ func 
getDockerCgroup(cgroupVer int, cgroupType string) (string, error) { if err != nil { return "", err } - output, err := ioutil.ReadFile(fmt.Sprintf("/proc/%d/cgroup", pid)) + output, err := os.ReadFile(fmt.Sprintf("/proc/%d/cgroup", pid)) if err != nil { return "", err } @@ -265,7 +264,7 @@ func getPidForContainer(id string) (int, error) { return pid, fmt.Errorf("Unable to find container: %v", id[:len(id)-1]) } - output, err := ioutil.ReadFile(filename) + output, err := os.ReadFile(filename) if err != nil { return pid, err } diff --git a/cluster-autoscaler/vendor/github.com/vishvananda/netns/nshandle_linux.go b/cluster-autoscaler/vendor/github.com/vishvananda/netns/nshandle_linux.go index 0e1117106859..1baffb66acbf 100644 --- a/cluster-autoscaler/vendor/github.com/vishvananda/netns/nshandle_linux.go +++ b/cluster-autoscaler/vendor/github.com/vishvananda/netns/nshandle_linux.go @@ -30,7 +30,7 @@ func (ns NsHandle) Equal(other NsHandle) bool { // String shows the file descriptor number and its dev and inode. func (ns NsHandle) String() string { if ns == -1 { - return "NS(None)" + return "NS(none)" } var s unix.Stat_t if err := unix.Fstat(int(ns), &s); err != nil { diff --git a/cluster-autoscaler/vendor/github.com/vishvananda/netns/nshandle_others.go b/cluster-autoscaler/vendor/github.com/vishvananda/netns/nshandle_others.go index 7a87acbe201a..af727bc091c5 100644 --- a/cluster-autoscaler/vendor/github.com/vishvananda/netns/nshandle_others.go +++ b/cluster-autoscaler/vendor/github.com/vishvananda/netns/nshandle_others.go @@ -17,7 +17,7 @@ func (ns NsHandle) Equal(_ NsHandle) bool { // It is only implemented on Linux, and returns "NS(none)" on other // platforms. 
func (ns NsHandle) String() string { - return "NS(None)" + return "NS(none)" } // UniqueId returns a string which uniquely identifies the namespace diff --git a/cluster-autoscaler/vendor/go.etcd.io/etcd/api/v3/version/version.go b/cluster-autoscaler/vendor/go.etcd.io/etcd/api/v3/version/version.go index f3b389421ef5..d62f6474d95c 100644 --- a/cluster-autoscaler/vendor/go.etcd.io/etcd/api/v3/version/version.go +++ b/cluster-autoscaler/vendor/go.etcd.io/etcd/api/v3/version/version.go @@ -26,7 +26,7 @@ import ( var ( // MinClusterVersion is the min cluster version this etcd binary is compatible with. MinClusterVersion = "3.0.0" - Version = "3.5.7" + Version = "3.5.9" APIVersion = "unknown" // Git SHA Value will be set during build diff --git a/cluster-autoscaler/vendor/go.etcd.io/etcd/client/pkg/v3/logutil/zap.go b/cluster-autoscaler/vendor/go.etcd.io/etcd/client/pkg/v3/logutil/zap.go index d7fd0d90dbd1..34f35b9f28fb 100644 --- a/cluster-autoscaler/vendor/go.etcd.io/etcd/client/pkg/v3/logutil/zap.go +++ b/cluster-autoscaler/vendor/go.etcd.io/etcd/client/pkg/v3/logutil/zap.go @@ -16,6 +16,7 @@ package logutil import ( "sort" + "time" "go.uber.org/zap" "go.uber.org/zap/zapcore" @@ -46,15 +47,20 @@ var DefaultZapLoggerConfig = zap.Config{ // copied from "zap.NewProductionEncoderConfig" with some updates EncoderConfig: zapcore.EncoderConfig{ - TimeKey: "ts", - LevelKey: "level", - NameKey: "logger", - CallerKey: "caller", - MessageKey: "msg", - StacktraceKey: "stacktrace", - LineEnding: zapcore.DefaultLineEnding, - EncodeLevel: zapcore.LowercaseLevelEncoder, - EncodeTime: zapcore.ISO8601TimeEncoder, + TimeKey: "ts", + LevelKey: "level", + NameKey: "logger", + CallerKey: "caller", + MessageKey: "msg", + StacktraceKey: "stacktrace", + LineEnding: zapcore.DefaultLineEnding, + EncodeLevel: zapcore.LowercaseLevelEncoder, + + // Custom EncodeTime function to ensure we match format and precision of historic capnslog timestamps + EncodeTime: func(t time.Time, enc 
zapcore.PrimitiveArrayEncoder) { + enc.AppendString(t.Format("2006-01-02T15:04:05.999999Z0700")) + }, + EncodeDuration: zapcore.StringDurationEncoder, EncodeCaller: zapcore.ShortCallerEncoder, }, diff --git a/cluster-autoscaler/vendor/go.etcd.io/etcd/client/pkg/v3/tlsutil/versions.go b/cluster-autoscaler/vendor/go.etcd.io/etcd/client/pkg/v3/tlsutil/versions.go new file mode 100644 index 000000000000..ffcecd8c670f --- /dev/null +++ b/cluster-autoscaler/vendor/go.etcd.io/etcd/client/pkg/v3/tlsutil/versions.go @@ -0,0 +1,47 @@ +// Copyright 2023 The etcd Authors +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package tlsutil + +import ( + "crypto/tls" + "fmt" +) + +type TLSVersion string + +// Constants for TLS versions. +const ( + TLSVersionDefault TLSVersion = "" + TLSVersion12 TLSVersion = "TLS1.2" + TLSVersion13 TLSVersion = "TLS1.3" +) + +// GetTLSVersion returns the corresponding tls.Version or error. +func GetTLSVersion(version string) (uint16, error) { + var v uint16 + + switch version { + case string(TLSVersionDefault): + v = 0 // 0 means let Go decide. 
+ case string(TLSVersion12): + v = tls.VersionTLS12 + case string(TLSVersion13): + v = tls.VersionTLS13 + default: + return 0, fmt.Errorf("unexpected TLS version %q (must be one of: TLS1.2, TLS1.3)", version) + } + + return v, nil +} diff --git a/cluster-autoscaler/vendor/go.etcd.io/etcd/client/pkg/v3/transport/listener.go b/cluster-autoscaler/vendor/go.etcd.io/etcd/client/pkg/v3/transport/listener.go index c3bc56a65b59..150545d08df2 100644 --- a/cluster-autoscaler/vendor/go.etcd.io/etcd/client/pkg/v3/transport/listener.go +++ b/cluster-autoscaler/vendor/go.etcd.io/etcd/client/pkg/v3/transport/listener.go @@ -165,6 +165,14 @@ type TLSInfo struct { // Note that cipher suites are prioritized in the given order. CipherSuites []uint16 + // MinVersion is the minimum TLS version that is acceptable. + // If not set, the minimum version is TLS 1.2. + MinVersion uint16 + + // MaxVersion is the maximum TLS version that is acceptable. + // If not set, the default used by Go is selected (see tls.Config.MaxVersion). + MaxVersion uint16 + selfCert bool // parseFunc exists to simplify testing. Typically, parseFunc @@ -339,8 +347,8 @@ func SelfCert(lg *zap.Logger, dirpath string, hosts []string, selfSignedCertVali // Previously, // 1. Server has non-empty (*tls.Config).Certificates on client hello // 2. 
Server calls (*tls.Config).GetCertificate iff: -// - Server's (*tls.Config).Certificates is not empty, or -// - Client supplies SNI; non-empty (*tls.ClientHelloInfo).ServerName +// - Server's (*tls.Config).Certificates is not empty, or +// - Client supplies SNI; non-empty (*tls.ClientHelloInfo).ServerName // // When (*tls.Config).Certificates is always populated on initial handshake, // client is expected to provide a valid matching SNI to pass the TLS @@ -378,8 +386,17 @@ func (info TLSInfo) baseConfig() (*tls.Config, error) { } } + var minVersion uint16 + if info.MinVersion != 0 { + minVersion = info.MinVersion + } else { + // Default minimum version is TLS 1.2, previous versions are insecure and deprecated. + minVersion = tls.VersionTLS12 + } + cfg := &tls.Config{ - MinVersion: tls.VersionTLS12, + MinVersion: minVersion, + MaxVersion: info.MaxVersion, ServerName: info.ServerName, } @@ -510,11 +527,6 @@ func (info TLSInfo) ServerConfig() (*tls.Config, error) { // "h2" NextProtos is necessary for enabling HTTP2 for go's HTTP server cfg.NextProtos = []string{"h2"} - // go1.13 enables TLS 1.3 by default - // and in TLS 1.3, cipher suites are not configurable - // setting Max TLS version to TLS 1.2 for go 1.13 - cfg.MaxVersion = tls.VersionTLS12 - return cfg, nil } @@ -569,11 +581,6 @@ func (info TLSInfo) ClientConfig() (*tls.Config, error) { } } - // go1.13 enables TLS 1.3 by default - // and in TLS 1.3, cipher suites are not configurable - // setting Max TLS version to TLS 1.2 for go 1.13 - cfg.MaxVersion = tls.VersionTLS12 - return cfg, nil } diff --git a/cluster-autoscaler/vendor/go.etcd.io/etcd/client/v3/doc.go b/cluster-autoscaler/vendor/go.etcd.io/etcd/client/v3/doc.go index 645d744a5a7f..fd61aff117aa 100644 --- a/cluster-autoscaler/vendor/go.etcd.io/etcd/client/v3/doc.go +++ b/cluster-autoscaler/vendor/go.etcd.io/etcd/client/v3/doc.go @@ -61,7 +61,8 @@ // // 1. context error: canceled or deadline exceeded. // 2. gRPC error: e.g. 
when clock drifts in server-side before client's context deadline exceeded. -// See https://github.com/etcd-io/etcd/blob/main/api/v3rpc/rpctypes/error.go +// +// See https://github.com/etcd-io/etcd/blob/main/api/v3rpc/rpctypes/error.go // // Here is the example code to handle client errors: // @@ -102,5 +103,4 @@ // The grpc load balancer is registered statically and is shared across etcd clients. // To enable detailed load balancer logging, set the ETCD_CLIENT_DEBUG environment // variable. E.g. "ETCD_CLIENT_DEBUG=1". -// package clientv3 diff --git a/cluster-autoscaler/vendor/go.etcd.io/etcd/client/v3/internal/endpoint/endpoint.go b/cluster-autoscaler/vendor/go.etcd.io/etcd/client/v3/internal/endpoint/endpoint.go index 1d3f1a7a2c7f..f6674235cd9e 100644 --- a/cluster-autoscaler/vendor/go.etcd.io/etcd/client/v3/internal/endpoint/endpoint.go +++ b/cluster-autoscaler/vendor/go.etcd.io/etcd/client/v3/internal/endpoint/endpoint.go @@ -45,8 +45,8 @@ func extractHostFromPath(pathStr string) string { return extractHostFromHostPort(path.Base(pathStr)) } -//mustSplit2 returns the values from strings.SplitN(s, sep, 2). -//If sep is not found, it returns ("", "", false) instead. +// mustSplit2 returns the values from strings.SplitN(s, sep, 2). +// If sep is not found, it returns ("", "", false) instead. func mustSplit2(s, sep string) (string, string) { spl := strings.SplitN(s, sep, 2) if len(spl) < 2 { @@ -81,11 +81,12 @@ func schemeToCredsRequirement(schema string) CredsRequirement { // The main differences: // - etcd supports unixs & https names as opposed to unix & http to // distinguish need to configure certificates. -// - etcd support http(s) names as opposed to tcp supported by grpc/dial method. -// - etcd supports unix(s)://local-file naming schema +// - etcd support http(s) names as opposed to tcp supported by grpc/dial method. +// - etcd supports unix(s)://local-file naming schema // (as opposed to unix:local-file canonical name used by grpc for current dir files). 
-// - Within the unix(s) schemas, the last segment (filename) without 'port' (content after colon) -// is considered serverName - to allow local testing of cert-protected communication. +// - Within the unix(s) schemas, the last segment (filename) without 'port' (content after colon) +// is considered serverName - to allow local testing of cert-protected communication. +// // See more: // - https://github.com/grpc/grpc-go/blob/26c143bd5f59344a4b8a1e491e0f5e18aa97abc7/internal/grpcutil/target.go#L47 // - https://golang.org/pkg/net/#Dial diff --git a/cluster-autoscaler/vendor/go.etcd.io/etcd/client/v3/txn.go b/cluster-autoscaler/vendor/go.etcd.io/etcd/client/v3/txn.go index 22301fba6b14..3f6a953cf094 100644 --- a/cluster-autoscaler/vendor/go.etcd.io/etcd/client/v3/txn.go +++ b/cluster-autoscaler/vendor/go.etcd.io/etcd/client/v3/txn.go @@ -25,15 +25,14 @@ import ( // Txn is the interface that wraps mini-transactions. // -// Txn(context.TODO()).If( -// Compare(Value(k1), ">", v1), -// Compare(Version(k1), "=", 2) -// ).Then( -// OpPut(k2,v2), OpPut(k3,v3) -// ).Else( -// OpPut(k4,v4), OpPut(k5,v5) -// ).Commit() -// +// Txn(context.TODO()).If( +// Compare(Value(k1), ">", v1), +// Compare(Version(k1), "=", 2) +// ).Then( +// OpPut(k2,v2), OpPut(k3,v3) +// ).Else( +// OpPut(k4,v4), OpPut(k5,v5) +// ).Commit() type Txn interface { // If takes a list of comparison. If all comparisons passed in succeed, // the operations passed into Then() will be executed. 
Or the operations diff --git a/cluster-autoscaler/vendor/go.etcd.io/etcd/client/v3/watch.go b/cluster-autoscaler/vendor/go.etcd.io/etcd/client/v3/watch.go index bc886936c869..41a6ec976333 100644 --- a/cluster-autoscaler/vendor/go.etcd.io/etcd/client/v3/watch.go +++ b/cluster-autoscaler/vendor/go.etcd.io/etcd/client/v3/watch.go @@ -848,7 +848,7 @@ func (w *watchGrpcStream) serveSubstream(ws *watcherStream, resumec chan struct{ } } else { // current progress of watch; <= store revision - nextRev = wr.Header.Revision + nextRev = wr.Header.Revision + 1 } if len(wr.Events) > 0 { diff --git a/cluster-autoscaler/vendor/golang.org/x/crypto/hkdf/hkdf.go b/cluster-autoscaler/vendor/golang.org/x/crypto/hkdf/hkdf.go new file mode 100644 index 000000000000..dda3f143bec5 --- /dev/null +++ b/cluster-autoscaler/vendor/golang.org/x/crypto/hkdf/hkdf.go @@ -0,0 +1,93 @@ +// Copyright 2014 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// Package hkdf implements the HMAC-based Extract-and-Expand Key Derivation +// Function (HKDF) as defined in RFC 5869. +// +// HKDF is a cryptographic key derivation function (KDF) with the goal of +// expanding limited input keying material into one or more cryptographically +// strong secret keys. +package hkdf // import "golang.org/x/crypto/hkdf" + +import ( + "crypto/hmac" + "errors" + "hash" + "io" +) + +// Extract generates a pseudorandom key for use with Expand from an input secret +// and an optional independent salt. +// +// Only use this function if you need to reuse the extracted key with multiple +// Expand invocations and different context values. Most common scenarios, +// including the generation of multiple keys, should use New instead. 
+func Extract(hash func() hash.Hash, secret, salt []byte) []byte { + if salt == nil { + salt = make([]byte, hash().Size()) + } + extractor := hmac.New(hash, salt) + extractor.Write(secret) + return extractor.Sum(nil) +} + +type hkdf struct { + expander hash.Hash + size int + + info []byte + counter byte + + prev []byte + buf []byte +} + +func (f *hkdf) Read(p []byte) (int, error) { + // Check whether enough data can be generated + need := len(p) + remains := len(f.buf) + int(255-f.counter+1)*f.size + if remains < need { + return 0, errors.New("hkdf: entropy limit reached") + } + // Read any leftover from the buffer + n := copy(p, f.buf) + p = p[n:] + + // Fill the rest of the buffer + for len(p) > 0 { + f.expander.Reset() + f.expander.Write(f.prev) + f.expander.Write(f.info) + f.expander.Write([]byte{f.counter}) + f.prev = f.expander.Sum(f.prev[:0]) + f.counter++ + + // Copy the new batch into p + f.buf = f.prev + n = copy(p, f.buf) + p = p[n:] + } + // Save leftovers for next run + f.buf = f.buf[n:] + + return need, nil +} + +// Expand returns a Reader, from which keys can be read, using the given +// pseudorandom key and optional context info, skipping the extraction step. +// +// The pseudorandomKey should have been generated by Extract, or be a uniformly +// random or pseudorandom cryptographically strong key. See RFC 5869, Section +// 3.3. Most common scenarios will want to use New instead. +func Expand(hash func() hash.Hash, pseudorandomKey, info []byte) io.Reader { + expander := hmac.New(hash, pseudorandomKey) + return &hkdf{expander, expander.Size(), info, 1, nil, nil} +} + +// New returns a Reader, from which keys can be read, using the given hash, +// secret, salt and context info. Salt and info can be nil. 
+func New(hash func() hash.Hash, secret, salt, info []byte) io.Reader { + prk := Extract(hash, secret, salt) + return Expand(hash, prk, info) +} diff --git a/cluster-autoscaler/vendor/github.com/ghodss/yaml/LICENSE b/cluster-autoscaler/vendor/golang.org/x/mod/LICENSE similarity index 55% rename from cluster-autoscaler/vendor/github.com/ghodss/yaml/LICENSE rename to cluster-autoscaler/vendor/golang.org/x/mod/LICENSE index 7805d36de730..6a66aea5eafe 100644 --- a/cluster-autoscaler/vendor/github.com/ghodss/yaml/LICENSE +++ b/cluster-autoscaler/vendor/golang.org/x/mod/LICENSE @@ -1,27 +1,4 @@ -The MIT License (MIT) - -Copyright (c) 2014 Sam Ghods - -Permission is hereby granted, free of charge, to any person obtaining a copy -of this software and associated documentation files (the "Software"), to deal -in the Software without restriction, including without limitation the rights -to use, copy, modify, merge, publish, distribute, sublicense, and/or sell -copies of the Software, and to permit persons to whom the Software is -furnished to do so, subject to the following conditions: - -The above copyright notice and this permission notice shall be included in all -copies or substantial portions of the Software. - -THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR -IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, -FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE -AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER -LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE -SOFTWARE. - - -Copyright (c) 2012 The Go Authors. All rights reserved. +Copyright (c) 2009 The Go Authors. All rights reserved. 
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are diff --git a/cluster-autoscaler/vendor/golang.org/x/mod/PATENTS b/cluster-autoscaler/vendor/golang.org/x/mod/PATENTS new file mode 100644 index 000000000000..733099041f84 --- /dev/null +++ b/cluster-autoscaler/vendor/golang.org/x/mod/PATENTS @@ -0,0 +1,22 @@ +Additional IP Rights Grant (Patents) + +"This implementation" means the copyrightable works distributed by +Google as part of the Go project. + +Google hereby grants to You a perpetual, worldwide, non-exclusive, +no-charge, royalty-free, irrevocable (except as stated in this section) +patent license to make, have made, use, offer to sell, sell, import, +transfer and otherwise run, modify and propagate the contents of this +implementation of Go, where such license applies only to those patent +claims, both currently owned or controlled by Google and acquired in +the future, licensable by Google that are necessarily infringed by this +implementation of Go. This grant does not include claims that would be +infringed only as a consequence of further modification of this +implementation. If you or your agent or exclusive licensee institute or +order or agree to the institution of patent litigation against any +entity (including a cross-claim or counterclaim in a lawsuit) alleging +that this implementation of Go or any code incorporated within this +implementation of Go constitutes direct or contributory patent +infringement, or inducement of patent infringement, then any patent +rights granted to you under this License for this implementation of Go +shall terminate as of the date such litigation is filed. 
diff --git a/cluster-autoscaler/vendor/golang.org/x/mod/semver/semver.go b/cluster-autoscaler/vendor/golang.org/x/mod/semver/semver.go new file mode 100644 index 000000000000..9a2dfd33a770 --- /dev/null +++ b/cluster-autoscaler/vendor/golang.org/x/mod/semver/semver.go @@ -0,0 +1,401 @@ +// Copyright 2018 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// Package semver implements comparison of semantic version strings. +// In this package, semantic version strings must begin with a leading "v", +// as in "v1.0.0". +// +// The general form of a semantic version string accepted by this package is +// +// vMAJOR[.MINOR[.PATCH[-PRERELEASE][+BUILD]]] +// +// where square brackets indicate optional parts of the syntax; +// MAJOR, MINOR, and PATCH are decimal integers without extra leading zeros; +// PRERELEASE and BUILD are each a series of non-empty dot-separated identifiers +// using only alphanumeric characters and hyphens; and +// all-numeric PRERELEASE identifiers must not have leading zeros. +// +// This package follows Semantic Versioning 2.0.0 (see semver.org) +// with two exceptions. First, it requires the "v" prefix. Second, it recognizes +// vMAJOR and vMAJOR.MINOR (with no prerelease or build suffixes) +// as shorthands for vMAJOR.0.0 and vMAJOR.MINOR.0. +package semver + +import "sort" + +// parsed returns the parsed form of a semantic version string. +type parsed struct { + major string + minor string + patch string + short string + prerelease string + build string +} + +// IsValid reports whether v is a valid semantic version string. +func IsValid(v string) bool { + _, ok := parse(v) + return ok +} + +// Canonical returns the canonical formatting of the semantic version v. +// It fills in any missing .MINOR or .PATCH and discards build metadata. +// Two semantic versions compare equal only if their canonical formattings +// are identical strings. 
+// The canonical invalid semantic version is the empty string. +func Canonical(v string) string { + p, ok := parse(v) + if !ok { + return "" + } + if p.build != "" { + return v[:len(v)-len(p.build)] + } + if p.short != "" { + return v + p.short + } + return v +} + +// Major returns the major version prefix of the semantic version v. +// For example, Major("v2.1.0") == "v2". +// If v is an invalid semantic version string, Major returns the empty string. +func Major(v string) string { + pv, ok := parse(v) + if !ok { + return "" + } + return v[:1+len(pv.major)] +} + +// MajorMinor returns the major.minor version prefix of the semantic version v. +// For example, MajorMinor("v2.1.0") == "v2.1". +// If v is an invalid semantic version string, MajorMinor returns the empty string. +func MajorMinor(v string) string { + pv, ok := parse(v) + if !ok { + return "" + } + i := 1 + len(pv.major) + if j := i + 1 + len(pv.minor); j <= len(v) && v[i] == '.' && v[i+1:j] == pv.minor { + return v[:j] + } + return v[:i] + "." + pv.minor +} + +// Prerelease returns the prerelease suffix of the semantic version v. +// For example, Prerelease("v2.1.0-pre+meta") == "-pre". +// If v is an invalid semantic version string, Prerelease returns the empty string. +func Prerelease(v string) string { + pv, ok := parse(v) + if !ok { + return "" + } + return pv.prerelease +} + +// Build returns the build suffix of the semantic version v. +// For example, Build("v2.1.0+meta") == "+meta". +// If v is an invalid semantic version string, Build returns the empty string. +func Build(v string) string { + pv, ok := parse(v) + if !ok { + return "" + } + return pv.build +} + +// Compare returns an integer comparing two versions according to +// semantic version precedence. +// The result will be 0 if v == w, -1 if v < w, or +1 if v > w. +// +// An invalid semantic version string is considered less than a valid one. +// All invalid semantic version strings compare equal to each other. 
+func Compare(v, w string) int { + pv, ok1 := parse(v) + pw, ok2 := parse(w) + if !ok1 && !ok2 { + return 0 + } + if !ok1 { + return -1 + } + if !ok2 { + return +1 + } + if c := compareInt(pv.major, pw.major); c != 0 { + return c + } + if c := compareInt(pv.minor, pw.minor); c != 0 { + return c + } + if c := compareInt(pv.patch, pw.patch); c != 0 { + return c + } + return comparePrerelease(pv.prerelease, pw.prerelease) +} + +// Max canonicalizes its arguments and then returns the version string +// that compares greater. +// +// Deprecated: use [Compare] instead. In most cases, returning a canonicalized +// version is not expected or desired. +func Max(v, w string) string { + v = Canonical(v) + w = Canonical(w) + if Compare(v, w) > 0 { + return v + } + return w +} + +// ByVersion implements [sort.Interface] for sorting semantic version strings. +type ByVersion []string + +func (vs ByVersion) Len() int { return len(vs) } +func (vs ByVersion) Swap(i, j int) { vs[i], vs[j] = vs[j], vs[i] } +func (vs ByVersion) Less(i, j int) bool { + cmp := Compare(vs[i], vs[j]) + if cmp != 0 { + return cmp < 0 + } + return vs[i] < vs[j] +} + +// Sort sorts a list of semantic version strings using [ByVersion]. +func Sort(list []string) { + sort.Sort(ByVersion(list)) +} + +func parse(v string) (p parsed, ok bool) { + if v == "" || v[0] != 'v' { + return + } + p.major, v, ok = parseInt(v[1:]) + if !ok { + return + } + if v == "" { + p.minor = "0" + p.patch = "0" + p.short = ".0.0" + return + } + if v[0] != '.' { + ok = false + return + } + p.minor, v, ok = parseInt(v[1:]) + if !ok { + return + } + if v == "" { + p.patch = "0" + p.short = ".0" + return + } + if v[0] != '.' 
{ + ok = false + return + } + p.patch, v, ok = parseInt(v[1:]) + if !ok { + return + } + if len(v) > 0 && v[0] == '-' { + p.prerelease, v, ok = parsePrerelease(v) + if !ok { + return + } + } + if len(v) > 0 && v[0] == '+' { + p.build, v, ok = parseBuild(v) + if !ok { + return + } + } + if v != "" { + ok = false + return + } + ok = true + return +} + +func parseInt(v string) (t, rest string, ok bool) { + if v == "" { + return + } + if v[0] < '0' || '9' < v[0] { + return + } + i := 1 + for i < len(v) && '0' <= v[i] && v[i] <= '9' { + i++ + } + if v[0] == '0' && i != 1 { + return + } + return v[:i], v[i:], true +} + +func parsePrerelease(v string) (t, rest string, ok bool) { + // "A pre-release version MAY be denoted by appending a hyphen and + // a series of dot separated identifiers immediately following the patch version. + // Identifiers MUST comprise only ASCII alphanumerics and hyphen [0-9A-Za-z-]. + // Identifiers MUST NOT be empty. Numeric identifiers MUST NOT include leading zeroes." + if v == "" || v[0] != '-' { + return + } + i := 1 + start := 1 + for i < len(v) && v[i] != '+' { + if !isIdentChar(v[i]) && v[i] != '.' { + return + } + if v[i] == '.' { + if start == i || isBadNum(v[start:i]) { + return + } + start = i + 1 + } + i++ + } + if start == i || isBadNum(v[start:i]) { + return + } + return v[:i], v[i:], true +} + +func parseBuild(v string) (t, rest string, ok bool) { + if v == "" || v[0] != '+' { + return + } + i := 1 + start := 1 + for i < len(v) { + if !isIdentChar(v[i]) && v[i] != '.' { + return + } + if v[i] == '.' 
{ + if start == i { + return + } + start = i + 1 + } + i++ + } + if start == i { + return + } + return v[:i], v[i:], true +} + +func isIdentChar(c byte) bool { + return 'A' <= c && c <= 'Z' || 'a' <= c && c <= 'z' || '0' <= c && c <= '9' || c == '-' +} + +func isBadNum(v string) bool { + i := 0 + for i < len(v) && '0' <= v[i] && v[i] <= '9' { + i++ + } + return i == len(v) && i > 1 && v[0] == '0' +} + +func isNum(v string) bool { + i := 0 + for i < len(v) && '0' <= v[i] && v[i] <= '9' { + i++ + } + return i == len(v) +} + +func compareInt(x, y string) int { + if x == y { + return 0 + } + if len(x) < len(y) { + return -1 + } + if len(x) > len(y) { + return +1 + } + if x < y { + return -1 + } else { + return +1 + } +} + +func comparePrerelease(x, y string) int { + // "When major, minor, and patch are equal, a pre-release version has + // lower precedence than a normal version. + // Example: 1.0.0-alpha < 1.0.0. + // Precedence for two pre-release versions with the same major, minor, + // and patch version MUST be determined by comparing each dot separated + // identifier from left to right until a difference is found as follows: + // identifiers consisting of only digits are compared numerically and + // identifiers with letters or hyphens are compared lexically in ASCII + // sort order. Numeric identifiers always have lower precedence than + // non-numeric identifiers. A larger set of pre-release fields has a + // higher precedence than a smaller set, if all of the preceding + // identifiers are equal. + // Example: 1.0.0-alpha < 1.0.0-alpha.1 < 1.0.0-alpha.beta < + // 1.0.0-beta < 1.0.0-beta.2 < 1.0.0-beta.11 < 1.0.0-rc.1 < 1.0.0." + if x == y { + return 0 + } + if x == "" { + return +1 + } + if y == "" { + return -1 + } + for x != "" && y != "" { + x = x[1:] // skip - or . + y = y[1:] // skip - or . 
+ var dx, dy string + dx, x = nextIdent(x) + dy, y = nextIdent(y) + if dx != dy { + ix := isNum(dx) + iy := isNum(dy) + if ix != iy { + if ix { + return -1 + } else { + return +1 + } + } + if ix { + if len(dx) < len(dy) { + return -1 + } + if len(dx) > len(dy) { + return +1 + } + } + if dx < dy { + return -1 + } else { + return +1 + } + } + } + if x == "" { + return -1 + } else { + return +1 + } +} + +func nextIdent(x string) (dx, rest string) { + i := 0 + for i < len(x) && x[i] != '.' { + i++ + } + return x[:i], x[i:] +} diff --git a/cluster-autoscaler/vendor/golang.org/x/oauth2/google/default.go b/cluster-autoscaler/vendor/golang.org/x/oauth2/google/default.go index b3e8783cc594..2cf71f0f93f8 100644 --- a/cluster-autoscaler/vendor/golang.org/x/oauth2/google/default.go +++ b/cluster-autoscaler/vendor/golang.org/x/oauth2/google/default.go @@ -8,7 +8,6 @@ import ( "context" "encoding/json" "fmt" - "io/ioutil" "net/http" "os" "path/filepath" @@ -142,10 +141,8 @@ func FindDefaultCredentialsWithParams(ctx context.Context, params CredentialsPar // Second, try a well-known file. 
filename := wellKnownFile() - if creds, err := readCredentialsFile(ctx, filename, params); err == nil { - return creds, nil - } else if !os.IsNotExist(err) { - return nil, fmt.Errorf("google: error getting credentials using well-known file (%v): %v", filename, err) + if b, err := os.ReadFile(filename); err == nil { + return CredentialsFromJSONWithParams(ctx, b, params) } // Third, if we're on a Google App Engine standard first generation runtime (<= Go 1.9) @@ -231,7 +228,7 @@ func wellKnownFile() string { } func readCredentialsFile(ctx context.Context, filename string, params CredentialsParams) (*Credentials, error) { - b, err := ioutil.ReadFile(filename) + b, err := os.ReadFile(filename) if err != nil { return nil, err } diff --git a/cluster-autoscaler/vendor/golang.org/x/oauth2/internal/oauth2.go b/cluster-autoscaler/vendor/golang.org/x/oauth2/internal/oauth2.go index c0ab196cf461..14989beaf493 100644 --- a/cluster-autoscaler/vendor/golang.org/x/oauth2/internal/oauth2.go +++ b/cluster-autoscaler/vendor/golang.org/x/oauth2/internal/oauth2.go @@ -14,7 +14,7 @@ import ( // ParseKey converts the binary contents of a private key file // to an *rsa.PrivateKey. It detects whether the private key is in a -// PEM container or not. If so, it extracts the the private key +// PEM container or not. If so, it extracts the private key // from PEM container before conversion. It only supports PEM // containers with no passphrase. func ParseKey(key []byte) (*rsa.PrivateKey, error) { diff --git a/cluster-autoscaler/vendor/golang.org/x/oauth2/internal/token.go b/cluster-autoscaler/vendor/golang.org/x/oauth2/internal/token.go index b4723fcacea7..58901bda53e5 100644 --- a/cluster-autoscaler/vendor/golang.org/x/oauth2/internal/token.go +++ b/cluster-autoscaler/vendor/golang.org/x/oauth2/internal/token.go @@ -55,12 +55,18 @@ type Token struct { } // tokenJSON is the struct representing the HTTP response from OAuth2 -// providers returning a token in JSON form. 
+// providers returning a token or error in JSON form. +// https://datatracker.ietf.org/doc/html/rfc6749#section-5.1 type tokenJSON struct { AccessToken string `json:"access_token"` TokenType string `json:"token_type"` RefreshToken string `json:"refresh_token"` ExpiresIn expirationTime `json:"expires_in"` // at least PayPal returns string, while most return number + // error fields + // https://datatracker.ietf.org/doc/html/rfc6749#section-5.2 + ErrorCode string `json:"error"` + ErrorDescription string `json:"error_description"` + ErrorURI string `json:"error_uri"` } func (e *tokenJSON) expiry() (t time.Time) { @@ -236,21 +242,29 @@ func doTokenRoundTrip(ctx context.Context, req *http.Request) (*Token, error) { if err != nil { return nil, fmt.Errorf("oauth2: cannot fetch token: %v", err) } - if code := r.StatusCode; code < 200 || code > 299 { - return nil, &RetrieveError{ - Response: r, - Body: body, - } + + failureStatus := r.StatusCode < 200 || r.StatusCode > 299 + retrieveError := &RetrieveError{ + Response: r, + Body: body, + // attempt to populate error detail below } var token *Token content, _, _ := mime.ParseMediaType(r.Header.Get("Content-Type")) switch content { case "application/x-www-form-urlencoded", "text/plain": + // some endpoints return a query string vals, err := url.ParseQuery(string(body)) if err != nil { - return nil, err + if failureStatus { + return nil, retrieveError + } + return nil, fmt.Errorf("oauth2: cannot parse response: %v", err) } + retrieveError.ErrorCode = vals.Get("error") + retrieveError.ErrorDescription = vals.Get("error_description") + retrieveError.ErrorURI = vals.Get("error_uri") token = &Token{ AccessToken: vals.Get("access_token"), TokenType: vals.Get("token_type"), @@ -265,8 +279,14 @@ func doTokenRoundTrip(ctx context.Context, req *http.Request) (*Token, error) { default: var tj tokenJSON if err = json.Unmarshal(body, &tj); err != nil { - return nil, err + if failureStatus { + return nil, retrieveError + } + return nil, 
fmt.Errorf("oauth2: cannot parse json: %v", err) } + retrieveError.ErrorCode = tj.ErrorCode + retrieveError.ErrorDescription = tj.ErrorDescription + retrieveError.ErrorURI = tj.ErrorURI token = &Token{ AccessToken: tj.AccessToken, TokenType: tj.TokenType, @@ -276,17 +296,37 @@ func doTokenRoundTrip(ctx context.Context, req *http.Request) (*Token, error) { } json.Unmarshal(body, &token.Raw) // no error checks for optional fields } + // according to spec, servers should respond status 400 in error case + // https://www.rfc-editor.org/rfc/rfc6749#section-5.2 + // but some unorthodox servers respond 200 in error case + if failureStatus || retrieveError.ErrorCode != "" { + return nil, retrieveError + } if token.AccessToken == "" { return nil, errors.New("oauth2: server response missing access_token") } return token, nil } +// mirrors oauth2.RetrieveError type RetrieveError struct { - Response *http.Response - Body []byte + Response *http.Response + Body []byte + ErrorCode string + ErrorDescription string + ErrorURI string } func (r *RetrieveError) Error() string { + if r.ErrorCode != "" { + s := fmt.Sprintf("oauth2: %q", r.ErrorCode) + if r.ErrorDescription != "" { + s += fmt.Sprintf(" %q", r.ErrorDescription) + } + if r.ErrorURI != "" { + s += fmt.Sprintf(" %q", r.ErrorURI) + } + return s + } return fmt.Sprintf("oauth2: cannot fetch token: %v\nResponse: %s", r.Response.Status, r.Body) } diff --git a/cluster-autoscaler/vendor/golang.org/x/oauth2/token.go b/cluster-autoscaler/vendor/golang.org/x/oauth2/token.go index 7c64006de693..5ffce9764be7 100644 --- a/cluster-autoscaler/vendor/golang.org/x/oauth2/token.go +++ b/cluster-autoscaler/vendor/golang.org/x/oauth2/token.go @@ -175,14 +175,31 @@ func retrieveToken(ctx context.Context, c *Config, v url.Values) (*Token, error) } // RetrieveError is the error returned when the token endpoint returns a -// non-2XX HTTP status code. +// non-2XX HTTP status code or populates RFC 6749's 'error' parameter. 
+// https://datatracker.ietf.org/doc/html/rfc6749#section-5.2 type RetrieveError struct { Response *http.Response // Body is the body that was consumed by reading Response.Body. // It may be truncated. Body []byte + // ErrorCode is RFC 6749's 'error' parameter. + ErrorCode string + // ErrorDescription is RFC 6749's 'error_description' parameter. + ErrorDescription string + // ErrorURI is RFC 6749's 'error_uri' parameter. + ErrorURI string } func (r *RetrieveError) Error() string { + if r.ErrorCode != "" { + s := fmt.Sprintf("oauth2: %q", r.ErrorCode) + if r.ErrorDescription != "" { + s += fmt.Sprintf(" %q", r.ErrorDescription) + } + if r.ErrorURI != "" { + s += fmt.Sprintf(" %q", r.ErrorURI) + } + return s + } return fmt.Sprintf("oauth2: cannot fetch token: %v\nResponse: %s", r.Response.Status, r.Body) } diff --git a/cluster-autoscaler/vendor/golang.org/x/sys/execabs/execabs.go b/cluster-autoscaler/vendor/golang.org/x/sys/execabs/execabs.go new file mode 100644 index 000000000000..3bf40fdfecd5 --- /dev/null +++ b/cluster-autoscaler/vendor/golang.org/x/sys/execabs/execabs.go @@ -0,0 +1,102 @@ +// Copyright 2020 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// Package execabs is a drop-in replacement for os/exec +// that requires PATH lookups to find absolute paths. +// That is, execabs.Command("cmd") runs the same PATH lookup +// as exec.Command("cmd"), but if the result is a path +// which is relative, the Run and Start methods will report +// an error instead of running the executable. +// +// See https://blog.golang.org/path-security for more information +// about when it may be necessary or appropriate to use this package. +package execabs + +import ( + "context" + "fmt" + "os/exec" + "path/filepath" + "reflect" + "unsafe" +) + +// ErrNotFound is the error resulting if a path search failed to find an executable file. +// It is an alias for exec.ErrNotFound. 
+var ErrNotFound = exec.ErrNotFound + +// Cmd represents an external command being prepared or run. +// It is an alias for exec.Cmd. +type Cmd = exec.Cmd + +// Error is returned by LookPath when it fails to classify a file as an executable. +// It is an alias for exec.Error. +type Error = exec.Error + +// An ExitError reports an unsuccessful exit by a command. +// It is an alias for exec.ExitError. +type ExitError = exec.ExitError + +func relError(file, path string) error { + return fmt.Errorf("%s resolves to executable in current directory (.%c%s)", file, filepath.Separator, path) +} + +// LookPath searches for an executable named file in the directories +// named by the PATH environment variable. If file contains a slash, +// it is tried directly and the PATH is not consulted. The result will be +// an absolute path. +// +// LookPath differs from exec.LookPath in its handling of PATH lookups, +// which are used for file names without slashes. If exec.LookPath's +// PATH lookup would have returned an executable from the current directory, +// LookPath instead returns an error. +func LookPath(file string) (string, error) { + path, err := exec.LookPath(file) + if err != nil && !isGo119ErrDot(err) { + return "", err + } + if filepath.Base(file) == file && !filepath.IsAbs(path) { + return "", relError(file, path) + } + return path, nil +} + +func fixCmd(name string, cmd *exec.Cmd) { + if filepath.Base(name) == name && !filepath.IsAbs(cmd.Path) && !isGo119ErrFieldSet(cmd) { + // exec.Command was called with a bare binary name and + // exec.LookPath returned a path which is not absolute. + // Set cmd.lookPathErr and clear cmd.Path so that it + // cannot be run. + lookPathErr := (*error)(unsafe.Pointer(reflect.ValueOf(cmd).Elem().FieldByName("lookPathErr").Addr().Pointer())) + if *lookPathErr == nil { + *lookPathErr = relError(name, cmd.Path) + } + cmd.Path = "" + } +} + +// CommandContext is like Command but includes a context. 
+// +// The provided context is used to kill the process (by calling os.Process.Kill) +// if the context becomes done before the command completes on its own. +func CommandContext(ctx context.Context, name string, arg ...string) *exec.Cmd { + cmd := exec.CommandContext(ctx, name, arg...) + fixCmd(name, cmd) + return cmd + +} + +// Command returns the Cmd struct to execute the named program with the given arguments. +// See exec.Command for most details. +// +// Command differs from exec.Command in its handling of PATH lookups, +// which are used when the program name contains no slashes. +// If exec.Command would have returned an exec.Cmd configured to run an +// executable from the current directory, Command instead +// returns an exec.Cmd that will return an error from Start or Run. +func Command(name string, arg ...string) *exec.Cmd { + cmd := exec.Command(name, arg...) + fixCmd(name, cmd) + return cmd +} diff --git a/cluster-autoscaler/vendor/golang.org/x/sys/execabs/execabs_go118.go b/cluster-autoscaler/vendor/golang.org/x/sys/execabs/execabs_go118.go new file mode 100644 index 000000000000..2000064a8124 --- /dev/null +++ b/cluster-autoscaler/vendor/golang.org/x/sys/execabs/execabs_go118.go @@ -0,0 +1,18 @@ +// Copyright 2022 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +//go:build !go1.19 +// +build !go1.19 + +package execabs + +import "os/exec" + +func isGo119ErrDot(err error) bool { + return false +} + +func isGo119ErrFieldSet(cmd *exec.Cmd) bool { + return false +} diff --git a/cluster-autoscaler/vendor/golang.org/x/sys/execabs/execabs_go119.go b/cluster-autoscaler/vendor/golang.org/x/sys/execabs/execabs_go119.go new file mode 100644 index 000000000000..f364b3418926 --- /dev/null +++ b/cluster-autoscaler/vendor/golang.org/x/sys/execabs/execabs_go119.go @@ -0,0 +1,21 @@ +// Copyright 2022 The Go Authors. All rights reserved. 
+// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +//go:build go1.19 +// +build go1.19 + +package execabs + +import ( + "errors" + "os/exec" +) + +func isGo119ErrDot(err error) bool { + return errors.Is(err, exec.ErrDot) +} + +func isGo119ErrFieldSet(cmd *exec.Cmd) bool { + return cmd.Err != nil +} diff --git a/cluster-autoscaler/vendor/golang.org/x/tools/cmd/stringer/stringer.go b/cluster-autoscaler/vendor/golang.org/x/tools/cmd/stringer/stringer.go new file mode 100644 index 000000000000..998d1a51bfd0 --- /dev/null +++ b/cluster-autoscaler/vendor/golang.org/x/tools/cmd/stringer/stringer.go @@ -0,0 +1,657 @@ +// Copyright 2014 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// Stringer is a tool to automate the creation of methods that satisfy the fmt.Stringer +// interface. Given the name of a (signed or unsigned) integer type T that has constants +// defined, stringer will create a new self-contained Go source file implementing +// +// func (t T) String() string +// +// The file is created in the same package and directory as the package that defines T. +// It has helpful defaults designed for use with go generate. +// +// Stringer works best with constants that are consecutive values such as created using iota, +// but creates good code regardless. In the future it might also provide custom support for +// constant sets that are bit patterns. 
+// +// For example, given this snippet, +// +// package painkiller +// +// type Pill int +// +// const ( +// Placebo Pill = iota +// Aspirin +// Ibuprofen +// Paracetamol +// Acetaminophen = Paracetamol +// ) +// +// running this command +// +// stringer -type=Pill +// +// in the same directory will create the file pill_string.go, in package painkiller, +// containing a definition of +// +// func (Pill) String() string +// +// That method will translate the value of a Pill constant to the string representation +// of the respective constant name, so that the call fmt.Print(painkiller.Aspirin) will +// print the string "Aspirin". +// +// Typically this process would be run using go generate, like this: +// +// //go:generate stringer -type=Pill +// +// If multiple constants have the same value, the lexically first matching name will +// be used (in the example, Acetaminophen will print as "Paracetamol"). +// +// With no arguments, it processes the package in the current directory. +// Otherwise, the arguments must name a single directory holding a Go package +// or a set of Go source files that represent a single Go package. +// +// The -type flag accepts a comma-separated list of types so a single run can +// generate methods for multiple types. The default output file is t_string.go, +// where t is the lower-cased name of the first type listed. It can be overridden +// with the -output flag. +// +// The -linecomment flag tells stringer to generate the text of any line comment, trimmed +// of leading spaces, instead of the constant name. For instance, if the constants above had a +// Pill prefix, one could write +// +// PillAspirin // Aspirin +// +// to suppress it in the output. 
+package main // import "golang.org/x/tools/cmd/stringer" + +import ( + "bytes" + "flag" + "fmt" + "go/ast" + "go/constant" + "go/format" + "go/token" + "go/types" + "log" + "os" + "path/filepath" + "sort" + "strings" + + "golang.org/x/tools/go/packages" +) + +var ( + typeNames = flag.String("type", "", "comma-separated list of type names; must be set") + output = flag.String("output", "", "output file name; default srcdir/_string.go") + trimprefix = flag.String("trimprefix", "", "trim the `prefix` from the generated constant names") + linecomment = flag.Bool("linecomment", false, "use line comment text as printed text when present") + buildTags = flag.String("tags", "", "comma-separated list of build tags to apply") +) + +// Usage is a replacement usage function for the flags package. +func Usage() { + fmt.Fprintf(os.Stderr, "Usage of stringer:\n") + fmt.Fprintf(os.Stderr, "\tstringer [flags] -type T [directory]\n") + fmt.Fprintf(os.Stderr, "\tstringer [flags] -type T files... # Must be a single package\n") + fmt.Fprintf(os.Stderr, "For more information, see:\n") + fmt.Fprintf(os.Stderr, "\thttps://pkg.go.dev/golang.org/x/tools/cmd/stringer\n") + fmt.Fprintf(os.Stderr, "Flags:\n") + flag.PrintDefaults() +} + +func main() { + log.SetFlags(0) + log.SetPrefix("stringer: ") + flag.Usage = Usage + flag.Parse() + if len(*typeNames) == 0 { + flag.Usage() + os.Exit(2) + } + types := strings.Split(*typeNames, ",") + var tags []string + if len(*buildTags) > 0 { + tags = strings.Split(*buildTags, ",") + } + + // We accept either one directory or a list of files. Which do we have? + args := flag.Args() + if len(args) == 0 { + // Default: process whole package in current directory. + args = []string{"."} + } + + // Parse the package once. + var dir string + g := Generator{ + trimPrefix: *trimprefix, + lineComment: *linecomment, + } + // TODO(suzmue): accept other patterns for packages (directories, list of files, import paths, etc). 
+ if len(args) == 1 && isDirectory(args[0]) { + dir = args[0] + } else { + if len(tags) != 0 { + log.Fatal("-tags option applies only to directories, not when files are specified") + } + dir = filepath.Dir(args[0]) + } + + g.parsePackage(args, tags) + + // Print the header and package clause. + g.Printf("// Code generated by \"stringer %s\"; DO NOT EDIT.\n", strings.Join(os.Args[1:], " ")) + g.Printf("\n") + g.Printf("package %s", g.pkg.name) + g.Printf("\n") + g.Printf("import \"strconv\"\n") // Used by all methods. + + // Run generate for each type. + for _, typeName := range types { + g.generate(typeName) + } + + // Format the output. + src := g.format() + + // Write to file. + outputName := *output + if outputName == "" { + baseName := fmt.Sprintf("%s_string.go", types[0]) + outputName = filepath.Join(dir, strings.ToLower(baseName)) + } + err := os.WriteFile(outputName, src, 0644) + if err != nil { + log.Fatalf("writing output: %s", err) + } +} + +// isDirectory reports whether the named file is a directory. +func isDirectory(name string) bool { + info, err := os.Stat(name) + if err != nil { + log.Fatal(err) + } + return info.IsDir() +} + +// Generator holds the state of the analysis. Primarily used to buffer +// the output for format.Source. +type Generator struct { + buf bytes.Buffer // Accumulated output. + pkg *Package // Package we are scanning. + + trimPrefix string + lineComment bool +} + +func (g *Generator) Printf(format string, args ...interface{}) { + fmt.Fprintf(&g.buf, format, args...) +} + +// File holds a single parsed file and associated data. +type File struct { + pkg *Package // Package to which this file belongs. + file *ast.File // Parsed AST. + // These fields are reset for each type being generated. + typeName string // Name of the constant type. + values []Value // Accumulator for constant values of that type. 
+ + trimPrefix string + lineComment bool +} + +type Package struct { + name string + defs map[*ast.Ident]types.Object + files []*File +} + +// parsePackage analyzes the single package constructed from the patterns and tags. +// parsePackage exits if there is an error. +func (g *Generator) parsePackage(patterns []string, tags []string) { + cfg := &packages.Config{ + Mode: packages.NeedName | packages.NeedTypes | packages.NeedTypesInfo | packages.NeedSyntax, + // TODO: Need to think about constants in test files. Maybe write type_string_test.go + // in a separate pass? For later. + Tests: false, + BuildFlags: []string{fmt.Sprintf("-tags=%s", strings.Join(tags, " "))}, + } + pkgs, err := packages.Load(cfg, patterns...) + if err != nil { + log.Fatal(err) + } + if len(pkgs) != 1 { + log.Fatalf("error: %d packages found", len(pkgs)) + } + g.addPackage(pkgs[0]) +} + +// addPackage adds a type checked Package and its syntax files to the generator. +func (g *Generator) addPackage(pkg *packages.Package) { + g.pkg = &Package{ + name: pkg.Name, + defs: pkg.TypesInfo.Defs, + files: make([]*File, len(pkg.Syntax)), + } + + for i, file := range pkg.Syntax { + g.pkg.files[i] = &File{ + file: file, + pkg: g.pkg, + trimPrefix: g.trimPrefix, + lineComment: g.lineComment, + } + } +} + +// generate produces the String method for the named type. +func (g *Generator) generate(typeName string) { + values := make([]Value, 0, 100) + for _, file := range g.pkg.files { + // Set the state for this run of the walker. + file.typeName = typeName + file.values = nil + if file.file != nil { + ast.Inspect(file.file, file.genDecl) + values = append(values, file.values...) + } + } + + if len(values) == 0 { + log.Fatalf("no values defined for type %s", typeName) + } + // Generate code that will fail if the constants change value. 
+ g.Printf("func _() {\n") + g.Printf("\t// An \"invalid array index\" compiler error signifies that the constant values have changed.\n") + g.Printf("\t// Re-run the stringer command to generate them again.\n") + g.Printf("\tvar x [1]struct{}\n") + for _, v := range values { + g.Printf("\t_ = x[%s - %s]\n", v.originalName, v.str) + } + g.Printf("}\n") + runs := splitIntoRuns(values) + // The decision of which pattern to use depends on the number of + // runs in the numbers. If there's only one, it's easy. For more than + // one, there's a tradeoff between complexity and size of the data + // and code vs. the simplicity of a map. A map takes more space, + // but so does the code. The decision here (crossover at 10) is + // arbitrary, but considers that for large numbers of runs the cost + // of the linear scan in the switch might become important, and + // rather than use yet another algorithm such as binary search, + // we punt and use a map. In any case, the likelihood of a map + // being necessary for any realistic example other than bitmasks + // is very low. And bitmasks probably deserve their own analysis, + // to be done some other day. + switch { + case len(runs) == 1: + g.buildOneRun(runs, typeName) + case len(runs) <= 10: + g.buildMultipleRuns(runs, typeName) + default: + g.buildMap(runs, typeName) + } +} + +// splitIntoRuns breaks the values into runs of contiguous sequences. +// For example, given 1,2,3,5,6,7 it returns {1,2,3},{5,6,7}. +// The input slice is known to be non-empty. +func splitIntoRuns(values []Value) [][]Value { + // We use stable sort so the lexically first name is chosen for equal elements. + sort.Stable(byValue(values)) + // Remove duplicates. Stable sort has put the one we want to print first, + // so use that one. The String method won't care about which named constant + // was the argument, so the first name for the given value is the only one to keep. 
+ // We need to do this because identical values would cause the switch or map + // to fail to compile. + j := 1 + for i := 1; i < len(values); i++ { + if values[i].value != values[i-1].value { + values[j] = values[i] + j++ + } + } + values = values[:j] + runs := make([][]Value, 0, 10) + for len(values) > 0 { + // One contiguous sequence per outer loop. + i := 1 + for i < len(values) && values[i].value == values[i-1].value+1 { + i++ + } + runs = append(runs, values[:i]) + values = values[i:] + } + return runs +} + +// format returns the gofmt-ed contents of the Generator's buffer. +func (g *Generator) format() []byte { + src, err := format.Source(g.buf.Bytes()) + if err != nil { + // Should never happen, but can arise when developing this code. + // The user can compile the output to see the error. + log.Printf("warning: internal error: invalid Go generated: %s", err) + log.Printf("warning: compile the package to analyze the error") + return g.buf.Bytes() + } + return src +} + +// Value represents a declared constant. +type Value struct { + originalName string // The name of the constant. + name string // The name with trimmed prefix. + // The value is stored as a bit pattern alone. The boolean tells us + // whether to interpret it as an int64 or a uint64; the only place + // this matters is when sorting. + // Much of the time the str field is all we need; it is printed + // by Value.String. + value uint64 // Will be converted to int64 when needed. + signed bool // Whether the constant is a signed type. + str string // The string representation given by the "go/constant" package. +} + +func (v *Value) String() string { + return v.str +} + +// byValue lets us sort the constants into increasing order. +// We take care in the Less method to sort in signed or unsigned order, +// as appropriate. 
+type byValue []Value + +func (b byValue) Len() int { return len(b) } +func (b byValue) Swap(i, j int) { b[i], b[j] = b[j], b[i] } +func (b byValue) Less(i, j int) bool { + if b[i].signed { + return int64(b[i].value) < int64(b[j].value) + } + return b[i].value < b[j].value +} + +// genDecl processes one declaration clause. +func (f *File) genDecl(node ast.Node) bool { + decl, ok := node.(*ast.GenDecl) + if !ok || decl.Tok != token.CONST { + // We only care about const declarations. + return true + } + // The name of the type of the constants we are declaring. + // Can change if this is a multi-element declaration. + typ := "" + // Loop over the elements of the declaration. Each element is a ValueSpec: + // a list of names possibly followed by a type, possibly followed by values. + // If the type and value are both missing, we carry down the type (and value, + // but the "go/types" package takes care of that). + for _, spec := range decl.Specs { + vspec := spec.(*ast.ValueSpec) // Guaranteed to succeed as this is CONST. + if vspec.Type == nil && len(vspec.Values) > 0 { + // "X = 1". With no type but a value. If the constant is untyped, + // skip this vspec and reset the remembered type. + typ = "" + + // If this is a simple type conversion, remember the type. + // We don't mind if this is actually a call; a qualified call won't + // be matched (that will be SelectorExpr, not Ident), and only unusual + // situations will result in a function call that appears to be + // a type conversion. + ce, ok := vspec.Values[0].(*ast.CallExpr) + if !ok { + continue + } + id, ok := ce.Fun.(*ast.Ident) + if !ok { + continue + } + typ = id.Name + } + if vspec.Type != nil { + // "X T". We have a type. Remember it. + ident, ok := vspec.Type.(*ast.Ident) + if !ok { + continue + } + typ = ident.Name + } + if typ != f.typeName { + // This is not the type we're looking for. 
+ continue + } + // We now have a list of names (from one line of source code) all being + // declared with the desired type. + // Grab their names and actual values and store them in f.values. + for _, name := range vspec.Names { + if name.Name == "_" { + continue + } + // This dance lets the type checker find the values for us. It's a + // bit tricky: look up the object declared by the name, find its + // types.Const, and extract its value. + obj, ok := f.pkg.defs[name] + if !ok { + log.Fatalf("no value for constant %s", name) + } + info := obj.Type().Underlying().(*types.Basic).Info() + if info&types.IsInteger == 0 { + log.Fatalf("can't handle non-integer constant type %s", typ) + } + value := obj.(*types.Const).Val() // Guaranteed to succeed as this is CONST. + if value.Kind() != constant.Int { + log.Fatalf("can't happen: constant is not an integer %s", name) + } + i64, isInt := constant.Int64Val(value) + u64, isUint := constant.Uint64Val(value) + if !isInt && !isUint { + log.Fatalf("internal error: value of %s is not an integer: %s", name, value.String()) + } + if !isInt { + u64 = uint64(i64) + } + v := Value{ + originalName: name.Name, + value: u64, + signed: info&types.IsUnsigned == 0, + str: value.String(), + } + if c := vspec.Comment; f.lineComment && c != nil && len(c.List) == 1 { + v.name = strings.TrimSpace(c.Text()) + } else { + v.name = strings.TrimPrefix(v.originalName, f.trimPrefix) + } + f.values = append(f.values, v) + } + } + return false +} + +// Helpers + +// usize returns the number of bits of the smallest unsigned integer +// type that will hold n. Used to create the smallest possible slice of +// integers to use as indexes into the concatenated strings. +func usize(n int) int { + switch { + case n < 1<<8: + return 8 + case n < 1<<16: + return 16 + default: + // 2^32 is enough constants for anyone. + return 32 + } +} + +// declareIndexAndNameVars declares the index slices and concatenated names +// strings representing the runs of values. 
+func (g *Generator) declareIndexAndNameVars(runs [][]Value, typeName string) { + var indexes, names []string + for i, run := range runs { + index, name := g.createIndexAndNameDecl(run, typeName, fmt.Sprintf("_%d", i)) + if len(run) != 1 { + indexes = append(indexes, index) + } + names = append(names, name) + } + g.Printf("const (\n") + for _, name := range names { + g.Printf("\t%s\n", name) + } + g.Printf(")\n\n") + + if len(indexes) > 0 { + g.Printf("var (") + for _, index := range indexes { + g.Printf("\t%s\n", index) + } + g.Printf(")\n\n") + } +} + +// declareIndexAndNameVar is the single-run version of declareIndexAndNameVars +func (g *Generator) declareIndexAndNameVar(run []Value, typeName string) { + index, name := g.createIndexAndNameDecl(run, typeName, "") + g.Printf("const %s\n", name) + g.Printf("var %s\n", index) +} + +// createIndexAndNameDecl returns the pair of declarations for the run. The caller will add "const" and "var". +func (g *Generator) createIndexAndNameDecl(run []Value, typeName string, suffix string) (string, string) { + b := new(bytes.Buffer) + indexes := make([]int, len(run)) + for i := range run { + b.WriteString(run[i].name) + indexes[i] = b.Len() + } + nameConst := fmt.Sprintf("_%s_name%s = %q", typeName, suffix, b.String()) + nameLen := b.Len() + b.Reset() + fmt.Fprintf(b, "_%s_index%s = [...]uint%d{0, ", typeName, suffix, usize(nameLen)) + for i, v := range indexes { + if i > 0 { + fmt.Fprintf(b, ", ") + } + fmt.Fprintf(b, "%d", v) + } + fmt.Fprintf(b, "}") + return b.String(), nameConst +} + +// declareNameVars declares the concatenated names string representing all the values in the runs. 
+func (g *Generator) declareNameVars(runs [][]Value, typeName string, suffix string) { + g.Printf("const _%s_name%s = \"", typeName, suffix) + for _, run := range runs { + for i := range run { + g.Printf("%s", run[i].name) + } + } + g.Printf("\"\n") +} + +// buildOneRun generates the variables and String method for a single run of contiguous values. +func (g *Generator) buildOneRun(runs [][]Value, typeName string) { + values := runs[0] + g.Printf("\n") + g.declareIndexAndNameVar(values, typeName) + // The generated code is simple enough to write as a Printf format. + lessThanZero := "" + if values[0].signed { + lessThanZero = "i < 0 || " + } + if values[0].value == 0 { // Signed or unsigned, 0 is still 0. + g.Printf(stringOneRun, typeName, usize(len(values)), lessThanZero) + } else { + g.Printf(stringOneRunWithOffset, typeName, values[0].String(), usize(len(values)), lessThanZero) + } +} + +// Arguments to format are: +// +// [1]: type name +// [2]: size of index element (8 for uint8 etc.) +// [3]: less than zero check (for signed types) +const stringOneRun = `func (i %[1]s) String() string { + if %[3]si >= %[1]s(len(_%[1]s_index)-1) { + return "%[1]s(" + strconv.FormatInt(int64(i), 10) + ")" + } + return _%[1]s_name[_%[1]s_index[i]:_%[1]s_index[i+1]] +} +` + +// Arguments to format are: +// [1]: type name +// [2]: lowest defined value for type, as a string +// [3]: size of index element (8 for uint8 etc.) +// [4]: less than zero check (for signed types) +/* + */ +const stringOneRunWithOffset = `func (i %[1]s) String() string { + i -= %[2]s + if %[4]si >= %[1]s(len(_%[1]s_index)-1) { + return "%[1]s(" + strconv.FormatInt(int64(i + %[2]s), 10) + ")" + } + return _%[1]s_name[_%[1]s_index[i] : _%[1]s_index[i+1]] +} +` + +// buildMultipleRuns generates the variables and String method for multiple runs of contiguous values. +// For this pattern, a single Printf format won't do. 
+func (g *Generator) buildMultipleRuns(runs [][]Value, typeName string) { + g.Printf("\n") + g.declareIndexAndNameVars(runs, typeName) + g.Printf("func (i %s) String() string {\n", typeName) + g.Printf("\tswitch {\n") + for i, values := range runs { + if len(values) == 1 { + g.Printf("\tcase i == %s:\n", &values[0]) + g.Printf("\t\treturn _%s_name_%d\n", typeName, i) + continue + } + if values[0].value == 0 && !values[0].signed { + // For an unsigned lower bound of 0, "0 <= i" would be redundant. + g.Printf("\tcase i <= %s:\n", &values[len(values)-1]) + } else { + g.Printf("\tcase %s <= i && i <= %s:\n", &values[0], &values[len(values)-1]) + } + if values[0].value != 0 { + g.Printf("\t\ti -= %s\n", &values[0]) + } + g.Printf("\t\treturn _%s_name_%d[_%s_index_%d[i]:_%s_index_%d[i+1]]\n", + typeName, i, typeName, i, typeName, i) + } + g.Printf("\tdefault:\n") + g.Printf("\t\treturn \"%s(\" + strconv.FormatInt(int64(i), 10) + \")\"\n", typeName) + g.Printf("\t}\n") + g.Printf("}\n") +} + +// buildMap handles the case where the space is so sparse a map is a reasonable fallback. +// It's a rare situation but has simple code. +func (g *Generator) buildMap(runs [][]Value, typeName string) { + g.Printf("\n") + g.declareNameVars(runs, typeName, "") + g.Printf("\nvar _%s_map = map[%s]string{\n", typeName, typeName) + n := 0 + for _, values := range runs { + for _, value := range values { + g.Printf("\t%s: _%s_name[%d:%d],\n", &value, typeName, n, n+len(value.name)) + n += len(value.name) + } + } + g.Printf("}\n\n") + g.Printf(stringMap, typeName) +} + +// Argument to format is the type name. 
+const stringMap = `func (i %[1]s) String() string { + if str, ok := _%[1]s_map[i]; ok { + return str + } + return "%[1]s(" + strconv.FormatInt(int64(i), 10) + ")" +} +` diff --git a/cluster-autoscaler/vendor/golang.org/x/tools/go/gcexportdata/gcexportdata.go b/cluster-autoscaler/vendor/golang.org/x/tools/go/gcexportdata/gcexportdata.go new file mode 100644 index 000000000000..03543bd4bb8f --- /dev/null +++ b/cluster-autoscaler/vendor/golang.org/x/tools/go/gcexportdata/gcexportdata.go @@ -0,0 +1,186 @@ +// Copyright 2016 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// Package gcexportdata provides functions for locating, reading, and +// writing export data files containing type information produced by the +// gc compiler. This package supports go1.7 export data format and all +// later versions. +// +// Although it might seem convenient for this package to live alongside +// go/types in the standard library, this would cause version skew +// problems for developer tools that use it, since they must be able to +// consume the outputs of the gc compiler both before and after a Go +// update such as from Go 1.7 to Go 1.8. Because this package lives in +// golang.org/x/tools, sites can update their version of this repo some +// time before the Go 1.8 release and rebuild and redeploy their +// developer tools, which will then be able to consume both Go 1.7 and +// Go 1.8 export data files, so they will work before and after the +// Go update. (See discussion at https://golang.org/issue/15651.) +package gcexportdata // import "golang.org/x/tools/go/gcexportdata" + +import ( + "bufio" + "bytes" + "encoding/json" + "fmt" + "go/token" + "go/types" + "io" + "os/exec" + + "golang.org/x/tools/internal/gcimporter" +) + +// Find returns the name of an object (.o) or archive (.a) file +// containing type information for the specified import path, +// using the go command. 
+// If no file was found, an empty filename is returned. +// +// A relative srcDir is interpreted relative to the current working directory. +// +// Find also returns the package's resolved (canonical) import path, +// reflecting the effects of srcDir and vendoring on importPath. +// +// Deprecated: Use the higher-level API in golang.org/x/tools/go/packages, +// which is more efficient. +func Find(importPath, srcDir string) (filename, path string) { + cmd := exec.Command("go", "list", "-json", "-export", "--", importPath) + cmd.Dir = srcDir + out, err := cmd.CombinedOutput() + if err != nil { + return "", "" + } + var data struct { + ImportPath string + Export string + } + json.Unmarshal(out, &data) + return data.Export, data.ImportPath +} + +// NewReader returns a reader for the export data section of an object +// (.o) or archive (.a) file read from r. The new reader may provide +// additional trailing data beyond the end of the export data. +func NewReader(r io.Reader) (io.Reader, error) { + buf := bufio.NewReader(r) + _, size, err := gcimporter.FindExportData(buf) + if err != nil { + return nil, err + } + + if size >= 0 { + // We were given an archive and found the __.PKGDEF in it. + // This tells us the size of the export data, and we don't + // need to return the entire file. + return &io.LimitedReader{ + R: buf, + N: size, + }, nil + } else { + // We were given an object file. As such, we don't know how large + // the export data is and must return the entire file. + return buf, nil + } +} + +// readAll works the same way as io.ReadAll, but avoids allocations and copies +// by preallocating a byte slice of the necessary size if the size is known up +// front. This is always possible when the input is an archive. In that case, +// NewReader will return the known size using an io.LimitedReader. 
+func readAll(r io.Reader) ([]byte, error) { + if lr, ok := r.(*io.LimitedReader); ok { + data := make([]byte, lr.N) + _, err := io.ReadFull(lr, data) + return data, err + } + return io.ReadAll(r) +} + +// Read reads export data from in, decodes it, and returns type +// information for the package. +// +// The package path (effectively its linker symbol prefix) is +// specified by path, since unlike the package name, this information +// may not be recorded in the export data. +// +// File position information is added to fset. +// +// Read may inspect and add to the imports map to ensure that references +// within the export data to other packages are consistent. The caller +// must ensure that imports[path] does not exist, or exists but is +// incomplete (see types.Package.Complete), and Read inserts the +// resulting package into this map entry. +// +// On return, the state of the reader is undefined. +func Read(in io.Reader, fset *token.FileSet, imports map[string]*types.Package, path string) (*types.Package, error) { + data, err := readAll(in) + if err != nil { + return nil, fmt.Errorf("reading export data for %q: %v", path, err) + } + + if bytes.HasPrefix(data, []byte("!")) { + return nil, fmt.Errorf("can't read export data for %q directly from an archive file (call gcexportdata.NewReader first to extract export data)", path) + } + + // The indexed export format starts with an 'i'; the older + // binary export format starts with a 'c', 'd', or 'v' + // (from "version"). Select appropriate importer. 
+ if len(data) > 0 { + switch data[0] { + case 'v', 'c', 'd': // binary, till go1.10 + return nil, fmt.Errorf("binary (%c) import format is no longer supported", data[0]) + + case 'i': // indexed, till go1.19 + _, pkg, err := gcimporter.IImportData(fset, imports, data[1:], path) + return pkg, err + + case 'u': // unified, from go1.20 + _, pkg, err := gcimporter.UImportData(fset, imports, data[1:], path) + return pkg, err + + default: + l := len(data) + if l > 10 { + l = 10 + } + return nil, fmt.Errorf("unexpected export data with prefix %q for path %s", string(data[:l]), path) + } + } + return nil, fmt.Errorf("empty export data for %s", path) +} + +// Write writes encoded type information for the specified package to out. +// The FileSet provides file position information for named objects. +func Write(out io.Writer, fset *token.FileSet, pkg *types.Package) error { + if _, err := io.WriteString(out, "i"); err != nil { + return err + } + return gcimporter.IExportData(out, fset, pkg) +} + +// ReadBundle reads an export bundle from in, decodes it, and returns type +// information for the packages. +// File position information is added to fset. +// +// ReadBundle may inspect and add to the imports map to ensure that references +// within the export bundle to other packages are consistent. +// +// On return, the state of the reader is undefined. +// +// Experimental: This API is experimental and may change in the future. +func ReadBundle(in io.Reader, fset *token.FileSet, imports map[string]*types.Package) ([]*types.Package, error) { + data, err := readAll(in) + if err != nil { + return nil, fmt.Errorf("reading export bundle: %v", err) + } + return gcimporter.IImportBundle(fset, imports, data) +} + +// WriteBundle writes encoded type information for the specified packages to out. +// The FileSet provides file position information for named objects. +// +// Experimental: This API is experimental and may change in the future. 
+func WriteBundle(out io.Writer, fset *token.FileSet, pkgs []*types.Package) error { + return gcimporter.IExportBundle(out, fset, pkgs) +} diff --git a/cluster-autoscaler/vendor/golang.org/x/tools/go/gcexportdata/importer.go b/cluster-autoscaler/vendor/golang.org/x/tools/go/gcexportdata/importer.go new file mode 100644 index 000000000000..37a7247e2686 --- /dev/null +++ b/cluster-autoscaler/vendor/golang.org/x/tools/go/gcexportdata/importer.go @@ -0,0 +1,75 @@ +// Copyright 2016 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package gcexportdata + +import ( + "fmt" + "go/token" + "go/types" + "os" +) + +// NewImporter returns a new instance of the types.Importer interface +// that reads type information from export data files written by gc. +// The Importer also satisfies types.ImporterFrom. +// +// Export data files are located using "go build" workspace conventions +// and the build.Default context. +// +// Use this importer instead of go/importer.For("gc", ...) to avoid the +// version-skew problems described in the documentation of this package, +// or to control the FileSet or access the imports map populated during +// package loading. +// +// Deprecated: Use the higher-level API in golang.org/x/tools/go/packages, +// which is more efficient. +func NewImporter(fset *token.FileSet, imports map[string]*types.Package) types.ImporterFrom { + return importer{fset, imports} +} + +type importer struct { + fset *token.FileSet + imports map[string]*types.Package +} + +func (imp importer) Import(importPath string) (*types.Package, error) { + return imp.ImportFrom(importPath, "", 0) +} + +func (imp importer) ImportFrom(importPath, srcDir string, mode types.ImportMode) (_ *types.Package, err error) { + filename, path := Find(importPath, srcDir) + if filename == "" { + if importPath == "unsafe" { + // Even for unsafe, call Find first in case + // the package was vendored. 
+ return types.Unsafe, nil + } + return nil, fmt.Errorf("can't find import: %s", importPath) + } + + if pkg, ok := imp.imports[path]; ok && pkg.Complete() { + return pkg, nil // cache hit + } + + // open file + f, err := os.Open(filename) + if err != nil { + return nil, err + } + defer func() { + f.Close() + if err != nil { + // add file name to error + err = fmt.Errorf("reading export data: %s: %v", filename, err) + } + }() + + r, err := NewReader(f) + if err != nil { + return nil, err + } + + return Read(r, imp.fset, imp.imports, path) +} diff --git a/cluster-autoscaler/vendor/golang.org/x/tools/go/internal/packagesdriver/sizes.go b/cluster-autoscaler/vendor/golang.org/x/tools/go/internal/packagesdriver/sizes.go new file mode 100644 index 000000000000..18a002f82a1f --- /dev/null +++ b/cluster-autoscaler/vendor/golang.org/x/tools/go/internal/packagesdriver/sizes.go @@ -0,0 +1,49 @@ +// Copyright 2018 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// Package packagesdriver fetches type sizes for go/packages and go/analysis. +package packagesdriver + +import ( + "context" + "fmt" + "go/types" + "strings" + + "golang.org/x/tools/internal/gocommand" +) + +var debug = false + +func GetSizesGolist(ctx context.Context, inv gocommand.Invocation, gocmdRunner *gocommand.Runner) (types.Sizes, error) { + inv.Verb = "list" + inv.Args = []string{"-f", "{{context.GOARCH}} {{context.Compiler}}", "--", "unsafe"} + stdout, stderr, friendlyErr, rawErr := gocmdRunner.RunRaw(ctx, inv) + var goarch, compiler string + if rawErr != nil { + if rawErrMsg := rawErr.Error(); strings.Contains(rawErrMsg, "cannot find main module") || strings.Contains(rawErrMsg, "go.mod file not found") { + // User's running outside of a module. All bets are off. Get GOARCH and guess compiler is gc. + // TODO(matloob): Is this a problem in practice? 
+			inv.Verb = "env"
+			inv.Args = []string{"GOARCH"}
+			envout, enverr := gocmdRunner.Run(ctx, inv)
+			if enverr != nil {
+				return nil, enverr
+			}
+			goarch = strings.TrimSpace(envout.String())
+			compiler = "gc"
+		} else {
+			return nil, friendlyErr
+		}
+	} else {
+		fields := strings.Fields(stdout.String())
+		if len(fields) < 2 {
+			return nil, fmt.Errorf("could not parse GOARCH and Go compiler in format \"<GOARCH> <compiler>\":\nstdout: <<%s>>\nstderr: <<%s>>",
+				stdout.String(), stderr.String())
+		}
+		goarch = fields[0]
+		compiler = fields[1]
+	}
+	return types.SizesFor(compiler, goarch), nil
+}
diff --git a/cluster-autoscaler/vendor/golang.org/x/tools/go/packages/doc.go b/cluster-autoscaler/vendor/golang.org/x/tools/go/packages/doc.go
new file mode 100644
index 000000000000..da4ab89fe63f
--- /dev/null
+++ b/cluster-autoscaler/vendor/golang.org/x/tools/go/packages/doc.go
@@ -0,0 +1,220 @@
+// Copyright 2018 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+/*
+Package packages loads Go packages for inspection and analysis.
+
+The Load function takes as input a list of patterns and returns a list of Package
+structs describing individual packages matched by those patterns.
+The LoadMode controls the amount of detail in the loaded packages.
+
+Load passes most patterns directly to the underlying build tool,
+but all patterns with the prefix "query=", where query is a
+non-empty string of letters from [a-z], are reserved and may be
+interpreted as query operators.
+
+Two query operators are currently supported: "file" and "pattern".
+
+The query "file=path/to/file.go" matches the package or packages enclosing
+the Go source file path/to/file.go. For example "file=~/go/src/fmt/print.go"
+might return the packages "fmt" and "fmt [fmt.test]".
+
+The query "pattern=string" causes "string" to be passed directly to
+the underlying build tool.
In most cases this is unnecessary, +but an application can use Load("pattern=" + x) as an escaping mechanism +to ensure that x is not interpreted as a query operator if it contains '='. + +All other query operators are reserved for future use and currently +cause Load to report an error. + +The Package struct provides basic information about the package, including + + - ID, a unique identifier for the package in the returned set; + - GoFiles, the names of the package's Go source files; + - Imports, a map from source import strings to the Packages they name; + - Types, the type information for the package's exported symbols; + - Syntax, the parsed syntax trees for the package's source code; and + - TypeInfo, the result of a complete type-check of the package syntax trees. + +(See the documentation for type Package for the complete list of fields +and more detailed descriptions.) + +For example, + + Load(nil, "bytes", "unicode...") + +returns four Package structs describing the standard library packages +bytes, unicode, unicode/utf16, and unicode/utf8. Note that one pattern +can match multiple packages and that a package might be matched by +multiple patterns: in general it is not possible to determine which +packages correspond to which patterns. + +Note that the list returned by Load contains only the packages matched +by the patterns. Their dependencies can be found by walking the import +graph using the Imports fields. + +The Load function can be configured by passing a pointer to a Config as +the first argument. A nil Config is equivalent to the zero Config, which +causes Load to run in LoadFiles mode, collecting minimal information. +See the documentation for type Config for details. + +As noted earlier, the Config.Mode controls the amount of detail +reported about the loaded packages. See the documentation for type LoadMode +for details. 
+ +Most tools should pass their command-line arguments (after any flags) +uninterpreted to the loader, so that the loader can interpret them +according to the conventions of the underlying build system. +See the Example function for typical usage. +*/ +package packages // import "golang.org/x/tools/go/packages" + +/* + +Motivation and design considerations + +The new package's design solves problems addressed by two existing +packages: go/build, which locates and describes packages, and +golang.org/x/tools/go/loader, which loads, parses and type-checks them. +The go/build.Package structure encodes too much of the 'go build' way +of organizing projects, leaving us in need of a data type that describes a +package of Go source code independent of the underlying build system. +We wanted something that works equally well with go build and vgo, and +also other build systems such as Bazel and Blaze, making it possible to +construct analysis tools that work in all these environments. +Tools such as errcheck and staticcheck were essentially unavailable to +the Go community at Google, and some of Google's internal tools for Go +are unavailable externally. +This new package provides a uniform way to obtain package metadata by +querying each of these build systems, optionally supporting their +preferred command-line notations for packages, so that tools integrate +neatly with users' build environments. The Metadata query function +executes an external query tool appropriate to the current workspace. + +Loading packages always returns the complete import graph "all the way down", +even if all you want is information about a single package, because the query +mechanisms of all the build systems we currently support ({go,vgo} list, and +blaze/bazel aspect-based query) cannot provide detailed information +about one package without visiting all its dependencies too, so there is +no additional asymptotic cost to providing transitive information. 
+(This property might not be true of a hypothetical 5th build system.) + +In calls to TypeCheck, all initial packages, and any package that +transitively depends on one of them, must be loaded from source. +Consider A->B->C->D->E: if A,C are initial, A,B,C must be loaded from +source; D may be loaded from export data, and E may not be loaded at all +(though it's possible that D's export data mentions it, so a +types.Package may be created for it and exposed.) + +The old loader had a feature to suppress type-checking of function +bodies on a per-package basis, primarily intended to reduce the work of +obtaining type information for imported packages. Now that imports are +satisfied by export data, the optimization no longer seems necessary. + +Despite some early attempts, the old loader did not exploit export data, +instead always using the equivalent of WholeProgram mode. This was due +to the complexity of mixing source and export data packages (now +resolved by the upward traversal mentioned above), and because export data +files were nearly always missing or stale. Now that 'go build' supports +caching, all the underlying build systems can guarantee to produce +export data in a reasonable (amortized) time. + +Test "main" packages synthesized by the build system are now reported as +first-class packages, avoiding the need for clients (such as go/ssa) to +reinvent this generation logic. + +One way in which go/packages is simpler than the old loader is in its +treatment of in-package tests. In-package tests are packages that +consist of all the files of the library under test, plus the test files. +The old loader constructed in-package tests by a two-phase process of +mutation called "augmentation": first it would construct and type check +all the ordinary library packages and type-check the packages that +depend on them; then it would add more (test) files to the package and +type-check again. 
This two-phase approach had four major problems:
+1) in processing the tests, the loader modified the library package,
+   leaving no way for a client application to see both the test
+   package and the library package; one would mutate into the other.
+2) because test files can declare additional methods on types defined in
+   the library portion of the package, the dispatch of method calls in
+   the library portion was affected by the presence of the test files.
+   This should have been a clue that the packages were logically
+   different.
+3) this model of "augmentation" assumed at most one in-package test
+   per library package, which is true of projects using 'go build',
+   but not other build systems.
+4) because of the two-phase nature of test processing, all packages that
+   import the library package had to be processed before augmentation,
+   forcing a "one-shot" API and preventing the client from calling Load
+   several times in sequence as is now possible in WholeProgram mode.
+   (TypeCheck mode has a similar one-shot restriction for a different reason.)
+
+Early drafts of this package supported "multi-shot" operation.
+Although it allowed clients to make a sequence of calls (or concurrent
+calls) to Load, building up the graph of Packages incrementally,
+it was of marginal value: it complicated the API
+(since it allowed some options to vary across calls but not others),
+it complicated the implementation,
+it cannot be made to work in Types mode, as explained above,
+and it was less efficient than making one combined call (when this is possible).
+Among the clients we have inspected, none made multiple calls to Load
+but could not be easily and satisfactorily modified to make only a single call.
+However, application changes may be required.
+For example, the ssadump command loads the user-specified packages
+and in addition the runtime package.
It is tempting to simply append +"runtime" to the user-provided list, but that does not work if the user +specified an ad-hoc package such as [a.go b.go]. +Instead, ssadump no longer requests the runtime package, +but seeks it among the dependencies of the user-specified packages, +and emits an error if it is not found. + +Overlays: The Overlay field in the Config allows providing alternate contents +for Go source files, by providing a mapping from file path to contents. +go/packages will pull in new imports added in overlay files when go/packages +is run in LoadImports mode or greater. +Overlay support for the go list driver isn't complete yet: if the file doesn't +exist on disk, it will only be recognized in an overlay if it is a non-test file +and the package would be reported even without the overlay. + +Questions & Tasks + +- Add GOARCH/GOOS? + They are not portable concepts, but could be made portable. + Our goal has been to allow users to express themselves using the conventions + of the underlying build system: if the build system honors GOARCH + during a build and during a metadata query, then so should + applications built atop that query mechanism. + Conversely, if the target architecture of the build is determined by + command-line flags, the application can pass the relevant + flags through to the build system using a command such as: + myapp -query_flag="--cpu=amd64" -query_flag="--os=darwin" + However, this approach is low-level, unwieldy, and non-portable. + GOOS and GOARCH seem important enough to warrant a dedicated option. + +- How should we handle partial failures such as a mixture of good and + malformed patterns, existing and non-existent packages, successful and + failed builds, import failures, import cycles, and so on, in a call to + Load? + +- Support bazel, blaze, and go1.10 list, not just go1.11 list. + +- Handle (and test) various partial success cases, e.g. 
+ a mixture of good packages and: + invalid patterns + nonexistent packages + empty packages + packages with malformed package or import declarations + unreadable files + import cycles + other parse errors + type errors + Make sure we record errors at the correct place in the graph. + +- Missing packages among initial arguments are not reported. + Return bogus packages for them, like golist does. + +- "undeclared name" errors (for example) are reported out of source file + order. I suspect this is due to the breadth-first resolution now used + by go/types. Is that a bug? Discuss with gri. + +*/ diff --git a/cluster-autoscaler/vendor/golang.org/x/tools/go/packages/external.go b/cluster-autoscaler/vendor/golang.org/x/tools/go/packages/external.go new file mode 100644 index 000000000000..7242a0a7d2be --- /dev/null +++ b/cluster-autoscaler/vendor/golang.org/x/tools/go/packages/external.go @@ -0,0 +1,101 @@ +// Copyright 2018 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// This file enables an external tool to intercept package requests. +// If the tool is present then its results are used in preference to +// the go list command. + +package packages + +import ( + "bytes" + "encoding/json" + "fmt" + exec "golang.org/x/sys/execabs" + "os" + "strings" +) + +// The Driver Protocol +// +// The driver, given the inputs to a call to Load, returns metadata about the packages specified. +// This allows for different build systems to support go/packages by telling go/packages how the +// packages' source is organized. +// The driver is a binary, either specified by the GOPACKAGESDRIVER environment variable or in +// the path as gopackagesdriver. It's given the inputs to load in its argv. See the package +// documentation in doc.go for the full description of the patterns that need to be supported. 
+// A driver receives a JSON-serialized driverRequest struct on standard input and will
+// produce a JSON-serialized driverResponse (see definition in packages.go) on its standard output.

+// driverRequest is used to provide the portion of Load's Config that is needed by a driver.
+type driverRequest struct {
+	Mode LoadMode `json:"mode"`
+	// Env specifies the environment the underlying build system should be run in.
+	Env []string `json:"env"`
+	// BuildFlags are flags that should be passed to the underlying build system.
+	BuildFlags []string `json:"build_flags"`
+	// Tests specifies whether the patterns should also return test packages.
+	Tests bool `json:"tests"`
+	// Overlay maps file paths (relative to the driver's working directory) to the byte contents
+	// of overlay files.
+	Overlay map[string][]byte `json:"overlay"`
+}
+
+// findExternalDriver returns a driver function that uses an external tool
+// to supply the build system package structure, or nil if no such tool is found.
+// If GOPACKAGESDRIVER is set in the environment, findExternalDriver uses its
+// value; otherwise it searches for a binary named gopackagesdriver on the PATH.
+func findExternalDriver(cfg *Config) driver {
+	const toolPrefix = "GOPACKAGESDRIVER="
+	tool := ""
+	for _, env := range cfg.Env {
+		if val := strings.TrimPrefix(env, toolPrefix); val != env {
+			tool = val
+		}
+	}
+	if tool != "" && tool == "off" {
+		return nil
+	}
+	if tool == "" {
+		var err error
+		tool, err = exec.LookPath("gopackagesdriver")
+		if err != nil {
+			return nil
+		}
+	}
+	return func(cfg *Config, words ...string) (*driverResponse, error) {
+		req, err := json.Marshal(driverRequest{
+			Mode:       cfg.Mode,
+			Env:        cfg.Env,
+			BuildFlags: cfg.BuildFlags,
+			Tests:      cfg.Tests,
+			Overlay:    cfg.Overlay,
+		})
+		if err != nil {
+			return nil, fmt.Errorf("failed to encode message to driver tool: %v", err)
+		}
+
+		buf := new(bytes.Buffer)
+		stderr := new(bytes.Buffer)
+		cmd := exec.CommandContext(cfg.Context, tool, words...)
+ cmd.Dir = cfg.Dir + cmd.Env = cfg.Env + cmd.Stdin = bytes.NewReader(req) + cmd.Stdout = buf + cmd.Stderr = stderr + + if err := cmd.Run(); err != nil { + return nil, fmt.Errorf("%v: %v: %s", tool, err, cmd.Stderr) + } + if len(stderr.Bytes()) != 0 && os.Getenv("GOPACKAGESPRINTDRIVERERRORS") != "" { + fmt.Fprintf(os.Stderr, "%s stderr: <<%s>>\n", cmdDebugStr(cmd), stderr) + } + + var response driverResponse + if err := json.Unmarshal(buf.Bytes(), &response); err != nil { + return nil, err + } + return &response, nil + } +} diff --git a/cluster-autoscaler/vendor/golang.org/x/tools/go/packages/golist.go b/cluster-autoscaler/vendor/golang.org/x/tools/go/packages/golist.go new file mode 100644 index 000000000000..58230038a7ce --- /dev/null +++ b/cluster-autoscaler/vendor/golang.org/x/tools/go/packages/golist.go @@ -0,0 +1,1183 @@ +// Copyright 2018 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package packages + +import ( + "bytes" + "context" + "encoding/json" + "fmt" + "go/types" + "io/ioutil" + "log" + "os" + "path" + "path/filepath" + "reflect" + "sort" + "strconv" + "strings" + "sync" + "unicode" + + exec "golang.org/x/sys/execabs" + "golang.org/x/tools/go/internal/packagesdriver" + "golang.org/x/tools/internal/gocommand" + "golang.org/x/tools/internal/packagesinternal" +) + +// debug controls verbose logging. +var debug, _ = strconv.ParseBool(os.Getenv("GOPACKAGESDEBUG")) + +// A goTooOldError reports that the go command +// found by exec.LookPath is too old to use the new go list behavior. +type goTooOldError struct { + error +} + +// responseDeduper wraps a driverResponse, deduplicating its contents. 
+type responseDeduper struct { + seenRoots map[string]bool + seenPackages map[string]*Package + dr *driverResponse +} + +func newDeduper() *responseDeduper { + return &responseDeduper{ + dr: &driverResponse{}, + seenRoots: map[string]bool{}, + seenPackages: map[string]*Package{}, + } +} + +// addAll fills in r with a driverResponse. +func (r *responseDeduper) addAll(dr *driverResponse) { + for _, pkg := range dr.Packages { + r.addPackage(pkg) + } + for _, root := range dr.Roots { + r.addRoot(root) + } + r.dr.GoVersion = dr.GoVersion +} + +func (r *responseDeduper) addPackage(p *Package) { + if r.seenPackages[p.ID] != nil { + return + } + r.seenPackages[p.ID] = p + r.dr.Packages = append(r.dr.Packages, p) +} + +func (r *responseDeduper) addRoot(id string) { + if r.seenRoots[id] { + return + } + r.seenRoots[id] = true + r.dr.Roots = append(r.dr.Roots, id) +} + +type golistState struct { + cfg *Config + ctx context.Context + + envOnce sync.Once + goEnvError error + goEnv map[string]string + + rootsOnce sync.Once + rootDirsError error + rootDirs map[string]string + + goVersionOnce sync.Once + goVersionError error + goVersion int // The X in Go 1.X. + + // vendorDirs caches the (non)existence of vendor directories. + vendorDirs map[string]bool +} + +// getEnv returns Go environment variables. Only specific variables are +// populated -- computing all of them is slow. +func (state *golistState) getEnv() (map[string]string, error) { + state.envOnce.Do(func() { + var b *bytes.Buffer + b, state.goEnvError = state.invokeGo("env", "-json", "GOMOD", "GOPATH") + if state.goEnvError != nil { + return + } + + state.goEnv = make(map[string]string) + decoder := json.NewDecoder(b) + if state.goEnvError = decoder.Decode(&state.goEnv); state.goEnvError != nil { + return + } + }) + return state.goEnv, state.goEnvError +} + +// mustGetEnv is a convenience function that can be used if getEnv has already succeeded. 
+func (state *golistState) mustGetEnv() map[string]string { + env, err := state.getEnv() + if err != nil { + panic(fmt.Sprintf("mustGetEnv: %v", err)) + } + return env +} + +// goListDriver uses the go list command to interpret the patterns and produce +// the build system package structure. +// See driver for more details. +func goListDriver(cfg *Config, patterns ...string) (*driverResponse, error) { + // Make sure that any asynchronous go commands are killed when we return. + parentCtx := cfg.Context + if parentCtx == nil { + parentCtx = context.Background() + } + ctx, cancel := context.WithCancel(parentCtx) + defer cancel() + + response := newDeduper() + + state := &golistState{ + cfg: cfg, + ctx: ctx, + vendorDirs: map[string]bool{}, + } + + // Fill in response.Sizes asynchronously if necessary. + var sizeserr error + var sizeswg sync.WaitGroup + if cfg.Mode&NeedTypesSizes != 0 || cfg.Mode&NeedTypes != 0 { + sizeswg.Add(1) + go func() { + var sizes types.Sizes + sizes, sizeserr = packagesdriver.GetSizesGolist(ctx, state.cfgInvocation(), cfg.gocmdRunner) + // types.SizesFor always returns nil or a *types.StdSizes. + response.dr.Sizes, _ = sizes.(*types.StdSizes) + sizeswg.Done() + }() + } + + // Determine files requested in contains patterns + var containFiles []string + restPatterns := make([]string, 0, len(patterns)) + // Extract file= and other [querytype]= patterns. Report an error if querytype + // doesn't exist. 
+extractQueries: + for _, pattern := range patterns { + eqidx := strings.Index(pattern, "=") + if eqidx < 0 { + restPatterns = append(restPatterns, pattern) + } else { + query, value := pattern[:eqidx], pattern[eqidx+len("="):] + switch query { + case "file": + containFiles = append(containFiles, value) + case "pattern": + restPatterns = append(restPatterns, value) + case "": // not a reserved query + restPatterns = append(restPatterns, pattern) + default: + for _, rune := range query { + if rune < 'a' || rune > 'z' { // not a reserved query + restPatterns = append(restPatterns, pattern) + continue extractQueries + } + } + // Reject all other patterns containing "=" + return nil, fmt.Errorf("invalid query type %q in query pattern %q", query, pattern) + } + } + } + + // See if we have any patterns to pass through to go list. Zero initial + // patterns also requires a go list call, since it's the equivalent of + // ".". + if len(restPatterns) > 0 || len(patterns) == 0 { + dr, err := state.createDriverResponse(restPatterns...) + if err != nil { + return nil, err + } + response.addAll(dr) + } + + if len(containFiles) != 0 { + if err := state.runContainsQueries(response, containFiles); err != nil { + return nil, err + } + } + + // Only use go/packages' overlay processing if we're using a Go version + // below 1.16. Otherwise, go list handles it. + if goVersion, err := state.getGoVersion(); err == nil && goVersion < 16 { + modifiedPkgs, needPkgs, err := state.processGolistOverlay(response) + if err != nil { + return nil, err + } + + var containsCandidates []string + if len(containFiles) > 0 { + containsCandidates = append(containsCandidates, modifiedPkgs...) + containsCandidates = append(containsCandidates, needPkgs...) + } + if err := state.addNeededOverlayPackages(response, needPkgs); err != nil { + return nil, err + } + // Check candidate packages for containFiles. 
+ if len(containFiles) > 0 { + for _, id := range containsCandidates { + pkg, ok := response.seenPackages[id] + if !ok { + response.addPackage(&Package{ + ID: id, + Errors: []Error{{ + Kind: ListError, + Msg: fmt.Sprintf("package %s expected but not seen", id), + }}, + }) + continue + } + for _, f := range containFiles { + for _, g := range pkg.GoFiles { + if sameFile(f, g) { + response.addRoot(id) + } + } + } + } + } + // Add root for any package that matches a pattern. This applies only to + // packages that are modified by overlays, since they are not added as + // roots automatically. + for _, pattern := range restPatterns { + match := matchPattern(pattern) + for _, pkgID := range modifiedPkgs { + pkg, ok := response.seenPackages[pkgID] + if !ok { + continue + } + if match(pkg.PkgPath) { + response.addRoot(pkg.ID) + } + } + } + } + + sizeswg.Wait() + if sizeserr != nil { + return nil, sizeserr + } + return response.dr, nil +} + +func (state *golistState) addNeededOverlayPackages(response *responseDeduper, pkgs []string) error { + if len(pkgs) == 0 { + return nil + } + dr, err := state.createDriverResponse(pkgs...) + if err != nil { + return err + } + for _, pkg := range dr.Packages { + response.addPackage(pkg) + } + _, needPkgs, err := state.processGolistOverlay(response) + if err != nil { + return err + } + return state.addNeededOverlayPackages(response, needPkgs) +} + +func (state *golistState) runContainsQueries(response *responseDeduper, queries []string) error { + for _, query := range queries { + // TODO(matloob): Do only one query per directory. + fdir := filepath.Dir(query) + // Pass absolute path of directory to go list so that it knows to treat it as a directory, + // not a package path. 
+ pattern, err := filepath.Abs(fdir) + if err != nil { + return fmt.Errorf("could not determine absolute path of file= query path %q: %v", query, err) + } + dirResponse, err := state.createDriverResponse(pattern) + + // If there was an error loading the package, or no packages are returned, + // or the package is returned with errors, try to load the file as an + // ad-hoc package. + // Usually the error will appear in a returned package, but may not if we're + // in module mode and the ad-hoc is located outside a module. + if err != nil || len(dirResponse.Packages) == 0 || len(dirResponse.Packages) == 1 && len(dirResponse.Packages[0].GoFiles) == 0 && + len(dirResponse.Packages[0].Errors) == 1 { + var queryErr error + if dirResponse, queryErr = state.adhocPackage(pattern, query); queryErr != nil { + return err // return the original error + } + } + isRoot := make(map[string]bool, len(dirResponse.Roots)) + for _, root := range dirResponse.Roots { + isRoot[root] = true + } + for _, pkg := range dirResponse.Packages { + // Add any new packages to the main set + // We don't bother to filter packages that will be dropped by the changes of roots, + // that will happen anyway during graph construction outside this function. + // Over-reporting packages is not a problem. + response.addPackage(pkg) + // if the package was not a root one, it cannot have the file + if !isRoot[pkg.ID] { + continue + } + for _, pkgFile := range pkg.GoFiles { + if filepath.Base(query) == filepath.Base(pkgFile) { + response.addRoot(pkg.ID) + break + } + } + } + } + return nil +} + +// adhocPackage attempts to load or construct an ad-hoc package for a given +// query, if the original call to the driver produced inadequate results. 
+func (state *golistState) adhocPackage(pattern, query string) (*driverResponse, error) { + response, err := state.createDriverResponse(query) + if err != nil { + return nil, err + } + // If we get nothing back from `go list`, + // try to make this file into its own ad-hoc package. + // TODO(rstambler): Should this check against the original response? + if len(response.Packages) == 0 { + response.Packages = append(response.Packages, &Package{ + ID: "command-line-arguments", + PkgPath: query, + GoFiles: []string{query}, + CompiledGoFiles: []string{query}, + Imports: make(map[string]*Package), + }) + response.Roots = append(response.Roots, "command-line-arguments") + } + // Handle special cases. + if len(response.Packages) == 1 { + // golang/go#33482: If this is a file= query for ad-hoc packages where + // the file only exists on an overlay, and exists outside of a module, + // add the file to the package and remove the errors. + if response.Packages[0].ID == "command-line-arguments" || + filepath.ToSlash(response.Packages[0].PkgPath) == filepath.ToSlash(query) { + if len(response.Packages[0].GoFiles) == 0 { + filename := filepath.Join(pattern, filepath.Base(query)) // avoid recomputing abspath + // TODO(matloob): check if the file is outside of a root dir? + for path := range state.cfg.Overlay { + if path == filename { + response.Packages[0].Errors = nil + response.Packages[0].GoFiles = []string{path} + response.Packages[0].CompiledGoFiles = []string{path} + } + } + } + } + } + return response, nil +} + +// Fields must match go list; +// see $GOROOT/src/cmd/go/internal/load/pkg.go. 
+type jsonPackage struct { + ImportPath string + Dir string + Name string + Export string + GoFiles []string + CompiledGoFiles []string + IgnoredGoFiles []string + IgnoredOtherFiles []string + EmbedPatterns []string + EmbedFiles []string + CFiles []string + CgoFiles []string + CXXFiles []string + MFiles []string + HFiles []string + FFiles []string + SFiles []string + SwigFiles []string + SwigCXXFiles []string + SysoFiles []string + Imports []string + ImportMap map[string]string + Deps []string + Module *Module + TestGoFiles []string + TestImports []string + XTestGoFiles []string + XTestImports []string + ForTest string // q in a "p [q.test]" package, else "" + DepOnly bool + + Error *packagesinternal.PackageError + DepsErrors []*packagesinternal.PackageError +} + +type jsonPackageError struct { + ImportStack []string + Pos string + Err string +} + +func otherFiles(p *jsonPackage) [][]string { + return [][]string{p.CFiles, p.CXXFiles, p.MFiles, p.HFiles, p.FFiles, p.SFiles, p.SwigFiles, p.SwigCXXFiles, p.SysoFiles} +} + +// createDriverResponse uses the "go list" command to expand the pattern +// words and return a response for the specified packages. +func (state *golistState) createDriverResponse(words ...string) (*driverResponse, error) { + // go list uses the following identifiers in ImportPath and Imports: + // + // "p" -- importable package or main (command) + // "q.test" -- q's test executable + // "p [q.test]" -- variant of p as built for q's test executable + // "q_test [q.test]" -- q's external test package + // + // The packages p that are built differently for a test q.test + // are q itself, plus any helpers used by the external test q_test, + // typically including "testing" and all its dependencies. + + // Run "go list" for complete + // information on the specified packages. + goVersion, err := state.getGoVersion() + if err != nil { + return nil, err + } + buf, err := state.invokeGo("list", golistargs(state.cfg, words, goVersion)...) 
+ if err != nil { + return nil, err + } + + seen := make(map[string]*jsonPackage) + pkgs := make(map[string]*Package) + additionalErrors := make(map[string][]Error) + // Decode the JSON and convert it to Package form. + response := &driverResponse{ + GoVersion: goVersion, + } + for dec := json.NewDecoder(buf); dec.More(); { + p := new(jsonPackage) + if err := dec.Decode(p); err != nil { + return nil, fmt.Errorf("JSON decoding failed: %v", err) + } + + if p.ImportPath == "" { + // The documentation for go list says that “[e]rroneous packages will have + // a non-empty ImportPath”. If for some reason it comes back empty, we + // prefer to error out rather than silently discarding data or handing + // back a package without any way to refer to it. + if p.Error != nil { + return nil, Error{ + Pos: p.Error.Pos, + Msg: p.Error.Err, + } + } + return nil, fmt.Errorf("package missing import path: %+v", p) + } + + // Work around https://golang.org/issue/33157: + // go list -e, when given an absolute path, will find the package contained at + // that directory. But when no package exists there, it will return a fake package + // with an error and the ImportPath set to the absolute path provided to go list. + // Try to convert that absolute path to what its package path would be if it's + // contained in a known module or GOPATH entry. This will allow the package to be + // properly "reclaimed" when overlays are processed. 
+ if filepath.IsAbs(p.ImportPath) && p.Error != nil { + pkgPath, ok, err := state.getPkgPath(p.ImportPath) + if err != nil { + return nil, err + } + if ok { + p.ImportPath = pkgPath + } + } + + if old, found := seen[p.ImportPath]; found { + // If one version of the package has an error, and the other doesn't, assume + // that this is a case where go list is reporting a fake dependency variant + // of the imported package: When a package tries to invalidly import another + // package, go list emits a variant of the imported package (with the same + // import path, but with an error on it, and the package will have a + // DepError set on it). An example of when this can happen is for imports of + // main packages: main packages can not be imported, but they may be + // separately matched and listed by another pattern. + // See golang.org/issue/36188 for more details. + + // The plan is that eventually, hopefully in Go 1.15, the error will be + // reported on the importing package rather than the duplicate "fake" + // version of the imported package. Once all supported versions of Go + // have the new behavior this logic can be deleted. + // TODO(matloob): delete the workaround logic once all supported versions of + // Go return the errors on the proper package. + + // There should be exactly one version of a package that doesn't have an + // error. + if old.Error == nil && p.Error == nil { + if !reflect.DeepEqual(p, old) { + return nil, fmt.Errorf("internal error: go list gives conflicting information for package %v", p.ImportPath) + } + continue + } + + // Determine if this package's error needs to be bubbled up. + // This is a hack, and we expect for go list to eventually set the error + // on the package. 
+ if old.Error != nil { + var errkind string + if strings.Contains(old.Error.Err, "not an importable package") { + errkind = "not an importable package" + } else if strings.Contains(old.Error.Err, "use of internal package") && strings.Contains(old.Error.Err, "not allowed") { + errkind = "use of internal package not allowed" + } + if errkind != "" { + if len(old.Error.ImportStack) < 1 { + return nil, fmt.Errorf(`internal error: go list gave a %q error with empty import stack`, errkind) + } + importingPkg := old.Error.ImportStack[len(old.Error.ImportStack)-1] + if importingPkg == old.ImportPath { + // Using an older version of Go which put this package itself on top of import + // stack, instead of the importer. Look for importer in second from top + // position. + if len(old.Error.ImportStack) < 2 { + return nil, fmt.Errorf(`internal error: go list gave a %q error with an import stack without importing package`, errkind) + } + importingPkg = old.Error.ImportStack[len(old.Error.ImportStack)-2] + } + additionalErrors[importingPkg] = append(additionalErrors[importingPkg], Error{ + Pos: old.Error.Pos, + Msg: old.Error.Err, + Kind: ListError, + }) + } + } + + // Make sure that if there's a version of the package without an error, + // that's the one reported to the user. + if old.Error == nil { + continue + } + + // This package will replace the old one at the end of the loop. 
+ } + seen[p.ImportPath] = p + + pkg := &Package{ + Name: p.Name, + ID: p.ImportPath, + GoFiles: absJoin(p.Dir, p.GoFiles, p.CgoFiles), + CompiledGoFiles: absJoin(p.Dir, p.CompiledGoFiles), + OtherFiles: absJoin(p.Dir, otherFiles(p)...), + EmbedFiles: absJoin(p.Dir, p.EmbedFiles), + EmbedPatterns: absJoin(p.Dir, p.EmbedPatterns), + IgnoredFiles: absJoin(p.Dir, p.IgnoredGoFiles, p.IgnoredOtherFiles), + forTest: p.ForTest, + depsErrors: p.DepsErrors, + Module: p.Module, + } + + if (state.cfg.Mode&typecheckCgo) != 0 && len(p.CgoFiles) != 0 { + if len(p.CompiledGoFiles) > len(p.GoFiles) { + // We need the cgo definitions, which are in the first + // CompiledGoFile after the non-cgo ones. This is a hack but there + // isn't currently a better way to find it. We also need the pure + // Go files and unprocessed cgo files, all of which are already + // in pkg.GoFiles. + cgoTypes := p.CompiledGoFiles[len(p.GoFiles)] + pkg.CompiledGoFiles = append([]string{cgoTypes}, pkg.GoFiles...) + } else { + // golang/go#38990: go list silently fails to do cgo processing + pkg.CompiledGoFiles = nil + pkg.Errors = append(pkg.Errors, Error{ + Msg: "go list failed to return CompiledGoFiles. This may indicate failure to perform cgo processing; try building at the command line. See https://golang.org/issue/38990.", + Kind: ListError, + }) + } + } + + // Work around https://golang.org/issue/28749: + // cmd/go puts assembly, C, and C++ files in CompiledGoFiles. + // Remove files from CompiledGoFiles that are non-go files + // (or are not files that look like they are from the cache). + if len(pkg.CompiledGoFiles) > 0 { + out := pkg.CompiledGoFiles[:0] + for _, f := range pkg.CompiledGoFiles { + if ext := filepath.Ext(f); ext != ".go" && ext != "" { // ext == "" means the file is from the cache, so probably cgo-processed file + continue + } + out = append(out, f) + } + pkg.CompiledGoFiles = out + } + + // Extract the PkgPath from the package's ID. 
+ if i := strings.IndexByte(pkg.ID, ' '); i >= 0 { + pkg.PkgPath = pkg.ID[:i] + } else { + pkg.PkgPath = pkg.ID + } + + if pkg.PkgPath == "unsafe" { + pkg.CompiledGoFiles = nil // ignore fake unsafe.go file (#59929) + } else if len(pkg.CompiledGoFiles) == 0 { + // Work around for pre-go.1.11 versions of go list. + // TODO(matloob): they should be handled by the fallback. + // Can we delete this? + pkg.CompiledGoFiles = pkg.GoFiles + } + + // Assume go list emits only absolute paths for Dir. + if p.Dir != "" && !filepath.IsAbs(p.Dir) { + log.Fatalf("internal error: go list returned non-absolute Package.Dir: %s", p.Dir) + } + + if p.Export != "" && !filepath.IsAbs(p.Export) { + pkg.ExportFile = filepath.Join(p.Dir, p.Export) + } else { + pkg.ExportFile = p.Export + } + + // imports + // + // Imports contains the IDs of all imported packages. + // ImportsMap records (path, ID) only where they differ. + ids := make(map[string]bool) + for _, id := range p.Imports { + ids[id] = true + } + pkg.Imports = make(map[string]*Package) + for path, id := range p.ImportMap { + pkg.Imports[path] = &Package{ID: id} // non-identity import + delete(ids, id) + } + for id := range ids { + if id == "C" { + continue + } + + pkg.Imports[id] = &Package{ID: id} // identity import + } + if !p.DepOnly { + response.Roots = append(response.Roots, pkg.ID) + } + + // Temporary work-around for golang/go#39986. Parse filenames out of + // error messages. This happens if there are unrecoverable syntax + // errors in the source, so we can't match on a specific error message. + // + // TODO(rfindley): remove this heuristic, in favor of considering + // InvalidGoFiles from the list driver. 
+		if err := p.Error; err != nil && state.shouldAddFilenameFromError(p) {
+			addFilenameFromPos := func(pos string) bool {
+				split := strings.Split(pos, ":")
+				if len(split) < 1 {
+					return false
+				}
+				filename := strings.TrimSpace(split[0])
+				if filename == "" {
+					return false
+				}
+				if !filepath.IsAbs(filename) {
+					filename = filepath.Join(state.cfg.Dir, filename)
+				}
+				info, _ := os.Stat(filename)
+				if info == nil {
+					return false
+				}
+				pkg.CompiledGoFiles = append(pkg.CompiledGoFiles, filename)
+				pkg.GoFiles = append(pkg.GoFiles, filename)
+				return true
+			}
+			found := addFilenameFromPos(err.Pos)
+			// In some cases, go list only reports the error position in the
+			// error text, not in the Pos field. One such case is when the
+			// file's package name is a keyword (see golang.org/issue/39763).
+			if !found {
+				addFilenameFromPos(err.Err)
+			}
+		}
+
+		if p.Error != nil {
+			msg := strings.TrimSpace(p.Error.Err) // Trim to work around golang.org/issue/32363.
+			// Address golang.org/issue/35964 by appending import stack to error message.
+			if msg == "import cycle not allowed" && len(p.Error.ImportStack) != 0 {
+				msg += fmt.Sprintf(": import stack: %v", p.Error.ImportStack)
+			}
+			pkg.Errors = append(pkg.Errors, Error{
+				Pos:  p.Error.Pos,
+				Msg:  msg,
+				Kind: ListError,
+			})
+		}
+
+		pkgs[pkg.ID] = pkg
+	}
+
+	for id, errs := range additionalErrors {
+		if p, ok := pkgs[id]; ok {
+			p.Errors = append(p.Errors, errs...)
+		}
+	}
+	for _, pkg := range pkgs {
+		response.Packages = append(response.Packages, pkg)
+	}
+	sort.Slice(response.Packages, func(i, j int) bool { return response.Packages[i].ID < response.Packages[j].ID })
+
+	return response, nil
+}
+
+func (state *golistState) shouldAddFilenameFromError(p *jsonPackage) bool {
+	if len(p.GoFiles) > 0 || len(p.CompiledGoFiles) > 0 {
+		return false
+	}
+
+	goV, err := state.getGoVersion()
+	if err != nil {
+		return false
+	}
+
+	// On Go 1.14 and earlier, only add filenames from errors if the import stack is empty.
+ // The import stack behaves differently for these versions than newer Go versions. + if goV < 15 { + return len(p.Error.ImportStack) == 0 + } + + // On Go 1.15 and later, only parse filenames out of the error if there's no import stack, + // or the current package is at the top of the import stack. This is not guaranteed + // to work perfectly, but should avoid some cases where files in errors don't belong to this + // package. + return len(p.Error.ImportStack) == 0 || p.Error.ImportStack[len(p.Error.ImportStack)-1] == p.ImportPath +} + +// getGoVersion returns the effective minor version of the go command. +func (state *golistState) getGoVersion() (int, error) { + state.goVersionOnce.Do(func() { + state.goVersion, state.goVersionError = gocommand.GoVersion(state.ctx, state.cfgInvocation(), state.cfg.gocmdRunner) + }) + return state.goVersion, state.goVersionError +} + +// getPkgPath finds the package path of a directory if it's relative to a root +// directory. +func (state *golistState) getPkgPath(dir string) (string, bool, error) { + absDir, err := filepath.Abs(dir) + if err != nil { + return "", false, err + } + roots, err := state.determineRootDirs() + if err != nil { + return "", false, err + } + + for rdir, rpath := range roots { + // Make sure that the directory is in the module, + // to avoid creating a path relative to another module. + if !strings.HasPrefix(absDir, rdir) { + continue + } + // TODO(matloob): This doesn't properly handle symlinks. + r, err := filepath.Rel(rdir, dir) + if err != nil { + continue + } + if rpath != "" { + // We choose only one root even though the directory can belong to multiple modules + // or GOPATH entries. This is okay because we only need to work with absolute dirs when a + // file is missing from disk, for instance when gopls calls go/packages in an overlay. + // Once the file is saved, gopls or the next invocation of the tool will get the correct + // result straight from go list. 
+ // TODO(matloob): Implement module tiebreaking? + return path.Join(rpath, filepath.ToSlash(r)), true, nil + } + return filepath.ToSlash(r), true, nil + } + return "", false, nil +} + +// absJoin absolutizes and flattens the lists of files. +func absJoin(dir string, fileses ...[]string) (res []string) { + for _, files := range fileses { + for _, file := range files { + if !filepath.IsAbs(file) { + file = filepath.Join(dir, file) + } + res = append(res, file) + } + } + return res +} + +func jsonFlag(cfg *Config, goVersion int) string { + if goVersion < 19 { + return "-json" + } + var fields []string + added := make(map[string]bool) + addFields := func(fs ...string) { + for _, f := range fs { + if !added[f] { + added[f] = true + fields = append(fields, f) + } + } + } + addFields("Name", "ImportPath", "Error") // These fields are always needed + if cfg.Mode&NeedFiles != 0 || cfg.Mode&NeedTypes != 0 { + addFields("Dir", "GoFiles", "IgnoredGoFiles", "IgnoredOtherFiles", "CFiles", + "CgoFiles", "CXXFiles", "MFiles", "HFiles", "FFiles", "SFiles", + "SwigFiles", "SwigCXXFiles", "SysoFiles") + if cfg.Tests { + addFields("TestGoFiles", "XTestGoFiles") + } + } + if cfg.Mode&NeedTypes != 0 { + // CompiledGoFiles seems to be required for the test case TestCgoNoSyntax, + // even when -compiled isn't passed in. + // TODO(#52435): Should we make the test ask for -compiled, or automatically + // request CompiledGoFiles in certain circumstances? + addFields("Dir", "CompiledGoFiles") + } + if cfg.Mode&NeedCompiledGoFiles != 0 { + addFields("Dir", "CompiledGoFiles", "Export") + } + if cfg.Mode&NeedImports != 0 { + // When imports are requested, DepOnly is used to distinguish between packages + // explicitly requested and transitive imports of those packages. 
+ addFields("DepOnly", "Imports", "ImportMap") + if cfg.Tests { + addFields("TestImports", "XTestImports") + } + } + if cfg.Mode&NeedDeps != 0 { + addFields("DepOnly") + } + if usesExportData(cfg) { + // Request Dir in the unlikely case Export is not absolute. + addFields("Dir", "Export") + } + if cfg.Mode&needInternalForTest != 0 { + addFields("ForTest") + } + if cfg.Mode&needInternalDepsErrors != 0 { + addFields("DepsErrors") + } + if cfg.Mode&NeedModule != 0 { + addFields("Module") + } + if cfg.Mode&NeedEmbedFiles != 0 { + addFields("EmbedFiles") + } + if cfg.Mode&NeedEmbedPatterns != 0 { + addFields("EmbedPatterns") + } + return "-json=" + strings.Join(fields, ",") +} + +func golistargs(cfg *Config, words []string, goVersion int) []string { + const findFlags = NeedImports | NeedTypes | NeedSyntax | NeedTypesInfo + fullargs := []string{ + "-e", jsonFlag(cfg, goVersion), + fmt.Sprintf("-compiled=%t", cfg.Mode&(NeedCompiledGoFiles|NeedSyntax|NeedTypes|NeedTypesInfo|NeedTypesSizes) != 0), + fmt.Sprintf("-test=%t", cfg.Tests), + fmt.Sprintf("-export=%t", usesExportData(cfg)), + fmt.Sprintf("-deps=%t", cfg.Mode&NeedImports != 0), + // go list doesn't let you pass -test and -find together, + // probably because you'd just get the TestMain. + fmt.Sprintf("-find=%t", !cfg.Tests && cfg.Mode&findFlags == 0 && !usesExportData(cfg)), + } + + // golang/go#60456: with go1.21 and later, go list serves pgo variants, which + // can be costly to compute and may result in redundant processing for the + // caller. Disable these variants. If someone wants to add e.g. a NeedPGO + // mode flag, that should be a separate proposal. + if goVersion >= 21 { + fullargs = append(fullargs, "-pgo=off") + } + + fullargs = append(fullargs, cfg.BuildFlags...) + fullargs = append(fullargs, "--") + fullargs = append(fullargs, words...) + return fullargs +} + +// cfgInvocation returns an Invocation that reflects cfg's settings. 
+func (state *golistState) cfgInvocation() gocommand.Invocation { + cfg := state.cfg + return gocommand.Invocation{ + BuildFlags: cfg.BuildFlags, + ModFile: cfg.modFile, + ModFlag: cfg.modFlag, + CleanEnv: cfg.Env != nil, + Env: cfg.Env, + Logf: cfg.Logf, + WorkingDir: cfg.Dir, + } +} + +// invokeGo returns the stdout of a go command invocation. +func (state *golistState) invokeGo(verb string, args ...string) (*bytes.Buffer, error) { + cfg := state.cfg + + inv := state.cfgInvocation() + + // For Go versions 1.16 and above, `go list` accepts overlays directly via + // the -overlay flag. Set it, if it's available. + // + // The check for "list" is not necessarily required, but we should avoid + // getting the go version if possible. + if verb == "list" { + goVersion, err := state.getGoVersion() + if err != nil { + return nil, err + } + if goVersion >= 16 { + filename, cleanup, err := state.writeOverlays() + if err != nil { + return nil, err + } + defer cleanup() + inv.Overlay = filename + } + } + inv.Verb = verb + inv.Args = args + gocmdRunner := cfg.gocmdRunner + if gocmdRunner == nil { + gocmdRunner = &gocommand.Runner{} + } + stdout, stderr, friendlyErr, err := gocmdRunner.RunRaw(cfg.Context, inv) + if err != nil { + // Check for 'go' executable not being found. + if ee, ok := err.(*exec.Error); ok && ee.Err == exec.ErrNotFound { + return nil, fmt.Errorf("'go list' driver requires 'go', but %s", exec.ErrNotFound) + } + + exitErr, ok := err.(*exec.ExitError) + if !ok { + // Catastrophic error: + // - context cancellation + return nil, fmt.Errorf("couldn't run 'go': %w", err) + } + + // Old go version? 
+ if strings.Contains(stderr.String(), "flag provided but not defined") { + return nil, goTooOldError{fmt.Errorf("unsupported version of go: %s: %s", exitErr, stderr)} + } + + // Related to #24854 + if len(stderr.String()) > 0 && strings.Contains(stderr.String(), "unexpected directory layout") { + return nil, friendlyErr + } + + // Is there an error running the C compiler in cgo? This will be reported in the "Error" field + // and should be suppressed by go list -e. + // + // This condition is not perfect yet because the error message can include other error messages than runtime/cgo. + isPkgPathRune := func(r rune) bool { + // From https://golang.org/ref/spec#Import_declarations: + // Implementation restriction: A compiler may restrict ImportPaths to non-empty strings + // using only characters belonging to Unicode's L, M, N, P, and S general categories + // (the Graphic characters without spaces) and may also exclude the + // characters !"#$%&'()*,:;<=>?[\]^`{|} and the Unicode replacement character U+FFFD. + return unicode.IsOneOf([]*unicode.RangeTable{unicode.L, unicode.M, unicode.N, unicode.P, unicode.S}, r) && + !strings.ContainsRune("!\"#$%&'()*,:;<=>?[\\]^`{|}\uFFFD", r) + } + // golang/go#36770: Handle case where cmd/go prints module download messages before the error. + msg := stderr.String() + for strings.HasPrefix(msg, "go: downloading") { + msg = msg[strings.IndexRune(msg, '\n')+1:] + } + if len(stderr.String()) > 0 && strings.HasPrefix(stderr.String(), "# ") { + msg := msg[len("# "):] + if strings.HasPrefix(strings.TrimLeftFunc(msg, isPkgPathRune), "\n") { + return stdout, nil + } + // Treat pkg-config errors as a special case (golang.org/issue/36770). + if strings.HasPrefix(msg, "pkg-config") { + return stdout, nil + } + } + + // This error only appears in stderr. See golang.org/cl/166398 for a fix in go list to show + // the error in the Err section of stdout in case -e option is provided. + // This fix is provided for backwards compatibility. 
+ if len(stderr.String()) > 0 && strings.Contains(stderr.String(), "named files must be .go files") { + output := fmt.Sprintf(`{"ImportPath": "command-line-arguments","Incomplete": true,"Error": {"Pos": "","Err": %q}}`, + strings.Trim(stderr.String(), "\n")) + return bytes.NewBufferString(output), nil + } + + // Similar to the previous error, but currently lacks a fix in Go. + if len(stderr.String()) > 0 && strings.Contains(stderr.String(), "named files must all be in one directory") { + output := fmt.Sprintf(`{"ImportPath": "command-line-arguments","Incomplete": true,"Error": {"Pos": "","Err": %q}}`, + strings.Trim(stderr.String(), "\n")) + return bytes.NewBufferString(output), nil + } + + // Backwards compatibility for Go 1.11 because 1.12 and 1.13 put the directory in the ImportPath. + // If the package doesn't exist, put the absolute path of the directory into the error message, + // as Go 1.13 list does. + const noSuchDirectory = "no such directory" + if len(stderr.String()) > 0 && strings.Contains(stderr.String(), noSuchDirectory) { + errstr := stderr.String() + abspath := strings.TrimSpace(errstr[strings.Index(errstr, noSuchDirectory)+len(noSuchDirectory):]) + output := fmt.Sprintf(`{"ImportPath": %q,"Incomplete": true,"Error": {"Pos": "","Err": %q}}`, + abspath, strings.Trim(stderr.String(), "\n")) + return bytes.NewBufferString(output), nil + } + + // Workaround for #29280: go list -e has incorrect behavior when an ad-hoc package doesn't exist. + // Note that the error message we look for in this case is different from the one looked for above. + if len(stderr.String()) > 0 && strings.Contains(stderr.String(), "no such file or directory") { + output := fmt.Sprintf(`{"ImportPath": "command-line-arguments","Incomplete": true,"Error": {"Pos": "","Err": %q}}`, + strings.Trim(stderr.String(), "\n")) + return bytes.NewBufferString(output), nil + } + + // Workaround for #34273. 
go list -e with GO111MODULE=on has incorrect behavior when listing a + // directory outside any module. + if len(stderr.String()) > 0 && strings.Contains(stderr.String(), "outside available modules") { + output := fmt.Sprintf(`{"ImportPath": %q,"Incomplete": true,"Error": {"Pos": "","Err": %q}}`, + // TODO(matloob): command-line-arguments isn't correct here. + "command-line-arguments", strings.Trim(stderr.String(), "\n")) + return bytes.NewBufferString(output), nil + } + + // Another variation of the previous error + if len(stderr.String()) > 0 && strings.Contains(stderr.String(), "outside module root") { + output := fmt.Sprintf(`{"ImportPath": %q,"Incomplete": true,"Error": {"Pos": "","Err": %q}}`, + // TODO(matloob): command-line-arguments isn't correct here. + "command-line-arguments", strings.Trim(stderr.String(), "\n")) + return bytes.NewBufferString(output), nil + } + + // Workaround for an instance of golang.org/issue/26755: go list -e will return a non-zero exit + // status if there's a dependency on a package that doesn't exist. But it should return + // a zero exit status and set an error on that package. + if len(stderr.String()) > 0 && strings.Contains(stderr.String(), "no Go files in") { + // Don't clobber stdout if `go list` actually returned something. + if len(stdout.String()) > 0 { + return stdout, nil + } + // try to extract package name from string + stderrStr := stderr.String() + var importPath string + colon := strings.Index(stderrStr, ":") + if colon > 0 && strings.HasPrefix(stderrStr, "go build ") { + importPath = stderrStr[len("go build "):colon] + } + output := fmt.Sprintf(`{"ImportPath": %q,"Incomplete": true,"Error": {"Pos": "","Err": %q}}`, + importPath, strings.Trim(stderrStr, "\n")) + return bytes.NewBufferString(output), nil + } + + // Export mode entails a build. + // If that build fails, errors appear on stderr + // (despite the -e flag) and the Export field is blank. + // Do not fail in that case. 
+ // The same is true if an ad-hoc package given to go list doesn't exist. + // TODO(matloob): Remove these once we can depend on go list to exit with a zero status with -e even when + // packages don't exist or a build fails. + if !usesExportData(cfg) && !containsGoFile(args) { + return nil, friendlyErr + } + } + return stdout, nil +} + +// OverlayJSON is the format overlay files are expected to be in. +// The Replace map maps from overlaid paths to replacement paths: +// the Go command will forward all reads trying to open +// each overlaid path to its replacement path, or consider the overlaid +// path not to exist if the replacement path is empty. +// +// From golang/go#39958. +type OverlayJSON struct { + Replace map[string]string `json:"replace,omitempty"` +} + +// writeOverlays writes out files for go list's -overlay flag, as described +// above. +func (state *golistState) writeOverlays() (filename string, cleanup func(), err error) { + // Do nothing if there are no overlays in the config. + if len(state.cfg.Overlay) == 0 { + return "", func() {}, nil + } + dir, err := ioutil.TempDir("", "gopackages-*") + if err != nil { + return "", nil, err + } + // The caller must clean up this directory, unless this function returns an + // error. + cleanup = func() { + os.RemoveAll(dir) + } + defer func() { + if err != nil { + cleanup() + } + }() + overlays := map[string]string{} + for k, v := range state.cfg.Overlay { + // Create a unique filename for the overlaid files, to avoid + // creating nested directories. 
+ noSeparator := strings.Join(strings.Split(filepath.ToSlash(k), "/"), "") + f, err := ioutil.TempFile(dir, fmt.Sprintf("*-%s", noSeparator)) + if err != nil { + return "", func() {}, err + } + if _, err := f.Write(v); err != nil { + return "", func() {}, err + } + if err := f.Close(); err != nil { + return "", func() {}, err + } + overlays[k] = f.Name() + } + b, err := json.Marshal(OverlayJSON{Replace: overlays}) + if err != nil { + return "", func() {}, err + } + // Write out the overlay file that contains the filepath mappings. + filename = filepath.Join(dir, "overlay.json") + if err := ioutil.WriteFile(filename, b, 0665); err != nil { + return "", func() {}, err + } + return filename, cleanup, nil +} + +func containsGoFile(s []string) bool { + for _, f := range s { + if strings.HasSuffix(f, ".go") { + return true + } + } + return false +} + +func cmdDebugStr(cmd *exec.Cmd) string { + env := make(map[string]string) + for _, kv := range cmd.Env { + split := strings.SplitN(kv, "=", 2) + k, v := split[0], split[1] + env[k] = v + } + + var args []string + for _, arg := range cmd.Args { + quoted := strconv.Quote(arg) + if quoted[1:len(quoted)-1] != arg || strings.Contains(arg, " ") { + args = append(args, quoted) + } else { + args = append(args, arg) + } + } + return fmt.Sprintf("GOROOT=%v GOPATH=%v GO111MODULE=%v GOPROXY=%v PWD=%v %v", env["GOROOT"], env["GOPATH"], env["GO111MODULE"], env["GOPROXY"], env["PWD"], strings.Join(args, " ")) +} diff --git a/cluster-autoscaler/vendor/golang.org/x/tools/go/packages/golist_overlay.go b/cluster-autoscaler/vendor/golang.org/x/tools/go/packages/golist_overlay.go new file mode 100644 index 000000000000..9576b472f9cc --- /dev/null +++ b/cluster-autoscaler/vendor/golang.org/x/tools/go/packages/golist_overlay.go @@ -0,0 +1,575 @@ +// Copyright 2018 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. 
+ +package packages + +import ( + "encoding/json" + "fmt" + "go/parser" + "go/token" + "os" + "path/filepath" + "regexp" + "sort" + "strconv" + "strings" + + "golang.org/x/tools/internal/gocommand" +) + +// processGolistOverlay provides rudimentary support for adding +// files that don't exist on disk to an overlay. The results can be +// sometimes incorrect. +// TODO(matloob): Handle unsupported cases, including the following: +// - determining the correct package to add given a new import path +func (state *golistState) processGolistOverlay(response *responseDeduper) (modifiedPkgs, needPkgs []string, err error) { + havePkgs := make(map[string]string) // importPath -> non-test package ID + needPkgsSet := make(map[string]bool) + modifiedPkgsSet := make(map[string]bool) + + pkgOfDir := make(map[string][]*Package) + for _, pkg := range response.dr.Packages { + // This is an approximation of import path to id. This can be + // wrong for tests, vendored packages, and a number of other cases. + havePkgs[pkg.PkgPath] = pkg.ID + dir, err := commonDir(pkg.GoFiles) + if err != nil { + return nil, nil, err + } + if dir != "" { + pkgOfDir[dir] = append(pkgOfDir[dir], pkg) + } + } + + // If no new imports are added, it is safe to avoid loading any needPkgs. + // Otherwise, it's hard to tell which package is actually being loaded + // (due to vendoring) and whether any modified package will show up + // in the transitive set of dependencies (because new imports are added, + // potentially modifying the transitive set of dependencies). + var overlayAddsImports bool + + // If both a package and its test package are created by the overlay, we + // need the real package first. Process all non-test files before test + // files, and make the whole process deterministic while we're at it. 
+ var overlayFiles []string + for opath := range state.cfg.Overlay { + overlayFiles = append(overlayFiles, opath) + } + sort.Slice(overlayFiles, func(i, j int) bool { + iTest := strings.HasSuffix(overlayFiles[i], "_test.go") + jTest := strings.HasSuffix(overlayFiles[j], "_test.go") + if iTest != jTest { + return !iTest // non-tests are before tests. + } + return overlayFiles[i] < overlayFiles[j] + }) + for _, opath := range overlayFiles { + contents := state.cfg.Overlay[opath] + base := filepath.Base(opath) + dir := filepath.Dir(opath) + var pkg *Package // if opath belongs to both a package and its test variant, this will be the test variant + var testVariantOf *Package // if opath is a test file, this is the package it is testing + var fileExists bool + isTestFile := strings.HasSuffix(opath, "_test.go") + pkgName, ok := extractPackageName(opath, contents) + if !ok { + // Don't bother adding a file that doesn't even have a parsable package statement + // to the overlay. + continue + } + // If all the overlay files belong to a different package, change the + // package name to that package. + maybeFixPackageName(pkgName, isTestFile, pkgOfDir[dir]) + nextPackage: + for _, p := range response.dr.Packages { + if pkgName != p.Name && p.ID != "command-line-arguments" { + continue + } + for _, f := range p.GoFiles { + if !sameFile(filepath.Dir(f), dir) { + continue + } + // Make sure to capture information on the package's test variant, if needed. + if isTestFile && !hasTestFiles(p) { + // TODO(matloob): Are there packages other than the 'production' variant + // of a package that this can match? This shouldn't match the test main package + // because the file is generated in another directory. + testVariantOf = p + continue nextPackage + } else if !isTestFile && hasTestFiles(p) { + // We're examining a test variant, but the overlaid file is + // a non-test file. 
Because the overlay implementation + // (currently) only adds a file to one package, skip this + // package, so that we can add the file to the production + // variant of the package. (https://golang.org/issue/36857 + // tracks handling overlays on both the production and test + // variant of a package). + continue nextPackage + } + if pkg != nil && p != pkg && pkg.PkgPath == p.PkgPath { + // We have already seen the production version of the + // package for which p is a test variant. + if hasTestFiles(p) { + testVariantOf = pkg + } + } + pkg = p + if filepath.Base(f) == base { + fileExists = true + } + } + } + // The overlay could have included an entirely new package or an + // ad-hoc package. An ad-hoc package is one that we have manually + // constructed from inadequate `go list` results for a file= query. + // It will have the ID command-line-arguments. + if pkg == nil || pkg.ID == "command-line-arguments" { + // Try to find the module or gopath dir the file is contained in. + // Then for modules, add the module opath to the beginning. + pkgPath, ok, err := state.getPkgPath(dir) + if err != nil { + return nil, nil, err + } + if !ok { + break + } + var forTest string // only set for x tests + isXTest := strings.HasSuffix(pkgName, "_test") + if isXTest { + forTest = pkgPath + pkgPath += "_test" + } + id := pkgPath + if isTestFile { + if isXTest { + id = fmt.Sprintf("%s [%s.test]", pkgPath, forTest) + } else { + id = fmt.Sprintf("%s [%s.test]", pkgPath, pkgPath) + } + } + if pkg != nil { + // TODO(rstambler): We should change the package's path and ID + // here. The only issue is that this messes with the roots. + } else { + // Try to reclaim a package with the same ID, if it exists in the response. + for _, p := range response.dr.Packages { + if reclaimPackage(p, id, opath, contents) { + pkg = p + break + } + } + // Otherwise, create a new package. 
+ if pkg == nil { + pkg = &Package{ + PkgPath: pkgPath, + ID: id, + Name: pkgName, + Imports: make(map[string]*Package), + } + response.addPackage(pkg) + havePkgs[pkg.PkgPath] = id + // Add the production package's sources for a test variant. + if isTestFile && !isXTest && testVariantOf != nil { + pkg.GoFiles = append(pkg.GoFiles, testVariantOf.GoFiles...) + pkg.CompiledGoFiles = append(pkg.CompiledGoFiles, testVariantOf.CompiledGoFiles...) + // Add the package under test and its imports to the test variant. + pkg.forTest = testVariantOf.PkgPath + for k, v := range testVariantOf.Imports { + pkg.Imports[k] = &Package{ID: v.ID} + } + } + if isXTest { + pkg.forTest = forTest + } + } + } + } + if !fileExists { + pkg.GoFiles = append(pkg.GoFiles, opath) + // TODO(matloob): Adding the file to CompiledGoFiles can exhibit the wrong behavior + // if the file will be ignored due to its build tags. + pkg.CompiledGoFiles = append(pkg.CompiledGoFiles, opath) + modifiedPkgsSet[pkg.ID] = true + } + imports, err := extractImports(opath, contents) + if err != nil { + // Let the parser or type checker report errors later. + continue + } + for _, imp := range imports { + // TODO(rstambler): If the package is an x test and the import has + // a test variant, make sure to replace it. + if _, found := pkg.Imports[imp]; found { + continue + } + overlayAddsImports = true + id, ok := havePkgs[imp] + if !ok { + var err error + id, err = state.resolveImport(dir, imp) + if err != nil { + return nil, nil, err + } + } + pkg.Imports[imp] = &Package{ID: id} + // Add dependencies to the non-test variant version of this package as well. + if testVariantOf != nil { + testVariantOf.Imports[imp] = &Package{ID: id} + } + } + } + + // toPkgPath guesses the package path given the id. 
+ toPkgPath := func(sourceDir, id string) (string, error) { + if i := strings.IndexByte(id, ' '); i >= 0 { + return state.resolveImport(sourceDir, id[:i]) + } + return state.resolveImport(sourceDir, id) + } + + // Now that new packages have been created, do another pass to determine + // the new set of missing packages. + for _, pkg := range response.dr.Packages { + for _, imp := range pkg.Imports { + if len(pkg.GoFiles) == 0 { + return nil, nil, fmt.Errorf("cannot resolve imports for package %q with no Go files", pkg.PkgPath) + } + pkgPath, err := toPkgPath(filepath.Dir(pkg.GoFiles[0]), imp.ID) + if err != nil { + return nil, nil, err + } + if _, ok := havePkgs[pkgPath]; !ok { + needPkgsSet[pkgPath] = true + } + } + } + + if overlayAddsImports { + needPkgs = make([]string, 0, len(needPkgsSet)) + for pkg := range needPkgsSet { + needPkgs = append(needPkgs, pkg) + } + } + modifiedPkgs = make([]string, 0, len(modifiedPkgsSet)) + for pkg := range modifiedPkgsSet { + modifiedPkgs = append(modifiedPkgs, pkg) + } + return modifiedPkgs, needPkgs, err +} + +// resolveImport finds the ID of a package given its import path. +// In particular, it will find the right vendored copy when in GOPATH mode. +func (state *golistState) resolveImport(sourceDir, importPath string) (string, error) { + env, err := state.getEnv() + if err != nil { + return "", err + } + if env["GOMOD"] != "" { + return importPath, nil + } + + searchDir := sourceDir + for { + vendorDir := filepath.Join(searchDir, "vendor") + exists, ok := state.vendorDirs[vendorDir] + if !ok { + info, err := os.Stat(vendorDir) + exists = err == nil && info.IsDir() + state.vendorDirs[vendorDir] = exists + } + + if exists { + vendoredPath := filepath.Join(vendorDir, importPath) + if info, err := os.Stat(vendoredPath); err == nil && info.IsDir() { + // We should probably check for .go files here, but shame on anyone who fools us. 
+ path, ok, err := state.getPkgPath(vendoredPath) + if err != nil { + return "", err + } + if ok { + return path, nil + } + } + } + + // We know we've hit the top of the filesystem when we Dir / and get /, + // or C:\ and get C:\, etc. + next := filepath.Dir(searchDir) + if next == searchDir { + break + } + searchDir = next + } + return importPath, nil +} + +func hasTestFiles(p *Package) bool { + for _, f := range p.GoFiles { + if strings.HasSuffix(f, "_test.go") { + return true + } + } + return false +} + +// determineRootDirs returns a mapping from absolute directories that could +// contain code to their corresponding import path prefixes. +func (state *golistState) determineRootDirs() (map[string]string, error) { + env, err := state.getEnv() + if err != nil { + return nil, err + } + if env["GOMOD"] != "" { + state.rootsOnce.Do(func() { + state.rootDirs, state.rootDirsError = state.determineRootDirsModules() + }) + } else { + state.rootsOnce.Do(func() { + state.rootDirs, state.rootDirsError = state.determineRootDirsGOPATH() + }) + } + return state.rootDirs, state.rootDirsError +} + +func (state *golistState) determineRootDirsModules() (map[string]string, error) { + // List all of the modules--the first will be the directory for the main + // module. Any replaced modules will also need to be treated as roots. + // Editing files in the module cache isn't a great idea, so we don't + // plan to ever support that. + out, err := state.invokeGo("list", "-m", "-json", "all") + if err != nil { + // 'go list all' will fail if we're outside of a module and + // GO111MODULE=on. Try falling back without 'all'. 
+ var innerErr error + out, innerErr = state.invokeGo("list", "-m", "-json") + if innerErr != nil { + return nil, err + } + } + roots := map[string]string{} + modules := map[string]string{} + var i int + for dec := json.NewDecoder(out); dec.More(); { + mod := new(gocommand.ModuleJSON) + if err := dec.Decode(mod); err != nil { + return nil, err + } + if mod.Dir != "" && mod.Path != "" { + // This is a valid module; add it to the map. + absDir, err := filepath.Abs(mod.Dir) + if err != nil { + return nil, err + } + modules[absDir] = mod.Path + // The first result is the main module. + if i == 0 || mod.Replace != nil && mod.Replace.Path != "" { + roots[absDir] = mod.Path + } + } + i++ + } + return roots, nil +} + +func (state *golistState) determineRootDirsGOPATH() (map[string]string, error) { + m := map[string]string{} + for _, dir := range filepath.SplitList(state.mustGetEnv()["GOPATH"]) { + absDir, err := filepath.Abs(dir) + if err != nil { + return nil, err + } + m[filepath.Join(absDir, "src")] = "" + } + return m, nil +} + +func extractImports(filename string, contents []byte) ([]string, error) { + f, err := parser.ParseFile(token.NewFileSet(), filename, contents, parser.ImportsOnly) // TODO(matloob): reuse fileset? + if err != nil { + return nil, err + } + var res []string + for _, imp := range f.Imports { + quotedPath := imp.Path.Value + path, err := strconv.Unquote(quotedPath) + if err != nil { + return nil, err + } + res = append(res, path) + } + return res, nil +} + +// reclaimPackage attempts to reuse a package that failed to load in an overlay. +// +// If the package has errors and has no Name, GoFiles, or Imports, +// then it's possible that it doesn't yet exist on disk. +func reclaimPackage(pkg *Package, id string, filename string, contents []byte) bool { + // TODO(rstambler): Check the message of the actual error? + // It differs between $GOPATH and module mode. 
+ if pkg.ID != id { + return false + } + if len(pkg.Errors) != 1 { + return false + } + if pkg.Name != "" || pkg.ExportFile != "" { + return false + } + if len(pkg.GoFiles) > 0 || len(pkg.CompiledGoFiles) > 0 || len(pkg.OtherFiles) > 0 { + return false + } + if len(pkg.Imports) > 0 { + return false + } + pkgName, ok := extractPackageName(filename, contents) + if !ok { + return false + } + pkg.Name = pkgName + pkg.Errors = nil + return true +} + +func extractPackageName(filename string, contents []byte) (string, bool) { + // TODO(rstambler): Check the message of the actual error? + // It differs between $GOPATH and module mode. + f, err := parser.ParseFile(token.NewFileSet(), filename, contents, parser.PackageClauseOnly) // TODO(matloob): reuse fileset? + if err != nil { + return "", false + } + return f.Name.Name, true +} + +// commonDir returns the directory that all files are in, "" if files is empty, +// or an error if they aren't in the same directory. +func commonDir(files []string) (string, error) { + seen := make(map[string]bool) + for _, f := range files { + seen[filepath.Dir(f)] = true + } + if len(seen) > 1 { + return "", fmt.Errorf("files (%v) are in more than one directory: %v", files, seen) + } + for k := range seen { + // seen has only one element; return it. + return k, nil + } + return "", nil // no files +} + +// It is possible that the files in the disk directory dir have a different package +// name from newName, which is deduced from the overlays. If the on-disk files all +// share a single package name that differs from newName, maybeFixPackageName renames +// them to newName, unless the overlay file looks like an x test for the existing package. 
+func maybeFixPackageName(newName string, isTestFile bool, pkgsOfDir []*Package) { + names := make(map[string]int) + for _, p := range pkgsOfDir { + names[p.Name]++ + } + if len(names) != 1 { + // some files are in different packages + return + } + var oldName string + for k := range names { + oldName = k + } + if newName == oldName { + return + } + // We might have a case where all of the package names in the directory are + // the same, but the overlay file is for an x test, which belongs to its + // own package. If the x test does not yet exist on disk, we may not yet + // have its package name on disk, but we should not rename the packages. + // + // We use a heuristic to determine if this file belongs to an x test: + // the overlay file's package name should have a _test suffix, or be a + // prefix of the on-disk package name plus "_test". + maybeXTest := strings.HasPrefix(oldName+"_test", newName) || strings.HasSuffix(newName, "_test") + if isTestFile && maybeXTest { + return + } + for _, p := range pkgsOfDir { + p.Name = newName + } +} + +// This function is copy-pasted from +// https://github.com/golang/go/blob/9706f510a5e2754595d716bd64be8375997311fb/src/cmd/go/internal/search/search.go#L360. +// It should be deleted when we remove support for overlays from go/packages. +// +// NOTE: This does not handle any ./... or ./ style queries, as this function +// doesn't know the working directory. +// +// matchPattern(pattern)(name) reports whether +// name matches pattern. Pattern is a limited glob +// pattern in which '...' means 'any string' and there +// is no other special syntax. +// Unfortunately, there are two special cases. Quoting "go help packages": +// +// First, /... at the end of the pattern can match an empty string, +// so that net/... matches both net and packages in its subdirectories, like net/http. 
+// Second, any slash-separated pattern element containing a wildcard never +// participates in a match of the "vendor" element in the path of a vendored +// package, so that ./... does not match packages in subdirectories of +// ./vendor or ./mycode/vendor, but ./vendor/... and ./mycode/vendor/... do. +// Note, however, that a directory named vendor that itself contains code +// is not a vendored package: cmd/vendor would be a command named vendor, +// and the pattern cmd/... matches it. +func matchPattern(pattern string) func(name string) bool { + // Convert pattern to regular expression. + // The strategy for the trailing /... is to nest it in an explicit ? expression. + // The strategy for the vendor exclusion is to change the unmatchable + // vendor strings to a disallowed code point (vendorChar) and to use + // "(anything but that codepoint)*" as the implementation of the ... wildcard. + // This is a bit complicated but the obvious alternative, + // namely a hand-written search like in most shell glob matchers, + // is too easy to make accidentally exponential. + // Using package regexp guarantees linear-time matching. 
+ + const vendorChar = "\x00" + + if strings.Contains(pattern, vendorChar) { + return func(name string) bool { return false } + } + + re := regexp.QuoteMeta(pattern) + re = replaceVendor(re, vendorChar) + switch { + case strings.HasSuffix(re, `/`+vendorChar+`/\.\.\.`): + re = strings.TrimSuffix(re, `/`+vendorChar+`/\.\.\.`) + `(/vendor|/` + vendorChar + `/\.\.\.)` + case re == vendorChar+`/\.\.\.`: + re = `(/vendor|/` + vendorChar + `/\.\.\.)` + case strings.HasSuffix(re, `/\.\.\.`): + re = strings.TrimSuffix(re, `/\.\.\.`) + `(/\.\.\.)?` + } + re = strings.ReplaceAll(re, `\.\.\.`, `[^`+vendorChar+`]*`) + + reg := regexp.MustCompile(`^` + re + `$`) + + return func(name string) bool { + if strings.Contains(name, vendorChar) { + return false + } + return reg.MatchString(replaceVendor(name, vendorChar)) + } +} + +// replaceVendor returns the result of replacing +// non-trailing vendor path elements in x with repl. +func replaceVendor(x, repl string) string { + if !strings.Contains(x, "vendor") { + return x + } + elem := strings.Split(x, "/") + for i := 0; i < len(elem)-1; i++ { + if elem[i] == "vendor" { + elem[i] = repl + } + } + return strings.Join(elem, "/") +} diff --git a/cluster-autoscaler/vendor/golang.org/x/tools/go/packages/loadmode_string.go b/cluster-autoscaler/vendor/golang.org/x/tools/go/packages/loadmode_string.go new file mode 100644 index 000000000000..5c080d21b54a --- /dev/null +++ b/cluster-autoscaler/vendor/golang.org/x/tools/go/packages/loadmode_string.go @@ -0,0 +1,57 @@ +// Copyright 2019 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. 
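As a side note, the `...` semantics described in the comment above can be exercised with a small standalone sketch. This is a simplified illustration, not the vendored code: it reproduces the trailing `/...` special case (so `net/...` also matches `net` itself) but deliberately omits the vendor-exclusion logic.

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// simpleMatch is a trimmed-down version of the matcher above: "..." means
// "any string", and a trailing "/..." may also match the empty string, so
// that "net/..." matches both "net" and "net/http".
func simpleMatch(pattern string) func(name string) bool {
	re := regexp.QuoteMeta(pattern)
	if strings.HasSuffix(re, `/\.\.\.`) {
		// Let "net/..." match "net" itself as well as its subpackages.
		re = strings.TrimSuffix(re, `/\.\.\.`) + `(/\.\.\.)?`
	}
	re = strings.ReplaceAll(re, `\.\.\.`, `.*`)
	reg := regexp.MustCompile(`^` + re + `$`)
	return reg.MatchString
}

func main() {
	m := simpleMatch("net/...")
	fmt.Println(m("net"), m("net/http"), m("network")) // true true false
}
```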
+ +package packages + +import ( + "fmt" + "strings" +) + +var allModes = []LoadMode{ + NeedName, + NeedFiles, + NeedCompiledGoFiles, + NeedImports, + NeedDeps, + NeedExportFile, + NeedTypes, + NeedSyntax, + NeedTypesInfo, + NeedTypesSizes, +} + +var modeStrings = []string{ + "NeedName", + "NeedFiles", + "NeedCompiledGoFiles", + "NeedImports", + "NeedDeps", + "NeedExportFile", + "NeedTypes", + "NeedSyntax", + "NeedTypesInfo", + "NeedTypesSizes", +} + +func (mod LoadMode) String() string { + m := mod + if m == 0 { + return "LoadMode(0)" + } + var out []string + for i, x := range allModes { + if x > m { + break + } + if (m & x) != 0 { + out = append(out, modeStrings[i]) + m = m ^ x + } + } + if m != 0 { + out = append(out, "Unknown") + } + return fmt.Sprintf("LoadMode(%s)", strings.Join(out, "|")) +} diff --git a/cluster-autoscaler/vendor/golang.org/x/tools/go/packages/packages.go b/cluster-autoscaler/vendor/golang.org/x/tools/go/packages/packages.go new file mode 100644 index 000000000000..da1a27eea62e --- /dev/null +++ b/cluster-autoscaler/vendor/golang.org/x/tools/go/packages/packages.go @@ -0,0 +1,1332 @@ +// Copyright 2018 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package packages + +// See doc.go for package documentation and implementation notes. + +import ( + "context" + "encoding/json" + "fmt" + "go/ast" + "go/parser" + "go/scanner" + "go/token" + "go/types" + "io" + "io/ioutil" + "log" + "os" + "path/filepath" + "runtime" + "strings" + "sync" + "time" + + "golang.org/x/tools/go/gcexportdata" + "golang.org/x/tools/internal/gocommand" + "golang.org/x/tools/internal/packagesinternal" + "golang.org/x/tools/internal/typeparams" + "golang.org/x/tools/internal/typesinternal" +) + +// A LoadMode controls the amount of detail to return when loading. +// The bits below can be combined to specify which fields should be +// filled in the result packages. 
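The String method above walks the known bits in ascending order, clears each matched bit, and reports any remainder as "Unknown". The same pattern, reduced to a toy three-bit mode (the Mode type and its constants here are invented for illustration, not part of go/packages):

```go
package main

import (
	"fmt"
	"strings"
)

// Mode mirrors the bit-flag style of LoadMode: each capability is a
// distinct bit, and a mode is a bitwise OR of capabilities.
type Mode int

const (
	ModeName Mode = 1 << iota
	ModeFiles
	ModeImports
)

var modeNames = map[Mode]string{
	ModeName:    "ModeName",
	ModeFiles:   "ModeFiles",
	ModeImports: "ModeImports",
}

// String decomposes the bit set the same way LoadMode.String does:
// test each known bit, clear it, and flag any leftover bits as "Unknown".
func (m Mode) String() string {
	if m == 0 {
		return "Mode(0)"
	}
	var out []string
	for _, bit := range []Mode{ModeName, ModeFiles, ModeImports} {
		if m&bit != 0 {
			out = append(out, modeNames[bit])
			m &^= bit // clear the bit we just named
		}
	}
	if m != 0 {
		out = append(out, "Unknown")
	}
	return fmt.Sprintf("Mode(%s)", strings.Join(out, "|"))
}

func main() {
	fmt.Println(ModeName | ModeImports) // Mode(ModeName|ModeImports)
}
```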
+// The zero value is a special case, equivalent to combining +// the NeedName, NeedFiles, and NeedCompiledGoFiles bits. +// ID and Errors (if present) will always be filled. +// Load may return more information than requested. +type LoadMode int + +const ( + // NeedName adds Name and PkgPath. + NeedName LoadMode = 1 << iota + + // NeedFiles adds GoFiles and OtherFiles. + NeedFiles + + // NeedCompiledGoFiles adds CompiledGoFiles. + NeedCompiledGoFiles + + // NeedImports adds Imports. If NeedDeps is not set, the Imports field will contain + // "placeholder" Packages with only the ID set. + NeedImports + + // NeedDeps adds the fields requested by the LoadMode in the packages in Imports. + NeedDeps + + // NeedExportFile adds ExportFile. + NeedExportFile + + // NeedTypes adds Types, Fset, and IllTyped. + NeedTypes + + // NeedSyntax adds Syntax. + NeedSyntax + + // NeedTypesInfo adds TypesInfo. + NeedTypesInfo + + // NeedTypesSizes adds TypesSizes. + NeedTypesSizes + + // needInternalDepsErrors adds the internal deps errors field for use by gopls. + needInternalDepsErrors + + // needInternalForTest adds the internal forTest field. + // Tests must also be set on the context for this field to be populated. + needInternalForTest + + // typecheckCgo enables full support for type checking cgo. Requires Go 1.15+. + // Modifies CompiledGoFiles and Types, and has no effect on its own. + typecheckCgo + + // NeedModule adds Module. + NeedModule + + // NeedEmbedFiles adds EmbedFiles. + NeedEmbedFiles + + // NeedEmbedPatterns adds EmbedPatterns. + NeedEmbedPatterns +) + +const ( + // Deprecated: LoadFiles exists for historical compatibility + // and should not be used. Please directly specify the needed fields using the Need values. + LoadFiles = NeedName | NeedFiles | NeedCompiledGoFiles + + // Deprecated: LoadImports exists for historical compatibility + // and should not be used. Please directly specify the needed fields using the Need values. 
+ LoadImports = LoadFiles | NeedImports + + // Deprecated: LoadTypes exists for historical compatibility + // and should not be used. Please directly specify the needed fields using the Need values. + LoadTypes = LoadImports | NeedTypes | NeedTypesSizes + + // Deprecated: LoadSyntax exists for historical compatibility + // and should not be used. Please directly specify the needed fields using the Need values. + LoadSyntax = LoadTypes | NeedSyntax | NeedTypesInfo + + // Deprecated: LoadAllSyntax exists for historical compatibility + // and should not be used. Please directly specify the needed fields using the Need values. + LoadAllSyntax = LoadSyntax | NeedDeps + + // Deprecated: NeedExportsFile is a historical misspelling of NeedExportFile. + NeedExportsFile = NeedExportFile +) + +// A Config specifies details about how packages should be loaded. +// The zero value is a valid configuration. +// Calls to Load do not modify this struct. +type Config struct { + // Mode controls the level of information returned for each package. + Mode LoadMode + + // Context specifies the context for the load operation. + // If the context is cancelled, the loader may stop early + // and return an ErrCancelled error. + // If Context is nil, the load cannot be cancelled. + Context context.Context + + // Logf is the logger for the config. + // If the user provides a logger, debug logging is enabled. + // If the GOPACKAGESDEBUG environment variable is set to true, + // but the logger is nil, default to log.Printf. + Logf func(format string, args ...interface{}) + + // Dir is the directory in which to run the build system's query tool + // that provides information about the packages. + // If Dir is empty, the tool is run in the current directory. + Dir string + + // Env is the environment to use when invoking the build system's query tool. + // If Env is nil, the current environment is used. 
+ // As in os/exec's Cmd, only the last value in the slice for + // each environment key is used. To specify the setting of only + // a few variables, append to the current environment, as in: + // + // opt.Env = append(os.Environ(), "GOOS=plan9", "GOARCH=386") + // + Env []string + + // gocmdRunner guards go command calls from concurrency errors. + gocmdRunner *gocommand.Runner + + // BuildFlags is a list of command-line flags to be passed through to + // the build system's query tool. + BuildFlags []string + + // modFile will be used for -modfile in go command invocations. + modFile string + + // modFlag will be used for -modfile in go command invocations. + modFlag string + + // Fset provides source position information for syntax trees and types. + // If Fset is nil, Load will use a new fileset, but preserve Fset's value. + Fset *token.FileSet + + // ParseFile is called to read and parse each file + // when preparing a package's type-checked syntax tree. + // It must be safe to call ParseFile simultaneously from multiple goroutines. + // If ParseFile is nil, the loader will use parser.ParseFile. + // + // ParseFile should parse the source from src and use filename only for + // recording position information. + // + // An application may supply a custom implementation of ParseFile + // to change the effective file contents or the behavior of the parser, + // or to modify the syntax tree. For example, selectively eliminating + // unwanted function bodies can significantly accelerate type checking. + ParseFile func(fset *token.FileSet, filename string, src []byte) (*ast.File, error) + + // If Tests is set, the loader includes not just the packages + // matching a particular pattern but also any related test packages, + // including test-only variants of the package and the test executable. 
+ // + // For example, when using the go command, loading "fmt" with Tests=true + // returns four packages, with IDs "fmt" (the standard package), + // "fmt [fmt.test]" (the package as compiled for the test), + // "fmt_test" (the test functions from source files in package fmt_test), + // and "fmt.test" (the test binary). + // + // In build systems with explicit names for tests, + // setting Tests may have no effect. + Tests bool + + // Overlay provides a mapping of absolute file paths to file contents. + // If the file with the given path already exists, the parser will use the + // alternative file contents provided by the map. + // + // Overlays provide incomplete support for when a given file doesn't + // already exist on disk. See the package doc above for more details. + Overlay map[string][]byte +} + +// driver is the type for functions that query the build system for the +// packages named by the patterns. +type driver func(cfg *Config, patterns ...string) (*driverResponse, error) + +// driverResponse contains the results for a driver query. +type driverResponse struct { + // NotHandled is returned if the request can't be handled by the current + // driver. If an external driver returns a response with NotHandled, the + // rest of the driverResponse is ignored, and go/packages will fallback + // to the next driver. If go/packages is extended in the future to support + // lists of multiple drivers, go/packages will fall back to the next driver. + NotHandled bool + + // Sizes, if not nil, is the types.Sizes to use when type checking. + Sizes *types.StdSizes + + // Roots is the set of package IDs that make up the root packages. + // We have to encode this separately because when we encode a single package + // we cannot know if it is one of the roots as that requires knowledge of the + // graph it is part of. + Roots []string `json:",omitempty"` + + // Packages is the full set of packages in the graph. + // The packages are not connected into a graph. 
+ // The Imports if populated will be stubs that only have their ID set. + // Imports will be connected and then type and syntax information added in a + // later pass (see refine). + Packages []*Package + + // GoVersion is the minor version number used by the driver + // (e.g. the go command on the PATH) when selecting .go files. + // Zero means unknown. + GoVersion int +} + +// Load loads and returns the Go packages named by the given patterns. +// +// Config specifies loading options; +// nil behaves the same as an empty Config. +// +// Load returns an error if any of the patterns was invalid +// as defined by the underlying build system. +// It may return an empty list of packages without an error, +// for instance for an empty expansion of a valid wildcard. +// Errors associated with a particular package are recorded in the +// corresponding Package's Errors list, and do not cause Load to +// return an error. Clients may need to handle such errors before +// proceeding with further analysis. The PrintErrors function is +// provided for convenient display of all errors. +func Load(cfg *Config, patterns ...string) ([]*Package, error) { + l := newLoader(cfg) + response, err := defaultDriver(&l.Config, patterns...) + if err != nil { + return nil, err + } + l.sizes = response.Sizes + return l.refine(response) +} + +// defaultDriver is a driver that implements go/packages' fallback behavior. +// It will try to request to an external driver, if one exists. If there's +// no external driver, or the driver returns a response with NotHandled set, +// defaultDriver will fall back to the go list driver. +func defaultDriver(cfg *Config, patterns ...string) (*driverResponse, error) { + driver := findExternalDriver(cfg) + if driver == nil { + driver = goListDriver + } + response, err := driver(cfg, patterns...) + if err != nil { + return response, err + } else if response.NotHandled { + return goListDriver(cfg, patterns...) 
+ } + return response, nil +} + +// A Package describes a loaded Go package. +type Package struct { + // ID is a unique identifier for a package, + // in a syntax provided by the underlying build system. + // + // Because the syntax varies based on the build system, + // clients should treat IDs as opaque and not attempt to + // interpret them. + ID string + + // Name is the package name as it appears in the package source code. + Name string + + // PkgPath is the package path as used by the go/types package. + PkgPath string + + // Errors contains any errors encountered querying the metadata + // of the package, or while parsing or type-checking its files. + Errors []Error + + // TypeErrors contains the subset of errors produced during type checking. + TypeErrors []types.Error + + // GoFiles lists the absolute file paths of the package's Go source files. + // It may include files that should not be compiled, for example because + // they contain non-matching build tags, are documentary pseudo-files such as + // unsafe/unsafe.go or builtin/builtin.go, or are subject to cgo preprocessing. + GoFiles []string + + // CompiledGoFiles lists the absolute file paths of the package's source + // files that are suitable for type checking. + // This may differ from GoFiles if files are processed before compilation. + CompiledGoFiles []string + + // OtherFiles lists the absolute file paths of the package's non-Go source files, + // including assembly, C, C++, Fortran, Objective-C, SWIG, and so on. + OtherFiles []string + + // EmbedFiles lists the absolute file paths of the package's files + // embedded with go:embed. + EmbedFiles []string + + // EmbedPatterns lists the absolute file patterns of the package's + // files embedded with go:embed. + EmbedPatterns []string + + // IgnoredFiles lists source files that are not part of the package + // using the current build configuration but that might be part of + // the package using other build configurations. 
+ IgnoredFiles []string + + // ExportFile is the absolute path to a file containing type + // information for the package as provided by the build system. + ExportFile string + + // Imports maps import paths appearing in the package's Go source files + // to corresponding loaded Packages. + Imports map[string]*Package + + // Types provides type information for the package. + // The NeedTypes LoadMode bit sets this field for packages matching the + // patterns; type information for dependencies may be missing or incomplete, + // unless NeedDeps and NeedImports are also set. + Types *types.Package + + // Fset provides position information for Types, TypesInfo, and Syntax. + // It is set only when Types is set. + Fset *token.FileSet + + // IllTyped indicates whether the package or any dependency contains errors. + // It is set only when Types is set. + IllTyped bool + + // Syntax is the package's syntax trees, for the files listed in CompiledGoFiles. + // + // The NeedSyntax LoadMode bit populates this field for packages matching the patterns. + // If NeedDeps and NeedImports are also set, this field will also be populated + // for dependencies. + // + // Syntax is kept in the same order as CompiledGoFiles, with the caveat that nils are + // removed. If parsing returned nil, Syntax may be shorter than CompiledGoFiles. + Syntax []*ast.File + + // TypesInfo provides type information about the package's syntax trees. + // It is set only when Syntax is set. + TypesInfo *types.Info + + // TypesSizes provides the effective size function for types in TypesInfo. + TypesSizes types.Sizes + + // forTest is the package under test, if any. + forTest string + + // depsErrors is the DepsErrors field from the go list response, if any. + depsErrors []*packagesinternal.PackageError + + // module is the module information for the package if it exists. + Module *Module +} + +// Module provides module information for a package. 
+type Module struct { + Path string // module path + Version string // module version + Replace *Module // replaced by this module + Time *time.Time // time version was created + Main bool // is this the main module? + Indirect bool // is this module only an indirect dependency of main module? + Dir string // directory holding files for this module, if any + GoMod string // path to go.mod file used when loading this module, if any + GoVersion string // go version used in module + Error *ModuleError // error loading module +} + +// ModuleError holds errors loading a module. +type ModuleError struct { + Err string // the error itself +} + +func init() { + packagesinternal.GetForTest = func(p interface{}) string { + return p.(*Package).forTest + } + packagesinternal.GetDepsErrors = func(p interface{}) []*packagesinternal.PackageError { + return p.(*Package).depsErrors + } + packagesinternal.GetGoCmdRunner = func(config interface{}) *gocommand.Runner { + return config.(*Config).gocmdRunner + } + packagesinternal.SetGoCmdRunner = func(config interface{}, runner *gocommand.Runner) { + config.(*Config).gocmdRunner = runner + } + packagesinternal.SetModFile = func(config interface{}, value string) { + config.(*Config).modFile = value + } + packagesinternal.SetModFlag = func(config interface{}, value string) { + config.(*Config).modFlag = value + } + packagesinternal.TypecheckCgo = int(typecheckCgo) + packagesinternal.DepsErrors = int(needInternalDepsErrors) + packagesinternal.ForTest = int(needInternalForTest) +} + +// An Error describes a problem with a package's metadata, syntax, or types. +type Error struct { + Pos string // "file:line:col" or "file:line" or "" or "-" + Msg string + Kind ErrorKind +} + +// ErrorKind describes the source of the error, allowing the user to +// differentiate between errors generated by the driver, the parser, or the +// type-checker. 
+type ErrorKind int + +const ( + UnknownError ErrorKind = iota + ListError + ParseError + TypeError +) + +func (err Error) Error() string { + pos := err.Pos + if pos == "" { + pos = "-" // like token.Position{}.String() + } + return pos + ": " + err.Msg +} + +// flatPackage is the JSON form of Package +// It drops all the type and syntax fields, and transforms the Imports +// +// TODO(adonovan): identify this struct with Package, effectively +// publishing the JSON protocol. +type flatPackage struct { + ID string + Name string `json:",omitempty"` + PkgPath string `json:",omitempty"` + Errors []Error `json:",omitempty"` + GoFiles []string `json:",omitempty"` + CompiledGoFiles []string `json:",omitempty"` + OtherFiles []string `json:",omitempty"` + EmbedFiles []string `json:",omitempty"` + EmbedPatterns []string `json:",omitempty"` + IgnoredFiles []string `json:",omitempty"` + ExportFile string `json:",omitempty"` + Imports map[string]string `json:",omitempty"` +} + +// MarshalJSON returns the Package in its JSON form. +// For the most part, the structure fields are written out unmodified, and +// the type and syntax fields are skipped. +// The imports are written out as just a map of path to package id. +// The errors are written using a custom type that tries to preserve the +// structure of error types we know about. +// +// This method exists to enable support for additional build systems. It is +// not intended for use by clients of the API and we may change the format. 
+func (p *Package) MarshalJSON() ([]byte, error) { + flat := &flatPackage{ + ID: p.ID, + Name: p.Name, + PkgPath: p.PkgPath, + Errors: p.Errors, + GoFiles: p.GoFiles, + CompiledGoFiles: p.CompiledGoFiles, + OtherFiles: p.OtherFiles, + EmbedFiles: p.EmbedFiles, + EmbedPatterns: p.EmbedPatterns, + IgnoredFiles: p.IgnoredFiles, + ExportFile: p.ExportFile, + } + if len(p.Imports) > 0 { + flat.Imports = make(map[string]string, len(p.Imports)) + for path, ipkg := range p.Imports { + flat.Imports[path] = ipkg.ID + } + } + return json.Marshal(flat) +} + +// UnmarshalJSON reads in a Package from its JSON format. +// See MarshalJSON for details about the format accepted. +func (p *Package) UnmarshalJSON(b []byte) error { + flat := &flatPackage{} + if err := json.Unmarshal(b, &flat); err != nil { + return err + } + *p = Package{ + ID: flat.ID, + Name: flat.Name, + PkgPath: flat.PkgPath, + Errors: flat.Errors, + GoFiles: flat.GoFiles, + CompiledGoFiles: flat.CompiledGoFiles, + OtherFiles: flat.OtherFiles, + EmbedFiles: flat.EmbedFiles, + EmbedPatterns: flat.EmbedPatterns, + ExportFile: flat.ExportFile, + } + if len(flat.Imports) > 0 { + p.Imports = make(map[string]*Package, len(flat.Imports)) + for path, id := range flat.Imports { + p.Imports[path] = &Package{ID: id} + } + } + return nil +} + +func (p *Package) String() string { return p.ID } + +// loaderPackage augments Package with state used during the loading phase +type loaderPackage struct { + *Package + importErrors map[string]error // maps each bad import to its error + loadOnce sync.Once + color uint8 // for cycle detection + needsrc bool // load from source (Mode >= LoadTypes) + needtypes bool // type information is either requested or depended on + initial bool // package was matched by a pattern + goVersion int // minor version number of go command on PATH +} + +// loader holds the working state of a single call to load. 
+type loader struct { + pkgs map[string]*loaderPackage + Config + sizes types.Sizes + parseCache map[string]*parseValue + parseCacheMu sync.Mutex + exportMu sync.Mutex // enforces mutual exclusion of exportdata operations + + // Config.Mode contains the implied mode (see impliedLoadMode). + // Implied mode contains all the fields we need the data for. + // In requestedMode there are the actually requested fields. + // We'll zero them out before returning packages to the user. + // This makes it easier for us to get the conditions where + // we need certain modes right. + requestedMode LoadMode +} + +type parseValue struct { + f *ast.File + err error + ready chan struct{} +} + +func newLoader(cfg *Config) *loader { + ld := &loader{ + parseCache: map[string]*parseValue{}, + } + if cfg != nil { + ld.Config = *cfg + // If the user has provided a logger, use it. + ld.Config.Logf = cfg.Logf + } + if ld.Config.Logf == nil { + // If the GOPACKAGESDEBUG environment variable is set to true, + // but the user has not provided a logger, default to log.Printf. + if debug { + ld.Config.Logf = log.Printf + } else { + ld.Config.Logf = func(format string, args ...interface{}) {} + } + } + if ld.Config.Mode == 0 { + ld.Config.Mode = NeedName | NeedFiles | NeedCompiledGoFiles // Preserve zero behavior of Mode for backwards compatibility. + } + if ld.Config.Env == nil { + ld.Config.Env = os.Environ() + } + if ld.Config.gocmdRunner == nil { + ld.Config.gocmdRunner = &gocommand.Runner{} + } + if ld.Context == nil { + ld.Context = context.Background() + } + if ld.Dir == "" { + if dir, err := os.Getwd(); err == nil { + ld.Dir = dir + } + } + + // Save the actually requested fields. We'll zero them out before returning packages to the user. 
+ ld.requestedMode = ld.Mode + ld.Mode = impliedLoadMode(ld.Mode) + + if ld.Mode&NeedTypes != 0 || ld.Mode&NeedSyntax != 0 { + if ld.Fset == nil { + ld.Fset = token.NewFileSet() + } + + // ParseFile is required even in LoadTypes mode + // because we load source if export data is missing. + if ld.ParseFile == nil { + ld.ParseFile = func(fset *token.FileSet, filename string, src []byte) (*ast.File, error) { + const mode = parser.AllErrors | parser.ParseComments + return parser.ParseFile(fset, filename, src, mode) + } + } + } + + return ld +} + +// refine connects the supplied packages into a graph and then adds type +// and syntax information as requested by the LoadMode. +func (ld *loader) refine(response *driverResponse) ([]*Package, error) { + roots := response.Roots + rootMap := make(map[string]int, len(roots)) + for i, root := range roots { + rootMap[root] = i + } + ld.pkgs = make(map[string]*loaderPackage) + // first pass, fixup and build the map and roots + var initial = make([]*loaderPackage, len(roots)) + for _, pkg := range response.Packages { + rootIndex := -1 + if i, found := rootMap[pkg.ID]; found { + rootIndex = i + } + + // Overlays can invalidate export data. + // TODO(matloob): make this check fine-grained based on dependencies on overlaid files + exportDataInvalid := len(ld.Overlay) > 0 || pkg.ExportFile == "" && pkg.PkgPath != "unsafe" + // This package needs type information if the caller requested types and the package is + // either a root, or it's a non-root and the user requested dependencies ... + needtypes := (ld.Mode&NeedTypes|NeedTypesInfo != 0 && (rootIndex >= 0 || ld.Mode&NeedDeps != 0)) + // This package needs source if the call requested source (or types info, which implies source) + // and the package is either a root, or it's a non-root and the user requested dependencies... + needsrc := ((ld.Mode&(NeedSyntax|NeedTypesInfo) != 0 && (rootIndex >= 0 || ld.Mode&NeedDeps != 0)) || + // ... 
or if we need types and the exportData is invalid. We fall back to (incompletely) + // typechecking packages from source if they fail to compile. + (ld.Mode&(NeedTypes|NeedTypesInfo) != 0 && exportDataInvalid)) && pkg.PkgPath != "unsafe" + lpkg := &loaderPackage{ + Package: pkg, + needtypes: needtypes, + needsrc: needsrc, + goVersion: response.GoVersion, + } + ld.pkgs[lpkg.ID] = lpkg + if rootIndex >= 0 { + initial[rootIndex] = lpkg + lpkg.initial = true + } + } + for i, root := range roots { + if initial[i] == nil { + return nil, fmt.Errorf("root package %v is missing", root) + } + } + + // Materialize the import graph. + + const ( + white = 0 // new + grey = 1 // in progress + black = 2 // complete + ) + + // visit traverses the import graph, depth-first, + // and materializes the graph as Packages.Imports. + // + // Valid imports are saved in the Packages.Import map. + // Invalid imports (cycles and missing nodes) are saved in the importErrors map. + // Thus, even in the presence of both kinds of errors, the Import graph remains a DAG. + // + // visit returns whether the package needs src or has a transitive + // dependency on a package that does. These are the only packages + // for which we load source code. + var stack []*loaderPackage + var visit func(lpkg *loaderPackage) bool + var srcPkgs []*loaderPackage + visit = func(lpkg *loaderPackage) bool { + switch lpkg.color { + case black: + return lpkg.needsrc + case grey: + panic("internal error: grey node") + } + lpkg.color = grey + stack = append(stack, lpkg) // push + stubs := lpkg.Imports // the structure form has only stubs with the ID in the Imports + // If NeedImports isn't set, the imports fields will all be zeroed out. 
+ if ld.Mode&NeedImports != 0 { + lpkg.Imports = make(map[string]*Package, len(stubs)) + for importPath, ipkg := range stubs { + var importErr error + imp := ld.pkgs[ipkg.ID] + if imp == nil { + // (includes package "C" when DisableCgo) + importErr = fmt.Errorf("missing package: %q", ipkg.ID) + } else if imp.color == grey { + importErr = fmt.Errorf("import cycle: %s", stack) + } + if importErr != nil { + if lpkg.importErrors == nil { + lpkg.importErrors = make(map[string]error) + } + lpkg.importErrors[importPath] = importErr + continue + } + + if visit(imp) { + lpkg.needsrc = true + } + lpkg.Imports[importPath] = imp.Package + } + } + if lpkg.needsrc { + srcPkgs = append(srcPkgs, lpkg) + } + if ld.Mode&NeedTypesSizes != 0 { + lpkg.TypesSizes = ld.sizes + } + stack = stack[:len(stack)-1] // pop + lpkg.color = black + + return lpkg.needsrc + } + + if ld.Mode&NeedImports == 0 { + // We do this to drop the stub import packages that we are not even going to try to resolve. + for _, lpkg := range initial { + lpkg.Imports = nil + } + } else { + // For each initial package, create its import DAG. + for _, lpkg := range initial { + visit(lpkg) + } + } + if ld.Mode&NeedImports != 0 && ld.Mode&NeedTypes != 0 { + for _, lpkg := range srcPkgs { + // Complete type information is required for the + // immediate dependencies of each source package. + for _, ipkg := range lpkg.Imports { + imp := ld.pkgs[ipkg.ID] + imp.needtypes = true + } + } + } + // Load type data and syntax if needed, starting at + // the initial packages (roots of the import DAG). 
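The white/grey/black coloring used by visit above is the classic DFS cycle-detection scheme: grey marks a node currently on the DFS stack, so reaching a grey node again means the edge closes a cycle. A minimal standalone sketch of the same idea, on an invented string-keyed graph rather than loaderPackages:

```go
package main

import "fmt"

const (
	white = 0 // not visited yet
	grey  = 1 // on the current DFS stack
	black = 2 // fully processed
)

// findCycle reports whether the directed graph contains a cycle,
// using the same three-color DFS as the visit closure above.
func findCycle(graph map[string][]string) bool {
	color := make(map[string]int)
	var visit func(n string) bool
	visit = func(n string) bool {
		switch color[n] {
		case black:
			return false
		case grey:
			return true // back edge: we are revisiting a node on the stack
		}
		color[n] = grey
		for _, m := range graph[n] {
			if visit(m) {
				return true
			}
		}
		color[n] = black
		return false
	}
	for n := range graph {
		if visit(n) {
			return true
		}
	}
	return false
}

func main() {
	dag := map[string][]string{"a": {"b"}, "b": {"c"}, "c": nil}
	cyc := map[string][]string{"a": {"b"}, "b": {"a"}}
	fmt.Println(findCycle(dag), findCycle(cyc)) // false true
}
```

In the real loader, a grey revisit does not abort the traversal; the offending edge is recorded in importErrors and dropped, so the materialized Imports graph stays a DAG.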
+ if ld.Mode&NeedTypes != 0 || ld.Mode&NeedSyntax != 0 { + var wg sync.WaitGroup + for _, lpkg := range initial { + wg.Add(1) + go func(lpkg *loaderPackage) { + ld.loadRecursive(lpkg) + wg.Done() + }(lpkg) + } + wg.Wait() + } + + result := make([]*Package, len(initial)) + for i, lpkg := range initial { + result[i] = lpkg.Package + } + for i := range ld.pkgs { + // Clear all unrequested fields, + // to catch programs that use more than they request. + if ld.requestedMode&NeedName == 0 { + ld.pkgs[i].Name = "" + ld.pkgs[i].PkgPath = "" + } + if ld.requestedMode&NeedFiles == 0 { + ld.pkgs[i].GoFiles = nil + ld.pkgs[i].OtherFiles = nil + ld.pkgs[i].IgnoredFiles = nil + } + if ld.requestedMode&NeedEmbedFiles == 0 { + ld.pkgs[i].EmbedFiles = nil + } + if ld.requestedMode&NeedEmbedPatterns == 0 { + ld.pkgs[i].EmbedPatterns = nil + } + if ld.requestedMode&NeedCompiledGoFiles == 0 { + ld.pkgs[i].CompiledGoFiles = nil + } + if ld.requestedMode&NeedImports == 0 { + ld.pkgs[i].Imports = nil + } + if ld.requestedMode&NeedExportFile == 0 { + ld.pkgs[i].ExportFile = "" + } + if ld.requestedMode&NeedTypes == 0 { + ld.pkgs[i].Types = nil + ld.pkgs[i].Fset = nil + ld.pkgs[i].IllTyped = false + } + if ld.requestedMode&NeedSyntax == 0 { + ld.pkgs[i].Syntax = nil + } + if ld.requestedMode&NeedTypesInfo == 0 { + ld.pkgs[i].TypesInfo = nil + } + if ld.requestedMode&NeedTypesSizes == 0 { + ld.pkgs[i].TypesSizes = nil + } + if ld.requestedMode&NeedModule == 0 { + ld.pkgs[i].Module = nil + } + } + + return result, nil +} + +// loadRecursive loads the specified package and its dependencies, +// recursively, in parallel, in topological order. +// It is atomic and idempotent. +// Precondition: ld.Mode&NeedTypes. +func (ld *loader) loadRecursive(lpkg *loaderPackage) { + lpkg.loadOnce.Do(func() { + // Load the direct dependencies, in parallel. 
+ var wg sync.WaitGroup + for _, ipkg := range lpkg.Imports { + imp := ld.pkgs[ipkg.ID] + wg.Add(1) + go func(imp *loaderPackage) { + ld.loadRecursive(imp) + wg.Done() + }(imp) + } + wg.Wait() + ld.loadPackage(lpkg) + }) +} + +// loadPackage loads the specified package. +// It must be called only once per Package, +// after immediate dependencies are loaded. +// Precondition: ld.Mode & NeedTypes. +func (ld *loader) loadPackage(lpkg *loaderPackage) { + if lpkg.PkgPath == "unsafe" { + // Fill in the blanks to avoid surprises. + lpkg.Types = types.Unsafe + lpkg.Fset = ld.Fset + lpkg.Syntax = []*ast.File{} + lpkg.TypesInfo = new(types.Info) + lpkg.TypesSizes = ld.sizes + return + } + + // Call NewPackage directly with explicit name. + // This avoids skew between golist and go/types when the files' + // package declarations are inconsistent. + lpkg.Types = types.NewPackage(lpkg.PkgPath, lpkg.Name) + lpkg.Fset = ld.Fset + + // Subtle: we populate all Types fields with an empty Package + // before loading export data so that export data processing + // never has to create a types.Package for an indirect dependency, + // which would then require that such created packages be explicitly + // inserted back into the Import graph as a final step after export data loading. + // (Hence this return is after the Types assignment.) + // The Diamond test exercises this case. + if !lpkg.needtypes && !lpkg.needsrc { + return + } + if !lpkg.needsrc { + if err := ld.loadFromExportData(lpkg); err != nil { + lpkg.Errors = append(lpkg.Errors, Error{ + Pos: "-", + Msg: err.Error(), + Kind: UnknownError, // e.g. can't find/open/parse export data + }) + } + return // not a source package, don't get syntax trees + } + + appendError := func(err error) { + // Convert various error types into the one true Error. 
+ var errs []Error + switch err := err.(type) { + case Error: + // from driver + errs = append(errs, err) + + case *os.PathError: + // from parser + errs = append(errs, Error{ + Pos: err.Path + ":1", + Msg: err.Err.Error(), + Kind: ParseError, + }) + + case scanner.ErrorList: + // from parser + for _, err := range err { + errs = append(errs, Error{ + Pos: err.Pos.String(), + Msg: err.Msg, + Kind: ParseError, + }) + } + + case types.Error: + // from type checker + lpkg.TypeErrors = append(lpkg.TypeErrors, err) + errs = append(errs, Error{ + Pos: err.Fset.Position(err.Pos).String(), + Msg: err.Msg, + Kind: TypeError, + }) + + default: + // unexpected impoverished error from parser? + errs = append(errs, Error{ + Pos: "-", + Msg: err.Error(), + Kind: UnknownError, + }) + + // If you see this error message, please file a bug. + log.Printf("internal error: error %q (%T) without position", err, err) + } + + lpkg.Errors = append(lpkg.Errors, errs...) + } + + // If the go command on the PATH is newer than the runtime, + // then the go/{scanner,ast,parser,types} packages from the + // standard library may be unable to process the files + // selected by go list. + // + // There is currently no way to downgrade the effective + // version of the go command (see issue 52078), so we proceed + // with the newer go command but, in case of parse or type + // errors, we emit an additional diagnostic. + // + // See: + // - golang.org/issue/52078 (flag to set release tags) + // - golang.org/issue/50825 (gopls legacy version support) + // - golang.org/issue/55883 (go/packages confusing error) + // + // Should we assert a hard minimum of (currently) go1.16 here? 
+ var runtimeVersion int + if _, err := fmt.Sscanf(runtime.Version(), "go1.%d", &runtimeVersion); err == nil && runtimeVersion < lpkg.goVersion { + defer func() { + if len(lpkg.Errors) > 0 { + appendError(Error{ + Pos: "-", + Msg: fmt.Sprintf("This application uses version go1.%d of the source-processing packages but runs version go1.%d of 'go list'. It may fail to process source files that rely on newer language features. If so, rebuild the application using a newer version of Go.", runtimeVersion, lpkg.goVersion), + Kind: UnknownError, + }) + } + }() + } + + if ld.Config.Mode&NeedTypes != 0 && len(lpkg.CompiledGoFiles) == 0 && lpkg.ExportFile != "" { + // The config requested loading sources and types, but sources are missing. + // Add an error to the package and fall back to loading from export data. + appendError(Error{"-", fmt.Sprintf("sources missing for package %s", lpkg.ID), ParseError}) + _ = ld.loadFromExportData(lpkg) // ignore any secondary errors + + return // can't get syntax trees for this package + } + + files, errs := ld.parseFiles(lpkg.CompiledGoFiles) + for _, err := range errs { + appendError(err) + } + + lpkg.Syntax = files + if ld.Config.Mode&NeedTypes == 0 { + return + } + + lpkg.TypesInfo = &types.Info{ + Types: make(map[ast.Expr]types.TypeAndValue), + Defs: make(map[*ast.Ident]types.Object), + Uses: make(map[*ast.Ident]types.Object), + Implicits: make(map[ast.Node]types.Object), + Scopes: make(map[ast.Node]*types.Scope), + Selections: make(map[*ast.SelectorExpr]*types.Selection), + } + typeparams.InitInstanceInfo(lpkg.TypesInfo) + lpkg.TypesSizes = ld.sizes + + importer := importerFunc(func(path string) (*types.Package, error) { + if path == "unsafe" { + return types.Unsafe, nil + } + + // The imports map is keyed by import path. 
+ ipkg := lpkg.Imports[path] + if ipkg == nil { + if err := lpkg.importErrors[path]; err != nil { + return nil, err + } + // There was skew between the metadata and the + // import declarations, likely due to an edit + // race, or because the ParseFile feature was + // used to supply alternative file contents. + return nil, fmt.Errorf("no metadata for %s", path) + } + + if ipkg.Types != nil && ipkg.Types.Complete() { + return ipkg.Types, nil + } + log.Fatalf("internal error: package %q without types was imported from %q", path, lpkg) + panic("unreachable") + }) + + // type-check + tc := &types.Config{ + Importer: importer, + + // Type-check bodies of functions only in initial packages. + // Example: for import graph A->B->C and initial packages {A,C}, + // we can ignore function bodies in B. + IgnoreFuncBodies: ld.Mode&NeedDeps == 0 && !lpkg.initial, + + Error: appendError, + Sizes: ld.sizes, + } + if lpkg.Module != nil && lpkg.Module.GoVersion != "" { + typesinternal.SetGoVersion(tc, "go"+lpkg.Module.GoVersion) + } + if (ld.Mode & typecheckCgo) != 0 { + if !typesinternal.SetUsesCgo(tc) { + appendError(Error{ + Msg: "typecheckCgo requires Go 1.15+", + Kind: ListError, + }) + return + } + } + types.NewChecker(tc, ld.Fset, lpkg.Types, lpkg.TypesInfo).Files(lpkg.Syntax) + + lpkg.importErrors = nil // no longer needed + + // If !Cgo, the type-checker uses FakeImportC mode, so + // it doesn't invoke the importer for import "C", + // nor report an error for the import, + // or for any undefined C.f reference. + // We must detect this explicitly and correctly + // mark the package as IllTyped (by reporting an error). + // TODO(adonovan): if these errors are annoying, + // we could just set IllTyped quietly. 
+ if tc.FakeImportC { + outer: + for _, f := range lpkg.Syntax { + for _, imp := range f.Imports { + if imp.Path.Value == `"C"` { + err := types.Error{Fset: ld.Fset, Pos: imp.Pos(), Msg: `import "C" ignored`} + appendError(err) + break outer + } + } + } + } + + // Record accumulated errors. + illTyped := len(lpkg.Errors) > 0 + if !illTyped { + for _, imp := range lpkg.Imports { + if imp.IllTyped { + illTyped = true + break + } + } + } + lpkg.IllTyped = illTyped +} + +// An importFunc is an implementation of the single-method +// types.Importer interface based on a function value. +type importerFunc func(path string) (*types.Package, error) + +func (f importerFunc) Import(path string) (*types.Package, error) { return f(path) } + +// We use a counting semaphore to limit +// the number of parallel I/O calls per process. +var ioLimit = make(chan bool, 20) + +func (ld *loader) parseFile(filename string) (*ast.File, error) { + ld.parseCacheMu.Lock() + v, ok := ld.parseCache[filename] + if ok { + // cache hit + ld.parseCacheMu.Unlock() + <-v.ready + } else { + // cache miss + v = &parseValue{ready: make(chan struct{})} + ld.parseCache[filename] = v + ld.parseCacheMu.Unlock() + + var src []byte + for f, contents := range ld.Config.Overlay { + if sameFile(f, filename) { + src = contents + } + } + var err error + if src == nil { + ioLimit <- true // wait + src, err = ioutil.ReadFile(filename) + <-ioLimit // signal + } + if err != nil { + v.err = err + } else { + v.f, v.err = ld.ParseFile(ld.Fset, filename, src) + } + + close(v.ready) + } + return v.f, v.err +} + +// parseFiles reads and parses the Go source files and returns the ASTs +// of the ones that could be at least partially parsed, along with a +// list of I/O and parse errors encountered. +// +// Because files are scanned in parallel, the token.Pos +// positions of the resulting ast.Files are not ordered. 
+func (ld *loader) parseFiles(filenames []string) ([]*ast.File, []error) { + var wg sync.WaitGroup + n := len(filenames) + parsed := make([]*ast.File, n) + errors := make([]error, n) + for i, file := range filenames { + if ld.Config.Context.Err() != nil { + parsed[i] = nil + errors[i] = ld.Config.Context.Err() + continue + } + wg.Add(1) + go func(i int, filename string) { + parsed[i], errors[i] = ld.parseFile(filename) + wg.Done() + }(i, file) + } + wg.Wait() + + // Eliminate nils, preserving order. + var o int + for _, f := range parsed { + if f != nil { + parsed[o] = f + o++ + } + } + parsed = parsed[:o] + + o = 0 + for _, err := range errors { + if err != nil { + errors[o] = err + o++ + } + } + errors = errors[:o] + + return parsed, errors +} + +// sameFile returns true if x and y have the same basename and denote +// the same file. +func sameFile(x, y string) bool { + if x == y { + // It could be the case that y doesn't exist. + // For instance, it may be an overlay file that + // hasn't been written to disk. To handle that case + // let x == y through. (We added the exact absolute path + // string to the CompiledGoFiles list, so the unwritten + // overlay case implies x==y.) + return true + } + if strings.EqualFold(filepath.Base(x), filepath.Base(y)) { // (optimisation) + if xi, err := os.Stat(x); err == nil { + if yi, err := os.Stat(y); err == nil { + return os.SameFile(xi, yi) + } + } + } + return false +} + +// loadFromExportData ensures that type information is present for the specified +// package, loading it from an export data file on the first request. +// On success it sets lpkg.Types to a new Package. 
+func (ld *loader) loadFromExportData(lpkg *loaderPackage) error { + if lpkg.PkgPath == "" { + log.Fatalf("internal error: Package %s has no PkgPath", lpkg) + } + + // Because gcexportdata.Read has the potential to create or + // modify the types.Package for each node in the transitive + // closure of dependencies of lpkg, all exportdata operations + // must be sequential. (Finer-grained locking would require + // changes to the gcexportdata API.) + // + // The exportMu lock guards the lpkg.Types field and the + // types.Package it points to, for each loaderPackage in the graph. + // + // Not all accesses to Package.Pkg need to be protected by exportMu: + // graph ordering ensures that direct dependencies of source + // packages are fully loaded before the importer reads their Pkg field. + ld.exportMu.Lock() + defer ld.exportMu.Unlock() + + if tpkg := lpkg.Types; tpkg != nil && tpkg.Complete() { + return nil // cache hit + } + + lpkg.IllTyped = true // fail safe + + if lpkg.ExportFile == "" { + // Errors while building export data will have been printed to stderr. + return fmt.Errorf("no export data file") + } + f, err := os.Open(lpkg.ExportFile) + if err != nil { + return err + } + defer f.Close() + + // Read gc export data. + // + // We don't currently support gccgo export data because all + // underlying workspaces use the gc toolchain. (Even build + // systems that support gccgo don't use it for workspace + // queries.) + r, err := gcexportdata.NewReader(f) + if err != nil { + return fmt.Errorf("reading %s: %v", lpkg.ExportFile, err) + } + + // Build the view. + // + // The gcexportdata machinery has no concept of package ID. + // It identifies packages by their PkgPath, which although not + // globally unique is unique within the scope of one invocation + // of the linker, type-checker, or gcexportdata. + // + // So, we must build a PkgPath-keyed view of the global + // (conceptually ID-keyed) cache of packages and pass it to + // gcexportdata. 
The view must contain every existing + // package that might possibly be mentioned by the + // current package---its transitive closure. + // + // In loadPackage, we unconditionally create a types.Package for + // each dependency so that export data loading does not + // create new ones. + // + // TODO(adonovan): it would be simpler and more efficient + // if the export data machinery invoked a callback to + // get-or-create a package instead of a map. + // + view := make(map[string]*types.Package) // view seen by gcexportdata + seen := make(map[*loaderPackage]bool) // all visited packages + var visit func(pkgs map[string]*Package) + visit = func(pkgs map[string]*Package) { + for _, p := range pkgs { + lpkg := ld.pkgs[p.ID] + if !seen[lpkg] { + seen[lpkg] = true + view[lpkg.PkgPath] = lpkg.Types + visit(lpkg.Imports) + } + } + } + visit(lpkg.Imports) + + viewLen := len(view) + 1 // adding the self package + // Parse the export data. + // (May modify incomplete packages in view but not create new ones.) + tpkg, err := gcexportdata.Read(r, ld.Fset, view, lpkg.PkgPath) + if err != nil { + return fmt.Errorf("reading %s: %v", lpkg.ExportFile, err) + } + if _, ok := view["go.shape"]; ok { + // Account for the pseudopackage "go.shape" that gets + // created by generic code. + viewLen++ + } + if viewLen != len(view) { + log.Panicf("golang.org/x/tools/go/packages: unexpected new packages during load of %s", lpkg.PkgPath) + } + + lpkg.Types = tpkg + lpkg.IllTyped = false + return nil +} + +// impliedLoadMode returns loadMode with its dependencies. +func impliedLoadMode(loadMode LoadMode) LoadMode { + if loadMode&(NeedDeps|NeedTypes|NeedTypesInfo) != 0 { + // All these things require knowing the import graph. 
+ loadMode |= NeedImports + } + + return loadMode +} + +func usesExportData(cfg *Config) bool { + return cfg.Mode&NeedExportFile != 0 || cfg.Mode&NeedTypes != 0 && cfg.Mode&NeedDeps == 0 +} + +var _ interface{} = io.Discard // assert build toolchain is go1.16 or later diff --git a/cluster-autoscaler/vendor/golang.org/x/tools/go/packages/visit.go b/cluster-autoscaler/vendor/golang.org/x/tools/go/packages/visit.go new file mode 100644 index 000000000000..a1dcc40b7270 --- /dev/null +++ b/cluster-autoscaler/vendor/golang.org/x/tools/go/packages/visit.go @@ -0,0 +1,59 @@ +// Copyright 2018 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package packages + +import ( + "fmt" + "os" + "sort" +) + +// Visit visits all the packages in the import graph whose roots are +// pkgs, calling the optional pre function the first time each package +// is encountered (preorder), and the optional post function after a +// package's dependencies have been visited (postorder). +// The boolean result of pre(pkg) determines whether +// the imports of package pkg are visited. +func Visit(pkgs []*Package, pre func(*Package) bool, post func(*Package)) { + seen := make(map[*Package]bool) + var visit func(*Package) + visit = func(pkg *Package) { + if !seen[pkg] { + seen[pkg] = true + + if pre == nil || pre(pkg) { + paths := make([]string, 0, len(pkg.Imports)) + for path := range pkg.Imports { + paths = append(paths, path) + } + sort.Strings(paths) // Imports is a map, this makes visit stable + for _, path := range paths { + visit(pkg.Imports[path]) + } + } + + if post != nil { + post(pkg) + } + } + } + for _, pkg := range pkgs { + visit(pkg) + } +} + +// PrintErrors prints to os.Stderr the accumulated errors of all +// packages in the import graph rooted at pkgs, dependencies first. +// PrintErrors returns the number of errors printed. 
+func PrintErrors(pkgs []*Package) int { + var n int + Visit(pkgs, nil, func(pkg *Package) { + for _, err := range pkg.Errors { + fmt.Fprintln(os.Stderr, err) + n++ + } + }) + return n +} diff --git a/cluster-autoscaler/vendor/golang.org/x/tools/go/types/objectpath/objectpath.go b/cluster-autoscaler/vendor/golang.org/x/tools/go/types/objectpath/objectpath.go new file mode 100644 index 000000000000..c725d839ba15 --- /dev/null +++ b/cluster-autoscaler/vendor/golang.org/x/tools/go/types/objectpath/objectpath.go @@ -0,0 +1,824 @@ +// Copyright 2018 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// Package objectpath defines a naming scheme for types.Objects +// (that is, named entities in Go programs) relative to their enclosing +// package. +// +// Type-checker objects are canonical, so they are usually identified by +// their address in memory (a pointer), but a pointer has meaning only +// within one address space. By contrast, objectpath names allow the +// identity of an object to be sent from one program to another, +// establishing a correspondence between types.Object variables that are +// distinct but logically equivalent. +// +// A single object may have multiple paths. In this example, +// +// type A struct{ X int } +// type B A +// +// the field X has two paths due to its membership of both A and B. +// The For(obj) function always returns one of these paths, arbitrarily +// but consistently. +package objectpath + +import ( + "fmt" + "go/types" + "sort" + "strconv" + "strings" + _ "unsafe" + + "golang.org/x/tools/internal/typeparams" +) + +// A Path is an opaque name that identifies a types.Object +// relative to its package. Conceptually, the name consists of a +// sequence of destructuring operations applied to the package scope +// to obtain the original object. +// The name does not include the package itself. 
+type Path string + +// Encoding +// +// An object path is a textual and (with training) human-readable encoding +// of a sequence of destructuring operators, starting from a types.Package. +// The sequences represent a path through the package/object/type graph. +// We classify these operators by their type: +// +// PO package->object Package.Scope.Lookup +// OT object->type Object.Type +// TT type->type Type.{Elem,Key,Params,Results,Underlying} [EKPRU] +// TO type->object Type.{At,Field,Method,Obj} [AFMO] +// +// All valid paths start with a package and end at an object +// and thus may be defined by the regular language: +// +// objectpath = PO (OT TT* TO)* +// +// The concrete encoding follows directly: +// - The only PO operator is Package.Scope.Lookup, which requires an identifier. +// - The only OT operator is Object.Type, +// which we encode as '.' because dot cannot appear in an identifier. +// - The TT operators are encoded as [EKPRUTC]; +// one of these (TypeParam) requires an integer operand, +// which is encoded as a string of decimal digits. +// - The TO operators are encoded as [AFMO]; +// three of these (At,Field,Method) require an integer operand, +// which is encoded as a string of decimal digits. +// These indices are stable across different representations +// of the same package, even source and export data. +// The indices used are implementation specific and may not correspond to +// the argument to the go/types function. +// +// In the example below, +// +// package p +// +// type T interface { +// f() (a string, b struct{ X int }) +// } +// +// field X has the path "T.UM0.RA1.F0", +// representing the following sequence of operations: +// +// p.Lookup("T") T +// .Type().Underlying().Method(0). f +// .Type().Results().At(1) b +// .Type().Field(0) X +// +// The encoding is not maximally compact---every R or P is +// followed by an A, for example---but this simplifies the +// encoder and decoder. 
+const ( + // object->type operators + opType = '.' // .Type() (Object) + + // type->type operators + opElem = 'E' // .Elem() (Pointer, Slice, Array, Chan, Map) + opKey = 'K' // .Key() (Map) + opParams = 'P' // .Params() (Signature) + opResults = 'R' // .Results() (Signature) + opUnderlying = 'U' // .Underlying() (Named) + opTypeParam = 'T' // .TypeParams.At(i) (Named, Signature) + opConstraint = 'C' // .Constraint() (TypeParam) + + // type->object operators + opAt = 'A' // .At(i) (Tuple) + opField = 'F' // .Field(i) (Struct) + opMethod = 'M' // .Method(i) (Named or Interface; not Struct: "promoted" names are ignored) + opObj = 'O' // .Obj() (Named, TypeParam) +) + +// For is equivalent to new(Encoder).For(obj). +// +// It may be more efficient to reuse a single Encoder across several calls. +func For(obj types.Object) (Path, error) { + return new(Encoder).For(obj) +} + +// An Encoder amortizes the cost of encoding the paths of multiple objects. +// The zero value of an Encoder is ready to use. +type Encoder struct { + scopeMemo map[*types.Scope][]types.Object // memoization of scopeObjects + namedMethodsMemo map[*types.Named][]*types.Func // memoization of namedMethods() + skipMethodSorting bool +} + +// Exposed to gopls via golang.org/x/tools/internal/typesinternal +// TODO(golang/go#61443): eliminate this parameter one way or the other. +// +//go:linkname skipMethodSorting +func skipMethodSorting(enc *Encoder) { + enc.skipMethodSorting = true +} + +// For returns the path to an object relative to its package, +// or an error if the object is not accessible from the package's Scope. +// +// The For function guarantees to return a path only for the following objects: +// - package-level types +// - exported package-level non-types +// - methods +// - parameter and result variables +// - struct fields +// These objects are sufficient to define the API of their package. +// The objects described by a package's export data are drawn from this set. 
+// +// The set of objects accessible from a package's Scope depends on +// whether the package was produced by type-checking syntax, or +// reading export data; the latter may have a smaller Scope since +// export data trims objects that are not reachable from an exported +// declaration. For example, the For function will return a path for +// an exported method of an unexported type that is not reachable +// from any public declaration; this path will cause the Object +// function to fail if called on a package loaded from export data. +// TODO(adonovan): is this a bug or feature? Should this package +// compute accessibility in the same way? +// +// For does not return a path for predeclared names, imported package +// names, local names, and unexported package-level names (except +// types). +// +// Example: given this definition, +// +// package p +// +// type T interface { +// f() (a string, b struct{ X int }) +// } +// +// For(X) would return a path that denotes the following sequence of operations: +// +// p.Scope().Lookup("T") (TypeName T) +// .Type().Underlying().Method(0). (method Func f) +// .Type().Results().At(1) (field Var b) +// .Type().Field(0) (field Var X) +// +// where p is the package (*types.Package) to which X belongs. +func (enc *Encoder) For(obj types.Object) (Path, error) { + pkg := obj.Pkg() + + // This table lists the cases of interest. + // + // Object Action + // ------ ------ + // nil reject + // builtin reject + // pkgname reject + // label reject + // var + // package-level accept + // func param/result accept + // local reject + // struct field accept + // const + // package-level accept + // local reject + // func + // package-level accept + // init functions reject + // concrete method accept + // interface method accept + // type + // package-level accept + // local reject + // + // The only accessible package-level objects are members of pkg itself. + // + // The cases are handled in four steps: + // + // 1. 
reject nil and builtin + // 2. accept package-level objects + // 3. reject obviously invalid objects + // 4. search the API for the path to the param/result/field/method. + + // 1. reference to nil or builtin? + if pkg == nil { + return "", fmt.Errorf("predeclared %s has no path", obj) + } + scope := pkg.Scope() + + // 2. package-level object? + if scope.Lookup(obj.Name()) == obj { + // Only exported objects (and non-exported types) have a path. + // Non-exported types may be referenced by other objects. + if _, ok := obj.(*types.TypeName); !ok && !obj.Exported() { + return "", fmt.Errorf("no path for non-exported %v", obj) + } + return Path(obj.Name()), nil + } + + // 3. Not a package-level object. + // Reject obviously non-viable cases. + switch obj := obj.(type) { + case *types.TypeName: + if _, ok := obj.Type().(*typeparams.TypeParam); !ok { + // With the exception of type parameters, only package-level type names + // have a path. + return "", fmt.Errorf("no path for %v", obj) + } + case *types.Const, // Only package-level constants have a path. + *types.Label, // Labels are function-local. + *types.PkgName: // PkgNames are file-local. + return "", fmt.Errorf("no path for %v", obj) + + case *types.Var: + // Could be: + // - a field (obj.IsField()) + // - a func parameter or result + // - a local var. + // Sadly there is no way to distinguish + // a param/result from a local + // so we must proceed to the find. + + case *types.Func: + // A func, if not package-level, must be a method. + if recv := obj.Type().(*types.Signature).Recv(); recv == nil { + return "", fmt.Errorf("func is not a method: %v", obj) + } + + if path, ok := enc.concreteMethod(obj); ok { + // Fast path for concrete methods that avoids looping over scope. + return path, nil + } + + default: + panic(obj) + } + + // 4. Search the API for the path to the var (field/param/result) or method. + + // First inspect package-level named types. 
+ // In the presence of path aliases, these give + // the best paths because non-types may + // refer to types, but not the reverse. + empty := make([]byte, 0, 48) // initial space + objs := enc.scopeObjects(scope) + for _, o := range objs { + tname, ok := o.(*types.TypeName) + if !ok { + continue // handle non-types in second pass + } + + path := append(empty, o.Name()...) + path = append(path, opType) + + T := o.Type() + + if tname.IsAlias() { + // type alias + if r := find(obj, T, path, nil); r != nil { + return Path(r), nil + } + } else { + if named, _ := T.(*types.Named); named != nil { + if r := findTypeParam(obj, typeparams.ForNamed(named), path, nil); r != nil { + // generic named type + return Path(r), nil + } + } + // defined (named) type + if r := find(obj, T.Underlying(), append(path, opUnderlying), nil); r != nil { + return Path(r), nil + } + } + } + + // Then inspect everything else: + // non-types, and declared methods of defined types. + for _, o := range objs { + path := append(empty, o.Name()...) + if _, ok := o.(*types.TypeName); !ok { + if o.Exported() { + // exported non-type (const, var, func) + if r := find(obj, o.Type(), append(path, opType), nil); r != nil { + return Path(r), nil + } + } + continue + } + + // Inspect declared methods of defined types. + if T, ok := o.Type().(*types.Named); ok { + path = append(path, opType) + if !enc.skipMethodSorting { + // Note that method index here is always with respect + // to canonical ordering of methods, regardless of how + // they appear in the underlying type. + for i, m := range enc.namedMethods(T) { + path2 := appendOpArg(path, opMethod, i) + if m == obj { + return Path(path2), nil // found declared method + } + if r := find(obj, m.Type(), append(path2, opType), nil); r != nil { + return Path(r), nil + } + } + } else { + // This branch must match the logic in the branch above, using go/types + // APIs without sorting. 
+ for i := 0; i < T.NumMethods(); i++ { + m := T.Method(i) + path2 := appendOpArg(path, opMethod, i) + if m == obj { + return Path(path2), nil // found declared method + } + if r := find(obj, m.Type(), append(path2, opType), nil); r != nil { + return Path(r), nil + } + } + } + } + } + + return "", fmt.Errorf("can't find path for %v in %s", obj, pkg.Path()) +} + +func appendOpArg(path []byte, op byte, arg int) []byte { + path = append(path, op) + path = strconv.AppendInt(path, int64(arg), 10) + return path +} + +// concreteMethod returns the path for meth, which must have a non-nil receiver. +// The second return value indicates success and may be false if the method is +// an interface method or if it is an instantiated method. +// +// This function is just an optimization that avoids the general scope walking +// approach. You are expected to fall back to the general approach if this +// function fails. +func (enc *Encoder) concreteMethod(meth *types.Func) (Path, bool) { + // Concrete methods can only be declared on package-scoped named types. For + // that reason we can skip the expensive walk over the package scope: the + // path will always be package -> named type -> method. We can trivially get + // the type name from the receiver, and only have to look over the type's + // methods to find the method index. + // + // Methods on generic types require special consideration, however. Consider + // the following package: + // + // L1: type S[T any] struct{} + // L2: func (recv S[A]) Foo() { recv.Bar() } + // L3: func (recv S[B]) Bar() { } + // L4: type Alias = S[int] + // L5: func _[T any]() { var s S[int]; s.Foo() } + // + // The receivers of methods on generic types are instantiations. L2 and L3 + // instantiate S with the type-parameters A and B, which are scoped to the + // respective methods. L4 and L5 each instantiate S with int. 
Each of these + // instantiations has its own method set, full of methods (and thus objects) + // with receivers whose types are the respective instantiations. In other + // words, we have + // + // S[A].Foo, S[A].Bar + // S[B].Foo, S[B].Bar + // S[int].Foo, S[int].Bar + // + // We may thus be trying to produce object paths for any of these objects. + // + // S[A].Foo and S[B].Bar are the origin methods, and their paths are S.Foo + // and S.Bar, which are the paths that this function naturally produces. + // + // S[A].Bar, S[B].Foo, and both methods on S[int] are instantiations that + // don't correspond to the origin methods. For S[int], this is significant. + // The most precise object path for S[int].Foo, for example, is Alias.Foo, + // not S.Foo. Our function, however, would produce S.Foo, which would + // resolve to a different object. + // + // For S[A].Bar and S[B].Foo it could be argued that S.Bar and S.Foo are + // still the correct paths, since only the origin methods have meaningful + // paths. But this is likely only true for trivial cases and has edge cases. + // Since this function is only an optimization, we err on the side of giving + // up, deferring to the slower but definitely correct algorithm. Most users + // of objectpath will only be giving us origin methods, anyway, as referring + // to instantiated methods is usually not useful. + + if typeparams.OriginMethod(meth) != meth { + return "", false + } + + recvT := meth.Type().(*types.Signature).Recv().Type() + if ptr, ok := recvT.(*types.Pointer); ok { + recvT = ptr.Elem() + } + + named, ok := recvT.(*types.Named) + if !ok { + return "", false + } + + if types.IsInterface(named) { + // Named interfaces don't have to be package-scoped + // + // TODO(dominikh): opt: if scope.Lookup(name) == named, then we can apply this optimization to interface + // methods, too, I think. + return "", false + } + + // Preallocate space for the name, opType, opMethod, and some digits. 
+ name := named.Obj().Name() + path := make([]byte, 0, len(name)+8) + path = append(path, name...) + path = append(path, opType) + + if !enc.skipMethodSorting { + for i, m := range enc.namedMethods(named) { + if m == meth { + path = appendOpArg(path, opMethod, i) + return Path(path), true + } + } + } else { + // This branch must match the logic of the branch above, using go/types + // APIs without sorting. + for i := 0; i < named.NumMethods(); i++ { + m := named.Method(i) + if m == meth { + path = appendOpArg(path, opMethod, i) + return Path(path), true + } + } + } + + // Due to golang/go#59944, go/types fails to associate the receiver with + // certain methods on cgo types. + // + // TODO(rfindley): replace this panic once golang/go#59944 is fixed in all Go + // versions gopls supports. + return "", false + // panic(fmt.Sprintf("couldn't find method %s on type %s; methods: %#v", meth, named, enc.namedMethods(named))) +} + +// find finds obj within type T, returning the path to it, or nil if not found. +// +// The seen map is used to short circuit cycles through type parameters. If +// nil, it will be allocated as necessary. +func find(obj types.Object, T types.Type, path []byte, seen map[*types.TypeName]bool) []byte { + switch T := T.(type) { + case *types.Basic, *types.Named: + // Named types belonging to pkg were handled already, + // so T must belong to another package. No path. 
+ return nil + case *types.Pointer: + return find(obj, T.Elem(), append(path, opElem), seen) + case *types.Slice: + return find(obj, T.Elem(), append(path, opElem), seen) + case *types.Array: + return find(obj, T.Elem(), append(path, opElem), seen) + case *types.Chan: + return find(obj, T.Elem(), append(path, opElem), seen) + case *types.Map: + if r := find(obj, T.Key(), append(path, opKey), seen); r != nil { + return r + } + return find(obj, T.Elem(), append(path, opElem), seen) + case *types.Signature: + if r := findTypeParam(obj, typeparams.ForSignature(T), path, seen); r != nil { + return r + } + if r := find(obj, T.Params(), append(path, opParams), seen); r != nil { + return r + } + return find(obj, T.Results(), append(path, opResults), seen) + case *types.Struct: + for i := 0; i < T.NumFields(); i++ { + fld := T.Field(i) + path2 := appendOpArg(path, opField, i) + if fld == obj { + return path2 // found field var + } + if r := find(obj, fld.Type(), append(path2, opType), seen); r != nil { + return r + } + } + return nil + case *types.Tuple: + for i := 0; i < T.Len(); i++ { + v := T.At(i) + path2 := appendOpArg(path, opAt, i) + if v == obj { + return path2 // found param/result var + } + if r := find(obj, v.Type(), append(path2, opType), seen); r != nil { + return r + } + } + return nil + case *types.Interface: + for i := 0; i < T.NumMethods(); i++ { + m := T.Method(i) + path2 := appendOpArg(path, opMethod, i) + if m == obj { + return path2 // found interface method + } + if r := find(obj, m.Type(), append(path2, opType), seen); r != nil { + return r + } + } + return nil + case *typeparams.TypeParam: + name := T.Obj() + if name == obj { + return append(path, opObj) + } + if seen[name] { + return nil + } + if seen == nil { + seen = make(map[*types.TypeName]bool) + } + seen[name] = true + if r := find(obj, T.Constraint(), append(path, opConstraint), seen); r != nil { + return r + } + return nil + } + panic(T) +} + +func findTypeParam(obj types.Object, list 
*typeparams.TypeParamList, path []byte, seen map[*types.TypeName]bool) []byte { + for i := 0; i < list.Len(); i++ { + tparam := list.At(i) + path2 := appendOpArg(path, opTypeParam, i) + if r := find(obj, tparam, path2, seen); r != nil { + return r + } + } + return nil +} + +// Object returns the object denoted by path p within the package pkg. +func Object(pkg *types.Package, p Path) (types.Object, error) { + return object(pkg, p, false) +} + +// Note: the skipMethodSorting parameter must match the value of +// Encoder.skipMethodSorting used during encoding. +func object(pkg *types.Package, p Path, skipMethodSorting bool) (types.Object, error) { + if p == "" { + return nil, fmt.Errorf("empty path") + } + + pathstr := string(p) + var pkgobj, suffix string + if dot := strings.IndexByte(pathstr, opType); dot < 0 { + pkgobj = pathstr + } else { + pkgobj = pathstr[:dot] + suffix = pathstr[dot:] // suffix starts with "." + } + + obj := pkg.Scope().Lookup(pkgobj) + if obj == nil { + return nil, fmt.Errorf("package %s does not contain %q", pkg.Path(), pkgobj) + } + + // abstraction of *types.{Pointer,Slice,Array,Chan,Map} + type hasElem interface { + Elem() types.Type + } + // abstraction of *types.{Named,Signature} + type hasTypeParams interface { + TypeParams() *typeparams.TypeParamList + } + // abstraction of *types.{Named,TypeParam} + type hasObj interface { + Obj() *types.TypeName + } + + // The loop state is the pair (t, obj), + // exactly one of which is non-nil, initially obj. + // All suffixes start with '.' (the only object->type operation), + // followed by optional type->type operations, + // then a type->object operation. + // The cycle then repeats. + var t types.Type + for suffix != "" { + code := suffix[0] + suffix = suffix[1:] + + // Codes [AFMT] have an integer operand. 
+ var index int + switch code { + case opAt, opField, opMethod, opTypeParam: + rest := strings.TrimLeft(suffix, "0123456789") + numerals := suffix[:len(suffix)-len(rest)] + suffix = rest + i, err := strconv.Atoi(numerals) + if err != nil { + return nil, fmt.Errorf("invalid path: bad numeric operand %q for code %q", numerals, code) + } + index = int(i) + case opObj: + // no operand + default: + // The suffix must end with a type->object operation. + if suffix == "" { + return nil, fmt.Errorf("invalid path: ends with %q, want [AFMO]", code) + } + } + + if code == opType { + if t != nil { + return nil, fmt.Errorf("invalid path: unexpected %q in type context", opType) + } + t = obj.Type() + obj = nil + continue + } + + if t == nil { + return nil, fmt.Errorf("invalid path: code %q in object context", code) + } + + // Inv: t != nil, obj == nil + + switch code { + case opElem: + hasElem, ok := t.(hasElem) // Pointer, Slice, Array, Chan, Map + if !ok { + return nil, fmt.Errorf("cannot apply %q to %s (got %T, want pointer, slice, array, chan or map)", code, t, t) + } + t = hasElem.Elem() + + case opKey: + mapType, ok := t.(*types.Map) + if !ok { + return nil, fmt.Errorf("cannot apply %q to %s (got %T, want map)", code, t, t) + } + t = mapType.Key() + + case opParams: + sig, ok := t.(*types.Signature) + if !ok { + return nil, fmt.Errorf("cannot apply %q to %s (got %T, want signature)", code, t, t) + } + t = sig.Params() + + case opResults: + sig, ok := t.(*types.Signature) + if !ok { + return nil, fmt.Errorf("cannot apply %q to %s (got %T, want signature)", code, t, t) + } + t = sig.Results() + + case opUnderlying: + named, ok := t.(*types.Named) + if !ok { + return nil, fmt.Errorf("cannot apply %q to %s (got %T, want named)", code, t, t) + } + t = named.Underlying() + + case opTypeParam: + hasTypeParams, ok := t.(hasTypeParams) // Named, Signature + if !ok { + return nil, fmt.Errorf("cannot apply %q to %s (got %T, want named or signature)", code, t, t) + } + tparams := 
hasTypeParams.TypeParams() + if n := tparams.Len(); index >= n { + return nil, fmt.Errorf("tuple index %d out of range [0-%d)", index, n) + } + t = tparams.At(index) + + case opConstraint: + tparam, ok := t.(*typeparams.TypeParam) + if !ok { + return nil, fmt.Errorf("cannot apply %q to %s (got %T, want type parameter)", code, t, t) + } + t = tparam.Constraint() + + case opAt: + tuple, ok := t.(*types.Tuple) + if !ok { + return nil, fmt.Errorf("cannot apply %q to %s (got %T, want tuple)", code, t, t) + } + if n := tuple.Len(); index >= n { + return nil, fmt.Errorf("tuple index %d out of range [0-%d)", index, n) + } + obj = tuple.At(index) + t = nil + + case opField: + structType, ok := t.(*types.Struct) + if !ok { + return nil, fmt.Errorf("cannot apply %q to %s (got %T, want struct)", code, t, t) + } + if n := structType.NumFields(); index >= n { + return nil, fmt.Errorf("field index %d out of range [0-%d)", index, n) + } + obj = structType.Field(index) + t = nil + + case opMethod: + switch t := t.(type) { + case *types.Interface: + if index >= t.NumMethods() { + return nil, fmt.Errorf("method index %d out of range [0-%d)", index, t.NumMethods()) + } + obj = t.Method(index) // Id-ordered + + case *types.Named: + if index >= t.NumMethods() { + return nil, fmt.Errorf("method index %d out of range [0-%d)", index, t.NumMethods()) + } + if skipMethodSorting { + obj = t.Method(index) + } else { + methods := namedMethods(t) // (unmemoized) + obj = methods[index] // Id-ordered + } + + default: + return nil, fmt.Errorf("cannot apply %q to %s (got %T, want interface or named)", code, t, t) + } + t = nil + + case opObj: + hasObj, ok := t.(hasObj) + if !ok { + return nil, fmt.Errorf("cannot apply %q to %s (got %T, want named or type param)", code, t, t) + } + obj = hasObj.Obj() + t = nil + + default: + return nil, fmt.Errorf("invalid path: unknown code %q", code) + } + } + + if obj.Pkg() != pkg { + return nil, fmt.Errorf("path denotes %s, which belongs to a different package", 
obj) + } + + return obj, nil // success +} + +// namedMethods returns the methods of a Named type in ascending Id order. +func namedMethods(named *types.Named) []*types.Func { + methods := make([]*types.Func, named.NumMethods()) + for i := range methods { + methods[i] = named.Method(i) + } + sort.Slice(methods, func(i, j int) bool { + return methods[i].Id() < methods[j].Id() + }) + return methods +} + +// namedMethods is a memoization of the namedMethods function. Callers must not modify the result. +func (enc *Encoder) namedMethods(named *types.Named) []*types.Func { + m := enc.namedMethodsMemo + if m == nil { + m = make(map[*types.Named][]*types.Func) + enc.namedMethodsMemo = m + } + methods, ok := m[named] + if !ok { + methods = namedMethods(named) // allocates and sorts + m[named] = methods + } + return methods +} + +// scopeObjects is a memoization of scope objects. +// Callers must not modify the result. +func (enc *Encoder) scopeObjects(scope *types.Scope) []types.Object { + m := enc.scopeMemo + if m == nil { + m = make(map[*types.Scope][]types.Object) + enc.scopeMemo = m + } + objs, ok := m[scope] + if !ok { + names := scope.Names() // allocates and sorts + objs = make([]types.Object, len(names)) + for i, name := range names { + objs[i] = scope.Lookup(name) + } + m[scope] = objs + } + return objs +} diff --git a/cluster-autoscaler/vendor/golang.org/x/tools/internal/event/core/event.go b/cluster-autoscaler/vendor/golang.org/x/tools/internal/event/core/event.go new file mode 100644 index 000000000000..a6cf0e64a4b1 --- /dev/null +++ b/cluster-autoscaler/vendor/golang.org/x/tools/internal/event/core/event.go @@ -0,0 +1,85 @@ +// Copyright 2019 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// Package core provides support for event based telemetry. 
+package core + +import ( + "fmt" + "time" + + "golang.org/x/tools/internal/event/label" +) + +// Event holds the information about an event of note that occurred. +type Event struct { + at time.Time + + // As events are often on the stack, storing the first few labels directly + // in the event can avoid an allocation at all for the very common cases of + // simple events. + // The length needs to be large enough to cope with the majority of events + // but not so large as to cause undue stack pressure. + // A log message with two values will use 3 labels (one for each value and + // one for the message itself). + + static [3]label.Label // inline storage for the first few labels + dynamic []label.Label // dynamically sized storage for remaining labels +} + +// eventLabelMap implements label.Map for the labels of an Event. +type eventLabelMap struct { + event Event +} + +func (ev Event) At() time.Time { return ev.at } + +func (ev Event) Format(f fmt.State, r rune) { + if !ev.at.IsZero() { + fmt.Fprint(f, ev.at.Format("2006/01/02 15:04:05 ")) + } + for index := 0; ev.Valid(index); index++ { + if l := ev.Label(index); l.Valid() { + fmt.Fprintf(f, "\n\t%v", l) + } + } +} + +func (ev Event) Valid(index int) bool { + return index >= 0 && index < len(ev.static)+len(ev.dynamic) +} + +func (ev Event) Label(index int) label.Label { + if index < len(ev.static) { + return ev.static[index] + } + return ev.dynamic[index-len(ev.static)] +} + +func (ev Event) Find(key label.Key) label.Label { + for _, l := range ev.static { + if l.Key() == key { + return l + } + } + for _, l := range ev.dynamic { + if l.Key() == key { + return l + } + } + return label.Label{} +} + +func MakeEvent(static [3]label.Label, labels []label.Label) Event { + return Event{ + static: static, + dynamic: labels, + } +} + +// CloneEvent returns a copy of the event with the time adjusted to at. 
+func CloneEvent(ev Event, at time.Time) Event { + ev.at = at + return ev +} diff --git a/cluster-autoscaler/vendor/golang.org/x/tools/internal/event/core/export.go b/cluster-autoscaler/vendor/golang.org/x/tools/internal/event/core/export.go new file mode 100644 index 000000000000..05f3a9a5791a --- /dev/null +++ b/cluster-autoscaler/vendor/golang.org/x/tools/internal/event/core/export.go @@ -0,0 +1,70 @@ +// Copyright 2019 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package core + +import ( + "context" + "sync/atomic" + "time" + "unsafe" + + "golang.org/x/tools/internal/event/label" +) + +// Exporter is a function that handles events. +// It may return a modified context and event. +type Exporter func(context.Context, Event, label.Map) context.Context + +var ( + exporter unsafe.Pointer +) + +// SetExporter sets the global exporter function that handles all events. +// The exporter is called synchronously from the event call site, so it should +// return quickly so as not to hold up user code. +func SetExporter(e Exporter) { + p := unsafe.Pointer(&e) + if e == nil { + // &e is always valid, and so p is always valid, but for the early abort + // of ProcessEvent to be efficient it needs to make the nil check on the + // pointer without having to dereference it, so we make the nil function + // also a nil pointer + p = nil + } + atomic.StorePointer(&exporter, p) +} + +// deliver is called to deliver an event to the supplied exporter. +// It will fill in the time. +func deliver(ctx context.Context, exporter Exporter, ev Event) context.Context { + // add the current time to the event + ev.at = time.Now() + // hand the event off to the current exporter + return exporter(ctx, ev, ev) +} + +// Export is called to deliver an event to the global exporter if set. 
+func Export(ctx context.Context, ev Event) context.Context { + // get the global exporter and abort early if there is not one + exporterPtr := (*Exporter)(atomic.LoadPointer(&exporter)) + if exporterPtr == nil { + return ctx + } + return deliver(ctx, *exporterPtr, ev) +} + +// ExportPair is called to deliver a start event to the supplied exporter. +// It also returns a function that will deliver the end event to the same +// exporter. +// It will fill in the time. +func ExportPair(ctx context.Context, begin, end Event) (context.Context, func()) { + // get the global exporter and abort early if there is not one + exporterPtr := (*Exporter)(atomic.LoadPointer(&exporter)) + if exporterPtr == nil { + return ctx, func() {} + } + ctx = deliver(ctx, *exporterPtr, begin) + return ctx, func() { deliver(ctx, *exporterPtr, end) } +} diff --git a/cluster-autoscaler/vendor/golang.org/x/tools/internal/event/core/fast.go b/cluster-autoscaler/vendor/golang.org/x/tools/internal/event/core/fast.go new file mode 100644 index 000000000000..06c1d4615e6b --- /dev/null +++ b/cluster-autoscaler/vendor/golang.org/x/tools/internal/event/core/fast.go @@ -0,0 +1,77 @@ +// Copyright 2019 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package core + +import ( + "context" + + "golang.org/x/tools/internal/event/keys" + "golang.org/x/tools/internal/event/label" +) + +// Log1 takes a message and one label and delivers a log event to the exporter. +// It is a customized version of Print that is faster and does no allocation. +func Log1(ctx context.Context, message string, t1 label.Label) { + Export(ctx, MakeEvent([3]label.Label{ + keys.Msg.Of(message), + t1, + }, nil)) +} + +// Log2 takes a message and two labels and delivers a log event to the exporter. +// It is a customized version of Print that is faster and does no allocation. 
+func Log2(ctx context.Context, message string, t1 label.Label, t2 label.Label) { + Export(ctx, MakeEvent([3]label.Label{ + keys.Msg.Of(message), + t1, + t2, + }, nil)) +} + +// Metric1 sends a label event to the exporter with the supplied labels. +func Metric1(ctx context.Context, t1 label.Label) context.Context { + return Export(ctx, MakeEvent([3]label.Label{ + keys.Metric.New(), + t1, + }, nil)) +} + +// Metric2 sends a label event to the exporter with the supplied labels. +func Metric2(ctx context.Context, t1, t2 label.Label) context.Context { + return Export(ctx, MakeEvent([3]label.Label{ + keys.Metric.New(), + t1, + t2, + }, nil)) +} + +// Start1 sends a span start event with the supplied label list to the exporter. +// It also returns a function that will end the span, which should normally be +// deferred. +func Start1(ctx context.Context, name string, t1 label.Label) (context.Context, func()) { + return ExportPair(ctx, + MakeEvent([3]label.Label{ + keys.Start.Of(name), + t1, + }, nil), + MakeEvent([3]label.Label{ + keys.End.New(), + }, nil)) +} + +// Start2 sends a span start event with the supplied label list to the exporter. +// It also returns a function that will end the span, which should normally be +// deferred. +func Start2(ctx context.Context, name string, t1, t2 label.Label) (context.Context, func()) { + return ExportPair(ctx, + MakeEvent([3]label.Label{ + keys.Start.Of(name), + t1, + t2, + }, nil), + MakeEvent([3]label.Label{ + keys.End.New(), + }, nil)) +} diff --git a/cluster-autoscaler/vendor/golang.org/x/tools/internal/event/doc.go b/cluster-autoscaler/vendor/golang.org/x/tools/internal/event/doc.go new file mode 100644 index 000000000000..5dc6e6babedd --- /dev/null +++ b/cluster-autoscaler/vendor/golang.org/x/tools/internal/event/doc.go @@ -0,0 +1,7 @@ +// Copyright 2019 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. 
+ +// Package event provides a set of packages that cover the main +// concepts of telemetry in an implementation agnostic way. +package event diff --git a/cluster-autoscaler/vendor/golang.org/x/tools/internal/event/event.go b/cluster-autoscaler/vendor/golang.org/x/tools/internal/event/event.go new file mode 100644 index 000000000000..4d55e577d1a8 --- /dev/null +++ b/cluster-autoscaler/vendor/golang.org/x/tools/internal/event/event.go @@ -0,0 +1,127 @@ +// Copyright 2019 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package event + +import ( + "context" + + "golang.org/x/tools/internal/event/core" + "golang.org/x/tools/internal/event/keys" + "golang.org/x/tools/internal/event/label" +) + +// Exporter is a function that handles events. +// It may return a modified context and event. +type Exporter func(context.Context, core.Event, label.Map) context.Context + +// SetExporter sets the global exporter function that handles all events. +// The exporter is called synchronously from the event call site, so it should +// return quickly so as not to hold up user code. +func SetExporter(e Exporter) { + core.SetExporter(core.Exporter(e)) +} + +// Log takes a message and a label list and combines them into a single event +// before delivering them to the exporter. +func Log(ctx context.Context, message string, labels ...label.Label) { + core.Export(ctx, core.MakeEvent([3]label.Label{ + keys.Msg.Of(message), + }, labels)) +} + +// IsLog returns true if the event was built by the Log function. +// It is intended to be used in exporters to identify the semantics of the +// event when deciding what to do with it. +func IsLog(ev core.Event) bool { + return ev.Label(0).Key() == keys.Msg +} + +// Error takes a message and a label list and combines them into a single event +// before delivering them to the exporter. It captures the error in the +// delivered event. 
+func Error(ctx context.Context, message string, err error, labels ...label.Label) { + core.Export(ctx, core.MakeEvent([3]label.Label{ + keys.Msg.Of(message), + keys.Err.Of(err), + }, labels)) +} + +// IsError returns true if the event was built by the Error function. +// It is intended to be used in exporters to identify the semantics of the +// event when deciding what to do with it. +func IsError(ev core.Event) bool { + return ev.Label(0).Key() == keys.Msg && + ev.Label(1).Key() == keys.Err +} + +// Metric sends a label event to the exporter with the supplied labels. +func Metric(ctx context.Context, labels ...label.Label) { + core.Export(ctx, core.MakeEvent([3]label.Label{ + keys.Metric.New(), + }, labels)) +} + +// IsMetric returns true if the event was built by the Metric function. +// It is intended to be used in exporters to identify the semantics of the +// event when deciding what to do with it. +func IsMetric(ev core.Event) bool { + return ev.Label(0).Key() == keys.Metric +} + +// Label sends a label event to the exporter with the supplied labels. +func Label(ctx context.Context, labels ...label.Label) context.Context { + return core.Export(ctx, core.MakeEvent([3]label.Label{ + keys.Label.New(), + }, labels)) +} + +// IsLabel returns true if the event was built by the Label function. +// It is intended to be used in exporters to identify the semantics of the +// event when deciding what to do with it. +func IsLabel(ev core.Event) bool { + return ev.Label(0).Key() == keys.Label +} + +// Start sends a span start event with the supplied label list to the exporter. +// It also returns a function that will end the span, which should normally be +// deferred. 
+func Start(ctx context.Context, name string, labels ...label.Label) (context.Context, func()) { + return core.ExportPair(ctx, + core.MakeEvent([3]label.Label{ + keys.Start.Of(name), + }, labels), + core.MakeEvent([3]label.Label{ + keys.End.New(), + }, nil)) +} + +// IsStart returns true if the event was built by the Start function. +// It is intended to be used in exporters to identify the semantics of the +// event when deciding what to do with it. +func IsStart(ev core.Event) bool { + return ev.Label(0).Key() == keys.Start +} + +// IsEnd returns true if the event was built by the End function. +// It is intended to be used in exporters to identify the semantics of the +// event when deciding what to do with it. +func IsEnd(ev core.Event) bool { + return ev.Label(0).Key() == keys.End +} + +// Detach returns a context without an associated span. +// This allows the creation of spans that are not children of the current span. +func Detach(ctx context.Context) context.Context { + return core.Export(ctx, core.MakeEvent([3]label.Label{ + keys.Detach.New(), + }, nil)) +} + +// IsDetach returns true if the event was built by the Detach function. +// It is intended to be used in exporters to identify the semantics of the +// event when deciding what to do with it. +func IsDetach(ev core.Event) bool { + return ev.Label(0).Key() == keys.Detach +} diff --git a/cluster-autoscaler/vendor/golang.org/x/tools/internal/event/keys/keys.go b/cluster-autoscaler/vendor/golang.org/x/tools/internal/event/keys/keys.go new file mode 100644 index 000000000000..a02206e30150 --- /dev/null +++ b/cluster-autoscaler/vendor/golang.org/x/tools/internal/event/keys/keys.go @@ -0,0 +1,564 @@ +// Copyright 2019 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. 
+ +package keys + +import ( + "fmt" + "io" + "math" + "strconv" + + "golang.org/x/tools/internal/event/label" +) + +// Value represents a key for untyped values. +type Value struct { + name string + description string +} + +// New creates a new Key for untyped values. +func New(name, description string) *Value { + return &Value{name: name, description: description} +} + +func (k *Value) Name() string { return k.name } +func (k *Value) Description() string { return k.description } + +func (k *Value) Format(w io.Writer, buf []byte, l label.Label) { + fmt.Fprint(w, k.From(l)) +} + +// Get can be used to get a label for the key from a label.Map. +func (k *Value) Get(lm label.Map) interface{} { + if t := lm.Find(k); t.Valid() { + return k.From(t) + } + return nil +} + +// From can be used to get a value from a Label. +func (k *Value) From(t label.Label) interface{} { return t.UnpackValue() } + +// Of creates a new Label with this key and the supplied value. +func (k *Value) Of(value interface{}) label.Label { return label.OfValue(k, value) } + +// Tag represents a key for tagging labels that have no value. +// These are used when the existence of the label is the entire information it +// carries, such as marking events to be of a specific kind, or from a specific +// package. +type Tag struct { + name string + description string +} + +// NewTag creates a new Key for tagging labels. +func NewTag(name, description string) *Tag { + return &Tag{name: name, description: description} +} + +func (k *Tag) Name() string { return k.name } +func (k *Tag) Description() string { return k.description } + +func (k *Tag) Format(w io.Writer, buf []byte, l label.Label) {} + +// New creates a new Label with this key. +func (k *Tag) New() label.Label { return label.OfValue(k, nil) } + +// Int represents a key +type Int struct { + name string + description string +} + +// NewInt creates a new Key for int values. 
+func NewInt(name, description string) *Int { + return &Int{name: name, description: description} +} + +func (k *Int) Name() string { return k.name } +func (k *Int) Description() string { return k.description } + +func (k *Int) Format(w io.Writer, buf []byte, l label.Label) { + w.Write(strconv.AppendInt(buf, int64(k.From(l)), 10)) +} + +// Of creates a new Label with this key and the supplied value. +func (k *Int) Of(v int) label.Label { return label.Of64(k, uint64(v)) } + +// Get can be used to get a label for the key from a label.Map. +func (k *Int) Get(lm label.Map) int { + if t := lm.Find(k); t.Valid() { + return k.From(t) + } + return 0 +} + +// From can be used to get a value from a Label. +func (k *Int) From(t label.Label) int { return int(t.Unpack64()) } + +// Int8 represents a key +type Int8 struct { + name string + description string +} + +// NewInt8 creates a new Key for int8 values. +func NewInt8(name, description string) *Int8 { + return &Int8{name: name, description: description} +} + +func (k *Int8) Name() string { return k.name } +func (k *Int8) Description() string { return k.description } + +func (k *Int8) Format(w io.Writer, buf []byte, l label.Label) { + w.Write(strconv.AppendInt(buf, int64(k.From(l)), 10)) +} + +// Of creates a new Label with this key and the supplied value. +func (k *Int8) Of(v int8) label.Label { return label.Of64(k, uint64(v)) } + +// Get can be used to get a label for the key from a label.Map. +func (k *Int8) Get(lm label.Map) int8 { + if t := lm.Find(k); t.Valid() { + return k.From(t) + } + return 0 +} + +// From can be used to get a value from a Label. +func (k *Int8) From(t label.Label) int8 { return int8(t.Unpack64()) } + +// Int16 represents a key +type Int16 struct { + name string + description string +} + +// NewInt16 creates a new Key for int16 values. 
+func NewInt16(name, description string) *Int16 { + return &Int16{name: name, description: description} +} + +func (k *Int16) Name() string { return k.name } +func (k *Int16) Description() string { return k.description } + +func (k *Int16) Format(w io.Writer, buf []byte, l label.Label) { + w.Write(strconv.AppendInt(buf, int64(k.From(l)), 10)) +} + +// Of creates a new Label with this key and the supplied value. +func (k *Int16) Of(v int16) label.Label { return label.Of64(k, uint64(v)) } + +// Get can be used to get a label for the key from a label.Map. +func (k *Int16) Get(lm label.Map) int16 { + if t := lm.Find(k); t.Valid() { + return k.From(t) + } + return 0 +} + +// From can be used to get a value from a Label. +func (k *Int16) From(t label.Label) int16 { return int16(t.Unpack64()) } + +// Int32 represents a key +type Int32 struct { + name string + description string +} + +// NewInt32 creates a new Key for int32 values. +func NewInt32(name, description string) *Int32 { + return &Int32{name: name, description: description} +} + +func (k *Int32) Name() string { return k.name } +func (k *Int32) Description() string { return k.description } + +func (k *Int32) Format(w io.Writer, buf []byte, l label.Label) { + w.Write(strconv.AppendInt(buf, int64(k.From(l)), 10)) +} + +// Of creates a new Label with this key and the supplied value. +func (k *Int32) Of(v int32) label.Label { return label.Of64(k, uint64(v)) } + +// Get can be used to get a label for the key from a label.Map. +func (k *Int32) Get(lm label.Map) int32 { + if t := lm.Find(k); t.Valid() { + return k.From(t) + } + return 0 +} + +// From can be used to get a value from a Label. +func (k *Int32) From(t label.Label) int32 { return int32(t.Unpack64()) } + +// Int64 represents a key +type Int64 struct { + name string + description string +} + +// NewInt64 creates a new Key for int64 values. 
+func NewInt64(name, description string) *Int64 { + return &Int64{name: name, description: description} +} + +func (k *Int64) Name() string { return k.name } +func (k *Int64) Description() string { return k.description } + +func (k *Int64) Format(w io.Writer, buf []byte, l label.Label) { + w.Write(strconv.AppendInt(buf, k.From(l), 10)) +} + +// Of creates a new Label with this key and the supplied value. +func (k *Int64) Of(v int64) label.Label { return label.Of64(k, uint64(v)) } + +// Get can be used to get a label for the key from a label.Map. +func (k *Int64) Get(lm label.Map) int64 { + if t := lm.Find(k); t.Valid() { + return k.From(t) + } + return 0 +} + +// From can be used to get a value from a Label. +func (k *Int64) From(t label.Label) int64 { return int64(t.Unpack64()) } + +// UInt represents a key +type UInt struct { + name string + description string +} + +// NewUInt creates a new Key for uint values. +func NewUInt(name, description string) *UInt { + return &UInt{name: name, description: description} +} + +func (k *UInt) Name() string { return k.name } +func (k *UInt) Description() string { return k.description } + +func (k *UInt) Format(w io.Writer, buf []byte, l label.Label) { + w.Write(strconv.AppendUint(buf, uint64(k.From(l)), 10)) +} + +// Of creates a new Label with this key and the supplied value. +func (k *UInt) Of(v uint) label.Label { return label.Of64(k, uint64(v)) } + +// Get can be used to get a label for the key from a label.Map. +func (k *UInt) Get(lm label.Map) uint { + if t := lm.Find(k); t.Valid() { + return k.From(t) + } + return 0 +} + +// From can be used to get a value from a Label. +func (k *UInt) From(t label.Label) uint { return uint(t.Unpack64()) } + +// UInt8 represents a key +type UInt8 struct { + name string + description string +} + +// NewUInt8 creates a new Key for uint8 values. 
+func NewUInt8(name, description string) *UInt8 { + return &UInt8{name: name, description: description} +} + +func (k *UInt8) Name() string { return k.name } +func (k *UInt8) Description() string { return k.description } + +func (k *UInt8) Format(w io.Writer, buf []byte, l label.Label) { + w.Write(strconv.AppendUint(buf, uint64(k.From(l)), 10)) +} + +// Of creates a new Label with this key and the supplied value. +func (k *UInt8) Of(v uint8) label.Label { return label.Of64(k, uint64(v)) } + +// Get can be used to get a label for the key from a label.Map. +func (k *UInt8) Get(lm label.Map) uint8 { + if t := lm.Find(k); t.Valid() { + return k.From(t) + } + return 0 +} + +// From can be used to get a value from a Label. +func (k *UInt8) From(t label.Label) uint8 { return uint8(t.Unpack64()) } + +// UInt16 represents a key +type UInt16 struct { + name string + description string +} + +// NewUInt16 creates a new Key for uint16 values. +func NewUInt16(name, description string) *UInt16 { + return &UInt16{name: name, description: description} +} + +func (k *UInt16) Name() string { return k.name } +func (k *UInt16) Description() string { return k.description } + +func (k *UInt16) Format(w io.Writer, buf []byte, l label.Label) { + w.Write(strconv.AppendUint(buf, uint64(k.From(l)), 10)) +} + +// Of creates a new Label with this key and the supplied value. +func (k *UInt16) Of(v uint16) label.Label { return label.Of64(k, uint64(v)) } + +// Get can be used to get a label for the key from a label.Map. +func (k *UInt16) Get(lm label.Map) uint16 { + if t := lm.Find(k); t.Valid() { + return k.From(t) + } + return 0 +} + +// From can be used to get a value from a Label. +func (k *UInt16) From(t label.Label) uint16 { return uint16(t.Unpack64()) } + +// UInt32 represents a key +type UInt32 struct { + name string + description string +} + +// NewUInt32 creates a new Key for uint32 values. 
+func NewUInt32(name, description string) *UInt32 { + return &UInt32{name: name, description: description} +} + +func (k *UInt32) Name() string { return k.name } +func (k *UInt32) Description() string { return k.description } + +func (k *UInt32) Format(w io.Writer, buf []byte, l label.Label) { + w.Write(strconv.AppendUint(buf, uint64(k.From(l)), 10)) +} + +// Of creates a new Label with this key and the supplied value. +func (k *UInt32) Of(v uint32) label.Label { return label.Of64(k, uint64(v)) } + +// Get can be used to get a label for the key from a label.Map. +func (k *UInt32) Get(lm label.Map) uint32 { + if t := lm.Find(k); t.Valid() { + return k.From(t) + } + return 0 +} + +// From can be used to get a value from a Label. +func (k *UInt32) From(t label.Label) uint32 { return uint32(t.Unpack64()) } + +// UInt64 represents a key +type UInt64 struct { + name string + description string +} + +// NewUInt64 creates a new Key for uint64 values. +func NewUInt64(name, description string) *UInt64 { + return &UInt64{name: name, description: description} +} + +func (k *UInt64) Name() string { return k.name } +func (k *UInt64) Description() string { return k.description } + +func (k *UInt64) Format(w io.Writer, buf []byte, l label.Label) { + w.Write(strconv.AppendUint(buf, k.From(l), 10)) +} + +// Of creates a new Label with this key and the supplied value. +func (k *UInt64) Of(v uint64) label.Label { return label.Of64(k, v) } + +// Get can be used to get a label for the key from a label.Map. +func (k *UInt64) Get(lm label.Map) uint64 { + if t := lm.Find(k); t.Valid() { + return k.From(t) + } + return 0 +} + +// From can be used to get a value from a Label. +func (k *UInt64) From(t label.Label) uint64 { return t.Unpack64() } + +// Float32 represents a key +type Float32 struct { + name string + description string +} + +// NewFloat32 creates a new Key for float32 values. 
+func NewFloat32(name, description string) *Float32 { + return &Float32{name: name, description: description} +} + +func (k *Float32) Name() string { return k.name } +func (k *Float32) Description() string { return k.description } + +func (k *Float32) Format(w io.Writer, buf []byte, l label.Label) { + w.Write(strconv.AppendFloat(buf, float64(k.From(l)), 'E', -1, 32)) +} + +// Of creates a new Label with this key and the supplied value. +func (k *Float32) Of(v float32) label.Label { + return label.Of64(k, uint64(math.Float32bits(v))) +} + +// Get can be used to get a label for the key from a label.Map. +func (k *Float32) Get(lm label.Map) float32 { + if t := lm.Find(k); t.Valid() { + return k.From(t) + } + return 0 +} + +// From can be used to get a value from a Label. +func (k *Float32) From(t label.Label) float32 { + return math.Float32frombits(uint32(t.Unpack64())) +} + +// Float64 represents a key +type Float64 struct { + name string + description string +} + +// NewFloat64 creates a new Key for float64 values. +func NewFloat64(name, description string) *Float64 { + return &Float64{name: name, description: description} +} + +func (k *Float64) Name() string { return k.name } +func (k *Float64) Description() string { return k.description } + +func (k *Float64) Format(w io.Writer, buf []byte, l label.Label) { + w.Write(strconv.AppendFloat(buf, k.From(l), 'E', -1, 64)) +} + +// Of creates a new Label with this key and the supplied value. +func (k *Float64) Of(v float64) label.Label { + return label.Of64(k, math.Float64bits(v)) +} + +// Get can be used to get a label for the key from a label.Map. +func (k *Float64) Get(lm label.Map) float64 { + if t := lm.Find(k); t.Valid() { + return k.From(t) + } + return 0 +} + +// From can be used to get a value from a Label.
+func (k *Float64) From(t label.Label) float64 { + return math.Float64frombits(t.Unpack64()) +} + +// String represents a key +type String struct { + name string + description string +} + +// NewString creates a new Key for string values. +func NewString(name, description string) *String { + return &String{name: name, description: description} +} + +func (k *String) Name() string { return k.name } +func (k *String) Description() string { return k.description } + +func (k *String) Format(w io.Writer, buf []byte, l label.Label) { + w.Write(strconv.AppendQuote(buf, k.From(l))) +} + +// Of creates a new Label with this key and the supplied value. +func (k *String) Of(v string) label.Label { return label.OfString(k, v) } + +// Get can be used to get a label for the key from a label.Map. +func (k *String) Get(lm label.Map) string { + if t := lm.Find(k); t.Valid() { + return k.From(t) + } + return "" +} + +// From can be used to get a value from a Label. +func (k *String) From(t label.Label) string { return t.UnpackString() } + +// Boolean represents a key +type Boolean struct { + name string + description string +} + +// NewBoolean creates a new Key for bool values. +func NewBoolean(name, description string) *Boolean { + return &Boolean{name: name, description: description} +} + +func (k *Boolean) Name() string { return k.name } +func (k *Boolean) Description() string { return k.description } + +func (k *Boolean) Format(w io.Writer, buf []byte, l label.Label) { + w.Write(strconv.AppendBool(buf, k.From(l))) +} + +// Of creates a new Label with this key and the supplied value. +func (k *Boolean) Of(v bool) label.Label { + if v { + return label.Of64(k, 1) + } + return label.Of64(k, 0) +} + +// Get can be used to get a label for the key from a label.Map. +func (k *Boolean) Get(lm label.Map) bool { + if t := lm.Find(k); t.Valid() { + return k.From(t) + } + return false +} + +// From can be used to get a value from a Label.
+func (k *Boolean) From(t label.Label) bool { return t.Unpack64() > 0 } + +// Error represents a key +type Error struct { + name string + description string +} + +// NewError creates a new Key for error values. +func NewError(name, description string) *Error { + return &Error{name: name, description: description} +} + +func (k *Error) Name() string { return k.name } +func (k *Error) Description() string { return k.description } + +func (k *Error) Format(w io.Writer, buf []byte, l label.Label) { + io.WriteString(w, k.From(l).Error()) +} + +// Of creates a new Label with this key and the supplied value. +func (k *Error) Of(v error) label.Label { return label.OfValue(k, v) } + +// Get can be used to get a label for the key from a label.Map. +func (k *Error) Get(lm label.Map) error { + if t := lm.Find(k); t.Valid() { + return k.From(t) + } + return nil +} + +// From can be used to get a value from a Label. +func (k *Error) From(t label.Label) error { + err, _ := t.UnpackValue().(error) + return err +} diff --git a/cluster-autoscaler/vendor/golang.org/x/tools/internal/event/keys/standard.go b/cluster-autoscaler/vendor/golang.org/x/tools/internal/event/keys/standard.go new file mode 100644 index 000000000000..7e9586659213 --- /dev/null +++ b/cluster-autoscaler/vendor/golang.org/x/tools/internal/event/keys/standard.go @@ -0,0 +1,22 @@ +// Copyright 2020 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package keys + +var ( + // Msg is a key used to add message strings to label lists. + Msg = NewString("message", "a readable message") + // Label is a key used to indicate an event adds labels to the context. + Label = NewTag("label", "a label context marker") + // Start is used for things like traces that have a name. + Start = NewString("start", "span start") + // End is a key used to indicate an event ends a span.
+ End = NewTag("end", "a span end marker") + // Detach is a key used to indicate an event detaches from its context. + Detach = NewTag("detach", "a span detach marker") + // Err is a key used to add error values to label lists. + Err = NewError("error", "an error that occurred") + // Metric is a key used to indicate an event records metrics. + Metric = NewTag("metric", "a metric event marker") +) diff --git a/cluster-autoscaler/vendor/golang.org/x/tools/internal/event/label/label.go b/cluster-autoscaler/vendor/golang.org/x/tools/internal/event/label/label.go new file mode 100644 index 000000000000..0f526e1f9ab4 --- /dev/null +++ b/cluster-autoscaler/vendor/golang.org/x/tools/internal/event/label/label.go @@ -0,0 +1,215 @@ +// Copyright 2019 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package label + +import ( + "fmt" + "io" + "reflect" + "unsafe" +) + +// Key is used as the identity of a Label. +// Keys are intended to be compared by pointer only; the name should be unique +// for communicating with external systems, but it is not required or enforced. +type Key interface { + // Name returns the key name. + Name() string + // Description returns a string that can be used to describe the value. + Description() string + + // Format is used in formatting to append the value of the label to the + // supplied buffer. + // The formatter may use the supplied buf as a scratch area to avoid + // allocations. + Format(w io.Writer, buf []byte, l Label) +} + +// Label holds a key and value pair. +// It is normally used when passing around lists of labels. +type Label struct { + key Key + packed uint64 + untyped interface{} +} + +// Map is the interface to a collection of Labels indexed by key. +type Map interface { + // Find returns the label that matches the supplied key. + Find(key Key) Label +} + +// List is the interface to something that provides an iterable +// list of labels.
+// Iteration should start from 0 and continue until Valid returns false. +type List interface { + // Valid returns true if the index is within range for the list. + // It does not imply the label at that index will itself be valid. + Valid(index int) bool + // Label returns the label at the given index. + Label(index int) Label +} + +// list implements LabelList for a list of Labels. +type list struct { + labels []Label +} + +// filter wraps a LabelList filtering out specific labels. +type filter struct { + keys []Key + underlying List +} + +// listMap implements LabelMap for a simple list of labels. +type listMap struct { + labels []Label +} + +// mapChain implements LabelMap for a list of underlying LabelMap. +type mapChain struct { + maps []Map +} + +// OfValue creates a new label from the key and value. +// This method is for implementing new key types, label creation should +// normally be done with the Of method of the key. +func OfValue(k Key, value interface{}) Label { return Label{key: k, untyped: value} } + +// UnpackValue assumes the label was built using LabelOfValue and returns the value +// that was passed to that constructor. +// This method is for implementing new key types, for type safety normal +// access should be done with the From method of the key. +func (t Label) UnpackValue() interface{} { return t.untyped } + +// Of64 creates a new label from a key and a uint64. This is often +// used for non uint64 values that can be packed into a uint64. +// This method is for implementing new key types, label creation should +// normally be done with the Of method of the key. +func Of64(k Key, v uint64) Label { return Label{key: k, packed: v} } + +// Unpack64 assumes the label was built using LabelOf64 and returns the value that +// was passed to that constructor. +// This method is for implementing new key types, for type safety normal +// access should be done with the From method of the key. 
+func (t Label) Unpack64() uint64 { return t.packed } + +type stringptr unsafe.Pointer + +// OfString creates a new label from a key and a string. +// This method is for implementing new key types, label creation should +// normally be done with the Of method of the key. +func OfString(k Key, v string) Label { + hdr := (*reflect.StringHeader)(unsafe.Pointer(&v)) + return Label{ + key: k, + packed: uint64(hdr.Len), + untyped: stringptr(hdr.Data), + } +} + +// UnpackString assumes the label was built using LabelOfString and returns the +// value that was passed to that constructor. +// This method is for implementing new key types, for type safety normal +// access should be done with the From method of the key. +func (t Label) UnpackString() string { + var v string + hdr := (*reflect.StringHeader)(unsafe.Pointer(&v)) + hdr.Data = uintptr(t.untyped.(stringptr)) + hdr.Len = int(t.packed) + return v +} + +// Valid returns true if the Label is a valid one (it has a key). +func (t Label) Valid() bool { return t.key != nil } + +// Key returns the key of this Label. +func (t Label) Key() Key { return t.key } + +// Format is used for debug printing of labels. 
+func (t Label) Format(f fmt.State, r rune) { + if !t.Valid() { + io.WriteString(f, `nil`) + return + } + io.WriteString(f, t.Key().Name()) + io.WriteString(f, "=") + var buf [128]byte + t.Key().Format(f, buf[:0], t) +} + +func (l *list) Valid(index int) bool { + return index >= 0 && index < len(l.labels) +} + +func (l *list) Label(index int) Label { + return l.labels[index] +} + +func (f *filter) Valid(index int) bool { + return f.underlying.Valid(index) +} + +func (f *filter) Label(index int) Label { + l := f.underlying.Label(index) + for _, f := range f.keys { + if l.Key() == f { + return Label{} + } + } + return l +} + +func (lm listMap) Find(key Key) Label { + for _, l := range lm.labels { + if l.Key() == key { + return l + } + } + return Label{} +} + +func (c mapChain) Find(key Key) Label { + for _, src := range c.maps { + l := src.Find(key) + if l.Valid() { + return l + } + } + return Label{} +} + +var emptyList = &list{} + +func NewList(labels ...Label) List { + if len(labels) == 0 { + return emptyList + } + return &list{labels: labels} +} + +func Filter(l List, keys ...Key) List { + if len(keys) == 0 { + return l + } + return &filter{keys: keys, underlying: l} +} + +func NewMap(labels ...Label) Map { + return listMap{labels: labels} +} + +func MergeMaps(srcs ...Map) Map { + var nonNil []Map + for _, src := range srcs { + if src != nil { + nonNil = append(nonNil, src) + } + } + if len(nonNil) == 1 { + return nonNil[0] + } + return mapChain{maps: nonNil} +} diff --git a/cluster-autoscaler/vendor/golang.org/x/tools/internal/event/tag/tag.go b/cluster-autoscaler/vendor/golang.org/x/tools/internal/event/tag/tag.go new file mode 100644 index 000000000000..581b26c2041f --- /dev/null +++ b/cluster-autoscaler/vendor/golang.org/x/tools/internal/event/tag/tag.go @@ -0,0 +1,59 @@ +// Copyright 2019 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. 
+ +// Package tag provides the labels used for telemetry throughout gopls. +package tag + +import ( + "golang.org/x/tools/internal/event/keys" +) + +var ( + // create the label keys we use + Method = keys.NewString("method", "") + StatusCode = keys.NewString("status.code", "") + StatusMessage = keys.NewString("status.message", "") + RPCID = keys.NewString("id", "") + RPCDirection = keys.NewString("direction", "") + File = keys.NewString("file", "") + Directory = keys.New("directory", "") + URI = keys.New("URI", "") + Package = keys.NewString("package", "") // sorted comma-separated list of Package IDs + PackagePath = keys.NewString("package_path", "") + Query = keys.New("query", "") + Snapshot = keys.NewUInt64("snapshot", "") + Operation = keys.NewString("operation", "") + + Position = keys.New("position", "") + Category = keys.NewString("category", "") + PackageCount = keys.NewInt("packages", "") + Files = keys.New("files", "") + Port = keys.NewInt("port", "") + Type = keys.New("type", "") + HoverKind = keys.NewString("hoverkind", "") + + NewServer = keys.NewString("new_server", "A new server was added") + EndServer = keys.NewString("end_server", "A server was shut down") + + ServerID = keys.NewString("server", "The server ID an event is related to") + Logfile = keys.NewString("logfile", "") + DebugAddress = keys.NewString("debug_address", "") + GoplsPath = keys.NewString("gopls_path", "") + ClientID = keys.NewString("client_id", "") + + Level = keys.NewInt("level", "The logging level") +) + +var ( + // create the stats we measure + Started = keys.NewInt64("started", "Count of started RPCs.") + ReceivedBytes = keys.NewInt64("received_bytes", "Bytes received.") //, unit.Bytes) + SentBytes = keys.NewInt64("sent_bytes", "Bytes sent.") //, unit.Bytes) + Latency = keys.NewFloat64("latency_ms", "Elapsed time in milliseconds") //, unit.Milliseconds) +) + +const ( + Inbound = "in" + Outbound = "out" +) diff --git 
a/cluster-autoscaler/vendor/golang.org/x/tools/internal/gcimporter/bimport.go b/cluster-autoscaler/vendor/golang.org/x/tools/internal/gcimporter/bimport.go new file mode 100644 index 000000000000..d98b0db2a9a9 --- /dev/null +++ b/cluster-autoscaler/vendor/golang.org/x/tools/internal/gcimporter/bimport.go @@ -0,0 +1,150 @@ +// Copyright 2015 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// This file contains the remaining vestiges of +// $GOROOT/src/go/internal/gcimporter/bimport.go. + +package gcimporter + +import ( + "fmt" + "go/token" + "go/types" + "sync" +) + +func errorf(format string, args ...interface{}) { + panic(fmt.Sprintf(format, args...)) +} + +const deltaNewFile = -64 // see cmd/compile/internal/gc/bexport.go + +// Synthesize a token.Pos +type fakeFileSet struct { + fset *token.FileSet + files map[string]*fileInfo +} + +type fileInfo struct { + file *token.File + lastline int +} + +const maxlines = 64 * 1024 + +func (s *fakeFileSet) pos(file string, line, column int) token.Pos { + // TODO(mdempsky): Make use of column. + + // Since we don't know the set of needed file positions, we reserve maxlines + // positions per file. We delay calling token.File.SetLines until all + // positions have been calculated (by way of fakeFileSet.setLines), so that + // we can avoid setting unnecessary lines. See also golang/go#46586. + f := s.files[file] + if f == nil { + f = &fileInfo{file: s.fset.AddFile(file, -1, maxlines)} + s.files[file] = f + } + if line > maxlines { + line = 1 + } + if line > f.lastline { + f.lastline = line + } + + // Return a fake position assuming that f.file consists only of newlines. 
+ return token.Pos(f.file.Base() + line - 1) +} + +func (s *fakeFileSet) setLines() { + fakeLinesOnce.Do(func() { + fakeLines = make([]int, maxlines) + for i := range fakeLines { + fakeLines[i] = i + } + }) + for _, f := range s.files { + f.file.SetLines(fakeLines[:f.lastline]) + } +} + +var ( + fakeLines []int + fakeLinesOnce sync.Once +) + +func chanDir(d int) types.ChanDir { + // tag values must match the constants in cmd/compile/internal/gc/go.go + switch d { + case 1 /* Crecv */ : + return types.RecvOnly + case 2 /* Csend */ : + return types.SendOnly + case 3 /* Cboth */ : + return types.SendRecv + default: + errorf("unexpected channel dir %d", d) + return 0 + } +} + +var predeclOnce sync.Once +var predecl []types.Type // initialized lazily + +func predeclared() []types.Type { + predeclOnce.Do(func() { + // initialize lazily to be sure that all + // elements have been initialized before + predecl = []types.Type{ // basic types + types.Typ[types.Bool], + types.Typ[types.Int], + types.Typ[types.Int8], + types.Typ[types.Int16], + types.Typ[types.Int32], + types.Typ[types.Int64], + types.Typ[types.Uint], + types.Typ[types.Uint8], + types.Typ[types.Uint16], + types.Typ[types.Uint32], + types.Typ[types.Uint64], + types.Typ[types.Uintptr], + types.Typ[types.Float32], + types.Typ[types.Float64], + types.Typ[types.Complex64], + types.Typ[types.Complex128], + types.Typ[types.String], + + // basic type aliases + types.Universe.Lookup("byte").Type(), + types.Universe.Lookup("rune").Type(), + + // error + types.Universe.Lookup("error").Type(), + + // untyped types + types.Typ[types.UntypedBool], + types.Typ[types.UntypedInt], + types.Typ[types.UntypedRune], + types.Typ[types.UntypedFloat], + types.Typ[types.UntypedComplex], + types.Typ[types.UntypedString], + types.Typ[types.UntypedNil], + + // package unsafe + types.Typ[types.UnsafePointer], + + // invalid type + types.Typ[types.Invalid], // only appears in packages with errors + + // used internally by gc; never used by 
this package or in .a files + anyType{}, + } + predecl = append(predecl, additionalPredeclared()...) + }) + return predecl +} + +type anyType struct{} + +func (t anyType) Underlying() types.Type { return t } +func (t anyType) String() string { return "any" } diff --git a/cluster-autoscaler/vendor/golang.org/x/tools/internal/gcimporter/exportdata.go b/cluster-autoscaler/vendor/golang.org/x/tools/internal/gcimporter/exportdata.go new file mode 100644 index 000000000000..f6437feb1cfd --- /dev/null +++ b/cluster-autoscaler/vendor/golang.org/x/tools/internal/gcimporter/exportdata.go @@ -0,0 +1,99 @@ +// Copyright 2011 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// This file is a copy of $GOROOT/src/go/internal/gcimporter/exportdata.go. + +// This file implements FindExportData. + +package gcimporter + +import ( + "bufio" + "fmt" + "io" + "strconv" + "strings" +) + +func readGopackHeader(r *bufio.Reader) (name string, size int64, err error) { + // See $GOROOT/include/ar.h. + hdr := make([]byte, 16+12+6+6+8+10+2) + _, err = io.ReadFull(r, hdr) + if err != nil { + return + } + // leave for debugging + if false { + fmt.Printf("header: %s", hdr) + } + s := strings.TrimSpace(string(hdr[16+12+6+6+8:][:10])) + length, err := strconv.Atoi(s) + size = int64(length) + if err != nil || hdr[len(hdr)-2] != '`' || hdr[len(hdr)-1] != '\n' { + err = fmt.Errorf("invalid archive header") + return + } + name = strings.TrimSpace(string(hdr[:16])) + return +} + +// FindExportData positions the reader r at the beginning of the +// export data section of an underlying GC-created object/archive +// file by reading from it. The reader must be positioned at the +// start of the file before calling this function. The hdr result +// is the string before the export data, either "$$" or "$$B". +// The size result is the length of the export data in bytes, or -1 if not known. 
+func FindExportData(r *bufio.Reader) (hdr string, size int64, err error) { + // Read first line to make sure this is an object file. + line, err := r.ReadSlice('\n') + if err != nil { + err = fmt.Errorf("can't find export data (%v)", err) + return + } + + if string(line) == "!\n" { + // Archive file. Scan to __.PKGDEF. + var name string + if name, size, err = readGopackHeader(r); err != nil { + return + } + + // First entry should be __.PKGDEF. + if name != "__.PKGDEF" { + err = fmt.Errorf("go archive is missing __.PKGDEF") + return + } + + // Read first line of __.PKGDEF data, so that line + // is once again the first line of the input. + if line, err = r.ReadSlice('\n'); err != nil { + err = fmt.Errorf("can't find export data (%v)", err) + return + } + size -= int64(len(line)) + } + + // Now at __.PKGDEF in archive or still at beginning of file. + // Either way, line should begin with "go object ". + if !strings.HasPrefix(string(line), "go object ") { + err = fmt.Errorf("not a Go object file") + return + } + + // Skip over object header to export data. + // Begins after first line starting with $$. + for line[0] != '$' { + if line, err = r.ReadSlice('\n'); err != nil { + err = fmt.Errorf("can't find export data (%v)", err) + return + } + size -= int64(len(line)) + } + hdr = string(line) + if size < 0 { + size = -1 + } + + return +} diff --git a/cluster-autoscaler/vendor/golang.org/x/tools/internal/gcimporter/gcimporter.go b/cluster-autoscaler/vendor/golang.org/x/tools/internal/gcimporter/gcimporter.go new file mode 100644 index 000000000000..b1223713b940 --- /dev/null +++ b/cluster-autoscaler/vendor/golang.org/x/tools/internal/gcimporter/gcimporter.go @@ -0,0 +1,274 @@ +// Copyright 2011 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// This file is a reduced copy of $GOROOT/src/go/internal/gcimporter/gcimporter.go. 
+ +// Package gcimporter provides various functions for reading +// gc-generated object files that can be used to implement the +// Importer interface defined by the Go 1.5 standard library package. +// +// The encoding is deterministic: if the encoder is applied twice to +// the same types.Package data structure, both encodings are equal. +// This property may be important to avoid spurious changes in +// applications such as build systems. +// +// However, the encoder is not necessarily idempotent. Importing an +// exported package may yield a types.Package that, while it +// represents the same set of Go types as the original, may differ in +// the details of its internal representation. Because of these +// differences, re-encoding the imported package may yield a +// different, but equally valid, encoding of the package. +package gcimporter // import "golang.org/x/tools/internal/gcimporter" + +import ( + "bufio" + "bytes" + "fmt" + "go/build" + "go/token" + "go/types" + "io" + "io/ioutil" + "os" + "os/exec" + "path/filepath" + "strings" + "sync" +) + +const ( + // Enable debug during development: it adds some additional checks, and + // prevents errors from being recovered. + debug = false + + // If trace is set, debugging output is printed to std out. + trace = false +) + +var exportMap sync.Map // package dir → func() (string, bool) + +// lookupGorootExport returns the location of the export data +// (normally found in the build cache, but located in GOROOT/pkg +// in prior Go releases) for the package located in pkgDir. +// +// (We use the package's directory instead of its import path +// mainly to simplify handling of the packages in src/vendor +// and cmd/vendor.) 
+func lookupGorootExport(pkgDir string) (string, bool) { + f, ok := exportMap.Load(pkgDir) + if !ok { + var ( + listOnce sync.Once + exportPath string + ) + f, _ = exportMap.LoadOrStore(pkgDir, func() (string, bool) { + listOnce.Do(func() { + cmd := exec.Command("go", "list", "-export", "-f", "{{.Export}}", pkgDir) + cmd.Dir = build.Default.GOROOT + var output []byte + output, err := cmd.Output() + if err != nil { + return + } + + exports := strings.Split(string(bytes.TrimSpace(output)), "\n") + if len(exports) != 1 { + return + } + + exportPath = exports[0] + }) + + return exportPath, exportPath != "" + }) + } + + return f.(func() (string, bool))() +} + +var pkgExts = [...]string{".a", ".o"} + +// FindPkg returns the filename and unique package id for an import +// path based on package information provided by build.Import (using +// the build.Default build.Context). A relative srcDir is interpreted +// relative to the current working directory. +// If no file was found, an empty filename is returned. +func FindPkg(path, srcDir string) (filename, id string) { + if path == "" { + return + } + + var noext string + switch { + default: + // "x" -> "$GOPATH/pkg/$GOOS_$GOARCH/x.ext", "x" + // Don't require the source files to be present. 
+ if abs, err := filepath.Abs(srcDir); err == nil { // see issue 14282 + srcDir = abs + } + bp, _ := build.Import(path, srcDir, build.FindOnly|build.AllowBinary) + if bp.PkgObj == "" { + var ok bool + if bp.Goroot && bp.Dir != "" { + filename, ok = lookupGorootExport(bp.Dir) + } + if !ok { + id = path // make sure we have an id to print in error message + return + } + } else { + noext = strings.TrimSuffix(bp.PkgObj, ".a") + id = bp.ImportPath + } + + case build.IsLocalImport(path): + // "./x" -> "/this/directory/x.ext", "/this/directory/x" + noext = filepath.Join(srcDir, path) + id = noext + + case filepath.IsAbs(path): + // for completeness only - go/build.Import + // does not support absolute imports + // "/x" -> "/x.ext", "/x" + noext = path + id = path + } + + if false { // for debugging + if path != id { + fmt.Printf("%s -> %s\n", path, id) + } + } + + if filename != "" { + if f, err := os.Stat(filename); err == nil && !f.IsDir() { + return + } + } + + // try extensions + for _, ext := range pkgExts { + filename = noext + ext + if f, err := os.Stat(filename); err == nil && !f.IsDir() { + return + } + } + + filename = "" // not found + return +} + +// Import imports a gc-generated package given its import path and srcDir, adds +// the corresponding package object to the packages map, and returns the object. +// The packages map must contain all packages already imported. +func Import(packages map[string]*types.Package, path, srcDir string, lookup func(path string) (io.ReadCloser, error)) (pkg *types.Package, err error) { + var rc io.ReadCloser + var filename, id string + if lookup != nil { + // With custom lookup specified, assume that caller has + // converted path to a canonical import path for use in the map. + if path == "unsafe" { + return types.Unsafe, nil + } + id = path + + // No need to re-import if the package was imported completely before. 
+ if pkg = packages[id]; pkg != nil && pkg.Complete() { + return + } + f, err := lookup(path) + if err != nil { + return nil, err + } + rc = f + } else { + filename, id = FindPkg(path, srcDir) + if filename == "" { + if path == "unsafe" { + return types.Unsafe, nil + } + return nil, fmt.Errorf("can't find import: %q", id) + } + + // no need to re-import if the package was imported completely before + if pkg = packages[id]; pkg != nil && pkg.Complete() { + return + } + + // open file + f, err := os.Open(filename) + if err != nil { + return nil, err + } + defer func() { + if err != nil { + // add file name to error + err = fmt.Errorf("%s: %v", filename, err) + } + }() + rc = f + } + defer rc.Close() + + var hdr string + var size int64 + buf := bufio.NewReader(rc) + if hdr, size, err = FindExportData(buf); err != nil { + return + } + + switch hdr { + case "$$B\n": + var data []byte + data, err = ioutil.ReadAll(buf) + if err != nil { + break + } + + // TODO(gri): allow clients of go/importer to provide a FileSet. + // Or, define a new standard go/types/gcexportdata package. + fset := token.NewFileSet() + + // Select appropriate importer. 
+ if len(data) > 0 { + switch data[0] { + case 'v', 'c', 'd': // binary, till go1.10 + return nil, fmt.Errorf("binary (%c) import format is no longer supported", data[0]) + + case 'i': // indexed, till go1.19 + _, pkg, err := IImportData(fset, packages, data[1:], id) + return pkg, err + + case 'u': // unified, from go1.20 + _, pkg, err := UImportData(fset, packages, data[1:size], id) + return pkg, err + + default: + l := len(data) + if l > 10 { + l = 10 + } + return nil, fmt.Errorf("unexpected export data with prefix %q for path %s", string(data[:l]), id) + } + } + + default: + err = fmt.Errorf("unknown export data header: %q", hdr) + } + + return +} + +func deref(typ types.Type) types.Type { + if p, _ := typ.(*types.Pointer); p != nil { + return p.Elem() + } + return typ +} + +type byPath []*types.Package + +func (a byPath) Len() int { return len(a) } +func (a byPath) Swap(i, j int) { a[i], a[j] = a[j], a[i] } +func (a byPath) Less(i, j int) bool { return a[i].Path() < a[j].Path() } diff --git a/cluster-autoscaler/vendor/golang.org/x/tools/internal/gcimporter/iexport.go b/cluster-autoscaler/vendor/golang.org/x/tools/internal/gcimporter/iexport.go new file mode 100644 index 000000000000..6103dd7102b3 --- /dev/null +++ b/cluster-autoscaler/vendor/golang.org/x/tools/internal/gcimporter/iexport.go @@ -0,0 +1,1322 @@ +// Copyright 2019 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// Indexed binary package export. +// This file was derived from $GOROOT/src/cmd/compile/internal/gc/iexport.go; +// see that file for specification of the format. 
+ +package gcimporter + +import ( + "bytes" + "encoding/binary" + "fmt" + "go/constant" + "go/token" + "go/types" + "io" + "math/big" + "reflect" + "sort" + "strconv" + "strings" + + "golang.org/x/tools/go/types/objectpath" + "golang.org/x/tools/internal/tokeninternal" + "golang.org/x/tools/internal/typeparams" +) + +// IExportShallow encodes "shallow" export data for the specified package. +// +// No promises are made about the encoding other than that it can be decoded by +// the same version of IImportShallow. If you plan to save export data in the +// file system, be sure to include a cryptographic digest of the executable in +// the key to avoid version skew. +// +// If the provided reportf func is non-nil, it will be used for reporting bugs +// encountered during export. +// TODO(rfindley): remove reportf when we are confident enough in the new +// objectpath encoding. +func IExportShallow(fset *token.FileSet, pkg *types.Package, reportf ReportFunc) ([]byte, error) { + // In principle this operation can only fail if out.Write fails, + // but that's impossible for bytes.Buffer---and as a matter of + // fact iexportCommon doesn't even check for I/O errors. + // TODO(adonovan): handle I/O errors properly. + // TODO(adonovan): use byte slices throughout, avoiding copying. + const bundle, shallow = false, true + var out bytes.Buffer + err := iexportCommon(&out, fset, bundle, shallow, iexportVersion, []*types.Package{pkg}) + return out.Bytes(), err +} + +// IImportShallow decodes "shallow" types.Package data encoded by +// IExportShallow in the same executable. This function cannot import data from +// cmd/compile or gcexportdata.Write. +// +// The importer calls getPackages to obtain package symbols for all +// packages mentioned in the export data, including the one being +// decoded. +// +// If the provided reportf func is non-nil, it will be used for reporting bugs +// encountered during import.
+// TODO(rfindley): remove reportf when we are confident enough in the new +// objectpath encoding. +func IImportShallow(fset *token.FileSet, getPackages GetPackagesFunc, data []byte, path string, reportf ReportFunc) (*types.Package, error) { + const bundle = false + const shallow = true + pkgs, err := iimportCommon(fset, getPackages, data, bundle, path, shallow, reportf) + if err != nil { + return nil, err + } + return pkgs[0], nil +} + +// ReportFunc is the type of a function used to report formatted bugs. +type ReportFunc = func(string, ...interface{}) + +// Current bundled export format version. Increase with each format change. +// 0: initial implementation +const bundleVersion = 0 + +// IExportData writes indexed export data for pkg to out. +// +// If no file set is provided, position info will be missing. +// The package path of the top-level package will not be recorded, +// so that calls to IImportData can override with a provided package path. +func IExportData(out io.Writer, fset *token.FileSet, pkg *types.Package) error { + const bundle, shallow = false, false + return iexportCommon(out, fset, bundle, shallow, iexportVersion, []*types.Package{pkg}) +} + +// IExportBundle writes an indexed export bundle for pkgs to out. +func IExportBundle(out io.Writer, fset *token.FileSet, pkgs []*types.Package) error { + const bundle, shallow = true, false + return iexportCommon(out, fset, bundle, shallow, iexportVersion, pkgs) +} + +func iexportCommon(out io.Writer, fset *token.FileSet, bundle, shallow bool, version int, pkgs []*types.Package) (err error) { + if !debug { + defer func() { + if e := recover(); e != nil { + if ierr, ok := e.(internalError); ok { + err = ierr + return + } + // Not an internal error; panic again. 
+ panic(e) + } + }() + } + + p := iexporter{ + fset: fset, + version: version, + shallow: shallow, + allPkgs: map[*types.Package]bool{}, + stringIndex: map[string]uint64{}, + declIndex: map[types.Object]uint64{}, + tparamNames: map[types.Object]string{}, + typIndex: map[types.Type]uint64{}, + } + if !bundle { + p.localpkg = pkgs[0] + } + + for i, pt := range predeclared() { + p.typIndex[pt] = uint64(i) + } + if len(p.typIndex) > predeclReserved { + panic(internalErrorf("too many predeclared types: %d > %d", len(p.typIndex), predeclReserved)) + } + + // Initialize work queue with exported declarations. + for _, pkg := range pkgs { + scope := pkg.Scope() + for _, name := range scope.Names() { + if token.IsExported(name) { + p.pushDecl(scope.Lookup(name)) + } + } + + if bundle { + // Ensure pkg and its imports are included in the index. + p.allPkgs[pkg] = true + for _, imp := range pkg.Imports() { + p.allPkgs[imp] = true + } + } + } + + // Loop until no more work. + for !p.declTodo.empty() { + p.doDecl(p.declTodo.popHead()) + } + + // Produce index of offset of each file record in files. + var files intWriter + var fileOffset []uint64 // fileOffset[i] is offset in files of file encoded as i + if p.shallow { + fileOffset = make([]uint64, len(p.fileInfos)) + for i, info := range p.fileInfos { + fileOffset[i] = uint64(files.Len()) + p.encodeFile(&files, info.file, info.needed) + } + } + + // Append indices to data0 section. + dataLen := uint64(p.data0.Len()) + w := p.newWriter() + w.writeIndex(p.declIndex) + + if bundle { + w.uint64(uint64(len(pkgs))) + for _, pkg := range pkgs { + w.pkg(pkg) + imps := pkg.Imports() + w.uint64(uint64(len(imps))) + for _, imp := range imps { + w.pkg(imp) + } + } + } + w.flush() + + // Assemble header. 
+ var hdr intWriter + if bundle { + hdr.uint64(bundleVersion) + } + hdr.uint64(uint64(p.version)) + hdr.uint64(uint64(p.strings.Len())) + if p.shallow { + hdr.uint64(uint64(files.Len())) + hdr.uint64(uint64(len(fileOffset))) + for _, offset := range fileOffset { + hdr.uint64(offset) + } + } + hdr.uint64(dataLen) + + // Flush output. + io.Copy(out, &hdr) + io.Copy(out, &p.strings) + if p.shallow { + io.Copy(out, &files) + } + io.Copy(out, &p.data0) + + return nil +} + +// encodeFile writes to w a representation of the file sufficient to +// faithfully restore position information about all needed offsets. +// Mutates the needed array. +func (p *iexporter) encodeFile(w *intWriter, file *token.File, needed []uint64) { + _ = needed[0] // precondition: needed is non-empty + + w.uint64(p.stringOff(file.Name())) + + size := uint64(file.Size()) + w.uint64(size) + + // Sort the set of needed offsets. Duplicates are harmless. + sort.Slice(needed, func(i, j int) bool { return needed[i] < needed[j] }) + + lines := tokeninternal.GetLines(file) // byte offset of each line start + w.uint64(uint64(len(lines))) + + // Rather than record the entire array of line start offsets, + // we save only a sparse list of (index, offset) pairs for + // the start of each line that contains a needed position. + var sparse [][2]int // (index, offset) pairs +outer: + for i, lineStart := range lines { + lineEnd := size + if i < len(lines)-1 { + lineEnd = uint64(lines[i+1]) + } + // Does this line contain a needed offset? + if needed[0] < lineEnd { + sparse = append(sparse, [2]int{i, lineStart}) + for needed[0] < lineEnd { + needed = needed[1:] + if len(needed) == 0 { + break outer + } + } + } + } + + // Delta-encode the columns. + w.uint64(uint64(len(sparse))) + var prev [2]int + for _, pair := range sparse { + w.uint64(uint64(pair[0] - prev[0])) + w.uint64(uint64(pair[1] - prev[1])) + prev = pair + } +} + +// writeIndex writes out an object index.
mainIndex indicates whether +// we're writing out the main index, which is also read by +// non-compiler tools and includes a complete package description +// (i.e., name and height). +func (w *exportWriter) writeIndex(index map[types.Object]uint64) { + type pkgObj struct { + obj types.Object + name string // qualified name; differs from obj.Name for type params + } + // Build a map from packages to objects from that package. + pkgObjs := map[*types.Package][]pkgObj{} + + // For the main index, make sure to include every package that + // we reference, even if we're not exporting (or reexporting) + // any symbols from it. + if w.p.localpkg != nil { + pkgObjs[w.p.localpkg] = nil + } + for pkg := range w.p.allPkgs { + pkgObjs[pkg] = nil + } + + for obj := range index { + name := w.p.exportName(obj) + pkgObjs[obj.Pkg()] = append(pkgObjs[obj.Pkg()], pkgObj{obj, name}) + } + + var pkgs []*types.Package + for pkg, objs := range pkgObjs { + pkgs = append(pkgs, pkg) + + sort.Slice(objs, func(i, j int) bool { + return objs[i].name < objs[j].name + }) + } + + sort.Slice(pkgs, func(i, j int) bool { + return w.exportPath(pkgs[i]) < w.exportPath(pkgs[j]) + }) + + w.uint64(uint64(len(pkgs))) + for _, pkg := range pkgs { + w.string(w.exportPath(pkg)) + w.string(pkg.Name()) + w.uint64(uint64(0)) // package height is not needed for go/types + + objs := pkgObjs[pkg] + w.uint64(uint64(len(objs))) + for _, obj := range objs { + w.string(obj.name) + w.uint64(index[obj.obj]) + } + } +} + +// exportName returns the 'exported' name of an object. It differs from +// obj.Name() only for type parameters (see tparamExportName for details). 
+func (p *iexporter) exportName(obj types.Object) (res string) { + if name := p.tparamNames[obj]; name != "" { + return name + } + return obj.Name() +} + +type iexporter struct { + fset *token.FileSet + out *bytes.Buffer + version int + + shallow bool // don't put types from other packages in the index + objEncoder *objectpath.Encoder // encodes objects from other packages in shallow mode; lazily allocated + localpkg *types.Package // (nil in bundle mode) + + // allPkgs tracks all packages that have been referenced by + // the export data, so we can ensure they are included in the + // main index. + allPkgs map[*types.Package]bool + + declTodo objQueue + + strings intWriter + stringIndex map[string]uint64 + + // In shallow mode, object positions are encoded as (file, offset). + // Each file is recorded as a line-number table. + // Only the lines of needed positions are saved faithfully. + fileInfo map[*token.File]uint64 // value is index in fileInfos + fileInfos []*filePositions + + data0 intWriter + declIndex map[types.Object]uint64 + tparamNames map[types.Object]string // typeparam->exported name + typIndex map[types.Type]uint64 + + indent int // for tracing support +} + +type filePositions struct { + file *token.File + needed []uint64 // unordered list of needed file offsets +} + +func (p *iexporter) trace(format string, args ...interface{}) { + if !trace { + // Call sites should also be guarded, but having this check here allows + // easily enabling/disabling debug trace statements. + return + } + fmt.Printf(strings.Repeat("..", p.indent)+format+"\n", args...) +} + +// objectpathEncoder returns the lazily allocated objectpath.Encoder to use +// when encoding objects in other packages during shallow export. +// +// Using a shared Encoder amortizes some of the cost of objectpath search.
+func (p *iexporter) objectpathEncoder() *objectpath.Encoder { + if p.objEncoder == nil { + p.objEncoder = new(objectpath.Encoder) + } + return p.objEncoder +} + +// stringOff returns the offset of s within the string section. +// If not already present, it's added to the end. +func (p *iexporter) stringOff(s string) uint64 { + off, ok := p.stringIndex[s] + if !ok { + off = uint64(p.strings.Len()) + p.stringIndex[s] = off + + p.strings.uint64(uint64(len(s))) + p.strings.WriteString(s) + } + return off +} + +// fileIndexAndOffset returns the index of the token.File and the byte offset of pos within it. +func (p *iexporter) fileIndexAndOffset(file *token.File, pos token.Pos) (uint64, uint64) { + index, ok := p.fileInfo[file] + if !ok { + index = uint64(len(p.fileInfo)) + p.fileInfos = append(p.fileInfos, &filePositions{file: file}) + if p.fileInfo == nil { + p.fileInfo = make(map[*token.File]uint64) + } + p.fileInfo[file] = index + } + // Record each needed offset. + info := p.fileInfos[index] + offset := uint64(file.Offset(pos)) + info.needed = append(info.needed, offset) + + return index, offset +} + +// pushDecl adds obj to the declaration work queue, if not already present. +func (p *iexporter) pushDecl(obj types.Object) { + // Package unsafe is known to the compiler and predeclared. + // Caller should not ask us to export it. + if obj.Pkg() == types.Unsafe { + panic("cannot export package unsafe") + } + + // Shallow export data: don't index decls from other packages. + if p.shallow && obj.Pkg() != p.localpkg { + return + } + + if _, ok := p.declIndex[obj]; ok { + return + } + + p.declIndex[obj] = ^uint64(0) // mark obj present in work queue + p.declTodo.pushTail(obj) +} + +// exportWriter handles writing out individual data section chunks.
+type exportWriter struct { + p *iexporter + + data intWriter + prevFile string + prevLine int64 + prevColumn int64 +} + +func (w *exportWriter) exportPath(pkg *types.Package) string { + if pkg == w.p.localpkg { + return "" + } + return pkg.Path() +} + +func (p *iexporter) doDecl(obj types.Object) { + if trace { + p.trace("exporting decl %v (%T)", obj, obj) + p.indent++ + defer func() { + p.indent-- + p.trace("=> %s", obj) + }() + } + w := p.newWriter() + + switch obj := obj.(type) { + case *types.Var: + w.tag('V') + w.pos(obj.Pos()) + w.typ(obj.Type(), obj.Pkg()) + + case *types.Func: + sig, _ := obj.Type().(*types.Signature) + if sig.Recv() != nil { + // We shouldn't see methods in the package scope, + // but the type checker may repair "func () F() {}" + // to "func (Invalid) F()" and then treat it like "func F()", + // so allow that. See golang/go#57729. + if sig.Recv().Type() != types.Typ[types.Invalid] { + panic(internalErrorf("unexpected method: %v", sig)) + } + } + + // Function. + if typeparams.ForSignature(sig).Len() == 0 { + w.tag('F') + } else { + w.tag('G') + } + w.pos(obj.Pos()) + // The tparam list of the function type is the declaration of the type + // params. So, write out the type params right now. Then those type params + // will be referenced via their type offset (via typOff) in all other + // places in the signature and function where they are used. + // + // While importing the type parameters, tparamList computes and records + // their export name, so that it can be later used when writing the index. 
+ if tparams := typeparams.ForSignature(sig); tparams.Len() > 0 { + w.tparamList(obj.Name(), tparams, obj.Pkg()) + } + w.signature(sig) + + case *types.Const: + w.tag('C') + w.pos(obj.Pos()) + w.value(obj.Type(), obj.Val()) + + case *types.TypeName: + t := obj.Type() + + if tparam, ok := t.(*typeparams.TypeParam); ok { + w.tag('P') + w.pos(obj.Pos()) + constraint := tparam.Constraint() + if p.version >= iexportVersionGo1_18 { + implicit := false + if iface, _ := constraint.(*types.Interface); iface != nil { + implicit = typeparams.IsImplicit(iface) + } + w.bool(implicit) + } + w.typ(constraint, obj.Pkg()) + break + } + + if obj.IsAlias() { + w.tag('A') + w.pos(obj.Pos()) + w.typ(t, obj.Pkg()) + break + } + + // Defined type. + named, ok := t.(*types.Named) + if !ok { + panic(internalErrorf("%s is not a defined type", t)) + } + + if typeparams.ForNamed(named).Len() == 0 { + w.tag('T') + } else { + w.tag('U') + } + w.pos(obj.Pos()) + + if typeparams.ForNamed(named).Len() > 0 { + // While importing the type parameters, tparamList computes and records + // their export name, so that it can be later used when writing the index. + w.tparamList(obj.Name(), typeparams.ForNamed(named), obj.Pkg()) + } + + underlying := obj.Type().Underlying() + w.typ(underlying, obj.Pkg()) + + if types.IsInterface(t) { + break + } + + n := named.NumMethods() + w.uint64(uint64(n)) + for i := 0; i < n; i++ { + m := named.Method(i) + w.pos(m.Pos()) + w.string(m.Name()) + sig, _ := m.Type().(*types.Signature) + + // Receiver type parameters are type arguments of the receiver type, so + // their name must be qualified before exporting recv. + if rparams := typeparams.RecvTypeParams(sig); rparams.Len() > 0 { + prefix := obj.Name() + "." 
+ m.Name() + for i := 0; i < rparams.Len(); i++ { + rparam := rparams.At(i) + name := tparamExportName(prefix, rparam) + w.p.tparamNames[rparam.Obj()] = name + } + } + w.param(sig.Recv()) + w.signature(sig) + } + + default: + panic(internalErrorf("unexpected object: %v", obj)) + } + + p.declIndex[obj] = w.flush() +} + +func (w *exportWriter) tag(tag byte) { + w.data.WriteByte(tag) +} + +func (w *exportWriter) pos(pos token.Pos) { + if w.p.shallow { + w.posV2(pos) + } else if w.p.version >= iexportVersionPosCol { + w.posV1(pos) + } else { + w.posV0(pos) + } +} + +// posV2 encoding (used only in shallow mode) records positions as +// (file, offset), where file is the index in the token.File table +// (which records the file name and newline offsets) and offset is a +// byte offset. It effectively ignores //line directives. +func (w *exportWriter) posV2(pos token.Pos) { + if pos == token.NoPos { + w.uint64(0) + return + } + file := w.p.fset.File(pos) // fset must be non-nil + index, offset := w.p.fileIndexAndOffset(file, pos) + w.uint64(1 + index) + w.uint64(offset) +} + +func (w *exportWriter) posV1(pos token.Pos) { + if w.p.fset == nil { + w.int64(0) + return + } + + p := w.p.fset.Position(pos) + file := p.Filename + line := int64(p.Line) + column := int64(p.Column) + + deltaColumn := (column - w.prevColumn) << 1 + deltaLine := (line - w.prevLine) << 1 + + if file != w.prevFile { + deltaLine |= 1 + } + if deltaLine != 0 { + deltaColumn |= 1 + } + + w.int64(deltaColumn) + if deltaColumn&1 != 0 { + w.int64(deltaLine) + if deltaLine&1 != 0 { + w.string(file) + } + } + + w.prevFile = file + w.prevLine = line + w.prevColumn = column +} + +func (w *exportWriter) posV0(pos token.Pos) { + if w.p.fset == nil { + w.int64(0) + return + } + + p := w.p.fset.Position(pos) + file := p.Filename + line := int64(p.Line) + + // When file is the same as the last position (common case), + // we can save a few bytes by delta encoding just the line + // number. 
+ // + // Note: Because data objects may be read out of order (or not + // at all), we can only apply delta encoding within a single + // object. This is handled implicitly by tracking prevFile and + // prevLine as fields of exportWriter. + + if file == w.prevFile { + delta := line - w.prevLine + w.int64(delta) + if delta == deltaNewFile { + w.int64(-1) + } + } else { + w.int64(deltaNewFile) + w.int64(line) // line >= 0 + w.string(file) + w.prevFile = file + } + w.prevLine = line +} + +func (w *exportWriter) pkg(pkg *types.Package) { + // Ensure any referenced packages are declared in the main index. + w.p.allPkgs[pkg] = true + + w.string(w.exportPath(pkg)) +} + +func (w *exportWriter) qualifiedType(obj *types.TypeName) { + name := w.p.exportName(obj) + + // Ensure any referenced declarations are written out too. + w.p.pushDecl(obj) + w.string(name) + w.pkg(obj.Pkg()) +} + +// TODO(rfindley): what does 'pkg' even mean here? It would be better to pass +// it in explicitly into signatures and structs that may use it for +// constructing fields. 
+func (w *exportWriter) typ(t types.Type, pkg *types.Package) { + w.data.uint64(w.p.typOff(t, pkg)) +} + +func (p *iexporter) newWriter() *exportWriter { + return &exportWriter{p: p} +} + +func (w *exportWriter) flush() uint64 { + off := uint64(w.p.data0.Len()) + io.Copy(&w.p.data0, &w.data) + return off +} + +func (p *iexporter) typOff(t types.Type, pkg *types.Package) uint64 { + off, ok := p.typIndex[t] + if !ok { + w := p.newWriter() + w.doTyp(t, pkg) + off = predeclReserved + w.flush() + p.typIndex[t] = off + } + return off +} + +func (w *exportWriter) startType(k itag) { + w.data.uint64(uint64(k)) +} + +func (w *exportWriter) doTyp(t types.Type, pkg *types.Package) { + if trace { + w.p.trace("exporting type %s (%T)", t, t) + w.p.indent++ + defer func() { + w.p.indent-- + w.p.trace("=> %s", t) + }() + } + switch t := t.(type) { + case *types.Named: + if targs := typeparams.NamedTypeArgs(t); targs.Len() > 0 { + w.startType(instanceType) + // TODO(rfindley): investigate if this position is correct, and if it + // matters. 
+ w.pos(t.Obj().Pos()) + w.typeList(targs, pkg) + w.typ(typeparams.NamedTypeOrigin(t), pkg) + return + } + w.startType(definedType) + w.qualifiedType(t.Obj()) + + case *typeparams.TypeParam: + w.startType(typeParamType) + w.qualifiedType(t.Obj()) + + case *types.Pointer: + w.startType(pointerType) + w.typ(t.Elem(), pkg) + + case *types.Slice: + w.startType(sliceType) + w.typ(t.Elem(), pkg) + + case *types.Array: + w.startType(arrayType) + w.uint64(uint64(t.Len())) + w.typ(t.Elem(), pkg) + + case *types.Chan: + w.startType(chanType) + // 1 RecvOnly; 2 SendOnly; 3 SendRecv + var dir uint64 + switch t.Dir() { + case types.RecvOnly: + dir = 1 + case types.SendOnly: + dir = 2 + case types.SendRecv: + dir = 3 + } + w.uint64(dir) + w.typ(t.Elem(), pkg) + + case *types.Map: + w.startType(mapType) + w.typ(t.Key(), pkg) + w.typ(t.Elem(), pkg) + + case *types.Signature: + w.startType(signatureType) + w.pkg(pkg) + w.signature(t) + + case *types.Struct: + w.startType(structType) + n := t.NumFields() + // Even for struct{} we must emit some qualifying package, because that's + // what the compiler does, and thus that's what the importer expects. + fieldPkg := pkg + if n > 0 { + fieldPkg = t.Field(0).Pkg() + } + if fieldPkg == nil { + // TODO(rfindley): improve this very hacky logic. + // + // The importer expects a package to be set for all struct types, even + // those with no fields. A better encoding might be to set NumFields + // before pkg. setPkg panics with a nil package, which may be possible + // to reach with invalid packages (and perhaps valid packages, too?), so + // (arbitrarily) set the localpkg if available. + // + // Alternatively, we may be able to simply guarantee that pkg != nil, by + // reconsidering the encoding of constant values. 
+ if w.p.shallow { + fieldPkg = w.p.localpkg + } else { + panic(internalErrorf("no package to set for empty struct")) + } + } + w.pkg(fieldPkg) + w.uint64(uint64(n)) + + for i := 0; i < n; i++ { + f := t.Field(i) + if w.p.shallow { + w.objectPath(f) + } + w.pos(f.Pos()) + w.string(f.Name()) // unexported fields implicitly qualified by prior setPkg + w.typ(f.Type(), fieldPkg) + w.bool(f.Anonymous()) + w.string(t.Tag(i)) // note (or tag) + } + + case *types.Interface: + w.startType(interfaceType) + w.pkg(pkg) + + n := t.NumEmbeddeds() + w.uint64(uint64(n)) + for i := 0; i < n; i++ { + ft := t.EmbeddedType(i) + tPkg := pkg + if named, _ := ft.(*types.Named); named != nil { + w.pos(named.Obj().Pos()) + } else { + w.pos(token.NoPos) + } + w.typ(ft, tPkg) + } + + // See comment for struct fields. In shallow mode we change the encoding + // for interface methods that are promoted from other packages. + + n = t.NumExplicitMethods() + w.uint64(uint64(n)) + for i := 0; i < n; i++ { + m := t.ExplicitMethod(i) + if w.p.shallow { + w.objectPath(m) + } + w.pos(m.Pos()) + w.string(m.Name()) + sig, _ := m.Type().(*types.Signature) + w.signature(sig) + } + + case *typeparams.Union: + w.startType(unionType) + nt := t.Len() + w.uint64(uint64(nt)) + for i := 0; i < nt; i++ { + term := t.Term(i) + w.bool(term.Tilde()) + w.typ(term.Type(), pkg) + } + + default: + panic(internalErrorf("unexpected type: %v, %v", t, reflect.TypeOf(t))) + } +} + +// objectPath writes the package and objectPath to use to look up obj in a +// different package, when encoding in "shallow" mode. +// +// When doing a shallow import, the importer creates only the local package, +// and requests package symbols for dependencies from the client. +// However, certain types defined in the local package may hold objects defined +// (perhaps deeply) within another package. 
+// +// For example, consider the following: +// +// package a +// func F() chan * map[string] struct { X int } +// +// package b +// import "a" +// var B = a.F() +// +// In this example, the type of b.B holds fields defined in package a. +// In order to have the correct canonical objects for the field defined in the +// type of B, they are encoded as objectPaths and later looked up in the +// importer. The same problem applies to interface methods. +func (w *exportWriter) objectPath(obj types.Object) { + if obj.Pkg() == nil || obj.Pkg() == w.p.localpkg { + // obj.Pkg() may be nil for the builtin error.Error. + // In this case, or if obj is declared in the local package, no need to + // encode. + w.string("") + return + } + objectPath, err := w.p.objectpathEncoder().For(obj) + if err != nil { + // Fall back to the empty string, which will cause the importer to create a + // new object, which matches earlier behavior. Creating a new object is + // sufficient for many purposes (such as type checking), but causes certain + // references algorithms to fail (golang/go#60819). However, we didn't + // notice this problem during months of gopls@v0.12.0 testing. + // + // TODO(golang/go#61674): this workaround is insufficient, as in the case + // where the field forwarded from an instantiated type that may not appear + // in the export data of the original package: + // + // // package a + // type A[P any] struct{ F P } + // + // // package b + // type B a.A[int] + // + // We need to update references algorithms not to depend on this + // de-duplication, at which point we may want to simply remove the + // workaround here. 
+ w.string("") + return + } + w.string(string(objectPath)) + w.pkg(obj.Pkg()) +} + +func (w *exportWriter) signature(sig *types.Signature) { + w.paramList(sig.Params()) + w.paramList(sig.Results()) + if sig.Params().Len() > 0 { + w.bool(sig.Variadic()) + } +} + +func (w *exportWriter) typeList(ts *typeparams.TypeList, pkg *types.Package) { + w.uint64(uint64(ts.Len())) + for i := 0; i < ts.Len(); i++ { + w.typ(ts.At(i), pkg) + } +} + +func (w *exportWriter) tparamList(prefix string, list *typeparams.TypeParamList, pkg *types.Package) { + ll := uint64(list.Len()) + w.uint64(ll) + for i := 0; i < list.Len(); i++ { + tparam := list.At(i) + // Set the type parameter exportName before exporting its type. + exportName := tparamExportName(prefix, tparam) + w.p.tparamNames[tparam.Obj()] = exportName + w.typ(list.At(i), pkg) + } +} + +const blankMarker = "$" + +// tparamExportName returns the 'exported' name of a type parameter, which +// differs from its actual object name: it is prefixed with a qualifier, and +// blank type parameter names are disambiguated by their index in the type +// parameter list. +func tparamExportName(prefix string, tparam *typeparams.TypeParam) string { + assert(prefix != "") + name := tparam.Obj().Name() + if name == "_" { + name = blankMarker + strconv.Itoa(tparam.Index()) + } + return prefix + "." + name +} + +// tparamName returns the real name of a type parameter, after stripping its +// qualifying prefix and reverting blank-name encoding. See tparamExportName +// for details. +func tparamName(exportName string) string { + // Remove the "path" from the type param name that makes it unique. 
+ ix := strings.LastIndex(exportName, ".") + if ix < 0 { + errorf("malformed type parameter export name %s: missing prefix", exportName) + } + name := exportName[ix+1:] + if strings.HasPrefix(name, blankMarker) { + return "_" + } + return name +} + +func (w *exportWriter) paramList(tup *types.Tuple) { + n := tup.Len() + w.uint64(uint64(n)) + for i := 0; i < n; i++ { + w.param(tup.At(i)) + } +} + +func (w *exportWriter) param(obj types.Object) { + w.pos(obj.Pos()) + w.localIdent(obj) + w.typ(obj.Type(), obj.Pkg()) +} + +func (w *exportWriter) value(typ types.Type, v constant.Value) { + w.typ(typ, nil) + if w.p.version >= iexportVersionGo1_18 { + w.int64(int64(v.Kind())) + } + + if v.Kind() == constant.Unknown { + // golang/go#60605: treat unknown constant values as if they have invalid type + // + // This loses some fidelity over the package type-checked from source, but that + // is acceptable. + // + // TODO(rfindley): we should switch on the recorded constant kind rather + // than the constant type + return + } + + switch b := typ.Underlying().(*types.Basic); b.Info() & types.IsConstType { + case types.IsBoolean: + w.bool(constant.BoolVal(v)) + case types.IsInteger: + var i big.Int + if i64, exact := constant.Int64Val(v); exact { + i.SetInt64(i64) + } else if ui64, exact := constant.Uint64Val(v); exact { + i.SetUint64(ui64) + } else { + i.SetString(v.ExactString(), 10) + } + w.mpint(&i, typ) + case types.IsFloat: + f := constantToFloat(v) + w.mpfloat(f, typ) + case types.IsComplex: + w.mpfloat(constantToFloat(constant.Real(v)), typ) + w.mpfloat(constantToFloat(constant.Imag(v)), typ) + case types.IsString: + w.string(constant.StringVal(v)) + default: + if b.Kind() == types.Invalid { + // package contains type errors + break + } + panic(internalErrorf("unexpected type %v (%v)", typ, typ.Underlying())) + } +} + +// constantToFloat converts a constant.Value with kind constant.Float to a +// big.Float. 
+func constantToFloat(x constant.Value) *big.Float { + x = constant.ToFloat(x) + // Use the same floating-point precision (512) as cmd/compile + // (see Mpprec in cmd/compile/internal/gc/mpfloat.go). + const mpprec = 512 + var f big.Float + f.SetPrec(mpprec) + if v, exact := constant.Float64Val(x); exact { + // float64 + f.SetFloat64(v) + } else if num, denom := constant.Num(x), constant.Denom(x); num.Kind() == constant.Int { + // TODO(gri): add big.Rat accessor to constant.Value. + n := valueToRat(num) + d := valueToRat(denom) + f.SetRat(n.Quo(n, d)) + } else { + // Value too large to represent as a fraction => inaccessible. + // TODO(gri): add big.Float accessor to constant.Value. + _, ok := f.SetString(x.ExactString()) + assert(ok) + } + return &f +} + +func valueToRat(x constant.Value) *big.Rat { + // Convert little-endian to big-endian. + // I can't believe this is necessary. + bytes := constant.Bytes(x) + for i := 0; i < len(bytes)/2; i++ { + bytes[i], bytes[len(bytes)-1-i] = bytes[len(bytes)-1-i], bytes[i] + } + return new(big.Rat).SetInt(new(big.Int).SetBytes(bytes)) +} + +// mpint exports a multi-precision integer. +// +// For unsigned types, small values are written out as a single +// byte. Larger values are written out as a length-prefixed big-endian +// byte string, where the length prefix is encoded as its complement. +// For example, bytes 0, 1, and 2 directly represent the integer +// values 0, 1, and 2; while bytes 255, 254, and 253 indicate that a 1-, +// 2-, or 3-byte big-endian string follows. +// +// Encoding for signed types uses the same general approach as for +// unsigned types, except small values use zig-zag encoding and the +// bottom bit of the length prefix byte for large values is reserved as a +// sign bit. +// +// The exact boundary between small and large encodings varies +// according to the maximum number of bytes needed to encode a value +// of type typ. As a special case, 8-bit types are always encoded as a +// single byte.
+// +// TODO(mdempsky): Is this level of complexity really worthwhile? +func (w *exportWriter) mpint(x *big.Int, typ types.Type) { + basic, ok := typ.Underlying().(*types.Basic) + if !ok { + panic(internalErrorf("unexpected type %v (%T)", typ.Underlying(), typ.Underlying())) + } + + signed, maxBytes := intSize(basic) + + negative := x.Sign() < 0 + if !signed && negative { + panic(internalErrorf("negative unsigned integer; type %v, value %v", typ, x)) + } + + b := x.Bytes() + if len(b) > 0 && b[0] == 0 { + panic(internalErrorf("leading zeros")) + } + if uint(len(b)) > maxBytes { + panic(internalErrorf("bad mpint length: %d > %d (type %v, value %v)", len(b), maxBytes, typ, x)) + } + + maxSmall := 256 - maxBytes + if signed { + maxSmall = 256 - 2*maxBytes + } + if maxBytes == 1 { + maxSmall = 256 + } + + // Check if x can use small value encoding. + if len(b) <= 1 { + var ux uint + if len(b) == 1 { + ux = uint(b[0]) + } + if signed { + ux <<= 1 + if negative { + ux-- + } + } + if ux < maxSmall { + w.data.WriteByte(byte(ux)) + return + } + } + + n := 256 - uint(len(b)) + if signed { + n = 256 - 2*uint(len(b)) + if negative { + n |= 1 + } + } + if n < maxSmall || n >= 256 { + panic(internalErrorf("encoding mistake: %d, %v, %v => %d", len(b), signed, negative, n)) + } + + w.data.WriteByte(byte(n)) + w.data.Write(b) +} + +// mpfloat exports a multi-precision floating point number. +// +// The number's value is decomposed into mantissa × 2**exponent, where +// mantissa is an integer. The value is written out as mantissa (as a +// multi-precision integer) and then the exponent, except exponent is +// omitted if mantissa is zero. +func (w *exportWriter) mpfloat(f *big.Float, typ types.Type) { + if f.IsInf() { + panic("infinite constant") + } + + // Break into f = mant × 2**exp, with 0.5 <= mant < 1. + var mant big.Float + exp := int64(f.MantExp(&mant)) + + // Scale so that mant is an integer. 
+ prec := mant.MinPrec() + mant.SetMantExp(&mant, int(prec)) + exp -= int64(prec) + + manti, acc := mant.Int(nil) + if acc != big.Exact { + panic(internalErrorf("mantissa scaling failed for %f (%s)", f, acc)) + } + w.mpint(manti, typ) + if manti.Sign() != 0 { + w.int64(exp) + } +} + +func (w *exportWriter) bool(b bool) bool { + var x uint64 + if b { + x = 1 + } + w.uint64(x) + return b +} + +func (w *exportWriter) int64(x int64) { w.data.int64(x) } +func (w *exportWriter) uint64(x uint64) { w.data.uint64(x) } +func (w *exportWriter) string(s string) { w.uint64(w.p.stringOff(s)) } + +func (w *exportWriter) localIdent(obj types.Object) { + // Anonymous parameters. + if obj == nil { + w.string("") + return + } + + name := obj.Name() + if name == "_" { + w.string("_") + return + } + + w.string(name) +} + +type intWriter struct { + bytes.Buffer +} + +func (w *intWriter) int64(x int64) { + var buf [binary.MaxVarintLen64]byte + n := binary.PutVarint(buf[:], x) + w.Write(buf[:n]) +} + +func (w *intWriter) uint64(x uint64) { + var buf [binary.MaxVarintLen64]byte + n := binary.PutUvarint(buf[:], x) + w.Write(buf[:n]) +} + +func assert(cond bool) { + if !cond { + panic("internal error: assertion failed") + } +} + +// The below is copied from go/src/cmd/compile/internal/gc/syntax.go. + +// objQueue is a FIFO queue of types.Object. The zero value of objQueue is +// a ready-to-use empty queue. +type objQueue struct { + ring []types.Object + head, tail int +} + +// empty returns true if q contains no Nodes. +func (q *objQueue) empty() bool { + return q.head == q.tail +} + +// pushTail appends n to the tail of the queue. +func (q *objQueue) pushTail(obj types.Object) { + if len(q.ring) == 0 { + q.ring = make([]types.Object, 16) + } else if q.head+len(q.ring) == q.tail { + // Grow the ring. + nring := make([]types.Object, len(q.ring)*2) + // Copy the old elements. 
+ part := q.ring[q.head%len(q.ring):] + if q.tail-q.head <= len(part) { + part = part[:q.tail-q.head] + copy(nring, part) + } else { + pos := copy(nring, part) + copy(nring[pos:], q.ring[:q.tail%len(q.ring)]) + } + q.ring, q.head, q.tail = nring, 0, q.tail-q.head + } + + q.ring[q.tail%len(q.ring)] = obj + q.tail++ +} + +// popHead pops a node from the head of the queue. It panics if q is empty. +func (q *objQueue) popHead() types.Object { + if q.empty() { + panic("dequeue empty") + } + obj := q.ring[q.head%len(q.ring)] + q.head++ + return obj +} + +// internalError represents an error generated inside this package. +type internalError string + +func (e internalError) Error() string { return "gcimporter: " + string(e) } + +// TODO(adonovan): make this call panic, so that it's symmetric with errorf. +// Otherwise it's easy to forget to do anything with the error. +// +// TODO(adonovan): also, consider switching the names "errorf" and +// "internalErrorf" as the former is used for bugs, whose cause is +// internal inconsistency, whereas the latter is used for ordinary +// situations like bad input, whose cause is external. +func internalErrorf(format string, args ...interface{}) error { + return internalError(fmt.Sprintf(format, args...)) +} diff --git a/cluster-autoscaler/vendor/golang.org/x/tools/internal/gcimporter/iimport.go b/cluster-autoscaler/vendor/golang.org/x/tools/internal/gcimporter/iimport.go new file mode 100644 index 000000000000..8e64cf644fce --- /dev/null +++ b/cluster-autoscaler/vendor/golang.org/x/tools/internal/gcimporter/iimport.go @@ -0,0 +1,1083 @@ +// Copyright 2018 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// Indexed package import. +// See cmd/compile/internal/gc/iexport.go for the export data format. + +// This file is a copy of $GOROOT/src/go/internal/gcimporter/iimport.go. 
+ +package gcimporter + +import ( + "bytes" + "encoding/binary" + "fmt" + "go/constant" + "go/token" + "go/types" + "io" + "math/big" + "sort" + "strings" + + "golang.org/x/tools/go/types/objectpath" + "golang.org/x/tools/internal/typeparams" +) + +type intReader struct { + *bytes.Reader + path string +} + +func (r *intReader) int64() int64 { + i, err := binary.ReadVarint(r.Reader) + if err != nil { + errorf("import %q: read varint error: %v", r.path, err) + } + return i +} + +func (r *intReader) uint64() uint64 { + i, err := binary.ReadUvarint(r.Reader) + if err != nil { + errorf("import %q: read varint error: %v", r.path, err) + } + return i +} + +// Keep this in sync with constants in iexport.go. +const ( + iexportVersionGo1_11 = 0 + iexportVersionPosCol = 1 + iexportVersionGo1_18 = 2 + iexportVersionGenerics = 2 + + iexportVersionCurrent = 2 +) + +type ident struct { + pkg *types.Package + name string +} + +const predeclReserved = 32 + +type itag uint64 + +const ( + // Types + definedType itag = iota + pointerType + sliceType + arrayType + chanType + mapType + signatureType + structType + interfaceType + typeParamType + instanceType + unionType +) + +// IImportData imports a package from the serialized package data +// and returns 0 and a reference to the package. +// If the export data version is not recognized or the format is otherwise +// compromised, an error is returned. +func IImportData(fset *token.FileSet, imports map[string]*types.Package, data []byte, path string) (int, *types.Package, error) { + pkgs, err := iimportCommon(fset, GetPackagesFromMap(imports), data, false, path, false, nil) + if err != nil { + return 0, nil, err + } + return 0, pkgs[0], nil +} + +// IImportBundle imports a set of packages from the serialized package bundle. 
+func IImportBundle(fset *token.FileSet, imports map[string]*types.Package, data []byte) ([]*types.Package, error) { + return iimportCommon(fset, GetPackagesFromMap(imports), data, true, "", false, nil) +} + +// A GetPackagesFunc function obtains the non-nil symbols for a set of +// packages, creating and recursively importing them as needed. An +// implementation should store each package symbol is in the Pkg +// field of the items array. +// +// Any error causes importing to fail. This can be used to quickly read +// the import manifest of an export data file without fully decoding it. +type GetPackagesFunc = func(items []GetPackagesItem) error + +// A GetPackagesItem is a request from the importer for the package +// symbol of the specified name and path. +type GetPackagesItem struct { + Name, Path string + Pkg *types.Package // to be filled in by GetPackagesFunc call + + // private importer state + pathOffset uint64 + nameIndex map[string]uint64 +} + +// GetPackagesFromMap returns a GetPackagesFunc that retrieves +// packages from the given map of package path to package. +// +// The returned function may mutate m: each requested package that is not +// found is created with types.NewPackage and inserted into m. 
+func GetPackagesFromMap(m map[string]*types.Package) GetPackagesFunc { + return func(items []GetPackagesItem) error { + for i, item := range items { + pkg, ok := m[item.Path] + if !ok { + pkg = types.NewPackage(item.Path, item.Name) + m[item.Path] = pkg + } + items[i].Pkg = pkg + } + return nil + } +} + +func iimportCommon(fset *token.FileSet, getPackages GetPackagesFunc, data []byte, bundle bool, path string, shallow bool, reportf ReportFunc) (pkgs []*types.Package, err error) { + const currentVersion = iexportVersionCurrent + version := int64(-1) + if !debug { + defer func() { + if e := recover(); e != nil { + if bundle { + err = fmt.Errorf("%v", e) + } else if version > currentVersion { + err = fmt.Errorf("cannot import %q (%v), export data is newer version - update tool", path, e) + } else { + err = fmt.Errorf("internal error while importing %q (%v); please report an issue", path, e) + } + } + }() + } + + r := &intReader{bytes.NewReader(data), path} + + if bundle { + if v := r.uint64(); v != bundleVersion { + errorf("unknown bundle format version %d", v) + } + } + + version = int64(r.uint64()) + switch version { + case iexportVersionGo1_18, iexportVersionPosCol, iexportVersionGo1_11: + default: + if version > iexportVersionGo1_18 { + errorf("unstable iexport format version %d, just rebuild compiler and std library", version) + } else { + errorf("unknown iexport format version %d", version) + } + } + + sLen := int64(r.uint64()) + var fLen int64 + var fileOffset []uint64 + if shallow { + // Shallow mode uses a different position encoding. 
+ fLen = int64(r.uint64()) + fileOffset = make([]uint64, r.uint64()) + for i := range fileOffset { + fileOffset[i] = r.uint64() + } + } + dLen := int64(r.uint64()) + + whence, _ := r.Seek(0, io.SeekCurrent) + stringData := data[whence : whence+sLen] + fileData := data[whence+sLen : whence+sLen+fLen] + declData := data[whence+sLen+fLen : whence+sLen+fLen+dLen] + r.Seek(sLen+fLen+dLen, io.SeekCurrent) + + p := iimporter{ + version: int(version), + ipath: path, + shallow: shallow, + reportf: reportf, + + stringData: stringData, + stringCache: make(map[uint64]string), + fileOffset: fileOffset, + fileData: fileData, + fileCache: make([]*token.File, len(fileOffset)), + pkgCache: make(map[uint64]*types.Package), + + declData: declData, + pkgIndex: make(map[*types.Package]map[string]uint64), + typCache: make(map[uint64]types.Type), + // Separate map for typeparams, keyed by their package and unique + // name. + tparamIndex: make(map[ident]types.Type), + + fake: fakeFileSet{ + fset: fset, + files: make(map[string]*fileInfo), + }, + } + defer p.fake.setLines() // set lines for files in fset + + for i, pt := range predeclared() { + p.typCache[uint64(i)] = pt + } + + // Gather the relevant packages from the manifest. + items := make([]GetPackagesItem, r.uint64()) + for i := range items { + pkgPathOff := r.uint64() + pkgPath := p.stringAt(pkgPathOff) + pkgName := p.stringAt(r.uint64()) + _ = r.uint64() // package height; unused by go/types + + if pkgPath == "" { + pkgPath = path + } + items[i].Name = pkgName + items[i].Path = pkgPath + items[i].pathOffset = pkgPathOff + + // Read index for package. + nameIndex := make(map[string]uint64) + nSyms := r.uint64() + // In shallow mode, only the current package (i=0) has an index. 
+ assert(!(shallow && i > 0 && nSyms != 0)) + for ; nSyms > 0; nSyms-- { + name := p.stringAt(r.uint64()) + nameIndex[name] = r.uint64() + } + + items[i].nameIndex = nameIndex + } + + // Request packages all at once from the client, + // enabling a parallel implementation. + if err := getPackages(items); err != nil { + return nil, err // don't wrap this error + } + + // Check the results and complete the index. + pkgList := make([]*types.Package, len(items)) + for i, item := range items { + pkg := item.Pkg + if pkg == nil { + errorf("internal error: getPackages returned nil package for %q", item.Path) + } else if pkg.Path() != item.Path { + errorf("internal error: getPackages returned wrong path %q, want %q", pkg.Path(), item.Path) + } else if pkg.Name() != item.Name { + errorf("internal error: getPackages returned wrong name %s for package %q, want %s", pkg.Name(), item.Path, item.Name) + } + p.pkgCache[item.pathOffset] = pkg + p.pkgIndex[pkg] = item.nameIndex + pkgList[i] = pkg + } + + if bundle { + pkgs = make([]*types.Package, r.uint64()) + for i := range pkgs { + pkg := p.pkgAt(r.uint64()) + imps := make([]*types.Package, r.uint64()) + for j := range imps { + imps[j] = p.pkgAt(r.uint64()) + } + pkg.SetImports(imps) + pkgs[i] = pkg + } + } else { + if len(pkgList) == 0 { + errorf("no packages found for %s", path) + panic("unreachable") + } + pkgs = pkgList[:1] + + // record all referenced packages as imports + list := append(([]*types.Package)(nil), pkgList[1:]...) + sort.Sort(byPath(list)) + pkgs[0].SetImports(list) + } + + for _, pkg := range pkgs { + if pkg.Complete() { + continue + } + + names := make([]string, 0, len(p.pkgIndex[pkg])) + for name := range p.pkgIndex[pkg] { + names = append(names, name) + } + sort.Strings(names) + for _, name := range names { + p.doDecl(pkg, name) + } + + // package was imported completely and without errors + pkg.MarkComplete() + } + + // SetConstraint can't be called if the constraint type is not yet complete. 
+ // When type params are created in the 'P' case of (*importReader).obj(), + // the associated constraint type may not be complete due to recursion. + // Therefore, we defer calling SetConstraint there, and call it here instead + // after all types are complete. + for _, d := range p.later { + typeparams.SetTypeParamConstraint(d.t, d.constraint) + } + + for _, typ := range p.interfaceList { + typ.Complete() + } + + // Workaround for golang/go#61561. See the doc for instanceList for details. + for _, typ := range p.instanceList { + if iface, _ := typ.Underlying().(*types.Interface); iface != nil { + iface.Complete() + } + } + + return pkgs, nil +} + +type setConstraintArgs struct { + t *typeparams.TypeParam + constraint types.Type +} + +type iimporter struct { + version int + ipath string + + shallow bool + reportf ReportFunc // if non-nil, used to report bugs + + stringData []byte + stringCache map[uint64]string + fileOffset []uint64 // fileOffset[i] is offset in fileData for info about file encoded as i + fileData []byte + fileCache []*token.File // memoized decoding of file encoded as i + pkgCache map[uint64]*types.Package + + declData []byte + pkgIndex map[*types.Package]map[string]uint64 + typCache map[uint64]types.Type + tparamIndex map[ident]types.Type + + fake fakeFileSet + interfaceList []*types.Interface + + // Workaround for the go/types bug golang/go#61561: instances produced during + // instantiation may contain incomplete interfaces. Here we only complete the + // underlying type of the instance, which is the most common case but doesn't + // handle parameterized interface literals defined deeper in the type. 
+ instanceList []types.Type // instances for later completion (see golang/go#61561) + + // Arguments for calls to SetConstraint that are deferred due to recursive types + later []setConstraintArgs + + indent int // for tracing support +} + +func (p *iimporter) trace(format string, args ...interface{}) { + if !trace { + // Call sites should also be guarded, but having this check here allows + // easily enabling/disabling debug trace statements. + return + } + fmt.Printf(strings.Repeat("..", p.indent)+format+"\n", args...) +} + +func (p *iimporter) doDecl(pkg *types.Package, name string) { + if debug { + p.trace("import decl %s", name) + p.indent++ + defer func() { + p.indent-- + p.trace("=> %s", name) + }() + } + // See if we've already imported this declaration. + if obj := pkg.Scope().Lookup(name); obj != nil { + return + } + + off, ok := p.pkgIndex[pkg][name] + if !ok { + // In deep mode, the index should be complete. In shallow + // mode, we should have already recursively loaded necessary + // dependencies so the above Lookup succeeds. 
+ errorf("%v.%v not in index", pkg, name) + } + + r := &importReader{p: p, currPkg: pkg} + r.declReader.Reset(p.declData[off:]) + + r.obj(name) +} + +func (p *iimporter) stringAt(off uint64) string { + if s, ok := p.stringCache[off]; ok { + return s + } + + slen, n := binary.Uvarint(p.stringData[off:]) + if n <= 0 { + errorf("varint failed") + } + spos := off + uint64(n) + s := string(p.stringData[spos : spos+slen]) + p.stringCache[off] = s + return s +} + +func (p *iimporter) fileAt(index uint64) *token.File { + file := p.fileCache[index] + if file == nil { + off := p.fileOffset[index] + file = p.decodeFile(intReader{bytes.NewReader(p.fileData[off:]), p.ipath}) + p.fileCache[index] = file + } + return file +} + +func (p *iimporter) decodeFile(rd intReader) *token.File { + filename := p.stringAt(rd.uint64()) + size := int(rd.uint64()) + file := p.fake.fset.AddFile(filename, -1, size) + + // SetLines requires a nondecreasing sequence. + // Because it is common for clients to derive the interval + // [start, start+len(name)] from a start position, and we + // want to ensure that the end offset is on the same line, + // we fill in the gaps of the sparse encoding with values + // that strictly increase by the largest possible amount. + // This allows us to avoid having to record the actual end + // offset of each needed line. + + lines := make([]int, int(rd.uint64())) + var index, offset int + for i, n := 0, int(rd.uint64()); i < n; i++ { + index += int(rd.uint64()) + offset += int(rd.uint64()) + lines[index] = offset + + // Ensure monotonicity between points. + for j := index - 1; j > 0 && lines[j] == 0; j-- { + lines[j] = lines[j+1] - 1 + } + } + + // Ensure monotonicity after last point. 
+ for j := len(lines) - 1; j > 0 && lines[j] == 0; j-- { + size-- + lines[j] = size + } + + if !file.SetLines(lines) { + errorf("SetLines failed: %d", lines) // can't happen + } + return file +} + +func (p *iimporter) pkgAt(off uint64) *types.Package { + if pkg, ok := p.pkgCache[off]; ok { + return pkg + } + path := p.stringAt(off) + errorf("missing package %q in %q", path, p.ipath) + return nil +} + +func (p *iimporter) typAt(off uint64, base *types.Named) types.Type { + if t, ok := p.typCache[off]; ok && canReuse(base, t) { + return t + } + + if off < predeclReserved { + errorf("predeclared type missing from cache: %v", off) + } + + r := &importReader{p: p} + r.declReader.Reset(p.declData[off-predeclReserved:]) + t := r.doType(base) + + if canReuse(base, t) { + p.typCache[off] = t + } + return t +} + +// canReuse reports whether the type rhs on the RHS of the declaration for def +// may be re-used. +// +// Specifically, if def is non-nil and rhs is an interface type with methods, it +// may not be re-used because we have a convention of setting the receiver type +// for interface methods to def. +func canReuse(def *types.Named, rhs types.Type) bool { + if def == nil { + return true + } + iface, _ := rhs.(*types.Interface) + if iface == nil { + return true + } + // Don't use iface.Empty() here as iface may not be complete. 
+ return iface.NumEmbeddeds() == 0 && iface.NumExplicitMethods() == 0 +} + +type importReader struct { + p *iimporter + declReader bytes.Reader + currPkg *types.Package + prevFile string + prevLine int64 + prevColumn int64 +} + +func (r *importReader) obj(name string) { + tag := r.byte() + pos := r.pos() + + switch tag { + case 'A': + typ := r.typ() + + r.declare(types.NewTypeName(pos, r.currPkg, name, typ)) + + case 'C': + typ, val := r.value() + + r.declare(types.NewConst(pos, r.currPkg, name, typ, val)) + + case 'F', 'G': + var tparams []*typeparams.TypeParam + if tag == 'G' { + tparams = r.tparamList() + } + sig := r.signature(nil, nil, tparams) + r.declare(types.NewFunc(pos, r.currPkg, name, sig)) + + case 'T', 'U': + // Types can be recursive. We need to setup a stub + // declaration before recursing. + obj := types.NewTypeName(pos, r.currPkg, name, nil) + named := types.NewNamed(obj, nil, nil) + // Declare obj before calling r.tparamList, so the new type name is recognized + // if used in the constraint of one of its own typeparams (see #48280). + r.declare(obj) + if tag == 'U' { + tparams := r.tparamList() + typeparams.SetForNamed(named, tparams) + } + + underlying := r.p.typAt(r.uint64(), named).Underlying() + named.SetUnderlying(underlying) + + if !isInterface(underlying) { + for n := r.uint64(); n > 0; n-- { + mpos := r.pos() + mname := r.ident() + recv := r.param() + + // If the receiver has any targs, set those as the + // rparams of the method (since those are the + // typeparams being used in the method sig/body). 
+ base := baseType(recv.Type()) + assert(base != nil) + targs := typeparams.NamedTypeArgs(base) + var rparams []*typeparams.TypeParam + if targs.Len() > 0 { + rparams = make([]*typeparams.TypeParam, targs.Len()) + for i := range rparams { + rparams[i] = targs.At(i).(*typeparams.TypeParam) + } + } + msig := r.signature(recv, rparams, nil) + + named.AddMethod(types.NewFunc(mpos, r.currPkg, mname, msig)) + } + } + + case 'P': + // We need to "declare" a typeparam in order to have a name that + // can be referenced recursively (if needed) in the type param's + // bound. + if r.p.version < iexportVersionGenerics { + errorf("unexpected type param type") + } + name0 := tparamName(name) + tn := types.NewTypeName(pos, r.currPkg, name0, nil) + t := typeparams.NewTypeParam(tn, nil) + + // To handle recursive references to the typeparam within its + // bound, save the partial type in tparamIndex before reading the bounds. + id := ident{r.currPkg, name} + r.p.tparamIndex[id] = t + var implicit bool + if r.p.version >= iexportVersionGo1_18 { + implicit = r.bool() + } + constraint := r.typ() + if implicit { + iface, _ := constraint.(*types.Interface) + if iface == nil { + errorf("non-interface constraint marked implicit") + } + typeparams.MarkImplicit(iface) + } + // The constraint type may not be complete, if we + // are in the middle of a type recursion involving type + // constraints. So, we defer SetConstraint until we have + // completely set up all types in ImportData. + r.p.later = append(r.p.later, setConstraintArgs{t: t, constraint: constraint}) + + case 'V': + typ := r.typ() + + r.declare(types.NewVar(pos, r.currPkg, name, typ)) + + default: + errorf("unexpected tag: %v", tag) + } +} + +func (r *importReader) declare(obj types.Object) { + obj.Pkg().Scope().Insert(obj) +} + +func (r *importReader) value() (typ types.Type, val constant.Value) { + typ = r.typ() + if r.p.version >= iexportVersionGo1_18 { + // TODO: add support for using the kind. 
+ _ = constant.Kind(r.int64()) + } + + switch b := typ.Underlying().(*types.Basic); b.Info() & types.IsConstType { + case types.IsBoolean: + val = constant.MakeBool(r.bool()) + + case types.IsString: + val = constant.MakeString(r.string()) + + case types.IsInteger: + var x big.Int + r.mpint(&x, b) + val = constant.Make(&x) + + case types.IsFloat: + val = r.mpfloat(b) + + case types.IsComplex: + re := r.mpfloat(b) + im := r.mpfloat(b) + val = constant.BinaryOp(re, token.ADD, constant.MakeImag(im)) + + default: + if b.Kind() == types.Invalid { + val = constant.MakeUnknown() + return + } + errorf("unexpected type %v", typ) // panics + panic("unreachable") + } + + return +} + +func intSize(b *types.Basic) (signed bool, maxBytes uint) { + if (b.Info() & types.IsUntyped) != 0 { + return true, 64 + } + + switch b.Kind() { + case types.Float32, types.Complex64: + return true, 3 + case types.Float64, types.Complex128: + return true, 7 + } + + signed = (b.Info() & types.IsUnsigned) == 0 + switch b.Kind() { + case types.Int8, types.Uint8: + maxBytes = 1 + case types.Int16, types.Uint16: + maxBytes = 2 + case types.Int32, types.Uint32: + maxBytes = 4 + default: + maxBytes = 8 + } + + return +} + +func (r *importReader) mpint(x *big.Int, typ *types.Basic) { + signed, maxBytes := intSize(typ) + + maxSmall := 256 - maxBytes + if signed { + maxSmall = 256 - 2*maxBytes + } + if maxBytes == 1 { + maxSmall = 256 + } + + n, _ := r.declReader.ReadByte() + if uint(n) < maxSmall { + v := int64(n) + if signed { + v >>= 1 + if n&1 != 0 { + v = ^v + } + } + x.SetInt64(v) + return + } + + v := -n + if signed { + v = -(n &^ 1) >> 1 + } + if v < 1 || uint(v) > maxBytes { + errorf("weird decoding: %v, %v => %v", n, signed, v) + } + b := make([]byte, v) + io.ReadFull(&r.declReader, b) + x.SetBytes(b) + if signed && n&1 != 0 { + x.Neg(x) + } +} + +func (r *importReader) mpfloat(typ *types.Basic) constant.Value { + var mant big.Int + r.mpint(&mant, typ) + var f big.Float + f.SetInt(&mant) + if 
f.Sign() != 0 { + f.SetMantExp(&f, int(r.int64())) + } + return constant.Make(&f) +} + +func (r *importReader) ident() string { + return r.string() +} + +func (r *importReader) qualifiedIdent() (*types.Package, string) { + name := r.string() + pkg := r.pkg() + return pkg, name +} + +func (r *importReader) pos() token.Pos { + if r.p.shallow { + // precise offsets are encoded only in shallow mode + return r.posv2() + } + if r.p.version >= iexportVersionPosCol { + r.posv1() + } else { + r.posv0() + } + + if r.prevFile == "" && r.prevLine == 0 && r.prevColumn == 0 { + return token.NoPos + } + return r.p.fake.pos(r.prevFile, int(r.prevLine), int(r.prevColumn)) +} + +func (r *importReader) posv0() { + delta := r.int64() + if delta != deltaNewFile { + r.prevLine += delta + } else if l := r.int64(); l == -1 { + r.prevLine += deltaNewFile + } else { + r.prevFile = r.string() + r.prevLine = l + } +} + +func (r *importReader) posv1() { + delta := r.int64() + r.prevColumn += delta >> 1 + if delta&1 != 0 { + delta = r.int64() + r.prevLine += delta >> 1 + if delta&1 != 0 { + r.prevFile = r.string() + } + } +} + +func (r *importReader) posv2() token.Pos { + file := r.uint64() + if file == 0 { + return token.NoPos + } + tf := r.p.fileAt(file - 1) + return tf.Pos(int(r.uint64())) +} + +func (r *importReader) typ() types.Type { + return r.p.typAt(r.uint64(), nil) +} + +func isInterface(t types.Type) bool { + _, ok := t.(*types.Interface) + return ok +} + +func (r *importReader) pkg() *types.Package { return r.p.pkgAt(r.uint64()) } +func (r *importReader) string() string { return r.p.stringAt(r.uint64()) } + +func (r *importReader) doType(base *types.Named) (res types.Type) { + k := r.kind() + if debug { + r.p.trace("importing type %d (base: %s)", k, base) + r.p.indent++ + defer func() { + r.p.indent-- + r.p.trace("=> %s", res) + }() + } + switch k { + default: + errorf("unexpected kind tag in %q: %v", r.p.ipath, k) + return nil + + case definedType: + pkg, name := r.qualifiedIdent() 
+ r.p.doDecl(pkg, name) + return pkg.Scope().Lookup(name).(*types.TypeName).Type() + case pointerType: + return types.NewPointer(r.typ()) + case sliceType: + return types.NewSlice(r.typ()) + case arrayType: + n := r.uint64() + return types.NewArray(r.typ(), int64(n)) + case chanType: + dir := chanDir(int(r.uint64())) + return types.NewChan(dir, r.typ()) + case mapType: + return types.NewMap(r.typ(), r.typ()) + case signatureType: + r.currPkg = r.pkg() + return r.signature(nil, nil, nil) + + case structType: + r.currPkg = r.pkg() + + fields := make([]*types.Var, r.uint64()) + tags := make([]string, len(fields)) + for i := range fields { + var field *types.Var + if r.p.shallow { + field, _ = r.objectPathObject().(*types.Var) + } + + fpos := r.pos() + fname := r.ident() + ftyp := r.typ() + emb := r.bool() + tag := r.string() + + // Either this is not a shallow import, the field is local, or the + // encoded objectPath failed to produce an object (a bug). + // + // Even in this last, buggy case, fall back on creating a new field. As + // discussed in iexport.go, this is not correct, but mostly works and is + // preferable to failing (for now at least). + if field == nil { + field = types.NewField(fpos, r.currPkg, fname, ftyp, emb) + } + + fields[i] = field + tags[i] = tag + } + return types.NewStruct(fields, tags) + + case interfaceType: + r.currPkg = r.pkg() + + embeddeds := make([]types.Type, r.uint64()) + for i := range embeddeds { + _ = r.pos() + embeddeds[i] = r.typ() + } + + methods := make([]*types.Func, r.uint64()) + for i := range methods { + var method *types.Func + if r.p.shallow { + method, _ = r.objectPathObject().(*types.Func) + } + + mpos := r.pos() + mname := r.ident() + + // TODO(mdempsky): Matches bimport.go, but I + // don't agree with this. 
+ var recv *types.Var + if base != nil { + recv = types.NewVar(token.NoPos, r.currPkg, "", base) + } + msig := r.signature(recv, nil, nil) + + if method == nil { + method = types.NewFunc(mpos, r.currPkg, mname, msig) + } + methods[i] = method + } + + typ := newInterface(methods, embeddeds) + r.p.interfaceList = append(r.p.interfaceList, typ) + return typ + + case typeParamType: + if r.p.version < iexportVersionGenerics { + errorf("unexpected type param type") + } + pkg, name := r.qualifiedIdent() + id := ident{pkg, name} + if t, ok := r.p.tparamIndex[id]; ok { + // We're already in the process of importing this typeparam. + return t + } + // Otherwise, import the definition of the typeparam now. + r.p.doDecl(pkg, name) + return r.p.tparamIndex[id] + + case instanceType: + if r.p.version < iexportVersionGenerics { + errorf("unexpected instantiation type") + } + // pos does not matter for instances: they are positioned on the original + // type. + _ = r.pos() + len := r.uint64() + targs := make([]types.Type, len) + for i := range targs { + targs[i] = r.typ() + } + baseType := r.typ() + // The imported instantiated type doesn't include any methods, so + // we must always use the methods of the base (orig) type. + // TODO provide a non-nil *Environment + t, _ := typeparams.Instantiate(nil, baseType, targs, false) + + // Workaround for golang/go#61561. See the doc for instanceList for details. + r.p.instanceList = append(r.p.instanceList, t) + return t + + case unionType: + if r.p.version < iexportVersionGenerics { + errorf("unexpected instantiation type") + } + terms := make([]*typeparams.Term, r.uint64()) + for i := range terms { + terms[i] = typeparams.NewTerm(r.bool(), r.typ()) + } + return typeparams.NewUnion(terms) + } +} + +func (r *importReader) kind() itag { + return itag(r.uint64()) +} + +// objectPathObject is the inverse of exportWriter.objectPath. +// +// In shallow mode, certain fields and methods may need to be looked up in an +// imported package. 
See the doc for exportWriter.objectPath for a full +// explanation. +func (r *importReader) objectPathObject() types.Object { + objPath := objectpath.Path(r.string()) + if objPath == "" { + return nil + } + pkg := r.pkg() + obj, err := objectpath.Object(pkg, objPath) + if err != nil { + if r.p.reportf != nil { + r.p.reportf("failed to find object for objectPath %q: %v", objPath, err) + } + } + return obj +} + +func (r *importReader) signature(recv *types.Var, rparams []*typeparams.TypeParam, tparams []*typeparams.TypeParam) *types.Signature { + params := r.paramList() + results := r.paramList() + variadic := params.Len() > 0 && r.bool() + return typeparams.NewSignatureType(recv, rparams, tparams, params, results, variadic) +} + +func (r *importReader) tparamList() []*typeparams.TypeParam { + n := r.uint64() + if n == 0 { + return nil + } + xs := make([]*typeparams.TypeParam, n) + for i := range xs { + // Note: the standard library importer is tolerant of nil types here, + // though would panic in SetTypeParams. + xs[i] = r.typ().(*typeparams.TypeParam) + } + return xs +} + +func (r *importReader) paramList() *types.Tuple { + xs := make([]*types.Var, r.uint64()) + for i := range xs { + xs[i] = r.param() + } + return types.NewTuple(xs...) 
+} + +func (r *importReader) param() *types.Var { + pos := r.pos() + name := r.ident() + typ := r.typ() + return types.NewParam(pos, r.currPkg, name, typ) +} + +func (r *importReader) bool() bool { + return r.uint64() != 0 +} + +func (r *importReader) int64() int64 { + n, err := binary.ReadVarint(&r.declReader) + if err != nil { + errorf("readVarint: %v", err) + } + return n +} + +func (r *importReader) uint64() uint64 { + n, err := binary.ReadUvarint(&r.declReader) + if err != nil { + errorf("readUvarint: %v", err) + } + return n +} + +func (r *importReader) byte() byte { + x, err := r.declReader.ReadByte() + if err != nil { + errorf("declReader.ReadByte: %v", err) + } + return x +} + +func baseType(typ types.Type) *types.Named { + // pointer receivers are never types.Named types + if p, _ := typ.(*types.Pointer); p != nil { + typ = p.Elem() + } + // receiver base types are always (possibly generic) types.Named types + n, _ := typ.(*types.Named) + return n +} diff --git a/cluster-autoscaler/vendor/golang.org/x/tools/internal/gcimporter/newInterface10.go b/cluster-autoscaler/vendor/golang.org/x/tools/internal/gcimporter/newInterface10.go new file mode 100644 index 000000000000..8b163e3d058a --- /dev/null +++ b/cluster-autoscaler/vendor/golang.org/x/tools/internal/gcimporter/newInterface10.go @@ -0,0 +1,22 @@ +// Copyright 2018 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. 
+
+//go:build !go1.11
+// +build !go1.11
+
+package gcimporter
+
+import "go/types"
+
+func newInterface(methods []*types.Func, embeddeds []types.Type) *types.Interface {
+	named := make([]*types.Named, len(embeddeds))
+	for i, e := range embeddeds {
+		var ok bool
+		named[i], ok = e.(*types.Named)
+		if !ok {
+			panic("embedding of non-defined interfaces in interfaces is not supported before Go 1.11")
+		}
+	}
+	return types.NewInterface(methods, named)
+}
diff --git a/cluster-autoscaler/vendor/golang.org/x/tools/internal/gcimporter/newInterface11.go b/cluster-autoscaler/vendor/golang.org/x/tools/internal/gcimporter/newInterface11.go
new file mode 100644
index 000000000000..49984f40fd80
--- /dev/null
+++ b/cluster-autoscaler/vendor/golang.org/x/tools/internal/gcimporter/newInterface11.go
@@ -0,0 +1,14 @@
+// Copyright 2018 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+//go:build go1.11
+// +build go1.11
+
+package gcimporter
+
+import "go/types"
+
+func newInterface(methods []*types.Func, embeddeds []types.Type) *types.Interface {
+	return types.NewInterfaceType(methods, embeddeds)
+}
diff --git a/cluster-autoscaler/vendor/golang.org/x/tools/internal/gcimporter/support_go117.go b/cluster-autoscaler/vendor/golang.org/x/tools/internal/gcimporter/support_go117.go
new file mode 100644
index 000000000000..d892273efb61
--- /dev/null
+++ b/cluster-autoscaler/vendor/golang.org/x/tools/internal/gcimporter/support_go117.go
@@ -0,0 +1,16 @@
+// Copyright 2021 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+//go:build !go1.18
+// +build !go1.18
+
+package gcimporter
+
+import "go/types"
+
+const iexportVersion = iexportVersionGo1_11
+
+func additionalPredeclared() []types.Type {
+	return nil
+}
diff --git a/cluster-autoscaler/vendor/golang.org/x/tools/internal/gcimporter/support_go118.go b/cluster-autoscaler/vendor/golang.org/x/tools/internal/gcimporter/support_go118.go
new file mode 100644
index 000000000000..edbe6ea7041d
--- /dev/null
+++ b/cluster-autoscaler/vendor/golang.org/x/tools/internal/gcimporter/support_go118.go
@@ -0,0 +1,37 @@
+// Copyright 2021 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+//go:build go1.18
+// +build go1.18
+
+package gcimporter
+
+import "go/types"
+
+const iexportVersion = iexportVersionGenerics
+
+// additionalPredeclared returns additional predeclared types in go.1.18.
+func additionalPredeclared() []types.Type {
+	return []types.Type{
+		// comparable
+		types.Universe.Lookup("comparable").Type(),
+
+		// any
+		types.Universe.Lookup("any").Type(),
+	}
+}
+
+// See cmd/compile/internal/types.SplitVargenSuffix.
+func splitVargenSuffix(name string) (base, suffix string) {
+	i := len(name)
+	for i > 0 && name[i-1] >= '0' && name[i-1] <= '9' {
+		i--
+	}
+	const dot = "·"
+	if i >= len(dot) && name[i-len(dot):i] == dot {
+		i -= len(dot)
+		return name[:i], name[i:]
+	}
+	return name, ""
+}
diff --git a/cluster-autoscaler/vendor/golang.org/x/tools/internal/gcimporter/unified_no.go b/cluster-autoscaler/vendor/golang.org/x/tools/internal/gcimporter/unified_no.go
new file mode 100644
index 000000000000..286bf445483d
--- /dev/null
+++ b/cluster-autoscaler/vendor/golang.org/x/tools/internal/gcimporter/unified_no.go
@@ -0,0 +1,10 @@
+// Copyright 2022 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+//go:build !(go1.18 && goexperiment.unified)
+// +build !go1.18 !goexperiment.unified
+
+package gcimporter
+
+const unifiedIR = false
diff --git a/cluster-autoscaler/vendor/golang.org/x/tools/internal/gcimporter/unified_yes.go b/cluster-autoscaler/vendor/golang.org/x/tools/internal/gcimporter/unified_yes.go
new file mode 100644
index 000000000000..b5d69ffbe682
--- /dev/null
+++ b/cluster-autoscaler/vendor/golang.org/x/tools/internal/gcimporter/unified_yes.go
@@ -0,0 +1,10 @@
+// Copyright 2022 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+//go:build go1.18 && goexperiment.unified
+// +build go1.18,goexperiment.unified
+
+package gcimporter
+
+const unifiedIR = true
diff --git a/cluster-autoscaler/vendor/golang.org/x/tools/internal/gcimporter/ureader_no.go b/cluster-autoscaler/vendor/golang.org/x/tools/internal/gcimporter/ureader_no.go
new file mode 100644
index 000000000000..8eb20729c2ad
--- /dev/null
+++ b/cluster-autoscaler/vendor/golang.org/x/tools/internal/gcimporter/ureader_no.go
@@ -0,0 +1,19 @@
+// Copyright 2022 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+//go:build !go1.18
+// +build !go1.18
+
+package gcimporter
+
+import (
+	"fmt"
+	"go/token"
+	"go/types"
+)
+
+func UImportData(fset *token.FileSet, imports map[string]*types.Package, data []byte, path string) (_ int, pkg *types.Package, err error) {
+	err = fmt.Errorf("go/tools compiled with a Go version earlier than 1.18 cannot read unified IR export data")
+	return
+}
diff --git a/cluster-autoscaler/vendor/golang.org/x/tools/internal/gcimporter/ureader_yes.go b/cluster-autoscaler/vendor/golang.org/x/tools/internal/gcimporter/ureader_yes.go
new file mode 100644
index 000000000000..b977435f626d
--- /dev/null
+++ b/cluster-autoscaler/vendor/golang.org/x/tools/internal/gcimporter/ureader_yes.go
@@ -0,0 +1,728 @@
+// Copyright 2021 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// Derived from go/internal/gcimporter/ureader.go
+
+//go:build go1.18
+// +build go1.18
+
+package gcimporter
+
+import (
+	"fmt"
+	"go/token"
+	"go/types"
+	"sort"
+	"strings"
+
+	"golang.org/x/tools/internal/pkgbits"
+)
+
+// A pkgReader holds the shared state for reading a unified IR package
+// description.
+type pkgReader struct {
+	pkgbits.PkgDecoder
+
+	fake fakeFileSet
+
+	ctxt    *types.Context
+	imports map[string]*types.Package // previously imported packages, indexed by path
+
+	// lazily initialized arrays corresponding to the unified IR
+	// PosBase, Pkg, and Type sections, respectively.
+	posBases []string // position bases (i.e., file names)
+	pkgs     []*types.Package
+	typs     []types.Type
+
+	// laterFns holds functions that need to be invoked at the end of
+	// import reading.
+	laterFns []func()
+	// laterFors is used in case of 'type A B' to ensure that B is processed before A.
+	laterFors map[types.Type]int
+
+	// ifaces holds a list of constructed Interfaces, which need to have
+	// Complete called after importing is done.
+	ifaces []*types.Interface
+}
+
+// later adds a function to be invoked at the end of import reading.
+func (pr *pkgReader) later(fn func()) {
+	pr.laterFns = append(pr.laterFns, fn)
+}
+
+// See cmd/compile/internal/noder.derivedInfo.
+type derivedInfo struct {
+	idx    pkgbits.Index
+	needed bool
+}
+
+// See cmd/compile/internal/noder.typeInfo.
+type typeInfo struct {
+	idx     pkgbits.Index
+	derived bool
+}
+
+func UImportData(fset *token.FileSet, imports map[string]*types.Package, data []byte, path string) (_ int, pkg *types.Package, err error) {
+	if !debug {
+		defer func() {
+			if x := recover(); x != nil {
+				err = fmt.Errorf("internal error in importing %q (%v); please report an issue", path, x)
+			}
+		}()
+	}
+
+	s := string(data)
+	s = s[:strings.LastIndex(s, "\n$$\n")]
+	input := pkgbits.NewPkgDecoder(path, s)
+	pkg = readUnifiedPackage(fset, nil, imports, input)
+	return
+}
+
+// laterFor adds a function to be invoked at the end of import reading, and records the type that function is finishing.
+func (pr *pkgReader) laterFor(t types.Type, fn func()) {
+	if pr.laterFors == nil {
+		pr.laterFors = make(map[types.Type]int)
+	}
+	pr.laterFors[t] = len(pr.laterFns)
+	pr.laterFns = append(pr.laterFns, fn)
+}
+
+// readUnifiedPackage reads a package description from the given
+// unified IR export data decoder.
+func readUnifiedPackage(fset *token.FileSet, ctxt *types.Context, imports map[string]*types.Package, input pkgbits.PkgDecoder) *types.Package {
+	pr := pkgReader{
+		PkgDecoder: input,
+
+		fake: fakeFileSet{
+			fset:  fset,
+			files: make(map[string]*fileInfo),
+		},
+
+		ctxt:    ctxt,
+		imports: imports,
+
+		posBases: make([]string, input.NumElems(pkgbits.RelocPosBase)),
+		pkgs:     make([]*types.Package, input.NumElems(pkgbits.RelocPkg)),
+		typs:     make([]types.Type, input.NumElems(pkgbits.RelocType)),
+	}
+	defer pr.fake.setLines()
+
+	r := pr.newReader(pkgbits.RelocMeta, pkgbits.PublicRootIdx, pkgbits.SyncPublic)
+	pkg := r.pkg()
+	r.Bool() // has init
+
+	for i, n := 0, r.Len(); i < n; i++ {
+		// As if r.obj(), but avoiding the Scope.Lookup call,
+		// to avoid eager loading of imports.
+		r.Sync(pkgbits.SyncObject)
+		assert(!r.Bool())
+		r.p.objIdx(r.Reloc(pkgbits.RelocObj))
+		assert(r.Len() == 0)
+	}
+
+	r.Sync(pkgbits.SyncEOF)
+
+	for _, fn := range pr.laterFns {
+		fn()
+	}
+
+	for _, iface := range pr.ifaces {
+		iface.Complete()
+	}
+
+	// Imports() of pkg are all of the transitive packages that were loaded.
+	var imps []*types.Package
+	for _, imp := range pr.pkgs {
+		if imp != nil && imp != pkg {
+			imps = append(imps, imp)
+		}
+	}
+	sort.Sort(byPath(imps))
+	pkg.SetImports(imps)
+
+	pkg.MarkComplete()
+	return pkg
+}
+
+// A reader holds the state for reading a single unified IR element
+// within a package.
+type reader struct {
+	pkgbits.Decoder
+
+	p *pkgReader
+
+	dict *readerDict
+}
+
+// A readerDict holds the state for type parameters that parameterize
+// the current unified IR element.
+type readerDict struct {
+	// bounds is a slice of typeInfos corresponding to the underlying
+	// bounds of the element's type parameters.
+	bounds []typeInfo
+
+	// tparams is a slice of the constructed TypeParams for the element.
+	tparams []*types.TypeParam
+
+	// derived is a slice of types derived from tparams, which may be
+	// instantiated while reading the current element.
+	derived      []derivedInfo
+	derivedTypes []types.Type // lazily instantiated from derived
+}
+
+func (pr *pkgReader) newReader(k pkgbits.RelocKind, idx pkgbits.Index, marker pkgbits.SyncMarker) *reader {
+	return &reader{
+		Decoder: pr.NewDecoder(k, idx, marker),
+		p:       pr,
+	}
+}
+
+func (pr *pkgReader) tempReader(k pkgbits.RelocKind, idx pkgbits.Index, marker pkgbits.SyncMarker) *reader {
+	return &reader{
+		Decoder: pr.TempDecoder(k, idx, marker),
+		p:       pr,
+	}
+}
+
+func (pr *pkgReader) retireReader(r *reader) {
+	pr.RetireDecoder(&r.Decoder)
+}
+
+// @@@ Positions
+
+func (r *reader) pos() token.Pos {
+	r.Sync(pkgbits.SyncPos)
+	if !r.Bool() {
+		return token.NoPos
+	}
+
+	// TODO(mdempsky): Delta encoding.
+	posBase := r.posBase()
+	line := r.Uint()
+	col := r.Uint()
+	return r.p.fake.pos(posBase, int(line), int(col))
+}
+
+func (r *reader) posBase() string {
+	return r.p.posBaseIdx(r.Reloc(pkgbits.RelocPosBase))
+}
+
+func (pr *pkgReader) posBaseIdx(idx pkgbits.Index) string {
+	if b := pr.posBases[idx]; b != "" {
+		return b
+	}
+
+	var filename string
+	{
+		r := pr.tempReader(pkgbits.RelocPosBase, idx, pkgbits.SyncPosBase)
+
+		// Within types2, position bases have a lot more details (e.g.,
+		// keeping track of where //line directives appeared exactly).
+		//
+		// For go/types, we just track the file name.
+
+		filename = r.String()
+
+		if r.Bool() { // file base
+			// Was: "b = token.NewTrimmedFileBase(filename, true)"
+		} else { // line base
+			pos := r.pos()
+			line := r.Uint()
+			col := r.Uint()
+
+			// Was: "b = token.NewLineBase(pos, filename, true, line, col)"
+			_, _, _ = pos, line, col
+		}
+		pr.retireReader(r)
+	}
+	b := filename
+	pr.posBases[idx] = b
+	return b
+}
+
+// @@@ Packages
+
+func (r *reader) pkg() *types.Package {
+	r.Sync(pkgbits.SyncPkg)
+	return r.p.pkgIdx(r.Reloc(pkgbits.RelocPkg))
+}
+
+func (pr *pkgReader) pkgIdx(idx pkgbits.Index) *types.Package {
+	// TODO(mdempsky): Consider using some non-nil pointer to indicate
+	// the universe scope, so we don't need to keep re-reading it.
+	if pkg := pr.pkgs[idx]; pkg != nil {
+		return pkg
+	}
+
+	pkg := pr.newReader(pkgbits.RelocPkg, idx, pkgbits.SyncPkgDef).doPkg()
+	pr.pkgs[idx] = pkg
+	return pkg
+}
+
+func (r *reader) doPkg() *types.Package {
+	path := r.String()
+	switch path {
+	case "":
+		path = r.p.PkgPath()
+	case "builtin":
+		return nil // universe
+	case "unsafe":
+		return types.Unsafe
+	}
+
+	if pkg := r.p.imports[path]; pkg != nil {
+		return pkg
+	}
+
+	name := r.String()
+
+	pkg := types.NewPackage(path, name)
+	r.p.imports[path] = pkg
+
+	return pkg
+}
+
+// @@@ Types
+
+func (r *reader) typ() types.Type {
+	return r.p.typIdx(r.typInfo(), r.dict)
+}
+
+func (r *reader) typInfo() typeInfo {
+	r.Sync(pkgbits.SyncType)
+	if r.Bool() {
+		return typeInfo{idx: pkgbits.Index(r.Len()), derived: true}
+	}
+	return typeInfo{idx: r.Reloc(pkgbits.RelocType), derived: false}
+}
+
+func (pr *pkgReader) typIdx(info typeInfo, dict *readerDict) types.Type {
+	idx := info.idx
+	var where *types.Type
+	if info.derived {
+		where = &dict.derivedTypes[idx]
+		idx = dict.derived[idx].idx
+	} else {
+		where = &pr.typs[idx]
+	}
+
+	if typ := *where; typ != nil {
+		return typ
+	}
+
+	var typ types.Type
+	{
+		r := pr.tempReader(pkgbits.RelocType, idx, pkgbits.SyncTypeIdx)
+		r.dict = dict
+
+		typ = r.doTyp()
+		assert(typ != nil)
+		pr.retireReader(r)
+	}
+
+	// See comment in pkgReader.typIdx explaining how this happens.
+	if prev := *where; prev != nil {
+		return prev
+	}
+
+	*where = typ
+	return typ
+}
+
+func (r *reader) doTyp() (res types.Type) {
+	switch tag := pkgbits.CodeType(r.Code(pkgbits.SyncType)); tag {
+	default:
+		errorf("unhandled type tag: %v", tag)
+		panic("unreachable")
+
+	case pkgbits.TypeBasic:
+		return types.Typ[r.Len()]
+
+	case pkgbits.TypeNamed:
+		obj, targs := r.obj()
+		name := obj.(*types.TypeName)
+		if len(targs) != 0 {
+			t, _ := types.Instantiate(r.p.ctxt, name.Type(), targs, false)
+			return t
+		}
+		return name.Type()
+
+	case pkgbits.TypeTypeParam:
+		return r.dict.tparams[r.Len()]
+
+	case pkgbits.TypeArray:
+		len := int64(r.Uint64())
+		return types.NewArray(r.typ(), len)
+	case pkgbits.TypeChan:
+		dir := types.ChanDir(r.Len())
+		return types.NewChan(dir, r.typ())
+	case pkgbits.TypeMap:
+		return types.NewMap(r.typ(), r.typ())
+	case pkgbits.TypePointer:
+		return types.NewPointer(r.typ())
+	case pkgbits.TypeSignature:
+		return r.signature(nil, nil, nil)
+	case pkgbits.TypeSlice:
+		return types.NewSlice(r.typ())
+	case pkgbits.TypeStruct:
+		return r.structType()
+	case pkgbits.TypeInterface:
+		return r.interfaceType()
+	case pkgbits.TypeUnion:
+		return r.unionType()
+	}
+}
+
+func (r *reader) structType() *types.Struct {
+	fields := make([]*types.Var, r.Len())
+	var tags []string
+	for i := range fields {
+		pos := r.pos()
+		pkg, name := r.selector()
+		ftyp := r.typ()
+		tag := r.String()
+		embedded := r.Bool()
+
+		fields[i] = types.NewField(pos, pkg, name, ftyp, embedded)
+		if tag != "" {
+			for len(tags) < i {
+				tags = append(tags, "")
+			}
+			tags = append(tags, tag)
+		}
+	}
+	return types.NewStruct(fields, tags)
+}
+
+func (r *reader) unionType() *types.Union {
+	terms := make([]*types.Term, r.Len())
+	for i := range terms {
+		terms[i] = types.NewTerm(r.Bool(), r.typ())
+	}
+	return types.NewUnion(terms)
+}
+
+func (r *reader) interfaceType() *types.Interface {
+	methods := make([]*types.Func, r.Len())
+	embeddeds := make([]types.Type, r.Len())
+	implicit := len(methods) == 0 && len(embeddeds) == 1 && r.Bool()
+
+	for i := range methods {
+		pos := r.pos()
+		pkg, name := r.selector()
+		mtyp := r.signature(nil, nil, nil)
+		methods[i] = types.NewFunc(pos, pkg, name, mtyp)
+	}
+
+	for i := range embeddeds {
+		embeddeds[i] = r.typ()
+	}
+
+	iface := types.NewInterfaceType(methods, embeddeds)
+	if implicit {
+		iface.MarkImplicit()
+	}
+
+	// We need to call iface.Complete(), but if there are any embedded
+	// defined types, then we may not have set their underlying
+	// interface type yet. So we need to defer calling Complete until
+	// after we've called SetUnderlying everywhere.
+	//
+	// TODO(mdempsky): After CL 424876 lands, it should be safe to call
+	// iface.Complete() immediately.
+	r.p.ifaces = append(r.p.ifaces, iface)
+
+	return iface
+}
+
+func (r *reader) signature(recv *types.Var, rtparams, tparams []*types.TypeParam) *types.Signature {
+	r.Sync(pkgbits.SyncSignature)
+
+	params := r.params()
+	results := r.params()
+	variadic := r.Bool()
+
+	return types.NewSignatureType(recv, rtparams, tparams, params, results, variadic)
+}
+
+func (r *reader) params() *types.Tuple {
+	r.Sync(pkgbits.SyncParams)
+
+	params := make([]*types.Var, r.Len())
+	for i := range params {
+		params[i] = r.param()
+	}
+
+	return types.NewTuple(params...)
+}
+
+func (r *reader) param() *types.Var {
+	r.Sync(pkgbits.SyncParam)
+
+	pos := r.pos()
+	pkg, name := r.localIdent()
+	typ := r.typ()
+
+	return types.NewParam(pos, pkg, name, typ)
+}
+
+// @@@ Objects
+
+func (r *reader) obj() (types.Object, []types.Type) {
+	r.Sync(pkgbits.SyncObject)
+
+	assert(!r.Bool())
+
+	pkg, name := r.p.objIdx(r.Reloc(pkgbits.RelocObj))
+	obj := pkgScope(pkg).Lookup(name)
+
+	targs := make([]types.Type, r.Len())
+	for i := range targs {
+		targs[i] = r.typ()
+	}
+
+	return obj, targs
+}
+
+func (pr *pkgReader) objIdx(idx pkgbits.Index) (*types.Package, string) {
+
+	var objPkg *types.Package
+	var objName string
+	var tag pkgbits.CodeObj
+	{
+		rname := pr.tempReader(pkgbits.RelocName, idx, pkgbits.SyncObject1)
+
+		objPkg, objName = rname.qualifiedIdent()
+		assert(objName != "")
+
+		tag = pkgbits.CodeObj(rname.Code(pkgbits.SyncCodeObj))
+		pr.retireReader(rname)
+	}
+
+	if tag == pkgbits.ObjStub {
+		assert(objPkg == nil || objPkg == types.Unsafe)
+		return objPkg, objName
+	}
+
+	// Ignore local types promoted to global scope (#55110).
+	if _, suffix := splitVargenSuffix(objName); suffix != "" {
+		return objPkg, objName
+	}
+
+	if objPkg.Scope().Lookup(objName) == nil {
+		dict := pr.objDictIdx(idx)
+
+		r := pr.newReader(pkgbits.RelocObj, idx, pkgbits.SyncObject1)
+		r.dict = dict
+
+		declare := func(obj types.Object) {
+			objPkg.Scope().Insert(obj)
+		}
+
+		switch tag {
+		default:
+			panic("weird")
+
+		case pkgbits.ObjAlias:
+			pos := r.pos()
+			typ := r.typ()
+			declare(types.NewTypeName(pos, objPkg, objName, typ))
+
+		case pkgbits.ObjConst:
+			pos := r.pos()
+			typ := r.typ()
+			val := r.Value()
+			declare(types.NewConst(pos, objPkg, objName, typ, val))
+
+		case pkgbits.ObjFunc:
+			pos := r.pos()
+			tparams := r.typeParamNames()
+			sig := r.signature(nil, nil, tparams)
+			declare(types.NewFunc(pos, objPkg, objName, sig))
+
+		case pkgbits.ObjType:
+			pos := r.pos()
+
+			obj := types.NewTypeName(pos, objPkg, objName, nil)
+			named := types.NewNamed(obj, nil, nil)
+			declare(obj)
+
+			named.SetTypeParams(r.typeParamNames())
+
+			setUnderlying := func(underlying types.Type) {
+				// If the underlying type is an interface, we need to
+				// duplicate its methods so we can replace the receiver
+				// parameter's type (#49906).
+				if iface, ok := underlying.(*types.Interface); ok && iface.NumExplicitMethods() != 0 {
+					methods := make([]*types.Func, iface.NumExplicitMethods())
+					for i := range methods {
+						fn := iface.ExplicitMethod(i)
+						sig := fn.Type().(*types.Signature)
+
+						recv := types.NewVar(fn.Pos(), fn.Pkg(), "", named)
+						methods[i] = types.NewFunc(fn.Pos(), fn.Pkg(), fn.Name(), types.NewSignature(recv, sig.Params(), sig.Results(), sig.Variadic()))
+					}
+
+					embeds := make([]types.Type, iface.NumEmbeddeds())
+					for i := range embeds {
+						embeds[i] = iface.EmbeddedType(i)
+					}
+
+					newIface := types.NewInterfaceType(methods, embeds)
+					r.p.ifaces = append(r.p.ifaces, newIface)
+					underlying = newIface
+				}
+
+				named.SetUnderlying(underlying)
+			}
+
+			// Since go.dev/cl/455279, we can assume rhs.Underlying() will
+			// always be non-nil. However, to temporarily support users of
+			// older snapshot releases, we continue to fallback to the old
+			// behavior for now.
+			//
+			// TODO(mdempsky): Remove fallback code and simplify after
+			// allowing time for snapshot users to upgrade.
+			rhs := r.typ()
+			if underlying := rhs.Underlying(); underlying != nil {
+				setUnderlying(underlying)
+			} else {
+				pk := r.p
+				pk.laterFor(named, func() {
+					// First be sure that the rhs is initialized, if it needs to be initialized.
+					delete(pk.laterFors, named) // prevent cycles
+					if i, ok := pk.laterFors[rhs]; ok {
+						f := pk.laterFns[i]
+						pk.laterFns[i] = func() {} // function is running now, so replace it with a no-op
+						f()                        // initialize RHS
+					}
+					setUnderlying(rhs.Underlying())
+				})
+			}
+
+			for i, n := 0, r.Len(); i < n; i++ {
+				named.AddMethod(r.method())
+			}
+
+		case pkgbits.ObjVar:
+			pos := r.pos()
+			typ := r.typ()
+			declare(types.NewVar(pos, objPkg, objName, typ))
+		}
+	}
+
+	return objPkg, objName
+}
+
+func (pr *pkgReader) objDictIdx(idx pkgbits.Index) *readerDict {
+
+	var dict readerDict
+
+	{
+		r := pr.tempReader(pkgbits.RelocObjDict, idx, pkgbits.SyncObject1)
+		if implicits := r.Len(); implicits != 0 {
+			errorf("unexpected object with %v implicit type parameter(s)", implicits)
+		}
+
+		dict.bounds = make([]typeInfo, r.Len())
+		for i := range dict.bounds {
+			dict.bounds[i] = r.typInfo()
+		}
+
+		dict.derived = make([]derivedInfo, r.Len())
+		dict.derivedTypes = make([]types.Type, len(dict.derived))
+		for i := range dict.derived {
+			dict.derived[i] = derivedInfo{r.Reloc(pkgbits.RelocType), r.Bool()}
+		}
+
+		pr.retireReader(r)
+	}
+	// function references follow, but reader doesn't need those
+
+	return &dict
+}
+
+func (r *reader) typeParamNames() []*types.TypeParam {
+	r.Sync(pkgbits.SyncTypeParamNames)
+
+	// Note: This code assumes it only processes objects without
+	// implicit type parameters. This is currently fine, because
+	// reader is only used to read in exported declarations, which are
+	// always package scoped.
+
+	if len(r.dict.bounds) == 0 {
+		return nil
+	}
+
+	// Careful: Type parameter lists may have cycles. To allow for this,
+	// we construct the type parameter list in two passes: first we
+	// create all the TypeNames and TypeParams, then we construct and
+	// set the bound type.
+
+	r.dict.tparams = make([]*types.TypeParam, len(r.dict.bounds))
+	for i := range r.dict.bounds {
+		pos := r.pos()
+		pkg, name := r.localIdent()
+
+		tname := types.NewTypeName(pos, pkg, name, nil)
+		r.dict.tparams[i] = types.NewTypeParam(tname, nil)
+	}
+
+	typs := make([]types.Type, len(r.dict.bounds))
+	for i, bound := range r.dict.bounds {
+		typs[i] = r.p.typIdx(bound, r.dict)
+	}
+
+	// TODO(mdempsky): This is subtle, elaborate further.
+	//
+	// We have to save tparams outside of the closure, because
+	// typeParamNames() can be called multiple times with the same
+	// dictionary instance.
+	//
+	// Also, this needs to happen later to make sure SetUnderlying has
+	// been called.
+	//
+	// TODO(mdempsky): Is it safe to have a single "later" slice or do
+	// we need to have multiple passes? See comments on CL 386002 and
+	// go.dev/issue/52104.
+	tparams := r.dict.tparams
+	r.p.later(func() {
+		for i, typ := range typs {
+			tparams[i].SetConstraint(typ)
+		}
+	})
+
+	return r.dict.tparams
+}
+
+func (r *reader) method() *types.Func {
+	r.Sync(pkgbits.SyncMethod)
+	pos := r.pos()
+	pkg, name := r.selector()
+
+	rparams := r.typeParamNames()
+	sig := r.signature(r.param(), rparams, nil)
+
+	_ = r.pos() // TODO(mdempsky): Remove; this is a hack for linker.go.
+	return types.NewFunc(pos, pkg, name, sig)
+}
+
+func (r *reader) qualifiedIdent() (*types.Package, string) { return r.ident(pkgbits.SyncSym) }
+func (r *reader) localIdent() (*types.Package, string)     { return r.ident(pkgbits.SyncLocalIdent) }
+func (r *reader) selector() (*types.Package, string)       { return r.ident(pkgbits.SyncSelector) }
+
+func (r *reader) ident(marker pkgbits.SyncMarker) (*types.Package, string) {
+	r.Sync(marker)
+	return r.pkg(), r.String()
+}
+
+// pkgScope returns pkg.Scope().
+// If pkg is nil, it returns types.Universe instead.
+//
+// TODO(mdempsky): Remove after x/tools can depend on Go 1.19.
+func pkgScope(pkg *types.Package) *types.Scope {
+	if pkg != nil {
+		return pkg.Scope()
+	}
+	return types.Universe
+}
diff --git a/cluster-autoscaler/vendor/golang.org/x/tools/internal/gocommand/invoke.go b/cluster-autoscaler/vendor/golang.org/x/tools/internal/gocommand/invoke.go
new file mode 100644
index 000000000000..53cf66da0193
--- /dev/null
+++ b/cluster-autoscaler/vendor/golang.org/x/tools/internal/gocommand/invoke.go
@@ -0,0 +1,462 @@
+// Copyright 2020 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// Package gocommand is a helper for calling the go command.
+package gocommand
+
+import (
+	"bytes"
+	"context"
+	"errors"
+	"fmt"
+	"io"
+	"log"
+	"os"
+	"reflect"
+	"regexp"
+	"runtime"
+	"strconv"
+	"strings"
+	"sync"
+	"time"
+
+	exec "golang.org/x/sys/execabs"
+
+	"golang.org/x/tools/internal/event"
+	"golang.org/x/tools/internal/event/keys"
+	"golang.org/x/tools/internal/event/label"
+	"golang.org/x/tools/internal/event/tag"
+)
+
+// A Runner will run go command invocations and serialize
+// them if it sees a concurrency error.
+type Runner struct {
+	// once guards the runner initialization.
+	once sync.Once
+
+	// inFlight tracks available workers.
+	inFlight chan struct{}
+
+	// serialized guards the ability to run a go command serially,
+	// to avoid deadlocks when claiming workers.
+	serialized chan struct{}
+}
+
+const maxInFlight = 10
+
+func (runner *Runner) initialize() {
+	runner.once.Do(func() {
+		runner.inFlight = make(chan struct{}, maxInFlight)
+		runner.serialized = make(chan struct{}, 1)
+	})
+}
+
+// 1.13: go: updates to go.mod needed, but contents have changed
+// 1.14: go: updating go.mod: existing contents have changed since last read
+var modConcurrencyError = regexp.MustCompile(`go:.*go.mod.*contents have changed`)
+
+// verb is an event label for the go command verb.
+var verb = keys.NewString("verb", "go command verb")
+
+func invLabels(inv Invocation) []label.Label {
+	return []label.Label{verb.Of(inv.Verb), tag.Directory.Of(inv.WorkingDir)}
+}
+
+// Run is a convenience wrapper around RunRaw.
+// It returns only stdout and a "friendly" error.
+func (runner *Runner) Run(ctx context.Context, inv Invocation) (*bytes.Buffer, error) {
+	ctx, done := event.Start(ctx, "gocommand.Runner.Run", invLabels(inv)...)
+	defer done()
+
+	stdout, _, friendly, _ := runner.RunRaw(ctx, inv)
+	return stdout, friendly
+}
+
+// RunPiped runs the invocation serially, always waiting for any concurrent
+// invocations to complete first.
+func (runner *Runner) RunPiped(ctx context.Context, inv Invocation, stdout, stderr io.Writer) error {
+	ctx, done := event.Start(ctx, "gocommand.Runner.RunPiped", invLabels(inv)...)
+	defer done()
+
+	_, err := runner.runPiped(ctx, inv, stdout, stderr)
+	return err
+}
+
+// RunRaw runs the invocation, serializing requests only if they fight over
+// go.mod changes.
+func (runner *Runner) RunRaw(ctx context.Context, inv Invocation) (*bytes.Buffer, *bytes.Buffer, error, error) {
+	ctx, done := event.Start(ctx, "gocommand.Runner.RunRaw", invLabels(inv)...)
+	defer done()
+	// Make sure the runner is always initialized.
+	runner.initialize()
+
+	// First, try to run the go command concurrently.
+	stdout, stderr, friendlyErr, err := runner.runConcurrent(ctx, inv)
+
+	// If we encounter a load concurrency error, we need to retry serially.
+	if friendlyErr == nil || !modConcurrencyError.MatchString(friendlyErr.Error()) {
+		return stdout, stderr, friendlyErr, err
+	}
+	event.Error(ctx, "Load concurrency error, will retry serially", err)
+
+	// Run serially by calling runPiped.
+	stdout.Reset()
+	stderr.Reset()
+	friendlyErr, err = runner.runPiped(ctx, inv, stdout, stderr)
+	return stdout, stderr, friendlyErr, err
+}
+
+func (runner *Runner) runConcurrent(ctx context.Context, inv Invocation) (*bytes.Buffer, *bytes.Buffer, error, error) {
+	// Wait for 1 worker to become available.
+	select {
+	case <-ctx.Done():
+		return nil, nil, nil, ctx.Err()
+	case runner.inFlight <- struct{}{}:
+		defer func() { <-runner.inFlight }()
+	}
+
+	stdout, stderr := &bytes.Buffer{}, &bytes.Buffer{}
+	friendlyErr, err := inv.runWithFriendlyError(ctx, stdout, stderr)
+	return stdout, stderr, friendlyErr, err
+}
+
+func (runner *Runner) runPiped(ctx context.Context, inv Invocation, stdout, stderr io.Writer) (error, error) {
+	// Make sure the runner is always initialized.
+	runner.initialize()
+
+	// Acquire the serialization lock. This avoids deadlocks between two
+	// runPiped commands.
+	select {
+	case <-ctx.Done():
+		return nil, ctx.Err()
+	case runner.serialized <- struct{}{}:
+		defer func() { <-runner.serialized }()
+	}
+
+	// Wait for all in-progress go commands to return before proceeding,
+	// to avoid load concurrency errors.
+	for i := 0; i < maxInFlight; i++ {
+		select {
+		case <-ctx.Done():
+			return nil, ctx.Err()
+		case runner.inFlight <- struct{}{}:
+			// Make sure we always "return" any workers we took.
+			defer func() { <-runner.inFlight }()
+		}
+	}
+
+	return inv.runWithFriendlyError(ctx, stdout, stderr)
+}
+
+// An Invocation represents a call to the go command.
+type Invocation struct {
+	Verb       string
+	Args       []string
+	BuildFlags []string
+
+	// If ModFlag is set, the go command is invoked with -mod=ModFlag.
+	ModFlag string
+
+	// If ModFile is set, the go command is invoked with -modfile=ModFile.
+	ModFile string
+
+	// If Overlay is set, the go command is invoked with -overlay=Overlay.
+	Overlay string
+
+	// If CleanEnv is set, the invocation will run only with the environment
+	// in Env, not starting with os.Environ.
+	CleanEnv   bool
+	Env        []string
+	WorkingDir string
+	Logf       func(format string, args ...interface{})
+}
+
+func (i *Invocation) runWithFriendlyError(ctx context.Context, stdout, stderr io.Writer) (friendlyError error, rawError error) {
+	rawError = i.run(ctx, stdout, stderr)
+	if rawError != nil {
+		friendlyError = rawError
+		// Check for 'go' executable not being found.
+		if ee, ok := rawError.(*exec.Error); ok && ee.Err == exec.ErrNotFound {
+			friendlyError = fmt.Errorf("go command required, not found: %v", ee)
+		}
+		if ctx.Err() != nil {
+			friendlyError = ctx.Err()
+		}
+		friendlyError = fmt.Errorf("err: %v: stderr: %s", friendlyError, stderr)
+	}
+	return
+}
+
+func (i *Invocation) run(ctx context.Context, stdout, stderr io.Writer) error {
+	log := i.Logf
+	if log == nil {
+		log = func(string, ...interface{}) {}
+	}
+
+	goArgs := []string{i.Verb}
+
+	appendModFile := func() {
+		if i.ModFile != "" {
+			goArgs = append(goArgs, "-modfile="+i.ModFile)
+		}
+	}
+	appendModFlag := func() {
+		if i.ModFlag != "" {
+			goArgs = append(goArgs, "-mod="+i.ModFlag)
+		}
+	}
+	appendOverlayFlag := func() {
+		if i.Overlay != "" {
+			goArgs = append(goArgs, "-overlay="+i.Overlay)
+		}
+	}
+
+	switch i.Verb {
+	case "env", "version":
+		goArgs = append(goArgs, i.Args...)
+	case "mod":
+		// mod needs the sub-verb before flags.
+		goArgs = append(goArgs, i.Args[0])
+		appendModFile()
+		goArgs = append(goArgs, i.Args[1:]...)
+	case "get":
+		goArgs = append(goArgs, i.BuildFlags...)
+		appendModFile()
+		goArgs = append(goArgs, i.Args...)
+
+	default: // notably list and build.
+		goArgs = append(goArgs, i.BuildFlags...)
+		appendModFile()
+		appendModFlag()
+		appendOverlayFlag()
+		goArgs = append(goArgs, i.Args...)
+	}
+	cmd := exec.Command("go", goArgs...)
+	cmd.Stdout = stdout
+	cmd.Stderr = stderr
+
+	// cmd.WaitDelay was added only in go1.20 (see #50436).
+	if waitDelay := reflect.ValueOf(cmd).Elem().FieldByName("WaitDelay"); waitDelay.IsValid() {
+		// https://go.dev/issue/59541: don't wait forever copying stderr
+		// after the command has exited.
+		// After CL 484741 we copy stdout manually, so we'll stop reading that as
+		// soon as ctx is done. However, we also don't want to wait around forever
+		// for stderr. Give a much-longer-than-reasonable delay and then assume that
+		// something has wedged in the kernel or runtime.
+		waitDelay.Set(reflect.ValueOf(30 * time.Second))
+	}
+
+	// On darwin the cwd gets resolved to the real path, which breaks anything that
+	// expects the working directory to keep the original path, including the
+	// go command when dealing with modules.
+	// The Go stdlib has a special feature where if the cwd and the PWD are the
+	// same node then it trusts the PWD, so by setting it in the env for the child
+	// process we fix up all the paths returned by the go command.
+	if !i.CleanEnv {
+		cmd.Env = os.Environ()
+	}
+	cmd.Env = append(cmd.Env, i.Env...)
+	if i.WorkingDir != "" {
+		cmd.Env = append(cmd.Env, "PWD="+i.WorkingDir)
+		cmd.Dir = i.WorkingDir
+	}
+
+	defer func(start time.Time) { log("%s for %v", time.Since(start), cmdDebugStr(cmd)) }(time.Now())
+
+	return runCmdContext(ctx, cmd)
+}
+
+// DebugHangingGoCommands may be set by tests to enable additional
+// instrumentation (including panics) for debugging hanging Go commands.
+//
+// See golang/go#54461 for details.
+var DebugHangingGoCommands = false
+
+// runCmdContext is like exec.CommandContext except it sends os.Interrupt
+// before os.Kill.
+func runCmdContext(ctx context.Context, cmd *exec.Cmd) (err error) {
+	// If cmd.Stdout is not an *os.File, the exec package will create a pipe and
+	// copy it to the Writer in a goroutine until the process has finished and
+	// either the pipe reaches EOF or command's WaitDelay expires.
+	//
+	// However, the output from 'go list' can be quite large, and we don't want to
+	// keep reading (and allocating buffers) if we've already decided we don't
+	// care about the output. We don't want to wait for the process to finish, and
+	// we don't want to wait for the WaitDelay to expire either.
+	//
+	// Instead, if cmd.Stdout requires a copying goroutine we explicitly replace
+	// it with a pipe (which is an *os.File), which we can close in order to stop
+	// copying output as soon as we realize we don't care about it.
+	var stdoutW *os.File
+	if cmd.Stdout != nil {
+		if _, ok := cmd.Stdout.(*os.File); !ok {
+			var stdoutR *os.File
+			stdoutR, stdoutW, err = os.Pipe()
+			if err != nil {
+				return err
+			}
+			prevStdout := cmd.Stdout
+			cmd.Stdout = stdoutW
+
+			stdoutErr := make(chan error, 1)
+			go func() {
+				_, err := io.Copy(prevStdout, stdoutR)
+				if err != nil {
+					err = fmt.Errorf("copying stdout: %w", err)
+				}
+				stdoutErr <- err
+			}()
+			defer func() {
+				// We started a goroutine to copy a stdout pipe.
+				// Wait for it to finish, or terminate it if need be.
+				var err2 error
+				select {
+				case err2 = <-stdoutErr:
+					stdoutR.Close()
+				case <-ctx.Done():
+					stdoutR.Close()
+					// Per https://pkg.go.dev/os#File.Close, the call to stdoutR.Close
+					// should cause the Read call in io.Copy to unblock and return
+					// immediately, but we still need to receive from stdoutErr to confirm
+					// that it has happened.
+					<-stdoutErr
+					err2 = ctx.Err()
+				}
+				if err == nil {
+					err = err2
+				}
+			}()
+
+			// Per https://pkg.go.dev/os/exec#Cmd, “If Stdout and Stderr are the
+			// same writer, and have a type that can be compared with ==, at most
+			// one goroutine at a time will call Write.”
+			//
+			// Since we're starting a goroutine that writes to cmd.Stdout, we must
+			// also update cmd.Stderr so that it still holds.
+			func() {
+				defer func() { recover() }()
+				if cmd.Stderr == prevStdout {
+					cmd.Stderr = cmd.Stdout
+				}
+			}()
+		}
+	}
+
+	err = cmd.Start()
+	if stdoutW != nil {
+		// The child process has inherited the pipe file,
+		// so close the copy held in this process.
+		stdoutW.Close()
+		stdoutW = nil
+	}
+	if err != nil {
+		return err
+	}
+
+	resChan := make(chan error, 1)
+	go func() {
+		resChan <- cmd.Wait()
+	}()
+
+	// If we're interested in debugging hanging Go commands, stop waiting after a
+	// minute and panic with interesting information.
+	debug := DebugHangingGoCommands
+	if debug {
+		timer := time.NewTimer(1 * time.Minute)
+		defer timer.Stop()
+		select {
+		case err := <-resChan:
+			return err
+		case <-timer.C:
+			HandleHangingGoCommand(cmd.Process)
+		case <-ctx.Done():
+		}
+	} else {
+		select {
+		case err := <-resChan:
+			return err
+		case <-ctx.Done():
+		}
+	}
+
+	// Cancelled. Interrupt and see if it ends voluntarily.
+	if err := cmd.Process.Signal(os.Interrupt); err == nil {
+		// (We used to wait only 1s but this proved
+		// fragile on loaded builder machines.)
+		timer := time.NewTimer(5 * time.Second)
+		defer timer.Stop()
+		select {
+		case err := <-resChan:
+			return err
+		case <-timer.C:
+		}
+	}
+
+	// Didn't shut down in response to interrupt. Kill it hard.
+	// TODO(rfindley): per advice from bcmills@, it may be better to send SIGQUIT
+	// on certain platforms, such as unix.
+	if err := cmd.Process.Kill(); err != nil && !errors.Is(err, os.ErrProcessDone) && debug {
+		log.Printf("error killing the Go command: %v", err)
+	}
+
+	return <-resChan
+}
+
+func HandleHangingGoCommand(proc *os.Process) {
+	switch runtime.GOOS {
+	case "linux", "darwin", "freebsd", "netbsd":
+		fmt.Fprintln(os.Stderr, `DETECTED A HANGING GO COMMAND
+
+The gopls test runner has detected a hanging go command. In order to debug
+this, the output of ps and lsof/fstat is printed below.
+
+See golang/go#54461 for more details.`)
+
+		fmt.Fprintln(os.Stderr, "\nps axo ppid,pid,command:")
+		fmt.Fprintln(os.Stderr, "-------------------------")
+		psCmd := exec.Command("ps", "axo", "ppid,pid,command")
+		psCmd.Stdout = os.Stderr
+		psCmd.Stderr = os.Stderr
+		if err := psCmd.Run(); err != nil {
+			panic(fmt.Sprintf("running ps: %v", err))
+		}
+
+		listFiles := "lsof"
+		if runtime.GOOS == "freebsd" || runtime.GOOS == "netbsd" {
+			listFiles = "fstat"
+		}
+
+		fmt.Fprintln(os.Stderr, "\n"+listFiles+":")
+		fmt.Fprintln(os.Stderr, "-----")
+		listFilesCmd := exec.Command(listFiles)
+		listFilesCmd.Stdout = os.Stderr
+		listFilesCmd.Stderr = os.Stderr
+		if err := listFilesCmd.Run(); err != nil {
+			panic(fmt.Sprintf("running %s: %v", listFiles, err))
+		}
+	}
+	panic(fmt.Sprintf("detected hanging go command (pid %d): see golang/go#54461 for more details", proc.Pid))
+}
+
+func cmdDebugStr(cmd *exec.Cmd) string {
+	env := make(map[string]string)
+	for _, kv := range cmd.Env {
+		split := strings.SplitN(kv, "=", 2)
+		if len(split) == 2 {
+			k, v := split[0], split[1]
+			env[k] = v
+		}
+	}
+
+	var args []string
+	for _, arg := range cmd.Args {
+		quoted := strconv.Quote(arg)
+		if quoted[1:len(quoted)-1] != arg || strings.Contains(arg, " ") {
+			args = append(args, quoted)
+		} else {
+			args = append(args, arg)
+		}
+	}
+	return fmt.Sprintf("GOROOT=%v GOPATH=%v GO111MODULE=%v GOPROXY=%v PWD=%v %v", env["GOROOT"], env["GOPATH"], env["GO111MODULE"], env["GOPROXY"], env["PWD"], strings.Join(args, " "))
+}
diff --git a/cluster-autoscaler/vendor/golang.org/x/tools/internal/gocommand/vendor.go b/cluster-autoscaler/vendor/golang.org/x/tools/internal/gocommand/vendor.go
new file mode 100644
index 000000000000..2d3d408c0bed
--- /dev/null
+++ b/cluster-autoscaler/vendor/golang.org/x/tools/internal/gocommand/vendor.go
@@ -0,0 +1,109 @@
+// Copyright 2020 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package gocommand
+
+import (
+	"bytes"
+	"context"
+	"fmt"
+	"os"
+	"path/filepath"
+	"regexp"
+	"strings"
+	"time"
+
+	"golang.org/x/mod/semver"
+)
+
+// ModuleJSON holds information about a module.
+type ModuleJSON struct {
+	Path      string      // module path
+	Version   string      // module version
+	Versions  []string    // available module versions (with -versions)
+	Replace   *ModuleJSON // replaced by this module
+	Time      *time.Time  // time version was created
+	Update    *ModuleJSON // available update, if any (with -u)
+	Main      bool        // is this the main module?
+	Indirect  bool        // is this module only an indirect dependency of main module?
+	Dir       string      // directory holding files for this module, if any
+	GoMod     string      // path to go.mod file used when loading this module, if any
+	GoVersion string      // go version used in module
+}
+
+var modFlagRegexp = regexp.MustCompile(`-mod[ =](\w+)`)
+
+// VendorEnabled reports whether vendoring is enabled. It takes a *Runner to execute Go commands
+// with the supplied context.Context and Invocation. The Invocation can contain pre-defined fields,
+// of which only Verb and Args are modified to run the appropriate Go command.
+// Inspired by setDefaultBuildMod in modload/init.go
+func VendorEnabled(ctx context.Context, inv Invocation, r *Runner) (bool, *ModuleJSON, error) {
+	mainMod, go114, err := getMainModuleAnd114(ctx, inv, r)
+	if err != nil {
+		return false, nil, err
+	}
+
+	// We check the GOFLAGS to see if there is anything overridden or not.
+	inv.Verb = "env"
+	inv.Args = []string{"GOFLAGS"}
+	stdout, err := r.Run(ctx, inv)
+	if err != nil {
+		return false, nil, err
+	}
+	goflags := string(bytes.TrimSpace(stdout.Bytes()))
+	matches := modFlagRegexp.FindStringSubmatch(goflags)
+	var modFlag string
+	if len(matches) != 0 {
+		modFlag = matches[1]
+	}
+	// Don't override an explicit '-mod=' argument.
+ if modFlag == "vendor" { + return true, mainMod, nil + } else if modFlag != "" { + return false, nil, nil + } + if mainMod == nil || !go114 { + return false, nil, nil + } + // Check 1.14's automatic vendor mode. + if fi, err := os.Stat(filepath.Join(mainMod.Dir, "vendor")); err == nil && fi.IsDir() { + if mainMod.GoVersion != "" && semver.Compare("v"+mainMod.GoVersion, "v1.14") >= 0 { + // The Go version is at least 1.14, and a vendor directory exists. + // Set -mod=vendor by default. + return true, mainMod, nil + } + } + return false, nil, nil +} + +// getMainModuleAnd114 gets one of the main modules' information and whether the +// go command in use is 1.14+. This is the information needed to figure out +// if vendoring should be enabled. +func getMainModuleAnd114(ctx context.Context, inv Invocation, r *Runner) (*ModuleJSON, bool, error) { + const format = `{{.Path}} +{{.Dir}} +{{.GoMod}} +{{.GoVersion}} +{{range context.ReleaseTags}}{{if eq . "go1.14"}}{{.}}{{end}}{{end}} +` + inv.Verb = "list" + inv.Args = []string{"-m", "-f", format} + stdout, err := r.Run(ctx, inv) + if err != nil { + return nil, false, err + } + + lines := strings.Split(stdout.String(), "\n") + if len(lines) < 5 { + return nil, false, fmt.Errorf("unexpected stdout: %q", stdout.String()) + } + mod := &ModuleJSON{ + Path: lines[0], + Dir: lines[1], + GoMod: lines[2], + GoVersion: lines[3], + Main: true, + } + return mod, lines[4] == "go1.14", nil +} diff --git a/cluster-autoscaler/vendor/golang.org/x/tools/internal/gocommand/version.go b/cluster-autoscaler/vendor/golang.org/x/tools/internal/gocommand/version.go new file mode 100644 index 000000000000..446c5846a60f --- /dev/null +++ b/cluster-autoscaler/vendor/golang.org/x/tools/internal/gocommand/version.go @@ -0,0 +1,71 @@ +// Copyright 2020 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. 
+ +package gocommand + +import ( + "context" + "fmt" + "regexp" + "strings" +) + +// GoVersion reports the minor version number of the highest release +// tag built into the go command on the PATH. +// +// Note that this may be higher than the version of the go tool used +// to build this application, and thus the versions of the standard +// go/{scanner,parser,ast,types} packages that are linked into it. +// In that case, callers should either downgrade to the version of +// go used to build the application, or report an error that the +// application is too old to use the go command on the PATH. +func GoVersion(ctx context.Context, inv Invocation, r *Runner) (int, error) { + inv.Verb = "list" + inv.Args = []string{"-e", "-f", `{{context.ReleaseTags}}`, `--`, `unsafe`} + inv.BuildFlags = nil // This is not a build command. + inv.ModFlag = "" + inv.ModFile = "" + inv.Env = append(inv.Env[:len(inv.Env):len(inv.Env)], "GO111MODULE=off") + + stdoutBytes, err := r.Run(ctx, inv) + if err != nil { + return 0, err + } + stdout := stdoutBytes.String() + if len(stdout) < 3 { + return 0, fmt.Errorf("bad ReleaseTags output: %q", stdout) + } + // Split up "[go1.1 go1.15]" and return highest go1.X value. + tags := strings.Fields(stdout[1 : len(stdout)-2]) + for i := len(tags) - 1; i >= 0; i-- { + var version int + if _, err := fmt.Sscanf(tags[i], "go1.%d", &version); err != nil { + continue + } + return version, nil + } + return 0, fmt.Errorf("no parseable ReleaseTags in %v", tags) +} + +// GoVersionOutput returns the complete output of the go version command. +func GoVersionOutput(ctx context.Context, inv Invocation, r *Runner) (string, error) { + inv.Verb = "version" + goVersion, err := r.Run(ctx, inv) + if err != nil { + return "", err + } + return goVersion.String(), nil +} + +// ParseGoVersionOutput extracts the Go version string +// from the output of the "go version" command. +// Given an unrecognized form, it returns an empty string. 
+func ParseGoVersionOutput(data string) string { + re := regexp.MustCompile(`^go version (go\S+|devel \S+)`) + m := re.FindStringSubmatch(data) + if len(m) != 2 { + return "" // unrecognized version + } + return m[1] +} diff --git a/cluster-autoscaler/vendor/golang.org/x/tools/internal/packagesinternal/packages.go b/cluster-autoscaler/vendor/golang.org/x/tools/internal/packagesinternal/packages.go new file mode 100644 index 000000000000..d9950b1f0bef --- /dev/null +++ b/cluster-autoscaler/vendor/golang.org/x/tools/internal/packagesinternal/packages.go @@ -0,0 +1,30 @@ +// Copyright 2020 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// Package packagesinternal exposes internal-only fields from go/packages. +package packagesinternal + +import ( + "golang.org/x/tools/internal/gocommand" +) + +var GetForTest = func(p interface{}) string { return "" } +var GetDepsErrors = func(p interface{}) []*PackageError { return nil } + +type PackageError struct { + ImportStack []string // shortest path from package named on command line to this one + Pos string // position of error (if present, file:line:col) + Err string // the error itself +} + +var GetGoCmdRunner = func(config interface{}) *gocommand.Runner { return nil } + +var SetGoCmdRunner = func(config interface{}, runner *gocommand.Runner) {} + +var TypecheckCgo int +var DepsErrors int // must be set as a LoadMode to call GetDepsErrors +var ForTest int // must be set as a LoadMode to call GetForTest + +var SetModFlag = func(config interface{}, value string) {} +var SetModFile = func(config interface{}, value string) {} diff --git a/cluster-autoscaler/vendor/golang.org/x/tools/internal/pkgbits/codes.go b/cluster-autoscaler/vendor/golang.org/x/tools/internal/pkgbits/codes.go new file mode 100644 index 000000000000..f0cabde96eba --- /dev/null +++ b/cluster-autoscaler/vendor/golang.org/x/tools/internal/pkgbits/codes.go @@ -0,0 
+1,77 @@ +// Copyright 2021 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package pkgbits + +// A Code is an enum value that can be encoded into bitstreams. +// +// Code types are preferable for enum types, because they allow +// Decoder to detect desyncs. +type Code interface { + // Marker returns the SyncMarker for the Code's dynamic type. + Marker() SyncMarker + + // Value returns the Code's ordinal value. + Value() int +} + +// A CodeVal distinguishes among go/constant.Value encodings. +type CodeVal int + +func (c CodeVal) Marker() SyncMarker { return SyncVal } +func (c CodeVal) Value() int { return int(c) } + +// Note: These values are public and cannot be changed without +// updating the go/types importers. + +const ( + ValBool CodeVal = iota + ValString + ValInt64 + ValBigInt + ValBigRat + ValBigFloat +) + +// A CodeType distinguishes among go/types.Type encodings. +type CodeType int + +func (c CodeType) Marker() SyncMarker { return SyncType } +func (c CodeType) Value() int { return int(c) } + +// Note: These values are public and cannot be changed without +// updating the go/types importers. + +const ( + TypeBasic CodeType = iota + TypeNamed + TypePointer + TypeSlice + TypeArray + TypeChan + TypeMap + TypeSignature + TypeStruct + TypeInterface + TypeUnion + TypeTypeParam +) + +// A CodeObj distinguishes among go/types.Object encodings. +type CodeObj int + +func (c CodeObj) Marker() SyncMarker { return SyncCodeObj } +func (c CodeObj) Value() int { return int(c) } + +// Note: These values are public and cannot be changed without +// updating the go/types importers. 
+ +const ( + ObjAlias CodeObj = iota + ObjConst + ObjType + ObjFunc + ObjVar + ObjStub +) diff --git a/cluster-autoscaler/vendor/golang.org/x/tools/internal/pkgbits/decoder.go b/cluster-autoscaler/vendor/golang.org/x/tools/internal/pkgbits/decoder.go new file mode 100644 index 000000000000..b92e8e6eb329 --- /dev/null +++ b/cluster-autoscaler/vendor/golang.org/x/tools/internal/pkgbits/decoder.go @@ -0,0 +1,517 @@ +// Copyright 2021 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package pkgbits + +import ( + "encoding/binary" + "errors" + "fmt" + "go/constant" + "go/token" + "io" + "math/big" + "os" + "runtime" + "strings" +) + +// A PkgDecoder provides methods for decoding a package's Unified IR +// export data. +type PkgDecoder struct { + // version is the file format version. + version uint32 + + // sync indicates whether the file uses sync markers. + sync bool + + // pkgPath is the package path for the package to be decoded. + // + // TODO(mdempsky): Remove; unneeded since CL 391014. + pkgPath string + + // elemData is the full data payload of the encoded package. + // Elements are densely and contiguously packed together. + // + // The last 8 bytes of elemData are the package fingerprint. + elemData string + + // elemEnds stores the byte-offset end positions of element + // bitstreams within elemData. + // + // For example, element I's bitstream data starts at elemEnds[I-1] + // (or 0, if I==0) and ends at elemEnds[I]. + // + // Note: elemEnds is indexed by absolute indices, not + // section-relative indices. + elemEnds []uint32 + + // elemEndsEnds stores the index-offset end positions of relocation + // sections within elemEnds. + // + // For example, section K's end positions start at elemEndsEnds[K-1] + // (or 0, if K==0) and end at elemEndsEnds[K]. 
+ elemEndsEnds [numRelocs]uint32 + + scratchRelocEnt []RelocEnt +} + +// PkgPath returns the package path for the package +// +// TODO(mdempsky): Remove; unneeded since CL 391014. +func (pr *PkgDecoder) PkgPath() string { return pr.pkgPath } + +// SyncMarkers reports whether pr uses sync markers. +func (pr *PkgDecoder) SyncMarkers() bool { return pr.sync } + +// NewPkgDecoder returns a PkgDecoder initialized to read the Unified +// IR export data from input. pkgPath is the package path for the +// compilation unit that produced the export data. +// +// TODO(mdempsky): Remove pkgPath parameter; unneeded since CL 391014. +func NewPkgDecoder(pkgPath, input string) PkgDecoder { + pr := PkgDecoder{ + pkgPath: pkgPath, + } + + // TODO(mdempsky): Implement direct indexing of input string to + // avoid copying the position information. + + r := strings.NewReader(input) + + assert(binary.Read(r, binary.LittleEndian, &pr.version) == nil) + + switch pr.version { + default: + panic(fmt.Errorf("unsupported version: %v", pr.version)) + case 0: + // no flags + case 1: + var flags uint32 + assert(binary.Read(r, binary.LittleEndian, &flags) == nil) + pr.sync = flags&flagSyncMarkers != 0 + } + + assert(binary.Read(r, binary.LittleEndian, pr.elemEndsEnds[:]) == nil) + + pr.elemEnds = make([]uint32, pr.elemEndsEnds[len(pr.elemEndsEnds)-1]) + assert(binary.Read(r, binary.LittleEndian, pr.elemEnds[:]) == nil) + + pos, err := r.Seek(0, io.SeekCurrent) + assert(err == nil) + + pr.elemData = input[pos:] + assert(len(pr.elemData)-8 == int(pr.elemEnds[len(pr.elemEnds)-1])) + + return pr +} + +// NumElems returns the number of elements in section k. +func (pr *PkgDecoder) NumElems(k RelocKind) int { + count := int(pr.elemEndsEnds[k]) + if k > 0 { + count -= int(pr.elemEndsEnds[k-1]) + } + return count +} + +// TotalElems returns the total number of elements across all sections. 
+func (pr *PkgDecoder) TotalElems() int { + return len(pr.elemEnds) +} + +// Fingerprint returns the package fingerprint. +func (pr *PkgDecoder) Fingerprint() [8]byte { + var fp [8]byte + copy(fp[:], pr.elemData[len(pr.elemData)-8:]) + return fp +} + +// AbsIdx returns the absolute index for the given (section, index) +// pair. +func (pr *PkgDecoder) AbsIdx(k RelocKind, idx Index) int { + absIdx := int(idx) + if k > 0 { + absIdx += int(pr.elemEndsEnds[k-1]) + } + if absIdx >= int(pr.elemEndsEnds[k]) { + errorf("%v:%v is out of bounds; %v", k, idx, pr.elemEndsEnds) + } + return absIdx +} + +// DataIdx returns the raw element bitstream for the given (section, +// index) pair. +func (pr *PkgDecoder) DataIdx(k RelocKind, idx Index) string { + absIdx := pr.AbsIdx(k, idx) + + var start uint32 + if absIdx > 0 { + start = pr.elemEnds[absIdx-1] + } + end := pr.elemEnds[absIdx] + + return pr.elemData[start:end] +} + +// StringIdx returns the string value for the given string index. +func (pr *PkgDecoder) StringIdx(idx Index) string { + return pr.DataIdx(RelocString, idx) +} + +// NewDecoder returns a Decoder for the given (section, index) pair, +// and decodes the given SyncMarker from the element bitstream. +func (pr *PkgDecoder) NewDecoder(k RelocKind, idx Index, marker SyncMarker) Decoder { + r := pr.NewDecoderRaw(k, idx) + r.Sync(marker) + return r +} + +// TempDecoder returns a Decoder for the given (section, index) pair, +// and decodes the given SyncMarker from the element bitstream. +// If possible the Decoder should be RetireDecoder'd when it is no longer +// needed, this will avoid heap allocations. +func (pr *PkgDecoder) TempDecoder(k RelocKind, idx Index, marker SyncMarker) Decoder { + r := pr.TempDecoderRaw(k, idx) + r.Sync(marker) + return r +} + +func (pr *PkgDecoder) RetireDecoder(d *Decoder) { + pr.scratchRelocEnt = d.Relocs + d.Relocs = nil +} + +// NewDecoderRaw returns a Decoder for the given (section, index) pair. 
+// +// Most callers should use NewDecoder instead. +func (pr *PkgDecoder) NewDecoderRaw(k RelocKind, idx Index) Decoder { + r := Decoder{ + common: pr, + k: k, + Idx: idx, + } + + // TODO(mdempsky) r.data.Reset(...) after #44505 is resolved. + r.Data = *strings.NewReader(pr.DataIdx(k, idx)) + + r.Sync(SyncRelocs) + r.Relocs = make([]RelocEnt, r.Len()) + for i := range r.Relocs { + r.Sync(SyncReloc) + r.Relocs[i] = RelocEnt{RelocKind(r.Len()), Index(r.Len())} + } + + return r +} + +func (pr *PkgDecoder) TempDecoderRaw(k RelocKind, idx Index) Decoder { + r := Decoder{ + common: pr, + k: k, + Idx: idx, + } + + r.Data.Reset(pr.DataIdx(k, idx)) + r.Sync(SyncRelocs) + l := r.Len() + if cap(pr.scratchRelocEnt) >= l { + r.Relocs = pr.scratchRelocEnt[:l] + pr.scratchRelocEnt = nil + } else { + r.Relocs = make([]RelocEnt, l) + } + for i := range r.Relocs { + r.Sync(SyncReloc) + r.Relocs[i] = RelocEnt{RelocKind(r.Len()), Index(r.Len())} + } + + return r +} + +// A Decoder provides methods for decoding an individual element's +// bitstream data. +type Decoder struct { + common *PkgDecoder + + Relocs []RelocEnt + Data strings.Reader + + k RelocKind + Idx Index +} + +func (r *Decoder) checkErr(err error) { + if err != nil { + errorf("unexpected decoding error: %w", err) + } +} + +func (r *Decoder) rawUvarint() uint64 { + x, err := readUvarint(&r.Data) + r.checkErr(err) + return x +} + +// readUvarint is a type-specialized copy of encoding/binary.ReadUvarint. +// This avoids the interface conversion and thus has better escape properties, +// which flows up the stack. 
+func readUvarint(r *strings.Reader) (uint64, error) {
+	var x uint64
+	var s uint
+	for i := 0; i < binary.MaxVarintLen64; i++ {
+		b, err := r.ReadByte()
+		if err != nil {
+			if i > 0 && err == io.EOF {
+				err = io.ErrUnexpectedEOF
+			}
+			return x, err
+		}
+		if b < 0x80 {
+			if i == binary.MaxVarintLen64-1 && b > 1 {
+				return x, overflow
+			}
+			return x | uint64(b)<<s, nil
+		}
+		x |= uint64(b&0x7f) << s
+		s += 7
+	}
+	return x, overflow
+}
+
+var overflow = errors.New("pkgbits: readUvarint overflows a 64-bit integer")
+
+func (r *Decoder) rawVarint() int64 {
+	ux := r.rawUvarint()
+
+	// Zig-zag decode.
+	x := int64(ux >> 1)
+	if ux&1 != 0 {
+		x = ^x
+	}
+	return x
+}
+
+func (r *Decoder) rawReloc(k RelocKind, idx int) Index {
+	e := r.Relocs[idx]
+	assert(e.Kind == k)
+	return e.Idx
+}
+
+// Sync decodes a sync marker from the element bitstream and asserts
+// that it matches the expected marker.
+//
+// If r.common.sync is false, then Sync is a no-op.
+func (r *Decoder) Sync(mWant SyncMarker) {
+	if !r.common.sync {
+		return
+	}
+
+	pos, _ := r.Data.Seek(0, io.SeekCurrent)
+	mHave := SyncMarker(r.rawUvarint())
+	writerPCs := make([]int, r.rawUvarint())
+	for i := range writerPCs {
+		writerPCs[i] = int(r.rawUvarint())
+	}
+
+	if mHave == mWant {
+		return
+	}
+
+	// There's some tension here between printing:
+	//
+	// (1) full file paths that tools can recognize (e.g., so emacs
+	// hyperlinks the "file:line" text for easy navigation), or
+	//
+	// (2) short file paths that are easier for humans to read (e.g., by
+	// omitting redundant or irrelevant details, so it's easier to
+	// focus on the useful bits that remain).
+	//
+	// The current formatting favors the former, as it seems more
+	// helpful in practice. But perhaps the formatting could be improved
+	// to better address both concerns. For example, use relative file
+	// paths if they would be shorter, or rewrite file paths to contain
+	// "$GOROOT" (like objabi.AbsFile does) if tools can be taught how
+	// to reliably expand that again.
+ + fmt.Printf("export data desync: package %q, section %v, index %v, offset %v\n", r.common.pkgPath, r.k, r.Idx, pos) + + fmt.Printf("\nfound %v, written at:\n", mHave) + if len(writerPCs) == 0 { + fmt.Printf("\t[stack trace unavailable; recompile package %q with -d=syncframes]\n", r.common.pkgPath) + } + for _, pc := range writerPCs { + fmt.Printf("\t%s\n", r.common.StringIdx(r.rawReloc(RelocString, pc))) + } + + fmt.Printf("\nexpected %v, reading at:\n", mWant) + var readerPCs [32]uintptr // TODO(mdempsky): Dynamically size? + n := runtime.Callers(2, readerPCs[:]) + for _, pc := range fmtFrames(readerPCs[:n]...) { + fmt.Printf("\t%s\n", pc) + } + + // We already printed a stack trace for the reader, so now we can + // simply exit. Printing a second one with panic or base.Fatalf + // would just be noise. + os.Exit(1) +} + +// Bool decodes and returns a bool value from the element bitstream. +func (r *Decoder) Bool() bool { + r.Sync(SyncBool) + x, err := r.Data.ReadByte() + r.checkErr(err) + assert(x < 2) + return x != 0 +} + +// Int64 decodes and returns an int64 value from the element bitstream. +func (r *Decoder) Int64() int64 { + r.Sync(SyncInt64) + return r.rawVarint() +} + +// Uint64 decodes and returns a uint64 value from the element bitstream. +func (r *Decoder) Uint64() uint64 { + r.Sync(SyncUint64) + return r.rawUvarint() +} + +// Len decodes and returns a non-negative int value from the element bitstream. +func (r *Decoder) Len() int { x := r.Uint64(); v := int(x); assert(uint64(v) == x); return v } + +// Int decodes and returns an int value from the element bitstream. +func (r *Decoder) Int() int { x := r.Int64(); v := int(x); assert(int64(v) == x); return v } + +// Uint decodes and returns a uint value from the element bitstream. +func (r *Decoder) Uint() uint { x := r.Uint64(); v := uint(x); assert(uint64(v) == x); return v } + +// Code decodes a Code value from the element bitstream and returns +// its ordinal value. 
It's the caller's responsibility to convert the +// result to an appropriate Code type. +// +// TODO(mdempsky): Ideally this method would have signature "Code[T +// Code] T" instead, but we don't allow generic methods and the +// compiler can't depend on generics yet anyway. +func (r *Decoder) Code(mark SyncMarker) int { + r.Sync(mark) + return r.Len() +} + +// Reloc decodes a relocation of expected section k from the element +// bitstream and returns an index to the referenced element. +func (r *Decoder) Reloc(k RelocKind) Index { + r.Sync(SyncUseReloc) + return r.rawReloc(k, r.Len()) +} + +// String decodes and returns a string value from the element +// bitstream. +func (r *Decoder) String() string { + r.Sync(SyncString) + return r.common.StringIdx(r.Reloc(RelocString)) +} + +// Strings decodes and returns a variable-length slice of strings from +// the element bitstream. +func (r *Decoder) Strings() []string { + res := make([]string, r.Len()) + for i := range res { + res[i] = r.String() + } + return res +} + +// Value decodes and returns a constant.Value from the element +// bitstream. 
+func (r *Decoder) Value() constant.Value { + r.Sync(SyncValue) + isComplex := r.Bool() + val := r.scalar() + if isComplex { + val = constant.BinaryOp(val, token.ADD, constant.MakeImag(r.scalar())) + } + return val +} + +func (r *Decoder) scalar() constant.Value { + switch tag := CodeVal(r.Code(SyncVal)); tag { + default: + panic(fmt.Errorf("unexpected scalar tag: %v", tag)) + + case ValBool: + return constant.MakeBool(r.Bool()) + case ValString: + return constant.MakeString(r.String()) + case ValInt64: + return constant.MakeInt64(r.Int64()) + case ValBigInt: + return constant.Make(r.bigInt()) + case ValBigRat: + num := r.bigInt() + denom := r.bigInt() + return constant.Make(new(big.Rat).SetFrac(num, denom)) + case ValBigFloat: + return constant.Make(r.bigFloat()) + } +} + +func (r *Decoder) bigInt() *big.Int { + v := new(big.Int).SetBytes([]byte(r.String())) + if r.Bool() { + v.Neg(v) + } + return v +} + +func (r *Decoder) bigFloat() *big.Float { + v := new(big.Float).SetPrec(512) + assert(v.UnmarshalText([]byte(r.String())) == nil) + return v +} + +// @@@ Helpers + +// TODO(mdempsky): These should probably be removed. I think they're a +// smell that the export data format is not yet quite right. + +// PeekPkgPath returns the package path for the specified package +// index. +func (pr *PkgDecoder) PeekPkgPath(idx Index) string { + var path string + { + r := pr.TempDecoder(RelocPkg, idx, SyncPkgDef) + path = r.String() + pr.RetireDecoder(&r) + } + if path == "" { + path = pr.pkgPath + } + return path +} + +// PeekObj returns the package path, object name, and CodeObj for the +// specified object index. 
+func (pr *PkgDecoder) PeekObj(idx Index) (string, string, CodeObj) { + var ridx Index + var name string + var rcode int + { + r := pr.TempDecoder(RelocName, idx, SyncObject1) + r.Sync(SyncSym) + r.Sync(SyncPkg) + ridx = r.Reloc(RelocPkg) + name = r.String() + rcode = r.Code(SyncCodeObj) + pr.RetireDecoder(&r) + } + + path := pr.PeekPkgPath(ridx) + assert(name != "") + + tag := CodeObj(rcode) + + return path, name, tag +} diff --git a/cluster-autoscaler/vendor/golang.org/x/tools/internal/pkgbits/doc.go b/cluster-autoscaler/vendor/golang.org/x/tools/internal/pkgbits/doc.go new file mode 100644 index 000000000000..c8a2796b5e4c --- /dev/null +++ b/cluster-autoscaler/vendor/golang.org/x/tools/internal/pkgbits/doc.go @@ -0,0 +1,32 @@ +// Copyright 2022 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// Package pkgbits implements low-level coding abstractions for +// Unified IR's export data format. +// +// At a low-level, a package is a collection of bitstream elements. +// Each element has a "kind" and a dense, non-negative index. +// Elements can be randomly accessed given their kind and index. +// +// Individual elements are sequences of variable-length values (e.g., +// integers, booleans, strings, go/constant values, cross-references +// to other elements). Package pkgbits provides APIs for encoding and +// decoding these low-level values, but the details of mapping +// higher-level Go constructs into elements is left to higher-level +// abstractions. +// +// Elements may cross-reference each other with "relocations." For +// example, an element representing a pointer type has a relocation +// referring to the element type. +// +// Go constructs may be composed as a constellation of multiple +// elements. 
For example, a declared function may have one element to +// describe the object (e.g., its name, type, position), and a +// separate element to describe its function body. This allows readers +// some flexibility in efficiently seeking or re-reading data (e.g., +// inlining requires re-reading the function body for each inlined +// call, without needing to re-read the object-level details). +// +// This is a copy of internal/pkgbits in the Go implementation. +package pkgbits diff --git a/cluster-autoscaler/vendor/golang.org/x/tools/internal/pkgbits/encoder.go b/cluster-autoscaler/vendor/golang.org/x/tools/internal/pkgbits/encoder.go new file mode 100644 index 000000000000..6482617a4fcc --- /dev/null +++ b/cluster-autoscaler/vendor/golang.org/x/tools/internal/pkgbits/encoder.go @@ -0,0 +1,383 @@ +// Copyright 2021 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package pkgbits + +import ( + "bytes" + "crypto/md5" + "encoding/binary" + "go/constant" + "io" + "math/big" + "runtime" +) + +// currentVersion is the current version number. +// +// - v0: initial prototype +// +// - v1: adds the flags uint32 word +const currentVersion uint32 = 1 + +// A PkgEncoder provides methods for encoding a package's Unified IR +// export data. +type PkgEncoder struct { + // elems holds the bitstream for previously encoded elements. + elems [numRelocs][]string + + // stringsIdx maps previously encoded strings to their index within + // the RelocString section, to allow deduplication. That is, + // elems[RelocString][stringsIdx[s]] == s (if present). + stringsIdx map[string]Index + + // syncFrames is the number of frames to write at each sync + // marker. A negative value means sync markers are omitted. + syncFrames int +} + +// SyncMarkers reports whether pw uses sync markers. 
+func (pw *PkgEncoder) SyncMarkers() bool { return pw.syncFrames >= 0 } + +// NewPkgEncoder returns an initialized PkgEncoder. +// +// syncFrames is the number of caller frames that should be serialized +// at Sync points. Serializing additional frames results in larger +// export data files, but can help diagnosing desync errors in +// higher-level Unified IR reader/writer code. If syncFrames is +// negative, then sync markers are omitted entirely. +func NewPkgEncoder(syncFrames int) PkgEncoder { + return PkgEncoder{ + stringsIdx: make(map[string]Index), + syncFrames: syncFrames, + } +} + +// DumpTo writes the package's encoded data to out0 and returns the +// package fingerprint. +func (pw *PkgEncoder) DumpTo(out0 io.Writer) (fingerprint [8]byte) { + h := md5.New() + out := io.MultiWriter(out0, h) + + writeUint32 := func(x uint32) { + assert(binary.Write(out, binary.LittleEndian, x) == nil) + } + + writeUint32(currentVersion) + + var flags uint32 + if pw.SyncMarkers() { + flags |= flagSyncMarkers + } + writeUint32(flags) + + // Write elemEndsEnds. + var sum uint32 + for _, elems := range &pw.elems { + sum += uint32(len(elems)) + writeUint32(sum) + } + + // Write elemEnds. + sum = 0 + for _, elems := range &pw.elems { + for _, elem := range elems { + sum += uint32(len(elem)) + writeUint32(sum) + } + } + + // Write elemData. + for _, elems := range &pw.elems { + for _, elem := range elems { + _, err := io.WriteString(out, elem) + assert(err == nil) + } + } + + // Write fingerprint. + copy(fingerprint[:], h.Sum(nil)) + _, err := out0.Write(fingerprint[:]) + assert(err == nil) + + return +} + +// StringIdx adds a string value to the strings section, if not +// already present, and returns its index. 
+func (pw *PkgEncoder) StringIdx(s string) Index { + if idx, ok := pw.stringsIdx[s]; ok { + assert(pw.elems[RelocString][idx] == s) + return idx + } + + idx := Index(len(pw.elems[RelocString])) + pw.elems[RelocString] = append(pw.elems[RelocString], s) + pw.stringsIdx[s] = idx + return idx +} + +// NewEncoder returns an Encoder for a new element within the given +// section, and encodes the given SyncMarker as the start of the +// element bitstream. +func (pw *PkgEncoder) NewEncoder(k RelocKind, marker SyncMarker) Encoder { + e := pw.NewEncoderRaw(k) + e.Sync(marker) + return e +} + +// NewEncoderRaw returns an Encoder for a new element within the given +// section. +// +// Most callers should use NewEncoder instead. +func (pw *PkgEncoder) NewEncoderRaw(k RelocKind) Encoder { + idx := Index(len(pw.elems[k])) + pw.elems[k] = append(pw.elems[k], "") // placeholder + + return Encoder{ + p: pw, + k: k, + Idx: idx, + } +} + +// An Encoder provides methods for encoding an individual element's +// bitstream data. +type Encoder struct { + p *PkgEncoder + + Relocs []RelocEnt + RelocMap map[RelocEnt]uint32 + Data bytes.Buffer // accumulated element bitstream data + + encodingRelocHeader bool + + k RelocKind + Idx Index // index within relocation section +} + +// Flush finalizes the element's bitstream and returns its Index. +func (w *Encoder) Flush() Index { + var sb bytes.Buffer // TODO(mdempsky): strings.Builder after #44505 is resolved + + // Backup the data so we write the relocations at the front. + var tmp bytes.Buffer + io.Copy(&tmp, &w.Data) + + // TODO(mdempsky): Consider writing these out separately so they're + // easier to strip, along with function bodies, so that we can prune + // down to just the data that's relevant to go/types. 
+ if w.encodingRelocHeader { + panic("encodingRelocHeader already true; recursive flush?") + } + w.encodingRelocHeader = true + w.Sync(SyncRelocs) + w.Len(len(w.Relocs)) + for _, rEnt := range w.Relocs { + w.Sync(SyncReloc) + w.Len(int(rEnt.Kind)) + w.Len(int(rEnt.Idx)) + } + + io.Copy(&sb, &w.Data) + io.Copy(&sb, &tmp) + w.p.elems[w.k][w.Idx] = sb.String() + + return w.Idx +} + +func (w *Encoder) checkErr(err error) { + if err != nil { + errorf("unexpected encoding error: %v", err) + } +} + +func (w *Encoder) rawUvarint(x uint64) { + var buf [binary.MaxVarintLen64]byte + n := binary.PutUvarint(buf[:], x) + _, err := w.Data.Write(buf[:n]) + w.checkErr(err) +} + +func (w *Encoder) rawVarint(x int64) { + // Zig-zag encode. + ux := uint64(x) << 1 + if x < 0 { + ux = ^ux + } + + w.rawUvarint(ux) +} + +func (w *Encoder) rawReloc(r RelocKind, idx Index) int { + e := RelocEnt{r, idx} + if w.RelocMap != nil { + if i, ok := w.RelocMap[e]; ok { + return int(i) + } + } else { + w.RelocMap = make(map[RelocEnt]uint32) + } + + i := len(w.Relocs) + w.RelocMap[e] = uint32(i) + w.Relocs = append(w.Relocs, e) + return i +} + +func (w *Encoder) Sync(m SyncMarker) { + if !w.p.SyncMarkers() { + return + } + + // Writing out stack frame string references requires working + // relocations, but writing out the relocations themselves involves + // sync markers. To prevent infinite recursion, we simply trim the + // stack frame for sync markers within the relocation header. + var frames []string + if !w.encodingRelocHeader && w.p.syncFrames > 0 { + pcs := make([]uintptr, w.p.syncFrames) + n := runtime.Callers(2, pcs) + frames = fmtFrames(pcs[:n]...) + } + + // TODO(mdempsky): Save space by writing out stack frames as a + // linked list so we can share common stack frames. 
+ w.rawUvarint(uint64(m)) + w.rawUvarint(uint64(len(frames))) + for _, frame := range frames { + w.rawUvarint(uint64(w.rawReloc(RelocString, w.p.StringIdx(frame)))) + } +} + +// Bool encodes and writes a bool value into the element bitstream, +// and then returns the bool value. +// +// For simple, 2-alternative encodings, the idiomatic way to call Bool +// is something like: +// +// if w.Bool(x != 0) { +// // alternative #1 +// } else { +// // alternative #2 +// } +// +// For multi-alternative encodings, use Code instead. +func (w *Encoder) Bool(b bool) bool { + w.Sync(SyncBool) + var x byte + if b { + x = 1 + } + err := w.Data.WriteByte(x) + w.checkErr(err) + return b +} + +// Int64 encodes and writes an int64 value into the element bitstream. +func (w *Encoder) Int64(x int64) { + w.Sync(SyncInt64) + w.rawVarint(x) +} + +// Uint64 encodes and writes a uint64 value into the element bitstream. +func (w *Encoder) Uint64(x uint64) { + w.Sync(SyncUint64) + w.rawUvarint(x) +} + +// Len encodes and writes a non-negative int value into the element bitstream. +func (w *Encoder) Len(x int) { assert(x >= 0); w.Uint64(uint64(x)) } + +// Int encodes and writes an int value into the element bitstream. +func (w *Encoder) Int(x int) { w.Int64(int64(x)) } + +// Uint encodes and writes a uint value into the element bitstream. +func (w *Encoder) Uint(x uint) { w.Uint64(uint64(x)) } + +// Reloc encodes and writes a relocation for the given (section, +// index) pair into the element bitstream. +// +// Note: Only the index is formally written into the element +// bitstream, so bitstream decoders must know from context which +// section an encoded relocation refers to. +func (w *Encoder) Reloc(r RelocKind, idx Index) { + w.Sync(SyncUseReloc) + w.Len(w.rawReloc(r, idx)) +} + +// Code encodes and writes a Code value into the element bitstream. 
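The zig-zag transform in `rawVarint` above interleaves negative and non-negative values (0, -1, 1, -2, 2 → 0, 1, 2, 3, 4) so that values of small magnitude, of either sign, stay small when uvarint-encoded. A standalone sketch of the mapping and its inverse (the decoder side is an assumption based on the standard zig-zag scheme, not code from this patch):

```go
package main

import "fmt"

// zigzag applies the same transform as Encoder.rawVarint:
// shift left one bit and complement when negative.
func zigzag(x int64) uint64 {
	ux := uint64(x) << 1
	if x < 0 {
		ux = ^ux
	}
	return ux
}

// unzigzag inverts the mapping, recovering the signed value.
func unzigzag(ux uint64) int64 {
	x := int64(ux >> 1)
	if ux&1 != 0 {
		x = ^x
	}
	return x
}

func main() {
	for _, v := range []int64{0, -1, 1, -2, 2, -64} {
		fmt.Printf("%d -> %d\n", v, zigzag(v))
	}
}
```

This is also the convention `encoding/binary.PutVarint` uses, which is why the decoder on the reader side can use the standard library's varint routines.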
+func (w *Encoder) Code(c Code) { + w.Sync(c.Marker()) + w.Len(c.Value()) +} + +// String encodes and writes a string value into the element +// bitstream. +// +// Internally, strings are deduplicated by adding them to the strings +// section (if not already present), and then writing a relocation +// into the element bitstream. +func (w *Encoder) String(s string) { + w.Sync(SyncString) + w.Reloc(RelocString, w.p.StringIdx(s)) +} + +// Strings encodes and writes a variable-length slice of strings into +// the element bitstream. +func (w *Encoder) Strings(ss []string) { + w.Len(len(ss)) + for _, s := range ss { + w.String(s) + } +} + +// Value encodes and writes a constant.Value into the element +// bitstream. +func (w *Encoder) Value(val constant.Value) { + w.Sync(SyncValue) + if w.Bool(val.Kind() == constant.Complex) { + w.scalar(constant.Real(val)) + w.scalar(constant.Imag(val)) + } else { + w.scalar(val) + } +} + +func (w *Encoder) scalar(val constant.Value) { + switch v := constant.Val(val).(type) { + default: + errorf("unhandled %v (%v)", val, val.Kind()) + case bool: + w.Code(ValBool) + w.Bool(v) + case string: + w.Code(ValString) + w.String(v) + case int64: + w.Code(ValInt64) + w.Int64(v) + case *big.Int: + w.Code(ValBigInt) + w.bigInt(v) + case *big.Rat: + w.Code(ValBigRat) + w.bigInt(v.Num()) + w.bigInt(v.Denom()) + case *big.Float: + w.Code(ValBigFloat) + w.bigFloat(v) + } +} + +func (w *Encoder) bigInt(v *big.Int) { + b := v.Bytes() + w.String(string(b)) // TODO: More efficient encoding. + w.Bool(v.Sign() < 0) +} + +func (w *Encoder) bigFloat(v *big.Float) { + b := v.Append(nil, 'p', -1) + w.String(string(b)) // TODO: More efficient encoding. 
+} diff --git a/cluster-autoscaler/vendor/golang.org/x/tools/internal/pkgbits/flags.go b/cluster-autoscaler/vendor/golang.org/x/tools/internal/pkgbits/flags.go new file mode 100644 index 000000000000..654222745fac --- /dev/null +++ b/cluster-autoscaler/vendor/golang.org/x/tools/internal/pkgbits/flags.go @@ -0,0 +1,9 @@ +// Copyright 2022 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package pkgbits + +const ( + flagSyncMarkers = 1 << iota // file format contains sync markers +) diff --git a/cluster-autoscaler/vendor/golang.org/x/tools/internal/pkgbits/frames_go1.go b/cluster-autoscaler/vendor/golang.org/x/tools/internal/pkgbits/frames_go1.go new file mode 100644 index 000000000000..5294f6a63edd --- /dev/null +++ b/cluster-autoscaler/vendor/golang.org/x/tools/internal/pkgbits/frames_go1.go @@ -0,0 +1,21 @@ +// Copyright 2021 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +//go:build !go1.7 +// +build !go1.7 + +// TODO(mdempsky): Remove after #44505 is resolved + +package pkgbits + +import "runtime" + +func walkFrames(pcs []uintptr, visit frameVisitor) { + for _, pc := range pcs { + fn := runtime.FuncForPC(pc) + file, line := fn.FileLine(pc) + + visit(file, line, fn.Name(), pc-fn.Entry()) + } +} diff --git a/cluster-autoscaler/vendor/golang.org/x/tools/internal/pkgbits/frames_go17.go b/cluster-autoscaler/vendor/golang.org/x/tools/internal/pkgbits/frames_go17.go new file mode 100644 index 000000000000..2324ae7adfe2 --- /dev/null +++ b/cluster-autoscaler/vendor/golang.org/x/tools/internal/pkgbits/frames_go17.go @@ -0,0 +1,28 @@ +// Copyright 2021 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. 
+ +//go:build go1.7 +// +build go1.7 + +package pkgbits + +import "runtime" + +// walkFrames calls visit for each call frame represented by pcs. +// +// pcs should be a slice of PCs, as returned by runtime.Callers. +func walkFrames(pcs []uintptr, visit frameVisitor) { + if len(pcs) == 0 { + return + } + + frames := runtime.CallersFrames(pcs) + for { + frame, more := frames.Next() + visit(frame.File, frame.Line, frame.Function, frame.PC-frame.Entry) + if !more { + return + } + } +} diff --git a/cluster-autoscaler/vendor/golang.org/x/tools/internal/pkgbits/reloc.go b/cluster-autoscaler/vendor/golang.org/x/tools/internal/pkgbits/reloc.go new file mode 100644 index 000000000000..fcdfb97ca992 --- /dev/null +++ b/cluster-autoscaler/vendor/golang.org/x/tools/internal/pkgbits/reloc.go @@ -0,0 +1,42 @@ +// Copyright 2021 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package pkgbits + +// A RelocKind indicates a particular section within a unified IR export. +type RelocKind int32 + +// An Index represents a bitstream element index within a particular +// section. +type Index int32 + +// A relocEnt (relocation entry) is an entry in an element's local +// reference table. +// +// TODO(mdempsky): Rename this too. +type RelocEnt struct { + Kind RelocKind + Idx Index +} + +// Reserved indices within the meta relocation section. 
+const ( + PublicRootIdx Index = 0 + PrivateRootIdx Index = 1 +) + +const ( + RelocString RelocKind = iota + RelocMeta + RelocPosBase + RelocPkg + RelocName + RelocType + RelocObj + RelocObjExt + RelocObjDict + RelocBody + + numRelocs = iota +) diff --git a/cluster-autoscaler/vendor/golang.org/x/tools/internal/pkgbits/support.go b/cluster-autoscaler/vendor/golang.org/x/tools/internal/pkgbits/support.go new file mode 100644 index 000000000000..ad26d3b28cae --- /dev/null +++ b/cluster-autoscaler/vendor/golang.org/x/tools/internal/pkgbits/support.go @@ -0,0 +1,17 @@ +// Copyright 2022 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package pkgbits + +import "fmt" + +func assert(b bool) { + if !b { + panic("assertion failed") + } +} + +func errorf(format string, args ...interface{}) { + panic(fmt.Errorf(format, args...)) +} diff --git a/cluster-autoscaler/vendor/golang.org/x/tools/internal/pkgbits/sync.go b/cluster-autoscaler/vendor/golang.org/x/tools/internal/pkgbits/sync.go new file mode 100644 index 000000000000..5bd51ef71700 --- /dev/null +++ b/cluster-autoscaler/vendor/golang.org/x/tools/internal/pkgbits/sync.go @@ -0,0 +1,113 @@ +// Copyright 2021 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package pkgbits + +import ( + "fmt" + "strings" +) + +// fmtFrames formats a backtrace for reporting reader/writer desyncs. +func fmtFrames(pcs ...uintptr) []string { + res := make([]string, 0, len(pcs)) + walkFrames(pcs, func(file string, line int, name string, offset uintptr) { + // Trim package from function name. It's just redundant noise. 
+ name = strings.TrimPrefix(name, "cmd/compile/internal/noder.") + + res = append(res, fmt.Sprintf("%s:%v: %s +0x%v", file, line, name, offset)) + }) + return res +} + +type frameVisitor func(file string, line int, name string, offset uintptr) + +// SyncMarker is an enum type that represents markers that may be +// written to export data to ensure the reader and writer stay +// synchronized. +type SyncMarker int + +//go:generate stringer -type=SyncMarker -trimprefix=Sync + +const ( + _ SyncMarker = iota + + // Public markers (known to go/types importers). + + // Low-level coding markers. + SyncEOF + SyncBool + SyncInt64 + SyncUint64 + SyncString + SyncValue + SyncVal + SyncRelocs + SyncReloc + SyncUseReloc + + // Higher-level object and type markers. + SyncPublic + SyncPos + SyncPosBase + SyncObject + SyncObject1 + SyncPkg + SyncPkgDef + SyncMethod + SyncType + SyncTypeIdx + SyncTypeParamNames + SyncSignature + SyncParams + SyncParam + SyncCodeObj + SyncSym + SyncLocalIdent + SyncSelector + + // Private markers (only known to cmd/compile). 
+ SyncPrivate + + SyncFuncExt + SyncVarExt + SyncTypeExt + SyncPragma + + SyncExprList + SyncExprs + SyncExpr + SyncExprType + SyncAssign + SyncOp + SyncFuncLit + SyncCompLit + + SyncDecl + SyncFuncBody + SyncOpenScope + SyncCloseScope + SyncCloseAnotherScope + SyncDeclNames + SyncDeclName + + SyncStmts + SyncBlockStmt + SyncIfStmt + SyncForStmt + SyncSwitchStmt + SyncRangeStmt + SyncCaseClause + SyncCommClause + SyncSelectStmt + SyncDecls + SyncLabeledStmt + SyncUseObjLocal + SyncAddLocal + SyncLinkname + SyncStmt1 + SyncStmtsEnd + SyncLabel + SyncOptLabel +) diff --git a/cluster-autoscaler/vendor/golang.org/x/tools/internal/pkgbits/syncmarker_string.go b/cluster-autoscaler/vendor/golang.org/x/tools/internal/pkgbits/syncmarker_string.go new file mode 100644 index 000000000000..4a5b0ca5f2ff --- /dev/null +++ b/cluster-autoscaler/vendor/golang.org/x/tools/internal/pkgbits/syncmarker_string.go @@ -0,0 +1,89 @@ +// Code generated by "stringer -type=SyncMarker -trimprefix=Sync"; DO NOT EDIT. + +package pkgbits + +import "strconv" + +func _() { + // An "invalid array index" compiler error signifies that the constant values have changed. + // Re-run the stringer command to generate them again. 
+ var x [1]struct{} + _ = x[SyncEOF-1] + _ = x[SyncBool-2] + _ = x[SyncInt64-3] + _ = x[SyncUint64-4] + _ = x[SyncString-5] + _ = x[SyncValue-6] + _ = x[SyncVal-7] + _ = x[SyncRelocs-8] + _ = x[SyncReloc-9] + _ = x[SyncUseReloc-10] + _ = x[SyncPublic-11] + _ = x[SyncPos-12] + _ = x[SyncPosBase-13] + _ = x[SyncObject-14] + _ = x[SyncObject1-15] + _ = x[SyncPkg-16] + _ = x[SyncPkgDef-17] + _ = x[SyncMethod-18] + _ = x[SyncType-19] + _ = x[SyncTypeIdx-20] + _ = x[SyncTypeParamNames-21] + _ = x[SyncSignature-22] + _ = x[SyncParams-23] + _ = x[SyncParam-24] + _ = x[SyncCodeObj-25] + _ = x[SyncSym-26] + _ = x[SyncLocalIdent-27] + _ = x[SyncSelector-28] + _ = x[SyncPrivate-29] + _ = x[SyncFuncExt-30] + _ = x[SyncVarExt-31] + _ = x[SyncTypeExt-32] + _ = x[SyncPragma-33] + _ = x[SyncExprList-34] + _ = x[SyncExprs-35] + _ = x[SyncExpr-36] + _ = x[SyncExprType-37] + _ = x[SyncAssign-38] + _ = x[SyncOp-39] + _ = x[SyncFuncLit-40] + _ = x[SyncCompLit-41] + _ = x[SyncDecl-42] + _ = x[SyncFuncBody-43] + _ = x[SyncOpenScope-44] + _ = x[SyncCloseScope-45] + _ = x[SyncCloseAnotherScope-46] + _ = x[SyncDeclNames-47] + _ = x[SyncDeclName-48] + _ = x[SyncStmts-49] + _ = x[SyncBlockStmt-50] + _ = x[SyncIfStmt-51] + _ = x[SyncForStmt-52] + _ = x[SyncSwitchStmt-53] + _ = x[SyncRangeStmt-54] + _ = x[SyncCaseClause-55] + _ = x[SyncCommClause-56] + _ = x[SyncSelectStmt-57] + _ = x[SyncDecls-58] + _ = x[SyncLabeledStmt-59] + _ = x[SyncUseObjLocal-60] + _ = x[SyncAddLocal-61] + _ = x[SyncLinkname-62] + _ = x[SyncStmt1-63] + _ = x[SyncStmtsEnd-64] + _ = x[SyncLabel-65] + _ = x[SyncOptLabel-66] +} + +const _SyncMarker_name = 
"EOFBoolInt64Uint64StringValueValRelocsRelocUseRelocPublicPosPosBaseObjectObject1PkgPkgDefMethodTypeTypeIdxTypeParamNamesSignatureParamsParamCodeObjSymLocalIdentSelectorPrivateFuncExtVarExtTypeExtPragmaExprListExprsExprExprTypeAssignOpFuncLitCompLitDeclFuncBodyOpenScopeCloseScopeCloseAnotherScopeDeclNamesDeclNameStmtsBlockStmtIfStmtForStmtSwitchStmtRangeStmtCaseClauseCommClauseSelectStmtDeclsLabeledStmtUseObjLocalAddLocalLinknameStmt1StmtsEndLabelOptLabel" + +var _SyncMarker_index = [...]uint16{0, 3, 7, 12, 18, 24, 29, 32, 38, 43, 51, 57, 60, 67, 73, 80, 83, 89, 95, 99, 106, 120, 129, 135, 140, 147, 150, 160, 168, 175, 182, 188, 195, 201, 209, 214, 218, 226, 232, 234, 241, 248, 252, 260, 269, 279, 296, 305, 313, 318, 327, 333, 340, 350, 359, 369, 379, 389, 394, 405, 416, 424, 432, 437, 445, 450, 458} + +func (i SyncMarker) String() string { + i -= 1 + if i < 0 || i >= SyncMarker(len(_SyncMarker_index)-1) { + return "SyncMarker(" + strconv.FormatInt(int64(i+1), 10) + ")" + } + return _SyncMarker_name[_SyncMarker_index[i]:_SyncMarker_index[i+1]] +} diff --git a/cluster-autoscaler/vendor/golang.org/x/tools/internal/tokeninternal/tokeninternal.go b/cluster-autoscaler/vendor/golang.org/x/tools/internal/tokeninternal/tokeninternal.go new file mode 100644 index 000000000000..7e638ec24fcb --- /dev/null +++ b/cluster-autoscaler/vendor/golang.org/x/tools/internal/tokeninternal/tokeninternal.go @@ -0,0 +1,151 @@ +// Copyright 2023 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// package tokeninternal provides access to some internal features of the token +// package. +package tokeninternal + +import ( + "fmt" + "go/token" + "sort" + "sync" + "unsafe" +) + +// GetLines returns the table of line-start offsets from a token.File. +func GetLines(file *token.File) []int { + // token.File has a Lines method on Go 1.21 and later. 
+ if file, ok := (interface{})(file).(interface{ Lines() []int }); ok { + return file.Lines() + } + + // This declaration must match that of token.File. + // This creates a risk of dependency skew. + // For now we check that the size of the two + // declarations is the same, on the (fragile) assumption + // that future changes would add fields. + type tokenFile119 struct { + _ string + _ int + _ int + mu sync.Mutex // we're not complete monsters + lines []int + _ []struct{} + } + type tokenFile118 struct { + _ *token.FileSet // deleted in go1.19 + tokenFile119 + } + + type uP = unsafe.Pointer + switch unsafe.Sizeof(*file) { + case unsafe.Sizeof(tokenFile118{}): + var ptr *tokenFile118 + *(*uP)(uP(&ptr)) = uP(file) + ptr.mu.Lock() + defer ptr.mu.Unlock() + return ptr.lines + + case unsafe.Sizeof(tokenFile119{}): + var ptr *tokenFile119 + *(*uP)(uP(&ptr)) = uP(file) + ptr.mu.Lock() + defer ptr.mu.Unlock() + return ptr.lines + + default: + panic("unexpected token.File size") + } +} + +// AddExistingFiles adds the specified files to the FileSet if they +// are not already present. It panics if any pair of files in the +// resulting FileSet would overlap. +func AddExistingFiles(fset *token.FileSet, files []*token.File) { + // Punch through the FileSet encapsulation. + type tokenFileSet struct { + // This type remained essentially consistent from go1.16 to go1.21. + mutex sync.RWMutex + base int + files []*token.File + _ *token.File // changed to atomic.Pointer[token.File] in go1.19 + } + + // If the size of token.FileSet changes, this will fail to compile. + const delta = int64(unsafe.Sizeof(tokenFileSet{})) - int64(unsafe.Sizeof(token.FileSet{})) + var _ [-delta * delta]int + + type uP = unsafe.Pointer + var ptr *tokenFileSet + *(*uP)(uP(&ptr)) = uP(fset) + ptr.mutex.Lock() + defer ptr.mutex.Unlock() + + // Merge and sort. + newFiles := append(ptr.files, files...) 
+ sort.Slice(newFiles, func(i, j int) bool { + return newFiles[i].Base() < newFiles[j].Base() + }) + + // Reject overlapping files. + // Discard adjacent identical files. + out := newFiles[:0] + for i, file := range newFiles { + if i > 0 { + prev := newFiles[i-1] + if file == prev { + continue + } + if prev.Base()+prev.Size()+1 > file.Base() { + panic(fmt.Sprintf("file %s (%d-%d) overlaps with file %s (%d-%d)", + prev.Name(), prev.Base(), prev.Base()+prev.Size(), + file.Name(), file.Base(), file.Base()+file.Size())) + } + } + out = append(out, file) + } + newFiles = out + + ptr.files = newFiles + + // Advance FileSet.Base(). + if len(newFiles) > 0 { + last := newFiles[len(newFiles)-1] + newBase := last.Base() + last.Size() + 1 + if ptr.base < newBase { + ptr.base = newBase + } + } +} + +// FileSetFor returns a new FileSet containing a sequence of new Files with +// the same base, size, and line as the input files, for use in APIs that +// require a FileSet. +// +// Precondition: the input files must be non-overlapping, and sorted in order +// of their Base. +func FileSetFor(files ...*token.File) *token.FileSet { + fset := token.NewFileSet() + for _, f := range files { + f2 := fset.AddFile(f.Name(), f.Base(), f.Size()) + lines := GetLines(f) + f2.SetLines(lines) + } + return fset +} + +// CloneFileSet creates a new FileSet holding all files in fset. It does not +// create copies of the token.Files in fset: they are added to the resulting +// FileSet unmodified. 
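`FileSetFor` above works because a `token.File` is fully described by its name, base, size, and line-start table, all of which can be replayed into a fresh `FileSet` via `AddFile` and `SetLines`. A small stdlib-only illustration of that round trip, independent of this patch (the helper name is made up):

```go
package main

import (
	"fmt"
	"go/token"
)

// lineColOf replays a file's metadata into a fresh FileSet — the
// same round trip FileSetFor performs — and resolves a byte offset
// to a human-readable position.
func lineColOf(name string, size int, lines []int, offset int) string {
	fset := token.NewFileSet()
	f := fset.AddFile(name, fset.Base(), size)
	f.SetLines(lines) // line i starts at byte offset lines[i]
	return fset.Position(f.Pos(offset)).String()
}

func main() {
	// For source "package p\nvar x int\n", line 2 starts at offset 10.
	fmt.Println(lineColOf("p.go", 20, []int{0, 10}, 10)) // p.go:2:1
}
```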
+func CloneFileSet(fset *token.FileSet) *token.FileSet { + var files []*token.File + fset.Iterate(func(f *token.File) bool { + files = append(files, f) + return true + }) + newFileSet := token.NewFileSet() + AddExistingFiles(newFileSet, files) + return newFileSet +} diff --git a/cluster-autoscaler/vendor/golang.org/x/tools/internal/typesinternal/errorcode.go b/cluster-autoscaler/vendor/golang.org/x/tools/internal/typesinternal/errorcode.go new file mode 100644 index 000000000000..07484073a57d --- /dev/null +++ b/cluster-autoscaler/vendor/golang.org/x/tools/internal/typesinternal/errorcode.go @@ -0,0 +1,1560 @@ +// Copyright 2020 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package typesinternal + +//go:generate stringer -type=ErrorCode + +type ErrorCode int + +// This file defines the error codes that can be produced during type-checking. +// Collectively, these codes provide an identifier that may be used to +// implement special handling for certain types of errors. +// +// Error codes should be fine-grained enough that the exact nature of the error +// can be easily determined, but coarse enough that they are not an +// implementation detail of the type checking algorithm. As a rule-of-thumb, +// errors should be considered equivalent if there is a theoretical refactoring +// of the type checker in which they are emitted in exactly one place. For +// example, the type checker emits different error messages for "too many +// arguments" and "too few arguments", but one can imagine an alternative type +// checker where this check instead just emits a single "wrong number of +// arguments", so these errors should have the same code. +// +// Error code names should be as brief as possible while retaining accuracy and +// distinctiveness. In most cases names should start with an adjective +// describing the nature of the error (e.g. 
"invalid", "unused", "misplaced"), +// and end with a noun identifying the relevant language object. For example, +// "DuplicateDecl" or "InvalidSliceExpr". For brevity, naming follows the +// convention that "bad" implies a problem with syntax, and "invalid" implies a +// problem with types. + +const ( + // InvalidSyntaxTree occurs if an invalid syntax tree is provided + // to the type checker. It should never happen. + InvalidSyntaxTree ErrorCode = -1 +) + +const ( + _ ErrorCode = iota + + // Test is reserved for errors that only apply while in self-test mode. + Test + + /* package names */ + + // BlankPkgName occurs when a package name is the blank identifier "_". + // + // Per the spec: + // "The PackageName must not be the blank identifier." + BlankPkgName + + // MismatchedPkgName occurs when a file's package name doesn't match the + // package name already established by other files. + MismatchedPkgName + + // InvalidPkgUse occurs when a package identifier is used outside of a + // selector expression. + // + // Example: + // import "fmt" + // + // var _ = fmt + InvalidPkgUse + + /* imports */ + + // BadImportPath occurs when an import path is not valid. + BadImportPath + + // BrokenImport occurs when importing a package fails. + // + // Example: + // import "amissingpackage" + BrokenImport + + // ImportCRenamed occurs when the special import "C" is renamed. "C" is a + // pseudo-package, and must not be renamed. + // + // Example: + // import _ "C" + ImportCRenamed + + // UnusedImport occurs when an import is unused. + // + // Example: + // import "fmt" + // + // func main() {} + UnusedImport + + /* initialization */ + + // InvalidInitCycle occurs when an invalid cycle is detected within the + // initialization graph. + // + // Example: + // var x int = f() + // + // func f() int { return x } + InvalidInitCycle + + /* decls */ + + // DuplicateDecl occurs when an identifier is declared multiple times. 
+ //
+ // Example:
+ //  var x = 1
+ //  var x = 2
+ DuplicateDecl
+
+ // InvalidDeclCycle occurs when a declaration cycle is not valid.
+ //
+ // Example:
+ //  import "unsafe"
+ //
+ //  type T struct {
+ //  a [n]int
+ //  }
+ //
+ //  var n = unsafe.Sizeof(T{})
+ InvalidDeclCycle
+
+ // InvalidTypeCycle occurs when a cycle in type definitions results in a
+ // type that is not well-defined.
+ //
+ // Example:
+ //  import "unsafe"
+ //
+ //  type T [unsafe.Sizeof(T{})]int
+ InvalidTypeCycle
+
+ /* decls > const */
+
+ // InvalidConstInit occurs when a const declaration has a non-constant
+ // initializer.
+ //
+ // Example:
+ //  var x int
+ //  const _ = x
+ InvalidConstInit
+
+ // InvalidConstVal occurs when a const value cannot be converted to its
+ // target type.
+ //
+ // TODO(findleyr): this error code and example are not very clear. Consider
+ // removing it.
+ //
+ // Example:
+ //  const _ = 1 << "hello"
+ InvalidConstVal
+
+ // InvalidConstType occurs when the underlying type in a const declaration
+ // is not a valid constant type.
+ //
+ // Example:
+ //  const c *int = 4
+ InvalidConstType
+
+ /* decls > var (+ other variable assignment codes) */
+
+ // UntypedNilUse occurs when the predeclared (untyped) value nil is used to
+ // initialize a variable declared without an explicit type.
+ //
+ // Example:
+ //  var x = nil
+ UntypedNilUse
+
+ // WrongAssignCount occurs when the number of values on the right-hand side
+ // of an assignment or initialization expression does not match the number
+ // of variables on the left-hand side.
+ //
+ // Example:
+ //  var x = 1, 2
+ WrongAssignCount
+
+ // UnassignableOperand occurs when the left-hand side of an assignment is
+ // not assignable.
+ //
+ // Example:
+ //  func f() {
+ //  const c = 1
+ //  c = 2
+ //  }
+ UnassignableOperand
+
+ // NoNewVar occurs when a short variable declaration (':=') does not declare
+ // new variables.
+ // + // Example: + // func f() { + // x := 1 + // x := 2 + // } + NoNewVar + + // MultiValAssignOp occurs when an assignment operation (+=, *=, etc) does + // not have single-valued left-hand or right-hand side. + // + // Per the spec: + // "In assignment operations, both the left- and right-hand expression lists + // must contain exactly one single-valued expression" + // + // Example: + // func f() int { + // x, y := 1, 2 + // x, y += 1 + // return x + y + // } + MultiValAssignOp + + // InvalidIfaceAssign occurs when a value of type T is used as an + // interface, but T does not implement a method of the expected interface. + // + // Example: + // type I interface { + // f() + // } + // + // type T int + // + // var x I = T(1) + InvalidIfaceAssign + + // InvalidChanAssign occurs when a chan assignment is invalid. + // + // Per the spec, a value x is assignable to a channel type T if: + // "x is a bidirectional channel value, T is a channel type, x's type V and + // T have identical element types, and at least one of V or T is not a + // defined type." + // + // Example: + // type T1 chan int + // type T2 chan int + // + // var x T1 + // // Invalid assignment because both types are named + // var _ T2 = x + InvalidChanAssign + + // IncompatibleAssign occurs when the type of the right-hand side expression + // in an assignment cannot be assigned to the type of the variable being + // assigned. + // + // Example: + // var x []int + // var _ int = x + IncompatibleAssign + + // UnaddressableFieldAssign occurs when trying to assign to a struct field + // in a map value. + // + // Example: + // func f() { + // m := make(map[string]struct{i int}) + // m["foo"].i = 42 + // } + UnaddressableFieldAssign + + /* decls > type (+ other type expression codes) */ + + // NotAType occurs when the identifier used as the underlying type in a type + // declaration or the right-hand side of a type alias does not denote a type. 
+ //
+ // Example:
+ //  var S = 2
+ //
+ //  type T S
+ NotAType
+
+ // InvalidArrayLen occurs when an array length is not a constant value.
+ //
+ // Example:
+ //  var n = 3
+ //  var _ = [n]int{}
+ InvalidArrayLen
+
+ // BlankIfaceMethod occurs when a method name is '_'.
+ //
+ // Per the spec:
+ //  "The name of each explicitly specified method must be unique and not
+ //  blank."
+ //
+ // Example:
+ //  type T interface {
+ //  _(int)
+ //  }
+ BlankIfaceMethod
+
+ // IncomparableMapKey occurs when a map key type does not support the == and
+ // != operators.
+ //
+ // Per the spec:
+ //  "The comparison operators == and != must be fully defined for operands of
+ //  the key type; thus the key type must not be a function, map, or slice."
+ //
+ // Example:
+ //  var x map[T]int
+ //
+ //  type T []int
+ IncomparableMapKey
+
+ // InvalidIfaceEmbed occurs when a non-interface type is embedded in an
+ // interface.
+ //
+ // Example:
+ //  type T struct {}
+ //
+ //  func (T) m()
+ //
+ //  type I interface {
+ //  T
+ //  }
+ InvalidIfaceEmbed
+
+ // InvalidPtrEmbed occurs when an embedded field is of the pointer form *T,
+ // and T itself is a pointer, an unsafe.Pointer, or an interface.
+ //
+ // Per the spec:
+ //  "An embedded field must be specified as a type name T or as a pointer to
+ //  a non-interface type name *T, and T itself may not be a pointer type."
+ //
+ // Example:
+ //  type T *int
+ //
+ //  type S struct {
+ //  *T
+ //  }
+ InvalidPtrEmbed
+
+ /* decls > func and method */
+
+ // BadRecv occurs when a method declaration does not have exactly one
+ // receiver parameter.
+ //
+ // Example:
+ //  func () _() {}
+ BadRecv
+
+ // InvalidRecv occurs when a receiver type expression is not of the form T
+ // or *T, or T is a pointer type.
+ //
+ // Example:
+ //  type T struct {}
+ //
+ //  func (**T) m() {}
+ InvalidRecv
+
+ // DuplicateFieldAndMethod occurs when an identifier appears as both a field
+ // and method name.
+ // + // Example: + // type T struct { + // m int + // } + // + // func (T) m() {} + DuplicateFieldAndMethod + + // DuplicateMethod occurs when two methods on the same receiver type have + // the same name. + // + // Example: + // type T struct {} + // func (T) m() {} + // func (T) m(i int) int { return i } + DuplicateMethod + + /* decls > special */ + + // InvalidBlank occurs when a blank identifier is used as a value or type. + // + // Per the spec: + // "The blank identifier may appear as an operand only on the left-hand side + // of an assignment." + // + // Example: + // var x = _ + InvalidBlank + + // InvalidIota occurs when the predeclared identifier iota is used outside + // of a constant declaration. + // + // Example: + // var x = iota + InvalidIota + + // MissingInitBody occurs when an init function is missing its body. + // + // Example: + // func init() + MissingInitBody + + // InvalidInitSig occurs when an init function declares parameters or + // results. + // + // Example: + // func init() int { return 1 } + InvalidInitSig + + // InvalidInitDecl occurs when init is declared as anything other than a + // function. + // + // Example: + // var init = 1 + InvalidInitDecl + + // InvalidMainDecl occurs when main is declared as anything other than a + // function, in a main package. + InvalidMainDecl + + /* exprs */ + + // TooManyValues occurs when a function returns too many values for the + // expression context in which it is used. + // + // Example: + // func ReturnTwo() (int, int) { + // return 1, 2 + // } + // + // var x = ReturnTwo() + TooManyValues + + // NotAnExpr occurs when a type expression is used where a value expression + // is expected. + // + // Example: + // type T struct {} + // + // func f() { + // T + // } + NotAnExpr + + /* exprs > const */ + + // TruncatedFloat occurs when a float constant is truncated to an integer + // value. 
+ // + // Example: + // var _ int = 98.6 + TruncatedFloat + + // NumericOverflow occurs when a numeric constant overflows its target type. + // + // Example: + // var x int8 = 1000 + NumericOverflow + + /* exprs > operation */ + + // UndefinedOp occurs when an operator is not defined for the type(s) used + // in an operation. + // + // Example: + // var c = "a" - "b" + UndefinedOp + + // MismatchedTypes occurs when operand types are incompatible in a binary + // operation. + // + // Example: + // var a = "hello" + // var b = 1 + // var c = a - b + MismatchedTypes + + // DivByZero occurs when a division operation is provable at compile + // time to be a division by zero. + // + // Example: + // const divisor = 0 + // var x int = 1/divisor + DivByZero + + // NonNumericIncDec occurs when an increment or decrement operator is + // applied to a non-numeric value. + // + // Example: + // func f() { + // var c = "c" + // c++ + // } + NonNumericIncDec + + /* exprs > ptr */ + + // UnaddressableOperand occurs when the & operator is applied to an + // unaddressable expression. + // + // Example: + // var x = &1 + UnaddressableOperand + + // InvalidIndirection occurs when a non-pointer value is indirected via the + // '*' operator. + // + // Example: + // var x int + // var y = *x + InvalidIndirection + + /* exprs > [] */ + + // NonIndexableOperand occurs when an index operation is applied to a value + // that cannot be indexed. + // + // Example: + // var x = 1 + // var y = x[1] + NonIndexableOperand + + // InvalidIndex occurs when an index argument is not of integer type, + // negative, or out-of-bounds. + // + // Example: + // var s = [...]int{1,2,3} + // var x = s[5] + // + // Example: + // var s = []int{1,2,3} + // var _ = s[-1] + // + // Example: + // var s = []int{1,2,3} + // var i string + // var _ = s[i] + InvalidIndex + + // SwappedSliceIndices occurs when constant indices in a slice expression + // are decreasing in value. 
+ // + // Example: + // var _ = []int{1,2,3}[2:1] + SwappedSliceIndices + + /* operators > slice */ + + // NonSliceableOperand occurs when a slice operation is applied to a value + // whose type is not sliceable, or is unaddressable. + // + // Example: + // var x = [...]int{1, 2, 3}[:1] + // + // Example: + // var x = 1 + // var y = 1[:1] + NonSliceableOperand + + // InvalidSliceExpr occurs when a three-index slice expression (a[x:y:z]) is + // applied to a string. + // + // Example: + // var s = "hello" + // var x = s[1:2:3] + InvalidSliceExpr + + /* exprs > shift */ + + // InvalidShiftCount occurs when the right-hand side of a shift operation is + // either non-integer, negative, or too large. + // + // Example: + // var ( + // x string + // y int = 1 << x + // ) + InvalidShiftCount + + // InvalidShiftOperand occurs when the shifted operand is not an integer. + // + // Example: + // var s = "hello" + // var x = s << 2 + InvalidShiftOperand + + /* exprs > chan */ + + // InvalidReceive occurs when there is a channel receive from a value that + // is either not a channel, or is a send-only channel. + // + // Example: + // func f() { + // var x = 1 + // <-x + // } + InvalidReceive + + // InvalidSend occurs when there is a channel send to a value that is not a + // channel, or is a receive-only channel. + // + // Example: + // func f() { + // var x = 1 + // x <- "hello!" + // } + InvalidSend + + /* exprs > literal */ + + // DuplicateLitKey occurs when an index is duplicated in a slice, array, or + // map literal. + // + // Example: + // var _ = []int{0:1, 0:2} + // + // Example: + // var _ = map[string]int{"a": 1, "a": 2} + DuplicateLitKey + + // MissingLitKey occurs when a map literal is missing a key expression. + // + // Example: + // var _ = map[string]int{1} + MissingLitKey + + // InvalidLitIndex occurs when the key in a key-value element of a slice or + // array literal is not an integer constant. 
+ // + // Example: + // var i = 0 + // var x = []string{i: "world"} + InvalidLitIndex + + // OversizeArrayLit occurs when an array literal exceeds its length. + // + // Example: + // var _ = [2]int{1,2,3} + OversizeArrayLit + + // MixedStructLit occurs when a struct literal contains a mix of positional + // and named elements. + // + // Example: + // var _ = struct{i, j int}{i: 1, 2} + MixedStructLit + + // InvalidStructLit occurs when a positional struct literal has an incorrect + // number of values. + // + // Example: + // var _ = struct{i, j int}{1,2,3} + InvalidStructLit + + // MissingLitField occurs when a struct literal refers to a field that does + // not exist on the struct type. + // + // Example: + // var _ = struct{i int}{j: 2} + MissingLitField + + // DuplicateLitField occurs when a struct literal contains duplicated + // fields. + // + // Example: + // var _ = struct{i int}{i: 1, i: 2} + DuplicateLitField + + // UnexportedLitField occurs when a positional struct literal implicitly + // assigns an unexported field of an imported type. + UnexportedLitField + + // InvalidLitField occurs when a field name is not a valid identifier. + // + // Example: + // var _ = struct{i int}{1: 1} + InvalidLitField + + // UntypedLit occurs when a composite literal omits a required type + // identifier. + // + // Example: + // type outer struct{ + // inner struct { i int } + // } + // + // var _ = outer{inner: {1}} + UntypedLit + + // InvalidLit occurs when a composite literal expression does not match its + // type. + // + // Example: + // type P *struct{ + // x int + // } + // var _ = P {} + InvalidLit + + /* exprs > selector */ + + // AmbiguousSelector occurs when a selector is ambiguous. + // + // Example: + // type E1 struct { i int } + // type E2 struct { i int } + // type T struct { E1; E2 } + // + // var x T + // var _ = x.i + AmbiguousSelector + + // UndeclaredImportedName occurs when a package-qualified identifier is + // undeclared by the imported package. 
+ // + // Example: + // import "go/types" + // + // var _ = types.NotAnActualIdentifier + UndeclaredImportedName + + // UnexportedName occurs when a selector refers to an unexported identifier + // of an imported package. + // + // Example: + // import "reflect" + // + // type _ reflect.flag + UnexportedName + + // UndeclaredName occurs when an identifier is not declared in the current + // scope. + // + // Example: + // var x T + UndeclaredName + + // MissingFieldOrMethod occurs when a selector references a field or method + // that does not exist. + // + // Example: + // type T struct {} + // + // var x = T{}.f + MissingFieldOrMethod + + /* exprs > ... */ + + // BadDotDotDotSyntax occurs when a "..." occurs in a context where it is + // not valid. + // + // Example: + // var _ = map[int][...]int{0: {}} + BadDotDotDotSyntax + + // NonVariadicDotDotDot occurs when a "..." is used on the final argument to + // a non-variadic function. + // + // Example: + // func printArgs(s []string) { + // for _, a := range s { + // println(a) + // } + // } + // + // func f() { + // s := []string{"a", "b", "c"} + // printArgs(s...) + // } + NonVariadicDotDotDot + + // MisplacedDotDotDot occurs when a "..." is used somewhere other than the + // final argument to a function call. + // + // Example: + // func printArgs(args ...int) { + // for _, a := range args { + // println(a) + // } + // } + // + // func f() { + // a := []int{1,2,3} + // printArgs(0, a...) + // } + MisplacedDotDotDot + + // InvalidDotDotDotOperand occurs when a "..." operator is applied to a + // single-valued operand. + // + // Example: + // func printArgs(args ...int) { + // for _, a := range args { + // println(a) + // } + // } + // + // func f() { + // a := 1 + // printArgs(a...) + // } + // + // Example: + // func args() (int, int) { + // return 1, 2 + // } + // + // func printArgs(args ...int) { + // for _, a := range args { + // println(a) + // } + // } + // + // func g() { + // printArgs(args()...) 
+ // } + InvalidDotDotDotOperand + + // InvalidDotDotDot occurs when a "..." is used in a non-variadic built-in + // function. + + // Example: + // var s = []int{1, 2, 3} + // var l = len(s...) + InvalidDotDotDot + + /* exprs > built-in */ + + // UncalledBuiltin occurs when a built-in function is used as a + // function-valued expression, instead of being called. + // + // Per the spec: + // "The built-in functions do not have standard Go types, so they can only + // appear in call expressions; they cannot be used as function values." + // + // Example: + // var _ = copy + UncalledBuiltin + + // InvalidAppend occurs when append is called with a first argument that is + // not a slice. + // + // Example: + // var _ = append(1, 2) + InvalidAppend + + // InvalidCap occurs when an argument to the cap built-in function is not of + // supported type. + // + // See https://golang.org/ref/spec#Length_and_capacity for information on + // which underlying types are supported as arguments to cap and len. + // + // Example: + // var s = 2 + // var x = cap(s) + InvalidCap + + // InvalidClose occurs when close(...) is called with an argument that is + // not of channel type, or that is a receive-only channel. + // + // Example: + // func f() { + // var x int + // close(x) + // } + InvalidClose + + // InvalidCopy occurs when the arguments are not of slice type or do not + // have compatible type. + // + // See https://golang.org/ref/spec#Appending_and_copying_slices for more + // information on the type requirements for the copy built-in. + // + // Example: + // func f() { + // var x []int + // y := []int64{1,2,3} + // copy(x, y) + // } + InvalidCopy + + // InvalidComplex occurs when the complex built-in function is called with + // arguments with incompatible types. + // + // Example: + // var _ = complex(float32(1), float64(2)) + InvalidComplex + + // InvalidDelete occurs when the delete built-in function is called with a + // first argument that is not a map.
+ // + // Example: + // func f() { + // m := "hello" + // delete(m, "e") + // } + InvalidDelete + + // InvalidImag occurs when the imag built-in function is called with an + // argument that does not have complex type. + // + // Example: + // var _ = imag(int(1)) + InvalidImag + + // InvalidLen occurs when an argument to the len built-in function is not of + // supported type. + // + // See https://golang.org/ref/spec#Length_and_capacity for information on + // which underlying types are supported as arguments to cap and len. + // + // Example: + // var s = 2 + // var x = len(s) + InvalidLen + + // SwappedMakeArgs occurs when make is called with three arguments, and its + // length argument is larger than its capacity argument. + // + // Example: + // var x = make([]int, 3, 2) + SwappedMakeArgs + + // InvalidMake occurs when make is called with an unsupported type argument. + // + // See https://golang.org/ref/spec#Making_slices_maps_and_channels for + // information on the types that may be created using make. + // + // Example: + // var x = make(int) + InvalidMake + + // InvalidReal occurs when the real built-in function is called with an + // argument that does not have complex type. + // + // Example: + // var _ = real(int(1)) + InvalidReal + + /* exprs > assertion */ + + // InvalidAssert occurs when a type assertion is applied to a + // value that is not of interface type. + // + // Example: + // var x = 1 + // var _ = x.(float64) + InvalidAssert + + // ImpossibleAssert occurs for a type assertion x.(T) when the value x of + // interface cannot have dynamic type T, due to a missing or mismatching + // method on T. + // + // Example: + // type T int + // + // func (t *T) m() int { return int(*t) } + // + // type I interface { m() int } + // + // var x I + // var _ = x.(T) + ImpossibleAssert + + /* exprs > conversion */ + + // InvalidConversion occurs when the argument type cannot be converted to the + // target.
+ // + // See https://golang.org/ref/spec#Conversions for the rules of + // convertibility. + // + // Example: + // var x float64 + // var _ = string(x) + InvalidConversion + + // InvalidUntypedConversion occurs when there is no valid implicit + // conversion from an untyped value satisfying the type constraints of the + // context in which it is used. + // + // Example: + // var _ = 1 + "" + InvalidUntypedConversion + + /* offsetof */ + + // BadOffsetofSyntax occurs when unsafe.Offsetof is called with an argument + // that is not a selector expression. + // + // Example: + // import "unsafe" + // + // var x int + // var _ = unsafe.Offsetof(x) + BadOffsetofSyntax + + // InvalidOffsetof occurs when unsafe.Offsetof is called with a method + // selector, rather than a field selector, or when the field is embedded via + // a pointer. + // + // Per the spec: + // + // "If f is an embedded field, it must be reachable without pointer + // indirections through fields of the struct. " + // + // Example: + // import "unsafe" + // + // type T struct { f int } + // type S struct { *T } + // var s S + // var _ = unsafe.Offsetof(s.f) + // + // Example: + // import "unsafe" + // + // type S struct{} + // + // func (S) m() {} + // + // var s S + // var _ = unsafe.Offsetof(s.m) + InvalidOffsetof + + /* control flow > scope */ + + // UnusedExpr occurs when a side-effect free expression is used as a + // statement. Such a statement has no effect. + // + // Example: + // func f(i int) { + // i*i + // } + UnusedExpr + + // UnusedVar occurs when a variable is declared but unused. + // + // Example: + // func f() { + // x := 1 + // } + UnusedVar + + // MissingReturn occurs when a function with results is missing a return + // statement. + // + // Example: + // func f() int {} + MissingReturn + + // WrongResultCount occurs when a return statement returns an incorrect + // number of values.
+ // + // Example: + // func ReturnOne() int { + // return 1, 2 + // } + WrongResultCount + + // OutOfScopeResult occurs when the name of a value implicitly returned by + // an empty return statement is shadowed in a nested scope. + // + // Example: + // func factor(n int) (i int) { + // for i := 2; i < n; i++ { + // if n%i == 0 { + // return + // } + // } + // return 0 + // } + OutOfScopeResult + + /* control flow > if */ + + // InvalidCond occurs when an if condition is not a boolean expression. + // + // Example: + // func checkReturn(i int) { + // if i { + // panic("non-zero return") + // } + // } + InvalidCond + + /* control flow > for */ + + // InvalidPostDecl occurs when there is a declaration in a for-loop post + // statement. + // + // Example: + // func f() { + // for i := 0; i < 10; j := 0 {} + // } + InvalidPostDecl + + // InvalidChanRange occurs when a send-only channel is used in a range + // expression. + // + // Example: + // func sum(c chan<- int) { + // s := 0 + // for i := range c { + // s += i + // } + // } + InvalidChanRange + + // InvalidIterVar occurs when two iteration variables are used while ranging + // over a channel. + // + // Example: + // func f(c chan int) { + // for k, v := range c { + // println(k, v) + // } + // } + InvalidIterVar + + // InvalidRangeExpr occurs when the type of a range expression is not array, + // slice, string, map, or channel. + // + // Example: + // func f(i int) { + // for j := range i { + // println(j) + // } + // } + InvalidRangeExpr + + /* control flow > switch */ + + // MisplacedBreak occurs when a break statement is not within a for, switch, + // or select statement of the innermost function definition. + // + // Example: + // func f() { + // break + // } + MisplacedBreak + + // MisplacedContinue occurs when a continue statement is not within a for + // loop of the innermost function definition.
+ // + // Example: + // func sumeven(n int) int { + // proceed := func() { + // continue + // } + // sum := 0 + // for i := 1; i <= n; i++ { + // if i % 2 != 0 { + // proceed() + // } + // sum += i + // } + // return sum + // } + MisplacedContinue + + // MisplacedFallthrough occurs when a fallthrough statement is not within an + // expression switch. + // + // Example: + // func typename(i interface{}) string { + // switch i.(type) { + // case int64: + // fallthrough + // case int: + // return "int" + // } + // return "unsupported" + // } + MisplacedFallthrough + + // DuplicateCase occurs when a type or expression switch has duplicate + // cases. + // + // Example: + // func printInt(i int) { + // switch i { + // case 1: + // println("one") + // case 1: + // println("One") + // } + // } + DuplicateCase + + // DuplicateDefault occurs when a type or expression switch has multiple + // default clauses. + // + // Example: + // func printInt(i int) { + // switch i { + // case 1: + // println("one") + // default: + // println("One") + // default: + // println("1") + // } + // } + DuplicateDefault + + // BadTypeKeyword occurs when a .(type) expression is used anywhere other + // than a type switch. + // + // Example: + // type I interface { + // m() + // } + // var t I + // var _ = t.(type) + BadTypeKeyword + + // InvalidTypeSwitch occurs when .(type) is used on an expression that is + // not of interface type. + // + // Example: + // func f(i int) { + // switch x := i.(type) {} + // } + InvalidTypeSwitch + + // InvalidExprSwitch occurs when a switch expression is not comparable. + // + // Example: + // func _() { + // var a struct{ _ func() } + // switch a /* ERROR cannot switch on a */ { + // } + // } + InvalidExprSwitch + + /* control flow > select */ + + // InvalidSelectCase occurs when a select case is not a channel send or + // receive. 
+ // + // Example: + // func checkChan(c <-chan int) bool { + // select { + // case c: + // return true + // default: + // return false + // } + // } + InvalidSelectCase + + /* control flow > labels and jumps */ + + // UndeclaredLabel occurs when an undeclared label is jumped to. + // + // Example: + // func f() { + // goto L + // } + UndeclaredLabel + + // DuplicateLabel occurs when a label is declared more than once. + // + // Example: + // func f() int { + // L: + // L: + // return 1 + // } + DuplicateLabel + + // MisplacedLabel occurs when a break or continue label is not on a for, + // switch, or select statement. + // + // Example: + // func f() { + // L: + // a := []int{1,2,3} + // for _, e := range a { + // if e > 10 { + // break L + // } + // println(a) + // } + // } + MisplacedLabel + + // UnusedLabel occurs when a label is declared but not used. + // + // Example: + // func f() { + // L: + // } + UnusedLabel + + // JumpOverDecl occurs when a label jumps over a variable declaration. + // + // Example: + // func f() int { + // goto L + // x := 2 + // L: + // x++ + // return x + // } + JumpOverDecl + + // JumpIntoBlock occurs when a forward jump goes to a label inside a nested + // block. + // + // Example: + // func f(x int) { + // goto L + // if x > 0 { + // L: + // print("inside block") + // } + // } + JumpIntoBlock + + /* control flow > calls */ + + // InvalidMethodExpr occurs when a pointer method is called but the argument + // is not addressable. + // + // Example: + // type T struct {} + // + // func (*T) m() int { return 1 } + // + // var _ = T.m(T{}) + InvalidMethodExpr + + // WrongArgCount occurs when too few or too many arguments are passed by a + // function call. + // + // Example: + // func f(i int) {} + // var x = f() + WrongArgCount + + // InvalidCall occurs when an expression is called that is not of function + // type. 
+ // + // Example: + // var x = "x" + // var y = x() + InvalidCall + + /* control flow > suspended */ + + // UnusedResults occurs when a restricted expression-only built-in function + // is suspended via go or defer. Such a suspension discards the results of + // these side-effect free built-in functions, and therefore is ineffectual. + // + // Example: + // func f(a []int) int { + // defer len(a) + // return 0 + // } + UnusedResults + + // InvalidDefer occurs when a deferred expression is not a function call, + // for example if the expression is a type conversion. + // + // Example: + // func f(i int) int { + // defer int32(i) + // return i + // } + InvalidDefer + + // InvalidGo occurs when a go expression is not a function call, for example + // if the expression is a type conversion. + // + // Example: + // func f(i int) int { + // go int32(i) + // return i + // } + InvalidGo + + // All codes below were added in Go 1.17. + + /* decl */ + + // BadDecl occurs when a declaration has invalid syntax. + BadDecl + + // RepeatedDecl occurs when an identifier occurs more than once on the left + // hand side of a short variable declaration. + // + // Example: + // func _() { + // x, y, y := 1, 2, 3 + // } + RepeatedDecl + + /* unsafe */ + + // InvalidUnsafeAdd occurs when unsafe.Add is called with a + // length argument that is not of integer type. + // + // Example: + // import "unsafe" + // + // var p unsafe.Pointer + // var _ = unsafe.Add(p, float64(1)) + InvalidUnsafeAdd + + // InvalidUnsafeSlice occurs when unsafe.Slice is called with a + // pointer argument that is not of pointer type or a length argument + // that is not of integer type, negative, or out of bounds.
+ // + // Example: + // import "unsafe" + // + // var x int + // var _ = unsafe.Slice(x, 1) + // + // Example: + // import "unsafe" + // + // var x int + // var _ = unsafe.Slice(&x, float64(1)) + // + // Example: + // import "unsafe" + // + // var x int + // var _ = unsafe.Slice(&x, -1) + // + // Example: + // import "unsafe" + // + // var x int + // var _ = unsafe.Slice(&x, uint64(1) << 63) + InvalidUnsafeSlice + + // All codes below were added in Go 1.18. + + /* features */ + + // UnsupportedFeature occurs when a language feature is used that is not + // supported at this Go version. + UnsupportedFeature + + /* type params */ + + // NotAGenericType occurs when a non-generic type is used where a generic + // type is expected: in type or function instantiation. + // + // Example: + // type T int + // + // var _ T[int] + NotAGenericType + + // WrongTypeArgCount occurs when a type or function is instantiated with an + // incorrect number of type arguments, including when a generic type or + // function is used without instantiation. + // + // Errors involving failed type inference are assigned other error codes. + // + // Example: + // type T[p any] int + // + // var _ T[int, string] + // + // Example: + // func f[T any]() {} + // + // var x = f + WrongTypeArgCount + + // CannotInferTypeArgs occurs when type or function type argument inference + // fails to infer all type arguments. + // + // Example: + // func f[T any]() {} + // + // func _() { + // f() + // } + // + // Example: + // type N[P, Q any] struct{} + // + // var _ N[int] + CannotInferTypeArgs + + // InvalidTypeArg occurs when a type argument does not satisfy its + // corresponding type parameter constraints. + // + // Example: + // type T[P ~int] struct{} + // + // var _ T[string] + InvalidTypeArg // arguments? InferenceFailed + + // InvalidInstanceCycle occurs when an invalid cycle is detected + // within the instantiation graph.
+ // + // Example: + // func f[T any]() { f[*T]() } + InvalidInstanceCycle + + // InvalidUnion occurs when an embedded union or approximation element is + // not valid. + // + // Example: + // type _ interface { + // ~int | interface{ m() } + // } + InvalidUnion + + // MisplacedConstraintIface occurs when a constraint-type interface is used + // outside of constraint position. + // + // Example: + // type I interface { ~int } + // + // var _ I + MisplacedConstraintIface + + // InvalidMethodTypeParams occurs when methods have type parameters. + // + // It cannot be encountered with an AST parsed using go/parser. + InvalidMethodTypeParams + + // MisplacedTypeParam occurs when a type parameter is used in a place where + // it is not permitted. + // + // Example: + // type T[P any] P + // + // Example: + // type T[P any] struct{ *P } + MisplacedTypeParam + + // InvalidUnsafeSliceData occurs when unsafe.SliceData is called with + // an argument that is not of slice type. It also occurs if it is used + // in a package compiled for a language version before go1.20. + // + // Example: + // import "unsafe" + // + // var x int + // var _ = unsafe.SliceData(x) + InvalidUnsafeSliceData + + // InvalidUnsafeString occurs when unsafe.String is called with + // a length argument that is not of integer type, negative, or + // out of bounds. It also occurs if it is used in a package + // compiled for a language version before go1.20. + // + // Example: + // import "unsafe" + // + // var b [10]byte + // var _ = unsafe.String(&b[0], -1) + InvalidUnsafeString + + // InvalidUnsafeStringData occurs if it is used in a package + // compiled for a language version before go1.20. 
+ _ // not used anymore + +) diff --git a/cluster-autoscaler/vendor/golang.org/x/tools/internal/typesinternal/errorcode_string.go b/cluster-autoscaler/vendor/golang.org/x/tools/internal/typesinternal/errorcode_string.go new file mode 100644 index 000000000000..15ecf7c5ded9 --- /dev/null +++ b/cluster-autoscaler/vendor/golang.org/x/tools/internal/typesinternal/errorcode_string.go @@ -0,0 +1,179 @@ +// Code generated by "stringer -type=ErrorCode"; DO NOT EDIT. + +package typesinternal + +import "strconv" + +func _() { + // An "invalid array index" compiler error signifies that the constant values have changed. + // Re-run the stringer command to generate them again. + var x [1]struct{} + _ = x[InvalidSyntaxTree - -1] + _ = x[Test-1] + _ = x[BlankPkgName-2] + _ = x[MismatchedPkgName-3] + _ = x[InvalidPkgUse-4] + _ = x[BadImportPath-5] + _ = x[BrokenImport-6] + _ = x[ImportCRenamed-7] + _ = x[UnusedImport-8] + _ = x[InvalidInitCycle-9] + _ = x[DuplicateDecl-10] + _ = x[InvalidDeclCycle-11] + _ = x[InvalidTypeCycle-12] + _ = x[InvalidConstInit-13] + _ = x[InvalidConstVal-14] + _ = x[InvalidConstType-15] + _ = x[UntypedNilUse-16] + _ = x[WrongAssignCount-17] + _ = x[UnassignableOperand-18] + _ = x[NoNewVar-19] + _ = x[MultiValAssignOp-20] + _ = x[InvalidIfaceAssign-21] + _ = x[InvalidChanAssign-22] + _ = x[IncompatibleAssign-23] + _ = x[UnaddressableFieldAssign-24] + _ = x[NotAType-25] + _ = x[InvalidArrayLen-26] + _ = x[BlankIfaceMethod-27] + _ = x[IncomparableMapKey-28] + _ = x[InvalidIfaceEmbed-29] + _ = x[InvalidPtrEmbed-30] + _ = x[BadRecv-31] + _ = x[InvalidRecv-32] + _ = x[DuplicateFieldAndMethod-33] + _ = x[DuplicateMethod-34] + _ = x[InvalidBlank-35] + _ = x[InvalidIota-36] + _ = x[MissingInitBody-37] + _ = x[InvalidInitSig-38] + _ = x[InvalidInitDecl-39] + _ = x[InvalidMainDecl-40] + _ = x[TooManyValues-41] + _ = x[NotAnExpr-42] + _ = x[TruncatedFloat-43] + _ = x[NumericOverflow-44] + _ = x[UndefinedOp-45] + _ = x[MismatchedTypes-46] + _ = x[DivByZero-47] + _ = 
x[NonNumericIncDec-48] + _ = x[UnaddressableOperand-49] + _ = x[InvalidIndirection-50] + _ = x[NonIndexableOperand-51] + _ = x[InvalidIndex-52] + _ = x[SwappedSliceIndices-53] + _ = x[NonSliceableOperand-54] + _ = x[InvalidSliceExpr-55] + _ = x[InvalidShiftCount-56] + _ = x[InvalidShiftOperand-57] + _ = x[InvalidReceive-58] + _ = x[InvalidSend-59] + _ = x[DuplicateLitKey-60] + _ = x[MissingLitKey-61] + _ = x[InvalidLitIndex-62] + _ = x[OversizeArrayLit-63] + _ = x[MixedStructLit-64] + _ = x[InvalidStructLit-65] + _ = x[MissingLitField-66] + _ = x[DuplicateLitField-67] + _ = x[UnexportedLitField-68] + _ = x[InvalidLitField-69] + _ = x[UntypedLit-70] + _ = x[InvalidLit-71] + _ = x[AmbiguousSelector-72] + _ = x[UndeclaredImportedName-73] + _ = x[UnexportedName-74] + _ = x[UndeclaredName-75] + _ = x[MissingFieldOrMethod-76] + _ = x[BadDotDotDotSyntax-77] + _ = x[NonVariadicDotDotDot-78] + _ = x[MisplacedDotDotDot-79] + _ = x[InvalidDotDotDotOperand-80] + _ = x[InvalidDotDotDot-81] + _ = x[UncalledBuiltin-82] + _ = x[InvalidAppend-83] + _ = x[InvalidCap-84] + _ = x[InvalidClose-85] + _ = x[InvalidCopy-86] + _ = x[InvalidComplex-87] + _ = x[InvalidDelete-88] + _ = x[InvalidImag-89] + _ = x[InvalidLen-90] + _ = x[SwappedMakeArgs-91] + _ = x[InvalidMake-92] + _ = x[InvalidReal-93] + _ = x[InvalidAssert-94] + _ = x[ImpossibleAssert-95] + _ = x[InvalidConversion-96] + _ = x[InvalidUntypedConversion-97] + _ = x[BadOffsetofSyntax-98] + _ = x[InvalidOffsetof-99] + _ = x[UnusedExpr-100] + _ = x[UnusedVar-101] + _ = x[MissingReturn-102] + _ = x[WrongResultCount-103] + _ = x[OutOfScopeResult-104] + _ = x[InvalidCond-105] + _ = x[InvalidPostDecl-106] + _ = x[InvalidChanRange-107] + _ = x[InvalidIterVar-108] + _ = x[InvalidRangeExpr-109] + _ = x[MisplacedBreak-110] + _ = x[MisplacedContinue-111] + _ = x[MisplacedFallthrough-112] + _ = x[DuplicateCase-113] + _ = x[DuplicateDefault-114] + _ = x[BadTypeKeyword-115] + _ = x[InvalidTypeSwitch-116] + _ = x[InvalidExprSwitch-117] + _ = 
x[InvalidSelectCase-118] + _ = x[UndeclaredLabel-119] + _ = x[DuplicateLabel-120] + _ = x[MisplacedLabel-121] + _ = x[UnusedLabel-122] + _ = x[JumpOverDecl-123] + _ = x[JumpIntoBlock-124] + _ = x[InvalidMethodExpr-125] + _ = x[WrongArgCount-126] + _ = x[InvalidCall-127] + _ = x[UnusedResults-128] + _ = x[InvalidDefer-129] + _ = x[InvalidGo-130] + _ = x[BadDecl-131] + _ = x[RepeatedDecl-132] + _ = x[InvalidUnsafeAdd-133] + _ = x[InvalidUnsafeSlice-134] + _ = x[UnsupportedFeature-135] + _ = x[NotAGenericType-136] + _ = x[WrongTypeArgCount-137] + _ = x[CannotInferTypeArgs-138] + _ = x[InvalidTypeArg-139] + _ = x[InvalidInstanceCycle-140] + _ = x[InvalidUnion-141] + _ = x[MisplacedConstraintIface-142] + _ = x[InvalidMethodTypeParams-143] + _ = x[MisplacedTypeParam-144] + _ = x[InvalidUnsafeSliceData-145] + _ = x[InvalidUnsafeString-146] +} + +const ( + _ErrorCode_name_0 = "InvalidSyntaxTree" + _ErrorCode_name_1 = "TestBlankPkgNameMismatchedPkgNameInvalidPkgUseBadImportPathBrokenImportImportCRenamedUnusedImportInvalidInitCycleDuplicateDeclInvalidDeclCycleInvalidTypeCycleInvalidConstInitInvalidConstValInvalidConstTypeUntypedNilUseWrongAssignCountUnassignableOperandNoNewVarMultiValAssignOpInvalidIfaceAssignInvalidChanAssignIncompatibleAssignUnaddressableFieldAssignNotATypeInvalidArrayLenBlankIfaceMethodIncomparableMapKeyInvalidIfaceEmbedInvalidPtrEmbedBadRecvInvalidRecvDuplicateFieldAndMethodDuplicateMethodInvalidBlankInvalidIotaMissingInitBodyInvalidInitSigInvalidInitDeclInvalidMainDeclTooManyValuesNotAnExprTruncatedFloatNumericOverflowUndefinedOpMismatchedTypesDivByZeroNonNumericIncDecUnaddressableOperandInvalidIndirectionNonIndexableOperandInvalidIndexSwappedSliceIndicesNonSliceableOperandInvalidSliceExprInvalidShiftCountInvalidShiftOperandInvalidReceiveInvalidSendDuplicateLitKeyMissingLitKeyInvalidLitIndexOversizeArrayLitMixedStructLitInvalidStructLitMissingLitFieldDuplicateLitFieldUnexportedLitFieldInvalidLitFieldUntypedLitInvalidLitAmbiguousSelectorUndeclaredImported
NameUnexportedNameUndeclaredNameMissingFieldOrMethodBadDotDotDotSyntaxNonVariadicDotDotDotMisplacedDotDotDotInvalidDotDotDotOperandInvalidDotDotDotUncalledBuiltinInvalidAppendInvalidCapInvalidCloseInvalidCopyInvalidComplexInvalidDeleteInvalidImagInvalidLenSwappedMakeArgsInvalidMakeInvalidRealInvalidAssertImpossibleAssertInvalidConversionInvalidUntypedConversionBadOffsetofSyntaxInvalidOffsetofUnusedExprUnusedVarMissingReturnWrongResultCountOutOfScopeResultInvalidCondInvalidPostDeclInvalidChanRangeInvalidIterVarInvalidRangeExprMisplacedBreakMisplacedContinueMisplacedFallthroughDuplicateCaseDuplicateDefaultBadTypeKeywordInvalidTypeSwitchInvalidExprSwitchInvalidSelectCaseUndeclaredLabelDuplicateLabelMisplacedLabelUnusedLabelJumpOverDeclJumpIntoBlockInvalidMethodExprWrongArgCountInvalidCallUnusedResultsInvalidDeferInvalidGoBadDeclRepeatedDeclInvalidUnsafeAddInvalidUnsafeSliceUnsupportedFeatureNotAGenericTypeWrongTypeArgCountCannotInferTypeArgsInvalidTypeArgInvalidInstanceCycleInvalidUnionMisplacedConstraintIfaceInvalidMethodTypeParamsMisplacedTypeParamInvalidUnsafeSliceDataInvalidUnsafeString" +) + +var ( + _ErrorCode_index_1 = [...]uint16{0, 4, 16, 33, 46, 59, 71, 85, 97, 113, 126, 142, 158, 174, 189, 205, 218, 234, 253, 261, 277, 295, 312, 330, 354, 362, 377, 393, 411, 428, 443, 450, 461, 484, 499, 511, 522, 537, 551, 566, 581, 594, 603, 617, 632, 643, 658, 667, 683, 703, 721, 740, 752, 771, 790, 806, 823, 842, 856, 867, 882, 895, 910, 926, 940, 956, 971, 988, 1006, 1021, 1031, 1041, 1058, 1080, 1094, 1108, 1128, 1146, 1166, 1184, 1207, 1223, 1238, 1251, 1261, 1273, 1284, 1298, 1311, 1322, 1332, 1347, 1358, 1369, 1382, 1398, 1415, 1439, 1456, 1471, 1481, 1490, 1503, 1519, 1535, 1546, 1561, 1577, 1591, 1607, 1621, 1638, 1658, 1671, 1687, 1701, 1718, 1735, 1752, 1767, 1781, 1795, 1806, 1818, 1831, 1848, 1861, 1872, 1885, 1897, 1906, 1913, 1925, 1941, 1959, 1977, 1992, 2009, 2028, 2042, 2062, 2074, 2098, 2121, 2139, 2161, 2180} +) + +func (i ErrorCode) String() string { 
+ switch { + case i == -1: + return _ErrorCode_name_0 + case 1 <= i && i <= 146: + i -= 1 + return _ErrorCode_name_1[_ErrorCode_index_1[i]:_ErrorCode_index_1[i+1]] + default: + return "ErrorCode(" + strconv.FormatInt(int64(i), 10) + ")" + } +} diff --git a/cluster-autoscaler/vendor/golang.org/x/tools/internal/typesinternal/types.go b/cluster-autoscaler/vendor/golang.org/x/tools/internal/typesinternal/types.go new file mode 100644 index 000000000000..66e8b099bd6d --- /dev/null +++ b/cluster-autoscaler/vendor/golang.org/x/tools/internal/typesinternal/types.go @@ -0,0 +1,68 @@ +// Copyright 2020 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// Package typesinternal provides access to internal go/types APIs that are not +// yet exported. +package typesinternal + +import ( + "go/token" + "go/types" + "reflect" + "unsafe" + + "golang.org/x/tools/go/types/objectpath" +) + +func SetUsesCgo(conf *types.Config) bool { + v := reflect.ValueOf(conf).Elem() + + f := v.FieldByName("go115UsesCgo") + if !f.IsValid() { + f = v.FieldByName("UsesCgo") + if !f.IsValid() { + return false + } + } + + addr := unsafe.Pointer(f.UnsafeAddr()) + *(*bool)(addr) = true + + return true +} + +// ReadGo116ErrorData extracts additional information from types.Error values +// generated by Go version 1.16 and later: the error code, start position, and +// end position. If all positions are valid, start <= err.Pos <= end. +// +// If the data could not be read, the final result parameter will be false. +func ReadGo116ErrorData(err types.Error) (code ErrorCode, start, end token.Pos, ok bool) { + var data [3]int + // By coincidence all of these fields are ints, which simplifies things. 
+ v := reflect.ValueOf(err) + for i, name := range []string{"go116code", "go116start", "go116end"} { + f := v.FieldByName(name) + if !f.IsValid() { + return 0, 0, 0, false + } + data[i] = int(f.Int()) + } + return ErrorCode(data[0]), token.Pos(data[1]), token.Pos(data[2]), true +} + +var SetGoVersion = func(conf *types.Config, version string) bool { return false } + +// SkipEncoderMethodSorting marks the encoder as not requiring sorted methods, +// as an optimization for gopls (which guarantees the order of parsed source files). +// +// TODO(golang/go#61443): eliminate this parameter one way or the other. +// +//go:linkname SkipEncoderMethodSorting golang.org/x/tools/go/types/objectpath.skipMethodSorting +func SkipEncoderMethodSorting(enc *objectpath.Encoder) + +// ObjectpathObject is like objectpath.Object, but allows suppressing method +// sorting (which is not necessary for gopls). +// +//go:linkname ObjectpathObject golang.org/x/tools/go/types/objectpath.object +func ObjectpathObject(pkg *types.Package, p objectpath.Path, skipMethodSorting bool) (types.Object, error) diff --git a/cluster-autoscaler/vendor/golang.org/x/tools/internal/typesinternal/types_118.go b/cluster-autoscaler/vendor/golang.org/x/tools/internal/typesinternal/types_118.go new file mode 100644 index 000000000000..a42b072a67d3 --- /dev/null +++ b/cluster-autoscaler/vendor/golang.org/x/tools/internal/typesinternal/types_118.go @@ -0,0 +1,19 @@ +// Copyright 2021 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. 
+ +//go:build go1.18 +// +build go1.18 + +package typesinternal + +import ( + "go/types" +) + +func init() { + SetGoVersion = func(conf *types.Config, version string) bool { + conf.GoVersion = version + return true + } +} diff --git a/cluster-autoscaler/vendor/k8s.io/kube-proxy/LICENSE b/cluster-autoscaler/vendor/google.golang.org/genproto/googleapis/api/LICENSE similarity index 100% rename from cluster-autoscaler/vendor/k8s.io/kube-proxy/LICENSE rename to cluster-autoscaler/vendor/google.golang.org/genproto/googleapis/api/LICENSE diff --git a/cluster-autoscaler/vendor/google.golang.org/genproto/googleapis/api/annotations/client.pb.go b/cluster-autoscaler/vendor/google.golang.org/genproto/googleapis/api/annotations/client.pb.go index bb9d94893176..83774fbcbe71 100644 --- a/cluster-autoscaler/vendor/google.golang.org/genproto/googleapis/api/annotations/client.pb.go +++ b/cluster-autoscaler/vendor/google.golang.org/genproto/googleapis/api/annotations/client.pb.go @@ -15,7 +15,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. // versions: // protoc-gen-go v1.26.0 -// protoc v3.21.9 +// protoc v3.21.12 // source: google/api/client.proto package annotations @@ -53,6 +53,12 @@ const ( ClientLibraryOrganization_PHOTOS ClientLibraryOrganization = 3 // Street View Org. ClientLibraryOrganization_STREET_VIEW ClientLibraryOrganization = 4 + // Shopping Org. + ClientLibraryOrganization_SHOPPING ClientLibraryOrganization = 5 + // Geo Org. + ClientLibraryOrganization_GEO ClientLibraryOrganization = 6 + // Generative AI - https://developers.generativeai.google + ClientLibraryOrganization_GENERATIVE_AI ClientLibraryOrganization = 7 ) // Enum value maps for ClientLibraryOrganization. 
@@ -63,13 +69,19 @@ var ( 2: "ADS", 3: "PHOTOS", 4: "STREET_VIEW", + 5: "SHOPPING", + 6: "GEO", + 7: "GENERATIVE_AI", } ClientLibraryOrganization_value = map[string]int32{ "CLIENT_LIBRARY_ORGANIZATION_UNSPECIFIED": 0, - "CLOUD": 1, - "ADS": 2, - "PHOTOS": 3, - "STREET_VIEW": 4, + "CLOUD": 1, + "ADS": 2, + "PHOTOS": 3, + "STREET_VIEW": 4, + "SHOPPING": 5, + "GEO": 6, + "GENERATIVE_AI": 7, } ) @@ -370,7 +382,7 @@ type Publishing struct { // A list of API method settings, e.g. the behavior for methods that use the // long-running operation pattern. MethodSettings []*MethodSettings `protobuf:"bytes,2,rep,name=method_settings,json=methodSettings,proto3" json:"method_settings,omitempty"` - // Link to a place that API users can report issues. Example: + // Link to a *public* URI where users can report issues. Example: // https://issuetracker.google.com/issues/new?component=190865&template=1161103 NewIssueUri string `protobuf:"bytes,101,opt,name=new_issue_uri,json=newIssueUri,proto3" json:"new_issue_uri,omitempty"` // Link to product home page. 
Example: @@ -1465,42 +1477,44 @@ var file_google_api_client_proto_rawDesc = []byte{ 0x6f, 0x75, 0x74, 0x18, 0x04, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x19, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x44, 0x75, 0x72, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x52, 0x10, 0x74, 0x6f, 0x74, 0x61, 0x6c, 0x50, 0x6f, 0x6c, 0x6c, 0x54, - 0x69, 0x6d, 0x65, 0x6f, 0x75, 0x74, 0x2a, 0x79, 0x0a, 0x19, 0x43, 0x6c, 0x69, 0x65, 0x6e, 0x74, - 0x4c, 0x69, 0x62, 0x72, 0x61, 0x72, 0x79, 0x4f, 0x72, 0x67, 0x61, 0x6e, 0x69, 0x7a, 0x61, 0x74, - 0x69, 0x6f, 0x6e, 0x12, 0x2b, 0x0a, 0x27, 0x43, 0x4c, 0x49, 0x45, 0x4e, 0x54, 0x5f, 0x4c, 0x49, - 0x42, 0x52, 0x41, 0x52, 0x59, 0x5f, 0x4f, 0x52, 0x47, 0x41, 0x4e, 0x49, 0x5a, 0x41, 0x54, 0x49, - 0x4f, 0x4e, 0x5f, 0x55, 0x4e, 0x53, 0x50, 0x45, 0x43, 0x49, 0x46, 0x49, 0x45, 0x44, 0x10, 0x00, - 0x12, 0x09, 0x0a, 0x05, 0x43, 0x4c, 0x4f, 0x55, 0x44, 0x10, 0x01, 0x12, 0x07, 0x0a, 0x03, 0x41, - 0x44, 0x53, 0x10, 0x02, 0x12, 0x0a, 0x0a, 0x06, 0x50, 0x48, 0x4f, 0x54, 0x4f, 0x53, 0x10, 0x03, - 0x12, 0x0f, 0x0a, 0x0b, 0x53, 0x54, 0x52, 0x45, 0x45, 0x54, 0x5f, 0x56, 0x49, 0x45, 0x57, 0x10, - 0x04, 0x2a, 0x67, 0x0a, 0x18, 0x43, 0x6c, 0x69, 0x65, 0x6e, 0x74, 0x4c, 0x69, 0x62, 0x72, 0x61, - 0x72, 0x79, 0x44, 0x65, 0x73, 0x74, 0x69, 0x6e, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x12, 0x2a, 0x0a, - 0x26, 0x43, 0x4c, 0x49, 0x45, 0x4e, 0x54, 0x5f, 0x4c, 0x49, 0x42, 0x52, 0x41, 0x52, 0x59, 0x5f, - 0x44, 0x45, 0x53, 0x54, 0x49, 0x4e, 0x41, 0x54, 0x49, 0x4f, 0x4e, 0x5f, 0x55, 0x4e, 0x53, 0x50, - 0x45, 0x43, 0x49, 0x46, 0x49, 0x45, 0x44, 0x10, 0x00, 0x12, 0x0a, 0x0a, 0x06, 0x47, 0x49, 0x54, - 0x48, 0x55, 0x42, 0x10, 0x0a, 0x12, 0x13, 0x0a, 0x0f, 0x50, 0x41, 0x43, 0x4b, 0x41, 0x47, 0x45, - 0x5f, 0x4d, 0x41, 0x4e, 0x41, 0x47, 0x45, 0x52, 0x10, 0x14, 0x3a, 0x4a, 0x0a, 0x10, 0x6d, 0x65, - 0x74, 0x68, 0x6f, 0x64, 0x5f, 0x73, 0x69, 0x67, 0x6e, 0x61, 0x74, 0x75, 0x72, 0x65, 0x12, 0x1e, - 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 
0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, - 0x2e, 0x4d, 0x65, 0x74, 0x68, 0x6f, 0x64, 0x4f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x18, 0x9b, - 0x08, 0x20, 0x03, 0x28, 0x09, 0x52, 0x0f, 0x6d, 0x65, 0x74, 0x68, 0x6f, 0x64, 0x53, 0x69, 0x67, - 0x6e, 0x61, 0x74, 0x75, 0x72, 0x65, 0x3a, 0x43, 0x0a, 0x0c, 0x64, 0x65, 0x66, 0x61, 0x75, 0x6c, - 0x74, 0x5f, 0x68, 0x6f, 0x73, 0x74, 0x12, 0x1f, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, + 0x69, 0x6d, 0x65, 0x6f, 0x75, 0x74, 0x2a, 0xa3, 0x01, 0x0a, 0x19, 0x43, 0x6c, 0x69, 0x65, 0x6e, + 0x74, 0x4c, 0x69, 0x62, 0x72, 0x61, 0x72, 0x79, 0x4f, 0x72, 0x67, 0x61, 0x6e, 0x69, 0x7a, 0x61, + 0x74, 0x69, 0x6f, 0x6e, 0x12, 0x2b, 0x0a, 0x27, 0x43, 0x4c, 0x49, 0x45, 0x4e, 0x54, 0x5f, 0x4c, + 0x49, 0x42, 0x52, 0x41, 0x52, 0x59, 0x5f, 0x4f, 0x52, 0x47, 0x41, 0x4e, 0x49, 0x5a, 0x41, 0x54, + 0x49, 0x4f, 0x4e, 0x5f, 0x55, 0x4e, 0x53, 0x50, 0x45, 0x43, 0x49, 0x46, 0x49, 0x45, 0x44, 0x10, + 0x00, 0x12, 0x09, 0x0a, 0x05, 0x43, 0x4c, 0x4f, 0x55, 0x44, 0x10, 0x01, 0x12, 0x07, 0x0a, 0x03, + 0x41, 0x44, 0x53, 0x10, 0x02, 0x12, 0x0a, 0x0a, 0x06, 0x50, 0x48, 0x4f, 0x54, 0x4f, 0x53, 0x10, + 0x03, 0x12, 0x0f, 0x0a, 0x0b, 0x53, 0x54, 0x52, 0x45, 0x45, 0x54, 0x5f, 0x56, 0x49, 0x45, 0x57, + 0x10, 0x04, 0x12, 0x0c, 0x0a, 0x08, 0x53, 0x48, 0x4f, 0x50, 0x50, 0x49, 0x4e, 0x47, 0x10, 0x05, + 0x12, 0x07, 0x0a, 0x03, 0x47, 0x45, 0x4f, 0x10, 0x06, 0x12, 0x11, 0x0a, 0x0d, 0x47, 0x45, 0x4e, + 0x45, 0x52, 0x41, 0x54, 0x49, 0x56, 0x45, 0x5f, 0x41, 0x49, 0x10, 0x07, 0x2a, 0x67, 0x0a, 0x18, + 0x43, 0x6c, 0x69, 0x65, 0x6e, 0x74, 0x4c, 0x69, 0x62, 0x72, 0x61, 0x72, 0x79, 0x44, 0x65, 0x73, + 0x74, 0x69, 0x6e, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x12, 0x2a, 0x0a, 0x26, 0x43, 0x4c, 0x49, 0x45, + 0x4e, 0x54, 0x5f, 0x4c, 0x49, 0x42, 0x52, 0x41, 0x52, 0x59, 0x5f, 0x44, 0x45, 0x53, 0x54, 0x49, + 0x4e, 0x41, 0x54, 0x49, 0x4f, 0x4e, 0x5f, 0x55, 0x4e, 0x53, 0x50, 0x45, 0x43, 0x49, 0x46, 0x49, + 0x45, 0x44, 0x10, 0x00, 0x12, 0x0a, 0x0a, 0x06, 0x47, 0x49, 0x54, 0x48, 0x55, 0x42, 0x10, 
0x0a, + 0x12, 0x13, 0x0a, 0x0f, 0x50, 0x41, 0x43, 0x4b, 0x41, 0x47, 0x45, 0x5f, 0x4d, 0x41, 0x4e, 0x41, + 0x47, 0x45, 0x52, 0x10, 0x14, 0x3a, 0x4a, 0x0a, 0x10, 0x6d, 0x65, 0x74, 0x68, 0x6f, 0x64, 0x5f, + 0x73, 0x69, 0x67, 0x6e, 0x61, 0x74, 0x75, 0x72, 0x65, 0x12, 0x1e, 0x2e, 0x67, 0x6f, 0x6f, 0x67, + 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x4d, 0x65, 0x74, 0x68, + 0x6f, 0x64, 0x4f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x18, 0x9b, 0x08, 0x20, 0x03, 0x28, 0x09, + 0x52, 0x0f, 0x6d, 0x65, 0x74, 0x68, 0x6f, 0x64, 0x53, 0x69, 0x67, 0x6e, 0x61, 0x74, 0x75, 0x72, + 0x65, 0x3a, 0x43, 0x0a, 0x0c, 0x64, 0x65, 0x66, 0x61, 0x75, 0x6c, 0x74, 0x5f, 0x68, 0x6f, 0x73, + 0x74, 0x12, 0x1f, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, + 0x62, 0x75, 0x66, 0x2e, 0x53, 0x65, 0x72, 0x76, 0x69, 0x63, 0x65, 0x4f, 0x70, 0x74, 0x69, 0x6f, + 0x6e, 0x73, 0x18, 0x99, 0x08, 0x20, 0x01, 0x28, 0x09, 0x52, 0x0b, 0x64, 0x65, 0x66, 0x61, 0x75, + 0x6c, 0x74, 0x48, 0x6f, 0x73, 0x74, 0x3a, 0x43, 0x0a, 0x0c, 0x6f, 0x61, 0x75, 0x74, 0x68, 0x5f, + 0x73, 0x63, 0x6f, 0x70, 0x65, 0x73, 0x12, 0x1f, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x53, 0x65, 0x72, 0x76, 0x69, 0x63, 0x65, - 0x4f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x18, 0x99, 0x08, 0x20, 0x01, 0x28, 0x09, 0x52, 0x0b, - 0x64, 0x65, 0x66, 0x61, 0x75, 0x6c, 0x74, 0x48, 0x6f, 0x73, 0x74, 0x3a, 0x43, 0x0a, 0x0c, 0x6f, - 0x61, 0x75, 0x74, 0x68, 0x5f, 0x73, 0x63, 0x6f, 0x70, 0x65, 0x73, 0x12, 0x1f, 0x2e, 0x67, 0x6f, - 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x53, 0x65, - 0x72, 0x76, 0x69, 0x63, 0x65, 0x4f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x18, 0x9a, 0x08, 0x20, - 0x01, 0x28, 0x09, 0x52, 0x0b, 0x6f, 0x61, 0x75, 0x74, 0x68, 0x53, 0x63, 0x6f, 0x70, 0x65, 0x73, - 0x42, 0x69, 0x0a, 0x0e, 0x63, 0x6f, 0x6d, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x61, - 0x70, 0x69, 0x42, 0x0b, 0x43, 
0x6c, 0x69, 0x65, 0x6e, 0x74, 0x50, 0x72, 0x6f, 0x74, 0x6f, 0x50, - 0x01, 0x5a, 0x41, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x67, 0x6f, 0x6c, 0x61, 0x6e, 0x67, - 0x2e, 0x6f, 0x72, 0x67, 0x2f, 0x67, 0x65, 0x6e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x2f, 0x67, 0x6f, - 0x6f, 0x67, 0x6c, 0x65, 0x61, 0x70, 0x69, 0x73, 0x2f, 0x61, 0x70, 0x69, 0x2f, 0x61, 0x6e, 0x6e, - 0x6f, 0x74, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x3b, 0x61, 0x6e, 0x6e, 0x6f, 0x74, 0x61, 0x74, - 0x69, 0x6f, 0x6e, 0x73, 0xa2, 0x02, 0x04, 0x47, 0x41, 0x50, 0x49, 0x62, 0x06, 0x70, 0x72, 0x6f, - 0x74, 0x6f, 0x33, + 0x4f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x18, 0x9a, 0x08, 0x20, 0x01, 0x28, 0x09, 0x52, 0x0b, + 0x6f, 0x61, 0x75, 0x74, 0x68, 0x53, 0x63, 0x6f, 0x70, 0x65, 0x73, 0x42, 0x69, 0x0a, 0x0e, 0x63, + 0x6f, 0x6d, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x61, 0x70, 0x69, 0x42, 0x0b, 0x43, + 0x6c, 0x69, 0x65, 0x6e, 0x74, 0x50, 0x72, 0x6f, 0x74, 0x6f, 0x50, 0x01, 0x5a, 0x41, 0x67, 0x6f, + 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x67, 0x6f, 0x6c, 0x61, 0x6e, 0x67, 0x2e, 0x6f, 0x72, 0x67, 0x2f, + 0x67, 0x65, 0x6e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x2f, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x61, + 0x70, 0x69, 0x73, 0x2f, 0x61, 0x70, 0x69, 0x2f, 0x61, 0x6e, 0x6e, 0x6f, 0x74, 0x61, 0x74, 0x69, + 0x6f, 0x6e, 0x73, 0x3b, 0x61, 0x6e, 0x6e, 0x6f, 0x74, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0xa2, + 0x02, 0x04, 0x47, 0x41, 0x50, 0x49, 0x62, 0x06, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x33, } var ( diff --git a/cluster-autoscaler/vendor/google.golang.org/genproto/googleapis/api/tidyfix.go b/cluster-autoscaler/vendor/google.golang.org/genproto/googleapis/api/tidyfix.go new file mode 100644 index 000000000000..1d3f1b5b7efe --- /dev/null +++ b/cluster-autoscaler/vendor/google.golang.org/genproto/googleapis/api/tidyfix.go @@ -0,0 +1,23 @@ +// Copyright 2023 Google LLC +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. 
+// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +// This file, and the {{.RootMod}} import, won't actually become part of +// the resultant binary. +//go:build modhack +// +build modhack + +package api + +// Necessary for safely adding multi-module repo. See: https://github.com/golang/go/wiki/Modules#is-it-possible-to-add-a-module-to-a-multi-module-repository +import _ "google.golang.org/genproto/internal" diff --git a/cluster-autoscaler/vendor/google.golang.org/genproto/googleapis/rpc/LICENSE b/cluster-autoscaler/vendor/google.golang.org/genproto/googleapis/rpc/LICENSE new file mode 100644 index 000000000000..d64569567334 --- /dev/null +++ b/cluster-autoscaler/vendor/google.golang.org/genproto/googleapis/rpc/LICENSE @@ -0,0 +1,202 @@ + + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. 
For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. 
For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. 
If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. 
You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. 
Limitation of Liability. In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + + END OF TERMS AND CONDITIONS + + APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "[]" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. 
+ + Copyright [yyyy] [name of copyright owner] + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. diff --git a/cluster-autoscaler/vendor/google.golang.org/genproto/internal/doc.go b/cluster-autoscaler/vendor/google.golang.org/genproto/internal/doc.go new file mode 100644 index 000000000000..90e89b4aa3ff --- /dev/null +++ b/cluster-autoscaler/vendor/google.golang.org/genproto/internal/doc.go @@ -0,0 +1,17 @@ +// Copyright 2023 Google LLC +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +// This file makes internal an importable go package +// for use with backreferences from submodules. 
+package internal diff --git a/cluster-autoscaler/vendor/k8s.io/api/admissionregistration/v1/generated.proto b/cluster-autoscaler/vendor/k8s.io/api/admissionregistration/v1/generated.proto index cdf1f47655fd..a8903621c8e2 100644 --- a/cluster-autoscaler/vendor/k8s.io/api/admissionregistration/v1/generated.proto +++ b/cluster-autoscaler/vendor/k8s.io/api/admissionregistration/v1/generated.proto @@ -215,7 +215,7 @@ message MutatingWebhook { // - If failurePolicy=Fail, reject the request // - If failurePolicy=Ignore, the error is ignored and the webhook is skipped // - // This is an alpha feature and managed by the AdmissionWebhookMatchConditions feature gate. + // This is a beta feature and managed by the AdmissionWebhookMatchConditions feature gate. // // +patchMergeKey=name // +patchStrategy=merge @@ -473,7 +473,7 @@ message ValidatingWebhook { // - If failurePolicy=Fail, reject the request // - If failurePolicy=Ignore, the error is ignored and the webhook is skipped // - // This is an alpha feature and managed by the AdmissionWebhookMatchConditions feature gate. + // This is a beta feature and managed by the AdmissionWebhookMatchConditions feature gate. // // +patchMergeKey=name // +patchStrategy=merge diff --git a/cluster-autoscaler/vendor/k8s.io/api/admissionregistration/v1/types.go b/cluster-autoscaler/vendor/k8s.io/api/admissionregistration/v1/types.go index 74f17d54a2bb..07ed7a6246bd 100644 --- a/cluster-autoscaler/vendor/k8s.io/api/admissionregistration/v1/types.go +++ b/cluster-autoscaler/vendor/k8s.io/api/admissionregistration/v1/types.go @@ -320,7 +320,7 @@ type ValidatingWebhook struct { // - If failurePolicy=Fail, reject the request // - If failurePolicy=Ignore, the error is ignored and the webhook is skipped // - // This is an alpha feature and managed by the AdmissionWebhookMatchConditions feature gate. + // This is a beta feature and managed by the AdmissionWebhookMatchConditions feature gate. 
// // +patchMergeKey=name // +patchStrategy=merge @@ -489,7 +489,7 @@ type MutatingWebhook struct { // - If failurePolicy=Fail, reject the request // - If failurePolicy=Ignore, the error is ignored and the webhook is skipped // - // This is an alpha feature and managed by the AdmissionWebhookMatchConditions feature gate. + // This is a beta feature and managed by the AdmissionWebhookMatchConditions feature gate. // // +patchMergeKey=name // +patchStrategy=merge diff --git a/cluster-autoscaler/vendor/k8s.io/api/admissionregistration/v1/types_swagger_doc_generated.go b/cluster-autoscaler/vendor/k8s.io/api/admissionregistration/v1/types_swagger_doc_generated.go index ce306b307a8b..c41cceb2f24b 100644 --- a/cluster-autoscaler/vendor/k8s.io/api/admissionregistration/v1/types_swagger_doc_generated.go +++ b/cluster-autoscaler/vendor/k8s.io/api/admissionregistration/v1/types_swagger_doc_generated.go @@ -50,7 +50,7 @@ var map_MutatingWebhook = map[string]string{ "timeoutSeconds": "TimeoutSeconds specifies the timeout for this webhook. After the timeout passes, the webhook call will be ignored or the API call will fail based on the failure policy. The timeout value must be between 1 and 30 seconds. Default to 10 seconds.", "admissionReviewVersions": "AdmissionReviewVersions is an ordered list of preferred `AdmissionReview` versions the Webhook expects. API server will try to use first version in the list which it supports. If none of the versions specified in this list supported by API server, validation will fail for this object. If a persisted webhook configuration specifies allowed versions and does not include any versions known to the API Server, calls to the webhook will fail and be subject to the failure policy.", "reinvocationPolicy": "reinvocationPolicy indicates whether this webhook should be called multiple times as part of a single admission evaluation. 
Allowed values are \"Never\" and \"IfNeeded\".\n\nNever: the webhook will not be called more than once in a single admission evaluation.\n\nIfNeeded: the webhook will be called at least one additional time as part of the admission evaluation if the object being admitted is modified by other admission plugins after the initial webhook call. Webhooks that specify this option *must* be idempotent, able to process objects they previously admitted. Note: * the number of additional invocations is not guaranteed to be exactly one. * if additional invocations result in further modifications to the object, webhooks are not guaranteed to be invoked again. * webhooks that use this option may be reordered to minimize the number of additional invocations. * to validate an object after all mutations are guaranteed complete, use a validating admission webhook instead.\n\nDefaults to \"Never\".", - "matchConditions": "MatchConditions is a list of conditions that must be met for a request to be sent to this webhook. Match conditions filter requests that have already been matched by the rules, namespaceSelector, and objectSelector. An empty list of matchConditions matches all requests. There are a maximum of 64 match conditions allowed.\n\nThe exact matching logic is (in order):\n 1. If ANY matchCondition evaluates to FALSE, the webhook is skipped.\n 2. If ALL matchConditions evaluate to TRUE, the webhook is called.\n 3. If any matchCondition evaluates to an error (but none are FALSE):\n - If failurePolicy=Fail, reject the request\n - If failurePolicy=Ignore, the error is ignored and the webhook is skipped\n\nThis is an alpha feature and managed by the AdmissionWebhookMatchConditions feature gate.", + "matchConditions": "MatchConditions is a list of conditions that must be met for a request to be sent to this webhook. Match conditions filter requests that have already been matched by the rules, namespaceSelector, and objectSelector. 
An empty list of matchConditions matches all requests. There are a maximum of 64 match conditions allowed.\n\nThe exact matching logic is (in order):\n 1. If ANY matchCondition evaluates to FALSE, the webhook is skipped.\n 2. If ALL matchConditions evaluate to TRUE, the webhook is called.\n 3. If any matchCondition evaluates to an error (but none are FALSE):\n - If failurePolicy=Fail, reject the request\n - If failurePolicy=Ignore, the error is ignored and the webhook is skipped\n\nThis is a beta feature and managed by the AdmissionWebhookMatchConditions feature gate.", } func (MutatingWebhook) SwaggerDoc() map[string]string { @@ -122,7 +122,7 @@ var map_ValidatingWebhook = map[string]string{ "sideEffects": "SideEffects states whether this webhook has side effects. Acceptable values are: None, NoneOnDryRun (webhooks created via v1beta1 may also specify Some or Unknown). Webhooks with side effects MUST implement a reconciliation system, since a request may be rejected by a future step in the admission chain and the side effects therefore need to be undone. Requests with the dryRun attribute will be auto-rejected if they match a webhook with sideEffects == Unknown or Some.", "timeoutSeconds": "TimeoutSeconds specifies the timeout for this webhook. After the timeout passes, the webhook call will be ignored or the API call will fail based on the failure policy. The timeout value must be between 1 and 30 seconds. Default to 10 seconds.", "admissionReviewVersions": "AdmissionReviewVersions is an ordered list of preferred `AdmissionReview` versions the Webhook expects. API server will try to use first version in the list which it supports. If none of the versions specified in this list supported by API server, validation will fail for this object. 
If a persisted webhook configuration specifies allowed versions and does not include any versions known to the API Server, calls to the webhook will fail and be subject to the failure policy.", - "matchConditions": "MatchConditions is a list of conditions that must be met for a request to be sent to this webhook. Match conditions filter requests that have already been matched by the rules, namespaceSelector, and objectSelector. An empty list of matchConditions matches all requests. There are a maximum of 64 match conditions allowed.\n\nThe exact matching logic is (in order):\n 1. If ANY matchCondition evaluates to FALSE, the webhook is skipped.\n 2. If ALL matchConditions evaluate to TRUE, the webhook is called.\n 3. If any matchCondition evaluates to an error (but none are FALSE):\n - If failurePolicy=Fail, reject the request\n - If failurePolicy=Ignore, the error is ignored and the webhook is skipped\n\nThis is an alpha feature and managed by the AdmissionWebhookMatchConditions feature gate.", + "matchConditions": "MatchConditions is a list of conditions that must be met for a request to be sent to this webhook. Match conditions filter requests that have already been matched by the rules, namespaceSelector, and objectSelector. An empty list of matchConditions matches all requests. There are a maximum of 64 match conditions allowed.\n\nThe exact matching logic is (in order):\n 1. If ANY matchCondition evaluates to FALSE, the webhook is skipped.\n 2. If ALL matchConditions evaluate to TRUE, the webhook is called.\n 3. 
If any matchCondition evaluates to an error (but none are FALSE):\n - If failurePolicy=Fail, reject the request\n - If failurePolicy=Ignore, the error is ignored and the webhook is skipped\n\nThis is a beta feature and managed by the AdmissionWebhookMatchConditions feature gate.", } func (ValidatingWebhook) SwaggerDoc() map[string]string { diff --git a/cluster-autoscaler/vendor/k8s.io/api/admissionregistration/v1alpha1/generated.pb.go b/cluster-autoscaler/vendor/k8s.io/api/admissionregistration/v1alpha1/generated.pb.go index 7465350263bd..4f1373ec5a7b 100644 --- a/cluster-autoscaler/vendor/k8s.io/api/admissionregistration/v1alpha1/generated.pb.go +++ b/cluster-autoscaler/vendor/k8s.io/api/admissionregistration/v1alpha1/generated.pb.go @@ -493,6 +493,34 @@ func (m *Validation) XXX_DiscardUnknown() { var xxx_messageInfo_Validation proto.InternalMessageInfo +func (m *Variable) Reset() { *m = Variable{} } +func (*Variable) ProtoMessage() {} +func (*Variable) Descriptor() ([]byte, []int) { + return fileDescriptor_c3be8d256e3ae3cf, []int{16} +} +func (m *Variable) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *Variable) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err + } + return b[:n], nil +} +func (m *Variable) XXX_Merge(src proto.Message) { + xxx_messageInfo_Variable.Merge(m, src) +} +func (m *Variable) XXX_Size() int { + return m.Size() +} +func (m *Variable) XXX_DiscardUnknown() { + xxx_messageInfo_Variable.DiscardUnknown(m) +} + +var xxx_messageInfo_Variable proto.InternalMessageInfo + func init() { proto.RegisterType((*AuditAnnotation)(nil), "k8s.io.api.admissionregistration.v1alpha1.AuditAnnotation") proto.RegisterType((*ExpressionWarning)(nil), "k8s.io.api.admissionregistration.v1alpha1.ExpressionWarning") @@ -510,6 +538,7 @@ func init() { proto.RegisterType((*ValidatingAdmissionPolicySpec)(nil), 
"k8s.io.api.admissionregistration.v1alpha1.ValidatingAdmissionPolicySpec") proto.RegisterType((*ValidatingAdmissionPolicyStatus)(nil), "k8s.io.api.admissionregistration.v1alpha1.ValidatingAdmissionPolicyStatus") proto.RegisterType((*Validation)(nil), "k8s.io.api.admissionregistration.v1alpha1.Validation") + proto.RegisterType((*Variable)(nil), "k8s.io.api.admissionregistration.v1alpha1.Variable") } func init() { @@ -517,95 +546,102 @@ func init() { } var fileDescriptor_c3be8d256e3ae3cf = []byte{ - // 1407 bytes of a gzipped FileDescriptorProto + // 1509 bytes of a gzipped FileDescriptorProto 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xbc, 0x58, 0xcb, 0x6f, 0x1b, 0x45, - 0x18, 0xcf, 0xc6, 0x4e, 0x9a, 0x8c, 0xf3, 0xb0, 0x87, 0x56, 0x75, 0x23, 0x6a, 0x47, 0xab, 0x0a, - 0x35, 0x12, 0xec, 0x92, 0xb4, 0x50, 0x40, 0x48, 0x28, 0xdb, 0x17, 0x7d, 0xa4, 0x89, 0xa6, 0x28, - 0x91, 0x10, 0x95, 0x98, 0xec, 0x4e, 0xec, 0xa9, 0xbd, 0x0f, 0x76, 0xd6, 0xa1, 0x11, 0x48, 0x54, - 0xe2, 0x02, 0x37, 0x0e, 0x5c, 0xf8, 0x5f, 0xb8, 0x70, 0xeb, 0xb1, 0xc7, 0x72, 0xc0, 0x22, 0xe6, - 0xc2, 0x5f, 0x00, 0x52, 0x2e, 0xa0, 0x99, 0x9d, 0x7d, 0x3b, 0xc4, 0x2e, 0x81, 0x9b, 0xf7, 0x7b, - 0xfc, 0x7e, 0xf3, 0x7d, 0xf3, 0x7d, 0x33, 0xdf, 0x18, 0xa0, 0xce, 0x3b, 0x4c, 0xa3, 0xae, 0xde, - 0xe9, 0xed, 0x12, 0xdf, 0x21, 0x01, 0x61, 0xfa, 0x3e, 0x71, 0x2c, 0xd7, 0xd7, 0xa5, 0x02, 0x7b, - 0x54, 0xc7, 0x96, 0x4d, 0x19, 0xa3, 0xae, 0xe3, 0x93, 0x16, 0x65, 0x81, 0x8f, 0x03, 0xea, 0x3a, - 0xfa, 0xfe, 0x2a, 0xee, 0x7a, 0x6d, 0xbc, 0xaa, 0xb7, 0x88, 0x43, 0x7c, 0x1c, 0x10, 0x4b, 0xf3, - 0x7c, 0x37, 0x70, 0xe1, 0x4a, 0xe8, 0xaa, 0x61, 0x8f, 0x6a, 0x43, 0x5d, 0xb5, 0xc8, 0x75, 0xe9, - 0x8d, 0x16, 0x0d, 0xda, 0xbd, 0x5d, 0xcd, 0x74, 0x6d, 0xbd, 0xe5, 0xb6, 0x5c, 0x5d, 0x20, 0xec, - 0xf6, 0xf6, 0xc4, 0x97, 0xf8, 0x10, 0xbf, 0x42, 0xe4, 0xa5, 0x2b, 0x23, 0x2c, 0x2a, 0xbf, 0x9c, - 0xa5, 0xab, 0x89, 0x93, 0x8d, 0xcd, 0x36, 0x75, 0x88, 0x7f, 0xa0, 0x7b, 0x9d, 0x16, 0x17, 0x30, - 0xdd, 0x26, 0x01, 0x1e, 
0xe6, 0xa5, 0x1f, 0xe7, 0xe5, 0xf7, 0x9c, 0x80, 0xda, 0xa4, 0xe0, 0xf0, - 0xf6, 0x49, 0x0e, 0xcc, 0x6c, 0x13, 0x1b, 0xe7, 0xfd, 0x54, 0x06, 0x16, 0xd7, 0x7b, 0x16, 0x0d, - 0xd6, 0x1d, 0xc7, 0x0d, 0x44, 0x10, 0xf0, 0x22, 0x28, 0x75, 0xc8, 0x41, 0x5d, 0x59, 0x56, 0x2e, - 0xcf, 0x1a, 0x95, 0x67, 0xfd, 0xe6, 0xc4, 0xa0, 0xdf, 0x2c, 0xdd, 0x23, 0x07, 0x88, 0xcb, 0xe1, - 0x3a, 0x58, 0xdc, 0xc7, 0xdd, 0x1e, 0xb9, 0xf9, 0xc4, 0xf3, 0x89, 0x48, 0x41, 0x7d, 0x52, 0x98, - 0x9e, 0x97, 0xa6, 0x8b, 0xdb, 0x59, 0x35, 0xca, 0xdb, 0xab, 0x5d, 0x50, 0x4b, 0xbe, 0x76, 0xb0, - 0xef, 0x50, 0xa7, 0x05, 0x5f, 0x07, 0x33, 0x7b, 0x94, 0x74, 0x2d, 0x44, 0xf6, 0x24, 0x60, 0x55, - 0x02, 0xce, 0xdc, 0x92, 0x72, 0x14, 0x5b, 0xc0, 0x15, 0x70, 0xe6, 0xf3, 0xd0, 0xb1, 0x5e, 0x12, - 0xc6, 0x8b, 0xd2, 0xf8, 0x8c, 0xc4, 0x43, 0x91, 0x5e, 0xdd, 0x03, 0x0b, 0x1b, 0x38, 0x30, 0xdb, - 0xd7, 0x5d, 0xc7, 0xa2, 0x22, 0xc2, 0x65, 0x50, 0x76, 0xb0, 0x4d, 0x64, 0x88, 0x73, 0xd2, 0xb3, - 0xfc, 0x00, 0xdb, 0x04, 0x09, 0x0d, 0x5c, 0x03, 0x80, 0xe4, 0xe3, 0x83, 0xd2, 0x0e, 0xa4, 0x42, - 0x4b, 0x59, 0xa9, 0x3f, 0x97, 0x25, 0x11, 0x22, 0xcc, 0xed, 0xf9, 0x26, 0x61, 0xf0, 0x09, 0xa8, - 0x71, 0x38, 0xe6, 0x61, 0x93, 0x3c, 0x24, 0x5d, 0x62, 0x06, 0xae, 0x2f, 0x58, 0x2b, 0x6b, 0x57, - 0xb4, 0xa4, 0x4e, 0xe3, 0x1d, 0xd3, 0xbc, 0x4e, 0x8b, 0x0b, 0x98, 0xc6, 0x0b, 0x43, 0xdb, 0x5f, - 0xd5, 0xee, 0xe3, 0x5d, 0xd2, 0x8d, 0x5c, 0x8d, 0x73, 0x83, 0x7e, 0xb3, 0xf6, 0x20, 0x8f, 0x88, - 0x8a, 0x24, 0xd0, 0x05, 0x0b, 0xee, 0xee, 0x63, 0x62, 0x06, 0x31, 0xed, 0xe4, 0xcb, 0xd3, 0xc2, - 0x41, 0xbf, 0xb9, 0xb0, 0x99, 0x81, 0x43, 0x39, 0x78, 0xf8, 0x15, 0x98, 0xf7, 0x65, 0xdc, 0xa8, - 0xd7, 0x25, 0xac, 0x5e, 0x5a, 0x2e, 0x5d, 0xae, 0xac, 0x19, 0xda, 0xc8, 0xed, 0xa8, 0xf1, 0xc0, - 0x2c, 0xee, 0xbc, 0x43, 0x83, 0xf6, 0xa6, 0x47, 0x42, 0x3d, 0x33, 0xce, 0xc9, 0xc4, 0xcf, 0xa3, - 0x34, 0x01, 0xca, 0xf2, 0xc1, 0xef, 0x15, 0x70, 0x96, 0x3c, 0x31, 0xbb, 0x3d, 0x8b, 0x64, 0xec, - 0xea, 0xe5, 0x53, 0x5b, 0xc8, 0xab, 0x72, 0x21, 0x67, 0x6f, 
0x0e, 0xe1, 0x41, 0x43, 0xd9, 0xe1, - 0x0d, 0x50, 0xb1, 0x79, 0x51, 0x6c, 0xb9, 0x5d, 0x6a, 0x1e, 0xd4, 0xcf, 0x88, 0x52, 0x52, 0x07, - 0xfd, 0x66, 0x65, 0x23, 0x11, 0x1f, 0xf5, 0x9b, 0x8b, 0xa9, 0xcf, 0x8f, 0x0e, 0x3c, 0x82, 0xd2, - 0x6e, 0xea, 0x0b, 0x05, 0x9c, 0x3f, 0x66, 0x55, 0xf0, 0x5a, 0x92, 0x79, 0x51, 0x1a, 0x75, 0x65, - 0xb9, 0x74, 0x79, 0xd6, 0xa8, 0xa5, 0x33, 0x26, 0x14, 0x28, 0x6b, 0x07, 0xbf, 0x56, 0x00, 0xf4, - 0x0b, 0x78, 0xb2, 0x50, 0xae, 0x8d, 0x92, 0x2f, 0x6d, 0x48, 0x92, 0x96, 0x64, 0x92, 0x60, 0x51, - 0x87, 0x86, 0xd0, 0xa9, 0x18, 0xcc, 0x6e, 0x61, 0x1f, 0xdb, 0xf7, 0xa8, 0x63, 0xf1, 0xbe, 0xc3, - 0x1e, 0xdd, 0x26, 0xbe, 0xe8, 0x3b, 0x25, 0xdb, 0x77, 0xeb, 0x5b, 0x77, 0xa4, 0x06, 0xa5, 0xac, - 0x78, 0x37, 0x77, 0xa8, 0x63, 0xc9, 0x2e, 0x8d, 0xbb, 0x99, 0xe3, 0x21, 0xa1, 0x51, 0x1f, 0x81, - 0x19, 0x41, 0xc1, 0x0f, 0x8e, 0x93, 0x7b, 0x5f, 0x07, 0xb3, 0x71, 0x3f, 0x49, 0xd0, 0x9a, 0x34, - 0x9b, 0x8d, 0x7b, 0x0f, 0x25, 0x36, 0xea, 0x0f, 0x0a, 0x98, 0xe3, 0x5b, 0x76, 0xbd, 0x4d, 0xcc, - 0x0e, 0x3f, 0xca, 0xbe, 0x51, 0x00, 0x24, 0xf9, 0x03, 0x2e, 0xdc, 0x97, 0xca, 0xda, 0xfb, 0x63, - 0x14, 0x62, 0xe1, 0x94, 0x4c, 0xb2, 0x5b, 0x50, 0x31, 0x34, 0x84, 0x53, 0xfd, 0x65, 0x12, 0x5c, - 0xd8, 0xc6, 0x5d, 0x6a, 0xe1, 0x80, 0x3a, 0xad, 0xf5, 0x88, 0x2e, 0x2c, 0x2b, 0xf8, 0x29, 0x98, - 0xe1, 0x1d, 0x6f, 0xe1, 0x00, 0xcb, 0x63, 0xe9, 0xcd, 0xd1, 0xce, 0x87, 0xf0, 0x30, 0xd8, 0x20, - 0x01, 0x4e, 0xb6, 0x27, 0x91, 0xa1, 0x18, 0x15, 0x3e, 0x06, 0x65, 0xe6, 0x11, 0x53, 0x16, 0xd5, - 0x87, 0x63, 0xc4, 0x7e, 0xec, 0xaa, 0x1f, 0x7a, 0xc4, 0x4c, 0x36, 0x8e, 0x7f, 0x21, 0xc1, 0x01, - 0x7d, 0x30, 0xcd, 0x02, 0x1c, 0xf4, 0x98, 0xb8, 0x12, 0x2a, 0x6b, 0x77, 0x4f, 0x85, 0x4d, 0x20, - 0x1a, 0x0b, 0x92, 0x6f, 0x3a, 0xfc, 0x46, 0x92, 0x49, 0xfd, 0x53, 0x01, 0xcb, 0xc7, 0xfa, 0x1a, - 0xd4, 0xb1, 0x78, 0x3d, 0xfc, 0xf7, 0x69, 0xfe, 0x2c, 0x93, 0xe6, 0xcd, 0xd3, 0x08, 0x5c, 0x2e, - 0xfe, 0xb8, 0x6c, 0xab, 0x7f, 0x28, 0xe0, 0xd2, 0x49, 0xce, 0xf7, 0x29, 0x0b, 0xe0, 0x27, 0x85, - 
0xe8, 0xb5, 0x11, 0x2f, 0x21, 0xca, 0xc2, 0xd8, 0xe3, 0x41, 0x20, 0x92, 0xa4, 0x22, 0xf7, 0xc0, - 0x14, 0x0d, 0x88, 0xcd, 0x8f, 0x2d, 0xde, 0x5d, 0xf7, 0x4e, 0x31, 0x74, 0x63, 0x5e, 0xf2, 0x4e, - 0xdd, 0xe1, 0x0c, 0x28, 0x24, 0x52, 0xbf, 0x2d, 0x9d, 0x1c, 0x38, 0xcf, 0x13, 0x3f, 0xcc, 0x3c, - 0x21, 0x7c, 0x90, 0x1c, 0x38, 0xf1, 0x36, 0x6e, 0xc5, 0x1a, 0x94, 0xb2, 0x82, 0x8f, 0xc0, 0x8c, - 0x27, 0x8f, 0xaa, 0x21, 0x37, 0xf6, 0x49, 0x11, 0x45, 0xa7, 0x9c, 0x31, 0xc7, 0xb3, 0x15, 0x7d, - 0xa1, 0x18, 0x12, 0xf6, 0xc0, 0x82, 0x9d, 0x19, 0x51, 0x64, 0xab, 0xbc, 0x3b, 0x06, 0x49, 0x76, - 0xc6, 0x09, 0x87, 0x83, 0xac, 0x0c, 0xe5, 0x48, 0xe0, 0x0e, 0xa8, 0xed, 0xcb, 0x8c, 0xb9, 0xce, - 0xba, 0x19, 0xde, 0x33, 0x65, 0x71, 0x4d, 0xad, 0xf0, 0x91, 0x66, 0x3b, 0xaf, 0x3c, 0xea, 0x37, - 0xab, 0x79, 0x21, 0x2a, 0x62, 0xa8, 0xbf, 0x2b, 0xe0, 0xe2, 0xb1, 0x7b, 0xf1, 0x3f, 0x54, 0x1f, - 0xcd, 0x56, 0xdf, 0x8d, 0x53, 0xa9, 0xbe, 0xe1, 0x65, 0xf7, 0xe3, 0xd4, 0x3f, 0x84, 0x2a, 0xea, - 0x0d, 0x83, 0x59, 0x2f, 0xba, 0x49, 0x65, 0xac, 0x57, 0xc7, 0x2d, 0x1e, 0xee, 0x6b, 0xcc, 0xf3, - 0xab, 0x2e, 0xfe, 0x44, 0x09, 0x2a, 0xfc, 0x02, 0x54, 0x6d, 0x39, 0x4b, 0x73, 0x00, 0xea, 0x04, - 0xd1, 0xbc, 0xf0, 0x2f, 0x2a, 0xe8, 0xec, 0xa0, 0xdf, 0xac, 0x6e, 0xe4, 0x60, 0x51, 0x81, 0x08, - 0x76, 0x41, 0x25, 0xa9, 0x80, 0x68, 0xc0, 0x7c, 0xeb, 0x25, 0x52, 0xee, 0x3a, 0xc6, 0x2b, 0x32, - 0xc7, 0x95, 0x44, 0xc6, 0x50, 0x1a, 0x1e, 0xde, 0x07, 0xf3, 0x7b, 0x98, 0x76, 0x7b, 0x3e, 0x91, - 0xa3, 0x5b, 0x59, 0x34, 0xf0, 0x6b, 0x7c, 0xac, 0xba, 0x95, 0x56, 0x1c, 0xf5, 0x9b, 0xb5, 0x8c, - 0x40, 0x8c, 0x6f, 0x59, 0x67, 0xf8, 0x54, 0x01, 0x55, 0x9c, 0x7d, 0x68, 0xb1, 0xfa, 0x94, 0x88, - 0xe0, 0xbd, 0x31, 0x22, 0xc8, 0xbd, 0xd5, 0x8c, 0xba, 0x0c, 0xa3, 0x9a, 0x53, 0x30, 0x54, 0x60, - 0x83, 0x5f, 0x82, 0x45, 0x3b, 0xf3, 0x0e, 0x62, 0xf5, 0x69, 0xb1, 0x80, 0xb1, 0xb7, 0x2e, 0x46, - 0x48, 0xde, 0x7c, 0x59, 0x39, 0x43, 0x79, 0x2a, 0xf5, 0xa7, 0x49, 0xd0, 0x3c, 0xe1, 0x92, 0x85, - 0x77, 0x01, 0x74, 0x77, 0x19, 0xf1, 
0xf7, 0x89, 0x75, 0x3b, 0x7c, 0xa7, 0x46, 0x53, 0x60, 0x29, - 0x19, 0x7c, 0x36, 0x0b, 0x16, 0x68, 0x88, 0x17, 0xb4, 0xc1, 0x5c, 0x90, 0x9a, 0xc9, 0xc6, 0x99, - 0x6a, 0x65, 0xa8, 0xe9, 0x91, 0xce, 0xa8, 0x0e, 0xfa, 0xcd, 0xcc, 0x90, 0x87, 0x32, 0xf0, 0xd0, - 0x04, 0xc0, 0x4c, 0xf2, 0x1a, 0x96, 0xa6, 0x3e, 0xda, 0x41, 0x93, 0x64, 0x33, 0xbe, 0x1c, 0x52, - 0x89, 0x4c, 0xc1, 0xaa, 0x7f, 0x29, 0x00, 0x24, 0xf5, 0x0a, 0x2f, 0x81, 0xd4, 0x53, 0x54, 0xde, - 0x2f, 0x65, 0x0e, 0x81, 0x52, 0x72, 0xfe, 0x52, 0xb6, 0x09, 0x63, 0xb8, 0x15, 0x0d, 0xb3, 0xf1, - 0x4b, 0x79, 0x23, 0x14, 0xa3, 0x48, 0x0f, 0x77, 0xc0, 0xb4, 0x4f, 0x30, 0x73, 0x1d, 0xf9, 0xa6, - 0xfe, 0x80, 0x0f, 0x3c, 0x48, 0x48, 0x8e, 0xfa, 0xcd, 0xd5, 0x51, 0xfe, 0xc9, 0xd0, 0xe4, 0x7c, - 0x24, 0x9c, 0x90, 0x84, 0x83, 0xb7, 0x41, 0x4d, 0x72, 0xa4, 0x16, 0x1c, 0xf6, 0xd3, 0x05, 0xb9, - 0x9a, 0xda, 0x46, 0xde, 0x00, 0x15, 0x7d, 0x8c, 0xcd, 0x67, 0x87, 0x8d, 0x89, 0xe7, 0x87, 0x8d, - 0x89, 0x17, 0x87, 0x8d, 0x89, 0xa7, 0x83, 0x86, 0xf2, 0x6c, 0xd0, 0x50, 0x9e, 0x0f, 0x1a, 0xca, - 0x8b, 0x41, 0x43, 0xf9, 0x75, 0xd0, 0x50, 0xbe, 0xfb, 0xad, 0x31, 0xf1, 0xf1, 0xca, 0xc8, 0xff, - 0x1e, 0xfd, 0x1d, 0x00, 0x00, 0xff, 0xff, 0x08, 0xaf, 0xaa, 0x52, 0x82, 0x12, 0x00, 0x00, + 0x18, 0xcf, 0xc6, 0x6e, 0x12, 0x8f, 0xf3, 0xf2, 0xd0, 0x2a, 0x6e, 0xa0, 0xde, 0x68, 0x55, 0xa1, + 0x46, 0x82, 0x35, 0x49, 0x0b, 0x85, 0x0a, 0x09, 0x65, 0xfb, 0xa2, 0x8f, 0x3c, 0x34, 0x45, 0x89, + 0x84, 0x40, 0x62, 0xb2, 0x3b, 0x71, 0xa6, 0xf6, 0x3e, 0xd8, 0x59, 0x9b, 0x46, 0x20, 0x51, 0x89, + 0x0b, 0xdc, 0x38, 0x70, 0xe1, 0xca, 0x9f, 0xc0, 0x7f, 0xc0, 0xad, 0xc7, 0x1e, 0xcb, 0x01, 0x8b, + 0x9a, 0x0b, 0x7f, 0x01, 0x48, 0xb9, 0x80, 0x66, 0x76, 0xf6, 0x69, 0x9b, 0xd8, 0x25, 0x70, 0xf3, + 0x7c, 0x8f, 0xdf, 0xf7, 0x98, 0xef, 0xfb, 0xf6, 0x1b, 0x03, 0xd4, 0x7c, 0x9b, 0xe9, 0xd4, 0xad, + 0x37, 0xdb, 0xfb, 0xc4, 0x77, 0x48, 0x40, 0x58, 0xbd, 0x43, 0x1c, 0xcb, 0xf5, 0xeb, 0x92, 0x81, + 0x3d, 0x5a, 0xc7, 0x96, 0x4d, 0x19, 0xa3, 0xae, 0xe3, 0x93, 0x06, 0x65, 0x81, 
0x8f, 0x03, 0xea, + 0x3a, 0xf5, 0xce, 0x1a, 0x6e, 0x79, 0x87, 0x78, 0xad, 0xde, 0x20, 0x0e, 0xf1, 0x71, 0x40, 0x2c, + 0xdd, 0xf3, 0xdd, 0xc0, 0x85, 0xab, 0xa1, 0xaa, 0x8e, 0x3d, 0xaa, 0x0f, 0x54, 0xd5, 0x23, 0xd5, + 0xe5, 0xd7, 0x1b, 0x34, 0x38, 0x6c, 0xef, 0xeb, 0xa6, 0x6b, 0xd7, 0x1b, 0x6e, 0xc3, 0xad, 0x0b, + 0x84, 0xfd, 0xf6, 0x81, 0x38, 0x89, 0x83, 0xf8, 0x15, 0x22, 0x2f, 0x5f, 0x1e, 0xc1, 0xa9, 0xbc, + 0x3b, 0xcb, 0x57, 0x12, 0x25, 0x1b, 0x9b, 0x87, 0xd4, 0x21, 0xfe, 0x51, 0xdd, 0x6b, 0x36, 0x38, + 0x81, 0xd5, 0x6d, 0x12, 0xe0, 0x41, 0x5a, 0xf5, 0x61, 0x5a, 0x7e, 0xdb, 0x09, 0xa8, 0x4d, 0xfa, + 0x14, 0xde, 0x3a, 0x49, 0x81, 0x99, 0x87, 0xc4, 0xc6, 0x79, 0x3d, 0x8d, 0x81, 0x85, 0x8d, 0xb6, + 0x45, 0x83, 0x0d, 0xc7, 0x71, 0x03, 0x11, 0x04, 0xbc, 0x00, 0x0a, 0x4d, 0x72, 0x54, 0x55, 0x56, + 0x94, 0x4b, 0x25, 0xa3, 0xfc, 0xa4, 0xab, 0x4e, 0xf4, 0xba, 0x6a, 0xe1, 0x1e, 0x39, 0x42, 0x9c, + 0x0e, 0x37, 0xc0, 0x42, 0x07, 0xb7, 0xda, 0xe4, 0xe6, 0x23, 0xcf, 0x27, 0x22, 0x05, 0xd5, 0x49, + 0x21, 0xba, 0x24, 0x45, 0x17, 0x76, 0xb3, 0x6c, 0x94, 0x97, 0xd7, 0x5a, 0xa0, 0x92, 0x9c, 0xf6, + 0xb0, 0xef, 0x50, 0xa7, 0x01, 0x5f, 0x03, 0x33, 0x07, 0x94, 0xb4, 0x2c, 0x44, 0x0e, 0x24, 0xe0, + 0xa2, 0x04, 0x9c, 0xb9, 0x25, 0xe9, 0x28, 0x96, 0x80, 0xab, 0x60, 0xfa, 0xb3, 0x50, 0xb1, 0x5a, + 0x10, 0xc2, 0x0b, 0x52, 0x78, 0x5a, 0xe2, 0xa1, 0x88, 0xaf, 0x1d, 0x80, 0xf9, 0x4d, 0x1c, 0x98, + 0x87, 0xd7, 0x5d, 0xc7, 0xa2, 0x22, 0xc2, 0x15, 0x50, 0x74, 0xb0, 0x4d, 0x64, 0x88, 0xb3, 0x52, + 0xb3, 0xb8, 0x85, 0x6d, 0x82, 0x04, 0x07, 0xae, 0x03, 0x40, 0xf2, 0xf1, 0x41, 0x29, 0x07, 0x52, + 0xa1, 0xa5, 0xa4, 0xb4, 0x9f, 0x8b, 0xd2, 0x10, 0x22, 0xcc, 0x6d, 0xfb, 0x26, 0x61, 0xf0, 0x11, + 0xa8, 0x70, 0x38, 0xe6, 0x61, 0x93, 0x3c, 0x20, 0x2d, 0x62, 0x06, 0xae, 0x2f, 0xac, 0x96, 0xd7, + 0x2f, 0xeb, 0x49, 0x9d, 0xc6, 0x37, 0xa6, 0x7b, 0xcd, 0x06, 0x27, 0x30, 0x9d, 0x17, 0x86, 0xde, + 0x59, 0xd3, 0xef, 0xe3, 0x7d, 0xd2, 0x8a, 0x54, 0x8d, 0x73, 0xbd, 0xae, 0x5a, 0xd9, 0xca, 0x23, + 0xa2, 0x7e, 0x23, 
0xd0, 0x05, 0xf3, 0xee, 0xfe, 0x43, 0x62, 0x06, 0xb1, 0xd9, 0xc9, 0x17, 0x37, + 0x0b, 0x7b, 0x5d, 0x75, 0x7e, 0x3b, 0x03, 0x87, 0x72, 0xf0, 0xf0, 0x4b, 0x30, 0xe7, 0xcb, 0xb8, + 0x51, 0xbb, 0x45, 0x58, 0xb5, 0xb0, 0x52, 0xb8, 0x54, 0x5e, 0x37, 0xf4, 0x91, 0xdb, 0x51, 0xe7, + 0x81, 0x59, 0x5c, 0x79, 0x8f, 0x06, 0x87, 0xdb, 0x1e, 0x09, 0xf9, 0xcc, 0x38, 0x27, 0x13, 0x3f, + 0x87, 0xd2, 0x06, 0x50, 0xd6, 0x1e, 0xfc, 0x4e, 0x01, 0x67, 0xc9, 0x23, 0xb3, 0xd5, 0xb6, 0x48, + 0x46, 0xae, 0x5a, 0x3c, 0x35, 0x47, 0x5e, 0x91, 0x8e, 0x9c, 0xbd, 0x39, 0xc0, 0x0e, 0x1a, 0x68, + 0x1d, 0xde, 0x00, 0x65, 0x9b, 0x17, 0xc5, 0x8e, 0xdb, 0xa2, 0xe6, 0x51, 0x75, 0x5a, 0x94, 0x92, + 0xd6, 0xeb, 0xaa, 0xe5, 0xcd, 0x84, 0x7c, 0xdc, 0x55, 0x17, 0x52, 0xc7, 0x0f, 0x8e, 0x3c, 0x82, + 0xd2, 0x6a, 0xda, 0x33, 0x05, 0x2c, 0x0d, 0xf1, 0x0a, 0x5e, 0x4d, 0x32, 0x2f, 0x4a, 0xa3, 0xaa, + 0xac, 0x14, 0x2e, 0x95, 0x8c, 0x4a, 0x3a, 0x63, 0x82, 0x81, 0xb2, 0x72, 0xf0, 0x2b, 0x05, 0x40, + 0xbf, 0x0f, 0x4f, 0x16, 0xca, 0xd5, 0x51, 0xf2, 0xa5, 0x0f, 0x48, 0xd2, 0xb2, 0x4c, 0x12, 0xec, + 0xe7, 0xa1, 0x01, 0xe6, 0x34, 0x0c, 0x4a, 0x3b, 0xd8, 0xc7, 0xf6, 0x3d, 0xea, 0x58, 0xbc, 0xef, + 0xb0, 0x47, 0x77, 0x89, 0x2f, 0xfa, 0x4e, 0xc9, 0xf6, 0xdd, 0xc6, 0xce, 0x1d, 0xc9, 0x41, 0x29, + 0x29, 0xde, 0xcd, 0x4d, 0xea, 0x58, 0xb2, 0x4b, 0xe3, 0x6e, 0xe6, 0x78, 0x48, 0x70, 0xb4, 0x1f, + 0x27, 0xc1, 0x8c, 0xb0, 0xc1, 0x27, 0xc7, 0xc9, 0xcd, 0x5f, 0x07, 0xa5, 0xb8, 0xa1, 0x24, 0x6a, + 0x45, 0x8a, 0x95, 0xe2, 0xe6, 0x43, 0x89, 0x0c, 0xfc, 0x18, 0xcc, 0xb0, 0xa8, 0xcd, 0x0a, 0x2f, + 0xde, 0x66, 0xb3, 0x7c, 0xd6, 0xc5, 0x0d, 0x16, 0x43, 0xc2, 0x00, 0x2c, 0x79, 0xdc, 0x7b, 0x12, + 0x10, 0x7f, 0xcb, 0x0d, 0x6e, 0xb9, 0x6d, 0xc7, 0xda, 0x30, 0x79, 0xf6, 0xaa, 0x45, 0xe1, 0xdd, + 0xb5, 0x5e, 0x57, 0x5d, 0xda, 0x19, 0x2c, 0x72, 0xdc, 0x55, 0x5f, 0x1e, 0xc2, 0x12, 0x65, 0x36, + 0x0c, 0x5a, 0xfb, 0x5e, 0x01, 0xb3, 0x5c, 0xe2, 0xfa, 0x21, 0x31, 0x9b, 0x7c, 0x40, 0x7f, 0xad, + 0x00, 0x48, 0xf2, 0x63, 0x3b, 0xac, 0xb6, 0xf2, 0xfa, 
0xbb, 0x63, 0xb4, 0x57, 0xdf, 0xec, 0x4f, + 0x6a, 0xa6, 0x8f, 0xc5, 0xd0, 0x00, 0x9b, 0xda, 0x2f, 0x93, 0xe0, 0xfc, 0x2e, 0x6e, 0x51, 0x0b, + 0x07, 0xd4, 0x69, 0x6c, 0x44, 0xe6, 0xc2, 0x66, 0x81, 0x9f, 0x80, 0x19, 0x9e, 0x60, 0x0b, 0x07, + 0x58, 0x0e, 0xdb, 0x37, 0x46, 0xbb, 0x8e, 0x70, 0xc4, 0x6d, 0x92, 0x00, 0x27, 0x45, 0x97, 0xd0, + 0x50, 0x8c, 0x0a, 0x1f, 0x82, 0x22, 0xf3, 0x88, 0x29, 0x5b, 0xe5, 0xfd, 0x31, 0x62, 0x1f, 0xea, + 0xf5, 0x03, 0x8f, 0x98, 0x49, 0x35, 0xf2, 0x13, 0x12, 0x36, 0xa0, 0x0f, 0xa6, 0x58, 0x80, 0x83, + 0x36, 0x93, 0xa5, 0x75, 0xf7, 0x54, 0xac, 0x09, 0x44, 0x63, 0x5e, 0xda, 0x9b, 0x0a, 0xcf, 0x48, + 0x5a, 0xd2, 0xfe, 0x54, 0xc0, 0xca, 0x50, 0x5d, 0x83, 0x3a, 0x16, 0xaf, 0x87, 0xff, 0x3e, 0xcd, + 0x9f, 0x66, 0xd2, 0xbc, 0x7d, 0x1a, 0x81, 0x4b, 0xe7, 0x87, 0x65, 0x5b, 0xfb, 0x43, 0x01, 0x17, + 0x4f, 0x52, 0xbe, 0x4f, 0x59, 0x00, 0x3f, 0xea, 0x8b, 0x5e, 0x1f, 0xb1, 0xe7, 0x29, 0x0b, 0x63, + 0x8f, 0xd7, 0x9b, 0x88, 0x92, 0x8a, 0xdc, 0x03, 0x67, 0x68, 0x40, 0x6c, 0x3e, 0x8c, 0x79, 0x77, + 0xdd, 0x3b, 0xc5, 0xd0, 0x8d, 0x39, 0x69, 0xf7, 0xcc, 0x1d, 0x6e, 0x01, 0x85, 0x86, 0xb4, 0x6f, + 0x0a, 0x27, 0x07, 0xce, 0xf3, 0xc4, 0x47, 0xb4, 0x27, 0x88, 0x5b, 0xc9, 0x14, 0x8d, 0xaf, 0x71, + 0x27, 0xe6, 0xa0, 0x94, 0x14, 0x1f, 0x90, 0x9e, 0x9c, 0xbf, 0x03, 0xf6, 0x90, 0x93, 0x22, 0x8a, + 0x46, 0x77, 0x38, 0x20, 0xa3, 0x13, 0x8a, 0x21, 0x61, 0x1b, 0xcc, 0xdb, 0x99, 0xc5, 0x4b, 0xb6, + 0xca, 0x3b, 0x63, 0x18, 0xc9, 0x6e, 0x6e, 0xe1, 0xca, 0x93, 0xa5, 0xa1, 0x9c, 0x11, 0xb8, 0x07, + 0x2a, 0x1d, 0x99, 0x31, 0xd7, 0x09, 0xa7, 0x66, 0xb8, 0x6d, 0x94, 0x8c, 0x55, 0xbe, 0xa8, 0xed, + 0xe6, 0x99, 0xc7, 0x5d, 0x75, 0x31, 0x4f, 0x44, 0xfd, 0x18, 0xda, 0xef, 0x0a, 0xb8, 0x30, 0xf4, + 0x2e, 0xfe, 0x87, 0xea, 0xa3, 0xd9, 0xea, 0xbb, 0x71, 0x2a, 0xd5, 0x37, 0xb8, 0xec, 0x7e, 0x98, + 0xfa, 0x87, 0x50, 0x45, 0xbd, 0x61, 0x50, 0xf2, 0xa2, 0xfd, 0x40, 0xc6, 0x7a, 0x65, 0xdc, 0xe2, + 0xe1, 0xba, 0xc6, 0x1c, 0xff, 0x7e, 0xc7, 0x47, 0x94, 0xa0, 0xc2, 0xcf, 0xc1, 0xa2, 0x2d, 
0x5f, + 0x08, 0x1c, 0x80, 0x3a, 0x41, 0xb4, 0x05, 0xfd, 0x8b, 0x0a, 0x3a, 0xdb, 0xeb, 0xaa, 0x8b, 0x9b, + 0x39, 0x58, 0xd4, 0x67, 0x08, 0xb6, 0x40, 0x39, 0xa9, 0x80, 0x68, 0x6d, 0x7e, 0xf3, 0x05, 0x52, + 0xee, 0x3a, 0xc6, 0x4b, 0x32, 0xc7, 0xe5, 0x84, 0xc6, 0x50, 0x1a, 0x1e, 0xde, 0x07, 0x73, 0x07, + 0x98, 0xb6, 0xda, 0x3e, 0x91, 0x0b, 0x69, 0xb8, 0x41, 0xbc, 0xca, 0x97, 0xc5, 0x5b, 0x69, 0xc6, + 0x71, 0x57, 0xad, 0x64, 0x08, 0x62, 0x5b, 0xc8, 0x2a, 0xc3, 0xc7, 0x0a, 0x58, 0xc4, 0xd9, 0xe7, + 0x23, 0xab, 0x9e, 0x11, 0x11, 0x5c, 0x1b, 0x23, 0x82, 0xdc, 0x0b, 0xd4, 0xa8, 0xca, 0x30, 0x16, + 0x73, 0x0c, 0x86, 0xfa, 0xac, 0xc1, 0x2f, 0xc0, 0x82, 0x9d, 0x79, 0xdd, 0xb1, 0xea, 0x94, 0x70, + 0x60, 0xec, 0xab, 0x8b, 0x11, 0x92, 0x97, 0x6c, 0x96, 0xce, 0x50, 0xde, 0x14, 0xb4, 0x40, 0xa9, + 0x83, 0x7d, 0x8a, 0xf7, 0xf9, 0x43, 0x63, 0x5a, 0xd8, 0xbd, 0x3c, 0xd6, 0xd5, 0x85, 0xba, 0xc9, + 0x7e, 0x19, 0x51, 0x18, 0x4a, 0x80, 0xb5, 0x9f, 0x26, 0x81, 0x7a, 0xc2, 0xa7, 0x1c, 0xde, 0x05, + 0xd0, 0xdd, 0x67, 0xc4, 0xef, 0x10, 0xeb, 0x76, 0xf8, 0xc6, 0x8f, 0x36, 0xe8, 0x42, 0xb2, 0x5e, + 0x6d, 0xf7, 0x49, 0xa0, 0x01, 0x5a, 0xd0, 0x06, 0xb3, 0x41, 0x6a, 0xf3, 0x1b, 0xe7, 0x45, 0x20, + 0x03, 0x4b, 0x2f, 0x8e, 0xc6, 0x62, 0xaf, 0xab, 0x66, 0x56, 0x49, 0x94, 0x81, 0x87, 0x26, 0x00, + 0x66, 0x72, 0x7b, 0x61, 0x03, 0xd4, 0x47, 0x1b, 0x67, 0xc9, 0x9d, 0xc5, 0x9f, 0xa0, 0xd4, 0x75, + 0xa5, 0x60, 0xb5, 0xbf, 0x14, 0x00, 0x92, 0xae, 0x80, 0x17, 0x41, 0xea, 0x19, 0x2f, 0xbf, 0x62, + 0x45, 0x0e, 0x81, 0x52, 0x74, 0xb8, 0x0a, 0xa6, 0x6d, 0xc2, 0x18, 0x6e, 0x44, 0xef, 0x80, 0xf8, + 0x5f, 0x86, 0xcd, 0x90, 0x8c, 0x22, 0x3e, 0xdc, 0x03, 0x53, 0x3e, 0xc1, 0xcc, 0x75, 0xe4, 0xff, + 0x11, 0xef, 0xf1, 0xb5, 0x0a, 0x09, 0xca, 0x71, 0x57, 0x5d, 0x1b, 0xe5, 0x5f, 0x20, 0x5d, 0x6e, + 0x61, 0x42, 0x09, 0x49, 0x38, 0x78, 0x1b, 0x54, 0xa4, 0x8d, 0x94, 0xc3, 0x61, 0xd7, 0x9e, 0x97, + 0xde, 0x54, 0x36, 0xf3, 0x02, 0xa8, 0x5f, 0x47, 0xbb, 0x0b, 0x66, 0xa2, 0xea, 0x82, 0x55, 0x50, + 0x4c, 0x7d, 0xbe, 0xc3, 0xc0, 
0x05, 0x25, 0x97, 0x98, 0xc9, 0xc1, 0x89, 0x31, 0xb6, 0x9f, 0x3c, + 0xaf, 0x4d, 0x3c, 0x7d, 0x5e, 0x9b, 0x78, 0xf6, 0xbc, 0x36, 0xf1, 0xb8, 0x57, 0x53, 0x9e, 0xf4, + 0x6a, 0xca, 0xd3, 0x5e, 0x4d, 0x79, 0xd6, 0xab, 0x29, 0xbf, 0xf6, 0x6a, 0xca, 0xb7, 0xbf, 0xd5, + 0x26, 0x3e, 0x5c, 0x1d, 0xf9, 0x5f, 0xbc, 0xbf, 0x03, 0x00, 0x00, 0xff, 0xff, 0xad, 0xe2, 0x61, + 0x96, 0x0a, 0x14, 0x00, 0x00, } func (m *AuditAnnotation) Marshal() (dAtA []byte, err error) { @@ -884,6 +920,25 @@ func (m *ParamRef) MarshalToSizedBuffer(dAtA []byte) (int, error) { _ = i var l int _ = l + if m.ParameterNotFoundAction != nil { + i -= len(*m.ParameterNotFoundAction) + copy(dAtA[i:], *m.ParameterNotFoundAction) + i = encodeVarintGenerated(dAtA, i, uint64(len(*m.ParameterNotFoundAction))) + i-- + dAtA[i] = 0x22 + } + if m.Selector != nil { + { + size, err := m.Selector.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintGenerated(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x1a + } i -= len(m.Namespace) copy(dAtA[i:], m.Namespace) i = encodeVarintGenerated(dAtA, i, uint64(len(m.Namespace))) @@ -1205,6 +1260,20 @@ func (m *ValidatingAdmissionPolicySpec) MarshalToSizedBuffer(dAtA []byte) (int, _ = i var l int _ = l + if len(m.Variables) > 0 { + for iNdEx := len(m.Variables) - 1; iNdEx >= 0; iNdEx-- { + { + size, err := m.Variables[iNdEx].MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintGenerated(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x3a + } + } if len(m.MatchConditions) > 0 { for iNdEx := len(m.MatchConditions) - 1; iNdEx >= 0; iNdEx-- { { @@ -1378,6 +1447,39 @@ func (m *Validation) MarshalToSizedBuffer(dAtA []byte) (int, error) { return len(dAtA) - i, nil } +func (m *Variable) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m 
*Variable) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *Variable) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + i -= len(m.Expression) + copy(dAtA[i:], m.Expression) + i = encodeVarintGenerated(dAtA, i, uint64(len(m.Expression))) + i-- + dAtA[i] = 0x12 + i -= len(m.Name) + copy(dAtA[i:], m.Name) + i = encodeVarintGenerated(dAtA, i, uint64(len(m.Name))) + i-- + dAtA[i] = 0xa + return len(dAtA) - i, nil +} + func encodeVarintGenerated(dAtA []byte, offset int, v uint64) int { offset -= sovGenerated(v) base := offset @@ -1501,6 +1603,14 @@ func (m *ParamRef) Size() (n int) { n += 1 + l + sovGenerated(uint64(l)) l = len(m.Namespace) n += 1 + l + sovGenerated(uint64(l)) + if m.Selector != nil { + l = m.Selector.Size() + n += 1 + l + sovGenerated(uint64(l)) + } + if m.ParameterNotFoundAction != nil { + l = len(*m.ParameterNotFoundAction) + n += 1 + l + sovGenerated(uint64(l)) + } return n } @@ -1642,6 +1752,12 @@ func (m *ValidatingAdmissionPolicySpec) Size() (n int) { n += 1 + l + sovGenerated(uint64(l)) } } + if len(m.Variables) > 0 { + for _, e := range m.Variables { + l = e.Size() + n += 1 + l + sovGenerated(uint64(l)) + } + } return n } @@ -1684,6 +1800,19 @@ func (m *Validation) Size() (n int) { return n } +func (m *Variable) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + l = len(m.Name) + n += 1 + l + sovGenerated(uint64(l)) + l = len(m.Expression) + n += 1 + l + sovGenerated(uint64(l)) + return n +} + func sovGenerated(x uint64) (n int) { return (math_bits.Len64(x|1) + 6) / 7 } @@ -1776,6 +1905,8 @@ func (this *ParamRef) String() string { s := strings.Join([]string{`&ParamRef{`, `Name:` + fmt.Sprintf("%v", this.Name) + `,`, `Namespace:` + fmt.Sprintf("%v", this.Namespace) + `,`, + `Selector:` + strings.Replace(fmt.Sprintf("%v", this.Selector), "LabelSelector", "v1.LabelSelector", 1) + `,`, + `ParameterNotFoundAction:` + 
valueToStringGenerated(this.ParameterNotFoundAction) + `,`, `}`, }, "") return s @@ -1882,6 +2013,11 @@ func (this *ValidatingAdmissionPolicySpec) String() string { repeatedStringForMatchConditions += strings.Replace(strings.Replace(f.String(), "MatchCondition", "MatchCondition", 1), `&`, ``, 1) + "," } repeatedStringForMatchConditions += "}" + repeatedStringForVariables := "[]Variable{" + for _, f := range this.Variables { + repeatedStringForVariables += strings.Replace(strings.Replace(f.String(), "Variable", "Variable", 1), `&`, ``, 1) + "," + } + repeatedStringForVariables += "}" s := strings.Join([]string{`&ValidatingAdmissionPolicySpec{`, `ParamKind:` + strings.Replace(this.ParamKind.String(), "ParamKind", "ParamKind", 1) + `,`, `MatchConstraints:` + strings.Replace(this.MatchConstraints.String(), "MatchResources", "MatchResources", 1) + `,`, @@ -1889,6 +2025,7 @@ func (this *ValidatingAdmissionPolicySpec) String() string { `FailurePolicy:` + valueToStringGenerated(this.FailurePolicy) + `,`, `AuditAnnotations:` + repeatedStringForAuditAnnotations + `,`, `MatchConditions:` + repeatedStringForMatchConditions + `,`, + `Variables:` + repeatedStringForVariables + `,`, `}`, }, "") return s @@ -1923,6 +2060,17 @@ func (this *Validation) String() string { }, "") return s } +func (this *Variable) String() string { + if this == nil { + return "nil" + } + s := strings.Join([]string{`&Variable{`, + `Name:` + fmt.Sprintf("%v", this.Name) + `,`, + `Expression:` + fmt.Sprintf("%v", this.Expression) + `,`, + `}`, + }, "") + return s +} func valueToStringGenerated(v interface{}) string { rv := reflect.ValueOf(v) if rv.IsNil() { @@ -2818,6 +2966,75 @@ func (m *ParamRef) Unmarshal(dAtA []byte) error { } m.Namespace = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex + case 3: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Selector", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return 
ErrIntOverflowGenerated + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthGenerated + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthGenerated + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if m.Selector == nil { + m.Selector = &v1.LabelSelector{} + } + if err := m.Selector.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 4: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field ParameterNotFoundAction", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowGenerated + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthGenerated + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthGenerated + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + s := ParameterNotFoundActionType(dAtA[iNdEx:postIndex]) + m.ParameterNotFoundAction = &s + iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipGenerated(dAtA[iNdEx:]) @@ -3844,6 +4061,40 @@ func (m *ValidatingAdmissionPolicySpec) Unmarshal(dAtA []byte) error { return err } iNdEx = postIndex + case 7: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Variables", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowGenerated + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthGenerated + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return 
ErrInvalidLengthGenerated + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Variables = append(m.Variables, Variable{}) + if err := m.Variables[len(m.Variables)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipGenerated(dAtA[iNdEx:]) @@ -4183,6 +4434,120 @@ func (m *Validation) Unmarshal(dAtA []byte) error { } return nil } +func (m *Variable) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowGenerated + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: Variable: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: Variable: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Name", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowGenerated + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthGenerated + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthGenerated + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Name = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Expression", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return 
ErrIntOverflowGenerated + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthGenerated + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthGenerated + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Expression = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipGenerated(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthGenerated + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} func skipGenerated(dAtA []byte) (n int, err error) { l := len(dAtA) iNdEx := 0 diff --git a/cluster-autoscaler/vendor/k8s.io/api/admissionregistration/v1alpha1/generated.proto b/cluster-autoscaler/vendor/k8s.io/api/admissionregistration/v1alpha1/generated.proto index c718c5464df0..db02dd929f00 100644 --- a/cluster-autoscaler/vendor/k8s.io/api/admissionregistration/v1alpha1/generated.proto +++ b/cluster-autoscaler/vendor/k8s.io/api/admissionregistration/v1alpha1/generated.proto @@ -227,16 +227,59 @@ message ParamKind { optional string kind = 2; } -// ParamRef references a parameter resource +// ParamRef describes how to locate the params to be used as input to +// expressions of rules applied by a policy binding. // +structType=atomic message ParamRef { - // Name of the resource being referenced. + // `name` is the name of the resource being referenced. + // + // `name` and `selector` are mutually exclusive properties. If one is set, + // the other must be unset. + // + // +optional optional string name = 1; - // Namespace of the referenced resource. 
- // Should be empty for the cluster-scoped resources + // namespace is the namespace of the referenced resource. Allows limiting + // the search for params to a specific namespace. Applies to both `name` and + // `selector` fields. + // + // A per-namespace parameter may be used by specifying a namespace-scoped + // `paramKind` in the policy and leaving this field empty. + // + // - If `paramKind` is cluster-scoped, this field MUST be unset. Setting this + // field results in a configuration error. + // + // - If `paramKind` is namespace-scoped, the namespace of the object being + // evaluated for admission will be used when this field is left unset. Take + // care that if this is left empty the binding must not match any cluster-scoped + // resources, which will result in an error. + // // +optional optional string namespace = 2; + + // selector can be used to match multiple param objects based on their labels. + // Supply selector: {} to match all resources of the ParamKind. + // + // If multiple params are found, they are all evaluated with the policy expressions + // and the results are ANDed together. + // + // One of `name` or `selector` must be set, but `name` and `selector` are + // mutually exclusive properties. If one is set, the other must be unset. + // + // +optional + optional k8s.io.apimachinery.pkg.apis.meta.v1.LabelSelector selector = 3; + + // `parameterNotFoundAction` controls the behavior of the binding when the resource + // exists, and name or selector is valid, but there are no parameters + // matched by the binding. If the value is set to `Allow`, then no + // matched parameters will be treated as successful validation by the binding. + // If set to `Deny`, then no matched parameters will be subject to the + // `failurePolicy` of the policy. 
+ // + // Allowed values are `Allow` or `Deny` + // Default to `Deny` + // +optional + optional string parameterNotFoundAction = 4; } // TypeChecking contains results of type checking the expressions in the @@ -267,6 +310,15 @@ message ValidatingAdmissionPolicy { // ValidatingAdmissionPolicyBinding binds the ValidatingAdmissionPolicy with paramerized resources. // ValidatingAdmissionPolicyBinding and parameter CRDs together define how cluster administrators configure policies for clusters. +// +// For a given admission request, each binding will cause its policy to be +// evaluated N times, where N is 1 for policies/bindings that don't use +// params, otherwise N is the number of parameters selected by the binding. +// +// The CEL expressions of a policy must have a computed CEL cost below the maximum +// CEL budget. Each evaluation of the policy is given an independent CEL cost budget. +// Adding/removing policies, bindings, or params can not affect whether a +// given (policy, binding, param) combination is within its own CEL budget. message ValidatingAdmissionPolicyBinding { // Standard object metadata; More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata. // +optional @@ -294,9 +346,10 @@ message ValidatingAdmissionPolicyBindingSpec { // Required. optional string policyName = 1; - // ParamRef specifies the parameter resource used to configure the admission control policy. + // paramRef specifies the parameter resource used to configure the admission control policy. // It should point to a resource of the type specified in ParamKind of the bound ValidatingAdmissionPolicy. // If the policy specifies a ParamKind and the resource referred to by ParamRef does not exist, this binding is considered mis-configured and the FailurePolicy of the ValidatingAdmissionPolicy applied. + // If the policy does not specify a ParamKind then this field is ignored, and the rules are evaluated without a param. 
// +optional optional ParamRef paramRef = 2; @@ -430,6 +483,20 @@ message ValidatingAdmissionPolicySpec { // +listMapKey=name // +optional repeated MatchCondition matchConditions = 6; + + // Variables contain definitions of variables that can be used in composition of other expressions. + // Each variable is defined as a named CEL expression. + // The variables defined here will be available under `variables` in other expressions of the policy + // except MatchConditions because MatchConditions are evaluated before the rest of the policy. + // + // The expression of a variable can refer to other variables defined earlier in the list but not those after. + // Thus, Variables must be sorted by the order of first appearance and acyclic. + // +patchMergeKey=name + // +patchStrategy=merge + // +listType=map + // +listMapKey=name + // +optional + repeated Variable variables = 7; } // ValidatingAdmissionPolicyStatus represents the status of a ValidatingAdmissionPolicy. @@ -460,6 +527,9 @@ message Validation { // - 'oldObject' - The existing object. The value is null for CREATE requests. // - 'request' - Attributes of the API request([ref](/pkg/apis/admission/types.go#AdmissionRequest)). // - 'params' - Parameter resource referred to by the policy binding being evaluated. Only populated if the policy has a ParamKind. + // - 'namespaceObject' - The namespace object that the incoming object belongs to. The value is null for cluster-scoped resources. + // - 'variables' - Map of composited variables, from its name to its lazily evaluated value. + // For example, a variable named 'foo' can be accessed as 'variables.foo'. // - 'authorizer' - A CEL Authorizer. May be used to perform authorization checks for the principal (user or service account) of the request. 
// See https://pkg.go.dev/k8s.io/apiserver/pkg/cel/library#Authz + // - 'authorizer.requestResource' - A CEL ResourceCheck constructed from the 'authorizer' and configured with the @@ -525,3 +595,15 @@ message Validation { optional string messageExpression = 4; } +// Variable is the definition of a variable that is used for composition. +message Variable { + // Name is the name of the variable. The name must be a valid CEL identifier and unique among all variables. + // The variable can be accessed in other expressions through `variables` + // For example, if name is "foo", the variable will be available as `variables.foo` + optional string Name = 1; + + // Expression is the expression that will be evaluated as the value of the variable. + // The CEL expression has access to the same identifiers as the CEL expressions in Validation. + optional string Expression = 2; +} + diff --git a/cluster-autoscaler/vendor/k8s.io/api/admissionregistration/v1alpha1/types.go b/cluster-autoscaler/vendor/k8s.io/api/admissionregistration/v1alpha1/types.go index 2bbb55a47da5..575456c83866 100644 --- a/cluster-autoscaler/vendor/k8s.io/api/admissionregistration/v1alpha1/types.go +++ b/cluster-autoscaler/vendor/k8s.io/api/admissionregistration/v1alpha1/types.go @@ -39,6 +39,18 @@ const ( AllScopes ScopeType = v1.AllScopes ) +// ParameterNotFoundActionType specifies a failure policy that defines how a binding +// is evaluated when the param referred by its paramRef is not found. +// +enum +type ParameterNotFoundActionType string + +const ( + // AllowAction means that when no params are found for a binding, the binding is treated as a successful validation + AllowAction ParameterNotFoundActionType = "Allow" + // DenyAction means that when no params are found for a binding, the request is subject to the policy's failurePolicy + DenyAction ParameterNotFoundActionType = "Deny" +) + // FailurePolicyType specifies a failure policy that defines how unrecognized errors from the admission endpoint are handled.
// +enum type FailurePolicyType string @@ -201,6 +213,20 @@ type ValidatingAdmissionPolicySpec struct { // +listMapKey=name // +optional MatchConditions []MatchCondition `json:"matchConditions,omitempty" patchStrategy:"merge" patchMergeKey:"name" protobuf:"bytes,6,rep,name=matchConditions"` + + // Variables contain definitions of variables that can be used in composition of other expressions. + // Each variable is defined as a named CEL expression. + // The variables defined here will be available under `variables` in other expressions of the policy + // except MatchConditions because MatchConditions are evaluated before the rest of the policy. + // + // The expression of a variable can refer to other variables defined earlier in the list but not those after. + // Thus, Variables must be sorted by the order of first appearance and acyclic. + // +patchMergeKey=name + // +patchStrategy=merge + // +listType=map + // +listMapKey=name + // +optional + Variables []Variable `json:"variables" patchStrategy:"merge" patchMergeKey:"name" protobuf:"bytes,7,rep,name=variables"` } type MatchCondition v1.MatchCondition @@ -228,6 +254,9 @@ type Validation struct { // - 'oldObject' - The existing object. The value is null for CREATE requests. // - 'request' - Attributes of the API request([ref](/pkg/apis/admission/types.go#AdmissionRequest)). // - 'params' - Parameter resource referred to by the policy binding being evaluated. Only populated if the policy has a ParamKind. + // - 'namespaceObject' - The namespace object that the incoming object belongs to. The value is null for cluster-scoped resources. + // - 'variables' - Map of composited variables, from its name to its lazily evaluated value. + // For example, a variable named 'foo' can be accessed as 'variables.foo'. // - 'authorizer' - A CEL Authorizer. May be used to perform authorization checks for the principal (user or service account) of the request. 
// See https://pkg.go.dev/k8s.io/apiserver/pkg/cel/library#Authz // - 'authorizer.requestResource' - A CEL ResourceCheck constructed from the 'authorizer' and configured with the @@ -290,6 +319,18 @@ type Validation struct { MessageExpression string `json:"messageExpression,omitempty" protobuf:"bytes,4,opt,name=messageExpression"` } +// Variable is the definition of a variable that is used for composition. +type Variable struct { + // Name is the name of the variable. The name must be a valid CEL identifier and unique among all variables. + // The variable can be accessed in other expressions through `variables` + // For example, if name is "foo", the variable will be available as `variables.foo` + Name string `json:"name" protobuf:"bytes,1,opt,name=Name"` + + // Expression is the expression that will be evaluated as the value of the variable. + // The CEL expression has access to the same identifiers as the CEL expressions in Validation. + Expression string `json:"expression" protobuf:"bytes,2,opt,name=Expression"` +} + // AuditAnnotation describes how to produce an audit annotation for an API request. type AuditAnnotation struct { // key specifies the audit annotation key. The audit annotation keys of @@ -334,6 +375,15 @@ type AuditAnnotation struct { // ValidatingAdmissionPolicyBinding binds the ValidatingAdmissionPolicy with paramerized resources. // ValidatingAdmissionPolicyBinding and parameter CRDs together define how cluster administrators configure policies for clusters. +// +// For a given admission request, each binding will cause its policy to be +// evaluated N times, where N is 1 for policies/bindings that don't use +// params, otherwise N is the number of parameters selected by the binding. +// +// The CEL expressions of a policy must have a computed CEL cost below the maximum +// CEL budget. Each evaluation of the policy is given an independent CEL cost budget. 
+// Adding/removing policies, bindings, or params can not affect whether a +// given (policy, binding, param) combination is within its own CEL budget. type ValidatingAdmissionPolicyBinding struct { metav1.TypeMeta `json:",inline"` // Standard object metadata; More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata. @@ -364,9 +414,10 @@ type ValidatingAdmissionPolicyBindingSpec struct { // Required. PolicyName string `json:"policyName,omitempty" protobuf:"bytes,1,rep,name=policyName"` - // ParamRef specifies the parameter resource used to configure the admission control policy. + // paramRef specifies the parameter resource used to configure the admission control policy. // It should point to a resource of the type specified in ParamKind of the bound ValidatingAdmissionPolicy. // If the policy specifies a ParamKind and the resource referred to by ParamRef does not exist, this binding is considered mis-configured and the FailurePolicy of the ValidatingAdmissionPolicy applied. + // If the policy does not specify a ParamKind then this field is ignored, and the rules are evaluated without a param. // +optional ParamRef *ParamRef `json:"paramRef,omitempty" protobuf:"bytes,2,rep,name=paramRef"` @@ -421,15 +472,59 @@ type ValidatingAdmissionPolicyBindingSpec struct { ValidationActions []ValidationAction `json:"validationActions,omitempty" protobuf:"bytes,4,rep,name=validationActions"` } -// ParamRef references a parameter resource +// ParamRef describes how to locate the params to be used as input to +// expressions of rules applied by a policy binding. // +structType=atomic type ParamRef struct { - // Name of the resource being referenced. + // `name` is the name of the resource being referenced. + // + // `name` and `selector` are mutually exclusive properties. If one is set, + // the other must be unset. 
+ // + // +optional Name string `json:"name,omitempty" protobuf:"bytes,1,rep,name=name"` - // Namespace of the referenced resource. - // Should be empty for the cluster-scoped resources + + // namespace is the namespace of the referenced resource. Allows limiting + // the search for params to a specific namespace. Applies to both `name` and + // `selector` fields. + // + // A per-namespace parameter may be used by specifying a namespace-scoped + // `paramKind` in the policy and leaving this field empty. + // + // - If `paramKind` is cluster-scoped, this field MUST be unset. Setting this + // field results in a configuration error. + // + // - If `paramKind` is namespace-scoped, the namespace of the object being + // evaluated for admission will be used when this field is left unset. Take + // care that if this is left empty the binding must not match any cluster-scoped + // resources, which will result in an error. + // // +optional Namespace string `json:"namespace,omitempty" protobuf:"bytes,2,rep,name=namespace"` + + // selector can be used to match multiple param objects based on their labels. + // Supply selector: {} to match all resources of the ParamKind. + // + // If multiple params are found, they are all evaluated with the policy expressions + // and the results are ANDed together. + // + // One of `name` or `selector` must be set, but `name` and `selector` are + // mutually exclusive properties. If one is set, the other must be unset. + // + // +optional + Selector *metav1.LabelSelector `json:"selector,omitempty" protobuf:"bytes,3,rep,name=selector"` + + // `parameterNotFoundAction` controls the behavior of the binding when the resource + // exists, and name or selector is valid, but there are no parameters + // matched by the binding. If the value is set to `Allow`, then no + // matched parameters will be treated as successful validation by the binding. 
+ // If set to `Deny`, then no matched parameters will be subject to the + // `failurePolicy` of the policy. + // + // Allowed values are `Allow` or `Deny` + // Default to `Deny` + // +optional + ParameterNotFoundAction *ParameterNotFoundActionType `json:"parameterNotFoundAction,omitempty" protobuf:"bytes,4,rep,name=parameterNotFoundAction"` } // MatchResources decides whether to run the admission control policy on an object based diff --git a/cluster-autoscaler/vendor/k8s.io/api/admissionregistration/v1alpha1/types_swagger_doc_generated.go b/cluster-autoscaler/vendor/k8s.io/api/admissionregistration/v1alpha1/types_swagger_doc_generated.go index b3cac1821bad..dcf46b324f1e 100644 --- a/cluster-autoscaler/vendor/k8s.io/api/admissionregistration/v1alpha1/types_swagger_doc_generated.go +++ b/cluster-autoscaler/vendor/k8s.io/api/admissionregistration/v1alpha1/types_swagger_doc_generated.go @@ -80,9 +80,11 @@ func (ParamKind) SwaggerDoc() map[string]string { } var map_ParamRef = map[string]string{ - "": "ParamRef references a parameter resource", - "name": "Name of the resource being referenced.", - "namespace": "Namespace of the referenced resource. Should be empty for the cluster-scoped resources", + "": "ParamRef describes how to locate the params to be used as input to expressions of rules applied by a policy binding.", + "name": "`name` is the name of the resource being referenced.\n\n`name` and `selector` are mutually exclusive properties. If one is set, the other must be unset.", + "namespace": "namespace is the namespace of the referenced resource. Allows limiting the search for params to a specific namespace. Applies to both `name` and `selector` fields.\n\nA per-namespace parameter may be used by specifying a namespace-scoped `paramKind` in the policy and leaving this field empty.\n\n- If `paramKind` is cluster-scoped, this field MUST be unset. 
Setting this field results in a configuration error.\n\n- If `paramKind` is namespace-scoped, the namespace of the object being evaluated for admission will be used when this field is left unset. Take care that if this is left empty the binding must not match any cluster-scoped resources, which will result in an error.", + "selector": "selector can be used to match multiple param objects based on their labels. Supply selector: {} to match all resources of the ParamKind.\n\nIf multiple params are found, they are all evaluated with the policy expressions and the results are ANDed together.\n\nOne of `name` or `selector` must be set, but `name` and `selector` are mutually exclusive properties. If one is set, the other must be unset.", + "parameterNotFoundAction": "`parameterNotFoundAction` controls the behavior of the binding when the resource exists, and name or selector is valid, but there are no parameters matched by the binding. If the value is set to `Allow`, then no matched parameters will be treated as successful validation by the binding. If set to `Deny`, then no matched parameters will be subject to the `failurePolicy` of the policy.\n\nAllowed values are `Allow` or `Deny` Default to `Deny`", } func (ParamRef) SwaggerDoc() map[string]string { @@ -110,7 +112,7 @@ func (ValidatingAdmissionPolicy) SwaggerDoc() map[string]string { } var map_ValidatingAdmissionPolicyBinding = map[string]string{ - "": "ValidatingAdmissionPolicyBinding binds the ValidatingAdmissionPolicy with paramerized resources. ValidatingAdmissionPolicyBinding and parameter CRDs together define how cluster administrators configure policies for clusters.", + "": "ValidatingAdmissionPolicyBinding binds the ValidatingAdmissionPolicy with paramerized resources. 
ValidatingAdmissionPolicyBinding and parameter CRDs together define how cluster administrators configure policies for clusters.\n\nFor a given admission request, each binding will cause its policy to be evaluated N times, where N is 1 for policies/bindings that don't use params, otherwise N is the number of parameters selected by the binding.\n\nThe CEL expressions of a policy must have a computed CEL cost below the maximum CEL budget. Each evaluation of the policy is given an independent CEL cost budget. Adding/removing policies, bindings, or params can not affect whether a given (policy, binding, param) combination is within its own CEL budget.", "metadata": "Standard object metadata; More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata.", "spec": "Specification of the desired behavior of the ValidatingAdmissionPolicyBinding.", } @@ -132,7 +134,7 @@ func (ValidatingAdmissionPolicyBindingList) SwaggerDoc() map[string]string { var map_ValidatingAdmissionPolicyBindingSpec = map[string]string{ "": "ValidatingAdmissionPolicyBindingSpec is the specification of the ValidatingAdmissionPolicyBinding.", "policyName": "PolicyName references a ValidatingAdmissionPolicy name which the ValidatingAdmissionPolicyBinding binds to. If the referenced resource does not exist, this binding is considered invalid and will be ignored Required.", - "paramRef": "ParamRef specifies the parameter resource used to configure the admission control policy. It should point to a resource of the type specified in ParamKind of the bound ValidatingAdmissionPolicy. If the policy specifies a ParamKind and the resource referred to by ParamRef does not exist, this binding is considered mis-configured and the FailurePolicy of the ValidatingAdmissionPolicy applied.", + "paramRef": "paramRef specifies the parameter resource used to configure the admission control policy. 
It should point to a resource of the type specified in ParamKind of the bound ValidatingAdmissionPolicy. If the policy specifies a ParamKind and the resource referred to by ParamRef does not exist, this binding is considered mis-configured and the FailurePolicy of the ValidatingAdmissionPolicy applied. If the policy does not specify a ParamKind then this field is ignored, and the rules are evaluated without a param.", "matchResources": "MatchResources declares what resources match this binding and will be validated by it. Note that this is intersected with the policy's matchConstraints, so only requests that are matched by the policy can be selected by this. If this is unset, all resources matched by the policy are validated by this binding When resourceRules is unset, it does not constrain resource matching. If a resource is matched by the other fields of this object, it will be validated. Note that this is differs from ValidatingAdmissionPolicy matchConstraints, where resourceRules are required.", "validationActions": "validationActions declares how Validations of the referenced ValidatingAdmissionPolicy are enforced. If a validation evaluates to false it is always enforced according to these actions.\n\nFailures defined by the ValidatingAdmissionPolicy's FailurePolicy are enforced according to these actions only if the FailurePolicy is set to Fail, otherwise the failures are ignored. This includes compilation errors, runtime errors and misconfigurations of the policy.\n\nvalidationActions is declared as a set of action values. Order does not matter. validationActions may not contain duplicates of the same action.\n\nThe supported actions values are:\n\n\"Deny\" specifies that a validation failure results in a denied request.\n\n\"Warn\" specifies that a validation failure is reported to the request client in HTTP Warning headers, with a warning code of 299. 
Warnings can be sent both for allowed or denied admission responses.\n\n\"Audit\" specifies that a validation failure is included in the published audit event for the request. The audit event will contain a `validation.policy.admission.k8s.io/validation_failure` audit annotation with a value containing the details of the validation failures, formatted as a JSON list of objects, each with the following fields: - message: The validation failure message string - policy: The resource name of the ValidatingAdmissionPolicy - binding: The resource name of the ValidatingAdmissionPolicyBinding - expressionIndex: The index of the failed validations in the ValidatingAdmissionPolicy - validationActions: The enforcement actions enacted for the validation failure Example audit annotation: `\"validation.policy.admission.k8s.io/validation_failure\": \"[{\"message\": \"Invalid value\", {\"policy\": \"policy.example.com\", {\"binding\": \"policybinding.example.com\", {\"expressionIndex\": \"1\", {\"validationActions\": [\"Audit\"]}]\"`\n\nClients should expect to handle additional values by ignoring any values not recognized.\n\n\"Deny\" and \"Warn\" may not be used together since this combination needlessly duplicates the validation failure both in the API response body and the HTTP warning headers.\n\nRequired.", } @@ -159,6 +161,7 @@ var map_ValidatingAdmissionPolicySpec = map[string]string{ "failurePolicy": "failurePolicy defines how to handle failures for the admission policy. Failures can occur from CEL expression parse errors, type check errors, runtime errors and invalid or mis-configured policy definitions or bindings.\n\nA policy is invalid if spec.paramKind refers to a non-existent Kind. 
A binding is invalid if spec.paramRef.name refers to a non-existent resource.\n\nfailurePolicy does not define how validations that evaluate to false are handled.\n\nWhen failurePolicy is set to Fail, ValidatingAdmissionPolicyBinding validationActions define how failures are enforced.\n\nAllowed values are Ignore or Fail. Defaults to Fail.", "auditAnnotations": "auditAnnotations contains CEL expressions which are used to produce audit annotations for the audit event of the API request. validations and auditAnnotations may not both be empty; a least one of validations or auditAnnotations is required.", "matchConditions": "MatchConditions is a list of conditions that must be met for a request to be validated. Match conditions filter requests that have already been matched by the rules, namespaceSelector, and objectSelector. An empty list of matchConditions matches all requests. There are a maximum of 64 match conditions allowed.\n\nIf a parameter object is provided, it can be accessed via the `params` handle in the same manner as validation expressions.\n\nThe exact matching logic is (in order):\n 1. If ANY matchCondition evaluates to FALSE, the policy is skipped.\n 2. If ALL matchConditions evaluate to TRUE, the policy is evaluated.\n 3. If any matchCondition evaluates to an error (but none are FALSE):\n - If failurePolicy=Fail, reject the request\n - If failurePolicy=Ignore, the policy is skipped", + "variables": "Variables contain definitions of variables that can be used in composition of other expressions. Each variable is defined as a named CEL expression. The variables defined here will be available under `variables` in other expressions of the policy except MatchConditions because MatchConditions are evaluated before the rest of the policy.\n\nThe expression of a variable can refer to other variables defined earlier in the list but not those after. 
Thus, Variables must be sorted by the order of first appearance and acyclic.", } func (ValidatingAdmissionPolicySpec) SwaggerDoc() map[string]string { @@ -178,7 +181,7 @@ func (ValidatingAdmissionPolicyStatus) SwaggerDoc() map[string]string { var map_Validation = map[string]string{ "": "Validation specifies the CEL expression which is used to apply the validation.", - "expression": "Expression represents the expression which will be evaluated by CEL. ref: https://github.com/google/cel-spec CEL expressions have access to the contents of the API request/response, organized into CEL variables as well as some other useful variables:\n\n- 'object' - The object from the incoming request. The value is null for DELETE requests. - 'oldObject' - The existing object. The value is null for CREATE requests. - 'request' - Attributes of the API request([ref](/pkg/apis/admission/types.go#AdmissionRequest)). - 'params' - Parameter resource referred to by the policy binding being evaluated. Only populated if the policy has a ParamKind. - 'authorizer' - A CEL Authorizer. May be used to perform authorization checks for the principal (user or service account) of the request.\n See https://pkg.go.dev/k8s.io/apiserver/pkg/cel/library#Authz\n- 'authorizer.requestResource' - A CEL ResourceCheck constructed from the 'authorizer' and configured with the\n request resource.\n\nThe `apiVersion`, `kind`, `metadata.name` and `metadata.generateName` are always accessible from the root of the object. No other metadata properties are accessible.\n\nOnly property names of the form `[a-zA-Z_.-/][a-zA-Z0-9_.-/]*` are accessible. Accessible property names are escaped according to the following rules when accessed in the expression: - '__' escapes to '__underscores__' - '.' escapes to '__dot__' - '-' escapes to '__dash__' - '/' escapes to '__slash__' - Property names that exactly match a CEL RESERVED keyword escape to '__{keyword}__'. 
The keywords are:\n\t \"true\", \"false\", \"null\", \"in\", \"as\", \"break\", \"const\", \"continue\", \"else\", \"for\", \"function\", \"if\",\n\t \"import\", \"let\", \"loop\", \"package\", \"namespace\", \"return\".\nExamples:\n - Expression accessing a property named \"namespace\": {\"Expression\": \"object.__namespace__ > 0\"}\n - Expression accessing a property named \"x-prop\": {\"Expression\": \"object.x__dash__prop > 0\"}\n - Expression accessing a property named \"redact__d\": {\"Expression\": \"object.redact__underscores__d > 0\"}\n\nEquality on arrays with list type of 'set' or 'map' ignores element order, i.e. [1, 2] == [2, 1]. Concatenation on arrays with x-kubernetes-list-type use the semantics of the list type:\n - 'set': `X + Y` performs a union where the array positions of all elements in `X` are preserved and\n non-intersecting elements in `Y` are appended, retaining their partial order.\n - 'map': `X + Y` performs a merge where the array positions of all keys in `X` are preserved but the values\n are overwritten by values in `Y` when the key sets of `X` and `Y` intersect. Elements in `Y` with\n non-intersecting keys are appended, retaining their partial order.\nRequired.", + "expression": "Expression represents the expression which will be evaluated by CEL. ref: https://github.com/google/cel-spec CEL expressions have access to the contents of the API request/response, organized into CEL variables as well as some other useful variables:\n\n- 'object' - The object from the incoming request. The value is null for DELETE requests. - 'oldObject' - The existing object. The value is null for CREATE requests. - 'request' - Attributes of the API request([ref](/pkg/apis/admission/types.go#AdmissionRequest)). - 'params' - Parameter resource referred to by the policy binding being evaluated. Only populated if the policy has a ParamKind. - 'namespaceObject' - The namespace object that the incoming object belongs to. 
The value is null for cluster-scoped resources. - 'variables' - Map of composited variables, from its name to its lazily evaluated value.\n For example, a variable named 'foo' can be accessed as 'variables.foo'.\n- 'authorizer' - A CEL Authorizer. May be used to perform authorization checks for the principal (user or service account) of the request.\n See https://pkg.go.dev/k8s.io/apiserver/pkg/cel/library#Authz\n- 'authorizer.requestResource' - A CEL ResourceCheck constructed from the 'authorizer' and configured with the\n request resource.\n\nThe `apiVersion`, `kind`, `metadata.name` and `metadata.generateName` are always accessible from the root of the object. No other metadata properties are accessible.\n\nOnly property names of the form `[a-zA-Z_.-/][a-zA-Z0-9_.-/]*` are accessible. Accessible property names are escaped according to the following rules when accessed in the expression: - '__' escapes to '__underscores__' - '.' escapes to '__dot__' - '-' escapes to '__dash__' - '/' escapes to '__slash__' - Property names that exactly match a CEL RESERVED keyword escape to '__{keyword}__'. The keywords are:\n\t \"true\", \"false\", \"null\", \"in\", \"as\", \"break\", \"const\", \"continue\", \"else\", \"for\", \"function\", \"if\",\n\t \"import\", \"let\", \"loop\", \"package\", \"namespace\", \"return\".\nExamples:\n - Expression accessing a property named \"namespace\": {\"Expression\": \"object.__namespace__ > 0\"}\n - Expression accessing a property named \"x-prop\": {\"Expression\": \"object.x__dash__prop > 0\"}\n - Expression accessing a property named \"redact__d\": {\"Expression\": \"object.redact__underscores__d > 0\"}\n\nEquality on arrays with list type of 'set' or 'map' ignores element order, i.e. [1, 2] == [2, 1]. 
Concatenation on arrays with x-kubernetes-list-type use the semantics of the list type:\n - 'set': `X + Y` performs a union where the array positions of all elements in `X` are preserved and\n non-intersecting elements in `Y` are appended, retaining their partial order.\n - 'map': `X + Y` performs a merge where the array positions of all keys in `X` are preserved but the values\n are overwritten by values in `Y` when the key sets of `X` and `Y` intersect. Elements in `Y` with\n non-intersecting keys are appended, retaining their partial order.\nRequired.", "message": "Message represents the message displayed when validation fails. The message is required if the Expression contains line breaks. The message must not contain line breaks. If unset, the message is \"failed rule: {Rule}\". e.g. \"must be a URL with the host matching spec.host\" If the Expression contains line breaks. Message is required. The message must not contain line breaks. If unset, the message is \"failed Expression: {Expression}\".", "reason": "Reason represents a machine-readable description of why this validation failed. If this is the first validation in the list to fail, this reason, as well as the corresponding HTTP response code, are used in the HTTP response to the client. The currently supported reasons are: \"Unauthorized\", \"Forbidden\", \"Invalid\", \"RequestEntityTooLarge\". If not set, StatusReasonInvalid is used in the response to the client.", "messageExpression": "messageExpression declares a CEL expression that evaluates to the validation failure message that is returned when this rule fails. Since messageExpression is used as a failure message, it must evaluate to a string. If both message and messageExpression are present on a validation, then messageExpression will be used if validation fails. If messageExpression results in a runtime error, the runtime error is logged, and the validation failure message is produced as if the messageExpression field were unset. 
If messageExpression evaluates to an empty string, a string with only spaces, or a string that contains line breaks, then the validation failure message will also be produced as if the messageExpression field were unset, and the fact that messageExpression produced an empty string/string with only spaces/string with line breaks will be logged. messageExpression has access to all the same variables as the `expression` except for 'authorizer' and 'authorizer.requestResource'. Example: \"object.x must be less than max (\"+string(params.max)+\")\"", @@ -188,4 +191,14 @@ func (Validation) SwaggerDoc() map[string]string { return map_Validation } +var map_Variable = map[string]string{ + "": "Variable is the definition of a variable that is used for composition.", + "name": "Name is the name of the variable. The name must be a valid CEL identifier and unique among all variables. The variable can be accessed in other expressions through `variables` For example, if name is \"foo\", the variable will be available as `variables.foo`", + "expression": "Expression is the expression that will be evaluated as the value of the variable. The CEL expression has access to the same identifiers as the CEL expressions in Validation.", +} + +func (Variable) SwaggerDoc() map[string]string { + return map_Variable +} + // AUTO-GENERATED FUNCTIONS END HERE diff --git a/cluster-autoscaler/vendor/k8s.io/api/admissionregistration/v1alpha1/zz_generated.deepcopy.go b/cluster-autoscaler/vendor/k8s.io/api/admissionregistration/v1alpha1/zz_generated.deepcopy.go index 8e4abfd0877a..24cd0e4e9b45 100644 --- a/cluster-autoscaler/vendor/k8s.io/api/admissionregistration/v1alpha1/zz_generated.deepcopy.go +++ b/cluster-autoscaler/vendor/k8s.io/api/admissionregistration/v1alpha1/zz_generated.deepcopy.go @@ -160,6 +160,16 @@ func (in *ParamKind) DeepCopy() *ParamKind { // DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. 
func (in *ParamRef) DeepCopyInto(out *ParamRef) { *out = *in + if in.Selector != nil { + in, out := &in.Selector, &out.Selector + *out = new(v1.LabelSelector) + (*in).DeepCopyInto(*out) + } + if in.ParameterNotFoundAction != nil { + in, out := &in.ParameterNotFoundAction, &out.ParameterNotFoundAction + *out = new(ParameterNotFoundActionType) + **out = **in + } return } @@ -288,7 +298,7 @@ func (in *ValidatingAdmissionPolicyBindingSpec) DeepCopyInto(out *ValidatingAdmi if in.ParamRef != nil { in, out := &in.ParamRef, &out.ParamRef *out = new(ParamRef) - **out = **in + (*in).DeepCopyInto(*out) } if in.MatchResources != nil { in, out := &in.MatchResources, &out.MatchResources @@ -381,6 +391,11 @@ func (in *ValidatingAdmissionPolicySpec) DeepCopyInto(out *ValidatingAdmissionPo *out = make([]MatchCondition, len(*in)) copy(*out, *in) } + if in.Variables != nil { + in, out := &in.Variables, &out.Variables + *out = make([]Variable, len(*in)) + copy(*out, *in) + } return } @@ -442,3 +457,19 @@ func (in *Validation) DeepCopy() *Validation { in.DeepCopyInto(out) return out } + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *Variable) DeepCopyInto(out *Variable) { + *out = *in + return +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Variable. 
+func (in *Variable) DeepCopy() *Variable { + if in == nil { + return nil + } + out := new(Variable) + in.DeepCopyInto(out) + return out +} diff --git a/cluster-autoscaler/vendor/k8s.io/api/admissionregistration/v1beta1/generated.pb.go b/cluster-autoscaler/vendor/k8s.io/api/admissionregistration/v1beta1/generated.pb.go index 8fb354c319a5..267ddc1cbd66 100644 --- a/cluster-autoscaler/vendor/k8s.io/api/admissionregistration/v1beta1/generated.pb.go +++ b/cluster-autoscaler/vendor/k8s.io/api/admissionregistration/v1beta1/generated.pb.go @@ -25,8 +25,9 @@ import ( io "io" proto "github.com/gogo/protobuf/proto" - v1 "k8s.io/api/admissionregistration/v1" - v11 "k8s.io/apimachinery/pkg/apis/meta/v1" + v11 "k8s.io/api/admissionregistration/v1" + k8s_io_apimachinery_pkg_apis_meta_v1 "k8s.io/apimachinery/pkg/apis/meta/v1" + v1 "k8s.io/apimachinery/pkg/apis/meta/v1" math "math" math_bits "math/bits" @@ -45,10 +46,66 @@ var _ = math.Inf // proto package needs to be updated. const _ = proto.GoGoProtoPackageIsVersion3 // please upgrade the proto package +func (m *AuditAnnotation) Reset() { *m = AuditAnnotation{} } +func (*AuditAnnotation) ProtoMessage() {} +func (*AuditAnnotation) Descriptor() ([]byte, []int) { + return fileDescriptor_abeea74cbc46f55a, []int{0} +} +func (m *AuditAnnotation) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *AuditAnnotation) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err + } + return b[:n], nil +} +func (m *AuditAnnotation) XXX_Merge(src proto.Message) { + xxx_messageInfo_AuditAnnotation.Merge(m, src) +} +func (m *AuditAnnotation) XXX_Size() int { + return m.Size() +} +func (m *AuditAnnotation) XXX_DiscardUnknown() { + xxx_messageInfo_AuditAnnotation.DiscardUnknown(m) +} + +var xxx_messageInfo_AuditAnnotation proto.InternalMessageInfo + +func (m *ExpressionWarning) Reset() { *m = ExpressionWarning{} } +func (*ExpressionWarning) 
ProtoMessage() {} +func (*ExpressionWarning) Descriptor() ([]byte, []int) { + return fileDescriptor_abeea74cbc46f55a, []int{1} +} +func (m *ExpressionWarning) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *ExpressionWarning) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err + } + return b[:n], nil +} +func (m *ExpressionWarning) XXX_Merge(src proto.Message) { + xxx_messageInfo_ExpressionWarning.Merge(m, src) +} +func (m *ExpressionWarning) XXX_Size() int { + return m.Size() +} +func (m *ExpressionWarning) XXX_DiscardUnknown() { + xxx_messageInfo_ExpressionWarning.DiscardUnknown(m) +} + +var xxx_messageInfo_ExpressionWarning proto.InternalMessageInfo + func (m *MatchCondition) Reset() { *m = MatchCondition{} } func (*MatchCondition) ProtoMessage() {} func (*MatchCondition) Descriptor() ([]byte, []int) { - return fileDescriptor_abeea74cbc46f55a, []int{0} + return fileDescriptor_abeea74cbc46f55a, []int{2} } func (m *MatchCondition) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -73,10 +130,38 @@ func (m *MatchCondition) XXX_DiscardUnknown() { var xxx_messageInfo_MatchCondition proto.InternalMessageInfo +func (m *MatchResources) Reset() { *m = MatchResources{} } +func (*MatchResources) ProtoMessage() {} +func (*MatchResources) Descriptor() ([]byte, []int) { + return fileDescriptor_abeea74cbc46f55a, []int{3} +} +func (m *MatchResources) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *MatchResources) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err + } + return b[:n], nil +} +func (m *MatchResources) XXX_Merge(src proto.Message) { + xxx_messageInfo_MatchResources.Merge(m, src) +} +func (m *MatchResources) XXX_Size() int { + return m.Size() +} +func (m *MatchResources) XXX_DiscardUnknown() { + 
xxx_messageInfo_MatchResources.DiscardUnknown(m) +} + +var xxx_messageInfo_MatchResources proto.InternalMessageInfo + func (m *MutatingWebhook) Reset() { *m = MutatingWebhook{} } func (*MutatingWebhook) ProtoMessage() {} func (*MutatingWebhook) Descriptor() ([]byte, []int) { - return fileDescriptor_abeea74cbc46f55a, []int{1} + return fileDescriptor_abeea74cbc46f55a, []int{4} } func (m *MutatingWebhook) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -104,7 +189,7 @@ var xxx_messageInfo_MutatingWebhook proto.InternalMessageInfo func (m *MutatingWebhookConfiguration) Reset() { *m = MutatingWebhookConfiguration{} } func (*MutatingWebhookConfiguration) ProtoMessage() {} func (*MutatingWebhookConfiguration) Descriptor() ([]byte, []int) { - return fileDescriptor_abeea74cbc46f55a, []int{2} + return fileDescriptor_abeea74cbc46f55a, []int{5} } func (m *MutatingWebhookConfiguration) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -132,7 +217,7 @@ var xxx_messageInfo_MutatingWebhookConfiguration proto.InternalMessageInfo func (m *MutatingWebhookConfigurationList) Reset() { *m = MutatingWebhookConfigurationList{} } func (*MutatingWebhookConfigurationList) ProtoMessage() {} func (*MutatingWebhookConfigurationList) Descriptor() ([]byte, []int) { - return fileDescriptor_abeea74cbc46f55a, []int{3} + return fileDescriptor_abeea74cbc46f55a, []int{6} } func (m *MutatingWebhookConfigurationList) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -157,10 +242,94 @@ func (m *MutatingWebhookConfigurationList) XXX_DiscardUnknown() { var xxx_messageInfo_MutatingWebhookConfigurationList proto.InternalMessageInfo +func (m *NamedRuleWithOperations) Reset() { *m = NamedRuleWithOperations{} } +func (*NamedRuleWithOperations) ProtoMessage() {} +func (*NamedRuleWithOperations) Descriptor() ([]byte, []int) { + return fileDescriptor_abeea74cbc46f55a, []int{7} +} +func (m *NamedRuleWithOperations) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m 
*NamedRuleWithOperations) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err + } + return b[:n], nil +} +func (m *NamedRuleWithOperations) XXX_Merge(src proto.Message) { + xxx_messageInfo_NamedRuleWithOperations.Merge(m, src) +} +func (m *NamedRuleWithOperations) XXX_Size() int { + return m.Size() +} +func (m *NamedRuleWithOperations) XXX_DiscardUnknown() { + xxx_messageInfo_NamedRuleWithOperations.DiscardUnknown(m) +} + +var xxx_messageInfo_NamedRuleWithOperations proto.InternalMessageInfo + +func (m *ParamKind) Reset() { *m = ParamKind{} } +func (*ParamKind) ProtoMessage() {} +func (*ParamKind) Descriptor() ([]byte, []int) { + return fileDescriptor_abeea74cbc46f55a, []int{8} +} +func (m *ParamKind) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *ParamKind) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err + } + return b[:n], nil +} +func (m *ParamKind) XXX_Merge(src proto.Message) { + xxx_messageInfo_ParamKind.Merge(m, src) +} +func (m *ParamKind) XXX_Size() int { + return m.Size() +} +func (m *ParamKind) XXX_DiscardUnknown() { + xxx_messageInfo_ParamKind.DiscardUnknown(m) +} + +var xxx_messageInfo_ParamKind proto.InternalMessageInfo + +func (m *ParamRef) Reset() { *m = ParamRef{} } +func (*ParamRef) ProtoMessage() {} +func (*ParamRef) Descriptor() ([]byte, []int) { + return fileDescriptor_abeea74cbc46f55a, []int{9} +} +func (m *ParamRef) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *ParamRef) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err + } + return b[:n], nil +} +func (m *ParamRef) XXX_Merge(src proto.Message) { + xxx_messageInfo_ParamRef.Merge(m, src) +} +func (m *ParamRef) XXX_Size() int { + return m.Size() +} 
+func (m *ParamRef) XXX_DiscardUnknown() { + xxx_messageInfo_ParamRef.DiscardUnknown(m) +} + +var xxx_messageInfo_ParamRef proto.InternalMessageInfo + func (m *ServiceReference) Reset() { *m = ServiceReference{} } func (*ServiceReference) ProtoMessage() {} func (*ServiceReference) Descriptor() ([]byte, []int) { - return fileDescriptor_abeea74cbc46f55a, []int{4} + return fileDescriptor_abeea74cbc46f55a, []int{10} } func (m *ServiceReference) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -185,10 +354,234 @@ func (m *ServiceReference) XXX_DiscardUnknown() { var xxx_messageInfo_ServiceReference proto.InternalMessageInfo +func (m *TypeChecking) Reset() { *m = TypeChecking{} } +func (*TypeChecking) ProtoMessage() {} +func (*TypeChecking) Descriptor() ([]byte, []int) { + return fileDescriptor_abeea74cbc46f55a, []int{11} +} +func (m *TypeChecking) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *TypeChecking) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err + } + return b[:n], nil +} +func (m *TypeChecking) XXX_Merge(src proto.Message) { + xxx_messageInfo_TypeChecking.Merge(m, src) +} +func (m *TypeChecking) XXX_Size() int { + return m.Size() +} +func (m *TypeChecking) XXX_DiscardUnknown() { + xxx_messageInfo_TypeChecking.DiscardUnknown(m) +} + +var xxx_messageInfo_TypeChecking proto.InternalMessageInfo + +func (m *ValidatingAdmissionPolicy) Reset() { *m = ValidatingAdmissionPolicy{} } +func (*ValidatingAdmissionPolicy) ProtoMessage() {} +func (*ValidatingAdmissionPolicy) Descriptor() ([]byte, []int) { + return fileDescriptor_abeea74cbc46f55a, []int{12} +} +func (m *ValidatingAdmissionPolicy) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *ValidatingAdmissionPolicy) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err + } + 
return b[:n], nil +} +func (m *ValidatingAdmissionPolicy) XXX_Merge(src proto.Message) { + xxx_messageInfo_ValidatingAdmissionPolicy.Merge(m, src) +} +func (m *ValidatingAdmissionPolicy) XXX_Size() int { + return m.Size() +} +func (m *ValidatingAdmissionPolicy) XXX_DiscardUnknown() { + xxx_messageInfo_ValidatingAdmissionPolicy.DiscardUnknown(m) +} + +var xxx_messageInfo_ValidatingAdmissionPolicy proto.InternalMessageInfo + +func (m *ValidatingAdmissionPolicyBinding) Reset() { *m = ValidatingAdmissionPolicyBinding{} } +func (*ValidatingAdmissionPolicyBinding) ProtoMessage() {} +func (*ValidatingAdmissionPolicyBinding) Descriptor() ([]byte, []int) { + return fileDescriptor_abeea74cbc46f55a, []int{13} +} +func (m *ValidatingAdmissionPolicyBinding) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *ValidatingAdmissionPolicyBinding) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err + } + return b[:n], nil +} +func (m *ValidatingAdmissionPolicyBinding) XXX_Merge(src proto.Message) { + xxx_messageInfo_ValidatingAdmissionPolicyBinding.Merge(m, src) +} +func (m *ValidatingAdmissionPolicyBinding) XXX_Size() int { + return m.Size() +} +func (m *ValidatingAdmissionPolicyBinding) XXX_DiscardUnknown() { + xxx_messageInfo_ValidatingAdmissionPolicyBinding.DiscardUnknown(m) +} + +var xxx_messageInfo_ValidatingAdmissionPolicyBinding proto.InternalMessageInfo + +func (m *ValidatingAdmissionPolicyBindingList) Reset() { *m = ValidatingAdmissionPolicyBindingList{} } +func (*ValidatingAdmissionPolicyBindingList) ProtoMessage() {} +func (*ValidatingAdmissionPolicyBindingList) Descriptor() ([]byte, []int) { + return fileDescriptor_abeea74cbc46f55a, []int{14} +} +func (m *ValidatingAdmissionPolicyBindingList) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *ValidatingAdmissionPolicyBindingList) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { 
+ b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err + } + return b[:n], nil +} +func (m *ValidatingAdmissionPolicyBindingList) XXX_Merge(src proto.Message) { + xxx_messageInfo_ValidatingAdmissionPolicyBindingList.Merge(m, src) +} +func (m *ValidatingAdmissionPolicyBindingList) XXX_Size() int { + return m.Size() +} +func (m *ValidatingAdmissionPolicyBindingList) XXX_DiscardUnknown() { + xxx_messageInfo_ValidatingAdmissionPolicyBindingList.DiscardUnknown(m) +} + +var xxx_messageInfo_ValidatingAdmissionPolicyBindingList proto.InternalMessageInfo + +func (m *ValidatingAdmissionPolicyBindingSpec) Reset() { *m = ValidatingAdmissionPolicyBindingSpec{} } +func (*ValidatingAdmissionPolicyBindingSpec) ProtoMessage() {} +func (*ValidatingAdmissionPolicyBindingSpec) Descriptor() ([]byte, []int) { + return fileDescriptor_abeea74cbc46f55a, []int{15} +} +func (m *ValidatingAdmissionPolicyBindingSpec) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *ValidatingAdmissionPolicyBindingSpec) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err + } + return b[:n], nil +} +func (m *ValidatingAdmissionPolicyBindingSpec) XXX_Merge(src proto.Message) { + xxx_messageInfo_ValidatingAdmissionPolicyBindingSpec.Merge(m, src) +} +func (m *ValidatingAdmissionPolicyBindingSpec) XXX_Size() int { + return m.Size() +} +func (m *ValidatingAdmissionPolicyBindingSpec) XXX_DiscardUnknown() { + xxx_messageInfo_ValidatingAdmissionPolicyBindingSpec.DiscardUnknown(m) +} + +var xxx_messageInfo_ValidatingAdmissionPolicyBindingSpec proto.InternalMessageInfo + +func (m *ValidatingAdmissionPolicyList) Reset() { *m = ValidatingAdmissionPolicyList{} } +func (*ValidatingAdmissionPolicyList) ProtoMessage() {} +func (*ValidatingAdmissionPolicyList) Descriptor() ([]byte, []int) { + return fileDescriptor_abeea74cbc46f55a, []int{16} +} +func (m 
*ValidatingAdmissionPolicyList) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *ValidatingAdmissionPolicyList) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err + } + return b[:n], nil +} +func (m *ValidatingAdmissionPolicyList) XXX_Merge(src proto.Message) { + xxx_messageInfo_ValidatingAdmissionPolicyList.Merge(m, src) +} +func (m *ValidatingAdmissionPolicyList) XXX_Size() int { + return m.Size() +} +func (m *ValidatingAdmissionPolicyList) XXX_DiscardUnknown() { + xxx_messageInfo_ValidatingAdmissionPolicyList.DiscardUnknown(m) +} + +var xxx_messageInfo_ValidatingAdmissionPolicyList proto.InternalMessageInfo + +func (m *ValidatingAdmissionPolicySpec) Reset() { *m = ValidatingAdmissionPolicySpec{} } +func (*ValidatingAdmissionPolicySpec) ProtoMessage() {} +func (*ValidatingAdmissionPolicySpec) Descriptor() ([]byte, []int) { + return fileDescriptor_abeea74cbc46f55a, []int{17} +} +func (m *ValidatingAdmissionPolicySpec) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *ValidatingAdmissionPolicySpec) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err + } + return b[:n], nil +} +func (m *ValidatingAdmissionPolicySpec) XXX_Merge(src proto.Message) { + xxx_messageInfo_ValidatingAdmissionPolicySpec.Merge(m, src) +} +func (m *ValidatingAdmissionPolicySpec) XXX_Size() int { + return m.Size() +} +func (m *ValidatingAdmissionPolicySpec) XXX_DiscardUnknown() { + xxx_messageInfo_ValidatingAdmissionPolicySpec.DiscardUnknown(m) +} + +var xxx_messageInfo_ValidatingAdmissionPolicySpec proto.InternalMessageInfo + +func (m *ValidatingAdmissionPolicyStatus) Reset() { *m = ValidatingAdmissionPolicyStatus{} } +func (*ValidatingAdmissionPolicyStatus) ProtoMessage() {} +func (*ValidatingAdmissionPolicyStatus) Descriptor() ([]byte, []int) { + return 
fileDescriptor_abeea74cbc46f55a, []int{18} +} +func (m *ValidatingAdmissionPolicyStatus) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *ValidatingAdmissionPolicyStatus) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err + } + return b[:n], nil +} +func (m *ValidatingAdmissionPolicyStatus) XXX_Merge(src proto.Message) { + xxx_messageInfo_ValidatingAdmissionPolicyStatus.Merge(m, src) +} +func (m *ValidatingAdmissionPolicyStatus) XXX_Size() int { + return m.Size() +} +func (m *ValidatingAdmissionPolicyStatus) XXX_DiscardUnknown() { + xxx_messageInfo_ValidatingAdmissionPolicyStatus.DiscardUnknown(m) +} + +var xxx_messageInfo_ValidatingAdmissionPolicyStatus proto.InternalMessageInfo + func (m *ValidatingWebhook) Reset() { *m = ValidatingWebhook{} } func (*ValidatingWebhook) ProtoMessage() {} func (*ValidatingWebhook) Descriptor() ([]byte, []int) { - return fileDescriptor_abeea74cbc46f55a, []int{5} + return fileDescriptor_abeea74cbc46f55a, []int{19} } func (m *ValidatingWebhook) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -216,7 +609,7 @@ var xxx_messageInfo_ValidatingWebhook proto.InternalMessageInfo func (m *ValidatingWebhookConfiguration) Reset() { *m = ValidatingWebhookConfiguration{} } func (*ValidatingWebhookConfiguration) ProtoMessage() {} func (*ValidatingWebhookConfiguration) Descriptor() ([]byte, []int) { - return fileDescriptor_abeea74cbc46f55a, []int{6} + return fileDescriptor_abeea74cbc46f55a, []int{20} } func (m *ValidatingWebhookConfiguration) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -244,7 +637,7 @@ var xxx_messageInfo_ValidatingWebhookConfiguration proto.InternalMessageInfo func (m *ValidatingWebhookConfigurationList) Reset() { *m = ValidatingWebhookConfigurationList{} } func (*ValidatingWebhookConfigurationList) ProtoMessage() {} func (*ValidatingWebhookConfigurationList) Descriptor() ([]byte, []int) { - 
return fileDescriptor_abeea74cbc46f55a, []int{7} + return fileDescriptor_abeea74cbc46f55a, []int{21} } func (m *ValidatingWebhookConfigurationList) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -269,15 +662,15 @@ func (m *ValidatingWebhookConfigurationList) XXX_DiscardUnknown() { var xxx_messageInfo_ValidatingWebhookConfigurationList proto.InternalMessageInfo -func (m *WebhookClientConfig) Reset() { *m = WebhookClientConfig{} } -func (*WebhookClientConfig) ProtoMessage() {} -func (*WebhookClientConfig) Descriptor() ([]byte, []int) { - return fileDescriptor_abeea74cbc46f55a, []int{8} +func (m *Validation) Reset() { *m = Validation{} } +func (*Validation) ProtoMessage() {} +func (*Validation) Descriptor() ([]byte, []int) { + return fileDescriptor_abeea74cbc46f55a, []int{22} } -func (m *WebhookClientConfig) XXX_Unmarshal(b []byte) error { +func (m *Validation) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) } -func (m *WebhookClientConfig) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { +func (m *Validation) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { b = b[:cap(b)] n, err := m.MarshalToSizedBuffer(b) if err != nil { @@ -285,27 +678,99 @@ func (m *WebhookClientConfig) XXX_Marshal(b []byte, deterministic bool) ([]byte, } return b[:n], nil } -func (m *WebhookClientConfig) XXX_Merge(src proto.Message) { - xxx_messageInfo_WebhookClientConfig.Merge(m, src) +func (m *Validation) XXX_Merge(src proto.Message) { + xxx_messageInfo_Validation.Merge(m, src) } -func (m *WebhookClientConfig) XXX_Size() int { +func (m *Validation) XXX_Size() int { return m.Size() } -func (m *WebhookClientConfig) XXX_DiscardUnknown() { - xxx_messageInfo_WebhookClientConfig.DiscardUnknown(m) +func (m *Validation) XXX_DiscardUnknown() { + xxx_messageInfo_Validation.DiscardUnknown(m) } -var xxx_messageInfo_WebhookClientConfig proto.InternalMessageInfo +var xxx_messageInfo_Validation proto.InternalMessageInfo -func init() { - 
proto.RegisterType((*MatchCondition)(nil), "k8s.io.api.admissionregistration.v1beta1.MatchCondition") - proto.RegisterType((*MutatingWebhook)(nil), "k8s.io.api.admissionregistration.v1beta1.MutatingWebhook") +func (m *Variable) Reset() { *m = Variable{} } +func (*Variable) ProtoMessage() {} +func (*Variable) Descriptor() ([]byte, []int) { + return fileDescriptor_abeea74cbc46f55a, []int{23} +} +func (m *Variable) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *Variable) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err + } + return b[:n], nil +} +func (m *Variable) XXX_Merge(src proto.Message) { + xxx_messageInfo_Variable.Merge(m, src) +} +func (m *Variable) XXX_Size() int { + return m.Size() +} +func (m *Variable) XXX_DiscardUnknown() { + xxx_messageInfo_Variable.DiscardUnknown(m) +} + +var xxx_messageInfo_Variable proto.InternalMessageInfo + +func (m *WebhookClientConfig) Reset() { *m = WebhookClientConfig{} } +func (*WebhookClientConfig) ProtoMessage() {} +func (*WebhookClientConfig) Descriptor() ([]byte, []int) { + return fileDescriptor_abeea74cbc46f55a, []int{24} +} +func (m *WebhookClientConfig) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *WebhookClientConfig) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err + } + return b[:n], nil +} +func (m *WebhookClientConfig) XXX_Merge(src proto.Message) { + xxx_messageInfo_WebhookClientConfig.Merge(m, src) +} +func (m *WebhookClientConfig) XXX_Size() int { + return m.Size() +} +func (m *WebhookClientConfig) XXX_DiscardUnknown() { + xxx_messageInfo_WebhookClientConfig.DiscardUnknown(m) +} + +var xxx_messageInfo_WebhookClientConfig proto.InternalMessageInfo + +func init() { + proto.RegisterType((*AuditAnnotation)(nil), "k8s.io.api.admissionregistration.v1beta1.AuditAnnotation") 
+ proto.RegisterType((*ExpressionWarning)(nil), "k8s.io.api.admissionregistration.v1beta1.ExpressionWarning") + proto.RegisterType((*MatchCondition)(nil), "k8s.io.api.admissionregistration.v1beta1.MatchCondition") + proto.RegisterType((*MatchResources)(nil), "k8s.io.api.admissionregistration.v1beta1.MatchResources") + proto.RegisterType((*MutatingWebhook)(nil), "k8s.io.api.admissionregistration.v1beta1.MutatingWebhook") proto.RegisterType((*MutatingWebhookConfiguration)(nil), "k8s.io.api.admissionregistration.v1beta1.MutatingWebhookConfiguration") proto.RegisterType((*MutatingWebhookConfigurationList)(nil), "k8s.io.api.admissionregistration.v1beta1.MutatingWebhookConfigurationList") + proto.RegisterType((*NamedRuleWithOperations)(nil), "k8s.io.api.admissionregistration.v1beta1.NamedRuleWithOperations") + proto.RegisterType((*ParamKind)(nil), "k8s.io.api.admissionregistration.v1beta1.ParamKind") + proto.RegisterType((*ParamRef)(nil), "k8s.io.api.admissionregistration.v1beta1.ParamRef") proto.RegisterType((*ServiceReference)(nil), "k8s.io.api.admissionregistration.v1beta1.ServiceReference") + proto.RegisterType((*TypeChecking)(nil), "k8s.io.api.admissionregistration.v1beta1.TypeChecking") + proto.RegisterType((*ValidatingAdmissionPolicy)(nil), "k8s.io.api.admissionregistration.v1beta1.ValidatingAdmissionPolicy") + proto.RegisterType((*ValidatingAdmissionPolicyBinding)(nil), "k8s.io.api.admissionregistration.v1beta1.ValidatingAdmissionPolicyBinding") + proto.RegisterType((*ValidatingAdmissionPolicyBindingList)(nil), "k8s.io.api.admissionregistration.v1beta1.ValidatingAdmissionPolicyBindingList") + proto.RegisterType((*ValidatingAdmissionPolicyBindingSpec)(nil), "k8s.io.api.admissionregistration.v1beta1.ValidatingAdmissionPolicyBindingSpec") + proto.RegisterType((*ValidatingAdmissionPolicyList)(nil), "k8s.io.api.admissionregistration.v1beta1.ValidatingAdmissionPolicyList") + proto.RegisterType((*ValidatingAdmissionPolicySpec)(nil), 
"k8s.io.api.admissionregistration.v1beta1.ValidatingAdmissionPolicySpec") + proto.RegisterType((*ValidatingAdmissionPolicyStatus)(nil), "k8s.io.api.admissionregistration.v1beta1.ValidatingAdmissionPolicyStatus") proto.RegisterType((*ValidatingWebhook)(nil), "k8s.io.api.admissionregistration.v1beta1.ValidatingWebhook") proto.RegisterType((*ValidatingWebhookConfiguration)(nil), "k8s.io.api.admissionregistration.v1beta1.ValidatingWebhookConfiguration") proto.RegisterType((*ValidatingWebhookConfigurationList)(nil), "k8s.io.api.admissionregistration.v1beta1.ValidatingWebhookConfigurationList") + proto.RegisterType((*Validation)(nil), "k8s.io.api.admissionregistration.v1beta1.Validation") + proto.RegisterType((*Variable)(nil), "k8s.io.api.admissionregistration.v1beta1.Variable") proto.RegisterType((*WebhookClientConfig)(nil), "k8s.io.api.admissionregistration.v1beta1.WebhookClientConfig") } @@ -314,73 +779,197 @@ func init() { } var fileDescriptor_abeea74cbc46f55a = []byte{ - // 1041 bytes of a gzipped FileDescriptorProto - 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xec, 0x57, 0x4f, 0x73, 0xdb, 0xc4, - 0x1b, 0x8e, 0xe2, 0xf8, 0x17, 0x67, 0xed, 0x24, 0xcd, 0xfe, 0x80, 0x88, 0xd0, 0xb1, 0x3c, 0x3e, - 0x30, 0xbe, 0x20, 0xb5, 0x29, 0x03, 0xa5, 0x0c, 0x87, 0x2a, 0xb4, 0x03, 0x33, 0x49, 0x5a, 0x36, - 0xfd, 0x33, 0x03, 0x65, 0xa6, 0x6b, 0xf9, 0xb5, 0xbd, 0x58, 0xd2, 0x7a, 0xb4, 0xab, 0xb4, 0x19, - 0x2e, 0x7c, 0x04, 0xbe, 0x02, 0x1f, 0x84, 0x03, 0xb7, 0x1c, 0x7b, 0xec, 0x05, 0x0d, 0x11, 0x67, - 0x0e, 0x5c, 0x73, 0x62, 0xb4, 0x52, 0x6c, 0xcb, 0x76, 0x5a, 0x11, 0x66, 0x72, 0xca, 0xcd, 0xfb, - 0xbc, 0xfb, 0xbe, 0xcf, 0x3e, 0xab, 0x77, 0xdf, 0x67, 0x8c, 0xbe, 0x19, 0xdc, 0x16, 0x26, 0xe3, - 0xd6, 0x20, 0x6c, 0x43, 0xe0, 0x83, 0x04, 0x61, 0x1d, 0x82, 0xdf, 0xe1, 0x81, 0x95, 0x05, 0xe8, - 0x90, 0x59, 0xb4, 0xe3, 0x31, 0x21, 0x18, 0xf7, 0x03, 0xe8, 0x31, 0x21, 0x03, 0x2a, 0x19, 0xf7, - 0xad, 0xc3, 0x9b, 0x6d, 0x90, 0xf4, 0xa6, 0xd5, 0x03, 0x1f, 0x02, 0x2a, 0xa1, 0x63, 
0x0e, 0x03, - 0x2e, 0x39, 0x6e, 0xa5, 0x99, 0x26, 0x1d, 0x32, 0x73, 0x6e, 0xa6, 0x99, 0x65, 0x6e, 0x7d, 0xd4, - 0x63, 0xb2, 0x1f, 0xb6, 0x4d, 0x87, 0x7b, 0x56, 0x8f, 0xf7, 0xb8, 0xa5, 0x0a, 0xb4, 0xc3, 0xae, - 0x5a, 0xa9, 0x85, 0xfa, 0x95, 0x16, 0xde, 0xba, 0x55, 0xe0, 0x48, 0xd3, 0xa7, 0xd9, 0xfa, 0x78, - 0x9c, 0xe4, 0x51, 0xa7, 0xcf, 0x7c, 0x08, 0x8e, 0xac, 0xe1, 0xa0, 0x97, 0x00, 0xc2, 0xf2, 0x40, - 0xd2, 0x79, 0x59, 0xd6, 0x79, 0x59, 0x41, 0xe8, 0x4b, 0xe6, 0xc1, 0x4c, 0xc2, 0x27, 0x6f, 0x4b, - 0x10, 0x4e, 0x1f, 0x3c, 0x3a, 0x9d, 0xd7, 0xec, 0xa2, 0xb5, 0x3d, 0x2a, 0x9d, 0xfe, 0x0e, 0xf7, - 0x3b, 0x2c, 0xd1, 0x80, 0x1b, 0x68, 0xc9, 0xa7, 0x1e, 0xe8, 0x5a, 0x43, 0x6b, 0xad, 0xd8, 0xb5, - 0xe3, 0xc8, 0x58, 0x88, 0x23, 0x63, 0x69, 0x9f, 0x7a, 0x40, 0x54, 0x04, 0x6f, 0x23, 0x04, 0x2f, - 0x87, 0x01, 0x28, 0xfd, 0xfa, 0xa2, 0xda, 0x87, 0xb3, 0x7d, 0xe8, 0xde, 0x28, 0x42, 0x26, 0x76, - 0x35, 0x7f, 0xab, 0xa0, 0xf5, 0xbd, 0x50, 0x52, 0xc9, 0xfc, 0xde, 0x53, 0x68, 0xf7, 0x39, 0x1f, - 0x14, 0x60, 0x7a, 0x81, 0x6a, 0x8e, 0xcb, 0xc0, 0x97, 0x3b, 0xdc, 0xef, 0xb2, 0x9e, 0xe2, 0xaa, - 0x6e, 0x7f, 0x61, 0x16, 0xfd, 0xc2, 0x66, 0x46, 0xb5, 0x33, 0x51, 0xc4, 0x7e, 0x27, 0x23, 0xaa, - 0x4d, 0xa2, 0x24, 0x47, 0x84, 0x9f, 0xa1, 0x72, 0x10, 0xba, 0x20, 0xf4, 0x52, 0xa3, 0xd4, 0xaa, - 0x6e, 0x7f, 0x5a, 0x84, 0xd1, 0x24, 0xa1, 0x0b, 0x4f, 0x99, 0xec, 0x3f, 0x18, 0x42, 0x0a, 0x0a, - 0x7b, 0x35, 0xe3, 0x2a, 0x27, 0x31, 0x41, 0xd2, 0xa2, 0x78, 0x17, 0xad, 0x76, 0x29, 0x73, 0xc3, - 0x00, 0x1e, 0x72, 0x97, 0x39, 0x47, 0xfa, 0x92, 0xba, 0x81, 0x0f, 0xe3, 0xc8, 0x58, 0xbd, 0x3f, - 0x19, 0x38, 0x8d, 0x8c, 0x8d, 0x1c, 0xf0, 0xe8, 0x68, 0x08, 0x24, 0x9f, 0x8c, 0xbf, 0x44, 0x55, - 0x2f, 0xf9, 0x84, 0x59, 0xad, 0x15, 0x55, 0xab, 0x19, 0x47, 0x46, 0x75, 0x6f, 0x0c, 0x9f, 0x46, - 0xc6, 0xfa, 0xc4, 0x52, 0xd5, 0x99, 0x4c, 0xc3, 0x2f, 0xd1, 0x46, 0x72, 0xe5, 0x62, 0x48, 0x1d, - 0x38, 0x00, 0x17, 0x1c, 0xc9, 0x03, 0xbd, 0xac, 0xee, 0xfb, 0xd6, 0x84, 0xfa, 0x51, 0x73, 0x99, - 0xc3, 0x41, 0x2f, 0x01, 
0x84, 0x99, 0xf4, 0x70, 0x22, 0x7f, 0x97, 0xb6, 0xc1, 0x3d, 0x4b, 0xb5, - 0xdf, 0x8d, 0x23, 0x63, 0x63, 0x7f, 0xba, 0x22, 0x99, 0x25, 0xc1, 0x1c, 0xad, 0xf1, 0xf6, 0x0f, - 0xe0, 0xc8, 0x11, 0x6d, 0xf5, 0xe2, 0xb4, 0x38, 0x8e, 0x8c, 0xb5, 0x07, 0xb9, 0x72, 0x64, 0xaa, - 0x7c, 0x72, 0x61, 0x82, 0x75, 0xe0, 0x5e, 0xb7, 0x0b, 0x8e, 0x14, 0xfa, 0xff, 0xc6, 0x17, 0x76, - 0x30, 0x86, 0x93, 0x0b, 0x1b, 0x2f, 0x77, 0x5c, 0x2a, 0x04, 0x99, 0x4c, 0xc3, 0x77, 0xd0, 0x5a, - 0xf2, 0xb0, 0x78, 0x28, 0x0f, 0xc0, 0xe1, 0x7e, 0x47, 0xe8, 0xcb, 0x0d, 0xad, 0x55, 0x4e, 0x4f, - 0xf0, 0x28, 0x17, 0x21, 0x53, 0x3b, 0xf1, 0x63, 0xb4, 0x39, 0xea, 0x22, 0x02, 0x87, 0x0c, 0x5e, - 0x3c, 0x81, 0x20, 0x59, 0x08, 0xbd, 0xd2, 0x28, 0xb5, 0x56, 0xec, 0x0f, 0xe2, 0xc8, 0xd8, 0xbc, - 0x3b, 0x7f, 0x0b, 0x39, 0x2f, 0x17, 0x3f, 0x47, 0x38, 0x00, 0xe6, 0x1f, 0x72, 0x47, 0xb5, 0x5f, - 0xd6, 0x10, 0x48, 0xe9, 0xbb, 0x11, 0x47, 0x06, 0x26, 0x33, 0xd1, 0xd3, 0xc8, 0x78, 0x6f, 0x16, - 0x55, 0xed, 0x31, 0xa7, 0x16, 0xfe, 0x11, 0xad, 0x7b, 0xb9, 0x71, 0x21, 0xf4, 0x9a, 0x7a, 0x21, - 0xb7, 0x8b, 0xbf, 0xc9, 0xfc, 0xbc, 0xb1, 0x37, 0xb3, 0x27, 0xb2, 0x9e, 0xc7, 0x05, 0x99, 0x66, - 0x6a, 0xfe, 0xae, 0xa1, 0xeb, 0x53, 0x33, 0x24, 0x7d, 0xae, 0x61, 0xca, 0x80, 0x9f, 0xa3, 0x4a, - 0xd2, 0x15, 0x1d, 0x2a, 0xa9, 0x1a, 0x2a, 0xd5, 0xed, 0x1b, 0xc5, 0x7a, 0x28, 0x6d, 0x98, 0x3d, - 0x90, 0x74, 0x3c, 0xc8, 0xc6, 0x18, 0x19, 0x55, 0xc5, 0xdf, 0xa1, 0x4a, 0xc6, 0x2c, 0xf4, 0x45, - 0x25, 0xfc, 0xb3, 0x7f, 0x21, 0x3c, 0x7f, 0x76, 0x7b, 0x29, 0xa1, 0x22, 0xa3, 0x82, 0xcd, 0xbf, - 0x34, 0xd4, 0x78, 0x93, 0xbe, 0x5d, 0x26, 0x24, 0x7e, 0x36, 0xa3, 0xd1, 0x2c, 0xf8, 0x4e, 0x98, - 0x48, 0x15, 0x5e, 0xcb, 0x14, 0x56, 0xce, 0x90, 0x09, 0x7d, 0x03, 0x54, 0x66, 0x12, 0xbc, 0x33, - 0x71, 0xf7, 0x2f, 0x2c, 0x2e, 0x77, 0xf0, 0xf1, 0x18, 0xfc, 0x3a, 0x29, 0x4e, 0x52, 0x8e, 0xe6, - 0x2f, 0x1a, 0xba, 0x76, 0x00, 0xc1, 0x21, 0x73, 0x80, 0x40, 0x17, 0x02, 0xf0, 0x1d, 0xc0, 0x16, - 0x5a, 0x19, 0x8d, 0x88, 0xcc, 0x19, 0x36, 0xb2, 0xec, 0x95, 
0xd1, 0x38, 0x21, 0xe3, 0x3d, 0x23, - 0x17, 0x59, 0x3c, 0xd7, 0x45, 0xae, 0xa3, 0xa5, 0x21, 0x95, 0x7d, 0xbd, 0xa4, 0x76, 0x54, 0x92, - 0xe8, 0x43, 0x2a, 0xfb, 0x44, 0xa1, 0x2a, 0xca, 0x03, 0xa9, 0x66, 0x70, 0x39, 0x8b, 0xf2, 0x40, - 0x12, 0x85, 0x36, 0x4f, 0x96, 0xd1, 0xc6, 0x13, 0xea, 0xb2, 0xce, 0x95, 0x73, 0x5d, 0x39, 0xd7, - 0xdb, 0x9d, 0x0b, 0x5d, 0x39, 0xd7, 0x85, 0x9c, 0x6b, 0x8e, 0xaf, 0x54, 0x2f, 0xcd, 0x57, 0x4e, - 0x34, 0x54, 0x9f, 0x79, 0xe3, 0x97, 0xed, 0x2c, 0xdf, 0xcf, 0x38, 0xcb, 0xe7, 0xc5, 0xa5, 0xcf, - 0x9c, 0x7e, 0xc6, 0x5b, 0xfe, 0xd6, 0x50, 0xf3, 0xcd, 0x1a, 0x2f, 0xc1, 0x5d, 0xbc, 0xbc, 0xbb, - 0x7c, 0xf5, 0x1f, 0x04, 0x16, 0xf1, 0x97, 0x5f, 0x35, 0xf4, 0xff, 0x39, 0x63, 0x14, 0xbf, 0x8f, - 0x4a, 0x61, 0xe0, 0x66, 0x76, 0xb0, 0x1c, 0x47, 0x46, 0xe9, 0x31, 0xd9, 0x25, 0x09, 0x86, 0x29, - 0x5a, 0x16, 0xa9, 0x23, 0x65, 0xf2, 0xef, 0x14, 0x3f, 0xe3, 0xb4, 0x95, 0xd9, 0xd5, 0x38, 0x32, - 0x96, 0xcf, 0xd0, 0xb3, 0xba, 0xb8, 0x85, 0x2a, 0x0e, 0xb5, 0x43, 0xbf, 0xe3, 0xa6, 0x9e, 0x55, - 0xb3, 0x6b, 0xc9, 0x75, 0xed, 0xdc, 0x4d, 0x31, 0x32, 0x8a, 0xda, 0xfb, 0xc7, 0x27, 0xf5, 0x85, - 0x57, 0x27, 0xf5, 0x85, 0xd7, 0x27, 0xf5, 0x85, 0x9f, 0xe2, 0xba, 0x76, 0x1c, 0xd7, 0xb5, 0x57, - 0x71, 0x5d, 0x7b, 0x1d, 0xd7, 0xb5, 0x3f, 0xe2, 0xba, 0xf6, 0xf3, 0x9f, 0xf5, 0x85, 0x6f, 0x5b, - 0x45, 0xff, 0x28, 0xff, 0x13, 0x00, 0x00, 0xff, 0xff, 0x1f, 0xf5, 0x97, 0x1c, 0x6c, 0x0f, 0x00, - 0x00, + // 1973 bytes of a gzipped FileDescriptorProto + 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xec, 0x1a, 0x4d, 0x6f, 0x23, 0x49, + 0x35, 0x1d, 0xe7, 0xc3, 0x7e, 0xce, 0x97, 0x6b, 0x67, 0x89, 0x77, 0x76, 0xd6, 0x8e, 0x5a, 0x2b, + 0x94, 0x91, 0xc0, 0xde, 0xc9, 0xae, 0x76, 0x97, 0x59, 0x21, 0x14, 0x67, 0x67, 0x86, 0x99, 0x9d, + 0x64, 0x42, 0x65, 0x37, 0x91, 0x60, 0x57, 0x9a, 0x72, 0x77, 0xd9, 0x6e, 0x6c, 0x77, 0x37, 0x5d, + 0x6d, 0xcf, 0x04, 0x24, 0x40, 0xe2, 0xb0, 0x57, 0x24, 0x2e, 0x48, 0x9c, 0xf8, 0x0b, 0xdc, 0x91, + 0xe0, 0x36, 0xc7, 0xbd, 0x31, 0x12, 0xc2, 
0x22, 0xe6, 0xc0, 0x89, 0x03, 0x07, 0x38, 0xe4, 0x02, + 0xaa, 0xea, 0xea, 0x4f, 0xb7, 0x27, 0x9d, 0x90, 0x09, 0x97, 0xb9, 0xa5, 0xdf, 0x67, 0xbd, 0x57, + 0xef, 0xab, 0x9e, 0x03, 0xdf, 0xeb, 0x7e, 0xc8, 0x6a, 0x86, 0x55, 0xef, 0x0e, 0x9a, 0xd4, 0x31, + 0xa9, 0x4b, 0x59, 0x7d, 0x48, 0x4d, 0xdd, 0x72, 0xea, 0x12, 0x41, 0x6c, 0xa3, 0x4e, 0xf4, 0xbe, + 0xc1, 0x98, 0x61, 0x99, 0x0e, 0x6d, 0x1b, 0xcc, 0x75, 0x88, 0x6b, 0x58, 0x66, 0x7d, 0x78, 0xab, + 0x49, 0x5d, 0x72, 0xab, 0xde, 0xa6, 0x26, 0x75, 0x88, 0x4b, 0xf5, 0x9a, 0xed, 0x58, 0xae, 0x85, + 0x36, 0x3d, 0xce, 0x1a, 0xb1, 0x8d, 0x5a, 0x2a, 0x67, 0x4d, 0x72, 0x5e, 0xff, 0x66, 0xdb, 0x70, + 0x3b, 0x83, 0x66, 0x4d, 0xb3, 0xfa, 0xf5, 0xb6, 0xd5, 0xb6, 0xea, 0x42, 0x40, 0x73, 0xd0, 0x12, + 0x5f, 0xe2, 0x43, 0xfc, 0xe5, 0x09, 0xbe, 0xfe, 0x6e, 0x86, 0x23, 0x25, 0x4f, 0x73, 0xfd, 0xbd, + 0x90, 0xa9, 0x4f, 0xb4, 0x8e, 0x61, 0x52, 0xe7, 0xb8, 0x6e, 0x77, 0xdb, 0x1c, 0xc0, 0xea, 0x7d, + 0xea, 0x92, 0x34, 0xae, 0xfa, 0x34, 0x2e, 0x67, 0x60, 0xba, 0x46, 0x9f, 0x4e, 0x30, 0xbc, 0x7f, + 0x16, 0x03, 0xd3, 0x3a, 0xb4, 0x4f, 0x92, 0x7c, 0x2a, 0x83, 0xd5, 0xed, 0x81, 0x6e, 0xb8, 0xdb, + 0xa6, 0x69, 0xb9, 0xc2, 0x08, 0xf4, 0x16, 0xe4, 0xba, 0xf4, 0xb8, 0xac, 0x6c, 0x28, 0x9b, 0x85, + 0x46, 0xf1, 0xd9, 0xa8, 0x3a, 0x33, 0x1e, 0x55, 0x73, 0x9f, 0xd0, 0x63, 0xcc, 0xe1, 0x68, 0x1b, + 0x56, 0x87, 0xa4, 0x37, 0xa0, 0x77, 0x9e, 0xda, 0x0e, 0x15, 0x2e, 0x28, 0xcf, 0x0a, 0xd2, 0x75, + 0x49, 0xba, 0x7a, 0x18, 0x47, 0xe3, 0x24, 0xbd, 0xda, 0x83, 0x52, 0xf8, 0x75, 0x44, 0x1c, 0xd3, + 0x30, 0xdb, 0xe8, 0x1b, 0x90, 0x6f, 0x19, 0xb4, 0xa7, 0x63, 0xda, 0x92, 0x02, 0xd7, 0xa4, 0xc0, + 0xfc, 0x5d, 0x09, 0xc7, 0x01, 0x05, 0xba, 0x09, 0x8b, 0x4f, 0x3c, 0xc6, 0x72, 0x4e, 0x10, 0xaf, + 0x4a, 0xe2, 0x45, 0x29, 0x0f, 0xfb, 0x78, 0xb5, 0x05, 0x2b, 0xbb, 0xc4, 0xd5, 0x3a, 0x3b, 0x96, + 0xa9, 0x1b, 0xc2, 0xc2, 0x0d, 0x98, 0x33, 0x49, 0x9f, 0x4a, 0x13, 0x97, 0x24, 0xe7, 0xdc, 0x1e, + 0xe9, 0x53, 0x2c, 0x30, 0x68, 0x0b, 0x80, 0x26, 0xed, 0x43, 0x92, 0x0e, 0x22, 
0xa6, 0x45, 0xa8, + 0xd4, 0x3f, 0xcd, 0x49, 0x45, 0x98, 0x32, 0x6b, 0xe0, 0x68, 0x94, 0xa1, 0xa7, 0x50, 0xe2, 0xe2, + 0x98, 0x4d, 0x34, 0x7a, 0x40, 0x7b, 0x54, 0x73, 0x2d, 0x47, 0x68, 0x2d, 0x6e, 0xbd, 0x5b, 0x0b, + 0xc3, 0x34, 0xb8, 0xb1, 0x9a, 0xdd, 0x6d, 0x73, 0x00, 0xab, 0xf1, 0xc0, 0xa8, 0x0d, 0x6f, 0xd5, + 0x1e, 0x92, 0x26, 0xed, 0xf9, 0xac, 0x8d, 0xd7, 0xc7, 0xa3, 0x6a, 0x69, 0x2f, 0x29, 0x11, 0x4f, + 0x2a, 0x41, 0x16, 0xac, 0x58, 0xcd, 0x1f, 0x52, 0xcd, 0x0d, 0xd4, 0xce, 0x5e, 0x5c, 0x2d, 0x1a, + 0x8f, 0xaa, 0x2b, 0x8f, 0x62, 0xe2, 0x70, 0x42, 0x3c, 0xfa, 0x29, 0x2c, 0x3b, 0xd2, 0x6e, 0x3c, + 0xe8, 0x51, 0x56, 0xce, 0x6d, 0xe4, 0x36, 0x8b, 0x5b, 0xdb, 0xb5, 0xac, 0xd9, 0x58, 0xe3, 0x76, + 0xe9, 0x9c, 0xf7, 0xc8, 0x70, 0x3b, 0x8f, 0x6c, 0xea, 0xa1, 0x59, 0xe3, 0x75, 0xe9, 0xf7, 0x65, + 0x1c, 0x95, 0x8f, 0xe3, 0xea, 0xd0, 0xaf, 0x14, 0xb8, 0x46, 0x9f, 0x6a, 0xbd, 0x81, 0x4e, 0x63, + 0x74, 0xe5, 0xb9, 0xcb, 0x3a, 0xc7, 0x0d, 0x79, 0x8e, 0x6b, 0x77, 0x52, 0xd4, 0xe0, 0x54, 0xe5, + 0xe8, 0x63, 0x28, 0xf6, 0x79, 0x48, 0xec, 0x5b, 0x3d, 0x43, 0x3b, 0x2e, 0x2f, 0x8a, 0x40, 0x52, + 0xc7, 0xa3, 0x6a, 0x71, 0x37, 0x04, 0x9f, 0x8e, 0xaa, 0xab, 0x91, 0xcf, 0x4f, 0x8f, 0x6d, 0x8a, + 0xa3, 0x6c, 0xea, 0x1f, 0xf3, 0xb0, 0xba, 0x3b, 0xe0, 0xe9, 0x69, 0xb6, 0x8f, 0x68, 0xb3, 0x63, + 0x59, 0xdd, 0x0c, 0x31, 0xfc, 0x04, 0x96, 0xb4, 0x9e, 0x41, 0x4d, 0x77, 0xc7, 0x32, 0x5b, 0x46, + 0x5b, 0x06, 0xc0, 0xb7, 0xb3, 0x3b, 0x42, 0xaa, 0xda, 0x89, 0x08, 0x69, 0x5c, 0x93, 0x8a, 0x96, + 0xa2, 0x50, 0x1c, 0x53, 0x84, 0x3e, 0x87, 0x79, 0x27, 0x12, 0x02, 0x1f, 0x64, 0xd1, 0x58, 0x4b, + 0x71, 0xf8, 0xb2, 0xd4, 0x35, 0xef, 0x79, 0xd8, 0x13, 0x8a, 0x1e, 0xc2, 0x72, 0x8b, 0x18, 0xbd, + 0x81, 0x43, 0xa5, 0x53, 0xe7, 0x84, 0x07, 0xbe, 0xce, 0x23, 0xe4, 0x6e, 0x14, 0x71, 0x3a, 0xaa, + 0x96, 0x62, 0x00, 0xe1, 0xd8, 0x38, 0x73, 0xf2, 0x82, 0x0a, 0x17, 0xba, 0xa0, 0xf4, 0x3c, 0x9f, + 0xff, 0xff, 0xe4, 0x79, 0xf1, 0xe5, 0xe6, 0xf9, 0xc7, 0x50, 0x64, 0x86, 0x4e, 0xef, 0xb4, 0x5a, + 0x54, 0x73, 0x59, 
0x79, 0x21, 0x74, 0xd8, 0x41, 0x08, 0xe6, 0x0e, 0x0b, 0x3f, 0x77, 0x7a, 0x84, + 0x31, 0x1c, 0x65, 0x43, 0xb7, 0x61, 0x85, 0x77, 0x25, 0x6b, 0xe0, 0x1e, 0x50, 0xcd, 0x32, 0x75, + 0x26, 0x52, 0x63, 0xde, 0x3b, 0xc1, 0xa7, 0x31, 0x0c, 0x4e, 0x50, 0xa2, 0xcf, 0x60, 0x3d, 0x88, + 0x22, 0x4c, 0x87, 0x06, 0x7d, 0x72, 0x48, 0x1d, 0xfe, 0xc1, 0xca, 0xf9, 0x8d, 0xdc, 0x66, 0xa1, + 0xf1, 0xe6, 0x78, 0x54, 0x5d, 0xdf, 0x4e, 0x27, 0xc1, 0xd3, 0x78, 0xd1, 0x63, 0x40, 0x0e, 0x35, + 0xcc, 0xa1, 0xa5, 0x89, 0xf0, 0x93, 0x01, 0x01, 0xc2, 0xbe, 0x77, 0xc6, 0xa3, 0x2a, 0xc2, 0x13, + 0xd8, 0xd3, 0x51, 0xf5, 0x6b, 0x93, 0x50, 0x11, 0x1e, 0x29, 0xb2, 0xd0, 0x4f, 0x60, 0xb5, 0x1f, + 0x6b, 0x44, 0xac, 0xbc, 0x24, 0x32, 0xe4, 0xc3, 0xec, 0x39, 0x19, 0xef, 0x64, 0x61, 0xcf, 0x8d, + 0xc3, 0x19, 0x4e, 0x6a, 0x52, 0xff, 0xa2, 0xc0, 0x8d, 0x44, 0x0d, 0xf1, 0xd2, 0x75, 0xe0, 0x69, + 0x40, 0x8f, 0x21, 0xcf, 0xa3, 0x42, 0x27, 0x2e, 0x91, 0x2d, 0xea, 0x9d, 0x6c, 0x31, 0xe4, 0x05, + 0xcc, 0x2e, 0x75, 0x49, 0xd8, 0x22, 0x43, 0x18, 0x0e, 0xa4, 0xa2, 0x1f, 0x40, 0x5e, 0x6a, 0x66, + 0xe5, 0x59, 0x61, 0xf8, 0xb7, 0xce, 0x61, 0x78, 0xfc, 0xec, 0x8d, 0x39, 0xae, 0x0a, 0x07, 0x02, + 0xd5, 0x7f, 0x28, 0xb0, 0xf1, 0x22, 0xfb, 0x1e, 0x1a, 0xcc, 0x45, 0x9f, 0x4f, 0xd8, 0x58, 0xcb, + 0x98, 0x27, 0x06, 0xf3, 0x2c, 0x0c, 0x66, 0x12, 0x1f, 0x12, 0xb1, 0xaf, 0x0b, 0xf3, 0x86, 0x4b, + 0xfb, 0xbe, 0x71, 0x77, 0x2f, 0x6c, 0x5c, 0xec, 0xe0, 0x61, 0x19, 0xbc, 0xcf, 0x85, 0x63, 0x4f, + 0x87, 0xfa, 0x5c, 0x81, 0xf5, 0x29, 0x9d, 0x0a, 0x7d, 0x10, 0xf6, 0x62, 0x51, 0x44, 0xca, 0x8a, + 0xc8, 0x8b, 0x52, 0xb4, 0x89, 0x0a, 0x04, 0x8e, 0xd3, 0xa1, 0x5f, 0x28, 0x80, 0x9c, 0x09, 0x79, + 0xb2, 0x73, 0x5c, 0xb8, 0x8e, 0x5f, 0x97, 0x06, 0xa0, 0x49, 0x1c, 0x4e, 0x51, 0xa7, 0x12, 0x28, + 0xec, 0x13, 0x87, 0xf4, 0x3f, 0x31, 0x4c, 0x9d, 0x4f, 0x62, 0xc4, 0x36, 0x64, 0x96, 0xca, 0x6e, + 0x17, 0x84, 0xd9, 0xf6, 0xfe, 0x7d, 0x89, 0xc1, 0x11, 0x2a, 0xde, 0x1b, 0xbb, 0x86, 0xa9, 0xcb, + 0xb9, 0x2d, 0xe8, 0x8d, 0x5c, 0x1e, 0x16, 0x18, 0xf5, 
0x77, 0xb3, 0x90, 0x17, 0x3a, 0xf8, 0x2c, + 0x79, 0x76, 0x2b, 0xad, 0x43, 0x21, 0x28, 0xbd, 0x52, 0x6a, 0x49, 0x92, 0x15, 0x82, 0x32, 0x8d, + 0x43, 0x1a, 0xf4, 0x05, 0xe4, 0x99, 0x5f, 0x90, 0x73, 0x17, 0x2f, 0xc8, 0x4b, 0x3c, 0xd2, 0x82, + 0x52, 0x1c, 0x88, 0x44, 0x2e, 0xac, 0xdb, 0xfc, 0xf4, 0xd4, 0xa5, 0xce, 0x9e, 0xe5, 0xde, 0xb5, + 0x06, 0xa6, 0xbe, 0xad, 0x71, 0xef, 0xc9, 0x6e, 0x78, 0x9b, 0x97, 0xc0, 0xfd, 0x74, 0x92, 0xd3, + 0x51, 0xf5, 0xcd, 0x29, 0x28, 0x51, 0xba, 0xa6, 0x89, 0x56, 0x7f, 0xab, 0xc0, 0xda, 0x01, 0x75, + 0x86, 0x86, 0x46, 0x31, 0x6d, 0x51, 0x87, 0x9a, 0x5a, 0xc2, 0x35, 0x4a, 0x06, 0xd7, 0xf8, 0xde, + 0x9e, 0x9d, 0xea, 0xed, 0x1b, 0x30, 0x67, 0x13, 0xb7, 0x23, 0x07, 0xfb, 0x3c, 0xc7, 0xee, 0x13, + 0xb7, 0x83, 0x05, 0x54, 0x60, 0x2d, 0xc7, 0x15, 0x86, 0xce, 0x4b, 0xac, 0xe5, 0xb8, 0x58, 0x40, + 0xd5, 0x5f, 0x2b, 0xb0, 0xc4, 0xad, 0xd8, 0xe9, 0x50, 0xad, 0xcb, 0x9f, 0x15, 0x5f, 0x2a, 0x80, + 0x68, 0xf2, 0xb1, 0xe1, 0x65, 0x44, 0x71, 0xeb, 0xa3, 0xec, 0x29, 0x3a, 0xf1, 0x60, 0x09, 0xc3, + 0x7a, 0x02, 0xc5, 0x70, 0x8a, 0x4a, 0xf5, 0xcf, 0xb3, 0xf0, 0xc6, 0x21, 0xe9, 0x19, 0xba, 0x48, + 0xf5, 0xa0, 0x3f, 0xc9, 0xe6, 0xf0, 0xf2, 0xcb, 0xaf, 0x01, 0x73, 0xcc, 0xa6, 0x9a, 0xcc, 0xe6, + 0x7b, 0xd9, 0x4d, 0x9f, 0x7a, 0xe8, 0x03, 0x9b, 0x6a, 0xe1, 0x0d, 0xf2, 0x2f, 0x2c, 0x54, 0xa0, + 0x1f, 0xc1, 0x02, 0x73, 0x89, 0x3b, 0x60, 0x32, 0xf8, 0xef, 0x5f, 0x86, 0x32, 0x21, 0xb0, 0xb1, + 0x22, 0xd5, 0x2d, 0x78, 0xdf, 0x58, 0x2a, 0x52, 0xff, 0xad, 0xc0, 0xc6, 0x54, 0xde, 0x86, 0x61, + 0xea, 0x3c, 0x18, 0x5e, 0xbe, 0x93, 0xed, 0x98, 0x93, 0xf7, 0x2e, 0xc1, 0x6e, 0x79, 0xf6, 0x69, + 0xbe, 0x56, 0xff, 0xa5, 0xc0, 0xdb, 0x67, 0x31, 0x5f, 0x41, 0xf3, 0xb3, 0xe2, 0xcd, 0xef, 0xc1, + 0xe5, 0x59, 0x3e, 0xa5, 0x01, 0x7e, 0x99, 0x3b, 0xdb, 0x6e, 0xee, 0x26, 0xde, 0x41, 0x6c, 0x01, + 0xdc, 0x0b, 0x8b, 0x7c, 0x70, 0x89, 0xfb, 0x01, 0x06, 0x47, 0xa8, 0xb8, 0xaf, 0x6c, 0xd9, 0x1e, + 0xe4, 0x55, 0x6e, 0x65, 0x37, 0xc8, 0x6f, 0x2c, 0x5e, 0xf9, 0xf6, 0xbf, 0x70, 0x20, 0x11, 
0xb9, + 0xb0, 0xd2, 0x8f, 0x2d, 0x0a, 0x64, 0x9a, 0x9c, 0x77, 0x0e, 0x0c, 0xf8, 0xbd, 0xb9, 0x39, 0x0e, + 0xc3, 0x09, 0x1d, 0xe8, 0x08, 0x4a, 0x43, 0xe9, 0x2f, 0xcb, 0xf4, 0x4a, 0xba, 0xf7, 0x3a, 0x2e, + 0x34, 0x6e, 0xf2, 0xf7, 0xc6, 0x61, 0x12, 0x79, 0x3a, 0xaa, 0xae, 0x25, 0x81, 0x78, 0x52, 0x86, + 0xfa, 0x77, 0x05, 0xde, 0x9a, 0x7a, 0x13, 0x57, 0x10, 0x7a, 0x9d, 0x78, 0xe8, 0xed, 0x5c, 0x46, + 0xe8, 0xa5, 0xc7, 0xdc, 0x6f, 0x16, 0x5e, 0x60, 0xa9, 0x08, 0xb6, 0xc7, 0x50, 0xb0, 0xfd, 0xd9, + 0x25, 0x65, 0xd3, 0x93, 0x25, 0x72, 0x38, 0x6b, 0x63, 0x99, 0xf7, 0xcf, 0xe0, 0x13, 0x87, 0x42, + 0xd1, 0x8f, 0x61, 0xcd, 0x9f, 0xed, 0x39, 0xbf, 0x61, 0xba, 0xfe, 0x80, 0x76, 0xf1, 0xf0, 0xb9, + 0x36, 0x1e, 0x55, 0xd7, 0x76, 0x13, 0x52, 0xf1, 0x84, 0x1e, 0xd4, 0x85, 0x62, 0x78, 0xfd, 0xfe, + 0xfb, 0xfe, 0xbd, 0xf3, 0xfb, 0xdb, 0x32, 0x1b, 0xaf, 0x49, 0x07, 0x17, 0x43, 0x18, 0xc3, 0x51, + 0xe9, 0x97, 0xfc, 0xd0, 0xff, 0x19, 0xac, 0x91, 0xf8, 0xa2, 0x93, 0x95, 0xe7, 0xcf, 0xfb, 0x08, + 0x49, 0xac, 0x4a, 0x1b, 0x65, 0x69, 0xc4, 0x5a, 0x02, 0xc1, 0xf0, 0x84, 0xb2, 0xb4, 0xd7, 0xdf, + 0xc2, 0x55, 0xbd, 0xfe, 0x90, 0x06, 0x85, 0x21, 0x71, 0x0c, 0xd2, 0xec, 0x51, 0xfe, 0xd4, 0xce, + 0x9d, 0xaf, 0xa0, 0x1d, 0x4a, 0xd6, 0x70, 0xb2, 0xf3, 0x21, 0x0c, 0x87, 0x72, 0xd5, 0x3f, 0xcc, + 0x42, 0xf5, 0x8c, 0xf6, 0x8d, 0x1e, 0x00, 0xb2, 0x9a, 0x8c, 0x3a, 0x43, 0xaa, 0xdf, 0xf3, 0x56, + 0xd1, 0xfe, 0x58, 0x9f, 0x0b, 0x07, 0xaa, 0x47, 0x13, 0x14, 0x38, 0x85, 0x0b, 0xf5, 0x60, 0xc9, + 0x8d, 0x8c, 0x7a, 0x32, 0x0b, 0xde, 0xcf, 0x6e, 0x57, 0x74, 0x50, 0x6c, 0xac, 0x8d, 0x47, 0xd5, + 0xd8, 0xe8, 0x88, 0x63, 0xd2, 0x91, 0x06, 0xa0, 0x85, 0x57, 0xe7, 0x85, 0x7e, 0x3d, 0x5b, 0x15, + 0x0b, 0x6f, 0x2c, 0xe8, 0x3b, 0x91, 0xcb, 0x8a, 0x88, 0x55, 0x4f, 0x16, 0xa1, 0x14, 0xba, 0xf0, + 0xd5, 0xae, 0xef, 0xd5, 0xae, 0xef, 0x85, 0xbb, 0x3e, 0x78, 0xb5, 0xeb, 0xbb, 0xd0, 0xae, 0x2f, + 0xa5, 0x16, 0x17, 0xaf, 0x6c, 0x13, 0x77, 0xa2, 0x40, 0x65, 0x22, 0xc7, 0xaf, 0x7a, 0x17, 0xf7, + 0xc5, 0xc4, 0x2e, 0xee, 0xa3, 
0x8b, 0x8c, 0x4d, 0xd3, 0xb6, 0x71, 0xff, 0x54, 0x40, 0x7d, 0xb1, + 0x8d, 0x57, 0x30, 0x17, 0xf6, 0xe3, 0x73, 0xe1, 0x77, 0xff, 0x07, 0x03, 0xb3, 0x6c, 0xe4, 0xfe, + 0xa3, 0x00, 0x84, 0xc3, 0x0c, 0x7a, 0x1b, 0x22, 0x3f, 0x14, 0xca, 0xd2, 0xed, 0xb9, 0x29, 0x02, + 0x47, 0x37, 0x61, 0xb1, 0x4f, 0x19, 0x23, 0x6d, 0x7f, 0x21, 0x12, 0xfc, 0x8e, 0xb9, 0xeb, 0x81, + 0xb1, 0x8f, 0x47, 0x47, 0xb0, 0xe0, 0x50, 0xc2, 0x2c, 0x53, 0x2e, 0x46, 0xbe, 0xc3, 0x5f, 0xc1, + 0x58, 0x40, 0x4e, 0x47, 0xd5, 0x5b, 0x59, 0x7e, 0x67, 0xae, 0xc9, 0x47, 0xb3, 0x60, 0xc2, 0x52, + 0x1c, 0xba, 0x07, 0x25, 0xa9, 0x23, 0x72, 0x60, 0xaf, 0xd2, 0xbe, 0x21, 0x4f, 0x53, 0xda, 0x4d, + 0x12, 0xe0, 0x49, 0x1e, 0xf5, 0x01, 0xe4, 0xfd, 0xc1, 0x00, 0x95, 0x61, 0x2e, 0xf2, 0xde, 0xf2, + 0x0c, 0x17, 0x90, 0x84, 0x63, 0x66, 0xd3, 0x1d, 0xa3, 0xfe, 0x5e, 0x81, 0xd7, 0x52, 0x9a, 0x12, + 0x7a, 0x03, 0x72, 0x03, 0xa7, 0x27, 0x5d, 0xb0, 0x38, 0x1e, 0x55, 0x73, 0x9f, 0xe1, 0x87, 0x98, + 0xc3, 0x10, 0x81, 0x45, 0xe6, 0xad, 0xa7, 0x64, 0x30, 0xdd, 0xce, 0x7e, 0xe3, 0xc9, 0xbd, 0x56, + 0xa3, 0xc8, 0xef, 0xc0, 0x87, 0xfa, 0x72, 0xd1, 0x26, 0xe4, 0x35, 0xd2, 0x18, 0x98, 0x7a, 0xcf, + 0xbb, 0xaf, 0x25, 0xef, 0x8d, 0xb7, 0xb3, 0xed, 0xc1, 0x70, 0x80, 0x6d, 0xec, 0x3d, 0x3b, 0xa9, + 0xcc, 0x7c, 0x75, 0x52, 0x99, 0x79, 0x7e, 0x52, 0x99, 0xf9, 0xf9, 0xb8, 0xa2, 0x3c, 0x1b, 0x57, + 0x94, 0xaf, 0xc6, 0x15, 0xe5, 0xf9, 0xb8, 0xa2, 0xfc, 0x75, 0x5c, 0x51, 0x7e, 0xf9, 0xb7, 0xca, + 0xcc, 0xf7, 0x37, 0xb3, 0xfe, 0x97, 0xc3, 0x7f, 0x03, 0x00, 0x00, 0xff, 0xff, 0x71, 0x54, 0x54, + 0xe6, 0x29, 0x21, 0x00, 0x00, +} + +func (m *AuditAnnotation) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *AuditAnnotation) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *AuditAnnotation) MarshalToSizedBuffer(dAtA []byte) 
(int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + i -= len(m.ValueExpression) + copy(dAtA[i:], m.ValueExpression) + i = encodeVarintGenerated(dAtA, i, uint64(len(m.ValueExpression))) + i-- + dAtA[i] = 0x12 + i -= len(m.Key) + copy(dAtA[i:], m.Key) + i = encodeVarintGenerated(dAtA, i, uint64(len(m.Key))) + i-- + dAtA[i] = 0xa + return len(dAtA) - i, nil +} + +func (m *ExpressionWarning) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *ExpressionWarning) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *ExpressionWarning) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + i -= len(m.Warning) + copy(dAtA[i:], m.Warning) + i = encodeVarintGenerated(dAtA, i, uint64(len(m.Warning))) + i-- + dAtA[i] = 0x1a + i -= len(m.FieldRef) + copy(dAtA[i:], m.FieldRef) + i = encodeVarintGenerated(dAtA, i, uint64(len(m.FieldRef))) + i-- + dAtA[i] = 0x12 + return len(dAtA) - i, nil } func (m *MatchCondition) Marshal() (dAtA []byte, err error) { @@ -416,6 +1005,88 @@ func (m *MatchCondition) MarshalToSizedBuffer(dAtA []byte) (int, error) { return len(dAtA) - i, nil } +func (m *MatchResources) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *MatchResources) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *MatchResources) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if m.MatchPolicy != nil { + i -= len(*m.MatchPolicy) + copy(dAtA[i:], *m.MatchPolicy) + i = encodeVarintGenerated(dAtA, i, uint64(len(*m.MatchPolicy))) + i-- + dAtA[i] = 
0x3a + } + if len(m.ExcludeResourceRules) > 0 { + for iNdEx := len(m.ExcludeResourceRules) - 1; iNdEx >= 0; iNdEx-- { + { + size, err := m.ExcludeResourceRules[iNdEx].MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintGenerated(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x22 + } + } + if len(m.ResourceRules) > 0 { + for iNdEx := len(m.ResourceRules) - 1; iNdEx >= 0; iNdEx-- { + { + size, err := m.ResourceRules[iNdEx].MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintGenerated(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x1a + } + } + if m.ObjectSelector != nil { + { + size, err := m.ObjectSelector.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintGenerated(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x12 + } + if m.NamespaceSelector != nil { + { + size, err := m.NamespaceSelector.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintGenerated(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0xa + } + return len(dAtA) - i, nil +} + func (m *MutatingWebhook) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) @@ -642,7 +1313,7 @@ func (m *MutatingWebhookConfigurationList) MarshalToSizedBuffer(dAtA []byte) (in return len(dAtA) - i, nil } -func (m *ServiceReference) Marshal() (dAtA []byte, err error) { +func (m *NamedRuleWithOperations) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -652,42 +1323,72 @@ func (m *ServiceReference) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *ServiceReference) MarshalTo(dAtA []byte) (int, error) { +func (m *NamedRuleWithOperations) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *ServiceReference) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m 
*NamedRuleWithOperations) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int _ = l - if m.Port != nil { - i = encodeVarintGenerated(dAtA, i, uint64(*m.Port)) - i-- - dAtA[i] = 0x20 + { + size, err := m.RuleWithOperations.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintGenerated(dAtA, i, uint64(size)) } - if m.Path != nil { - i -= len(*m.Path) - copy(dAtA[i:], *m.Path) - i = encodeVarintGenerated(dAtA, i, uint64(len(*m.Path))) - i-- - dAtA[i] = 0x1a + i-- + dAtA[i] = 0x12 + if len(m.ResourceNames) > 0 { + for iNdEx := len(m.ResourceNames) - 1; iNdEx >= 0; iNdEx-- { + i -= len(m.ResourceNames[iNdEx]) + copy(dAtA[i:], m.ResourceNames[iNdEx]) + i = encodeVarintGenerated(dAtA, i, uint64(len(m.ResourceNames[iNdEx]))) + i-- + dAtA[i] = 0xa + } } - i -= len(m.Name) - copy(dAtA[i:], m.Name) - i = encodeVarintGenerated(dAtA, i, uint64(len(m.Name))) + return len(dAtA) - i, nil +} + +func (m *ParamKind) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *ParamKind) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *ParamKind) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + i -= len(m.Kind) + copy(dAtA[i:], m.Kind) + i = encodeVarintGenerated(dAtA, i, uint64(len(m.Kind))) i-- dAtA[i] = 0x12 - i -= len(m.Namespace) - copy(dAtA[i:], m.Namespace) - i = encodeVarintGenerated(dAtA, i, uint64(len(m.Namespace))) + i -= len(m.APIVersion) + copy(dAtA[i:], m.APIVersion) + i = encodeVarintGenerated(dAtA, i, uint64(len(m.APIVersion))) i-- dAtA[i] = 0xa return len(dAtA) - i, nil } -func (m *ValidatingWebhook) Marshal() (dAtA []byte, err error) { +func (m *ParamRef) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) 
n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -697,33 +1398,26 @@ func (m *ValidatingWebhook) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *ValidatingWebhook) MarshalTo(dAtA []byte) (int, error) { +func (m *ParamRef) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *ValidatingWebhook) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *ParamRef) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int _ = l - if len(m.MatchConditions) > 0 { - for iNdEx := len(m.MatchConditions) - 1; iNdEx >= 0; iNdEx-- { - { - size, err := m.MatchConditions[iNdEx].MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintGenerated(dAtA, i, uint64(size)) - } - i-- - dAtA[i] = 0x5a - } + if m.ParameterNotFoundAction != nil { + i -= len(*m.ParameterNotFoundAction) + copy(dAtA[i:], *m.ParameterNotFoundAction) + i = encodeVarintGenerated(dAtA, i, uint64(len(*m.ParameterNotFoundAction))) + i-- + dAtA[i] = 0x22 } - if m.ObjectSelector != nil { + if m.Selector != nil { { - size, err := m.ObjectSelector.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.Selector.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -731,59 +1425,90 @@ func (m *ValidatingWebhook) MarshalToSizedBuffer(dAtA []byte) (int, error) { i = encodeVarintGenerated(dAtA, i, uint64(size)) } i-- - dAtA[i] = 0x52 - } - if m.MatchPolicy != nil { - i -= len(*m.MatchPolicy) - copy(dAtA[i:], *m.MatchPolicy) - i = encodeVarintGenerated(dAtA, i, uint64(len(*m.MatchPolicy))) - i-- - dAtA[i] = 0x4a - } - if len(m.AdmissionReviewVersions) > 0 { - for iNdEx := len(m.AdmissionReviewVersions) - 1; iNdEx >= 0; iNdEx-- { - i -= len(m.AdmissionReviewVersions[iNdEx]) - copy(dAtA[i:], m.AdmissionReviewVersions[iNdEx]) - i = encodeVarintGenerated(dAtA, i, uint64(len(m.AdmissionReviewVersions[iNdEx]))) - i-- - dAtA[i] = 0x42 - } - } - if m.TimeoutSeconds != nil { - i = 
encodeVarintGenerated(dAtA, i, uint64(*m.TimeoutSeconds)) - i-- - dAtA[i] = 0x38 + dAtA[i] = 0x1a } - if m.SideEffects != nil { - i -= len(*m.SideEffects) - copy(dAtA[i:], *m.SideEffects) - i = encodeVarintGenerated(dAtA, i, uint64(len(*m.SideEffects))) - i-- - dAtA[i] = 0x32 + i -= len(m.Namespace) + copy(dAtA[i:], m.Namespace) + i = encodeVarintGenerated(dAtA, i, uint64(len(m.Namespace))) + i-- + dAtA[i] = 0x12 + i -= len(m.Name) + copy(dAtA[i:], m.Name) + i = encodeVarintGenerated(dAtA, i, uint64(len(m.Name))) + i-- + dAtA[i] = 0xa + return len(dAtA) - i, nil +} + +func (m *ServiceReference) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err } - if m.NamespaceSelector != nil { - { - size, err := m.NamespaceSelector.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintGenerated(dAtA, i, uint64(size)) - } + return dAtA[:n], nil +} + +func (m *ServiceReference) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *ServiceReference) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if m.Port != nil { + i = encodeVarintGenerated(dAtA, i, uint64(*m.Port)) i-- - dAtA[i] = 0x2a + dAtA[i] = 0x20 } - if m.FailurePolicy != nil { - i -= len(*m.FailurePolicy) - copy(dAtA[i:], *m.FailurePolicy) - i = encodeVarintGenerated(dAtA, i, uint64(len(*m.FailurePolicy))) + if m.Path != nil { + i -= len(*m.Path) + copy(dAtA[i:], *m.Path) + i = encodeVarintGenerated(dAtA, i, uint64(len(*m.Path))) i-- - dAtA[i] = 0x22 + dAtA[i] = 0x1a } - if len(m.Rules) > 0 { - for iNdEx := len(m.Rules) - 1; iNdEx >= 0; iNdEx-- { + i -= len(m.Name) + copy(dAtA[i:], m.Name) + i = encodeVarintGenerated(dAtA, i, uint64(len(m.Name))) + i-- + dAtA[i] = 0x12 + i -= len(m.Namespace) + copy(dAtA[i:], m.Namespace) + i = 
encodeVarintGenerated(dAtA, i, uint64(len(m.Namespace))) + i-- + dAtA[i] = 0xa + return len(dAtA) - i, nil +} + +func (m *TypeChecking) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *TypeChecking) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *TypeChecking) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if len(m.ExpressionWarnings) > 0 { + for iNdEx := len(m.ExpressionWarnings) - 1; iNdEx >= 0; iNdEx-- { { - size, err := m.Rules[iNdEx].MarshalToSizedBuffer(dAtA[:i]) + size, err := m.ExpressionWarnings[iNdEx].MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -791,11 +1516,44 @@ func (m *ValidatingWebhook) MarshalToSizedBuffer(dAtA []byte) (int, error) { i = encodeVarintGenerated(dAtA, i, uint64(size)) } i-- - dAtA[i] = 0x1a + dAtA[i] = 0xa + } + } + return len(dAtA) - i, nil +} + +func (m *ValidatingAdmissionPolicy) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *ValidatingAdmissionPolicy) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *ValidatingAdmissionPolicy) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + { + size, err := m.Status.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err } + i -= size + i = encodeVarintGenerated(dAtA, i, uint64(size)) } + i-- + dAtA[i] = 0x1a { - size, err := m.ClientConfig.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.Spec.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -804,15 +1562,20 @@ func (m *ValidatingWebhook) 
MarshalToSizedBuffer(dAtA []byte) (int, error) { } i-- dAtA[i] = 0x12 - i -= len(m.Name) - copy(dAtA[i:], m.Name) - i = encodeVarintGenerated(dAtA, i, uint64(len(m.Name))) + { + size, err := m.ObjectMeta.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintGenerated(dAtA, i, uint64(size)) + } i-- dAtA[i] = 0xa return len(dAtA) - i, nil } -func (m *ValidatingWebhookConfiguration) Marshal() (dAtA []byte, err error) { +func (m *ValidatingAdmissionPolicyBinding) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -822,30 +1585,26 @@ func (m *ValidatingWebhookConfiguration) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *ValidatingWebhookConfiguration) MarshalTo(dAtA []byte) (int, error) { +func (m *ValidatingAdmissionPolicyBinding) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *ValidatingWebhookConfiguration) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *ValidatingAdmissionPolicyBinding) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int _ = l - if len(m.Webhooks) > 0 { - for iNdEx := len(m.Webhooks) - 1; iNdEx >= 0; iNdEx-- { - { - size, err := m.Webhooks[iNdEx].MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintGenerated(dAtA, i, uint64(size)) - } - i-- - dAtA[i] = 0x12 + { + size, err := m.Spec.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err } + i -= size + i = encodeVarintGenerated(dAtA, i, uint64(size)) } + i-- + dAtA[i] = 0x12 { size, err := m.ObjectMeta.MarshalToSizedBuffer(dAtA[:i]) if err != nil { @@ -859,7 +1618,7 @@ func (m *ValidatingWebhookConfiguration) MarshalToSizedBuffer(dAtA []byte) (int, return len(dAtA) - i, nil } -func (m *ValidatingWebhookConfigurationList) Marshal() (dAtA []byte, err error) { +func (m 
*ValidatingAdmissionPolicyBindingList) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -869,12 +1628,12 @@ func (m *ValidatingWebhookConfigurationList) Marshal() (dAtA []byte, err error) return dAtA[:n], nil } -func (m *ValidatingWebhookConfigurationList) MarshalTo(dAtA []byte) (int, error) { +func (m *ValidatingAdmissionPolicyBindingList) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *ValidatingWebhookConfigurationList) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *ValidatingAdmissionPolicyBindingList) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int @@ -906,7 +1665,7 @@ func (m *ValidatingWebhookConfigurationList) MarshalToSizedBuffer(dAtA []byte) ( return len(dAtA) - i, nil } -func (m *WebhookClientConfig) Marshal() (dAtA []byte, err error) { +func (m *ValidatingAdmissionPolicyBindingSpec) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalToSizedBuffer(dAtA[:size]) @@ -916,33 +1675,40 @@ func (m *WebhookClientConfig) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *WebhookClientConfig) MarshalTo(dAtA []byte) (int, error) { +func (m *ValidatingAdmissionPolicyBindingSpec) MarshalTo(dAtA []byte) (int, error) { size := m.Size() return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *WebhookClientConfig) MarshalToSizedBuffer(dAtA []byte) (int, error) { +func (m *ValidatingAdmissionPolicyBindingSpec) MarshalToSizedBuffer(dAtA []byte) (int, error) { i := len(dAtA) _ = i var l int _ = l - if m.URL != nil { - i -= len(*m.URL) - copy(dAtA[i:], *m.URL) - i = encodeVarintGenerated(dAtA, i, uint64(len(*m.URL))) - i-- - dAtA[i] = 0x1a + if len(m.ValidationActions) > 0 { + for iNdEx := len(m.ValidationActions) - 1; iNdEx >= 0; iNdEx-- { + i -= len(m.ValidationActions[iNdEx]) + copy(dAtA[i:], m.ValidationActions[iNdEx]) + i = 
encodeVarintGenerated(dAtA, i, uint64(len(m.ValidationActions[iNdEx]))) + i-- + dAtA[i] = 0x22 + } } - if m.CABundle != nil { - i -= len(m.CABundle) - copy(dAtA[i:], m.CABundle) - i = encodeVarintGenerated(dAtA, i, uint64(len(m.CABundle))) + if m.MatchResources != nil { + { + size, err := m.MatchResources.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintGenerated(dAtA, i, uint64(size)) + } i-- - dAtA[i] = 0x12 + dAtA[i] = 0x1a } - if m.Service != nil { + if m.ParamRef != nil { { - size, err := m.Service.MarshalToSizedBuffer(dAtA[:i]) + size, err := m.ParamRef.MarshalToSizedBuffer(dAtA[:i]) if err != nil { return 0, err } @@ -950,432 +1716,3359 @@ func (m *WebhookClientConfig) MarshalToSizedBuffer(dAtA []byte) (int, error) { i = encodeVarintGenerated(dAtA, i, uint64(size)) } i-- - dAtA[i] = 0xa + dAtA[i] = 0x12 } + i -= len(m.PolicyName) + copy(dAtA[i:], m.PolicyName) + i = encodeVarintGenerated(dAtA, i, uint64(len(m.PolicyName))) + i-- + dAtA[i] = 0xa return len(dAtA) - i, nil } -func encodeVarintGenerated(dAtA []byte, offset int, v uint64) int { - offset -= sovGenerated(v) - base := offset - for v >= 1<<7 { - dAtA[offset] = uint8(v&0x7f | 0x80) - v >>= 7 - offset++ +func (m *ValidatingAdmissionPolicyList) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err } - dAtA[offset] = uint8(v) - return base + return dAtA[:n], nil } -func (m *MatchCondition) Size() (n int) { - if m == nil { - return 0 - } - var l int - _ = l - l = len(m.Name) - n += 1 + l + sovGenerated(uint64(l)) - l = len(m.Expression) - n += 1 + l + sovGenerated(uint64(l)) - return n + +func (m *ValidatingAdmissionPolicyList) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *MutatingWebhook) Size() (n int) { - if m == nil { - return 0 - } +func (m *ValidatingAdmissionPolicyList) 
MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i var l int _ = l - l = len(m.Name) - n += 1 + l + sovGenerated(uint64(l)) - l = m.ClientConfig.Size() - n += 1 + l + sovGenerated(uint64(l)) - if len(m.Rules) > 0 { - for _, e := range m.Rules { - l = e.Size() - n += 1 + l + sovGenerated(uint64(l)) + if len(m.Items) > 0 { + for iNdEx := len(m.Items) - 1; iNdEx >= 0; iNdEx-- { + { + size, err := m.Items[iNdEx].MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintGenerated(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x12 } } - if m.FailurePolicy != nil { - l = len(*m.FailurePolicy) - n += 1 + l + sovGenerated(uint64(l)) + { + size, err := m.ListMeta.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintGenerated(dAtA, i, uint64(size)) } - if m.NamespaceSelector != nil { - l = m.NamespaceSelector.Size() - n += 1 + l + sovGenerated(uint64(l)) - } - if m.SideEffects != nil { - l = len(*m.SideEffects) - n += 1 + l + sovGenerated(uint64(l)) - } - if m.TimeoutSeconds != nil { - n += 1 + sovGenerated(uint64(*m.TimeoutSeconds)) - } - if len(m.AdmissionReviewVersions) > 0 { - for _, s := range m.AdmissionReviewVersions { - l = len(s) - n += 1 + l + sovGenerated(uint64(l)) - } - } - if m.MatchPolicy != nil { - l = len(*m.MatchPolicy) - n += 1 + l + sovGenerated(uint64(l)) - } - if m.ReinvocationPolicy != nil { - l = len(*m.ReinvocationPolicy) - n += 1 + l + sovGenerated(uint64(l)) - } - if m.ObjectSelector != nil { - l = m.ObjectSelector.Size() - n += 1 + l + sovGenerated(uint64(l)) - } - if len(m.MatchConditions) > 0 { - for _, e := range m.MatchConditions { - l = e.Size() - n += 1 + l + sovGenerated(uint64(l)) - } - } - return n + i-- + dAtA[i] = 0xa + return len(dAtA) - i, nil } -func (m *MutatingWebhookConfiguration) Size() (n int) { - if m == nil { - return 0 - } - var l int - _ = l - l = m.ObjectMeta.Size() - n += 1 + l + sovGenerated(uint64(l)) - if 
len(m.Webhooks) > 0 { - for _, e := range m.Webhooks { - l = e.Size() - n += 1 + l + sovGenerated(uint64(l)) - } +func (m *ValidatingAdmissionPolicySpec) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err } - return n + return dAtA[:n], nil } -func (m *MutatingWebhookConfigurationList) Size() (n int) { - if m == nil { - return 0 - } - var l int - _ = l - l = m.ListMeta.Size() - n += 1 + l + sovGenerated(uint64(l)) - if len(m.Items) > 0 { - for _, e := range m.Items { - l = e.Size() - n += 1 + l + sovGenerated(uint64(l)) - } - } - return n +func (m *ValidatingAdmissionPolicySpec) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) } -func (m *ServiceReference) Size() (n int) { - if m == nil { - return 0 - } +func (m *ValidatingAdmissionPolicySpec) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i var l int _ = l - l = len(m.Namespace) - n += 1 + l + sovGenerated(uint64(l)) - l = len(m.Name) - n += 1 + l + sovGenerated(uint64(l)) - if m.Path != nil { - l = len(*m.Path) - n += 1 + l + sovGenerated(uint64(l)) - } - if m.Port != nil { - n += 1 + sovGenerated(uint64(*m.Port)) + if len(m.Variables) > 0 { + for iNdEx := len(m.Variables) - 1; iNdEx >= 0; iNdEx-- { + { + size, err := m.Variables[iNdEx].MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintGenerated(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x3a + } } - return n -} - -func (m *ValidatingWebhook) Size() (n int) { - if m == nil { - return 0 + if len(m.MatchConditions) > 0 { + for iNdEx := len(m.MatchConditions) - 1; iNdEx >= 0; iNdEx-- { + { + size, err := m.MatchConditions[iNdEx].MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintGenerated(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x32 + } } - var l int - _ = l - l = 
len(m.Name) - n += 1 + l + sovGenerated(uint64(l)) - l = m.ClientConfig.Size() - n += 1 + l + sovGenerated(uint64(l)) - if len(m.Rules) > 0 { - for _, e := range m.Rules { - l = e.Size() - n += 1 + l + sovGenerated(uint64(l)) + if len(m.AuditAnnotations) > 0 { + for iNdEx := len(m.AuditAnnotations) - 1; iNdEx >= 0; iNdEx-- { + { + size, err := m.AuditAnnotations[iNdEx].MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintGenerated(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x2a } } if m.FailurePolicy != nil { - l = len(*m.FailurePolicy) - n += 1 + l + sovGenerated(uint64(l)) - } - if m.NamespaceSelector != nil { - l = m.NamespaceSelector.Size() - n += 1 + l + sovGenerated(uint64(l)) - } - if m.SideEffects != nil { - l = len(*m.SideEffects) - n += 1 + l + sovGenerated(uint64(l)) - } - if m.TimeoutSeconds != nil { - n += 1 + sovGenerated(uint64(*m.TimeoutSeconds)) + i -= len(*m.FailurePolicy) + copy(dAtA[i:], *m.FailurePolicy) + i = encodeVarintGenerated(dAtA, i, uint64(len(*m.FailurePolicy))) + i-- + dAtA[i] = 0x22 } - if len(m.AdmissionReviewVersions) > 0 { - for _, s := range m.AdmissionReviewVersions { - l = len(s) - n += 1 + l + sovGenerated(uint64(l)) + if len(m.Validations) > 0 { + for iNdEx := len(m.Validations) - 1; iNdEx >= 0; iNdEx-- { + { + size, err := m.Validations[iNdEx].MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintGenerated(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x1a } } - if m.MatchPolicy != nil { - l = len(*m.MatchPolicy) - n += 1 + l + sovGenerated(uint64(l)) - } - if m.ObjectSelector != nil { - l = m.ObjectSelector.Size() - n += 1 + l + sovGenerated(uint64(l)) + if m.MatchConstraints != nil { + { + size, err := m.MatchConstraints.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintGenerated(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x12 } - if len(m.MatchConditions) > 0 { - for _, e := range 
m.MatchConditions { - l = e.Size() - n += 1 + l + sovGenerated(uint64(l)) + if m.ParamKind != nil { + { + size, err := m.ParamKind.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintGenerated(dAtA, i, uint64(size)) } + i-- + dAtA[i] = 0xa } - return n + return len(dAtA) - i, nil } -func (m *ValidatingWebhookConfiguration) Size() (n int) { - if m == nil { - return 0 - } - var l int - _ = l - l = m.ObjectMeta.Size() - n += 1 + l + sovGenerated(uint64(l)) - if len(m.Webhooks) > 0 { - for _, e := range m.Webhooks { - l = e.Size() - n += 1 + l + sovGenerated(uint64(l)) - } +func (m *ValidatingAdmissionPolicyStatus) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err } - return n + return dAtA[:n], nil } -func (m *ValidatingWebhookConfigurationList) Size() (n int) { - if m == nil { - return 0 - } +func (m *ValidatingAdmissionPolicyStatus) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *ValidatingAdmissionPolicyStatus) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i var l int _ = l - l = m.ListMeta.Size() - n += 1 + l + sovGenerated(uint64(l)) - if len(m.Items) > 0 { - for _, e := range m.Items { - l = e.Size() - n += 1 + l + sovGenerated(uint64(l)) + if len(m.Conditions) > 0 { + for iNdEx := len(m.Conditions) - 1; iNdEx >= 0; iNdEx-- { + { + size, err := m.Conditions[iNdEx].MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintGenerated(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x1a } } - return n + if m.TypeChecking != nil { + { + size, err := m.TypeChecking.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintGenerated(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x12 + } + i = encodeVarintGenerated(dAtA, i, 
uint64(m.ObservedGeneration)) + i-- + dAtA[i] = 0x8 + return len(dAtA) - i, nil } -func (m *WebhookClientConfig) Size() (n int) { - if m == nil { - return 0 +func (m *ValidatingWebhook) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err } + return dAtA[:n], nil +} + +func (m *ValidatingWebhook) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *ValidatingWebhook) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i var l int _ = l - if m.Service != nil { - l = m.Service.Size() - n += 1 + l + sovGenerated(uint64(l)) - } - if m.CABundle != nil { - l = len(m.CABundle) - n += 1 + l + sovGenerated(uint64(l)) + if len(m.MatchConditions) > 0 { + for iNdEx := len(m.MatchConditions) - 1; iNdEx >= 0; iNdEx-- { + { + size, err := m.MatchConditions[iNdEx].MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintGenerated(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x5a + } } - if m.URL != nil { - l = len(*m.URL) - n += 1 + l + sovGenerated(uint64(l)) + if m.ObjectSelector != nil { + { + size, err := m.ObjectSelector.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintGenerated(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x52 } - return n -} - -func sovGenerated(x uint64) (n int) { - return (math_bits.Len64(x|1) + 6) / 7 -} -func sozGenerated(x uint64) (n int) { - return sovGenerated(uint64((x << 1) ^ uint64((int64(x) >> 63)))) -} -func (this *MatchCondition) String() string { - if this == nil { - return "nil" + if m.MatchPolicy != nil { + i -= len(*m.MatchPolicy) + copy(dAtA[i:], *m.MatchPolicy) + i = encodeVarintGenerated(dAtA, i, uint64(len(*m.MatchPolicy))) + i-- + dAtA[i] = 0x4a } - s := strings.Join([]string{`&MatchCondition{`, - `Name:` + fmt.Sprintf("%v", this.Name) + `,`, - 
`Expression:` + fmt.Sprintf("%v", this.Expression) + `,`, - `}`, - }, "") - return s -} -func (this *MutatingWebhook) String() string { - if this == nil { - return "nil" + if len(m.AdmissionReviewVersions) > 0 { + for iNdEx := len(m.AdmissionReviewVersions) - 1; iNdEx >= 0; iNdEx-- { + i -= len(m.AdmissionReviewVersions[iNdEx]) + copy(dAtA[i:], m.AdmissionReviewVersions[iNdEx]) + i = encodeVarintGenerated(dAtA, i, uint64(len(m.AdmissionReviewVersions[iNdEx]))) + i-- + dAtA[i] = 0x42 + } } - repeatedStringForRules := "[]RuleWithOperations{" - for _, f := range this.Rules { - repeatedStringForRules += fmt.Sprintf("%v", f) + "," + if m.TimeoutSeconds != nil { + i = encodeVarintGenerated(dAtA, i, uint64(*m.TimeoutSeconds)) + i-- + dAtA[i] = 0x38 } - repeatedStringForRules += "}" - repeatedStringForMatchConditions := "[]MatchCondition{" - for _, f := range this.MatchConditions { - repeatedStringForMatchConditions += strings.Replace(strings.Replace(f.String(), "MatchCondition", "MatchCondition", 1), `&`, ``, 1) + "," + if m.SideEffects != nil { + i -= len(*m.SideEffects) + copy(dAtA[i:], *m.SideEffects) + i = encodeVarintGenerated(dAtA, i, uint64(len(*m.SideEffects))) + i-- + dAtA[i] = 0x32 } - repeatedStringForMatchConditions += "}" - s := strings.Join([]string{`&MutatingWebhook{`, - `Name:` + fmt.Sprintf("%v", this.Name) + `,`, - `ClientConfig:` + strings.Replace(strings.Replace(this.ClientConfig.String(), "WebhookClientConfig", "WebhookClientConfig", 1), `&`, ``, 1) + `,`, - `Rules:` + repeatedStringForRules + `,`, - `FailurePolicy:` + valueToStringGenerated(this.FailurePolicy) + `,`, - `NamespaceSelector:` + strings.Replace(fmt.Sprintf("%v", this.NamespaceSelector), "LabelSelector", "v11.LabelSelector", 1) + `,`, - `SideEffects:` + valueToStringGenerated(this.SideEffects) + `,`, - `TimeoutSeconds:` + valueToStringGenerated(this.TimeoutSeconds) + `,`, - `AdmissionReviewVersions:` + fmt.Sprintf("%v", this.AdmissionReviewVersions) + `,`, - `MatchPolicy:` + 
valueToStringGenerated(this.MatchPolicy) + `,`, - `ReinvocationPolicy:` + valueToStringGenerated(this.ReinvocationPolicy) + `,`, - `ObjectSelector:` + strings.Replace(fmt.Sprintf("%v", this.ObjectSelector), "LabelSelector", "v11.LabelSelector", 1) + `,`, - `MatchConditions:` + repeatedStringForMatchConditions + `,`, - `}`, - }, "") - return s -} -func (this *MutatingWebhookConfiguration) String() string { - if this == nil { - return "nil" + if m.NamespaceSelector != nil { + { + size, err := m.NamespaceSelector.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintGenerated(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x2a } - repeatedStringForWebhooks := "[]MutatingWebhook{" - for _, f := range this.Webhooks { - repeatedStringForWebhooks += strings.Replace(strings.Replace(f.String(), "MutatingWebhook", "MutatingWebhook", 1), `&`, ``, 1) + "," + if m.FailurePolicy != nil { + i -= len(*m.FailurePolicy) + copy(dAtA[i:], *m.FailurePolicy) + i = encodeVarintGenerated(dAtA, i, uint64(len(*m.FailurePolicy))) + i-- + dAtA[i] = 0x22 } - repeatedStringForWebhooks += "}" - s := strings.Join([]string{`&MutatingWebhookConfiguration{`, - `ObjectMeta:` + strings.Replace(strings.Replace(fmt.Sprintf("%v", this.ObjectMeta), "ObjectMeta", "v11.ObjectMeta", 1), `&`, ``, 1) + `,`, - `Webhooks:` + repeatedStringForWebhooks + `,`, - `}`, - }, "") - return s -} -func (this *MutatingWebhookConfigurationList) String() string { - if this == nil { - return "nil" + if len(m.Rules) > 0 { + for iNdEx := len(m.Rules) - 1; iNdEx >= 0; iNdEx-- { + { + size, err := m.Rules[iNdEx].MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintGenerated(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x1a + } } - repeatedStringForItems := "[]MutatingWebhookConfiguration{" - for _, f := range this.Items { - repeatedStringForItems += strings.Replace(strings.Replace(f.String(), "MutatingWebhookConfiguration", 
"MutatingWebhookConfiguration", 1), `&`, ``, 1) + "," + { + size, err := m.ClientConfig.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintGenerated(dAtA, i, uint64(size)) } - repeatedStringForItems += "}" - s := strings.Join([]string{`&MutatingWebhookConfigurationList{`, - `ListMeta:` + strings.Replace(strings.Replace(fmt.Sprintf("%v", this.ListMeta), "ListMeta", "v11.ListMeta", 1), `&`, ``, 1) + `,`, - `Items:` + repeatedStringForItems + `,`, - `}`, - }, "") - return s + i-- + dAtA[i] = 0x12 + i -= len(m.Name) + copy(dAtA[i:], m.Name) + i = encodeVarintGenerated(dAtA, i, uint64(len(m.Name))) + i-- + dAtA[i] = 0xa + return len(dAtA) - i, nil } -func (this *ServiceReference) String() string { - if this == nil { - return "nil" + +func (m *ValidatingWebhookConfiguration) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err } - s := strings.Join([]string{`&ServiceReference{`, - `Namespace:` + fmt.Sprintf("%v", this.Namespace) + `,`, - `Name:` + fmt.Sprintf("%v", this.Name) + `,`, - `Path:` + valueToStringGenerated(this.Path) + `,`, - `Port:` + valueToStringGenerated(this.Port) + `,`, - `}`, - }, "") - return s + return dAtA[:n], nil } -func (this *ValidatingWebhook) String() string { - if this == nil { - return "nil" - } - repeatedStringForRules := "[]RuleWithOperations{" - for _, f := range this.Rules { - repeatedStringForRules += fmt.Sprintf("%v", f) + "," + +func (m *ValidatingWebhookConfiguration) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *ValidatingWebhookConfiguration) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if len(m.Webhooks) > 0 { + for iNdEx := len(m.Webhooks) - 1; iNdEx >= 0; iNdEx-- { + { + size, err := m.Webhooks[iNdEx].MarshalToSizedBuffer(dAtA[:i]) + if err != nil { 
+ return 0, err + } + i -= size + i = encodeVarintGenerated(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x12 + } } - repeatedStringForRules += "}" - repeatedStringForMatchConditions := "[]MatchCondition{" - for _, f := range this.MatchConditions { - repeatedStringForMatchConditions += strings.Replace(strings.Replace(f.String(), "MatchCondition", "MatchCondition", 1), `&`, ``, 1) + "," + { + size, err := m.ObjectMeta.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintGenerated(dAtA, i, uint64(size)) } - repeatedStringForMatchConditions += "}" - s := strings.Join([]string{`&ValidatingWebhook{`, - `Name:` + fmt.Sprintf("%v", this.Name) + `,`, + i-- + dAtA[i] = 0xa + return len(dAtA) - i, nil +} + +func (m *ValidatingWebhookConfigurationList) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *ValidatingWebhookConfigurationList) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *ValidatingWebhookConfigurationList) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if len(m.Items) > 0 { + for iNdEx := len(m.Items) - 1; iNdEx >= 0; iNdEx-- { + { + size, err := m.Items[iNdEx].MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintGenerated(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x12 + } + } + { + size, err := m.ListMeta.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintGenerated(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0xa + return len(dAtA) - i, nil +} + +func (m *Validation) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return 
dAtA[:n], nil +} + +func (m *Validation) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *Validation) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + i -= len(m.MessageExpression) + copy(dAtA[i:], m.MessageExpression) + i = encodeVarintGenerated(dAtA, i, uint64(len(m.MessageExpression))) + i-- + dAtA[i] = 0x22 + if m.Reason != nil { + i -= len(*m.Reason) + copy(dAtA[i:], *m.Reason) + i = encodeVarintGenerated(dAtA, i, uint64(len(*m.Reason))) + i-- + dAtA[i] = 0x1a + } + i -= len(m.Message) + copy(dAtA[i:], m.Message) + i = encodeVarintGenerated(dAtA, i, uint64(len(m.Message))) + i-- + dAtA[i] = 0x12 + i -= len(m.Expression) + copy(dAtA[i:], m.Expression) + i = encodeVarintGenerated(dAtA, i, uint64(len(m.Expression))) + i-- + dAtA[i] = 0xa + return len(dAtA) - i, nil +} + +func (m *Variable) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *Variable) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *Variable) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + i -= len(m.Expression) + copy(dAtA[i:], m.Expression) + i = encodeVarintGenerated(dAtA, i, uint64(len(m.Expression))) + i-- + dAtA[i] = 0x12 + i -= len(m.Name) + copy(dAtA[i:], m.Name) + i = encodeVarintGenerated(dAtA, i, uint64(len(m.Name))) + i-- + dAtA[i] = 0xa + return len(dAtA) - i, nil +} + +func (m *WebhookClientConfig) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *WebhookClientConfig) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return 
m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *WebhookClientConfig) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if m.URL != nil { + i -= len(*m.URL) + copy(dAtA[i:], *m.URL) + i = encodeVarintGenerated(dAtA, i, uint64(len(*m.URL))) + i-- + dAtA[i] = 0x1a + } + if m.CABundle != nil { + i -= len(m.CABundle) + copy(dAtA[i:], m.CABundle) + i = encodeVarintGenerated(dAtA, i, uint64(len(m.CABundle))) + i-- + dAtA[i] = 0x12 + } + if m.Service != nil { + { + size, err := m.Service.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintGenerated(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0xa + } + return len(dAtA) - i, nil +} + +func encodeVarintGenerated(dAtA []byte, offset int, v uint64) int { + offset -= sovGenerated(v) + base := offset + for v >= 1<<7 { + dAtA[offset] = uint8(v&0x7f | 0x80) + v >>= 7 + offset++ + } + dAtA[offset] = uint8(v) + return base +} +func (m *AuditAnnotation) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + l = len(m.Key) + n += 1 + l + sovGenerated(uint64(l)) + l = len(m.ValueExpression) + n += 1 + l + sovGenerated(uint64(l)) + return n +} + +func (m *ExpressionWarning) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + l = len(m.FieldRef) + n += 1 + l + sovGenerated(uint64(l)) + l = len(m.Warning) + n += 1 + l + sovGenerated(uint64(l)) + return n +} + +func (m *MatchCondition) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + l = len(m.Name) + n += 1 + l + sovGenerated(uint64(l)) + l = len(m.Expression) + n += 1 + l + sovGenerated(uint64(l)) + return n +} + +func (m *MatchResources) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + if m.NamespaceSelector != nil { + l = m.NamespaceSelector.Size() + n += 1 + l + sovGenerated(uint64(l)) + } + if m.ObjectSelector != nil { + l = m.ObjectSelector.Size() + n += 1 + l + sovGenerated(uint64(l)) + } + if len(m.ResourceRules) 
> 0 { + for _, e := range m.ResourceRules { + l = e.Size() + n += 1 + l + sovGenerated(uint64(l)) + } + } + if len(m.ExcludeResourceRules) > 0 { + for _, e := range m.ExcludeResourceRules { + l = e.Size() + n += 1 + l + sovGenerated(uint64(l)) + } + } + if m.MatchPolicy != nil { + l = len(*m.MatchPolicy) + n += 1 + l + sovGenerated(uint64(l)) + } + return n +} + +func (m *MutatingWebhook) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + l = len(m.Name) + n += 1 + l + sovGenerated(uint64(l)) + l = m.ClientConfig.Size() + n += 1 + l + sovGenerated(uint64(l)) + if len(m.Rules) > 0 { + for _, e := range m.Rules { + l = e.Size() + n += 1 + l + sovGenerated(uint64(l)) + } + } + if m.FailurePolicy != nil { + l = len(*m.FailurePolicy) + n += 1 + l + sovGenerated(uint64(l)) + } + if m.NamespaceSelector != nil { + l = m.NamespaceSelector.Size() + n += 1 + l + sovGenerated(uint64(l)) + } + if m.SideEffects != nil { + l = len(*m.SideEffects) + n += 1 + l + sovGenerated(uint64(l)) + } + if m.TimeoutSeconds != nil { + n += 1 + sovGenerated(uint64(*m.TimeoutSeconds)) + } + if len(m.AdmissionReviewVersions) > 0 { + for _, s := range m.AdmissionReviewVersions { + l = len(s) + n += 1 + l + sovGenerated(uint64(l)) + } + } + if m.MatchPolicy != nil { + l = len(*m.MatchPolicy) + n += 1 + l + sovGenerated(uint64(l)) + } + if m.ReinvocationPolicy != nil { + l = len(*m.ReinvocationPolicy) + n += 1 + l + sovGenerated(uint64(l)) + } + if m.ObjectSelector != nil { + l = m.ObjectSelector.Size() + n += 1 + l + sovGenerated(uint64(l)) + } + if len(m.MatchConditions) > 0 { + for _, e := range m.MatchConditions { + l = e.Size() + n += 1 + l + sovGenerated(uint64(l)) + } + } + return n +} + +func (m *MutatingWebhookConfiguration) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + l = m.ObjectMeta.Size() + n += 1 + l + sovGenerated(uint64(l)) + if len(m.Webhooks) > 0 { + for _, e := range m.Webhooks { + l = e.Size() + n += 1 + l + sovGenerated(uint64(l)) + } 
+ } + return n +} + +func (m *MutatingWebhookConfigurationList) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + l = m.ListMeta.Size() + n += 1 + l + sovGenerated(uint64(l)) + if len(m.Items) > 0 { + for _, e := range m.Items { + l = e.Size() + n += 1 + l + sovGenerated(uint64(l)) + } + } + return n +} + +func (m *NamedRuleWithOperations) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + if len(m.ResourceNames) > 0 { + for _, s := range m.ResourceNames { + l = len(s) + n += 1 + l + sovGenerated(uint64(l)) + } + } + l = m.RuleWithOperations.Size() + n += 1 + l + sovGenerated(uint64(l)) + return n +} + +func (m *ParamKind) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + l = len(m.APIVersion) + n += 1 + l + sovGenerated(uint64(l)) + l = len(m.Kind) + n += 1 + l + sovGenerated(uint64(l)) + return n +} + +func (m *ParamRef) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + l = len(m.Name) + n += 1 + l + sovGenerated(uint64(l)) + l = len(m.Namespace) + n += 1 + l + sovGenerated(uint64(l)) + if m.Selector != nil { + l = m.Selector.Size() + n += 1 + l + sovGenerated(uint64(l)) + } + if m.ParameterNotFoundAction != nil { + l = len(*m.ParameterNotFoundAction) + n += 1 + l + sovGenerated(uint64(l)) + } + return n +} + +func (m *ServiceReference) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + l = len(m.Namespace) + n += 1 + l + sovGenerated(uint64(l)) + l = len(m.Name) + n += 1 + l + sovGenerated(uint64(l)) + if m.Path != nil { + l = len(*m.Path) + n += 1 + l + sovGenerated(uint64(l)) + } + if m.Port != nil { + n += 1 + sovGenerated(uint64(*m.Port)) + } + return n +} + +func (m *TypeChecking) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + if len(m.ExpressionWarnings) > 0 { + for _, e := range m.ExpressionWarnings { + l = e.Size() + n += 1 + l + sovGenerated(uint64(l)) + } + } + return n +} + +func (m *ValidatingAdmissionPolicy) Size() (n int) { + if 
m == nil { + return 0 + } + var l int + _ = l + l = m.ObjectMeta.Size() + n += 1 + l + sovGenerated(uint64(l)) + l = m.Spec.Size() + n += 1 + l + sovGenerated(uint64(l)) + l = m.Status.Size() + n += 1 + l + sovGenerated(uint64(l)) + return n +} + +func (m *ValidatingAdmissionPolicyBinding) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + l = m.ObjectMeta.Size() + n += 1 + l + sovGenerated(uint64(l)) + l = m.Spec.Size() + n += 1 + l + sovGenerated(uint64(l)) + return n +} + +func (m *ValidatingAdmissionPolicyBindingList) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + l = m.ListMeta.Size() + n += 1 + l + sovGenerated(uint64(l)) + if len(m.Items) > 0 { + for _, e := range m.Items { + l = e.Size() + n += 1 + l + sovGenerated(uint64(l)) + } + } + return n +} + +func (m *ValidatingAdmissionPolicyBindingSpec) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + l = len(m.PolicyName) + n += 1 + l + sovGenerated(uint64(l)) + if m.ParamRef != nil { + l = m.ParamRef.Size() + n += 1 + l + sovGenerated(uint64(l)) + } + if m.MatchResources != nil { + l = m.MatchResources.Size() + n += 1 + l + sovGenerated(uint64(l)) + } + if len(m.ValidationActions) > 0 { + for _, s := range m.ValidationActions { + l = len(s) + n += 1 + l + sovGenerated(uint64(l)) + } + } + return n +} + +func (m *ValidatingAdmissionPolicyList) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + l = m.ListMeta.Size() + n += 1 + l + sovGenerated(uint64(l)) + if len(m.Items) > 0 { + for _, e := range m.Items { + l = e.Size() + n += 1 + l + sovGenerated(uint64(l)) + } + } + return n +} + +func (m *ValidatingAdmissionPolicySpec) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + if m.ParamKind != nil { + l = m.ParamKind.Size() + n += 1 + l + sovGenerated(uint64(l)) + } + if m.MatchConstraints != nil { + l = m.MatchConstraints.Size() + n += 1 + l + sovGenerated(uint64(l)) + } + if len(m.Validations) > 0 { + for _, e := 
range m.Validations { + l = e.Size() + n += 1 + l + sovGenerated(uint64(l)) + } + } + if m.FailurePolicy != nil { + l = len(*m.FailurePolicy) + n += 1 + l + sovGenerated(uint64(l)) + } + if len(m.AuditAnnotations) > 0 { + for _, e := range m.AuditAnnotations { + l = e.Size() + n += 1 + l + sovGenerated(uint64(l)) + } + } + if len(m.MatchConditions) > 0 { + for _, e := range m.MatchConditions { + l = e.Size() + n += 1 + l + sovGenerated(uint64(l)) + } + } + if len(m.Variables) > 0 { + for _, e := range m.Variables { + l = e.Size() + n += 1 + l + sovGenerated(uint64(l)) + } + } + return n +} + +func (m *ValidatingAdmissionPolicyStatus) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + n += 1 + sovGenerated(uint64(m.ObservedGeneration)) + if m.TypeChecking != nil { + l = m.TypeChecking.Size() + n += 1 + l + sovGenerated(uint64(l)) + } + if len(m.Conditions) > 0 { + for _, e := range m.Conditions { + l = e.Size() + n += 1 + l + sovGenerated(uint64(l)) + } + } + return n +} + +func (m *ValidatingWebhook) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + l = len(m.Name) + n += 1 + l + sovGenerated(uint64(l)) + l = m.ClientConfig.Size() + n += 1 + l + sovGenerated(uint64(l)) + if len(m.Rules) > 0 { + for _, e := range m.Rules { + l = e.Size() + n += 1 + l + sovGenerated(uint64(l)) + } + } + if m.FailurePolicy != nil { + l = len(*m.FailurePolicy) + n += 1 + l + sovGenerated(uint64(l)) + } + if m.NamespaceSelector != nil { + l = m.NamespaceSelector.Size() + n += 1 + l + sovGenerated(uint64(l)) + } + if m.SideEffects != nil { + l = len(*m.SideEffects) + n += 1 + l + sovGenerated(uint64(l)) + } + if m.TimeoutSeconds != nil { + n += 1 + sovGenerated(uint64(*m.TimeoutSeconds)) + } + if len(m.AdmissionReviewVersions) > 0 { + for _, s := range m.AdmissionReviewVersions { + l = len(s) + n += 1 + l + sovGenerated(uint64(l)) + } + } + if m.MatchPolicy != nil { + l = len(*m.MatchPolicy) + n += 1 + l + sovGenerated(uint64(l)) + } + if 
m.ObjectSelector != nil { + l = m.ObjectSelector.Size() + n += 1 + l + sovGenerated(uint64(l)) + } + if len(m.MatchConditions) > 0 { + for _, e := range m.MatchConditions { + l = e.Size() + n += 1 + l + sovGenerated(uint64(l)) + } + } + return n +} + +func (m *ValidatingWebhookConfiguration) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + l = m.ObjectMeta.Size() + n += 1 + l + sovGenerated(uint64(l)) + if len(m.Webhooks) > 0 { + for _, e := range m.Webhooks { + l = e.Size() + n += 1 + l + sovGenerated(uint64(l)) + } + } + return n +} + +func (m *ValidatingWebhookConfigurationList) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + l = m.ListMeta.Size() + n += 1 + l + sovGenerated(uint64(l)) + if len(m.Items) > 0 { + for _, e := range m.Items { + l = e.Size() + n += 1 + l + sovGenerated(uint64(l)) + } + } + return n +} + +func (m *Validation) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + l = len(m.Expression) + n += 1 + l + sovGenerated(uint64(l)) + l = len(m.Message) + n += 1 + l + sovGenerated(uint64(l)) + if m.Reason != nil { + l = len(*m.Reason) + n += 1 + l + sovGenerated(uint64(l)) + } + l = len(m.MessageExpression) + n += 1 + l + sovGenerated(uint64(l)) + return n +} + +func (m *Variable) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + l = len(m.Name) + n += 1 + l + sovGenerated(uint64(l)) + l = len(m.Expression) + n += 1 + l + sovGenerated(uint64(l)) + return n +} + +func (m *WebhookClientConfig) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + if m.Service != nil { + l = m.Service.Size() + n += 1 + l + sovGenerated(uint64(l)) + } + if m.CABundle != nil { + l = len(m.CABundle) + n += 1 + l + sovGenerated(uint64(l)) + } + if m.URL != nil { + l = len(*m.URL) + n += 1 + l + sovGenerated(uint64(l)) + } + return n +} + +func sovGenerated(x uint64) (n int) { + return (math_bits.Len64(x|1) + 6) / 7 +} +func sozGenerated(x uint64) (n int) { + return 
sovGenerated(uint64((x << 1) ^ uint64((int64(x) >> 63)))) +} +func (this *AuditAnnotation) String() string { + if this == nil { + return "nil" + } + s := strings.Join([]string{`&AuditAnnotation{`, + `Key:` + fmt.Sprintf("%v", this.Key) + `,`, + `ValueExpression:` + fmt.Sprintf("%v", this.ValueExpression) + `,`, + `}`, + }, "") + return s +} +func (this *ExpressionWarning) String() string { + if this == nil { + return "nil" + } + s := strings.Join([]string{`&ExpressionWarning{`, + `FieldRef:` + fmt.Sprintf("%v", this.FieldRef) + `,`, + `Warning:` + fmt.Sprintf("%v", this.Warning) + `,`, + `}`, + }, "") + return s +} +func (this *MatchCondition) String() string { + if this == nil { + return "nil" + } + s := strings.Join([]string{`&MatchCondition{`, + `Name:` + fmt.Sprintf("%v", this.Name) + `,`, + `Expression:` + fmt.Sprintf("%v", this.Expression) + `,`, + `}`, + }, "") + return s +} +func (this *MatchResources) String() string { + if this == nil { + return "nil" + } + repeatedStringForResourceRules := "[]NamedRuleWithOperations{" + for _, f := range this.ResourceRules { + repeatedStringForResourceRules += strings.Replace(strings.Replace(f.String(), "NamedRuleWithOperations", "NamedRuleWithOperations", 1), `&`, ``, 1) + "," + } + repeatedStringForResourceRules += "}" + repeatedStringForExcludeResourceRules := "[]NamedRuleWithOperations{" + for _, f := range this.ExcludeResourceRules { + repeatedStringForExcludeResourceRules += strings.Replace(strings.Replace(f.String(), "NamedRuleWithOperations", "NamedRuleWithOperations", 1), `&`, ``, 1) + "," + } + repeatedStringForExcludeResourceRules += "}" + s := strings.Join([]string{`&MatchResources{`, + `NamespaceSelector:` + strings.Replace(fmt.Sprintf("%v", this.NamespaceSelector), "LabelSelector", "v1.LabelSelector", 1) + `,`, + `ObjectSelector:` + strings.Replace(fmt.Sprintf("%v", this.ObjectSelector), "LabelSelector", "v1.LabelSelector", 1) + `,`, + `ResourceRules:` + repeatedStringForResourceRules + `,`, + 
`ExcludeResourceRules:` + repeatedStringForExcludeResourceRules + `,`, + `MatchPolicy:` + valueToStringGenerated(this.MatchPolicy) + `,`, + `}`, + }, "") + return s +} +func (this *MutatingWebhook) String() string { + if this == nil { + return "nil" + } + repeatedStringForRules := "[]RuleWithOperations{" + for _, f := range this.Rules { + repeatedStringForRules += fmt.Sprintf("%v", f) + "," + } + repeatedStringForRules += "}" + repeatedStringForMatchConditions := "[]MatchCondition{" + for _, f := range this.MatchConditions { + repeatedStringForMatchConditions += strings.Replace(strings.Replace(f.String(), "MatchCondition", "MatchCondition", 1), `&`, ``, 1) + "," + } + repeatedStringForMatchConditions += "}" + s := strings.Join([]string{`&MutatingWebhook{`, + `Name:` + fmt.Sprintf("%v", this.Name) + `,`, + `ClientConfig:` + strings.Replace(strings.Replace(this.ClientConfig.String(), "WebhookClientConfig", "WebhookClientConfig", 1), `&`, ``, 1) + `,`, + `Rules:` + repeatedStringForRules + `,`, + `FailurePolicy:` + valueToStringGenerated(this.FailurePolicy) + `,`, + `NamespaceSelector:` + strings.Replace(fmt.Sprintf("%v", this.NamespaceSelector), "LabelSelector", "v1.LabelSelector", 1) + `,`, + `SideEffects:` + valueToStringGenerated(this.SideEffects) + `,`, + `TimeoutSeconds:` + valueToStringGenerated(this.TimeoutSeconds) + `,`, + `AdmissionReviewVersions:` + fmt.Sprintf("%v", this.AdmissionReviewVersions) + `,`, + `MatchPolicy:` + valueToStringGenerated(this.MatchPolicy) + `,`, + `ReinvocationPolicy:` + valueToStringGenerated(this.ReinvocationPolicy) + `,`, + `ObjectSelector:` + strings.Replace(fmt.Sprintf("%v", this.ObjectSelector), "LabelSelector", "v1.LabelSelector", 1) + `,`, + `MatchConditions:` + repeatedStringForMatchConditions + `,`, + `}`, + }, "") + return s +} +func (this *MutatingWebhookConfiguration) String() string { + if this == nil { + return "nil" + } + repeatedStringForWebhooks := "[]MutatingWebhook{" + for _, f := range this.Webhooks { + 
repeatedStringForWebhooks += strings.Replace(strings.Replace(f.String(), "MutatingWebhook", "MutatingWebhook", 1), `&`, ``, 1) + "," + } + repeatedStringForWebhooks += "}" + s := strings.Join([]string{`&MutatingWebhookConfiguration{`, + `ObjectMeta:` + strings.Replace(strings.Replace(fmt.Sprintf("%v", this.ObjectMeta), "ObjectMeta", "v1.ObjectMeta", 1), `&`, ``, 1) + `,`, + `Webhooks:` + repeatedStringForWebhooks + `,`, + `}`, + }, "") + return s +} +func (this *MutatingWebhookConfigurationList) String() string { + if this == nil { + return "nil" + } + repeatedStringForItems := "[]MutatingWebhookConfiguration{" + for _, f := range this.Items { + repeatedStringForItems += strings.Replace(strings.Replace(f.String(), "MutatingWebhookConfiguration", "MutatingWebhookConfiguration", 1), `&`, ``, 1) + "," + } + repeatedStringForItems += "}" + s := strings.Join([]string{`&MutatingWebhookConfigurationList{`, + `ListMeta:` + strings.Replace(strings.Replace(fmt.Sprintf("%v", this.ListMeta), "ListMeta", "v1.ListMeta", 1), `&`, ``, 1) + `,`, + `Items:` + repeatedStringForItems + `,`, + `}`, + }, "") + return s +} +func (this *NamedRuleWithOperations) String() string { + if this == nil { + return "nil" + } + s := strings.Join([]string{`&NamedRuleWithOperations{`, + `ResourceNames:` + fmt.Sprintf("%v", this.ResourceNames) + `,`, + `RuleWithOperations:` + strings.Replace(strings.Replace(fmt.Sprintf("%v", this.RuleWithOperations), "RuleWithOperations", "v11.RuleWithOperations", 1), `&`, ``, 1) + `,`, + `}`, + }, "") + return s +} +func (this *ParamKind) String() string { + if this == nil { + return "nil" + } + s := strings.Join([]string{`&ParamKind{`, + `APIVersion:` + fmt.Sprintf("%v", this.APIVersion) + `,`, + `Kind:` + fmt.Sprintf("%v", this.Kind) + `,`, + `}`, + }, "") + return s +} +func (this *ParamRef) String() string { + if this == nil { + return "nil" + } + s := strings.Join([]string{`&ParamRef{`, + `Name:` + fmt.Sprintf("%v", this.Name) + `,`, + `Namespace:` + 
fmt.Sprintf("%v", this.Namespace) + `,`, + `Selector:` + strings.Replace(fmt.Sprintf("%v", this.Selector), "LabelSelector", "v1.LabelSelector", 1) + `,`, + `ParameterNotFoundAction:` + valueToStringGenerated(this.ParameterNotFoundAction) + `,`, + `}`, + }, "") + return s +} +func (this *ServiceReference) String() string { + if this == nil { + return "nil" + } + s := strings.Join([]string{`&ServiceReference{`, + `Namespace:` + fmt.Sprintf("%v", this.Namespace) + `,`, + `Name:` + fmt.Sprintf("%v", this.Name) + `,`, + `Path:` + valueToStringGenerated(this.Path) + `,`, + `Port:` + valueToStringGenerated(this.Port) + `,`, + `}`, + }, "") + return s +} +func (this *TypeChecking) String() string { + if this == nil { + return "nil" + } + repeatedStringForExpressionWarnings := "[]ExpressionWarning{" + for _, f := range this.ExpressionWarnings { + repeatedStringForExpressionWarnings += strings.Replace(strings.Replace(f.String(), "ExpressionWarning", "ExpressionWarning", 1), `&`, ``, 1) + "," + } + repeatedStringForExpressionWarnings += "}" + s := strings.Join([]string{`&TypeChecking{`, + `ExpressionWarnings:` + repeatedStringForExpressionWarnings + `,`, + `}`, + }, "") + return s +} +func (this *ValidatingAdmissionPolicy) String() string { + if this == nil { + return "nil" + } + s := strings.Join([]string{`&ValidatingAdmissionPolicy{`, + `ObjectMeta:` + strings.Replace(strings.Replace(fmt.Sprintf("%v", this.ObjectMeta), "ObjectMeta", "v1.ObjectMeta", 1), `&`, ``, 1) + `,`, + `Spec:` + strings.Replace(strings.Replace(this.Spec.String(), "ValidatingAdmissionPolicySpec", "ValidatingAdmissionPolicySpec", 1), `&`, ``, 1) + `,`, + `Status:` + strings.Replace(strings.Replace(this.Status.String(), "ValidatingAdmissionPolicyStatus", "ValidatingAdmissionPolicyStatus", 1), `&`, ``, 1) + `,`, + `}`, + }, "") + return s +} +func (this *ValidatingAdmissionPolicyBinding) String() string { + if this == nil { + return "nil" + } + s := 
strings.Join([]string{`&ValidatingAdmissionPolicyBinding{`, + `ObjectMeta:` + strings.Replace(strings.Replace(fmt.Sprintf("%v", this.ObjectMeta), "ObjectMeta", "v1.ObjectMeta", 1), `&`, ``, 1) + `,`, + `Spec:` + strings.Replace(strings.Replace(this.Spec.String(), "ValidatingAdmissionPolicyBindingSpec", "ValidatingAdmissionPolicyBindingSpec", 1), `&`, ``, 1) + `,`, + `}`, + }, "") + return s +} +func (this *ValidatingAdmissionPolicyBindingList) String() string { + if this == nil { + return "nil" + } + repeatedStringForItems := "[]ValidatingAdmissionPolicyBinding{" + for _, f := range this.Items { + repeatedStringForItems += strings.Replace(strings.Replace(f.String(), "ValidatingAdmissionPolicyBinding", "ValidatingAdmissionPolicyBinding", 1), `&`, ``, 1) + "," + } + repeatedStringForItems += "}" + s := strings.Join([]string{`&ValidatingAdmissionPolicyBindingList{`, + `ListMeta:` + strings.Replace(strings.Replace(fmt.Sprintf("%v", this.ListMeta), "ListMeta", "v1.ListMeta", 1), `&`, ``, 1) + `,`, + `Items:` + repeatedStringForItems + `,`, + `}`, + }, "") + return s +} +func (this *ValidatingAdmissionPolicyBindingSpec) String() string { + if this == nil { + return "nil" + } + s := strings.Join([]string{`&ValidatingAdmissionPolicyBindingSpec{`, + `PolicyName:` + fmt.Sprintf("%v", this.PolicyName) + `,`, + `ParamRef:` + strings.Replace(this.ParamRef.String(), "ParamRef", "ParamRef", 1) + `,`, + `MatchResources:` + strings.Replace(this.MatchResources.String(), "MatchResources", "MatchResources", 1) + `,`, + `ValidationActions:` + fmt.Sprintf("%v", this.ValidationActions) + `,`, + `}`, + }, "") + return s +} +func (this *ValidatingAdmissionPolicyList) String() string { + if this == nil { + return "nil" + } + repeatedStringForItems := "[]ValidatingAdmissionPolicy{" + for _, f := range this.Items { + repeatedStringForItems += strings.Replace(strings.Replace(f.String(), "ValidatingAdmissionPolicy", "ValidatingAdmissionPolicy", 1), `&`, ``, 1) + "," + } + repeatedStringForItems 
+= "}" + s := strings.Join([]string{`&ValidatingAdmissionPolicyList{`, + `ListMeta:` + strings.Replace(strings.Replace(fmt.Sprintf("%v", this.ListMeta), "ListMeta", "v1.ListMeta", 1), `&`, ``, 1) + `,`, + `Items:` + repeatedStringForItems + `,`, + `}`, + }, "") + return s +} +func (this *ValidatingAdmissionPolicySpec) String() string { + if this == nil { + return "nil" + } + repeatedStringForValidations := "[]Validation{" + for _, f := range this.Validations { + repeatedStringForValidations += strings.Replace(strings.Replace(f.String(), "Validation", "Validation", 1), `&`, ``, 1) + "," + } + repeatedStringForValidations += "}" + repeatedStringForAuditAnnotations := "[]AuditAnnotation{" + for _, f := range this.AuditAnnotations { + repeatedStringForAuditAnnotations += strings.Replace(strings.Replace(f.String(), "AuditAnnotation", "AuditAnnotation", 1), `&`, ``, 1) + "," + } + repeatedStringForAuditAnnotations += "}" + repeatedStringForMatchConditions := "[]MatchCondition{" + for _, f := range this.MatchConditions { + repeatedStringForMatchConditions += strings.Replace(strings.Replace(f.String(), "MatchCondition", "MatchCondition", 1), `&`, ``, 1) + "," + } + repeatedStringForMatchConditions += "}" + repeatedStringForVariables := "[]Variable{" + for _, f := range this.Variables { + repeatedStringForVariables += strings.Replace(strings.Replace(f.String(), "Variable", "Variable", 1), `&`, ``, 1) + "," + } + repeatedStringForVariables += "}" + s := strings.Join([]string{`&ValidatingAdmissionPolicySpec{`, + `ParamKind:` + strings.Replace(this.ParamKind.String(), "ParamKind", "ParamKind", 1) + `,`, + `MatchConstraints:` + strings.Replace(this.MatchConstraints.String(), "MatchResources", "MatchResources", 1) + `,`, + `Validations:` + repeatedStringForValidations + `,`, + `FailurePolicy:` + valueToStringGenerated(this.FailurePolicy) + `,`, + `AuditAnnotations:` + repeatedStringForAuditAnnotations + `,`, + `MatchConditions:` + repeatedStringForMatchConditions + `,`, + 
`Variables:` + repeatedStringForVariables + `,`, + `}`, + }, "") + return s +} +func (this *ValidatingAdmissionPolicyStatus) String() string { + if this == nil { + return "nil" + } + repeatedStringForConditions := "[]Condition{" + for _, f := range this.Conditions { + repeatedStringForConditions += fmt.Sprintf("%v", f) + "," + } + repeatedStringForConditions += "}" + s := strings.Join([]string{`&ValidatingAdmissionPolicyStatus{`, + `ObservedGeneration:` + fmt.Sprintf("%v", this.ObservedGeneration) + `,`, + `TypeChecking:` + strings.Replace(this.TypeChecking.String(), "TypeChecking", "TypeChecking", 1) + `,`, + `Conditions:` + repeatedStringForConditions + `,`, + `}`, + }, "") + return s +} +func (this *ValidatingWebhook) String() string { + if this == nil { + return "nil" + } + repeatedStringForRules := "[]RuleWithOperations{" + for _, f := range this.Rules { + repeatedStringForRules += fmt.Sprintf("%v", f) + "," + } + repeatedStringForRules += "}" + repeatedStringForMatchConditions := "[]MatchCondition{" + for _, f := range this.MatchConditions { + repeatedStringForMatchConditions += strings.Replace(strings.Replace(f.String(), "MatchCondition", "MatchCondition", 1), `&`, ``, 1) + "," + } + repeatedStringForMatchConditions += "}" + s := strings.Join([]string{`&ValidatingWebhook{`, + `Name:` + fmt.Sprintf("%v", this.Name) + `,`, `ClientConfig:` + strings.Replace(strings.Replace(this.ClientConfig.String(), "WebhookClientConfig", "WebhookClientConfig", 1), `&`, ``, 1) + `,`, `Rules:` + repeatedStringForRules + `,`, `FailurePolicy:` + valueToStringGenerated(this.FailurePolicy) + `,`, - `NamespaceSelector:` + strings.Replace(fmt.Sprintf("%v", this.NamespaceSelector), "LabelSelector", "v11.LabelSelector", 1) + `,`, + `NamespaceSelector:` + strings.Replace(fmt.Sprintf("%v", this.NamespaceSelector), "LabelSelector", "v1.LabelSelector", 1) + `,`, `SideEffects:` + valueToStringGenerated(this.SideEffects) + `,`, `TimeoutSeconds:` + valueToStringGenerated(this.TimeoutSeconds) 
+ `,`, `AdmissionReviewVersions:` + fmt.Sprintf("%v", this.AdmissionReviewVersions) + `,`, `MatchPolicy:` + valueToStringGenerated(this.MatchPolicy) + `,`, - `ObjectSelector:` + strings.Replace(fmt.Sprintf("%v", this.ObjectSelector), "LabelSelector", "v11.LabelSelector", 1) + `,`, + `ObjectSelector:` + strings.Replace(fmt.Sprintf("%v", this.ObjectSelector), "LabelSelector", "v1.LabelSelector", 1) + `,`, `MatchConditions:` + repeatedStringForMatchConditions + `,`, `}`, }, "") return s } -func (this *ValidatingWebhookConfiguration) String() string { - if this == nil { - return "nil" +func (this *ValidatingWebhookConfiguration) String() string { + if this == nil { + return "nil" + } + repeatedStringForWebhooks := "[]ValidatingWebhook{" + for _, f := range this.Webhooks { + repeatedStringForWebhooks += strings.Replace(strings.Replace(f.String(), "ValidatingWebhook", "ValidatingWebhook", 1), `&`, ``, 1) + "," + } + repeatedStringForWebhooks += "}" + s := strings.Join([]string{`&ValidatingWebhookConfiguration{`, + `ObjectMeta:` + strings.Replace(strings.Replace(fmt.Sprintf("%v", this.ObjectMeta), "ObjectMeta", "v1.ObjectMeta", 1), `&`, ``, 1) + `,`, + `Webhooks:` + repeatedStringForWebhooks + `,`, + `}`, + }, "") + return s +} +func (this *ValidatingWebhookConfigurationList) String() string { + if this == nil { + return "nil" + } + repeatedStringForItems := "[]ValidatingWebhookConfiguration{" + for _, f := range this.Items { + repeatedStringForItems += strings.Replace(strings.Replace(f.String(), "ValidatingWebhookConfiguration", "ValidatingWebhookConfiguration", 1), `&`, ``, 1) + "," + } + repeatedStringForItems += "}" + s := strings.Join([]string{`&ValidatingWebhookConfigurationList{`, + `ListMeta:` + strings.Replace(strings.Replace(fmt.Sprintf("%v", this.ListMeta), "ListMeta", "v1.ListMeta", 1), `&`, ``, 1) + `,`, + `Items:` + repeatedStringForItems + `,`, + `}`, + }, "") + return s +} +func (this *Validation) String() string { + if this == nil { + return "nil" + } + s 
:= strings.Join([]string{`&Validation{`, + `Expression:` + fmt.Sprintf("%v", this.Expression) + `,`, + `Message:` + fmt.Sprintf("%v", this.Message) + `,`, + `Reason:` + valueToStringGenerated(this.Reason) + `,`, + `MessageExpression:` + fmt.Sprintf("%v", this.MessageExpression) + `,`, + `}`, + }, "") + return s +} +func (this *Variable) String() string { + if this == nil { + return "nil" + } + s := strings.Join([]string{`&Variable{`, + `Name:` + fmt.Sprintf("%v", this.Name) + `,`, + `Expression:` + fmt.Sprintf("%v", this.Expression) + `,`, + `}`, + }, "") + return s +} +func (this *WebhookClientConfig) String() string { + if this == nil { + return "nil" + } + s := strings.Join([]string{`&WebhookClientConfig{`, + `Service:` + strings.Replace(this.Service.String(), "ServiceReference", "ServiceReference", 1) + `,`, + `CABundle:` + valueToStringGenerated(this.CABundle) + `,`, + `URL:` + valueToStringGenerated(this.URL) + `,`, + `}`, + }, "") + return s +} +func valueToStringGenerated(v interface{}) string { + rv := reflect.ValueOf(v) + if rv.IsNil() { + return "nil" + } + pv := reflect.Indirect(rv).Interface() + return fmt.Sprintf("*%v", pv) +} +func (m *AuditAnnotation) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowGenerated + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: AuditAnnotation: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: AuditAnnotation: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Key", wireType) + } + var stringLen uint64 + for shift := 
uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowGenerated + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthGenerated + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthGenerated + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Key = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field ValueExpression", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowGenerated + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthGenerated + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthGenerated + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.ValueExpression = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipGenerated(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthGenerated + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *ExpressionWarning) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowGenerated + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := 
int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: ExpressionWarning: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: ExpressionWarning: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field FieldRef", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowGenerated + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthGenerated + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthGenerated + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.FieldRef = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 3: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Warning", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowGenerated + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthGenerated + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthGenerated + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Warning = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipGenerated(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthGenerated + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + iNdEx += skippy + } + } + + if iNdEx > l { + return 
io.ErrUnexpectedEOF + } + return nil +} +func (m *MatchCondition) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowGenerated + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: MatchCondition: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: MatchCondition: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Name", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowGenerated + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthGenerated + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthGenerated + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Name = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Expression", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowGenerated + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthGenerated + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthGenerated + } + if 
postIndex > l { + return io.ErrUnexpectedEOF + } + m.Expression = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipGenerated(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthGenerated + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *MatchResources) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowGenerated + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: MatchResources: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: MatchResources: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field NamespaceSelector", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowGenerated + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthGenerated + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthGenerated + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if m.NamespaceSelector == nil { + m.NamespaceSelector = &v1.LabelSelector{} + } + if err := m.NamespaceSelector.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong 
wireType = %d for field ObjectSelector", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowGenerated + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthGenerated + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthGenerated + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if m.ObjectSelector == nil { + m.ObjectSelector = &v1.LabelSelector{} + } + if err := m.ObjectSelector.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 3: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field ResourceRules", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowGenerated + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthGenerated + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthGenerated + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.ResourceRules = append(m.ResourceRules, NamedRuleWithOperations{}) + if err := m.ResourceRules[len(m.ResourceRules)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 4: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field ExcludeResourceRules", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowGenerated + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthGenerated + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return 
ErrInvalidLengthGenerated + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.ExcludeResourceRules = append(m.ExcludeResourceRules, NamedRuleWithOperations{}) + if err := m.ExcludeResourceRules[len(m.ExcludeResourceRules)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 7: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field MatchPolicy", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowGenerated + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthGenerated + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthGenerated + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + s := MatchPolicyType(dAtA[iNdEx:postIndex]) + m.MatchPolicy = &s + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipGenerated(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthGenerated + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *MutatingWebhook) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowGenerated + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: MutatingWebhook: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: 
MutatingWebhook: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Name", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowGenerated + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthGenerated + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthGenerated + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Name = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field ClientConfig", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowGenerated + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthGenerated + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthGenerated + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.ClientConfig.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 3: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Rules", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowGenerated + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthGenerated + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthGenerated + } + if postIndex > l { + 
return io.ErrUnexpectedEOF + } + m.Rules = append(m.Rules, v11.RuleWithOperations{}) + if err := m.Rules[len(m.Rules)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 4: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field FailurePolicy", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowGenerated + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthGenerated + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthGenerated + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + s := FailurePolicyType(dAtA[iNdEx:postIndex]) + m.FailurePolicy = &s + iNdEx = postIndex + case 5: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field NamespaceSelector", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowGenerated + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthGenerated + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthGenerated + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if m.NamespaceSelector == nil { + m.NamespaceSelector = &v1.LabelSelector{} + } + if err := m.NamespaceSelector.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 6: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field SideEffects", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowGenerated + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b 
:= dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthGenerated + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthGenerated + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + s := SideEffectClass(dAtA[iNdEx:postIndex]) + m.SideEffects = &s + iNdEx = postIndex + case 7: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field TimeoutSeconds", wireType) + } + var v int32 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowGenerated + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + v |= int32(b&0x7F) << shift + if b < 0x80 { + break + } + } + m.TimeoutSeconds = &v + case 8: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field AdmissionReviewVersions", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowGenerated + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthGenerated + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthGenerated + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.AdmissionReviewVersions = append(m.AdmissionReviewVersions, string(dAtA[iNdEx:postIndex])) + iNdEx = postIndex + case 9: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field MatchPolicy", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowGenerated + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := 
int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthGenerated + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthGenerated + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + s := MatchPolicyType(dAtA[iNdEx:postIndex]) + m.MatchPolicy = &s + iNdEx = postIndex + case 10: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field ReinvocationPolicy", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowGenerated + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthGenerated + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthGenerated + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + s := ReinvocationPolicyType(dAtA[iNdEx:postIndex]) + m.ReinvocationPolicy = &s + iNdEx = postIndex + case 11: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field ObjectSelector", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowGenerated + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthGenerated + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthGenerated + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if m.ObjectSelector == nil { + m.ObjectSelector = &v1.LabelSelector{} + } + if err := m.ObjectSelector.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 12: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field MatchConditions", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 
{ + if shift >= 64 { + return ErrIntOverflowGenerated + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthGenerated + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthGenerated + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.MatchConditions = append(m.MatchConditions, MatchCondition{}) + if err := m.MatchConditions[len(m.MatchConditions)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipGenerated(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthGenerated + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *MutatingWebhookConfiguration) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowGenerated + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: MutatingWebhookConfiguration: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: MutatingWebhookConfiguration: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field ObjectMeta", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowGenerated + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + 
msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthGenerated + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthGenerated + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.ObjectMeta.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Webhooks", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowGenerated + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthGenerated + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthGenerated + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Webhooks = append(m.Webhooks, MutatingWebhook{}) + if err := m.Webhooks[len(m.Webhooks)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipGenerated(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthGenerated + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *MutatingWebhookConfigurationList) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowGenerated + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: 
MutatingWebhookConfigurationList: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: MutatingWebhookConfigurationList: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field ListMeta", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowGenerated + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthGenerated + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthGenerated + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.ListMeta.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Items", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowGenerated + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthGenerated + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthGenerated + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Items = append(m.Items, MutatingWebhookConfiguration{}) + if err := m.Items[len(m.Items)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipGenerated(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthGenerated + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil 
+} +func (m *NamedRuleWithOperations) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowGenerated + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: NamedRuleWithOperations: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: NamedRuleWithOperations: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field ResourceNames", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowGenerated + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthGenerated + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthGenerated + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.ResourceNames = append(m.ResourceNames, string(dAtA[iNdEx:postIndex])) + iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field RuleWithOperations", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowGenerated + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthGenerated + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthGenerated + } + if postIndex > l { + 
return io.ErrUnexpectedEOF + } + if err := m.RuleWithOperations.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipGenerated(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthGenerated + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *ParamKind) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowGenerated + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: ParamKind: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: ParamKind: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field APIVersion", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowGenerated + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthGenerated + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthGenerated + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.APIVersion = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Kind", wireType) + } + var stringLen uint64 + 
for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowGenerated + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthGenerated + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthGenerated + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Kind = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipGenerated(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthGenerated + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + iNdEx += skippy + } } - repeatedStringForWebhooks := "[]ValidatingWebhook{" - for _, f := range this.Webhooks { - repeatedStringForWebhooks += strings.Replace(strings.Replace(f.String(), "ValidatingWebhook", "ValidatingWebhook", 1), `&`, ``, 1) + "," + + if iNdEx > l { + return io.ErrUnexpectedEOF } - repeatedStringForWebhooks += "}" - s := strings.Join([]string{`&ValidatingWebhookConfiguration{`, - `ObjectMeta:` + strings.Replace(strings.Replace(fmt.Sprintf("%v", this.ObjectMeta), "ObjectMeta", "v11.ObjectMeta", 1), `&`, ``, 1) + `,`, - `Webhooks:` + repeatedStringForWebhooks + `,`, - `}`, - }, "") - return s + return nil } -func (this *ValidatingWebhookConfigurationList) String() string { - if this == nil { - return "nil" +func (m *ParamRef) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowGenerated + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { 
+ return fmt.Errorf("proto: ParamRef: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: ParamRef: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Name", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowGenerated + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthGenerated + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthGenerated + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Name = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Namespace", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowGenerated + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthGenerated + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthGenerated + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Namespace = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 3: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Selector", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowGenerated + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return 
ErrInvalidLengthGenerated + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthGenerated + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if m.Selector == nil { + m.Selector = &v1.LabelSelector{} + } + if err := m.Selector.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 4: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field ParameterNotFoundAction", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowGenerated + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthGenerated + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthGenerated + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + s := ParameterNotFoundActionType(dAtA[iNdEx:postIndex]) + m.ParameterNotFoundAction = &s + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipGenerated(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthGenerated + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + iNdEx += skippy + } } - repeatedStringForItems := "[]ValidatingWebhookConfiguration{" - for _, f := range this.Items { - repeatedStringForItems += strings.Replace(strings.Replace(f.String(), "ValidatingWebhookConfiguration", "ValidatingWebhookConfiguration", 1), `&`, ``, 1) + "," + + if iNdEx > l { + return io.ErrUnexpectedEOF } - repeatedStringForItems += "}" - s := strings.Join([]string{`&ValidatingWebhookConfigurationList{`, - `ListMeta:` + strings.Replace(strings.Replace(fmt.Sprintf("%v", this.ListMeta), "ListMeta", "v11.ListMeta", 1), `&`, ``, 1) + `,`, - `Items:` + repeatedStringForItems + `,`, - `}`, - }, "") - return s + return 
nil } -func (this *WebhookClientConfig) String() string { - if this == nil { - return "nil" +func (m *ServiceReference) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowGenerated + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: ServiceReference: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: ServiceReference: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Namespace", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowGenerated + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthGenerated + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthGenerated + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Namespace = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Name", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowGenerated + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthGenerated + } + postIndex := iNdEx + intStringLen + if 
postIndex < 0 { + return ErrInvalidLengthGenerated + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Name = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 3: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Path", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowGenerated + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthGenerated + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthGenerated + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + s := string(dAtA[iNdEx:postIndex]) + m.Path = &s + iNdEx = postIndex + case 4: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field Port", wireType) + } + var v int32 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowGenerated + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + v |= int32(b&0x7F) << shift + if b < 0x80 { + break + } + } + m.Port = &v + default: + iNdEx = preIndex + skippy, err := skipGenerated(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthGenerated + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + iNdEx += skippy + } } - s := strings.Join([]string{`&WebhookClientConfig{`, - `Service:` + strings.Replace(this.Service.String(), "ServiceReference", "ServiceReference", 1) + `,`, - `CABundle:` + valueToStringGenerated(this.CABundle) + `,`, - `URL:` + valueToStringGenerated(this.URL) + `,`, - `}`, - }, "") - return s -} -func valueToStringGenerated(v interface{}) string { - rv := reflect.ValueOf(v) - if rv.IsNil() { - return "nil" + + if iNdEx > l { + return io.ErrUnexpectedEOF } - pv := 
reflect.Indirect(rv).Interface() - return fmt.Sprintf("*%v", pv) + return nil } -func (m *MatchCondition) Unmarshal(dAtA []byte) error { +func (m *TypeChecking) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -1398,17 +5091,17 @@ func (m *MatchCondition) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: MatchCondition: wiretype end group for non-group") + return fmt.Errorf("proto: TypeChecking: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: MatchCondition: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: TypeChecking: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Name", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ExpressionWarnings", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowGenerated @@ -1418,55 +5111,25 @@ func (m *MatchCondition) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthGenerated } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthGenerated } if postIndex > l { return io.ErrUnexpectedEOF } - m.Name = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex - case 2: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Expression", wireType) - } - var stringLen uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowGenerated - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - stringLen |= uint64(b&0x7F) << shift - if b < 0x80 { - 
break - } - } - intStringLen := int(stringLen) - if intStringLen < 0 { - return ErrInvalidLengthGenerated - } - postIndex := iNdEx + intStringLen - if postIndex < 0 { - return ErrInvalidLengthGenerated - } - if postIndex > l { - return io.ErrUnexpectedEOF + m.ExpressionWarnings = append(m.ExpressionWarnings, ExpressionWarning{}) + if err := m.ExpressionWarnings[len(m.ExpressionWarnings)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err } - m.Expression = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex default: iNdEx = preIndex @@ -1489,7 +5152,7 @@ func (m *MatchCondition) Unmarshal(dAtA []byte) error { } return nil } -func (m *MutatingWebhook) Unmarshal(dAtA []byte) error { +func (m *ValidatingAdmissionPolicy) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -1512,17 +5175,17 @@ func (m *MutatingWebhook) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: MutatingWebhook: wiretype end group for non-group") + return fmt.Errorf("proto: ValidatingAdmissionPolicy: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: MutatingWebhook: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: ValidatingAdmissionPolicy: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Name", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ObjectMeta", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowGenerated @@ -1532,27 +5195,28 @@ func (m *MutatingWebhook) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthGenerated } - 
postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthGenerated } if postIndex > l { return io.ErrUnexpectedEOF } - m.Name = string(dAtA[iNdEx:postIndex]) + if err := m.ObjectMeta.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ClientConfig", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Spec", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -1579,13 +5243,13 @@ func (m *MutatingWebhook) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ClientConfig.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.Spec.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Rules", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -1612,16 +5276,65 @@ func (m *MutatingWebhook) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.Rules = append(m.Rules, v1.RuleWithOperations{}) - if err := m.Rules[len(m.Rules)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 4: + default: + iNdEx = preIndex + skippy, err := skipGenerated(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthGenerated + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *ValidatingAdmissionPolicyBinding) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift 
+= 7 { + if shift >= 64 { + return ErrIntOverflowGenerated + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: ValidatingAdmissionPolicyBinding: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: ValidatingAdmissionPolicyBinding: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field FailurePolicy", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ObjectMeta", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowGenerated @@ -1631,28 +5344,28 @@ func (m *MutatingWebhook) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthGenerated } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthGenerated } if postIndex > l { return io.ErrUnexpectedEOF } - s := FailurePolicyType(dAtA[iNdEx:postIndex]) - m.FailurePolicy = &s + if err := m.ObjectMeta.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex - case 5: + case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field NamespaceSelector", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Spec", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -1679,18 +5392,65 @@ func (m *MutatingWebhook) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if m.NamespaceSelector == nil { - m.NamespaceSelector = &v11.LabelSelector{} - } - 
if err := m.NamespaceSelector.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.Spec.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 6: + default: + iNdEx = preIndex + skippy, err := skipGenerated(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthGenerated + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *ValidatingAdmissionPolicyBindingList) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowGenerated + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: ValidatingAdmissionPolicyBindingList: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: ValidatingAdmissionPolicyBindingList: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field SideEffects", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ListMeta", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowGenerated @@ -1700,50 +5460,30 @@ func (m *MutatingWebhook) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthGenerated } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { 
return ErrInvalidLengthGenerated } if postIndex > l { return io.ErrUnexpectedEOF } - s := SideEffectClass(dAtA[iNdEx:postIndex]) - m.SideEffects = &s - iNdEx = postIndex - case 7: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field TimeoutSeconds", wireType) - } - var v int32 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowGenerated - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - v |= int32(b&0x7F) << shift - if b < 0x80 { - break - } + if err := m.ListMeta.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err } - m.TimeoutSeconds = &v - case 8: + iNdEx = postIndex + case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field AdmissionReviewVersions", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Items", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowGenerated @@ -1753,27 +5493,79 @@ func (m *MutatingWebhook) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthGenerated } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthGenerated } - if postIndex > l { + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Items = append(m.Items, ValidatingAdmissionPolicyBinding{}) + if err := m.Items[len(m.Items)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipGenerated(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthGenerated + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + iNdEx += skippy + } + } + + if iNdEx > l { + 
return io.ErrUnexpectedEOF + } + return nil +} +func (m *ValidatingAdmissionPolicyBindingSpec) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowGenerated + } + if iNdEx >= l { return io.ErrUnexpectedEOF } - m.AdmissionReviewVersions = append(m.AdmissionReviewVersions, string(dAtA[iNdEx:postIndex])) - iNdEx = postIndex - case 9: + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: ValidatingAdmissionPolicyBindingSpec: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: ValidatingAdmissionPolicyBindingSpec: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field MatchPolicy", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field PolicyName", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -1801,14 +5593,13 @@ func (m *MutatingWebhook) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - s := MatchPolicyType(dAtA[iNdEx:postIndex]) - m.MatchPolicy = &s + m.PolicyName = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex - case 10: + case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ReinvocationPolicy", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ParamRef", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowGenerated @@ -1818,28 +5609,31 @@ func (m *MutatingWebhook) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := 
int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthGenerated } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthGenerated } if postIndex > l { return io.ErrUnexpectedEOF } - s := ReinvocationPolicyType(dAtA[iNdEx:postIndex]) - m.ReinvocationPolicy = &s + if m.ParamRef == nil { + m.ParamRef = &ParamRef{} + } + if err := m.ParamRef.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex - case 11: + case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ObjectSelector", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field MatchResources", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -1866,18 +5660,18 @@ func (m *MutatingWebhook) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if m.ObjectSelector == nil { - m.ObjectSelector = &v11.LabelSelector{} + if m.MatchResources == nil { + m.MatchResources = &MatchResources{} } - if err := m.ObjectSelector.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.MatchResources.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 12: + case 4: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field MatchConditions", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ValidationActions", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowGenerated @@ -1887,25 +5681,23 @@ func (m *MutatingWebhook) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthGenerated } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthGenerated } 
if postIndex > l { return io.ErrUnexpectedEOF } - m.MatchConditions = append(m.MatchConditions, MatchCondition{}) - if err := m.MatchConditions[len(m.MatchConditions)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.ValidationActions = append(m.ValidationActions, ValidationAction(dAtA[iNdEx:postIndex])) iNdEx = postIndex default: iNdEx = preIndex @@ -1928,7 +5720,7 @@ func (m *MutatingWebhook) Unmarshal(dAtA []byte) error { } return nil } -func (m *MutatingWebhookConfiguration) Unmarshal(dAtA []byte) error { +func (m *ValidatingAdmissionPolicyList) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -1951,15 +5743,15 @@ func (m *MutatingWebhookConfiguration) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: MutatingWebhookConfiguration: wiretype end group for non-group") + return fmt.Errorf("proto: ValidatingAdmissionPolicyList: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: MutatingWebhookConfiguration: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: ValidatingAdmissionPolicyList: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ObjectMeta", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ListMeta", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -1986,13 +5778,13 @@ func (m *MutatingWebhookConfiguration) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ObjectMeta.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.ListMeta.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Webhooks", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Items", wireType) 
} var msglen int for shift := uint(0); ; shift += 7 { @@ -2019,8 +5811,8 @@ func (m *MutatingWebhookConfiguration) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.Webhooks = append(m.Webhooks, MutatingWebhook{}) - if err := m.Webhooks[len(m.Webhooks)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + m.Items = append(m.Items, ValidatingAdmissionPolicy{}) + if err := m.Items[len(m.Items)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex @@ -2045,7 +5837,7 @@ func (m *MutatingWebhookConfiguration) Unmarshal(dAtA []byte) error { } return nil } -func (m *MutatingWebhookConfigurationList) Unmarshal(dAtA []byte) error { +func (m *ValidatingAdmissionPolicySpec) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -2068,15 +5860,15 @@ func (m *MutatingWebhookConfigurationList) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: MutatingWebhookConfigurationList: wiretype end group for non-group") + return fmt.Errorf("proto: ValidatingAdmissionPolicySpec: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: MutatingWebhookConfigurationList: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: ValidatingAdmissionPolicySpec: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ListMeta", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ParamKind", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -2103,13 +5895,187 @@ func (m *MutatingWebhookConfigurationList) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ListMeta.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if m.ParamKind == nil { + m.ParamKind = &ParamKind{} + } + if err := m.ParamKind.Unmarshal(dAtA[iNdEx:postIndex]); 
err != nil { return err } iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Items", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field MatchConstraints", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowGenerated + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthGenerated + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthGenerated + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if m.MatchConstraints == nil { + m.MatchConstraints = &MatchResources{} + } + if err := m.MatchConstraints.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 3: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Validations", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowGenerated + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthGenerated + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthGenerated + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Validations = append(m.Validations, Validation{}) + if err := m.Validations[len(m.Validations)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 4: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field FailurePolicy", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowGenerated + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << 
shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthGenerated + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthGenerated + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + s := FailurePolicyType(dAtA[iNdEx:postIndex]) + m.FailurePolicy = &s + iNdEx = postIndex + case 5: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field AuditAnnotations", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowGenerated + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthGenerated + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthGenerated + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.AuditAnnotations = append(m.AuditAnnotations, AuditAnnotation{}) + if err := m.AuditAnnotations[len(m.AuditAnnotations)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 6: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field MatchConditions", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowGenerated + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthGenerated + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthGenerated + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.MatchConditions = append(m.MatchConditions, MatchCondition{}) + if err := m.MatchConditions[len(m.MatchConditions)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 7: + if wireType != 2 { + return 
fmt.Errorf("proto: wrong wireType = %d for field Variables", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -2136,8 +6102,8 @@ func (m *MutatingWebhookConfigurationList) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.Items = append(m.Items, MutatingWebhookConfiguration{}) - if err := m.Items[len(m.Items)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + m.Variables = append(m.Variables, Variable{}) + if err := m.Variables[len(m.Variables)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex @@ -2162,7 +6128,7 @@ func (m *MutatingWebhookConfigurationList) Unmarshal(dAtA []byte) error { } return nil } -func (m *ServiceReference) Unmarshal(dAtA []byte) error { +func (m *ValidatingAdmissionPolicyStatus) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -2185,17 +6151,17 @@ func (m *ServiceReference) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: ServiceReference: wiretype end group for non-group") + return fmt.Errorf("proto: ValidatingAdmissionPolicyStatus: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: ServiceReference: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: ValidatingAdmissionPolicyStatus: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Namespace", wireType) + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field ObservedGeneration", wireType) } - var stringLen uint64 + m.ObservedGeneration = 0 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowGenerated @@ -2205,29 +6171,16 @@ func (m *ServiceReference) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + m.ObservedGeneration |= int64(b&0x7F) << shift 
if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { - return ErrInvalidLengthGenerated - } - postIndex := iNdEx + intStringLen - if postIndex < 0 { - return ErrInvalidLengthGenerated - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - m.Namespace = string(dAtA[iNdEx:postIndex]) - iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Name", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field TypeChecking", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowGenerated @@ -2237,29 +6190,33 @@ func (m *ServiceReference) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthGenerated } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthGenerated } if postIndex > l { return io.ErrUnexpectedEOF } - m.Name = string(dAtA[iNdEx:postIndex]) + if m.TypeChecking == nil { + m.TypeChecking = &TypeChecking{} + } + if err := m.TypeChecking.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Path", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Conditions", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowGenerated @@ -2269,45 +6226,26 @@ func (m *ServiceReference) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthGenerated } - postIndex := iNdEx + 
intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthGenerated } if postIndex > l { return io.ErrUnexpectedEOF } - s := string(dAtA[iNdEx:postIndex]) - m.Path = &s - iNdEx = postIndex - case 4: - if wireType != 0 { - return fmt.Errorf("proto: wrong wireType = %d for field Port", wireType) - } - var v int32 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowGenerated - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - v |= int32(b&0x7F) << shift - if b < 0x80 { - break - } + m.Conditions = append(m.Conditions, v1.Condition{}) + if err := m.Conditions[len(m.Conditions)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err } - m.Port = &v + iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipGenerated(dAtA[iNdEx:]) @@ -2452,7 +6390,7 @@ func (m *ValidatingWebhook) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.Rules = append(m.Rules, v1.RuleWithOperations{}) + m.Rules = append(m.Rules, v11.RuleWithOperations{}) if err := m.Rules[len(m.Rules)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } @@ -2520,7 +6458,7 @@ func (m *ValidatingWebhook) Unmarshal(dAtA []byte) error { return io.ErrUnexpectedEOF } if m.NamespaceSelector == nil { - m.NamespaceSelector = &v11.LabelSelector{} + m.NamespaceSelector = &v1.LabelSelector{} } if err := m.NamespaceSelector.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err @@ -2563,27 +6501,212 @@ func (m *ValidatingWebhook) Unmarshal(dAtA []byte) error { if wireType != 0 { return fmt.Errorf("proto: wrong wireType = %d for field TimeoutSeconds", wireType) } - var v int32 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowGenerated - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - v |= int32(b&0x7F) << shift - if b < 0x80 { - break - } + var v int32 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + 
return ErrIntOverflowGenerated + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + v |= int32(b&0x7F) << shift + if b < 0x80 { + break + } + } + m.TimeoutSeconds = &v + case 8: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field AdmissionReviewVersions", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowGenerated + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthGenerated + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthGenerated + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.AdmissionReviewVersions = append(m.AdmissionReviewVersions, string(dAtA[iNdEx:postIndex])) + iNdEx = postIndex + case 9: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field MatchPolicy", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowGenerated + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthGenerated + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthGenerated + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + s := MatchPolicyType(dAtA[iNdEx:postIndex]) + m.MatchPolicy = &s + iNdEx = postIndex + case 10: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field ObjectSelector", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowGenerated + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + 
msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthGenerated + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthGenerated + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if m.ObjectSelector == nil { + m.ObjectSelector = &v1.LabelSelector{} + } + if err := m.ObjectSelector.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 11: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field MatchConditions", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowGenerated + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthGenerated + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthGenerated + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.MatchConditions = append(m.MatchConditions, MatchCondition{}) + if err := m.MatchConditions[len(m.MatchConditions)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipGenerated(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthGenerated + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *ValidatingWebhookConfiguration) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowGenerated + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break } - m.TimeoutSeconds = 
&v - case 8: + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: ValidatingWebhookConfiguration: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: ValidatingWebhookConfiguration: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field AdmissionReviewVersions", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ObjectMeta", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowGenerated @@ -2593,29 +6716,30 @@ func (m *ValidatingWebhook) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthGenerated } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthGenerated } if postIndex > l { return io.ErrUnexpectedEOF } - m.AdmissionReviewVersions = append(m.AdmissionReviewVersions, string(dAtA[iNdEx:postIndex])) + if err := m.ObjectMeta.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex - case 9: + case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field MatchPolicy", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Webhooks", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowGenerated @@ -2625,28 +6749,79 @@ func (m *ValidatingWebhook) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= uint64(b&0x7F) << shift + msglen |= int(b&0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return 
ErrInvalidLengthGenerated } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex < 0 { return ErrInvalidLengthGenerated } if postIndex > l { return io.ErrUnexpectedEOF } - s := MatchPolicyType(dAtA[iNdEx:postIndex]) - m.MatchPolicy = &s + m.Webhooks = append(m.Webhooks, ValidatingWebhook{}) + if err := m.Webhooks[len(m.Webhooks)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex - case 10: + default: + iNdEx = preIndex + skippy, err := skipGenerated(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthGenerated + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *ValidatingWebhookConfigurationList) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowGenerated + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: ValidatingWebhookConfigurationList: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: ValidatingWebhookConfigurationList: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ObjectSelector", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field ListMeta", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -2673,16 +6848,13 @@ func (m *ValidatingWebhook) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if m.ObjectSelector == nil { - m.ObjectSelector = &v11.LabelSelector{} - } 
- if err := m.ObjectSelector.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.ListMeta.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 11: + case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field MatchConditions", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Items", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -2709,8 +6881,8 @@ func (m *ValidatingWebhook) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.MatchConditions = append(m.MatchConditions, MatchCondition{}) - if err := m.MatchConditions[len(m.MatchConditions)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + m.Items = append(m.Items, ValidatingWebhookConfiguration{}) + if err := m.Items[len(m.Items)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex @@ -2735,7 +6907,7 @@ func (m *ValidatingWebhook) Unmarshal(dAtA []byte) error { } return nil } -func (m *ValidatingWebhookConfiguration) Unmarshal(dAtA []byte) error { +func (m *Validation) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -2758,17 +6930,17 @@ func (m *ValidatingWebhookConfiguration) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: ValidatingWebhookConfiguration: wiretype end group for non-group") + return fmt.Errorf("proto: Validation: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: ValidatingWebhookConfiguration: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: Validation: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ObjectMeta", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Expression", wireType) } - var msglen int + var stringLen uint64 for shift := 
uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowGenerated @@ -2778,30 +6950,29 @@ func (m *ValidatingWebhookConfiguration) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthGenerated } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthGenerated } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ObjectMeta.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.Expression = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Webhooks", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Message", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowGenerated @@ -2811,25 +6982,88 @@ func (m *ValidatingWebhookConfiguration) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthGenerated } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthGenerated } if postIndex > l { return io.ErrUnexpectedEOF } - m.Webhooks = append(m.Webhooks, ValidatingWebhook{}) - if err := m.Webhooks[len(m.Webhooks)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err + m.Message = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 3: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Reason", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowGenerated + } + if iNdEx >= l { 
+ return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthGenerated + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthGenerated + } + if postIndex > l { + return io.ErrUnexpectedEOF } + s := k8s_io_apimachinery_pkg_apis_meta_v1.StatusReason(dAtA[iNdEx:postIndex]) + m.Reason = &s + iNdEx = postIndex + case 4: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field MessageExpression", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowGenerated + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthGenerated + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthGenerated + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.MessageExpression = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex default: iNdEx = preIndex @@ -2852,7 +7086,7 @@ func (m *ValidatingWebhookConfiguration) Unmarshal(dAtA []byte) error { } return nil } -func (m *ValidatingWebhookConfigurationList) Unmarshal(dAtA []byte) error { +func (m *Variable) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -2875,17 +7109,17 @@ func (m *ValidatingWebhookConfigurationList) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: ValidatingWebhookConfigurationList: wiretype end group for non-group") + return fmt.Errorf("proto: Variable: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: ValidatingWebhookConfigurationList: illegal tag %d (wire type %d)", fieldNum, wire) + return 
fmt.Errorf("proto: Variable: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ListMeta", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Name", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowGenerated @@ -2895,30 +7129,29 @@ func (m *ValidatingWebhookConfigurationList) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthGenerated } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthGenerated } if postIndex > l { return io.ErrUnexpectedEOF } - if err := m.ListMeta.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.Name = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Items", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Expression", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowGenerated @@ -2928,25 +7161,23 @@ func (m *ValidatingWebhookConfigurationList) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= int(b&0x7F) << shift + stringLen |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthGenerated } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex < 0 { return ErrInvalidLengthGenerated } if postIndex > l { return io.ErrUnexpectedEOF } - m.Items = append(m.Items, ValidatingWebhookConfiguration{}) - if err := m.Items[len(m.Items)-1].Unmarshal(dAtA[iNdEx:postIndex]); err 
!= nil { - return err - } + m.Expression = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex default: iNdEx = preIndex diff --git a/cluster-autoscaler/vendor/k8s.io/api/admissionregistration/v1beta1/generated.proto b/cluster-autoscaler/vendor/k8s.io/api/admissionregistration/v1beta1/generated.proto index cfd759285417..1855cdfc4f7e 100644 --- a/cluster-autoscaler/vendor/k8s.io/api/admissionregistration/v1beta1/generated.proto +++ b/cluster-autoscaler/vendor/k8s.io/api/admissionregistration/v1beta1/generated.proto @@ -29,6 +29,56 @@ import "k8s.io/apimachinery/pkg/runtime/schema/generated.proto"; // Package-wide variables from generator "generated". option go_package = "k8s.io/api/admissionregistration/v1beta1"; +// AuditAnnotation describes how to produce an audit annotation for an API request. +message AuditAnnotation { + // key specifies the audit annotation key. The audit annotation keys of + // a ValidatingAdmissionPolicy must be unique. The key must be a qualified + // name ([A-Za-z0-9][-A-Za-z0-9_.]*) no more than 63 bytes in length. + // + // The key is combined with the resource name of the + // ValidatingAdmissionPolicy to construct an audit annotation key: + // "{ValidatingAdmissionPolicy name}/{key}". + // + // If an admission webhook uses the same resource name as this ValidatingAdmissionPolicy + // and the same audit annotation key, the annotation key will be identical. + // In this case, the first annotation written with the key will be included + // in the audit event and all subsequent annotations with the same key + // will be discarded. + // + // Required. + optional string key = 1; + + // valueExpression represents the expression which is evaluated by CEL to + // produce an audit annotation value. The expression must evaluate to either + // a string or null value. If the expression evaluates to a string, the + // audit annotation is included with the string value. 
If the expression + // evaluates to null or empty string the audit annotation will be omitted. + // The valueExpression may be no longer than 5kb in length. + // If the result of the valueExpression is more than 10kb in length, it + // will be truncated to 10kb. + // + // If multiple ValidatingAdmissionPolicyBinding resources match an + // API request, then the valueExpression will be evaluated for + // each binding. All unique values produced by the valueExpressions + // will be joined together in a comma-separated list. + // + // Required. + optional string valueExpression = 2; +} + +// ExpressionWarning is warning information that targets a specific expression. +message ExpressionWarning { + // The path to the field that refers to the expression. + // For example, the reference to the expression of the first item of + // validations is "spec.validations[0].expression" + optional string fieldRef = 2; + + // The content of type checking information in a human-readable form. + // Each line of the warning contains the type that the expression is checked + // against, followed by the type check error from the compiler. + optional string warning = 3; +} + // MatchCondition represents a condition which must be fulfilled for a request to be sent to a webhook. message MatchCondition { // Name is an identifier for this match condition, used for strategic merging of MatchConditions, @@ -58,6 +108,101 @@ message MatchCondition { optional string expression = 2; } +// MatchResources decides whether to run the admission control policy on an object based +// on whether it meets the match criteria. +// The exclude rules take precedence over include rules (if a resource matches both, it is excluded) +// +structType=atomic +message MatchResources { + // NamespaceSelector decides whether to run the admission control policy on an object based + // on whether the namespace for that object matches the selector.
If the + // object itself is a namespace, the matching is performed on + // object.metadata.labels. If the object is another cluster scoped resource, + // it never skips the policy. + // + // For example, to run the webhook on any objects whose namespace is not + // associated with "runlevel" of "0" or "1"; you will set the selector as + // follows: + // "namespaceSelector": { + // "matchExpressions": [ + // { + // "key": "runlevel", + // "operator": "NotIn", + // "values": [ + // "0", + // "1" + // ] + // } + // ] + // } + // + // If instead you want to only run the policy on any objects whose + // namespace is associated with the "environment" of "prod" or "staging"; + // you will set the selector as follows: + // "namespaceSelector": { + // "matchExpressions": [ + // { + // "key": "environment", + // "operator": "In", + // "values": [ + // "prod", + // "staging" + // ] + // } + // ] + // } + // + // See + // https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/ + // for more examples of label selectors. + // + // Default to the empty LabelSelector, which matches everything. + // +optional + optional k8s.io.apimachinery.pkg.apis.meta.v1.LabelSelector namespaceSelector = 1; + + // ObjectSelector decides whether to run the validation based on if the + // object has matching labels. objectSelector is evaluated against both + // the oldObject and newObject that would be sent to the cel validation, and + // is considered to match if either object matches the selector. A null + // object (oldObject in the case of create, or newObject in the case of + // delete) or an object that cannot have labels (like a + // DeploymentRollback or a PodProxyOptions object) is not considered to + // match. + // Use the object selector only if the webhook is opt-in, because end + // users may skip the admission webhook by setting the labels. + // Default to the empty LabelSelector, which matches everything. 
+ // +optional + optional k8s.io.apimachinery.pkg.apis.meta.v1.LabelSelector objectSelector = 2; + + // ResourceRules describes what operations on what resources/subresources the ValidatingAdmissionPolicy matches. + // The policy cares about an operation if it matches _any_ Rule. + // +listType=atomic + // +optional + repeated NamedRuleWithOperations resourceRules = 3; + + // ExcludeResourceRules describes what operations on what resources/subresources the ValidatingAdmissionPolicy should not care about. + // The exclude rules take precedence over include rules (if a resource matches both, it is excluded) + // +listType=atomic + // +optional + repeated NamedRuleWithOperations excludeResourceRules = 4; + + // matchPolicy defines how the "MatchResources" list is used to match incoming requests. + // Allowed values are "Exact" or "Equivalent". + // + // - Exact: match a request only if it exactly matches a specified rule. + // For example, if deployments can be modified via apps/v1, apps/v1beta1, and extensions/v1beta1, + // but "rules" only included `apiGroups:["apps"], apiVersions:["v1"], resources: ["deployments"]`, + // a request to apps/v1beta1 or extensions/v1beta1 would not be sent to the ValidatingAdmissionPolicy. + // + // - Equivalent: match a request if it modifies a resource listed in rules, even via another API group or version. + // For example, if deployments can be modified via apps/v1, apps/v1beta1, and extensions/v1beta1, + // and "rules" only included `apiGroups:["apps"], apiVersions:["v1"], resources: ["deployments"]`, + // a request to apps/v1beta1 or extensions/v1beta1 would be converted to apps/v1 and sent to the ValidatingAdmissionPolicy. + // + // Defaults to "Equivalent" + // +optional + optional string matchPolicy = 7; +} + // MutatingWebhook describes an admission webhook and the resources and operations it applies to. message MutatingWebhook { // The name of the admission webhook.
@@ -219,7 +364,7 @@ message MutatingWebhook { // - If failurePolicy=Fail, reject the request // - If failurePolicy=Ignore, the error is ignored and the webhook is skipped // - // This is an alpha feature and managed by the AdmissionWebhookMatchConditions feature gate. + // This is a beta feature and managed by the AdmissionWebhookMatchConditions feature gate. // // +patchMergeKey=name // +patchStrategy=merge @@ -255,6 +400,88 @@ message MutatingWebhookConfigurationList { repeated MutatingWebhookConfiguration items = 2; } +// NamedRuleWithOperations is a tuple of Operations and Resources with ResourceNames. +// +structType=atomic +message NamedRuleWithOperations { + // ResourceNames is an optional white list of names that the rule applies to. An empty set means that everything is allowed. + // +listType=atomic + // +optional + repeated string resourceNames = 1; + + // RuleWithOperations is a tuple of Operations and Resources. + optional k8s.io.api.admissionregistration.v1.RuleWithOperations ruleWithOperations = 2; +} + +// ParamKind is a tuple of Group Kind and Version. +// +structType=atomic +message ParamKind { + // APIVersion is the API group version the resources belong to. + // In format of "group/version". + // Required. + optional string apiVersion = 1; + + // Kind is the API kind the resources belong to. + // Required. + optional string kind = 2; +} + +// ParamRef describes how to locate the params to be used as input to +// expressions of rules applied by a policy binding. +// +structType=atomic +message ParamRef { + // name is the name of the resource being referenced. + // + // One of `name` or `selector` must be set, but `name` and `selector` are + // mutually exclusive properties. If one is set, the other must be unset. + // + // A single parameter used for all admission requests can be configured + // by setting the `name` field, leaving `selector` blank, and setting namespace + // if `paramKind` is namespace-scoped. 
+ optional string name = 1; + + // namespace is the namespace of the referenced resource. Allows limiting + // the search for params to a specific namespace. Applies to both `name` and + // `selector` fields. + // + // A per-namespace parameter may be used by specifying a namespace-scoped + // `paramKind` in the policy and leaving this field empty. + // + // - If `paramKind` is cluster-scoped, this field MUST be unset. Setting this + // field results in a configuration error. + // + // - If `paramKind` is namespace-scoped, the namespace of the object being + // evaluated for admission will be used when this field is left unset. Take + // care that if this is left empty the binding must not match any cluster-scoped + // resources, which will result in an error. + // + // +optional + optional string namespace = 2; + + // selector can be used to match multiple param objects based on their labels. + // Supply selector: {} to match all resources of the ParamKind. + // + // If multiple params are found, they are all evaluated with the policy expressions + // and the results are ANDed together. + // + // One of `name` or `selector` must be set, but `name` and `selector` are + // mutually exclusive properties. If one is set, the other must be unset. + // + // +optional + optional k8s.io.apimachinery.pkg.apis.meta.v1.LabelSelector selector = 3; + + // `parameterNotFoundAction` controls the behavior of the binding when the resource + // exists, and name or selector is valid, but there are no parameters + // matched by the binding. If the value is set to `Allow`, then no + // matched parameters will be treated as successful validation by the binding. + // If set to `Deny`, then no matched parameters will be subject to the + // `failurePolicy` of the policy. 
+ // + // Allowed values are `Allow` or `Deny` + // + // Required + optional string parameterNotFoundAction = 4; +} + // ServiceReference holds a reference to Service.legacy.k8s.io message ServiceReference { // `namespace` is the namespace of the service. @@ -277,6 +504,248 @@ message ServiceReference { optional int32 port = 4; } +// TypeChecking contains results of type checking the expressions in the +// ValidatingAdmissionPolicy +message TypeChecking { + // The type checking warnings for each expression. + // +optional + // +listType=atomic + repeated ExpressionWarning expressionWarnings = 1; +} + +// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object +// +genclient +// +genclient:nonNamespaced +// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object +// +k8s:prerelease-lifecycle-gen:introduced=1.28 +// ValidatingAdmissionPolicy describes the definition of an admission validation policy that accepts or rejects an object without changing it. +message ValidatingAdmissionPolicy { + // Standard object metadata; More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata. + // +optional + optional k8s.io.apimachinery.pkg.apis.meta.v1.ObjectMeta metadata = 1; + + // Specification of the desired behavior of the ValidatingAdmissionPolicy. + optional ValidatingAdmissionPolicySpec spec = 2; + + // The status of the ValidatingAdmissionPolicy, including warnings that are useful to determine if the policy + // behaves in the expected way. + // Populated by the system. + // Read-only. + // +optional + optional ValidatingAdmissionPolicyStatus status = 3; +} + +// ValidatingAdmissionPolicyBinding binds the ValidatingAdmissionPolicy with parameterized resources. +// ValidatingAdmissionPolicyBinding and parameter CRDs together define how cluster administrators configure policies for clusters.
+// +// For a given admission request, each binding will cause its policy to be +// evaluated N times, where N is 1 for policies/bindings that don't use +// params, otherwise N is the number of parameters selected by the binding. +// +// The CEL expressions of a policy must have a computed CEL cost below the maximum +// CEL budget. Each evaluation of the policy is given an independent CEL cost budget. +// Adding/removing policies, bindings, or params can not affect whether a +// given (policy, binding, param) combination is within its own CEL budget. +message ValidatingAdmissionPolicyBinding { + // Standard object metadata; More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata. + // +optional + optional k8s.io.apimachinery.pkg.apis.meta.v1.ObjectMeta metadata = 1; + + // Specification of the desired behavior of the ValidatingAdmissionPolicyBinding. + optional ValidatingAdmissionPolicyBindingSpec spec = 2; +} + +// ValidatingAdmissionPolicyBindingList is a list of ValidatingAdmissionPolicyBinding. +message ValidatingAdmissionPolicyBindingList { + // Standard list metadata. + // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds + // +optional + optional k8s.io.apimachinery.pkg.apis.meta.v1.ListMeta metadata = 1; + + // List of PolicyBinding. + repeated ValidatingAdmissionPolicyBinding items = 2; +} + +// ValidatingAdmissionPolicyBindingSpec is the specification of the ValidatingAdmissionPolicyBinding. +message ValidatingAdmissionPolicyBindingSpec { + // PolicyName references a ValidatingAdmissionPolicy name which the ValidatingAdmissionPolicyBinding binds to. + // If the referenced resource does not exist, this binding is considered invalid and will be ignored + // Required. + optional string policyName = 1; + + // paramRef specifies the parameter resource used to configure the admission control policy. 
+ // It should point to a resource of the type specified in ParamKind of the bound ValidatingAdmissionPolicy. + // If the policy specifies a ParamKind and the resource referred to by ParamRef does not exist, this binding is considered mis-configured and the FailurePolicy of the ValidatingAdmissionPolicy is applied. + // If the policy does not specify a ParamKind then this field is ignored, and the rules are evaluated without a param. + // +optional + optional ParamRef paramRef = 2; + + // MatchResources declares what resources match this binding and will be validated by it. + // Note that this is intersected with the policy's matchConstraints, so only requests that are matched by the policy can be selected by this. + // If this is unset, all resources matched by the policy are validated by this binding. + // When resourceRules is unset, it does not constrain resource matching. If a resource is matched by the other fields of this object, it will be validated. + // Note that this differs from ValidatingAdmissionPolicy matchConstraints, where resourceRules are required. + // +optional + optional MatchResources matchResources = 3; + + // validationActions declares how Validations of the referenced ValidatingAdmissionPolicy are enforced. + // If a validation evaluates to false it is always enforced according to these actions. + // + // Failures defined by the ValidatingAdmissionPolicy's FailurePolicy are enforced according + // to these actions only if the FailurePolicy is set to Fail, otherwise the failures are + // ignored. This includes compilation errors, runtime errors and misconfigurations of the policy. + // + // validationActions is declared as a set of action values. Order does + // not matter. validationActions may not contain duplicates of the same action. + // + // The supported action values are: + // + // "Deny" specifies that a validation failure results in a denied request.
+ // + // "Warn" specifies that a validation failure is reported to the request client + // in HTTP Warning headers, with a warning code of 299. Warnings can be sent + // for both allowed and denied admission responses. + // + // "Audit" specifies that a validation failure is included in the published + // audit event for the request. The audit event will contain a + // `validation.policy.admission.k8s.io/validation_failure` audit annotation + // with a value containing the details of the validation failures, formatted as + // a JSON list of objects, each with the following fields: + // - message: The validation failure message string + // - policy: The resource name of the ValidatingAdmissionPolicy + // - binding: The resource name of the ValidatingAdmissionPolicyBinding + // - expressionIndex: The index of the failed validation in the ValidatingAdmissionPolicy + // - validationActions: The enforcement actions enacted for the validation failure + // Example audit annotation: + // `"validation.policy.admission.k8s.io/validation_failure": "[{\"message\": \"Invalid value\", \"policy\": \"policy.example.com\", \"binding\": \"policybinding.example.com\", \"expressionIndex\": \"1\", \"validationActions\": [\"Audit\"]}]"` + // + // Clients should expect to handle additional values by ignoring + // any values not recognized. + // + // "Deny" and "Warn" may not be used together since this combination + // needlessly duplicates the validation failure both in the + // API response body and the HTTP warning headers. + // + // Required. + // +listType=set + repeated string validationActions = 4; +} + +// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object +// +k8s:prerelease-lifecycle-gen:introduced=1.28 +// ValidatingAdmissionPolicyList is a list of ValidatingAdmissionPolicy. +message ValidatingAdmissionPolicyList { + // Standard list metadata.
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds + // +optional + optional k8s.io.apimachinery.pkg.apis.meta.v1.ListMeta metadata = 1; + + // List of ValidatingAdmissionPolicy. + repeated ValidatingAdmissionPolicy items = 2; +} + +// ValidatingAdmissionPolicySpec is the specification of the desired behavior of the AdmissionPolicy. +message ValidatingAdmissionPolicySpec { + // ParamKind specifies the kind of resources used to parameterize this policy. + // If absent, there are no parameters for this policy and the param CEL variable will not be provided to validation expressions. + // If ParamKind refers to a non-existent kind, this policy definition is mis-configured and the FailurePolicy is applied. + // If paramKind is specified but paramRef is unset in ValidatingAdmissionPolicyBinding, the params variable will be null. + // +optional + optional ParamKind paramKind = 1; + + // MatchConstraints specifies what resources this policy is designed to validate. + // The AdmissionPolicy cares about a request if it matches _all_ Constraints. + // However, in order to prevent clusters from being put into an unstable state that cannot be recovered from via the API + // ValidatingAdmissionPolicy cannot match ValidatingAdmissionPolicy and ValidatingAdmissionPolicyBinding. + // Required. + optional MatchResources matchConstraints = 2; + + // Validations contain CEL expressions which are used to apply the validation. + // Validations and AuditAnnotations may not both be empty; a minimum of one Validations or AuditAnnotations is + // required. + // +listType=atomic + // +optional + repeated Validation validations = 3; + + // failurePolicy defines how to handle failures for the admission policy. Failures can + // occur from CEL expression parse errors, type check errors, runtime errors and invalid + // or mis-configured policy definitions or bindings.
+ // + // A policy is invalid if spec.paramKind refers to a non-existent Kind. + // A binding is invalid if spec.paramRef.name refers to a non-existent resource. + // + // failurePolicy does not define how validations that evaluate to false are handled. + // + // When failurePolicy is set to Fail, ValidatingAdmissionPolicyBinding validationActions + // define how failures are enforced. + // + // Allowed values are Ignore or Fail. Defaults to Fail. + // +optional + optional string failurePolicy = 4; + + // auditAnnotations contains CEL expressions which are used to produce audit + // annotations for the audit event of the API request. + // validations and auditAnnotations may not both be empty; at least one of validations or auditAnnotations is + // required. + // +listType=atomic + // +optional + repeated AuditAnnotation auditAnnotations = 5; + + // MatchConditions is a list of conditions that must be met for a request to be validated. + // Match conditions filter requests that have already been matched by the rules, + // namespaceSelector, and objectSelector. An empty list of matchConditions matches all requests. + // There are a maximum of 64 match conditions allowed. + // + // If a parameter object is provided, it can be accessed via the `params` handle in the same + // manner as validation expressions. + // + // The exact matching logic is (in order): + // 1. If ANY matchCondition evaluates to FALSE, the policy is skipped. + // 2. If ALL matchConditions evaluate to TRUE, the policy is evaluated. + // 3. If any matchCondition evaluates to an error (but none are FALSE): + // - If failurePolicy=Fail, reject the request + // - If failurePolicy=Ignore, the policy is skipped + // + // +patchMergeKey=name + // +patchStrategy=merge + // +listType=map + // +listMapKey=name + // +optional + repeated MatchCondition matchConditions = 6; + + // Variables contain definitions of variables that can be used in composition of other expressions.
+ // Each variable is defined as a named CEL expression. + // The variables defined here will be available under `variables` in other expressions of the policy + // except MatchConditions because MatchConditions are evaluated before the rest of the policy. + // + // The expression of a variable can refer to other variables defined earlier in the list but not those after. + // Thus, Variables must be sorted by the order of first appearance and acyclic. + // +patchMergeKey=name + // +patchStrategy=merge + // +listType=map + // +listMapKey=name + // +optional + repeated Variable variables = 7; +} + +// ValidatingAdmissionPolicyStatus represents the status of an admission validation policy. +message ValidatingAdmissionPolicyStatus { + // The generation observed by the controller. + // +optional + optional int64 observedGeneration = 1; + + // The results of type checking for each expression. + // Presence of this field indicates the completion of the type checking. + // +optional + optional TypeChecking typeChecking = 2; + + // The conditions represent the latest available observations of a policy's current state. + // +optional + // +listType=map + // +listMapKey=type + repeated k8s.io.apimachinery.pkg.apis.meta.v1.Condition conditions = 3; +} + // ValidatingWebhook describes an admission webhook and the resources and operations it applies to. message ValidatingWebhook { // The name of the admission webhook. @@ -420,7 +889,7 @@ message ValidatingWebhook { // - If failurePolicy=Fail, reject the request // - If failurePolicy=Ignore, the error is ignored and the webhook is skipped // - // This is an alpha feature and managed by the AdmissionWebhookMatchConditions feature gate. + // This is a beta feature and managed by the AdmissionWebhookMatchConditions feature gate. 
// // +patchMergeKey=name // +patchStrategy=merge @@ -456,6 +925,97 @@ message ValidatingWebhookConfigurationList { repeated ValidatingWebhookConfiguration items = 2; } +// Validation specifies the CEL expression which is used to apply the validation. +message Validation { + // Expression represents the expression which will be evaluated by CEL. + // ref: https://github.com/google/cel-spec + // CEL expressions have access to the contents of the API request/response, organized into CEL variables as well as some other useful variables: + // + // - 'object' - The object from the incoming request. The value is null for DELETE requests. + // - 'oldObject' - The existing object. The value is null for CREATE requests. + // - 'request' - Attributes of the API request([ref](/pkg/apis/admission/types.go#AdmissionRequest)). + // - 'params' - Parameter resource referred to by the policy binding being evaluated. Only populated if the policy has a ParamKind. + // - 'namespaceObject' - The namespace object that the incoming object belongs to. The value is null for cluster-scoped resources. + // - 'variables' - Map of composited variables, from its name to its lazily evaluated value. + // For example, a variable named 'foo' can be accessed as 'variables.foo'. + // - 'authorizer' - A CEL Authorizer. May be used to perform authorization checks for the principal (user or service account) of the request. + // See https://pkg.go.dev/k8s.io/apiserver/pkg/cel/library#Authz + // - 'authorizer.requestResource' - A CEL ResourceCheck constructed from the 'authorizer' and configured with the + // request resource. + // + // The `apiVersion`, `kind`, `metadata.name` and `metadata.generateName` are always accessible from the root of the + // object. No other metadata properties are accessible. + // + // Only property names of the form `[a-zA-Z_.-/][a-zA-Z0-9_.-/]*` are accessible. 
+ // Accessible property names are escaped according to the following rules when accessed in the expression: + // - '__' escapes to '__underscores__' + // - '.' escapes to '__dot__' + // - '-' escapes to '__dash__' + // - '/' escapes to '__slash__' + // - Property names that exactly match a CEL RESERVED keyword escape to '__{keyword}__'. The keywords are: + // "true", "false", "null", "in", "as", "break", "const", "continue", "else", "for", "function", "if", + // "import", "let", "loop", "package", "namespace", "return". + // Examples: + // - Expression accessing a property named "namespace": {"Expression": "object.__namespace__ > 0"} + // - Expression accessing a property named "x-prop": {"Expression": "object.x__dash__prop > 0"} + // - Expression accessing a property named "redact__d": {"Expression": "object.redact__underscores__d > 0"} + // + // Equality on arrays with list type of 'set' or 'map' ignores element order, i.e. [1, 2] == [2, 1]. + // Concatenation on arrays with x-kubernetes-list-type use the semantics of the list type: + // - 'set': `X + Y` performs a union where the array positions of all elements in `X` are preserved and + // non-intersecting elements in `Y` are appended, retaining their partial order. + // - 'map': `X + Y` performs a merge where the array positions of all keys in `X` are preserved but the values + // are overwritten by values in `Y` when the key sets of `X` and `Y` intersect. Elements in `Y` with + // non-intersecting keys are appended, retaining their partial order. + // Required. + optional string Expression = 1; + + // Message represents the message displayed when validation fails. The message is required if the Expression contains + // line breaks. The message must not contain line breaks. + // e.g. "must be a URL with the host matching spec.host"
+ // If unset, the message is "failed Expression: {Expression}". + // +optional + optional string message = 2; + + // Reason represents a machine-readable description of why this validation failed. + // If this is the first validation in the list to fail, this reason, as well as the + // corresponding HTTP response code, are used in the + // HTTP response to the client. + // The currently supported reasons are: "Unauthorized", "Forbidden", "Invalid", "RequestEntityTooLarge". + // If not set, StatusReasonInvalid is used in the response to the client. + // +optional + optional string reason = 3; + + // messageExpression declares a CEL expression that evaluates to the validation failure message that is returned when this rule fails. + // Since messageExpression is used as a failure message, it must evaluate to a string. + // If both message and messageExpression are present on a validation, then messageExpression will be used if validation fails. + // If messageExpression results in a runtime error, the runtime error is logged, and the validation failure message is produced + // as if the messageExpression field were unset. If messageExpression evaluates to an empty string, a string with only spaces, or a string + // that contains line breaks, then the validation failure message will also be produced as if the messageExpression field were unset, and + // the fact that messageExpression produced an empty string/string with only spaces/string with line breaks will be logged. + // messageExpression has access to all the same variables as the `expression` except for 'authorizer' and 'authorizer.requestResource'. + // Example: + // "object.x must be less than max ("+string(params.max)+")" + // +optional + optional string messageExpression = 4; +} + +// Variable is the definition of a variable that is used for composition. A variable is defined as a named expression. +// +structType=atomic +message Variable { + // Name is the name of the variable. 
The name must be a valid CEL identifier and unique among all variables. + // The variable can be accessed in other expressions through `variables` + // For example, if name is "foo", the variable will be available as `variables.foo` + optional string Name = 1; + + // Expression is the expression that will be evaluated as the value of the variable. + // The CEL expression has access to the same identifiers as the CEL expressions in Validation. + optional string Expression = 2; +} + // WebhookClientConfig contains the information to make a TLS // connection with the webhook message WebhookClientConfig { diff --git a/cluster-autoscaler/vendor/k8s.io/api/admissionregistration/v1beta1/register.go b/cluster-autoscaler/vendor/k8s.io/api/admissionregistration/v1beta1/register.go index 098744cf634f..363233a2f9aa 100644 --- a/cluster-autoscaler/vendor/k8s.io/api/admissionregistration/v1beta1/register.go +++ b/cluster-autoscaler/vendor/k8s.io/api/admissionregistration/v1beta1/register.go @@ -50,6 +50,10 @@ func addKnownTypes(scheme *runtime.Scheme) error { &ValidatingWebhookConfigurationList{}, &MutatingWebhookConfiguration{}, &MutatingWebhookConfigurationList{}, + &ValidatingAdmissionPolicy{}, + &ValidatingAdmissionPolicyList{}, + &ValidatingAdmissionPolicyBinding{}, + &ValidatingAdmissionPolicyBindingList{}, ) metav1.AddToGroupVersion(scheme, SchemeGroupVersion) return nil diff --git a/cluster-autoscaler/vendor/k8s.io/api/admissionregistration/v1beta1/types.go b/cluster-autoscaler/vendor/k8s.io/api/admissionregistration/v1beta1/types.go index 82ee7df9bad6..c199702fbd02 100644 --- a/cluster-autoscaler/vendor/k8s.io/api/admissionregistration/v1beta1/types.go +++ b/cluster-autoscaler/vendor/k8s.io/api/admissionregistration/v1beta1/types.go @@ -38,6 +38,18 @@ const ( AllScopes ScopeType = v1.AllScopes ) +// ParameterNotFoundActionType specifies a failure policy that defines how a binding +// is evaluated when the param referred by its perNamespaceParamRef is not found. 
+type ParameterNotFoundActionType string + +const ( + // Allow means all requests will be admitted if no param resources + // could be found. + AllowAction ParameterNotFoundActionType = "Allow" + // Deny means all requests will be denied if no param resources are found. + DenyAction ParameterNotFoundActionType = "Deny" +) + // FailurePolicyType specifies a failure policy that defines how unrecognized errors from the admission endpoint are handled. type FailurePolicyType string @@ -75,6 +87,584 @@ const ( SideEffectClassNoneOnDryRun SideEffectClass = "NoneOnDryRun" ) +// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object +// +genclient +// +genclient:nonNamespaced +// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object +// +k8s:prerelease-lifecycle-gen:introduced=1.28 +// ValidatingAdmissionPolicy describes the definition of an admission validation policy that accepts or rejects an object without changing it. +type ValidatingAdmissionPolicy struct { + metav1.TypeMeta `json:",inline"` + // Standard object metadata; More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata. + // +optional + metav1.ObjectMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"` + // Specification of the desired behavior of the ValidatingAdmissionPolicy. + Spec ValidatingAdmissionPolicySpec `json:"spec,omitempty" protobuf:"bytes,2,opt,name=spec"` + // The status of the ValidatingAdmissionPolicy, including warnings that are useful to determine if the policy + // behaves in the expected way. + // Populated by the system. + // Read-only. + // +optional + Status ValidatingAdmissionPolicyStatus `json:"status,omitempty" protobuf:"bytes,3,opt,name=status"` +} + +// ValidatingAdmissionPolicyStatus represents the status of an admission validation policy. +type ValidatingAdmissionPolicyStatus struct { + // The generation observed by the controller. 
+ // +optional + ObservedGeneration int64 `json:"observedGeneration,omitempty" protobuf:"varint,1,opt,name=observedGeneration"` + // The results of type checking for each expression. + // Presence of this field indicates the completion of the type checking. + // +optional + TypeChecking *TypeChecking `json:"typeChecking,omitempty" protobuf:"bytes,2,opt,name=typeChecking"` + // The conditions represent the latest available observations of a policy's current state. + // +optional + // +listType=map + // +listMapKey=type + Conditions []metav1.Condition `json:"conditions,omitempty" protobuf:"bytes,3,rep,name=conditions"` +} + +// ValidatingAdmissionPolicyConditionType is the condition type of admission validation policy. +type ValidatingAdmissionPolicyConditionType string + +// TypeChecking contains results of type checking the expressions in the +// ValidatingAdmissionPolicy +type TypeChecking struct { + // The type checking warnings for each expression. + // +optional + // +listType=atomic + ExpressionWarnings []ExpressionWarning `json:"expressionWarnings,omitempty" protobuf:"bytes,1,rep,name=expressionWarnings"` +} + +// ExpressionWarning is warning information that targets a specific expression. +type ExpressionWarning struct { + // The path to the field that refers to the expression. + // For example, the reference to the expression of the first item of + // validations is "spec.validations[0].expression" + FieldRef string `json:"fieldRef" protobuf:"bytes,2,opt,name=fieldRef"` + // The content of type checking information in a human-readable form. + // Each line of the warning contains the type that the expression is checked + // against, followed by the type check error from the compiler. + Warning string `json:"warning" protobuf:"bytes,3,opt,name=warning"` +} + +// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object +// +k8s:prerelease-lifecycle-gen:introduced=1.28 +// ValidatingAdmissionPolicyList is a list of ValidatingAdmissionPolicy.
+type ValidatingAdmissionPolicyList struct { + metav1.TypeMeta `json:",inline"` + // Standard list metadata. + // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds + // +optional + metav1.ListMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"` + // List of ValidatingAdmissionPolicy. + Items []ValidatingAdmissionPolicy `json:"items,omitempty" protobuf:"bytes,2,rep,name=items"` +} + +// ValidatingAdmissionPolicySpec is the specification of the desired behavior of the AdmissionPolicy. +type ValidatingAdmissionPolicySpec struct { + // ParamKind specifies the kind of resources used to parameterize this policy. + // If absent, there are no parameters for this policy and the param CEL variable will not be provided to validation expressions. + // If ParamKind refers to a non-existent kind, this policy definition is mis-configured and the FailurePolicy is applied. + // If paramKind is specified but paramRef is unset in ValidatingAdmissionPolicyBinding, the params variable will be null. + // +optional + ParamKind *ParamKind `json:"paramKind,omitempty" protobuf:"bytes,1,rep,name=paramKind"` + + // MatchConstraints specifies what resources this policy is designed to validate. + // The AdmissionPolicy cares about a request if it matches _all_ Constraints. + // However, in order to prevent clusters from being put into an unstable state that cannot be recovered from via the API, + // ValidatingAdmissionPolicy cannot match ValidatingAdmissionPolicy and ValidatingAdmissionPolicyBinding. + // Required. + MatchConstraints *MatchResources `json:"matchConstraints,omitempty" protobuf:"bytes,2,rep,name=matchConstraints"` + + // Validations contain CEL expressions which are used to apply the validation. + // Validations and AuditAnnotations may not both be empty; at least one of Validations or AuditAnnotations is + // required.
+ // +listType=atomic + // +optional + Validations []Validation `json:"validations,omitempty" protobuf:"bytes,3,rep,name=validations"` + + // failurePolicy defines how to handle failures for the admission policy. Failures can + // occur from CEL expression parse errors, type check errors, runtime errors and invalid + // or mis-configured policy definitions or bindings. + // + // A policy is invalid if spec.paramKind refers to a non-existent Kind. + // A binding is invalid if spec.paramRef.name refers to a non-existent resource. + // + // failurePolicy does not define how validations that evaluate to false are handled. + // + // When failurePolicy is set to Fail, ValidatingAdmissionPolicyBinding validationActions + // define how failures are enforced. + // + // Allowed values are Ignore or Fail. Defaults to Fail. + // +optional + FailurePolicy *FailurePolicyType `json:"failurePolicy,omitempty" protobuf:"bytes,4,opt,name=failurePolicy,casttype=FailurePolicyType"` + + // auditAnnotations contains CEL expressions which are used to produce audit + // annotations for the audit event of the API request. + // validations and auditAnnotations may not both be empty; at least one of validations or auditAnnotations is + // required. + // +listType=atomic + // +optional + AuditAnnotations []AuditAnnotation `json:"auditAnnotations,omitempty" protobuf:"bytes,5,rep,name=auditAnnotations"` + + // MatchConditions is a list of conditions that must be met for a request to be validated. + // Match conditions filter requests that have already been matched by the rules, + // namespaceSelector, and objectSelector. An empty list of matchConditions matches all requests. + // There are a maximum of 64 match conditions allowed. + // + // If a parameter object is provided, it can be accessed via the `params` handle in the same + // manner as validation expressions. + // + // The exact matching logic is (in order): + // 1. If ANY matchCondition evaluates to FALSE, the policy is skipped. + // 2.
If ALL matchConditions evaluate to TRUE, the policy is evaluated. + // 3. If any matchCondition evaluates to an error (but none are FALSE): + // - If failurePolicy=Fail, reject the request + // - If failurePolicy=Ignore, the policy is skipped + // + // +patchMergeKey=name + // +patchStrategy=merge + // +listType=map + // +listMapKey=name + // +optional + MatchConditions []MatchCondition `json:"matchConditions,omitempty" patchStrategy:"merge" patchMergeKey:"name" protobuf:"bytes,6,rep,name=matchConditions"` + + // Variables contain definitions of variables that can be used in composition of other expressions. + // Each variable is defined as a named CEL expression. + // The variables defined here will be available under `variables` in other expressions of the policy + // except MatchConditions because MatchConditions are evaluated before the rest of the policy. + // + // The expression of a variable can refer to other variables defined earlier in the list but not those after. + // Thus, Variables must be sorted by the order of first appearance and acyclic. + // +patchMergeKey=name + // +patchStrategy=merge + // +listType=map + // +listMapKey=name + // +optional + Variables []Variable `json:"variables" patchStrategy:"merge" patchMergeKey:"name" protobuf:"bytes,7,rep,name=variables"` +} + +// ParamKind is a tuple of Group Kind and Version. +// +structType=atomic +type ParamKind struct { + // APIVersion is the API group version the resources belong to. + // In format of "group/version". + // Required. + APIVersion string `json:"apiVersion,omitempty" protobuf:"bytes,1,rep,name=apiVersion"` + + // Kind is the API kind the resources belong to. + // Required. + Kind string `json:"kind,omitempty" protobuf:"bytes,2,rep,name=kind"` +} + +// Validation specifies the CEL expression which is used to apply the validation. +type Validation struct { + // Expression represents the expression which will be evaluated by CEL. 
+ // ref: https://github.com/google/cel-spec + // CEL expressions have access to the contents of the API request/response, organized into CEL variables as well as some other useful variables: + // + // - 'object' - The object from the incoming request. The value is null for DELETE requests. + // - 'oldObject' - The existing object. The value is null for CREATE requests. + // - 'request' - Attributes of the API request([ref](/pkg/apis/admission/types.go#AdmissionRequest)). + // - 'params' - Parameter resource referred to by the policy binding being evaluated. Only populated if the policy has a ParamKind. + // - 'namespaceObject' - The namespace object that the incoming object belongs to. The value is null for cluster-scoped resources. + // - 'variables' - Map of composited variables, from its name to its lazily evaluated value. + // For example, a variable named 'foo' can be accessed as 'variables.foo'. + // - 'authorizer' - A CEL Authorizer. May be used to perform authorization checks for the principal (user or service account) of the request. + // See https://pkg.go.dev/k8s.io/apiserver/pkg/cel/library#Authz + // - 'authorizer.requestResource' - A CEL ResourceCheck constructed from the 'authorizer' and configured with the + // request resource. + // + // The `apiVersion`, `kind`, `metadata.name` and `metadata.generateName` are always accessible from the root of the + // object. No other metadata properties are accessible. + // + // Only property names of the form `[a-zA-Z_.-/][a-zA-Z0-9_.-/]*` are accessible. + // Accessible property names are escaped according to the following rules when accessed in the expression: + // - '__' escapes to '__underscores__' + // - '.' escapes to '__dot__' + // - '-' escapes to '__dash__' + // - '/' escapes to '__slash__' + // - Property names that exactly match a CEL RESERVED keyword escape to '__{keyword}__'. 
The keywords are: + // "true", "false", "null", "in", "as", "break", "const", "continue", "else", "for", "function", "if", + // "import", "let", "loop", "package", "namespace", "return". + // Examples: + // - Expression accessing a property named "namespace": {"Expression": "object.__namespace__ > 0"} + // - Expression accessing a property named "x-prop": {"Expression": "object.x__dash__prop > 0"} + // - Expression accessing a property named "redact__d": {"Expression": "object.redact__underscores__d > 0"} + // + // Equality on arrays with list type of 'set' or 'map' ignores element order, i.e. [1, 2] == [2, 1]. + // Concatenation on arrays with x-kubernetes-list-type uses the semantics of the list type: + // - 'set': `X + Y` performs a union where the array positions of all elements in `X` are preserved and + // non-intersecting elements in `Y` are appended, retaining their partial order. + // - 'map': `X + Y` performs a merge where the array positions of all keys in `X` are preserved but the values + // are overwritten by values in `Y` when the key sets of `X` and `Y` intersect. Elements in `Y` with + // non-intersecting keys are appended, retaining their partial order. + // Required. + Expression string `json:"expression" protobuf:"bytes,1,opt,name=Expression"` + // Message represents the message displayed when validation fails. + // If the Expression contains line breaks, Message is required. + // The message must not contain line breaks. + // If unset, the message is "failed Expression: {Expression}". + // e.g. "must be a URL with the host matching spec.host" + // +optional + Message string `json:"message,omitempty" protobuf:"bytes,2,opt,name=message"` + // Reason represents a machine-readable description of why this validation failed.
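The escaping rules listed above are mechanical enough to reproduce in a few lines. The sketch below uses a hypothetical helper name (`escapeCELProperty`); the real escaper lives in the apiserver's CEL libraries, and this only mirrors the rules as documented here:

```go
package main

import (
	"fmt"
	"strings"
)

// celReserved lists the CEL RESERVED keywords named in the comment above.
var celReserved = map[string]bool{
	"true": true, "false": true, "null": true, "in": true, "as": true,
	"break": true, "const": true, "continue": true, "else": true, "for": true,
	"function": true, "if": true, "import": true, "let": true, "loop": true,
	"package": true, "namespace": true, "return": true,
}

// escapeCELProperty applies the documented escape rules to a property name:
// exact reserved keywords become '__{keyword}__'; otherwise '__' becomes
// '__underscores__', '.' becomes '__dot__', '-' becomes '__dash__', and
// '/' becomes '__slash__'.
func escapeCELProperty(name string) string {
	if celReserved[name] {
		return "__" + name + "__"
	}
	// NewReplacer tries patterns in the order given, so the '__' pair is
	// consumed before the single-character rules can see its underscores.
	r := strings.NewReplacer(
		"__", "__underscores__",
		".", "__dot__",
		"-", "__dash__",
		"/", "__slash__",
	)
	return r.Replace(name)
}

func main() {
	fmt.Println(escapeCELProperty("x-prop"))    // x__dash__prop
	fmt.Println(escapeCELProperty("redact__d")) // redact__underscores__d
	fmt.Println(escapeCELProperty("namespace")) // __namespace__
}
```

The outputs match the worked examples in the comment: `object.x-prop` is written as `object.x__dash__prop`, and the keyword `namespace` is accessed as `object.__namespace__`.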
+ // If this is the first validation in the list to fail, this reason, as well as the + // corresponding HTTP response code, are used in the + // HTTP response to the client. + // The currently supported reasons are: "Unauthorized", "Forbidden", "Invalid", "RequestEntityTooLarge". + // If not set, StatusReasonInvalid is used in the response to the client. + // +optional + Reason *metav1.StatusReason `json:"reason,omitempty" protobuf:"bytes,3,opt,name=reason"` + // messageExpression declares a CEL expression that evaluates to the validation failure message that is returned when this rule fails. + // Since messageExpression is used as a failure message, it must evaluate to a string. + // If both message and messageExpression are present on a validation, then messageExpression will be used if validation fails. + // If messageExpression results in a runtime error, the runtime error is logged, and the validation failure message is produced + // as if the messageExpression field were unset. If messageExpression evaluates to an empty string, a string with only spaces, or a string + // that contains line breaks, then the validation failure message will also be produced as if the messageExpression field were unset, and + // the fact that messageExpression produced an empty string/string with only spaces/string with line breaks will be logged. + // messageExpression has access to all the same variables as the `expression` except for 'authorizer' and 'authorizer.requestResource'. + // Example: + // "object.x must be less than max ("+string(params.max)+")" + // +optional + MessageExpression string `json:"messageExpression,omitempty" protobuf:"bytes,4,opt,name=messageExpression"` +} + +// Variable is the definition of a variable that is used for composition. A variable is defined as a named expression. +// +structType=atomic +type Variable struct { + // Name is the name of the variable. The name must be a valid CEL identifier and unique among all variables. 
+ // The variable can be accessed in other expressions through `variables` + // For example, if name is "foo", the variable will be available as `variables.foo` + Name string `json:"name" protobuf:"bytes,1,opt,name=Name"` + + // Expression is the expression that will be evaluated as the value of the variable. + // The CEL expression has access to the same identifiers as the CEL expressions in Validation. + Expression string `json:"expression" protobuf:"bytes,2,opt,name=Expression"` +} + +// AuditAnnotation describes how to produce an audit annotation for an API request. +type AuditAnnotation struct { + // key specifies the audit annotation key. The audit annotation keys of + // a ValidatingAdmissionPolicy must be unique. The key must be a qualified + // name ([A-Za-z0-9][-A-Za-z0-9_.]*) no more than 63 bytes in length. + // + // The key is combined with the resource name of the + // ValidatingAdmissionPolicy to construct an audit annotation key: + // "{ValidatingAdmissionPolicy name}/{key}". + // + // If an admission webhook uses the same resource name as this ValidatingAdmissionPolicy + // and the same audit annotation key, the annotation key will be identical. + // In this case, the first annotation written with the key will be included + // in the audit event and all subsequent annotations with the same key + // will be discarded. + // + // Required. + Key string `json:"key" protobuf:"bytes,1,opt,name=key"` + + // valueExpression represents the expression which is evaluated by CEL to + // produce an audit annotation value. The expression must evaluate to either + // a string or null value. If the expression evaluates to a string, the + // audit annotation is included with the string value. If the expression + // evaluates to null or empty string the audit annotation will be omitted. + // The valueExpression may be no longer than 5kb in length. + // If the result of the valueExpression is more than 10kb in length, it + // will be truncated to 10kb. 
+ // + // If multiple ValidatingAdmissionPolicyBinding resources match an + // API request, then the valueExpression will be evaluated for + // each binding. All unique values produced by the valueExpressions + // will be joined together in a comma-separated list. + // + // Required. + ValueExpression string `json:"valueExpression" protobuf:"bytes,2,opt,name=valueExpression"` +} + +// +genclient +// +genclient:nonNamespaced +// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object +// +k8s:prerelease-lifecycle-gen:introduced=1.28 + +// ValidatingAdmissionPolicyBinding binds the ValidatingAdmissionPolicy with parameterized resources. +// ValidatingAdmissionPolicyBinding and parameter CRDs together define how cluster administrators configure policies for clusters. +// +// For a given admission request, each binding will cause its policy to be +// evaluated N times, where N is 1 for policies/bindings that don't use +// params, otherwise N is the number of parameters selected by the binding. +// +// The CEL expressions of a policy must have a computed CEL cost below the maximum +// CEL budget. Each evaluation of the policy is given an independent CEL cost budget. +// Adding/removing policies, bindings, or params cannot affect whether a +// given (policy, binding, param) combination is within its own CEL budget. +type ValidatingAdmissionPolicyBinding struct { + metav1.TypeMeta `json:",inline"` + // Standard object metadata; More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata. + // +optional + metav1.ObjectMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"` + // Specification of the desired behavior of the ValidatingAdmissionPolicyBinding.
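The AuditAnnotation semantics above (key prefixing with the policy name, and deduplicated joining of per-binding values) can be sketched as follows. The helper names and the exact join separator are assumptions for illustration; the doc comment only says "comma-separated":

```go
package main

import (
	"fmt"
	"strings"
)

// auditAnnotationKey combines the policy's resource name with the annotation
// key, as documented: "{ValidatingAdmissionPolicy name}/{key}".
func auditAnnotationKey(policyName, key string) string {
	return policyName + "/" + key
}

// joinAuditValues models the documented multi-binding behavior: each binding's
// valueExpression result is collected, null/empty results are omitted,
// duplicates are dropped, and the unique values are joined into a
// comma-separated list (separator assumed to be ", ").
func joinAuditValues(values []string) string {
	seen := map[string]bool{}
	var uniq []string
	for _, v := range values {
		if v == "" || seen[v] {
			continue // empty results omit the annotation; duplicates are discarded
		}
		seen[v] = true
		uniq = append(uniq, v)
	}
	return strings.Join(uniq, ", ")
}

func main() {
	fmt.Println(auditAnnotationKey("demo-policy", "high-replica-count"))
	fmt.Println(joinAuditValues([]string{"a", "b", "a", ""}))
}
```

First-write-wins for colliding webhook annotations, and the 5kb expression / 10kb result limits, are enforced elsewhere and are not modeled here.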
+ Spec ValidatingAdmissionPolicyBindingSpec `json:"spec,omitempty" protobuf:"bytes,2,opt,name=spec"` +} + +// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object +// +k8s:prerelease-lifecycle-gen:introduced=1.28 + +// ValidatingAdmissionPolicyBindingList is a list of ValidatingAdmissionPolicyBinding. +type ValidatingAdmissionPolicyBindingList struct { + metav1.TypeMeta `json:",inline"` + // Standard list metadata. + // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds + // +optional + metav1.ListMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"` + // List of ValidatingAdmissionPolicyBinding. + Items []ValidatingAdmissionPolicyBinding `json:"items,omitempty" protobuf:"bytes,2,rep,name=items"` +} + +// ValidatingAdmissionPolicyBindingSpec is the specification of the ValidatingAdmissionPolicyBinding. +type ValidatingAdmissionPolicyBindingSpec struct { + // PolicyName references a ValidatingAdmissionPolicy name which the ValidatingAdmissionPolicyBinding binds to. + // If the referenced resource does not exist, this binding is considered invalid and will be ignored. + // Required. + PolicyName string `json:"policyName,omitempty" protobuf:"bytes,1,rep,name=policyName"` + + // paramRef specifies the parameter resource used to configure the admission control policy. + // It should point to a resource of the type specified in ParamKind of the bound ValidatingAdmissionPolicy. + // If the policy specifies a ParamKind and the resource referred to by ParamRef does not exist, this binding is considered mis-configured and the FailurePolicy of the ValidatingAdmissionPolicy is applied. + // If the policy does not specify a ParamKind then this field is ignored, and the rules are evaluated without a param. + // +optional + ParamRef *ParamRef `json:"paramRef,omitempty" protobuf:"bytes,2,rep,name=paramRef"` + + // MatchResources declares what resources match this binding and will be validated by it.
+ // Note that this is intersected with the policy's matchConstraints, so only requests that are matched by the policy can be selected by this. + // If this is unset, all resources matched by the policy are validated by this binding. + // When resourceRules is unset, it does not constrain resource matching. If a resource is matched by the other fields of this object, it will be validated. + // Note that this differs from ValidatingAdmissionPolicy matchConstraints, where resourceRules are required. + // +optional + MatchResources *MatchResources `json:"matchResources,omitempty" protobuf:"bytes,3,rep,name=matchResources"` + + // validationActions declares how Validations of the referenced ValidatingAdmissionPolicy are enforced. + // If a validation evaluates to false it is always enforced according to these actions. + // + // Failures defined by the ValidatingAdmissionPolicy's FailurePolicy are enforced according + // to these actions only if the FailurePolicy is set to Fail, otherwise the failures are + // ignored. This includes compilation errors, runtime errors and misconfigurations of the policy. + // + // validationActions is declared as a set of action values. Order does + // not matter. validationActions may not contain duplicates of the same action. + // + // The supported action values are: + // + // "Deny" specifies that a validation failure results in a denied request. + // + // "Warn" specifies that a validation failure is reported to the request client + // in HTTP Warning headers, with a warning code of 299. Warnings can be sent + // both for allowed or denied admission responses. + // + // "Audit" specifies that a validation failure is included in the published + // audit event for the request.
The audit event will contain a + // `validation.policy.admission.k8s.io/validation_failure` audit annotation + // with a value containing the details of the validation failures, formatted as + // a JSON list of objects, each with the following fields: + // - message: The validation failure message string + // - policy: The resource name of the ValidatingAdmissionPolicy + // - binding: The resource name of the ValidatingAdmissionPolicyBinding + // - expressionIndex: The index of the failed validations in the ValidatingAdmissionPolicy + // - validationActions: The enforcement actions enacted for the validation failure + // Example audit annotation: + // `"validation.policy.admission.k8s.io/validation_failure": "[{\"message\": \"Invalid value\", \"policy\": \"policy.example.com\", \"binding\": \"policybinding.example.com\", \"expressionIndex\": \"1\", \"validationActions\": [\"Audit\"]}]"` + // + // Clients should expect to handle additional values by ignoring + // any values not recognized. + // + // "Deny" and "Warn" may not be used together since this combination + // needlessly duplicates the validation failure both in the + // API response body and the HTTP warning headers. + // + // Required. + // +listType=set + ValidationActions []ValidationAction `json:"validationActions,omitempty" protobuf:"bytes,4,rep,name=validationActions"` +} + +// ParamRef describes how to locate the params to be used as input to +// expressions of rules applied by a policy binding. +// +structType=atomic +type ParamRef struct { + // name is the name of the resource being referenced. + // + // One of `name` or `selector` must be set, but `name` and `selector` are + // mutually exclusive properties. If one is set, the other must be unset. + // + // A single parameter used for all admission requests can be configured + // by setting the `name` field, leaving `selector` blank, and setting namespace + // if `paramKind` is namespace-scoped.
+ // + Name string `json:"name,omitempty" protobuf:"bytes,1,rep,name=name"` + + // namespace is the namespace of the referenced resource. Allows limiting + // the search for params to a specific namespace. Applies to both `name` and + // `selector` fields. + // + // A per-namespace parameter may be used by specifying a namespace-scoped + // `paramKind` in the policy and leaving this field empty. + // + // - If `paramKind` is cluster-scoped, this field MUST be unset. Setting this + // field results in a configuration error. + // + // - If `paramKind` is namespace-scoped, the namespace of the object being + // evaluated for admission will be used when this field is left unset. Take + // care that if this is left empty the binding must not match any cluster-scoped + // resources, which will result in an error. + // + // +optional + Namespace string `json:"namespace,omitempty" protobuf:"bytes,2,rep,name=namespace"` + + // selector can be used to match multiple param objects based on their labels. + // Supply selector: {} to match all resources of the ParamKind. + // + // If multiple params are found, they are all evaluated with the policy expressions + // and the results are ANDed together. + // + // One of `name` or `selector` must be set, but `name` and `selector` are + // mutually exclusive properties. If one is set, the other must be unset. + // + // +optional + Selector *metav1.LabelSelector `json:"selector,omitempty" protobuf:"bytes,3,rep,name=selector"` + + // `parameterNotFoundAction` controls the behavior of the binding when the resource + // exists, and name or selector is valid, but there are no parameters + // matched by the binding. If the value is set to `Allow`, then no + // matched parameters will be treated as successful validation by the binding. + // If set to `Deny`, then no matched parameters will be subject to the + // `failurePolicy` of the policy. 
+ // + // Allowed values are `Allow` or `Deny` + // + // Required + ParameterNotFoundAction *ParameterNotFoundActionType `json:"parameterNotFoundAction,omitempty" protobuf:"bytes,4,rep,name=parameterNotFoundAction"` +} + +// MatchResources decides whether to run the admission control policy on an object based +// on whether it meets the match criteria. +// The exclude rules take precedence over include rules (if a resource matches both, it is excluded) +// +structType=atomic +type MatchResources struct { + // NamespaceSelector decides whether to run the admission control policy on an object based + // on whether the namespace for that object matches the selector. If the + // object itself is a namespace, the matching is performed on + // object.metadata.labels. If the object is another cluster scoped resource, + // it never skips the policy. + // + // For example, to run the webhook on any objects whose namespace is not + // associated with "runlevel" of "0" or "1"; you will set the selector as + // follows: + // "namespaceSelector": { + // "matchExpressions": [ + // { + // "key": "runlevel", + // "operator": "NotIn", + // "values": [ + // "0", + // "1" + // ] + // } + // ] + // } + // + // If instead you want to only run the policy on any objects whose + // namespace is associated with the "environment" of "prod" or "staging"; + // you will set the selector as follows: + // "namespaceSelector": { + // "matchExpressions": [ + // { + // "key": "environment", + // "operator": "In", + // "values": [ + // "prod", + // "staging" + // ] + // } + // ] + // } + // + // See + // https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/ + // for more examples of label selectors. + // + // Default to the empty LabelSelector, which matches everything. 
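The ParamRef constraints spelled out above (name and selector are mutually exclusive, exactly one must be set, and `parameterNotFoundAction` is required with values `Allow` or `Deny`) can be modeled with a small validation sketch. The `paramRef` type here is an illustrative stand-in, not the vendored API type, and `validateParamRef` is an assumed helper name:

```go
package main

import (
	"errors"
	"fmt"
)

// paramRef is a simplified stand-in for the ParamRef API type above;
// Selector uses an empty struct pointer in place of *metav1.LabelSelector.
type paramRef struct {
	Name                    string
	Selector                *struct{}
	ParameterNotFoundAction string // "Allow" or "Deny"
}

// validateParamRef enforces the documented constraints: exactly one of name
// or selector must be set, and parameterNotFoundAction is required and must
// be "Allow" or "Deny".
func validateParamRef(p paramRef) error {
	hasName := p.Name != ""
	hasSelector := p.Selector != nil
	if hasName == hasSelector { // both set, or both unset
		return errors.New("exactly one of name or selector must be set")
	}
	if p.ParameterNotFoundAction != "Allow" && p.ParameterNotFoundAction != "Deny" {
		return errors.New(`parameterNotFoundAction must be "Allow" or "Deny"`)
	}
	return nil
}

func main() {
	// A single named parameter with Deny-on-missing: valid per the rules above.
	fmt.Println(validateParamRef(paramRef{Name: "prod-params", ParameterNotFoundAction: "Deny"}))
}
```

Namespace scoping (this field must be unset for cluster-scoped paramKinds, and defaults to the request's namespace otherwise) is a separate check not modeled here.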
+ // +optional + NamespaceSelector *metav1.LabelSelector `json:"namespaceSelector,omitempty" protobuf:"bytes,1,opt,name=namespaceSelector"` + // ObjectSelector decides whether to run the validation based on if the + // object has matching labels. objectSelector is evaluated against both + // the oldObject and newObject that would be sent to the cel validation, and + // is considered to match if either object matches the selector. A null + // object (oldObject in the case of create, or newObject in the case of + // delete) or an object that cannot have labels (like a + // DeploymentRollback or a PodProxyOptions object) is not considered to + // match. + // Use the object selector only if the webhook is opt-in, because end + // users may skip the admission webhook by setting the labels. + // Default to the empty LabelSelector, which matches everything. + // +optional + ObjectSelector *metav1.LabelSelector `json:"objectSelector,omitempty" protobuf:"bytes,2,opt,name=objectSelector"` + // ResourceRules describes what operations on what resources/subresources the ValidatingAdmissionPolicy matches. + // The policy cares about an operation if it matches _any_ Rule. + // +listType=atomic + // +optional + ResourceRules []NamedRuleWithOperations `json:"resourceRules,omitempty" protobuf:"bytes,3,rep,name=resourceRules"` + // ExcludeResourceRules describes what operations on what resources/subresources the ValidatingAdmissionPolicy should not care about. + // The exclude rules take precedence over include rules (if a resource matches both, it is excluded) + // +listType=atomic + // +optional + ExcludeResourceRules []NamedRuleWithOperations `json:"excludeResourceRules,omitempty" protobuf:"bytes,4,rep,name=excludeResourceRules"` + // matchPolicy defines how the "MatchResources" list is used to match incoming requests. + // Allowed values are "Exact" or "Equivalent". + // + // - Exact: match a request only if it exactly matches a specified rule. 
+ // For example, if deployments can be modified via apps/v1, apps/v1beta1, and extensions/v1beta1, + // but "rules" only included `apiGroups:["apps"], apiVersions:["v1"], resources: ["deployments"]`, + // a request to apps/v1beta1 or extensions/v1beta1 would not be sent to the ValidatingAdmissionPolicy. + // + // - Equivalent: match a request if it modifies a resource listed in rules, even via another API group or version. + // For example, if deployments can be modified via apps/v1, apps/v1beta1, and extensions/v1beta1, + // and "rules" only included `apiGroups:["apps"], apiVersions:["v1"], resources: ["deployments"]`, + // a request to apps/v1beta1 or extensions/v1beta1 would be converted to apps/v1 and sent to the ValidatingAdmissionPolicy. + // + // Defaults to "Equivalent". + // +optional + MatchPolicy *MatchPolicyType `json:"matchPolicy,omitempty" protobuf:"bytes,7,opt,name=matchPolicy,casttype=MatchPolicyType"` +} + +// ValidationAction specifies a policy enforcement action. +// +enum +type ValidationAction string + +const ( + // Deny specifies that a validation failure results in a denied request. + Deny ValidationAction = "Deny" + // Warn specifies that a validation failure is reported to the request client + // in HTTP Warning headers, with a warning code of 299. Warnings can be sent + // both for allowed or denied admission responses. + Warn ValidationAction = "Warn" + // Audit specifies that a validation failure is included in the published + // audit event for the request. The audit event will contain a + // `validation.policy.admission.k8s.io/validation_failure` audit annotation + // with a value containing the details of the validation failure. + Audit ValidationAction = "Audit" +) + +// NamedRuleWithOperations is a tuple of Operations and Resources with ResourceNames. +// +structType=atomic +type NamedRuleWithOperations struct { + // ResourceNames is an optional white list of names that the rule applies to.
An empty set means that everything is allowed. + // +listType=atomic + // +optional + ResourceNames []string `json:"resourceNames,omitempty" protobuf:"bytes,1,rep,name=resourceNames"` + // RuleWithOperations is a tuple of Operations and Resources. + RuleWithOperations `json:",inline" protobuf:"bytes,2,opt,name=ruleWithOperations"` +} + // +genclient // +genclient:nonNamespaced // +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object @@ -296,7 +886,7 @@ type ValidatingWebhook struct { // - If failurePolicy=Fail, reject the request // - If failurePolicy=Ignore, the error is ignored and the webhook is skipped // - // This is an alpha feature and managed by the AdmissionWebhookMatchConditions feature gate. + // This is a beta feature and managed by the AdmissionWebhookMatchConditions feature gate. // // +patchMergeKey=name // +patchStrategy=merge @@ -468,7 +1058,7 @@ type MutatingWebhook struct { // - If failurePolicy=Fail, reject the request // - If failurePolicy=Ignore, the error is ignored and the webhook is skipped // - // This is an alpha feature and managed by the AdmissionWebhookMatchConditions feature gate. + // This is a beta feature and managed by the AdmissionWebhookMatchConditions feature gate. // // +patchMergeKey=name // +patchStrategy=merge diff --git a/cluster-autoscaler/vendor/k8s.io/api/admissionregistration/v1beta1/types_swagger_doc_generated.go b/cluster-autoscaler/vendor/k8s.io/api/admissionregistration/v1beta1/types_swagger_doc_generated.go index 2c0a9f011795..adaf4bc11dbe 100644 --- a/cluster-autoscaler/vendor/k8s.io/api/admissionregistration/v1beta1/types_swagger_doc_generated.go +++ b/cluster-autoscaler/vendor/k8s.io/api/admissionregistration/v1beta1/types_swagger_doc_generated.go @@ -27,6 +27,26 @@ package v1beta1 // Those methods can be generated by using hack/update-codegen.sh // AUTO-GENERATED FUNCTIONS START HERE. DO NOT EDIT. 
+var map_AuditAnnotation = map[string]string{ + "": "AuditAnnotation describes how to produce an audit annotation for an API request.", + "key": "key specifies the audit annotation key. The audit annotation keys of a ValidatingAdmissionPolicy must be unique. The key must be a qualified name ([A-Za-z0-9][-A-Za-z0-9_.]*) no more than 63 bytes in length.\n\nThe key is combined with the resource name of the ValidatingAdmissionPolicy to construct an audit annotation key: \"{ValidatingAdmissionPolicy name}/{key}\".\n\nIf an admission webhook uses the same resource name as this ValidatingAdmissionPolicy and the same audit annotation key, the annotation key will be identical. In this case, the first annotation written with the key will be included in the audit event and all subsequent annotations with the same key will be discarded.\n\nRequired.", + "valueExpression": "valueExpression represents the expression which is evaluated by CEL to produce an audit annotation value. The expression must evaluate to either a string or null value. If the expression evaluates to a string, the audit annotation is included with the string value. If the expression evaluates to null or empty string the audit annotation will be omitted. The valueExpression may be no longer than 5kb in length. If the result of the valueExpression is more than 10kb in length, it will be truncated to 10kb.\n\nIf multiple ValidatingAdmissionPolicyBinding resources match an API request, then the valueExpression will be evaluated for each binding. All unique values produced by the valueExpressions will be joined together in a comma-separated list.\n\nRequired.", +} + +func (AuditAnnotation) SwaggerDoc() map[string]string { + return map_AuditAnnotation +} + +var map_ExpressionWarning = map[string]string{ + "": "ExpressionWarning is a warning information that targets a specific expression.", + "fieldRef": "The path to the field that refers the expression. 
For example, the reference to the expression of the first item of validations is \"spec.validations[0].expression\"", + "warning": "The content of type checking information in a human-readable form. Each line of the warning contains the type that the expression is checked against, followed by the type check error from the compiler.", +} + +func (ExpressionWarning) SwaggerDoc() map[string]string { + return map_ExpressionWarning +} + var map_MatchCondition = map[string]string{ "": "MatchCondition represents a condition which must be fulfilled for a request to be sent to a webhook.", "name": "Name is an identifier for this match condition, used for strategic merging of MatchConditions, as well as providing an identifier for logging purposes. A good name should be descriptive of the associated expression. Name must be a qualified name consisting of alphanumeric characters, '-', '_' or '.', and must start and end with an alphanumeric character (e.g. 'MyName', or 'my.name', or '123-abc', regex used for validation is '([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9]') with an optional DNS subdomain prefix and '/' (e.g. 'example.com/MyName')\n\nRequired.", @@ -37,6 +57,19 @@ func (MatchCondition) SwaggerDoc() map[string]string { return map_MatchCondition } +var map_MatchResources = map[string]string{ + "": "MatchResources decides whether to run the admission control policy on an object based on whether it meets the match criteria. The exclude rules take precedence over include rules (if a resource matches both, it is excluded)", + "namespaceSelector": "NamespaceSelector decides whether to run the admission control policy on an object based on whether the namespace for that object matches the selector. If the object itself is a namespace, the matching is performed on object.metadata.labels. 
If the object is another cluster scoped resource, it never skips the policy.\n\nFor example, to run the webhook on any objects whose namespace is not associated with \"runlevel\" of \"0\" or \"1\"; you will set the selector as follows: \"namespaceSelector\": {\n \"matchExpressions\": [\n {\n \"key\": \"runlevel\",\n \"operator\": \"NotIn\",\n \"values\": [\n \"0\",\n \"1\"\n ]\n }\n ]\n}\n\nIf instead you want to only run the policy on any objects whose namespace is associated with the \"environment\" of \"prod\" or \"staging\"; you will set the selector as follows: \"namespaceSelector\": {\n \"matchExpressions\": [\n {\n \"key\": \"environment\",\n \"operator\": \"In\",\n \"values\": [\n \"prod\",\n \"staging\"\n ]\n }\n ]\n}\n\nSee https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/ for more examples of label selectors.\n\nDefault to the empty LabelSelector, which matches everything.", + "objectSelector": "ObjectSelector decides whether to run the validation based on if the object has matching labels. objectSelector is evaluated against both the oldObject and newObject that would be sent to the cel validation, and is considered to match if either object matches the selector. A null object (oldObject in the case of create, or newObject in the case of delete) or an object that cannot have labels (like a DeploymentRollback or a PodProxyOptions object) is not considered to match. Use the object selector only if the webhook is opt-in, because end users may skip the admission webhook by setting the labels. Default to the empty LabelSelector, which matches everything.", + "resourceRules": "ResourceRules describes what operations on what resources/subresources the ValidatingAdmissionPolicy matches. The policy cares about an operation if it matches _any_ Rule.", + "excludeResourceRules": "ExcludeResourceRules describes what operations on what resources/subresources the ValidatingAdmissionPolicy should not care about. 
The exclude rules take precedence over include rules (if a resource matches both, it is excluded)", + "matchPolicy": "matchPolicy defines how the \"MatchResources\" list is used to match incoming requests. Allowed values are \"Exact\" or \"Equivalent\".\n\n- Exact: match a request only if it exactly matches a specified rule. For example, if deployments can be modified via apps/v1, apps/v1beta1, and extensions/v1beta1, but \"rules\" only included `apiGroups:[\"apps\"], apiVersions:[\"v1\"], resources: [\"deployments\"]`, a request to apps/v1beta1 or extensions/v1beta1 would not be sent to the ValidatingAdmissionPolicy.\n\n- Equivalent: match a request if it modifies a resource listed in rules, even via another API group or version. For example, if deployments can be modified via apps/v1, apps/v1beta1, and extensions/v1beta1, and \"rules\" only included `apiGroups:[\"apps\"], apiVersions:[\"v1\"], resources: [\"deployments\"]`, a request to apps/v1beta1 or extensions/v1beta1 would be converted to apps/v1 and sent to the ValidatingAdmissionPolicy.\n\nDefaults to \"Equivalent\"", +} + +func (MatchResources) SwaggerDoc() map[string]string { + return map_MatchResources +} + var map_MutatingWebhook = map[string]string{ "": "MutatingWebhook describes an admission webhook and the resources and operations it applies to.", "name": "The name of the admission webhook. Name should be fully qualified, e.g., imagepolicy.kubernetes.io, where \"imagepolicy\" is the name of the webhook, and kubernetes.io is the name of the organization. Required.", @@ -50,7 +83,7 @@ var map_MutatingWebhook = map[string]string{ "timeoutSeconds": "TimeoutSeconds specifies the timeout for this webhook. After the timeout passes, the webhook call will be ignored or the API call will fail based on the failure policy. The timeout value must be between 1 and 30 seconds.
Default to 30 seconds.", "admissionReviewVersions": "AdmissionReviewVersions is an ordered list of preferred `AdmissionReview` versions the Webhook expects. API server will try to use first version in the list which it supports. If none of the versions specified in this list supported by API server, validation will fail for this object. If a persisted webhook configuration specifies allowed versions and does not include any versions known to the API Server, calls to the webhook will fail and be subject to the failure policy. Default to `['v1beta1']`.", "reinvocationPolicy": "reinvocationPolicy indicates whether this webhook should be called multiple times as part of a single admission evaluation. Allowed values are \"Never\" and \"IfNeeded\".\n\nNever: the webhook will not be called more than once in a single admission evaluation.\n\nIfNeeded: the webhook will be called at least one additional time as part of the admission evaluation if the object being admitted is modified by other admission plugins after the initial webhook call. Webhooks that specify this option *must* be idempotent, able to process objects they previously admitted. Note: * the number of additional invocations is not guaranteed to be exactly one. * if additional invocations result in further modifications to the object, webhooks are not guaranteed to be invoked again. * webhooks that use this option may be reordered to minimize the number of additional invocations. * to validate an object after all mutations are guaranteed complete, use a validating admission webhook instead.\n\nDefaults to \"Never\".", - "matchConditions": "MatchConditions is a list of conditions that must be met for a request to be sent to this webhook. Match conditions filter requests that have already been matched by the rules, namespaceSelector, and objectSelector. An empty list of matchConditions matches all requests. There are a maximum of 64 match conditions allowed.\n\nThe exact matching logic is (in order):\n 1. 
If ANY matchCondition evaluates to FALSE, the webhook is skipped.\n 2. If ALL matchConditions evaluate to TRUE, the webhook is called.\n 3. If any matchCondition evaluates to an error (but none are FALSE):\n - If failurePolicy=Fail, reject the request\n - If failurePolicy=Ignore, the error is ignored and the webhook is skipped\n\nThis is an alpha feature and managed by the AdmissionWebhookMatchConditions feature gate.", + "matchConditions": "MatchConditions is a list of conditions that must be met for a request to be sent to this webhook. Match conditions filter requests that have already been matched by the rules, namespaceSelector, and objectSelector. An empty list of matchConditions matches all requests. There are a maximum of 64 match conditions allowed.\n\nThe exact matching logic is (in order):\n 1. If ANY matchCondition evaluates to FALSE, the webhook is skipped.\n 2. If ALL matchConditions evaluate to TRUE, the webhook is called.\n 3. If any matchCondition evaluates to an error (but none are FALSE):\n - If failurePolicy=Fail, reject the request\n - If failurePolicy=Ignore, the error is ignored and the webhook is skipped\n\nThis is a beta feature and managed by the AdmissionWebhookMatchConditions feature gate.", } func (MutatingWebhook) SwaggerDoc() map[string]string { @@ -77,6 +110,37 @@ func (MutatingWebhookConfigurationList) SwaggerDoc() map[string]string { return map_MutatingWebhookConfigurationList } +var map_NamedRuleWithOperations = map[string]string{ + "": "NamedRuleWithOperations is a tuple of Operations and Resources with ResourceNames.", + "resourceNames": "ResourceNames is an optional white list of names that the rule applies to. 
An empty set means that everything is allowed.", +} + +func (NamedRuleWithOperations) SwaggerDoc() map[string]string { + return map_NamedRuleWithOperations +} + +var map_ParamKind = map[string]string{ + "": "ParamKind is a tuple of Group Kind and Version.", + "apiVersion": "APIVersion is the API group version the resources belong to. In format of \"group/version\". Required.", + "kind": "Kind is the API kind the resources belong to. Required.", +} + +func (ParamKind) SwaggerDoc() map[string]string { + return map_ParamKind +} + +var map_ParamRef = map[string]string{ + "": "ParamRef describes how to locate the params to be used as input to expressions of rules applied by a policy binding.", + "name": "name is the name of the resource being referenced.\n\nOne of `name` or `selector` must be set, but `name` and `selector` are mutually exclusive properties. If one is set, the other must be unset.\n\nA single parameter used for all admission requests can be configured by setting the `name` field, leaving `selector` blank, and setting namespace if `paramKind` is namespace-scoped.", + "namespace": "namespace is the namespace of the referenced resource. Allows limiting the search for params to a specific namespace. Applies to both `name` and `selector` fields.\n\nA per-namespace parameter may be used by specifying a namespace-scoped `paramKind` in the policy and leaving this field empty.\n\n- If `paramKind` is cluster-scoped, this field MUST be unset. Setting this field results in a configuration error.\n\n- If `paramKind` is namespace-scoped, the namespace of the object being evaluated for admission will be used when this field is left unset. Take care that if this is left empty the binding must not match any cluster-scoped resources, which will result in an error.", + "selector": "selector can be used to match multiple param objects based on their labels. 
Supply selector: {} to match all resources of the ParamKind.\n\nIf multiple params are found, they are all evaluated with the policy expressions and the results are ANDed together.\n\nOne of `name` or `selector` must be set, but `name` and `selector` are mutually exclusive properties. If one is set, the other must be unset.", + "parameterNotFoundAction": "`parameterNotFoundAction` controls the behavior of the binding when the resource exists, and name or selector is valid, but there are no parameters matched by the binding. If the value is set to `Allow`, then no matched parameters will be treated as successful validation by the binding. If set to `Deny`, then no matched parameters will be subject to the `failurePolicy` of the policy.\n\nAllowed values are `Allow` or `Deny`\n\nRequired", +} + +func (ParamRef) SwaggerDoc() map[string]string { + return map_ParamRef +} + var map_ServiceReference = map[string]string{ "": "ServiceReference holds a reference to Service.legacy.k8s.io", "namespace": "`namespace` is the namespace of the service. 
Required", @@ -89,6 +153,94 @@ func (ServiceReference) SwaggerDoc() map[string]string { return map_ServiceReference } +var map_TypeChecking = map[string]string{ + "": "TypeChecking contains results of type checking the expressions in the ValidatingAdmissionPolicy", + "expressionWarnings": "The type checking warnings for each expression.", +} + +func (TypeChecking) SwaggerDoc() map[string]string { + return map_TypeChecking +} + +var map_ValidatingAdmissionPolicy = map[string]string{ + "": "ValidatingAdmissionPolicy describes the definition of an admission validation policy that accepts or rejects an object without changing it.", + "metadata": "Standard object metadata; More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata.", + "spec": "Specification of the desired behavior of the ValidatingAdmissionPolicy.", + "status": "The status of the ValidatingAdmissionPolicy, including warnings that are useful to determine if the policy behaves in the expected way. Populated by the system. Read-only.", +} + +func (ValidatingAdmissionPolicy) SwaggerDoc() map[string]string { + return map_ValidatingAdmissionPolicy +} + +var map_ValidatingAdmissionPolicyBinding = map[string]string{ + "": "ValidatingAdmissionPolicyBinding binds the ValidatingAdmissionPolicy with parameterized resources. ValidatingAdmissionPolicyBinding and parameter CRDs together define how cluster administrators configure policies for clusters.\n\nFor a given admission request, each binding will cause its policy to be evaluated N times, where N is 1 for policies/bindings that don't use params, otherwise N is the number of parameters selected by the binding.\n\nThe CEL expressions of a policy must have a computed CEL cost below the maximum CEL budget. Each evaluation of the policy is given an independent CEL cost budget.
Adding/removing policies, bindings, or params can not affect whether a given (policy, binding, param) combination is within its own CEL budget.", + "metadata": "Standard object metadata; More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata.", + "spec": "Specification of the desired behavior of the ValidatingAdmissionPolicyBinding.", +} + +func (ValidatingAdmissionPolicyBinding) SwaggerDoc() map[string]string { + return map_ValidatingAdmissionPolicyBinding +} + +var map_ValidatingAdmissionPolicyBindingList = map[string]string{ + "": "ValidatingAdmissionPolicyBindingList is a list of ValidatingAdmissionPolicyBinding.", + "metadata": "Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds", + "items": "List of PolicyBinding.", +} + +func (ValidatingAdmissionPolicyBindingList) SwaggerDoc() map[string]string { + return map_ValidatingAdmissionPolicyBindingList +} + +var map_ValidatingAdmissionPolicyBindingSpec = map[string]string{ + "": "ValidatingAdmissionPolicyBindingSpec is the specification of the ValidatingAdmissionPolicyBinding.", + "policyName": "PolicyName references a ValidatingAdmissionPolicy name which the ValidatingAdmissionPolicyBinding binds to. If the referenced resource does not exist, this binding is considered invalid and will be ignored. Required.", + "paramRef": "paramRef specifies the parameter resource used to configure the admission control policy. It should point to a resource of the type specified in ParamKind of the bound ValidatingAdmissionPolicy. If the policy specifies a ParamKind and the resource referred to by ParamRef does not exist, this binding is considered mis-configured and the FailurePolicy of the ValidatingAdmissionPolicy is applied.
If the policy does not specify a ParamKind then this field is ignored, and the rules are evaluated without a param.", + "matchResources": "MatchResources declares what resources match this binding and will be validated by it. Note that this is intersected with the policy's matchConstraints, so only requests that are matched by the policy can be selected by this. If this is unset, all resources matched by the policy are validated by this binding. When resourceRules is unset, it does not constrain resource matching. If a resource is matched by the other fields of this object, it will be validated. Note that this differs from ValidatingAdmissionPolicy matchConstraints, where resourceRules are required.", + "validationActions": "validationActions declares how Validations of the referenced ValidatingAdmissionPolicy are enforced. If a validation evaluates to false it is always enforced according to these actions.\n\nFailures defined by the ValidatingAdmissionPolicy's FailurePolicy are enforced according to these actions only if the FailurePolicy is set to Fail, otherwise the failures are ignored. This includes compilation errors, runtime errors and misconfigurations of the policy.\n\nvalidationActions is declared as a set of action values. Order does not matter. validationActions may not contain duplicates of the same action.\n\nThe supported actions values are:\n\n\"Deny\" specifies that a validation failure results in a denied request.\n\n\"Warn\" specifies that a validation failure is reported to the request client in HTTP Warning headers, with a warning code of 299. Warnings can be sent both for allowed or denied admission responses.\n\n\"Audit\" specifies that a validation failure is included in the published audit event for the request.
The audit event will contain a `validation.policy.admission.k8s.io/validation_failure` audit annotation with a value containing the details of the validation failures, formatted as a JSON list of objects, each with the following fields: - message: The validation failure message string - policy: The resource name of the ValidatingAdmissionPolicy - binding: The resource name of the ValidatingAdmissionPolicyBinding - expressionIndex: The index of the failed validations in the ValidatingAdmissionPolicy - validationActions: The enforcement actions enacted for the validation failure Example audit annotation: `\"validation.policy.admission.k8s.io/validation_failure\": \"[{\"message\": \"Invalid value\", {\"policy\": \"policy.example.com\", {\"binding\": \"policybinding.example.com\", {\"expressionIndex\": \"1\", {\"validationActions\": [\"Audit\"]}]\"`\n\nClients should expect to handle additional values by ignoring any values not recognized.\n\n\"Deny\" and \"Warn\" may not be used together since this combination needlessly duplicates the validation failure both in the API response body and the HTTP warning headers.\n\nRequired.", +} + +func (ValidatingAdmissionPolicyBindingSpec) SwaggerDoc() map[string]string { + return map_ValidatingAdmissionPolicyBindingSpec +} + +var map_ValidatingAdmissionPolicyList = map[string]string{ + "": "ValidatingAdmissionPolicyList is a list of ValidatingAdmissionPolicy.", + "metadata": "Standard list metadata. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds", + "items": "List of ValidatingAdmissionPolicy.", +} + +func (ValidatingAdmissionPolicyList) SwaggerDoc() map[string]string { + return map_ValidatingAdmissionPolicyList +} + +var map_ValidatingAdmissionPolicySpec = map[string]string{ + "": "ValidatingAdmissionPolicySpec is the specification of the desired behavior of the AdmissionPolicy.", + "paramKind": "ParamKind specifies the kind of resources used to parameterize this policy. If absent, there are no parameters for this policy and the param CEL variable will not be provided to validation expressions. If ParamKind refers to a non-existent kind, this policy definition is mis-configured and the FailurePolicy is applied. If paramKind is specified but paramRef is unset in ValidatingAdmissionPolicyBinding, the params variable will be null.", + "matchConstraints": "MatchConstraints specifies what resources this policy is designed to validate. The AdmissionPolicy cares about a request if it matches _all_ Constraints. However, in order to prevent clusters from being put into an unstable state that cannot be recovered from via the API, ValidatingAdmissionPolicy cannot match ValidatingAdmissionPolicy and ValidatingAdmissionPolicyBinding. Required.", + "validations": "Validations contain CEL expressions which are used to apply the validation. Validations and AuditAnnotations may not both be empty; at least one of Validations or AuditAnnotations is required.", + "failurePolicy": "failurePolicy defines how to handle failures for the admission policy. Failures can occur from CEL expression parse errors, type check errors, runtime errors and invalid or mis-configured policy definitions or bindings.\n\nA policy is invalid if spec.paramKind refers to a non-existent Kind.
A binding is invalid if spec.paramRef.name refers to a non-existent resource.\n\nfailurePolicy does not define how validations that evaluate to false are handled.\n\nWhen failurePolicy is set to Fail, ValidatingAdmissionPolicyBinding validationActions define how failures are enforced.\n\nAllowed values are Ignore or Fail. Defaults to Fail.", + "auditAnnotations": "auditAnnotations contains CEL expressions which are used to produce audit annotations for the audit event of the API request. validations and auditAnnotations may not both be empty; at least one of validations or auditAnnotations is required.", + "matchConditions": "MatchConditions is a list of conditions that must be met for a request to be validated. Match conditions filter requests that have already been matched by the rules, namespaceSelector, and objectSelector. An empty list of matchConditions matches all requests. There are a maximum of 64 match conditions allowed.\n\nIf a parameter object is provided, it can be accessed via the `params` handle in the same manner as validation expressions.\n\nThe exact matching logic is (in order):\n 1. If ANY matchCondition evaluates to FALSE, the policy is skipped.\n 2. If ALL matchConditions evaluate to TRUE, the policy is evaluated.\n 3. If any matchCondition evaluates to an error (but none are FALSE):\n - If failurePolicy=Fail, reject the request\n - If failurePolicy=Ignore, the policy is skipped", + "variables": "Variables contain definitions of variables that can be used in composition of other expressions. Each variable is defined as a named CEL expression. The variables defined here will be available under `variables` in other expressions of the policy except MatchConditions because MatchConditions are evaluated before the rest of the policy.\n\nThe expression of a variable can refer to other variables defined earlier in the list but not those after.
Thus, Variables must be sorted by the order of first appearance and acyclic.", +} + +func (ValidatingAdmissionPolicySpec) SwaggerDoc() map[string]string { + return map_ValidatingAdmissionPolicySpec +} + +var map_ValidatingAdmissionPolicyStatus = map[string]string{ + "": "ValidatingAdmissionPolicyStatus represents the status of an admission validation policy.", + "observedGeneration": "The generation observed by the controller.", + "typeChecking": "The results of type checking for each expression. Presence of this field indicates the completion of the type checking.", + "conditions": "The conditions represent the latest available observations of a policy's current state.", +} + +func (ValidatingAdmissionPolicyStatus) SwaggerDoc() map[string]string { + return map_ValidatingAdmissionPolicyStatus +} + var map_ValidatingWebhook = map[string]string{ "": "ValidatingWebhook describes an admission webhook and the resources and operations it applies to.", "name": "The name of the admission webhook. Name should be fully qualified, e.g., imagepolicy.kubernetes.io, where \"imagepolicy\" is the name of the webhook, and kubernetes.io is the name of the organization. Required.", @@ -101,7 +253,7 @@ var map_ValidatingWebhook = map[string]string{ "sideEffects": "SideEffects states whether this webhook has side effects. Acceptable values are: Unknown, None, Some, NoneOnDryRun Webhooks with side effects MUST implement a reconciliation system, since a request may be rejected by a future step in the admission chain and the side effects therefore need to be undone. Requests with the dryRun attribute will be auto-rejected if they match a webhook with sideEffects == Unknown or Some. Defaults to Unknown.", "timeoutSeconds": "TimeoutSeconds specifies the timeout for this webhook. After the timeout passes, the webhook call will be ignored or the API call will fail based on the failure policy. The timeout value must be between 1 and 30 seconds. 
Default to 30 seconds.", "admissionReviewVersions": "AdmissionReviewVersions is an ordered list of preferred `AdmissionReview` versions the Webhook expects. API server will try to use first version in the list which it supports. If none of the versions specified in this list supported by API server, validation will fail for this object. If a persisted webhook configuration specifies allowed versions and does not include any versions known to the API Server, calls to the webhook will fail and be subject to the failure policy. Default to `['v1beta1']`.", - "matchConditions": "MatchConditions is a list of conditions that must be met for a request to be sent to this webhook. Match conditions filter requests that have already been matched by the rules, namespaceSelector, and objectSelector. An empty list of matchConditions matches all requests. There are a maximum of 64 match conditions allowed.\n\nThe exact matching logic is (in order):\n 1. If ANY matchCondition evaluates to FALSE, the webhook is skipped.\n 2. If ALL matchConditions evaluate to TRUE, the webhook is called.\n 3. If any matchCondition evaluates to an error (but none are FALSE):\n - If failurePolicy=Fail, reject the request\n - If failurePolicy=Ignore, the error is ignored and the webhook is skipped\n\nThis is an alpha feature and managed by the AdmissionWebhookMatchConditions feature gate.", + "matchConditions": "MatchConditions is a list of conditions that must be met for a request to be sent to this webhook. Match conditions filter requests that have already been matched by the rules, namespaceSelector, and objectSelector. An empty list of matchConditions matches all requests. There are a maximum of 64 match conditions allowed.\n\nThe exact matching logic is (in order):\n 1. If ANY matchCondition evaluates to FALSE, the webhook is skipped.\n 2. If ALL matchConditions evaluate to TRUE, the webhook is called.\n 3. 
If any matchCondition evaluates to an error (but none are FALSE):\n - If failurePolicy=Fail, reject the request\n - If failurePolicy=Ignore, the error is ignored and the webhook is skipped\n\nThis is a beta feature and managed by the AdmissionWebhookMatchConditions feature gate.", } func (ValidatingWebhook) SwaggerDoc() map[string]string { @@ -128,6 +280,28 @@ func (ValidatingWebhookConfigurationList) SwaggerDoc() map[string]string { return map_ValidatingWebhookConfigurationList } +var map_Validation = map[string]string{ + "": "Validation specifies the CEL expression which is used to apply the validation.", + "expression": "Expression represents the expression which will be evaluated by CEL. ref: https://github.com/google/cel-spec CEL expressions have access to the contents of the API request/response, organized into CEL variables as well as some other useful variables:\n\n- 'object' - The object from the incoming request. The value is null for DELETE requests. - 'oldObject' - The existing object. The value is null for CREATE requests. - 'request' - Attributes of the API request([ref](/pkg/apis/admission/types.go#AdmissionRequest)). - 'params' - Parameter resource referred to by the policy binding being evaluated. Only populated if the policy has a ParamKind. - 'namespaceObject' - The namespace object that the incoming object belongs to. The value is null for cluster-scoped resources. - 'variables' - Map of composited variables, from its name to its lazily evaluated value.\n For example, a variable named 'foo' can be accessed as 'variables.foo'.\n- 'authorizer' - A CEL Authorizer. 
May be used to perform authorization checks for the principal (user or service account) of the request.\n See https://pkg.go.dev/k8s.io/apiserver/pkg/cel/library#Authz\n- 'authorizer.requestResource' - A CEL ResourceCheck constructed from the 'authorizer' and configured with the\n request resource.\n\nThe `apiVersion`, `kind`, `metadata.name` and `metadata.generateName` are always accessible from the root of the object. No other metadata properties are accessible.\n\nOnly property names of the form `[a-zA-Z_.-/][a-zA-Z0-9_.-/]*` are accessible. Accessible property names are escaped according to the following rules when accessed in the expression: - '__' escapes to '__underscores__' - '.' escapes to '__dot__' - '-' escapes to '__dash__' - '/' escapes to '__slash__' - Property names that exactly match a CEL RESERVED keyword escape to '__{keyword}__'. The keywords are:\n\t \"true\", \"false\", \"null\", \"in\", \"as\", \"break\", \"const\", \"continue\", \"else\", \"for\", \"function\", \"if\",\n\t \"import\", \"let\", \"loop\", \"package\", \"namespace\", \"return\".\nExamples:\n - Expression accessing a property named \"namespace\": {\"Expression\": \"object.__namespace__ > 0\"}\n - Expression accessing a property named \"x-prop\": {\"Expression\": \"object.x__dash__prop > 0\"}\n - Expression accessing a property named \"redact__d\": {\"Expression\": \"object.redact__underscores__d > 0\"}\n\nEquality on arrays with list type of 'set' or 'map' ignores element order, i.e. [1, 2] == [2, 1]. Concatenation on arrays with x-kubernetes-list-type use the semantics of the list type:\n - 'set': `X + Y` performs a union where the array positions of all elements in `X` are preserved and\n non-intersecting elements in `Y` are appended, retaining their partial order.\n - 'map': `X + Y` performs a merge where the array positions of all keys in `X` are preserved but the values\n are overwritten by values in `Y` when the key sets of `X` and `Y` intersect. 
Elements in `Y` with\n non-intersecting keys are appended, retaining their partial order.\nRequired.", + "message": "Message represents the message displayed when validation fails. The message is required if the Expression contains line breaks, and the message itself must not contain line breaks. If unset, the message is \"failed Expression: {Expression}\". e.g. \"must be a URL with the host matching spec.host\".", + "reason": "Reason represents a machine-readable description of why this validation failed. If this is the first validation in the list to fail, this reason, as well as the corresponding HTTP response code, are used in the HTTP response to the client. The currently supported reasons are: \"Unauthorized\", \"Forbidden\", \"Invalid\", \"RequestEntityTooLarge\". If not set, StatusReasonInvalid is used in the response to the client.", + "messageExpression": "messageExpression declares a CEL expression that evaluates to the validation failure message that is returned when this rule fails. Since messageExpression is used as a failure message, it must evaluate to a string. If both message and messageExpression are present on a validation, then messageExpression will be used if validation fails. If messageExpression results in a runtime error, the runtime error is logged, and the validation failure message is produced as if the messageExpression field were unset. If messageExpression evaluates to an empty string, a string with only spaces, or a string that contains line breaks, then the validation failure message will also be produced as if the messageExpression field were unset, and the fact that messageExpression produced an empty string/string with only spaces/string with line breaks will be logged. 
messageExpression has access to all the same variables as the `expression` except for 'authorizer' and 'authorizer.requestResource'. Example: \"object.x must be less than max (\"+string(params.max)+\")\"", +} + +func (Validation) SwaggerDoc() map[string]string { + return map_Validation +} + +var map_Variable = map[string]string{ + "": "Variable is the definition of a variable that is used for composition. A variable is defined as a named expression.", + "name": "Name is the name of the variable. The name must be a valid CEL identifier and unique among all variables. The variable can be accessed in other expressions through `variables` For example, if name is \"foo\", the variable will be available as `variables.foo`", + "expression": "Expression is the expression that will be evaluated as the value of the variable. The CEL expression has access to the same identifiers as the CEL expressions in Validation.", +} + +func (Variable) SwaggerDoc() map[string]string { + return map_Variable +} + var map_WebhookClientConfig = map[string]string{ "": "WebhookClientConfig contains the information to make a TLS connection with the webhook", "url": "`url` gives the location of the webhook, in standard URL form (`scheme://host:port/path`). Exactly one of `url` or `service` must be specified.\n\nThe `host` should not refer to a service running in the cluster; use the `service` field instead. The host might be resolved via external DNS in some apiservers (e.g., `kube-apiserver` cannot resolve in-cluster DNS as that would be a layering violation). `host` may also be an IP address.\n\nPlease note that using `localhost` or `127.0.0.1` as a `host` is risky unless you take great care to run this webhook on all hosts which run an apiserver which might need to make calls to this webhook. 
Such installs are likely to be non-portable, i.e., not easy to turn up in a new cluster.\n\nThe scheme must be \"https\"; the URL must begin with \"https://\".\n\nA path is optional, and if present may be any string permissible in a URL. You may use the path to pass an arbitrary string to the webhook, for example, a cluster identifier.\n\nAttempting to use a user or basic auth e.g. \"user:password@\" is not allowed. Fragments (\"#...\") and query parameters (\"?...\") are not allowed, either.", diff --git a/cluster-autoscaler/vendor/k8s.io/api/admissionregistration/v1beta1/zz_generated.deepcopy.go b/cluster-autoscaler/vendor/k8s.io/api/admissionregistration/v1beta1/zz_generated.deepcopy.go index 9c5299bdfa2d..4c10b1d1135a 100644 --- a/cluster-autoscaler/vendor/k8s.io/api/admissionregistration/v1beta1/zz_generated.deepcopy.go +++ b/cluster-autoscaler/vendor/k8s.io/api/admissionregistration/v1beta1/zz_generated.deepcopy.go @@ -22,11 +22,43 @@ limitations under the License. package v1beta1 import ( - v1 "k8s.io/api/admissionregistration/v1" - metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + admissionregistrationv1 "k8s.io/api/admissionregistration/v1" + v1 "k8s.io/apimachinery/pkg/apis/meta/v1" runtime "k8s.io/apimachinery/pkg/runtime" ) +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *AuditAnnotation) DeepCopyInto(out *AuditAnnotation) { + *out = *in + return +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new AuditAnnotation. +func (in *AuditAnnotation) DeepCopy() *AuditAnnotation { + if in == nil { + return nil + } + out := new(AuditAnnotation) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. 
+func (in *ExpressionWarning) DeepCopyInto(out *ExpressionWarning) { + *out = *in + return +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ExpressionWarning. +func (in *ExpressionWarning) DeepCopy() *ExpressionWarning { + if in == nil { + return nil + } + out := new(ExpressionWarning) + in.DeepCopyInto(out) + return out +} + // DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. func (in *MatchCondition) DeepCopyInto(out *MatchCondition) { *out = *in @@ -43,13 +75,58 @@ func (in *MatchCondition) DeepCopy() *MatchCondition { return out } +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *MatchResources) DeepCopyInto(out *MatchResources) { + *out = *in + if in.NamespaceSelector != nil { + in, out := &in.NamespaceSelector, &out.NamespaceSelector + *out = new(v1.LabelSelector) + (*in).DeepCopyInto(*out) + } + if in.ObjectSelector != nil { + in, out := &in.ObjectSelector, &out.ObjectSelector + *out = new(v1.LabelSelector) + (*in).DeepCopyInto(*out) + } + if in.ResourceRules != nil { + in, out := &in.ResourceRules, &out.ResourceRules + *out = make([]NamedRuleWithOperations, len(*in)) + for i := range *in { + (*in)[i].DeepCopyInto(&(*out)[i]) + } + } + if in.ExcludeResourceRules != nil { + in, out := &in.ExcludeResourceRules, &out.ExcludeResourceRules + *out = make([]NamedRuleWithOperations, len(*in)) + for i := range *in { + (*in)[i].DeepCopyInto(&(*out)[i]) + } + } + if in.MatchPolicy != nil { + in, out := &in.MatchPolicy, &out.MatchPolicy + *out = new(MatchPolicyType) + **out = **in + } + return +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new MatchResources. 
+func (in *MatchResources) DeepCopy() *MatchResources { + if in == nil { + return nil + } + out := new(MatchResources) + in.DeepCopyInto(out) + return out +} + // DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. func (in *MutatingWebhook) DeepCopyInto(out *MutatingWebhook) { *out = *in in.ClientConfig.DeepCopyInto(&out.ClientConfig) if in.Rules != nil { in, out := &in.Rules, &out.Rules - *out = make([]v1.RuleWithOperations, len(*in)) + *out = make([]admissionregistrationv1.RuleWithOperations, len(*in)) for i := range *in { (*in)[i].DeepCopyInto(&(*out)[i]) } @@ -66,12 +143,12 @@ func (in *MutatingWebhook) DeepCopyInto(out *MutatingWebhook) { } if in.NamespaceSelector != nil { in, out := &in.NamespaceSelector, &out.NamespaceSelector - *out = new(metav1.LabelSelector) + *out = new(v1.LabelSelector) (*in).DeepCopyInto(*out) } if in.ObjectSelector != nil { in, out := &in.ObjectSelector, &out.ObjectSelector - *out = new(metav1.LabelSelector) + *out = new(v1.LabelSelector) (*in).DeepCopyInto(*out) } if in.SideEffects != nil { @@ -178,6 +255,70 @@ func (in *MutatingWebhookConfigurationList) DeepCopyObject() runtime.Object { return nil } +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *NamedRuleWithOperations) DeepCopyInto(out *NamedRuleWithOperations) { + *out = *in + if in.ResourceNames != nil { + in, out := &in.ResourceNames, &out.ResourceNames + *out = make([]string, len(*in)) + copy(*out, *in) + } + in.RuleWithOperations.DeepCopyInto(&out.RuleWithOperations) + return +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new NamedRuleWithOperations. 
+func (in *NamedRuleWithOperations) DeepCopy() *NamedRuleWithOperations { + if in == nil { + return nil + } + out := new(NamedRuleWithOperations) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *ParamKind) DeepCopyInto(out *ParamKind) { + *out = *in + return +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ParamKind. +func (in *ParamKind) DeepCopy() *ParamKind { + if in == nil { + return nil + } + out := new(ParamKind) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *ParamRef) DeepCopyInto(out *ParamRef) { + *out = *in + if in.Selector != nil { + in, out := &in.Selector, &out.Selector + *out = new(v1.LabelSelector) + (*in).DeepCopyInto(*out) + } + if in.ParameterNotFoundAction != nil { + in, out := &in.ParameterNotFoundAction, &out.ParameterNotFoundAction + *out = new(ParameterNotFoundActionType) + **out = **in + } + return +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ParamRef. +func (in *ParamRef) DeepCopy() *ParamRef { + if in == nil { + return nil + } + out := new(ParamRef) + in.DeepCopyInto(out) + return out +} + // DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. func (in *ServiceReference) DeepCopyInto(out *ServiceReference) { *out = *in @@ -204,13 +345,267 @@ func (in *ServiceReference) DeepCopy() *ServiceReference { return out } +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. 
+func (in *TypeChecking) DeepCopyInto(out *TypeChecking) { + *out = *in + if in.ExpressionWarnings != nil { + in, out := &in.ExpressionWarnings, &out.ExpressionWarnings + *out = make([]ExpressionWarning, len(*in)) + copy(*out, *in) + } + return +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new TypeChecking. +func (in *TypeChecking) DeepCopy() *TypeChecking { + if in == nil { + return nil + } + out := new(TypeChecking) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *ValidatingAdmissionPolicy) DeepCopyInto(out *ValidatingAdmissionPolicy) { + *out = *in + out.TypeMeta = in.TypeMeta + in.ObjectMeta.DeepCopyInto(&out.ObjectMeta) + in.Spec.DeepCopyInto(&out.Spec) + in.Status.DeepCopyInto(&out.Status) + return +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ValidatingAdmissionPolicy. +func (in *ValidatingAdmissionPolicy) DeepCopy() *ValidatingAdmissionPolicy { + if in == nil { + return nil + } + out := new(ValidatingAdmissionPolicy) + in.DeepCopyInto(out) + return out +} + +// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object. +func (in *ValidatingAdmissionPolicy) DeepCopyObject() runtime.Object { + if c := in.DeepCopy(); c != nil { + return c + } + return nil +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *ValidatingAdmissionPolicyBinding) DeepCopyInto(out *ValidatingAdmissionPolicyBinding) { + *out = *in + out.TypeMeta = in.TypeMeta + in.ObjectMeta.DeepCopyInto(&out.ObjectMeta) + in.Spec.DeepCopyInto(&out.Spec) + return +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ValidatingAdmissionPolicyBinding. 
+func (in *ValidatingAdmissionPolicyBinding) DeepCopy() *ValidatingAdmissionPolicyBinding { + if in == nil { + return nil + } + out := new(ValidatingAdmissionPolicyBinding) + in.DeepCopyInto(out) + return out +} + +// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object. +func (in *ValidatingAdmissionPolicyBinding) DeepCopyObject() runtime.Object { + if c := in.DeepCopy(); c != nil { + return c + } + return nil +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *ValidatingAdmissionPolicyBindingList) DeepCopyInto(out *ValidatingAdmissionPolicyBindingList) { + *out = *in + out.TypeMeta = in.TypeMeta + in.ListMeta.DeepCopyInto(&out.ListMeta) + if in.Items != nil { + in, out := &in.Items, &out.Items + *out = make([]ValidatingAdmissionPolicyBinding, len(*in)) + for i := range *in { + (*in)[i].DeepCopyInto(&(*out)[i]) + } + } + return +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ValidatingAdmissionPolicyBindingList. +func (in *ValidatingAdmissionPolicyBindingList) DeepCopy() *ValidatingAdmissionPolicyBindingList { + if in == nil { + return nil + } + out := new(ValidatingAdmissionPolicyBindingList) + in.DeepCopyInto(out) + return out +} + +// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object. +func (in *ValidatingAdmissionPolicyBindingList) DeepCopyObject() runtime.Object { + if c := in.DeepCopy(); c != nil { + return c + } + return nil +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. 
+func (in *ValidatingAdmissionPolicyBindingSpec) DeepCopyInto(out *ValidatingAdmissionPolicyBindingSpec) { + *out = *in + if in.ParamRef != nil { + in, out := &in.ParamRef, &out.ParamRef + *out = new(ParamRef) + (*in).DeepCopyInto(*out) + } + if in.MatchResources != nil { + in, out := &in.MatchResources, &out.MatchResources + *out = new(MatchResources) + (*in).DeepCopyInto(*out) + } + if in.ValidationActions != nil { + in, out := &in.ValidationActions, &out.ValidationActions + *out = make([]ValidationAction, len(*in)) + copy(*out, *in) + } + return +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ValidatingAdmissionPolicyBindingSpec. +func (in *ValidatingAdmissionPolicyBindingSpec) DeepCopy() *ValidatingAdmissionPolicyBindingSpec { + if in == nil { + return nil + } + out := new(ValidatingAdmissionPolicyBindingSpec) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *ValidatingAdmissionPolicyList) DeepCopyInto(out *ValidatingAdmissionPolicyList) { + *out = *in + out.TypeMeta = in.TypeMeta + in.ListMeta.DeepCopyInto(&out.ListMeta) + if in.Items != nil { + in, out := &in.Items, &out.Items + *out = make([]ValidatingAdmissionPolicy, len(*in)) + for i := range *in { + (*in)[i].DeepCopyInto(&(*out)[i]) + } + } + return +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ValidatingAdmissionPolicyList. +func (in *ValidatingAdmissionPolicyList) DeepCopy() *ValidatingAdmissionPolicyList { + if in == nil { + return nil + } + out := new(ValidatingAdmissionPolicyList) + in.DeepCopyInto(out) + return out +} + +// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object. 
+func (in *ValidatingAdmissionPolicyList) DeepCopyObject() runtime.Object { + if c := in.DeepCopy(); c != nil { + return c + } + return nil +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *ValidatingAdmissionPolicySpec) DeepCopyInto(out *ValidatingAdmissionPolicySpec) { + *out = *in + if in.ParamKind != nil { + in, out := &in.ParamKind, &out.ParamKind + *out = new(ParamKind) + **out = **in + } + if in.MatchConstraints != nil { + in, out := &in.MatchConstraints, &out.MatchConstraints + *out = new(MatchResources) + (*in).DeepCopyInto(*out) + } + if in.Validations != nil { + in, out := &in.Validations, &out.Validations + *out = make([]Validation, len(*in)) + for i := range *in { + (*in)[i].DeepCopyInto(&(*out)[i]) + } + } + if in.FailurePolicy != nil { + in, out := &in.FailurePolicy, &out.FailurePolicy + *out = new(FailurePolicyType) + **out = **in + } + if in.AuditAnnotations != nil { + in, out := &in.AuditAnnotations, &out.AuditAnnotations + *out = make([]AuditAnnotation, len(*in)) + copy(*out, *in) + } + if in.MatchConditions != nil { + in, out := &in.MatchConditions, &out.MatchConditions + *out = make([]MatchCondition, len(*in)) + copy(*out, *in) + } + if in.Variables != nil { + in, out := &in.Variables, &out.Variables + *out = make([]Variable, len(*in)) + copy(*out, *in) + } + return +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ValidatingAdmissionPolicySpec. +func (in *ValidatingAdmissionPolicySpec) DeepCopy() *ValidatingAdmissionPolicySpec { + if in == nil { + return nil + } + out := new(ValidatingAdmissionPolicySpec) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. 
+func (in *ValidatingAdmissionPolicyStatus) DeepCopyInto(out *ValidatingAdmissionPolicyStatus) { + *out = *in + if in.TypeChecking != nil { + in, out := &in.TypeChecking, &out.TypeChecking + *out = new(TypeChecking) + (*in).DeepCopyInto(*out) + } + if in.Conditions != nil { + in, out := &in.Conditions, &out.Conditions + *out = make([]v1.Condition, len(*in)) + for i := range *in { + (*in)[i].DeepCopyInto(&(*out)[i]) + } + } + return +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ValidatingAdmissionPolicyStatus. +func (in *ValidatingAdmissionPolicyStatus) DeepCopy() *ValidatingAdmissionPolicyStatus { + if in == nil { + return nil + } + out := new(ValidatingAdmissionPolicyStatus) + in.DeepCopyInto(out) + return out +} + // DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. func (in *ValidatingWebhook) DeepCopyInto(out *ValidatingWebhook) { *out = *in in.ClientConfig.DeepCopyInto(&out.ClientConfig) if in.Rules != nil { in, out := &in.Rules, &out.Rules - *out = make([]v1.RuleWithOperations, len(*in)) + *out = make([]admissionregistrationv1.RuleWithOperations, len(*in)) for i := range *in { (*in)[i].DeepCopyInto(&(*out)[i]) } @@ -227,12 +622,12 @@ func (in *ValidatingWebhook) DeepCopyInto(out *ValidatingWebhook) { } if in.NamespaceSelector != nil { in, out := &in.NamespaceSelector, &out.NamespaceSelector - *out = new(metav1.LabelSelector) + *out = new(v1.LabelSelector) (*in).DeepCopyInto(*out) } if in.ObjectSelector != nil { in, out := &in.ObjectSelector, &out.ObjectSelector - *out = new(metav1.LabelSelector) + *out = new(v1.LabelSelector) (*in).DeepCopyInto(*out) } if in.SideEffects != nil { @@ -334,6 +729,43 @@ func (in *ValidatingWebhookConfigurationList) DeepCopyObject() runtime.Object { return nil } +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. 
+func (in *Validation) DeepCopyInto(out *Validation) { + *out = *in + if in.Reason != nil { + in, out := &in.Reason, &out.Reason + *out = new(v1.StatusReason) + **out = **in + } + return +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Validation. +func (in *Validation) DeepCopy() *Validation { + if in == nil { + return nil + } + out := new(Validation) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *Variable) DeepCopyInto(out *Variable) { + *out = *in + return +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Variable. +func (in *Variable) DeepCopy() *Variable { + if in == nil { + return nil + } + out := new(Variable) + in.DeepCopyInto(out) + return out +} + // DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. func (in *WebhookClientConfig) DeepCopyInto(out *WebhookClientConfig) { *out = *in diff --git a/cluster-autoscaler/vendor/k8s.io/api/admissionregistration/v1beta1/zz_generated.prerelease-lifecycle.go b/cluster-autoscaler/vendor/k8s.io/api/admissionregistration/v1beta1/zz_generated.prerelease-lifecycle.go index 09a92f47689e..c1be5122a879 100644 --- a/cluster-autoscaler/vendor/k8s.io/api/admissionregistration/v1beta1/zz_generated.prerelease-lifecycle.go +++ b/cluster-autoscaler/vendor/k8s.io/api/admissionregistration/v1beta1/zz_generated.prerelease-lifecycle.go @@ -73,6 +73,78 @@ func (in *MutatingWebhookConfigurationList) APILifecycleRemoved() (major, minor return 1, 22 } +// APILifecycleIntroduced is an autogenerated function, returning the release in which the API struct was introduced as int versions of major and minor for comparison. +// It is controlled by "k8s:prerelease-lifecycle-gen:introduced" tags in types.go. 
+func (in *ValidatingAdmissionPolicy) APILifecycleIntroduced() (major, minor int) { + return 1, 28 +} + +// APILifecycleDeprecated is an autogenerated function, returning the release in which the API struct was or will be deprecated as int versions of major and minor for comparison. +// It is controlled by "k8s:prerelease-lifecycle-gen:deprecated" tags in types.go or "k8s:prerelease-lifecycle-gen:introduced" plus three minor. +func (in *ValidatingAdmissionPolicy) APILifecycleDeprecated() (major, minor int) { + return 1, 31 +} + +// APILifecycleRemoved is an autogenerated function, returning the release in which the API is no longer served as int versions of major and minor for comparison. +// It is controlled by "k8s:prerelease-lifecycle-gen:removed" tags in types.go or "k8s:prerelease-lifecycle-gen:deprecated" plus three minor. +func (in *ValidatingAdmissionPolicy) APILifecycleRemoved() (major, minor int) { + return 1, 34 +} + +// APILifecycleIntroduced is an autogenerated function, returning the release in which the API struct was introduced as int versions of major and minor for comparison. +// It is controlled by "k8s:prerelease-lifecycle-gen:introduced" tags in types.go. +func (in *ValidatingAdmissionPolicyBinding) APILifecycleIntroduced() (major, minor int) { + return 1, 28 +} + +// APILifecycleDeprecated is an autogenerated function, returning the release in which the API struct was or will be deprecated as int versions of major and minor for comparison. +// It is controlled by "k8s:prerelease-lifecycle-gen:deprecated" tags in types.go or "k8s:prerelease-lifecycle-gen:introduced" plus three minor. +func (in *ValidatingAdmissionPolicyBinding) APILifecycleDeprecated() (major, minor int) { + return 1, 31 +} + +// APILifecycleRemoved is an autogenerated function, returning the release in which the API is no longer served as int versions of major and minor for comparison. 
+// It is controlled by "k8s:prerelease-lifecycle-gen:removed" tags in types.go or "k8s:prerelease-lifecycle-gen:deprecated" plus three minor. +func (in *ValidatingAdmissionPolicyBinding) APILifecycleRemoved() (major, minor int) { + return 1, 34 +} + +// APILifecycleIntroduced is an autogenerated function, returning the release in which the API struct was introduced as int versions of major and minor for comparison. +// It is controlled by "k8s:prerelease-lifecycle-gen:introduced" tags in types.go. +func (in *ValidatingAdmissionPolicyBindingList) APILifecycleIntroduced() (major, minor int) { + return 1, 28 +} + +// APILifecycleDeprecated is an autogenerated function, returning the release in which the API struct was or will be deprecated as int versions of major and minor for comparison. +// It is controlled by "k8s:prerelease-lifecycle-gen:deprecated" tags in types.go or "k8s:prerelease-lifecycle-gen:introduced" plus three minor. +func (in *ValidatingAdmissionPolicyBindingList) APILifecycleDeprecated() (major, minor int) { + return 1, 31 +} + +// APILifecycleRemoved is an autogenerated function, returning the release in which the API is no longer served as int versions of major and minor for comparison. +// It is controlled by "k8s:prerelease-lifecycle-gen:removed" tags in types.go or "k8s:prerelease-lifecycle-gen:deprecated" plus three minor. +func (in *ValidatingAdmissionPolicyBindingList) APILifecycleRemoved() (major, minor int) { + return 1, 34 +} + +// APILifecycleIntroduced is an autogenerated function, returning the release in which the API struct was introduced as int versions of major and minor for comparison. +// It is controlled by "k8s:prerelease-lifecycle-gen:introduced" tags in types.go. 
+func (in *ValidatingAdmissionPolicyList) APILifecycleIntroduced() (major, minor int) { + return 1, 28 +} + +// APILifecycleDeprecated is an autogenerated function, returning the release in which the API struct was or will be deprecated as int versions of major and minor for comparison. +// It is controlled by "k8s:prerelease-lifecycle-gen:deprecated" tags in types.go or "k8s:prerelease-lifecycle-gen:introduced" plus three minor. +func (in *ValidatingAdmissionPolicyList) APILifecycleDeprecated() (major, minor int) { + return 1, 31 +} + +// APILifecycleRemoved is an autogenerated function, returning the release in which the API is no longer served as int versions of major and minor for comparison. +// It is controlled by "k8s:prerelease-lifecycle-gen:removed" tags in types.go or "k8s:prerelease-lifecycle-gen:deprecated" plus three minor. +func (in *ValidatingAdmissionPolicyList) APILifecycleRemoved() (major, minor int) { + return 1, 34 +} + // APILifecycleIntroduced is an autogenerated function, returning the release in which the API struct was introduced as int versions of major and minor for comparison. // It is controlled by "k8s:prerelease-lifecycle-gen:introduced" tags in types.go. func (in *ValidatingWebhookConfiguration) APILifecycleIntroduced() (major, minor int) { diff --git a/cluster-autoscaler/vendor/k8s.io/api/apidiscovery/v2beta1/generated.proto b/cluster-autoscaler/vendor/k8s.io/api/apidiscovery/v2beta1/generated.proto index aa08b4978c24..a09af750ba36 100644 --- a/cluster-autoscaler/vendor/k8s.io/api/apidiscovery/v2beta1/generated.proto +++ b/cluster-autoscaler/vendor/k8s.io/api/apidiscovery/v2beta1/generated.proto @@ -71,7 +71,7 @@ message APIResourceDiscovery { // responseKind describes the group, version, and kind of the serialization schema for the object type this endpoint typically returns. 
// APIs may return other object types at their discretion, such as error conditions, requests for alternate representations, or other operation specific behavior. - // This value will be null if an APIService reports subresources but supports no operations on the parent resource + // This value will be null or empty if an APIService reports subresources but supports no operations on the parent resource optional k8s.io.apimachinery.pkg.apis.meta.v1.GroupVersionKind responseKind = 2; // scope indicates the scope of a resource, either Cluster or Namespaced @@ -111,7 +111,7 @@ message APISubresourceDiscovery { optional string subresource = 1; // responseKind describes the group, version, and kind of the serialization schema for the object type this endpoint typically returns. - // Some subresources do not return normal resources, these will have null return types. + // Some subresources do not return normal resources, these will have null or empty return types. optional k8s.io.apimachinery.pkg.apis.meta.v1.GroupVersionKind responseKind = 2; // acceptedTypes describes the kinds that this endpoint accepts. diff --git a/cluster-autoscaler/vendor/k8s.io/api/apidiscovery/v2beta1/types.go b/cluster-autoscaler/vendor/k8s.io/api/apidiscovery/v2beta1/types.go index 1aff3e3702f0..834293773047 100644 --- a/cluster-autoscaler/vendor/k8s.io/api/apidiscovery/v2beta1/types.go +++ b/cluster-autoscaler/vendor/k8s.io/api/apidiscovery/v2beta1/types.go @@ -92,7 +92,7 @@ type APIResourceDiscovery struct { Resource string `json:"resource" protobuf:"bytes,1,opt,name=resource"` // responseKind describes the group, version, and kind of the serialization schema for the object type this endpoint typically returns. 
- // This value will be null if an APIService reports subresources but supports no operations on the parent resource + // This value will be null or empty if an APIService reports subresources but supports no operations on the parent resource ResponseKind *v1.GroupVersionKind `json:"responseKind,omitempty" protobuf:"bytes,2,opt,name=responseKind"` // scope indicates the scope of a resource, either Cluster or Namespaced Scope ResourceScope `json:"scope" protobuf:"bytes,3,opt,name=scope"` @@ -141,7 +141,7 @@ type APISubresourceDiscovery struct { // for this resource across all versions. Subresource string `json:"subresource" protobuf:"bytes,1,opt,name=subresource"` // responseKind describes the group, version, and kind of the serialization schema for the object type this endpoint typically returns. - // Some subresources do not return normal resources, these will have null return types. + // Some subresources do not return normal resources, these will have null or empty return types. ResponseKind *v1.GroupVersionKind `json:"responseKind,omitempty" protobuf:"bytes,2,opt,name=responseKind"` // acceptedTypes describes the kinds that this endpoint accepts. 
// Subresources may accept the standard content types or define diff --git a/cluster-autoscaler/vendor/k8s.io/api/apiserverinternal/v1alpha1/generated.pb.go b/cluster-autoscaler/vendor/k8s.io/api/apiserverinternal/v1alpha1/generated.pb.go index 4effbc6c17e4..6871da414c3e 100644 --- a/cluster-autoscaler/vendor/k8s.io/api/apiserverinternal/v1alpha1/generated.pb.go +++ b/cluster-autoscaler/vendor/k8s.io/api/apiserverinternal/v1alpha1/generated.pb.go @@ -225,55 +225,57 @@ func init() { } var fileDescriptor_a3903ff5e3cc7a03 = []byte{ - // 768 bytes of a gzipped FileDescriptorProto - 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x9c, 0x55, 0xdf, 0x4e, 0x13, 0x4d, - 0x14, 0xef, 0xd2, 0x52, 0x60, 0xfa, 0x7d, 0xf4, 0x63, 0x3e, 0x08, 0xb5, 0x26, 0x5b, 0x6c, 0x82, - 0x41, 0x8d, 0xbb, 0xd2, 0x88, 0x91, 0x98, 0x68, 0x58, 0x20, 0x06, 0x05, 0x31, 0x03, 0xf1, 0x02, - 0xbd, 0x70, 0xba, 0x3b, 0x6e, 0xd7, 0x76, 0x77, 0x36, 0x3b, 0xd3, 0x26, 0xdc, 0x18, 0x1f, 0xc1, - 0x07, 0xf1, 0xd2, 0x87, 0xe0, 0xca, 0x70, 0x63, 0x42, 0x62, 0xd2, 0xc8, 0xfa, 0x16, 0x5c, 0x99, - 0x99, 0xdd, 0xb6, 0x6c, 0xbb, 0xc4, 0x86, 0x8b, 0x26, 0x9d, 0x73, 0xce, 0xef, 0x77, 0xfe, 0xcc, - 0x6f, 0xce, 0x82, 0x57, 0xcd, 0xc7, 0x4c, 0x73, 0xa8, 0xde, 0x6c, 0xd7, 0x49, 0xe0, 0x11, 0x4e, - 0x98, 0xde, 0x21, 0x9e, 0x45, 0x03, 0x3d, 0x76, 0x60, 0xdf, 0x11, 0x3f, 0x46, 0x82, 0x0e, 0x09, - 0x1c, 0x8f, 0x93, 0xc0, 0xc3, 0x2d, 0xbd, 0xb3, 0x8a, 0x5b, 0x7e, 0x03, 0xaf, 0xea, 0x36, 0xf1, - 0x48, 0x80, 0x39, 0xb1, 0x34, 0x3f, 0xa0, 0x9c, 0xc2, 0xe5, 0x08, 0xa6, 0x61, 0xdf, 0xd1, 0x46, - 0x60, 0x5a, 0x0f, 0x56, 0xbe, 0x6f, 0x3b, 0xbc, 0xd1, 0xae, 0x6b, 0x26, 0x75, 0x75, 0x9b, 0xda, - 0x54, 0x97, 0xe8, 0x7a, 0xfb, 0x83, 0x3c, 0xc9, 0x83, 0xfc, 0x17, 0xb1, 0x96, 0x1f, 0x0e, 0x8a, - 0x71, 0xb1, 0xd9, 0x70, 0x3c, 0x12, 0x1c, 0xeb, 0x7e, 0xd3, 0x96, 0x95, 0xe9, 0x2e, 0xe1, 0x58, - 0xef, 0x8c, 0xd4, 0x52, 0xd6, 0xaf, 0x42, 0x05, 0x6d, 0x8f, 0x3b, 0x2e, 0x19, 0x01, 0x3c, 0xfa, - 0x1b, 0x80, 0x99, 0x0d, 0xe2, 0xe2, 0x61, 
0x5c, 0xf5, 0x87, 0x02, 0xe6, 0x0f, 0x64, 0xa7, 0x07, - 0x9c, 0x06, 0xd8, 0x26, 0x6f, 0x48, 0xc0, 0x1c, 0xea, 0xc1, 0x35, 0x50, 0xc0, 0xbe, 0x13, 0xb9, - 0x76, 0xb6, 0x4a, 0xca, 0x92, 0xb2, 0x32, 0x63, 0xfc, 0x7f, 0xd2, 0xad, 0x64, 0xc2, 0x6e, 0xa5, - 0xb0, 0xf1, 0x7a, 0xa7, 0xe7, 0x42, 0x97, 0xe3, 0xe0, 0x06, 0x28, 0x12, 0xcf, 0xa4, 0x96, 0xe3, - 0xd9, 0x31, 0x53, 0x69, 0x42, 0x42, 0x17, 0x63, 0x68, 0x71, 0x3b, 0xe9, 0x46, 0xc3, 0xf1, 0x70, - 0x13, 0xcc, 0x59, 0xc4, 0xa4, 0x16, 0xae, 0xb7, 0x7a, 0xd5, 0xb0, 0x52, 0x76, 0x29, 0xbb, 0x32, - 0x63, 0x2c, 0x84, 0xdd, 0xca, 0xdc, 0xd6, 0xb0, 0x13, 0x8d, 0xc6, 0x57, 0xbf, 0x4d, 0x80, 0xd9, - 0xa1, 0x8e, 0xde, 0x83, 0x69, 0x31, 0x6e, 0x0b, 0x73, 0x2c, 0xdb, 0x29, 0xd4, 0x1e, 0x68, 0x83, - 0x2b, 0xef, 0x4f, 0x4d, 0xf3, 0x9b, 0xb6, 0xbc, 0x7f, 0x4d, 0x44, 0x6b, 0x9d, 0x55, 0x6d, 0xbf, - 0xfe, 0x91, 0x98, 0x7c, 0x8f, 0x70, 0x6c, 0xc0, 0xb8, 0x0b, 0x30, 0xb0, 0xa1, 0x3e, 0x2b, 0x7c, - 0x0b, 0x72, 0xcc, 0x27, 0xa6, 0xec, 0xb8, 0x50, 0x5b, 0xd7, 0xc6, 0x12, 0x94, 0x96, 0x2c, 0xf3, - 0xc0, 0x27, 0xa6, 0xf1, 0x4f, 0x9c, 0x26, 0x27, 0x4e, 0x48, 0x92, 0x42, 0x13, 0xe4, 0x19, 0xc7, - 0xbc, 0x2d, 0x66, 0x21, 0xe8, 0x9f, 0x5c, 0x8f, 0x5e, 0x52, 0x18, 0xb3, 0x71, 0x82, 0x7c, 0x74, - 0x46, 0x31, 0x75, 0xf5, 0x6b, 0x16, 0x2c, 0x26, 0x01, 0x9b, 0xd4, 0xb3, 0x1c, 0x2e, 0xe6, 0xf7, - 0x0c, 0xe4, 0xf8, 0xb1, 0x4f, 0x62, 0x29, 0xdc, 0xeb, 0x95, 0x78, 0x78, 0xec, 0x93, 0x8b, 0x6e, - 0xe5, 0xe6, 0x15, 0x30, 0xe1, 0x46, 0x12, 0x08, 0xd7, 0xfb, 0x1d, 0x44, 0x92, 0xb8, 0x95, 0x2c, - 0xe2, 0xa2, 0x5b, 0x29, 0xf6, 0x61, 0xc9, 0xba, 0xe0, 0x0b, 0x00, 0x69, 0x5d, 0x76, 0x68, 0x3d, - 0x8f, 0x14, 0x2c, 0x94, 0x25, 0x06, 0x91, 0x35, 0xca, 0x31, 0x0d, 0xdc, 0x1f, 0x89, 0x40, 0x29, - 0x28, 0xd8, 0x01, 0xb0, 0x85, 0x19, 0x3f, 0x0c, 0xb0, 0xc7, 0xa2, 0x12, 0x1d, 0x97, 0x94, 0x72, - 0x72, 0xa8, 0x77, 0xc7, 0x53, 0x84, 0x40, 0x0c, 0xf2, 0xee, 0x8e, 0xb0, 0xa1, 0x94, 0x0c, 0xf0, - 0x36, 0xc8, 0x07, 0x04, 0x33, 0xea, 0x95, 0x26, 0x65, 0xfb, 0xfd, 0x3b, 0x40, 
0xd2, 0x8a, 0x62, - 0x2f, 0xbc, 0x03, 0xa6, 0x5c, 0xc2, 0x18, 0xb6, 0x49, 0x29, 0x2f, 0x03, 0x8b, 0x71, 0xe0, 0xd4, - 0x5e, 0x64, 0x46, 0x3d, 0x7f, 0xf5, 0xbb, 0x02, 0x60, 0x72, 0xee, 0xbb, 0x0e, 0xe3, 0xf0, 0xdd, - 0x88, 0xd2, 0xb5, 0xf1, 0xfa, 0x12, 0x68, 0xa9, 0xf3, 0xff, 0xe2, 0x94, 0xd3, 0x3d, 0xcb, 0x25, - 0x95, 0x1f, 0x81, 0x49, 0x87, 0x13, 0x57, 0xdc, 0x62, 0x76, 0xa5, 0x50, 0x5b, 0xbb, 0x96, 0x0e, - 0x8d, 0x7f, 0xe3, 0x0c, 0x93, 0x3b, 0x82, 0x0b, 0x45, 0x94, 0xd5, 0xf9, 0xe1, 0x7e, 0xc4, 0x03, - 0xa8, 0xfe, 0x9c, 0x00, 0xf3, 0x69, 0x32, 0x86, 0x9f, 0x40, 0x91, 0x25, 0xec, 0xac, 0xa4, 0xc8, - 0xa2, 0xc6, 0x7e, 0x1c, 0x29, 0xab, 0x6f, 0xb0, 0xaa, 0x92, 0x76, 0x86, 0x86, 0x93, 0xc1, 0x7d, - 0xb0, 0x60, 0x52, 0xd7, 0xa5, 0xde, 0x76, 0xea, 0xce, 0xbb, 0x11, 0x76, 0x2b, 0x0b, 0x9b, 0x69, - 0x01, 0x28, 0x1d, 0x07, 0x03, 0x00, 0xcc, 0xde, 0x13, 0x88, 0x96, 0x5e, 0xa1, 0xf6, 0xf4, 0x5a, - 0x03, 0xee, 0xbf, 0xa4, 0xc1, 0xce, 0xea, 0x9b, 0x18, 0xba, 0x94, 0xc5, 0x78, 0x79, 0x72, 0xae, - 0x66, 0x4e, 0xcf, 0xd5, 0xcc, 0xd9, 0xb9, 0x9a, 0xf9, 0x1c, 0xaa, 0xca, 0x49, 0xa8, 0x2a, 0xa7, - 0xa1, 0xaa, 0x9c, 0x85, 0xaa, 0xf2, 0x2b, 0x54, 0x95, 0x2f, 0xbf, 0xd5, 0xcc, 0xd1, 0xf2, 0x58, - 0x1f, 0xd5, 0x3f, 0x01, 0x00, 0x00, 0xff, 0xff, 0xa0, 0xd0, 0x65, 0xbc, 0x95, 0x07, 0x00, 0x00, + // 790 bytes of a gzipped FileDescriptorProto + 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x9c, 0x55, 0x41, 0x4f, 0xdb, 0x48, + 0x14, 0x8e, 0x49, 0x08, 0x30, 0xd9, 0x4d, 0x96, 0x59, 0x10, 0xd9, 0xac, 0xe4, 0xb0, 0x91, 0x58, + 0xb1, 0xbb, 0x5a, 0x7b, 0x89, 0x96, 0xaa, 0xb4, 0x52, 0x2b, 0x0c, 0xa8, 0xa2, 0x85, 0x52, 0x4d, + 0x50, 0x0f, 0xb4, 0x87, 0x4e, 0xec, 0xa9, 0xe3, 0x26, 0xf6, 0x58, 0x9e, 0x49, 0x24, 0x2e, 0x55, + 0x7f, 0x42, 0xfb, 0x3f, 0x7a, 0xec, 0x8f, 0xe0, 0x54, 0x71, 0x44, 0xaa, 0x14, 0x15, 0xf7, 0x5f, + 0x70, 0xaa, 0x66, 0xec, 0x38, 0x38, 0x09, 0x6a, 0xc4, 0x21, 0x52, 0xe6, 0xbd, 0xf7, 0x7d, 0xef, + 0xcd, 0x37, 0xdf, 0x8c, 0xc1, 0xd3, 0xf6, 0x5d, 0xa6, 0x39, 0x54, 
0x6f, 0x77, 0x9b, 0x24, 0xf0, + 0x08, 0x27, 0x4c, 0xef, 0x11, 0xcf, 0xa2, 0x81, 0x1e, 0x27, 0xb0, 0xef, 0x88, 0x1f, 0x23, 0x41, + 0x8f, 0x04, 0x8e, 0xc7, 0x49, 0xe0, 0xe1, 0x8e, 0xde, 0xdb, 0xc0, 0x1d, 0xbf, 0x85, 0x37, 0x74, + 0x9b, 0x78, 0x24, 0xc0, 0x9c, 0x58, 0x9a, 0x1f, 0x50, 0x4e, 0xe1, 0x5a, 0x04, 0xd3, 0xb0, 0xef, + 0x68, 0x63, 0x30, 0x6d, 0x00, 0xab, 0xfc, 0x6b, 0x3b, 0xbc, 0xd5, 0x6d, 0x6a, 0x26, 0x75, 0x75, + 0x9b, 0xda, 0x54, 0x97, 0xe8, 0x66, 0xf7, 0xb5, 0x5c, 0xc9, 0x85, 0xfc, 0x17, 0xb1, 0x56, 0xfe, + 0x1f, 0x0e, 0xe3, 0x62, 0xb3, 0xe5, 0x78, 0x24, 0x38, 0xd5, 0xfd, 0xb6, 0x2d, 0x27, 0xd3, 0x5d, + 0xc2, 0xb1, 0xde, 0x1b, 0x9b, 0xa5, 0xa2, 0xdf, 0x84, 0x0a, 0xba, 0x1e, 0x77, 0x5c, 0x32, 0x06, + 0xb8, 0xf3, 0x23, 0x00, 0x33, 0x5b, 0xc4, 0xc5, 0xa3, 0xb8, 0xda, 0x87, 0x19, 0xb0, 0xd4, 0x90, + 0x3b, 0x6d, 0x70, 0x1a, 0x60, 0x9b, 0x3c, 0x27, 0x01, 0x73, 0xa8, 0x07, 0x37, 0x41, 0x01, 0xfb, + 0x4e, 0x94, 0xda, 0xdf, 0x2d, 0x2b, 0xab, 0xca, 0xfa, 0x82, 0xf1, 0xeb, 0x59, 0xbf, 0x9a, 0x09, + 0xfb, 0xd5, 0xc2, 0xf6, 0xb3, 0xfd, 0x41, 0x0a, 0x5d, 0xaf, 0x83, 0xdb, 0xa0, 0x44, 0x3c, 0x93, + 0x5a, 0x8e, 0x67, 0xc7, 0x4c, 0xe5, 0x19, 0x09, 0x5d, 0x89, 0xa1, 0xa5, 0xbd, 0x74, 0x1a, 0x8d, + 0xd6, 0xc3, 0x1d, 0xb0, 0x68, 0x11, 0x93, 0x5a, 0xb8, 0xd9, 0x19, 0x4c, 0xc3, 0xca, 0xd9, 0xd5, + 0xec, 0xfa, 0x82, 0xb1, 0x1c, 0xf6, 0xab, 0x8b, 0xbb, 0xa3, 0x49, 0x34, 0x5e, 0x0f, 0xef, 0x81, + 0xa2, 0x3c, 0x40, 0x2b, 0x61, 0xc8, 0x49, 0x06, 0x18, 0xf6, 0xab, 0xc5, 0x46, 0x2a, 0x83, 0x46, + 0x2a, 0x6b, 0x9f, 0x66, 0x40, 0x71, 0x44, 0x8d, 0x57, 0x60, 0x5e, 0x1c, 0x95, 0x85, 0x39, 0x96, + 0x52, 0x14, 0xea, 0xff, 0x69, 0x43, 0xbb, 0x24, 0x8a, 0x6b, 0x7e, 0xdb, 0x96, 0xde, 0xd1, 0x44, + 0xb5, 0xd6, 0xdb, 0xd0, 0x8e, 0x9a, 0x6f, 0x88, 0xc9, 0x0f, 0x09, 0xc7, 0x06, 0x8c, 0x15, 0x00, + 0xc3, 0x18, 0x4a, 0x58, 0xe1, 0x0b, 0x90, 0x63, 0x3e, 0x31, 0xa5, 0x5a, 0x85, 0xfa, 0x96, 0x36, + 0x95, 0x19, 0xb5, 0xf4, 0x98, 0x0d, 0x9f, 0x98, 0xc6, 0x4f, 0x71, 0x9b, 0x9c, 0x58, 0x21, 0x49, + 0x0a, 
0x4d, 0x90, 0x67, 0x1c, 0xf3, 0xae, 0xd0, 0x51, 0xd0, 0xdf, 0xbf, 0x1d, 0xbd, 0xa4, 0x30, + 0x8a, 0x71, 0x83, 0x7c, 0xb4, 0x46, 0x31, 0x75, 0xed, 0x63, 0x16, 0xac, 0xa4, 0x01, 0x3b, 0xd4, + 0xb3, 0x1c, 0x2e, 0xf4, 0x7b, 0x08, 0x72, 0xfc, 0xd4, 0x27, 0xb1, 0x8d, 0xfe, 0x19, 0x8c, 0x78, + 0x7c, 0xea, 0x93, 0xab, 0x7e, 0xf5, 0xf7, 0x1b, 0x60, 0x22, 0x8d, 0x24, 0x10, 0x6e, 0x25, 0x3b, + 0x88, 0xec, 0xf4, 0x47, 0x7a, 0x88, 0xab, 0x7e, 0xb5, 0x94, 0xc0, 0xd2, 0x73, 0xc1, 0xc7, 0x00, + 0xd2, 0x66, 0x74, 0xc4, 0x8f, 0x22, 0xf7, 0x0b, 0x57, 0x0a, 0x21, 0xb2, 0x46, 0x25, 0xa6, 0x81, + 0x47, 0x63, 0x15, 0x68, 0x02, 0x0a, 0xf6, 0x00, 0xec, 0x60, 0xc6, 0x8f, 0x03, 0xec, 0xb1, 0x68, + 0x44, 0xc7, 0x25, 0xe5, 0x9c, 0x14, 0xf5, 0xef, 0xe9, 0x1c, 0x21, 0x10, 0xc3, 0xbe, 0x07, 0x63, + 0x6c, 0x68, 0x42, 0x07, 0xf8, 0x27, 0xc8, 0x07, 0x04, 0x33, 0xea, 0x95, 0x67, 0xe5, 0xf6, 0x93, + 0x33, 0x40, 0x32, 0x8a, 0xe2, 0x2c, 0xfc, 0x0b, 0xcc, 0xb9, 0x84, 0x31, 0x6c, 0x93, 0x72, 0x5e, + 0x16, 0x96, 0xe2, 0xc2, 0xb9, 0xc3, 0x28, 0x8c, 0x06, 0xf9, 0xda, 0x67, 0x05, 0xc0, 0xb4, 0xee, + 0x07, 0x0e, 0xe3, 0xf0, 0xe5, 0x98, 0xd3, 0xb5, 0xe9, 0xf6, 0x25, 0xd0, 0xd2, 0xe7, 0xbf, 0xc4, + 0x2d, 0xe7, 0x07, 0x91, 0x6b, 0x2e, 0x3f, 0x01, 0xb3, 0x0e, 0x27, 0xae, 0x38, 0xc5, 0xec, 0x7a, + 0xa1, 0xbe, 0x79, 0x2b, 0x1f, 0x1a, 0x3f, 0xc7, 0x1d, 0x66, 0xf7, 0x05, 0x17, 0x8a, 0x28, 0x6b, + 0x4b, 0xa3, 0xfb, 0x11, 0x17, 0xa0, 0xf6, 0x45, 0x3c, 0x70, 0x13, 0x6c, 0x0c, 0xdf, 0x82, 0x12, + 0x4b, 0xc5, 0x59, 0x59, 0x91, 0x43, 0x4d, 0x7d, 0x39, 0x26, 0x3c, 0x9b, 0xc3, 0x67, 0x2e, 0x1d, + 0x67, 0x68, 0xb4, 0x19, 0x3c, 0x02, 0xcb, 0x26, 0x75, 0x5d, 0xea, 0xed, 0x4d, 0x7c, 0x2f, 0x7f, + 0x0b, 0xfb, 0xd5, 0xe5, 0x9d, 0x49, 0x05, 0x68, 0x32, 0x0e, 0x06, 0x00, 0x98, 0x83, 0x2b, 0x10, + 0x3d, 0x98, 0x85, 0xfa, 0x83, 0x5b, 0x09, 0x9c, 0xdc, 0xa4, 0xe1, 0x9b, 0x95, 0x84, 0x18, 0xba, + 0xd6, 0xc5, 0x78, 0x72, 0x76, 0xa9, 0x66, 0xce, 0x2f, 0xd5, 0xcc, 0xc5, 0xa5, 0x9a, 0x79, 0x17, + 0xaa, 0xca, 0x59, 0xa8, 0x2a, 0xe7, 0xa1, 
0xaa, 0x5c, 0x84, 0xaa, 0xf2, 0x35, 0x54, 0x95, 0xf7, + 0xdf, 0xd4, 0xcc, 0xc9, 0xda, 0x54, 0x1f, 0xe4, 0xef, 0x01, 0x00, 0x00, 0xff, 0xff, 0xa0, 0x3a, + 0x2e, 0x07, 0xd1, 0x07, 0x00, 0x00, } func (m *ServerStorageVersion) Marshal() (dAtA []byte, err error) { @@ -296,6 +298,15 @@ func (m *ServerStorageVersion) MarshalToSizedBuffer(dAtA []byte) (int, error) { _ = i var l int _ = l + if len(m.ServedVersions) > 0 { + for iNdEx := len(m.ServedVersions) - 1; iNdEx >= 0; iNdEx-- { + i -= len(m.ServedVersions[iNdEx]) + copy(dAtA[i:], m.ServedVersions[iNdEx]) + i = encodeVarintGenerated(dAtA, i, uint64(len(m.ServedVersions[iNdEx]))) + i-- + dAtA[i] = 0x22 + } + } if len(m.DecodableVersions) > 0 { for iNdEx := len(m.DecodableVersions) - 1; iNdEx >= 0; iNdEx-- { i -= len(m.DecodableVersions[iNdEx]) @@ -582,6 +593,12 @@ func (m *ServerStorageVersion) Size() (n int) { n += 1 + l + sovGenerated(uint64(l)) } } + if len(m.ServedVersions) > 0 { + for _, s := range m.ServedVersions { + l = len(s) + n += 1 + l + sovGenerated(uint64(l)) + } + } return n } @@ -685,6 +702,7 @@ func (this *ServerStorageVersion) String() string { `APIServerID:` + fmt.Sprintf("%v", this.APIServerID) + `,`, `EncodingVersion:` + fmt.Sprintf("%v", this.EncodingVersion) + `,`, `DecodableVersions:` + fmt.Sprintf("%v", this.DecodableVersions) + `,`, + `ServedVersions:` + fmt.Sprintf("%v", this.ServedVersions) + `,`, `}`, }, "") return s @@ -896,6 +914,38 @@ func (m *ServerStorageVersion) Unmarshal(dAtA []byte) error { } m.DecodableVersions = append(m.DecodableVersions, string(dAtA[iNdEx:postIndex])) iNdEx = postIndex + case 4: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field ServedVersions", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowGenerated + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + 
intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthGenerated + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthGenerated + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.ServedVersions = append(m.ServedVersions, string(dAtA[iNdEx:postIndex])) + iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipGenerated(dAtA[iNdEx:]) diff --git a/cluster-autoscaler/vendor/k8s.io/api/apiserverinternal/v1alpha1/generated.proto b/cluster-autoscaler/vendor/k8s.io/api/apiserverinternal/v1alpha1/generated.proto index 63c45d54d716..6e6bab52182b 100644 --- a/cluster-autoscaler/vendor/k8s.io/api/apiserverinternal/v1alpha1/generated.proto +++ b/cluster-autoscaler/vendor/k8s.io/api/apiserverinternal/v1alpha1/generated.proto @@ -42,6 +42,11 @@ message ServerStorageVersion { // The encodingVersion must be included in the decodableVersions. // +listType=set repeated string decodableVersions = 3; + + // The API server can serve these versions. + // DecodableVersions must include all ServedVersions. + // +listType=set + repeated string servedVersions = 4; } // Storage version of a specific resource. diff --git a/cluster-autoscaler/vendor/k8s.io/api/apiserverinternal/v1alpha1/types.go b/cluster-autoscaler/vendor/k8s.io/api/apiserverinternal/v1alpha1/types.go index a0437b5074cc..0ffcf95f0667 100644 --- a/cluster-autoscaler/vendor/k8s.io/api/apiserverinternal/v1alpha1/types.go +++ b/cluster-autoscaler/vendor/k8s.io/api/apiserverinternal/v1alpha1/types.go @@ -77,6 +77,11 @@ type ServerStorageVersion struct { // The encodingVersion must be included in the decodableVersions. // +listType=set DecodableVersions []string `json:"decodableVersions,omitempty" protobuf:"bytes,3,opt,name=decodableVersions"` + + // The API server can serve these versions. + // DecodableVersions must include all ServedVersions. 
+ // +listType=set + ServedVersions []string `json:"servedVersions,omitempty" protobuf:"bytes,4,opt,name=servedVersions"` } type StorageVersionConditionType string diff --git a/cluster-autoscaler/vendor/k8s.io/api/apiserverinternal/v1alpha1/types_swagger_doc_generated.go b/cluster-autoscaler/vendor/k8s.io/api/apiserverinternal/v1alpha1/types_swagger_doc_generated.go index 3b75fa65bc33..6fd1c3ebe8a9 100644 --- a/cluster-autoscaler/vendor/k8s.io/api/apiserverinternal/v1alpha1/types_swagger_doc_generated.go +++ b/cluster-autoscaler/vendor/k8s.io/api/apiserverinternal/v1alpha1/types_swagger_doc_generated.go @@ -32,6 +32,7 @@ var map_ServerStorageVersion = map[string]string{ "apiServerID": "The ID of the reporting API server.", "encodingVersion": "The API server encodes the object to this version when persisting it in the backend (e.g., etcd).", "decodableVersions": "The API server can decode objects encoded in these versions. The encodingVersion must be included in the decodableVersions.", + "servedVersions": "The API server can serve these versions. 
DecodableVersions must include all ServedVersions.", } func (ServerStorageVersion) SwaggerDoc() map[string]string { diff --git a/cluster-autoscaler/vendor/k8s.io/api/apiserverinternal/v1alpha1/zz_generated.deepcopy.go b/cluster-autoscaler/vendor/k8s.io/api/apiserverinternal/v1alpha1/zz_generated.deepcopy.go index 44dffa75128e..638d80140264 100644 --- a/cluster-autoscaler/vendor/k8s.io/api/apiserverinternal/v1alpha1/zz_generated.deepcopy.go +++ b/cluster-autoscaler/vendor/k8s.io/api/apiserverinternal/v1alpha1/zz_generated.deepcopy.go @@ -33,6 +33,11 @@ func (in *ServerStorageVersion) DeepCopyInto(out *ServerStorageVersion) { *out = make([]string, len(*in)) copy(*out, *in) } + if in.ServedVersions != nil { + in, out := &in.ServedVersions, &out.ServedVersions + *out = make([]string, len(*in)) + copy(*out, *in) + } return } diff --git a/cluster-autoscaler/vendor/k8s.io/api/apps/v1/types.go b/cluster-autoscaler/vendor/k8s.io/api/apps/v1/types.go index 15dc3150a63a..644d368fe4d8 100644 --- a/cluster-autoscaler/vendor/k8s.io/api/apps/v1/types.go +++ b/cluster-autoscaler/vendor/k8s.io/api/apps/v1/types.go @@ -17,7 +17,7 @@ limitations under the License. 
package v1 import ( - "k8s.io/api/core/v1" + v1 "k8s.io/api/core/v1" metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" runtime "k8s.io/apimachinery/pkg/runtime" "k8s.io/apimachinery/pkg/util/intstr" @@ -29,6 +29,7 @@ const ( DeprecatedRollbackTo = "deprecated.deployment.rollback.to" DeprecatedTemplateGeneration = "deprecated.daemonset.template.generation" StatefulSetPodNameLabel = "statefulset.kubernetes.io/pod-name" + PodIndexLabel = "apps.kubernetes.io/pod-index" ) // +genclient diff --git a/cluster-autoscaler/vendor/k8s.io/api/authentication/v1/generated.pb.go b/cluster-autoscaler/vendor/k8s.io/api/authentication/v1/generated.pb.go index efbecf02c568..304bbd0744d8 100644 --- a/cluster-autoscaler/vendor/k8s.io/api/authentication/v1/generated.pb.go +++ b/cluster-autoscaler/vendor/k8s.io/api/authentication/v1/generated.pb.go @@ -102,10 +102,66 @@ func (m *ExtraValue) XXX_DiscardUnknown() { var xxx_messageInfo_ExtraValue proto.InternalMessageInfo +func (m *SelfSubjectReview) Reset() { *m = SelfSubjectReview{} } +func (*SelfSubjectReview) ProtoMessage() {} +func (*SelfSubjectReview) Descriptor() ([]byte, []int) { + return fileDescriptor_2953ea822e7ffe1e, []int{2} +} +func (m *SelfSubjectReview) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *SelfSubjectReview) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err + } + return b[:n], nil +} +func (m *SelfSubjectReview) XXX_Merge(src proto.Message) { + xxx_messageInfo_SelfSubjectReview.Merge(m, src) +} +func (m *SelfSubjectReview) XXX_Size() int { + return m.Size() +} +func (m *SelfSubjectReview) XXX_DiscardUnknown() { + xxx_messageInfo_SelfSubjectReview.DiscardUnknown(m) +} + +var xxx_messageInfo_SelfSubjectReview proto.InternalMessageInfo + +func (m *SelfSubjectReviewStatus) Reset() { *m = SelfSubjectReviewStatus{} } +func (*SelfSubjectReviewStatus) ProtoMessage() {} +func (*SelfSubjectReviewStatus) 
Descriptor() ([]byte, []int) { + return fileDescriptor_2953ea822e7ffe1e, []int{3} +} +func (m *SelfSubjectReviewStatus) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *SelfSubjectReviewStatus) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err + } + return b[:n], nil +} +func (m *SelfSubjectReviewStatus) XXX_Merge(src proto.Message) { + xxx_messageInfo_SelfSubjectReviewStatus.Merge(m, src) +} +func (m *SelfSubjectReviewStatus) XXX_Size() int { + return m.Size() +} +func (m *SelfSubjectReviewStatus) XXX_DiscardUnknown() { + xxx_messageInfo_SelfSubjectReviewStatus.DiscardUnknown(m) +} + +var xxx_messageInfo_SelfSubjectReviewStatus proto.InternalMessageInfo + func (m *TokenRequest) Reset() { *m = TokenRequest{} } func (*TokenRequest) ProtoMessage() {} func (*TokenRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_2953ea822e7ffe1e, []int{2} + return fileDescriptor_2953ea822e7ffe1e, []int{4} } func (m *TokenRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -133,7 +189,7 @@ var xxx_messageInfo_TokenRequest proto.InternalMessageInfo func (m *TokenRequestSpec) Reset() { *m = TokenRequestSpec{} } func (*TokenRequestSpec) ProtoMessage() {} func (*TokenRequestSpec) Descriptor() ([]byte, []int) { - return fileDescriptor_2953ea822e7ffe1e, []int{3} + return fileDescriptor_2953ea822e7ffe1e, []int{5} } func (m *TokenRequestSpec) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -161,7 +217,7 @@ var xxx_messageInfo_TokenRequestSpec proto.InternalMessageInfo func (m *TokenRequestStatus) Reset() { *m = TokenRequestStatus{} } func (*TokenRequestStatus) ProtoMessage() {} func (*TokenRequestStatus) Descriptor() ([]byte, []int) { - return fileDescriptor_2953ea822e7ffe1e, []int{4} + return fileDescriptor_2953ea822e7ffe1e, []int{6} } func (m *TokenRequestStatus) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -189,7 +245,7 @@ 
var xxx_messageInfo_TokenRequestStatus proto.InternalMessageInfo func (m *TokenReview) Reset() { *m = TokenReview{} } func (*TokenReview) ProtoMessage() {} func (*TokenReview) Descriptor() ([]byte, []int) { - return fileDescriptor_2953ea822e7ffe1e, []int{5} + return fileDescriptor_2953ea822e7ffe1e, []int{7} } func (m *TokenReview) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -217,7 +273,7 @@ var xxx_messageInfo_TokenReview proto.InternalMessageInfo func (m *TokenReviewSpec) Reset() { *m = TokenReviewSpec{} } func (*TokenReviewSpec) ProtoMessage() {} func (*TokenReviewSpec) Descriptor() ([]byte, []int) { - return fileDescriptor_2953ea822e7ffe1e, []int{6} + return fileDescriptor_2953ea822e7ffe1e, []int{8} } func (m *TokenReviewSpec) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -245,7 +301,7 @@ var xxx_messageInfo_TokenReviewSpec proto.InternalMessageInfo func (m *TokenReviewStatus) Reset() { *m = TokenReviewStatus{} } func (*TokenReviewStatus) ProtoMessage() {} func (*TokenReviewStatus) Descriptor() ([]byte, []int) { - return fileDescriptor_2953ea822e7ffe1e, []int{7} + return fileDescriptor_2953ea822e7ffe1e, []int{9} } func (m *TokenReviewStatus) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -273,7 +329,7 @@ var xxx_messageInfo_TokenReviewStatus proto.InternalMessageInfo func (m *UserInfo) Reset() { *m = UserInfo{} } func (*UserInfo) ProtoMessage() {} func (*UserInfo) Descriptor() ([]byte, []int) { - return fileDescriptor_2953ea822e7ffe1e, []int{8} + return fileDescriptor_2953ea822e7ffe1e, []int{10} } func (m *UserInfo) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -301,6 +357,8 @@ var xxx_messageInfo_UserInfo proto.InternalMessageInfo func init() { proto.RegisterType((*BoundObjectReference)(nil), "k8s.io.api.authentication.v1.BoundObjectReference") proto.RegisterType((*ExtraValue)(nil), "k8s.io.api.authentication.v1.ExtraValue") + proto.RegisterType((*SelfSubjectReview)(nil), 
"k8s.io.api.authentication.v1.SelfSubjectReview") + proto.RegisterType((*SelfSubjectReviewStatus)(nil), "k8s.io.api.authentication.v1.SelfSubjectReviewStatus") proto.RegisterType((*TokenRequest)(nil), "k8s.io.api.authentication.v1.TokenRequest") proto.RegisterType((*TokenRequestSpec)(nil), "k8s.io.api.authentication.v1.TokenRequestSpec") proto.RegisterType((*TokenRequestStatus)(nil), "k8s.io.api.authentication.v1.TokenRequestStatus") @@ -316,64 +374,67 @@ func init() { } var fileDescriptor_2953ea822e7ffe1e = []byte{ - // 907 bytes of a gzipped FileDescriptorProto - 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xbc, 0x56, 0xcf, 0x6f, 0xe3, 0x44, - 0x14, 0x8e, 0xf3, 0xa3, 0x4a, 0x26, 0xdb, 0xd2, 0xce, 0xb2, 0x52, 0x54, 0x96, 0xa4, 0x78, 0x25, - 0x54, 0x01, 0x6b, 0x6f, 0x23, 0x04, 0xab, 0x45, 0x42, 0xaa, 0x69, 0x04, 0x11, 0x82, 0x5d, 0xcd, - 0x6e, 0x0b, 0xe2, 0xc4, 0xc4, 0x7e, 0x4d, 0x87, 0xe0, 0xb1, 0xb1, 0xc7, 0x61, 0x73, 0xdb, 0x3f, - 0x81, 0x23, 0x48, 0x1c, 0xf8, 0x23, 0x90, 0xf8, 0x17, 0x7a, 0x5c, 0x71, 0xea, 0x01, 0x45, 0xd4, - 0x5c, 0x39, 0x72, 0xe2, 0x84, 0x66, 0x3c, 0xad, 0xe3, 0xa4, 0x4d, 0x73, 0xe2, 0x96, 0x79, 0xef, - 0x7b, 0xdf, 0xbc, 0xf7, 0xcd, 0xe7, 0x99, 0xa0, 0xde, 0xe8, 0x61, 0x6c, 0xb1, 0xc0, 0x1e, 0x25, - 0x03, 0x88, 0x38, 0x08, 0x88, 0xed, 0x31, 0x70, 0x2f, 0x88, 0x6c, 0x9d, 0xa0, 0x21, 0xb3, 0x69, - 0x22, 0x4e, 0x80, 0x0b, 0xe6, 0x52, 0xc1, 0x02, 0x6e, 0x8f, 0xf7, 0xec, 0x21, 0x70, 0x88, 0xa8, - 0x00, 0xcf, 0x0a, 0xa3, 0x40, 0x04, 0xf8, 0x6e, 0x86, 0xb6, 0x68, 0xc8, 0xac, 0x22, 0xda, 0x1a, - 0xef, 0x6d, 0xdf, 0x1f, 0x32, 0x71, 0x92, 0x0c, 0x2c, 0x37, 0xf0, 0xed, 0x61, 0x30, 0x0c, 0x6c, - 0x55, 0x34, 0x48, 0x8e, 0xd5, 0x4a, 0x2d, 0xd4, 0xaf, 0x8c, 0x6c, 0xfb, 0xdd, 0x7c, 0x6b, 0x9f, - 0xba, 0x27, 0x8c, 0x43, 0x34, 0xb1, 0xc3, 0xd1, 0x50, 0x06, 0x62, 0xdb, 0x07, 0x41, 0xaf, 0x68, - 0x61, 0xdb, 0xbe, 0xae, 0x2a, 0x4a, 0xb8, 0x60, 0x3e, 0x2c, 0x14, 0xbc, 0x77, 0x53, 0x41, 0xec, - 0x9e, 0x80, 0x4f, 0xe7, 0xeb, 0xcc, 0xdf, 0x0d, 0xf4, 
0xaa, 0x13, 0x24, 0xdc, 0x7b, 0x3c, 0xf8, - 0x06, 0x5c, 0x41, 0xe0, 0x18, 0x22, 0xe0, 0x2e, 0xe0, 0x1d, 0x54, 0x1d, 0x31, 0xee, 0xb5, 0x8c, - 0x1d, 0x63, 0xb7, 0xe1, 0xdc, 0x3a, 0x9d, 0x76, 0x4a, 0xe9, 0xb4, 0x53, 0xfd, 0x94, 0x71, 0x8f, - 0xa8, 0x0c, 0xee, 0x22, 0x44, 0x43, 0x76, 0x04, 0x51, 0xcc, 0x02, 0xde, 0x2a, 0x2b, 0x1c, 0xd6, - 0x38, 0xb4, 0xff, 0xa4, 0xaf, 0x33, 0x64, 0x06, 0x25, 0x59, 0x39, 0xf5, 0xa1, 0x55, 0x29, 0xb2, - 0x7e, 0x4e, 0x7d, 0x20, 0x2a, 0x83, 0x1d, 0x54, 0x49, 0xfa, 0x07, 0xad, 0xaa, 0x02, 0x3c, 0xd0, - 0x80, 0xca, 0x61, 0xff, 0xe0, 0xdf, 0x69, 0xe7, 0x8d, 0xeb, 0x86, 0x14, 0x93, 0x10, 0x62, 0xeb, - 0xb0, 0x7f, 0x40, 0x64, 0xb1, 0xf9, 0x3e, 0x42, 0xbd, 0xe7, 0x22, 0xa2, 0x47, 0xf4, 0xdb, 0x04, - 0x70, 0x07, 0xd5, 0x98, 0x00, 0x3f, 0x6e, 0x19, 0x3b, 0x95, 0xdd, 0x86, 0xd3, 0x48, 0xa7, 0x9d, - 0x5a, 0x5f, 0x06, 0x48, 0x16, 0x7f, 0x54, 0xff, 0xf1, 0x97, 0x4e, 0xe9, 0xc5, 0x1f, 0x3b, 0x25, - 0xf3, 0xe7, 0x32, 0xba, 0xf5, 0x2c, 0x18, 0x01, 0x27, 0xf0, 0x5d, 0x02, 0xb1, 0xc0, 0x5f, 0xa3, - 0xba, 0x3c, 0x22, 0x8f, 0x0a, 0xaa, 0x94, 0x68, 0x76, 0x1f, 0x58, 0xb9, 0x3b, 0x2e, 0x9b, 0xb0, - 0xc2, 0xd1, 0x50, 0x06, 0x62, 0x4b, 0xa2, 0xad, 0xf1, 0x9e, 0x95, 0xc9, 0xf9, 0x19, 0x08, 0x9a, - 0x6b, 0x92, 0xc7, 0xc8, 0x25, 0x2b, 0x7e, 0x82, 0xaa, 0x71, 0x08, 0xae, 0xd2, 0xaf, 0xd9, 0xb5, - 0xac, 0x65, 0xde, 0xb3, 0x66, 0x7b, 0x7b, 0x1a, 0x82, 0x9b, 0x2b, 0x28, 0x57, 0x44, 0x31, 0xe1, - 0x2f, 0xd1, 0x5a, 0x2c, 0xa8, 0x48, 0x62, 0xa5, 0x72, 0xb1, 0xe3, 0x9b, 0x38, 0x55, 0x9d, 0xb3, - 0xa1, 0x59, 0xd7, 0xb2, 0x35, 0xd1, 0x7c, 0xe6, 0x3f, 0x06, 0xda, 0x9c, 0x6f, 0x01, 0xbf, 0x8d, - 0x1a, 0x34, 0xf1, 0x98, 0x34, 0xcd, 0x85, 0xc4, 0xeb, 0xe9, 0xb4, 0xd3, 0xd8, 0xbf, 0x08, 0x92, - 0x3c, 0x8f, 0x3f, 0x42, 0x5b, 0xf0, 0x3c, 0x64, 0x91, 0xda, 0xfd, 0x29, 0xb8, 0x01, 0xf7, 0x62, - 0x75, 0xd6, 0x15, 0xe7, 0x4e, 0x3a, 0xed, 0x6c, 0xf5, 0xe6, 0x93, 0x64, 0x11, 0x8f, 0x39, 0xda, - 0x18, 0x14, 0x2c, 0xab, 0x07, 0xed, 0x2e, 0x1f, 0xf4, 0x2a, 0x9b, 0x3b, 0x38, 0x9d, 0x76, 
0x36, - 0x8a, 0x19, 0x32, 0xc7, 0x6e, 0xfe, 0x6a, 0x20, 0xbc, 0xa8, 0x12, 0xbe, 0x87, 0x6a, 0x42, 0x46, - 0xf5, 0x27, 0xb2, 0xae, 0x45, 0xab, 0x65, 0xd0, 0x2c, 0x87, 0x27, 0xe8, 0x76, 0x3e, 0xc0, 0x33, - 0xe6, 0x43, 0x2c, 0xa8, 0x1f, 0xea, 0xd3, 0x7e, 0x6b, 0x35, 0x2f, 0xc9, 0x32, 0xe7, 0x35, 0x4d, - 0x7f, 0xbb, 0xb7, 0x48, 0x47, 0xae, 0xda, 0xc3, 0xfc, 0xa9, 0x8c, 0x9a, 0xba, 0xed, 0x31, 0x83, - 0xef, 0xff, 0x07, 0x2f, 0x3f, 0x2e, 0x78, 0xf9, 0xfe, 0x4a, 0xbe, 0x93, 0xad, 0x5d, 0x6b, 0xe5, - 0x2f, 0xe6, 0xac, 0x6c, 0xaf, 0x4e, 0xb9, 0xdc, 0xc9, 0x2e, 0x7a, 0x65, 0x6e, 0xff, 0xd5, 0x8e, - 0xb3, 0x60, 0xf6, 0xf2, 0x72, 0xb3, 0x9b, 0x7f, 0x1b, 0x68, 0x6b, 0xa1, 0x25, 0xfc, 0x01, 0x5a, - 0x9f, 0xe9, 0x1c, 0xb2, 0x1b, 0xb6, 0xee, 0xdc, 0xd1, 0xfb, 0xad, 0xef, 0xcf, 0x26, 0x49, 0x11, - 0x8b, 0x3f, 0x41, 0xd5, 0x24, 0x86, 0x48, 0x2b, 0xfc, 0xe6, 0x72, 0x39, 0x0e, 0x63, 0x88, 0xfa, - 0xfc, 0x38, 0xc8, 0xa5, 0x95, 0x11, 0xa2, 0x18, 0x8a, 0x93, 0x54, 0x6f, 0xf8, 0x6c, 0xef, 0xa1, - 0x1a, 0x44, 0x51, 0x10, 0xe9, 0x7b, 0xfb, 0x52, 0x9b, 0x9e, 0x0c, 0x92, 0x2c, 0x67, 0xfe, 0x56, - 0x46, 0xf5, 0x8b, 0x2d, 0xf1, 0x3b, 0xa8, 0x2e, 0xb7, 0x51, 0x97, 0x7d, 0x26, 0xe8, 0xa6, 0x2e, - 0x52, 0x18, 0x19, 0x27, 0x97, 0x08, 0xfc, 0x3a, 0xaa, 0x24, 0xcc, 0xd3, 0x6f, 0x48, 0x73, 0xe6, - 0xd2, 0x27, 0x32, 0x8e, 0x4d, 0xb4, 0x36, 0x8c, 0x82, 0x24, 0x94, 0x36, 0x90, 0x8d, 0x22, 0x79, - 0xa2, 0x1f, 0xab, 0x08, 0xd1, 0x19, 0x7c, 0x84, 0x6a, 0x20, 0xef, 0x7c, 0x35, 0x4b, 0xb3, 0xbb, - 0xb7, 0x9a, 0x34, 0x96, 0x7a, 0x27, 0x7a, 0x5c, 0x44, 0x93, 0x99, 0xa9, 0x64, 0x8c, 0x64, 0x74, - 0xdb, 0x03, 0xfd, 0x96, 0x28, 0x0c, 0xde, 0x44, 0x95, 0x11, 0x4c, 0xb2, 0x89, 0x88, 0xfc, 0x89, - 0x3f, 0x44, 0xb5, 0xb1, 0x7c, 0x66, 0xf4, 0x91, 0xec, 0x2e, 0xdf, 0x37, 0x7f, 0x96, 0x48, 0x56, - 0xf6, 0xa8, 0xfc, 0xd0, 0x70, 0x9c, 0xd3, 0xf3, 0x76, 0xe9, 0xe5, 0x79, 0xbb, 0x74, 0x76, 0xde, - 0x2e, 0xbd, 0x48, 0xdb, 0xc6, 0x69, 0xda, 0x36, 0x5e, 0xa6, 0x6d, 0xe3, 0x2c, 0x6d, 0x1b, 0x7f, - 0xa6, 0x6d, 0xe3, 0x87, 0xbf, 
0xda, 0xa5, 0xaf, 0xee, 0x2e, 0xfb, 0x13, 0xf3, 0x5f, 0x00, 0x00, - 0x00, 0xff, 0xff, 0x12, 0xb8, 0x31, 0x91, 0xfc, 0x08, 0x00, 0x00, + // 958 bytes of a gzipped FileDescriptorProto + 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xbc, 0x56, 0x4b, 0x6f, 0x23, 0x45, + 0x10, 0xf6, 0xf8, 0x11, 0xd9, 0xe5, 0x4d, 0x48, 0x7a, 0x59, 0x61, 0x85, 0xc5, 0x0e, 0xb3, 0x12, + 0x8a, 0x80, 0x9d, 0xd9, 0x58, 0x3c, 0x56, 0x8b, 0x84, 0x94, 0x21, 0x16, 0x58, 0x08, 0x76, 0xd5, + 0x4e, 0x02, 0x42, 0x42, 0xa2, 0x3d, 0xae, 0x38, 0x83, 0x77, 0x1e, 0xcc, 0xf4, 0x98, 0xf5, 0x6d, + 0x7f, 0x02, 0x47, 0x90, 0x38, 0xf0, 0x23, 0x90, 0xf8, 0x0b, 0x39, 0xae, 0x10, 0x87, 0x3d, 0x20, + 0x8b, 0x0c, 0x57, 0x8e, 0x9c, 0x38, 0xa1, 0xee, 0xe9, 0xf8, 0x99, 0x4c, 0x7c, 0xda, 0x9b, 0xa7, + 0x1e, 0x5f, 0x55, 0x7d, 0x55, 0x5d, 0x65, 0x68, 0x0d, 0xee, 0x47, 0x86, 0xe3, 0x9b, 0x83, 0xb8, + 0x8b, 0xa1, 0x87, 0x1c, 0x23, 0x73, 0x88, 0x5e, 0xcf, 0x0f, 0x4d, 0xa5, 0x60, 0x81, 0x63, 0xb2, + 0x98, 0x9f, 0xa2, 0xc7, 0x1d, 0x9b, 0x71, 0xc7, 0xf7, 0xcc, 0xe1, 0x9e, 0xd9, 0x47, 0x0f, 0x43, + 0xc6, 0xb1, 0x67, 0x04, 0xa1, 0xcf, 0x7d, 0x72, 0x3b, 0xb5, 0x36, 0x58, 0xe0, 0x18, 0xf3, 0xd6, + 0xc6, 0x70, 0x6f, 0xfb, 0x6e, 0xdf, 0xe1, 0xa7, 0x71, 0xd7, 0xb0, 0x7d, 0xd7, 0xec, 0xfb, 0x7d, + 0xdf, 0x94, 0x4e, 0xdd, 0xf8, 0x44, 0x7e, 0xc9, 0x0f, 0xf9, 0x2b, 0x05, 0xdb, 0x7e, 0x67, 0x1a, + 0xda, 0x65, 0xf6, 0xa9, 0xe3, 0x61, 0x38, 0x32, 0x83, 0x41, 0x5f, 0x08, 0x22, 0xd3, 0x45, 0xce, + 0x2e, 0x49, 0x61, 0xdb, 0xbc, 0xca, 0x2b, 0x8c, 0x3d, 0xee, 0xb8, 0xb8, 0xe4, 0xf0, 0xde, 0x75, + 0x0e, 0x91, 0x7d, 0x8a, 0x2e, 0x5b, 0xf4, 0xd3, 0x7f, 0xd7, 0xe0, 0x65, 0xcb, 0x8f, 0xbd, 0xde, + 0xc3, 0xee, 0xb7, 0x68, 0x73, 0x8a, 0x27, 0x18, 0xa2, 0x67, 0x23, 0xd9, 0x81, 0xe2, 0xc0, 0xf1, + 0x7a, 0x35, 0x6d, 0x47, 0xdb, 0xad, 0x58, 0x37, 0xce, 0xc6, 0x8d, 0x5c, 0x32, 0x6e, 0x14, 0x3f, + 0x75, 0xbc, 0x1e, 0x95, 0x1a, 0xd2, 0x04, 0x60, 0x81, 0x73, 0x8c, 0x61, 0xe4, 0xf8, 0x5e, 0x2d, + 0x2f, 0xed, 0x88, 0xb2, 0x83, 0xfd, 0x47, 0x6d, 
0xa5, 0xa1, 0x33, 0x56, 0x02, 0xd5, 0x63, 0x2e, + 0xd6, 0x0a, 0xf3, 0xa8, 0x9f, 0x33, 0x17, 0xa9, 0xd4, 0x10, 0x0b, 0x0a, 0x71, 0xfb, 0xa0, 0x56, + 0x94, 0x06, 0xf7, 0x94, 0x41, 0xe1, 0xa8, 0x7d, 0xf0, 0xdf, 0xb8, 0xf1, 0xfa, 0x55, 0x45, 0xf2, + 0x51, 0x80, 0x91, 0x71, 0xd4, 0x3e, 0xa0, 0xc2, 0x59, 0x7f, 0x1f, 0xa0, 0xf5, 0x84, 0x87, 0xec, + 0x98, 0x3d, 0x8e, 0x91, 0x34, 0xa0, 0xe4, 0x70, 0x74, 0xa3, 0x9a, 0xb6, 0x53, 0xd8, 0xad, 0x58, + 0x95, 0x64, 0xdc, 0x28, 0xb5, 0x85, 0x80, 0xa6, 0xf2, 0x07, 0xe5, 0x1f, 0x7f, 0x69, 0xe4, 0x9e, + 0xfe, 0xb9, 0x93, 0xd3, 0xff, 0xd0, 0x60, 0xab, 0x83, 0x8f, 0x4f, 0x3a, 0xb1, 0x62, 0x63, 0xe8, + 0xe0, 0xf7, 0xe4, 0x1b, 0x28, 0x8b, 0x3e, 0xf5, 0x18, 0x67, 0x92, 0x8e, 0x6a, 0xf3, 0x9e, 0x31, + 0x1d, 0x91, 0x49, 0x26, 0x46, 0x30, 0xe8, 0x0b, 0x41, 0x64, 0x08, 0x6b, 0x63, 0xb8, 0x67, 0xa4, + 0x9c, 0x7e, 0x86, 0x9c, 0x4d, 0x89, 0x99, 0xca, 0xe8, 0x04, 0x95, 0x7c, 0x0d, 0x6b, 0x11, 0x67, + 0x3c, 0x8e, 0x24, 0x8d, 0xd5, 0xe6, 0xbb, 0x46, 0xd6, 0x08, 0x1a, 0x4b, 0x29, 0x76, 0xa4, 0xb3, + 0xb5, 0xa1, 0x82, 0xac, 0xa5, 0xdf, 0x54, 0x81, 0xea, 0x3e, 0xbc, 0x72, 0x85, 0x0b, 0x39, 0x84, + 0x72, 0x1c, 0x61, 0xd8, 0xf6, 0x4e, 0x7c, 0x55, 0xdb, 0x1b, 0xd9, 0xb1, 0x8f, 0x94, 0xb5, 0xb5, + 0xa9, 0x82, 0x95, 0x2f, 0x24, 0x74, 0x82, 0xa4, 0xff, 0x9c, 0x87, 0x1b, 0x87, 0xfe, 0x00, 0x3d, + 0x8a, 0xdf, 0xc5, 0x18, 0xf1, 0x17, 0x40, 0xe1, 0x23, 0x28, 0x46, 0x01, 0xda, 0x8a, 0x40, 0x23, + 0xbb, 0x88, 0xd9, 0xdc, 0x3a, 0x01, 0xda, 0xd3, 0x49, 0x14, 0x5f, 0x54, 0x22, 0x91, 0x2f, 0x27, + 0x4d, 0x29, 0x2c, 0x65, 0x7c, 0x1d, 0x66, 0x76, 0x3f, 0xfe, 0xd5, 0x60, 0x73, 0x31, 0x05, 0xf2, + 0x16, 0x54, 0x58, 0xdc, 0x73, 0xc4, 0xe3, 0xbb, 0x18, 0xd5, 0xf5, 0x64, 0xdc, 0xa8, 0xec, 0x5f, + 0x08, 0xe9, 0x54, 0x4f, 0x3e, 0x82, 0x2d, 0x7c, 0x12, 0x38, 0xa1, 0x8c, 0xde, 0x41, 0xdb, 0xf7, + 0x7a, 0x91, 0x7c, 0x33, 0x05, 0xeb, 0x56, 0x32, 0x6e, 0x6c, 0xb5, 0x16, 0x95, 0x74, 0xd9, 0x9e, + 0x78, 0xb0, 0xd1, 0x9d, 0x7b, 0xfa, 0xaa, 0xd0, 0x66, 0x76, 0xa1, 0x97, 0xad, 0x0b, 
0x8b, 0x24, + 0xe3, 0xc6, 0xc6, 0xbc, 0x86, 0x2e, 0xa0, 0xeb, 0xbf, 0x6a, 0x40, 0x96, 0x59, 0x22, 0x77, 0xa0, + 0xc4, 0x85, 0x54, 0xad, 0x9a, 0x75, 0x45, 0x5a, 0x29, 0x35, 0x4d, 0x75, 0x64, 0x04, 0x37, 0xa7, + 0x05, 0x1c, 0x3a, 0x2e, 0x46, 0x9c, 0xb9, 0x81, 0xea, 0xf6, 0x9b, 0xab, 0xcd, 0x92, 0x70, 0xb3, + 0x5e, 0x55, 0xf0, 0x37, 0x5b, 0xcb, 0x70, 0xf4, 0xb2, 0x18, 0xfa, 0x4f, 0x79, 0xa8, 0xaa, 0xb4, + 0x5f, 0xd0, 0x3a, 0x78, 0x38, 0x37, 0xcb, 0x77, 0x57, 0x9a, 0x3b, 0xf9, 0xa6, 0xaf, 0x1a, 0xe5, + 0x2f, 0x16, 0x46, 0xd9, 0x5c, 0x1d, 0x32, 0x7b, 0x92, 0x6d, 0x78, 0x69, 0x21, 0xfe, 0x6a, 0xed, + 0x9c, 0x1b, 0xf6, 0x7c, 0xf6, 0xb0, 0xeb, 0xff, 0x68, 0xb0, 0xb5, 0x94, 0x12, 0xf9, 0x00, 0xd6, + 0x67, 0x32, 0xc7, 0xf4, 0x52, 0x95, 0xad, 0x5b, 0x2a, 0xde, 0xfa, 0xfe, 0xac, 0x92, 0xce, 0xdb, + 0x92, 0x4f, 0xa0, 0x28, 0x96, 0x95, 0x62, 0x78, 0xd5, 0x95, 0x37, 0xa1, 0x56, 0x48, 0xa8, 0x44, + 0x98, 0xaf, 0xa4, 0x78, 0xcd, 0xb3, 0xbd, 0x03, 0x25, 0x0c, 0x43, 0x3f, 0x54, 0xf7, 0x6f, 0xc2, + 0x4d, 0x4b, 0x08, 0x69, 0xaa, 0xd3, 0x7f, 0xcb, 0xc3, 0x64, 0xa7, 0x92, 0xb7, 0xd3, 0xfd, 0x2c, + 0x8f, 0x66, 0x4a, 0xe8, 0xdc, 0xde, 0x15, 0x72, 0x3a, 0xb1, 0x20, 0xaf, 0x41, 0x21, 0x76, 0x7a, + 0xea, 0x16, 0x57, 0x67, 0x8e, 0x27, 0x15, 0x72, 0xa2, 0xc3, 0x5a, 0x3f, 0xf4, 0xe3, 0x40, 0x8c, + 0x81, 0x48, 0x14, 0x44, 0x47, 0x3f, 0x96, 0x12, 0xaa, 0x34, 0xe4, 0x18, 0x4a, 0x28, 0x6e, 0xa7, + 0xac, 0xa5, 0xda, 0xdc, 0x5b, 0x8d, 0x1a, 0x43, 0xde, 0xdb, 0x96, 0xc7, 0xc3, 0xd1, 0x4c, 0x55, + 0x42, 0x46, 0x53, 0xb8, 0xed, 0xae, 0xba, 0xc9, 0xd2, 0x86, 0x6c, 0x42, 0x61, 0x80, 0xa3, 0xb4, + 0x22, 0x2a, 0x7e, 0x92, 0x0f, 0xa1, 0x34, 0x14, 0xe7, 0x5a, 0xb5, 0x64, 0x37, 0x3b, 0xee, 0xf4, + 0xbc, 0xd3, 0xd4, 0xed, 0x41, 0xfe, 0xbe, 0x66, 0x59, 0x67, 0xe7, 0xf5, 0xdc, 0xb3, 0xf3, 0x7a, + 0xee, 0xf9, 0x79, 0x3d, 0xf7, 0x34, 0xa9, 0x6b, 0x67, 0x49, 0x5d, 0x7b, 0x96, 0xd4, 0xb5, 0xe7, + 0x49, 0x5d, 0xfb, 0x2b, 0xa9, 0x6b, 0x3f, 0xfc, 0x5d, 0xcf, 0x7d, 0x75, 0x3b, 0xeb, 0xcf, 0xe0, + 0xff, 0x01, 0x00, 0x00, 
0xff, 0xff, 0x0d, 0x9a, 0x38, 0x17, 0x44, 0x0a, 0x00, 0x00, } func (m *BoundObjectReference) Marshal() (dAtA []byte, err error) { @@ -451,6 +512,82 @@ func (m ExtraValue) MarshalToSizedBuffer(dAtA []byte) (int, error) { return len(dAtA) - i, nil } +func (m *SelfSubjectReview) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *SelfSubjectReview) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *SelfSubjectReview) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + { + size, err := m.Status.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintGenerated(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x12 + { + size, err := m.ObjectMeta.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintGenerated(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0xa + return len(dAtA) - i, nil +} + +func (m *SelfSubjectReviewStatus) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *SelfSubjectReviewStatus) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *SelfSubjectReviewStatus) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + { + size, err := m.UserInfo.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintGenerated(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0xa + return len(dAtA) - i, nil +} + func (m *TokenRequest) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) @@ -850,6 +987,30 @@ 
func (m ExtraValue) Size() (n int) { return n } +func (m *SelfSubjectReview) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + l = m.ObjectMeta.Size() + n += 1 + l + sovGenerated(uint64(l)) + l = m.Status.Size() + n += 1 + l + sovGenerated(uint64(l)) + return n +} + +func (m *SelfSubjectReviewStatus) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + l = m.UserInfo.Size() + n += 1 + l + sovGenerated(uint64(l)) + return n +} + func (m *TokenRequest) Size() (n int) { if m == nil { return 0 @@ -999,6 +1160,27 @@ func (this *BoundObjectReference) String() string { }, "") return s } +func (this *SelfSubjectReview) String() string { + if this == nil { + return "nil" + } + s := strings.Join([]string{`&SelfSubjectReview{`, + `ObjectMeta:` + strings.Replace(strings.Replace(fmt.Sprintf("%v", this.ObjectMeta), "ObjectMeta", "v1.ObjectMeta", 1), `&`, ``, 1) + `,`, + `Status:` + strings.Replace(strings.Replace(this.Status.String(), "SelfSubjectReviewStatus", "SelfSubjectReviewStatus", 1), `&`, ``, 1) + `,`, + `}`, + }, "") + return s +} +func (this *SelfSubjectReviewStatus) String() string { + if this == nil { + return "nil" + } + s := strings.Join([]string{`&SelfSubjectReviewStatus{`, + `UserInfo:` + strings.Replace(strings.Replace(this.UserInfo.String(), "UserInfo", "UserInfo", 1), `&`, ``, 1) + `,`, + `}`, + }, "") + return s +} func (this *TokenRequest) String() string { if this == nil { return "nil" @@ -1361,6 +1543,205 @@ func (m *ExtraValue) Unmarshal(dAtA []byte) error { } return nil } +func (m *SelfSubjectReview) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowGenerated + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if 
wireType == 4 { + return fmt.Errorf("proto: SelfSubjectReview: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: SelfSubjectReview: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field ObjectMeta", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowGenerated + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthGenerated + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthGenerated + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.ObjectMeta.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowGenerated + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthGenerated + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthGenerated + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipGenerated(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthGenerated + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *SelfSubjectReviewStatus) Unmarshal(dAtA 
[]byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowGenerated + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: SelfSubjectReviewStatus: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: SelfSubjectReviewStatus: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field UserInfo", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowGenerated + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthGenerated + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthGenerated + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if err := m.UserInfo.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipGenerated(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthGenerated + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} func (m *TokenRequest) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 diff --git a/cluster-autoscaler/vendor/k8s.io/api/authentication/v1/generated.proto b/cluster-autoscaler/vendor/k8s.io/api/authentication/v1/generated.proto index f4806a3c6361..1632070c872c 100644 --- 
a/cluster-autoscaler/vendor/k8s.io/api/authentication/v1/generated.proto +++ b/cluster-autoscaler/vendor/k8s.io/api/authentication/v1/generated.proto @@ -56,6 +56,26 @@ message ExtraValue { repeated string items = 1; } +// SelfSubjectReview contains the user information that the kube-apiserver has about the user making this request. +// When using impersonation, users will receive the user info of the user being impersonated. If impersonation or +// request header authentication is used, any extra keys will have their case ignored and returned as lowercase. +message SelfSubjectReview { + // Standard object's metadata. + // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata + // +optional + optional k8s.io.apimachinery.pkg.apis.meta.v1.ObjectMeta metadata = 1; + + // Status is filled in by the server with the user attributes. + optional SelfSubjectReviewStatus status = 2; +} + +// SelfSubjectReviewStatus is filled by the kube-apiserver and sent back to a user. +message SelfSubjectReviewStatus { + // User attributes of the user making this request. + // +optional + optional UserInfo userInfo = 1; +} + // TokenRequest requests a token for a given service account. message TokenRequest { // Standard object's metadata. 
diff --git a/cluster-autoscaler/vendor/k8s.io/api/authentication/v1/register.go b/cluster-autoscaler/vendor/k8s.io/api/authentication/v1/register.go index c522e4a46d71..6a32b5926b3b 100644 --- a/cluster-autoscaler/vendor/k8s.io/api/authentication/v1/register.go +++ b/cluster-autoscaler/vendor/k8s.io/api/authentication/v1/register.go @@ -46,6 +46,7 @@ func addKnownTypes(scheme *runtime.Scheme) error { scheme.AddKnownTypes(SchemeGroupVersion, &TokenReview{}, &TokenRequest{}, + &SelfSubjectReview{}, ) metav1.AddToGroupVersion(scheme, SchemeGroupVersion) return nil diff --git a/cluster-autoscaler/vendor/k8s.io/api/authentication/v1/types.go b/cluster-autoscaler/vendor/k8s.io/api/authentication/v1/types.go index 4e221e58c7d9..b498007c000c 100644 --- a/cluster-autoscaler/vendor/k8s.io/api/authentication/v1/types.go +++ b/cluster-autoscaler/vendor/k8s.io/api/authentication/v1/types.go @@ -197,3 +197,28 @@ type BoundObjectReference struct { // +optional UID types.UID `json:"uid,omitempty" protobuf:"bytes,4,opt,name=uID,casttype=k8s.io/apimachinery/pkg/types.UID"` } + +// +genclient +// +genclient:nonNamespaced +// +genclient:onlyVerbs=create +// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object + +// SelfSubjectReview contains the user information that the kube-apiserver has about the user making this request. +// When using impersonation, users will receive the user info of the user being impersonated. If impersonation or +// request header authentication is used, any extra keys will have their case ignored and returned as lowercase. +type SelfSubjectReview struct { + metav1.TypeMeta `json:",inline"` + // Standard object's metadata. + // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata + // +optional + metav1.ObjectMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"` + // Status is filled in by the server with the user attributes. 
+ Status SelfSubjectReviewStatus `json:"status,omitempty" protobuf:"bytes,2,opt,name=status"` +} + +// SelfSubjectReviewStatus is filled by the kube-apiserver and sent back to a user. +type SelfSubjectReviewStatus struct { + // User attributes of the user making this request. + // +optional + UserInfo UserInfo `json:"userInfo,omitempty" protobuf:"bytes,1,opt,name=userInfo"` +} diff --git a/cluster-autoscaler/vendor/k8s.io/api/authentication/v1/types_swagger_doc_generated.go b/cluster-autoscaler/vendor/k8s.io/api/authentication/v1/types_swagger_doc_generated.go index b1a730b816ee..ebfd4852c058 100644 --- a/cluster-autoscaler/vendor/k8s.io/api/authentication/v1/types_swagger_doc_generated.go +++ b/cluster-autoscaler/vendor/k8s.io/api/authentication/v1/types_swagger_doc_generated.go @@ -39,6 +39,25 @@ func (BoundObjectReference) SwaggerDoc() map[string]string { return map_BoundObjectReference } +var map_SelfSubjectReview = map[string]string{ + "": "SelfSubjectReview contains the user information that the kube-apiserver has about the user making this request. When using impersonation, users will receive the user info of the user being impersonated. If impersonation or request header authentication is used, any extra keys will have their case ignored and returned as lowercase.", + "metadata": "Standard object's metadata. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata", + "status": "Status is filled in by the server with the user attributes.", +} + +func (SelfSubjectReview) SwaggerDoc() map[string]string { + return map_SelfSubjectReview +} + +var map_SelfSubjectReviewStatus = map[string]string{ + "": "SelfSubjectReviewStatus is filled by the kube-apiserver and sent back to a user.", + "userInfo": "User attributes of the user making this request.", +} + +func (SelfSubjectReviewStatus) SwaggerDoc() map[string]string { + return map_SelfSubjectReviewStatus +} + var map_TokenRequest = map[string]string{ "": "TokenRequest requests a token for a given service account.", "metadata": "Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata", diff --git a/cluster-autoscaler/vendor/k8s.io/api/authentication/v1/zz_generated.deepcopy.go b/cluster-autoscaler/vendor/k8s.io/api/authentication/v1/zz_generated.deepcopy.go index 2af533191ba3..369c89b8631b 100644 --- a/cluster-autoscaler/vendor/k8s.io/api/authentication/v1/zz_generated.deepcopy.go +++ b/cluster-autoscaler/vendor/k8s.io/api/authentication/v1/zz_generated.deepcopy.go @@ -61,6 +61,50 @@ func (in ExtraValue) DeepCopy() ExtraValue { return *out } +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *SelfSubjectReview) DeepCopyInto(out *SelfSubjectReview) { + *out = *in + out.TypeMeta = in.TypeMeta + in.ObjectMeta.DeepCopyInto(&out.ObjectMeta) + in.Status.DeepCopyInto(&out.Status) + return +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new SelfSubjectReview. 
+func (in *SelfSubjectReview) DeepCopy() *SelfSubjectReview { + if in == nil { + return nil + } + out := new(SelfSubjectReview) + in.DeepCopyInto(out) + return out +} + +// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object. +func (in *SelfSubjectReview) DeepCopyObject() runtime.Object { + if c := in.DeepCopy(); c != nil { + return c + } + return nil +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *SelfSubjectReviewStatus) DeepCopyInto(out *SelfSubjectReviewStatus) { + *out = *in + in.UserInfo.DeepCopyInto(&out.UserInfo) + return +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new SelfSubjectReviewStatus. +func (in *SelfSubjectReviewStatus) DeepCopy() *SelfSubjectReviewStatus { + if in == nil { + return nil + } + out := new(SelfSubjectReviewStatus) + in.DeepCopyInto(out) + return out +} + // DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. 
func (in *TokenRequest) DeepCopyInto(out *TokenRequest) { *out = *in diff --git a/cluster-autoscaler/vendor/k8s.io/api/batch/v1/generated.pb.go b/cluster-autoscaler/vendor/k8s.io/api/batch/v1/generated.pb.go index feafc23c2bba..59a7482a0d20 100644 --- a/cluster-autoscaler/vendor/k8s.io/api/batch/v1/generated.pb.go +++ b/cluster-autoscaler/vendor/k8s.io/api/batch/v1/generated.pb.go @@ -495,113 +495,120 @@ func init() { } var fileDescriptor_3b52da57c93de713 = []byte{ - // 1696 bytes of a gzipped FileDescriptorProto - 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xcc, 0x58, 0x4f, 0x73, 0xe3, 0x48, - 0x15, 0x8f, 0xe2, 0xd8, 0xb1, 0xdb, 0xc9, 0xc4, 0xd3, 0xb3, 0x33, 0x63, 0xc2, 0x96, 0x95, 0xd5, - 0xec, 0x6e, 0x65, 0xa9, 0x45, 0x66, 0xb2, 0x53, 0x2c, 0xff, 0x8b, 0x91, 0x87, 0x59, 0x26, 0x78, - 0x36, 0xa6, 0x9d, 0x40, 0xd5, 0xb2, 0x50, 0xc8, 0x52, 0xdb, 0xd1, 0x46, 0x56, 0x1b, 0x75, 0x2b, - 0xb5, 0xb9, 0x50, 0x54, 0xf1, 0x05, 0xe0, 0xc8, 0x17, 0xe0, 0xc8, 0x05, 0xce, 0x70, 0xa3, 0x72, - 0xdc, 0xe2, 0xb4, 0xc5, 0x41, 0xc5, 0x88, 0x0f, 0xc0, 0x3d, 0x5c, 0xa8, 0x6e, 0xb5, 0xf5, 0xcf, - 0x52, 0xc8, 0x6c, 0x15, 0x5b, 0xdc, 0xa2, 0xf7, 0x7e, 0xef, 0xd7, 0x4f, 0xfd, 0x9e, 0x7e, 0xef, - 0xc5, 0xe0, 0x5b, 0x67, 0x5f, 0xa3, 0xba, 0x43, 0xfa, 0x67, 0xc1, 0x04, 0xfb, 0x1e, 0x66, 0x98, - 0xf6, 0xcf, 0xb1, 0x67, 0x13, 0xbf, 0x2f, 0x1d, 0xe6, 0xc2, 0xe9, 0x4f, 0x4c, 0x66, 0x9d, 0xf6, - 0xcf, 0x1f, 0xf6, 0x67, 0xd8, 0xc3, 0xbe, 0xc9, 0xb0, 0xad, 0x2f, 0x7c, 0xc2, 0x08, 0xbc, 0x13, - 0x83, 0x74, 0x73, 0xe1, 0xe8, 0x02, 0xa4, 0x9f, 0x3f, 0xdc, 0xfd, 0xf2, 0xcc, 0x61, 0xa7, 0xc1, - 0x44, 0xb7, 0xc8, 0xbc, 0x3f, 0x23, 0x33, 0xd2, 0x17, 0xd8, 0x49, 0x30, 0x15, 0x4f, 0xe2, 0x41, - 0xfc, 0x15, 0x73, 0xec, 0x6a, 0x99, 0x83, 0x2c, 0xe2, 0xe3, 0x92, 0x73, 0x76, 0x1f, 0xa5, 0x98, - 0xb9, 0x69, 0x9d, 0x3a, 0x1e, 0xf6, 0x2f, 0xfa, 0x8b, 0xb3, 0x19, 0x37, 0xd0, 0xfe, 0x1c, 0x33, - 0xb3, 0x2c, 0xaa, 0x5f, 0x15, 0xe5, 0x07, 0x1e, 0x73, 0xe6, 0x78, 0x25, 0xe0, 0xab, 0xff, 0x2d, - 0x80, 
0x5a, 0xa7, 0x78, 0x6e, 0x16, 0xe3, 0xb4, 0x7f, 0x2b, 0x60, 0x73, 0xe0, 0x13, 0xef, 0x90, - 0x4c, 0xe0, 0xcf, 0x41, 0x93, 0xe7, 0x63, 0x9b, 0xcc, 0xec, 0x2a, 0x7b, 0xca, 0x7e, 0xfb, 0xe0, - 0x2b, 0x7a, 0x7a, 0x4b, 0x09, 0xad, 0xbe, 0x38, 0x9b, 0x71, 0x03, 0xd5, 0x39, 0x5a, 0x3f, 0x7f, - 0xa8, 0x1f, 0x4d, 0x3e, 0xc2, 0x16, 0x7b, 0x8e, 0x99, 0x69, 0xc0, 0xcb, 0x50, 0x5d, 0x8b, 0x42, - 0x15, 0xa4, 0x36, 0x94, 0xb0, 0x42, 0x03, 0x6c, 0xd0, 0x05, 0xb6, 0xba, 0xeb, 0x82, 0x7d, 0x4f, - 0x2f, 0xa9, 0x81, 0x2e, 0xb3, 0x19, 0x2f, 0xb0, 0x65, 0x6c, 0x49, 0xb6, 0x0d, 0xfe, 0x84, 0x44, - 0x2c, 0x3c, 0x04, 0x0d, 0xca, 0x4c, 0x16, 0xd0, 0x6e, 0x4d, 0xb0, 0x68, 0xd7, 0xb2, 0x08, 0xa4, - 0x71, 0x4b, 0xf2, 0x34, 0xe2, 0x67, 0x24, 0x19, 0xb4, 0x3f, 0x28, 0xa0, 0x2d, 0x91, 0x43, 0x87, - 0x32, 0xf8, 0xe1, 0xca, 0x0d, 0xe8, 0x37, 0xbb, 0x01, 0x1e, 0x2d, 0xde, 0xbf, 0x23, 0x4f, 0x6a, - 0x2e, 0x2d, 0x99, 0xb7, 0x7f, 0x0c, 0xea, 0x0e, 0xc3, 0x73, 0xda, 0x5d, 0xdf, 0xab, 0xed, 0xb7, - 0x0f, 0x5e, 0xbd, 0x2e, 0x71, 0x63, 0x5b, 0x12, 0xd5, 0x9f, 0xf1, 0x10, 0x14, 0x47, 0x6a, 0x7f, - 0xdb, 0x48, 0x12, 0xe6, 0x57, 0x02, 0xdf, 0x06, 0x4d, 0x5e, 0x58, 0x3b, 0x70, 0xb1, 0x48, 0xb8, - 0x95, 0x26, 0x30, 0x96, 0x76, 0x94, 0x20, 0xe0, 0x3e, 0x68, 0xf2, 0x5e, 0xf8, 0x80, 0x78, 0xb8, - 0xdb, 0x14, 0xe8, 0x2d, 0x8e, 0x3c, 0x96, 0x36, 0x94, 0x78, 0xe1, 0x09, 0xb8, 0x4f, 0x99, 0xe9, - 0x33, 0xc7, 0x9b, 0x3d, 0xc1, 0xa6, 0xed, 0x3a, 0x1e, 0x1e, 0x63, 0x8b, 0x78, 0x36, 0x15, 0xb5, - 0xab, 0x19, 0x5f, 0x8c, 0x42, 0xf5, 0xfe, 0xb8, 0x1c, 0x82, 0xaa, 0x62, 0xe1, 0x87, 0xe0, 0xb6, - 0x45, 0x3c, 0x2b, 0xf0, 0x7d, 0xec, 0x59, 0x17, 0x23, 0xe2, 0x3a, 0xd6, 0x85, 0x28, 0x63, 0xcb, - 0xd0, 0x65, 0xde, 0xb7, 0x07, 0x45, 0xc0, 0x55, 0x99, 0x11, 0xad, 0x12, 0xc1, 0x37, 0xc0, 0x26, - 0x0d, 0xe8, 0x02, 0x7b, 0x76, 0x77, 0x63, 0x4f, 0xd9, 0x6f, 0x1a, 0xed, 0x28, 0x54, 0x37, 0xc7, - 0xb1, 0x09, 0x2d, 0x7d, 0xf0, 0x27, 0xa0, 0xfd, 0x11, 0x99, 0x1c, 0xe3, 0xf9, 0xc2, 0x35, 0x19, - 0xee, 0xd6, 0x45, 0x9d, 0x5f, 0x2f, 0x2d, 
0xc6, 0x61, 0x8a, 0x13, 0xfd, 0x78, 0x47, 0x26, 0xd9, - 0xce, 0x38, 0x50, 0x96, 0x0d, 0xfe, 0x0c, 0xec, 0xd2, 0xc0, 0xb2, 0x30, 0xa5, 0xd3, 0xc0, 0x3d, - 0x24, 0x13, 0xfa, 0x7d, 0x87, 0x32, 0xe2, 0x5f, 0x0c, 0x9d, 0xb9, 0xc3, 0xba, 0x8d, 0x3d, 0x65, - 0xbf, 0x6e, 0xf4, 0xa2, 0x50, 0xdd, 0x1d, 0x57, 0xa2, 0xd0, 0x35, 0x0c, 0x10, 0x81, 0x7b, 0x53, - 0xd3, 0x71, 0xb1, 0xbd, 0xc2, 0xbd, 0x29, 0xb8, 0x77, 0xa3, 0x50, 0xbd, 0xf7, 0xb4, 0x14, 0x81, - 0x2a, 0x22, 0xb5, 0x3f, 0xaf, 0x83, 0xed, 0xdc, 0xf7, 0x02, 0x7f, 0x00, 0x1a, 0xa6, 0xc5, 0x9c, - 0x73, 0xde, 0x54, 0xbc, 0x55, 0x1f, 0x64, 0x6f, 0x87, 0x2b, 0x5d, 0xfa, 0xd5, 0x23, 0x3c, 0xc5, - 0xbc, 0x08, 0x38, 0xfd, 0xc8, 0x1e, 0x8b, 0x50, 0x24, 0x29, 0xa0, 0x0b, 0x3a, 0xae, 0x49, 0xd9, - 0xb2, 0x1f, 0x79, 0xb7, 0x89, 0xfa, 0xb4, 0x0f, 0xbe, 0x74, 0xb3, 0x8f, 0x8b, 0x47, 0x18, 0xaf, - 0x44, 0xa1, 0xda, 0x19, 0x16, 0x78, 0xd0, 0x0a, 0x33, 0xf4, 0x01, 0x14, 0xb6, 0xe4, 0x0a, 0xc5, - 0x79, 0xf5, 0x97, 0x3e, 0xef, 0x5e, 0x14, 0xaa, 0x70, 0xb8, 0xc2, 0x84, 0x4a, 0xd8, 0xb5, 0x7f, - 0x29, 0xa0, 0xf6, 0xf9, 0x08, 0xe8, 0x77, 0x72, 0x02, 0xfa, 0x6a, 0x55, 0xd3, 0x56, 0x8a, 0xe7, - 0xd3, 0x82, 0x78, 0xf6, 0x2a, 0x19, 0xae, 0x17, 0xce, 0xbf, 0xd6, 0xc0, 0xd6, 0x21, 0x99, 0x0c, - 0x88, 0x67, 0x3b, 0xcc, 0x21, 0x1e, 0x7c, 0x04, 0x36, 0xd8, 0xc5, 0x62, 0x29, 0x42, 0x7b, 0xcb, - 0xa3, 0x8f, 0x2f, 0x16, 0xf8, 0x2a, 0x54, 0x3b, 0x59, 0x2c, 0xb7, 0x21, 0x81, 0x86, 0xc3, 0x24, - 0x9d, 0x75, 0x11, 0xf7, 0x28, 0x7f, 0xdc, 0x55, 0xa8, 0x96, 0x8c, 0x58, 0x3d, 0x61, 0xca, 0x27, - 0x05, 0x67, 0x60, 0x9b, 0x17, 0x67, 0xe4, 0x93, 0x49, 0xdc, 0x65, 0xb5, 0x97, 0xae, 0xfa, 0x5d, - 0x99, 0xc0, 0xf6, 0x30, 0x4b, 0x84, 0xf2, 0xbc, 0xf0, 0x3c, 0xee, 0xb1, 0x63, 0xdf, 0xf4, 0x68, - 0xfc, 0x4a, 0x9f, 0xad, 0xa7, 0x77, 0xe5, 0x69, 0xa2, 0xcf, 0xf2, 0x6c, 0xa8, 0xe4, 0x04, 0xf8, - 0x26, 0x68, 0xf8, 0xd8, 0xa4, 0xc4, 0x13, 0xfd, 0xdc, 0x4a, 0xab, 0x83, 0x84, 0x15, 0x49, 0x2f, - 0x7c, 0x0b, 0x6c, 0xce, 0x31, 0xa5, 0xe6, 0x0c, 0x0b, 0xc5, 0x69, 0x19, 0x3b, 
0x12, 0xb8, 0xf9, - 0x3c, 0x36, 0xa3, 0xa5, 0x5f, 0xfb, 0xbd, 0x02, 0x36, 0x3f, 0x9f, 0xe9, 0xf7, 0xed, 0xfc, 0xf4, - 0xeb, 0x56, 0x75, 0x5e, 0xc5, 0xe4, 0xfb, 0x5d, 0x43, 0x24, 0x2a, 0xa6, 0xde, 0x43, 0xd0, 0x5e, - 0x98, 0xbe, 0xe9, 0xba, 0xd8, 0x75, 0xe8, 0x5c, 0xe4, 0x5a, 0x37, 0x76, 0xb8, 0x2e, 0x8f, 0x52, - 0x33, 0xca, 0x62, 0x78, 0x88, 0x45, 0xe6, 0x0b, 0x17, 0xf3, 0xcb, 0x8c, 0xdb, 0x4d, 0x86, 0x0c, - 0x52, 0x33, 0xca, 0x62, 0xe0, 0x11, 0xb8, 0x1b, 0x2b, 0x58, 0x71, 0x02, 0xd6, 0xc4, 0x04, 0xfc, - 0x42, 0x14, 0xaa, 0x77, 0x1f, 0x97, 0x01, 0x50, 0x79, 0x1c, 0x9c, 0x81, 0xce, 0x82, 0xd8, 0x5c, - 0x9c, 0x03, 0x1f, 0xcb, 0xe1, 0xd7, 0x16, 0xf7, 0xfc, 0x46, 0xe9, 0x65, 0x8c, 0x0a, 0xe0, 0x58, - 0x03, 0x8b, 0x56, 0xb4, 0x42, 0x0a, 0x1f, 0x81, 0xad, 0x89, 0x69, 0x9d, 0x91, 0xe9, 0x34, 0x3b, - 0x1a, 0x3a, 0x51, 0xa8, 0x6e, 0x19, 0x19, 0x3b, 0xca, 0xa1, 0xe0, 0x4f, 0x41, 0x93, 0x62, 0x17, - 0x5b, 0x8c, 0xf8, 0xb2, 0x97, 0xdf, 0xb9, 0x61, 0xf9, 0xcd, 0x09, 0x76, 0xc7, 0x32, 0x34, 0x5e, - 0x29, 0x96, 0x4f, 0x28, 0xa1, 0x84, 0xdf, 0x00, 0xb7, 0xe6, 0xa6, 0x17, 0x98, 0x09, 0x52, 0x34, - 0x71, 0xd3, 0x80, 0x51, 0xa8, 0xde, 0x7a, 0x9e, 0xf3, 0xa0, 0x02, 0x12, 0xfe, 0x10, 0x34, 0xd9, - 0x72, 0x5e, 0x37, 0x44, 0x6a, 0xa5, 0x13, 0x69, 0x44, 0xec, 0xdc, 0xb8, 0x4e, 0xda, 0x31, 0x99, - 0xd5, 0x09, 0x0d, 0xdf, 0x70, 0x18, 0x73, 0x65, 0x69, 0x1e, 0x4f, 0x19, 0xf6, 0x9f, 0x3a, 0x9e, - 0x43, 0x4f, 0xb1, 0x2d, 0x56, 0xa3, 0x7a, 0xbc, 0xe1, 0x1c, 0x1f, 0x0f, 0xcb, 0x20, 0xa8, 0x2a, - 0x16, 0x0e, 0xc1, 0xad, 0xb4, 0x87, 0x9e, 0x13, 0x1b, 0x77, 0x5b, 0xe2, 0x0b, 0x7c, 0x9d, 0xbf, - 0xe5, 0x20, 0xe7, 0xb9, 0x5a, 0xb1, 0xa0, 0x42, 0x6c, 0x76, 0xa3, 0x01, 0xd5, 0x1b, 0x8d, 0xf6, - 0xdb, 0x3a, 0x68, 0xa5, 0xc3, 0xfb, 0x04, 0x00, 0x6b, 0xa9, 0x90, 0x54, 0x0e, 0xf0, 0xd7, 0xaa, - 0xbe, 0xb6, 0x44, 0x4b, 0xd3, 0xc1, 0x93, 0x98, 0x28, 0xca, 0x10, 0xc1, 0x1f, 0x83, 0x96, 0x58, - 0xeb, 0x84, 0xd6, 0xad, 0xbf, 0xb4, 0xd6, 0x6d, 0x47, 0xa1, 0xda, 0x1a, 0x2f, 0x09, 0x50, 0xca, - 0x05, 0xa7, 0xd9, 
0x2b, 0xfb, 0x8c, 0xba, 0x0d, 0xf3, 0xd7, 0x2b, 0x8e, 0x28, 0xb0, 0x72, 0xf5, - 0x94, 0x4b, 0xcd, 0x86, 0x28, 0x70, 0xd5, 0xbe, 0xd2, 0x07, 0x2d, 0xb1, 0x80, 0x61, 0x1b, 0xdb, - 0xa2, 0x47, 0xeb, 0xc6, 0x6d, 0x09, 0x6d, 0x8d, 0x97, 0x0e, 0x94, 0x62, 0x38, 0x71, 0xbc, 0x59, - 0xc9, 0xfd, 0x2e, 0x21, 0x8e, 0xf7, 0x30, 0x24, 0xbd, 0xf0, 0x09, 0xe8, 0xc8, 0x94, 0xb0, 0xfd, - 0xcc, 0xb3, 0xf1, 0xc7, 0x98, 0x8a, 0x4f, 0xb3, 0x65, 0x74, 0x65, 0x44, 0x67, 0x50, 0xf0, 0xa3, - 0x95, 0x08, 0xf8, 0x6b, 0x05, 0xdc, 0x0f, 0x3c, 0x8b, 0x04, 0x1e, 0xc3, 0xf6, 0x31, 0xf6, 0xe7, - 0x8e, 0xc7, 0xff, 0x9f, 0x1b, 0x11, 0x9b, 0x8a, 0xce, 0x6d, 0x1f, 0xbc, 0x5d, 0x5a, 0xec, 0x93, - 0xf2, 0x98, 0xb8, 0xcf, 0x2b, 0x9c, 0xa8, 0xea, 0x24, 0xa8, 0x82, 0xba, 0x8f, 0x4d, 0xfb, 0x42, - 0xb4, 0x77, 0xdd, 0x68, 0x71, 0xbd, 0x46, 0xdc, 0x80, 0x62, 0xbb, 0xf6, 0x47, 0x05, 0xec, 0x14, - 0xd6, 0xe7, 0xff, 0xff, 0xfd, 0x48, 0x9b, 0x80, 0x15, 0x7d, 0x85, 0xef, 0x83, 0xba, 0x1f, 0xb8, - 0x78, 0xf9, 0x29, 0xbd, 0x75, 0x23, 0xad, 0x46, 0x81, 0x8b, 0xd3, 0x49, 0xc6, 0x9f, 0x28, 0x8a, - 0x69, 0xb4, 0xbf, 0x2b, 0xe0, 0xcd, 0x22, 0xfc, 0xc8, 0xfb, 0xde, 0xc7, 0x0e, 0x1b, 0x10, 0x1b, - 0x53, 0x84, 0x7f, 0x11, 0x38, 0x3e, 0x9e, 0x63, 0x8f, 0xc1, 0x77, 0xc1, 0xb6, 0x45, 0x3c, 0x66, - 0xf2, 0x6b, 0x79, 0xdf, 0x9c, 0x2f, 0xd7, 0xab, 0xdb, 0x7c, 0x43, 0x19, 0x64, 0x1d, 0x28, 0x8f, - 0x83, 0x63, 0xd0, 0x24, 0x0b, 0xfe, 0x8f, 0x3e, 0xf1, 0xe5, 0x6a, 0xf5, 0xee, 0x52, 0x0b, 0x8f, - 0xa4, 0xfd, 0x2a, 0x54, 0x1f, 0x5c, 0x93, 0xc6, 0x12, 0x86, 0x12, 0x22, 0xa8, 0x81, 0xc6, 0xb9, - 0xe9, 0x06, 0x98, 0x4f, 0xc0, 0xda, 0x7e, 0xdd, 0x00, 0xbc, 0xc7, 0x7f, 0x24, 0x2c, 0x48, 0x7a, - 0xb4, 0xbf, 0x94, 0xbe, 0xdc, 0x88, 0xd8, 0xa9, 0xaa, 0x8c, 0x4c, 0xc6, 0xb0, 0xef, 0xc1, 0xf7, - 0x72, 0x2b, 0xe3, 0x3b, 0x85, 0x95, 0xf1, 0x41, 0xc9, 0xe2, 0x97, 0xa5, 0xf9, 0x5f, 0x6d, 0x91, - 0xda, 0xe5, 0x3a, 0x78, 0xa5, 0xac, 0x9a, 0xf0, 0xbb, 0xb1, 0x7e, 0x10, 0x4f, 0x66, 0xbc, 0x9f, - 0xd5, 0x0f, 0xe2, 0x5d, 0x85, 0xea, 0xbd, 0x62, 0x5c, 
0xec, 0x41, 0x32, 0x0e, 0x7a, 0xa0, 0x4d, - 0xd2, 0x1b, 0x96, 0x4d, 0xfa, 0xcd, 0x1b, 0xf5, 0x53, 0x79, 0x83, 0xc4, 0x1b, 0x4c, 0xd6, 0x97, - 0x3d, 0x00, 0xfe, 0x12, 0xec, 0x90, 0xfc, 0xdd, 0x8b, 0xca, 0xdd, 0xfc, 0xcc, 0xb2, 0xba, 0x19, - 0xf7, 0xe5, 0x7b, 0xef, 0x14, 0xfc, 0xa8, 0x78, 0x98, 0xf6, 0x27, 0x05, 0x54, 0x29, 0x0b, 0x1c, - 0x65, 0x55, 0x96, 0x7f, 0x59, 0x2d, 0xe3, 0x20, 0xa7, 0xb0, 0x57, 0xa1, 0xfa, 0x5a, 0xd5, 0x8f, - 0x5a, 0xbc, 0xec, 0x54, 0x3f, 0x79, 0xf6, 0x24, 0x2b, 0xc3, 0xef, 0x25, 0x32, 0xbc, 0x2e, 0xe8, - 0xfa, 0xa9, 0x04, 0xdf, 0x8c, 0x4b, 0x86, 0x1b, 0x5f, 0xbf, 0x7c, 0xd1, 0x5b, 0xfb, 0xe4, 0x45, - 0x6f, 0xed, 0xd3, 0x17, 0xbd, 0xb5, 0x5f, 0x45, 0x3d, 0xe5, 0x32, 0xea, 0x29, 0x9f, 0x44, 0x3d, - 0xe5, 0xd3, 0xa8, 0xa7, 0xfc, 0x23, 0xea, 0x29, 0xbf, 0xf9, 0x67, 0x6f, 0xed, 0x83, 0x3b, 0x25, - 0xbf, 0x32, 0xfe, 0x27, 0x00, 0x00, 0xff, 0xff, 0xf2, 0x8e, 0x19, 0x59, 0x94, 0x14, 0x00, 0x00, + // 1797 bytes of a gzipped FileDescriptorProto + 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xcc, 0x58, 0xcd, 0x6f, 0x23, 0x49, + 0x15, 0x8f, 0x93, 0x38, 0xb1, 0xcb, 0xf9, 0xf0, 0xd4, 0x64, 0x66, 0x4c, 0x58, 0xb9, 0xb3, 0x9e, + 0xdd, 0x55, 0x16, 0x2d, 0xed, 0x9d, 0xec, 0x88, 0xe5, 0x5b, 0x3b, 0x9d, 0x61, 0x96, 0x09, 0xce, + 0x8e, 0x29, 0x67, 0x40, 0x5a, 0x16, 0x44, 0xb9, 0xbb, 0xec, 0xf4, 0xa6, 0xdd, 0xd5, 0x74, 0x55, + 0x47, 0x93, 0x0b, 0x42, 0xe2, 0x0f, 0x80, 0xbf, 0x82, 0x23, 0x17, 0x38, 0xc3, 0x0d, 0xcd, 0x71, + 0xc5, 0x69, 0xc5, 0xa1, 0xc5, 0x34, 0x7f, 0x00, 0xf7, 0x20, 0x24, 0x54, 0xd5, 0xe5, 0xfe, 0x72, + 0x77, 0xc8, 0xac, 0xc4, 0x88, 0x5b, 0xfa, 0xbd, 0xdf, 0xfb, 0xd5, 0xc7, 0x7b, 0xf5, 0x7b, 0x2f, + 0x06, 0xdf, 0x3e, 0xfb, 0x3a, 0xd3, 0x6d, 0xda, 0x3f, 0x0b, 0xc6, 0xc4, 0x77, 0x09, 0x27, 0xac, + 0x7f, 0x4e, 0x5c, 0x8b, 0xfa, 0x7d, 0xe5, 0xc0, 0x9e, 0xdd, 0x1f, 0x63, 0x6e, 0x9e, 0xf6, 0xcf, + 0xef, 0xf5, 0xa7, 0xc4, 0x25, 0x3e, 0xe6, 0xc4, 0xd2, 0x3d, 0x9f, 0x72, 0x0a, 0x6f, 0xc6, 0x20, + 0x1d, 0x7b, 0xb6, 0x2e, 0x41, 0xfa, 0xf9, 
0xbd, 0xdd, 0xaf, 0x4e, 0x6d, 0x7e, 0x1a, 0x8c, 0x75, + 0x93, 0xce, 0xfa, 0x53, 0x3a, 0xa5, 0x7d, 0x89, 0x1d, 0x07, 0x13, 0xf9, 0x25, 0x3f, 0xe4, 0x5f, + 0x31, 0xc7, 0x6e, 0x2f, 0xb3, 0x90, 0x49, 0x7d, 0x52, 0xb2, 0xce, 0xee, 0xfd, 0x14, 0x33, 0xc3, + 0xe6, 0xa9, 0xed, 0x12, 0xff, 0xa2, 0xef, 0x9d, 0x4d, 0x85, 0x81, 0xf5, 0x67, 0x84, 0xe3, 0xb2, + 0xa8, 0x7e, 0x55, 0x94, 0x1f, 0xb8, 0xdc, 0x9e, 0x91, 0x85, 0x80, 0xaf, 0xfd, 0xb7, 0x00, 0x66, + 0x9e, 0x92, 0x19, 0x2e, 0xc6, 0xf5, 0xfe, 0x55, 0x03, 0xeb, 0x87, 0x3e, 0x75, 0x8f, 0xe8, 0x18, + 0xfe, 0x1c, 0x34, 0xc4, 0x7e, 0x2c, 0xcc, 0x71, 0xa7, 0xb6, 0x57, 0xdb, 0x6f, 0x1d, 0xbc, 0xab, + 0xa7, 0xb7, 0x94, 0xd0, 0xea, 0xde, 0xd9, 0x54, 0x18, 0x98, 0x2e, 0xd0, 0xfa, 0xf9, 0x3d, 0xfd, + 0xc9, 0xf8, 0x53, 0x62, 0xf2, 0x63, 0xc2, 0xb1, 0x01, 0x9f, 0x87, 0xda, 0x52, 0x14, 0x6a, 0x20, + 0xb5, 0xa1, 0x84, 0x15, 0x1a, 0x60, 0x95, 0x79, 0xc4, 0xec, 0x2c, 0x4b, 0xf6, 0x3d, 0xbd, 0x24, + 0x07, 0xba, 0xda, 0xcd, 0xc8, 0x23, 0xa6, 0xb1, 0xa1, 0xd8, 0x56, 0xc5, 0x17, 0x92, 0xb1, 0xf0, + 0x08, 0xac, 0x31, 0x8e, 0x79, 0xc0, 0x3a, 0x2b, 0x92, 0xa5, 0x77, 0x25, 0x8b, 0x44, 0x1a, 0x5b, + 0x8a, 0x67, 0x2d, 0xfe, 0x46, 0x8a, 0xa1, 0xf7, 0xfb, 0x1a, 0x68, 0x29, 0xe4, 0xc0, 0x66, 0x1c, + 0x7e, 0xb2, 0x70, 0x03, 0xfa, 0xf5, 0x6e, 0x40, 0x44, 0xcb, 0xf3, 0xb7, 0xd5, 0x4a, 0x8d, 0xb9, + 0x25, 0x73, 0xfa, 0x07, 0xa0, 0x6e, 0x73, 0x32, 0x63, 0x9d, 0xe5, 0xbd, 0x95, 0xfd, 0xd6, 0xc1, + 0x6b, 0x57, 0x6d, 0xdc, 0xd8, 0x54, 0x44, 0xf5, 0xc7, 0x22, 0x04, 0xc5, 0x91, 0xbd, 0xbf, 0xae, + 0x26, 0x1b, 0x16, 0x57, 0x02, 0xdf, 0x01, 0x0d, 0x91, 0x58, 0x2b, 0x70, 0x88, 0xdc, 0x70, 0x33, + 0xdd, 0xc0, 0x48, 0xd9, 0x51, 0x82, 0x80, 0xfb, 0xa0, 0x21, 0x6a, 0xe1, 0x63, 0xea, 0x92, 0x4e, + 0x43, 0xa2, 0x37, 0x04, 0xf2, 0x44, 0xd9, 0x50, 0xe2, 0x85, 0x4f, 0xc1, 0x1d, 0xc6, 0xb1, 0xcf, + 0x6d, 0x77, 0xfa, 0x90, 0x60, 0xcb, 0xb1, 0x5d, 0x32, 0x22, 0x26, 0x75, 0x2d, 0x26, 0x73, 0xb7, + 0x62, 0x7c, 0x39, 0x0a, 0xb5, 0x3b, 0xa3, 0x72, 0x08, 0xaa, 0x8a, 0x85, 0x9f, 
0x80, 0x1b, 0x26, + 0x75, 0xcd, 0xc0, 0xf7, 0x89, 0x6b, 0x5e, 0x0c, 0xa9, 0x63, 0x9b, 0x17, 0x32, 0x8d, 0x4d, 0x43, + 0x57, 0xfb, 0xbe, 0x71, 0x58, 0x04, 0x5c, 0x96, 0x19, 0xd1, 0x22, 0x11, 0x7c, 0x13, 0xac, 0xb3, + 0x80, 0x79, 0xc4, 0xb5, 0x3a, 0xab, 0x7b, 0xb5, 0xfd, 0x86, 0xd1, 0x8a, 0x42, 0x6d, 0x7d, 0x14, + 0x9b, 0xd0, 0xdc, 0x07, 0x7f, 0x02, 0x5a, 0x9f, 0xd2, 0xf1, 0x09, 0x99, 0x79, 0x0e, 0xe6, 0xa4, + 0x53, 0x97, 0x79, 0x7e, 0xa3, 0x34, 0x19, 0x47, 0x29, 0x4e, 0xd6, 0xe3, 0x4d, 0xb5, 0xc9, 0x56, + 0xc6, 0x81, 0xb2, 0x6c, 0xf0, 0x67, 0x60, 0x97, 0x05, 0xa6, 0x49, 0x18, 0x9b, 0x04, 0xce, 0x11, + 0x1d, 0xb3, 0xef, 0xdb, 0x8c, 0x53, 0xff, 0x62, 0x60, 0xcf, 0x6c, 0xde, 0x59, 0xdb, 0xab, 0xed, + 0xd7, 0x8d, 0x6e, 0x14, 0x6a, 0xbb, 0xa3, 0x4a, 0x14, 0xba, 0x82, 0x01, 0x22, 0x70, 0x7b, 0x82, + 0x6d, 0x87, 0x58, 0x0b, 0xdc, 0xeb, 0x92, 0x7b, 0x37, 0x0a, 0xb5, 0xdb, 0x8f, 0x4a, 0x11, 0xa8, + 0x22, 0xb2, 0xf7, 0xa7, 0x65, 0xb0, 0x99, 0x7b, 0x2f, 0xf0, 0x07, 0x60, 0x0d, 0x9b, 0xdc, 0x3e, + 0x17, 0x45, 0x25, 0x4a, 0xf5, 0x6e, 0xf6, 0x76, 0x84, 0xd2, 0xa5, 0xaf, 0x1e, 0x91, 0x09, 0x11, + 0x49, 0x20, 0xe9, 0x23, 0x7b, 0x20, 0x43, 0x91, 0xa2, 0x80, 0x0e, 0x68, 0x3b, 0x98, 0xf1, 0x79, + 0x3d, 0x8a, 0x6a, 0x93, 0xf9, 0x69, 0x1d, 0x7c, 0xe5, 0x7a, 0x8f, 0x4b, 0x44, 0x18, 0x3b, 0x51, + 0xa8, 0xb5, 0x07, 0x05, 0x1e, 0xb4, 0xc0, 0x0c, 0x7d, 0x00, 0xa5, 0x2d, 0xb9, 0x42, 0xb9, 0x5e, + 0xfd, 0xa5, 0xd7, 0xbb, 0x1d, 0x85, 0x1a, 0x1c, 0x2c, 0x30, 0xa1, 0x12, 0xf6, 0xde, 0x3f, 0x6b, + 0x60, 0xe5, 0xd5, 0x08, 0xe8, 0x77, 0x73, 0x02, 0xfa, 0x5a, 0x55, 0xd1, 0x56, 0x8a, 0xe7, 0xa3, + 0x82, 0x78, 0x76, 0x2b, 0x19, 0xae, 0x16, 0xce, 0xbf, 0xac, 0x80, 0x8d, 0x23, 0x3a, 0x3e, 0xa4, + 0xae, 0x65, 0x73, 0x9b, 0xba, 0xf0, 0x3e, 0x58, 0xe5, 0x17, 0xde, 0x5c, 0x84, 0xf6, 0xe6, 0x4b, + 0x9f, 0x5c, 0x78, 0xe4, 0x32, 0xd4, 0xda, 0x59, 0xac, 0xb0, 0x21, 0x89, 0x86, 0x83, 0x64, 0x3b, + 0xcb, 0x32, 0xee, 0x7e, 0x7e, 0xb9, 0xcb, 0x50, 0x2b, 0x69, 0xb1, 0x7a, 0xc2, 0x94, 0xdf, 0x14, + 0x9c, 0x82, 0x4d, 
0x91, 0x9c, 0xa1, 0x4f, 0xc7, 0x71, 0x95, 0xad, 0xbc, 0x74, 0xd6, 0x6f, 0xa9, + 0x0d, 0x6c, 0x0e, 0xb2, 0x44, 0x28, 0xcf, 0x0b, 0xcf, 0xe3, 0x1a, 0x3b, 0xf1, 0xb1, 0xcb, 0xe2, + 0x23, 0x7d, 0xb1, 0x9a, 0xde, 0x55, 0xab, 0xc9, 0x3a, 0xcb, 0xb3, 0xa1, 0x92, 0x15, 0xe0, 0x5b, + 0x60, 0xcd, 0x27, 0x98, 0x51, 0x57, 0xd6, 0x73, 0x33, 0xcd, 0x0e, 0x92, 0x56, 0xa4, 0xbc, 0xf0, + 0x6d, 0xb0, 0x3e, 0x23, 0x8c, 0xe1, 0x29, 0x91, 0x8a, 0xd3, 0x34, 0xb6, 0x15, 0x70, 0xfd, 0x38, + 0x36, 0xa3, 0xb9, 0xbf, 0xf7, 0xbb, 0x1a, 0x58, 0x7f, 0x35, 0xdd, 0xef, 0x3b, 0xf9, 0xee, 0xd7, + 0xa9, 0xaa, 0xbc, 0x8a, 0xce, 0xf7, 0x9b, 0x86, 0xdc, 0xa8, 0xec, 0x7a, 0xf7, 0x40, 0xcb, 0xc3, + 0x3e, 0x76, 0x1c, 0xe2, 0xd8, 0x6c, 0x26, 0xf7, 0x5a, 0x37, 0xb6, 0x85, 0x2e, 0x0f, 0x53, 0x33, + 0xca, 0x62, 0x44, 0x88, 0x49, 0x67, 0x9e, 0x43, 0xc4, 0x65, 0xc6, 0xe5, 0xa6, 0x42, 0x0e, 0x53, + 0x33, 0xca, 0x62, 0xe0, 0x13, 0x70, 0x2b, 0x56, 0xb0, 0x62, 0x07, 0x5c, 0x91, 0x1d, 0xf0, 0x4b, + 0x51, 0xa8, 0xdd, 0x7a, 0x50, 0x06, 0x40, 0xe5, 0x71, 0x70, 0x0a, 0xda, 0x1e, 0xb5, 0x84, 0x38, + 0x07, 0x3e, 0x51, 0xcd, 0xaf, 0x25, 0xef, 0xf9, 0xcd, 0xd2, 0xcb, 0x18, 0x16, 0xc0, 0xb1, 0x06, + 0x16, 0xad, 0x68, 0x81, 0x14, 0xde, 0x07, 0x1b, 0x63, 0x6c, 0x9e, 0xd1, 0xc9, 0x24, 0xdb, 0x1a, + 0xda, 0x51, 0xa8, 0x6d, 0x18, 0x19, 0x3b, 0xca, 0xa1, 0xe0, 0x00, 0xec, 0x64, 0xbf, 0x87, 0xc4, + 0x7f, 0xec, 0x5a, 0xe4, 0x59, 0x67, 0x43, 0x46, 0x77, 0xa2, 0x50, 0xdb, 0x31, 0x4a, 0xfc, 0xa8, + 0x34, 0x0a, 0x7e, 0x00, 0xda, 0x33, 0xfc, 0x2c, 0xee, 0x44, 0xd2, 0x42, 0x58, 0x67, 0x53, 0x32, + 0xc9, 0x53, 0x1c, 0x17, 0x7c, 0x68, 0x01, 0x0d, 0x7f, 0x0a, 0x1a, 0x8c, 0x38, 0xc4, 0xe4, 0xd4, + 0x57, 0x6f, 0xeb, 0xbd, 0x6b, 0x96, 0x23, 0x1e, 0x13, 0x67, 0xa4, 0x42, 0xe3, 0x11, 0x67, 0xfe, + 0x85, 0x12, 0x4a, 0xf8, 0x4d, 0xb0, 0x35, 0xc3, 0x6e, 0x80, 0x13, 0xa4, 0x7c, 0x54, 0x0d, 0x03, + 0x46, 0xa1, 0xb6, 0x75, 0x9c, 0xf3, 0xa0, 0x02, 0x12, 0xfe, 0x10, 0x34, 0xf8, 0x7c, 0x7e, 0x58, + 0x93, 0x5b, 0x2b, 0xed, 0x90, 0x43, 0x6a, 0xe5, 0xc6, 
0x87, 0xe4, 0x79, 0x24, 0xb3, 0x43, 0x42, + 0x23, 0x26, 0x2e, 0xce, 0x1d, 0x55, 0x2a, 0x0f, 0x26, 0x9c, 0xf8, 0x8f, 0x6c, 0xd7, 0x66, 0xa7, + 0xc4, 0x92, 0xa3, 0x5a, 0x3d, 0x9e, 0xb8, 0x4e, 0x4e, 0x06, 0x65, 0x10, 0x54, 0x15, 0x0b, 0x07, + 0x60, 0x2b, 0xad, 0xe9, 0x63, 0x6a, 0x91, 0x4e, 0x53, 0x2a, 0xc2, 0x1b, 0xe2, 0x94, 0x87, 0x39, + 0xcf, 0xe5, 0x82, 0x05, 0x15, 0x62, 0xb3, 0x13, 0x16, 0xb8, 0x62, 0xc2, 0xb2, 0xc0, 0x8e, 0x47, + 0x2d, 0x44, 0x3c, 0x07, 0x9b, 0x64, 0x46, 0x5c, 0xae, 0x8a, 0x7d, 0x4b, 0x2e, 0xfd, 0xae, 0xa8, + 0xa4, 0x61, 0x89, 0xff, 0xb2, 0xc2, 0x8e, 0x4a, 0xd9, 0x7a, 0xff, 0xae, 0x83, 0x66, 0x3a, 0xb2, + 0x3c, 0x05, 0xc0, 0x9c, 0xf7, 0x05, 0xa6, 0xc6, 0x96, 0xd7, 0xab, 0x34, 0x26, 0xe9, 0x20, 0x69, + 0xbb, 0x4d, 0x4c, 0x0c, 0x65, 0x88, 0xe0, 0x8f, 0x41, 0x53, 0x0e, 0xb3, 0x52, 0xe1, 0x97, 0x5f, + 0x5a, 0xe1, 0x37, 0xa3, 0x50, 0x6b, 0x8e, 0xe6, 0x04, 0x28, 0xe5, 0x82, 0x93, 0x6c, 0x62, 0xbe, + 0x60, 0xb7, 0x82, 0xf9, 0x24, 0xca, 0x25, 0x0a, 0xac, 0xa2, 0x67, 0xa8, 0x51, 0x6e, 0x55, 0x96, + 0x51, 0xd5, 0x94, 0xd6, 0x07, 0x4d, 0x39, 0x76, 0x12, 0x8b, 0x58, 0xf2, 0x25, 0xd4, 0x8d, 0x1b, + 0x0a, 0xda, 0x1c, 0xcd, 0x1d, 0x28, 0xc5, 0x08, 0xe2, 0x78, 0x9e, 0x54, 0x53, 0x6d, 0x42, 0x1c, + 0xbf, 0x62, 0xa4, 0xbc, 0x42, 0x79, 0x39, 0xf1, 0x67, 0xb6, 0x8b, 0xc5, 0x7f, 0x04, 0x52, 0xf0, + 0x94, 0xf2, 0x9e, 0xa4, 0x66, 0x94, 0xc5, 0xc0, 0x87, 0xa0, 0xad, 0x4e, 0x91, 0x6a, 0xc7, 0xba, + 0xac, 0x9d, 0x8e, 0x5a, 0xa4, 0x7d, 0x58, 0xf0, 0xa3, 0x85, 0x08, 0xf8, 0x3e, 0xd8, 0x9c, 0xe4, + 0xe4, 0x07, 0x48, 0x8a, 0x1b, 0xa2, 0xbd, 0xe7, 0xb5, 0x27, 0x8f, 0x83, 0xbf, 0xae, 0x81, 0x3b, + 0x81, 0x6b, 0xd2, 0xc0, 0xe5, 0xc4, 0x9a, 0x6f, 0x92, 0x58, 0x43, 0x6a, 0x31, 0xf9, 0x16, 0x5b, + 0x07, 0xef, 0x94, 0x16, 0xd6, 0xd3, 0xf2, 0x98, 0xf8, 0xe5, 0x56, 0x38, 0x51, 0xd5, 0x4a, 0x50, + 0x03, 0x75, 0x9f, 0x60, 0xeb, 0x42, 0x3e, 0xd8, 0xba, 0xd1, 0x14, 0x1d, 0x11, 0x09, 0x03, 0x8a, + 0xed, 0xbd, 0x3f, 0xd4, 0xc0, 0x76, 0xe1, 0x1f, 0x94, 0xff, 0xff, 0x09, 0xb4, 0x37, 0x06, 
0x0b, + 0x1d, 0x0c, 0x7e, 0x04, 0xea, 0x7e, 0xe0, 0x90, 0xf9, 0xb3, 0x7d, 0xfb, 0x5a, 0xdd, 0x10, 0x05, + 0x0e, 0x49, 0x67, 0x05, 0xf1, 0xc5, 0x50, 0x4c, 0xd3, 0xfb, 0x5b, 0x0d, 0xbc, 0x55, 0x84, 0x3f, + 0x71, 0xbf, 0xf7, 0xcc, 0xe6, 0x87, 0xd4, 0x22, 0x0c, 0x91, 0x5f, 0x04, 0xb6, 0x2f, 0xa5, 0x44, + 0x14, 0x89, 0x49, 0x5d, 0x8e, 0xc5, 0xb5, 0x7c, 0x84, 0x67, 0xf3, 0x01, 0x56, 0x16, 0xc9, 0x61, + 0xd6, 0x81, 0xf2, 0x38, 0x38, 0x02, 0x0d, 0xea, 0x11, 0x1f, 0x8b, 0xc6, 0x11, 0x0f, 0xaf, 0xef, + 0xcf, 0xd5, 0xfd, 0x89, 0xb2, 0x5f, 0x86, 0xda, 0xdd, 0x2b, 0xb6, 0x31, 0x87, 0xa1, 0x84, 0x08, + 0xf6, 0xc0, 0xda, 0x39, 0x76, 0x02, 0x22, 0x66, 0x8c, 0x95, 0xfd, 0xba, 0x01, 0xc4, 0x7b, 0xfa, + 0x91, 0xb4, 0x20, 0xe5, 0xe9, 0xfd, 0xb9, 0xf4, 0x70, 0x43, 0x6a, 0xa5, 0x0a, 0x36, 0xc4, 0x9c, + 0x13, 0xdf, 0x85, 0x1f, 0xe6, 0x86, 0xf2, 0xf7, 0x0a, 0x43, 0xf9, 0xdd, 0x92, 0xd1, 0x3a, 0x4b, + 0xf3, 0xbf, 0x9a, 0xd3, 0x7b, 0xcf, 0x97, 0xc1, 0x4e, 0x59, 0x36, 0xe1, 0x07, 0xb1, 0x56, 0x51, + 0x57, 0xed, 0x78, 0x3f, 0xab, 0x55, 0xd4, 0xbd, 0x0c, 0xb5, 0xdb, 0xc5, 0xb8, 0xd8, 0x83, 0x54, + 0x1c, 0x74, 0x41, 0x8b, 0xa6, 0x37, 0xac, 0x8a, 0xf4, 0x5b, 0xd7, 0xaa, 0xa7, 0xf2, 0x02, 0x89, + 0x95, 0x2a, 0xeb, 0xcb, 0x2e, 0x00, 0x7f, 0x09, 0xb6, 0x69, 0xfe, 0xee, 0x65, 0xe6, 0xae, 0xbf, + 0x66, 0x59, 0xde, 0x8c, 0x3b, 0xea, 0xdc, 0xdb, 0x05, 0x3f, 0x2a, 0x2e, 0xd6, 0xfb, 0x63, 0x0d, + 0x54, 0x29, 0x0b, 0x1c, 0x66, 0x15, 0x5d, 0xbc, 0xac, 0xa6, 0x71, 0x90, 0x53, 0xf3, 0xcb, 0x50, + 0x7b, 0xbd, 0xea, 0x67, 0x43, 0x91, 0x76, 0xa6, 0x3f, 0x7d, 0xfc, 0x30, 0x2b, 0xf9, 0x1f, 0x26, + 0x92, 0xbf, 0x2c, 0xe9, 0xfa, 0xa9, 0xdc, 0x5f, 0x8f, 0x4b, 0x85, 0x1b, 0xdf, 0x78, 0xfe, 0xa2, + 0xbb, 0xf4, 0xd9, 0x8b, 0xee, 0xd2, 0xe7, 0x2f, 0xba, 0x4b, 0xbf, 0x8a, 0xba, 0xb5, 0xe7, 0x51, + 0xb7, 0xf6, 0x59, 0xd4, 0xad, 0x7d, 0x1e, 0x75, 0x6b, 0x7f, 0x8f, 0xba, 0xb5, 0xdf, 0xfe, 0xa3, + 0xbb, 0xf4, 0xf1, 0xcd, 0x92, 0xdf, 0x71, 0xff, 0x13, 0x00, 0x00, 0xff, 0xff, 0x43, 0xdf, 0xa6, + 0x7c, 0xf6, 0x15, 0x00, 0x00, } 
func (m *CronJob) Marshal() (dAtA []byte, err error) { @@ -1023,6 +1030,23 @@ func (m *JobSpec) MarshalToSizedBuffer(dAtA []byte) (int, error) { _ = i var l int _ = l + if m.PodReplacementPolicy != nil { + i -= len(*m.PodReplacementPolicy) + copy(dAtA[i:], *m.PodReplacementPolicy) + i = encodeVarintGenerated(dAtA, i, uint64(len(*m.PodReplacementPolicy))) + i-- + dAtA[i] = 0x72 + } + if m.MaxFailedIndexes != nil { + i = encodeVarintGenerated(dAtA, i, uint64(*m.MaxFailedIndexes)) + i-- + dAtA[i] = 0x68 + } + if m.BackoffLimitPerIndex != nil { + i = encodeVarintGenerated(dAtA, i, uint64(*m.BackoffLimitPerIndex)) + i-- + dAtA[i] = 0x60 + } if m.PodFailurePolicy != nil { { size, err := m.PodFailurePolicy.MarshalToSizedBuffer(dAtA[:i]) @@ -1132,6 +1156,18 @@ func (m *JobStatus) MarshalToSizedBuffer(dAtA []byte) (int, error) { _ = i var l int _ = l + if m.Terminating != nil { + i = encodeVarintGenerated(dAtA, i, uint64(*m.Terminating)) + i-- + dAtA[i] = 0x58 + } + if m.FailedIndexes != nil { + i -= len(*m.FailedIndexes) + copy(dAtA[i:], *m.FailedIndexes) + i = encodeVarintGenerated(dAtA, i, uint64(len(*m.FailedIndexes))) + i-- + dAtA[i] = 0x52 + } if m.Ready != nil { i = encodeVarintGenerated(dAtA, i, uint64(*m.Ready)) i-- @@ -1645,6 +1681,16 @@ func (m *JobSpec) Size() (n int) { l = m.PodFailurePolicy.Size() n += 1 + l + sovGenerated(uint64(l)) } + if m.BackoffLimitPerIndex != nil { + n += 1 + sovGenerated(uint64(*m.BackoffLimitPerIndex)) + } + if m.MaxFailedIndexes != nil { + n += 1 + sovGenerated(uint64(*m.MaxFailedIndexes)) + } + if m.PodReplacementPolicy != nil { + l = len(*m.PodReplacementPolicy) + n += 1 + l + sovGenerated(uint64(l)) + } return n } @@ -1680,6 +1726,13 @@ func (m *JobStatus) Size() (n int) { if m.Ready != nil { n += 1 + sovGenerated(uint64(*m.Ready)) } + if m.FailedIndexes != nil { + l = len(*m.FailedIndexes) + n += 1 + l + sovGenerated(uint64(l)) + } + if m.Terminating != nil { + n += 1 + sovGenerated(uint64(*m.Terminating)) + } return n } @@ 
-1913,6 +1966,9 @@ func (this *JobSpec) String() string { `CompletionMode:` + valueToStringGenerated(this.CompletionMode) + `,`, `Suspend:` + valueToStringGenerated(this.Suspend) + `,`, `PodFailurePolicy:` + strings.Replace(this.PodFailurePolicy.String(), "PodFailurePolicy", "PodFailurePolicy", 1) + `,`, + `BackoffLimitPerIndex:` + valueToStringGenerated(this.BackoffLimitPerIndex) + `,`, + `MaxFailedIndexes:` + valueToStringGenerated(this.MaxFailedIndexes) + `,`, + `PodReplacementPolicy:` + valueToStringGenerated(this.PodReplacementPolicy) + `,`, `}`, }, "") return s @@ -1936,6 +1992,8 @@ func (this *JobStatus) String() string { `CompletedIndexes:` + fmt.Sprintf("%v", this.CompletedIndexes) + `,`, `UncountedTerminatedPods:` + strings.Replace(this.UncountedTerminatedPods.String(), "UncountedTerminatedPods", "UncountedTerminatedPods", 1) + `,`, `Ready:` + valueToStringGenerated(this.Ready) + `,`, + `FailedIndexes:` + valueToStringGenerated(this.FailedIndexes) + `,`, + `Terminating:` + valueToStringGenerated(this.Terminating) + `,`, `}`, }, "") return s @@ -3527,6 +3585,79 @@ func (m *JobSpec) Unmarshal(dAtA []byte) error { return err } iNdEx = postIndex + case 12: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field BackoffLimitPerIndex", wireType) + } + var v int32 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowGenerated + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + v |= int32(b&0x7F) << shift + if b < 0x80 { + break + } + } + m.BackoffLimitPerIndex = &v + case 13: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field MaxFailedIndexes", wireType) + } + var v int32 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowGenerated + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + v |= int32(b&0x7F) << shift + if b < 0x80 { + break + } + } + m.MaxFailedIndexes = &v + case 14: + if 
wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field PodReplacementPolicy", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowGenerated + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthGenerated + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthGenerated + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + s := PodReplacementPolicy(dAtA[iNdEx:postIndex]) + m.PodReplacementPolicy = &s + iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipGenerated(dAtA[iNdEx:]) @@ -3828,6 +3959,59 @@ func (m *JobStatus) Unmarshal(dAtA []byte) error { } } m.Ready = &v + case 10: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field FailedIndexes", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowGenerated + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthGenerated + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthGenerated + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + s := string(dAtA[iNdEx:postIndex]) + m.FailedIndexes = &s + iNdEx = postIndex + case 11: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field Terminating", wireType) + } + var v int32 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowGenerated + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + v |= int32(b&0x7F) << shift + if b < 0x80 { + break + } + } + m.Terminating = &v 
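The generated `Unmarshal` cases above (BackoffLimitPerIndex, MaxFailedIndexes, Terminating, and the string fields) all repeat the same inlined protobuf varint loop: seven payload bits per byte, least-significant group first, with the high bit of each byte signalling continuation. A minimal standalone sketch of that loop, with an illustrative function name (`decodeVarint` here is not a symbol from the generated file):

```go
package main

import (
	"errors"
	"fmt"
)

// decodeVarint reads one base-128 varint from dAtA and returns its value
// and the number of bytes consumed, mirroring the inlined loop that the
// generated Unmarshal code repeats for each varint-encoded field.
func decodeVarint(dAtA []byte) (uint64, int, error) {
	var v uint64
	i := 0
	for shift := uint(0); ; shift += 7 {
		if shift >= 64 {
			return 0, 0, errors.New("varint overflows 64 bits")
		}
		if i >= len(dAtA) {
			return 0, 0, errors.New("unexpected end of input")
		}
		b := dAtA[i]
		i++
		v |= uint64(b&0x7F) << shift // accumulate the low 7 bits of each byte
		if b < 0x80 {                // high bit clear: this was the last byte
			return v, i, nil
		}
	}
}

func main() {
	// 300 encodes as 0xAC 0x02: 0x2C (44) plus 0x02<<7 (256).
	v, n, err := decodeVarint([]byte{0xAC, 0x02})
	fmt.Println(v, n, err) // 300 2 <nil>
}
```

The generated code also shifts the decoded wire tag right by 3 to recover the field number (12, 13, 14, ...) and masks the low 3 bits for the wire type, which is why each new field lands in its own `case`.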
default: iNdEx = preIndex skippy, err := skipGenerated(dAtA[iNdEx:]) diff --git a/cluster-autoscaler/vendor/k8s.io/api/batch/v1/generated.proto b/cluster-autoscaler/vendor/k8s.io/api/batch/v1/generated.proto index 181c79597da6..e1bef3a4634c 100644 --- a/cluster-autoscaler/vendor/k8s.io/api/batch/v1/generated.proto +++ b/cluster-autoscaler/vendor/k8s.io/api/batch/v1/generated.proto @@ -213,8 +213,8 @@ message JobSpec { // checked against the backoffLimit. This field cannot be used in combination // with restartPolicy=OnFailure. // - // This field is alpha-level. To use this field, you must enable the - // `JobPodFailurePolicy` feature gate (disabled by default). + // This field is beta-level. It can be used when the `JobPodFailurePolicy` + // feature gate is enabled (enabled by default). // +optional optional PodFailurePolicy podFailurePolicy = 11; @@ -223,6 +223,30 @@ message JobSpec { // +optional optional int32 backoffLimit = 7; + // Specifies the limit for the number of retries within an + // index before marking this index as failed. When enabled the number of + // failures per index is kept in the pod's + // batch.kubernetes.io/job-index-failure-count annotation. It can only + // be set when Job's completionMode=Indexed, and the Pod's restart + // policy is Never. The field is immutable. + // This field is alpha-level. It can be used when the `JobBackoffLimitPerIndex` + // feature gate is enabled (disabled by default). + // +optional + optional int32 backoffLimitPerIndex = 12; + + // Specifies the maximal number of failed indexes before marking the Job as + // failed, when backoffLimitPerIndex is set. Once the number of failed + // indexes exceeds this number the entire Job is marked as Failed and its + // execution is terminated. When left as null the job continues execution of + // all of its indexes and is marked with the `Complete` Job condition. + // It can only be specified when backoffLimitPerIndex is set. + // It can be null or up to completions. 
It is required and must be + less than or equal to 10^4 when completions is greater than 10^5. + This field is alpha-level. It can be used when the `JobBackoffLimitPerIndex` + feature gate is enabled (disabled by default). + +optional + optional int32 maxFailedIndexes = 13; + + // A label query over pods that should match the pod count. // Normally, the system sets this field for you. // More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#label-selectors @@ -292,6 +316,19 @@ message JobSpec { // // +optional optional bool suspend = 10; + + // podReplacementPolicy specifies when to create replacement Pods. + // Possible values are: + // - TerminatingOrFailed means that we recreate pods + // when they are terminating (has a metadata.deletionTimestamp) or failed. + // - Failed means to wait until a previously created Pod is fully terminated (has phase + // Failed or Succeeded) before creating a replacement Pod. + // + // When using podFailurePolicy, Failed is the only allowed value. + // TerminatingOrFailed and Failed are allowed values when podFailurePolicy is not in use. + // This is an alpha field. Enable JobPodReplacementPolicy to be able to use this field. + // +optional + optional string podReplacementPolicy = 14; } // JobStatus represents the current state of a Job. @@ -335,6 +372,14 @@ message JobStatus { // +optional optional int32 failed = 6; + // The number of pods which are terminating (in phase Pending or Running + // and have a deletionTimestamp). + // + // This field is alpha-level. The job controller populates the field when + // the feature gate JobPodReplacementPolicy is enabled (disabled by default). + // +optional + optional int32 terminating = 11; + // completedIndexes holds the completed indexes when .spec.completionMode = // "Indexed" in a text format. The indexes are represented as decimal integers // separated by commas. The numbers are listed in increasing order. 
Three or @@ -345,6 +390,19 @@ message JobStatus { // +optional optional string completedIndexes = 7; + // FailedIndexes holds the failed indexes when backoffLimitPerIndex=true. + // The indexes are represented in the text format analogous to the + // `completedIndexes` field, i.e. they are kept as decimal integers + // separated by commas. The numbers are listed in increasing order. Three or + // more consecutive numbers are compressed and represented by the first and + // last element of the series, separated by a hyphen. + // For example, if the failed indexes are 1, 3, 4, 5 and 7, they are + // represented as "1,3-5,7". + // This field is alpha-level. It can be used when the `JobBackoffLimitPerIndex` + // feature gate is enabled (disabled by default). + // +optional + optional string failedIndexes = 10; + // uncountedTerminatedPods holds the UIDs of Pods that have terminated but // the job controller hasn't yet accounted for in the status counters. // @@ -452,6 +510,10 @@ message PodFailurePolicyRule { // // - FailJob: indicates that the pod's job is marked as Failed and all // running pods are terminated. + // - FailIndex: indicates that the pod's index is marked as Failed and will + // not be restarted. + // This value is alpha-level. It can be used when the + // `JobBackoffLimitPerIndex` feature gate is enabled (disabled by default). // - Ignore: indicates that the counter towards the .backoffLimit is not // incremented and a replacement pod is created. 
// - Count: indicates that the pod is handled in the default way - the diff --git a/cluster-autoscaler/vendor/k8s.io/api/batch/v1/types.go b/cluster-autoscaler/vendor/k8s.io/api/batch/v1/types.go index 346676b09515..883d193aaebf 100644 --- a/cluster-autoscaler/vendor/k8s.io/api/batch/v1/types.go +++ b/cluster-autoscaler/vendor/k8s.io/api/batch/v1/types.go @@ -27,6 +27,11 @@ const ( // More info: https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#label-selector-and-annotation-conventions labelPrefix = "batch.kubernetes.io/" + // CronJobScheduledTimestampAnnotation is the scheduled timestamp annotation for the Job. + // It records the original/expected scheduled timestamp for the running job, represented in RFC3339. + // The CronJob controller adds this annotation if the CronJobsScheduledAnnotation feature gate (beta in 1.28) is enabled. + CronJobScheduledTimestampAnnotation = labelPrefix + "cronjob-scheduled-timestamp" + JobCompletionIndexAnnotation = labelPrefix + "job-completion-index" // JobTrackingFinalizer is a finalizer for Job's pods. It prevents them from // being deleted before being accounted in the Job status. @@ -45,6 +50,13 @@ const ( // ControllerUid is used to programmatically get pods corresponding to a Job. // There is a corresponding label without the batch.kubernetes.io that we support for legacy reasons. ControllerUidLabel = labelPrefix + "controller-uid" + // Annotation indicating the number of failures for the index corresponding + // to the pod, which are counted towards the backoff limit. + JobIndexFailureCountAnnotation = labelPrefix + "job-index-failure-count" + // Annotation indicating the number of failures for the index corresponding + // to the pod, which don't count towards the backoff limit, according to the + // pod failure policy. When the annotation is absent zero is implied. 
+ JobIndexIgnoredFailureCountAnnotation = labelPrefix + "job-index-ignored-failure-count" ) // +genclient @@ -109,6 +121,11 @@ const ( // pod's job as Failed and terminate all running pods. PodFailurePolicyActionFailJob PodFailurePolicyAction = "FailJob" + // This is an action which might be taken on a pod failure - mark the + // Job's index as failed to avoid restarts within this index. This action + // can only be used when backoffLimitPerIndex is set. + PodFailurePolicyActionFailIndex PodFailurePolicyAction = "FailIndex" + // This is an action which might be taken on a pod failure - the counter towards // .backoffLimit, represented by the job's .status.failed field, is not // incremented and a replacement pod is created. @@ -128,6 +145,19 @@ const ( PodFailurePolicyOnExitCodesOpNotIn PodFailurePolicyOnExitCodesOperator = "NotIn" ) +// PodReplacementPolicy specifies the policy for creating pod replacements. +// +enum +type PodReplacementPolicy string + +const ( + // TerminatingOrFailed means that we recreate pods + // when they are terminating (has a metadata.deletionTimestamp) or failed. + TerminatingOrFailed PodReplacementPolicy = "TerminatingOrFailed" + // Failed means to wait until a previously created Pod is fully terminated (has phase + // Failed or Succeeded) before creating a replacement Pod. + Failed PodReplacementPolicy = "Failed" +) + // PodFailurePolicyOnExitCodesRequirement describes the requirement for handling // a failed pod based on its container exit codes. In particular, it looks up the // .state.terminated.exitCode for each app container and init container status, @@ -186,6 +216,10 @@ type PodFailurePolicyRule struct { // // - FailJob: indicates that the pod's job is marked as Failed and all // running pods are terminated. + // - FailIndex: indicates that the pod's index is marked as Failed and will + // not be restarted. + // This value is alpha-level. 
It can be used when the + // `JobBackoffLimitPerIndex` feature gate is enabled (disabled by default). // - Ignore: indicates that the counter towards the .backoffLimit is not // incremented and a replacement pod is created. // - Count: indicates that the pod is handled in the default way - the @@ -252,8 +286,8 @@ type JobSpec struct { // checked against the backoffLimit. This field cannot be used in combination // with restartPolicy=OnFailure. // - // This field is alpha-level. To use this field, you must enable the - // `JobPodFailurePolicy` feature gate (disabled by default). + // This field is beta-level. It can be used when the `JobPodFailurePolicy` + // feature gate is enabled (enabled by default). // +optional PodFailurePolicy *PodFailurePolicy `json:"podFailurePolicy,omitempty" protobuf:"bytes,11,opt,name=podFailurePolicy"` @@ -262,6 +296,30 @@ type JobSpec struct { // +optional BackoffLimit *int32 `json:"backoffLimit,omitempty" protobuf:"varint,7,opt,name=backoffLimit"` + // Specifies the limit for the number of retries within an + // index before marking this index as failed. When enabled the number of + // failures per index is kept in the pod's + // batch.kubernetes.io/job-index-failure-count annotation. It can only + // be set when Job's completionMode=Indexed, and the Pod's restart + // policy is Never. The field is immutable. + // This field is alpha-level. It can be used when the `JobBackoffLimitPerIndex` + // feature gate is enabled (disabled by default). + // +optional + BackoffLimitPerIndex *int32 `json:"backoffLimitPerIndex,omitempty" protobuf:"varint,12,opt,name=backoffLimitPerIndex"` + + // Specifies the maximal number of failed indexes before marking the Job as + // failed, when backoffLimitPerIndex is set. Once the number of failed + // indexes exceeds this number the entire Job is marked as Failed and its + // execution is terminated. 
When left as null the job continues execution of + all of its indexes and is marked with the `Complete` Job condition. + It can only be specified when backoffLimitPerIndex is set. + It can be null or up to completions. It is required and must be + less than or equal to 10^4 when completions is greater than 10^5. + This field is alpha-level. It can be used when the `JobBackoffLimitPerIndex` + feature gate is enabled (disabled by default). + // +optional + MaxFailedIndexes *int32 `json:"maxFailedIndexes,omitempty" protobuf:"varint,13,opt,name=maxFailedIndexes"` + + // TODO enabled it when https://github.com/kubernetes/kubernetes/issues/28486 has been fixed // Optional number of failed pods to retain. // +optional @@ -336,6 +394,19 @@ type JobSpec struct { // // +optional Suspend *bool `json:"suspend,omitempty" protobuf:"varint,10,opt,name=suspend"` + + // podReplacementPolicy specifies when to create replacement Pods. + // Possible values are: + // - TerminatingOrFailed means that we recreate pods + // when they are terminating (has a metadata.deletionTimestamp) or failed. + // - Failed means to wait until a previously created Pod is fully terminated (has phase + // Failed or Succeeded) before creating a replacement Pod. + // + // When using podFailurePolicy, Failed is the only allowed value. + // TerminatingOrFailed and Failed are allowed values when podFailurePolicy is not in use. + // This is an alpha field. Enable JobPodReplacementPolicy to be able to use this field. + // +optional + PodReplacementPolicy *PodReplacementPolicy `json:"podReplacementPolicy,omitempty" protobuf:"bytes,14,opt,name=podReplacementPolicy,casttype=podReplacementPolicy"` } // JobStatus represents the current state of a Job. @@ -379,6 +450,14 @@ type JobStatus struct { // +optional Failed int32 `json:"failed,omitempty" protobuf:"varint,6,opt,name=failed"` + // The number of pods which are terminating (in phase Pending or Running + // and have a deletionTimestamp). 
+ // + // This field is alpha-level. The job controller populates the field when + // the feature gate JobPodReplacementPolicy is enabled (disabled by default). + // +optional + Terminating *int32 `json:"terminating,omitempty" protobuf:"varint,11,opt,name=terminating"` + // completedIndexes holds the completed indexes when .spec.completionMode = // "Indexed" in a text format. The indexes are represented as decimal integers // separated by commas. The numbers are listed in increasing order. Three or @@ -389,6 +468,19 @@ type JobStatus struct { // +optional CompletedIndexes string `json:"completedIndexes,omitempty" protobuf:"bytes,7,opt,name=completedIndexes"` + // FailedIndexes holds the failed indexes when backoffLimitPerIndex=true. + // The indexes are represented in the text format analogous to the + // `completedIndexes` field, i.e. they are kept as decimal integers + // separated by commas. The numbers are listed in increasing order. Three or + // more consecutive numbers are compressed and represented by the first and + // last element of the series, separated by a hyphen. + // For example, if the failed indexes are 1, 3, 4, 5 and 7, they are + // represented as "1,3-5,7". + // This field is alpha-level. It can be used when the `JobBackoffLimitPerIndex` + // feature gate is enabled (disabled by default). + // +optional + FailedIndexes *string `json:"failedIndexes,omitempty" protobuf:"bytes,10,opt,name=failedIndexes"` + // uncountedTerminatedPods holds the UIDs of Pods that have terminated but // the job controller hasn't yet accounted for in the status counters. 
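The `failedIndexes` and `completedIndexes` comments above describe the same compressed text format: sorted decimal indexes separated by commas, with runs of three or more consecutive numbers collapsed to "first-last". A hypothetical helper sketching that encoding (this is illustrative only, not the job controller's actual implementation):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// formatIndexes renders a sorted slice of completion indexes in the
// compressed text form used by completedIndexes/failedIndexes, e.g.
// [1 3 4 5 7] -> "1,3-5,7". Runs of three or more consecutive numbers
// collapse to "first-last"; shorter runs are listed individually.
func formatIndexes(ix []int) string {
	var parts []string
	for i := 0; i < len(ix); {
		// Extend j to the end of the consecutive run starting at i.
		j := i
		for j+1 < len(ix) && ix[j+1] == ix[j]+1 {
			j++
		}
		if j-i >= 2 { // three or more consecutive numbers: compress
			parts = append(parts, strconv.Itoa(ix[i])+"-"+strconv.Itoa(ix[j]))
			i = j + 1
		} else { // run of one or two: emit each number
			for ; i <= j; i++ {
				parts = append(parts, strconv.Itoa(ix[i]))
			}
		}
	}
	return strings.Join(parts, ",")
}

func main() {
	fmt.Println(formatIndexes([]int{1, 3, 4, 5, 7})) // 1,3-5,7
}
```

This matches the example given in the field comments, where failed indexes 1, 3, 4, 5 and 7 are represented as "1,3-5,7".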
// diff --git a/cluster-autoscaler/vendor/k8s.io/api/batch/v1/types_swagger_doc_generated.go b/cluster-autoscaler/vendor/k8s.io/api/batch/v1/types_swagger_doc_generated.go index 1f28f006cc7a..43b4e1e7d944 100644 --- a/cluster-autoscaler/vendor/k8s.io/api/batch/v1/types_swagger_doc_generated.go +++ b/cluster-autoscaler/vendor/k8s.io/api/batch/v1/types_swagger_doc_generated.go @@ -115,14 +115,17 @@ var map_JobSpec = map[string]string{ "parallelism": "Specifies the maximum desired number of pods the job should run at any given time. The actual number of pods running in steady state will be less than this number when ((.spec.completions - .status.successful) < .spec.parallelism), i.e. when the work left to do is less than max parallelism. More info: https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/", "completions": "Specifies the desired number of successfully finished pods the job should be run with. Setting to null means that the success of any pod signals the success of all pods, and allows parallelism to have any positive value. Setting to 1 means that parallelism is limited to 1 and the success of that pod signals the success of the job. More info: https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/", "activeDeadlineSeconds": "Specifies the duration in seconds relative to the startTime that the job may be continuously active before the system tries to terminate it; value must be positive integer. If a Job is suspended (at creation or through an update), this timer will effectively be stopped and reset when the Job is resumed again.", - "podFailurePolicy": "Specifies the policy of handling failed pods. In particular, it allows to specify the set of actions and conditions which need to be satisfied to take the associated action. If empty, the default behaviour applies - the counter of failed pods, represented by the jobs's .status.failed field, is incremented and it is checked against the backoffLimit. 
This field cannot be used in combination with restartPolicy=OnFailure.\n\nThis field is alpha-level. To use this field, you must enable the `JobPodFailurePolicy` feature gate (disabled by default).", + "podFailurePolicy": "Specifies the policy of handling failed pods. In particular, it allows to specify the set of actions and conditions which need to be satisfied to take the associated action. If empty, the default behaviour applies - the counter of failed pods, represented by the job's .status.failed field, is incremented and it is checked against the backoffLimit. This field cannot be used in combination with restartPolicy=OnFailure.\n\nThis field is beta-level. It can be used when the `JobPodFailurePolicy` feature gate is enabled (enabled by default).", "backoffLimit": "Specifies the number of retries before marking this job failed. Defaults to 6", + "backoffLimitPerIndex": "Specifies the limit for the number of retries within an index before marking this index as failed. When enabled the number of failures per index is kept in the pod's batch.kubernetes.io/job-index-failure-count annotation. It can only be set when Job's completionMode=Indexed, and the Pod's restart policy is Never. The field is immutable. This field is alpha-level. It can be used when the `JobBackoffLimitPerIndex` feature gate is enabled (disabled by default).", + "maxFailedIndexes": "Specifies the maximal number of failed indexes before marking the Job as failed, when backoffLimitPerIndex is set. Once the number of failed indexes exceeds this number the entire Job is marked as Failed and its execution is terminated. When left as null the job continues execution of all of its indexes and is marked with the `Complete` Job condition. It can only be specified when backoffLimitPerIndex is set. It can be null or up to completions. It is required and must be less than or equal to 10^4 when completions is greater than 10^5. This field is alpha-level. 
It can be used when the `JobBackoffLimitPerIndex` feature gate is enabled (disabled by default).", "selector": "A label query over pods that should match the pod count. Normally, the system sets this field for you. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#label-selectors", "manualSelector": "manualSelector controls generation of pod labels and pod selectors. Leave `manualSelector` unset unless you are certain what you are doing. When false or unset, the system picks labels unique to this job and appends those labels to the pod template. When true, the user is responsible for picking unique labels and specifying the selector. Failure to pick a unique label may cause this and other jobs to not function correctly. However, you may see `manualSelector=true` in jobs that were created with the old `extensions/v1beta1` API. More info: https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/#specifying-your-own-pod-selector", "template": "Describes the pod that will be created when executing a job. The only allowed template.spec.restartPolicy values are \"Never\" or \"OnFailure\". More info: https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/", "ttlSecondsAfterFinished": "ttlSecondsAfterFinished limits the lifetime of a Job that has finished execution (either Complete or Failed). If this field is set, ttlSecondsAfterFinished after the Job finishes, it is eligible to be automatically deleted. When the Job is being deleted, its lifecycle guarantees (e.g. finalizers) will be honored. If this field is unset, the Job won't be automatically deleted. If this field is set to zero, the Job becomes eligible to be deleted immediately after it finishes.", "completionMode": "completionMode specifies how Pod completions are tracked. 
It can be `NonIndexed` (default) or `Indexed`.\n\n`NonIndexed` means that the Job is considered complete when there have been .spec.completions successfully completed Pods. Each Pod completion is homologous to each other.\n\n`Indexed` means that the Pods of a Job get an associated completion index from 0 to (.spec.completions - 1), available in the annotation batch.kubernetes.io/job-completion-index. The Job is considered complete when there is one successfully completed Pod for each index. When value is `Indexed`, .spec.completions must be specified and `.spec.parallelism` must be less than or equal to 10^5. In addition, The Pod name takes the form `$(job-name)-$(index)-$(random-string)`, the Pod hostname takes the form `$(job-name)-$(index)`.\n\nMore completion modes can be added in the future. If the Job controller observes a mode that it doesn't recognize, which is possible during upgrades due to version skew, the controller skips updates for the Job.", "suspend": "suspend specifies whether the Job controller should create Pods or not. If a Job is created with suspend set to true, no Pods are created by the Job controller. If a Job is suspended after creation (i.e. the flag goes from false to true), the Job controller will delete all active Pods associated with this Job. Users must design their workload to gracefully handle this. Suspending a Job will reset the StartTime field of the Job, effectively resetting the ActiveDeadlineSeconds timer too. Defaults to false.", + "podReplacementPolicy": "podReplacementPolicy specifies when to create replacement Pods. Possible values are: - TerminatingOrFailed means that we recreate pods\n when they are terminating (has a metadata.deletionTimestamp) or failed.\n- Failed means to wait until a previously created Pod is fully terminated (has phase\n Failed or Succeeded) before creating a replacement Pod.\n\nWhen using podFailurePolicy, Failed is the only allowed value. 
TerminatingOrFailed and Failed are allowed values when podFailurePolicy is not in use. This is an alpha field. Enable JobPodReplacementPolicy to be able to use this field.", } func (JobSpec) SwaggerDoc() map[string]string { @@ -137,7 +140,9 @@ var map_JobStatus = map[string]string{ "active": "The number of pending and running pods.", "succeeded": "The number of pods which reached phase Succeeded.", "failed": "The number of pods which reached phase Failed.", + "terminating": "The number of pods which are terminating (in phase Pending or Running and have a deletionTimestamp).\n\nThis field is alpha-level. The job controller populates the field when the feature gate JobPodReplacementPolicy is enabled (disabled by default).", "completedIndexes": "completedIndexes holds the completed indexes when .spec.completionMode = \"Indexed\" in a text format. The indexes are represented as decimal integers separated by commas. The numbers are listed in increasing order. Three or more consecutive numbers are compressed and represented by the first and last element of the series, separated by a hyphen. For example, if the completed indexes are 1, 3, 4, 5 and 7, they are represented as \"1,3-5,7\".", + "failedIndexes": "FailedIndexes holds the failed indexes when backoffLimitPerIndex=true. The indexes are represented in a text format analogous to the `completedIndexes` field, i.e. they are kept as decimal integers separated by commas. The numbers are listed in increasing order. Three or more consecutive numbers are compressed and represented by the first and last element of the series, separated by a hyphen. For example, if the failed indexes are 1, 3, 4, 5 and 7, they are represented as \"1,3-5,7\". This field is alpha-level. 
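The compressed index notation shared by `completedIndexes` and `failedIndexes` above can be sketched in a few lines of Go (an illustrative helper, not part of this patch; `compressIndexes` is a hypothetical name):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// compressIndexes renders a sorted list of completion indexes in the
// "1,3-5,7" format described above: three or more consecutive numbers
// collapse into "first-last"; shorter runs stay as plain integers.
func compressIndexes(idx []int) string {
	var parts []string
	for i := 0; i < len(idx); {
		j := i
		for j+1 < len(idx) && idx[j+1] == idx[j]+1 {
			j++
		}
		if j-i >= 2 { // a run of three or more consecutive numbers
			parts = append(parts, fmt.Sprintf("%d-%d", idx[i], idx[j]))
			i = j + 1
		} else {
			parts = append(parts, strconv.Itoa(idx[i]))
			i++
		}
	}
	return strings.Join(parts, ",")
}

func main() {
	fmt.Println(compressIndexes([]int{1, 3, 4, 5, 7})) // 1,3-5,7
}
```

Note that a run of only two consecutive numbers (e.g. 3, 4) is not compressed; per the doc strings, only three or more collapse to `first-last`.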
It can be used when the `JobBackoffLimitPerIndex` feature gate is enabled (disabled by default).", "uncountedTerminatedPods": "uncountedTerminatedPods holds the UIDs of Pods that have terminated but the job controller hasn't yet accounted for in the status counters.\n\nThe job controller creates pods with a finalizer. When a pod terminates (succeeded or failed), the controller does three steps to account for it in the job status:\n\n1. Add the pod UID to the arrays in this field. 2. Remove the pod finalizer. 3. Remove the pod UID from the arrays while increasing the corresponding\n counter.\n\nOld jobs might not be tracked using this field, in which case the field remains null.", "ready": "The number of pods which have a Ready condition.\n\nThis field is beta-level. The job controller populates the field when the feature gate JobReadyPods is enabled (enabled by default).", } @@ -188,7 +193,7 @@ func (PodFailurePolicyOnPodConditionsPattern) SwaggerDoc() map[string]string { var map_PodFailurePolicyRule = map[string]string{ "": "PodFailurePolicyRule describes how a pod failure is handled when the requirements are met. One of onExitCodes and onPodConditions, but not both, can be used in each rule.", - "action": "Specifies the action taken on a pod failure when the requirements are satisfied. Possible values are:\n\n- FailJob: indicates that the pod's job is marked as Failed and all\n running pods are terminated.\n- Ignore: indicates that the counter towards the .backoffLimit is not\n incremented and a replacement pod is created.\n- Count: indicates that the pod is handled in the default way - the\n counter towards the .backoffLimit is incremented.\nAdditional values are considered to be added in the future. Clients should react to an unknown action by skipping the rule.", + "action": "Specifies the action taken on a pod failure when the requirements are satisfied. 
Possible values are:\n\n- FailJob: indicates that the pod's job is marked as Failed and all\n running pods are terminated.\n- FailIndex: indicates that the pod's index is marked as Failed and will\n not be restarted.\n This value is alpha-level. It can be used when the\n `JobBackoffLimitPerIndex` feature gate is enabled (disabled by default).\n- Ignore: indicates that the counter towards the .backoffLimit is not\n incremented and a replacement pod is created.\n- Count: indicates that the pod is handled in the default way - the\n counter towards the .backoffLimit is incremented.\nAdditional values may be added in the future. Clients should react to an unknown action by skipping the rule.", "onExitCodes": "Represents the requirement on the container exit codes.", "onPodConditions": "Represents the requirement on the pod conditions. The requirement is represented as a list of pod condition patterns. The requirement is satisfied if at least one pattern matches an actual pod condition. 
At most 20 elements are allowed.", } diff --git a/cluster-autoscaler/vendor/k8s.io/api/batch/v1/zz_generated.deepcopy.go b/cluster-autoscaler/vendor/k8s.io/api/batch/v1/zz_generated.deepcopy.go index 2a901e9d0f95..43fc41515be1 100644 --- a/cluster-autoscaler/vendor/k8s.io/api/batch/v1/zz_generated.deepcopy.go +++ b/cluster-autoscaler/vendor/k8s.io/api/batch/v1/zz_generated.deepcopy.go @@ -267,6 +267,16 @@ func (in *JobSpec) DeepCopyInto(out *JobSpec) { *out = new(int32) **out = **in } + if in.BackoffLimitPerIndex != nil { + in, out := &in.BackoffLimitPerIndex, &out.BackoffLimitPerIndex + *out = new(int32) + **out = **in + } + if in.MaxFailedIndexes != nil { + in, out := &in.MaxFailedIndexes, &out.MaxFailedIndexes + *out = new(int32) + **out = **in + } if in.Selector != nil { in, out := &in.Selector, &out.Selector *out = new(metav1.LabelSelector) @@ -293,6 +303,11 @@ func (in *JobSpec) DeepCopyInto(out *JobSpec) { *out = new(bool) **out = **in } + if in.PodReplacementPolicy != nil { + in, out := &in.PodReplacementPolicy, &out.PodReplacementPolicy + *out = new(PodReplacementPolicy) + **out = **in + } return } @@ -324,6 +339,16 @@ func (in *JobStatus) DeepCopyInto(out *JobStatus) { in, out := &in.CompletionTime, &out.CompletionTime *out = (*in).DeepCopy() } + if in.Terminating != nil { + in, out := &in.Terminating, &out.Terminating + *out = new(int32) + **out = **in + } + if in.FailedIndexes != nil { + in, out := &in.FailedIndexes, &out.FailedIndexes + *out = new(string) + **out = **in + } if in.UncountedTerminatedPods != nil { in, out := &in.UncountedTerminatedPods, &out.UncountedTerminatedPods *out = new(UncountedTerminatedPods) diff --git a/cluster-autoscaler/vendor/k8s.io/api/core/v1/annotation_key_constants.go b/cluster-autoscaler/vendor/k8s.io/api/core/v1/annotation_key_constants.go index 61f86f850a3b..106ba14c3df1 100644 --- a/cluster-autoscaler/vendor/k8s.io/api/core/v1/annotation_key_constants.go +++ 
b/cluster-autoscaler/vendor/k8s.io/api/core/v1/annotation_key_constants.go @@ -56,9 +56,9 @@ const ( // AppArmorBetaContainerAnnotationKeyPrefix is the prefix to an annotation key specifying a container's apparmor profile. AppArmorBetaContainerAnnotationKeyPrefix = "container.apparmor.security.beta.kubernetes.io/" - // AppArmorBetaDefaultProfileAnnotatoinKey is the annotation key specifying the default AppArmor profile. + // AppArmorBetaDefaultProfileAnnotationKey is the annotation key specifying the default AppArmor profile. AppArmorBetaDefaultProfileAnnotationKey = "apparmor.security.beta.kubernetes.io/defaultProfileName" - // AppArmorBetaAllowedProfileAnnotationKey is the annotation key specifying the allowed AppArmor profiles. + // AppArmorBetaAllowedProfilesAnnotationKey is the annotation key specifying the allowed AppArmor profiles. AppArmorBetaAllowedProfilesAnnotationKey = "apparmor.security.beta.kubernetes.io/allowedProfileNames" // AppArmorBetaProfileRuntimeDefault is the profile specifying the runtime default. @@ -78,7 +78,7 @@ const ( // in the Annotations of a Node. PreferAvoidPodsAnnotationKey string = "scheduler.alpha.kubernetes.io/preferAvoidPods" - // ObjectTTLAnnotations represents a suggestion for kubelet for how long it can cache + // ObjectTTLAnnotationKey represents a suggestion for kubelet for how long it can cache // an object (e.g. secret, config map) before fetching it again from apiserver. // This annotation can be attached to node. 
ObjectTTLAnnotationKey string = "node.alpha.kubernetes.io/ttl" diff --git a/cluster-autoscaler/vendor/k8s.io/api/core/v1/generated.pb.go b/cluster-autoscaler/vendor/k8s.io/api/core/v1/generated.pb.go index c76646296002..c267a5febded 100644 --- a/cluster-autoscaler/vendor/k8s.io/api/core/v1/generated.pb.go +++ b/cluster-autoscaler/vendor/k8s.io/api/core/v1/generated.pb.go @@ -1981,10 +1981,38 @@ func (m *HostAlias) XXX_DiscardUnknown() { var xxx_messageInfo_HostAlias proto.InternalMessageInfo +func (m *HostIP) Reset() { *m = HostIP{} } +func (*HostIP) ProtoMessage() {} +func (*HostIP) Descriptor() ([]byte, []int) { + return fileDescriptor_83c10c24ec417dc9, []int{69} +} +func (m *HostIP) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *HostIP) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err + } + return b[:n], nil +} +func (m *HostIP) XXX_Merge(src proto.Message) { + xxx_messageInfo_HostIP.Merge(m, src) +} +func (m *HostIP) XXX_Size() int { + return m.Size() +} +func (m *HostIP) XXX_DiscardUnknown() { + xxx_messageInfo_HostIP.DiscardUnknown(m) +} + +var xxx_messageInfo_HostIP proto.InternalMessageInfo + func (m *HostPathVolumeSource) Reset() { *m = HostPathVolumeSource{} } func (*HostPathVolumeSource) ProtoMessage() {} func (*HostPathVolumeSource) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{69} + return fileDescriptor_83c10c24ec417dc9, []int{70} } func (m *HostPathVolumeSource) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -2012,7 +2040,7 @@ var xxx_messageInfo_HostPathVolumeSource proto.InternalMessageInfo func (m *ISCSIPersistentVolumeSource) Reset() { *m = ISCSIPersistentVolumeSource{} } func (*ISCSIPersistentVolumeSource) ProtoMessage() {} func (*ISCSIPersistentVolumeSource) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{70} + return fileDescriptor_83c10c24ec417dc9, 
[]int{71} } func (m *ISCSIPersistentVolumeSource) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -2040,7 +2068,7 @@ var xxx_messageInfo_ISCSIPersistentVolumeSource proto.InternalMessageInfo func (m *ISCSIVolumeSource) Reset() { *m = ISCSIVolumeSource{} } func (*ISCSIVolumeSource) ProtoMessage() {} func (*ISCSIVolumeSource) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{71} + return fileDescriptor_83c10c24ec417dc9, []int{72} } func (m *ISCSIVolumeSource) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -2068,7 +2096,7 @@ var xxx_messageInfo_ISCSIVolumeSource proto.InternalMessageInfo func (m *KeyToPath) Reset() { *m = KeyToPath{} } func (*KeyToPath) ProtoMessage() {} func (*KeyToPath) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{72} + return fileDescriptor_83c10c24ec417dc9, []int{73} } func (m *KeyToPath) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -2096,7 +2124,7 @@ var xxx_messageInfo_KeyToPath proto.InternalMessageInfo func (m *Lifecycle) Reset() { *m = Lifecycle{} } func (*Lifecycle) ProtoMessage() {} func (*Lifecycle) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{73} + return fileDescriptor_83c10c24ec417dc9, []int{74} } func (m *Lifecycle) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -2124,7 +2152,7 @@ var xxx_messageInfo_Lifecycle proto.InternalMessageInfo func (m *LifecycleHandler) Reset() { *m = LifecycleHandler{} } func (*LifecycleHandler) ProtoMessage() {} func (*LifecycleHandler) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{74} + return fileDescriptor_83c10c24ec417dc9, []int{75} } func (m *LifecycleHandler) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -2152,7 +2180,7 @@ var xxx_messageInfo_LifecycleHandler proto.InternalMessageInfo func (m *LimitRange) Reset() { *m = LimitRange{} } func (*LimitRange) ProtoMessage() {} func (*LimitRange) Descriptor() ([]byte, []int) { - 
return fileDescriptor_83c10c24ec417dc9, []int{75} + return fileDescriptor_83c10c24ec417dc9, []int{76} } func (m *LimitRange) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -2180,7 +2208,7 @@ var xxx_messageInfo_LimitRange proto.InternalMessageInfo func (m *LimitRangeItem) Reset() { *m = LimitRangeItem{} } func (*LimitRangeItem) ProtoMessage() {} func (*LimitRangeItem) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{76} + return fileDescriptor_83c10c24ec417dc9, []int{77} } func (m *LimitRangeItem) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -2208,7 +2236,7 @@ var xxx_messageInfo_LimitRangeItem proto.InternalMessageInfo func (m *LimitRangeList) Reset() { *m = LimitRangeList{} } func (*LimitRangeList) ProtoMessage() {} func (*LimitRangeList) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{77} + return fileDescriptor_83c10c24ec417dc9, []int{78} } func (m *LimitRangeList) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -2236,7 +2264,7 @@ var xxx_messageInfo_LimitRangeList proto.InternalMessageInfo func (m *LimitRangeSpec) Reset() { *m = LimitRangeSpec{} } func (*LimitRangeSpec) ProtoMessage() {} func (*LimitRangeSpec) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{78} + return fileDescriptor_83c10c24ec417dc9, []int{79} } func (m *LimitRangeSpec) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -2264,7 +2292,7 @@ var xxx_messageInfo_LimitRangeSpec proto.InternalMessageInfo func (m *List) Reset() { *m = List{} } func (*List) ProtoMessage() {} func (*List) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{79} + return fileDescriptor_83c10c24ec417dc9, []int{80} } func (m *List) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -2292,7 +2320,7 @@ var xxx_messageInfo_List proto.InternalMessageInfo func (m *LoadBalancerIngress) Reset() { *m = LoadBalancerIngress{} } func (*LoadBalancerIngress) ProtoMessage() 
{} func (*LoadBalancerIngress) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{80} + return fileDescriptor_83c10c24ec417dc9, []int{81} } func (m *LoadBalancerIngress) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -2320,7 +2348,7 @@ var xxx_messageInfo_LoadBalancerIngress proto.InternalMessageInfo func (m *LoadBalancerStatus) Reset() { *m = LoadBalancerStatus{} } func (*LoadBalancerStatus) ProtoMessage() {} func (*LoadBalancerStatus) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{81} + return fileDescriptor_83c10c24ec417dc9, []int{82} } func (m *LoadBalancerStatus) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -2348,7 +2376,7 @@ var xxx_messageInfo_LoadBalancerStatus proto.InternalMessageInfo func (m *LocalObjectReference) Reset() { *m = LocalObjectReference{} } func (*LocalObjectReference) ProtoMessage() {} func (*LocalObjectReference) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{82} + return fileDescriptor_83c10c24ec417dc9, []int{83} } func (m *LocalObjectReference) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -2376,7 +2404,7 @@ var xxx_messageInfo_LocalObjectReference proto.InternalMessageInfo func (m *LocalVolumeSource) Reset() { *m = LocalVolumeSource{} } func (*LocalVolumeSource) ProtoMessage() {} func (*LocalVolumeSource) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{83} + return fileDescriptor_83c10c24ec417dc9, []int{84} } func (m *LocalVolumeSource) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -2404,7 +2432,7 @@ var xxx_messageInfo_LocalVolumeSource proto.InternalMessageInfo func (m *NFSVolumeSource) Reset() { *m = NFSVolumeSource{} } func (*NFSVolumeSource) ProtoMessage() {} func (*NFSVolumeSource) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{84} + return fileDescriptor_83c10c24ec417dc9, []int{85} } func (m *NFSVolumeSource) XXX_Unmarshal(b []byte) 
error { return m.Unmarshal(b) @@ -2432,7 +2460,7 @@ var xxx_messageInfo_NFSVolumeSource proto.InternalMessageInfo func (m *Namespace) Reset() { *m = Namespace{} } func (*Namespace) ProtoMessage() {} func (*Namespace) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{85} + return fileDescriptor_83c10c24ec417dc9, []int{86} } func (m *Namespace) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -2460,7 +2488,7 @@ var xxx_messageInfo_Namespace proto.InternalMessageInfo func (m *NamespaceCondition) Reset() { *m = NamespaceCondition{} } func (*NamespaceCondition) ProtoMessage() {} func (*NamespaceCondition) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{86} + return fileDescriptor_83c10c24ec417dc9, []int{87} } func (m *NamespaceCondition) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -2488,7 +2516,7 @@ var xxx_messageInfo_NamespaceCondition proto.InternalMessageInfo func (m *NamespaceList) Reset() { *m = NamespaceList{} } func (*NamespaceList) ProtoMessage() {} func (*NamespaceList) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{87} + return fileDescriptor_83c10c24ec417dc9, []int{88} } func (m *NamespaceList) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -2516,7 +2544,7 @@ var xxx_messageInfo_NamespaceList proto.InternalMessageInfo func (m *NamespaceSpec) Reset() { *m = NamespaceSpec{} } func (*NamespaceSpec) ProtoMessage() {} func (*NamespaceSpec) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{88} + return fileDescriptor_83c10c24ec417dc9, []int{89} } func (m *NamespaceSpec) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -2544,7 +2572,7 @@ var xxx_messageInfo_NamespaceSpec proto.InternalMessageInfo func (m *NamespaceStatus) Reset() { *m = NamespaceStatus{} } func (*NamespaceStatus) ProtoMessage() {} func (*NamespaceStatus) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{89} + 
return fileDescriptor_83c10c24ec417dc9, []int{90} } func (m *NamespaceStatus) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -2572,7 +2600,7 @@ var xxx_messageInfo_NamespaceStatus proto.InternalMessageInfo func (m *Node) Reset() { *m = Node{} } func (*Node) ProtoMessage() {} func (*Node) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{90} + return fileDescriptor_83c10c24ec417dc9, []int{91} } func (m *Node) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -2600,7 +2628,7 @@ var xxx_messageInfo_Node proto.InternalMessageInfo func (m *NodeAddress) Reset() { *m = NodeAddress{} } func (*NodeAddress) ProtoMessage() {} func (*NodeAddress) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{91} + return fileDescriptor_83c10c24ec417dc9, []int{92} } func (m *NodeAddress) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -2628,7 +2656,7 @@ var xxx_messageInfo_NodeAddress proto.InternalMessageInfo func (m *NodeAffinity) Reset() { *m = NodeAffinity{} } func (*NodeAffinity) ProtoMessage() {} func (*NodeAffinity) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{92} + return fileDescriptor_83c10c24ec417dc9, []int{93} } func (m *NodeAffinity) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -2656,7 +2684,7 @@ var xxx_messageInfo_NodeAffinity proto.InternalMessageInfo func (m *NodeCondition) Reset() { *m = NodeCondition{} } func (*NodeCondition) ProtoMessage() {} func (*NodeCondition) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{93} + return fileDescriptor_83c10c24ec417dc9, []int{94} } func (m *NodeCondition) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -2684,7 +2712,7 @@ var xxx_messageInfo_NodeCondition proto.InternalMessageInfo func (m *NodeConfigSource) Reset() { *m = NodeConfigSource{} } func (*NodeConfigSource) ProtoMessage() {} func (*NodeConfigSource) Descriptor() ([]byte, []int) { - return 
fileDescriptor_83c10c24ec417dc9, []int{94} + return fileDescriptor_83c10c24ec417dc9, []int{95} } func (m *NodeConfigSource) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -2712,7 +2740,7 @@ var xxx_messageInfo_NodeConfigSource proto.InternalMessageInfo func (m *NodeConfigStatus) Reset() { *m = NodeConfigStatus{} } func (*NodeConfigStatus) ProtoMessage() {} func (*NodeConfigStatus) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{95} + return fileDescriptor_83c10c24ec417dc9, []int{96} } func (m *NodeConfigStatus) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -2740,7 +2768,7 @@ var xxx_messageInfo_NodeConfigStatus proto.InternalMessageInfo func (m *NodeDaemonEndpoints) Reset() { *m = NodeDaemonEndpoints{} } func (*NodeDaemonEndpoints) ProtoMessage() {} func (*NodeDaemonEndpoints) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{96} + return fileDescriptor_83c10c24ec417dc9, []int{97} } func (m *NodeDaemonEndpoints) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -2768,7 +2796,7 @@ var xxx_messageInfo_NodeDaemonEndpoints proto.InternalMessageInfo func (m *NodeList) Reset() { *m = NodeList{} } func (*NodeList) ProtoMessage() {} func (*NodeList) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{97} + return fileDescriptor_83c10c24ec417dc9, []int{98} } func (m *NodeList) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -2796,7 +2824,7 @@ var xxx_messageInfo_NodeList proto.InternalMessageInfo func (m *NodeProxyOptions) Reset() { *m = NodeProxyOptions{} } func (*NodeProxyOptions) ProtoMessage() {} func (*NodeProxyOptions) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{98} + return fileDescriptor_83c10c24ec417dc9, []int{99} } func (m *NodeProxyOptions) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -2824,7 +2852,7 @@ var xxx_messageInfo_NodeProxyOptions proto.InternalMessageInfo func (m *NodeResources) 
Reset() { *m = NodeResources{} } func (*NodeResources) ProtoMessage() {} func (*NodeResources) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{99} + return fileDescriptor_83c10c24ec417dc9, []int{100} } func (m *NodeResources) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -2852,7 +2880,7 @@ var xxx_messageInfo_NodeResources proto.InternalMessageInfo func (m *NodeSelector) Reset() { *m = NodeSelector{} } func (*NodeSelector) ProtoMessage() {} func (*NodeSelector) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{100} + return fileDescriptor_83c10c24ec417dc9, []int{101} } func (m *NodeSelector) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -2880,7 +2908,7 @@ var xxx_messageInfo_NodeSelector proto.InternalMessageInfo func (m *NodeSelectorRequirement) Reset() { *m = NodeSelectorRequirement{} } func (*NodeSelectorRequirement) ProtoMessage() {} func (*NodeSelectorRequirement) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{101} + return fileDescriptor_83c10c24ec417dc9, []int{102} } func (m *NodeSelectorRequirement) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -2908,7 +2936,7 @@ var xxx_messageInfo_NodeSelectorRequirement proto.InternalMessageInfo func (m *NodeSelectorTerm) Reset() { *m = NodeSelectorTerm{} } func (*NodeSelectorTerm) ProtoMessage() {} func (*NodeSelectorTerm) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{102} + return fileDescriptor_83c10c24ec417dc9, []int{103} } func (m *NodeSelectorTerm) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -2936,7 +2964,7 @@ var xxx_messageInfo_NodeSelectorTerm proto.InternalMessageInfo func (m *NodeSpec) Reset() { *m = NodeSpec{} } func (*NodeSpec) ProtoMessage() {} func (*NodeSpec) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{103} + return fileDescriptor_83c10c24ec417dc9, []int{104} } func (m *NodeSpec) XXX_Unmarshal(b []byte) 
error { return m.Unmarshal(b) @@ -2964,7 +2992,7 @@ var xxx_messageInfo_NodeSpec proto.InternalMessageInfo func (m *NodeStatus) Reset() { *m = NodeStatus{} } func (*NodeStatus) ProtoMessage() {} func (*NodeStatus) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{104} + return fileDescriptor_83c10c24ec417dc9, []int{105} } func (m *NodeStatus) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -2992,7 +3020,7 @@ var xxx_messageInfo_NodeStatus proto.InternalMessageInfo func (m *NodeSystemInfo) Reset() { *m = NodeSystemInfo{} } func (*NodeSystemInfo) ProtoMessage() {} func (*NodeSystemInfo) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{105} + return fileDescriptor_83c10c24ec417dc9, []int{106} } func (m *NodeSystemInfo) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -3020,7 +3048,7 @@ var xxx_messageInfo_NodeSystemInfo proto.InternalMessageInfo func (m *ObjectFieldSelector) Reset() { *m = ObjectFieldSelector{} } func (*ObjectFieldSelector) ProtoMessage() {} func (*ObjectFieldSelector) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{106} + return fileDescriptor_83c10c24ec417dc9, []int{107} } func (m *ObjectFieldSelector) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -3048,7 +3076,7 @@ var xxx_messageInfo_ObjectFieldSelector proto.InternalMessageInfo func (m *ObjectReference) Reset() { *m = ObjectReference{} } func (*ObjectReference) ProtoMessage() {} func (*ObjectReference) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{107} + return fileDescriptor_83c10c24ec417dc9, []int{108} } func (m *ObjectReference) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -3076,7 +3104,7 @@ var xxx_messageInfo_ObjectReference proto.InternalMessageInfo func (m *PersistentVolume) Reset() { *m = PersistentVolume{} } func (*PersistentVolume) ProtoMessage() {} func (*PersistentVolume) Descriptor() ([]byte, []int) { - return 
fileDescriptor_83c10c24ec417dc9, []int{108} + return fileDescriptor_83c10c24ec417dc9, []int{109} } func (m *PersistentVolume) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -3104,7 +3132,7 @@ var xxx_messageInfo_PersistentVolume proto.InternalMessageInfo func (m *PersistentVolumeClaim) Reset() { *m = PersistentVolumeClaim{} } func (*PersistentVolumeClaim) ProtoMessage() {} func (*PersistentVolumeClaim) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{109} + return fileDescriptor_83c10c24ec417dc9, []int{110} } func (m *PersistentVolumeClaim) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -3132,7 +3160,7 @@ var xxx_messageInfo_PersistentVolumeClaim proto.InternalMessageInfo func (m *PersistentVolumeClaimCondition) Reset() { *m = PersistentVolumeClaimCondition{} } func (*PersistentVolumeClaimCondition) ProtoMessage() {} func (*PersistentVolumeClaimCondition) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{110} + return fileDescriptor_83c10c24ec417dc9, []int{111} } func (m *PersistentVolumeClaimCondition) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -3160,7 +3188,7 @@ var xxx_messageInfo_PersistentVolumeClaimCondition proto.InternalMessageInfo func (m *PersistentVolumeClaimList) Reset() { *m = PersistentVolumeClaimList{} } func (*PersistentVolumeClaimList) ProtoMessage() {} func (*PersistentVolumeClaimList) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{111} + return fileDescriptor_83c10c24ec417dc9, []int{112} } func (m *PersistentVolumeClaimList) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -3188,7 +3216,7 @@ var xxx_messageInfo_PersistentVolumeClaimList proto.InternalMessageInfo func (m *PersistentVolumeClaimSpec) Reset() { *m = PersistentVolumeClaimSpec{} } func (*PersistentVolumeClaimSpec) ProtoMessage() {} func (*PersistentVolumeClaimSpec) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{112} + 
return fileDescriptor_83c10c24ec417dc9, []int{113} } func (m *PersistentVolumeClaimSpec) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -3216,7 +3244,7 @@ var xxx_messageInfo_PersistentVolumeClaimSpec proto.InternalMessageInfo func (m *PersistentVolumeClaimStatus) Reset() { *m = PersistentVolumeClaimStatus{} } func (*PersistentVolumeClaimStatus) ProtoMessage() {} func (*PersistentVolumeClaimStatus) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{113} + return fileDescriptor_83c10c24ec417dc9, []int{114} } func (m *PersistentVolumeClaimStatus) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -3244,7 +3272,7 @@ var xxx_messageInfo_PersistentVolumeClaimStatus proto.InternalMessageInfo func (m *PersistentVolumeClaimTemplate) Reset() { *m = PersistentVolumeClaimTemplate{} } func (*PersistentVolumeClaimTemplate) ProtoMessage() {} func (*PersistentVolumeClaimTemplate) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{114} + return fileDescriptor_83c10c24ec417dc9, []int{115} } func (m *PersistentVolumeClaimTemplate) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -3272,7 +3300,7 @@ var xxx_messageInfo_PersistentVolumeClaimTemplate proto.InternalMessageInfo func (m *PersistentVolumeClaimVolumeSource) Reset() { *m = PersistentVolumeClaimVolumeSource{} } func (*PersistentVolumeClaimVolumeSource) ProtoMessage() {} func (*PersistentVolumeClaimVolumeSource) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{115} + return fileDescriptor_83c10c24ec417dc9, []int{116} } func (m *PersistentVolumeClaimVolumeSource) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -3300,7 +3328,7 @@ var xxx_messageInfo_PersistentVolumeClaimVolumeSource proto.InternalMessageInfo func (m *PersistentVolumeList) Reset() { *m = PersistentVolumeList{} } func (*PersistentVolumeList) ProtoMessage() {} func (*PersistentVolumeList) Descriptor() ([]byte, []int) { - return 
fileDescriptor_83c10c24ec417dc9, []int{116} + return fileDescriptor_83c10c24ec417dc9, []int{117} } func (m *PersistentVolumeList) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -3328,7 +3356,7 @@ var xxx_messageInfo_PersistentVolumeList proto.InternalMessageInfo func (m *PersistentVolumeSource) Reset() { *m = PersistentVolumeSource{} } func (*PersistentVolumeSource) ProtoMessage() {} func (*PersistentVolumeSource) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{117} + return fileDescriptor_83c10c24ec417dc9, []int{118} } func (m *PersistentVolumeSource) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -3356,7 +3384,7 @@ var xxx_messageInfo_PersistentVolumeSource proto.InternalMessageInfo func (m *PersistentVolumeSpec) Reset() { *m = PersistentVolumeSpec{} } func (*PersistentVolumeSpec) ProtoMessage() {} func (*PersistentVolumeSpec) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{118} + return fileDescriptor_83c10c24ec417dc9, []int{119} } func (m *PersistentVolumeSpec) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -3384,7 +3412,7 @@ var xxx_messageInfo_PersistentVolumeSpec proto.InternalMessageInfo func (m *PersistentVolumeStatus) Reset() { *m = PersistentVolumeStatus{} } func (*PersistentVolumeStatus) ProtoMessage() {} func (*PersistentVolumeStatus) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{119} + return fileDescriptor_83c10c24ec417dc9, []int{120} } func (m *PersistentVolumeStatus) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -3412,7 +3440,7 @@ var xxx_messageInfo_PersistentVolumeStatus proto.InternalMessageInfo func (m *PhotonPersistentDiskVolumeSource) Reset() { *m = PhotonPersistentDiskVolumeSource{} } func (*PhotonPersistentDiskVolumeSource) ProtoMessage() {} func (*PhotonPersistentDiskVolumeSource) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{120} + return fileDescriptor_83c10c24ec417dc9, 
[]int{121} } func (m *PhotonPersistentDiskVolumeSource) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -3440,7 +3468,7 @@ var xxx_messageInfo_PhotonPersistentDiskVolumeSource proto.InternalMessageInfo func (m *Pod) Reset() { *m = Pod{} } func (*Pod) ProtoMessage() {} func (*Pod) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{121} + return fileDescriptor_83c10c24ec417dc9, []int{122} } func (m *Pod) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -3468,7 +3496,7 @@ var xxx_messageInfo_Pod proto.InternalMessageInfo func (m *PodAffinity) Reset() { *m = PodAffinity{} } func (*PodAffinity) ProtoMessage() {} func (*PodAffinity) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{122} + return fileDescriptor_83c10c24ec417dc9, []int{123} } func (m *PodAffinity) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -3496,7 +3524,7 @@ var xxx_messageInfo_PodAffinity proto.InternalMessageInfo func (m *PodAffinityTerm) Reset() { *m = PodAffinityTerm{} } func (*PodAffinityTerm) ProtoMessage() {} func (*PodAffinityTerm) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{123} + return fileDescriptor_83c10c24ec417dc9, []int{124} } func (m *PodAffinityTerm) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -3524,7 +3552,7 @@ var xxx_messageInfo_PodAffinityTerm proto.InternalMessageInfo func (m *PodAntiAffinity) Reset() { *m = PodAntiAffinity{} } func (*PodAntiAffinity) ProtoMessage() {} func (*PodAntiAffinity) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{124} + return fileDescriptor_83c10c24ec417dc9, []int{125} } func (m *PodAntiAffinity) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -3552,7 +3580,7 @@ var xxx_messageInfo_PodAntiAffinity proto.InternalMessageInfo func (m *PodAttachOptions) Reset() { *m = PodAttachOptions{} } func (*PodAttachOptions) ProtoMessage() {} func (*PodAttachOptions) Descriptor() ([]byte, []int) { - 
return fileDescriptor_83c10c24ec417dc9, []int{125} + return fileDescriptor_83c10c24ec417dc9, []int{126} } func (m *PodAttachOptions) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -3580,7 +3608,7 @@ var xxx_messageInfo_PodAttachOptions proto.InternalMessageInfo func (m *PodCondition) Reset() { *m = PodCondition{} } func (*PodCondition) ProtoMessage() {} func (*PodCondition) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{126} + return fileDescriptor_83c10c24ec417dc9, []int{127} } func (m *PodCondition) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -3608,7 +3636,7 @@ var xxx_messageInfo_PodCondition proto.InternalMessageInfo func (m *PodDNSConfig) Reset() { *m = PodDNSConfig{} } func (*PodDNSConfig) ProtoMessage() {} func (*PodDNSConfig) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{127} + return fileDescriptor_83c10c24ec417dc9, []int{128} } func (m *PodDNSConfig) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -3636,7 +3664,7 @@ var xxx_messageInfo_PodDNSConfig proto.InternalMessageInfo func (m *PodDNSConfigOption) Reset() { *m = PodDNSConfigOption{} } func (*PodDNSConfigOption) ProtoMessage() {} func (*PodDNSConfigOption) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{128} + return fileDescriptor_83c10c24ec417dc9, []int{129} } func (m *PodDNSConfigOption) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -3664,7 +3692,7 @@ var xxx_messageInfo_PodDNSConfigOption proto.InternalMessageInfo func (m *PodExecOptions) Reset() { *m = PodExecOptions{} } func (*PodExecOptions) ProtoMessage() {} func (*PodExecOptions) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{129} + return fileDescriptor_83c10c24ec417dc9, []int{130} } func (m *PodExecOptions) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -3692,7 +3720,7 @@ var xxx_messageInfo_PodExecOptions proto.InternalMessageInfo func (m *PodIP) Reset() { *m = 
PodIP{} } func (*PodIP) ProtoMessage() {} func (*PodIP) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{130} + return fileDescriptor_83c10c24ec417dc9, []int{131} } func (m *PodIP) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -3720,7 +3748,7 @@ var xxx_messageInfo_PodIP proto.InternalMessageInfo func (m *PodList) Reset() { *m = PodList{} } func (*PodList) ProtoMessage() {} func (*PodList) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{131} + return fileDescriptor_83c10c24ec417dc9, []int{132} } func (m *PodList) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -3748,7 +3776,7 @@ var xxx_messageInfo_PodList proto.InternalMessageInfo func (m *PodLogOptions) Reset() { *m = PodLogOptions{} } func (*PodLogOptions) ProtoMessage() {} func (*PodLogOptions) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{132} + return fileDescriptor_83c10c24ec417dc9, []int{133} } func (m *PodLogOptions) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -3776,7 +3804,7 @@ var xxx_messageInfo_PodLogOptions proto.InternalMessageInfo func (m *PodOS) Reset() { *m = PodOS{} } func (*PodOS) ProtoMessage() {} func (*PodOS) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{133} + return fileDescriptor_83c10c24ec417dc9, []int{134} } func (m *PodOS) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -3804,7 +3832,7 @@ var xxx_messageInfo_PodOS proto.InternalMessageInfo func (m *PodPortForwardOptions) Reset() { *m = PodPortForwardOptions{} } func (*PodPortForwardOptions) ProtoMessage() {} func (*PodPortForwardOptions) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{134} + return fileDescriptor_83c10c24ec417dc9, []int{135} } func (m *PodPortForwardOptions) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -3832,7 +3860,7 @@ var xxx_messageInfo_PodPortForwardOptions proto.InternalMessageInfo func (m 
*PodProxyOptions) Reset() { *m = PodProxyOptions{} } func (*PodProxyOptions) ProtoMessage() {} func (*PodProxyOptions) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{135} + return fileDescriptor_83c10c24ec417dc9, []int{136} } func (m *PodProxyOptions) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -3860,7 +3888,7 @@ var xxx_messageInfo_PodProxyOptions proto.InternalMessageInfo func (m *PodReadinessGate) Reset() { *m = PodReadinessGate{} } func (*PodReadinessGate) ProtoMessage() {} func (*PodReadinessGate) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{136} + return fileDescriptor_83c10c24ec417dc9, []int{137} } func (m *PodReadinessGate) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -3888,7 +3916,7 @@ var xxx_messageInfo_PodReadinessGate proto.InternalMessageInfo func (m *PodResourceClaim) Reset() { *m = PodResourceClaim{} } func (*PodResourceClaim) ProtoMessage() {} func (*PodResourceClaim) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{137} + return fileDescriptor_83c10c24ec417dc9, []int{138} } func (m *PodResourceClaim) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -3913,10 +3941,38 @@ func (m *PodResourceClaim) XXX_DiscardUnknown() { var xxx_messageInfo_PodResourceClaim proto.InternalMessageInfo +func (m *PodResourceClaimStatus) Reset() { *m = PodResourceClaimStatus{} } +func (*PodResourceClaimStatus) ProtoMessage() {} +func (*PodResourceClaimStatus) Descriptor() ([]byte, []int) { + return fileDescriptor_83c10c24ec417dc9, []int{139} +} +func (m *PodResourceClaimStatus) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *PodResourceClaimStatus) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err + } + return b[:n], nil +} +func (m *PodResourceClaimStatus) XXX_Merge(src proto.Message) { + 
xxx_messageInfo_PodResourceClaimStatus.Merge(m, src) +} +func (m *PodResourceClaimStatus) XXX_Size() int { + return m.Size() +} +func (m *PodResourceClaimStatus) XXX_DiscardUnknown() { + xxx_messageInfo_PodResourceClaimStatus.DiscardUnknown(m) +} + +var xxx_messageInfo_PodResourceClaimStatus proto.InternalMessageInfo + func (m *PodSchedulingGate) Reset() { *m = PodSchedulingGate{} } func (*PodSchedulingGate) ProtoMessage() {} func (*PodSchedulingGate) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{138} + return fileDescriptor_83c10c24ec417dc9, []int{140} } func (m *PodSchedulingGate) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -3944,7 +4000,7 @@ var xxx_messageInfo_PodSchedulingGate proto.InternalMessageInfo func (m *PodSecurityContext) Reset() { *m = PodSecurityContext{} } func (*PodSecurityContext) ProtoMessage() {} func (*PodSecurityContext) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{139} + return fileDescriptor_83c10c24ec417dc9, []int{141} } func (m *PodSecurityContext) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -3972,7 +4028,7 @@ var xxx_messageInfo_PodSecurityContext proto.InternalMessageInfo func (m *PodSignature) Reset() { *m = PodSignature{} } func (*PodSignature) ProtoMessage() {} func (*PodSignature) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{140} + return fileDescriptor_83c10c24ec417dc9, []int{142} } func (m *PodSignature) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -4000,7 +4056,7 @@ var xxx_messageInfo_PodSignature proto.InternalMessageInfo func (m *PodSpec) Reset() { *m = PodSpec{} } func (*PodSpec) ProtoMessage() {} func (*PodSpec) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{141} + return fileDescriptor_83c10c24ec417dc9, []int{143} } func (m *PodSpec) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -4028,7 +4084,7 @@ var xxx_messageInfo_PodSpec 
proto.InternalMessageInfo func (m *PodStatus) Reset() { *m = PodStatus{} } func (*PodStatus) ProtoMessage() {} func (*PodStatus) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{142} + return fileDescriptor_83c10c24ec417dc9, []int{144} } func (m *PodStatus) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -4056,7 +4112,7 @@ var xxx_messageInfo_PodStatus proto.InternalMessageInfo func (m *PodStatusResult) Reset() { *m = PodStatusResult{} } func (*PodStatusResult) ProtoMessage() {} func (*PodStatusResult) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{143} + return fileDescriptor_83c10c24ec417dc9, []int{145} } func (m *PodStatusResult) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -4084,7 +4140,7 @@ var xxx_messageInfo_PodStatusResult proto.InternalMessageInfo func (m *PodTemplate) Reset() { *m = PodTemplate{} } func (*PodTemplate) ProtoMessage() {} func (*PodTemplate) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{144} + return fileDescriptor_83c10c24ec417dc9, []int{146} } func (m *PodTemplate) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -4112,7 +4168,7 @@ var xxx_messageInfo_PodTemplate proto.InternalMessageInfo func (m *PodTemplateList) Reset() { *m = PodTemplateList{} } func (*PodTemplateList) ProtoMessage() {} func (*PodTemplateList) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{145} + return fileDescriptor_83c10c24ec417dc9, []int{147} } func (m *PodTemplateList) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -4140,7 +4196,7 @@ var xxx_messageInfo_PodTemplateList proto.InternalMessageInfo func (m *PodTemplateSpec) Reset() { *m = PodTemplateSpec{} } func (*PodTemplateSpec) ProtoMessage() {} func (*PodTemplateSpec) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{146} + return fileDescriptor_83c10c24ec417dc9, []int{148} } func (m *PodTemplateSpec) XXX_Unmarshal(b 
[]byte) error { return m.Unmarshal(b) @@ -4168,7 +4224,7 @@ var xxx_messageInfo_PodTemplateSpec proto.InternalMessageInfo func (m *PortStatus) Reset() { *m = PortStatus{} } func (*PortStatus) ProtoMessage() {} func (*PortStatus) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{147} + return fileDescriptor_83c10c24ec417dc9, []int{149} } func (m *PortStatus) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -4196,7 +4252,7 @@ var xxx_messageInfo_PortStatus proto.InternalMessageInfo func (m *PortworxVolumeSource) Reset() { *m = PortworxVolumeSource{} } func (*PortworxVolumeSource) ProtoMessage() {} func (*PortworxVolumeSource) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{148} + return fileDescriptor_83c10c24ec417dc9, []int{150} } func (m *PortworxVolumeSource) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -4224,7 +4280,7 @@ var xxx_messageInfo_PortworxVolumeSource proto.InternalMessageInfo func (m *Preconditions) Reset() { *m = Preconditions{} } func (*Preconditions) ProtoMessage() {} func (*Preconditions) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{149} + return fileDescriptor_83c10c24ec417dc9, []int{151} } func (m *Preconditions) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -4252,7 +4308,7 @@ var xxx_messageInfo_Preconditions proto.InternalMessageInfo func (m *PreferAvoidPodsEntry) Reset() { *m = PreferAvoidPodsEntry{} } func (*PreferAvoidPodsEntry) ProtoMessage() {} func (*PreferAvoidPodsEntry) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{150} + return fileDescriptor_83c10c24ec417dc9, []int{152} } func (m *PreferAvoidPodsEntry) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -4280,7 +4336,7 @@ var xxx_messageInfo_PreferAvoidPodsEntry proto.InternalMessageInfo func (m *PreferredSchedulingTerm) Reset() { *m = PreferredSchedulingTerm{} } func (*PreferredSchedulingTerm) ProtoMessage() {} func 
(*PreferredSchedulingTerm) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{151} + return fileDescriptor_83c10c24ec417dc9, []int{153} } func (m *PreferredSchedulingTerm) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -4308,7 +4364,7 @@ var xxx_messageInfo_PreferredSchedulingTerm proto.InternalMessageInfo func (m *Probe) Reset() { *m = Probe{} } func (*Probe) ProtoMessage() {} func (*Probe) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{152} + return fileDescriptor_83c10c24ec417dc9, []int{154} } func (m *Probe) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -4336,7 +4392,7 @@ var xxx_messageInfo_Probe proto.InternalMessageInfo func (m *ProbeHandler) Reset() { *m = ProbeHandler{} } func (*ProbeHandler) ProtoMessage() {} func (*ProbeHandler) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{153} + return fileDescriptor_83c10c24ec417dc9, []int{155} } func (m *ProbeHandler) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -4364,7 +4420,7 @@ var xxx_messageInfo_ProbeHandler proto.InternalMessageInfo func (m *ProjectedVolumeSource) Reset() { *m = ProjectedVolumeSource{} } func (*ProjectedVolumeSource) ProtoMessage() {} func (*ProjectedVolumeSource) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{154} + return fileDescriptor_83c10c24ec417dc9, []int{156} } func (m *ProjectedVolumeSource) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -4392,7 +4448,7 @@ var xxx_messageInfo_ProjectedVolumeSource proto.InternalMessageInfo func (m *QuobyteVolumeSource) Reset() { *m = QuobyteVolumeSource{} } func (*QuobyteVolumeSource) ProtoMessage() {} func (*QuobyteVolumeSource) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{155} + return fileDescriptor_83c10c24ec417dc9, []int{157} } func (m *QuobyteVolumeSource) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -4420,7 +4476,7 @@ var 
xxx_messageInfo_QuobyteVolumeSource proto.InternalMessageInfo func (m *RBDPersistentVolumeSource) Reset() { *m = RBDPersistentVolumeSource{} } func (*RBDPersistentVolumeSource) ProtoMessage() {} func (*RBDPersistentVolumeSource) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{156} + return fileDescriptor_83c10c24ec417dc9, []int{158} } func (m *RBDPersistentVolumeSource) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -4448,7 +4504,7 @@ var xxx_messageInfo_RBDPersistentVolumeSource proto.InternalMessageInfo func (m *RBDVolumeSource) Reset() { *m = RBDVolumeSource{} } func (*RBDVolumeSource) ProtoMessage() {} func (*RBDVolumeSource) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{157} + return fileDescriptor_83c10c24ec417dc9, []int{159} } func (m *RBDVolumeSource) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -4476,7 +4532,7 @@ var xxx_messageInfo_RBDVolumeSource proto.InternalMessageInfo func (m *RangeAllocation) Reset() { *m = RangeAllocation{} } func (*RangeAllocation) ProtoMessage() {} func (*RangeAllocation) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{158} + return fileDescriptor_83c10c24ec417dc9, []int{160} } func (m *RangeAllocation) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -4504,7 +4560,7 @@ var xxx_messageInfo_RangeAllocation proto.InternalMessageInfo func (m *ReplicationController) Reset() { *m = ReplicationController{} } func (*ReplicationController) ProtoMessage() {} func (*ReplicationController) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{159} + return fileDescriptor_83c10c24ec417dc9, []int{161} } func (m *ReplicationController) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -4532,7 +4588,7 @@ var xxx_messageInfo_ReplicationController proto.InternalMessageInfo func (m *ReplicationControllerCondition) Reset() { *m = ReplicationControllerCondition{} } func 
(*ReplicationControllerCondition) ProtoMessage() {} func (*ReplicationControllerCondition) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{160} + return fileDescriptor_83c10c24ec417dc9, []int{162} } func (m *ReplicationControllerCondition) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -4560,7 +4616,7 @@ var xxx_messageInfo_ReplicationControllerCondition proto.InternalMessageInfo func (m *ReplicationControllerList) Reset() { *m = ReplicationControllerList{} } func (*ReplicationControllerList) ProtoMessage() {} func (*ReplicationControllerList) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{161} + return fileDescriptor_83c10c24ec417dc9, []int{163} } func (m *ReplicationControllerList) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -4588,7 +4644,7 @@ var xxx_messageInfo_ReplicationControllerList proto.InternalMessageInfo func (m *ReplicationControllerSpec) Reset() { *m = ReplicationControllerSpec{} } func (*ReplicationControllerSpec) ProtoMessage() {} func (*ReplicationControllerSpec) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{162} + return fileDescriptor_83c10c24ec417dc9, []int{164} } func (m *ReplicationControllerSpec) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -4616,7 +4672,7 @@ var xxx_messageInfo_ReplicationControllerSpec proto.InternalMessageInfo func (m *ReplicationControllerStatus) Reset() { *m = ReplicationControllerStatus{} } func (*ReplicationControllerStatus) ProtoMessage() {} func (*ReplicationControllerStatus) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{163} + return fileDescriptor_83c10c24ec417dc9, []int{165} } func (m *ReplicationControllerStatus) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -4644,7 +4700,7 @@ var xxx_messageInfo_ReplicationControllerStatus proto.InternalMessageInfo func (m *ResourceClaim) Reset() { *m = ResourceClaim{} } func (*ResourceClaim) ProtoMessage() 
{} func (*ResourceClaim) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{164} + return fileDescriptor_83c10c24ec417dc9, []int{166} } func (m *ResourceClaim) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -4672,7 +4728,7 @@ var xxx_messageInfo_ResourceClaim proto.InternalMessageInfo func (m *ResourceFieldSelector) Reset() { *m = ResourceFieldSelector{} } func (*ResourceFieldSelector) ProtoMessage() {} func (*ResourceFieldSelector) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{165} + return fileDescriptor_83c10c24ec417dc9, []int{167} } func (m *ResourceFieldSelector) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -4700,7 +4756,7 @@ var xxx_messageInfo_ResourceFieldSelector proto.InternalMessageInfo func (m *ResourceQuota) Reset() { *m = ResourceQuota{} } func (*ResourceQuota) ProtoMessage() {} func (*ResourceQuota) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{166} + return fileDescriptor_83c10c24ec417dc9, []int{168} } func (m *ResourceQuota) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -4728,7 +4784,7 @@ var xxx_messageInfo_ResourceQuota proto.InternalMessageInfo func (m *ResourceQuotaList) Reset() { *m = ResourceQuotaList{} } func (*ResourceQuotaList) ProtoMessage() {} func (*ResourceQuotaList) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{167} + return fileDescriptor_83c10c24ec417dc9, []int{169} } func (m *ResourceQuotaList) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -4756,7 +4812,7 @@ var xxx_messageInfo_ResourceQuotaList proto.InternalMessageInfo func (m *ResourceQuotaSpec) Reset() { *m = ResourceQuotaSpec{} } func (*ResourceQuotaSpec) ProtoMessage() {} func (*ResourceQuotaSpec) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{168} + return fileDescriptor_83c10c24ec417dc9, []int{170} } func (m *ResourceQuotaSpec) XXX_Unmarshal(b []byte) error { return 
m.Unmarshal(b) @@ -4784,7 +4840,7 @@ var xxx_messageInfo_ResourceQuotaSpec proto.InternalMessageInfo func (m *ResourceQuotaStatus) Reset() { *m = ResourceQuotaStatus{} } func (*ResourceQuotaStatus) ProtoMessage() {} func (*ResourceQuotaStatus) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{169} + return fileDescriptor_83c10c24ec417dc9, []int{171} } func (m *ResourceQuotaStatus) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -4812,7 +4868,7 @@ var xxx_messageInfo_ResourceQuotaStatus proto.InternalMessageInfo func (m *ResourceRequirements) Reset() { *m = ResourceRequirements{} } func (*ResourceRequirements) ProtoMessage() {} func (*ResourceRequirements) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{170} + return fileDescriptor_83c10c24ec417dc9, []int{172} } func (m *ResourceRequirements) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -4840,7 +4896,7 @@ var xxx_messageInfo_ResourceRequirements proto.InternalMessageInfo func (m *SELinuxOptions) Reset() { *m = SELinuxOptions{} } func (*SELinuxOptions) ProtoMessage() {} func (*SELinuxOptions) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{171} + return fileDescriptor_83c10c24ec417dc9, []int{173} } func (m *SELinuxOptions) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -4868,7 +4924,7 @@ var xxx_messageInfo_SELinuxOptions proto.InternalMessageInfo func (m *ScaleIOPersistentVolumeSource) Reset() { *m = ScaleIOPersistentVolumeSource{} } func (*ScaleIOPersistentVolumeSource) ProtoMessage() {} func (*ScaleIOPersistentVolumeSource) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{172} + return fileDescriptor_83c10c24ec417dc9, []int{174} } func (m *ScaleIOPersistentVolumeSource) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -4896,7 +4952,7 @@ var xxx_messageInfo_ScaleIOPersistentVolumeSource proto.InternalMessageInfo func (m *ScaleIOVolumeSource) Reset() { *m = 
ScaleIOVolumeSource{} } func (*ScaleIOVolumeSource) ProtoMessage() {} func (*ScaleIOVolumeSource) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{173} + return fileDescriptor_83c10c24ec417dc9, []int{175} } func (m *ScaleIOVolumeSource) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -4924,7 +4980,7 @@ var xxx_messageInfo_ScaleIOVolumeSource proto.InternalMessageInfo func (m *ScopeSelector) Reset() { *m = ScopeSelector{} } func (*ScopeSelector) ProtoMessage() {} func (*ScopeSelector) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{174} + return fileDescriptor_83c10c24ec417dc9, []int{176} } func (m *ScopeSelector) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -4952,7 +5008,7 @@ var xxx_messageInfo_ScopeSelector proto.InternalMessageInfo func (m *ScopedResourceSelectorRequirement) Reset() { *m = ScopedResourceSelectorRequirement{} } func (*ScopedResourceSelectorRequirement) ProtoMessage() {} func (*ScopedResourceSelectorRequirement) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{175} + return fileDescriptor_83c10c24ec417dc9, []int{177} } func (m *ScopedResourceSelectorRequirement) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -4980,7 +5036,7 @@ var xxx_messageInfo_ScopedResourceSelectorRequirement proto.InternalMessageInfo func (m *SeccompProfile) Reset() { *m = SeccompProfile{} } func (*SeccompProfile) ProtoMessage() {} func (*SeccompProfile) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{176} + return fileDescriptor_83c10c24ec417dc9, []int{178} } func (m *SeccompProfile) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -5008,7 +5064,7 @@ var xxx_messageInfo_SeccompProfile proto.InternalMessageInfo func (m *Secret) Reset() { *m = Secret{} } func (*Secret) ProtoMessage() {} func (*Secret) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{177} + return 
fileDescriptor_83c10c24ec417dc9, []int{179} } func (m *Secret) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -5036,7 +5092,7 @@ var xxx_messageInfo_Secret proto.InternalMessageInfo func (m *SecretEnvSource) Reset() { *m = SecretEnvSource{} } func (*SecretEnvSource) ProtoMessage() {} func (*SecretEnvSource) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{178} + return fileDescriptor_83c10c24ec417dc9, []int{180} } func (m *SecretEnvSource) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -5064,7 +5120,7 @@ var xxx_messageInfo_SecretEnvSource proto.InternalMessageInfo func (m *SecretKeySelector) Reset() { *m = SecretKeySelector{} } func (*SecretKeySelector) ProtoMessage() {} func (*SecretKeySelector) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{179} + return fileDescriptor_83c10c24ec417dc9, []int{181} } func (m *SecretKeySelector) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -5092,7 +5148,7 @@ var xxx_messageInfo_SecretKeySelector proto.InternalMessageInfo func (m *SecretList) Reset() { *m = SecretList{} } func (*SecretList) ProtoMessage() {} func (*SecretList) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{180} + return fileDescriptor_83c10c24ec417dc9, []int{182} } func (m *SecretList) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -5120,7 +5176,7 @@ var xxx_messageInfo_SecretList proto.InternalMessageInfo func (m *SecretProjection) Reset() { *m = SecretProjection{} } func (*SecretProjection) ProtoMessage() {} func (*SecretProjection) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{181} + return fileDescriptor_83c10c24ec417dc9, []int{183} } func (m *SecretProjection) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -5148,7 +5204,7 @@ var xxx_messageInfo_SecretProjection proto.InternalMessageInfo func (m *SecretReference) Reset() { *m = SecretReference{} } func (*SecretReference) ProtoMessage() 
{} func (*SecretReference) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{182} + return fileDescriptor_83c10c24ec417dc9, []int{184} } func (m *SecretReference) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -5176,7 +5232,7 @@ var xxx_messageInfo_SecretReference proto.InternalMessageInfo func (m *SecretVolumeSource) Reset() { *m = SecretVolumeSource{} } func (*SecretVolumeSource) ProtoMessage() {} func (*SecretVolumeSource) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{183} + return fileDescriptor_83c10c24ec417dc9, []int{185} } func (m *SecretVolumeSource) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -5204,7 +5260,7 @@ var xxx_messageInfo_SecretVolumeSource proto.InternalMessageInfo func (m *SecurityContext) Reset() { *m = SecurityContext{} } func (*SecurityContext) ProtoMessage() {} func (*SecurityContext) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{184} + return fileDescriptor_83c10c24ec417dc9, []int{186} } func (m *SecurityContext) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -5232,7 +5288,7 @@ var xxx_messageInfo_SecurityContext proto.InternalMessageInfo func (m *SerializedReference) Reset() { *m = SerializedReference{} } func (*SerializedReference) ProtoMessage() {} func (*SerializedReference) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{185} + return fileDescriptor_83c10c24ec417dc9, []int{187} } func (m *SerializedReference) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -5260,7 +5316,7 @@ var xxx_messageInfo_SerializedReference proto.InternalMessageInfo func (m *Service) Reset() { *m = Service{} } func (*Service) ProtoMessage() {} func (*Service) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{186} + return fileDescriptor_83c10c24ec417dc9, []int{188} } func (m *Service) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -5288,7 +5344,7 @@ var 
xxx_messageInfo_Service proto.InternalMessageInfo func (m *ServiceAccount) Reset() { *m = ServiceAccount{} } func (*ServiceAccount) ProtoMessage() {} func (*ServiceAccount) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{187} + return fileDescriptor_83c10c24ec417dc9, []int{189} } func (m *ServiceAccount) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -5316,7 +5372,7 @@ var xxx_messageInfo_ServiceAccount proto.InternalMessageInfo func (m *ServiceAccountList) Reset() { *m = ServiceAccountList{} } func (*ServiceAccountList) ProtoMessage() {} func (*ServiceAccountList) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{188} + return fileDescriptor_83c10c24ec417dc9, []int{190} } func (m *ServiceAccountList) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -5344,7 +5400,7 @@ var xxx_messageInfo_ServiceAccountList proto.InternalMessageInfo func (m *ServiceAccountTokenProjection) Reset() { *m = ServiceAccountTokenProjection{} } func (*ServiceAccountTokenProjection) ProtoMessage() {} func (*ServiceAccountTokenProjection) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{189} + return fileDescriptor_83c10c24ec417dc9, []int{191} } func (m *ServiceAccountTokenProjection) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -5372,7 +5428,7 @@ var xxx_messageInfo_ServiceAccountTokenProjection proto.InternalMessageInfo func (m *ServiceList) Reset() { *m = ServiceList{} } func (*ServiceList) ProtoMessage() {} func (*ServiceList) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{190} + return fileDescriptor_83c10c24ec417dc9, []int{192} } func (m *ServiceList) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -5400,7 +5456,7 @@ var xxx_messageInfo_ServiceList proto.InternalMessageInfo func (m *ServicePort) Reset() { *m = ServicePort{} } func (*ServicePort) ProtoMessage() {} func (*ServicePort) Descriptor() ([]byte, []int) { - return 
fileDescriptor_83c10c24ec417dc9, []int{191} + return fileDescriptor_83c10c24ec417dc9, []int{193} } func (m *ServicePort) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -5428,7 +5484,7 @@ var xxx_messageInfo_ServicePort proto.InternalMessageInfo func (m *ServiceProxyOptions) Reset() { *m = ServiceProxyOptions{} } func (*ServiceProxyOptions) ProtoMessage() {} func (*ServiceProxyOptions) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{192} + return fileDescriptor_83c10c24ec417dc9, []int{194} } func (m *ServiceProxyOptions) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -5456,7 +5512,7 @@ var xxx_messageInfo_ServiceProxyOptions proto.InternalMessageInfo func (m *ServiceSpec) Reset() { *m = ServiceSpec{} } func (*ServiceSpec) ProtoMessage() {} func (*ServiceSpec) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{193} + return fileDescriptor_83c10c24ec417dc9, []int{195} } func (m *ServiceSpec) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -5484,7 +5540,7 @@ var xxx_messageInfo_ServiceSpec proto.InternalMessageInfo func (m *ServiceStatus) Reset() { *m = ServiceStatus{} } func (*ServiceStatus) ProtoMessage() {} func (*ServiceStatus) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{194} + return fileDescriptor_83c10c24ec417dc9, []int{196} } func (m *ServiceStatus) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -5512,7 +5568,7 @@ var xxx_messageInfo_ServiceStatus proto.InternalMessageInfo func (m *SessionAffinityConfig) Reset() { *m = SessionAffinityConfig{} } func (*SessionAffinityConfig) ProtoMessage() {} func (*SessionAffinityConfig) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{195} + return fileDescriptor_83c10c24ec417dc9, []int{197} } func (m *SessionAffinityConfig) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -5540,7 +5596,7 @@ var xxx_messageInfo_SessionAffinityConfig proto.InternalMessageInfo 
func (m *StorageOSPersistentVolumeSource) Reset() { *m = StorageOSPersistentVolumeSource{} } func (*StorageOSPersistentVolumeSource) ProtoMessage() {} func (*StorageOSPersistentVolumeSource) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{196} + return fileDescriptor_83c10c24ec417dc9, []int{198} } func (m *StorageOSPersistentVolumeSource) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -5568,7 +5624,7 @@ var xxx_messageInfo_StorageOSPersistentVolumeSource proto.InternalMessageInfo func (m *StorageOSVolumeSource) Reset() { *m = StorageOSVolumeSource{} } func (*StorageOSVolumeSource) ProtoMessage() {} func (*StorageOSVolumeSource) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{197} + return fileDescriptor_83c10c24ec417dc9, []int{199} } func (m *StorageOSVolumeSource) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -5596,7 +5652,7 @@ var xxx_messageInfo_StorageOSVolumeSource proto.InternalMessageInfo func (m *Sysctl) Reset() { *m = Sysctl{} } func (*Sysctl) ProtoMessage() {} func (*Sysctl) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{198} + return fileDescriptor_83c10c24ec417dc9, []int{200} } func (m *Sysctl) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -5624,7 +5680,7 @@ var xxx_messageInfo_Sysctl proto.InternalMessageInfo func (m *TCPSocketAction) Reset() { *m = TCPSocketAction{} } func (*TCPSocketAction) ProtoMessage() {} func (*TCPSocketAction) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{199} + return fileDescriptor_83c10c24ec417dc9, []int{201} } func (m *TCPSocketAction) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -5652,7 +5708,7 @@ var xxx_messageInfo_TCPSocketAction proto.InternalMessageInfo func (m *Taint) Reset() { *m = Taint{} } func (*Taint) ProtoMessage() {} func (*Taint) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{200} + return 
fileDescriptor_83c10c24ec417dc9, []int{202} } func (m *Taint) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -5680,7 +5736,7 @@ var xxx_messageInfo_Taint proto.InternalMessageInfo func (m *Toleration) Reset() { *m = Toleration{} } func (*Toleration) ProtoMessage() {} func (*Toleration) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{201} + return fileDescriptor_83c10c24ec417dc9, []int{203} } func (m *Toleration) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -5708,7 +5764,7 @@ var xxx_messageInfo_Toleration proto.InternalMessageInfo func (m *TopologySelectorLabelRequirement) Reset() { *m = TopologySelectorLabelRequirement{} } func (*TopologySelectorLabelRequirement) ProtoMessage() {} func (*TopologySelectorLabelRequirement) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{202} + return fileDescriptor_83c10c24ec417dc9, []int{204} } func (m *TopologySelectorLabelRequirement) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -5736,7 +5792,7 @@ var xxx_messageInfo_TopologySelectorLabelRequirement proto.InternalMessageInfo func (m *TopologySelectorTerm) Reset() { *m = TopologySelectorTerm{} } func (*TopologySelectorTerm) ProtoMessage() {} func (*TopologySelectorTerm) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{203} + return fileDescriptor_83c10c24ec417dc9, []int{205} } func (m *TopologySelectorTerm) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -5764,7 +5820,7 @@ var xxx_messageInfo_TopologySelectorTerm proto.InternalMessageInfo func (m *TopologySpreadConstraint) Reset() { *m = TopologySpreadConstraint{} } func (*TopologySpreadConstraint) ProtoMessage() {} func (*TopologySpreadConstraint) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{204} + return fileDescriptor_83c10c24ec417dc9, []int{206} } func (m *TopologySpreadConstraint) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -5792,7 +5848,7 @@ var 
xxx_messageInfo_TopologySpreadConstraint proto.InternalMessageInfo func (m *TypedLocalObjectReference) Reset() { *m = TypedLocalObjectReference{} } func (*TypedLocalObjectReference) ProtoMessage() {} func (*TypedLocalObjectReference) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{205} + return fileDescriptor_83c10c24ec417dc9, []int{207} } func (m *TypedLocalObjectReference) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -5820,7 +5876,7 @@ var xxx_messageInfo_TypedLocalObjectReference proto.InternalMessageInfo func (m *TypedObjectReference) Reset() { *m = TypedObjectReference{} } func (*TypedObjectReference) ProtoMessage() {} func (*TypedObjectReference) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{206} + return fileDescriptor_83c10c24ec417dc9, []int{208} } func (m *TypedObjectReference) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -5848,7 +5904,7 @@ var xxx_messageInfo_TypedObjectReference proto.InternalMessageInfo func (m *Volume) Reset() { *m = Volume{} } func (*Volume) ProtoMessage() {} func (*Volume) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{207} + return fileDescriptor_83c10c24ec417dc9, []int{209} } func (m *Volume) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -5876,7 +5932,7 @@ var xxx_messageInfo_Volume proto.InternalMessageInfo func (m *VolumeDevice) Reset() { *m = VolumeDevice{} } func (*VolumeDevice) ProtoMessage() {} func (*VolumeDevice) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{208} + return fileDescriptor_83c10c24ec417dc9, []int{210} } func (m *VolumeDevice) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -5904,7 +5960,7 @@ var xxx_messageInfo_VolumeDevice proto.InternalMessageInfo func (m *VolumeMount) Reset() { *m = VolumeMount{} } func (*VolumeMount) ProtoMessage() {} func (*VolumeMount) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{209} 
+ return fileDescriptor_83c10c24ec417dc9, []int{211} } func (m *VolumeMount) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -5932,7 +5988,7 @@ var xxx_messageInfo_VolumeMount proto.InternalMessageInfo func (m *VolumeNodeAffinity) Reset() { *m = VolumeNodeAffinity{} } func (*VolumeNodeAffinity) ProtoMessage() {} func (*VolumeNodeAffinity) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{210} + return fileDescriptor_83c10c24ec417dc9, []int{212} } func (m *VolumeNodeAffinity) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -5960,7 +6016,7 @@ var xxx_messageInfo_VolumeNodeAffinity proto.InternalMessageInfo func (m *VolumeProjection) Reset() { *m = VolumeProjection{} } func (*VolumeProjection) ProtoMessage() {} func (*VolumeProjection) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{211} + return fileDescriptor_83c10c24ec417dc9, []int{213} } func (m *VolumeProjection) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -5988,7 +6044,7 @@ var xxx_messageInfo_VolumeProjection proto.InternalMessageInfo func (m *VolumeSource) Reset() { *m = VolumeSource{} } func (*VolumeSource) ProtoMessage() {} func (*VolumeSource) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{212} + return fileDescriptor_83c10c24ec417dc9, []int{214} } func (m *VolumeSource) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -6016,7 +6072,7 @@ var xxx_messageInfo_VolumeSource proto.InternalMessageInfo func (m *VsphereVirtualDiskVolumeSource) Reset() { *m = VsphereVirtualDiskVolumeSource{} } func (*VsphereVirtualDiskVolumeSource) ProtoMessage() {} func (*VsphereVirtualDiskVolumeSource) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{213} + return fileDescriptor_83c10c24ec417dc9, []int{215} } func (m *VsphereVirtualDiskVolumeSource) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -6044,7 +6100,7 @@ var 
xxx_messageInfo_VsphereVirtualDiskVolumeSource proto.InternalMessageInfo func (m *WeightedPodAffinityTerm) Reset() { *m = WeightedPodAffinityTerm{} } func (*WeightedPodAffinityTerm) ProtoMessage() {} func (*WeightedPodAffinityTerm) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{214} + return fileDescriptor_83c10c24ec417dc9, []int{216} } func (m *WeightedPodAffinityTerm) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -6072,7 +6128,7 @@ var xxx_messageInfo_WeightedPodAffinityTerm proto.InternalMessageInfo func (m *WindowsSecurityContextOptions) Reset() { *m = WindowsSecurityContextOptions{} } func (*WindowsSecurityContextOptions) ProtoMessage() {} func (*WindowsSecurityContextOptions) Descriptor() ([]byte, []int) { - return fileDescriptor_83c10c24ec417dc9, []int{215} + return fileDescriptor_83c10c24ec417dc9, []int{217} } func (m *WindowsSecurityContextOptions) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -6174,6 +6230,7 @@ func init() { proto.RegisterType((*HTTPGetAction)(nil), "k8s.io.api.core.v1.HTTPGetAction") proto.RegisterType((*HTTPHeader)(nil), "k8s.io.api.core.v1.HTTPHeader") proto.RegisterType((*HostAlias)(nil), "k8s.io.api.core.v1.HostAlias") + proto.RegisterType((*HostIP)(nil), "k8s.io.api.core.v1.HostIP") proto.RegisterType((*HostPathVolumeSource)(nil), "k8s.io.api.core.v1.HostPathVolumeSource") proto.RegisterType((*ISCSIPersistentVolumeSource)(nil), "k8s.io.api.core.v1.ISCSIPersistentVolumeSource") proto.RegisterType((*ISCSIVolumeSource)(nil), "k8s.io.api.core.v1.ISCSIVolumeSource") @@ -6227,6 +6284,7 @@ func init() { proto.RegisterType((*PersistentVolumeClaimList)(nil), "k8s.io.api.core.v1.PersistentVolumeClaimList") proto.RegisterType((*PersistentVolumeClaimSpec)(nil), "k8s.io.api.core.v1.PersistentVolumeClaimSpec") proto.RegisterType((*PersistentVolumeClaimStatus)(nil), "k8s.io.api.core.v1.PersistentVolumeClaimStatus") + proto.RegisterMapType((map[ResourceName]ClaimResourceStatus)(nil), 
"k8s.io.api.core.v1.PersistentVolumeClaimStatus.AllocatedResourceStatusesEntry") proto.RegisterMapType((ResourceList)(nil), "k8s.io.api.core.v1.PersistentVolumeClaimStatus.AllocatedResourcesEntry") proto.RegisterMapType((ResourceList)(nil), "k8s.io.api.core.v1.PersistentVolumeClaimStatus.CapacityEntry") proto.RegisterType((*PersistentVolumeClaimTemplate)(nil), "k8s.io.api.core.v1.PersistentVolumeClaimTemplate") @@ -6254,6 +6312,7 @@ func init() { proto.RegisterType((*PodProxyOptions)(nil), "k8s.io.api.core.v1.PodProxyOptions") proto.RegisterType((*PodReadinessGate)(nil), "k8s.io.api.core.v1.PodReadinessGate") proto.RegisterType((*PodResourceClaim)(nil), "k8s.io.api.core.v1.PodResourceClaim") + proto.RegisterType((*PodResourceClaimStatus)(nil), "k8s.io.api.core.v1.PodResourceClaimStatus") proto.RegisterType((*PodSchedulingGate)(nil), "k8s.io.api.core.v1.PodSchedulingGate") proto.RegisterType((*PodSecurityContext)(nil), "k8s.io.api.core.v1.PodSecurityContext") proto.RegisterType((*PodSignature)(nil), "k8s.io.api.core.v1.PodSignature") @@ -6350,925 +6409,934 @@ func init() { } var fileDescriptor_83c10c24ec417dc9 = []byte{ - // 14685 bytes of a gzipped FileDescriptorProto - 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xec, 0xbd, 0x69, 0x90, 0x5c, 0xd7, - 0x75, 0x18, 0xac, 0xd7, 0x3d, 0x5b, 0x9f, 0xd9, 0xef, 0x00, 0xe0, 0x60, 0x48, 0xa0, 0xc1, 0x47, - 0x12, 0x04, 0x45, 0x72, 0x20, 0x70, 0x91, 0x28, 0x52, 0xa2, 0x35, 0x2b, 0x30, 0x04, 0x66, 0xd0, - 0xbc, 0x3d, 0x00, 0x24, 0x8a, 0x52, 0xe9, 0x4d, 0xf7, 0x9d, 0x99, 0xa7, 0xe9, 0x7e, 0xaf, 0xf9, - 0xde, 0xeb, 0x01, 0x06, 0x9f, 0x54, 0x9f, 0x2d, 0xc7, 0x8b, 0x6c, 0x27, 0xa5, 0x4a, 0x39, 0x4b, - 0xc9, 0x2e, 0x57, 0xca, 0x76, 0x6c, 0x2b, 0xca, 0xa6, 0xc8, 0xb1, 0x1d, 0xcb, 0x5b, 0xb6, 0x8a, - 0x93, 0x4a, 0x39, 0x8e, 0xab, 0x62, 0xb9, 0xe2, 0xca, 0xc4, 0x82, 0x53, 0xe5, 0x52, 0x55, 0x62, - 0x3b, 0xcb, 0x8f, 0x64, 0xe2, 0xc4, 0xa9, 0xbb, 0xbe, 0x7b, 0xdf, 0xd2, 0xdd, 0x03, 0x0e, 0x46, - 0x94, 0x8a, 0xff, 0xba, 
0xcf, 0x39, 0xf7, 0xdc, 0xfb, 0xee, 0x7a, 0xee, 0x39, 0xe7, 0x9e, 0x03, - 0xaf, 0xec, 0xbc, 0x14, 0xce, 0xba, 0xfe, 0xc5, 0x9d, 0xf6, 0x06, 0x09, 0x3c, 0x12, 0x91, 0xf0, - 0xe2, 0x2e, 0xf1, 0xea, 0x7e, 0x70, 0x51, 0x20, 0x9c, 0x96, 0x7b, 0xb1, 0xe6, 0x07, 0xe4, 0xe2, - 0xee, 0xa5, 0x8b, 0x5b, 0xc4, 0x23, 0x81, 0x13, 0x91, 0xfa, 0x6c, 0x2b, 0xf0, 0x23, 0x1f, 0x21, - 0x4e, 0x33, 0xeb, 0xb4, 0xdc, 0x59, 0x4a, 0x33, 0xbb, 0x7b, 0x69, 0xe6, 0xd9, 0x2d, 0x37, 0xda, - 0x6e, 0x6f, 0xcc, 0xd6, 0xfc, 0xe6, 0xc5, 0x2d, 0x7f, 0xcb, 0xbf, 0xc8, 0x48, 0x37, 0xda, 0x9b, - 0xec, 0x1f, 0xfb, 0xc3, 0x7e, 0x71, 0x16, 0x33, 0x2f, 0xc4, 0xd5, 0x34, 0x9d, 0xda, 0xb6, 0xeb, - 0x91, 0x60, 0xef, 0x62, 0x6b, 0x67, 0x8b, 0xd5, 0x1b, 0x90, 0xd0, 0x6f, 0x07, 0x35, 0x92, 0xac, - 0xb8, 0x63, 0xa9, 0xf0, 0x62, 0x93, 0x44, 0x4e, 0x46, 0x73, 0x67, 0x2e, 0xe6, 0x95, 0x0a, 0xda, - 0x5e, 0xe4, 0x36, 0xd3, 0xd5, 0xbc, 0xbf, 0x5b, 0x81, 0xb0, 0xb6, 0x4d, 0x9a, 0x4e, 0xaa, 0xdc, - 0xf3, 0x79, 0xe5, 0xda, 0x91, 0xdb, 0xb8, 0xe8, 0x7a, 0x51, 0x18, 0x05, 0xc9, 0x42, 0xf6, 0xd7, - 0x2d, 0x38, 0x37, 0x77, 0xab, 0xba, 0xd4, 0x70, 0xc2, 0xc8, 0xad, 0xcd, 0x37, 0xfc, 0xda, 0x4e, - 0x35, 0xf2, 0x03, 0x72, 0xd3, 0x6f, 0xb4, 0x9b, 0xa4, 0xca, 0x3a, 0x02, 0x3d, 0x03, 0x43, 0xbb, - 0xec, 0xff, 0xca, 0xe2, 0xb4, 0x75, 0xce, 0xba, 0x50, 0x9a, 0x9f, 0xf8, 0xcd, 0xfd, 0xf2, 0x7b, - 0xee, 0xed, 0x97, 0x87, 0x6e, 0x0a, 0x38, 0x56, 0x14, 0xe8, 0x3c, 0x0c, 0x6c, 0x86, 0xeb, 0x7b, - 0x2d, 0x32, 0x5d, 0x60, 0xb4, 0x63, 0x82, 0x76, 0x60, 0xb9, 0x4a, 0xa1, 0x58, 0x60, 0xd1, 0x45, - 0x28, 0xb5, 0x9c, 0x20, 0x72, 0x23, 0xd7, 0xf7, 0xa6, 0x8b, 0xe7, 0xac, 0x0b, 0xfd, 0xf3, 0x93, - 0x82, 0xb4, 0x54, 0x91, 0x08, 0x1c, 0xd3, 0xd0, 0x66, 0x04, 0xc4, 0xa9, 0x5f, 0xf7, 0x1a, 0x7b, - 0xd3, 0x7d, 0xe7, 0xac, 0x0b, 0x43, 0x71, 0x33, 0xb0, 0x80, 0x63, 0x45, 0x61, 0x7f, 0xb1, 0x00, - 0x43, 0x73, 0x9b, 0x9b, 0xae, 0xe7, 0x46, 0x7b, 0xe8, 0x26, 0x8c, 0x78, 0x7e, 0x9d, 0xc8, 0xff, - 0xec, 0x2b, 0x86, 0x9f, 0x3b, 0x37, 0x9b, 0x9e, 0x4a, 0xb3, 
0x6b, 0x1a, 0xdd, 0xfc, 0xc4, 0xbd, - 0xfd, 0xf2, 0x88, 0x0e, 0xc1, 0x06, 0x1f, 0x84, 0x61, 0xb8, 0xe5, 0xd7, 0x15, 0xdb, 0x02, 0x63, - 0x5b, 0xce, 0x62, 0x5b, 0x89, 0xc9, 0xe6, 0xc7, 0xef, 0xed, 0x97, 0x87, 0x35, 0x00, 0xd6, 0x99, - 0xa0, 0x0d, 0x18, 0xa7, 0x7f, 0xbd, 0xc8, 0x55, 0x7c, 0x8b, 0x8c, 0xef, 0x63, 0x79, 0x7c, 0x35, - 0xd2, 0xf9, 0xa9, 0x7b, 0xfb, 0xe5, 0xf1, 0x04, 0x10, 0x27, 0x19, 0xda, 0x77, 0x61, 0x6c, 0x2e, - 0x8a, 0x9c, 0xda, 0x36, 0xa9, 0xf3, 0x11, 0x44, 0x2f, 0x40, 0x9f, 0xe7, 0x34, 0x89, 0x18, 0xdf, - 0x73, 0xa2, 0x63, 0xfb, 0xd6, 0x9c, 0x26, 0x39, 0xd8, 0x2f, 0x4f, 0xdc, 0xf0, 0xdc, 0xb7, 0xda, - 0x62, 0x56, 0x50, 0x18, 0x66, 0xd4, 0xe8, 0x39, 0x80, 0x3a, 0xd9, 0x75, 0x6b, 0xa4, 0xe2, 0x44, - 0xdb, 0x62, 0xbc, 0x91, 0x28, 0x0b, 0x8b, 0x0a, 0x83, 0x35, 0x2a, 0xfb, 0x0e, 0x94, 0xe6, 0x76, - 0x7d, 0xb7, 0x5e, 0xf1, 0xeb, 0x21, 0xda, 0x81, 0xf1, 0x56, 0x40, 0x36, 0x49, 0xa0, 0x40, 0xd3, - 0xd6, 0xb9, 0xe2, 0x85, 0xe1, 0xe7, 0x2e, 0x64, 0x7e, 0xac, 0x49, 0xba, 0xe4, 0x45, 0xc1, 0xde, - 0xfc, 0x43, 0xa2, 0xbe, 0xf1, 0x04, 0x16, 0x27, 0x39, 0xdb, 0xff, 0xac, 0x00, 0x27, 0xe7, 0xee, - 0xb6, 0x03, 0xb2, 0xe8, 0x86, 0x3b, 0xc9, 0x19, 0x5e, 0x77, 0xc3, 0x9d, 0xb5, 0xb8, 0x07, 0xd4, - 0xd4, 0x5a, 0x14, 0x70, 0xac, 0x28, 0xd0, 0xb3, 0x30, 0x48, 0x7f, 0xdf, 0xc0, 0x2b, 0xe2, 0x93, - 0xa7, 0x04, 0xf1, 0xf0, 0xa2, 0x13, 0x39, 0x8b, 0x1c, 0x85, 0x25, 0x0d, 0x5a, 0x85, 0xe1, 0x1a, - 0x5b, 0x90, 0x5b, 0xab, 0x7e, 0x9d, 0xb0, 0xc1, 0x2c, 0xcd, 0x3f, 0x4d, 0xc9, 0x17, 0x62, 0xf0, - 0xc1, 0x7e, 0x79, 0x9a, 0xb7, 0x4d, 0xb0, 0xd0, 0x70, 0x58, 0x2f, 0x8f, 0x6c, 0xb5, 0xbe, 0xfa, - 0x18, 0x27, 0xc8, 0x58, 0x5b, 0x17, 0xb4, 0xa5, 0xd2, 0xcf, 0x96, 0xca, 0x48, 0xf6, 0x32, 0x41, - 0x97, 0xa0, 0x6f, 0xc7, 0xf5, 0xea, 0xd3, 0x03, 0x8c, 0xd7, 0x19, 0x3a, 0xe6, 0x57, 0x5d, 0xaf, - 0x7e, 0xb0, 0x5f, 0x9e, 0x34, 0x9a, 0x43, 0x81, 0x98, 0x91, 0xda, 0xff, 0xdd, 0x82, 0x32, 0xc3, - 0x2d, 0xbb, 0x0d, 0x52, 0x21, 0x41, 0xe8, 0x86, 0x11, 0xf1, 0x22, 0xa3, 0x43, 0x9f, 0x03, 0x08, - 
0x49, 0x2d, 0x20, 0x91, 0xd6, 0xa5, 0x6a, 0x62, 0x54, 0x15, 0x06, 0x6b, 0x54, 0x74, 0x43, 0x08, - 0xb7, 0x9d, 0x80, 0xcd, 0x2f, 0xd1, 0xb1, 0x6a, 0x43, 0xa8, 0x4a, 0x04, 0x8e, 0x69, 0x8c, 0x0d, - 0xa1, 0xd8, 0x6d, 0x43, 0x40, 0x1f, 0x86, 0xf1, 0xb8, 0xb2, 0xb0, 0xe5, 0xd4, 0x64, 0x07, 0xb2, - 0x25, 0x53, 0x35, 0x51, 0x38, 0x49, 0x6b, 0xff, 0x2d, 0x4b, 0x4c, 0x1e, 0xfa, 0xd5, 0xef, 0xf0, - 0x6f, 0xb5, 0x7f, 0xc9, 0x82, 0xc1, 0x79, 0xd7, 0xab, 0xbb, 0xde, 0x16, 0xfa, 0x14, 0x0c, 0xd1, - 0xb3, 0xa9, 0xee, 0x44, 0x8e, 0xd8, 0xf7, 0xde, 0xa7, 0xad, 0x2d, 0x75, 0x54, 0xcc, 0xb6, 0x76, - 0xb6, 0x28, 0x20, 0x9c, 0xa5, 0xd4, 0x74, 0xb5, 0x5d, 0xdf, 0xf8, 0x34, 0xa9, 0x45, 0xab, 0x24, - 0x72, 0xe2, 0xcf, 0x89, 0x61, 0x58, 0x71, 0x45, 0x57, 0x61, 0x20, 0x72, 0x82, 0x2d, 0x12, 0x89, - 0x0d, 0x30, 0x73, 0xa3, 0xe2, 0x25, 0x31, 0x5d, 0x91, 0xc4, 0xab, 0x91, 0xf8, 0x58, 0x58, 0x67, - 0x45, 0xb1, 0x60, 0x61, 0xff, 0x9f, 0x41, 0x38, 0xbd, 0x50, 0x5d, 0xc9, 0x99, 0x57, 0xe7, 0x61, - 0xa0, 0x1e, 0xb8, 0xbb, 0x24, 0x10, 0xfd, 0xac, 0xb8, 0x2c, 0x32, 0x28, 0x16, 0x58, 0xf4, 0x12, - 0x8c, 0xf0, 0x03, 0xe9, 0x8a, 0xe3, 0xd5, 0x1b, 0xb2, 0x8b, 0x4f, 0x08, 0xea, 0x91, 0x9b, 0x1a, - 0x0e, 0x1b, 0x94, 0x87, 0x9c, 0x54, 0xe7, 0x13, 0x8b, 0x31, 0xef, 0xb0, 0xfb, 0xbc, 0x05, 0x13, - 0xbc, 0x9a, 0xb9, 0x28, 0x0a, 0xdc, 0x8d, 0x76, 0x44, 0xc2, 0xe9, 0x7e, 0xb6, 0xd3, 0x2d, 0x64, - 0xf5, 0x56, 0x6e, 0x0f, 0xcc, 0xde, 0x4c, 0x70, 0xe1, 0x9b, 0xe0, 0xb4, 0xa8, 0x77, 0x22, 0x89, - 0xc6, 0xa9, 0x6a, 0xd1, 0xf7, 0x5a, 0x30, 0x53, 0xf3, 0xbd, 0x28, 0xf0, 0x1b, 0x0d, 0x12, 0x54, - 0xda, 0x1b, 0x0d, 0x37, 0xdc, 0xe6, 0xf3, 0x14, 0x93, 0x4d, 0xb6, 0x13, 0xe4, 0x8c, 0xa1, 0x22, - 0x12, 0x63, 0x78, 0xf6, 0xde, 0x7e, 0x79, 0x66, 0x21, 0x97, 0x15, 0xee, 0x50, 0x0d, 0xda, 0x01, - 0x44, 0x8f, 0xd2, 0x6a, 0xe4, 0x6c, 0x91, 0xb8, 0xf2, 0xc1, 0xde, 0x2b, 0x3f, 0x75, 0x6f, 0xbf, - 0x8c, 0xd6, 0x52, 0x2c, 0x70, 0x06, 0x5b, 0xf4, 0x16, 0x9c, 0xa0, 0xd0, 0xd4, 0xb7, 0x0e, 0xf5, - 0x5e, 0xdd, 0xf4, 0xbd, 0xfd, 0xf2, 
0x89, 0xb5, 0x0c, 0x26, 0x38, 0x93, 0x35, 0xfa, 0x6e, 0x0b, - 0x4e, 0xc7, 0x9f, 0xbf, 0x74, 0xa7, 0xe5, 0x78, 0xf5, 0xb8, 0xe2, 0x52, 0xef, 0x15, 0xd3, 0x3d, - 0xf9, 0xf4, 0x42, 0x1e, 0x27, 0x9c, 0x5f, 0x09, 0xf2, 0x60, 0x8a, 0x36, 0x2d, 0x59, 0x37, 0xf4, - 0x5e, 0xf7, 0x43, 0xf7, 0xf6, 0xcb, 0x53, 0x6b, 0x69, 0x1e, 0x38, 0x8b, 0xf1, 0xcc, 0x02, 0x9c, - 0xcc, 0x9c, 0x9d, 0x68, 0x02, 0x8a, 0x3b, 0x84, 0x4b, 0x5d, 0x25, 0x4c, 0x7f, 0xa2, 0x13, 0xd0, - 0xbf, 0xeb, 0x34, 0xda, 0x62, 0x61, 0x62, 0xfe, 0xe7, 0xe5, 0xc2, 0x4b, 0x96, 0xfd, 0xcf, 0x8b, - 0x30, 0xbe, 0x50, 0x5d, 0xb9, 0xaf, 0x55, 0xaf, 0x1f, 0x7b, 0x85, 0x8e, 0xc7, 0x5e, 0x7c, 0x88, - 0x16, 0x73, 0x0f, 0xd1, 0xff, 0x3f, 0x63, 0xc9, 0xf6, 0xb1, 0x25, 0xfb, 0xc1, 0x9c, 0x25, 0x7b, - 0xc4, 0x0b, 0x75, 0x37, 0x67, 0xd6, 0xf6, 0xb3, 0x01, 0xcc, 0x94, 0x90, 0xae, 0xf9, 0x35, 0xa7, - 0x91, 0xdc, 0x6a, 0x0f, 0x39, 0x75, 0x8f, 0x66, 0x1c, 0x6b, 0x30, 0xb2, 0xe0, 0xb4, 0x9c, 0x0d, - 0xb7, 0xe1, 0x46, 0x2e, 0x09, 0xd1, 0x93, 0x50, 0x74, 0xea, 0x75, 0x26, 0xdd, 0x95, 0xe6, 0x4f, - 0xde, 0xdb, 0x2f, 0x17, 0xe7, 0xea, 0x54, 0xcc, 0x00, 0x45, 0xb5, 0x87, 0x29, 0x05, 0x7a, 0x2f, - 0xf4, 0xd5, 0x03, 0xbf, 0x35, 0x5d, 0x60, 0x94, 0x74, 0x95, 0xf7, 0x2d, 0x06, 0x7e, 0x2b, 0x41, - 0xca, 0x68, 0xec, 0xdf, 0x28, 0xc0, 0x23, 0x0b, 0xa4, 0xb5, 0xbd, 0x5c, 0xcd, 0x39, 0x2f, 0x2e, - 0xc0, 0x50, 0xd3, 0xf7, 0xdc, 0xc8, 0x0f, 0x42, 0x51, 0x35, 0x9b, 0x11, 0xab, 0x02, 0x86, 0x15, - 0x16, 0x9d, 0x83, 0xbe, 0x56, 0x2c, 0xc4, 0x8e, 0x48, 0x01, 0x98, 0x89, 0xaf, 0x0c, 0x43, 0x29, - 0xda, 0x21, 0x09, 0xc4, 0x8c, 0x51, 0x14, 0x37, 0x42, 0x12, 0x60, 0x86, 0x89, 0x25, 0x01, 0x2a, - 0x23, 0x88, 0x13, 0x21, 0x21, 0x09, 0x50, 0x0c, 0xd6, 0xa8, 0x50, 0x05, 0x4a, 0x61, 0x62, 0x64, - 0x7b, 0x5a, 0x9a, 0xa3, 0x4c, 0x54, 0x50, 0x23, 0x19, 0x33, 0x31, 0x4e, 0xb0, 0x81, 0xae, 0xa2, - 0xc2, 0xd7, 0x0a, 0x80, 0x78, 0x17, 0x7e, 0x9b, 0x75, 0xdc, 0x8d, 0x74, 0xc7, 0xf5, 0xbe, 0x24, - 0x8e, 0xaa, 0xf7, 0xfe, 0x87, 0x05, 0x8f, 0x2c, 0xb8, 0x5e, 0x9d, 0x04, 
0x39, 0x13, 0xf0, 0xc1, - 0xdc, 0x9d, 0x0f, 0x27, 0xa4, 0x18, 0x53, 0xac, 0xef, 0x08, 0xa6, 0x98, 0xfd, 0x27, 0x16, 0x20, - 0xfe, 0xd9, 0xef, 0xb8, 0x8f, 0xbd, 0x91, 0xfe, 0xd8, 0x23, 0x98, 0x16, 0xf6, 0xdf, 0xb3, 0x60, - 0x78, 0xa1, 0xe1, 0xb8, 0x4d, 0xf1, 0xa9, 0x0b, 0x30, 0x29, 0x15, 0x45, 0x0c, 0xac, 0xc9, 0xfe, - 0x74, 0x73, 0x9b, 0xc4, 0x49, 0x24, 0x4e, 0xd3, 0xa3, 0x8f, 0xc3, 0x69, 0x03, 0xb8, 0x4e, 0x9a, - 0xad, 0x86, 0x13, 0xe9, 0xb7, 0x02, 0x76, 0xfa, 0xe3, 0x3c, 0x22, 0x9c, 0x5f, 0xde, 0xbe, 0x06, - 0x63, 0x0b, 0x0d, 0x97, 0x78, 0xd1, 0x4a, 0x65, 0xc1, 0xf7, 0x36, 0xdd, 0x2d, 0xf4, 0x32, 0x8c, - 0x45, 0x6e, 0x93, 0xf8, 0xed, 0xa8, 0x4a, 0x6a, 0xbe, 0xc7, 0xee, 0xda, 0xd6, 0x85, 0xfe, 0x79, - 0x74, 0x6f, 0xbf, 0x3c, 0xb6, 0x6e, 0x60, 0x70, 0x82, 0xd2, 0xfe, 0x7d, 0x3a, 0xe2, 0x7e, 0xb3, - 0xe5, 0x7b, 0xc4, 0x8b, 0x16, 0x7c, 0xaf, 0xce, 0x75, 0x32, 0x2f, 0x43, 0x5f, 0x44, 0x47, 0x90, - 0x7f, 0xf9, 0x79, 0xb9, 0xb4, 0xe9, 0xb8, 0x1d, 0xec, 0x97, 0x4f, 0xa5, 0x4b, 0xb0, 0x91, 0x65, - 0x65, 0xd0, 0x07, 0x61, 0x20, 0x8c, 0x9c, 0xa8, 0x1d, 0x8a, 0x4f, 0x7d, 0x54, 0x8e, 0x7f, 0x95, - 0x41, 0x0f, 0xf6, 0xcb, 0xe3, 0xaa, 0x18, 0x07, 0x61, 0x51, 0x00, 0x3d, 0x05, 0x83, 0x4d, 0x12, - 0x86, 0xce, 0x96, 0x3c, 0xbf, 0xc7, 0x45, 0xd9, 0xc1, 0x55, 0x0e, 0xc6, 0x12, 0x8f, 0x1e, 0x83, - 0x7e, 0x12, 0x04, 0x7e, 0x20, 0x76, 0x95, 0x51, 0x41, 0xd8, 0xbf, 0x44, 0x81, 0x98, 0xe3, 0xec, - 0x7f, 0x63, 0xc1, 0xb8, 0x6a, 0x2b, 0xaf, 0xeb, 0x18, 0xee, 0x4d, 0x6f, 0x00, 0xd4, 0xe4, 0x07, - 0x86, 0xec, 0xbc, 0x1b, 0x7e, 0xee, 0x7c, 0xa6, 0x68, 0x91, 0xea, 0xc6, 0x98, 0xb3, 0x02, 0x85, - 0x58, 0xe3, 0x66, 0xff, 0xaa, 0x05, 0x53, 0x89, 0x2f, 0xba, 0xe6, 0x86, 0x11, 0x7a, 0x33, 0xf5, - 0x55, 0xb3, 0xbd, 0x7d, 0x15, 0x2d, 0xcd, 0xbe, 0x49, 0x2d, 0x3e, 0x09, 0xd1, 0xbe, 0xe8, 0x0a, - 0xf4, 0xbb, 0x11, 0x69, 0xca, 0x8f, 0x79, 0xac, 0xe3, 0xc7, 0xf0, 0x56, 0xc5, 0x23, 0xb2, 0x42, - 0x4b, 0x62, 0xce, 0xc0, 0xfe, 0x8d, 0x22, 0x94, 0xf8, 0xb4, 0x5d, 0x75, 0x5a, 0xc7, 0x30, 0x16, - 0x4f, 0x43, 
0xc9, 0x6d, 0x36, 0xdb, 0x91, 0xb3, 0x21, 0x0e, 0xa0, 0x21, 0xbe, 0x19, 0xac, 0x48, - 0x20, 0x8e, 0xf1, 0x68, 0x05, 0xfa, 0x58, 0x53, 0xf8, 0x57, 0x3e, 0x99, 0xfd, 0x95, 0xa2, 0xed, - 0xb3, 0x8b, 0x4e, 0xe4, 0x70, 0xd9, 0x4f, 0x9d, 0x7c, 0x14, 0x84, 0x19, 0x0b, 0xe4, 0x00, 0x6c, - 0xb8, 0x9e, 0x13, 0xec, 0x51, 0xd8, 0x74, 0x91, 0x31, 0x7c, 0xb6, 0x33, 0xc3, 0x79, 0x45, 0xcf, - 0xd9, 0xaa, 0x0f, 0x8b, 0x11, 0x58, 0x63, 0x3a, 0xf3, 0x01, 0x28, 0x29, 0xe2, 0xc3, 0x88, 0x70, - 0x33, 0x1f, 0x86, 0xf1, 0x44, 0x5d, 0xdd, 0x8a, 0x8f, 0xe8, 0x12, 0xe0, 0x2f, 0xb3, 0x2d, 0x43, - 0xb4, 0x7a, 0xc9, 0xdb, 0x15, 0x3b, 0xe7, 0x5d, 0x38, 0xd1, 0xc8, 0xd8, 0x7b, 0xc5, 0xb8, 0xf6, - 0xbe, 0x57, 0x3f, 0x22, 0x3e, 0xfb, 0x44, 0x16, 0x16, 0x67, 0xd6, 0x41, 0xa5, 0x1a, 0xbf, 0x45, - 0x17, 0x88, 0xd3, 0xd0, 0x2f, 0x08, 0xd7, 0x05, 0x0c, 0x2b, 0x2c, 0xdd, 0xef, 0x4e, 0xa8, 0xc6, - 0x5f, 0x25, 0x7b, 0x55, 0xd2, 0x20, 0xb5, 0xc8, 0x0f, 0xbe, 0xa5, 0xcd, 0x3f, 0xc3, 0x7b, 0x9f, - 0x6f, 0x97, 0xc3, 0x82, 0x41, 0xf1, 0x2a, 0xd9, 0xe3, 0x43, 0xa1, 0x7f, 0x5d, 0xb1, 0xe3, 0xd7, - 0x7d, 0xc5, 0x82, 0x51, 0xf5, 0x75, 0xc7, 0xb0, 0x2f, 0xcc, 0x9b, 0xfb, 0xc2, 0x99, 0x8e, 0x13, - 0x3c, 0x67, 0x47, 0xf8, 0x5a, 0x01, 0x4e, 0x2b, 0x1a, 0x7a, 0x9b, 0xe1, 0x7f, 0xc4, 0xac, 0xba, - 0x08, 0x25, 0x4f, 0xe9, 0xf5, 0x2c, 0x53, 0xa1, 0x16, 0x6b, 0xf5, 0x62, 0x1a, 0x2a, 0x94, 0x7a, - 0xf1, 0x31, 0x3b, 0xa2, 0x2b, 0xbc, 0x85, 0x72, 0x7b, 0x1e, 0x8a, 0x6d, 0xb7, 0x2e, 0x0e, 0x98, - 0xf7, 0xc9, 0xde, 0xbe, 0xb1, 0xb2, 0x78, 0xb0, 0x5f, 0x7e, 0x34, 0xcf, 0xd8, 0x42, 0x4f, 0xb6, - 0x70, 0xf6, 0xc6, 0xca, 0x22, 0xa6, 0x85, 0xd1, 0x1c, 0x8c, 0xcb, 0x13, 0xfa, 0x26, 0x15, 0x10, - 0x7d, 0x4f, 0x9c, 0x43, 0x4a, 0x6b, 0x8d, 0x4d, 0x34, 0x4e, 0xd2, 0xa3, 0x45, 0x98, 0xd8, 0x69, - 0x6f, 0x90, 0x06, 0x89, 0xf8, 0x07, 0x5f, 0x25, 0x5c, 0xa7, 0x5b, 0x8a, 0xef, 0x92, 0x57, 0x13, - 0x78, 0x9c, 0x2a, 0x61, 0xff, 0x39, 0x3b, 0x0f, 0x44, 0xef, 0x55, 0x02, 0x9f, 0x4e, 0x2c, 0xca, - 0xfd, 0x5b, 0x39, 0x9d, 0x7b, 0x99, 0x15, 0x57, 
0xc9, 0xde, 0xba, 0x4f, 0xef, 0x12, 0xd9, 0xb3, - 0xc2, 0x98, 0xf3, 0x7d, 0x1d, 0xe7, 0xfc, 0xcf, 0x17, 0xe0, 0xa4, 0xea, 0x01, 0x43, 0x6c, 0xfd, - 0x76, 0xef, 0x83, 0x4b, 0x30, 0x5c, 0x27, 0x9b, 0x4e, 0xbb, 0x11, 0x29, 0x03, 0x43, 0x3f, 0x37, - 0x32, 0x2d, 0xc6, 0x60, 0xac, 0xd3, 0x1c, 0xa2, 0xdb, 0xfe, 0xfd, 0x08, 0x3b, 0x88, 0x23, 0x87, - 0xce, 0x71, 0xb5, 0x6a, 0xac, 0xdc, 0x55, 0xf3, 0x18, 0xf4, 0xbb, 0x4d, 0x2a, 0x98, 0x15, 0x4c, - 0x79, 0x6b, 0x85, 0x02, 0x31, 0xc7, 0xa1, 0x27, 0x60, 0xb0, 0xe6, 0x37, 0x9b, 0x8e, 0x57, 0x67, - 0x47, 0x5e, 0x69, 0x7e, 0x98, 0xca, 0x6e, 0x0b, 0x1c, 0x84, 0x25, 0x0e, 0x3d, 0x02, 0x7d, 0x4e, - 0xb0, 0xc5, 0xb5, 0x2e, 0xa5, 0xf9, 0x21, 0x5a, 0xd3, 0x5c, 0xb0, 0x15, 0x62, 0x06, 0xa5, 0x97, - 0xc6, 0xdb, 0x7e, 0xb0, 0xe3, 0x7a, 0x5b, 0x8b, 0x6e, 0x20, 0x96, 0x84, 0x3a, 0x0b, 0x6f, 0x29, - 0x0c, 0xd6, 0xa8, 0xd0, 0x32, 0xf4, 0xb7, 0xfc, 0x20, 0x0a, 0xa7, 0x07, 0x58, 0x77, 0x3f, 0x9a, - 0xb3, 0x11, 0xf1, 0xaf, 0xad, 0xf8, 0x41, 0x14, 0x7f, 0x00, 0xfd, 0x17, 0x62, 0x5e, 0x1c, 0x5d, - 0x83, 0x41, 0xe2, 0xed, 0x2e, 0x07, 0x7e, 0x73, 0x7a, 0x2a, 0x9f, 0xd3, 0x12, 0x27, 0xe1, 0xd3, - 0x2c, 0x96, 0x51, 0x05, 0x18, 0x4b, 0x16, 0xe8, 0x83, 0x50, 0x24, 0xde, 0xee, 0xf4, 0x20, 0xe3, - 0x34, 0x93, 0xc3, 0xe9, 0xa6, 0x13, 0xc4, 0x7b, 0xfe, 0x92, 0xb7, 0x8b, 0x69, 0x19, 0xf4, 0x31, - 0x28, 0xc9, 0x0d, 0x23, 0x14, 0xea, 0xcc, 0xcc, 0x09, 0x2b, 0xb7, 0x19, 0x4c, 0xde, 0x6a, 0xbb, - 0x01, 0x69, 0x12, 0x2f, 0x0a, 0xe3, 0x1d, 0x52, 0x62, 0x43, 0x1c, 0x73, 0x43, 0x35, 0x18, 0x09, - 0x48, 0xe8, 0xde, 0x25, 0x15, 0xbf, 0xe1, 0xd6, 0xf6, 0xa6, 0x1f, 0x62, 0xcd, 0x7b, 0xaa, 0x63, - 0x97, 0x61, 0xad, 0x40, 0xac, 0x6e, 0xd7, 0xa1, 0xd8, 0x60, 0x8a, 0x3e, 0x26, 0x15, 0xf5, 0xab, - 0x7e, 0xdb, 0x8b, 0xc2, 0xe9, 0x12, 0xab, 0x24, 0xd3, 0x84, 0x7a, 0x33, 0xa6, 0x4b, 0x6a, 0xf2, - 0x79, 0x61, 0x6c, 0xb0, 0x42, 0x9f, 0x80, 0x51, 0xfe, 0x9f, 0x1b, 0x22, 0xc3, 0xe9, 0x93, 0x8c, - 0xf7, 0xb9, 0x7c, 0xde, 0x9c, 0x70, 0xfe, 0xa4, 0x60, 0x3e, 0xaa, 0x43, 0x43, 0x6c, 
0x72, 0x43, - 0x18, 0x46, 0x1b, 0xee, 0x2e, 0xf1, 0x48, 0x18, 0x56, 0x02, 0x7f, 0x83, 0x08, 0xbd, 0xea, 0xe9, - 0x6c, 0xc3, 0xa5, 0xbf, 0x41, 0xe6, 0x27, 0x29, 0xcf, 0x6b, 0x7a, 0x19, 0x6c, 0xb2, 0x40, 0x37, - 0x60, 0x8c, 0x5e, 0x64, 0xdd, 0x98, 0xe9, 0x70, 0x37, 0xa6, 0xec, 0xf2, 0x86, 0x8d, 0x42, 0x38, - 0xc1, 0x04, 0x5d, 0x87, 0x91, 0x30, 0x72, 0x82, 0xa8, 0xdd, 0xe2, 0x4c, 0x4f, 0x75, 0x63, 0xca, - 0xec, 0xde, 0x55, 0xad, 0x08, 0x36, 0x18, 0xa0, 0xd7, 0xa0, 0xd4, 0x70, 0x37, 0x49, 0x6d, 0xaf, - 0xd6, 0x20, 0xd3, 0x23, 0x8c, 0x5b, 0xe6, 0xce, 0x75, 0x4d, 0x12, 0x71, 0x61, 0x5a, 0xfd, 0xc5, - 0x71, 0x71, 0x74, 0x13, 0x4e, 0x45, 0x24, 0x68, 0xba, 0x9e, 0x43, 0x77, 0x1c, 0x71, 0x7f, 0x63, - 0xf6, 0xe4, 0x51, 0xb6, 0xa4, 0xcf, 0x8a, 0xd1, 0x38, 0xb5, 0x9e, 0x49, 0x85, 0x73, 0x4a, 0xa3, - 0x3b, 0x30, 0x9d, 0x81, 0xe1, 0x53, 0xf9, 0x04, 0xe3, 0xfc, 0x21, 0xc1, 0x79, 0x7a, 0x3d, 0x87, - 0xee, 0xa0, 0x03, 0x0e, 0xe7, 0x72, 0x47, 0xd7, 0x61, 0x9c, 0x6d, 0x73, 0x95, 0x76, 0xa3, 0x21, - 0x2a, 0x1c, 0x63, 0x15, 0x3e, 0x21, 0x0f, 0xfd, 0x15, 0x13, 0x7d, 0xb0, 0x5f, 0x86, 0xf8, 0x1f, - 0x4e, 0x96, 0x46, 0x1b, 0xcc, 0x74, 0xd9, 0x0e, 0xdc, 0x68, 0x8f, 0xae, 0x34, 0x72, 0x27, 0x9a, - 0x1e, 0xef, 0xa8, 0xc6, 0xd1, 0x49, 0x95, 0x7d, 0x53, 0x07, 0xe2, 0x24, 0x43, 0xba, 0x6f, 0x87, - 0x51, 0xdd, 0xf5, 0xa6, 0x27, 0xf8, 0xe5, 0x47, 0x6e, 0x7b, 0x55, 0x0a, 0xc4, 0x1c, 0xc7, 0xcc, - 0x96, 0xf4, 0xc7, 0x75, 0x7a, 0x3c, 0x4e, 0x32, 0xc2, 0xd8, 0x6c, 0x29, 0x11, 0x38, 0xa6, 0xa1, - 0x12, 0x6b, 0x14, 0xed, 0x4d, 0x23, 0x46, 0xaa, 0x76, 0xaf, 0xf5, 0xf5, 0x8f, 0x61, 0x0a, 0xb7, - 0x37, 0x60, 0x4c, 0x6d, 0x1d, 0xac, 0x4f, 0x50, 0x19, 0xfa, 0x99, 0x8c, 0x26, 0x94, 0x8e, 0x25, - 0xda, 0x04, 0x26, 0xbf, 0x61, 0x0e, 0x67, 0x4d, 0x70, 0xef, 0x92, 0xf9, 0xbd, 0x88, 0x70, 0xc5, - 0x41, 0x51, 0x6b, 0x82, 0x44, 0xe0, 0x98, 0xc6, 0xfe, 0xbf, 0x5c, 0xd6, 0x8d, 0xb7, 0xf4, 0x1e, - 0x0e, 0xb1, 0x67, 0x60, 0x68, 0xdb, 0x0f, 0x23, 0x4a, 0xcd, 0xea, 0xe8, 0x8f, 0xa5, 0xdb, 0x2b, - 0x02, 0x8e, 0x15, 0x05, 
0x7a, 0x05, 0x46, 0x6b, 0x7a, 0x05, 0xe2, 0x04, 0x56, 0xdb, 0x88, 0x51, - 0x3b, 0x36, 0x69, 0xd1, 0x4b, 0x30, 0xc4, 0x5c, 0x71, 0x6a, 0x7e, 0x43, 0x88, 0x86, 0x52, 0x8c, - 0x18, 0xaa, 0x08, 0xf8, 0x81, 0xf6, 0x1b, 0x2b, 0x6a, 0x74, 0x1e, 0x06, 0x68, 0x13, 0x56, 0x2a, - 0xe2, 0xec, 0x53, 0xfa, 0xb3, 0x2b, 0x0c, 0x8a, 0x05, 0xd6, 0xfe, 0x55, 0x8b, 0x09, 0x3e, 0xe9, - 0x0d, 0x1a, 0x5d, 0x61, 0x3b, 0x3c, 0xdb, 0xee, 0x35, 0xfd, 0xd5, 0xe3, 0xda, 0xb6, 0xad, 0x70, - 0x07, 0x89, 0xff, 0xd8, 0x28, 0x89, 0xde, 0x80, 0xd1, 0x80, 0xb0, 0x2d, 0x42, 0x4c, 0x78, 0x7e, - 0xfa, 0xbf, 0x20, 0xbb, 0x00, 0xeb, 0xc8, 0x83, 0xfd, 0xf2, 0xc3, 0xf1, 0x79, 0x44, 0xdb, 0x63, - 0xa0, 0xb1, 0xc9, 0xca, 0xfe, 0xcb, 0x05, 0x6d, 0x96, 0x54, 0x23, 0x27, 0x22, 0xa8, 0x02, 0x83, - 0xb7, 0x1d, 0x37, 0x72, 0xbd, 0x2d, 0x21, 0xa4, 0x75, 0x3e, 0x95, 0x58, 0xa1, 0x5b, 0xbc, 0x00, - 0x17, 0x35, 0xc4, 0x1f, 0x2c, 0xd9, 0x50, 0x8e, 0x41, 0xdb, 0xf3, 0x28, 0xc7, 0x42, 0xaf, 0x1c, - 0x31, 0x2f, 0xc0, 0x39, 0x8a, 0x3f, 0x58, 0xb2, 0x41, 0x6f, 0x02, 0xc8, 0x1d, 0x82, 0xd4, 0x85, - 0x0b, 0xcf, 0x33, 0xdd, 0x99, 0xae, 0xab, 0x32, 0xf3, 0x63, 0x54, 0x90, 0x89, 0xff, 0x63, 0x8d, - 0x9f, 0x1d, 0x69, 0x63, 0xaa, 0x37, 0x06, 0x7d, 0x9c, 0x2e, 0x51, 0x27, 0x88, 0x48, 0x7d, 0x2e, - 0x12, 0x9d, 0xf3, 0xde, 0xde, 0x6e, 0x72, 0xeb, 0x6e, 0x93, 0xe8, 0xcb, 0x59, 0x30, 0xc1, 0x31, - 0x3f, 0xfb, 0x17, 0x8b, 0x30, 0x9d, 0xd7, 0x5c, 0xba, 0x68, 0xc8, 0x1d, 0x37, 0x5a, 0xa0, 0x32, - 0xa8, 0x65, 0x2e, 0x9a, 0x25, 0x01, 0xc7, 0x8a, 0x82, 0xce, 0xde, 0xd0, 0xdd, 0x92, 0x17, 0xf1, - 0xfe, 0x78, 0xf6, 0x56, 0x19, 0x14, 0x0b, 0x2c, 0xa5, 0x0b, 0x88, 0x13, 0x0a, 0x1f, 0x31, 0x6d, - 0x96, 0x63, 0x06, 0xc5, 0x02, 0xab, 0xab, 0x04, 0xfb, 0xba, 0xa8, 0x04, 0x8d, 0x2e, 0xea, 0x3f, - 0xda, 0x2e, 0x42, 0x9f, 0x04, 0xd8, 0x74, 0x3d, 0x37, 0xdc, 0x66, 0xdc, 0x07, 0x0e, 0xcd, 0x5d, - 0x49, 0xb0, 0xcb, 0x8a, 0x0b, 0xd6, 0x38, 0xa2, 0x17, 0x61, 0x58, 0x6d, 0x20, 0x2b, 0x8b, 0xcc, - 0x60, 0xae, 0x39, 0x20, 0xc5, 0xbb, 0xe9, 0x22, 0xd6, 0xe9, 
0xec, 0x4f, 0x27, 0xe7, 0x8b, 0x58, - 0x01, 0x5a, 0xff, 0x5a, 0xbd, 0xf6, 0x6f, 0xa1, 0x73, 0xff, 0xda, 0xdf, 0x18, 0x80, 0x71, 0xa3, - 0xb2, 0x76, 0xd8, 0xc3, 0x9e, 0x7b, 0x99, 0x1e, 0x40, 0x4e, 0x44, 0xc4, 0xfa, 0xb3, 0xbb, 0x2f, - 0x15, 0xfd, 0x90, 0xa2, 0x2b, 0x80, 0x97, 0x47, 0x9f, 0x84, 0x52, 0xc3, 0x09, 0x99, 0x7a, 0x91, - 0x88, 0x75, 0xd7, 0x0b, 0xb3, 0xf8, 0xf6, 0xe6, 0x84, 0x91, 0x76, 0xea, 0x73, 0xde, 0x31, 0x4b, - 0x7a, 0x52, 0x52, 0xf9, 0x4a, 0x3a, 0x21, 0xaa, 0x46, 0x50, 0x21, 0x6c, 0x0f, 0x73, 0x1c, 0x7a, - 0x89, 0x6d, 0xad, 0x74, 0x56, 0x2c, 0x50, 0x69, 0x94, 0x4d, 0xb3, 0x7e, 0x43, 0x22, 0x56, 0x38, - 0x6c, 0x50, 0xc6, 0x17, 0xa8, 0x81, 0x0e, 0x17, 0xa8, 0xa7, 0x60, 0x90, 0xfd, 0x50, 0x33, 0x40, - 0x8d, 0xc6, 0x0a, 0x07, 0x63, 0x89, 0x4f, 0x4e, 0x98, 0xa1, 0xde, 0x26, 0x0c, 0xbd, 0xa2, 0x89, - 0x49, 0xcd, 0x9c, 0x15, 0x86, 0xf8, 0x2e, 0x27, 0xa6, 0x3c, 0x96, 0x38, 0xf4, 0xb3, 0x16, 0x20, - 0xa7, 0x41, 0xaf, 0xb6, 0x14, 0xac, 0x6e, 0x22, 0xc0, 0x44, 0xed, 0x57, 0xba, 0x76, 0x7b, 0x3b, - 0x9c, 0x9d, 0x4b, 0x95, 0xe6, 0x6a, 0xcd, 0x97, 0x45, 0x13, 0x51, 0x9a, 0x40, 0x3f, 0x8c, 0xae, - 0xb9, 0x61, 0xf4, 0xb9, 0xff, 0x98, 0x38, 0x9c, 0x32, 0x9a, 0x84, 0x6e, 0xe8, 0x37, 0xa5, 0xe1, - 0x43, 0xde, 0x94, 0x46, 0xf3, 0x6e, 0x49, 0x33, 0x6d, 0x78, 0x28, 0xe7, 0x0b, 0x32, 0x94, 0xa5, - 0x8b, 0xba, 0xb2, 0xb4, 0x8b, 0x8a, 0x6d, 0x56, 0xd6, 0x31, 0xfb, 0x7a, 0xdb, 0xf1, 0x22, 0x37, - 0xda, 0xd3, 0x95, 0xab, 0xef, 0x85, 0xb1, 0x45, 0x87, 0x34, 0x7d, 0x6f, 0xc9, 0xab, 0xb7, 0x7c, - 0xd7, 0x8b, 0xd0, 0x34, 0xf4, 0x31, 0xe1, 0x83, 0x6f, 0xbd, 0x7d, 0xb4, 0xf7, 0x30, 0x83, 0xd8, - 0x5b, 0x70, 0x72, 0xd1, 0xbf, 0xed, 0xdd, 0x76, 0x82, 0xfa, 0x5c, 0x65, 0x45, 0x53, 0xfe, 0xac, - 0x49, 0xe5, 0x83, 0x95, 0x7f, 0xb5, 0xd3, 0x4a, 0xf2, 0xeb, 0xd0, 0xb2, 0xdb, 0x20, 0x39, 0x2a, - 0xba, 0xbf, 0x56, 0x30, 0x6a, 0x8a, 0xe9, 0x95, 0x91, 0xd8, 0xca, 0x35, 0x12, 0xbf, 0x0e, 0x43, - 0x9b, 0x2e, 0x69, 0xd4, 0x31, 0xd9, 0x14, 0xbd, 0xf3, 0x64, 0xbe, 0x1b, 0xd9, 0x32, 0xa5, 0x94, - 
0x2a, 0x59, 0xae, 0xba, 0x58, 0x16, 0x85, 0xb1, 0x62, 0x83, 0x76, 0x60, 0x42, 0xf6, 0xa1, 0xc4, - 0x8a, 0xfd, 0xe0, 0xa9, 0x4e, 0x03, 0x6f, 0x32, 0x3f, 0x71, 0x6f, 0xbf, 0x3c, 0x81, 0x13, 0x6c, - 0x70, 0x8a, 0x31, 0x7a, 0x04, 0xfa, 0x9a, 0xf4, 0xe4, 0xeb, 0x63, 0xdd, 0xcf, 0x74, 0x15, 0x4c, - 0xed, 0xc2, 0xa0, 0xf6, 0x8f, 0x5b, 0xf0, 0x50, 0xaa, 0x67, 0x84, 0xfa, 0xe9, 0x88, 0x47, 0x21, - 0xa9, 0x0e, 0x2a, 0x74, 0x57, 0x07, 0xd9, 0x7f, 0xdb, 0x82, 0x13, 0x4b, 0xcd, 0x56, 0xb4, 0xb7, - 0xe8, 0x9a, 0x16, 0xdd, 0x0f, 0xc0, 0x40, 0x93, 0xd4, 0xdd, 0x76, 0x53, 0x8c, 0x5c, 0x59, 0x9e, - 0x0e, 0xab, 0x0c, 0x7a, 0xb0, 0x5f, 0x1e, 0xad, 0x46, 0x7e, 0xe0, 0x6c, 0x11, 0x0e, 0xc0, 0x82, - 0x9c, 0x9d, 0xb1, 0xee, 0x5d, 0x72, 0xcd, 0x6d, 0xba, 0xd1, 0xfd, 0xcd, 0x76, 0x61, 0x8c, 0x95, - 0x4c, 0x70, 0xcc, 0xcf, 0xfe, 0xba, 0x05, 0xe3, 0x72, 0xde, 0xcf, 0xd5, 0xeb, 0x01, 0x09, 0x43, - 0x34, 0x03, 0x05, 0xb7, 0x25, 0x5a, 0x09, 0xa2, 0x95, 0x85, 0x95, 0x0a, 0x2e, 0xb8, 0x2d, 0x29, - 0xce, 0xb3, 0x03, 0xa8, 0x68, 0xda, 0xa5, 0xaf, 0x08, 0x38, 0x56, 0x14, 0xe8, 0x02, 0x0c, 0x79, - 0x7e, 0x9d, 0x4b, 0xc4, 0x5c, 0x94, 0x60, 0x13, 0x6c, 0x4d, 0xc0, 0xb0, 0xc2, 0xa2, 0x0a, 0x94, - 0xb8, 0xd7, 0x62, 0x3c, 0x69, 0x7b, 0xf2, 0x7d, 0x64, 0x5f, 0xb6, 0x2e, 0x4b, 0xe2, 0x98, 0x89, - 0xfd, 0xeb, 0x16, 0x8c, 0xc8, 0x2f, 0xeb, 0xf1, 0xae, 0x42, 0x97, 0x56, 0x7c, 0x4f, 0x89, 0x97, - 0x16, 0xbd, 0x6b, 0x30, 0x8c, 0x71, 0xc5, 0x28, 0x1e, 0xea, 0x8a, 0x71, 0x09, 0x86, 0x9d, 0x56, - 0xab, 0x62, 0xde, 0x4f, 0xd8, 0x54, 0x9a, 0x8b, 0xc1, 0x58, 0xa7, 0xb1, 0x7f, 0xac, 0x00, 0x63, - 0xf2, 0x0b, 0xaa, 0xed, 0x8d, 0x90, 0x44, 0x68, 0x1d, 0x4a, 0x0e, 0x1f, 0x25, 0x22, 0x27, 0xf9, - 0x63, 0xd9, 0x4a, 0x2e, 0x63, 0x48, 0x63, 0x41, 0x6b, 0x4e, 0x96, 0xc6, 0x31, 0x23, 0xd4, 0x80, - 0x49, 0xcf, 0x8f, 0xd8, 0xa1, 0xab, 0xf0, 0x9d, 0xec, 0x8e, 0x49, 0xee, 0xa7, 0x05, 0xf7, 0xc9, - 0xb5, 0x24, 0x17, 0x9c, 0x66, 0x8c, 0x96, 0xa4, 0xe2, 0xb0, 0x98, 0xaf, 0x44, 0xd2, 0x07, 0x2e, - 0x5b, 0x6f, 0x68, 0xff, 0x8a, 0x05, 
0x25, 0x49, 0x76, 0x1c, 0x26, 0xe6, 0x55, 0x18, 0x0c, 0xd9, - 0x20, 0xc8, 0xae, 0xb1, 0x3b, 0x35, 0x9c, 0x8f, 0x57, 0x2c, 0x4b, 0xf0, 0xff, 0x21, 0x96, 0x3c, - 0x98, 0xdd, 0x48, 0x35, 0xff, 0x1d, 0x62, 0x37, 0x52, 0xed, 0xc9, 0x39, 0x94, 0xfe, 0x88, 0xb5, - 0x59, 0x53, 0xc4, 0x52, 0x91, 0xb7, 0x15, 0x90, 0x4d, 0xf7, 0x4e, 0x52, 0xe4, 0xad, 0x30, 0x28, - 0x16, 0x58, 0xf4, 0x26, 0x8c, 0xd4, 0xa4, 0xc1, 0x20, 0x5e, 0xe1, 0xe7, 0x3b, 0x1a, 0xaf, 0x94, - 0x9d, 0x93, 0xeb, 0xd0, 0x16, 0xb4, 0xf2, 0xd8, 0xe0, 0x66, 0x7a, 0xe5, 0x14, 0xbb, 0x79, 0xe5, - 0xc4, 0x7c, 0xf3, 0x7d, 0x54, 0x7e, 0xc2, 0x82, 0x01, 0xae, 0x28, 0xee, 0x4d, 0x4f, 0xaf, 0x99, - 0x7d, 0xe3, 0xbe, 0xbb, 0x49, 0x81, 0x42, 0xd2, 0x40, 0xab, 0x50, 0x62, 0x3f, 0x98, 0xa2, 0xbb, - 0x98, 0xff, 0x68, 0x86, 0xd7, 0xaa, 0x37, 0xf0, 0xa6, 0x2c, 0x86, 0x63, 0x0e, 0xf6, 0x8f, 0x16, - 0xe9, 0xee, 0x16, 0x93, 0x1a, 0x87, 0xbe, 0xf5, 0xe0, 0x0e, 0xfd, 0xc2, 0x83, 0x3a, 0xf4, 0xb7, - 0x60, 0xbc, 0xa6, 0x19, 0x89, 0xe3, 0x91, 0xbc, 0xd0, 0x71, 0x92, 0x68, 0xf6, 0x64, 0xae, 0x9d, - 0x5b, 0x30, 0x99, 0xe0, 0x24, 0x57, 0xf4, 0x71, 0x18, 0xe1, 0xe3, 0x2c, 0x6a, 0xe1, 0x8e, 0x4d, - 0x4f, 0xe4, 0xcf, 0x17, 0xbd, 0x0a, 0xae, 0xcd, 0xd5, 0x8a, 0x63, 0x83, 0x99, 0xfd, 0xa7, 0x16, - 0xa0, 0xa5, 0xd6, 0x36, 0x69, 0x92, 0xc0, 0x69, 0xc4, 0xb6, 0x9e, 0x1f, 0xb2, 0x60, 0x9a, 0xa4, - 0xc0, 0x0b, 0x7e, 0xb3, 0x29, 0x2e, 0x8b, 0x39, 0xfa, 0x8c, 0xa5, 0x9c, 0x32, 0xea, 0x55, 0xd1, - 0x74, 0x1e, 0x05, 0xce, 0xad, 0x0f, 0xad, 0xc2, 0x14, 0x3f, 0x25, 0x15, 0x42, 0x73, 0x92, 0x7a, - 0x58, 0x30, 0x9e, 0x5a, 0x4f, 0x93, 0xe0, 0xac, 0x72, 0xf6, 0x37, 0x47, 0x20, 0xb7, 0x15, 0xef, - 0x1a, 0xb9, 0xde, 0x35, 0x72, 0xbd, 0x6b, 0xe4, 0x7a, 0xd7, 0xc8, 0xf5, 0xae, 0x91, 0xeb, 0x5d, - 0x23, 0xd7, 0x51, 0x18, 0xb9, 0xfe, 0x8a, 0x05, 0x27, 0xd5, 0x59, 0x63, 0xdc, 0xae, 0x3f, 0x03, - 0x53, 0x7c, 0xb9, 0x19, 0xde, 0xbb, 0xe2, 0x6c, 0xbd, 0x94, 0x39, 0x73, 0x13, 0x5e, 0xe6, 0x46, - 0x41, 0xfe, 0x5c, 0x27, 0x03, 0x81, 0xb3, 0xaa, 0xb1, 0x7f, 0x71, 0x08, 
0xfa, 0x97, 0x76, 0x89, - 0x17, 0x1d, 0xc3, 0x3d, 0xa4, 0x06, 0x63, 0xae, 0xb7, 0xeb, 0x37, 0x76, 0x49, 0x9d, 0xe3, 0x0f, - 0x73, 0x5d, 0x3e, 0x25, 0x58, 0x8f, 0xad, 0x18, 0x2c, 0x70, 0x82, 0xe5, 0x83, 0x30, 0x15, 0x5c, - 0x86, 0x01, 0x7e, 0x52, 0x08, 0x3b, 0x41, 0xe6, 0x9e, 0xcd, 0x3a, 0x51, 0x9c, 0x7f, 0xb1, 0x19, - 0x83, 0x9f, 0x44, 0xa2, 0x38, 0xfa, 0x34, 0x8c, 0x6d, 0xba, 0x41, 0x18, 0xad, 0xbb, 0x4d, 0x12, - 0x46, 0x4e, 0xb3, 0x75, 0x1f, 0xa6, 0x01, 0xd5, 0x0f, 0xcb, 0x06, 0x27, 0x9c, 0xe0, 0x8c, 0xb6, - 0x60, 0xb4, 0xe1, 0xe8, 0x55, 0x0d, 0x1e, 0xba, 0x2a, 0x75, 0x3a, 0x5c, 0xd3, 0x19, 0x61, 0x93, - 0x2f, 0x5d, 0x4e, 0x35, 0xa6, 0xdd, 0x1e, 0x62, 0xba, 0x07, 0xb5, 0x9c, 0xb8, 0x5a, 0x9b, 0xe3, - 0xa8, 0x34, 0xc5, 0x5c, 0xc4, 0x4b, 0xa6, 0x34, 0xa5, 0x39, 0x82, 0x7f, 0x0a, 0x4a, 0x84, 0x76, - 0x21, 0x65, 0x2c, 0x0e, 0x98, 0x8b, 0xbd, 0xb5, 0x75, 0xd5, 0xad, 0x05, 0xbe, 0x69, 0x94, 0x59, - 0x92, 0x9c, 0x70, 0xcc, 0x14, 0x2d, 0xc0, 0x40, 0x48, 0x02, 0x57, 0x29, 0x7e, 0x3b, 0x0c, 0x23, - 0x23, 0xe3, 0xef, 0xc1, 0xf8, 0x6f, 0x2c, 0x8a, 0xd2, 0xe9, 0xe5, 0x30, 0xbd, 0x29, 0x3b, 0x0c, - 0xb4, 0xe9, 0x35, 0xc7, 0xa0, 0x58, 0x60, 0xd1, 0x6b, 0x30, 0x18, 0x90, 0x06, 0xb3, 0xfa, 0x8d, - 0xf6, 0x3e, 0xc9, 0xb9, 0x11, 0x91, 0x97, 0xc3, 0x92, 0x01, 0xba, 0x0a, 0x28, 0x20, 0x54, 0x1a, - 0x73, 0xbd, 0x2d, 0xe5, 0x38, 0x2d, 0x36, 0x5a, 0x25, 0xf5, 0xe2, 0x98, 0x42, 0x3e, 0x05, 0xc4, - 0x19, 0xc5, 0xd0, 0x65, 0x98, 0x54, 0xd0, 0x15, 0x2f, 0x8c, 0x1c, 0xba, 0xc1, 0x8d, 0x33, 0x5e, - 0x4a, 0x19, 0x82, 0x93, 0x04, 0x38, 0x5d, 0xc6, 0xfe, 0x92, 0x05, 0xbc, 0x9f, 0x8f, 0x41, 0x05, - 0xf0, 0xaa, 0xa9, 0x02, 0x38, 0x9d, 0x3b, 0x72, 0x39, 0xd7, 0xff, 0x2f, 0x59, 0x30, 0xac, 0x8d, - 0x6c, 0x3c, 0x67, 0xad, 0x0e, 0x73, 0xb6, 0x0d, 0x13, 0x74, 0xa6, 0x5f, 0xdf, 0x08, 0x49, 0xb0, - 0x4b, 0xea, 0x6c, 0x62, 0x16, 0xee, 0x6f, 0x62, 0x2a, 0x27, 0xcd, 0x6b, 0x09, 0x86, 0x38, 0x55, - 0x85, 0xfd, 0x29, 0xd9, 0x54, 0xe5, 0xd3, 0x5a, 0x53, 0x63, 0x9e, 0xf0, 0x69, 0x55, 0xa3, 0x8a, - 0x63, 0x1a, 
0xba, 0xd4, 0xb6, 0xfd, 0x30, 0x4a, 0xfa, 0xb4, 0x5e, 0xf1, 0xc3, 0x08, 0x33, 0x8c, - 0xfd, 0x3c, 0xc0, 0xd2, 0x1d, 0x52, 0xe3, 0x33, 0x56, 0xbf, 0xa1, 0x58, 0xf9, 0x37, 0x14, 0xfb, - 0x77, 0x2c, 0x18, 0x5b, 0x5e, 0x30, 0x4e, 0xae, 0x59, 0x00, 0x7e, 0xad, 0xba, 0x75, 0x6b, 0x4d, - 0xfa, 0x6a, 0x70, 0x73, 0xb5, 0x82, 0x62, 0x8d, 0x02, 0x9d, 0x86, 0x62, 0xa3, 0xed, 0x09, 0x1d, - 0xe5, 0x20, 0x3d, 0x1e, 0xaf, 0xb5, 0x3d, 0x4c, 0x61, 0xda, 0x33, 0xa0, 0x62, 0xcf, 0xcf, 0x80, - 0xba, 0x86, 0xff, 0x40, 0x65, 0xe8, 0xbf, 0x7d, 0xdb, 0xad, 0xf3, 0x47, 0xd6, 0xc2, 0x8f, 0xe4, - 0xd6, 0xad, 0x95, 0xc5, 0x10, 0x73, 0xb8, 0xfd, 0x85, 0x22, 0xcc, 0x2c, 0x37, 0xc8, 0x9d, 0xb7, - 0xf9, 0xd0, 0xbc, 0xd7, 0x47, 0x4c, 0x87, 0xd3, 0xf6, 0x1c, 0xf6, 0xa1, 0x5a, 0xf7, 0xfe, 0xd8, - 0x84, 0x41, 0xee, 0xd2, 0x29, 0x9f, 0x9d, 0x67, 0xda, 0xe6, 0xf2, 0x3b, 0x64, 0x96, 0xbb, 0x86, - 0x0a, 0xdb, 0x9c, 0x3a, 0x30, 0x05, 0x14, 0x4b, 0xe6, 0x33, 0x2f, 0xc3, 0x88, 0x4e, 0x79, 0xa8, - 0x27, 0xa3, 0xdf, 0x53, 0x84, 0x09, 0xda, 0x82, 0x07, 0x3a, 0x10, 0x37, 0xd2, 0x03, 0x71, 0xd4, - 0xcf, 0x06, 0xbb, 0x8f, 0xc6, 0x9b, 0xc9, 0xd1, 0xb8, 0x94, 0x37, 0x1a, 0xc7, 0x3d, 0x06, 0xdf, - 0x6b, 0xc1, 0xd4, 0x72, 0xc3, 0xaf, 0xed, 0x24, 0x9e, 0xf6, 0xbd, 0x08, 0xc3, 0x74, 0x3b, 0x0e, - 0x8d, 0x28, 0x17, 0x46, 0xdc, 0x13, 0x81, 0xc2, 0x3a, 0x9d, 0x56, 0xec, 0xc6, 0x8d, 0x95, 0xc5, - 0xac, 0x70, 0x29, 0x02, 0x85, 0x75, 0x3a, 0xfb, 0xb7, 0x2c, 0x38, 0x73, 0x79, 0x61, 0x29, 0x9e, - 0x8a, 0xa9, 0x88, 0x2d, 0xe7, 0x61, 0xa0, 0x55, 0xd7, 0x9a, 0x12, 0xeb, 0x70, 0x17, 0x59, 0x2b, - 0x04, 0xf6, 0x9d, 0x12, 0x8d, 0xe8, 0x06, 0xc0, 0x65, 0x5c, 0x59, 0x10, 0xfb, 0xae, 0x34, 0xd9, - 0x58, 0xb9, 0x26, 0x9b, 0x27, 0x60, 0x90, 0x9e, 0x0b, 0x6e, 0x4d, 0xb6, 0x9b, 0x5b, 0xdf, 0x39, - 0x08, 0x4b, 0x9c, 0xfd, 0x73, 0x16, 0x4c, 0x5d, 0x76, 0x23, 0x7a, 0x68, 0x27, 0x43, 0x92, 0xd0, - 0x53, 0x3b, 0x74, 0x23, 0x3f, 0xd8, 0x4b, 0x86, 0x24, 0xc1, 0x0a, 0x83, 0x35, 0x2a, 0xfe, 0x41, - 0xbb, 0x2e, 0x7b, 0xa3, 0x50, 0x30, 0x8d, 0x64, 
0x58, 0xc0, 0xb1, 0xa2, 0xa0, 0xfd, 0x55, 0x77, - 0x03, 0xa6, 0x5f, 0xdc, 0x13, 0x1b, 0xb7, 0xea, 0xaf, 0x45, 0x89, 0xc0, 0x31, 0x8d, 0xfd, 0xc7, - 0x16, 0x94, 0x2f, 0x37, 0xda, 0x61, 0x44, 0x82, 0xcd, 0x30, 0x67, 0xd3, 0x7d, 0x1e, 0x4a, 0x44, - 0x6a, 0xf3, 0xe5, 0x63, 0x4a, 0x29, 0x88, 0x2a, 0x35, 0x3f, 0x8f, 0x8c, 0xa2, 0xe8, 0x7a, 0x78, - 0x7f, 0x7c, 0xb8, 0x07, 0xa4, 0xcb, 0x80, 0x88, 0x5e, 0x97, 0x1e, 0x2a, 0x86, 0xc5, 0x9c, 0x58, - 0x4a, 0x61, 0x71, 0x46, 0x09, 0xfb, 0xc7, 0x2d, 0x38, 0xa9, 0x3e, 0xf8, 0x1d, 0xf7, 0x99, 0xf6, - 0x57, 0x0b, 0x30, 0x7a, 0x65, 0x7d, 0xbd, 0x72, 0x99, 0x44, 0xda, 0xac, 0xec, 0x6c, 0xa3, 0xc7, - 0x9a, 0xa9, 0xb1, 0xd3, 0x1d, 0xb1, 0x1d, 0xb9, 0x8d, 0x59, 0x1e, 0x71, 0x6c, 0x76, 0xc5, 0x8b, - 0xae, 0x07, 0xd5, 0x28, 0x70, 0xbd, 0xad, 0xcc, 0x99, 0x2e, 0x65, 0x96, 0x62, 0x9e, 0xcc, 0x82, - 0x9e, 0x87, 0x01, 0x16, 0xf2, 0x4c, 0x0e, 0xc2, 0xc3, 0xea, 0x8a, 0xc5, 0xa0, 0x07, 0xfb, 0xe5, - 0xd2, 0x0d, 0xbc, 0xc2, 0xff, 0x60, 0x41, 0x8a, 0x6e, 0xc0, 0xf0, 0x76, 0x14, 0xb5, 0xae, 0x10, - 0xa7, 0x4e, 0x02, 0xb9, 0xcb, 0x9e, 0xcd, 0xda, 0x65, 0x69, 0x27, 0x70, 0xb2, 0x78, 0x63, 0x8a, - 0x61, 0x21, 0xd6, 0xf9, 0xd8, 0x55, 0x80, 0x18, 0x77, 0x44, 0x56, 0x16, 0x7b, 0x1d, 0x4a, 0xf4, - 0x73, 0xe7, 0x1a, 0xae, 0xd3, 0xd9, 0x8e, 0xfd, 0x34, 0x94, 0xa4, 0x95, 0x3a, 0x14, 0xf1, 0x11, - 0xd8, 0x89, 0x24, 0x8d, 0xd8, 0x21, 0x8e, 0xf1, 0xf6, 0x26, 0x9c, 0x60, 0xbe, 0xaa, 0x4e, 0xb4, - 0x6d, 0xcc, 0xbe, 0xee, 0xc3, 0xfc, 0x8c, 0xb8, 0xb1, 0xf1, 0x36, 0x4f, 0x6b, 0x0f, 0x7a, 0x47, - 0x24, 0xc7, 0xf8, 0xf6, 0x66, 0x7f, 0xb3, 0x0f, 0x1e, 0x5e, 0xa9, 0xe6, 0x87, 0xec, 0x79, 0x09, - 0x46, 0xb8, 0x20, 0x48, 0x07, 0xdd, 0x69, 0x88, 0x7a, 0x95, 0x6e, 0x73, 0x5d, 0xc3, 0x61, 0x83, - 0x12, 0x9d, 0x81, 0xa2, 0xfb, 0x96, 0x97, 0x7c, 0xee, 0xb6, 0xf2, 0xfa, 0x1a, 0xa6, 0x70, 0x8a, - 0xa6, 0x32, 0x25, 0xdf, 0xac, 0x15, 0x5a, 0xc9, 0x95, 0xaf, 0xc2, 0x98, 0x1b, 0xd6, 0x42, 0x77, - 0xc5, 0xa3, 0x2b, 0x50, 0x5b, 0xc3, 0x4a, 0x9b, 0x40, 0x1b, 0xad, 0xb0, 0x38, 0x41, 
0xad, 0x9d, - 0x1c, 0xfd, 0x3d, 0xcb, 0xa5, 0x5d, 0x03, 0x06, 0xd0, 0x8d, 0xbd, 0xc5, 0xbe, 0x2e, 0x64, 0x9a, - 0x70, 0xb1, 0xb1, 0xf3, 0x0f, 0x0e, 0xb1, 0xc4, 0xd1, 0xab, 0x5a, 0x6d, 0xdb, 0x69, 0xcd, 0xb5, - 0xa3, 0xed, 0x45, 0x37, 0xac, 0xf9, 0xbb, 0x24, 0xd8, 0x63, 0xb7, 0xec, 0xa1, 0xf8, 0xaa, 0xa6, - 0x10, 0x0b, 0x57, 0xe6, 0x2a, 0x94, 0x12, 0xa7, 0xcb, 0xa0, 0x39, 0x18, 0x97, 0xc0, 0x2a, 0x09, - 0xd9, 0xe6, 0x3e, 0xcc, 0xd8, 0xa8, 0x07, 0x68, 0x02, 0xac, 0x98, 0x24, 0xe9, 0x4d, 0xd1, 0x15, - 0x8e, 0x42, 0x74, 0xfd, 0x00, 0x8c, 0xba, 0x9e, 0x1b, 0xb9, 0x4e, 0xe4, 0x73, 0x33, 0x0e, 0xbf, - 0x50, 0x33, 0xd5, 0xf1, 0x8a, 0x8e, 0xc0, 0x26, 0x9d, 0xfd, 0x9f, 0xfa, 0x60, 0x92, 0x0d, 0xdb, - 0xbb, 0x33, 0xec, 0x3b, 0x69, 0x86, 0xdd, 0x48, 0xcf, 0xb0, 0xa3, 0x90, 0xc9, 0xef, 0x7b, 0x9a, - 0x7d, 0x1a, 0x4a, 0xea, 0xcd, 0x9d, 0x7c, 0x74, 0x6b, 0xe5, 0x3c, 0xba, 0xed, 0x7e, 0x2e, 0x4b, - 0xcf, 0xb0, 0x62, 0xa6, 0x67, 0xd8, 0x97, 0x2d, 0x88, 0x4d, 0x06, 0xe8, 0x75, 0x28, 0xb5, 0x7c, - 0xe6, 0x68, 0x1a, 0x48, 0xef, 0xed, 0xc7, 0x3b, 0xda, 0x1c, 0x78, 0xd4, 0xb2, 0x80, 0xf7, 0x42, - 0x45, 0x16, 0xc5, 0x31, 0x17, 0x74, 0x15, 0x06, 0x5b, 0x01, 0xa9, 0x46, 0x2c, 0xa4, 0x4e, 0xef, - 0x0c, 0xf9, 0xac, 0xe1, 0x05, 0xb1, 0xe4, 0x60, 0xff, 0x67, 0x0b, 0x26, 0x92, 0xa4, 0xe8, 0x43, - 0xd0, 0x47, 0xee, 0x90, 0x9a, 0x68, 0x6f, 0xe6, 0x21, 0x1b, 0x2b, 0x1d, 0x78, 0x07, 0xd0, 0xff, - 0x98, 0x95, 0x42, 0x57, 0x60, 0x90, 0x9e, 0xb0, 0x97, 0x55, 0xf8, 0xb8, 0x47, 0xf3, 0x4e, 0x69, - 0x25, 0xaa, 0xf0, 0xc6, 0x09, 0x10, 0x96, 0xc5, 0x99, 0x3b, 0x56, 0xad, 0x55, 0xa5, 0x97, 0x97, - 0xa8, 0xd3, 0x1d, 0x7b, 0x7d, 0xa1, 0xc2, 0x89, 0x04, 0x37, 0xee, 0x8e, 0x25, 0x81, 0x38, 0x66, - 0x62, 0xff, 0xbc, 0x05, 0xc0, 0xbd, 0xcf, 0x1c, 0x6f, 0x8b, 0x1c, 0x83, 0x9e, 0x7c, 0x11, 0xfa, - 0xc2, 0x16, 0xa9, 0x75, 0xf2, 0x81, 0x8e, 0xdb, 0x53, 0x6d, 0x91, 0x5a, 0x3c, 0xe3, 0xe8, 0x3f, - 0xcc, 0x4a, 0xdb, 0xdf, 0x07, 0x30, 0x16, 0x93, 0xad, 0x44, 0xa4, 0x89, 0x9e, 0x35, 0x02, 0x75, - 0x9c, 0x4e, 0x04, 0xea, 
0x28, 0x31, 0x6a, 0x4d, 0x25, 0xfb, 0x69, 0x28, 0x36, 0x9d, 0x3b, 0x42, - 0xe7, 0xf6, 0x74, 0xe7, 0x66, 0x50, 0xfe, 0xb3, 0xab, 0xce, 0x1d, 0x7e, 0x2d, 0x7d, 0x5a, 0xae, - 0x90, 0x55, 0xe7, 0x4e, 0x57, 0x3f, 0x5d, 0x5a, 0x09, 0xab, 0xcb, 0xf5, 0x84, 0x63, 0x55, 0x4f, - 0x75, 0xb9, 0x5e, 0xb2, 0x2e, 0xd7, 0xeb, 0xa1, 0x2e, 0xd7, 0x43, 0x77, 0x61, 0x50, 0xf8, 0x3d, - 0x8a, 0x50, 0x5e, 0x17, 0x7b, 0xa8, 0x4f, 0xb8, 0x4d, 0xf2, 0x3a, 0x2f, 0xca, 0x6b, 0xb7, 0x80, - 0x76, 0xad, 0x57, 0x56, 0x88, 0xfe, 0xaa, 0x05, 0x63, 0xe2, 0x37, 0x26, 0x6f, 0xb5, 0x49, 0x18, - 0x09, 0xb1, 0xf4, 0xfd, 0xbd, 0xb7, 0x41, 0x14, 0xe4, 0x4d, 0x79, 0xbf, 0x3c, 0x67, 0x4c, 0x64, - 0xd7, 0x16, 0x25, 0x5a, 0x81, 0xfe, 0xae, 0x05, 0x27, 0x9a, 0xce, 0x1d, 0x5e, 0x23, 0x87, 0x61, - 0x27, 0x72, 0x7d, 0xe1, 0x3f, 0xf0, 0xa1, 0xde, 0x86, 0x3f, 0x55, 0x9c, 0x37, 0x52, 0xda, 0x1f, - 0x4f, 0x64, 0x91, 0x74, 0x6d, 0x6a, 0x66, 0xbb, 0x66, 0x36, 0x61, 0x48, 0xce, 0xb7, 0x07, 0xe9, - 0x64, 0xcd, 0xea, 0x11, 0x73, 0xed, 0x81, 0xd6, 0xf3, 0x69, 0x18, 0xd1, 0xe7, 0xd8, 0x03, 0xad, - 0xeb, 0x2d, 0x98, 0xca, 0x98, 0x4b, 0x0f, 0xb4, 0xca, 0xdb, 0x70, 0x3a, 0x77, 0x7e, 0x3c, 0x50, - 0x27, 0xf9, 0xaf, 0x5a, 0xfa, 0x3e, 0x78, 0x0c, 0xc6, 0x8a, 0x05, 0xd3, 0x58, 0x71, 0xb6, 0xf3, - 0xca, 0xc9, 0xb1, 0x58, 0xbc, 0xa9, 0x37, 0x9a, 0xee, 0xea, 0xe8, 0x35, 0x18, 0x68, 0x50, 0x88, - 0xf4, 0x9e, 0xb5, 0xbb, 0xaf, 0xc8, 0x58, 0x98, 0x64, 0xf0, 0x10, 0x0b, 0x0e, 0xf6, 0x2f, 0x59, - 0xd0, 0x77, 0x0c, 0x3d, 0x81, 0xcd, 0x9e, 0x78, 0x36, 0x97, 0xb5, 0x88, 0x6a, 0x3e, 0x8b, 0x9d, - 0xdb, 0x4b, 0x77, 0x22, 0xe2, 0x85, 0xec, 0x44, 0xce, 0xec, 0x98, 0x9f, 0xb6, 0x60, 0xea, 0x9a, - 0xef, 0xd4, 0xe7, 0x9d, 0x86, 0xe3, 0xd5, 0x48, 0xb0, 0xe2, 0x6d, 0x1d, 0xca, 0xf5, 0xbb, 0xd0, - 0xd5, 0xf5, 0x7b, 0x41, 0x7a, 0x4e, 0xf5, 0xe5, 0x8f, 0x1f, 0x95, 0xa4, 0x93, 0xa1, 0x8b, 0x0c, - 0x1f, 0xdf, 0x6d, 0x40, 0x7a, 0x2b, 0xc5, 0x03, 0x28, 0x0c, 0x83, 0x2e, 0x6f, 0xaf, 0x18, 0xc4, - 0x27, 0xb3, 0x25, 0xdc, 0xd4, 0xe7, 0x69, 0x4f, 0x7b, 0x38, 
0x00, 0x4b, 0x46, 0xf6, 0x4b, 0x90, - 0x19, 0x6a, 0xa2, 0xbb, 0x5e, 0xc2, 0xfe, 0x18, 0x4c, 0xb2, 0x92, 0x87, 0xd4, 0x0c, 0xd8, 0x09, - 0x6d, 0x6a, 0x46, 0xd8, 0x4c, 0xfb, 0xf3, 0x16, 0x8c, 0xaf, 0x25, 0xa2, 0x09, 0x9e, 0x67, 0xf6, - 0xd7, 0x0c, 0x25, 0x7e, 0x95, 0x41, 0xb1, 0xc0, 0x1e, 0xb9, 0x92, 0xeb, 0xcf, 0x2d, 0x88, 0xa3, - 0xbf, 0x1c, 0x83, 0xf8, 0xb6, 0x60, 0x88, 0x6f, 0x99, 0x82, 0xac, 0x6a, 0x4e, 0x9e, 0xf4, 0x86, - 0xae, 0xaa, 0xb8, 0x68, 0x1d, 0x64, 0xd8, 0x98, 0x0d, 0x9f, 0x8a, 0x63, 0x66, 0xf0, 0x34, 0x19, - 0x29, 0xcd, 0xfe, 0xdd, 0x02, 0x20, 0x45, 0xdb, 0x73, 0xdc, 0xb6, 0x74, 0x89, 0xa3, 0x89, 0xdb, - 0xb6, 0x0b, 0x88, 0x79, 0x10, 0x04, 0x8e, 0x17, 0x72, 0xb6, 0xae, 0x50, 0xeb, 0x1d, 0xce, 0x3d, - 0x61, 0x46, 0xbe, 0x0d, 0xbb, 0x96, 0xe2, 0x86, 0x33, 0x6a, 0xd0, 0x3c, 0x43, 0xfa, 0x7b, 0xf5, - 0x0c, 0x19, 0xe8, 0xf2, 0xc8, 0xf1, 0x2b, 0x16, 0x8c, 0xaa, 0x6e, 0x7a, 0x87, 0xb8, 0xc2, 0xab, - 0xf6, 0xe4, 0x6c, 0xa0, 0x15, 0xad, 0xc9, 0xec, 0x60, 0xf9, 0x2e, 0xf6, 0x58, 0xd5, 0x69, 0xb8, - 0x77, 0x89, 0x8a, 0xf3, 0x59, 0x16, 0x8f, 0x4f, 0x05, 0xf4, 0x60, 0xbf, 0x3c, 0xaa, 0xfe, 0xf1, - 0x38, 0xe6, 0x71, 0x11, 0xba, 0x25, 0x8f, 0x27, 0xa6, 0x22, 0x7a, 0x11, 0xfa, 0x5b, 0xdb, 0x4e, - 0x48, 0x12, 0x4f, 0x86, 0xfa, 0x2b, 0x14, 0x78, 0xb0, 0x5f, 0x1e, 0x53, 0x05, 0x18, 0x04, 0x73, - 0xea, 0xde, 0xa3, 0xe1, 0xa5, 0x27, 0x67, 0xd7, 0x68, 0x78, 0x7f, 0x6a, 0x41, 0xdf, 0x9a, 0x5f, - 0x3f, 0x8e, 0x2d, 0xe0, 0x55, 0x63, 0x0b, 0x78, 0x24, 0x2f, 0xc5, 0x44, 0xee, 0xea, 0x5f, 0x4e, - 0xac, 0xfe, 0xb3, 0xb9, 0x1c, 0x3a, 0x2f, 0xfc, 0x26, 0x0c, 0xb3, 0xc4, 0x15, 0xe2, 0x79, 0xd4, - 0xf3, 0xc6, 0x82, 0x2f, 0x27, 0x16, 0xfc, 0xb8, 0x46, 0xaa, 0xad, 0xf4, 0xa7, 0x60, 0x50, 0xbc, - 0xb7, 0x49, 0xbe, 0xf9, 0x15, 0xb4, 0x58, 0xe2, 0xed, 0x9f, 0x28, 0x82, 0x91, 0x28, 0x03, 0xfd, - 0x8a, 0x05, 0xb3, 0x01, 0xf7, 0xc3, 0xad, 0x2f, 0xb6, 0x03, 0xd7, 0xdb, 0xaa, 0xd6, 0xb6, 0x49, - 0xbd, 0xdd, 0x70, 0xbd, 0xad, 0x95, 0x2d, 0xcf, 0x57, 0xe0, 0xa5, 0x3b, 0xa4, 0xd6, 0x66, 0x66, - 
0xb7, 0x2e, 0x59, 0x39, 0x94, 0x3f, 0xfb, 0x73, 0xf7, 0xf6, 0xcb, 0xb3, 0xf8, 0x50, 0xbc, 0xf1, - 0x21, 0xdb, 0x82, 0x7e, 0xcb, 0x82, 0x8b, 0x3c, 0x7f, 0x44, 0xef, 0xed, 0xef, 0x70, 0x5b, 0xae, - 0x48, 0x56, 0x31, 0x93, 0x75, 0x12, 0x34, 0xe7, 0x3f, 0x20, 0x3a, 0xf4, 0x62, 0xe5, 0x70, 0x75, - 0xe1, 0xc3, 0x36, 0xce, 0xfe, 0xc7, 0x45, 0x18, 0x15, 0x51, 0xd3, 0xc4, 0x19, 0xf0, 0xa2, 0x31, - 0x25, 0x1e, 0x4d, 0x4c, 0x89, 0x49, 0x83, 0xf8, 0x68, 0xb6, 0xff, 0x10, 0x26, 0xe9, 0xe6, 0x7c, - 0x85, 0x38, 0x41, 0xb4, 0x41, 0x1c, 0xee, 0xf0, 0x55, 0x3c, 0xf4, 0xee, 0xaf, 0xf4, 0x93, 0xd7, - 0x92, 0xcc, 0x70, 0x9a, 0xff, 0x77, 0xd2, 0x99, 0xe3, 0xc1, 0x44, 0x2a, 0xf0, 0xdd, 0x1b, 0x50, - 0x52, 0x8f, 0x45, 0xc4, 0xa6, 0xd3, 0x39, 0x7e, 0x64, 0x92, 0x03, 0x57, 0x7f, 0xc5, 0x0f, 0x95, - 0x62, 0x76, 0xf6, 0xdf, 0x2f, 0x18, 0x15, 0xf2, 0x41, 0x5c, 0x83, 0x21, 0x27, 0x0c, 0xdd, 0x2d, - 0x8f, 0xd4, 0x3b, 0x69, 0x28, 0x53, 0xd5, 0xb0, 0x07, 0x3b, 0x73, 0xa2, 0x24, 0x56, 0x3c, 0xd0, - 0x15, 0xee, 0x56, 0xb7, 0x4b, 0x3a, 0xa9, 0x27, 0x53, 0xdc, 0x40, 0x3a, 0xde, 0xed, 0x12, 0x2c, - 0xca, 0xa3, 0x4f, 0x70, 0xbf, 0xc7, 0xab, 0x9e, 0x7f, 0xdb, 0xbb, 0xec, 0xfb, 0x32, 0xe8, 0x46, - 0x6f, 0x0c, 0x27, 0xa5, 0xb7, 0xa3, 0x2a, 0x8e, 0x4d, 0x6e, 0xbd, 0x45, 0x92, 0xfd, 0x0c, 0xb0, - 0x78, 0xf9, 0xe6, 0xdb, 0xec, 0x10, 0x11, 0x18, 0x17, 0x21, 0xf9, 0x24, 0x4c, 0xf4, 0x5d, 0xe6, - 0x55, 0xce, 0x2c, 0x1d, 0x2b, 0xd2, 0xaf, 0x9a, 0x2c, 0x70, 0x92, 0xa7, 0xfd, 0xb3, 0x16, 0xb0, - 0x77, 0xaa, 0xc7, 0x20, 0x8f, 0x7c, 0xd8, 0x94, 0x47, 0xa6, 0xf3, 0x3a, 0x39, 0x47, 0x14, 0x79, - 0x81, 0xcf, 0xac, 0x4a, 0xe0, 0xdf, 0xd9, 0x13, 0xce, 0x2a, 0xdd, 0xef, 0x1f, 0xf6, 0xff, 0xb6, - 0xf8, 0x26, 0x16, 0xbf, 0xea, 0xff, 0x2c, 0x0c, 0xd5, 0x9c, 0x96, 0x53, 0xe3, 0x59, 0x9d, 0x72, - 0x35, 0x7a, 0x46, 0xa1, 0xd9, 0x05, 0x51, 0x82, 0x6b, 0xa8, 0x64, 0x68, 0xc7, 0x21, 0x09, 0xee, - 0xaa, 0x95, 0x52, 0x55, 0xce, 0xec, 0xc0, 0xa8, 0xc1, 0xec, 0x81, 0xaa, 0x33, 0x3e, 0xcb, 0x8f, - 0x58, 0x15, 0x8a, 0xb4, 0x09, 0x93, 
0x9e, 0xf6, 0x9f, 0x1e, 0x28, 0xf2, 0x72, 0xf9, 0x78, 0xb7, - 0x43, 0x94, 0x9d, 0x3e, 0xda, 0x13, 0xd8, 0x04, 0x1b, 0x9c, 0xe6, 0x6c, 0xff, 0xa4, 0x05, 0x0f, - 0xe9, 0x84, 0xda, 0x2b, 0x9b, 0x6e, 0x46, 0x92, 0x45, 0x18, 0xf2, 0x5b, 0x24, 0x70, 0x22, 0x3f, - 0x10, 0xa7, 0xc6, 0x05, 0xd9, 0xe9, 0xd7, 0x05, 0xfc, 0x40, 0xe4, 0x28, 0x90, 0xdc, 0x25, 0x1c, - 0xab, 0x92, 0xf4, 0xf6, 0xc9, 0x3a, 0x23, 0x14, 0xef, 0xa9, 0xd8, 0x1e, 0xc0, 0x2c, 0xe9, 0x21, - 0x16, 0x18, 0xfb, 0x9b, 0x16, 0x9f, 0x58, 0x7a, 0xd3, 0xd1, 0x5b, 0x30, 0xd1, 0x74, 0xa2, 0xda, - 0xf6, 0xd2, 0x9d, 0x56, 0xc0, 0x4d, 0x4e, 0xb2, 0x9f, 0x9e, 0xee, 0xd6, 0x4f, 0xda, 0x47, 0xc6, - 0xae, 0x9c, 0xab, 0x09, 0x66, 0x38, 0xc5, 0x1e, 0x6d, 0xc0, 0x30, 0x83, 0xb1, 0xa7, 0x82, 0x61, - 0x27, 0xd1, 0x20, 0xaf, 0x36, 0xe5, 0x8c, 0xb0, 0x1a, 0xf3, 0xc1, 0x3a, 0x53, 0xfb, 0xcb, 0x45, - 0xbe, 0xda, 0x99, 0x28, 0xff, 0x14, 0x0c, 0xb6, 0xfc, 0xfa, 0xc2, 0xca, 0x22, 0x16, 0xa3, 0xa0, - 0x8e, 0x91, 0x0a, 0x07, 0x63, 0x89, 0x47, 0x17, 0x60, 0x48, 0xfc, 0x94, 0x26, 0x42, 0xb6, 0x37, - 0x0b, 0xba, 0x10, 0x2b, 0x2c, 0x7a, 0x0e, 0xa0, 0x15, 0xf8, 0xbb, 0x6e, 0x9d, 0x85, 0x0e, 0x29, - 0x9a, 0x7e, 0x44, 0x15, 0x85, 0xc1, 0x1a, 0x15, 0x7a, 0x05, 0x46, 0xdb, 0x5e, 0xc8, 0xc5, 0x11, - 0x2d, 0x9a, 0xb2, 0xf2, 0x70, 0xb9, 0xa1, 0x23, 0xb1, 0x49, 0x8b, 0xe6, 0x60, 0x20, 0x72, 0x98, - 0x5f, 0x4c, 0x7f, 0xbe, 0xbb, 0xef, 0x3a, 0xa5, 0xd0, 0x13, 0x08, 0xd1, 0x02, 0x58, 0x14, 0x44, - 0x6f, 0xc8, 0x57, 0xbb, 0x7c, 0x63, 0x17, 0x7e, 0xf6, 0xbd, 0x1d, 0x02, 0xda, 0x9b, 0x5d, 0xe1, - 0xbf, 0x6f, 0xf0, 0x42, 0x2f, 0x03, 0x90, 0x3b, 0x11, 0x09, 0x3c, 0xa7, 0xa1, 0xbc, 0xd9, 0x94, - 0x5c, 0xb0, 0xe8, 0xaf, 0xf9, 0xd1, 0x8d, 0x90, 0x2c, 0x29, 0x0a, 0xac, 0x51, 0xdb, 0xbf, 0x55, - 0x02, 0x88, 0xe5, 0x76, 0x74, 0x37, 0xb5, 0x71, 0x3d, 0xd3, 0x59, 0xd2, 0x3f, 0xba, 0x5d, 0x0b, - 0x7d, 0xbf, 0x05, 0xc3, 0x22, 0x42, 0x0a, 0x1b, 0xa1, 0x42, 0xe7, 0x8d, 0xd3, 0x0c, 0xd4, 0x42, - 0x4b, 0xf0, 0x26, 0x3c, 0x2f, 0x67, 0xa8, 0x86, 0xe9, 0xda, 0x0a, 0xbd, 
0x62, 0xf4, 0x3e, 0x79, - 0x55, 0x2c, 0x1a, 0x5d, 0xa9, 0xae, 0x8a, 0x25, 0x76, 0x46, 0xe8, 0xb7, 0xc4, 0x1b, 0xc6, 0x2d, - 0xb1, 0x2f, 0xff, 0x59, 0xa2, 0x21, 0xbe, 0x76, 0xbb, 0x20, 0xa2, 0x8a, 0x1e, 0xa2, 0xa0, 0x3f, - 0xff, 0x79, 0x9e, 0x76, 0x4f, 0xea, 0x12, 0x9e, 0xe0, 0xd3, 0x30, 0x5e, 0x37, 0x85, 0x00, 0x31, - 0x13, 0x9f, 0xcc, 0xe3, 0x9b, 0x90, 0x19, 0xe2, 0x63, 0x3f, 0x81, 0xc0, 0x49, 0xc6, 0xa8, 0xc2, - 0x23, 0x56, 0xac, 0x78, 0x9b, 0xbe, 0x78, 0xeb, 0x61, 0xe7, 0x8e, 0xe5, 0x5e, 0x18, 0x91, 0x26, - 0xa5, 0x8c, 0x4f, 0xf7, 0x35, 0x51, 0x16, 0x2b, 0x2e, 0xe8, 0x35, 0x18, 0x60, 0xef, 0xb3, 0xc2, - 0xe9, 0xa1, 0x7c, 0x8d, 0xb3, 0x19, 0xba, 0x2f, 0x5e, 0x90, 0xec, 0x6f, 0x88, 0x05, 0x07, 0x74, - 0x45, 0xbe, 0x7e, 0x0c, 0x57, 0xbc, 0x1b, 0x21, 0x61, 0xaf, 0x1f, 0x4b, 0xf3, 0x8f, 0xc7, 0x0f, - 0x1b, 0x39, 0x3c, 0x33, 0xcd, 0xa0, 0x51, 0x92, 0x4a, 0x51, 0xe2, 0xbf, 0xcc, 0x5e, 0x28, 0x02, - 0x0d, 0x65, 0x36, 0xcf, 0xcc, 0x70, 0x18, 0x77, 0xe7, 0x4d, 0x93, 0x05, 0x4e, 0xf2, 0xa4, 0x12, - 0x29, 0x5f, 0xf5, 0xe2, 0xb5, 0x48, 0xb7, 0xbd, 0x83, 0x5f, 0xc4, 0xd9, 0x69, 0xc4, 0x21, 0x58, - 0x94, 0x3f, 0x56, 0xf1, 0x60, 0xc6, 0x83, 0x89, 0xe4, 0x12, 0x7d, 0xa0, 0xe2, 0xc8, 0x1f, 0xf6, - 0xc1, 0x98, 0x39, 0xa5, 0xd0, 0x45, 0x28, 0x09, 0x26, 0x2a, 0x03, 0x88, 0x5a, 0x25, 0xab, 0x12, - 0x81, 0x63, 0x1a, 0x96, 0xf8, 0x85, 0x15, 0xd7, 0xdc, 0x83, 0xe3, 0xc4, 0x2f, 0x0a, 0x83, 0x35, - 0x2a, 0x7a, 0xb1, 0xda, 0xf0, 0xfd, 0x48, 0x1d, 0x48, 0x6a, 0xde, 0xcd, 0x33, 0x28, 0x16, 0x58, - 0x7a, 0x10, 0xed, 0x90, 0xc0, 0x23, 0x0d, 0x33, 0xf2, 0xb6, 0x3a, 0x88, 0xae, 0xea, 0x48, 0x6c, - 0xd2, 0xd2, 0xe3, 0xd4, 0x0f, 0xd9, 0x44, 0x16, 0xd7, 0xb7, 0xd8, 0xdd, 0xba, 0xca, 0x5f, 0x79, - 0x4b, 0x3c, 0xfa, 0x18, 0x3c, 0xa4, 0x02, 0x67, 0x61, 0x6e, 0xcd, 0x90, 0x35, 0x0e, 0x18, 0xda, - 0x96, 0x87, 0x16, 0xb2, 0xc9, 0x70, 0x5e, 0x79, 0xf4, 0x2a, 0x8c, 0x09, 0x11, 0x5f, 0x72, 0x1c, - 0x34, 0x3d, 0x8c, 0xae, 0x1a, 0x58, 0x9c, 0xa0, 0x96, 0xb1, 0xc3, 0x99, 0x94, 0x2d, 0x39, 0x0c, - 0xa5, 0x63, 
0x87, 0xeb, 0x78, 0x9c, 0x2a, 0x81, 0xe6, 0x60, 0x9c, 0xcb, 0x60, 0xae, 0xb7, 0xc5, - 0xc7, 0x44, 0x3c, 0xe6, 0x52, 0x4b, 0xea, 0xba, 0x89, 0xc6, 0x49, 0x7a, 0xf4, 0x12, 0x8c, 0x38, - 0x41, 0x6d, 0xdb, 0x8d, 0x48, 0x2d, 0x6a, 0x07, 0xfc, 0x95, 0x97, 0xe6, 0xa2, 0x35, 0xa7, 0xe1, - 0xb0, 0x41, 0x69, 0xdf, 0x85, 0xa9, 0x8c, 0xf0, 0x0f, 0x74, 0xe2, 0x38, 0x2d, 0x57, 0x7e, 0x53, - 0xc2, 0xc3, 0x79, 0xae, 0xb2, 0x22, 0xbf, 0x46, 0xa3, 0xa2, 0xb3, 0x93, 0x85, 0x89, 0xd0, 0x92, - 0x95, 0xaa, 0xd9, 0xb9, 0x2c, 0x11, 0x38, 0xa6, 0xb1, 0xff, 0x5b, 0x01, 0xc6, 0x33, 0x6c, 0x2b, - 0x2c, 0x61, 0x66, 0xe2, 0x92, 0x12, 0xe7, 0xc7, 0x34, 0x43, 0xd1, 0x17, 0x0e, 0x11, 0x8a, 0xbe, - 0xd8, 0x2d, 0x14, 0x7d, 0xdf, 0xdb, 0x09, 0x45, 0x6f, 0xf6, 0x58, 0x7f, 0x4f, 0x3d, 0x96, 0x11, - 0xbe, 0x7e, 0xe0, 0x90, 0xe1, 0xeb, 0x8d, 0x4e, 0x1f, 0xec, 0xa1, 0xd3, 0x7f, 0xb4, 0x00, 0x13, - 0x49, 0x57, 0xd2, 0x63, 0xd0, 0xdb, 0xbe, 0x66, 0xe8, 0x6d, 0x2f, 0xf4, 0xf2, 0xf8, 0x36, 0x57, - 0x87, 0x8b, 0x13, 0x3a, 0xdc, 0xf7, 0xf6, 0xc4, 0xad, 0xb3, 0x3e, 0xf7, 0xa7, 0x0a, 0x70, 0x32, - 0xf3, 0xf5, 0xef, 0x31, 0xf4, 0xcd, 0x75, 0xa3, 0x6f, 0x9e, 0xed, 0xf9, 0x61, 0x72, 0x6e, 0x07, - 0xdd, 0x4a, 0x74, 0xd0, 0xc5, 0xde, 0x59, 0x76, 0xee, 0xa5, 0xaf, 0x17, 0xe1, 0x6c, 0x66, 0xb9, - 0x58, 0xed, 0xb9, 0x6c, 0xa8, 0x3d, 0x9f, 0x4b, 0xa8, 0x3d, 0xed, 0xce, 0xa5, 0x8f, 0x46, 0x0f, - 0x2a, 0x1e, 0xe8, 0xb2, 0x30, 0x03, 0xf7, 0xa9, 0x03, 0x35, 0x1e, 0xe8, 0x2a, 0x46, 0xd8, 0xe4, - 0xfb, 0x9d, 0xa4, 0xfb, 0xfc, 0x97, 0x16, 0x9c, 0xce, 0x1c, 0x9b, 0x63, 0xd0, 0x75, 0xad, 0x99, - 0xba, 0xae, 0xa7, 0x7a, 0x9e, 0xad, 0x39, 0xca, 0xaf, 0x9f, 0xe9, 0xcf, 0xf9, 0x16, 0x76, 0x93, - 0xbf, 0x0e, 0xc3, 0x4e, 0xad, 0x46, 0xc2, 0x70, 0xd5, 0xaf, 0xab, 0x40, 0xd8, 0xcf, 0xb2, 0x7b, - 0x56, 0x0c, 0x3e, 0xd8, 0x2f, 0xcf, 0x24, 0x59, 0xc4, 0x68, 0xac, 0x73, 0x40, 0x9f, 0x80, 0xa1, - 0x50, 0x9c, 0x9b, 0x62, 0xec, 0x9f, 0xef, 0xb1, 0x73, 0x9c, 0x0d, 0xd2, 0x30, 0x23, 0x2e, 0x29, - 0x4d, 0x85, 0x62, 0x69, 0x46, 0x67, 0x29, 0x1c, 
0x69, 0x74, 0x96, 0xe7, 0x00, 0x76, 0xd5, 0x65, - 0x20, 0xa9, 0x7f, 0xd0, 0xae, 0x09, 0x1a, 0x15, 0xfa, 0x08, 0x4c, 0x84, 0x3c, 0x24, 0xe1, 0x42, - 0xc3, 0x09, 0xd9, 0x3b, 0x1a, 0x31, 0x0b, 0x59, 0x54, 0xa7, 0x6a, 0x02, 0x87, 0x53, 0xd4, 0x68, - 0x59, 0xd6, 0xca, 0xe2, 0x27, 0xf2, 0x89, 0x79, 0x3e, 0xae, 0x51, 0xa4, 0xeb, 0x3e, 0x91, 0xec, - 0x7e, 0xd6, 0xf1, 0x5a, 0x49, 0xf4, 0x09, 0x00, 0x3a, 0x7d, 0x84, 0x1e, 0x62, 0x30, 0x7f, 0xf3, - 0xa4, 0xbb, 0x4a, 0x3d, 0xd3, 0xb9, 0x99, 0xbd, 0xa9, 0x5d, 0x54, 0x4c, 0xb0, 0xc6, 0x10, 0x39, - 0x30, 0x1a, 0xff, 0x8b, 0xb3, 0xd9, 0x5e, 0xc8, 0xad, 0x21, 0xc9, 0x9c, 0xa9, 0xbc, 0x17, 0x75, - 0x16, 0xd8, 0xe4, 0x68, 0xff, 0xf8, 0x20, 0x3c, 0xdc, 0x61, 0x1b, 0x46, 0x73, 0xa6, 0xa9, 0xf7, - 0xe9, 0xe4, 0xfd, 0x7d, 0x26, 0xb3, 0xb0, 0x71, 0xa1, 0x4f, 0xcc, 0xf6, 0xc2, 0xdb, 0x9e, 0xed, - 0x3f, 0x6c, 0x69, 0x9a, 0x15, 0xee, 0x54, 0xfa, 0xe1, 0x43, 0x1e, 0x2f, 0x47, 0xa8, 0x6a, 0xd9, - 0xcc, 0xd0, 0x57, 0x3c, 0xd7, 0x73, 0x73, 0x7a, 0x57, 0x60, 0x7c, 0x35, 0x3b, 0x0e, 0x2f, 0x57, - 0x65, 0x5c, 0x3e, 0xec, 0xf7, 0x1f, 0x57, 0x4c, 0xde, 0x8f, 0xc9, 0xe8, 0x4b, 0xbc, 0x5e, 0xb1, - 0xd6, 0x5e, 0x8c, 0xc3, 0x29, 0xa9, 0xb3, 0xf4, 0xd1, 0xcc, 0xe6, 0xea, 0x44, 0xd8, 0x60, 0x75, - 0xbc, 0x57, 0xef, 0x6f, 0x51, 0x10, 0xe0, 0xdf, 0xb1, 0xe0, 0x4c, 0xc7, 0x88, 0x30, 0xdf, 0x86, - 0xb2, 0xa1, 0xfd, 0x39, 0x0b, 0xb2, 0x07, 0xdb, 0xf0, 0x28, 0xbb, 0x08, 0xa5, 0x5a, 0x22, 0xef, - 0x66, 0x1c, 0x1b, 0x41, 0xe5, 0xdc, 0x8c, 0x69, 0x0c, 0xc7, 0xb1, 0x42, 0x57, 0xc7, 0xb1, 0x5f, - 0xb7, 0x20, 0xb5, 0xbf, 0x1f, 0x83, 0xa0, 0xb1, 0x62, 0x0a, 0x1a, 0x8f, 0xf7, 0xd2, 0x9b, 0x39, - 0x32, 0xc6, 0x9f, 0x8c, 0xc3, 0xa9, 0x9c, 0x17, 0x79, 0xbb, 0x30, 0xb9, 0x55, 0x23, 0xe6, 0xe3, - 0xea, 0x4e, 0x41, 0x87, 0x3a, 0xbe, 0xc4, 0xe6, 0xe9, 0x4e, 0x53, 0x24, 0x38, 0x5d, 0x05, 0xfa, - 0x9c, 0x05, 0x27, 0x9c, 0xdb, 0xe1, 0x12, 0x15, 0x18, 0xdd, 0xda, 0x7c, 0xc3, 0xaf, 0xed, 0xd0, - 0xd3, 0x58, 0x2e, 0x84, 0x17, 0x32, 0x95, 0x78, 0xb7, 0xaa, 0x29, 0x7a, 0xa3, 0x7a, 
0x96, 0xdc, - 0x3a, 0x8b, 0x0a, 0x67, 0xd6, 0x85, 0xb0, 0x48, 0xed, 0x41, 0xaf, 0xa3, 0x1d, 0x9e, 0xff, 0x67, - 0x3d, 0x9d, 0xe4, 0x12, 0x90, 0xc4, 0x60, 0xc5, 0x07, 0x7d, 0x0a, 0x4a, 0x5b, 0xf2, 0xa5, 0x6f, - 0x86, 0x84, 0x15, 0x77, 0x64, 0xe7, 0xf7, 0xcf, 0xdc, 0x12, 0xaf, 0x88, 0x70, 0xcc, 0x14, 0xbd, - 0x0a, 0x45, 0x6f, 0x33, 0xec, 0x94, 0x1f, 0x3a, 0xe1, 0x72, 0xc9, 0x83, 0x6c, 0xac, 0x2d, 0x57, - 0x31, 0x2d, 0x88, 0xae, 0x40, 0x31, 0xd8, 0xa8, 0x0b, 0x0d, 0x74, 0xe6, 0x22, 0xc5, 0xf3, 0x8b, - 0x39, 0xad, 0x62, 0x9c, 0xf0, 0xfc, 0x22, 0xa6, 0x2c, 0x50, 0x05, 0xfa, 0xd9, 0x33, 0x36, 0x21, - 0xcf, 0x64, 0xde, 0xdc, 0x3a, 0x3c, 0x07, 0xe5, 0x91, 0x38, 0x18, 0x01, 0xe6, 0x8c, 0xd0, 0x3a, - 0x0c, 0xd4, 0x58, 0x2e, 0x61, 0x21, 0xc0, 0xbc, 0x2f, 0x53, 0xd7, 0xdc, 0x21, 0xc9, 0xb2, 0x50, - 0xbd, 0x32, 0x0a, 0x2c, 0x78, 0x31, 0xae, 0xa4, 0xb5, 0xbd, 0x19, 0x8a, 0x5c, 0xfb, 0xd9, 0x5c, - 0x3b, 0xe4, 0x0e, 0x17, 0x5c, 0x19, 0x05, 0x16, 0xbc, 0xd0, 0xcb, 0x50, 0xd8, 0xac, 0x89, 0x27, - 0x6a, 0x99, 0x4a, 0x67, 0x33, 0x4e, 0xca, 0xfc, 0xc0, 0xbd, 0xfd, 0x72, 0x61, 0x79, 0x01, 0x17, - 0x36, 0x6b, 0x68, 0x0d, 0x06, 0x37, 0x79, 0x64, 0x05, 0xa1, 0x57, 0x7e, 0x32, 0x3b, 0xe8, 0x43, - 0x2a, 0xf8, 0x02, 0x7f, 0xee, 0x24, 0x10, 0x58, 0x32, 0x61, 0x99, 0x26, 0x54, 0x84, 0x08, 0x11, - 0xa0, 0x6e, 0xf6, 0x70, 0x51, 0x3d, 0xb8, 0x7c, 0x19, 0xc7, 0x99, 0xc0, 0x1a, 0x47, 0x3a, 0xab, - 0x9d, 0xbb, 0xed, 0x80, 0x85, 0x1a, 0x17, 0x91, 0x8c, 0x32, 0x67, 0xf5, 0x9c, 0x24, 0xea, 0x34, - 0xab, 0x15, 0x11, 0x8e, 0x99, 0xa2, 0x1d, 0x18, 0xdd, 0x0d, 0x5b, 0xdb, 0x44, 0x2e, 0x69, 0x16, - 0xd8, 0x28, 0x47, 0x3e, 0xba, 0x29, 0x08, 0xdd, 0x20, 0x6a, 0x3b, 0x8d, 0xd4, 0x2e, 0xc4, 0x64, - 0xd9, 0x9b, 0x3a, 0x33, 0x6c, 0xf2, 0xa6, 0xdd, 0xff, 0x56, 0xdb, 0xdf, 0xd8, 0x8b, 0x88, 0x88, - 0x2b, 0x97, 0xd9, 0xfd, 0xaf, 0x73, 0x92, 0x74, 0xf7, 0x0b, 0x04, 0x96, 0x4c, 0xd0, 0x4d, 0xd1, - 0x3d, 0x6c, 0xf7, 0x9c, 0xc8, 0x8f, 0x30, 0x3b, 0x27, 0x89, 0x72, 0x3a, 0x85, 0xed, 0x96, 0x31, - 0x2b, 0xb6, 0x4b, 0xb6, 
0xb6, 0xfd, 0xc8, 0xf7, 0x12, 0x3b, 0xf4, 0x64, 0xfe, 0x2e, 0x59, 0xc9, - 0xa0, 0x4f, 0xef, 0x92, 0x59, 0x54, 0x38, 0xb3, 0x2e, 0x54, 0x87, 0xb1, 0x96, 0x1f, 0x44, 0xb7, - 0xfd, 0x40, 0xce, 0x2f, 0xd4, 0x41, 0x2f, 0x66, 0x50, 0x8a, 0x1a, 0x59, 0xc8, 0x46, 0x13, 0x83, - 0x13, 0x3c, 0xd1, 0x47, 0x61, 0x30, 0xac, 0x39, 0x0d, 0xb2, 0x72, 0x7d, 0x7a, 0x2a, 0xff, 0xf8, - 0xa9, 0x72, 0x92, 0x9c, 0xd9, 0xc5, 0x03, 0x63, 0x70, 0x12, 0x2c, 0xd9, 0xa1, 0x65, 0xe8, 0x67, - 0xe9, 0x16, 0x59, 0x10, 0xc4, 0x9c, 0x40, 0xb9, 0x29, 0x07, 0x78, 0xbe, 0x37, 0x31, 0x30, 0xe6, - 0xc5, 0xe9, 0x1a, 0x10, 0xd7, 0x43, 0x3f, 0x9c, 0x3e, 0x99, 0xbf, 0x06, 0xc4, 0xad, 0xf2, 0x7a, - 0xb5, 0xd3, 0x1a, 0x50, 0x44, 0x38, 0x66, 0x4a, 0x77, 0x66, 0xba, 0x9b, 0x9e, 0xea, 0xe0, 0xb9, - 0x95, 0xbb, 0x97, 0xb2, 0x9d, 0x99, 0xee, 0xa4, 0x94, 0x85, 0xfd, 0x07, 0x83, 0x69, 0x99, 0x85, - 0x29, 0x14, 0xfe, 0x82, 0x95, 0xb2, 0x35, 0xbf, 0xbf, 0x57, 0xfd, 0xe6, 0x11, 0x5e, 0x85, 0x3e, - 0x67, 0xc1, 0xa9, 0x56, 0xe6, 0x87, 0x08, 0x01, 0xa0, 0x37, 0x35, 0x29, 0xff, 0x74, 0x15, 0x30, - 0x33, 0x1b, 0x8f, 0x73, 0x6a, 0x4a, 0x5e, 0x37, 0x8b, 0x6f, 0xfb, 0xba, 0xb9, 0x0a, 0x43, 0x35, - 0x7e, 0x15, 0xe9, 0x98, 0x5b, 0x3f, 0x79, 0xf7, 0x66, 0xa2, 0x84, 0xb8, 0xc3, 0x6c, 0x62, 0xc5, - 0x02, 0xfd, 0x88, 0x05, 0x67, 0x92, 0x4d, 0xc7, 0x84, 0xa1, 0x45, 0x94, 0x4d, 0xae, 0xcb, 0x58, - 0x16, 0xdf, 0x9f, 0x92, 0xff, 0x0d, 0xe2, 0x83, 0x6e, 0x04, 0xb8, 0x73, 0x65, 0x68, 0x31, 0x43, - 0x99, 0x32, 0x60, 0x1a, 0x90, 0x7a, 0x50, 0xa8, 0xbc, 0x00, 0x23, 0x4d, 0xbf, 0xed, 0x45, 0xc2, - 0xd1, 0x4b, 0x38, 0x9d, 0x30, 0x67, 0x8b, 0x55, 0x0d, 0x8e, 0x0d, 0xaa, 0x84, 0x1a, 0x66, 0xe8, - 0xbe, 0xd5, 0x30, 0x6f, 0xc2, 0x88, 0xa7, 0x79, 0x26, 0x0b, 0x79, 0xe0, 0x7c, 0x7e, 0x84, 0x5c, - 0xdd, 0x8f, 0x99, 0xb7, 0x52, 0x87, 0x60, 0x83, 0xdb, 0xf1, 0x7a, 0x80, 0x7d, 0xc9, 0xca, 0x10, - 0xea, 0xb9, 0x2a, 0xe6, 0x43, 0xa6, 0x2a, 0xe6, 0x7c, 0x52, 0x15, 0x93, 0x32, 0x1e, 0x18, 0x5a, - 0x98, 0xde, 0xb3, 0x3b, 0xf5, 0x1a, 0x65, 0xd3, 0x6e, 0xc0, 
0xb9, 0x6e, 0xc7, 0x12, 0xf3, 0xf8, - 0xab, 0x2b, 0x53, 0x71, 0xec, 0xf1, 0x57, 0x5f, 0x59, 0xc4, 0x0c, 0xd3, 0x6b, 0xfc, 0x26, 0xfb, - 0xbf, 0x58, 0x50, 0xac, 0xf8, 0xf5, 0x63, 0xb8, 0xf0, 0x7e, 0xd8, 0xb8, 0xf0, 0x3e, 0x9c, 0x7d, - 0x20, 0xd6, 0x73, 0x4d, 0x1f, 0x4b, 0x09, 0xd3, 0xc7, 0x99, 0x3c, 0x06, 0x9d, 0x0d, 0x1d, 0x3f, - 0x5d, 0x84, 0xe1, 0x8a, 0x5f, 0x57, 0xee, 0xf6, 0xff, 0xf4, 0x7e, 0xdc, 0xed, 0x73, 0x73, 0x65, - 0x68, 0x9c, 0x99, 0xa3, 0xa0, 0x7c, 0x69, 0xfc, 0x6d, 0xe6, 0x75, 0x7f, 0x8b, 0xb8, 0x5b, 0xdb, - 0x11, 0xa9, 0x27, 0x3f, 0xe7, 0xf8, 0xbc, 0xee, 0xff, 0xa0, 0x00, 0xe3, 0x89, 0xda, 0x51, 0x03, - 0x46, 0x1b, 0xba, 0x62, 0x5d, 0xcc, 0xd3, 0xfb, 0xd2, 0xc9, 0x0b, 0xaf, 0x65, 0x0d, 0x84, 0x4d, - 0xe6, 0x68, 0x16, 0x40, 0x59, 0x9a, 0xa5, 0x7a, 0x95, 0x49, 0xfd, 0xca, 0x14, 0x1d, 0x62, 0x8d, - 0x02, 0xbd, 0x08, 0xc3, 0x91, 0xdf, 0xf2, 0x1b, 0xfe, 0xd6, 0xde, 0x55, 0x22, 0x43, 0x7b, 0x29, - 0x5f, 0xc4, 0xf5, 0x18, 0x85, 0x75, 0x3a, 0x74, 0x07, 0x26, 0x15, 0x93, 0xea, 0x11, 0x18, 0x1b, - 0x98, 0x56, 0x61, 0x2d, 0xc9, 0x11, 0xa7, 0x2b, 0xb1, 0x7f, 0xae, 0xc8, 0xbb, 0xd8, 0x8b, 0xdc, - 0x77, 0x57, 0xc3, 0x3b, 0x7b, 0x35, 0x7c, 0xdd, 0x82, 0x09, 0x5a, 0x3b, 0x73, 0xb4, 0x92, 0xc7, - 0xbc, 0x8a, 0xc9, 0x6d, 0x75, 0x88, 0xc9, 0x7d, 0x9e, 0xee, 0x9a, 0x75, 0xbf, 0x1d, 0x09, 0xdd, - 0x9d, 0xb6, 0x2d, 0x52, 0x28, 0x16, 0x58, 0x41, 0x47, 0x82, 0x40, 0x3c, 0x0e, 0xd5, 0xe9, 0x48, - 0x10, 0x60, 0x81, 0x95, 0x21, 0xbb, 0xfb, 0xb2, 0x43, 0x76, 0xf3, 0xc8, 0xab, 0xc2, 0x25, 0x47, - 0x08, 0x5c, 0x5a, 0xe4, 0x55, 0xe9, 0xab, 0x13, 0xd3, 0xd8, 0x5f, 0x2d, 0xc2, 0x48, 0xc5, 0xaf, - 0xc7, 0x56, 0xe6, 0x17, 0x0c, 0x2b, 0xf3, 0xb9, 0x84, 0x95, 0x79, 0x42, 0xa7, 0x7d, 0xd7, 0xa6, - 0xfc, 0xad, 0xb2, 0x29, 0xff, 0x9a, 0xc5, 0x46, 0x6d, 0x71, 0xad, 0xca, 0xfd, 0xf6, 0xd0, 0x25, - 0x18, 0x66, 0x1b, 0x0c, 0x7b, 0x8d, 0x2c, 0x4d, 0xaf, 0x2c, 0xdf, 0xd5, 0x5a, 0x0c, 0xc6, 0x3a, - 0x0d, 0xba, 0x00, 0x43, 0x21, 0x71, 0x82, 0xda, 0xb6, 0xda, 0x5d, 0x85, 0x9d, 0x94, 0xc3, 0xb0, - 
0xc2, 0xa2, 0xd7, 0xe3, 0xa0, 0x9f, 0xc5, 0xfc, 0xd7, 0x8d, 0x7a, 0x7b, 0xf8, 0x12, 0xc9, 0x8f, - 0xf4, 0x69, 0xdf, 0x02, 0x94, 0xa6, 0xef, 0x21, 0x2c, 0x5d, 0xd9, 0x0c, 0x4b, 0x57, 0x4a, 0x85, - 0xa4, 0xfb, 0x33, 0x0b, 0xc6, 0x2a, 0x7e, 0x9d, 0x2e, 0xdd, 0xef, 0xa4, 0x75, 0xaa, 0x47, 0x3c, - 0x1e, 0xe8, 0x10, 0xf1, 0xf8, 0x31, 0xe8, 0xaf, 0xf8, 0xf5, 0x95, 0x4a, 0xa7, 0xd0, 0x02, 0xf6, - 0xdf, 0xb4, 0x60, 0xb0, 0xe2, 0xd7, 0x8f, 0xc1, 0x2c, 0xf0, 0x21, 0xd3, 0x2c, 0xf0, 0x50, 0xce, - 0xbc, 0xc9, 0xb1, 0x04, 0xfc, 0x8d, 0x3e, 0x18, 0xa5, 0xed, 0xf4, 0xb7, 0xe4, 0x50, 0x1a, 0xdd, - 0x66, 0xf5, 0xd0, 0x6d, 0x54, 0x0a, 0xf7, 0x1b, 0x0d, 0xff, 0x76, 0x72, 0x58, 0x97, 0x19, 0x14, - 0x0b, 0x2c, 0x7a, 0x06, 0x86, 0x5a, 0x01, 0xd9, 0x75, 0x7d, 0x21, 0xde, 0x6a, 0x46, 0x96, 0x8a, - 0x80, 0x63, 0x45, 0x41, 0xaf, 0x85, 0xa1, 0xeb, 0xd1, 0xa3, 0xbc, 0xe6, 0x7b, 0x75, 0xae, 0x39, - 0x2f, 0x8a, 0xb4, 0x1c, 0x1a, 0x1c, 0x1b, 0x54, 0xe8, 0x16, 0x94, 0xd8, 0x7f, 0xb6, 0xed, 0x1c, - 0x3e, 0x7b, 0xaf, 0xc8, 0x2a, 0x28, 0x18, 0xe0, 0x98, 0x17, 0x7a, 0x0e, 0x20, 0x92, 0xa1, 0xed, - 0x43, 0x11, 0x68, 0x4d, 0x5d, 0x05, 0x54, 0xd0, 0xfb, 0x10, 0x6b, 0x54, 0xe8, 0x69, 0x28, 0x45, - 0x8e, 0xdb, 0xb8, 0xe6, 0x7a, 0x24, 0x64, 0x1a, 0xf1, 0xa2, 0x4c, 0xee, 0x27, 0x80, 0x38, 0xc6, - 0x53, 0x51, 0x8c, 0x05, 0xe1, 0xe0, 0xb9, 0xcb, 0x87, 0x18, 0x35, 0x13, 0xc5, 0xae, 0x29, 0x28, - 0xd6, 0x28, 0xd0, 0x36, 0x3c, 0xe2, 0x7a, 0x2c, 0x85, 0x05, 0xa9, 0xee, 0xb8, 0xad, 0xf5, 0x6b, - 0xd5, 0x9b, 0x24, 0x70, 0x37, 0xf7, 0xe6, 0x9d, 0xda, 0x0e, 0xf1, 0x64, 0x5e, 0x56, 0x99, 0xae, - 0xfb, 0x91, 0x95, 0x0e, 0xb4, 0xb8, 0x23, 0x27, 0xfb, 0x79, 0x36, 0xdf, 0xaf, 0x57, 0xd1, 0x7b, - 0x8d, 0xad, 0xe3, 0x94, 0xbe, 0x75, 0x1c, 0xec, 0x97, 0x07, 0xae, 0x57, 0xb5, 0x18, 0x12, 0x2f, - 0xc1, 0xc9, 0x8a, 0x5f, 0xaf, 0xf8, 0x41, 0xb4, 0xec, 0x07, 0xb7, 0x9d, 0xa0, 0x2e, 0xa7, 0x57, - 0x59, 0x46, 0xd1, 0xa0, 0xfb, 0x67, 0x3f, 0xdf, 0x5d, 0x8c, 0x08, 0x19, 0xcf, 0x33, 0x89, 0xed, - 0x90, 0x6f, 0xbf, 0x6a, 0x4c, 0x76, 
0x50, 0x49, 0x60, 0x2e, 0x3b, 0x11, 0x41, 0xd7, 0x59, 0xe6, - 0xf5, 0xf8, 0x18, 0x15, 0xc5, 0x9f, 0xd2, 0x32, 0xaf, 0xc7, 0xc8, 0xcc, 0x73, 0xd7, 0x2c, 0x6f, - 0x7f, 0x56, 0x54, 0xc2, 0xef, 0xe0, 0xdc, 0xbf, 0xae, 0x97, 0xd4, 0xc5, 0x32, 0x4b, 0x44, 0x21, - 0x3f, 0xbd, 0x00, 0xb7, 0x7a, 0x76, 0xcc, 0x12, 0x61, 0xbf, 0x08, 0x93, 0xf4, 0xea, 0xa7, 0xe4, - 0x28, 0xf6, 0x91, 0xdd, 0xa3, 0x79, 0xfc, 0xd7, 0x7e, 0x76, 0x0e, 0x24, 0xd2, 0x9f, 0xa0, 0x4f, - 0xc2, 0x58, 0x48, 0xae, 0xb9, 0x5e, 0xfb, 0x8e, 0x54, 0xbc, 0x74, 0x78, 0x73, 0x58, 0x5d, 0xd2, - 0x29, 0xb9, 0xfa, 0xd6, 0x84, 0xe1, 0x04, 0x37, 0xd4, 0x84, 0xb1, 0xdb, 0xae, 0x57, 0xf7, 0x6f, - 0x87, 0x92, 0xff, 0x50, 0xbe, 0x16, 0xf7, 0x16, 0xa7, 0x4c, 0xb4, 0xd1, 0xa8, 0xee, 0x96, 0xc1, - 0x0c, 0x27, 0x98, 0xd3, 0xb5, 0x16, 0xb4, 0xbd, 0xb9, 0xf0, 0x46, 0x48, 0x02, 0x91, 0xf9, 0x9f, - 0xa7, 0xe5, 0x95, 0x40, 0x1c, 0xe3, 0xe9, 0x5a, 0x63, 0x7f, 0x2e, 0x07, 0x7e, 0x9b, 0xe7, 0xda, - 0x10, 0x6b, 0x0d, 0x2b, 0x28, 0xd6, 0x28, 0xe8, 0x5e, 0xc4, 0xfe, 0xad, 0xf9, 0x1e, 0xf6, 0xfd, - 0x48, 0xee, 0x5e, 0xcc, 0x13, 0x41, 0x83, 0x63, 0x83, 0x0a, 0x2d, 0x03, 0x0a, 0xdb, 0xad, 0x56, - 0x83, 0x39, 0x33, 0x39, 0x0d, 0xc6, 0x8a, 0x7b, 0x79, 0x14, 0x79, 0xac, 0xe0, 0x6a, 0x0a, 0x8b, - 0x33, 0x4a, 0xd0, 0x63, 0x69, 0x53, 0x34, 0xb5, 0x9f, 0x35, 0x95, 0x5b, 0x7c, 0xaa, 0xbc, 0x9d, - 0x12, 0x87, 0x96, 0x60, 0x30, 0xdc, 0x0b, 0x6b, 0x91, 0x08, 0xed, 0x98, 0x93, 0x46, 0xab, 0xca, - 0x48, 0xb4, 0x2c, 0x8e, 0xbc, 0x08, 0x96, 0x65, 0x51, 0x0d, 0xa6, 0x04, 0xc7, 0x85, 0x6d, 0xc7, - 0x53, 0xf9, 0x82, 0xb8, 0x4f, 0xf7, 0xa5, 0x7b, 0xfb, 0xe5, 0x29, 0x51, 0xb3, 0x8e, 0x3e, 0xd8, - 0x2f, 0x9f, 0xaa, 0xf8, 0xf5, 0x0c, 0x0c, 0xce, 0xe2, 0xc6, 0x27, 0x5f, 0xad, 0xe6, 0x37, 0x5b, - 0x95, 0xc0, 0xdf, 0x74, 0x1b, 0xa4, 0x93, 0xd5, 0xac, 0x6a, 0x50, 0x8a, 0xc9, 0x67, 0xc0, 0x70, - 0x82, 0x9b, 0xfd, 0x59, 0x26, 0xba, 0xb1, 0x64, 0xf1, 0x51, 0x3b, 0x20, 0xa8, 0x09, 0xa3, 0x2d, - 0xb6, 0xb8, 0x45, 0x06, 0x0c, 0x31, 0xd7, 0x5f, 0xe8, 0x51, 0xfb, 0x73, 
0x9b, 0xe5, 0xf5, 0x32, - 0x3c, 0xa3, 0x2a, 0x3a, 0x3b, 0x6c, 0x72, 0xb7, 0xff, 0xf5, 0x69, 0x76, 0xf8, 0x57, 0xb9, 0x4a, - 0x67, 0x50, 0x3c, 0x21, 0x11, 0xb7, 0xc8, 0x99, 0x7c, 0xdd, 0x62, 0x3c, 0x2c, 0xe2, 0x19, 0x0a, - 0x96, 0x65, 0xd1, 0x27, 0x60, 0x8c, 0x5e, 0xca, 0xd4, 0x01, 0x1c, 0x4e, 0x9f, 0xc8, 0x0f, 0xf5, - 0xa1, 0xa8, 0xf4, 0xec, 0x38, 0x7a, 0x61, 0x9c, 0x60, 0x86, 0x5e, 0x67, 0x9e, 0x48, 0x92, 0x75, - 0xa1, 0x17, 0xd6, 0xba, 0xd3, 0x91, 0x64, 0xab, 0x31, 0x41, 0x6d, 0x98, 0x4a, 0x27, 0xec, 0x0b, - 0xa7, 0xed, 0x7c, 0xe9, 0x36, 0x9d, 0x73, 0x2f, 0x4e, 0x63, 0x92, 0xc6, 0x85, 0x38, 0x8b, 0x3f, - 0xba, 0x06, 0xa3, 0x22, 0x63, 0xba, 0x98, 0xb9, 0x45, 0x43, 0xe5, 0x39, 0x8a, 0x75, 0xe4, 0x41, - 0x12, 0x80, 0xcd, 0xc2, 0x68, 0x0b, 0xce, 0x68, 0x49, 0xae, 0x2e, 0x07, 0x0e, 0xf3, 0x5b, 0x70, - 0xd9, 0x76, 0xaa, 0x89, 0x25, 0x8f, 0xde, 0xdb, 0x2f, 0x9f, 0x59, 0xef, 0x44, 0x88, 0x3b, 0xf3, - 0x41, 0xd7, 0xe1, 0x24, 0x7f, 0xa8, 0xbe, 0x48, 0x9c, 0x7a, 0xc3, 0xf5, 0x94, 0xdc, 0xc3, 0x97, - 0xfc, 0xe9, 0x7b, 0xfb, 0xe5, 0x93, 0x73, 0x59, 0x04, 0x38, 0xbb, 0x1c, 0xfa, 0x10, 0x94, 0xea, - 0x5e, 0x28, 0xfa, 0x60, 0xc0, 0xc8, 0x23, 0x56, 0x5a, 0x5c, 0xab, 0xaa, 0xef, 0x8f, 0xff, 0xe0, - 0xb8, 0x00, 0xda, 0xe2, 0x6a, 0x71, 0xa5, 0xac, 0x19, 0x4c, 0x05, 0xea, 0x4a, 0xea, 0x33, 0x8d, - 0xa7, 0xaa, 0xdc, 0x1e, 0xa4, 0x5e, 0x70, 0x18, 0xaf, 0x58, 0x0d, 0xc6, 0xe8, 0x35, 0x40, 0x22, - 0x5e, 0xfd, 0x5c, 0x8d, 0xa5, 0x57, 0x61, 0x56, 0x84, 0x21, 0xf3, 0xf1, 0x64, 0x35, 0x45, 0x81, - 0x33, 0x4a, 0xa1, 0x2b, 0x74, 0x57, 0xd1, 0xa1, 0x62, 0xd7, 0x52, 0xa9, 0x25, 0x17, 0x49, 0x2b, - 0x20, 0xcc, 0x0f, 0xcb, 0xe4, 0x88, 0x13, 0xe5, 0x50, 0x1d, 0x1e, 0x71, 0xda, 0x91, 0xcf, 0x2c, - 0x0e, 0x26, 0xe9, 0xba, 0xbf, 0x43, 0x3c, 0x66, 0xec, 0x1b, 0x9a, 0x3f, 0x47, 0x05, 0xab, 0xb9, - 0x0e, 0x74, 0xb8, 0x23, 0x17, 0x2a, 0x10, 0xab, 0x5c, 0xd2, 0x60, 0x86, 0x1f, 0xcb, 0xc8, 0x27, - 0xfd, 0x22, 0x0c, 0x6f, 0xfb, 0x61, 0xb4, 0x46, 0xa2, 0xdb, 0x7e, 0xb0, 0x23, 0xc2, 0xe8, 0xc6, - 0x41, 0xc9, 
0x63, 0x14, 0xd6, 0xe9, 0xe8, 0x8d, 0x97, 0xb9, 0xa2, 0xac, 0x2c, 0x32, 0x2f, 0x80, - 0xa1, 0x78, 0x8f, 0xb9, 0xc2, 0xc1, 0x58, 0xe2, 0x25, 0xe9, 0x4a, 0x65, 0x81, 0x59, 0xf4, 0x13, - 0xa4, 0x2b, 0x95, 0x05, 0x2c, 0xf1, 0x74, 0xba, 0x86, 0xdb, 0x4e, 0x40, 0x2a, 0x81, 0x5f, 0x23, - 0xa1, 0x16, 0x0a, 0xff, 0x61, 0x1e, 0x24, 0x98, 0x4e, 0xd7, 0x6a, 0x16, 0x01, 0xce, 0x2e, 0x87, - 0x48, 0x3a, 0xc1, 0xdb, 0x58, 0xbe, 0x29, 0x26, 0x2d, 0xcf, 0xf4, 0x98, 0xe3, 0xcd, 0x83, 0x09, - 0x95, 0x5a, 0x8e, 0x87, 0x05, 0x0e, 0xa7, 0xc7, 0xd9, 0xdc, 0xee, 0x3d, 0xa6, 0xb0, 0x32, 0x6e, - 0xad, 0x24, 0x38, 0xe1, 0x14, 0x6f, 0x23, 0xc2, 0xdc, 0x44, 0xd7, 0x08, 0x73, 0x17, 0xa1, 0x14, - 0xb6, 0x37, 0xea, 0x7e, 0xd3, 0x71, 0x3d, 0x66, 0xd1, 0xd7, 0xae, 0x5e, 0x55, 0x89, 0xc0, 0x31, - 0x0d, 0x5a, 0x86, 0x21, 0x47, 0x5a, 0xae, 0x50, 0x7e, 0x4c, 0x21, 0x65, 0xaf, 0xe2, 0x61, 0x36, - 0xa4, 0xad, 0x4a, 0x95, 0x45, 0xaf, 0xc0, 0xa8, 0x78, 0x68, 0x2d, 0x52, 0xa7, 0x4e, 0x99, 0xaf, - 0xe1, 0xaa, 0x3a, 0x12, 0x9b, 0xb4, 0xe8, 0x06, 0x0c, 0x47, 0x7e, 0x83, 0x3d, 0xe9, 0xa2, 0x62, - 0xde, 0xa9, 0xfc, 0xe8, 0x78, 0xeb, 0x8a, 0x4c, 0x57, 0x1a, 0xab, 0xa2, 0x58, 0xe7, 0x83, 0xd6, - 0xf9, 0x7c, 0x67, 0x81, 0xef, 0x49, 0x28, 0x72, 0x6f, 0x9e, 0xc9, 0x73, 0xc7, 0x62, 0x64, 0xe6, - 0x72, 0x10, 0x25, 0xb1, 0xce, 0x06, 0x5d, 0x86, 0xc9, 0x56, 0xe0, 0xfa, 0x6c, 0x4e, 0x28, 0xa3, - 0xe5, 0xb4, 0x99, 0xe6, 0xaa, 0x92, 0x24, 0xc0, 0xe9, 0x32, 0xec, 0x9d, 0xbc, 0x00, 0x4e, 0x9f, - 0xe6, 0xa9, 0x3a, 0xf8, 0x4d, 0x96, 0xc3, 0xb0, 0xc2, 0xa2, 0x55, 0xb6, 0x13, 0x73, 0x25, 0xcc, - 0xf4, 0x4c, 0x7e, 0x18, 0x23, 0x5d, 0x59, 0xc3, 0x85, 0x57, 0xf5, 0x17, 0xc7, 0x1c, 0x50, 0x5d, - 0xcb, 0x90, 0x49, 0xaf, 0x00, 0xe1, 0xf4, 0x23, 0x1d, 0xfc, 0x01, 0x13, 0x97, 0xa2, 0x58, 0x20, - 0x30, 0xc0, 0x21, 0x4e, 0xf0, 0x44, 0x1f, 0x81, 0x09, 0x11, 0x7c, 0x31, 0xee, 0xa6, 0x33, 0xb1, - 0xa3, 0x3c, 0x4e, 0xe0, 0x70, 0x8a, 0x9a, 0xa7, 0xca, 0x70, 0x36, 0x1a, 0x44, 0x6c, 0x7d, 0xd7, - 0x5c, 0x6f, 0x27, 0x9c, 0x3e, 0xcb, 0xf6, 0x07, 
0x91, 0x2a, 0x23, 0x89, 0xc5, 0x19, 0x25, 0xd0, - 0x3a, 0x4c, 0xb4, 0x02, 0x42, 0x9a, 0x4c, 0xd0, 0x17, 0xe7, 0x59, 0x99, 0x87, 0x89, 0xa0, 0x2d, - 0xa9, 0x24, 0x70, 0x07, 0x19, 0x30, 0x9c, 0xe2, 0x80, 0x6e, 0xc3, 0x90, 0xbf, 0x4b, 0x82, 0x6d, - 0xe2, 0xd4, 0xa7, 0xcf, 0x75, 0x78, 0xb8, 0x21, 0x0e, 0xb7, 0xeb, 0x82, 0x36, 0xe1, 0xe8, 0x20, - 0xc1, 0xdd, 0x1d, 0x1d, 0x64, 0x65, 0xe8, 0x2f, 0x5a, 0x70, 0x5a, 0xda, 0x46, 0xaa, 0x2d, 0xda, - 0xeb, 0x0b, 0xbe, 0x17, 0x46, 0x01, 0x0f, 0x6c, 0xf0, 0x68, 0xfe, 0x63, 0xff, 0xf5, 0x9c, 0x42, - 0x4a, 0x0f, 0x7c, 0x3a, 0x8f, 0x22, 0xc4, 0xf9, 0x35, 0xa2, 0x05, 0x98, 0x0c, 0x49, 0x24, 0x37, - 0xa3, 0xb9, 0x70, 0xf9, 0xf5, 0xc5, 0xb5, 0xe9, 0xc7, 0x78, 0x54, 0x06, 0xba, 0x18, 0xaa, 0x49, - 0x24, 0x4e, 0xd3, 0xa3, 0x4b, 0x50, 0xf0, 0xc3, 0xe9, 0xc7, 0x3b, 0x24, 0x55, 0xf5, 0xeb, 0xd7, - 0xab, 0xdc, 0xe1, 0xed, 0x7a, 0x15, 0x17, 0xfc, 0x50, 0xa6, 0xab, 0xa0, 0xf7, 0xb1, 0x70, 0xfa, - 0x09, 0xae, 0x35, 0x94, 0xe9, 0x2a, 0x18, 0x10, 0xc7, 0x78, 0xb4, 0x0d, 0xe3, 0xa1, 0x71, 0xef, - 0x0d, 0xa7, 0xcf, 0xb3, 0x9e, 0x7a, 0x22, 0x6f, 0xd0, 0x0c, 0x6a, 0x2d, 0xda, 0xbc, 0xc9, 0x05, - 0x27, 0xd9, 0xf2, 0xd5, 0xa5, 0x5d, 0xf0, 0xc3, 0xe9, 0x27, 0xbb, 0xac, 0x2e, 0x8d, 0x58, 0x5f, - 0x5d, 0x3a, 0x0f, 0x9c, 0xe0, 0x39, 0xf3, 0x5d, 0x30, 0x99, 0x12, 0x97, 0x0e, 0x93, 0x89, 0x69, - 0x66, 0x07, 0x46, 0x8d, 0x29, 0xf9, 0x40, 0x1d, 0x0b, 0xbe, 0x67, 0x08, 0x4a, 0xca, 0xe8, 0x8c, - 0x2e, 0x9a, 0xbe, 0x04, 0xa7, 0x93, 0xbe, 0x04, 0x43, 0x15, 0xbf, 0x6e, 0xb8, 0x0f, 0xac, 0x67, - 0xc4, 0xee, 0xcb, 0xdb, 0x00, 0x7b, 0x7f, 0xd3, 0xa0, 0x69, 0xf2, 0x8b, 0x3d, 0x3b, 0x25, 0xf4, - 0x75, 0x34, 0x0e, 0x5c, 0x86, 0x49, 0xcf, 0x67, 0x32, 0x3a, 0xa9, 0x4b, 0x01, 0x8c, 0xc9, 0x59, - 0x25, 0x3d, 0x18, 0x4e, 0x82, 0x00, 0xa7, 0xcb, 0xd0, 0x0a, 0xb9, 0xa0, 0x94, 0xb4, 0x46, 0x70, - 0x39, 0x0a, 0x0b, 0x2c, 0x7a, 0x0c, 0xfa, 0x5b, 0x7e, 0x7d, 0xa5, 0x22, 0xe4, 0x73, 0x2d, 0x62, - 0x6c, 0x7d, 0xa5, 0x82, 0x39, 0x0e, 0xcd, 0xc1, 0x00, 0xfb, 0x11, 0x4e, 0x8f, 0xe4, 
0x47, 0x3d, - 0x61, 0x25, 0xb4, 0x3c, 0x57, 0xac, 0x00, 0x16, 0x05, 0x99, 0x56, 0x94, 0x5e, 0x6a, 0x98, 0x56, - 0x74, 0xf0, 0x3e, 0xb5, 0xa2, 0x92, 0x01, 0x8e, 0x79, 0xa1, 0x3b, 0x70, 0xd2, 0xb8, 0x48, 0xf2, - 0x29, 0x42, 0x42, 0x11, 0x79, 0xe1, 0xb1, 0x8e, 0x37, 0x48, 0xe1, 0xc4, 0x70, 0x46, 0x34, 0xfa, - 0xe4, 0x4a, 0x16, 0x27, 0x9c, 0x5d, 0x01, 0x6a, 0xc0, 0x64, 0x2d, 0x55, 0xeb, 0x50, 0xef, 0xb5, - 0xaa, 0x01, 0x4d, 0xd7, 0x98, 0x66, 0x8c, 0x5e, 0x81, 0xa1, 0xb7, 0xfc, 0x90, 0x9d, 0x6d, 0xe2, - 0x4e, 0x21, 0x9f, 0xed, 0x0f, 0xbd, 0x7e, 0xbd, 0xca, 0xe0, 0x07, 0xfb, 0xe5, 0xe1, 0x8a, 0x5f, - 0x97, 0x7f, 0xb1, 0x2a, 0x80, 0x7e, 0xc0, 0x82, 0x99, 0xf4, 0x4d, 0x55, 0x35, 0x7a, 0xb4, 0xf7, - 0x46, 0xdb, 0xa2, 0xd2, 0x99, 0xa5, 0x5c, 0x76, 0xb8, 0x43, 0x55, 0xe8, 0x83, 0x74, 0x21, 0x84, - 0xee, 0x5d, 0x22, 0x92, 0x84, 0x3e, 0x1a, 0x2f, 0x04, 0x0a, 0x3d, 0xd8, 0x2f, 0x8f, 0xf3, 0x2d, - 0x2d, 0x7e, 0x37, 0x23, 0x0a, 0xd8, 0xbf, 0x6c, 0x31, 0xb5, 0xac, 0x80, 0x92, 0xb0, 0xdd, 0x38, - 0x8e, 0xcc, 0xc0, 0x4b, 0x86, 0xc9, 0xf3, 0xbe, 0xfd, 0x61, 0xfe, 0x89, 0xc5, 0xfc, 0x61, 0x8e, - 0xf1, 0xe1, 0xcb, 0xeb, 0x30, 0x14, 0xc9, 0x8c, 0xcd, 0x1d, 0x92, 0x19, 0x6b, 0x8d, 0x62, 0x3e, - 0x41, 0xea, 0x72, 0xa0, 0x92, 0x33, 0x2b, 0x36, 0xf6, 0x3f, 0xe4, 0x23, 0x20, 0x31, 0xc7, 0x60, - 0x59, 0x5a, 0x34, 0x2d, 0x4b, 0xe5, 0x2e, 0x5f, 0x90, 0x63, 0x61, 0xfa, 0x07, 0x66, 0xbb, 0x99, - 0x52, 0xec, 0x9d, 0xee, 0x88, 0x65, 0x7f, 0xde, 0x02, 0x88, 0x63, 0x79, 0xf7, 0x90, 0x93, 0xef, - 0x25, 0x7a, 0x1d, 0xf0, 0x23, 0xbf, 0xe6, 0x37, 0x84, 0xdd, 0xf4, 0x91, 0xd8, 0xb8, 0xc5, 0xe1, - 0x07, 0xda, 0x6f, 0xac, 0xa8, 0x51, 0x59, 0x46, 0x0e, 0x2c, 0xc6, 0xe6, 0x56, 0x23, 0x6a, 0xe0, - 0x17, 0x2d, 0x38, 0x91, 0xe5, 0x45, 0x4d, 0x2f, 0x97, 0x5c, 0x3d, 0xa8, 0x9c, 0xe4, 0xd4, 0x68, - 0xde, 0x14, 0x70, 0xac, 0x28, 0x7a, 0x4e, 0x76, 0x78, 0xb8, 0x20, 0xda, 0xd7, 0x61, 0xb4, 0x12, - 0x10, 0xed, 0x5c, 0x7e, 0x95, 0x47, 0xa3, 0xe0, 0xed, 0x79, 0xe6, 0xd0, 0x91, 0x28, 0xec, 0x2f, - 0x17, 0xe0, 0x04, 0xf7, 
0x35, 0x99, 0xdb, 0xf5, 0xdd, 0x7a, 0xc5, 0xaf, 0x8b, 0xb7, 0x72, 0x6f, - 0xc0, 0x48, 0x4b, 0xd3, 0xe9, 0x76, 0x0a, 0x08, 0xab, 0xeb, 0x7e, 0x63, 0x2d, 0x94, 0x0e, 0xc5, - 0x06, 0x2f, 0x54, 0x87, 0x11, 0xb2, 0xeb, 0xd6, 0x94, 0xc3, 0x42, 0xe1, 0xd0, 0x67, 0xa4, 0xaa, - 0x65, 0x49, 0xe3, 0x83, 0x0d, 0xae, 0x0f, 0x20, 0x05, 0xb9, 0xfd, 0x63, 0x16, 0x3c, 0x94, 0x13, - 0x3e, 0x96, 0x56, 0x77, 0x9b, 0x79, 0xf5, 0x88, 0x69, 0xab, 0xaa, 0xe3, 0xbe, 0x3e, 0x58, 0x60, - 0xd1, 0x47, 0x01, 0xb8, 0xaf, 0x0e, 0xf1, 0x6a, 0x5d, 0xe3, 0x6c, 0x1a, 0x21, 0x02, 0xb5, 0x68, - 0x6f, 0xb2, 0x3c, 0xd6, 0x78, 0xd9, 0x5f, 0xec, 0x83, 0x7e, 0xe6, 0x1b, 0x82, 0x2a, 0x30, 0xb8, - 0xcd, 0x13, 0x02, 0x75, 0x1c, 0x37, 0x4a, 0x2b, 0x73, 0x0c, 0xc5, 0xe3, 0xa6, 0x41, 0xb1, 0x64, - 0x83, 0x56, 0x61, 0x8a, 0xe7, 0x65, 0x6a, 0x2c, 0x92, 0x86, 0xb3, 0x27, 0xd5, 0xa5, 0x3c, 0x89, - 0xb0, 0x52, 0x1b, 0xaf, 0xa4, 0x49, 0x70, 0x56, 0x39, 0xf4, 0x2a, 0x8c, 0xd1, 0xeb, 0xab, 0xdf, - 0x8e, 0x24, 0x27, 0x9e, 0x91, 0x49, 0x49, 0xf4, 0xeb, 0x06, 0x16, 0x27, 0xa8, 0xd1, 0x2b, 0x30, - 0xda, 0x4a, 0x29, 0x86, 0xfb, 0x63, 0x0d, 0x8a, 0xa9, 0x0c, 0x36, 0x69, 0x99, 0x23, 0x75, 0x9b, - 0xb9, 0x8d, 0xaf, 0x6f, 0x07, 0x24, 0xdc, 0xf6, 0x1b, 0x75, 0x26, 0x39, 0xf6, 0x6b, 0x8e, 0xd4, - 0x09, 0x3c, 0x4e, 0x95, 0xa0, 0x5c, 0x36, 0x1d, 0xb7, 0xd1, 0x0e, 0x48, 0xcc, 0x65, 0xc0, 0xe4, - 0xb2, 0x9c, 0xc0, 0xe3, 0x54, 0x89, 0xee, 0x1a, 0xef, 0xc1, 0xa3, 0xd1, 0x78, 0xdb, 0x3f, 0x53, - 0x00, 0x63, 0x68, 0xbf, 0x73, 0x33, 0x45, 0xd1, 0x2f, 0xdb, 0x0a, 0x5a, 0x35, 0xe1, 0x07, 0x95, - 0xf9, 0x65, 0x71, 0x02, 0x58, 0xfe, 0x65, 0xf4, 0x3f, 0x66, 0xa5, 0xe8, 0x1a, 0x3f, 0x59, 0x09, - 0x7c, 0x7a, 0xc8, 0xc9, 0x78, 0x65, 0xea, 0xbd, 0xc2, 0xa0, 0x7c, 0xcb, 0xdd, 0x21, 0xb2, 0xa7, - 0xf0, 0xe8, 0xe6, 0x1c, 0x0c, 0x97, 0xa1, 0xaa, 0x08, 0xaa, 0x20, 0xb9, 0xa0, 0x4b, 0x30, 0x2c, - 0xd2, 0xff, 0x30, 0xb7, 0x7a, 0xbe, 0x98, 0x98, 0x8b, 0xd3, 0x62, 0x0c, 0xc6, 0x3a, 0x8d, 0xfd, - 0x83, 0x05, 0x98, 0xca, 0x78, 0x17, 0xc5, 0x8f, 0x91, 0x2d, 
0x37, 0x8c, 0x54, 0x8e, 0x59, 0xed, - 0x18, 0xe1, 0x70, 0xac, 0x28, 0xe8, 0x5e, 0xc5, 0x0f, 0xaa, 0xe4, 0xe1, 0x24, 0xde, 0x1d, 0x08, - 0xec, 0x21, 0xb3, 0xb5, 0x9e, 0x83, 0xbe, 0x76, 0x48, 0x64, 0x4c, 0x5e, 0x75, 0x6c, 0x33, 0x73, - 0x30, 0xc3, 0xd0, 0x1b, 0xd8, 0x96, 0xb2, 0xac, 0x6a, 0x37, 0x30, 0x6e, 0x5b, 0xe5, 0x38, 0xda, - 0xb8, 0x88, 0x78, 0x8e, 0x17, 0x89, 0x7b, 0x5a, 0x1c, 0x5c, 0x92, 0x41, 0xb1, 0xc0, 0xda, 0x5f, - 0x28, 0xc2, 0xe9, 0xdc, 0x97, 0x92, 0xb4, 0xe9, 0x4d, 0xdf, 0x73, 0x23, 0x5f, 0xf9, 0x8e, 0xf1, - 0x80, 0x92, 0xa4, 0xb5, 0xbd, 0x2a, 0xe0, 0x58, 0x51, 0xa0, 0xf3, 0xd0, 0xcf, 0x94, 0xc9, 0xa9, - 0x6c, 0xbb, 0xf3, 0x8b, 0x3c, 0xc2, 0x18, 0x47, 0xf7, 0x9c, 0x20, 0xfd, 0x31, 0x2a, 0xc1, 0xf8, - 0x8d, 0xe4, 0x81, 0x42, 0x9b, 0xeb, 0xfb, 0x0d, 0xcc, 0x90, 0xe8, 0x09, 0xd1, 0x5f, 0x09, 0x67, - 0x29, 0xec, 0xd4, 0xfd, 0x50, 0xeb, 0xb4, 0xa7, 0x60, 0x70, 0x87, 0xec, 0x05, 0xae, 0xb7, 0x95, - 0x74, 0xa2, 0xbb, 0xca, 0xc1, 0x58, 0xe2, 0xcd, 0xf4, 0x90, 0x83, 0x47, 0x9d, 0xd9, 0x7c, 0xa8, - 0xab, 0x78, 0xf2, 0xc3, 0x45, 0x18, 0xc7, 0xf3, 0x8b, 0xef, 0x0e, 0xc4, 0x8d, 0xf4, 0x40, 0x1c, - 0x75, 0x66, 0xf3, 0xee, 0xa3, 0xf1, 0x0b, 0x16, 0x8c, 0xb3, 0x24, 0x44, 0x22, 0x1e, 0x82, 0xeb, - 0x7b, 0xc7, 0x70, 0x15, 0x78, 0x0c, 0xfa, 0x03, 0x5a, 0x69, 0x32, 0xcd, 0x2e, 0x6b, 0x09, 0xe6, - 0x38, 0xf4, 0x08, 0xf4, 0xb1, 0x26, 0xd0, 0xc1, 0x1b, 0xe1, 0x5b, 0xf0, 0xa2, 0x13, 0x39, 0x98, - 0x41, 0x59, 0x7c, 0x2d, 0x4c, 0x5a, 0x0d, 0x97, 0x37, 0x3a, 0x36, 0xf5, 0xbf, 0x33, 0x62, 0x28, - 0x64, 0x36, 0xed, 0xed, 0xc5, 0xd7, 0xca, 0x66, 0xd9, 0xf9, 0x9a, 0xfd, 0xc7, 0x05, 0x38, 0x9b, - 0x59, 0xae, 0xe7, 0xf8, 0x5a, 0x9d, 0x4b, 0x3f, 0xc8, 0x34, 0x33, 0xc5, 0x63, 0x74, 0x51, 0xee, - 0xeb, 0x55, 0xfa, 0xef, 0xef, 0x21, 0xec, 0x55, 0x66, 0x97, 0xbd, 0x43, 0xc2, 0x5e, 0x65, 0xb6, - 0x2d, 0x47, 0x4d, 0xf0, 0xe7, 0x85, 0x9c, 0x6f, 0x61, 0x0a, 0x83, 0x0b, 0x74, 0x9f, 0x61, 0xc8, - 0x50, 0x5e, 0xc2, 0xf9, 0x1e, 0xc3, 0x61, 0x58, 0x61, 0xd1, 0x1c, 0x8c, 0x37, 0x5d, 0x8f, 0x6e, - 
0x3e, 0x7b, 0xa6, 0x28, 0xae, 0x6c, 0x00, 0xab, 0x26, 0x1a, 0x27, 0xe9, 0x91, 0xab, 0x85, 0xc4, - 0xe2, 0x5f, 0xf7, 0xca, 0xa1, 0x56, 0xdd, 0xac, 0xe9, 0x06, 0xa1, 0x7a, 0x31, 0x23, 0x3c, 0xd6, - 0xaa, 0xa6, 0x27, 0x2a, 0xf6, 0xae, 0x27, 0x1a, 0xc9, 0xd6, 0x11, 0xcd, 0xbc, 0x02, 0xa3, 0xf7, - 0x6d, 0x53, 0xb0, 0xbf, 0x5e, 0x84, 0x87, 0x3b, 0x2c, 0x7b, 0xbe, 0xd7, 0x1b, 0x63, 0xa0, 0xed, - 0xf5, 0xa9, 0x71, 0xa8, 0xc0, 0x89, 0xcd, 0x76, 0xa3, 0xb1, 0xc7, 0x5e, 0xee, 0x90, 0xba, 0xa4, - 0x10, 0x32, 0xa5, 0x54, 0x8e, 0x9c, 0x58, 0xce, 0xa0, 0xc1, 0x99, 0x25, 0xe9, 0x15, 0x8b, 0x9e, - 0x24, 0x7b, 0x8a, 0x55, 0xe2, 0x8a, 0x85, 0x75, 0x24, 0x36, 0x69, 0xd1, 0x65, 0x98, 0x74, 0x76, - 0x1d, 0x97, 0xc7, 0x15, 0x97, 0x0c, 0xf8, 0x1d, 0x4b, 0xa9, 0x82, 0xe7, 0x92, 0x04, 0x38, 0x5d, - 0x06, 0xbd, 0x06, 0xc8, 0xdf, 0x60, 0xfe, 0xfd, 0xf5, 0xcb, 0xc4, 0x13, 0xd6, 0x6a, 0x36, 0x76, - 0xc5, 0x78, 0x4b, 0xb8, 0x9e, 0xa2, 0xc0, 0x19, 0xa5, 0x12, 0xf1, 0x9f, 0x06, 0xf2, 0xe3, 0x3f, - 0x75, 0xde, 0x17, 0xbb, 0x66, 0x38, 0xba, 0x04, 0xa3, 0x87, 0xf4, 0x5a, 0xb5, 0xff, 0x83, 0x45, - 0x4f, 0x3c, 0x5e, 0xc6, 0x0c, 0xae, 0xfa, 0x0a, 0x73, 0xab, 0xe5, 0x9a, 0x65, 0x2d, 0xc0, 0xce, - 0x49, 0xcd, 0xad, 0x36, 0x46, 0x62, 0x93, 0x96, 0xcf, 0x21, 0xcd, 0x1d, 0xd6, 0xb8, 0x15, 0x88, - 0x08, 0x70, 0x8a, 0x02, 0x7d, 0x0c, 0x06, 0xeb, 0xee, 0xae, 0x1b, 0x0a, 0xe5, 0xd8, 0xa1, 0x8d, - 0x58, 0xf1, 0xd6, 0xb9, 0xc8, 0xd9, 0x60, 0xc9, 0xcf, 0xfe, 0xe1, 0x42, 0xdc, 0x27, 0xaf, 0xb7, - 0xfd, 0xc8, 0x39, 0x86, 0x93, 0xfc, 0xb2, 0x71, 0x92, 0x3f, 0xd1, 0x29, 0x0c, 0x1e, 0x6b, 0x52, - 0xee, 0x09, 0x7e, 0x3d, 0x71, 0x82, 0x3f, 0xd9, 0x9d, 0x55, 0xe7, 0x93, 0xfb, 0x1f, 0x59, 0x30, - 0x69, 0xd0, 0x1f, 0xc3, 0x01, 0xb2, 0x6c, 0x1e, 0x20, 0x8f, 0x76, 0xfd, 0x86, 0x9c, 0x83, 0xe3, - 0xfb, 0x8a, 0x89, 0xb6, 0xb3, 0x03, 0xe3, 0x2d, 0xe8, 0xdb, 0x76, 0x82, 0x7a, 0xa7, 0xb4, 0x1f, - 0xa9, 0x42, 0xb3, 0x57, 0x9c, 0x40, 0x58, 0xf8, 0x9f, 0x91, 0xbd, 0x4e, 0x41, 0x5d, 0xad, 0xfb, - 0xac, 0x2a, 0xf4, 0x12, 0x0c, 0x84, 
0x35, 0xbf, 0xa5, 0x9e, 0xfa, 0x9c, 0x63, 0x1d, 0xcd, 0x20, - 0x07, 0xfb, 0x65, 0x64, 0x56, 0x47, 0xc1, 0x58, 0xd0, 0xa3, 0x37, 0x60, 0x94, 0xfd, 0x52, 0xee, - 0x76, 0xc5, 0x7c, 0x0d, 0x46, 0x55, 0x27, 0xe4, 0xbe, 0xa8, 0x06, 0x08, 0x9b, 0xac, 0x66, 0xb6, - 0xa0, 0xa4, 0x3e, 0xeb, 0x81, 0x5a, 0x89, 0xff, 0x6d, 0x11, 0xa6, 0x32, 0xe6, 0x1c, 0x0a, 0x8d, - 0x91, 0xb8, 0xd4, 0xe3, 0x54, 0x7d, 0x9b, 0x63, 0x11, 0xb2, 0x0b, 0x54, 0x5d, 0xcc, 0xad, 0x9e, - 0x2b, 0xbd, 0x11, 0x92, 0x64, 0xa5, 0x14, 0xd4, 0xbd, 0x52, 0x5a, 0xd9, 0xb1, 0x75, 0x35, 0xad, - 0x48, 0xb5, 0xf4, 0x81, 0x8e, 0xe9, 0xaf, 0xf5, 0xc1, 0x89, 0xac, 0xc8, 0x9c, 0xe8, 0x33, 0x89, - 0xa4, 0xb3, 0x2f, 0xf4, 0x1a, 0xd3, 0x93, 0x67, 0xa2, 0x15, 0x11, 0x03, 0x67, 0xcd, 0x34, 0xb4, - 0x5d, 0xbb, 0x59, 0xd4, 0xc9, 0x62, 0x96, 0x04, 0x3c, 0x59, 0xb0, 0xdc, 0x3e, 0xde, 0xdf, 0x73, - 0x03, 0x44, 0x96, 0xe1, 0x30, 0xe1, 0xca, 0x23, 0xc1, 0xdd, 0x5d, 0x79, 0x64, 0xcd, 0x68, 0x05, - 0x06, 0x6a, 0xdc, 0x47, 0xa4, 0xd8, 0x7d, 0x0b, 0xe3, 0x0e, 0x22, 0x6a, 0x03, 0x16, 0x8e, 0x21, - 0x82, 0xc1, 0x8c, 0x0b, 0xc3, 0x5a, 0xc7, 0x3c, 0xd0, 0xc9, 0xb3, 0x43, 0x0f, 0x3e, 0xad, 0x0b, - 0x1e, 0xe8, 0x04, 0xfa, 0x31, 0x0b, 0x12, 0x0f, 0x45, 0x94, 0x52, 0xce, 0xca, 0x55, 0xca, 0x9d, - 0x83, 0xbe, 0xc0, 0x6f, 0x90, 0x64, 0xa2, 0x57, 0xec, 0x37, 0x08, 0x66, 0x18, 0x4a, 0x11, 0xc5, - 0xaa, 0x96, 0x11, 0xfd, 0x1a, 0x29, 0x2e, 0x88, 0x8f, 0x41, 0x7f, 0x83, 0xec, 0x92, 0x46, 0x32, - 0x1f, 0xd7, 0x35, 0x0a, 0xc4, 0x1c, 0x67, 0xff, 0x42, 0x1f, 0x9c, 0xe9, 0x18, 0x40, 0x88, 0x5e, - 0xc6, 0xb6, 0x9c, 0x88, 0xdc, 0x76, 0xf6, 0x92, 0x89, 0x73, 0x2e, 0x73, 0x30, 0x96, 0x78, 0xf6, - 0x6a, 0x91, 0xc7, 0xbf, 0x4f, 0xa8, 0x30, 0x45, 0xd8, 0x7b, 0x81, 0x35, 0x55, 0x62, 0xc5, 0xa3, - 0x50, 0x89, 0x3d, 0x07, 0x10, 0x86, 0x0d, 0xee, 0x4e, 0x57, 0x17, 0xcf, 0x21, 0xe3, 0x3c, 0x09, - 0xd5, 0x6b, 0x02, 0x83, 0x35, 0x2a, 0xb4, 0x08, 0x13, 0xad, 0xc0, 0x8f, 0xb8, 0x46, 0x78, 0x91, - 0x7b, 0x9c, 0xf6, 0x9b, 0xb1, 0x5b, 0x2a, 0x09, 0x3c, 0x4e, 0x95, 0x40, 
0x2f, 0xc2, 0xb0, 0x88, - 0xe7, 0x52, 0xf1, 0xfd, 0x86, 0x50, 0x42, 0x29, 0x27, 0xcc, 0x6a, 0x8c, 0xc2, 0x3a, 0x9d, 0x56, - 0x8c, 0xa9, 0x99, 0x07, 0x33, 0x8b, 0x71, 0x55, 0xb3, 0x46, 0x97, 0x08, 0xf8, 0x3b, 0xd4, 0x53, - 0xc0, 0xdf, 0x58, 0x2d, 0x57, 0xea, 0xd9, 0xea, 0x09, 0x5d, 0x15, 0x59, 0x5f, 0xe9, 0x83, 0x29, - 0x31, 0x71, 0x1e, 0xf4, 0x74, 0xb9, 0x91, 0x9e, 0x2e, 0x47, 0xa1, 0xb8, 0x7b, 0x77, 0xce, 0x1c, - 0xf7, 0x9c, 0xf9, 0x11, 0x0b, 0x4c, 0x49, 0x0d, 0xfd, 0x7f, 0xb9, 0x99, 0xc7, 0x5e, 0xcc, 0x95, - 0xfc, 0x94, 0xc3, 0xe1, 0xdb, 0xcc, 0x41, 0x66, 0xff, 0x3b, 0x0b, 0x1e, 0xed, 0xca, 0x11, 0x2d, - 0x41, 0x89, 0x89, 0x93, 0xda, 0x45, 0xef, 0x49, 0xe5, 0x91, 0x2e, 0x11, 0x39, 0xd2, 0x6d, 0x5c, - 0x12, 0x2d, 0xa5, 0x52, 0xbc, 0x3d, 0x95, 0x91, 0xe2, 0xed, 0xa4, 0xd1, 0x3d, 0xf7, 0x99, 0xe3, - 0xed, 0x87, 0xe8, 0x89, 0x63, 0xbc, 0x06, 0x43, 0xef, 0x37, 0x94, 0x8e, 0x76, 0x42, 0xe9, 0x88, - 0x4c, 0x6a, 0xed, 0x0c, 0xf9, 0x08, 0x4c, 0xb0, 0x40, 0x6f, 0xec, 0x7d, 0x84, 0x78, 0xa7, 0x56, - 0x88, 0x7d, 0xa0, 0xaf, 0x25, 0x70, 0x38, 0x45, 0x6d, 0xff, 0x51, 0x11, 0x06, 0xf8, 0xf2, 0x3b, - 0x86, 0xeb, 0xe5, 0xd3, 0x50, 0x72, 0x9b, 0xcd, 0x36, 0xcf, 0xda, 0xd5, 0x1f, 0x7b, 0xd4, 0xae, - 0x48, 0x20, 0x8e, 0xf1, 0x68, 0x59, 0xe8, 0xbb, 0x3b, 0xc4, 0x92, 0xe5, 0x0d, 0x9f, 0x5d, 0x74, - 0x22, 0x87, 0xcb, 0x4a, 0xea, 0x9c, 0x8d, 0x35, 0xe3, 0xe8, 0x93, 0x00, 0x61, 0x14, 0xb8, 0xde, - 0x16, 0x85, 0x89, 0x10, 0xd6, 0xef, 0xed, 0xc0, 0xad, 0xaa, 0x88, 0x39, 0xcf, 0x78, 0xcf, 0x51, - 0x08, 0xac, 0x71, 0x44, 0xb3, 0xc6, 0x49, 0x3f, 0x93, 0x18, 0x3b, 0xe0, 0x5c, 0xe3, 0x31, 0x9b, - 0xf9, 0x00, 0x94, 0x14, 0xf3, 0x6e, 0xda, 0xaf, 0x11, 0x5d, 0x2c, 0xfa, 0x30, 0x8c, 0x27, 0xda, - 0x76, 0x28, 0xe5, 0xd9, 0x2f, 0x5a, 0x30, 0xce, 0x1b, 0xb3, 0xe4, 0xed, 0x8a, 0xd3, 0xe0, 0x2e, - 0x9c, 0x68, 0x64, 0xec, 0xca, 0x62, 0xf8, 0x7b, 0xdf, 0xc5, 0x95, 0xb2, 0x2c, 0x0b, 0x8b, 0x33, - 0xeb, 0x40, 0x17, 0xe8, 0x8a, 0xa3, 0xbb, 0xae, 0xd3, 0x10, 0xcf, 0xf2, 0x47, 0xf8, 0x6a, 0xe3, - 0x30, 0xac, 
0xb0, 0xf6, 0xef, 0x59, 0x30, 0xc9, 0x5b, 0x7e, 0x95, 0xec, 0xa9, 0xbd, 0xe9, 0x5b, - 0xd9, 0x76, 0x91, 0x2f, 0xb2, 0x90, 0x93, 0x2f, 0x52, 0xff, 0xb4, 0x62, 0xc7, 0x4f, 0xfb, 0xb2, - 0x05, 0x62, 0x86, 0x1c, 0x83, 0x3e, 0xe3, 0xbb, 0x4c, 0x7d, 0xc6, 0x4c, 0xfe, 0x22, 0xc8, 0x51, - 0x64, 0xfc, 0x99, 0x05, 0x13, 0x9c, 0x20, 0xb6, 0xd5, 0x7f, 0x4b, 0xc7, 0xa1, 0x97, 0xac, 0xf2, - 0x57, 0xc9, 0xde, 0xba, 0x5f, 0x71, 0xa2, 0xed, 0xec, 0x8f, 0x32, 0x06, 0xab, 0xaf, 0xe3, 0x60, - 0xd5, 0xe5, 0x02, 0x32, 0xd2, 0x29, 0x75, 0x79, 0x5c, 0x7f, 0xd8, 0x74, 0x4a, 0xf6, 0x37, 0x2d, - 0x40, 0xbc, 0x1a, 0x43, 0x70, 0xa3, 0xe2, 0x10, 0x83, 0x6a, 0x07, 0x5d, 0xbc, 0x35, 0x29, 0x0c, - 0xd6, 0xa8, 0x8e, 0xa4, 0x7b, 0x12, 0x0e, 0x17, 0xc5, 0xee, 0x0e, 0x17, 0x87, 0xe8, 0xd1, 0x7f, - 0x31, 0x00, 0xc9, 0x17, 0x71, 0xe8, 0x26, 0x8c, 0xd4, 0x9c, 0x96, 0xb3, 0xe1, 0x36, 0xdc, 0xc8, - 0x25, 0x61, 0x27, 0x6f, 0xac, 0x05, 0x8d, 0x4e, 0x98, 0xc8, 0x35, 0x08, 0x36, 0xf8, 0xa0, 0x59, - 0x80, 0x56, 0xe0, 0xee, 0xba, 0x0d, 0xb2, 0xc5, 0xd4, 0x2e, 0x2c, 0x10, 0x08, 0x77, 0x0d, 0x93, - 0x50, 0xac, 0x51, 0x64, 0x84, 0x1f, 0x28, 0x3e, 0xe0, 0xf0, 0x03, 0x70, 0x6c, 0xe1, 0x07, 0xfa, - 0x0e, 0x15, 0x7e, 0x60, 0xe8, 0xd0, 0xe1, 0x07, 0xfa, 0x7b, 0x0a, 0x3f, 0x80, 0xe1, 0x94, 0x94, - 0x3d, 0xe9, 0xff, 0x65, 0xb7, 0x41, 0xc4, 0x85, 0x83, 0x47, 0x2f, 0x99, 0xb9, 0xb7, 0x5f, 0x3e, - 0x85, 0x33, 0x29, 0x70, 0x4e, 0x49, 0xf4, 0x51, 0x98, 0x76, 0x1a, 0x0d, 0xff, 0xb6, 0x1a, 0xd4, - 0xa5, 0xb0, 0xe6, 0x34, 0xb8, 0x09, 0x64, 0x90, 0x71, 0x7d, 0xe4, 0xde, 0x7e, 0x79, 0x7a, 0x2e, - 0x87, 0x06, 0xe7, 0x96, 0x46, 0x1f, 0x82, 0x52, 0x2b, 0xf0, 0x6b, 0xab, 0xda, 0xb3, 0xdd, 0xb3, - 0xb4, 0x03, 0x2b, 0x12, 0x78, 0xb0, 0x5f, 0x1e, 0x55, 0x7f, 0xd8, 0x81, 0x1f, 0x17, 0xc8, 0x88, - 0x27, 0x30, 0x7c, 0xa4, 0xf1, 0x04, 0x76, 0x60, 0xaa, 0x4a, 0x02, 0xd7, 0x69, 0xb8, 0x77, 0xa9, - 0xbc, 0x2c, 0xf7, 0xa7, 0x75, 0x28, 0x05, 0x89, 0x1d, 0xb9, 0xa7, 0xf8, 0xae, 0x5a, 0x5e, 0x1b, - 0xb9, 0x03, 0xc7, 0x8c, 0xec, 0xff, 0x65, 0xc1, 
0xa0, 0x78, 0x01, 0x77, 0x0c, 0x52, 0xe3, 0x9c, - 0x61, 0x94, 0x28, 0x67, 0x77, 0x18, 0x6b, 0x4c, 0xae, 0x39, 0x62, 0x25, 0x61, 0x8e, 0x78, 0xb4, - 0x13, 0x93, 0xce, 0x86, 0x88, 0xbf, 0x5e, 0xa4, 0xd2, 0xbb, 0xf1, 0x16, 0xfb, 0xc1, 0x77, 0xc1, - 0x1a, 0x0c, 0x86, 0xe2, 0x2d, 0x70, 0x21, 0xff, 0x31, 0x46, 0x72, 0x10, 0x63, 0x2f, 0x3a, 0xf1, - 0xfa, 0x57, 0x32, 0xc9, 0x7c, 0x64, 0x5c, 0x7c, 0x80, 0x8f, 0x8c, 0xbb, 0xbd, 0x56, 0xef, 0x3b, - 0x8a, 0xd7, 0xea, 0xf6, 0xd7, 0xd8, 0xc9, 0xa9, 0xc3, 0x8f, 0x41, 0xa8, 0xba, 0x6c, 0x9e, 0xb1, - 0x76, 0x87, 0x99, 0x25, 0x1a, 0x95, 0x23, 0x5c, 0xfd, 0xbc, 0x05, 0x67, 0x32, 0xbe, 0x4a, 0x93, - 0xb4, 0x9e, 0x81, 0x21, 0xa7, 0x5d, 0x77, 0xd5, 0x5a, 0xd6, 0x4c, 0x93, 0x73, 0x02, 0x8e, 0x15, - 0x05, 0x5a, 0x80, 0x49, 0x72, 0xa7, 0xe5, 0x72, 0x43, 0xae, 0xee, 0x7c, 0x5c, 0xe4, 0xcf, 0x26, - 0x97, 0x92, 0x48, 0x9c, 0xa6, 0x57, 0x71, 0x8d, 0x8a, 0xb9, 0x71, 0x8d, 0xfe, 0x8e, 0x05, 0xc3, - 0xea, 0x35, 0xec, 0x03, 0xef, 0xed, 0x8f, 0x98, 0xbd, 0xfd, 0x70, 0x87, 0xde, 0xce, 0xe9, 0xe6, - 0xdf, 0x29, 0xa8, 0xf6, 0x56, 0xfc, 0x20, 0xea, 0x41, 0x82, 0xbb, 0xff, 0x87, 0x13, 0x97, 0x60, - 0xd8, 0x69, 0xb5, 0x24, 0x42, 0x7a, 0xc0, 0xb1, 0x68, 0xdd, 0x31, 0x18, 0xeb, 0x34, 0xea, 0x1d, - 0x47, 0x31, 0xf7, 0x1d, 0x47, 0x1d, 0x20, 0x72, 0x82, 0x2d, 0x12, 0x51, 0x98, 0x70, 0xd8, 0xcd, - 0xdf, 0x6f, 0xda, 0x91, 0xdb, 0x98, 0x75, 0xbd, 0x28, 0x8c, 0x82, 0xd9, 0x15, 0x2f, 0xba, 0x1e, - 0xf0, 0x2b, 0xa4, 0x16, 0x19, 0x4c, 0xf1, 0xc2, 0x1a, 0x5f, 0x19, 0xf9, 0x81, 0xd5, 0xd1, 0x6f, - 0xba, 0x52, 0xac, 0x09, 0x38, 0x56, 0x14, 0xf6, 0x07, 0xd8, 0xe9, 0xc3, 0xfa, 0xf4, 0x70, 0x51, - 0xb1, 0x7e, 0x6a, 0x44, 0x8d, 0x06, 0x33, 0x8a, 0x2e, 0xea, 0xb1, 0xb7, 0x3a, 0x6f, 0xf6, 0xb4, - 0x62, 0xfd, 0x41, 0x62, 0x1c, 0xa0, 0x0b, 0x7d, 0x3c, 0xe5, 0x1e, 0xf3, 0x6c, 0x97, 0x53, 0xe3, - 0x10, 0x0e, 0x31, 0x2c, 0x75, 0x0f, 0x4b, 0x6c, 0xb2, 0x52, 0x11, 0xeb, 0x42, 0x4b, 0xdd, 0x23, - 0x10, 0x38, 0xa6, 0xa1, 0xc2, 0x94, 0xfa, 0x13, 0x4e, 0xa3, 0x38, 0x84, 0xad, 0xa2, 
0x0e, 0xb1, - 0x46, 0x81, 0x2e, 0x0a, 0x85, 0x02, 0xb7, 0x0b, 0x3c, 0x9c, 0x50, 0x28, 0xc8, 0xee, 0xd2, 0xb4, - 0x40, 0x97, 0x60, 0x58, 0x25, 0x6a, 0xaf, 0xf0, 0xa4, 0x59, 0x62, 0x9a, 0x2d, 0xc5, 0x60, 0xac, - 0xd3, 0xa0, 0x75, 0x18, 0x0f, 0xb9, 0x9e, 0x4d, 0xc5, 0x15, 0xe7, 0xfa, 0xca, 0xf7, 0xaa, 0x77, - 0xc8, 0x26, 0xfa, 0x80, 0x81, 0xf8, 0xee, 0x24, 0xa3, 0x33, 0x24, 0x59, 0xa0, 0x57, 0x61, 0xac, - 0xe1, 0x3b, 0xf5, 0x79, 0xa7, 0xe1, 0x78, 0x35, 0xd6, 0x3f, 0x43, 0x66, 0xbe, 0xdf, 0x6b, 0x06, - 0x16, 0x27, 0xa8, 0xa9, 0xf0, 0xa6, 0x43, 0x44, 0x74, 0x31, 0xc7, 0xdb, 0x22, 0xa1, 0x48, 0xbb, - 0xcd, 0x84, 0xb7, 0x6b, 0x39, 0x34, 0x38, 0xb7, 0x34, 0x7a, 0x09, 0x46, 0xe4, 0xe7, 0x6b, 0xc1, - 0x4c, 0xe2, 0x27, 0x31, 0x1a, 0x0e, 0x1b, 0x94, 0x28, 0x84, 0x93, 0xf2, 0xff, 0x7a, 0xe0, 0x6c, - 0x6e, 0xba, 0x35, 0xf1, 0xc2, 0x9f, 0x3f, 0xbb, 0xfd, 0xb0, 0x7c, 0x1b, 0xba, 0x94, 0x45, 0x74, - 0xb0, 0x5f, 0x7e, 0x44, 0xf4, 0x5a, 0x26, 0x1e, 0x67, 0xf3, 0x46, 0xab, 0x30, 0xb5, 0x4d, 0x9c, - 0x46, 0xb4, 0xbd, 0xb0, 0x4d, 0x6a, 0x3b, 0x72, 0xc1, 0xb1, 0xf0, 0x28, 0xda, 0xd3, 0x91, 0x2b, - 0x69, 0x12, 0x9c, 0x55, 0x0e, 0xbd, 0x09, 0xd3, 0xad, 0xf6, 0x46, 0xc3, 0x0d, 0xb7, 0xd7, 0xfc, - 0x88, 0x39, 0x21, 0xa9, 0x9c, 0xef, 0x22, 0x8e, 0x8a, 0x0a, 0x40, 0x53, 0xc9, 0xa1, 0xc3, 0xb9, - 0x1c, 0xd0, 0x5d, 0x38, 0x99, 0x98, 0x08, 0x22, 0x92, 0xc4, 0x58, 0x7e, 0x56, 0x91, 0x6a, 0x56, - 0x01, 0x11, 0x94, 0x25, 0x0b, 0x85, 0xb3, 0xab, 0x40, 0x2f, 0x03, 0xb8, 0xad, 0x65, 0xa7, 0xe9, - 0x36, 0xe8, 0x55, 0x71, 0x8a, 0xcd, 0x11, 0x7a, 0x6d, 0x80, 0x95, 0x8a, 0x84, 0xd2, 0xbd, 0x59, - 0xfc, 0xdb, 0xc3, 0x1a, 0x35, 0xba, 0x06, 0x63, 0xe2, 0xdf, 0x9e, 0x18, 0x52, 0x1e, 0xd0, 0xe4, - 0x71, 0x16, 0x8d, 0xaa, 0xa2, 0x63, 0x0e, 0x52, 0x10, 0x9c, 0x28, 0x8b, 0xb6, 0xe0, 0x8c, 0xcc, - 0x10, 0xa7, 0xcf, 0x4f, 0x39, 0x06, 0x21, 0x4b, 0xe5, 0x31, 0xc4, 0x5f, 0xa5, 0xcc, 0x75, 0x22, - 0xc4, 0x9d, 0xf9, 0xd0, 0x73, 0x5d, 0x9f, 0xe6, 0xfc, 0xc9, 0xef, 0x49, 0xee, 0xe1, 0x44, 0xcf, - 0xf5, 0x6b, 0x49, 0x24, 
0x4e, 0xd3, 0x23, 0x1f, 0x4e, 0xba, 0x5e, 0xd6, 0xac, 0x3e, 0xc5, 0x18, - 0x7d, 0x90, 0xbf, 0x76, 0xee, 0x3c, 0xa3, 0x33, 0xf1, 0x38, 0x9b, 0xef, 0xdb, 0xf3, 0xfb, 0xfb, - 0x5d, 0x8b, 0x96, 0xd6, 0xa4, 0x73, 0xf4, 0x29, 0x18, 0xd1, 0x3f, 0x4a, 0x48, 0x1a, 0xe7, 0xb3, - 0x85, 0x57, 0x6d, 0x4f, 0xe0, 0xb2, 0xbd, 0x5a, 0xf7, 0x3a, 0x0e, 0x1b, 0x1c, 0x51, 0x2d, 0x23, - 0x26, 0xc0, 0xc5, 0xde, 0x24, 0x99, 0xde, 0xdd, 0xde, 0x08, 0x64, 0x4f, 0x77, 0x74, 0x0d, 0x86, - 0x6a, 0x0d, 0x97, 0x78, 0xd1, 0x4a, 0xa5, 0x53, 0xd4, 0xc3, 0x05, 0x41, 0x23, 0xd6, 0x8f, 0xc8, - 0xca, 0xc1, 0x61, 0x58, 0x71, 0xb0, 0x7f, 0xa3, 0x00, 0xe5, 0x2e, 0x29, 0x5e, 0x12, 0x66, 0x28, - 0xab, 0x27, 0x33, 0xd4, 0x1c, 0x8c, 0xc7, 0xff, 0x74, 0x0d, 0x97, 0xf2, 0x64, 0xbd, 0x69, 0xa2, - 0x71, 0x92, 0xbe, 0xe7, 0x47, 0x09, 0xba, 0x25, 0xab, 0xaf, 0xeb, 0xb3, 0x1a, 0xc3, 0x82, 0xdd, - 0xdf, 0xfb, 0xb5, 0x37, 0xd7, 0x1a, 0x69, 0x7f, 0xad, 0x00, 0x27, 0x55, 0x17, 0x7e, 0xe7, 0x76, - 0xdc, 0x8d, 0x74, 0xc7, 0x1d, 0x81, 0x2d, 0xd7, 0xbe, 0x0e, 0x03, 0x3c, 0x8c, 0x63, 0x0f, 0xe2, - 0xf6, 0x63, 0x66, 0x70, 0x67, 0x25, 0xe1, 0x19, 0x01, 0x9e, 0x7f, 0xc0, 0x82, 0xf1, 0xc4, 0xeb, - 0x36, 0x84, 0xb5, 0x27, 0xd0, 0xf7, 0x23, 0x12, 0x67, 0x09, 0xdb, 0xe7, 0xa0, 0x6f, 0xdb, 0x0f, - 0xa3, 0xa4, 0xa3, 0xc7, 0x15, 0x3f, 0x8c, 0x30, 0xc3, 0xd8, 0xbf, 0x6f, 0x41, 0xff, 0xba, 0xe3, - 0x7a, 0x91, 0x34, 0x0a, 0x58, 0x39, 0x46, 0x81, 0x5e, 0xbe, 0x0b, 0xbd, 0x08, 0x03, 0x64, 0x73, - 0x93, 0xd4, 0x22, 0x31, 0xaa, 0x32, 0xf4, 0xc4, 0xc0, 0x12, 0x83, 0x52, 0xf9, 0x8f, 0x55, 0xc6, - 0xff, 0x62, 0x41, 0x8c, 0x6e, 0x41, 0x29, 0x72, 0x9b, 0x64, 0xae, 0x5e, 0x17, 0xa6, 0xf2, 0xfb, - 0x08, 0x9f, 0xb1, 0x2e, 0x19, 0xe0, 0x98, 0x97, 0xfd, 0x85, 0x02, 0x40, 0x1c, 0xff, 0xaa, 0xdb, - 0x27, 0xce, 0xa7, 0x8c, 0xa8, 0xe7, 0x33, 0x8c, 0xa8, 0x28, 0x66, 0x98, 0x61, 0x41, 0x55, 0xdd, - 0x54, 0xec, 0xa9, 0x9b, 0xfa, 0x0e, 0xd3, 0x4d, 0x0b, 0x30, 0x19, 0xc7, 0xef, 0x32, 0xc3, 0x17, - 0xb2, 0xa3, 0x73, 0x3d, 0x89, 0xc4, 0x69, 0x7a, 0x9b, 0xc0, 
0x39, 0x15, 0xc6, 0x48, 0x9c, 0x68, - 0xcc, 0x0f, 0x5c, 0x37, 0x4a, 0x77, 0xe9, 0xa7, 0xd8, 0x4a, 0x5c, 0xc8, 0xb5, 0x12, 0xff, 0xa4, - 0x05, 0x27, 0x92, 0xf5, 0xb0, 0x47, 0xd3, 0x9f, 0xb7, 0xe0, 0x24, 0xb3, 0x95, 0xb3, 0x5a, 0xd3, - 0x96, 0xf9, 0x17, 0x3a, 0x86, 0x66, 0xca, 0x69, 0x71, 0x1c, 0xe3, 0x64, 0x35, 0x8b, 0x35, 0xce, - 0xae, 0xd1, 0xfe, 0x9f, 0x7d, 0x30, 0x9d, 0x17, 0xd3, 0x89, 0x3d, 0x13, 0x71, 0xee, 0x54, 0x77, - 0xc8, 0x6d, 0xe1, 0x8c, 0x1f, 0x3f, 0x13, 0xe1, 0x60, 0x2c, 0xf1, 0xc9, 0xac, 0x1d, 0x85, 0x1e, - 0xb3, 0x76, 0x6c, 0xc3, 0xe4, 0xed, 0x6d, 0xe2, 0xdd, 0xf0, 0x42, 0x27, 0x72, 0xc3, 0x4d, 0x97, - 0xd9, 0x95, 0xf9, 0xbc, 0x91, 0xa9, 0x7e, 0x27, 0x6f, 0x25, 0x09, 0x0e, 0xf6, 0xcb, 0x67, 0x0c, - 0x40, 0xdc, 0x64, 0xbe, 0x91, 0xe0, 0x34, 0xd3, 0x74, 0xd2, 0x93, 0xbe, 0x07, 0x9c, 0xf4, 0xa4, - 0xe9, 0x0a, 0x6f, 0x14, 0xf9, 0x06, 0x80, 0xdd, 0x18, 0x57, 0x15, 0x14, 0x6b, 0x14, 0xe8, 0x13, - 0x80, 0xf4, 0xa4, 0x4e, 0x46, 0x48, 0xcd, 0x67, 0xef, 0xed, 0x97, 0xd1, 0x5a, 0x0a, 0x7b, 0xb0, - 0x5f, 0x9e, 0xa2, 0xd0, 0x15, 0x8f, 0xde, 0x3c, 0xe3, 0x38, 0x64, 0x19, 0x8c, 0xd0, 0x2d, 0x98, - 0xa0, 0x50, 0xb6, 0xa2, 0x64, 0xbc, 0x4e, 0x7e, 0x5b, 0x7c, 0xfa, 0xde, 0x7e, 0x79, 0x62, 0x2d, - 0x81, 0xcb, 0x63, 0x9d, 0x62, 0x82, 0x5e, 0x86, 0xb1, 0x78, 0x5e, 0x5d, 0x25, 0x7b, 0x3c, 0x3e, - 0x4e, 0x89, 0x2b, 0xbc, 0x57, 0x0d, 0x0c, 0x4e, 0x50, 0xda, 0x9f, 0xb7, 0xe0, 0x74, 0x6e, 0xe2, - 0x71, 0x74, 0x01, 0x86, 0x9c, 0x96, 0xcb, 0xcd, 0x17, 0xe2, 0xa8, 0x61, 0x6a, 0xb2, 0xca, 0x0a, - 0x37, 0x5e, 0x28, 0x2c, 0xdd, 0xe1, 0x77, 0x5c, 0xaf, 0x9e, 0xdc, 0xe1, 0xaf, 0xba, 0x5e, 0x1d, - 0x33, 0x8c, 0x3a, 0xb2, 0x8a, 0xb9, 0x4f, 0x11, 0xbe, 0x42, 0xd7, 0x6a, 0x46, 0x8a, 0xf2, 0xe3, - 0x6d, 0x06, 0x7a, 0x5a, 0x37, 0x35, 0x0a, 0xaf, 0xc2, 0x5c, 0x33, 0xe3, 0xf7, 0x5b, 0x20, 0x9e, - 0x2e, 0xf7, 0x70, 0x26, 0xbf, 0x01, 0x23, 0xbb, 0xe9, 0x84, 0x77, 0xe7, 0xf2, 0xdf, 0x72, 0x8b, - 0x40, 0xe1, 0x4a, 0xd0, 0x36, 0x92, 0xdb, 0x19, 0xbc, 0xec, 0x3a, 0x08, 0xec, 0x22, 0x61, 0x06, - 
0x85, 0xee, 0xad, 0x79, 0x0e, 0xa0, 0xce, 0x68, 0x59, 0x16, 0xdc, 0x82, 0x29, 0x71, 0x2d, 0x2a, - 0x0c, 0xd6, 0xa8, 0xec, 0x7f, 0x55, 0x80, 0x61, 0x99, 0x60, 0xad, 0xed, 0xf5, 0xa2, 0xf6, 0x3b, - 0x54, 0xc6, 0x65, 0x74, 0x11, 0x4a, 0x4c, 0x2f, 0x5d, 0x89, 0xb5, 0xa5, 0x4a, 0x2b, 0xb4, 0x2a, - 0x11, 0x38, 0xa6, 0xa1, 0xbb, 0x63, 0xd8, 0xde, 0x60, 0xe4, 0x89, 0x87, 0xb6, 0x55, 0x0e, 0xc6, - 0x12, 0x8f, 0x3e, 0x0a, 0x13, 0xbc, 0x5c, 0xe0, 0xb7, 0x9c, 0x2d, 0x6e, 0xcb, 0xea, 0x57, 0xd1, - 0x4b, 0x26, 0x56, 0x13, 0xb8, 0x83, 0xfd, 0xf2, 0x89, 0x24, 0x8c, 0x19, 0x69, 0x53, 0x5c, 0x98, - 0xcb, 0x1a, 0xaf, 0x84, 0xee, 0xea, 0x29, 0x4f, 0xb7, 0x18, 0x85, 0x75, 0x3a, 0xfb, 0x53, 0x80, - 0xd2, 0xa9, 0xe6, 0xd0, 0x6b, 0xdc, 0xe5, 0xd9, 0x0d, 0x48, 0xbd, 0x93, 0xd1, 0x56, 0x8f, 0xd1, - 0x21, 0xdf, 0xc8, 0xf1, 0x52, 0x58, 0x95, 0xb7, 0xff, 0x52, 0x11, 0x26, 0x92, 0x51, 0x01, 0xd0, - 0x15, 0x18, 0xe0, 0x22, 0xa5, 0x60, 0xdf, 0xc1, 0x27, 0x48, 0x8b, 0x25, 0xc0, 0x0e, 0x57, 0x21, - 0x95, 0x8a, 0xf2, 0xe8, 0x4d, 0x18, 0xae, 0xfb, 0xb7, 0xbd, 0xdb, 0x4e, 0x50, 0x9f, 0xab, 0xac, - 0x88, 0xe9, 0x9c, 0xa9, 0xa8, 0x58, 0x8c, 0xc9, 0xf4, 0xf8, 0x04, 0xcc, 0xfe, 0x1d, 0xa3, 0xb0, - 0xce, 0x0e, 0xad, 0xb3, 0xfc, 0x14, 0x9b, 0xee, 0xd6, 0xaa, 0xd3, 0xea, 0xf4, 0xfe, 0x65, 0x41, - 0x12, 0x69, 0x9c, 0x47, 0x45, 0x12, 0x0b, 0x8e, 0xc0, 0x31, 0x23, 0xf4, 0x19, 0x98, 0x0a, 0x73, - 0x4c, 0x27, 0x79, 0x99, 0x47, 0x3b, 0x59, 0x13, 0xe6, 0x1f, 0xba, 0xb7, 0x5f, 0x9e, 0xca, 0x32, - 0xb2, 0x64, 0x55, 0x63, 0x7f, 0xf1, 0x04, 0x18, 0x8b, 0xd8, 0x48, 0x44, 0x6d, 0x1d, 0x51, 0x22, - 0x6a, 0x0c, 0x43, 0xa4, 0xd9, 0x8a, 0xf6, 0x16, 0xdd, 0x40, 0x8c, 0x49, 0x26, 0xcf, 0x25, 0x41, - 0x93, 0xe6, 0x29, 0x31, 0x58, 0xf1, 0xc9, 0xce, 0x16, 0x5e, 0xfc, 0x16, 0x66, 0x0b, 0xef, 0x3b, - 0xc6, 0x6c, 0xe1, 0x6b, 0x30, 0xb8, 0xe5, 0x46, 0x98, 0xb4, 0x7c, 0x71, 0x99, 0xcb, 0x9c, 0x87, - 0x97, 0x39, 0x49, 0x3a, 0x2f, 0xad, 0x40, 0x60, 0xc9, 0x04, 0xbd, 0xa6, 0x56, 0xe0, 0x40, 0xbe, - 0xc2, 0x25, 0xed, 0xbc, 0x92, 0xb9, 
0x06, 0x45, 0x4e, 0xf0, 0xc1, 0xfb, 0xcd, 0x09, 0xbe, 0x2c, - 0x33, 0x79, 0x0f, 0xe5, 0x3f, 0x56, 0x63, 0x89, 0xba, 0xbb, 0xe4, 0xef, 0xbe, 0xa9, 0x67, 0x3f, - 0x2f, 0xe5, 0xef, 0x04, 0x2a, 0xb1, 0x79, 0x8f, 0x39, 0xcf, 0xbf, 0xdf, 0x82, 0x93, 0xc9, 0xec, - 0xa4, 0xec, 0x4d, 0x85, 0xf0, 0xf3, 0x78, 0xb1, 0x97, 0x74, 0xb1, 0xac, 0x80, 0x51, 0x21, 0xd3, - 0x91, 0x66, 0x92, 0xe1, 0xec, 0xea, 0x68, 0x47, 0x07, 0x1b, 0x75, 0xe1, 0x6f, 0xf0, 0x58, 0x4e, - 0xf2, 0xf4, 0x0e, 0x29, 0xd3, 0xd7, 0x33, 0x12, 0x75, 0x3f, 0x9e, 0x97, 0xa8, 0xbb, 0xe7, 0xf4, - 0xdc, 0xaf, 0xa9, 0xb4, 0xe9, 0xa3, 0xf9, 0x53, 0x89, 0x27, 0x45, 0xef, 0x9a, 0x2c, 0xfd, 0x35, - 0x95, 0x2c, 0xbd, 0x43, 0x44, 0x6e, 0x9e, 0x0a, 0xbd, 0x6b, 0x8a, 0x74, 0x2d, 0xcd, 0xf9, 0xf8, - 0xd1, 0xa4, 0x39, 0x37, 0x8e, 0x1a, 0x9e, 0x69, 0xfb, 0xe9, 0x2e, 0x47, 0x8d, 0xc1, 0xb7, 0xf3, - 0x61, 0xc3, 0x53, 0xba, 0x4f, 0xde, 0x57, 0x4a, 0xf7, 0x9b, 0x7a, 0x8a, 0x74, 0xd4, 0x25, 0x07, - 0x38, 0x25, 0xea, 0x31, 0x31, 0xfa, 0x4d, 0xfd, 0x00, 0x9c, 0xca, 0xe7, 0xab, 0xce, 0xb9, 0x34, - 0xdf, 0xcc, 0x23, 0x30, 0x95, 0x70, 0xfd, 0xc4, 0xf1, 0x24, 0x5c, 0x3f, 0x79, 0xe4, 0x09, 0xd7, - 0x4f, 0x1d, 0x43, 0xc2, 0xf5, 0x87, 0x8e, 0x31, 0xe1, 0xfa, 0x4d, 0xe6, 0x1c, 0xc5, 0x03, 0x40, - 0x89, 0x08, 0xe2, 0x4f, 0xe5, 0xc4, 0x4f, 0x4b, 0x47, 0x89, 0xe2, 0x1f, 0xa7, 0x50, 0x38, 0x66, - 0x95, 0x91, 0xc8, 0x7d, 0xfa, 0x01, 0x24, 0x72, 0x5f, 0x8b, 0x13, 0xb9, 0x9f, 0xce, 0x1f, 0xea, - 0x8c, 0xe7, 0x34, 0x39, 0xe9, 0xdb, 0x6f, 0xea, 0x69, 0xd7, 0x1f, 0xee, 0x60, 0x05, 0xcb, 0x52, - 0x28, 0x77, 0x48, 0xb6, 0xfe, 0x2a, 0x4f, 0xb6, 0xfe, 0x48, 0xfe, 0x4e, 0x9e, 0x3c, 0xee, 0x8c, - 0x14, 0xeb, 0xb4, 0x5d, 0x2a, 0xf6, 0x2a, 0x8b, 0x95, 0x9e, 0xd3, 0x2e, 0x15, 0xbc, 0x35, 0xdd, - 0x2e, 0x85, 0xc2, 0x31, 0x2b, 0xfb, 0x07, 0x0b, 0x70, 0xb6, 0xf3, 0x7a, 0x8b, 0xb5, 0xe4, 0x95, - 0xd8, 0x21, 0x20, 0xa1, 0x25, 0xe7, 0x77, 0xb6, 0x98, 0xaa, 0xe7, 0x78, 0x90, 0x97, 0x61, 0x52, - 0xbd, 0xc3, 0x69, 0xb8, 0xb5, 0xbd, 0xb5, 0xf8, 0x9a, 0xac, 0x22, 0x27, 
0x54, 0x93, 0x04, 0x38, - 0x5d, 0x06, 0xcd, 0xc1, 0xb8, 0x01, 0x5c, 0x59, 0x14, 0x77, 0xb3, 0x38, 0x3a, 0xb7, 0x89, 0xc6, - 0x49, 0x7a, 0xfb, 0x4b, 0x16, 0x3c, 0x94, 0x93, 0xa9, 0xb4, 0xe7, 0x70, 0x87, 0x9b, 0x30, 0xde, - 0x32, 0x8b, 0x76, 0x89, 0xd0, 0x6a, 0xe4, 0x43, 0x55, 0x6d, 0x4d, 0x20, 0x70, 0x92, 0xa9, 0xfd, - 0xb3, 0x05, 0x38, 0xd3, 0xd1, 0xb1, 0x14, 0x61, 0x38, 0xb5, 0xd5, 0x0c, 0x9d, 0x85, 0x80, 0xd4, - 0x89, 0x17, 0xb9, 0x4e, 0xa3, 0xda, 0x22, 0x35, 0xcd, 0xce, 0xc1, 0x3c, 0x34, 0x2f, 0xaf, 0x56, - 0xe7, 0xd2, 0x14, 0x38, 0xa7, 0x24, 0x5a, 0x06, 0x94, 0xc6, 0x88, 0x11, 0x66, 0x51, 0xf7, 0xd3, - 0xfc, 0x70, 0x46, 0x09, 0xf4, 0x01, 0x18, 0x55, 0x0e, 0xab, 0xda, 0x88, 0xb3, 0x8d, 0x1d, 0xeb, - 0x08, 0x6c, 0xd2, 0xa1, 0x4b, 0x3c, 0x6d, 0x83, 0x48, 0xf0, 0x21, 0x8c, 0x22, 0xe3, 0x32, 0x27, - 0x83, 0x00, 0x63, 0x9d, 0x66, 0xfe, 0xa5, 0xdf, 0xfc, 0xc6, 0xd9, 0xf7, 0xfc, 0xf6, 0x37, 0xce, - 0xbe, 0xe7, 0xf7, 0xbe, 0x71, 0xf6, 0x3d, 0xdf, 0x7d, 0xef, 0xac, 0xf5, 0x9b, 0xf7, 0xce, 0x5a, - 0xbf, 0x7d, 0xef, 0xac, 0xf5, 0x7b, 0xf7, 0xce, 0x5a, 0x7f, 0x70, 0xef, 0xac, 0xf5, 0x85, 0x3f, - 0x3c, 0xfb, 0x9e, 0x37, 0x50, 0x1c, 0x40, 0xf4, 0x22, 0x1d, 0x9d, 0x8b, 0xbb, 0x97, 0xfe, 0x5f, - 0x00, 0x00, 0x00, 0xff, 0xff, 0xb0, 0x6c, 0x51, 0x7f, 0x2c, 0x10, 0x01, 0x00, + // 14822 bytes of a gzipped FileDescriptorProto + 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xec, 0xbd, 0x69, 0x70, 0x24, 0xc9, + 0x75, 0x18, 0xcc, 0xea, 0xc6, 0xd5, 0x0f, 0x77, 0x62, 0x0e, 0x0c, 0x76, 0x66, 0x7a, 0xb6, 0x76, + 0x77, 0x76, 0xf6, 0xc2, 0x70, 0xf6, 0x20, 0x97, 0xbb, 0xe4, 0x8a, 0x38, 0x67, 0xb0, 0x03, 0x60, + 0x7a, 0xb3, 0x31, 0x33, 0xe4, 0x72, 0xc9, 0x60, 0xa1, 0x3b, 0x01, 0x14, 0xd1, 0xa8, 0xea, 0xad, + 0xaa, 0xc6, 0x0c, 0xe6, 0x23, 0x43, 0x12, 0xf5, 0xe9, 0xa0, 0xa4, 0xef, 0x0b, 0xc6, 0x17, 0xfa, + 0x8e, 0xa0, 0x14, 0x8a, 0x2f, 0x24, 0x59, 0x87, 0x69, 0xd9, 0xa6, 0x29, 0x4b, 0xb2, 0xa8, 0xcb, + 0x57, 0x58, 0x72, 0x38, 0x64, 0x59, 0x11, 0x16, 0x15, 0xa1, 0x30, 0x24, 0x8e, 
0x1c, 0x21, 0x2b, + 0xc2, 0x96, 0xe4, 0xe3, 0x87, 0x0d, 0xcb, 0x96, 0x23, 0xcf, 0xca, 0xac, 0xa3, 0xbb, 0x31, 0x8b, + 0x01, 0x97, 0x8c, 0xfd, 0xd7, 0xfd, 0xde, 0xcb, 0x97, 0x59, 0x79, 0xbe, 0x7c, 0xef, 0xe5, 0x7b, + 0xf0, 0xea, 0xf6, 0xcb, 0xe1, 0xb4, 0xeb, 0x5f, 0xde, 0x6e, 0xad, 0x93, 0xc0, 0x23, 0x11, 0x09, + 0x2f, 0xef, 0x12, 0xaf, 0xee, 0x07, 0x97, 0x05, 0xc2, 0x69, 0xba, 0x97, 0x6b, 0x7e, 0x40, 0x2e, + 0xef, 0x5e, 0xb9, 0xbc, 0x49, 0x3c, 0x12, 0x38, 0x11, 0xa9, 0x4f, 0x37, 0x03, 0x3f, 0xf2, 0x11, + 0xe2, 0x34, 0xd3, 0x4e, 0xd3, 0x9d, 0xa6, 0x34, 0xd3, 0xbb, 0x57, 0xa6, 0x9e, 0xdb, 0x74, 0xa3, + 0xad, 0xd6, 0xfa, 0x74, 0xcd, 0xdf, 0xb9, 0xbc, 0xe9, 0x6f, 0xfa, 0x97, 0x19, 0xe9, 0x7a, 0x6b, + 0x83, 0xfd, 0x63, 0x7f, 0xd8, 0x2f, 0xce, 0x62, 0xea, 0xc5, 0xb8, 0x9a, 0x1d, 0xa7, 0xb6, 0xe5, + 0x7a, 0x24, 0xd8, 0xbb, 0xdc, 0xdc, 0xde, 0x64, 0xf5, 0x06, 0x24, 0xf4, 0x5b, 0x41, 0x8d, 0x24, + 0x2b, 0x6e, 0x5b, 0x2a, 0xbc, 0xbc, 0x43, 0x22, 0x27, 0xa3, 0xb9, 0x53, 0x97, 0xf3, 0x4a, 0x05, + 0x2d, 0x2f, 0x72, 0x77, 0xd2, 0xd5, 0x7c, 0xa0, 0x53, 0x81, 0xb0, 0xb6, 0x45, 0x76, 0x9c, 0x54, + 0xb9, 0x17, 0xf2, 0xca, 0xb5, 0x22, 0xb7, 0x71, 0xd9, 0xf5, 0xa2, 0x30, 0x0a, 0x92, 0x85, 0xec, + 0xaf, 0x5b, 0x70, 0x61, 0xe6, 0x76, 0x75, 0xa1, 0xe1, 0x84, 0x91, 0x5b, 0x9b, 0x6d, 0xf8, 0xb5, + 0xed, 0x6a, 0xe4, 0x07, 0xe4, 0x96, 0xdf, 0x68, 0xed, 0x90, 0x2a, 0xeb, 0x08, 0xf4, 0x2c, 0x0c, + 0xec, 0xb2, 0xff, 0x4b, 0xf3, 0x93, 0xd6, 0x05, 0xeb, 0x52, 0x69, 0x76, 0xec, 0xb7, 0xf6, 0xcb, + 0xef, 0xbb, 0xbf, 0x5f, 0x1e, 0xb8, 0x25, 0xe0, 0x58, 0x51, 0xa0, 0x8b, 0xd0, 0xb7, 0x11, 0xae, + 0xed, 0x35, 0xc9, 0x64, 0x81, 0xd1, 0x8e, 0x08, 0xda, 0xbe, 0xc5, 0x2a, 0x85, 0x62, 0x81, 0x45, + 0x97, 0xa1, 0xd4, 0x74, 0x82, 0xc8, 0x8d, 0x5c, 0xdf, 0x9b, 0x2c, 0x5e, 0xb0, 0x2e, 0xf5, 0xce, + 0x8e, 0x0b, 0xd2, 0x52, 0x45, 0x22, 0x70, 0x4c, 0x43, 0x9b, 0x11, 0x10, 0xa7, 0x7e, 0xc3, 0x6b, + 0xec, 0x4d, 0xf6, 0x5c, 0xb0, 0x2e, 0x0d, 0xc4, 0xcd, 0xc0, 0x02, 0x8e, 0x15, 0x85, 0xfd, 0xa5, + 0x02, 0x0c, 0xcc, 
0x6c, 0x6c, 0xb8, 0x9e, 0x1b, 0xed, 0xa1, 0x5b, 0x30, 0xe4, 0xf9, 0x75, 0x22, + 0xff, 0xb3, 0xaf, 0x18, 0x7c, 0xfe, 0xc2, 0x74, 0x7a, 0x2a, 0x4d, 0xaf, 0x6a, 0x74, 0xb3, 0x63, + 0xf7, 0xf7, 0xcb, 0x43, 0x3a, 0x04, 0x1b, 0x7c, 0x10, 0x86, 0xc1, 0xa6, 0x5f, 0x57, 0x6c, 0x0b, + 0x8c, 0x6d, 0x39, 0x8b, 0x6d, 0x25, 0x26, 0x9b, 0x1d, 0xbd, 0xbf, 0x5f, 0x1e, 0xd4, 0x00, 0x58, + 0x67, 0x82, 0xd6, 0x61, 0x94, 0xfe, 0xf5, 0x22, 0x57, 0xf1, 0x2d, 0x32, 0xbe, 0x8f, 0xe5, 0xf1, + 0xd5, 0x48, 0x67, 0x27, 0xee, 0xef, 0x97, 0x47, 0x13, 0x40, 0x9c, 0x64, 0x68, 0xdf, 0x83, 0x91, + 0x99, 0x28, 0x72, 0x6a, 0x5b, 0xa4, 0xce, 0x47, 0x10, 0xbd, 0x08, 0x3d, 0x9e, 0xb3, 0x43, 0xc4, + 0xf8, 0x5e, 0x10, 0x1d, 0xdb, 0xb3, 0xea, 0xec, 0x90, 0x83, 0xfd, 0xf2, 0xd8, 0x4d, 0xcf, 0x7d, + 0xbb, 0x25, 0x66, 0x05, 0x85, 0x61, 0x46, 0x8d, 0x9e, 0x07, 0xa8, 0x93, 0x5d, 0xb7, 0x46, 0x2a, + 0x4e, 0xb4, 0x25, 0xc6, 0x1b, 0x89, 0xb2, 0x30, 0xaf, 0x30, 0x58, 0xa3, 0xb2, 0xef, 0x42, 0x69, + 0x66, 0xd7, 0x77, 0xeb, 0x15, 0xbf, 0x1e, 0xa2, 0x6d, 0x18, 0x6d, 0x06, 0x64, 0x83, 0x04, 0x0a, + 0x34, 0x69, 0x5d, 0x28, 0x5e, 0x1a, 0x7c, 0xfe, 0x52, 0xe6, 0xc7, 0x9a, 0xa4, 0x0b, 0x5e, 0x14, + 0xec, 0xcd, 0x9e, 0x16, 0xf5, 0x8d, 0x26, 0xb0, 0x38, 0xc9, 0xd9, 0xfe, 0x27, 0x05, 0x38, 0x39, + 0x73, 0xaf, 0x15, 0x90, 0x79, 0x37, 0xdc, 0x4e, 0xce, 0xf0, 0xba, 0x1b, 0x6e, 0xaf, 0xc6, 0x3d, + 0xa0, 0xa6, 0xd6, 0xbc, 0x80, 0x63, 0x45, 0x81, 0x9e, 0x83, 0x7e, 0xfa, 0xfb, 0x26, 0x5e, 0x12, + 0x9f, 0x3c, 0x21, 0x88, 0x07, 0xe7, 0x9d, 0xc8, 0x99, 0xe7, 0x28, 0x2c, 0x69, 0xd0, 0x0a, 0x0c, + 0xd6, 0xd8, 0x82, 0xdc, 0x5c, 0xf1, 0xeb, 0x84, 0x0d, 0x66, 0x69, 0xf6, 0x19, 0x4a, 0x3e, 0x17, + 0x83, 0x0f, 0xf6, 0xcb, 0x93, 0xbc, 0x6d, 0x82, 0x85, 0x86, 0xc3, 0x7a, 0x79, 0x64, 0xab, 0xf5, + 0xd5, 0xc3, 0x38, 0x41, 0xc6, 0xda, 0xba, 0xa4, 0x2d, 0x95, 0x5e, 0xb6, 0x54, 0x86, 0xb2, 0x97, + 0x09, 0xba, 0x02, 0x3d, 0xdb, 0xae, 0x57, 0x9f, 0xec, 0x63, 0xbc, 0xce, 0xd1, 0x31, 0xbf, 0xee, + 0x7a, 0xf5, 0x83, 0xfd, 0xf2, 0xb8, 0xd1, 0x1c, 0x0a, 
0xc4, 0x8c, 0xd4, 0xfe, 0xcf, 0x16, 0x94, + 0x19, 0x6e, 0xd1, 0x6d, 0x90, 0x0a, 0x09, 0x42, 0x37, 0x8c, 0x88, 0x17, 0x19, 0x1d, 0xfa, 0x3c, + 0x40, 0x48, 0x6a, 0x01, 0x89, 0xb4, 0x2e, 0x55, 0x13, 0xa3, 0xaa, 0x30, 0x58, 0xa3, 0xa2, 0x1b, + 0x42, 0xb8, 0xe5, 0x04, 0x6c, 0x7e, 0x89, 0x8e, 0x55, 0x1b, 0x42, 0x55, 0x22, 0x70, 0x4c, 0x63, + 0x6c, 0x08, 0xc5, 0x4e, 0x1b, 0x02, 0xfa, 0x08, 0x8c, 0xc6, 0x95, 0x85, 0x4d, 0xa7, 0x26, 0x3b, + 0x90, 0x2d, 0x99, 0xaa, 0x89, 0xc2, 0x49, 0x5a, 0xfb, 0x6f, 0x5a, 0x62, 0xf2, 0xd0, 0xaf, 0x7e, + 0x97, 0x7f, 0xab, 0xfd, 0xcb, 0x16, 0xf4, 0xcf, 0xba, 0x5e, 0xdd, 0xf5, 0x36, 0xd1, 0xa7, 0x61, + 0x80, 0x9e, 0x4d, 0x75, 0x27, 0x72, 0xc4, 0xbe, 0xf7, 0x7e, 0x6d, 0x6d, 0xa9, 0xa3, 0x62, 0xba, + 0xb9, 0xbd, 0x49, 0x01, 0xe1, 0x34, 0xa5, 0xa6, 0xab, 0xed, 0xc6, 0xfa, 0x67, 0x48, 0x2d, 0x5a, + 0x21, 0x91, 0x13, 0x7f, 0x4e, 0x0c, 0xc3, 0x8a, 0x2b, 0xba, 0x0e, 0x7d, 0x91, 0x13, 0x6c, 0x92, + 0x48, 0x6c, 0x80, 0x99, 0x1b, 0x15, 0x2f, 0x89, 0xe9, 0x8a, 0x24, 0x5e, 0x8d, 0xc4, 0xc7, 0xc2, + 0x1a, 0x2b, 0x8a, 0x05, 0x0b, 0xfb, 0x7f, 0xf4, 0xc3, 0x99, 0xb9, 0xea, 0x52, 0xce, 0xbc, 0xba, + 0x08, 0x7d, 0xf5, 0xc0, 0xdd, 0x25, 0x81, 0xe8, 0x67, 0xc5, 0x65, 0x9e, 0x41, 0xb1, 0xc0, 0xa2, + 0x97, 0x61, 0x88, 0x1f, 0x48, 0xd7, 0x1c, 0xaf, 0xde, 0x90, 0x5d, 0x7c, 0x42, 0x50, 0x0f, 0xdd, + 0xd2, 0x70, 0xd8, 0xa0, 0x3c, 0xe4, 0xa4, 0xba, 0x98, 0x58, 0x8c, 0x79, 0x87, 0xdd, 0x17, 0x2c, + 0x18, 0xe3, 0xd5, 0xcc, 0x44, 0x51, 0xe0, 0xae, 0xb7, 0x22, 0x12, 0x4e, 0xf6, 0xb2, 0x9d, 0x6e, + 0x2e, 0xab, 0xb7, 0x72, 0x7b, 0x60, 0xfa, 0x56, 0x82, 0x0b, 0xdf, 0x04, 0x27, 0x45, 0xbd, 0x63, + 0x49, 0x34, 0x4e, 0x55, 0x8b, 0xbe, 0xc7, 0x82, 0xa9, 0x9a, 0xef, 0x45, 0x81, 0xdf, 0x68, 0x90, + 0xa0, 0xd2, 0x5a, 0x6f, 0xb8, 0xe1, 0x16, 0x9f, 0xa7, 0x98, 0x6c, 0xb0, 0x9d, 0x20, 0x67, 0x0c, + 0x15, 0x91, 0x18, 0xc3, 0xf3, 0xf7, 0xf7, 0xcb, 0x53, 0x73, 0xb9, 0xac, 0x70, 0x9b, 0x6a, 0xd0, + 0x36, 0x20, 0x7a, 0x94, 0x56, 0x23, 0x67, 0x93, 0xc4, 0x95, 0xf7, 0x77, 0x5f, 0xf9, 0xa9, 
0xfb, + 0xfb, 0x65, 0xb4, 0x9a, 0x62, 0x81, 0x33, 0xd8, 0xa2, 0xb7, 0xe1, 0x04, 0x85, 0xa6, 0xbe, 0x75, + 0xa0, 0xfb, 0xea, 0x26, 0xef, 0xef, 0x97, 0x4f, 0xac, 0x66, 0x30, 0xc1, 0x99, 0xac, 0xd1, 0x77, + 0x59, 0x70, 0x26, 0xfe, 0xfc, 0x85, 0xbb, 0x4d, 0xc7, 0xab, 0xc7, 0x15, 0x97, 0xba, 0xaf, 0x98, + 0xee, 0xc9, 0x67, 0xe6, 0xf2, 0x38, 0xe1, 0xfc, 0x4a, 0x90, 0x07, 0x13, 0xb4, 0x69, 0xc9, 0xba, + 0xa1, 0xfb, 0xba, 0x4f, 0xdf, 0xdf, 0x2f, 0x4f, 0xac, 0xa6, 0x79, 0xe0, 0x2c, 0xc6, 0x53, 0x73, + 0x70, 0x32, 0x73, 0x76, 0xa2, 0x31, 0x28, 0x6e, 0x13, 0x2e, 0x75, 0x95, 0x30, 0xfd, 0x89, 0x4e, + 0x40, 0xef, 0xae, 0xd3, 0x68, 0x89, 0x85, 0x89, 0xf9, 0x9f, 0x57, 0x0a, 0x2f, 0x5b, 0xf6, 0x3f, + 0x2d, 0xc2, 0xe8, 0x5c, 0x75, 0xe9, 0x81, 0x56, 0xbd, 0x7e, 0xec, 0x15, 0xda, 0x1e, 0x7b, 0xf1, + 0x21, 0x5a, 0xcc, 0x3d, 0x44, 0xbf, 0x33, 0x63, 0xc9, 0xf6, 0xb0, 0x25, 0xfb, 0xa1, 0x9c, 0x25, + 0x7b, 0xc4, 0x0b, 0x75, 0x37, 0x67, 0xd6, 0xf6, 0xb2, 0x01, 0xcc, 0x94, 0x90, 0x96, 0xfd, 0x9a, + 0xd3, 0x48, 0x6e, 0xb5, 0x87, 0x9c, 0xba, 0x47, 0x33, 0x8e, 0x35, 0x18, 0x9a, 0x73, 0x9a, 0xce, + 0xba, 0xdb, 0x70, 0x23, 0x97, 0x84, 0xe8, 0x49, 0x28, 0x3a, 0xf5, 0x3a, 0x93, 0xee, 0x4a, 0xb3, + 0x27, 0xef, 0xef, 0x97, 0x8b, 0x33, 0x75, 0x2a, 0x66, 0x80, 0xa2, 0xda, 0xc3, 0x94, 0x02, 0x3d, + 0x0d, 0x3d, 0xf5, 0xc0, 0x6f, 0x4e, 0x16, 0x18, 0x25, 0x5d, 0xe5, 0x3d, 0xf3, 0x81, 0xdf, 0x4c, + 0x90, 0x32, 0x1a, 0xfb, 0x37, 0x0b, 0x70, 0x76, 0x8e, 0x34, 0xb7, 0x16, 0xab, 0x39, 0xe7, 0xc5, + 0x25, 0x18, 0xd8, 0xf1, 0x3d, 0x37, 0xf2, 0x83, 0x50, 0x54, 0xcd, 0x66, 0xc4, 0x8a, 0x80, 0x61, + 0x85, 0x45, 0x17, 0xa0, 0xa7, 0x19, 0x0b, 0xb1, 0x43, 0x52, 0x00, 0x66, 0xe2, 0x2b, 0xc3, 0x50, + 0x8a, 0x56, 0x48, 0x02, 0x31, 0x63, 0x14, 0xc5, 0xcd, 0x90, 0x04, 0x98, 0x61, 0x62, 0x49, 0x80, + 0xca, 0x08, 0xe2, 0x44, 0x48, 0x48, 0x02, 0x14, 0x83, 0x35, 0x2a, 0x54, 0x81, 0x52, 0x98, 0x18, + 0xd9, 0xae, 0x96, 0xe6, 0x30, 0x13, 0x15, 0xd4, 0x48, 0xc6, 0x4c, 0x8c, 0x13, 0xac, 0xaf, 0xa3, + 0xa8, 0xf0, 0xb5, 0x02, 0x20, 
0xde, 0x85, 0xdf, 0x62, 0x1d, 0x77, 0x33, 0xdd, 0x71, 0xdd, 0x2f, + 0x89, 0xa3, 0xea, 0xbd, 0xff, 0x62, 0xc1, 0xd9, 0x39, 0xd7, 0xab, 0x93, 0x20, 0x67, 0x02, 0x3e, + 0x9c, 0xbb, 0xf3, 0xe1, 0x84, 0x14, 0x63, 0x8a, 0xf5, 0x1c, 0xc1, 0x14, 0xb3, 0xff, 0xc2, 0x02, + 0xc4, 0x3f, 0xfb, 0x5d, 0xf7, 0xb1, 0x37, 0xd3, 0x1f, 0x7b, 0x04, 0xd3, 0xc2, 0xfe, 0x3b, 0x16, + 0x0c, 0xce, 0x35, 0x1c, 0x77, 0x47, 0x7c, 0xea, 0x1c, 0x8c, 0x4b, 0x45, 0x11, 0x03, 0x6b, 0xb2, + 0x3f, 0xdd, 0xdc, 0xc6, 0x71, 0x12, 0x89, 0xd3, 0xf4, 0xe8, 0x13, 0x70, 0xc6, 0x00, 0xae, 0x91, + 0x9d, 0x66, 0xc3, 0x89, 0xf4, 0x5b, 0x01, 0x3b, 0xfd, 0x71, 0x1e, 0x11, 0xce, 0x2f, 0x6f, 0x2f, + 0xc3, 0xc8, 0x5c, 0xc3, 0x25, 0x5e, 0xb4, 0x54, 0x99, 0xf3, 0xbd, 0x0d, 0x77, 0x13, 0xbd, 0x02, + 0x23, 0x91, 0xbb, 0x43, 0xfc, 0x56, 0x54, 0x25, 0x35, 0xdf, 0x63, 0x77, 0x6d, 0xeb, 0x52, 0xef, + 0x2c, 0xba, 0xbf, 0x5f, 0x1e, 0x59, 0x33, 0x30, 0x38, 0x41, 0x69, 0xff, 0x21, 0x1d, 0x71, 0x7f, + 0xa7, 0xe9, 0x7b, 0xc4, 0x8b, 0xe6, 0x7c, 0xaf, 0xce, 0x75, 0x32, 0xaf, 0x40, 0x4f, 0x44, 0x47, + 0x90, 0x7f, 0xf9, 0x45, 0xb9, 0xb4, 0xe9, 0xb8, 0x1d, 0xec, 0x97, 0x4f, 0xa5, 0x4b, 0xb0, 0x91, + 0x65, 0x65, 0xd0, 0x87, 0xa0, 0x2f, 0x8c, 0x9c, 0xa8, 0x15, 0x8a, 0x4f, 0x7d, 0x54, 0x8e, 0x7f, + 0x95, 0x41, 0x0f, 0xf6, 0xcb, 0xa3, 0xaa, 0x18, 0x07, 0x61, 0x51, 0x00, 0x3d, 0x05, 0xfd, 0x3b, + 0x24, 0x0c, 0x9d, 0x4d, 0x79, 0x7e, 0x8f, 0x8a, 0xb2, 0xfd, 0x2b, 0x1c, 0x8c, 0x25, 0x1e, 0x3d, + 0x06, 0xbd, 0x24, 0x08, 0xfc, 0x40, 0xec, 0x2a, 0xc3, 0x82, 0xb0, 0x77, 0x81, 0x02, 0x31, 0xc7, + 0xd9, 0xff, 0xd2, 0x82, 0x51, 0xd5, 0x56, 0x5e, 0xd7, 0x31, 0xdc, 0x9b, 0xde, 0x04, 0xa8, 0xc9, + 0x0f, 0x0c, 0xd9, 0x79, 0x37, 0xf8, 0xfc, 0xc5, 0x4c, 0xd1, 0x22, 0xd5, 0x8d, 0x31, 0x67, 0x05, + 0x0a, 0xb1, 0xc6, 0xcd, 0xfe, 0x35, 0x0b, 0x26, 0x12, 0x5f, 0xb4, 0xec, 0x86, 0x11, 0x7a, 0x2b, + 0xf5, 0x55, 0xd3, 0xdd, 0x7d, 0x15, 0x2d, 0xcd, 0xbe, 0x49, 0x2d, 0x3e, 0x09, 0xd1, 0xbe, 0xe8, + 0x1a, 0xf4, 0xba, 0x11, 0xd9, 0x91, 0x1f, 0xf3, 0x58, 0xdb, 0x8f, 
0xe1, 0xad, 0x8a, 0x47, 0x64, + 0x89, 0x96, 0xc4, 0x9c, 0x81, 0xfd, 0x9b, 0x45, 0x28, 0xf1, 0x69, 0xbb, 0xe2, 0x34, 0x8f, 0x61, + 0x2c, 0x9e, 0x81, 0x92, 0xbb, 0xb3, 0xd3, 0x8a, 0x9c, 0x75, 0x71, 0x00, 0x0d, 0xf0, 0xcd, 0x60, + 0x49, 0x02, 0x71, 0x8c, 0x47, 0x4b, 0xd0, 0xc3, 0x9a, 0xc2, 0xbf, 0xf2, 0xc9, 0xec, 0xaf, 0x14, + 0x6d, 0x9f, 0x9e, 0x77, 0x22, 0x87, 0xcb, 0x7e, 0xea, 0xe4, 0xa3, 0x20, 0xcc, 0x58, 0x20, 0x07, + 0x60, 0xdd, 0xf5, 0x9c, 0x60, 0x8f, 0xc2, 0x26, 0x8b, 0x8c, 0xe1, 0x73, 0xed, 0x19, 0xce, 0x2a, + 0x7a, 0xce, 0x56, 0x7d, 0x58, 0x8c, 0xc0, 0x1a, 0xd3, 0xa9, 0x0f, 0x42, 0x49, 0x11, 0x1f, 0x46, + 0x84, 0x9b, 0xfa, 0x08, 0x8c, 0x26, 0xea, 0xea, 0x54, 0x7c, 0x48, 0x97, 0x00, 0x7f, 0x85, 0x6d, + 0x19, 0xa2, 0xd5, 0x0b, 0xde, 0xae, 0xd8, 0x39, 0xef, 0xc1, 0x89, 0x46, 0xc6, 0xde, 0x2b, 0xc6, + 0xb5, 0xfb, 0xbd, 0xfa, 0xac, 0xf8, 0xec, 0x13, 0x59, 0x58, 0x9c, 0x59, 0x07, 0x95, 0x6a, 0xfc, + 0x26, 0x5d, 0x20, 0x4e, 0x43, 0xbf, 0x20, 0xdc, 0x10, 0x30, 0xac, 0xb0, 0x74, 0xbf, 0x3b, 0xa1, + 0x1a, 0x7f, 0x9d, 0xec, 0x55, 0x49, 0x83, 0xd4, 0x22, 0x3f, 0xf8, 0xa6, 0x36, 0xff, 0x1c, 0xef, + 0x7d, 0xbe, 0x5d, 0x0e, 0x0a, 0x06, 0xc5, 0xeb, 0x64, 0x8f, 0x0f, 0x85, 0xfe, 0x75, 0xc5, 0xb6, + 0x5f, 0xf7, 0x15, 0x0b, 0x86, 0xd5, 0xd7, 0x1d, 0xc3, 0xbe, 0x30, 0x6b, 0xee, 0x0b, 0xe7, 0xda, + 0x4e, 0xf0, 0x9c, 0x1d, 0xe1, 0x6b, 0x05, 0x38, 0xa3, 0x68, 0xe8, 0x6d, 0x86, 0xff, 0x11, 0xb3, + 0xea, 0x32, 0x94, 0x3c, 0xa5, 0xd7, 0xb3, 0x4c, 0x85, 0x5a, 0xac, 0xd5, 0x8b, 0x69, 0xa8, 0x50, + 0xea, 0xc5, 0xc7, 0xec, 0x90, 0xae, 0xf0, 0x16, 0xca, 0xed, 0x59, 0x28, 0xb6, 0xdc, 0xba, 0x38, + 0x60, 0xde, 0x2f, 0x7b, 0xfb, 0xe6, 0xd2, 0xfc, 0xc1, 0x7e, 0xf9, 0xd1, 0x3c, 0x63, 0x0b, 0x3d, + 0xd9, 0xc2, 0xe9, 0x9b, 0x4b, 0xf3, 0x98, 0x16, 0x46, 0x33, 0x30, 0x2a, 0x4f, 0xe8, 0x5b, 0x54, + 0x40, 0xf4, 0x3d, 0x71, 0x0e, 0x29, 0xad, 0x35, 0x36, 0xd1, 0x38, 0x49, 0x8f, 0xe6, 0x61, 0x6c, + 0xbb, 0xb5, 0x4e, 0x1a, 0x24, 0xe2, 0x1f, 0x7c, 0x9d, 0x70, 0x9d, 0x6e, 0x29, 0xbe, 0x4b, 0x5e, + 0x4f, 
0xe0, 0x71, 0xaa, 0x84, 0xfd, 0xd7, 0xec, 0x3c, 0x10, 0xbd, 0x57, 0x09, 0x7c, 0x3a, 0xb1, + 0x28, 0xf7, 0x6f, 0xe6, 0x74, 0xee, 0x66, 0x56, 0x5c, 0x27, 0x7b, 0x6b, 0x3e, 0xbd, 0x4b, 0x64, + 0xcf, 0x0a, 0x63, 0xce, 0xf7, 0xb4, 0x9d, 0xf3, 0xbf, 0x50, 0x80, 0x93, 0xaa, 0x07, 0x0c, 0xb1, + 0xf5, 0x5b, 0xbd, 0x0f, 0xae, 0xc0, 0x60, 0x9d, 0x6c, 0x38, 0xad, 0x46, 0xa4, 0x0c, 0x0c, 0xbd, + 0xdc, 0xc8, 0x34, 0x1f, 0x83, 0xb1, 0x4e, 0x73, 0x88, 0x6e, 0xfb, 0xf9, 0x61, 0x76, 0x10, 0x47, + 0x0e, 0x9d, 0xe3, 0x6a, 0xd5, 0x58, 0xb9, 0xab, 0xe6, 0x31, 0xe8, 0x75, 0x77, 0xa8, 0x60, 0x56, + 0x30, 0xe5, 0xad, 0x25, 0x0a, 0xc4, 0x1c, 0x87, 0x9e, 0x80, 0xfe, 0x9a, 0xbf, 0xb3, 0xe3, 0x78, + 0x75, 0x76, 0xe4, 0x95, 0x66, 0x07, 0xa9, 0xec, 0x36, 0xc7, 0x41, 0x58, 0xe2, 0xd0, 0x59, 0xe8, + 0x71, 0x82, 0x4d, 0xae, 0x75, 0x29, 0xcd, 0x0e, 0xd0, 0x9a, 0x66, 0x82, 0xcd, 0x10, 0x33, 0x28, + 0xbd, 0x34, 0xde, 0xf1, 0x83, 0x6d, 0xd7, 0xdb, 0x9c, 0x77, 0x03, 0xb1, 0x24, 0xd4, 0x59, 0x78, + 0x5b, 0x61, 0xb0, 0x46, 0x85, 0x16, 0xa1, 0xb7, 0xe9, 0x07, 0x51, 0x38, 0xd9, 0xc7, 0xba, 0xfb, + 0xd1, 0x9c, 0x8d, 0x88, 0x7f, 0x6d, 0xc5, 0x0f, 0xa2, 0xf8, 0x03, 0xe8, 0xbf, 0x10, 0xf3, 0xe2, + 0x68, 0x19, 0xfa, 0x89, 0xb7, 0xbb, 0x18, 0xf8, 0x3b, 0x93, 0x13, 0xf9, 0x9c, 0x16, 0x38, 0x09, + 0x9f, 0x66, 0xb1, 0x8c, 0x2a, 0xc0, 0x58, 0xb2, 0x40, 0x1f, 0x82, 0x22, 0xf1, 0x76, 0x27, 0xfb, + 0x19, 0xa7, 0xa9, 0x1c, 0x4e, 0xb7, 0x9c, 0x20, 0xde, 0xf3, 0x17, 0xbc, 0x5d, 0x4c, 0xcb, 0xa0, + 0x8f, 0x43, 0x49, 0x6e, 0x18, 0xa1, 0x50, 0x67, 0x66, 0x4e, 0x58, 0xb9, 0xcd, 0x60, 0xf2, 0x76, + 0xcb, 0x0d, 0xc8, 0x0e, 0xf1, 0xa2, 0x30, 0xde, 0x21, 0x25, 0x36, 0xc4, 0x31, 0x37, 0x54, 0x83, + 0xa1, 0x80, 0x84, 0xee, 0x3d, 0x52, 0xf1, 0x1b, 0x6e, 0x6d, 0x6f, 0xf2, 0x34, 0x6b, 0xde, 0x53, + 0x6d, 0xbb, 0x0c, 0x6b, 0x05, 0x62, 0x75, 0xbb, 0x0e, 0xc5, 0x06, 0x53, 0xf4, 0x06, 0x0c, 0x07, + 0x24, 0x8c, 0x9c, 0x20, 0x12, 0xb5, 0x4c, 0x2a, 0xf3, 0xd8, 0x30, 0xd6, 0x11, 0xfc, 0x3a, 0x11, + 0x57, 0x13, 0x63, 0xb0, 0xc9, 0x01, 0x7d, 
0x5c, 0xea, 0xfe, 0x57, 0xfc, 0x96, 0x17, 0x85, 0x93, + 0x25, 0xd6, 0xee, 0x4c, 0xab, 0xec, 0xad, 0x98, 0x2e, 0x69, 0x1c, 0xe0, 0x85, 0xb1, 0xc1, 0x0a, + 0x7d, 0x12, 0x86, 0xf9, 0x7f, 0x6e, 0xdb, 0x0c, 0x27, 0x4f, 0x32, 0xde, 0x17, 0xf2, 0x79, 0x73, + 0xc2, 0xd9, 0x93, 0x82, 0xf9, 0xb0, 0x0e, 0x0d, 0xb1, 0xc9, 0x0d, 0x61, 0x18, 0x6e, 0xb8, 0xbb, + 0xc4, 0x23, 0x61, 0x58, 0x09, 0xfc, 0x75, 0x22, 0x54, 0xb5, 0x67, 0xb2, 0x6d, 0xa1, 0xfe, 0x3a, + 0x99, 0x1d, 0xa7, 0x3c, 0x97, 0xf5, 0x32, 0xd8, 0x64, 0x81, 0x6e, 0xc2, 0x08, 0xbd, 0x1b, 0xbb, + 0x31, 0xd3, 0xc1, 0x4e, 0x4c, 0xd9, 0x7d, 0x10, 0x1b, 0x85, 0x70, 0x82, 0x09, 0xba, 0x01, 0x43, + 0xac, 0xcf, 0x5b, 0x4d, 0xce, 0xf4, 0x54, 0x27, 0xa6, 0xcc, 0x94, 0x5e, 0xd5, 0x8a, 0x60, 0x83, + 0x01, 0x7a, 0x1d, 0x4a, 0x0d, 0x77, 0x83, 0xd4, 0xf6, 0x6a, 0x0d, 0x32, 0x39, 0xc4, 0xb8, 0x65, + 0x6e, 0x86, 0xcb, 0x92, 0x88, 0xcb, 0xe7, 0xea, 0x2f, 0x8e, 0x8b, 0xa3, 0x5b, 0x70, 0x2a, 0x22, + 0xc1, 0x8e, 0xeb, 0x39, 0x74, 0x13, 0x13, 0x57, 0x42, 0x66, 0xa2, 0x1e, 0x66, 0xb3, 0xeb, 0xbc, + 0x18, 0x8d, 0x53, 0x6b, 0x99, 0x54, 0x38, 0xa7, 0x34, 0xba, 0x0b, 0x93, 0x19, 0x18, 0x3e, 0x6f, + 0x4f, 0x30, 0xce, 0x1f, 0x16, 0x9c, 0x27, 0xd7, 0x72, 0xe8, 0x0e, 0xda, 0xe0, 0x70, 0x2e, 0x77, + 0x74, 0x03, 0x46, 0xd9, 0xce, 0x59, 0x69, 0x35, 0x1a, 0xa2, 0xc2, 0x11, 0x56, 0xe1, 0x13, 0x52, + 0x8e, 0x58, 0x32, 0xd1, 0x07, 0xfb, 0x65, 0x88, 0xff, 0xe1, 0x64, 0x69, 0xb4, 0xce, 0xac, 0xa1, + 0xad, 0xc0, 0x8d, 0xf6, 0xe8, 0xaa, 0x22, 0x77, 0xa3, 0xc9, 0xd1, 0xb6, 0x9a, 0x21, 0x9d, 0x54, + 0x99, 0x4c, 0x75, 0x20, 0x4e, 0x32, 0xa4, 0x47, 0x41, 0x18, 0xd5, 0x5d, 0x6f, 0x72, 0x8c, 0xdf, + 0xa7, 0xe4, 0x4e, 0x5a, 0xa5, 0x40, 0xcc, 0x71, 0xcc, 0x12, 0x4a, 0x7f, 0xdc, 0xa0, 0x27, 0xee, + 0x38, 0x23, 0x8c, 0x2d, 0xa1, 0x12, 0x81, 0x63, 0x1a, 0x2a, 0x04, 0x47, 0xd1, 0xde, 0x24, 0x62, + 0xa4, 0x6a, 0x43, 0x5c, 0x5b, 0xfb, 0x38, 0xa6, 0x70, 0x7b, 0x1d, 0x46, 0xd4, 0x36, 0xc1, 0xfa, + 0x04, 0x95, 0xa1, 0x97, 0x89, 0x7d, 0x42, 0x8f, 0x59, 0xa2, 0x4d, 0x60, 0x22, 
0x21, 0xe6, 0x70, + 0xd6, 0x04, 0xf7, 0x1e, 0x99, 0xdd, 0x8b, 0x08, 0xd7, 0x45, 0x14, 0xb5, 0x26, 0x48, 0x04, 0x8e, + 0x69, 0xec, 0xff, 0xc9, 0xc5, 0xe7, 0xf8, 0x94, 0xe8, 0xe2, 0x5c, 0x7c, 0x16, 0x06, 0xb6, 0xfc, + 0x30, 0xa2, 0xd4, 0xac, 0x8e, 0xde, 0x58, 0x60, 0xbe, 0x26, 0xe0, 0x58, 0x51, 0xa0, 0x57, 0x61, + 0xb8, 0xa6, 0x57, 0x20, 0x0e, 0x75, 0xb5, 0x8d, 0x18, 0xb5, 0x63, 0x93, 0x16, 0xbd, 0x0c, 0x03, + 0xcc, 0xbb, 0xa7, 0xe6, 0x37, 0x84, 0xb4, 0x29, 0x25, 0x93, 0x81, 0x8a, 0x80, 0x1f, 0x68, 0xbf, + 0xb1, 0xa2, 0x46, 0x17, 0xa1, 0x8f, 0x36, 0x61, 0xa9, 0x22, 0x8e, 0x53, 0xa5, 0x92, 0xbb, 0xc6, + 0xa0, 0x58, 0x60, 0xed, 0x5f, 0xb3, 0x98, 0x2c, 0x95, 0xde, 0xf3, 0xd1, 0x35, 0x76, 0x68, 0xb0, + 0x13, 0x44, 0x53, 0x89, 0x3d, 0xae, 0x9d, 0x04, 0x0a, 0x77, 0x90, 0xf8, 0x8f, 0x8d, 0x92, 0xe8, + 0xcd, 0xe4, 0xc9, 0xc0, 0x05, 0x8a, 0x17, 0x65, 0x17, 0x24, 0x4f, 0x87, 0x47, 0xe2, 0x23, 0x8e, + 0xb6, 0xa7, 0xdd, 0x11, 0x61, 0xff, 0x5f, 0x05, 0x6d, 0x96, 0x54, 0x23, 0x27, 0x22, 0xa8, 0x02, + 0xfd, 0x77, 0x1c, 0x37, 0x72, 0xbd, 0x4d, 0x21, 0xf7, 0xb5, 0x3f, 0xe8, 0x58, 0xa1, 0xdb, 0xbc, + 0x00, 0x97, 0x5e, 0xc4, 0x1f, 0x2c, 0xd9, 0x50, 0x8e, 0x41, 0xcb, 0xf3, 0x28, 0xc7, 0x42, 0xb7, + 0x1c, 0x31, 0x2f, 0xc0, 0x39, 0x8a, 0x3f, 0x58, 0xb2, 0x41, 0x6f, 0x01, 0xc8, 0x1d, 0x82, 0xd4, + 0x85, 0x57, 0xd0, 0xb3, 0x9d, 0x99, 0xae, 0xa9, 0x32, 0xb3, 0x23, 0x54, 0x36, 0x8a, 0xff, 0x63, + 0x8d, 0x9f, 0x1d, 0x69, 0x63, 0xaa, 0x37, 0x06, 0x7d, 0x82, 0x2e, 0x51, 0x27, 0x88, 0x48, 0x7d, + 0x26, 0x12, 0x9d, 0xf3, 0x74, 0x77, 0x97, 0xc3, 0x35, 0x77, 0x87, 0xe8, 0xcb, 0x59, 0x30, 0xc1, + 0x31, 0x3f, 0xfb, 0x97, 0x8a, 0x30, 0x99, 0xd7, 0x5c, 0xba, 0x68, 0xc8, 0x5d, 0x37, 0x9a, 0xa3, + 0x62, 0xad, 0x65, 0x2e, 0x9a, 0x05, 0x01, 0xc7, 0x8a, 0x82, 0xce, 0xde, 0xd0, 0xdd, 0x94, 0x77, + 0xfb, 0xde, 0x78, 0xf6, 0x56, 0x19, 0x14, 0x0b, 0x2c, 0xa5, 0x0b, 0x88, 0x13, 0x0a, 0xb7, 0x33, + 0x6d, 0x96, 0x63, 0x06, 0xc5, 0x02, 0xab, 0x6b, 0x19, 0x7b, 0x3a, 0x68, 0x19, 0x8d, 0x2e, 0xea, + 0x3d, 0xda, 0x2e, 
0x42, 0x9f, 0x02, 0xd8, 0x70, 0x3d, 0x37, 0xdc, 0x62, 0xdc, 0xfb, 0x0e, 0xcd, + 0x5d, 0x09, 0xc5, 0x8b, 0x8a, 0x0b, 0xd6, 0x38, 0xa2, 0x97, 0x60, 0x50, 0x6d, 0x20, 0x4b, 0xf3, + 0xcc, 0x06, 0xaf, 0xf9, 0x34, 0xc5, 0xbb, 0xe9, 0x3c, 0xd6, 0xe9, 0xec, 0xcf, 0x24, 0xe7, 0x8b, + 0x58, 0x01, 0x5a, 0xff, 0x5a, 0xdd, 0xf6, 0x6f, 0xa1, 0x7d, 0xff, 0xda, 0xdf, 0xe8, 0x83, 0x51, + 0xa3, 0xb2, 0x56, 0xd8, 0xc5, 0x9e, 0x7b, 0x95, 0x1e, 0x40, 0x4e, 0x44, 0xc4, 0xfa, 0xb3, 0x3b, + 0x2f, 0x15, 0xfd, 0x90, 0xa2, 0x2b, 0x80, 0x97, 0x47, 0x9f, 0x82, 0x52, 0xc3, 0x09, 0x99, 0xc6, + 0x92, 0x88, 0x75, 0xd7, 0x0d, 0xb3, 0xf8, 0x42, 0xe8, 0x84, 0x91, 0x76, 0xea, 0x73, 0xde, 0x31, + 0x4b, 0x7a, 0x52, 0x52, 0xf9, 0x4a, 0xfa, 0x35, 0xaa, 0x46, 0x50, 0x21, 0x6c, 0x0f, 0x73, 0x1c, + 0x7a, 0x99, 0x6d, 0xad, 0x74, 0x56, 0xcc, 0x51, 0x69, 0x94, 0x4d, 0xb3, 0x5e, 0x43, 0xc8, 0x56, + 0x38, 0x6c, 0x50, 0xc6, 0x77, 0xb2, 0xbe, 0x36, 0x77, 0xb2, 0xa7, 0xa0, 0x9f, 0xfd, 0x50, 0x33, + 0x40, 0x8d, 0xc6, 0x12, 0x07, 0x63, 0x89, 0x4f, 0x4e, 0x98, 0x81, 0xee, 0x26, 0x0c, 0xbd, 0xf5, + 0x89, 0x49, 0xcd, 0xfc, 0x1f, 0x06, 0xf8, 0x2e, 0x27, 0xa6, 0x3c, 0x96, 0x38, 0xf4, 0x33, 0x16, + 0x20, 0xa7, 0x41, 0x6f, 0xcb, 0x14, 0xac, 0x2e, 0x37, 0xc0, 0x44, 0xed, 0x57, 0x3b, 0x76, 0x7b, + 0x2b, 0x9c, 0x9e, 0x49, 0x95, 0xe6, 0x9a, 0xd2, 0x57, 0x44, 0x13, 0x51, 0x9a, 0x40, 0x3f, 0x8c, + 0x96, 0xdd, 0x30, 0xfa, 0xfc, 0x1f, 0x25, 0x0e, 0xa7, 0x8c, 0x26, 0xa1, 0x9b, 0xfa, 0xe5, 0x6b, + 0xf0, 0x90, 0x97, 0xaf, 0xe1, 0xbc, 0x8b, 0xd7, 0x54, 0x0b, 0x4e, 0xe7, 0x7c, 0x41, 0x86, 0xfe, + 0x75, 0x5e, 0xd7, 0xbf, 0x76, 0xd0, 0xda, 0x4d, 0xcb, 0x3a, 0xa6, 0xdf, 0x68, 0x39, 0x5e, 0xe4, + 0x46, 0x7b, 0xba, 0xbe, 0xf6, 0x69, 0x18, 0x99, 0x77, 0xc8, 0x8e, 0xef, 0x2d, 0x78, 0xf5, 0xa6, + 0xef, 0x7a, 0x11, 0x9a, 0x84, 0x1e, 0x26, 0x7c, 0xf0, 0xad, 0xb7, 0x87, 0xf6, 0x1e, 0x66, 0x10, + 0x7b, 0x13, 0x4e, 0xce, 0xfb, 0x77, 0xbc, 0x3b, 0x4e, 0x50, 0x9f, 0xa9, 0x2c, 0x69, 0xfa, 0xa4, + 0x55, 0xa9, 0xcf, 0xb0, 0xf2, 0x6f, 0x8b, 0x5a, 0x49, 
0x7e, 0x1d, 0x5a, 0x74, 0x1b, 0x24, 0x47, + 0xeb, 0xf7, 0xff, 0x16, 0x8c, 0x9a, 0x62, 0x7a, 0x65, 0x77, 0xb6, 0x72, 0xed, 0xce, 0x6f, 0xc0, + 0xc0, 0x86, 0x4b, 0x1a, 0x75, 0x4c, 0x36, 0x44, 0xef, 0x3c, 0x99, 0xef, 0x99, 0xb6, 0x48, 0x29, + 0xa5, 0x96, 0x97, 0x6b, 0x43, 0x16, 0x45, 0x61, 0xac, 0xd8, 0xa0, 0x6d, 0x18, 0x93, 0x7d, 0x28, + 0xb1, 0x62, 0x3f, 0x78, 0xaa, 0xdd, 0xc0, 0x9b, 0xcc, 0x4f, 0xdc, 0xdf, 0x2f, 0x8f, 0xe1, 0x04, + 0x1b, 0x9c, 0x62, 0x8c, 0xce, 0x42, 0xcf, 0x0e, 0x3d, 0xf9, 0x7a, 0x58, 0xf7, 0x33, 0xf5, 0x07, + 0xd3, 0xe4, 0x30, 0xa8, 0xfd, 0x63, 0x16, 0x9c, 0x4e, 0xf5, 0x8c, 0xd0, 0x68, 0x1d, 0xf1, 0x28, + 0x24, 0x35, 0x4c, 0x85, 0xce, 0x1a, 0x26, 0xfb, 0x6f, 0x59, 0x70, 0x62, 0x61, 0xa7, 0x19, 0xed, + 0xcd, 0xbb, 0xa6, 0x91, 0xf8, 0x83, 0xd0, 0xb7, 0x43, 0xea, 0x6e, 0x6b, 0x47, 0x8c, 0x5c, 0x59, + 0x9e, 0x0e, 0x2b, 0x0c, 0x7a, 0xb0, 0x5f, 0x1e, 0xae, 0x46, 0x7e, 0xe0, 0x6c, 0x12, 0x0e, 0xc0, + 0x82, 0x9c, 0x9d, 0xb1, 0xee, 0x3d, 0xb2, 0xec, 0xee, 0xb8, 0xd1, 0x83, 0xcd, 0x76, 0x61, 0xdf, + 0x95, 0x4c, 0x70, 0xcc, 0xcf, 0xfe, 0xba, 0x05, 0xa3, 0x72, 0xde, 0xcf, 0xd4, 0xeb, 0x01, 0x09, + 0x43, 0x34, 0x05, 0x05, 0xb7, 0x29, 0x5a, 0x09, 0xa2, 0x95, 0x85, 0xa5, 0x0a, 0x2e, 0xb8, 0x4d, + 0x29, 0xce, 0xb3, 0x03, 0xa8, 0x68, 0x9a, 0xba, 0xaf, 0x09, 0x38, 0x56, 0x14, 0xe8, 0x12, 0x0c, + 0x78, 0x7e, 0x9d, 0x4b, 0xc4, 0x5c, 0x94, 0x60, 0x13, 0x6c, 0x55, 0xc0, 0xb0, 0xc2, 0xa2, 0x0a, + 0x94, 0xb8, 0x23, 0x64, 0x3c, 0x69, 0xbb, 0x72, 0xa7, 0x64, 0x5f, 0xb6, 0x26, 0x4b, 0xe2, 0x98, + 0x89, 0xfd, 0x1b, 0x16, 0x0c, 0xc9, 0x2f, 0xeb, 0xf2, 0xae, 0x42, 0x97, 0x56, 0x7c, 0x4f, 0x89, + 0x97, 0x16, 0xbd, 0x6b, 0x30, 0x8c, 0x71, 0xc5, 0x28, 0x1e, 0xea, 0x8a, 0x71, 0x05, 0x06, 0x9d, + 0x66, 0xb3, 0x62, 0xde, 0x4f, 0xd8, 0x54, 0x9a, 0x89, 0xc1, 0x58, 0xa7, 0xb1, 0x7f, 0xb4, 0x00, + 0x23, 0xf2, 0x0b, 0xaa, 0xad, 0xf5, 0x90, 0x44, 0x68, 0x0d, 0x4a, 0x0e, 0x1f, 0x25, 0x22, 0x27, + 0xf9, 0x63, 0xd9, 0x7a, 0x33, 0x63, 0x48, 0x63, 0x41, 0x6b, 0x46, 0x96, 0xc6, 0x31, 0x23, 
0xd4, + 0x80, 0x71, 0xcf, 0x8f, 0xd8, 0xa1, 0xab, 0xf0, 0xed, 0x4c, 0x99, 0x49, 0xee, 0x67, 0x04, 0xf7, + 0xf1, 0xd5, 0x24, 0x17, 0x9c, 0x66, 0x8c, 0x16, 0xa4, 0x2e, 0xb2, 0x98, 0xaf, 0x44, 0xd2, 0x07, + 0x2e, 0x5b, 0x15, 0x69, 0xff, 0xaa, 0x05, 0x25, 0x49, 0x76, 0x1c, 0x56, 0xeb, 0x15, 0xe8, 0x0f, + 0xd9, 0x20, 0xc8, 0xae, 0xb1, 0xdb, 0x35, 0x9c, 0x8f, 0x57, 0x2c, 0x4b, 0xf0, 0xff, 0x21, 0x96, + 0x3c, 0x98, 0x29, 0x4a, 0x35, 0xff, 0x5d, 0x62, 0x8a, 0x52, 0xed, 0xc9, 0x39, 0x94, 0xfe, 0x94, + 0xb5, 0x59, 0xd3, 0xed, 0x52, 0x91, 0xb7, 0x19, 0x90, 0x0d, 0xf7, 0x6e, 0x52, 0xe4, 0xad, 0x30, + 0x28, 0x16, 0x58, 0xf4, 0x16, 0x0c, 0xd5, 0xa4, 0x0d, 0x22, 0x5e, 0xe1, 0x17, 0xdb, 0xda, 0xc3, + 0x94, 0xe9, 0x94, 0xeb, 0xd0, 0xe6, 0xb4, 0xf2, 0xd8, 0xe0, 0x66, 0x3a, 0xfa, 0x14, 0x3b, 0x39, + 0xfa, 0xc4, 0x7c, 0xf3, 0xdd, 0x5e, 0x7e, 0xdc, 0x82, 0x3e, 0xae, 0x7b, 0xee, 0x4e, 0xf5, 0xaf, + 0x59, 0x92, 0xe3, 0xbe, 0xbb, 0x45, 0x81, 0x42, 0xd2, 0x40, 0x2b, 0x50, 0x62, 0x3f, 0x98, 0xee, + 0xbc, 0x98, 0xff, 0x0e, 0x87, 0xd7, 0xaa, 0x37, 0xf0, 0x96, 0x2c, 0x86, 0x63, 0x0e, 0xf6, 0x8f, + 0x14, 0xe9, 0xee, 0x16, 0x93, 0x1a, 0x87, 0xbe, 0xf5, 0xf0, 0x0e, 0xfd, 0xc2, 0xc3, 0x3a, 0xf4, + 0x37, 0x61, 0xb4, 0xa6, 0xd9, 0x9d, 0xe3, 0x91, 0xbc, 0xd4, 0x76, 0x92, 0x68, 0x26, 0x6a, 0xae, + 0x9d, 0x9b, 0x33, 0x99, 0xe0, 0x24, 0x57, 0xf4, 0x09, 0x18, 0xe2, 0xe3, 0x2c, 0x6a, 0xe1, 0xbe, + 0x52, 0x4f, 0xe4, 0xcf, 0x17, 0xbd, 0x0a, 0xae, 0xcd, 0xd5, 0x8a, 0x63, 0x83, 0x99, 0xfd, 0x97, + 0x16, 0xa0, 0x85, 0xe6, 0x16, 0xd9, 0x21, 0x81, 0xd3, 0x88, 0xcd, 0x47, 0x3f, 0x68, 0xc1, 0x24, + 0x49, 0x81, 0xe7, 0xfc, 0x9d, 0x1d, 0x71, 0x59, 0xcc, 0xd1, 0x67, 0x2c, 0xe4, 0x94, 0x51, 0x0f, + 0x95, 0x26, 0xf3, 0x28, 0x70, 0x6e, 0x7d, 0x68, 0x05, 0x26, 0xf8, 0x29, 0xa9, 0x10, 0x9a, 0xdf, + 0xd5, 0x23, 0x82, 0xf1, 0xc4, 0x5a, 0x9a, 0x04, 0x67, 0x95, 0xb3, 0x7f, 0x75, 0x18, 0x72, 0x5b, + 0xf1, 0x9e, 0xdd, 0xec, 0x3d, 0xbb, 0xd9, 0x7b, 0x76, 0xb3, 0xf7, 0xec, 0x66, 0xef, 0xd9, 0xcd, + 0xde, 0xb3, 0x9b, 0xbd, 0x4b, 
0xed, 0x66, 0xff, 0xb7, 0x05, 0x27, 0xd5, 0xf1, 0x65, 0x5c, 0xd8, + 0x3f, 0x0b, 0x13, 0x7c, 0xb9, 0x19, 0x3e, 0xc6, 0xe2, 0xb8, 0xbe, 0x92, 0x39, 0x73, 0x13, 0xbe, + 0xf0, 0x46, 0x41, 0xfe, 0xa8, 0x28, 0x03, 0x81, 0xb3, 0xaa, 0xb1, 0x7f, 0x69, 0x00, 0x7a, 0x17, + 0x76, 0x89, 0x17, 0x1d, 0xc3, 0xd5, 0xa6, 0x06, 0x23, 0xae, 0xb7, 0xeb, 0x37, 0x76, 0x49, 0x9d, + 0xe3, 0x0f, 0x73, 0x03, 0x3f, 0x25, 0x58, 0x8f, 0x2c, 0x19, 0x2c, 0x70, 0x82, 0xe5, 0xc3, 0xb0, + 0x3e, 0x5c, 0x85, 0x3e, 0x7e, 0xf8, 0x08, 0xd3, 0x43, 0xe6, 0x9e, 0xcd, 0x3a, 0x51, 0x1c, 0xa9, + 0xb1, 0x65, 0x84, 0x1f, 0x6e, 0xa2, 0x38, 0xfa, 0x0c, 0x8c, 0x6c, 0xb8, 0x41, 0x18, 0xad, 0xb9, + 0x3b, 0xf4, 0x68, 0xd8, 0x69, 0x3e, 0x80, 0xb5, 0x41, 0xf5, 0xc3, 0xa2, 0xc1, 0x09, 0x27, 0x38, + 0xa3, 0x4d, 0x18, 0x6e, 0x38, 0x7a, 0x55, 0xfd, 0x87, 0xae, 0x4a, 0x9d, 0x0e, 0xcb, 0x3a, 0x23, + 0x6c, 0xf2, 0xa5, 0xcb, 0xa9, 0xc6, 0x14, 0xe6, 0x03, 0x4c, 0x9d, 0xa1, 0x96, 0x13, 0xd7, 0x94, + 0x73, 0x1c, 0x15, 0xd0, 0x98, 0x23, 0x7b, 0xc9, 0x14, 0xd0, 0x34, 0x77, 0xf5, 0x4f, 0x43, 0x89, + 0xd0, 0x2e, 0xa4, 0x8c, 0xc5, 0x01, 0x73, 0xb9, 0xbb, 0xb6, 0xae, 0xb8, 0xb5, 0xc0, 0x37, 0xed, + 0x3c, 0x0b, 0x92, 0x13, 0x8e, 0x99, 0xa2, 0x39, 0xe8, 0x0b, 0x49, 0xe0, 0x2a, 0x5d, 0x72, 0x9b, + 0x61, 0x64, 0x64, 0xfc, 0xd5, 0x1a, 0xff, 0x8d, 0x45, 0x51, 0x3a, 0xbd, 0x1c, 0xa6, 0x8a, 0x65, + 0x87, 0x81, 0x36, 0xbd, 0x66, 0x18, 0x14, 0x0b, 0x2c, 0x7a, 0x1d, 0xfa, 0x03, 0xd2, 0x60, 0x86, + 0xc4, 0xe1, 0xee, 0x27, 0x39, 0xb7, 0x4b, 0xf2, 0x72, 0x58, 0x32, 0x40, 0xd7, 0x01, 0x05, 0x84, + 0x0a, 0x78, 0xae, 0xb7, 0xa9, 0xdc, 0xbb, 0xc5, 0x46, 0xab, 0x04, 0x69, 0x1c, 0x53, 0xc8, 0x07, + 0x8b, 0x38, 0xa3, 0x18, 0xba, 0x0a, 0xe3, 0x0a, 0xba, 0xe4, 0x85, 0x91, 0x43, 0x37, 0xb8, 0x51, + 0xc6, 0x4b, 0xe9, 0x57, 0x70, 0x92, 0x00, 0xa7, 0xcb, 0xd8, 0x3f, 0x67, 0x01, 0xef, 0xe7, 0x63, + 0xd0, 0x2a, 0xbc, 0x66, 0x6a, 0x15, 0xce, 0xe4, 0x8e, 0x5c, 0x8e, 0x46, 0xe1, 0xe7, 0x2c, 0x18, + 0xd4, 0x46, 0x36, 0x9e, 0xb3, 0x56, 0x9b, 0x39, 0xdb, 0x82, 0x31, 
0x3a, 0xd3, 0x6f, 0xac, 0x87, + 0x24, 0xd8, 0x25, 0x75, 0x36, 0x31, 0x0b, 0x0f, 0x36, 0x31, 0x95, 0x2b, 0xe9, 0x72, 0x82, 0x21, + 0x4e, 0x55, 0x61, 0x7f, 0x5a, 0x36, 0x55, 0x79, 0xde, 0xd6, 0xd4, 0x98, 0x27, 0x3c, 0x6f, 0xd5, + 0xa8, 0xe2, 0x98, 0x86, 0x2e, 0xb5, 0x2d, 0x3f, 0x8c, 0x92, 0x9e, 0xb7, 0xd7, 0xfc, 0x30, 0xc2, + 0x0c, 0x63, 0xbf, 0x00, 0xb0, 0x70, 0x97, 0xd4, 0xf8, 0x8c, 0xd5, 0x2f, 0x3d, 0x56, 0xfe, 0xa5, + 0xc7, 0xfe, 0x3d, 0x0b, 0x46, 0x16, 0xe7, 0x8c, 0x93, 0x6b, 0x1a, 0x80, 0xdf, 0xd4, 0x6e, 0xdf, + 0x5e, 0x95, 0xee, 0x1f, 0xdc, 0x02, 0xae, 0xa0, 0x58, 0xa3, 0x40, 0x67, 0xa0, 0xd8, 0x68, 0x79, + 0x42, 0xed, 0xd9, 0x4f, 0x8f, 0xc7, 0xe5, 0x96, 0x87, 0x29, 0x4c, 0x7b, 0xac, 0x54, 0xec, 0xfa, + 0xb1, 0x52, 0xc7, 0x20, 0x25, 0xa8, 0x0c, 0xbd, 0x77, 0xee, 0xb8, 0x75, 0xfe, 0x14, 0x5c, 0xb8, + 0xa6, 0xdc, 0xbe, 0xbd, 0x34, 0x1f, 0x62, 0x0e, 0xb7, 0xbf, 0x58, 0x84, 0xa9, 0xc5, 0x06, 0xb9, + 0xfb, 0x0e, 0x9f, 0xc3, 0x77, 0xfb, 0xd4, 0xea, 0x70, 0x0a, 0xa4, 0xc3, 0x3e, 0xa7, 0xeb, 0xdc, + 0x1f, 0x1b, 0xd0, 0xcf, 0x1d, 0x4f, 0xe5, 0xe3, 0xf8, 0x4c, 0x73, 0x5f, 0x7e, 0x87, 0x4c, 0x73, + 0x07, 0x56, 0x61, 0xee, 0x53, 0x07, 0xa6, 0x80, 0x62, 0xc9, 0x7c, 0xea, 0x15, 0x18, 0xd2, 0x29, + 0x0f, 0xf5, 0xb0, 0xf5, 0xbb, 0x8b, 0x30, 0x46, 0x5b, 0xf0, 0x50, 0x07, 0xe2, 0x66, 0x7a, 0x20, + 0x8e, 0xfa, 0x71, 0x63, 0xe7, 0xd1, 0x78, 0x2b, 0x39, 0x1a, 0x57, 0xf2, 0x46, 0xe3, 0xb8, 0xc7, + 0xe0, 0x7b, 0x2c, 0x98, 0x58, 0x6c, 0xf8, 0xb5, 0xed, 0xc4, 0x03, 0xc4, 0x97, 0x60, 0x90, 0x6e, + 0xc7, 0xa1, 0x11, 0x8b, 0xc3, 0x88, 0xce, 0x22, 0x50, 0x58, 0xa7, 0xd3, 0x8a, 0xdd, 0xbc, 0xb9, + 0x34, 0x9f, 0x15, 0xd4, 0x45, 0xa0, 0xb0, 0x4e, 0x67, 0xff, 0x8e, 0x05, 0xe7, 0xae, 0xce, 0x2d, + 0xc4, 0x53, 0x31, 0x15, 0x57, 0xe6, 0x22, 0xf4, 0x35, 0xeb, 0x5a, 0x53, 0x62, 0xb5, 0xf0, 0x3c, + 0x6b, 0x85, 0xc0, 0xbe, 0x5b, 0x62, 0x26, 0xdd, 0x04, 0xb8, 0x8a, 0x2b, 0x73, 0x62, 0xdf, 0x95, + 0x56, 0x20, 0x2b, 0xd7, 0x0a, 0xf4, 0x04, 0xf4, 0xd3, 0x73, 0xc1, 0xad, 0xc9, 0x76, 0x73, 0x83, + 0x3e, 
0x07, 0x61, 0x89, 0xb3, 0x7f, 0xd6, 0x82, 0x89, 0xab, 0x6e, 0x44, 0x0f, 0xed, 0x64, 0xe0, + 0x14, 0x7a, 0x6a, 0x87, 0x6e, 0xe4, 0x07, 0x7b, 0xc9, 0xc0, 0x29, 0x58, 0x61, 0xb0, 0x46, 0xc5, + 0x3f, 0x68, 0xd7, 0x65, 0x2f, 0x29, 0x0a, 0xa6, 0xdd, 0x0d, 0x0b, 0x38, 0x56, 0x14, 0xb4, 0xbf, + 0xea, 0x6e, 0xc0, 0x54, 0x96, 0x7b, 0x62, 0xe3, 0x56, 0xfd, 0x35, 0x2f, 0x11, 0x38, 0xa6, 0xb1, + 0xff, 0xdc, 0x82, 0xf2, 0xd5, 0x46, 0x2b, 0x8c, 0x48, 0xb0, 0x11, 0xe6, 0x6c, 0xba, 0x2f, 0x40, + 0x89, 0x48, 0x03, 0x81, 0x7c, 0xf2, 0x29, 0x05, 0x51, 0x65, 0x39, 0xe0, 0xf1, 0x5b, 0x14, 0x5d, + 0x17, 0xaf, 0xa4, 0x0f, 0xf7, 0xcc, 0x75, 0x11, 0x10, 0xd1, 0xeb, 0xd2, 0x03, 0xda, 0xb0, 0xc8, + 0x18, 0x0b, 0x29, 0x2c, 0xce, 0x28, 0x61, 0xff, 0x98, 0x05, 0x27, 0xd5, 0x07, 0xbf, 0xeb, 0x3e, + 0xd3, 0xfe, 0x6a, 0x01, 0x86, 0xaf, 0xad, 0xad, 0x55, 0xae, 0x92, 0x48, 0x9b, 0x95, 0xed, 0xcd, + 0xfe, 0x58, 0xb3, 0x5e, 0xb6, 0xbb, 0x23, 0xb6, 0x22, 0xb7, 0x31, 0xcd, 0xe3, 0xa2, 0x4d, 0x2f, + 0x79, 0xd1, 0x8d, 0xa0, 0x1a, 0x05, 0xae, 0xb7, 0x99, 0x39, 0xd3, 0xa5, 0xcc, 0x52, 0xcc, 0x93, + 0x59, 0xd0, 0x0b, 0xd0, 0xc7, 0x02, 0xb3, 0xc9, 0x41, 0x78, 0x44, 0x5d, 0xb1, 0x18, 0xf4, 0x60, + 0xbf, 0x5c, 0xba, 0x89, 0x97, 0xf8, 0x1f, 0x2c, 0x48, 0xd1, 0x4d, 0x18, 0xdc, 0x8a, 0xa2, 0xe6, + 0x35, 0xe2, 0xd4, 0x49, 0x20, 0x77, 0xd9, 0xf3, 0x59, 0xbb, 0x2c, 0xed, 0x04, 0x4e, 0x16, 0x6f, + 0x4c, 0x31, 0x2c, 0xc4, 0x3a, 0x1f, 0xbb, 0x0a, 0x10, 0xe3, 0x8e, 0xc8, 0x70, 0x63, 0xaf, 0x41, + 0x89, 0x7e, 0xee, 0x4c, 0xc3, 0x75, 0xda, 0x9b, 0xc6, 0x9f, 0x81, 0x92, 0x34, 0x7c, 0x87, 0x22, + 0x8a, 0x03, 0x3b, 0x91, 0xa4, 0x5d, 0x3c, 0xc4, 0x31, 0xde, 0x7e, 0x1c, 0x84, 0x6f, 0x69, 0x3b, + 0x96, 0xf6, 0x06, 0x9c, 0x60, 0x4e, 0xb2, 0x4e, 0xb4, 0x65, 0xcc, 0xd1, 0xce, 0x93, 0xe1, 0x59, + 0x71, 0xaf, 0xe3, 0x5f, 0x36, 0xa9, 0x3d, 0x4e, 0x1e, 0x92, 0x1c, 0xe3, 0x3b, 0x9e, 0xfd, 0x67, + 0x3d, 0xf0, 0xc8, 0x52, 0x35, 0x3f, 0xfc, 0xd0, 0xcb, 0x30, 0xc4, 0xc5, 0x45, 0x3a, 0x35, 0x9c, + 0x86, 0xa8, 0x57, 0x69, 0x40, 0xd7, 0x34, 
0x1c, 0x36, 0x28, 0xd1, 0x39, 0x28, 0xba, 0x6f, 0x7b, + 0xc9, 0xa7, 0x7b, 0x4b, 0x6f, 0xac, 0x62, 0x0a, 0xa7, 0x68, 0x2a, 0x79, 0xf2, 0x2d, 0x5d, 0xa1, + 0x95, 0xf4, 0xf9, 0x1a, 0x8c, 0xb8, 0x61, 0x2d, 0x74, 0x97, 0x3c, 0xba, 0x4e, 0xb5, 0x95, 0xae, + 0x74, 0x0e, 0xb4, 0xd1, 0x0a, 0x8b, 0x13, 0xd4, 0xda, 0xf9, 0xd2, 0xdb, 0xb5, 0xf4, 0xda, 0x31, + 0xf8, 0x01, 0xdd, 0xfe, 0x9b, 0xec, 0xeb, 0x42, 0xa6, 0x82, 0x17, 0xdb, 0x3f, 0xff, 0xe0, 0x10, + 0x4b, 0x1c, 0xbd, 0xd0, 0xd5, 0xb6, 0x9c, 0xe6, 0x4c, 0x2b, 0xda, 0x9a, 0x77, 0xc3, 0x9a, 0xbf, + 0x4b, 0x82, 0x3d, 0x76, 0x17, 0x1f, 0x88, 0x2f, 0x74, 0x0a, 0x31, 0x77, 0x6d, 0xa6, 0x42, 0x29, + 0x71, 0xba, 0x0c, 0x9a, 0x81, 0x51, 0x09, 0xac, 0x92, 0x90, 0x1d, 0x01, 0x83, 0x8c, 0x8d, 0x7a, + 0x4c, 0x27, 0xc0, 0x8a, 0x49, 0x92, 0xde, 0x14, 0x70, 0xe1, 0x28, 0x04, 0xdc, 0x0f, 0xc2, 0xb0, + 0xeb, 0xb9, 0x91, 0xeb, 0x44, 0x3e, 0xb7, 0x1f, 0xf1, 0x6b, 0x37, 0x53, 0x30, 0x2f, 0xe9, 0x08, + 0x6c, 0xd2, 0xd9, 0xff, 0xb6, 0x07, 0xc6, 0xd9, 0xb0, 0xbd, 0x37, 0xc3, 0xbe, 0x9d, 0x66, 0xd8, + 0xcd, 0xf4, 0x0c, 0x3b, 0x0a, 0xc9, 0xfd, 0x81, 0xa7, 0xd9, 0x67, 0xa0, 0xa4, 0xde, 0x0f, 0xca, + 0x07, 0xc4, 0x56, 0xce, 0x03, 0xe2, 0xce, 0xa7, 0xb7, 0x74, 0x49, 0x2b, 0x66, 0xba, 0xa4, 0x7d, + 0xd9, 0x82, 0xd8, 0xb0, 0x80, 0xde, 0x80, 0x52, 0xd3, 0x67, 0x1e, 0xae, 0x81, 0x74, 0x1b, 0x7f, + 0xbc, 0xad, 0x65, 0x82, 0x47, 0x60, 0x0b, 0x78, 0x2f, 0x54, 0x64, 0x51, 0x1c, 0x73, 0x41, 0xd7, + 0xa1, 0xbf, 0x19, 0x90, 0x6a, 0xc4, 0xc2, 0x03, 0x75, 0xcf, 0x90, 0xcf, 0x1a, 0x5e, 0x10, 0x4b, + 0x0e, 0xf6, 0xbf, 0xb7, 0x60, 0x2c, 0x49, 0x8a, 0x3e, 0x0c, 0x3d, 0xe4, 0x2e, 0xa9, 0x89, 0xf6, + 0x66, 0x1e, 0xc5, 0xb1, 0x6a, 0x82, 0x77, 0x00, 0xfd, 0x8f, 0x59, 0x29, 0x74, 0x0d, 0xfa, 0xe9, + 0x39, 0x7c, 0x55, 0x85, 0xc2, 0x7b, 0x34, 0xef, 0x2c, 0x57, 0x02, 0x0d, 0x6f, 0x9c, 0x00, 0x61, + 0x59, 0x9c, 0xf9, 0x81, 0xd5, 0x9a, 0x55, 0x7a, 0xc5, 0x89, 0xda, 0xdd, 0xc4, 0xd7, 0xe6, 0x2a, + 0x9c, 0x48, 0x70, 0xe3, 0x7e, 0x60, 0x12, 0x88, 0x63, 0x26, 0xf6, 0x2f, 0x58, 
0x00, 0xdc, 0xed, + 0xcd, 0xf1, 0x36, 0xc9, 0x31, 0x68, 0xd3, 0xe7, 0xa1, 0x27, 0x6c, 0x92, 0x5a, 0x3b, 0xe7, 0xeb, + 0xb8, 0x3d, 0xd5, 0x26, 0xa9, 0xc5, 0x33, 0x8e, 0xfe, 0xc3, 0xac, 0xb4, 0xfd, 0xbd, 0x00, 0x23, + 0x31, 0xd9, 0x52, 0x44, 0x76, 0xd0, 0x73, 0x46, 0xd0, 0x91, 0x33, 0x89, 0xa0, 0x23, 0x25, 0x46, + 0xad, 0x29, 0x6e, 0x3f, 0x03, 0xc5, 0x1d, 0xe7, 0xae, 0xd0, 0xcc, 0x3d, 0xd3, 0xbe, 0x19, 0x94, + 0xff, 0xf4, 0x8a, 0x73, 0x97, 0x5f, 0x5e, 0x9f, 0x91, 0x2b, 0x64, 0xc5, 0xb9, 0xdb, 0xd1, 0x41, + 0x98, 0x56, 0xc2, 0xea, 0x72, 0x3d, 0xe1, 0xd1, 0xd5, 0x55, 0x5d, 0xae, 0x97, 0xac, 0xcb, 0xf5, + 0xba, 0xa8, 0xcb, 0xf5, 0xd0, 0x3d, 0xe8, 0x17, 0x0e, 0x97, 0x22, 0x2c, 0xd9, 0xe5, 0x2e, 0xea, + 0x13, 0xfe, 0x9a, 0xbc, 0xce, 0xcb, 0xf2, 0x72, 0x2e, 0xa0, 0x1d, 0xeb, 0x95, 0x15, 0xa2, 0xff, + 0xc7, 0x82, 0x11, 0xf1, 0x1b, 0x93, 0xb7, 0x5b, 0x24, 0x8c, 0x84, 0xf0, 0xfa, 0x81, 0xee, 0xdb, + 0x20, 0x0a, 0xf2, 0xa6, 0x7c, 0x40, 0x9e, 0x33, 0x26, 0xb2, 0x63, 0x8b, 0x12, 0xad, 0x40, 0x7f, + 0xdb, 0x82, 0x13, 0x3b, 0xce, 0x5d, 0x5e, 0x23, 0x87, 0x61, 0x27, 0x72, 0x7d, 0xe1, 0xb8, 0xf0, + 0xe1, 0xee, 0x86, 0x3f, 0x55, 0x9c, 0x37, 0x52, 0x5a, 0x29, 0x4f, 0x64, 0x91, 0x74, 0x6c, 0x6a, + 0x66, 0xbb, 0xa6, 0x36, 0x60, 0x40, 0xce, 0xb7, 0x87, 0xe9, 0xdd, 0xcd, 0xea, 0x11, 0x73, 0xed, + 0xa1, 0xd6, 0xf3, 0x19, 0x18, 0xd2, 0xe7, 0xd8, 0x43, 0xad, 0xeb, 0x6d, 0x98, 0xc8, 0x98, 0x4b, + 0x0f, 0xb5, 0xca, 0x3b, 0x70, 0x26, 0x77, 0x7e, 0x3c, 0x54, 0xef, 0xfc, 0xaf, 0x5a, 0xfa, 0x3e, + 0x78, 0x0c, 0x26, 0x8d, 0x39, 0xd3, 0xa4, 0x71, 0xbe, 0xfd, 0xca, 0xc9, 0xb1, 0x6b, 0xbc, 0xa5, + 0x37, 0x9a, 0xee, 0xea, 0xe8, 0x75, 0xe8, 0x6b, 0x50, 0x88, 0x74, 0xdb, 0xb5, 0x3b, 0xaf, 0xc8, + 0x58, 0x98, 0x64, 0xf0, 0x10, 0x0b, 0x0e, 0xf6, 0x2f, 0x5b, 0xd0, 0x73, 0x0c, 0x3d, 0x81, 0xcd, + 0x9e, 0x78, 0x2e, 0x97, 0xb5, 0x88, 0xd0, 0x3e, 0x8d, 0x9d, 0x3b, 0x0b, 0x77, 0x23, 0xe2, 0x85, + 0xec, 0x44, 0xce, 0xec, 0x98, 0x9f, 0xb2, 0x60, 0x62, 0xd9, 0x77, 0xea, 0xb3, 0x4e, 0xc3, 0xf1, + 0x6a, 0x24, 0x58, 
0xf2, 0x36, 0x0f, 0xe5, 0x73, 0x5e, 0xe8, 0xe8, 0x73, 0x3e, 0x27, 0x5d, 0xb6, + 0x7a, 0xf2, 0xc7, 0x8f, 0x4a, 0xd2, 0xc9, 0x30, 0x4c, 0x86, 0x73, 0xf1, 0x16, 0x20, 0xbd, 0x95, + 0xe2, 0xe5, 0x15, 0x86, 0x7e, 0x97, 0xb7, 0x57, 0x0c, 0xe2, 0x93, 0xd9, 0x12, 0x6e, 0xea, 0xf3, + 0xb4, 0x37, 0x45, 0x1c, 0x80, 0x25, 0x23, 0xfb, 0x65, 0xc8, 0x0c, 0x9b, 0xd1, 0x59, 0x7b, 0x61, + 0x7f, 0x1c, 0xc6, 0x59, 0xc9, 0x43, 0x6a, 0x06, 0xec, 0x84, 0xce, 0x35, 0x23, 0x04, 0xa8, 0xfd, + 0x05, 0x0b, 0x46, 0x57, 0x13, 0x91, 0x11, 0x2f, 0x32, 0x2b, 0x6d, 0x86, 0xaa, 0xbf, 0xca, 0xa0, + 0x58, 0x60, 0x8f, 0x5c, 0x15, 0xf6, 0xd7, 0x16, 0xc4, 0x91, 0x6c, 0x8e, 0x41, 0x7c, 0x9b, 0x33, + 0xc4, 0xb7, 0x4c, 0x41, 0x56, 0x35, 0x27, 0x4f, 0x7a, 0x43, 0xd7, 0x55, 0x8c, 0xb7, 0x36, 0x32, + 0x6c, 0xcc, 0x86, 0x4f, 0xc5, 0x11, 0x33, 0x10, 0x9c, 0x8c, 0xfa, 0x66, 0xff, 0x7e, 0x01, 0x90, + 0xa2, 0xed, 0x3a, 0x06, 0x5d, 0xba, 0xc4, 0xd1, 0xc4, 0xa0, 0xdb, 0x05, 0xc4, 0xfc, 0x0c, 0x02, + 0xc7, 0x0b, 0x39, 0x5b, 0x57, 0x28, 0xff, 0x0e, 0xe7, 0xc4, 0x30, 0x25, 0x1f, 0xa5, 0x2d, 0xa7, + 0xb8, 0xe1, 0x8c, 0x1a, 0x34, 0xff, 0x91, 0xde, 0x6e, 0xfd, 0x47, 0xfa, 0x3a, 0xbc, 0xae, 0xfc, + 0x8a, 0x05, 0xc3, 0xaa, 0x9b, 0xde, 0x25, 0x3e, 0xf8, 0xaa, 0x3d, 0x39, 0x1b, 0x68, 0x45, 0x6b, + 0x32, 0x3b, 0x58, 0xbe, 0x83, 0xbd, 0x92, 0x75, 0x1a, 0xee, 0x3d, 0xa2, 0x62, 0x96, 0x96, 0xc5, + 0xab, 0x57, 0x01, 0x3d, 0xd8, 0x2f, 0x0f, 0xab, 0x7f, 0x3c, 0x26, 0x7b, 0x5c, 0x84, 0x6e, 0xc9, + 0xa3, 0x89, 0xa9, 0x88, 0x5e, 0x82, 0xde, 0xe6, 0x96, 0x13, 0x92, 0xc4, 0x5b, 0xa5, 0xde, 0x0a, + 0x05, 0x1e, 0xec, 0x97, 0x47, 0x54, 0x01, 0x06, 0xc1, 0x9c, 0xba, 0xfb, 0xc8, 0x7e, 0xe9, 0xc9, + 0xd9, 0x31, 0xb2, 0xdf, 0x5f, 0x5a, 0xd0, 0xb3, 0xea, 0xd7, 0x8f, 0x63, 0x0b, 0x78, 0xcd, 0xd8, + 0x02, 0xce, 0xe6, 0xa5, 0xcb, 0xc8, 0x5d, 0xfd, 0x8b, 0x89, 0xd5, 0x7f, 0x3e, 0x97, 0x43, 0xfb, + 0x85, 0xbf, 0x03, 0x83, 0x2c, 0x09, 0x87, 0x78, 0x97, 0xf5, 0x82, 0xb1, 0xe0, 0xcb, 0x89, 0x05, + 0x3f, 0xaa, 0x91, 0x6a, 0x2b, 0xfd, 0x29, 0xe8, 0x17, 
0x0f, 0x7d, 0x92, 0x8f, 0x8d, 0x05, 0x2d, + 0x96, 0x78, 0xfb, 0xc7, 0x8b, 0x60, 0x24, 0xfd, 0x40, 0xbf, 0x6a, 0xc1, 0x74, 0xc0, 0x1d, 0x80, + 0xeb, 0xf3, 0xad, 0xc0, 0xf5, 0x36, 0xab, 0xb5, 0x2d, 0x52, 0x6f, 0x35, 0x5c, 0x6f, 0x73, 0x69, + 0xd3, 0xf3, 0x15, 0x78, 0xe1, 0x2e, 0xa9, 0xb5, 0x98, 0x71, 0xae, 0x43, 0x86, 0x11, 0xe5, 0x48, + 0xff, 0xfc, 0xfd, 0xfd, 0xf2, 0x34, 0x3e, 0x14, 0x6f, 0x7c, 0xc8, 0xb6, 0xa0, 0xdf, 0xb1, 0xe0, + 0x32, 0xcf, 0x85, 0xd1, 0x7d, 0xfb, 0xdb, 0xdc, 0x96, 0x2b, 0x92, 0x55, 0xcc, 0x64, 0x8d, 0x04, + 0x3b, 0xb3, 0x1f, 0x14, 0x1d, 0x7a, 0xb9, 0x72, 0xb8, 0xba, 0xf0, 0x61, 0x1b, 0x67, 0xff, 0xc3, + 0x22, 0x0c, 0x8b, 0x08, 0x70, 0xe2, 0x0c, 0x78, 0xc9, 0x98, 0x12, 0x8f, 0x26, 0xa6, 0xc4, 0xb8, + 0x41, 0x7c, 0x34, 0xdb, 0x7f, 0x08, 0xe3, 0x74, 0x73, 0xbe, 0x46, 0x9c, 0x20, 0x5a, 0x27, 0x0e, + 0x77, 0x0b, 0x2b, 0x1e, 0x7a, 0xf7, 0x57, 0xfa, 0xc9, 0xe5, 0x24, 0x33, 0x9c, 0xe6, 0xff, 0xed, + 0x74, 0xe6, 0x78, 0x30, 0x96, 0x0a, 0xe2, 0xf7, 0x26, 0x94, 0xd4, 0x2b, 0x15, 0xb1, 0xe9, 0xb4, + 0x8f, 0x85, 0x99, 0xe4, 0xc0, 0xd5, 0x5f, 0xf1, 0x0b, 0xa9, 0x98, 0x9d, 0xfd, 0x77, 0x0b, 0x46, + 0x85, 0x7c, 0x10, 0x57, 0x61, 0xc0, 0x09, 0x43, 0x77, 0xd3, 0x23, 0xf5, 0x76, 0x1a, 0xca, 0x54, + 0x35, 0xec, 0xa5, 0xd0, 0x8c, 0x28, 0x89, 0x15, 0x0f, 0x74, 0x8d, 0x3b, 0xdf, 0xed, 0x92, 0x76, + 0xea, 0xc9, 0x14, 0x37, 0x90, 0xee, 0x79, 0xbb, 0x04, 0x8b, 0xf2, 0xe8, 0x93, 0xdc, 0x3b, 0xf2, + 0xba, 0xe7, 0xdf, 0xf1, 0xae, 0xfa, 0xbe, 0x8c, 0xf6, 0xd1, 0x1d, 0xc3, 0x71, 0xe9, 0x13, 0xa9, + 0x8a, 0x63, 0x93, 0x5b, 0x77, 0x51, 0x71, 0x3f, 0x0b, 0x2c, 0xf6, 0xbf, 0xf9, 0x28, 0x3c, 0x44, + 0x04, 0x46, 0x45, 0x78, 0x41, 0x09, 0x13, 0x7d, 0x97, 0x79, 0x95, 0x33, 0x4b, 0xc7, 0x8a, 0xf4, + 0xeb, 0x26, 0x0b, 0x9c, 0xe4, 0x69, 0xff, 0x8c, 0x05, 0xec, 0x81, 0xec, 0x31, 0xc8, 0x23, 0x1f, + 0x31, 0xe5, 0x91, 0xc9, 0xbc, 0x4e, 0xce, 0x11, 0x45, 0x5e, 0xe4, 0x33, 0xab, 0x12, 0xf8, 0x77, + 0xf7, 0x84, 0x4b, 0x4b, 0xe7, 0xfb, 0x87, 0xfd, 0xdf, 0x2d, 0xbe, 0x89, 0xc5, 0xe1, 0x04, 
0x3e, + 0x07, 0x03, 0x35, 0xa7, 0xe9, 0xd4, 0x78, 0x86, 0xaa, 0x5c, 0x8d, 0x9e, 0x51, 0x68, 0x7a, 0x4e, + 0x94, 0xe0, 0x1a, 0x2a, 0x19, 0xa6, 0x72, 0x40, 0x82, 0x3b, 0x6a, 0xa5, 0x54, 0x95, 0x53, 0xdb, + 0x30, 0x6c, 0x30, 0x7b, 0xa8, 0xea, 0x8c, 0xcf, 0xf1, 0x23, 0x56, 0x85, 0x55, 0xdd, 0x81, 0x71, + 0x4f, 0xfb, 0x4f, 0x0f, 0x14, 0x79, 0xb9, 0x7c, 0xbc, 0xd3, 0x21, 0xca, 0x4e, 0x1f, 0xed, 0xed, + 0x6d, 0x82, 0x0d, 0x4e, 0x73, 0xb6, 0x7f, 0xc2, 0x82, 0xd3, 0x3a, 0xa1, 0xf6, 0xbc, 0xa7, 0x93, + 0x91, 0x64, 0x1e, 0x06, 0xfc, 0x26, 0x09, 0x9c, 0xc8, 0x0f, 0xc4, 0xa9, 0x71, 0x49, 0x76, 0xfa, + 0x0d, 0x01, 0x3f, 0x10, 0xf9, 0x16, 0x24, 0x77, 0x09, 0xc7, 0xaa, 0x24, 0xbd, 0x7d, 0xb2, 0xce, + 0x08, 0xc5, 0x43, 0x2e, 0xb6, 0x07, 0x30, 0x7b, 0x7b, 0x88, 0x05, 0xc6, 0xfe, 0x33, 0x8b, 0x4f, + 0x2c, 0xbd, 0xe9, 0xe8, 0x6d, 0x18, 0xdb, 0x71, 0xa2, 0xda, 0xd6, 0xc2, 0xdd, 0x66, 0xc0, 0x4d, + 0x4e, 0xb2, 0x9f, 0x9e, 0xe9, 0xd4, 0x4f, 0xda, 0x47, 0xc6, 0x0e, 0x9f, 0x2b, 0x09, 0x66, 0x38, + 0xc5, 0x1e, 0xad, 0xc3, 0x20, 0x83, 0xb1, 0x37, 0x8a, 0x61, 0x3b, 0xd1, 0x20, 0xaf, 0x36, 0xe5, + 0xb2, 0xb0, 0x12, 0xf3, 0xc1, 0x3a, 0x53, 0xfb, 0xcb, 0x45, 0xbe, 0xda, 0x99, 0x28, 0xff, 0x14, + 0xf4, 0x37, 0xfd, 0xfa, 0xdc, 0xd2, 0x3c, 0x16, 0xa3, 0xa0, 0x8e, 0x91, 0x0a, 0x07, 0x63, 0x89, + 0x47, 0x97, 0x60, 0x40, 0xfc, 0x94, 0x26, 0x42, 0xb6, 0x37, 0x0b, 0xba, 0x10, 0x2b, 0x2c, 0x7a, + 0x1e, 0xa0, 0x19, 0xf8, 0xbb, 0x6e, 0x9d, 0xc5, 0x2c, 0x29, 0x9a, 0xde, 0x46, 0x15, 0x85, 0xc1, + 0x1a, 0x15, 0x7a, 0x15, 0x86, 0x5b, 0x5e, 0xc8, 0xc5, 0x11, 0x2d, 0x32, 0xb4, 0xf2, 0x83, 0xb9, + 0xa9, 0x23, 0xb1, 0x49, 0x8b, 0x66, 0xa0, 0x2f, 0x72, 0x98, 0xf7, 0x4c, 0x6f, 0xbe, 0x53, 0xf0, + 0x1a, 0xa5, 0xd0, 0x93, 0x21, 0xd1, 0x02, 0x58, 0x14, 0x44, 0x6f, 0xca, 0xe7, 0xc2, 0x7c, 0x63, + 0x17, 0xde, 0xf8, 0xdd, 0x1d, 0x02, 0xda, 0x63, 0x61, 0xe1, 0xe5, 0x6f, 0xf0, 0x42, 0xaf, 0x00, + 0x90, 0xbb, 0x11, 0x09, 0x3c, 0xa7, 0xa1, 0x7c, 0xde, 0x94, 0x5c, 0x30, 0xef, 0xaf, 0xfa, 0xd1, + 0xcd, 0x90, 0x2c, 0x28, 0x0a, 
0xac, 0x51, 0xdb, 0xbf, 0x53, 0x02, 0x88, 0xe5, 0x76, 0x74, 0x2f, + 0xb5, 0x71, 0x3d, 0xdb, 0x5e, 0xd2, 0x3f, 0xba, 0x5d, 0x0b, 0x7d, 0x9f, 0x05, 0x83, 0x22, 0x34, + 0x0b, 0x1b, 0xa1, 0x42, 0xfb, 0x8d, 0xd3, 0x8c, 0x10, 0x43, 0x4b, 0xf0, 0x26, 0xbc, 0x20, 0x67, + 0xa8, 0x86, 0xe9, 0xd8, 0x0a, 0xbd, 0x62, 0xf4, 0x7e, 0x79, 0x55, 0x2c, 0x1a, 0x5d, 0xa9, 0xae, + 0x8a, 0x25, 0x76, 0x46, 0xe8, 0xb7, 0xc4, 0x9b, 0xc6, 0x2d, 0xb1, 0x27, 0xff, 0x3d, 0xa4, 0x21, + 0xbe, 0x76, 0xba, 0x20, 0xa2, 0x8a, 0x1e, 0x1b, 0xa1, 0x37, 0xff, 0x11, 0x9f, 0x76, 0x4f, 0xea, + 0x10, 0x17, 0xe1, 0x33, 0x30, 0x5a, 0x37, 0x85, 0x00, 0x31, 0x13, 0x9f, 0xcc, 0xe3, 0x9b, 0x90, + 0x19, 0xe2, 0x63, 0x3f, 0x81, 0xc0, 0x49, 0xc6, 0xa8, 0xc2, 0x43, 0x65, 0x2c, 0x79, 0x1b, 0xbe, + 0x78, 0x11, 0x62, 0xe7, 0x8e, 0xe5, 0x5e, 0x18, 0x91, 0x1d, 0x4a, 0x19, 0x9f, 0xee, 0xab, 0xa2, + 0x2c, 0x56, 0x5c, 0xd0, 0xeb, 0xd0, 0xc7, 0x5e, 0x71, 0x85, 0x93, 0x03, 0xf9, 0x1a, 0x67, 0x33, + 0x66, 0x60, 0xbc, 0x20, 0xd9, 0xdf, 0x10, 0x0b, 0x0e, 0xe8, 0x9a, 0x7c, 0x23, 0x19, 0x2e, 0x79, + 0x37, 0x43, 0xc2, 0xde, 0x48, 0x96, 0x66, 0x1f, 0x8f, 0x9f, 0x3f, 0x72, 0x78, 0x66, 0xca, 0x44, + 0xa3, 0x24, 0x95, 0xa2, 0xc4, 0x7f, 0x99, 0x89, 0x51, 0x44, 0x38, 0xca, 0x6c, 0x9e, 0x99, 0xad, + 0x31, 0xee, 0xce, 0x5b, 0x26, 0x0b, 0x9c, 0xe4, 0x49, 0x25, 0x52, 0xbe, 0xea, 0xc5, 0x9b, 0x92, + 0x4e, 0x7b, 0x07, 0xbf, 0x88, 0xb3, 0xd3, 0x88, 0x43, 0xb0, 0x28, 0x7f, 0xac, 0xe2, 0xc1, 0x94, + 0x07, 0x63, 0xc9, 0x25, 0xfa, 0x50, 0xc5, 0x91, 0x3f, 0xe9, 0x81, 0x11, 0x73, 0x4a, 0xa1, 0xcb, + 0x50, 0x12, 0x4c, 0x54, 0x36, 0x13, 0xb5, 0x4a, 0x56, 0x24, 0x02, 0xc7, 0x34, 0x2c, 0x89, 0x0d, + 0x2b, 0xae, 0x39, 0x11, 0xc7, 0x49, 0x6c, 0x14, 0x06, 0x6b, 0x54, 0xf4, 0x62, 0xb5, 0xee, 0xfb, + 0x91, 0x3a, 0x90, 0xd4, 0xbc, 0x9b, 0x65, 0x50, 0x2c, 0xb0, 0xf4, 0x20, 0xda, 0x26, 0x81, 0x47, + 0x1a, 0x66, 0x14, 0x71, 0x75, 0x10, 0x5d, 0xd7, 0x91, 0xd8, 0xa4, 0xa5, 0xc7, 0xa9, 0x1f, 0xb2, + 0x89, 0x2c, 0xae, 0x6f, 0xb1, 0x53, 0x76, 0x95, 0x3f, 0x2f, 0x97, 
0x78, 0xf4, 0x71, 0x38, 0xad, + 0x22, 0x76, 0x61, 0x6e, 0xcd, 0x90, 0x35, 0xf6, 0x19, 0xda, 0x96, 0xd3, 0x73, 0xd9, 0x64, 0x38, + 0xaf, 0x3c, 0x7a, 0x0d, 0x46, 0x84, 0x88, 0x2f, 0x39, 0xf6, 0x9b, 0x1e, 0x46, 0xd7, 0x0d, 0x2c, + 0x4e, 0x50, 0xcb, 0x38, 0xe8, 0x4c, 0xca, 0x96, 0x1c, 0x06, 0xd2, 0x71, 0xd0, 0x75, 0x3c, 0x4e, + 0x95, 0x40, 0x33, 0x30, 0xca, 0x65, 0x30, 0xd7, 0xdb, 0xe4, 0x63, 0x22, 0x9e, 0x7c, 0xa9, 0x25, + 0x75, 0xc3, 0x44, 0xe3, 0x24, 0x3d, 0x7a, 0x19, 0x86, 0x9c, 0xa0, 0xb6, 0xe5, 0x46, 0xa4, 0x16, + 0xb5, 0x02, 0xfe, 0x16, 0x4c, 0x73, 0xd1, 0x9a, 0xd1, 0x70, 0xd8, 0xa0, 0xb4, 0xef, 0xc1, 0x44, + 0x46, 0xdc, 0x09, 0x3a, 0x71, 0x9c, 0xa6, 0x2b, 0xbf, 0x29, 0xe1, 0x07, 0x3d, 0x53, 0x59, 0x92, + 0x5f, 0xa3, 0x51, 0xd1, 0xd9, 0xc9, 0xe2, 0x53, 0x68, 0x89, 0x57, 0xd5, 0xec, 0x5c, 0x94, 0x08, + 0x1c, 0xd3, 0xd8, 0xff, 0xa9, 0x00, 0xa3, 0x19, 0xb6, 0x15, 0x96, 0xfc, 0x33, 0x71, 0x49, 0x89, + 0x73, 0x7d, 0x9a, 0x61, 0xf5, 0x0b, 0x87, 0x08, 0xab, 0x5f, 0xec, 0x14, 0x56, 0xbf, 0xe7, 0x9d, + 0x84, 0xd5, 0x37, 0x7b, 0xac, 0xb7, 0xab, 0x1e, 0xcb, 0x08, 0xc5, 0xdf, 0x77, 0xc8, 0x50, 0xfc, + 0x46, 0xa7, 0xf7, 0x77, 0xd1, 0xe9, 0x3f, 0x52, 0x80, 0xb1, 0xa4, 0x2b, 0xe9, 0x31, 0xe8, 0x6d, + 0x5f, 0x37, 0xf4, 0xb6, 0x97, 0xba, 0x79, 0xa2, 0x9b, 0xab, 0xc3, 0xc5, 0x09, 0x1d, 0xee, 0xd3, + 0x5d, 0x71, 0x6b, 0xaf, 0xcf, 0xfd, 0xc9, 0x02, 0x9c, 0xcc, 0x7c, 0x23, 0x7c, 0x0c, 0x7d, 0x73, + 0xc3, 0xe8, 0x9b, 0xe7, 0xba, 0x7e, 0xbe, 0x9c, 0xdb, 0x41, 0xb7, 0x13, 0x1d, 0x74, 0xb9, 0x7b, + 0x96, 0xed, 0x7b, 0xe9, 0xeb, 0x45, 0x38, 0x9f, 0x59, 0x2e, 0x56, 0x7b, 0x2e, 0x1a, 0x6a, 0xcf, + 0xe7, 0x13, 0x6a, 0x4f, 0xbb, 0x7d, 0xe9, 0xa3, 0xd1, 0x83, 0x8a, 0x67, 0xbc, 0x2c, 0x18, 0xc1, + 0x03, 0xea, 0x40, 0x8d, 0x67, 0xbc, 0x8a, 0x11, 0x36, 0xf9, 0x7e, 0x3b, 0xe9, 0x3e, 0x7f, 0xdb, + 0x82, 0x33, 0x99, 0x63, 0x73, 0x0c, 0xba, 0xae, 0x55, 0x53, 0xd7, 0xf5, 0x54, 0xd7, 0xb3, 0x35, + 0x47, 0xf9, 0xf5, 0xd3, 0xbd, 0x39, 0xdf, 0xc2, 0x6e, 0xf2, 0x37, 0x60, 0xd0, 0xa9, 0xd5, 0x48, + 0x18, 
0xae, 0xf8, 0x75, 0x15, 0x81, 0xfb, 0x39, 0x76, 0xcf, 0x8a, 0xc1, 0x07, 0xfb, 0xe5, 0xa9, + 0x24, 0x8b, 0x18, 0x8d, 0x75, 0x0e, 0xe8, 0x93, 0x30, 0x10, 0x8a, 0x73, 0x53, 0x8c, 0xfd, 0x0b, + 0x5d, 0x76, 0x8e, 0xb3, 0x4e, 0x1a, 0x66, 0xa8, 0x27, 0xa5, 0xa9, 0x50, 0x2c, 0xcd, 0xb0, 0x30, + 0x85, 0x23, 0x0d, 0x0b, 0xf3, 0x3c, 0xc0, 0xae, 0xba, 0x0c, 0x24, 0xf5, 0x0f, 0xda, 0x35, 0x41, + 0xa3, 0x42, 0x1f, 0x85, 0xb1, 0x90, 0xc7, 0x42, 0x9c, 0x6b, 0x38, 0x21, 0x7b, 0x6d, 0x23, 0x66, + 0x21, 0x0b, 0x27, 0x55, 0x4d, 0xe0, 0x70, 0x8a, 0x1a, 0x2d, 0xca, 0x5a, 0x59, 0xe0, 0x46, 0x3e, + 0x31, 0x2f, 0xc6, 0x35, 0x8a, 0xd4, 0xe3, 0x27, 0x92, 0xdd, 0xcf, 0x3a, 0x5e, 0x2b, 0x89, 0x3e, + 0x09, 0x40, 0xa7, 0x8f, 0xd0, 0x43, 0xf4, 0xe7, 0x6f, 0x9e, 0x74, 0x57, 0xa9, 0x67, 0x3a, 0x37, + 0xb3, 0x97, 0xb7, 0xf3, 0x8a, 0x09, 0xd6, 0x18, 0x22, 0x07, 0x86, 0xe3, 0x7f, 0x71, 0x66, 0xde, + 0x4b, 0xb9, 0x35, 0x24, 0x99, 0x33, 0x95, 0xf7, 0xbc, 0xce, 0x02, 0x9b, 0x1c, 0xed, 0x7f, 0x37, + 0x00, 0x8f, 0xb4, 0xd9, 0x86, 0xd1, 0x8c, 0x69, 0xea, 0x7d, 0x26, 0x79, 0x7f, 0x9f, 0xca, 0x2c, + 0x6c, 0x5c, 0xe8, 0x13, 0xb3, 0xbd, 0xf0, 0x8e, 0x67, 0xfb, 0x0f, 0x59, 0x9a, 0x66, 0x85, 0x3b, + 0x95, 0x7e, 0xe4, 0x90, 0xc7, 0xcb, 0x11, 0xaa, 0x5a, 0x36, 0x32, 0xf4, 0x15, 0xcf, 0x77, 0xdd, + 0x9c, 0xee, 0x15, 0x18, 0x5f, 0xcd, 0x0e, 0x00, 0xcc, 0x55, 0x19, 0x57, 0x0f, 0xfb, 0xfd, 0xc7, + 0x15, 0x0c, 0xf8, 0xf7, 0x2d, 0x38, 0x93, 0x02, 0xf3, 0x36, 0x90, 0x50, 0xc4, 0xa8, 0x5a, 0x7d, + 0xc7, 0x8d, 0x97, 0x0c, 0xf9, 0x37, 0x5c, 0x13, 0xdf, 0x70, 0x26, 0x97, 0x2e, 0xd9, 0xf4, 0x1f, + 0xfc, 0xa3, 0xf2, 0x04, 0xab, 0xc0, 0x24, 0xc4, 0xf9, 0x4d, 0x3f, 0xde, 0x8b, 0xff, 0x37, 0x27, + 0xf6, 0xf1, 0xd4, 0x32, 0x9c, 0x6f, 0xdf, 0xd5, 0x87, 0x7a, 0x9e, 0xfc, 0x7b, 0x16, 0x9c, 0x6b, + 0x1b, 0x03, 0xe7, 0x5b, 0x50, 0xce, 0xb5, 0x3f, 0x6f, 0xc1, 0xa3, 0x99, 0x25, 0x0c, 0xef, 0xb8, + 0xcb, 0x50, 0xaa, 0x25, 0xf2, 0xa1, 0xc6, 0xd1, 0x20, 0x54, 0x2e, 0xd4, 0x98, 0xc6, 0x70, 0x82, + 0x2b, 0x74, 0x74, 0x82, 0xfb, 0x0d, 0x0b, 
0x52, 0x67, 0xd5, 0x31, 0x08, 0x4d, 0x4b, 0xa6, 0xd0, + 0xf4, 0x78, 0x37, 0xbd, 0x99, 0x23, 0x2f, 0xfd, 0xc5, 0x28, 0x9c, 0xca, 0x79, 0x5d, 0xb8, 0x0b, + 0xe3, 0x9b, 0x35, 0x62, 0x3e, 0x27, 0x6f, 0x17, 0x66, 0xa9, 0xed, 0xdb, 0x73, 0x9e, 0x86, 0x36, + 0x45, 0x82, 0xd3, 0x55, 0xa0, 0xcf, 0x5b, 0x70, 0xc2, 0xb9, 0x13, 0x2e, 0x50, 0xe1, 0xd7, 0xad, + 0xcd, 0x36, 0xfc, 0xda, 0x36, 0x95, 0x2c, 0xe4, 0xb2, 0x7a, 0x31, 0x53, 0x21, 0x79, 0xbb, 0x9a, + 0xa2, 0x37, 0xaa, 0x67, 0x49, 0xc7, 0xb3, 0xa8, 0x70, 0x66, 0x5d, 0x08, 0x8b, 0xfc, 0x28, 0xf4, + 0x6a, 0xdd, 0x26, 0xe0, 0x41, 0xd6, 0x33, 0x50, 0x2e, 0xcd, 0x49, 0x0c, 0x56, 0x7c, 0xd0, 0xa7, + 0xa1, 0xb4, 0x29, 0xdf, 0x36, 0x67, 0x48, 0x8b, 0x71, 0x47, 0xb6, 0x7f, 0xf1, 0xcd, 0xbd, 0x0a, + 0x14, 0x11, 0x8e, 0x99, 0xa2, 0xd7, 0xa0, 0xe8, 0x6d, 0x84, 0xed, 0xf2, 0x76, 0x27, 0xdc, 0x47, + 0x79, 0x58, 0x91, 0xd5, 0xc5, 0x2a, 0xa6, 0x05, 0xd1, 0x35, 0x28, 0x06, 0xeb, 0x75, 0xa1, 0x4d, + 0xcf, 0x5c, 0xa4, 0x78, 0x76, 0x3e, 0xa7, 0x55, 0x8c, 0x13, 0x9e, 0x9d, 0xc7, 0x94, 0x05, 0xaa, + 0x40, 0x2f, 0x7b, 0x92, 0x27, 0x64, 0xb3, 0xcc, 0x5b, 0x68, 0x9b, 0xa7, 0xad, 0x3c, 0xf6, 0x08, + 0x23, 0xc0, 0x9c, 0x11, 0x5a, 0x83, 0xbe, 0x1a, 0xcb, 0xf1, 0x2c, 0x84, 0xb1, 0xf7, 0x67, 0xea, + 0xcd, 0xdb, 0x24, 0xbf, 0x16, 0x6a, 0x64, 0x46, 0x81, 0x05, 0x2f, 0xc6, 0x95, 0x34, 0xb7, 0x36, + 0x42, 0xa6, 0x77, 0xcb, 0xe3, 0xda, 0x26, 0xa7, 0xbb, 0xe0, 0xca, 0x28, 0xb0, 0xe0, 0x85, 0x5e, + 0x81, 0xc2, 0x46, 0x4d, 0x3c, 0xb7, 0xcb, 0x54, 0xa0, 0x9b, 0x91, 0x61, 0x66, 0xfb, 0xee, 0xef, + 0x97, 0x0b, 0x8b, 0x73, 0xb8, 0xb0, 0x51, 0x43, 0xab, 0xd0, 0xbf, 0xc1, 0x63, 0x49, 0x08, 0x1d, + 0xf9, 0x93, 0xd9, 0x61, 0x2e, 0x52, 0xe1, 0x26, 0xf8, 0xd3, 0x2d, 0x81, 0xc0, 0x92, 0x09, 0x4b, + 0xd7, 0xa1, 0x62, 0x62, 0x88, 0x90, 0x7c, 0xd3, 0x87, 0x8b, 0x63, 0xc2, 0x65, 0xe5, 0x38, 0xb2, + 0x06, 0xd6, 0x38, 0xd2, 0x59, 0xed, 0xdc, 0x6b, 0x05, 0x2c, 0x5e, 0xbb, 0x88, 0xdd, 0x94, 0x39, + 0xab, 0x67, 0x24, 0x51, 0xbb, 0x59, 0xad, 0x88, 0x70, 0xcc, 0x14, 0x6d, 0xc3, 
0xf0, 0x6e, 0xd8, + 0xdc, 0x22, 0x72, 0x49, 0xb3, 0x50, 0x4e, 0x39, 0xb2, 0xde, 0x2d, 0x41, 0xe8, 0x06, 0x51, 0xcb, + 0x69, 0xa4, 0x76, 0x21, 0x26, 0x97, 0xdf, 0xd2, 0x99, 0x61, 0x93, 0x37, 0xed, 0xfe, 0xb7, 0x5b, + 0xfe, 0xfa, 0x5e, 0x44, 0x44, 0x24, 0xbd, 0xcc, 0xee, 0x7f, 0x83, 0x93, 0xa4, 0xbb, 0x5f, 0x20, + 0xb0, 0x64, 0x82, 0x6e, 0x89, 0xee, 0x61, 0xbb, 0xe7, 0x58, 0x7e, 0x98, 0xde, 0x19, 0x49, 0x94, + 0xd3, 0x29, 0x6c, 0xb7, 0x8c, 0x59, 0xb1, 0x5d, 0xb2, 0xb9, 0xe5, 0x47, 0xbe, 0x97, 0xd8, 0xa1, + 0xc7, 0xf3, 0x77, 0xc9, 0x4a, 0x06, 0x7d, 0x7a, 0x97, 0xcc, 0xa2, 0xc2, 0x99, 0x75, 0xa1, 0x3a, + 0x8c, 0x34, 0xfd, 0x20, 0xba, 0xe3, 0x07, 0x72, 0x7e, 0xa1, 0x36, 0x3a, 0x3e, 0x83, 0x52, 0xd4, + 0xc8, 0x82, 0x54, 0x9a, 0x18, 0x9c, 0xe0, 0x89, 0x3e, 0x06, 0xfd, 0x61, 0xcd, 0x69, 0x90, 0xa5, + 0x1b, 0x93, 0x13, 0xf9, 0xc7, 0x4f, 0x95, 0x93, 0xe4, 0xcc, 0x2e, 0x1e, 0x0a, 0x84, 0x93, 0x60, + 0xc9, 0x0e, 0x2d, 0x42, 0x2f, 0x4b, 0x83, 0xc9, 0xc2, 0x3e, 0xe6, 0x44, 0x1b, 0x4e, 0x39, 0xf3, + 0xf3, 0xbd, 0x89, 0x81, 0x31, 0x2f, 0x4e, 0xd7, 0x80, 0xb8, 0xea, 0xfa, 0xe1, 0xe4, 0xc9, 0xfc, + 0x35, 0x20, 0x6e, 0xc8, 0x37, 0xaa, 0xed, 0xd6, 0x80, 0x22, 0xc2, 0x31, 0x53, 0xba, 0x33, 0xd3, + 0xdd, 0xf4, 0x54, 0x1b, 0x2f, 0xb4, 0xdc, 0xbd, 0x94, 0xed, 0xcc, 0x74, 0x27, 0xa5, 0x2c, 0xec, + 0x3f, 0xee, 0x4f, 0xcb, 0x2c, 0x4c, 0x39, 0xf2, 0xbf, 0x5b, 0x29, 0xbb, 0xf9, 0x07, 0xba, 0xd5, + 0xd5, 0x1e, 0xe1, 0xb5, 0xee, 0xf3, 0x16, 0x9c, 0x6a, 0x66, 0x7e, 0x88, 0x10, 0x00, 0xba, 0x53, + 0xf9, 0xf2, 0x4f, 0x57, 0x21, 0x42, 0xb3, 0xf1, 0x38, 0xa7, 0xa6, 0xe4, 0xd5, 0xb9, 0xf8, 0x8e, + 0xaf, 0xce, 0x2b, 0x30, 0x50, 0xe3, 0xf7, 0x1c, 0x19, 0xda, 0xba, 0xab, 0x00, 0x77, 0x4c, 0x94, + 0x10, 0x17, 0xa4, 0x0d, 0xac, 0x58, 0xa0, 0x1f, 0xb6, 0xe0, 0x5c, 0xb2, 0xe9, 0x98, 0x30, 0xb4, + 0x88, 0x2b, 0xca, 0xf5, 0x32, 0x8b, 0xe2, 0xfb, 0x53, 0xf2, 0xbf, 0x41, 0x7c, 0xd0, 0x89, 0x00, + 0xb7, 0xaf, 0x0c, 0xcd, 0x67, 0x28, 0x86, 0xfa, 0x4c, 0x63, 0x58, 0x17, 0xca, 0xa1, 0x17, 0x61, + 0x68, 0xc7, 0x6f, 
0x79, 0x91, 0x70, 0x5a, 0x13, 0x0e, 0x34, 0xcc, 0x71, 0x64, 0x45, 0x83, 0x63, + 0x83, 0x2a, 0xa1, 0x52, 0x1a, 0x78, 0x60, 0x95, 0xd2, 0x5b, 0x30, 0xe4, 0x69, 0x5e, 0xd6, 0x42, + 0x1e, 0xb8, 0x98, 0x1f, 0x13, 0x58, 0xf7, 0xc9, 0xe6, 0xad, 0xd4, 0x21, 0xd8, 0xe0, 0x76, 0xbc, + 0xde, 0x6c, 0x3f, 0x5f, 0xc8, 0x10, 0xea, 0xb9, 0x5a, 0xe9, 0xc3, 0xa6, 0x5a, 0xe9, 0x62, 0x52, + 0xad, 0x94, 0x32, 0x84, 0x18, 0x1a, 0xa5, 0xee, 0x53, 0x64, 0x75, 0x1d, 0x57, 0xf4, 0xbb, 0x2d, + 0x38, 0xcd, 0x34, 0xeb, 0xb4, 0x82, 0x77, 0xac, 0x4d, 0x7f, 0xe4, 0xfe, 0x7e, 0xf9, 0xf4, 0x72, + 0x36, 0x3b, 0x9c, 0x57, 0x8f, 0xdd, 0x80, 0x0b, 0x9d, 0x8e, 0x46, 0xe6, 0x41, 0x59, 0x57, 0xa6, + 0xf7, 0xd8, 0x83, 0xb2, 0xbe, 0x34, 0x8f, 0x19, 0xa6, 0xdb, 0xa8, 0x59, 0xf6, 0x7f, 0xb0, 0xa0, + 0x58, 0xf1, 0xeb, 0xc7, 0x70, 0xe9, 0xfe, 0x88, 0x71, 0xe9, 0x7e, 0x24, 0xfb, 0x50, 0xae, 0xe7, + 0x9a, 0x92, 0x16, 0x12, 0xa6, 0xa4, 0x73, 0x79, 0x0c, 0xda, 0x1b, 0x8e, 0x7e, 0xaa, 0x08, 0x83, + 0x15, 0xbf, 0xae, 0x9e, 0x2f, 0xfc, 0xe3, 0x07, 0x79, 0xbe, 0x90, 0x9b, 0xf4, 0x44, 0xe3, 0xcc, + 0x1c, 0x2f, 0xe5, 0xcb, 0xed, 0x6f, 0xb1, 0x57, 0x0c, 0xb7, 0x89, 0xbb, 0xb9, 0x15, 0x91, 0x7a, + 0xf2, 0x73, 0x8e, 0xef, 0x15, 0xc3, 0x1f, 0x17, 0x60, 0x34, 0x51, 0x3b, 0x6a, 0xc0, 0x70, 0x43, + 0x37, 0x54, 0x88, 0x79, 0xfa, 0x40, 0x36, 0x0e, 0xe1, 0x05, 0xae, 0x81, 0xb0, 0xc9, 0x1c, 0x4d, + 0x03, 0x28, 0xcb, 0xbd, 0x54, 0x57, 0xb3, 0x9b, 0x87, 0x32, 0xed, 0x87, 0x58, 0xa3, 0x40, 0x2f, + 0xc1, 0x60, 0xe4, 0x37, 0xfd, 0x86, 0xbf, 0xb9, 0x77, 0x9d, 0xc8, 0x80, 0x6a, 0xca, 0xb7, 0x73, + 0x2d, 0x46, 0x61, 0x9d, 0x0e, 0xdd, 0x85, 0x71, 0xc5, 0xa4, 0x7a, 0x04, 0xc6, 0x1b, 0xa6, 0xd9, + 0x58, 0x4d, 0x72, 0xc4, 0xe9, 0x4a, 0xec, 0x9f, 0x2d, 0xf2, 0x2e, 0xf6, 0x22, 0xf7, 0xbd, 0xd5, + 0xf0, 0xee, 0x5e, 0x0d, 0x5f, 0xb7, 0x60, 0x8c, 0xd6, 0xce, 0x1c, 0xd7, 0xa4, 0xa8, 0xa1, 0x22, + 0xa1, 0x5b, 0x6d, 0x22, 0xa1, 0x5f, 0xa4, 0xbb, 0x66, 0xdd, 0x6f, 0x45, 0x42, 0x7f, 0xa8, 0x6d, + 0x8b, 0x14, 0x8a, 0x05, 0x56, 0xd0, 0x91, 0x20, 0x10, 
0x8f, 0x6d, 0x75, 0x3a, 0x12, 0x04, 0x58, + 0x60, 0x65, 0xa0, 0xf4, 0x9e, 0xec, 0x40, 0xe9, 0x3c, 0xde, 0xad, 0x70, 0x71, 0x12, 0x42, 0x9f, + 0x16, 0xef, 0x56, 0xfa, 0x3e, 0xc5, 0x34, 0xf6, 0x57, 0x8b, 0x30, 0x54, 0xf1, 0xeb, 0xb1, 0xd5, + 0xfe, 0x45, 0xc3, 0x6a, 0x7f, 0x21, 0x61, 0xb5, 0x1f, 0xd3, 0x69, 0xdf, 0xb3, 0xd1, 0x7f, 0xb3, + 0x6c, 0xf4, 0xbf, 0x6e, 0xb1, 0x51, 0x9b, 0x5f, 0xad, 0x72, 0x3f, 0x48, 0x74, 0x05, 0x06, 0xd9, + 0x06, 0xc3, 0x5e, 0x77, 0x4b, 0x53, 0x36, 0x4b, 0x5c, 0xb6, 0x1a, 0x83, 0xb1, 0x4e, 0x83, 0x2e, + 0xc1, 0x40, 0x48, 0x9c, 0xa0, 0xb6, 0xa5, 0x76, 0x57, 0x61, 0x77, 0xe6, 0x30, 0xac, 0xb0, 0xe8, + 0x8d, 0x38, 0xd4, 0x6a, 0x31, 0xff, 0xb5, 0xa8, 0xde, 0x1e, 0xbe, 0x44, 0xf2, 0xe3, 0xab, 0xda, + 0xb7, 0x01, 0xa5, 0xe9, 0xbb, 0x08, 0x06, 0x58, 0x36, 0x83, 0x01, 0x96, 0x52, 0x81, 0x00, 0xff, + 0xca, 0x82, 0x91, 0x8a, 0x5f, 0xa7, 0x4b, 0xf7, 0xdb, 0x69, 0x9d, 0xea, 0x71, 0xa6, 0xfb, 0xda, + 0xc4, 0x99, 0x7e, 0x0c, 0x7a, 0x2b, 0x7e, 0xbd, 0x43, 0xc0, 0xc2, 0xbf, 0x61, 0x41, 0x7f, 0xc5, + 0xaf, 0x1f, 0x83, 0x69, 0xe2, 0xc3, 0xa6, 0x69, 0xe2, 0x74, 0xce, 0xbc, 0xc9, 0xb1, 0x46, 0xfc, + 0xff, 0x3d, 0x30, 0x4c, 0xdb, 0xe9, 0x6f, 0xca, 0xa1, 0x34, 0xba, 0xcd, 0xea, 0xa2, 0xdb, 0xa8, + 0x14, 0xee, 0x37, 0x1a, 0xfe, 0x9d, 0xe4, 0xb0, 0x2e, 0x32, 0x28, 0x16, 0x58, 0xf4, 0x2c, 0x0c, + 0x34, 0x03, 0xb2, 0xeb, 0xfa, 0x42, 0xbc, 0xd5, 0x0c, 0x3d, 0x15, 0x01, 0xc7, 0x8a, 0x82, 0x5e, + 0x4d, 0x43, 0xd7, 0xa3, 0x47, 0x79, 0xcd, 0xf7, 0xea, 0x5c, 0x7b, 0x5f, 0x14, 0xc9, 0x50, 0x34, + 0x38, 0x36, 0xa8, 0xd0, 0x6d, 0x28, 0xb1, 0xff, 0x6c, 0xdb, 0x39, 0x7c, 0x1a, 0x66, 0x91, 0x1e, + 0x52, 0x30, 0xc0, 0x31, 0x2f, 0xf4, 0x3c, 0x40, 0x24, 0x13, 0x0a, 0x84, 0x22, 0x70, 0x9d, 0xba, + 0x0a, 0xa8, 0x54, 0x03, 0x21, 0xd6, 0xa8, 0xd0, 0x33, 0x50, 0x8a, 0x1c, 0xb7, 0xb1, 0xec, 0x7a, + 0xcc, 0xfe, 0x4b, 0xdb, 0x2f, 0xb2, 0x34, 0x0a, 0x20, 0x8e, 0xf1, 0x54, 0x14, 0x63, 0x41, 0x4d, + 0x78, 0x12, 0xfa, 0x01, 0x46, 0xcd, 0x44, 0xb1, 0x65, 0x05, 0xc5, 0x1a, 0x05, 0xda, 0x82, 
0xb3, + 0xae, 0xc7, 0x12, 0x87, 0x90, 0xea, 0xb6, 0xdb, 0x5c, 0x5b, 0xae, 0xde, 0x22, 0x81, 0xbb, 0xb1, + 0x37, 0xeb, 0xd4, 0xb6, 0x89, 0x27, 0x13, 0xec, 0xca, 0xbc, 0xeb, 0x67, 0x97, 0xda, 0xd0, 0xe2, + 0xb6, 0x9c, 0xec, 0x17, 0xd8, 0x7c, 0xbf, 0x51, 0x45, 0x4f, 0x1b, 0x5b, 0xc7, 0x29, 0x7d, 0xeb, + 0x38, 0xd8, 0x2f, 0xf7, 0xdd, 0xa8, 0x6a, 0x31, 0x39, 0x5e, 0x86, 0x93, 0x15, 0xbf, 0x5e, 0xf1, + 0x83, 0x68, 0xd1, 0x0f, 0xee, 0x38, 0x41, 0x5d, 0x4e, 0xaf, 0xb2, 0x8c, 0x4a, 0x42, 0xf7, 0xcf, + 0x5e, 0xbe, 0xbb, 0x18, 0x11, 0x47, 0x5e, 0x60, 0x12, 0xdb, 0x21, 0xdf, 0xd2, 0xd5, 0x98, 0xec, + 0xa0, 0x52, 0xef, 0x5c, 0x75, 0x22, 0x82, 0x6e, 0xb0, 0x14, 0xfa, 0xf1, 0x31, 0x2a, 0x8a, 0x3f, + 0xa5, 0xa5, 0xd0, 0x8f, 0x91, 0x99, 0xe7, 0xae, 0x59, 0xde, 0xfe, 0x9c, 0xa8, 0x84, 0xeb, 0x01, + 0xb8, 0xbf, 0x62, 0x37, 0x39, 0xa8, 0x65, 0x6e, 0x8e, 0x42, 0x7e, 0x52, 0x07, 0x6e, 0x79, 0x6d, + 0x9b, 0x9b, 0xc3, 0xfe, 0x4e, 0x38, 0x95, 0xac, 0xbe, 0xeb, 0x44, 0xd8, 0x73, 0x30, 0x1e, 0xe8, + 0x05, 0xb5, 0x44, 0x67, 0x27, 0x79, 0x3e, 0x85, 0x04, 0x12, 0xa7, 0xe9, 0xed, 0x97, 0x60, 0x9c, + 0xde, 0x3d, 0x95, 0x20, 0xc7, 0x7a, 0xb9, 0x73, 0x78, 0x96, 0xff, 0xd8, 0xcb, 0x0e, 0xa2, 0x44, + 0xd6, 0x1b, 0xf4, 0x29, 0x18, 0x09, 0xc9, 0xb2, 0xeb, 0xb5, 0xee, 0x4a, 0xed, 0x53, 0x9b, 0x47, + 0xa4, 0xd5, 0x05, 0x9d, 0x92, 0xeb, 0xb0, 0x4d, 0x18, 0x4e, 0x70, 0x43, 0x3b, 0x30, 0x72, 0xc7, + 0xf5, 0xea, 0xfe, 0x9d, 0x50, 0xf2, 0x1f, 0xc8, 0x57, 0x65, 0xdf, 0xe6, 0x94, 0x89, 0x36, 0x1a, + 0xd5, 0xdd, 0x36, 0x98, 0xe1, 0x04, 0x73, 0xba, 0xd8, 0x83, 0x96, 0x37, 0x13, 0xde, 0x0c, 0x09, + 0x7f, 0x16, 0x28, 0x16, 0x3b, 0x96, 0x40, 0x1c, 0xe3, 0xe9, 0x62, 0x67, 0x7f, 0xae, 0x06, 0x7e, + 0x8b, 0xa7, 0x58, 0x11, 0x8b, 0x1d, 0x2b, 0x28, 0xd6, 0x28, 0xe8, 0x66, 0xc8, 0xfe, 0xad, 0xfa, + 0x1e, 0xf6, 0xfd, 0x48, 0x6e, 0x9f, 0x2c, 0x45, 0x98, 0x06, 0xc7, 0x06, 0x15, 0x5a, 0x04, 0x14, + 0xb6, 0x9a, 0xcd, 0x06, 0xf3, 0x4e, 0x73, 0x1a, 0x8c, 0x15, 0x77, 0xdb, 0x29, 0xf2, 0x10, 0xd1, + 0xd5, 0x14, 0x16, 0x67, 0x94, 
0xa0, 0xe7, 0xe2, 0x86, 0x68, 0x6a, 0x2f, 0x6b, 0x2a, 0x37, 0x7b, + 0x55, 0x79, 0x3b, 0x25, 0x0e, 0x2d, 0x40, 0x7f, 0xb8, 0x17, 0xd6, 0xa2, 0x46, 0xd8, 0x2e, 0x21, + 0x5b, 0x95, 0x91, 0x68, 0xf9, 0x40, 0x79, 0x11, 0x2c, 0xcb, 0xa2, 0x1a, 0x4c, 0x08, 0x8e, 0x73, + 0x5b, 0x8e, 0xa7, 0xd2, 0x44, 0x71, 0x27, 0xfd, 0x2b, 0xf7, 0xf7, 0xcb, 0x13, 0xa2, 0x66, 0x1d, + 0x7d, 0xb0, 0x5f, 0xa6, 0x8b, 0x23, 0x03, 0x83, 0xb3, 0xb8, 0xf1, 0xc9, 0x57, 0xab, 0xf9, 0x3b, + 0xcd, 0x4a, 0xe0, 0x6f, 0xb8, 0x0d, 0xd2, 0xce, 0x74, 0x58, 0x35, 0x28, 0xc5, 0xe4, 0x33, 0x60, + 0x38, 0xc1, 0xcd, 0xfe, 0x1c, 0x93, 0x1d, 0xab, 0xee, 0xa6, 0xe7, 0x44, 0xad, 0x80, 0xa0, 0x1d, + 0x18, 0x6e, 0xb2, 0xdd, 0x45, 0x24, 0x3e, 0x11, 0x73, 0xfd, 0xc5, 0x2e, 0xd5, 0x4f, 0x77, 0x58, + 0xea, 0x36, 0xc3, 0xd5, 0xad, 0xa2, 0xb3, 0xc3, 0x26, 0x77, 0xfb, 0x5f, 0x9c, 0x61, 0xd2, 0x47, + 0x95, 0xeb, 0x94, 0xfa, 0xc5, 0x9b, 0x20, 0x71, 0x8d, 0x9d, 0xca, 0x57, 0xb0, 0xc6, 0xc3, 0x22, + 0xde, 0x15, 0x61, 0x59, 0x16, 0x7d, 0x12, 0x46, 0xe8, 0xad, 0x50, 0x49, 0x00, 0xe1, 0xe4, 0x89, + 0xfc, 0xd8, 0x2d, 0x8a, 0x4a, 0x4f, 0x8a, 0xa4, 0x17, 0xc6, 0x09, 0x66, 0xe8, 0x0d, 0xe6, 0x5a, + 0x26, 0x59, 0x17, 0xba, 0x61, 0xad, 0x7b, 0x91, 0x49, 0xb6, 0x1a, 0x13, 0xd4, 0x82, 0x89, 0x74, + 0xea, 0xc7, 0x70, 0xd2, 0xce, 0x17, 0xaf, 0xd3, 0xd9, 0x1b, 0xe3, 0xec, 0x35, 0x69, 0x5c, 0x88, + 0xb3, 0xf8, 0xa3, 0xe5, 0x64, 0x62, 0xbe, 0xa2, 0xa1, 0xf7, 0x4d, 0x25, 0xe7, 0x1b, 0x6e, 0x9b, + 0x93, 0x6f, 0x13, 0xce, 0x69, 0xb9, 0xcd, 0xae, 0x06, 0x0e, 0x73, 0xde, 0x70, 0xd9, 0x76, 0xaa, + 0xc9, 0x45, 0x8f, 0xde, 0xdf, 0x2f, 0x9f, 0x5b, 0x6b, 0x47, 0x88, 0xdb, 0xf3, 0x41, 0x37, 0xe0, + 0x24, 0x8f, 0x3c, 0x30, 0x4f, 0x9c, 0x7a, 0xc3, 0xf5, 0x94, 0xe0, 0xc5, 0x97, 0xfc, 0x99, 0xfb, + 0xfb, 0xe5, 0x93, 0x33, 0x59, 0x04, 0x38, 0xbb, 0x1c, 0xfa, 0x30, 0x94, 0xea, 0x5e, 0x28, 0xfa, + 0xa0, 0xcf, 0x48, 0x1f, 0x57, 0x9a, 0x5f, 0xad, 0xaa, 0xef, 0x8f, 0xff, 0xe0, 0xb8, 0x00, 0xda, + 0xe4, 0xb6, 0x01, 0xa5, 0x2d, 0xea, 0x4f, 0x45, 0x5e, 0x4b, 0x2a, 
0x54, 0x8d, 0xb7, 0xc7, 0xdc, + 0x28, 0xa6, 0x9e, 0xe4, 0x18, 0xcf, 0x92, 0x0d, 0xc6, 0xe8, 0x75, 0x40, 0x22, 0x4d, 0xc1, 0x4c, + 0x8d, 0x65, 0xd5, 0x61, 0x47, 0xe3, 0x80, 0xf9, 0x1a, 0xb6, 0x9a, 0xa2, 0xc0, 0x19, 0xa5, 0xd0, + 0x35, 0xba, 0xab, 0xe8, 0x50, 0xb1, 0x6b, 0xa9, 0x24, 0xa5, 0xf3, 0xa4, 0x19, 0x10, 0xe6, 0x63, + 0x66, 0x72, 0xc4, 0x89, 0x72, 0xa8, 0x0e, 0x67, 0x9d, 0x56, 0xe4, 0x33, 0xb3, 0x8b, 0x49, 0xba, + 0xe6, 0x6f, 0x13, 0x8f, 0x59, 0x3c, 0x07, 0x66, 0x2f, 0x50, 0xc9, 0x6e, 0xa6, 0x0d, 0x1d, 0x6e, + 0xcb, 0x85, 0x4a, 0xe4, 0x2a, 0x2b, 0x39, 0x98, 0xf1, 0xe4, 0x32, 0x32, 0x93, 0xbf, 0x04, 0x83, + 0x5b, 0x7e, 0x18, 0xad, 0x92, 0xe8, 0x8e, 0x1f, 0x6c, 0x8b, 0xb8, 0xc8, 0x71, 0x2c, 0xfa, 0x18, + 0x85, 0x75, 0x3a, 0x7a, 0xe5, 0x66, 0xfe, 0x38, 0x4b, 0xf3, 0xcc, 0x15, 0x62, 0x20, 0xde, 0x63, + 0xae, 0x71, 0x30, 0x96, 0x78, 0x49, 0xba, 0x54, 0x99, 0x63, 0x6e, 0x0d, 0x09, 0xd2, 0xa5, 0xca, + 0x1c, 0x96, 0x78, 0x3a, 0x5d, 0xc3, 0x2d, 0x27, 0x20, 0x95, 0xc0, 0xaf, 0x91, 0x50, 0xcb, 0x80, + 0xf0, 0x08, 0x8f, 0xfa, 0x4c, 0xa7, 0x6b, 0x35, 0x8b, 0x00, 0x67, 0x97, 0x43, 0x24, 0x9d, 0xd7, + 0x6f, 0x24, 0xdf, 0x1e, 0x95, 0x96, 0x67, 0xba, 0x4c, 0xed, 0xe7, 0xc1, 0x98, 0xca, 0x28, 0xc8, + 0xe3, 0x3c, 0x87, 0x93, 0xa3, 0x6c, 0x6e, 0x77, 0x1f, 0x24, 0x5a, 0x59, 0xf8, 0x96, 0x12, 0x9c, + 0x70, 0x8a, 0xb7, 0x11, 0x32, 0x70, 0xac, 0x63, 0xc8, 0xc0, 0xcb, 0x50, 0x0a, 0x5b, 0xeb, 0x75, + 0x7f, 0xc7, 0x71, 0x3d, 0xe6, 0xd6, 0xa0, 0xdd, 0xfd, 0xaa, 0x12, 0x81, 0x63, 0x1a, 0xb4, 0x08, + 0x03, 0x8e, 0x34, 0xdf, 0xa1, 0xfc, 0x20, 0x51, 0xca, 0x68, 0xc7, 0xe3, 0xa6, 0x48, 0x83, 0x9d, + 0x2a, 0x8b, 0x5e, 0x85, 0x61, 0xf1, 0x72, 0x5e, 0x24, 0xe1, 0x9d, 0x30, 0x9f, 0x37, 0x56, 0x75, + 0x24, 0x36, 0x69, 0xd1, 0x4d, 0x18, 0x8c, 0xfc, 0x06, 0x7b, 0xa3, 0x47, 0xc5, 0xbc, 0x53, 0xf9, + 0xe1, 0x0e, 0xd7, 0x14, 0x99, 0xae, 0xb5, 0x56, 0x45, 0xb1, 0xce, 0x07, 0xad, 0xf1, 0xf9, 0xce, + 0xf2, 0x1d, 0x90, 0x50, 0x64, 0x71, 0x3d, 0x97, 0xe7, 0x93, 0xc6, 0xc8, 0xcc, 0xe5, 0x20, 0x4a, + 0x62, 
0x9d, 0x0d, 0xba, 0x0a, 0xe3, 0xcd, 0xc0, 0xf5, 0xd9, 0x9c, 0x50, 0x96, 0xdb, 0x49, 0x33, + 0xbb, 0x59, 0x25, 0x49, 0x80, 0xd3, 0x65, 0x58, 0xe0, 0x03, 0x01, 0x9c, 0x3c, 0xc3, 0x33, 0xb4, + 0xf0, 0xab, 0x34, 0x87, 0x61, 0x85, 0x45, 0x2b, 0x6c, 0x27, 0xe6, 0x5a, 0xa0, 0xc9, 0xa9, 0xfc, + 0xb8, 0x54, 0xba, 0xb6, 0x88, 0x0b, 0xaf, 0xea, 0x2f, 0x8e, 0x39, 0xa0, 0xba, 0x96, 0x18, 0x95, + 0x5e, 0x01, 0xc2, 0xc9, 0xb3, 0x6d, 0x9c, 0x22, 0x13, 0xb7, 0xb2, 0x58, 0x20, 0x30, 0xc0, 0x21, + 0x4e, 0xf0, 0x44, 0x1f, 0x85, 0x31, 0x11, 0x4d, 0x33, 0xee, 0xa6, 0x73, 0xf1, 0xcb, 0x07, 0x9c, + 0xc0, 0xe1, 0x14, 0x35, 0xcf, 0x90, 0xe2, 0xac, 0x37, 0x88, 0xd8, 0xfa, 0x96, 0x5d, 0x6f, 0x3b, + 0x9c, 0x3c, 0xcf, 0xf6, 0x07, 0x91, 0x21, 0x25, 0x89, 0xc5, 0x19, 0x25, 0xd0, 0x1a, 0x8c, 0x35, + 0x03, 0x42, 0x76, 0x98, 0xa0, 0x2f, 0xce, 0xb3, 0x32, 0x8f, 0xfb, 0x41, 0x5b, 0x52, 0x49, 0xe0, + 0x0e, 0x32, 0x60, 0x38, 0xc5, 0x01, 0xdd, 0x81, 0x01, 0x7f, 0x97, 0x04, 0x5b, 0xc4, 0xa9, 0x4f, + 0x5e, 0x68, 0xf3, 0x12, 0x47, 0x1c, 0x6e, 0x37, 0x04, 0x6d, 0xc2, 0xdb, 0x43, 0x82, 0x3b, 0x7b, + 0x7b, 0xc8, 0xca, 0xd0, 0xff, 0x61, 0xc1, 0x19, 0x69, 0x9c, 0xa9, 0x36, 0x69, 0xaf, 0xcf, 0xf9, + 0x5e, 0x18, 0x05, 0x3c, 0x52, 0xc5, 0xa3, 0xf9, 0xd1, 0x1b, 0xd6, 0x72, 0x0a, 0x29, 0x45, 0xf4, + 0x99, 0x3c, 0x8a, 0x10, 0xe7, 0xd7, 0x48, 0xaf, 0xa6, 0x21, 0x89, 0xe4, 0x66, 0x34, 0x13, 0x2e, + 0xbe, 0x31, 0xbf, 0x3a, 0xf9, 0x18, 0x0f, 0xb3, 0x41, 0x17, 0x43, 0x35, 0x89, 0xc4, 0x69, 0x7a, + 0x74, 0x05, 0x0a, 0x7e, 0x38, 0xf9, 0x78, 0x9b, 0x5c, 0xba, 0x7e, 0xfd, 0x46, 0x95, 0x7b, 0xfd, + 0xdd, 0xa8, 0xe2, 0x82, 0x1f, 0xca, 0x2c, 0x25, 0xf4, 0x3e, 0x16, 0x4e, 0x3e, 0xc1, 0xd5, 0x96, + 0x32, 0x4b, 0x09, 0x03, 0xe2, 0x18, 0x8f, 0xb6, 0x60, 0x34, 0x34, 0xee, 0xbd, 0xe1, 0xe4, 0x45, + 0xd6, 0x53, 0x4f, 0xe4, 0x0d, 0x9a, 0x41, 0xad, 0xa5, 0x0f, 0x30, 0xb9, 0xe0, 0x24, 0x5b, 0xbe, + 0xba, 0xb4, 0x9b, 0x77, 0x38, 0xf9, 0x64, 0x87, 0xd5, 0xa5, 0x11, 0xeb, 0xab, 0x4b, 0xe7, 0x81, + 0x13, 0x3c, 0xa7, 0xbe, 0x03, 0xc6, 0x53, 
0xe2, 0xd2, 0x61, 0x3c, 0xdc, 0xa7, 0xb6, 0x61, 0xd8, + 0x98, 0x92, 0x0f, 0xd5, 0xbb, 0xe2, 0xb7, 0x4b, 0x50, 0x52, 0x56, 0x6f, 0x74, 0xd9, 0x74, 0xa8, + 0x38, 0x93, 0x74, 0xa8, 0x18, 0xa8, 0xf8, 0x75, 0xc3, 0x87, 0x62, 0x2d, 0x23, 0x18, 0x63, 0xde, + 0x06, 0xd8, 0xfd, 0x23, 0x15, 0xcd, 0x94, 0x50, 0xec, 0xda, 0x33, 0xa3, 0xa7, 0xad, 0x75, 0xe2, + 0x2a, 0x8c, 0x7b, 0x3e, 0x93, 0xd1, 0x49, 0x5d, 0x0a, 0x60, 0x4c, 0xce, 0x2a, 0xe9, 0xd1, 0x8d, + 0x12, 0x04, 0x38, 0x5d, 0x86, 0x56, 0xc8, 0x05, 0xa5, 0xa4, 0x39, 0x84, 0xcb, 0x51, 0x58, 0x60, + 0xe9, 0xdd, 0x90, 0xff, 0x0a, 0x27, 0xc7, 0xf2, 0xef, 0x86, 0xbc, 0x50, 0x52, 0x18, 0x0b, 0xa5, + 0x30, 0xc6, 0xb4, 0xff, 0x4d, 0xbf, 0xbe, 0x54, 0x11, 0x62, 0xbe, 0x16, 0x49, 0xb8, 0xbe, 0x54, + 0xc1, 0x1c, 0x87, 0x66, 0xa0, 0x8f, 0xfd, 0x08, 0x27, 0x87, 0xf2, 0xa3, 0xe1, 0xb0, 0x12, 0x5a, + 0x96, 0x34, 0x56, 0x00, 0x8b, 0x82, 0x4c, 0xbb, 0x4b, 0xef, 0x46, 0x4c, 0xbb, 0xdb, 0xff, 0x80, + 0xda, 0x5d, 0xc9, 0x00, 0xc7, 0xbc, 0xd0, 0x5d, 0x38, 0x69, 0xdc, 0x47, 0xd5, 0xab, 0x1d, 0xc8, + 0x37, 0xfc, 0x26, 0x88, 0x67, 0xcf, 0x89, 0x46, 0x9f, 0x5c, 0xca, 0xe2, 0x84, 0xb3, 0x2b, 0x40, + 0x0d, 0x18, 0xaf, 0xa5, 0x6a, 0x1d, 0xe8, 0xbe, 0x56, 0x35, 0x2f, 0xd2, 0x35, 0xa6, 0x19, 0xa3, + 0x57, 0x61, 0xe0, 0x6d, 0x3f, 0x64, 0x47, 0xa4, 0xb8, 0x9a, 0xc8, 0x70, 0x0e, 0x03, 0x6f, 0xdc, + 0xa8, 0x32, 0xf8, 0xc1, 0x7e, 0x79, 0xb0, 0xe2, 0xd7, 0xe5, 0x5f, 0xac, 0x0a, 0xa0, 0xef, 0xb7, + 0x60, 0x2a, 0x7d, 0xe1, 0x55, 0x8d, 0x1e, 0xee, 0xbe, 0xd1, 0xb6, 0xa8, 0x74, 0x6a, 0x21, 0x97, + 0x1d, 0x6e, 0x53, 0x15, 0xfa, 0x10, 0x5d, 0x4f, 0xa1, 0x7b, 0x8f, 0x88, 0x14, 0xb3, 0x8f, 0xc6, + 0xeb, 0x89, 0x42, 0x0f, 0xf6, 0xcb, 0xa3, 0x7c, 0x67, 0x74, 0xef, 0xc9, 0xe7, 0x4d, 0xa2, 0x00, + 0xfa, 0x4e, 0x38, 0x19, 0xa4, 0x35, 0xa8, 0x44, 0x0a, 0xe1, 0x4f, 0x77, 0xb3, 0xcb, 0x26, 0x07, + 0x1c, 0x67, 0x31, 0xc4, 0xd9, 0xf5, 0xd8, 0xbf, 0x62, 0x31, 0xfd, 0xb6, 0x68, 0x16, 0x09, 0x5b, + 0x8d, 0xe3, 0x48, 0x6c, 0xbd, 0x60, 0xd8, 0x8e, 0x1f, 0xd8, 0xb1, 0xe8, 0x1f, 
0x59, 0xcc, 0xb1, + 0xe8, 0x18, 0x5f, 0x31, 0xbd, 0x01, 0x03, 0x91, 0x4c, 0x38, 0xde, 0x26, 0x17, 0xb7, 0xd6, 0x28, + 0xe6, 0x5c, 0xa5, 0x2e, 0x39, 0x2a, 0xb7, 0xb8, 0x62, 0x63, 0xff, 0x7d, 0x3e, 0x02, 0x12, 0x73, + 0x0c, 0x26, 0xba, 0x79, 0xd3, 0x44, 0x57, 0xee, 0xf0, 0x05, 0x39, 0xa6, 0xba, 0xbf, 0x67, 0xb6, + 0x9b, 0x29, 0xf7, 0xde, 0xed, 0x1e, 0x6d, 0xf6, 0x17, 0x2c, 0x80, 0x38, 0xc8, 0x7c, 0x17, 0x29, + 0x25, 0x5f, 0xa6, 0xd7, 0x1a, 0x3f, 0xf2, 0x6b, 0x7e, 0x43, 0x18, 0x28, 0xce, 0xc6, 0x56, 0x42, + 0x0e, 0x3f, 0xd0, 0x7e, 0x63, 0x45, 0x8d, 0xca, 0x32, 0xa4, 0x65, 0x31, 0xb6, 0x5b, 0x1b, 0xe1, + 0x2c, 0xbf, 0x64, 0xc1, 0x89, 0x2c, 0x97, 0x78, 0x7a, 0x49, 0xe6, 0x6a, 0x4e, 0xe5, 0x6d, 0xa8, + 0x46, 0xf3, 0x96, 0x80, 0x63, 0x45, 0xd1, 0x75, 0xae, 0xce, 0xc3, 0x45, 0x77, 0xbf, 0x01, 0xc3, + 0x95, 0x80, 0x68, 0xf2, 0xc5, 0x6b, 0x3c, 0x4c, 0x0a, 0x6f, 0xcf, 0xb3, 0x87, 0x0e, 0x91, 0x62, + 0x7f, 0xb9, 0x00, 0x27, 0xb8, 0xd3, 0xce, 0xcc, 0xae, 0xef, 0xd6, 0x2b, 0x7e, 0x5d, 0x3c, 0x64, + 0x7c, 0x13, 0x86, 0x9a, 0x9a, 0x6e, 0xba, 0x5d, 0xa4, 0x62, 0x5d, 0x87, 0x1d, 0x6b, 0xd3, 0x74, + 0x28, 0x36, 0x78, 0xa1, 0x3a, 0x0c, 0x91, 0x5d, 0xb7, 0xa6, 0x3c, 0x3f, 0x0a, 0x87, 0x3e, 0xa4, + 0x55, 0x2d, 0x0b, 0x1a, 0x1f, 0x6c, 0x70, 0x7d, 0x08, 0x19, 0xf4, 0xed, 0x1f, 0xb5, 0xe0, 0x74, + 0x4e, 0x5c, 0x63, 0x5a, 0xdd, 0x1d, 0xe6, 0x1e, 0x25, 0xa6, 0xad, 0xaa, 0x8e, 0x3b, 0x4d, 0x61, + 0x81, 0x45, 0x1f, 0x03, 0xe0, 0x4e, 0x4f, 0xc4, 0xab, 0x75, 0x0c, 0x00, 0x6b, 0xc4, 0xae, 0xd4, + 0xc2, 0x10, 0xca, 0xf2, 0x58, 0xe3, 0x65, 0x7f, 0xa9, 0x07, 0x7a, 0x99, 0x93, 0x0d, 0xaa, 0x40, + 0xff, 0x16, 0xcf, 0x54, 0xd5, 0x76, 0xdc, 0x28, 0xad, 0x4c, 0x7e, 0x15, 0x8f, 0x9b, 0x06, 0xc5, + 0x92, 0x0d, 0x5a, 0x81, 0x09, 0x9e, 0x30, 0xac, 0x31, 0x4f, 0x1a, 0xce, 0x9e, 0x54, 0xfb, 0xf2, + 0x1c, 0xd8, 0x4a, 0xfd, 0xbd, 0x94, 0x26, 0xc1, 0x59, 0xe5, 0xd0, 0x6b, 0x30, 0x42, 0xaf, 0xe1, + 0x7e, 0x2b, 0x92, 0x9c, 0x78, 0xaa, 0x30, 0x75, 0x33, 0x59, 0x33, 0xb0, 0x38, 0x41, 0x8d, 0x5e, + 0x85, 0xe1, 0x66, 
0x4a, 0xc1, 0xdd, 0x1b, 0x6b, 0x82, 0x4c, 0xa5, 0xb6, 0x49, 0xcb, 0xbc, 0xe2, + 0x5b, 0xec, 0x0d, 0xc0, 0xda, 0x56, 0x40, 0xc2, 0x2d, 0xbf, 0x51, 0x67, 0x12, 0x70, 0xaf, 0xe6, + 0x15, 0x9f, 0xc0, 0xe3, 0x54, 0x09, 0xca, 0x65, 0xc3, 0x71, 0x1b, 0xad, 0x80, 0xc4, 0x5c, 0xfa, + 0x4c, 0x2e, 0x8b, 0x09, 0x3c, 0x4e, 0x95, 0xe8, 0xac, 0xb9, 0xef, 0x3f, 0x1a, 0xcd, 0xbd, 0xfd, + 0xd3, 0x05, 0x30, 0x86, 0xf6, 0xdb, 0x37, 0x85, 0x19, 0xfd, 0xb2, 0xcd, 0xa0, 0x59, 0x13, 0x0e, + 0x65, 0x99, 0x5f, 0x16, 0xe7, 0x2f, 0xe6, 0x5f, 0x46, 0xff, 0x63, 0x56, 0x8a, 0xae, 0xf1, 0x93, + 0x95, 0xc0, 0xa7, 0x87, 0x9c, 0x0c, 0xa4, 0xa7, 0x1e, 0x9f, 0xf4, 0xcb, 0x20, 0x03, 0x6d, 0x42, + 0xce, 0x0a, 0xf7, 0x7c, 0xce, 0xc1, 0xf0, 0xbd, 0xaa, 0x8a, 0x68, 0x1f, 0x92, 0x0b, 0xba, 0x02, + 0x83, 0x22, 0x2f, 0x15, 0x7b, 0x23, 0xc1, 0x17, 0x13, 0xf3, 0x15, 0x9b, 0x8f, 0xc1, 0x58, 0xa7, + 0xb1, 0x7f, 0xa0, 0x00, 0x13, 0x19, 0x8f, 0xdc, 0xf8, 0x31, 0xb2, 0xe9, 0x86, 0x91, 0x4a, 0x91, + 0xac, 0x1d, 0x23, 0x1c, 0x8e, 0x15, 0x05, 0xdd, 0xab, 0xf8, 0x41, 0x95, 0x3c, 0x9c, 0xc4, 0x23, + 0x12, 0x81, 0x3d, 0x64, 0xb2, 0xe1, 0x0b, 0xd0, 0xd3, 0x0a, 0x89, 0x0c, 0x16, 0xad, 0x8e, 0x6d, + 0x66, 0xd6, 0x66, 0x18, 0x7a, 0x05, 0xdc, 0x54, 0x16, 0x62, 0xed, 0x0a, 0xc8, 0x6d, 0xc4, 0x1c, + 0x47, 0x1b, 0x17, 0x11, 0xcf, 0xf1, 0x22, 0x71, 0x51, 0x8c, 0xa3, 0x9e, 0x32, 0x28, 0x16, 0x58, + 0xfb, 0x8b, 0x45, 0x38, 0x93, 0xfb, 0xec, 0x95, 0x36, 0x7d, 0xc7, 0xf7, 0xdc, 0xc8, 0x57, 0x4e, + 0x78, 0x3c, 0xd2, 0x29, 0x69, 0x6e, 0xad, 0x08, 0x38, 0x56, 0x14, 0xe8, 0x22, 0xf4, 0x32, 0xa5, + 0x78, 0x2a, 0x59, 0xf4, 0xec, 0x3c, 0x0f, 0x7d, 0xc7, 0xd1, 0x5d, 0xe7, 0xf7, 0x7f, 0x8c, 0x4a, + 0x30, 0x7e, 0x23, 0x79, 0xa0, 0xd0, 0xe6, 0xfa, 0x7e, 0x03, 0x33, 0x24, 0x7a, 0x42, 0xf4, 0x57, + 0xc2, 0xeb, 0x0c, 0x3b, 0x75, 0x3f, 0xd4, 0x3a, 0xed, 0x29, 0xe8, 0xdf, 0x26, 0x7b, 0x81, 0xeb, + 0x6d, 0x26, 0xbd, 0x11, 0xaf, 0x73, 0x30, 0x96, 0x78, 0x33, 0x6f, 0x69, 0xff, 0x51, 0x27, 0xe6, + 0x1f, 0xe8, 0x28, 0x9e, 0xfc, 0x50, 0x11, 0x46, 0xf1, 
0xec, 0xfc, 0x7b, 0x03, 0x71, 0x33, 0x3d, + 0x10, 0x47, 0x9d, 0x98, 0xbf, 0xf3, 0x68, 0xfc, 0xa2, 0x05, 0xa3, 0x2c, 0x3b, 0x96, 0x88, 0x59, + 0xe1, 0xfa, 0xde, 0x31, 0x5c, 0x05, 0x1e, 0x83, 0xde, 0x80, 0x56, 0x9a, 0xcc, 0x12, 0xcd, 0x5a, + 0x82, 0x39, 0x0e, 0x9d, 0x85, 0x1e, 0xd6, 0x04, 0x3a, 0x78, 0x43, 0x7c, 0x0b, 0x9e, 0x77, 0x22, + 0x07, 0x33, 0x28, 0x0b, 0xfc, 0x86, 0x49, 0xb3, 0xe1, 0xf2, 0x46, 0xc7, 0x2e, 0x0b, 0xef, 0x8e, + 0x80, 0x18, 0x99, 0x4d, 0x7b, 0x67, 0x81, 0xdf, 0xb2, 0x59, 0xb6, 0xbf, 0x66, 0xff, 0x79, 0x01, + 0xce, 0x67, 0x96, 0xeb, 0x3a, 0xf0, 0x5b, 0xfb, 0xd2, 0x0f, 0x33, 0xff, 0x51, 0xf1, 0x18, 0x7d, + 0xbd, 0x7b, 0xba, 0x95, 0xfe, 0x7b, 0xbb, 0x88, 0xc7, 0x96, 0xd9, 0x65, 0xef, 0x92, 0x78, 0x6c, + 0x99, 0x6d, 0xcb, 0x51, 0x13, 0xfc, 0x75, 0x21, 0xe7, 0x5b, 0x98, 0xc2, 0xe0, 0x12, 0xdd, 0x67, + 0x18, 0x32, 0x94, 0x97, 0x70, 0xbe, 0xc7, 0x70, 0x18, 0x56, 0x58, 0x34, 0x03, 0xa3, 0x3b, 0xae, + 0x47, 0x37, 0x9f, 0x3d, 0x53, 0x14, 0x57, 0xb6, 0x8c, 0x15, 0x13, 0x8d, 0x93, 0xf4, 0xc8, 0xd5, + 0x62, 0xb5, 0xf1, 0xaf, 0x7b, 0xf5, 0x50, 0xab, 0x6e, 0xda, 0x74, 0xe7, 0x50, 0xbd, 0x98, 0x11, + 0xb7, 0x6d, 0x45, 0xd3, 0x13, 0x15, 0xbb, 0xd7, 0x13, 0x0d, 0x65, 0xeb, 0x88, 0xa6, 0x5e, 0x85, + 0xe1, 0x07, 0xb6, 0x8d, 0xd8, 0x5f, 0x2f, 0xc2, 0x23, 0x6d, 0x96, 0x3d, 0xdf, 0xeb, 0x8d, 0x31, + 0xd0, 0xf6, 0xfa, 0xd4, 0x38, 0x54, 0xe0, 0xc4, 0x46, 0xab, 0xd1, 0xd8, 0x63, 0x4f, 0xa0, 0x48, + 0x5d, 0x52, 0x08, 0x99, 0x52, 0x2a, 0x47, 0x4e, 0x2c, 0x66, 0xd0, 0xe0, 0xcc, 0x92, 0xf4, 0x8a, + 0x45, 0x4f, 0x92, 0x3d, 0xc5, 0x2a, 0x71, 0xc5, 0xc2, 0x3a, 0x12, 0x9b, 0xb4, 0xe8, 0x2a, 0x8c, + 0x3b, 0xbb, 0x8e, 0xcb, 0x03, 0xde, 0x4b, 0x06, 0xfc, 0x8e, 0xa5, 0x74, 0xd1, 0x33, 0x49, 0x02, + 0x9c, 0x2e, 0x83, 0x5e, 0x07, 0xe4, 0xaf, 0xb3, 0x87, 0x12, 0xf5, 0xab, 0xc4, 0x13, 0x56, 0x77, + 0x36, 0x76, 0xc5, 0x78, 0x4b, 0xb8, 0x91, 0xa2, 0xc0, 0x19, 0xa5, 0x12, 0x81, 0xc9, 0xfa, 0xf2, + 0x03, 0x93, 0xb5, 0xdf, 0x17, 0x3b, 0xa6, 0xde, 0xba, 0x02, 0xc3, 0x87, 0x74, 0xff, 0xb5, 
0xff, + 0x8d, 0x05, 0x4a, 0x41, 0x6c, 0x46, 0xfd, 0x7d, 0x95, 0xf9, 0x27, 0x73, 0xd5, 0xb6, 0x16, 0x2d, + 0xe9, 0xa4, 0xe6, 0x9f, 0x1c, 0x23, 0xb1, 0x49, 0xcb, 0xe7, 0x90, 0xe6, 0x57, 0x6c, 0xdc, 0x0a, + 0x44, 0x68, 0x42, 0x45, 0x81, 0x3e, 0x0e, 0xfd, 0x75, 0x77, 0xd7, 0x0d, 0x85, 0x72, 0xec, 0xd0, + 0xc6, 0xb8, 0x78, 0xeb, 0x9c, 0xe7, 0x6c, 0xb0, 0xe4, 0x67, 0xff, 0x50, 0x21, 0xee, 0x93, 0x37, + 0x5a, 0x7e, 0xe4, 0x1c, 0xc3, 0x49, 0x7e, 0xd5, 0x38, 0xc9, 0x9f, 0x68, 0x17, 0x9f, 0x91, 0x35, + 0x29, 0xf7, 0x04, 0xbf, 0x91, 0x38, 0xc1, 0x9f, 0xec, 0xcc, 0xaa, 0xfd, 0xc9, 0xfd, 0x0f, 0x2c, + 0x18, 0x37, 0xe8, 0x8f, 0xe1, 0x00, 0x59, 0x34, 0x0f, 0x90, 0x47, 0x3b, 0x7e, 0x43, 0xce, 0xc1, + 0xf1, 0xbd, 0xc5, 0x44, 0xdb, 0xd9, 0x81, 0xf1, 0x36, 0xf4, 0x6c, 0x39, 0x41, 0xbd, 0x5d, 0x3e, + 0x9a, 0x54, 0xa1, 0xe9, 0x6b, 0x4e, 0x20, 0x3c, 0x15, 0x9e, 0x95, 0xbd, 0x4e, 0x41, 0x1d, 0xbd, + 0x14, 0x58, 0x55, 0xe8, 0x65, 0xe8, 0x0b, 0x6b, 0x7e, 0x53, 0xbd, 0x99, 0xba, 0xc0, 0x3a, 0x9a, + 0x41, 0x0e, 0xf6, 0xcb, 0xc8, 0xac, 0x8e, 0x82, 0xb1, 0xa0, 0x47, 0x6f, 0xc2, 0x30, 0xfb, 0xa5, + 0xdc, 0x06, 0x8b, 0xf9, 0x1a, 0x8c, 0xaa, 0x4e, 0xc8, 0x7d, 0x6a, 0x0d, 0x10, 0x36, 0x59, 0x4d, + 0x6d, 0x42, 0x49, 0x7d, 0xd6, 0x43, 0xb5, 0x76, 0xff, 0xab, 0x22, 0x4c, 0x64, 0xcc, 0x39, 0x14, + 0x1a, 0x23, 0x71, 0xa5, 0xcb, 0xa9, 0xfa, 0x0e, 0xc7, 0x22, 0x64, 0x17, 0xa8, 0xba, 0x98, 0x5b, + 0x5d, 0x57, 0x7a, 0x33, 0x24, 0xc9, 0x4a, 0x29, 0xa8, 0x73, 0xa5, 0xb4, 0xb2, 0x63, 0xeb, 0x6a, + 0x5a, 0x91, 0x6a, 0xe9, 0x43, 0x1d, 0xd3, 0x5f, 0xef, 0x81, 0x13, 0x59, 0x21, 0x63, 0xd1, 0x67, + 0x13, 0xd9, 0x90, 0x5f, 0xec, 0x36, 0xd8, 0x2c, 0x4f, 0x91, 0x2c, 0xc2, 0x40, 0x4e, 0x9b, 0xf9, + 0x91, 0x3b, 0x76, 0xb3, 0xa8, 0x93, 0x05, 0xa0, 0x09, 0x78, 0x16, 0x6b, 0xb9, 0x7d, 0x7c, 0xa0, + 0xeb, 0x06, 0x88, 0xf4, 0xd7, 0x61, 0xc2, 0x25, 0x49, 0x82, 0x3b, 0xbb, 0x24, 0xc9, 0x9a, 0xd1, + 0x12, 0xf4, 0xd5, 0xb8, 0xaf, 0x4b, 0xb1, 0xf3, 0x16, 0xc6, 0x1d, 0x5d, 0xd4, 0x06, 0x2c, 0x1c, + 0x5c, 0x04, 0x83, 0x29, 0x17, 
0x06, 0xb5, 0x8e, 0x79, 0xa8, 0x93, 0x67, 0x9b, 0x1e, 0x7c, 0x5a, + 0x17, 0x3c, 0xd4, 0x09, 0xf4, 0xa3, 0x16, 0x24, 0x1e, 0xbc, 0x28, 0xa5, 0x9c, 0x95, 0xab, 0x94, + 0xbb, 0x00, 0x3d, 0x81, 0xdf, 0x20, 0xc9, 0x0c, 0xc4, 0xd8, 0x6f, 0x10, 0xcc, 0x30, 0x94, 0x22, + 0x8a, 0x55, 0x2d, 0x43, 0xfa, 0x35, 0x52, 0x5c, 0x10, 0x1f, 0x83, 0xde, 0x06, 0xd9, 0x25, 0x8d, + 0x64, 0xa2, 0xb8, 0x65, 0x0a, 0xc4, 0x1c, 0x67, 0xff, 0x62, 0x0f, 0x9c, 0x6b, 0x1b, 0x0d, 0x8a, + 0x5e, 0xc6, 0x36, 0x9d, 0x88, 0xdc, 0x71, 0xf6, 0x92, 0x19, 0x9d, 0xae, 0x72, 0x30, 0x96, 0x78, + 0xf6, 0xfc, 0x93, 0x27, 0x66, 0x48, 0xa8, 0x30, 0x45, 0x3e, 0x06, 0x81, 0x35, 0x55, 0x62, 0xc5, + 0xa3, 0x50, 0x89, 0x3d, 0x0f, 0x10, 0x86, 0x0d, 0xee, 0x16, 0x58, 0x17, 0xef, 0x4a, 0xe3, 0x04, + 0x1e, 0xd5, 0x65, 0x81, 0xc1, 0x1a, 0x15, 0x9a, 0x87, 0xb1, 0x66, 0xe0, 0x47, 0x5c, 0x23, 0x3c, + 0xcf, 0x3d, 0x67, 0x7b, 0xcd, 0x40, 0x3c, 0x95, 0x04, 0x1e, 0xa7, 0x4a, 0xa0, 0x97, 0x60, 0x50, + 0x04, 0xe7, 0xa9, 0xf8, 0x7e, 0x43, 0x28, 0xa1, 0x94, 0x33, 0x69, 0x35, 0x46, 0x61, 0x9d, 0x4e, + 0x2b, 0xc6, 0xd4, 0xcc, 0xfd, 0x99, 0xc5, 0xb8, 0xaa, 0x59, 0xa3, 0x4b, 0x44, 0xa2, 0x1e, 0xe8, + 0x2a, 0x12, 0x75, 0xac, 0x96, 0x2b, 0x75, 0x6d, 0xf5, 0x84, 0x8e, 0x8a, 0xac, 0xaf, 0xf4, 0xc0, + 0x84, 0x98, 0x38, 0x0f, 0x7b, 0xba, 0xdc, 0x4c, 0x4f, 0x97, 0xa3, 0x50, 0xdc, 0xbd, 0x37, 0x67, + 0x8e, 0x7b, 0xce, 0xfc, 0xb0, 0x05, 0xa6, 0xa4, 0x86, 0xfe, 0xb7, 0xdc, 0x94, 0x78, 0x2f, 0xe5, + 0x4a, 0x7e, 0x71, 0x94, 0xdf, 0x77, 0x96, 0x1c, 0xcf, 0xfe, 0xd7, 0x16, 0x3c, 0xda, 0x91, 0x23, + 0x5a, 0x80, 0x12, 0x13, 0x27, 0xb5, 0x8b, 0xde, 0x93, 0xca, 0xb3, 0x5e, 0x22, 0x72, 0xa4, 0xdb, + 0xb8, 0x24, 0x5a, 0x48, 0xe5, 0x1e, 0x7c, 0x2a, 0x23, 0xf7, 0xe0, 0x49, 0xa3, 0x7b, 0x1e, 0x30, + 0xf9, 0xe0, 0x0f, 0xd2, 0x13, 0xc7, 0x78, 0xd5, 0x86, 0x3e, 0x60, 0x28, 0x1d, 0xed, 0x84, 0xd2, + 0x11, 0x99, 0xd4, 0xda, 0x19, 0xf2, 0x51, 0x18, 0x63, 0x51, 0xfb, 0xd8, 0x3b, 0x0f, 0xf1, 0xde, + 0xae, 0x10, 0xfb, 0x72, 0x2f, 0x27, 0x70, 0x38, 0x45, 0x6d, 0xff, 
0x69, 0x11, 0xfa, 0xf8, 0xf2, + 0x3b, 0x86, 0xeb, 0xe5, 0x33, 0x50, 0x72, 0x77, 0x76, 0x5a, 0x3c, 0x9d, 0x5c, 0x6f, 0xec, 0x19, + 0xbc, 0x24, 0x81, 0x38, 0xc6, 0xa3, 0x45, 0xa1, 0xef, 0x6e, 0x13, 0x18, 0x98, 0x37, 0x7c, 0x7a, + 0xde, 0x89, 0x1c, 0x2e, 0x2b, 0xa9, 0x73, 0x36, 0xd6, 0x8c, 0xa3, 0x4f, 0x01, 0x84, 0x51, 0xe0, + 0x7a, 0x9b, 0x14, 0x26, 0x62, 0xab, 0x3f, 0xdd, 0x86, 0x5b, 0x55, 0x11, 0x73, 0x9e, 0xf1, 0x9e, + 0xa3, 0x10, 0x58, 0xe3, 0x88, 0xa6, 0x8d, 0x93, 0x7e, 0x2a, 0x31, 0x76, 0xc0, 0xb9, 0xc6, 0x63, + 0x36, 0xf5, 0x41, 0x28, 0x29, 0xe6, 0x9d, 0xb4, 0x5f, 0x43, 0xba, 0x58, 0xf4, 0x11, 0x18, 0x4d, + 0xb4, 0xed, 0x50, 0xca, 0xb3, 0x5f, 0xb2, 0x60, 0x94, 0x37, 0x66, 0xc1, 0xdb, 0x15, 0xa7, 0xc1, + 0x3d, 0x38, 0xd1, 0xc8, 0xd8, 0x95, 0xc5, 0xf0, 0x77, 0xbf, 0x8b, 0x2b, 0x65, 0x59, 0x16, 0x16, + 0x67, 0xd6, 0x81, 0x2e, 0xd1, 0x15, 0x47, 0x77, 0x5d, 0xa7, 0x21, 0xe2, 0x1b, 0x0c, 0xf1, 0xd5, + 0xc6, 0x61, 0x58, 0x61, 0xed, 0x3f, 0xb0, 0x60, 0x9c, 0xb7, 0xfc, 0x3a, 0xd9, 0x53, 0x7b, 0xd3, + 0x37, 0xb3, 0xed, 0x22, 0x91, 0x69, 0x21, 0x27, 0x91, 0xa9, 0xfe, 0x69, 0xc5, 0xb6, 0x9f, 0xf6, + 0x65, 0x0b, 0xc4, 0x0c, 0x39, 0x06, 0x7d, 0xc6, 0x77, 0x98, 0xfa, 0x8c, 0xa9, 0xfc, 0x45, 0x90, + 0xa3, 0xc8, 0xf8, 0x2b, 0x0b, 0xc6, 0x38, 0x41, 0x6c, 0xab, 0xff, 0xa6, 0x8e, 0xc3, 0xac, 0xf9, + 0x45, 0x99, 0xce, 0x97, 0xd7, 0xc9, 0xde, 0x9a, 0x5f, 0x71, 0xa2, 0xad, 0xec, 0x8f, 0x32, 0x06, + 0xab, 0xa7, 0xed, 0x60, 0xd5, 0xe5, 0x02, 0x32, 0xf2, 0x7c, 0x75, 0x08, 0x10, 0x70, 0xd8, 0x3c, + 0x5f, 0xf6, 0x9f, 0x59, 0x80, 0x78, 0x35, 0x86, 0xe0, 0x46, 0xc5, 0x21, 0x06, 0xd5, 0x0e, 0xba, + 0x78, 0x6b, 0x52, 0x18, 0xac, 0x51, 0x1d, 0x49, 0xf7, 0x24, 0x1c, 0x2e, 0x8a, 0x9d, 0x1d, 0x2e, + 0x0e, 0xd1, 0xa3, 0xff, 0xac, 0x0f, 0x92, 0x2f, 0xfb, 0xd0, 0x2d, 0x18, 0xaa, 0x39, 0x4d, 0x67, + 0xdd, 0x6d, 0xb8, 0x91, 0x4b, 0xc2, 0x76, 0xde, 0x58, 0x73, 0x1a, 0x9d, 0x30, 0x91, 0x6b, 0x10, + 0x6c, 0xf0, 0x41, 0xd3, 0x00, 0xcd, 0xc0, 0xdd, 0x75, 0x1b, 0x64, 0x93, 0xa9, 0x5d, 0x58, 0x44, + 0x15, 
0xee, 0x1a, 0x26, 0xa1, 0x58, 0xa3, 0xc8, 0x08, 0xa3, 0x50, 0x7c, 0xc8, 0x61, 0x14, 0xe0, + 0xd8, 0xc2, 0x28, 0xf4, 0x1c, 0x2a, 0x8c, 0xc2, 0xc0, 0xa1, 0xc3, 0x28, 0xf4, 0x76, 0x15, 0x46, + 0x01, 0xc3, 0x29, 0x29, 0x7b, 0xd2, 0xff, 0x8b, 0x6e, 0x83, 0x88, 0x0b, 0x07, 0x0f, 0x03, 0x33, + 0x75, 0x7f, 0xbf, 0x7c, 0x0a, 0x67, 0x52, 0xe0, 0x9c, 0x92, 0xe8, 0x63, 0x30, 0xe9, 0x34, 0x1a, + 0xfe, 0x1d, 0x35, 0xa8, 0x0b, 0x61, 0xcd, 0x69, 0x70, 0x13, 0x48, 0x3f, 0xe3, 0x7a, 0xf6, 0xfe, + 0x7e, 0x79, 0x72, 0x26, 0x87, 0x06, 0xe7, 0x96, 0x46, 0x1f, 0x86, 0x52, 0x33, 0xf0, 0x6b, 0x2b, + 0xda, 0xf3, 0xe3, 0xf3, 0xb4, 0x03, 0x2b, 0x12, 0x78, 0xb0, 0x5f, 0x1e, 0x56, 0x7f, 0xd8, 0x81, + 0x1f, 0x17, 0xc8, 0x88, 0x8b, 0x30, 0x78, 0xa4, 0x71, 0x11, 0xb6, 0x61, 0xa2, 0x4a, 0x02, 0xd7, + 0x69, 0xb8, 0xf7, 0xa8, 0xbc, 0x2c, 0xf7, 0xa7, 0x35, 0x28, 0x05, 0x89, 0x1d, 0xb9, 0xab, 0x60, + 0xbd, 0x5a, 0xc2, 0x25, 0xb9, 0x03, 0xc7, 0x8c, 0xec, 0xff, 0x66, 0x41, 0xbf, 0x78, 0xc9, 0x77, + 0x0c, 0x52, 0xe3, 0x8c, 0x61, 0x94, 0x28, 0x67, 0x77, 0x18, 0x6b, 0x4c, 0xae, 0x39, 0x62, 0x29, + 0x61, 0x8e, 0x78, 0xb4, 0x1d, 0x93, 0xf6, 0x86, 0x88, 0xff, 0xaf, 0x48, 0xa5, 0x77, 0xe3, 0x4d, + 0xf9, 0xc3, 0xef, 0x82, 0x55, 0xe8, 0x0f, 0xc5, 0x9b, 0xe6, 0x42, 0xfe, 0x6b, 0x90, 0xe4, 0x20, + 0xc6, 0x5e, 0x74, 0xe2, 0x15, 0xb3, 0x64, 0x92, 0xf9, 0x58, 0xba, 0xf8, 0x10, 0x1f, 0x4b, 0x77, + 0x7a, 0x75, 0xdf, 0x73, 0x14, 0xaf, 0xee, 0xed, 0xaf, 0xb1, 0x93, 0x53, 0x87, 0x1f, 0x83, 0x50, + 0x75, 0xd5, 0x3c, 0x63, 0xed, 0x36, 0x33, 0x4b, 0x34, 0x2a, 0x47, 0xb8, 0xfa, 0x05, 0x0b, 0xce, + 0x65, 0x7c, 0x95, 0x26, 0x69, 0x3d, 0x0b, 0x03, 0x4e, 0xab, 0xee, 0xaa, 0xb5, 0xac, 0x99, 0x26, + 0x67, 0x04, 0x1c, 0x2b, 0x0a, 0x34, 0x07, 0xe3, 0xe4, 0x6e, 0xd3, 0xe5, 0x86, 0x5c, 0xdd, 0xf9, + 0xb8, 0xc8, 0x9f, 0x7f, 0x2e, 0x24, 0x91, 0x38, 0x4d, 0xaf, 0x02, 0x44, 0x15, 0x73, 0x03, 0x44, + 0xfd, 0xbc, 0x05, 0x83, 0xea, 0x55, 0xef, 0x43, 0xef, 0xed, 0x8f, 0x9a, 0xbd, 0xfd, 0x48, 0x9b, + 0xde, 0xce, 0xe9, 0xe6, 0xdf, 0x2b, 0xa8, 
0xf6, 0x56, 0xfc, 0x20, 0xea, 0x42, 0x82, 0x7b, 0xf0, + 0x87, 0x13, 0x57, 0x60, 0xd0, 0x69, 0x36, 0x25, 0x42, 0x7a, 0xc0, 0xb1, 0xd0, 0xeb, 0x31, 0x18, + 0xeb, 0x34, 0xea, 0x1d, 0x47, 0x31, 0xf7, 0x1d, 0x47, 0x1d, 0x20, 0x72, 0x82, 0x4d, 0x12, 0x51, + 0x98, 0x70, 0xd8, 0xcd, 0xdf, 0x6f, 0x5a, 0x91, 0xdb, 0x98, 0x76, 0xbd, 0x28, 0x8c, 0x82, 0xe9, + 0x25, 0x2f, 0xba, 0x11, 0xf0, 0x2b, 0xa4, 0x16, 0x62, 0x4d, 0xf1, 0xc2, 0x1a, 0x5f, 0x19, 0xc1, + 0x82, 0xd5, 0xd1, 0x6b, 0xba, 0x52, 0xac, 0x0a, 0x38, 0x56, 0x14, 0xf6, 0x07, 0xd9, 0xe9, 0xc3, + 0xfa, 0xf4, 0x70, 0xe1, 0xc5, 0x7e, 0x72, 0x48, 0x8d, 0x06, 0x33, 0x8a, 0xce, 0xeb, 0x41, 0xcc, + 0xda, 0x6f, 0xf6, 0xb4, 0x62, 0xfd, 0x45, 0x64, 0x1c, 0xe9, 0x0c, 0x7d, 0x22, 0xe5, 0x1e, 0xf3, + 0x5c, 0x87, 0x53, 0xe3, 0x10, 0x0e, 0x31, 0x2c, 0x0f, 0x13, 0xcb, 0x52, 0xb3, 0x54, 0x11, 0xeb, + 0x42, 0xcb, 0xc3, 0x24, 0x10, 0x38, 0xa6, 0xa1, 0xc2, 0x94, 0xfa, 0x13, 0x4e, 0xa2, 0x38, 0x16, + 0xb0, 0xa2, 0x0e, 0xb1, 0x46, 0x81, 0x2e, 0x0b, 0x85, 0x02, 0xb7, 0x0b, 0x3c, 0x92, 0x50, 0x28, + 0xc8, 0xee, 0xd2, 0xb4, 0x40, 0x57, 0x60, 0x90, 0xdc, 0x8d, 0x48, 0xe0, 0x39, 0x0d, 0x5a, 0x43, + 0x6f, 0x1c, 0x3f, 0x73, 0x21, 0x06, 0x63, 0x9d, 0x06, 0xad, 0xc1, 0x68, 0xc8, 0xf5, 0x6c, 0x2a, + 0x48, 0x3c, 0xd7, 0x57, 0x3e, 0xad, 0xde, 0x53, 0x9b, 0xe8, 0x03, 0x06, 0xe2, 0xbb, 0x93, 0x8c, + 0x32, 0x91, 0x64, 0x81, 0x5e, 0x83, 0x91, 0x86, 0xef, 0xd4, 0x67, 0x9d, 0x86, 0xe3, 0xd5, 0x58, + 0xff, 0x0c, 0x98, 0x89, 0xa8, 0x97, 0x0d, 0x2c, 0x4e, 0x50, 0x53, 0xe1, 0x4d, 0x87, 0x88, 0x30, + 0x6d, 0x8e, 0xb7, 0x49, 0x42, 0x91, 0x0f, 0x9e, 0x09, 0x6f, 0xcb, 0x39, 0x34, 0x38, 0xb7, 0x34, + 0x7a, 0x19, 0x86, 0xe4, 0xe7, 0x6b, 0x41, 0x59, 0xe2, 0x27, 0x31, 0x1a, 0x0e, 0x1b, 0x94, 0x28, + 0x84, 0x93, 0xf2, 0xff, 0x5a, 0xe0, 0x6c, 0x6c, 0xb8, 0x35, 0x11, 0xa9, 0x80, 0x3f, 0x1f, 0xfe, + 0x88, 0x7c, 0xab, 0xb8, 0x90, 0x45, 0x74, 0xb0, 0x5f, 0x3e, 0x2b, 0x7a, 0x2d, 0x13, 0x8f, 0xb3, + 0x79, 0xa3, 0x15, 0x98, 0xd8, 0x22, 0x4e, 0x23, 0xda, 0x9a, 0xdb, 0x22, 0xb5, 
0x6d, 0xb9, 0xe0, + 0x58, 0x98, 0x17, 0xed, 0xe9, 0xc8, 0xb5, 0x34, 0x09, 0xce, 0x2a, 0x87, 0xde, 0x82, 0xc9, 0x66, + 0x6b, 0xbd, 0xe1, 0x86, 0x5b, 0xab, 0x7e, 0xc4, 0x9c, 0x90, 0x66, 0xea, 0xf5, 0x80, 0x84, 0xfc, + 0x75, 0x29, 0x3b, 0x7a, 0x65, 0x20, 0x9d, 0x4a, 0x0e, 0x1d, 0xce, 0xe5, 0x80, 0xee, 0xc1, 0xc9, + 0xc4, 0x44, 0x10, 0x11, 0x31, 0x46, 0xf2, 0x53, 0xc4, 0x54, 0xb3, 0x0a, 0x88, 0xe0, 0x32, 0x59, + 0x28, 0x9c, 0x5d, 0x05, 0x7a, 0x05, 0xc0, 0x6d, 0x2e, 0x3a, 0x3b, 0x6e, 0x83, 0x5e, 0x15, 0x27, + 0xd8, 0x1c, 0xa1, 0xd7, 0x06, 0x58, 0xaa, 0x48, 0x28, 0xdd, 0x9b, 0xc5, 0xbf, 0x3d, 0xac, 0x51, + 0xa3, 0x65, 0x18, 0x11, 0xff, 0xf6, 0xc4, 0x90, 0xf2, 0xc0, 0x2c, 0x8f, 0xb3, 0xa8, 0x5a, 0x15, + 0x1d, 0x73, 0x90, 0x82, 0xe0, 0x44, 0x59, 0xb4, 0x09, 0xe7, 0x64, 0xa2, 0x3f, 0x7d, 0x7e, 0xca, + 0x31, 0x08, 0x59, 0x5e, 0x96, 0x01, 0xfe, 0x2a, 0x65, 0xa6, 0x1d, 0x21, 0x6e, 0xcf, 0x87, 0x9e, + 0xeb, 0xfa, 0x34, 0xe7, 0x6f, 0x8e, 0x4f, 0xc6, 0x11, 0x07, 0x97, 0x93, 0x48, 0x9c, 0xa6, 0x47, + 0x3e, 0x9c, 0x74, 0xbd, 0xac, 0x59, 0x7d, 0x8a, 0x31, 0xfa, 0x10, 0x7f, 0x6e, 0xdd, 0x7e, 0x46, + 0x67, 0xe2, 0x71, 0x36, 0xdf, 0x77, 0xe6, 0xf7, 0xf7, 0xfb, 0x16, 0x2d, 0xad, 0x49, 0xe7, 0xe8, + 0xd3, 0x30, 0xa4, 0x7f, 0x94, 0x90, 0x34, 0x2e, 0x66, 0x0b, 0xaf, 0xda, 0x9e, 0xc0, 0x65, 0x7b, + 0xb5, 0xee, 0x75, 0x1c, 0x36, 0x38, 0xa2, 0x5a, 0x46, 0x6c, 0x83, 0xcb, 0xdd, 0x49, 0x32, 0xdd, + 0xbb, 0xbd, 0x11, 0xc8, 0x9e, 0xee, 0x68, 0x19, 0x06, 0x6a, 0x0d, 0x97, 0x78, 0xd1, 0x52, 0xa5, + 0x5d, 0xf4, 0xc6, 0x39, 0x41, 0x23, 0xd6, 0x8f, 0x48, 0xb1, 0xc2, 0x61, 0x58, 0x71, 0xb0, 0x7f, + 0xb3, 0x00, 0xe5, 0x0e, 0xf9, 0x7a, 0x12, 0x66, 0x28, 0xab, 0x2b, 0x33, 0xd4, 0x0c, 0x8c, 0xc6, + 0xff, 0x74, 0x0d, 0x97, 0xf2, 0x64, 0xbd, 0x65, 0xa2, 0x71, 0x92, 0xbe, 0xeb, 0x47, 0x09, 0xba, + 0x25, 0xab, 0xa7, 0xe3, 0xb3, 0x1a, 0xc3, 0x82, 0xdd, 0xdb, 0xfd, 0xb5, 0x37, 0xd7, 0x1a, 0x69, + 0x7f, 0xad, 0x00, 0x27, 0x55, 0x17, 0x7e, 0xfb, 0x76, 0xdc, 0xcd, 0x74, 0xc7, 0x1d, 0x81, 0x2d, + 0xd7, 0xbe, 0x01, 
0x7d, 0x3c, 0x1c, 0x65, 0x17, 0xe2, 0xf6, 0x63, 0x66, 0x94, 0x6c, 0x25, 0xe1, + 0x19, 0x91, 0xb2, 0xbf, 0xdf, 0x82, 0xd1, 0xc4, 0xeb, 0x36, 0x84, 0xb5, 0x27, 0xd0, 0x0f, 0x22, + 0x12, 0x67, 0x09, 0xdb, 0x17, 0xa0, 0x67, 0xcb, 0x0f, 0xa3, 0xa4, 0xa3, 0xc7, 0x35, 0x3f, 0x8c, + 0x30, 0xc3, 0xd8, 0x7f, 0x68, 0x41, 0xef, 0x9a, 0xe3, 0x7a, 0x91, 0x34, 0x0a, 0x58, 0x39, 0x46, + 0x81, 0x6e, 0xbe, 0x0b, 0xbd, 0x04, 0x7d, 0x64, 0x63, 0x83, 0xd4, 0x22, 0x31, 0xaa, 0x32, 0x14, + 0x42, 0xdf, 0x02, 0x83, 0x52, 0xf9, 0x8f, 0x55, 0xc6, 0xff, 0x62, 0x41, 0x8c, 0x6e, 0x43, 0x29, + 0x72, 0x77, 0xc8, 0x4c, 0xbd, 0x2e, 0x4c, 0xe5, 0x0f, 0x10, 0xbf, 0x63, 0x4d, 0x32, 0xc0, 0x31, + 0x2f, 0xfb, 0x8b, 0x05, 0x80, 0x38, 0x8e, 0x57, 0xa7, 0x4f, 0x9c, 0x4d, 0x19, 0x51, 0x2f, 0x66, + 0x18, 0x51, 0x51, 0xcc, 0x30, 0xc3, 0x82, 0xaa, 0xba, 0xa9, 0xd8, 0x55, 0x37, 0xf5, 0x1c, 0xa6, + 0x9b, 0xe6, 0x60, 0x3c, 0x8e, 0x43, 0x66, 0x86, 0x61, 0x64, 0x47, 0xe7, 0x5a, 0x12, 0x89, 0xd3, + 0xf4, 0x36, 0x81, 0x0b, 0x2a, 0x1c, 0x93, 0x38, 0xd1, 0x98, 0x1f, 0xb8, 0x6e, 0x94, 0xee, 0xd0, + 0x4f, 0xb1, 0x95, 0xb8, 0x90, 0x6b, 0x25, 0xfe, 0x09, 0x0b, 0x4e, 0x24, 0xeb, 0x61, 0x8f, 0xa6, + 0xbf, 0x60, 0xc1, 0x49, 0x66, 0x2b, 0x67, 0xb5, 0xa6, 0x2d, 0xf3, 0x2f, 0xb6, 0x0d, 0x31, 0x95, + 0xd3, 0xe2, 0x38, 0xe6, 0xc6, 0x4a, 0x16, 0x6b, 0x9c, 0x5d, 0xa3, 0xfd, 0x5f, 0x7b, 0x60, 0x32, + 0x2f, 0x36, 0x15, 0x7b, 0x26, 0xe2, 0xdc, 0xad, 0x6e, 0x93, 0x3b, 0xc2, 0x19, 0x3f, 0x7e, 0x26, + 0xc2, 0xc1, 0x58, 0xe2, 0x93, 0xe9, 0x4f, 0x0a, 0x5d, 0xa6, 0x3f, 0xd9, 0x82, 0xf1, 0x3b, 0x5b, + 0xc4, 0xbb, 0xe9, 0x85, 0x4e, 0xe4, 0x86, 0x1b, 0x2e, 0xb3, 0x2b, 0xf3, 0x79, 0x23, 0x73, 0x50, + 0x8f, 0xdf, 0x4e, 0x12, 0x1c, 0xec, 0x97, 0xcf, 0x19, 0x80, 0xb8, 0xc9, 0x7c, 0x23, 0xc1, 0x69, + 0xa6, 0xe9, 0xec, 0x31, 0x3d, 0x0f, 0x39, 0x7b, 0xcc, 0x8e, 0x2b, 0xbc, 0x51, 0xe4, 0x1b, 0x00, + 0x76, 0x63, 0x5c, 0x51, 0x50, 0xac, 0x51, 0xa0, 0x4f, 0x02, 0xd2, 0x33, 0x74, 0x19, 0xa1, 0x41, + 0x9f, 0xbb, 0xbf, 0x5f, 0x46, 0xab, 0x29, 0xec, 0xc1, 
0x7e, 0x79, 0x82, 0x42, 0x97, 0x3c, 0x7a, + 0xf3, 0x8c, 0xe3, 0xa9, 0x65, 0x30, 0x42, 0xb7, 0x61, 0x8c, 0x42, 0xd9, 0x8a, 0x92, 0x71, 0x47, + 0xf9, 0x6d, 0xf1, 0x99, 0xfb, 0xfb, 0xe5, 0xb1, 0xd5, 0x04, 0x2e, 0x8f, 0x75, 0x8a, 0x09, 0x7a, + 0x05, 0x46, 0xe2, 0x79, 0x75, 0x9d, 0xec, 0xf1, 0x00, 0x3d, 0x25, 0xae, 0xf0, 0x5e, 0x31, 0x30, + 0x38, 0x41, 0x69, 0x7f, 0xc1, 0x82, 0x33, 0xb9, 0x19, 0xf1, 0xd1, 0x25, 0x18, 0x70, 0x9a, 0x2e, + 0x37, 0x5f, 0x88, 0xa3, 0x86, 0xa9, 0xc9, 0x2a, 0x4b, 0xdc, 0x78, 0xa1, 0xb0, 0x74, 0x87, 0xdf, + 0x76, 0xbd, 0x7a, 0x72, 0x87, 0xbf, 0xee, 0x7a, 0x75, 0xcc, 0x30, 0xea, 0xc8, 0x2a, 0xe6, 0x3e, + 0x45, 0xf8, 0x0a, 0x5d, 0xab, 0x19, 0xb9, 0xf3, 0x8f, 0xb7, 0x19, 0xe8, 0x19, 0xdd, 0xd4, 0x28, + 0xbc, 0x0a, 0x73, 0xcd, 0x8c, 0xdf, 0x67, 0x81, 0x78, 0xba, 0xdc, 0xc5, 0x99, 0xfc, 0x26, 0x0c, + 0xed, 0xa6, 0xb3, 0x17, 0x5e, 0xc8, 0x7f, 0xcb, 0x2d, 0x22, 0xae, 0x2b, 0x41, 0xdb, 0xc8, 0x54, + 0x68, 0xf0, 0xb2, 0xeb, 0x20, 0xb0, 0xf3, 0x84, 0x19, 0x14, 0x3a, 0xb7, 0xe6, 0x79, 0x80, 0x3a, + 0xa3, 0x65, 0x29, 0x8d, 0x0b, 0xa6, 0xc4, 0x35, 0xaf, 0x30, 0x58, 0xa3, 0xb2, 0xff, 0x79, 0x01, + 0x06, 0x65, 0xb6, 0xbc, 0x96, 0xd7, 0x8d, 0xda, 0xef, 0x50, 0xe9, 0xb3, 0xd1, 0x65, 0x28, 0x31, + 0xbd, 0x74, 0x25, 0xd6, 0x96, 0x2a, 0xad, 0xd0, 0x8a, 0x44, 0xe0, 0x98, 0x86, 0xee, 0x8e, 0x61, + 0x6b, 0x9d, 0x91, 0x27, 0x1e, 0xda, 0x56, 0x39, 0x18, 0x4b, 0x3c, 0xfa, 0x18, 0x8c, 0xf1, 0x72, + 0x81, 0xdf, 0x74, 0x36, 0xb9, 0x2d, 0xab, 0x57, 0x45, 0x2f, 0x19, 0x5b, 0x49, 0xe0, 0x0e, 0xf6, + 0xcb, 0x27, 0x92, 0x30, 0x66, 0xa4, 0x4d, 0x71, 0x61, 0x2e, 0x6b, 0xbc, 0x12, 0xba, 0xab, 0xa7, + 0x3c, 0xdd, 0x62, 0x14, 0xd6, 0xe9, 0xec, 0x4f, 0x03, 0x4a, 0xe7, 0x0d, 0x44, 0xaf, 0x73, 0x97, + 0x67, 0x37, 0x20, 0xf5, 0x76, 0x46, 0x5b, 0x3d, 0x46, 0x87, 0x7c, 0x23, 0xc7, 0x4b, 0x61, 0x55, + 0xde, 0xfe, 0x3f, 0x8b, 0x30, 0x96, 0x8c, 0x0a, 0x80, 0xae, 0x41, 0x1f, 0x17, 0x29, 0x05, 0xfb, + 0x36, 0x3e, 0x41, 0x5a, 0x2c, 0x01, 0x76, 0xb8, 0x0a, 0xa9, 0x54, 0x94, 0x47, 0x6f, 0xc1, 
0x60, + 0xdd, 0xbf, 0xe3, 0xdd, 0x71, 0x82, 0xfa, 0x4c, 0x65, 0x49, 0x4c, 0xe7, 0x4c, 0x45, 0xc5, 0x7c, + 0x4c, 0xa6, 0xc7, 0x27, 0x60, 0xf6, 0xef, 0x18, 0x85, 0x75, 0x76, 0x68, 0x8d, 0x25, 0xfa, 0xd8, + 0x70, 0x37, 0x57, 0x9c, 0x66, 0xbb, 0xf7, 0x2f, 0x73, 0x92, 0x48, 0xe3, 0x3c, 0x2c, 0xb2, 0x81, + 0x70, 0x04, 0x8e, 0x19, 0xa1, 0xcf, 0xc2, 0x44, 0x98, 0x63, 0x3a, 0xc9, 0x4b, 0x23, 0xdb, 0xce, + 0x9a, 0x30, 0x7b, 0xfa, 0xfe, 0x7e, 0x79, 0x22, 0xcb, 0xc8, 0x92, 0x55, 0x8d, 0xfd, 0xa5, 0x13, + 0x60, 0x2c, 0x62, 0x23, 0xab, 0xb8, 0x75, 0x44, 0x59, 0xc5, 0x31, 0x0c, 0x90, 0x9d, 0x66, 0xb4, + 0x37, 0xef, 0x06, 0x62, 0x4c, 0x32, 0x79, 0x2e, 0x08, 0x9a, 0x34, 0x4f, 0x89, 0xc1, 0x8a, 0x4f, + 0x76, 0xea, 0xf7, 0xe2, 0x37, 0x31, 0xf5, 0x7b, 0xcf, 0x31, 0xa6, 0x7e, 0x5f, 0x85, 0xfe, 0x4d, + 0x37, 0xc2, 0xa4, 0xe9, 0x8b, 0xcb, 0x5c, 0xe6, 0x3c, 0xbc, 0xca, 0x49, 0xd2, 0x49, 0x86, 0x05, + 0x02, 0x4b, 0x26, 0xe8, 0x75, 0xb5, 0x02, 0xfb, 0xf2, 0x15, 0x2e, 0x69, 0xe7, 0x95, 0xcc, 0x35, + 0x28, 0x12, 0xbc, 0xf7, 0x3f, 0x68, 0x82, 0xf7, 0x45, 0x99, 0x96, 0x7d, 0x20, 0xff, 0xb1, 0x1a, + 0xcb, 0xba, 0xde, 0x21, 0x19, 0xfb, 0x2d, 0x3d, 0x95, 0x7d, 0x29, 0x7f, 0x27, 0x50, 0x59, 0xea, + 0xbb, 0x4c, 0x60, 0xff, 0x7d, 0x16, 0x9c, 0x4c, 0xa6, 0x9a, 0x65, 0x6f, 0x2a, 0x84, 0x9f, 0xc7, + 0x4b, 0xdd, 0xe4, 0xfe, 0x65, 0x05, 0x8c, 0x0a, 0x99, 0x8e, 0x34, 0x93, 0x0c, 0x67, 0x57, 0x47, + 0x3b, 0x3a, 0x58, 0xaf, 0x0b, 0x7f, 0x83, 0xc7, 0x72, 0x32, 0xe1, 0xb7, 0xc9, 0x7f, 0xbf, 0x96, + 0x91, 0x75, 0xfd, 0xf1, 0xbc, 0xac, 0xeb, 0x5d, 0xe7, 0x5a, 0x7f, 0x5d, 0xe5, 0xc0, 0x1f, 0xce, + 0x9f, 0x4a, 0x3c, 0xc3, 0x7d, 0xc7, 0xcc, 0xf7, 0xaf, 0xab, 0xcc, 0xf7, 0x6d, 0x22, 0x8b, 0xf3, + 0xbc, 0xf6, 0x1d, 0xf3, 0xdd, 0x6b, 0x39, 0xeb, 0x47, 0x8f, 0x26, 0x67, 0xbd, 0x71, 0xd4, 0xf0, + 0xb4, 0xe9, 0xcf, 0x74, 0x38, 0x6a, 0x0c, 0xbe, 0xed, 0x0f, 0x1b, 0x9e, 0x9f, 0x7f, 0xfc, 0x81, + 0xf2, 0xf3, 0xdf, 0xd2, 0xf3, 0xdd, 0xa3, 0x0e, 0x09, 0xdd, 0x29, 0x51, 0x97, 0x59, 0xee, 0x6f, + 0xe9, 0x07, 0xe0, 0x44, 0x3e, 
0x5f, 0x75, 0xce, 0xa5, 0xf9, 0x66, 0x1e, 0x81, 0xa9, 0xec, 0xf9, + 0x27, 0x8e, 0x27, 0x7b, 0xfe, 0xc9, 0x23, 0xcf, 0x9e, 0x7f, 0xea, 0x18, 0xb2, 0xe7, 0x9f, 0x3e, + 0xc6, 0xec, 0xf9, 0xb7, 0x98, 0x73, 0x14, 0x0f, 0x00, 0x25, 0x22, 0xa1, 0x3f, 0x95, 0x13, 0x3f, + 0x2d, 0x1d, 0x25, 0x8a, 0x7f, 0x9c, 0x42, 0xe1, 0x98, 0x55, 0x46, 0x56, 0xfe, 0xc9, 0x87, 0x90, + 0x95, 0x7f, 0x35, 0xce, 0xca, 0x7f, 0x26, 0x7f, 0xa8, 0x33, 0x9e, 0xd3, 0xe4, 0xe4, 0xe2, 0xbf, + 0xa5, 0xe7, 0xd0, 0x7f, 0xa4, 0x8d, 0x15, 0x2c, 0x4b, 0xa1, 0xdc, 0x26, 0x73, 0xfe, 0x6b, 0x3c, + 0x73, 0xfe, 0xd9, 0xfc, 0x9d, 0x3c, 0x79, 0xdc, 0x19, 0xf9, 0xf2, 0x69, 0xbb, 0x54, 0xf0, 0x57, + 0x16, 0xf3, 0x3d, 0xa7, 0x5d, 0x2a, 0x7a, 0x6c, 0xba, 0x5d, 0x0a, 0x85, 0x63, 0x56, 0xf6, 0x0f, + 0x14, 0xe0, 0x7c, 0xfb, 0xf5, 0x16, 0x6b, 0xc9, 0x2b, 0xb1, 0x43, 0x40, 0x42, 0x4b, 0xce, 0xef, + 0x6c, 0x31, 0x55, 0xd7, 0xf1, 0x20, 0xaf, 0xc2, 0xb8, 0x7a, 0x87, 0xd3, 0x70, 0x6b, 0x7b, 0xab, + 0xf1, 0x35, 0x59, 0x45, 0x4e, 0xa8, 0x26, 0x09, 0x70, 0xba, 0x0c, 0x9a, 0x81, 0x51, 0x03, 0xb8, + 0x34, 0x2f, 0xee, 0x66, 0x71, 0x94, 0x71, 0x13, 0x8d, 0x93, 0xf4, 0xf6, 0xcf, 0x59, 0x70, 0x3a, + 0x27, 0xe5, 0x6b, 0xd7, 0xe1, 0x0e, 0x37, 0x60, 0xb4, 0x69, 0x16, 0xed, 0x10, 0xa1, 0xd5, 0x48, + 0x2c, 0xab, 0xda, 0x9a, 0x40, 0xe0, 0x24, 0x53, 0xfb, 0x67, 0x0a, 0x70, 0xae, 0xad, 0x63, 0x29, + 0xc2, 0x70, 0x6a, 0x73, 0x27, 0x74, 0xe6, 0x02, 0x52, 0x27, 0x5e, 0xe4, 0x3a, 0x8d, 0x6a, 0x93, + 0xd4, 0x34, 0x3b, 0x07, 0xf3, 0xd0, 0xbc, 0xba, 0x52, 0x9d, 0x49, 0x53, 0xe0, 0x9c, 0x92, 0x68, + 0x11, 0x50, 0x1a, 0x23, 0x46, 0x98, 0x65, 0x0f, 0x48, 0xf3, 0xc3, 0x19, 0x25, 0xd0, 0x07, 0x61, + 0x58, 0x39, 0xac, 0x6a, 0x23, 0xce, 0x36, 0x76, 0xac, 0x23, 0xb0, 0x49, 0x87, 0xae, 0xf0, 0xf4, + 0x13, 0x22, 0x51, 0x89, 0x30, 0x8a, 0x8c, 0xca, 0xdc, 0x12, 0x02, 0x8c, 0x75, 0x9a, 0xd9, 0x97, + 0x7f, 0xeb, 0x1b, 0xe7, 0xdf, 0xf7, 0xbb, 0xdf, 0x38, 0xff, 0xbe, 0x3f, 0xf8, 0xc6, 0xf9, 0xf7, + 0x7d, 0xd7, 0xfd, 0xf3, 0xd6, 0x6f, 0xdd, 0x3f, 0x6f, 0xfd, 0xee, 
0xfd, 0xf3, 0xd6, 0x1f, 0xdc, + 0x3f, 0x6f, 0xfd, 0xf1, 0xfd, 0xf3, 0xd6, 0x17, 0xff, 0xe4, 0xfc, 0xfb, 0xde, 0x44, 0x71, 0x00, + 0xd1, 0xcb, 0x74, 0x74, 0x2e, 0xef, 0x5e, 0xf9, 0x5f, 0x01, 0x00, 0x00, 0xff, 0xff, 0xbd, 0x0b, + 0x0a, 0x3d, 0x91, 0x13, 0x01, 0x00, } func (m *AWSElasticBlockStoreVolumeSource) Marshal() (dAtA []byte, err error) { @@ -8752,6 +8820,15 @@ func (m *Container) MarshalToSizedBuffer(dAtA []byte) (int, error) { _ = i var l int _ = l + if m.RestartPolicy != nil { + i -= len(*m.RestartPolicy) + copy(dAtA[i:], *m.RestartPolicy) + i = encodeVarintGenerated(dAtA, i, uint64(len(*m.RestartPolicy))) + i-- + dAtA[i] = 0x1 + i-- + dAtA[i] = 0xc2 + } if len(m.ResizePolicy) > 0 { for iNdEx := len(m.ResizePolicy) - 1; iNdEx >= 0; iNdEx-- { { @@ -10105,6 +10182,15 @@ func (m *EphemeralContainerCommon) MarshalToSizedBuffer(dAtA []byte) (int, error _ = i var l int _ = l + if m.RestartPolicy != nil { + i -= len(*m.RestartPolicy) + copy(dAtA[i:], *m.RestartPolicy) + i = encodeVarintGenerated(dAtA, i, uint64(len(*m.RestartPolicy))) + i-- + dAtA[i] = 0x1 + i-- + dAtA[i] = 0xc2 + } if len(m.ResizePolicy) > 0 { for iNdEx := len(m.ResizePolicy) - 1; iNdEx >= 0; iNdEx-- { { @@ -11255,6 +11341,34 @@ func (m *HostAlias) MarshalToSizedBuffer(dAtA []byte) (int, error) { return len(dAtA) - i, nil } +func (m *HostIP) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *HostIP) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *HostIP) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + i -= len(m.IP) + copy(dAtA[i:], m.IP) + i = encodeVarintGenerated(dAtA, i, uint64(len(m.IP))) + i-- + dAtA[i] = 0xa + return len(dAtA) - i, nil +} + func (m *HostPathVolumeSource) Marshal() (dAtA []byte, err error) { size := 
m.Size() dAtA = make([]byte, size) @@ -13740,12 +13854,29 @@ func (m *PersistentVolumeClaimStatus) MarshalToSizedBuffer(dAtA []byte) (int, er _ = i var l int _ = l - if m.ResizeStatus != nil { - i -= len(*m.ResizeStatus) - copy(dAtA[i:], *m.ResizeStatus) - i = encodeVarintGenerated(dAtA, i, uint64(len(*m.ResizeStatus))) - i-- - dAtA[i] = 0x32 + if len(m.AllocatedResourceStatuses) > 0 { + keysForAllocatedResourceStatuses := make([]string, 0, len(m.AllocatedResourceStatuses)) + for k := range m.AllocatedResourceStatuses { + keysForAllocatedResourceStatuses = append(keysForAllocatedResourceStatuses, string(k)) + } + github_com_gogo_protobuf_sortkeys.Strings(keysForAllocatedResourceStatuses) + for iNdEx := len(keysForAllocatedResourceStatuses) - 1; iNdEx >= 0; iNdEx-- { + v := m.AllocatedResourceStatuses[ResourceName(keysForAllocatedResourceStatuses[iNdEx])] + baseI := i + i -= len(v) + copy(dAtA[i:], v) + i = encodeVarintGenerated(dAtA, i, uint64(len(v))) + i-- + dAtA[i] = 0x12 + i -= len(keysForAllocatedResourceStatuses[iNdEx]) + copy(dAtA[i:], keysForAllocatedResourceStatuses[iNdEx]) + i = encodeVarintGenerated(dAtA, i, uint64(len(keysForAllocatedResourceStatuses[iNdEx]))) + i-- + dAtA[i] = 0xa + i = encodeVarintGenerated(dAtA, i, uint64(baseI-i)) + i-- + dAtA[i] = 0x3a + } } if len(m.AllocatedResources) > 0 { keysForAllocatedResources := make([]string, 0, len(m.AllocatedResources)) @@ -14404,6 +14535,18 @@ func (m *PersistentVolumeStatus) MarshalToSizedBuffer(dAtA []byte) (int, error) _ = i var l int _ = l + if m.LastPhaseTransitionTime != nil { + { + size, err := m.LastPhaseTransitionTime.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintGenerated(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x22 + } i -= len(m.Reason) copy(dAtA[i:], m.Reason) i = encodeVarintGenerated(dAtA, i, uint64(len(m.Reason))) @@ -15267,6 +15410,41 @@ func (m *PodResourceClaim) MarshalToSizedBuffer(dAtA []byte) (int, error) { return len(dAtA) 
- i, nil } +func (m *PodResourceClaimStatus) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *PodResourceClaimStatus) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *PodResourceClaimStatus) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if m.ResourceClaimName != nil { + i -= len(*m.ResourceClaimName) + copy(dAtA[i:], *m.ResourceClaimName) + i = encodeVarintGenerated(dAtA, i, uint64(len(*m.ResourceClaimName))) + i-- + dAtA[i] = 0x12 + } + i -= len(m.Name) + copy(dAtA[i:], m.Name) + i = encodeVarintGenerated(dAtA, i, uint64(len(m.Name))) + i-- + dAtA[i] = 0xa + return len(dAtA) - i, nil +} + func (m *PodSchedulingGate) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) @@ -15936,6 +16114,36 @@ func (m *PodStatus) MarshalToSizedBuffer(dAtA []byte) (int, error) { _ = i var l int _ = l + if len(m.HostIPs) > 0 { + for iNdEx := len(m.HostIPs) - 1; iNdEx >= 0; iNdEx-- { + { + size, err := m.HostIPs[iNdEx].MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintGenerated(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x1 + i-- + dAtA[i] = 0x82 + } + } + if len(m.ResourceClaimStatuses) > 0 { + for iNdEx := len(m.ResourceClaimStatuses) - 1; iNdEx >= 0; iNdEx-- { + { + size, err := m.ResourceClaimStatuses[iNdEx].MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintGenerated(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x7a + } + } i -= len(m.Resize) copy(dAtA[i:], m.Resize) i = encodeVarintGenerated(dAtA, i, uint64(len(m.Resize))) @@ -20970,6 +21178,10 @@ func (m *Container) Size() (n int) { n += 2 + l + sovGenerated(uint64(l)) } } + if m.RestartPolicy != nil { + l = len(*m.RestartPolicy) 
+ n += 2 + l + sovGenerated(uint64(l)) + } return n } @@ -21469,6 +21681,10 @@ func (m *EphemeralContainerCommon) Size() (n int) { n += 2 + l + sovGenerated(uint64(l)) } } + if m.RestartPolicy != nil { + l = len(*m.RestartPolicy) + n += 2 + l + sovGenerated(uint64(l)) + } return n } @@ -21805,6 +22021,17 @@ func (m *HostAlias) Size() (n int) { return n } +func (m *HostIP) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + l = len(m.IP) + n += 1 + l + sovGenerated(uint64(l)) + return n +} + func (m *HostPathVolumeSource) Size() (n int) { if m == nil { return 0 @@ -22744,9 +22971,13 @@ func (m *PersistentVolumeClaimStatus) Size() (n int) { n += mapEntrySize + 1 + sovGenerated(uint64(mapEntrySize)) } } - if m.ResizeStatus != nil { - l = len(*m.ResizeStatus) - n += 1 + l + sovGenerated(uint64(l)) + if len(m.AllocatedResourceStatuses) > 0 { + for k, v := range m.AllocatedResourceStatuses { + _ = k + _ = v + mapEntrySize := 1 + len(k) + sovGenerated(uint64(len(k))) + 1 + len(v) + sovGenerated(uint64(len(v))) + n += mapEntrySize + 1 + sovGenerated(uint64(mapEntrySize)) + } } return n } @@ -22950,6 +23181,10 @@ func (m *PersistentVolumeStatus) Size() (n int) { n += 1 + l + sovGenerated(uint64(l)) l = len(m.Reason) n += 1 + l + sovGenerated(uint64(l)) + if m.LastPhaseTransitionTime != nil { + l = m.LastPhaseTransitionTime.Size() + n += 1 + l + sovGenerated(uint64(l)) + } return n } @@ -23263,6 +23498,21 @@ func (m *PodResourceClaim) Size() (n int) { return n } +func (m *PodResourceClaimStatus) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + l = len(m.Name) + n += 1 + l + sovGenerated(uint64(l)) + if m.ResourceClaimName != nil { + l = len(*m.ResourceClaimName) + n += 1 + l + sovGenerated(uint64(l)) + } + return n +} + func (m *PodSchedulingGate) Size() (n int) { if m == nil { return 0 @@ -23552,6 +23802,18 @@ func (m *PodStatus) Size() (n int) { } l = len(m.Resize) n += 1 + l + sovGenerated(uint64(l)) + if len(m.ResourceClaimStatuses) 
> 0 { + for _, e := range m.ResourceClaimStatuses { + l = e.Size() + n += 1 + l + sovGenerated(uint64(l)) + } + } + if len(m.HostIPs) > 0 { + for _, e := range m.HostIPs { + l = e.Size() + n += 2 + l + sovGenerated(uint64(l)) + } + } return n } @@ -25585,6 +25847,7 @@ func (this *Container) String() string { `VolumeDevices:` + repeatedStringForVolumeDevices + `,`, `StartupProbe:` + strings.Replace(this.StartupProbe.String(), "Probe", "Probe", 1) + `,`, `ResizePolicy:` + repeatedStringForResizePolicy + `,`, + `RestartPolicy:` + valueToStringGenerated(this.RestartPolicy) + `,`, `}`, }, "") return s @@ -25960,6 +26223,7 @@ func (this *EphemeralContainerCommon) String() string { `VolumeDevices:` + repeatedStringForVolumeDevices + `,`, `StartupProbe:` + strings.Replace(this.StartupProbe.String(), "Probe", "Probe", 1) + `,`, `ResizePolicy:` + repeatedStringForResizePolicy + `,`, + `RestartPolicy:` + valueToStringGenerated(this.RestartPolicy) + `,`, `}`, }, "") return s @@ -26221,6 +26485,16 @@ func (this *HostAlias) String() string { }, "") return s } +func (this *HostIP) String() string { + if this == nil { + return "nil" + } + s := strings.Join([]string{`&HostIP{`, + `IP:` + fmt.Sprintf("%v", this.IP) + `,`, + `}`, + }, "") + return s +} func (this *HostPathVolumeSource) String() string { if this == nil { return "nil" @@ -26972,13 +27246,23 @@ func (this *PersistentVolumeClaimStatus) String() string { mapStringForAllocatedResources += fmt.Sprintf("%v: %v,", k, this.AllocatedResources[ResourceName(k)]) } mapStringForAllocatedResources += "}" + keysForAllocatedResourceStatuses := make([]string, 0, len(this.AllocatedResourceStatuses)) + for k := range this.AllocatedResourceStatuses { + keysForAllocatedResourceStatuses = append(keysForAllocatedResourceStatuses, string(k)) + } + github_com_gogo_protobuf_sortkeys.Strings(keysForAllocatedResourceStatuses) + mapStringForAllocatedResourceStatuses := "map[ResourceName]ClaimResourceStatus{" + for _, k := range 
keysForAllocatedResourceStatuses { + mapStringForAllocatedResourceStatuses += fmt.Sprintf("%v: %v,", k, this.AllocatedResourceStatuses[ResourceName(k)]) + } + mapStringForAllocatedResourceStatuses += "}" s := strings.Join([]string{`&PersistentVolumeClaimStatus{`, `Phase:` + fmt.Sprintf("%v", this.Phase) + `,`, `AccessModes:` + fmt.Sprintf("%v", this.AccessModes) + `,`, `Capacity:` + mapStringForCapacity + `,`, `Conditions:` + repeatedStringForConditions + `,`, `AllocatedResources:` + mapStringForAllocatedResources + `,`, - `ResizeStatus:` + valueToStringGenerated(this.ResizeStatus) + `,`, + `AllocatedResourceStatuses:` + mapStringForAllocatedResourceStatuses + `,`, `}`, }, "") return s @@ -27088,6 +27372,7 @@ func (this *PersistentVolumeStatus) String() string { `Phase:` + fmt.Sprintf("%v", this.Phase) + `,`, `Message:` + fmt.Sprintf("%v", this.Message) + `,`, `Reason:` + fmt.Sprintf("%v", this.Reason) + `,`, + `LastPhaseTransitionTime:` + strings.Replace(fmt.Sprintf("%v", this.LastPhaseTransitionTime), "Time", "v1.Time", 1) + `,`, `}`, }, "") return s @@ -27337,6 +27622,17 @@ func (this *PodResourceClaim) String() string { }, "") return s } +func (this *PodResourceClaimStatus) String() string { + if this == nil { + return "nil" + } + s := strings.Join([]string{`&PodResourceClaimStatus{`, + `Name:` + fmt.Sprintf("%v", this.Name) + `,`, + `ResourceClaimName:` + valueToStringGenerated(this.ResourceClaimName) + `,`, + `}`, + }, "") + return s +} func (this *PodSchedulingGate) String() string { if this == nil { return "nil" @@ -27533,6 +27829,16 @@ func (this *PodStatus) String() string { repeatedStringForEphemeralContainerStatuses += strings.Replace(strings.Replace(f.String(), "ContainerStatus", "ContainerStatus", 1), `&`, ``, 1) + "," } repeatedStringForEphemeralContainerStatuses += "}" + repeatedStringForResourceClaimStatuses := "[]PodResourceClaimStatus{" + for _, f := range this.ResourceClaimStatuses { + repeatedStringForResourceClaimStatuses += 
strings.Replace(strings.Replace(f.String(), "PodResourceClaimStatus", "PodResourceClaimStatus", 1), `&`, ``, 1) + "," + } + repeatedStringForResourceClaimStatuses += "}" + repeatedStringForHostIPs := "[]HostIP{" + for _, f := range this.HostIPs { + repeatedStringForHostIPs += strings.Replace(strings.Replace(f.String(), "HostIP", "HostIP", 1), `&`, ``, 1) + "," + } + repeatedStringForHostIPs += "}" s := strings.Join([]string{`&PodStatus{`, `Phase:` + fmt.Sprintf("%v", this.Phase) + `,`, `Conditions:` + repeatedStringForConditions + `,`, @@ -27548,6 +27854,8 @@ func (this *PodStatus) String() string { `PodIPs:` + repeatedStringForPodIPs + `,`, `EphemeralContainerStatuses:` + repeatedStringForEphemeralContainerStatuses + `,`, `Resize:` + fmt.Sprintf("%v", this.Resize) + `,`, + `ResourceClaimStatuses:` + repeatedStringForResourceClaimStatuses + `,`, + `HostIPs:` + repeatedStringForHostIPs + `,`, `}`, }, "") return s @@ -34125,6 +34433,39 @@ func (m *Container) Unmarshal(dAtA []byte) error { return err } iNdEx = postIndex + case 24: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field RestartPolicy", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowGenerated + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthGenerated + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthGenerated + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + s := ContainerRestartPolicy(dAtA[iNdEx:postIndex]) + m.RestartPolicy = &s + iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipGenerated(dAtA[iNdEx:]) @@ -38278,6 +38619,39 @@ func (m *EphemeralContainerCommon) Unmarshal(dAtA []byte) error { return err } iNdEx = postIndex + case 24: + if wireType != 2 { + return 
fmt.Errorf("proto: wrong wireType = %d for field RestartPolicy", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowGenerated + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthGenerated + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthGenerated + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + s := ContainerRestartPolicy(dAtA[iNdEx:postIndex]) + m.RestartPolicy = &s + iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipGenerated(dAtA[iNdEx:]) @@ -41368,6 +41742,88 @@ func (m *HostAlias) Unmarshal(dAtA []byte) error { } return nil } +func (m *HostIP) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowGenerated + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: HostIP: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: HostIP: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field IP", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowGenerated + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthGenerated + } + postIndex := 
iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthGenerated + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.IP = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipGenerated(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthGenerated + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} func (m *HostPathVolumeSource) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 @@ -49625,11 +50081,140 @@ func (m *PersistentVolumeClaimStatus) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if m.AllocatedResources == nil { - m.AllocatedResources = make(ResourceList) + if m.AllocatedResources == nil { + m.AllocatedResources = make(ResourceList) + } + var mapkey ResourceName + mapvalue := &resource.Quantity{} + for iNdEx < postIndex { + entryPreIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowGenerated + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + if fieldNum == 1 { + var stringLenmapkey uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowGenerated + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLenmapkey |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLenmapkey := int(stringLenmapkey) + if intStringLenmapkey < 0 { + return ErrInvalidLengthGenerated + } + postStringIndexmapkey := iNdEx + intStringLenmapkey + if postStringIndexmapkey < 0 { + return ErrInvalidLengthGenerated + } + if postStringIndexmapkey > l { + return io.ErrUnexpectedEOF + } + mapkey = 
ResourceName(dAtA[iNdEx:postStringIndexmapkey]) + iNdEx = postStringIndexmapkey + } else if fieldNum == 2 { + var mapmsglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowGenerated + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + mapmsglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if mapmsglen < 0 { + return ErrInvalidLengthGenerated + } + postmsgIndex := iNdEx + mapmsglen + if postmsgIndex < 0 { + return ErrInvalidLengthGenerated + } + if postmsgIndex > l { + return io.ErrUnexpectedEOF + } + mapvalue = &resource.Quantity{} + if err := mapvalue.Unmarshal(dAtA[iNdEx:postmsgIndex]); err != nil { + return err + } + iNdEx = postmsgIndex + } else { + iNdEx = entryPreIndex + skippy, err := skipGenerated(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthGenerated + } + if (iNdEx + skippy) > postIndex { + return io.ErrUnexpectedEOF + } + iNdEx += skippy + } + } + m.AllocatedResources[ResourceName(mapkey)] = *mapvalue + iNdEx = postIndex + case 7: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field AllocatedResourceStatuses", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowGenerated + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthGenerated + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthGenerated + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if m.AllocatedResourceStatuses == nil { + m.AllocatedResourceStatuses = make(map[ResourceName]ClaimResourceStatus) } var mapkey ResourceName - mapvalue := &resource.Quantity{} + var mapvalue ClaimResourceStatus for iNdEx < postIndex { entryPreIndex := iNdEx var wire uint64 @@ -49678,7 +50263,7 @@ func (m 
*PersistentVolumeClaimStatus) Unmarshal(dAtA []byte) error { mapkey = ResourceName(dAtA[iNdEx:postStringIndexmapkey]) iNdEx = postStringIndexmapkey } else if fieldNum == 2 { - var mapmsglen int + var stringLenmapvalue uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowGenerated @@ -49688,26 +50273,24 @@ func (m *PersistentVolumeClaimStatus) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - mapmsglen |= int(b&0x7F) << shift + stringLenmapvalue |= uint64(b&0x7F) << shift if b < 0x80 { break } } - if mapmsglen < 0 { + intStringLenmapvalue := int(stringLenmapvalue) + if intStringLenmapvalue < 0 { return ErrInvalidLengthGenerated } - postmsgIndex := iNdEx + mapmsglen - if postmsgIndex < 0 { + postStringIndexmapvalue := iNdEx + intStringLenmapvalue + if postStringIndexmapvalue < 0 { return ErrInvalidLengthGenerated } - if postmsgIndex > l { + if postStringIndexmapvalue > l { return io.ErrUnexpectedEOF } - mapvalue = &resource.Quantity{} - if err := mapvalue.Unmarshal(dAtA[iNdEx:postmsgIndex]); err != nil { - return err - } - iNdEx = postmsgIndex + mapvalue = ClaimResourceStatus(dAtA[iNdEx:postStringIndexmapvalue]) + iNdEx = postStringIndexmapvalue } else { iNdEx = entryPreIndex skippy, err := skipGenerated(dAtA[iNdEx:]) @@ -49723,40 +50306,7 @@ func (m *PersistentVolumeClaimStatus) Unmarshal(dAtA []byte) error { iNdEx += skippy } } - m.AllocatedResources[ResourceName(mapkey)] = *mapvalue - iNdEx = postIndex - case 6: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field ResizeStatus", wireType) - } - var stringLen uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowGenerated - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - stringLen |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - intStringLen := int(stringLen) - if intStringLen < 0 { - return ErrInvalidLengthGenerated - } - postIndex := iNdEx + intStringLen - if postIndex 
< 0 { - return ErrInvalidLengthGenerated - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - s := PersistentVolumeClaimResizeStatus(dAtA[iNdEx:postIndex]) - m.ResizeStatus = &s + m.AllocatedResourceStatuses[ResourceName(mapkey)] = ((ClaimResourceStatus)(mapvalue)) iNdEx = postIndex default: iNdEx = preIndex @@ -51526,6 +52076,42 @@ func (m *PersistentVolumeStatus) Unmarshal(dAtA []byte) error { } m.Reason = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex + case 4: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field LastPhaseTransitionTime", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowGenerated + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthGenerated + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthGenerated + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if m.LastPhaseTransitionTime == nil { + m.LastPhaseTransitionTime = &v1.Time{} + } + if err := m.LastPhaseTransitionTime.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipGenerated(dAtA[iNdEx:]) @@ -54039,6 +54625,121 @@ func (m *PodResourceClaim) Unmarshal(dAtA []byte) error { } return nil } +func (m *PodResourceClaimStatus) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowGenerated + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: PodResourceClaimStatus: wiretype end group for non-group") + } + if 
fieldNum <= 0 { + return fmt.Errorf("proto: PodResourceClaimStatus: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Name", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowGenerated + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthGenerated + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthGenerated + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Name = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field ResourceClaimName", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowGenerated + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthGenerated + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthGenerated + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + s := string(dAtA[iNdEx:postIndex]) + m.ResourceClaimName = &s + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipGenerated(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthGenerated + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} func (m *PodSchedulingGate) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 @@ -56483,6 
+57184,74 @@ func (m *PodStatus) Unmarshal(dAtA []byte) error { } m.Resize = PodResizeStatus(dAtA[iNdEx:postIndex]) iNdEx = postIndex + case 15: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field ResourceClaimStatuses", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowGenerated + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthGenerated + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthGenerated + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.ResourceClaimStatuses = append(m.ResourceClaimStatuses, PodResourceClaimStatus{}) + if err := m.ResourceClaimStatuses[len(m.ResourceClaimStatuses)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 16: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field HostIPs", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowGenerated + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthGenerated + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthGenerated + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.HostIPs = append(m.HostIPs, HostIP{}) + if err := m.HostIPs[len(m.HostIPs)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipGenerated(dAtA[iNdEx:]) diff --git a/cluster-autoscaler/vendor/k8s.io/api/core/v1/generated.proto b/cluster-autoscaler/vendor/k8s.io/api/core/v1/generated.proto index 94e0a71156cc..901e837313fa 100644 --- 
a/cluster-autoscaler/vendor/k8s.io/api/core/v1/generated.proto +++ b/cluster-autoscaler/vendor/k8s.io/api/core/v1/generated.proto @@ -414,15 +414,9 @@ message ClaimSource { // // The template will be used to create a new ResourceClaim, which will // be bound to this pod. When this pod is deleted, the ResourceClaim - // will also be deleted. The name of the ResourceClaim will be <pod name>-<resource name>, where <resource name> is the - // PodResourceClaim.Name. Pod validation will reject the pod if the - // concatenated name is not valid for a ResourceClaim (e.g. too long). - // - // An existing ResourceClaim with that name that is not owned by the - // pod will not be used for the pod to avoid using an unrelated - // resource by mistake. Scheduling and pod startup are then blocked - // until the unrelated ResourceClaim is removed. + // will also be deleted. The pod name and resource name, along with a + // generated component, will be used to form a unique name for the + // ResourceClaim, which will be recorded in pod.status.resourceClaimStatuses. // // This field is immutable and no changes will be made to the // corresponding ResourceClaim by the control plane after creating the @@ -729,6 +723,25 @@ message Container { // +listType=atomic repeated ContainerResizePolicy resizePolicy = 23; + // RestartPolicy defines the restart behavior of individual containers in a pod. + // This field may only be set for init containers, and the only allowed value is "Always". + // For non-init containers or when this field is not specified, + // the restart behavior is defined by the Pod's restart policy and the container type. + // Setting the RestartPolicy as "Always" for the init container will have the following effect: + // this init container will be continually restarted on + // exit until all regular containers have terminated. Once all regular + // containers have completed, all init containers with restartPolicy "Always" + // will be shut down. 
This lifecycle differs from normal init containers and + // is often referred to as a "sidecar" container. Although this init + // container still starts in the init container sequence, it does not wait + // for the container to complete before proceeding to the next init + // container. Instead, the next init container starts immediately after this + // init container is started, or after any startupProbe has successfully + // completed. + // +featureGate=SidecarContainers + // +optional + optional string restartPolicy = 24; + // Pod volumes to mount into the container's filesystem. // Cannot be updated. // +optional @@ -1147,6 +1160,8 @@ message EndpointPort { // // * Kubernetes-defined prefixed names: // * 'kubernetes.io/h2c' - HTTP/2 over cleartext as described in https://www.rfc-editor.org/rfc/rfc7540 + // * 'kubernetes.io/ws' - WebSocket over cleartext as described in https://www.rfc-editor.org/rfc/rfc6455 + // * 'kubernetes.io/wss' - WebSocket over TLS as described in https://www.rfc-editor.org/rfc/rfc6455 // // * Other protocols should use implementation-defined prefixed names such as // mycompany.com/my-custom-protocol. @@ -1386,6 +1401,14 @@ message EphemeralContainerCommon { // +listType=atomic repeated ContainerResizePolicy resizePolicy = 23; + // Restart policy for the container to manage the restart behavior of each + // container within a pod. + // This may only be set for init containers. You cannot set this field on + // ephemeral containers. + // +featureGate=SidecarContainers + // +optional + optional string restartPolicy = 24; + // Pod volumes to mount into the container's filesystem. Subpath mounts are not allowed for ephemeral containers. // Cannot be updated. // +optional @@ -1853,7 +1876,8 @@ message HTTPGetAction { // HTTPHeader describes a custom header to be used in HTTP probes message HTTPHeader { - // The header field name + // The header field name. 
+ // This will be canonicalized upon output, so case-variant names will be understood as the same header. optional string name = 1; // The header field value @@ -1870,6 +1894,12 @@ message HostAlias { repeated string hostnames = 2; } +// HostIP represents a single IP address allocated to the host. +message HostIP { + // IP is the IP address assigned to the host + optional string ip = 1; +} + // Represents a host path mapped into a pod. // Host path volumes do not support ownership management or SELinux relabeling. message HostPathVolumeSource { @@ -2862,25 +2892,71 @@ message PersistentVolumeClaimStatus { // +patchStrategy=merge repeated PersistentVolumeClaimCondition conditions = 4; - // allocatedResources is the storage resource within AllocatedResources tracks the capacity allocated to a PVC. It may - // be larger than the actual capacity when a volume expansion operation is requested. + // allocatedResources tracks the resources allocated to a PVC including its capacity. + // Key names follow standard Kubernetes label syntax. Valid values are either: + // * Un-prefixed keys: + // - storage - the capacity of the volume. + // * Custom resources must use implementation-defined prefixed names such as "example.com/my-custom-resource" + // Apart from above values - keys that are unprefixed or have kubernetes.io prefix are considered + // reserved and hence may not be used. + // + // Capacity reported here may be larger than the actual capacity when a volume expansion operation + // is requested. // For storage quota, the larger value from allocatedResources and PVC.spec.resources is used. // If allocatedResources is not set, PVC.spec.resources alone is used for quota calculation. // If a volume expansion capacity request is lowered, allocatedResources is only // lowered if there are no expansion operations in progress and if the actual volume capacity // is equal or lower than the requested capacity. 
+ // + // A controller that receives PVC update with previously unknown resourceName + // should ignore the update for the purpose it was designed. For example - a controller that + // only is responsible for resizing capacity of the volume, should ignore PVC updates that change other valid + // resources associated with PVC. + // + // This is an alpha field and requires enabling RecoverVolumeExpansionFailure feature. + // +featureGate=RecoverVolumeExpansionFailure + // +optional map<string, k8s.io.apimachinery.pkg.api.resource.Quantity> allocatedResources = 5; - // resizeStatus stores status of resize operation. - // ResizeStatus is not set by default but when expansion is complete resizeStatus is set to empty - // string by resize controller or kubelet. + // allocatedResourceStatuses stores status of resource being resized for the given PVC. + // Key names follow standard Kubernetes label syntax. Valid values are either: + // * Un-prefixed keys: + // - storage - the capacity of the volume. + // * Custom resources must use implementation-defined prefixed names such as "example.com/my-custom-resource" + // Apart from above values - keys that are unprefixed or have kubernetes.io prefix are considered + // reserved and hence may not be used. + // + // ClaimResourceStatus can be in any of following states: + // - ControllerResizeInProgress: + // State set when resize controller starts resizing the volume in control-plane. + // - ControllerResizeFailed: + // State set when resize has failed in resize controller with a terminal error. + // - NodeResizePending: + // State set when resize controller has finished resizing the volume but further resizing of + // volume is needed on the node. + // - NodeResizeInProgress: + // State set when kubelet starts resizing the volume. + // - NodeResizeFailed: + // State set when resizing has failed in kubelet with a terminal error. Transient errors don't set + // NodeResizeFailed. 
+ // For example: if expanding a PVC for more capacity - this field can be one of the following states: + // - pvc.status.allocatedResourceStatus['storage'] = "ControllerResizeInProgress" + // - pvc.status.allocatedResourceStatus['storage'] = "ControllerResizeFailed" + // - pvc.status.allocatedResourceStatus['storage'] = "NodeResizePending" + // - pvc.status.allocatedResourceStatus['storage'] = "NodeResizeInProgress" + // - pvc.status.allocatedResourceStatus['storage'] = "NodeResizeFailed" + // When this field is not set, it means that no resize operation is in progress for the given PVC. + // + // A controller that receives PVC update with previously unknown resourceName or ClaimResourceStatus + // should ignore the update for the purpose it was designed. For example - a controller that + // only is responsible for resizing capacity of the volume, should ignore PVC updates that change other valid + // resources associated with PVC. + // + // This is an alpha field and requires enabling RecoverVolumeExpansionFailure feature. + // +featureGate=RecoverVolumeExpansionFailure + // +mapType=granular // +optional - optional string resizeStatus = 6; + map<string, string> allocatedResourceStatuses = 7; } // PersistentVolumeClaimTemplate is used to produce @@ -3102,6 +3178,13 @@ message PersistentVolumeStatus { // for machine parsing and tidy display in the CLI. // +optional optional string reason = 3; + + // lastPhaseTransitionTime is the time the phase transitioned from one to another + // and automatically resets to current time everytime a volume phase transitions. + // This is an alpha field and requires enabling PersistentVolumeLastPhaseTransitionTime feature. + // +featureGate=PersistentVolumeLastPhaseTransitionTime + // +optional + optional k8s.io.apimachinery.pkg.apis.meta.v1.Time lastPhaseTransitionTime = 4; } // Represents a Photon Controller persistent disk resource. 
@@ -3346,12 +3429,9 @@ message PodExecOptions { repeated string command = 6; } -// IP address information for entries in the (plural) PodIPs field. -// Each entry includes: -// -// IP: An IP address allocated to the pod. Routable at least within the cluster. +// PodIP represents a single IP address allocated to the pod. message PodIP { - // ip is an IP address (IPv4 or IPv6) assigned to the pod + // IP is the IP address assigned to the pod optional string ip = 1; } @@ -3468,6 +3548,24 @@ message PodResourceClaim { optional ClaimSource source = 2; } +// PodResourceClaimStatus is stored in the PodStatus for each PodResourceClaim +// which references a ResourceClaimTemplate. It stores the generated name for +// the corresponding ResourceClaim. +message PodResourceClaimStatus { + // Name uniquely identifies this resource claim inside the pod. + // This must match the name of an entry in pod.spec.resourceClaims, + // which implies that the string must be a DNS_LABEL. + optional string name = 1; + + // ResourceClaimName is the name of the ResourceClaim that was + // generated for the Pod in the namespace of the Pod. It this is + // unset, then generating a ResourceClaim was not necessary. The + // pod.spec.resourceClaims entry can be ignored in this case. + // + // +optional + optional string resourceClaimName = 2; +} + // PodSchedulingGate is associated to a Pod to guard its scheduling. message PodSchedulingGate { // Name of the scheduling gate. @@ -3959,11 +4057,23 @@ message PodStatus { // +optional optional string nominatedNodeName = 11; - // IP address of the host to which the pod is assigned. Empty if not yet scheduled. + // hostIP holds the IP address of the host to which the pod is assigned. Empty if the pod has not started yet. 
+ // A pod can be assigned to a node that has a problem in kubelet which in turns mean that HostIP will + // not be updated even if there is a node is assigned to pod // +optional optional string hostIP = 5; - // IP address allocated to the pod. Routable at least within the cluster. + // hostIPs holds the IP addresses allocated to the host. If this field is specified, the first entry must + // match the hostIP field. This list is empty if the pod has not started yet. + // A pod can be assigned to a node that has a problem in kubelet which in turns means that HostIPs will + // not be updated even if there is a node is assigned to this pod. + // +optional + // +patchStrategy=merge + // +patchMergeKey=ip + // +listType=atomic + repeated HostIP hostIPs = 16; + + // podIP address allocated to the pod. Routable at least within the cluster. // Empty if not yet allocated. // +optional optional string podIP = 6; @@ -4008,6 +4118,15 @@ message PodStatus { // +featureGate=InPlacePodVerticalScaling // +optional optional string resize = 14; + + // Status of resource claims. + // +patchMergeKey=name + // +patchStrategy=merge,retainKeys + // +listType=map + // +listMapKey=name + // +featureGate=DynamicResourceAllocation + // +optional + repeated PodResourceClaimStatus resourceClaimStatuses = 15; } // PodStatusResult is a wrapper for PodStatus returned by kubelet that can be encode/decoded @@ -4752,7 +4871,7 @@ message SeccompProfile { // localhostProfile indicates a profile defined in a file on the node should be used. // The profile must be preconfigured on the node to work. // Must be a descending path, relative to the kubelet's configured seccomp profile location. - // Must only be set if type is "Localhost". + // Must be set if type is "Localhost". Must NOT be set for any other type. // +optional optional string localhostProfile = 2; } @@ -5123,10 +5242,19 @@ message ServicePort { optional string protocol = 2; // The application protocol for this port. 
+ // This is used as a hint for implementations to offer richer behavior for protocols that they understand. // This field follows standard Kubernetes label syntax. - // Un-prefixed names are reserved for IANA standard service names (as per + // Valid values are either: + // + // * Un-prefixed protocol names - reserved for IANA standard service names (as per // RFC-6335 and https://www.iana.org/assignments/service-names). - // Non-standard protocols should use prefixed names such as + // + // * Kubernetes-defined prefixed names: + // * 'kubernetes.io/h2c' - HTTP/2 over cleartext as described in https://www.rfc-editor.org/rfc/rfc7540 + // * 'kubernetes.io/ws' - WebSocket over cleartext as described in https://www.rfc-editor.org/rfc/rfc6455 + // * 'kubernetes.io/wss' - WebSocket over TLS as described in https://www.rfc-editor.org/rfc/rfc6455 + // + // * Other protocols should use implementation-defined prefixed names such as // mycompany.com/my-custom-protocol. // +optional optional string appProtocol = 6; @@ -5274,10 +5402,9 @@ message ServiceSpec { // This feature depends on whether the underlying cloud-provider supports specifying // the loadBalancerIP when a load balancer is created. // This field will be ignored if the cloud-provider does not support the feature. - // Deprecated: This field was under-specified and its meaning varies across implementations, - // and it cannot support dual-stack. - // As of Kubernetes v1.24, users are encouraged to use implementation-specific annotations when available. - // This field may be removed in a future API version. + // Deprecated: This field was under-specified and its meaning varies across implementations. + // Using it is non-portable and it may not support dual-stack. + // Users are encouraged to use implementation-specific annotations when available. 
// +optional optional string loadBalancerIP = 8; @@ -6052,12 +6179,9 @@ message WindowsSecurityContextOptions { optional string runAsUserName = 3; // HostProcess determines if a container should be run as a 'Host Process' container. - // This field is alpha-level and will only be honored by components that enable the - // WindowsHostProcessContainers feature flag. Setting this field without the feature - // flag will result in errors when validating the Pod. All of a Pod's containers must - // have the same effective HostProcess value (it is not allowed to have a mix of HostProcess - // containers and non-HostProcess containers). In addition, if HostProcess is true - // then HostNetwork must also be set to true. + // All of a Pod's containers must have the same effective HostProcess value + // (it is not allowed to have a mix of HostProcess containers and non-HostProcess containers). + // In addition, if HostProcess is true then HostNetwork must also be set to true. // +optional optional bool hostProcess = 4; } diff --git a/cluster-autoscaler/vendor/k8s.io/api/core/v1/types.go b/cluster-autoscaler/vendor/k8s.io/api/core/v1/types.go index c9bb18a2cc77..9e05c2235652 100644 --- a/cluster-autoscaler/vendor/k8s.io/api/core/v1/types.go +++ b/cluster-autoscaler/vendor/k8s.io/api/core/v1/types.go @@ -411,6 +411,12 @@ type PersistentVolumeStatus struct { // for machine parsing and tidy display in the CLI. // +optional Reason string `json:"reason,omitempty" protobuf:"bytes,3,opt,name=reason"` + // lastPhaseTransitionTime is the time the phase transitioned from one to another + // and automatically resets to current time everytime a volume phase transitions. + // This is an alpha field and requires enabling PersistentVolumeLastPhaseTransitionTime feature. 
+ // +featureGate=PersistentVolumeLastPhaseTransitionTime + // +optional + LastPhaseTransitionTime *metav1.Time `json:"lastPhaseTransitionTime,omitempty" protobuf:"bytes,4,opt,name=lastPhaseTransitionTime"` } // +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object @@ -558,23 +564,27 @@ const ( ) // +enum -type PersistentVolumeClaimResizeStatus string +// When a controller receives a persistentvolume claim update with ClaimResourceStatus for a resource +// that it does not recognize, then it should ignore that update and let other controllers +// handle it. +type ClaimResourceStatus string const ( - // When expansion is complete, the empty string is set by resize controller or kubelet. - PersistentVolumeClaimNoExpansionInProgress PersistentVolumeClaimResizeStatus = "" - // State set when resize controller starts expanding the volume in control-plane - PersistentVolumeClaimControllerExpansionInProgress PersistentVolumeClaimResizeStatus = "ControllerExpansionInProgress" - // State set when expansion has failed in resize controller with a terminal error. - // Transient errors such as timeout should not set this status and should leave ResizeStatus + // State set when resize controller starts resizing the volume in control-plane. + PersistentVolumeClaimControllerResizeInProgress ClaimResourceStatus = "ControllerResizeInProgress" + + // State set when resize has failed in resize controller with a terminal error. + // Transient errors such as timeout should not set this status and should leave allocatedResourceStatus // unmodified, so that the resize controller can resume the volume expansion. - PersistentVolumeClaimControllerExpansionFailed PersistentVolumeClaimResizeStatus = "ControllerExpansionFailed" - // State set when resize controller has finished expanding the volume but further expansion is needed on the node. 
- PersistentVolumeClaimNodeExpansionPending PersistentVolumeClaimResizeStatus = "NodeExpansionPending" - // State set when kubelet starts expanding the volume. - PersistentVolumeClaimNodeExpansionInProgress PersistentVolumeClaimResizeStatus = "NodeExpansionInProgress" - // State set when expansion has failed in kubelet with a terminal error. Transient errors don't set NodeExpansionFailed. - PersistentVolumeClaimNodeExpansionFailed PersistentVolumeClaimResizeStatus = "NodeExpansionFailed" + PersistentVolumeClaimControllerResizeFailed ClaimResourceStatus = "ControllerResizeFailed" + + // State set when resize controller has finished resizing the volume but further resizing of volume + // is needed on the node. + PersistentVolumeClaimNodeResizePending ClaimResourceStatus = "NodeResizePending" + // State set when kubelet starts resizing the volume. + PersistentVolumeClaimNodeResizeInProgress ClaimResourceStatus = "NodeResizeInProgress" + // State set when resizing has failed in kubelet with a terminal error. Transient errors don't set NodeResizeFailed + PersistentVolumeClaimNodeResizeFailed ClaimResourceStatus = "NodeResizeFailed" ) // PersistentVolumeClaimCondition contains details about state of pvc @@ -615,24 +625,74 @@ type PersistentVolumeClaimStatus struct { // +patchMergeKey=type // +patchStrategy=merge Conditions []PersistentVolumeClaimCondition `json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,4,rep,name=conditions"` - // allocatedResources is the storage resource within AllocatedResources tracks the capacity allocated to a PVC. It may - // be larger than the actual capacity when a volume expansion operation is requested. + // allocatedResources tracks the resources allocated to a PVC including its capacity. + // Key names follow standard Kubernetes label syntax. Valid values are either: + // * Un-prefixed keys: + // - storage - the capacity of the volume. 
+ // * Custom resources must use implementation-defined prefixed names such as "example.com/my-custom-resource" + // Apart from above values - keys that are unprefixed or have kubernetes.io prefix are considered + // reserved and hence may not be used. + // + // Capacity reported here may be larger than the actual capacity when a volume expansion operation + // is requested. // For storage quota, the larger value from allocatedResources and PVC.spec.resources is used. // If allocatedResources is not set, PVC.spec.resources alone is used for quota calculation. // If a volume expansion capacity request is lowered, allocatedResources is only // lowered if there are no expansion operations in progress and if the actual volume capacity // is equal or lower than the requested capacity. + // + // A controller that receives PVC update with previously unknown resourceName + // should ignore the update for the purpose it was designed. For example - a controller that + // only is responsible for resizing capacity of the volume, should ignore PVC updates that change other valid + // resources associated with PVC. + // // This is an alpha field and requires enabling RecoverVolumeExpansionFailure feature. // +featureGate=RecoverVolumeExpansionFailure // +optional AllocatedResources ResourceList `json:"allocatedResources,omitempty" protobuf:"bytes,5,rep,name=allocatedResources,casttype=ResourceList,castkey=ResourceName"` - // resizeStatus stores status of resize operation. - // ResizeStatus is not set by default but when expansion is complete resizeStatus is set to empty - // string by resize controller or kubelet. + + // resizestatus is tombstoned since the field was replaced by allocatedResourceStatus + // ResizeStatus *PersistentVolumeClaimResizeStatus `json:"resizeStatus,omitempty" protobuf:"bytes,6,opt,name=resizeStatus,casttype=PersistentVolumeClaimResizeStatus"` + + // allocatedResourceStatuses stores status of resource being resized for the given PVC. 
+ // Key names follow standard Kubernetes label syntax. Valid values are either: + // * Un-prefixed keys: + // - storage - the capacity of the volume. + // * Custom resources must use implementation-defined prefixed names such as "example.com/my-custom-resource" + // Apart from above values - keys that are unprefixed or have kubernetes.io prefix are considered + // reserved and hence may not be used. + // + // ClaimResourceStatus can be in any of following states: + // - ControllerResizeInProgress: + // State set when resize controller starts resizing the volume in control-plane. + // - ControllerResizeFailed: + // State set when resize has failed in resize controller with a terminal error. + // - NodeResizePending: + // State set when resize controller has finished resizing the volume but further resizing of + // volume is needed on the node. + // - NodeResizeInProgress: + // State set when kubelet starts resizing the volume. + // - NodeResizeFailed: + // State set when resizing has failed in kubelet with a terminal error. Transient errors don't set + // NodeResizeFailed. + // For example: if expanding a PVC for more capacity - this field can be one of the following states: + // - pvc.status.allocatedResourceStatus['storage'] = "ControllerResizeInProgress" + // - pvc.status.allocatedResourceStatus['storage'] = "ControllerResizeFailed" + // - pvc.status.allocatedResourceStatus['storage'] = "NodeResizePending" + // - pvc.status.allocatedResourceStatus['storage'] = "NodeResizeInProgress" + // - pvc.status.allocatedResourceStatus['storage'] = "NodeResizeFailed" + // When this field is not set, it means that no resize operation is in progress for the given PVC. + // + // A controller that receives PVC update with previously unknown resourceName or ClaimResourceStatus + // should ignore the update for the purpose it was designed. 
For example - a controller that + // only is responsible for resizing capacity of the volume, should ignore PVC updates that change other valid + // resources associated with PVC. + // // This is an alpha field and requires enabling RecoverVolumeExpansionFailure feature. // +featureGate=RecoverVolumeExpansionFailure + // +mapType=granular // +optional - ResizeStatus *PersistentVolumeClaimResizeStatus `json:"resizeStatus,omitempty" protobuf:"bytes,6,opt,name=resizeStatus,casttype=PersistentVolumeClaimResizeStatus"` + AllocatedResourceStatuses map[ResourceName]ClaimResourceStatus `json:"allocatedResourceStatuses,omitempty" protobuf:"bytes,7,rep,name=allocatedResourceStatuses"` } // +enum @@ -2137,7 +2197,8 @@ type SecretEnvSource struct { // HTTPHeader describes a custom header to be used in HTTP probes type HTTPHeader struct { - // The header field name + // The header field name. + // This will be canonicalized upon output, so case-variant names will be understood as the same header. Name string `json:"name" protobuf:"bytes,1,opt,name=name"` // The header field value Value string `json:"value" protobuf:"bytes,2,opt,name=value"` @@ -2445,6 +2506,24 @@ type Container struct { // +optional // +listType=atomic ResizePolicy []ContainerResizePolicy `json:"resizePolicy,omitempty" protobuf:"bytes,23,rep,name=resizePolicy"` + // RestartPolicy defines the restart behavior of individual containers in a pod. + // This field may only be set for init containers, and the only allowed value is "Always". + // For non-init containers or when this field is not specified, + // the restart behavior is defined by the Pod's restart policy and the container type. + // Setting the RestartPolicy as "Always" for the init container will have the following effect: + // this init container will be continually restarted on + // exit until all regular containers have terminated. 
Once all regular + // containers have completed, all init containers with restartPolicy "Always" + // will be shut down. This lifecycle differs from normal init containers and + // is often referred to as a "sidecar" container. Although this init + // container still starts in the init container sequence, it does not wait + // for the container to complete before proceeding to the next init + // container. Instead, the next init container starts immediately after this + // init container is started, or after any startupProbe has successfully + // completed. + // +featureGate=SidecarContainers + // +optional + RestartPolicy *ContainerRestartPolicy `json:"restartPolicy,omitempty" protobuf:"bytes,24,opt,name=restartPolicy,casttype=ContainerRestartPolicy"` // Pod volumes to mount into the container's filesystem. // Cannot be updated. // +optional @@ -2841,6 +2920,14 @@ const ( RestartPolicyNever RestartPolicy = "Never" ) +// ContainerRestartPolicy is the restart policy for a single container. +// This may only be set for init containers and only allowed value is "Always". +type ContainerRestartPolicy string + +const ( + ContainerRestartPolicyAlways ContainerRestartPolicy = "Always" +) + // DNSPolicy defines how a pod's DNS will be configured. // +enum type DNSPolicy string @@ -3523,15 +3610,9 @@ type ClaimSource struct { // // The template will be used to create a new ResourceClaim, which will // be bound to this pod. When this pod is deleted, the ResourceClaim - // will also be deleted. The name of the ResourceClaim will be -, where is the - // PodResourceClaim.Name. Pod validation will reject the pod if the - // concatenated name is not valid for a ResourceClaim (e.g. too long). - // - // An existing ResourceClaim with that name that is not owned by the - // pod will not be used for the pod to avoid using an unrelated - // resource by mistake. Scheduling and pod startup are then blocked - // until the unrelated ResourceClaim is removed. + // will also be deleted. 
The pod name and resource name, along with a + // generated component, will be used to form a unique name for the + // ResourceClaim, which will be recorded in pod.status.resourceClaimStatuses. // // This field is immutable and no changes will be made to the // corresponding ResourceClaim by the control plane after creating the @@ -3539,6 +3620,24 @@ type ClaimSource struct { ResourceClaimTemplateName *string `json:"resourceClaimTemplateName,omitempty" protobuf:"bytes,2,opt,name=resourceClaimTemplateName"` } +// PodResourceClaimStatus is stored in the PodStatus for each PodResourceClaim +// which references a ResourceClaimTemplate. It stores the generated name for +// the corresponding ResourceClaim. +type PodResourceClaimStatus struct { + // Name uniquely identifies this resource claim inside the pod. + // This must match the name of an entry in pod.spec.resourceClaims, + // which implies that the string must be a DNS_LABEL. + Name string `json:"name" protobuf:"bytes,1,name=name"` + + // ResourceClaimName is the name of the ResourceClaim that was + // generated for the Pod in the namespace of the Pod. If this is + // unset, then generating a ResourceClaim was not necessary. The + // pod.spec.resourceClaims entry can be ignored in this case. + // + // +optional + ResourceClaimName *string `json:"resourceClaimName,omitempty" protobuf:"bytes,2,opt,name=resourceClaimName"` +} + // OSName is the set of OS'es that can be used in OS. type OSName string @@ -3837,7 +3936,7 @@ type SeccompProfile struct { // localhostProfile indicates a profile defined in a file on the node should be used. // The profile must be preconfigured on the node to work. // Must be a descending path, relative to the kubelet's configured seccomp profile location. - // Must only be set if type is "Localhost". + // Must be set if type is "Localhost". Must NOT be set for any other type. 
// +optional LocalhostProfile *string `json:"localhostProfile,omitempty" protobuf:"bytes,2,opt,name=localhostProfile"` } @@ -3898,12 +3997,15 @@ type PodDNSConfigOption struct { Value *string `json:"value,omitempty" protobuf:"bytes,2,opt,name=value"` } -// IP address information for entries in the (plural) PodIPs field. -// Each entry includes: -// -// IP: An IP address allocated to the pod. Routable at least within the cluster. +// PodIP represents a single IP address allocated to the pod. type PodIP struct { - // ip is an IP address (IPv4 or IPv6) assigned to the pod + // IP is the IP address assigned to the pod + IP string `json:"ip,omitempty" protobuf:"bytes,1,opt,name=ip"` +} + +// HostIP represents a single IP address allocated to the host. +type HostIP struct { + // IP is the IP address assigned to the host IP string `json:"ip,omitempty" protobuf:"bytes,1,opt,name=ip"` } @@ -3975,6 +4077,13 @@ type EphemeralContainerCommon struct { // +optional // +listType=atomic ResizePolicy []ContainerResizePolicy `json:"resizePolicy,omitempty" protobuf:"bytes,23,rep,name=resizePolicy"` + // Restart policy for the container to manage the restart behavior of each + // container within a pod. + // This may only be set for init containers. You cannot set this field on + // ephemeral containers. + // +featureGate=SidecarContainers + // +optional + RestartPolicy *ContainerRestartPolicy `json:"restartPolicy,omitempty" protobuf:"bytes,24,opt,name=restartPolicy,casttype=ContainerRestartPolicy"` // Pod volumes to mount into the container's filesystem. Subpath mounts are not allowed for ephemeral containers. // Cannot be updated. // +optional @@ -4127,10 +4236,23 @@ type PodStatus struct { // +optional NominatedNodeName string `json:"nominatedNodeName,omitempty" protobuf:"bytes,11,opt,name=nominatedNodeName"` - // IP address of the host to which the pod is assigned. Empty if not yet scheduled. + // hostIP holds the IP address of the host to which the pod is assigned. 
Empty if the pod has not started yet. + // A pod can be assigned to a node that has a problem in kubelet, which in turn means that HostIP will + // not be updated even if a node is assigned to the pod // +optional HostIP string `json:"hostIP,omitempty" protobuf:"bytes,5,opt,name=hostIP"` - // IP address allocated to the pod. Routable at least within the cluster. + + // hostIPs holds the IP addresses allocated to the host. If this field is specified, the first entry must + // match the hostIP field. This list is empty if the pod has not started yet. + // A pod can be assigned to a node that has a problem in kubelet, which in turn means that HostIPs will + // not be updated even if a node is assigned to this pod. + // +optional + // +patchStrategy=merge + // +patchMergeKey=ip + // +listType=atomic + HostIPs []HostIP `json:"hostIPs,omitempty" protobuf:"bytes,16,rep,name=hostIPs" patchStrategy:"merge" patchMergeKey:"ip"` + + // podIP address allocated to the pod. Routable at least within the cluster. // Empty if not yet allocated. // +optional PodIP string `json:"podIP,omitempty" protobuf:"bytes,6,opt,name=podIP"` @@ -4173,6 +4295,15 @@ type PodStatus struct { // +featureGate=InPlacePodVerticalScaling // +optional Resize PodResizeStatus `json:"resize,omitempty" protobuf:"bytes,14,opt,name=resize,casttype=PodResizeStatus"` + + // Status of resource claims. 
+ // +patchMergeKey=name + // +patchStrategy=merge,retainKeys + // +listType=map + // +listMapKey=name + // +featureGate=DynamicResourceAllocation + // +optional + ResourceClaimStatuses []PodResourceClaimStatus `json:"resourceClaimStatuses,omitempty" patchStrategy:"merge,retainKeys" patchMergeKey:"name" protobuf:"bytes,15,rep,name=resourceClaimStatuses"` } // +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object @@ -4713,10 +4844,9 @@ type ServiceSpec struct { // This feature depends on whether the underlying cloud-provider supports specifying // the loadBalancerIP when a load balancer is created. // This field will be ignored if the cloud-provider does not support the feature. - // Deprecated: This field was under-specified and its meaning varies across implementations, - // and it cannot support dual-stack. - // As of Kubernetes v1.24, users are encouraged to use implementation-specific annotations when available. - // This field may be removed in a future API version. + // Deprecated: This field was under-specified and its meaning varies across implementations. + // Using it is non-portable and it may not support dual-stack. + // Users are encouraged to use implementation-specific annotations when available. // +optional LoadBalancerIP string `json:"loadBalancerIP,omitempty" protobuf:"bytes,8,opt,name=loadBalancerIP"` @@ -4865,10 +4995,19 @@ type ServicePort struct { Protocol Protocol `json:"protocol,omitempty" protobuf:"bytes,2,opt,name=protocol,casttype=Protocol"` // The application protocol for this port. + // This is used as a hint for implementations to offer richer behavior for protocols that they understand. // This field follows standard Kubernetes label syntax. - // Un-prefixed names are reserved for IANA standard service names (as per + // Valid values are either: + // + // * Un-prefixed protocol names - reserved for IANA standard service names (as per // RFC-6335 and https://www.iana.org/assignments/service-names). 
- // Non-standard protocols should use prefixed names such as + // + // * Kubernetes-defined prefixed names: + // * 'kubernetes.io/h2c' - HTTP/2 over cleartext as described in https://www.rfc-editor.org/rfc/rfc7540 + // * 'kubernetes.io/ws' - WebSocket over cleartext as described in https://www.rfc-editor.org/rfc/rfc6455 + // * 'kubernetes.io/wss' - WebSocket over TLS as described in https://www.rfc-editor.org/rfc/rfc6455 + // + // * Other protocols should use implementation-defined prefixed names such as // mycompany.com/my-custom-protocol. // +optional AppProtocol *string `json:"appProtocol,omitempty" protobuf:"bytes,6,opt,name=appProtocol"` @@ -5109,6 +5248,8 @@ type EndpointPort struct { // // * Kubernetes-defined prefixed names: // * 'kubernetes.io/h2c' - HTTP/2 over cleartext as described in https://www.rfc-editor.org/rfc/rfc7540 + // * 'kubernetes.io/ws' - WebSocket over cleartext as described in https://www.rfc-editor.org/rfc/rfc6455 + // * 'kubernetes.io/wss' - WebSocket over TLS as described in https://www.rfc-editor.org/rfc/rfc6455 // // * Other protocols should use implementation-defined prefixed names such as // mycompany.com/my-custom-protocol. @@ -6801,12 +6942,9 @@ type WindowsSecurityContextOptions struct { RunAsUserName *string `json:"runAsUserName,omitempty" protobuf:"bytes,3,opt,name=runAsUserName"` // HostProcess determines if a container should be run as a 'Host Process' container. - // This field is alpha-level and will only be honored by components that enable the - // WindowsHostProcessContainers feature flag. Setting this field without the feature - // flag will result in errors when validating the Pod. All of a Pod's containers must - // have the same effective HostProcess value (it is not allowed to have a mix of HostProcess - // containers and non-HostProcess containers). In addition, if HostProcess is true - // then HostNetwork must also be set to true. 
+ // All of a Pod's containers must have the same effective HostProcess value + // (it is not allowed to have a mix of HostProcess containers and non-HostProcess containers). + // In addition, if HostProcess is true then HostNetwork must also be set to true. // +optional HostProcess *bool `json:"hostProcess,omitempty" protobuf:"bytes,4,opt,name=hostProcess"` } diff --git a/cluster-autoscaler/vendor/k8s.io/api/core/v1/types_swagger_doc_generated.go b/cluster-autoscaler/vendor/k8s.io/api/core/v1/types_swagger_doc_generated.go index a2cf00db87a7..9734d8b41eb5 100644 --- a/cluster-autoscaler/vendor/k8s.io/api/core/v1/types_swagger_doc_generated.go +++ b/cluster-autoscaler/vendor/k8s.io/api/core/v1/types_swagger_doc_generated.go @@ -212,7 +212,7 @@ func (CinderVolumeSource) SwaggerDoc() map[string]string { var map_ClaimSource = map[string]string{ "": "ClaimSource describes a reference to a ResourceClaim.\n\nExactly one of these fields should be set. Consumers of this type must treat an empty object as if it has an unknown value.", "resourceClaimName": "ResourceClaimName is the name of a ResourceClaim object in the same namespace as this pod.", - "resourceClaimTemplateName": "ResourceClaimTemplateName is the name of a ResourceClaimTemplate object in the same namespace as this pod.\n\nThe template will be used to create a new ResourceClaim, which will be bound to this pod. When this pod is deleted, the ResourceClaim will also be deleted. The name of the ResourceClaim will be -, where is the PodResourceClaim.Name. Pod validation will reject the pod if the concatenated name is not valid for a ResourceClaim (e.g. too long).\n\nAn existing ResourceClaim with that name that is not owned by the pod will not be used for the pod to avoid using an unrelated resource by mistake. 
Scheduling and pod startup are then blocked until the unrelated ResourceClaim is removed.\n\nThis field is immutable and no changes will be made to the corresponding ResourceClaim by the control plane after creating the ResourceClaim.", + "resourceClaimTemplateName": "ResourceClaimTemplateName is the name of a ResourceClaimTemplate object in the same namespace as this pod.\n\nThe template will be used to create a new ResourceClaim, which will be bound to this pod. When this pod is deleted, the ResourceClaim will also be deleted. The pod name and resource name, along with a generated component, will be used to form a unique name for the ResourceClaim, which will be recorded in pod.status.resourceClaimStatuses.\n\nThis field is immutable and no changes will be made to the corresponding ResourceClaim by the control plane after creating the ResourceClaim.", } func (ClaimSource) SwaggerDoc() map[string]string { @@ -347,6 +347,7 @@ var map_Container = map[string]string{ "env": "List of environment variables to set in the container. Cannot be updated.", "resources": "Compute Resources required by this container. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/", "resizePolicy": "Resources resize policy for the container.", + "restartPolicy": "RestartPolicy defines the restart behavior of individual containers in a pod. This field may only be set for init containers, and the only allowed value is \"Always\". For non-init containers or when this field is not specified, the restart behavior is defined by the Pod's restart policy and the container type. Setting the RestartPolicy as \"Always\" for the init container will have the following effect: this init container will be continually restarted on exit until all regular containers have terminated. Once all regular containers have completed, all init containers with restartPolicy \"Always\" will be shut down. 
This lifecycle differs from normal init containers and is often referred to as a \"sidecar\" container. Although this init container still starts in the init container sequence, it does not wait for the container to complete before proceeding to the next init container. Instead, the next init container starts immediately after this init container is started, or after any startupProbe has successfully completed.", "volumeMounts": "Pod volumes to mount into the container's filesystem. Cannot be updated.", "volumeDevices": "volumeDevices is the list of block devices to be used by the container.", "livenessProbe": "Periodic probe of container liveness. Container will be restarted if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes", @@ -530,7 +531,7 @@ var map_EndpointPort = map[string]string{ "name": "The name of this port. This must match the 'name' field in the corresponding ServicePort. Must be a DNS_LABEL. Optional only if one port is defined.", "port": "The port number of the endpoint.", "protocol": "The IP protocol for this port. Must be UDP, TCP, or SCTP. Default is TCP.", - "appProtocol": "The application protocol for this port. This is used as a hint for implementations to offer richer behavior for protocols that they understand. This field follows standard Kubernetes label syntax. Valid values are either:\n\n* Un-prefixed protocol names - reserved for IANA standard service names (as per RFC-6335 and https://www.iana.org/assignments/service-names).\n\n* Kubernetes-defined prefixed names:\n * 'kubernetes.io/h2c' - HTTP/2 over cleartext as described in https://www.rfc-editor.org/rfc/rfc7540\n\n* Other protocols should use implementation-defined prefixed names such as mycompany.com/my-custom-protocol.", + "appProtocol": "The application protocol for this port. This is used as a hint for implementations to offer richer behavior for protocols that they understand. 
This field follows standard Kubernetes label syntax. Valid values are either:\n\n* Un-prefixed protocol names - reserved for IANA standard service names (as per RFC-6335 and https://www.iana.org/assignments/service-names).\n\n* Kubernetes-defined prefixed names:\n * 'kubernetes.io/h2c' - HTTP/2 over cleartext as described in https://www.rfc-editor.org/rfc/rfc7540\n * 'kubernetes.io/ws' - WebSocket over cleartext as described in https://www.rfc-editor.org/rfc/rfc6455\n * 'kubernetes.io/wss' - WebSocket over TLS as described in https://www.rfc-editor.org/rfc/rfc6455\n\n* Other protocols should use implementation-defined prefixed names such as mycompany.com/my-custom-protocol.", } func (EndpointPort) SwaggerDoc() map[string]string { @@ -623,6 +624,7 @@ var map_EphemeralContainerCommon = map[string]string{ "env": "List of environment variables to set in the container. Cannot be updated.", "resources": "Resources are not allowed for ephemeral containers. Ephemeral containers use spare resources already allocated to the pod.", "resizePolicy": "Resources resize policy for the container.", + "restartPolicy": "Restart policy for the container to manage the restart behavior of each container within a pod. This may only be set for init containers. You cannot set this field on ephemeral containers.", "volumeMounts": "Pod volumes to mount into the container's filesystem. Subpath mounts are not allowed for ephemeral containers. Cannot be updated.", "volumeDevices": "volumeDevices is the list of block devices to be used by the container.", "livenessProbe": "Probes are not allowed for ephemeral containers.", @@ -832,7 +834,7 @@ func (HTTPGetAction) SwaggerDoc() map[string]string { var map_HTTPHeader = map[string]string{ "": "HTTPHeader describes a custom header to be used in HTTP probes", - "name": "The header field name", + "name": "The header field name. 
This will be canonicalized upon output, so case-variant names will be understood as the same header.", "value": "The header field value", } @@ -850,6 +852,15 @@ func (HostAlias) SwaggerDoc() map[string]string { return map_HostAlias } +var map_HostIP = map[string]string{ + "": "HostIP represents a single IP address allocated to the host.", + "ip": "IP is the IP address assigned to the host", +} + +func (HostIP) SwaggerDoc() map[string]string { + return map_HostIP +} + var map_HostPathVolumeSource = map[string]string{ "": "Represents a host path mapped into a pod. Host path volumes do not support ownership management or SELinux relabeling.", "path": "path of the directory on the host. If the path is a symlink, it will follow the link to the real path. More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath", @@ -1344,13 +1355,13 @@ func (PersistentVolumeClaimSpec) SwaggerDoc() map[string]string { } var map_PersistentVolumeClaimStatus = map[string]string{ - "": "PersistentVolumeClaimStatus is the current status of a persistent volume claim.", - "phase": "phase represents the current phase of PersistentVolumeClaim.", - "accessModes": "accessModes contains the actual access modes the volume backing the PVC has. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1", - "capacity": "capacity represents the actual resources of the underlying volume.", - "conditions": "conditions is the current Condition of persistent volume claim. If underlying persistent volume is being resized then the Condition will be set to 'ResizeStarted'.", - "allocatedResources": "allocatedResources is the storage resource within AllocatedResources tracks the capacity allocated to a PVC. It may be larger than the actual capacity when a volume expansion operation is requested. For storage quota, the larger value from allocatedResources and PVC.spec.resources is used. 
If allocatedResources is not set, PVC.spec.resources alone is used for quota calculation. If a volume expansion capacity request is lowered, allocatedResources is only lowered if there are no expansion operations in progress and if the actual volume capacity is equal or lower than the requested capacity. This is an alpha field and requires enabling RecoverVolumeExpansionFailure feature.", - "resizeStatus": "resizeStatus stores status of resize operation. ResizeStatus is not set by default but when expansion is complete resizeStatus is set to empty string by resize controller or kubelet. This is an alpha field and requires enabling RecoverVolumeExpansionFailure feature.", + "": "PersistentVolumeClaimStatus is the current status of a persistent volume claim.", + "phase": "phase represents the current phase of PersistentVolumeClaim.", + "accessModes": "accessModes contains the actual access modes the volume backing the PVC has. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1", + "capacity": "capacity represents the actual resources of the underlying volume.", + "conditions": "conditions is the current Condition of persistent volume claim. If underlying persistent volume is being resized then the Condition will be set to 'ResizeStarted'.", + "allocatedResources": "allocatedResources tracks the resources allocated to a PVC including its capacity. Key names follow standard Kubernetes label syntax. Valid values are either:\n\t* Un-prefixed keys:\n\t\t- storage - the capacity of the volume.\n\t* Custom resources must use implementation-defined prefixed names such as \"example.com/my-custom-resource\"\nApart from above values - keys that are unprefixed or have kubernetes.io prefix are considered reserved and hence may not be used.\n\nCapacity reported here may be larger than the actual capacity when a volume expansion operation is requested. For storage quota, the larger value from allocatedResources and PVC.spec.resources is used. 
If allocatedResources is not set, PVC.spec.resources alone is used for quota calculation. If a volume expansion capacity request is lowered, allocatedResources is only lowered if there are no expansion operations in progress and if the actual volume capacity is equal or lower than the requested capacity.\n\nA controller that receives PVC update with previously unknown resourceName should ignore the update for the purpose it was designed. For example - a controller that only is responsible for resizing capacity of the volume, should ignore PVC updates that change other valid resources associated with PVC.\n\nThis is an alpha field and requires enabling RecoverVolumeExpansionFailure feature.", + "allocatedResourceStatuses": "allocatedResourceStatuses stores status of resource being resized for the given PVC. Key names follow standard Kubernetes label syntax. Valid values are either:\n\t* Un-prefixed keys:\n\t\t- storage - the capacity of the volume.\n\t* Custom resources must use implementation-defined prefixed names such as \"example.com/my-custom-resource\"\nApart from above values - keys that are unprefixed or have kubernetes.io prefix are considered reserved and hence may not be used.\n\nClaimResourceStatus can be in any of following states:\n\t- ControllerResizeInProgress:\n\t\tState set when resize controller starts resizing the volume in control-plane.\n\t- ControllerResizeFailed:\n\t\tState set when resize has failed in resize controller with a terminal error.\n\t- NodeResizePending:\n\t\tState set when resize controller has finished resizing the volume but further resizing of\n\t\tvolume is needed on the node.\n\t- NodeResizeInProgress:\n\t\tState set when kubelet starts resizing the volume.\n\t- NodeResizeFailed:\n\t\tState set when resizing has failed in kubelet with a terminal error. 
Transient errors don't set\n\t\tNodeResizeFailed.\nFor example: if expanding a PVC for more capacity - this field can be one of the following states:\n\t- pvc.status.allocatedResourceStatus['storage'] = \"ControllerResizeInProgress\"\n - pvc.status.allocatedResourceStatus['storage'] = \"ControllerResizeFailed\"\n - pvc.status.allocatedResourceStatus['storage'] = \"NodeResizePending\"\n - pvc.status.allocatedResourceStatus['storage'] = \"NodeResizeInProgress\"\n - pvc.status.allocatedResourceStatus['storage'] = \"NodeResizeFailed\"\nWhen this field is not set, it means that no resize operation is in progress for the given PVC.\n\nA controller that receives PVC update with previously unknown resourceName or ClaimResourceStatus should ignore the update for the purpose it was designed. For example - a controller that only is responsible for resizing capacity of the volume, should ignore PVC updates that change other valid resources associated with PVC.\n\nThis is an alpha field and requires enabling RecoverVolumeExpansionFailure feature.", } func (PersistentVolumeClaimStatus) SwaggerDoc() map[string]string { @@ -1434,10 +1445,11 @@ func (PersistentVolumeSpec) SwaggerDoc() map[string]string { } var map_PersistentVolumeStatus = map[string]string{ - "": "PersistentVolumeStatus is the current status of a persistent volume.", - "phase": "phase indicates if a volume is available, bound to a claim, or released by a claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#phase", - "message": "message is a human-readable message indicating details about why the volume is in this state.", - "reason": "reason is a brief CamelCase string that describes any failure and is meant for machine parsing and tidy display in the CLI.", + "": "PersistentVolumeStatus is the current status of a persistent volume.", + "phase": "phase indicates if a volume is available, bound to a claim, or released by a claim. 
More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#phase", + "message": "message is a human-readable message indicating details about why the volume is in this state.", + "reason": "reason is a brief CamelCase string that describes any failure and is meant for machine parsing and tidy display in the CLI.", + "lastPhaseTransitionTime": "lastPhaseTransitionTime is the time the phase transitioned from one to another and automatically resets to current time every time a volume phase transitions. This is an alpha field and requires enabling PersistentVolumeLastPhaseTransitionTime feature.", } func (PersistentVolumeStatus) SwaggerDoc() map[string]string { @@ -1559,8 +1571,8 @@ func (PodExecOptions) SwaggerDoc() map[string]string { } var map_PodIP = map[string]string{ - "": "IP address information for entries in the (plural) PodIPs field. Each entry includes:\n\n\tIP: An IP address allocated to the pod. Routable at least within the cluster.", - "ip": "ip is an IP address (IPv4 or IPv6) assigned to the pod", + "": "PodIP represents a single IP address allocated to the pod.", + "ip": "IP is the IP address assigned to the pod", } func (PodIP) SwaggerDoc() map[string]string { @@ -1640,6 +1652,16 @@ func (PodResourceClaim) SwaggerDoc() map[string]string { return map_PodResourceClaim } +var map_PodResourceClaimStatus = map[string]string{ + "": "PodResourceClaimStatus is stored in the PodStatus for each PodResourceClaim which references a ResourceClaimTemplate. It stores the generated name for the corresponding ResourceClaim.", + "name": "Name uniquely identifies this resource claim inside the pod. This must match the name of an entry in pod.spec.resourceClaims, which implies that the string must be a DNS_LABEL.", + "resourceClaimName": "ResourceClaimName is the name of the ResourceClaim that was generated for the Pod in the namespace of the Pod. If this is unset, then generating a ResourceClaim was not necessary. 
The pod.spec.resourceClaims entry can be ignored in this case.", +} + +func (PodResourceClaimStatus) SwaggerDoc() map[string]string { + return map_PodResourceClaimStatus +} + var map_PodSchedulingGate = map[string]string{ "": "PodSchedulingGate is associated to a Pod to guard its scheduling.", "name": "Name of the scheduling gate. Each scheduling gate must have a unique name field.", @@ -1730,8 +1752,9 @@ var map_PodStatus = map[string]string{ "message": "A human readable message indicating details about why the pod is in this condition.", "reason": "A brief CamelCase message indicating details about why the pod is in this state. e.g. 'Evicted'", "nominatedNodeName": "nominatedNodeName is set only when this pod preempts other pods on the node, but it cannot be scheduled right away as preemption victims receive their graceful termination periods. This field does not guarantee that the pod will be scheduled on this node. Scheduler may decide to place the pod elsewhere if other nodes become available sooner. Scheduler may also decide to give the resources on this node to a higher priority pod that is created after preemption. As a result, this field may be different than PodSpec.nodeName when the pod is scheduled.", - "hostIP": "IP address of the host to which the pod is assigned. Empty if not yet scheduled.", - "podIP": "IP address allocated to the pod. Routable at least within the cluster. Empty if not yet allocated.", + "hostIP": "hostIP holds the IP address of the host to which the pod is assigned. Empty if the pod has not started yet. A pod can be assigned to a node that has a problem in kubelet, which in turn means that HostIP will not be updated even if a node is assigned to the pod", + "hostIPs": "hostIPs holds the IP addresses allocated to the host. If this field is specified, the first entry must match the hostIP field. This list is empty if the pod has not started yet. 
A pod can be assigned to a node that has a problem in kubelet, which in turn means that HostIPs will not be updated even if a node is assigned to this pod.", + "podIP": "podIP address allocated to the pod. Routable at least within the cluster. Empty if not yet allocated.", "podIPs": "podIPs holds the IP addresses allocated to the pod. If this field is specified, the 0th entry must match the podIP field. Pods may be allocated at most 1 value for each of IPv4 and IPv6. This list is empty if no IPs have been allocated yet.", "startTime": "RFC 3339 date and time at which the object was acknowledged by the Kubelet. This is before the Kubelet pulled the container image(s) for the pod.", "initContainerStatuses": "The list has one entry per init container in the manifest. The most recent successful init container will have ready = true, the most recently started container will have startTime set. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#pod-and-container-status", @@ -1739,6 +1762,7 @@ var map_PodStatus = map[string]string{ "qosClass": "The Quality of Service (QOS) classification assigned to the pod based on resource requirements See PodQOSClass type for available QOS classes More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-qos/#quality-of-service-classes", "ephemeralContainerStatuses": "Status for any ephemeral containers that have run in this pod.", "resize": "Status of resources resize desired for pod's containers. It is empty if no resources resize is pending. Any changes to container resources will automatically set this to \"Proposed\"", + "resourceClaimStatuses": "Status of resource claims.", } func (PodStatus) SwaggerDoc() map[string]string { @@ -2134,7 +2158,7 @@ func (ScopedResourceSelectorRequirement) SwaggerDoc() map[string]string { var map_SeccompProfile = map[string]string{ "": "SeccompProfile defines a pod/container's seccomp profile settings. 
Only one profile source may be set.", "type": "type indicates which kind of seccomp profile will be applied. Valid options are:\n\nLocalhost - a profile defined in a file on the node should be used. RuntimeDefault - the container runtime default profile should be used. Unconfined - no profile should be applied.", - "localhostProfile": "localhostProfile indicates a profile defined in a file on the node should be used. The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet's configured seccomp profile location. Must only be set if type is \"Localhost\".", + "localhostProfile": "localhostProfile indicates a profile defined in a file on the node should be used. The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet's configured seccomp profile location. Must be set if type is \"Localhost\". Must NOT be set for any other type.", } func (SeccompProfile) SwaggerDoc() map[string]string { @@ -2301,7 +2325,7 @@ var map_ServicePort = map[string]string{ "": "ServicePort contains information on service's port.", "name": "The name of this port within the service. This must be a DNS_LABEL. All ports within a ServiceSpec must have unique names. When considering the endpoints for a Service, this must match the 'name' field in the EndpointPort. Optional if only one ServicePort is defined on this service.", "protocol": "The IP protocol for this port. Supports \"TCP\", \"UDP\", and \"SCTP\". Default is TCP.", - "appProtocol": "The application protocol for this port. This field follows standard Kubernetes label syntax. Un-prefixed names are reserved for IANA standard service names (as per RFC-6335 and https://www.iana.org/assignments/service-names). Non-standard protocols should use prefixed names such as mycompany.com/my-custom-protocol.", + "appProtocol": "The application protocol for this port. 
This is used as a hint for implementations to offer richer behavior for protocols that they understand. This field follows standard Kubernetes label syntax. Valid values are either:\n\n* Un-prefixed protocol names - reserved for IANA standard service names (as per RFC-6335 and https://www.iana.org/assignments/service-names).\n\n* Kubernetes-defined prefixed names:\n * 'kubernetes.io/h2c' - HTTP/2 over cleartext as described in https://www.rfc-editor.org/rfc/rfc7540\n * 'kubernetes.io/ws' - WebSocket over cleartext as described in https://www.rfc-editor.org/rfc/rfc6455\n * 'kubernetes.io/wss' - WebSocket over TLS as described in https://www.rfc-editor.org/rfc/rfc6455\n\n* Other protocols should use implementation-defined prefixed names such as mycompany.com/my-custom-protocol.", "port": "The port that will be exposed by this service.", "targetPort": "Number or name of the port to access on the pods targeted by the service. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. If this is a string, it will be looked up as a named port in the target Pod's container ports. If this is not specified, the value of the 'port' field is used (an identity map). This field is ignored for services with clusterIP=None, and should be omitted or set equal to the 'port' field. More info: https://kubernetes.io/docs/concepts/services-networking/service/#defining-a-service", "nodePort": "The port on each node on which this service is exposed when type is NodePort or LoadBalancer. Usually assigned by the system. If a value is specified, in-range, and not in use it will be used, otherwise the operation will fail. If not specified, a port will be allocated if this Service requires one. If this field is specified when creating a Service which does not need it, creation will fail. This field will be wiped when updating a Service to no longer need it (e.g. changing type from NodePort to ClusterIP). 
More info: https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport", @@ -2329,7 +2353,7 @@ var map_ServiceSpec = map[string]string{ "type": "type determines how the Service is exposed. Defaults to ClusterIP. Valid options are ExternalName, ClusterIP, NodePort, and LoadBalancer. \"ClusterIP\" allocates a cluster-internal IP address for load-balancing to endpoints. Endpoints are determined by the selector or if that is not specified, by manual construction of an Endpoints object or EndpointSlice objects. If clusterIP is \"None\", no virtual IP is allocated and the endpoints are published as a set of endpoints rather than a virtual IP. \"NodePort\" builds on ClusterIP and allocates a port on every node which routes to the same endpoints as the clusterIP. \"LoadBalancer\" builds on NodePort and creates an external load-balancer (if supported in the current cloud) which routes to the same endpoints as the clusterIP. \"ExternalName\" aliases this service to the specified externalName. Several other fields do not apply to ExternalName services. More info: https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types", "externalIPs": "externalIPs is a list of IP addresses for which nodes in the cluster will also accept traffic for this service. These IPs are not managed by Kubernetes. The user is responsible for ensuring that traffic arrives at a node with this IP. A common example is external load-balancers that are not part of the Kubernetes system.", "sessionAffinity": "Supports \"ClientIP\" and \"None\". Used to maintain session affinity. Enable client IP based session affinity. Must be ClientIP or None. Defaults to None. More info: https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies", - "loadBalancerIP": "Only applies to Service Type: LoadBalancer. 
This feature depends on whether the underlying cloud-provider supports specifying the loadBalancerIP when a load balancer is created. This field will be ignored if the cloud-provider does not support the feature. Deprecated: This field was under-specified and its meaning varies across implementations, and it cannot support dual-stack. As of Kubernetes v1.24, users are encouraged to use implementation-specific annotations when available. This field may be removed in a future API version.", + "loadBalancerIP": "Only applies to Service Type: LoadBalancer. This feature depends on whether the underlying cloud-provider supports specifying the loadBalancerIP when a load balancer is created. This field will be ignored if the cloud-provider does not support the feature. Deprecated: This field was under-specified and its meaning varies across implementations. Using it is non-portable and it may not support dual-stack. Users are encouraged to use implementation-specific annotations when available.", "loadBalancerSourceRanges": "If specified and supported by the platform, this will restrict traffic through the cloud-provider load-balancer will be restricted to the specified client IPs. This field will be ignored if the cloud-provider does not support the feature.\" More info: https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/", "externalName": "externalName is the external reference that discovery mechanisms will return as an alias for this service (e.g. a DNS CNAME record). No proxying will be involved. Must be a lowercase RFC-1123 hostname (https://tools.ietf.org/html/rfc1123) and requires `type` to be \"ExternalName\".", "externalTrafficPolicy": "externalTrafficPolicy describes how nodes distribute service traffic they receive on one of the Service's \"externally-facing\" addresses (NodePorts, ExternalIPs, and LoadBalancer IPs). 
If set to \"Local\", the proxy will configure the service in a way that assumes that external load balancers will take care of balancing the service traffic between nodes, and so each node will deliver traffic only to the node-local endpoints of the service, without masquerading the client source IP. (Traffic mistakenly sent to a node with no endpoints will be dropped.) The default value, \"Cluster\", uses the standard behavior of routing to all endpoints evenly (possibly modified by topology and other features). Note that traffic sent to an External IP or LoadBalancer IP from within the cluster will always get \"Cluster\" semantics, but clients sending to a NodePort from within the cluster may need to take traffic policy into account when picking a node.", @@ -2612,7 +2636,7 @@ var map_WindowsSecurityContextOptions = map[string]string{ "gmsaCredentialSpecName": "GMSACredentialSpecName is the name of the GMSA credential spec to use.", "gmsaCredentialSpec": "GMSACredentialSpec is where the GMSA admission webhook (https://github.com/kubernetes-sigs/windows-gmsa) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field.", "runAsUserName": "The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence.", - "hostProcess": "HostProcess determines if a container should be run as a 'Host Process' container. This field is alpha-level and will only be honored by components that enable the WindowsHostProcessContainers feature flag. Setting this field without the feature flag will result in errors when validating the Pod. All of a Pod's containers must have the same effective HostProcess value (it is not allowed to have a mix of HostProcess containers and non-HostProcess containers). 
In addition, if HostProcess is true then HostNetwork must also be set to true.", + "hostProcess": "HostProcess determines if a container should be run as a 'Host Process' container. All of a Pod's containers must have the same effective HostProcess value (it is not allowed to have a mix of HostProcess containers and non-HostProcess containers). In addition, if HostProcess is true then HostNetwork must also be set to true.", } func (WindowsSecurityContextOptions) SwaggerDoc() map[string]string { diff --git a/cluster-autoscaler/vendor/k8s.io/api/core/v1/well_known_labels.go b/cluster-autoscaler/vendor/k8s.io/api/core/v1/well_known_labels.go index 5cf82a981755..8c3cb87b82a1 100644 --- a/cluster-autoscaler/vendor/k8s.io/api/core/v1/well_known_labels.go +++ b/cluster-autoscaler/vendor/k8s.io/api/core/v1/well_known_labels.go @@ -19,6 +19,10 @@ package v1 const ( LabelHostname = "kubernetes.io/hostname" + // Label value is the network location of kube-apiserver stored as <ip:port> + // Stored in APIServer Identity lease objects to view what address is used for peer proxy + AnnotationPeerAdvertiseAddress = "kubernetes.io/peer-advertise-address" + LabelTopologyZone = "topology.kubernetes.io/zone" LabelTopologyRegion = "topology.kubernetes.io/region" diff --git a/cluster-autoscaler/vendor/k8s.io/api/core/v1/zz_generated.deepcopy.go b/cluster-autoscaler/vendor/k8s.io/api/core/v1/zz_generated.deepcopy.go index bfb7e0bff541..d76f0bbbcf76 100644 --- a/cluster-autoscaler/vendor/k8s.io/api/core/v1/zz_generated.deepcopy.go +++ b/cluster-autoscaler/vendor/k8s.io/api/core/v1/zz_generated.deepcopy.go @@ -793,6 +793,11 @@ func (in *Container) DeepCopyInto(out *Container) { *out = make([]ContainerResizePolicy, len(*in)) copy(*out, *in) } + if in.RestartPolicy != nil { + in, out := &in.RestartPolicy, &out.RestartPolicy + *out = new(ContainerRestartPolicy) + **out = **in + } if in.VolumeMounts != nil { in, out := &in.VolumeMounts, &out.VolumeMounts *out = make([]VolumeMount, len(*in)) @@ -1420,6 
+1425,11 @@ func (in *EphemeralContainerCommon) DeepCopyInto(out *EphemeralContainerCommon) *out = make([]ContainerResizePolicy, len(*in)) copy(*out, *in) } + if in.RestartPolicy != nil { + in, out := &in.RestartPolicy, &out.RestartPolicy + *out = new(ContainerRestartPolicy) + **out = **in + } if in.VolumeMounts != nil { in, out := &in.VolumeMounts, &out.VolumeMounts *out = make([]VolumeMount, len(*in)) @@ -1871,6 +1881,22 @@ func (in *HostAlias) DeepCopy() *HostAlias { return out } +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *HostIP) DeepCopyInto(out *HostIP) { + *out = *in + return +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new HostIP. +func (in *HostIP) DeepCopy() *HostIP { + if in == nil { + return nil + } + out := new(HostIP) + in.DeepCopyInto(out) + return out +} + // DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. 
func (in *HostPathVolumeSource) DeepCopyInto(out *HostPathVolumeSource) { *out = *in @@ -2895,7 +2921,7 @@ func (in *PersistentVolume) DeepCopyInto(out *PersistentVolume) { out.TypeMeta = in.TypeMeta in.ObjectMeta.DeepCopyInto(&out.ObjectMeta) in.Spec.DeepCopyInto(&out.Spec) - out.Status = in.Status + in.Status.DeepCopyInto(&out.Status) return } @@ -3072,10 +3098,12 @@ func (in *PersistentVolumeClaimStatus) DeepCopyInto(out *PersistentVolumeClaimSt (*out)[key] = val.DeepCopy() } } - if in.ResizeStatus != nil { - in, out := &in.ResizeStatus, &out.ResizeStatus - *out = new(PersistentVolumeClaimResizeStatus) - **out = **in + if in.AllocatedResourceStatuses != nil { + in, out := &in.AllocatedResourceStatuses, &out.AllocatedResourceStatuses + *out = make(map[ResourceName]ClaimResourceStatus, len(*in)) + for key, val := range *in { + (*out)[key] = val + } } return } @@ -3335,6 +3363,10 @@ func (in *PersistentVolumeSpec) DeepCopy() *PersistentVolumeSpec { // DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. func (in *PersistentVolumeStatus) DeepCopyInto(out *PersistentVolumeStatus) { *out = *in + if in.LastPhaseTransitionTime != nil { + in, out := &in.LastPhaseTransitionTime, &out.LastPhaseTransitionTime + *out = (*in).DeepCopy() + } return } @@ -3807,6 +3839,27 @@ func (in *PodResourceClaim) DeepCopy() *PodResourceClaim { return out } +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *PodResourceClaimStatus) DeepCopyInto(out *PodResourceClaimStatus) { + *out = *in + if in.ResourceClaimName != nil { + in, out := &in.ResourceClaimName, &out.ResourceClaimName + *out = new(string) + **out = **in + } + return +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PodResourceClaimStatus. 
+func (in *PodResourceClaimStatus) DeepCopy() *PodResourceClaimStatus { + if in == nil { + return nil + } + out := new(PodResourceClaimStatus) + in.DeepCopyInto(out) + return out +} + // DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. func (in *PodSchedulingGate) DeepCopyInto(out *PodSchedulingGate) { *out = *in @@ -4091,6 +4144,11 @@ func (in *PodStatus) DeepCopyInto(out *PodStatus) { (*in)[i].DeepCopyInto(&(*out)[i]) } } + if in.HostIPs != nil { + in, out := &in.HostIPs, &out.HostIPs + *out = make([]HostIP, len(*in)) + copy(*out, *in) + } if in.PodIPs != nil { in, out := &in.PodIPs, &out.PodIPs *out = make([]PodIP, len(*in)) @@ -4121,6 +4179,13 @@ func (in *PodStatus) DeepCopyInto(out *PodStatus) { (*in)[i].DeepCopyInto(&(*out)[i]) } } + if in.ResourceClaimStatuses != nil { + in, out := &in.ResourceClaimStatuses, &out.ResourceClaimStatuses + *out = make([]PodResourceClaimStatus, len(*in)) + for i := range *in { + (*in)[i].DeepCopyInto(&(*out)[i]) + } + } return } diff --git a/cluster-autoscaler/vendor/k8s.io/api/discovery/v1/generated.proto b/cluster-autoscaler/vendor/k8s.io/api/discovery/v1/generated.proto index b7150ef2cb82..490ce8922474 100644 --- a/cluster-autoscaler/vendor/k8s.io/api/discovery/v1/generated.proto +++ b/cluster-autoscaler/vendor/k8s.io/api/discovery/v1/generated.proto @@ -146,6 +146,8 @@ message EndpointPort { // // * Kubernetes-defined prefixed names: // * 'kubernetes.io/h2c' - HTTP/2 over cleartext as described in https://www.rfc-editor.org/rfc/rfc7540 + // * 'kubernetes.io/ws' - WebSocket over cleartext as described in https://www.rfc-editor.org/rfc/rfc6455 + // * 'kubernetes.io/wss' - WebSocket over TLS as described in https://www.rfc-editor.org/rfc/rfc6455 // // * Other protocols should use implementation-defined prefixed names such as // mycompany.com/my-custom-protocol. 
diff --git a/cluster-autoscaler/vendor/k8s.io/api/discovery/v1/types.go b/cluster-autoscaler/vendor/k8s.io/api/discovery/v1/types.go index 9b4daafca900..efbb09918c20 100644 --- a/cluster-autoscaler/vendor/k8s.io/api/discovery/v1/types.go +++ b/cluster-autoscaler/vendor/k8s.io/api/discovery/v1/types.go @@ -196,6 +196,8 @@ type EndpointPort struct { // // * Kubernetes-defined prefixed names: // * 'kubernetes.io/h2c' - HTTP/2 over cleartext as described in https://www.rfc-editor.org/rfc/rfc7540 + // * 'kubernetes.io/ws' - WebSocket over cleartext as described in https://www.rfc-editor.org/rfc/rfc6455 + // * 'kubernetes.io/wss' - WebSocket over TLS as described in https://www.rfc-editor.org/rfc/rfc6455 // // * Other protocols should use implementation-defined prefixed names such as // mycompany.com/my-custom-protocol. diff --git a/cluster-autoscaler/vendor/k8s.io/api/discovery/v1/types_swagger_doc_generated.go b/cluster-autoscaler/vendor/k8s.io/api/discovery/v1/types_swagger_doc_generated.go index c780c9573d18..bef7745398ab 100644 --- a/cluster-autoscaler/vendor/k8s.io/api/discovery/v1/types_swagger_doc_generated.go +++ b/cluster-autoscaler/vendor/k8s.io/api/discovery/v1/types_swagger_doc_generated.go @@ -68,7 +68,7 @@ var map_EndpointPort = map[string]string{ "name": "name represents the name of this port. All ports in an EndpointSlice must have a unique name. If the EndpointSlice is derived from a Kubernetes service, this corresponds to the Service.ports[].name. Name must either be an empty string or pass DNS_LABEL validation: * must be no more than 63 characters long. * must consist of lower case alphanumeric characters or '-'. * must start and end with an alphanumeric character. Default is empty string.", "protocol": "protocol represents the IP protocol for this port. Must be UDP, TCP, or SCTP. Default is TCP.", "port": "port represents the port number of the endpoint. 
If this is not specified, ports are not restricted and must be interpreted in the context of the specific consumer.", - "appProtocol": "The application protocol for this port. This is used as a hint for implementations to offer richer behavior for protocols that they understand. This field follows standard Kubernetes label syntax. Valid values are either:\n\n* Un-prefixed protocol names - reserved for IANA standard service names (as per RFC-6335 and https://www.iana.org/assignments/service-names).\n\n* Kubernetes-defined prefixed names:\n * 'kubernetes.io/h2c' - HTTP/2 over cleartext as described in https://www.rfc-editor.org/rfc/rfc7540\n\n* Other protocols should use implementation-defined prefixed names such as mycompany.com/my-custom-protocol.", + "appProtocol": "The application protocol for this port. This is used as a hint for implementations to offer richer behavior for protocols that they understand. This field follows standard Kubernetes label syntax. Valid values are either:\n\n* Un-prefixed protocol names - reserved for IANA standard service names (as per RFC-6335 and https://www.iana.org/assignments/service-names).\n\n* Kubernetes-defined prefixed names:\n * 'kubernetes.io/h2c' - HTTP/2 over cleartext as described in https://www.rfc-editor.org/rfc/rfc7540\n * 'kubernetes.io/ws' - WebSocket over cleartext as described in https://www.rfc-editor.org/rfc/rfc6455\n * 'kubernetes.io/wss' - WebSocket over TLS as described in https://www.rfc-editor.org/rfc/rfc6455\n\n* Other protocols should use implementation-defined prefixed names such as mycompany.com/my-custom-protocol.", } func (EndpointPort) SwaggerDoc() map[string]string { diff --git a/cluster-autoscaler/vendor/k8s.io/api/extensions/v1beta1/generated.pb.go b/cluster-autoscaler/vendor/k8s.io/api/extensions/v1beta1/generated.pb.go index 863ebbc4a722..d967e3810683 100644 --- a/cluster-autoscaler/vendor/k8s.io/api/extensions/v1beta1/generated.pb.go +++ 
b/cluster-autoscaler/vendor/k8s.io/api/extensions/v1beta1/generated.pb.go @@ -1001,38 +1001,10 @@ func (m *NetworkPolicySpec) XXX_DiscardUnknown() { var xxx_messageInfo_NetworkPolicySpec proto.InternalMessageInfo -func (m *NetworkPolicyStatus) Reset() { *m = NetworkPolicyStatus{} } -func (*NetworkPolicyStatus) ProtoMessage() {} -func (*NetworkPolicyStatus) Descriptor() ([]byte, []int) { - return fileDescriptor_cdc93917efc28165, []int{34} -} -func (m *NetworkPolicyStatus) XXX_Unmarshal(b []byte) error { - return m.Unmarshal(b) -} -func (m *NetworkPolicyStatus) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { - b = b[:cap(b)] - n, err := m.MarshalToSizedBuffer(b) - if err != nil { - return nil, err - } - return b[:n], nil -} -func (m *NetworkPolicyStatus) XXX_Merge(src proto.Message) { - xxx_messageInfo_NetworkPolicyStatus.Merge(m, src) -} -func (m *NetworkPolicyStatus) XXX_Size() int { - return m.Size() -} -func (m *NetworkPolicyStatus) XXX_DiscardUnknown() { - xxx_messageInfo_NetworkPolicyStatus.DiscardUnknown(m) -} - -var xxx_messageInfo_NetworkPolicyStatus proto.InternalMessageInfo - func (m *ReplicaSet) Reset() { *m = ReplicaSet{} } func (*ReplicaSet) ProtoMessage() {} func (*ReplicaSet) Descriptor() ([]byte, []int) { - return fileDescriptor_cdc93917efc28165, []int{35} + return fileDescriptor_cdc93917efc28165, []int{34} } func (m *ReplicaSet) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -1060,7 +1032,7 @@ var xxx_messageInfo_ReplicaSet proto.InternalMessageInfo func (m *ReplicaSetCondition) Reset() { *m = ReplicaSetCondition{} } func (*ReplicaSetCondition) ProtoMessage() {} func (*ReplicaSetCondition) Descriptor() ([]byte, []int) { - return fileDescriptor_cdc93917efc28165, []int{36} + return fileDescriptor_cdc93917efc28165, []int{35} } func (m *ReplicaSetCondition) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -1088,7 +1060,7 @@ var xxx_messageInfo_ReplicaSetCondition proto.InternalMessageInfo func (m *ReplicaSetList) Reset() { 
*m = ReplicaSetList{} } func (*ReplicaSetList) ProtoMessage() {} func (*ReplicaSetList) Descriptor() ([]byte, []int) { - return fileDescriptor_cdc93917efc28165, []int{37} + return fileDescriptor_cdc93917efc28165, []int{36} } func (m *ReplicaSetList) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -1116,7 +1088,7 @@ var xxx_messageInfo_ReplicaSetList proto.InternalMessageInfo func (m *ReplicaSetSpec) Reset() { *m = ReplicaSetSpec{} } func (*ReplicaSetSpec) ProtoMessage() {} func (*ReplicaSetSpec) Descriptor() ([]byte, []int) { - return fileDescriptor_cdc93917efc28165, []int{38} + return fileDescriptor_cdc93917efc28165, []int{37} } func (m *ReplicaSetSpec) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -1144,7 +1116,7 @@ var xxx_messageInfo_ReplicaSetSpec proto.InternalMessageInfo func (m *ReplicaSetStatus) Reset() { *m = ReplicaSetStatus{} } func (*ReplicaSetStatus) ProtoMessage() {} func (*ReplicaSetStatus) Descriptor() ([]byte, []int) { - return fileDescriptor_cdc93917efc28165, []int{39} + return fileDescriptor_cdc93917efc28165, []int{38} } func (m *ReplicaSetStatus) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -1172,7 +1144,7 @@ var xxx_messageInfo_ReplicaSetStatus proto.InternalMessageInfo func (m *RollbackConfig) Reset() { *m = RollbackConfig{} } func (*RollbackConfig) ProtoMessage() {} func (*RollbackConfig) Descriptor() ([]byte, []int) { - return fileDescriptor_cdc93917efc28165, []int{40} + return fileDescriptor_cdc93917efc28165, []int{39} } func (m *RollbackConfig) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -1200,7 +1172,7 @@ var xxx_messageInfo_RollbackConfig proto.InternalMessageInfo func (m *RollingUpdateDaemonSet) Reset() { *m = RollingUpdateDaemonSet{} } func (*RollingUpdateDaemonSet) ProtoMessage() {} func (*RollingUpdateDaemonSet) Descriptor() ([]byte, []int) { - return fileDescriptor_cdc93917efc28165, []int{41} + return fileDescriptor_cdc93917efc28165, []int{40} } func (m *RollingUpdateDaemonSet) 
XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -1228,7 +1200,7 @@ var xxx_messageInfo_RollingUpdateDaemonSet proto.InternalMessageInfo func (m *RollingUpdateDeployment) Reset() { *m = RollingUpdateDeployment{} } func (*RollingUpdateDeployment) ProtoMessage() {} func (*RollingUpdateDeployment) Descriptor() ([]byte, []int) { - return fileDescriptor_cdc93917efc28165, []int{42} + return fileDescriptor_cdc93917efc28165, []int{41} } func (m *RollingUpdateDeployment) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -1256,7 +1228,7 @@ var xxx_messageInfo_RollingUpdateDeployment proto.InternalMessageInfo func (m *Scale) Reset() { *m = Scale{} } func (*Scale) ProtoMessage() {} func (*Scale) Descriptor() ([]byte, []int) { - return fileDescriptor_cdc93917efc28165, []int{43} + return fileDescriptor_cdc93917efc28165, []int{42} } func (m *Scale) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -1284,7 +1256,7 @@ var xxx_messageInfo_Scale proto.InternalMessageInfo func (m *ScaleSpec) Reset() { *m = ScaleSpec{} } func (*ScaleSpec) ProtoMessage() {} func (*ScaleSpec) Descriptor() ([]byte, []int) { - return fileDescriptor_cdc93917efc28165, []int{44} + return fileDescriptor_cdc93917efc28165, []int{43} } func (m *ScaleSpec) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -1312,7 +1284,7 @@ var xxx_messageInfo_ScaleSpec proto.InternalMessageInfo func (m *ScaleStatus) Reset() { *m = ScaleStatus{} } func (*ScaleStatus) ProtoMessage() {} func (*ScaleStatus) Descriptor() ([]byte, []int) { - return fileDescriptor_cdc93917efc28165, []int{45} + return fileDescriptor_cdc93917efc28165, []int{44} } func (m *ScaleStatus) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -1373,7 +1345,6 @@ func init() { proto.RegisterType((*NetworkPolicyPeer)(nil), "k8s.io.api.extensions.v1beta1.NetworkPolicyPeer") proto.RegisterType((*NetworkPolicyPort)(nil), "k8s.io.api.extensions.v1beta1.NetworkPolicyPort") proto.RegisterType((*NetworkPolicySpec)(nil), 
"k8s.io.api.extensions.v1beta1.NetworkPolicySpec") - proto.RegisterType((*NetworkPolicyStatus)(nil), "k8s.io.api.extensions.v1beta1.NetworkPolicyStatus") proto.RegisterType((*ReplicaSet)(nil), "k8s.io.api.extensions.v1beta1.ReplicaSet") proto.RegisterType((*ReplicaSetCondition)(nil), "k8s.io.api.extensions.v1beta1.ReplicaSetCondition") proto.RegisterType((*ReplicaSetList)(nil), "k8s.io.api.extensions.v1beta1.ReplicaSetList") @@ -1393,188 +1364,186 @@ func init() { } var fileDescriptor_cdc93917efc28165 = []byte{ - // 2890 bytes of a gzipped FileDescriptorProto - 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xec, 0x5b, 0xcf, 0x6f, 0x24, 0x47, - 0xf5, 0xdf, 0x9e, 0xf1, 0xd8, 0xe3, 0xe7, 0xb5, 0xbd, 0x5b, 0xeb, 0xac, 0x1d, 0xef, 0x37, 0x76, - 0xd4, 0x5f, 0x11, 0x36, 0x61, 0x33, 0xc3, 0x6e, 0x92, 0x25, 0x3f, 0xa4, 0x84, 0x1d, 0xef, 0x26, - 0xeb, 0xc4, 0x1e, 0x4f, 0x6a, 0xc6, 0x09, 0x8a, 0x08, 0xd0, 0xee, 0x29, 0x8f, 0x3b, 0xee, 0xe9, - 0x1e, 0x75, 0xd7, 0x98, 0x35, 0x27, 0x10, 0x5c, 0x72, 0x82, 0x4b, 0x20, 0x47, 0x10, 0x12, 0x57, - 0xae, 0x1c, 0x42, 0x04, 0x22, 0x48, 0x2b, 0xc4, 0x21, 0x12, 0x07, 0x72, 0xb2, 0x88, 0x73, 0x42, - 0xfc, 0x03, 0x68, 0x4f, 0xa8, 0x7e, 0x74, 0xf5, 0x6f, 0xbb, 0xc7, 0x38, 0x16, 0x41, 0x9c, 0x3c, - 0x5d, 0xef, 0xbd, 0x4f, 0xbd, 0xaa, 0x7a, 0xf5, 0xde, 0xa7, 0xba, 0xda, 0xf0, 0xf2, 0xee, 0xb3, - 0x7e, 0xcd, 0x72, 0xeb, 0xbb, 0xc3, 0x2d, 0xe2, 0x39, 0x84, 0x12, 0xbf, 0xbe, 0x47, 0x9c, 0xae, - 0xeb, 0xd5, 0xa5, 0xc0, 0x18, 0x58, 0x75, 0x72, 0x8f, 0x12, 0xc7, 0xb7, 0x5c, 0xc7, 0xaf, 0xef, - 0x5d, 0xdf, 0x22, 0xd4, 0xb8, 0x5e, 0xef, 0x11, 0x87, 0x78, 0x06, 0x25, 0xdd, 0xda, 0xc0, 0x73, - 0xa9, 0x8b, 0x1e, 0x11, 0xea, 0x35, 0x63, 0x60, 0xd5, 0x42, 0xf5, 0x9a, 0x54, 0x5f, 0x7c, 0xb2, - 0x67, 0xd1, 0x9d, 0xe1, 0x56, 0xcd, 0x74, 0xfb, 0xf5, 0x9e, 0xdb, 0x73, 0xeb, 0xdc, 0x6a, 0x6b, - 0xb8, 0xcd, 0x9f, 0xf8, 0x03, 0xff, 0x25, 0xd0, 0x16, 0xf5, 0x48, 0xe7, 0xa6, 0xeb, 0x91, 0xfa, - 0x5e, 0xaa, 0xc7, 0xc5, 0xa7, 0x43, 0x9d, 0xbe, 0x61, 0xee, 
0x58, 0x0e, 0xf1, 0xf6, 0xeb, 0x83, - 0xdd, 0x1e, 0x6b, 0xf0, 0xeb, 0x7d, 0x42, 0x8d, 0x2c, 0xab, 0x7a, 0x9e, 0x95, 0x37, 0x74, 0xa8, - 0xd5, 0x27, 0x29, 0x83, 0x9b, 0xc7, 0x19, 0xf8, 0xe6, 0x0e, 0xe9, 0x1b, 0x29, 0xbb, 0xa7, 0xf2, - 0xec, 0x86, 0xd4, 0xb2, 0xeb, 0x96, 0x43, 0x7d, 0xea, 0x25, 0x8d, 0xf4, 0xf7, 0x4a, 0x30, 0x79, - 0xdb, 0x20, 0x7d, 0xd7, 0x69, 0x13, 0x8a, 0xbe, 0x03, 0x55, 0x36, 0x8c, 0xae, 0x41, 0x8d, 0x05, - 0xed, 0x51, 0xed, 0xea, 0xd4, 0x8d, 0xaf, 0xd6, 0xc2, 0x69, 0x56, 0xa8, 0xb5, 0xc1, 0x6e, 0x8f, - 0x35, 0xf8, 0x35, 0xa6, 0x5d, 0xdb, 0xbb, 0x5e, 0xdb, 0xd8, 0x7a, 0x87, 0x98, 0x74, 0x9d, 0x50, - 0xa3, 0x81, 0xee, 0x1f, 0x2c, 0x9f, 0x3b, 0x3c, 0x58, 0x86, 0xb0, 0x0d, 0x2b, 0x54, 0xd4, 0x84, - 0x31, 0x7f, 0x40, 0xcc, 0x85, 0x12, 0x47, 0xbf, 0x56, 0x3b, 0x72, 0x11, 0x6b, 0xca, 0xb3, 0xf6, - 0x80, 0x98, 0x8d, 0xf3, 0x12, 0x79, 0x8c, 0x3d, 0x61, 0x8e, 0x83, 0xde, 0x80, 0x71, 0x9f, 0x1a, - 0x74, 0xe8, 0x2f, 0x94, 0x39, 0x62, 0xad, 0x30, 0x22, 0xb7, 0x6a, 0xcc, 0x48, 0xcc, 0x71, 0xf1, - 0x8c, 0x25, 0x9a, 0xfe, 0xf7, 0x12, 0x20, 0xa5, 0xbb, 0xe2, 0x3a, 0x5d, 0x8b, 0x5a, 0xae, 0x83, - 0x9e, 0x87, 0x31, 0xba, 0x3f, 0x20, 0x7c, 0x72, 0x26, 0x1b, 0x8f, 0x05, 0x0e, 0x75, 0xf6, 0x07, - 0xe4, 0xc1, 0xc1, 0xf2, 0xe5, 0xb4, 0x05, 0x93, 0x60, 0x6e, 0x83, 0xd6, 0x94, 0xab, 0x25, 0x6e, - 0xfd, 0x74, 0xbc, 0xeb, 0x07, 0x07, 0xcb, 0x19, 0x41, 0x58, 0x53, 0x48, 0x71, 0x07, 0xd1, 0x1e, - 0x20, 0xdb, 0xf0, 0x69, 0xc7, 0x33, 0x1c, 0x5f, 0xf4, 0x64, 0xf5, 0x89, 0x9c, 0x84, 0x27, 0x8a, - 0x2d, 0x1a, 0xb3, 0x68, 0x2c, 0x4a, 0x2f, 0xd0, 0x5a, 0x0a, 0x0d, 0x67, 0xf4, 0x80, 0x1e, 0x83, - 0x71, 0x8f, 0x18, 0xbe, 0xeb, 0x2c, 0x8c, 0xf1, 0x51, 0xa8, 0x09, 0xc4, 0xbc, 0x15, 0x4b, 0x29, - 0x7a, 0x1c, 0x26, 0xfa, 0xc4, 0xf7, 0x8d, 0x1e, 0x59, 0xa8, 0x70, 0xc5, 0x59, 0xa9, 0x38, 0xb1, - 0x2e, 0x9a, 0x71, 0x20, 0xd7, 0x3f, 0xd0, 0x60, 0x5a, 0xcd, 0xdc, 0x9a, 0xe5, 0x53, 0xf4, 0xcd, - 0x54, 0x1c, 0xd6, 0x8a, 0x0d, 0x89, 0x59, 0xf3, 0x28, 0xbc, 0x20, 0x7b, 0xab, 0x06, 0x2d, 0x91, - 
0x18, 0x5c, 0x87, 0x8a, 0x45, 0x49, 0x9f, 0xad, 0x43, 0xf9, 0xea, 0xd4, 0x8d, 0xab, 0x45, 0x43, - 0xa6, 0x31, 0x2d, 0x41, 0x2b, 0xab, 0xcc, 0x1c, 0x0b, 0x14, 0xfd, 0xa7, 0x63, 0x11, 0xf7, 0x59, - 0x68, 0xa2, 0xb7, 0xa1, 0xea, 0x13, 0x9b, 0x98, 0xd4, 0xf5, 0xa4, 0xfb, 0x4f, 0x15, 0x74, 0xdf, - 0xd8, 0x22, 0x76, 0x5b, 0x9a, 0x36, 0xce, 0x33, 0xff, 0x83, 0x27, 0xac, 0x20, 0xd1, 0xeb, 0x50, - 0xa5, 0xa4, 0x3f, 0xb0, 0x0d, 0x4a, 0xe4, 0x3e, 0xfa, 0xff, 0xe8, 0x10, 0x58, 0xe4, 0x30, 0xb0, - 0x96, 0xdb, 0xed, 0x48, 0x35, 0xbe, 0x7d, 0xd4, 0x94, 0x04, 0xad, 0x58, 0xc1, 0xa0, 0x3d, 0x98, - 0x19, 0x0e, 0xba, 0x4c, 0x93, 0xb2, 0xec, 0xd0, 0xdb, 0x97, 0x91, 0x74, 0xb3, 0xe8, 0xdc, 0x6c, - 0xc6, 0xac, 0x1b, 0x97, 0x65, 0x5f, 0x33, 0xf1, 0x76, 0x9c, 0xe8, 0x05, 0xdd, 0x82, 0xd9, 0xbe, - 0xe5, 0x60, 0x62, 0x74, 0xf7, 0xdb, 0xc4, 0x74, 0x9d, 0xae, 0xcf, 0xc3, 0xaa, 0xd2, 0x98, 0x97, - 0x00, 0xb3, 0xeb, 0x71, 0x31, 0x4e, 0xea, 0xa3, 0x57, 0x01, 0x05, 0xc3, 0x78, 0x45, 0x24, 0x37, - 0xcb, 0x75, 0x78, 0xcc, 0x95, 0xc3, 0xe0, 0xee, 0xa4, 0x34, 0x70, 0x86, 0x15, 0x5a, 0x83, 0x39, - 0x8f, 0xec, 0x59, 0x6c, 0x8c, 0x77, 0x2d, 0x9f, 0xba, 0xde, 0xfe, 0x9a, 0xd5, 0xb7, 0xe8, 0xc2, - 0x38, 0xf7, 0x69, 0xe1, 0xf0, 0x60, 0x79, 0x0e, 0x67, 0xc8, 0x71, 0xa6, 0x95, 0xfe, 0xb3, 0x71, - 0x98, 0x4d, 0xe4, 0x1b, 0xf4, 0x06, 0x5c, 0x36, 0x87, 0x9e, 0x47, 0x1c, 0xda, 0x1c, 0xf6, 0xb7, - 0x88, 0xd7, 0x36, 0x77, 0x48, 0x77, 0x68, 0x93, 0x2e, 0x0f, 0x94, 0x4a, 0x63, 0x49, 0x7a, 0x7c, - 0x79, 0x25, 0x53, 0x0b, 0xe7, 0x58, 0xb3, 0x59, 0x70, 0x78, 0xd3, 0xba, 0xe5, 0xfb, 0x0a, 0xb3, - 0xc4, 0x31, 0xd5, 0x2c, 0x34, 0x53, 0x1a, 0x38, 0xc3, 0x8a, 0xf9, 0xd8, 0x25, 0xbe, 0xe5, 0x91, - 0x6e, 0xd2, 0xc7, 0x72, 0xdc, 0xc7, 0xdb, 0x99, 0x5a, 0x38, 0xc7, 0x1a, 0x3d, 0x03, 0x53, 0xa2, - 0x37, 0xbe, 0x7e, 0x72, 0xa1, 0x2f, 0x49, 0xb0, 0xa9, 0x66, 0x28, 0xc2, 0x51, 0x3d, 0x36, 0x34, - 0x77, 0xcb, 0x27, 0xde, 0x1e, 0xe9, 0xe6, 0x2f, 0xf0, 0x46, 0x4a, 0x03, 0x67, 0x58, 0xb1, 0xa1, - 0x89, 0x08, 0x4c, 0x0d, 0x6d, 0x3c, 
0x3e, 0xb4, 0xcd, 0x4c, 0x2d, 0x9c, 0x63, 0xcd, 0xe2, 0x58, - 0xb8, 0x7c, 0x6b, 0xcf, 0xb0, 0x6c, 0x63, 0xcb, 0x26, 0x0b, 0x13, 0xf1, 0x38, 0x6e, 0xc6, 0xc5, - 0x38, 0xa9, 0x8f, 0x5e, 0x81, 0x8b, 0xa2, 0x69, 0xd3, 0x31, 0x14, 0x48, 0x95, 0x83, 0x3c, 0x2c, - 0x41, 0x2e, 0x36, 0x93, 0x0a, 0x38, 0x6d, 0x83, 0x9e, 0x87, 0x19, 0xd3, 0xb5, 0x6d, 0x1e, 0x8f, - 0x2b, 0xee, 0xd0, 0xa1, 0x0b, 0x93, 0x1c, 0x05, 0xb1, 0xfd, 0xb8, 0x12, 0x93, 0xe0, 0x84, 0x26, - 0x22, 0x00, 0x66, 0x50, 0x70, 0xfc, 0x05, 0xe0, 0xf9, 0xf1, 0x7a, 0xd1, 0x1c, 0xa0, 0x4a, 0x55, - 0xc8, 0x01, 0x54, 0x93, 0x8f, 0x23, 0xc0, 0xfa, 0x9f, 0x34, 0x98, 0xcf, 0x49, 0x1d, 0xe8, 0xa5, - 0x58, 0x89, 0xfd, 0x4a, 0xa2, 0xc4, 0x5e, 0xc9, 0x31, 0x8b, 0xd4, 0x59, 0x07, 0xa6, 0x3d, 0x36, - 0x2a, 0xa7, 0x27, 0x54, 0x64, 0x8e, 0x7c, 0xe6, 0x98, 0x61, 0xe0, 0xa8, 0x4d, 0x98, 0xf3, 0x2f, - 0x1e, 0x1e, 0x2c, 0x4f, 0xc7, 0x64, 0x38, 0x0e, 0xaf, 0xbf, 0x5f, 0x02, 0xb8, 0x4d, 0x06, 0xb6, - 0xbb, 0xdf, 0x27, 0xce, 0x59, 0x70, 0xa8, 0x8d, 0x18, 0x87, 0x7a, 0xf2, 0xb8, 0xe5, 0x51, 0xae, - 0xe5, 0x92, 0xa8, 0x37, 0x13, 0x24, 0xaa, 0x5e, 0x1c, 0xf2, 0x68, 0x16, 0xf5, 0xd7, 0x32, 0x5c, - 0x0a, 0x95, 0x43, 0x1a, 0xf5, 0x42, 0x6c, 0x8d, 0xbf, 0x9c, 0x58, 0xe3, 0xf9, 0x0c, 0x93, 0xcf, - 0x8d, 0x47, 0xbd, 0x03, 0x33, 0x8c, 0xe5, 0x88, 0xb5, 0xe4, 0x1c, 0x6a, 0x7c, 0x64, 0x0e, 0xa5, - 0xaa, 0xdd, 0x5a, 0x0c, 0x09, 0x27, 0x90, 0x73, 0x38, 0xdb, 0xc4, 0x17, 0x91, 0xb3, 0x7d, 0xa8, - 0xc1, 0x4c, 0xb8, 0x4c, 0x67, 0x40, 0xda, 0x9a, 0x71, 0xd2, 0xf6, 0x78, 0xe1, 0x10, 0xcd, 0x61, - 0x6d, 0xff, 0x64, 0x04, 0x5f, 0x29, 0xb1, 0x0d, 0xbe, 0x65, 0x98, 0xbb, 0xe8, 0x51, 0x18, 0x73, - 0x8c, 0x7e, 0x10, 0x99, 0x6a, 0xb3, 0x34, 0x8d, 0x3e, 0xc1, 0x5c, 0x82, 0xde, 0xd3, 0x00, 0xc9, - 0x2a, 0x70, 0xcb, 0x71, 0x5c, 0x6a, 0x88, 0x5c, 0x29, 0xdc, 0x5a, 0x2d, 0xec, 0x56, 0xd0, 0x63, - 0x6d, 0x33, 0x85, 0x75, 0xc7, 0xa1, 0xde, 0x7e, 0xb8, 0xc8, 0x69, 0x05, 0x9c, 0xe1, 0x00, 0x32, - 0x00, 0x3c, 0x89, 0xd9, 0x71, 0xe5, 0x46, 0x7e, 0xb2, 0x40, 0xce, 0x63, 
0x06, 0x2b, 0xae, 0xb3, - 0x6d, 0xf5, 0xc2, 0xb4, 0x83, 0x15, 0x10, 0x8e, 0x80, 0x2e, 0xde, 0x81, 0xf9, 0x1c, 0x6f, 0xd1, - 0x05, 0x28, 0xef, 0x92, 0x7d, 0x31, 0x6d, 0x98, 0xfd, 0x44, 0x73, 0x50, 0xd9, 0x33, 0xec, 0xa1, - 0x48, 0xbf, 0x93, 0x58, 0x3c, 0x3c, 0x5f, 0x7a, 0x56, 0xd3, 0x3f, 0xa8, 0x44, 0x63, 0x87, 0x33, - 0xe6, 0xab, 0x50, 0xf5, 0xc8, 0xc0, 0xb6, 0x4c, 0xc3, 0x97, 0x44, 0x88, 0x93, 0x5f, 0x2c, 0xdb, - 0xb0, 0x92, 0xc6, 0xb8, 0x75, 0xe9, 0xf3, 0xe5, 0xd6, 0xe5, 0xd3, 0xe1, 0xd6, 0xdf, 0x86, 0xaa, - 0x1f, 0xb0, 0xea, 0x31, 0x0e, 0x79, 0x7d, 0x84, 0xfc, 0x2a, 0x09, 0xb5, 0xea, 0x40, 0x51, 0x69, - 0x05, 0x9a, 0x45, 0xa2, 0x2b, 0x23, 0x92, 0xe8, 0x53, 0x25, 0xbe, 0x2c, 0xdf, 0x0c, 0x8c, 0xa1, - 0x4f, 0xba, 0x3c, 0xb7, 0x55, 0xc3, 0x7c, 0xd3, 0xe2, 0xad, 0x58, 0x4a, 0xd1, 0xdb, 0xb1, 0x90, - 0xad, 0x9e, 0x24, 0x64, 0x67, 0xf2, 0xc3, 0x15, 0x6d, 0xc2, 0xfc, 0xc0, 0x73, 0x7b, 0x1e, 0xf1, - 0xfd, 0xdb, 0xc4, 0xe8, 0xda, 0x96, 0x43, 0x82, 0xf9, 0x11, 0x8c, 0xe8, 0xca, 0xe1, 0xc1, 0xf2, - 0x7c, 0x2b, 0x5b, 0x05, 0xe7, 0xd9, 0xea, 0xf7, 0xc7, 0xe0, 0x42, 0xb2, 0x02, 0xe6, 0x90, 0x54, - 0xed, 0x44, 0x24, 0xf5, 0x5a, 0x64, 0x33, 0x08, 0x06, 0xaf, 0x56, 0x3f, 0x63, 0x43, 0xdc, 0x82, - 0x59, 0x99, 0x0d, 0x02, 0xa1, 0xa4, 0xe9, 0x6a, 0xf5, 0x37, 0xe3, 0x62, 0x9c, 0xd4, 0x47, 0x2f, - 0xc0, 0xb4, 0xc7, 0x79, 0x77, 0x00, 0x20, 0xb8, 0xeb, 0x43, 0x12, 0x60, 0x1a, 0x47, 0x85, 0x38, - 0xae, 0xcb, 0x78, 0x6b, 0x48, 0x47, 0x03, 0x80, 0xb1, 0x38, 0x6f, 0xbd, 0x95, 0x54, 0xc0, 0x69, - 0x1b, 0xb4, 0x0e, 0x97, 0x86, 0x4e, 0x1a, 0x4a, 0x84, 0xf2, 0x15, 0x09, 0x75, 0x69, 0x33, 0xad, - 0x82, 0xb3, 0xec, 0xd0, 0x76, 0x8c, 0xca, 0x8e, 0xf3, 0xf4, 0x7c, 0xa3, 0xf0, 0xc6, 0x2b, 0xcc, - 0x65, 0x33, 0xe8, 0x76, 0xb5, 0x28, 0xdd, 0xd6, 0x7f, 0xaf, 0x45, 0x8b, 0x90, 0xa2, 0xc0, 0xc7, - 0xbd, 0x65, 0x4a, 0x59, 0x44, 0xd8, 0x91, 0x9b, 0xcd, 0x7e, 0x6f, 0x8e, 0xc4, 0x7e, 0xc3, 0xe2, - 0x79, 0x3c, 0xfd, 0xfd, 0x83, 0x06, 0xb3, 0x77, 0x3b, 0x9d, 0xd6, 0xaa, 0xc3, 0x77, 0x4b, 0xcb, - 0xa0, 0x3b, 
0xac, 0x8a, 0x0e, 0x0c, 0xba, 0x93, 0xac, 0xa2, 0x4c, 0x86, 0xb9, 0x04, 0x3d, 0x0d, - 0x55, 0xf6, 0x97, 0x39, 0xce, 0xc3, 0x75, 0x92, 0x27, 0x99, 0x6a, 0x4b, 0xb6, 0x3d, 0x88, 0xfc, - 0xc6, 0x4a, 0x13, 0x7d, 0x03, 0x26, 0xd8, 0xde, 0x26, 0x4e, 0xb7, 0x20, 0xf9, 0x95, 0x4e, 0x35, - 0x84, 0x51, 0xc8, 0x67, 0x64, 0x03, 0x0e, 0xe0, 0xf4, 0x5d, 0x98, 0x8b, 0x0c, 0x02, 0x0f, 0x6d, - 0xf2, 0x06, 0xab, 0x57, 0xa8, 0x0d, 0x15, 0xd6, 0x3b, 0xab, 0x4a, 0xe5, 0x02, 0xaf, 0x17, 0x13, - 0x13, 0x11, 0x72, 0x0f, 0xf6, 0xe4, 0x63, 0x81, 0xa5, 0x6f, 0xc0, 0xc4, 0x6a, 0xab, 0x61, 0xbb, - 0x82, 0x6f, 0x98, 0x56, 0xd7, 0x4b, 0xce, 0xd4, 0xca, 0xea, 0x6d, 0x8c, 0xb9, 0x04, 0xe9, 0x30, - 0x4e, 0xee, 0x99, 0x64, 0x40, 0x39, 0xc5, 0x98, 0x6c, 0x00, 0x4b, 0xa4, 0x77, 0x78, 0x0b, 0x96, - 0x12, 0xfd, 0xc7, 0x25, 0x98, 0x90, 0xdd, 0x9e, 0xc1, 0xf9, 0x63, 0x2d, 0x76, 0xfe, 0x78, 0xa2, - 0xd8, 0x12, 0xe4, 0x1e, 0x3e, 0x3a, 0x89, 0xc3, 0xc7, 0xb5, 0x82, 0x78, 0x47, 0x9f, 0x3c, 0xde, - 0x2d, 0xc1, 0x4c, 0x7c, 0xf1, 0xd1, 0x33, 0x30, 0xc5, 0x52, 0xad, 0x65, 0x92, 0x66, 0xc8, 0xf0, - 0xd4, 0xeb, 0x87, 0x76, 0x28, 0xc2, 0x51, 0x3d, 0xd4, 0x53, 0x66, 0x2d, 0xd7, 0xa3, 0x72, 0xd0, - 0xf9, 0x53, 0x3a, 0xa4, 0x96, 0x5d, 0x13, 0x2f, 0xdb, 0x6b, 0xab, 0x0e, 0xdd, 0xf0, 0xda, 0xd4, - 0xb3, 0x9c, 0x5e, 0xaa, 0x23, 0x06, 0x86, 0xa3, 0xc8, 0xe8, 0x4d, 0x96, 0xf6, 0x7d, 0x77, 0xe8, - 0x99, 0x24, 0x8b, 0xbe, 0x05, 0xd4, 0x83, 0x6d, 0x84, 0xee, 0x9a, 0x6b, 0x1a, 0xb6, 0x58, 0x1c, - 0x4c, 0xb6, 0x89, 0x47, 0x1c, 0x93, 0x04, 0x94, 0x49, 0x40, 0x60, 0x05, 0xa6, 0xff, 0x46, 0x83, - 0x29, 0x39, 0x17, 0x67, 0x40, 0xd4, 0x5f, 0x8b, 0x13, 0xf5, 0xc7, 0x0a, 0xee, 0xd0, 0x6c, 0x96, - 0xfe, 0x5b, 0x0d, 0x16, 0x03, 0xd7, 0x5d, 0xa3, 0xdb, 0x30, 0x6c, 0xc3, 0x31, 0x89, 0x17, 0xc4, - 0xfa, 0x22, 0x94, 0xac, 0x81, 0x5c, 0x49, 0x90, 0x00, 0xa5, 0xd5, 0x16, 0x2e, 0x59, 0x03, 0x56, - 0x45, 0x77, 0x5c, 0x9f, 0x72, 0x36, 0x2f, 0x0e, 0x8a, 0xca, 0xeb, 0xbb, 0xb2, 0x1d, 0x2b, 0x0d, - 0xb4, 0x09, 0x95, 0x81, 0xeb, 0x51, 0x56, 0xb9, 
0xca, 0x89, 0xf5, 0x3d, 0xc2, 0x6b, 0xb6, 0x6e, - 0x32, 0x10, 0xc3, 0x9d, 0xce, 0x60, 0xb0, 0x40, 0xd3, 0x7f, 0xa0, 0xc1, 0xc3, 0x19, 0xfe, 0x4b, - 0xd2, 0xd0, 0x85, 0x09, 0x4b, 0x08, 0x65, 0x7a, 0x79, 0xae, 0x58, 0xb7, 0x19, 0x53, 0x11, 0xa6, - 0xb6, 0x20, 0x85, 0x05, 0xd0, 0xfa, 0x2f, 0x35, 0xb8, 0x98, 0xf2, 0x97, 0xa7, 0x68, 0x16, 0xcf, - 0x92, 0x6d, 0xab, 0x14, 0xcd, 0xc2, 0x92, 0x4b, 0xd0, 0x6b, 0x50, 0xe5, 0x77, 0x44, 0xa6, 0x6b, - 0xcb, 0x09, 0xac, 0x07, 0x13, 0xd8, 0x92, 0xed, 0x0f, 0x0e, 0x96, 0xaf, 0x64, 0x9c, 0xb5, 0x03, - 0x31, 0x56, 0x00, 0x68, 0x19, 0x2a, 0xc4, 0xf3, 0x5c, 0x4f, 0x26, 0xfb, 0x49, 0x36, 0x53, 0x77, - 0x58, 0x03, 0x16, 0xed, 0xfa, 0xaf, 0xc2, 0x20, 0x65, 0xd9, 0x97, 0xf9, 0xc7, 0x16, 0x27, 0x99, - 0x18, 0xd9, 0xd2, 0x61, 0x2e, 0x41, 0x43, 0xb8, 0x60, 0x25, 0xd2, 0xb5, 0xdc, 0x9d, 0xf5, 0x62, - 0xd3, 0xa8, 0xcc, 0x1a, 0x0b, 0x12, 0xfe, 0x42, 0x52, 0x82, 0x53, 0x5d, 0xe8, 0x04, 0x52, 0x5a, - 0xe8, 0x75, 0x18, 0xdb, 0xa1, 0x74, 0x90, 0xf1, 0xb2, 0xff, 0x98, 0x22, 0x11, 0xba, 0x50, 0xe5, - 0xa3, 0xeb, 0x74, 0x5a, 0x98, 0x43, 0xe9, 0xbf, 0x2b, 0xa9, 0xf9, 0xe0, 0x27, 0xa4, 0xaf, 0xab, - 0xd1, 0xae, 0xd8, 0x86, 0xef, 0xf3, 0x14, 0x26, 0x4e, 0xf3, 0x73, 0x11, 0xc7, 0x95, 0x0c, 0xa7, - 0xb4, 0x51, 0x27, 0x2c, 0x9e, 0xda, 0x49, 0x8a, 0xe7, 0x54, 0x56, 0xe1, 0x44, 0x77, 0xa1, 0x4c, - 0xed, 0xa2, 0xa7, 0x72, 0x89, 0xd8, 0x59, 0x6b, 0x37, 0xa6, 0xe4, 0x94, 0x97, 0x3b, 0x6b, 0x6d, - 0xcc, 0x20, 0xd0, 0x06, 0x54, 0xbc, 0xa1, 0x4d, 0x58, 0x1d, 0x28, 0x17, 0xaf, 0x2b, 0x6c, 0x06, - 0xc3, 0xcd, 0xc7, 0x9e, 0x7c, 0x2c, 0x70, 0xf4, 0x1f, 0x6a, 0x30, 0x1d, 0xab, 0x16, 0xc8, 0x83, - 0xf3, 0x76, 0x64, 0xef, 0xc8, 0x79, 0x78, 0x76, 0xf4, 0x5d, 0x27, 0x37, 0xfd, 0x9c, 0xec, 0xf7, - 0x7c, 0x54, 0x86, 0x63, 0x7d, 0xe8, 0x06, 0x40, 0x38, 0x6c, 0xb6, 0x0f, 0x58, 0xf0, 0x8a, 0x0d, - 0x2f, 0xf7, 0x01, 0x8b, 0x69, 0x1f, 0x8b, 0x76, 0x74, 0x03, 0xc0, 0x27, 0xa6, 0x47, 0x68, 0x33, - 0x4c, 0x5c, 0xaa, 0x1c, 0xb7, 0x95, 0x04, 0x47, 0xb4, 0xf4, 0x5f, 0x94, 0x60, 0xba, 
0x49, 0xe8, - 0x77, 0x5d, 0x6f, 0xb7, 0xe5, 0xda, 0x96, 0xb9, 0x7f, 0x06, 0x24, 0x00, 0xc7, 0x48, 0xc0, 0x71, - 0xf9, 0x32, 0xe6, 0x5d, 0x2e, 0x15, 0x78, 0x2b, 0x41, 0x05, 0x6e, 0x8c, 0x84, 0x7a, 0x34, 0x21, - 0xf8, 0x50, 0x83, 0xf9, 0x98, 0xfe, 0x9d, 0x30, 0xd7, 0xa8, 0xe4, 0xaf, 0x15, 0x4a, 0xfe, 0x31, - 0x18, 0x96, 0x30, 0xb3, 0x93, 0x3f, 0x5a, 0x83, 0x12, 0x75, 0xe5, 0xce, 0x18, 0x0d, 0x93, 0x10, - 0x2f, 0xac, 0x67, 0x1d, 0x17, 0x97, 0xa8, 0xab, 0xff, 0x51, 0x83, 0x85, 0x98, 0x56, 0x34, 0x5b, - 0x7e, 0x4e, 0x23, 0xc0, 0x30, 0xb6, 0xed, 0xb9, 0xfd, 0x13, 0x8f, 0x41, 0x2d, 0xf2, 0xcb, 0x9e, - 0xdb, 0xc7, 0x1c, 0x4b, 0xff, 0x48, 0x83, 0x8b, 0x31, 0xcd, 0x33, 0xe0, 0x24, 0xaf, 0xc7, 0x39, - 0xc9, 0xb5, 0x51, 0x06, 0x92, 0xc3, 0x4c, 0x3e, 0x2a, 0x25, 0x86, 0xc1, 0x06, 0x8c, 0xb6, 0x61, - 0x6a, 0xe0, 0x76, 0xdb, 0xa7, 0x70, 0xf9, 0x3b, 0xcb, 0xb8, 0x62, 0x2b, 0xc4, 0xc2, 0x51, 0x60, - 0x74, 0x0f, 0x2e, 0x32, 0xda, 0xe2, 0x0f, 0x0c, 0x93, 0xb4, 0x4f, 0xe1, 0x75, 0xd8, 0x43, 0xfc, - 0x76, 0x29, 0x89, 0x88, 0xd3, 0x9d, 0xa0, 0x75, 0x98, 0xb0, 0x06, 0xfc, 0xec, 0x22, 0x37, 0xe9, - 0xb1, 0x04, 0x4f, 0x9c, 0x74, 0x44, 0xf9, 0x90, 0x0f, 0x38, 0xc0, 0xd0, 0xff, 0x92, 0x8c, 0x06, - 0x4e, 0x85, 0x5f, 0x89, 0x50, 0x0f, 0x79, 0x0f, 0x74, 0x32, 0xda, 0xd1, 0x94, 0x2c, 0xe7, 0xa4, - 0xac, 0xbd, 0x9a, 0xe0, 0x44, 0x5f, 0x82, 0x09, 0xe2, 0x74, 0xf9, 0x41, 0x40, 0xbc, 0x64, 0xe1, - 0xa3, 0xba, 0x23, 0x9a, 0x70, 0x20, 0xd3, 0x7f, 0x54, 0x4e, 0x8c, 0x8a, 0x97, 0xf0, 0x77, 0x4e, - 0x2d, 0x38, 0xd4, 0x61, 0x22, 0x37, 0x40, 0xb6, 0x42, 0x6a, 0x29, 0x62, 0xfe, 0x6b, 0xa3, 0xc4, - 0x7c, 0xb4, 0xb6, 0xe6, 0x12, 0x4b, 0xf4, 0x2d, 0x18, 0x27, 0xa2, 0x0b, 0x51, 0xb1, 0x6f, 0x8e, - 0xd2, 0x45, 0x98, 0x7e, 0xc3, 0x94, 0x2d, 0xdb, 0x24, 0x2a, 0x7a, 0x89, 0xcd, 0x17, 0xd3, 0x65, - 0x47, 0x1e, 0xc1, 0xcc, 0x27, 0x1b, 0x8f, 0x88, 0x61, 0xab, 0xe6, 0x07, 0x07, 0xcb, 0x10, 0x3e, - 0xe2, 0xa8, 0x85, 0xfe, 0x3d, 0xb8, 0x94, 0x51, 0x22, 0x90, 0x19, 0x7b, 0x33, 0x24, 0x32, 0x66, - 0xbd, 0xd8, 0x32, 0x14, 
0xbf, 0xe2, 0x7c, 0xbf, 0x04, 0x20, 0xdf, 0x45, 0x9d, 0xcd, 0x97, 0x55, - 0xa3, 0xdd, 0x0a, 0x86, 0xae, 0x9d, 0xda, 0xad, 0x60, 0x04, 0xf2, 0xe8, 0x52, 0xfc, 0x8f, 0x12, - 0x5c, 0x0a, 0x95, 0x0b, 0xdf, 0x0a, 0x66, 0x98, 0xfc, 0xef, 0xeb, 0xaa, 0x62, 0x37, 0x75, 0xe1, - 0xd4, 0xfd, 0xe7, 0xdd, 0xd4, 0x85, 0xbe, 0xe5, 0x54, 0xda, 0x5f, 0x97, 0xa2, 0x03, 0x18, 0xf1, - 0xba, 0xe8, 0x14, 0x3e, 0x30, 0xfa, 0xc2, 0xdd, 0x38, 0xe9, 0x7f, 0x2e, 0xc3, 0x85, 0xe4, 0x6e, - 0x8c, 0xdd, 0x2a, 0x68, 0xc7, 0xde, 0x2a, 0xb4, 0x60, 0x6e, 0x7b, 0x68, 0xdb, 0xfb, 0x7c, 0x0c, - 0x91, 0xab, 0x05, 0x71, 0x1f, 0xf1, 0x7f, 0xd2, 0x72, 0xee, 0xe5, 0x0c, 0x1d, 0x9c, 0x69, 0x99, - 0xbe, 0x64, 0x18, 0xfb, 0x77, 0x2f, 0x19, 0x2a, 0x27, 0xb8, 0x64, 0xc8, 0xbe, 0xa7, 0x29, 0x9f, - 0xe8, 0x9e, 0xe6, 0x24, 0x37, 0x0c, 0x19, 0x49, 0xec, 0xd8, 0x52, 0xf2, 0x22, 0xcc, 0xc4, 0x6f, - 0xbd, 0xc4, 0x5a, 0x8a, 0x8b, 0x37, 0x79, 0xc7, 0x14, 0x59, 0x4b, 0xd1, 0x8e, 0x95, 0x86, 0x7e, - 0xa8, 0xc1, 0xe5, 0xec, 0xaf, 0x5b, 0x90, 0x0d, 0x33, 0x7d, 0xe3, 0x5e, 0xf4, 0x8b, 0x23, 0xed, - 0x84, 0x4c, 0x89, 0x5f, 0x77, 0xac, 0xc7, 0xb0, 0x70, 0x02, 0x1b, 0xbd, 0x05, 0xd5, 0xbe, 0x71, - 0xaf, 0x3d, 0xf4, 0x7a, 0xe4, 0xc4, 0x8c, 0x8c, 0x6f, 0xa3, 0x75, 0x89, 0x82, 0x15, 0x9e, 0xfe, - 0x99, 0x06, 0xf3, 0x39, 0x97, 0x18, 0xff, 0x45, 0xa3, 0x7c, 0xb7, 0x04, 0x95, 0xb6, 0x69, 0xd8, - 0xe4, 0x0c, 0x08, 0xc5, 0xab, 0x31, 0x42, 0x71, 0xdc, 0x57, 0xb2, 0xdc, 0xab, 0x5c, 0x2e, 0x81, - 0x13, 0x5c, 0xe2, 0x89, 0x42, 0x68, 0x47, 0xd3, 0x88, 0xe7, 0x60, 0x52, 0x75, 0x3a, 0x5a, 0x76, - 0xd3, 0x7f, 0x5e, 0x82, 0xa9, 0x48, 0x17, 0x23, 0xe6, 0xc6, 0xed, 0x58, 0x41, 0x28, 0x17, 0x78, - 0x83, 0x14, 0xe9, 0xab, 0x16, 0x94, 0x00, 0xf1, 0x95, 0x47, 0x78, 0xaf, 0x9f, 0xae, 0x0c, 0x2f, - 0xc2, 0x0c, 0x35, 0xbc, 0x1e, 0xa1, 0xea, 0xc8, 0x20, 0x5e, 0x9e, 0xaa, 0xcf, 0x8d, 0x3a, 0x31, - 0x29, 0x4e, 0x68, 0x2f, 0xbe, 0x00, 0xd3, 0xb1, 0xce, 0x46, 0xf9, 0x48, 0xa3, 0xb1, 0x72, 0xff, - 0xd3, 0xa5, 0x73, 0x1f, 0x7f, 0xba, 0x74, 0xee, 0x93, 0x4f, 
0x97, 0xce, 0x7d, 0xff, 0x70, 0x49, - 0xbb, 0x7f, 0xb8, 0xa4, 0x7d, 0x7c, 0xb8, 0xa4, 0x7d, 0x72, 0xb8, 0xa4, 0xfd, 0xed, 0x70, 0x49, - 0xfb, 0xc9, 0x67, 0x4b, 0xe7, 0xde, 0x7a, 0xe4, 0xc8, 0xff, 0xd9, 0xf8, 0x57, 0x00, 0x00, 0x00, - 0xff, 0xff, 0x39, 0x36, 0x95, 0x55, 0xec, 0x31, 0x00, 0x00, + // 2858 bytes of a gzipped FileDescriptorProto + 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xec, 0x5b, 0xcd, 0x6f, 0x24, 0x47, + 0x15, 0xdf, 0x9e, 0xf1, 0xd8, 0xe3, 0xe7, 0xb5, 0xbd, 0x5b, 0xeb, 0xac, 0x1d, 0x2f, 0xb1, 0xa3, + 0x46, 0x84, 0x4d, 0xd8, 0xcc, 0xb0, 0x9b, 0x64, 0xc9, 0x87, 0x94, 0xb0, 0xe3, 0xdd, 0x64, 0x9d, + 0xd8, 0xe3, 0x49, 0xcd, 0x38, 0x41, 0x11, 0x01, 0xda, 0x3d, 0xe5, 0x71, 0xc7, 0x3d, 0xdd, 0xa3, + 0xee, 0x1a, 0xb3, 0xbe, 0x81, 0xe0, 0x92, 0x13, 0x5c, 0x02, 0x1c, 0x91, 0x90, 0xb8, 0x72, 0xe5, + 0x10, 0x22, 0x10, 0x41, 0x5a, 0x21, 0x0e, 0x91, 0x38, 0x90, 0x93, 0x45, 0x9c, 0x13, 0xe2, 0x1f, + 0x40, 0x7b, 0x42, 0xf5, 0xd1, 0xd5, 0xdf, 0x76, 0x8f, 0xf1, 0x5a, 0x04, 0x71, 0x5a, 0x4f, 0xbd, + 0xf7, 0x7e, 0xf5, 0xaa, 0xea, 0xd5, 0x7b, 0xbf, 0xaa, 0xea, 0x85, 0x57, 0x77, 0x9f, 0xf7, 0x6b, + 0x96, 0x5b, 0xdf, 0x1d, 0x6e, 0x11, 0xcf, 0x21, 0x94, 0xf8, 0xf5, 0x3d, 0xe2, 0x74, 0x5d, 0xaf, + 0x2e, 0x05, 0xc6, 0xc0, 0xaa, 0x93, 0x7b, 0x94, 0x38, 0xbe, 0xe5, 0x3a, 0x7e, 0x7d, 0xef, 0xfa, + 0x16, 0xa1, 0xc6, 0xf5, 0x7a, 0x8f, 0x38, 0xc4, 0x33, 0x28, 0xe9, 0xd6, 0x06, 0x9e, 0x4b, 0x5d, + 0xf4, 0x98, 0x50, 0xaf, 0x19, 0x03, 0xab, 0x16, 0xaa, 0xd7, 0xa4, 0xfa, 0xe2, 0xd3, 0x3d, 0x8b, + 0xee, 0x0c, 0xb7, 0x6a, 0xa6, 0xdb, 0xaf, 0xf7, 0xdc, 0x9e, 0x5b, 0xe7, 0x56, 0x5b, 0xc3, 0x6d, + 0xfe, 0x8b, 0xff, 0xe0, 0x7f, 0x09, 0xb4, 0x45, 0x3d, 0xd2, 0xb9, 0xe9, 0x7a, 0xa4, 0xbe, 0x97, + 0xea, 0x71, 0xf1, 0xd9, 0x50, 0xa7, 0x6f, 0x98, 0x3b, 0x96, 0x43, 0xbc, 0xfd, 0xfa, 0x60, 0xb7, + 0xc7, 0x1a, 0xfc, 0x7a, 0x9f, 0x50, 0x23, 0xcb, 0xaa, 0x9e, 0x67, 0xe5, 0x0d, 0x1d, 0x6a, 0xf5, + 0x49, 0xca, 0xe0, 0xe6, 0x71, 0x06, 0xbe, 0xb9, 0x43, 0xfa, 0x46, 0xca, 0xee, 0x99, 
0x3c, 0xbb, + 0x21, 0xb5, 0xec, 0xba, 0xe5, 0x50, 0x9f, 0x7a, 0x49, 0x23, 0xfd, 0x83, 0x12, 0x4c, 0xde, 0x36, + 0x48, 0xdf, 0x75, 0xda, 0x84, 0xa2, 0xef, 0x41, 0x95, 0x0d, 0xa3, 0x6b, 0x50, 0x63, 0x41, 0x7b, + 0x5c, 0xbb, 0x3a, 0x75, 0xe3, 0xeb, 0xb5, 0x70, 0x9a, 0x15, 0x6a, 0x6d, 0xb0, 0xdb, 0x63, 0x0d, + 0x7e, 0x8d, 0x69, 0xd7, 0xf6, 0xae, 0xd7, 0x36, 0xb6, 0xde, 0x23, 0x26, 0x5d, 0x27, 0xd4, 0x68, + 0xa0, 0xfb, 0x07, 0xcb, 0xe7, 0x0e, 0x0f, 0x96, 0x21, 0x6c, 0xc3, 0x0a, 0x15, 0x35, 0x61, 0xcc, + 0x1f, 0x10, 0x73, 0xa1, 0xc4, 0xd1, 0xaf, 0xd5, 0x8e, 0x5c, 0xc4, 0x9a, 0xf2, 0xac, 0x3d, 0x20, + 0x66, 0xe3, 0xbc, 0x44, 0x1e, 0x63, 0xbf, 0x30, 0xc7, 0x41, 0x6f, 0xc1, 0xb8, 0x4f, 0x0d, 0x3a, + 0xf4, 0x17, 0xca, 0x1c, 0xb1, 0x56, 0x18, 0x91, 0x5b, 0x35, 0x66, 0x24, 0xe6, 0xb8, 0xf8, 0x8d, + 0x25, 0x9a, 0xfe, 0x8f, 0x12, 0x20, 0xa5, 0xbb, 0xe2, 0x3a, 0x5d, 0x8b, 0x5a, 0xae, 0x83, 0x5e, + 0x84, 0x31, 0xba, 0x3f, 0x20, 0x7c, 0x72, 0x26, 0x1b, 0x4f, 0x04, 0x0e, 0x75, 0xf6, 0x07, 0xe4, + 0xc1, 0xc1, 0xf2, 0xe5, 0xb4, 0x05, 0x93, 0x60, 0x6e, 0x83, 0xd6, 0x94, 0xab, 0x25, 0x6e, 0xfd, + 0x6c, 0xbc, 0xeb, 0x07, 0x07, 0xcb, 0x19, 0x41, 0x58, 0x53, 0x48, 0x71, 0x07, 0xd1, 0x1e, 0x20, + 0xdb, 0xf0, 0x69, 0xc7, 0x33, 0x1c, 0x5f, 0xf4, 0x64, 0xf5, 0x89, 0x9c, 0x84, 0xa7, 0x8a, 0x2d, + 0x1a, 0xb3, 0x68, 0x2c, 0x4a, 0x2f, 0xd0, 0x5a, 0x0a, 0x0d, 0x67, 0xf4, 0x80, 0x9e, 0x80, 0x71, + 0x8f, 0x18, 0xbe, 0xeb, 0x2c, 0x8c, 0xf1, 0x51, 0xa8, 0x09, 0xc4, 0xbc, 0x15, 0x4b, 0x29, 0x7a, + 0x12, 0x26, 0xfa, 0xc4, 0xf7, 0x8d, 0x1e, 0x59, 0xa8, 0x70, 0xc5, 0x59, 0xa9, 0x38, 0xb1, 0x2e, + 0x9a, 0x71, 0x20, 0xd7, 0x3f, 0xd4, 0x60, 0x5a, 0xcd, 0xdc, 0x9a, 0xe5, 0x53, 0xf4, 0xed, 0x54, + 0x1c, 0xd6, 0x8a, 0x0d, 0x89, 0x59, 0xf3, 0x28, 0xbc, 0x20, 0x7b, 0xab, 0x06, 0x2d, 0x91, 0x18, + 0x5c, 0x87, 0x8a, 0x45, 0x49, 0x9f, 0xad, 0x43, 0xf9, 0xea, 0xd4, 0x8d, 0xab, 0x45, 0x43, 0xa6, + 0x31, 0x2d, 0x41, 0x2b, 0xab, 0xcc, 0x1c, 0x0b, 0x14, 0xfd, 0x67, 0x63, 0x11, 0xf7, 0x59, 0x68, + 0xa2, 0x77, 0xa1, 0xea, 
0x13, 0x9b, 0x98, 0xd4, 0xf5, 0xa4, 0xfb, 0xcf, 0x14, 0x74, 0xdf, 0xd8, + 0x22, 0x76, 0x5b, 0x9a, 0x36, 0xce, 0x33, 0xff, 0x83, 0x5f, 0x58, 0x41, 0xa2, 0x37, 0xa1, 0x4a, + 0x49, 0x7f, 0x60, 0x1b, 0x94, 0xc8, 0x7d, 0xf4, 0xe5, 0xe8, 0x10, 0x58, 0xe4, 0x30, 0xb0, 0x96, + 0xdb, 0xed, 0x48, 0x35, 0xbe, 0x7d, 0xd4, 0x94, 0x04, 0xad, 0x58, 0xc1, 0xa0, 0x3d, 0x98, 0x19, + 0x0e, 0xba, 0x4c, 0x93, 0xb2, 0xec, 0xd0, 0xdb, 0x97, 0x91, 0x74, 0xb3, 0xe8, 0xdc, 0x6c, 0xc6, + 0xac, 0x1b, 0x97, 0x65, 0x5f, 0x33, 0xf1, 0x76, 0x9c, 0xe8, 0x05, 0xdd, 0x82, 0xd9, 0xbe, 0xe5, + 0x60, 0x62, 0x74, 0xf7, 0xdb, 0xc4, 0x74, 0x9d, 0xae, 0xcf, 0xc3, 0xaa, 0xd2, 0x98, 0x97, 0x00, + 0xb3, 0xeb, 0x71, 0x31, 0x4e, 0xea, 0xa3, 0xd7, 0x01, 0x05, 0xc3, 0x78, 0x4d, 0x24, 0x37, 0xcb, + 0x75, 0x78, 0xcc, 0x95, 0xc3, 0xe0, 0xee, 0xa4, 0x34, 0x70, 0x86, 0x15, 0x5a, 0x83, 0x39, 0x8f, + 0xec, 0x59, 0x6c, 0x8c, 0x77, 0x2d, 0x9f, 0xba, 0xde, 0xfe, 0x9a, 0xd5, 0xb7, 0xe8, 0xc2, 0x38, + 0xf7, 0x69, 0xe1, 0xf0, 0x60, 0x79, 0x0e, 0x67, 0xc8, 0x71, 0xa6, 0x95, 0xfe, 0xf3, 0x71, 0x98, + 0x4d, 0xe4, 0x1b, 0xf4, 0x16, 0x5c, 0x36, 0x87, 0x9e, 0x47, 0x1c, 0xda, 0x1c, 0xf6, 0xb7, 0x88, + 0xd7, 0x36, 0x77, 0x48, 0x77, 0x68, 0x93, 0x2e, 0x0f, 0x94, 0x4a, 0x63, 0x49, 0x7a, 0x7c, 0x79, + 0x25, 0x53, 0x0b, 0xe7, 0x58, 0xb3, 0x59, 0x70, 0x78, 0xd3, 0xba, 0xe5, 0xfb, 0x0a, 0xb3, 0xc4, + 0x31, 0xd5, 0x2c, 0x34, 0x53, 0x1a, 0x38, 0xc3, 0x8a, 0xf9, 0xd8, 0x25, 0xbe, 0xe5, 0x91, 0x6e, + 0xd2, 0xc7, 0x72, 0xdc, 0xc7, 0xdb, 0x99, 0x5a, 0x38, 0xc7, 0x1a, 0x3d, 0x07, 0x53, 0xa2, 0x37, + 0xbe, 0x7e, 0x72, 0xa1, 0x2f, 0x49, 0xb0, 0xa9, 0x66, 0x28, 0xc2, 0x51, 0x3d, 0x36, 0x34, 0x77, + 0xcb, 0x27, 0xde, 0x1e, 0xe9, 0xe6, 0x2f, 0xf0, 0x46, 0x4a, 0x03, 0x67, 0x58, 0xb1, 0xa1, 0x89, + 0x08, 0x4c, 0x0d, 0x6d, 0x3c, 0x3e, 0xb4, 0xcd, 0x4c, 0x2d, 0x9c, 0x63, 0xcd, 0xe2, 0x58, 0xb8, + 0x7c, 0x6b, 0xcf, 0xb0, 0x6c, 0x63, 0xcb, 0x26, 0x0b, 0x13, 0xf1, 0x38, 0x6e, 0xc6, 0xc5, 0x38, + 0xa9, 0x8f, 0x5e, 0x83, 0x8b, 0xa2, 0x69, 0xd3, 0x31, 0x14, 
0x48, 0x95, 0x83, 0x3c, 0x2a, 0x41, + 0x2e, 0x36, 0x93, 0x0a, 0x38, 0x6d, 0x83, 0x5e, 0x84, 0x19, 0xd3, 0xb5, 0x6d, 0x1e, 0x8f, 0x2b, + 0xee, 0xd0, 0xa1, 0x0b, 0x93, 0x1c, 0x05, 0xb1, 0xfd, 0xb8, 0x12, 0x93, 0xe0, 0x84, 0x26, 0x22, + 0x00, 0x66, 0x50, 0x70, 0xfc, 0x05, 0xe0, 0xf9, 0xf1, 0x7a, 0xd1, 0x1c, 0xa0, 0x4a, 0x55, 0xc8, + 0x01, 0x54, 0x93, 0x8f, 0x23, 0xc0, 0xfa, 0x9f, 0x35, 0x98, 0xcf, 0x49, 0x1d, 0xe8, 0x95, 0x58, + 0x89, 0xfd, 0x5a, 0xa2, 0xc4, 0x5e, 0xc9, 0x31, 0x8b, 0xd4, 0x59, 0x07, 0xa6, 0x3d, 0x36, 0x2a, + 0xa7, 0x27, 0x54, 0x64, 0x8e, 0x7c, 0xee, 0x98, 0x61, 0xe0, 0xa8, 0x4d, 0x98, 0xf3, 0x2f, 0x1e, + 0x1e, 0x2c, 0x4f, 0xc7, 0x64, 0x38, 0x0e, 0xaf, 0xff, 0xa2, 0x04, 0x70, 0x9b, 0x0c, 0x6c, 0x77, + 0xbf, 0x4f, 0x9c, 0xb3, 0xe0, 0x50, 0x1b, 0x31, 0x0e, 0xf5, 0xf4, 0x71, 0xcb, 0xa3, 0x5c, 0xcb, + 0x25, 0x51, 0x6f, 0x27, 0x48, 0x54, 0xbd, 0x38, 0xe4, 0xd1, 0x2c, 0xea, 0x6f, 0x65, 0xb8, 0x14, + 0x2a, 0x87, 0x34, 0xea, 0xa5, 0xd8, 0x1a, 0x7f, 0x35, 0xb1, 0xc6, 0xf3, 0x19, 0x26, 0x0f, 0x8d, + 0x47, 0xbd, 0x07, 0x33, 0x8c, 0xe5, 0x88, 0xb5, 0xe4, 0x1c, 0x6a, 0x7c, 0x64, 0x0e, 0xa5, 0xaa, + 0xdd, 0x5a, 0x0c, 0x09, 0x27, 0x90, 0x73, 0x38, 0xdb, 0xc4, 0x17, 0x91, 0xb3, 0x7d, 0xa4, 0xc1, + 0x4c, 0xb8, 0x4c, 0x67, 0x40, 0xda, 0x9a, 0x71, 0xd2, 0xf6, 0x64, 0xe1, 0x10, 0xcd, 0x61, 0x6d, + 0xff, 0x62, 0x04, 0x5f, 0x29, 0xb1, 0x0d, 0xbe, 0x65, 0x98, 0xbb, 0xe8, 0x71, 0x18, 0x73, 0x8c, + 0x7e, 0x10, 0x99, 0x6a, 0xb3, 0x34, 0x8d, 0x3e, 0xc1, 0x5c, 0x82, 0x3e, 0xd0, 0x00, 0xc9, 0x2a, + 0x70, 0xcb, 0x71, 0x5c, 0x6a, 0x88, 0x5c, 0x29, 0xdc, 0x5a, 0x2d, 0xec, 0x56, 0xd0, 0x63, 0x6d, + 0x33, 0x85, 0x75, 0xc7, 0xa1, 0xde, 0x7e, 0xb8, 0xc8, 0x69, 0x05, 0x9c, 0xe1, 0x00, 0x32, 0x00, + 0x3c, 0x89, 0xd9, 0x71, 0xe5, 0x46, 0x7e, 0xba, 0x40, 0xce, 0x63, 0x06, 0x2b, 0xae, 0xb3, 0x6d, + 0xf5, 0xc2, 0xb4, 0x83, 0x15, 0x10, 0x8e, 0x80, 0x2e, 0xde, 0x81, 0xf9, 0x1c, 0x6f, 0xd1, 0x05, + 0x28, 0xef, 0x92, 0x7d, 0x31, 0x6d, 0x98, 0xfd, 0x89, 0xe6, 0xa0, 0xb2, 0x67, 0xd8, 0x43, 0x91, + 
0x7e, 0x27, 0xb1, 0xf8, 0xf1, 0x62, 0xe9, 0x79, 0x4d, 0xff, 0xb0, 0x12, 0x8d, 0x1d, 0xce, 0x98, + 0xaf, 0x42, 0xd5, 0x23, 0x03, 0xdb, 0x32, 0x0d, 0x5f, 0x12, 0x21, 0x4e, 0x7e, 0xb1, 0x6c, 0xc3, + 0x4a, 0x1a, 0xe3, 0xd6, 0xa5, 0x87, 0xcb, 0xad, 0xcb, 0xa7, 0xc3, 0xad, 0xbf, 0x0b, 0x55, 0x3f, + 0x60, 0xd5, 0x63, 0x1c, 0xf2, 0xfa, 0x08, 0xf9, 0x55, 0x12, 0x6a, 0xd5, 0x81, 0xa2, 0xd2, 0x0a, + 0x34, 0x8b, 0x44, 0x57, 0x46, 0x24, 0xd1, 0xa7, 0x4a, 0x7c, 0x59, 0xbe, 0x19, 0x18, 0x43, 0x9f, + 0x74, 0x79, 0x6e, 0xab, 0x86, 0xf9, 0xa6, 0xc5, 0x5b, 0xb1, 0x94, 0xa2, 0x77, 0x63, 0x21, 0x5b, + 0x3d, 0x49, 0xc8, 0xce, 0xe4, 0x87, 0x2b, 0xda, 0x84, 0xf9, 0x81, 0xe7, 0xf6, 0x3c, 0xe2, 0xfb, + 0xb7, 0x89, 0xd1, 0xb5, 0x2d, 0x87, 0x04, 0xf3, 0x23, 0x18, 0xd1, 0x95, 0xc3, 0x83, 0xe5, 0xf9, + 0x56, 0xb6, 0x0a, 0xce, 0xb3, 0xd5, 0xef, 0x8f, 0xc1, 0x85, 0x64, 0x05, 0xcc, 0x21, 0xa9, 0xda, + 0x89, 0x48, 0xea, 0xb5, 0xc8, 0x66, 0x10, 0x0c, 0x5e, 0xad, 0x7e, 0xc6, 0x86, 0xb8, 0x05, 0xb3, + 0x32, 0x1b, 0x04, 0x42, 0x49, 0xd3, 0xd5, 0xea, 0x6f, 0xc6, 0xc5, 0x38, 0xa9, 0x8f, 0x5e, 0x82, + 0x69, 0x8f, 0xf3, 0xee, 0x00, 0x40, 0x70, 0xd7, 0x47, 0x24, 0xc0, 0x34, 0x8e, 0x0a, 0x71, 0x5c, + 0x97, 0xf1, 0xd6, 0x90, 0x8e, 0x06, 0x00, 0x63, 0x71, 0xde, 0x7a, 0x2b, 0xa9, 0x80, 0xd3, 0x36, + 0x68, 0x1d, 0x2e, 0x0d, 0x9d, 0x34, 0x94, 0x08, 0xe5, 0x2b, 0x12, 0xea, 0xd2, 0x66, 0x5a, 0x05, + 0x67, 0xd9, 0xa1, 0xed, 0x18, 0x95, 0x1d, 0xe7, 0xe9, 0xf9, 0x46, 0xe1, 0x8d, 0x57, 0x98, 0xcb, + 0x66, 0xd0, 0xed, 0x6a, 0x51, 0xba, 0xad, 0xff, 0x41, 0x8b, 0x16, 0x21, 0x45, 0x81, 0x8f, 0xbb, + 0x65, 0x4a, 0x59, 0x44, 0xd8, 0x91, 0x9b, 0xcd, 0x7e, 0x6f, 0x8e, 0xc4, 0x7e, 0xc3, 0xe2, 0x79, + 0x3c, 0xfd, 0xfd, 0xa3, 0x06, 0xb3, 0x77, 0x3b, 0x9d, 0xd6, 0xaa, 0xc3, 0x77, 0x4b, 0xcb, 0xa0, + 0x3b, 0xac, 0x8a, 0x0e, 0x0c, 0xba, 0x93, 0xac, 0xa2, 0x4c, 0x86, 0xb9, 0x04, 0x3d, 0x0b, 0x55, + 0xf6, 0x2f, 0x73, 0x9c, 0x87, 0xeb, 0x24, 0x4f, 0x32, 0xd5, 0x96, 0x6c, 0x7b, 0x10, 0xf9, 0x1b, + 0x2b, 0x4d, 0xf4, 0x2d, 0x98, 0x60, 
0x7b, 0x9b, 0x38, 0xdd, 0x82, 0xe4, 0x57, 0x3a, 0xd5, 0x10, + 0x46, 0x21, 0x9f, 0x91, 0x0d, 0x38, 0x80, 0xd3, 0x77, 0x61, 0x2e, 0x32, 0x08, 0x3c, 0xb4, 0xc9, + 0x5b, 0xac, 0x5e, 0xa1, 0x36, 0x54, 0x58, 0xef, 0xac, 0x2a, 0x95, 0x0b, 0x5c, 0x2f, 0x26, 0x26, + 0x22, 0xe4, 0x1e, 0xec, 0x97, 0x8f, 0x05, 0x96, 0xbe, 0x01, 0x13, 0xab, 0xad, 0x86, 0xed, 0x0a, + 0xbe, 0x61, 0x5a, 0x5d, 0x2f, 0x39, 0x53, 0x2b, 0xab, 0xb7, 0x31, 0xe6, 0x12, 0xa4, 0xc3, 0x38, + 0xb9, 0x67, 0x92, 0x01, 0xe5, 0x14, 0x63, 0xb2, 0x01, 0x2c, 0x91, 0xde, 0xe1, 0x2d, 0x58, 0x4a, + 0xf4, 0x9f, 0x94, 0x60, 0x42, 0x76, 0x7b, 0x06, 0xe7, 0x8f, 0xb5, 0xd8, 0xf9, 0xe3, 0xa9, 0x62, + 0x4b, 0x90, 0x7b, 0xf8, 0xe8, 0x24, 0x0e, 0x1f, 0xd7, 0x0a, 0xe2, 0x1d, 0x7d, 0xf2, 0x78, 0xbf, + 0x04, 0x33, 0xf1, 0xc5, 0x47, 0xcf, 0xc1, 0x14, 0x4b, 0xb5, 0x96, 0x49, 0x9a, 0x21, 0xc3, 0x53, + 0xd7, 0x0f, 0xed, 0x50, 0x84, 0xa3, 0x7a, 0xa8, 0xa7, 0xcc, 0x5a, 0xae, 0x47, 0xe5, 0xa0, 0xf3, + 0xa7, 0x74, 0x48, 0x2d, 0xbb, 0x26, 0x2e, 0xdb, 0x6b, 0xab, 0x0e, 0xdd, 0xf0, 0xda, 0xd4, 0xb3, + 0x9c, 0x5e, 0xaa, 0x23, 0x06, 0x86, 0xa3, 0xc8, 0xe8, 0x6d, 0x96, 0xf6, 0x7d, 0x77, 0xe8, 0x99, + 0x24, 0x8b, 0xbe, 0x05, 0xd4, 0x83, 0x6d, 0x84, 0xee, 0x9a, 0x6b, 0x1a, 0xb6, 0x58, 0x1c, 0x4c, + 0xb6, 0x89, 0x47, 0x1c, 0x93, 0x04, 0x94, 0x49, 0x40, 0x60, 0x05, 0xa6, 0xff, 0x56, 0x83, 0x29, + 0x39, 0x17, 0x67, 0x40, 0xd4, 0xdf, 0x88, 0x13, 0xf5, 0x27, 0x0a, 0xee, 0xd0, 0x6c, 0x96, 0xfe, + 0x3b, 0x0d, 0x16, 0x03, 0xd7, 0x5d, 0xa3, 0xdb, 0x30, 0x6c, 0xc3, 0x31, 0x89, 0x17, 0xc4, 0xfa, + 0x22, 0x94, 0xac, 0x81, 0x5c, 0x49, 0x90, 0x00, 0xa5, 0xd5, 0x16, 0x2e, 0x59, 0x03, 0x56, 0x45, + 0x77, 0x5c, 0x9f, 0x72, 0x36, 0x2f, 0x0e, 0x8a, 0xca, 0xeb, 0xbb, 0xb2, 0x1d, 0x2b, 0x0d, 0xb4, + 0x09, 0x95, 0x81, 0xeb, 0x51, 0x56, 0xb9, 0xca, 0x89, 0xf5, 0x3d, 0xc2, 0x6b, 0xb6, 0x6e, 0x32, + 0x10, 0xc3, 0x9d, 0xce, 0x60, 0xb0, 0x40, 0xd3, 0x7f, 0xa8, 0xc1, 0xa3, 0x19, 0xfe, 0x4b, 0xd2, + 0xd0, 0x85, 0x09, 0x4b, 0x08, 0x65, 0x7a, 0x79, 0xa1, 0x58, 0xb7, 0x19, 
0x53, 0x11, 0xa6, 0xb6, + 0x20, 0x85, 0x05, 0xd0, 0xfa, 0xaf, 0x34, 0xb8, 0x98, 0xf2, 0x97, 0xa7, 0x68, 0x16, 0xcf, 0x92, + 0x6d, 0xab, 0x14, 0xcd, 0xc2, 0x92, 0x4b, 0xd0, 0x1b, 0x50, 0xe5, 0x6f, 0x44, 0xa6, 0x6b, 0xcb, + 0x09, 0xac, 0x07, 0x13, 0xd8, 0x92, 0xed, 0x0f, 0x0e, 0x96, 0xaf, 0x64, 0x9c, 0xb5, 0x03, 0x31, + 0x56, 0x00, 0x68, 0x19, 0x2a, 0xc4, 0xf3, 0x5c, 0x4f, 0x26, 0xfb, 0x49, 0x36, 0x53, 0x77, 0x58, + 0x03, 0x16, 0xed, 0xfa, 0xaf, 0xc3, 0x20, 0x65, 0xd9, 0x97, 0xf9, 0xc7, 0x16, 0x27, 0x99, 0x18, + 0xd9, 0xd2, 0x61, 0x2e, 0x41, 0x43, 0xb8, 0x60, 0x25, 0xd2, 0xb5, 0xdc, 0x9d, 0xf5, 0x62, 0xd3, + 0xa8, 0xcc, 0x1a, 0x0b, 0x12, 0xfe, 0x42, 0x52, 0x82, 0x53, 0x5d, 0xe8, 0x04, 0x52, 0x5a, 0xe8, + 0x4d, 0x18, 0xdb, 0xa1, 0x74, 0x90, 0x71, 0xd9, 0x7f, 0x4c, 0x91, 0x08, 0x5d, 0xa8, 0xf2, 0xd1, + 0x75, 0x3a, 0x2d, 0xcc, 0xa1, 0xf4, 0xdf, 0x97, 0xd4, 0x7c, 0xf0, 0x13, 0xd2, 0x37, 0xd5, 0x68, + 0x57, 0x6c, 0xc3, 0xf7, 0x79, 0x0a, 0x13, 0xa7, 0xf9, 0xb9, 0x88, 0xe3, 0x4a, 0x86, 0x53, 0xda, + 0xa8, 0x13, 0x16, 0x4f, 0xed, 0x24, 0xc5, 0x73, 0x2a, 0xab, 0x70, 0xa2, 0xbb, 0x50, 0xa6, 0x76, + 0xd1, 0x53, 0xb9, 0x44, 0xec, 0xac, 0xb5, 0x1b, 0x53, 0x72, 0xca, 0xcb, 0x9d, 0xb5, 0x36, 0x66, + 0x10, 0x68, 0x03, 0x2a, 0xde, 0xd0, 0x26, 0xac, 0x0e, 0x94, 0x8b, 0xd7, 0x15, 0x36, 0x83, 0xe1, + 0xe6, 0x63, 0xbf, 0x7c, 0x2c, 0x70, 0xf4, 0x1f, 0x69, 0x30, 0x1d, 0xab, 0x16, 0xc8, 0x83, 0xf3, + 0x76, 0x64, 0xef, 0xc8, 0x79, 0x78, 0x7e, 0xf4, 0x5d, 0x27, 0x37, 0xfd, 0x9c, 0xec, 0xf7, 0x7c, + 0x54, 0x86, 0x63, 0x7d, 0xe8, 0x06, 0x40, 0x38, 0x6c, 0xb6, 0x0f, 0x58, 0xf0, 0x8a, 0x0d, 0x2f, + 0xf7, 0x01, 0x8b, 0x69, 0x1f, 0x8b, 0x76, 0x74, 0x03, 0xc0, 0x27, 0xa6, 0x47, 0x68, 0x33, 0x4c, + 0x5c, 0xaa, 0x1c, 0xb7, 0x95, 0x04, 0x47, 0xb4, 0xf4, 0x3f, 0x69, 0x30, 0xdd, 0x24, 0xf4, 0xfb, + 0xae, 0xb7, 0xdb, 0x72, 0x6d, 0xcb, 0xdc, 0x3f, 0x03, 0x12, 0x80, 0x63, 0x24, 0xe0, 0xb8, 0x7c, + 0x19, 0xf3, 0x2e, 0x8f, 0x0a, 0xe8, 0x1f, 0x69, 0x30, 0x1f, 0xd3, 0xbc, 0x13, 0xe6, 0x03, 0x95, + 0xa0, 0xb5, 
0x42, 0x09, 0x3a, 0x06, 0xc3, 0x92, 0x5a, 0x76, 0x82, 0x46, 0x6b, 0x50, 0xa2, 0xae, + 0x8c, 0xde, 0xd1, 0x30, 0x09, 0xf1, 0xc2, 0x9a, 0xd3, 0x71, 0x71, 0x89, 0xba, 0x6c, 0x21, 0x16, + 0x62, 0x5a, 0xd1, 0x8c, 0xf6, 0x90, 0x46, 0x80, 0x61, 0x6c, 0xdb, 0x73, 0xfb, 0x27, 0x1e, 0x83, + 0x5a, 0x88, 0x57, 0x3d, 0xb7, 0x8f, 0x39, 0x96, 0xfe, 0xb1, 0x06, 0x17, 0x63, 0x9a, 0x67, 0xc0, + 0x1b, 0xde, 0x8c, 0xf3, 0x86, 0x6b, 0xa3, 0x0c, 0x24, 0x87, 0x3d, 0x7c, 0x5c, 0x4a, 0x0c, 0x83, + 0x0d, 0x18, 0x6d, 0xc3, 0xd4, 0xc0, 0xed, 0xb6, 0x4f, 0xe1, 0x81, 0x76, 0x96, 0xf1, 0xb9, 0x56, + 0x88, 0x85, 0xa3, 0xc0, 0xe8, 0x1e, 0x5c, 0x64, 0xd4, 0xc2, 0x1f, 0x18, 0x26, 0x69, 0x9f, 0xc2, + 0x95, 0xd5, 0x23, 0xfc, 0x05, 0x28, 0x89, 0x88, 0xd3, 0x9d, 0xa0, 0x75, 0x98, 0xb0, 0x06, 0xfc, + 0x7c, 0x21, 0x89, 0xe4, 0xb1, 0x24, 0x4c, 0x9c, 0x46, 0x44, 0x8a, 0x97, 0x3f, 0x70, 0x80, 0xa1, + 0xff, 0x35, 0x19, 0x0d, 0x9c, 0xae, 0xbe, 0x16, 0xa1, 0x07, 0xf2, 0xad, 0xe6, 0x64, 0xd4, 0xa0, + 0x29, 0x99, 0xc8, 0x49, 0x99, 0x75, 0x35, 0xc1, 0x5b, 0xbe, 0x02, 0x13, 0xc4, 0xe9, 0x72, 0xb2, + 0x2e, 0x2e, 0x42, 0xf8, 0xa8, 0xee, 0x88, 0x26, 0x1c, 0xc8, 0xf4, 0x1f, 0x97, 0x13, 0xa3, 0xe2, + 0x65, 0xf6, 0xbd, 0x53, 0x0b, 0x0e, 0x45, 0xf8, 0x73, 0x03, 0x64, 0x2b, 0xa4, 0x7f, 0x22, 0xe6, + 0xbf, 0x31, 0x4a, 0xcc, 0x47, 0xeb, 0x5f, 0x2e, 0xf9, 0x43, 0xdf, 0x81, 0x71, 0x22, 0xba, 0x10, + 0x55, 0xf5, 0xe6, 0x28, 0x5d, 0x84, 0xe9, 0x37, 0x3c, 0x67, 0xc9, 0x36, 0x89, 0x8a, 0x5e, 0x61, + 0xf3, 0xc5, 0x74, 0xd9, 0xb1, 0x44, 0xb0, 0xe7, 0xc9, 0xc6, 0x63, 0x62, 0xd8, 0xaa, 0xf9, 0xc1, + 0xc1, 0x32, 0x84, 0x3f, 0x71, 0xd4, 0x82, 0xbf, 0x9e, 0xc9, 0x3b, 0x9b, 0xb3, 0xf9, 0x02, 0x69, + 0xb4, 0xd7, 0xb3, 0xd0, 0xb5, 0x53, 0x7b, 0x3d, 0x8b, 0x40, 0x1e, 0x7d, 0x86, 0xfd, 0x67, 0x09, + 0x2e, 0x85, 0xca, 0x85, 0x5f, 0xcf, 0x32, 0x4c, 0xfe, 0xff, 0x15, 0x52, 0xb1, 0x17, 0xad, 0x70, + 0xea, 0xfe, 0xfb, 0x5e, 0xb4, 0x42, 0xdf, 0x72, 0xaa, 0xdd, 0x6f, 0x4a, 0xd1, 0x01, 0x8c, 0xf8, + 0xac, 0x72, 0x0a, 0x1f, 0xe2, 0x7c, 0xe1, 0x5e, 
0x66, 0xf4, 0xbf, 0x94, 0xe1, 0x42, 0x72, 0x37, + 0xc6, 0x6e, 0xdf, 0xb5, 0x63, 0x6f, 0xdf, 0x5b, 0x30, 0xb7, 0x3d, 0xb4, 0xed, 0x7d, 0x3e, 0x86, + 0xc8, 0x15, 0xbc, 0xb8, 0xb7, 0xff, 0x92, 0xb4, 0x9c, 0x7b, 0x35, 0x43, 0x07, 0x67, 0x5a, 0xa6, + 0x2f, 0xe3, 0xc7, 0xfe, 0xd3, 0xcb, 0xf8, 0xca, 0x09, 0x2e, 0xe3, 0xb3, 0xdf, 0x33, 0xca, 0x27, + 0x7a, 0xcf, 0x38, 0xc9, 0x4d, 0x7c, 0x46, 0x12, 0x3b, 0xf6, 0xab, 0x92, 0x97, 0x61, 0x26, 0xfe, + 0x3a, 0x24, 0xd6, 0x52, 0x3c, 0x50, 0xc9, 0xb7, 0x98, 0xc8, 0x5a, 0x8a, 0x76, 0xac, 0x34, 0xf4, + 0x43, 0x0d, 0x2e, 0x67, 0x7f, 0x05, 0x82, 0x6c, 0x98, 0xe9, 0x1b, 0xf7, 0xa2, 0x5f, 0xe6, 0x68, + 0x27, 0x64, 0x2b, 0xfc, 0x59, 0x60, 0x3d, 0x86, 0x85, 0x13, 0xd8, 0xe8, 0x1d, 0xa8, 0xf6, 0x8d, + 0x7b, 0xed, 0xa1, 0xd7, 0x23, 0x27, 0x66, 0x45, 0x7c, 0x1b, 0xad, 0x4b, 0x14, 0xac, 0xf0, 0xf4, + 0xcf, 0x35, 0x98, 0xcf, 0xb9, 0xec, 0xff, 0x1f, 0x1a, 0xe5, 0xfb, 0x25, 0xa8, 0xb4, 0x4d, 0xc3, + 0x26, 0x67, 0x40, 0x28, 0x5e, 0x8f, 0x11, 0x8a, 0xe3, 0xbe, 0x26, 0xe5, 0x5e, 0xe5, 0x72, 0x09, + 0x9c, 0xe0, 0x12, 0x4f, 0x15, 0x42, 0x3b, 0x9a, 0x46, 0xbc, 0x00, 0x93, 0xaa, 0xd3, 0xd1, 0xb2, + 0x9b, 0xfe, 0xcb, 0x12, 0x4c, 0x45, 0xba, 0x18, 0x31, 0x37, 0x6e, 0xc7, 0x0a, 0x42, 0xb9, 0xc0, + 0x4d, 0x4b, 0xa4, 0xaf, 0x5a, 0x50, 0x02, 0xc4, 0xd7, 0x10, 0xe1, 0xfb, 0x77, 0xba, 0x32, 0xbc, + 0x0c, 0x33, 0xd4, 0xf0, 0x7a, 0x84, 0x2a, 0xda, 0x2e, 0x2e, 0x19, 0xd5, 0x67, 0x39, 0x9d, 0x98, + 0x14, 0x27, 0xb4, 0x17, 0x5f, 0x82, 0xe9, 0x58, 0x67, 0xa3, 0x7c, 0xcc, 0xd0, 0x58, 0xb9, 0xff, + 0xd9, 0xd2, 0xb9, 0x4f, 0x3e, 0x5b, 0x3a, 0xf7, 0xe9, 0x67, 0x4b, 0xe7, 0x7e, 0x70, 0xb8, 0xa4, + 0xdd, 0x3f, 0x5c, 0xd2, 0x3e, 0x39, 0x5c, 0xd2, 0x3e, 0x3d, 0x5c, 0xd2, 0xfe, 0x7e, 0xb8, 0xa4, + 0xfd, 0xf4, 0xf3, 0xa5, 0x73, 0xef, 0x3c, 0x76, 0xe4, 0xff, 0x6d, 0xf8, 0x77, 0x00, 0x00, 0x00, + 0xff, 0xff, 0xf3, 0x1c, 0xa0, 0x16, 0x14, 0x31, 0x00, 0x00, } func (m *DaemonSet) Marshal() (dAtA []byte, err error) { @@ -2944,16 +2913,6 @@ func (m *NetworkPolicy) 
MarshalToSizedBuffer(dAtA []byte) (int, error) { _ = i var l int _ = l - { - size, err := m.Status.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintGenerated(dAtA, i, uint64(size)) - } - i-- - dAtA[i] = 0x1a { size, err := m.Spec.MarshalToSizedBuffer(dAtA[:i]) if err != nil { @@ -3302,43 +3261,6 @@ func (m *NetworkPolicySpec) MarshalToSizedBuffer(dAtA []byte) (int, error) { return len(dAtA) - i, nil } -func (m *NetworkPolicyStatus) Marshal() (dAtA []byte, err error) { - size := m.Size() - dAtA = make([]byte, size) - n, err := m.MarshalToSizedBuffer(dAtA[:size]) - if err != nil { - return nil, err - } - return dAtA[:n], nil -} - -func (m *NetworkPolicyStatus) MarshalTo(dAtA []byte) (int, error) { - size := m.Size() - return m.MarshalToSizedBuffer(dAtA[:size]) -} - -func (m *NetworkPolicyStatus) MarshalToSizedBuffer(dAtA []byte) (int, error) { - i := len(dAtA) - _ = i - var l int - _ = l - if len(m.Conditions) > 0 { - for iNdEx := len(m.Conditions) - 1; iNdEx >= 0; iNdEx-- { - { - size, err := m.Conditions[iNdEx].MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintGenerated(dAtA, i, uint64(size)) - } - i-- - dAtA[i] = 0xa - } - } - return len(dAtA) - i, nil -} - func (m *ReplicaSet) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) @@ -4362,8 +4284,6 @@ func (m *NetworkPolicy) Size() (n int) { n += 1 + l + sovGenerated(uint64(l)) l = m.Spec.Size() n += 1 + l + sovGenerated(uint64(l)) - l = m.Status.Size() - n += 1 + l + sovGenerated(uint64(l)) return n } @@ -4496,21 +4416,6 @@ func (m *NetworkPolicySpec) Size() (n int) { return n } -func (m *NetworkPolicyStatus) Size() (n int) { - if m == nil { - return 0 - } - var l int - _ = l - if len(m.Conditions) > 0 { - for _, e := range m.Conditions { - l = e.Size() - n += 1 + l + sovGenerated(uint64(l)) - } - } - return n -} - func (m *ReplicaSet) Size() (n int) { if m == nil { return 0 @@ -5098,7 
+5003,6 @@ func (this *NetworkPolicy) String() string { s := strings.Join([]string{`&NetworkPolicy{`, `ObjectMeta:` + strings.Replace(strings.Replace(fmt.Sprintf("%v", this.ObjectMeta), "ObjectMeta", "v1.ObjectMeta", 1), `&`, ``, 1) + `,`, `Spec:` + strings.Replace(strings.Replace(this.Spec.String(), "NetworkPolicySpec", "NetworkPolicySpec", 1), `&`, ``, 1) + `,`, - `Status:` + strings.Replace(strings.Replace(this.Status.String(), "NetworkPolicyStatus", "NetworkPolicyStatus", 1), `&`, ``, 1) + `,`, `}`, }, "") return s @@ -5208,21 +5112,6 @@ func (this *NetworkPolicySpec) String() string { }, "") return s } -func (this *NetworkPolicyStatus) String() string { - if this == nil { - return "nil" - } - repeatedStringForConditions := "[]Condition{" - for _, f := range this.Conditions { - repeatedStringForConditions += fmt.Sprintf("%v", f) + "," - } - repeatedStringForConditions += "}" - s := strings.Join([]string{`&NetworkPolicyStatus{`, - `Conditions:` + repeatedStringForConditions + `,`, - `}`, - }, "") - return s -} func (this *ReplicaSet) String() string { if this == nil { return "nil" @@ -9627,39 +9516,6 @@ func (m *NetworkPolicy) Unmarshal(dAtA []byte) error { return err } iNdEx = postIndex - case 3: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) - } - var msglen int - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowGenerated - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - msglen |= int(b&0x7F) << shift - if b < 0x80 { - break - } - } - if msglen < 0 { - return ErrInvalidLengthGenerated - } - postIndex := iNdEx + msglen - if postIndex < 0 { - return ErrInvalidLengthGenerated - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipGenerated(dAtA[iNdEx:]) @@ -10514,90 +10370,6 @@ func (m 
*NetworkPolicySpec) Unmarshal(dAtA []byte) error { } return nil } -func (m *NetworkPolicyStatus) Unmarshal(dAtA []byte) error { - l := len(dAtA) - iNdEx := 0 - for iNdEx < l { - preIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowGenerated - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - fieldNum := int32(wire >> 3) - wireType := int(wire & 0x7) - if wireType == 4 { - return fmt.Errorf("proto: NetworkPolicyStatus: wiretype end group for non-group") - } - if fieldNum <= 0 { - return fmt.Errorf("proto: NetworkPolicyStatus: illegal tag %d (wire type %d)", fieldNum, wire) - } - switch fieldNum { - case 1: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Conditions", wireType) - } - var msglen int - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowGenerated - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - msglen |= int(b&0x7F) << shift - if b < 0x80 { - break - } - } - if msglen < 0 { - return ErrInvalidLengthGenerated - } - postIndex := iNdEx + msglen - if postIndex < 0 { - return ErrInvalidLengthGenerated - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - m.Conditions = append(m.Conditions, v1.Condition{}) - if err := m.Conditions[len(m.Conditions)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - iNdEx = postIndex - default: - iNdEx = preIndex - skippy, err := skipGenerated(dAtA[iNdEx:]) - if err != nil { - return err - } - if (skippy < 0) || (iNdEx+skippy) < 0 { - return ErrInvalidLengthGenerated - } - if (iNdEx + skippy) > l { - return io.ErrUnexpectedEOF - } - iNdEx += skippy - } - } - - if iNdEx > l { - return io.ErrUnexpectedEOF - } - return nil -} func (m *ReplicaSet) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 diff --git 
a/cluster-autoscaler/vendor/k8s.io/api/extensions/v1beta1/generated.proto b/cluster-autoscaler/vendor/k8s.io/api/extensions/v1beta1/generated.proto index 3ab6a093b559..3f2549681ecb 100644 --- a/cluster-autoscaler/vendor/k8s.io/api/extensions/v1beta1/generated.proto +++ b/cluster-autoscaler/vendor/k8s.io/api/extensions/v1beta1/generated.proto @@ -646,11 +646,6 @@ message NetworkPolicy { // Specification of the desired behavior for this NetworkPolicy. // +optional optional NetworkPolicySpec spec = 2; - - // Status is the current state of the NetworkPolicy. - // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status - // +optional - optional NetworkPolicyStatus status = 3; } // DEPRECATED 1.9 - This group version of NetworkPolicyEgressRule is deprecated by networking/v1/NetworkPolicyEgressRule. @@ -798,18 +793,6 @@ message NetworkPolicySpec { repeated string policyTypes = 4; } -// NetworkPolicyStatus describe the current state of the NetworkPolicy. -message NetworkPolicyStatus { - // Conditions holds an array of metav1.Condition that describe the state of the NetworkPolicy. - // Current service state - // +optional - // +patchMergeKey=type - // +patchStrategy=merge - // +listType=map - // +listMapKey=type - repeated k8s.io.apimachinery.pkg.apis.meta.v1.Condition conditions = 1; -} - // DEPRECATED - This group version of ReplicaSet is deprecated by apps/v1beta2/ReplicaSet. See the release notes for // more information. // ReplicaSet ensures that a specified number of pod replicas are running at any given time. 
diff --git a/cluster-autoscaler/vendor/k8s.io/api/extensions/v1beta1/types.go b/cluster-autoscaler/vendor/k8s.io/api/extensions/v1beta1/types.go index c0ac6fa25dd9..70b349f654b3 100644 --- a/cluster-autoscaler/vendor/k8s.io/api/extensions/v1beta1/types.go +++ b/cluster-autoscaler/vendor/k8s.io/api/extensions/v1beta1/types.go @@ -1041,10 +1041,10 @@ type NetworkPolicy struct { // +optional Spec NetworkPolicySpec `json:"spec,omitempty" protobuf:"bytes,2,opt,name=spec"` - // Status is the current state of the NetworkPolicy. - // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status - // +optional - Status NetworkPolicyStatus `json:"status,omitempty" protobuf:"bytes,3,opt,name=status"` + // Status is tombstoned to show why 3 is a reserved protobuf tag. + // This commented field should remain, so in the future if we decide to reimplement + // NetworkPolicyStatus a different protobuf name and tag SHOULD be used! + // Status NetworkPolicyStatus `json:"status,omitempty" protobuf:"bytes,3,opt,name=status"` } // DEPRECATED 1.9 - This group version of PolicyType is deprecated by networking/v1/PolicyType. @@ -1207,48 +1207,6 @@ type NetworkPolicyPeer struct { IPBlock *IPBlock `json:"ipBlock,omitempty" protobuf:"bytes,3,rep,name=ipBlock"` } -// NetworkPolicyConditionType is the type for status conditions on -// a NetworkPolicy. This type should be used with the -// NetworkPolicyStatus.Conditions field. 
-type NetworkPolicyConditionType string - -const ( - // NetworkPolicyConditionStatusAccepted represents status of a Network Policy that could be properly parsed by - // the Network Policy provider and will be implemented in the cluster - NetworkPolicyConditionStatusAccepted NetworkPolicyConditionType = "Accepted" - - // NetworkPolicyConditionStatusPartialFailure represents status of a Network Policy that could be partially - // parsed by the Network Policy provider and may not be completely implemented due to a lack of a feature or some - // other condition - NetworkPolicyConditionStatusPartialFailure NetworkPolicyConditionType = "PartialFailure" - - // NetworkPolicyConditionStatusFailure represents status of a Network Policy that could not be parsed by the - // Network Policy provider and will not be implemented in the cluster - NetworkPolicyConditionStatusFailure NetworkPolicyConditionType = "Failure" -) - -// NetworkPolicyConditionReason defines the set of reasons that explain why a -// particular NetworkPolicy condition type has been raised. -type NetworkPolicyConditionReason string - -const ( - // NetworkPolicyConditionReasonFeatureNotSupported represents a reason where the Network Policy may not have been - // implemented in the cluster due to a lack of some feature not supported by the Network Policy provider - NetworkPolicyConditionReasonFeatureNotSupported NetworkPolicyConditionReason = "FeatureNotSupported" -) - -// NetworkPolicyStatus describe the current state of the NetworkPolicy. -type NetworkPolicyStatus struct { - // Conditions holds an array of metav1.Condition that describe the state of the NetworkPolicy. 
- // Current service state - // +optional - // +patchMergeKey=type - // +patchStrategy=merge - // +listType=map - // +listMapKey=type - Conditions []metav1.Condition `json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions"` -} - // +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object // +k8s:prerelease-lifecycle-gen:introduced=1.3 // +k8s:prerelease-lifecycle-gen:deprecated=1.9 diff --git a/cluster-autoscaler/vendor/k8s.io/api/extensions/v1beta1/types_swagger_doc_generated.go b/cluster-autoscaler/vendor/k8s.io/api/extensions/v1beta1/types_swagger_doc_generated.go index 39aaf485377b..408022c9d84e 100644 --- a/cluster-autoscaler/vendor/k8s.io/api/extensions/v1beta1/types_swagger_doc_generated.go +++ b/cluster-autoscaler/vendor/k8s.io/api/extensions/v1beta1/types_swagger_doc_generated.go @@ -338,7 +338,6 @@ var map_NetworkPolicy = map[string]string{ "": "DEPRECATED 1.9 - This group version of NetworkPolicy is deprecated by networking/v1/NetworkPolicy. NetworkPolicy describes what network traffic is allowed for a set of Pods", "metadata": "Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata", "spec": "Specification of the desired behavior for this NetworkPolicy.", - "status": "Status is the current state of the NetworkPolicy. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status", } func (NetworkPolicy) SwaggerDoc() map[string]string { @@ -409,15 +408,6 @@ func (NetworkPolicySpec) SwaggerDoc() map[string]string { return map_NetworkPolicySpec } -var map_NetworkPolicyStatus = map[string]string{ - "": "NetworkPolicyStatus describe the current state of the NetworkPolicy.", - "conditions": "Conditions holds an array of metav1.Condition that describe the state of the NetworkPolicy. 
Current service state", -} - -func (NetworkPolicyStatus) SwaggerDoc() map[string]string { - return map_NetworkPolicyStatus -} - var map_ReplicaSet = map[string]string{ "": "DEPRECATED - This group version of ReplicaSet is deprecated by apps/v1beta2/ReplicaSet. See the release notes for more information. ReplicaSet ensures that a specified number of pod replicas are running at any given time.", "metadata": "If the Labels of a ReplicaSet are empty, they are defaulted to be the same as the Pod(s) that the ReplicaSet manages. Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata", diff --git a/cluster-autoscaler/vendor/k8s.io/api/extensions/v1beta1/zz_generated.deepcopy.go b/cluster-autoscaler/vendor/k8s.io/api/extensions/v1beta1/zz_generated.deepcopy.go index b6e92729928c..6b474ae483e8 100644 --- a/cluster-autoscaler/vendor/k8s.io/api/extensions/v1beta1/zz_generated.deepcopy.go +++ b/cluster-autoscaler/vendor/k8s.io/api/extensions/v1beta1/zz_generated.deepcopy.go @@ -725,7 +725,6 @@ func (in *NetworkPolicy) DeepCopyInto(out *NetworkPolicy) { out.TypeMeta = in.TypeMeta in.ObjectMeta.DeepCopyInto(&out.ObjectMeta) in.Spec.DeepCopyInto(&out.Spec) - in.Status.DeepCopyInto(&out.Status) return } @@ -938,29 +937,6 @@ func (in *NetworkPolicySpec) DeepCopy() *NetworkPolicySpec { return out } -// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. -func (in *NetworkPolicyStatus) DeepCopyInto(out *NetworkPolicyStatus) { - *out = *in - if in.Conditions != nil { - in, out := &in.Conditions, &out.Conditions - *out = make([]v1.Condition, len(*in)) - for i := range *in { - (*in)[i].DeepCopyInto(&(*out)[i]) - } - } - return -} - -// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new NetworkPolicyStatus. 
-func (in *NetworkPolicyStatus) DeepCopy() *NetworkPolicyStatus { - if in == nil { - return nil - } - out := new(NetworkPolicyStatus) - in.DeepCopyInto(out) - return out -} - // DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. func (in *ReplicaSet) DeepCopyInto(out *ReplicaSet) { *out = *in diff --git a/cluster-autoscaler/vendor/k8s.io/api/flowcontrol/v1alpha1/generated.pb.go b/cluster-autoscaler/vendor/k8s.io/api/flowcontrol/v1alpha1/generated.pb.go index cf5fc5600ba8..b54e1ceefbba 100644 --- a/cluster-autoscaler/vendor/k8s.io/api/flowcontrol/v1alpha1/generated.pb.go +++ b/cluster-autoscaler/vendor/k8s.io/api/flowcontrol/v1alpha1/generated.pb.go @@ -43,10 +43,38 @@ var _ = math.Inf // proto package needs to be updated. const _ = proto.GoGoProtoPackageIsVersion3 // please upgrade the proto package +func (m *ExemptPriorityLevelConfiguration) Reset() { *m = ExemptPriorityLevelConfiguration{} } +func (*ExemptPriorityLevelConfiguration) ProtoMessage() {} +func (*ExemptPriorityLevelConfiguration) Descriptor() ([]byte, []int) { + return fileDescriptor_45ba024d525b289b, []int{0} +} +func (m *ExemptPriorityLevelConfiguration) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *ExemptPriorityLevelConfiguration) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err + } + return b[:n], nil +} +func (m *ExemptPriorityLevelConfiguration) XXX_Merge(src proto.Message) { + xxx_messageInfo_ExemptPriorityLevelConfiguration.Merge(m, src) +} +func (m *ExemptPriorityLevelConfiguration) XXX_Size() int { + return m.Size() +} +func (m *ExemptPriorityLevelConfiguration) XXX_DiscardUnknown() { + xxx_messageInfo_ExemptPriorityLevelConfiguration.DiscardUnknown(m) +} + +var xxx_messageInfo_ExemptPriorityLevelConfiguration proto.InternalMessageInfo + func (m *FlowDistinguisherMethod) Reset() { *m = 
FlowDistinguisherMethod{} } func (*FlowDistinguisherMethod) ProtoMessage() {} func (*FlowDistinguisherMethod) Descriptor() ([]byte, []int) { - return fileDescriptor_45ba024d525b289b, []int{0} + return fileDescriptor_45ba024d525b289b, []int{1} } func (m *FlowDistinguisherMethod) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -74,7 +102,7 @@ var xxx_messageInfo_FlowDistinguisherMethod proto.InternalMessageInfo func (m *FlowSchema) Reset() { *m = FlowSchema{} } func (*FlowSchema) ProtoMessage() {} func (*FlowSchema) Descriptor() ([]byte, []int) { - return fileDescriptor_45ba024d525b289b, []int{1} + return fileDescriptor_45ba024d525b289b, []int{2} } func (m *FlowSchema) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -102,7 +130,7 @@ var xxx_messageInfo_FlowSchema proto.InternalMessageInfo func (m *FlowSchemaCondition) Reset() { *m = FlowSchemaCondition{} } func (*FlowSchemaCondition) ProtoMessage() {} func (*FlowSchemaCondition) Descriptor() ([]byte, []int) { - return fileDescriptor_45ba024d525b289b, []int{2} + return fileDescriptor_45ba024d525b289b, []int{3} } func (m *FlowSchemaCondition) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -130,7 +158,7 @@ var xxx_messageInfo_FlowSchemaCondition proto.InternalMessageInfo func (m *FlowSchemaList) Reset() { *m = FlowSchemaList{} } func (*FlowSchemaList) ProtoMessage() {} func (*FlowSchemaList) Descriptor() ([]byte, []int) { - return fileDescriptor_45ba024d525b289b, []int{3} + return fileDescriptor_45ba024d525b289b, []int{4} } func (m *FlowSchemaList) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -158,7 +186,7 @@ var xxx_messageInfo_FlowSchemaList proto.InternalMessageInfo func (m *FlowSchemaSpec) Reset() { *m = FlowSchemaSpec{} } func (*FlowSchemaSpec) ProtoMessage() {} func (*FlowSchemaSpec) Descriptor() ([]byte, []int) { - return fileDescriptor_45ba024d525b289b, []int{4} + return fileDescriptor_45ba024d525b289b, []int{5} } func (m *FlowSchemaSpec) XXX_Unmarshal(b []byte) error { 
return m.Unmarshal(b) @@ -186,7 +214,7 @@ var xxx_messageInfo_FlowSchemaSpec proto.InternalMessageInfo func (m *FlowSchemaStatus) Reset() { *m = FlowSchemaStatus{} } func (*FlowSchemaStatus) ProtoMessage() {} func (*FlowSchemaStatus) Descriptor() ([]byte, []int) { - return fileDescriptor_45ba024d525b289b, []int{5} + return fileDescriptor_45ba024d525b289b, []int{6} } func (m *FlowSchemaStatus) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -214,7 +242,7 @@ var xxx_messageInfo_FlowSchemaStatus proto.InternalMessageInfo func (m *GroupSubject) Reset() { *m = GroupSubject{} } func (*GroupSubject) ProtoMessage() {} func (*GroupSubject) Descriptor() ([]byte, []int) { - return fileDescriptor_45ba024d525b289b, []int{6} + return fileDescriptor_45ba024d525b289b, []int{7} } func (m *GroupSubject) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -242,7 +270,7 @@ var xxx_messageInfo_GroupSubject proto.InternalMessageInfo func (m *LimitResponse) Reset() { *m = LimitResponse{} } func (*LimitResponse) ProtoMessage() {} func (*LimitResponse) Descriptor() ([]byte, []int) { - return fileDescriptor_45ba024d525b289b, []int{7} + return fileDescriptor_45ba024d525b289b, []int{8} } func (m *LimitResponse) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -270,7 +298,7 @@ var xxx_messageInfo_LimitResponse proto.InternalMessageInfo func (m *LimitedPriorityLevelConfiguration) Reset() { *m = LimitedPriorityLevelConfiguration{} } func (*LimitedPriorityLevelConfiguration) ProtoMessage() {} func (*LimitedPriorityLevelConfiguration) Descriptor() ([]byte, []int) { - return fileDescriptor_45ba024d525b289b, []int{8} + return fileDescriptor_45ba024d525b289b, []int{9} } func (m *LimitedPriorityLevelConfiguration) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -298,7 +326,7 @@ var xxx_messageInfo_LimitedPriorityLevelConfiguration proto.InternalMessageInfo func (m *NonResourcePolicyRule) Reset() { *m = NonResourcePolicyRule{} } func (*NonResourcePolicyRule) 
ProtoMessage() {} func (*NonResourcePolicyRule) Descriptor() ([]byte, []int) { - return fileDescriptor_45ba024d525b289b, []int{9} + return fileDescriptor_45ba024d525b289b, []int{10} } func (m *NonResourcePolicyRule) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -326,7 +354,7 @@ var xxx_messageInfo_NonResourcePolicyRule proto.InternalMessageInfo func (m *PolicyRulesWithSubjects) Reset() { *m = PolicyRulesWithSubjects{} } func (*PolicyRulesWithSubjects) ProtoMessage() {} func (*PolicyRulesWithSubjects) Descriptor() ([]byte, []int) { - return fileDescriptor_45ba024d525b289b, []int{10} + return fileDescriptor_45ba024d525b289b, []int{11} } func (m *PolicyRulesWithSubjects) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -354,7 +382,7 @@ var xxx_messageInfo_PolicyRulesWithSubjects proto.InternalMessageInfo func (m *PriorityLevelConfiguration) Reset() { *m = PriorityLevelConfiguration{} } func (*PriorityLevelConfiguration) ProtoMessage() {} func (*PriorityLevelConfiguration) Descriptor() ([]byte, []int) { - return fileDescriptor_45ba024d525b289b, []int{11} + return fileDescriptor_45ba024d525b289b, []int{12} } func (m *PriorityLevelConfiguration) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -382,7 +410,7 @@ var xxx_messageInfo_PriorityLevelConfiguration proto.InternalMessageInfo func (m *PriorityLevelConfigurationCondition) Reset() { *m = PriorityLevelConfigurationCondition{} } func (*PriorityLevelConfigurationCondition) ProtoMessage() {} func (*PriorityLevelConfigurationCondition) Descriptor() ([]byte, []int) { - return fileDescriptor_45ba024d525b289b, []int{12} + return fileDescriptor_45ba024d525b289b, []int{13} } func (m *PriorityLevelConfigurationCondition) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -410,7 +438,7 @@ var xxx_messageInfo_PriorityLevelConfigurationCondition proto.InternalMessageInf func (m *PriorityLevelConfigurationList) Reset() { *m = PriorityLevelConfigurationList{} } func (*PriorityLevelConfigurationList) 
ProtoMessage() {} func (*PriorityLevelConfigurationList) Descriptor() ([]byte, []int) { - return fileDescriptor_45ba024d525b289b, []int{13} + return fileDescriptor_45ba024d525b289b, []int{14} } func (m *PriorityLevelConfigurationList) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -438,7 +466,7 @@ var xxx_messageInfo_PriorityLevelConfigurationList proto.InternalMessageInfo func (m *PriorityLevelConfigurationReference) Reset() { *m = PriorityLevelConfigurationReference{} } func (*PriorityLevelConfigurationReference) ProtoMessage() {} func (*PriorityLevelConfigurationReference) Descriptor() ([]byte, []int) { - return fileDescriptor_45ba024d525b289b, []int{14} + return fileDescriptor_45ba024d525b289b, []int{15} } func (m *PriorityLevelConfigurationReference) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -466,7 +494,7 @@ var xxx_messageInfo_PriorityLevelConfigurationReference proto.InternalMessageInf func (m *PriorityLevelConfigurationSpec) Reset() { *m = PriorityLevelConfigurationSpec{} } func (*PriorityLevelConfigurationSpec) ProtoMessage() {} func (*PriorityLevelConfigurationSpec) Descriptor() ([]byte, []int) { - return fileDescriptor_45ba024d525b289b, []int{15} + return fileDescriptor_45ba024d525b289b, []int{16} } func (m *PriorityLevelConfigurationSpec) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -494,7 +522,7 @@ var xxx_messageInfo_PriorityLevelConfigurationSpec proto.InternalMessageInfo func (m *PriorityLevelConfigurationStatus) Reset() { *m = PriorityLevelConfigurationStatus{} } func (*PriorityLevelConfigurationStatus) ProtoMessage() {} func (*PriorityLevelConfigurationStatus) Descriptor() ([]byte, []int) { - return fileDescriptor_45ba024d525b289b, []int{16} + return fileDescriptor_45ba024d525b289b, []int{17} } func (m *PriorityLevelConfigurationStatus) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -522,7 +550,7 @@ var xxx_messageInfo_PriorityLevelConfigurationStatus proto.InternalMessageInfo func (m 
*QueuingConfiguration) Reset() { *m = QueuingConfiguration{} } func (*QueuingConfiguration) ProtoMessage() {} func (*QueuingConfiguration) Descriptor() ([]byte, []int) { - return fileDescriptor_45ba024d525b289b, []int{17} + return fileDescriptor_45ba024d525b289b, []int{18} } func (m *QueuingConfiguration) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -550,7 +578,7 @@ var xxx_messageInfo_QueuingConfiguration proto.InternalMessageInfo func (m *ResourcePolicyRule) Reset() { *m = ResourcePolicyRule{} } func (*ResourcePolicyRule) ProtoMessage() {} func (*ResourcePolicyRule) Descriptor() ([]byte, []int) { - return fileDescriptor_45ba024d525b289b, []int{18} + return fileDescriptor_45ba024d525b289b, []int{19} } func (m *ResourcePolicyRule) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -578,7 +606,7 @@ var xxx_messageInfo_ResourcePolicyRule proto.InternalMessageInfo func (m *ServiceAccountSubject) Reset() { *m = ServiceAccountSubject{} } func (*ServiceAccountSubject) ProtoMessage() {} func (*ServiceAccountSubject) Descriptor() ([]byte, []int) { - return fileDescriptor_45ba024d525b289b, []int{19} + return fileDescriptor_45ba024d525b289b, []int{20} } func (m *ServiceAccountSubject) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -606,7 +634,7 @@ var xxx_messageInfo_ServiceAccountSubject proto.InternalMessageInfo func (m *Subject) Reset() { *m = Subject{} } func (*Subject) ProtoMessage() {} func (*Subject) Descriptor() ([]byte, []int) { - return fileDescriptor_45ba024d525b289b, []int{20} + return fileDescriptor_45ba024d525b289b, []int{21} } func (m *Subject) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -634,7 +662,7 @@ var xxx_messageInfo_Subject proto.InternalMessageInfo func (m *UserSubject) Reset() { *m = UserSubject{} } func (*UserSubject) ProtoMessage() {} func (*UserSubject) Descriptor() ([]byte, []int) { - return fileDescriptor_45ba024d525b289b, []int{21} + return fileDescriptor_45ba024d525b289b, []int{22} } func (m 
*UserSubject) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -660,6 +688,7 @@ func (m *UserSubject) XXX_DiscardUnknown() { var xxx_messageInfo_UserSubject proto.InternalMessageInfo func init() { + proto.RegisterType((*ExemptPriorityLevelConfiguration)(nil), "k8s.io.api.flowcontrol.v1alpha1.ExemptPriorityLevelConfiguration") proto.RegisterType((*FlowDistinguisherMethod)(nil), "k8s.io.api.flowcontrol.v1alpha1.FlowDistinguisherMethod") proto.RegisterType((*FlowSchema)(nil), "k8s.io.api.flowcontrol.v1alpha1.FlowSchema") proto.RegisterType((*FlowSchemaCondition)(nil), "k8s.io.api.flowcontrol.v1alpha1.FlowSchemaCondition") @@ -689,105 +718,142 @@ func init() { } var fileDescriptor_45ba024d525b289b = []byte{ - // 1554 bytes of a gzipped FileDescriptorProto - 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xec, 0x58, 0x4d, 0x6f, 0x13, 0xc7, - 0x1b, 0xcf, 0x3a, 0x76, 0x12, 0x4f, 0x5e, 0x99, 0x10, 0xc5, 0xff, 0x20, 0xd9, 0x61, 0xff, 0x52, - 0xa1, 0x05, 0x76, 0x09, 0x05, 0x4a, 0x85, 0x2a, 0x94, 0x0d, 0x94, 0xb7, 0x24, 0x24, 0x13, 0xa0, - 0x2a, 0xa2, 0x12, 0x9b, 0xf5, 0xc4, 0x1e, 0x62, 0xef, 0x6e, 0x67, 0x76, 0x9d, 0xa6, 0xe2, 0x50, - 0xa9, 0x5f, 0xa0, 0x1f, 0x80, 0x63, 0x0f, 0x3d, 0xf7, 0x13, 0xf4, 0x18, 0x55, 0x3d, 0x70, 0xe4, - 0x64, 0x11, 0xf7, 0xda, 0x0f, 0xd0, 0x72, 0xa8, 0xaa, 0x99, 0x9d, 0xdd, 0xf5, 0xfa, 0x25, 0x6b, - 0x1a, 0x89, 0x53, 0x6f, 0xd9, 0xe7, 0xe5, 0xf7, 0xbc, 0xcc, 0xf3, 0xe6, 0x80, 0x3b, 0xbb, 0xd7, - 0x98, 0x46, 0x1c, 0x7d, 0xd7, 0xdf, 0xc6, 0xd4, 0xc6, 0x1e, 0x66, 0x7a, 0x03, 0xdb, 0x65, 0x87, - 0xea, 0x92, 0x61, 0xba, 0x44, 0xdf, 0xa9, 0x39, 0x7b, 0x96, 0x63, 0x7b, 0xd4, 0xa9, 0xe9, 0x8d, - 0x25, 0xb3, 0xe6, 0x56, 0xcd, 0x25, 0xbd, 0x82, 0x6d, 0x4c, 0x4d, 0x0f, 0x97, 0x35, 0x97, 0x3a, - 0x9e, 0x03, 0x4b, 0x81, 0x82, 0x66, 0xba, 0x44, 0x6b, 0x53, 0xd0, 0x42, 0x85, 0x85, 0x0b, 0x15, - 0xe2, 0x55, 0xfd, 0x6d, 0xcd, 0x72, 0xea, 0x7a, 0xc5, 0xa9, 0x38, 0xba, 0xd0, 0xdb, 0xf6, 0x77, - 0xc4, 0x97, 0xf8, 0x10, 0x7f, 0x05, 0x78, 0x0b, 
0x97, 0x63, 0x07, 0xea, 0xa6, 0x55, 0x25, 0x36, - 0xa6, 0xfb, 0xba, 0xbb, 0x5b, 0xe1, 0x04, 0xa6, 0xd7, 0xb1, 0x67, 0xea, 0x8d, 0x2e, 0x2f, 0x16, - 0xf4, 0x7e, 0x5a, 0xd4, 0xb7, 0x3d, 0x52, 0xc7, 0x5d, 0x0a, 0x57, 0xd3, 0x14, 0x98, 0x55, 0xc5, - 0x75, 0xb3, 0x53, 0x4f, 0x7d, 0x02, 0xe6, 0x3f, 0xaf, 0x39, 0x7b, 0x37, 0x09, 0xf3, 0x88, 0x5d, - 0xf1, 0x09, 0xab, 0x62, 0xba, 0x86, 0xbd, 0xaa, 0x53, 0x86, 0x37, 0x40, 0xd6, 0xdb, 0x77, 0x71, - 0x41, 0x59, 0x54, 0xce, 0xe6, 0x8d, 0x73, 0x07, 0xcd, 0xd2, 0x50, 0xab, 0x59, 0xca, 0x3e, 0xdc, - 0x77, 0xf1, 0xdb, 0x66, 0xe9, 0x54, 0x1f, 0x35, 0xce, 0x46, 0x42, 0x51, 0x7d, 0x99, 0x01, 0x80, - 0x4b, 0x6d, 0x09, 0xd3, 0xf0, 0x19, 0x18, 0xe3, 0xe1, 0x96, 0x4d, 0xcf, 0x14, 0x98, 0xe3, 0x97, - 0x2e, 0x6a, 0x71, 0xb2, 0x23, 0xaf, 0x35, 0x77, 0xb7, 0xc2, 0x09, 0x4c, 0xe3, 0xd2, 0x5a, 0x63, - 0x49, 0x7b, 0xb0, 0xfd, 0x1c, 0x5b, 0xde, 0x1a, 0xf6, 0x4c, 0x03, 0x4a, 0x2f, 0x40, 0x4c, 0x43, - 0x11, 0x2a, 0xdc, 0x04, 0x59, 0xe6, 0x62, 0xab, 0x90, 0x11, 0xe8, 0xba, 0x96, 0xf2, 0x94, 0x5a, - 0xec, 0xdc, 0x96, 0x8b, 0x2d, 0x63, 0x22, 0x0c, 0x91, 0x7f, 0x21, 0x01, 0x05, 0xbf, 0x04, 0x23, - 0xcc, 0x33, 0x3d, 0x9f, 0x15, 0x86, 0x05, 0xe8, 0xd2, 0xbb, 0x80, 0x0a, 0x45, 0x63, 0x4a, 0xc2, - 0x8e, 0x04, 0xdf, 0x48, 0x02, 0xaa, 0xaf, 0x33, 0x60, 0x36, 0x16, 0x5e, 0x71, 0xec, 0x32, 0xf1, - 0x88, 0x63, 0xc3, 0xeb, 0x89, 0xbc, 0x9f, 0xe9, 0xc8, 0xfb, 0x7c, 0x0f, 0x95, 0x38, 0xe7, 0xf0, - 0xd3, 0xc8, 0xdf, 0x8c, 0x50, 0x3f, 0x9d, 0x34, 0xfe, 0xb6, 0x59, 0x9a, 0x8e, 0xd4, 0x92, 0xfe, - 0xc0, 0x06, 0x80, 0x35, 0x93, 0x79, 0x0f, 0xa9, 0x69, 0xb3, 0x00, 0x96, 0xd4, 0xb1, 0x0c, 0xfb, - 0xa3, 0xc1, 0x5e, 0x8a, 0x6b, 0x18, 0x0b, 0xd2, 0x24, 0x5c, 0xed, 0x42, 0x43, 0x3d, 0x2c, 0xc0, - 0x0f, 0xc0, 0x08, 0xc5, 0x26, 0x73, 0xec, 0x42, 0x56, 0xb8, 0x1c, 0xe5, 0x0b, 0x09, 0x2a, 0x92, - 0x5c, 0xf8, 0x21, 0x18, 0xad, 0x63, 0xc6, 0xcc, 0x0a, 0x2e, 0xe4, 0x84, 0xe0, 0xb4, 0x14, 0x1c, - 0x5d, 0x0b, 0xc8, 0x28, 0xe4, 0xab, 0xbf, 0x28, 0x60, 0x2a, 0xce, 0xd3, 0x2a, 0x61, 
0x1e, 0x7c, - 0xda, 0x55, 0x7d, 0xda, 0x60, 0x31, 0x71, 0x6d, 0x51, 0x7b, 0x33, 0xd2, 0xdc, 0x58, 0x48, 0x69, - 0xab, 0xbc, 0x0d, 0x90, 0x23, 0x1e, 0xae, 0xf3, 0xac, 0x0f, 0x9f, 0x1d, 0xbf, 0x74, 0xee, 0x1d, - 0xaa, 0xc4, 0x98, 0x94, 0xb8, 0xb9, 0xbb, 0x1c, 0x01, 0x05, 0x40, 0xea, 0x1f, 0xc3, 0xed, 0x21, - 0xf0, 0x8a, 0x84, 0x3f, 0x29, 0x60, 0xc1, 0xa5, 0xc4, 0xa1, 0xc4, 0xdb, 0x5f, 0xc5, 0x0d, 0x5c, - 0x5b, 0x71, 0xec, 0x1d, 0x52, 0xf1, 0xa9, 0xc9, 0x73, 0x29, 0xa3, 0xba, 0x99, 0x6a, 0x7a, 0xa3, - 0x2f, 0x04, 0xc2, 0x3b, 0x98, 0x62, 0xdb, 0xc2, 0x86, 0x2a, 0x7d, 0x5a, 0x38, 0x42, 0xf8, 0x08, - 0x5f, 0xe0, 0x3d, 0x00, 0xeb, 0xa6, 0xc7, 0x73, 0x5a, 0xd9, 0xa0, 0xd8, 0xc2, 0x65, 0x8e, 0x2a, - 0x4a, 0x32, 0x17, 0xd7, 0xc7, 0x5a, 0x97, 0x04, 0xea, 0xa1, 0x05, 0xbf, 0x57, 0xc0, 0x6c, 0xb9, - 0x7b, 0xd0, 0xc8, 0xca, 0xbc, 0x36, 0x50, 0xaa, 0x7b, 0x0c, 0x2a, 0x63, 0xbe, 0xd5, 0x2c, 0xcd, - 0xf6, 0x60, 0xa0, 0x5e, 0xd6, 0xe0, 0x57, 0x20, 0x47, 0xfd, 0x1a, 0x66, 0x85, 0xac, 0x78, 0xe1, - 0x74, 0xb3, 0x1b, 0x4e, 0x8d, 0x58, 0xfb, 0x88, 0xeb, 0x7c, 0x41, 0xbc, 0xea, 0x96, 0x2f, 0x26, - 0x16, 0x8b, 0x9f, 0x5b, 0xb0, 0x50, 0x80, 0xaa, 0xbe, 0x00, 0x33, 0x9d, 0x83, 0x03, 0x56, 0x01, - 0xb0, 0xc2, 0x5e, 0x65, 0x05, 0x45, 0xd8, 0xbd, 0xfc, 0x0e, 0x95, 0x15, 0x35, 0x7a, 0x3c, 0x36, - 0x23, 0x12, 0x43, 0x6d, 0xd8, 0xea, 0x45, 0x30, 0x71, 0x9b, 0x3a, 0xbe, 0x2b, 0x9d, 0x84, 0x8b, - 0x20, 0x6b, 0x9b, 0xf5, 0x70, 0x04, 0x45, 0x73, 0x71, 0xdd, 0xac, 0x63, 0x24, 0x38, 0xea, 0x8f, - 0x0a, 0x98, 0x5c, 0x25, 0x75, 0xe2, 0x21, 0xcc, 0x5c, 0xc7, 0x66, 0x18, 0x5e, 0x49, 0x8c, 0xad, - 0xd3, 0x1d, 0x63, 0xeb, 0x44, 0x42, 0xb8, 0x6d, 0x60, 0x3d, 0x05, 0xa3, 0x5f, 0xfb, 0xd8, 0x27, - 0x76, 0x45, 0x8e, 0xed, 0x2b, 0xa9, 0x11, 0x6e, 0x06, 0xf2, 0x89, 0x8a, 0x33, 0xc6, 0xf9, 0x20, - 0x90, 0x1c, 0x14, 0x42, 0xaa, 0x7f, 0x67, 0xc0, 0x69, 0x61, 0x19, 0x97, 0xfb, 0x57, 0x32, 0x7c, - 0x0a, 0x0a, 0x26, 0x63, 0x3e, 0xc5, 0xe5, 0x15, 0xc7, 0xb6, 0x7c, 0xca, 0x7b, 0x60, 0x7f, 0xab, - 0x6a, 0x52, 0xcc, 0x44, 
0x38, 0x39, 0x63, 0x51, 0x86, 0x53, 0x58, 0xee, 0x23, 0x87, 0xfa, 0x22, - 0xc0, 0x5d, 0x30, 0x59, 0x6b, 0x0f, 0x5e, 0xc6, 0xa9, 0xa5, 0xc6, 0x99, 0x48, 0x99, 0x31, 0x27, - 0x5d, 0x48, 0xa6, 0x1d, 0x25, 0xb1, 0xe1, 0x67, 0x60, 0xba, 0x86, 0xed, 0xb2, 0xb9, 0x5d, 0xc3, - 0x1b, 0x98, 0x5a, 0xd8, 0xf6, 0x44, 0x9f, 0xe4, 0x8c, 0xd9, 0x56, 0xb3, 0x34, 0xbd, 0x9a, 0x64, - 0xa1, 0x4e, 0x59, 0xf8, 0x00, 0xcc, 0x6d, 0x3b, 0x94, 0x3a, 0x7b, 0xc4, 0xae, 0x08, 0x3b, 0x21, - 0x48, 0x56, 0x80, 0xfc, 0xaf, 0xd5, 0x2c, 0xcd, 0x19, 0xbd, 0x04, 0x50, 0x6f, 0x3d, 0x75, 0x0f, - 0xcc, 0xad, 0xf3, 0xc1, 0xc2, 0x1c, 0x9f, 0x5a, 0x38, 0xee, 0x09, 0x58, 0x02, 0xb9, 0x06, 0xa6, - 0xdb, 0x41, 0x5d, 0xe7, 0x8d, 0x3c, 0xef, 0x88, 0xc7, 0x9c, 0x80, 0x02, 0x3a, 0x8f, 0xc4, 0x8e, - 0x35, 0x1f, 0xa1, 0x55, 0x56, 0x18, 0x11, 0xa2, 0x22, 0x92, 0xf5, 0x24, 0x0b, 0x75, 0xca, 0xaa, - 0x87, 0x19, 0x30, 0xdf, 0xa7, 0x05, 0xe1, 0x63, 0x30, 0xc6, 0xe4, 0xdf, 0xb2, 0xad, 0xce, 0xa6, - 0x3e, 0x86, 0x54, 0x8e, 0xb7, 0x40, 0x88, 0x86, 0x22, 0x2c, 0xe8, 0x82, 0x49, 0x2a, 0x7d, 0x10, - 0x46, 0xe5, 0x36, 0xf8, 0x38, 0x15, 0xbc, 0x3b, 0x3f, 0xf1, 0x73, 0xa3, 0x76, 0x44, 0x94, 0x34, - 0x00, 0x5f, 0x80, 0x99, 0xb6, 0xc0, 0x03, 0xa3, 0xc3, 0xc2, 0xe8, 0xd5, 0x54, 0xa3, 0x3d, 0xdf, - 0xc5, 0x28, 0x48, 0xbb, 0x33, 0xeb, 0x1d, 0xb8, 0xa8, 0xcb, 0x92, 0xfa, 0x5b, 0x06, 0x1c, 0xb1, - 0x20, 0xde, 0xc3, 0xc1, 0x67, 0x26, 0x0e, 0xbe, 0x1b, 0xc7, 0x58, 0x7d, 0x7d, 0x0f, 0x40, 0xd2, - 0x71, 0x00, 0x2e, 0x1f, 0xc7, 0xc8, 0xd1, 0x07, 0xe1, 0x9f, 0x19, 0xf0, 0xff, 0xfe, 0xca, 0xf1, - 0x81, 0x78, 0x3f, 0x31, 0x69, 0x3f, 0xe9, 0x98, 0xb4, 0x67, 0x06, 0x80, 0xf8, 0xef, 0x60, 0xec, - 0x38, 0x18, 0xdf, 0x28, 0xa0, 0xd8, 0x3f, 0x6f, 0xef, 0xe1, 0x80, 0x7c, 0x96, 0x3c, 0x20, 0xaf, - 0x1f, 0xa3, 0xca, 0xfa, 0x1c, 0x94, 0xb7, 0x8f, 0x2a, 0xae, 0xe8, 0xf2, 0x1b, 0x60, 0xf5, 0x1f, - 0x1c, 0x99, 0x2b, 0x71, 0xa9, 0xa6, 0xfc, 0x84, 0x49, 0x68, 0xdf, 0xb2, 0xf9, 0x02, 0xaa, 0xf3, - 0x1d, 0x12, 0x54, 0x24, 0x01, 0xa3, 0xb5, 0x60, 0x65, 0xcb, 
0xbe, 0x36, 0x06, 0xdb, 0x94, 0x47, - 0xad, 0xf8, 0xe0, 0x3c, 0x90, 0x62, 0x28, 0xc4, 0x57, 0x5f, 0x2a, 0x60, 0x31, 0xad, 0x5d, 0xe1, - 0x37, 0x3d, 0xce, 0xb0, 0xe3, 0x5c, 0xd9, 0x83, 0x9f, 0x65, 0x3f, 0x2b, 0xe0, 0x64, 0xaf, 0x63, - 0x87, 0x77, 0x00, 0xbf, 0x70, 0xa2, 0xf3, 0x24, 0xea, 0x80, 0x4d, 0x41, 0x45, 0x92, 0x0b, 0xcf, - 0x83, 0xb1, 0xaa, 0x69, 0x97, 0xb7, 0xc8, 0xb7, 0xe1, 0xf1, 0x1d, 0xd5, 0xe0, 0x1d, 0x49, 0x47, - 0x91, 0x04, 0xbc, 0x09, 0x66, 0x84, 0xde, 0x2a, 0xb6, 0x2b, 0x5e, 0x55, 0x24, 0x4b, 0x1e, 0x0f, - 0xd1, 0x52, 0xd8, 0xec, 0xe0, 0xa3, 0x2e, 0x0d, 0xf5, 0x2f, 0x05, 0xc0, 0x7f, 0xb3, 0xef, 0xcf, - 0x81, 0xbc, 0xe9, 0x12, 0x71, 0x86, 0x06, 0x5d, 0x90, 0x37, 0x26, 0x5b, 0xcd, 0x52, 0x7e, 0x79, - 0xe3, 0x6e, 0x40, 0x44, 0x31, 0x9f, 0x0b, 0x87, 0x8b, 0x30, 0x58, 0x78, 0x52, 0x38, 0x34, 0xcc, - 0x50, 0xcc, 0x87, 0xd7, 0xc0, 0x84, 0x55, 0xf3, 0x99, 0x87, 0xe9, 0x96, 0xe5, 0xb8, 0x58, 0x4c, - 0x8d, 0x31, 0xe3, 0xa4, 0x8c, 0x69, 0x62, 0xa5, 0x8d, 0x87, 0x12, 0x92, 0x50, 0x03, 0x80, 0x97, - 0x3c, 0x73, 0x4d, 0x6e, 0x27, 0x27, 0xec, 0x4c, 0xf1, 0x07, 0x5b, 0x8f, 0xa8, 0xa8, 0x4d, 0x42, - 0x7d, 0x0e, 0xe6, 0xb6, 0x30, 0x6d, 0x10, 0x0b, 0x2f, 0x5b, 0x96, 0xe3, 0xdb, 0x5e, 0x78, 0x50, - 0xeb, 0x20, 0x1f, 0x89, 0xc9, 0xae, 0x38, 0x21, 0xed, 0xe7, 0x23, 0x2c, 0x14, 0xcb, 0x44, 0x6d, - 0x98, 0xe9, 0xdb, 0x86, 0xbf, 0x66, 0xc0, 0x68, 0x0c, 0x9f, 0xdd, 0x25, 0x76, 0x59, 0x22, 0x9f, - 0x0a, 0xa5, 0xef, 0x13, 0xbb, 0xfc, 0xb6, 0x59, 0x1a, 0x97, 0x62, 0xfc, 0x13, 0x09, 0x41, 0x78, - 0x0f, 0x64, 0x7d, 0x86, 0xa9, 0x6c, 0xb0, 0xf3, 0xa9, 0xd5, 0xfc, 0x88, 0x61, 0x1a, 0x5e, 0x40, - 0x63, 0x1c, 0x9a, 0x13, 0x90, 0xc0, 0x80, 0xeb, 0x20, 0x57, 0xe1, 0xaf, 0x22, 0x27, 0xff, 0x85, - 0x54, 0xb0, 0xf6, 0x9f, 0x1a, 0x41, 0x21, 0x08, 0x0a, 0x0a, 0x60, 0x20, 0x05, 0x53, 0x2c, 0x91, - 0x44, 0xf1, 0x60, 0x83, 0x5c, 0x34, 0x3d, 0x73, 0x6f, 0xc0, 0x56, 0xb3, 0x34, 0x95, 0x64, 0xa1, - 0x0e, 0x0b, 0xaa, 0x0e, 0xc6, 0xdb, 0x42, 0x4c, 0x1f, 0x82, 0xc6, 0xad, 0x83, 0xc3, 0xe2, 0xd0, - 
0xab, 0xc3, 0xe2, 0xd0, 0xeb, 0xc3, 0xe2, 0xd0, 0x77, 0xad, 0xa2, 0x72, 0xd0, 0x2a, 0x2a, 0xaf, - 0x5a, 0x45, 0xe5, 0x75, 0xab, 0xa8, 0xbc, 0x69, 0x15, 0x95, 0x1f, 0x7e, 0x2f, 0x0e, 0x3d, 0x29, - 0xa5, 0xfc, 0xf7, 0xf1, 0x9f, 0x00, 0x00, 0x00, 0xff, 0xff, 0x48, 0x2e, 0x29, 0x16, 0xb8, 0x14, - 0x00, 0x00, + // 1621 bytes of a gzipped FileDescriptorProto + 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xec, 0x58, 0x4d, 0x6f, 0xdb, 0x46, + 0x1a, 0x36, 0x65, 0xc9, 0xb6, 0xc6, 0x9f, 0x19, 0xc7, 0xb0, 0xd6, 0x59, 0x48, 0x0e, 0x17, 0xd8, + 0x64, 0x37, 0x09, 0x15, 0x67, 0x93, 0x6c, 0x16, 0xc1, 0x22, 0x30, 0x93, 0x6c, 0xbe, 0x6c, 0xc7, + 0x1e, 0x27, 0xd9, 0x36, 0x48, 0x81, 0xd0, 0xd4, 0x58, 0x9a, 0x58, 0x22, 0xd9, 0x19, 0x52, 0x8e, + 0x8b, 0x1c, 0x0a, 0xf4, 0x0f, 0xf4, 0x07, 0xe4, 0xd8, 0x43, 0x6f, 0x05, 0x7a, 0xed, 0xa5, 0xc7, + 0xa0, 0xe8, 0x21, 0xc7, 0x9c, 0x84, 0x58, 0xbd, 0xf6, 0x07, 0xb4, 0x39, 0x14, 0xc5, 0x0c, 0x87, + 0xa4, 0x28, 0x89, 0xa2, 0x52, 0x03, 0x39, 0xf5, 0x66, 0xbe, 0x1f, 0xcf, 0x3b, 0xf3, 0xce, 0xfb, + 0xf1, 0xc8, 0xe0, 0xf6, 0xde, 0x15, 0xa6, 0x11, 0xbb, 0xbc, 0xe7, 0xed, 0x60, 0x6a, 0x61, 0x17, + 0xb3, 0x72, 0x13, 0x5b, 0x15, 0x9b, 0x96, 0xa5, 0xc2, 0x70, 0x48, 0x79, 0xb7, 0x6e, 0xef, 0x9b, + 0xb6, 0xe5, 0x52, 0xbb, 0x5e, 0x6e, 0xae, 0x18, 0x75, 0xa7, 0x66, 0xac, 0x94, 0xab, 0xd8, 0xc2, + 0xd4, 0x70, 0x71, 0x45, 0x73, 0xa8, 0xed, 0xda, 0xb0, 0xe4, 0x3b, 0x68, 0x86, 0x43, 0xb4, 0x0e, + 0x07, 0x2d, 0x70, 0x58, 0x3a, 0x57, 0x25, 0x6e, 0xcd, 0xdb, 0xd1, 0x4c, 0xbb, 0x51, 0xae, 0xda, + 0x55, 0xbb, 0x2c, 0xfc, 0x76, 0xbc, 0x5d, 0xf1, 0x25, 0x3e, 0xc4, 0x5f, 0x3e, 0xde, 0xd2, 0xc5, + 0xe8, 0x00, 0x0d, 0xc3, 0xac, 0x11, 0x0b, 0xd3, 0x83, 0xb2, 0xb3, 0x57, 0xe5, 0x02, 0x56, 0x6e, + 0x60, 0xd7, 0x28, 0x37, 0x7b, 0x4e, 0xb1, 0x54, 0x4e, 0xf2, 0xa2, 0x9e, 0xe5, 0x92, 0x06, 0xee, + 0x71, 0xb8, 0x9c, 0xe6, 0xc0, 0xcc, 0x1a, 0x6e, 0x18, 0xdd, 0x7e, 0xea, 0x77, 0x0a, 0x58, 0xbe, + 0xf9, 0x1c, 0x37, 0x1c, 0x77, 0x93, 0x12, 0x9b, 0x12, 0xf7, 0x60, 0x0d, 
0x37, 0x71, 0xfd, 0xba, + 0x6d, 0xed, 0x92, 0xaa, 0x47, 0x0d, 0x97, 0xd8, 0x16, 0xfc, 0x08, 0x14, 0x2c, 0xbb, 0x41, 0x2c, + 0x83, 0xcb, 0x4d, 0x8f, 0x52, 0x6c, 0x99, 0x07, 0xdb, 0x35, 0x83, 0x62, 0x56, 0x50, 0x96, 0x95, + 0xd3, 0x39, 0xfd, 0xaf, 0xed, 0x56, 0xa9, 0xb0, 0x91, 0x60, 0x83, 0x12, 0xbd, 0xe1, 0x7f, 0xc1, + 0x6c, 0x1d, 0x5b, 0x15, 0x63, 0xa7, 0x8e, 0x37, 0x31, 0x35, 0xb1, 0xe5, 0x16, 0x32, 0x02, 0x70, + 0xbe, 0xdd, 0x2a, 0xcd, 0xae, 0xc5, 0x55, 0xa8, 0xdb, 0x56, 0x7d, 0x0c, 0x16, 0xff, 0x57, 0xb7, + 0xf7, 0x6f, 0x10, 0xe6, 0x12, 0xab, 0xea, 0x11, 0x56, 0xc3, 0x74, 0x1d, 0xbb, 0x35, 0xbb, 0x02, + 0xaf, 0x81, 0xac, 0x7b, 0xe0, 0x60, 0x71, 0xbe, 0xbc, 0x7e, 0xe6, 0x55, 0xab, 0x34, 0xd2, 0x6e, + 0x95, 0xb2, 0x0f, 0x0e, 0x1c, 0xfc, 0xae, 0x55, 0x3a, 0x91, 0xe0, 0xc6, 0xd5, 0x48, 0x38, 0xaa, + 0x2f, 0x33, 0x00, 0x70, 0xab, 0x6d, 0x91, 0x38, 0xf8, 0x14, 0x4c, 0xf0, 0xc7, 0xaa, 0x18, 0xae, + 0x21, 0x30, 0x27, 0x2f, 0x9c, 0xd7, 0xa2, 0x52, 0x09, 0x73, 0xae, 0x39, 0x7b, 0x55, 0x2e, 0x60, + 0x1a, 0xb7, 0xd6, 0x9a, 0x2b, 0xda, 0xfd, 0x9d, 0x67, 0xd8, 0x74, 0xd7, 0xb1, 0x6b, 0xe8, 0x50, + 0x9e, 0x02, 0x44, 0x32, 0x14, 0xa2, 0xc2, 0x2d, 0x90, 0x65, 0x0e, 0x36, 0x45, 0x02, 0x26, 0x2f, + 0x94, 0xb5, 0x94, 0x42, 0xd4, 0xa2, 0xc3, 0x6d, 0x3b, 0xd8, 0xd4, 0xa7, 0x82, 0x2b, 0xf2, 0x2f, + 0x24, 0xa0, 0xe0, 0xc7, 0x60, 0x8c, 0xb9, 0x86, 0xeb, 0xb1, 0xc2, 0xa8, 0x00, 0x5d, 0x79, 0x1f, + 0x50, 0xe1, 0xa8, 0xcf, 0x48, 0xd8, 0x31, 0xff, 0x1b, 0x49, 0x40, 0xf5, 0x4d, 0x06, 0xcc, 0x47, + 0xc6, 0xd7, 0x6d, 0xab, 0x42, 0x44, 0xad, 0x5c, 0x8d, 0xe5, 0xfd, 0x54, 0x57, 0xde, 0x17, 0xfb, + 0xb8, 0x44, 0x39, 0x87, 0xff, 0x09, 0xcf, 0x9b, 0x11, 0xee, 0x27, 0xe3, 0xc1, 0xdf, 0xb5, 0x4a, + 0xb3, 0xa1, 0x5b, 0xfc, 0x3c, 0xb0, 0x09, 0x60, 0xdd, 0x60, 0xee, 0x03, 0x6a, 0x58, 0xcc, 0x87, + 0x25, 0x0d, 0x2c, 0xaf, 0xfd, 0xcf, 0xe1, 0x5e, 0x8a, 0x7b, 0xe8, 0x4b, 0x32, 0x24, 0x5c, 0xeb, + 0x41, 0x43, 0x7d, 0x22, 0xc0, 0xbf, 0x83, 0x31, 0x8a, 0x0d, 0x66, 0x5b, 0x85, 0xac, 0x38, 0x72, + 0x98, 0x2f, 
0x24, 0xa4, 0x48, 0x6a, 0xe1, 0x3f, 0xc0, 0x78, 0x03, 0x33, 0x66, 0x54, 0x71, 0x21, + 0x27, 0x0c, 0x67, 0xa5, 0xe1, 0xf8, 0xba, 0x2f, 0x46, 0x81, 0x5e, 0xfd, 0x5e, 0x01, 0x33, 0x51, + 0x9e, 0xd6, 0x08, 0x73, 0xe1, 0x93, 0x9e, 0xea, 0xd3, 0x86, 0xbb, 0x13, 0xf7, 0x16, 0xb5, 0x37, + 0x27, 0xc3, 0x4d, 0x04, 0x92, 0x8e, 0xca, 0xdb, 0x04, 0x39, 0xe2, 0xe2, 0x06, 0xcf, 0xfa, 0xe8, + 0xe9, 0xc9, 0x0b, 0x67, 0xde, 0xa3, 0x4a, 0xf4, 0x69, 0x89, 0x9b, 0xbb, 0xc3, 0x11, 0x90, 0x0f, + 0xa4, 0xfe, 0x3c, 0xda, 0x79, 0x05, 0x5e, 0x91, 0xf0, 0x6b, 0x05, 0x2c, 0x39, 0x89, 0x33, 0x46, + 0xde, 0xea, 0x46, 0x6a, 0xe8, 0xe4, 0x31, 0x85, 0xf0, 0x2e, 0xe6, 0xb3, 0x05, 0xeb, 0xaa, 0x3c, + 0xd3, 0xd2, 0x00, 0xe3, 0x01, 0x67, 0x81, 0x77, 0x01, 0x6c, 0x18, 0x2e, 0xcf, 0x69, 0x75, 0x93, + 0x62, 0x13, 0x57, 0x38, 0xaa, 0x1c, 0x4c, 0x61, 0x7d, 0xac, 0xf7, 0x58, 0xa0, 0x3e, 0x5e, 0xf0, + 0x0b, 0x05, 0xcc, 0x57, 0x7a, 0x07, 0x8d, 0xac, 0xcc, 0x2b, 0x43, 0xa5, 0xba, 0xcf, 0xa0, 0xd2, + 0x17, 0xdb, 0xad, 0xd2, 0x7c, 0x1f, 0x05, 0xea, 0x17, 0x0d, 0x7e, 0x02, 0x72, 0xd4, 0xab, 0x63, + 0x56, 0xc8, 0x8a, 0x17, 0x4e, 0x0f, 0xbb, 0x69, 0xd7, 0x89, 0x79, 0x80, 0xb8, 0xcf, 0xff, 0x89, + 0x5b, 0xdb, 0xf6, 0xc4, 0xc4, 0x62, 0xd1, 0x73, 0x0b, 0x15, 0xf2, 0x51, 0xd5, 0x17, 0x60, 0xae, + 0x7b, 0x70, 0xc0, 0x1a, 0x00, 0x66, 0xd0, 0xab, 0x7c, 0x4d, 0xf0, 0xb8, 0x17, 0xdf, 0xa3, 0xb2, + 0xc2, 0x46, 0x8f, 0xc6, 0x66, 0x28, 0x62, 0xa8, 0x03, 0x5b, 0x3d, 0x0f, 0xa6, 0x6e, 0x51, 0xdb, + 0x73, 0xe4, 0x21, 0xe1, 0x32, 0xc8, 0x5a, 0x46, 0x23, 0x18, 0x41, 0xe1, 0x5c, 0xdc, 0x30, 0x1a, + 0x18, 0x09, 0x8d, 0xfa, 0x95, 0x02, 0xa6, 0xd7, 0x48, 0x83, 0xb8, 0x08, 0x33, 0xc7, 0xb6, 0x18, + 0x86, 0x97, 0x62, 0x63, 0xeb, 0x64, 0xd7, 0xd8, 0x3a, 0x16, 0x33, 0xee, 0x18, 0x58, 0x4f, 0xc0, + 0xf8, 0xa7, 0x1e, 0xf6, 0x88, 0x55, 0x95, 0x63, 0xfb, 0x52, 0xea, 0x0d, 0xb7, 0x7c, 0xfb, 0x58, + 0xc5, 0xe9, 0x93, 0x7c, 0x10, 0x48, 0x0d, 0x0a, 0x20, 0xd5, 0xdf, 0x32, 0xe0, 0xa4, 0x88, 0x8c, + 0x2b, 0x03, 0xb6, 0xf3, 0x13, 0x50, 0x30, 0x18, 
0xf3, 0x28, 0xae, 0x24, 0x6d, 0xe7, 0x65, 0x79, + 0x9d, 0xc2, 0x6a, 0x82, 0x1d, 0x4a, 0x44, 0x80, 0x7b, 0x60, 0xba, 0xde, 0x79, 0x79, 0x79, 0x4f, + 0x2d, 0xf5, 0x9e, 0xb1, 0x94, 0xe9, 0x0b, 0xf2, 0x08, 0xf1, 0xb4, 0xa3, 0x38, 0x76, 0x3f, 0x3a, + 0x30, 0x3a, 0x3c, 0x1d, 0x80, 0xf7, 0xc1, 0xc2, 0x8e, 0x4d, 0xa9, 0xbd, 0x4f, 0xac, 0xaa, 0x88, + 0x13, 0x80, 0x64, 0x05, 0xc8, 0x5f, 0xda, 0xad, 0xd2, 0x82, 0xde, 0xcf, 0x00, 0xf5, 0xf7, 0x53, + 0xf7, 0xc1, 0xc2, 0x06, 0x1f, 0x2c, 0xcc, 0xf6, 0xa8, 0x89, 0xa3, 0x9e, 0x80, 0x25, 0x90, 0x6b, + 0x62, 0xba, 0xe3, 0xd7, 0x75, 0x5e, 0xcf, 0xf3, 0x8e, 0x78, 0xc4, 0x05, 0xc8, 0x97, 0xf3, 0x9b, + 0x58, 0x91, 0xe7, 0x43, 0xb4, 0xc6, 0x0a, 0x63, 0xc2, 0x54, 0xdc, 0x64, 0x23, 0xae, 0x42, 0xdd, + 0xb6, 0xea, 0x61, 0x06, 0x2c, 0x26, 0xb4, 0x20, 0x7c, 0x04, 0x26, 0x98, 0xfc, 0x5b, 0xb6, 0xd5, + 0xe9, 0xd4, 0xc7, 0x90, 0xce, 0xd1, 0x16, 0x08, 0xd0, 0x50, 0x88, 0x05, 0x1d, 0x30, 0x4d, 0xe5, + 0x19, 0x44, 0x50, 0xb9, 0x0d, 0xfe, 0x95, 0x0a, 0xde, 0x9b, 0x9f, 0xe8, 0xb9, 0x51, 0x27, 0x22, + 0x8a, 0x07, 0x80, 0x2f, 0xc0, 0x5c, 0xc7, 0xc5, 0xfd, 0xa0, 0xa3, 0x22, 0xe8, 0xe5, 0xd4, 0xa0, + 0x7d, 0xdf, 0x45, 0x2f, 0xc8, 0xb8, 0x73, 0x1b, 0x5d, 0xb8, 0xa8, 0x27, 0x92, 0xfa, 0x63, 0x06, + 0x0c, 0x58, 0x10, 0x1f, 0x80, 0xf0, 0x19, 0x31, 0xc2, 0x77, 0xed, 0x08, 0xab, 0x2f, 0x91, 0x00, + 0x92, 0x2e, 0x02, 0xb8, 0x7a, 0x94, 0x20, 0x83, 0x09, 0xe1, 0x2f, 0x19, 0xf0, 0xb7, 0x64, 0xe7, + 0x88, 0x20, 0xde, 0x8b, 0x4d, 0xda, 0x7f, 0x77, 0x4d, 0xda, 0x53, 0x43, 0x40, 0xfc, 0x49, 0x18, + 0xbb, 0x08, 0xe3, 0x5b, 0x05, 0x14, 0x93, 0xf3, 0xf6, 0x01, 0x08, 0xe4, 0xd3, 0x38, 0x81, 0xbc, + 0x7a, 0x84, 0x2a, 0x4b, 0x20, 0x94, 0xb7, 0x06, 0x15, 0x57, 0xc8, 0xfc, 0x86, 0x58, 0xfd, 0xdf, + 0x64, 0x06, 0xe5, 0x4a, 0x30, 0xd5, 0x94, 0x9f, 0x30, 0x31, 0xef, 0x9b, 0x16, 0x5f, 0x40, 0x0d, + 0xbe, 0x43, 0xfc, 0x8a, 0x24, 0x60, 0xbc, 0xee, 0xaf, 0x6c, 0xd9, 0xd7, 0xfa, 0x70, 0x9b, 0x72, + 0xd0, 0x8a, 0xf7, 0xe9, 0x81, 0x34, 0x43, 0x01, 0x3e, 0xc4, 0x60, 0x0c, 0x8b, 0x9f, 
0xee, 0x43, + 0x37, 0x77, 0xda, 0x2f, 0x7d, 0x1d, 0xf0, 0x42, 0xf4, 0xad, 0x90, 0x04, 0x57, 0x5f, 0x2a, 0x60, + 0x39, 0x6d, 0x2a, 0xc0, 0xe7, 0x7d, 0xd8, 0xde, 0x51, 0xc8, 0xfc, 0xf0, 0xec, 0xef, 0x5b, 0x05, + 0x1c, 0xef, 0xc7, 0xa9, 0x78, 0xa3, 0x71, 0x22, 0x15, 0xb2, 0xa0, 0xb0, 0xd1, 0xb6, 0x84, 0x14, + 0x49, 0x2d, 0x3c, 0x0b, 0x26, 0x6a, 0x86, 0x55, 0xd9, 0x26, 0x9f, 0x05, 0x1c, 0x3f, 0x2c, 0xf5, + 0xdb, 0x52, 0x8e, 0x42, 0x0b, 0x78, 0x03, 0xcc, 0x09, 0xbf, 0x35, 0x6c, 0x55, 0xdd, 0x9a, 0x78, + 0x13, 0xc9, 0x51, 0xc2, 0xdd, 0xb3, 0xd5, 0xa5, 0x47, 0x3d, 0x1e, 0xea, 0xaf, 0x0a, 0x80, 0x7f, + 0x84, 0x56, 0x9c, 0x01, 0x79, 0xc3, 0x21, 0x82, 0xed, 0xfa, 0xcd, 0x96, 0xd7, 0xa7, 0xdb, 0xad, + 0x52, 0x7e, 0x75, 0xf3, 0x8e, 0x2f, 0x44, 0x91, 0x9e, 0x1b, 0x07, 0xfb, 0xd6, 0xdf, 0xab, 0xd2, + 0x38, 0x08, 0xcc, 0x50, 0xa4, 0x87, 0x57, 0xc0, 0x94, 0x59, 0xf7, 0x98, 0x8b, 0xe9, 0xb6, 0x69, + 0x3b, 0x58, 0x0c, 0xa7, 0x09, 0xfd, 0xb8, 0xbc, 0xd3, 0xd4, 0xf5, 0x0e, 0x1d, 0x8a, 0x59, 0x42, + 0x0d, 0x00, 0xde, 0x59, 0xcc, 0x31, 0x78, 0x9c, 0x9c, 0x88, 0x33, 0xc3, 0x1f, 0x6c, 0x23, 0x94, + 0xa2, 0x0e, 0x0b, 0xf5, 0x19, 0x58, 0xd8, 0xc6, 0xb4, 0x49, 0x4c, 0xbc, 0x6a, 0x9a, 0xb6, 0x67, + 0xb9, 0x01, 0x6f, 0x2f, 0x83, 0x7c, 0x68, 0x26, 0x9b, 0xef, 0x98, 0x8c, 0x9f, 0x0f, 0xb1, 0x50, + 0x64, 0x13, 0x76, 0x7b, 0x26, 0xb1, 0xdb, 0x7f, 0xc8, 0x80, 0xf1, 0x08, 0x3e, 0xbb, 0x47, 0xac, + 0x8a, 0x44, 0x3e, 0x11, 0x58, 0xdf, 0x23, 0x56, 0xe5, 0x5d, 0xab, 0x34, 0x29, 0xcd, 0xf8, 0x27, + 0x12, 0x86, 0xf0, 0x2e, 0xc8, 0x7a, 0x0c, 0x53, 0xd9, 0xc7, 0x67, 0x53, 0xab, 0xf9, 0x21, 0xc3, + 0x34, 0x20, 0x5a, 0x13, 0x1c, 0x9a, 0x0b, 0x90, 0xc0, 0x80, 0x1b, 0x20, 0x57, 0xe5, 0xaf, 0x22, + 0x5b, 0xf5, 0x5c, 0x2a, 0x58, 0xe7, 0x2f, 0x1a, 0xbf, 0x10, 0x84, 0x04, 0xf9, 0x30, 0x90, 0x82, + 0x19, 0x16, 0x4b, 0xa2, 0x78, 0xb0, 0x61, 0x88, 0x53, 0xdf, 0xdc, 0xeb, 0xb0, 0xdd, 0x2a, 0xcd, + 0xc4, 0x55, 0xa8, 0x2b, 0x82, 0x5a, 0x06, 0x93, 0x1d, 0x57, 0x4c, 0x9f, 0xb5, 0xfa, 0xcd, 0x57, + 0x87, 0xc5, 0x91, 0xd7, 
0x87, 0xc5, 0x91, 0x37, 0x87, 0xc5, 0x91, 0xcf, 0xdb, 0x45, 0xe5, 0x55, + 0xbb, 0xa8, 0xbc, 0x6e, 0x17, 0x95, 0x37, 0xed, 0xa2, 0xf2, 0xb6, 0x5d, 0x54, 0xbe, 0xfc, 0xa9, + 0x38, 0xf2, 0xb8, 0x94, 0xf2, 0x2f, 0xda, 0xdf, 0x03, 0x00, 0x00, 0xff, 0xff, 0xc1, 0x6c, 0x4e, + 0x4e, 0xdd, 0x15, 0x00, 0x00, +} + +func (m *ExemptPriorityLevelConfiguration) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *ExemptPriorityLevelConfiguration) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *ExemptPriorityLevelConfiguration) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if m.LendablePercent != nil { + i = encodeVarintGenerated(dAtA, i, uint64(*m.LendablePercent)) + i-- + dAtA[i] = 0x10 + } + if m.NominalConcurrencyShares != nil { + i = encodeVarintGenerated(dAtA, i, uint64(*m.NominalConcurrencyShares)) + i-- + dAtA[i] = 0x8 + } + return len(dAtA) - i, nil } func (m *FlowDistinguisherMethod) Marshal() (dAtA []byte, err error) { @@ -1491,6 +1557,18 @@ func (m *PriorityLevelConfigurationSpec) MarshalToSizedBuffer(dAtA []byte) (int, _ = i var l int _ = l + if m.Exempt != nil { + { + size, err := m.Exempt.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintGenerated(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x1a + } if m.Limited != nil { { size, err := m.Limited.MarshalToSizedBuffer(dAtA[:i]) @@ -1783,6 +1861,21 @@ func encodeVarintGenerated(dAtA []byte, offset int, v uint64) int { dAtA[offset] = uint8(v) return base } +func (m *ExemptPriorityLevelConfiguration) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + if m.NominalConcurrencyShares != nil { + n += 1 + sovGenerated(uint64(*m.NominalConcurrencyShares)) + } + if m.LendablePercent != nil 
{ + n += 1 + sovGenerated(uint64(*m.LendablePercent)) + } + return n +} + func (m *FlowDistinguisherMethod) Size() (n int) { if m == nil { return 0 @@ -2048,6 +2141,10 @@ func (m *PriorityLevelConfigurationSpec) Size() (n int) { l = m.Limited.Size() n += 1 + l + sovGenerated(uint64(l)) } + if m.Exempt != nil { + l = m.Exempt.Size() + n += 1 + l + sovGenerated(uint64(l)) + } return n } @@ -2165,6 +2262,17 @@ func sovGenerated(x uint64) (n int) { func sozGenerated(x uint64) (n int) { return sovGenerated(uint64((x << 1) ^ uint64((int64(x) >> 63)))) } +func (this *ExemptPriorityLevelConfiguration) String() string { + if this == nil { + return "nil" + } + s := strings.Join([]string{`&ExemptPriorityLevelConfiguration{`, + `NominalConcurrencyShares:` + valueToStringGenerated(this.NominalConcurrencyShares) + `,`, + `LendablePercent:` + valueToStringGenerated(this.LendablePercent) + `,`, + `}`, + }, "") + return s +} func (this *FlowDistinguisherMethod) String() string { if this == nil { return "nil" @@ -2381,6 +2489,7 @@ func (this *PriorityLevelConfigurationSpec) String() string { s := strings.Join([]string{`&PriorityLevelConfigurationSpec{`, `Type:` + fmt.Sprintf("%v", this.Type) + `,`, `Limited:` + strings.Replace(this.Limited.String(), "LimitedPriorityLevelConfiguration", "LimitedPriorityLevelConfiguration", 1) + `,`, + `Exempt:` + strings.Replace(this.Exempt.String(), "ExemptPriorityLevelConfiguration", "ExemptPriorityLevelConfiguration", 1) + `,`, `}`, }, "") return s @@ -2468,6 +2577,96 @@ func valueToStringGenerated(v interface{}) string { pv := reflect.Indirect(rv).Interface() return fmt.Sprintf("*%v", pv) } +func (m *ExemptPriorityLevelConfiguration) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowGenerated + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) 
<< shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: ExemptPriorityLevelConfiguration: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: ExemptPriorityLevelConfiguration: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field NominalConcurrencyShares", wireType) + } + var v int32 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowGenerated + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + v |= int32(b&0x7F) << shift + if b < 0x80 { + break + } + } + m.NominalConcurrencyShares = &v + case 2: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field LendablePercent", wireType) + } + var v int32 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowGenerated + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + v |= int32(b&0x7F) << shift + if b < 0x80 { + break + } + } + m.LendablePercent = &v + default: + iNdEx = preIndex + skippy, err := skipGenerated(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthGenerated + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} func (m *FlowDistinguisherMethod) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 @@ -4547,6 +4746,42 @@ func (m *PriorityLevelConfigurationSpec) Unmarshal(dAtA []byte) error { return err } iNdEx = postIndex + case 3: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Exempt", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowGenerated + } + if iNdEx >= l { + 
return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthGenerated + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthGenerated + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if m.Exempt == nil { + m.Exempt = &ExemptPriorityLevelConfiguration{} + } + if err := m.Exempt.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipGenerated(dAtA[iNdEx:]) diff --git a/cluster-autoscaler/vendor/k8s.io/api/flowcontrol/v1alpha1/generated.proto b/cluster-autoscaler/vendor/k8s.io/api/flowcontrol/v1alpha1/generated.proto index 69ca79ad2fc3..6509386f26f7 100644 --- a/cluster-autoscaler/vendor/k8s.io/api/flowcontrol/v1alpha1/generated.proto +++ b/cluster-autoscaler/vendor/k8s.io/api/flowcontrol/v1alpha1/generated.proto @@ -28,6 +28,40 @@ import "k8s.io/apimachinery/pkg/runtime/schema/generated.proto"; // Package-wide variables from generator "generated". option go_package = "k8s.io/api/flowcontrol/v1alpha1"; +// ExemptPriorityLevelConfiguration describes the configurable aspects +// of the handling of exempt requests. +// In the mandatory exempt configuration object the values in the fields +// here can be modified by authorized users, unlike the rest of the `spec`. +message ExemptPriorityLevelConfiguration { + // `nominalConcurrencyShares` (NCS) contributes to the computation of the + // NominalConcurrencyLimit (NominalCL) of this level. + // This is the number of execution seats nominally reserved for this priority level. + // This DOES NOT limit the dispatching from this priority level + // but affects the other priority levels through the borrowing mechanism. 
+ // The server's concurrency limit (ServerCL) is divided among all the + // priority levels in proportion to their NCS values: + // + // NominalCL(i) = ceil( ServerCL * NCS(i) / sum_ncs ) + // sum_ncs = sum[priority level k] NCS(k) + // + // Bigger numbers mean a larger nominal concurrency limit, + // at the expense of every other priority level. + // This field has a default value of zero. + // +optional + optional int32 nominalConcurrencyShares = 1; + + // `lendablePercent` prescribes the fraction of the level's NominalCL that + // can be borrowed by other priority levels. This value of this + // field must be between 0 and 100, inclusive, and it defaults to 0. + // The number of seats that other levels can borrow from this level, known + // as this level's LendableConcurrencyLimit (LendableCL), is defined as follows. + // + // LendableCL(i) = round( NominalCL(i) * lendablePercent(i)/100.0 ) + // + // +optional + optional int32 lendablePercent = 2; +} + // FlowDistinguisherMethod specifies the method of a flow distinguisher. message FlowDistinguisherMethod { // `type` is the type of flow distinguisher method @@ -332,6 +366,14 @@ message PriorityLevelConfigurationSpec { // This field must be non-empty if and only if `type` is `"Limited"`. // +optional optional LimitedPriorityLevelConfiguration limited = 2; + + // `exempt` specifies how requests are handled for an exempt priority level. + // This field MUST be empty if `type` is `"Limited"`. + // This field MAY be non-empty if `type` is `"Exempt"`. + // If empty and `type` is `"Exempt"` then the default values + // for `ExemptPriorityLevelConfiguration` apply. + // +optional + optional ExemptPriorityLevelConfiguration exempt = 3; } // PriorityLevelConfigurationStatus represents the current state of a "request-priority". 
diff --git a/cluster-autoscaler/vendor/k8s.io/api/flowcontrol/v1alpha1/types.go b/cluster-autoscaler/vendor/k8s.io/api/flowcontrol/v1alpha1/types.go index ebf665bcc3b0..161411ff3380 100644 --- a/cluster-autoscaler/vendor/k8s.io/api/flowcontrol/v1alpha1/types.go +++ b/cluster-autoscaler/vendor/k8s.io/api/flowcontrol/v1alpha1/types.go @@ -399,6 +399,14 @@ type PriorityLevelConfigurationSpec struct { // This field must be non-empty if and only if `type` is `"Limited"`. // +optional Limited *LimitedPriorityLevelConfiguration `json:"limited,omitempty" protobuf:"bytes,2,opt,name=limited"` + + // `exempt` specifies how requests are handled for an exempt priority level. + // This field MUST be empty if `type` is `"Limited"`. + // This field MAY be non-empty if `type` is `"Exempt"`. + // If empty and `type` is `"Exempt"` then the default values + // for `ExemptPriorityLevelConfiguration` apply. + // +optional + Exempt *ExemptPriorityLevelConfiguration `json:"exempt,omitempty" protobuf:"bytes,3,opt,name=exempt"` } // PriorityLevelEnablement indicates whether limits on execution are enabled for the priority level @@ -469,6 +477,43 @@ type LimitedPriorityLevelConfiguration struct { BorrowingLimitPercent *int32 `json:"borrowingLimitPercent,omitempty" protobuf:"varint,4,opt,name=borrowingLimitPercent"` } +// ExemptPriorityLevelConfiguration describes the configurable aspects +// of the handling of exempt requests. +// In the mandatory exempt configuration object the values in the fields +// here can be modified by authorized users, unlike the rest of the `spec`. +type ExemptPriorityLevelConfiguration struct { + // `nominalConcurrencyShares` (NCS) contributes to the computation of the + // NominalConcurrencyLimit (NominalCL) of this level. + // This is the number of execution seats nominally reserved for this priority level. + // This DOES NOT limit the dispatching from this priority level + // but affects the other priority levels through the borrowing mechanism. 
+ // The server's concurrency limit (ServerCL) is divided among all the + // priority levels in proportion to their NCS values: + // + // NominalCL(i) = ceil( ServerCL * NCS(i) / sum_ncs ) + // sum_ncs = sum[priority level k] NCS(k) + // + // Bigger numbers mean a larger nominal concurrency limit, + // at the expense of every other priority level. + // This field has a default value of zero. + // +optional + NominalConcurrencyShares *int32 `json:"nominalConcurrencyShares,omitempty" protobuf:"varint,1,opt,name=nominalConcurrencyShares"` + // `lendablePercent` prescribes the fraction of the level's NominalCL that + // can be borrowed by other priority levels. This value of this + // field must be between 0 and 100, inclusive, and it defaults to 0. + // The number of seats that other levels can borrow from this level, known + // as this level's LendableConcurrencyLimit (LendableCL), is defined as follows. + // + // LendableCL(i) = round( NominalCL(i) * lendablePercent(i)/100.0 ) + // + // +optional + LendablePercent *int32 `json:"lendablePercent,omitempty" protobuf:"varint,2,opt,name=lendablePercent"` + // The `BorrowingCL` of an Exempt priority level is implicitly `ServerCL`. + // In other words, an exempt priority level + // has no meaningful limit on how much it borrows. + // There is no explicit representation of that here. +} + // LimitResponse defines how to handle requests that can not be executed right now. 
// +union type LimitResponse struct { diff --git a/cluster-autoscaler/vendor/k8s.io/api/flowcontrol/v1alpha1/types_swagger_doc_generated.go b/cluster-autoscaler/vendor/k8s.io/api/flowcontrol/v1alpha1/types_swagger_doc_generated.go index c95999fa5e07..1d0680c10856 100644 --- a/cluster-autoscaler/vendor/k8s.io/api/flowcontrol/v1alpha1/types_swagger_doc_generated.go +++ b/cluster-autoscaler/vendor/k8s.io/api/flowcontrol/v1alpha1/types_swagger_doc_generated.go @@ -27,6 +27,16 @@ package v1alpha1 // Those methods can be generated by using hack/update-codegen.sh // AUTO-GENERATED FUNCTIONS START HERE. DO NOT EDIT. +var map_ExemptPriorityLevelConfiguration = map[string]string{ + "": "ExemptPriorityLevelConfiguration describes the configurable aspects of the handling of exempt requests. In the mandatory exempt configuration object the values in the fields here can be modified by authorized users, unlike the rest of the `spec`.", + "nominalConcurrencyShares": "`nominalConcurrencyShares` (NCS) contributes to the computation of the NominalConcurrencyLimit (NominalCL) of this level. This is the number of execution seats nominally reserved for this priority level. This DOES NOT limit the dispatching from this priority level but affects the other priority levels through the borrowing mechanism. The server's concurrency limit (ServerCL) is divided among all the priority levels in proportion to their NCS values:\n\nNominalCL(i) = ceil( ServerCL * NCS(i) / sum_ncs ) sum_ncs = sum[priority level k] NCS(k)\n\nBigger numbers mean a larger nominal concurrency limit, at the expense of every other priority level. This field has a default value of zero.", + "lendablePercent": "`lendablePercent` prescribes the fraction of the level's NominalCL that can be borrowed by other priority levels. This value of this field must be between 0 and 100, inclusive, and it defaults to 0. 
The number of seats that other levels can borrow from this level, known as this level's LendableConcurrencyLimit (LendableCL), is defined as follows.\n\nLendableCL(i) = round( NominalCL(i) * lendablePercent(i)/100.0 )", +} + +func (ExemptPriorityLevelConfiguration) SwaggerDoc() map[string]string { + return map_ExemptPriorityLevelConfiguration +} + var map_FlowDistinguisherMethod = map[string]string{ "": "FlowDistinguisherMethod specifies the method of a flow distinguisher.", "type": "`type` is the type of flow distinguisher method The supported types are \"ByUser\" and \"ByNamespace\". Required.", @@ -190,6 +200,7 @@ var map_PriorityLevelConfigurationSpec = map[string]string{ "": "PriorityLevelConfigurationSpec specifies the configuration of a priority level.", "type": "`type` indicates whether this priority level is subject to limitation on request execution. A value of `\"Exempt\"` means that requests of this priority level are not subject to a limit (and thus are never queued) and do not detract from the capacity made available to other priority levels. A value of `\"Limited\"` means that (a) requests of this priority level _are_ subject to limits and (b) some of the server's limited capacity is made available exclusively to this priority level. Required.", "limited": "`limited` specifies how requests are handled for a Limited priority level. This field must be non-empty if and only if `type` is `\"Limited\"`.", + "exempt": "`exempt` specifies how requests are handled for an exempt priority level. This field MUST be empty if `type` is `\"Limited\"`. This field MAY be non-empty if `type` is `\"Exempt\"`. 
If empty and `type` is `\"Exempt\"` then the default values for `ExemptPriorityLevelConfiguration` apply.", } func (PriorityLevelConfigurationSpec) SwaggerDoc() map[string]string { diff --git a/cluster-autoscaler/vendor/k8s.io/api/flowcontrol/v1alpha1/zz_generated.deepcopy.go b/cluster-autoscaler/vendor/k8s.io/api/flowcontrol/v1alpha1/zz_generated.deepcopy.go index e0272804f457..a5c9737aa5f9 100644 --- a/cluster-autoscaler/vendor/k8s.io/api/flowcontrol/v1alpha1/zz_generated.deepcopy.go +++ b/cluster-autoscaler/vendor/k8s.io/api/flowcontrol/v1alpha1/zz_generated.deepcopy.go @@ -25,6 +25,32 @@ import ( runtime "k8s.io/apimachinery/pkg/runtime" ) +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *ExemptPriorityLevelConfiguration) DeepCopyInto(out *ExemptPriorityLevelConfiguration) { + *out = *in + if in.NominalConcurrencyShares != nil { + in, out := &in.NominalConcurrencyShares, &out.NominalConcurrencyShares + *out = new(int32) + **out = **in + } + if in.LendablePercent != nil { + in, out := &in.LendablePercent, &out.LendablePercent + *out = new(int32) + **out = **in + } + return +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ExemptPriorityLevelConfiguration. +func (in *ExemptPriorityLevelConfiguration) DeepCopy() *ExemptPriorityLevelConfiguration { + if in == nil { + return nil + } + out := new(ExemptPriorityLevelConfiguration) + in.DeepCopyInto(out) + return out +} + // DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. 
func (in *FlowDistinguisherMethod) DeepCopyInto(out *FlowDistinguisherMethod) { *out = *in @@ -400,6 +426,11 @@ func (in *PriorityLevelConfigurationSpec) DeepCopyInto(out *PriorityLevelConfigu *out = new(LimitedPriorityLevelConfiguration) (*in).DeepCopyInto(*out) } + if in.Exempt != nil { + in, out := &in.Exempt, &out.Exempt + *out = new(ExemptPriorityLevelConfiguration) + (*in).DeepCopyInto(*out) + } return } diff --git a/cluster-autoscaler/vendor/k8s.io/api/flowcontrol/v1beta1/generated.pb.go b/cluster-autoscaler/vendor/k8s.io/api/flowcontrol/v1beta1/generated.pb.go index fbaea85dd6be..33f4b97e391d 100644 --- a/cluster-autoscaler/vendor/k8s.io/api/flowcontrol/v1beta1/generated.pb.go +++ b/cluster-autoscaler/vendor/k8s.io/api/flowcontrol/v1beta1/generated.pb.go @@ -43,10 +43,38 @@ var _ = math.Inf // proto package needs to be updated. const _ = proto.GoGoProtoPackageIsVersion3 // please upgrade the proto package +func (m *ExemptPriorityLevelConfiguration) Reset() { *m = ExemptPriorityLevelConfiguration{} } +func (*ExemptPriorityLevelConfiguration) ProtoMessage() {} +func (*ExemptPriorityLevelConfiguration) Descriptor() ([]byte, []int) { + return fileDescriptor_80171c2a4e3669de, []int{0} +} +func (m *ExemptPriorityLevelConfiguration) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *ExemptPriorityLevelConfiguration) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err + } + return b[:n], nil +} +func (m *ExemptPriorityLevelConfiguration) XXX_Merge(src proto.Message) { + xxx_messageInfo_ExemptPriorityLevelConfiguration.Merge(m, src) +} +func (m *ExemptPriorityLevelConfiguration) XXX_Size() int { + return m.Size() +} +func (m *ExemptPriorityLevelConfiguration) XXX_DiscardUnknown() { + xxx_messageInfo_ExemptPriorityLevelConfiguration.DiscardUnknown(m) +} + +var xxx_messageInfo_ExemptPriorityLevelConfiguration proto.InternalMessageInfo + func (m 
*FlowDistinguisherMethod) Reset() { *m = FlowDistinguisherMethod{} } func (*FlowDistinguisherMethod) ProtoMessage() {} func (*FlowDistinguisherMethod) Descriptor() ([]byte, []int) { - return fileDescriptor_80171c2a4e3669de, []int{0} + return fileDescriptor_80171c2a4e3669de, []int{1} } func (m *FlowDistinguisherMethod) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -74,7 +102,7 @@ var xxx_messageInfo_FlowDistinguisherMethod proto.InternalMessageInfo func (m *FlowSchema) Reset() { *m = FlowSchema{} } func (*FlowSchema) ProtoMessage() {} func (*FlowSchema) Descriptor() ([]byte, []int) { - return fileDescriptor_80171c2a4e3669de, []int{1} + return fileDescriptor_80171c2a4e3669de, []int{2} } func (m *FlowSchema) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -102,7 +130,7 @@ var xxx_messageInfo_FlowSchema proto.InternalMessageInfo func (m *FlowSchemaCondition) Reset() { *m = FlowSchemaCondition{} } func (*FlowSchemaCondition) ProtoMessage() {} func (*FlowSchemaCondition) Descriptor() ([]byte, []int) { - return fileDescriptor_80171c2a4e3669de, []int{2} + return fileDescriptor_80171c2a4e3669de, []int{3} } func (m *FlowSchemaCondition) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -130,7 +158,7 @@ var xxx_messageInfo_FlowSchemaCondition proto.InternalMessageInfo func (m *FlowSchemaList) Reset() { *m = FlowSchemaList{} } func (*FlowSchemaList) ProtoMessage() {} func (*FlowSchemaList) Descriptor() ([]byte, []int) { - return fileDescriptor_80171c2a4e3669de, []int{3} + return fileDescriptor_80171c2a4e3669de, []int{4} } func (m *FlowSchemaList) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -158,7 +186,7 @@ var xxx_messageInfo_FlowSchemaList proto.InternalMessageInfo func (m *FlowSchemaSpec) Reset() { *m = FlowSchemaSpec{} } func (*FlowSchemaSpec) ProtoMessage() {} func (*FlowSchemaSpec) Descriptor() ([]byte, []int) { - return fileDescriptor_80171c2a4e3669de, []int{4} + return fileDescriptor_80171c2a4e3669de, []int{5} } func (m 
*FlowSchemaSpec) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -186,7 +214,7 @@ var xxx_messageInfo_FlowSchemaSpec proto.InternalMessageInfo func (m *FlowSchemaStatus) Reset() { *m = FlowSchemaStatus{} } func (*FlowSchemaStatus) ProtoMessage() {} func (*FlowSchemaStatus) Descriptor() ([]byte, []int) { - return fileDescriptor_80171c2a4e3669de, []int{5} + return fileDescriptor_80171c2a4e3669de, []int{6} } func (m *FlowSchemaStatus) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -214,7 +242,7 @@ var xxx_messageInfo_FlowSchemaStatus proto.InternalMessageInfo func (m *GroupSubject) Reset() { *m = GroupSubject{} } func (*GroupSubject) ProtoMessage() {} func (*GroupSubject) Descriptor() ([]byte, []int) { - return fileDescriptor_80171c2a4e3669de, []int{6} + return fileDescriptor_80171c2a4e3669de, []int{7} } func (m *GroupSubject) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -242,7 +270,7 @@ var xxx_messageInfo_GroupSubject proto.InternalMessageInfo func (m *LimitResponse) Reset() { *m = LimitResponse{} } func (*LimitResponse) ProtoMessage() {} func (*LimitResponse) Descriptor() ([]byte, []int) { - return fileDescriptor_80171c2a4e3669de, []int{7} + return fileDescriptor_80171c2a4e3669de, []int{8} } func (m *LimitResponse) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -270,7 +298,7 @@ var xxx_messageInfo_LimitResponse proto.InternalMessageInfo func (m *LimitedPriorityLevelConfiguration) Reset() { *m = LimitedPriorityLevelConfiguration{} } func (*LimitedPriorityLevelConfiguration) ProtoMessage() {} func (*LimitedPriorityLevelConfiguration) Descriptor() ([]byte, []int) { - return fileDescriptor_80171c2a4e3669de, []int{8} + return fileDescriptor_80171c2a4e3669de, []int{9} } func (m *LimitedPriorityLevelConfiguration) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -298,7 +326,7 @@ var xxx_messageInfo_LimitedPriorityLevelConfiguration proto.InternalMessageInfo func (m *NonResourcePolicyRule) Reset() { *m = 
NonResourcePolicyRule{} } func (*NonResourcePolicyRule) ProtoMessage() {} func (*NonResourcePolicyRule) Descriptor() ([]byte, []int) { - return fileDescriptor_80171c2a4e3669de, []int{9} + return fileDescriptor_80171c2a4e3669de, []int{10} } func (m *NonResourcePolicyRule) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -326,7 +354,7 @@ var xxx_messageInfo_NonResourcePolicyRule proto.InternalMessageInfo func (m *PolicyRulesWithSubjects) Reset() { *m = PolicyRulesWithSubjects{} } func (*PolicyRulesWithSubjects) ProtoMessage() {} func (*PolicyRulesWithSubjects) Descriptor() ([]byte, []int) { - return fileDescriptor_80171c2a4e3669de, []int{10} + return fileDescriptor_80171c2a4e3669de, []int{11} } func (m *PolicyRulesWithSubjects) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -354,7 +382,7 @@ var xxx_messageInfo_PolicyRulesWithSubjects proto.InternalMessageInfo func (m *PriorityLevelConfiguration) Reset() { *m = PriorityLevelConfiguration{} } func (*PriorityLevelConfiguration) ProtoMessage() {} func (*PriorityLevelConfiguration) Descriptor() ([]byte, []int) { - return fileDescriptor_80171c2a4e3669de, []int{11} + return fileDescriptor_80171c2a4e3669de, []int{12} } func (m *PriorityLevelConfiguration) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -382,7 +410,7 @@ var xxx_messageInfo_PriorityLevelConfiguration proto.InternalMessageInfo func (m *PriorityLevelConfigurationCondition) Reset() { *m = PriorityLevelConfigurationCondition{} } func (*PriorityLevelConfigurationCondition) ProtoMessage() {} func (*PriorityLevelConfigurationCondition) Descriptor() ([]byte, []int) { - return fileDescriptor_80171c2a4e3669de, []int{12} + return fileDescriptor_80171c2a4e3669de, []int{13} } func (m *PriorityLevelConfigurationCondition) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -410,7 +438,7 @@ var xxx_messageInfo_PriorityLevelConfigurationCondition proto.InternalMessageInf func (m *PriorityLevelConfigurationList) Reset() { *m = 
PriorityLevelConfigurationList{} } func (*PriorityLevelConfigurationList) ProtoMessage() {} func (*PriorityLevelConfigurationList) Descriptor() ([]byte, []int) { - return fileDescriptor_80171c2a4e3669de, []int{13} + return fileDescriptor_80171c2a4e3669de, []int{14} } func (m *PriorityLevelConfigurationList) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -438,7 +466,7 @@ var xxx_messageInfo_PriorityLevelConfigurationList proto.InternalMessageInfo func (m *PriorityLevelConfigurationReference) Reset() { *m = PriorityLevelConfigurationReference{} } func (*PriorityLevelConfigurationReference) ProtoMessage() {} func (*PriorityLevelConfigurationReference) Descriptor() ([]byte, []int) { - return fileDescriptor_80171c2a4e3669de, []int{14} + return fileDescriptor_80171c2a4e3669de, []int{15} } func (m *PriorityLevelConfigurationReference) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -466,7 +494,7 @@ var xxx_messageInfo_PriorityLevelConfigurationReference proto.InternalMessageInf func (m *PriorityLevelConfigurationSpec) Reset() { *m = PriorityLevelConfigurationSpec{} } func (*PriorityLevelConfigurationSpec) ProtoMessage() {} func (*PriorityLevelConfigurationSpec) Descriptor() ([]byte, []int) { - return fileDescriptor_80171c2a4e3669de, []int{15} + return fileDescriptor_80171c2a4e3669de, []int{16} } func (m *PriorityLevelConfigurationSpec) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -494,7 +522,7 @@ var xxx_messageInfo_PriorityLevelConfigurationSpec proto.InternalMessageInfo func (m *PriorityLevelConfigurationStatus) Reset() { *m = PriorityLevelConfigurationStatus{} } func (*PriorityLevelConfigurationStatus) ProtoMessage() {} func (*PriorityLevelConfigurationStatus) Descriptor() ([]byte, []int) { - return fileDescriptor_80171c2a4e3669de, []int{16} + return fileDescriptor_80171c2a4e3669de, []int{17} } func (m *PriorityLevelConfigurationStatus) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -522,7 +550,7 @@ var 
xxx_messageInfo_PriorityLevelConfigurationStatus proto.InternalMessageInfo func (m *QueuingConfiguration) Reset() { *m = QueuingConfiguration{} } func (*QueuingConfiguration) ProtoMessage() {} func (*QueuingConfiguration) Descriptor() ([]byte, []int) { - return fileDescriptor_80171c2a4e3669de, []int{17} + return fileDescriptor_80171c2a4e3669de, []int{18} } func (m *QueuingConfiguration) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -550,7 +578,7 @@ var xxx_messageInfo_QueuingConfiguration proto.InternalMessageInfo func (m *ResourcePolicyRule) Reset() { *m = ResourcePolicyRule{} } func (*ResourcePolicyRule) ProtoMessage() {} func (*ResourcePolicyRule) Descriptor() ([]byte, []int) { - return fileDescriptor_80171c2a4e3669de, []int{18} + return fileDescriptor_80171c2a4e3669de, []int{19} } func (m *ResourcePolicyRule) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -578,7 +606,7 @@ var xxx_messageInfo_ResourcePolicyRule proto.InternalMessageInfo func (m *ServiceAccountSubject) Reset() { *m = ServiceAccountSubject{} } func (*ServiceAccountSubject) ProtoMessage() {} func (*ServiceAccountSubject) Descriptor() ([]byte, []int) { - return fileDescriptor_80171c2a4e3669de, []int{19} + return fileDescriptor_80171c2a4e3669de, []int{20} } func (m *ServiceAccountSubject) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -606,7 +634,7 @@ var xxx_messageInfo_ServiceAccountSubject proto.InternalMessageInfo func (m *Subject) Reset() { *m = Subject{} } func (*Subject) ProtoMessage() {} func (*Subject) Descriptor() ([]byte, []int) { - return fileDescriptor_80171c2a4e3669de, []int{20} + return fileDescriptor_80171c2a4e3669de, []int{21} } func (m *Subject) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -634,7 +662,7 @@ var xxx_messageInfo_Subject proto.InternalMessageInfo func (m *UserSubject) Reset() { *m = UserSubject{} } func (*UserSubject) ProtoMessage() {} func (*UserSubject) Descriptor() ([]byte, []int) { - return fileDescriptor_80171c2a4e3669de, 
[]int{21} + return fileDescriptor_80171c2a4e3669de, []int{22} } func (m *UserSubject) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -660,6 +688,7 @@ func (m *UserSubject) XXX_DiscardUnknown() { var xxx_messageInfo_UserSubject proto.InternalMessageInfo func init() { + proto.RegisterType((*ExemptPriorityLevelConfiguration)(nil), "k8s.io.api.flowcontrol.v1beta1.ExemptPriorityLevelConfiguration") proto.RegisterType((*FlowDistinguisherMethod)(nil), "k8s.io.api.flowcontrol.v1beta1.FlowDistinguisherMethod") proto.RegisterType((*FlowSchema)(nil), "k8s.io.api.flowcontrol.v1beta1.FlowSchema") proto.RegisterType((*FlowSchemaCondition)(nil), "k8s.io.api.flowcontrol.v1beta1.FlowSchemaCondition") @@ -689,105 +718,141 @@ func init() { } var fileDescriptor_80171c2a4e3669de = []byte{ - // 1553 bytes of a gzipped FileDescriptorProto - 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xec, 0x58, 0x4f, 0x6f, 0xdb, 0xc6, - 0x12, 0x37, 0x65, 0xc9, 0xb6, 0xc6, 0x7f, 0xb3, 0x8e, 0x61, 0x3d, 0x07, 0x90, 0x1c, 0x3e, 0xe0, - 0xe5, 0xbd, 0x97, 0x84, 0x4a, 0xd2, 0xa4, 0x49, 0x5b, 0xf4, 0x8f, 0xe9, 0xb4, 0x69, 0x1a, 0xdb, - 0x71, 0xd6, 0x49, 0x5b, 0xa4, 0x01, 0x1a, 0x8a, 0x5a, 0x4b, 0x8c, 0x25, 0x92, 0xd9, 0x25, 0x65, - 0xb8, 0xb9, 0x14, 0xfd, 0x04, 0x3d, 0xb7, 0xc7, 0x1e, 0x7a, 0xef, 0x17, 0xe8, 0xb1, 0x41, 0x4f, - 0x39, 0xe6, 0xa4, 0x36, 0xea, 0xa9, 0xdf, 0xa0, 0x0d, 0x50, 0xa0, 0xd8, 0xe5, 0x92, 0x14, 0xf5, - 0x8f, 0x42, 0x02, 0xe4, 0xd4, 0x9b, 0x39, 0xf3, 0x9b, 0xdf, 0xec, 0xcc, 0xce, 0xcc, 0x8e, 0x0c, - 0xd7, 0x0e, 0xae, 0x30, 0xcd, 0x72, 0xca, 0x07, 0x7e, 0x85, 0x50, 0x9b, 0x78, 0x84, 0x95, 0x5b, - 0xc4, 0xae, 0x3a, 0xb4, 0x2c, 0x15, 0x86, 0x6b, 0x95, 0xf7, 0x1b, 0xce, 0xa1, 0xe9, 0xd8, 0x1e, - 0x75, 0x1a, 0xe5, 0xd6, 0xf9, 0x0a, 0xf1, 0x8c, 0xf3, 0xe5, 0x1a, 0xb1, 0x09, 0x35, 0x3c, 0x52, - 0xd5, 0x5c, 0xea, 0x78, 0x0e, 0x2a, 0x06, 0x78, 0xcd, 0x70, 0x2d, 0xad, 0x0b, 0xaf, 0x49, 0xfc, - 0xda, 0xd9, 0x9a, 0xe5, 0xd5, 0xfd, 0x8a, 0x66, 0x3a, 0xcd, 0x72, 0xcd, 0xa9, 
0x39, 0x65, 0x61, - 0x56, 0xf1, 0xf7, 0xc5, 0x97, 0xf8, 0x10, 0x7f, 0x05, 0x74, 0x6b, 0x17, 0x63, 0xf7, 0x4d, 0xc3, - 0xac, 0x5b, 0x36, 0xa1, 0x47, 0x65, 0xf7, 0xa0, 0xc6, 0x05, 0xac, 0xdc, 0x24, 0x9e, 0x51, 0x6e, - 0xf5, 0x1d, 0x62, 0xad, 0x3c, 0xcc, 0x8a, 0xfa, 0xb6, 0x67, 0x35, 0x49, 0x9f, 0xc1, 0xeb, 0x69, - 0x06, 0xcc, 0xac, 0x93, 0xa6, 0xd1, 0x6b, 0xa7, 0xde, 0x85, 0xd5, 0x0f, 0x1a, 0xce, 0xe1, 0x55, - 0x8b, 0x79, 0x96, 0x5d, 0xf3, 0x2d, 0x56, 0x27, 0x74, 0x9b, 0x78, 0x75, 0xa7, 0x8a, 0xde, 0x85, - 0xac, 0x77, 0xe4, 0x92, 0x82, 0xb2, 0xae, 0xfc, 0x37, 0xaf, 0x9f, 0x7e, 0xdc, 0x2e, 0x4d, 0x74, - 0xda, 0xa5, 0xec, 0xed, 0x23, 0x97, 0x3c, 0x6f, 0x97, 0x4e, 0x0c, 0x31, 0xe3, 0x6a, 0x2c, 0x0c, - 0xd5, 0x6f, 0x32, 0x00, 0x1c, 0xb5, 0x27, 0x5c, 0xa3, 0xfb, 0x30, 0xc3, 0xc3, 0xad, 0x1a, 0x9e, - 0x21, 0x38, 0x67, 0x2f, 0x9c, 0xd3, 0xe2, 0x5c, 0x47, 0xa7, 0xd6, 0xdc, 0x83, 0x1a, 0x17, 0x30, - 0x8d, 0xa3, 0xb5, 0xd6, 0x79, 0xed, 0x66, 0xe5, 0x01, 0x31, 0xbd, 0x6d, 0xe2, 0x19, 0x3a, 0x92, - 0xa7, 0x80, 0x58, 0x86, 0x23, 0x56, 0xb4, 0x0b, 0x59, 0xe6, 0x12, 0xb3, 0x90, 0x11, 0xec, 0x9a, - 0x36, 0xfa, 0x26, 0xb5, 0xf8, 0x6c, 0x7b, 0x2e, 0x31, 0xf5, 0xb9, 0x30, 0x42, 0xfe, 0x85, 0x05, - 0x13, 0xfa, 0x14, 0xa6, 0x98, 0x67, 0x78, 0x3e, 0x2b, 0x4c, 0xf6, 0x9d, 0x38, 0x8d, 0x53, 0xd8, - 0xe9, 0x0b, 0x92, 0x75, 0x2a, 0xf8, 0xc6, 0x92, 0x4f, 0x7d, 0x9a, 0x81, 0xe5, 0x18, 0xbc, 0xe9, - 0xd8, 0x55, 0xcb, 0xb3, 0x1c, 0x1b, 0xbd, 0x95, 0xc8, 0xfa, 0xa9, 0x9e, 0xac, 0xaf, 0x0e, 0x30, - 0x89, 0x33, 0x8e, 0xde, 0x88, 0x8e, 0x9b, 0x11, 0xe6, 0x27, 0x93, 0xce, 0x9f, 0xb7, 0x4b, 0x8b, - 0x91, 0x59, 0xf2, 0x3c, 0xa8, 0x05, 0xa8, 0x61, 0x30, 0xef, 0x36, 0x35, 0x6c, 0x16, 0xd0, 0x5a, - 0x4d, 0x22, 0xa3, 0xfe, 0xff, 0x78, 0xf7, 0xc4, 0x2d, 0xf4, 0x35, 0xe9, 0x12, 0x6d, 0xf5, 0xb1, - 0xe1, 0x01, 0x1e, 0xd0, 0x7f, 0x60, 0x8a, 0x12, 0x83, 0x39, 0x76, 0x21, 0x2b, 0x8e, 0x1c, 0xe5, - 0x0b, 0x0b, 0x29, 0x96, 0x5a, 0xf4, 0x3f, 0x98, 0x6e, 0x12, 0xc6, 0x8c, 0x1a, 0x29, 0xe4, 0x04, - 0x70, 0x51, 0x02, 
0xa7, 0xb7, 0x03, 0x31, 0x0e, 0xf5, 0xea, 0x8f, 0x0a, 0x2c, 0xc4, 0x79, 0xda, - 0xb2, 0x98, 0x87, 0xee, 0xf5, 0xd5, 0x9e, 0x36, 0x5e, 0x4c, 0xdc, 0x5a, 0x54, 0xde, 0x92, 0x74, - 0x37, 0x13, 0x4a, 0xba, 0xea, 0xee, 0x26, 0xe4, 0x2c, 0x8f, 0x34, 0x79, 0xd6, 0x27, 0x7b, 0xd2, - 0x95, 0x52, 0x24, 0xfa, 0xbc, 0xa4, 0xcd, 0x5d, 0xe7, 0x04, 0x38, 0xe0, 0x51, 0x7f, 0x9f, 0xec, - 0x8e, 0x80, 0xd7, 0x23, 0xfa, 0x5e, 0x81, 0x35, 0x97, 0x5a, 0x0e, 0xb5, 0xbc, 0xa3, 0x2d, 0xd2, - 0x22, 0x8d, 0x4d, 0xc7, 0xde, 0xb7, 0x6a, 0x3e, 0x35, 0x78, 0x2a, 0x65, 0x50, 0x9b, 0x69, 0x9e, - 0x77, 0x87, 0x32, 0x60, 0xb2, 0x4f, 0x28, 0xb1, 0x4d, 0xa2, 0xab, 0xf2, 0x48, 0x6b, 0x23, 0xc0, - 0x23, 0x8e, 0x82, 0x3e, 0x02, 0xd4, 0x34, 0x3c, 0x9e, 0xd1, 0xda, 0x2e, 0x25, 0x26, 0xa9, 0x72, - 0x56, 0x51, 0x90, 0xb9, 0xb8, 0x3a, 0xb6, 0xfb, 0x10, 0x78, 0x80, 0x15, 0xfa, 0x4a, 0x81, 0xe5, - 0x6a, 0xff, 0x90, 0x91, 0x75, 0x79, 0x79, 0x9c, 0x44, 0x0f, 0x98, 0x51, 0xfa, 0x6a, 0xa7, 0x5d, - 0x5a, 0x1e, 0xa0, 0xc0, 0x83, 0x9c, 0xa1, 0x7b, 0x90, 0xa3, 0x7e, 0x83, 0xb0, 0x42, 0x56, 0x5c, - 0x6f, 0xaa, 0xd7, 0x5d, 0xa7, 0x61, 0x99, 0x47, 0x98, 0x9b, 0x7c, 0x62, 0x79, 0xf5, 0x3d, 0x5f, - 0xcc, 0x2a, 0x16, 0xdf, 0xb5, 0x50, 0xe1, 0x80, 0x54, 0x7d, 0x04, 0x4b, 0xbd, 0x43, 0x03, 0xd5, - 0x00, 0xcc, 0xb0, 0x4f, 0x59, 0x41, 0x11, 0x6e, 0x5f, 0x1b, 0xbf, 0xaa, 0xa2, 0x1e, 0x8f, 0xe7, - 0x65, 0x24, 0x62, 0xb8, 0x8b, 0x5a, 0x3d, 0x07, 0x73, 0xd7, 0xa8, 0xe3, 0xbb, 0xf2, 0x8c, 0x68, - 0x1d, 0xb2, 0xb6, 0xd1, 0x0c, 0xa7, 0x4f, 0x34, 0x11, 0x77, 0x8c, 0x26, 0xc1, 0x42, 0xa3, 0x7e, - 0xa7, 0xc0, 0xfc, 0x96, 0xd5, 0xb4, 0x3c, 0x4c, 0x98, 0xeb, 0xd8, 0x8c, 0xa0, 0x4b, 0x89, 0x89, - 0x75, 0xb2, 0x67, 0x62, 0x1d, 0x4b, 0x80, 0xbb, 0x66, 0xd5, 0x67, 0x30, 0xfd, 0xd0, 0x27, 0xbe, - 0x65, 0xd7, 0xe4, 0xbc, 0xbe, 0x98, 0x16, 0xe0, 0xad, 0x00, 0x9e, 0xa8, 0x36, 0x7d, 0x96, 0x8f, - 0x00, 0xa9, 0xc1, 0x21, 0xa3, 0xfa, 0x57, 0x06, 0x4e, 0x0a, 0xc7, 0xa4, 0x3a, 0xbc, 0x8a, 0xd1, - 0x3d, 0x28, 0x18, 0x8c, 0xf9, 0x94, 0x54, 0x37, 0x1d, 
0xdb, 0xf4, 0x29, 0xaf, 0xff, 0xa3, 0xbd, - 0xba, 0x41, 0x09, 0x13, 0xd1, 0xe4, 0xf4, 0x75, 0x19, 0x4d, 0x61, 0x63, 0x08, 0x0e, 0x0f, 0x65, - 0x40, 0x0f, 0x60, 0xbe, 0xd1, 0x1d, 0xbb, 0x0c, 0xf3, 0x6c, 0x5a, 0x98, 0x89, 0x84, 0xe9, 0x2b, - 0xf2, 0x04, 0xc9, 0xa4, 0xe3, 0x24, 0x35, 0x7a, 0x1b, 0x16, 0x1b, 0xc4, 0xae, 0x1a, 0x95, 0x06, - 0xd9, 0x25, 0xd4, 0x24, 0xb6, 0x27, 0x5a, 0x24, 0xa7, 0x2f, 0x77, 0xda, 0xa5, 0xc5, 0xad, 0xa4, - 0x0a, 0xf7, 0x62, 0xd1, 0x4d, 0x58, 0xa9, 0x38, 0x94, 0x3a, 0x87, 0x96, 0x5d, 0x13, 0x7e, 0x42, - 0x92, 0xac, 0x20, 0xf9, 0x57, 0xa7, 0x5d, 0x5a, 0xd1, 0x07, 0x01, 0xf0, 0x60, 0x3b, 0xf5, 0x10, - 0x56, 0x76, 0xf8, 0x4c, 0x61, 0x8e, 0x4f, 0x4d, 0x12, 0x37, 0x04, 0x2a, 0x41, 0xae, 0x45, 0x68, - 0x25, 0x28, 0xea, 0xbc, 0x9e, 0xe7, 0xed, 0xf0, 0x31, 0x17, 0xe0, 0x40, 0xce, 0x23, 0xb1, 0x63, - 0xcb, 0x3b, 0x78, 0x8b, 0x15, 0xa6, 0x04, 0x54, 0x44, 0xb2, 0x93, 0x54, 0xe1, 0x5e, 0xac, 0xda, - 0xce, 0xc0, 0xea, 0x90, 0xfe, 0x43, 0x77, 0x60, 0x86, 0xc9, 0xbf, 0x65, 0x4f, 0x9d, 0x4a, 0xbb, - 0x0b, 0x69, 0x1b, 0x4f, 0xff, 0x90, 0x0c, 0x47, 0x54, 0xc8, 0x81, 0x79, 0x2a, 0x8f, 0x20, 0x7c, - 0xca, 0x57, 0xe0, 0x42, 0x1a, 0x77, 0x7f, 0x76, 0xe2, 0xcb, 0xc6, 0xdd, 0x84, 0x38, 0xc9, 0x8f, - 0x1e, 0xc1, 0x52, 0x57, 0xd8, 0x81, 0xcf, 0x49, 0xe1, 0xf3, 0x52, 0x9a, 0xcf, 0x81, 0x97, 0xa2, - 0x17, 0xa4, 0xdb, 0xa5, 0x9d, 0x1e, 0x5a, 0xdc, 0xe7, 0x48, 0xfd, 0x39, 0x03, 0x23, 0x1e, 0x86, - 0x57, 0xb0, 0xe4, 0xdd, 0x4f, 0x2c, 0x79, 0xef, 0xbc, 0xf8, 0x8b, 0x37, 0x74, 0xe9, 0xab, 0xf7, - 0x2c, 0x7d, 0xef, 0xbd, 0x84, 0x8f, 0xd1, 0x4b, 0xe0, 0x1f, 0x19, 0xf8, 0xf7, 0x70, 0xe3, 0x78, - 0x29, 0xbc, 0x91, 0x18, 0xb1, 0x97, 0x7b, 0x46, 0xec, 0xa9, 0x31, 0x28, 0xfe, 0x59, 0x12, 0x7b, - 0x96, 0xc4, 0x5f, 0x14, 0x28, 0x0e, 0xcf, 0xdb, 0x2b, 0x58, 0x1a, 0x3f, 0x4f, 0x2e, 0x8d, 0x6f, - 0xbe, 0x78, 0x91, 0x0d, 0x59, 0x22, 0xaf, 0x8d, 0xaa, 0xad, 0x68, 0xdd, 0x1b, 0xe3, 0xc9, 0xff, - 0x69, 0x64, 0xaa, 0xc4, 0x76, 0x9a, 0xf2, 0xab, 0x25, 0x61, 0xfd, 0xbe, 0xcd, 0x9f, 0x9e, 
0x26, - 0x7f, 0x3d, 0x82, 0x82, 0xac, 0xc3, 0x74, 0x23, 0x78, 0xab, 0x65, 0x53, 0x6f, 0x8c, 0xf5, 0x44, - 0x8e, 0x7a, 0xda, 0x83, 0xb5, 0x40, 0xc2, 0x70, 0x48, 0xaf, 0x7e, 0xab, 0xc0, 0x7a, 0x5a, 0xb3, - 0xa2, 0xc3, 0x01, 0xcb, 0xd7, 0x4b, 0x2c, 0xd6, 0xe3, 0x2f, 0x63, 0x3f, 0x28, 0x70, 0x7c, 0xd0, - 0x8e, 0xc3, 0xcb, 0x9f, 0x2f, 0x36, 0xd1, 0x56, 0x12, 0x95, 0xff, 0x2d, 0x21, 0xc5, 0x52, 0x8b, - 0xce, 0xc0, 0x4c, 0xdd, 0xb0, 0xab, 0x7b, 0xd6, 0x17, 0xe1, 0xbe, 0x1d, 0x15, 0xe0, 0x87, 0x52, - 0x8e, 0x23, 0x04, 0xba, 0x0a, 0x4b, 0xc2, 0x6e, 0x8b, 0xd8, 0x35, 0xaf, 0x2e, 0x72, 0x25, 0x97, - 0x86, 0xe8, 0x3d, 0xb8, 0xd5, 0xa3, 0xc7, 0x7d, 0x16, 0xea, 0x9f, 0x0a, 0xa0, 0x17, 0x79, 0xe7, - 0x4f, 0x43, 0xde, 0x70, 0x2d, 0xb1, 0x7c, 0x06, 0x2d, 0x90, 0xd7, 0xe7, 0x3b, 0xed, 0x52, 0x7e, - 0x63, 0xf7, 0x7a, 0x20, 0xc4, 0xb1, 0x9e, 0x83, 0xc3, 0x27, 0x30, 0x78, 0xea, 0x24, 0x38, 0x74, - 0xcc, 0x70, 0xac, 0x47, 0x57, 0x60, 0xce, 0x6c, 0xf8, 0xcc, 0x23, 0x74, 0xcf, 0x74, 0x5c, 0x22, - 0x46, 0xc6, 0x8c, 0x7e, 0x5c, 0xc6, 0x34, 0xb7, 0xd9, 0xa5, 0xc3, 0x09, 0x24, 0xd2, 0x00, 0x78, - 0xc1, 0x33, 0xd7, 0xe0, 0x7e, 0x72, 0xc2, 0xcf, 0x02, 0xbf, 0xb0, 0x9d, 0x48, 0x8a, 0xbb, 0x10, - 0xea, 0x03, 0x58, 0xd9, 0x23, 0xb4, 0x65, 0x99, 0x64, 0xc3, 0x34, 0x1d, 0xdf, 0xf6, 0xc2, 0x35, - 0xba, 0x0c, 0xf9, 0x08, 0x26, 0x7b, 0xe2, 0x98, 0xf4, 0x9f, 0x8f, 0xb8, 0x70, 0x8c, 0x89, 0x9a, - 0x30, 0x33, 0xbc, 0x09, 0x33, 0x30, 0x1d, 0xd3, 0x67, 0x0f, 0x2c, 0xbb, 0x2a, 0x99, 0x4f, 0x84, - 0xe8, 0x1b, 0x96, 0x5d, 0x7d, 0xde, 0x2e, 0xcd, 0x4a, 0x18, 0xff, 0xc4, 0x02, 0x88, 0xae, 0x43, - 0xd6, 0x67, 0x84, 0xca, 0xf6, 0x3a, 0x9d, 0x56, 0xcc, 0x77, 0x18, 0xa1, 0xe1, 0xe6, 0x33, 0xc3, - 0x99, 0xb9, 0x00, 0x0b, 0x0a, 0xb4, 0x0d, 0xb9, 0x1a, 0xbf, 0x14, 0x39, 0xf5, 0xcf, 0xa4, 0x71, - 0x75, 0xff, 0xbc, 0x08, 0xca, 0x40, 0x48, 0x70, 0xc0, 0x82, 0x1e, 0xc2, 0x02, 0x4b, 0xa4, 0x50, - 0x5c, 0xd7, 0x18, 0x9b, 0xcc, 0xc0, 0xc4, 0xeb, 0xa8, 0xd3, 0x2e, 0x2d, 0x24, 0x55, 0xb8, 0xc7, - 0x81, 0x5a, 0x86, 0xd9, 0xae, 
0x00, 0xd3, 0xe7, 0x9f, 0x7e, 0xf5, 0xf1, 0xb3, 0xe2, 0xc4, 0x93, - 0x67, 0xc5, 0x89, 0xa7, 0xcf, 0x8a, 0x13, 0x5f, 0x76, 0x8a, 0xca, 0xe3, 0x4e, 0x51, 0x79, 0xd2, - 0x29, 0x2a, 0x4f, 0x3b, 0x45, 0xe5, 0xd7, 0x4e, 0x51, 0xf9, 0xfa, 0xb7, 0xe2, 0xc4, 0xdd, 0xe2, - 0xe8, 0xff, 0x33, 0xfe, 0x1d, 0x00, 0x00, 0xff, 0xff, 0x2c, 0x6d, 0x6e, 0x75, 0xa1, 0x14, 0x00, - 0x00, + // 1614 bytes of a gzipped FileDescriptorProto + 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xec, 0x58, 0xcf, 0x73, 0xdb, 0xc4, + 0x17, 0x8f, 0x1c, 0x3b, 0x89, 0x5f, 0x7e, 0x76, 0xd3, 0x4c, 0xfc, 0x4d, 0xbf, 0x63, 0xa7, 0x62, + 0x86, 0x02, 0x6d, 0xe5, 0xb6, 0xb4, 0xb4, 0xc0, 0xf0, 0x23, 0x4a, 0x4b, 0x29, 0x4d, 0xd2, 0x74, + 0xd3, 0x42, 0xa7, 0x74, 0x86, 0xca, 0xf2, 0xc6, 0x56, 0x63, 0x4b, 0xea, 0xae, 0xe4, 0x10, 0x7a, + 0x61, 0xf8, 0x0b, 0x38, 0xc3, 0x91, 0x03, 0x27, 0x2e, 0x5c, 0x39, 0x70, 0xa4, 0xc3, 0xa9, 0xc7, + 0x9e, 0x0c, 0x35, 0x27, 0xfe, 0x03, 0xe8, 0x0c, 0x33, 0xcc, 0xae, 0xd6, 0x92, 0xe5, 0x5f, 0xf2, + 0xb4, 0x33, 0x3d, 0x71, 0x8b, 0xde, 0xfb, 0xbc, 0xcf, 0xdb, 0x7d, 0xfb, 0x7e, 0x39, 0x70, 0x79, + 0xef, 0x02, 0xd3, 0x2c, 0xa7, 0xb8, 0xe7, 0x97, 0x08, 0xb5, 0x89, 0x47, 0x58, 0xb1, 0x41, 0xec, + 0xb2, 0x43, 0x8b, 0x52, 0x61, 0xb8, 0x56, 0x71, 0xb7, 0xe6, 0xec, 0x9b, 0x8e, 0xed, 0x51, 0xa7, + 0x56, 0x6c, 0x9c, 0x2e, 0x11, 0xcf, 0x38, 0x5d, 0xac, 0x10, 0x9b, 0x50, 0xc3, 0x23, 0x65, 0xcd, + 0xa5, 0x8e, 0xe7, 0xa0, 0x7c, 0x80, 0xd7, 0x0c, 0xd7, 0xd2, 0x3a, 0xf0, 0x9a, 0xc4, 0xaf, 0x9c, + 0xac, 0x58, 0x5e, 0xd5, 0x2f, 0x69, 0xa6, 0x53, 0x2f, 0x56, 0x9c, 0x8a, 0x53, 0x14, 0x66, 0x25, + 0x7f, 0x57, 0x7c, 0x89, 0x0f, 0xf1, 0x57, 0x40, 0xb7, 0x72, 0x36, 0x72, 0x5f, 0x37, 0xcc, 0xaa, + 0x65, 0x13, 0x7a, 0x50, 0x74, 0xf7, 0x2a, 0x5c, 0xc0, 0x8a, 0x75, 0xe2, 0x19, 0xc5, 0x46, 0xcf, + 0x21, 0x56, 0x8a, 0x83, 0xac, 0xa8, 0x6f, 0x7b, 0x56, 0x9d, 0xf4, 0x18, 0xbc, 0x91, 0x64, 0xc0, + 0xcc, 0x2a, 0xa9, 0x1b, 0xdd, 0x76, 0xea, 0x4f, 0x0a, 0xac, 0x5e, 0xfa, 0x9c, 0xd4, 0x5d, 0x6f, + 0x9b, 0x5a, 
0x0e, 0xb5, 0xbc, 0x83, 0x0d, 0xd2, 0x20, 0xb5, 0x75, 0xc7, 0xde, 0xb5, 0x2a, 0x3e, + 0x35, 0x3c, 0xcb, 0xb1, 0xd1, 0x2d, 0xc8, 0xd9, 0x4e, 0xdd, 0xb2, 0x0d, 0x2e, 0x37, 0x7d, 0x4a, + 0x89, 0x6d, 0x1e, 0xec, 0x54, 0x0d, 0x4a, 0x58, 0x4e, 0x59, 0x55, 0x5e, 0xc9, 0xe8, 0xff, 0x6f, + 0x35, 0x0b, 0xb9, 0xad, 0x01, 0x18, 0x3c, 0xd0, 0x1a, 0xbd, 0x03, 0xf3, 0x35, 0x62, 0x97, 0x8d, + 0x52, 0x8d, 0x6c, 0x13, 0x6a, 0x12, 0xdb, 0xcb, 0xa5, 0x04, 0xe1, 0x62, 0xab, 0x59, 0x98, 0xdf, + 0x88, 0xab, 0x70, 0x37, 0x56, 0xbd, 0x0d, 0xcb, 0x1f, 0xd4, 0x9c, 0xfd, 0x8b, 0x16, 0xf3, 0x2c, + 0xbb, 0xe2, 0x5b, 0xac, 0x4a, 0xe8, 0x26, 0xf1, 0xaa, 0x4e, 0x19, 0xbd, 0x07, 0x69, 0xef, 0xc0, + 0x25, 0xe2, 0x7c, 0x59, 0xfd, 0xf8, 0xc3, 0x66, 0x61, 0xac, 0xd5, 0x2c, 0xa4, 0x6f, 0x1c, 0xb8, + 0xe4, 0x69, 0xb3, 0x70, 0x64, 0x80, 0x19, 0x57, 0x63, 0x61, 0xa8, 0x7e, 0x93, 0x02, 0xe0, 0xa8, + 0x1d, 0x11, 0x38, 0x74, 0x17, 0xa6, 0xf8, 0x63, 0x95, 0x0d, 0xcf, 0x10, 0x9c, 0xd3, 0x67, 0x4e, + 0x69, 0x51, 0xa6, 0x84, 0x31, 0xd7, 0xdc, 0xbd, 0x0a, 0x17, 0x30, 0x8d, 0xa3, 0xb5, 0xc6, 0x69, + 0xed, 0x5a, 0xe9, 0x1e, 0x31, 0xbd, 0x4d, 0xe2, 0x19, 0x3a, 0x92, 0xa7, 0x80, 0x48, 0x86, 0x43, + 0x56, 0xb4, 0x0d, 0x69, 0xe6, 0x12, 0x53, 0x04, 0x60, 0xfa, 0x8c, 0xa6, 0x0d, 0xcf, 0x43, 0x2d, + 0x3a, 0xdb, 0x8e, 0x4b, 0x4c, 0x7d, 0xa6, 0x7d, 0x43, 0xfe, 0x85, 0x05, 0x13, 0xba, 0x05, 0x13, + 0xcc, 0x33, 0x3c, 0x9f, 0xe5, 0xc6, 0x7b, 0x4e, 0x9c, 0xc4, 0x29, 0xec, 0xf4, 0x39, 0xc9, 0x3a, + 0x11, 0x7c, 0x63, 0xc9, 0xa7, 0x3e, 0x4e, 0xc1, 0x62, 0x04, 0x5e, 0x77, 0xec, 0xb2, 0x25, 0x32, + 0xe5, 0xed, 0x58, 0xd4, 0x8f, 0x75, 0x45, 0x7d, 0xb9, 0x8f, 0x49, 0x14, 0x71, 0xf4, 0x66, 0x78, + 0xdc, 0x94, 0x30, 0x3f, 0x1a, 0x77, 0xfe, 0xb4, 0x59, 0x98, 0x0f, 0xcd, 0xe2, 0xe7, 0x41, 0x0d, + 0x40, 0x35, 0x83, 0x79, 0x37, 0xa8, 0x61, 0xb3, 0x80, 0xd6, 0xaa, 0x13, 0x79, 0xeb, 0xd7, 0x46, + 0x7b, 0x27, 0x6e, 0xa1, 0xaf, 0x48, 0x97, 0x68, 0xa3, 0x87, 0x0d, 0xf7, 0xf1, 0x80, 0x5e, 0x86, + 0x09, 0x4a, 0x0c, 0xe6, 0xd8, 0xb9, 0xb4, 0x38, 
0x72, 0x18, 0x2f, 0x2c, 0xa4, 0x58, 0x6a, 0xd1, + 0xab, 0x30, 0x59, 0x27, 0x8c, 0x19, 0x15, 0x92, 0xcb, 0x08, 0xe0, 0xbc, 0x04, 0x4e, 0x6e, 0x06, + 0x62, 0xdc, 0xd6, 0xab, 0x3f, 0x2b, 0x30, 0x17, 0xc5, 0x69, 0xc3, 0x62, 0x1e, 0xba, 0xd3, 0x93, + 0x7b, 0xda, 0x68, 0x77, 0xe2, 0xd6, 0x22, 0xf3, 0x16, 0xa4, 0xbb, 0xa9, 0xb6, 0xa4, 0x23, 0xef, + 0xae, 0x41, 0xc6, 0xf2, 0x48, 0x9d, 0x47, 0x7d, 0xbc, 0x2b, 0x5c, 0x09, 0x49, 0xa2, 0xcf, 0x4a, + 0xda, 0xcc, 0x15, 0x4e, 0x80, 0x03, 0x1e, 0xf5, 0xcf, 0xf1, 0xce, 0x1b, 0xf0, 0x7c, 0x44, 0xdf, + 0x2b, 0xb0, 0xe2, 0x0e, 0x6c, 0x30, 0xf2, 0x52, 0xeb, 0x49, 0x9e, 0x07, 0xb7, 0x28, 0x4c, 0x76, + 0x09, 0xef, 0x2b, 0x44, 0x57, 0xe5, 0x91, 0x56, 0x86, 0x80, 0x87, 0x1c, 0x05, 0x7d, 0x04, 0xa8, + 0x6e, 0x78, 0x3c, 0xa2, 0x95, 0x6d, 0x4a, 0x4c, 0x52, 0xe6, 0xac, 0xb2, 0x29, 0x85, 0xd9, 0xb1, + 0xd9, 0x83, 0xc0, 0x7d, 0xac, 0xd0, 0x57, 0x0a, 0x2c, 0x96, 0x7b, 0x9b, 0x8c, 0xcc, 0xcb, 0xf3, + 0xa3, 0x04, 0xba, 0x4f, 0x8f, 0xd2, 0x97, 0x5b, 0xcd, 0xc2, 0x62, 0x1f, 0x05, 0xee, 0xe7, 0x0c, + 0xdd, 0x81, 0x0c, 0xf5, 0x6b, 0x84, 0xe5, 0xd2, 0xe2, 0x79, 0x13, 0xbd, 0x6e, 0x3b, 0x35, 0xcb, + 0x3c, 0xc0, 0xdc, 0xe4, 0x13, 0xcb, 0xab, 0xee, 0xf8, 0xa2, 0x57, 0xb1, 0xe8, 0xad, 0x85, 0x0a, + 0x07, 0xa4, 0xea, 0x03, 0x58, 0xe8, 0x6e, 0x1a, 0xa8, 0x02, 0x60, 0xb6, 0xeb, 0x94, 0x0f, 0x08, + 0xee, 0xf6, 0xf5, 0xd1, 0xb3, 0x2a, 0xac, 0xf1, 0xa8, 0x5f, 0x86, 0x22, 0x86, 0x3b, 0xa8, 0xd5, + 0x53, 0x30, 0x73, 0x99, 0x3a, 0xbe, 0x2b, 0xcf, 0x88, 0x56, 0x21, 0x6d, 0x1b, 0xf5, 0x76, 0xf7, + 0x09, 0x3b, 0xe2, 0x96, 0x51, 0x27, 0x58, 0x68, 0xd4, 0xef, 0x14, 0x98, 0xdd, 0xb0, 0xea, 0x96, + 0x87, 0x09, 0x73, 0x1d, 0x9b, 0x11, 0x74, 0x2e, 0xd6, 0xb1, 0x8e, 0x76, 0x75, 0xac, 0x43, 0x31, + 0x70, 0x47, 0xaf, 0xfa, 0x14, 0x26, 0xef, 0xfb, 0xc4, 0xb7, 0xec, 0x8a, 0xec, 0xd7, 0x67, 0x93, + 0x2e, 0x78, 0x3d, 0x80, 0xc7, 0xb2, 0x4d, 0x9f, 0xe6, 0x2d, 0x40, 0x6a, 0x70, 0x9b, 0x51, 0xfd, + 0x27, 0x05, 0x47, 0x85, 0x63, 0x52, 0x1e, 0x32, 0x95, 0xef, 0x40, 0xce, 0x60, 0xcc, 
0xa7, 0xa4, + 0x3c, 0x68, 0x2a, 0xaf, 0xca, 0xdb, 0xe4, 0xd6, 0x06, 0xe0, 0xf0, 0x40, 0x06, 0x74, 0x0f, 0x66, + 0x6b, 0x9d, 0x77, 0x97, 0xd7, 0x3c, 0x99, 0x74, 0xcd, 0x58, 0xc0, 0xf4, 0x25, 0x79, 0x82, 0x78, + 0xd0, 0x71, 0x9c, 0xba, 0xdf, 0x16, 0x30, 0x3e, 0xfa, 0x16, 0x80, 0xae, 0xc1, 0x52, 0xc9, 0xa1, + 0xd4, 0xd9, 0xb7, 0xec, 0x8a, 0xf0, 0xd3, 0x26, 0x49, 0x0b, 0x92, 0xff, 0xb5, 0x9a, 0x85, 0x25, + 0xbd, 0x1f, 0x00, 0xf7, 0xb7, 0x53, 0xf7, 0x61, 0x69, 0x8b, 0xf7, 0x14, 0xe6, 0xf8, 0xd4, 0x24, + 0x51, 0x41, 0xa0, 0x02, 0x64, 0x1a, 0x84, 0x96, 0x82, 0xa4, 0xce, 0xea, 0x59, 0x5e, 0x0e, 0x1f, + 0x73, 0x01, 0x0e, 0xe4, 0xfc, 0x26, 0x76, 0x64, 0x79, 0x13, 0x6f, 0xb0, 0xdc, 0x84, 0x80, 0x8a, + 0x9b, 0x6c, 0xc5, 0x55, 0xb8, 0x1b, 0xab, 0x36, 0x53, 0xb0, 0x3c, 0xa0, 0xfe, 0xd0, 0x4d, 0x98, + 0x62, 0xf2, 0x6f, 0x59, 0x53, 0xc7, 0x92, 0xde, 0x42, 0xda, 0x46, 0xdd, 0xbf, 0x4d, 0x86, 0x43, + 0x2a, 0xe4, 0xc0, 0x2c, 0x95, 0x47, 0x10, 0x3e, 0xe5, 0x14, 0x38, 0x93, 0xc4, 0xdd, 0x1b, 0x9d, + 0xe8, 0xb1, 0x71, 0x27, 0x21, 0x8e, 0xf3, 0xa3, 0x07, 0xb0, 0xd0, 0x71, 0xed, 0xc0, 0xe7, 0xb8, + 0xf0, 0x79, 0x2e, 0xc9, 0x67, 0xdf, 0x47, 0xd1, 0x73, 0xd2, 0xed, 0xc2, 0x56, 0x17, 0x2d, 0xee, + 0x71, 0xa4, 0xfe, 0x9a, 0x82, 0x21, 0x83, 0xe1, 0x05, 0x2c, 0x79, 0x77, 0x63, 0x4b, 0xde, 0xbb, + 0xcf, 0x3e, 0xf1, 0x06, 0x2e, 0x7d, 0xd5, 0xae, 0xa5, 0xef, 0xfd, 0xe7, 0xf0, 0x31, 0x7c, 0x09, + 0xfc, 0x2b, 0x05, 0x2f, 0x0d, 0x36, 0x8e, 0x96, 0xc2, 0xab, 0xb1, 0x16, 0x7b, 0xbe, 0xab, 0xc5, + 0x1e, 0x1b, 0x81, 0xe2, 0xbf, 0x25, 0xb1, 0x6b, 0x49, 0xfc, 0x4d, 0x81, 0xfc, 0xe0, 0xb8, 0xbd, + 0x80, 0xa5, 0xf1, 0xb3, 0xf8, 0xd2, 0xf8, 0xd6, 0xb3, 0x27, 0xd9, 0x80, 0x25, 0xf2, 0xf2, 0xb0, + 0xdc, 0x0a, 0xd7, 0xbd, 0x11, 0x46, 0xfe, 0x0f, 0xa9, 0x61, 0xa1, 0x12, 0xdb, 0x69, 0xc2, 0xaf, + 0x96, 0x98, 0xf5, 0x25, 0x9b, 0x8f, 0x9e, 0x3a, 0x9f, 0x1e, 0x41, 0x42, 0x56, 0x61, 0xb2, 0x16, + 0xcc, 0x6a, 0x59, 0xd4, 0x6b, 0x23, 0x8d, 0xc8, 0x61, 0xa3, 0x3d, 0x58, 0x0b, 0x24, 0x0c, 0xb7, + 0xe9, 0x51, 0x19, 0x26, 
0x88, 0xf8, 0xa9, 0x3e, 0x6a, 0x65, 0x27, 0xfd, 0xb0, 0xd7, 0x81, 0x67, + 0x61, 0x80, 0xc2, 0x92, 0x5b, 0xfd, 0x56, 0x81, 0xd5, 0xa4, 0x96, 0x80, 0xf6, 0xfb, 0xac, 0x78, + 0xcf, 0xb1, 0xbe, 0x8f, 0xbe, 0xf2, 0xfd, 0xa8, 0xc0, 0xe1, 0x7e, 0x9b, 0x14, 0x2f, 0x32, 0xbe, + 0x3e, 0x85, 0xbb, 0x4f, 0x58, 0x64, 0xd7, 0x85, 0x14, 0x4b, 0x2d, 0x3a, 0x01, 0x53, 0x55, 0xc3, + 0x2e, 0xef, 0x58, 0x5f, 0xb4, 0xb7, 0xfa, 0x30, 0xcd, 0x3f, 0x94, 0x72, 0x1c, 0x22, 0xd0, 0x45, + 0x58, 0x10, 0x76, 0x1b, 0xc4, 0xae, 0x78, 0x55, 0xf1, 0x22, 0x72, 0x35, 0x09, 0xa7, 0xce, 0xf5, + 0x2e, 0x3d, 0xee, 0xb1, 0x50, 0xff, 0x56, 0x00, 0x3d, 0xcb, 0x36, 0x71, 0x1c, 0xb2, 0x86, 0x6b, + 0x89, 0x15, 0x37, 0x28, 0xb4, 0xac, 0x3e, 0xdb, 0x6a, 0x16, 0xb2, 0x6b, 0xdb, 0x57, 0x02, 0x21, + 0x8e, 0xf4, 0x1c, 0xdc, 0x1e, 0xb4, 0xc1, 0x40, 0x95, 0xe0, 0xb6, 0x63, 0x86, 0x23, 0x3d, 0xba, + 0x00, 0x33, 0x66, 0xcd, 0x67, 0x1e, 0xa1, 0x3b, 0xa6, 0xe3, 0x12, 0xd1, 0x98, 0xa6, 0xf4, 0xc3, + 0xf2, 0x4e, 0x33, 0xeb, 0x1d, 0x3a, 0x1c, 0x43, 0x22, 0x0d, 0x80, 0x97, 0x15, 0x73, 0x0d, 0xee, + 0x27, 0x23, 0xfc, 0xcc, 0xf1, 0x07, 0xdb, 0x0a, 0xa5, 0xb8, 0x03, 0xa1, 0xde, 0x83, 0xa5, 0x1d, + 0x42, 0x1b, 0x96, 0x49, 0xd6, 0x4c, 0xd3, 0xf1, 0x6d, 0xaf, 0xbd, 0xac, 0x17, 0x21, 0x1b, 0xc2, + 0x64, 0xe5, 0x1d, 0x92, 0xfe, 0xb3, 0x21, 0x17, 0x8e, 0x30, 0x61, 0xa9, 0xa7, 0x06, 0x96, 0xfa, + 0x2f, 0x29, 0x98, 0x8c, 0xe8, 0xd3, 0x7b, 0x96, 0x5d, 0x96, 0xcc, 0x47, 0xda, 0xe8, 0xab, 0x96, + 0x5d, 0x7e, 0xda, 0x2c, 0x4c, 0x4b, 0x18, 0xff, 0xc4, 0x02, 0x88, 0xae, 0x40, 0xda, 0x67, 0x84, + 0xca, 0x22, 0x3e, 0x9e, 0x94, 0xcc, 0x37, 0x19, 0xa1, 0xed, 0xfd, 0x6a, 0x8a, 0x33, 0x73, 0x01, + 0x16, 0x14, 0x68, 0x13, 0x32, 0x15, 0xfe, 0x28, 0xb2, 0x4e, 0x4f, 0x24, 0x71, 0x75, 0xfe, 0x88, + 0x09, 0xd2, 0x40, 0x48, 0x70, 0xc0, 0x82, 0xee, 0xc3, 0x1c, 0x8b, 0x85, 0x50, 0x3c, 0xd7, 0x08, + 0xfb, 0x52, 0xdf, 0xc0, 0xeb, 0xa8, 0xd5, 0x2c, 0xcc, 0xc5, 0x55, 0xb8, 0xcb, 0x81, 0x5a, 0x84, + 0xe9, 0x8e, 0x0b, 0x26, 0x77, 0x59, 0xfd, 0xe2, 0xc3, 0x27, 
0xf9, 0xb1, 0x47, 0x4f, 0xf2, 0x63, + 0x8f, 0x9f, 0xe4, 0xc7, 0xbe, 0x6c, 0xe5, 0x95, 0x87, 0xad, 0xbc, 0xf2, 0xa8, 0x95, 0x57, 0x1e, + 0xb7, 0xf2, 0xca, 0xef, 0xad, 0xbc, 0xf2, 0xf5, 0x1f, 0xf9, 0xb1, 0xdb, 0xf9, 0xe1, 0xff, 0x8b, + 0xfd, 0x37, 0x00, 0x00, 0xff, 0xff, 0x3a, 0xda, 0x82, 0x48, 0xc5, 0x15, 0x00, 0x00, +} + +func (m *ExemptPriorityLevelConfiguration) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *ExemptPriorityLevelConfiguration) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *ExemptPriorityLevelConfiguration) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if m.LendablePercent != nil { + i = encodeVarintGenerated(dAtA, i, uint64(*m.LendablePercent)) + i-- + dAtA[i] = 0x10 + } + if m.NominalConcurrencyShares != nil { + i = encodeVarintGenerated(dAtA, i, uint64(*m.NominalConcurrencyShares)) + i-- + dAtA[i] = 0x8 + } + return len(dAtA) - i, nil } func (m *FlowDistinguisherMethod) Marshal() (dAtA []byte, err error) { @@ -1491,6 +1556,18 @@ func (m *PriorityLevelConfigurationSpec) MarshalToSizedBuffer(dAtA []byte) (int, _ = i var l int _ = l + if m.Exempt != nil { + { + size, err := m.Exempt.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintGenerated(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x1a + } if m.Limited != nil { { size, err := m.Limited.MarshalToSizedBuffer(dAtA[:i]) @@ -1783,6 +1860,21 @@ func encodeVarintGenerated(dAtA []byte, offset int, v uint64) int { dAtA[offset] = uint8(v) return base } +func (m *ExemptPriorityLevelConfiguration) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + if m.NominalConcurrencyShares != nil { + n += 1 + sovGenerated(uint64(*m.NominalConcurrencyShares)) + } + if 
m.LendablePercent != nil { + n += 1 + sovGenerated(uint64(*m.LendablePercent)) + } + return n +} + func (m *FlowDistinguisherMethod) Size() (n int) { if m == nil { return 0 @@ -2048,6 +2140,10 @@ func (m *PriorityLevelConfigurationSpec) Size() (n int) { l = m.Limited.Size() n += 1 + l + sovGenerated(uint64(l)) } + if m.Exempt != nil { + l = m.Exempt.Size() + n += 1 + l + sovGenerated(uint64(l)) + } return n } @@ -2165,6 +2261,17 @@ func sovGenerated(x uint64) (n int) { func sozGenerated(x uint64) (n int) { return sovGenerated(uint64((x << 1) ^ uint64((int64(x) >> 63)))) } +func (this *ExemptPriorityLevelConfiguration) String() string { + if this == nil { + return "nil" + } + s := strings.Join([]string{`&ExemptPriorityLevelConfiguration{`, + `NominalConcurrencyShares:` + valueToStringGenerated(this.NominalConcurrencyShares) + `,`, + `LendablePercent:` + valueToStringGenerated(this.LendablePercent) + `,`, + `}`, + }, "") + return s +} func (this *FlowDistinguisherMethod) String() string { if this == nil { return "nil" @@ -2381,6 +2488,7 @@ func (this *PriorityLevelConfigurationSpec) String() string { s := strings.Join([]string{`&PriorityLevelConfigurationSpec{`, `Type:` + fmt.Sprintf("%v", this.Type) + `,`, `Limited:` + strings.Replace(this.Limited.String(), "LimitedPriorityLevelConfiguration", "LimitedPriorityLevelConfiguration", 1) + `,`, + `Exempt:` + strings.Replace(this.Exempt.String(), "ExemptPriorityLevelConfiguration", "ExemptPriorityLevelConfiguration", 1) + `,`, `}`, }, "") return s @@ -2468,6 +2576,96 @@ func valueToStringGenerated(v interface{}) string { pv := reflect.Indirect(rv).Interface() return fmt.Sprintf("*%v", pv) } +func (m *ExemptPriorityLevelConfiguration) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowGenerated + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + 
wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: ExemptPriorityLevelConfiguration: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: ExemptPriorityLevelConfiguration: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field NominalConcurrencyShares", wireType) + } + var v int32 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowGenerated + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + v |= int32(b&0x7F) << shift + if b < 0x80 { + break + } + } + m.NominalConcurrencyShares = &v + case 2: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field LendablePercent", wireType) + } + var v int32 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowGenerated + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + v |= int32(b&0x7F) << shift + if b < 0x80 { + break + } + } + m.LendablePercent = &v + default: + iNdEx = preIndex + skippy, err := skipGenerated(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthGenerated + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} func (m *FlowDistinguisherMethod) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 @@ -4547,6 +4745,42 @@ func (m *PriorityLevelConfigurationSpec) Unmarshal(dAtA []byte) error { return err } iNdEx = postIndex + case 3: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Exempt", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowGenerated + 
} + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthGenerated + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthGenerated + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if m.Exempt == nil { + m.Exempt = &ExemptPriorityLevelConfiguration{} + } + if err := m.Exempt.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipGenerated(dAtA[iNdEx:]) diff --git a/cluster-autoscaler/vendor/k8s.io/api/flowcontrol/v1beta1/generated.proto b/cluster-autoscaler/vendor/k8s.io/api/flowcontrol/v1beta1/generated.proto index 98bfabe9c673..96df0ace798e 100644 --- a/cluster-autoscaler/vendor/k8s.io/api/flowcontrol/v1beta1/generated.proto +++ b/cluster-autoscaler/vendor/k8s.io/api/flowcontrol/v1beta1/generated.proto @@ -28,6 +28,40 @@ import "k8s.io/apimachinery/pkg/runtime/schema/generated.proto"; // Package-wide variables from generator "generated". option go_package = "k8s.io/api/flowcontrol/v1beta1"; +// ExemptPriorityLevelConfiguration describes the configurable aspects +// of the handling of exempt requests. +// In the mandatory exempt configuration object the values in the fields +// here can be modified by authorized users, unlike the rest of the `spec`. +message ExemptPriorityLevelConfiguration { + // `nominalConcurrencyShares` (NCS) contributes to the computation of the + // NominalConcurrencyLimit (NominalCL) of this level. + // This is the number of execution seats nominally reserved for this priority level. + // This DOES NOT limit the dispatching from this priority level + // but affects the other priority levels through the borrowing mechanism. 
+ // The server's concurrency limit (ServerCL) is divided among all the + // priority levels in proportion to their NCS values: + // + // NominalCL(i) = ceil( ServerCL * NCS(i) / sum_ncs ) + // sum_ncs = sum[priority level k] NCS(k) + // + // Bigger numbers mean a larger nominal concurrency limit, + // at the expense of every other priority level. + // This field has a default value of zero. + // +optional + optional int32 nominalConcurrencyShares = 1; + + // `lendablePercent` prescribes the fraction of the level's NominalCL that + // can be borrowed by other priority levels. The value of this + // field must be between 0 and 100, inclusive, and it defaults to 0. + // The number of seats that other levels can borrow from this level, known + // as this level's LendableConcurrencyLimit (LendableCL), is defined as follows. + // + // LendableCL(i) = round( NominalCL(i) * lendablePercent(i)/100.0 ) + // + // +optional + optional int32 lendablePercent = 2; +} + // FlowDistinguisherMethod specifies the method of a flow distinguisher. message FlowDistinguisherMethod { // `type` is the type of flow distinguisher method @@ -332,6 +366,14 @@ message PriorityLevelConfigurationSpec { // This field must be non-empty if and only if `type` is `"Limited"`. // +optional optional LimitedPriorityLevelConfiguration limited = 2; + + // `exempt` specifies how requests are handled for an exempt priority level. + // This field MUST be empty if `type` is `"Limited"`. + // This field MAY be non-empty if `type` is `"Exempt"`. + // If empty and `type` is `"Exempt"` then the default values + // for `ExemptPriorityLevelConfiguration` apply. + // +optional + optional ExemptPriorityLevelConfiguration exempt = 3; } // PriorityLevelConfigurationStatus represents the current state of a "request-priority". 
diff --git a/cluster-autoscaler/vendor/k8s.io/api/flowcontrol/v1beta1/types.go b/cluster-autoscaler/vendor/k8s.io/api/flowcontrol/v1beta1/types.go index c3b7f607a797..9e05ff1a090f 100644 --- a/cluster-autoscaler/vendor/k8s.io/api/flowcontrol/v1beta1/types.go +++ b/cluster-autoscaler/vendor/k8s.io/api/flowcontrol/v1beta1/types.go @@ -77,7 +77,9 @@ const ( // is a boolean false or has an invalid boolean representation // (if the cluster operator sets it to 'false' it will be stomped) // - any changes to the spec made by the cluster operator will be - // stomped. + // stomped, except for changes to the `nominalConcurrencyShares` + // and `lendablePercent` fields of the PriorityLevelConfiguration + // named "exempt". // // The kube-apiserver will apply updates on the suggested configuration if: // - the cluster operator has enabled auto-update by setting the annotation @@ -435,6 +437,14 @@ type PriorityLevelConfigurationSpec struct { // This field must be non-empty if and only if `type` is `"Limited"`. // +optional Limited *LimitedPriorityLevelConfiguration `json:"limited,omitempty" protobuf:"bytes,2,opt,name=limited"` + + // `exempt` specifies how requests are handled for an exempt priority level. + // This field MUST be empty if `type` is `"Limited"`. + // This field MAY be non-empty if `type` is `"Exempt"`. + // If empty and `type` is `"Exempt"` then the default values + // for `ExemptPriorityLevelConfiguration` apply. + // +optional + Exempt *ExemptPriorityLevelConfiguration `json:"exempt,omitempty" protobuf:"bytes,3,opt,name=exempt"` } // PriorityLevelEnablement indicates whether limits on execution are enabled for the priority level @@ -505,6 +515,43 @@ type LimitedPriorityLevelConfiguration struct { BorrowingLimitPercent *int32 `json:"borrowingLimitPercent,omitempty" protobuf:"varint,4,opt,name=borrowingLimitPercent"` } +// ExemptPriorityLevelConfiguration describes the configurable aspects +// of the handling of exempt requests. 
+// In the mandatory exempt configuration object the values in the fields +// here can be modified by authorized users, unlike the rest of the `spec`. +type ExemptPriorityLevelConfiguration struct { + // `nominalConcurrencyShares` (NCS) contributes to the computation of the + // NominalConcurrencyLimit (NominalCL) of this level. + // This is the number of execution seats nominally reserved for this priority level. + // This DOES NOT limit the dispatching from this priority level + // but affects the other priority levels through the borrowing mechanism. + // The server's concurrency limit (ServerCL) is divided among all the + // priority levels in proportion to their NCS values: + // + // NominalCL(i) = ceil( ServerCL * NCS(i) / sum_ncs ) + // sum_ncs = sum[priority level k] NCS(k) + // + // Bigger numbers mean a larger nominal concurrency limit, + // at the expense of every other priority level. + // This field has a default value of zero. + // +optional + NominalConcurrencyShares *int32 `json:"nominalConcurrencyShares,omitempty" protobuf:"varint,1,opt,name=nominalConcurrencyShares"` + // `lendablePercent` prescribes the fraction of the level's NominalCL that + // can be borrowed by other priority levels. The value of this + // field must be between 0 and 100, inclusive, and it defaults to 0. + // The number of seats that other levels can borrow from this level, known + // as this level's LendableConcurrencyLimit (LendableCL), is defined as follows. + // + // LendableCL(i) = round( NominalCL(i) * lendablePercent(i)/100.0 ) + // + // +optional + LendablePercent *int32 `json:"lendablePercent,omitempty" protobuf:"varint,2,opt,name=lendablePercent"` + // The `BorrowingCL` of an Exempt priority level is implicitly `ServerCL`. + // In other words, an exempt priority level + // has no meaningful limit on how much it borrows. + // There is no explicit representation of that here. +} + // LimitResponse defines how to handle requests that can not be executed right now. 
// +union type LimitResponse struct { diff --git a/cluster-autoscaler/vendor/k8s.io/api/flowcontrol/v1beta1/types_swagger_doc_generated.go b/cluster-autoscaler/vendor/k8s.io/api/flowcontrol/v1beta1/types_swagger_doc_generated.go index fc08e128db3c..1405f3c3ca6a 100644 --- a/cluster-autoscaler/vendor/k8s.io/api/flowcontrol/v1beta1/types_swagger_doc_generated.go +++ b/cluster-autoscaler/vendor/k8s.io/api/flowcontrol/v1beta1/types_swagger_doc_generated.go @@ -27,6 +27,16 @@ package v1beta1 // Those methods can be generated by using hack/update-codegen.sh // AUTO-GENERATED FUNCTIONS START HERE. DO NOT EDIT. +var map_ExemptPriorityLevelConfiguration = map[string]string{ + "": "ExemptPriorityLevelConfiguration describes the configurable aspects of the handling of exempt requests. In the mandatory exempt configuration object the values in the fields here can be modified by authorized users, unlike the rest of the `spec`.", + "nominalConcurrencyShares": "`nominalConcurrencyShares` (NCS) contributes to the computation of the NominalConcurrencyLimit (NominalCL) of this level. This is the number of execution seats nominally reserved for this priority level. This DOES NOT limit the dispatching from this priority level but affects the other priority levels through the borrowing mechanism. The server's concurrency limit (ServerCL) is divided among all the priority levels in proportion to their NCS values:\n\nNominalCL(i) = ceil( ServerCL * NCS(i) / sum_ncs ) sum_ncs = sum[priority level k] NCS(k)\n\nBigger numbers mean a larger nominal concurrency limit, at the expense of every other priority level. This field has a default value of zero.", + "lendablePercent": "`lendablePercent` prescribes the fraction of the level's NominalCL that can be borrowed by other priority levels. The value of this field must be between 0 and 100, inclusive, and it defaults to 0. 
The number of seats that other levels can borrow from this level, known as this level's LendableConcurrencyLimit (LendableCL), is defined as follows.\n\nLendableCL(i) = round( NominalCL(i) * lendablePercent(i)/100.0 )", +} + +func (ExemptPriorityLevelConfiguration) SwaggerDoc() map[string]string { + return map_ExemptPriorityLevelConfiguration +} + var map_FlowDistinguisherMethod = map[string]string{ "": "FlowDistinguisherMethod specifies the method of a flow distinguisher.", "type": "`type` is the type of flow distinguisher method The supported types are \"ByUser\" and \"ByNamespace\". Required.", @@ -190,6 +200,7 @@ var map_PriorityLevelConfigurationSpec = map[string]string{ "": "PriorityLevelConfigurationSpec specifies the configuration of a priority level.", "type": "`type` indicates whether this priority level is subject to limitation on request execution. A value of `\"Exempt\"` means that requests of this priority level are not subject to a limit (and thus are never queued) and do not detract from the capacity made available to other priority levels. A value of `\"Limited\"` means that (a) requests of this priority level _are_ subject to limits and (b) some of the server's limited capacity is made available exclusively to this priority level. Required.", "limited": "`limited` specifies how requests are handled for a Limited priority level. This field must be non-empty if and only if `type` is `\"Limited\"`.", + "exempt": "`exempt` specifies how requests are handled for an exempt priority level. This field MUST be empty if `type` is `\"Limited\"`. This field MAY be non-empty if `type` is `\"Exempt\"`. 
If empty and `type` is `\"Exempt\"` then the default values for `ExemptPriorityLevelConfiguration` apply.", } func (PriorityLevelConfigurationSpec) SwaggerDoc() map[string]string { diff --git a/cluster-autoscaler/vendor/k8s.io/api/flowcontrol/v1beta1/zz_generated.deepcopy.go b/cluster-autoscaler/vendor/k8s.io/api/flowcontrol/v1beta1/zz_generated.deepcopy.go index 027c3057f81a..965d5e55a381 100644 --- a/cluster-autoscaler/vendor/k8s.io/api/flowcontrol/v1beta1/zz_generated.deepcopy.go +++ b/cluster-autoscaler/vendor/k8s.io/api/flowcontrol/v1beta1/zz_generated.deepcopy.go @@ -25,6 +25,32 @@ import ( runtime "k8s.io/apimachinery/pkg/runtime" ) +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *ExemptPriorityLevelConfiguration) DeepCopyInto(out *ExemptPriorityLevelConfiguration) { + *out = *in + if in.NominalConcurrencyShares != nil { + in, out := &in.NominalConcurrencyShares, &out.NominalConcurrencyShares + *out = new(int32) + **out = **in + } + if in.LendablePercent != nil { + in, out := &in.LendablePercent, &out.LendablePercent + *out = new(int32) + **out = **in + } + return +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ExemptPriorityLevelConfiguration. +func (in *ExemptPriorityLevelConfiguration) DeepCopy() *ExemptPriorityLevelConfiguration { + if in == nil { + return nil + } + out := new(ExemptPriorityLevelConfiguration) + in.DeepCopyInto(out) + return out +} + // DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. 
func (in *FlowDistinguisherMethod) DeepCopyInto(out *FlowDistinguisherMethod) { *out = *in @@ -400,6 +426,11 @@ func (in *PriorityLevelConfigurationSpec) DeepCopyInto(out *PriorityLevelConfigu *out = new(LimitedPriorityLevelConfiguration) (*in).DeepCopyInto(*out) } + if in.Exempt != nil { + in, out := &in.Exempt, &out.Exempt + *out = new(ExemptPriorityLevelConfiguration) + (*in).DeepCopyInto(*out) + } return } diff --git a/cluster-autoscaler/vendor/k8s.io/api/flowcontrol/v1beta2/generated.pb.go b/cluster-autoscaler/vendor/k8s.io/api/flowcontrol/v1beta2/generated.pb.go index b4c8f958f1df..7f8ee0850636 100644 --- a/cluster-autoscaler/vendor/k8s.io/api/flowcontrol/v1beta2/generated.pb.go +++ b/cluster-autoscaler/vendor/k8s.io/api/flowcontrol/v1beta2/generated.pb.go @@ -43,10 +43,38 @@ var _ = math.Inf // proto package needs to be updated. const _ = proto.GoGoProtoPackageIsVersion3 // please upgrade the proto package +func (m *ExemptPriorityLevelConfiguration) Reset() { *m = ExemptPriorityLevelConfiguration{} } +func (*ExemptPriorityLevelConfiguration) ProtoMessage() {} +func (*ExemptPriorityLevelConfiguration) Descriptor() ([]byte, []int) { + return fileDescriptor_ed300aa8e672704e, []int{0} +} +func (m *ExemptPriorityLevelConfiguration) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *ExemptPriorityLevelConfiguration) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err + } + return b[:n], nil +} +func (m *ExemptPriorityLevelConfiguration) XXX_Merge(src proto.Message) { + xxx_messageInfo_ExemptPriorityLevelConfiguration.Merge(m, src) +} +func (m *ExemptPriorityLevelConfiguration) XXX_Size() int { + return m.Size() +} +func (m *ExemptPriorityLevelConfiguration) XXX_DiscardUnknown() { + xxx_messageInfo_ExemptPriorityLevelConfiguration.DiscardUnknown(m) +} + +var xxx_messageInfo_ExemptPriorityLevelConfiguration proto.InternalMessageInfo + func (m 
*FlowDistinguisherMethod) Reset() { *m = FlowDistinguisherMethod{} } func (*FlowDistinguisherMethod) ProtoMessage() {} func (*FlowDistinguisherMethod) Descriptor() ([]byte, []int) { - return fileDescriptor_ed300aa8e672704e, []int{0} + return fileDescriptor_ed300aa8e672704e, []int{1} } func (m *FlowDistinguisherMethod) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -74,7 +102,7 @@ var xxx_messageInfo_FlowDistinguisherMethod proto.InternalMessageInfo func (m *FlowSchema) Reset() { *m = FlowSchema{} } func (*FlowSchema) ProtoMessage() {} func (*FlowSchema) Descriptor() ([]byte, []int) { - return fileDescriptor_ed300aa8e672704e, []int{1} + return fileDescriptor_ed300aa8e672704e, []int{2} } func (m *FlowSchema) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -102,7 +130,7 @@ var xxx_messageInfo_FlowSchema proto.InternalMessageInfo func (m *FlowSchemaCondition) Reset() { *m = FlowSchemaCondition{} } func (*FlowSchemaCondition) ProtoMessage() {} func (*FlowSchemaCondition) Descriptor() ([]byte, []int) { - return fileDescriptor_ed300aa8e672704e, []int{2} + return fileDescriptor_ed300aa8e672704e, []int{3} } func (m *FlowSchemaCondition) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -130,7 +158,7 @@ var xxx_messageInfo_FlowSchemaCondition proto.InternalMessageInfo func (m *FlowSchemaList) Reset() { *m = FlowSchemaList{} } func (*FlowSchemaList) ProtoMessage() {} func (*FlowSchemaList) Descriptor() ([]byte, []int) { - return fileDescriptor_ed300aa8e672704e, []int{3} + return fileDescriptor_ed300aa8e672704e, []int{4} } func (m *FlowSchemaList) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -158,7 +186,7 @@ var xxx_messageInfo_FlowSchemaList proto.InternalMessageInfo func (m *FlowSchemaSpec) Reset() { *m = FlowSchemaSpec{} } func (*FlowSchemaSpec) ProtoMessage() {} func (*FlowSchemaSpec) Descriptor() ([]byte, []int) { - return fileDescriptor_ed300aa8e672704e, []int{4} + return fileDescriptor_ed300aa8e672704e, []int{5} } func (m 
*FlowSchemaSpec) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -186,7 +214,7 @@ var xxx_messageInfo_FlowSchemaSpec proto.InternalMessageInfo func (m *FlowSchemaStatus) Reset() { *m = FlowSchemaStatus{} } func (*FlowSchemaStatus) ProtoMessage() {} func (*FlowSchemaStatus) Descriptor() ([]byte, []int) { - return fileDescriptor_ed300aa8e672704e, []int{5} + return fileDescriptor_ed300aa8e672704e, []int{6} } func (m *FlowSchemaStatus) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -214,7 +242,7 @@ var xxx_messageInfo_FlowSchemaStatus proto.InternalMessageInfo func (m *GroupSubject) Reset() { *m = GroupSubject{} } func (*GroupSubject) ProtoMessage() {} func (*GroupSubject) Descriptor() ([]byte, []int) { - return fileDescriptor_ed300aa8e672704e, []int{6} + return fileDescriptor_ed300aa8e672704e, []int{7} } func (m *GroupSubject) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -242,7 +270,7 @@ var xxx_messageInfo_GroupSubject proto.InternalMessageInfo func (m *LimitResponse) Reset() { *m = LimitResponse{} } func (*LimitResponse) ProtoMessage() {} func (*LimitResponse) Descriptor() ([]byte, []int) { - return fileDescriptor_ed300aa8e672704e, []int{7} + return fileDescriptor_ed300aa8e672704e, []int{8} } func (m *LimitResponse) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -270,7 +298,7 @@ var xxx_messageInfo_LimitResponse proto.InternalMessageInfo func (m *LimitedPriorityLevelConfiguration) Reset() { *m = LimitedPriorityLevelConfiguration{} } func (*LimitedPriorityLevelConfiguration) ProtoMessage() {} func (*LimitedPriorityLevelConfiguration) Descriptor() ([]byte, []int) { - return fileDescriptor_ed300aa8e672704e, []int{8} + return fileDescriptor_ed300aa8e672704e, []int{9} } func (m *LimitedPriorityLevelConfiguration) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -298,7 +326,7 @@ var xxx_messageInfo_LimitedPriorityLevelConfiguration proto.InternalMessageInfo func (m *NonResourcePolicyRule) Reset() { *m = 
NonResourcePolicyRule{} } func (*NonResourcePolicyRule) ProtoMessage() {} func (*NonResourcePolicyRule) Descriptor() ([]byte, []int) { - return fileDescriptor_ed300aa8e672704e, []int{9} + return fileDescriptor_ed300aa8e672704e, []int{10} } func (m *NonResourcePolicyRule) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -326,7 +354,7 @@ var xxx_messageInfo_NonResourcePolicyRule proto.InternalMessageInfo func (m *PolicyRulesWithSubjects) Reset() { *m = PolicyRulesWithSubjects{} } func (*PolicyRulesWithSubjects) ProtoMessage() {} func (*PolicyRulesWithSubjects) Descriptor() ([]byte, []int) { - return fileDescriptor_ed300aa8e672704e, []int{10} + return fileDescriptor_ed300aa8e672704e, []int{11} } func (m *PolicyRulesWithSubjects) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -354,7 +382,7 @@ var xxx_messageInfo_PolicyRulesWithSubjects proto.InternalMessageInfo func (m *PriorityLevelConfiguration) Reset() { *m = PriorityLevelConfiguration{} } func (*PriorityLevelConfiguration) ProtoMessage() {} func (*PriorityLevelConfiguration) Descriptor() ([]byte, []int) { - return fileDescriptor_ed300aa8e672704e, []int{11} + return fileDescriptor_ed300aa8e672704e, []int{12} } func (m *PriorityLevelConfiguration) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -382,7 +410,7 @@ var xxx_messageInfo_PriorityLevelConfiguration proto.InternalMessageInfo func (m *PriorityLevelConfigurationCondition) Reset() { *m = PriorityLevelConfigurationCondition{} } func (*PriorityLevelConfigurationCondition) ProtoMessage() {} func (*PriorityLevelConfigurationCondition) Descriptor() ([]byte, []int) { - return fileDescriptor_ed300aa8e672704e, []int{12} + return fileDescriptor_ed300aa8e672704e, []int{13} } func (m *PriorityLevelConfigurationCondition) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -410,7 +438,7 @@ var xxx_messageInfo_PriorityLevelConfigurationCondition proto.InternalMessageInf func (m *PriorityLevelConfigurationList) Reset() { *m = 
PriorityLevelConfigurationList{} } func (*PriorityLevelConfigurationList) ProtoMessage() {} func (*PriorityLevelConfigurationList) Descriptor() ([]byte, []int) { - return fileDescriptor_ed300aa8e672704e, []int{13} + return fileDescriptor_ed300aa8e672704e, []int{14} } func (m *PriorityLevelConfigurationList) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -438,7 +466,7 @@ var xxx_messageInfo_PriorityLevelConfigurationList proto.InternalMessageInfo func (m *PriorityLevelConfigurationReference) Reset() { *m = PriorityLevelConfigurationReference{} } func (*PriorityLevelConfigurationReference) ProtoMessage() {} func (*PriorityLevelConfigurationReference) Descriptor() ([]byte, []int) { - return fileDescriptor_ed300aa8e672704e, []int{14} + return fileDescriptor_ed300aa8e672704e, []int{15} } func (m *PriorityLevelConfigurationReference) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -466,7 +494,7 @@ var xxx_messageInfo_PriorityLevelConfigurationReference proto.InternalMessageInf func (m *PriorityLevelConfigurationSpec) Reset() { *m = PriorityLevelConfigurationSpec{} } func (*PriorityLevelConfigurationSpec) ProtoMessage() {} func (*PriorityLevelConfigurationSpec) Descriptor() ([]byte, []int) { - return fileDescriptor_ed300aa8e672704e, []int{15} + return fileDescriptor_ed300aa8e672704e, []int{16} } func (m *PriorityLevelConfigurationSpec) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -494,7 +522,7 @@ var xxx_messageInfo_PriorityLevelConfigurationSpec proto.InternalMessageInfo func (m *PriorityLevelConfigurationStatus) Reset() { *m = PriorityLevelConfigurationStatus{} } func (*PriorityLevelConfigurationStatus) ProtoMessage() {} func (*PriorityLevelConfigurationStatus) Descriptor() ([]byte, []int) { - return fileDescriptor_ed300aa8e672704e, []int{16} + return fileDescriptor_ed300aa8e672704e, []int{17} } func (m *PriorityLevelConfigurationStatus) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -522,7 +550,7 @@ var 
xxx_messageInfo_PriorityLevelConfigurationStatus proto.InternalMessageInfo func (m *QueuingConfiguration) Reset() { *m = QueuingConfiguration{} } func (*QueuingConfiguration) ProtoMessage() {} func (*QueuingConfiguration) Descriptor() ([]byte, []int) { - return fileDescriptor_ed300aa8e672704e, []int{17} + return fileDescriptor_ed300aa8e672704e, []int{18} } func (m *QueuingConfiguration) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -550,7 +578,7 @@ var xxx_messageInfo_QueuingConfiguration proto.InternalMessageInfo func (m *ResourcePolicyRule) Reset() { *m = ResourcePolicyRule{} } func (*ResourcePolicyRule) ProtoMessage() {} func (*ResourcePolicyRule) Descriptor() ([]byte, []int) { - return fileDescriptor_ed300aa8e672704e, []int{18} + return fileDescriptor_ed300aa8e672704e, []int{19} } func (m *ResourcePolicyRule) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -578,7 +606,7 @@ var xxx_messageInfo_ResourcePolicyRule proto.InternalMessageInfo func (m *ServiceAccountSubject) Reset() { *m = ServiceAccountSubject{} } func (*ServiceAccountSubject) ProtoMessage() {} func (*ServiceAccountSubject) Descriptor() ([]byte, []int) { - return fileDescriptor_ed300aa8e672704e, []int{19} + return fileDescriptor_ed300aa8e672704e, []int{20} } func (m *ServiceAccountSubject) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -606,7 +634,7 @@ var xxx_messageInfo_ServiceAccountSubject proto.InternalMessageInfo func (m *Subject) Reset() { *m = Subject{} } func (*Subject) ProtoMessage() {} func (*Subject) Descriptor() ([]byte, []int) { - return fileDescriptor_ed300aa8e672704e, []int{20} + return fileDescriptor_ed300aa8e672704e, []int{21} } func (m *Subject) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -634,7 +662,7 @@ var xxx_messageInfo_Subject proto.InternalMessageInfo func (m *UserSubject) Reset() { *m = UserSubject{} } func (*UserSubject) ProtoMessage() {} func (*UserSubject) Descriptor() ([]byte, []int) { - return fileDescriptor_ed300aa8e672704e, 
[]int{21} + return fileDescriptor_ed300aa8e672704e, []int{22} } func (m *UserSubject) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -660,6 +688,7 @@ func (m *UserSubject) XXX_DiscardUnknown() { var xxx_messageInfo_UserSubject proto.InternalMessageInfo func init() { + proto.RegisterType((*ExemptPriorityLevelConfiguration)(nil), "k8s.io.api.flowcontrol.v1beta2.ExemptPriorityLevelConfiguration") proto.RegisterType((*FlowDistinguisherMethod)(nil), "k8s.io.api.flowcontrol.v1beta2.FlowDistinguisherMethod") proto.RegisterType((*FlowSchema)(nil), "k8s.io.api.flowcontrol.v1beta2.FlowSchema") proto.RegisterType((*FlowSchemaCondition)(nil), "k8s.io.api.flowcontrol.v1beta2.FlowSchemaCondition") @@ -689,105 +718,142 @@ func init() { } var fileDescriptor_ed300aa8e672704e = []byte{ - // 1554 bytes of a gzipped FileDescriptorProto - 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xec, 0x58, 0x4f, 0x6f, 0xdb, 0xc6, - 0x12, 0x37, 0x65, 0xc9, 0xb6, 0xc6, 0x7f, 0xb3, 0x8e, 0x61, 0x3d, 0x07, 0x90, 0x1c, 0x3e, 0xe0, - 0xe5, 0xbd, 0x97, 0x84, 0x4a, 0xd2, 0xa4, 0x49, 0x5b, 0xf4, 0x8f, 0xe9, 0xb4, 0x69, 0x1a, 0xdb, - 0x71, 0xd6, 0x49, 0x5b, 0xa4, 0x01, 0x1a, 0x8a, 0x5a, 0x4b, 0x8c, 0x25, 0x92, 0xd9, 0x25, 0x65, - 0xb8, 0xb9, 0x14, 0xfd, 0x04, 0x3d, 0xb7, 0xc7, 0x1e, 0x7a, 0xef, 0x17, 0xe8, 0xb1, 0x41, 0x4f, - 0x39, 0xe6, 0xa4, 0x36, 0xea, 0xa9, 0xdf, 0xa0, 0x0d, 0x50, 0xa0, 0xd8, 0xe5, 0x92, 0x14, 0xf5, - 0x8f, 0x42, 0x02, 0xe4, 0xd4, 0x9b, 0x39, 0xf3, 0x9b, 0xdf, 0xec, 0xcc, 0xce, 0xcc, 0x8e, 0x0c, - 0xd7, 0x0e, 0xae, 0x30, 0xcd, 0x72, 0xca, 0x07, 0x7e, 0x85, 0x50, 0x9b, 0x78, 0x84, 0x95, 0x5b, - 0xc4, 0xae, 0x3a, 0xb4, 0x2c, 0x15, 0x86, 0x6b, 0x95, 0xf7, 0x1b, 0xce, 0xa1, 0xe9, 0xd8, 0x1e, - 0x75, 0x1a, 0xe5, 0xd6, 0xf9, 0x0a, 0xf1, 0x8c, 0x0b, 0xe5, 0x1a, 0xb1, 0x09, 0x35, 0x3c, 0x52, - 0xd5, 0x5c, 0xea, 0x78, 0x0e, 0x2a, 0x06, 0x78, 0xcd, 0x70, 0x2d, 0xad, 0x0b, 0xaf, 0x49, 0xfc, - 0xda, 0xd9, 0x9a, 0xe5, 0xd5, 0xfd, 0x8a, 0x66, 0x3a, 0xcd, 0x72, 0xcd, 0xa9, 
0x39, 0x65, 0x61, - 0x56, 0xf1, 0xf7, 0xc5, 0x97, 0xf8, 0x10, 0x7f, 0x05, 0x74, 0x6b, 0x17, 0x63, 0xf7, 0x4d, 0xc3, - 0xac, 0x5b, 0x36, 0xa1, 0x47, 0x65, 0xf7, 0xa0, 0xc6, 0x05, 0xac, 0xdc, 0x24, 0x9e, 0x51, 0x6e, - 0x9d, 0xef, 0x3d, 0xc4, 0x5a, 0x79, 0x98, 0x15, 0xf5, 0x6d, 0xcf, 0x6a, 0x92, 0x3e, 0x83, 0xd7, - 0xd3, 0x0c, 0x98, 0x59, 0x27, 0x4d, 0xa3, 0xd7, 0x4e, 0xbd, 0x0b, 0xab, 0x1f, 0x34, 0x9c, 0xc3, - 0xab, 0x16, 0xf3, 0x2c, 0xbb, 0xe6, 0x5b, 0xac, 0x4e, 0xe8, 0x36, 0xf1, 0xea, 0x4e, 0x15, 0xbd, - 0x0b, 0x59, 0xef, 0xc8, 0x25, 0x05, 0x65, 0x5d, 0xf9, 0x6f, 0x5e, 0x3f, 0xfd, 0xb8, 0x5d, 0x9a, - 0xe8, 0xb4, 0x4b, 0xd9, 0xdb, 0x47, 0x2e, 0x79, 0xde, 0x2e, 0x9d, 0x18, 0x62, 0xc6, 0xd5, 0x58, - 0x18, 0xaa, 0xdf, 0x64, 0x00, 0x38, 0x6a, 0x4f, 0xb8, 0x46, 0xf7, 0x61, 0x86, 0x87, 0x5b, 0x35, - 0x3c, 0x43, 0x70, 0xce, 0x5e, 0x38, 0xa7, 0xc5, 0xb9, 0x8e, 0x4e, 0xad, 0xb9, 0x07, 0x35, 0x2e, - 0x60, 0x1a, 0x47, 0x6b, 0xad, 0xf3, 0xda, 0xcd, 0xca, 0x03, 0x62, 0x7a, 0xdb, 0xc4, 0x33, 0x74, - 0x24, 0x4f, 0x01, 0xb1, 0x0c, 0x47, 0xac, 0x68, 0x17, 0xb2, 0xcc, 0x25, 0x66, 0x21, 0x23, 0xd8, - 0x35, 0x6d, 0xf4, 0x4d, 0x6a, 0xf1, 0xd9, 0xf6, 0x5c, 0x62, 0xea, 0x73, 0x61, 0x84, 0xfc, 0x0b, - 0x0b, 0x26, 0xf4, 0x29, 0x4c, 0x31, 0xcf, 0xf0, 0x7c, 0x56, 0x98, 0xec, 0x3b, 0x71, 0x1a, 0xa7, - 0xb0, 0xd3, 0x17, 0x24, 0xeb, 0x54, 0xf0, 0x8d, 0x25, 0x9f, 0xfa, 0x34, 0x03, 0xcb, 0x31, 0x78, - 0xd3, 0xb1, 0xab, 0x96, 0x67, 0x39, 0x36, 0x7a, 0x2b, 0x91, 0xf5, 0x53, 0x3d, 0x59, 0x5f, 0x1d, - 0x60, 0x12, 0x67, 0x1c, 0xbd, 0x11, 0x1d, 0x37, 0x23, 0xcc, 0x4f, 0x26, 0x9d, 0x3f, 0x6f, 0x97, - 0x16, 0x23, 0xb3, 0xe4, 0x79, 0x50, 0x0b, 0x50, 0xc3, 0x60, 0xde, 0x6d, 0x6a, 0xd8, 0x2c, 0xa0, - 0xb5, 0x9a, 0x44, 0x46, 0xfd, 0xff, 0xf1, 0xee, 0x89, 0x5b, 0xe8, 0x6b, 0xd2, 0x25, 0xda, 0xea, - 0x63, 0xc3, 0x03, 0x3c, 0xa0, 0xff, 0xc0, 0x14, 0x25, 0x06, 0x73, 0xec, 0x42, 0x56, 0x1c, 0x39, - 0xca, 0x17, 0x16, 0x52, 0x2c, 0xb5, 0xe8, 0x7f, 0x30, 0xdd, 0x24, 0x8c, 0x19, 0x35, 0x52, 0xc8, - 0x09, 0xe0, 0xa2, 
0x04, 0x4e, 0x6f, 0x07, 0x62, 0x1c, 0xea, 0xd5, 0x1f, 0x15, 0x58, 0x88, 0xf3, - 0xb4, 0x65, 0x31, 0x0f, 0xdd, 0xeb, 0xab, 0x3d, 0x6d, 0xbc, 0x98, 0xb8, 0xb5, 0xa8, 0xbc, 0x25, - 0xe9, 0x6e, 0x26, 0x94, 0x74, 0xd5, 0xdd, 0x4d, 0xc8, 0x59, 0x1e, 0x69, 0xf2, 0xac, 0x4f, 0xf6, - 0xa4, 0x2b, 0xa5, 0x48, 0xf4, 0x79, 0x49, 0x9b, 0xbb, 0xce, 0x09, 0x70, 0xc0, 0xa3, 0xfe, 0x3e, - 0xd9, 0x1d, 0x01, 0xaf, 0x47, 0xf4, 0xbd, 0x02, 0x6b, 0x2e, 0xb5, 0x1c, 0x6a, 0x79, 0x47, 0x5b, - 0xa4, 0x45, 0x1a, 0x9b, 0x8e, 0xbd, 0x6f, 0xd5, 0x7c, 0x6a, 0xf0, 0x54, 0xca, 0xa0, 0x36, 0xd3, - 0x3c, 0xef, 0x0e, 0x65, 0xc0, 0x64, 0x9f, 0x50, 0x62, 0x9b, 0x44, 0x57, 0xe5, 0x91, 0xd6, 0x46, - 0x80, 0x47, 0x1c, 0x05, 0x7d, 0x04, 0xa8, 0x69, 0x78, 0x3c, 0xa3, 0xb5, 0x5d, 0x4a, 0x4c, 0x52, - 0xe5, 0xac, 0xa2, 0x20, 0x73, 0x71, 0x75, 0x6c, 0xf7, 0x21, 0xf0, 0x00, 0x2b, 0xf4, 0x95, 0x02, - 0xcb, 0xd5, 0xfe, 0x21, 0x23, 0xeb, 0xf2, 0xf2, 0x38, 0x89, 0x1e, 0x30, 0xa3, 0xf4, 0xd5, 0x4e, - 0xbb, 0xb4, 0x3c, 0x40, 0x81, 0x07, 0x39, 0x43, 0xf7, 0x20, 0x47, 0xfd, 0x06, 0x61, 0x85, 0xac, - 0xb8, 0xde, 0x54, 0xaf, 0xbb, 0x4e, 0xc3, 0x32, 0x8f, 0x30, 0x37, 0xf9, 0xc4, 0xf2, 0xea, 0x7b, - 0xbe, 0x98, 0x55, 0x2c, 0xbe, 0x6b, 0xa1, 0xc2, 0x01, 0xa9, 0xfa, 0x08, 0x96, 0x7a, 0x87, 0x06, - 0xaa, 0x01, 0x98, 0x61, 0x9f, 0xb2, 0x82, 0x22, 0xdc, 0xbe, 0x36, 0x7e, 0x55, 0x45, 0x3d, 0x1e, - 0xcf, 0xcb, 0x48, 0xc4, 0x70, 0x17, 0xb5, 0x7a, 0x0e, 0xe6, 0xae, 0x51, 0xc7, 0x77, 0xe5, 0x19, - 0xd1, 0x3a, 0x64, 0x6d, 0xa3, 0x19, 0x4e, 0x9f, 0x68, 0x22, 0xee, 0x18, 0x4d, 0x82, 0x85, 0x46, - 0xfd, 0x4e, 0x81, 0xf9, 0x2d, 0xab, 0x69, 0x79, 0x98, 0x30, 0xd7, 0xb1, 0x19, 0x41, 0x97, 0x12, - 0x13, 0xeb, 0x64, 0xcf, 0xc4, 0x3a, 0x96, 0x00, 0x77, 0xcd, 0xaa, 0xcf, 0x60, 0xfa, 0xa1, 0x4f, - 0x7c, 0xcb, 0xae, 0xc9, 0x79, 0x7d, 0x31, 0x2d, 0xc0, 0x5b, 0x01, 0x3c, 0x51, 0x6d, 0xfa, 0x2c, - 0x1f, 0x01, 0x52, 0x83, 0x43, 0x46, 0xf5, 0xaf, 0x0c, 0x9c, 0x14, 0x8e, 0x49, 0x75, 0x78, 0x15, - 0xa3, 0x7b, 0x50, 0x30, 0x18, 0xf3, 0x29, 0xa9, 0x6e, 
0x3a, 0xb6, 0xe9, 0x53, 0x5e, 0xff, 0x47, - 0x7b, 0x75, 0x83, 0x12, 0x26, 0xa2, 0xc9, 0xe9, 0xeb, 0x32, 0x9a, 0xc2, 0xc6, 0x10, 0x1c, 0x1e, - 0xca, 0x80, 0x1e, 0xc0, 0x7c, 0xa3, 0x3b, 0x76, 0x19, 0xe6, 0xd9, 0xb4, 0x30, 0x13, 0x09, 0xd3, - 0x57, 0xe4, 0x09, 0x92, 0x49, 0xc7, 0x49, 0x6a, 0xf4, 0x36, 0x2c, 0x36, 0x88, 0x5d, 0x35, 0x2a, - 0x0d, 0xb2, 0x4b, 0xa8, 0x49, 0x6c, 0x4f, 0xb4, 0x48, 0x4e, 0x5f, 0xee, 0xb4, 0x4b, 0x8b, 0x5b, - 0x49, 0x15, 0xee, 0xc5, 0xa2, 0x9b, 0xb0, 0x52, 0x71, 0x28, 0x75, 0x0e, 0x2d, 0xbb, 0x26, 0xfc, - 0x84, 0x24, 0x59, 0x41, 0xf2, 0xaf, 0x4e, 0xbb, 0xb4, 0xa2, 0x0f, 0x02, 0xe0, 0xc1, 0x76, 0xea, - 0x21, 0xac, 0xec, 0xf0, 0x99, 0xc2, 0x1c, 0x9f, 0x9a, 0x24, 0x6e, 0x08, 0x54, 0x82, 0x5c, 0x8b, - 0xd0, 0x4a, 0x50, 0xd4, 0x79, 0x3d, 0xcf, 0xdb, 0xe1, 0x63, 0x2e, 0xc0, 0x81, 0x9c, 0x47, 0x62, - 0xc7, 0x96, 0x77, 0xf0, 0x16, 0x2b, 0x4c, 0x09, 0xa8, 0x88, 0x64, 0x27, 0xa9, 0xc2, 0xbd, 0x58, - 0xb5, 0x9d, 0x81, 0xd5, 0x21, 0xfd, 0x87, 0xee, 0xc0, 0x0c, 0x93, 0x7f, 0xcb, 0x9e, 0x3a, 0x95, - 0x76, 0x17, 0xd2, 0x36, 0x9e, 0xfe, 0x21, 0x19, 0x8e, 0xa8, 0x90, 0x03, 0xf3, 0x54, 0x1e, 0x41, - 0xf8, 0x94, 0xaf, 0xc0, 0x85, 0x34, 0xee, 0xfe, 0xec, 0xc4, 0x97, 0x8d, 0xbb, 0x09, 0x71, 0x92, - 0x1f, 0x3d, 0x82, 0xa5, 0xae, 0xb0, 0x03, 0x9f, 0x93, 0xc2, 0xe7, 0xa5, 0x34, 0x9f, 0x03, 0x2f, - 0x45, 0x2f, 0x48, 0xb7, 0x4b, 0x3b, 0x3d, 0xb4, 0xb8, 0xcf, 0x91, 0xfa, 0x73, 0x06, 0x46, 0x3c, - 0x0c, 0xaf, 0x60, 0xc9, 0xbb, 0x9f, 0x58, 0xf2, 0xde, 0x79, 0xf1, 0x17, 0x6f, 0xe8, 0xd2, 0x57, - 0xef, 0x59, 0xfa, 0xde, 0x7b, 0x09, 0x1f, 0xa3, 0x97, 0xc0, 0x3f, 0x32, 0xf0, 0xef, 0xe1, 0xc6, - 0xf1, 0x52, 0x78, 0x23, 0x31, 0x62, 0x2f, 0xf7, 0x8c, 0xd8, 0x53, 0x63, 0x50, 0xfc, 0xb3, 0x24, - 0xf6, 0x2c, 0x89, 0xbf, 0x28, 0x50, 0x1c, 0x9e, 0xb7, 0x57, 0xb0, 0x34, 0x7e, 0x9e, 0x5c, 0x1a, - 0xdf, 0x7c, 0xf1, 0x22, 0x1b, 0xb2, 0x44, 0x5e, 0x1b, 0x55, 0x5b, 0xd1, 0xba, 0x37, 0xc6, 0x93, - 0xff, 0xd3, 0xc8, 0x54, 0x89, 0xed, 0x34, 0xe5, 0x57, 0x4b, 0xc2, 0xfa, 0x7d, 0x9b, 0x3f, 
0x3d, - 0x4d, 0xfe, 0x7a, 0x04, 0x05, 0x59, 0x87, 0xe9, 0x46, 0xf0, 0x56, 0xcb, 0xa6, 0xde, 0x18, 0xeb, - 0x89, 0x1c, 0xf5, 0xb4, 0x07, 0x6b, 0x81, 0x84, 0xe1, 0x90, 0x5e, 0xfd, 0x56, 0x81, 0xf5, 0xb4, - 0x66, 0x45, 0x87, 0x03, 0x96, 0xaf, 0x97, 0x58, 0xac, 0xc7, 0x5f, 0xc6, 0x7e, 0x50, 0xe0, 0xf8, - 0xa0, 0x1d, 0x87, 0x97, 0x3f, 0x5f, 0x6c, 0xa2, 0xad, 0x24, 0x2a, 0xff, 0x5b, 0x42, 0x8a, 0xa5, - 0x16, 0x9d, 0x81, 0x99, 0xba, 0x61, 0x57, 0xf7, 0xac, 0x2f, 0xc2, 0x7d, 0x3b, 0x2a, 0xc0, 0x0f, - 0xa5, 0x1c, 0x47, 0x08, 0x74, 0x15, 0x96, 0x84, 0xdd, 0x16, 0xb1, 0x6b, 0x5e, 0x5d, 0xe4, 0x4a, - 0x2e, 0x0d, 0xd1, 0x7b, 0x70, 0xab, 0x47, 0x8f, 0xfb, 0x2c, 0xd4, 0x3f, 0x15, 0x40, 0x2f, 0xf2, - 0xce, 0x9f, 0x86, 0xbc, 0xe1, 0x5a, 0x62, 0xf9, 0x0c, 0x5a, 0x20, 0xaf, 0xcf, 0x77, 0xda, 0xa5, - 0xfc, 0xc6, 0xee, 0xf5, 0x40, 0x88, 0x63, 0x3d, 0x07, 0x87, 0x4f, 0x60, 0xf0, 0xd4, 0x49, 0x70, - 0xe8, 0x98, 0xe1, 0x58, 0x8f, 0xae, 0xc0, 0x9c, 0xd9, 0xf0, 0x99, 0x47, 0xe8, 0x9e, 0xe9, 0xb8, - 0x44, 0x8c, 0x8c, 0x19, 0xfd, 0xb8, 0x8c, 0x69, 0x6e, 0xb3, 0x4b, 0x87, 0x13, 0x48, 0xa4, 0x01, - 0xf0, 0x82, 0x67, 0xae, 0xc1, 0xfd, 0xe4, 0x84, 0x9f, 0x05, 0x7e, 0x61, 0x3b, 0x91, 0x14, 0x77, - 0x21, 0xd4, 0x07, 0xb0, 0xb2, 0x47, 0x68, 0xcb, 0x32, 0xc9, 0x86, 0x69, 0x3a, 0xbe, 0xed, 0x85, - 0x6b, 0x74, 0x19, 0xf2, 0x11, 0x4c, 0xf6, 0xc4, 0x31, 0xe9, 0x3f, 0x1f, 0x71, 0xe1, 0x18, 0x13, - 0x35, 0x61, 0x66, 0x78, 0x13, 0x66, 0x60, 0x3a, 0xa6, 0xcf, 0x1e, 0x58, 0x76, 0x55, 0x32, 0x9f, - 0x08, 0xd1, 0x37, 0x2c, 0xbb, 0xfa, 0xbc, 0x5d, 0x9a, 0x95, 0x30, 0xfe, 0x89, 0x05, 0x10, 0x5d, - 0x87, 0xac, 0xcf, 0x08, 0x95, 0xed, 0x75, 0x3a, 0xad, 0x98, 0xef, 0x30, 0x42, 0xc3, 0xcd, 0x67, - 0x86, 0x33, 0x73, 0x01, 0x16, 0x14, 0x68, 0x1b, 0x72, 0x35, 0x7e, 0x29, 0x72, 0xea, 0x9f, 0x49, - 0xe3, 0xea, 0xfe, 0x79, 0x11, 0x94, 0x81, 0x90, 0xe0, 0x80, 0x05, 0x3d, 0x84, 0x05, 0x96, 0x48, - 0xa1, 0xb8, 0xae, 0x31, 0x36, 0x99, 0x81, 0x89, 0xd7, 0x51, 0xa7, 0x5d, 0x5a, 0x48, 0xaa, 0x70, - 0x8f, 0x03, 0xb5, 0x0c, 0xb3, 
0x5d, 0x01, 0xa6, 0xcf, 0x3f, 0xfd, 0xea, 0xe3, 0x67, 0xc5, 0x89, - 0x27, 0xcf, 0x8a, 0x13, 0x4f, 0x9f, 0x15, 0x27, 0xbe, 0xec, 0x14, 0x95, 0xc7, 0x9d, 0xa2, 0xf2, - 0xa4, 0x53, 0x54, 0x9e, 0x76, 0x8a, 0xca, 0xaf, 0x9d, 0xa2, 0xf2, 0xf5, 0x6f, 0xc5, 0x89, 0xbb, - 0xc5, 0xd1, 0xff, 0x67, 0xfc, 0x3b, 0x00, 0x00, 0xff, 0xff, 0x87, 0x72, 0xbf, 0xe2, 0xa1, 0x14, - 0x00, 0x00, + // 1617 bytes of a gzipped FileDescriptorProto + 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xec, 0x58, 0x4b, 0x73, 0x1b, 0xc5, + 0x16, 0xf6, 0xc8, 0x92, 0x6d, 0x1d, 0x3f, 0xd3, 0x8e, 0xcb, 0xba, 0xce, 0x2d, 0xc9, 0x99, 0x5b, + 0x75, 0x73, 0x2f, 0x49, 0x46, 0x89, 0x49, 0x48, 0x80, 0xe2, 0xe1, 0x71, 0x42, 0x08, 0xb1, 0x1d, + 0xa7, 0x9d, 0x40, 0x2a, 0xa4, 0x8a, 0x8c, 0x46, 0x6d, 0x69, 0x62, 0x69, 0x66, 0xd2, 0x3d, 0x23, + 0x63, 0xb2, 0xa1, 0xf8, 0x05, 0xac, 0x61, 0xc9, 0x82, 0x15, 0x1b, 0xb6, 0x2c, 0x58, 0x92, 0x62, + 0x95, 0x65, 0x56, 0x82, 0x88, 0x15, 0xff, 0x00, 0x52, 0x45, 0x15, 0xd5, 0x3d, 0xad, 0x19, 0x8d, + 0x5e, 0xa3, 0x4a, 0xaa, 0xb2, 0x62, 0xe7, 0x39, 0xe7, 0x3b, 0xdf, 0xe9, 0x3e, 0x7d, 0x5e, 0x32, + 0x5c, 0xd9, 0xbf, 0xc8, 0x34, 0xcb, 0x29, 0xee, 0xfb, 0x25, 0x42, 0x6d, 0xe2, 0x11, 0x56, 0x6c, + 0x10, 0xbb, 0xec, 0xd0, 0xa2, 0x54, 0x18, 0xae, 0x55, 0xdc, 0xab, 0x39, 0x07, 0xa6, 0x63, 0x7b, + 0xd4, 0xa9, 0x15, 0x1b, 0x67, 0x4b, 0xc4, 0x33, 0xd6, 0x8a, 0x15, 0x62, 0x13, 0x6a, 0x78, 0xa4, + 0xac, 0xb9, 0xd4, 0xf1, 0x1c, 0x94, 0x0f, 0xf0, 0x9a, 0xe1, 0x5a, 0x5a, 0x07, 0x5e, 0x93, 0xf8, + 0x95, 0xd3, 0x15, 0xcb, 0xab, 0xfa, 0x25, 0xcd, 0x74, 0xea, 0xc5, 0x8a, 0x53, 0x71, 0x8a, 0xc2, + 0xac, 0xe4, 0xef, 0x89, 0x2f, 0xf1, 0x21, 0xfe, 0x0a, 0xe8, 0x56, 0xce, 0x45, 0xee, 0xeb, 0x86, + 0x59, 0xb5, 0x6c, 0x42, 0x0f, 0x8b, 0xee, 0x7e, 0x85, 0x0b, 0x58, 0xb1, 0x4e, 0x3c, 0xa3, 0xd8, + 0x38, 0xdb, 0x7d, 0x88, 0x95, 0xe2, 0x20, 0x2b, 0xea, 0xdb, 0x9e, 0x55, 0x27, 0x3d, 0x06, 0xaf, + 0x25, 0x19, 0x30, 0xb3, 0x4a, 0xea, 0x46, 0xb7, 0x9d, 0xfa, 0x83, 0x02, 0xab, 0x97, 0x3f, 0x25, + 0x75, 
0xd7, 0xdb, 0xa1, 0x96, 0x43, 0x2d, 0xef, 0x70, 0x93, 0x34, 0x48, 0x6d, 0xc3, 0xb1, 0xf7, + 0xac, 0x8a, 0x4f, 0x0d, 0xcf, 0x72, 0x6c, 0x74, 0x1b, 0x72, 0xb6, 0x53, 0xb7, 0x6c, 0x83, 0xcb, + 0x4d, 0x9f, 0x52, 0x62, 0x9b, 0x87, 0xbb, 0x55, 0x83, 0x12, 0x96, 0x53, 0x56, 0x95, 0xff, 0x65, + 0xf4, 0x7f, 0xb7, 0x9a, 0x85, 0xdc, 0xf6, 0x00, 0x0c, 0x1e, 0x68, 0x8d, 0xde, 0x82, 0xf9, 0x1a, + 0xb1, 0xcb, 0x46, 0xa9, 0x46, 0x76, 0x08, 0x35, 0x89, 0xed, 0xe5, 0x52, 0x82, 0x70, 0xb1, 0xd5, + 0x2c, 0xcc, 0x6f, 0xc6, 0x55, 0xb8, 0x1b, 0xab, 0xde, 0x81, 0xe5, 0xf7, 0x6a, 0xce, 0xc1, 0x25, + 0x8b, 0x79, 0x96, 0x5d, 0xf1, 0x2d, 0x56, 0x25, 0x74, 0x8b, 0x78, 0x55, 0xa7, 0x8c, 0xde, 0x81, + 0xb4, 0x77, 0xe8, 0x12, 0x71, 0xbe, 0xac, 0x7e, 0xf2, 0x51, 0xb3, 0x30, 0xd6, 0x6a, 0x16, 0xd2, + 0x37, 0x0f, 0x5d, 0xf2, 0xac, 0x59, 0x38, 0x36, 0xc0, 0x8c, 0xab, 0xb1, 0x30, 0x54, 0xbf, 0x4a, + 0x01, 0x70, 0xd4, 0xae, 0x08, 0x1c, 0xba, 0x07, 0x53, 0xfc, 0xb1, 0xca, 0x86, 0x67, 0x08, 0xce, + 0xe9, 0xb5, 0x33, 0x5a, 0x94, 0x29, 0x61, 0xcc, 0x35, 0x77, 0xbf, 0xc2, 0x05, 0x4c, 0xe3, 0x68, + 0xad, 0x71, 0x56, 0xbb, 0x5e, 0xba, 0x4f, 0x4c, 0x6f, 0x8b, 0x78, 0x86, 0x8e, 0xe4, 0x29, 0x20, + 0x92, 0xe1, 0x90, 0x15, 0xed, 0x40, 0x9a, 0xb9, 0xc4, 0x14, 0x01, 0x98, 0x5e, 0xd3, 0xb4, 0xe1, + 0x79, 0xa8, 0x45, 0x67, 0xdb, 0x75, 0x89, 0xa9, 0xcf, 0xb4, 0x6f, 0xc8, 0xbf, 0xb0, 0x60, 0x42, + 0xb7, 0x61, 0x82, 0x79, 0x86, 0xe7, 0xb3, 0xdc, 0x78, 0xcf, 0x89, 0x93, 0x38, 0x85, 0x9d, 0x3e, + 0x27, 0x59, 0x27, 0x82, 0x6f, 0x2c, 0xf9, 0xd4, 0x27, 0x29, 0x58, 0x8c, 0xc0, 0x1b, 0x8e, 0x5d, + 0xb6, 0x44, 0xa6, 0xbc, 0x19, 0x8b, 0xfa, 0x89, 0xae, 0xa8, 0x2f, 0xf7, 0x31, 0x89, 0x22, 0x8e, + 0x5e, 0x0f, 0x8f, 0x9b, 0x12, 0xe6, 0xc7, 0xe3, 0xce, 0x9f, 0x35, 0x0b, 0xf3, 0xa1, 0x59, 0xfc, + 0x3c, 0xa8, 0x01, 0xa8, 0x66, 0x30, 0xef, 0x26, 0x35, 0x6c, 0x16, 0xd0, 0x5a, 0x75, 0x22, 0x6f, + 0xfd, 0xca, 0x68, 0xef, 0xc4, 0x2d, 0xf4, 0x15, 0xe9, 0x12, 0x6d, 0xf6, 0xb0, 0xe1, 0x3e, 0x1e, + 0xd0, 0x7f, 0x61, 0x82, 0x12, 0x83, 0x39, 
0x76, 0x2e, 0x2d, 0x8e, 0x1c, 0xc6, 0x0b, 0x0b, 0x29, + 0x96, 0x5a, 0xf4, 0x7f, 0x98, 0xac, 0x13, 0xc6, 0x8c, 0x0a, 0xc9, 0x65, 0x04, 0x70, 0x5e, 0x02, + 0x27, 0xb7, 0x02, 0x31, 0x6e, 0xeb, 0xd5, 0x1f, 0x15, 0x98, 0x8b, 0xe2, 0xb4, 0x69, 0x31, 0x0f, + 0xdd, 0xed, 0xc9, 0x3d, 0x6d, 0xb4, 0x3b, 0x71, 0x6b, 0x91, 0x79, 0x0b, 0xd2, 0xdd, 0x54, 0x5b, + 0xd2, 0x91, 0x77, 0xd7, 0x21, 0x63, 0x79, 0xa4, 0xce, 0xa3, 0x3e, 0xde, 0x15, 0xae, 0x84, 0x24, + 0xd1, 0x67, 0x25, 0x6d, 0xe6, 0x2a, 0x27, 0xc0, 0x01, 0x8f, 0xfa, 0xfb, 0x78, 0xe7, 0x0d, 0x78, + 0x3e, 0xa2, 0x6f, 0x15, 0x58, 0x71, 0x07, 0x36, 0x18, 0x79, 0xa9, 0x8d, 0x24, 0xcf, 0x83, 0x5b, + 0x14, 0x26, 0x7b, 0x84, 0xf7, 0x15, 0xa2, 0xab, 0xf2, 0x48, 0x2b, 0x43, 0xc0, 0x43, 0x8e, 0x82, + 0x3e, 0x00, 0x54, 0x37, 0x3c, 0x1e, 0xd1, 0xca, 0x0e, 0x25, 0x26, 0x29, 0x73, 0x56, 0xd9, 0x94, + 0xc2, 0xec, 0xd8, 0xea, 0x41, 0xe0, 0x3e, 0x56, 0xe8, 0x0b, 0x05, 0x16, 0xcb, 0xbd, 0x4d, 0x46, + 0xe6, 0xe5, 0x85, 0x51, 0x02, 0xdd, 0xa7, 0x47, 0xe9, 0xcb, 0xad, 0x66, 0x61, 0xb1, 0x8f, 0x02, + 0xf7, 0x73, 0x86, 0xee, 0x42, 0x86, 0xfa, 0x35, 0xc2, 0x72, 0x69, 0xf1, 0xbc, 0x89, 0x5e, 0x77, + 0x9c, 0x9a, 0x65, 0x1e, 0x62, 0x6e, 0xf2, 0x91, 0xe5, 0x55, 0x77, 0x7d, 0xd1, 0xab, 0x58, 0xf4, + 0xd6, 0x42, 0x85, 0x03, 0x52, 0xf5, 0x21, 0x2c, 0x74, 0x37, 0x0d, 0x54, 0x01, 0x30, 0xdb, 0x75, + 0xca, 0x07, 0x04, 0x77, 0xfb, 0xea, 0xe8, 0x59, 0x15, 0xd6, 0x78, 0xd4, 0x2f, 0x43, 0x11, 0xc3, + 0x1d, 0xd4, 0xea, 0x19, 0x98, 0xb9, 0x42, 0x1d, 0xdf, 0x95, 0x67, 0x44, 0xab, 0x90, 0xb6, 0x8d, + 0x7a, 0xbb, 0xfb, 0x84, 0x1d, 0x71, 0xdb, 0xa8, 0x13, 0x2c, 0x34, 0xea, 0x37, 0x0a, 0xcc, 0x6e, + 0x5a, 0x75, 0xcb, 0xc3, 0x84, 0xb9, 0x8e, 0xcd, 0x08, 0x3a, 0x1f, 0xeb, 0x58, 0xc7, 0xbb, 0x3a, + 0xd6, 0x91, 0x18, 0xb8, 0xa3, 0x57, 0x7d, 0x0c, 0x93, 0x0f, 0x7c, 0xe2, 0x5b, 0x76, 0x45, 0xf6, + 0xeb, 0x73, 0x49, 0x17, 0xbc, 0x11, 0xc0, 0x63, 0xd9, 0xa6, 0x4f, 0xf3, 0x16, 0x20, 0x35, 0xb8, + 0xcd, 0xa8, 0xfe, 0x95, 0x82, 0xe3, 0xc2, 0x31, 0x29, 0x0f, 0x99, 0xca, 0x77, 
0x21, 0x67, 0x30, + 0xe6, 0x53, 0x52, 0x1e, 0x34, 0x95, 0x57, 0xe5, 0x6d, 0x72, 0xeb, 0x03, 0x70, 0x78, 0x20, 0x03, + 0xba, 0x0f, 0xb3, 0xb5, 0xce, 0xbb, 0xcb, 0x6b, 0x9e, 0x4e, 0xba, 0x66, 0x2c, 0x60, 0xfa, 0x92, + 0x3c, 0x41, 0x3c, 0xe8, 0x38, 0x4e, 0xdd, 0x6f, 0x0b, 0x18, 0x1f, 0x7d, 0x0b, 0x40, 0xd7, 0x61, + 0xa9, 0xe4, 0x50, 0xea, 0x1c, 0x58, 0x76, 0x45, 0xf8, 0x69, 0x93, 0xa4, 0x05, 0xc9, 0xbf, 0x5a, + 0xcd, 0xc2, 0x92, 0xde, 0x0f, 0x80, 0xfb, 0xdb, 0xa9, 0x07, 0xb0, 0xb4, 0xcd, 0x7b, 0x0a, 0x73, + 0x7c, 0x6a, 0x92, 0xa8, 0x20, 0x50, 0x01, 0x32, 0x0d, 0x42, 0x4b, 0x41, 0x52, 0x67, 0xf5, 0x2c, + 0x2f, 0x87, 0x0f, 0xb9, 0x00, 0x07, 0x72, 0x7e, 0x13, 0x3b, 0xb2, 0xbc, 0x85, 0x37, 0x59, 0x6e, + 0x42, 0x40, 0xc5, 0x4d, 0xb6, 0xe3, 0x2a, 0xdc, 0x8d, 0x55, 0x9b, 0x29, 0x58, 0x1e, 0x50, 0x7f, + 0xe8, 0x16, 0x4c, 0x31, 0xf9, 0xb7, 0xac, 0xa9, 0x13, 0x49, 0x6f, 0x21, 0x6d, 0xa3, 0xee, 0xdf, + 0x26, 0xc3, 0x21, 0x15, 0x72, 0x60, 0x96, 0xca, 0x23, 0x08, 0x9f, 0x72, 0x0a, 0xac, 0x25, 0x71, + 0xf7, 0x46, 0x27, 0x7a, 0x6c, 0xdc, 0x49, 0x88, 0xe3, 0xfc, 0xe8, 0x21, 0x2c, 0x74, 0x5c, 0x3b, + 0xf0, 0x39, 0x2e, 0x7c, 0x9e, 0x4f, 0xf2, 0xd9, 0xf7, 0x51, 0xf4, 0x9c, 0x74, 0xbb, 0xb0, 0xdd, + 0x45, 0x8b, 0x7b, 0x1c, 0xa9, 0x3f, 0xa7, 0x60, 0xc8, 0x60, 0x78, 0x09, 0x4b, 0xde, 0xbd, 0xd8, + 0x92, 0xf7, 0xf6, 0xf3, 0x4f, 0xbc, 0x81, 0x4b, 0x5f, 0xb5, 0x6b, 0xe9, 0x7b, 0xf7, 0x05, 0x7c, + 0x0c, 0x5f, 0x02, 0xff, 0x48, 0xc1, 0x7f, 0x06, 0x1b, 0x47, 0x4b, 0xe1, 0xb5, 0x58, 0x8b, 0xbd, + 0xd0, 0xd5, 0x62, 0x4f, 0x8c, 0x40, 0xf1, 0xcf, 0x92, 0xd8, 0xb5, 0x24, 0xfe, 0xa2, 0x40, 0x7e, + 0x70, 0xdc, 0x5e, 0xc2, 0xd2, 0xf8, 0x49, 0x7c, 0x69, 0x7c, 0xe3, 0xf9, 0x93, 0x6c, 0xc0, 0x12, + 0x79, 0x65, 0x58, 0x6e, 0x85, 0xeb, 0xde, 0x08, 0x23, 0xff, 0xbb, 0xd4, 0xb0, 0x50, 0x89, 0xed, + 0x34, 0xe1, 0x57, 0x4b, 0xcc, 0xfa, 0xb2, 0xcd, 0x47, 0x4f, 0x9d, 0x4f, 0x8f, 0x20, 0x21, 0xab, + 0x30, 0x59, 0x0b, 0x66, 0xb5, 0x2c, 0xea, 0xf5, 0x91, 0x46, 0xe4, 0xb0, 0xd1, 0x1e, 0xac, 0x05, + 0x12, 0x86, 0xdb, 
0xf4, 0xa8, 0x0c, 0x13, 0x44, 0xfc, 0x54, 0x1f, 0xb5, 0xb2, 0x93, 0x7e, 0xd8, + 0xeb, 0xc0, 0xb3, 0x30, 0x40, 0x61, 0xc9, 0xad, 0x7e, 0xad, 0xc0, 0x6a, 0x52, 0x4b, 0x40, 0x07, + 0x7d, 0x56, 0xbc, 0x17, 0x58, 0xdf, 0x47, 0x5f, 0xf9, 0xbe, 0x57, 0xe0, 0x68, 0xbf, 0x4d, 0x8a, + 0x17, 0x19, 0x5f, 0x9f, 0xc2, 0xdd, 0x27, 0x2c, 0xb2, 0x1b, 0x42, 0x8a, 0xa5, 0x16, 0x9d, 0x82, + 0xa9, 0xaa, 0x61, 0x97, 0x77, 0xad, 0xcf, 0xda, 0x5b, 0x7d, 0x98, 0xe6, 0xef, 0x4b, 0x39, 0x0e, + 0x11, 0xe8, 0x12, 0x2c, 0x08, 0xbb, 0x4d, 0x62, 0x57, 0xbc, 0xaa, 0x78, 0x11, 0xb9, 0x9a, 0x84, + 0x53, 0xe7, 0x46, 0x97, 0x1e, 0xf7, 0x58, 0xa8, 0x7f, 0x2a, 0x80, 0x9e, 0x67, 0x9b, 0x38, 0x09, + 0x59, 0xc3, 0xb5, 0xc4, 0x8a, 0x1b, 0x14, 0x5a, 0x56, 0x9f, 0x6d, 0x35, 0x0b, 0xd9, 0xf5, 0x9d, + 0xab, 0x81, 0x10, 0x47, 0x7a, 0x0e, 0x6e, 0x0f, 0xda, 0x60, 0xa0, 0x4a, 0x70, 0xdb, 0x31, 0xc3, + 0x91, 0x1e, 0x5d, 0x84, 0x19, 0xb3, 0xe6, 0x33, 0x8f, 0xd0, 0x5d, 0xd3, 0x71, 0x89, 0x68, 0x4c, + 0x53, 0xfa, 0x51, 0x79, 0xa7, 0x99, 0x8d, 0x0e, 0x1d, 0x8e, 0x21, 0x91, 0x06, 0xc0, 0xcb, 0x8a, + 0xb9, 0x06, 0xf7, 0x93, 0x11, 0x7e, 0xe6, 0xf8, 0x83, 0x6d, 0x87, 0x52, 0xdc, 0x81, 0x50, 0xef, + 0xc3, 0xd2, 0x2e, 0xa1, 0x0d, 0xcb, 0x24, 0xeb, 0xa6, 0xe9, 0xf8, 0xb6, 0xd7, 0x5e, 0xd6, 0x8b, + 0x90, 0x0d, 0x61, 0xb2, 0xf2, 0x8e, 0x48, 0xff, 0xd9, 0x90, 0x0b, 0x47, 0x98, 0xb0, 0xd4, 0x53, + 0x03, 0x4b, 0xfd, 0xa7, 0x14, 0x4c, 0x46, 0xf4, 0xe9, 0x7d, 0xcb, 0x2e, 0x4b, 0xe6, 0x63, 0x6d, + 0xf4, 0x35, 0xcb, 0x2e, 0x3f, 0x6b, 0x16, 0xa6, 0x25, 0x8c, 0x7f, 0x62, 0x01, 0x44, 0x57, 0x21, + 0xed, 0x33, 0x42, 0x65, 0x11, 0x9f, 0x4c, 0x4a, 0xe6, 0x5b, 0x8c, 0xd0, 0xf6, 0x7e, 0x35, 0xc5, + 0x99, 0xb9, 0x00, 0x0b, 0x0a, 0xb4, 0x05, 0x99, 0x0a, 0x7f, 0x14, 0x59, 0xa7, 0xa7, 0x92, 0xb8, + 0x3a, 0x7f, 0xc4, 0x04, 0x69, 0x20, 0x24, 0x38, 0x60, 0x41, 0x0f, 0x60, 0x8e, 0xc5, 0x42, 0x28, + 0x9e, 0x6b, 0x84, 0x7d, 0xa9, 0x6f, 0xe0, 0x75, 0xd4, 0x6a, 0x16, 0xe6, 0xe2, 0x2a, 0xdc, 0xe5, + 0x40, 0x2d, 0xc2, 0x74, 0xc7, 0x05, 0x93, 0xbb, 0xac, 
0x7e, 0xe9, 0xd1, 0xd3, 0xfc, 0xd8, 0xe3, + 0xa7, 0xf9, 0xb1, 0x27, 0x4f, 0xf3, 0x63, 0x9f, 0xb7, 0xf2, 0xca, 0xa3, 0x56, 0x5e, 0x79, 0xdc, + 0xca, 0x2b, 0x4f, 0x5a, 0x79, 0xe5, 0xd7, 0x56, 0x5e, 0xf9, 0xf2, 0xb7, 0xfc, 0xd8, 0x9d, 0xfc, + 0xf0, 0xff, 0xc5, 0xfe, 0x1d, 0x00, 0x00, 0xff, 0xff, 0xfd, 0x4d, 0x1e, 0x25, 0xc5, 0x15, 0x00, + 0x00, +} + +func (m *ExemptPriorityLevelConfiguration) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *ExemptPriorityLevelConfiguration) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *ExemptPriorityLevelConfiguration) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if m.LendablePercent != nil { + i = encodeVarintGenerated(dAtA, i, uint64(*m.LendablePercent)) + i-- + dAtA[i] = 0x10 + } + if m.NominalConcurrencyShares != nil { + i = encodeVarintGenerated(dAtA, i, uint64(*m.NominalConcurrencyShares)) + i-- + dAtA[i] = 0x8 + } + return len(dAtA) - i, nil } func (m *FlowDistinguisherMethod) Marshal() (dAtA []byte, err error) { @@ -1491,6 +1557,18 @@ func (m *PriorityLevelConfigurationSpec) MarshalToSizedBuffer(dAtA []byte) (int, _ = i var l int _ = l + if m.Exempt != nil { + { + size, err := m.Exempt.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintGenerated(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x1a + } if m.Limited != nil { { size, err := m.Limited.MarshalToSizedBuffer(dAtA[:i]) @@ -1783,6 +1861,21 @@ func encodeVarintGenerated(dAtA []byte, offset int, v uint64) int { dAtA[offset] = uint8(v) return base } +func (m *ExemptPriorityLevelConfiguration) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + if m.NominalConcurrencyShares != nil { + n += 1 + 
sovGenerated(uint64(*m.NominalConcurrencyShares)) + } + if m.LendablePercent != nil { + n += 1 + sovGenerated(uint64(*m.LendablePercent)) + } + return n +} + func (m *FlowDistinguisherMethod) Size() (n int) { if m == nil { return 0 @@ -2048,6 +2141,10 @@ func (m *PriorityLevelConfigurationSpec) Size() (n int) { l = m.Limited.Size() n += 1 + l + sovGenerated(uint64(l)) } + if m.Exempt != nil { + l = m.Exempt.Size() + n += 1 + l + sovGenerated(uint64(l)) + } return n } @@ -2165,6 +2262,17 @@ func sovGenerated(x uint64) (n int) { func sozGenerated(x uint64) (n int) { return sovGenerated(uint64((x << 1) ^ uint64((int64(x) >> 63)))) } +func (this *ExemptPriorityLevelConfiguration) String() string { + if this == nil { + return "nil" + } + s := strings.Join([]string{`&ExemptPriorityLevelConfiguration{`, + `NominalConcurrencyShares:` + valueToStringGenerated(this.NominalConcurrencyShares) + `,`, + `LendablePercent:` + valueToStringGenerated(this.LendablePercent) + `,`, + `}`, + }, "") + return s +} func (this *FlowDistinguisherMethod) String() string { if this == nil { return "nil" @@ -2381,6 +2489,7 @@ func (this *PriorityLevelConfigurationSpec) String() string { s := strings.Join([]string{`&PriorityLevelConfigurationSpec{`, `Type:` + fmt.Sprintf("%v", this.Type) + `,`, `Limited:` + strings.Replace(this.Limited.String(), "LimitedPriorityLevelConfiguration", "LimitedPriorityLevelConfiguration", 1) + `,`, + `Exempt:` + strings.Replace(this.Exempt.String(), "ExemptPriorityLevelConfiguration", "ExemptPriorityLevelConfiguration", 1) + `,`, `}`, }, "") return s @@ -2468,6 +2577,96 @@ func valueToStringGenerated(v interface{}) string { pv := reflect.Indirect(rv).Interface() return fmt.Sprintf("*%v", pv) } +func (m *ExemptPriorityLevelConfiguration) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowGenerated + } + if iNdEx >= l { + 
return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: ExemptPriorityLevelConfiguration: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: ExemptPriorityLevelConfiguration: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field NominalConcurrencyShares", wireType) + } + var v int32 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowGenerated + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + v |= int32(b&0x7F) << shift + if b < 0x80 { + break + } + } + m.NominalConcurrencyShares = &v + case 2: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field LendablePercent", wireType) + } + var v int32 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowGenerated + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + v |= int32(b&0x7F) << shift + if b < 0x80 { + break + } + } + m.LendablePercent = &v + default: + iNdEx = preIndex + skippy, err := skipGenerated(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthGenerated + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} func (m *FlowDistinguisherMethod) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 @@ -4547,6 +4746,42 @@ func (m *PriorityLevelConfigurationSpec) Unmarshal(dAtA []byte) error { return err } iNdEx = postIndex + case 3: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Exempt", wireType) + } + var msglen int + for shift := uint(0); ; shift 
+= 7 { + if shift >= 64 { + return ErrIntOverflowGenerated + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthGenerated + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthGenerated + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if m.Exempt == nil { + m.Exempt = &ExemptPriorityLevelConfiguration{} + } + if err := m.Exempt.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipGenerated(dAtA[iNdEx:]) diff --git a/cluster-autoscaler/vendor/k8s.io/api/flowcontrol/v1beta2/generated.proto b/cluster-autoscaler/vendor/k8s.io/api/flowcontrol/v1beta2/generated.proto index 4c98f21bcf5b..a8c8a3273740 100644 --- a/cluster-autoscaler/vendor/k8s.io/api/flowcontrol/v1beta2/generated.proto +++ b/cluster-autoscaler/vendor/k8s.io/api/flowcontrol/v1beta2/generated.proto @@ -28,6 +28,40 @@ import "k8s.io/apimachinery/pkg/runtime/schema/generated.proto"; // Package-wide variables from generator "generated". option go_package = "k8s.io/api/flowcontrol/v1beta2"; +// ExemptPriorityLevelConfiguration describes the configurable aspects +// of the handling of exempt requests. +// In the mandatory exempt configuration object the values in the fields +// here can be modified by authorized users, unlike the rest of the `spec`. +message ExemptPriorityLevelConfiguration { + // `nominalConcurrencyShares` (NCS) contributes to the computation of the + // NominalConcurrencyLimit (NominalCL) of this level. + // This is the number of execution seats nominally reserved for this priority level. + // This DOES NOT limit the dispatching from this priority level + // but affects the other priority levels through the borrowing mechanism. 
+ // The server's concurrency limit (ServerCL) is divided among all the + // priority levels in proportion to their NCS values: + // + // NominalCL(i) = ceil( ServerCL * NCS(i) / sum_ncs ) + // sum_ncs = sum[priority level k] NCS(k) + // + // Bigger numbers mean a larger nominal concurrency limit, + // at the expense of every other priority level. + // This field has a default value of zero. + // +optional + optional int32 nominalConcurrencyShares = 1; + + // `lendablePercent` prescribes the fraction of the level's NominalCL that + // can be borrowed by other priority levels. This value of this + // field must be between 0 and 100, inclusive, and it defaults to 0. + // The number of seats that other levels can borrow from this level, known + // as this level's LendableConcurrencyLimit (LendableCL), is defined as follows. + // + // LendableCL(i) = round( NominalCL(i) * lendablePercent(i)/100.0 ) + // + // +optional + optional int32 lendablePercent = 2; +} + // FlowDistinguisherMethod specifies the method of a flow distinguisher. message FlowDistinguisherMethod { // `type` is the type of flow distinguisher method @@ -332,6 +366,14 @@ message PriorityLevelConfigurationSpec { // This field must be non-empty if and only if `type` is `"Limited"`. // +optional optional LimitedPriorityLevelConfiguration limited = 2; + + // `exempt` specifies how requests are handled for an exempt priority level. + // This field MUST be empty if `type` is `"Limited"`. + // This field MAY be non-empty if `type` is `"Exempt"`. + // If empty and `type` is `"Exempt"` then the default values + // for `ExemptPriorityLevelConfiguration` apply. + // +optional + optional ExemptPriorityLevelConfiguration exempt = 3; } // PriorityLevelConfigurationStatus represents the current state of a "request-priority". 
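The concurrency formulas quoted in the `ExemptPriorityLevelConfiguration` comments above — `NominalCL(i) = ceil( ServerCL * NCS(i) / sum_ncs )` and `LendableCL(i) = round( NominalCL(i) * lendablePercent(i)/100.0 )` — can be checked with a small standalone sketch. The function names and the sample numbers below are illustrative only, not part of any Kubernetes API:

```go
package main

import (
	"fmt"
	"math"
)

// nominalCL computes NominalCL(i) = ceil(ServerCL * NCS(i) / sum_ncs),
// as documented for the `nominalConcurrencyShares` field.
func nominalCL(serverCL, ncs, sumNCS int32) int32 {
	return int32(math.Ceil(float64(serverCL) * float64(ncs) / float64(sumNCS)))
}

// lendableCL computes LendableCL(i) = round(NominalCL(i) * lendablePercent(i)/100.0),
// as documented for the `lendablePercent` field.
func lendableCL(nominal, lendablePercent int32) int32 {
	return int32(math.Round(float64(nominal) * float64(lendablePercent) / 100.0))
}

func main() {
	// Hypothetical server with a concurrency limit of 100, divided among
	// priority levels whose shares sum to 125; this level holds 5 shares
	// and lends up to 50% of its nominal limit.
	n := nominalCL(100, 5, 125)
	fmt.Println(n, lendableCL(n, 50)) // 4 seats nominal, 2 of them lendable
}
```

As the field comments note, an exempt level's shares do not cap its own dispatching; they only shrink the nominal limits of the other levels through this proportional split.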
diff --git a/cluster-autoscaler/vendor/k8s.io/api/flowcontrol/v1beta2/types.go b/cluster-autoscaler/vendor/k8s.io/api/flowcontrol/v1beta2/types.go index 75409cee3e7f..e8cf7abfff6e 100644 --- a/cluster-autoscaler/vendor/k8s.io/api/flowcontrol/v1beta2/types.go +++ b/cluster-autoscaler/vendor/k8s.io/api/flowcontrol/v1beta2/types.go @@ -77,7 +77,9 @@ const ( // is a boolean false or has an invalid boolean representation // (if the cluster operator sets it to 'false' it will be stomped) // - any changes to the spec made by the cluster operator will be - // stomped. + // stomped, except for changes to the `nominalConcurrencyShares` + // and `lendablePercent` fields of the PriorityLevelConfiguration + // named "exempt". // // The kube-apiserver will apply updates on the suggested configuration if: // - the cluster operator has enabled auto-update by setting the annotation @@ -435,6 +437,14 @@ type PriorityLevelConfigurationSpec struct { // This field must be non-empty if and only if `type` is `"Limited"`. // +optional Limited *LimitedPriorityLevelConfiguration `json:"limited,omitempty" protobuf:"bytes,2,opt,name=limited"` + + // `exempt` specifies how requests are handled for an exempt priority level. + // This field MUST be empty if `type` is `"Limited"`. + // This field MAY be non-empty if `type` is `"Exempt"`. + // If empty and `type` is `"Exempt"` then the default values + // for `ExemptPriorityLevelConfiguration` apply. + // +optional + Exempt *ExemptPriorityLevelConfiguration `json:"exempt,omitempty" protobuf:"bytes,3,opt,name=exempt"` } // PriorityLevelEnablement indicates whether limits on execution are enabled for the priority level @@ -505,6 +515,43 @@ type LimitedPriorityLevelConfiguration struct { BorrowingLimitPercent *int32 `json:"borrowingLimitPercent,omitempty" protobuf:"varint,4,opt,name=borrowingLimitPercent"` } +// ExemptPriorityLevelConfiguration describes the configurable aspects +// of the handling of exempt requests. 
+// In the mandatory exempt configuration object the values in the fields +// here can be modified by authorized users, unlike the rest of the `spec`. +type ExemptPriorityLevelConfiguration struct { + // `nominalConcurrencyShares` (NCS) contributes to the computation of the + // NominalConcurrencyLimit (NominalCL) of this level. + // This is the number of execution seats nominally reserved for this priority level. + // This DOES NOT limit the dispatching from this priority level + // but affects the other priority levels through the borrowing mechanism. + // The server's concurrency limit (ServerCL) is divided among all the + // priority levels in proportion to their NCS values: + // + // NominalCL(i) = ceil( ServerCL * NCS(i) / sum_ncs ) + // sum_ncs = sum[priority level k] NCS(k) + // + // Bigger numbers mean a larger nominal concurrency limit, + // at the expense of every other priority level. + // This field has a default value of zero. + // +optional + NominalConcurrencyShares *int32 `json:"nominalConcurrencyShares,omitempty" protobuf:"varint,1,opt,name=nominalConcurrencyShares"` + // `lendablePercent` prescribes the fraction of the level's NominalCL that + // can be borrowed by other priority levels. This value of this + // field must be between 0 and 100, inclusive, and it defaults to 0. + // The number of seats that other levels can borrow from this level, known + // as this level's LendableConcurrencyLimit (LendableCL), is defined as follows. + // + // LendableCL(i) = round( NominalCL(i) * lendablePercent(i)/100.0 ) + // + // +optional + LendablePercent *int32 `json:"lendablePercent,omitempty" protobuf:"varint,2,opt,name=lendablePercent"` + // The `BorrowingCL` of an Exempt priority level is implicitly `ServerCL`. + // In other words, an exempt priority level + // has no meaningful limit on how much it borrows. + // There is no explicit representation of that here. +} + // LimitResponse defines how to handle requests that can not be executed right now. 
// +union type LimitResponse struct { diff --git a/cluster-autoscaler/vendor/k8s.io/api/flowcontrol/v1beta2/types_swagger_doc_generated.go b/cluster-autoscaler/vendor/k8s.io/api/flowcontrol/v1beta2/types_swagger_doc_generated.go index b2eff7f96e7d..49a417809663 100644 --- a/cluster-autoscaler/vendor/k8s.io/api/flowcontrol/v1beta2/types_swagger_doc_generated.go +++ b/cluster-autoscaler/vendor/k8s.io/api/flowcontrol/v1beta2/types_swagger_doc_generated.go @@ -27,6 +27,16 @@ package v1beta2 // Those methods can be generated by using hack/update-codegen.sh // AUTO-GENERATED FUNCTIONS START HERE. DO NOT EDIT. +var map_ExemptPriorityLevelConfiguration = map[string]string{ + "": "ExemptPriorityLevelConfiguration describes the configurable aspects of the handling of exempt requests. In the mandatory exempt configuration object the values in the fields here can be modified by authorized users, unlike the rest of the `spec`.", + "nominalConcurrencyShares": "`nominalConcurrencyShares` (NCS) contributes to the computation of the NominalConcurrencyLimit (NominalCL) of this level. This is the number of execution seats nominally reserved for this priority level. This DOES NOT limit the dispatching from this priority level but affects the other priority levels through the borrowing mechanism. The server's concurrency limit (ServerCL) is divided among all the priority levels in proportion to their NCS values:\n\nNominalCL(i) = ceil( ServerCL * NCS(i) / sum_ncs ) sum_ncs = sum[priority level k] NCS(k)\n\nBigger numbers mean a larger nominal concurrency limit, at the expense of every other priority level. This field has a default value of zero.", + "lendablePercent": "`lendablePercent` prescribes the fraction of the level's NominalCL that can be borrowed by other priority levels. This value of this field must be between 0 and 100, inclusive, and it defaults to 0. 
The number of seats that other levels can borrow from this level, known as this level's LendableConcurrencyLimit (LendableCL), is defined as follows.\n\nLendableCL(i) = round( NominalCL(i) * lendablePercent(i)/100.0 )", +} + +func (ExemptPriorityLevelConfiguration) SwaggerDoc() map[string]string { + return map_ExemptPriorityLevelConfiguration +} + var map_FlowDistinguisherMethod = map[string]string{ "": "FlowDistinguisherMethod specifies the method of a flow distinguisher.", "type": "`type` is the type of flow distinguisher method The supported types are \"ByUser\" and \"ByNamespace\". Required.", @@ -190,6 +200,7 @@ var map_PriorityLevelConfigurationSpec = map[string]string{ "": "PriorityLevelConfigurationSpec specifies the configuration of a priority level.", "type": "`type` indicates whether this priority level is subject to limitation on request execution. A value of `\"Exempt\"` means that requests of this priority level are not subject to a limit (and thus are never queued) and do not detract from the capacity made available to other priority levels. A value of `\"Limited\"` means that (a) requests of this priority level _are_ subject to limits and (b) some of the server's limited capacity is made available exclusively to this priority level. Required.", "limited": "`limited` specifies how requests are handled for a Limited priority level. This field must be non-empty if and only if `type` is `\"Limited\"`.", + "exempt": "`exempt` specifies how requests are handled for an exempt priority level. This field MUST be empty if `type` is `\"Limited\"`. This field MAY be non-empty if `type` is `\"Exempt\"`. 
If empty and `type` is `\"Exempt\"` then the default values for `ExemptPriorityLevelConfiguration` apply.", } func (PriorityLevelConfigurationSpec) SwaggerDoc() map[string]string { diff --git a/cluster-autoscaler/vendor/k8s.io/api/flowcontrol/v1beta2/zz_generated.deepcopy.go b/cluster-autoscaler/vendor/k8s.io/api/flowcontrol/v1beta2/zz_generated.deepcopy.go index aa692484c1cc..e0605b95d720 100644 --- a/cluster-autoscaler/vendor/k8s.io/api/flowcontrol/v1beta2/zz_generated.deepcopy.go +++ b/cluster-autoscaler/vendor/k8s.io/api/flowcontrol/v1beta2/zz_generated.deepcopy.go @@ -25,6 +25,32 @@ import ( runtime "k8s.io/apimachinery/pkg/runtime" ) +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *ExemptPriorityLevelConfiguration) DeepCopyInto(out *ExemptPriorityLevelConfiguration) { + *out = *in + if in.NominalConcurrencyShares != nil { + in, out := &in.NominalConcurrencyShares, &out.NominalConcurrencyShares + *out = new(int32) + **out = **in + } + if in.LendablePercent != nil { + in, out := &in.LendablePercent, &out.LendablePercent + *out = new(int32) + **out = **in + } + return +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ExemptPriorityLevelConfiguration. +func (in *ExemptPriorityLevelConfiguration) DeepCopy() *ExemptPriorityLevelConfiguration { + if in == nil { + return nil + } + out := new(ExemptPriorityLevelConfiguration) + in.DeepCopyInto(out) + return out +} + // DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. 
func (in *FlowDistinguisherMethod) DeepCopyInto(out *FlowDistinguisherMethod) { *out = *in @@ -400,6 +426,11 @@ func (in *PriorityLevelConfigurationSpec) DeepCopyInto(out *PriorityLevelConfigu *out = new(LimitedPriorityLevelConfiguration) (*in).DeepCopyInto(*out) } + if in.Exempt != nil { + in, out := &in.Exempt, &out.Exempt + *out = new(ExemptPriorityLevelConfiguration) + (*in).DeepCopyInto(*out) + } return } diff --git a/cluster-autoscaler/vendor/k8s.io/api/flowcontrol/v1beta3/generated.pb.go b/cluster-autoscaler/vendor/k8s.io/api/flowcontrol/v1beta3/generated.pb.go index 166e8520b7cb..c6598306d99d 100644 --- a/cluster-autoscaler/vendor/k8s.io/api/flowcontrol/v1beta3/generated.pb.go +++ b/cluster-autoscaler/vendor/k8s.io/api/flowcontrol/v1beta3/generated.pb.go @@ -43,10 +43,38 @@ var _ = math.Inf // proto package needs to be updated. const _ = proto.GoGoProtoPackageIsVersion3 // please upgrade the proto package +func (m *ExemptPriorityLevelConfiguration) Reset() { *m = ExemptPriorityLevelConfiguration{} } +func (*ExemptPriorityLevelConfiguration) ProtoMessage() {} +func (*ExemptPriorityLevelConfiguration) Descriptor() ([]byte, []int) { + return fileDescriptor_803504887082f044, []int{0} +} +func (m *ExemptPriorityLevelConfiguration) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *ExemptPriorityLevelConfiguration) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err + } + return b[:n], nil +} +func (m *ExemptPriorityLevelConfiguration) XXX_Merge(src proto.Message) { + xxx_messageInfo_ExemptPriorityLevelConfiguration.Merge(m, src) +} +func (m *ExemptPriorityLevelConfiguration) XXX_Size() int { + return m.Size() +} +func (m *ExemptPriorityLevelConfiguration) XXX_DiscardUnknown() { + xxx_messageInfo_ExemptPriorityLevelConfiguration.DiscardUnknown(m) +} + +var xxx_messageInfo_ExemptPriorityLevelConfiguration proto.InternalMessageInfo + func (m 
*FlowDistinguisherMethod) Reset() { *m = FlowDistinguisherMethod{} } func (*FlowDistinguisherMethod) ProtoMessage() {} func (*FlowDistinguisherMethod) Descriptor() ([]byte, []int) { - return fileDescriptor_803504887082f044, []int{0} + return fileDescriptor_803504887082f044, []int{1} } func (m *FlowDistinguisherMethod) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -74,7 +102,7 @@ var xxx_messageInfo_FlowDistinguisherMethod proto.InternalMessageInfo func (m *FlowSchema) Reset() { *m = FlowSchema{} } func (*FlowSchema) ProtoMessage() {} func (*FlowSchema) Descriptor() ([]byte, []int) { - return fileDescriptor_803504887082f044, []int{1} + return fileDescriptor_803504887082f044, []int{2} } func (m *FlowSchema) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -102,7 +130,7 @@ var xxx_messageInfo_FlowSchema proto.InternalMessageInfo func (m *FlowSchemaCondition) Reset() { *m = FlowSchemaCondition{} } func (*FlowSchemaCondition) ProtoMessage() {} func (*FlowSchemaCondition) Descriptor() ([]byte, []int) { - return fileDescriptor_803504887082f044, []int{2} + return fileDescriptor_803504887082f044, []int{3} } func (m *FlowSchemaCondition) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -130,7 +158,7 @@ var xxx_messageInfo_FlowSchemaCondition proto.InternalMessageInfo func (m *FlowSchemaList) Reset() { *m = FlowSchemaList{} } func (*FlowSchemaList) ProtoMessage() {} func (*FlowSchemaList) Descriptor() ([]byte, []int) { - return fileDescriptor_803504887082f044, []int{3} + return fileDescriptor_803504887082f044, []int{4} } func (m *FlowSchemaList) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -158,7 +186,7 @@ var xxx_messageInfo_FlowSchemaList proto.InternalMessageInfo func (m *FlowSchemaSpec) Reset() { *m = FlowSchemaSpec{} } func (*FlowSchemaSpec) ProtoMessage() {} func (*FlowSchemaSpec) Descriptor() ([]byte, []int) { - return fileDescriptor_803504887082f044, []int{4} + return fileDescriptor_803504887082f044, []int{5} } func (m 
*FlowSchemaSpec) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -186,7 +214,7 @@ var xxx_messageInfo_FlowSchemaSpec proto.InternalMessageInfo func (m *FlowSchemaStatus) Reset() { *m = FlowSchemaStatus{} } func (*FlowSchemaStatus) ProtoMessage() {} func (*FlowSchemaStatus) Descriptor() ([]byte, []int) { - return fileDescriptor_803504887082f044, []int{5} + return fileDescriptor_803504887082f044, []int{6} } func (m *FlowSchemaStatus) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -214,7 +242,7 @@ var xxx_messageInfo_FlowSchemaStatus proto.InternalMessageInfo func (m *GroupSubject) Reset() { *m = GroupSubject{} } func (*GroupSubject) ProtoMessage() {} func (*GroupSubject) Descriptor() ([]byte, []int) { - return fileDescriptor_803504887082f044, []int{6} + return fileDescriptor_803504887082f044, []int{7} } func (m *GroupSubject) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -242,7 +270,7 @@ var xxx_messageInfo_GroupSubject proto.InternalMessageInfo func (m *LimitResponse) Reset() { *m = LimitResponse{} } func (*LimitResponse) ProtoMessage() {} func (*LimitResponse) Descriptor() ([]byte, []int) { - return fileDescriptor_803504887082f044, []int{7} + return fileDescriptor_803504887082f044, []int{8} } func (m *LimitResponse) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -270,7 +298,7 @@ var xxx_messageInfo_LimitResponse proto.InternalMessageInfo func (m *LimitedPriorityLevelConfiguration) Reset() { *m = LimitedPriorityLevelConfiguration{} } func (*LimitedPriorityLevelConfiguration) ProtoMessage() {} func (*LimitedPriorityLevelConfiguration) Descriptor() ([]byte, []int) { - return fileDescriptor_803504887082f044, []int{8} + return fileDescriptor_803504887082f044, []int{9} } func (m *LimitedPriorityLevelConfiguration) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -298,7 +326,7 @@ var xxx_messageInfo_LimitedPriorityLevelConfiguration proto.InternalMessageInfo func (m *NonResourcePolicyRule) Reset() { *m = 
NonResourcePolicyRule{} } func (*NonResourcePolicyRule) ProtoMessage() {} func (*NonResourcePolicyRule) Descriptor() ([]byte, []int) { - return fileDescriptor_803504887082f044, []int{9} + return fileDescriptor_803504887082f044, []int{10} } func (m *NonResourcePolicyRule) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -326,7 +354,7 @@ var xxx_messageInfo_NonResourcePolicyRule proto.InternalMessageInfo func (m *PolicyRulesWithSubjects) Reset() { *m = PolicyRulesWithSubjects{} } func (*PolicyRulesWithSubjects) ProtoMessage() {} func (*PolicyRulesWithSubjects) Descriptor() ([]byte, []int) { - return fileDescriptor_803504887082f044, []int{10} + return fileDescriptor_803504887082f044, []int{11} } func (m *PolicyRulesWithSubjects) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -354,7 +382,7 @@ var xxx_messageInfo_PolicyRulesWithSubjects proto.InternalMessageInfo func (m *PriorityLevelConfiguration) Reset() { *m = PriorityLevelConfiguration{} } func (*PriorityLevelConfiguration) ProtoMessage() {} func (*PriorityLevelConfiguration) Descriptor() ([]byte, []int) { - return fileDescriptor_803504887082f044, []int{11} + return fileDescriptor_803504887082f044, []int{12} } func (m *PriorityLevelConfiguration) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -382,7 +410,7 @@ var xxx_messageInfo_PriorityLevelConfiguration proto.InternalMessageInfo func (m *PriorityLevelConfigurationCondition) Reset() { *m = PriorityLevelConfigurationCondition{} } func (*PriorityLevelConfigurationCondition) ProtoMessage() {} func (*PriorityLevelConfigurationCondition) Descriptor() ([]byte, []int) { - return fileDescriptor_803504887082f044, []int{12} + return fileDescriptor_803504887082f044, []int{13} } func (m *PriorityLevelConfigurationCondition) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -410,7 +438,7 @@ var xxx_messageInfo_PriorityLevelConfigurationCondition proto.InternalMessageInf func (m *PriorityLevelConfigurationList) Reset() { *m = 
PriorityLevelConfigurationList{} } func (*PriorityLevelConfigurationList) ProtoMessage() {} func (*PriorityLevelConfigurationList) Descriptor() ([]byte, []int) { - return fileDescriptor_803504887082f044, []int{13} + return fileDescriptor_803504887082f044, []int{14} } func (m *PriorityLevelConfigurationList) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -438,7 +466,7 @@ var xxx_messageInfo_PriorityLevelConfigurationList proto.InternalMessageInfo func (m *PriorityLevelConfigurationReference) Reset() { *m = PriorityLevelConfigurationReference{} } func (*PriorityLevelConfigurationReference) ProtoMessage() {} func (*PriorityLevelConfigurationReference) Descriptor() ([]byte, []int) { - return fileDescriptor_803504887082f044, []int{14} + return fileDescriptor_803504887082f044, []int{15} } func (m *PriorityLevelConfigurationReference) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -466,7 +494,7 @@ var xxx_messageInfo_PriorityLevelConfigurationReference proto.InternalMessageInf func (m *PriorityLevelConfigurationSpec) Reset() { *m = PriorityLevelConfigurationSpec{} } func (*PriorityLevelConfigurationSpec) ProtoMessage() {} func (*PriorityLevelConfigurationSpec) Descriptor() ([]byte, []int) { - return fileDescriptor_803504887082f044, []int{15} + return fileDescriptor_803504887082f044, []int{16} } func (m *PriorityLevelConfigurationSpec) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -494,7 +522,7 @@ var xxx_messageInfo_PriorityLevelConfigurationSpec proto.InternalMessageInfo func (m *PriorityLevelConfigurationStatus) Reset() { *m = PriorityLevelConfigurationStatus{} } func (*PriorityLevelConfigurationStatus) ProtoMessage() {} func (*PriorityLevelConfigurationStatus) Descriptor() ([]byte, []int) { - return fileDescriptor_803504887082f044, []int{16} + return fileDescriptor_803504887082f044, []int{17} } func (m *PriorityLevelConfigurationStatus) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -522,7 +550,7 @@ var 
xxx_messageInfo_PriorityLevelConfigurationStatus proto.InternalMessageInfo func (m *QueuingConfiguration) Reset() { *m = QueuingConfiguration{} } func (*QueuingConfiguration) ProtoMessage() {} func (*QueuingConfiguration) Descriptor() ([]byte, []int) { - return fileDescriptor_803504887082f044, []int{17} + return fileDescriptor_803504887082f044, []int{18} } func (m *QueuingConfiguration) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -550,7 +578,7 @@ var xxx_messageInfo_QueuingConfiguration proto.InternalMessageInfo func (m *ResourcePolicyRule) Reset() { *m = ResourcePolicyRule{} } func (*ResourcePolicyRule) ProtoMessage() {} func (*ResourcePolicyRule) Descriptor() ([]byte, []int) { - return fileDescriptor_803504887082f044, []int{18} + return fileDescriptor_803504887082f044, []int{19} } func (m *ResourcePolicyRule) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -578,7 +606,7 @@ var xxx_messageInfo_ResourcePolicyRule proto.InternalMessageInfo func (m *ServiceAccountSubject) Reset() { *m = ServiceAccountSubject{} } func (*ServiceAccountSubject) ProtoMessage() {} func (*ServiceAccountSubject) Descriptor() ([]byte, []int) { - return fileDescriptor_803504887082f044, []int{19} + return fileDescriptor_803504887082f044, []int{20} } func (m *ServiceAccountSubject) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -606,7 +634,7 @@ var xxx_messageInfo_ServiceAccountSubject proto.InternalMessageInfo func (m *Subject) Reset() { *m = Subject{} } func (*Subject) ProtoMessage() {} func (*Subject) Descriptor() ([]byte, []int) { - return fileDescriptor_803504887082f044, []int{20} + return fileDescriptor_803504887082f044, []int{21} } func (m *Subject) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -634,7 +662,7 @@ var xxx_messageInfo_Subject proto.InternalMessageInfo func (m *UserSubject) Reset() { *m = UserSubject{} } func (*UserSubject) ProtoMessage() {} func (*UserSubject) Descriptor() ([]byte, []int) { - return fileDescriptor_803504887082f044, 
[]int{21} + return fileDescriptor_803504887082f044, []int{22} } func (m *UserSubject) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -660,6 +688,7 @@ func (m *UserSubject) XXX_DiscardUnknown() { var xxx_messageInfo_UserSubject proto.InternalMessageInfo func init() { + proto.RegisterType((*ExemptPriorityLevelConfiguration)(nil), "k8s.io.api.flowcontrol.v1beta3.ExemptPriorityLevelConfiguration") proto.RegisterType((*FlowDistinguisherMethod)(nil), "k8s.io.api.flowcontrol.v1beta3.FlowDistinguisherMethod") proto.RegisterType((*FlowSchema)(nil), "k8s.io.api.flowcontrol.v1beta3.FlowSchema") proto.RegisterType((*FlowSchemaCondition)(nil), "k8s.io.api.flowcontrol.v1beta3.FlowSchemaCondition") @@ -689,104 +718,141 @@ func init() { } var fileDescriptor_803504887082f044 = []byte{ - // 1552 bytes of a gzipped FileDescriptorProto - 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xec, 0x58, 0x4d, 0x6f, 0xdb, 0x46, - 0x13, 0x36, 0x65, 0xc9, 0xb6, 0xd6, 0x9f, 0x59, 0xc7, 0xb0, 0x5e, 0x07, 0x90, 0x1c, 0xbe, 0xc0, - 0x9b, 0xb7, 0x4d, 0x42, 0xe5, 0xb3, 0x49, 0x5b, 0xf4, 0x23, 0x74, 0xda, 0x34, 0x8d, 0xed, 0x38, - 0xeb, 0xa4, 0x2d, 0xd2, 0x00, 0x0d, 0x45, 0xad, 0xa9, 0x8d, 0x25, 0x92, 0xd9, 0x25, 0x65, 0xb8, - 0xb9, 0x14, 0xfd, 0x05, 0x3d, 0xb7, 0xc7, 0x1e, 0x7a, 0xef, 0x1f, 0xe8, 0xb1, 0x41, 0x4f, 0x39, - 0xe6, 0xa4, 0x36, 0xea, 0xa9, 0xff, 0xa0, 0x0d, 0x50, 0xa0, 0xd8, 0xe5, 0x92, 0x14, 0xa9, 0x0f, - 0x0a, 0x09, 0x90, 0x53, 0x6f, 0xe6, 0xcc, 0x33, 0xcf, 0xec, 0xcc, 0xce, 0xcc, 0x8e, 0x0c, 0xae, - 0xed, 0x5f, 0x66, 0x1a, 0x71, 0xaa, 0xfb, 0x7e, 0x0d, 0x53, 0x1b, 0x7b, 0x98, 0x55, 0xdb, 0xd8, - 0xae, 0x3b, 0xb4, 0x2a, 0x15, 0x86, 0x4b, 0xaa, 0x7b, 0x4d, 0xe7, 0xc0, 0x74, 0x6c, 0x8f, 0x3a, - 0xcd, 0x6a, 0xfb, 0x6c, 0x0d, 0x7b, 0xc6, 0xf9, 0xaa, 0x85, 0x6d, 0x4c, 0x0d, 0x0f, 0xd7, 0x35, - 0x97, 0x3a, 0x9e, 0x03, 0xcb, 0x01, 0x5e, 0x33, 0x5c, 0xa2, 0xf5, 0xe0, 0x35, 0x89, 0x5f, 0x3b, - 0x6d, 0x11, 0xaf, 0xe1, 0xd7, 0x34, 0xd3, 0x69, 0x55, 0x2d, 0xc7, 0x72, 0xaa, 
0xc2, 0xac, 0xe6, - 0xef, 0x89, 0x2f, 0xf1, 0x21, 0xfe, 0x0a, 0xe8, 0xd6, 0x2e, 0xc4, 0xee, 0x5b, 0x86, 0xd9, 0x20, - 0x36, 0xa6, 0x87, 0x55, 0x77, 0xdf, 0xe2, 0x02, 0x56, 0x6d, 0x61, 0xcf, 0xa8, 0xb6, 0xcf, 0xa6, - 0x0f, 0xb1, 0x56, 0x1d, 0x66, 0x45, 0x7d, 0xdb, 0x23, 0x2d, 0xdc, 0x67, 0xf0, 0x46, 0x96, 0x01, - 0x33, 0x1b, 0xb8, 0x65, 0xa4, 0xed, 0xd4, 0xbb, 0x60, 0xf5, 0xc3, 0xa6, 0x73, 0x70, 0x95, 0x30, - 0x8f, 0xd8, 0x96, 0x4f, 0x58, 0x03, 0xd3, 0x2d, 0xec, 0x35, 0x9c, 0x3a, 0x7c, 0x0f, 0xe4, 0xbd, - 0x43, 0x17, 0x97, 0x94, 0x75, 0xe5, 0xff, 0x45, 0xfd, 0xe4, 0xe3, 0x4e, 0x65, 0xa2, 0xdb, 0xa9, - 0xe4, 0x6f, 0x1f, 0xba, 0xf8, 0x79, 0xa7, 0x72, 0x6c, 0x88, 0x19, 0x57, 0x23, 0x61, 0xa8, 0x7e, - 0x9b, 0x03, 0x80, 0xa3, 0x76, 0x85, 0x6b, 0x78, 0x1f, 0xcc, 0xf0, 0x70, 0xeb, 0x86, 0x67, 0x08, - 0xce, 0xd9, 0x73, 0x67, 0xb4, 0x38, 0xd7, 0xd1, 0xa9, 0x35, 0x77, 0xdf, 0xe2, 0x02, 0xa6, 0x71, - 0xb4, 0xd6, 0x3e, 0xab, 0xdd, 0xac, 0x3d, 0xc0, 0xa6, 0xb7, 0x85, 0x3d, 0x43, 0x87, 0xf2, 0x14, - 0x20, 0x96, 0xa1, 0x88, 0x15, 0xee, 0x80, 0x3c, 0x73, 0xb1, 0x59, 0xca, 0x09, 0x76, 0x4d, 0x1b, - 0x7d, 0x93, 0x5a, 0x7c, 0xb6, 0x5d, 0x17, 0x9b, 0xfa, 0x5c, 0x18, 0x21, 0xff, 0x42, 0x82, 0x09, - 0x7e, 0x06, 0xa6, 0x98, 0x67, 0x78, 0x3e, 0x2b, 0x4d, 0xf6, 0x9d, 0x38, 0x8b, 0x53, 0xd8, 0xe9, - 0x0b, 0x92, 0x75, 0x2a, 0xf8, 0x46, 0x92, 0x4f, 0x7d, 0x9a, 0x03, 0xcb, 0x31, 0x78, 0xc3, 0xb1, - 0xeb, 0xc4, 0x23, 0x8e, 0x0d, 0xdf, 0x4e, 0x64, 0xfd, 0x44, 0x2a, 0xeb, 0xab, 0x03, 0x4c, 0xe2, - 0x8c, 0xc3, 0x37, 0xa3, 0xe3, 0xe6, 0x84, 0xf9, 0xf1, 0xa4, 0xf3, 0xe7, 0x9d, 0xca, 0x62, 0x64, - 0x96, 0x3c, 0x0f, 0x6c, 0x03, 0xd8, 0x34, 0x98, 0x77, 0x9b, 0x1a, 0x36, 0x0b, 0x68, 0x49, 0x0b, - 0xcb, 0xa8, 0x5f, 0x1f, 0xef, 0x9e, 0xb8, 0x85, 0xbe, 0x26, 0x5d, 0xc2, 0xcd, 0x3e, 0x36, 0x34, - 0xc0, 0x03, 0xfc, 0x1f, 0x98, 0xa2, 0xd8, 0x60, 0x8e, 0x5d, 0xca, 0x8b, 0x23, 0x47, 0xf9, 0x42, - 0x42, 0x8a, 0xa4, 0x16, 0xbe, 0x06, 0xa6, 0x5b, 0x98, 0x31, 0xc3, 0xc2, 0xa5, 0x82, 0x00, 0x2e, - 0x4a, 0xe0, 0xf4, 
0x56, 0x20, 0x46, 0xa1, 0x5e, 0xfd, 0x49, 0x01, 0x0b, 0x71, 0x9e, 0x36, 0x09, - 0xf3, 0xe0, 0xbd, 0xbe, 0xda, 0xd3, 0xc6, 0x8b, 0x89, 0x5b, 0x8b, 0xca, 0x5b, 0x92, 0xee, 0x66, - 0x42, 0x49, 0x4f, 0xdd, 0xdd, 0x04, 0x05, 0xe2, 0xe1, 0x16, 0xcf, 0xfa, 0x64, 0x2a, 0x5d, 0x19, - 0x45, 0xa2, 0xcf, 0x4b, 0xda, 0xc2, 0x75, 0x4e, 0x80, 0x02, 0x1e, 0xf5, 0x8f, 0xc9, 0xde, 0x08, - 0x78, 0x3d, 0xc2, 0x1f, 0x14, 0xb0, 0xe6, 0x52, 0xe2, 0x50, 0xe2, 0x1d, 0x6e, 0xe2, 0x36, 0x6e, - 0x6e, 0x38, 0xf6, 0x1e, 0xb1, 0x7c, 0x6a, 0xf0, 0x54, 0xca, 0xa0, 0x36, 0xb2, 0x3c, 0xef, 0x0c, - 0x65, 0x40, 0x78, 0x0f, 0x53, 0x6c, 0x9b, 0x58, 0x57, 0xe5, 0x91, 0xd6, 0x46, 0x80, 0x47, 0x1c, - 0x05, 0x7e, 0x0c, 0x60, 0xcb, 0xf0, 0x78, 0x46, 0xad, 0x1d, 0x8a, 0x4d, 0x5c, 0xe7, 0xac, 0xa2, - 0x20, 0x0b, 0x71, 0x75, 0x6c, 0xf5, 0x21, 0xd0, 0x00, 0x2b, 0xf8, 0xb5, 0x02, 0x96, 0xeb, 0xfd, - 0x43, 0x46, 0xd6, 0xe5, 0xa5, 0x71, 0x12, 0x3d, 0x60, 0x46, 0xe9, 0xab, 0xdd, 0x4e, 0x65, 0x79, - 0x80, 0x02, 0x0d, 0x72, 0x06, 0xef, 0x81, 0x02, 0xf5, 0x9b, 0x98, 0x95, 0xf2, 0xe2, 0x7a, 0x33, - 0xbd, 0xee, 0x38, 0x4d, 0x62, 0x1e, 0x22, 0x6e, 0xf2, 0x29, 0xf1, 0x1a, 0xbb, 0xbe, 0x98, 0x55, - 0x2c, 0xbe, 0x6b, 0xa1, 0x42, 0x01, 0xa9, 0xfa, 0x08, 0x2c, 0xa5, 0x87, 0x06, 0xb4, 0x00, 0x30, - 0xc3, 0x3e, 0x65, 0x25, 0x45, 0xb8, 0x3d, 0x3f, 0x7e, 0x55, 0x45, 0x3d, 0x1e, 0xcf, 0xcb, 0x48, - 0xc4, 0x50, 0x0f, 0xb5, 0x7a, 0x06, 0xcc, 0x5d, 0xa3, 0x8e, 0xef, 0xca, 0x33, 0xc2, 0x75, 0x90, - 0xb7, 0x8d, 0x56, 0x38, 0x7d, 0xa2, 0x89, 0xb8, 0x6d, 0xb4, 0x30, 0x12, 0x1a, 0xf5, 0x7b, 0x05, - 0xcc, 0x6f, 0x92, 0x16, 0xf1, 0x10, 0x66, 0xae, 0x63, 0x33, 0x0c, 0x2f, 0x26, 0x26, 0xd6, 0xf1, - 0xd4, 0xc4, 0x3a, 0x92, 0x00, 0xf7, 0xcc, 0xaa, 0xcf, 0xc1, 0xf4, 0x43, 0x1f, 0xfb, 0xc4, 0xb6, - 0xe4, 0xbc, 0xbe, 0x90, 0x15, 0xe0, 0xad, 0x00, 0x9e, 0xa8, 0x36, 0x7d, 0x96, 0x8f, 0x00, 0xa9, - 0x41, 0x21, 0xa3, 0xfa, 0x77, 0x0e, 0x1c, 0x17, 0x8e, 0x71, 0x7d, 0x78, 0x15, 0xc3, 0x7b, 0xa0, - 0x64, 0x3b, 0x2d, 0x62, 0x1b, 0x5c, 0x6e, 0xfa, 0x94, 
0xd7, 0xff, 0xe1, 0x6e, 0xc3, 0xa0, 0x98, - 0x89, 0x68, 0x0a, 0xfa, 0xba, 0x8c, 0xa6, 0xb4, 0x3d, 0x04, 0x87, 0x86, 0x32, 0xc0, 0x07, 0x60, - 0xbe, 0xd9, 0x1b, 0xbb, 0x0c, 0xf3, 0x74, 0x56, 0x98, 0x89, 0x84, 0xe9, 0x2b, 0xf2, 0x04, 0xc9, - 0xa4, 0xa3, 0x24, 0x35, 0x7c, 0x07, 0x2c, 0x36, 0xb1, 0x5d, 0x37, 0x6a, 0x4d, 0xbc, 0x83, 0xa9, - 0x89, 0x6d, 0x4f, 0xb4, 0x48, 0x41, 0x5f, 0xee, 0x76, 0x2a, 0x8b, 0x9b, 0x49, 0x15, 0x4a, 0x63, - 0xe1, 0x4d, 0xb0, 0x52, 0x73, 0x28, 0x75, 0x0e, 0x88, 0x6d, 0x09, 0x3f, 0x21, 0x49, 0x5e, 0x90, - 0xfc, 0xa7, 0xdb, 0xa9, 0xac, 0xe8, 0x83, 0x00, 0x68, 0xb0, 0x9d, 0x7a, 0x00, 0x56, 0xb6, 0xf9, - 0x4c, 0x61, 0x8e, 0x4f, 0x4d, 0x1c, 0x37, 0x04, 0xac, 0x80, 0x42, 0x1b, 0xd3, 0x5a, 0x50, 0xd4, - 0x45, 0xbd, 0xc8, 0xdb, 0xe1, 0x13, 0x2e, 0x40, 0x81, 0x9c, 0x47, 0x62, 0xc7, 0x96, 0x77, 0xd0, - 0x26, 0x2b, 0x4d, 0x09, 0xa8, 0x88, 0x64, 0x3b, 0xa9, 0x42, 0x69, 0xac, 0xda, 0xc9, 0x81, 0xd5, - 0x21, 0xfd, 0x07, 0xef, 0x80, 0x19, 0x26, 0xff, 0x96, 0x3d, 0x75, 0x22, 0xeb, 0x2e, 0xa4, 0x6d, - 0x3c, 0xfd, 0x43, 0x32, 0x14, 0x51, 0x41, 0x07, 0xcc, 0x53, 0x79, 0x04, 0xe1, 0x53, 0xbe, 0x02, - 0xe7, 0xb2, 0xb8, 0xfb, 0xb3, 0x13, 0x5f, 0x36, 0xea, 0x25, 0x44, 0x49, 0x7e, 0xf8, 0x08, 0x2c, - 0xf5, 0x84, 0x1d, 0xf8, 0x9c, 0x14, 0x3e, 0x2f, 0x66, 0xf9, 0x1c, 0x78, 0x29, 0x7a, 0x49, 0xba, - 0x5d, 0xda, 0x4e, 0xd1, 0xa2, 0x3e, 0x47, 0xea, 0x2f, 0x39, 0x30, 0xe2, 0x61, 0x78, 0x05, 0x4b, - 0xde, 0xfd, 0xc4, 0x92, 0xf7, 0xee, 0x8b, 0xbf, 0x78, 0x43, 0x97, 0xbe, 0x46, 0x6a, 0xe9, 0x7b, - 0xff, 0x25, 0x7c, 0x8c, 0x5e, 0x02, 0xff, 0xcc, 0x81, 0xff, 0x0e, 0x37, 0x8e, 0x97, 0xc2, 0x1b, - 0x89, 0x11, 0x7b, 0x29, 0x35, 0x62, 0x4f, 0x8c, 0x41, 0xf1, 0xef, 0x92, 0x98, 0x5a, 0x12, 0x7f, - 0x55, 0x40, 0x79, 0x78, 0xde, 0x5e, 0xc1, 0xd2, 0xf8, 0x45, 0x72, 0x69, 0x7c, 0xeb, 0xc5, 0x8b, - 0x6c, 0xc8, 0x12, 0x79, 0x6d, 0x54, 0x6d, 0x45, 0xeb, 0xde, 0x18, 0x4f, 0xfe, 0xcf, 0x23, 0x53, - 0x25, 0xb6, 0xd3, 0x8c, 0x5f, 0x2d, 0x09, 0xeb, 0x0f, 0x6c, 0xfe, 0xf4, 0xb4, 0xf8, 0xeb, 
0x11, - 0x14, 0x64, 0x03, 0x4c, 0x37, 0x83, 0xb7, 0x5a, 0x36, 0xf5, 0x95, 0xb1, 0x9e, 0xc8, 0x51, 0x4f, - 0x7b, 0xb0, 0x16, 0x48, 0x18, 0x0a, 0xe9, 0xd5, 0xef, 0x14, 0xb0, 0x9e, 0xd5, 0xac, 0xf0, 0x60, - 0xc0, 0xf2, 0xf5, 0x12, 0x8b, 0xf5, 0xf8, 0xcb, 0xd8, 0x8f, 0x0a, 0x38, 0x3a, 0x68, 0xc7, 0xe1, - 0xe5, 0xcf, 0x17, 0x9b, 0x68, 0x2b, 0x89, 0xca, 0xff, 0x96, 0x90, 0x22, 0xa9, 0x85, 0xa7, 0xc0, - 0x4c, 0xc3, 0xb0, 0xeb, 0xbb, 0xe4, 0xcb, 0x70, 0xdf, 0x8e, 0x0a, 0xf0, 0x23, 0x29, 0x47, 0x11, - 0x02, 0x5e, 0x05, 0x4b, 0xc2, 0x6e, 0x13, 0xdb, 0x96, 0xd7, 0x10, 0xb9, 0x92, 0x4b, 0x43, 0xf4, - 0x1e, 0xdc, 0x4a, 0xe9, 0x51, 0x9f, 0x85, 0xfa, 0x97, 0x02, 0xe0, 0x8b, 0xbc, 0xf3, 0x27, 0x41, - 0xd1, 0x70, 0x89, 0x58, 0x3e, 0x83, 0x16, 0x28, 0xea, 0xf3, 0xdd, 0x4e, 0xa5, 0x78, 0x65, 0xe7, - 0x7a, 0x20, 0x44, 0xb1, 0x9e, 0x83, 0xc3, 0x27, 0x30, 0x78, 0xea, 0x24, 0x38, 0x74, 0xcc, 0x50, - 0xac, 0x87, 0x97, 0xc1, 0x9c, 0xd9, 0xf4, 0x99, 0x87, 0xe9, 0xae, 0xe9, 0xb8, 0x58, 0x8c, 0x8c, - 0x19, 0xfd, 0xa8, 0x8c, 0x69, 0x6e, 0xa3, 0x47, 0x87, 0x12, 0x48, 0xa8, 0x01, 0xc0, 0x0b, 0x9e, - 0xb9, 0x06, 0xf7, 0x53, 0x10, 0x7e, 0x16, 0xf8, 0x85, 0x6d, 0x47, 0x52, 0xd4, 0x83, 0x50, 0x1f, - 0x80, 0x95, 0x5d, 0x4c, 0xdb, 0xc4, 0xc4, 0x57, 0x4c, 0xd3, 0xf1, 0x6d, 0x2f, 0x5c, 0xa3, 0xab, - 0xa0, 0x18, 0xc1, 0x64, 0x4f, 0x1c, 0x91, 0xfe, 0x8b, 0x11, 0x17, 0x8a, 0x31, 0x51, 0x13, 0xe6, - 0x86, 0x37, 0x61, 0x0e, 0x4c, 0xc7, 0xf4, 0xf9, 0x7d, 0x62, 0xd7, 0x25, 0xf3, 0xb1, 0x10, 0x7d, - 0x83, 0xd8, 0xf5, 0xe7, 0x9d, 0xca, 0xac, 0x84, 0xf1, 0x4f, 0x24, 0x80, 0xf0, 0x3a, 0xc8, 0xfb, - 0x0c, 0x53, 0xd9, 0x5e, 0x27, 0xb3, 0x8a, 0xf9, 0x0e, 0xc3, 0x34, 0xdc, 0x7c, 0x66, 0x38, 0x33, - 0x17, 0x20, 0x41, 0x01, 0xb7, 0x40, 0xc1, 0xe2, 0x97, 0x22, 0xa7, 0xfe, 0xa9, 0x2c, 0xae, 0xde, - 0x9f, 0x17, 0x41, 0x19, 0x08, 0x09, 0x0a, 0x58, 0xe0, 0x43, 0xb0, 0xc0, 0x12, 0x29, 0x14, 0xd7, - 0x35, 0xc6, 0x26, 0x33, 0x30, 0xf1, 0x3a, 0xec, 0x76, 0x2a, 0x0b, 0x49, 0x15, 0x4a, 0x39, 0x50, - 0xab, 0x60, 0xb6, 0x27, 0xc0, 
0xec, 0xf9, 0xa7, 0x5f, 0x7d, 0xfc, 0xac, 0x3c, 0xf1, 0xe4, 0x59, - 0x79, 0xe2, 0xe9, 0xb3, 0xf2, 0xc4, 0x57, 0xdd, 0xb2, 0xf2, 0xb8, 0x5b, 0x56, 0x9e, 0x74, 0xcb, - 0xca, 0xd3, 0x6e, 0x59, 0xf9, 0xad, 0x5b, 0x56, 0xbe, 0xf9, 0xbd, 0x3c, 0x71, 0xb7, 0x3c, 0xfa, - 0xff, 0x8c, 0xff, 0x04, 0x00, 0x00, 0xff, 0xff, 0x98, 0x4a, 0x24, 0x86, 0xa1, 0x14, 0x00, 0x00, + // 1604 bytes of a gzipped FileDescriptorProto + 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xec, 0x58, 0xcb, 0x73, 0xdb, 0x54, + 0x17, 0x8f, 0x1c, 0x3b, 0x89, 0x4f, 0x9e, 0xbd, 0x69, 0x26, 0xfe, 0xd2, 0x6f, 0xec, 0x54, 0xdf, + 0xcc, 0x57, 0xa0, 0xad, 0xdc, 0x27, 0x2d, 0x30, 0x3c, 0xaa, 0xb4, 0x94, 0xd2, 0x24, 0x4d, 0x6f, + 0x5a, 0xe8, 0x94, 0xce, 0x50, 0x59, 0xbe, 0xb1, 0xd5, 0x58, 0x8f, 0xea, 0x4a, 0x0e, 0xa1, 0x1b, + 0x86, 0xbf, 0x80, 0x35, 0x2c, 0x59, 0xb0, 0x62, 0xc3, 0x96, 0x05, 0x4b, 0x3a, 0xac, 0xba, 0xec, + 0xca, 0x50, 0xb3, 0xe2, 0x3f, 0x80, 0xce, 0x30, 0xc3, 0xdc, 0xab, 0x2b, 0xc9, 0xf2, 0x4b, 0x9e, + 0x74, 0xa6, 0x2b, 0x76, 0xd1, 0x79, 0xfc, 0xce, 0xbd, 0xe7, 0x9e, 0xc7, 0xcf, 0x81, 0xab, 0xbb, + 0x17, 0xa9, 0x62, 0xd8, 0xe5, 0x5d, 0xbf, 0x42, 0x5c, 0x8b, 0x78, 0x84, 0x96, 0x9b, 0xc4, 0xaa, + 0xda, 0x6e, 0x59, 0x28, 0x34, 0xc7, 0x28, 0xef, 0x34, 0xec, 0x3d, 0xdd, 0xb6, 0x3c, 0xd7, 0x6e, + 0x94, 0x9b, 0xa7, 0x2b, 0xc4, 0xd3, 0xce, 0x96, 0x6b, 0xc4, 0x22, 0xae, 0xe6, 0x91, 0xaa, 0xe2, + 0xb8, 0xb6, 0x67, 0xa3, 0x62, 0x60, 0xaf, 0x68, 0x8e, 0xa1, 0x74, 0xd8, 0x2b, 0xc2, 0x7e, 0xe5, + 0x64, 0xcd, 0xf0, 0xea, 0x7e, 0x45, 0xd1, 0x6d, 0xb3, 0x5c, 0xb3, 0x6b, 0x76, 0x99, 0xbb, 0x55, + 0xfc, 0x1d, 0xfe, 0xc5, 0x3f, 0xf8, 0x5f, 0x01, 0xdc, 0xca, 0xb9, 0x38, 0xbc, 0xa9, 0xe9, 0x75, + 0xc3, 0x22, 0xee, 0x7e, 0xd9, 0xd9, 0xad, 0x31, 0x01, 0x2d, 0x9b, 0xc4, 0xd3, 0xca, 0xcd, 0xd3, + 0xdd, 0x87, 0x58, 0x29, 0x0f, 0xf2, 0x72, 0x7d, 0xcb, 0x33, 0x4c, 0xd2, 0xe3, 0xf0, 0x7a, 0x9a, + 0x03, 0xd5, 0xeb, 0xc4, 0xd4, 0xba, 0xfd, 0xe4, 0x1f, 0x25, 0x58, 0xbd, 0xf2, 0x19, 0x31, 0x1d, + 0x6f, 0xcb, 0x35, 
0x6c, 0xd7, 0xf0, 0xf6, 0xd7, 0x49, 0x93, 0x34, 0xd6, 0x6c, 0x6b, 0xc7, 0xa8, + 0xf9, 0xae, 0xe6, 0x19, 0xb6, 0x85, 0xee, 0x40, 0xc1, 0xb2, 0x4d, 0xc3, 0xd2, 0x98, 0x5c, 0xf7, + 0x5d, 0x97, 0x58, 0xfa, 0xfe, 0x76, 0x5d, 0x73, 0x09, 0x2d, 0x48, 0xab, 0xd2, 0x2b, 0x39, 0xf5, + 0xbf, 0xed, 0x56, 0xa9, 0xb0, 0x39, 0xc0, 0x06, 0x0f, 0xf4, 0x46, 0x6f, 0xc3, 0x7c, 0x83, 0x58, + 0x55, 0xad, 0xd2, 0x20, 0x5b, 0xc4, 0xd5, 0x89, 0xe5, 0x15, 0x32, 0x1c, 0x70, 0xb1, 0xdd, 0x2a, + 0xcd, 0xaf, 0x27, 0x55, 0xb8, 0xdb, 0x56, 0xbe, 0x0b, 0xcb, 0xef, 0x37, 0xec, 0xbd, 0xcb, 0x06, + 0xf5, 0x0c, 0xab, 0xe6, 0x1b, 0xb4, 0x4e, 0xdc, 0x0d, 0xe2, 0xd5, 0xed, 0x2a, 0x7a, 0x17, 0xb2, + 0xde, 0xbe, 0x43, 0xf8, 0xf9, 0xf2, 0xea, 0xf1, 0xc7, 0xad, 0xd2, 0x58, 0xbb, 0x55, 0xca, 0xde, + 0xda, 0x77, 0xc8, 0xf3, 0x56, 0xe9, 0xc8, 0x00, 0x37, 0xa6, 0xc6, 0xdc, 0x51, 0xfe, 0x3a, 0x03, + 0xc0, 0xac, 0xb6, 0x79, 0xe2, 0xd0, 0x7d, 0x98, 0x62, 0x8f, 0x55, 0xd5, 0x3c, 0x8d, 0x63, 0x4e, + 0x9f, 0x39, 0xa5, 0xc4, 0x95, 0x12, 0xe5, 0x5c, 0x71, 0x76, 0x6b, 0x4c, 0x40, 0x15, 0x66, 0xad, + 0x34, 0x4f, 0x2b, 0x37, 0x2a, 0x0f, 0x88, 0xee, 0x6d, 0x10, 0x4f, 0x53, 0x91, 0x38, 0x05, 0xc4, + 0x32, 0x1c, 0xa1, 0xa2, 0x2d, 0xc8, 0x52, 0x87, 0xe8, 0x3c, 0x01, 0xd3, 0x67, 0x14, 0x65, 0x78, + 0x1d, 0x2a, 0xf1, 0xd9, 0xb6, 0x1d, 0xa2, 0xab, 0x33, 0xe1, 0x0d, 0xd9, 0x17, 0xe6, 0x48, 0xe8, + 0x0e, 0x4c, 0x50, 0x4f, 0xf3, 0x7c, 0x5a, 0x18, 0xef, 0x39, 0x71, 0x1a, 0x26, 0xf7, 0x53, 0xe7, + 0x04, 0xea, 0x44, 0xf0, 0x8d, 0x05, 0x9e, 0xfc, 0x34, 0x03, 0x8b, 0xb1, 0xf1, 0x9a, 0x6d, 0x55, + 0x0d, 0x5e, 0x29, 0x6f, 0x25, 0xb2, 0x7e, 0xac, 0x2b, 0xeb, 0xcb, 0x7d, 0x5c, 0xe2, 0x8c, 0xa3, + 0x37, 0xa2, 0xe3, 0x66, 0xb8, 0xfb, 0xd1, 0x64, 0xf0, 0xe7, 0xad, 0xd2, 0x7c, 0xe4, 0x96, 0x3c, + 0x0f, 0x6a, 0x02, 0x6a, 0x68, 0xd4, 0xbb, 0xe5, 0x6a, 0x16, 0x0d, 0x60, 0x0d, 0x93, 0x88, 0x5b, + 0xbf, 0x36, 0xda, 0x3b, 0x31, 0x0f, 0x75, 0x45, 0x84, 0x44, 0xeb, 0x3d, 0x68, 0xb8, 0x4f, 0x04, + 0xf4, 0x7f, 0x98, 0x70, 0x89, 0x46, 0x6d, 0xab, 0x90, 
0xe5, 0x47, 0x8e, 0xf2, 0x85, 0xb9, 0x14, + 0x0b, 0x2d, 0x7a, 0x15, 0x26, 0x4d, 0x42, 0xa9, 0x56, 0x23, 0x85, 0x1c, 0x37, 0x9c, 0x17, 0x86, + 0x93, 0x1b, 0x81, 0x18, 0x87, 0x7a, 0xf9, 0x27, 0x09, 0xe6, 0xe2, 0x3c, 0xad, 0x1b, 0xd4, 0x43, + 0xf7, 0x7a, 0x6a, 0x4f, 0x19, 0xed, 0x4e, 0xcc, 0x9b, 0x57, 0xde, 0x82, 0x08, 0x37, 0x15, 0x4a, + 0x3a, 0xea, 0xee, 0x06, 0xe4, 0x0c, 0x8f, 0x98, 0x2c, 0xeb, 0xe3, 0x5d, 0xe9, 0x4a, 0x29, 0x12, + 0x75, 0x56, 0xc0, 0xe6, 0xae, 0x31, 0x00, 0x1c, 0xe0, 0xc8, 0x7f, 0x8c, 0x77, 0xde, 0x80, 0xd5, + 0x23, 0xfa, 0x4e, 0x82, 0x15, 0x67, 0xe0, 0x80, 0x11, 0x97, 0x5a, 0x4b, 0x8b, 0x3c, 0x78, 0x44, + 0x61, 0xb2, 0x43, 0xd8, 0x5c, 0x21, 0xaa, 0x2c, 0x8e, 0xb4, 0x32, 0xc4, 0x78, 0xc8, 0x51, 0xd0, + 0x87, 0x80, 0x4c, 0xcd, 0x63, 0x19, 0xad, 0x6d, 0xb9, 0x44, 0x27, 0x55, 0x86, 0x2a, 0x86, 0x52, + 0x54, 0x1d, 0x1b, 0x3d, 0x16, 0xb8, 0x8f, 0x17, 0xfa, 0x52, 0x82, 0xc5, 0x6a, 0xef, 0x90, 0x11, + 0x75, 0x79, 0x61, 0x94, 0x44, 0xf7, 0x99, 0x51, 0xea, 0x72, 0xbb, 0x55, 0x5a, 0xec, 0xa3, 0xc0, + 0xfd, 0x82, 0xa1, 0x7b, 0x90, 0x73, 0xfd, 0x06, 0xa1, 0x85, 0x2c, 0x7f, 0xde, 0xd4, 0xa8, 0x5b, + 0x76, 0xc3, 0xd0, 0xf7, 0x31, 0x73, 0xf9, 0xd8, 0xf0, 0xea, 0xdb, 0x3e, 0x9f, 0x55, 0x34, 0x7e, + 0x6b, 0xae, 0xc2, 0x01, 0xa8, 0xfc, 0x08, 0x16, 0xba, 0x87, 0x06, 0xaa, 0x01, 0xe8, 0x61, 0x9f, + 0xb2, 0x05, 0xc1, 0xc2, 0x9e, 0x1d, 0xbd, 0xaa, 0xa2, 0x1e, 0x8f, 0xe7, 0x65, 0x24, 0xa2, 0xb8, + 0x03, 0x5a, 0x3e, 0x05, 0x33, 0x57, 0x5d, 0xdb, 0x77, 0xc4, 0x19, 0xd1, 0x2a, 0x64, 0x2d, 0xcd, + 0x0c, 0xa7, 0x4f, 0x34, 0x11, 0x37, 0x35, 0x93, 0x60, 0xae, 0x91, 0xbf, 0x95, 0x60, 0x76, 0xdd, + 0x30, 0x0d, 0x0f, 0x13, 0xea, 0xd8, 0x16, 0x25, 0xe8, 0x7c, 0x62, 0x62, 0x1d, 0xed, 0x9a, 0x58, + 0x87, 0x12, 0xc6, 0x1d, 0xb3, 0xea, 0x13, 0x98, 0x7c, 0xe8, 0x13, 0xdf, 0xb0, 0x6a, 0x62, 0x5e, + 0x9f, 0x4b, 0xbb, 0xe0, 0xcd, 0xc0, 0x3c, 0x51, 0x6d, 0xea, 0x34, 0x1b, 0x01, 0x42, 0x83, 0x43, + 0x44, 0xf9, 0xef, 0x0c, 0x1c, 0xe5, 0x81, 0x49, 0x75, 0xc8, 0x56, 0xbe, 0x97, 0xba, 0x95, 
0x57, + 0xc5, 0x6d, 0x0e, 0xb2, 0x99, 0x1f, 0xc0, 0x6c, 0xa3, 0xf3, 0xee, 0xe2, 0x9a, 0x27, 0xd3, 0xae, + 0x99, 0x48, 0x98, 0xba, 0x24, 0x4e, 0x90, 0x4c, 0x3a, 0x4e, 0x42, 0xf7, 0x63, 0x01, 0xe3, 0xa3, + 0xb3, 0x00, 0x74, 0x03, 0x96, 0x2a, 0xb6, 0xeb, 0xda, 0x7b, 0x86, 0x55, 0xe3, 0x71, 0x42, 0x90, + 0x2c, 0x07, 0xf9, 0x4f, 0xbb, 0x55, 0x5a, 0x52, 0xfb, 0x19, 0xe0, 0xfe, 0x7e, 0xf2, 0x1e, 0x2c, + 0x6d, 0xb2, 0x99, 0x42, 0x6d, 0xdf, 0xd5, 0x49, 0xdc, 0x10, 0xa8, 0x04, 0xb9, 0x26, 0x71, 0x2b, + 0x41, 0x51, 0xe7, 0xd5, 0x3c, 0x6b, 0x87, 0x8f, 0x98, 0x00, 0x07, 0x72, 0x76, 0x13, 0x2b, 0xf6, + 0xbc, 0x8d, 0xd7, 0x69, 0x61, 0x82, 0x9b, 0xf2, 0x9b, 0x6c, 0x26, 0x55, 0xb8, 0xdb, 0x56, 0x6e, + 0x65, 0x60, 0x79, 0x40, 0xff, 0xa1, 0xdb, 0x30, 0x45, 0xc5, 0xdf, 0xa2, 0xa7, 0x8e, 0xa5, 0xbd, + 0x85, 0xf0, 0x8d, 0xa7, 0x7f, 0x08, 0x86, 0x23, 0x28, 0x64, 0xc3, 0xac, 0x2b, 0x8e, 0xc0, 0x63, + 0x8a, 0x2d, 0x70, 0x26, 0x0d, 0xbb, 0x37, 0x3b, 0xf1, 0x63, 0xe3, 0x4e, 0x40, 0x9c, 0xc4, 0x47, + 0x8f, 0x60, 0xa1, 0xe3, 0xda, 0x41, 0xcc, 0x71, 0x1e, 0xf3, 0x7c, 0x5a, 0xcc, 0xbe, 0x8f, 0xa2, + 0x16, 0x44, 0xd8, 0x85, 0xcd, 0x2e, 0x58, 0xdc, 0x13, 0x48, 0xfe, 0x25, 0x03, 0x43, 0x16, 0xc3, + 0x4b, 0x20, 0x79, 0xf7, 0x13, 0x24, 0xef, 0x9d, 0x83, 0x6f, 0xbc, 0x81, 0xa4, 0xaf, 0xde, 0x45, + 0xfa, 0xde, 0x7b, 0x81, 0x18, 0xc3, 0x49, 0xe0, 0x9f, 0x19, 0xf8, 0xdf, 0x60, 0xe7, 0x98, 0x14, + 0x5e, 0x4f, 0x8c, 0xd8, 0x0b, 0x5d, 0x23, 0xf6, 0xd8, 0x08, 0x10, 0xff, 0x92, 0xc4, 0x2e, 0x92, + 0xf8, 0xab, 0x04, 0xc5, 0xc1, 0x79, 0x7b, 0x09, 0xa4, 0xf1, 0xd3, 0x24, 0x69, 0x7c, 0xf3, 0xe0, + 0x45, 0x36, 0x80, 0x44, 0x5e, 0x1d, 0x56, 0x5b, 0x11, 0xdd, 0x1b, 0x61, 0xe5, 0x7f, 0x9f, 0x19, + 0x96, 0x2a, 0xce, 0x4e, 0x53, 0x7e, 0xb5, 0x24, 0xbc, 0xaf, 0x58, 0x6c, 0xf5, 0x98, 0x6c, 0x7b, + 0x04, 0x05, 0x59, 0x87, 0xc9, 0x46, 0xb0, 0xab, 0x45, 0x53, 0x5f, 0x1a, 0x69, 0x45, 0x0e, 0x5b, + 0xed, 0x01, 0x2d, 0x10, 0x66, 0x38, 0x84, 0x47, 0x55, 0x98, 0x20, 0xfc, 0xa7, 0xfa, 0xa8, 0x9d, + 0x9d, 0xf6, 0xc3, 0x5e, 0x05, 
0x56, 0x85, 0x81, 0x15, 0x16, 0xd8, 0xf2, 0x37, 0x12, 0xac, 0xa6, + 0x8d, 0x04, 0xb4, 0xd7, 0x87, 0xe2, 0xbd, 0x00, 0x7d, 0x1f, 0x9d, 0xf2, 0xfd, 0x20, 0xc1, 0xe1, + 0x7e, 0x4c, 0x8a, 0x35, 0x19, 0xa3, 0x4f, 0x11, 0xf7, 0x89, 0x9a, 0xec, 0x26, 0x97, 0x62, 0xa1, + 0x45, 0x27, 0x60, 0xaa, 0xae, 0x59, 0xd5, 0x6d, 0xe3, 0xf3, 0x90, 0xd5, 0x47, 0x65, 0xfe, 0x81, + 0x90, 0xe3, 0xc8, 0x02, 0x5d, 0x86, 0x05, 0xee, 0xb7, 0x4e, 0xac, 0x9a, 0x57, 0xe7, 0x2f, 0x22, + 0xa8, 0x49, 0xb4, 0x75, 0x6e, 0x76, 0xe9, 0x71, 0x8f, 0x87, 0xfc, 0x97, 0x04, 0xe8, 0x20, 0x6c, + 0xe2, 0x38, 0xe4, 0x35, 0xc7, 0xe0, 0x14, 0x37, 0x68, 0xb4, 0xbc, 0x3a, 0xdb, 0x6e, 0x95, 0xf2, + 0x97, 0xb6, 0xae, 0x05, 0x42, 0x1c, 0xeb, 0x99, 0x71, 0xb8, 0x68, 0x83, 0x85, 0x2a, 0x8c, 0xc3, + 0xc0, 0x14, 0xc7, 0x7a, 0x74, 0x11, 0x66, 0xf4, 0x86, 0x4f, 0x3d, 0xe2, 0x6e, 0xeb, 0xb6, 0x43, + 0xf8, 0x60, 0x9a, 0x52, 0x0f, 0x8b, 0x3b, 0xcd, 0xac, 0x75, 0xe8, 0x70, 0xc2, 0x12, 0x29, 0x00, + 0xac, 0xad, 0xa8, 0xa3, 0xb1, 0x38, 0x39, 0x1e, 0x67, 0x8e, 0x3d, 0xd8, 0x66, 0x24, 0xc5, 0x1d, + 0x16, 0xf2, 0x03, 0x58, 0xda, 0x26, 0x6e, 0xd3, 0xd0, 0xc9, 0x25, 0x5d, 0xb7, 0x7d, 0xcb, 0x0b, + 0xc9, 0x7a, 0x19, 0xf2, 0x91, 0x99, 0xe8, 0xbc, 0x43, 0x22, 0x7e, 0x3e, 0xc2, 0xc2, 0xb1, 0x4d, + 0xd4, 0xea, 0x99, 0x81, 0xad, 0xfe, 0x73, 0x06, 0x26, 0x63, 0xf8, 0xec, 0xae, 0x61, 0x55, 0x05, + 0xf2, 0x91, 0xd0, 0xfa, 0xba, 0x61, 0x55, 0x9f, 0xb7, 0x4a, 0xd3, 0xc2, 0x8c, 0x7d, 0x62, 0x6e, + 0x88, 0xae, 0x41, 0xd6, 0xa7, 0xc4, 0x15, 0x4d, 0x7c, 0x3c, 0xad, 0x98, 0x6f, 0x53, 0xe2, 0x86, + 0xfc, 0x6a, 0x8a, 0x21, 0x33, 0x01, 0xe6, 0x10, 0x68, 0x03, 0x72, 0x35, 0xf6, 0x28, 0xa2, 0x4f, + 0x4f, 0xa4, 0x61, 0x75, 0xfe, 0x88, 0x09, 0xca, 0x80, 0x4b, 0x70, 0x80, 0x82, 0x1e, 0xc2, 0x1c, + 0x4d, 0xa4, 0x90, 0x3f, 0xd7, 0x08, 0x7c, 0xa9, 0x6f, 0xe2, 0x55, 0xd4, 0x6e, 0x95, 0xe6, 0x92, + 0x2a, 0xdc, 0x15, 0x40, 0x2e, 0xc3, 0x74, 0xc7, 0x05, 0xd3, 0xa7, 0xac, 0x7a, 0xf9, 0xf1, 0xb3, + 0xe2, 0xd8, 0x93, 0x67, 0xc5, 0xb1, 0xa7, 0xcf, 0x8a, 0x63, 0x5f, 
0xb4, 0x8b, 0xd2, 0xe3, 0x76, + 0x51, 0x7a, 0xd2, 0x2e, 0x4a, 0x4f, 0xdb, 0x45, 0xe9, 0xb7, 0x76, 0x51, 0xfa, 0xea, 0xf7, 0xe2, + 0xd8, 0xdd, 0xe2, 0xf0, 0xff, 0xc5, 0xfe, 0x13, 0x00, 0x00, 0xff, 0xff, 0x1d, 0xc5, 0x22, 0x46, + 0xc5, 0x15, 0x00, 0x00, +} + +func (m *ExemptPriorityLevelConfiguration) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *ExemptPriorityLevelConfiguration) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *ExemptPriorityLevelConfiguration) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if m.LendablePercent != nil { + i = encodeVarintGenerated(dAtA, i, uint64(*m.LendablePercent)) + i-- + dAtA[i] = 0x10 + } + if m.NominalConcurrencyShares != nil { + i = encodeVarintGenerated(dAtA, i, uint64(*m.NominalConcurrencyShares)) + i-- + dAtA[i] = 0x8 + } + return len(dAtA) - i, nil } func (m *FlowDistinguisherMethod) Marshal() (dAtA []byte, err error) { @@ -1490,6 +1556,18 @@ func (m *PriorityLevelConfigurationSpec) MarshalToSizedBuffer(dAtA []byte) (int, _ = i var l int _ = l + if m.Exempt != nil { + { + size, err := m.Exempt.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintGenerated(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x1a + } if m.Limited != nil { { size, err := m.Limited.MarshalToSizedBuffer(dAtA[:i]) @@ -1782,6 +1860,21 @@ func encodeVarintGenerated(dAtA []byte, offset int, v uint64) int { dAtA[offset] = uint8(v) return base } +func (m *ExemptPriorityLevelConfiguration) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + if m.NominalConcurrencyShares != nil { + n += 1 + sovGenerated(uint64(*m.NominalConcurrencyShares)) + } + if m.LendablePercent != nil { + n += 1 + 
sovGenerated(uint64(*m.LendablePercent)) + } + return n +} + func (m *FlowDistinguisherMethod) Size() (n int) { if m == nil { return 0 @@ -2047,6 +2140,10 @@ func (m *PriorityLevelConfigurationSpec) Size() (n int) { l = m.Limited.Size() n += 1 + l + sovGenerated(uint64(l)) } + if m.Exempt != nil { + l = m.Exempt.Size() + n += 1 + l + sovGenerated(uint64(l)) + } return n } @@ -2164,6 +2261,17 @@ func sovGenerated(x uint64) (n int) { func sozGenerated(x uint64) (n int) { return sovGenerated(uint64((x << 1) ^ uint64((int64(x) >> 63)))) } +func (this *ExemptPriorityLevelConfiguration) String() string { + if this == nil { + return "nil" + } + s := strings.Join([]string{`&ExemptPriorityLevelConfiguration{`, + `NominalConcurrencyShares:` + valueToStringGenerated(this.NominalConcurrencyShares) + `,`, + `LendablePercent:` + valueToStringGenerated(this.LendablePercent) + `,`, + `}`, + }, "") + return s +} func (this *FlowDistinguisherMethod) String() string { if this == nil { return "nil" @@ -2380,6 +2488,7 @@ func (this *PriorityLevelConfigurationSpec) String() string { s := strings.Join([]string{`&PriorityLevelConfigurationSpec{`, `Type:` + fmt.Sprintf("%v", this.Type) + `,`, `Limited:` + strings.Replace(this.Limited.String(), "LimitedPriorityLevelConfiguration", "LimitedPriorityLevelConfiguration", 1) + `,`, + `Exempt:` + strings.Replace(this.Exempt.String(), "ExemptPriorityLevelConfiguration", "ExemptPriorityLevelConfiguration", 1) + `,`, `}`, }, "") return s @@ -2467,6 +2576,96 @@ func valueToStringGenerated(v interface{}) string { pv := reflect.Indirect(rv).Interface() return fmt.Sprintf("*%v", pv) } +func (m *ExemptPriorityLevelConfiguration) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowGenerated + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if 
b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: ExemptPriorityLevelConfiguration: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: ExemptPriorityLevelConfiguration: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field NominalConcurrencyShares", wireType) + } + var v int32 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowGenerated + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + v |= int32(b&0x7F) << shift + if b < 0x80 { + break + } + } + m.NominalConcurrencyShares = &v + case 2: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field LendablePercent", wireType) + } + var v int32 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowGenerated + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + v |= int32(b&0x7F) << shift + if b < 0x80 { + break + } + } + m.LendablePercent = &v + default: + iNdEx = preIndex + skippy, err := skipGenerated(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthGenerated + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} func (m *FlowDistinguisherMethod) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 @@ -4546,6 +4745,42 @@ func (m *PriorityLevelConfigurationSpec) Unmarshal(dAtA []byte) error { return err } iNdEx = postIndex + case 3: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Exempt", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowGenerated + } + if iNdEx >= l { + return 
io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthGenerated + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthGenerated + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if m.Exempt == nil { + m.Exempt = &ExemptPriorityLevelConfiguration{} + } + if err := m.Exempt.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipGenerated(dAtA[iNdEx:]) diff --git a/cluster-autoscaler/vendor/k8s.io/api/flowcontrol/v1beta3/generated.proto b/cluster-autoscaler/vendor/k8s.io/api/flowcontrol/v1beta3/generated.proto index adf9e8682c41..eda0f7829e78 100644 --- a/cluster-autoscaler/vendor/k8s.io/api/flowcontrol/v1beta3/generated.proto +++ b/cluster-autoscaler/vendor/k8s.io/api/flowcontrol/v1beta3/generated.proto @@ -28,6 +28,40 @@ import "k8s.io/apimachinery/pkg/runtime/schema/generated.proto"; // Package-wide variables from generator "generated". option go_package = "k8s.io/api/flowcontrol/v1beta3"; +// ExemptPriorityLevelConfiguration describes the configurable aspects +// of the handling of exempt requests. +// In the mandatory exempt configuration object the values in the fields +// here can be modified by authorized users, unlike the rest of the `spec`. +message ExemptPriorityLevelConfiguration { + // `nominalConcurrencyShares` (NCS) contributes to the computation of the + // NominalConcurrencyLimit (NominalCL) of this level. + // This is the number of execution seats nominally reserved for this priority level. + // This DOES NOT limit the dispatching from this priority level + // but affects the other priority levels through the borrowing mechanism. 
+ // The server's concurrency limit (ServerCL) is divided among all the + // priority levels in proportion to their NCS values: + // + // NominalCL(i) = ceil( ServerCL * NCS(i) / sum_ncs ) + // sum_ncs = sum[priority level k] NCS(k) + // + // Bigger numbers mean a larger nominal concurrency limit, + // at the expense of every other priority level. + // This field has a default value of zero. + // +optional + optional int32 nominalConcurrencyShares = 1; + + // `lendablePercent` prescribes the fraction of the level's NominalCL that + // can be borrowed by other priority levels. This value of this + // field must be between 0 and 100, inclusive, and it defaults to 0. + // The number of seats that other levels can borrow from this level, known + // as this level's LendableConcurrencyLimit (LendableCL), is defined as follows. + // + // LendableCL(i) = round( NominalCL(i) * lendablePercent(i)/100.0 ) + // + // +optional + optional int32 lendablePercent = 2; +} + // FlowDistinguisherMethod specifies the method of a flow distinguisher. message FlowDistinguisherMethod { // `type` is the type of flow distinguisher method @@ -168,10 +202,10 @@ message LimitedPriorityLevelConfiguration { // Limited priority levels in proportion to their NCS values: // // NominalCL(i) = ceil( ServerCL * NCS(i) / sum_ncs ) - // sum_ncs = sum[limited priority level k] NCS(k) + // sum_ncs = sum[priority level k] NCS(k) // // Bigger numbers mean a larger nominal concurrency limit, - // at the expense of every other Limited priority level. + // at the expense of every other priority level. // This field has a default value of 30. // +optional optional int32 nominalConcurrencyShares = 1; @@ -334,6 +368,14 @@ message PriorityLevelConfigurationSpec { // This field must be non-empty if and only if `type` is `"Limited"`. // +optional optional LimitedPriorityLevelConfiguration limited = 2; + + // `exempt` specifies how requests are handled for an exempt priority level. 
+ // This field MUST be empty if `type` is `"Limited"`. + // This field MAY be non-empty if `type` is `"Exempt"`. + // If empty and `type` is `"Exempt"` then the default values + // for `ExemptPriorityLevelConfiguration` apply. + // +optional + optional ExemptPriorityLevelConfiguration exempt = 3; } // PriorityLevelConfigurationStatus represents the current state of a "request-priority". diff --git a/cluster-autoscaler/vendor/k8s.io/api/flowcontrol/v1beta3/types.go b/cluster-autoscaler/vendor/k8s.io/api/flowcontrol/v1beta3/types.go index 2baf2dc39eb2..810941557b27 100644 --- a/cluster-autoscaler/vendor/k8s.io/api/flowcontrol/v1beta3/types.go +++ b/cluster-autoscaler/vendor/k8s.io/api/flowcontrol/v1beta3/types.go @@ -77,7 +77,9 @@ const ( // is a boolean false or has an invalid boolean representation // (if the cluster operator sets it to 'false' it will be stomped) // - any changes to the spec made by the cluster operator will be - // stomped. + // stomped, except for changes to the `nominalConcurrencyShares` + // and `lendablePercent` fields of the PriorityLevelConfiguration + // named "exempt". // // The kube-apiserver will apply updates on the suggested configuration if: // - the cluster operator has enabled auto-update by setting the annotation @@ -433,6 +435,14 @@ type PriorityLevelConfigurationSpec struct { // This field must be non-empty if and only if `type` is `"Limited"`. // +optional Limited *LimitedPriorityLevelConfiguration `json:"limited,omitempty" protobuf:"bytes,2,opt,name=limited"` + + // `exempt` specifies how requests are handled for an exempt priority level. + // This field MUST be empty if `type` is `"Limited"`. + // This field MAY be non-empty if `type` is `"Exempt"`. + // If empty and `type` is `"Exempt"` then the default values + // for `ExemptPriorityLevelConfiguration` apply. 
+ // +optional + Exempt *ExemptPriorityLevelConfiguration `json:"exempt,omitempty" protobuf:"bytes,3,opt,name=exempt"` } // PriorityLevelEnablement indicates whether limits on execution are enabled for the priority level @@ -462,10 +472,10 @@ type LimitedPriorityLevelConfiguration struct { // Limited priority levels in proportion to their NCS values: // // NominalCL(i) = ceil( ServerCL * NCS(i) / sum_ncs ) - // sum_ncs = sum[limited priority level k] NCS(k) + // sum_ncs = sum[priority level k] NCS(k) // // Bigger numbers mean a larger nominal concurrency limit, - // at the expense of every other Limited priority level. + // at the expense of every other priority level. // This field has a default value of 30. // +optional NominalConcurrencyShares int32 `json:"nominalConcurrencyShares" protobuf:"varint,1,opt,name=nominalConcurrencyShares"` @@ -503,6 +513,43 @@ type LimitedPriorityLevelConfiguration struct { BorrowingLimitPercent *int32 `json:"borrowingLimitPercent,omitempty" protobuf:"varint,4,opt,name=borrowingLimitPercent"` } +// ExemptPriorityLevelConfiguration describes the configurable aspects +// of the handling of exempt requests. +// In the mandatory exempt configuration object the values in the fields +// here can be modified by authorized users, unlike the rest of the `spec`. +type ExemptPriorityLevelConfiguration struct { + // `nominalConcurrencyShares` (NCS) contributes to the computation of the + // NominalConcurrencyLimit (NominalCL) of this level. + // This is the number of execution seats nominally reserved for this priority level. + // This DOES NOT limit the dispatching from this priority level + // but affects the other priority levels through the borrowing mechanism. 
+ // The server's concurrency limit (ServerCL) is divided among all the + // priority levels in proportion to their NCS values: + // + // NominalCL(i) = ceil( ServerCL * NCS(i) / sum_ncs ) + // sum_ncs = sum[priority level k] NCS(k) + // + // Bigger numbers mean a larger nominal concurrency limit, + // at the expense of every other priority level. + // This field has a default value of zero. + // +optional + NominalConcurrencyShares *int32 `json:"nominalConcurrencyShares,omitempty" protobuf:"varint,1,opt,name=nominalConcurrencyShares"` + // `lendablePercent` prescribes the fraction of the level's NominalCL that + // can be borrowed by other priority levels. This value of this + // field must be between 0 and 100, inclusive, and it defaults to 0. + // The number of seats that other levels can borrow from this level, known + // as this level's LendableConcurrencyLimit (LendableCL), is defined as follows. + // + // LendableCL(i) = round( NominalCL(i) * lendablePercent(i)/100.0 ) + // + // +optional + LendablePercent *int32 `json:"lendablePercent,omitempty" protobuf:"varint,2,opt,name=lendablePercent"` + // The `BorrowingCL` of an Exempt priority level is implicitly `ServerCL`. + // In other words, an exempt priority level + // has no meaningful limit on how much it borrows. + // There is no explicit representation of that here. +} + // LimitResponse defines how to handle requests that can not be executed right now. 
// +union type LimitResponse struct { diff --git a/cluster-autoscaler/vendor/k8s.io/api/flowcontrol/v1beta3/types_swagger_doc_generated.go b/cluster-autoscaler/vendor/k8s.io/api/flowcontrol/v1beta3/types_swagger_doc_generated.go index 728252c0cf2c..fa76112a724a 100644 --- a/cluster-autoscaler/vendor/k8s.io/api/flowcontrol/v1beta3/types_swagger_doc_generated.go +++ b/cluster-autoscaler/vendor/k8s.io/api/flowcontrol/v1beta3/types_swagger_doc_generated.go @@ -27,6 +27,16 @@ package v1beta3 // Those methods can be generated by using hack/update-codegen.sh // AUTO-GENERATED FUNCTIONS START HERE. DO NOT EDIT. +var map_ExemptPriorityLevelConfiguration = map[string]string{ + "": "ExemptPriorityLevelConfiguration describes the configurable aspects of the handling of exempt requests. In the mandatory exempt configuration object the values in the fields here can be modified by authorized users, unlike the rest of the `spec`.", + "nominalConcurrencyShares": "`nominalConcurrencyShares` (NCS) contributes to the computation of the NominalConcurrencyLimit (NominalCL) of this level. This is the number of execution seats nominally reserved for this priority level. This DOES NOT limit the dispatching from this priority level but affects the other priority levels through the borrowing mechanism. The server's concurrency limit (ServerCL) is divided among all the priority levels in proportion to their NCS values:\n\nNominalCL(i) = ceil( ServerCL * NCS(i) / sum_ncs ) sum_ncs = sum[priority level k] NCS(k)\n\nBigger numbers mean a larger nominal concurrency limit, at the expense of every other priority level. This field has a default value of zero.", + "lendablePercent": "`lendablePercent` prescribes the fraction of the level's NominalCL that can be borrowed by other priority levels. This value of this field must be between 0 and 100, inclusive, and it defaults to 0. 
The number of seats that other levels can borrow from this level, known as this level's LendableConcurrencyLimit (LendableCL), is defined as follows.\n\nLendableCL(i) = round( NominalCL(i) * lendablePercent(i)/100.0 )", +} + +func (ExemptPriorityLevelConfiguration) SwaggerDoc() map[string]string { + return map_ExemptPriorityLevelConfiguration +} + var map_FlowDistinguisherMethod = map[string]string{ "": "FlowDistinguisherMethod specifies the method of a flow distinguisher.", "type": "`type` is the type of flow distinguisher method The supported types are \"ByUser\" and \"ByNamespace\". Required.", @@ -112,7 +122,7 @@ func (LimitResponse) SwaggerDoc() map[string]string { var map_LimitedPriorityLevelConfiguration = map[string]string{ "": "LimitedPriorityLevelConfiguration specifies how to handle requests that are subject to limits. It addresses two issues:\n - How are requests for this priority level limited?\n - What should be done with requests that exceed the limit?", - "nominalConcurrencyShares": "`nominalConcurrencyShares` (NCS) contributes to the computation of the NominalConcurrencyLimit (NominalCL) of this level. This is the number of execution seats available at this priority level. This is used both for requests dispatched from this priority level as well as requests dispatched from other priority levels borrowing seats from this level. The server's concurrency limit (ServerCL) is divided among the Limited priority levels in proportion to their NCS values:\n\nNominalCL(i) = ceil( ServerCL * NCS(i) / sum_ncs ) sum_ncs = sum[limited priority level k] NCS(k)\n\nBigger numbers mean a larger nominal concurrency limit, at the expense of every other Limited priority level. This field has a default value of 30.", + "nominalConcurrencyShares": "`nominalConcurrencyShares` (NCS) contributes to the computation of the NominalConcurrencyLimit (NominalCL) of this level. This is the number of execution seats available at this priority level. 
This is used both for requests dispatched from this priority level as well as requests dispatched from other priority levels borrowing seats from this level. The server's concurrency limit (ServerCL) is divided among the Limited priority levels in proportion to their NCS values:\n\nNominalCL(i) = ceil( ServerCL * NCS(i) / sum_ncs ) sum_ncs = sum[priority level k] NCS(k)\n\nBigger numbers mean a larger nominal concurrency limit, at the expense of every other priority level. This field has a default value of 30.", "limitResponse": "`limitResponse` indicates what to do with requests that can not be executed right now", "lendablePercent": "`lendablePercent` prescribes the fraction of the level's NominalCL that can be borrowed by other priority levels. The value of this field must be between 0 and 100, inclusive, and it defaults to 0. The number of seats that other levels can borrow from this level, known as this level's LendableConcurrencyLimit (LendableCL), is defined as follows.\n\nLendableCL(i) = round( NominalCL(i) * lendablePercent(i)/100.0 )", "borrowingLimitPercent": "`borrowingLimitPercent`, if present, configures a limit on how many seats this priority level can borrow from other priority levels. The limit is known as this level's BorrowingConcurrencyLimit (BorrowingCL) and is a limit on the total number of seats that this level may borrow at any one time. This field holds the ratio of that limit to the level's nominal concurrency limit. When this field is non-nil, it must hold a non-negative integer and the limit is calculated as follows.\n\nBorrowingCL(i) = round( NominalCL(i) * borrowingLimitPercent(i)/100.0 )\n\nThe value of this field can be more than 100, implying that this priority level can borrow a number of seats that is greater than its own nominal concurrency limit (NominalCL). 
When this field is left `nil`, the limit is effectively infinite.", @@ -190,6 +200,7 @@ var map_PriorityLevelConfigurationSpec = map[string]string{ "": "PriorityLevelConfigurationSpec specifies the configuration of a priority level.", "type": "`type` indicates whether this priority level is subject to limitation on request execution. A value of `\"Exempt\"` means that requests of this priority level are not subject to a limit (and thus are never queued) and do not detract from the capacity made available to other priority levels. A value of `\"Limited\"` means that (a) requests of this priority level _are_ subject to limits and (b) some of the server's limited capacity is made available exclusively to this priority level. Required.", "limited": "`limited` specifies how requests are handled for a Limited priority level. This field must be non-empty if and only if `type` is `\"Limited\"`.", + "exempt": "`exempt` specifies how requests are handled for an exempt priority level. This field MUST be empty if `type` is `\"Limited\"`. This field MAY be non-empty if `type` is `\"Exempt\"`. If empty and `type` is `\"Exempt\"` then the default values for `ExemptPriorityLevelConfiguration` apply.", } func (PriorityLevelConfigurationSpec) SwaggerDoc() map[string]string { diff --git a/cluster-autoscaler/vendor/k8s.io/api/flowcontrol/v1beta3/zz_generated.deepcopy.go b/cluster-autoscaler/vendor/k8s.io/api/flowcontrol/v1beta3/zz_generated.deepcopy.go index ec02d2a9c4cd..09fefa20aa82 100644 --- a/cluster-autoscaler/vendor/k8s.io/api/flowcontrol/v1beta3/zz_generated.deepcopy.go +++ b/cluster-autoscaler/vendor/k8s.io/api/flowcontrol/v1beta3/zz_generated.deepcopy.go @@ -25,6 +25,32 @@ import ( runtime "k8s.io/apimachinery/pkg/runtime" ) +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. 
+func (in *ExemptPriorityLevelConfiguration) DeepCopyInto(out *ExemptPriorityLevelConfiguration) { + *out = *in + if in.NominalConcurrencyShares != nil { + in, out := &in.NominalConcurrencyShares, &out.NominalConcurrencyShares + *out = new(int32) + **out = **in + } + if in.LendablePercent != nil { + in, out := &in.LendablePercent, &out.LendablePercent + *out = new(int32) + **out = **in + } + return +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ExemptPriorityLevelConfiguration. +func (in *ExemptPriorityLevelConfiguration) DeepCopy() *ExemptPriorityLevelConfiguration { + if in == nil { + return nil + } + out := new(ExemptPriorityLevelConfiguration) + in.DeepCopyInto(out) + return out +} + // DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. func (in *FlowDistinguisherMethod) DeepCopyInto(out *FlowDistinguisherMethod) { *out = *in @@ -400,6 +426,11 @@ func (in *PriorityLevelConfigurationSpec) DeepCopyInto(out *PriorityLevelConfigu *out = new(LimitedPriorityLevelConfiguration) (*in).DeepCopyInto(*out) } + if in.Exempt != nil { + in, out := &in.Exempt, &out.Exempt + *out = new(ExemptPriorityLevelConfiguration) + (*in).DeepCopyInto(*out) + } return } diff --git a/cluster-autoscaler/vendor/k8s.io/api/networking/v1/generated.pb.go b/cluster-autoscaler/vendor/k8s.io/api/networking/v1/generated.pb.go index e9566d57e277..daeaea5dce7c 100644 --- a/cluster-autoscaler/vendor/k8s.io/api/networking/v1/generated.pb.go +++ b/cluster-autoscaler/vendor/k8s.io/api/networking/v1/generated.pb.go @@ -776,38 +776,10 @@ func (m *NetworkPolicySpec) XXX_DiscardUnknown() { var xxx_messageInfo_NetworkPolicySpec proto.InternalMessageInfo -func (m *NetworkPolicyStatus) Reset() { *m = NetworkPolicyStatus{} } -func (*NetworkPolicyStatus) ProtoMessage() {} -func (*NetworkPolicyStatus) Descriptor() ([]byte, []int) { - return fileDescriptor_1c72867a70a7cc90, []int{26} -} -func (m 
*NetworkPolicyStatus) XXX_Unmarshal(b []byte) error { - return m.Unmarshal(b) -} -func (m *NetworkPolicyStatus) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { - b = b[:cap(b)] - n, err := m.MarshalToSizedBuffer(b) - if err != nil { - return nil, err - } - return b[:n], nil -} -func (m *NetworkPolicyStatus) XXX_Merge(src proto.Message) { - xxx_messageInfo_NetworkPolicyStatus.Merge(m, src) -} -func (m *NetworkPolicyStatus) XXX_Size() int { - return m.Size() -} -func (m *NetworkPolicyStatus) XXX_DiscardUnknown() { - xxx_messageInfo_NetworkPolicyStatus.DiscardUnknown(m) -} - -var xxx_messageInfo_NetworkPolicyStatus proto.InternalMessageInfo - func (m *ServiceBackendPort) Reset() { *m = ServiceBackendPort{} } func (*ServiceBackendPort) ProtoMessage() {} func (*ServiceBackendPort) Descriptor() ([]byte, []int) { - return fileDescriptor_1c72867a70a7cc90, []int{27} + return fileDescriptor_1c72867a70a7cc90, []int{26} } func (m *ServiceBackendPort) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -859,7 +831,6 @@ func init() { proto.RegisterType((*NetworkPolicyPeer)(nil), "k8s.io.api.networking.v1.NetworkPolicyPeer") proto.RegisterType((*NetworkPolicyPort)(nil), "k8s.io.api.networking.v1.NetworkPolicyPort") proto.RegisterType((*NetworkPolicySpec)(nil), "k8s.io.api.networking.v1.NetworkPolicySpec") - proto.RegisterType((*NetworkPolicyStatus)(nil), "k8s.io.api.networking.v1.NetworkPolicyStatus") proto.RegisterType((*ServiceBackendPort)(nil), "k8s.io.api.networking.v1.ServiceBackendPort") } @@ -868,115 +839,112 @@ func init() { } var fileDescriptor_1c72867a70a7cc90 = []byte{ - // 1715 bytes of a gzipped FileDescriptorProto - 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xbc, 0x58, 0x4b, 0x6f, 0x1b, 0x47, - 0x12, 0xd6, 0x50, 0xa2, 0x48, 0x35, 0x25, 0x59, 0x6a, 0xdb, 0x58, 0xae, 0x16, 0x4b, 0x6a, 0x07, - 0x6b, 0x5b, 0xbb, 0xb6, 0xc9, 0xb5, 0x6c, 0x2c, 0x76, 0x2f, 0x49, 0x3c, 0xb2, 0x2c, 0x2b, 0x96, - 0x29, 0xa2, 0xc9, 0x38, 0x48, 0x90, 0x87, 
0x47, 0xc3, 0x16, 0x35, 0xe6, 0x70, 0x7a, 0xd0, 0xd3, - 0x54, 0xac, 0x20, 0x08, 0x72, 0xc9, 0x21, 0xb7, 0xdc, 0x72, 0x0e, 0xf2, 0x0b, 0x82, 0xe4, 0x10, - 0x20, 0x48, 0x8c, 0x5c, 0x02, 0x1f, 0x0d, 0xe4, 0xe2, 0x4b, 0x88, 0x98, 0xf9, 0x17, 0x3a, 0x05, - 0xfd, 0x98, 0x17, 0x1f, 0x22, 0x63, 0x18, 0x3a, 0x49, 0x5d, 0x55, 0xfd, 0x75, 0xbd, 0xab, 0x86, - 0xe0, 0x66, 0xeb, 0x7f, 0x7e, 0xc9, 0x26, 0xe5, 0x56, 0x67, 0x0f, 0x53, 0x17, 0x33, 0xec, 0x97, - 0x0f, 0xb1, 0xdb, 0x20, 0xb4, 0xac, 0x18, 0xa6, 0x67, 0x97, 0x5d, 0xcc, 0x3e, 0x20, 0xb4, 0x65, - 0xbb, 0xcd, 0xf2, 0xe1, 0xb5, 0x72, 0x13, 0xbb, 0x98, 0x9a, 0x0c, 0x37, 0x4a, 0x1e, 0x25, 0x8c, - 0xc0, 0xbc, 0x94, 0x2c, 0x99, 0x9e, 0x5d, 0x8a, 0x24, 0x4b, 0x87, 0xd7, 0x56, 0xae, 0x36, 0x6d, - 0x76, 0xd0, 0xd9, 0x2b, 0x59, 0xa4, 0x5d, 0x6e, 0x92, 0x26, 0x29, 0x8b, 0x0b, 0x7b, 0x9d, 0x7d, - 0x71, 0x12, 0x07, 0xf1, 0x9f, 0x04, 0x5a, 0xd1, 0x63, 0x4f, 0x5a, 0x84, 0xe2, 0x21, 0x8f, 0xad, - 0xdc, 0x88, 0x64, 0xda, 0xa6, 0x75, 0x60, 0xbb, 0x98, 0x1e, 0x95, 0xbd, 0x56, 0x93, 0x13, 0xfc, - 0x72, 0x1b, 0x33, 0x73, 0xd8, 0xad, 0xf2, 0xa8, 0x5b, 0xb4, 0xe3, 0x32, 0xbb, 0x8d, 0x07, 0x2e, - 0xfc, 0x77, 0xdc, 0x05, 0xdf, 0x3a, 0xc0, 0x6d, 0x73, 0xe0, 0xde, 0xf5, 0x51, 0xf7, 0x3a, 0xcc, - 0x76, 0xca, 0xb6, 0xcb, 0x7c, 0x46, 0xfb, 0x2f, 0xe9, 0x3f, 0x6a, 0xe0, 0xcc, 0x9d, 0x7a, 0xbd, - 0xba, 0xed, 0x36, 0x29, 0xf6, 0xfd, 0xaa, 0xc9, 0x0e, 0xe0, 0x2a, 0x98, 0xf1, 0x4c, 0x76, 0x90, - 0xd7, 0x56, 0xb5, 0xb5, 0x39, 0x63, 0xfe, 0x49, 0xb7, 0x38, 0xd5, 0xeb, 0x16, 0x67, 0x38, 0x0f, - 0x09, 0x0e, 0xbc, 0x01, 0xb2, 0xfc, 0x6f, 0xfd, 0xc8, 0xc3, 0xf9, 0x69, 0x21, 0x95, 0xef, 0x75, - 0x8b, 0xd9, 0xaa, 0xa2, 0x1d, 0xc7, 0xfe, 0x47, 0xa1, 0x24, 0xac, 0x81, 0xcc, 0x9e, 0x69, 0xb5, - 0xb0, 0xdb, 0xc8, 0xa7, 0x56, 0xb5, 0xb5, 0xdc, 0xfa, 0x5a, 0x69, 0x54, 0xf8, 0x4a, 0x4a, 0x1f, - 0x43, 0xca, 0x1b, 0x67, 0x94, 0x12, 0x19, 0x45, 0x40, 0x01, 0x92, 0xbe, 0x0f, 0xce, 0xc5, 0xf4, - 0x47, 0x1d, 0x07, 0xdf, 0x37, 0x9d, 0x0e, 0x86, 0x15, 0x90, 0xe6, 0x0f, 0xfb, 
0x79, 0x6d, 0x75, - 0x7a, 0x2d, 0xb7, 0xfe, 0xaf, 0xd1, 0x4f, 0xf5, 0x99, 0x6f, 0x2c, 0xa8, 0xb7, 0xd2, 0xfc, 0xe4, - 0x23, 0x09, 0xa3, 0xef, 0x82, 0xcc, 0x76, 0xd5, 0x70, 0x88, 0xd5, 0xe2, 0xfe, 0xb1, 0xec, 0x06, - 0xed, 0xf7, 0xcf, 0xc6, 0xf6, 0x2d, 0x84, 0x04, 0x07, 0xea, 0x60, 0x16, 0x3f, 0xb2, 0xb0, 0xc7, - 0xf2, 0xa9, 0xd5, 0xe9, 0xb5, 0x39, 0x03, 0xf4, 0xba, 0xc5, 0xd9, 0x4d, 0x41, 0x41, 0x8a, 0xa3, - 0x7f, 0x9a, 0x02, 0x19, 0xf5, 0x2c, 0x7c, 0x00, 0xb2, 0x3c, 0x7d, 0x1a, 0x26, 0x33, 0x05, 0x6a, - 0x6e, 0xfd, 0x3f, 0x31, 0x7d, 0xc3, 0x68, 0x96, 0xbc, 0x56, 0x93, 0x13, 0xfc, 0x12, 0x97, 0xe6, - 0xba, 0xef, 0xee, 0x3d, 0xc4, 0x16, 0xbb, 0x87, 0x99, 0x69, 0x40, 0xa5, 0x07, 0x88, 0x68, 0x28, - 0x44, 0x85, 0x5b, 0x60, 0xc6, 0xf7, 0xb0, 0xa5, 0x1c, 0x7f, 0x61, 0xac, 0xe3, 0x6b, 0x1e, 0xb6, - 0x22, 0xd3, 0xf8, 0x09, 0x09, 0x00, 0xb8, 0x0b, 0x66, 0x7d, 0x66, 0xb2, 0x8e, 0x2f, 0x02, 0x9f, - 0x5b, 0xbf, 0x34, 0x1e, 0x4a, 0x88, 0x1b, 0x8b, 0x0a, 0x6c, 0x56, 0x9e, 0x91, 0x82, 0xd1, 0x7f, - 0xd2, 0xc0, 0x62, 0x32, 0xda, 0xf0, 0x3e, 0xc8, 0xf8, 0x98, 0x1e, 0xda, 0x16, 0xce, 0xcf, 0x88, - 0x47, 0xca, 0xe3, 0x1f, 0x91, 0xf2, 0x41, 0xbe, 0xe4, 0x78, 0xae, 0x28, 0x1a, 0x0a, 0xc0, 0xe0, - 0x9b, 0x20, 0x4b, 0xb1, 0x4f, 0x3a, 0xd4, 0xc2, 0x4a, 0xfb, 0xab, 0x71, 0x60, 0x5e, 0xf7, 0x1c, - 0x92, 0x27, 0x6b, 0x63, 0x87, 0x58, 0xa6, 0x23, 0x5d, 0x89, 0xf0, 0x3e, 0xa6, 0xd8, 0xb5, 0xb0, - 0x31, 0xcf, 0xb3, 0x1c, 0x29, 0x08, 0x14, 0x82, 0xf1, 0x2a, 0x9a, 0x57, 0x8a, 0x6c, 0x38, 0xe6, - 0xa9, 0x04, 0x74, 0x27, 0x11, 0xd0, 0x7f, 0x8f, 0x75, 0x90, 0xd0, 0x6b, 0x54, 0x54, 0xf5, 0x1f, - 0x34, 0xb0, 0x14, 0x17, 0xdc, 0xb1, 0x7d, 0x06, 0xdf, 0x19, 0x30, 0xa2, 0x34, 0x99, 0x11, 0xfc, - 0xb6, 0x30, 0x61, 0x49, 0x3d, 0x95, 0x0d, 0x28, 0x31, 0x03, 0xee, 0x82, 0xb4, 0xcd, 0x70, 0xdb, - 0x17, 0x25, 0x92, 0x5b, 0xbf, 0x38, 0x99, 0x05, 0x51, 0x75, 0x6e, 0xf3, 0xcb, 0x48, 0x62, 0xe8, - 0xbf, 0x6a, 0xa0, 0x18, 0x17, 0xab, 0x9a, 0xd4, 0x6c, 0x63, 0x86, 0xa9, 0x1f, 0x06, 0x0f, 0xae, - 0x81, 0xac, 0x59, 
0xdd, 0xde, 0xa2, 0xa4, 0xe3, 0x05, 0xa5, 0xcb, 0x55, 0xbb, 0xa9, 0x68, 0x28, - 0xe4, 0xf2, 0x02, 0x6f, 0xd9, 0xaa, 0x4b, 0xc5, 0x0a, 0xfc, 0xae, 0xed, 0x36, 0x90, 0xe0, 0x70, - 0x09, 0xd7, 0x6c, 0x07, 0xcd, 0x2f, 0x94, 0xa8, 0x98, 0x6d, 0x8c, 0x04, 0x07, 0x16, 0x41, 0xda, - 0xb7, 0x88, 0x27, 0x33, 0x78, 0xce, 0x98, 0xe3, 0x2a, 0xd7, 0x38, 0x01, 0x49, 0x3a, 0xbc, 0x0c, - 0xe6, 0xb8, 0xa0, 0xef, 0x99, 0x16, 0xce, 0xa7, 0x85, 0xd0, 0x42, 0xaf, 0x5b, 0x9c, 0xab, 0x04, - 0x44, 0x14, 0xf1, 0xf5, 0xaf, 0xfb, 0xe2, 0xc3, 0x43, 0x07, 0xd7, 0x01, 0xb0, 0x88, 0xcb, 0x28, - 0x71, 0x1c, 0x1c, 0x74, 0xa3, 0x30, 0x69, 0x36, 0x42, 0x0e, 0x8a, 0x49, 0x41, 0x1b, 0x00, 0x2f, - 0xf4, 0x8d, 0x4a, 0x9e, 0xff, 0x4f, 0xe6, 0xfa, 0x21, 0x3e, 0x35, 0x16, 0xf9, 0x53, 0x31, 0x46, - 0x0c, 0x5c, 0xff, 0x46, 0x03, 0x39, 0x75, 0xff, 0x14, 0xd2, 0xe9, 0x76, 0x32, 0x9d, 0xfe, 0x31, - 0x7e, 0xb4, 0x0c, 0xcf, 0xa4, 0xef, 0x34, 0xb0, 0x12, 0x68, 0x4d, 0xcc, 0x86, 0x61, 0x3a, 0xa6, - 0x6b, 0x61, 0x1a, 0x74, 0xea, 0x15, 0x90, 0xb2, 0x83, 0xf4, 0x01, 0x0a, 0x20, 0xb5, 0x5d, 0x45, - 0x29, 0xdb, 0x83, 0x57, 0x40, 0xf6, 0x80, 0xf8, 0x4c, 0x24, 0x86, 0x4c, 0x9d, 0x50, 0xe1, 0x3b, - 0x8a, 0x8e, 0x42, 0x09, 0x58, 0x05, 0x69, 0x8f, 0x50, 0xe6, 0xe7, 0x67, 0x84, 0xc2, 0x97, 0xc7, - 0x2a, 0x5c, 0x25, 0x94, 0xa9, 0x5e, 0x1a, 0x8d, 0x28, 0x8e, 0x80, 0x24, 0x90, 0xfe, 0x11, 0xf8, - 0xeb, 0x10, 0xcd, 0xe5, 0x15, 0xf8, 0x3e, 0xc8, 0xd8, 0x92, 0xa9, 0x26, 0xe2, 0x8d, 0xb1, 0x0f, - 0x0e, 0xb1, 0x3f, 0x1a, 0xc4, 0xc1, 0xc0, 0x0d, 0x50, 0xf5, 0xaf, 0x34, 0xb0, 0x3c, 0xa0, 0xa9, - 0xd8, 0x25, 0x08, 0x65, 0xc2, 0x63, 0xe9, 0xd8, 0x2e, 0x41, 0x28, 0x43, 0x82, 0x03, 0xef, 0x82, - 0xac, 0x58, 0x45, 0x2c, 0xe2, 0x28, 0xaf, 0x95, 0x03, 0xaf, 0x55, 0x15, 0xfd, 0xb8, 0x5b, 0xfc, - 0xdb, 0xe0, 0x7e, 0x56, 0x0a, 0xd8, 0x28, 0x04, 0xe0, 0x55, 0x87, 0x29, 0x25, 0x54, 0x15, 0xa6, - 0xa8, 0xba, 0x4d, 0x4e, 0x40, 0x92, 0xae, 0x7f, 0x19, 0x25, 0x25, 0xdf, 0x15, 0xb8, 0x7e, 0x3c, - 0x22, 0xfd, 0xb3, 0x9c, 0xc7, 0x0b, 0x09, 0x0e, 0xf4, 
0xc0, 0x92, 0xdd, 0xb7, 0x5c, 0x4c, 0xdc, - 0x74, 0xc3, 0x1b, 0x46, 0x5e, 0x21, 0x2f, 0xf5, 0x73, 0xd0, 0x00, 0xba, 0xfe, 0x00, 0x0c, 0x48, - 0xf1, 0x76, 0x7f, 0xc0, 0x98, 0x37, 0xa4, 0x70, 0x46, 0x6f, 0x33, 0xd1, 0xeb, 0x59, 0x61, 0x53, - 0xbd, 0x5e, 0x45, 0x02, 0x45, 0xff, 0x4c, 0x03, 0xe7, 0x87, 0x0e, 0xce, 0xb0, 0xb1, 0x69, 0x23, - 0x1b, 0x5b, 0x45, 0x45, 0x54, 0xfa, 0xe0, 0xca, 0x68, 0x4d, 0x92, 0xc8, 0x3c, 0xe2, 0xc3, 0xe2, - 0xaf, 0xff, 0x9c, 0x0a, 0x23, 0x22, 0xba, 0xda, 0x6b, 0xa1, 0xbf, 0x45, 0xd7, 0xe1, 0x2f, 0xab, - 0x1e, 0x7a, 0x2e, 0xe6, 0xbf, 0x90, 0x87, 0x06, 0xa4, 0x61, 0x03, 0x2c, 0x36, 0xf0, 0xbe, 0xd9, - 0x71, 0x98, 0x7a, 0x5b, 0x79, 0x6d, 0xf2, 0x75, 0x13, 0xf6, 0xba, 0xc5, 0xc5, 0x5b, 0x09, 0x0c, - 0xd4, 0x87, 0x09, 0x37, 0xc0, 0x34, 0x73, 0x82, 0x76, 0xf3, 0xcf, 0xb1, 0xd0, 0xf5, 0x9d, 0x9a, - 0x91, 0x53, 0xe6, 0x4f, 0xd7, 0x77, 0x6a, 0x88, 0xdf, 0x86, 0xaf, 0x83, 0x34, 0xed, 0x38, 0x98, - 0x2f, 0x53, 0xd3, 0x13, 0xed, 0x65, 0x3c, 0xa6, 0x51, 0xf9, 0xf3, 0x93, 0x8f, 0x24, 0x84, 0xfe, - 0x31, 0x58, 0x48, 0x6c, 0x5c, 0xb0, 0x0d, 0xe6, 0x9d, 0x58, 0x09, 0x2b, 0x2f, 0x5c, 0xff, 0x53, - 0x75, 0xaf, 0x1a, 0xce, 0x39, 0xf5, 0xe2, 0x7c, 0x9c, 0x87, 0x12, 0xf0, 0xba, 0x09, 0x40, 0x64, - 0x2b, 0xaf, 0x44, 0x5e, 0x3e, 0xb2, 0xdb, 0xa8, 0x4a, 0xe4, 0x55, 0xe5, 0x23, 0x49, 0xe7, 0xd3, - 0xcb, 0xc7, 0x16, 0xc5, 0xac, 0x12, 0xf5, 0xcb, 0x70, 0x7a, 0xd5, 0x42, 0x0e, 0x8a, 0x49, 0xe9, - 0x5f, 0xa4, 0xc0, 0x42, 0x45, 0xaa, 0x5c, 0x25, 0x8e, 0x6d, 0x1d, 0x9d, 0xc2, 0xa2, 0x75, 0x2f, - 0xb1, 0x68, 0x9d, 0xd0, 0xa6, 0x13, 0x8a, 0x8d, 0xdc, 0x9f, 0xdf, 0xe8, 0xdb, 0x9f, 0xaf, 0x4e, - 0x0a, 0x78, 0xf2, 0x16, 0xfd, 0xad, 0x06, 0xfe, 0x92, 0x90, 0xdf, 0x8c, 0x7a, 0x5c, 0x38, 0x69, - 0xb4, 0x71, 0x93, 0x26, 0x81, 0x20, 0x2a, 0x76, 0xe8, 0xa4, 0x81, 0x5b, 0x20, 0xc5, 0x88, 0x4a, - 0xfd, 0x89, 0xe1, 0x30, 0xa6, 0xd1, 0xc8, 0xac, 0x13, 0x94, 0x62, 0x44, 0xff, 0x5e, 0x03, 0xf9, - 0x84, 0x54, 0xbc, 0x37, 0xbf, 0x7c, 0xbd, 0xef, 0x81, 0x99, 0x7d, 0x4a, 0xda, 0x2f, 0xa2, 
0x79, - 0x18, 0xcb, 0xdb, 0x94, 0xb4, 0x91, 0x80, 0xd1, 0x1f, 0x6b, 0x60, 0x39, 0x21, 0x79, 0x0a, 0x7b, - 0xce, 0x4e, 0x72, 0xcf, 0xb9, 0x34, 0xa1, 0x0d, 0x23, 0xb6, 0x9d, 0xc7, 0xa9, 0x3e, 0x0b, 0xb8, - 0xad, 0x70, 0x1f, 0xe4, 0x3c, 0xd2, 0xa8, 0x61, 0x07, 0x5b, 0x8c, 0x0c, 0xeb, 0x1b, 0x27, 0x19, - 0x61, 0xee, 0x61, 0x27, 0xb8, 0x6a, 0x9c, 0xe9, 0x75, 0x8b, 0xb9, 0x6a, 0x84, 0x85, 0xe2, 0xc0, - 0xf0, 0x11, 0x58, 0x0e, 0x57, 0xdc, 0xf0, 0xb5, 0xd4, 0x8b, 0xbf, 0x76, 0xbe, 0xd7, 0x2d, 0x2e, - 0x57, 0xfa, 0x11, 0xd1, 0xe0, 0x23, 0xf0, 0x0e, 0xc8, 0xd8, 0x9e, 0xf8, 0x9a, 0x57, 0x65, 0x78, - 0xd2, 0xbe, 0x28, 0x3f, 0xfb, 0xe5, 0x37, 0xa5, 0x3a, 0xa0, 0xe0, 0xba, 0xfe, 0x4b, 0x7f, 0x0e, - 0xf0, 0x84, 0x83, 0x5b, 0xb1, 0xa5, 0x46, 0x8e, 0xd2, 0xcb, 0x2f, 0xb6, 0xd0, 0x24, 0xa7, 0xed, - 0xe8, 0xde, 0xd6, 0x61, 0xb6, 0x53, 0x92, 0xbf, 0xf1, 0x94, 0xb6, 0x5d, 0xb6, 0x4b, 0x6b, 0x8c, - 0xda, 0x6e, 0x53, 0x4e, 0xfe, 0xd8, 0xb6, 0x75, 0x01, 0x64, 0xd4, 0x30, 0x16, 0x86, 0xa7, 0xa5, - 0x55, 0x9b, 0x92, 0x84, 0x02, 0x9e, 0x7e, 0xdc, 0x9f, 0x17, 0x62, 0x34, 0x3f, 0x7c, 0x69, 0x79, - 0x71, 0x56, 0x65, 0xe3, 0xe8, 0xdc, 0x78, 0x37, 0xda, 0x57, 0x65, 0xa6, 0xaf, 0x4f, 0x98, 0xe9, - 0xf1, 0x41, 0x39, 0x72, 0x5b, 0x85, 0x6f, 0x81, 0x59, 0x2c, 0xd1, 0xe5, 0xe4, 0xbd, 0x36, 0x21, - 0x7a, 0xd4, 0x56, 0xa3, 0x56, 0xac, 0x68, 0x0a, 0x10, 0xbe, 0xca, 0xbd, 0xc4, 0x65, 0xeb, 0x47, - 0x1e, 0x96, 0xeb, 0xfd, 0x9c, 0xf1, 0x77, 0x69, 0x6c, 0x48, 0x3e, 0xe6, 0xdf, 0x4d, 0xe1, 0x11, - 0xc5, 0x6f, 0xe8, 0x1f, 0x82, 0xb3, 0x43, 0x5a, 0x3f, 0xb4, 0xc4, 0xe7, 0x5e, 0xc3, 0x66, 0x36, - 0x71, 0x83, 0x9e, 0x58, 0x9e, 0xcc, 0xf9, 0x1b, 0xc1, 0xbd, 0xc4, 0xf7, 0xa1, 0x82, 0x42, 0x31, - 0x58, 0xfd, 0x3d, 0x00, 0x07, 0xf7, 0xb6, 0x09, 0xb6, 0xc2, 0x8b, 0x60, 0xd6, 0xed, 0xb4, 0xf7, - 0xb0, 0xac, 0xdf, 0x74, 0xe4, 0x9c, 0x8a, 0xa0, 0x22, 0xc5, 0x35, 0x5e, 0x79, 0xf2, 0xbc, 0x30, - 0xf5, 0xf4, 0x79, 0x61, 0xea, 0xd9, 0xf3, 0xc2, 0xd4, 0x27, 0xbd, 0x82, 0xf6, 0xa4, 0x57, 0xd0, - 0x9e, 0xf6, 0x0a, 0xda, 0xb3, 
0x5e, 0x41, 0xfb, 0xad, 0x57, 0xd0, 0x3e, 0xff, 0xbd, 0x30, 0xf5, - 0x76, 0x7e, 0xd4, 0x0f, 0xc0, 0x7f, 0x04, 0x00, 0x00, 0xff, 0xff, 0x61, 0x0f, 0x0a, 0xd7, 0x34, - 0x16, 0x00, 0x00, + // 1671 bytes of a gzipped FileDescriptorProto + 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xbc, 0x58, 0xcb, 0x6f, 0x1b, 0xd5, + 0x1a, 0xcf, 0x38, 0x71, 0xec, 0x1c, 0x27, 0x69, 0x72, 0x6e, 0xab, 0xeb, 0x9b, 0xab, 0x6b, 0xe7, + 0x8e, 0x68, 0x1b, 0x68, 0x6b, 0xd3, 0xb4, 0x42, 0xb0, 0x01, 0x3a, 0x69, 0x9a, 0x86, 0xa6, 0x8e, + 0x75, 0x6c, 0x15, 0x81, 0x78, 0x74, 0x32, 0x3e, 0xb1, 0xa7, 0x1e, 0xcf, 0x19, 0x9d, 0x39, 0x0e, + 0xad, 0x84, 0x10, 0x1b, 0x16, 0xec, 0xf8, 0x17, 0x10, 0x7f, 0x01, 0x82, 0x05, 0x12, 0x82, 0xc2, + 0x06, 0x75, 0x59, 0x89, 0x4d, 0x37, 0x58, 0xd4, 0xfc, 0x17, 0x59, 0xa1, 0xf3, 0x98, 0x97, 0x1f, + 0xb5, 0xa9, 0xaa, 0xac, 0x92, 0xf3, 0x7d, 0xdf, 0xf9, 0x7d, 0x8f, 0xf3, 0xbd, 0xc6, 0xe0, 0x5a, + 0xfb, 0x75, 0xbf, 0x64, 0x93, 0x72, 0xbb, 0x7b, 0x80, 0xa9, 0x8b, 0x19, 0xf6, 0xcb, 0x47, 0xd8, + 0x6d, 0x10, 0x5a, 0x56, 0x0c, 0xd3, 0xb3, 0xcb, 0x2e, 0x66, 0x9f, 0x10, 0xda, 0xb6, 0xdd, 0x66, + 0xf9, 0xe8, 0x72, 0xb9, 0x89, 0x5d, 0x4c, 0x4d, 0x86, 0x1b, 0x25, 0x8f, 0x12, 0x46, 0x60, 0x5e, + 0x4a, 0x96, 0x4c, 0xcf, 0x2e, 0x45, 0x92, 0xa5, 0xa3, 0xcb, 0x6b, 0x97, 0x9a, 0x36, 0x6b, 0x75, + 0x0f, 0x4a, 0x16, 0xe9, 0x94, 0x9b, 0xa4, 0x49, 0xca, 0xe2, 0xc2, 0x41, 0xf7, 0x50, 0x9c, 0xc4, + 0x41, 0xfc, 0x27, 0x81, 0xd6, 0xf4, 0x98, 0x4a, 0x8b, 0x50, 0x3c, 0x42, 0xd9, 0xda, 0xd5, 0x48, + 0xa6, 0x63, 0x5a, 0x2d, 0xdb, 0xc5, 0xf4, 0x41, 0xd9, 0x6b, 0x37, 0x39, 0xc1, 0x2f, 0x77, 0x30, + 0x33, 0x47, 0xdd, 0x2a, 0x8f, 0xbb, 0x45, 0xbb, 0x2e, 0xb3, 0x3b, 0x78, 0xe8, 0xc2, 0x6b, 0x93, + 0x2e, 0xf8, 0x56, 0x0b, 0x77, 0xcc, 0xa1, 0x7b, 0x57, 0xc6, 0xdd, 0xeb, 0x32, 0xdb, 0x29, 0xdb, + 0x2e, 0xf3, 0x19, 0x1d, 0xbc, 0xa4, 0xff, 0xac, 0x81, 0x53, 0x37, 0xeb, 0xf5, 0xea, 0xae, 0xdb, + 0xa4, 0xd8, 0xf7, 0xab, 0x26, 0x6b, 0xc1, 0x75, 0x30, 0xe7, 0x99, 0xac, 0x95, 0xd7, 0xd6, 0xb5, + 
0x8d, 0x05, 0x63, 0xf1, 0x51, 0xaf, 0x38, 0xd3, 0xef, 0x15, 0xe7, 0x38, 0x0f, 0x09, 0x0e, 0xbc, + 0x0a, 0xb2, 0xfc, 0x6f, 0xfd, 0x81, 0x87, 0xf3, 0xb3, 0x42, 0x2a, 0xdf, 0xef, 0x15, 0xb3, 0x55, + 0x45, 0x3b, 0x8e, 0xfd, 0x8f, 0x42, 0x49, 0x58, 0x03, 0x99, 0x03, 0xd3, 0x6a, 0x63, 0xb7, 0x91, + 0x4f, 0xad, 0x6b, 0x1b, 0xb9, 0xcd, 0x8d, 0xd2, 0xb8, 0xe7, 0x2b, 0x29, 0x7b, 0x0c, 0x29, 0x6f, + 0x9c, 0x52, 0x46, 0x64, 0x14, 0x01, 0x05, 0x48, 0xfa, 0x21, 0x38, 0x1d, 0xb3, 0x1f, 0x75, 0x1d, + 0x7c, 0xc7, 0x74, 0xba, 0x18, 0x56, 0x40, 0x9a, 0x2b, 0xf6, 0xf3, 0xda, 0xfa, 0xec, 0x46, 0x6e, + 0xf3, 0xe5, 0xf1, 0xaa, 0x06, 0xdc, 0x37, 0x96, 0x94, 0xae, 0x34, 0x3f, 0xf9, 0x48, 0xc2, 0xe8, + 0xfb, 0x20, 0xb3, 0x5b, 0x35, 0x1c, 0x62, 0xb5, 0x79, 0x7c, 0x2c, 0xbb, 0x41, 0x07, 0xe3, 0xb3, + 0xb5, 0x7b, 0x1d, 0x21, 0xc1, 0x81, 0x3a, 0x98, 0xc7, 0xf7, 0x2d, 0xec, 0xb1, 0x7c, 0x6a, 0x7d, + 0x76, 0x63, 0xc1, 0x00, 0xfd, 0x5e, 0x71, 0x7e, 0x5b, 0x50, 0x90, 0xe2, 0xe8, 0x5f, 0xa4, 0x40, + 0x46, 0xa9, 0x85, 0x77, 0x41, 0x96, 0xa7, 0x4f, 0xc3, 0x64, 0xa6, 0x40, 0xcd, 0x6d, 0xbe, 0x1a, + 0xb3, 0x37, 0x7c, 0xcd, 0x92, 0xd7, 0x6e, 0x72, 0x82, 0x5f, 0xe2, 0xd2, 0xdc, 0xf6, 0xfd, 0x83, + 0x7b, 0xd8, 0x62, 0xb7, 0x31, 0x33, 0x0d, 0xa8, 0xec, 0x00, 0x11, 0x0d, 0x85, 0xa8, 0x70, 0x07, + 0xcc, 0xf9, 0x1e, 0xb6, 0x54, 0xe0, 0xcf, 0x4e, 0x0c, 0x7c, 0xcd, 0xc3, 0x56, 0xe4, 0x1a, 0x3f, + 0x21, 0x01, 0x00, 0xf7, 0xc1, 0xbc, 0xcf, 0x4c, 0xd6, 0xf5, 0xc5, 0xc3, 0xe7, 0x36, 0xcf, 0x4f, + 0x86, 0x12, 0xe2, 0xc6, 0xb2, 0x02, 0x9b, 0x97, 0x67, 0xa4, 0x60, 0xf4, 0x5f, 0x35, 0xb0, 0x9c, + 0x7c, 0x6d, 0x78, 0x07, 0x64, 0x7c, 0x4c, 0x8f, 0x6c, 0x0b, 0xe7, 0xe7, 0x84, 0x92, 0xf2, 0x64, + 0x25, 0x52, 0x3e, 0xc8, 0x97, 0x1c, 0xcf, 0x15, 0x45, 0x43, 0x01, 0x18, 0x7c, 0x17, 0x64, 0x29, + 0xf6, 0x49, 0x97, 0x5a, 0x58, 0x59, 0x7f, 0x29, 0x0e, 0xcc, 0xeb, 0x9e, 0x43, 0xf2, 0x64, 0x6d, + 0xec, 0x11, 0xcb, 0x74, 0x64, 0x28, 0x11, 0x3e, 0xc4, 0x14, 0xbb, 0x16, 0x36, 0x16, 0x79, 0x96, + 0x23, 0x05, 0x81, 0x42, 0x30, 0x5e, 
0x45, 0x8b, 0xca, 0x90, 0x2d, 0xc7, 0x3c, 0x91, 0x07, 0xdd, + 0x4b, 0x3c, 0xe8, 0x2b, 0x13, 0x03, 0x24, 0xec, 0x1a, 0xf7, 0xaa, 0xfa, 0x4f, 0x1a, 0x58, 0x89, + 0x0b, 0xee, 0xd9, 0x3e, 0x83, 0x1f, 0x0c, 0x39, 0x51, 0x9a, 0xce, 0x09, 0x7e, 0x5b, 0xb8, 0xb0, + 0xa2, 0x54, 0x65, 0x03, 0x4a, 0xcc, 0x81, 0x5b, 0x20, 0x6d, 0x33, 0xdc, 0xf1, 0x45, 0x89, 0xe4, + 0x36, 0xcf, 0x4d, 0xe7, 0x41, 0x54, 0x9d, 0xbb, 0xfc, 0x32, 0x92, 0x18, 0xfa, 0x1f, 0x1a, 0x28, + 0xc6, 0xc5, 0xaa, 0x26, 0x35, 0x3b, 0x98, 0x61, 0xea, 0x87, 0x8f, 0x07, 0x37, 0x40, 0xd6, 0xac, + 0xee, 0xee, 0x50, 0xd2, 0xf5, 0x82, 0xd2, 0xe5, 0xa6, 0x5d, 0x53, 0x34, 0x14, 0x72, 0x79, 0x81, + 0xb7, 0x6d, 0xd5, 0xa5, 0x62, 0x05, 0x7e, 0xcb, 0x76, 0x1b, 0x48, 0x70, 0xb8, 0x84, 0x6b, 0x76, + 0x82, 0xe6, 0x17, 0x4a, 0x54, 0xcc, 0x0e, 0x46, 0x82, 0x03, 0x8b, 0x20, 0xed, 0x5b, 0xc4, 0x93, + 0x19, 0xbc, 0x60, 0x2c, 0x70, 0x93, 0x6b, 0x9c, 0x80, 0x24, 0x1d, 0x5e, 0x00, 0x0b, 0x5c, 0xd0, + 0xf7, 0x4c, 0x0b, 0xe7, 0xd3, 0x42, 0x68, 0xa9, 0xdf, 0x2b, 0x2e, 0x54, 0x02, 0x22, 0x8a, 0xf8, + 0xfa, 0xb7, 0x03, 0xef, 0xc3, 0x9f, 0x0e, 0x6e, 0x02, 0x60, 0x11, 0x97, 0x51, 0xe2, 0x38, 0x38, + 0xe8, 0x46, 0x61, 0xd2, 0x6c, 0x85, 0x1c, 0x14, 0x93, 0x82, 0x36, 0x00, 0x5e, 0x18, 0x1b, 0x95, + 0x3c, 0x6f, 0x4c, 0x17, 0xfa, 0x11, 0x31, 0x35, 0x96, 0xb9, 0xaa, 0x18, 0x23, 0x06, 0xae, 0x7f, + 0xa7, 0x81, 0x9c, 0xba, 0x7f, 0x02, 0xe9, 0x74, 0x23, 0x99, 0x4e, 0xff, 0x9f, 0x3c, 0x5a, 0x46, + 0x67, 0xd2, 0x0f, 0x1a, 0x58, 0x0b, 0xac, 0x26, 0x66, 0xc3, 0x30, 0x1d, 0xd3, 0xb5, 0x30, 0x0d, + 0x3a, 0xf5, 0x1a, 0x48, 0xd9, 0x41, 0xfa, 0x00, 0x05, 0x90, 0xda, 0xad, 0xa2, 0x94, 0xed, 0xc1, + 0x8b, 0x20, 0xdb, 0x22, 0x3e, 0x13, 0x89, 0x21, 0x53, 0x27, 0x34, 0xf8, 0xa6, 0xa2, 0xa3, 0x50, + 0x02, 0x56, 0x41, 0xda, 0x23, 0x94, 0xf9, 0xf9, 0x39, 0x61, 0xf0, 0x85, 0x89, 0x06, 0x57, 0x09, + 0x65, 0xaa, 0x97, 0x46, 0x23, 0x8a, 0x23, 0x20, 0x09, 0xa4, 0x7f, 0x0a, 0xfe, 0x33, 0xc2, 0x72, + 0x79, 0x05, 0x7e, 0x0c, 0x32, 0xb6, 0x64, 0xaa, 0x89, 0x78, 0x75, 0xa2, 
0xc2, 0x11, 0xfe, 0x47, + 0x83, 0x38, 0x18, 0xb8, 0x01, 0xaa, 0xfe, 0x8d, 0x06, 0x56, 0x87, 0x2c, 0x15, 0xbb, 0x04, 0xa1, + 0x4c, 0x44, 0x2c, 0x1d, 0xdb, 0x25, 0x08, 0x65, 0x48, 0x70, 0xe0, 0x2d, 0x90, 0x15, 0xab, 0x88, + 0x45, 0x1c, 0x15, 0xb5, 0x72, 0x10, 0xb5, 0xaa, 0xa2, 0x1f, 0xf7, 0x8a, 0xff, 0x1d, 0xde, 0xcf, + 0x4a, 0x01, 0x1b, 0x85, 0x00, 0xbc, 0xea, 0x30, 0xa5, 0x84, 0xaa, 0xc2, 0x14, 0x55, 0xb7, 0xcd, + 0x09, 0x48, 0xd2, 0xf5, 0xaf, 0xa3, 0xa4, 0xe4, 0xbb, 0x02, 0xb7, 0x8f, 0xbf, 0xc8, 0xe0, 0x2c, + 0xe7, 0xef, 0x85, 0x04, 0x07, 0x7a, 0x60, 0xc5, 0x1e, 0x58, 0x2e, 0xa6, 0x6e, 0xba, 0xe1, 0x0d, + 0x23, 0xaf, 0x90, 0x57, 0x06, 0x39, 0x68, 0x08, 0x5d, 0xbf, 0x0b, 0x86, 0xa4, 0x78, 0xbb, 0x6f, + 0x31, 0xe6, 0x8d, 0x28, 0x9c, 0xf1, 0xdb, 0x4c, 0xa4, 0x3d, 0x2b, 0x7c, 0xaa, 0xd7, 0xab, 0x48, + 0xa0, 0xe8, 0x5f, 0x6a, 0xe0, 0xcc, 0xc8, 0xc1, 0x19, 0x36, 0x36, 0x6d, 0x6c, 0x63, 0xab, 0xa8, + 0x17, 0x95, 0x31, 0xb8, 0x38, 0xde, 0x92, 0x24, 0x32, 0x7f, 0xf1, 0x51, 0xef, 0xaf, 0xff, 0x96, + 0x0a, 0x5f, 0x44, 0x74, 0xb5, 0xb7, 0xc3, 0x78, 0x8b, 0xae, 0xc3, 0x35, 0xab, 0x1e, 0x7a, 0x3a, + 0x16, 0xbf, 0x90, 0x87, 0x86, 0xa4, 0x61, 0x03, 0x2c, 0x37, 0xf0, 0xa1, 0xd9, 0x75, 0x98, 0xd2, + 0xad, 0xa2, 0x36, 0xfd, 0xba, 0x09, 0xfb, 0xbd, 0xe2, 0xf2, 0xf5, 0x04, 0x06, 0x1a, 0xc0, 0x84, + 0x5b, 0x60, 0x96, 0x39, 0x41, 0xbb, 0x79, 0x69, 0x22, 0x74, 0x7d, 0xaf, 0x66, 0xe4, 0x94, 0xfb, + 0xb3, 0xf5, 0xbd, 0x1a, 0xe2, 0xb7, 0xe1, 0x3b, 0x20, 0x4d, 0xbb, 0x0e, 0xe6, 0xcb, 0xd4, 0xec, + 0x54, 0x7b, 0x19, 0x7f, 0xd3, 0xa8, 0xfc, 0xf9, 0xc9, 0x47, 0x12, 0x42, 0xff, 0x0c, 0x2c, 0x25, + 0x36, 0x2e, 0xd8, 0x01, 0x8b, 0x4e, 0xac, 0x84, 0x55, 0x14, 0xae, 0xfc, 0xa3, 0xba, 0x57, 0x0d, + 0xe7, 0xb4, 0xd2, 0xb8, 0x18, 0xe7, 0xa1, 0x04, 0xbc, 0x6e, 0x02, 0x10, 0xf9, 0xca, 0x2b, 0x91, + 0x97, 0x8f, 0xec, 0x36, 0xaa, 0x12, 0x79, 0x55, 0xf9, 0x48, 0xd2, 0xf9, 0xf4, 0xf2, 0xb1, 0x45, + 0x31, 0xab, 0x44, 0xfd, 0x32, 0x9c, 0x5e, 0xb5, 0x90, 0x83, 0x62, 0x52, 0xfa, 0x2f, 0x1a, 0x58, + 0xaa, 0x48, 
0x93, 0xab, 0xc4, 0xb1, 0xad, 0x07, 0x27, 0xb0, 0x68, 0xdd, 0x4e, 0x2c, 0x5a, 0xcf, + 0x68, 0xd3, 0x09, 0xc3, 0xc6, 0x6e, 0x5a, 0xdf, 0x6b, 0xe0, 0xdf, 0x09, 0xc9, 0xed, 0xa8, 0x19, + 0x85, 0x23, 0x41, 0x9b, 0x34, 0x12, 0x12, 0x08, 0xa2, 0xb4, 0x46, 0x8e, 0x04, 0xb8, 0x03, 0x52, + 0x8c, 0xa8, 0x1c, 0x9d, 0x1a, 0x0e, 0x63, 0x1a, 0xcd, 0xb6, 0x3a, 0x41, 0x29, 0x46, 0xf4, 0x1f, + 0x35, 0x90, 0x4f, 0x48, 0xc5, 0x9b, 0xe8, 0x8b, 0xb7, 0xfb, 0x36, 0x98, 0x3b, 0xa4, 0xa4, 0xf3, + 0x3c, 0x96, 0x87, 0x41, 0xbf, 0x41, 0x49, 0x07, 0x09, 0x18, 0xfd, 0xa1, 0x06, 0x56, 0x13, 0x92, + 0x27, 0xb0, 0x90, 0xec, 0x25, 0x17, 0x92, 0xf3, 0x53, 0xfa, 0x30, 0x66, 0x2d, 0x79, 0x98, 0x1a, + 0xf0, 0x80, 0xfb, 0x0a, 0x0f, 0x41, 0xce, 0x23, 0x8d, 0x1a, 0x76, 0xb0, 0xc5, 0xc8, 0xa8, 0x02, + 0x7f, 0x96, 0x13, 0xe6, 0x01, 0x76, 0x82, 0xab, 0xc6, 0xa9, 0x7e, 0xaf, 0x98, 0xab, 0x46, 0x58, + 0x28, 0x0e, 0x0c, 0xef, 0x83, 0xd5, 0x70, 0x17, 0x0d, 0xb5, 0xa5, 0x9e, 0x5f, 0xdb, 0x99, 0x7e, + 0xaf, 0xb8, 0x5a, 0x19, 0x44, 0x44, 0xc3, 0x4a, 0xe0, 0x4d, 0x90, 0xb1, 0x3d, 0xf1, 0xd9, 0xad, + 0xbe, 0xd8, 0x9e, 0xb5, 0xd8, 0xc9, 0xef, 0x73, 0xf9, 0xf1, 0xa7, 0x0e, 0x28, 0xb8, 0xae, 0xff, + 0x3e, 0x98, 0x03, 0x3c, 0xe1, 0xe0, 0x4e, 0x6c, 0xfb, 0x90, 0x33, 0xef, 0xc2, 0xf3, 0x6d, 0x1e, + 0xc9, 0xb1, 0x38, 0xbe, 0x09, 0x75, 0x99, 0xed, 0x94, 0xe4, 0x8f, 0x31, 0xa5, 0x5d, 0x97, 0xed, + 0xd3, 0x1a, 0xa3, 0xb6, 0xdb, 0x94, 0x23, 0x3a, 0xb6, 0x16, 0x9d, 0x05, 0x19, 0x35, 0x35, 0x85, + 0xe3, 0x69, 0xe9, 0xd5, 0xb6, 0x24, 0xa1, 0x80, 0xa7, 0x1f, 0x0f, 0xe6, 0x85, 0x98, 0xa1, 0xf7, + 0x5e, 0x58, 0x5e, 0xfc, 0x4b, 0x65, 0xe3, 0xf8, 0xdc, 0xf8, 0x30, 0x5a, 0x2c, 0x65, 0xa6, 0x6f, + 0x4e, 0x99, 0xe9, 0xf1, 0x89, 0x36, 0x76, 0xad, 0x84, 0xef, 0x81, 0x79, 0x2c, 0xd1, 0xe5, 0x88, + 0xbc, 0x3c, 0x25, 0x7a, 0xd4, 0x56, 0xa3, 0x5f, 0x1e, 0x14, 0x4d, 0x01, 0xc2, 0xb7, 0x78, 0x94, + 0xb8, 0x2c, 0xff, 0xe0, 0x97, 0x7b, 0xf8, 0x82, 0xf1, 0x3f, 0xe9, 0x6c, 0x48, 0x3e, 0xe6, 0x1f, + 0x38, 0xe1, 0x11, 0xc5, 0x6f, 0xe8, 0x1f, 0x01, 
0x38, 0xbc, 0xe4, 0x4c, 0xb1, 0x42, 0x9d, 0x03, + 0xf3, 0x6e, 0xb7, 0x73, 0x80, 0x65, 0x0d, 0xa5, 0x23, 0x03, 0x2b, 0x82, 0x8a, 0x14, 0xd7, 0x78, + 0xf3, 0xd1, 0xd3, 0xc2, 0xcc, 0xe3, 0xa7, 0x85, 0x99, 0x27, 0x4f, 0x0b, 0x33, 0x9f, 0xf7, 0x0b, + 0xda, 0xa3, 0x7e, 0x41, 0x7b, 0xdc, 0x2f, 0x68, 0x4f, 0xfa, 0x05, 0xed, 0xcf, 0x7e, 0x41, 0xfb, + 0xea, 0xaf, 0xc2, 0xcc, 0xfb, 0xf9, 0x71, 0xbf, 0x96, 0xfe, 0x1d, 0x00, 0x00, 0xff, 0xff, 0xd4, + 0x46, 0x40, 0xf2, 0x61, 0x15, 0x00, 0x00, } func (m *HTTPIngressPath) Marshal() (dAtA []byte, err error) { @@ -1822,16 +1790,6 @@ func (m *NetworkPolicy) MarshalToSizedBuffer(dAtA []byte) (int, error) { _ = i var l int _ = l - { - size, err := m.Status.MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintGenerated(dAtA, i, uint64(size)) - } - i-- - dAtA[i] = 0x1a { size, err := m.Spec.MarshalToSizedBuffer(dAtA[:i]) if err != nil { @@ -2180,43 +2138,6 @@ func (m *NetworkPolicySpec) MarshalToSizedBuffer(dAtA []byte) (int, error) { return len(dAtA) - i, nil } -func (m *NetworkPolicyStatus) Marshal() (dAtA []byte, err error) { - size := m.Size() - dAtA = make([]byte, size) - n, err := m.MarshalToSizedBuffer(dAtA[:size]) - if err != nil { - return nil, err - } - return dAtA[:n], nil -} - -func (m *NetworkPolicyStatus) MarshalTo(dAtA []byte) (int, error) { - size := m.Size() - return m.MarshalToSizedBuffer(dAtA[:size]) -} - -func (m *NetworkPolicyStatus) MarshalToSizedBuffer(dAtA []byte) (int, error) { - i := len(dAtA) - _ = i - var l int - _ = l - if len(m.Conditions) > 0 { - for iNdEx := len(m.Conditions) - 1; iNdEx >= 0; iNdEx-- { - { - size, err := m.Conditions[iNdEx].MarshalToSizedBuffer(dAtA[:i]) - if err != nil { - return 0, err - } - i -= size - i = encodeVarintGenerated(dAtA, i, uint64(size)) - } - i-- - dAtA[i] = 0xa - } - } - return len(dAtA) - i, nil -} - func (m *ServiceBackendPort) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) @@ -2583,8 +2504,6 
@@ func (m *NetworkPolicy) Size() (n int) { n += 1 + l + sovGenerated(uint64(l)) l = m.Spec.Size() n += 1 + l + sovGenerated(uint64(l)) - l = m.Status.Size() - n += 1 + l + sovGenerated(uint64(l)) return n } @@ -2717,21 +2636,6 @@ func (m *NetworkPolicySpec) Size() (n int) { return n } -func (m *NetworkPolicyStatus) Size() (n int) { - if m == nil { - return 0 - } - var l int - _ = l - if len(m.Conditions) > 0 { - for _, e := range m.Conditions { - l = e.Size() - n += 1 + l + sovGenerated(uint64(l)) - } - } - return n -} - func (m *ServiceBackendPort) Size() (n int) { if m == nil { return 0 @@ -3006,7 +2910,6 @@ func (this *NetworkPolicy) String() string { s := strings.Join([]string{`&NetworkPolicy{`, `ObjectMeta:` + strings.Replace(strings.Replace(fmt.Sprintf("%v", this.ObjectMeta), "ObjectMeta", "v1.ObjectMeta", 1), `&`, ``, 1) + `,`, `Spec:` + strings.Replace(strings.Replace(this.Spec.String(), "NetworkPolicySpec", "NetworkPolicySpec", 1), `&`, ``, 1) + `,`, - `Status:` + strings.Replace(strings.Replace(this.Status.String(), "NetworkPolicyStatus", "NetworkPolicyStatus", 1), `&`, ``, 1) + `,`, `}`, }, "") return s @@ -3116,21 +3019,6 @@ func (this *NetworkPolicySpec) String() string { }, "") return s } -func (this *NetworkPolicyStatus) String() string { - if this == nil { - return "nil" - } - repeatedStringForConditions := "[]Condition{" - for _, f := range this.Conditions { - repeatedStringForConditions += fmt.Sprintf("%v", f) + "," - } - repeatedStringForConditions += "}" - s := strings.Join([]string{`&NetworkPolicyStatus{`, - `Conditions:` + repeatedStringForConditions + `,`, - `}`, - }, "") - return s -} func (this *ServiceBackendPort) String() string { if this == nil { return "nil" @@ -5609,39 +5497,6 @@ func (m *NetworkPolicy) Unmarshal(dAtA []byte) error { return err } iNdEx = postIndex - case 3: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType) - } - var msglen int - for shift := uint(0); ; shift += 7 { - if 
shift >= 64 { - return ErrIntOverflowGenerated - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - msglen |= int(b&0x7F) << shift - if b < 0x80 { - break - } - } - if msglen < 0 { - return ErrInvalidLengthGenerated - } - postIndex := iNdEx + msglen - if postIndex < 0 { - return ErrInvalidLengthGenerated - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipGenerated(dAtA[iNdEx:]) @@ -6496,90 +6351,6 @@ func (m *NetworkPolicySpec) Unmarshal(dAtA []byte) error { } return nil } -func (m *NetworkPolicyStatus) Unmarshal(dAtA []byte) error { - l := len(dAtA) - iNdEx := 0 - for iNdEx < l { - preIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowGenerated - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - wire |= uint64(b&0x7F) << shift - if b < 0x80 { - break - } - } - fieldNum := int32(wire >> 3) - wireType := int(wire & 0x7) - if wireType == 4 { - return fmt.Errorf("proto: NetworkPolicyStatus: wiretype end group for non-group") - } - if fieldNum <= 0 { - return fmt.Errorf("proto: NetworkPolicyStatus: illegal tag %d (wire type %d)", fieldNum, wire) - } - switch fieldNum { - case 1: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Conditions", wireType) - } - var msglen int - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowGenerated - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - msglen |= int(b&0x7F) << shift - if b < 0x80 { - break - } - } - if msglen < 0 { - return ErrInvalidLengthGenerated - } - postIndex := iNdEx + msglen - if postIndex < 0 { - return ErrInvalidLengthGenerated - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - m.Conditions = append(m.Conditions, v1.Condition{}) - 
if err := m.Conditions[len(m.Conditions)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - iNdEx = postIndex - default: - iNdEx = preIndex - skippy, err := skipGenerated(dAtA[iNdEx:]) - if err != nil { - return err - } - if (skippy < 0) || (iNdEx+skippy) < 0 { - return ErrInvalidLengthGenerated - } - if (iNdEx + skippy) > l { - return io.ErrUnexpectedEOF - } - iNdEx += skippy - } - } - - if iNdEx > l { - return io.ErrUnexpectedEOF - } - return nil -} func (m *ServiceBackendPort) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 diff --git a/cluster-autoscaler/vendor/k8s.io/api/networking/v1/generated.proto b/cluster-autoscaler/vendor/k8s.io/api/networking/v1/generated.proto index ed194a89d564..b50dd491e0fa 100644 --- a/cluster-autoscaler/vendor/k8s.io/api/networking/v1/generated.proto +++ b/cluster-autoscaler/vendor/k8s.io/api/networking/v1/generated.proto @@ -384,11 +384,6 @@ message NetworkPolicy { // spec represents the specification of the desired behavior for this NetworkPolicy. // +optional optional NetworkPolicySpec spec = 2; - - // status represents the current state of the NetworkPolicy. - // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status - // +optional - optional NetworkPolicyStatus status = 3; } // NetworkPolicyEgressRule describes a particular set of traffic that is allowed out of pods @@ -536,18 +531,6 @@ message NetworkPolicySpec { repeated string policyTypes = 4; } -// NetworkPolicyStatus describes the current state of the NetworkPolicy. -message NetworkPolicyStatus { - // conditions holds an array of metav1.Condition that describe the state of the NetworkPolicy. - // Current service state - // +optional - // +patchMergeKey=type - // +patchStrategy=merge - // +listType=map - // +listMapKey=type - repeated k8s.io.apimachinery.pkg.apis.meta.v1.Condition conditions = 1; -} - // ServiceBackendPort is the service port being referenced. 
message ServiceBackendPort { // name is the name of the port on the Service. diff --git a/cluster-autoscaler/vendor/k8s.io/api/networking/v1/types.go b/cluster-autoscaler/vendor/k8s.io/api/networking/v1/types.go index fa7cf1bd700f..a17e2cb5b393 100644 --- a/cluster-autoscaler/vendor/k8s.io/api/networking/v1/types.go +++ b/cluster-autoscaler/vendor/k8s.io/api/networking/v1/types.go @@ -38,10 +38,10 @@ type NetworkPolicy struct { // +optional Spec NetworkPolicySpec `json:"spec,omitempty" protobuf:"bytes,2,opt,name=spec"` - // status represents the current state of the NetworkPolicy. - // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status - // +optional - Status NetworkPolicyStatus `json:"status,omitempty" protobuf:"bytes,3,opt,name=status"` + // Status is tombstoned to show why 3 is a reserved protobuf tag. + // This commented field should remain, so in the future if we decide to reimplement + // NetworkPolicyStatus a different protobuf name and tag SHOULD be used! + // Status NetworkPolicyStatus `json:"status,omitempty" protobuf:"bytes,3,opt,name=status"` } // PolicyType string describes the NetworkPolicy type @@ -205,48 +205,6 @@ type NetworkPolicyPeer struct { IPBlock *IPBlock `json:"ipBlock,omitempty" protobuf:"bytes,3,rep,name=ipBlock"` } -// NetworkPolicyConditionType is the type for status conditions on -// a NetworkPolicy. This type should be used with the -// NetworkPolicyStatus.Conditions field. 
-type NetworkPolicyConditionType string - -const ( - // NetworkPolicyConditionStatusAccepted represents status of a Network Policy that could be properly parsed by - // the Network Policy provider and will be implemented in the cluster - NetworkPolicyConditionStatusAccepted NetworkPolicyConditionType = "Accepted" - - // NetworkPolicyConditionStatusPartialFailure represents status of a Network Policy that could be partially - // parsed by the Network Policy provider and may not be completely implemented due to a lack of a feature or some - // other condition - NetworkPolicyConditionStatusPartialFailure NetworkPolicyConditionType = "PartialFailure" - - // NetworkPolicyConditionStatusFailure represents status of a Network Policy that could not be parsed by the - // Network Policy provider and will not be implemented in the cluster - NetworkPolicyConditionStatusFailure NetworkPolicyConditionType = "Failure" -) - -// NetworkPolicyConditionReason defines the set of reasons that explain why a -// particular NetworkPolicy condition type has been raised. -type NetworkPolicyConditionReason string - -const ( - // NetworkPolicyConditionReasonFeatureNotSupported represents a reason where the Network Policy may not have been - // implemented in the cluster due to a lack of some feature not supported by the Network Policy provider - NetworkPolicyConditionReasonFeatureNotSupported NetworkPolicyConditionReason = "FeatureNotSupported" -) - -// NetworkPolicyStatus describes the current state of the NetworkPolicy. -type NetworkPolicyStatus struct { - // conditions holds an array of metav1.Condition that describe the state of the NetworkPolicy. 
- // Current service state - // +optional - // +patchMergeKey=type - // +patchStrategy=merge - // +listType=map - // +listMapKey=type - Conditions []metav1.Condition `json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions"` -} - // +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object // NetworkPolicyList is a list of NetworkPolicy objects. diff --git a/cluster-autoscaler/vendor/k8s.io/api/networking/v1/types_swagger_doc_generated.go b/cluster-autoscaler/vendor/k8s.io/api/networking/v1/types_swagger_doc_generated.go index 91161d5ca4e6..ff080540d39b 100644 --- a/cluster-autoscaler/vendor/k8s.io/api/networking/v1/types_swagger_doc_generated.go +++ b/cluster-autoscaler/vendor/k8s.io/api/networking/v1/types_swagger_doc_generated.go @@ -224,7 +224,6 @@ var map_NetworkPolicy = map[string]string{ "": "NetworkPolicy describes what network traffic is allowed for a set of Pods", "metadata": "Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata", "spec": "spec represents the specification of the desired behavior for this NetworkPolicy.", - "status": "status represents the current state of the NetworkPolicy. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status", } func (NetworkPolicy) SwaggerDoc() map[string]string { @@ -295,15 +294,6 @@ func (NetworkPolicySpec) SwaggerDoc() map[string]string { return map_NetworkPolicySpec } -var map_NetworkPolicyStatus = map[string]string{ - "": "NetworkPolicyStatus describes the current state of the NetworkPolicy.", - "conditions": "conditions holds an array of metav1.Condition that describe the state of the NetworkPolicy. 
Current service state", -} - -func (NetworkPolicyStatus) SwaggerDoc() map[string]string { - return map_NetworkPolicyStatus -} - var map_ServiceBackendPort = map[string]string{ "": "ServiceBackendPort is the service port being referenced.", "name": "name is the name of the port on the Service. This is a mutually exclusive setting with \"Number\".", diff --git a/cluster-autoscaler/vendor/k8s.io/api/networking/v1/zz_generated.deepcopy.go b/cluster-autoscaler/vendor/k8s.io/api/networking/v1/zz_generated.deepcopy.go index c95653c918ce..540873833f3c 100644 --- a/cluster-autoscaler/vendor/k8s.io/api/networking/v1/zz_generated.deepcopy.go +++ b/cluster-autoscaler/vendor/k8s.io/api/networking/v1/zz_generated.deepcopy.go @@ -499,7 +499,6 @@ func (in *NetworkPolicy) DeepCopyInto(out *NetworkPolicy) { out.TypeMeta = in.TypeMeta in.ObjectMeta.DeepCopyInto(&out.ObjectMeta) in.Spec.DeepCopyInto(&out.Spec) - in.Status.DeepCopyInto(&out.Status) return } @@ -712,29 +711,6 @@ func (in *NetworkPolicySpec) DeepCopy() *NetworkPolicySpec { return out } -// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. -func (in *NetworkPolicyStatus) DeepCopyInto(out *NetworkPolicyStatus) { - *out = *in - if in.Conditions != nil { - in, out := &in.Conditions, &out.Conditions - *out = make([]metav1.Condition, len(*in)) - for i := range *in { - (*in)[i].DeepCopyInto(&(*out)[i]) - } - } - return -} - -// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new NetworkPolicyStatus. -func (in *NetworkPolicyStatus) DeepCopy() *NetworkPolicyStatus { - if in == nil { - return nil - } - out := new(NetworkPolicyStatus) - in.DeepCopyInto(out) - return out -} - // DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. 
func (in *ServiceBackendPort) DeepCopyInto(out *ServiceBackendPort) { *out = *in diff --git a/cluster-autoscaler/vendor/k8s.io/api/rbac/v1/generated.proto b/cluster-autoscaler/vendor/k8s.io/api/rbac/v1/generated.proto index 222f2b9052b1..13ff60ea718c 100644 --- a/cluster-autoscaler/vendor/k8s.io/api/rbac/v1/generated.proto +++ b/cluster-autoscaler/vendor/k8s.io/api/rbac/v1/generated.proto @@ -66,6 +66,7 @@ message ClusterRoleBinding { // RoleRef can only reference a ClusterRole in the global namespace. // If the RoleRef cannot be resolved, the Authorizer must return an error. + // This field is immutable. optional RoleRef roleRef = 3; } @@ -140,6 +141,7 @@ message RoleBinding { // RoleRef can reference a Role in the current namespace or a ClusterRole in the global namespace. // If the RoleRef cannot be resolved, the Authorizer must return an error. + // This field is immutable. optional RoleRef roleRef = 3; } diff --git a/cluster-autoscaler/vendor/k8s.io/api/rbac/v1/types.go b/cluster-autoscaler/vendor/k8s.io/api/rbac/v1/types.go index 5a8e4a85c88d..ce845d69b426 100644 --- a/cluster-autoscaler/vendor/k8s.io/api/rbac/v1/types.go +++ b/cluster-autoscaler/vendor/k8s.io/api/rbac/v1/types.go @@ -132,6 +132,7 @@ type RoleBinding struct { // RoleRef can reference a Role in the current namespace or a ClusterRole in the global namespace. // If the RoleRef cannot be resolved, the Authorizer must return an error. + // This field is immutable. RoleRef RoleRef `json:"roleRef" protobuf:"bytes,3,opt,name=roleRef"` } @@ -209,6 +210,7 @@ type ClusterRoleBinding struct { // RoleRef can only reference a ClusterRole in the global namespace. // If the RoleRef cannot be resolved, the Authorizer must return an error. + // This field is immutable. 
RoleRef RoleRef `json:"roleRef" protobuf:"bytes,3,opt,name=roleRef"` } diff --git a/cluster-autoscaler/vendor/k8s.io/api/rbac/v1/types_swagger_doc_generated.go b/cluster-autoscaler/vendor/k8s.io/api/rbac/v1/types_swagger_doc_generated.go index 370398198bca..0471a5594466 100644 --- a/cluster-autoscaler/vendor/k8s.io/api/rbac/v1/types_swagger_doc_generated.go +++ b/cluster-autoscaler/vendor/k8s.io/api/rbac/v1/types_swagger_doc_generated.go @@ -51,7 +51,7 @@ var map_ClusterRoleBinding = map[string]string{ "": "ClusterRoleBinding references a ClusterRole, but not contain it. It can reference a ClusterRole in the global namespace, and adds who information via Subject.", "metadata": "Standard object's metadata.", "subjects": "Subjects holds references to the objects the role applies to.", - "roleRef": "RoleRef can only reference a ClusterRole in the global namespace. If the RoleRef cannot be resolved, the Authorizer must return an error.", + "roleRef": "RoleRef can only reference a ClusterRole in the global namespace. If the RoleRef cannot be resolved, the Authorizer must return an error. This field is immutable.", } func (ClusterRoleBinding) SwaggerDoc() map[string]string { @@ -105,7 +105,7 @@ var map_RoleBinding = map[string]string{ "": "RoleBinding references a role, but does not contain it. It can reference a Role in the same namespace or a ClusterRole in the global namespace. It adds who information via Subjects and namespace information by which namespace it exists in. RoleBindings in a given namespace only have effect in that namespace.", "metadata": "Standard object's metadata.", "subjects": "Subjects holds references to the objects the role applies to.", - "roleRef": "RoleRef can reference a Role in the current namespace or a ClusterRole in the global namespace. If the RoleRef cannot be resolved, the Authorizer must return an error.", + "roleRef": "RoleRef can reference a Role in the current namespace or a ClusterRole in the global namespace. 
If the RoleRef cannot be resolved, the Authorizer must return an error. This field is immutable.", } func (RoleBinding) SwaggerDoc() map[string]string { diff --git a/cluster-autoscaler/vendor/k8s.io/apiextensions-apiserver/LICENSE b/cluster-autoscaler/vendor/k8s.io/apiextensions-apiserver/LICENSE new file mode 100644 index 000000000000..d64569567334 --- /dev/null +++ b/cluster-autoscaler/vendor/k8s.io/apiextensions-apiserver/LICENSE @@ -0,0 +1,202 @@ + + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. 
+ + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. 
Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. 
You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. 
+ + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. 
In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + + END OF TERMS AND CONDITIONS + + APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "[]" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. 
+ + Copyright [yyyy] [name of copyright owner] + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. diff --git a/cluster-autoscaler/vendor/k8s.io/apiextensions-apiserver/pkg/features/OWNERS b/cluster-autoscaler/vendor/k8s.io/apiextensions-apiserver/pkg/features/OWNERS new file mode 100644 index 000000000000..3e1dd9f081d6 --- /dev/null +++ b/cluster-autoscaler/vendor/k8s.io/apiextensions-apiserver/pkg/features/OWNERS @@ -0,0 +1,4 @@ +# See the OWNERS docs at https://go.k8s.io/owners + +approvers: + - feature-approvers diff --git a/cluster-autoscaler/vendor/k8s.io/apiextensions-apiserver/pkg/features/kube_features.go b/cluster-autoscaler/vendor/k8s.io/apiextensions-apiserver/pkg/features/kube_features.go new file mode 100644 index 000000000000..1844ed8d1eb4 --- /dev/null +++ b/cluster-autoscaler/vendor/k8s.io/apiextensions-apiserver/pkg/features/kube_features.go @@ -0,0 +1,48 @@ +/* +Copyright 2017 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. 
+*/ + +package features + +import ( + utilfeature "k8s.io/apiserver/pkg/util/feature" + "k8s.io/component-base/featuregate" +) + +const ( + // Every feature gate should add method here following this template: + // + // // owner: @username + // // alpha: v1.4 + // MyFeature() bool + + // owner: @alexzielenski + // alpha: v1.28 + // + // Ignores errors raised on unchanged fields of Custom Resources + // across UPDATE/PATCH requests. + CRDValidationRatcheting featuregate.Feature = "CRDValidationRatcheting" +) + +func init() { + utilfeature.DefaultMutableFeatureGate.Add(defaultKubernetesFeatureGates) +} + +// defaultKubernetesFeatureGates consists of all known Kubernetes-specific feature keys. +// To add a new feature, define a key for it above and add it here. The features will be +// available throughout Kubernetes binaries. +var defaultKubernetesFeatureGates = map[featuregate.Feature]featuregate.FeatureSpec{ + CRDValidationRatcheting: {Default: false, PreRelease: featuregate.Alpha}, +} diff --git a/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/api/errors/OWNERS b/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/api/errors/OWNERS index 155648acb6df..1a9f5e7706b5 100644 --- a/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/api/errors/OWNERS +++ b/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/api/errors/OWNERS @@ -2,7 +2,6 @@ reviewers: - thockin - - lavalamp - smarterclayton - wojtek-t - deads2k diff --git a/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/api/meta/help.go b/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/api/meta/help.go index 1bf6b06d47f8..1fdd32c4ba3e 100644 --- a/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/api/meta/help.go +++ b/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/api/meta/help.go @@ -112,8 +112,27 @@ func getItemsPtr(list runtime.Object) (interface{}, error) { // EachListItem invokes fn on each runtime.Object in the list. Any error immediately terminates // the loop. 
+// +// If items passed to fn are retained for different durations, and you want to avoid +// retaining all items in obj as long as any item is referenced, use EachListItemWithAlloc instead. func EachListItem(obj runtime.Object, fn func(runtime.Object) error) error { + return eachListItem(obj, fn, false) +} + +// EachListItemWithAlloc works like EachListItem, but avoids retaining references to the items slice in obj. +// It does this by making a shallow copy of non-pointer items in obj. +// +// If the items passed to fn are not retained, or are retained for the same duration, use EachListItem instead for memory efficiency. +func EachListItemWithAlloc(obj runtime.Object, fn func(runtime.Object) error) error { + return eachListItem(obj, fn, true) +} + +// allocNew: Whether shallow copy is required when the elements in Object.Items are struct +func eachListItem(obj runtime.Object, fn func(runtime.Object) error, allocNew bool) error { if unstructured, ok := obj.(runtime.Unstructured); ok { + if allocNew { + return unstructured.EachListItemWithAlloc(fn) + } return unstructured.EachListItem(fn) } // TODO: Change to an interface call? @@ -140,8 +159,19 @@ func EachListItem(obj runtime.Object, fn func(runtime.Object) error) error { for i := 0; i < len; i++ { raw := items.Index(i) if takeAddr { - raw = raw.Addr() + if allocNew { + // shallow copy to avoid retaining a reference to the original list item + itemCopy := reflect.New(raw.Type()) + // assign to itemCopy and type-assert + itemCopy.Elem().Set(raw) + // reflect.New will guarantee that itemCopy must be a pointer. + raw = itemCopy + } else { + raw = raw.Addr() + } } + // raw must be a pointer or an interface + // allocate a pointer is cheap switch item := raw.Interface().(type) { case *runtime.RawExtension: if err := fn(item.Object); err != nil { @@ -166,7 +196,23 @@ func EachListItem(obj runtime.Object, fn func(runtime.Object) error) error { // ExtractList returns obj's Items element as an array of runtime.Objects. 
// Returns an error if obj is not a List type (does not have an Items member). +// +// If items in the returned list are retained for different durations, and you want to avoid +// retaining all items in obj as long as any item is referenced, use ExtractListWithAlloc instead. func ExtractList(obj runtime.Object) ([]runtime.Object, error) { + return extractList(obj, false) +} + +// ExtractListWithAlloc works like ExtractList, but avoids retaining references to the items slice in obj. +// It does this by making a shallow copy of non-pointer items in obj. +// +// If the items in the returned list are not retained, or are retained for the same duration, use ExtractList instead for memory efficiency. +func ExtractListWithAlloc(obj runtime.Object) ([]runtime.Object, error) { + return extractList(obj, true) +} + +// allocNew: Whether shallow copy is required when the elements in Object.Items are struct +func extractList(obj runtime.Object, allocNew bool) ([]runtime.Object, error) { itemsPtr, err := GetItemsPtr(obj) if err != nil { return nil, err @@ -176,10 +222,17 @@ func ExtractList(obj runtime.Object) ([]runtime.Object, error) { return nil, err } list := make([]runtime.Object, items.Len()) + if len(list) == 0 { + return list, nil + } + elemType := items.Type().Elem() + isRawExtension := elemType == rawExtensionObjectType + implementsObject := elemType.Implements(objectType) for i := range list { raw := items.Index(i) - switch item := raw.Interface().(type) { - case runtime.RawExtension: + switch { + case isRawExtension: + item := raw.Interface().(runtime.RawExtension) switch { case item.Object != nil: list[i] = item.Object @@ -189,8 +242,18 @@ func ExtractList(obj runtime.Object) ([]runtime.Object, error) { default: list[i] = nil } - case runtime.Object: - list[i] = item + case implementsObject: + list[i] = raw.Interface().(runtime.Object) + case allocNew: + // shallow copy to avoid retaining a reference to the original list item + itemCopy := reflect.New(raw.Type()) + 
// assign to itemCopy and type-assert + itemCopy.Elem().Set(raw) + var ok bool + // reflect.New will guarantee that itemCopy must be a pointer. + if list[i], ok = itemCopy.Interface().(runtime.Object); !ok { + return nil, fmt.Errorf("%v: item[%v]: Expected object, got %#v(%s)", obj, i, raw.Interface(), raw.Kind()) + } default: var found bool if list[i], found = raw.Addr().Interface().(runtime.Object); !found { @@ -201,8 +264,12 @@ func ExtractList(obj runtime.Object) ([]runtime.Object, error) { return list, nil } -// objectSliceType is the type of a slice of Objects -var objectSliceType = reflect.TypeOf([]runtime.Object{}) +var ( + // objectSliceType is the type of a slice of Objects + objectSliceType = reflect.TypeOf([]runtime.Object{}) + objectType = reflect.TypeOf((*runtime.Object)(nil)).Elem() + rawExtensionObjectType = reflect.TypeOf(runtime.RawExtension{}) +) // LenList returns the length of this list or 0 if it is not a list. func LenList(list runtime.Object) int { @@ -237,7 +304,7 @@ func SetList(list runtime.Object, objects []runtime.Object) error { slice := reflect.MakeSlice(items.Type(), len(objects), len(objects)) for i := range objects { dest := slice.Index(i) - if dest.Type() == reflect.TypeOf(runtime.RawExtension{}) { + if dest.Type() == rawExtensionObjectType { dest = dest.FieldByName("Object") } diff --git a/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/api/resource/OWNERS b/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/api/resource/OWNERS index d1c9f53074d5..063fd285dad1 100644 --- a/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/api/resource/OWNERS +++ b/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/api/resource/OWNERS @@ -2,7 +2,6 @@ reviewers: - thockin - - lavalamp - smarterclayton - wojtek-t - derekwaynecarr diff --git a/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/apis/meta/v1/generated.proto b/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/apis/meta/v1/generated.proto index 48955dca85bb..a2cd8015fb5f 100644 --- 
a/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/apis/meta/v1/generated.proto +++ b/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/apis/meta/v1/generated.proto @@ -425,8 +425,6 @@ message LabelSelector { // relates the key and values. message LabelSelectorRequirement { // key is the label key that the selector applies to. - // +patchMergeKey=key - // +patchStrategy=merge optional string key = 1; // operator represents a key's relationship to a set of values. diff --git a/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/apis/meta/v1/types.go b/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/apis/meta/v1/types.go index 352d58ebc24c..8a8ff701899a 100644 --- a/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/apis/meta/v1/types.go +++ b/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/apis/meta/v1/types.go @@ -995,6 +995,24 @@ const ( // CauseTypeFieldValueNotSupported is used to report valid (as per formatting rules) // values that can not be handled (e.g. an enumerated string). CauseTypeFieldValueNotSupported CauseType = "FieldValueNotSupported" + // CauseTypeForbidden is used to report valid (as per formatting rules) + // values which would be accepted under some conditions, but which are not + // permitted by the current conditions (such as security policy). See + // Forbidden(). + CauseTypeForbidden CauseType = "FieldValueForbidden" + // CauseTypeTooLong is used to report that the given value is too long. + // This is similar to ErrorTypeInvalid, but the error will not include the + // too-long value. See TooLong(). + CauseTypeTooLong CauseType = "FieldValueTooLong" + // CauseTypeTooMany is used to report "too many". This is used to + // report that a given list has too many items. This is similar to FieldValueTooLong, + // but the error indicates quantity instead of length. + CauseTypeTooMany CauseType = "FieldValueTooMany" + // CauseTypeInternal is used to report other errors that are not related + // to user input. See InternalError(). 
+ CauseTypeInternal CauseType = "InternalError" + // CauseTypeTypeInvalid is for the value did not match the schema type for that field + CauseTypeTypeInvalid CauseType = "FieldValueTypeInvalid" // CauseTypeUnexpectedServerResponse is used to report when the server responded to the client // without the expected return type. The presence of this cause indicates the error may be // due to an intervening proxy or the server software malfunctioning. @@ -1207,9 +1225,7 @@ type LabelSelector struct { // relates the key and values. type LabelSelectorRequirement struct { // key is the label key that the selector applies to. - // +patchMergeKey=key - // +patchStrategy=merge - Key string `json:"key" patchStrategy:"merge" patchMergeKey:"key" protobuf:"bytes,1,opt,name=key"` + Key string `json:"key" protobuf:"bytes,1,opt,name=key"` // operator represents a key's relationship to a set of values. // Valid operators are In, NotIn, Exists and DoesNotExist. Operator LabelSelectorOperator `json:"operator" protobuf:"bytes,2,opt,name=operator,casttype=LabelSelectorOperator"` diff --git a/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/apis/meta/v1/unstructured/unstructured.go b/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/apis/meta/v1/unstructured/unstructured.go index a499eee8ebb8..40d289f3750c 100644 --- a/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/apis/meta/v1/unstructured/unstructured.go +++ b/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/apis/meta/v1/unstructured/unstructured.go @@ -101,6 +101,11 @@ func (obj *Unstructured) EachListItem(fn func(runtime.Object) error) error { return nil } +func (obj *Unstructured) EachListItemWithAlloc(fn func(runtime.Object) error) error { + // EachListItem has allocated a new Object for the user, we can use it directly. 
+ return obj.EachListItem(fn) +} + func (obj *Unstructured) UnstructuredContent() map[string]interface{} { if obj.Object == nil { return make(map[string]interface{}) diff --git a/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/apis/meta/v1/unstructured/unstructured_list.go b/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/apis/meta/v1/unstructured/unstructured_list.go index 5028f5fb57a3..82beda2a29c2 100644 --- a/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/apis/meta/v1/unstructured/unstructured_list.go +++ b/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/apis/meta/v1/unstructured/unstructured_list.go @@ -52,6 +52,15 @@ func (u *UnstructuredList) EachListItem(fn func(runtime.Object) error) error { return nil } +func (u *UnstructuredList) EachListItemWithAlloc(fn func(runtime.Object) error) error { + for i := range u.Items { + if err := fn(&Unstructured{Object: u.Items[i].Object}); err != nil { + return err + } + } + return nil +} + // NewEmptyInstance returns a new instance of the concrete type containing only kind/apiVersion and no other data. // This should be called instead of reflect.New() for unstructured types because the go type alone does not preserve kind/apiVersion info. 
func (u *UnstructuredList) NewEmptyInstance() runtime.Unstructured { diff --git a/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/runtime/codec.go b/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/runtime/codec.go index 7fc513dd0e7f..73f85286c235 100644 --- a/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/runtime/codec.go +++ b/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/runtime/codec.go @@ -45,7 +45,6 @@ func NewCodec(e Encoder, d Decoder) Codec { // Encode is a convenience wrapper for encoding to a []byte from an Encoder func Encode(e Encoder, obj Object) ([]byte, error) { - // TODO: reuse buffer buf := &bytes.Buffer{} if err := e.Encode(obj, buf); err != nil { return nil, err diff --git a/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/runtime/converter.go b/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/runtime/converter.go index 90bf487e354c..62eb27afc195 100644 --- a/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/runtime/converter.go +++ b/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/runtime/converter.go @@ -231,7 +231,7 @@ func (c *fromUnstructuredContext) pushKey(key string) { } -// FromUnstructuredWIthValidation converts an object from map[string]interface{} representation into a concrete type. +// FromUnstructuredWithValidation converts an object from map[string]interface{} representation into a concrete type. // It uses encoding/json/Unmarshaler if object implements it or reflection if not. // It takes a validationDirective that indicates how to behave when it encounters unknown fields. 
func (c *unstructuredConverter) FromUnstructuredWithValidation(u map[string]interface{}, obj interface{}, returnUnknownFields bool) error { @@ -465,7 +465,7 @@ func sliceFromUnstructured(sv, dv reflect.Value, ctx *fromUnstructuredContext) e } dv.SetBytes(data) } else { - dv.Set(reflect.Zero(dt)) + dv.Set(reflect.MakeSlice(dt, 0, 0)) } return nil } diff --git a/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/runtime/interfaces.go b/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/runtime/interfaces.go index 710a977952f2..e89ea8939178 100644 --- a/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/runtime/interfaces.go +++ b/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/runtime/interfaces.go @@ -365,4 +365,9 @@ type Unstructured interface { // error should terminate the iteration. If IsList() returns false, this method should return an error // instead of calling the provided function. EachListItem(func(Object) error) error + // EachListItemWithAlloc works like EachListItem, but avoids retaining references to a slice of items. + // It does this by making a shallow copy of non-pointer items before passing them to fn. + // + // If the items passed to fn are not retained, or are retained for the same duration, use EachListItem instead for memory efficiency. + EachListItemWithAlloc(func(Object) error) error } diff --git a/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/runtime/schema/group_version.go b/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/runtime/schema/group_version.go index 54ccb7a74c77..d1c37c9429c9 100644 --- a/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/runtime/schema/group_version.go +++ b/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/runtime/schema/group_version.go @@ -39,7 +39,7 @@ func ParseResourceArg(arg string) (*GroupVersionResource, GroupResource) { // ParseKindArg takes the common style of string which may be either `Kind.group.com` or `Kind.version.group.com` // and parses it out into both possibilities. 
This code takes no responsibility for knowing which representation was intended // but with a knowledge of all GroupKinds, calling code can take a very good guess. If there are only two segments, then -// `*GroupVersionResource` is nil. +// `*GroupVersionKind` is nil. // `Kind.group.com` -> `group=com, version=group, kind=Kind` and `group=group.com, kind=Kind` func ParseKindArg(arg string) (*GroupVersionKind, GroupKind) { var gvk *GroupVersionKind diff --git a/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/runtime/splice.go b/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/runtime/splice.go new file mode 100644 index 000000000000..2badb7b97f3a --- /dev/null +++ b/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/runtime/splice.go @@ -0,0 +1,76 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package runtime + +import ( + "bytes" + "io" +) + +// Splice is the interface that wraps the Splice method. +// +// Splice moves data from given slice without copying the underlying data for +// efficiency purpose. Therefore, the caller should make sure the underlying +// data is not changed later. +type Splice interface { + Splice([]byte) + io.Writer + Reset() + Bytes() []byte +} + +// A spliceBuffer implements Splice and io.Writer interfaces. +type spliceBuffer struct { + raw []byte + buf *bytes.Buffer +} + +func NewSpliceBuffer() Splice { + return &spliceBuffer{} +} + +// Splice implements the Splice interface. 
+func (sb *spliceBuffer) Splice(raw []byte) { + sb.raw = raw +} + +// Write implements the io.Writer interface. +func (sb *spliceBuffer) Write(p []byte) (n int, err error) { + if sb.buf == nil { + sb.buf = &bytes.Buffer{} + } + return sb.buf.Write(p) +} + +// Reset resets the buffer to be empty. +func (sb *spliceBuffer) Reset() { + if sb.buf != nil { + sb.buf.Reset() + } + sb.raw = nil +} + +// Bytes returns the data held by the buffer. +func (sb *spliceBuffer) Bytes() []byte { + if sb.buf != nil && len(sb.buf.Bytes()) > 0 { + return sb.buf.Bytes() + } + if sb.raw != nil { + return sb.raw + } + return []byte{} +} diff --git a/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/types/namespacedname.go b/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/types/namespacedname.go index 29fb4f950a40..db18ce1ce21b 100644 --- a/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/types/namespacedname.go +++ b/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/types/namespacedname.go @@ -41,7 +41,8 @@ func (n NamespacedName) String() string { // MarshalLog emits a struct containing required key/value pair func (n NamespacedName) MarshalLog() interface{} { return struct { - Name, Namespace string + Name string `json:"name"` + Namespace string `json:"namespace,omitempty"` }{ Name: n.Name, Namespace: n.Namespace, diff --git a/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/util/cache/expiring.go b/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/util/cache/expiring.go index 0d2f153bf9eb..1396274c7bf9 100644 --- a/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/util/cache/expiring.go +++ b/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/util/cache/expiring.go @@ -40,6 +40,13 @@ func NewExpiringWithClock(clock clock.Clock) *Expiring { // Expiring is a map whose entries expire after a per-entry timeout. type Expiring struct { + // AllowExpiredGet causes the expiration check to be skipped on Get. + // It should only be used when a key always corresponds to the exact same value. 
+ // Thus when this field is true, expired keys are considered valid + // until the next call to Set (which causes the GC to run). + // It may not be changed concurrently with calls to Get. + AllowExpiredGet bool + clock clock.Clock // mu protects the below fields @@ -70,7 +77,10 @@ func (c *Expiring) Get(key interface{}) (val interface{}, ok bool) { c.mu.RLock() defer c.mu.RUnlock() e, ok := c.cache[key] - if !ok || !c.clock.Now().Before(e.expiry) { + if !ok { + return nil, false + } + if !c.AllowExpiredGet && !c.clock.Now().Before(e.expiry) { return nil, false } return e.val, true diff --git a/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/util/diff/diff.go b/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/util/diff/diff.go index ec4002e38a26..fc0301844906 100644 --- a/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/util/diff/diff.go +++ b/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/util/diff/diff.go @@ -23,34 +23,20 @@ import ( "strings" "text/tabwriter" - "github.com/davecgh/go-spew/spew" "github.com/google/go-cmp/cmp" + "k8s.io/apimachinery/pkg/util/dump" ) -// StringDiff diffs a and b and returns a human readable diff. -func StringDiff(a, b string) string { - ba := []byte(a) - bb := []byte(b) - out := []byte{} - i := 0 - for ; i < len(ba) && i < len(bb); i++ { - if ba[i] != bb[i] { - break - } - out = append(out, ba[i]) - } - out = append(out, []byte("\n\nA: ")...) - out = append(out, ba[i:]...) - out = append(out, []byte("\n\nB: ")...) - out = append(out, bb[i:]...) - out = append(out, []byte("\n\n")...) - return string(out) -} - func legacyDiff(a, b interface{}) string { return cmp.Diff(a, b) } +// StringDiff diffs a and b and returns a human readable diff. +// DEPRECATED: use github.com/google/go-cmp/cmp.Diff +func StringDiff(a, b string) string { + return legacyDiff(a, b) +} + // ObjectDiff prints the diff of two go objects and fails if the objects // contain unhandled unexported fields. 
// DEPRECATED: use github.com/google/go-cmp/cmp.Diff @@ -75,13 +61,8 @@ func ObjectReflectDiff(a, b interface{}) string { // ObjectGoPrintSideBySide prints a and b as textual dumps side by side, // enabling easy visual scanning for mismatches. func ObjectGoPrintSideBySide(a, b interface{}) string { - s := spew.ConfigState{ - Indent: " ", - // Extra deep spew. - DisableMethods: true, - } - sA := s.Sdump(a) - sB := s.Sdump(b) + sA := dump.Pretty(a) + sB := dump.Pretty(b) linesA := strings.Split(sA, "\n") linesB := strings.Split(sB, "\n") diff --git a/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/util/dump/dump.go b/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/util/dump/dump.go new file mode 100644 index 000000000000..cf61ef76aed3 --- /dev/null +++ b/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/util/dump/dump.go @@ -0,0 +1,54 @@ +/* +Copyright 2021 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. 
+*/ + +package dump + +import ( + "github.com/davecgh/go-spew/spew" +) + +var prettyPrintConfig = &spew.ConfigState{ + Indent: " ", + DisableMethods: true, + DisablePointerAddresses: true, + DisableCapacities: true, +} + +// The config MUST NOT be changed because that could change the result of a hash operation +var prettyPrintConfigForHash = &spew.ConfigState{ + Indent: " ", + SortKeys: true, + DisableMethods: true, + SpewKeys: true, + DisablePointerAddresses: true, + DisableCapacities: true, +} + +// Pretty wrap the spew.Sdump with Indent, and disabled methods like error() and String() +// The output may change over time, so for guaranteed output please take more direct control +func Pretty(a interface{}) string { + return prettyPrintConfig.Sdump(a) +} + +// ForHash keeps the original Spew.Sprintf format to ensure the same checksum +func ForHash(a interface{}) string { + return prettyPrintConfigForHash.Sprintf("%#v", a) +} + +// OneLine outputs the object in one line +func OneLine(a interface{}) string { + return prettyPrintConfig.Sprintf("%#v", a) +} diff --git a/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/util/httpstream/spdy/roundtripper.go b/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/util/httpstream/spdy/roundtripper.go index 27c3d2d56451..7fe52ee568ed 100644 --- a/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/util/httpstream/spdy/roundtripper.go +++ b/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/util/httpstream/spdy/roundtripper.go @@ -23,7 +23,7 @@ import ( "encoding/base64" "errors" "fmt" - "io/ioutil" + "io" "net" "net/http" "net/http/httputil" @@ -337,7 +337,7 @@ func (s *SpdyRoundTripper) NewConnection(resp *http.Response) (httpstream.Connec if (resp.StatusCode != http.StatusSwitchingProtocols) || !strings.Contains(connectionHeader, strings.ToLower(httpstream.HeaderUpgrade)) || !strings.Contains(upgradeHeader, strings.ToLower(HeaderSpdy31)) { defer resp.Body.Close() responseError := "" - responseErrorBytes, err := 
ioutil.ReadAll(resp.Body) + responseErrorBytes, err := io.ReadAll(resp.Body) if err != nil { responseError = "unable to read error from server response" } else { diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/util/wsstream/conn.go b/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/util/httpstream/wsstream/conn.go similarity index 100% rename from cluster-autoscaler/vendor/k8s.io/apiserver/pkg/util/wsstream/conn.go rename to cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/util/httpstream/wsstream/conn.go diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/util/wsstream/doc.go b/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/util/httpstream/wsstream/doc.go similarity index 91% rename from cluster-autoscaler/vendor/k8s.io/apiserver/pkg/util/wsstream/doc.go rename to cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/util/httpstream/wsstream/doc.go index 694ce81d20d5..a1aa1688bd94 100644 --- a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/util/wsstream/doc.go +++ b/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/util/httpstream/wsstream/doc.go @@ -18,4 +18,4 @@ limitations under the License. // The Conn type allows callers to multiplex multiple read/write channels over // a single websocket. The Reader type allows an io.Reader to be copied over // a websocket channel as binary content. 
-package wsstream // import "k8s.io/apiserver/pkg/util/wsstream" +package wsstream // import "k8s.io/apimachinery/pkg/util/httpstream/wsstream" diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/util/wsstream/stream.go b/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/util/httpstream/wsstream/stream.go similarity index 100% rename from cluster-autoscaler/vendor/k8s.io/apiserver/pkg/util/wsstream/stream.go rename to cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/util/httpstream/wsstream/stream.go diff --git a/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/util/intstr/intstr.go b/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/util/intstr/intstr.go index 5e8009704535..0ea88156bef1 100644 --- a/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/util/intstr/intstr.go +++ b/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/util/intstr/intstr.go @@ -54,7 +54,7 @@ const ( // FromInt creates an IntOrString object with an int32 value. It is // your responsibility not to call this method with a value greater // than int32. -// TODO: convert to (val int32) +// Deprecated: use FromInt32 instead. func FromInt(val int) IntOrString { if val > math.MaxInt32 || val < math.MinInt32 { klog.Errorf("value: %d overflows int32\n%s\n", val, debug.Stack()) @@ -62,6 +62,11 @@ func FromInt(val int) IntOrString { return IntOrString{Type: Int, IntVal: int32(val)} } +// FromInt32 creates an IntOrString object with an int32 value. +func FromInt32(val int32) IntOrString { + return IntOrString{Type: Int, IntVal: val} +} + // FromString creates an IntOrString object with a string value. 
func FromString(val string) IntOrString { return IntOrString{Type: String, StrVal: val} diff --git a/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/util/managedfields/internal/fieldmanager.go b/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/util/managedfields/internal/fieldmanager.go index f3111d4bc723..eca04a711638 100644 --- a/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/util/managedfields/internal/fieldmanager.go +++ b/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/util/managedfields/internal/fieldmanager.go @@ -56,17 +56,20 @@ func NewFieldManager(f Manager, subresource string) *FieldManager { // newDefaultFieldManager is a helper function which wraps a Manager with certain default logic. func NewDefaultFieldManager(f Manager, typeConverter TypeConverter, objectConverter runtime.ObjectConvertor, objectCreater runtime.ObjectCreater, kind schema.GroupVersionKind, subresource string) *FieldManager { return NewFieldManager( - NewLastAppliedUpdater( - NewLastAppliedManager( - NewProbabilisticSkipNonAppliedManager( - NewCapManagersManager( - NewBuildManagerInfoManager( - NewManagedFieldsUpdater( - NewStripMetaManager(f), - ), kind.GroupVersion(), subresource, - ), DefaultMaxUpdateManagers, - ), objectCreater, kind, DefaultTrackOnCreateProbability, - ), typeConverter, objectConverter, kind.GroupVersion()), + NewVersionCheckManager( + NewLastAppliedUpdater( + NewLastAppliedManager( + NewProbabilisticSkipNonAppliedManager( + NewCapManagersManager( + NewBuildManagerInfoManager( + NewManagedFieldsUpdater( + NewStripMetaManager(f), + ), kind.GroupVersion(), subresource, + ), DefaultMaxUpdateManagers, + ), objectCreater, DefaultTrackOnCreateProbability, + ), typeConverter, objectConverter, kind.GroupVersion(), + ), + ), kind, ), subresource, ) } diff --git a/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/util/managedfields/internal/skipnonapplied.go b/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/util/managedfields/internal/skipnonapplied.go index 
6b281ec1e575..f24c040edd03 100644 --- a/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/util/managedfields/internal/skipnonapplied.go +++ b/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/util/managedfields/internal/skipnonapplied.go @@ -22,13 +22,11 @@ import ( "k8s.io/apimachinery/pkg/api/meta" "k8s.io/apimachinery/pkg/runtime" - "k8s.io/apimachinery/pkg/runtime/schema" ) type skipNonAppliedManager struct { fieldManager Manager objectCreater runtime.ObjectCreater - gvk schema.GroupVersionKind beforeApplyManagerName string probability float32 } @@ -36,17 +34,16 @@ type skipNonAppliedManager struct { var _ Manager = &skipNonAppliedManager{} // NewSkipNonAppliedManager creates a new wrapped FieldManager that only starts tracking managers after the first apply. -func NewSkipNonAppliedManager(fieldManager Manager, objectCreater runtime.ObjectCreater, gvk schema.GroupVersionKind) Manager { - return NewProbabilisticSkipNonAppliedManager(fieldManager, objectCreater, gvk, 0.0) +func NewSkipNonAppliedManager(fieldManager Manager, objectCreater runtime.ObjectCreater) Manager { + return NewProbabilisticSkipNonAppliedManager(fieldManager, objectCreater, 0.0) } // NewProbabilisticSkipNonAppliedManager creates a new wrapped FieldManager that starts tracking managers after the first apply, // or starts tracking on create with p probability. -func NewProbabilisticSkipNonAppliedManager(fieldManager Manager, objectCreater runtime.ObjectCreater, gvk schema.GroupVersionKind, p float32) Manager { +func NewProbabilisticSkipNonAppliedManager(fieldManager Manager, objectCreater runtime.ObjectCreater, p float32) Manager { return &skipNonAppliedManager{ fieldManager: fieldManager, objectCreater: objectCreater, - gvk: gvk, beforeApplyManagerName: "before-first-apply", probability: p, } @@ -78,9 +75,10 @@ func (f *skipNonAppliedManager) Update(liveObj, newObj runtime.Object, managed M // Apply implements Manager. 
func (f *skipNonAppliedManager) Apply(liveObj, appliedObj runtime.Object, managed Managed, fieldManager string, force bool) (runtime.Object, Managed, error) { if len(managed.Fields()) == 0 { - emptyObj, err := f.objectCreater.New(f.gvk) + gvk := appliedObj.GetObjectKind().GroupVersionKind() + emptyObj, err := f.objectCreater.New(gvk) if err != nil { - return nil, nil, fmt.Errorf("failed to create empty object of type %v: %v", f.gvk, err) + return nil, nil, fmt.Errorf("failed to create empty object of type %v: %v", gvk, err) } liveObj, managed, err = f.fieldManager.Update(emptyObj, liveObj, managed, f.beforeApplyManagerName) if err != nil { diff --git a/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/util/managedfields/internal/structuredmerge.go b/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/util/managedfields/internal/structuredmerge.go index eb5598ac3bfa..2112c9ab7e90 100644 --- a/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/util/managedfields/internal/structuredmerge.go +++ b/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/util/managedfields/internal/structuredmerge.go @@ -41,6 +41,9 @@ var _ Manager = &structuredMergeManager{} // NewStructuredMergeManager creates a new Manager that merges apply requests // and update managed fields for other types of requests. 
func NewStructuredMergeManager(typeConverter TypeConverter, objectConverter runtime.ObjectConvertor, objectDefaulter runtime.ObjectDefaulter, gv schema.GroupVersion, hub schema.GroupVersion, resetFields map[fieldpath.APIVersion]*fieldpath.Set) (Manager, error) { + if typeConverter == nil { + return nil, fmt.Errorf("typeconverter must not be nil") + } return &structuredMergeManager{ typeConverter: typeConverter, objectConverter: objectConverter, diff --git a/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/util/managedfields/internal/versioncheck.go b/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/util/managedfields/internal/versioncheck.go new file mode 100644 index 000000000000..ee1e2bca7017 --- /dev/null +++ b/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/util/managedfields/internal/versioncheck.go @@ -0,0 +1,52 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package internal + +import ( + "fmt" + + "k8s.io/apimachinery/pkg/api/errors" + "k8s.io/apimachinery/pkg/runtime" + "k8s.io/apimachinery/pkg/runtime/schema" +) + +type versionCheckManager struct { + fieldManager Manager + gvk schema.GroupVersionKind +} + +var _ Manager = &versionCheckManager{} + +// NewVersionCheckManager creates a manager that makes sure that the +// applied object is in the proper version. 
+func NewVersionCheckManager(fieldManager Manager, gvk schema.GroupVersionKind) Manager { + return &versionCheckManager{fieldManager: fieldManager, gvk: gvk} +} + +// Update implements Manager. +func (f *versionCheckManager) Update(liveObj, newObj runtime.Object, managed Managed, manager string) (runtime.Object, Managed, error) { + // Nothing to do for updates, this is checked in many other places. + return f.fieldManager.Update(liveObj, newObj, managed, manager) +} + +// Apply implements Manager. +func (f *versionCheckManager) Apply(liveObj, appliedObj runtime.Object, managed Managed, fieldManager string, force bool) (runtime.Object, Managed, error) { + if gvk := appliedObj.GetObjectKind().GroupVersionKind(); gvk != f.gvk { + return nil, nil, errors.NewBadRequest(fmt.Sprintf("invalid object type: %v", gvk)) + } + return f.fieldManager.Apply(liveObj, appliedObj, managed, fieldManager, force) +} diff --git a/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/util/mergepatch/util.go b/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/util/mergepatch/util.go index a20efd187159..25626cf3af2a 100644 --- a/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/util/mergepatch/util.go +++ b/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/util/mergepatch/util.go @@ -20,7 +20,7 @@ import ( "fmt" "reflect" - "github.com/davecgh/go-spew/spew" + "k8s.io/apimachinery/pkg/util/dump" "sigs.k8s.io/yaml" ) @@ -76,7 +76,7 @@ func ToYAMLOrError(v interface{}) string { func toYAML(v interface{}) (string, error) { y, err := yaml.Marshal(v) if err != nil { - return "", fmt.Errorf("yaml marshal failed:%v\n%v\n", err, spew.Sdump(v)) + return "", fmt.Errorf("yaml marshal failed:%v\n%v\n", err, dump.Pretty(v)) } return string(y), nil diff --git a/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/util/net/util.go b/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/util/net/util.go index 1c2aba55f7bf..1635e69a5c00 100644 --- a/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/util/net/util.go +++ 
b/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/util/net/util.go @@ -20,6 +20,7 @@ import ( "errors" "net" "reflect" + "strings" "syscall" ) @@ -47,6 +48,11 @@ func IsConnectionReset(err error) bool { return false } +// Returns if the given err is "http2: client connection lost" error. +func IsHTTP2ConnectionLost(err error) bool { + return err != nil && strings.Contains(err.Error(), "http2: client connection lost") +} + // Returns if the given err is "connection refused" error func IsConnectionRefused(err error) bool { var errno syscall.Errno diff --git a/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/util/proxy/transport.go b/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/util/proxy/transport.go index 489d9b042641..5a2dd6e14c87 100644 --- a/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/util/proxy/transport.go +++ b/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/util/proxy/transport.go @@ -22,7 +22,6 @@ import ( "compress/gzip" "fmt" "io" - "io/ioutil" "net/http" "net/url" "path" @@ -263,7 +262,7 @@ func (t *Transport) rewriteResponse(req *http.Request, resp *http.Response) (*ht return resp, err } - resp.Body = ioutil.NopCloser(newContent) + resp.Body = io.NopCloser(newContent) // Update header node with new content-length // TODO: Remove any hash/signature headers here? 
resp.Header.Del("Content-Length") diff --git a/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/util/proxy/upgradeaware.go b/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/util/proxy/upgradeaware.go index a5bb58575807..ac2ada5472c3 100644 --- a/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/util/proxy/upgradeaware.go +++ b/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/util/proxy/upgradeaware.go @@ -21,7 +21,6 @@ import ( "bytes" "fmt" "io" - "io/ioutil" "log" "net" "net/http" @@ -148,7 +147,7 @@ func (onewayRoundTripper) RoundTrip(req *http.Request) (*http.Response, error) { return &http.Response{ Status: "200 OK", StatusCode: http.StatusOK, - Body: ioutil.NopCloser(&bytes.Buffer{}), + Body: io.NopCloser(&bytes.Buffer{}), Request: req, }, nil } diff --git a/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/util/strategicpatch/patch.go b/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/util/strategicpatch/patch.go index 3ee683b99701..920c113bbd73 100644 --- a/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/util/strategicpatch/patch.go +++ b/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/util/strategicpatch/patch.go @@ -1182,7 +1182,13 @@ func mergePatchIntoOriginal(original, patch map[string]interface{}, schema Looku merged = originalFieldValue case !foundOriginal && foundPatch: // list was added - merged = patchFieldValue + v, keep := removeDirectives(patchFieldValue) + if !keep { + // Shouldn't be possible since patchFieldValue is a slice + continue + } + + merged = v.([]interface{}) case foundOriginal && foundPatch: merged, err = mergeSliceHandler(originalList, patchList, subschema, patchStrategy, patchMeta.GetPatchMergeKey(), false, mergeOptions) @@ -1270,6 +1276,42 @@ func partitionMapsByPresentInList(original, partitionBy []interface{}, mergeKey return patch, serverOnly, nil } +// Removes directives from an object and returns value to use instead and whether +// or not the field/index should even be kept +// May modify input +func 
removeDirectives(obj interface{}) (interface{}, bool) { + if obj == nil { + return obj, true + } else if typedV, ok := obj.(map[string]interface{}); ok { + if _, hasDirective := typedV[directiveMarker]; hasDirective { + return nil, false + } + + for k, v := range typedV { + var keep bool + typedV[k], keep = removeDirectives(v) + if !keep { + delete(typedV, k) + } + } + return typedV, true + } else if typedV, ok := obj.([]interface{}); ok { + var res []interface{} + if typedV != nil { + // Make sure res is non-nil if patch is non-nil + res = []interface{}{} + } + for _, v := range typedV { + if newV, keep := removeDirectives(v); keep { + res = append(res, newV) + } + } + return res, true + } else { + return obj, true + } +} + // Merge fields from a patch map into the original map. Note: This may modify // both the original map and the patch because getting a deep copy of a map in // golang is highly non-trivial. @@ -1333,7 +1375,10 @@ func mergeMap(original, patch map[string]interface{}, schema LookupPatchMeta, me if mergeOptions.IgnoreUnmatchedNulls { discardNullValuesFromPatch(patchV) } - original[k] = patchV + original[k], ok = removeDirectives(patchV) + if !ok { + delete(original, k) + } } continue } @@ -1345,7 +1390,10 @@ func mergeMap(original, patch map[string]interface{}, schema LookupPatchMeta, me if mergeOptions.IgnoreUnmatchedNulls { discardNullValuesFromPatch(patchV) } - original[k] = patchV + original[k], ok = removeDirectives(patchV) + if !ok { + delete(original, k) + } } continue } @@ -1372,7 +1420,11 @@ func mergeMap(original, patch map[string]interface{}, schema LookupPatchMeta, me } original[k], err = mergeSliceHandler(original[k], patchV, subschema, patchStrategy, patchMeta.GetPatchMergeKey(), isDeleteList, mergeOptions) default: - original[k] = patchV + original[k], ok = removeDirectives(patchV) + if !ok { + // if patchV itself is a directive, then don't keep it + delete(original, k) + } } if err != nil { return nil, err @@ -1425,7 +1477,8 @@ 
func mergeSliceHandler(original, patch interface{}, schema LookupPatchMeta, return nil, err } - if fieldPatchStrategy == mergeDirective { + // Delete lists are handled the same way regardless of what the field's patch strategy is + if fieldPatchStrategy == mergeDirective || isDeleteList { return mergeSlice(typedOriginal, typedPatch, schema, fieldPatchMergeKey, mergeOptions, isDeleteList) } else { return typedPatch, nil diff --git a/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/util/version/version.go b/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/util/version/version.go index 8c997ec4502b..4c6195695336 100644 --- a/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/util/version/version.go +++ b/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/util/version/version.go @@ -121,6 +121,11 @@ func MustParseSemantic(str string) *Version { return v } +// MajorMinor returns a version with the provided major and minor version. +func MajorMinor(major, minor uint) *Version { + return &Version{components: []uint{major, minor}} +} + // Major returns the major release number func (v *Version) Major() uint { return v.components[0] diff --git a/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/util/wait/loop.go b/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/util/wait/loop.go index 51864d70f956..0dd13c626c82 100644 --- a/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/util/wait/loop.go +++ b/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/util/wait/loop.go @@ -27,9 +27,11 @@ import ( // the provided timer until the provided context is cancelled, the condition returns // true, or the condition returns an error. If sliding is true, the period is computed // after condition runs. If it is false then period includes the runtime for condition. -// If immediate is false the first delay happens before any call to condition. The -// returned error is the error returned by the last condition or the context error if -// the context was terminated. 
+// If immediate is false the first delay happens before any call to condition, if +// immediate is true the condition will be invoked before waiting and guarantees that +// the condition is invoked at least once, regardless of whether the context has been +// cancelled. The returned error is the error returned by the last condition or the +// context error if the context was terminated. // // This is the common loop construct for all polling in the wait package. func loopConditionUntilContext(ctx context.Context, t Timer, immediate, sliding bool, condition ConditionWithContextFunc) error { @@ -38,8 +40,17 @@ func loopConditionUntilContext(ctx context.Context, t Timer, immediate, sliding var timeCh <-chan time.Time doneCh := ctx.Done() + // if immediate is true the condition is + // guaranteed to be executed at least once, // if we haven't requested immediate execution, delay once - if !immediate { + if immediate { + if ok, err := func() (bool, error) { + defer runtime.HandleCrash() + return condition(ctx) + }(); err != nil || ok { + return err + } + } else { timeCh = t.C() select { case <-doneCh: diff --git a/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/util/wait/poll.go b/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/util/wait/poll.go index 32e8688ca0f0..231d4c384239 100644 --- a/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/util/wait/poll.go +++ b/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/util/wait/poll.go @@ -38,10 +38,10 @@ func PollUntilContextCancel(ctx context.Context, interval time.Duration, immedia // a deadline and is equivalent to: // // deadlineCtx, deadlineCancel := context.WithTimeout(ctx, timeout) -// err := PollUntilContextCancel(ctx, interval, immediate, condition) +// err := PollUntilContextCancel(deadlineCtx, interval, immediate, condition) // // The deadline context will be cancelled if the Poll succeeds before the timeout, simplifying -// inline usage. All other behavior is identical to PollWithContextTimeout. 
+// inline usage. All other behavior is identical to PollUntilContextCancel. func PollUntilContextTimeout(ctx context.Context, interval, timeout time.Duration, immediate bool, condition ConditionWithContextFunc) error { deadlineCtx, deadlineCancel := context.WithTimeout(ctx, timeout) defer deadlineCancel() @@ -59,7 +59,7 @@ func PollUntilContextTimeout(ctx context.Context, interval, timeout time.Duratio // // If you want to Poll something forever, see PollInfinite. // -// Deprecated: This method does not return errors from context, use PollWithContextTimeout. +// Deprecated: This method does not return errors from context, use PollUntilContextTimeout. // Note that the new method will no longer return ErrWaitTimeout and instead return errors // defined by the context package. Will be removed in a future release. func Poll(interval, timeout time.Duration, condition ConditionFunc) error { @@ -78,7 +78,7 @@ func Poll(interval, timeout time.Duration, condition ConditionFunc) error { // // If you want to Poll something forever, see PollInfinite. // -// Deprecated: This method does not return errors from context, use PollWithContextTimeout. +// Deprecated: This method does not return errors from context, use PollUntilContextTimeout. // Note that the new method will no longer return ErrWaitTimeout and instead return errors // defined by the context package. Will be removed in a future release. func PollWithContext(ctx context.Context, interval, timeout time.Duration, condition ConditionWithContextFunc) error { @@ -91,7 +91,7 @@ func PollWithContext(ctx context.Context, interval, timeout time.Duration, condi // PollUntil always waits interval before the first run of 'condition'. // 'condition' will always be invoked at least once. // -// Deprecated: This method does not return errors from context, use PollWithContextCancel. +// Deprecated: This method does not return errors from context, use PollUntilContextCancel. 
// Note that the new method will no longer return ErrWaitTimeout and instead return errors // defined by the context package. Will be removed in a future release. func PollUntil(interval time.Duration, condition ConditionFunc, stopCh <-chan struct{}) error { @@ -104,7 +104,7 @@ func PollUntil(interval time.Duration, condition ConditionFunc, stopCh <-chan st // PollUntilWithContext always waits interval before the first run of 'condition'. // 'condition' will always be invoked at least once. // -// Deprecated: This method does not return errors from context, use PollWithContextCancel. +// Deprecated: This method does not return errors from context, use PollUntilContextCancel. // Note that the new method will no longer return ErrWaitTimeout and instead return errors // defined by the context package. Will be removed in a future release. func PollUntilWithContext(ctx context.Context, interval time.Duration, condition ConditionWithContextFunc) error { @@ -118,7 +118,7 @@ func PollUntilWithContext(ctx context.Context, interval time.Duration, condition // Some intervals may be missed if the condition takes too long or the time // window is too short. // -// Deprecated: This method does not return errors from context, use PollWithContextCancel. +// Deprecated: This method does not return errors from context, use PollUntilContextCancel. // Note that the new method will no longer return ErrWaitTimeout and instead return errors // defined by the context package. Will be removed in a future release. func PollInfinite(interval time.Duration, condition ConditionFunc) error { @@ -132,7 +132,7 @@ func PollInfinite(interval time.Duration, condition ConditionFunc) error { // Some intervals may be missed if the condition takes too long or the time // window is too short. // -// Deprecated: This method does not return errors from context, use PollWithContextCancel. +// Deprecated: This method does not return errors from context, use PollUntilContextCancel. 
// Note that the new method will no longer return ErrWaitTimeout and instead return errors // defined by the context package. Will be removed in a future release. func PollInfiniteWithContext(ctx context.Context, interval time.Duration, condition ConditionWithContextFunc) error { @@ -150,7 +150,7 @@ func PollInfiniteWithContext(ctx context.Context, interval time.Duration, condit // // If you want to immediately Poll something forever, see PollImmediateInfinite. // -// Deprecated: This method does not return errors from context, use PollWithContextTimeout. +// Deprecated: This method does not return errors from context, use PollUntilContextTimeout. // Note that the new method will no longer return ErrWaitTimeout and instead return errors // defined by the context package. Will be removed in a future release. func PollImmediate(interval, timeout time.Duration, condition ConditionFunc) error { @@ -168,7 +168,7 @@ func PollImmediate(interval, timeout time.Duration, condition ConditionFunc) err // // If you want to immediately Poll something forever, see PollImmediateInfinite. // -// Deprecated: This method does not return errors from context, use PollWithContextTimeout. +// Deprecated: This method does not return errors from context, use PollUntilContextTimeout. // Note that the new method will no longer return ErrWaitTimeout and instead return errors // defined by the context package. Will be removed in a future release. func PollImmediateWithContext(ctx context.Context, interval, timeout time.Duration, condition ConditionWithContextFunc) error { @@ -180,7 +180,7 @@ func PollImmediateWithContext(ctx context.Context, interval, timeout time.Durati // PollImmediateUntil runs the 'condition' before waiting for the interval. // 'condition' will always be invoked at least once. // -// Deprecated: This method does not return errors from context, use PollWithContextCancel. +// Deprecated: This method does not return errors from context, use PollUntilContextCancel. 
// Note that the new method will no longer return ErrWaitTimeout and instead return errors // defined by the context package. Will be removed in a future release. func PollImmediateUntil(interval time.Duration, condition ConditionFunc, stopCh <-chan struct{}) error { @@ -193,7 +193,7 @@ func PollImmediateUntil(interval time.Duration, condition ConditionFunc, stopCh // PollImmediateUntilWithContext runs the 'condition' before waiting for the interval. // 'condition' will always be invoked at least once. // -// Deprecated: This method does not return errors from context, use PollWithContextCancel. +// Deprecated: This method does not return errors from context, use PollUntilContextCancel. // Note that the new method will no longer return ErrWaitTimeout and instead return errors // defined by the context package. Will be removed in a future release. func PollImmediateUntilWithContext(ctx context.Context, interval time.Duration, condition ConditionWithContextFunc) error { @@ -207,7 +207,7 @@ func PollImmediateUntilWithContext(ctx context.Context, interval time.Duration, // Some intervals may be missed if the condition takes too long or the time // window is too short. // -// Deprecated: This method does not return errors from context, use PollWithContextCancel. +// Deprecated: This method does not return errors from context, use PollUntilContextCancel. // Note that the new method will no longer return ErrWaitTimeout and instead return errors // defined by the context package. Will be removed in a future release. func PollImmediateInfinite(interval time.Duration, condition ConditionFunc) error { @@ -222,7 +222,7 @@ func PollImmediateInfinite(interval time.Duration, condition ConditionFunc) erro // Some intervals may be missed if the condition takes too long or the time // window is too short. // -// Deprecated: This method does not return errors from context, use PollWithContextCancel. 
+// Deprecated: This method does not return errors from context, use PollUntilContextCancel. // Note that the new method will no longer return ErrWaitTimeout and instead return errors // defined by the context package. Will be removed in a future release. func PollImmediateInfiniteWithContext(ctx context.Context, interval time.Duration, condition ConditionWithContextFunc) error { diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/admission/configuration/mutating_webhook_manager.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/admission/configuration/mutating_webhook_manager.go index daee67859918..3ecc00b74cbd 100644 --- a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/admission/configuration/mutating_webhook_manager.go +++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/admission/configuration/mutating_webhook_manager.go @@ -19,8 +19,9 @@ package configuration import ( "fmt" "sort" + "sync" - "k8s.io/api/admissionregistration/v1" + v1 "k8s.io/api/admissionregistration/v1" "k8s.io/apimachinery/pkg/labels" utilruntime "k8s.io/apimachinery/pkg/util/runtime" "k8s.io/apiserver/pkg/admission/plugin/webhook" @@ -29,13 +30,22 @@ import ( admissionregistrationlisters "k8s.io/client-go/listers/admissionregistration/v1" "k8s.io/client-go/tools/cache" "k8s.io/client-go/tools/cache/synctrack" + "k8s.io/klog/v2" ) +// Type for test injection. +type mutatingWebhookAccessorCreator func(uid string, configurationName string, h *v1.MutatingWebhook) webhook.WebhookAccessor + // mutatingWebhookConfigurationManager collects the mutating webhook objects so that they can be called. 
type mutatingWebhookConfigurationManager struct { - lister admissionregistrationlisters.MutatingWebhookConfigurationLister - hasSynced func() bool - lazy synctrack.Lazy[[]webhook.WebhookAccessor] + lister admissionregistrationlisters.MutatingWebhookConfigurationLister + hasSynced func() bool + lazy synctrack.Lazy[[]webhook.WebhookAccessor] + configurationsCache sync.Map + // createMutatingWebhookAccessor is used to instantiate webhook accessors. + // This function is defined as a field instead of a struct method to allow injection + // during tests + createMutatingWebhookAccessor mutatingWebhookAccessorCreator } var _ generic.Source = &mutatingWebhookConfigurationManager{} @@ -43,14 +53,35 @@ var _ generic.Source = &mutatingWebhookConfigurationManager{} func NewMutatingWebhookConfigurationManager(f informers.SharedInformerFactory) generic.Source { informer := f.Admissionregistration().V1().MutatingWebhookConfigurations() manager := &mutatingWebhookConfigurationManager{ - lister: informer.Lister(), + lister: informer.Lister(), + createMutatingWebhookAccessor: webhook.NewMutatingWebhookAccessor, } manager.lazy.Evaluate = manager.getConfiguration handle, _ := informer.Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{ - AddFunc: func(_ interface{}) { manager.lazy.Notify() }, - UpdateFunc: func(_, _ interface{}) { manager.lazy.Notify() }, - DeleteFunc: func(_ interface{}) { manager.lazy.Notify() }, + AddFunc: func(_ interface{}) { manager.lazy.Notify() }, + UpdateFunc: func(old, new interface{}) { + obj := new.(*v1.MutatingWebhookConfiguration) + manager.configurationsCache.Delete(obj.GetName()) + manager.lazy.Notify() + }, + DeleteFunc: func(obj interface{}) { + vwc, ok := obj.(*v1.MutatingWebhookConfiguration) + if !ok { + tombstone, ok := obj.(cache.DeletedFinalStateUnknown) + if !ok { + klog.V(2).Infof("Couldn't get object from tombstone %#v", obj) + return + } + vwc, ok = tombstone.Obj.(*v1.MutatingWebhookConfiguration) + if !ok { + 
klog.V(2).Infof("Tombstone contained object that is not expected %#v", obj) + return + } + } + manager.configurationsCache.Delete(vwc.Name) + manager.lazy.Notify() + }, }) manager.hasSynced = handle.HasSynced @@ -75,25 +106,46 @@ func (m *mutatingWebhookConfigurationManager) getConfiguration() ([]webhook.Webh if err != nil { return []webhook.WebhookAccessor{}, err } - return mergeMutatingWebhookConfigurations(configurations), nil + return m.getMutatingWebhookConfigurations(configurations), nil } -func mergeMutatingWebhookConfigurations(configurations []*v1.MutatingWebhookConfiguration) []webhook.WebhookAccessor { +// getMutatingWebhookConfigurations returns the webhook accessors for a given list of +// mutating webhook configurations. +// +// This function will, first, try to load the webhook accessors from the cache and avoid +// recreating them, which can be expensive (requiring CEL expression recompilation). +func (m *mutatingWebhookConfigurationManager) getMutatingWebhookConfigurations(configurations []*v1.MutatingWebhookConfiguration) []webhook.WebhookAccessor { // The internal order of webhooks for each configuration is provided by the user // but configurations themselves can be in any order. As we are going to run these // webhooks in serial, they are sorted here to have a deterministic order. sort.SliceStable(configurations, MutatingWebhookConfigurationSorter(configurations).ByName) - accessors := []webhook.WebhookAccessor{} + size := 0 + for _, cfg := range configurations { + size += len(cfg.Webhooks) + } + accessors := make([]webhook.WebhookAccessor, 0, size) + for _, c := range configurations { + cachedConfigurationAccessors, ok := m.configurationsCache.Load(c.Name) + if ok { + // Pick an already cached webhookAccessor + accessors = append(accessors, cachedConfigurationAccessors.([]webhook.WebhookAccessor)...) 
+ continue + } + // webhook names are not validated for uniqueness, so we check for duplicates and // add a int suffix to distinguish between them names := map[string]int{} + configurationAccessors := make([]webhook.WebhookAccessor, 0, len(c.Webhooks)) for i := range c.Webhooks { n := c.Webhooks[i].Name uid := fmt.Sprintf("%s/%s/%d", c.Name, n, names[n]) names[n]++ - accessors = append(accessors, webhook.NewMutatingWebhookAccessor(uid, c.Name, &c.Webhooks[i])) + configurationAccessor := m.createMutatingWebhookAccessor(uid, c.Name, &c.Webhooks[i]) + configurationAccessors = append(configurationAccessors, configurationAccessor) } + accessors = append(accessors, configurationAccessors...) + m.configurationsCache.Store(c.Name, configurationAccessors) } return accessors } diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/admission/configuration/validating_webhook_manager.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/admission/configuration/validating_webhook_manager.go index f318b5012938..b42332117702 100644 --- a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/admission/configuration/validating_webhook_manager.go +++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/admission/configuration/validating_webhook_manager.go @@ -19,8 +19,9 @@ package configuration import ( "fmt" "sort" + "sync" - "k8s.io/api/admissionregistration/v1" + v1 "k8s.io/api/admissionregistration/v1" "k8s.io/apimachinery/pkg/labels" utilruntime "k8s.io/apimachinery/pkg/util/runtime" "k8s.io/apiserver/pkg/admission/plugin/webhook" @@ -29,13 +30,22 @@ import ( admissionregistrationlisters "k8s.io/client-go/listers/admissionregistration/v1" "k8s.io/client-go/tools/cache" "k8s.io/client-go/tools/cache/synctrack" + "k8s.io/klog/v2" ) +// Type for test injection. 
+type validatingWebhookAccessorCreator func(uid string, configurationName string, h *v1.ValidatingWebhook) webhook.WebhookAccessor + // validatingWebhookConfigurationManager collects the validating webhook objects so that they can be called. type validatingWebhookConfigurationManager struct { - lister admissionregistrationlisters.ValidatingWebhookConfigurationLister - hasSynced func() bool - lazy synctrack.Lazy[[]webhook.WebhookAccessor] + lister admissionregistrationlisters.ValidatingWebhookConfigurationLister + hasSynced func() bool + lazy synctrack.Lazy[[]webhook.WebhookAccessor] + configurationsCache sync.Map + // createValidatingWebhookAccessor is used to instantiate webhook accessors. + // This function is defined as a field instead of a struct method to allow injection + // during tests + createValidatingWebhookAccessor validatingWebhookAccessorCreator } var _ generic.Source = &validatingWebhookConfigurationManager{} @@ -43,14 +53,35 @@ var _ generic.Source = &validatingWebhookConfigurationManager{} func NewValidatingWebhookConfigurationManager(f informers.SharedInformerFactory) generic.Source { informer := f.Admissionregistration().V1().ValidatingWebhookConfigurations() manager := &validatingWebhookConfigurationManager{ - lister: informer.Lister(), + lister: informer.Lister(), + createValidatingWebhookAccessor: webhook.NewValidatingWebhookAccessor, } manager.lazy.Evaluate = manager.getConfiguration handle, _ := informer.Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{ - AddFunc: func(_ interface{}) { manager.lazy.Notify() }, - UpdateFunc: func(_, _ interface{}) { manager.lazy.Notify() }, - DeleteFunc: func(_ interface{}) { manager.lazy.Notify() }, + AddFunc: func(_ interface{}) { manager.lazy.Notify() }, + UpdateFunc: func(old, new interface{}) { + obj := new.(*v1.ValidatingWebhookConfiguration) + manager.configurationsCache.Delete(obj.GetName()) + manager.lazy.Notify() + }, + DeleteFunc: func(obj interface{}) { + vwc, ok := 
obj.(*v1.ValidatingWebhookConfiguration) + if !ok { + tombstone, ok := obj.(cache.DeletedFinalStateUnknown) + if !ok { + klog.V(2).Infof("Couldn't get object from tombstone %#v", obj) + return + } + vwc, ok = tombstone.Obj.(*v1.ValidatingWebhookConfiguration) + if !ok { + klog.V(2).Infof("Tombstone contained object that is not expected %#v", obj) + return + } + } + manager.configurationsCache.Delete(vwc.Name) + manager.lazy.Notify() + }, }) manager.hasSynced = handle.HasSynced @@ -66,7 +97,7 @@ func (v *validatingWebhookConfigurationManager) Webhooks() []webhook.WebhookAcce return out } -// HasSynced returns true if the initial set of mutating webhook configurations +// HasSynced returns true if the initial set of validating webhook configurations // has been loaded. func (v *validatingWebhookConfigurationManager) HasSynced() bool { return v.hasSynced() } @@ -75,23 +106,45 @@ func (v *validatingWebhookConfigurationManager) getConfiguration() ([]webhook.We if err != nil { return []webhook.WebhookAccessor{}, err } - return mergeValidatingWebhookConfigurations(configurations), nil + return v.getValidatingWebhookConfigurations(configurations), nil } -func mergeValidatingWebhookConfigurations(configurations []*v1.ValidatingWebhookConfiguration) []webhook.WebhookAccessor { +// getValidatingWebhookConfigurations returns the webhook accessors for a given list of +// validating webhook configurations. +// +// This function will, first, try to load the webhook accessors from the cache and avoid +// recreating them, which can be expensive (requiring CEL expression recompilation). 
+func (v *validatingWebhookConfigurationManager) getValidatingWebhookConfigurations(configurations []*v1.ValidatingWebhookConfiguration) []webhook.WebhookAccessor { sort.SliceStable(configurations, ValidatingWebhookConfigurationSorter(configurations).ByName) - accessors := []webhook.WebhookAccessor{} + size := 0 + for _, cfg := range configurations { + size += len(cfg.Webhooks) + } + accessors := make([]webhook.WebhookAccessor, 0, size) + for _, c := range configurations { + cachedConfigurationAccessors, ok := v.configurationsCache.Load(c.Name) + if ok { + // Pick an already cached webhookAccessor + accessors = append(accessors, cachedConfigurationAccessors.([]webhook.WebhookAccessor)...) + continue + } + // webhook names are not validated for uniqueness, so we check for duplicates and // add a int suffix to distinguish between them names := map[string]int{} + configurationAccessors := make([]webhook.WebhookAccessor, 0, len(c.Webhooks)) for i := range c.Webhooks { n := c.Webhooks[i].Name uid := fmt.Sprintf("%s/%s/%d", c.Name, n, names[n]) names[n]++ - accessors = append(accessors, webhook.NewValidatingWebhookAccessor(uid, c.Name, &c.Webhooks[i])) + configurationAccessor := v.createValidatingWebhookAccessor(uid, c.Name, &c.Webhooks[i]) + configurationAccessors = append(configurationAccessors, configurationAccessor) } + accessors = append(accessors, configurationAccessors...) 
+ v.configurationsCache.Store(c.Name, configurationAccessors) } + return accessors } diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/admission/metrics/metrics.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/admission/metrics/metrics.go index 26b82c37e392..6c1761149f24 100644 --- a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/admission/metrics/metrics.go +++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/admission/metrics/metrics.go @@ -54,6 +54,8 @@ var ( type ObserverFunc func(ctx context.Context, elapsed time.Duration, rejected bool, attr admission.Attributes, stepType string, extraLabels ...string) const ( + kindWebhook = "webhook" + kindPolicy = "policy" stepValidate = "validate" stepAdmit = "admit" ) @@ -112,13 +114,15 @@ func (p pluginHandlerWithMetrics) Validate(ctx context.Context, a admission.Attr // AdmissionMetrics instruments admission with prometheus metrics. type AdmissionMetrics struct { - step *metricSet - controller *metricSet - webhook *metricSet - webhookRejection *metrics.CounterVec - webhookFailOpen *metrics.CounterVec - webhookRequest *metrics.CounterVec - matchConditionEvalErrors *metrics.CounterVec + step *metricSet + controller *metricSet + webhook *metricSet + webhookRejection *metrics.CounterVec + webhookFailOpen *metrics.CounterVec + webhookRequest *metrics.CounterVec + matchConditionEvalErrors *metrics.CounterVec + matchConditionExclusions *metrics.CounterVec + matchConditionEvaluationSeconds *metricSet } // newAdmissionMetrics create a new AdmissionMetrics, configured with default metric names. 
@@ -222,20 +226,47 @@ func newAdmissionMetrics() *AdmissionMetrics { &metrics.CounterOpts{ Namespace: namespace, Subsystem: subsystem, - Name: "admission_match_condition_evaluation_errors_total", - Help: "Admission match condition evaluation errors count, identified by name of resource containing the match condition and broken out for each admission type (validating or mutating).", + Name: "match_condition_evaluation_errors_total", + Help: "Admission match condition evaluation errors count, identified by name of resource containing the match condition and broken out for each kind containing matchConditions (webhook or policy), operation and admission type (validate or admit).", StabilityLevel: metrics.ALPHA, }, - []string{"name", "type"}) + []string{"name", "kind", "type", "operation"}) + + matchConditionExclusions := metrics.NewCounterVec( + &metrics.CounterOpts{ + Namespace: namespace, + Subsystem: subsystem, + Name: "match_condition_exclusions_total", + Help: "Admission match condition evaluation exclusions count, identified by name of resource containing the match condition and broken out for each kind containing matchConditions (webhook or policy), operation and admission type (validate or admit).", + StabilityLevel: metrics.ALPHA, + }, + []string{"name", "kind", "type", "operation"}) + + matchConditionEvaluationSeconds := &metricSet{ + latencies: metrics.NewHistogramVec( + &metrics.HistogramOpts{ + Namespace: namespace, + Subsystem: subsystem, + Name: "match_condition_evaluation_seconds", + Help: "Admission match condition evaluation time in seconds, identified by name and broken out for each kind containing matchConditions (webhook or policy), operation and type (validate or admit).", + Buckets: []float64{0.001, 0.005, 0.01, 0.025, 0.1, 0.2, 0.25}, + StabilityLevel: metrics.ALPHA, + }, + []string{"name", "kind", "type", "operation"}, + ), + latenciesSummary: nil, + } step.mustRegister() controller.mustRegister() webhook.mustRegister() + 
matchConditionEvaluationSeconds.mustRegister() legacyregistry.MustRegister(webhookRejection) legacyregistry.MustRegister(webhookFailOpen) legacyregistry.MustRegister(webhookRequest) legacyregistry.MustRegister(matchConditionEvalError) - return &AdmissionMetrics{step: step, controller: controller, webhook: webhook, webhookRejection: webhookRejection, webhookFailOpen: webhookFailOpen, webhookRequest: webhookRequest, matchConditionEvalErrors: matchConditionEvalError} + legacyregistry.MustRegister(matchConditionExclusions) + return &AdmissionMetrics{step: step, controller: controller, webhook: webhook, webhookRejection: webhookRejection, webhookFailOpen: webhookFailOpen, webhookRequest: webhookRequest, matchConditionEvalErrors: matchConditionEvalError, matchConditionExclusions: matchConditionExclusions, matchConditionEvaluationSeconds: matchConditionEvaluationSeconds} } func (m *AdmissionMetrics) reset() { @@ -280,8 +311,18 @@ func (m *AdmissionMetrics) ObserveWebhookFailOpen(ctx context.Context, name, ste } // ObserveMatchConditionEvalError records validating or mutating webhook that are not called due to match conditions -func (m *AdmissionMetrics) ObserveMatchConditionEvalError(ctx context.Context, name, stepType string) { - m.matchConditionEvalErrors.WithContext(ctx).WithLabelValues(name, stepType).Inc() +func (m *AdmissionMetrics) ObserveMatchConditionEvalError(ctx context.Context, name, kind, stepType, operation string) { + m.matchConditionEvalErrors.WithContext(ctx).WithLabelValues(name, kind, stepType, operation).Inc() +} + +// ObserveMatchConditionExclusion records validating or mutating webhook that are not called due to match conditions +func (m *AdmissionMetrics) ObserveMatchConditionExclusion(ctx context.Context, name, kind, stepType, operation string) { + m.matchConditionExclusions.WithContext(ctx).WithLabelValues(name, kind, stepType, operation).Inc() +} + +// ObserveMatchConditionEvaluationTime records duration of match condition evaluation process. 
+func (m *AdmissionMetrics) ObserveMatchConditionEvaluationTime(ctx context.Context, elapsed time.Duration, name, kind, stepType, operation string) { + m.matchConditionEvaluationSeconds.observe(ctx, elapsed, name, kind, stepType, operation) } type metricSet struct { diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/admission/plugin/cel/compile.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/admission/plugin/cel/compile.go index bb122de5fafe..25ee108ea95c 100644 --- a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/admission/plugin/cel/compile.go +++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/admission/plugin/cel/compile.go @@ -18,12 +18,13 @@ package cel import ( "fmt" - celconfig "k8s.io/apiserver/pkg/apis/cel" - "sync" "github.com/google/cel-go/cel" + "k8s.io/apimachinery/pkg/util/version" + celconfig "k8s.io/apiserver/pkg/apis/cel" apiservercel "k8s.io/apiserver/pkg/cel" + "k8s.io/apiserver/pkg/cel/environment" "k8s.io/apiserver/pkg/cel/library" ) @@ -32,108 +33,12 @@ const ( OldObjectVarName = "oldObject" ParamsVarName = "params" RequestVarName = "request" + NamespaceVarName = "namespaceObject" AuthorizerVarName = "authorizer" RequestResourceAuthorizerVarName = "authorizer.requestResource" + VariableVarName = "variables" ) -var ( - initEnvsOnce sync.Once - initEnvs envs - initEnvsErr error -) - -func getEnvs() (envs, error) { - initEnvsOnce.Do(func() { - requiredVarsEnv, err := buildRequiredVarsEnv() - if err != nil { - initEnvsErr = err - return - } - - initEnvs, err = buildWithOptionalVarsEnvs(requiredVarsEnv) - if err != nil { - initEnvsErr = err - return - } - }) - return initEnvs, initEnvsErr -} - -// This is a similar code as in k8s.io/apiextensions-apiserver/pkg/apiserver/schema/cel/compilation.go -// If any changes are made here, consider to make the same changes there as well. 
-func buildBaseEnv() (*cel.Env, error) { - var opts []cel.EnvOption - opts = append(opts, cel.HomogeneousAggregateLiterals()) - // Validate function declarations once during base env initialization, - // so they don't need to be evaluated each time a CEL rule is compiled. - // This is a relatively expensive operation. - opts = append(opts, cel.EagerlyValidateDeclarations(true), cel.DefaultUTCTimeZone(true)) - opts = append(opts, library.ExtensionLibs...) - - return cel.NewEnv(opts...) -} - -func buildRequiredVarsEnv() (*cel.Env, error) { - baseEnv, err := buildBaseEnv() - if err != nil { - return nil, err - } - var propDecls []cel.EnvOption - reg := apiservercel.NewRegistry(baseEnv) - - requestType := BuildRequestType() - rt, err := apiservercel.NewRuleTypes(requestType.TypeName(), requestType, reg) - if err != nil { - return nil, err - } - if rt == nil { - return nil, nil - } - opts, err := rt.EnvOptions(baseEnv.TypeProvider()) - if err != nil { - return nil, err - } - propDecls = append(propDecls, cel.Variable(ObjectVarName, cel.DynType)) - propDecls = append(propDecls, cel.Variable(OldObjectVarName, cel.DynType)) - propDecls = append(propDecls, cel.Variable(RequestVarName, requestType.CelType())) - - opts = append(opts, propDecls...) - env, err := baseEnv.Extend(opts...) - if err != nil { - return nil, err - } - return env, nil -} - -type envs map[OptionalVariableDeclarations]*cel.Env - -func buildEnvWithVars(baseVarsEnv *cel.Env, options OptionalVariableDeclarations) (*cel.Env, error) { - var opts []cel.EnvOption - if options.HasParams { - opts = append(opts, cel.Variable(ParamsVarName, cel.DynType)) - } - if options.HasAuthorizer { - opts = append(opts, cel.Variable(AuthorizerVarName, library.AuthorizerType)) - opts = append(opts, cel.Variable(RequestResourceAuthorizerVarName, library.ResourceCheckType)) - } - return baseVarsEnv.Extend(opts...) 
-} - -func buildWithOptionalVarsEnvs(requiredVarsEnv *cel.Env) (envs, error) { - envs := make(envs, 4) // since the number of variable combinations is small, pre-build a environment for each - for _, hasParams := range []bool{false, true} { - for _, hasAuthorizer := range []bool{false, true} { - opts := OptionalVariableDeclarations{HasParams: hasParams, HasAuthorizer: hasAuthorizer} - env, err := buildEnvWithVars(requiredVarsEnv, opts) - if err != nil { - return nil, err - } - envs[opts] = env - } - } - return envs, nil -} - // BuildRequestType generates a DeclType for AdmissionRequest. This may be replaced with a utility that // converts the native type definition to apiservercel.DeclType once such a utility becomes available. // The 'uid' field is omitted since it is not needed for in-process admission review. @@ -181,6 +86,56 @@ func BuildRequestType() *apiservercel.DeclType { )) } +// BuildNamespaceType generates a DeclType for Namespace. +// Certain nested fields in Namespace (e.g. managedFields, ownerReferences etc.) are omitted in the generated DeclType +// by design. 
+func BuildNamespaceType() *apiservercel.DeclType { + field := func(name string, declType *apiservercel.DeclType, required bool) *apiservercel.DeclField { + return apiservercel.NewDeclField(name, declType, required, nil, nil) + } + fields := func(fields ...*apiservercel.DeclField) map[string]*apiservercel.DeclField { + result := make(map[string]*apiservercel.DeclField, len(fields)) + for _, f := range fields { + result[f.Name] = f + } + return result + } + + specType := apiservercel.NewObjectType("kubernetes.NamespaceSpec", fields( + field("finalizers", apiservercel.NewListType(apiservercel.StringType, -1), true), + )) + conditionType := apiservercel.NewObjectType("kubernetes.NamespaceCondition", fields( + field("status", apiservercel.StringType, true), + field("type", apiservercel.StringType, true), + field("lastTransitionTime", apiservercel.TimestampType, true), + field("message", apiservercel.StringType, true), + field("reason", apiservercel.StringType, true), + )) + statusType := apiservercel.NewObjectType("kubernetes.NamespaceStatus", fields( + field("conditions", apiservercel.NewListType(conditionType, -1), true), + field("phase", apiservercel.StringType, true), + )) + metadataType := apiservercel.NewObjectType("kubernetes.NamespaceMetadata", fields( + field("name", apiservercel.StringType, true), + field("generateName", apiservercel.StringType, true), + field("namespace", apiservercel.StringType, true), + field("labels", apiservercel.NewMapType(apiservercel.StringType, apiservercel.StringType, -1), true), + field("annotations", apiservercel.NewMapType(apiservercel.StringType, apiservercel.StringType, -1), true), + field("UID", apiservercel.StringType, true), + field("creationTimestamp", apiservercel.TimestampType, true), + field("deletionGracePeriodSeconds", apiservercel.IntType, true), + field("deletionTimestamp", apiservercel.TimestampType, true), + field("generation", apiservercel.IntType, true), + field("resourceVersion", apiservercel.StringType, true), 
+ field("finalizers", apiservercel.NewListType(apiservercel.StringType, -1), true), + )) + return apiservercel.NewObjectType("kubernetes.Namespace", fields( + field("metadata", metadataType, true), + field("spec", specType, true), + field("status", statusType, true), + )) +} + // CompilationResult represents a compiled validations expression. type CompilationResult struct { Program cel.Program @@ -188,45 +143,48 @@ type CompilationResult struct { ExpressionAccessor ExpressionAccessor } +// Compiler provides a CEL expression compiler configured with the desired admission related CEL variables and +// environment mode. +type Compiler interface { + CompileCELExpression(expressionAccessor ExpressionAccessor, options OptionalVariableDeclarations, mode environment.Type) CompilationResult +} + +type compiler struct { + varEnvs variableDeclEnvs +} + +func NewCompiler(env *environment.EnvSet) Compiler { + return &compiler{varEnvs: mustBuildEnvs(env)} +} + +type variableDeclEnvs map[OptionalVariableDeclarations]*environment.EnvSet + // CompileCELExpression returns a compiled CEL expression. // perCallLimit was added for testing purpose only. Callers should always use const PerCallLimit from k8s.io/apiserver/pkg/apis/cel/config.go as input. 
-func CompileCELExpression(expressionAccessor ExpressionAccessor, optionalVars OptionalVariableDeclarations, perCallLimit uint64) CompilationResult {
-	var env *cel.Env
-	envs, err := getEnvs()
-	if err != nil {
+func (c compiler) CompileCELExpression(expressionAccessor ExpressionAccessor, options OptionalVariableDeclarations, envType environment.Type) CompilationResult {
+	resultError := func(errorString string, errType apiservercel.ErrorType) CompilationResult {
 		return CompilationResult{
 			Error: &apiservercel.Error{
-				Type:   apiservercel.ErrorTypeInternal,
-				Detail: "compiler initialization failed: " + err.Error(),
+				Type:   errType,
+				Detail: errorString,
 			},
 			ExpressionAccessor: expressionAccessor,
 		}
 	}
-	env, ok := envs[optionalVars]
-	if !ok {
-		return CompilationResult{
-			Error: &apiservercel.Error{
-				Type:   apiservercel.ErrorTypeInvalid,
-				Detail: fmt.Sprintf("compiler initialization failed: failed to load environment for %v", optionalVars),
-			},
-			ExpressionAccessor: expressionAccessor,
-		}
+
+	env, err := c.varEnvs[options].Env(envType)
+	if err != nil {
+		return resultError(fmt.Sprintf("unexpected error loading CEL environment: %v", err), apiservercel.ErrorTypeInternal)
 	}
 
 	ast, issues := env.Compile(expressionAccessor.GetExpression())
 	if issues != nil {
-		return CompilationResult{
-			Error: &apiservercel.Error{
-				Type:   apiservercel.ErrorTypeInvalid,
-				Detail: "compilation failed: " + issues.String(),
-			},
-			ExpressionAccessor: expressionAccessor,
-		}
+		return resultError("compilation failed: "+issues.String(), apiservercel.ErrorTypeInvalid)
 	}
 	found := false
 	returnTypes := expressionAccessor.ReturnTypes()
 	for _, returnType := range returnTypes {
-		if ast.OutputType() == returnType {
+		if ast.OutputType() == returnType || cel.AnyType == returnType {
 			found = true
 			break
 		}
@@ -239,43 +197,64 @@ func CompileCELExpression(expressionAccessor ExpressionAccessor, optionalVars Op
 			reason = fmt.Sprintf("must evaluate to one of %v", returnTypes)
 		}
 
-		return CompilationResult{
-			Error: &apiservercel.Error{
-				Type:   apiservercel.ErrorTypeInvalid,
-				Detail: reason,
-			},
-			ExpressionAccessor: expressionAccessor,
-		}
+		return resultError(reason, apiservercel.ErrorTypeInvalid)
 	}
 
 	_, err = cel.AstToCheckedExpr(ast)
 	if err != nil {
 		// should be impossible since env.Compile returned no issues
-		return CompilationResult{
-			Error: &apiservercel.Error{
-				Type:   apiservercel.ErrorTypeInternal,
-				Detail: "unexpected compilation error: " + err.Error(),
-			},
-			ExpressionAccessor: expressionAccessor,
-		}
+		return resultError("unexpected compilation error: "+err.Error(), apiservercel.ErrorTypeInternal)
 	}
 	prog, err := env.Program(ast,
-		cel.EvalOptions(cel.OptOptimize, cel.OptTrackCost),
-		cel.OptimizeRegex(library.ExtensionLibRegexOptimizations...),
 		cel.InterruptCheckFrequency(celconfig.CheckFrequency),
-		cel.CostLimit(perCallLimit),
 	)
 	if err != nil {
-		return CompilationResult{
-			Error: &apiservercel.Error{
-				Type:   apiservercel.ErrorTypeInvalid,
-				Detail: "program instantiation failed: " + err.Error(),
-			},
-			ExpressionAccessor: expressionAccessor,
-		}
+		return resultError("program instantiation failed: "+err.Error(), apiservercel.ErrorTypeInternal)
 	}
 	return CompilationResult{
 		Program:            prog,
 		ExpressionAccessor: expressionAccessor,
 	}
 }
+
+func mustBuildEnvs(baseEnv *environment.EnvSet) variableDeclEnvs {
+	requestType := BuildRequestType()
+	namespaceType := BuildNamespaceType()
+	envs := make(variableDeclEnvs, 4) // since the number of variable combinations is small, pre-build an environment for each
+	for _, hasParams := range []bool{false, true} {
+		for _, hasAuthorizer := range []bool{false, true} {
+			var envOpts []cel.EnvOption
+			if hasParams {
+				envOpts = append(envOpts, cel.Variable(ParamsVarName, cel.DynType))
+			}
+			if hasAuthorizer {
+				envOpts = append(envOpts,
+					cel.Variable(AuthorizerVarName, library.AuthorizerType),
+					cel.Variable(RequestResourceAuthorizerVarName, library.ResourceCheckType))
+			}
+			envOpts = append(envOpts,
+				cel.Variable(ObjectVarName, cel.DynType),
+				cel.Variable(OldObjectVarName, cel.DynType),
+				cel.Variable(NamespaceVarName, namespaceType.CelType()),
+				cel.Variable(RequestVarName, requestType.CelType()))
+
+			extended, err := baseEnv.Extend(
+				environment.VersionedOptions{
+					// Feature epoch was actually 1.26, but we artificially set it to 1.0 because these
+					// options should always be present.
+					IntroducedVersion: version.MajorMinor(1, 0),
+					EnvOptions:        envOpts,
+					DeclTypes: []*apiservercel.DeclType{
+						namespaceType,
+						requestType,
+					},
+				},
+			)
+			if err != nil {
+				panic(fmt.Sprintf("environment misconfigured: %v", err))
+			}
+			envs[OptionalVariableDeclarations{HasParams: hasParams, HasAuthorizer: hasAuthorizer}] = extended
+		}
+	}
+	return envs
+}
diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/admission/plugin/cel/composition.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/admission/plugin/cel/composition.go
new file mode 100644
index 000000000000..38b80a304aad
--- /dev/null
+++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/admission/plugin/cel/composition.go
@@ -0,0 +1,198 @@
+/*
+Copyright 2023 The Kubernetes Authors.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package cel
+
+import (
+	"context"
+	"math"
+
+	"github.com/google/cel-go/cel"
+	"github.com/google/cel-go/common/types"
+	"github.com/google/cel-go/common/types/ref"
+
+	v1 "k8s.io/api/admission/v1"
+	corev1 "k8s.io/api/core/v1"
+	"k8s.io/apimachinery/pkg/util/version"
+	"k8s.io/apiserver/pkg/admission"
+	apiservercel "k8s.io/apiserver/pkg/cel"
+	"k8s.io/apiserver/pkg/cel/environment"
+	"k8s.io/apiserver/pkg/cel/lazy"
+)
+
+const VariablesTypeName = "kubernetes.variables"
+
+type CompositedCompiler struct {
+	Compiler
+	FilterCompiler
+
+	CompositionEnv *CompositionEnv
+}
+
+type CompositedFilter struct {
+	Filter
+
+	compositionEnv *CompositionEnv
+}
+
+func NewCompositedCompiler(envSet *environment.EnvSet) (*CompositedCompiler, error) {
+	compositionContext, err := NewCompositionEnv(VariablesTypeName, envSet)
+	if err != nil {
+		return nil, err
+	}
+	compiler := NewCompiler(compositionContext.EnvSet)
+	filterCompiler := NewFilterCompiler(compositionContext.EnvSet)
+	return &CompositedCompiler{
+		Compiler:       compiler,
+		FilterCompiler: filterCompiler,
+		CompositionEnv: compositionContext,
+	}, nil
+}
+
+func (c *CompositedCompiler) CompileAndStoreVariables(variables []NamedExpressionAccessor, options OptionalVariableDeclarations, mode environment.Type) {
+	for _, v := range variables {
+		_ = c.CompileAndStoreVariable(v, options, mode)
+	}
+}
+
+func (c *CompositedCompiler) CompileAndStoreVariable(variable NamedExpressionAccessor, options OptionalVariableDeclarations, mode environment.Type) CompilationResult {
+	c.CompositionEnv.AddField(variable.GetName())
+	result := c.Compiler.CompileCELExpression(variable, options, mode)
+	c.CompositionEnv.CompiledVariables[variable.GetName()] = result
+	return result
+}
+
+func (c *CompositedCompiler) Compile(expressions []ExpressionAccessor, optionalDecls OptionalVariableDeclarations, envType environment.Type) Filter {
+	filter := c.FilterCompiler.Compile(expressions, optionalDecls, envType)
+	return &CompositedFilter{
+		Filter:         filter,
+		compositionEnv: c.CompositionEnv,
+	}
+}
+
+type CompositionEnv struct {
+	*environment.EnvSet
+
+	MapType           *apiservercel.DeclType
+	CompiledVariables map[string]CompilationResult
+}
+
+func (c *CompositionEnv) AddField(name string) {
+	c.MapType.Fields[name] = apiservercel.NewDeclField(name, apiservercel.DynType, true, nil, nil)
+}
+
+func NewCompositionEnv(typeName string, baseEnvSet *environment.EnvSet) (*CompositionEnv, error) {
+	declType := apiservercel.NewObjectType(typeName, map[string]*apiservercel.DeclField{})
+	envSet, err := baseEnvSet.Extend(environment.VersionedOptions{
+		// set to 1.0 because composition is one of the fundamental components
+		IntroducedVersion: version.MajorMinor(1, 0),
+		EnvOptions: []cel.EnvOption{
+			cel.Variable("variables", declType.CelType()),
+		},
+		DeclTypes: []*apiservercel.DeclType{
+			declType,
+		},
+	})
+	if err != nil {
+		return nil, err
+	}
+	return &CompositionEnv{
+		MapType:           declType,
+		EnvSet:            envSet,
+		CompiledVariables: map[string]CompilationResult{},
+	}, nil
+}
+
+func (c *CompositionEnv) CreateContext(parent context.Context) CompositionContext {
+	return &compositionContext{
+		Context:        parent,
+		compositionEnv: c,
+	}
+}
+
+type CompositionContext interface {
+	context.Context
+	Variables(activation any) ref.Val
+	GetAndResetCost() int64
+}
+
+type compositionContext struct {
+	context.Context
+
+	compositionEnv  *CompositionEnv
+	accumulatedCost int64
+}
+
+func (c *compositionContext) Variables(activation any) ref.Val {
+	lazyMap := lazy.NewMapValue(c.compositionEnv.MapType)
+	for name, result := range c.compositionEnv.CompiledVariables {
+		accessor := &variableAccessor{
+			name:       name,
+			result:     result,
+			activation: activation,
+			context:    c,
+		}
+		lazyMap.Append(name, accessor.Callback)
+	}
+	return lazyMap
+}
+
+func (f *CompositedFilter) ForInput(ctx context.Context, versionedAttr *admission.VersionedAttributes, request *v1.AdmissionRequest, optionalVars OptionalVariableBindings, namespace *corev1.Namespace, runtimeCELCostBudget int64) ([]EvaluationResult, int64, error) {
+	ctx = f.compositionEnv.CreateContext(ctx)
+	return f.Filter.ForInput(ctx, versionedAttr, request, optionalVars, namespace, runtimeCELCostBudget)
+}
+
+func (c *compositionContext) reportCost(cost int64) {
+	c.accumulatedCost += cost
+}
+
+func (c *compositionContext) GetAndResetCost() int64 {
+	cost := c.accumulatedCost
+	c.accumulatedCost = 0
+	return cost
+}
+
+type variableAccessor struct {
+	name       string
+	result     CompilationResult
+	activation any
+	context    *compositionContext
+}
+
+func (a *variableAccessor) Callback(_ *lazy.MapValue) ref.Val {
+	if a.result.Error != nil {
+		return types.NewErr("composited variable %q fails to compile: %v", a.name, a.result.Error)
+	}
+
+	v, details, err := a.result.Program.Eval(a.activation)
+	if details == nil {
+		return types.NewErr("unable to get evaluation details of variable %q", a.name)
+	}
+	costPtr := details.ActualCost()
+	if costPtr == nil {
+		return types.NewErr("unable to calculate cost of variable %q", a.name)
+	}
+	cost := int64(*costPtr)
+	if *costPtr > math.MaxInt64 {
+		cost = math.MaxInt64
+	}
+	a.context.reportCost(cost)
+
+	if err != nil {
+		return types.NewErr("composited variable %q fails to evaluate: %v", a.name, err)
+	}
+	return v
+}
diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/admission/plugin/cel/filter.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/admission/plugin/cel/filter.go
index 6e504897c5a4..3e2a63e75ca7 100644
--- a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/admission/plugin/cel/filter.go
+++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/admission/plugin/cel/filter.go
@@ -27,24 +27,27 @@ import (
 	admissionv1 "k8s.io/api/admission/v1"
 	authenticationv1 "k8s.io/api/authentication/v1"
+	v1 "k8s.io/api/core/v1"
 	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
 	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
 	"k8s.io/apimachinery/pkg/runtime"
 	"k8s.io/apiserver/pkg/admission"
 	"k8s.io/apiserver/pkg/cel"
+	"k8s.io/apiserver/pkg/cel/environment"
 	"k8s.io/apiserver/pkg/cel/library"
 )
 
 // filterCompiler implement the interface FilterCompiler.
 type filterCompiler struct {
+	compiler Compiler
 }
 
-func NewFilterCompiler() FilterCompiler {
-	return &filterCompiler{}
+func NewFilterCompiler(env *environment.EnvSet) FilterCompiler {
+	return &filterCompiler{compiler: NewCompiler(env)}
 }
 
 type evaluationActivation struct {
-	object, oldObject, params, request, authorizer, requestResourceAuthorizer interface{}
+	object, oldObject, params, request, namespace, authorizer, requestResourceAuthorizer, variables interface{}
 }
 
 // ResolveName returns a value from the activation by qualified name, or false if the name
@@ -59,10 +62,14 @@ func (a *evaluationActivation) ResolveName(name string) (interface{}, bool) {
 		return a.params, true // params may be null
 	case RequestVarName:
 		return a.request, true
+	case NamespaceVarName:
+		return a.namespace, true
 	case AuthorizerVarName:
 		return a.authorizer, a.authorizer != nil
 	case RequestResourceAuthorizerVarName:
 		return a.requestResourceAuthorizer, a.requestResourceAuthorizer != nil
+	case VariableVarName: // variables always present
+		return a.variables, true
 	default:
 		return nil, false
 	}
@@ -75,13 +82,13 @@ func (a *evaluationActivation) Parent() interpreter.Activation {
 }
 
 // Compile compiles the cel expressions defined in the ExpressionAccessors into a Filter
-func (c *filterCompiler) Compile(expressionAccessors []ExpressionAccessor, options OptionalVariableDeclarations, perCallLimit uint64) Filter {
+func (c *filterCompiler) Compile(expressionAccessors []ExpressionAccessor, options OptionalVariableDeclarations, mode environment.Type) Filter {
 	compilationResults := make([]CompilationResult, len(expressionAccessors))
 	for i, expressionAccessor := range expressionAccessors {
 		if expressionAccessor == nil {
 			continue
 		}
-		compilationResults[i] = CompileCELExpression(expressionAccessor, options, perCallLimit)
+		compilationResults[i] = c.compiler.CompileCELExpression(expressionAccessor, options, mode)
 	}
 	return NewFilter(compilationResults)
 }
@@ -122,7 +129,7 @@ func objectToResolveVal(r runtime.Object) (interface{}, error) {
 // ForInput evaluates the compiled CEL expressions converting them into CELEvaluations
 // errors per evaluation are returned on the Evaluation object
 // runtimeCELCostBudget was added for testing purpose only. Callers should always use const RuntimeCELCostBudget from k8s.io/apiserver/pkg/apis/cel/config.go as input.
-func (f *filter) ForInput(ctx context.Context, versionedAttr *admission.VersionedAttributes, request *admissionv1.AdmissionRequest, inputs OptionalVariableBindings, runtimeCELCostBudget int64) ([]EvaluationResult, int64, error) {
+func (f *filter) ForInput(ctx context.Context, versionedAttr *admission.VersionedAttributes, request *admissionv1.AdmissionRequest, inputs OptionalVariableBindings, namespace *v1.Namespace, runtimeCELCostBudget int64) ([]EvaluationResult, int64, error) {
 	// TODO: replace unstructured with ref.Val for CEL variables when native type support is available
 	evaluations := make([]EvaluationResult, len(f.compilationResults))
 	var err error
@@ -152,15 +159,28 @@ func (f *filter) ForInput(ctx context.Context, versionedAttr *admission.Versione
 	if err != nil {
 		return nil, -1, err
 	}
+	namespaceVal, err := objectToResolveVal(namespace)
+	if err != nil {
+		return nil, -1, err
+	}
 	va := &evaluationActivation{
 		object:                    objectVal,
 		oldObject:                 oldObjectVal,
 		params:                    paramsVal,
 		request:                   requestVal.Object,
+		namespace:                 namespaceVal,
 		authorizer:                authorizerVal,
 		requestResourceAuthorizer: requestResourceAuthorizerVal,
 	}
+	// composition is an optional feature that only applies for ValidatingAdmissionPolicy.
+	// check if the context allows composition
+	var compositionCtx CompositionContext
+	var ok bool
+	if compositionCtx, ok = ctx.(CompositionContext); ok {
+		va.variables = compositionCtx.Variables(va)
+	}
+
 	remainingBudget := runtimeCELCostBudget
 	for i, compilationResult := range f.compilationResults {
 		var evaluation = &evaluations[i]
@@ -184,6 +204,17 @@ func (f *filter) ForInput(ctx context.Context, versionedAttr *admission.Versione
 		}
 		t1 := time.Now()
 		evalResult, evalDetails, err := compilationResult.Program.ContextEval(ctx, va)
+		// budget may be spent due to lazy evaluation of composited variables
+		if compositionCtx != nil {
+			compositionCost := compositionCtx.GetAndResetCost()
+			if compositionCost > remainingBudget {
+				return nil, -1, &cel.Error{
+					Type:   cel.ErrorTypeInvalid,
+					Detail: fmt.Sprintf("validation failed due to running out of cost budget, no further validation rules will be run"),
+				}
+			}
+			remainingBudget -= compositionCost
+		}
 		elapsed := time.Since(t1)
 		evaluation.Elapsed = elapsed
 		if evalDetails == nil {
@@ -222,10 +253,13 @@ func (f *filter) ForInput(ctx context.Context, versionedAttr *admission.Versione
 }
 
 // TODO: to reuse https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/apiserver/pkg/admission/plugin/webhook/request/admissionreview.go#L154
-func CreateAdmissionRequest(attr admission.Attributes) *admissionv1.AdmissionRequest {
-	// FIXME: how to get resource GVK, GVR and subresource?
-	gvk := attr.GetKind()
-	gvr := attr.GetResource()
+func CreateAdmissionRequest(attr admission.Attributes, equivalentGVR metav1.GroupVersionResource, equivalentKind metav1.GroupVersionKind) *admissionv1.AdmissionRequest {
+	// Attempting to use same logic as webhook for constructing resource
+	// GVK, GVR, subresource
+	// Use the GVK, GVR that the matcher decided was equivalent to that of the request
+	// https://github.com/kubernetes/kubernetes/blob/90c362b3430bcbbf8f245fadbcd521dab39f1d7c/staging/src/k8s.io/apiserver/pkg/admission/plugin/webhook/generic/webhook.go#L182-L210
+	gvk := equivalentKind
+	gvr := equivalentGVR
 	subresource := attr.GetSubresource()
 
 	requestGVK := attr.GetKind()
@@ -284,6 +318,33 @@ func CreateAdmissionRequest(attr admission.Attributes) *admissionv1.AdmissionReq
 	}
 }
 
+// CreateNamespaceObject creates a Namespace object that is suitable for the CEL evaluation.
+// If the namespace is nil, CreateNamespaceObject returns nil
+func CreateNamespaceObject(namespace *v1.Namespace) *v1.Namespace {
+	if namespace == nil {
+		return nil
+	}
+
+	return &v1.Namespace{
+		Status: namespace.Status,
+		Spec:   namespace.Spec,
+		ObjectMeta: metav1.ObjectMeta{
+			Name:                       namespace.Name,
+			GenerateName:               namespace.GenerateName,
+			Namespace:                  namespace.Namespace,
+			UID:                        namespace.UID,
+			ResourceVersion:            namespace.ResourceVersion,
+			Generation:                 namespace.Generation,
+			CreationTimestamp:          namespace.CreationTimestamp,
+			DeletionTimestamp:          namespace.DeletionTimestamp,
+			DeletionGracePeriodSeconds: namespace.DeletionGracePeriodSeconds,
+			Labels:                     namespace.Labels,
+			Annotations:                namespace.Annotations,
+			Finalizers:                 namespace.Finalizers,
+		},
+	}
+}
+
 // CompilationErrors returns a list of all the errors from the compilation of the evaluator
 func (e *filter) CompilationErrors() []error {
 	compilationErrors := []error{}
diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/admission/plugin/cel/interface.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/admission/plugin/cel/interface.go
index d3c4a0217d13..c9f4e63369f0 100644
--- a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/admission/plugin/cel/interface.go
+++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/admission/plugin/cel/interface.go
@@ -24,9 +24,11 @@ import (
 	"github.com/google/cel-go/common/types/ref"
 
 	v1 "k8s.io/api/admission/v1"
+	corev1 "k8s.io/api/core/v1"
 	"k8s.io/apimachinery/pkg/runtime"
 	"k8s.io/apiserver/pkg/admission"
 	"k8s.io/apiserver/pkg/authorization/authorizer"
+	"k8s.io/apiserver/pkg/cel/environment"
 )
 
 type ExpressionAccessor interface {
@@ -34,6 +36,13 @@ type ExpressionAccessor interface {
 	ReturnTypes() []*cel.Type
 }
 
+// NamedExpressionAccessor extends ExpressionAccessor with a name.
+type NamedExpressionAccessor interface {
+	ExpressionAccessor
+
+	GetName() string // follows the naming convention of ExpressionAccessor
+}
+
 // EvaluationResult contains the minimal required fields and metadata of a cel evaluation
 type EvaluationResult struct {
 	EvalResult ref.Val
@@ -57,8 +66,7 @@ type OptionalVariableDeclarations struct {
 // FilterCompiler contains a function to assist with converting types and values to/from CEL-typed values.
 type FilterCompiler interface {
 	// Compile is used for the cel expression compilation
-	// perCallLimit was added for testing purpose only. Callers should always use const PerCallLimit from k8s.io/apiserver/pkg/apis/cel/config.go as input.
-	Compile(expressions []ExpressionAccessor, optionalDecls OptionalVariableDeclarations, perCallLimit uint64) Filter
+	Compile(expressions []ExpressionAccessor, optionalDecls OptionalVariableDeclarations, envType environment.Type) Filter
 }
 
 // OptionalVariableBindings provides expression bindings for optional CEL variables.
@@ -80,7 +88,7 @@ type Filter interface {
 	// ForInput converts compiled CEL-typed values into evaluated CEL-typed value.
 	// runtimeCELCostBudget was added for testing purpose only. Callers should always use const RuntimeCELCostBudget from k8s.io/apiserver/pkg/apis/cel/config.go as input.
 	// If cost budget is calculated, the filter should return the remaining budget.
-	ForInput(ctx context.Context, versionedAttr *admission.VersionedAttributes, request *v1.AdmissionRequest, optionalVars OptionalVariableBindings, runtimeCELCostBudget int64) ([]EvaluationResult, int64, error)
+	ForInput(ctx context.Context, versionedAttr *admission.VersionedAttributes, request *v1.AdmissionRequest, optionalVars OptionalVariableBindings, namespace *corev1.Namespace, runtimeCELCostBudget int64) ([]EvaluationResult, int64, error)
 
 	// CompilationErrors returns a list of errors from the compilation of the evaluator
 	CompilationErrors() []error
diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/admission/plugin/validatingadmissionpolicy/admission.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/admission/plugin/validatingadmissionpolicy/admission.go
index 9a514b463190..e51bc6e73790
--- a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/admission/plugin/validatingadmissionpolicy/admission.go
+++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/admission/plugin/validatingadmissionpolicy/admission.go
@@ -24,7 +24,6 @@ import (
 	"k8s.io/apimachinery/pkg/api/meta"
 	"k8s.io/apiserver/pkg/authorization/authorizer"
-	"k8s.io/apiserver/pkg/cel/openapi/resolver"
 	"k8s.io/apiserver/pkg/features"
 	"k8s.io/client-go/dynamic"
 	"k8s.io/component-base/featuregate"
@@ -74,7 +73,6 @@ type celAdmissionPlugin struct {
 	dynamicClient  dynamic.Interface
 	stopCh         <-chan struct{}
 	authorizer     authorizer.Authorizer
-	schemaResolver resolver.SchemaResolver
 }
 
 var _ initializer.WantsExternalKubeInformerFactory = &celAdmissionPlugin{}
@@ -83,7 +81,6 @@ var _ initializer.WantsRESTMapper = &celAdmissionPlugin{}
 var _ initializer.WantsDynamicClient = &celAdmissionPlugin{}
 var _ initializer.WantsDrainedNotification = &celAdmissionPlugin{}
 var _ initializer.WantsAuthorizer = &celAdmissionPlugin{}
-var _ initializer.WantsSchemaResolver = &celAdmissionPlugin{}
 var _ admission.InitializationValidator = &celAdmissionPlugin{}
 var _ admission.ValidationInterface = &celAdmissionPlugin{}
@@ -116,11 +113,6 @@ func (c *celAdmissionPlugin) SetDrainedNotification(stopCh <-chan struct{}) {
 func (c *celAdmissionPlugin) SetAuthorizer(authorizer authorizer.Authorizer) {
 	c.authorizer = authorizer
 }
-
-func (c *celAdmissionPlugin) SetSchemaResolver(resolver resolver.SchemaResolver) {
-	c.schemaResolver = resolver
-}
-
 func (c *celAdmissionPlugin) InspectFeatureGates(featureGates featuregate.FeatureGate) {
 	if featureGates.Enabled(features.ValidatingAdmissionPolicy) {
 		c.enabled = true
@@ -154,7 +146,7 @@ func (c *celAdmissionPlugin) ValidateInitialization() error {
 	if c.authorizer == nil {
 		return errors.New("missing authorizer")
 	}
-	c.evaluator = NewAdmissionController(c.informerFactory, c.client, c.restMapper, c.schemaResolver /* (optional) */, c.dynamicClient, c.authorizer)
+	c.evaluator = NewAdmissionController(c.informerFactory, c.client, c.restMapper, c.dynamicClient, c.authorizer)
 	if err := c.evaluator.ValidateInitialization(); err != nil {
 		return err
 	}
diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/admission/plugin/validatingadmissionpolicy/caching_authorizer.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/admission/plugin/validatingadmissionpolicy/caching_authorizer.go
new file mode 100644
index 000000000000..a295cb30dc02
--- /dev/null
+++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/admission/plugin/validatingadmissionpolicy/caching_authorizer.go
@@ -0,0 +1,133 @@
+/*
+Copyright 2023 The Kubernetes Authors.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package validatingadmissionpolicy
+
+import (
+	"context"
+	"encoding/json"
+	"sort"
+	"strings"
+
+	"k8s.io/apiserver/pkg/authentication/user"
+	"k8s.io/apiserver/pkg/authorization/authorizer"
+)
+
+type authzResult struct {
+	authorized authorizer.Decision
+	reason     string
+	err        error
+}
+
+type cachingAuthorizer struct {
+	authorizer authorizer.Authorizer
+	decisions  map[string]authzResult
+}
+
+func newCachingAuthorizer(in authorizer.Authorizer) authorizer.Authorizer {
+	return &cachingAuthorizer{
+		authorizer: in,
+		decisions:  make(map[string]authzResult),
+	}
+}
+
+// The attribute accessors known to cache key construction. If this fails to compile, the cache
+// implementation may need to be updated.
+var _ authorizer.Attributes = (interface {
+	GetUser() user.Info
+	GetVerb() string
+	IsReadOnly() bool
+	GetNamespace() string
+	GetResource() string
+	GetSubresource() string
+	GetName() string
+	GetAPIGroup() string
+	GetAPIVersion() string
+	IsResourceRequest() bool
+	GetPath() string
+})(nil)
+
+// The user info accessors known to cache key construction. If this fails to compile, the cache
+// implementation may need to be updated.
+var _ user.Info = (interface {
+	GetName() string
+	GetUID() string
+	GetGroups() []string
+	GetExtra() map[string][]string
+})(nil)
+
+// Authorize returns an authorization decision by delegating to another Authorizer. If an equivalent
+// check has already been performed, a cached result is returned. Not safe for concurrent use.
+func (ca *cachingAuthorizer) Authorize(ctx context.Context, a authorizer.Attributes) (authorizer.Decision, string, error) {
+	serializableAttributes := authorizer.AttributesRecord{
+		Verb:            a.GetVerb(),
+		Namespace:       a.GetNamespace(),
+		APIGroup:        a.GetAPIGroup(),
+		APIVersion:      a.GetAPIVersion(),
+		Resource:        a.GetResource(),
+		Subresource:     a.GetSubresource(),
+		Name:            a.GetName(),
+		ResourceRequest: a.IsResourceRequest(),
+		Path:            a.GetPath(),
+	}
+
+	if u := a.GetUser(); u != nil {
+		di := &user.DefaultInfo{
+			Name: u.GetName(),
+			UID:  u.GetUID(),
+		}
+
+		// Differently-ordered groups or extras could cause otherwise-equivalent checks to
+		// have distinct cache keys.
+		if groups := u.GetGroups(); len(groups) > 0 {
+			di.Groups = make([]string, len(groups))
+			copy(di.Groups, groups)
+			sort.Strings(di.Groups)
+		}
+
+		if extra := u.GetExtra(); len(extra) > 0 {
+			di.Extra = make(map[string][]string, len(extra))
+			for k, vs := range extra {
+				vdupe := make([]string, len(vs))
+				copy(vdupe, vs)
+				sort.Strings(vdupe)
+				di.Extra[k] = vdupe
+			}
+		}
+
+		serializableAttributes.User = di
+	}
+
+	var b strings.Builder
+	if err := json.NewEncoder(&b).Encode(serializableAttributes); err != nil {
+		return authorizer.DecisionNoOpinion, "", err
+	}
+	key := b.String()
+
+	if cached, ok := ca.decisions[key]; ok {
+		return cached.authorized, cached.reason, cached.err
+	}
+
+	authorized, reason, err := ca.authorizer.Authorize(ctx, a)
+
+	ca.decisions[key] = authzResult{
+		authorized: authorized,
+		reason:     reason,
+		err:        err,
+	}
+
+	return authorized, reason, err
+}
diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/admission/plugin/validatingadmissionpolicy/controller.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/admission/plugin/validatingadmissionpolicy/controller.go
index f54f1acb36f5..46b76e06d5f9 100644
--- a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/admission/plugin/validatingadmissionpolicy/controller.go
+++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/admission/plugin/validatingadmissionpolicy/controller.go
@@ -25,30 +25,29 @@ import (
 	"sync/atomic"
 	"time"
 
-	"k8s.io/klog/v2"
-
-	"k8s.io/api/admissionregistration/v1alpha1"
+	"k8s.io/api/admissionregistration/v1beta1"
+	v1 "k8s.io/api/core/v1"
 	k8serrors "k8s.io/apimachinery/pkg/api/errors"
 	"k8s.io/apimachinery/pkg/api/meta"
 	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
 	"k8s.io/apimachinery/pkg/runtime"
+	"k8s.io/apimachinery/pkg/runtime/schema"
 	utiljson "k8s.io/apimachinery/pkg/util/json"
 	utilruntime "k8s.io/apimachinery/pkg/util/runtime"
 	"k8s.io/apimachinery/pkg/util/sets"
 	"k8s.io/apimachinery/pkg/util/wait"
 	"k8s.io/apiserver/pkg/admission"
 	celmetrics "k8s.io/apiserver/pkg/admission/cel"
-	"k8s.io/apiserver/pkg/admission/plugin/cel"
 	"k8s.io/apiserver/pkg/admission/plugin/validatingadmissionpolicy/internal/generic"
 	"k8s.io/apiserver/pkg/admission/plugin/validatingadmissionpolicy/matching"
 	celconfig "k8s.io/apiserver/pkg/apis/cel"
 	"k8s.io/apiserver/pkg/authorization/authorizer"
-	"k8s.io/apiserver/pkg/cel/openapi/resolver"
 	"k8s.io/apiserver/pkg/warning"
 	"k8s.io/client-go/dynamic"
 	"k8s.io/client-go/informers"
 	"k8s.io/client-go/kubernetes"
 	"k8s.io/client-go/tools/cache"
+	"k8s.io/klog/v2"
 )
 
 var _ CELPolicyEvaluator = &celAdmissionController{}
@@ -66,22 +65,24 @@ type celAdmissionController struct {
 	// A snapshot of the current policy configuration is synced with this field
 	// asynchronously
 	definitions atomic.Value
+
+	authz authorizer.Authorizer
 }
 
 // Everything someone might need to validate a single ValidatingPolicyDefinition
 // against all of its registered bindings.
 type policyData struct {
 	definitionInfo
-	paramController generic.Controller[runtime.Object]
-	bindings        []bindingInfo
+	paramInfo
+	bindings []bindingInfo
 }
 
 // contains the cel PolicyDecisions along with the ValidatingAdmissionPolicy and ValidatingAdmissionPolicyBinding
 // that determined the decision
 type policyDecisionWithMetadata struct {
 	PolicyDecision
-	Definition *v1alpha1.ValidatingAdmissionPolicy
-	Binding    *v1alpha1.ValidatingAdmissionPolicyBinding
+	Definition *v1beta1.ValidatingAdmissionPolicy
+	Binding    *v1beta1.ValidatingAdmissionPolicyBinding
 }
 
 // namespaceName is used as a key in definitionInfo and bindingInfos
@@ -97,7 +98,7 @@ type definitionInfo struct {
 	// Last value seen by this controller to be used in policy enforcement
 	// May not be nil
-	lastReconciledValue *v1alpha1.ValidatingAdmissionPolicy
+	lastReconciledValue *v1beta1.ValidatingAdmissionPolicy
 }
 
 type bindingInfo struct {
@@ -106,7 +107,7 @@ type bindingInfo struct {
 	// Last value seen by this controller to be used in policy enforcement
 	// May not be nil
-	lastReconciledValue *v1alpha1.ValidatingAdmissionPolicyBinding
+	lastReconciledValue *v1beta1.ValidatingAdmissionPolicyBinding
 }
 
 type paramInfo struct {
@@ -116,6 +117,9 @@ type paramInfo struct {
 	// Function to call to stop the informer and clean up the controller
 	stop func()
 
+	// Whether this param is cluster or namespace scoped
+	scope meta.RESTScope
+
 	// Policy Definitions which refer to this param CRD
 	dependentDefinitions sets.Set[namespacedName]
 }
@@ -125,29 +129,24 @@ func NewAdmissionController(
 	informerFactory informers.SharedInformerFactory,
 	client kubernetes.Interface,
 	restMapper meta.RESTMapper,
-	schemaResolver resolver.SchemaResolver,
 	dynamicClient dynamic.Interface,
 	authz authorizer.Authorizer,
 ) CELPolicyEvaluator {
-	var typeChecker *TypeChecker
-	if schemaResolver != nil {
-		typeChecker = &TypeChecker{schemaResolver: schemaResolver, restMapper: restMapper}
-	}
 	return &celAdmissionController{
 		definitions: atomic.Value{},
 		policyController: newPolicyController(
 			restMapper,
 			client,
 			dynamicClient,
-			typeChecker,
-			cel.NewFilterCompiler(),
+			informerFactory,
+			nil,
 			NewMatcher(matching.NewMatcher(informerFactory.Core().V1().Namespaces().Lister(), client)),
-			generic.NewInformer[*v1alpha1.ValidatingAdmissionPolicy](
-				informerFactory.Admissionregistration().V1alpha1().ValidatingAdmissionPolicies().Informer()),
-			generic.NewInformer[*v1alpha1.ValidatingAdmissionPolicyBinding](
-				informerFactory.Admissionregistration().V1alpha1().ValidatingAdmissionPolicyBindings().Informer()),
-			authz,
+			generic.NewInformer[*v1beta1.ValidatingAdmissionPolicy](
+				informerFactory.Admissionregistration().V1beta1().ValidatingAdmissionPolicies().Informer()),
+			generic.NewInformer[*v1beta1.ValidatingAdmissionPolicyBinding](
+				informerFactory.Admissionregistration().V1beta1().ValidatingAdmissionPolicyBindings().Informer()),
 		),
+		authz: authz,
 	}
 }
@@ -193,21 +192,21 @@ func (c *celAdmissionController) Validate(
 
 	var deniedDecisions []policyDecisionWithMetadata
 
-	addConfigError := func(err error, definition *v1alpha1.ValidatingAdmissionPolicy, binding *v1alpha1.ValidatingAdmissionPolicyBinding) {
+	addConfigError := func(err error, definition *v1beta1.ValidatingAdmissionPolicy, binding *v1beta1.ValidatingAdmissionPolicyBinding) {
 		// we always default the FailurePolicy if it is unset and validate it in API level
-		var policy v1alpha1.FailurePolicyType
+		var policy v1beta1.FailurePolicyType
 		if definition.Spec.FailurePolicy == nil {
-			policy = v1alpha1.Fail
+			policy = v1beta1.Fail
 		} else {
 			policy = *definition.Spec.FailurePolicy
 		}
 
 		// apply FailurePolicy specified in ValidatingAdmissionPolicy, the default would be Fail
 		switch policy {
-		case v1alpha1.Ignore:
+		case v1beta1.Ignore:
 			// TODO: add metrics for ignored error here
 			return
-		case v1alpha1.Fail:
+		case v1beta1.Fail:
 			var message string
 			if binding == nil {
 				message = fmt.Errorf("failed to configure policy: %w", err).Error()
@@ -235,9 +234,17 @@ func (c *celAdmissionController) Validate(
 	}
 
 	policyDatas := c.definitions.Load().([]policyData)
 
+	authz := newCachingAuthorizer(c.authz)
+
 	for _, definitionInfo := range policyDatas {
+		// versionedAttributes will be set to non-nil inside of the loop, but
+		// is scoped outside of the param loop so we only convert once. We defer
+		// conversion so that it is only performed when we know a policy matches,
+		// saving the cost of converting non-matching requests.
+		var versionedAttr *admission.VersionedAttributes
+
 		definition := definitionInfo.lastReconciledValue
-		matches, matchKind, err := c.policyController.matcher.DefinitionMatches(a, o, definition)
+		matches, matchResource, matchKind, err := c.policyController.matcher.DefinitionMatches(a, o, definition)
 		if err != nil {
 			// Configuration error.
 			addConfigError(err, definition, nil)
@@ -267,65 +274,13 @@ func (c *celAdmissionController) Validate(
 			continue
 		}
 
-		var param runtime.Object
-
-		// versionedAttributes will be set to non-nil inside of the loop, but
-		// is scoped outside of the param loop so we only convert once. We defer
-		// conversion so that it is only performed when we know a policy matches,
-		// saving the cost of converting non-matching requests.
-		var versionedAttr *admission.VersionedAttributes
-
-		// If definition has paramKind, paramRef is required in binding.
-		// If definition has no paramKind, paramRef set in binding will be ignored.
-		paramKind := definition.Spec.ParamKind
-		paramRef := binding.Spec.ParamRef
-		if paramKind != nil && paramRef != nil {
-			paramController := definitionInfo.paramController
-			if paramController == nil {
-				addConfigError(fmt.Errorf("paramKind kind `%v` not known",
-					paramKind.String()), definition, binding)
-				continue
-			}
-
-			// If the param informer for this admission policy has not yet
-			// had time to perform an initial listing, don't attempt to use
-			// it.
- timeoutCtx, cancel := context.WithTimeout(c.policyController.context, 1*time.Second) - defer cancel() - - if !cache.WaitForCacheSync(timeoutCtx.Done(), paramController.HasSynced) { - addConfigError(fmt.Errorf("paramKind kind `%v` not yet synced to use for admission", - paramKind.String()), definition, binding) - continue - } - - if len(paramRef.Namespace) == 0 { - param, err = paramController.Informer().Get(paramRef.Name) - } else { - param, err = paramController.Informer().Namespaced(paramRef.Namespace).Get(paramRef.Name) - } - - if err != nil { - // Apply failure policy - addConfigError(err, definition, binding) - - if k8serrors.IsInvalid(err) { - // Param mis-configured - // require to set paramRef.namespace for namespaced resource and unset paramRef.namespace for cluster scoped resource - continue - } else if k8serrors.IsNotFound(err) { - // Param not yet available. User may need to wait a bit - // before being able to use it for validation. - continue - } - - // There was a bad internal error - utilruntime.HandleError(err) - continue - } - } - - if versionedAttr == nil { + params, err := c.collectParams(definition.Spec.ParamKind, definitionInfo.paramInfo, binding.Spec.ParamRef, a.GetNamespace()) + if err != nil { + addConfigError(err, definition, binding) + continue + } else if versionedAttr == nil && len(params) > 0 { + // As an optimization, versionedAttr creation is deferred until + // first use. Since there is at least one param, we will validate va, err := admission.NewVersionedAttributes(a, matchKind, o) if err != nil { wrappedErr := fmt.Errorf("failed to convert object version: %w", err) @@ -335,68 +290,98 @@ func (c *celAdmissionController) Validate( versionedAttr = va } - validationResult := bindingInfo.validator.Validate(ctx, versionedAttr, param, celconfig.RuntimeCELCostBudget) - if err != nil { - // runtime error. 
Apply failure policy - wrappedError := fmt.Errorf("failed to evaluate CEL expression: %w", err) - addConfigError(wrappedError, definition, binding) - continue + var validationResults []ValidateResult + var namespace *v1.Namespace + namespaceName := a.GetNamespace() + + // Special case, the namespace object has the namespace of itself (maybe a bug). + // unset it if the incoming object is a namespace + if gvk := a.GetKind(); gvk.Kind == "Namespace" && gvk.Version == "v1" && gvk.Group == "" { + namespaceName = "" + } + + // if it is cluster scoped, namespaceName will be empty + // Otherwise, get the Namespace resource. + if namespaceName != "" { + namespace, err = c.policyController.matcher.GetNamespace(namespaceName) + if err != nil { + return err + } } - for i, decision := range validationResult.Decisions { - switch decision.Action { - case ActionAdmit: - if decision.Evaluation == EvalError { - celmetrics.Metrics.ObserveAdmissionWithError(ctx, decision.Elapsed, definition.Name, binding.Name, "active") + for _, param := range params { + var p runtime.Object = param + if p != nil && p.GetObjectKind().GroupVersionKind().Empty() { + // Make sure param has TypeMeta populated + // This is a simple hack to make sure typeMeta is + // available to CEL without making copies of objects, etc. 
+ p = &wrappedParam{ + TypeMeta: metav1.TypeMeta{ + APIVersion: definition.Spec.ParamKind.APIVersion, + Kind: definition.Spec.ParamKind.Kind, + }, + nested: param, } - case ActionDeny: - for _, action := range binding.Spec.ValidationActions { - switch action { - case v1alpha1.Deny: - deniedDecisions = append(deniedDecisions, policyDecisionWithMetadata{ - Definition: definition, - Binding: binding, - PolicyDecision: decision, - }) - celmetrics.Metrics.ObserveRejection(ctx, decision.Elapsed, definition.Name, binding.Name, "active") - case v1alpha1.Audit: - c.publishValidationFailureAnnotation(binding, i, decision, versionedAttr) - celmetrics.Metrics.ObserveAudit(ctx, decision.Elapsed, definition.Name, binding.Name, "active") - case v1alpha1.Warn: - warning.AddWarning(ctx, "", fmt.Sprintf("Validation failed for ValidatingAdmissionPolicy '%s' with binding '%s': %s", definition.Name, binding.Name, decision.Message)) - celmetrics.Metrics.ObserveWarn(ctx, decision.Elapsed, definition.Name, binding.Name, "active") + } + validationResults = append(validationResults, bindingInfo.validator.Validate(ctx, matchResource, versionedAttr, p, namespace, celconfig.RuntimeCELCostBudget, authz)) + } + + for _, validationResult := range validationResults { + for i, decision := range validationResult.Decisions { + switch decision.Action { + case ActionAdmit: + if decision.Evaluation == EvalError { + celmetrics.Metrics.ObserveAdmissionWithError(ctx, decision.Elapsed, definition.Name, binding.Name, "active") } + case ActionDeny: + for _, action := range binding.Spec.ValidationActions { + switch action { + case v1beta1.Deny: + deniedDecisions = append(deniedDecisions, policyDecisionWithMetadata{ + Definition: definition, + Binding: binding, + PolicyDecision: decision, + }) + celmetrics.Metrics.ObserveRejection(ctx, decision.Elapsed, definition.Name, binding.Name, "active") + case v1beta1.Audit: + c.publishValidationFailureAnnotation(binding, i, decision, versionedAttr) + 
celmetrics.Metrics.ObserveAudit(ctx, decision.Elapsed, definition.Name, binding.Name, "active") + case v1beta1.Warn: + warning.AddWarning(ctx, "", fmt.Sprintf("Validation failed for ValidatingAdmissionPolicy '%s' with binding '%s': %s", definition.Name, binding.Name, decision.Message)) + celmetrics.Metrics.ObserveWarn(ctx, decision.Elapsed, definition.Name, binding.Name, "active") + } + } + default: + return fmt.Errorf("unrecognized evaluation decision '%s' for ValidatingAdmissionPolicyBinding '%s' with ValidatingAdmissionPolicy '%s'", + decision.Action, binding.Name, definition.Name) } - default: - return fmt.Errorf("unrecognized evaluation decision '%s' for ValidatingAdmissionPolicyBinding '%s' with ValidatingAdmissionPolicy '%s'", - decision.Action, binding.Name, definition.Name) } - } - for _, auditAnnotation := range validationResult.AuditAnnotations { - switch auditAnnotation.Action { - case AuditAnnotationActionPublish: - value := auditAnnotation.Value - if len(auditAnnotation.Value) > maxAuditAnnotationValueLength { - value = value[:maxAuditAnnotationValueLength] + for _, auditAnnotation := range validationResult.AuditAnnotations { + switch auditAnnotation.Action { + case AuditAnnotationActionPublish: + value := auditAnnotation.Value + if len(auditAnnotation.Value) > maxAuditAnnotationValueLength { + value = value[:maxAuditAnnotationValueLength] + } + auditAnnotationCollector.add(auditAnnotation.Key, value) + case AuditAnnotationActionError: + // When failurePolicy=fail, audit annotation errors result in deny + deniedDecisions = append(deniedDecisions, policyDecisionWithMetadata{ + Definition: definition, + Binding: binding, + PolicyDecision: PolicyDecision{ + Action: ActionDeny, + Evaluation: EvalError, + Message: auditAnnotation.Error, + Elapsed: auditAnnotation.Elapsed, + }, + }) + celmetrics.Metrics.ObserveRejection(ctx, auditAnnotation.Elapsed, definition.Name, binding.Name, "active") + case AuditAnnotationActionExclude: // skip it + default: + return 
fmt.Errorf("unsupported AuditAnnotation Action: %s", auditAnnotation.Action) } - auditAnnotationCollector.add(auditAnnotation.Key, value) - case AuditAnnotationActionError: - // When failurePolicy=fail, audit annotation errors result in deny - deniedDecisions = append(deniedDecisions, policyDecisionWithMetadata{ - Definition: definition, - Binding: binding, - PolicyDecision: PolicyDecision{ - Action: ActionDeny, - Evaluation: EvalError, - Message: auditAnnotation.Error, - Elapsed: auditAnnotation.Elapsed, - }, - }) - celmetrics.Metrics.ObserveRejection(ctx, auditAnnotation.Elapsed, definition.Name, binding.Name, "active") - case AuditAnnotationActionExclude: // skip it - default: - return fmt.Errorf("unsupported AuditAnnotation Action: %s", auditAnnotation.Action) } } } @@ -425,7 +410,124 @@ func (c *celAdmissionController) Validate( return nil } -func (c *celAdmissionController) publishValidationFailureAnnotation(binding *v1alpha1.ValidatingAdmissionPolicyBinding, expressionIndex int, decision PolicyDecision, attributes admission.Attributes) { +// Returns objects to use to evaluate the policy +func (c *celAdmissionController) collectParams( + paramKind *v1beta1.ParamKind, + info paramInfo, + paramRef *v1beta1.ParamRef, + namespace string, +) ([]runtime.Object, error) { + // If definition has paramKind, paramRef is required in binding. + // If definition has no paramKind, paramRef set in binding will be ignored. 
+ var params []runtime.Object + var paramStore generic.NamespacedLister[runtime.Object] + + // Make sure the param kind is ready to use + if paramKind != nil && paramRef != nil { + if info.controller == nil { + return nil, fmt.Errorf("paramKind kind `%v` not known", + paramKind.String()) + } + + // Set up cluster-scoped, or namespaced access to the params + // "default" if not provided, and paramKind is namespaced + paramStore = info.controller.Informer() + if info.scope.Name() == meta.RESTScopeNameNamespace { + paramsNamespace := namespace + if len(paramRef.Namespace) > 0 { + paramsNamespace = paramRef.Namespace + } else if len(paramsNamespace) == 0 { + // You must supply namespace if your matcher can possibly + // match a cluster-scoped resource + return nil, fmt.Errorf("cannot use namespaced paramRef in policy binding that matches cluster-scoped resources") + } + + paramStore = info.controller.Informer().Namespaced(paramsNamespace) + } + + // If the param informer for this admission policy has not yet + // had time to perform an initial listing, don't attempt to use + // it. + timeoutCtx, cancel := context.WithTimeout(c.policyController.context, 1*time.Second) + defer cancel() + + if !cache.WaitForCacheSync(timeoutCtx.Done(), info.controller.HasSynced) { + return nil, fmt.Errorf("paramKind kind `%v` not yet synced to use for admission", + paramKind.String()) + } + } + + // Find params to use with policy + switch { + case paramKind == nil: + // ParamKind is unset. Ignore any globalParamRef or namespaceParamRef + // setting. + return []runtime.Object{nil}, nil + case paramRef == nil: + // Policy ParamKind is set, but binding does not use it. 
+ // Validate with nil params + return []runtime.Object{nil}, nil + case len(paramRef.Namespace) > 0 && info.scope.Name() == meta.RESTScopeRoot.Name(): + // Not allowed to set namespace for cluster-scoped param + return nil, fmt.Errorf("paramRef.namespace must not be provided for a cluster-scoped `paramKind`") + + case len(paramRef.Name) > 0: + if paramRef.Selector != nil { + // This should be validated, but just in case. + return nil, fmt.Errorf("paramRef.name and paramRef.selector are mutually exclusive") + } + + switch param, err := paramStore.Get(paramRef.Name); { + case err == nil: + params = []runtime.Object{param} + case k8serrors.IsNotFound(err): + // Param not yet available. User may need to wait a bit + // before being able to use it for validation. + // + // Set params to nil to prepare for not found action + params = nil + case k8serrors.IsInvalid(err): + // Param mis-configured + // require to set namespace for namespaced resource + // and unset namespace for cluster scoped resource + return nil, err + default: + // Internal error + utilruntime.HandleError(err) + return nil, err + } + case paramRef.Selector != nil: + // Select everything by default if empty name and selector + selector, err := metav1.LabelSelectorAsSelector(paramRef.Selector) + if err != nil { + // Cannot parse label selector: configuration error + return nil, err + + } + + paramList, err := paramStore.List(selector) + if err != nil { + // There was a bad internal error + utilruntime.HandleError(err) + return nil, err + } + + // Successfully grabbed params + params = paramList + default: + // Should be unreachable due to validation + return nil, fmt.Errorf("one of name or selector must be provided") + } + + // Apply fail action for params not found case + if len(params) == 0 && paramRef.ParameterNotFoundAction != nil && *paramRef.ParameterNotFoundAction == v1beta1.DenyAction { + return nil, errors.New("no params found for policy binding with `Deny` parameterNotFoundAction") + } + + 
return params, nil +} + +func (c *celAdmissionController) publishValidationFailureAnnotation(binding *v1beta1.ValidatingAdmissionPolicyBinding, expressionIndex int, decision PolicyDecision, attributes admission.Attributes) { key := "validation.policy.admission.k8s.io/validation_failure" // Marshal to a list of failures since, in the future, we may need to support multiple failures valueJson, err := utiljson.Marshal([]validationFailureValue{{ @@ -459,11 +561,11 @@ func (c *celAdmissionController) refreshPolicies() { // validationFailureValue defines the JSON format of a "validation.policy.admission.k8s.io/validation_failure" audit // annotation value. type validationFailureValue struct { - Message string `json:"message"` - Policy string `json:"policy"` - Binding string `json:"binding"` - ExpressionIndex int `json:"expressionIndex"` - ValidationActions []v1alpha1.ValidationAction `json:"validationActions"` + Message string `json:"message"` + Policy string `json:"policy"` + Binding string `json:"binding"` + ExpressionIndex int `json:"expressionIndex"` + ValidationActions []v1beta1.ValidationAction `json:"validationActions"` } type auditAnnotationCollector struct { @@ -500,3 +602,48 @@ func (a auditAnnotationCollector) publish(policyName string, attributes admissio } } } + +// A workaround for the fact that native types do not have TypeMeta populated, which +// is needed for CEL expressions to be able to access the value. 
+type wrappedParam struct { + metav1.TypeMeta + nested runtime.Object +} + +func (w *wrappedParam) MarshalJSON() ([]byte, error) { + return nil, errors.New("MarshalJSON unimplemented for wrappedParam") +} + +func (w *wrappedParam) UnmarshalJSON(data []byte) error { + return errors.New("UnmarshalJSON unimplemented for wrappedParam") +} + +func (w *wrappedParam) ToUnstructured() interface{} { + res, err := runtime.DefaultUnstructuredConverter.ToUnstructured(w.nested) + + if err != nil { + return nil + } + + metaRes, err := runtime.DefaultUnstructuredConverter.ToUnstructured(&w.TypeMeta) + if err != nil { + return nil + } + + for k, v := range metaRes { + res[k] = v + } + + return res +} + +func (w *wrappedParam) DeepCopyObject() runtime.Object { + return &wrappedParam{ + TypeMeta: w.TypeMeta, + nested: w.nested.DeepCopyObject(), + } +} + +func (w *wrappedParam) GetObjectKind() schema.ObjectKind { + return w +} diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/admission/plugin/validatingadmissionpolicy/controller_reconcile.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/admission/plugin/validatingadmissionpolicy/controller_reconcile.go index 296ac416aa28..b2624694c846 100644 --- a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/admission/plugin/validatingadmissionpolicy/controller_reconcile.go +++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/admission/plugin/validatingadmissionpolicy/controller_reconcile.go @@ -23,11 +23,10 @@ import ( "time" v1 "k8s.io/api/admissionregistration/v1" - "k8s.io/api/admissionregistration/v1alpha1" + "k8s.io/api/admissionregistration/v1beta1" corev1 "k8s.io/api/core/v1" apiequality "k8s.io/apimachinery/pkg/api/equality" "k8s.io/apimachinery/pkg/api/meta" - metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" "k8s.io/apimachinery/pkg/runtime" "k8s.io/apimachinery/pkg/runtime/schema" utilruntime "k8s.io/apimachinery/pkg/util/runtime" @@ -36,13 +35,11 @@ import ( "k8s.io/apiserver/pkg/admission/plugin/cel" 
"k8s.io/apiserver/pkg/admission/plugin/validatingadmissionpolicy/internal/generic" "k8s.io/apiserver/pkg/admission/plugin/webhook/matchconditions" - celconfig "k8s.io/apiserver/pkg/apis/cel" - "k8s.io/apiserver/pkg/authorization/authorizer" + "k8s.io/apiserver/pkg/cel/environment" "k8s.io/client-go/dynamic" "k8s.io/client-go/dynamic/dynamicinformer" "k8s.io/client-go/informers" "k8s.io/client-go/kubernetes" - k8sscheme "k8s.io/client-go/kubernetes/scheme" "k8s.io/client-go/tools/cache" ) @@ -50,36 +47,30 @@ type policyController struct { once sync.Once context context.Context dynamicClient dynamic.Interface + informerFactory informers.SharedInformerFactory restMapper meta.RESTMapper - policyDefinitionsController generic.Controller[*v1alpha1.ValidatingAdmissionPolicy] - policyBindingController generic.Controller[*v1alpha1.ValidatingAdmissionPolicyBinding] + policyDefinitionsController generic.Controller[*v1beta1.ValidatingAdmissionPolicy] + policyBindingController generic.Controller[*v1beta1.ValidatingAdmissionPolicyBinding] // Provided to the policy's Compile function as an injected dependency to // assist with compiling its expressions to CEL + // pass nil to create filter compiler on demand filterCompiler cel.FilterCompiler matcher Matcher newValidator - // The TypeCheck checks the policy's expressions for type errors. 
- // Type of params is defined in policy.Spec.ParamsKind - // Types of object are calculated from policy.Spec.MatchingConstraints - typeChecker *TypeChecker - - // Lock which protects: - // - cachedPolicies - // - paramCRDControllers - // - definitionInfo - // - bindingInfos - // - definitionsToBindings - // All other fields should be assumed constant + client kubernetes.Interface + // Lock which protects + // all fields below + // All fields above should be assumed constant mutex sync.RWMutex cachedPolicies []policyData // controller and metadata - paramsCRDControllers map[v1alpha1.ParamKind]*paramInfo + paramsCRDControllers map[v1beta1.ParamKind]*paramInfo // Index for each definition namespace/name, contains all binding // namespace/names known to exist for that definition
bindingsInformer generic.Informer[*v1beta1.ValidatingAdmissionPolicyBinding], ) *policyController { res := &policyController{} *res = policyController{ filterCompiler: filterCompiler, - typeChecker: typeChecker, definitionInfo: make(map[namespacedName]*definitionInfo), bindingInfos: make(map[namespacedName]*bindingInfo), - paramsCRDControllers: make(map[v1alpha1.ParamKind]*paramInfo), + paramsCRDControllers: make(map[v1beta1.ParamKind]*paramInfo), definitionsToBindings: make(map[namespacedName]sets.Set[namespacedName]), matcher: matcher, newValidator: NewValidator, @@ -139,10 +124,10 @@ func newPolicyController( Name: "cel-policy-bindings", }, ), - restMapper: restMapper, - dynamicClient: dynamicClient, - client: client, - authz: authz, + restMapper: restMapper, + dynamicClient: dynamicClient, + informerFactory: informerFactory, + client: client, } return res } @@ -175,20 +160,14 @@ func (c *policyController) HasSynced() bool { return c.policyDefinitionsController.HasSynced() && c.policyBindingController.HasSynced() } -func (c *policyController) reconcilePolicyDefinition(namespace, name string, definition *v1alpha1.ValidatingAdmissionPolicy) error { +func (c *policyController) reconcilePolicyDefinition(namespace, name string, definition *v1beta1.ValidatingAdmissionPolicy) error { c.mutex.Lock() defer c.mutex.Unlock() err := c.reconcilePolicyDefinitionSpec(namespace, name, definition) - if err != nil { - return err - } - if c.typeChecker != nil { - err = c.reconcilePolicyStatus(namespace, name, definition) - } return err } -func (c *policyController) reconcilePolicyDefinitionSpec(namespace, name string, definition *v1alpha1.ValidatingAdmissionPolicy) error { +func (c *policyController) reconcilePolicyDefinitionSpec(namespace, name string, definition *v1beta1.ValidatingAdmissionPolicy) error { c.cachedPolicies = nil // invalidate cachedPolicies // Namespace for policydefinition is empty. 
@@ -207,7 +186,7 @@ func (c *policyController) reconcilePolicyDefinitionSpec(namespace, name string, return nil } - var paramSource *v1alpha1.ParamKind + var paramSource *v1beta1.ParamKind if definition != nil { paramSource = definition.Spec.ParamKind } @@ -253,7 +232,6 @@ func (c *policyController) reconcilePolicyDefinitionSpec(namespace, name string, // Skip setting up controller for empty param type return nil } - // find GVR for params // Parse param source into a GVK @@ -280,104 +258,78 @@ func (c *policyController) reconcilePolicyDefinitionSpec(namespace, name string, return info.configurationError } - if info, ok := c.paramsCRDControllers[*paramSource]; ok { - // If a param controller is already active for this paramsource, make - // sure it is tracking this policy's dependency upon it - info.dependentDefinitions.Insert(nn) + paramInfo := c.ensureParamInfo(paramSource, paramsGVR) + paramInfo.dependentDefinitions.Insert(nn) - } else { - instanceContext, instanceCancel := context.WithCancel(c.context) - - var informer cache.SharedIndexInformer - - // Informer Factory is optional - if c.client != nil { - // Create temporary informer factory - // Cannot use the k8s shared informer factory for dynamic params informer. - // Would leak unnecessary informers when we are done since we would have to - // call informerFactory.Start() with a longer-lived stopCh than necessary. - // SharedInformerFactory does not support temporary usage. - dynamicFactory := informers.NewSharedInformerFactory(c.client, 10*time.Minute) - - // Look for a typed informer. If it does not exist - genericInformer, err := dynamicFactory.ForResource(paramsGVR.Resource) - - // Ignore error. 
We fallback to dynamic informer if there is no - // typed informer - if err != nil { - informer = nil - } else { - informer = genericInformer.Informer() - - // Set transformer on the informer to workaround inconsistency - // where typed objects have TypeMeta wiped out but dynamic - // objects keep kind/apiVersion fields - informer.SetTransform(func(i interface{}) (interface{}, error) { - // Ensure param is populated with its GVK for consistency - // (CRD dynamic informer always returns objects with kind/apiversion, - // but native types do not include populated TypeMeta. - if param := i.(runtime.Object); param != nil { - if param.GetObjectKind().GroupVersionKind().Empty() { - // https://github.com/kubernetes/client-go/issues/413#issue-324586398 - gvks, _, _ := k8sscheme.Scheme.ObjectKinds(param) - for _, gvk := range gvks { - if len(gvk.Kind) == 0 { - continue - } - if len(gvk.Version) == 0 || gvk.Version == runtime.APIVersionInternal { - continue - } - param.GetObjectKind().SetGroupVersionKind(gvk) - break - } - } - } + return nil +} - return i, nil - }) - } - } +// Ensures that there is an informer started for the given GVK to be used as a +// param +func (c *policyController) ensureParamInfo(paramSource *v1beta1.ParamKind, mapping *meta.RESTMapping) *paramInfo { + if info, ok := c.paramsCRDControllers[*paramSource]; ok { + return info + } - if informer == nil { - // Dynamic JSON informer fallback. 
- // Cannot use shared dynamic informer since it would be impossible - // to clean CRD informers properly with multiple dependents - // (cannot start ahead of time, and cannot track dependencies via stopCh) - informer = dynamicinformer.NewFilteredDynamicInformer( - c.dynamicClient, - paramsGVR.Resource, - corev1.NamespaceAll, - // Use same interval as is used for k8s typed sharedInformerFactory - // https://github.com/kubernetes/kubernetes/blob/7e0923899fed622efbc8679cca6b000d43633e38/cmd/kube-apiserver/app/server.go#L430 - 10*time.Minute, - cache.Indexers{cache.NamespaceIndex: cache.MetaNamespaceIndexFunc}, - nil, - ).Informer() - } + // We are not watching this param. Start an informer for it. + instanceContext, instanceCancel := context.WithCancel(c.context) - controller := generic.NewController( - generic.NewInformer[runtime.Object](informer), - c.reconcileParams, - generic.ControllerOptions{ - Workers: 1, - Name: paramSource.String() + "-controller", - }, - ) + var informer cache.SharedIndexInformer - c.paramsCRDControllers[*paramSource] = ¶mInfo{ - controller: controller, - stop: instanceCancel, - dependentDefinitions: sets.New(nn), - } + // Try to see if our provided informer factory has an informer for this type. + // We assume the informer is already started, and starts all types associated + // with it. + if genericInformer, err := c.informerFactory.ForResource(mapping.Resource); err == nil { + informer = genericInformer.Informer() - go controller.Run(instanceContext) + // Ensure the informer is started + // Use policyController's context rather than the instance context. + // PolicyController context is expected to last until app shutdown + // This is due to behavior of informerFactory which would cause the + // informer to stop running once the context is cancelled, and + // never started again. + c.informerFactory.Start(c.context.Done()) + } else { + // Dynamic JSON informer fallback. 
+ // Cannot use shared dynamic informer since it would be impossible + // to clean CRD informers properly with multiple dependents + // (cannot start ahead of time, and cannot track dependencies via stopCh) + informer = dynamicinformer.NewFilteredDynamicInformer( + c.dynamicClient, + mapping.Resource, + corev1.NamespaceAll, + // Use same interval as is used for k8s typed sharedInformerFactory + // https://github.com/kubernetes/kubernetes/blob/7e0923899fed622efbc8679cca6b000d43633e38/cmd/kube-apiserver/app/server.go#L430 + 10*time.Minute, + cache.Indexers{cache.NamespaceIndex: cache.MetaNamespaceIndexFunc}, + nil, + ).Informer() go informer.Run(instanceContext.Done()) } - return nil + controller := generic.NewController( + generic.NewInformer[runtime.Object](informer), + c.reconcileParams, + generic.ControllerOptions{ + Workers: 1, + Name: paramSource.String() + "-controller", + }, + ) + + ret := ¶mInfo{ + controller: controller, + stop: instanceCancel, + scope: mapping.Scope, + dependentDefinitions: sets.New[namespacedName](), + } + c.paramsCRDControllers[*paramSource] = ret + + go controller.Run(instanceContext) + return ret + } -func (c *policyController) reconcilePolicyBinding(namespace, name string, binding *v1alpha1.ValidatingAdmissionPolicyBinding) error { +func (c *policyController) reconcilePolicyBinding(namespace, name string, binding *v1beta1.ValidatingAdmissionPolicyBinding) error { c.mutex.Lock() defer c.mutex.Unlock() @@ -443,30 +395,6 @@ func (c *policyController) reconcilePolicyBinding(namespace, name string, bindin return nil } -func (c *policyController) reconcilePolicyStatus(namespace, name string, definition *v1alpha1.ValidatingAdmissionPolicy) error { - if definition != nil && definition.Status.ObservedGeneration < definition.Generation { - st := c.calculatePolicyStatus(definition) - newDefinition := definition.DeepCopy() - newDefinition.Status = *st - _, err := 
c.client.AdmissionregistrationV1alpha1().ValidatingAdmissionPolicies().UpdateStatus(c.context, newDefinition, metav1.UpdateOptions{}) - if err != nil { - // ignore error when the controller is not able to - // mutate the definition, and to avoid infinite requeue. - utilruntime.HandleError(err) - } - } - return nil -} - -func (c *policyController) calculatePolicyStatus(definition *v1alpha1.ValidatingAdmissionPolicy) *v1alpha1.ValidatingAdmissionPolicyStatus { - expressionWarnings := c.typeChecker.Check(definition) - // modifying a deepcopy of the original status, preserving unrelated existing data - status := definition.Status.DeepCopy() - status.ObservedGeneration = definition.Generation - status.TypeChecking = &v1alpha1.TypeChecking{ExpressionWarnings: expressionWarnings} - return status -} - func (c *policyController) reconcileParams(namespace, name string, params runtime.Object) error { // Do nothing. // When we add informational type checking we will need to compile in the @@ -504,39 +432,49 @@ func (c *policyController) latestPolicyData() []policyData { } optionalVars := cel.OptionalVariableDeclarations{HasParams: hasParam, HasAuthorizer: true} expressionOptionalVars := cel.OptionalVariableDeclarations{HasParams: hasParam, HasAuthorizer: false} - failurePolicy := convertv1alpha1FailurePolicyTypeTov1FailurePolicyType(definitionInfo.lastReconciledValue.Spec.FailurePolicy) + failurePolicy := convertv1beta1FailurePolicyTypeTov1FailurePolicyType(definitionInfo.lastReconciledValue.Spec.FailurePolicy) var matcher matchconditions.Matcher = nil matchConditions := definitionInfo.lastReconciledValue.Spec.MatchConditions + + filterCompiler := c.filterCompiler + if filterCompiler == nil { + compositedCompiler, err := cel.NewCompositedCompiler(environment.MustBaseEnvSet(environment.DefaultCompatibilityVersion())) + if err == nil { + filterCompiler = compositedCompiler + 
compositedCompiler.CompileAndStoreVariables(convertv1beta1Variables(definitionInfo.lastReconciledValue.Spec.Variables), optionalVars, environment.StoredExpressions) + } else { + utilruntime.HandleError(err) + } + } if len(matchConditions) > 0 { matchExpressionAccessors := make([]cel.ExpressionAccessor, len(matchConditions)) for i := range matchConditions { matchExpressionAccessors[i] = (*matchconditions.MatchCondition)(&matchConditions[i]) } - matcher = matchconditions.NewMatcher(c.filterCompiler.Compile(matchExpressionAccessors, optionalVars, celconfig.PerCallLimit), c.authz, failurePolicy, "validatingadmissionpolicy", definitionInfo.lastReconciledValue.Name) + matcher = matchconditions.NewMatcher(filterCompiler.Compile(matchExpressionAccessors, optionalVars, environment.StoredExpressions), failurePolicy, "policy", "validate", definitionInfo.lastReconciledValue.Name) } bindingInfo.validator = c.newValidator( - c.filterCompiler.Compile(convertv1alpha1Validations(definitionInfo.lastReconciledValue.Spec.Validations), optionalVars, celconfig.PerCallLimit), + filterCompiler.Compile(convertv1beta1Validations(definitionInfo.lastReconciledValue.Spec.Validations), optionalVars, environment.StoredExpressions), matcher, - c.filterCompiler.Compile(convertv1alpha1AuditAnnotations(definitionInfo.lastReconciledValue.Spec.AuditAnnotations), optionalVars, celconfig.PerCallLimit), - c.filterCompiler.Compile(convertV1Alpha1MessageExpressions(definitionInfo.lastReconciledValue.Spec.Validations), expressionOptionalVars, celconfig.PerCallLimit), + filterCompiler.Compile(convertv1beta1AuditAnnotations(definitionInfo.lastReconciledValue.Spec.AuditAnnotations), optionalVars, environment.StoredExpressions), + filterCompiler.Compile(convertv1beta1MessageExpressions(definitionInfo.lastReconciledValue.Spec.Validations), expressionOptionalVars, environment.StoredExpressions), failurePolicy, - c.authz, ) } bindingInfos = append(bindingInfos, *bindingInfo) } - var paramController 
generic.Controller[runtime.Object] + var pInfo paramInfo if paramKind := definitionInfo.lastReconciledValue.Spec.ParamKind; paramKind != nil { if info, ok := c.paramsCRDControllers[*paramKind]; ok { - paramController = info.controller + pInfo = *info } } res = append(res, policyData{ - definitionInfo: *definitionInfo, - paramController: paramController, - bindings: bindingInfos, + definitionInfo: *definitionInfo, + paramInfo: pInfo, + bindings: bindingInfos, }) } @@ -544,21 +482,21 @@ func (c *policyController) latestPolicyData() []policyData { return res } -func convertv1alpha1FailurePolicyTypeTov1FailurePolicyType(policyType *v1alpha1.FailurePolicyType) *v1.FailurePolicyType { +func convertv1beta1FailurePolicyTypeTov1FailurePolicyType(policyType *v1beta1.FailurePolicyType) *v1.FailurePolicyType { if policyType == nil { return nil } var v1FailPolicy v1.FailurePolicyType - if *policyType == v1alpha1.Fail { + if *policyType == v1beta1.Fail { v1FailPolicy = v1.Fail - } else if *policyType == v1alpha1.Ignore { + } else if *policyType == v1beta1.Ignore { v1FailPolicy = v1.Ignore } return &v1FailPolicy } -func convertv1alpha1Validations(inputValidations []v1alpha1.Validation) []cel.ExpressionAccessor { +func convertv1beta1Validations(inputValidations []v1beta1.Validation) []cel.ExpressionAccessor { celExpressionAccessor := make([]cel.ExpressionAccessor, len(inputValidations)) for i, validation := range inputValidations { validation := ValidationCondition{ @@ -571,7 +509,7 @@ func convertv1alpha1Validations(inputValidations []v1alpha1.Validation) []cel.Ex return celExpressionAccessor } -func convertV1Alpha1MessageExpressions(inputValidations []v1alpha1.Validation) []cel.ExpressionAccessor { +func convertv1beta1MessageExpressions(inputValidations []v1beta1.Validation) []cel.ExpressionAccessor { celExpressionAccessor := make([]cel.ExpressionAccessor, len(inputValidations)) for i, validation := range inputValidations { if validation.MessageExpression != "" { @@ -584,7 
+522,7 @@ func convertV1Alpha1MessageExpressions(inputValidations []v1alpha1.Validation) [ return celExpressionAccessor } -func convertv1alpha1AuditAnnotations(inputValidations []v1alpha1.AuditAnnotation) []cel.ExpressionAccessor { +func convertv1beta1AuditAnnotations(inputValidations []v1beta1.AuditAnnotation) []cel.ExpressionAccessor { celExpressionAccessor := make([]cel.ExpressionAccessor, len(inputValidations)) for i, validation := range inputValidations { validation := AuditAnnotationCondition{ @@ -596,6 +534,14 @@ func convertv1alpha1AuditAnnotations(inputValidations []v1alpha1.AuditAnnotation return celExpressionAccessor } +func convertv1beta1Variables(variables []v1beta1.Variable) []cel.NamedExpressionAccessor { + namedExpressions := make([]cel.NamedExpressionAccessor, len(variables)) + for i, variable := range variables { + namedExpressions[i] = &Variable{Name: variable.Name, Expression: variable.Expression} + } + return namedExpressions +} + func getNamespaceName(namespace, name string) namespacedName { return namespacedName{ namespace: namespace, diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/admission/plugin/validatingadmissionpolicy/interface.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/admission/plugin/validatingadmissionpolicy/interface.go index 0f84152e8b4c..206fc1378316 100644 --- a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/admission/plugin/validatingadmissionpolicy/interface.go +++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/admission/plugin/validatingadmissionpolicy/interface.go @@ -21,12 +21,14 @@ import ( celgo "github.com/google/cel-go/cel" - "k8s.io/api/admissionregistration/v1alpha1" + "k8s.io/api/admissionregistration/v1beta1" + corev1 "k8s.io/api/core/v1" metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" "k8s.io/apimachinery/pkg/runtime" "k8s.io/apimachinery/pkg/runtime/schema" "k8s.io/apiserver/pkg/admission" "k8s.io/apiserver/pkg/admission/plugin/cel" + "k8s.io/apiserver/pkg/authorization/authorizer" ) var _ 
cel.ExpressionAccessor = &ValidationCondition{} @@ -60,17 +62,39 @@ func (v *AuditAnnotationCondition) ReturnTypes() []*celgo.Type { return []*celgo.Type{celgo.StringType, celgo.NullType} } +// Variable is a named expression for composition. +type Variable struct { + Name string + Expression string +} + +func (v *Variable) GetExpression() string { + return v.Expression +} + +func (v *Variable) ReturnTypes() []*celgo.Type { + return []*celgo.Type{celgo.AnyType, celgo.DynType} +} + +func (v *Variable) GetName() string { + return v.Name +} + // Matcher is used for matching ValidatingAdmissionPolicy and ValidatingAdmissionPolicyBinding to attributes type Matcher interface { admission.InitializationValidator // DefinitionMatches says whether this policy definition matches the provided admission // resource request - DefinitionMatches(a admission.Attributes, o admission.ObjectInterfaces, definition *v1alpha1.ValidatingAdmissionPolicy) (bool, schema.GroupVersionKind, error) + DefinitionMatches(a admission.Attributes, o admission.ObjectInterfaces, definition *v1beta1.ValidatingAdmissionPolicy) (bool, schema.GroupVersionResource, schema.GroupVersionKind, error) // BindingMatches says whether this policy definition matches the provided admission // resource request - BindingMatches(a admission.Attributes, o admission.ObjectInterfaces, definition *v1alpha1.ValidatingAdmissionPolicyBinding) (bool, error) + BindingMatches(a admission.Attributes, o admission.ObjectInterfaces, definition *v1beta1.ValidatingAdmissionPolicyBinding) (bool, error) + + // GetNamespace retrieves the Namespace resource by the given name. The name may be empty, in which case + // GetNamespace must return nil, nil + GetNamespace(name string) (*corev1.Namespace, error) } // ValidateResult defines the result of a Validator.Validate operation. 
@@ -85,5 +109,5 @@ type ValidateResult struct { type Validator interface { // Validate is used to take cel evaluations and convert into decisions // runtimeCELCostBudget was added for testing purpose only. Callers should always use const RuntimeCELCostBudget from k8s.io/apiserver/pkg/apis/cel/config.go as input. - Validate(ctx context.Context, versionedAttr *admission.VersionedAttributes, versionedParams runtime.Object, runtimeCELCostBudget int64) ValidateResult + Validate(ctx context.Context, matchedResource schema.GroupVersionResource, versionedAttr *admission.VersionedAttributes, versionedParams runtime.Object, namespace *corev1.Namespace, runtimeCELCostBudget int64, authz authorizer.Authorizer) ValidateResult } diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/admission/plugin/validatingadmissionpolicy/matcher.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/admission/plugin/validatingadmissionpolicy/matcher.go index a659a99f14c4..397f2c267146 100644 --- a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/admission/plugin/validatingadmissionpolicy/matcher.go +++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/admission/plugin/validatingadmissionpolicy/matcher.go @@ -17,7 +17,8 @@ limitations under the License. 
package validatingadmissionpolicy import ( - "k8s.io/api/admissionregistration/v1alpha1" + "k8s.io/api/admissionregistration/v1beta1" + corev1 "k8s.io/api/core/v1" metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" "k8s.io/apimachinery/pkg/labels" "k8s.io/apimachinery/pkg/runtime/schema" @@ -28,7 +29,7 @@ import ( var _ matching.MatchCriteria = &matchCriteria{} type matchCriteria struct { - constraints *v1alpha1.MatchResources + constraints *v1beta1.MatchResources } // GetParsedNamespaceSelector returns the converted LabelSelector which implements labels.Selector @@ -42,7 +43,7 @@ func (m *matchCriteria) GetParsedObjectSelector() (labels.Selector, error) { } // GetMatchResources returns the matchConstraints -func (m *matchCriteria) GetMatchResources() v1alpha1.MatchResources { +func (m *matchCriteria) GetMatchResources() v1beta1.MatchResources { return *m.constraints } @@ -62,17 +63,21 @@ func (c *matcher) ValidateInitialization() error { } // DefinitionMatches returns whether this ValidatingAdmissionPolicy matches the provided admission resource request -func (c *matcher) DefinitionMatches(a admission.Attributes, o admission.ObjectInterfaces, definition *v1alpha1.ValidatingAdmissionPolicy) (bool, schema.GroupVersionKind, error) { +func (c *matcher) DefinitionMatches(a admission.Attributes, o admission.ObjectInterfaces, definition *v1beta1.ValidatingAdmissionPolicy) (bool, schema.GroupVersionResource, schema.GroupVersionKind, error) { criteria := matchCriteria{constraints: definition.Spec.MatchConstraints} return c.Matcher.Matches(a, o, &criteria) } // BindingMatches returns whether this ValidatingAdmissionPolicyBinding matches the provided admission resource request -func (c *matcher) BindingMatches(a admission.Attributes, o admission.ObjectInterfaces, binding *v1alpha1.ValidatingAdmissionPolicyBinding) (bool, error) { +func (c *matcher) BindingMatches(a admission.Attributes, o admission.ObjectInterfaces, binding *v1beta1.ValidatingAdmissionPolicyBinding) (bool, error) 
{ if binding.Spec.MatchResources == nil { return true, nil } criteria := matchCriteria{constraints: binding.Spec.MatchResources} - isMatch, _, err := c.Matcher.Matches(a, o, &criteria) + isMatch, _, _, err := c.Matcher.Matches(a, o, &criteria) return isMatch, err } + +func (c *matcher) GetNamespace(name string) (*corev1.Namespace, error) { + return c.Matcher.GetNamespace(name) +} diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/admission/plugin/validatingadmissionpolicy/matching/matching.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/admission/plugin/validatingadmissionpolicy/matching/matching.go index c4f7e64af2da..ebdb61db889a 100644 --- a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/admission/plugin/validatingadmissionpolicy/matching/matching.go +++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/admission/plugin/validatingadmissionpolicy/matching/matching.go @@ -20,7 +20,8 @@ import ( "fmt" v1 "k8s.io/api/admissionregistration/v1" - "k8s.io/api/admissionregistration/v1alpha1" + "k8s.io/api/admissionregistration/v1beta1" + corev1 "k8s.io/api/core/v1" "k8s.io/apimachinery/pkg/runtime/schema" "k8s.io/apiserver/pkg/admission" "k8s.io/client-go/kubernetes" @@ -35,7 +36,7 @@ type MatchCriteria interface { namespace.NamespaceSelectorProvider object.ObjectSelectorProvider - GetMatchResources() v1alpha1.MatchResources + GetMatchResources() v1beta1.MatchResources } // Matcher decides if a request matches against matchCriteria @@ -44,6 +45,10 @@ type Matcher struct { objectMatcher *object.Matcher } +func (m *Matcher) GetNamespace(name string) (*corev1.Namespace, error) { + return m.namespaceMatcher.GetNamespace(name) +} + // NewMatcher initialize the matcher with dependencies requires func NewMatcher( namespaceLister listersv1.NamespaceLister, @@ -66,56 +71,60 @@ func (m *Matcher) ValidateInitialization() error { return nil } -func (m *Matcher) Matches(attr admission.Attributes, o admission.ObjectInterfaces, criteria MatchCriteria) (bool, 
schema.GroupVersionKind, error) { +func (m *Matcher) Matches(attr admission.Attributes, o admission.ObjectInterfaces, criteria MatchCriteria) (bool, schema.GroupVersionResource, schema.GroupVersionKind, error) { matches, matchNsErr := m.namespaceMatcher.MatchNamespaceSelector(criteria, attr) // Should not return an error here for policy which do not apply to the request, even if err is an unexpected scenario. if !matches && matchNsErr == nil { - return false, schema.GroupVersionKind{}, nil + return false, schema.GroupVersionResource{}, schema.GroupVersionKind{}, nil } matches, matchObjErr := m.objectMatcher.MatchObjectSelector(criteria, attr) // Should not return an error here for policy which do not apply to the request, even if err is an unexpected scenario. if !matches && matchObjErr == nil { - return false, schema.GroupVersionKind{}, nil + return false, schema.GroupVersionResource{}, schema.GroupVersionKind{}, nil } matchResources := criteria.GetMatchResources() matchPolicy := matchResources.MatchPolicy - if isExcluded, _, err := matchesResourceRules(matchResources.ExcludeResourceRules, matchPolicy, attr, o); isExcluded || err != nil { - return false, schema.GroupVersionKind{}, err + if isExcluded, _, _, err := matchesResourceRules(matchResources.ExcludeResourceRules, matchPolicy, attr, o); isExcluded || err != nil { + return false, schema.GroupVersionResource{}, schema.GroupVersionKind{}, err } var ( - isMatch bool - matchKind schema.GroupVersionKind - matchErr error + isMatch bool + matchResource schema.GroupVersionResource + matchKind schema.GroupVersionKind + matchErr error ) if len(matchResources.ResourceRules) == 0 { isMatch = true matchKind = attr.GetKind() + matchResource = attr.GetResource() } else { - isMatch, matchKind, matchErr = matchesResourceRules(matchResources.ResourceRules, matchPolicy, attr, o) + isMatch, matchResource, matchKind, matchErr = matchesResourceRules(matchResources.ResourceRules, matchPolicy, attr, o) } if matchErr != nil { - 
return false, schema.GroupVersionKind{}, matchErr + return false, schema.GroupVersionResource{}, schema.GroupVersionKind{}, matchErr } if !isMatch { - return false, schema.GroupVersionKind{}, nil + return false, schema.GroupVersionResource{}, schema.GroupVersionKind{}, nil } // now that we know this applies to this request otherwise, if there were selector errors, return them if matchNsErr != nil { - return false, schema.GroupVersionKind{}, matchNsErr + return false, schema.GroupVersionResource{}, schema.GroupVersionKind{}, matchNsErr } if matchObjErr != nil { - return false, schema.GroupVersionKind{}, matchObjErr + return false, schema.GroupVersionResource{}, schema.GroupVersionKind{}, matchObjErr } - return true, matchKind, nil + return true, matchResource, matchKind, nil } -func matchesResourceRules(namedRules []v1alpha1.NamedRuleWithOperations, matchPolicy *v1alpha1.MatchPolicyType, attr admission.Attributes, o admission.ObjectInterfaces) (bool, schema.GroupVersionKind, error) { +func matchesResourceRules(namedRules []v1beta1.NamedRuleWithOperations, matchPolicy *v1beta1.MatchPolicyType, attr admission.Attributes, o admission.ObjectInterfaces) (bool, schema.GroupVersionResource, schema.GroupVersionKind, error) { matchKind := attr.GetKind() + matchResource := attr.GetResource() + for _, namedRule := range namedRules { rule := v1.RuleWithOperations(namedRule.RuleWithOperations) ruleMatcher := rules.Matcher{ @@ -127,22 +136,22 @@ func matchesResourceRules(namedRules []v1alpha1.NamedRuleWithOperations, matchPo } // an empty name list always matches if len(namedRule.ResourceNames) == 0 { - return true, matchKind, nil + return true, matchResource, matchKind, nil } // TODO: GetName() can return an empty string if the user is relying on // the API server to generate the name... 
figure out what to do for this edge case name := attr.GetName() for _, matchedName := range namedRule.ResourceNames { if name == matchedName { - return true, matchKind, nil + return true, matchResource, matchKind, nil } } } // if match policy is undefined or exact, don't perform fuzzy matching // note that defaulting to fuzzy matching is set by the API - if matchPolicy == nil || *matchPolicy == v1alpha1.Exact { - return false, schema.GroupVersionKind{}, nil + if matchPolicy == nil || *matchPolicy == v1beta1.Exact { + return false, schema.GroupVersionResource{}, schema.GroupVersionKind{}, nil } attrWithOverride := &attrWithResourceOverride{Attributes: attr} @@ -164,11 +173,11 @@ func matchesResourceRules(namedRules []v1alpha1.NamedRuleWithOperations, matchPo } matchKind = o.GetEquivalentResourceMapper().KindFor(equivalent, attr.GetSubresource()) if matchKind.Empty() { - return false, schema.GroupVersionKind{}, fmt.Errorf("unable to convert to %v: unknown kind", equivalent) + return false, schema.GroupVersionResource{}, schema.GroupVersionKind{}, fmt.Errorf("unable to convert to %v: unknown kind", equivalent) } // an empty name list always matches if len(namedRule.ResourceNames) == 0 { - return true, matchKind, nil + return true, equivalent, matchKind, nil } // TODO: GetName() can return an empty string if the user is relying on @@ -176,12 +185,12 @@ func matchesResourceRules(namedRules []v1alpha1.NamedRuleWithOperations, matchPo name := attr.GetName() for _, matchedName := range namedRule.ResourceNames { if name == matchedName { - return true, matchKind, nil + return true, equivalent, matchKind, nil } } } } - return false, schema.GroupVersionKind{}, nil + return false, schema.GroupVersionResource{}, schema.GroupVersionKind{}, nil } type attrWithResourceOverride struct { diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/admission/plugin/validatingadmissionpolicy/typechecking.go 
b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/admission/plugin/validatingadmissionpolicy/typechecking.go index 7b128e38185c..6d73e237b078 100644 --- a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/admission/plugin/validatingadmissionpolicy/typechecking.go +++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/admission/plugin/validatingadmissionpolicy/typechecking.go @@ -21,19 +21,20 @@ import ( "fmt" "sort" "strings" - "sync" + "time" "github.com/google/cel-go/cel" - "github.com/google/cel-go/common/types/ref" - "k8s.io/api/admissionregistration/v1alpha1" + "k8s.io/api/admissionregistration/v1beta1" "k8s.io/apimachinery/pkg/api/meta" "k8s.io/apimachinery/pkg/runtime/schema" "k8s.io/apimachinery/pkg/util/sets" "k8s.io/apimachinery/pkg/util/validation/field" + "k8s.io/apimachinery/pkg/util/version" plugincel "k8s.io/apiserver/pkg/admission/plugin/cel" apiservercel "k8s.io/apiserver/pkg/cel" "k8s.io/apiserver/pkg/cel/common" + "k8s.io/apiserver/pkg/cel/environment" "k8s.io/apiserver/pkg/cel/library" "k8s.io/apiserver/pkg/cel/openapi" "k8s.io/apiserver/pkg/cel/openapi/resolver" @@ -43,8 +44,17 @@ import ( const maxTypesToCheck = 10 type TypeChecker struct { - schemaResolver resolver.SchemaResolver - restMapper meta.RESTMapper + SchemaResolver resolver.SchemaResolver + RestMapper meta.RESTMapper +} + +// TypeCheckingContext holds information about the policy being type-checked. +// The struct is opaque to the caller. +type TypeCheckingContext struct { + gvks []schema.GroupVersionKind + declTypes []*apiservercel.DeclType + paramGVK schema.GroupVersionKind + paramDeclType *apiservercel.DeclType } type typeOverwrite struct { @@ -52,127 +62,148 @@ type typeOverwrite struct { params *apiservercel.DeclType } -// typeCheckingResult holds the issues found during type checking, any returned +// TypeCheckingResult holds the issues found during type checking, any returned // error, and the gvk that the type checking is performed against. 
-type typeCheckingResult struct { - gvk schema.GroupVersionKind +type TypeCheckingResult struct { + // GVK is the associated GVK + GVK schema.GroupVersionKind + // Issues contain machine-readable information about the typechecking result. + Issues *cel.Issues + // Err is the possible error that was encountered during type checking. + Err error +} + +// TypeCheckingResults is a collection of TypeCheckingResult +type TypeCheckingResults []*TypeCheckingResult - issues *cel.Issues - err error +func (rs TypeCheckingResults) String() string { + var messages []string + for _, r := range rs { + message := r.String() + if message != "" { + messages = append(messages, message) + } + } + return strings.Join(messages, "\n") +} + +// String converts the result to human-readable form as a string. +func (r *TypeCheckingResult) String() string { + if r.Issues == nil && r.Err == nil { + return "" + } + if r.Err != nil { + return fmt.Sprintf("%v: type checking error: %v\n", r.GVK, r.Err) + } + return fmt.Sprintf("%v: %s\n", r.GVK, r.Issues) } // Check performs the type check against the given policy, and formats the result // as []ExpressionWarning that is ready to be set in policy.Status // The result is nil if type checking returns no warning. // The policy object is NOT mutated.
The caller should update Status accordingly -func (c *TypeChecker) Check(policy *v1alpha1.ValidatingAdmissionPolicy) []v1alpha1.ExpressionWarning { - exps := make([]string, 0, len(policy.Spec.Validations)) - // check main validation expressions, located in spec.validations[*] +func (c *TypeChecker) Check(policy *v1beta1.ValidatingAdmissionPolicy) []v1beta1.ExpressionWarning { + ctx := c.CreateContext(policy) + + // warnings to return, note that the capacity is optimistically set to zero + var warnings []v1beta1.ExpressionWarning // intentionally not setting capacity + + // check main validation expressions and their message expressions, located in spec.validations[*] fieldRef := field.NewPath("spec", "validations") - for _, v := range policy.Spec.Validations { - exps = append(exps, v.Expression) - } - msgs := c.CheckExpressions(exps, policy.Spec.ParamKind != nil, policy) - var results []v1alpha1.ExpressionWarning // intentionally not setting capacity - for i, msg := range msgs { - if msg != "" { - results = append(results, v1alpha1.ExpressionWarning{ + for i, v := range policy.Spec.Validations { + results := c.CheckExpression(ctx, v.Expression) + if len(results) != 0 { + warnings = append(warnings, v1beta1.ExpressionWarning{ FieldRef: fieldRef.Index(i).Child("expression").String(), - Warning: msg, + Warning: results.String(), + }) + } + // Note that MessageExpression is optional + if v.MessageExpression == "" { + continue + } + results = c.CheckExpression(ctx, v.MessageExpression) + if len(results) != 0 { + warnings = append(warnings, v1beta1.ExpressionWarning{ + FieldRef: fieldRef.Index(i).Child("messageExpression").String(), + Warning: results.String(), }) } } - return results + + return warnings } -// CheckExpressions checks a set of compiled CEL programs against the GVKs defined in -// policy.Spec.MatchConstraints -// The result is a human-readable form that describe which expressions -// violate what types at what place. 
The indexes of the return []string -// matches these of the input expressions. -// TODO: It is much more useful to have machine-readable output and let the -// client format it. That requires an update to the KEP, probably in coming -// releases. -func (c *TypeChecker) CheckExpressions(expressions []string, hasParams bool, policy *v1alpha1.ValidatingAdmissionPolicy) []string { - var allWarnings []string +// CreateContext resolves all types and their schemas from a policy definition and creates the context. +func (c *TypeChecker) CreateContext(policy *v1beta1.ValidatingAdmissionPolicy) *TypeCheckingContext { + ctx := new(TypeCheckingContext) allGvks := c.typesToCheck(policy) gvks := make([]schema.GroupVersionKind, 0, len(allGvks)) - schemas := make([]common.Schema, 0, len(allGvks)) + declTypes := make([]*apiservercel.DeclType, 0, len(allGvks)) for _, gvk := range allGvks { - s, err := c.schemaResolver.ResolveSchema(gvk) + declType, err := c.declType(gvk) if err != nil { // type checking errors MUST NOT alter the behavior of the policy // even if an error occurs. if !errors.Is(err, resolver.ErrSchemaNotFound) { // Anything except ErrSchemaNotFound is an internal error - klog.ErrorS(err, "internal error: schema resolution failure", "gvk", gvk) + klog.V(2).ErrorS(err, "internal error: schema resolution failure", "gvk", gvk) } - // skip if an unrecoverable error occurs. 
+ // skip for not found or internal error continue } gvks = append(gvks, gvk) - schemas = append(schemas, &openapi.Schema{Schema: s}) + declTypes = append(declTypes, declType) } + ctx.gvks = gvks + ctx.declTypes = declTypes - paramsType := c.paramsType(policy) - paramsDeclType, err := c.declType(paramsType) + paramsGVK := c.paramsGVK(policy) // maybe empty, correctly handled + paramsDeclType, err := c.declType(paramsGVK) if err != nil { if !errors.Is(err, resolver.ErrSchemaNotFound) { - klog.V(2).ErrorS(err, "cannot resolve schema for params", "gvk", paramsType) + klog.V(2).ErrorS(err, "internal error: cannot resolve schema for params", "gvk", paramsGVK) } paramsDeclType = nil } + ctx.paramGVK = paramsGVK + ctx.paramDeclType = paramsDeclType + return ctx +} - for _, exp := range expressions { - var results []typeCheckingResult - for i, gvk := range gvks { - s := schemas[i] - issues, err := c.checkExpression(exp, hasParams, typeOverwrite{ - object: common.SchemaDeclType(s, true), - params: paramsDeclType, - }) - // save even if no issues are found, for the sake of formatting. - results = append(results, typeCheckingResult{ - gvk: gvk, - issues: issues, - err: err, - }) +// CheckExpression type checks a single expression, given the context +func (c *TypeChecker) CheckExpression(ctx *TypeCheckingContext, expression string) TypeCheckingResults { + var results TypeCheckingResults + for i, gvk := range ctx.gvks { + declType := ctx.declTypes[i] + // TODO(jiahuif) hasAuthorizer always true for now, will change after expanding type checking to all fields.
+ issues, err := c.checkExpression(expression, ctx.paramDeclType != nil, true, typeOverwrite{ + object: declType, + params: ctx.paramDeclType, + }) + if issues != nil || err != nil { + results = append(results, &TypeCheckingResult{Issues: issues, Err: err, GVK: gvk}) } - allWarnings = append(allWarnings, c.formatWarning(results)) } - - return allWarnings + return results } -// formatWarning converts the resulting issues and possible error during -// type checking into a human-readable string -func (c *TypeChecker) formatWarning(results []typeCheckingResult) string { - var sb strings.Builder - for _, result := range results { - if result.issues == nil && result.err == nil { - continue - } - if result.err != nil { - sb.WriteString(fmt.Sprintf("%v: type checking error: %v\n", result.gvk, result.err)) - } else { - sb.WriteString(fmt.Sprintf("%v: %s\n", result.gvk, result.issues)) - } - } - return strings.TrimSuffix(sb.String(), "\n") +func generateUniqueTypeName(kind string) string { + return fmt.Sprintf("%s%d", kind, time.Now().Nanosecond()) } func (c *TypeChecker) declType(gvk schema.GroupVersionKind) (*apiservercel.DeclType, error) { if gvk.Empty() { return nil, nil } - s, err := c.schemaResolver.ResolveSchema(gvk) + s, err := c.SchemaResolver.ResolveSchema(gvk) if err != nil { return nil, err } - return common.SchemaDeclType(&openapi.Schema{Schema: s}, true), nil + return common.SchemaDeclType(&openapi.Schema{Schema: s}, true).MaybeAssignTypeName(generateUniqueTypeName(gvk.Kind)), nil } -func (c *TypeChecker) paramsType(policy *v1alpha1.ValidatingAdmissionPolicy) schema.GroupVersionKind { +func (c *TypeChecker) paramsGVK(policy *v1beta1.ValidatingAdmissionPolicy) schema.GroupVersionKind { if policy.Spec.ParamKind == nil { return schema.GroupVersionKind{} } @@ -183,8 +214,8 @@ func (c *TypeChecker) paramsType(policy *v1alpha1.ValidatingAdmissionPolicy) sch return gv.WithKind(policy.Spec.ParamKind.Kind) } -func (c *TypeChecker) checkExpression(expression string, 
hasParams bool, types typeOverwrite) (*cel.Issues, error) { - env, err := buildEnv(hasParams, types) +func (c *TypeChecker) checkExpression(expression string, hasParams, hasAuthorizer bool, types typeOverwrite) (*cel.Issues, error) { + env, err := buildEnv(hasParams, hasAuthorizer, types) if err != nil { return nil, err } @@ -202,7 +233,7 @@ func (c *TypeChecker) checkExpression(expression string, hasParams bool, types t // typesToCheck extracts a list of GVKs that needs type checking from the policy // the result is sorted in the order of Group, Version, and Kind -func (c *TypeChecker) typesToCheck(p *v1alpha1.ValidatingAdmissionPolicy) []schema.GroupVersionKind { +func (c *TypeChecker) typesToCheck(p *v1beta1.ValidatingAdmissionPolicy) []schema.GroupVersionKind { gvks := sets.New[schema.GroupVersionKind]() if p.Spec.MatchConstraints == nil || len(p.Spec.MatchConstraints.ResourceRules) == 0 { return nil @@ -235,7 +266,7 @@ func (c *TypeChecker) typesToCheck(p *v1alpha1.ValidatingAdmissionPolicy) []sche Version: version, Resource: resource, } - resolved, err := c.restMapper.KindsFor(gvr) + resolved, err := c.RestMapper.KindsFor(gvr) if err != nil { continue } @@ -263,7 +294,7 @@ func (c *TypeChecker) typesToCheck(p *v1alpha1.ValidatingAdmissionPolicy) []sche return sortGVKList(gvks.UnsortedList()) } -func extractGroups(rule *v1alpha1.Rule) []string { +func extractGroups(rule *v1beta1.Rule) []string { groups := make([]string, 0, len(rule.APIGroups)) for _, group := range rule.APIGroups { // give up if wildcard @@ -275,7 +306,7 @@ func extractGroups(rule *v1alpha1.Rule) []string { return groups } -func extractVersions(rule *v1alpha1.Rule) []string { +func extractVersions(rule *v1beta1.Rule) []string { versions := make([]string, 0, len(rule.APIVersions)) for _, version := range rule.APIVersions { if strings.ContainsAny(version, "*") { @@ -286,7 +317,7 @@ func extractVersions(rule *v1alpha1.Rule) []string { return versions } -func extractResources(rule *v1alpha1.Rule) 
[]string { +func extractResources(rule *v1beta1.Rule) []string { resources := make([]string, 0, len(rule.Resources)) for _, resource := range rule.Resources { // skip wildcard and subresources @@ -313,123 +344,64 @@ func sortGVKList(list []schema.GroupVersionKind) []schema.GroupVersionKind { return list } -func buildEnv(hasParams bool, types typeOverwrite) (*cel.Env, error) { - baseEnv, err := getBaseEnv() - if err != nil { - return nil, err - } - reg := apiservercel.NewRegistry(baseEnv) +func buildEnv(hasParams bool, hasAuthorizer bool, types typeOverwrite) (*cel.Env, error) { + baseEnv := environment.MustBaseEnvSet(environment.DefaultCompatibilityVersion()) requestType := plugincel.BuildRequestType() + namespaceType := plugincel.BuildNamespaceType() var varOpts []cel.EnvOption - var rts []*apiservercel.RuleTypes + var declTypes []*apiservercel.DeclType + + // namespace, hand-crafted type + declTypes = append(declTypes, namespaceType) + varOpts = append(varOpts, createVariableOpts(namespaceType, plugincel.NamespaceVarName)...) // request, hand-crafted type - rt, opts, err := createRuleTypesAndOptions(reg, requestType, plugincel.RequestVarName) - if err != nil { - return nil, err - } - rts = append(rts, rt) - varOpts = append(varOpts, opts...) + declTypes = append(declTypes, requestType) + varOpts = append(varOpts, createVariableOpts(requestType, plugincel.RequestVarName)...) // object and oldObject, same type, type(s) resolved from constraints - rt, opts, err = createRuleTypesAndOptions(reg, types.object, plugincel.ObjectVarName, plugincel.OldObjectVarName) - if err != nil { - return nil, err - } - rts = append(rts, rt) - varOpts = append(varOpts, opts...) + declTypes = append(declTypes, types.object) + varOpts = append(varOpts, createVariableOpts(types.object, plugincel.ObjectVarName, plugincel.OldObjectVarName)...) 
// params, defined by ParamKind - if hasParams { - rt, opts, err := createRuleTypesAndOptions(reg, types.params, plugincel.ParamsVarName) - if err != nil { - return nil, err - } - rts = append(rts, rt) - varOpts = append(varOpts, opts...) + if hasParams && types.params != nil { + declTypes = append(declTypes, types.params) + varOpts = append(varOpts, createVariableOpts(types.params, plugincel.ParamsVarName)...) } - opts, err = ruleTypesOpts(rts, baseEnv.TypeProvider()) - if err != nil { - return nil, err + // authorizer, implicitly available to all expressions of a policy + if hasAuthorizer { + // we only need its structure but not the variable itself + varOpts = append(varOpts, cel.Variable("authorizer", library.AuthorizerType)) } - opts = append(opts, varOpts...) // add variables after ruleTypes. - env, err := baseEnv.Extend(opts...) + + env, err := baseEnv.Extend( + environment.VersionedOptions{ + // Feature epoch was actually 1.26, but we artificially set it to 1.0 because these + // options should always be present. + IntroducedVersion: version.MajorMinor(1, 0), + EnvOptions: varOpts, + DeclTypes: declTypes, + }, + ) if err != nil { return nil, err } - return env, nil + return env.Env(environment.StoredExpressions) } -// createRuleTypeAndOptions creates the cel RuleTypes and a slice of EnvOption +// createVariableOpts creates a slice of EnvOption // that can be used for creating a CEL env containing variables of declType. // declType can be nil, in which case the variables will be of DynType. 
-func createRuleTypesAndOptions(registry *apiservercel.Registry, declType *apiservercel.DeclType, variables ...string) (*apiservercel.RuleTypes, []cel.EnvOption, error) { +func createVariableOpts(declType *apiservercel.DeclType, variables ...string) []cel.EnvOption { opts := make([]cel.EnvOption, 0, len(variables)) - // untyped, use DynType - if declType == nil { - for _, v := range variables { - opts = append(opts, cel.Variable(v, cel.DynType)) - } - return nil, opts, nil - } - // create a RuleType for the given type - rt, err := apiservercel.NewRuleTypes(declType.TypeName(), declType, registry) - if err != nil { - return nil, nil, err - } - if rt == nil { - return nil, nil, nil + t := cel.DynType + if declType != nil { + t = declType.CelType() } for _, v := range variables { - opts = append(opts, cel.Variable(v, declType.CelType())) - } - return rt, opts, nil -} - -func ruleTypesOpts(ruleTypes []*apiservercel.RuleTypes, underlyingTypeProvider ref.TypeProvider) ([]cel.EnvOption, error) { - var providers []ref.TypeProvider // may be unused, too small to matter - var adapters []ref.TypeAdapter - for _, rt := range ruleTypes { - if rt != nil { - withTP, err := rt.WithTypeProvider(underlyingTypeProvider) - if err != nil { - return nil, err - } - providers = append(providers, withTP) - adapters = append(adapters, withTP) - } - } - var tp ref.TypeProvider - var ta ref.TypeAdapter - switch len(providers) { - case 0: - return nil, nil - case 1: - tp = providers[0] - ta = adapters[0] - default: - tp = &apiservercel.CompositedTypeProvider{Providers: providers} - ta = &apiservercel.CompositedTypeAdapter{Adapters: adapters} + opts = append(opts, cel.Variable(v, t)) } - return []cel.EnvOption{cel.CustomTypeProvider(tp), cel.CustomTypeAdapter(ta)}, nil + return opts } - -func getBaseEnv() (*cel.Env, error) { - typeCheckingBaseEnvInit.Do(func() { - var opts []cel.EnvOption - opts = append(opts, cel.HomogeneousAggregateLiterals()) - // Validate function declarations once during 
base env initialization, - // so they don't need to be evaluated each time a CEL rule is compiled. - // This is a relatively expensive operation. - opts = append(opts, cel.EagerlyValidateDeclarations(true), cel.DefaultUTCTimeZone(true)) - opts = append(opts, library.ExtensionLibs...) - typeCheckingBaseEnv, typeCheckingBaseEnvError = cel.NewEnv(opts...) - }) - return typeCheckingBaseEnv, typeCheckingBaseEnvError -} - -var typeCheckingBaseEnv *cel.Env -var typeCheckingBaseEnvError error -var typeCheckingBaseEnvInit sync.Once diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/admission/plugin/validatingadmissionpolicy/validator.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/admission/plugin/validatingadmissionpolicy/validator.go index 448750c91994..9630a4974719 100644 --- a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/admission/plugin/validatingadmissionpolicy/validator.go +++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/admission/plugin/validatingadmissionpolicy/validator.go @@ -24,8 +24,10 @@ import ( celtypes "github.com/google/cel-go/common/types" v1 "k8s.io/api/admissionregistration/v1" + corev1 "k8s.io/api/core/v1" metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" "k8s.io/apimachinery/pkg/runtime" + "k8s.io/apimachinery/pkg/runtime/schema" "k8s.io/apiserver/pkg/admission" "k8s.io/apiserver/pkg/admission/plugin/cel" "k8s.io/apiserver/pkg/admission/plugin/webhook/matchconditions" @@ -42,17 +44,15 @@ type validator struct { auditAnnotationFilter cel.Filter messageFilter cel.Filter failPolicy *v1.FailurePolicyType - authorizer authorizer.Authorizer } -func NewValidator(validationFilter cel.Filter, celMatcher matchconditions.Matcher, auditAnnotationFilter, messageFilter cel.Filter, failPolicy *v1.FailurePolicyType, authorizer authorizer.Authorizer) Validator { +func NewValidator(validationFilter cel.Filter, celMatcher matchconditions.Matcher, auditAnnotationFilter, messageFilter cel.Filter, failPolicy *v1.FailurePolicyType) Validator { return 
&validator{ celMatcher: celMatcher, validationFilter: validationFilter, auditAnnotationFilter: auditAnnotationFilter, messageFilter: messageFilter, failPolicy: failPolicy, - authorizer: authorizer, } } @@ -72,7 +72,8 @@ func auditAnnotationEvaluationForError(f v1.FailurePolicyType) PolicyAuditAnnota // Validate takes a list of Evaluation and a failure policy and converts them into actionable PolicyDecisions // runtimeCELCostBudget was added for testing purpose only. Callers should always use const RuntimeCELCostBudget from k8s.io/apiserver/pkg/apis/cel/config.go as input. -func (v *validator) Validate(ctx context.Context, versionedAttr *admission.VersionedAttributes, versionedParams runtime.Object, runtimeCELCostBudget int64) ValidateResult { + +func (v *validator) Validate(ctx context.Context, matchedResource schema.GroupVersionResource, versionedAttr *admission.VersionedAttributes, versionedParams runtime.Object, namespace *corev1.Namespace, runtimeCELCostBudget int64, authz authorizer.Authorizer) ValidateResult { var f v1.FailurePolicyType if v.failPolicy == nil { f = v1.Fail @@ -81,7 +82,7 @@ func (v *validator) Validate(ctx context.Context, versionedAttr *admission.Versi } if v.celMatcher != nil { - matchResults := v.celMatcher.Match(ctx, versionedAttr, versionedParams) + matchResults := v.celMatcher.Match(ctx, versionedAttr, versionedParams, authz) if matchResults.Error != nil { return ValidateResult{ Decisions: []PolicyDecision{ @@ -100,10 +101,12 @@ func (v *validator) Validate(ctx context.Context, versionedAttr *admission.Versi } } - optionalVars := cel.OptionalVariableBindings{VersionedParams: versionedParams, Authorizer: v.authorizer} + optionalVars := cel.OptionalVariableBindings{VersionedParams: versionedParams, Authorizer: authz} expressionOptionalVars := cel.OptionalVariableBindings{VersionedParams: versionedParams} - admissionRequest := cel.CreateAdmissionRequest(versionedAttr.Attributes) - evalResults, remainingBudget, err := 
v.validationFilter.ForInput(ctx, versionedAttr, admissionRequest, optionalVars, runtimeCELCostBudget) + admissionRequest := cel.CreateAdmissionRequest(versionedAttr.Attributes, metav1.GroupVersionResource(matchedResource), metav1.GroupVersionKind(versionedAttr.VersionedKind)) + // Decide which fields are exposed + ns := cel.CreateNamespaceObject(namespace) + evalResults, remainingBudget, err := v.validationFilter.ForInput(ctx, versionedAttr, admissionRequest, optionalVars, ns, runtimeCELCostBudget) if err != nil { return ValidateResult{ Decisions: []PolicyDecision{ @@ -116,7 +119,7 @@ func (v *validator) Validate(ctx context.Context, versionedAttr *admission.Versi } } decisions := make([]PolicyDecision, len(evalResults)) - messageResults, _, err := v.messageFilter.ForInput(ctx, versionedAttr, admissionRequest, expressionOptionalVars, remainingBudget) + messageResults, _, err := v.messageFilter.ForInput(ctx, versionedAttr, admissionRequest, expressionOptionalVars, ns, remainingBudget) for i, evalResult := range evalResults { var decision = &decisions[i] // TODO: move this to generics @@ -193,7 +196,7 @@ func (v *validator) Validate(ctx context.Context, versionedAttr *admission.Versi } options := cel.OptionalVariableBindings{VersionedParams: versionedParams} - auditAnnotationEvalResults, _, err := v.auditAnnotationFilter.ForInput(ctx, versionedAttr, cel.CreateAdmissionRequest(versionedAttr.Attributes), options, runtimeCELCostBudget) + auditAnnotationEvalResults, _, err := v.auditAnnotationFilter.ForInput(ctx, versionedAttr, admissionRequest, options, namespace, runtimeCELCostBudget) if err != nil { return ValidateResult{ Decisions: []PolicyDecision{ diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/admission/plugin/webhook/accessors.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/admission/plugin/webhook/accessors.go index 102597cbcc04..e60d245a621b 100644 --- a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/admission/plugin/webhook/accessors.go +++ 
b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/admission/plugin/webhook/accessors.go @@ -26,8 +26,7 @@ import ( "k8s.io/apiserver/pkg/admission/plugin/webhook/matchconditions" "k8s.io/apiserver/pkg/admission/plugin/webhook/predicates/namespace" "k8s.io/apiserver/pkg/admission/plugin/webhook/predicates/object" - celconfig "k8s.io/apiserver/pkg/apis/cel" - "k8s.io/apiserver/pkg/authorization/authorizer" + "k8s.io/apiserver/pkg/cel/environment" webhookutil "k8s.io/apiserver/pkg/util/webhook" "k8s.io/client-go/rest" ) @@ -49,7 +48,7 @@ type WebhookAccessor interface { GetRESTClient(clientManager *webhookutil.ClientManager) (*rest.RESTClient, error) // GetCompiledMatcher gets the compiled matcher object - GetCompiledMatcher(compiler cel.FilterCompiler, authorizer authorizer.Authorizer) matchconditions.Matcher + GetCompiledMatcher(compiler cel.FilterCompiler) matchconditions.Matcher // GetName gets the webhook Name field. Note that the name is scoped to the webhook // configuration and does not provide a globally unique identity, if a unique identity is @@ -81,6 +80,9 @@ type WebhookAccessor interface { GetMutatingWebhook() (*v1.MutatingWebhook, bool) // GetValidatingWebhook if the accessor contains a ValidatingWebhook, returns it and true, else returns false. GetValidatingWebhook() (*v1.ValidatingWebhook, bool) + + // GetType returns the type of the accessor (validate or admit) + GetType() string } // NewMutatingWebhookAccessor creates an accessor for a MutatingWebhook. @@ -124,8 +126,11 @@ func (m *mutatingWebhookAccessor) GetRESTClient(clientManager *webhookutil.Clien return m.client, m.clientErr } -// TODO: graduation to beta: resolve the fact that we rebuild ALL items whenever ANY config changes in NewMutatingWebhookConfigurationManager and NewValidatingWebhookConfigurationManager ... 
now that we're doing CEL compilation, we probably want to avoid that -func (m *mutatingWebhookAccessor) GetCompiledMatcher(compiler cel.FilterCompiler, authorizer authorizer.Authorizer) matchconditions.Matcher { +func (m *mutatingWebhookAccessor) GetType() string { + return "admit" +} + +func (m *mutatingWebhookAccessor) GetCompiledMatcher(compiler cel.FilterCompiler) matchconditions.Matcher { m.compileMatcher.Do(func() { expressions := make([]cel.ExpressionAccessor, len(m.MutatingWebhook.MatchConditions)) for i, matchCondition := range m.MutatingWebhook.MatchConditions { @@ -140,8 +145,8 @@ func (m *mutatingWebhookAccessor) GetCompiledMatcher(compiler cel.FilterCompiler HasParams: false, HasAuthorizer: true, }, - celconfig.PerCallLimit, - ), authorizer, m.FailurePolicy, "validating", m.Name) + environment.StoredExpressions, + ), m.FailurePolicy, "webhook", "admit", m.Name) }) return m.compiledMatcher } @@ -253,7 +258,7 @@ func (v *validatingWebhookAccessor) GetRESTClient(clientManager *webhookutil.Cli return v.client, v.clientErr } -func (v *validatingWebhookAccessor) GetCompiledMatcher(compiler cel.FilterCompiler, authorizer authorizer.Authorizer) matchconditions.Matcher { +func (v *validatingWebhookAccessor) GetCompiledMatcher(compiler cel.FilterCompiler) matchconditions.Matcher { v.compileMatcher.Do(func() { expressions := make([]cel.ExpressionAccessor, len(v.ValidatingWebhook.MatchConditions)) for i, matchCondition := range v.ValidatingWebhook.MatchConditions { @@ -268,8 +273,8 @@ func (v *validatingWebhookAccessor) GetCompiledMatcher(compiler cel.FilterCompil HasParams: false, HasAuthorizer: true, }, - celconfig.PerCallLimit, - ), authorizer, v.FailurePolicy, "validating", v.Name) + environment.StoredExpressions, + ), v.FailurePolicy, "webhook", "validating", v.Name) }) return v.compiledMatcher } @@ -288,6 +293,10 @@ func (v *validatingWebhookAccessor) GetParsedObjectSelector() (labels.Selector, return v.objectSelector, v.objectSelectorErr } +func (m 
*validatingWebhookAccessor) GetType() string { + return "validate" +} + func (v *validatingWebhookAccessor) GetName() string { return v.Name } diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/admission/plugin/webhook/generic/webhook.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/admission/plugin/webhook/generic/webhook.go index a5828983112b..6a513f1c11aa 100644 --- a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/admission/plugin/webhook/generic/webhook.go +++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/admission/plugin/webhook/generic/webhook.go @@ -21,6 +21,9 @@ import ( "fmt" "io" + admissionmetrics "k8s.io/apiserver/pkg/admission/metrics" + "k8s.io/klog/v2" + admissionv1 "k8s.io/api/admission/v1" admissionv1beta1 "k8s.io/api/admission/v1beta1" v1 "k8s.io/api/admissionregistration/v1" @@ -35,10 +38,10 @@ import ( "k8s.io/apiserver/pkg/admission/plugin/webhook/predicates/object" "k8s.io/apiserver/pkg/admission/plugin/webhook/predicates/rules" "k8s.io/apiserver/pkg/authorization/authorizer" + "k8s.io/apiserver/pkg/cel/environment" webhookutil "k8s.io/apiserver/pkg/util/webhook" "k8s.io/client-go/informers" clientset "k8s.io/client-go/kubernetes" - "k8s.io/klog/v2" ) // Webhook is an abstract admission plugin with all the infrastructure to define Admit or Validate on-top. 
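The accessor changes above move the authorizer out of the compiled matcher: `GetCompiledMatcher` no longer captures an `authorizer.Authorizer` at compile time, and the authorizer is instead passed to `Match` on each call. A minimal sketch of that refactor pattern, using hypothetical `Matcher`/`Authorizer` names rather than the real apiserver types:

```go
package main

import "fmt"

// Authorizer is a stand-in for k8s.io/apiserver's authorizer.Authorizer.
type Authorizer interface {
	Authorize(attr string) bool
}

type allowAll struct{}

func (allowAll) Authorize(string) bool { return true }

// Before: the dependency is captured when the matcher is built, so each
// distinct authorizer needs its own compiled matcher.
type boundMatcher struct {
	authz Authorizer
}

func (m *boundMatcher) Match(attr string) bool { return m.authz.Authorize(attr) }

// After: the matcher holds no request-scoped state; the authorizer is supplied
// per call, so one cached, compiled matcher can serve every request.
type freeMatcher struct{}

func (freeMatcher) Match(attr string, authz Authorizer) bool {
	return authz.Authorize(attr)
}

func main() {
	m := freeMatcher{}
	fmt.Println(m.Match("pods", allowAll{}))
}
```

The per-call form is what lets the webhook accessors compile the matcher once (under `sync.Once`) while still evaluating with whatever authorizer the dispatching plugin holds.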
@@ -97,7 +100,7 @@ func NewWebhook(handler *admission.Handler, configFile io.Reader, sourceFactory namespaceMatcher: &namespace.Matcher{}, objectMatcher: &object.Matcher{}, dispatcher: dispatcherFactory(&cm), - filterCompiler: cel.NewFilterCompiler(), + filterCompiler: cel.NewFilterCompiler(environment.MustBaseEnvSet(environment.DefaultCompatibilityVersion())), }, nil } @@ -216,7 +219,6 @@ func (a *Webhook) ShouldCallHook(ctx context.Context, h webhook.WebhookAccessor, if matchObjErr != nil { return nil, matchObjErr } - matchConditions := h.GetMatchConditions() if len(matchConditions) > 0 { versionedAttr, err := v.VersionedAttribute(invocation.Kind) @@ -224,13 +226,14 @@ func (a *Webhook) ShouldCallHook(ctx context.Context, h webhook.WebhookAccessor, return nil, apierrors.NewInternalError(err) } - matcher := h.GetCompiledMatcher(a.filterCompiler, a.authorizer) - matchResult := matcher.Match(ctx, versionedAttr, nil) + matcher := h.GetCompiledMatcher(a.filterCompiler) + matchResult := matcher.Match(ctx, versionedAttr, nil, a.authorizer) if matchResult.Error != nil { klog.Warningf("Failed evaluating match conditions, failing closed %v: %v", h.GetName(), matchResult.Error) return nil, apierrors.NewForbidden(attr.GetResource().GroupResource(), attr.GetName(), matchResult.Error) } else if !matchResult.Matches { + admissionmetrics.Metrics.ObserveMatchConditionExclusion(ctx, h.GetName(), "webhook", h.GetType(), string(attr.GetOperation())) // if no match, always skip webhook return nil, nil } diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/admission/plugin/webhook/matchconditions/interface.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/admission/plugin/webhook/matchconditions/interface.go index 09468655bd09..094a019d1f97 100644 --- a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/admission/plugin/webhook/matchconditions/interface.go +++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/admission/plugin/webhook/matchconditions/interface.go @@ -21,6 +21,7 @@ 
import ( "k8s.io/apimachinery/pkg/runtime" "k8s.io/apiserver/pkg/admission" + "k8s.io/apiserver/pkg/authorization/authorizer" ) type MatchResult struct { @@ -32,5 +33,5 @@ type MatchResult struct { // Matcher contains logic for converting Evaluations to bool of matches or does not match type Matcher interface { // Match is used to take cel evaluations and convert into decisions - Match(ctx context.Context, versionedAttr *admission.VersionedAttributes, versionedParams runtime.Object) MatchResult + Match(ctx context.Context, versionedAttr *admission.VersionedAttributes, versionedParams runtime.Object, authz authorizer.Authorizer) MatchResult } diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/admission/plugin/webhook/matchconditions/matcher.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/admission/plugin/webhook/matchconditions/matcher.go index 09a500dd39cb..21dd28f6c24f 100644 --- a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/admission/plugin/webhook/matchconditions/matcher.go +++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/admission/plugin/webhook/matchconditions/matcher.go @@ -20,11 +20,13 @@ import ( "context" "errors" "fmt" + "time" "github.com/google/cel-go/cel" celtypes "github.com/google/cel-go/common/types" v1 "k8s.io/api/admissionregistration/v1" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" "k8s.io/apimachinery/pkg/runtime" utilerrors "k8s.io/apimachinery/pkg/util/errors" "k8s.io/apiserver/pkg/admission" @@ -53,13 +55,13 @@ var _ Matcher = &matcher{} // matcher evaluates compiled cel expressions and determines if they match the given request or not type matcher struct { filter celplugin.Filter - authorizer authorizer.Authorizer failPolicy v1.FailurePolicyType matcherType string + matcherKind string objectName string } -func NewMatcher(filter celplugin.Filter, authorizer authorizer.Authorizer, failPolicy *v1.FailurePolicyType, matcherType, objectName string) Matcher { +func NewMatcher(filter celplugin.Filter, failPolicy 
*v1.FailurePolicyType, matcherKind, matcherType, objectName string) Matcher { var f v1.FailurePolicyType if failPolicy == nil { f = v1.Fail @@ -68,20 +70,22 @@ func NewMatcher(filter celplugin.Filter, authorizer authorizer.Authorizer, failP } return &matcher{ filter: filter, - authorizer: authorizer, failPolicy: f, + matcherKind: matcherKind, matcherType: matcherType, objectName: objectName, } } -func (m *matcher) Match(ctx context.Context, versionedAttr *admission.VersionedAttributes, versionedParams runtime.Object) MatchResult { - evalResults, _, err := m.filter.ForInput(ctx, versionedAttr, celplugin.CreateAdmissionRequest(versionedAttr.Attributes), celplugin.OptionalVariableBindings{ +func (m *matcher) Match(ctx context.Context, versionedAttr *admission.VersionedAttributes, versionedParams runtime.Object, authz authorizer.Authorizer) MatchResult { + t := time.Now() + evalResults, _, err := m.filter.ForInput(ctx, versionedAttr, celplugin.CreateAdmissionRequest(versionedAttr.Attributes, metav1.GroupVersionResource(versionedAttr.GetResource()), metav1.GroupVersionKind(versionedAttr.VersionedKind)), celplugin.OptionalVariableBindings{ VersionedParams: versionedParams, - Authorizer: m.authorizer, - }, celconfig.RuntimeCELCostBudgetMatchConditions) + Authorizer: authz, + }, nil, celconfig.RuntimeCELCostBudgetMatchConditions) if err != nil { + admissionmetrics.Metrics.ObserveMatchConditionEvaluationTime(ctx, time.Since(t), m.objectName, m.matcherKind, m.matcherType, string(versionedAttr.GetOperation())) // filter returning error is unexpected and not an evaluation error so not incrementing metric here if m.failPolicy == v1.Fail { return MatchResult{ @@ -106,10 +110,10 @@ func (m *matcher) Match(ctx context.Context, versionedAttr *admission.VersionedA } if evalResult.Error != nil { errorList = append(errorList, evalResult.Error) - //TODO: what's the best way to handle this metric since its reused by VAP for match conditions - 
admissionmetrics.Metrics.ObserveMatchConditionEvalError(ctx, m.objectName, m.matcherType) + admissionmetrics.Metrics.ObserveMatchConditionEvalError(ctx, m.objectName, m.matcherKind, m.matcherType, string(versionedAttr.GetOperation())) } if evalResult.EvalResult == celtypes.False { + admissionmetrics.Metrics.ObserveMatchConditionEvaluationTime(ctx, time.Since(t), m.objectName, m.matcherKind, m.matcherType, string(versionedAttr.GetOperation())) // If any condition false, skip calling webhook always return MatchResult{ Matches: false, @@ -118,6 +122,7 @@ func (m *matcher) Match(ctx context.Context, versionedAttr *admission.VersionedA } } if len(errorList) > 0 { + admissionmetrics.Metrics.ObserveMatchConditionEvaluationTime(ctx, time.Since(t), m.objectName, m.matcherKind, m.matcherType, string(versionedAttr.GetOperation())) // If mix of true and eval errors then resort to fail policy if m.failPolicy == v1.Fail { // mix of true and errors with fail policy fail should fail request without calling webhook diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/admission/plugin/webhook/mutating/dispatcher.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/admission/plugin/webhook/mutating/dispatcher.go index c1d1ca6ff6b9..af237ae0c008 100644 --- a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/admission/plugin/webhook/mutating/dispatcher.go +++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/admission/plugin/webhook/mutating/dispatcher.go @@ -168,6 +168,10 @@ func (a *mutatingDispatcher) Dispatch(ctx context.Context, attr admission.Attrib if err != nil { switch err := err.(type) { case *webhookutil.ErrCallingWebhook: + if ctx.Err() == context.Canceled { + klog.Warningf("Context Canceled when calling webhook %v", hook.Name) + return err + } if !ignoreClientCallFailures { rejected = true admissionmetrics.Metrics.ObserveWebhookRejection(ctx, hook.Name, "admit", string(versionedAttr.Attributes.GetOperation()), admissionmetrics.WebhookRejectionCallingWebhookError, 
int(err.Status.ErrStatus.Code)) diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/admission/plugin/webhook/predicates/namespace/matcher.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/admission/plugin/webhook/predicates/namespace/matcher.go index 459e3f5df6b4..6427bc67484a 100644 --- a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/admission/plugin/webhook/predicates/namespace/matcher.go +++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/admission/plugin/webhook/predicates/namespace/matcher.go @@ -20,6 +20,8 @@ import ( "context" "fmt" + v1 "k8s.io/api/core/v1" + apierrors "k8s.io/apimachinery/pkg/api/errors" "k8s.io/apimachinery/pkg/api/meta" metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" @@ -42,6 +44,10 @@ type Matcher struct { Client clientset.Interface } +func (m *Matcher) GetNamespace(name string) (*v1.Namespace, error) { + return m.NamespaceLister.Get(name) +} + // Validate checks if the Matcher has a NamespaceLister and Client. func (m *Matcher) Validate() error { var errs []error diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/admission/plugin/webhook/validating/dispatcher.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/admission/plugin/webhook/validating/dispatcher.go index 14312fadd541..af435649bd0a 100644 --- a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/admission/plugin/webhook/validating/dispatcher.go +++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/admission/plugin/webhook/validating/dispatcher.go @@ -173,6 +173,10 @@ func (d *validatingDispatcher) Dispatch(ctx context.Context, attr admission.Attr if err != nil { switch err := err.(type) { case *webhookutil.ErrCallingWebhook: + if ctx.Err() == context.Canceled { + klog.Warningf("Context Canceled when calling webhook %v", hook.Name) + return + } if !ignoreClientCallFailures { rejected = true admissionmetrics.Metrics.ObserveWebhookRejection(ctx, hook.Name, "validating", string(versionedAttr.Attributes.GetOperation()), 
admissionmetrics.WebhookRejectionCallingWebhookError, int(err.Status.ErrStatus.Code)) diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/apis/flowcontrol/bootstrap/default.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/apis/flowcontrol/bootstrap/default.go index 3859a54d1fe1..b037371e3a85 100644 --- a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/apis/flowcontrol/bootstrap/default.go +++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/apis/flowcontrol/bootstrap/default.go @@ -89,6 +89,10 @@ var ( flowcontrol.PriorityLevelConfigurationNameExempt, flowcontrol.PriorityLevelConfigurationSpec{ Type: flowcontrol.PriorityLevelEnablementExempt, + Exempt: &flowcontrol.ExemptPriorityLevelConfiguration{ + NominalConcurrencyShares: pointer.Int32(0), + LendablePercent: pointer.Int32(0), + }, }, ) MandatoryPriorityLevelConfigurationCatchAll = newPriorityLevelConfiguration( diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/audit/context.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/audit/context.go index 95a18bcd5ce2..9648587378ec 100644 --- a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/audit/context.go +++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/audit/context.go @@ -39,21 +39,18 @@ type AuditContext struct { RequestAuditConfig RequestAuditConfig // Event is the audit Event object that is being captured to be written in - // the API audit log. It is set to nil when the request is not being audited. - Event *auditinternal.Event + // the API audit log. + Event auditinternal.Event - // annotations holds audit annotations that are recorded before the event has been initialized. - // This is represented as a slice rather than a map to preserve order. - annotations []annotation - // annotationMutex guards annotations AND event.Annotations + // annotationMutex guards event.Annotations annotationMutex sync.Mutex - - // auditID is the Audit ID associated with this request. 
- auditID types.UID } -type annotation struct { - key, value string +// Enabled checks whether auditing is enabled for this audit context. +func (ac *AuditContext) Enabled() bool { + // Note: An unset Level should be considered Enabled, so that request data (e.g. annotations) + // can still be captured before the audit policy is evaluated. + return ac != nil && ac.RequestAuditConfig.Level != auditinternal.LevelNone } // AddAuditAnnotation sets the audit annotation for the given key, value pair. @@ -65,8 +62,7 @@ type annotation struct { // prefer AddAuditAnnotation over LogAnnotation to avoid dropping annotations. func AddAuditAnnotation(ctx context.Context, key, value string) { ac := AuditContextFrom(ctx) - if ac == nil { - // auditing is not enabled + if !ac.Enabled() { return } @@ -81,8 +77,7 @@ func AddAuditAnnotation(ctx context.Context, key, value string) { // keysAndValues are the key-value pairs to add, and must have an even number of items. func AddAuditAnnotations(ctx context.Context, keysAndValues ...string) { ac := AuditContextFrom(ctx) - if ac == nil { - // auditing is not enabled + if !ac.Enabled() { return } @@ -101,8 +96,7 @@ func AddAuditAnnotations(ctx context.Context, keysAndValues ...string) { // restrictions on when this can be called. func AddAuditAnnotationsMap(ctx context.Context, annotations map[string]string) { ac := AuditContextFrom(ctx) - if ac == nil { - // auditing is not enabled + if !ac.Enabled() { return } @@ -114,38 +108,10 @@ func AddAuditAnnotationsMap(ctx context.Context, annotations map[string]string) } } -// addAuditAnnotationLocked is the shared code for recording an audit annotation. This method should -// only be called while the auditAnnotationsMutex is locked. +// addAuditAnnotationLocked records the audit annotation on the event. 
func addAuditAnnotationLocked(ac *AuditContext, key, value string) { - if ac.Event != nil { - logAnnotation(ac.Event, key, value) - } else { - ac.annotations = append(ac.annotations, annotation{key: key, value: value}) - } -} - -// This is private to prevent reads/write to the slice from outside of this package. -// The audit event should be directly read to get access to the annotations. -func addAuditAnnotationsFrom(ctx context.Context, ev *auditinternal.Event) { - ac := AuditContextFrom(ctx) - if ac == nil { - // auditing is not enabled - return - } - - ac.annotationMutex.Lock() - defer ac.annotationMutex.Unlock() + ae := &ac.Event - for _, kv := range ac.annotations { - logAnnotation(ev, kv.key, kv.value) - } -} - -// LogAnnotation fills in the Annotations according to the key value pair. -func logAnnotation(ae *auditinternal.Event, key, value string) { - if ae == nil || ae.Level.Less(auditinternal.LevelMetadata) { - return - } if ae.Annotations == nil { ae.Annotations = make(map[string]string) } @@ -167,8 +133,8 @@ func WithAuditContext(parent context.Context) context.Context { // AuditEventFrom returns the audit event struct on the ctx func AuditEventFrom(ctx context.Context) *auditinternal.Event { - if o := AuditContextFrom(ctx); o != nil { - return o.Event + if ac := AuditContextFrom(ctx); ac.Enabled() { + return &ac.Event } return nil } @@ -187,20 +153,16 @@ func WithAuditID(ctx context.Context, auditID types.UID) { if auditID == "" { return } - ac := AuditContextFrom(ctx) - if ac == nil { - return - } - ac.auditID = auditID - if ac.Event != nil { + if ac := AuditContextFrom(ctx); ac != nil { ac.Event.AuditID = auditID } } -// AuditIDFrom returns the value of the audit ID from the request context. +// AuditIDFrom returns the value of the audit ID from the request context, along with whether +// auditing is enabled. 
func AuditIDFrom(ctx context.Context) (types.UID, bool) { if ac := AuditContextFrom(ctx); ac != nil { - return ac.auditID, ac.auditID != "" + return ac.Event.AuditID, true } return "", false } diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/audit/request.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/audit/request.go index 972669536ed7..9185278f06fb 100644 --- a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/audit/request.go +++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/audit/request.go @@ -28,14 +28,11 @@ import ( metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" "k8s.io/apimachinery/pkg/runtime" "k8s.io/apimachinery/pkg/runtime/schema" - "k8s.io/apimachinery/pkg/types" utilnet "k8s.io/apimachinery/pkg/util/net" auditinternal "k8s.io/apiserver/pkg/apis/audit" "k8s.io/apiserver/pkg/authentication/user" "k8s.io/apiserver/pkg/authorization/authorizer" "k8s.io/klog/v2" - - "github.com/google/uuid" ) const ( @@ -43,20 +40,18 @@ const ( userAgentTruncateSuffix = "...TRUNCATED" ) -func NewEventFromRequest(req *http.Request, requestReceivedTimestamp time.Time, level auditinternal.Level, attribs authorizer.Attributes) (*auditinternal.Event, error) { - ev := &auditinternal.Event{ - RequestReceivedTimestamp: metav1.NewMicroTime(requestReceivedTimestamp), - Verb: attribs.GetVerb(), - RequestURI: req.URL.RequestURI(), - UserAgent: maybeTruncateUserAgent(req), - Level: level, +func LogRequestMetadata(ctx context.Context, req *http.Request, requestReceivedTimestamp time.Time, level auditinternal.Level, attribs authorizer.Attributes) { + ac := AuditContextFrom(ctx) + if !ac.Enabled() { + return } + ev := &ac.Event - auditID, found := AuditIDFrom(req.Context()) - if !found { - auditID = types.UID(uuid.New().String()) - } - ev.AuditID = auditID + ev.RequestReceivedTimestamp = metav1.NewMicroTime(requestReceivedTimestamp) + ev.Verb = attribs.GetVerb() + ev.RequestURI = req.URL.RequestURI() + ev.UserAgent = maybeTruncateUserAgent(req) + ev.Level = level ips := 
utilnet.SourceIPs(req) ev.SourceIPs = make([]string, len(ips)) @@ -84,10 +79,6 @@ func NewEventFromRequest(req *http.Request, requestReceivedTimestamp time.Time, APIVersion: attribs.GetAPIVersion(), } } - - addAuditAnnotationsFrom(req.Context(), ev) - - return ev, nil } // LogImpersonatedUser fills in the impersonated user attributes into an audit event. diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/authentication/request/websocket/protocol.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/authentication/request/websocket/protocol.go index 11afa84cbd0e..ee8c89f5ceda 100644 --- a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/authentication/request/websocket/protocol.go +++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/authentication/request/websocket/protocol.go @@ -24,8 +24,8 @@ import ( "strings" "unicode/utf8" + "k8s.io/apimachinery/pkg/util/httpstream/wsstream" "k8s.io/apiserver/pkg/authentication/authenticator" - "k8s.io/apiserver/pkg/util/wsstream" ) const bearerProtocolPrefix = "base64url.bearer.authorization.k8s.io." diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/authentication/token/cache/cached_token_authenticator.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/authentication/token/cache/cached_token_authenticator.go index ec0b14768df1..18167dddc2bf 100644 --- a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/authentication/token/cache/cached_token_authenticator.go +++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/authentication/token/cache/cached_token_authenticator.go @@ -197,15 +197,14 @@ func (a *cachedTokenAuthenticator) doAuthenticateToken(ctx context.Context, toke recorder := &recorder{} ctx = warning.WithWarningRecorder(ctx, recorder) - // since this is shared work between multiple requests, we have no way of knowing if any - // particular request supports audit annotations. thus we always attempt to record them. 
-	ev := &auditinternal.Event{Level: auditinternal.LevelMetadata}
 	ctx = audit.WithAuditContext(ctx)
 	ac := audit.AuditContextFrom(ctx)
-	ac.Event = ev
+	// since this is shared work between multiple requests, we have no way of knowing if any
+	// particular request supports audit annotations. thus we always attempt to record them.
+	ac.Event.Level = auditinternal.LevelMetadata
 
 	record.resp, record.ok, record.err = a.authenticator.AuthenticateToken(ctx, token)
-	record.annotations = ev.Annotations
+	record.annotations = ac.Event.Annotations
 	record.warnings = recorder.extractWarnings()
 
 	if !a.cacheErrs && record.err != nil {
diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/cel/common/values.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/cel/common/values.go
index e6d7b99757e6..d9034a80fb2a 100644
--- a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/cel/common/values.go
+++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/cel/common/values.go
@@ -26,9 +26,10 @@ import (
 	"github.com/google/cel-go/common/types/ref"
 	"github.com/google/cel-go/common/types/traits"
 
+	"k8s.io/kube-openapi/pkg/validation/strfmt"
+
 	"k8s.io/apimachinery/pkg/api/equality"
 	"k8s.io/apiserver/pkg/cel"
-	"k8s.io/kube-openapi/pkg/validation/strfmt"
 )
 
 // UnstructuredToVal converts a Kubernetes unstructured data element to a CEL Val.
@@ -425,7 +426,22 @@ var _ = traits.Lister(&unstructuredList{})
 func (t *unstructuredList) ConvertToNative(typeDesc reflect.Type) (interface{}, error) {
 	switch typeDesc.Kind() {
 	case reflect.Slice:
-		return t.elements, nil
+		switch t.itemsSchema.Type() {
+		// Workaround for https://github.com/kubernetes/kubernetes/issues/117590 until we
+		// resolve the desired behavior in cel-go via https://github.com/google/cel-go/issues/688
+		case "string":
+			var result []string
+			for _, e := range t.elements {
+				s, ok := e.(string)
+				if !ok {
+					return nil, fmt.Errorf("expected all elements to be of type string, but got %T", e)
+				}
+				result = append(result, s)
+			}
+			return result, nil
+		default:
+			return t.elements, nil
+		}
 	}
 	return nil, fmt.Errorf("type conversion error from '%s' to '%s'", t.Type(), typeDesc)
 }
diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/cel/composited.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/cel/composited.go
deleted file mode 100644
index 9e5e634d0c34..000000000000
--- a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/cel/composited.go
+++ /dev/null
@@ -1,119 +0,0 @@
-/*
-Copyright 2023 The Kubernetes Authors.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
-    http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-*/
-
-package cel
-
-import (
-	"github.com/google/cel-go/common/types/ref"
-	exprpb "google.golang.org/genproto/googleapis/api/expr/v1alpha1"
-)
-
-var _ ref.TypeProvider = (*CompositedTypeProvider)(nil)
-var _ ref.TypeAdapter = (*CompositedTypeAdapter)(nil)
-
-// CompositedTypeProvider is the provider that tries each of the underlying
-// providers in order, and returns result of the first successful attempt.
-type CompositedTypeProvider struct {
-	// Providers contains the underlying type providers.
-	// If Providers is empty, the CompositedTypeProvider becomes no-op provider.
-	Providers []ref.TypeProvider
-}
-
-// EnumValue finds out the numeric value of the given enum name.
-// The result comes from first provider that returns non-nil.
-func (c *CompositedTypeProvider) EnumValue(enumName string) ref.Val {
-	for _, p := range c.Providers {
-		val := p.EnumValue(enumName)
-		if val != nil {
-			return val
-		}
-	}
-	return nil
-}
-
-// FindIdent takes a qualified identifier name and returns a Value if one
-// exists. The result comes from first provider that returns non-nil.
-func (c *CompositedTypeProvider) FindIdent(identName string) (ref.Val, bool) {
-	for _, p := range c.Providers {
-		val, ok := p.FindIdent(identName)
-		if ok {
-			return val, ok
-		}
-	}
-	return nil, false
-}
-
-// FindType finds the Type given a qualified type name, or return false
-// if none of the providers finds the type.
-// If any of the providers find the type, the first provider that returns true
-// will be the result.
-func (c *CompositedTypeProvider) FindType(typeName string) (*exprpb.Type, bool) {
-	for _, p := range c.Providers {
-		typ, ok := p.FindType(typeName)
-		if ok {
-			return typ, ok
-		}
-	}
-	return nil, false
-}
-
-// FindFieldType returns the field type for a checked type value. Returns
-// false if none of the providers can find the type.
-// If multiple providers can find the field, the result is taken from
-// the first that does.
-func (c *CompositedTypeProvider) FindFieldType(messageType string, fieldName string) (*ref.FieldType, bool) {
-	for _, p := range c.Providers {
-		ft, ok := p.FindFieldType(messageType, fieldName)
-		if ok {
-			return ft, ok
-		}
-	}
-	return nil, false
-}
-
-// NewValue creates a new type value from a qualified name and map of field
-// name to value.
-// If multiple providers can create the new type, the first that returns
-// non-nil will decide the result.
-func (c *CompositedTypeProvider) NewValue(typeName string, fields map[string]ref.Val) ref.Val {
-	for _, p := range c.Providers {
-		v := p.NewValue(typeName, fields)
-		if v != nil {
-			return v
-		}
-	}
-	return nil
-}
-
-// CompositedTypeAdapter is the adapter that tries each of the underlying
-// type adapter in order until the first successfully conversion.
-type CompositedTypeAdapter struct {
-	// Adapters contains underlying type adapters.
-	// If Adapters is empty, the CompositedTypeAdapter becomes a no-op adapter.
-	Adapters []ref.TypeAdapter
-}
-
-// NativeToValue takes the value and convert it into a ref.Val
-// The result comes from the first TypeAdapter that returns non-nil.
-func (c *CompositedTypeAdapter) NativeToValue(value interface{}) ref.Val {
-	for _, a := range c.Adapters {
-		v := a.NativeToValue(value)
-		if v != nil {
-			return v
-		}
-	}
-	return nil
-}
diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/cel/environment/base.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/cel/environment/base.go
new file mode 100644
index 000000000000..ed0d3404116a
--- /dev/null
+++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/cel/environment/base.go
@@ -0,0 +1,119 @@
+/*
+Copyright 2023 The Kubernetes Authors.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package environment
+
+import (
+	"fmt"
+	"strconv"
+	"sync"
+
+	"github.com/google/cel-go/cel"
+	"github.com/google/cel-go/ext"
+	"golang.org/x/sync/singleflight"
+
+	"k8s.io/apimachinery/pkg/util/version"
+	celconfig "k8s.io/apiserver/pkg/apis/cel"
+	"k8s.io/apiserver/pkg/cel/library"
+)
+
+// DefaultCompatibilityVersion returns a default compatibility version for use with EnvSet
+// that guarantees compatibility with CEL features/libraries/parameters understood by
+// an n-1 version
+//
+// This default will be set to no more than n-1 the current Kubernetes major.minor version.
+//
+// Note that a default version number less than n-1 indicates a wider range of version
+// compatibility than strictly required for rollback. A wide range of compatibility is
+// desirable because it means that CEL expressions are portable across a wider range
+// of Kubernetes versions.
+func DefaultCompatibilityVersion() *version.Version {
+	return version.MajorMinor(1, 27)
+}
+
+var baseOpts = []VersionedOptions{
+	{
+		// CEL epoch was actually 1.23, but we artificially set it to 1.0 because these
+		// options should always be present.
+		IntroducedVersion: version.MajorMinor(1, 0),
+		EnvOptions: []cel.EnvOption{
+			cel.HomogeneousAggregateLiterals(),
+			// Validate function declarations once during base env initialization,
+			// so they don't need to be evaluated each time a CEL rule is compiled.
+			// This is a relatively expensive operation.
+			cel.EagerlyValidateDeclarations(true),
+			cel.DefaultUTCTimeZone(true),
+
+			ext.Strings(ext.StringsVersion(0)),
+			library.URLs(),
+			library.Regex(),
+			library.Lists(),
+		},
+		ProgramOptions: []cel.ProgramOption{
+			cel.EvalOptions(cel.OptOptimize, cel.OptTrackCost),
+			cel.CostLimit(celconfig.PerCallLimit),
+		},
+	},
+	{
+		IntroducedVersion: version.MajorMinor(1, 27),
+		EnvOptions: []cel.EnvOption{
+			library.Authz(),
+		},
+	},
+	{
+		IntroducedVersion: version.MajorMinor(1, 28),
+		EnvOptions: []cel.EnvOption{
+			cel.CrossTypeNumericComparisons(true),
+			cel.OptionalTypes(),
+			library.Quantity(),
+		},
+	},
+	// TODO: switch to ext.Strings version 2 once format() is fixed to work with HomogeneousAggregateLiterals.
+}
+
+// MustBaseEnvSet returns the common CEL base environments for Kubernetes for Version, or panics
+// if the version is nil, or does not have major and minor components.
+//
+// The returned environment contains function libraries, language settings, optimizations and
+// runtime cost limits appropriate for CEL as it is used in Kubernetes.
+//
+// The returned environment contains no CEL variable definitions or custom type declarations and
+// should be extended to construct environments with the appropriate variable definitions,
+// type declarations and any other needed configuration.
+func MustBaseEnvSet(ver *version.Version) *EnvSet {
+	if ver == nil {
+		panic("version must be non-nil")
+	}
+	if len(ver.Components()) < 2 {
+		panic(fmt.Sprintf("version must contain a major and minor component, but got: %s", ver.String()))
+	}
+	key := strconv.FormatUint(uint64(ver.Major()), 10) + "." + strconv.FormatUint(uint64(ver.Minor()), 10)
+	if entry, ok := baseEnvs.Load(key); ok {
+		return entry.(*EnvSet)
+	}
+
+	entry, _, _ := baseEnvsSingleflight.Do(key, func() (interface{}, error) {
+		entry := mustNewEnvSet(ver, baseOpts)
+		baseEnvs.Store(key, entry)
+		return entry, nil
+	})
+	return entry.(*EnvSet)
+}
+
+var (
+	baseEnvs             = sync.Map{}
+	baseEnvsSingleflight = &singleflight.Group{}
+)
diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/cel/environment/environment.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/cel/environment/environment.go
new file mode 100644
index 000000000000..b47bc8e984be
--- /dev/null
+++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/cel/environment/environment.go
@@ -0,0 +1,274 @@
+/*
+Copyright 2023 The Kubernetes Authors.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package environment
+
+import (
+	"fmt"
+	"math"
+
+	"github.com/google/cel-go/cel"
+
+	"k8s.io/apimachinery/pkg/util/version"
+	apiservercel "k8s.io/apiserver/pkg/cel"
+)
+
+// Type defines the different types of CEL environments used in Kubernetes.
+// CEL environments are used to compile and evaluate CEL expressions.
+// Environments include:
+//   - Function libraries
+//   - Variables
+//   - Types (both core CEL types and Kubernetes types)
+//   - Other CEL environment and program options
+type Type string
+
+const (
+	// NewExpressions is used to validate new or modified expressions in
+	// requests that write expressions to API resources.
+	//
+	// This environment type is compatible with a specific Kubernetes
+	// major/minor version. To ensure safe rollback, this environment type
+	// may not include all the function libraries, variables, type declarations, and CEL
+	// language settings available in the StoredExpressions environment type.
+	//
+	// NewExpressions must be used to validate (parse, compile, type check)
+	// all new or modified CEL expressions before they are written to storage.
+	NewExpressions Type = "NewExpressions"
+
+	// StoredExpressions is used to compile and run CEL expressions that have been
+	// persisted to storage.
+	//
+	// This environment type is compatible with CEL expressions that have been
+	// persisted to storage by all known versions of Kubernetes. This is the most
+	// permissive environment available.
+	//
+	// StoredExpressions is appropriate for use with CEL expressions in
+	// configuration files.
+	StoredExpressions Type = "StoredExpressions"
+)
+
+// EnvSet manages the creation and extension of CEL environments. Each EnvSet contains
+// both a NewExpressions and StoredExpressions environment. EnvSets are created
+// and extended using VersionedOptions so that the EnvSet can prepare environments according
+// to what options were introduced at which versions.
+//
+// Each EnvSet is given a compatibility version when it is created, and prepares the
+// NewExpressions environment to be compatible with that version. The EnvSet also
+// prepares StoredExpressions to be compatible with all known versions of Kubernetes.
+type EnvSet struct {
+	// compatibilityVersion is the version that all configuration in
+	// the NewExpressions environment is compatible with.
+	compatibilityVersion *version.Version
+
+	// newExpressions is an environment containing only configuration
+	// in this EnvSet that is enabled at this compatibilityVersion.
+	newExpressions *cel.Env
+
+	// storedExpressions is an environment containing the latest configuration
+	// in this EnvSet.
+	storedExpressions *cel.Env
+}
+
+func newEnvSet(compatibilityVersion *version.Version, opts []VersionedOptions) (*EnvSet, error) {
+	base, err := cel.NewEnv()
+	if err != nil {
+		return nil, err
+	}
+	baseSet := EnvSet{compatibilityVersion: compatibilityVersion, newExpressions: base, storedExpressions: base}
+	return baseSet.Extend(opts...)
+}
+
+func mustNewEnvSet(ver *version.Version, opts []VersionedOptions) *EnvSet {
+	envSet, err := newEnvSet(ver, opts)
+	if err != nil {
+		panic(fmt.Sprintf("Default environment misconfigured: %v", err))
+	}
+	return envSet
+}
+
+// NewExpressionsEnv returns the NewExpressions environment Type for this EnvSet.
+// See NewExpressions for details.
+func (e *EnvSet) NewExpressionsEnv() *cel.Env {
+	return e.newExpressions
+}
+
+// StoredExpressionsEnv returns the StoredExpressions environment Type for this EnvSet.
+// See StoredExpressions for details.
+func (e *EnvSet) StoredExpressionsEnv() *cel.Env {
+	return e.storedExpressions
+}
+
+// Env returns the CEL environment for the given Type.
+func (e *EnvSet) Env(envType Type) (*cel.Env, error) {
+	switch envType {
+	case NewExpressions:
+		return e.newExpressions, nil
+	case StoredExpressions:
+		return e.storedExpressions, nil
+	default:
+		return nil, fmt.Errorf("unsupported environment type: %v", envType)
+	}
+}
+
+// VersionedOptions provides a set of CEL configuration options as well as the version the
+// options were introduced and, optionally, the version the options were removed.
+type VersionedOptions struct {
+	// IntroducedVersion is the version at which these options were introduced.
+	// The NewExpressions environment will only include options introduced at or before the
+	// compatibility version of the EnvSet.
+	//
+	// For example, to configure a CEL environment with an "object" variable bound to a
+	// resource kind, first create a DeclType from the groupVersionKind of the resource and then
+	// populate a VersionedOptions with the variable and the type:
+	//
+	//    schema := schemaResolver.ResolveSchema(groupVersionKind)
+	//    objectType := apiservercel.SchemaDeclType(schema, true)
+	//    ...
+	//    VersionOptions{
+	//      IntroducedVersion: version.MajorMinor(1, 26),
+	//      DeclTypes: []*apiservercel.DeclType{ objectType },
+	//      EnvOptions: []cel.EnvOption{ cel.Variable("object", objectType.CelType()) },
+	//    },
+	//
+	// To create a DeclType from a CRD, use a structural schema. For example:
+	//
+	//    schema := structuralschema.NewStructural(crdJSONProps)
+	//    objectType := apiservercel.SchemaDeclType(schema, true)
+	//
+	// Required.
+	IntroducedVersion *version.Version
+	// RemovedVersion is the version at which these options were removed.
+	// The NewExpressions environment will not include options removed at or before the
+	// compatibility version of the EnvSet.
+	//
+	// All option removals must be backward compatible; the removal must either be paired
+	// with a compatible replacement introduced at the same version, or the removal must be non-breaking.
+	// The StoredExpressions environment will not include removed options.
+	//
+	// A function library may be upgraded by setting the RemovedVersion of the old library
+	// to the same value as the IntroducedVersion of the new library. The new library must
+	// be backward compatible with the old library.
+	//
+	// For example:
+	//
+	//    VersionOptions{
+	//      IntroducedVersion: version.MajorMinor(1, 26), RemovedVersion: version.MajorMinor(1, 27),
+	//      EnvOptions: []cel.EnvOption{ libraries.Example(libraries.ExampleVersion(1)) },
+	//    },
+	//    VersionOptions{
+	//      IntroducedVersion: version.MajorMinor(1, 27),
+	//      EnvOptions: []EnvOptions{ libraries.Example(libraries.ExampleVersion(2)) },
+	//    },
+	//
+	// Optional.
+	RemovedVersion *version.Version
+
+	// EnvOptions provides CEL EnvOptions. This may be used to add a cel.Variable, a
+	// cel.Library, or to enable other CEL EnvOptions such as language settings.
+	//
+	// If an added cel.Variable has an OpenAPI type, the type must be included in DeclTypes.
+	EnvOptions []cel.EnvOption
+	// ProgramOptions provides CEL ProgramOptions. This may be used to set a cel.CostLimit,
+	// enable optimizations, and set other program level options that should be enabled
+	// for all programs using this environment.
+	ProgramOptions []cel.ProgramOption
+	// DeclTypes provides OpenAPI type declarations to register with the environment.
+	//
+	// If cel.Variables added to EnvOptions refer to an OpenAPI type, the type must be included in
+	// DeclTypes.
+	DeclTypes []*apiservercel.DeclType
+}
+
+// Extend returns an EnvSet based on this EnvSet but extended with the given VersionedOptions.
+// This EnvSet is not mutated.
+// The returned EnvSet has the same compatibility version as the EnvSet that was extended.
+//
+// Extend is an expensive operation and each call to Extend that adds DeclTypes increases
+// the depth of a chain of resolvers. For these reasons, calls to Extend should be kept
+// to a minimum.
+//
+// Some best practices:
+//
+//   - Minimize calls to Extend when handling API requests. Where possible, call Extend
+//     when initializing components.
+//   - If an EnvSet returned by Extend can be used to compile multiple CEL programs,
+//     call Extend once and reuse the returned EnvSet.
+//   - Prefer a single call to Extend with a full list of VersionedOptions over
+//     making multiple calls to Extend.
+func (e *EnvSet) Extend(options ...VersionedOptions) (*EnvSet, error) {
+	if len(options) > 0 {
+		newExprOpts, err := e.filterAndBuildOpts(e.newExpressions, e.compatibilityVersion, options)
+		if err != nil {
+			return nil, err
+		}
+		p, err := e.newExpressions.Extend(newExprOpts)
+		if err != nil {
+			return nil, err
+		}
+		storedExprOpt, err := e.filterAndBuildOpts(e.storedExpressions, version.MajorMinor(math.MaxUint, math.MaxUint), options)
+		if err != nil {
+			return nil, err
+		}
+		s, err := e.storedExpressions.Extend(storedExprOpt)
+		if err != nil {
+			return nil, err
+		}
+		return &EnvSet{compatibilityVersion: e.compatibilityVersion, newExpressions: p, storedExpressions: s}, nil
+	}
+	return e, nil
+}
+
+func (e *EnvSet) filterAndBuildOpts(base *cel.Env, compatVer *version.Version, opts []VersionedOptions) (cel.EnvOption, error) {
+	var envOpts []cel.EnvOption
+	var progOpts []cel.ProgramOption
+	var declTypes []*apiservercel.DeclType
+
+	for _, opt := range opts {
+		if compatVer.AtLeast(opt.IntroducedVersion) && (opt.RemovedVersion == nil || compatVer.LessThan(opt.RemovedVersion)) {
+			envOpts = append(envOpts, opt.EnvOptions...)
+			progOpts = append(progOpts, opt.ProgramOptions...)
+			declTypes = append(declTypes, opt.DeclTypes...)
+		}
+	}
+
+	if len(declTypes) > 0 {
+		provider := apiservercel.NewDeclTypeProvider(declTypes...)
+		providerOpts, err := provider.EnvOptions(base.TypeProvider())
+		if err != nil {
+			return nil, err
+		}
+		envOpts = append(envOpts, providerOpts...)
+	}
+
+	combined := cel.Lib(&envLoader{
+		envOpts:  envOpts,
+		progOpts: progOpts,
+	})
+	return combined, nil
+}
+
+type envLoader struct {
+	envOpts  []cel.EnvOption
+	progOpts []cel.ProgramOption
+}
+
+func (e *envLoader) CompileOptions() []cel.EnvOption {
+	return e.envOpts
+}
+
+func (e *envLoader) ProgramOptions() []cel.ProgramOption {
+	return e.progOpts
+}
diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/cel/lazy/lazy.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/cel/lazy/lazy.go
new file mode 100644
index 000000000000..1742deb0a2f6
--- /dev/null
+++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/cel/lazy/lazy.go
@@ -0,0 +1,191 @@
+/*
+Copyright 2023 The Kubernetes Authors.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package lazy
+
+import (
+	"fmt"
+	"reflect"
+
+	"github.com/google/cel-go/common/types"
+	"github.com/google/cel-go/common/types/ref"
+	"github.com/google/cel-go/common/types/traits"
+
+	"k8s.io/apiserver/pkg/cel"
+)
+
+type GetFieldFunc func(*MapValue) ref.Val
+
+var _ ref.Val = (*MapValue)(nil)
+var _ traits.Mapper = (*MapValue)(nil)
+
+// MapValue is a map that lazily evaluates its value when a field is first accessed.
+// The map value is not designed to be thread-safe.
+type MapValue struct {
+	typeValue *types.TypeValue
+
+	// values are previously evaluated values obtained from callbacks
+	values map[string]ref.Val
+	// callbacks are a map of field name to the function that returns the field Val
+	callbacks map[string]GetFieldFunc
+	// knownValues are registered names, used for iteration
+	knownValues []string
+}
+
+func NewMapValue(objectType ref.Type) *MapValue {
+	return &MapValue{
+		typeValue: types.NewTypeValue(objectType.TypeName(), traits.IndexerType|traits.FieldTesterType|traits.IterableType),
+		values:    map[string]ref.Val{},
+		callbacks: map[string]GetFieldFunc{},
+	}
+}
+
+// Append adds the given field with its name and callback.
+func (m *MapValue) Append(name string, callback GetFieldFunc) {
+	m.knownValues = append(m.knownValues, name)
+	m.callbacks[name] = callback
+}
+
+// Contains checks if the key is known to the map
+func (m *MapValue) Contains(key ref.Val) ref.Val {
+	v, found := m.Find(key)
+	if v != nil && types.IsUnknownOrError(v) {
+		return v
+	}
+	return types.Bool(found)
+}
+
+// Iterator returns an iterator to traverse the map.
+func (m *MapValue) Iterator() traits.Iterator {
+	return &iterator{parent: m, index: 0}
+}
+
+// Size returns the number of currently known fields
+func (m *MapValue) Size() ref.Val {
+	return types.Int(len(m.callbacks))
+}
+
+// ConvertToNative returns an error because it is disallowed
+func (m *MapValue) ConvertToNative(typeDesc reflect.Type) (any, error) {
+	return nil, fmt.Errorf("disallowed conversion from %q to %q", m.typeValue.TypeName(), typeDesc.Name())
+}
+
+// ConvertToType converts the map to the given type.
+// Only its own type and "Type" type are allowed.
+func (m *MapValue) ConvertToType(typeVal ref.Type) ref.Val {
+	switch typeVal {
+	case m.typeValue:
+		return m
+	case types.TypeType:
+		return m.typeValue
+	}
+	return types.NewErr("disallowed conversion from %q to %q", m.typeValue.TypeName(), typeVal.TypeName())
+}
+
+// Equal returns true if the other object is the same pointer-wise.
+func (m *MapValue) Equal(other ref.Val) ref.Val {
+	otherMap, ok := other.(*MapValue)
+	if !ok {
+		return types.MaybeNoSuchOverloadErr(other)
+	}
+	return types.Bool(m == otherMap)
+}
+
+// Type returns its registered type.
+func (m *MapValue) Type() ref.Type {
+	return m.typeValue
+}
+
+// Value is not allowed.
+func (m *MapValue) Value() any {
+	return types.NoSuchOverloadErr()
+}
+
+// resolveField resolves the field. Calls the callback if the value is not yet stored.
+func (m *MapValue) resolveField(name string) ref.Val {
+	v, seen := m.values[name]
+	if seen {
+		return v
+	}
+	f := m.callbacks[name]
+	v = f(m)
+	m.values[name] = v
+	return v
+}
+
+func (m *MapValue) Find(key ref.Val) (ref.Val, bool) {
+	n, ok := key.(types.String)
+	if !ok {
+		return types.MaybeNoSuchOverloadErr(n), true
+	}
+	name, ok := cel.Unescape(n.Value().(string))
+	if !ok {
+		return nil, false
+	}
+	if _, exists := m.callbacks[name]; !exists {
+		return nil, false
+	}
+	return m.resolveField(name), true
+}
+
+func (m *MapValue) Get(key ref.Val) ref.Val {
+	v, found := m.Find(key)
+	if found {
+		return v
+	}
+	return types.ValOrErr(key, "no such key: %v", key)
+}
+
+type iterator struct {
+	parent *MapValue
+	index  int
+}
+
+func (i *iterator) ConvertToNative(typeDesc reflect.Type) (any, error) {
+	return nil, fmt.Errorf("disallowed conversion to %q", typeDesc.Name())
+}
+
+func (i *iterator) ConvertToType(typeValue ref.Type) ref.Val {
+	return types.NewErr("disallowed conversion to %q", typeValue.TypeName())
+}
+
+func (i *iterator) Equal(other ref.Val) ref.Val {
+	otherIterator, ok := other.(*iterator)
+	if !ok {
+		return types.MaybeNoSuchOverloadErr(other)
+	}
+	return types.Bool(otherIterator == i)
+}
+
+func (i *iterator) Type() ref.Type {
+	return types.IteratorType
+}
+
+func (i *iterator) Value() any {
+	return nil
+}
+
+func (i *iterator) HasNext() ref.Val {
+	return types.Bool(i.index < len(i.parent.knownValues))
+}
+
+func (i *iterator) Next() ref.Val {
+	ret := i.parent.Get(types.String(i.parent.knownValues[i.index]))
+	i.index++
+	return ret
+}
+
+var _ traits.Iterator = (*iterator)(nil)
diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/cel/library/authz.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/cel/library/authz.go
index 606e5769adb8..00f0200e865d 100644
--- a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/cel/library/authz.go
+++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/cel/library/authz.go
@@ -174,6 +174,26 @@ import (
 // Examples:
 //
 //	authorizer.path('/healthz').check('GET').reason()
+//
+// errored
+//
+// Returns true if the authorization check resulted in an error.
+//
+//	<Decision>.errored()
+//
+// Examples:
+//
+//	authorizer.group('').resource('pods').namespace('default').check('create').errored() // Returns true if the authorization check resulted in an error
+//
+// error
+//
+// If the authorization check resulted in an error, returns the error. Otherwise, returns the empty string.
+// +// .error() +// +// Examples: +// +// authorizer.group('').resource('pods').namespace('default').check('create').error() func Authz() cel.EnvOption { return cel.Lib(authzLib) } @@ -209,6 +229,12 @@ var authzLibraryDecls = map[string][]cel.FunctionOpt{ cel.BinaryBinding(pathCheckCheck)), cel.MemberOverload("resourcecheck_check", []*cel.Type{ResourceCheckType, cel.StringType}, DecisionType, cel.BinaryBinding(resourceCheckCheck))}, + "errored": { + cel.MemberOverload("decision_errored", []*cel.Type{DecisionType}, cel.BoolType, + cel.UnaryBinding(decisionErrored))}, + "error": { + cel.MemberOverload("decision_error", []*cel.Type{DecisionType}, cel.StringType, + cel.UnaryBinding(decisionError))}, "allowed": { cel.MemberOverload("decision_allowed", []*cel.Type{DecisionType}, cel.BoolType, cel.UnaryBinding(decisionAllowed))}, @@ -384,6 +410,27 @@ func resourceCheckCheck(arg1, arg2 ref.Val) ref.Val { return resourceCheck.Authorize(context.TODO(), apiVerb) } +func decisionErrored(arg ref.Val) ref.Val { + decision, ok := arg.(decisionVal) + if !ok { + return types.MaybeNoSuchOverloadErr(arg) + } + + return types.Bool(decision.err != nil) +} + +func decisionError(arg ref.Val) ref.Val { + decision, ok := arg.(decisionVal) + if !ok { + return types.MaybeNoSuchOverloadErr(arg) + } + + if decision.err == nil { + return types.String("") + } + return types.String(decision.err.Error()) +} + func decisionAllowed(arg ref.Val) ref.Val { decision, ok := arg.(decisionVal) if !ok { @@ -478,10 +525,7 @@ func (a pathCheckVal) Authorize(ctx context.Context, verb string) ref.Val { } decision, reason, err := a.authorizer.authAuthorizer.Authorize(ctx, attr) - if err != nil { - return types.NewErr("error in authorization check: %v", err) - } - return newDecision(decision, reason) + return newDecision(decision, err, reason) } type groupCheckVal struct { @@ -516,18 +560,16 @@ func (a resourceCheckVal) Authorize(ctx context.Context, verb string) ref.Val { User: 
a.groupCheck.authorizer.userInfo, } decision, reason, err := a.groupCheck.authorizer.authAuthorizer.Authorize(ctx, attr) - if err != nil { - return types.NewErr("error in authorization check: %v", err) - } - return newDecision(decision, reason) + return newDecision(decision, err, reason) } -func newDecision(authDecision authorizer.Decision, reason string) decisionVal { - return decisionVal{receiverOnlyObjectVal: receiverOnlyVal(DecisionType), authDecision: authDecision, reason: reason} +func newDecision(authDecision authorizer.Decision, err error, reason string) decisionVal { + return decisionVal{receiverOnlyObjectVal: receiverOnlyVal(DecisionType), authDecision: authDecision, err: err, reason: reason} } type decisionVal struct { receiverOnlyObjectVal + err error authDecision authorizer.Decision reason string } diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/cel/library/cost.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/cel/library/cost.go index 6cc6290323f4..5201d187be2c 100644 --- a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/cel/library/cost.go +++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/cel/library/cost.go @@ -41,7 +41,7 @@ func (l *CostEstimator) CallCost(function, overloadId string, args []ref.Val, re // This cost is set to allow for only two authorization checks per expression cost := uint64(350000) return &cost - case "serviceAccount", "path", "group", "resource", "subresource", "namespace", "name", "allowed", "denied", "reason": + case "serviceAccount", "path", "group", "resource", "subresource", "namespace", "name", "allowed", "reason", "error", "errored": // All authorization builder and accessor functions have a nominal cost cost := uint64(1) return &cost @@ -91,7 +91,7 @@ func (l *CostEstimator) EstimateCallCost(function, overloadId string, target *ch // An authorization check has a fixed cost // This cost is set to allow for only two authorization checks per expression return &checker.CallEstimate{CostEstimate: 
checker.CostEstimate{Min: 350000, Max: 350000}} - case "serviceAccount", "path", "group", "resource", "subresource", "namespace", "name", "allowed", "denied", "reason": + case "serviceAccount", "path", "group", "resource", "subresource", "namespace", "name", "allowed", "reason", "error", "errored": // All authorization builder and accessor functions have a nominal cost return &checker.CallEstimate{CostEstimate: checker.CostEstimate{Min: 1, Max: 1}} case "isSorted", "sum", "max", "min", "indexOf", "lastIndexOf": diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/cel/library/quantity.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/cel/library/quantity.go new file mode 100644 index 000000000000..49e3dae7cdb6 --- /dev/null +++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/cel/library/quantity.go @@ -0,0 +1,375 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package library + +import ( + "errors" + + "github.com/google/cel-go/cel" + "github.com/google/cel-go/common/types" + "github.com/google/cel-go/common/types/ref" + "k8s.io/apimachinery/pkg/api/resource" + apiservercel "k8s.io/apiserver/pkg/cel" +) + +// Quantity provides a CEL function library extension of Kubernetes +// resource.Quantity parsing functions. 
See `resource.Quantity` +// documentation for more detailed information about the format itself: +// https://pkg.go.dev/k8s.io/apimachinery/pkg/api/resource#Quantity +// +// quantity +// +// Converts a string to a Quantity or results in an error if the string is not a valid Quantity. Refer +// to resource.Quantity documentation for information on accepted patterns. +// +// quantity() +// +// Examples: +// +// quantity('1.5G') // returns a Quantity +// quantity('200k') // returns a Quantity +// quantity('200K') // error +// quantity('Three') // error +// quantity('Mi') // error +// +// isQuantity +// +// Returns true if a string is a valid Quantity. isQuantity returns true if and +// only if quantity does not result in error. +// +// isQuantity( ) +// +// Examples: +// +// isQuantity('1.3G') // returns true +// isQuantity('1.3Gi') // returns true +// isQuantity('1,3G') // returns false +// isQuantity('10000k') // returns true +// isQuantity('200K') // returns false +// isQuantity('Three') // returns false +// isQuantity('Mi') // returns false +// +// Conversion to Scalars: +// +// - isInteger: returns true if and only if asInteger is safe to call without an error +// +// - asInteger: returns a representation of the current value as an int64 if +// possible or results in an error if conversion would result in overflow +// or loss of precision. +// +// - asApproximateFloat: returns a float64 representation of the quantity which may +// lose precision. If the value of the quantity is outside the range of a float64 +// +Inf/-Inf will be returned. 
+// +// <Quantity>.isInteger() <bool> +// <Quantity>.asInteger() <int> +// <Quantity>.asApproximateFloat() <float> +// +// Examples: +// +// quantity("50000000G").isInteger() // returns true +// quantity("50k").isInteger() // returns true +// quantity("9999999999999999999999999999999999999G").asInteger() // error: cannot convert value to integer +// quantity("9999999999999999999999999999999999999G").isInteger() // returns false +// quantity("50k").asInteger() == 50000 // returns true +// quantity("50k").sub(20000).asApproximateFloat() == 30000 // returns true +// +// Arithmetic +// +// - sign: Returns `1` if the quantity is positive, `-1` if it is negative. `0` if it is zero +// +// - add: Returns sum of two quantities or a quantity and an integer +// +// - sub: Returns difference between two quantities or a quantity and an integer +// +// <Quantity>.sign() <int> +// <Quantity>.add(<quantity>) <Quantity> +// <Quantity>.add(<integer>) <Quantity> +// <Quantity>.sub(<quantity>) <Quantity> +// <Quantity>.sub(<integer>) <Quantity> +// +// Examples: +// +// quantity("50k").add("20k") == quantity("70k") // returns true +// quantity("50k").add(20) == quantity("50020") // returns true +// quantity("50k").sub("20k") == quantity("30k") // returns true +// quantity("50k").sub(20000) == quantity("30k") // returns true +// quantity("50k").add(20).sub(quantity("100k")).sub(-50000) == quantity("20") // returns true +// +// Comparisons +// +// - isGreaterThan: Returns true if and only if the receiver is greater than the operand +// +// - isLessThan: Returns true if and only if the receiver is less than the operand +// +// - compareTo: Compares receiver to operand and returns 0 if they are equal, 1 if the receiver is greater, or -1 if the receiver is less than the operand +// +// +// <Quantity>.isLessThan(<quantity>) <bool> +// <Quantity>.isGreaterThan(<quantity>) <bool> +// <Quantity>.compareTo(<quantity>) <int> +// +// Examples: +// +// quantity("200M").compareTo(quantity("0.2G")) // returns 0 +// quantity("50M").compareTo(quantity("50Mi")) // returns -1 +// quantity("50Mi").compareTo(quantity("50M")) // returns 1 +// quantity("150Mi").isGreaterThan(quantity("100Mi")) // returns true +// quantity("50Mi").isGreaterThan(quantity("100Mi")) //
returns false +// quantity("50M").isLessThan(quantity("100M")) // returns true +// quantity("100M").isLessThan(quantity("50M")) // returns false + +func Quantity() cel.EnvOption { + return cel.Lib(quantityLib) +} + +var quantityLib = &quantity{} + +type quantity struct{} + +var quantityLibraryDecls = map[string][]cel.FunctionOpt{ + "quantity": { + cel.Overload("string_to_quantity", []*cel.Type{cel.StringType}, apiservercel.QuantityType, cel.UnaryBinding((stringToQuantity))), + }, + "isQuantity": { + cel.Overload("is_quantity_string", []*cel.Type{cel.StringType}, cel.BoolType, cel.UnaryBinding(isQuantity)), + }, + "sign": { + cel.Overload("quantity_sign", []*cel.Type{apiservercel.QuantityType}, cel.IntType, cel.UnaryBinding(quantityGetSign)), + }, + "isGreaterThan": { + cel.MemberOverload("quantity_is_greater_than", []*cel.Type{apiservercel.QuantityType, apiservercel.QuantityType}, cel.BoolType, cel.BinaryBinding(quantityIsGreaterThan)), + }, + "isLessThan": { + cel.MemberOverload("quantity_is_less_than", []*cel.Type{apiservercel.QuantityType, apiservercel.QuantityType}, cel.BoolType, cel.BinaryBinding(quantityIsLessThan)), + }, + "compareTo": { + cel.MemberOverload("quantity_compare_to", []*cel.Type{apiservercel.QuantityType, apiservercel.QuantityType}, cel.IntType, cel.BinaryBinding(quantityCompareTo)), + }, + "asApproximateFloat": { + cel.MemberOverload("quantity_get_float", []*cel.Type{apiservercel.QuantityType}, cel.DoubleType, cel.UnaryBinding(quantityGetApproximateFloat)), + }, + "asInteger": { + cel.MemberOverload("quantity_get_int", []*cel.Type{apiservercel.QuantityType}, cel.IntType, cel.UnaryBinding(quantityGetValue)), + }, + "isInteger": { + cel.MemberOverload("quantity_is_integer", []*cel.Type{apiservercel.QuantityType}, cel.BoolType, cel.UnaryBinding(quantityCanValue)), + }, + "add": { + cel.MemberOverload("quantity_add", []*cel.Type{apiservercel.QuantityType, apiservercel.QuantityType}, apiservercel.QuantityType, cel.BinaryBinding(quantityAdd)), + 
cel.MemberOverload("quantity_add_int", []*cel.Type{apiservercel.QuantityType, cel.IntType}, apiservercel.QuantityType, cel.BinaryBinding(quantityAddInt)), + }, + "sub": { + cel.MemberOverload("quantity_sub", []*cel.Type{apiservercel.QuantityType, apiservercel.QuantityType}, apiservercel.QuantityType, cel.BinaryBinding(quantitySub)), + cel.MemberOverload("quantity_sub_int", []*cel.Type{apiservercel.QuantityType, cel.IntType}, apiservercel.QuantityType, cel.BinaryBinding(quantitySubInt)), + }, +} + +func (*quantity) CompileOptions() []cel.EnvOption { + options := make([]cel.EnvOption, 0, len(quantityLibraryDecls)) + for name, overloads := range quantityLibraryDecls { + options = append(options, cel.Function(name, overloads...)) + } + return options +} + +func (*quantity) ProgramOptions() []cel.ProgramOption { + return []cel.ProgramOption{} +} + +func isQuantity(arg ref.Val) ref.Val { + str, ok := arg.Value().(string) + if !ok { + return types.MaybeNoSuchOverloadErr(arg) + } + + _, err := resource.ParseQuantity(str) + if err != nil { + return types.Bool(false) + } + + return types.Bool(true) +} + +func stringToQuantity(arg ref.Val) ref.Val { + str, ok := arg.Value().(string) + if !ok { + return types.MaybeNoSuchOverloadErr(arg) + } + + q, err := resource.ParseQuantity(str) + if err != nil { + return types.WrapErr(err) + } + + return apiservercel.Quantity{Quantity: &q} +} + +func quantityGetApproximateFloat(arg ref.Val) ref.Val { + q, ok := arg.Value().(*resource.Quantity) + if !ok { + return types.MaybeNoSuchOverloadErr(arg) + } + return types.Double(q.AsApproximateFloat64()) +} + +func quantityCanValue(arg ref.Val) ref.Val { + q, ok := arg.Value().(*resource.Quantity) + if !ok { + return types.MaybeNoSuchOverloadErr(arg) + } + _, success := q.AsInt64() + return types.Bool(success) +} + +func quantityGetValue(arg ref.Val) ref.Val { + q, ok := arg.Value().(*resource.Quantity) + if !ok { + return types.MaybeNoSuchOverloadErr(arg) + } + v, success := q.AsInt64() + if 
!success { + return types.WrapErr(errors.New("cannot convert value to integer")) + } + return types.Int(v) +} + +func quantityGetSign(arg ref.Val) ref.Val { + q, ok := arg.Value().(*resource.Quantity) + if !ok { + return types.MaybeNoSuchOverloadErr(arg) + } + return types.Int(q.Sign()) +} + +func quantityIsGreaterThan(arg ref.Val, other ref.Val) ref.Val { + q, ok := arg.Value().(*resource.Quantity) + if !ok { + return types.MaybeNoSuchOverloadErr(arg) + } + + q2, ok := other.Value().(*resource.Quantity) + if !ok { + return types.MaybeNoSuchOverloadErr(arg) + } + + return types.Bool(q.Cmp(*q2) == 1) +} + +func quantityIsLessThan(arg ref.Val, other ref.Val) ref.Val { + q, ok := arg.Value().(*resource.Quantity) + if !ok { + return types.MaybeNoSuchOverloadErr(arg) + } + + q2, ok := other.Value().(*resource.Quantity) + if !ok { + return types.MaybeNoSuchOverloadErr(arg) + } + + return types.Bool(q.Cmp(*q2) == -1) +} + +func quantityCompareTo(arg ref.Val, other ref.Val) ref.Val { + q, ok := arg.Value().(*resource.Quantity) + if !ok { + return types.MaybeNoSuchOverloadErr(arg) + } + + q2, ok := other.Value().(*resource.Quantity) + if !ok { + return types.MaybeNoSuchOverloadErr(arg) + } + + return types.Int(q.Cmp(*q2)) +} + +func quantityAdd(arg ref.Val, other ref.Val) ref.Val { + q, ok := arg.Value().(*resource.Quantity) + if !ok { + return types.MaybeNoSuchOverloadErr(arg) + } + + q2, ok := other.Value().(*resource.Quantity) + if !ok { + return types.MaybeNoSuchOverloadErr(arg) + } + + copy := *q + copy.Add(*q2) + return &apiservercel.Quantity{ + Quantity: &copy, + } +} + +func quantityAddInt(arg ref.Val, other ref.Val) ref.Val { + q, ok := arg.Value().(*resource.Quantity) + if !ok { + return types.MaybeNoSuchOverloadErr(arg) + } + + q2, ok := other.Value().(int64) + if !ok { + return types.MaybeNoSuchOverloadErr(arg) + } + + q2Converted := *resource.NewQuantity(q2, resource.DecimalExponent) + + copy := *q + copy.Add(q2Converted) + return &apiservercel.Quantity{ +
Quantity: &copy, + } +} + +func quantitySub(arg ref.Val, other ref.Val) ref.Val { + q, ok := arg.Value().(*resource.Quantity) + if !ok { + return types.MaybeNoSuchOverloadErr(arg) + } + + q2, ok := other.Value().(*resource.Quantity) + if !ok { + return types.MaybeNoSuchOverloadErr(arg) + } + + copy := *q + copy.Sub(*q2) + return &apiservercel.Quantity{ + Quantity: &copy, + } +} + +func quantitySubInt(arg ref.Val, other ref.Val) ref.Val { + q, ok := arg.Value().(*resource.Quantity) + if !ok { + return types.MaybeNoSuchOverloadErr(arg) + } + + q2, ok := other.Value().(int64) + if !ok { + return types.MaybeNoSuchOverloadErr(arg) + } + + q2Converted := *resource.NewQuantity(q2, resource.DecimalExponent) + + copy := *q + copy.Sub(q2Converted) + return &apiservercel.Quantity{ + Quantity: &copy, + } +} diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/cel/library/regex.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/cel/library/regex.go index 6db5ef195756..17fb3d44c970 100644 --- a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/cel/library/regex.go +++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/cel/library/regex.go @@ -77,7 +77,9 @@ func (*regex) CompileOptions() []cel.EnvOption { } func (*regex) ProgramOptions() []cel.ProgramOption { - return []cel.ProgramOption{} + return []cel.ProgramOption{ + cel.OptimizeRegex(FindRegexOptimization, FindAllRegexOptimization), + } } func find(strVal ref.Val, regexVal ref.Val) ref.Val { diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/cel/library/test.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/cel/library/test.go new file mode 100644 index 000000000000..95446f63c6bd --- /dev/null +++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/cel/library/test.go @@ -0,0 +1,79 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License.
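The comparison semantics documented in the quantity library above (decimal SI vs. binary suffixes, `compareTo` returning -1/0/1) can be illustrated without the CEL machinery. A minimal stdlib-only sketch, supporting only a handful of suffixes and using exact rational arithmetic — the real parser lives in `k8s.io/apimachinery/pkg/api/resource` and accepts many more formats:

```go
package main

import (
	"fmt"
	"math/big"
	"strings"
)

// A subset of Quantity suffix multipliers (decimal SI and binary).
// Illustrative only; resource.Quantity also supports exponent notation,
// milli units, and more.
var suffixes = []struct {
	sym string
	mul *big.Rat
}{
	{"Ki", big.NewRat(1024, 1)},
	{"Mi", big.NewRat(1024*1024, 1)},
	{"Gi", big.NewRat(1024*1024*1024, 1)},
	{"k", big.NewRat(1000, 1)},
	{"M", big.NewRat(1000*1000, 1)},
	{"G", big.NewRat(1000*1000*1000, 1)},
}

// parseQty converts "200M"-style strings to an exact rational value.
// big.Rat is used (rather than float64) so "0.2G" compares exactly
// equal to "200M".
func parseQty(s string) *big.Rat {
	mul := big.NewRat(1, 1)
	for _, sf := range suffixes {
		if strings.HasSuffix(s, sf.sym) {
			s = strings.TrimSuffix(s, sf.sym)
			mul = sf.mul
			break
		}
	}
	v, ok := new(big.Rat).SetString(s)
	if !ok {
		panic("invalid quantity: " + s)
	}
	return v.Mul(v, mul)
}

// cmpQty mirrors compareTo: 0 if equal, 1 if a > b, -1 if a < b.
func cmpQty(a, b string) int {
	return parseQty(a).Cmp(parseQty(b))
}

func main() {
	fmt.Println(cmpQty("200M", "0.2G")) // 0
	fmt.Println(cmpQty("50M", "50Mi"))  // -1
	fmt.Println(cmpQty("50Mi", "50M"))  // 1
}
```

Note how `50M` (5e7) sorts below `50Mi` (50×2^20 = 52428800), matching the doc examples above.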
+You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package library + +import ( + "math" + + "github.com/google/cel-go/cel" + "github.com/google/cel-go/common/types" + "github.com/google/cel-go/common/types/ref" +) + +// Test provides a test() function that returns true. +func Test(options ...TestOption) cel.EnvOption { + t := &testLib{version: math.MaxUint32} + for _, o := range options { + t = o(t) + } + return cel.Lib(t) +} + +type testLib struct { + version uint32 +} + +type TestOption func(*testLib) *testLib + +func TestVersion(version uint32) func(lib *testLib) *testLib { + return func(sl *testLib) *testLib { + sl.version = version + return sl + } +} + +func (t *testLib) CompileOptions() []cel.EnvOption { + var options []cel.EnvOption + + if t.version == 0 { + options = append(options, cel.Function("test", + cel.Overload("test", []*cel.Type{}, cel.BoolType, + cel.FunctionBinding(func(args ...ref.Val) ref.Val { + return types.True + })))) + } + + if t.version >= 1 { + options = append(options, cel.Function("test", + cel.Overload("test", []*cel.Type{}, cel.BoolType, + cel.FunctionBinding(func(args ...ref.Val) ref.Val { + // Return false here so tests can observe which version of the function is registered + // Actual function libraries must not break backward compatibility + return types.False + })))) + options = append(options, cel.Function("testV1", + cel.Overload("testV1", []*cel.Type{}, cel.BoolType, + cel.FunctionBinding(func(args ...ref.Val) ref.Val { + return types.True + })))) + } + return options +} + +func (*testLib) ProgramOptions() []cel.ProgramOption { + return []cel.ProgramOption{} +} diff --git 
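The test library above demonstrates CEL function-library versioning: each compatibility version registers its own overloads, and version 1 deliberately flips the result of `test()` so tests can observe which version is active. A stdlib-only sketch of the same gating pattern (the map-of-functions shape here is illustrative, not part of the cel-go API):

```go
package main

import "fmt"

type fn func() bool

// compileOptions mirrors testLib.CompileOptions: the set of registered
// functions, and their behavior, depends on the pinned library version.
func compileOptions(version uint32) map[string]fn {
	fns := map[string]fn{}
	if version == 0 {
		fns["test"] = func() bool { return true }
	}
	if version >= 1 {
		// v1 intentionally changes the result so callers can detect which
		// version was registered; real libraries must remain backward
		// compatible instead of changing behavior like this.
		fns["test"] = func() bool { return false }
		fns["testV1"] = func() bool { return true }
	}
	return fns
}

func main() {
	fmt.Println(compileOptions(0)["test"]()) // true
	fmt.Println(compileOptions(1)["test"]()) // false
	_, hasV1 := compileOptions(0)["testV1"]
	fmt.Println(hasV1) // false
}
```

Pinning a version at environment construction keeps previously-written expressions evaluating the same way after an upgrade.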
a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/cel/quantity.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/cel/quantity.go new file mode 100644 index 000000000000..1057e33fe8e7 --- /dev/null +++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/cel/quantity.go @@ -0,0 +1,76 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package cel + +import ( + "fmt" + "reflect" + + "github.com/google/cel-go/cel" + "github.com/google/cel-go/checker/decls" + "github.com/google/cel-go/common/types" + "github.com/google/cel-go/common/types/ref" + "k8s.io/apimachinery/pkg/api/resource" +) + +var ( + QuantityObject = decls.NewObjectType("kubernetes.Quantity") + quantityTypeValue = types.NewTypeValue("kubernetes.Quantity") + QuantityType = cel.ObjectType("kubernetes.Quantity") +) + +// Quantity provides a CEL representation of a resource.Quantity +type Quantity struct { + *resource.Quantity +} + +func (d Quantity) ConvertToNative(typeDesc reflect.Type) (interface{}, error) { + if reflect.TypeOf(d.Quantity).AssignableTo(typeDesc) { + return d.Quantity, nil + } + if reflect.TypeOf("").AssignableTo(typeDesc) { + return d.Quantity.String(), nil + } + return nil, fmt.Errorf("type conversion error from 'Quantity' to '%v'", typeDesc) +} + +func (d Quantity) ConvertToType(typeVal ref.Type) ref.Val { + switch typeVal { + case typeValue: + return d + case types.TypeType: + return quantityTypeValue + default: + return types.NewErr("type conversion error from
'%s' to '%s'", quantityTypeValue, typeVal) + } +} + +func (d Quantity) Equal(other ref.Val) ref.Val { + otherDur, ok := other.(Quantity) + if !ok { + return types.MaybeNoSuchOverloadErr(other) + } + return types.Bool(d.Quantity.Equal(*otherDur.Quantity)) +} + +func (d Quantity) Type() ref.Type { + return quantityTypeValue +} + +func (d Quantity) Value() interface{} { + return d.Quantity +} diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/cel/registry.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/cel/registry.go deleted file mode 100644 index 1aee3a127d6c..000000000000 --- a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/cel/registry.go +++ /dev/null @@ -1,79 +0,0 @@ -/* -Copyright 2022 The Kubernetes Authors. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -package cel - -import ( - "sync" - - "github.com/google/cel-go/cel" -) - -// Resolver declares methods to find policy templates and related configuration objects. -type Resolver interface { - // FindType returns a DeclType instance corresponding to the given fully-qualified name, if - // present. - FindType(name string) (*DeclType, bool) -} - -// NewRegistry create a registry for keeping track of environments and types -// from a base cel.Env expression environment. 
-func NewRegistry(stdExprEnv *cel.Env) *Registry { - return &Registry{ - exprEnvs: map[string]*cel.Env{"": stdExprEnv}, - types: map[string]*DeclType{ - BoolType.TypeName(): BoolType, - BytesType.TypeName(): BytesType, - DoubleType.TypeName(): DoubleType, - DurationType.TypeName(): DurationType, - IntType.TypeName(): IntType, - NullType.TypeName(): NullType, - StringType.TypeName(): StringType, - TimestampType.TypeName(): TimestampType, - UintType.TypeName(): UintType, - ListType.TypeName(): ListType, - MapType.TypeName(): MapType, - }, - } -} - -// Registry defines a repository of environment, schema, template, and type definitions. -// -// Registry instances are concurrency-safe. -type Registry struct { - rwMux sync.RWMutex - exprEnvs map[string]*cel.Env - types map[string]*DeclType -} - -// FindType implements the Resolver interface method. -func (r *Registry) FindType(name string) (*DeclType, bool) { - r.rwMux.RLock() - defer r.rwMux.RUnlock() - typ, found := r.types[name] - if found { - return typ, true - } - return typ, found -} - -// SetType registers a DeclType descriptor by its fully qualified name. -func (r *Registry) SetType(name string, declType *DeclType) error { - r.rwMux.Lock() - defer r.rwMux.Unlock() - r.types[name] = declType - return nil -} diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/cel/types.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/cel/types.go index b2cc92d59eb1..bd14e1697445 100644 --- a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/cel/types.go +++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/cel/types.go @@ -319,44 +319,53 @@ func (f *DeclField) EnumValues() []ref.Val { return ev } -// NewRuleTypes returns an Open API Schema-based type-system which is CEL compatible. 
-func NewRuleTypes(kind string, - declType *DeclType, - res Resolver) (*RuleTypes, error) { +func allTypesForDecl(declTypes []*DeclType) map[string]*DeclType { + if declTypes == nil { + return nil + } + allTypes := map[string]*DeclType{} + for _, declType := range declTypes { + for k, t := range FieldTypeMap(declType.TypeName(), declType) { + allTypes[k] = t + } + } + + return allTypes +} + +// NewDeclTypeProvider returns an Open API Schema-based type-system which is CEL compatible. +func NewDeclTypeProvider(rootTypes ...*DeclType) *DeclTypeProvider { // Note, if the schema indicates that it's actually based on another proto // then prefer the proto definition. For expressions in the proto, a new field // annotation will be needed to indicate the expected environment and type of // the expression. - schemaTypes, err := newSchemaTypeProvider(kind, declType) - if err != nil { - return nil, err + allTypes := allTypesForDecl(rootTypes) + return &DeclTypeProvider{ + registeredTypes: allTypes, } - if schemaTypes == nil { - return nil, nil - } - return &RuleTypes{ - ruleSchemaDeclTypes: schemaTypes, - resolver: res, - }, nil } -// RuleTypes extends the CEL ref.TypeProvider interface and provides an Open API Schema-based +// DeclTypeProvider extends the CEL ref.TypeProvider interface and provides an Open API Schema-based // type-system. -type RuleTypes struct { - ref.TypeProvider - ruleSchemaDeclTypes *schemaTypeProvider - typeAdapter ref.TypeAdapter - resolver Resolver +type DeclTypeProvider struct { + registeredTypes map[string]*DeclType + typeProvider ref.TypeProvider + typeAdapter ref.TypeAdapter +} + +func (rt *DeclTypeProvider) EnumValue(enumName string) ref.Val { + return rt.typeProvider.EnumValue(enumName) +} + +func (rt *DeclTypeProvider) FindIdent(identName string) (ref.Val, bool) { + return rt.typeProvider.FindIdent(identName) } // EnvOptions returns a set of cel.EnvOption values which includes the declaration set // as well as a custom ref.TypeProvider. 
// -// Note, the standard declaration set includes 'rule' which is defined as the top-level rule-schema -// type if one is configured. -// -// If the RuleTypes value is nil, an empty []cel.EnvOption set is returned. -func (rt *RuleTypes) EnvOptions(tp ref.TypeProvider) ([]cel.EnvOption, error) { +// If the DeclTypeProvider value is nil, an empty []cel.EnvOption set is returned. +func (rt *DeclTypeProvider) EnvOptions(tp ref.TypeProvider) ([]cel.EnvOption, error) { if rt == nil { return []cel.EnvOption{}, nil } @@ -367,13 +376,12 @@ func (rt *RuleTypes) EnvOptions(tp ref.TypeProvider) ([]cel.EnvOption, error) { return []cel.EnvOption{ cel.CustomTypeProvider(rtWithTypes), cel.CustomTypeAdapter(rtWithTypes), - cel.Variable("rule", rt.ruleSchemaDeclTypes.root.CelType()), }, nil } -// WithTypeProvider returns a new RuleTypes that sets the given TypeProvider -// If the original RuleTypes is nil, the returned RuleTypes is still nil. -func (rt *RuleTypes) WithTypeProvider(tp ref.TypeProvider) (*RuleTypes, error) { +// WithTypeProvider returns a new DeclTypeProvider that sets the given TypeProvider +// If the original DeclTypeProvider is nil, the returned DeclTypeProvider is still nil. 
+func (rt *DeclTypeProvider) WithTypeProvider(tp ref.TypeProvider) (*DeclTypeProvider, error) { if rt == nil { return nil, nil } @@ -382,13 +390,12 @@ func (rt *RuleTypes) WithTypeProvider(tp ref.TypeProvider) (*RuleTypes, error) { if ok { ta = tpa } - rtWithTypes := &RuleTypes{ - TypeProvider: tp, - typeAdapter: ta, - ruleSchemaDeclTypes: rt.ruleSchemaDeclTypes, - resolver: rt.resolver, + rtWithTypes := &DeclTypeProvider{ + typeProvider: tp, + typeAdapter: ta, + registeredTypes: rt.registeredTypes, } - for name, declType := range rt.ruleSchemaDeclTypes.types { + for name, declType := range rt.registeredTypes { tpType, found := tp.FindType(name) expT, err := declType.ExprType() if err != nil { @@ -396,7 +403,7 @@ func (rt *RuleTypes) WithTypeProvider(tp ref.TypeProvider) (*RuleTypes, error) { } if found && !proto.Equal(tpType, expT) { return nil, fmt.Errorf( - "type %s definition differs between CEL environment and rule", name) + "type %s definition differs between CEL environment and type provider", name) } } return rtWithTypes, nil @@ -409,7 +416,7 @@ func (rt *RuleTypes) WithTypeProvider(tp ref.TypeProvider) (*RuleTypes, error) { // // Note, when the type name is based on the Open API Schema, the name will reflect the object path // where the type definition appears. -func (rt *RuleTypes) FindType(typeName string) (*exprpb.Type, bool) { +func (rt *DeclTypeProvider) FindType(typeName string) (*exprpb.Type, bool) { if rt == nil { return nil, false } @@ -421,11 +428,11 @@ func (rt *RuleTypes) FindType(typeName string) (*exprpb.Type, bool) { } return expT, found } - return rt.TypeProvider.FindType(typeName) + return rt.typeProvider.FindType(typeName) } // FindDeclType returns the CPT type description which can be mapped to a CEL type. 
-func (rt *RuleTypes) FindDeclType(typeName string) (*DeclType, bool) { +func (rt *DeclTypeProvider) FindDeclType(typeName string) (*DeclType, bool) { if rt == nil { return nil, false } @@ -438,10 +445,10 @@ func (rt *RuleTypes) FindDeclType(typeName string) (*DeclType, bool) { // If, in the future an object instance rather than a type name were provided, the field // resolution might more accurately reflect the expected type model. However, in this case // concessions were made to align with the existing CEL interfaces. -func (rt *RuleTypes) FindFieldType(typeName, fieldName string) (*ref.FieldType, bool) { +func (rt *DeclTypeProvider) FindFieldType(typeName, fieldName string) (*ref.FieldType, bool) { st, found := rt.findDeclType(typeName) if !found { - return rt.TypeProvider.FindFieldType(typeName, fieldName) + return rt.typeProvider.FindFieldType(typeName, fieldName) } f, found := st.Fields[fieldName] @@ -471,48 +478,63 @@ func (rt *RuleTypes) FindFieldType(typeName, fieldName string) (*ref.FieldType, // NativeToValue is an implementation of the ref.TypeAdapter interface which supports conversion // of rule values to CEL ref.Val instances. -func (rt *RuleTypes) NativeToValue(val interface{}) ref.Val { +func (rt *DeclTypeProvider) NativeToValue(val interface{}) ref.Val { return rt.typeAdapter.NativeToValue(val) } -// TypeNames returns the list of type names declared within the RuleTypes object. -func (rt *RuleTypes) TypeNames() []string { - typeNames := make([]string, len(rt.ruleSchemaDeclTypes.types)) +func (rt *DeclTypeProvider) NewValue(typeName string, fields map[string]ref.Val) ref.Val { + // TODO: implement for OpenAPI types to enable CEL object instantiation, which is needed + // for mutating admission. + return rt.typeProvider.NewValue(typeName, fields) +} + +// TypeNames returns the list of type names declared within the DeclTypeProvider object.
+func (rt *DeclTypeProvider) TypeNames() []string { + typeNames := make([]string, len(rt.registeredTypes)) i := 0 - for name := range rt.ruleSchemaDeclTypes.types { + for name := range rt.registeredTypes { typeNames[i] = name i++ } return typeNames } -func (rt *RuleTypes) findDeclType(typeName string) (*DeclType, bool) { - declType, found := rt.ruleSchemaDeclTypes.types[typeName] +func (rt *DeclTypeProvider) findDeclType(typeName string) (*DeclType, bool) { + declType, found := rt.registeredTypes[typeName] if found { return declType, true } - declType, found = rt.resolver.FindType(typeName) - if found { - return declType, true + declType = findScalar(typeName) + return declType, declType != nil +} + +func findScalar(typename string) *DeclType { + switch typename { + case BoolType.TypeName(): + return BoolType + case BytesType.TypeName(): + return BytesType + case DoubleType.TypeName(): + return DoubleType + case DurationType.TypeName(): + return DurationType + case IntType.TypeName(): + return IntType + case NullType.TypeName(): + return NullType + case StringType.TypeName(): + return StringType + case TimestampType.TypeName(): + return TimestampType + case UintType.TypeName(): + return UintType + case ListType.TypeName(): + return ListType + case MapType.TypeName(): + return MapType + default: + return nil } - return nil, false -} - -func newSchemaTypeProvider(kind string, declType *DeclType) (*schemaTypeProvider, error) { - if declType == nil { - return nil, nil - } - root := declType.MaybeAssignTypeName(kind) - types := FieldTypeMap(kind, root) - return &schemaTypeProvider{ - root: root, - types: types, - }, nil -} - -type schemaTypeProvider struct { - root *DeclType - types map[string]*DeclType } var ( diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/endpoints/filters/audit.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/endpoints/filters/audit.go index ccb628b443e9..6f850f728bfd 100644 --- 
a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/endpoints/filters/audit.go +++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/endpoints/filters/audit.go @@ -51,11 +51,11 @@ func WithAudit(handler http.Handler, sink audit.Sink, policy audit.PolicyRuleEva return } - if ac == nil || ac.Event == nil { + if !ac.Enabled() { handler.ServeHTTP(w, req) return } - ev := ac.Event + ev := &ac.Event ctx := req.Context() omitStages := ac.RequestAuditConfig.OmitStages @@ -124,7 +124,7 @@ func evaluatePolicyAndCreateAuditEvent(req *http.Request, policy audit.PolicyRul ctx := req.Context() ac := audit.AuditContextFrom(ctx) if ac == nil { - // Auditing not enabled. + // Auditing not configured. return nil, nil } @@ -145,12 +145,7 @@ func evaluatePolicyAndCreateAuditEvent(req *http.Request, policy audit.PolicyRul if !ok { requestReceivedTimestamp = time.Now() } - ev, err := audit.NewEventFromRequest(req, requestReceivedTimestamp, rac.Level, attribs) - if err != nil { - return nil, fmt.Errorf("failed to complete audit event from request: %v", err) - } - - ac.Event = ev + audit.LogRequestMetadata(ctx, req, requestReceivedTimestamp, rac.Level, attribs) return ac, nil } diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/endpoints/filters/authentication.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/endpoints/filters/authentication.go index d6741bf3a3aa..277bdcdfe5f6 100644 --- a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/endpoints/filters/authentication.go +++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/endpoints/filters/authentication.go @@ -34,17 +34,17 @@ import ( "k8s.io/klog/v2" ) -type recordMetrics func(context.Context, *authenticator.Response, bool, error, authenticator.Audiences, time.Time, time.Time) +type authenticationRecordMetricsFunc func(context.Context, *authenticator.Response, bool, error, authenticator.Audiences, time.Time, time.Time) // WithAuthentication creates an http handler that tries to authenticate the given request as a user, and then 
// stores any such user found onto the provided context for the request. If authentication fails or returns an error // the failed handler is used. On success, "Authorization" header is removed from the request and handler // is invoked to serve the request. func WithAuthentication(handler http.Handler, auth authenticator.Request, failed http.Handler, apiAuds authenticator.Audiences, requestHeaderConfig *authenticatorfactory.RequestHeaderConfig) http.Handler { - return withAuthentication(handler, auth, failed, apiAuds, requestHeaderConfig, recordAuthMetrics) + return withAuthentication(handler, auth, failed, apiAuds, requestHeaderConfig, recordAuthenticationMetrics) } -func withAuthentication(handler http.Handler, auth authenticator.Request, failed http.Handler, apiAuds authenticator.Audiences, requestHeaderConfig *authenticatorfactory.RequestHeaderConfig, metrics recordMetrics) http.Handler { +func withAuthentication(handler http.Handler, auth authenticator.Request, failed http.Handler, apiAuds authenticator.Audiences, requestHeaderConfig *authenticatorfactory.RequestHeaderConfig, metrics authenticationRecordMetricsFunc) http.Handler { if auth == nil { klog.Warning("Authentication is disabled") return handler diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/endpoints/filters/authn_audit.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/endpoints/filters/authn_audit.go index 092a9dd03201..4bd6bbc13966 100644 --- a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/endpoints/filters/authn_audit.go +++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/endpoints/filters/authn_audit.go @@ -43,11 +43,11 @@ func WithFailedAuthenticationAudit(failedHandler http.Handler, sink audit.Sink, return } - if ac == nil || ac.Event == nil { + if !ac.Enabled() { failedHandler.ServeHTTP(w, req) return } - ev := ac.Event + ev := &ac.Event ev.ResponseStatus = &metav1.Status{} ev.ResponseStatus.Message = getAuthMethods(req) diff --git 
a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/endpoints/filters/authorization.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/endpoints/filters/authorization.go index f7648d41cedd..e102a1e32818 100644 --- a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/endpoints/filters/authorization.go +++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/endpoints/filters/authorization.go @@ -20,6 +20,7 @@ import ( "context" "errors" "net/http" + "time" "k8s.io/klog/v2" @@ -41,14 +42,21 @@ const ( reasonError = "internal error" ) -// WithAuthorizationCheck passes all authorized requests on to handler, and returns a forbidden error otherwise. -func WithAuthorization(handler http.Handler, a authorizer.Authorizer, s runtime.NegotiatedSerializer) http.Handler { +type recordAuthorizationMetricsFunc func(ctx context.Context, authorized authorizer.Decision, err error, authStart time.Time, authFinish time.Time) + +// WithAuthorization passes all authorized requests on to handler, and returns a forbidden error otherwise. +func WithAuthorization(handler http.Handler, auth authorizer.Authorizer, s runtime.NegotiatedSerializer) http.Handler { + return withAuthorization(handler, auth, s, recordAuthorizationMetrics) +} + +func withAuthorization(handler http.Handler, a authorizer.Authorizer, s runtime.NegotiatedSerializer, metrics recordAuthorizationMetricsFunc) http.Handler { if a == nil { klog.Warning("Authorization is disabled") return handler } return http.HandlerFunc(func(w http.ResponseWriter, req *http.Request) { ctx := req.Context() + authorizationStart := time.Now() attributes, err := GetAuthorizerAttributes(ctx) if err != nil { @@ -56,6 +64,12 @@ func WithAuthorization(handler http.Handler, a authorizer.Authorizer, s runtime.
return } authorized, reason, err := a.Authorize(ctx, attributes) + + authorizationFinish := time.Now() + defer func() { + metrics(ctx, authorized, err, authorizationStart, authorizationFinish) + }() + // an authorizer like RBAC could encounter evaluation errors and still allow the request, so authorizer decision is checked before error here. if authorized == authorizer.DecisionAllow { audit.AddAuditAnnotations(ctx, diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/endpoints/filters/metrics.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/endpoints/filters/metrics.go index 47e1be847c7a..a4dae3d84ed9 100644 --- a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/endpoints/filters/metrics.go +++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/endpoints/filters/metrics.go @@ -21,6 +21,8 @@ import ( "strings" "time" + "k8s.io/apiserver/pkg/authorization/authorizer" + "k8s.io/apiserver/pkg/authentication/authenticator" "k8s.io/component-base/metrics" "k8s.io/component-base/metrics/legacyregistry" @@ -38,6 +40,10 @@ const ( successLabel = "success" failureLabel = "failure" errorLabel = "error" + + allowedLabel = "allowed" + deniedLabel = "denied" + noOpinionLabel = "no-opinion" ) var ( @@ -68,15 +74,54 @@ var ( }, []string{"result"}, ) + + authorizationAttemptsCounter = metrics.NewCounterVec( + &metrics.CounterOpts{ + Name: "authorization_attempts_total", + Help: "Counter of authorization attempts broken down by result. 
It can be either 'allowed', 'denied', 'no-opinion' or 'error'.", + StabilityLevel: metrics.ALPHA, + }, + []string{"result"}, + ) + + authorizationLatency = metrics.NewHistogramVec( + &metrics.HistogramOpts{ + Name: "authorization_duration_seconds", + Help: "Authorization duration in seconds broken out by result.", + Buckets: metrics.ExponentialBuckets(0.001, 2, 15), + StabilityLevel: metrics.ALPHA, + }, + []string{"result"}, + ) ) func init() { legacyregistry.MustRegister(authenticatedUserCounter) legacyregistry.MustRegister(authenticatedAttemptsCounter) legacyregistry.MustRegister(authenticationLatency) + legacyregistry.MustRegister(authorizationAttemptsCounter) + legacyregistry.MustRegister(authorizationLatency) +} + +func recordAuthorizationMetrics(ctx context.Context, authorized authorizer.Decision, err error, authStart time.Time, authFinish time.Time) { + var resultLabel string + + switch { + case authorized == authorizer.DecisionAllow: + resultLabel = allowedLabel + case err != nil: + resultLabel = errorLabel + case authorized == authorizer.DecisionDeny: + resultLabel = deniedLabel + case authorized == authorizer.DecisionNoOpinion: + resultLabel = noOpinionLabel + } + + authorizationAttemptsCounter.WithContext(ctx).WithLabelValues(resultLabel).Inc() + authorizationLatency.WithContext(ctx).WithLabelValues(resultLabel).Observe(authFinish.Sub(authStart).Seconds()) } -func recordAuthMetrics(ctx context.Context, resp *authenticator.Response, ok bool, err error, apiAudiences authenticator.Audiences, authStart time.Time, authFinish time.Time) { +func recordAuthenticationMetrics(ctx context.Context, resp *authenticator.Response, ok bool, err error, apiAudiences authenticator.Audiences, authStart time.Time, authFinish time.Time) { var resultLabel string switch { diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/endpoints/filters/request_deadline.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/endpoints/filters/request_deadline.go index 
66b569e891b1..51425bb8acd1 100644 --- a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/endpoints/filters/request_deadline.go +++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/endpoints/filters/request_deadline.go @@ -115,11 +115,11 @@ func withFailedRequestAudit(failedHandler http.Handler, statusErr *apierrors.Sta return } - if ac == nil || ac.Event == nil { + if !ac.Enabled() { failedHandler.ServeHTTP(w, req) return } - ev := ac.Event + ev := &ac.Event ev.ResponseStatus = &metav1.Status{} ev.Stage = auditinternal.StageResponseStarted diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/endpoints/groupversion.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/endpoints/groupversion.go index 3c70e89ec0ee..0ce06ab10699 100644 --- a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/endpoints/groupversion.go +++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/endpoints/groupversion.go @@ -56,6 +56,11 @@ type APIGroupVersion struct { // GroupVersion is the external group version GroupVersion schema.GroupVersion + // AllServedVersionsByResource is indexed by resource and maps to a list of versions that resource exists in. + // This was created so that StorageVersion for APIs can include a list of all version that are served for each + // GroupResource tuple. + AllServedVersionsByResource map[string][]string + // OptionsExternalVersion controls the Kubernetes APIVersion used for common objects in the apiserver // schema like api.Status, api.DeleteOptions, and metav1.ListOptions. Other implementors may // define a version "v1beta1" but want to use the Kubernetes "v1" internal objects. 
If diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/endpoints/handlers/create.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/endpoints/handlers/create.go index 78c1d2f52a73..120d3f665bd0 100644 --- a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/endpoints/handlers/create.go +++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/endpoints/handlers/create.go @@ -191,14 +191,13 @@ func createHandler(r rest.NamedCreater, scope *RequestScope, admit admission.Int // Dedup owner references before updating managed fields dedupOwnerReferencesAndAddWarning(obj, req.Context(), false) result, err := finisher.FinishRequest(ctx, func() (runtime.Object, error) { - if scope.FieldManager != nil { - liveObj, err := scope.Creater.New(scope.Kind) - if err != nil { - return nil, fmt.Errorf("failed to create new object (Create for %v): %v", scope.Kind, err) - } - obj = scope.FieldManager.UpdateNoErrors(liveObj, obj, managerOrUserAgent(options.FieldManager, req.UserAgent())) - admit = fieldmanager.NewManagedFieldsValidatingAdmissionController(admit) + liveObj, err := scope.Creater.New(scope.Kind) + if err != nil { + return nil, fmt.Errorf("failed to create new object (Create for %v): %v", scope.Kind, err) } + obj = scope.FieldManager.UpdateNoErrors(liveObj, obj, managerOrUserAgent(options.FieldManager, req.UserAgent())) + admit = fieldmanager.NewManagedFieldsValidatingAdmissionController(admit) + if mutatingAdmission, ok := admit.(admission.MutationInterface); ok && mutatingAdmission.Handles(admission.Create) { if err := mutatingAdmission.Admit(ctx, admissionAttributes, scope); err != nil { return nil, err diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/endpoints/handlers/patch.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/endpoints/handlers/patch.go index 4f5533f34afe..8209efef708e 100644 --- a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/endpoints/handlers/patch.go +++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/endpoints/handlers/patch.go @@ 
-177,9 +177,8 @@ func PatchResource(r rest.Patcher, scope *RequestScope, admit admission.Interfac userInfo, ) - if scope.FieldManager != nil { - admit = fieldmanager.NewManagedFieldsValidatingAdmissionController(admit) - } + admit = fieldmanager.NewManagedFieldsValidatingAdmissionController(admit) + mutatingAdmission, _ := admit.(admission.MutationInterface) createAuthorizerAttributes := authorizer.AttributesRecord{ User: userInfo, @@ -345,9 +344,12 @@ func (p *jsonPatcher) applyPatchToCurrentObject(requestContext context.Context, } } - if p.fieldManager != nil { - objToUpdate = p.fieldManager.UpdateNoErrors(currentObject, objToUpdate, managerOrUserAgent(p.options.FieldManager, p.userAgent)) + if p.options == nil { + // Provide a more informative error for the crash that would + // happen on the next line + panic("PatchOptions required but not provided") } + objToUpdate = p.fieldManager.UpdateNoErrors(currentObject, objToUpdate, managerOrUserAgent(p.options.FieldManager, p.userAgent)) return objToUpdate, nil } @@ -441,9 +443,7 @@ func (p *smpPatcher) applyPatchToCurrentObject(requestContext context.Context, c return nil, err } - if p.fieldManager != nil { - newObj = p.fieldManager.UpdateNoErrors(currentObject, newObj, managerOrUserAgent(p.options.FieldManager, p.userAgent)) - } + newObj = p.fieldManager.UpdateNoErrors(currentObject, newObj, managerOrUserAgent(p.options.FieldManager, p.userAgent)) return newObj, nil } @@ -654,9 +654,6 @@ func (p *patcher) patchResource(ctx context.Context, scope *RequestScope) (runti } transformers := []rest.TransformFunc{p.applyPatch, p.applyAdmission, dedupOwnerReferencesTransformer} - if scope.FieldManager != nil { - transformers = append(transformers, fieldmanager.IgnoreManagedFieldsTimestampsTransformer) - } wasCreated := false p.updatedObjectInfo = rest.DefaultUpdatedObjectInfo(nil, transformers...) 
diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters/writers.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters/writers.go index cf84e8e290e2..acd8f0357aaf 100644 --- a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters/writers.go +++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters/writers.go @@ -34,6 +34,7 @@ import ( "k8s.io/apimachinery/pkg/runtime" "k8s.io/apimachinery/pkg/runtime/schema" + "k8s.io/apimachinery/pkg/util/httpstream/wsstream" utilruntime "k8s.io/apimachinery/pkg/util/runtime" "k8s.io/apiserver/pkg/audit" "k8s.io/apiserver/pkg/endpoints/handlers/negotiation" @@ -42,7 +43,6 @@ import ( "k8s.io/apiserver/pkg/registry/rest" utilfeature "k8s.io/apiserver/pkg/util/feature" "k8s.io/apiserver/pkg/util/flushwriter" - "k8s.io/apiserver/pkg/util/wsstream" "k8s.io/component-base/tracing" ) diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/endpoints/handlers/update.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/endpoints/handlers/update.go index 630c97cdcdd6..4b76ef97e07d 100644 --- a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/endpoints/handlers/update.go +++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/endpoints/handlers/update.go @@ -156,15 +156,13 @@ func UpdateResource(r rest.Updater, scope *RequestScope, admit admission.Interfa // allows skipping managedFields update if the resulting object is too big shouldUpdateManagedFields := true - if scope.FieldManager != nil { - admit = fieldmanager.NewManagedFieldsValidatingAdmissionController(admit) - transformers = append(transformers, func(_ context.Context, newObj, liveObj runtime.Object) (runtime.Object, error) { - if shouldUpdateManagedFields { - return scope.FieldManager.UpdateNoErrors(liveObj, newObj, managerOrUserAgent(options.FieldManager, req.UserAgent())), nil - } - return newObj, nil - }) - } + admit = 
fieldmanager.NewManagedFieldsValidatingAdmissionController(admit) + transformers = append(transformers, func(_ context.Context, newObj, liveObj runtime.Object) (runtime.Object, error) { + if shouldUpdateManagedFields { + return scope.FieldManager.UpdateNoErrors(liveObj, newObj, managerOrUserAgent(options.FieldManager, req.UserAgent())), nil + } + return newObj, nil + }) if mutatingAdmission, ok := admit.(admission.MutationInterface); ok { transformers = append(transformers, func(ctx context.Context, newObj, oldObj runtime.Object) (runtime.Object, error) { @@ -189,15 +187,6 @@ func UpdateResource(r rest.Updater, scope *RequestScope, admit admission.Interfa }) } - // Ignore changes that only affect managed fields - // timestamps. FieldManager can't know about changes - // like normalized fields, defaulted fields and other - // mutations. - // Only makes sense when SSA field manager is being used - if scope.FieldManager != nil { - transformers = append(transformers, fieldmanager.IgnoreManagedFieldsTimestampsTransformer) - } - createAuthorizerAttributes := authorizer.AttributesRecord{ User: userInfo, ResourceRequest: true, @@ -237,7 +226,7 @@ func UpdateResource(r rest.Updater, scope *RequestScope, admit admission.Interfa result, err := requestFunc() // If the object wasn't committed to storage because it's serialized size was too large, // it is safe to remove managedFields (which can be large) and try again. 
- if isTooLargeError(err) && scope.FieldManager != nil { + if isTooLargeError(err) { if accessor, accessorErr := meta.Accessor(obj); accessorErr == nil { accessor.SetManagedFields(nil) shouldUpdateManagedFields = false diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/endpoints/handlers/watch.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/endpoints/handlers/watch.go index c76cc194a2ce..79cb11ca6001 100644 --- a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/endpoints/handlers/watch.go +++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/endpoints/handlers/watch.go @@ -30,12 +30,12 @@ import ( metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" "k8s.io/apimachinery/pkg/runtime" "k8s.io/apimachinery/pkg/runtime/serializer/streaming" + "k8s.io/apimachinery/pkg/util/httpstream/wsstream" utilruntime "k8s.io/apimachinery/pkg/util/runtime" "k8s.io/apimachinery/pkg/watch" "k8s.io/apiserver/pkg/endpoints/handlers/negotiation" "k8s.io/apiserver/pkg/endpoints/metrics" apirequest "k8s.io/apiserver/pkg/endpoints/request" - "k8s.io/apiserver/pkg/util/wsstream" ) // nothing will ever be sent down this channel @@ -219,7 +219,7 @@ func (s *WatchServer) ServeHTTP(w http.ResponseWriter, req *http.Request) { var unknown runtime.Unknown internalEvent := &metav1.InternalEvent{} outEvent := &metav1.WatchEvent{} - buf := &bytes.Buffer{} + buf := runtime.NewSpliceBuffer() ch := s.Watching.ResultChan() done := req.Context().Done() diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/endpoints/installer.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/endpoints/installer.go index 3f8b6807e759..042bd802f1aa 100644 --- a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/endpoints/installer.go +++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/endpoints/installer.go @@ -127,6 +127,9 @@ func ConvertGroupVersionIntoToDiscovery(list []metav1.APIResource) ([]apidiscove apiResourceList = append(apiResourceList, apidiscoveryv2beta1.APIResourceDiscovery{ Resource: split[0], Scope: 
scope, + // avoid nil panics in v0.26.0-v0.26.3 client-go clients + // see https://github.com/kubernetes/kubernetes/issues/118361 + ResponseKind: &metav1.GroupVersionKind{}, }) parentidx = len(apiResourceList) - 1 parentResources[split[0]] = parentidx @@ -140,6 +143,9 @@ func ConvertGroupVersionIntoToDiscovery(list []metav1.APIResource) ([]apidiscove subresource := apidiscoveryv2beta1.APISubresourceDiscovery{ Subresource: split[1], Verbs: r.Verbs, + // avoid nil panics in v0.26.0-v0.26.3 client-go clients + // see https://github.com/kubernetes/kubernetes/issues/118361 + ResponseKind: &metav1.GroupVersionKind{}, } if r.Kind != "" { subresource.ResponseKind = &metav1.GroupVersionKind{ @@ -600,6 +606,7 @@ func (a *APIInstaller) registerResourceHandlers(path string, storage rest.Storag if a.group.ConvertabilityChecker != nil { decodableVersions = a.group.ConvertabilityChecker.VersionsForGroupKind(fqKindToRegister.GroupKind()) } + resourceInfo = &storageversion.ResourceInfo{ GroupResource: schema.GroupResource{ Group: a.group.GroupVersion.Group, @@ -612,6 +619,8 @@ func (a *APIInstaller) registerResourceHandlers(path string, storage rest.Storag EquivalentResourceMapper: a.group.EquivalentResourceRegistry, DirectlyDecodableVersions: decodableVersions, + + ServedVersions: a.group.AllServedVersionsByResource[path], } } @@ -674,28 +683,23 @@ func (a *APIInstaller) registerResourceHandlers(path string, storage rest.Storag reqScope.MetaGroupVersion = *a.group.MetaGroupVersion } - // Use TypeConverter's nil-ness as a proxy for whether SSA/OpenAPI is enabled - // This should be removed in the future and made unconditional - // https://github.com/kubernetes/kubernetes/pull/114998 - if a.group.TypeConverter != nil { - var resetFields map[fieldpath.APIVersion]*fieldpath.Set - if resetFieldsStrategy, isResetFieldsStrategy := storage.(rest.ResetFieldsStrategy); isResetFieldsStrategy { - resetFields = resetFieldsStrategy.GetResetFields() - } + var resetFields 
map[fieldpath.APIVersion]*fieldpath.Set + if resetFieldsStrategy, isResetFieldsStrategy := storage.(rest.ResetFieldsStrategy); isResetFieldsStrategy { + resetFields = resetFieldsStrategy.GetResetFields() + } - reqScope.FieldManager, err = managedfields.NewDefaultFieldManager( - a.group.TypeConverter, - a.group.UnsafeConvertor, - a.group.Defaulter, - a.group.Creater, - fqKindToRegister, - reqScope.HubGroupVersion, - subresource, - resetFields, - ) - if err != nil { - return nil, nil, fmt.Errorf("failed to create field manager: %v", err) - } + reqScope.FieldManager, err = managedfields.NewDefaultFieldManager( + a.group.TypeConverter, + a.group.UnsafeConvertor, + a.group.Defaulter, + a.group.Creater, + fqKindToRegister, + reqScope.HubGroupVersion, + subresource, + resetFields, + ) + if err != nil { + return nil, nil, fmt.Errorf("failed to create field manager: %v", err) } for _, action := range actions { @@ -716,7 +720,7 @@ func (a *APIInstaller) registerResourceHandlers(path string, storage rest.Storag requestScope = "resource" operationSuffix = operationSuffix + "WithPath" } - if strings.Index(action.Path, "/{name}") != -1 || action.Verb == "POST" { + if strings.Contains(action.Path, "/{name}") || action.Verb == "POST" { requestScope = "resource" } if action.AllNamespaces { diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/endpoints/metrics/metrics.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/endpoints/metrics/metrics.go index 450a6653da6f..ba2aed69d448 100644 --- a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/endpoints/metrics/metrics.go +++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/endpoints/metrics/metrics.go @@ -229,7 +229,7 @@ var ( Subsystem: APIServerComponent, Name: "request_filter_duration_seconds", Help: "Request filter latency distribution in seconds, for each filter type", - Buckets: []float64{0.0001, 0.0003, 0.001, 0.003, 0.01, 0.03, 0.1, 0.3, 1.0, 5.0}, + Buckets: []float64{0.0001, 0.0003, 0.001, 0.003, 0.01, 0.03, 0.1, 0.3, 
1.0, 5.0, 10.0, 15.0, 30.0}, StabilityLevel: compbasemetrics.ALPHA, }, []string{"filter"}, diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/features/kube_features.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/features/kube_features.go index 72cd493758bb..f1d1879ec4b2 100644 --- a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/features/kube_features.go +++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/features/kube_features.go @@ -37,6 +37,7 @@ const ( // owner: @ivelichkovich, @tallclair // alpha: v1.27 + // beta: v1.28 // kep: https://kep.k8s.io/3716 // // Enables usage of MatchConditions fields to use CEL expressions for matching on admission webhooks @@ -87,16 +88,6 @@ const ( // Add support for distributed tracing in the API Server APIServerTracing featuregate.Feature = "APIServerTracing" - // owner: @tallclair - // alpha: v1.7 - // beta: v1.8 - // GA: v1.12 - // - // AdvancedAuditing enables a much more general API auditing pipeline, which includes support for - // pluggable output backends and an audit policy specifying how different requests should be - // audited. - AdvancedAuditing featuregate.Feature = "AdvancedAuditing" - // owner: @cici37 @jpbetz // kep: http://kep.k8s.io/3488 // alpha: v1.26 @@ -112,17 +103,6 @@ const ( // Enables expression validation for Custom Resource CustomResourceValidationExpressions featuregate.Feature = "CustomResourceValidationExpressions" - // owner: @apelisse - // alpha: v1.12 - // beta: v1.13 - // stable: v1.18 - // - // Allow requests to be processed but not stored, so that - // validation, merging, mutation can be tested without - // committing. - DryRun featuregate.Feature = "DryRun" - - // owner: @wojtek-t // alpha: v1.20 // beta: v1.21 // GA: v1.24 @@ -130,6 +110,13 @@ const ( // Allows for updating watchcache resource version with progress notify events. 
EfficientWatchResumption featuregate.Feature = "EfficientWatchResumption" + // owner: @aramase + // kep: https://kep.k8s.io/3299 + // deprecated: v1.28 + // + // Enables KMS v1 API for encryption at rest. + KMSv1 featuregate.Feature = "KMSv1" + // owner: @aramase // kep: https://kep.k8s.io/3299 // alpha: v1.25 @@ -138,6 +125,13 @@ const ( // Enables KMS v2 API for encryption at rest. KMSv2 featuregate.Feature = "KMSv2" + // owner: @enj + // kep: https://kep.k8s.io/3299 + // beta: v1.28 + // + // Enables the use of derived encryption keys with KMS v2. + KMSv2KDF featuregate.Feature = "KMSv2KDF" + // owner: @jiahuif // kep: https://kep.k8s.io/2887 // alpha: v1.23 @@ -222,6 +216,13 @@ const ( // // Allow the API server to stream individual items instead of chunking WatchList featuregate.Feature = "WatchList" + + // owner: @serathius + // kep: http://kep.k8s.io/2340 + // alpha: v1.28 + // + // Allow the API server to serve consistent lists from cache + ConsistentListFromCache featuregate.Feature = "ConsistentListFromCache" ) func init() { @@ -235,7 +236,7 @@ var defaultKubernetesFeatureGates = map[featuregate.Feature]featuregate.FeatureS AggregatedDiscoveryEndpoint: {Default: true, PreRelease: featuregate.Beta}, - AdmissionWebhookMatchConditions: {Default: false, PreRelease: featuregate.Alpha}, + AdmissionWebhookMatchConditions: {Default: true, PreRelease: featuregate.Beta}, APIListChunking: {Default: true, PreRelease: featuregate.Beta}, @@ -247,18 +248,18 @@ var defaultKubernetesFeatureGates = map[featuregate.Feature]featuregate.FeatureS APIServerTracing: {Default: true, PreRelease: featuregate.Beta}, - AdvancedAuditing: {Default: true, PreRelease: featuregate.GA, LockToDefault: true}, // remove in 1.28 - - ValidatingAdmissionPolicy: {Default: false, PreRelease: featuregate.Alpha}, + ValidatingAdmissionPolicy: {Default: false, PreRelease: featuregate.Beta}, CustomResourceValidationExpressions: {Default: true, PreRelease: featuregate.Beta}, - DryRun: {Default: true, 
PreRelease: featuregate.GA, LockToDefault: true}, // remove in 1.28 - EfficientWatchResumption: {Default: true, PreRelease: featuregate.GA, LockToDefault: true}, + KMSv1: {Default: true, PreRelease: featuregate.Deprecated}, + KMSv2: {Default: true, PreRelease: featuregate.Beta}, + KMSv2KDF: {Default: false, PreRelease: featuregate.Beta}, // default and lock to true in 1.29, remove in 1.31 + OpenAPIEnums: {Default: true, PreRelease: featuregate.Beta}, OpenAPIV3: {Default: true, PreRelease: featuregate.GA, LockToDefault: true}, // remove in 1.29 @@ -280,4 +281,6 @@ var defaultKubernetesFeatureGates = map[featuregate.Feature]featuregate.FeatureS InPlacePodVerticalScaling: {Default: false, PreRelease: featuregate.Alpha}, WatchList: {Default: false, PreRelease: featuregate.Alpha}, + + ConsistentListFromCache: {Default: false, PreRelease: featuregate.Alpha}, } diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/registry/generic/OWNERS b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/registry/generic/OWNERS index 29d730907f9c..c0e4923f6ac7 100644 --- a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/registry/generic/OWNERS +++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/registry/generic/OWNERS @@ -2,7 +2,6 @@ reviewers: - thockin - - lavalamp - smarterclayton - wojtek-t - deads2k diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/registry/generic/registry/dryrun.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/registry/generic/registry/dryrun.go index 018e4b4c52df..c8db56b2bab7 100644 --- a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/registry/generic/registry/dryrun.go +++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/registry/generic/registry/dryrun.go @@ -18,7 +18,10 @@ package registry import ( "context" + "fmt" + "reflect" + "k8s.io/apimachinery/pkg/conversion" "k8s.io/apimachinery/pkg/runtime" "k8s.io/apimachinery/pkg/watch" "k8s.io/apiserver/pkg/storage" @@ -72,19 +75,30 @@ func (s *DryRunnableStorage) GuaranteedUpdate( ctx 
context.Context, key string, destination runtime.Object, ignoreNotFound bool, preconditions *storage.Preconditions, tryUpdate storage.UpdateFunc, dryRun bool, cachedExistingObject runtime.Object) error { if dryRun { - err := s.Storage.Get(ctx, key, storage.GetOptions{IgnoreNotFound: ignoreNotFound}, destination) + var current runtime.Object + v, err := conversion.EnforcePtr(destination) + if err != nil { + return fmt.Errorf("unable to convert output object to pointer: %v", err) + } + if u, ok := v.Addr().Interface().(runtime.Unstructured); ok { + current = u.NewEmptyInstance() + } else { + current = reflect.New(v.Type()).Interface().(runtime.Object) + } + + err = s.Storage.Get(ctx, key, storage.GetOptions{IgnoreNotFound: ignoreNotFound}, current) if err != nil { return err } - err = preconditions.Check(key, destination) + err = preconditions.Check(key, current) if err != nil { return err } - rev, err := s.Versioner().ObjectResourceVersion(destination) + rev, err := s.Versioner().ObjectResourceVersion(current) if err != nil { return err } - updated, _, err := tryUpdate(destination, storage.ResponseMeta{ResourceVersion: rev}) + updated, _, err := tryUpdate(current, storage.ResponseMeta{ResourceVersion: rev}) if err != nil { return err } diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/registry/generic/registry/store.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/registry/generic/registry/store.go index fa23d29d6c90..028053952a33 100644 --- a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/registry/generic/registry/store.go +++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/registry/generic/registry/store.go @@ -38,6 +38,7 @@ import ( "k8s.io/apimachinery/pkg/util/validation/field" "k8s.io/apimachinery/pkg/util/wait" "k8s.io/apimachinery/pkg/watch" + "k8s.io/apiserver/pkg/endpoints/handlers/fieldmanager" genericapirequest "k8s.io/apiserver/pkg/endpoints/request" "k8s.io/apiserver/pkg/registry/generic" "k8s.io/apiserver/pkg/registry/rest" @@ -671,6 
+672,15 @@ func (e *Store) Update(ctx context.Context, name string, objInfo rest.UpdatedObj if err := rest.BeforeUpdate(e.UpdateStrategy, ctx, obj, existing); err != nil { return nil, nil, err } + + // Ignore changes that only affect managed fields timestamps. + // FieldManager can't know about changes like normalized fields, defaulted + // fields and other mutations. + obj, err = fieldmanager.IgnoreManagedFieldsTimestampsTransformer(ctx, obj, existing) + if err != nil { + return nil, nil, err + } + // at this point we have a fully formed object. It is time to call the validators that the apiserver // handling chain wants to enforce. if updateValidation != nil { @@ -1133,6 +1143,11 @@ func (e *Store) DeleteReturnsDeletedObject() bool { return e.ReturnDeletedObject } +// deleteCollectionPageSize is the size of the page used when +// listing objects from storage during DeleteCollection calls. +// It's a variable to make allow overwriting in tests. +var deleteCollectionPageSize = int64(10000) + // DeleteCollection removes all items returned by List with a given ListOptions from storage. // // DeleteCollection is currently NOT atomic. It can happen that only subset of objects @@ -1145,32 +1160,22 @@ func (e *Store) DeleteCollection(ctx context.Context, deleteValidation rest.Vali listOptions = listOptions.DeepCopy() } - listObj, err := e.List(ctx, listOptions) - if err != nil { - return nil, err - } - items, err := meta.ExtractList(listObj) - if err != nil { - return nil, err - } - if len(items) == 0 { - // Nothing to delete, return now - return listObj, nil - } - // Spawn a number of goroutines, so that we can issue requests to storage - // in parallel to speed up deletion. - // It is proportional to the number of items to delete, up to - // DeleteCollectionWorkers (it doesn't make much sense to spawn 16 - // workers to delete 10 items). + var items []runtime.Object + + // TODO(wojtek-t): Decide if we don't want to start workers more opportunistically. 
workersNumber := e.DeleteCollectionWorkers - if workersNumber > len(items) { - workersNumber = len(items) - } if workersNumber < 1 { workersNumber = 1 } wg := sync.WaitGroup{} - toProcess := make(chan int, 2*workersNumber) + // Ensure that chanSize is not too high (to avoid wasted work) but + // at the same time high enough to start listing before we process + // the whole page. + chanSize := 2 * workersNumber + if chanSize < 256 { + chanSize = 256 + } + toProcess := make(chan runtime.Object, chanSize) errs := make(chan error, workersNumber+1) workersExited := make(chan struct{}) @@ -1183,8 +1188,8 @@ func (e *Store) DeleteCollection(ctx context.Context, deleteValidation rest.Vali }) defer wg.Done() - for index := range toProcess { - accessor, err := meta.Accessor(items[index]) + for item := range toProcess { + accessor, err := meta.Accessor(item) if err != nil { errs <- err return @@ -1210,20 +1215,82 @@ func (e *Store) DeleteCollection(ctx context.Context, deleteValidation rest.Vali close(workersExited) }() - func() { + hasLimit := listOptions.Limit > 0 + if listOptions.Limit == 0 { + listOptions.Limit = deleteCollectionPageSize + } + + // Paginate the list request and throw all items into workers. + listObj, err := func() (runtime.Object, error) { defer close(toProcess) - for i := 0; i < len(items); i++ { + processedItems := 0 + var originalList runtime.Object + for { select { - case toProcess <- i: - case <-workersExited: - klog.V(4).InfoS("workers already exited, and there are some items waiting to be processed", "finished", i, "total", len(items)) - return + case <-ctx.Done(): + return nil, ctx.Err() + default: + } + + listObj, err := e.List(ctx, listOptions) + if err != nil { + return nil, err } + + newItems, err := meta.ExtractList(listObj) + if err != nil { + return nil, err + } + items = append(items, newItems...) 
+ + for i := 0; i < len(newItems); i++ { + select { + case toProcess <- newItems[i]: + case <-workersExited: + klog.V(4).InfoS("workers already exited, and there are some items waiting to be processed", "queued/finished", i, "total", processedItems+len(newItems)) + // Try to propagate an error from the workers if possible. + select { + case err := <-errs: + return nil, err + default: + return nil, fmt.Errorf("all DeleteCollection workers exited") + } + } + } + processedItems += len(newItems) + + // If the original request was setting the limit, finish after running it. + if hasLimit { + return listObj, nil + } + + if originalList == nil { + originalList = listObj + meta.SetList(originalList, nil) + } + + // If there are no more items, return the list. + m, err := meta.ListAccessor(listObj) + if err != nil { + return nil, err + } + if len(m.GetContinue()) == 0 { + meta.SetList(originalList, items) + return originalList, nil + } + + // Set up the next loop. + listOptions.Continue = m.GetContinue() + listOptions.ResourceVersion = "" + listOptions.ResourceVersionMatch = "" } }() + if err != nil { + return nil, err + } - // Wait for all workers to exist. + // Wait for all workers to exit. 
<-workersExited select { diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/server/config.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/server/config.go index 9dc87506a40b..d678f52dfb75 100644 --- a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/server/config.go +++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/server/config.go @@ -65,6 +65,7 @@ import ( "k8s.io/apiserver/pkg/server/healthz" "k8s.io/apiserver/pkg/server/routes" serverstore "k8s.io/apiserver/pkg/server/storage" + storagevalue "k8s.io/apiserver/pkg/storage/value" "k8s.io/apiserver/pkg/storageversion" utilfeature "k8s.io/apiserver/pkg/util/feature" utilflowcontrol "k8s.io/apiserver/pkg/util/flowcontrol" @@ -85,6 +86,13 @@ import ( _ "k8s.io/apiserver/pkg/apis/apiserver/install" ) +// hostnameFunc is a function to set the hostnameFunc of this apiserver. +// To be used for testing purpose only, to simulate scenarios where multiple apiservers +// exist. In such cases we want to ensure unique apiserver IDs which are a hash of hostnameFunc. +var ( + hostnameFunc = os.Hostname +) + const ( // DefaultLegacyAPIPrefix is where the legacy APIs will be located. DefaultLegacyAPIPrefix = "/api" @@ -190,6 +198,8 @@ type Config struct { // SkipOpenAPIInstallation avoids installing the OpenAPI handler if set to true. SkipOpenAPIInstallation bool + // ResourceTransformers are used to transform resources from and to etcd, e.g. encryption. + ResourceTransformers storagevalue.ResourceTransformers // RESTOptionsGetter is used to construct RESTStorage types via the generic registry. 
RESTOptionsGetter genericregistry.RESTOptionsGetter @@ -364,7 +374,7 @@ func NewConfig(codecs serializer.CodecFactory) *Config { defaultHealthChecks := []healthz.HealthChecker{healthz.PingHealthz, healthz.LogHealthz} var id string if utilfeature.DefaultFeatureGate.Enabled(genericfeatures.APIServerIdentity) { - hostname, err := os.Hostname() + hostname, err := hostnameFunc() if err != nil { klog.Fatalf("error getting hostname for apiserver identity: %v", err) } @@ -894,14 +904,16 @@ func BuildHandlerChainWithStorageVersionPrecondition(apiHandler http.Handler, c } func DefaultBuildHandlerChain(apiHandler http.Handler, c *Config) http.Handler { - handler := filterlatency.TrackCompleted(apiHandler) + handler := apiHandler + + handler = filterlatency.TrackCompleted(handler) handler = genericapifilters.WithAuthorization(handler, c.Authorization.Authorizer, c.Serializer) handler = filterlatency.TrackStarted(handler, c.TracerProvider, "authorization") if c.FlowControl != nil { workEstimatorCfg := flowcontrolrequest.DefaultWorkEstimatorConfig() requestWorkEstimator := flowcontrolrequest.NewWorkEstimator( - c.StorageObjectCountTracker.Get, c.FlowControl.GetInterestedWatchCount, workEstimatorCfg) + c.StorageObjectCountTracker.Get, c.FlowControl.GetInterestedWatchCount, workEstimatorCfg, c.FlowControl.GetMaxSeats) handler = filterlatency.TrackCompleted(handler) handler = genericfilters.WithPriorityAndFairness(handler, c.LongRunningFunc, c.FlowControl, requestWorkEstimator) handler = filterlatency.TrackStarted(handler, c.TracerProvider, "priorityandfairness") @@ -1067,3 +1079,12 @@ func AuthorizeClientBearerToken(loopback *restclient.Config, authn *Authenticati tokenAuthenticator := authenticatorfactory.NewFromTokens(tokens, authn.APIAudiences) authn.Authenticator = authenticatorunion.New(tokenAuthenticator, authn.Authenticator) } + +// For testing purpose only +func SetHostnameFuncForTests(name string) { + hostnameFunc = func() (host string, err error) { + host = name + err = 
nil + return + } +} diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/server/filters/maxinflight.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/server/filters/maxinflight.go index 5d7b00ec337d..9effcb768f29 100644 --- a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/server/filters/maxinflight.go +++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/server/filters/maxinflight.go @@ -34,7 +34,6 @@ import ( const ( // Constant for the retry-after interval on rate limiting. - // TODO: maybe make this dynamic? or user-adjustable? retryAfter = "1" // How often inflight usage metric should be updated. Because @@ -210,7 +209,7 @@ func WithMaxInFlightLimit( // We need to split this data between buckets used for throttling. metrics.RecordDroppedRequest(r, requestInfo, metrics.APIServerComponent, isMutatingRequest) metrics.RecordRequestTermination(r, requestInfo, metrics.APIServerComponent, http.StatusTooManyRequests) - tooManyRequests(r, w) + tooManyRequests(r, w, retryAfter) } } }) @@ -221,9 +220,3 @@ func WithMaxInFlightLimit( func StartMaxInFlightWatermarkMaintenance(stopCh <-chan struct{}) { startWatermarkMaintenance(watermark, stopCh) } - -func tooManyRequests(req *http.Request, w http.ResponseWriter) { - // Return a 429 status indicating "Too Many Requests" - w.Header().Set("Retry-After", retryAfter) - http.Error(w, "Too many requests, please try again later.", http.StatusTooManyRequests) -} diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/server/filters/priority-and-fairness.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/server/filters/priority-and-fairness.go index 937971c17eb7..6b3987781601 100644 --- a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/server/filters/priority-and-fairness.go +++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/server/filters/priority-and-fairness.go @@ -21,6 +21,7 @@ import ( "fmt" "net/http" "runtime" + "strconv" "sync" "sync/atomic" "time" @@ -67,242 +68,268 @@ func truncateLogField(s string) string { 
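The `hostnameFunc` change above replaces a direct `os.Hostname()` call with a package-level function variable so tests can simulate several apiservers with distinct IDs. The seam pattern in miniature, with hypothetical `serverID`/`setHostnameFuncForTests` names:

```go
package main

import (
	"fmt"
	"os"
)

// hostnameFunc is the seam: production code calls it instead of
// os.Hostname directly, so tests can substitute a deterministic name.
var hostnameFunc = os.Hostname

// serverID derives an identity from the (possibly overridden) hostname.
func serverID() (string, error) {
	host, err := hostnameFunc()
	if err != nil {
		return "", err
	}
	return "apiserver-" + host, nil
}

// setHostnameFuncForTests mirrors the vendored SetHostnameFuncForTests:
// it swaps the seam for a constant, never-failing lookup.
func setHostnameFuncForTests(name string) {
	hostnameFunc = func() (string, error) { return name, nil }
}

func main() {
	setHostnameFuncForTests("node-a")
	id, _ := serverID()
	fmt.Println(id) // apiserver-node-a
}
```

Because the override is a plain variable assignment, it is test-only by convention; the vendored code says as much in its doc comment.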
var initAPFOnce sync.Once -// WithPriorityAndFairness limits the number of in-flight -// requests in a fine-grained way. -func WithPriorityAndFairness( - handler http.Handler, - longRunningRequestCheck apirequest.LongRunningRequestCheck, - fcIfc utilflowcontrol.Interface, - workEstimator flowcontrolrequest.WorkEstimatorFunc, -) http.Handler { - if fcIfc == nil { - klog.Warningf("priority and fairness support not found, skipping") - return handler +type priorityAndFairnessHandler struct { + handler http.Handler + longRunningRequestCheck apirequest.LongRunningRequestCheck + fcIfc utilflowcontrol.Interface + workEstimator flowcontrolrequest.WorkEstimatorFunc + + // droppedRequests tracks the history of dropped requests for + // the purpose of computing RetryAfter header to avoid system + // overload. + droppedRequests utilflowcontrol.DroppedRequestsTracker +} + +func (h *priorityAndFairnessHandler) Handle(w http.ResponseWriter, r *http.Request) { + ctx := r.Context() + requestInfo, ok := apirequest.RequestInfoFrom(ctx) + if !ok { + handleError(w, r, fmt.Errorf("no RequestInfo found in context")) + return + } + user, ok := apirequest.UserFrom(ctx) + if !ok { + handleError(w, r, fmt.Errorf("no User found in context")) + return } - initAPFOnce.Do(func() { - initMaxInFlight(0, 0) - // Fetching these gauges is delayed until after their underlying metric has been registered - // so that this latches onto the efficient implementation. 
- waitingMark.readOnlyObserver = fcmetrics.GetWaitingReadonlyConcurrency() - waitingMark.mutatingObserver = fcmetrics.GetWaitingMutatingConcurrency() - }) - return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { - ctx := r.Context() - requestInfo, ok := apirequest.RequestInfoFrom(ctx) - if !ok { - handleError(w, r, fmt.Errorf("no RequestInfo found in context")) - return - } - user, ok := apirequest.UserFrom(ctx) - if !ok { - handleError(w, r, fmt.Errorf("no User found in context")) - return - } - isWatchRequest := watchVerbs.Has(requestInfo.Verb) + isWatchRequest := watchVerbs.Has(requestInfo.Verb) - // Skip tracking long running non-watch requests. - if longRunningRequestCheck != nil && longRunningRequestCheck(r, requestInfo) && !isWatchRequest { - klog.V(6).Infof("Serving RequestInfo=%#+v, user.Info=%#+v as longrunning\n", requestInfo, user) - handler.ServeHTTP(w, r) - return - } + // Skip tracking long running non-watch requests. + if h.longRunningRequestCheck != nil && h.longRunningRequestCheck(r, requestInfo) && !isWatchRequest { + klog.V(6).Infof("Serving RequestInfo=%#+v, user.Info=%#+v as longrunning\n", requestInfo, user) + h.handler.ServeHTTP(w, r) + return + } - var classification *PriorityAndFairnessClassification - noteFn := func(fs *flowcontrol.FlowSchema, pl *flowcontrol.PriorityLevelConfiguration, flowDistinguisher string) { - classification = &PriorityAndFairnessClassification{ - FlowSchemaName: fs.Name, - FlowSchemaUID: fs.UID, - PriorityLevelName: pl.Name, - PriorityLevelUID: pl.UID} + var classification *PriorityAndFairnessClassification + noteFn := func(fs *flowcontrol.FlowSchema, pl *flowcontrol.PriorityLevelConfiguration, flowDistinguisher string) { + classification = &PriorityAndFairnessClassification{ + FlowSchemaName: fs.Name, + FlowSchemaUID: fs.UID, + PriorityLevelName: pl.Name, + PriorityLevelUID: pl.UID, + } - httplog.AddKeyValue(ctx, "apf_pl", truncateLogField(pl.Name)) - httplog.AddKeyValue(ctx, "apf_fs", 
truncateLogField(fs.Name)) + httplog.AddKeyValue(ctx, "apf_pl", truncateLogField(pl.Name)) + httplog.AddKeyValue(ctx, "apf_fs", truncateLogField(fs.Name)) + } + // estimateWork is called, if at all, after noteFn + estimateWork := func() flowcontrolrequest.WorkEstimate { + if classification == nil { + // workEstimator is being invoked before classification of + // the request has completed, we should never be here though. + klog.ErrorS(fmt.Errorf("workEstimator is being invoked before classification of the request has completed"), + "Using empty FlowSchema and PriorityLevelConfiguration name", "verb", r.Method, "URI", r.RequestURI) + return h.workEstimator(r, "", "") } - // estimateWork is called, if at all, after noteFn - estimateWork := func() flowcontrolrequest.WorkEstimate { - if classification == nil { - // workEstimator is being invoked before classification of - // the request has completed, we should never be here though. - klog.ErrorS(fmt.Errorf("workEstimator is being invoked before classification of the request has completed"), - "Using empty FlowSchema and PriorityLevelConfiguration name", "verb", r.Method, "URI", r.RequestURI) - - return workEstimator(r, "", "") - } - workEstimate := workEstimator(r, classification.FlowSchemaName, classification.PriorityLevelName) + workEstimate := h.workEstimator(r, classification.FlowSchemaName, classification.PriorityLevelName) - fcmetrics.ObserveWorkEstimatedSeats(classification.PriorityLevelName, classification.FlowSchemaName, workEstimate.MaxSeats()) - httplog.AddKeyValue(ctx, "apf_iseats", workEstimate.InitialSeats) - httplog.AddKeyValue(ctx, "apf_fseats", workEstimate.FinalSeats) - httplog.AddKeyValue(ctx, "apf_additionalLatency", workEstimate.AdditionalLatency) + fcmetrics.ObserveWorkEstimatedSeats(classification.PriorityLevelName, classification.FlowSchemaName, workEstimate.MaxSeats()) + httplog.AddKeyValue(ctx, "apf_iseats", workEstimate.InitialSeats) + httplog.AddKeyValue(ctx, "apf_fseats", 
workEstimate.FinalSeats) + httplog.AddKeyValue(ctx, "apf_additionalLatency", workEstimate.AdditionalLatency) - return workEstimate - } + return workEstimate + } - var served bool - isMutatingRequest := !nonMutatingRequestVerbs.Has(requestInfo.Verb) - noteExecutingDelta := func(delta int32) { - if isMutatingRequest { - watermark.recordMutating(int(atomic.AddInt32(&atomicMutatingExecuting, delta))) - } else { - watermark.recordReadOnly(int(atomic.AddInt32(&atomicReadOnlyExecuting, delta))) - } - } - noteWaitingDelta := func(delta int32) { - if isMutatingRequest { - waitingMark.recordMutating(int(atomic.AddInt32(&atomicMutatingWaiting, delta))) - } else { - waitingMark.recordReadOnly(int(atomic.AddInt32(&atomicReadOnlyWaiting, delta))) - } + var served bool + isMutatingRequest := !nonMutatingRequestVerbs.Has(requestInfo.Verb) + noteExecutingDelta := func(delta int32) { + if isMutatingRequest { + watermark.recordMutating(int(atomic.AddInt32(&atomicMutatingExecuting, delta))) + } else { + watermark.recordReadOnly(int(atomic.AddInt32(&atomicReadOnlyExecuting, delta))) } - queueNote := func(inQueue bool) { - if inQueue { - noteWaitingDelta(1) - } else { - noteWaitingDelta(-1) - } + } + noteWaitingDelta := func(delta int32) { + if isMutatingRequest { + waitingMark.recordMutating(int(atomic.AddInt32(&atomicMutatingWaiting, delta))) + } else { + waitingMark.recordReadOnly(int(atomic.AddInt32(&atomicReadOnlyWaiting, delta))) } - - digest := utilflowcontrol.RequestDigest{ - RequestInfo: requestInfo, - User: user, + } + queueNote := func(inQueue bool) { + if inQueue { + noteWaitingDelta(1) + } else { + noteWaitingDelta(-1) } + } - if isWatchRequest { - // This channel blocks calling handler.ServeHTTP() until closed, and is closed inside execute(). - // If APF rejects the request, it is never closed. 
- shouldStartWatchCh := make(chan struct{}) + digest := utilflowcontrol.RequestDigest{ + RequestInfo: requestInfo, + User: user, + } - watchInitializationSignal := newInitializationSignal() - // This wraps the request passed to handler.ServeHTTP(), - // setting a context that plumbs watchInitializationSignal to storage - var watchReq *http.Request - // This is set inside execute(), prior to closing shouldStartWatchCh. - // If the request is rejected by APF it is left nil. - var forgetWatch utilflowcontrol.ForgetWatchFunc + if isWatchRequest { + // This channel blocks calling handler.ServeHTTP() until closed, and is closed inside execute(). + // If APF rejects the request, it is never closed. + shouldStartWatchCh := make(chan struct{}) + + watchInitializationSignal := newInitializationSignal() + // This wraps the request passed to handler.ServeHTTP(), + // setting a context that plumbs watchInitializationSignal to storage + var watchReq *http.Request + // This is set inside execute(), prior to closing shouldStartWatchCh. + // If the request is rejected by APF it is left nil. + var forgetWatch utilflowcontrol.ForgetWatchFunc + + defer func() { + // Protect from the situation when request will not reach storage layer + // and the initialization signal will not be send. + if watchInitializationSignal != nil { + watchInitializationSignal.Signal() + } + // Forget the watcher if it was registered. + // + // This is race-free because by this point, one of the following occurred: + // case <-shouldStartWatchCh: execute() completed the assignment to forgetWatch + // case <-resultCh: Handle() completed, and Handle() does not return + // while execute() is running + if forgetWatch != nil { + forgetWatch() + } + }() + execute := func() { + startedAt := time.Now() defer func() { - // Protect from the situation when request will not reach storage layer - // and the initialization signal will not be send. 
- if watchInitializationSignal != nil { - watchInitializationSignal.Signal() - } - // Forget the watcher if it was registered. - // - // // This is race-free because by this point, one of the following occurred: - // case <-shouldStartWatchCh: execute() completed the assignment to forgetWatch - // case <-resultCh: Handle() completed, and Handle() does not return - // while execute() is running - if forgetWatch != nil { - forgetWatch() - } + httplog.AddKeyValue(ctx, "apf_init_latency", time.Since(startedAt)) }() + noteExecutingDelta(1) + defer noteExecutingDelta(-1) + served = true + setResponseHeaders(classification, w) - execute := func() { - startedAt := time.Now() - defer func() { - httplog.AddKeyValue(ctx, "apf_init_latency", time.Since(startedAt)) - }() - noteExecutingDelta(1) - defer noteExecutingDelta(-1) - served = true - setResponseHeaders(classification, w) + forgetWatch = h.fcIfc.RegisterWatch(r) - forgetWatch = fcIfc.RegisterWatch(r) + // Notify the main thread that we're ready to start the watch. + close(shouldStartWatchCh) - // Notify the main thread that we're ready to start the watch. - close(shouldStartWatchCh) + // Wait until the request is finished from the APF point of view + // (which is when its initialization is done). + watchInitializationSignal.Wait() + } - // Wait until the request is finished from the APF point of view - // (which is when its initialization is done). - watchInitializationSignal.Wait() - } + // Ensure that an item can be put to resultCh asynchronously. + resultCh := make(chan interface{}, 1) + + // Call Handle in a separate goroutine. + // The reason for it is that from APF point of view, the request processing + // finishes as soon as watch is initialized (which is generally orders of + // magnitude faster then the watch request itself). 
This means that Handle() + // call finishes much faster and for performance reasons we want to reduce + // the number of running goroutines - so we run the shorter thing in a + // dedicated goroutine and the actual watch handler in the main one. + go func() { + defer func() { + err := recover() + // do not wrap the sentinel ErrAbortHandler panic value + if err != nil && err != http.ErrAbortHandler { + // Same as stdlib http server code. Manually allocate stack + // trace buffer size to prevent excessively large logs + const size = 64 << 10 + buf := make([]byte, size) + buf = buf[:runtime.Stack(buf, false)] + err = fmt.Sprintf("%v\n%s", err, buf) + } - // Ensure that an item can be put to resultCh asynchronously. - resultCh := make(chan interface{}, 1) - - // Call Handle in a separate goroutine. - // The reason for it is that from APF point of view, the request processing - // finishes as soon as watch is initialized (which is generally orders of - // magnitude faster then the watch request itself). This means that Handle() - // call finishes much faster and for performance reasons we want to reduce - // the number of running goroutines - so we run the shorter thing in a - // dedicated goroutine and the actual watch handler in the main one. - go func() { - defer func() { - err := recover() - // do not wrap the sentinel ErrAbortHandler panic value - if err != nil && err != http.ErrAbortHandler { - // Same as stdlib http server code. Manually allocate stack - // trace buffer size to prevent excessively large logs - const size = 64 << 10 - buf := make([]byte, size) - buf = buf[:runtime.Stack(buf, false)] - err = fmt.Sprintf("%v\n%s", err, buf) - } - - // Ensure that the result is put into resultCh independently of the panic. - resultCh <- err - }() - - // We create handleCtx with explicit cancelation function. - // The reason for it is that Handle() underneath may start additional goroutine - // that is blocked on context cancellation. 
However, from APF point of view, - // we don't want to wait until the whole watch request is processed (which is - // when it context is actually cancelled) - we want to unblock the goroutine as - // soon as the request is processed from the APF point of view. - // - // Note that we explicitly do NOT call the actuall handler using that context - // to avoid cancelling request too early. - handleCtx, handleCtxCancel := context.WithCancel(ctx) - defer handleCtxCancel() - - // Note that Handle will return irrespective of whether the request - // executes or is rejected. In the latter case, the function will return - // without calling the passed `execute` function. - fcIfc.Handle(handleCtx, digest, noteFn, estimateWork, queueNote, execute) + // Ensure that the result is put into resultCh independently of the panic. + resultCh <- err }() - select { - case <-shouldStartWatchCh: - watchCtx := utilflowcontrol.WithInitializationSignal(ctx, watchInitializationSignal) - watchReq = r.WithContext(watchCtx) - handler.ServeHTTP(w, watchReq) - // Protect from the situation when request will not reach storage layer - // and the initialization signal will not be send. - // It has to happen before waiting on the resultCh below. - watchInitializationSignal.Signal() - // TODO: Consider finishing the request as soon as Handle call panics. - if err := <-resultCh; err != nil { - panic(err) - } - case err := <-resultCh: - if err != nil { - panic(err) - } + // We create handleCtx with explicit cancelation function. + // The reason for it is that Handle() underneath may start additional goroutine + // that is blocked on context cancellation. However, from APF point of view, + // we don't want to wait until the whole watch request is processed (which is + // when it context is actually cancelled) - we want to unblock the goroutine as + // soon as the request is processed from the APF point of view. 
+ // + // Note that we explicitly do NOT call the actuall handler using that context + // to avoid cancelling request too early. + handleCtx, handleCtxCancel := context.WithCancel(ctx) + defer handleCtxCancel() + + // Note that Handle will return irrespective of whether the request + // executes or is rejected. In the latter case, the function will return + // without calling the passed `execute` function. + h.fcIfc.Handle(handleCtx, digest, noteFn, estimateWork, queueNote, execute) + }() + + select { + case <-shouldStartWatchCh: + watchCtx := utilflowcontrol.WithInitializationSignal(ctx, watchInitializationSignal) + watchReq = r.WithContext(watchCtx) + h.handler.ServeHTTP(w, watchReq) + // Protect from the situation when request will not reach storage layer + // and the initialization signal will not be send. + // It has to happen before waiting on the resultCh below. + watchInitializationSignal.Signal() + // TODO: Consider finishing the request as soon as Handle call panics. + if err := <-resultCh; err != nil { + panic(err) } - } else { - execute := func() { - noteExecutingDelta(1) - defer noteExecutingDelta(-1) - served = true - setResponseHeaders(classification, w) - - handler.ServeHTTP(w, r) + case err := <-resultCh: + if err != nil { + panic(err) } - - fcIfc.Handle(ctx, digest, noteFn, estimateWork, queueNote, execute) } - - if !served { + } else { + execute := func() { + noteExecutingDelta(1) + defer noteExecutingDelta(-1) + served = true setResponseHeaders(classification, w) - epmetrics.RecordDroppedRequest(r, requestInfo, epmetrics.APIServerComponent, isMutatingRequest) - epmetrics.RecordRequestTermination(r, requestInfo, epmetrics.APIServerComponent, http.StatusTooManyRequests) - tooManyRequests(r, w) + h.handler.ServeHTTP(w, r) } + + h.fcIfc.Handle(ctx, digest, noteFn, estimateWork, queueNote, execute) + } + + if !served { + setResponseHeaders(classification, w) + + epmetrics.RecordDroppedRequest(r, requestInfo, epmetrics.APIServerComponent, 
isMutatingRequest) + epmetrics.RecordRequestTermination(r, requestInfo, epmetrics.APIServerComponent, http.StatusTooManyRequests) + h.droppedRequests.RecordDroppedRequest(classification.PriorityLevelName) + + // TODO(wojtek-t): Idea from deads2k: we can consider some jittering and in case of non-int + // number, just return the truncated result and sleep the remainder server-side. + tooManyRequests(r, w, strconv.Itoa(int(h.droppedRequests.GetRetryAfter(classification.PriorityLevelName)))) + } +} + +// WithPriorityAndFairness limits the number of in-flight +// requests in a fine-grained way. +func WithPriorityAndFairness( + handler http.Handler, + longRunningRequestCheck apirequest.LongRunningRequestCheck, + fcIfc utilflowcontrol.Interface, + workEstimator flowcontrolrequest.WorkEstimatorFunc, +) http.Handler { + if fcIfc == nil { + klog.Warningf("priority and fairness support not found, skipping") + return handler + } + initAPFOnce.Do(func() { + initMaxInFlight(0, 0) + // Fetching these gauges is delayed until after their underlying metric has been registered + // so that this latches onto the efficient implementation. 
+ waitingMark.readOnlyObserver = fcmetrics.GetWaitingReadonlyConcurrency() + waitingMark.mutatingObserver = fcmetrics.GetWaitingMutatingConcurrency() }) + + priorityAndFairnessHandler := &priorityAndFairnessHandler{ + handler: handler, + longRunningRequestCheck: longRunningRequestCheck, + fcIfc: fcIfc, + workEstimator: workEstimator, + droppedRequests: utilflowcontrol.NewDroppedRequestsTracker(), + } + return http.HandlerFunc(priorityAndFairnessHandler.Handle) } // StartPriorityAndFairnessWatermarkMaintenance starts the goroutines to observe and maintain watermarks for @@ -323,3 +350,9 @@ func setResponseHeaders(classification *PriorityAndFairnessClassification, w htt w.Header().Set(flowcontrol.ResponseHeaderMatchedPriorityLevelConfigurationUID, string(classification.PriorityLevelUID)) w.Header().Set(flowcontrol.ResponseHeaderMatchedFlowSchemaUID, string(classification.FlowSchemaUID)) } + +func tooManyRequests(req *http.Request, w http.ResponseWriter, retryAfter string) { + // Return a 429 status indicating "Too Many Requests" + w.Header().Set("Retry-After", retryAfter) + http.Error(w, "Too many requests, please try again later.", http.StatusTooManyRequests) +} diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/server/genericapiserver.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/server/genericapiserver.go index 52c865f8a98e..665f20bebdb0 100644 --- a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/server/genericapiserver.go +++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/server/genericapiserver.go @@ -18,6 +18,7 @@ package server import ( "context" + "errors" "fmt" "net/http" gpath "path" @@ -736,16 +737,7 @@ func (s preparedGenericAPIServer) NonBlockingRun(stopCh <-chan struct{}, shutdow } // installAPIResources is a private method for installing the REST storage backing each api groupversionresource -func (s *GenericAPIServer) installAPIResources(apiPrefix string, apiGroupInfo *APIGroupInfo, openAPIModels map[string]*spec.Schema) error { - var 
typeConverter managedfields.TypeConverter - - if len(openAPIModels) > 0 { - var err error - typeConverter, err = managedfields.NewTypeConverter(openAPIModels, false) - if err != nil { - return err - } - } +func (s *GenericAPIServer) installAPIResources(apiPrefix string, apiGroupInfo *APIGroupInfo, typeConverter managedfields.TypeConverter) error { var resourceInfos []*storageversion.ResourceInfo for _, groupVersion := range apiGroupInfo.PrioritizedVersions { if len(apiGroupInfo.VersionedResourcesStorageMap[groupVersion.Version]) == 0 { @@ -844,6 +836,9 @@ func (s *GenericAPIServer) InstallLegacyAPIGroup(apiPrefix string, apiGroupInfo // underlying storage will be destroyed on this servers shutdown. func (s *GenericAPIServer) InstallAPIGroups(apiGroupInfos ...*APIGroupInfo) error { for _, apiGroupInfo := range apiGroupInfos { + if len(apiGroupInfo.PrioritizedVersions) == 0 { + return fmt.Errorf("no version priority set for %#v", *apiGroupInfo) + } // Do not register empty group or empty version. Doing so claims /apis/ for the wrong entity to be returned. 
// Catching these here places the error much closer to its origin if len(apiGroupInfo.PrioritizedVersions[0].Group) == 0 { @@ -916,9 +911,22 @@ func (s *GenericAPIServer) getAPIGroupVersion(apiGroupInfo *APIGroupInfo, groupV } func (s *GenericAPIServer) newAPIGroupVersion(apiGroupInfo *APIGroupInfo, groupVersion schema.GroupVersion) *genericapi.APIGroupVersion { + + allServedVersionsByResource := map[string][]string{} + for version, resourcesInVersion := range apiGroupInfo.VersionedResourcesStorageMap { + for resource := range resourcesInVersion { + if len(groupVersion.Group) == 0 { + allServedVersionsByResource[resource] = append(allServedVersionsByResource[resource], version) + } else { + allServedVersionsByResource[resource] = append(allServedVersionsByResource[resource], fmt.Sprintf("%s/%s", groupVersion.Group, version)) + } + } + } + return &genericapi.APIGroupVersion{ - GroupVersion: groupVersion, - MetaGroupVersion: apiGroupInfo.MetaGroupVersion, + GroupVersion: groupVersion, + AllServedVersionsByResource: allServedVersionsByResource, + MetaGroupVersion: apiGroupInfo.MetaGroupVersion, ParameterCodec: apiGroupInfo.ParameterCodec, Serializer: apiGroupInfo.NegotiatedSerializer, @@ -953,13 +961,13 @@ func NewDefaultAPIGroupInfo(group string, scheme *runtime.Scheme, parameterCodec } // getOpenAPIModels is a private method for getting the OpenAPI models -func (s *GenericAPIServer) getOpenAPIModels(apiPrefix string, apiGroupInfos ...*APIGroupInfo) (map[string]*spec.Schema, error) { +func (s *GenericAPIServer) getOpenAPIModels(apiPrefix string, apiGroupInfos ...*APIGroupInfo) (managedfields.TypeConverter, error) { if s.openAPIV3Config == nil { - //!TODO: A future work should add a requirement that - // OpenAPIV3 config is required. May require some refactoring of tests. - return nil, nil + // SSA is GA and requires OpenAPI config to be set + // to create models. 
+ return nil, errors.New("OpenAPIV3 config must not be nil") } - pathsToIgnore := openapiutil.NewTrie(s.openAPIConfig.IgnorePrefixes) + pathsToIgnore := openapiutil.NewTrie(s.openAPIV3Config.IgnorePrefixes) resourceNames := make([]string, 0) for _, apiGroupInfo := range apiGroupInfos { groupResources, err := getResourceNamesForGroup(apiPrefix, apiGroupInfo, pathsToIgnore) @@ -977,7 +985,13 @@ func (s *GenericAPIServer) getOpenAPIModels(apiPrefix string, apiGroupInfos ...* for _, apiGroupInfo := range apiGroupInfos { apiGroupInfo.StaticOpenAPISpec = openAPISpec } - return openAPISpec, nil + + typeConverter, err := managedfields.NewTypeConverter(openAPISpec, false) + if err != nil { + return nil, err + } + + return typeConverter, nil } // getResourceNamesForGroup is a private method for getting the canonical names for each resource to build in an api group diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/server/handler.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/server/handler.go index 9f37df1cdff3..847a624e36b9 100644 --- a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/server/handler.go +++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/server/handler.go @@ -53,13 +53,13 @@ type APIServerHandler struct { // Director is here so that we can properly handle fall through and proxy cases. // This looks a bit bonkers, but here's what's happening. We need to have /apis handling registered in gorestful in order to have // swagger generated for compatibility. Doing that with `/apis` as a webservice, means that it forcibly 404s (no defaulting allowed) - // all requests which are not /apis or /apis/. We need those calls to fall through behind goresful for proper delegation. Trying to + // all requests which are not /apis or /apis/. We need those calls to fall through behind gorestful for proper delegation. 
Trying to // register for a pattern which includes everything behind it doesn't work because gorestful negotiates for verbs and content encoding // and all those things go crazy when gorestful really just needs to pass through. In addition, openapi enforces unique verb constraints // which we don't fit into and it still muddies up swagger. Trying to switch the webservices into a route doesn't work because the // containing webservice faces all the same problems listed above. // This leads to the crazy thing done here. Our mux does what we need, so we'll place it in front of gorestful. It will introspect to - // decide if the route is likely to be handled by goresful and route there if needed. Otherwise, it goes to NonGoRestfulMux mux in + // decide if the route is likely to be handled by gorestful and route there if needed. Otherwise, it goes to NonGoRestfulMux mux in // order to handle "normal" paths and delegation. Hopefully no API consumers will ever have to deal with this level of detail. I think // we should consider completely removing gorestful. // Other servers should only use this opaquely to delegate to an API server. 
diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/server/options/OWNERS b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/server/options/OWNERS index 4105e64af83d..841cd8b54b67 100644 --- a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/server/options/OWNERS +++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/server/options/OWNERS @@ -1,5 +1,7 @@ # See the OWNERS docs at https://go.k8s.io/owners +approvers: + - enj reviewers: - smarterclayton - wojtek-t diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/server/options/admission.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/server/options/admission.go index 5ee0036de1fc..6f4990a7e2c8 100644 --- a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/server/options/admission.go +++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/server/options/admission.go @@ -39,7 +39,6 @@ import ( "k8s.io/client-go/dynamic" "k8s.io/client-go/informers" "k8s.io/client-go/kubernetes" - "k8s.io/client-go/rest" "k8s.io/component-base/featuregate" ) @@ -123,7 +122,8 @@ func (a *AdmissionOptions) AddFlags(fs *pflag.FlagSet) { func (a *AdmissionOptions) ApplyTo( c *server.Config, informers informers.SharedInformerFactory, - kubeAPIServerClientConfig *rest.Config, + kubeClient kubernetes.Interface, + dynamicClient dynamic.Interface, features featuregate.FeatureGate, pluginInitializers ...admission.PluginInitializer, ) error { @@ -143,15 +143,8 @@ func (a *AdmissionOptions) ApplyTo( return fmt.Errorf("failed to read plugin config: %v", err) } - clientset, err := kubernetes.NewForConfig(kubeAPIServerClientConfig) - if err != nil { - return err - } - dynamicClient, err := dynamic.NewForConfig(kubeAPIServerClientConfig) - if err != nil { - return err - } - genericInitializer := initializer.New(clientset, dynamicClient, informers, c.Authorization.Authorizer, features, c.DrainedNotify()) + genericInitializer := initializer.New(kubeClient, dynamicClient, informers, c.Authorization.Authorizer, features, + c.DrainedNotify()) 
initializersChain := admission.PluginInitializers{genericInitializer} initializersChain = append(initializersChain, pluginInitializers...) diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/server/options/audit.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/server/options/audit.go index f3c9adba04fc..af5b06a5bd75 100644 --- a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/server/options/audit.go +++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/server/options/audit.go @@ -142,16 +142,6 @@ type AuditWebhookOptions struct { GroupVersionString string } -// AuditDynamicOptions control the configuration of dynamic backends for audit events -type AuditDynamicOptions struct { - // Enabled tells whether the dynamic audit capability is enabled. - Enabled bool - - // Configuration for batching backend. This is currently only used as an override - // for integration tests - BatchConfig *pluginbuffered.BatchConfig -} - func NewAuditOptions() *AuditOptions { return &AuditOptions{ WebhookOptions: AuditWebhookOptions{ diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/server/options/deprecated_insecure_serving.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/server/options/deprecated_insecure_serving.go index 173c28b80d03..dd05bfecdb5a 100644 --- a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/server/options/deprecated_insecure_serving.go +++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/server/options/deprecated_insecure_serving.go @@ -67,7 +67,7 @@ func (s *DeprecatedInsecureServingOptions) AddFlags(fs *pflag.FlagSet) { } fs.IPVar(&s.BindAddress, "insecure-bind-address", s.BindAddress, ""+ - "The IP address on which to serve the --insecure-port (set to 0.0.0.0 or :: for listening in all interfaces and IP families).") + "The IP address on which to serve the --insecure-port (set to 0.0.0.0 or :: for listening on all interfaces and IP address families).") // Though this flag is deprecated, we discovered security concerns over how to do health 
checks without it e.g. #43784 fs.MarkDeprecated("insecure-bind-address", "This flag will be removed in a future version.") fs.Lookup("insecure-bind-address").Hidden = false @@ -86,7 +86,7 @@ func (s *DeprecatedInsecureServingOptions) AddUnqualifiedFlags(fs *pflag.FlagSet } fs.IPVar(&s.BindAddress, "address", s.BindAddress, - "The IP address on which to serve the insecure --port (set to '0.0.0.0' or '::' for listening in all interfaces and IP families).") + "The IP address on which to serve the insecure --port (set to '0.0.0.0' or '::' for listening on all interfaces and IP address families).") fs.MarkDeprecated("address", "see --bind-address instead.") fs.Lookup("address").Hidden = false diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/server/options/encryptionconfig/config.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/server/options/encryptionconfig/config.go index 796cc6b03dc1..13819bf90c15 100644 --- a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/server/options/encryptionconfig/config.go +++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/server/options/encryptionconfig/config.go @@ -43,10 +43,11 @@ import ( "k8s.io/apiserver/pkg/apis/config/validation" "k8s.io/apiserver/pkg/features" "k8s.io/apiserver/pkg/server/healthz" - "k8s.io/apiserver/pkg/storage/value" + storagevalue "k8s.io/apiserver/pkg/storage/value" aestransformer "k8s.io/apiserver/pkg/storage/value/encrypt/aes" "k8s.io/apiserver/pkg/storage/value/encrypt/envelope" envelopekmsv2 "k8s.io/apiserver/pkg/storage/value/encrypt/envelope/kmsv2" + kmstypes "k8s.io/apiserver/pkg/storage/value/encrypt/envelope/kmsv2/v2" "k8s.io/apiserver/pkg/storage/value/encrypt/envelope/metrics" "k8s.io/apiserver/pkg/storage/value/encrypt/identity" "k8s.io/apiserver/pkg/storage/value/encrypt/secretbox" @@ -63,13 +64,13 @@ const ( kmsTransformerPrefixV2 = "k8s:enc:kms:v2:" // these constants relate to how the KMS v2 plugin status poll logic - // and the DEK generation logic behave. 
In particular, the positive + // and the DEK/seed generation logic behave. In particular, the positive // interval and max TTL are closely related as the difference between - // these values defines the worst case window in which the write DEK + // these values defines the worst case window in which the write DEK/seed // could expire due to the plugin going into an error state. The // worst case window divided by the negative interval defines the // minimum amount of times the server will attempt to return to a - // healthy state before the DEK expires and writes begin to fail. + // healthy state before the DEK/seed expires and writes begin to fail. // // For now, these values are kept small and hardcoded to support being // able to perform a "passive" storage migration while tolerating some @@ -82,13 +83,13 @@ const ( // At that point, they are guaranteed to either migrate to the new key // or get errors during the migration. // - // If the API server coasted forever on the last DEK, they would need + // If the API server coasted forever on the last DEK/seed, they would need // to actively check if it had observed the new key ID before starting - // a migration - otherwise it could keep using the old DEK and their + // a migration - otherwise it could keep using the old DEK/seed and their // storage migration would not do what they thought it did. kmsv2PluginHealthzPositiveInterval = 1 * time.Minute kmsv2PluginHealthzNegativeInterval = 10 * time.Second - kmsv2PluginWriteDEKMaxTTL = 3 * time.Minute + kmsv2PluginWriteDEKSourceMaxTTL = 3 * time.Minute kmsPluginHealthzNegativeTTL = 3 * time.Second kmsPluginHealthzPositiveTTL = 20 * time.Second @@ -159,7 +160,7 @@ func (h *kmsv2PluginProbe) toHealthzCheck(idx int) healthz.HealthChecker { // EncryptionConfiguration represents the parsed and normalized encryption configuration for the apiserver. 
type EncryptionConfiguration struct { // Transformers is a list of value.Transformer that will be used to encrypt and decrypt data. - Transformers map[schema.GroupResource]value.Transformer + Transformers map[schema.GroupResource]storagevalue.Transformer // HealthChecks is a list of healthz.HealthChecker that will be used to check the health of the encryption providers. HealthChecks []healthz.HealthChecker @@ -207,7 +208,7 @@ func LoadEncryptionConfig(ctx context.Context, filepath string, reload bool) (*E // getTransformerOverridesAndKMSPluginHealthzCheckers creates the set of transformers and KMS healthz checks based on the given config. // It may launch multiple go routines whose lifecycle is controlled by ctx. // In case of an error, the caller is responsible for canceling ctx to clean up any go routines that may have been launched. -func getTransformerOverridesAndKMSPluginHealthzCheckers(ctx context.Context, config *apiserverconfig.EncryptionConfiguration) (map[schema.GroupResource]value.Transformer, []healthz.HealthChecker, *kmsState, error) { +func getTransformerOverridesAndKMSPluginHealthzCheckers(ctx context.Context, config *apiserverconfig.EncryptionConfiguration) (map[schema.GroupResource]storagevalue.Transformer, []healthz.HealthChecker, *kmsState, error) { var kmsHealthChecks []healthz.HealthChecker transformers, probes, kmsUsed, err := getTransformerOverridesAndKMSPluginProbes(ctx, config) if err != nil { @@ -228,8 +229,8 @@ type healthChecker interface { // getTransformerOverridesAndKMSPluginProbes creates the set of transformers and KMS probes based on the given config. // It may launch multiple go routines whose lifecycle is controlled by ctx. // In case of an error, the caller is responsible for canceling ctx to clean up any go routines that may have been launched. 
-func getTransformerOverridesAndKMSPluginProbes(ctx context.Context, config *apiserverconfig.EncryptionConfiguration) (map[schema.GroupResource]value.Transformer, []healthChecker, *kmsState, error) { - resourceToPrefixTransformer := map[schema.GroupResource][]value.PrefixTransformer{} +func getTransformerOverridesAndKMSPluginProbes(ctx context.Context, config *apiserverconfig.EncryptionConfiguration) (map[schema.GroupResource]storagevalue.Transformer, []healthChecker, *kmsState, error) { + resourceToPrefixTransformer := map[schema.GroupResource][]storagevalue.PrefixTransformer{} var probes []healthChecker var kmsUsed kmsState @@ -268,11 +269,11 @@ func getTransformerOverridesAndKMSPluginProbes(ctx context.Context, config *apis probes = append(probes, p...) } - transformers := make(map[schema.GroupResource]value.Transformer, len(resourceToPrefixTransformer)) + transformers := make(map[schema.GroupResource]storagevalue.Transformer, len(resourceToPrefixTransformer)) for gr, transList := range resourceToPrefixTransformer { gr := gr transList := transList - transformers[gr] = value.NewPrefixTransformers(fmt.Errorf("no matching prefix found"), transList...) + transformers[gr] = storagevalue.NewPrefixTransformers(fmt.Errorf("no matching prefix found"), transList...) } return transformers, probes, &kmsUsed, nil @@ -332,8 +333,8 @@ func (h *kmsv2PluginProbe) check(ctx context.Context) error { return nil } -// rotateDEKOnKeyIDChange tries to rotate to a new DEK if the key ID returned by Status does not match the -// current state. If a successful rotation is performed, the new DEK and keyID overwrite the existing state. +// rotateDEKOnKeyIDChange tries to rotate to a new DEK/seed if the key ID returned by Status does not match the +// current state. If a successful rotation is performed, the new DEK/seed and keyID overwrite the existing state. 
// On any failure during rotation (including mismatch between status and encrypt calls), the current state is // preserved and will remain valid to use for encryption until its expiration (the system attempts to coast). // If the key ID returned by Status matches the current state, the expiration of the current state is extended @@ -346,47 +347,62 @@ func (h *kmsv2PluginProbe) rotateDEKOnKeyIDChange(ctx context.Context, statusKey // allow reads indefinitely in all cases // allow writes indefinitely as long as there is no error - // allow writes for only up to kmsv2PluginWriteDEKMaxTTL from now when there are errors - // we start the timer before we make the network call because kmsv2PluginWriteDEKMaxTTL is meant to be the upper bound - expirationTimestamp := envelopekmsv2.NowFunc().Add(kmsv2PluginWriteDEKMaxTTL) - - // state is valid and status keyID is unchanged from when we generated this DEK so there is no need to rotate it + // allow writes for only up to kmsv2PluginWriteDEKSourceMaxTTL from now when there are errors + // we start the timer before we make the network call because kmsv2PluginWriteDEKSourceMaxTTL is meant to be the upper bound + expirationTimestamp := envelopekmsv2.NowFunc().Add(kmsv2PluginWriteDEKSourceMaxTTL) + + // dynamically check if we want to use KDF seed to derive DEKs or just a single DEK + // this gate can only change during tests, but the check is cheap enough to always make + // this allows us to easily exercise both modes without restarting the API server + // TODO integration test that this dynamically takes effect + useSeed := utilfeature.DefaultFeatureGate.Enabled(features.KMSv2KDF) + stateUseSeed := state.EncryptedObject.EncryptedDEKSourceType == kmstypes.EncryptedDEKSourceType_HKDF_SHA256_XNONCE_AES_GCM_SEED + + // state is valid and status keyID is unchanged from when we generated this DEK/seed so there is no need to rotate it // just move the expiration of the current state forward by the reuse interval - if errState == nil && 
state.KeyID == statusKeyID { + // useSeed can only change at runtime during tests, so we check it here to allow us to easily exercise both modes + if errState == nil && state.EncryptedObject.KeyID == statusKeyID && stateUseSeed == useSeed { state.ExpirationTimestamp = expirationTimestamp h.state.Store(&state) return nil } - transformer, resp, cacheKey, errGen := envelopekmsv2.GenerateTransformer(ctx, uid, h.service) + transformer, encObject, cacheKey, errGen := envelopekmsv2.GenerateTransformer(ctx, uid, h.service, useSeed) - if resp == nil { - resp = &kmsservice.EncryptResponse{} // avoid nil panics + if encObject == nil { + encObject = &kmstypes.EncryptedObject{} // avoid nil panics } // happy path, should be the common case // TODO maybe add success metrics? - if errGen == nil && resp.KeyID == statusKeyID { + if errGen == nil && encObject.KeyID == statusKeyID { h.state.Store(&envelopekmsv2.State{ Transformer: transformer, - EncryptedDEK: resp.Ciphertext, - KeyID: resp.KeyID, - Annotations: resp.Annotations, + EncryptedObject: *encObject, UID: uid, ExpirationTimestamp: expirationTimestamp, CacheKey: cacheKey, }) - klog.V(6).InfoS("successfully rotated DEK", - "uid", uid, - "newKeyID", resp.KeyID, - "oldKeyID", state.KeyID, - "expirationTimestamp", expirationTimestamp.Format(time.RFC3339), - ) - return nil + + // it should be logically impossible for the new state to be invalid but check just in case + _, errGen = h.getCurrentState() + if errGen == nil { + klogV6 := klog.V(6) + if klogV6.Enabled() { + klogV6.InfoS("successfully rotated DEK", + "uid", uid, + "useSeed", useSeed, + "newKeyIDHash", envelopekmsv2.GetHashIfNotEmpty(encObject.KeyID), + "oldKeyIDHash", envelopekmsv2.GetHashIfNotEmpty(state.EncryptedObject.KeyID), + "expirationTimestamp", expirationTimestamp.Format(time.RFC3339), + ) + } + return nil + } } - return fmt.Errorf("failed to rotate DEK uid=%q, errState=%v, errGen=%v, statusKeyID=%q, encryptKeyID=%q, stateKeyID=%q, expirationTimestamp=%s", - 
uid, errState, errGen, statusKeyID, resp.KeyID, state.KeyID, state.ExpirationTimestamp.Format(time.RFC3339)) + return fmt.Errorf("failed to rotate DEK uid=%q, useSeed=%v, errState=%v, errGen=%v, statusKeyIDHash=%q, encryptKeyIDHash=%q, stateKeyIDHash=%q, expirationTimestamp=%s", + uid, useSeed, errState, errGen, envelopekmsv2.GetHashIfNotEmpty(statusKeyID), envelopekmsv2.GetHashIfNotEmpty(encObject.KeyID), envelopekmsv2.GetHashIfNotEmpty(state.EncryptedObject.KeyID), state.ExpirationTimestamp.Format(time.RFC3339)) } // getCurrentState returns the latest state from the last status and encrypt calls. @@ -399,12 +415,13 @@ func (h *kmsv2PluginProbe) getCurrentState() (envelopekmsv2.State, error) { return envelopekmsv2.State{}, fmt.Errorf("got unexpected nil transformer") } - if len(state.EncryptedDEK) == 0 { - return envelopekmsv2.State{}, fmt.Errorf("got unexpected empty EncryptedDEK") + encryptedObjectCopy := state.EncryptedObject + if len(encryptedObjectCopy.EncryptedData) != 0 { + return envelopekmsv2.State{}, fmt.Errorf("got unexpected non-empty EncryptedData") } - - if len(state.KeyID) == 0 { - return envelopekmsv2.State{}, fmt.Errorf("got unexpected empty keyID") + encryptedObjectCopy.EncryptedData = []byte{0} // any non-empty value to pass validation + if err := envelopekmsv2.ValidateEncryptedObject(&encryptedObjectCopy); err != nil { + return envelopekmsv2.State{}, fmt.Errorf("got invalid EncryptedObject: %w", err) } if state.ExpirationTimestamp.IsZero() { @@ -429,7 +446,7 @@ func (h *kmsv2PluginProbe) isKMSv2ProviderHealthyAndMaybeRotateDEK(ctx context.C if errCode, err := envelopekmsv2.ValidateKeyID(response.KeyID); err != nil { metrics.RecordInvalidKeyIDFromStatus(h.name, string(errCode)) - errs = append(errs, fmt.Errorf("got invalid KMSv2 KeyID %q: %w", response.KeyID, err)) + errs = append(errs, fmt.Errorf("got invalid KMSv2 KeyID hash %q: %w", envelopekmsv2.GetHashIfNotEmpty(response.KeyID), err)) } else { metrics.RecordKeyIDFromStatus(h.name, 
response.KeyID) // unconditionally append as we filter out nil errors below @@ -478,15 +495,15 @@ func loadConfig(filepath string, reload bool) (*apiserverconfig.EncryptionConfig // prefixTransformersAndProbes creates the set of transformers and KMS probes based on the given resource config. // It may launch multiple go routines whose lifecycle is controlled by ctx. // In case of an error, the caller is responsible for canceling ctx to clean up any go routines that may have been launched. -func prefixTransformersAndProbes(ctx context.Context, config apiserverconfig.ResourceConfiguration) ([]value.PrefixTransformer, []healthChecker, *kmsState, error) { - var transformers []value.PrefixTransformer +func prefixTransformersAndProbes(ctx context.Context, config apiserverconfig.ResourceConfiguration) ([]storagevalue.PrefixTransformer, []healthChecker, *kmsState, error) { + var transformers []storagevalue.PrefixTransformer var probes []healthChecker var kmsUsed kmsState for _, provider := range config.Providers { provider := provider var ( - transformer value.PrefixTransformer + transformer storagevalue.PrefixTransformer transformerErr error probe healthChecker used *kmsState @@ -497,7 +514,7 @@ func prefixTransformersAndProbes(ctx context.Context, config apiserverconfig.Res transformer, transformerErr = aesPrefixTransformer(provider.AESGCM, aestransformer.NewGCMTransformer, aesGCMTransformerPrefixV1) case provider.AESCBC != nil: - cbcTransformer := func(block cipher.Block) (value.Transformer, error) { + cbcTransformer := func(block cipher.Block) (storagevalue.Transformer, error) { return aestransformer.NewCBCTransformer(block), nil } transformer, transformerErr = aesPrefixTransformer(provider.AESCBC, cbcTransformer, aesCBCTransformerPrefixV1) @@ -513,7 +530,7 @@ func prefixTransformersAndProbes(ctx context.Context, config apiserverconfig.Res } case provider.Identity != nil: - transformer = value.PrefixTransformer{ + transformer = storagevalue.PrefixTransformer{ 
Transformer: identity.NewEncryptCheckTransformer(), Prefix: []byte{}, } @@ -532,10 +549,10 @@ func prefixTransformersAndProbes(ctx context.Context, config apiserverconfig.Res return transformers, probes, &kmsUsed, nil } -type blockTransformerFunc func(cipher.Block) (value.Transformer, error) +type blockTransformerFunc func(cipher.Block) (storagevalue.Transformer, error) -func aesPrefixTransformer(config *apiserverconfig.AESConfiguration, fn blockTransformerFunc, prefix string) (value.PrefixTransformer, error) { - var result value.PrefixTransformer +func aesPrefixTransformer(config *apiserverconfig.AESConfiguration, fn blockTransformerFunc, prefix string) (storagevalue.PrefixTransformer, error) { + var result storagevalue.PrefixTransformer if len(config.Keys) == 0 { return result, fmt.Errorf("aes provider has no valid keys") @@ -550,7 +567,7 @@ func aesPrefixTransformer(config *apiserverconfig.AESConfiguration, fn blockTran } } - keyTransformers := []value.PrefixTransformer{} + keyTransformers := []storagevalue.PrefixTransformer{} for _, keyData := range config.Keys { keyData := keyData @@ -569,26 +586,26 @@ func aesPrefixTransformer(config *apiserverconfig.AESConfiguration, fn blockTran // Create a new PrefixTransformer for this key keyTransformers = append(keyTransformers, - value.PrefixTransformer{ + storagevalue.PrefixTransformer{ Transformer: transformer, Prefix: []byte(keyData.Name + ":"), }) } // Create a prefixTransformer which can choose between these keys - keyTransformer := value.NewPrefixTransformers( + keyTransformer := storagevalue.NewPrefixTransformers( fmt.Errorf("no matching key was found for the provided AES transformer"), keyTransformers...) 
// Create a PrefixTransformer which shall later be put in a list with other providers - result = value.PrefixTransformer{ + result = storagevalue.PrefixTransformer{ Transformer: keyTransformer, Prefix: []byte(prefix), } return result, nil } -func secretboxPrefixTransformer(config *apiserverconfig.SecretboxConfiguration) (value.PrefixTransformer, error) { - var result value.PrefixTransformer +func secretboxPrefixTransformer(config *apiserverconfig.SecretboxConfiguration) (storagevalue.PrefixTransformer, error) { + var result storagevalue.PrefixTransformer if len(config.Keys) == 0 { return result, fmt.Errorf("secretbox provider has no valid keys") @@ -603,7 +620,7 @@ func secretboxPrefixTransformer(config *apiserverconfig.SecretboxConfiguration) } } - keyTransformers := []value.PrefixTransformer{} + keyTransformers := []storagevalue.PrefixTransformer{} for _, keyData := range config.Keys { keyData := keyData @@ -621,18 +638,18 @@ func secretboxPrefixTransformer(config *apiserverconfig.SecretboxConfiguration) // Create a new PrefixTransformer for this key keyTransformers = append(keyTransformers, - value.PrefixTransformer{ + storagevalue.PrefixTransformer{ Transformer: secretbox.NewSecretboxTransformer(keyArray), Prefix: []byte(keyData.Name + ":"), }) } // Create a prefixTransformer which can choose between these keys - keyTransformer := value.NewPrefixTransformers( + keyTransformer := storagevalue.NewPrefixTransformers( fmt.Errorf("no matching key was found for the provided Secretbox transformer"), keyTransformers...) // Create a PrefixTransformer which shall later be put in a list with other providers - result = value.PrefixTransformer{ + result = storagevalue.PrefixTransformer{ Transformer: keyTransformer, Prefix: []byte(secretboxTransformerPrefixV1), } @@ -665,13 +682,18 @@ func (s *kmsState) accumulate(other *kmsState) { // kmsPrefixTransformer creates a KMS transformer and probe based on the given KMS config. 
// It may launch multiple go routines whose lifecycle is controlled by ctx. // In case of an error, the caller is responsible for canceling ctx to clean up any go routines that may have been launched. -func kmsPrefixTransformer(ctx context.Context, config *apiserverconfig.KMSConfiguration) (value.PrefixTransformer, healthChecker, *kmsState, error) { +func kmsPrefixTransformer(ctx context.Context, config *apiserverconfig.KMSConfiguration) (storagevalue.PrefixTransformer, healthChecker, *kmsState, error) { kmsName := config.Name switch config.APIVersion { case kmsAPIVersionV1: + if !utilfeature.DefaultFeatureGate.Enabled(features.KMSv1) { + return storagevalue.PrefixTransformer{}, nil, nil, fmt.Errorf("KMSv1 is deprecated and will only receive security updates going forward. Use KMSv2 instead. Set --feature-gates=KMSv1=true to use the deprecated KMSv1 feature.") + } + klog.InfoS("KMSv1 is deprecated and will only receive security updates going forward. Use KMSv2 instead.") + envelopeService, err := envelopeServiceFactory(ctx, config.Endpoint, config.Timeout.Duration) if err != nil { - return value.PrefixTransformer{}, nil, nil, fmt.Errorf("could not configure KMSv1-Plugin's probe %q, error: %w", kmsName, err) + return storagevalue.PrefixTransformer{}, nil, nil, fmt.Errorf("could not configure KMSv1-Plugin's probe %q, error: %w", kmsName, err) } probe := &kmsPluginProbe{ @@ -692,12 +714,12 @@ func kmsPrefixTransformer(ctx context.Context, config *apiserverconfig.KMSConfig case kmsAPIVersionV2: if !utilfeature.DefaultFeatureGate.Enabled(features.KMSv2) { - return value.PrefixTransformer{}, nil, nil, fmt.Errorf("could not configure KMSv2 plugin %q, KMSv2 feature is not enabled", kmsName) + return storagevalue.PrefixTransformer{}, nil, nil, fmt.Errorf("could not configure KMSv2 plugin %q, KMSv2 feature is not enabled", kmsName) } envelopeService, err := EnvelopeKMSv2ServiceFactory(ctx, config.Endpoint, config.Name, config.Timeout.Duration) if err != nil { - return 
value.PrefixTransformer{}, nil, nil, fmt.Errorf("could not configure KMSv2-Plugin's probe %q, error: %w", kmsName, err) + return storagevalue.PrefixTransformer{}, nil, nil, fmt.Errorf("could not configure KMSv2-Plugin's probe %q, error: %w", kmsName, err) } probe := &kmsv2PluginProbe{ @@ -710,45 +732,9 @@ func kmsPrefixTransformer(ctx context.Context, config *apiserverconfig.KMSConfig // initialize state so that Load always works probe.state.Store(&envelopekmsv2.State{}) - runProbeCheckAndLog := func(ctx context.Context) error { - if err := probe.check(ctx); err != nil { - klog.VDepth(1, 2).ErrorS(err, "kms plugin failed health check probe", "name", kmsName) - return err - } - return nil - } - - // on the happy path where the plugin is healthy and available on server start, - // prime keyID and DEK by running the check inline once (this also prevents unit tests from flaking) - // ignore the error here since we want to support the plugin starting up async with the API server - _ = runProbeCheckAndLog(ctx) - // make sure that the plugin's key ID is reasonably up-to-date - // also, make sure that our DEK is up-to-date to with said key ID (if it expires the server will fail all writes) - // if this background loop ever stops running, the server will become unfunctional after kmsv2PluginWriteDEKMaxTTL - go wait.PollUntilWithContext( - ctx, - kmsv2PluginHealthzPositiveInterval, - func(ctx context.Context) (bool, error) { - if err := runProbeCheckAndLog(ctx); err == nil { - return false, nil - } - - // TODO add integration test for quicker error poll on failure - // if we fail, block the outer polling and start a new quicker poll inline - // this limits the chance that our DEK expires during a transient failure - _ = wait.PollUntilWithContext( - ctx, - kmsv2PluginHealthzNegativeInterval, - func(ctx context.Context) (bool, error) { - return runProbeCheckAndLog(ctx) == nil, nil - }, - ) - - return false, nil - }) + primeAndProbeKMSv2(ctx, probe, kmsName) - // using AES-GCM 
by default for encrypting data with KMSv2 - transformer := value.PrefixTransformer{ + transformer := storagevalue.PrefixTransformer{ Transformer: envelopekmsv2.NewEnvelopeTransformer(envelopeService, kmsName, probe.getCurrentState), Prefix: []byte(kmsTransformerPrefixV2 + kmsName + ":"), } @@ -759,12 +745,62 @@ func kmsPrefixTransformer(ctx context.Context, config *apiserverconfig.KMSConfig }, nil default: - return value.PrefixTransformer{}, nil, nil, fmt.Errorf("could not configure KMS plugin %q, unsupported KMS API version %q", kmsName, config.APIVersion) + return storagevalue.PrefixTransformer{}, nil, nil, fmt.Errorf("could not configure KMS plugin %q, unsupported KMS API version %q", kmsName, config.APIVersion) } } -func envelopePrefixTransformer(config *apiserverconfig.KMSConfiguration, envelopeService envelope.Service, prefix string) value.PrefixTransformer { - baseTransformerFunc := func(block cipher.Block) (value.Transformer, error) { +func primeAndProbeKMSv2(ctx context.Context, probe *kmsv2PluginProbe, kmsName string) { + runProbeCheckAndLog := func(ctx context.Context, depth int) error { + if err := probe.check(ctx); err != nil { + klog.VDepth(1+depth, 2).ErrorS(err, "kms plugin failed health check probe", "name", kmsName) + return err + } + return nil + } + + blockAndProbeFastUntilSuccess := func(ctx context.Context) { + _ = wait.PollUntilWithContext( + ctx, + kmsv2PluginHealthzNegativeInterval, + func(ctx context.Context) (bool, error) { + return runProbeCheckAndLog(ctx, 1) == nil, nil + }, + ) + } + + // on the happy path where the plugin is healthy and available on server start, + // prime keyID and DEK by running the check inline once (this also prevents unit tests from flaking) + errPrime := runProbeCheckAndLog(ctx, 0) + + // if our initial attempt to prime failed, start trying to get to a valid state in the background ASAP + // this prevents a slow start when the external healthz checker is configured to ignore the KMS healthz endpoint + // since 
we want to support the plugin starting up async with the API server, this error is not fatal + if errPrime != nil { + go blockAndProbeFastUntilSuccess(ctx) // separate go routine to avoid blocking + } + + // make sure that the plugin's key ID is reasonably up-to-date + // also, make sure that our DEK is up-to-date to with said key ID (if it expires the server will fail all writes) + // if this background loop ever stops running, the server will become unfunctional after kmsv2PluginWriteDEKSourceMaxTTL + go wait.PollUntilWithContext( + ctx, + kmsv2PluginHealthzPositiveInterval, + func(ctx context.Context) (bool, error) { + if err := runProbeCheckAndLog(ctx, 0); err == nil { + return false, nil + } + + // TODO add integration test for quicker error poll on failure + // if we fail, block the outer polling and start a new quicker poll inline + // this limits the chance that our DEK expires during a transient failure + blockAndProbeFastUntilSuccess(ctx) + + return false, nil + }) +} + +func envelopePrefixTransformer(config *apiserverconfig.KMSConfiguration, envelopeService envelope.Service, prefix string) storagevalue.PrefixTransformer { + baseTransformerFunc := func(block cipher.Block) (storagevalue.Transformer, error) { gcm, err := aestransformer.NewGCMTransformer(block) if err != nil { return nil, err @@ -777,15 +813,15 @@ func envelopePrefixTransformer(config *apiserverconfig.KMSConfiguration, envelop return unionTransformers{gcm, aestransformer.NewCBCTransformer(block)}, nil } - return value.PrefixTransformer{ + return storagevalue.PrefixTransformer{ Transformer: envelope.NewEnvelopeTransformer(envelopeService, int(*config.CacheSize), baseTransformerFunc), Prefix: []byte(prefix + config.Name + ":"), } } -type unionTransformers []value.Transformer +type unionTransformers []storagevalue.Transformer -func (u unionTransformers) TransformFromStorage(ctx context.Context, data []byte, dataCtx value.Context) (out []byte, stale bool, err error) { +func (u unionTransformers) 
TransformFromStorage(ctx context.Context, data []byte, dataCtx storagevalue.Context) (out []byte, stale bool, err error) { var errs []error for i := range u { transformer := u[i] @@ -804,7 +840,7 @@ func (u unionTransformers) TransformFromStorage(ctx context.Context, data []byte return nil, false, fmt.Errorf("unionTransformers: unable to transform from storage") } -func (u unionTransformers) TransformToStorage(ctx context.Context, data []byte, dataCtx value.Context) (out []byte, err error) { +func (u unionTransformers) TransformToStorage(ctx context.Context, data []byte, dataCtx storagevalue.Context) (out []byte, err error) { return u[0].TransformToStorage(ctx, data, dataCtx) } @@ -815,7 +851,7 @@ func computeEncryptionConfigHash(data []byte) string { return fmt.Sprintf("%x", sha256.Sum256(data)) } -var _ ResourceTransformers = &DynamicTransformers{} +var _ storagevalue.ResourceTransformers = &DynamicTransformers{} var _ healthz.HealthChecker = &DynamicTransformers{} // DynamicTransformers holds transformers that may be dynamically updated via a single external actor, likely a controller. @@ -825,7 +861,7 @@ type DynamicTransformers struct { } type transformTracker struct { - transformerOverrides map[schema.GroupResource]value.Transformer + transformerOverrides map[schema.GroupResource]storagevalue.Transformer kmsPluginHealthzCheck healthz.HealthChecker closeTransformers context.CancelFunc kmsCloseGracePeriod time.Duration @@ -833,7 +869,7 @@ type transformTracker struct { // NewDynamicTransformers returns transformers, health checks for kms providers and an ability to close transformers. 
 func NewDynamicTransformers(
-	transformerOverrides map[schema.GroupResource]value.Transformer,
+	transformerOverrides map[schema.GroupResource]storagevalue.Transformer,
 	kmsPluginHealthzCheck healthz.HealthChecker,
 	closeTransformers context.CancelFunc,
 	kmsCloseGracePeriod time.Duration,
@@ -864,7 +900,7 @@ func (d *DynamicTransformers) Name() string {
 }
 
 // TransformerForResource returns the transformer for the given resource.
-func (d *DynamicTransformers) TransformerForResource(resource schema.GroupResource) value.Transformer {
+func (d *DynamicTransformers) TransformerForResource(resource schema.GroupResource) storagevalue.Transformer {
 	return &resourceTransformer{
 		resource:         resource,
 		transformTracker: d.transformTracker,
@@ -873,7 +909,7 @@ func (d *DynamicTransformers) TransformerForResour
 
 // Set sets the transformer overrides. This method is not go routine safe and must only be called by the same, single caller throughout the lifetime of this object.
 func (d *DynamicTransformers) Set(
-	transformerOverrides map[schema.GroupResource]value.Transformer,
+	transformerOverrides map[schema.GroupResource]storagevalue.Transformer,
 	closeTransformers context.CancelFunc,
 	kmsPluginHealthzCheck healthz.HealthChecker,
 	kmsCloseGracePeriod time.Duration,
@@ -898,34 +934,30 @@ func (d *DynamicTransformers) Set(
 	}()
 }
 
-var _ value.Transformer = &resourceTransformer{}
+var _ storagevalue.Transformer = &resourceTransformer{}
 
 type resourceTransformer struct {
 	resource         schema.GroupResource
 	transformTracker *atomic.Value
 }
 
-func (r *resourceTransformer) TransformFromStorage(ctx context.Context, data []byte, dataCtx value.Context) ([]byte, bool, error) {
+func (r *resourceTransformer) TransformFromStorage(ctx context.Context, data []byte, dataCtx storagevalue.Context) ([]byte, bool, error) {
 	return r.transformer().TransformFromStorage(ctx, data, dataCtx)
 }
 
-func (r *resourceTransformer) TransformToStorage(ctx context.Context, data []byte, dataCtx value.Context) ([]byte, error) {
+func (r *resourceTransformer) TransformToStorage(ctx context.Context, data []byte, dataCtx storagevalue.Context) ([]byte, error) {
 	return r.transformer().TransformToStorage(ctx, data, dataCtx)
 }
 
-func (r *resourceTransformer) transformer() value.Transformer {
+func (r *resourceTransformer) transformer() storagevalue.Transformer {
 	return transformerFromOverrides(r.transformTracker.Load().(*transformTracker).transformerOverrides, r.resource)
 }
 
-type ResourceTransformers interface {
-	TransformerForResource(resource schema.GroupResource) value.Transformer
-}
-
-var _ ResourceTransformers = &StaticTransformers{}
+var _ storagevalue.ResourceTransformers = &StaticTransformers{}
 
-type StaticTransformers map[schema.GroupResource]value.Transformer
+type StaticTransformers map[schema.GroupResource]storagevalue.Transformer
 
-func (s StaticTransformers) TransformerForResource(resource schema.GroupResource) value.Transformer {
+func (s StaticTransformers) TransformerForResource(resource schema.GroupResource) storagevalue.Transformer {
 	return transformerFromOverrides(s, resource)
 }
 
@@ -934,7 +966,7 @@ var anyGroupAnyResource = schema.GroupResource{
 	Resource: "*",
 }
 
-func transformerFromOverrides(transformerOverrides map[schema.GroupResource]value.Transformer, resource schema.GroupResource) value.Transformer {
+func transformerFromOverrides(transformerOverrides map[schema.GroupResource]storagevalue.Transformer, resource schema.GroupResource) storagevalue.Transformer {
 	if transformer := transformerOverrides[resource]; transformer != nil {
 		return transformer
 	}
diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/server/options/encryptionconfig/controller/controller.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/server/options/encryptionconfig/controller/controller.go
index b8c66826bf50..94782ccbacd4 100644
--- a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/server/options/encryptionconfig/controller/controller.go
+++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/server/options/encryptionconfig/controller/controller.go
@@ -27,6 +27,7 @@ import (
 	"k8s.io/apimachinery/pkg/util/wait"
 	"k8s.io/apiserver/pkg/server/healthz"
 	"k8s.io/apiserver/pkg/server/options/encryptionconfig"
+	"k8s.io/apiserver/pkg/server/options/encryptionconfig/metrics"
 	"k8s.io/client-go/util/workqueue"
 	"k8s.io/klog/v2"
 )
@@ -163,16 +164,19 @@ func (d *DynamicKMSEncryptionConfigContent) processNextWorkItem(serverCtx contex
 	ctx, closeTransformers := context.WithCancel(serverCtx)
 
 	defer func() {
-		// TODO: increment success metric when updatedEffectiveConfig=true
-		// TODO can work queue metrics help here?
 		if !updatedEffectiveConfig {
 			// avoid leaking if we're not using the newly constructed transformers (due to an error or them not being changed)
 			closeTransformers()
 		}
+
+		if updatedEffectiveConfig && err == nil {
+			metrics.RecordEncryptionConfigAutomaticReloadSuccess()
+		}
+
 		if err != nil {
-			// TODO: increment failure metric
+			metrics.RecordEncryptionConfigAutomaticReloadFailure()
 			utilruntime.HandleError(fmt.Errorf("error processing encryption config file %s: %v", d.filePath, err))
 			// add dummy item back to the queue to trigger file content processing.
 			d.queue.AddRateLimited(key)
diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/server/options/encryptionconfig/metrics/metrics.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/server/options/encryptionconfig/metrics/metrics.go
new file mode 100644
index 000000000000..799b584cf7aa
--- /dev/null
+++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/server/options/encryptionconfig/metrics/metrics.go
@@ -0,0 +1,86 @@
+/*
+Copyright 2023 The Kubernetes Authors.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package metrics
+
+import (
+	"sync"
+
+	"k8s.io/component-base/metrics"
+	"k8s.io/component-base/metrics/legacyregistry"
+)
+
+const (
+	namespace = "apiserver"
+	subsystem = "encryption_config_controller"
+)
+
+var (
+	encryptionConfigAutomaticReloadFailureTotal = metrics.NewCounter(
+		&metrics.CounterOpts{
+			Namespace:      namespace,
+			Subsystem:      subsystem,
+			Name:           "automatic_reload_failures_total",
+			Help:           "Total number of failed automatic reloads of encryption configuration.",
+			StabilityLevel: metrics.ALPHA,
+		},
+	)
+
+	encryptionConfigAutomaticReloadSuccessTotal = metrics.NewCounter(
+		&metrics.CounterOpts{
+			Namespace:      namespace,
+			Subsystem:      subsystem,
+			Name:           "automatic_reload_success_total",
+			Help:           "Total number of successful automatic reloads of encryption configuration.",
+			StabilityLevel: metrics.ALPHA,
+		},
+	)
+
+	encryptionConfigAutomaticReloadLastTimestampSeconds = metrics.NewGaugeVec(
+		&metrics.GaugeOpts{
+			Namespace:      namespace,
+			Subsystem:      subsystem,
+			Name:           "automatic_reload_last_timestamp_seconds",
+			Help:           "Timestamp of the last successful or failed automatic reload of encryption configuration.",
+			StabilityLevel: metrics.ALPHA,
+		},
+		[]string{"status"},
+	)
+)
+
+var registerMetrics sync.Once
+
+func RegisterMetrics() {
+	registerMetrics.Do(func() {
+		legacyregistry.MustRegister(encryptionConfigAutomaticReloadFailureTotal)
+		legacyregistry.MustRegister(encryptionConfigAutomaticReloadSuccessTotal)
+		legacyregistry.MustRegister(encryptionConfigAutomaticReloadLastTimestampSeconds)
+	})
+}
+
+func RecordEncryptionConfigAutomaticReloadFailure() {
+	encryptionConfigAutomaticReloadFailureTotal.Inc()
+	recordEncryptionConfigAutomaticReloadTimestamp("failure")
+}
+
+func RecordEncryptionConfigAutomaticReloadSuccess() {
+	encryptionConfigAutomaticReloadSuccessTotal.Inc()
+	recordEncryptionConfigAutomaticReloadTimestamp("success")
+}
+
+func recordEncryptionConfigAutomaticReloadTimestamp(result string) {
+	encryptionConfigAutomaticReloadLastTimestampSeconds.WithLabelValues(result).SetToCurrentTime()
+}
diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/server/options/etcd.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/server/options/etcd.go
index 6aabbf255bed..57e9c1a9f138 100644
--- a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/server/options/etcd.go
+++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/server/options/etcd.go
@@ -36,9 +36,10 @@ import (
 	"k8s.io/apiserver/pkg/server/options/encryptionconfig"
 	encryptionconfigcontroller "k8s.io/apiserver/pkg/server/options/encryptionconfig/controller"
 	serverstorage "k8s.io/apiserver/pkg/server/storage"
+	"k8s.io/apiserver/pkg/storage/etcd3/metrics"
 	"k8s.io/apiserver/pkg/storage/storagebackend"
 	storagefactory "k8s.io/apiserver/pkg/storage/storagebackend/factory"
-	flowcontrolrequest "k8s.io/apiserver/pkg/util/flowcontrol/request"
+	storagevalue "k8s.io/apiserver/pkg/storage/value"
 	"k8s.io/klog/v2"
 )
 
@@ -64,11 +65,6 @@ type EtcdOptions struct {
 	// WatchCacheSizes represents override to a given resource
 	WatchCacheSizes []string
 
-	// complete guards fields that must be initialized via Complete before the Apply methods can be used.
-	complete bool
-	resourceTransformers encryptionconfig.ResourceTransformers
-	kmsPluginHealthzChecks []healthz.HealthChecker
-
 	// SkipHealthEndpoints, when true, causes the Apply methods to not set up health endpoints.
 	// This allows multiple invocations of the Apply methods without duplication of said endpoints.
 	SkipHealthEndpoints bool
@@ -212,118 +208,138 @@ func (s *EtcdOptions) AddFlags(fs *pflag.FlagSet) {
 		"The time in seconds that each lease is reused. A lower value could avoid large number of objects reusing the same lease. Notice that a too small value may cause performance problems at storage layer.")
 }
 
-// Complete must be called exactly once before using any of the Apply methods. It is responsible for setting
-// up objects that must be created once and reused across multiple invocations such as storage transformers.
-// This method mutates the receiver (EtcdOptions). It must never mutate the inputs.
-func (s *EtcdOptions) Complete(
-	storageObjectCountTracker flowcontrolrequest.StorageObjectCountTracker,
-	stopCh <-chan struct{},
-	addPostStartHook func(name string, hook server.PostStartHookFunc) error,
-) error {
+// ApplyTo mutates the provided server.Config. It must never mutate the receiver (EtcdOptions).
+func (s *EtcdOptions) ApplyTo(c *server.Config) error {
 	if s == nil {
 		return nil
 	}
 
-	if s.complete {
-		return fmt.Errorf("EtcdOptions.Complete called more than once")
+	storageConfigCopy := s.StorageConfig
+	if storageConfigCopy.StorageObjectCountTracker == nil {
+		storageConfigCopy.StorageObjectCountTracker = c.StorageObjectCountTracker
 	}
 
-	if len(s.EncryptionProviderConfigFilepath) != 0 {
-		ctxServer := wait.ContextForChannel(stopCh)
-		// nolint:govet // The only code path where closeTransformers does not get called is when it gets stored in dynamicTransformers.
-		ctxTransformers, closeTransformers := context.WithCancel(ctxServer)
+	return s.ApplyWithStorageFactoryTo(&SimpleStorageFactory{StorageConfig: storageConfigCopy}, c)
+}
 
-		encryptionConfiguration, err := encryptionconfig.LoadEncryptionConfig(ctxTransformers, s.EncryptionProviderConfigFilepath, s.EncryptionProviderConfigAutomaticReload)
-		if err != nil {
-			// in case of error, we want to close partially initialized (if any) transformers
-			closeTransformers()
+// ApplyWithStorageFactoryTo mutates the provided server.Config. It must never mutate the receiver (EtcdOptions).
+func (s *EtcdOptions) ApplyWithStorageFactoryTo(factory serverstorage.StorageFactory, c *server.Config) error {
+	if s == nil {
+		return nil
+	}
+
+	if !s.SkipHealthEndpoints {
+		if err := s.addEtcdHealthEndpoint(c); err != nil {
 			return err
 		}
+	}
 
-		// enable kms hot reload controller only if the config file is set to be automatically reloaded
-		if s.EncryptionProviderConfigAutomaticReload {
-			// with reload=true we will always have 1 health check
-			if len(encryptionConfiguration.HealthChecks) != 1 {
-				// in case of error, we want to close partially initialized (if any) transformers
-				closeTransformers()
-				return fmt.Errorf("failed to start kms encryption config hot reload controller. only 1 health check should be available when reload is enabled")
-			}
+	// setup encryption
+	if err := s.maybeApplyResourceTransformers(c); err != nil {
+		return err
+	}
+
+	metrics.SetStorageMonitorGetter(monitorGetter(factory))
+
+	c.RESTOptionsGetter = s.CreateRESTOptionsGetter(factory, c.ResourceTransformers)
+	return nil
+}
 
-			// Here the dynamic transformers take ownership of the transformers and their cancellation.
-			dynamicTransformers := encryptionconfig.NewDynamicTransformers(encryptionConfiguration.Transformers, encryptionConfiguration.HealthChecks[0], closeTransformers, encryptionConfiguration.KMSCloseGracePeriod)
-
-			// add post start hook to start hot reload controller
-			// adding this hook here will ensure that it gets configured exactly once
-			err = addPostStartHook(
-				"start-encryption-provider-config-automatic-reload",
-				func(_ server.PostStartHookContext) error {
-					dynamicEncryptionConfigController := encryptionconfigcontroller.NewDynamicEncryptionConfiguration(
-						"encryption-provider-config-automatic-reload-controller",
-						s.EncryptionProviderConfigFilepath,
-						dynamicTransformers,
-						encryptionConfiguration.EncryptionFileContentHash,
-					)
-
-					go dynamicEncryptionConfigController.Run(ctxServer)
-
-					return nil
-				},
-			)
+func monitorGetter(factory serverstorage.StorageFactory) func() (monitors []metrics.Monitor, err error) {
+	return func() (monitors []metrics.Monitor, err error) {
+		defer func() {
 			if err != nil {
-				// in case of error, we want to close partially initialized (if any) transformers
-				closeTransformers()
-				return fmt.Errorf("failed to add post start hook for kms encryption config hot reload controller: %w", err)
+				for _, m := range monitors {
+					m.Close()
+				}
 			}
+		}()
 
-			s.resourceTransformers = dynamicTransformers
-			s.kmsPluginHealthzChecks = []healthz.HealthChecker{dynamicTransformers}
-		} else {
-			s.resourceTransformers = encryptionconfig.StaticTransformers(encryptionConfiguration.Transformers)
-			s.kmsPluginHealthzChecks = encryptionConfiguration.HealthChecks
+		var m metrics.Monitor
+		for _, cfg := range factory.Configs() {
+			m, err = storagefactory.CreateMonitor(cfg)
+			if err != nil {
+				return nil, err
+			}
+			monitors = append(monitors, m)
 		}
+		return monitors, nil
 	}
-
-	s.StorageConfig.StorageObjectCountTracker = storageObjectCountTracker
-
-	s.complete = true
-
-	// nolint:govet // The only code path where closeTransformers does not get called is when it gets stored in dynamicTransformers.
-	return nil
 }
 
-// ApplyTo mutates the provided server.Config. It must never mutate the receiver (EtcdOptions).
-func (s *EtcdOptions) ApplyTo(c *server.Config) error {
-	if s == nil {
-		return nil
+func (s *EtcdOptions) CreateRESTOptionsGetter(factory serverstorage.StorageFactory, resourceTransformers storagevalue.ResourceTransformers) generic.RESTOptionsGetter {
+	if resourceTransformers != nil {
+		factory = &transformerStorageFactory{
+			delegate:             factory,
+			resourceTransformers: resourceTransformers,
+		}
 	}
-
-	return s.ApplyWithStorageFactoryTo(&SimpleStorageFactory{StorageConfig: s.StorageConfig}, c)
+	return &StorageFactoryRestOptionsFactory{Options: *s, StorageFactory: factory}
 }
 
-// ApplyWithStorageFactoryTo mutates the provided server.Config. It must never mutate the receiver (EtcdOptions).
-func (s *EtcdOptions) ApplyWithStorageFactoryTo(factory serverstorage.StorageFactory, c *server.Config) error {
-	if s == nil {
+func (s *EtcdOptions) maybeApplyResourceTransformers(c *server.Config) (err error) {
+	if c.ResourceTransformers != nil {
 		return nil
 	}
-
-	if !s.complete {
-		return fmt.Errorf("EtcdOptions.Apply called without completion")
+	if len(s.EncryptionProviderConfigFilepath) == 0 {
+		return nil
 	}
 
-	if !s.SkipHealthEndpoints {
-		if err := s.addEtcdHealthEndpoint(c); err != nil {
-			return err
+	ctxServer := wait.ContextForChannel(c.DrainedNotify())
+	ctxTransformers, closeTransformers := context.WithCancel(ctxServer)
+	defer func() {
+		// in case of error, we want to close partially initialized (if any) transformers
+		if err != nil {
+			closeTransformers()
 		}
+	}()
+
+	encryptionConfiguration, err := encryptionconfig.LoadEncryptionConfig(ctxTransformers, s.EncryptionProviderConfigFilepath, s.EncryptionProviderConfigAutomaticReload)
+	if err != nil {
+		return err
 	}
 
-	if s.resourceTransformers != nil {
-		factory = &transformerStorageFactory{
-			delegate:             factory,
-			resourceTransformers: s.resourceTransformers,
+	if s.EncryptionProviderConfigAutomaticReload {
+		// with reload=true we will always have 1 health check
+		if len(encryptionConfiguration.HealthChecks) != 1 {
+			return fmt.Errorf("failed to start kms encryption config hot reload controller. only 1 health check should be available when reload is enabled")
+		}
+
+		// Here the dynamic transformers take ownership of the transformers and their cancellation.
+		dynamicTransformers := encryptionconfig.NewDynamicTransformers(encryptionConfiguration.Transformers, encryptionConfiguration.HealthChecks[0], closeTransformers, encryptionConfiguration.KMSCloseGracePeriod)
+
+		// add post start hook to start hot reload controller
+		// adding this hook here will ensure that it gets configured exactly once
+		err = c.AddPostStartHook(
+			"start-encryption-provider-config-automatic-reload",
+			func(_ server.PostStartHookContext) error {
+				dynamicEncryptionConfigController := encryptionconfigcontroller.NewDynamicEncryptionConfiguration(
+					"encryption-provider-config-automatic-reload-controller",
+					s.EncryptionProviderConfigFilepath,
+					dynamicTransformers,
+					encryptionConfiguration.EncryptionFileContentHash,
+				)
+
+				go dynamicEncryptionConfigController.Run(ctxServer)
+
+				return nil
+			},
+		)
+		if err != nil {
+			return fmt.Errorf("failed to add post start hook for kms encryption config hot reload controller: %w", err)
+		}
+
+		c.ResourceTransformers = dynamicTransformers
+		if !s.SkipHealthEndpoints {
+			c.AddHealthChecks(dynamicTransformers)
+		}
+	} else {
+		c.ResourceTransformers = encryptionconfig.StaticTransformers(encryptionConfiguration.Transformers)
+		if !s.SkipHealthEndpoints {
+			c.AddHealthChecks(encryptionConfiguration.HealthChecks...)
 		}
 	}
 
-	c.RESTOptionsGetter = &StorageFactoryRestOptionsFactory{Options: *s, StorageFactory: factory}
 	return nil
 }
@@ -344,8 +360,6 @@ func (s *EtcdOptions) addEtcdHealthEndpoint(c *server.Config) error {
 		return readyCheck()
 	}))
 
-	c.AddHealthChecks(s.kmsPluginHealthzChecks...)
-
 	return nil
 }
@@ -444,6 +458,10 @@ func (s *SimpleStorageFactory) ResourcePrefix(resource schema.GroupResource) str
 	return resource.Group + "/" + resource.Resource
 }
 
+func (s *SimpleStorageFactory) Configs() []storagebackend.Config {
+	return serverstorage.Configs(s.StorageConfig)
+}
+
 func (s *SimpleStorageFactory) Backends() []serverstorage.Backend {
 	// nothing should ever call this method but we still provide a functional implementation
 	return serverstorage.Backends(s.StorageConfig)
@@ -453,7 +471,7 @@ var _ serverstorage.StorageFactory = &transformerStorageFactory{}
 
 type transformerStorageFactory struct {
 	delegate             serverstorage.StorageFactory
-	resourceTransformers encryptionconfig.ResourceTransformers
+	resourceTransformers storagevalue.ResourceTransformers
 }
 
 func (t *transformerStorageFactory) NewConfig(resource schema.GroupResource) (*storagebackend.ConfigForResource, error) {
@@ -474,6 +492,10 @@ func (t *transformerStorageFactory) ResourcePrefix(resource schema.GroupResource
 	return t.delegate.ResourcePrefix(resource)
 }
 
+func (t *transformerStorageFactory) Configs() []storagebackend.Config {
+	return t.delegate.Configs()
+}
+
 func (t *transformerStorageFactory) Backends() []serverstorage.Backend {
 	return t.delegate.Backends()
 }
diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/server/options/recommended.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/server/options/recommended.go
index 28aad0daf635..69f8fb51556b 100644
--- a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/server/options/recommended.go
+++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/server/options/recommended.go
@@ -20,7 +20,6 @@ import (
 	"fmt"
 
 	"github.com/spf13/pflag"
-
 	"k8s.io/apimachinery/pkg/runtime"
 	"k8s.io/apiserver/pkg/admission"
 	"k8s.io/apiserver/pkg/features"
@@ -28,6 +27,7 @@ import (
 	"k8s.io/apiserver/pkg/storage/storagebackend"
 	"k8s.io/apiserver/pkg/util/feature"
 	utilflowcontrol "k8s.io/apiserver/pkg/util/flowcontrol"
+	"k8s.io/client-go/dynamic"
 	"k8s.io/client-go/kubernetes"
 	"k8s.io/component-base/featuregate"
 	"k8s.io/klog/v2"
@@ -101,9 +101,6 @@ func (o *RecommendedOptions) AddFlags(fs *pflag.FlagSet) {
 
 // ApplyTo adds RecommendedOptions to the server configuration.
 // pluginInitializers can be empty, it is only need for additional initializers.
 func (o *RecommendedOptions) ApplyTo(config *server.RecommendedConfig) error {
-	if err := o.Etcd.Complete(config.Config.StorageObjectCountTracker, config.Config.DrainedNotify(), config.Config.AddPostStartHook); err != nil {
-		return err
-	}
 	if err := o.Etcd.ApplyTo(&config.Config); err != nil {
 		return err
 	}
@@ -131,9 +128,20 @@ func (o *RecommendedOptions) ApplyTo(config *server.RecommendedConfig) error {
 	if err := o.CoreAPI.ApplyTo(config); err != nil {
 		return err
 	}
-	if initializers, err := o.ExtraAdmissionInitializers(config); err != nil {
+	initializers, err := o.ExtraAdmissionInitializers(config)
+	if err != nil {
+		return err
+	}
+	kubeClient, err := kubernetes.NewForConfig(config.ClientConfig)
+	if err != nil {
+		return err
+	}
+	dynamicClient, err := dynamic.NewForConfig(config.ClientConfig)
+	if err != nil {
 		return err
-	} else if err := o.Admission.ApplyTo(&config.Config, config.SharedInformerFactory, config.ClientConfig, o.FeatureGate, initializers...); err != nil {
+	}
+	if err := o.Admission.ApplyTo(&config.Config, config.SharedInformerFactory, kubeClient, dynamicClient, o.FeatureGate,
+		initializers...); err != nil {
 		return err
 	}
 	if feature.DefaultFeatureGate.Enabled(features.APIPriorityAndFairness) {
diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/server/options/serving.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/server/options/serving.go
index c64798b4f96c..efda02ef7c9a 100644
--- a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/server/options/serving.go
+++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/server/options/serving.go
@@ -153,7 +153,7 @@ func (s *SecureServingOptions) AddFlags(fs *pflag.FlagSet) {
 	fs.IPVar(&s.BindAddress, "bind-address", s.BindAddress, ""+
 		"The IP address on which to listen for the --secure-port port. The "+
 		"associated interface(s) must be reachable by the rest of the cluster, and by CLI/web "+
-		"clients. If blank or an unspecified address (0.0.0.0 or ::), all interfaces will be used.")
+		"clients. If blank or an unspecified address (0.0.0.0 or ::), all interfaces and IP address families will be used.")
 
 	desc := "The port on which to serve HTTPS with authentication and authorization."
 	if s.Required {
diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/server/routes/metrics.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/server/routes/metrics.go
index d30f74b9c416..ad1eb2835ef0 100644
--- a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/server/routes/metrics.go
+++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/server/routes/metrics.go
@@ -22,6 +22,7 @@ import (
 	cachermetrics "k8s.io/apiserver/pkg/storage/cacher/metrics"
 	etcd3metrics "k8s.io/apiserver/pkg/storage/etcd3/metrics"
 	flowcontrolmetrics "k8s.io/apiserver/pkg/util/flowcontrol/metrics"
+	peerproxymetrics "k8s.io/apiserver/pkg/util/peerproxy/metrics"
 	"k8s.io/component-base/metrics/legacyregistry"
 )
 
@@ -50,4 +51,5 @@ func register() {
 	cachermetrics.Register()
 	etcd3metrics.Register()
 	flowcontrolmetrics.Register()
+	peerproxymetrics.Register()
 }
diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/server/routes/openapi.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/server/routes/openapi.go
index 17cc1f85a09b..2819d1576016 100644
--- a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/server/routes/openapi.go
+++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/server/routes/openapi.go
@@ -43,10 +43,7 @@ func (oa OpenAPI) InstallV2(c *restful.Container, mux *mux.PathRecorderMux) (*ha
 	}
 	spec.Definitions = handler.PruneDefaults(spec.Definitions)
 	openAPIVersionedService := handler.NewOpenAPIService(spec)
-	err = openAPIVersionedService.RegisterOpenAPIVersionedService("/openapi/v2", mux)
-	if err != nil {
-		klog.Fatalf("Failed to register versioned open api spec for root: %v", err)
-	}
+	openAPIVersionedService.RegisterOpenAPIVersionedService("/openapi/v2", mux)
 
 	return openAPIVersionedService, spec
 }
diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/server/storage/storage_factory.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/server/storage/storage_factory.go
index 5b1c24446c7d..be4d0390d602 100644
--- a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/server/storage/storage_factory.go
+++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/server/storage/storage_factory.go
@@ -22,14 +22,13 @@ import (
 	"io/ioutil"
 	"strings"
 
-	"k8s.io/klog/v2"
-
 	"k8s.io/apimachinery/pkg/runtime"
 	"k8s.io/apimachinery/pkg/runtime/schema"
 	"k8s.io/apimachinery/pkg/util/sets"
 	"k8s.io/apiserver/pkg/features"
 	"k8s.io/apiserver/pkg/storage/storagebackend"
 	utilfeature "k8s.io/apiserver/pkg/util/feature"
+	"k8s.io/klog/v2"
 )
 
 // Backend describes the storage servers, the information here should be enough
@@ -52,8 +51,12 @@ type StorageFactory interface {
 	// centralized control over the shape of etcd directories
 	ResourcePrefix(groupResource schema.GroupResource) string
 
+	// Configs gets configurations for all of registered storage destinations.
+	Configs() []storagebackend.Config
+
 	// Backends gets all backends for all registered storage destinations.
 	// Used for getting all instances for health validations.
+	// Deprecated: Use Configs instead
 	Backends() []Backend
 }
 
@@ -276,14 +279,41 @@ func (s *DefaultStorageFactory) NewConfig(groupResource schema.GroupResource) (*
 	return storageConfig.ForResource(groupResource), nil
 }
 
-// Backends returns all backends for all registered storage destinations.
-// Used for getting all instances for health validations.
+// Configs implements StorageFactory.
+func (s *DefaultStorageFactory) Configs() []storagebackend.Config {
+	return configs(s.StorageConfig, s.Overrides)
+}
+
+// Configs gets configurations for all of registered storage destinations.
+func Configs(storageConfig storagebackend.Config) []storagebackend.Config {
+	return configs(storageConfig, nil)
+}
+
+// Returns all storage configurations including those for group resource overrides
+func configs(storageConfig storagebackend.Config, grOverrides map[schema.GroupResource]groupResourceOverrides) []storagebackend.Config {
+	configs := []storagebackend.Config{storageConfig}
+
+	for _, override := range grOverrides {
+		if len(override.etcdLocation) == 0 {
+			continue
+		}
+		// copy
+		newConfig := storageConfig
+		override.Apply(&newConfig, &StorageCodecConfig{})
+		newConfig.Transport.ServerList = override.etcdLocation
+		configs = append(configs, newConfig)
+	}
+	return configs
+}
+
+// Backends implements StorageFactory.
 func (s *DefaultStorageFactory) Backends() []Backend {
 	return backends(s.StorageConfig, s.Overrides)
 }
 
 // Backends returns all backends for all registered storage destinations.
 // Used for getting all instances for health validations.
+// Deprecated: Validate health by passing storagebackend.Config directly to storagefactory.CreateProber.
 func Backends(storageConfig storagebackend.Config) []Backend {
 	return backends(storageConfig, nil)
 }
diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storage/OWNERS b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storage/OWNERS
index c77bfe44b5d4..044ecb9f61f8 100644
--- a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storage/OWNERS
+++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storage/OWNERS
@@ -1,11 +1,9 @@
 # See the OWNERS docs at https://go.k8s.io/owners
 
 approvers:
-  - lavalamp
   - liggitt
   - wojtek-t
 reviewers:
-  - lavalamp
   - smarterclayton
   - wojtek-t
   - deads2k
@@ -16,6 +14,8 @@ reviewers:
   - ingvagabund
   - enj
   - stevekuznetsov
+  - MadhavJivrajani
 emeritus_approvers:
   - xiang90
   - timothysc
+  - lavalamp
diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storage/cacher/cacher.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storage/cacher/cacher.go
index eada35b1d0a9..0796f591d7f7 100644
--- a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storage/cacher/cacher.go
+++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storage/cacher/cacher.go
@@ -104,7 +104,7 @@ type Config struct {
 
 	Codec runtime.Codec
 
-	Clock clock.Clock
+	Clock clock.WithTicker
 }
 
 type watchersMap map[int]*cacheWatcher
@@ -184,7 +184,6 @@ func (i *indexedWatchers) terminateAll(groupResource schema.GroupResource, done
 // second in a bucket, and pop up them once at the timeout. To be more specific,
 // if you set fire time at X, you can get the bookmark within (X-1,X+1) period.
 type watcherBookmarkTimeBuckets struct {
-	lock sync.Mutex
 	// the key of watcherBuckets is the number of seconds since createTime
 	watchersBuckets map[int64][]*cacheWatcher
 	createTime      time.Time
@@ -205,7 +204,7 @@ func newTimeBucketWatchers(clock clock.Clock, bookmarkFrequency time.Duration) *
 
 // adds a watcher to the bucket, if the deadline is before the start, it will be
 // added to the first one.
-func (t *watcherBookmarkTimeBuckets) addWatcher(w *cacheWatcher) bool {
+func (t *watcherBookmarkTimeBuckets) addWatcherThreadUnsafe(w *cacheWatcher) bool {
 	// note that the returned time can be before t.createTime,
 	// especially in cases when the nextBookmarkTime method
 	// give us the zero value of type Time
@@ -215,8 +214,6 @@ func (t *watcherBookmarkTimeBuckets) addWatcher(w *cacheWatcher) bool {
 		return false
 	}
 	bucketID := int64(nextTime.Sub(t.createTime) / time.Second)
-	t.lock.Lock()
-	defer t.lock.Unlock()
 	if bucketID < t.startBucketID {
 		bucketID = t.startBucketID
 	}
@@ -225,12 +222,10 @@ func (t *watcherBookmarkTimeBuckets) addWatcher(w *cacheWatcher) bool {
 	return true
 }
 
-func (t *watcherBookmarkTimeBuckets) popExpiredWatchers() [][]*cacheWatcher {
+func (t *watcherBookmarkTimeBuckets) popExpiredWatchersThreadUnsafe() [][]*cacheWatcher {
 	currentBucketID := int64(t.clock.Since(t.createTime) / time.Second)
 	// There should be one or two elements in almost all cases
 	expiredWatchers := make([][]*cacheWatcher, 0, 2)
-	t.lock.Lock()
-	defer t.lock.Unlock()
 	for ; t.startBucketID <= currentBucketID; t.startBucketID++ {
 		if watchers, ok := t.watchersBuckets[t.startBucketID]; ok {
 			delete(t.watchersBuckets, t.startBucketID)
@@ -328,11 +323,16 @@ type Cacher struct {
 	// dispatching that event to avoid race with closing channels in watchers.
 	watchersToStop []*cacheWatcher
 	// Maintain a timeout queue to send the bookmark event before the watcher times out.
+	// Note that this field when accessed MUST be protected by the Cacher.lock.
 	bookmarkWatchers *watcherBookmarkTimeBuckets
 	// expiredBookmarkWatchers is a list of watchers that were expired and need to be schedule for a next bookmark event
 	expiredBookmarkWatchers []*cacheWatcher
 }
 
+func (c *Cacher) RequestWatchProgress(ctx context.Context) error {
+	return c.storage.RequestWatchProgress(ctx)
+}
+
 // NewCacherFromConfig creates a new Cacher responsible for servicing WATCH and LIST requests from
 // its internal cache and updating its cache in the background based on the
 // given configuration.
@@ -401,10 +401,10 @@ func NewCacherFromConfig(config Config) (*Cacher, error) {
 		// so that future reuse does not get a spurious timeout.
 		<-cacher.timer.C
 	}
-
+	progressRequester := newConditionalProgressRequester(config.Storage.RequestWatchProgress, config.Clock)
 	watchCache := newWatchCache(
-		config.KeyFunc, cacher.processEvent, config.GetAttrsFunc, config.Versioner, config.Indexers, config.Clock, config.GroupResource)
-	listerWatcher := NewCacherListerWatcher(config.Storage, config.ResourcePrefix, config.NewListFunc)
+		config.KeyFunc, cacher.processEvent, config.GetAttrsFunc, config.Versioner, config.Indexers, config.Clock, config.GroupResource, progressRequester)
+	listerWatcher := NewListerWatcher(config.Storage, config.ResourcePrefix, config.NewListFunc)
 	reflectorName := "storage/cacher.go:" + config.ResourcePrefix
 
 	reflector := cache.NewNamedReflector(reflectorName, listerWatcher, obj, watchCache, 0)
@@ -423,6 +423,7 @@ func NewCacherFromConfig(config Config) (*Cacher, error) {
 	cacher.reflector = reflector
 
 	go cacher.dispatchEvents()
+	go progressRequester.Run(stopCh)
 
 	cacher.stopWg.Add(1)
 	go func() {
@@ -592,6 +593,18 @@ func (c *Cacher) Watch(ctx context.Context, key string, opts storage.ListOptions
 		identifier,
 	)
 
+	// note that c.waitUntilWatchCacheFreshAndForceAllEvents must be called without
+	// the c.watchCache.RLock held otherwise we are at risk of a deadlock
+	// mainly because c.watchCache.processEvent method won't be able to make progress
+	//
+	// moreover even though the c.waitUntilWatchCacheFreshAndForceAllEvents acquires a lock
+	// it is safe to release the lock after the method finishes because we don't require
+	// any atomicity between the call to the method and further calls that actually get the events.
+	forceAllEvents, err := c.waitUntilWatchCacheFreshAndForceAllEvents(ctx, requestedWatchRV, opts)
+	if err != nil {
+		return newErrWatcher(err), nil
+	}
+
 	// We explicitly use thread unsafe version and do locking ourself to ensure that
 	// no new events will be processed in the meantime. The watchCache will be unlocked
 	// on return from this function.
@@ -599,10 +612,7 @@ func (c *Cacher) Watch(ctx context.Context, key string, opts storage.ListOptions
 	// underlying watchCache is calling processEvent under its lock.
 	c.watchCache.RLock()
 	defer c.watchCache.RUnlock()
-	forceAllEvents, err := c.waitUntilWatchCacheFreshAndForceAllEvents(ctx, requestedWatchRV, opts)
-	if err != nil {
-		return newErrWatcher(err), nil
-	}
+
 	startWatchRV := startWatchResourceVersionFn()
 	var cacheInterval *watchCacheInterval
 	if forceAllEvents {
@@ -638,7 +648,7 @@ func (c *Cacher) Watch(ctx context.Context, key string, opts storage.ListOptions
 
 		// Add it to the queue only when the client support watch bookmarks.
 		if watcher.allowWatchBookmarks {
-			c.bookmarkWatchers.addWatcher(watcher)
+			c.bookmarkWatchers.addWatcherThreadUnsafe(watcher)
 		}
 		c.watcherIdx++
 	}()
@@ -716,17 +726,18 @@ func shouldDelegateList(opts storage.ListOptions) bool {
 	pred := opts.Predicate
 	match := opts.ResourceVersionMatch
 	pagingEnabled := utilfeature.DefaultFeatureGate.Enabled(features.APIListChunking)
+	consistentListFromCacheEnabled := utilfeature.DefaultFeatureGate.Enabled(features.ConsistentListFromCache)
+
+	// Serve consistent reads from storage if ConsistentListFromCache is disabled
+	consistentReadFromStorage := resourceVersion == "" && !consistentListFromCacheEnabled
+	// Watch cache doesn't support continuations, so serve them from etcd.
 	hasContinuation := pagingEnabled && len(pred.Continue) > 0
+	// Serve paginated requests about revision "0" from watch cache to avoid overwhelming etcd.
 	hasLimit := pagingEnabled && pred.Limit > 0 && resourceVersion != "0"
+	// Watch cache only supports ResourceVersionMatchNotOlderThan (default).
 	unsupportedMatch := match != "" && match != metav1.ResourceVersionMatchNotOlderThan
 
-	// If resourceVersion is not specified, serve it from underlying
-	// storage (for backward compatibility). If a continuation is
-	// requested, serve it from the underlying storage as well.
-	// Limits are only sent to storage when resourceVersion is non-zero
-	// since the watch cache isn't able to perform continuations, and
-	// limits are ignored when resource version is zero
-	return resourceVersion == "" || hasContinuation || hasLimit || unsupportedMatch
+	return consistentReadFromStorage || hasContinuation || hasLimit || unsupportedMatch
 }
 
 func (c *Cacher) listItems(ctx context.Context, listRV uint64, key string, pred storage.SelectionPredicate, recursive bool) ([]interface{}, uint64, string, error) {
@@ -752,19 +763,21 @@ func (c *Cacher) GetList(ctx context.Context, key string, opts storage.ListOptio
 		return c.storage.GetList(ctx, key, opts, listObj)
 	}
 
-	// If resourceVersion is specified, serve it from cache.
-	// It's guaranteed that the returned value is at least that
-	// fresh as the given resourceVersion.
 	listRV, err := c.versioner.ParseResourceVersion(resourceVersion)
 	if err != nil {
 		return err
 	}
-
 	if listRV == 0 && !c.ready.check() {
 		// If Cacher is not yet initialized and we don't require any specific
 		// minimal resource version, simply forward the request to storage.
return c.storage.GetList(ctx, key, opts, listObj) } + if listRV == 0 && utilfeature.DefaultFeatureGate.Enabled(features.ConsistentListFromCache) { + listRV, err = c.getCurrentResourceVersionFromStorage(ctx) + if err != nil { + return err + } + } ctx, span := tracing.Start(ctx, "cacher list", attribute.String("audit-id", audit.GetAuditIDTruncated(ctx)), @@ -795,24 +808,30 @@ func (c *Cacher) GetList(ctx context.Context, key string, opts storage.ListOptio return err } span.AddEvent("Listed items from cache", attribute.Int("count", len(objs))) - if len(objs) > listVal.Cap() && pred.Label.Empty() && pred.Field.Empty() { - // Resize the slice appropriately, since we already know that none - // of the elements will be filtered out. - listVal.Set(reflect.MakeSlice(reflect.SliceOf(c.objectType.Elem()), 0, len(objs))) - span.AddEvent("Resized result") - } + // Store pointers to the eligible objects first. + // Why not put the objects into listObj.Items directly? + // The elements of listObj.Items are structs, so allocating that slice up front + // can consume excessive memory; we defer it until the result size is known. + var selectedObjects []runtime.Object for _, obj := range objs { elem, ok := obj.(*storeElement) if !ok { return fmt.Errorf("non *storeElement returned from storage: %v", obj) } if filter(elem.Key, elem.Labels, elem.Fields) { - listVal.Set(reflect.Append(listVal, reflect.ValueOf(elem.Object).Elem())) + selectedObjects = append(selectedObjects, elem.Object) } } - if listVal.IsNil() { + if len(selectedObjects) == 0 { // Ensure that we never return a nil Items pointer in the result for consistency.
listVal.Set(reflect.MakeSlice(listVal.Type(), 0, 0)) + } else { + // Resize the slice appropriately, since we already know the size of the result set + listVal.Set(reflect.MakeSlice(listVal.Type(), len(selectedObjects), len(selectedObjects))) + span.AddEvent("Resized result") + for i, o := range selectedObjects { + listVal.Index(i).Set(reflect.ValueOf(o).Elem()) + } } span.AddEvent("Filtered items", attribute.Int("count", listVal.Len())) if c.versioner != nil { @@ -911,9 +930,25 @@ func (c *Cacher) dispatchEvents() { bookmarkTimer.Reset(wait.Jitter(time.Second, 0.25)) // Never send a bookmark event if we did not see an event here, this is fine // because we don't provide any guarantees on sending bookmarks. + // + // Just pop closed watchers and requeue others if needed. + // + // TODO(#115478): rework the following logic + // in a way that would allow more + // efficient cleanup of closed watchers if lastProcessedResourceVersion == 0 { - // pop expired watchers in case there has been no update - c.bookmarkWatchers.popExpiredWatchers() + func() { + c.Lock() + defer c.Unlock() + for _, watchers := range c.bookmarkWatchers.popExpiredWatchersThreadUnsafe() { + for _, watcher := range watchers { + if watcher.stopped { + continue + } + c.bookmarkWatchers.addWatcherThreadUnsafe(watcher) + } + } + }() continue } bookmarkEvent := &watchCacheEvent{ @@ -1035,7 +1070,7 @@ func (c *Cacher) dispatchEvent(event *watchCacheEvent) { func (c *Cacher) startDispatchingBookmarkEventsLocked() { // Pop already expired watchers. However, explicitly ignore stopped ones, // as we don't delete watcher from bookmarkWatchers when it is stopped. - for _, watchers := range c.bookmarkWatchers.popExpiredWatchersThreadUnsafe() { + for _, watchers := range c.bookmarkWatchers.popExpiredWatchersThreadUnsafe() { for _, watcher := range watchers { // c.Lock() is held here.
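The `selectedObjects` change above avoids growing the reflect-backed `Items` slice incrementally: matches are collected as pointers first, then the struct slice is allocated once at its exact final size. A toy sketch of the same two-pass pattern (the `Pod`/`PodList` types here are illustrative, not the cacher's):

```go
package main

import (
	"fmt"
	"reflect"
)

type Pod struct{ Name string }
type PodList struct{ Items []Pod }

// filterInto sketches the GetList change above: collect pointers to the
// matching objects first, then allocate list.Items exactly once at the
// known final size and copy the structs in.
func filterInto(list *PodList, objs []*Pod, keep func(*Pod) bool) {
	// Pass 1: pointers only; no per-item struct copies or reallocations yet.
	var selected []*Pod
	for _, o := range objs {
		if keep(o) {
			selected = append(selected, o)
		}
	}
	// Pass 2: one MakeSlice at len(selected), then set each element.
	listVal := reflect.ValueOf(list).Elem().FieldByName("Items")
	listVal.Set(reflect.MakeSlice(listVal.Type(), len(selected), len(selected)))
	for i, o := range selected {
		listVal.Index(i).Set(reflect.ValueOf(o).Elem())
	}
}

func main() {
	list := &PodList{}
	filterInto(list, []*Pod{{"a"}, {"b"}, {"c"}}, func(p *Pod) bool { return p.Name != "b" })
	fmt.Println(len(list.Items), list.Items[0].Name, list.Items[1].Name) // 2 a c
}
```

Note that `MakeSlice` with length 0 also yields a non-nil slice, which is what the `len(selectedObjects) == 0` branch relies on for a non-nil `Items`.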
// watcher.stopThreadUnsafe() is protected by c.Lock() @@ -1140,7 +1175,7 @@ func (c *Cacher) finishDispatching() { continue } // requeue the watcher for the next bookmark if needed. - c.bookmarkWatchers.addWatcher(watcher) + c.bookmarkWatchers.addWatcherThreadUnsafe(watcher) } c.expiredBookmarkWatchers = c.expiredBookmarkWatchers[:0] } @@ -1309,54 +1344,6 @@ func (c *Cacher) waitUntilWatchCacheFreshAndForceAllEvents(ctx context.Context, return false, nil } -// cacherListerWatcher opaques storage.Interface to expose cache.ListerWatcher. -type cacherListerWatcher struct { - storage storage.Interface - resourcePrefix string - newListFunc func() runtime.Object -} - -// NewCacherListerWatcher returns a storage.Interface backed ListerWatcher. -func NewCacherListerWatcher(storage storage.Interface, resourcePrefix string, newListFunc func() runtime.Object) cache.ListerWatcher { - return &cacherListerWatcher{ - storage: storage, - resourcePrefix: resourcePrefix, - newListFunc: newListFunc, - } -} - -// Implements cache.ListerWatcher interface. -func (lw *cacherListerWatcher) List(options metav1.ListOptions) (runtime.Object, error) { - list := lw.newListFunc() - pred := storage.SelectionPredicate{ - Label: labels.Everything(), - Field: fields.Everything(), - Limit: options.Limit, - Continue: options.Continue, - } - - storageOpts := storage.ListOptions{ - ResourceVersionMatch: options.ResourceVersionMatch, - Predicate: pred, - Recursive: true, - } - if err := lw.storage.GetList(context.TODO(), lw.resourcePrefix, storageOpts, list); err != nil { - return nil, err - } - return list, nil -} - -// Implements cache.ListerWatcher interface. 
-func (lw *cacherListerWatcher) Watch(options metav1.ListOptions) (watch.Interface, error) { - opts := storage.ListOptions{ - ResourceVersion: options.ResourceVersion, - Predicate: storage.Everything, - Recursive: true, - ProgressNotify: true, - } - return lw.storage.Watch(context.TODO(), lw.resourcePrefix, opts) -} - // errWatcher implements watch.Interface to return a single error type errWatcher struct { result chan watch.Event diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storage/cacher/caching_object.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storage/cacher/caching_object.go index 258efed84258..e2e2aa5e79d0 100644 --- a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storage/cacher/caching_object.go +++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storage/cacher/caching_object.go @@ -148,6 +148,10 @@ func (o *cachingObject) CacheEncode(id runtime.Identifier, encode func(runtime.O if result.err != nil { return result.err } + if b, support := w.(runtime.Splice); support { + b.Splice(result.raw) + return nil + } _, err := w.Write(result.raw) return err } diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storage/cacher/lister_watcher.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storage/cacher/lister_watcher.go new file mode 100644 index 000000000000..1252e5e34959 --- /dev/null +++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storage/cacher/lister_watcher.go @@ -0,0 +1,77 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+See the License for the specific language governing permissions and +limitations under the License. +*/ + +package cacher + +import ( + "context" + + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/apimachinery/pkg/fields" + "k8s.io/apimachinery/pkg/labels" + "k8s.io/apimachinery/pkg/runtime" + "k8s.io/apimachinery/pkg/watch" + "k8s.io/apiserver/pkg/storage" + "k8s.io/client-go/tools/cache" +) + +// listerWatcher opaques storage.Interface to expose cache.ListerWatcher. +type listerWatcher struct { + storage storage.Interface + resourcePrefix string + newListFunc func() runtime.Object +} + +// NewListerWatcher returns a storage.Interface backed ListerWatcher. +func NewListerWatcher(storage storage.Interface, resourcePrefix string, newListFunc func() runtime.Object) cache.ListerWatcher { + return &listerWatcher{ + storage: storage, + resourcePrefix: resourcePrefix, + newListFunc: newListFunc, + } +} + +// Implements cache.ListerWatcher interface. +func (lw *listerWatcher) List(options metav1.ListOptions) (runtime.Object, error) { + list := lw.newListFunc() + pred := storage.SelectionPredicate{ + Label: labels.Everything(), + Field: fields.Everything(), + Limit: options.Limit, + Continue: options.Continue, + } + + storageOpts := storage.ListOptions{ + ResourceVersionMatch: options.ResourceVersionMatch, + Predicate: pred, + Recursive: true, + } + if err := lw.storage.GetList(context.TODO(), lw.resourcePrefix, storageOpts, list); err != nil { + return nil, err + } + return list, nil +} + +// Implements cache.ListerWatcher interface. 
+func (lw *listerWatcher) Watch(options metav1.ListOptions) (watch.Interface, error) { + opts := storage.ListOptions{ + ResourceVersion: options.ResourceVersion, + Predicate: storage.Everything, + Recursive: true, + ProgressNotify: true, + } + return lw.storage.Watch(context.TODO(), lw.resourcePrefix, opts) +} diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storage/cacher/watch_cache.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storage/cacher/watch_cache.go index 4d86018e5208..c26eb55dac44 100644 --- a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storage/cacher/watch_cache.go +++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storage/cacher/watch_cache.go @@ -30,8 +30,10 @@ import ( "k8s.io/apimachinery/pkg/runtime" "k8s.io/apimachinery/pkg/runtime/schema" "k8s.io/apimachinery/pkg/watch" + "k8s.io/apiserver/pkg/features" "k8s.io/apiserver/pkg/storage" "k8s.io/apiserver/pkg/storage/cacher/metrics" + utilfeature "k8s.io/apiserver/pkg/util/feature" "k8s.io/client-go/tools/cache" "k8s.io/component-base/tracing" "k8s.io/klog/v2" @@ -196,6 +198,10 @@ type watchCache struct { // For testing cache interval invalidation. 
indexValidator indexValidator + + // Requests progress notification if there are requests waiting for watch + // to be fresh + waitingUntilFresh *conditionalProgressRequester } func newWatchCache( @@ -204,8 +210,9 @@ func newWatchCache( getAttrsFunc func(runtime.Object) (labels.Set, fields.Set, error), versioner storage.Versioner, indexers *cache.Indexers, - clock clock.Clock, - groupResource schema.GroupResource) *watchCache { + clock clock.WithTicker, + groupResource schema.GroupResource, + progressRequester *conditionalProgressRequester) *watchCache { wc := &watchCache{ capacity: defaultLowerBoundCapacity, keyFunc: keyFunc, @@ -222,6 +229,7 @@ func newWatchCache( clock: clock, versioner: versioner, groupResource: groupResource, + waitingUntilFresh: progressRequester, } metrics.WatchCacheCapacity.WithLabelValues(groupResource.String()).Set(float64(wc.capacity)) wc.cond = sync.NewCond(wc.RLocker()) @@ -305,7 +313,7 @@ func (w *watchCache) processEvent(event watch.Event, resourceVersion uint64, upd if err := func() error { // TODO: We should consider moving this lock below after the watchCacheEvent - // is created. In such situation, the only problematic scenario is Replace( + // is created. In such situation, the only problematic scenario is Replace() // happening after getting object from store and before acquiring a lock. // Maybe introduce another lock for this purpose. w.Lock() @@ -406,6 +414,7 @@ func (w *watchCache) UpdateResourceVersion(resourceVersion string) { w.Lock() defer w.Unlock() w.resourceVersion = rv + w.cond.Broadcast() }() // Avoid calling event handler under lock. @@ -484,7 +493,14 @@ func (s sortableStoreElements) Swap(i, j int) { // WaitUntilFreshAndList returns list of pointers to `storeElement` objects along // with their ResourceVersion and the name of the index, if any, that was used. 
func (w *watchCache) WaitUntilFreshAndList(ctx context.Context, resourceVersion uint64, matchValues []storage.MatchValue) ([]interface{}, uint64, string, error) { - err := w.waitUntilFreshAndBlock(ctx, resourceVersion) + var err error + if utilfeature.DefaultFeatureGate.Enabled(features.ConsistentListFromCache) && w.notFresh(resourceVersion) { + w.waitingUntilFresh.Add() + err = w.waitUntilFreshAndBlock(ctx, resourceVersion) + w.waitingUntilFresh.Remove() + } else { + err = w.waitUntilFreshAndBlock(ctx, resourceVersion) + } defer w.RUnlock() if err != nil { return nil, 0, "", err @@ -507,6 +523,12 @@ func (w *watchCache) WaitUntilFreshAndList(ctx context.Context, resourceVersion return result, rv, index, err } +func (w *watchCache) notFresh(resourceVersion uint64) bool { + w.RLock() + defer w.RUnlock() + return resourceVersion > w.resourceVersion +} + // WaitUntilFreshAndGet returns a pointers to object. func (w *watchCache) WaitUntilFreshAndGet(ctx context.Context, resourceVersion uint64, key string) (interface{}, bool, uint64, error) { err := w.waitUntilFreshAndBlock(ctx, resourceVersion) @@ -608,8 +630,8 @@ func (w *watchCache) Resync() error { } func (w *watchCache) currentCapacity() int { - w.Lock() - defer w.Unlock() + w.RLock() + defer w.RUnlock() return w.capacity } diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storage/cacher/watch_progress.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storage/cacher/watch_progress.go new file mode 100644 index 000000000000..f44ca9325b88 --- /dev/null +++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storage/cacher/watch_progress.go @@ -0,0 +1,121 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. 
+You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package cacher + +import ( + "context" + "sync" + "time" + + utilruntime "k8s.io/apimachinery/pkg/util/runtime" + "k8s.io/apimachinery/pkg/util/wait" + + "k8s.io/klog/v2" + "k8s.io/utils/clock" +) + +const ( + // progressRequestPeriod determines period of requesting progress + // from etcd when there is a request waiting for watch cache to be fresh. + progressRequestPeriod = 100 * time.Millisecond +) + +func newConditionalProgressRequester(requestWatchProgress WatchProgressRequester, clock TickerFactory) *conditionalProgressRequester { + pr := &conditionalProgressRequester{ + clock: clock, + requestWatchProgress: requestWatchProgress, + } + pr.cond = sync.NewCond(pr.mux.RLocker()) + return pr +} + +type WatchProgressRequester func(ctx context.Context) error + +type TickerFactory interface { + NewTicker(time.Duration) clock.Ticker +} + +// conditionalProgressRequester will request progress notification if there +// is a request waiting for watch cache to be fresh. 
+type conditionalProgressRequester struct { + clock TickerFactory + requestWatchProgress WatchProgressRequester + + mux sync.RWMutex + cond *sync.Cond + waiting int + stopped bool +} + +func (pr *conditionalProgressRequester) Run(stopCh <-chan struct{}) { + ctx := wait.ContextForChannel(stopCh) + go func() { + defer utilruntime.HandleCrash() + <-stopCh + pr.mux.Lock() + defer pr.mux.Unlock() + pr.stopped = true + pr.cond.Signal() + }() + ticker := pr.clock.NewTicker(progressRequestPeriod) + defer ticker.Stop() + for { + stopped := func() bool { + pr.mux.RLock() + defer pr.mux.RUnlock() + for pr.waiting == 0 && !pr.stopped { + pr.cond.Wait() + } + return pr.stopped + }() + if stopped { + return + } + + select { + case <-ticker.C(): + shouldRequest := func() bool { + pr.mux.RLock() + defer pr.mux.RUnlock() + return pr.waiting > 0 && !pr.stopped + }() + if !shouldRequest { + continue + } + err := pr.requestWatchProgress(ctx) + if err != nil { + klog.V(4).InfoS("Error requesting bookmark", "err", err) + } + case <-stopCh: + return + } + } +} + +func (pr *conditionalProgressRequester) Add() { + pr.mux.Lock() + defer pr.mux.Unlock() + pr.waiting += 1 + pr.cond.Signal() +} + +func (pr *conditionalProgressRequester) Remove() { + pr.mux.Lock() + defer pr.mux.Unlock() + pr.waiting -= 1 + pr.cond.Signal() +} diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storage/etcd3/healthcheck.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storage/etcd3/healthcheck.go index ad051d2d6cdf..3d4898103789 100644 --- a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storage/etcd3/healthcheck.go +++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storage/etcd3/healthcheck.go @@ -28,6 +28,7 @@ type etcdHealth struct { } // EtcdHealthCheck decodes data returned from etcd /healthz handler. +// Deprecated: Validate health by passing storagebackend.Config directly to storagefactory.CreateProber. 
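The `conditionalProgressRequester` above wakes its worker loop only while at least one request is waiting for the watch cache to be fresh, using a `sync.Cond` built on the read side of an `RWMutex`. A stripped-down sketch of just that gating bookkeeping (the ticker and the actual progress request are omitted; `waiterGate` is an illustrative name, not the vendored type):

```go
package main

import (
	"fmt"
	"sync"
)

// waiterGate sketches the conditionalProgressRequester bookkeeping: a
// worker blocks on the cond until there is at least one waiter or stop.
type waiterGate struct {
	mux     sync.RWMutex
	cond    *sync.Cond
	waiting int
	stopped bool
}

func newWaiterGate() *waiterGate {
	g := &waiterGate{}
	// Wait() releases/reacquires the read lock, matching the vendored code.
	g.cond = sync.NewCond(g.mux.RLocker())
	return g
}

func (g *waiterGate) Add() {
	g.mux.Lock()
	g.waiting++
	g.mux.Unlock()
	g.cond.Signal()
}

func (g *waiterGate) Remove() {
	g.mux.Lock()
	g.waiting--
	g.mux.Unlock()
	g.cond.Signal()
}

func (g *waiterGate) Stop() {
	g.mux.Lock()
	g.stopped = true
	g.mux.Unlock()
	g.cond.Signal()
}

// waitForWork blocks until there is a waiter or the gate is stopped,
// returning true if the worker should keep running.
func (g *waiterGate) waitForWork() bool {
	g.mux.RLock()
	defer g.mux.RUnlock()
	for g.waiting == 0 && !g.stopped {
		g.cond.Wait()
	}
	return !g.stopped
}

func main() {
	g := newWaiterGate()
	done := make(chan bool)
	go func() { done <- g.waitForWork() }()
	g.Add() // a request is now waiting; the worker wakes up
	fmt.Println(<-done)
	g.Remove()
	g.Stop()
	fmt.Println(g.waitForWork())
}
```

Building the cond on `RLocker()` lets `waitForWork` hold only a read lock while sleeping, so `Add`/`Remove`/`Stop` (which take the write lock) can still make progress.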
func EtcdHealthCheck(data []byte) error { obj := etcdHealth{} if err := json.Unmarshal(data, &obj); err != nil { diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storage/etcd3/metrics/metrics.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storage/etcd3/metrics/metrics.go index 6f155c0adb29..ac023d55d8c1 100644 --- a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storage/etcd3/metrics/metrics.go +++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storage/etcd3/metrics/metrics.go @@ -17,11 +17,14 @@ limitations under the License. package metrics import ( + "context" + "fmt" "sync" "time" compbasemetrics "k8s.io/component-base/metrics" "k8s.io/component-base/metrics/legacyregistry" + "k8s.io/klog/v2" ) /* @@ -47,6 +50,22 @@ var ( }, []string{"operation", "type"}, ) + etcdRequestCounts = compbasemetrics.NewCounterVec( + &compbasemetrics.CounterOpts{ + Name: "etcd_requests_total", + Help: "Etcd request counts for each operation and object type.", + StabilityLevel: compbasemetrics.ALPHA, + }, + []string{"operation", "type"}, + ) + etcdRequestErrorCounts = compbasemetrics.NewCounterVec( + &compbasemetrics.CounterOpts{ + Name: "etcd_request_errors_total", + Help: "Etcd failed request counts for each operation and object type.", + StabilityLevel: compbasemetrics.ALPHA, + }, + []string{"operation", "type"}, + ) objectCounts = compbasemetrics.NewGaugeVec( &compbasemetrics.GaugeOpts{ Name: "apiserver_storage_objects", @@ -57,13 +76,16 @@ var ( ) dbTotalSize = compbasemetrics.NewGaugeVec( &compbasemetrics.GaugeOpts{ - Subsystem: "apiserver", - Name: "storage_db_total_size_in_bytes", - Help: "Total size of the storage database file physically allocated in bytes.", - StabilityLevel: compbasemetrics.ALPHA, + Subsystem: "apiserver", + Name: "storage_db_total_size_in_bytes", + Help: "Total size of the storage database file physically allocated in bytes.", + StabilityLevel: compbasemetrics.ALPHA, + DeprecatedVersion: "1.28.0", }, []string{"endpoint"}, ) + 
storageSizeDescription = compbasemetrics.NewDesc("apiserver_storage_size_bytes", "Size of the storage database file physically allocated in bytes.", []string{"cluster"}, nil, compbasemetrics.ALPHA, "") + storageMonitor = &monitorCollector{monitorGetter: func() ([]Monitor, error) { return nil, nil }} etcdEventsReceivedCounts = compbasemetrics.NewCounterVec( &compbasemetrics.CounterOpts{ Subsystem: "apiserver", @@ -140,8 +162,11 @@ func Register() { // Register the metrics. registerMetrics.Do(func() { legacyregistry.MustRegister(etcdRequestLatency) + legacyregistry.MustRegister(etcdRequestCounts) + legacyregistry.MustRegister(etcdRequestErrorCounts) legacyregistry.MustRegister(objectCounts) legacyregistry.MustRegister(dbTotalSize) + legacyregistry.CustomMustRegister(storageMonitor) legacyregistry.MustRegister(etcdBookmarkCounts) legacyregistry.MustRegister(etcdLeaseObjectCounts) legacyregistry.MustRegister(listStorageCount) @@ -157,9 +182,15 @@ func UpdateObjectCount(resourcePrefix string, count int64) { objectCounts.WithLabelValues(resourcePrefix).Set(float64(count)) } -// RecordEtcdRequestLatency sets the etcd_request_duration_seconds metrics. -func RecordEtcdRequestLatency(verb, resource string, startTime time.Time) { - etcdRequestLatency.WithLabelValues(verb, resource).Observe(sinceInSeconds(startTime)) +// RecordEtcdRequest updates and sets the etcd_request_duration_seconds, +// etcd_request_total, etcd_request_errors_total metrics. +func RecordEtcdRequest(verb, resource string, err error, startTime time.Time) { + v := []string{verb, resource} + etcdRequestLatency.WithLabelValues(v...).Observe(sinceInSeconds(startTime)) + etcdRequestCounts.WithLabelValues(v...).Inc() + if err != nil { + etcdRequestErrorCounts.WithLabelValues(v...).Inc() + } } // RecordEtcdEvent updated the etcd_events_received_total metric. @@ -183,15 +214,23 @@ func Reset() { } // sinceInSeconds gets the time since the specified start in seconds. 
-func sinceInSeconds(start time.Time) float64 { +// +// This is a variable to facilitate testing. +var sinceInSeconds = func(start time.Time) float64 { return time.Since(start).Seconds() } // UpdateEtcdDbSize sets the etcd_db_total_size_in_bytes metric. +// Deprecated: Metric etcd_db_total_size_in_bytes will be replaced with apiserver_storage_size_bytes func UpdateEtcdDbSize(ep string, size int64) { dbTotalSize.WithLabelValues(ep).Set(float64(size)) } +// SetStorageMonitorGetter sets the monitor getter to allow monitoring etcd stats. +func SetStorageMonitorGetter(getter func() ([]Monitor, error)) { + storageMonitor.monitorGetter = getter +} + // UpdateLeaseObjectCount sets the etcd_lease_object_counts metric. func UpdateLeaseObjectCount(count int64) { // Currently we only store one previous lease, since all the events have the same ttl. @@ -206,3 +245,51 @@ func RecordStorageListMetrics(resource string, numFetched, numEvald, numReturned listStorageNumSelectorEvals.WithLabelValues(resource).Add(float64(numEvald)) listStorageNumReturned.WithLabelValues(resource).Add(float64(numReturned)) } + +type Monitor interface { + Monitor(ctx context.Context) (StorageMetrics, error) + Close() error +} + +type StorageMetrics struct { + Size int64 +} + +type monitorCollector struct { + compbasemetrics.BaseStableCollector + + monitorGetter func() ([]Monitor, error) +} + +// DescribeWithStability implements compbasemetrics.StableCollector +func (c *monitorCollector) DescribeWithStability(ch chan<- *compbasemetrics.Desc) { + ch <- storageSizeDescription +} + +// CollectWithStability implements compbasemetrics.StableCollector +func (c *monitorCollector) CollectWithStability(ch chan<- compbasemetrics.Metric) { + monitors, err := c.monitorGetter() + if err != nil { + return + } + + for i, m := range monitors { + cluster := fmt.Sprintf("etcd-%d", i) + + klog.V(4).InfoS("Start collecting storage metrics", "cluster", cluster) + ctx, cancel := context.WithTimeout(context.Background(),
time.Second) + metrics, err := m.Monitor(ctx) + cancel() + m.Close() + if err != nil { + klog.InfoS("Failed to get storage metrics", "cluster", cluster, "err", err) + continue + } + + metric, err := compbasemetrics.NewConstMetric(storageSizeDescription, compbasemetrics.GaugeValue, float64(metrics.Size), cluster) + if err != nil { + klog.ErrorS(err, "Failed to create metric", "cluster", cluster) + } + ch <- metric + } +} diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storage/etcd3/store.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storage/etcd3/store.go index 2fc237de3314..7374152239ce 100644 --- a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storage/etcd3/store.go +++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storage/etcd3/store.go @@ -85,6 +85,12 @@ type store struct { leaseManager *leaseManager } +func (s *store) RequestWatchProgress(ctx context.Context) error { + // Use watchContext to match the ctx metadata provided when creating the watch. + // In the best-case scenario we would use the same context the watch was created with, but there is no way to access it from watchCache.
+ return s.client.RequestProgress(s.watchContext(ctx)) +} + type objState struct { obj runtime.Object meta *storage.ResponseMeta @@ -136,7 +142,7 @@ func (s *store) Get(ctx context.Context, key string, opts storage.GetOptions, ou } startTime := time.Now() getResp, err := s.client.KV.Get(ctx, preparedKey) - metrics.RecordEtcdRequestLatency("get", s.groupResourceString, startTime) + metrics.RecordEtcdRequest("get", s.groupResourceString, err, startTime) if err != nil { return err } @@ -210,7 +216,7 @@ func (s *store) Create(ctx context.Context, key string, obj, out runtime.Object, ).Then( clientv3.OpPut(preparedKey, string(newData), opts...), ).Commit() - metrics.RecordEtcdRequestLatency("create", s.groupResourceString, startTime) + metrics.RecordEtcdRequest("create", s.groupResourceString, err, startTime) if err != nil { span.AddEvent("Txn call failed", attribute.String("err", err.Error())) return err @@ -255,7 +261,7 @@ func (s *store) conditionalDelete( getCurrentState := func() (*objState, error) { startTime := time.Now() getResp, err := s.client.KV.Get(ctx, key) - metrics.RecordEtcdRequestLatency("get", s.groupResourceString, startTime) + metrics.RecordEtcdRequest("get", s.groupResourceString, err, startTime) if err != nil { return nil, err } @@ -337,7 +343,7 @@ func (s *store) conditionalDelete( ).Else( clientv3.OpGet(key), ).Commit() - metrics.RecordEtcdRequestLatency("delete", s.groupResourceString, startTime) + metrics.RecordEtcdRequest("delete", s.groupResourceString, err, startTime) if err != nil { return err } @@ -391,7 +397,7 @@ func (s *store) GuaranteedUpdate( getCurrentState := func() (*objState, error) { startTime := time.Now() getResp, err := s.client.KV.Get(ctx, preparedKey) - metrics.RecordEtcdRequestLatency("get", s.groupResourceString, startTime) + metrics.RecordEtcdRequest("get", s.groupResourceString, err, startTime) if err != nil { return nil, err } @@ -512,7 +518,7 @@ func (s *store) GuaranteedUpdate( ).Else( clientv3.OpGet(preparedKey), 
).Commit() - metrics.RecordEtcdRequestLatency("update", s.groupResourceString, startTime) + metrics.RecordEtcdRequest("update", s.groupResourceString, err, startTime) if err != nil { span.AddEvent("Txn call failed", attribute.String("err", err.Error())) return err @@ -575,7 +581,7 @@ func (s *store) Count(key string) (int64, error) { startTime := time.Now() getResp, err := s.client.KV.Get(context.Background(), preparedKey, clientv3.WithRange(clientv3.GetPrefixRangeEnd(preparedKey)), clientv3.WithCountOnly()) - metrics.RecordEtcdRequestLatency("listWithCount", preparedKey, startTime) + metrics.RecordEtcdRequest("listWithCount", preparedKey, err, startTime) if err != nil { return 0, err } @@ -720,14 +726,16 @@ func (s *store) GetList(ctx context.Context, key string, opts storage.ListOption numReturn := v.Len() metrics.RecordStorageListMetrics(s.groupResourceString, numFetched, numEvald, numReturn) }() + + metricsOp := "get" + if recursive { + metricsOp = "list" + } + for { startTime := time.Now() getResp, err = s.client.KV.Get(ctx, preparedKey, options...) - if recursive { - metrics.RecordEtcdRequestLatency("list", s.groupResourceString, startTime) - } else { - metrics.RecordEtcdRequestLatency("get", s.groupResourceString, startTime) - } + metrics.RecordEtcdRequest(metricsOp, s.groupResourceString, err, startTime) if err != nil { return interpretListError(err, len(pred.Continue) > 0, continueKey, keyPrefix) } @@ -863,8 +871,12 @@ func growSlice(v reflect.Value, maxCapacity int, sizes ...int) { } // Watch implements storage.Interface.Watch. +// TODO(#115478): In order to graduate the WatchList feature to beta, the etcd3 implementation must/should also support it. 
func (s *store) Watch(ctx context.Context, key string, opts storage.ListOptions) (watch.Interface, error) { - if opts.SendInitialEvents != nil { + // it is safe to skip SendInitialEvents if the request is backward compatible + // see https://github.com/kubernetes/kubernetes/blob/267eb25e60955fe8e438c6311412e7cf7d028acb/staging/src/k8s.io/apiserver/pkg/storage/etcd3/watcher.go#L260 + compatibility := opts.Predicate.AllowWatchBookmarks == false && (opts.ResourceVersion == "" || opts.ResourceVersion == "0") + if opts.SendInitialEvents != nil && !compatibility { return nil, apierrors.NewInvalid( schema.GroupKind{Group: s.groupResource.Group, Kind: s.groupResource.Resource}, "", @@ -879,7 +891,18 @@ func (s *store) Watch(ctx context.Context, key string, opts storage.ListOptions) if err != nil { return nil, err } - return s.watcher.Watch(ctx, preparedKey, int64(rev), opts.Recursive, opts.ProgressNotify, s.transformer, opts.Predicate) + return s.watcher.Watch(s.watchContext(ctx), preparedKey, int64(rev), opts.Recursive, opts.ProgressNotify, s.transformer, opts.Predicate) +} + +func (s *store) watchContext(ctx context.Context) context.Context { + // The etcd server waits until it cannot find a leader for 3 election + // timeouts to cancel existing streams. 3 is currently a hard coded + // constant. The election timeout defaults to 1000ms. If the cluster is + // healthy, when the leader is stopped, the leadership transfer should be + // smooth. (leader transfers its leadership before stopping). If leader is + // hard killed, other servers will take an election timeout to realize + // leader lost and start campaign. 
+ return clientv3.WithRequireLeader(ctx) } func (s *store) getState(ctx context.Context, getResp *clientv3.GetResponse, key string, v reflect.Value, ignoreNotFound bool) (*objState, error) { diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storage/etcd3/watcher.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storage/etcd3/watcher.go index 49d9005fc641..d4929bd9d82a 100644 --- a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storage/etcd3/watcher.go +++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storage/etcd3/watcher.go @@ -144,15 +144,7 @@ func (w *watcher) createWatchChan(ctx context.Context, key string, rev int64, re // The filter doesn't filter out any object. wc.internalPred = storage.Everything } - - // The etcd server waits until it cannot find a leader for 3 election - // timeouts to cancel existing streams. 3 is currently a hard coded - // constant. The election timeout defaults to 1000ms. If the cluster is - // healthy, when the leader is stopped, the leadership transfer should be - // smooth. (leader transfers its leadership before stopping). If leader is - // hard killed, other servers will take an election timeout to realize - // leader lost and start campaign. - wc.ctx, wc.cancel = context.WithCancel(clientv3.WithRequireLeader(ctx)) + wc.ctx, wc.cancel = context.WithCancel(ctx) return wc } @@ -223,6 +215,10 @@ func (wc *watchChan) ResultChan() <-chan watch.Event { return wc.resultChan } +func (wc *watchChan) RequestWatchProgress() error { + return wc.watcher.client.RequestProgress(wc.ctx) +} + // sync tries to retrieve existing data and send them to process. // The revision to watch will be set to the revision in response. 
// All events sent will have isCreated=true diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storage/interfaces.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storage/interfaces.go index daf30a242f55..76123fde8643 100644 --- a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storage/interfaces.go +++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storage/interfaces.go @@ -236,6 +236,21 @@ type Interface interface { // Count returns number of different entries under the key (generally being path prefix). Count(key string) (int64, error) + + // RequestWatchProgress requests that a watch stream progress status be sent in the + // watch response stream as soon as possible. + // Used to monitor watch progress even when watching resources with no changes. + // + // If the watch is lagging, the progress status might: + // * point to a stale resource version. Use an etcd KV request to get a linearizable resource version. + // * not be delivered at all. It's recommended to poll request progress periodically. + // + // Note: Only watches with matching context grpc metadata will be notified. + // https://github.com/kubernetes/kubernetes/blob/9325a57125e8502941d1b0c7379c4bb80a678d5c/vendor/go.etcd.io/etcd/client/v3/watch.go#L1037-L1042 + // + // TODO: Remove when storage.Interface is separated from etcd3.store. + // Deprecated: Added temporarily to simplify exposing RequestProgress for the watch cache. + RequestWatchProgress(ctx context.Context) error } // GetOptions provides the options that may be provided for storage get operations.
diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storage/storagebackend/OWNERS b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storage/storagebackend/OWNERS index c29de755d0b8..7b8dfb623fc9 100644 --- a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storage/storagebackend/OWNERS +++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storage/storagebackend/OWNERS @@ -1,6 +1,5 @@ # See the OWNERS docs at https://go.k8s.io/owners reviewers: - - lavalamp - smarterclayton - wojtek-t diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storage/storagebackend/factory/etcd3.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storage/storagebackend/factory/etcd3.go index c17859649565..5736abf63c4d 100644 --- a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storage/storagebackend/factory/etcd3.go +++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storage/storagebackend/factory/etcd3.go @@ -20,6 +20,7 @@ import ( "context" "fmt" "log" + "math/rand" "net" "net/url" "os" @@ -37,6 +38,7 @@ import ( "go.uber.org/zap/zapcore" "golang.org/x/time/rate" "google.golang.org/grpc" + "k8s.io/klog/v2" "k8s.io/apimachinery/pkg/runtime" utilnet "k8s.io/apimachinery/pkg/util/net" @@ -52,7 +54,6 @@ import ( utilfeature "k8s.io/apiserver/pkg/util/feature" "k8s.io/component-base/metrics/legacyregistry" tracing "k8s.io/component-base/tracing" - "k8s.io/klog/v2" ) const ( @@ -153,18 +154,18 @@ func newETCD3Check(c storagebackend.Config, timeout time.Duration, stopCh <-chan // retry in a loop in the background until we successfully create the client, storing the client or error encountered lock := sync.RWMutex{} - var client *clientv3.Client + var prober *etcd3ProberMonitor clientErr := fmt.Errorf("etcd client connection not yet established") go wait.PollUntil(time.Second, func() (bool, error) { - newClient, err := newETCD3Client(c.Transport) + newProber, err := newETCD3ProberMonitor(c) lock.Lock() defer lock.Unlock() // Ensure that server is already not shutting down. 
select { case <-stopCh: if err == nil { - newClient.Close() + newProber.Close() } return true, nil default: @@ -173,7 +174,7 @@ func newETCD3Check(c storagebackend.Config, timeout time.Duration, stopCh <-chan clientErr = err return false, nil } - client = newClient + prober = newProber clientErr = nil return true, nil }, stopCh) @@ -185,8 +186,8 @@ func newETCD3Check(c storagebackend.Config, timeout time.Duration, stopCh <-chan lock.Lock() defer lock.Unlock() - if client != nil { - client.Close() + if prober != nil { + prober.Close() clientErr = fmt.Errorf("server is shutting down") } }() @@ -214,17 +215,73 @@ func newETCD3Check(c storagebackend.Config, timeout time.Duration, stopCh <-chan } ctx, cancel := context.WithTimeout(context.Background(), timeout) defer cancel() - // See https://github.com/etcd-io/etcd/blob/c57f8b3af865d1b531b979889c602ba14377420e/etcdctl/ctlv3/command/ep_command.go#L118 now := time.Now() - _, err := client.Get(ctx, path.Join("/", c.Prefix, "health")) - if err != nil { - err = fmt.Errorf("error getting data from etcd: %w", err) - } + err := prober.Probe(ctx) lastError.Store(err, now) return err }, nil } +func newETCD3ProberMonitor(c storagebackend.Config) (*etcd3ProberMonitor, error) { + client, err := newETCD3Client(c.Transport) + if err != nil { + return nil, err + } + return &etcd3ProberMonitor{ + client: client, + prefix: c.Prefix, + endpoints: c.Transport.ServerList, + }, nil +} + +type etcd3ProberMonitor struct { + prefix string + endpoints []string + + mux sync.RWMutex + client *clientv3.Client + closed bool +} + +func (t *etcd3ProberMonitor) Close() error { + t.mux.Lock() + defer t.mux.Unlock() + if !t.closed { + t.closed = true + return t.client.Close() + } + return fmt.Errorf("closed") +} + +func (t *etcd3ProberMonitor) Probe(ctx context.Context) error { + t.mux.RLock() + defer t.mux.RUnlock() + if t.closed { + return fmt.Errorf("closed") + } + // See 
https://github.com/etcd-io/etcd/blob/c57f8b3af865d1b531b979889c602ba14377420e/etcdctl/ctlv3/command/ep_command.go#L118 + _, err := t.client.Get(ctx, path.Join("/", t.prefix, "health")) + if err != nil { + return fmt.Errorf("error getting data from etcd: %w", err) + } + return nil +} + +func (t *etcd3ProberMonitor) Monitor(ctx context.Context) (metrics.StorageMetrics, error) { + t.mux.RLock() + defer t.mux.RUnlock() + if t.closed { + return metrics.StorageMetrics{}, fmt.Errorf("closed") + } + status, err := t.client.Status(ctx, t.endpoints[rand.Int()%len(t.endpoints)]) + if err != nil { + return metrics.StorageMetrics{}, err + } + return metrics.StorageMetrics{ + Size: status.DbSize, + }, nil +} + var newETCD3Client = func(c storagebackend.TransportConfig) (*clientv3.Client, error) { tlsInfo := transport.TLSInfo{ CertFile: c.CertFile, @@ -402,6 +459,7 @@ func newETCD3Storage(c storagebackend.ConfigForResource, newFunc func() runtime. // startDBSizeMonitorPerEndpoint starts a loop to monitor etcd database size and update the // corresponding metric etcd_db_total_size_in_bytes for each etcd server endpoint. +// Deprecated: Will be replaced with newETCD3ProberMonitor func startDBSizeMonitorPerEndpoint(client *clientv3.Client, interval time.Duration) (func(), error) { if interval == 0 { return func() {}, nil diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storage/storagebackend/factory/factory.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storage/storagebackend/factory/factory.go index 4c8a409d659c..1a60c92902cb 100644 --- a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storage/storagebackend/factory/factory.go +++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storage/storagebackend/factory/factory.go @@ -17,10 +17,12 @@ limitations under the License. 
package factory import ( + "context" "fmt" "k8s.io/apimachinery/pkg/runtime" "k8s.io/apiserver/pkg/storage" + "k8s.io/apiserver/pkg/storage/etcd3/metrics" "k8s.io/apiserver/pkg/storage/storagebackend" ) @@ -61,3 +63,31 @@ func CreateReadyCheck(c storagebackend.Config, stopCh <-chan struct{}) (func() e return nil, fmt.Errorf("unknown storage type: %s", c.Type) } } + +func CreateProber(c storagebackend.Config) (Prober, error) { + switch c.Type { + case storagebackend.StorageTypeETCD2: + return nil, fmt.Errorf("%s is no longer a supported storage backend", c.Type) + case storagebackend.StorageTypeUnset, storagebackend.StorageTypeETCD3: + return newETCD3ProberMonitor(c) + default: + return nil, fmt.Errorf("unknown storage type: %s", c.Type) + } +} + +func CreateMonitor(c storagebackend.Config) (metrics.Monitor, error) { + switch c.Type { + case storagebackend.StorageTypeETCD2: + return nil, fmt.Errorf("%s is no longer a supported storage backend", c.Type) + case storagebackend.StorageTypeUnset, storagebackend.StorageTypeETCD3: + return newETCD3ProberMonitor(c) + default: + return nil, fmt.Errorf("unknown storage type: %s", c.Type) + } +} + +// Prober is an interface that defines the Probe function for doing etcd readiness/liveness checks. +type Prober interface { + Probe(ctx context.Context) error + Close() error +} diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storage/value/encrypt/aes/aes.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storage/value/encrypt/aes/aes.go index b26c92e2d556..39469e9c6664 100644 --- a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storage/value/encrypt/aes/aes.go +++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storage/value/encrypt/aes/aes.go @@ -34,33 +34,11 @@ import ( "k8s.io/klog/v2" ) -type gcm struct { - aead cipher.AEAD - nonceFunc func([]byte) error -} - -// NewGCMTransformer takes the given block cipher and performs encryption and decryption on the given data. 
-// It implements AEAD encryption of the provided values given a cipher.Block algorithm. -// The authenticated data provided as part of the value.Context method must match when the same -// value is set to and loaded from storage. In order to ensure that values cannot be copied by -// an attacker from a location under their control, use characteristics of the storage location -// (such as the etcd key) as part of the authenticated data. -// -// Because this mode requires a generated IV and IV reuse is a known weakness of AES-GCM, keys -// must be rotated before a birthday attack becomes feasible. NIST SP 800-38D -// (http://csrc.nist.gov/publications/nistpubs/800-38D/SP-800-38D.pdf) recommends using the same -// key with random 96-bit nonces (the default nonce length) no more than 2^32 times, and -// therefore transformers using this implementation *must* ensure they allow for frequent key -// rotation. Future work should include investigation of AES-GCM-SIV as an alternative to -// random nonces. -func NewGCMTransformer(block cipher.Block) (value.Transformer, error) { - aead, err := newGCM(block) - if err != nil { - return nil, err - } +// commonSize is the length of various security sensitive byte slices such as encryption keys. +// Do not change this value. It would be a backward incompatible change. +const commonSize = 32 - return &gcm{aead: aead, nonceFunc: randomNonce}, nil -} +const keySizeCounterNonceGCM = commonSize // NewGCMTransformerWithUniqueKeyUnsafe is the same as NewGCMTransformer but is unsafe for general // use because it makes assumptions about the key underlying the block cipher. Specifically, @@ -78,7 +56,7 @@ func NewGCMTransformer(block cipher.Block) (value.Transformer, error) { // it can be passed to NewGCMTransformer(aes.NewCipher(key)) to construct a transformer capable // of decrypting values encrypted by this transformer (that transformer must not be used for encryption). 
func NewGCMTransformerWithUniqueKeyUnsafe() (value.Transformer, []byte, error) { - key, err := generateKey(32) + key, err := GenerateKey(keySizeCounterNonceGCM) if err != nil { return nil, nil, err } @@ -126,17 +104,6 @@ func newGCMTransformerWithUniqueKeyUnsafe(block cipher.Block, nonceGen *nonceGen return &gcm{aead: aead, nonceFunc: nonceFunc}, nil } -func newGCM(block cipher.Block) (cipher.AEAD, error) { - aead, err := cipher.NewGCM(block) - if err != nil { - return nil, err - } - if nonceSize := aead.NonceSize(); nonceSize != 12 { // all data in etcd will be broken if this ever changes - return nil, fmt.Errorf("crypto/cipher.NewGCM returned unexpected nonce size: %d", nonceSize) - } - return aead, nil -} - func randomNonce(b []byte) error { _, err := rand.Read(b) return err @@ -164,8 +131,8 @@ func die(msg string) { klog.FatalDepth(1, msg) } -// generateKey generates a random key using system randomness. -func generateKey(length int) (key []byte, err error) { +// GenerateKey generates a random key using system randomness. +func GenerateKey(length int) (key []byte, err error) { defer func(start time.Time) { value.RecordDataKeyGeneration(start, err) }(time.Now()) @@ -177,6 +144,45 @@ func generateKey(length int) (key []byte, err error) { return key, nil } +// NewGCMTransformer takes the given block cipher and performs encryption and decryption on the given data. +// It implements AEAD encryption of the provided values given a cipher.Block algorithm. +// The authenticated data provided as part of the value.Context method must match when the same +// value is set to and loaded from storage. In order to ensure that values cannot be copied by +// an attacker from a location under their control, use characteristics of the storage location +// (such as the etcd key) as part of the authenticated data. +// +// Because this mode requires a generated IV and IV reuse is a known weakness of AES-GCM, keys +// must be rotated before a birthday attack becomes feasible. 
NIST SP 800-38D +// (http://csrc.nist.gov/publications/nistpubs/800-38D/SP-800-38D.pdf) recommends using the same +// key with random 96-bit nonces (the default nonce length) no more than 2^32 times, and +// therefore transformers using this implementation *must* ensure they allow for frequent key +// rotation. Future work should include investigation of AES-GCM-SIV as an alternative to +// random nonces. +func NewGCMTransformer(block cipher.Block) (value.Transformer, error) { + aead, err := newGCM(block) + if err != nil { + return nil, err + } + + return &gcm{aead: aead, nonceFunc: randomNonce}, nil +} + +func newGCM(block cipher.Block) (cipher.AEAD, error) { + aead, err := cipher.NewGCM(block) + if err != nil { + return nil, err + } + if nonceSize := aead.NonceSize(); nonceSize != 12 { // all data in etcd will be broken if this ever changes + return nil, fmt.Errorf("crypto/cipher.NewGCM returned unexpected nonce size: %d", nonceSize) + } + return aead, nil +} + +type gcm struct { + aead cipher.AEAD + nonceFunc func([]byte) error +} + func (t *gcm) TransformFromStorage(ctx context.Context, data []byte, dataCtx value.Context) ([]byte, bool, error) { nonceSize := t.aead.NonceSize() if len(data) < nonceSize { diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storage/value/encrypt/aes/aes_extended_nonce.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storage/value/encrypt/aes/aes_extended_nonce.go new file mode 100644 index 000000000000..cf8f39305dc4 --- /dev/null +++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storage/value/encrypt/aes/aes_extended_nonce.go @@ -0,0 +1,186 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. 
+You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package aes + +import ( + "bytes" + "context" + "crypto/aes" + "crypto/sha256" + "errors" + "fmt" + "io" + "time" + + "golang.org/x/crypto/hkdf" + + "k8s.io/apiserver/pkg/storage/value" + "k8s.io/utils/clock" +) + +const ( + // cacheTTL is the TTL of KDF cache entries. We assume that the value.Context.AuthenticatedData + // for every call is the etcd storage path of the associated resource, and use that as the primary + // cache key (with a secondary check that confirms that the info matches). Thus if a client + // is constantly creating resources with new names (and thus new paths), they will keep adding new + // entries to the cache for up to this TTL before the GC logic starts deleting old entries. Each + // entry is ~300 bytes in size, so even a malicious client will be bounded in the overall memory + // it can consume. + cacheTTL = 10 * time.Minute + + derivedKeySizeExtendedNonceGCM = commonSize + infoSizeExtendedNonceGCM + MinSeedSizeExtendedNonceGCM +) + +// NewHKDFExtendedNonceGCMTransformer is the same as NewGCMTransformer but trades storage, +// memory and CPU to work around the limitations of AES-GCM's 12 byte nonce size. The input seed +// is assumed to be a cryptographically strong slice of MinSeedSizeExtendedNonceGCM+ random bytes. +// Unlike NewGCMTransformer, this function is immune to the birthday attack because a new key is generated +// per encryption via a key derivation function: KDF(seed, random_bytes) -> key. The derived key is +// only used once as an AES-GCM key with a random 12 byte nonce. 
This avoids any concerns around +// cryptographic wear out (by either number of encryptions or the amount of data being encrypted). +// Speaking on the cryptographic safety, the limit on the number of operations that can be preformed +// with a single seed with derived keys and randomly generated nonces is not practically reachable. +// Thus, the scheme does not impose any specific requirements on the seed rotation schedule. +// Reusing the same seed is safe to do over time and across process restarts. Whenever a new +// seed is needed, the caller should generate it via GenerateKey(MinSeedSizeExtendedNonceGCM). +// In regard to KMSv2, organization standards or compliance policies around rotation may require +// that the seed be rotated at some interval. This can be implemented externally by rotating +// the key encryption key via a key ID change. +func NewHKDFExtendedNonceGCMTransformer(seed []byte) (value.Transformer, error) { + if seedLen := len(seed); seedLen < MinSeedSizeExtendedNonceGCM { + return nil, fmt.Errorf("invalid seed length %d used for key generation", seedLen) + } + return &extendedNonceGCM{ + seed: seed, + cache: newSimpleCache(clock.RealClock{}, cacheTTL), + }, nil +} + +type extendedNonceGCM struct { + seed []byte + cache *simpleCache +} + +func (e *extendedNonceGCM) TransformFromStorage(ctx context.Context, data []byte, dataCtx value.Context) ([]byte, bool, error) { + if len(data) < infoSizeExtendedNonceGCM { + return nil, false, errors.New("the stored data was shorter than the required size") + } + + info := data[:infoSizeExtendedNonceGCM] + + transformer, err := e.derivedKeyTransformer(info, dataCtx, false) + if err != nil { + return nil, false, fmt.Errorf("failed to derive read key from KDF: %w", err) + } + + return transformer.TransformFromStorage(ctx, data, dataCtx) +} + +func (e *extendedNonceGCM) TransformToStorage(ctx context.Context, data []byte, dataCtx value.Context) ([]byte, error) { + info := make([]byte, infoSizeExtendedNonceGCM) + 
if err := randomNonce(info); err != nil { + return nil, fmt.Errorf("failed to generate info for KDF: %w", err) + } + + transformer, err := e.derivedKeyTransformer(info, dataCtx, true) + if err != nil { + return nil, fmt.Errorf("failed to derive write key from KDF: %w", err) + } + + return transformer.TransformToStorage(ctx, data, dataCtx) +} + +func (e *extendedNonceGCM) derivedKeyTransformer(info []byte, dataCtx value.Context, write bool) (value.Transformer, error) { + if !write { // no need to check cache on write since we always generate a new transformer + if transformer := e.cache.get(info, dataCtx); transformer != nil { + return transformer, nil + } + + // on read, this is a subslice of a much larger slice and we do not want to hold onto that larger slice + info = bytes.Clone(info) + } + + key, err := e.sha256KDFExpandOnly(info) + if err != nil { + return nil, fmt.Errorf("failed to KDF expand seed with info: %w", err) + } + + transformer, err := newGCMTransformerWithInfo(key, info) + if err != nil { + return nil, fmt.Errorf("failed to build transformer with KDF derived key: %w", err) + } + + e.cache.set(dataCtx, transformer) + + return transformer, nil +} + +func (e *extendedNonceGCM) sha256KDFExpandOnly(info []byte) ([]byte, error) { + kdf := hkdf.Expand(sha256.New, e.seed, info) + + derivedKey := make([]byte, derivedKeySizeExtendedNonceGCM) + if _, err := io.ReadFull(kdf, derivedKey); err != nil { + return nil, fmt.Errorf("failed to read a derived key from KDF: %w", err) + } + + return derivedKey, nil +} + +func newGCMTransformerWithInfo(key, info []byte) (*transformerWithInfo, error) { + block, err := aes.NewCipher(key) + if err != nil { + return nil, err + } + + transformer, err := NewGCMTransformer(block) + if err != nil { + return nil, err + } + + return &transformerWithInfo{transformer: transformer, info: info}, nil +} + +type transformerWithInfo struct { + transformer value.Transformer + // info are extra opaque bytes prepended to the writes from 
transformer and stripped from reads. + // currently info is used to generate a key via KDF(seed, info) -> key + // and transformer is the output of NewGCMTransformer(aes.NewCipher(key)) + info []byte +} + +func (t *transformerWithInfo) TransformFromStorage(ctx context.Context, data []byte, dataCtx value.Context) ([]byte, bool, error) { + if !bytes.HasPrefix(data, t.info) { + return nil, false, errors.New("the stored data is missing the required info prefix") + } + + return t.transformer.TransformFromStorage(ctx, data[len(t.info):], dataCtx) +} + +func (t *transformerWithInfo) TransformToStorage(ctx context.Context, data []byte, dataCtx value.Context) ([]byte, error) { + out, err := t.transformer.TransformToStorage(ctx, data, dataCtx) + if err != nil { + return nil, err + } + + outWithInfo := make([]byte, 0, len(out)+len(t.info)) + outWithInfo = append(outWithInfo, t.info...) + outWithInfo = append(outWithInfo, out...) + + return outWithInfo, nil +} diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storage/value/encrypt/aes/cache.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storage/value/encrypt/aes/cache.go new file mode 100644 index 000000000000..c2551a2fbf5e --- /dev/null +++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storage/value/encrypt/aes/cache.go @@ -0,0 +1,91 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. 
+*/ + +package aes + +import ( + "bytes" + "time" + "unsafe" + + utilcache "k8s.io/apimachinery/pkg/util/cache" + "k8s.io/apiserver/pkg/storage/value" + "k8s.io/utils/clock" +) + +type simpleCache struct { + cache *utilcache.Expiring + ttl time.Duration +} + +func newSimpleCache(clock clock.Clock, ttl time.Duration) *simpleCache { + cache := utilcache.NewExpiringWithClock(clock) + // "Stale" entries are always valid for us because the TTL is just used to prevent + // unbounded growth on the cache - for a given info the transformer is always the same. + // The key always corresponds to the exact same value, with the caveat that + // since we use the value.Context.AuthenticatedData to overwrite old keys, + // we always have to check that the info matches (to validate the transformer is correct). + cache.AllowExpiredGet = true + return &simpleCache{ + cache: cache, + ttl: ttl, + } +} + +// given a key, return the transformer, or nil if it does not exist in the cache +func (c *simpleCache) get(info []byte, dataCtx value.Context) *transformerWithInfo { + val, ok := c.cache.Get(keyFunc(dataCtx)) + if !ok { + return nil + } + + transformer := val.(*transformerWithInfo) + + if !bytes.Equal(transformer.info, info) { + return nil + } + + return transformer +} + +// set caches the record for the key +func (c *simpleCache) set(dataCtx value.Context, transformer *transformerWithInfo) { + if dataCtx == nil || len(dataCtx.AuthenticatedData()) == 0 { + panic("authenticated data must not be empty") + } + if transformer == nil { + panic("transformer must not be nil") + } + if len(transformer.info) == 0 { + panic("info must not be empty") + } + c.cache.Set(keyFunc(dataCtx), transformer, c.ttl) +} + +func keyFunc(dataCtx value.Context) string { + return toString(dataCtx.AuthenticatedData()) +} + +// toString performs unholy acts to avoid allocations +func toString(b []byte) string { + // unsafe.SliceData relies on cap whereas we want to rely on len + if len(b) == 0 { + return "" + } + 
// Copied from go 1.20.1 strings.Builder.String + // https://github.com/golang/go/blob/202a1a57064127c3f19d96df57b9f9586145e21c/src/strings/builder.go#L48 + return unsafe.String(unsafe.SliceData(b), len(b)) +} diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storage/value/encrypt/envelope/kmsv2/cache.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storage/value/encrypt/envelope/kmsv2/cache.go index 3c1fbbf8a362..c677f54b5ba0 100644 --- a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storage/value/encrypt/envelope/kmsv2/cache.go +++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storage/value/encrypt/envelope/kmsv2/cache.go @@ -18,7 +18,6 @@ limitations under the License. package kmsv2 import ( - "context" "crypto/sha256" "hash" "sync" @@ -30,17 +29,10 @@ import ( "k8s.io/utils/clock" ) -// prevent decryptTransformer from drifting from value.Transformer -var _ decryptTransformer = value.Transformer(nil) - -// decryptTransformer is the decryption subset of value.Transformer. -// this exists purely to statically enforce that transformers placed in the cache are not used for encryption. +// simpleCache stores the decryption subset of value.Transformer (value.Read). +// this statically enforces that transformers placed in the cache are not used for encryption. // this is relevant in the context of nonce collision since transformers that are created // from encrypted DEKs retrieved from etcd cannot maintain their nonce counter state. 
-type decryptTransformer interface { - TransformFromStorage(ctx context.Context, data []byte, dataCtx value.Context) (out []byte, stale bool, err error) -} - type simpleCache struct { cache *utilcache.Expiring ttl time.Duration @@ -50,8 +42,10 @@ type simpleCache struct { } func newSimpleCache(clock clock.Clock, ttl time.Duration) *simpleCache { + cache := utilcache.NewExpiringWithClock(clock) + cache.AllowExpiredGet = true // for a given key, the value (the decryptTransformer) is always the same return &simpleCache{ - cache: utilcache.NewExpiringWithClock(clock), + cache: cache, ttl: ttl, hashPool: &sync.Pool{ New: func() interface{} { @@ -62,16 +56,16 @@ func newSimpleCache(clock clock.Clock, ttl time.Duration) *simpleCache { } // given a key, return the transformer, or nil if it does not exist in the cache -func (c *simpleCache) get(key []byte) decryptTransformer { +func (c *simpleCache) get(key []byte) value.Read { record, ok := c.cache.Get(c.keyFunc(key)) if !ok { return nil } - return record.(decryptTransformer) + return record.(value.Read) } // set caches the record for the key -func (c *simpleCache) set(key []byte, transformer decryptTransformer) { +func (c *simpleCache) set(key []byte, transformer value.Read) { if len(key) == 0 { panic("key must not be empty") } diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storage/value/encrypt/envelope/kmsv2/envelope.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storage/value/encrypt/envelope/kmsv2/envelope.go index 43ba22d65e0c..45d5db58b751 100644 --- a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storage/value/encrypt/envelope/kmsv2/envelope.go +++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storage/value/encrypt/envelope/kmsv2/envelope.go @@ -20,6 +20,8 @@ package kmsv2 import ( "context" "crypto/aes" + "crypto/cipher" + "crypto/sha256" "fmt" "sort" "time" @@ -42,6 +44,8 @@ import ( "k8s.io/utils/clock" ) +// TODO integration test with old AES GCM data recorded and new KDF data recorded + 
func init() { value.RegisterMetrics() metrics.RegisterMetrics() @@ -54,22 +58,22 @@ const ( annotationsMaxSize = 32 * 1024 // 32 kB // KeyIDMaxSize is the maximum size of the keyID. KeyIDMaxSize = 1 * 1024 // 1 kB - // encryptedDEKMaxSize is the maximum size of the encrypted DEK. - encryptedDEKMaxSize = 1 * 1024 // 1 kB + // encryptedDEKSourceMaxSize is the maximum size of the encrypted DEK source. + encryptedDEKSourceMaxSize = 1 * 1024 // 1 kB // cacheTTL is the default time-to-live for the cache entry. // this allows the cache to grow to an infinite size for up to a day. - // this is meant as a temporary solution until the cache is re-written to not have a TTL. // there is unlikely to be any meaningful memory impact on the server - // because the cache will likely never have more than a few thousand entries - // and each entry is roughly ~200 bytes in size. with DEK reuse - // and no storage migration, the number of entries in this cache + // because the cache will likely never have more than a few thousand entries. + // each entry can be large due to an internal cache that maps the DEK seed to individual + // DEK entries, but that cache has an aggressive TTL to keep the size under control. + // with DEK/seed reuse and no storage migration, the number of entries in this cache // would be approximated by unique key IDs used by the KMS plugin // combined with the number of server restarts. If storage migration // is performed after key ID changes, and the number of restarts // is limited, this cache size may be as small as the number of API // servers in use (once old entries expire out from the TTL). 
cacheTTL = 24 * time.Hour - // error code + // key ID related error codes for metrics errKeyIDOKCode ErrCodeKeyID = "ok" errKeyIDEmptyCode ErrCodeKeyID = "empty" errKeyIDTooLongCode ErrCodeKeyID = "too_long" @@ -82,23 +86,22 @@ type StateFunc func() (State, error) type ErrCodeKeyID string type State struct { - Transformer value.Transformer - EncryptedDEK []byte - KeyID string - Annotations map[string][]byte + Transformer value.Transformer + + EncryptedObject kmstypes.EncryptedObject UID string ExpirationTimestamp time.Time - // CacheKey is the key used to cache the DEK in transformer.cache. + // CacheKey is the key used to cache the DEK/seed in envelopeTransformer.cache. CacheKey []byte } func (s *State) ValidateEncryptCapability() error { if now := NowFunc(); now.After(s.ExpirationTimestamp) { - return fmt.Errorf("EDEK with keyID %q expired at %s (current time is %s)", - s.KeyID, s.ExpirationTimestamp.Format(time.RFC3339), now.Format(time.RFC3339)) + return fmt.Errorf("encryptedDEKSource with keyID hash %q expired at %s (current time is %s)", + GetHashIfNotEmpty(s.EncryptedObject.KeyID), s.ExpirationTimestamp.Format(time.RFC3339), now.Format(time.RFC3339)) } return nil } @@ -136,6 +139,8 @@ func (t *envelopeTransformer) TransformFromStorage(ctx context.Context, data []b return nil, false, err } + useSeed := encryptedObject.EncryptedDEKSourceType == kmstypes.EncryptedDEKSourceType_HKDF_SHA256_XNONCE_AES_GCM_SEED + // TODO: consider marking state.EncryptedDEK != encryptedObject.EncryptedDEK as a stale read to support DEK defragmentation // at a minimum we should have a metric that helps the user understand if DEK fragmentation is high state, err := t.stateFunc() // no need to call state.ValidateEncryptCapability on reads @@ -143,7 +148,7 @@ func (t *envelopeTransformer) TransformFromStorage(ctx context.Context, data []b return nil, false, err } - encryptedObjectCacheKey, err := generateCacheKey(encryptedObject.EncryptedDEK, encryptedObject.KeyID, 
encryptedObject.Annotations) + encryptedObjectCacheKey, err := generateCacheKey(encryptedObject.EncryptedDEKSourceType, encryptedObject.EncryptedDEKSource, encryptedObject.KeyID, encryptedObject.Annotations) if err != nil { return nil, false, err } @@ -162,7 +167,7 @@ func (t *envelopeTransformer) TransformFromStorage(ctx context.Context, data []b "verb", requestInfo.Verb, "namespace", requestInfo.Namespace, "name", requestInfo.Name) key, err := t.envelopeService.Decrypt(ctx, uid, &kmsservice.DecryptRequest{ - Ciphertext: encryptedObject.EncryptedDEK, + Ciphertext: encryptedObject.EncryptedDEKSource, KeyID: encryptedObject.KeyID, Annotations: encryptedObject.Annotations, }) @@ -170,7 +175,7 @@ func (t *envelopeTransformer) TransformFromStorage(ctx context.Context, data []b return nil, false, fmt.Errorf("failed to decrypt DEK, error: %w", err) } - transformer, err = t.addTransformerForDecryption(encryptedObjectCacheKey, key) + transformer, err = t.addTransformerForDecryption(encryptedObjectCacheKey, key, useSeed) if err != nil { return nil, false, err } @@ -183,8 +188,11 @@ func (t *envelopeTransformer) TransformFromStorage(ctx context.Context, data []b } // data is considered stale if the key ID does not match our current write transformer - return out, stale || encryptedObject.KeyID != state.KeyID, nil - + return out, + stale || + encryptedObject.KeyID != state.EncryptedObject.KeyID || + encryptedObject.EncryptedDEKSourceType != state.EncryptedObject.EncryptedDEKSourceType, + nil } // TransformToStorage encrypts data to be written to disk using envelope encryption. @@ -200,7 +208,7 @@ func (t *envelopeTransformer) TransformToStorage(ctx context.Context, data []byt // this prevents a cache miss every time the DEK rotates // this has the side benefit of causing the cache to perform a GC // TODO see if we can do this inside the stateFunc control loop - // TODO(aramase): Add metrics for cache fill percentage with custom cache implementation. 
+ // TODO(aramase): Add metrics for cache size. t.cache.set(state.CacheKey, state.Transformer) requestInfo := getRequestInfoFromContext(ctx) @@ -213,39 +221,43 @@ func (t *envelopeTransformer) TransformToStorage(ctx context.Context, data []byt return nil, err } - metrics.RecordKeyID(metrics.ToStorageLabel, t.providerName, state.KeyID) + metrics.RecordKeyID(metrics.ToStorageLabel, t.providerName, state.EncryptedObject.KeyID) - encObject := &kmstypes.EncryptedObject{ - KeyID: state.KeyID, - EncryptedDEK: state.EncryptedDEK, - EncryptedData: result, - Annotations: state.Annotations, - } + encObjectCopy := state.EncryptedObject + encObjectCopy.EncryptedData = result // Serialize the EncryptedObject to a byte array. - return t.doEncode(encObject) + return t.doEncode(&encObjectCopy) } // addTransformerForDecryption inserts a new transformer to the Envelope cache of DEKs for future reads. -func (t *envelopeTransformer) addTransformerForDecryption(cacheKey []byte, key []byte) (decryptTransformer, error) { - block, err := aes.NewCipher(key) - if err != nil { - return nil, err +func (t *envelopeTransformer) addTransformerForDecryption(cacheKey []byte, key []byte, useSeed bool) (value.Read, error) { + var transformer value.Read + var err error + if useSeed { + // the input key is considered safe to use here because it is coming from the KMS plugin / etcd + transformer, err = aestransformer.NewHKDFExtendedNonceGCMTransformer(key) + } else { + var block cipher.Block + block, err = aes.NewCipher(key) + if err != nil { + return nil, err + } + // this is compatible with NewGCMTransformerWithUniqueKeyUnsafe for decryption + // it would use random nonces for encryption but we never do that + transformer, err = aestransformer.NewGCMTransformer(block) } - // this is compatible with NewGCMTransformerWithUniqueKeyUnsafe for decryption - // it would use random nonces for encryption but we never do that - transformer, err := aestransformer.NewGCMTransformer(block) if err != nil { return 
nil, err } - // TODO(aramase): Add metrics for cache fill percentage with custom cache implementation. + // TODO(aramase): Add metrics for cache size. t.cache.set(cacheKey, transformer) return transformer, nil } // doEncode encodes the EncryptedObject to a byte array. func (t *envelopeTransformer) doEncode(request *kmstypes.EncryptedObject) ([]byte, error) { - if err := validateEncryptedObject(request); err != nil { + if err := ValidateEncryptedObject(request); err != nil { return nil, err } return proto.Marshal(request) @@ -257,16 +269,31 @@ func (t *envelopeTransformer) doDecode(originalData []byte) (*kmstypes.Encrypted if err := proto.Unmarshal(originalData, o); err != nil { return nil, err } - // validate the EncryptedObject - if err := validateEncryptedObject(o); err != nil { + if err := ValidateEncryptedObject(o); err != nil { return nil, err } return o, nil } -func GenerateTransformer(ctx context.Context, uid string, envelopeService kmsservice.Service) (value.Transformer, *kmsservice.EncryptResponse, []byte, error) { - transformer, newKey, err := aestransformer.NewGCMTransformerWithUniqueKeyUnsafe() +// GenerateTransformer generates a new transformer and encrypts the DEK/seed using the envelope service. +// It returns the transformer, the encrypted DEK/seed, cache key and error. 
+func GenerateTransformer(ctx context.Context, uid string, envelopeService kmsservice.Service, useSeed bool) (value.Transformer, *kmstypes.EncryptedObject, []byte, error) { + newTransformerFunc := func() (value.Transformer, []byte, error) { + seed, err := aestransformer.GenerateKey(aestransformer.MinSeedSizeExtendedNonceGCM) + if err != nil { + return nil, nil, err + } + transformer, err := aestransformer.NewHKDFExtendedNonceGCMTransformer(seed) + if err != nil { + return nil, nil, err + } + return transformer, seed, nil + } + if !useSeed { + newTransformerFunc = aestransformer.NewGCMTransformerWithUniqueKeyUnsafe + } + transformer, newKey, err := newTransformerFunc() if err != nil { return nil, nil, nil, err } @@ -278,32 +305,48 @@ func GenerateTransformer(ctx context.Context, uid string, envelopeService kmsser return nil, nil, nil, fmt.Errorf("failed to encrypt DEK, error: %w", err) } - if err := validateEncryptedObject(&kmstypes.EncryptedObject{ - KeyID: resp.KeyID, - EncryptedDEK: resp.Ciphertext, - EncryptedData: []byte{0}, // any non-empty value to pass validation - Annotations: resp.Annotations, - }); err != nil { + o := &kmstypes.EncryptedObject{ + KeyID: resp.KeyID, + EncryptedDEKSource: resp.Ciphertext, + EncryptedData: []byte{0}, // any non-empty value to pass validation + Annotations: resp.Annotations, + } + + if useSeed { + o.EncryptedDEKSourceType = kmstypes.EncryptedDEKSourceType_HKDF_SHA256_XNONCE_AES_GCM_SEED + } else { + o.EncryptedDEKSourceType = kmstypes.EncryptedDEKSourceType_AES_GCM_KEY + } + + if err := ValidateEncryptedObject(o); err != nil { return nil, nil, nil, err } - cacheKey, err := generateCacheKey(resp.Ciphertext, resp.KeyID, resp.Annotations) + cacheKey, err := generateCacheKey(o.EncryptedDEKSourceType, resp.Ciphertext, resp.KeyID, resp.Annotations) if err != nil { return nil, nil, nil, err } - return transformer, resp, cacheKey, nil + o.EncryptedData = nil // make sure that later code that uses this encrypted object sets this field 
+ + return transformer, o, cacheKey, nil } -func validateEncryptedObject(o *kmstypes.EncryptedObject) error { +func ValidateEncryptedObject(o *kmstypes.EncryptedObject) error { if o == nil { return fmt.Errorf("encrypted object is nil") } + switch t := o.EncryptedDEKSourceType; t { + case kmstypes.EncryptedDEKSourceType_AES_GCM_KEY: + case kmstypes.EncryptedDEKSourceType_HKDF_SHA256_XNONCE_AES_GCM_SEED: + default: + return fmt.Errorf("unknown encryptedDEKSourceType: %d", t) + } if len(o.EncryptedData) == 0 { return fmt.Errorf("encrypted data is empty") } - if err := validateEncryptedDEK(o.EncryptedDEK); err != nil { - return fmt.Errorf("failed to validate encrypted DEK: %w", err) + if err := validateEncryptedDEKSource(o.EncryptedDEKSource); err != nil { + return fmt.Errorf("failed to validate encrypted DEK source: %w", err) } if _, err := ValidateKeyID(o.KeyID); err != nil { return fmt.Errorf("failed to validate key id: %w", err) @@ -314,15 +357,15 @@ func validateEncryptedObject(o *kmstypes.EncryptedObject) error { return nil } -// validateEncryptedDEK tests the following: -// 1. The encrypted DEK is not empty. -// 2. The size of encrypted DEK is less than 1 kB. -func validateEncryptedDEK(encryptedDEK []byte) error { - if len(encryptedDEK) == 0 { - return fmt.Errorf("encrypted DEK is empty") +// validateEncryptedDEKSource tests the following: +// 1. The encrypted DEK source is not empty. +// 2. The size of encrypted DEK source is less than 1 kB. 
+func validateEncryptedDEKSource(encryptedDEKSource []byte) error { + if len(encryptedDEKSource) == 0 { + return fmt.Errorf("encrypted DEK source is empty") } - if len(encryptedDEK) > encryptedDEKMaxSize { - return fmt.Errorf("encrypted DEK is %d bytes, which exceeds the max size of %d", len(encryptedDEK), encryptedDEKMaxSize) + if len(encryptedDEKSource) > encryptedDEKSourceMaxSize { + return fmt.Errorf("encrypted DEK source is %d bytes, which exceeds the max size of %d", len(encryptedDEKSource), encryptedDEKSourceMaxSize) } return nil } @@ -367,17 +410,19 @@ func getRequestInfoFromContext(ctx context.Context) *genericapirequest.RequestIn // generateCacheKey returns a key for the cache. // The key is a concatenation of: -// 1. encryptedDEK +// 0. encryptedDEKSourceType +// 1. encryptedDEKSource // 2. keyID // 3. length of annotations // 4. annotations (sorted by key) - each annotation is a concatenation of: // a. annotation key // b. annotation value -func generateCacheKey(encryptedDEK []byte, keyID string, annotations map[string][]byte) ([]byte, error) { +func generateCacheKey(encryptedDEKSourceType kmstypes.EncryptedDEKSourceType, encryptedDEKSource []byte, keyID string, annotations map[string][]byte) ([]byte, error) { // TODO(aramase): use sync pool buffer to avoid allocations b := cryptobyte.NewBuilder(nil) + b.AddUint32(uint32(encryptedDEKSourceType)) b.AddUint16LengthPrefixed(func(b *cryptobyte.Builder) { - b.AddBytes(encryptedDEK) + b.AddBytes(encryptedDEKSource) }) b.AddUint16LengthPrefixed(func(b *cryptobyte.Builder) { b.AddBytes(toBytes(keyID)) @@ -420,3 +465,11 @@ func toBytes(s string) []byte { // https://github.com/golang/go/blob/202a1a57064127c3f19d96df57b9f9586145e21c/src/os/file.go#L246 return unsafe.Slice(unsafe.StringData(s), len(s)) } + +// GetHashIfNotEmpty returns the sha256 hash of the data if it is not empty. 
+func GetHashIfNotEmpty(data string) string { + if len(data) > 0 { + return fmt.Sprintf("sha256:%x", sha256.Sum256([]byte(data))) + } + return "" +} diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storage/value/encrypt/envelope/kmsv2/v2/api.pb.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storage/value/encrypt/envelope/kmsv2/v2/api.pb.go index c7bdd66f0f34..811c8f67d253 100644 --- a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storage/value/encrypt/envelope/kmsv2/v2/api.pb.go +++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storage/value/encrypt/envelope/kmsv2/v2/api.pb.go @@ -36,19 +36,52 @@ var _ = math.Inf // proto package needs to be updated. const _ = proto.GoGoProtoPackageIsVersion3 // please upgrade the proto package +type EncryptedDEKSourceType int32 + +const ( + // AES_GCM_KEY means that the plaintext of encryptedDEKSource is the DEK itself, with AES-GCM as the encryption algorithm. + EncryptedDEKSourceType_AES_GCM_KEY EncryptedDEKSourceType = 0 + // HKDF_SHA256_XNONCE_AES_GCM_SEED means that the plaintext of encryptedDEKSource is the pseudo random key + // (referred to as the seed throughout the code) that is fed into HKDF expand. SHA256 is the hash algorithm + // and first 32 bytes of encryptedData are the info param. The first 32 bytes from the HKDF stream are used + // as the DEK with AES-GCM as the encryption algorithm. 
+ EncryptedDEKSourceType_HKDF_SHA256_XNONCE_AES_GCM_SEED EncryptedDEKSourceType = 1 +) + +var EncryptedDEKSourceType_name = map[int32]string{ + 0: "AES_GCM_KEY", + 1: "HKDF_SHA256_XNONCE_AES_GCM_SEED", +} + +var EncryptedDEKSourceType_value = map[string]int32{ + "AES_GCM_KEY": 0, + "HKDF_SHA256_XNONCE_AES_GCM_SEED": 1, +} + +func (x EncryptedDEKSourceType) String() string { + return proto.EnumName(EncryptedDEKSourceType_name, int32(x)) +} + +func (EncryptedDEKSourceType) EnumDescriptor() ([]byte, []int) { + return fileDescriptor_00212fb1f9d3bf1c, []int{0} +} + // EncryptedObject is the representation of data stored in etcd after envelope encryption. type EncryptedObject struct { // EncryptedData is the encrypted data. EncryptedData []byte `protobuf:"bytes,1,opt,name=encryptedData,proto3" json:"encryptedData,omitempty"` // KeyID is the KMS key ID used for encryption operations. KeyID string `protobuf:"bytes,2,opt,name=keyID,proto3" json:"keyID,omitempty"` - // EncryptedDEK is the encrypted DEK. - EncryptedDEK []byte `protobuf:"bytes,3,opt,name=encryptedDEK,proto3" json:"encryptedDEK,omitempty"` + // EncryptedDEKSource is the ciphertext of the source of the DEK used to encrypt the data stored in encryptedData. + // encryptedDEKSourceType defines the process of using the plaintext of this field to determine the aforementioned DEK. + EncryptedDEKSource []byte `protobuf:"bytes,3,opt,name=encryptedDEKSource,proto3" json:"encryptedDEKSource,omitempty"` // Annotations is additional metadata that was provided by the KMS plugin. 
- Annotations map[string][]byte `protobuf:"bytes,4,rep,name=annotations,proto3" json:"annotations,omitempty" protobuf_key:"bytes,1,opt,name=key,proto3" protobuf_val:"bytes,2,opt,name=value,proto3"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_unrecognized []byte `json:"-"` - XXX_sizecache int32 `json:"-"` + Annotations map[string][]byte `protobuf:"bytes,4,rep,name=annotations,proto3" json:"annotations,omitempty" protobuf_key:"bytes,1,opt,name=key,proto3" protobuf_val:"bytes,2,opt,name=value,proto3"` + // encryptedDEKSourceType defines the process of using the plaintext of encryptedDEKSource to determine the DEK. + EncryptedDEKSourceType EncryptedDEKSourceType `protobuf:"varint,5,opt,name=encryptedDEKSourceType,proto3,enum=v2.EncryptedDEKSourceType" json:"encryptedDEKSourceType,omitempty"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_unrecognized []byte `json:"-"` + XXX_sizecache int32 `json:"-"` } func (m *EncryptedObject) Reset() { *m = EncryptedObject{} } @@ -89,9 +122,9 @@ func (m *EncryptedObject) GetKeyID() string { return "" } -func (m *EncryptedObject) GetEncryptedDEK() []byte { +func (m *EncryptedObject) GetEncryptedDEKSource() []byte { if m != nil { - return m.EncryptedDEK + return m.EncryptedDEKSource } return nil } @@ -103,7 +136,15 @@ func (m *EncryptedObject) GetAnnotations() map[string][]byte { return nil } +func (m *EncryptedObject) GetEncryptedDEKSourceType() EncryptedDEKSourceType { + if m != nil { + return m.EncryptedDEKSourceType + } + return EncryptedDEKSourceType_AES_GCM_KEY +} + func init() { + proto.RegisterEnum("v2.EncryptedDEKSourceType", EncryptedDEKSourceType_name, EncryptedDEKSourceType_value) proto.RegisterType((*EncryptedObject)(nil), "v2.EncryptedObject") proto.RegisterMapType((map[string][]byte)(nil), "v2.EncryptedObject.AnnotationsEntry") } @@ -111,21 +152,26 @@ func init() { func init() { proto.RegisterFile("api.proto", fileDescriptor_00212fb1f9d3bf1c) } var fileDescriptor_00212fb1f9d3bf1c = []byte{ - // 244 bytes of a 
gzipped FileDescriptorProto - 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x5c, 0x90, 0xb1, 0x4b, 0x03, 0x31, - 0x14, 0xc6, 0xc9, 0x9d, 0x0a, 0x97, 0x9e, 0x58, 0x82, 0xc3, 0xe1, 0x74, 0x94, 0x0e, 0x37, 0x25, - 0x10, 0x97, 0x22, 0x52, 0x50, 0x7a, 0x82, 0x38, 0x08, 0x19, 0xdd, 0xd2, 0xfa, 0x28, 0x67, 0x6a, - 0x12, 0x92, 0x18, 0xc8, 0x9f, 0xee, 0x26, 0x4d, 0x95, 0xda, 0xdb, 0xde, 0xf7, 0xf1, 0xfb, 0xe0, - 0xc7, 0xc3, 0x95, 0xb4, 0x03, 0xb5, 0xce, 0x04, 0x43, 0x8a, 0xc8, 0x67, 0xdf, 0x08, 0x5f, 0xf5, - 0x7a, 0xe3, 0x92, 0x0d, 0xf0, 0xfe, 0xba, 0xfe, 0x80, 0x4d, 0x20, 0x73, 0x7c, 0x09, 0x7f, 0xd5, - 0x4a, 0x06, 0xd9, 0xa0, 0x16, 0x75, 0xb5, 0x38, 0x2d, 0xc9, 0x35, 0x3e, 0x57, 0x90, 0x9e, 0x57, - 0x4d, 0xd1, 0xa2, 0xae, 0x12, 0x87, 0x40, 0x66, 0xb8, 0x3e, 0x62, 0xfd, 0x4b, 0x53, 0xe6, 0xe9, - 0x49, 0x47, 0x9e, 0xf0, 0x44, 0x6a, 0x6d, 0x82, 0x0c, 0x83, 0xd1, 0xbe, 0x39, 0x6b, 0xcb, 0x6e, - 0xc2, 0xe7, 0x34, 0x72, 0x3a, 0x32, 0xa1, 0x0f, 0x47, 0xac, 0xd7, 0xc1, 0x25, 0xf1, 0x7f, 0x78, - 0xb3, 0xc4, 0xd3, 0x31, 0x40, 0xa6, 0xb8, 0x54, 0x90, 0xb2, 0x71, 0x25, 0xf6, 0xe7, 0xde, 0x33, - 0xca, 0xdd, 0x17, 0x64, 0xcf, 0x5a, 0x1c, 0xc2, 0x5d, 0xb1, 0x40, 0x8f, 0xcb, 0xb7, 0x7b, 0xb5, - 0xf0, 0x74, 0x30, 0x4c, 0xda, 0xc1, 0x83, 0x8b, 0xe0, 0x98, 0x55, 0x5b, 0xe6, 0x83, 0x71, 0x72, - 0x0b, 0x2c, 0x93, 0xec, 0x57, 0x9d, 0x81, 0x8e, 0xb0, 0x33, 0x16, 0x98, 0xfa, 0xf4, 0x91, 0xb3, - 0xc8, 0xd7, 0x17, 0xf9, 0x8d, 0xb7, 0x3f, 0x01, 0x00, 0x00, 0xff, 0xff, 0x00, 0x80, 0x43, 0x93, - 0x53, 0x01, 0x00, 0x00, + // 329 bytes of a gzipped FileDescriptorProto + 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x74, 0x91, 0xe1, 0x4b, 0xc2, 0x40, + 0x18, 0xc6, 0xdb, 0xcc, 0xc0, 0xd3, 0x72, 0x1c, 0x21, 0xc3, 0x2f, 0x8d, 0xf2, 0xc3, 0xe8, 0xc3, + 0x0e, 0x16, 0x85, 0x44, 0x08, 0xe6, 0xce, 0x0c, 0x49, 0x61, 0xeb, 0x43, 0xf5, 0x65, 0x9c, 0xf6, + 0x22, 0x6b, 0xb6, 0x1b, 0xb7, 0xf3, 0x60, 0x7f, 0x6a, 0xff, 0x4d, 0x38, 0x13, 0xd3, 0xec, 0xdb, + 0xbd, 0xef, 0xfd, 0xde, 0xe7, 
0xb9, 0x7b, 0x5e, 0x54, 0x61, 0x69, 0xe4, 0xa4, 0x82, 0x4b, 0x8e, + 0x75, 0xe5, 0x9e, 0x7f, 0xe9, 0xa8, 0x4e, 0x93, 0xa9, 0xc8, 0x53, 0x09, 0xef, 0xe3, 0xc9, 0x07, + 0x4c, 0x25, 0x6e, 0xa1, 0x63, 0x58, 0xb7, 0x3c, 0x26, 0x99, 0xa9, 0x59, 0x9a, 0x5d, 0xf3, 0xb7, + 0x9b, 0xf8, 0x14, 0x95, 0x63, 0xc8, 0x1f, 0x3d, 0x53, 0xb7, 0x34, 0xbb, 0xe2, 0xaf, 0x0a, 0xec, + 0x20, 0xbc, 0xc1, 0xe8, 0x30, 0xe0, 0x0b, 0x31, 0x05, 0xb3, 0x54, 0x08, 0xec, 0xb9, 0xc1, 0x7d, + 0x54, 0x65, 0x49, 0xc2, 0x25, 0x93, 0x11, 0x4f, 0x32, 0xf3, 0xd0, 0x2a, 0xd9, 0x55, 0xb7, 0xe5, + 0x28, 0xd7, 0xd9, 0x79, 0x95, 0xd3, 0xdd, 0x60, 0x34, 0x91, 0x22, 0xf7, 0x7f, 0x0f, 0x62, 0x1f, + 0x35, 0xfe, 0xaa, 0x3f, 0xe7, 0x29, 0x98, 0x65, 0x4b, 0xb3, 0x4f, 0xdc, 0xe6, 0x96, 0xe4, 0x16, + 0xe1, 0xff, 0x33, 0xd9, 0xec, 0x20, 0x63, 0xd7, 0x14, 0x1b, 0xa8, 0x14, 0x43, 0x5e, 0x24, 0x52, + 0xf1, 0x97, 0xc7, 0x65, 0x0e, 0x8a, 0xcd, 0x17, 0x50, 0xe4, 0x50, 0xf3, 0x57, 0xc5, 0xad, 0xde, + 0xd6, 0x2e, 0x47, 0xa8, 0xb1, 0xdf, 0x11, 0xd7, 0x51, 0xb5, 0x4b, 0x83, 0xf0, 0xa1, 0xf7, 0x14, + 0x0e, 0xe9, 0xab, 0x71, 0x80, 0x2f, 0xd0, 0xd9, 0x60, 0xe8, 0xf5, 0xc3, 0x60, 0xd0, 0x75, 0xaf, + 0x6f, 0xc2, 0x97, 0xd1, 0x78, 0xd4, 0xa3, 0xe1, 0x9a, 0x09, 0x28, 0xf5, 0x0c, 0xed, 0xbe, 0xf3, + 0x76, 0x17, 0xb7, 0x33, 0x27, 0xe2, 0x84, 0xa5, 0x51, 0x06, 0x42, 0x81, 0x20, 0x69, 0x3c, 0x23, + 0x99, 0xe4, 0x82, 0xcd, 0x80, 0x14, 0xce, 0xe4, 0xe7, 0x33, 0x04, 0x12, 0x05, 0x73, 0x9e, 0x02, + 0x89, 0x3f, 0x33, 0xe5, 0x12, 0xe5, 0x4e, 0x8e, 0x8a, 0xb5, 0x5f, 0x7d, 0x07, 0x00, 0x00, 0xff, + 0xff, 0xcc, 0x0f, 0x2b, 0x2e, 0x03, 0x02, 0x00, 0x00, } diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storage/value/encrypt/envelope/kmsv2/v2/api.proto b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storage/value/encrypt/envelope/kmsv2/v2/api.proto index 9ca2ccf96f9d..ec1eb2680c85 100644 --- a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storage/value/encrypt/envelope/kmsv2/v2/api.proto +++ 
b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storage/value/encrypt/envelope/kmsv2/v2/api.proto @@ -28,9 +28,24 @@ message EncryptedObject { // KeyID is the KMS key ID used for encryption operations. string keyID = 2; - // EncryptedDEK is the encrypted DEK. - bytes encryptedDEK = 3; + // EncryptedDEKSource is the ciphertext of the source of the DEK used to encrypt the data stored in encryptedData. + // encryptedDEKSourceType defines the process of using the plaintext of this field to determine the aforementioned DEK. + bytes encryptedDEKSource = 3; // Annotations is additional metadata that was provided by the KMS plugin. map annotations = 4; + + // encryptedDEKSourceType defines the process of using the plaintext of encryptedDEKSource to determine the DEK. + EncryptedDEKSourceType encryptedDEKSourceType = 5; +} + +enum EncryptedDEKSourceType { + // AES_GCM_KEY means that the plaintext of encryptedDEKSource is the DEK itself, with AES-GCM as the encryption algorithm. + AES_GCM_KEY = 0; + + // HKDF_SHA256_XNONCE_AES_GCM_SEED means that the plaintext of encryptedDEKSource is the pseudo random key + // (referred to as the seed throughout the code) that is fed into HKDF expand. SHA256 is the hash algorithm + // and first 32 bytes of encryptedData are the info param. The first 32 bytes from the HKDF stream are used + // as the DEK with AES-GCM as the encryption algorithm. + HKDF_SHA256_XNONCE_AES_GCM_SEED = 1; } diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storage/value/metrics.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storage/value/metrics.go index c8fd2f4c04d0..35ec01369209 100644 --- a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storage/value/metrics.go +++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storage/value/metrics.go @@ -17,9 +17,11 @@ limitations under the License. 
package value import ( + "errors" "sync" "time" + "google.golang.org/grpc/codes" "google.golang.org/grpc/status" "k8s.io/component-base/metrics" @@ -59,7 +61,7 @@ var ( Namespace: namespace, Subsystem: subsystem, Name: "transformation_operations_total", - Help: "Total number of transformations.", + Help: "Total number of transformations. Successful transformation will have a status 'OK' and a varied status string when the transformation fails. This status and transformation_type fields may be used for alerting on encryption/decryption failure using transformation_type from_storage for decryption and to_storage for encryption", StabilityLevel: metrics.ALPHA, }, []string{"transformation_type", "transformer_prefix", "status"}, @@ -112,7 +114,7 @@ func RegisterMetrics() { // RecordTransformation records latencies and count of TransformFromStorage and TransformToStorage operations. // Note that transformation_failures_total metric is deprecated, use transformation_operations_total instead. func RecordTransformation(transformationType, transformerPrefix string, elapsed time.Duration, err error) { - transformerOperationsTotal.WithLabelValues(transformationType, transformerPrefix, status.Code(err).String()).Inc() + transformerOperationsTotal.WithLabelValues(transformationType, transformerPrefix, getErrorCode(err)).Inc() if err == nil { transformerLatencies.WithLabelValues(transformationType, transformerPrefix).Observe(elapsed.Seconds()) @@ -138,3 +140,23 @@ func RecordDataKeyGeneration(start time.Time, err error) { func sinceInSeconds(start time.Time) float64 { return time.Since(start).Seconds() } + +type gRPCError interface { + GRPCStatus() *status.Status +} + +func getErrorCode(err error) string { + if err == nil { + return codes.OK.String() + } + + // handle errors wrapped with fmt.Errorf and similar + var s gRPCError + if errors.As(err, &s) { + return s.GRPCStatus().Code().String() + } + + // This is not gRPC error. 
The operation must have failed before gRPC + // method was called, otherwise we would get gRPC error. + return "unknown-non-grpc" +} diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storage/value/transformer.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storage/value/transformer.go index a6a4aa184d61..c5e97ac2daa3 100644 --- a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storage/value/transformer.go +++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storage/value/transformer.go @@ -23,7 +23,10 @@ import ( "fmt" "time" + "k8s.io/apimachinery/pkg/runtime/schema" "k8s.io/apimachinery/pkg/util/errors" + genericapirequest "k8s.io/apiserver/pkg/endpoints/request" + "k8s.io/klog/v2" ) func init() { @@ -39,17 +42,30 @@ type Context interface { AuthenticatedData() []byte } -// Transformer allows a value to be transformed before being read from or written to the underlying store. The methods -// must be able to undo the transformation caused by the other. -type Transformer interface { +type Read interface { // TransformFromStorage may transform the provided data from its underlying storage representation or return an error. // Stale is true if the object on disk is stale and a write to etcd should be issued, even if the contents of the object // have not changed. TransformFromStorage(ctx context.Context, data []byte, dataCtx Context) (out []byte, stale bool, err error) +} + +type Write interface { // TransformToStorage may transform the provided data into the appropriate form in storage or return an error. TransformToStorage(ctx context.Context, data []byte, dataCtx Context) (out []byte, err error) } +// Transformer allows a value to be transformed before being read from or written to the underlying store. The methods +// must be able to undo the transformation caused by the other. +type Transformer interface { + Read + Write +} + +// ResourceTransformers returns a transformer for the provided resource. 
+type ResourceTransformers interface { + TransformerForResource(resource schema.GroupResource) Transformer +} + // DefaultContext is a simple implementation of Context for a slice of bytes. type DefaultContext []byte @@ -144,6 +160,7 @@ func (t *prefixTransformers) TransformFromStorage(ctx context.Context, data []by } } if err := errors.Reduce(errors.NewAggregate(errs)); err != nil { + logTransformErr(ctx, err, "failed to decrypt data") return nil, false, err } RecordTransformation("from_storage", "unknown", time.Since(start), t.err) @@ -157,6 +174,7 @@ func (t *prefixTransformers) TransformToStorage(ctx context.Context, data []byte result, err := transformer.Transformer.TransformToStorage(ctx, data, dataCtx) RecordTransformation("to_storage", string(transformer.Prefix), time.Since(start), err) if err != nil { + logTransformErr(ctx, err, "failed to encrypt data") return nil, err } prefixedData := make([]byte, len(transformer.Prefix), len(result)+len(transformer.Prefix)) @@ -164,3 +182,32 @@ func (t *prefixTransformers) TransformToStorage(ctx context.Context, data []byte prefixedData = append(prefixedData, result...) 
return prefixedData, nil } + +func logTransformErr(ctx context.Context, err error, message string) { + requestInfo := getRequestInfoFromContext(ctx) + if klogLevel6 := klog.V(6); klogLevel6.Enabled() { + klogLevel6.InfoSDepth( + 1, + message, + "err", err, + "group", requestInfo.APIGroup, + "version", requestInfo.APIVersion, + "resource", requestInfo.Resource, + "subresource", requestInfo.Subresource, + "verb", requestInfo.Verb, + "namespace", requestInfo.Namespace, + "name", requestInfo.Name, + ) + + return + } + + klog.ErrorSDepth(1, err, message) +} + +func getRequestInfoFromContext(ctx context.Context) *genericapirequest.RequestInfo { + if reqInfo, found := genericapirequest.RequestInfoFrom(ctx); found { + return reqInfo + } + return &genericapirequest.RequestInfo{} +} diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storageversion/manager.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storageversion/manager.go index 0e0d9542b0f5..d7d3863118ad 100644 --- a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storageversion/manager.go +++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storageversion/manager.go @@ -44,6 +44,10 @@ type ResourceInfo struct { // DirectlyDecodableVersions is a list of versions that the converter for REST storage knows how to convert. This // contains items like apiextensions.k8s.io/v1beta1 even if we don't serve that version. DirectlyDecodableVersions []schema.GroupVersion + + // ServedVersions holds a list of all versions of GroupResource that are served. Note that a server may be able to + // decode a particular version, but still not serve it. + ServedVersions []string } // Manager records the resources whose StorageVersions need updates, and provides a method to update those StorageVersions. 
@@ -143,7 +147,10 @@ func (s *defaultManager) UpdateStorageVersions(kubeAPIServerClientConfig *rest.C if len(gr.Group) == 0 { gr.Group = "core" } - if err := updateStorageVersionFor(sc, serverID, gr, r.EncodingVersion, decodableVersions); err != nil { + + servedVersions := r.ServedVersions + + if err := updateStorageVersionFor(sc, serverID, gr, r.EncodingVersion, decodableVersions, servedVersions); err != nil { utilruntime.HandleError(fmt.Errorf("failed to update storage version for %v: %v", r.GroupResource, err)) s.recordStatusFailure(&r, err) hasFailure = true diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storageversion/updater.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storageversion/updater.go index ce4d87e91c48..abf7218bc0da 100644 --- a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storageversion/updater.go +++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storageversion/updater.go @@ -123,12 +123,12 @@ func setStatusCondition(conditions *[]v1alpha1.StorageVersionCondition, newCondi } // updateStorageVersionFor updates the storage version object for the resource. 
-func updateStorageVersionFor(c Client, apiserverID string, gr schema.GroupResource, encodingVersion string, decodableVersions []string) error { +func updateStorageVersionFor(c Client, apiserverID string, gr schema.GroupResource, encodingVersion string, decodableVersions []string, servedVersions []string) error { retries := 3 var retry int var err error for retry < retries { - err = singleUpdate(c, apiserverID, gr, encodingVersion, decodableVersions) + err = singleUpdate(c, apiserverID, gr, encodingVersion, decodableVersions, servedVersions) if err == nil { return nil } @@ -145,7 +145,7 @@ func updateStorageVersionFor(c Client, apiserverID string, gr schema.GroupResour return err } -func singleUpdate(c Client, apiserverID string, gr schema.GroupResource, encodingVersion string, decodableVersions []string) error { +func singleUpdate(c Client, apiserverID string, gr schema.GroupResource, encodingVersion string, decodableVersions []string, servedVersions []string) error { shouldCreate := false name := fmt.Sprintf("%s.%s", gr.Group, gr.Resource) sv, err := c.Get(context.TODO(), name, metav1.GetOptions{}) @@ -157,7 +157,7 @@ func singleUpdate(c Client, apiserverID string, gr schema.GroupResource, encodin sv = &v1alpha1.StorageVersion{} sv.ObjectMeta.Name = name } - updatedSV := localUpdateStorageVersion(sv, apiserverID, encodingVersion, decodableVersions) + updatedSV := localUpdateStorageVersion(sv, apiserverID, encodingVersion, decodableVersions, servedVersions) if shouldCreate { createdSV, err := c.Create(context.TODO(), updatedSV, metav1.CreateOptions{}) if err != nil { @@ -174,11 +174,12 @@ func singleUpdate(c Client, apiserverID string, gr schema.GroupResource, encodin // localUpdateStorageVersion updates the input storageversion with given server storageversion info. // The function updates the input storageversion in place. 
-func localUpdateStorageVersion(sv *v1alpha1.StorageVersion, apiserverID, encodingVersion string, decodableVersions []string) *v1alpha1.StorageVersion { +func localUpdateStorageVersion(sv *v1alpha1.StorageVersion, apiserverID, encodingVersion string, decodableVersions []string, servedVersions []string) *v1alpha1.StorageVersion { newSSV := v1alpha1.ServerStorageVersion{ APIServerID: apiserverID, EncodingVersion: encodingVersion, DecodableVersions: decodableVersions, + ServedVersions: servedVersions, } foundSSV := false for i, ssv := range sv.Status.StorageVersions { diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/util/flowcontrol/OWNERS b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/util/flowcontrol/OWNERS index 2556c589fd12..fd722b2acf5e 100644 --- a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/util/flowcontrol/OWNERS +++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/util/flowcontrol/OWNERS @@ -1,15 +1,15 @@ # See the OWNERS docs at https://go.k8s.io/owners approvers: - - lavalamp - deads2k - yue9944882 - MikeSpreitzer reviewers: - - lavalamp - deads2k - yue9944882 - MikeSpreitzer labels: - sig/api-machinery - area/apiserver +emeritus_approvers: + - lavalamp diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/util/flowcontrol/apf_controller.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/util/flowcontrol/apf_controller.go index 2048a6ef6b04..708bf2cdef0b 100644 --- a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/util/flowcontrol/apf_controller.go +++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/util/flowcontrol/apf_controller.go @@ -58,6 +58,11 @@ import ( const timeFmt = "2006-01-02T15:04:05.999" +const ( + // priorityLevelMaxSeatsPercent is the percentage of the nominalCL used as max seats allocatable from work estimator + priorityLevelMaxSeatsPercent = float64(0.15) +) + // This file contains a simple local (to the apiserver) controller // that digests API Priority and Fairness config objects (FlowSchema // and 
PriorityLevelConfiguration) into the data structure that the @@ -151,6 +156,12 @@ type configController struct { // watchTracker implements the necessary WatchTracker interface. WatchTracker + // MaxSeatsTracker tracks the maximum seats that should be allocatable from the + // work estimator for a given priority level. This controller does not enforce + // any limits on max seats stored in this tracker, it is up to the work estimator + // to set lower/upper limits on max seats (currently min=1, max=10). + MaxSeatsTracker + // the most recent update attempts, ordered by increasing age. // Consumer trims to keep only the last minute's worth of entries. // The controller uses this to limit itself to at most six updates @@ -197,16 +208,15 @@ type priorityLevelState struct { pl *flowcontrol.PriorityLevelConfiguration // qsCompleter holds the QueueSetCompleter derived from `config` - // and `queues` if config is not exempt, nil otherwise. + // and `queues`. qsCompleter fq.QueueSetCompleter - // The QueueSet for this priority level. This is nil if and only - // if the priority level is exempt. + // The QueueSet for this priority level. + // Never nil. queues fq.QueueSet // quiescing==true indicates that this priority level should be - // removed when its queues have all drained. May be true only if - // queues is non-nil. + // removed when its queues have all drained. 
quiescing bool // number of goroutines between Controller::Match and calling the @@ -275,6 +285,7 @@ func newTestableController(config TestableConfig) *configController { flowcontrolClient: config.FlowcontrolClient, priorityLevelStates: make(map[string]*priorityLevelState), WatchTracker: NewWatchTracker(), + MaxSeatsTracker: NewMaxSeatsTracker(), } klog.V(2).Infof("NewTestableController %q with serverConcurrencyLimit=%d, requestWaitLimit=%s, name=%s, asFieldManager=%q", cfgCtlr.name, cfgCtlr.serverConcurrencyLimit, cfgCtlr.requestWaitLimit, cfgCtlr.name, cfgCtlr.asFieldManager) // Start with longish delay because conflicts will be between @@ -384,9 +395,6 @@ func (cfgCtlr *configController) updateBorrowingLocked(setCompleters bool, plSta items := make([]allocProblemItem, 0, len(plStates)) plNames := make([]string, 0, len(plStates)) for plName, plState := range plStates { - if plState.pl.Spec.Limited == nil { - continue - } obs := plState.seatDemandIntegrator.Reset() plState.seatDemandStats.update(obs) // Lower bound on this priority level's adjusted concurreny limit is the lesser of: @@ -403,7 +411,7 @@ func (cfgCtlr *configController) updateBorrowingLocked(setCompleters bool, plSta }) } if len(items) == 0 && cfgCtlr.nominalCLSum > 0 { - klog.ErrorS(nil, "Impossible: no non-exempt priority levels", "plStates", cfgCtlr.priorityLevelStates) + klog.ErrorS(nil, "Impossible: no priority levels", "plStates", cfgCtlr.priorityLevelStates) return } allocs, fairFrac, err := computeConcurrencyAllocation(cfgCtlr.nominalCLSum, items) @@ -412,17 +420,11 @@ func (cfgCtlr *configController) updateBorrowingLocked(setCompleters bool, plSta allocs = make([]float64, len(items)) for idx, plName := range plNames { plState := plStates[plName] - if plState.pl.Spec.Limited == nil { - continue - } allocs[idx] = float64(plState.currentCL) } } for idx, plName := range plNames { plState := plStates[plName] - if plState.pl.Spec.Limited == nil { - continue - } if setCompleters { qsCompleter, err 
:= queueSetCompleterForPL(cfgCtlr.queueSetFactory, plState.queues, plState.pl, cfgCtlr.requestWaitLimit, plState.reqsGaugePair, plState.execSeatsObs, @@ -441,8 +443,15 @@ func (cfgCtlr *configController) updateBorrowingLocked(setCompleters bool, plSta if relChange >= 0.05 { logLevel = 2 } - klog.V(logLevel).InfoS("Update CurrentCL", "plName", plName, "seatDemandHighWatermark", plState.seatDemandStats.highWatermark, "seatDemandAvg", plState.seatDemandStats.avg, "seatDemandStdev", plState.seatDemandStats.stdDev, "seatDemandSmoothed", plState.seatDemandStats.smoothed, "fairFrac", fairFrac, "currentCL", currentCL, "backstop", err != nil) - plState.queues = plState.qsCompleter.Complete(fq.DispatchingConfig{ConcurrencyLimit: currentCL}) + var concurrencyDenominator int + if currentCL > 0 { + concurrencyDenominator = currentCL + } else { + concurrencyDenominator = int(math.Max(1, math.Round(float64(cfgCtlr.serverConcurrencyLimit)/10))) + } + plState.seatDemandRatioedGauge.SetDenominator(float64(concurrencyDenominator)) + klog.V(logLevel).InfoS("Update CurrentCL", "plName", plName, "seatDemandHighWatermark", plState.seatDemandStats.highWatermark, "seatDemandAvg", plState.seatDemandStats.avg, "seatDemandStdev", plState.seatDemandStats.stdDev, "seatDemandSmoothed", plState.seatDemandStats.smoothed, "fairFrac", fairFrac, "currentCL", currentCL, "concurrencyDenominator", concurrencyDenominator, "backstop", err != nil) + plState.queues = plState.qsCompleter.Complete(fq.DispatchingConfig{ConcurrencyLimit: currentCL, ConcurrencyDenominator: concurrencyDenominator}) } metrics.SetFairFrac(float64(fairFrac)) } @@ -690,9 +699,8 @@ func (meal *cfgMeal) digestNewPLsLocked(newPLs []*flowcontrol.PriorityLevelConfi klog.V(3).Infof("Priority level %q was undesired and has become desired again", pl.Name) state.quiescing = false } - if state.pl.Spec.Limited != nil { - meal.shareSum += float64(state.pl.Spec.Limited.NominalConcurrencyShares) - } + nominalConcurrencyShares, _, _ := 
plSpecCommons(state.pl) + meal.shareSum += float64(nominalConcurrencyShares) meal.haveExemptPL = meal.haveExemptPL || pl.Name == flowcontrol.PriorityLevelConfigurationNameExempt meal.haveCatchAllPL = meal.haveCatchAllPL || pl.Name == flowcontrol.PriorityLevelConfigurationNameCatchAll } @@ -765,15 +773,16 @@ func (meal *cfgMeal) processOldPLsLocked() { continue } if plName == flowcontrol.PriorityLevelConfigurationNameExempt && !meal.haveExemptPL || plName == flowcontrol.PriorityLevelConfigurationNameCatchAll && !meal.haveCatchAllPL { - // BTW, we know the Spec has not changed because the - // mandatory objects have immutable Specs + // BTW, we know the Spec has not changed what it says about queuing because the + // mandatory objects have immutable Specs as far as queuing is concerned. klog.V(3).Infof("Retaining mandatory priority level %q despite lack of API object", plName) } else { - if plState.queues == nil || plState.numPending == 0 && plState.queues.IsIdle() { - // Either there are no queues or they are done + if plState.numPending == 0 && plState.queues.IsIdle() { + // The QueueSet is done // draining and no use is coming from another // goroutine - klog.V(3).Infof("Removing undesired priority level %q (nilQueues=%v), Type=%v", plName, plState.queues == nil, plState.pl.Spec.Type) + klog.V(3).Infof("Removing undesired priority level %q, Type=%v", plName, plState.pl.Spec.Type) + meal.cfgCtlr.MaxSeatsTracker.ForgetPriorityLevel(plName) continue } if !plState.quiescing { @@ -789,15 +798,14 @@ func (meal *cfgMeal) processOldPLsLocked() { // This can not happen because queueSetCompleterForPL already approved this config panic(fmt.Sprintf("%s from name=%q spec=%s", err, plName, fcfmt.Fmt(plState.pl.Spec))) } - if plState.pl.Spec.Limited != nil { - // We deliberately include the lingering priority levels - // here so that their queues get some concurrency and they - // continue to drain.
During this interim a lingering - // priority level continues to get a concurrency - // allocation determined by all the share values in the - // regular way. - meal.shareSum += float64(plState.pl.Spec.Limited.NominalConcurrencyShares) - } + // We deliberately include the lingering priority levels + // here so that their queues get some concurrency and they + // continue to drain. During this interim a lingering + // priority level continues to get a concurrency + // allocation determined by all the share values in the + // regular way. + nominalConcurrencyShares, _, _ := plSpecCommons(plState.pl) + meal.shareSum += float64(nominalConcurrencyShares) meal.haveExemptPL = meal.haveExemptPL || plName == flowcontrol.PriorityLevelConfigurationNameExempt meal.haveCatchAllPL = meal.haveCatchAllPL || plName == flowcontrol.PriorityLevelConfigurationNameCatchAll meal.newPLStates[plName] = plState @@ -809,41 +817,46 @@ func (meal *cfgMeal) processOldPLsLocked() { // QueueSets. func (meal *cfgMeal) finishQueueSetReconfigsLocked() { for plName, plState := range meal.newPLStates { - if plState.pl.Spec.Limited == nil { - klog.V(5).Infof("Using exempt priority level %q: quiescing=%v", plName, plState.quiescing) - continue - } - - limited := plState.pl.Spec.Limited + nominalConcurrencyShares, lendablePercent, borrowingLimitPercent := plSpecCommons(plState.pl) // The use of math.Ceil here means that the results might sum // to a little more than serverConcurrencyLimit but the // difference will be negligible. 
- concurrencyLimit := int(math.Ceil(float64(meal.cfgCtlr.serverConcurrencyLimit) * float64(limited.NominalConcurrencyShares) / meal.shareSum)) + concurrencyLimit := int(math.Ceil(float64(meal.cfgCtlr.serverConcurrencyLimit) * float64(nominalConcurrencyShares) / meal.shareSum)) var lendableCL, borrowingCL int - if limited.LendablePercent != nil { - lendableCL = int(math.Round(float64(concurrencyLimit) * float64(*limited.LendablePercent) / 100)) + if lendablePercent != nil { + lendableCL = int(math.Round(float64(concurrencyLimit) * float64(*lendablePercent) / 100)) } - if limited.BorrowingLimitPercent != nil { - borrowingCL = int(math.Round(float64(concurrencyLimit) * float64(*limited.BorrowingLimitPercent) / 100)) + if borrowingLimitPercent != nil { + borrowingCL = int(math.Round(float64(concurrencyLimit) * float64(*borrowingLimitPercent) / 100)) } else { borrowingCL = meal.cfgCtlr.serverConcurrencyLimit } + metrics.SetPriorityLevelConfiguration(plName, concurrencyLimit, concurrencyLimit-lendableCL, concurrencyLimit+borrowingCL) - plState.seatDemandRatioedGauge.SetDenominator(float64(concurrencyLimit)) cfgChanged := plState.nominalCL != concurrencyLimit || plState.minCL != concurrencyLimit-lendableCL || plState.maxCL != concurrencyLimit+borrowingCL plState.nominalCL = concurrencyLimit plState.minCL = concurrencyLimit - lendableCL plState.maxCL = concurrencyLimit + borrowingCL meal.maxExecutingRequests += concurrencyLimit - var waitLimit int - if qCfg := limited.LimitResponse.Queuing; qCfg != nil { - waitLimit = int(qCfg.Queues * qCfg.QueueLengthLimit) + if limited := plState.pl.Spec.Limited; limited != nil { + if qCfg := limited.LimitResponse.Queuing; qCfg != nil { + meal.maxWaitingRequests += int(qCfg.Queues * qCfg.QueueLengthLimit) + + // Max seats allocatable from work estimator is calculated as MAX(1, MIN(0.15 * nominalCL, nominalCL/handSize)). + // This is to keep max seats relative to total available concurrency with a minimum value of 1. 
+ // 15% of nominal concurrency was chosen since it preserved the previous max seats of 10 for default priority levels + // when using apiserver's default total server concurrency of 600 (--max-requests-inflight=400, --max-mutating-requests-inflight=200). + // This ensures that clusters with relatively high inflight requests will continue to use a max seats of 10 + // while clusters with lower inflight requests will use max seats no greater than nominalCL/handSize. + // Calculated max seats can return arbitrarily high values but work estimator currently limits max seats at 10. + handSize := plState.pl.Spec.Limited.LimitResponse.Queuing.HandSize + maxSeats := uint64(math.Max(1, math.Min(math.Ceil(float64(concurrencyLimit)*priorityLevelMaxSeatsPercent), float64(int32(concurrencyLimit)/handSize)))) + meal.cfgCtlr.MaxSeatsTracker.SetMaxSeats(plName, maxSeats) + } } - meal.maxWaitingRequests += waitLimit - if plState.queues == nil { initialCL := concurrencyLimit - lendableCL/2 - klog.V(2).Infof("Introducing queues for priority level %q: config=%s, nominalCL=%d, lendableCL=%d, borrowingCL=%d, currentCL=%d, quiescing=%v (shares=%v, shareSum=%v)", plName, fcfmt.Fmt(plState.pl.Spec), concurrencyLimit, lendableCL, borrowingCL, initialCL, plState.quiescing, plState.pl.Spec.Limited.NominalConcurrencyShares, meal.shareSum) + klog.V(2).Infof("Introducing queues for priority level %q: config=%s, nominalCL=%d, lendableCL=%d, borrowingCL=%d, currentCL=%d, quiescing=%v (shares=%v, shareSum=%v)", plName, fcfmt.Fmt(plState.pl.Spec), concurrencyLimit, lendableCL, borrowingCL, initialCL, plState.quiescing, nominalConcurrencyShares, meal.shareSum) plState.seatDemandStats = seatDemandStats{} plState.currentCL = initialCL } else { @@ -851,7 +864,7 @@ func (meal *cfgMeal) finishQueueSetReconfigsLocked() { if cfgChanged { logLevel = 2 } - klog.V(logLevel).Infof("Retaining queues for priority level %q: config=%s, nominalCL=%d, lendableCL=%d, borrowingCL=%d, currentCL=%d, quiescing=%v, 
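The arithmetic in this hunk (nominal concurrency from shares, plus the new max-seats bound) can be sketched as a pair of standalone helpers. The formulas below mirror the patch; the example inputs (a 600-seat server, shares of 49 against an assumed shareSum of 120, hand size 8) are illustrative assumptions, not values taken from the diff.

```go
package main

import (
	"fmt"
	"math"
)

// nominalCL mirrors the diff's formula for a priority level's nominal
// concurrency limit: ceil(serverConcurrencyLimit * shares / shareSum).
func nominalCL(serverCL int, shares int32, shareSum float64) int {
	return int(math.Ceil(float64(serverCL) * float64(shares) / shareSum))
}

// maxSeats mirrors the diff's max-seats bound for the work estimator:
// MAX(1, MIN(ceil(0.15 * nominalCL), nominalCL/handSize)).
func maxSeats(nominal int, handSize int32) uint64 {
	return uint64(math.Max(1, math.Min(math.Ceil(float64(nominal)*0.15), float64(int32(nominal)/handSize))))
}

func main() {
	cl := nominalCL(600, 49, 120) // assumed example inputs
	fmt.Println(cl, maxSeats(cl, 8))
}
```

With these inputs the nominal limit is 245 and the computed max seats is 30, which the work estimator would then clamp to its own cap (currently 10), matching the behavior the comment describes for high-concurrency servers.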
numPending=%d (shares=%v, shareSum=%v)", plName, fcfmt.Fmt(plState.pl.Spec), concurrencyLimit, lendableCL, borrowingCL, plState.currentCL, plState.quiescing, plState.numPending, plState.pl.Spec.Limited.NominalConcurrencyShares, meal.shareSum) + klog.V(logLevel).Infof("Retaining queues for priority level %q: config=%s, nominalCL=%d, lendableCL=%d, borrowingCL=%d, currentCL=%d, quiescing=%v, numPending=%d (shares=%v, shareSum=%v)", plName, fcfmt.Fmt(plState.pl.Spec), concurrencyLimit, lendableCL, borrowingCL, plState.currentCL, plState.quiescing, plState.numPending, nominalConcurrencyShares, meal.shareSum) } } meal.cfgCtlr.nominalCLSum = meal.maxExecutingRequests @@ -859,32 +872,35 @@ func (meal *cfgMeal) finishQueueSetReconfigsLocked() { } // queueSetCompleterForPL returns an appropriate QueueSetCompleter for the -// given priority level configuration. Returns nil if that config -// does not call for limiting. Returns nil and an error if the given +// given priority level configuration. Returns nil and an error if the given // object is malformed in a way that is a problem for this package. 
func queueSetCompleterForPL(qsf fq.QueueSetFactory, queues fq.QueueSet, pl *flowcontrol.PriorityLevelConfiguration, requestWaitLimit time.Duration, reqsIntPair metrics.RatioedGaugePair, execSeatsObs metrics.RatioedGauge, seatDemandGauge metrics.Gauge) (fq.QueueSetCompleter, error) { - if (pl.Spec.Type == flowcontrol.PriorityLevelEnablementExempt) != (pl.Spec.Limited == nil) { - return nil, errors.New("broken union structure at the top") + if (pl.Spec.Type == flowcontrol.PriorityLevelEnablementLimited) != (pl.Spec.Limited != nil) { + return nil, errors.New("broken union structure at the top, for Limited") + } + if (pl.Spec.Type == flowcontrol.PriorityLevelEnablementExempt) != (pl.Spec.Exempt != nil) { + return nil, errors.New("broken union structure at the top, for Exempt") } if (pl.Spec.Type == flowcontrol.PriorityLevelEnablementExempt) != (pl.Name == flowcontrol.PriorityLevelConfigurationNameExempt) { // This package does not attempt to cope with a priority level dynamically switching between exempt and not. 
return nil, errors.New("non-alignment between name and type") } - if pl.Spec.Limited == nil { - return nil, nil - } - if (pl.Spec.Limited.LimitResponse.Type == flowcontrol.LimitResponseTypeReject) != (pl.Spec.Limited.LimitResponse.Queuing == nil) { - return nil, errors.New("broken union structure for limit response") - } - qcAPI := pl.Spec.Limited.LimitResponse.Queuing qcQS := fq.QueuingConfig{Name: pl.Name} - if qcAPI != nil { - qcQS = fq.QueuingConfig{Name: pl.Name, - DesiredNumQueues: int(qcAPI.Queues), - QueueLengthLimit: int(qcAPI.QueueLengthLimit), - HandSize: int(qcAPI.HandSize), - RequestWaitLimit: requestWaitLimit, + if pl.Spec.Limited != nil { + if (pl.Spec.Limited.LimitResponse.Type == flowcontrol.LimitResponseTypeReject) != (pl.Spec.Limited.LimitResponse.Queuing == nil) { + return nil, errors.New("broken union structure for limit response") + } + qcAPI := pl.Spec.Limited.LimitResponse.Queuing + if qcAPI != nil { + qcQS = fq.QueuingConfig{Name: pl.Name, + DesiredNumQueues: int(qcAPI.Queues), + QueueLengthLimit: int(qcAPI.QueueLengthLimit), + HandSize: int(qcAPI.HandSize), + RequestWaitLimit: requestWaitLimit, + } } + } else { + qcQS = fq.QueuingConfig{Name: pl.Name, DesiredNumQueues: -1} } var qsc fq.QueueSetCompleter var err error @@ -894,7 +910,7 @@ func queueSetCompleterForPL(qsf fq.QueueSetFactory, queues fq.QueueSet, pl *flow qsc, err = qsf.BeginConstruction(qcQS, reqsIntPair, execSeatsObs, seatDemandGauge) } if err != nil { - err = fmt.Errorf("priority level %q has QueuingConfiguration %#+v, which is invalid: %w", pl.Name, qcAPI, err) + err = fmt.Errorf("priority level %q has QueuingConfiguration %#+v, which is invalid: %w", pl.Name, qcQS, err) } return qsc, err } @@ -957,16 +973,8 @@ func (meal *cfgMeal) imaginePL(proto *flowcontrol.PriorityLevelConfiguration, re seatDemandIntegrator: seatDemandIntegrator, seatDemandRatioedGauge: seatDemandRatioedGauge, } - if proto.Spec.Limited != nil { - meal.shareSum += 
float64(proto.Spec.Limited.NominalConcurrencyShares) - } -} - -type immediateRequest struct{} - -func (immediateRequest) Finish(execute func()) bool { - execute() - return false + nominalConcurrencyShares, _, _ := plSpecCommons(proto) + meal.shareSum += float64(nominalConcurrencyShares) } // startRequest classifies and, if appropriate, enqueues the request. @@ -1007,32 +1015,31 @@ func (cfgCtlr *configController) startRequest(ctx context.Context, rd RequestDig } plName := selectedFlowSchema.Spec.PriorityLevelConfiguration.Name plState := cfgCtlr.priorityLevelStates[plName] - if plState.pl.Spec.Type == flowcontrol.PriorityLevelEnablementExempt { - noteFn(selectedFlowSchema, plState.pl, "") - klog.V(7).Infof("startRequest(%#+v) => fsName=%q, distMethod=%#+v, plName=%q, immediate", rd, selectedFlowSchema.Name, selectedFlowSchema.Spec.DistinguisherMethod, plName) - return selectedFlowSchema, plState.pl, true, immediateRequest{}, time.Time{} - } var numQueues int32 - if plState.pl.Spec.Limited.LimitResponse.Type == flowcontrol.LimitResponseTypeQueue { - numQueues = plState.pl.Spec.Limited.LimitResponse.Queuing.Queues - } - var flowDistinguisher string var hashValue uint64 - if numQueues > 1 { - flowDistinguisher = computeFlowDistinguisher(rd, selectedFlowSchema.Spec.DistinguisherMethod) - hashValue = hashFlowID(selectedFlowSchema.Name, flowDistinguisher) + var flowDistinguisher string + if plState.pl.Spec.Type != flowcontrol.PriorityLevelEnablementExempt { + if plState.pl.Spec.Limited.LimitResponse.Type == flowcontrol.LimitResponseTypeQueue { + numQueues = plState.pl.Spec.Limited.LimitResponse.Queuing.Queues + } + if numQueues > 1 { + flowDistinguisher = computeFlowDistinguisher(rd, selectedFlowSchema.Spec.DistinguisherMethod) + hashValue = hashFlowID(selectedFlowSchema.Name, flowDistinguisher) + } } noteFn(selectedFlowSchema, plState.pl, flowDistinguisher) workEstimate := workEstimator() - startWaitingTime = cfgCtlr.clock.Now() + if plState.pl.Spec.Type != 
flowcontrol.PriorityLevelEnablementExempt { + startWaitingTime = cfgCtlr.clock.Now() + } klog.V(7).Infof("startRequest(%#+v) => fsName=%q, distMethod=%#+v, plName=%q, numQueues=%d", rd, selectedFlowSchema.Name, selectedFlowSchema.Spec.DistinguisherMethod, plName, numQueues) req, idle := plState.queues.StartRequest(ctx, &workEstimate, hashValue, flowDistinguisher, selectedFlowSchema.Name, rd.RequestInfo, rd.User, queueNoteFn) if idle { cfgCtlr.maybeReapReadLocked(plName, plState) } - return selectedFlowSchema, plState.pl, false, req, startWaitingTime + return selectedFlowSchema, plState.pl, plState.pl.Spec.Type == flowcontrol.PriorityLevelEnablementExempt, req, startWaitingTime } // maybeReap will remove the last internal traces of the named @@ -1046,10 +1053,6 @@ func (cfgCtlr *configController) maybeReap(plName string) { klog.V(7).Infof("plName=%s, plState==nil", plName) return } - if plState.queues == nil { - klog.V(7).Infof("plName=%s, plState.queues==nil", plName) - return - } useless := plState.quiescing && plState.numPending == 0 && plState.queues.IsIdle() klog.V(7).Infof("plState.quiescing=%v, plState.numPending=%d, useless=%v", plState.quiescing, plState.numPending, useless) if !useless { @@ -1107,3 +1110,16 @@ func relDiff(x, y float64) float64 { } return diff / den } + +// plSpecCommons returns the (NominalConcurrencyShares, LendablePercent, BorrowingLimitPercent) of the given priority level config +func plSpecCommons(pl *flowcontrol.PriorityLevelConfiguration) (int32, *int32, *int32) { + if limiter := pl.Spec.Limited; limiter != nil { + return limiter.NominalConcurrencyShares, limiter.LendablePercent, limiter.BorrowingLimitPercent + } + limiter := pl.Spec.Exempt + var nominalConcurrencyShares int32 + if limiter.NominalConcurrencyShares != nil { + nominalConcurrencyShares = *limiter.NominalConcurrencyShares + } + return nominalConcurrencyShares, limiter.LendablePercent, nil +} diff --git 
a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/util/flowcontrol/apf_controller_debug.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/util/flowcontrol/apf_controller_debug.go index 0b9bc02f9272..fde0c51512ff 100644 --- a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/util/flowcontrol/apf_controller_debug.go +++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/util/flowcontrol/apf_controller_debug.go @@ -29,6 +29,7 @@ import ( "k8s.io/apimachinery/pkg/runtime/schema" "k8s.io/apimachinery/pkg/util/runtime" "k8s.io/apiserver/pkg/server/mux" + "k8s.io/apiserver/pkg/util/flowcontrol/debug" ) const ( @@ -75,22 +76,6 @@ func (cfgCtlr *configController) dumpPriorityLevels(w http.ResponseWriter, r *ht continue } - if plState.queues == nil { - tabPrint(tabWriter, row( - plState.pl.Name, // 1 - "", // 2 - "", // 3 - "", // 4 - "", // 5 - "", // 6 - "", // 7 - "", // 8 - "", // 9 - "", // 10 - )) - endLine(tabWriter) - continue - } queueSetDigest := plState.queues.Dump(false) activeQueueNum := 0 for _, q := range queueSetDigest.Queues { @@ -134,21 +119,6 @@ func (cfgCtlr *configController) dumpQueues(w http.ResponseWriter, r *http.Reque tabPrint(tabWriter, rowForHeaders(columnHeaders)) endLine(tabWriter) for _, plState := range cfgCtlr.priorityLevelStates { - if plState.queues == nil { - tabPrint(tabWriter, row( - plState.pl.Name, // 1 - "", // 2 - "", // 3 - "", // 4 - "", // 5 - "", // 6 - "", // 7 - "", // 8 - "", // 9 - )) - endLine(tabWriter) - continue - } queueSetDigest := plState.queues.Dump(false) for i, q := range queueSetDigest.Queues { tabPrint(tabWriter, row( @@ -185,57 +155,65 @@ func (cfgCtlr *configController) dumpRequests(w http.ResponseWriter, r *http.Req "InitialSeats", // 7 "FinalSeats", // 8 "AdditionalLatency", // 9 + "StartTime", // 10 })) if includeRequestDetails { continueLine(tabWriter) tabPrint(tabWriter, rowForHeaders([]string{ - "UserName", // 10 - "Verb", // 11 - "APIPath", // 12 - "Namespace", // 13 - "Name", // 14 - "APIVersion", // 15 - 
"Resource", // 16 - "SubResource", // 17 + "UserName", // 11 + "Verb", // 12 + "APIPath", // 13 + "Namespace", // 14 + "Name", // 15 + "APIVersion", // 16 + "Resource", // 17 + "SubResource", // 18 })) } endLine(tabWriter) for _, plState := range cfgCtlr.priorityLevelStates { - if plState.queues == nil { - continue - } queueSetDigest := plState.queues.Dump(includeRequestDetails) + dumpRequest := func(iq, ir int, r debug.RequestDump) { + tabPrint(tabWriter, row( + plState.pl.Name, // 1 + r.MatchedFlowSchema, // 2 + strconv.Itoa(iq), // 3 + strconv.Itoa(ir), // 4 + r.FlowDistinguisher, // 5 + r.ArriveTime.UTC().Format(time.RFC3339Nano), // 6 + strconv.Itoa(int(r.WorkEstimate.InitialSeats)), // 7 + strconv.Itoa(int(r.WorkEstimate.FinalSeats)), // 8 + r.WorkEstimate.AdditionalLatency.String(), // 9 + r.StartTime.UTC().Format(time.RFC3339Nano), // 10 + )) + if includeRequestDetails { + continueLine(tabWriter) + tabPrint(tabWriter, rowForRequestDetails( + r.UserName, // 11 + r.RequestInfo.Verb, // 12 + r.RequestInfo.Path, // 13 + r.RequestInfo.Namespace, // 14 + r.RequestInfo.Name, // 15 + schema.GroupVersion{ + Group: r.RequestInfo.APIGroup, + Version: r.RequestInfo.APIVersion, + }.String(), // 16 + r.RequestInfo.Resource, // 17 + r.RequestInfo.Subresource, // 18 + )) + } + endLine(tabWriter) + } for iq, q := range queueSetDigest.Queues { for ir, r := range q.Requests { - tabPrint(tabWriter, row( - plState.pl.Name, // 1 - r.MatchedFlowSchema, // 2 - strconv.Itoa(iq), // 3 - strconv.Itoa(ir), // 4 - r.FlowDistinguisher, // 5 - r.ArriveTime.UTC().Format(time.RFC3339Nano), // 6 - strconv.Itoa(int(r.WorkEstimate.InitialSeats)), // 7 - strconv.Itoa(int(r.WorkEstimate.FinalSeats)), // 8 - r.WorkEstimate.AdditionalLatency.String(), // 9 - )) - if includeRequestDetails { - continueLine(tabWriter) - tabPrint(tabWriter, rowForRequestDetails( - r.UserName, // 10 - r.RequestInfo.Verb, // 11 - r.RequestInfo.Path, // 12 - r.RequestInfo.Namespace, // 13 - r.RequestInfo.Name, // 14 - 
schema.GroupVersion{ - Group: r.RequestInfo.APIGroup, - Version: r.RequestInfo.APIVersion, - }.String(), // 15 - r.RequestInfo.Resource, // 16 - r.RequestInfo.Subresource, // 17 - )) - } - endLine(tabWriter) + dumpRequest(iq, ir, r) } + for _, r := range q.RequestsExecuting { + dumpRequest(iq, -1, r) + } + } + for _, r := range queueSetDigest.QueuelessExecutingRequests { + dumpRequest(-1, -1, r) } } runtime.HandleError(tabWriter.Flush()) diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/util/flowcontrol/apf_filter.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/util/flowcontrol/apf_filter.go index 2929048ecc7c..76782623a847 100644 --- a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/util/flowcontrol/apf_filter.go +++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/util/flowcontrol/apf_filter.go @@ -77,6 +77,10 @@ type Interface interface { // WatchTracker provides the WatchTracker interface. WatchTracker + + // MaxSeatsTracker is invoked from the work estimator to track max seats + // that can be occupied by a request for a priority level. + MaxSeatsTracker } // This request filter implements https://github.com/kubernetes/enhancements/blob/master/keps/sig-api-machinery/1040-priority-and-fairness/README.md diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/util/flowcontrol/debug/dump.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/util/flowcontrol/debug/dump.go index f2945b613f91..2b8538dcd4c6 100644 --- a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/util/flowcontrol/debug/dump.go +++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/util/flowcontrol/debug/dump.go @@ -25,21 +25,23 @@ import ( // QueueSetDump is an instant dump of queue-set. 
type QueueSetDump struct { - Queues []QueueDump - Waiting int - Executing int - SeatsInUse int - SeatsWaiting int - Dispatched int - Rejected int - Timedout int - Cancelled int + Queues []QueueDump + QueuelessExecutingRequests []RequestDump + Waiting int + Executing int + SeatsInUse int + SeatsWaiting int + Dispatched int + Rejected int + Timedout int + Cancelled int } // QueueDump is an instant dump of one queue in a queue-set. type QueueDump struct { QueueSum QueueSum - Requests []RequestDump + Requests []RequestDump // just the waiting ones + RequestsExecuting []RequestDump NextDispatchR string ExecutingRequests int SeatsInUse int diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/util/flowcontrol/dropped_requests_tracker.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/util/flowcontrol/dropped_requests_tracker.go new file mode 100644 index 000000000000..74bf9eece630 --- /dev/null +++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/util/flowcontrol/dropped_requests_tracker.go @@ -0,0 +1,234 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package flowcontrol + +import ( + "sync" + "sync/atomic" + "time" + + "k8s.io/utils/clock" +) + +const ( + // maxRetryAfter represents the maximum possible retryAfter. + maxRetryAfter = int64(32) +) + +// DroppedRequestsTracker is an interface that allows tracking +// a history of dropped requests in the system for the purpose +// of adjusting the RetryAfter header to avoid system overload.
+type DroppedRequestsTracker interface { + // RecordDroppedRequest records a request that was just + // dropped from processing. + RecordDroppedRequest(plName string) + + // GetRetryAfter returns the currently suggested + // RetryAfter value. + GetRetryAfter(plName string) int64 +} + +// unixStat keeps a statistic of how many requests were dropped within +// a single second. +type unixStat struct { + unixTime int64 + requests int64 +} + +type droppedRequestsStats struct { + lock sync.RWMutex + + // history stores the history of dropped requests. + history []unixStat + + // To reduce lock-contention, we store the information about + // the current second here, which we can then access under + // reader lock. + currentUnix int64 + currentCount atomic.Int64 + + retryAfter atomic.Int64 + retryAfterUpdateUnix int64 +} + +func newDroppedRequestsStats(nowUnix int64) *droppedRequestsStats { + result := &droppedRequestsStats{ + // We assume that we can bump at any time after first dropped request. + retryAfterUpdateUnix: 0, + } + result.retryAfter.Store(1) + return result +} + +func (s *droppedRequestsStats) recordDroppedRequest(unixTime int64) { + // Short path - if the current second matches the passed time, + // just update the stats. + if done := func() bool { + s.lock.RLock() + defer s.lock.RUnlock() + if s.currentUnix == unixTime { + s.currentCount.Add(1) + return true + } + return false + }(); done { + return + } + + // We trigger the change of currentUnix. + s.lock.Lock() + defer s.lock.Unlock() + if s.currentUnix == unixTime { + s.currentCount.Add(1) + return + } + + s.updateHistory(s.currentUnix, s.currentCount.Load()) + s.currentUnix = unixTime + s.currentCount.Store(1) + + // We only consider updating retryAfter when bumping the current second. + // However, given that we didn't report anything for the current second, + // we recompute it based on statistics from the previous one.
+ s.updateRetryAfterIfNeededLocked(unixTime) +} + +func (s *droppedRequestsStats) updateHistory(unixTime int64, count int64) { + s.history = append(s.history, unixStat{unixTime: unixTime, requests: count}) + + startIndex := 0 + // Entries that exceed 2*retryAfter or maxRetryAfter are never going to be needed. + maxHistory := 2 * s.retryAfter.Load() + if maxHistory > maxRetryAfter { + maxHistory = maxRetryAfter + } + for ; startIndex < len(s.history) && unixTime-s.history[startIndex].unixTime > maxHistory; startIndex++ { + } + if startIndex > 0 { + s.history = s.history[startIndex:] + } +} + +// updateRetryAfterIfNeededLocked updates the retryAfter based on the number of +// dropped requests in the last `retryAfter` seconds: +// - if there were fewer than `retryAfter` dropped requests, it decreases +// retryAfter +// - if there were at least 3*`retryAfter` dropped requests, it increases +// retryAfter +// +// The rationale behind these numbers being fairly low is that APF is queuing +// requests and rejecting (dropping) them is a last resort, which is not expected +// unless a given priority level is actually overloaded. +// +// Additionally, we rate-limit the increases of retryAfter to wait at least +// `retryAfter` seconds after the previous increase to avoid multiple bumps +// on a single spike. +// +// We're working with the interval [unixTime-retryAfter, unixTime). +func (s *droppedRequestsStats) updateRetryAfterIfNeededLocked(unixTime int64) { + retryAfter := s.retryAfter.Load() + + droppedRequests := int64(0) + for i := len(s.history) - 1; i >= 0; i-- { + if unixTime-s.history[i].unixTime > retryAfter { + break + } + if s.history[i].unixTime < unixTime { + droppedRequests += s.history[i].requests + } + } + + if unixTime-s.retryAfterUpdateUnix >= retryAfter && droppedRequests >= 3*retryAfter { + // We try to mimic the TCP algorithm and thus are doubling + // the retryAfter here.
+ retryAfter *= 2 + if retryAfter >= maxRetryAfter { + retryAfter = maxRetryAfter + } + s.retryAfter.Store(retryAfter) + s.retryAfterUpdateUnix = unixTime + return + } + + if droppedRequests < retryAfter && retryAfter > 1 { + // We try to mimic the TCP algorithm and thus are linearly + // scaling down the retryAfter here. + retryAfter-- + s.retryAfter.Store(retryAfter) + return + } +} + +// droppedRequestsTracker implements the DroppedRequestsTracker interface +// for the purpose of adjusting the RetryAfter header for newly dropped +// requests to avoid system overload. +type droppedRequestsTracker struct { + now func() time.Time + + lock sync.RWMutex + plStats map[string]*droppedRequestsStats +} + +// NewDroppedRequestsTracker creates a new instance of +// DroppedRequestsTracker. +func NewDroppedRequestsTracker() DroppedRequestsTracker { + return newDroppedRequestsTracker(clock.RealClock{}.Now) +} + +func newDroppedRequestsTracker(now func() time.Time) *droppedRequestsTracker { + return &droppedRequestsTracker{ + now: now, + plStats: make(map[string]*droppedRequestsStats), + } +} + +func (t *droppedRequestsTracker) RecordDroppedRequest(plName string) { + unixTime := t.now().Unix() + + stats := func() *droppedRequestsStats { + // The list of priority levels should change very infrequently, + // so in almost all cases, the fast path should be enough. + t.lock.RLock() + if plStats, ok := t.plStats[plName]; ok { + t.lock.RUnlock() + return plStats + } + t.lock.RUnlock() + + // Slow path taking writer lock to update the map.
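The TCP-like adjustment described in the comments above (multiplicative increase capped at 32 seconds, linear decrease) condenses to one pure update rule. This is a sketch of just that rule, without the tracker's locking or history bookkeeping; the function name and inputs are illustrative, not part of the patch.

```go
package main

import "fmt"

const maxRetryAfter = int64(32)

// nextRetryAfter restates updateRetryAfterIfNeededLocked's rule:
// double (capped at maxRetryAfter) when drops over the last retryAfter
// seconds reach 3*retryAfter and at least retryAfter seconds have passed
// since the previous bump; otherwise decay linearly toward 1.
func nextRetryAfter(retryAfter, droppedRequests, secondsSinceLastBump int64) int64 {
	if secondsSinceLastBump >= retryAfter && droppedRequests >= 3*retryAfter {
		retryAfter *= 2
		if retryAfter > maxRetryAfter {
			retryAfter = maxRetryAfter
		}
		return retryAfter
	}
	if droppedRequests < retryAfter && retryAfter > 1 {
		return retryAfter - 1
	}
	return retryAfter
}

func main() {
	fmt.Println(nextRetryAfter(1, 3, 1))    // overloaded: 1 -> 2
	fmt.Println(nextRetryAfter(16, 48, 16)) // doubling stops at the 32s cap
	fmt.Println(nextRetryAfter(4, 1, 0))    // quiet: 4 -> 3
}
```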
+ t.lock.Lock() + defer t.lock.Unlock() + if plStats, ok := t.plStats[plName]; ok { + return plStats + } + stats := newDroppedRequestsStats(unixTime) + t.plStats[plName] = stats + return stats + }() + + stats.recordDroppedRequest(unixTime) +} + +func (t *droppedRequestsTracker) GetRetryAfter(plName string) int64 { + t.lock.RLock() + defer t.lock.RUnlock() + + if plStats, ok := t.plStats[plName]; ok { + return plStats.retryAfter.Load() + } + return 1 +} diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/util/flowcontrol/fairqueuing/interface.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/util/flowcontrol/fairqueuing/interface.go index 5522bb455406..013fd41e087f 100644 --- a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/util/flowcontrol/fairqueuing/interface.go +++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/util/flowcontrol/fairqueuing/interface.go @@ -34,7 +34,10 @@ type QueueSetFactory interface { // BeginConstruction does the first phase of creating a QueueSet. // The RatioedGaugePair observes number of requests, // execution covering just the regular phase. + // The denominator for the waiting phase is + // max(1, QueuingConfig.QueueLengthLimit) X max(1, QueuingConfig.DesiredNumQueues). // The RatioedGauge observes number of seats occupied through all phases of execution. + // The denominator for all the ratioed concurrency gauges is supplied later in the DispatchingConfig. // The Gauge observes the seat demand (executing + queued seats). BeginConstruction(QueuingConfig, metrics.RatioedGaugePair, metrics.RatioedGauge, metrics.Gauge) (QueueSetCompleter, error) } @@ -113,8 +116,11 @@ type QueuingConfig struct { Name string // DesiredNumQueues is the number of queues that the API says - // should exist now. This may be zero, in which case + // should exist now. This may be non-positive, in which case // QueueLengthLimit, HandSize, and RequestWaitLimit are ignored. 
+ // A value of zero means to respect the ConcurrencyLimit of the DispatchingConfig. + // A negative value means to always dispatch immediately upon arrival + // (i.e., the requests are "exempt" from limitation). DesiredNumQueues int // QueueLengthLimit is the maximum number of requests that may be waiting in a given queue at a time @@ -133,4 +139,8 @@ type QueuingConfig struct { type DispatchingConfig struct { // ConcurrencyLimit is the maximum number of requests of this QueueSet that may be executing at a time ConcurrencyLimit int + + // ConcurrencyDenominator is used in relative metrics of concurrency. + // It equals ConcurrencyLimit except when that is zero. + ConcurrencyDenominator int } diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/util/flowcontrol/fairqueuing/queueset/queueset.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/util/flowcontrol/fairqueuing/queueset/queueset.go index 11c15ccb7281..aa54a9ccf1d1 100644 --- a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/util/flowcontrol/fairqueuing/queueset/queueset.go +++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/util/flowcontrol/fairqueuing/queueset/queueset.go @@ -24,6 +24,7 @@ import ( "sync" "time" + "k8s.io/apimachinery/pkg/util/sets" "k8s.io/apiserver/pkg/util/flowcontrol/debug" fq "k8s.io/apiserver/pkg/util/flowcontrol/fairqueuing" "k8s.io/apiserver/pkg/util/flowcontrol/fairqueuing/eventclock" @@ -138,6 +139,10 @@ type queueSet struct { // from that queue. totRequestsExecuting int + // requestsExecutingSet is the set of requests executing in the real world IF + // there are no queues; otherwise the requests are tracked in the queues. + requestsExecutingSet sets.Set[*request] + // totSeatsInUse is the number of total "seats" in use by all the // request(s) that are currently executing in this queueset. 
totSeatsInUse int @@ -197,7 +202,7 @@ func (qsf *queueSetFactory) BeginConstruction(qCfg fq.QueuingConfig, reqsGaugePa // calls for one, and returns a non-nil error if the given config is // invalid. func checkConfig(qCfg fq.QueuingConfig) (*shufflesharding.Dealer, error) { - if qCfg.DesiredNumQueues == 0 { + if qCfg.DesiredNumQueues <= 0 { return nil, nil } dealer, err := shufflesharding.NewDealer(qCfg.DesiredNumQueues, qCfg.HandSize) @@ -219,6 +224,7 @@ func (qsc *queueSetCompleter) Complete(dCfg fq.DispatchingConfig) fq.QueueSet { qCfg: qsc.qCfg, currentR: 0, lastRealTime: qsc.factory.clock.Now(), + requestsExecutingSet: sets.New[*request](), } qs.promiseFactory = qsc.factory.promiseFactoryFactory(qs) } @@ -230,7 +236,7 @@ func (qsc *queueSetCompleter) Complete(dCfg fq.DispatchingConfig) fq.QueueSet { func createQueues(n, baseIndex int) []*queue { fqqueues := make([]*queue, n) for i := 0; i < n; i++ { - fqqueues[i] = &queue{index: baseIndex + i, requests: newRequestFIFO()} + fqqueues[i] = &queue{index: baseIndex + i, requestsWaiting: newRequestFIFO(), requestsExecuting: sets.New[*request]()} } return fqqueues } @@ -280,8 +286,8 @@ func (qs *queueSet) setConfiguration(ctx context.Context, qCfg fq.QueuingConfig, qll *= qCfg.DesiredNumQueues } qs.reqsGaugePair.RequestsWaiting.SetDenominator(float64(qll)) - qs.reqsGaugePair.RequestsExecuting.SetDenominator(float64(dCfg.ConcurrencyLimit)) - qs.execSeatsGauge.SetDenominator(float64(dCfg.ConcurrencyLimit)) + qs.reqsGaugePair.RequestsExecuting.SetDenominator(float64(dCfg.ConcurrencyDenominator)) + qs.execSeatsGauge.SetDenominator(float64(dCfg.ConcurrencyDenominator)) qs.dispatchAsMuchAsPossibleLocked() } @@ -504,7 +510,7 @@ func (qs *queueSet) advanceEpoch(ctx context.Context, now time.Time, incrR fqreq klog.InfoS("Advancing epoch", "QS", qs.qCfg.Name, "when", now.Format(nsTimeFmt), "oldR", oldR, "newR", qs.currentR, "incrR", incrR) success := true for qIdx, queue := range qs.queues { - if queue.requests.Length() == 0 
&& queue.requestsExecuting == 0 { + if queue.requestsWaiting.Length() == 0 && queue.requestsExecuting.Len() == 0 { // Do not just decrement, the value could be quite outdated. // It is safe to reset to zero in this case, because the next request // will overwrite the zero with `qs.currentR`. @@ -517,7 +523,7 @@ func (qs *queueSet) advanceEpoch(ctx context.Context, now time.Time, incrR fqreq klog.ErrorS(errors.New("queue::nextDispatchR underflow"), "Underflow", "QS", qs.qCfg.Name, "queue", qIdx, "oldNextDispatchR", oldNextDispatchR, "newNextDispatchR", queue.nextDispatchR, "incrR", incrR) success = false } - queue.requests.Walk(func(req *request) bool { + queue.requestsWaiting.Walk(func(req *request) bool { oldArrivalR := req.arrivalR req.arrivalR -= rDecrement if req.arrivalR > oldArrivalR { @@ -538,8 +544,8 @@ func (qs *queueSet) getVirtualTimeRatioLocked() float64 { for _, queue := range qs.queues { // here we want the sum of the maximum width of the requests in this queue since our // goal is to find the maximum rate at which the queue could work. 
- seatsRequested += (queue.seatsInUse + queue.requests.QueueSum().MaxSeatsSum) - if queue.requests.Length() > 0 || queue.requestsExecuting > 0 { + seatsRequested += (queue.seatsInUse + queue.requestsWaiting.QueueSum().MaxSeatsSum) + if queue.requestsWaiting.Length() > 0 || queue.requestsExecuting.Len() > 0 { activeQueues++ } } @@ -589,7 +595,7 @@ func (qs *queueSet) timeoutOldRequestsAndRejectOrEnqueueLocked(ctx context.Conte if ok := qs.rejectOrEnqueueToBoundLocked(req); !ok { return nil } - metrics.ObserveQueueLength(ctx, qs.qCfg.Name, fsName, queue.requests.Length()) + metrics.ObserveQueueLength(ctx, qs.qCfg.Name, fsName, queue.requestsWaiting.Length()) return req } @@ -608,7 +614,7 @@ func (qs *queueSet) shuffleShardLocked(hashValue uint64, descr1, descr2 interfac for i := 0; i < handSize; i++ { queueIdx := hand[(offset+i)%handSize] queue := qs.queues[queueIdx] - queueSum := queue.requests.QueueSum() + queueSum := queue.requestsWaiting.QueueSum() // this is the total amount of work in seat-seconds for requests // waiting in this queue, we will select the queue with the minimum. 
@@ -621,7 +627,7 @@ func (qs *queueSet) shuffleShardLocked(hashValue uint64, descr1, descr2 interfac } if klogV := klog.V(6); klogV.Enabled() { chosenQueue := qs.queues[bestQueueIdx] - klogV.Infof("QS(%s) at t=%s R=%v: For request %#+v %#+v chose queue %d, with sum: %#v & %d seats in use & nextDispatchR=%v", qs.qCfg.Name, qs.clock.Now().Format(nsTimeFmt), qs.currentR, descr1, descr2, bestQueueIdx, chosenQueue.requests.QueueSum(), chosenQueue.seatsInUse, chosenQueue.nextDispatchR) + klogV.Infof("QS(%s) at t=%s R=%v: For request %#+v %#+v chose queue %d, with sum: %#v & %d seats in use & nextDispatchR=%v", qs.qCfg.Name, qs.clock.Now().Format(nsTimeFmt), qs.currentR, descr1, descr2, bestQueueIdx, chosenQueue.requestsWaiting.QueueSum(), chosenQueue.seatsInUse, chosenQueue.nextDispatchR) } return bestQueueIdx } @@ -632,7 +638,7 @@ func (qs *queueSet) removeTimedOutRequestsFromQueueToBoundLocked(queue *queue, f timeoutCount := 0 disqueueSeats := 0 now := qs.clock.Now() - reqs := queue.requests + reqs := queue.requestsWaiting // reqs are sorted oldest -> newest // can short circuit loop (break) if oldest requests are not timing out // as newer requests also will not have timed out @@ -669,7 +675,7 @@ func (qs *queueSet) removeTimedOutRequestsFromQueueToBoundLocked(queue *queue, f // Otherwise enqueues and returns true. 
func (qs *queueSet) rejectOrEnqueueToBoundLocked(request *request) bool { queue := request.queue - curQueueLength := queue.requests.Length() + curQueueLength := queue.requestsWaiting.Length() // rejects the newly arrived request if resource criteria not met if qs.totSeatsInUse >= qs.dCfg.ConcurrencyLimit && curQueueLength >= qs.qCfg.QueueLengthLimit { @@ -684,7 +690,7 @@ func (qs *queueSet) rejectOrEnqueueToBoundLocked(request *request) bool { func (qs *queueSet) enqueueToBoundLocked(request *request) { queue := request.queue now := qs.clock.Now() - if queue.requests.Length() == 0 && queue.requestsExecuting == 0 { + if queue.requestsWaiting.Length() == 0 && queue.requestsExecuting.Len() == 0 { // the queue’s start R is set to the virtual time. queue.nextDispatchR = qs.currentR klogV := klog.V(6) @@ -692,7 +698,7 @@ func (qs *queueSet) enqueueToBoundLocked(request *request) { klogV.Infof("QS(%s) at t=%s R=%v: initialized queue %d start R due to request %#+v %#+v", qs.qCfg.Name, now.Format(nsTimeFmt), queue.nextDispatchR, queue.index, request.descr1, request.descr2) } } - request.removeFromQueueLocked = queue.requests.Enqueue(request) + request.removeFromQueueLocked = queue.requestsWaiting.Enqueue(request) qs.totRequestsWaiting++ qs.totSeatsWaiting += request.MaxSeats() metrics.AddRequestsInQueues(request.ctx, qs.qCfg.Name, request.fsName, 1) @@ -725,8 +731,9 @@ func (qs *queueSet) dispatchSansQueueLocked(ctx context.Context, workEstimate *f } qs.totRequestsExecuting++ qs.totSeatsInUse += req.MaxSeats() + qs.requestsExecutingSet = qs.requestsExecutingSet.Insert(req) metrics.AddRequestsExecuting(ctx, qs.qCfg.Name, fsName, 1) - metrics.AddRequestConcurrencyInUse(qs.qCfg.Name, fsName, req.MaxSeats()) + metrics.AddSeatConcurrencyInUse(qs.qCfg.Name, fsName, req.MaxSeats()) qs.reqsGaugePair.RequestsExecuting.Add(1) qs.execSeatsGauge.Add(float64(req.MaxSeats())) qs.seatDemandIntegrator.Set(float64(qs.totSeatsInUse + qs.totSeatsWaiting)) @@ -768,10 +775,10 @@ func (qs 
*queueSet) dispatchLocked() bool { // problem because other overhead is also included. qs.totRequestsExecuting++ qs.totSeatsInUse += request.MaxSeats() - queue.requestsExecuting++ + queue.requestsExecuting = queue.requestsExecuting.Insert(request) queue.seatsInUse += request.MaxSeats() metrics.AddRequestsExecuting(request.ctx, qs.qCfg.Name, request.fsName, 1) - metrics.AddRequestConcurrencyInUse(qs.qCfg.Name, request.fsName, request.MaxSeats()) + metrics.AddSeatConcurrencyInUse(qs.qCfg.Name, request.fsName, request.MaxSeats()) qs.reqsGaugePair.RequestsExecuting.Add(1) qs.execSeatsGauge.Add(float64(request.MaxSeats())) qs.seatDemandIntegrator.Set(float64(qs.totSeatsInUse + qs.totSeatsWaiting)) @@ -779,7 +786,7 @@ func (qs *queueSet) dispatchLocked() bool { if klogV.Enabled() { klogV.Infof("QS(%s) at t=%s R=%v: dispatching request %#+v %#+v work %v from queue %d with start R %v, queue will have %d waiting & %d requests occupying %d seats, set will have %d seats occupied", qs.qCfg.Name, request.startTime.Format(nsTimeFmt), qs.currentR, request.descr1, request.descr2, - request.workEstimate, queue.index, queue.nextDispatchR, queue.requests.Length(), queue.requestsExecuting, queue.seatsInUse, qs.totSeatsInUse) + request.workEstimate, queue.index, queue.nextDispatchR, queue.requestsWaiting.Length(), queue.requestsExecuting.Len(), queue.seatsInUse, qs.totSeatsInUse) } // When a request is dequeued for service -> qs.virtualStart += G * width if request.totalWork() > rDecrement/100 { // A single increment should never be so big @@ -796,6 +803,9 @@ func (qs *queueSet) dispatchLocked() bool { // otherwise it returns false. func (qs *queueSet) canAccommodateSeatsLocked(seats int) bool { switch { + case qs.qCfg.DesiredNumQueues < 0: + // This is code for exemption from limitation + return true case seats > qs.dCfg.ConcurrencyLimit: // we have picked the queue with the minimum virtual finish time, but // the number of seats this request asks for exceeds the concurrency limit. 
@@ -831,7 +841,7 @@ func (qs *queueSet) findDispatchQueueToBoundLocked() (*queue, *request) { for range qs.queues { qs.robinIndex = (qs.robinIndex + 1) % nq queue := qs.queues[qs.robinIndex] - oldestWaiting, _ := queue.requests.Peek() + oldestWaiting, _ := queue.requestsWaiting.Peek() if oldestWaiting != nil { sMin = ssMin(sMin, queue.nextDispatchR) sMax = ssMax(sMax, queue.nextDispatchR) @@ -848,7 +858,7 @@ func (qs *queueSet) findDispatchQueueToBoundLocked() (*queue, *request) { } } - oldestReqFromMinQueue, _ := minQueue.requests.Peek() + oldestReqFromMinQueue, _ := minQueue.requestsWaiting.Peek() if oldestReqFromMinQueue == nil { // This cannot happen klog.ErrorS(errors.New("selected queue is empty"), "Impossible", "queueSet", qs.qCfg.Name) @@ -935,7 +945,7 @@ func (qs *queueSet) finishRequestLocked(r *request) { defer qs.removeQueueIfEmptyLocked(r) qs.totSeatsInUse -= r.MaxSeats() - metrics.AddRequestConcurrencyInUse(qs.qCfg.Name, r.fsName, -r.MaxSeats()) + metrics.AddSeatConcurrencyInUse(qs.qCfg.Name, r.fsName, -r.MaxSeats()) qs.execSeatsGauge.Add(-float64(r.MaxSeats())) qs.seatDemandIntegrator.Set(float64(qs.totSeatsInUse + qs.totSeatsWaiting)) if r.queue != nil { @@ -952,7 +962,7 @@ func (qs *queueSet) finishRequestLocked(r *request) { } else if r.queue != nil { klogV.Infof("QS(%s) at t=%s R=%v: request %#+v %#+v finished all use of %d seats, adjusted queue %d start R to %v due to service time %.9fs, queue will have %d requests with %#v waiting & %d requests occupying %d seats", qs.qCfg.Name, now.Format(nsTimeFmt), qs.currentR, r.descr1, r.descr2, r.workEstimate.MaxSeats(), r.queue.index, - r.queue.nextDispatchR, actualServiceDuration.Seconds(), r.queue.requests.Length(), r.queue.requests.QueueSum(), r.queue.requestsExecuting, r.queue.seatsInUse) + r.queue.nextDispatchR, actualServiceDuration.Seconds(), r.queue.requestsWaiting.Length(), r.queue.requestsWaiting.QueueSum(), r.queue.requestsExecuting.Len(), r.queue.seatsInUse) } else { klogV.Infof("QS(%s) at 
t=%s R=%v: request %#+v %#+v finished all use of %d seats, qs will have %d requests occupying %d seats", qs.qCfg.Name, now.Format(nsTimeFmt), qs.currentR, r.descr1, r.descr2, r.workEstimate.InitialSeats, qs.totRequestsExecuting, qs.totSeatsInUse) } @@ -964,7 +974,7 @@ func (qs *queueSet) finishRequestLocked(r *request) { } else if r.queue != nil { klogV.Infof("QS(%s) at t=%s R=%v: request %#+v %#+v finished main use of %d seats but lingering on %d seats for %v seconds, adjusted queue %d start R to %v due to service time %.9fs, queue will have %d requests with %#v waiting & %d requests occupying %d seats", qs.qCfg.Name, now.Format(nsTimeFmt), qs.currentR, r.descr1, r.descr2, r.workEstimate.InitialSeats, r.workEstimate.FinalSeats, additionalLatency.Seconds(), r.queue.index, - r.queue.nextDispatchR, actualServiceDuration.Seconds(), r.queue.requests.Length(), r.queue.requests.QueueSum(), r.queue.requestsExecuting, r.queue.seatsInUse) + r.queue.nextDispatchR, actualServiceDuration.Seconds(), r.queue.requestsWaiting.Length(), r.queue.requestsWaiting.QueueSum(), r.queue.requestsExecuting.Len(), r.queue.seatsInUse) } else { klogV.Infof("QS(%s) at t=%s R=%v: request %#+v %#+v finished main use of %d seats but lingering on %d seats for %v seconds, qs will have %d requests occupying %d seats", qs.qCfg.Name, now.Format(nsTimeFmt), qs.currentR, r.descr1, r.descr2, r.workEstimate.InitialSeats, r.workEstimate.FinalSeats, additionalLatency.Seconds(), qs.totRequestsExecuting, qs.totSeatsInUse) } @@ -981,7 +991,7 @@ func (qs *queueSet) finishRequestLocked(r *request) { } else if r.queue != nil { klogV.Infof("QS(%s) at t=%s R=%v: request %#+v %#+v finished lingering on %d seats, queue %d will have %d requests with %#v waiting & %d requests occupying %d seats", qs.qCfg.Name, now.Format(nsTimeFmt), qs.currentR, r.descr1, r.descr2, r.workEstimate.FinalSeats, r.queue.index, - r.queue.requests.Length(), r.queue.requests.QueueSum(), r.queue.requestsExecuting, r.queue.seatsInUse) + 
r.queue.requestsWaiting.Length(), r.queue.requestsWaiting.QueueSum(), r.queue.requestsExecuting.Len(), r.queue.seatsInUse) } else { klogV.Infof("QS(%s) at t=%s R=%v: request %#+v %#+v finished lingering on %d seats, qs will have %d requests occupying %d seats", qs.qCfg.Name, now.Format(nsTimeFmt), qs.currentR, r.descr1, r.descr2, r.workEstimate.FinalSeats, qs.totRequestsExecuting, qs.totSeatsInUse) } @@ -991,12 +1001,14 @@ func (qs *queueSet) finishRequestLocked(r *request) { if r.queue != nil { // request has finished, remove from requests executing - r.queue.requestsExecuting-- + r.queue.requestsExecuting = r.queue.requestsExecuting.Delete(r) // When a request finishes being served, and the actual service time was S, // the queue’s start R is decremented by (G - S)*width. r.queue.nextDispatchR -= fqrequest.SeatsTimesDuration(float64(r.InitialSeats()), qs.estimatedServiceDuration-actualServiceDuration) qs.boundNextDispatchLocked(r.queue) + } else { + qs.requestsExecutingSet = qs.requestsExecutingSet.Delete(r) } } @@ -1008,7 +1020,7 @@ func (qs *queueSet) finishRequestLocked(r *request) { // The following hack addresses the first side of that inequity, // by insisting that dispatch in the virtual world not precede arrival. 
func (qs *queueSet) boundNextDispatchLocked(queue *queue) { - oldestReqFromMinQueue, _ := queue.requests.Peek() + oldestReqFromMinQueue, _ := queue.requestsWaiting.Peek() if oldestReqFromMinQueue == nil { return } @@ -1029,8 +1041,8 @@ func (qs *queueSet) removeQueueIfEmptyLocked(r *request) { // If there are more queues than desired and this one has no // requests then remove it if len(qs.queues) > qs.qCfg.DesiredNumQueues && - r.queue.requests.Length() == 0 && - r.queue.requestsExecuting == 0 { + r.queue.requestsWaiting.Length() == 0 && + r.queue.requestsExecuting.Len() == 0 { qs.queues = removeQueueAndUpdateIndexes(qs.queues, r.queue.index) // decrement here to maintain the invariant that (qs.robinIndex+1) % numQueues @@ -1055,15 +1067,16 @@ func (qs *queueSet) Dump(includeRequestDetails bool) debug.QueueSetDump { qs.lock.Lock() defer qs.lock.Unlock() d := debug.QueueSetDump{ - Queues: make([]debug.QueueDump, len(qs.queues)), - Waiting: qs.totRequestsWaiting, - Executing: qs.totRequestsExecuting, - SeatsInUse: qs.totSeatsInUse, - SeatsWaiting: qs.totSeatsWaiting, - Dispatched: qs.totRequestsDispatched, - Rejected: qs.totRequestsRejected, - Timedout: qs.totRequestsTimedout, - Cancelled: qs.totRequestsCancelled, + Queues: make([]debug.QueueDump, len(qs.queues)), + QueuelessExecutingRequests: SetMapReduce(dumpRequest(includeRequestDetails), append1[debug.RequestDump])(qs.requestsExecutingSet), + Waiting: qs.totRequestsWaiting, + Executing: qs.totRequestsExecuting, + SeatsInUse: qs.totSeatsInUse, + SeatsWaiting: qs.totSeatsWaiting, + Dispatched: qs.totRequestsDispatched, + Rejected: qs.totRequestsRejected, + Timedout: qs.totRequestsTimedout, + Cancelled: qs.totRequestsCancelled, } for i, q := range qs.queues { d.Queues[i] = q.dumpLocked(includeRequestDetails) diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/util/flowcontrol/fairqueuing/queueset/types.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/util/flowcontrol/fairqueuing/queueset/types.go index 
f1073b96b285..8c36a58ffb89 100644 --- a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/util/flowcontrol/fairqueuing/queueset/types.go +++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/util/flowcontrol/fairqueuing/queueset/types.go @@ -20,6 +20,7 @@ import ( "context" "time" + "k8s.io/apimachinery/pkg/util/sets" genericrequest "k8s.io/apiserver/pkg/endpoints/request" "k8s.io/apiserver/pkg/util/flowcontrol/debug" fq "k8s.io/apiserver/pkg/util/flowcontrol/fairqueuing" @@ -90,15 +91,15 @@ type completedWorkEstimate struct { // queue is a sequence of requests that have arrived but not yet finished // execution in both the real and virtual worlds. type queue struct { - // The requests not yet executing in the real world are stored in a FIFO list. - requests fifo + // The requestsWaiting not yet executing in the real world are stored in a FIFO list. + requestsWaiting fifo // nextDispatchR is the R progress meter reading at // which the next request will be dispatched in the virtual world. nextDispatchR fcrequest.SeatSeconds - // requestsExecuting is the count in the real world. - requestsExecuting int + // requestsExecuting is the set of requests executing in the real world. + requestsExecuting sets.Set[*request] // index is the position of this queue among those in its queueSet. index int @@ -145,28 +146,14 @@ func (qs *queueSet) computeFinalWork(we *fcrequest.WorkEstimate) fcrequest.SeatS } func (q *queue) dumpLocked(includeDetails bool) debug.QueueDump { - digest := make([]debug.RequestDump, q.requests.Length()) - i := 0 - q.requests.Walk(func(r *request) bool { - // dump requests. 
- digest[i].MatchedFlowSchema = r.fsName - digest[i].FlowDistinguisher = r.flowDistinguisher - digest[i].ArriveTime = r.arrivalTime - digest[i].StartTime = r.startTime - digest[i].WorkEstimate = r.workEstimate.WorkEstimate - if includeDetails { - userInfo, _ := genericrequest.UserFrom(r.ctx) - digest[i].UserName = userInfo.GetName() - requestInfo, ok := genericrequest.RequestInfoFrom(r.ctx) - if ok { - digest[i].RequestInfo = *requestInfo - } - } - i++ + waitingDigest := make([]debug.RequestDump, 0, q.requestsWaiting.Length()) + q.requestsWaiting.Walk(func(r *request) bool { + waitingDigest = append(waitingDigest, dumpRequest(includeDetails)(r)) return true }) + executingDigest := SetMapReduce(dumpRequest(includeDetails), append1[debug.RequestDump])(q.requestsExecuting) - sum := q.requests.QueueSum() + sum := q.requestsWaiting.QueueSum() queueSum := debug.QueueSum{ InitialSeatsSum: sum.InitialSeatsSum, MaxSeatsSum: sum.MaxSeatsSum, @@ -175,9 +162,57 @@ func (q *queue) dumpLocked(includeDetails bool) debug.QueueDump { return debug.QueueDump{ NextDispatchR: q.nextDispatchR.String(), - Requests: digest, - ExecutingRequests: q.requestsExecuting, + Requests: waitingDigest, + RequestsExecuting: executingDigest, + ExecutingRequests: q.requestsExecuting.Len(), SeatsInUse: q.seatsInUse, QueueSum: queueSum, } } + +func dumpRequest(includeDetails bool) func(*request) debug.RequestDump { + return func(r *request) debug.RequestDump { + ans := debug.RequestDump{ + MatchedFlowSchema: r.fsName, + FlowDistinguisher: r.flowDistinguisher, + ArriveTime: r.arrivalTime, + StartTime: r.startTime, + WorkEstimate: r.workEstimate.WorkEstimate, + } + if includeDetails { + userInfo, _ := genericrequest.UserFrom(r.ctx) + ans.UserName = userInfo.GetName() + requestInfo, ok := genericrequest.RequestInfoFrom(r.ctx) + if ok { + ans.RequestInfo = *requestInfo + } + } + return ans + } +} + +// SetMapReduce is map-reduce starting from a set type in the sets package. 
+func SetMapReduce[Elt comparable, Result, Accumulator any](mapFn func(Elt) Result, reduceFn func(Accumulator, Result) Accumulator) func(map[Elt]sets.Empty) Accumulator { + return func(set map[Elt]sets.Empty) Accumulator { + var ans Accumulator + for elt := range set { + ans = reduceFn(ans, mapFn(elt)) + } + return ans + } +} + +// SliceMapReduce is map-reduce starting from a slice. +func SliceMapReduce[Elt, Result, Accumulator any](mapFn func(Elt) Result, reduceFn func(Accumulator, Result) Accumulator) func([]Elt) Accumulator { + return func(slice []Elt) Accumulator { + var ans Accumulator + for _, elt := range slice { + ans = reduceFn(ans, mapFn(elt)) + } + return ans + } +} + +func or(x, y bool) bool { return x || y } + +func append1[Elt any](slice []Elt, next Elt) []Elt { return append(slice, next) } diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/util/flowcontrol/max_seats.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/util/flowcontrol/max_seats.go new file mode 100644 index 000000000000..18f88ab3b203 --- /dev/null +++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/util/flowcontrol/max_seats.go @@ -0,0 +1,66 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. 
+*/ + +package flowcontrol + +import ( + "sync" +) + +// MaxSeatsTracker is used to track max seats allocatable per priority level from the work estimator +type MaxSeatsTracker interface { + // GetMaxSeats returns the maximum seats a request should occupy for a given priority level. + GetMaxSeats(priorityLevelName string) uint64 + + // SetMaxSeats configures max seats for a priority level. + SetMaxSeats(priorityLevelName string, maxSeats uint64) + + // ForgetPriorityLevel removes max seats tracking for a priority level. + ForgetPriorityLevel(priorityLevelName string) +} + +type maxSeatsTracker struct { + sync.RWMutex + + maxSeats map[string]uint64 +} + +func NewMaxSeatsTracker() MaxSeatsTracker { + return &maxSeatsTracker{ + maxSeats: make(map[string]uint64), + } +} + +func (m *maxSeatsTracker) GetMaxSeats(plName string) uint64 { + m.RLock() + defer m.RUnlock() + + return m.maxSeats[plName] +} + +func (m *maxSeatsTracker) SetMaxSeats(plName string, maxSeats uint64) { + m.Lock() + defer m.Unlock() + + m.maxSeats[plName] = maxSeats +} + +func (m *maxSeatsTracker) ForgetPriorityLevel(plName string) { + m.Lock() + defer m.Unlock() + + delete(m.maxSeats, plName) +} diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/util/flowcontrol/metrics/metrics.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/util/flowcontrol/metrics/metrics.go index 7cb05df6c89b..54af4415cd0c 100644 --- a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/util/flowcontrol/metrics/metrics.go +++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/util/flowcontrol/metrics/metrics.go @@ -47,7 +47,7 @@ const ( var ( queueLengthBuckets = []float64{0, 10, 25, 50, 100, 250, 500, 1000} - requestDurationSecondsBuckets = []float64{0, 0.005, 0.02, 0.05, 0.1, 0.2, 0.5, 1, 2, 5, 10, 30} + requestDurationSecondsBuckets = []float64{0, 0.005, 0.02, 0.05, 0.1, 0.2, 0.5, 1, 2, 5, 10, 15, 30} ) var registerMetrics sync.Once @@ -94,7 +94,7 @@ var ( Subsystem: subsystem, Name: "rejected_requests_total", Help: 
"Number of requests rejected by API Priority and Fairness subsystem", - StabilityLevel: compbasemetrics.ALPHA, + StabilityLevel: compbasemetrics.BETA, }, []string{priorityLevel, flowSchema, "reason"}, ) @@ -104,7 +104,7 @@ var ( Subsystem: subsystem, Name: "dispatched_requests_total", Help: "Number of requests executed by API Priority and Fairness subsystem", - StabilityLevel: compbasemetrics.ALPHA, + StabilityLevel: compbasemetrics.BETA, }, []string{priorityLevel, flowSchema}, ) @@ -206,7 +206,7 @@ var ( Subsystem: subsystem, Name: "current_inqueue_requests", Help: "Number of requests currently pending in queues of the API Priority and Fairness subsystem", - StabilityLevel: compbasemetrics.ALPHA, + StabilityLevel: compbasemetrics.BETA, }, []string{priorityLevel, flowSchema}, ) @@ -223,11 +223,13 @@ var ( ) apiserverRequestConcurrencyLimit = compbasemetrics.NewGaugeVec( &compbasemetrics.GaugeOpts{ - Namespace: namespace, - Subsystem: subsystem, - Name: "request_concurrency_limit", - Help: "Shared concurrency limit in the API Priority and Fairness subsystem", - StabilityLevel: compbasemetrics.ALPHA, + Namespace: namespace, + Subsystem: subsystem, + Name: "request_concurrency_limit", + Help: "Nominal number of execution seats configured for each priority level", + // Remove this metric once all suppported releases have the equal nominal_limit_seats metric + DeprecatedVersion: "1.30.0", + StabilityLevel: compbasemetrics.ALPHA, }, []string{priorityLevel}, ) @@ -237,17 +239,29 @@ var ( Subsystem: subsystem, Name: "current_executing_requests", Help: "Number of requests in initial (for a WATCH) or any (for a non-WATCH) execution stage in the API Priority and Fairness subsystem", - StabilityLevel: compbasemetrics.ALPHA, + StabilityLevel: compbasemetrics.BETA, }, []string{priorityLevel, flowSchema}, ) - apiserverRequestConcurrencyInUse = compbasemetrics.NewGaugeVec( + apiserverCurrentExecutingSeats = compbasemetrics.NewGaugeVec( &compbasemetrics.GaugeOpts{ Namespace: 
namespace, Subsystem: subsystem, - Name: "request_concurrency_in_use", + Name: "current_executing_seats", Help: "Concurrency (number of seats) occupied by the currently executing (initial stage for a WATCH, any stage otherwise) requests in the API Priority and Fairness subsystem", - StabilityLevel: compbasemetrics.ALPHA, + StabilityLevel: compbasemetrics.BETA, + }, + []string{priorityLevel, flowSchema}, + ) + apiserverRequestConcurrencyInUse = compbasemetrics.NewGaugeVec( + &compbasemetrics.GaugeOpts{ + Namespace: namespace, + Subsystem: subsystem, + Name: "request_concurrency_in_use", + Help: "Concurrency (number of seats) occupied by the currently executing (initial stage for a WATCH, any stage otherwise) requests in the API Priority and Fairness subsystem", + // Remove this metric once all supported releases have the equivalent current_executing_seats metric + DeprecatedVersion: "1.31.0", + StabilityLevel: compbasemetrics.ALPHA, }, []string{priorityLevel, flowSchema}, ) @@ -258,7 +272,7 @@ var ( Name: "request_wait_duration_seconds", Help: "Length of time a request spent waiting in its queue", Buckets: requestDurationSecondsBuckets, - StabilityLevel: compbasemetrics.ALPHA, + StabilityLevel: compbasemetrics.BETA, }, []string{priorityLevel, flowSchema, "execute"}, ) @@ -323,7 +337,7 @@ var ( Subsystem: subsystem, Name: "nominal_limit_seats", Help: "Nominal number of execution seats configured for each priority level", - StabilityLevel: compbasemetrics.ALPHA, + StabilityLevel: compbasemetrics.BETA, }, []string{priorityLevel}, ) @@ -444,6 +458,7 @@ var ( apiserverRequestQueueLength, apiserverRequestConcurrencyLimit, apiserverRequestConcurrencyInUse, + apiserverCurrentExecutingSeats, apiserverCurrentExecutingRequests, apiserverRequestWaitingSeconds, apiserverRequestExecutionSeconds, @@ -523,9 +538,10 @@ func SetDispatchMetrics(priorityLevel string, r, s, sMin, sMax, discountedSMin, apiserverNextDiscountedSBounds.WithLabelValues(priorityLevel, "max").Set(discountedSMax) }
-// AddRequestConcurrencyInUse adds the given delta to the gauge of concurrency in use by +// AddSeatConcurrencyInUse adds the given delta to the gauge of seats in use by // the currently executing requests of the given flowSchema and priorityLevel -func AddRequestConcurrencyInUse(priorityLevel, flowSchema string, delta int) { +func AddSeatConcurrencyInUse(priorityLevel, flowSchema string, delta int) { + apiserverCurrentExecutingSeats.WithLabelValues(priorityLevel, flowSchema).Add(float64(delta)) apiserverRequestConcurrencyInUse.WithLabelValues(priorityLevel, flowSchema).Add(float64(delta)) } diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/util/flowcontrol/request/config.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/util/flowcontrol/request/config.go index b6db19209b54..c51435b1598a 100644 --- a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/util/flowcontrol/request/config.go +++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/util/flowcontrol/request/config.go @@ -24,7 +24,7 @@ import ( const ( minimumSeats = 1 - maximumSeats = 10 + maximumSeatsLimit = 10 objectsPerSeat = 100.0 watchesPerSeat = 10.0 enableMutatingWorkEstimator = true @@ -39,12 +39,13 @@ type WorkEstimatorConfig struct { // MinimumSeats is the minimum number of seats a request must occupy. MinimumSeats uint64 `json:"minimumSeats,omitempty"` - // MaximumSeats is the maximum number of seats a request can occupy + + // MaximumSeatsLimit is an upper limit on the max seats a request can occupy. // // NOTE: work_estimate_seats_samples metric uses the value of maximumSeats // as the upper bound, so when we change maximumSeats we should also // update the buckets of the metric. - MaximumSeats uint64 `json:"maximumSeats,omitempty"` + MaximumSeatsLimit uint64 `json:"maximumSeatsLimit,omitempty"` } // ListWorkEstimatorConfig holds work estimator parameters related to list requests. 
@@ -66,7 +67,7 @@ type MutatingWorkEstimatorConfig struct { func DefaultWorkEstimatorConfig() *WorkEstimatorConfig { return &WorkEstimatorConfig{ MinimumSeats: minimumSeats, - MaximumSeats: maximumSeats, + MaximumSeatsLimit: maximumSeatsLimit, ListWorkEstimatorConfig: defaultListWorkEstimatorConfig(), MutatingWorkEstimatorConfig: defaultMutatingWorkEstimatorConfig(), } diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/util/flowcontrol/request/list_work_estimator.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/util/flowcontrol/request/list_work_estimator.go index 130746a411ef..8d20867d6dd3 100644 --- a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/util/flowcontrol/request/list_work_estimator.go +++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/util/flowcontrol/request/list_work_estimator.go @@ -29,10 +29,11 @@ import ( "k8s.io/klog/v2" ) -func newListWorkEstimator(countFn objectCountGetterFunc, config *WorkEstimatorConfig) WorkEstimatorFunc { +func newListWorkEstimator(countFn objectCountGetterFunc, config *WorkEstimatorConfig, maxSeatsFn maxSeatsFunc) WorkEstimatorFunc { estimator := &listWorkEstimator{ config: config, countGetterFn: countFn, + maxSeatsFn: maxSeatsFn, } return estimator.estimate } @@ -40,14 +41,21 @@ func newListWorkEstimator(countFn objectCountGetterFunc, config *WorkEstimatorCo type listWorkEstimator struct { config *WorkEstimatorConfig countGetterFn objectCountGetterFunc + maxSeatsFn maxSeatsFunc } func (e *listWorkEstimator) estimate(r *http.Request, flowSchemaName, priorityLevelName string) WorkEstimate { + minSeats := e.config.MinimumSeats + maxSeats := e.maxSeatsFn(priorityLevelName) + if maxSeats == 0 || maxSeats > e.config.MaximumSeatsLimit { + maxSeats = e.config.MaximumSeatsLimit + } + requestInfo, ok := apirequest.RequestInfoFrom(r.Context()) if !ok { // no RequestInfo should never happen, but to be on the safe side // let's return maximumSeats - return WorkEstimate{InitialSeats: e.config.MaximumSeats} + return WorkEstimate{InitialSeats: maxSeats}
} if requestInfo.Name != "" { @@ -56,7 +64,7 @@ func (e *listWorkEstimator) estimate(r *http.Request, flowSchemaName, priorityLe // Example of such list requests: // /apis/certificates.k8s.io/v1/certificatesigningrequests?fieldSelector=metadata.name%3Dcsr-xxs4m // /api/v1/namespaces/test/configmaps?fieldSelector=metadata.name%3Dbig-deployment-1&limit=500&resourceVersion=0 - return WorkEstimate{InitialSeats: e.config.MinimumSeats} + return WorkEstimate{InitialSeats: minSeats} } query := r.URL.Query() @@ -66,9 +74,18 @@ func (e *listWorkEstimator) estimate(r *http.Request, flowSchemaName, priorityLe // This request is destined to fail in the validation layer, // return maximumSeats for this request to be consistent. - return WorkEstimate{InitialSeats: e.config.MaximumSeats} + return WorkEstimate{InitialSeats: maxSeats} + } + + // For watch requests, we want to adjust the cost only if they explicitly request + // sending initial events. + if requestInfo.Verb == "watch" { + if listOptions.SendInitialEvents == nil || !*listOptions.SendInitialEvents { + return WorkEstimate{InitialSeats: e.config.MinimumSeats} + } } - isListFromCache := !shouldListFromStorage(query, &listOptions) + + isListFromCache := requestInfo.Verb == "watch" || !shouldListFromStorage(query, &listOptions) numStored, err := e.countGetterFn(key(requestInfo)) switch { @@ -77,7 +94,7 @@ func (e *listWorkEstimator) estimate(r *http.Request, flowSchemaName, priorityLe // be conservative here and allocate maximum seats to this list request. // NOTE: if a CRD is removed, its count will go stale first and then the // pruner will eventually remove the CRD from the cache. - return WorkEstimate{InitialSeats: e.config.MaximumSeats} + return WorkEstimate{InitialSeats: maxSeats} case err == ObjectCountNotFoundErr: // there are multiple scenarios in which we can see this error: // a. the type is truly unknown, a typo on the caller's part.
@@ -91,12 +108,12 @@ func (e *listWorkEstimator) estimate(r *http.Request, flowSchemaName, priorityLe // when aggregated API calls are overestimated, we allocate the minimum // possible seats (see #109106 as an example when being more conservative // led to problems). - return WorkEstimate{InitialSeats: e.config.MinimumSeats} + return WorkEstimate{InitialSeats: minSeats} case err != nil: // we should never be here since Get returns either ObjectCountStaleErr or // ObjectCountNotFoundErr, return maximumSeats to be on the safe side. klog.ErrorS(err, "Unexpected error from object count tracker") - return WorkEstimate{InitialSeats: e.config.MaximumSeats} + return WorkEstimate{InitialSeats: maxSeats} } limit := numStored @@ -125,11 +142,11 @@ func (e *listWorkEstimator) estimate(r *http.Request, flowSchemaName, priorityLe seats := uint64(math.Ceil(float64(estimatedObjectsToBeProcessed) / e.config.ObjectsPerSeat)) // make sure we never return a seat of zero - if seats < e.config.MinimumSeats { - seats = e.config.MinimumSeats + if seats < minSeats { + seats = minSeats } - if seats > e.config.MaximumSeats { - seats = e.config.MaximumSeats + if seats > maxSeats { + seats = maxSeats } return WorkEstimate{InitialSeats: seats} } @@ -149,9 +166,16 @@ func shouldListFromStorage(query url.Values, opts *metav1.ListOptions) bool { resourceVersion := opts.ResourceVersion match := opts.ResourceVersionMatch pagingEnabled := utilfeature.DefaultFeatureGate.Enabled(features.APIListChunking) + consistentListFromCacheEnabled := utilfeature.DefaultFeatureGate.Enabled(features.ConsistentListFromCache) + + // Serve consistent reads from storage if ConsistentListFromCache is disabled + consistentReadFromStorage := resourceVersion == "" && !consistentListFromCacheEnabled + // Watch cache doesn't support continuations, so serve them from etcd. hasContinuation := pagingEnabled && len(opts.Continue) > 0 + // Serve paginated requests about revision "0" from watch cache to avoid overwhelming etcd. 
hasLimit := pagingEnabled && opts.Limit > 0 && resourceVersion != "0" + // Watch cache only supports ResourceVersionMatchNotOlderThan (default). unsupportedMatch := match != "" && match != metav1.ResourceVersionMatchNotOlderThan - return resourceVersion == "" || hasContinuation || hasLimit || unsupportedMatch + return consistentReadFromStorage || hasContinuation || hasLimit || unsupportedMatch } diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/util/flowcontrol/request/mutating_work_estimator.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/util/flowcontrol/request/mutating_work_estimator.go index 305f8e1ebb5a..9b983f0033be 100644 --- a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/util/flowcontrol/request/mutating_work_estimator.go +++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/util/flowcontrol/request/mutating_work_estimator.go @@ -25,25 +25,33 @@ import ( "k8s.io/apiserver/pkg/util/flowcontrol/metrics" ) -func newMutatingWorkEstimator(countFn watchCountGetterFunc, config *WorkEstimatorConfig) WorkEstimatorFunc { +func newMutatingWorkEstimator(countFn watchCountGetterFunc, config *WorkEstimatorConfig, maxSeatsFn maxSeatsFunc) WorkEstimatorFunc { estimator := &mutatingWorkEstimator{ - config: config, - countFn: countFn, + config: config, + countFn: countFn, + maxSeatsFn: maxSeatsFn, } return estimator.estimate } type mutatingWorkEstimator struct { - config *WorkEstimatorConfig - countFn watchCountGetterFunc + config *WorkEstimatorConfig + countFn watchCountGetterFunc + maxSeatsFn maxSeatsFunc } func (e *mutatingWorkEstimator) estimate(r *http.Request, flowSchemaName, priorityLevelName string) WorkEstimate { + minSeats := e.config.MinimumSeats + maxSeats := e.maxSeatsFn(priorityLevelName) + if maxSeats == 0 || maxSeats > e.config.MaximumSeatsLimit { + maxSeats = e.config.MaximumSeatsLimit + } + // TODO(wojtekt): Remove once we tune the algorithm to not fail // scalability tests. 
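The `shouldListFromStorage` hunk above changes only the first disjunct: a quorum read (`resourceVersion == ""`) now goes to etcd only while the `ConsistentListFromCache` gate is disabled. A dependency-free sketch of the updated decision, with the two feature gates passed as plain booleans instead of `utilfeature.DefaultFeatureGate` lookups and a hypothetical `listOptions` struct standing in for `metav1.ListOptions`:

```go
package main

import "fmt"

// listOptions captures only the fields the routing decision inspects
// (a simplified stand-in for metav1.ListOptions).
type listOptions struct {
	ResourceVersion      string
	ResourceVersionMatch string
	Continue             string
	Limit                int64
}

// shouldListFromStorage mirrors the patched logic: a list is served from
// etcd (storage) rather than the watch cache when any condition holds.
func shouldListFromStorage(opts listOptions, pagingEnabled, consistentListFromCache bool) bool {
	// Quorum reads go to storage only while ConsistentListFromCache is disabled.
	consistentReadFromStorage := opts.ResourceVersion == "" && !consistentListFromCache
	// The watch cache does not support continuations.
	hasContinuation := pagingEnabled && len(opts.Continue) > 0
	// Paginated requests pinned to a revision other than "0" need storage;
	// revision "0" deliberately stays on the cache to protect etcd.
	hasLimit := pagingEnabled && opts.Limit > 0 && opts.ResourceVersion != "0"
	// The cache only supports NotOlderThan (the default) match semantics.
	unsupportedMatch := opts.ResourceVersionMatch != "" && opts.ResourceVersionMatch != "NotOlderThan"
	return consistentReadFromStorage || hasContinuation || hasLimit || unsupportedMatch
}

func main() {
	// A quorum read stays on the cache once the new gate is enabled.
	fmt.Println(shouldListFromStorage(listOptions{ResourceVersion: ""}, true, true))
	// A continuation token always forces storage.
	fmt.Println(shouldListFromStorage(listOptions{ResourceVersion: "0", Continue: "tok"}, true, true))
}
```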
if !e.config.Enabled { return WorkEstimate{ - InitialSeats: 1, + InitialSeats: minSeats, } } @@ -52,15 +60,15 @@ func (e *mutatingWorkEstimator) estimate(r *http.Request, flowSchemaName, priori // no RequestInfo should never happen, but to be on the safe side // let's return a large value. return WorkEstimate{ - InitialSeats: 1, - FinalSeats: e.config.MaximumSeats, + InitialSeats: minSeats, + FinalSeats: maxSeats, AdditionalLatency: e.config.eventAdditionalDuration(), } } if isRequestExemptFromWatchEvents(requestInfo) { return WorkEstimate{ - InitialSeats: e.config.MinimumSeats, + InitialSeats: minSeats, FinalSeats: 0, AdditionalLatency: time.Duration(0), } @@ -126,8 +134,8 @@ func (e *mutatingWorkEstimator) estimate(r *http.Request, flowSchemaName, priori // // TODO: Confirm that the current cap of maximumSeats allow us to // achieve the above. - if finalSeats > e.config.MaximumSeats { - finalSeats = e.config.MaximumSeats + if finalSeats > maxSeats { + finalSeats = maxSeats } additionalLatency = finalWork.DurationPerSeat(float64(finalSeats)) } diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/util/flowcontrol/request/seat_seconds.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/util/flowcontrol/request/seat_seconds.go index e3a40174524a..05dab65bdd33 100644 --- a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/util/flowcontrol/request/seat_seconds.go +++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/util/flowcontrol/request/seat_seconds.go @@ -38,7 +38,7 @@ const MinSeatSeconds = SeatSeconds(0) // This is intended only to produce small values, increments in work // rather than amount of work done since process start. func SeatsTimesDuration(seats float64, duration time.Duration) SeatSeconds { - return SeatSeconds(math.Round(seats * float64(duration/time.Nanosecond) / (1e9 / ssScale))) + return SeatSeconds(int64(math.Round(seats * float64(duration/time.Nanosecond) / (1e9 / ssScale)))) } // ToFloat converts to a floating-point representation. 
diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/util/flowcontrol/request/width.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/util/flowcontrol/request/width.go index 86f0425843b8..71837edba6bd 100644 --- a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/util/flowcontrol/request/width.go +++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/util/flowcontrol/request/width.go @@ -22,6 +22,9 @@ import ( "time" apirequest "k8s.io/apiserver/pkg/endpoints/request" + "k8s.io/apiserver/pkg/features" + utilfeature "k8s.io/apiserver/pkg/util/feature" + "k8s.io/klog/v2" ) @@ -61,15 +64,19 @@ type objectCountGetterFunc func(string) (int64, error) // number of watchers potentially interested in a given request. type watchCountGetterFunc func(*apirequest.RequestInfo) int +// MaxSeatsFunc represents a function that returns the maximum seats +// allowed for the work estimator for a given priority level. +type maxSeatsFunc func(priorityLevelName string) uint64 + // NewWorkEstimator estimates the work that will be done by a given request, // if no WorkEstimatorFunc matches the given request then the default // work estimate of 1 seat is allocated to the request. 
-func NewWorkEstimator(objectCountFn objectCountGetterFunc, watchCountFn watchCountGetterFunc, config *WorkEstimatorConfig) WorkEstimatorFunc { +func NewWorkEstimator(objectCountFn objectCountGetterFunc, watchCountFn watchCountGetterFunc, config *WorkEstimatorConfig, maxSeatsFn maxSeatsFunc) WorkEstimatorFunc { estimator := &workEstimator{ minimumSeats: config.MinimumSeats, - maximumSeats: config.MaximumSeats, - listWorkEstimator: newListWorkEstimator(objectCountFn, config), - mutatingWorkEstimator: newMutatingWorkEstimator(watchCountFn, config), + maximumSeatsLimit: config.MaximumSeatsLimit, + listWorkEstimator: newListWorkEstimator(objectCountFn, config, maxSeatsFn), + mutatingWorkEstimator: newMutatingWorkEstimator(watchCountFn, config, maxSeatsFn), } return estimator.estimate } @@ -86,8 +93,8 @@ func (e WorkEstimatorFunc) EstimateWork(r *http.Request, flowSchemaName, priorit type workEstimator struct { // the minimum number of seats a request must occupy minimumSeats uint64 - // the maximum number of seats a request can occupy - maximumSeats uint64 + // the default maximum number of seats a request can occupy + maximumSeatsLimit uint64 // listWorkEstimator estimates work for list request(s) listWorkEstimator WorkEstimatorFunc // mutatingWorkEstimator calculates the width of mutating request(s) @@ -99,12 +106,21 @@ func (e *workEstimator) estimate(r *http.Request, flowSchemaName, priorityLevelN if !ok { klog.ErrorS(fmt.Errorf("no RequestInfo found in context"), "Failed to estimate work for the request", "URI", r.RequestURI) // no RequestInfo should never happen, but to be on the safe side let's return maximumSeats - return WorkEstimate{InitialSeats: e.maximumSeats} + return WorkEstimate{InitialSeats: e.maximumSeatsLimit} } switch requestInfo.Verb { case "list": return e.listWorkEstimator.EstimateWork(r, flowSchemaName, priorityLevelName) + case "watch": + // WATCH supports `SendInitialEvents` option, which effectively means + // that it starts with sending of the contents of a corresponding LIST call.
+ // From that perspective, given that the watch only consumes APF seats + // during its initialization (sending init events), its cost should then + // be computed the same way as for a regular list. + if utilfeature.DefaultFeatureGate.Enabled(features.WatchList) { + return e.listWorkEstimator.EstimateWork(r, flowSchemaName, priorityLevelName) + } case "create", "update", "patch", "delete": return e.mutatingWorkEstimator.EstimateWork(r, flowSchemaName, priorityLevelName) } diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/util/peerproxy/metrics/metrics.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/util/peerproxy/metrics/metrics.go new file mode 100644 index 000000000000..48b89be75ffd --- /dev/null +++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/util/peerproxy/metrics/metrics.go @@ -0,0 +1,56 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package metrics + +import ( + "context" + "sync" + + "k8s.io/component-base/metrics" + "k8s.io/component-base/metrics/legacyregistry" +) + +const ( + subsystem = "apiserver" + statuscode = "code" +) + +var registerMetricsOnce sync.Once + +var ( + // peerProxiedRequestsTotal counts the number of requests that were proxied to a peer kube-apiserver.
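A recurring pattern in the estimator hunks is the new per-priority-level seat cap: each estimator now consults `maxSeatsFn(priorityLevelName)` and falls back to the global `MaximumSeatsLimit` when the per-level value is unset (0) or exceeds the limit. A standalone sketch of that clamp (the priority-level names in `main` are illustrative only):

```go
package main

import "fmt"

// maxSeatsFunc returns the per-priority-level seat cap, or 0 when no
// cap is known for the given priority level.
type maxSeatsFunc func(priorityLevelName string) uint64

const maximumSeatsLimit = 10 // matches the renamed constant in config.go

// effectiveMaxSeats reproduces the clamp both the list and mutating
// estimators now apply at the top of estimate(): use the per-level
// value unless it is unset (0) or larger than the global limit.
func effectiveMaxSeats(fn maxSeatsFunc, priorityLevel string) uint64 {
	maxSeats := fn(priorityLevel)
	if maxSeats == 0 || maxSeats > maximumSeatsLimit {
		maxSeats = maximumSeatsLimit
	}
	return maxSeats
}

func main() {
	lookup := func(pl string) uint64 {
		return map[string]uint64{"workload-low": 4, "global-default": 98}[pl]
	}
	fmt.Println(effectiveMaxSeats(lookup, "workload-low"))   // 4
	fmt.Println(effectiveMaxSeats(lookup, "global-default")) // 10 (clamped)
	fmt.Println(effectiveMaxSeats(lookup, "unknown"))        // 10 (0 -> limit)
}
```

This is why the constant was renamed from `maximumSeats` to `maximumSeatsLimit`: it no longer is the max, only an upper bound on whatever the priority level allows.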
+ peerProxiedRequestsTotal = metrics.NewCounterVec( + &metrics.CounterOpts{ + Subsystem: subsystem, + Name: "rerouted_request_total", + Help: "Total number of requests that were proxied to a peer kube apiserver because the local apiserver was not capable of serving it", + StabilityLevel: metrics.ALPHA, + }, + []string{statuscode}, + ) +) + +func Register() { + registerMetricsOnce.Do(func() { + legacyregistry.MustRegister(peerProxiedRequestsTotal) + }) +} + +// IncPeerProxiedRequest increments the # of proxied requests to peer kube-apiserver +func IncPeerProxiedRequest(ctx context.Context, status string) { + peerProxiedRequestsTotal.WithContext(ctx).WithLabelValues(status).Add(1) +} diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/util/webhook/authentication.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/util/webhook/authentication.go index a69506de6902..95e4060bd119 100644 --- a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/util/webhook/authentication.go +++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/util/webhook/authentication.go @@ -243,6 +243,7 @@ func restConfigFromKubeconfig(configAuthInfo *clientcmdapi.AuthInfo) (*rest.Conf if len(configAuthInfo.Impersonate) > 0 { config.Impersonate = rest.ImpersonationConfig{ UserName: configAuthInfo.Impersonate, + UID: configAuthInfo.ImpersonateUID, Groups: configAuthInfo.ImpersonateGroups, Extra: configAuthInfo.ImpersonateUserExtra, } diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/util/webhook/webhook.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/util/webhook/webhook.go index 45143bf6efb0..b03640ae8df1 100644 --- a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/util/webhook/webhook.go +++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/util/webhook/webhook.go @@ -62,7 +62,7 @@ type GenericWebhook struct { // Otherwise it returns false for an immediate fail. func DefaultShouldRetry(err error) bool { // these errors indicate a transient error that should be retried. 
- if utilnet.IsConnectionReset(err) || apierrors.IsInternalError(err) || apierrors.IsTimeout(err) || apierrors.IsTooManyRequests(err) { + if utilnet.IsConnectionReset(err) || utilnet.IsHTTP2ConnectionLost(err) || apierrors.IsInternalError(err) || apierrors.IsTimeout(err) || apierrors.IsTooManyRequests(err) { return true } // if the error sends the Retry-After header, we respect it as an explicit confirmation we should retry. diff --git a/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/admissionregistration/v1alpha1/paramref.go b/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/admissionregistration/v1alpha1/paramref.go index 1102f65f31a0..0951cae8a920 100644 --- a/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/admissionregistration/v1alpha1/paramref.go +++ b/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/admissionregistration/v1alpha1/paramref.go @@ -18,11 +18,18 @@ limitations under the License. package v1alpha1 +import ( + v1alpha1 "k8s.io/api/admissionregistration/v1alpha1" + v1 "k8s.io/client-go/applyconfigurations/meta/v1" +) + // ParamRefApplyConfiguration represents an declarative configuration of the ParamRef type for use // with apply. 
type ParamRefApplyConfiguration struct { - Name *string `json:"name,omitempty"` - Namespace *string `json:"namespace,omitempty"` + Name *string `json:"name,omitempty"` + Namespace *string `json:"namespace,omitempty"` + Selector *v1.LabelSelectorApplyConfiguration `json:"selector,omitempty"` + ParameterNotFoundAction *v1alpha1.ParameterNotFoundActionType `json:"parameterNotFoundAction,omitempty"` } // ParamRefApplyConfiguration constructs an declarative configuration of the ParamRef type for use with @@ -46,3 +53,19 @@ func (b *ParamRefApplyConfiguration) WithNamespace(value string) *ParamRefApplyC b.Namespace = &value return b } + +// WithSelector sets the Selector field in the declarative configuration to the given value +// and returns the receiver, so that objects can be built by chaining "With" function invocations. +// If called multiple times, the Selector field is set to the value of the last call. +func (b *ParamRefApplyConfiguration) WithSelector(value *v1.LabelSelectorApplyConfiguration) *ParamRefApplyConfiguration { + b.Selector = value + return b +} + +// WithParameterNotFoundAction sets the ParameterNotFoundAction field in the declarative configuration to the given value +// and returns the receiver, so that objects can be built by chaining "With" function invocations. +// If called multiple times, the ParameterNotFoundAction field is set to the value of the last call. 
+func (b *ParamRefApplyConfiguration) WithParameterNotFoundAction(value v1alpha1.ParameterNotFoundActionType) *ParamRefApplyConfiguration { + b.ParameterNotFoundAction = &value + return b +} diff --git a/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/admissionregistration/v1alpha1/validatingadmissionpolicyspec.go b/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/admissionregistration/v1alpha1/validatingadmissionpolicyspec.go index f674b5b1ec20..7ee320e42881 100644 --- a/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/admissionregistration/v1alpha1/validatingadmissionpolicyspec.go +++ b/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/admissionregistration/v1alpha1/validatingadmissionpolicyspec.go @@ -31,6 +31,7 @@ type ValidatingAdmissionPolicySpecApplyConfiguration struct { FailurePolicy *admissionregistrationv1alpha1.FailurePolicyType `json:"failurePolicy,omitempty"` AuditAnnotations []AuditAnnotationApplyConfiguration `json:"auditAnnotations,omitempty"` MatchConditions []MatchConditionApplyConfiguration `json:"matchConditions,omitempty"` + Variables []VariableApplyConfiguration `json:"variables,omitempty"` } // ValidatingAdmissionPolicySpecApplyConfiguration constructs an declarative configuration of the ValidatingAdmissionPolicySpec type for use with @@ -101,3 +102,16 @@ func (b *ValidatingAdmissionPolicySpecApplyConfiguration) WithMatchConditions(va } return b } + +// WithVariables adds the given value to the Variables field in the declarative configuration +// and returns the receiver, so that objects can be build by chaining "With" function invocations. +// If called multiple times, values provided by each call will be appended to the Variables field. 
+func (b *ValidatingAdmissionPolicySpecApplyConfiguration) WithVariables(values ...*VariableApplyConfiguration) *ValidatingAdmissionPolicySpecApplyConfiguration { + for i := range values { + if values[i] == nil { + panic("nil value passed to WithVariables") + } + b.Variables = append(b.Variables, *values[i]) + } + return b +} diff --git a/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/admissionregistration/v1alpha1/variable.go b/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/admissionregistration/v1alpha1/variable.go new file mode 100644 index 000000000000..2c70a8cfb5a0 --- /dev/null +++ b/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/admissionregistration/v1alpha1/variable.go @@ -0,0 +1,48 @@ +/* +Copyright The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +// Code generated by applyconfiguration-gen. DO NOT EDIT. + +package v1alpha1 + +// VariableApplyConfiguration represents an declarative configuration of the Variable type for use +// with apply. +type VariableApplyConfiguration struct { + Name *string `json:"name,omitempty"` + Expression *string `json:"expression,omitempty"` +} + +// VariableApplyConfiguration constructs an declarative configuration of the Variable type for use with +// apply. 
+func Variable() *VariableApplyConfiguration { + return &VariableApplyConfiguration{} +} + +// WithName sets the Name field in the declarative configuration to the given value +// and returns the receiver, so that objects can be built by chaining "With" function invocations. +// If called multiple times, the Name field is set to the value of the last call. +func (b *VariableApplyConfiguration) WithName(value string) *VariableApplyConfiguration { + b.Name = &value + return b +} + +// WithExpression sets the Expression field in the declarative configuration to the given value +// and returns the receiver, so that objects can be built by chaining "With" function invocations. +// If called multiple times, the Expression field is set to the value of the last call. +func (b *VariableApplyConfiguration) WithExpression(value string) *VariableApplyConfiguration { + b.Expression = &value + return b +} diff --git a/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/admissionregistration/v1beta1/auditannotation.go b/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/admissionregistration/v1beta1/auditannotation.go new file mode 100644 index 000000000000..e92fba0ddbc0 --- /dev/null +++ b/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/admissionregistration/v1beta1/auditannotation.go @@ -0,0 +1,48 @@ +/* +Copyright The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +// Code generated by applyconfiguration-gen. DO NOT EDIT. 
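All of the generated apply-configuration files added in this patch follow the same chainable builder pattern: pointer fields with `omitempty` so unset fields are omitted from a server-side-apply patch, plus `With*` setters returning the receiver. A standalone replica of the new `Variable` builder showing how a caller composes it (the CEL expression is illustrative only):

```go
package main

import "fmt"

// VariableApplyConfiguration mirrors the generated type: pointer
// fields with omitempty so unset fields stay out of the apply patch.
type VariableApplyConfiguration struct {
	Name       *string `json:"name,omitempty"`
	Expression *string `json:"expression,omitempty"`
}

// Variable constructs an empty configuration, as the generated
// constructor does.
func Variable() *VariableApplyConfiguration {
	return &VariableApplyConfiguration{}
}

// WithName sets Name and returns the receiver for chaining.
func (b *VariableApplyConfiguration) WithName(value string) *VariableApplyConfiguration {
	b.Name = &value
	return b
}

// WithExpression sets Expression and returns the receiver for chaining.
func (b *VariableApplyConfiguration) WithExpression(value string) *VariableApplyConfiguration {
	b.Expression = &value
	return b
}

func main() {
	// Chained construction, as a caller of the generated package would write it.
	v := Variable().WithName("isProd").WithExpression(`object.metadata.labels["env"] == "prod"`)
	fmt.Println(*v.Name, *v.Expression)
}
```

Taking the address of the `value` parameter inside each setter is what makes the chain ergonomic: callers pass plain strings, and the builder owns the pointers the apply machinery needs.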
+ +package v1beta1 + +// AuditAnnotationApplyConfiguration represents an declarative configuration of the AuditAnnotation type for use +// with apply. +type AuditAnnotationApplyConfiguration struct { + Key *string `json:"key,omitempty"` + ValueExpression *string `json:"valueExpression,omitempty"` +} + +// AuditAnnotationApplyConfiguration constructs an declarative configuration of the AuditAnnotation type for use with +// apply. +func AuditAnnotation() *AuditAnnotationApplyConfiguration { + return &AuditAnnotationApplyConfiguration{} +} + +// WithKey sets the Key field in the declarative configuration to the given value +// and returns the receiver, so that objects can be built by chaining "With" function invocations. +// If called multiple times, the Key field is set to the value of the last call. +func (b *AuditAnnotationApplyConfiguration) WithKey(value string) *AuditAnnotationApplyConfiguration { + b.Key = &value + return b +} + +// WithValueExpression sets the ValueExpression field in the declarative configuration to the given value +// and returns the receiver, so that objects can be built by chaining "With" function invocations. +// If called multiple times, the ValueExpression field is set to the value of the last call. +func (b *AuditAnnotationApplyConfiguration) WithValueExpression(value string) *AuditAnnotationApplyConfiguration { + b.ValueExpression = &value + return b +} diff --git a/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/admissionregistration/v1beta1/expressionwarning.go b/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/admissionregistration/v1beta1/expressionwarning.go new file mode 100644 index 000000000000..059c1b94ba2e --- /dev/null +++ b/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/admissionregistration/v1beta1/expressionwarning.go @@ -0,0 +1,48 @@ +/* +Copyright The Kubernetes Authors. 
+ +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +// Code generated by applyconfiguration-gen. DO NOT EDIT. + +package v1beta1 + +// ExpressionWarningApplyConfiguration represents an declarative configuration of the ExpressionWarning type for use +// with apply. +type ExpressionWarningApplyConfiguration struct { + FieldRef *string `json:"fieldRef,omitempty"` + Warning *string `json:"warning,omitempty"` +} + +// ExpressionWarningApplyConfiguration constructs an declarative configuration of the ExpressionWarning type for use with +// apply. +func ExpressionWarning() *ExpressionWarningApplyConfiguration { + return &ExpressionWarningApplyConfiguration{} +} + +// WithFieldRef sets the FieldRef field in the declarative configuration to the given value +// and returns the receiver, so that objects can be built by chaining "With" function invocations. +// If called multiple times, the FieldRef field is set to the value of the last call. +func (b *ExpressionWarningApplyConfiguration) WithFieldRef(value string) *ExpressionWarningApplyConfiguration { + b.FieldRef = &value + return b +} + +// WithWarning sets the Warning field in the declarative configuration to the given value +// and returns the receiver, so that objects can be built by chaining "With" function invocations. +// If called multiple times, the Warning field is set to the value of the last call. 
+func (b *ExpressionWarningApplyConfiguration) WithWarning(value string) *ExpressionWarningApplyConfiguration { + b.Warning = &value + return b +} diff --git a/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/admissionregistration/v1beta1/matchresources.go b/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/admissionregistration/v1beta1/matchresources.go new file mode 100644 index 000000000000..25d4139db6a2 --- /dev/null +++ b/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/admissionregistration/v1beta1/matchresources.go @@ -0,0 +1,90 @@ +/* +Copyright The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +// Code generated by applyconfiguration-gen. DO NOT EDIT. + +package v1beta1 + +import ( + admissionregistrationv1beta1 "k8s.io/api/admissionregistration/v1beta1" + v1 "k8s.io/client-go/applyconfigurations/meta/v1" +) + +// MatchResourcesApplyConfiguration represents an declarative configuration of the MatchResources type for use +// with apply. 
+type MatchResourcesApplyConfiguration struct { + NamespaceSelector *v1.LabelSelectorApplyConfiguration `json:"namespaceSelector,omitempty"` + ObjectSelector *v1.LabelSelectorApplyConfiguration `json:"objectSelector,omitempty"` + ResourceRules []NamedRuleWithOperationsApplyConfiguration `json:"resourceRules,omitempty"` + ExcludeResourceRules []NamedRuleWithOperationsApplyConfiguration `json:"excludeResourceRules,omitempty"` + MatchPolicy *admissionregistrationv1beta1.MatchPolicyType `json:"matchPolicy,omitempty"` +} + +// MatchResourcesApplyConfiguration constructs an declarative configuration of the MatchResources type for use with +// apply. +func MatchResources() *MatchResourcesApplyConfiguration { + return &MatchResourcesApplyConfiguration{} +} + +// WithNamespaceSelector sets the NamespaceSelector field in the declarative configuration to the given value +// and returns the receiver, so that objects can be built by chaining "With" function invocations. +// If called multiple times, the NamespaceSelector field is set to the value of the last call. +func (b *MatchResourcesApplyConfiguration) WithNamespaceSelector(value *v1.LabelSelectorApplyConfiguration) *MatchResourcesApplyConfiguration { + b.NamespaceSelector = value + return b +} + +// WithObjectSelector sets the ObjectSelector field in the declarative configuration to the given value +// and returns the receiver, so that objects can be built by chaining "With" function invocations. +// If called multiple times, the ObjectSelector field is set to the value of the last call. +func (b *MatchResourcesApplyConfiguration) WithObjectSelector(value *v1.LabelSelectorApplyConfiguration) *MatchResourcesApplyConfiguration { + b.ObjectSelector = value + return b +} + +// WithResourceRules adds the given value to the ResourceRules field in the declarative configuration +// and returns the receiver, so that objects can be build by chaining "With" function invocations. 
+// If called multiple times, values provided by each call will be appended to the ResourceRules field. +func (b *MatchResourcesApplyConfiguration) WithResourceRules(values ...*NamedRuleWithOperationsApplyConfiguration) *MatchResourcesApplyConfiguration { + for i := range values { + if values[i] == nil { + panic("nil value passed to WithResourceRules") + } + b.ResourceRules = append(b.ResourceRules, *values[i]) + } + return b +} + +// WithExcludeResourceRules adds the given value to the ExcludeResourceRules field in the declarative configuration +// and returns the receiver, so that objects can be built by chaining "With" function invocations. +// If called multiple times, values provided by each call will be appended to the ExcludeResourceRules field. +func (b *MatchResourcesApplyConfiguration) WithExcludeResourceRules(values ...*NamedRuleWithOperationsApplyConfiguration) *MatchResourcesApplyConfiguration { + for i := range values { + if values[i] == nil { + panic("nil value passed to WithExcludeResourceRules") + } + b.ExcludeResourceRules = append(b.ExcludeResourceRules, *values[i]) + } + return b +} + +// WithMatchPolicy sets the MatchPolicy field in the declarative configuration to the given value +// and returns the receiver, so that objects can be built by chaining "With" function invocations. +// If called multiple times, the MatchPolicy field is set to the value of the last call.
+func (b *MatchResourcesApplyConfiguration) WithMatchPolicy(value admissionregistrationv1beta1.MatchPolicyType) *MatchResourcesApplyConfiguration { + b.MatchPolicy = &value + return b +} diff --git a/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/admissionregistration/v1beta1/namedrulewithoperations.go b/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/admissionregistration/v1beta1/namedrulewithoperations.go new file mode 100644 index 000000000000..fa346c4a57b0 --- /dev/null +++ b/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/admissionregistration/v1beta1/namedrulewithoperations.go @@ -0,0 +1,95 @@ +/* +Copyright The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +// Code generated by applyconfiguration-gen. DO NOT EDIT. + +package v1beta1 + +import ( + admissionregistrationv1 "k8s.io/api/admissionregistration/v1" + v1 "k8s.io/client-go/applyconfigurations/admissionregistration/v1" +) + +// NamedRuleWithOperationsApplyConfiguration represents a declarative configuration of the NamedRuleWithOperations type for use +// with apply. +type NamedRuleWithOperationsApplyConfiguration struct { + ResourceNames []string `json:"resourceNames,omitempty"` + v1.RuleWithOperationsApplyConfiguration `json:",inline"` +} + +// NamedRuleWithOperationsApplyConfiguration constructs a declarative configuration of the NamedRuleWithOperations type for use with +// apply.
+func NamedRuleWithOperations() *NamedRuleWithOperationsApplyConfiguration { + return &NamedRuleWithOperationsApplyConfiguration{} +} + +// WithResourceNames adds the given value to the ResourceNames field in the declarative configuration +// and returns the receiver, so that objects can be built by chaining "With" function invocations. +// If called multiple times, values provided by each call will be appended to the ResourceNames field. +func (b *NamedRuleWithOperationsApplyConfiguration) WithResourceNames(values ...string) *NamedRuleWithOperationsApplyConfiguration { + for i := range values { + b.ResourceNames = append(b.ResourceNames, values[i]) + } + return b +} + +// WithOperations adds the given value to the Operations field in the declarative configuration +// and returns the receiver, so that objects can be built by chaining "With" function invocations. +// If called multiple times, values provided by each call will be appended to the Operations field. +func (b *NamedRuleWithOperationsApplyConfiguration) WithOperations(values ...admissionregistrationv1.OperationType) *NamedRuleWithOperationsApplyConfiguration { + for i := range values { + b.Operations = append(b.Operations, values[i]) + } + return b +} + +// WithAPIGroups adds the given value to the APIGroups field in the declarative configuration +// and returns the receiver, so that objects can be built by chaining "With" function invocations. +// If called multiple times, values provided by each call will be appended to the APIGroups field. +func (b *NamedRuleWithOperationsApplyConfiguration) WithAPIGroups(values ...string) *NamedRuleWithOperationsApplyConfiguration { + for i := range values { + b.APIGroups = append(b.APIGroups, values[i]) + } + return b +} + +// WithAPIVersions adds the given value to the APIVersions field in the declarative configuration +// and returns the receiver, so that objects can be built by chaining "With" function invocations.
+// If called multiple times, values provided by each call will be appended to the APIVersions field. +func (b *NamedRuleWithOperationsApplyConfiguration) WithAPIVersions(values ...string) *NamedRuleWithOperationsApplyConfiguration { + for i := range values { + b.APIVersions = append(b.APIVersions, values[i]) + } + return b +} + +// WithResources adds the given value to the Resources field in the declarative configuration +// and returns the receiver, so that objects can be built by chaining "With" function invocations. +// If called multiple times, values provided by each call will be appended to the Resources field. +func (b *NamedRuleWithOperationsApplyConfiguration) WithResources(values ...string) *NamedRuleWithOperationsApplyConfiguration { + for i := range values { + b.Resources = append(b.Resources, values[i]) + } + return b +} + +// WithScope sets the Scope field in the declarative configuration to the given value +// and returns the receiver, so that objects can be built by chaining "With" function invocations. +// If called multiple times, the Scope field is set to the value of the last call. +func (b *NamedRuleWithOperationsApplyConfiguration) WithScope(value admissionregistrationv1.ScopeType) *NamedRuleWithOperationsApplyConfiguration { + b.Scope = &value + return b +} diff --git a/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/admissionregistration/v1beta1/paramkind.go b/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/admissionregistration/v1beta1/paramkind.go new file mode 100644 index 000000000000..6050e6025126 --- /dev/null +++ b/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/admissionregistration/v1beta1/paramkind.go @@ -0,0 +1,48 @@ +/* +Copyright The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License.
+You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +// Code generated by applyconfiguration-gen. DO NOT EDIT. + +package v1beta1 + +// ParamKindApplyConfiguration represents a declarative configuration of the ParamKind type for use +// with apply. +type ParamKindApplyConfiguration struct { + APIVersion *string `json:"apiVersion,omitempty"` + Kind *string `json:"kind,omitempty"` +} + +// ParamKindApplyConfiguration constructs a declarative configuration of the ParamKind type for use with +// apply. +func ParamKind() *ParamKindApplyConfiguration { + return &ParamKindApplyConfiguration{} +} + +// WithAPIVersion sets the APIVersion field in the declarative configuration to the given value +// and returns the receiver, so that objects can be built by chaining "With" function invocations. +// If called multiple times, the APIVersion field is set to the value of the last call. +func (b *ParamKindApplyConfiguration) WithAPIVersion(value string) *ParamKindApplyConfiguration { + b.APIVersion = &value + return b +} + +// WithKind sets the Kind field in the declarative configuration to the given value +// and returns the receiver, so that objects can be built by chaining "With" function invocations. +// If called multiple times, the Kind field is set to the value of the last call.
+func (b *ParamKindApplyConfiguration) WithKind(value string) *ParamKindApplyConfiguration { + b.Kind = &value + return b +} diff --git a/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/admissionregistration/v1beta1/paramref.go b/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/admissionregistration/v1beta1/paramref.go new file mode 100644 index 000000000000..2be98dbc5251 --- /dev/null +++ b/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/admissionregistration/v1beta1/paramref.go @@ -0,0 +1,71 @@ +/* +Copyright The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +// Code generated by applyconfiguration-gen. DO NOT EDIT. + +package v1beta1 + +import ( + v1beta1 "k8s.io/api/admissionregistration/v1beta1" + v1 "k8s.io/client-go/applyconfigurations/meta/v1" +) + +// ParamRefApplyConfiguration represents a declarative configuration of the ParamRef type for use +// with apply. +type ParamRefApplyConfiguration struct { + Name *string `json:"name,omitempty"` + Namespace *string `json:"namespace,omitempty"` + Selector *v1.LabelSelectorApplyConfiguration `json:"selector,omitempty"` + ParameterNotFoundAction *v1beta1.ParameterNotFoundActionType `json:"parameterNotFoundAction,omitempty"` +} + +// ParamRefApplyConfiguration constructs a declarative configuration of the ParamRef type for use with +// apply.
+func ParamRef() *ParamRefApplyConfiguration { + return &ParamRefApplyConfiguration{} +} + +// WithName sets the Name field in the declarative configuration to the given value +// and returns the receiver, so that objects can be built by chaining "With" function invocations. +// If called multiple times, the Name field is set to the value of the last call. +func (b *ParamRefApplyConfiguration) WithName(value string) *ParamRefApplyConfiguration { + b.Name = &value + return b +} + +// WithNamespace sets the Namespace field in the declarative configuration to the given value +// and returns the receiver, so that objects can be built by chaining "With" function invocations. +// If called multiple times, the Namespace field is set to the value of the last call. +func (b *ParamRefApplyConfiguration) WithNamespace(value string) *ParamRefApplyConfiguration { + b.Namespace = &value + return b +} + +// WithSelector sets the Selector field in the declarative configuration to the given value +// and returns the receiver, so that objects can be built by chaining "With" function invocations. +// If called multiple times, the Selector field is set to the value of the last call. +func (b *ParamRefApplyConfiguration) WithSelector(value *v1.LabelSelectorApplyConfiguration) *ParamRefApplyConfiguration { + b.Selector = value + return b +} + +// WithParameterNotFoundAction sets the ParameterNotFoundAction field in the declarative configuration to the given value +// and returns the receiver, so that objects can be built by chaining "With" function invocations. +// If called multiple times, the ParameterNotFoundAction field is set to the value of the last call. 
+func (b *ParamRefApplyConfiguration) WithParameterNotFoundAction(value v1beta1.ParameterNotFoundActionType) *ParamRefApplyConfiguration { + b.ParameterNotFoundAction = &value + return b +} diff --git a/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/admissionregistration/v1beta1/typechecking.go b/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/admissionregistration/v1beta1/typechecking.go new file mode 100644 index 000000000000..07baf334cd39 --- /dev/null +++ b/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/admissionregistration/v1beta1/typechecking.go @@ -0,0 +1,44 @@ +/* +Copyright The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +// Code generated by applyconfiguration-gen. DO NOT EDIT. + +package v1beta1 + +// TypeCheckingApplyConfiguration represents a declarative configuration of the TypeChecking type for use +// with apply. +type TypeCheckingApplyConfiguration struct { + ExpressionWarnings []ExpressionWarningApplyConfiguration `json:"expressionWarnings,omitempty"` +} + +// TypeCheckingApplyConfiguration constructs a declarative configuration of the TypeChecking type for use with +// apply. +func TypeChecking() *TypeCheckingApplyConfiguration { + return &TypeCheckingApplyConfiguration{} +} + +// WithExpressionWarnings adds the given value to the ExpressionWarnings field in the declarative configuration +// and returns the receiver, so that objects can be built by chaining "With" function invocations.
+// If called multiple times, values provided by each call will be appended to the ExpressionWarnings field. +func (b *TypeCheckingApplyConfiguration) WithExpressionWarnings(values ...*ExpressionWarningApplyConfiguration) *TypeCheckingApplyConfiguration { + for i := range values { + if values[i] == nil { + panic("nil value passed to WithExpressionWarnings") + } + b.ExpressionWarnings = append(b.ExpressionWarnings, *values[i]) + } + return b +} diff --git a/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/admissionregistration/v1beta1/validatingadmissionpolicy.go b/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/admissionregistration/v1beta1/validatingadmissionpolicy.go new file mode 100644 index 000000000000..e144bc9f701c --- /dev/null +++ b/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/admissionregistration/v1beta1/validatingadmissionpolicy.go @@ -0,0 +1,256 @@ +/* +Copyright The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +// Code generated by applyconfiguration-gen. DO NOT EDIT. 
+ +package v1beta1 + +import ( + admissionregistrationv1beta1 "k8s.io/api/admissionregistration/v1beta1" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + types "k8s.io/apimachinery/pkg/types" + managedfields "k8s.io/apimachinery/pkg/util/managedfields" + internal "k8s.io/client-go/applyconfigurations/internal" + v1 "k8s.io/client-go/applyconfigurations/meta/v1" +) + +// ValidatingAdmissionPolicyApplyConfiguration represents a declarative configuration of the ValidatingAdmissionPolicy type for use +// with apply. +type ValidatingAdmissionPolicyApplyConfiguration struct { + v1.TypeMetaApplyConfiguration `json:",inline"` + *v1.ObjectMetaApplyConfiguration `json:"metadata,omitempty"` + Spec *ValidatingAdmissionPolicySpecApplyConfiguration `json:"spec,omitempty"` + Status *ValidatingAdmissionPolicyStatusApplyConfiguration `json:"status,omitempty"` +} + +// ValidatingAdmissionPolicy constructs a declarative configuration of the ValidatingAdmissionPolicy type for use with +// apply. +func ValidatingAdmissionPolicy(name string) *ValidatingAdmissionPolicyApplyConfiguration { + b := &ValidatingAdmissionPolicyApplyConfiguration{} + b.WithName(name) + b.WithKind("ValidatingAdmissionPolicy") + b.WithAPIVersion("admissionregistration.k8s.io/v1beta1") + return b +} + +// ExtractValidatingAdmissionPolicy extracts the applied configuration owned by fieldManager from +// validatingAdmissionPolicy. If no managedFields are found in validatingAdmissionPolicy for fieldManager, a +// ValidatingAdmissionPolicyApplyConfiguration is returned with only the Name, Namespace (if applicable), +// APIVersion and Kind populated. It is possible that no managed fields were found because other +// field managers have taken ownership of all the fields previously owned by fieldManager, or because +// the fieldManager never owned any fields. +// validatingAdmissionPolicy must be an unmodified ValidatingAdmissionPolicy API object that was retrieved from the Kubernetes API.
+// ExtractValidatingAdmissionPolicy provides a way to perform an extract/modify-in-place/apply workflow. +// Note that an extracted apply configuration will contain fewer fields than what the fieldManager previously +// applied if another fieldManager has updated or force applied any of the previously applied fields. +// Experimental! +func ExtractValidatingAdmissionPolicy(validatingAdmissionPolicy *admissionregistrationv1beta1.ValidatingAdmissionPolicy, fieldManager string) (*ValidatingAdmissionPolicyApplyConfiguration, error) { + return extractValidatingAdmissionPolicy(validatingAdmissionPolicy, fieldManager, "") +} + +// ExtractValidatingAdmissionPolicyStatus is the same as ExtractValidatingAdmissionPolicy except +// that it extracts the status subresource applied configuration. +// Experimental! +func ExtractValidatingAdmissionPolicyStatus(validatingAdmissionPolicy *admissionregistrationv1beta1.ValidatingAdmissionPolicy, fieldManager string) (*ValidatingAdmissionPolicyApplyConfiguration, error) { + return extractValidatingAdmissionPolicy(validatingAdmissionPolicy, fieldManager, "status") +} + +func extractValidatingAdmissionPolicy(validatingAdmissionPolicy *admissionregistrationv1beta1.ValidatingAdmissionPolicy, fieldManager string, subresource string) (*ValidatingAdmissionPolicyApplyConfiguration, error) { + b := &ValidatingAdmissionPolicyApplyConfiguration{} + err := managedfields.ExtractInto(validatingAdmissionPolicy, internal.Parser().Type("io.k8s.api.admissionregistration.v1beta1.ValidatingAdmissionPolicy"), fieldManager, b, subresource) + if err != nil { + return nil, err + } + b.WithName(validatingAdmissionPolicy.Name) + + b.WithKind("ValidatingAdmissionPolicy") + b.WithAPIVersion("admissionregistration.k8s.io/v1beta1") + return b, nil +} + +// WithKind sets the Kind field in the declarative configuration to the given value +// and returns the receiver, so that objects can be built by chaining "With" function invocations.
+// If called multiple times, the Kind field is set to the value of the last call. +func (b *ValidatingAdmissionPolicyApplyConfiguration) WithKind(value string) *ValidatingAdmissionPolicyApplyConfiguration { + b.Kind = &value + return b +} + +// WithAPIVersion sets the APIVersion field in the declarative configuration to the given value +// and returns the receiver, so that objects can be built by chaining "With" function invocations. +// If called multiple times, the APIVersion field is set to the value of the last call. +func (b *ValidatingAdmissionPolicyApplyConfiguration) WithAPIVersion(value string) *ValidatingAdmissionPolicyApplyConfiguration { + b.APIVersion = &value + return b +} + +// WithName sets the Name field in the declarative configuration to the given value +// and returns the receiver, so that objects can be built by chaining "With" function invocations. +// If called multiple times, the Name field is set to the value of the last call. +func (b *ValidatingAdmissionPolicyApplyConfiguration) WithName(value string) *ValidatingAdmissionPolicyApplyConfiguration { + b.ensureObjectMetaApplyConfigurationExists() + b.Name = &value + return b +} + +// WithGenerateName sets the GenerateName field in the declarative configuration to the given value +// and returns the receiver, so that objects can be built by chaining "With" function invocations. +// If called multiple times, the GenerateName field is set to the value of the last call. +func (b *ValidatingAdmissionPolicyApplyConfiguration) WithGenerateName(value string) *ValidatingAdmissionPolicyApplyConfiguration { + b.ensureObjectMetaApplyConfigurationExists() + b.GenerateName = &value + return b +} + +// WithNamespace sets the Namespace field in the declarative configuration to the given value +// and returns the receiver, so that objects can be built by chaining "With" function invocations. +// If called multiple times, the Namespace field is set to the value of the last call. 
+func (b *ValidatingAdmissionPolicyApplyConfiguration) WithNamespace(value string) *ValidatingAdmissionPolicyApplyConfiguration { + b.ensureObjectMetaApplyConfigurationExists() + b.Namespace = &value + return b +} + +// WithUID sets the UID field in the declarative configuration to the given value +// and returns the receiver, so that objects can be built by chaining "With" function invocations. +// If called multiple times, the UID field is set to the value of the last call. +func (b *ValidatingAdmissionPolicyApplyConfiguration) WithUID(value types.UID) *ValidatingAdmissionPolicyApplyConfiguration { + b.ensureObjectMetaApplyConfigurationExists() + b.UID = &value + return b +} + +// WithResourceVersion sets the ResourceVersion field in the declarative configuration to the given value +// and returns the receiver, so that objects can be built by chaining "With" function invocations. +// If called multiple times, the ResourceVersion field is set to the value of the last call. +func (b *ValidatingAdmissionPolicyApplyConfiguration) WithResourceVersion(value string) *ValidatingAdmissionPolicyApplyConfiguration { + b.ensureObjectMetaApplyConfigurationExists() + b.ResourceVersion = &value + return b +} + +// WithGeneration sets the Generation field in the declarative configuration to the given value +// and returns the receiver, so that objects can be built by chaining "With" function invocations. +// If called multiple times, the Generation field is set to the value of the last call. +func (b *ValidatingAdmissionPolicyApplyConfiguration) WithGeneration(value int64) *ValidatingAdmissionPolicyApplyConfiguration { + b.ensureObjectMetaApplyConfigurationExists() + b.Generation = &value + return b +} + +// WithCreationTimestamp sets the CreationTimestamp field in the declarative configuration to the given value +// and returns the receiver, so that objects can be built by chaining "With" function invocations. 
+// If called multiple times, the CreationTimestamp field is set to the value of the last call. +func (b *ValidatingAdmissionPolicyApplyConfiguration) WithCreationTimestamp(value metav1.Time) *ValidatingAdmissionPolicyApplyConfiguration { + b.ensureObjectMetaApplyConfigurationExists() + b.CreationTimestamp = &value + return b +} + +// WithDeletionTimestamp sets the DeletionTimestamp field in the declarative configuration to the given value +// and returns the receiver, so that objects can be built by chaining "With" function invocations. +// If called multiple times, the DeletionTimestamp field is set to the value of the last call. +func (b *ValidatingAdmissionPolicyApplyConfiguration) WithDeletionTimestamp(value metav1.Time) *ValidatingAdmissionPolicyApplyConfiguration { + b.ensureObjectMetaApplyConfigurationExists() + b.DeletionTimestamp = &value + return b +} + +// WithDeletionGracePeriodSeconds sets the DeletionGracePeriodSeconds field in the declarative configuration to the given value +// and returns the receiver, so that objects can be built by chaining "With" function invocations. +// If called multiple times, the DeletionGracePeriodSeconds field is set to the value of the last call. +func (b *ValidatingAdmissionPolicyApplyConfiguration) WithDeletionGracePeriodSeconds(value int64) *ValidatingAdmissionPolicyApplyConfiguration { + b.ensureObjectMetaApplyConfigurationExists() + b.DeletionGracePeriodSeconds = &value + return b +} + +// WithLabels puts the entries into the Labels field in the declarative configuration +// and returns the receiver, so that objects can be built by chaining "With" function invocations. +// If called multiple times, the entries provided by each call will be put on the Labels field, +// overwriting existing map entries in the Labels field with the same key.
+func (b *ValidatingAdmissionPolicyApplyConfiguration) WithLabels(entries map[string]string) *ValidatingAdmissionPolicyApplyConfiguration { + b.ensureObjectMetaApplyConfigurationExists() + if b.Labels == nil && len(entries) > 0 { + b.Labels = make(map[string]string, len(entries)) + } + for k, v := range entries { + b.Labels[k] = v + } + return b +} + +// WithAnnotations puts the entries into the Annotations field in the declarative configuration +// and returns the receiver, so that objects can be built by chaining "With" function invocations. +// If called multiple times, the entries provided by each call will be put on the Annotations field, +// overwriting existing map entries in the Annotations field with the same key. +func (b *ValidatingAdmissionPolicyApplyConfiguration) WithAnnotations(entries map[string]string) *ValidatingAdmissionPolicyApplyConfiguration { + b.ensureObjectMetaApplyConfigurationExists() + if b.Annotations == nil && len(entries) > 0 { + b.Annotations = make(map[string]string, len(entries)) + } + for k, v := range entries { + b.Annotations[k] = v + } + return b +} + +// WithOwnerReferences adds the given value to the OwnerReferences field in the declarative configuration +// and returns the receiver, so that objects can be built by chaining "With" function invocations. +// If called multiple times, values provided by each call will be appended to the OwnerReferences field.
+func (b *ValidatingAdmissionPolicyApplyConfiguration) WithOwnerReferences(values ...*v1.OwnerReferenceApplyConfiguration) *ValidatingAdmissionPolicyApplyConfiguration { + b.ensureObjectMetaApplyConfigurationExists() + for i := range values { + if values[i] == nil { + panic("nil value passed to WithOwnerReferences") + } + b.OwnerReferences = append(b.OwnerReferences, *values[i]) + } + return b +} + +// WithFinalizers adds the given value to the Finalizers field in the declarative configuration +// and returns the receiver, so that objects can be built by chaining "With" function invocations. +// If called multiple times, values provided by each call will be appended to the Finalizers field. +func (b *ValidatingAdmissionPolicyApplyConfiguration) WithFinalizers(values ...string) *ValidatingAdmissionPolicyApplyConfiguration { + b.ensureObjectMetaApplyConfigurationExists() + for i := range values { + b.Finalizers = append(b.Finalizers, values[i]) + } + return b +} + +func (b *ValidatingAdmissionPolicyApplyConfiguration) ensureObjectMetaApplyConfigurationExists() { + if b.ObjectMetaApplyConfiguration == nil { + b.ObjectMetaApplyConfiguration = &v1.ObjectMetaApplyConfiguration{} + } +} + +// WithSpec sets the Spec field in the declarative configuration to the given value +// and returns the receiver, so that objects can be built by chaining "With" function invocations. +// If called multiple times, the Spec field is set to the value of the last call. +func (b *ValidatingAdmissionPolicyApplyConfiguration) WithSpec(value *ValidatingAdmissionPolicySpecApplyConfiguration) *ValidatingAdmissionPolicyApplyConfiguration { + b.Spec = value + return b +} + +// WithStatus sets the Status field in the declarative configuration to the given value +// and returns the receiver, so that objects can be built by chaining "With" function invocations. +// If called multiple times, the Status field is set to the value of the last call.
+func (b *ValidatingAdmissionPolicyApplyConfiguration) WithStatus(value *ValidatingAdmissionPolicyStatusApplyConfiguration) *ValidatingAdmissionPolicyApplyConfiguration { + b.Status = value + return b +} diff --git a/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/admissionregistration/v1beta1/validatingadmissionpolicybinding.go b/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/admissionregistration/v1beta1/validatingadmissionpolicybinding.go new file mode 100644 index 000000000000..0dc06aedecdd --- /dev/null +++ b/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/admissionregistration/v1beta1/validatingadmissionpolicybinding.go @@ -0,0 +1,247 @@ +/* +Copyright The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +// Code generated by applyconfiguration-gen. DO NOT EDIT. + +package v1beta1 + +import ( + admissionregistrationv1beta1 "k8s.io/api/admissionregistration/v1beta1" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + types "k8s.io/apimachinery/pkg/types" + managedfields "k8s.io/apimachinery/pkg/util/managedfields" + internal "k8s.io/client-go/applyconfigurations/internal" + v1 "k8s.io/client-go/applyconfigurations/meta/v1" +) + +// ValidatingAdmissionPolicyBindingApplyConfiguration represents a declarative configuration of the ValidatingAdmissionPolicyBinding type for use +// with apply.
+type ValidatingAdmissionPolicyBindingApplyConfiguration struct { + v1.TypeMetaApplyConfiguration `json:",inline"` + *v1.ObjectMetaApplyConfiguration `json:"metadata,omitempty"` + Spec *ValidatingAdmissionPolicyBindingSpecApplyConfiguration `json:"spec,omitempty"` +} + +// ValidatingAdmissionPolicyBinding constructs a declarative configuration of the ValidatingAdmissionPolicyBinding type for use with +// apply. +func ValidatingAdmissionPolicyBinding(name string) *ValidatingAdmissionPolicyBindingApplyConfiguration { + b := &ValidatingAdmissionPolicyBindingApplyConfiguration{} + b.WithName(name) + b.WithKind("ValidatingAdmissionPolicyBinding") + b.WithAPIVersion("admissionregistration.k8s.io/v1beta1") + return b +} + +// ExtractValidatingAdmissionPolicyBinding extracts the applied configuration owned by fieldManager from +// validatingAdmissionPolicyBinding. If no managedFields are found in validatingAdmissionPolicyBinding for fieldManager, a +// ValidatingAdmissionPolicyBindingApplyConfiguration is returned with only the Name, Namespace (if applicable), +// APIVersion and Kind populated. It is possible that no managed fields were found because other +// field managers have taken ownership of all the fields previously owned by fieldManager, or because +// the fieldManager never owned any fields. +// validatingAdmissionPolicyBinding must be an unmodified ValidatingAdmissionPolicyBinding API object that was retrieved from the Kubernetes API. +// ExtractValidatingAdmissionPolicyBinding provides a way to perform an extract/modify-in-place/apply workflow. +// Note that an extracted apply configuration will contain fewer fields than what the fieldManager previously +// applied if another fieldManager has updated or force applied any of the previously applied fields. +// Experimental!
+func ExtractValidatingAdmissionPolicyBinding(validatingAdmissionPolicyBinding *admissionregistrationv1beta1.ValidatingAdmissionPolicyBinding, fieldManager string) (*ValidatingAdmissionPolicyBindingApplyConfiguration, error) { + return extractValidatingAdmissionPolicyBinding(validatingAdmissionPolicyBinding, fieldManager, "") +} + +// ExtractValidatingAdmissionPolicyBindingStatus is the same as ExtractValidatingAdmissionPolicyBinding except +// that it extracts the status subresource applied configuration. +// Experimental! +func ExtractValidatingAdmissionPolicyBindingStatus(validatingAdmissionPolicyBinding *admissionregistrationv1beta1.ValidatingAdmissionPolicyBinding, fieldManager string) (*ValidatingAdmissionPolicyBindingApplyConfiguration, error) { + return extractValidatingAdmissionPolicyBinding(validatingAdmissionPolicyBinding, fieldManager, "status") +} + +func extractValidatingAdmissionPolicyBinding(validatingAdmissionPolicyBinding *admissionregistrationv1beta1.ValidatingAdmissionPolicyBinding, fieldManager string, subresource string) (*ValidatingAdmissionPolicyBindingApplyConfiguration, error) { + b := &ValidatingAdmissionPolicyBindingApplyConfiguration{} + err := managedfields.ExtractInto(validatingAdmissionPolicyBinding, internal.Parser().Type("io.k8s.api.admissionregistration.v1beta1.ValidatingAdmissionPolicyBinding"), fieldManager, b, subresource) + if err != nil { + return nil, err + } + b.WithName(validatingAdmissionPolicyBinding.Name) + + b.WithKind("ValidatingAdmissionPolicyBinding") + b.WithAPIVersion("admissionregistration.k8s.io/v1beta1") + return b, nil +} + +// WithKind sets the Kind field in the declarative configuration to the given value +// and returns the receiver, so that objects can be built by chaining "With" function invocations. +// If called multiple times, the Kind field is set to the value of the last call. 
+func (b *ValidatingAdmissionPolicyBindingApplyConfiguration) WithKind(value string) *ValidatingAdmissionPolicyBindingApplyConfiguration { + b.Kind = &value + return b +} + +// WithAPIVersion sets the APIVersion field in the declarative configuration to the given value +// and returns the receiver, so that objects can be built by chaining "With" function invocations. +// If called multiple times, the APIVersion field is set to the value of the last call. +func (b *ValidatingAdmissionPolicyBindingApplyConfiguration) WithAPIVersion(value string) *ValidatingAdmissionPolicyBindingApplyConfiguration { + b.APIVersion = &value + return b +} + +// WithName sets the Name field in the declarative configuration to the given value +// and returns the receiver, so that objects can be built by chaining "With" function invocations. +// If called multiple times, the Name field is set to the value of the last call. +func (b *ValidatingAdmissionPolicyBindingApplyConfiguration) WithName(value string) *ValidatingAdmissionPolicyBindingApplyConfiguration { + b.ensureObjectMetaApplyConfigurationExists() + b.Name = &value + return b +} + +// WithGenerateName sets the GenerateName field in the declarative configuration to the given value +// and returns the receiver, so that objects can be built by chaining "With" function invocations. +// If called multiple times, the GenerateName field is set to the value of the last call. +func (b *ValidatingAdmissionPolicyBindingApplyConfiguration) WithGenerateName(value string) *ValidatingAdmissionPolicyBindingApplyConfiguration { + b.ensureObjectMetaApplyConfigurationExists() + b.GenerateName = &value + return b +} + +// WithNamespace sets the Namespace field in the declarative configuration to the given value +// and returns the receiver, so that objects can be built by chaining "With" function invocations. +// If called multiple times, the Namespace field is set to the value of the last call. 
+func (b *ValidatingAdmissionPolicyBindingApplyConfiguration) WithNamespace(value string) *ValidatingAdmissionPolicyBindingApplyConfiguration { + b.ensureObjectMetaApplyConfigurationExists() + b.Namespace = &value + return b +} + +// WithUID sets the UID field in the declarative configuration to the given value +// and returns the receiver, so that objects can be built by chaining "With" function invocations. +// If called multiple times, the UID field is set to the value of the last call. +func (b *ValidatingAdmissionPolicyBindingApplyConfiguration) WithUID(value types.UID) *ValidatingAdmissionPolicyBindingApplyConfiguration { + b.ensureObjectMetaApplyConfigurationExists() + b.UID = &value + return b +} + +// WithResourceVersion sets the ResourceVersion field in the declarative configuration to the given value +// and returns the receiver, so that objects can be built by chaining "With" function invocations. +// If called multiple times, the ResourceVersion field is set to the value of the last call. +func (b *ValidatingAdmissionPolicyBindingApplyConfiguration) WithResourceVersion(value string) *ValidatingAdmissionPolicyBindingApplyConfiguration { + b.ensureObjectMetaApplyConfigurationExists() + b.ResourceVersion = &value + return b +} + +// WithGeneration sets the Generation field in the declarative configuration to the given value +// and returns the receiver, so that objects can be built by chaining "With" function invocations. +// If called multiple times, the Generation field is set to the value of the last call. +func (b *ValidatingAdmissionPolicyBindingApplyConfiguration) WithGeneration(value int64) *ValidatingAdmissionPolicyBindingApplyConfiguration { + b.ensureObjectMetaApplyConfigurationExists() + b.Generation = &value + return b +} + +// WithCreationTimestamp sets the CreationTimestamp field in the declarative configuration to the given value +// and returns the receiver, so that objects can be built by chaining "With" function invocations. 
+// If called multiple times, the CreationTimestamp field is set to the value of the last call. +func (b *ValidatingAdmissionPolicyBindingApplyConfiguration) WithCreationTimestamp(value metav1.Time) *ValidatingAdmissionPolicyBindingApplyConfiguration { + b.ensureObjectMetaApplyConfigurationExists() + b.CreationTimestamp = &value + return b +} + +// WithDeletionTimestamp sets the DeletionTimestamp field in the declarative configuration to the given value +// and returns the receiver, so that objects can be built by chaining "With" function invocations. +// If called multiple times, the DeletionTimestamp field is set to the value of the last call. +func (b *ValidatingAdmissionPolicyBindingApplyConfiguration) WithDeletionTimestamp(value metav1.Time) *ValidatingAdmissionPolicyBindingApplyConfiguration { + b.ensureObjectMetaApplyConfigurationExists() + b.DeletionTimestamp = &value + return b +} + +// WithDeletionGracePeriodSeconds sets the DeletionGracePeriodSeconds field in the declarative configuration to the given value +// and returns the receiver, so that objects can be built by chaining "With" function invocations. +// If called multiple times, the DeletionGracePeriodSeconds field is set to the value of the last call. +func (b *ValidatingAdmissionPolicyBindingApplyConfiguration) WithDeletionGracePeriodSeconds(value int64) *ValidatingAdmissionPolicyBindingApplyConfiguration { + b.ensureObjectMetaApplyConfigurationExists() + b.DeletionGracePeriodSeconds = &value + return b +} + +// WithLabels puts the entries into the Labels field in the declarative configuration +// and returns the receiver, so that objects can be build by chaining "With" function invocations. +// If called multiple times, the entries provided by each call will be put on the Labels field, +// overwriting an existing map entries in Labels field with the same key. 
+func (b *ValidatingAdmissionPolicyBindingApplyConfiguration) WithLabels(entries map[string]string) *ValidatingAdmissionPolicyBindingApplyConfiguration { + b.ensureObjectMetaApplyConfigurationExists() + if b.Labels == nil && len(entries) > 0 { + b.Labels = make(map[string]string, len(entries)) + } + for k, v := range entries { + b.Labels[k] = v + } + return b +} + +// WithAnnotations puts the entries into the Annotations field in the declarative configuration +// and returns the receiver, so that objects can be build by chaining "With" function invocations. +// If called multiple times, the entries provided by each call will be put on the Annotations field, +// overwriting an existing map entries in Annotations field with the same key. +func (b *ValidatingAdmissionPolicyBindingApplyConfiguration) WithAnnotations(entries map[string]string) *ValidatingAdmissionPolicyBindingApplyConfiguration { + b.ensureObjectMetaApplyConfigurationExists() + if b.Annotations == nil && len(entries) > 0 { + b.Annotations = make(map[string]string, len(entries)) + } + for k, v := range entries { + b.Annotations[k] = v + } + return b +} + +// WithOwnerReferences adds the given value to the OwnerReferences field in the declarative configuration +// and returns the receiver, so that objects can be build by chaining "With" function invocations. +// If called multiple times, values provided by each call will be appended to the OwnerReferences field. 
+func (b *ValidatingAdmissionPolicyBindingApplyConfiguration) WithOwnerReferences(values ...*v1.OwnerReferenceApplyConfiguration) *ValidatingAdmissionPolicyBindingApplyConfiguration { + b.ensureObjectMetaApplyConfigurationExists() + for i := range values { + if values[i] == nil { + panic("nil value passed to WithOwnerReferences") + } + b.OwnerReferences = append(b.OwnerReferences, *values[i]) + } + return b +} + +// WithFinalizers adds the given value to the Finalizers field in the declarative configuration +// and returns the receiver, so that objects can be build by chaining "With" function invocations. +// If called multiple times, values provided by each call will be appended to the Finalizers field. +func (b *ValidatingAdmissionPolicyBindingApplyConfiguration) WithFinalizers(values ...string) *ValidatingAdmissionPolicyBindingApplyConfiguration { + b.ensureObjectMetaApplyConfigurationExists() + for i := range values { + b.Finalizers = append(b.Finalizers, values[i]) + } + return b +} + +func (b *ValidatingAdmissionPolicyBindingApplyConfiguration) ensureObjectMetaApplyConfigurationExists() { + if b.ObjectMetaApplyConfiguration == nil { + b.ObjectMetaApplyConfiguration = &v1.ObjectMetaApplyConfiguration{} + } +} + +// WithSpec sets the Spec field in the declarative configuration to the given value +// and returns the receiver, so that objects can be built by chaining "With" function invocations. +// If called multiple times, the Spec field is set to the value of the last call. 
+func (b *ValidatingAdmissionPolicyBindingApplyConfiguration) WithSpec(value *ValidatingAdmissionPolicyBindingSpecApplyConfiguration) *ValidatingAdmissionPolicyBindingApplyConfiguration { + b.Spec = value + return b +} diff --git a/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/admissionregistration/v1beta1/validatingadmissionpolicybindingspec.go b/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/admissionregistration/v1beta1/validatingadmissionpolicybindingspec.go new file mode 100644 index 000000000000..d20a78efffb0 --- /dev/null +++ b/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/admissionregistration/v1beta1/validatingadmissionpolicybindingspec.go @@ -0,0 +1,72 @@ +/* +Copyright The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +// Code generated by applyconfiguration-gen. DO NOT EDIT. + +package v1beta1 + +import ( + admissionregistrationv1beta1 "k8s.io/api/admissionregistration/v1beta1" +) + +// ValidatingAdmissionPolicyBindingSpecApplyConfiguration represents an declarative configuration of the ValidatingAdmissionPolicyBindingSpec type for use +// with apply. 
+type ValidatingAdmissionPolicyBindingSpecApplyConfiguration struct { + PolicyName *string `json:"policyName,omitempty"` + ParamRef *ParamRefApplyConfiguration `json:"paramRef,omitempty"` + MatchResources *MatchResourcesApplyConfiguration `json:"matchResources,omitempty"` + ValidationActions []admissionregistrationv1beta1.ValidationAction `json:"validationActions,omitempty"` +} + +// ValidatingAdmissionPolicyBindingSpecApplyConfiguration constructs an declarative configuration of the ValidatingAdmissionPolicyBindingSpec type for use with +// apply. +func ValidatingAdmissionPolicyBindingSpec() *ValidatingAdmissionPolicyBindingSpecApplyConfiguration { + return &ValidatingAdmissionPolicyBindingSpecApplyConfiguration{} +} + +// WithPolicyName sets the PolicyName field in the declarative configuration to the given value +// and returns the receiver, so that objects can be built by chaining "With" function invocations. +// If called multiple times, the PolicyName field is set to the value of the last call. +func (b *ValidatingAdmissionPolicyBindingSpecApplyConfiguration) WithPolicyName(value string) *ValidatingAdmissionPolicyBindingSpecApplyConfiguration { + b.PolicyName = &value + return b +} + +// WithParamRef sets the ParamRef field in the declarative configuration to the given value +// and returns the receiver, so that objects can be built by chaining "With" function invocations. +// If called multiple times, the ParamRef field is set to the value of the last call. +func (b *ValidatingAdmissionPolicyBindingSpecApplyConfiguration) WithParamRef(value *ParamRefApplyConfiguration) *ValidatingAdmissionPolicyBindingSpecApplyConfiguration { + b.ParamRef = value + return b +} + +// WithMatchResources sets the MatchResources field in the declarative configuration to the given value +// and returns the receiver, so that objects can be built by chaining "With" function invocations. +// If called multiple times, the MatchResources field is set to the value of the last call. 
+func (b *ValidatingAdmissionPolicyBindingSpecApplyConfiguration) WithMatchResources(value *MatchResourcesApplyConfiguration) *ValidatingAdmissionPolicyBindingSpecApplyConfiguration { + b.MatchResources = value + return b +} + +// WithValidationActions adds the given value to the ValidationActions field in the declarative configuration +// and returns the receiver, so that objects can be build by chaining "With" function invocations. +// If called multiple times, values provided by each call will be appended to the ValidationActions field. +func (b *ValidatingAdmissionPolicyBindingSpecApplyConfiguration) WithValidationActions(values ...admissionregistrationv1beta1.ValidationAction) *ValidatingAdmissionPolicyBindingSpecApplyConfiguration { + for i := range values { + b.ValidationActions = append(b.ValidationActions, values[i]) + } + return b +} diff --git a/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/admissionregistration/v1beta1/validatingadmissionpolicyspec.go b/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/admissionregistration/v1beta1/validatingadmissionpolicyspec.go new file mode 100644 index 000000000000..c6e938910337 --- /dev/null +++ b/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/admissionregistration/v1beta1/validatingadmissionpolicyspec.go @@ -0,0 +1,117 @@ +/* +Copyright The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +// Code generated by applyconfiguration-gen. DO NOT EDIT. 
+ +package v1beta1 + +import ( + admissionregistrationv1beta1 "k8s.io/api/admissionregistration/v1beta1" +) + +// ValidatingAdmissionPolicySpecApplyConfiguration represents an declarative configuration of the ValidatingAdmissionPolicySpec type for use +// with apply. +type ValidatingAdmissionPolicySpecApplyConfiguration struct { + ParamKind *ParamKindApplyConfiguration `json:"paramKind,omitempty"` + MatchConstraints *MatchResourcesApplyConfiguration `json:"matchConstraints,omitempty"` + Validations []ValidationApplyConfiguration `json:"validations,omitempty"` + FailurePolicy *admissionregistrationv1beta1.FailurePolicyType `json:"failurePolicy,omitempty"` + AuditAnnotations []AuditAnnotationApplyConfiguration `json:"auditAnnotations,omitempty"` + MatchConditions []MatchConditionApplyConfiguration `json:"matchConditions,omitempty"` + Variables []VariableApplyConfiguration `json:"variables,omitempty"` +} + +// ValidatingAdmissionPolicySpecApplyConfiguration constructs an declarative configuration of the ValidatingAdmissionPolicySpec type for use with +// apply. +func ValidatingAdmissionPolicySpec() *ValidatingAdmissionPolicySpecApplyConfiguration { + return &ValidatingAdmissionPolicySpecApplyConfiguration{} +} + +// WithParamKind sets the ParamKind field in the declarative configuration to the given value +// and returns the receiver, so that objects can be built by chaining "With" function invocations. +// If called multiple times, the ParamKind field is set to the value of the last call. +func (b *ValidatingAdmissionPolicySpecApplyConfiguration) WithParamKind(value *ParamKindApplyConfiguration) *ValidatingAdmissionPolicySpecApplyConfiguration { + b.ParamKind = value + return b +} + +// WithMatchConstraints sets the MatchConstraints field in the declarative configuration to the given value +// and returns the receiver, so that objects can be built by chaining "With" function invocations. 
+// If called multiple times, the MatchConstraints field is set to the value of the last call. +func (b *ValidatingAdmissionPolicySpecApplyConfiguration) WithMatchConstraints(value *MatchResourcesApplyConfiguration) *ValidatingAdmissionPolicySpecApplyConfiguration { + b.MatchConstraints = value + return b +} + +// WithValidations adds the given value to the Validations field in the declarative configuration +// and returns the receiver, so that objects can be build by chaining "With" function invocations. +// If called multiple times, values provided by each call will be appended to the Validations field. +func (b *ValidatingAdmissionPolicySpecApplyConfiguration) WithValidations(values ...*ValidationApplyConfiguration) *ValidatingAdmissionPolicySpecApplyConfiguration { + for i := range values { + if values[i] == nil { + panic("nil value passed to WithValidations") + } + b.Validations = append(b.Validations, *values[i]) + } + return b +} + +// WithFailurePolicy sets the FailurePolicy field in the declarative configuration to the given value +// and returns the receiver, so that objects can be built by chaining "With" function invocations. +// If called multiple times, the FailurePolicy field is set to the value of the last call. +func (b *ValidatingAdmissionPolicySpecApplyConfiguration) WithFailurePolicy(value admissionregistrationv1beta1.FailurePolicyType) *ValidatingAdmissionPolicySpecApplyConfiguration { + b.FailurePolicy = &value + return b +} + +// WithAuditAnnotations adds the given value to the AuditAnnotations field in the declarative configuration +// and returns the receiver, so that objects can be build by chaining "With" function invocations. +// If called multiple times, values provided by each call will be appended to the AuditAnnotations field. 
+func (b *ValidatingAdmissionPolicySpecApplyConfiguration) WithAuditAnnotations(values ...*AuditAnnotationApplyConfiguration) *ValidatingAdmissionPolicySpecApplyConfiguration { + for i := range values { + if values[i] == nil { + panic("nil value passed to WithAuditAnnotations") + } + b.AuditAnnotations = append(b.AuditAnnotations, *values[i]) + } + return b +} + +// WithMatchConditions adds the given value to the MatchConditions field in the declarative configuration +// and returns the receiver, so that objects can be build by chaining "With" function invocations. +// If called multiple times, values provided by each call will be appended to the MatchConditions field. +func (b *ValidatingAdmissionPolicySpecApplyConfiguration) WithMatchConditions(values ...*MatchConditionApplyConfiguration) *ValidatingAdmissionPolicySpecApplyConfiguration { + for i := range values { + if values[i] == nil { + panic("nil value passed to WithMatchConditions") + } + b.MatchConditions = append(b.MatchConditions, *values[i]) + } + return b +} + +// WithVariables adds the given value to the Variables field in the declarative configuration +// and returns the receiver, so that objects can be build by chaining "With" function invocations. +// If called multiple times, values provided by each call will be appended to the Variables field. 
+func (b *ValidatingAdmissionPolicySpecApplyConfiguration) WithVariables(values ...*VariableApplyConfiguration) *ValidatingAdmissionPolicySpecApplyConfiguration { + for i := range values { + if values[i] == nil { + panic("nil value passed to WithVariables") + } + b.Variables = append(b.Variables, *values[i]) + } + return b +} diff --git a/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/admissionregistration/v1beta1/validatingadmissionpolicystatus.go b/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/admissionregistration/v1beta1/validatingadmissionpolicystatus.go new file mode 100644 index 000000000000..e3e6d417edd5 --- /dev/null +++ b/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/admissionregistration/v1beta1/validatingadmissionpolicystatus.go @@ -0,0 +1,66 @@ +/* +Copyright The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +// Code generated by applyconfiguration-gen. DO NOT EDIT. + +package v1beta1 + +import ( + v1 "k8s.io/client-go/applyconfigurations/meta/v1" +) + +// ValidatingAdmissionPolicyStatusApplyConfiguration represents an declarative configuration of the ValidatingAdmissionPolicyStatus type for use +// with apply. 
+type ValidatingAdmissionPolicyStatusApplyConfiguration struct { + ObservedGeneration *int64 `json:"observedGeneration,omitempty"` + TypeChecking *TypeCheckingApplyConfiguration `json:"typeChecking,omitempty"` + Conditions []v1.ConditionApplyConfiguration `json:"conditions,omitempty"` +} + +// ValidatingAdmissionPolicyStatusApplyConfiguration constructs an declarative configuration of the ValidatingAdmissionPolicyStatus type for use with +// apply. +func ValidatingAdmissionPolicyStatus() *ValidatingAdmissionPolicyStatusApplyConfiguration { + return &ValidatingAdmissionPolicyStatusApplyConfiguration{} +} + +// WithObservedGeneration sets the ObservedGeneration field in the declarative configuration to the given value +// and returns the receiver, so that objects can be built by chaining "With" function invocations. +// If called multiple times, the ObservedGeneration field is set to the value of the last call. +func (b *ValidatingAdmissionPolicyStatusApplyConfiguration) WithObservedGeneration(value int64) *ValidatingAdmissionPolicyStatusApplyConfiguration { + b.ObservedGeneration = &value + return b +} + +// WithTypeChecking sets the TypeChecking field in the declarative configuration to the given value +// and returns the receiver, so that objects can be built by chaining "With" function invocations. +// If called multiple times, the TypeChecking field is set to the value of the last call. +func (b *ValidatingAdmissionPolicyStatusApplyConfiguration) WithTypeChecking(value *TypeCheckingApplyConfiguration) *ValidatingAdmissionPolicyStatusApplyConfiguration { + b.TypeChecking = value + return b +} + +// WithConditions adds the given value to the Conditions field in the declarative configuration +// and returns the receiver, so that objects can be build by chaining "With" function invocations. +// If called multiple times, values provided by each call will be appended to the Conditions field. 
+func (b *ValidatingAdmissionPolicyStatusApplyConfiguration) WithConditions(values ...*v1.ConditionApplyConfiguration) *ValidatingAdmissionPolicyStatusApplyConfiguration { + for i := range values { + if values[i] == nil { + panic("nil value passed to WithConditions") + } + b.Conditions = append(b.Conditions, *values[i]) + } + return b +} diff --git a/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/admissionregistration/v1beta1/validation.go b/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/admissionregistration/v1beta1/validation.go new file mode 100644 index 000000000000..ed9ff1ac0c27 --- /dev/null +++ b/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/admissionregistration/v1beta1/validation.go @@ -0,0 +1,70 @@ +/* +Copyright The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +// Code generated by applyconfiguration-gen. DO NOT EDIT. + +package v1beta1 + +import ( + v1 "k8s.io/apimachinery/pkg/apis/meta/v1" +) + +// ValidationApplyConfiguration represents an declarative configuration of the Validation type for use +// with apply. +type ValidationApplyConfiguration struct { + Expression *string `json:"expression,omitempty"` + Message *string `json:"message,omitempty"` + Reason *v1.StatusReason `json:"reason,omitempty"` + MessageExpression *string `json:"messageExpression,omitempty"` +} + +// ValidationApplyConfiguration constructs an declarative configuration of the Validation type for use with +// apply. 
+func Validation() *ValidationApplyConfiguration { + return &ValidationApplyConfiguration{} +} + +// WithExpression sets the Expression field in the declarative configuration to the given value +// and returns the receiver, so that objects can be built by chaining "With" function invocations. +// If called multiple times, the Expression field is set to the value of the last call. +func (b *ValidationApplyConfiguration) WithExpression(value string) *ValidationApplyConfiguration { + b.Expression = &value + return b +} + +// WithMessage sets the Message field in the declarative configuration to the given value +// and returns the receiver, so that objects can be built by chaining "With" function invocations. +// If called multiple times, the Message field is set to the value of the last call. +func (b *ValidationApplyConfiguration) WithMessage(value string) *ValidationApplyConfiguration { + b.Message = &value + return b +} + +// WithReason sets the Reason field in the declarative configuration to the given value +// and returns the receiver, so that objects can be built by chaining "With" function invocations. +// If called multiple times, the Reason field is set to the value of the last call. +func (b *ValidationApplyConfiguration) WithReason(value v1.StatusReason) *ValidationApplyConfiguration { + b.Reason = &value + return b +} + +// WithMessageExpression sets the MessageExpression field in the declarative configuration to the given value +// and returns the receiver, so that objects can be built by chaining "With" function invocations. +// If called multiple times, the MessageExpression field is set to the value of the last call. 
+func (b *ValidationApplyConfiguration) WithMessageExpression(value string) *ValidationApplyConfiguration { + b.MessageExpression = &value + return b +} diff --git a/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/admissionregistration/v1beta1/variable.go b/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/admissionregistration/v1beta1/variable.go new file mode 100644 index 000000000000..0fc294c65d51 --- /dev/null +++ b/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/admissionregistration/v1beta1/variable.go @@ -0,0 +1,48 @@ +/* +Copyright The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +// Code generated by applyconfiguration-gen. DO NOT EDIT. + +package v1beta1 + +// VariableApplyConfiguration represents an declarative configuration of the Variable type for use +// with apply. +type VariableApplyConfiguration struct { + Name *string `json:"name,omitempty"` + Expression *string `json:"expression,omitempty"` +} + +// VariableApplyConfiguration constructs an declarative configuration of the Variable type for use with +// apply. +func Variable() *VariableApplyConfiguration { + return &VariableApplyConfiguration{} +} + +// WithName sets the Name field in the declarative configuration to the given value +// and returns the receiver, so that objects can be built by chaining "With" function invocations. +// If called multiple times, the Name field is set to the value of the last call. 
+func (b *VariableApplyConfiguration) WithName(value string) *VariableApplyConfiguration { + b.Name = &value + return b +} + +// WithExpression sets the Expression field in the declarative configuration to the given value +// and returns the receiver, so that objects can be built by chaining "With" function invocations. +// If called multiple times, the Expression field is set to the value of the last call. +func (b *VariableApplyConfiguration) WithExpression(value string) *VariableApplyConfiguration { + b.Expression = &value + return b +} diff --git a/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/apiserverinternal/v1alpha1/serverstorageversion.go b/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/apiserverinternal/v1alpha1/serverstorageversion.go index d36f7603c784..81c56330bb46 100644 --- a/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/apiserverinternal/v1alpha1/serverstorageversion.go +++ b/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/apiserverinternal/v1alpha1/serverstorageversion.go @@ -24,6 +24,7 @@ type ServerStorageVersionApplyConfiguration struct { APIServerID *string `json:"apiServerID,omitempty"` EncodingVersion *string `json:"encodingVersion,omitempty"` DecodableVersions []string `json:"decodableVersions,omitempty"` + ServedVersions []string `json:"servedVersions,omitempty"` } // ServerStorageVersionApplyConfiguration constructs an declarative configuration of the ServerStorageVersion type for use with @@ -57,3 +58,13 @@ func (b *ServerStorageVersionApplyConfiguration) WithDecodableVersions(values .. } return b } + +// WithServedVersions adds the given value to the ServedVersions field in the declarative configuration +// and returns the receiver, so that objects can be build by chaining "With" function invocations. +// If called multiple times, values provided by each call will be appended to the ServedVersions field. 
+func (b *ServerStorageVersionApplyConfiguration) WithServedVersions(values ...string) *ServerStorageVersionApplyConfiguration { + for i := range values { + b.ServedVersions = append(b.ServedVersions, values[i]) + } + return b +} diff --git a/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/batch/v1/jobspec.go b/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/batch/v1/jobspec.go index 839d88b64ec2..3d46a3ecf9b1 100644 --- a/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/batch/v1/jobspec.go +++ b/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/batch/v1/jobspec.go @@ -32,12 +32,15 @@ type JobSpecApplyConfiguration struct { ActiveDeadlineSeconds *int64 `json:"activeDeadlineSeconds,omitempty"` PodFailurePolicy *PodFailurePolicyApplyConfiguration `json:"podFailurePolicy,omitempty"` BackoffLimit *int32 `json:"backoffLimit,omitempty"` + BackoffLimitPerIndex *int32 `json:"backoffLimitPerIndex,omitempty"` + MaxFailedIndexes *int32 `json:"maxFailedIndexes,omitempty"` Selector *metav1.LabelSelectorApplyConfiguration `json:"selector,omitempty"` ManualSelector *bool `json:"manualSelector,omitempty"` Template *corev1.PodTemplateSpecApplyConfiguration `json:"template,omitempty"` TTLSecondsAfterFinished *int32 `json:"ttlSecondsAfterFinished,omitempty"` CompletionMode *batchv1.CompletionMode `json:"completionMode,omitempty"` Suspend *bool `json:"suspend,omitempty"` + PodReplacementPolicy *batchv1.PodReplacementPolicy `json:"podReplacementPolicy,omitempty"` } // JobSpecApplyConfiguration constructs an declarative configuration of the JobSpec type for use with @@ -86,6 +89,22 @@ func (b *JobSpecApplyConfiguration) WithBackoffLimit(value int32) *JobSpecApplyC return b } +// WithBackoffLimitPerIndex sets the BackoffLimitPerIndex field in the declarative configuration to the given value +// and returns the receiver, so that objects can be built by chaining "With" function invocations. 
+// If called multiple times, the BackoffLimitPerIndex field is set to the value of the last call. +func (b *JobSpecApplyConfiguration) WithBackoffLimitPerIndex(value int32) *JobSpecApplyConfiguration { + b.BackoffLimitPerIndex = &value + return b +} + +// WithMaxFailedIndexes sets the MaxFailedIndexes field in the declarative configuration to the given value +// and returns the receiver, so that objects can be built by chaining "With" function invocations. +// If called multiple times, the MaxFailedIndexes field is set to the value of the last call. +func (b *JobSpecApplyConfiguration) WithMaxFailedIndexes(value int32) *JobSpecApplyConfiguration { + b.MaxFailedIndexes = &value + return b +} + // WithSelector sets the Selector field in the declarative configuration to the given value // and returns the receiver, so that objects can be built by chaining "With" function invocations. // If called multiple times, the Selector field is set to the value of the last call. @@ -133,3 +152,11 @@ func (b *JobSpecApplyConfiguration) WithSuspend(value bool) *JobSpecApplyConfigu b.Suspend = &value return b } + +// WithPodReplacementPolicy sets the PodReplacementPolicy field in the declarative configuration to the given value +// and returns the receiver, so that objects can be built by chaining "With" function invocations. +// If called multiple times, the PodReplacementPolicy field is set to the value of the last call. 
+func (b *JobSpecApplyConfiguration) WithPodReplacementPolicy(value batchv1.PodReplacementPolicy) *JobSpecApplyConfiguration { + b.PodReplacementPolicy = &value + return b +} diff --git a/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/batch/v1/jobstatus.go b/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/batch/v1/jobstatus.go index a36d5d0ae11d..e8e472f8f710 100644 --- a/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/batch/v1/jobstatus.go +++ b/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/batch/v1/jobstatus.go @@ -31,7 +31,9 @@ type JobStatusApplyConfiguration struct { Active *int32 `json:"active,omitempty"` Succeeded *int32 `json:"succeeded,omitempty"` Failed *int32 `json:"failed,omitempty"` + Terminating *int32 `json:"terminating,omitempty"` CompletedIndexes *string `json:"completedIndexes,omitempty"` + FailedIndexes *string `json:"failedIndexes,omitempty"` UncountedTerminatedPods *UncountedTerminatedPodsApplyConfiguration `json:"uncountedTerminatedPods,omitempty"` Ready *int32 `json:"ready,omitempty"` } @@ -95,6 +97,14 @@ func (b *JobStatusApplyConfiguration) WithFailed(value int32) *JobStatusApplyCon return b } +// WithTerminating sets the Terminating field in the declarative configuration to the given value +// and returns the receiver, so that objects can be built by chaining "With" function invocations. +// If called multiple times, the Terminating field is set to the value of the last call. +func (b *JobStatusApplyConfiguration) WithTerminating(value int32) *JobStatusApplyConfiguration { + b.Terminating = &value + return b +} + // WithCompletedIndexes sets the CompletedIndexes field in the declarative configuration to the given value // and returns the receiver, so that objects can be built by chaining "With" function invocations. // If called multiple times, the CompletedIndexes field is set to the value of the last call. 
@@ -103,6 +113,14 @@ func (b *JobStatusApplyConfiguration) WithCompletedIndexes(value string) *JobSta return b } +// WithFailedIndexes sets the FailedIndexes field in the declarative configuration to the given value +// and returns the receiver, so that objects can be built by chaining "With" function invocations. +// If called multiple times, the FailedIndexes field is set to the value of the last call. +func (b *JobStatusApplyConfiguration) WithFailedIndexes(value string) *JobStatusApplyConfiguration { + b.FailedIndexes = &value + return b +} + // WithUncountedTerminatedPods sets the UncountedTerminatedPods field in the declarative configuration to the given value // and returns the receiver, so that objects can be built by chaining "With" function invocations. // If called multiple times, the UncountedTerminatedPods field is set to the value of the last call. diff --git a/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/core/v1/container.go b/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/core/v1/container.go index 9ada59ee20aa..32d715606314 100644 --- a/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/core/v1/container.go +++ b/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/core/v1/container.go @@ -35,6 +35,7 @@ type ContainerApplyConfiguration struct { Env []EnvVarApplyConfiguration `json:"env,omitempty"` Resources *ResourceRequirementsApplyConfiguration `json:"resources,omitempty"` ResizePolicy []ContainerResizePolicyApplyConfiguration `json:"resizePolicy,omitempty"` + RestartPolicy *corev1.ContainerRestartPolicy `json:"restartPolicy,omitempty"` VolumeMounts []VolumeMountApplyConfiguration `json:"volumeMounts,omitempty"` VolumeDevices []VolumeDeviceApplyConfiguration `json:"volumeDevices,omitempty"` LivenessProbe *ProbeApplyConfiguration `json:"livenessProbe,omitempty"` @@ -160,6 +161,14 @@ func (b *ContainerApplyConfiguration) WithResizePolicy(values ...*ContainerResiz return b } +// 
WithRestartPolicy sets the RestartPolicy field in the declarative configuration to the given value +// and returns the receiver, so that objects can be built by chaining "With" function invocations. +// If called multiple times, the RestartPolicy field is set to the value of the last call. +func (b *ContainerApplyConfiguration) WithRestartPolicy(value corev1.ContainerRestartPolicy) *ContainerApplyConfiguration { + b.RestartPolicy = &value + return b +} + // WithVolumeMounts adds the given value to the VolumeMounts field in the declarative configuration // and returns the receiver, so that objects can be build by chaining "With" function invocations. // If called multiple times, values provided by each call will be appended to the VolumeMounts field. diff --git a/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/core/v1/ephemeralcontainer.go b/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/core/v1/ephemeralcontainer.go index c51049ba1f25..5fa79a246ec8 100644 --- a/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/core/v1/ephemeralcontainer.go +++ b/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/core/v1/ephemeralcontainer.go @@ -139,6 +139,14 @@ func (b *EphemeralContainerApplyConfiguration) WithResizePolicy(values ...*Conta return b } +// WithRestartPolicy sets the RestartPolicy field in the declarative configuration to the given value +// and returns the receiver, so that objects can be built by chaining "With" function invocations. +// If called multiple times, the RestartPolicy field is set to the value of the last call. +func (b *EphemeralContainerApplyConfiguration) WithRestartPolicy(value corev1.ContainerRestartPolicy) *EphemeralContainerApplyConfiguration { + b.RestartPolicy = &value + return b +} + // WithVolumeMounts adds the given value to the VolumeMounts field in the declarative configuration // and returns the receiver, so that objects can be build by chaining "With" function invocations. 
// If called multiple times, values provided by each call will be appended to the VolumeMounts field. diff --git a/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/core/v1/ephemeralcontainercommon.go b/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/core/v1/ephemeralcontainercommon.go index 764b830e0498..8cded29a9ecb 100644 --- a/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/core/v1/ephemeralcontainercommon.go +++ b/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/core/v1/ephemeralcontainercommon.go @@ -35,6 +35,7 @@ type EphemeralContainerCommonApplyConfiguration struct { Env []EnvVarApplyConfiguration `json:"env,omitempty"` Resources *ResourceRequirementsApplyConfiguration `json:"resources,omitempty"` ResizePolicy []ContainerResizePolicyApplyConfiguration `json:"resizePolicy,omitempty"` + RestartPolicy *corev1.ContainerRestartPolicy `json:"restartPolicy,omitempty"` VolumeMounts []VolumeMountApplyConfiguration `json:"volumeMounts,omitempty"` VolumeDevices []VolumeDeviceApplyConfiguration `json:"volumeDevices,omitempty"` LivenessProbe *ProbeApplyConfiguration `json:"livenessProbe,omitempty"` @@ -160,6 +161,14 @@ func (b *EphemeralContainerCommonApplyConfiguration) WithResizePolicy(values ... return b } +// WithRestartPolicy sets the RestartPolicy field in the declarative configuration to the given value +// and returns the receiver, so that objects can be built by chaining "With" function invocations. +// If called multiple times, the RestartPolicy field is set to the value of the last call. +func (b *EphemeralContainerCommonApplyConfiguration) WithRestartPolicy(value corev1.ContainerRestartPolicy) *EphemeralContainerCommonApplyConfiguration { + b.RestartPolicy = &value + return b +} + // WithVolumeMounts adds the given value to the VolumeMounts field in the declarative configuration // and returns the receiver, so that objects can be build by chaining "With" function invocations. 
// If called multiple times, values provided by each call will be appended to the VolumeMounts field. diff --git a/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/core/v1/hostip.go b/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/core/v1/hostip.go new file mode 100644 index 000000000000..c2a42cf74714 --- /dev/null +++ b/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/core/v1/hostip.go @@ -0,0 +1,39 @@ +/* +Copyright The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +// Code generated by applyconfiguration-gen. DO NOT EDIT. + +package v1 + +// HostIPApplyConfiguration represents an declarative configuration of the HostIP type for use +// with apply. +type HostIPApplyConfiguration struct { + IP *string `json:"ip,omitempty"` +} + +// HostIPApplyConfiguration constructs an declarative configuration of the HostIP type for use with +// apply. +func HostIP() *HostIPApplyConfiguration { + return &HostIPApplyConfiguration{} +} + +// WithIP sets the IP field in the declarative configuration to the given value +// and returns the receiver, so that objects can be built by chaining "With" function invocations. +// If called multiple times, the IP field is set to the value of the last call. 
+func (b *HostIPApplyConfiguration) WithIP(value string) *HostIPApplyConfiguration { + b.IP = &value + return b +} diff --git a/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/core/v1/persistentvolumeclaimstatus.go b/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/core/v1/persistentvolumeclaimstatus.go index 4c38d89f5739..c29b2a9a155d 100644 --- a/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/core/v1/persistentvolumeclaimstatus.go +++ b/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/core/v1/persistentvolumeclaimstatus.go @@ -25,12 +25,12 @@ import ( // PersistentVolumeClaimStatusApplyConfiguration represents an declarative configuration of the PersistentVolumeClaimStatus type for use // with apply. type PersistentVolumeClaimStatusApplyConfiguration struct { - Phase *v1.PersistentVolumeClaimPhase `json:"phase,omitempty"` - AccessModes []v1.PersistentVolumeAccessMode `json:"accessModes,omitempty"` - Capacity *v1.ResourceList `json:"capacity,omitempty"` - Conditions []PersistentVolumeClaimConditionApplyConfiguration `json:"conditions,omitempty"` - AllocatedResources *v1.ResourceList `json:"allocatedResources,omitempty"` - ResizeStatus *v1.PersistentVolumeClaimResizeStatus `json:"resizeStatus,omitempty"` + Phase *v1.PersistentVolumeClaimPhase `json:"phase,omitempty"` + AccessModes []v1.PersistentVolumeAccessMode `json:"accessModes,omitempty"` + Capacity *v1.ResourceList `json:"capacity,omitempty"` + Conditions []PersistentVolumeClaimConditionApplyConfiguration `json:"conditions,omitempty"` + AllocatedResources *v1.ResourceList `json:"allocatedResources,omitempty"` + AllocatedResourceStatuses map[v1.ResourceName]v1.ClaimResourceStatus `json:"allocatedResourceStatuses,omitempty"` } // PersistentVolumeClaimStatusApplyConfiguration constructs an declarative configuration of the PersistentVolumeClaimStatus type for use with @@ -86,10 +86,16 @@ func (b *PersistentVolumeClaimStatusApplyConfiguration) 
WithAllocatedResources(v return b } -// WithResizeStatus sets the ResizeStatus field in the declarative configuration to the given value -// and returns the receiver, so that objects can be built by chaining "With" function invocations. -// If called multiple times, the ResizeStatus field is set to the value of the last call. -func (b *PersistentVolumeClaimStatusApplyConfiguration) WithResizeStatus(value v1.PersistentVolumeClaimResizeStatus) *PersistentVolumeClaimStatusApplyConfiguration { - b.ResizeStatus = &value +// WithAllocatedResourceStatuses puts the entries into the AllocatedResourceStatuses field in the declarative configuration +// and returns the receiver, so that objects can be build by chaining "With" function invocations. +// If called multiple times, the entries provided by each call will be put on the AllocatedResourceStatuses field, +// overwriting an existing map entries in AllocatedResourceStatuses field with the same key. +func (b *PersistentVolumeClaimStatusApplyConfiguration) WithAllocatedResourceStatuses(entries map[v1.ResourceName]v1.ClaimResourceStatus) *PersistentVolumeClaimStatusApplyConfiguration { + if b.AllocatedResourceStatuses == nil && len(entries) > 0 { + b.AllocatedResourceStatuses = make(map[v1.ResourceName]v1.ClaimResourceStatus, len(entries)) + } + for k, v := range entries { + b.AllocatedResourceStatuses[k] = v + } return b } diff --git a/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/core/v1/persistentvolumestatus.go b/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/core/v1/persistentvolumestatus.go index f7048dec4eb3..a473c0e927f4 100644 --- a/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/core/v1/persistentvolumestatus.go +++ b/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/core/v1/persistentvolumestatus.go @@ -20,14 +20,16 @@ package v1 import ( v1 "k8s.io/api/core/v1" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" ) // PersistentVolumeStatusApplyConfiguration 
represents an declarative configuration of the PersistentVolumeStatus type for use // with apply. type PersistentVolumeStatusApplyConfiguration struct { - Phase *v1.PersistentVolumePhase `json:"phase,omitempty"` - Message *string `json:"message,omitempty"` - Reason *string `json:"reason,omitempty"` + Phase *v1.PersistentVolumePhase `json:"phase,omitempty"` + Message *string `json:"message,omitempty"` + Reason *string `json:"reason,omitempty"` + LastPhaseTransitionTime *metav1.Time `json:"lastPhaseTransitionTime,omitempty"` } // PersistentVolumeStatusApplyConfiguration constructs an declarative configuration of the PersistentVolumeStatus type for use with @@ -59,3 +61,11 @@ func (b *PersistentVolumeStatusApplyConfiguration) WithReason(value string) *Per b.Reason = &value return b } + +// WithLastPhaseTransitionTime sets the LastPhaseTransitionTime field in the declarative configuration to the given value +// and returns the receiver, so that objects can be built by chaining "With" function invocations. +// If called multiple times, the LastPhaseTransitionTime field is set to the value of the last call. +func (b *PersistentVolumeStatusApplyConfiguration) WithLastPhaseTransitionTime(value metav1.Time) *PersistentVolumeStatusApplyConfiguration { + b.LastPhaseTransitionTime = &value + return b +} diff --git a/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/core/v1/podresourceclaimstatus.go b/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/core/v1/podresourceclaimstatus.go new file mode 100644 index 000000000000..ae79ca01b76a --- /dev/null +++ b/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/core/v1/podresourceclaimstatus.go @@ -0,0 +1,48 @@ +/* +Copyright The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. 
+You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +// Code generated by applyconfiguration-gen. DO NOT EDIT. + +package v1 + +// PodResourceClaimStatusApplyConfiguration represents an declarative configuration of the PodResourceClaimStatus type for use +// with apply. +type PodResourceClaimStatusApplyConfiguration struct { + Name *string `json:"name,omitempty"` + ResourceClaimName *string `json:"resourceClaimName,omitempty"` +} + +// PodResourceClaimStatusApplyConfiguration constructs an declarative configuration of the PodResourceClaimStatus type for use with +// apply. +func PodResourceClaimStatus() *PodResourceClaimStatusApplyConfiguration { + return &PodResourceClaimStatusApplyConfiguration{} +} + +// WithName sets the Name field in the declarative configuration to the given value +// and returns the receiver, so that objects can be built by chaining "With" function invocations. +// If called multiple times, the Name field is set to the value of the last call. +func (b *PodResourceClaimStatusApplyConfiguration) WithName(value string) *PodResourceClaimStatusApplyConfiguration { + b.Name = &value + return b +} + +// WithResourceClaimName sets the ResourceClaimName field in the declarative configuration to the given value +// and returns the receiver, so that objects can be built by chaining "With" function invocations. +// If called multiple times, the ResourceClaimName field is set to the value of the last call. 
+func (b *PodResourceClaimStatusApplyConfiguration) WithResourceClaimName(value string) *PodResourceClaimStatusApplyConfiguration { + b.ResourceClaimName = &value + return b +} diff --git a/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/core/v1/podstatus.go b/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/core/v1/podstatus.go index e9d8e5b28f25..1a58ab6be2d0 100644 --- a/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/core/v1/podstatus.go +++ b/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/core/v1/podstatus.go @@ -26,20 +26,22 @@ import ( // PodStatusApplyConfiguration represents an declarative configuration of the PodStatus type for use // with apply. type PodStatusApplyConfiguration struct { - Phase *v1.PodPhase `json:"phase,omitempty"` - Conditions []PodConditionApplyConfiguration `json:"conditions,omitempty"` - Message *string `json:"message,omitempty"` - Reason *string `json:"reason,omitempty"` - NominatedNodeName *string `json:"nominatedNodeName,omitempty"` - HostIP *string `json:"hostIP,omitempty"` - PodIP *string `json:"podIP,omitempty"` - PodIPs []PodIPApplyConfiguration `json:"podIPs,omitempty"` - StartTime *metav1.Time `json:"startTime,omitempty"` - InitContainerStatuses []ContainerStatusApplyConfiguration `json:"initContainerStatuses,omitempty"` - ContainerStatuses []ContainerStatusApplyConfiguration `json:"containerStatuses,omitempty"` - QOSClass *v1.PodQOSClass `json:"qosClass,omitempty"` - EphemeralContainerStatuses []ContainerStatusApplyConfiguration `json:"ephemeralContainerStatuses,omitempty"` - Resize *v1.PodResizeStatus `json:"resize,omitempty"` + Phase *v1.PodPhase `json:"phase,omitempty"` + Conditions []PodConditionApplyConfiguration `json:"conditions,omitempty"` + Message *string `json:"message,omitempty"` + Reason *string `json:"reason,omitempty"` + NominatedNodeName *string `json:"nominatedNodeName,omitempty"` + HostIP *string `json:"hostIP,omitempty"` + HostIPs 
[]HostIPApplyConfiguration `json:"hostIPs,omitempty"` + PodIP *string `json:"podIP,omitempty"` + PodIPs []PodIPApplyConfiguration `json:"podIPs,omitempty"` + StartTime *metav1.Time `json:"startTime,omitempty"` + InitContainerStatuses []ContainerStatusApplyConfiguration `json:"initContainerStatuses,omitempty"` + ContainerStatuses []ContainerStatusApplyConfiguration `json:"containerStatuses,omitempty"` + QOSClass *v1.PodQOSClass `json:"qosClass,omitempty"` + EphemeralContainerStatuses []ContainerStatusApplyConfiguration `json:"ephemeralContainerStatuses,omitempty"` + Resize *v1.PodResizeStatus `json:"resize,omitempty"` + ResourceClaimStatuses []PodResourceClaimStatusApplyConfiguration `json:"resourceClaimStatuses,omitempty"` } // PodStatusApplyConfiguration constructs an declarative configuration of the PodStatus type for use with @@ -101,6 +103,19 @@ func (b *PodStatusApplyConfiguration) WithHostIP(value string) *PodStatusApplyCo return b } +// WithHostIPs adds the given value to the HostIPs field in the declarative configuration +// and returns the receiver, so that objects can be build by chaining "With" function invocations. +// If called multiple times, values provided by each call will be appended to the HostIPs field. +func (b *PodStatusApplyConfiguration) WithHostIPs(values ...*HostIPApplyConfiguration) *PodStatusApplyConfiguration { + for i := range values { + if values[i] == nil { + panic("nil value passed to WithHostIPs") + } + b.HostIPs = append(b.HostIPs, *values[i]) + } + return b +} + // WithPodIP sets the PodIP field in the declarative configuration to the given value // and returns the receiver, so that objects can be built by chaining "With" function invocations. // If called multiple times, the PodIP field is set to the value of the last call. 
@@ -184,3 +199,16 @@ func (b *PodStatusApplyConfiguration) WithResize(value v1.PodResizeStatus) *PodS b.Resize = &value return b } + +// WithResourceClaimStatuses adds the given value to the ResourceClaimStatuses field in the declarative configuration +// and returns the receiver, so that objects can be build by chaining "With" function invocations. +// If called multiple times, values provided by each call will be appended to the ResourceClaimStatuses field. +func (b *PodStatusApplyConfiguration) WithResourceClaimStatuses(values ...*PodResourceClaimStatusApplyConfiguration) *PodStatusApplyConfiguration { + for i := range values { + if values[i] == nil { + panic("nil value passed to WithResourceClaimStatuses") + } + b.ResourceClaimStatuses = append(b.ResourceClaimStatuses, *values[i]) + } + return b +} diff --git a/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/extensions/v1beta1/networkpolicy.go b/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/extensions/v1beta1/networkpolicy.go index 81c84d2d46f0..27ea5d9dde94 100644 --- a/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/extensions/v1beta1/networkpolicy.go +++ b/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/extensions/v1beta1/networkpolicy.go @@ -32,8 +32,7 @@ import ( type NetworkPolicyApplyConfiguration struct { v1.TypeMetaApplyConfiguration `json:",inline"` *v1.ObjectMetaApplyConfiguration `json:"metadata,omitempty"` - Spec *NetworkPolicySpecApplyConfiguration `json:"spec,omitempty"` - Status *NetworkPolicyStatusApplyConfiguration `json:"status,omitempty"` + Spec *NetworkPolicySpecApplyConfiguration `json:"spec,omitempty"` } // NetworkPolicy constructs an declarative configuration of the NetworkPolicy type for use with @@ -248,11 +247,3 @@ func (b *NetworkPolicyApplyConfiguration) WithSpec(value *NetworkPolicySpecApply b.Spec = value return b } - -// WithStatus sets the Status field in the declarative configuration to the given value -// and 
returns the receiver, so that objects can be built by chaining "With" function invocations. -// If called multiple times, the Status field is set to the value of the last call. -func (b *NetworkPolicyApplyConfiguration) WithStatus(value *NetworkPolicyStatusApplyConfiguration) *NetworkPolicyApplyConfiguration { - b.Status = value - return b -} diff --git a/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/extensions/v1beta1/networkpolicystatus.go b/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/extensions/v1beta1/networkpolicystatus.go deleted file mode 100644 index 99c89b09b092..000000000000 --- a/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/extensions/v1beta1/networkpolicystatus.go +++ /dev/null @@ -1,48 +0,0 @@ -/* -Copyright The Kubernetes Authors. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -// Code generated by applyconfiguration-gen. DO NOT EDIT. - -package v1beta1 - -import ( - v1 "k8s.io/client-go/applyconfigurations/meta/v1" -) - -// NetworkPolicyStatusApplyConfiguration represents an declarative configuration of the NetworkPolicyStatus type for use -// with apply. -type NetworkPolicyStatusApplyConfiguration struct { - Conditions []v1.ConditionApplyConfiguration `json:"conditions,omitempty"` -} - -// NetworkPolicyStatusApplyConfiguration constructs an declarative configuration of the NetworkPolicyStatus type for use with -// apply. 
-func NetworkPolicyStatus() *NetworkPolicyStatusApplyConfiguration { - return &NetworkPolicyStatusApplyConfiguration{} -} - -// WithConditions adds the given value to the Conditions field in the declarative configuration -// and returns the receiver, so that objects can be build by chaining "With" function invocations. -// If called multiple times, values provided by each call will be appended to the Conditions field. -func (b *NetworkPolicyStatusApplyConfiguration) WithConditions(values ...*v1.ConditionApplyConfiguration) *NetworkPolicyStatusApplyConfiguration { - for i := range values { - if values[i] == nil { - panic("nil value passed to WithConditions") - } - b.Conditions = append(b.Conditions, *values[i]) - } - return b -} diff --git a/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/flowcontrol/v1alpha1/exemptprioritylevelconfiguration.go b/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/flowcontrol/v1alpha1/exemptprioritylevelconfiguration.go new file mode 100644 index 000000000000..3535d7478776 --- /dev/null +++ b/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/flowcontrol/v1alpha1/exemptprioritylevelconfiguration.go @@ -0,0 +1,48 @@ +/* +Copyright The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +// Code generated by applyconfiguration-gen. DO NOT EDIT. 
+ +package v1alpha1 + +// ExemptPriorityLevelConfigurationApplyConfiguration represents an declarative configuration of the ExemptPriorityLevelConfiguration type for use +// with apply. +type ExemptPriorityLevelConfigurationApplyConfiguration struct { + NominalConcurrencyShares *int32 `json:"nominalConcurrencyShares,omitempty"` + LendablePercent *int32 `json:"lendablePercent,omitempty"` +} + +// ExemptPriorityLevelConfigurationApplyConfiguration constructs an declarative configuration of the ExemptPriorityLevelConfiguration type for use with +// apply. +func ExemptPriorityLevelConfiguration() *ExemptPriorityLevelConfigurationApplyConfiguration { + return &ExemptPriorityLevelConfigurationApplyConfiguration{} +} + +// WithNominalConcurrencyShares sets the NominalConcurrencyShares field in the declarative configuration to the given value +// and returns the receiver, so that objects can be built by chaining "With" function invocations. +// If called multiple times, the NominalConcurrencyShares field is set to the value of the last call. +func (b *ExemptPriorityLevelConfigurationApplyConfiguration) WithNominalConcurrencyShares(value int32) *ExemptPriorityLevelConfigurationApplyConfiguration { + b.NominalConcurrencyShares = &value + return b +} + +// WithLendablePercent sets the LendablePercent field in the declarative configuration to the given value +// and returns the receiver, so that objects can be built by chaining "With" function invocations. +// If called multiple times, the LendablePercent field is set to the value of the last call. 
+func (b *ExemptPriorityLevelConfigurationApplyConfiguration) WithLendablePercent(value int32) *ExemptPriorityLevelConfigurationApplyConfiguration { + b.LendablePercent = &value + return b +} diff --git a/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/flowcontrol/v1alpha1/prioritylevelconfigurationspec.go b/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/flowcontrol/v1alpha1/prioritylevelconfigurationspec.go index 3949dee46dc6..ade920a75562 100644 --- a/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/flowcontrol/v1alpha1/prioritylevelconfigurationspec.go +++ b/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/flowcontrol/v1alpha1/prioritylevelconfigurationspec.go @@ -27,6 +27,7 @@ import ( type PriorityLevelConfigurationSpecApplyConfiguration struct { Type *v1alpha1.PriorityLevelEnablement `json:"type,omitempty"` Limited *LimitedPriorityLevelConfigurationApplyConfiguration `json:"limited,omitempty"` + Exempt *ExemptPriorityLevelConfigurationApplyConfiguration `json:"exempt,omitempty"` } // PriorityLevelConfigurationSpecApplyConfiguration constructs an declarative configuration of the PriorityLevelConfigurationSpec type for use with @@ -50,3 +51,11 @@ func (b *PriorityLevelConfigurationSpecApplyConfiguration) WithLimited(value *Li b.Limited = value return b } + +// WithExempt sets the Exempt field in the declarative configuration to the given value +// and returns the receiver, so that objects can be built by chaining "With" function invocations. +// If called multiple times, the Exempt field is set to the value of the last call. 
+func (b *PriorityLevelConfigurationSpecApplyConfiguration) WithExempt(value *ExemptPriorityLevelConfigurationApplyConfiguration) *PriorityLevelConfigurationSpecApplyConfiguration { + b.Exempt = value + return b +} diff --git a/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/flowcontrol/v1beta1/exemptprioritylevelconfiguration.go b/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/flowcontrol/v1beta1/exemptprioritylevelconfiguration.go new file mode 100644 index 000000000000..071048090081 --- /dev/null +++ b/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/flowcontrol/v1beta1/exemptprioritylevelconfiguration.go @@ -0,0 +1,48 @@ +/* +Copyright The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +// Code generated by applyconfiguration-gen. DO NOT EDIT. + +package v1beta1 + +// ExemptPriorityLevelConfigurationApplyConfiguration represents an declarative configuration of the ExemptPriorityLevelConfiguration type for use +// with apply. +type ExemptPriorityLevelConfigurationApplyConfiguration struct { + NominalConcurrencyShares *int32 `json:"nominalConcurrencyShares,omitempty"` + LendablePercent *int32 `json:"lendablePercent,omitempty"` +} + +// ExemptPriorityLevelConfigurationApplyConfiguration constructs an declarative configuration of the ExemptPriorityLevelConfiguration type for use with +// apply. 
+func ExemptPriorityLevelConfiguration() *ExemptPriorityLevelConfigurationApplyConfiguration { + return &ExemptPriorityLevelConfigurationApplyConfiguration{} +} + +// WithNominalConcurrencyShares sets the NominalConcurrencyShares field in the declarative configuration to the given value +// and returns the receiver, so that objects can be built by chaining "With" function invocations. +// If called multiple times, the NominalConcurrencyShares field is set to the value of the last call. +func (b *ExemptPriorityLevelConfigurationApplyConfiguration) WithNominalConcurrencyShares(value int32) *ExemptPriorityLevelConfigurationApplyConfiguration { + b.NominalConcurrencyShares = &value + return b +} + +// WithLendablePercent sets the LendablePercent field in the declarative configuration to the given value +// and returns the receiver, so that objects can be built by chaining "With" function invocations. +// If called multiple times, the LendablePercent field is set to the value of the last call. 
+func (b *ExemptPriorityLevelConfigurationApplyConfiguration) WithLendablePercent(value int32) *ExemptPriorityLevelConfigurationApplyConfiguration { + b.LendablePercent = &value + return b +} diff --git a/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/flowcontrol/v1beta1/prioritylevelconfigurationspec.go b/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/flowcontrol/v1beta1/prioritylevelconfigurationspec.go index 8ed4e399f88b..19146d9f668a 100644 --- a/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/flowcontrol/v1beta1/prioritylevelconfigurationspec.go +++ b/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/flowcontrol/v1beta1/prioritylevelconfigurationspec.go @@ -27,6 +27,7 @@ import ( type PriorityLevelConfigurationSpecApplyConfiguration struct { Type *v1beta1.PriorityLevelEnablement `json:"type,omitempty"` Limited *LimitedPriorityLevelConfigurationApplyConfiguration `json:"limited,omitempty"` + Exempt *ExemptPriorityLevelConfigurationApplyConfiguration `json:"exempt,omitempty"` } // PriorityLevelConfigurationSpecApplyConfiguration constructs an declarative configuration of the PriorityLevelConfigurationSpec type for use with @@ -50,3 +51,11 @@ func (b *PriorityLevelConfigurationSpecApplyConfiguration) WithLimited(value *Li b.Limited = value return b } + +// WithExempt sets the Exempt field in the declarative configuration to the given value +// and returns the receiver, so that objects can be built by chaining "With" function invocations. +// If called multiple times, the Exempt field is set to the value of the last call. 
+func (b *PriorityLevelConfigurationSpecApplyConfiguration) WithExempt(value *ExemptPriorityLevelConfigurationApplyConfiguration) *PriorityLevelConfigurationSpecApplyConfiguration { + b.Exempt = value + return b +} diff --git a/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/flowcontrol/v1beta2/exemptprioritylevelconfiguration.go b/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/flowcontrol/v1beta2/exemptprioritylevelconfiguration.go new file mode 100644 index 000000000000..d6bc330fe7a3 --- /dev/null +++ b/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/flowcontrol/v1beta2/exemptprioritylevelconfiguration.go @@ -0,0 +1,48 @@ +/* +Copyright The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +// Code generated by applyconfiguration-gen. DO NOT EDIT. + +package v1beta2 + +// ExemptPriorityLevelConfigurationApplyConfiguration represents an declarative configuration of the ExemptPriorityLevelConfiguration type for use +// with apply. +type ExemptPriorityLevelConfigurationApplyConfiguration struct { + NominalConcurrencyShares *int32 `json:"nominalConcurrencyShares,omitempty"` + LendablePercent *int32 `json:"lendablePercent,omitempty"` +} + +// ExemptPriorityLevelConfigurationApplyConfiguration constructs an declarative configuration of the ExemptPriorityLevelConfiguration type for use with +// apply. 
+func ExemptPriorityLevelConfiguration() *ExemptPriorityLevelConfigurationApplyConfiguration { + return &ExemptPriorityLevelConfigurationApplyConfiguration{} +} + +// WithNominalConcurrencyShares sets the NominalConcurrencyShares field in the declarative configuration to the given value +// and returns the receiver, so that objects can be built by chaining "With" function invocations. +// If called multiple times, the NominalConcurrencyShares field is set to the value of the last call. +func (b *ExemptPriorityLevelConfigurationApplyConfiguration) WithNominalConcurrencyShares(value int32) *ExemptPriorityLevelConfigurationApplyConfiguration { + b.NominalConcurrencyShares = &value + return b +} + +// WithLendablePercent sets the LendablePercent field in the declarative configuration to the given value +// and returns the receiver, so that objects can be built by chaining "With" function invocations. +// If called multiple times, the LendablePercent field is set to the value of the last call. 
+func (b *ExemptPriorityLevelConfigurationApplyConfiguration) WithLendablePercent(value int32) *ExemptPriorityLevelConfigurationApplyConfiguration { + b.LendablePercent = &value + return b +} diff --git a/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/flowcontrol/v1beta2/prioritylevelconfigurationspec.go b/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/flowcontrol/v1beta2/prioritylevelconfigurationspec.go index 5560ed9e567c..994a8a16a225 100644 --- a/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/flowcontrol/v1beta2/prioritylevelconfigurationspec.go +++ b/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/flowcontrol/v1beta2/prioritylevelconfigurationspec.go @@ -27,6 +27,7 @@ import ( type PriorityLevelConfigurationSpecApplyConfiguration struct { Type *v1beta2.PriorityLevelEnablement `json:"type,omitempty"` Limited *LimitedPriorityLevelConfigurationApplyConfiguration `json:"limited,omitempty"` + Exempt *ExemptPriorityLevelConfigurationApplyConfiguration `json:"exempt,omitempty"` } // PriorityLevelConfigurationSpecApplyConfiguration constructs an declarative configuration of the PriorityLevelConfigurationSpec type for use with @@ -50,3 +51,11 @@ func (b *PriorityLevelConfigurationSpecApplyConfiguration) WithLimited(value *Li b.Limited = value return b } + +// WithExempt sets the Exempt field in the declarative configuration to the given value +// and returns the receiver, so that objects can be built by chaining "With" function invocations. +// If called multiple times, the Exempt field is set to the value of the last call. 
+func (b *PriorityLevelConfigurationSpecApplyConfiguration) WithExempt(value *ExemptPriorityLevelConfigurationApplyConfiguration) *PriorityLevelConfigurationSpecApplyConfiguration { + b.Exempt = value + return b +} diff --git a/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/flowcontrol/v1beta3/exemptprioritylevelconfiguration.go b/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/flowcontrol/v1beta3/exemptprioritylevelconfiguration.go new file mode 100644 index 000000000000..b03c11d0d9e6 --- /dev/null +++ b/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/flowcontrol/v1beta3/exemptprioritylevelconfiguration.go @@ -0,0 +1,48 @@ +/* +Copyright The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +// Code generated by applyconfiguration-gen. DO NOT EDIT. + +package v1beta3 + +// ExemptPriorityLevelConfigurationApplyConfiguration represents an declarative configuration of the ExemptPriorityLevelConfiguration type for use +// with apply. +type ExemptPriorityLevelConfigurationApplyConfiguration struct { + NominalConcurrencyShares *int32 `json:"nominalConcurrencyShares,omitempty"` + LendablePercent *int32 `json:"lendablePercent,omitempty"` +} + +// ExemptPriorityLevelConfigurationApplyConfiguration constructs an declarative configuration of the ExemptPriorityLevelConfiguration type for use with +// apply. 
+func ExemptPriorityLevelConfiguration() *ExemptPriorityLevelConfigurationApplyConfiguration { + return &ExemptPriorityLevelConfigurationApplyConfiguration{} +} + +// WithNominalConcurrencyShares sets the NominalConcurrencyShares field in the declarative configuration to the given value +// and returns the receiver, so that objects can be built by chaining "With" function invocations. +// If called multiple times, the NominalConcurrencyShares field is set to the value of the last call. +func (b *ExemptPriorityLevelConfigurationApplyConfiguration) WithNominalConcurrencyShares(value int32) *ExemptPriorityLevelConfigurationApplyConfiguration { + b.NominalConcurrencyShares = &value + return b +} + +// WithLendablePercent sets the LendablePercent field in the declarative configuration to the given value +// and returns the receiver, so that objects can be built by chaining "With" function invocations. +// If called multiple times, the LendablePercent field is set to the value of the last call. 
+func (b *ExemptPriorityLevelConfigurationApplyConfiguration) WithLendablePercent(value int32) *ExemptPriorityLevelConfigurationApplyConfiguration { + b.LendablePercent = &value + return b +} diff --git a/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/flowcontrol/v1beta3/prioritylevelconfigurationspec.go b/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/flowcontrol/v1beta3/prioritylevelconfigurationspec.go index f67f39445568..5b0680d912b4 100644 --- a/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/flowcontrol/v1beta3/prioritylevelconfigurationspec.go +++ b/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/flowcontrol/v1beta3/prioritylevelconfigurationspec.go @@ -27,6 +27,7 @@ import ( type PriorityLevelConfigurationSpecApplyConfiguration struct { Type *v1beta3.PriorityLevelEnablement `json:"type,omitempty"` Limited *LimitedPriorityLevelConfigurationApplyConfiguration `json:"limited,omitempty"` + Exempt *ExemptPriorityLevelConfigurationApplyConfiguration `json:"exempt,omitempty"` } // PriorityLevelConfigurationSpecApplyConfiguration constructs an declarative configuration of the PriorityLevelConfigurationSpec type for use with @@ -50,3 +51,11 @@ func (b *PriorityLevelConfigurationSpecApplyConfiguration) WithLimited(value *Li b.Limited = value return b } + +// WithExempt sets the Exempt field in the declarative configuration to the given value +// and returns the receiver, so that objects can be built by chaining "With" function invocations. +// If called multiple times, the Exempt field is set to the value of the last call. 
+func (b *PriorityLevelConfigurationSpecApplyConfiguration) WithExempt(value *ExemptPriorityLevelConfigurationApplyConfiguration) *PriorityLevelConfigurationSpecApplyConfiguration { + b.Exempt = value + return b +} diff --git a/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/internal/internal.go b/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/internal/internal.go index 361b2f4e8555..3ed553662f69 100644 --- a/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/internal/internal.go +++ b/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/internal/internal.go @@ -366,6 +366,12 @@ var schemaYAML = typed.YAMLObject(`types: - name: namespace type: scalar: string + - name: parameterNotFoundAction + type: + scalar: string + - name: selector + type: + namedType: io.k8s.apimachinery.pkg.apis.meta.v1.LabelSelector elementRelationship: atomic - name: io.k8s.api.admissionregistration.v1alpha1.TypeChecking map: @@ -464,6 +470,14 @@ var schemaYAML = typed.YAMLObject(`types: elementType: namedType: io.k8s.api.admissionregistration.v1alpha1.Validation elementRelationship: atomic + - name: variables + type: + list: + elementType: + namedType: io.k8s.api.admissionregistration.v1alpha1.Variable + elementRelationship: associative + keys: + - name - name: io.k8s.api.admissionregistration.v1alpha1.ValidatingAdmissionPolicyStatus map: fields: @@ -497,6 +511,39 @@ var schemaYAML = typed.YAMLObject(`types: - name: reason type: scalar: string +- name: io.k8s.api.admissionregistration.v1alpha1.Variable + map: + fields: + - name: expression + type: + scalar: string + default: "" + - name: name + type: + scalar: string + default: "" +- name: io.k8s.api.admissionregistration.v1beta1.AuditAnnotation + map: + fields: + - name: key + type: + scalar: string + default: "" + - name: valueExpression + type: + scalar: string + default: "" +- name: io.k8s.api.admissionregistration.v1beta1.ExpressionWarning + map: + fields: + - name: fieldRef + type: 
+ scalar: string + default: "" + - name: warning + type: + scalar: string + default: "" - name: io.k8s.api.admissionregistration.v1beta1.MatchCondition map: fields: @@ -508,6 +555,31 @@ var schemaYAML = typed.YAMLObject(`types: type: scalar: string default: "" +- name: io.k8s.api.admissionregistration.v1beta1.MatchResources + map: + fields: + - name: excludeResourceRules + type: + list: + elementType: + namedType: io.k8s.api.admissionregistration.v1beta1.NamedRuleWithOperations + elementRelationship: atomic + - name: matchPolicy + type: + scalar: string + - name: namespaceSelector + type: + namedType: io.k8s.apimachinery.pkg.apis.meta.v1.LabelSelector + - name: objectSelector + type: + namedType: io.k8s.apimachinery.pkg.apis.meta.v1.LabelSelector + - name: resourceRules + type: + list: + elementType: + namedType: io.k8s.api.admissionregistration.v1beta1.NamedRuleWithOperations + elementRelationship: atomic + elementRelationship: atomic - name: io.k8s.api.admissionregistration.v1beta1.MutatingWebhook map: fields: @@ -581,6 +653,69 @@ var schemaYAML = typed.YAMLObject(`types: elementRelationship: associative keys: - name +- name: io.k8s.api.admissionregistration.v1beta1.NamedRuleWithOperations + map: + fields: + - name: apiGroups + type: + list: + elementType: + scalar: string + elementRelationship: atomic + - name: apiVersions + type: + list: + elementType: + scalar: string + elementRelationship: atomic + - name: operations + type: + list: + elementType: + scalar: string + elementRelationship: atomic + - name: resourceNames + type: + list: + elementType: + scalar: string + elementRelationship: atomic + - name: resources + type: + list: + elementType: + scalar: string + elementRelationship: atomic + - name: scope + type: + scalar: string + elementRelationship: atomic +- name: io.k8s.api.admissionregistration.v1beta1.ParamKind + map: + fields: + - name: apiVersion + type: + scalar: string + - name: kind + type: + scalar: string + elementRelationship: atomic +- name: 
io.k8s.api.admissionregistration.v1beta1.ParamRef + map: + fields: + - name: name + type: + scalar: string + - name: namespace + type: + scalar: string + - name: parameterNotFoundAction + type: + scalar: string + - name: selector + type: + namedType: io.k8s.apimachinery.pkg.apis.meta.v1.LabelSelector + elementRelationship: atomic - name: io.k8s.api.admissionregistration.v1beta1.ServiceReference map: fields: @@ -598,6 +733,128 @@ var schemaYAML = typed.YAMLObject(`types: - name: port type: scalar: numeric +- name: io.k8s.api.admissionregistration.v1beta1.TypeChecking + map: + fields: + - name: expressionWarnings + type: + list: + elementType: + namedType: io.k8s.api.admissionregistration.v1beta1.ExpressionWarning + elementRelationship: atomic +- name: io.k8s.api.admissionregistration.v1beta1.ValidatingAdmissionPolicy + map: + fields: + - name: apiVersion + type: + scalar: string + - name: kind + type: + scalar: string + - name: metadata + type: + namedType: io.k8s.apimachinery.pkg.apis.meta.v1.ObjectMeta + default: {} + - name: spec + type: + namedType: io.k8s.api.admissionregistration.v1beta1.ValidatingAdmissionPolicySpec + default: {} + - name: status + type: + namedType: io.k8s.api.admissionregistration.v1beta1.ValidatingAdmissionPolicyStatus + default: {} +- name: io.k8s.api.admissionregistration.v1beta1.ValidatingAdmissionPolicyBinding + map: + fields: + - name: apiVersion + type: + scalar: string + - name: kind + type: + scalar: string + - name: metadata + type: + namedType: io.k8s.apimachinery.pkg.apis.meta.v1.ObjectMeta + default: {} + - name: spec + type: + namedType: io.k8s.api.admissionregistration.v1beta1.ValidatingAdmissionPolicyBindingSpec + default: {} +- name: io.k8s.api.admissionregistration.v1beta1.ValidatingAdmissionPolicyBindingSpec + map: + fields: + - name: matchResources + type: + namedType: io.k8s.api.admissionregistration.v1beta1.MatchResources + - name: paramRef + type: + namedType: io.k8s.api.admissionregistration.v1beta1.ParamRef + - 
name: policyName + type: + scalar: string + - name: validationActions + type: + list: + elementType: + scalar: string + elementRelationship: associative +- name: io.k8s.api.admissionregistration.v1beta1.ValidatingAdmissionPolicySpec + map: + fields: + - name: auditAnnotations + type: + list: + elementType: + namedType: io.k8s.api.admissionregistration.v1beta1.AuditAnnotation + elementRelationship: atomic + - name: failurePolicy + type: + scalar: string + - name: matchConditions + type: + list: + elementType: + namedType: io.k8s.api.admissionregistration.v1beta1.MatchCondition + elementRelationship: associative + keys: + - name + - name: matchConstraints + type: + namedType: io.k8s.api.admissionregistration.v1beta1.MatchResources + - name: paramKind + type: + namedType: io.k8s.api.admissionregistration.v1beta1.ParamKind + - name: validations + type: + list: + elementType: + namedType: io.k8s.api.admissionregistration.v1beta1.Validation + elementRelationship: atomic + - name: variables + type: + list: + elementType: + namedType: io.k8s.api.admissionregistration.v1beta1.Variable + elementRelationship: associative + keys: + - name +- name: io.k8s.api.admissionregistration.v1beta1.ValidatingAdmissionPolicyStatus + map: + fields: + - name: conditions + type: + list: + elementType: + namedType: io.k8s.apimachinery.pkg.apis.meta.v1.Condition + elementRelationship: associative + keys: + - type + - name: observedGeneration + type: + scalar: numeric + - name: typeChecking + type: + namedType: io.k8s.api.admissionregistration.v1beta1.TypeChecking - name: io.k8s.api.admissionregistration.v1beta1.ValidatingWebhook map: fields: @@ -668,6 +925,34 @@ var schemaYAML = typed.YAMLObject(`types: elementRelationship: associative keys: - name +- name: io.k8s.api.admissionregistration.v1beta1.Validation + map: + fields: + - name: expression + type: + scalar: string + default: "" + - name: message + type: + scalar: string + - name: messageExpression + type: + scalar: string + - name: 
reason + type: + scalar: string +- name: io.k8s.api.admissionregistration.v1beta1.Variable + map: + fields: + - name: expression + type: + scalar: string + default: "" + - name: name + type: + scalar: string + default: "" + elementRelationship: atomic - name: io.k8s.api.admissionregistration.v1beta1.WebhookClientConfig map: fields: @@ -695,6 +980,12 @@ var schemaYAML = typed.YAMLObject(`types: - name: encodingVersion type: scalar: string + - name: servedVersions + type: + list: + elementType: + scalar: string + elementRelationship: associative - name: io.k8s.api.apiserverinternal.v1alpha1.StorageVersion map: fields: @@ -3328,6 +3619,9 @@ var schemaYAML = typed.YAMLObject(`types: - name: backoffLimit type: scalar: numeric + - name: backoffLimitPerIndex + type: + scalar: numeric - name: completionMode type: scalar: string @@ -3337,12 +3631,18 @@ var schemaYAML = typed.YAMLObject(`types: - name: manualSelector type: scalar: boolean + - name: maxFailedIndexes + type: + scalar: numeric - name: parallelism type: scalar: numeric - name: podFailurePolicy type: namedType: io.k8s.api.batch.v1.PodFailurePolicy + - name: podReplacementPolicy + type: + scalar: string - name: selector type: namedType: io.k8s.apimachinery.pkg.apis.meta.v1.LabelSelector @@ -3377,6 +3677,9 @@ var schemaYAML = typed.YAMLObject(`types: - name: failed type: scalar: numeric + - name: failedIndexes + type: + scalar: string - name: ready type: scalar: numeric @@ -3386,6 +3689,9 @@ var schemaYAML = typed.YAMLObject(`types: - name: succeeded type: scalar: numeric + - name: terminating + type: + scalar: numeric - name: uncountedTerminatedPods type: namedType: io.k8s.api.batch.v1.UncountedTerminatedPods @@ -4306,6 +4612,9 @@ var schemaYAML = typed.YAMLObject(`types: type: namedType: io.k8s.api.core.v1.ResourceRequirements default: {} + - name: restartPolicy + type: + scalar: string - name: securityContext type: namedType: io.k8s.api.core.v1.SecurityContext @@ -4723,6 +5032,9 @@ var schemaYAML = 
typed.YAMLObject(`types: type: namedType: io.k8s.api.core.v1.ResourceRequirements default: {} + - name: restartPolicy + type: + scalar: string - name: securityContext type: namedType: io.k8s.api.core.v1.SecurityContext @@ -5053,6 +5365,12 @@ var schemaYAML = typed.YAMLObject(`types: - name: ip type: scalar: string +- name: io.k8s.api.core.v1.HostIP + map: + fields: + - name: ip + type: + scalar: string - name: io.k8s.api.core.v1.HostPathVolumeSource map: fields: @@ -5777,6 +6095,12 @@ var schemaYAML = typed.YAMLObject(`types: elementType: scalar: string elementRelationship: atomic + - name: allocatedResourceStatuses + type: + map: + elementType: + scalar: string + elementRelationship: separable - name: allocatedResources type: map: @@ -5798,9 +6122,6 @@ var schemaYAML = typed.YAMLObject(`types: - name: phase type: scalar: string - - name: resizeStatus - type: - scalar: string - name: io.k8s.api.core.v1.PersistentVolumeClaimTemplate map: fields: @@ -5927,6 +6248,9 @@ var schemaYAML = typed.YAMLObject(`types: - name: io.k8s.api.core.v1.PersistentVolumeStatus map: fields: + - name: lastPhaseTransitionTime + type: + namedType: io.k8s.apimachinery.pkg.apis.meta.v1.Time - name: message type: scalar: string @@ -6102,6 +6426,16 @@ var schemaYAML = typed.YAMLObject(`types: type: namedType: io.k8s.api.core.v1.ClaimSource default: {} +- name: io.k8s.api.core.v1.PodResourceClaimStatus + map: + fields: + - name: name + type: + scalar: string + default: "" + - name: resourceClaimName + type: + scalar: string - name: io.k8s.api.core.v1.PodSchedulingGate map: fields: @@ -6351,6 +6685,12 @@ var schemaYAML = typed.YAMLObject(`types: - name: hostIP type: scalar: string + - name: hostIPs + type: + list: + elementType: + namedType: io.k8s.api.core.v1.HostIP + elementRelationship: atomic - name: initContainerStatuses type: list: @@ -6386,6 +6726,14 @@ var schemaYAML = typed.YAMLObject(`types: - name: resize type: scalar: string + - name: resourceClaimStatuses + type: + list: + 
elementType: + namedType: io.k8s.api.core.v1.PodResourceClaimStatus + elementRelationship: associative + keys: + - name - name: startTime type: namedType: io.k8s.apimachinery.pkg.apis.meta.v1.Time @@ -8343,10 +8691,6 @@ var schemaYAML = typed.YAMLObject(`types: type: namedType: io.k8s.api.extensions.v1beta1.NetworkPolicySpec default: {} - - name: status - type: - namedType: io.k8s.api.extensions.v1beta1.NetworkPolicyStatus - default: {} - name: io.k8s.api.extensions.v1beta1.NetworkPolicyEgressRule map: fields: @@ -8426,17 +8770,6 @@ var schemaYAML = typed.YAMLObject(`types: elementType: scalar: string elementRelationship: atomic -- name: io.k8s.api.extensions.v1beta1.NetworkPolicyStatus - map: - fields: - - name: conditions - type: - list: - elementType: - namedType: io.k8s.apimachinery.pkg.apis.meta.v1.Condition - elementRelationship: associative - keys: - - type - name: io.k8s.api.extensions.v1beta1.ReplicaSet map: fields: @@ -8546,6 +8879,15 @@ var schemaYAML = typed.YAMLObject(`types: - name: maxUnavailable type: namedType: io.k8s.apimachinery.pkg.util.intstr.IntOrString +- name: io.k8s.api.flowcontrol.v1alpha1.ExemptPriorityLevelConfiguration + map: + fields: + - name: lendablePercent + type: + scalar: numeric + - name: nominalConcurrencyShares + type: + scalar: numeric - name: io.k8s.api.flowcontrol.v1alpha1.FlowDistinguisherMethod map: fields: @@ -8749,6 +9091,9 @@ var schemaYAML = typed.YAMLObject(`types: - name: io.k8s.api.flowcontrol.v1alpha1.PriorityLevelConfigurationSpec map: fields: + - name: exempt + type: + namedType: io.k8s.api.flowcontrol.v1alpha1.ExemptPriorityLevelConfiguration - name: limited type: namedType: io.k8s.api.flowcontrol.v1alpha1.LimitedPriorityLevelConfiguration @@ -8759,6 +9104,8 @@ var schemaYAML = typed.YAMLObject(`types: unions: - discriminator: type fields: + - fieldName: exempt + discriminatorValue: Exempt - fieldName: limited discriminatorValue: Limited - name: io.k8s.api.flowcontrol.v1alpha1.PriorityLevelConfigurationStatus 
@@ -8860,6 +9207,15 @@ var schemaYAML = typed.YAMLObject(`types: type: scalar: string default: "" +- name: io.k8s.api.flowcontrol.v1beta1.ExemptPriorityLevelConfiguration + map: + fields: + - name: lendablePercent + type: + scalar: numeric + - name: nominalConcurrencyShares + type: + scalar: numeric - name: io.k8s.api.flowcontrol.v1beta1.FlowDistinguisherMethod map: fields: @@ -9063,6 +9419,9 @@ var schemaYAML = typed.YAMLObject(`types: - name: io.k8s.api.flowcontrol.v1beta1.PriorityLevelConfigurationSpec map: fields: + - name: exempt + type: + namedType: io.k8s.api.flowcontrol.v1beta1.ExemptPriorityLevelConfiguration - name: limited type: namedType: io.k8s.api.flowcontrol.v1beta1.LimitedPriorityLevelConfiguration @@ -9073,6 +9432,8 @@ var schemaYAML = typed.YAMLObject(`types: unions: - discriminator: type fields: + - fieldName: exempt + discriminatorValue: Exempt - fieldName: limited discriminatorValue: Limited - name: io.k8s.api.flowcontrol.v1beta1.PriorityLevelConfigurationStatus @@ -9174,6 +9535,15 @@ var schemaYAML = typed.YAMLObject(`types: type: scalar: string default: "" +- name: io.k8s.api.flowcontrol.v1beta2.ExemptPriorityLevelConfiguration + map: + fields: + - name: lendablePercent + type: + scalar: numeric + - name: nominalConcurrencyShares + type: + scalar: numeric - name: io.k8s.api.flowcontrol.v1beta2.FlowDistinguisherMethod map: fields: @@ -9377,6 +9747,9 @@ var schemaYAML = typed.YAMLObject(`types: - name: io.k8s.api.flowcontrol.v1beta2.PriorityLevelConfigurationSpec map: fields: + - name: exempt + type: + namedType: io.k8s.api.flowcontrol.v1beta2.ExemptPriorityLevelConfiguration - name: limited type: namedType: io.k8s.api.flowcontrol.v1beta2.LimitedPriorityLevelConfiguration @@ -9387,6 +9760,8 @@ var schemaYAML = typed.YAMLObject(`types: unions: - discriminator: type fields: + - fieldName: exempt + discriminatorValue: Exempt - fieldName: limited discriminatorValue: Limited - name: io.k8s.api.flowcontrol.v1beta2.PriorityLevelConfigurationStatus @@ 
-9488,6 +9863,15 @@ var schemaYAML = typed.YAMLObject(`types: type: scalar: string default: "" +- name: io.k8s.api.flowcontrol.v1beta3.ExemptPriorityLevelConfiguration + map: + fields: + - name: lendablePercent + type: + scalar: numeric + - name: nominalConcurrencyShares + type: + scalar: numeric - name: io.k8s.api.flowcontrol.v1beta3.FlowDistinguisherMethod map: fields: @@ -9691,6 +10075,9 @@ var schemaYAML = typed.YAMLObject(`types: - name: io.k8s.api.flowcontrol.v1beta3.PriorityLevelConfigurationSpec map: fields: + - name: exempt + type: + namedType: io.k8s.api.flowcontrol.v1beta3.ExemptPriorityLevelConfiguration - name: limited type: namedType: io.k8s.api.flowcontrol.v1beta3.LimitedPriorityLevelConfiguration @@ -9701,6 +10088,8 @@ var schemaYAML = typed.YAMLObject(`types: unions: - discriminator: type fields: + - fieldName: exempt + discriminatorValue: Exempt - fieldName: limited discriminatorValue: Limited - name: io.k8s.api.flowcontrol.v1beta3.PriorityLevelConfigurationStatus @@ -10087,10 +10476,6 @@ var schemaYAML = typed.YAMLObject(`types: type: namedType: io.k8s.api.networking.v1.NetworkPolicySpec default: {} - - name: status - type: - namedType: io.k8s.api.networking.v1.NetworkPolicyStatus - default: {} - name: io.k8s.api.networking.v1.NetworkPolicyEgressRule map: fields: @@ -10170,17 +10555,6 @@ var schemaYAML = typed.YAMLObject(`types: elementType: scalar: string elementRelationship: atomic -- name: io.k8s.api.networking.v1.NetworkPolicyStatus - map: - fields: - - name: conditions - type: - list: - elementType: - namedType: io.k8s.apimachinery.pkg.apis.meta.v1.Condition - elementRelationship: associative - keys: - - type - name: io.k8s.api.networking.v1.ServiceBackendPort map: fields: diff --git a/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/networking/v1/networkpolicy.go b/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/networking/v1/networkpolicy.go index 101510e45f80..409507310b0d 100644 --- 
a/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/networking/v1/networkpolicy.go +++ b/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/networking/v1/networkpolicy.go @@ -32,8 +32,7 @@ import ( type NetworkPolicyApplyConfiguration struct { v1.TypeMetaApplyConfiguration `json:",inline"` *v1.ObjectMetaApplyConfiguration `json:"metadata,omitempty"` - Spec *NetworkPolicySpecApplyConfiguration `json:"spec,omitempty"` - Status *NetworkPolicyStatusApplyConfiguration `json:"status,omitempty"` + Spec *NetworkPolicySpecApplyConfiguration `json:"spec,omitempty"` } // NetworkPolicy constructs an declarative configuration of the NetworkPolicy type for use with @@ -248,11 +247,3 @@ func (b *NetworkPolicyApplyConfiguration) WithSpec(value *NetworkPolicySpecApply b.Spec = value return b } - -// WithStatus sets the Status field in the declarative configuration to the given value -// and returns the receiver, so that objects can be built by chaining "With" function invocations. -// If called multiple times, the Status field is set to the value of the last call. -func (b *NetworkPolicyApplyConfiguration) WithStatus(value *NetworkPolicyStatusApplyConfiguration) *NetworkPolicyApplyConfiguration { - b.Status = value - return b -} diff --git a/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/networking/v1/networkpolicystatus.go b/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/networking/v1/networkpolicystatus.go deleted file mode 100644 index 032de18eda4e..000000000000 --- a/cluster-autoscaler/vendor/k8s.io/client-go/applyconfigurations/networking/v1/networkpolicystatus.go +++ /dev/null @@ -1,48 +0,0 @@ -/* -Copyright The Kubernetes Authors. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. 
-You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -// Code generated by applyconfiguration-gen. DO NOT EDIT. - -package v1 - -import ( - v1 "k8s.io/client-go/applyconfigurations/meta/v1" -) - -// NetworkPolicyStatusApplyConfiguration represents an declarative configuration of the NetworkPolicyStatus type for use -// with apply. -type NetworkPolicyStatusApplyConfiguration struct { - Conditions []v1.ConditionApplyConfiguration `json:"conditions,omitempty"` -} - -// NetworkPolicyStatusApplyConfiguration constructs an declarative configuration of the NetworkPolicyStatus type for use with -// apply. -func NetworkPolicyStatus() *NetworkPolicyStatusApplyConfiguration { - return &NetworkPolicyStatusApplyConfiguration{} -} - -// WithConditions adds the given value to the Conditions field in the declarative configuration -// and returns the receiver, so that objects can be build by chaining "With" function invocations. -// If called multiple times, values provided by each call will be appended to the Conditions field. 
-func (b *NetworkPolicyStatusApplyConfiguration) WithConditions(values ...*v1.ConditionApplyConfiguration) *NetworkPolicyStatusApplyConfiguration { - for i := range values { - if values[i] == nil { - panic("nil value passed to WithConditions") - } - b.Conditions = append(b.Conditions, *values[i]) - } - return b -} diff --git a/cluster-autoscaler/vendor/k8s.io/client-go/discovery/aggregated_discovery.go b/cluster-autoscaler/vendor/k8s.io/client-go/discovery/aggregated_discovery.go index 7470259dc86d..f72c42051b91 100644 --- a/cluster-autoscaler/vendor/k8s.io/client-go/discovery/aggregated_discovery.go +++ b/cluster-autoscaler/vendor/k8s.io/client-go/discovery/aggregated_discovery.go @@ -111,6 +111,8 @@ func convertAPIGroup(g apidiscovery.APIGroupDiscovery) ( return group, gvResources, failedGVs } +var emptyKind = metav1.GroupVersionKind{} + // convertAPIResource tranforms a APIResourceDiscovery to an APIResource. We are // resilient to missing GVK, since this resource might be the parent resource // for a subresource. If the parent is missing a GVK, it is not returned in @@ -125,7 +127,7 @@ func convertAPIResource(in apidiscovery.APIResourceDiscovery) (metav1.APIResourc Categories: in.Categories, } var err error - if in.ResponseKind != nil { + if in.ResponseKind != nil && (*in.ResponseKind) != emptyKind { result.Group = in.ResponseKind.Group result.Version = in.ResponseKind.Version result.Kind = in.ResponseKind.Kind @@ -140,7 +142,7 @@ func convertAPIResource(in apidiscovery.APIResourceDiscovery) (metav1.APIResourc // convertAPISubresource tranforms a APISubresourceDiscovery to an APIResource. 
func convertAPISubresource(parent metav1.APIResource, in apidiscovery.APISubresourceDiscovery) (metav1.APIResource, error) { result := metav1.APIResource{} - if in.ResponseKind == nil { + if in.ResponseKind == nil || (*in.ResponseKind) == emptyKind { return result, fmt.Errorf("subresource %s/%s missing GVK", parent.Name, in.Subresource) } result.Name = fmt.Sprintf("%s/%s", parent.Name, in.Subresource) diff --git a/cluster-autoscaler/vendor/k8s.io/client-go/discovery/cached/memory/memcache.go b/cluster-autoscaler/vendor/k8s.io/client-go/discovery/cached/memory/memcache.go index 9143ce00ab42..3829b3cc09c9 100644 --- a/cluster-autoscaler/vendor/k8s.io/client-go/discovery/cached/memory/memcache.go +++ b/cluster-autoscaler/vendor/k8s.io/client-go/discovery/cached/memory/memcache.go @@ -22,7 +22,7 @@ import ( "sync" "syscall" - openapi_v2 "github.com/google/gnostic/openapiv2" + openapi_v2 "github.com/google/gnostic-models/openapiv2" errorsutil "k8s.io/apimachinery/pkg/api/errors" metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" diff --git a/cluster-autoscaler/vendor/k8s.io/client-go/discovery/discovery_client.go b/cluster-autoscaler/vendor/k8s.io/client-go/discovery/discovery_client.go index 641568008b7b..a4f083a1ac3b 100644 --- a/cluster-autoscaler/vendor/k8s.io/client-go/discovery/discovery_client.go +++ b/cluster-autoscaler/vendor/k8s.io/client-go/discovery/discovery_client.go @@ -20,6 +20,7 @@ import ( "context" "encoding/json" "fmt" + "mime" "net/http" "net/url" "sort" @@ -29,7 +30,7 @@ import ( //nolint:staticcheck // SA1019 Keep using module since it's still being maintained and the api of google.golang.org/protobuf/proto differs "github.com/golang/protobuf/proto" - openapi_v2 "github.com/google/gnostic/openapiv2" + openapi_v2 "github.com/google/gnostic-models/openapiv2" apidiscovery "k8s.io/api/apidiscovery/v2beta1" "k8s.io/apimachinery/pkg/api/errors" @@ -58,8 +59,9 @@ const ( defaultBurst = 300 AcceptV1 = runtime.ContentTypeJSON - // Aggregated discovery 
content-type (currently v2beta1). NOTE: Currently, we are assuming the order - // for "g", "v", and "as" from the server. We can only compare this string if we can make that assumption. + // Aggregated discovery content-type (v2beta1). NOTE: content-type parameters + // MUST be ordered (g, v, as) for server in "Accept" header (BUT we are resilient + // to ordering when comparing returned values in "Content-Type" header). AcceptV2Beta1 = runtime.ContentTypeJSON + ";" + "g=apidiscovery.k8s.io;v=v2beta1;as=APIGroupDiscoveryList" // Prioritize aggregated discovery by placing first in the order of discovery accept types. acceptDiscoveryFormats = AcceptV2Beta1 + "," + AcceptV1 @@ -259,8 +261,16 @@ func (d *DiscoveryClient) downloadLegacy() ( var resourcesByGV map[schema.GroupVersion]*metav1.APIResourceList // Switch on content-type server responded with: aggregated or unaggregated. - switch responseContentType { - case AcceptV1: + switch { + case isV2Beta1ContentType(responseContentType): + var aggregatedDiscovery apidiscovery.APIGroupDiscoveryList + err = json.Unmarshal(body, &aggregatedDiscovery) + if err != nil { + return nil, nil, nil, err + } + apiGroupList, resourcesByGV, failedGVs = SplitGroupsAndResources(aggregatedDiscovery) + default: + // Default is unaggregated discovery v1. 
var v metav1.APIVersions err = json.Unmarshal(body, &v) if err != nil { @@ -271,15 +281,6 @@ func (d *DiscoveryClient) downloadLegacy() ( apiGroup = apiVersionsToAPIGroup(&v) } apiGroupList.Groups = []metav1.APIGroup{apiGroup} - case AcceptV2Beta1: - var aggregatedDiscovery apidiscovery.APIGroupDiscoveryList - err = json.Unmarshal(body, &aggregatedDiscovery) - if err != nil { - return nil, nil, nil, err - } - apiGroupList, resourcesByGV, failedGVs = SplitGroupsAndResources(aggregatedDiscovery) - default: - return nil, nil, nil, fmt.Errorf("Unknown discovery response content-type: %s", responseContentType) } return apiGroupList, resourcesByGV, failedGVs, nil @@ -313,13 +314,8 @@ func (d *DiscoveryClient) downloadAPIs() ( failedGVs := map[schema.GroupVersion]error{} var resourcesByGV map[schema.GroupVersion]*metav1.APIResourceList // Switch on content-type server responded with: aggregated or unaggregated. - switch responseContentType { - case AcceptV1: - err = json.Unmarshal(body, apiGroupList) - if err != nil { - return nil, nil, nil, err - } - case AcceptV2Beta1: + switch { + case isV2Beta1ContentType(responseContentType): var aggregatedDiscovery apidiscovery.APIGroupDiscoveryList err = json.Unmarshal(body, &aggregatedDiscovery) if err != nil { @@ -327,12 +323,38 @@ func (d *DiscoveryClient) downloadAPIs() ( } apiGroupList, resourcesByGV, failedGVs = SplitGroupsAndResources(aggregatedDiscovery) default: - return nil, nil, nil, fmt.Errorf("Unknown discovery response content-type: %s", responseContentType) + // Default is unaggregated discovery v1. + err = json.Unmarshal(body, apiGroupList) + if err != nil { + return nil, nil, nil, err + } } return apiGroupList, resourcesByGV, failedGVs, nil } +// isV2Beta1ContentType checks of the content-type string is both +// "application/json" and contains the v2beta1 content-type params. 
+// NOTE: This function is resilient to the ordering of the +// content-type parameters, as well as parameters added by +// intermediaries such as proxies or gateways. Examples: +// +// "application/json; g=apidiscovery.k8s.io;v=v2beta1;as=APIGroupDiscoveryList" = true +// "application/json; as=APIGroupDiscoveryList;v=v2beta1;g=apidiscovery.k8s.io" = true +// "application/json; as=APIGroupDiscoveryList;v=v2beta1;g=apidiscovery.k8s.io;charset=utf-8" = true +// "application/json" = false +// "application/json; charset=UTF-8" = false +func isV2Beta1ContentType(contentType string) bool { + base, params, err := mime.ParseMediaType(contentType) + if err != nil { + return false + } + return runtime.ContentTypeJSON == base && + params["g"] == "apidiscovery.k8s.io" && + params["v"] == "v2beta1" && + params["as"] == "APIGroupDiscoveryList" +} + // ServerGroups returns the supported groups, with information like supported versions and the // preferred version. func (d *DiscoveryClient) ServerGroups() (*metav1.APIGroupList, error) { diff --git a/cluster-autoscaler/vendor/k8s.io/client-go/discovery/fake/discovery.go b/cluster-autoscaler/vendor/k8s.io/client-go/discovery/fake/discovery.go index d234db893ddd..f8a78e1ef43e 100644 --- a/cluster-autoscaler/vendor/k8s.io/client-go/discovery/fake/discovery.go +++ b/cluster-autoscaler/vendor/k8s.io/client-go/discovery/fake/discovery.go @@ -20,7 +20,7 @@ import ( "fmt" "net/http" - openapi_v2 "github.com/google/gnostic/openapiv2" + openapi_v2 "github.com/google/gnostic-models/openapiv2" "k8s.io/apimachinery/pkg/api/errors" metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" diff --git a/cluster-autoscaler/vendor/k8s.io/client-go/informers/admissionregistration/v1beta1/interface.go b/cluster-autoscaler/vendor/k8s.io/client-go/informers/admissionregistration/v1beta1/interface.go index d1e2b61be251..815960df5925 100644 --- a/cluster-autoscaler/vendor/k8s.io/client-go/informers/admissionregistration/v1beta1/interface.go +++ 
b/cluster-autoscaler/vendor/k8s.io/client-go/informers/admissionregistration/v1beta1/interface.go @@ -26,6 +26,10 @@ import ( type Interface interface { // MutatingWebhookConfigurations returns a MutatingWebhookConfigurationInformer. MutatingWebhookConfigurations() MutatingWebhookConfigurationInformer + // ValidatingAdmissionPolicies returns a ValidatingAdmissionPolicyInformer. + ValidatingAdmissionPolicies() ValidatingAdmissionPolicyInformer + // ValidatingAdmissionPolicyBindings returns a ValidatingAdmissionPolicyBindingInformer. + ValidatingAdmissionPolicyBindings() ValidatingAdmissionPolicyBindingInformer // ValidatingWebhookConfigurations returns a ValidatingWebhookConfigurationInformer. ValidatingWebhookConfigurations() ValidatingWebhookConfigurationInformer } @@ -46,6 +50,16 @@ func (v *version) MutatingWebhookConfigurations() MutatingWebhookConfigurationIn return &mutatingWebhookConfigurationInformer{factory: v.factory, tweakListOptions: v.tweakListOptions} } +// ValidatingAdmissionPolicies returns a ValidatingAdmissionPolicyInformer. +func (v *version) ValidatingAdmissionPolicies() ValidatingAdmissionPolicyInformer { + return &validatingAdmissionPolicyInformer{factory: v.factory, tweakListOptions: v.tweakListOptions} +} + +// ValidatingAdmissionPolicyBindings returns a ValidatingAdmissionPolicyBindingInformer. +func (v *version) ValidatingAdmissionPolicyBindings() ValidatingAdmissionPolicyBindingInformer { + return &validatingAdmissionPolicyBindingInformer{factory: v.factory, tweakListOptions: v.tweakListOptions} +} + // ValidatingWebhookConfigurations returns a ValidatingWebhookConfigurationInformer. 
func (v *version) ValidatingWebhookConfigurations() ValidatingWebhookConfigurationInformer { return &validatingWebhookConfigurationInformer{factory: v.factory, tweakListOptions: v.tweakListOptions} diff --git a/cluster-autoscaler/vendor/k8s.io/client-go/informers/admissionregistration/v1beta1/validatingadmissionpolicy.go b/cluster-autoscaler/vendor/k8s.io/client-go/informers/admissionregistration/v1beta1/validatingadmissionpolicy.go new file mode 100644 index 000000000000..d0e9cd64c82f --- /dev/null +++ b/cluster-autoscaler/vendor/k8s.io/client-go/informers/admissionregistration/v1beta1/validatingadmissionpolicy.go @@ -0,0 +1,89 @@ +/* +Copyright The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +// Code generated by informer-gen. DO NOT EDIT. + +package v1beta1 + +import ( + "context" + time "time" + + admissionregistrationv1beta1 "k8s.io/api/admissionregistration/v1beta1" + v1 "k8s.io/apimachinery/pkg/apis/meta/v1" + runtime "k8s.io/apimachinery/pkg/runtime" + watch "k8s.io/apimachinery/pkg/watch" + internalinterfaces "k8s.io/client-go/informers/internalinterfaces" + kubernetes "k8s.io/client-go/kubernetes" + v1beta1 "k8s.io/client-go/listers/admissionregistration/v1beta1" + cache "k8s.io/client-go/tools/cache" +) + +// ValidatingAdmissionPolicyInformer provides access to a shared informer and lister for +// ValidatingAdmissionPolicies. 
+type ValidatingAdmissionPolicyInformer interface { + Informer() cache.SharedIndexInformer + Lister() v1beta1.ValidatingAdmissionPolicyLister +} + +type validatingAdmissionPolicyInformer struct { + factory internalinterfaces.SharedInformerFactory + tweakListOptions internalinterfaces.TweakListOptionsFunc +} + +// NewValidatingAdmissionPolicyInformer constructs a new informer for ValidatingAdmissionPolicy type. +// Always prefer using an informer factory to get a shared informer instead of getting an independent +// one. This reduces memory footprint and number of connections to the server. +func NewValidatingAdmissionPolicyInformer(client kubernetes.Interface, resyncPeriod time.Duration, indexers cache.Indexers) cache.SharedIndexInformer { + return NewFilteredValidatingAdmissionPolicyInformer(client, resyncPeriod, indexers, nil) +} + +// NewFilteredValidatingAdmissionPolicyInformer constructs a new informer for ValidatingAdmissionPolicy type. +// Always prefer using an informer factory to get a shared informer instead of getting an independent +// one. This reduces memory footprint and number of connections to the server. 
+func NewFilteredValidatingAdmissionPolicyInformer(client kubernetes.Interface, resyncPeriod time.Duration, indexers cache.Indexers, tweakListOptions internalinterfaces.TweakListOptionsFunc) cache.SharedIndexInformer { + return cache.NewSharedIndexInformer( + &cache.ListWatch{ + ListFunc: func(options v1.ListOptions) (runtime.Object, error) { + if tweakListOptions != nil { + tweakListOptions(&options) + } + return client.AdmissionregistrationV1beta1().ValidatingAdmissionPolicies().List(context.TODO(), options) + }, + WatchFunc: func(options v1.ListOptions) (watch.Interface, error) { + if tweakListOptions != nil { + tweakListOptions(&options) + } + return client.AdmissionregistrationV1beta1().ValidatingAdmissionPolicies().Watch(context.TODO(), options) + }, + }, + &admissionregistrationv1beta1.ValidatingAdmissionPolicy{}, + resyncPeriod, + indexers, + ) +} + +func (f *validatingAdmissionPolicyInformer) defaultInformer(client kubernetes.Interface, resyncPeriod time.Duration) cache.SharedIndexInformer { + return NewFilteredValidatingAdmissionPolicyInformer(client, resyncPeriod, cache.Indexers{cache.NamespaceIndex: cache.MetaNamespaceIndexFunc}, f.tweakListOptions) +} + +func (f *validatingAdmissionPolicyInformer) Informer() cache.SharedIndexInformer { + return f.factory.InformerFor(&admissionregistrationv1beta1.ValidatingAdmissionPolicy{}, f.defaultInformer) +} + +func (f *validatingAdmissionPolicyInformer) Lister() v1beta1.ValidatingAdmissionPolicyLister { + return v1beta1.NewValidatingAdmissionPolicyLister(f.Informer().GetIndexer()) +} diff --git a/cluster-autoscaler/vendor/k8s.io/client-go/informers/admissionregistration/v1beta1/validatingadmissionpolicybinding.go b/cluster-autoscaler/vendor/k8s.io/client-go/informers/admissionregistration/v1beta1/validatingadmissionpolicybinding.go new file mode 100644 index 000000000000..7641e9940670 --- /dev/null +++ 
b/cluster-autoscaler/vendor/k8s.io/client-go/informers/admissionregistration/v1beta1/validatingadmissionpolicybinding.go @@ -0,0 +1,89 @@ +/* +Copyright The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +// Code generated by informer-gen. DO NOT EDIT. + +package v1beta1 + +import ( + "context" + time "time" + + admissionregistrationv1beta1 "k8s.io/api/admissionregistration/v1beta1" + v1 "k8s.io/apimachinery/pkg/apis/meta/v1" + runtime "k8s.io/apimachinery/pkg/runtime" + watch "k8s.io/apimachinery/pkg/watch" + internalinterfaces "k8s.io/client-go/informers/internalinterfaces" + kubernetes "k8s.io/client-go/kubernetes" + v1beta1 "k8s.io/client-go/listers/admissionregistration/v1beta1" + cache "k8s.io/client-go/tools/cache" +) + +// ValidatingAdmissionPolicyBindingInformer provides access to a shared informer and lister for +// ValidatingAdmissionPolicyBindings. +type ValidatingAdmissionPolicyBindingInformer interface { + Informer() cache.SharedIndexInformer + Lister() v1beta1.ValidatingAdmissionPolicyBindingLister +} + +type validatingAdmissionPolicyBindingInformer struct { + factory internalinterfaces.SharedInformerFactory + tweakListOptions internalinterfaces.TweakListOptionsFunc +} + +// NewValidatingAdmissionPolicyBindingInformer constructs a new informer for ValidatingAdmissionPolicyBinding type. +// Always prefer using an informer factory to get a shared informer instead of getting an independent +// one. 
This reduces memory footprint and number of connections to the server. +func NewValidatingAdmissionPolicyBindingInformer(client kubernetes.Interface, resyncPeriod time.Duration, indexers cache.Indexers) cache.SharedIndexInformer { + return NewFilteredValidatingAdmissionPolicyBindingInformer(client, resyncPeriod, indexers, nil) +} + +// NewFilteredValidatingAdmissionPolicyBindingInformer constructs a new informer for ValidatingAdmissionPolicyBinding type. +// Always prefer using an informer factory to get a shared informer instead of getting an independent +// one. This reduces memory footprint and number of connections to the server. +func NewFilteredValidatingAdmissionPolicyBindingInformer(client kubernetes.Interface, resyncPeriod time.Duration, indexers cache.Indexers, tweakListOptions internalinterfaces.TweakListOptionsFunc) cache.SharedIndexInformer { + return cache.NewSharedIndexInformer( + &cache.ListWatch{ + ListFunc: func(options v1.ListOptions) (runtime.Object, error) { + if tweakListOptions != nil { + tweakListOptions(&options) + } + return client.AdmissionregistrationV1beta1().ValidatingAdmissionPolicyBindings().List(context.TODO(), options) + }, + WatchFunc: func(options v1.ListOptions) (watch.Interface, error) { + if tweakListOptions != nil { + tweakListOptions(&options) + } + return client.AdmissionregistrationV1beta1().ValidatingAdmissionPolicyBindings().Watch(context.TODO(), options) + }, + }, + &admissionregistrationv1beta1.ValidatingAdmissionPolicyBinding{}, + resyncPeriod, + indexers, + ) +} + +func (f *validatingAdmissionPolicyBindingInformer) defaultInformer(client kubernetes.Interface, resyncPeriod time.Duration) cache.SharedIndexInformer { + return NewFilteredValidatingAdmissionPolicyBindingInformer(client, resyncPeriod, cache.Indexers{cache.NamespaceIndex: cache.MetaNamespaceIndexFunc}, f.tweakListOptions) +} + +func (f *validatingAdmissionPolicyBindingInformer) Informer() cache.SharedIndexInformer { + return 
f.factory.InformerFor(&admissionregistrationv1beta1.ValidatingAdmissionPolicyBinding{}, f.defaultInformer) +} + +func (f *validatingAdmissionPolicyBindingInformer) Lister() v1beta1.ValidatingAdmissionPolicyBindingLister { + return v1beta1.NewValidatingAdmissionPolicyBindingLister(f.Informer().GetIndexer()) +} diff --git a/cluster-autoscaler/vendor/k8s.io/client-go/informers/factory.go b/cluster-autoscaler/vendor/k8s.io/client-go/informers/factory.go index 8e7a7e36de28..7dd0ae6353cc 100644 --- a/cluster-autoscaler/vendor/k8s.io/client-go/informers/factory.go +++ b/cluster-autoscaler/vendor/k8s.io/client-go/informers/factory.go @@ -184,7 +184,7 @@ func (f *sharedInformerFactory) WaitForCacheSync(stopCh <-chan struct{}) map[ref return res } -// InternalInformerFor returns the SharedIndexInformer for obj using an internal +// InformerFor returns the SharedIndexInformer for obj using an internal // client. func (f *sharedInformerFactory) InformerFor(obj runtime.Object, newFunc internalinterfaces.NewInformerFunc) cache.SharedIndexInformer { f.lock.Lock() @@ -257,7 +257,7 @@ type SharedInformerFactory interface { // ForResource gives generic access to a shared informer of the matching type. ForResource(resource schema.GroupVersionResource) (GenericInformer, error) - // InternalInformerFor returns the SharedIndexInformer for obj using an internal + // InformerFor returns the SharedIndexInformer for obj using an internal // client. 
InformerFor(obj runtime.Object, newFunc internalinterfaces.NewInformerFunc) cache.SharedIndexInformer diff --git a/cluster-autoscaler/vendor/k8s.io/client-go/informers/generic.go b/cluster-autoscaler/vendor/k8s.io/client-go/informers/generic.go index 2b63a8028cb1..5495239b29d5 100644 --- a/cluster-autoscaler/vendor/k8s.io/client-go/informers/generic.go +++ b/cluster-autoscaler/vendor/k8s.io/client-go/informers/generic.go @@ -112,6 +112,10 @@ func (f *sharedInformerFactory) ForResource(resource schema.GroupVersionResource // Group=admissionregistration.k8s.io, Version=v1beta1 case v1beta1.SchemeGroupVersion.WithResource("mutatingwebhookconfigurations"): return &genericInformer{resource: resource.GroupResource(), informer: f.Admissionregistration().V1beta1().MutatingWebhookConfigurations().Informer()}, nil + case v1beta1.SchemeGroupVersion.WithResource("validatingadmissionpolicies"): + return &genericInformer{resource: resource.GroupResource(), informer: f.Admissionregistration().V1beta1().ValidatingAdmissionPolicies().Informer()}, nil + case v1beta1.SchemeGroupVersion.WithResource("validatingadmissionpolicybindings"): + return &genericInformer{resource: resource.GroupResource(), informer: f.Admissionregistration().V1beta1().ValidatingAdmissionPolicyBindings().Informer()}, nil case v1beta1.SchemeGroupVersion.WithResource("validatingwebhookconfigurations"): return &genericInformer{resource: resource.GroupResource(), informer: f.Admissionregistration().V1beta1().ValidatingWebhookConfigurations().Informer()}, nil diff --git a/cluster-autoscaler/vendor/k8s.io/client-go/kubernetes/typed/admissionregistration/v1beta1/admissionregistration_client.go b/cluster-autoscaler/vendor/k8s.io/client-go/kubernetes/typed/admissionregistration/v1beta1/admissionregistration_client.go index 8fda84b1d282..5a0a17d9bea7 100644 --- a/cluster-autoscaler/vendor/k8s.io/client-go/kubernetes/typed/admissionregistration/v1beta1/admissionregistration_client.go +++ 
b/cluster-autoscaler/vendor/k8s.io/client-go/kubernetes/typed/admissionregistration/v1beta1/admissionregistration_client.go @@ -29,6 +29,8 @@ import ( type AdmissionregistrationV1beta1Interface interface { RESTClient() rest.Interface MutatingWebhookConfigurationsGetter + ValidatingAdmissionPoliciesGetter + ValidatingAdmissionPolicyBindingsGetter ValidatingWebhookConfigurationsGetter } @@ -41,6 +43,14 @@ func (c *AdmissionregistrationV1beta1Client) MutatingWebhookConfigurations() Mut return newMutatingWebhookConfigurations(c) } +func (c *AdmissionregistrationV1beta1Client) ValidatingAdmissionPolicies() ValidatingAdmissionPolicyInterface { + return newValidatingAdmissionPolicies(c) +} + +func (c *AdmissionregistrationV1beta1Client) ValidatingAdmissionPolicyBindings() ValidatingAdmissionPolicyBindingInterface { + return newValidatingAdmissionPolicyBindings(c) +} + func (c *AdmissionregistrationV1beta1Client) ValidatingWebhookConfigurations() ValidatingWebhookConfigurationInterface { return newValidatingWebhookConfigurations(c) } diff --git a/cluster-autoscaler/vendor/k8s.io/client-go/kubernetes/typed/admissionregistration/v1beta1/fake/fake_admissionregistration_client.go b/cluster-autoscaler/vendor/k8s.io/client-go/kubernetes/typed/admissionregistration/v1beta1/fake/fake_admissionregistration_client.go index 1a988ddba1a4..badfbf0346ba 100644 --- a/cluster-autoscaler/vendor/k8s.io/client-go/kubernetes/typed/admissionregistration/v1beta1/fake/fake_admissionregistration_client.go +++ b/cluster-autoscaler/vendor/k8s.io/client-go/kubernetes/typed/admissionregistration/v1beta1/fake/fake_admissionregistration_client.go @@ -32,6 +32,14 @@ func (c *FakeAdmissionregistrationV1beta1) MutatingWebhookConfigurations() v1bet return &FakeMutatingWebhookConfigurations{c} } +func (c *FakeAdmissionregistrationV1beta1) ValidatingAdmissionPolicies() v1beta1.ValidatingAdmissionPolicyInterface { + return &FakeValidatingAdmissionPolicies{c} +} + +func (c *FakeAdmissionregistrationV1beta1) 
ValidatingAdmissionPolicyBindings() v1beta1.ValidatingAdmissionPolicyBindingInterface { + return &FakeValidatingAdmissionPolicyBindings{c} +} + func (c *FakeAdmissionregistrationV1beta1) ValidatingWebhookConfigurations() v1beta1.ValidatingWebhookConfigurationInterface { return &FakeValidatingWebhookConfigurations{c} } diff --git a/cluster-autoscaler/vendor/k8s.io/client-go/kubernetes/typed/admissionregistration/v1beta1/fake/fake_validatingadmissionpolicy.go b/cluster-autoscaler/vendor/k8s.io/client-go/kubernetes/typed/admissionregistration/v1beta1/fake/fake_validatingadmissionpolicy.go new file mode 100644 index 000000000000..90cb4ff6ca86 --- /dev/null +++ b/cluster-autoscaler/vendor/k8s.io/client-go/kubernetes/typed/admissionregistration/v1beta1/fake/fake_validatingadmissionpolicy.go @@ -0,0 +1,178 @@ +/* +Copyright The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +// Code generated by client-gen. DO NOT EDIT. 
+ +package fake + +import ( + "context" + json "encoding/json" + "fmt" + + v1beta1 "k8s.io/api/admissionregistration/v1beta1" + v1 "k8s.io/apimachinery/pkg/apis/meta/v1" + labels "k8s.io/apimachinery/pkg/labels" + types "k8s.io/apimachinery/pkg/types" + watch "k8s.io/apimachinery/pkg/watch" + admissionregistrationv1beta1 "k8s.io/client-go/applyconfigurations/admissionregistration/v1beta1" + testing "k8s.io/client-go/testing" +) + +// FakeValidatingAdmissionPolicies implements ValidatingAdmissionPolicyInterface +type FakeValidatingAdmissionPolicies struct { + Fake *FakeAdmissionregistrationV1beta1 +} + +var validatingadmissionpoliciesResource = v1beta1.SchemeGroupVersion.WithResource("validatingadmissionpolicies") + +var validatingadmissionpoliciesKind = v1beta1.SchemeGroupVersion.WithKind("ValidatingAdmissionPolicy") + +// Get takes name of the validatingAdmissionPolicy, and returns the corresponding validatingAdmissionPolicy object, and an error if there is any. +func (c *FakeValidatingAdmissionPolicies) Get(ctx context.Context, name string, options v1.GetOptions) (result *v1beta1.ValidatingAdmissionPolicy, err error) { + obj, err := c.Fake. + Invokes(testing.NewRootGetAction(validatingadmissionpoliciesResource, name), &v1beta1.ValidatingAdmissionPolicy{}) + if obj == nil { + return nil, err + } + return obj.(*v1beta1.ValidatingAdmissionPolicy), err +} + +// List takes label and field selectors, and returns the list of ValidatingAdmissionPolicies that match those selectors. +func (c *FakeValidatingAdmissionPolicies) List(ctx context.Context, opts v1.ListOptions) (result *v1beta1.ValidatingAdmissionPolicyList, err error) { + obj, err := c.Fake. 
+ Invokes(testing.NewRootListAction(validatingadmissionpoliciesResource, validatingadmissionpoliciesKind, opts), &v1beta1.ValidatingAdmissionPolicyList{}) + if obj == nil { + return nil, err + } + + label, _, _ := testing.ExtractFromListOptions(opts) + if label == nil { + label = labels.Everything() + } + list := &v1beta1.ValidatingAdmissionPolicyList{ListMeta: obj.(*v1beta1.ValidatingAdmissionPolicyList).ListMeta} + for _, item := range obj.(*v1beta1.ValidatingAdmissionPolicyList).Items { + if label.Matches(labels.Set(item.Labels)) { + list.Items = append(list.Items, item) + } + } + return list, err +} + +// Watch returns a watch.Interface that watches the requested validatingAdmissionPolicies. +func (c *FakeValidatingAdmissionPolicies) Watch(ctx context.Context, opts v1.ListOptions) (watch.Interface, error) { + return c.Fake. + InvokesWatch(testing.NewRootWatchAction(validatingadmissionpoliciesResource, opts)) +} + +// Create takes the representation of a validatingAdmissionPolicy and creates it. Returns the server's representation of the validatingAdmissionPolicy, and an error, if there is any. +func (c *FakeValidatingAdmissionPolicies) Create(ctx context.Context, validatingAdmissionPolicy *v1beta1.ValidatingAdmissionPolicy, opts v1.CreateOptions) (result *v1beta1.ValidatingAdmissionPolicy, err error) { + obj, err := c.Fake. + Invokes(testing.NewRootCreateAction(validatingadmissionpoliciesResource, validatingAdmissionPolicy), &v1beta1.ValidatingAdmissionPolicy{}) + if obj == nil { + return nil, err + } + return obj.(*v1beta1.ValidatingAdmissionPolicy), err +} + +// Update takes the representation of a validatingAdmissionPolicy and updates it. Returns the server's representation of the validatingAdmissionPolicy, and an error, if there is any. 
+func (c *FakeValidatingAdmissionPolicies) Update(ctx context.Context, validatingAdmissionPolicy *v1beta1.ValidatingAdmissionPolicy, opts v1.UpdateOptions) (result *v1beta1.ValidatingAdmissionPolicy, err error) { + obj, err := c.Fake. + Invokes(testing.NewRootUpdateAction(validatingadmissionpoliciesResource, validatingAdmissionPolicy), &v1beta1.ValidatingAdmissionPolicy{}) + if obj == nil { + return nil, err + } + return obj.(*v1beta1.ValidatingAdmissionPolicy), err +} + +// UpdateStatus was generated because the type contains a Status member. +// Add a +genclient:noStatus comment above the type to avoid generating UpdateStatus(). +func (c *FakeValidatingAdmissionPolicies) UpdateStatus(ctx context.Context, validatingAdmissionPolicy *v1beta1.ValidatingAdmissionPolicy, opts v1.UpdateOptions) (*v1beta1.ValidatingAdmissionPolicy, error) { + obj, err := c.Fake. + Invokes(testing.NewRootUpdateSubresourceAction(validatingadmissionpoliciesResource, "status", validatingAdmissionPolicy), &v1beta1.ValidatingAdmissionPolicy{}) + if obj == nil { + return nil, err + } + return obj.(*v1beta1.ValidatingAdmissionPolicy), err +} + +// Delete takes name of the validatingAdmissionPolicy and deletes it. Returns an error if one occurs. +func (c *FakeValidatingAdmissionPolicies) Delete(ctx context.Context, name string, opts v1.DeleteOptions) error { + _, err := c.Fake. + Invokes(testing.NewRootDeleteActionWithOptions(validatingadmissionpoliciesResource, name, opts), &v1beta1.ValidatingAdmissionPolicy{}) + return err +} + +// DeleteCollection deletes a collection of objects. +func (c *FakeValidatingAdmissionPolicies) DeleteCollection(ctx context.Context, opts v1.DeleteOptions, listOpts v1.ListOptions) error { + action := testing.NewRootDeleteCollectionAction(validatingadmissionpoliciesResource, listOpts) + + _, err := c.Fake.Invokes(action, &v1beta1.ValidatingAdmissionPolicyList{}) + return err +} + +// Patch applies the patch and returns the patched validatingAdmissionPolicy. 
+func (c *FakeValidatingAdmissionPolicies) Patch(ctx context.Context, name string, pt types.PatchType, data []byte, opts v1.PatchOptions, subresources ...string) (result *v1beta1.ValidatingAdmissionPolicy, err error) { + obj, err := c.Fake. + Invokes(testing.NewRootPatchSubresourceAction(validatingadmissionpoliciesResource, name, pt, data, subresources...), &v1beta1.ValidatingAdmissionPolicy{}) + if obj == nil { + return nil, err + } + return obj.(*v1beta1.ValidatingAdmissionPolicy), err +} + +// Apply takes the given apply declarative configuration, applies it and returns the applied validatingAdmissionPolicy. +func (c *FakeValidatingAdmissionPolicies) Apply(ctx context.Context, validatingAdmissionPolicy *admissionregistrationv1beta1.ValidatingAdmissionPolicyApplyConfiguration, opts v1.ApplyOptions) (result *v1beta1.ValidatingAdmissionPolicy, err error) { + if validatingAdmissionPolicy == nil { + return nil, fmt.Errorf("validatingAdmissionPolicy provided to Apply must not be nil") + } + data, err := json.Marshal(validatingAdmissionPolicy) + if err != nil { + return nil, err + } + name := validatingAdmissionPolicy.Name + if name == nil { + return nil, fmt.Errorf("validatingAdmissionPolicy.Name must be provided to Apply") + } + obj, err := c.Fake. + Invokes(testing.NewRootPatchSubresourceAction(validatingadmissionpoliciesResource, *name, types.ApplyPatchType, data), &v1beta1.ValidatingAdmissionPolicy{}) + if obj == nil { + return nil, err + } + return obj.(*v1beta1.ValidatingAdmissionPolicy), err +} + +// ApplyStatus was generated because the type contains a Status member. +// Add a +genclient:noStatus comment above the type to avoid generating ApplyStatus(). 
+func (c *FakeValidatingAdmissionPolicies) ApplyStatus(ctx context.Context, validatingAdmissionPolicy *admissionregistrationv1beta1.ValidatingAdmissionPolicyApplyConfiguration, opts v1.ApplyOptions) (result *v1beta1.ValidatingAdmissionPolicy, err error) { + if validatingAdmissionPolicy == nil { + return nil, fmt.Errorf("validatingAdmissionPolicy provided to Apply must not be nil") + } + data, err := json.Marshal(validatingAdmissionPolicy) + if err != nil { + return nil, err + } + name := validatingAdmissionPolicy.Name + if name == nil { + return nil, fmt.Errorf("validatingAdmissionPolicy.Name must be provided to Apply") + } + obj, err := c.Fake. + Invokes(testing.NewRootPatchSubresourceAction(validatingadmissionpoliciesResource, *name, types.ApplyPatchType, data, "status"), &v1beta1.ValidatingAdmissionPolicy{}) + if obj == nil { + return nil, err + } + return obj.(*v1beta1.ValidatingAdmissionPolicy), err +} diff --git a/cluster-autoscaler/vendor/k8s.io/client-go/kubernetes/typed/admissionregistration/v1beta1/fake/fake_validatingadmissionpolicybinding.go b/cluster-autoscaler/vendor/k8s.io/client-go/kubernetes/typed/admissionregistration/v1beta1/fake/fake_validatingadmissionpolicybinding.go new file mode 100644 index 000000000000..f771f81f301b --- /dev/null +++ b/cluster-autoscaler/vendor/k8s.io/client-go/kubernetes/typed/admissionregistration/v1beta1/fake/fake_validatingadmissionpolicybinding.go @@ -0,0 +1,145 @@ +/* +Copyright The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+See the License for the specific language governing permissions and +limitations under the License. +*/ + +// Code generated by client-gen. DO NOT EDIT. + +package fake + +import ( + "context" + json "encoding/json" + "fmt" + + v1beta1 "k8s.io/api/admissionregistration/v1beta1" + v1 "k8s.io/apimachinery/pkg/apis/meta/v1" + labels "k8s.io/apimachinery/pkg/labels" + types "k8s.io/apimachinery/pkg/types" + watch "k8s.io/apimachinery/pkg/watch" + admissionregistrationv1beta1 "k8s.io/client-go/applyconfigurations/admissionregistration/v1beta1" + testing "k8s.io/client-go/testing" +) + +// FakeValidatingAdmissionPolicyBindings implements ValidatingAdmissionPolicyBindingInterface +type FakeValidatingAdmissionPolicyBindings struct { + Fake *FakeAdmissionregistrationV1beta1 +} + +var validatingadmissionpolicybindingsResource = v1beta1.SchemeGroupVersion.WithResource("validatingadmissionpolicybindings") + +var validatingadmissionpolicybindingsKind = v1beta1.SchemeGroupVersion.WithKind("ValidatingAdmissionPolicyBinding") + +// Get takes name of the validatingAdmissionPolicyBinding, and returns the corresponding validatingAdmissionPolicyBinding object, and an error if there is any. +func (c *FakeValidatingAdmissionPolicyBindings) Get(ctx context.Context, name string, options v1.GetOptions) (result *v1beta1.ValidatingAdmissionPolicyBinding, err error) { + obj, err := c.Fake. + Invokes(testing.NewRootGetAction(validatingadmissionpolicybindingsResource, name), &v1beta1.ValidatingAdmissionPolicyBinding{}) + if obj == nil { + return nil, err + } + return obj.(*v1beta1.ValidatingAdmissionPolicyBinding), err +} + +// List takes label and field selectors, and returns the list of ValidatingAdmissionPolicyBindings that match those selectors. +func (c *FakeValidatingAdmissionPolicyBindings) List(ctx context.Context, opts v1.ListOptions) (result *v1beta1.ValidatingAdmissionPolicyBindingList, err error) { + obj, err := c.Fake. 
+ Invokes(testing.NewRootListAction(validatingadmissionpolicybindingsResource, validatingadmissionpolicybindingsKind, opts), &v1beta1.ValidatingAdmissionPolicyBindingList{}) + if obj == nil { + return nil, err + } + + label, _, _ := testing.ExtractFromListOptions(opts) + if label == nil { + label = labels.Everything() + } + list := &v1beta1.ValidatingAdmissionPolicyBindingList{ListMeta: obj.(*v1beta1.ValidatingAdmissionPolicyBindingList).ListMeta} + for _, item := range obj.(*v1beta1.ValidatingAdmissionPolicyBindingList).Items { + if label.Matches(labels.Set(item.Labels)) { + list.Items = append(list.Items, item) + } + } + return list, err +} + +// Watch returns a watch.Interface that watches the requested validatingAdmissionPolicyBindings. +func (c *FakeValidatingAdmissionPolicyBindings) Watch(ctx context.Context, opts v1.ListOptions) (watch.Interface, error) { + return c.Fake. + InvokesWatch(testing.NewRootWatchAction(validatingadmissionpolicybindingsResource, opts)) +} + +// Create takes the representation of a validatingAdmissionPolicyBinding and creates it. Returns the server's representation of the validatingAdmissionPolicyBinding, and an error, if there is any. +func (c *FakeValidatingAdmissionPolicyBindings) Create(ctx context.Context, validatingAdmissionPolicyBinding *v1beta1.ValidatingAdmissionPolicyBinding, opts v1.CreateOptions) (result *v1beta1.ValidatingAdmissionPolicyBinding, err error) { + obj, err := c.Fake. + Invokes(testing.NewRootCreateAction(validatingadmissionpolicybindingsResource, validatingAdmissionPolicyBinding), &v1beta1.ValidatingAdmissionPolicyBinding{}) + if obj == nil { + return nil, err + } + return obj.(*v1beta1.ValidatingAdmissionPolicyBinding), err +} + +// Update takes the representation of a validatingAdmissionPolicyBinding and updates it. Returns the server's representation of the validatingAdmissionPolicyBinding, and an error, if there is any. 
+func (c *FakeValidatingAdmissionPolicyBindings) Update(ctx context.Context, validatingAdmissionPolicyBinding *v1beta1.ValidatingAdmissionPolicyBinding, opts v1.UpdateOptions) (result *v1beta1.ValidatingAdmissionPolicyBinding, err error) { + obj, err := c.Fake. + Invokes(testing.NewRootUpdateAction(validatingadmissionpolicybindingsResource, validatingAdmissionPolicyBinding), &v1beta1.ValidatingAdmissionPolicyBinding{}) + if obj == nil { + return nil, err + } + return obj.(*v1beta1.ValidatingAdmissionPolicyBinding), err +} + +// Delete takes name of the validatingAdmissionPolicyBinding and deletes it. Returns an error if one occurs. +func (c *FakeValidatingAdmissionPolicyBindings) Delete(ctx context.Context, name string, opts v1.DeleteOptions) error { + _, err := c.Fake. + Invokes(testing.NewRootDeleteActionWithOptions(validatingadmissionpolicybindingsResource, name, opts), &v1beta1.ValidatingAdmissionPolicyBinding{}) + return err +} + +// DeleteCollection deletes a collection of objects. +func (c *FakeValidatingAdmissionPolicyBindings) DeleteCollection(ctx context.Context, opts v1.DeleteOptions, listOpts v1.ListOptions) error { + action := testing.NewRootDeleteCollectionAction(validatingadmissionpolicybindingsResource, listOpts) + + _, err := c.Fake.Invokes(action, &v1beta1.ValidatingAdmissionPolicyBindingList{}) + return err +} + +// Patch applies the patch and returns the patched validatingAdmissionPolicyBinding. +func (c *FakeValidatingAdmissionPolicyBindings) Patch(ctx context.Context, name string, pt types.PatchType, data []byte, opts v1.PatchOptions, subresources ...string) (result *v1beta1.ValidatingAdmissionPolicyBinding, err error) { + obj, err := c.Fake. 
+ Invokes(testing.NewRootPatchSubresourceAction(validatingadmissionpolicybindingsResource, name, pt, data, subresources...), &v1beta1.ValidatingAdmissionPolicyBinding{}) + if obj == nil { + return nil, err + } + return obj.(*v1beta1.ValidatingAdmissionPolicyBinding), err +} + +// Apply takes the given apply declarative configuration, applies it and returns the applied validatingAdmissionPolicyBinding. +func (c *FakeValidatingAdmissionPolicyBindings) Apply(ctx context.Context, validatingAdmissionPolicyBinding *admissionregistrationv1beta1.ValidatingAdmissionPolicyBindingApplyConfiguration, opts v1.ApplyOptions) (result *v1beta1.ValidatingAdmissionPolicyBinding, err error) { + if validatingAdmissionPolicyBinding == nil { + return nil, fmt.Errorf("validatingAdmissionPolicyBinding provided to Apply must not be nil") + } + data, err := json.Marshal(validatingAdmissionPolicyBinding) + if err != nil { + return nil, err + } + name := validatingAdmissionPolicyBinding.Name + if name == nil { + return nil, fmt.Errorf("validatingAdmissionPolicyBinding.Name must be provided to Apply") + } + obj, err := c.Fake. 
+ Invokes(testing.NewRootPatchSubresourceAction(validatingadmissionpolicybindingsResource, *name, types.ApplyPatchType, data), &v1beta1.ValidatingAdmissionPolicyBinding{}) + if obj == nil { + return nil, err + } + return obj.(*v1beta1.ValidatingAdmissionPolicyBinding), err +} diff --git a/cluster-autoscaler/vendor/k8s.io/client-go/kubernetes/typed/admissionregistration/v1beta1/generated_expansion.go b/cluster-autoscaler/vendor/k8s.io/client-go/kubernetes/typed/admissionregistration/v1beta1/generated_expansion.go index 2aeb9c98ae20..56ad611f4580 100644 --- a/cluster-autoscaler/vendor/k8s.io/client-go/kubernetes/typed/admissionregistration/v1beta1/generated_expansion.go +++ b/cluster-autoscaler/vendor/k8s.io/client-go/kubernetes/typed/admissionregistration/v1beta1/generated_expansion.go @@ -20,4 +20,8 @@ package v1beta1 type MutatingWebhookConfigurationExpansion interface{} +type ValidatingAdmissionPolicyExpansion interface{} + +type ValidatingAdmissionPolicyBindingExpansion interface{} + type ValidatingWebhookConfigurationExpansion interface{} diff --git a/cluster-autoscaler/vendor/k8s.io/client-go/kubernetes/typed/admissionregistration/v1beta1/validatingadmissionpolicy.go b/cluster-autoscaler/vendor/k8s.io/client-go/kubernetes/typed/admissionregistration/v1beta1/validatingadmissionpolicy.go new file mode 100644 index 000000000000..bea51b587f62 --- /dev/null +++ b/cluster-autoscaler/vendor/k8s.io/client-go/kubernetes/typed/admissionregistration/v1beta1/validatingadmissionpolicy.go @@ -0,0 +1,243 @@ +/* +Copyright The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+// Code generated by client-gen. DO NOT EDIT.
+
+package v1beta1
+
+import (
+	"context"
+	json "encoding/json"
+	"fmt"
+	"time"
+
+	v1beta1 "k8s.io/api/admissionregistration/v1beta1"
+	v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+	types "k8s.io/apimachinery/pkg/types"
+	watch "k8s.io/apimachinery/pkg/watch"
+	admissionregistrationv1beta1 "k8s.io/client-go/applyconfigurations/admissionregistration/v1beta1"
+	scheme "k8s.io/client-go/kubernetes/scheme"
+	rest "k8s.io/client-go/rest"
+)
+
+// ValidatingAdmissionPoliciesGetter has a method to return a ValidatingAdmissionPolicyInterface.
+// A group's client should implement this interface.
+type ValidatingAdmissionPoliciesGetter interface {
+	ValidatingAdmissionPolicies() ValidatingAdmissionPolicyInterface
+}
+
+// ValidatingAdmissionPolicyInterface has methods to work with ValidatingAdmissionPolicy resources.
+type ValidatingAdmissionPolicyInterface interface {
+	Create(ctx context.Context, validatingAdmissionPolicy *v1beta1.ValidatingAdmissionPolicy, opts v1.CreateOptions) (*v1beta1.ValidatingAdmissionPolicy, error)
+	Update(ctx context.Context, validatingAdmissionPolicy *v1beta1.ValidatingAdmissionPolicy, opts v1.UpdateOptions) (*v1beta1.ValidatingAdmissionPolicy, error)
+	UpdateStatus(ctx context.Context, validatingAdmissionPolicy *v1beta1.ValidatingAdmissionPolicy, opts v1.UpdateOptions) (*v1beta1.ValidatingAdmissionPolicy, error)
+	Delete(ctx context.Context, name string, opts v1.DeleteOptions) error
+	DeleteCollection(ctx context.Context, opts v1.DeleteOptions, listOpts v1.ListOptions) error
+	Get(ctx context.Context, name string, opts v1.GetOptions) (*v1beta1.ValidatingAdmissionPolicy, error)
+	List(ctx context.Context, opts v1.ListOptions) (*v1beta1.ValidatingAdmissionPolicyList, error)
+	Watch(ctx context.Context, opts v1.ListOptions) (watch.Interface, error)
+	Patch(ctx context.Context, name string, pt types.PatchType, data []byte, opts v1.PatchOptions, subresources ...string) (result *v1beta1.ValidatingAdmissionPolicy, err error)
+	Apply(ctx context.Context, validatingAdmissionPolicy *admissionregistrationv1beta1.ValidatingAdmissionPolicyApplyConfiguration, opts v1.ApplyOptions) (result *v1beta1.ValidatingAdmissionPolicy, err error)
+	ApplyStatus(ctx context.Context, validatingAdmissionPolicy *admissionregistrationv1beta1.ValidatingAdmissionPolicyApplyConfiguration, opts v1.ApplyOptions) (result *v1beta1.ValidatingAdmissionPolicy, err error)
+	ValidatingAdmissionPolicyExpansion
+}
+
+// validatingAdmissionPolicies implements ValidatingAdmissionPolicyInterface
+type validatingAdmissionPolicies struct {
+	client rest.Interface
+}
+
+// newValidatingAdmissionPolicies returns a ValidatingAdmissionPolicies
+func newValidatingAdmissionPolicies(c *AdmissionregistrationV1beta1Client) *validatingAdmissionPolicies {
+	return &validatingAdmissionPolicies{
+		client: c.RESTClient(),
+	}
+}
+
+// Get takes name of the validatingAdmissionPolicy, and returns the corresponding validatingAdmissionPolicy object, and an error if there is any.
+func (c *validatingAdmissionPolicies) Get(ctx context.Context, name string, options v1.GetOptions) (result *v1beta1.ValidatingAdmissionPolicy, err error) {
+	result = &v1beta1.ValidatingAdmissionPolicy{}
+	err = c.client.Get().
+		Resource("validatingadmissionpolicies").
+		Name(name).
+		VersionedParams(&options, scheme.ParameterCodec).
+		Do(ctx).
+		Into(result)
+	return
+}
+
+// List takes label and field selectors, and returns the list of ValidatingAdmissionPolicies that match those selectors.
+func (c *validatingAdmissionPolicies) List(ctx context.Context, opts v1.ListOptions) (result *v1beta1.ValidatingAdmissionPolicyList, err error) {
+	var timeout time.Duration
+	if opts.TimeoutSeconds != nil {
+		timeout = time.Duration(*opts.TimeoutSeconds) * time.Second
+	}
+	result = &v1beta1.ValidatingAdmissionPolicyList{}
+	err = c.client.Get().
+ Resource("validatingadmissionpolicies"). + VersionedParams(&opts, scheme.ParameterCodec). + Timeout(timeout). + Do(ctx). + Into(result) + return +} + +// Watch returns a watch.Interface that watches the requested validatingAdmissionPolicies. +func (c *validatingAdmissionPolicies) Watch(ctx context.Context, opts v1.ListOptions) (watch.Interface, error) { + var timeout time.Duration + if opts.TimeoutSeconds != nil { + timeout = time.Duration(*opts.TimeoutSeconds) * time.Second + } + opts.Watch = true + return c.client.Get(). + Resource("validatingadmissionpolicies"). + VersionedParams(&opts, scheme.ParameterCodec). + Timeout(timeout). + Watch(ctx) +} + +// Create takes the representation of a validatingAdmissionPolicy and creates it. Returns the server's representation of the validatingAdmissionPolicy, and an error, if there is any. +func (c *validatingAdmissionPolicies) Create(ctx context.Context, validatingAdmissionPolicy *v1beta1.ValidatingAdmissionPolicy, opts v1.CreateOptions) (result *v1beta1.ValidatingAdmissionPolicy, err error) { + result = &v1beta1.ValidatingAdmissionPolicy{} + err = c.client.Post(). + Resource("validatingadmissionpolicies"). + VersionedParams(&opts, scheme.ParameterCodec). + Body(validatingAdmissionPolicy). + Do(ctx). + Into(result) + return +} + +// Update takes the representation of a validatingAdmissionPolicy and updates it. Returns the server's representation of the validatingAdmissionPolicy, and an error, if there is any. +func (c *validatingAdmissionPolicies) Update(ctx context.Context, validatingAdmissionPolicy *v1beta1.ValidatingAdmissionPolicy, opts v1.UpdateOptions) (result *v1beta1.ValidatingAdmissionPolicy, err error) { + result = &v1beta1.ValidatingAdmissionPolicy{} + err = c.client.Put(). + Resource("validatingadmissionpolicies"). + Name(validatingAdmissionPolicy.Name). + VersionedParams(&opts, scheme.ParameterCodec). + Body(validatingAdmissionPolicy). + Do(ctx). 
+ Into(result) + return +} + +// UpdateStatus was generated because the type contains a Status member. +// Add a +genclient:noStatus comment above the type to avoid generating UpdateStatus(). +func (c *validatingAdmissionPolicies) UpdateStatus(ctx context.Context, validatingAdmissionPolicy *v1beta1.ValidatingAdmissionPolicy, opts v1.UpdateOptions) (result *v1beta1.ValidatingAdmissionPolicy, err error) { + result = &v1beta1.ValidatingAdmissionPolicy{} + err = c.client.Put(). + Resource("validatingadmissionpolicies"). + Name(validatingAdmissionPolicy.Name). + SubResource("status"). + VersionedParams(&opts, scheme.ParameterCodec). + Body(validatingAdmissionPolicy). + Do(ctx). + Into(result) + return +} + +// Delete takes name of the validatingAdmissionPolicy and deletes it. Returns an error if one occurs. +func (c *validatingAdmissionPolicies) Delete(ctx context.Context, name string, opts v1.DeleteOptions) error { + return c.client.Delete(). + Resource("validatingadmissionpolicies"). + Name(name). + Body(&opts). + Do(ctx). + Error() +} + +// DeleteCollection deletes a collection of objects. +func (c *validatingAdmissionPolicies) DeleteCollection(ctx context.Context, opts v1.DeleteOptions, listOpts v1.ListOptions) error { + var timeout time.Duration + if listOpts.TimeoutSeconds != nil { + timeout = time.Duration(*listOpts.TimeoutSeconds) * time.Second + } + return c.client.Delete(). + Resource("validatingadmissionpolicies"). + VersionedParams(&listOpts, scheme.ParameterCodec). + Timeout(timeout). + Body(&opts). + Do(ctx). + Error() +} + +// Patch applies the patch and returns the patched validatingAdmissionPolicy. +func (c *validatingAdmissionPolicies) Patch(ctx context.Context, name string, pt types.PatchType, data []byte, opts v1.PatchOptions, subresources ...string) (result *v1beta1.ValidatingAdmissionPolicy, err error) { + result = &v1beta1.ValidatingAdmissionPolicy{} + err = c.client.Patch(pt). + Resource("validatingadmissionpolicies"). + Name(name). 
+ SubResource(subresources...). + VersionedParams(&opts, scheme.ParameterCodec). + Body(data). + Do(ctx). + Into(result) + return +} + +// Apply takes the given apply declarative configuration, applies it and returns the applied validatingAdmissionPolicy. +func (c *validatingAdmissionPolicies) Apply(ctx context.Context, validatingAdmissionPolicy *admissionregistrationv1beta1.ValidatingAdmissionPolicyApplyConfiguration, opts v1.ApplyOptions) (result *v1beta1.ValidatingAdmissionPolicy, err error) { + if validatingAdmissionPolicy == nil { + return nil, fmt.Errorf("validatingAdmissionPolicy provided to Apply must not be nil") + } + patchOpts := opts.ToPatchOptions() + data, err := json.Marshal(validatingAdmissionPolicy) + if err != nil { + return nil, err + } + name := validatingAdmissionPolicy.Name + if name == nil { + return nil, fmt.Errorf("validatingAdmissionPolicy.Name must be provided to Apply") + } + result = &v1beta1.ValidatingAdmissionPolicy{} + err = c.client.Patch(types.ApplyPatchType). + Resource("validatingadmissionpolicies"). + Name(*name). + VersionedParams(&patchOpts, scheme.ParameterCodec). + Body(data). + Do(ctx). + Into(result) + return +} + +// ApplyStatus was generated because the type contains a Status member. +// Add a +genclient:noStatus comment above the type to avoid generating ApplyStatus(). 
+func (c *validatingAdmissionPolicies) ApplyStatus(ctx context.Context, validatingAdmissionPolicy *admissionregistrationv1beta1.ValidatingAdmissionPolicyApplyConfiguration, opts v1.ApplyOptions) (result *v1beta1.ValidatingAdmissionPolicy, err error) { + if validatingAdmissionPolicy == nil { + return nil, fmt.Errorf("validatingAdmissionPolicy provided to Apply must not be nil") + } + patchOpts := opts.ToPatchOptions() + data, err := json.Marshal(validatingAdmissionPolicy) + if err != nil { + return nil, err + } + + name := validatingAdmissionPolicy.Name + if name == nil { + return nil, fmt.Errorf("validatingAdmissionPolicy.Name must be provided to Apply") + } + + result = &v1beta1.ValidatingAdmissionPolicy{} + err = c.client.Patch(types.ApplyPatchType). + Resource("validatingadmissionpolicies"). + Name(*name). + SubResource("status"). + VersionedParams(&patchOpts, scheme.ParameterCodec). + Body(data). + Do(ctx). + Into(result) + return +} diff --git a/cluster-autoscaler/vendor/k8s.io/client-go/kubernetes/typed/admissionregistration/v1beta1/validatingadmissionpolicybinding.go b/cluster-autoscaler/vendor/k8s.io/client-go/kubernetes/typed/admissionregistration/v1beta1/validatingadmissionpolicybinding.go new file mode 100644 index 000000000000..bba37bb0477b --- /dev/null +++ b/cluster-autoscaler/vendor/k8s.io/client-go/kubernetes/typed/admissionregistration/v1beta1/validatingadmissionpolicybinding.go @@ -0,0 +1,197 @@ +/* +Copyright The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+See the License for the specific language governing permissions and +limitations under the License. +*/ + +// Code generated by client-gen. DO NOT EDIT. + +package v1beta1 + +import ( + "context" + json "encoding/json" + "fmt" + "time" + + v1beta1 "k8s.io/api/admissionregistration/v1beta1" + v1 "k8s.io/apimachinery/pkg/apis/meta/v1" + types "k8s.io/apimachinery/pkg/types" + watch "k8s.io/apimachinery/pkg/watch" + admissionregistrationv1beta1 "k8s.io/client-go/applyconfigurations/admissionregistration/v1beta1" + scheme "k8s.io/client-go/kubernetes/scheme" + rest "k8s.io/client-go/rest" +) + +// ValidatingAdmissionPolicyBindingsGetter has a method to return a ValidatingAdmissionPolicyBindingInterface. +// A group's client should implement this interface. +type ValidatingAdmissionPolicyBindingsGetter interface { + ValidatingAdmissionPolicyBindings() ValidatingAdmissionPolicyBindingInterface +} + +// ValidatingAdmissionPolicyBindingInterface has methods to work with ValidatingAdmissionPolicyBinding resources. 
+type ValidatingAdmissionPolicyBindingInterface interface { + Create(ctx context.Context, validatingAdmissionPolicyBinding *v1beta1.ValidatingAdmissionPolicyBinding, opts v1.CreateOptions) (*v1beta1.ValidatingAdmissionPolicyBinding, error) + Update(ctx context.Context, validatingAdmissionPolicyBinding *v1beta1.ValidatingAdmissionPolicyBinding, opts v1.UpdateOptions) (*v1beta1.ValidatingAdmissionPolicyBinding, error) + Delete(ctx context.Context, name string, opts v1.DeleteOptions) error + DeleteCollection(ctx context.Context, opts v1.DeleteOptions, listOpts v1.ListOptions) error + Get(ctx context.Context, name string, opts v1.GetOptions) (*v1beta1.ValidatingAdmissionPolicyBinding, error) + List(ctx context.Context, opts v1.ListOptions) (*v1beta1.ValidatingAdmissionPolicyBindingList, error) + Watch(ctx context.Context, opts v1.ListOptions) (watch.Interface, error) + Patch(ctx context.Context, name string, pt types.PatchType, data []byte, opts v1.PatchOptions, subresources ...string) (result *v1beta1.ValidatingAdmissionPolicyBinding, err error) + Apply(ctx context.Context, validatingAdmissionPolicyBinding *admissionregistrationv1beta1.ValidatingAdmissionPolicyBindingApplyConfiguration, opts v1.ApplyOptions) (result *v1beta1.ValidatingAdmissionPolicyBinding, err error) + ValidatingAdmissionPolicyBindingExpansion +} + +// validatingAdmissionPolicyBindings implements ValidatingAdmissionPolicyBindingInterface +type validatingAdmissionPolicyBindings struct { + client rest.Interface +} + +// newValidatingAdmissionPolicyBindings returns a ValidatingAdmissionPolicyBindings +func newValidatingAdmissionPolicyBindings(c *AdmissionregistrationV1beta1Client) *validatingAdmissionPolicyBindings { + return &validatingAdmissionPolicyBindings{ + client: c.RESTClient(), + } +} + +// Get takes name of the validatingAdmissionPolicyBinding, and returns the corresponding validatingAdmissionPolicyBinding object, and an error if there is any. 
+func (c *validatingAdmissionPolicyBindings) Get(ctx context.Context, name string, options v1.GetOptions) (result *v1beta1.ValidatingAdmissionPolicyBinding, err error) { + result = &v1beta1.ValidatingAdmissionPolicyBinding{} + err = c.client.Get(). + Resource("validatingadmissionpolicybindings"). + Name(name). + VersionedParams(&options, scheme.ParameterCodec). + Do(ctx). + Into(result) + return +} + +// List takes label and field selectors, and returns the list of ValidatingAdmissionPolicyBindings that match those selectors. +func (c *validatingAdmissionPolicyBindings) List(ctx context.Context, opts v1.ListOptions) (result *v1beta1.ValidatingAdmissionPolicyBindingList, err error) { + var timeout time.Duration + if opts.TimeoutSeconds != nil { + timeout = time.Duration(*opts.TimeoutSeconds) * time.Second + } + result = &v1beta1.ValidatingAdmissionPolicyBindingList{} + err = c.client.Get(). + Resource("validatingadmissionpolicybindings"). + VersionedParams(&opts, scheme.ParameterCodec). + Timeout(timeout). + Do(ctx). + Into(result) + return +} + +// Watch returns a watch.Interface that watches the requested validatingAdmissionPolicyBindings. +func (c *validatingAdmissionPolicyBindings) Watch(ctx context.Context, opts v1.ListOptions) (watch.Interface, error) { + var timeout time.Duration + if opts.TimeoutSeconds != nil { + timeout = time.Duration(*opts.TimeoutSeconds) * time.Second + } + opts.Watch = true + return c.client.Get(). + Resource("validatingadmissionpolicybindings"). + VersionedParams(&opts, scheme.ParameterCodec). + Timeout(timeout). + Watch(ctx) +} + +// Create takes the representation of a validatingAdmissionPolicyBinding and creates it. Returns the server's representation of the validatingAdmissionPolicyBinding, and an error, if there is any. 
+func (c *validatingAdmissionPolicyBindings) Create(ctx context.Context, validatingAdmissionPolicyBinding *v1beta1.ValidatingAdmissionPolicyBinding, opts v1.CreateOptions) (result *v1beta1.ValidatingAdmissionPolicyBinding, err error) { + result = &v1beta1.ValidatingAdmissionPolicyBinding{} + err = c.client.Post(). + Resource("validatingadmissionpolicybindings"). + VersionedParams(&opts, scheme.ParameterCodec). + Body(validatingAdmissionPolicyBinding). + Do(ctx). + Into(result) + return +} + +// Update takes the representation of a validatingAdmissionPolicyBinding and updates it. Returns the server's representation of the validatingAdmissionPolicyBinding, and an error, if there is any. +func (c *validatingAdmissionPolicyBindings) Update(ctx context.Context, validatingAdmissionPolicyBinding *v1beta1.ValidatingAdmissionPolicyBinding, opts v1.UpdateOptions) (result *v1beta1.ValidatingAdmissionPolicyBinding, err error) { + result = &v1beta1.ValidatingAdmissionPolicyBinding{} + err = c.client.Put(). + Resource("validatingadmissionpolicybindings"). + Name(validatingAdmissionPolicyBinding.Name). + VersionedParams(&opts, scheme.ParameterCodec). + Body(validatingAdmissionPolicyBinding). + Do(ctx). + Into(result) + return +} + +// Delete takes name of the validatingAdmissionPolicyBinding and deletes it. Returns an error if one occurs. +func (c *validatingAdmissionPolicyBindings) Delete(ctx context.Context, name string, opts v1.DeleteOptions) error { + return c.client.Delete(). + Resource("validatingadmissionpolicybindings"). + Name(name). + Body(&opts). + Do(ctx). + Error() +} + +// DeleteCollection deletes a collection of objects. +func (c *validatingAdmissionPolicyBindings) DeleteCollection(ctx context.Context, opts v1.DeleteOptions, listOpts v1.ListOptions) error { + var timeout time.Duration + if listOpts.TimeoutSeconds != nil { + timeout = time.Duration(*listOpts.TimeoutSeconds) * time.Second + } + return c.client.Delete(). 
+ Resource("validatingadmissionpolicybindings"). + VersionedParams(&listOpts, scheme.ParameterCodec). + Timeout(timeout). + Body(&opts). + Do(ctx). + Error() +} + +// Patch applies the patch and returns the patched validatingAdmissionPolicyBinding. +func (c *validatingAdmissionPolicyBindings) Patch(ctx context.Context, name string, pt types.PatchType, data []byte, opts v1.PatchOptions, subresources ...string) (result *v1beta1.ValidatingAdmissionPolicyBinding, err error) { + result = &v1beta1.ValidatingAdmissionPolicyBinding{} + err = c.client.Patch(pt). + Resource("validatingadmissionpolicybindings"). + Name(name). + SubResource(subresources...). + VersionedParams(&opts, scheme.ParameterCodec). + Body(data). + Do(ctx). + Into(result) + return +} + +// Apply takes the given apply declarative configuration, applies it and returns the applied validatingAdmissionPolicyBinding. +func (c *validatingAdmissionPolicyBindings) Apply(ctx context.Context, validatingAdmissionPolicyBinding *admissionregistrationv1beta1.ValidatingAdmissionPolicyBindingApplyConfiguration, opts v1.ApplyOptions) (result *v1beta1.ValidatingAdmissionPolicyBinding, err error) { + if validatingAdmissionPolicyBinding == nil { + return nil, fmt.Errorf("validatingAdmissionPolicyBinding provided to Apply must not be nil") + } + patchOpts := opts.ToPatchOptions() + data, err := json.Marshal(validatingAdmissionPolicyBinding) + if err != nil { + return nil, err + } + name := validatingAdmissionPolicyBinding.Name + if name == nil { + return nil, fmt.Errorf("validatingAdmissionPolicyBinding.Name must be provided to Apply") + } + result = &v1beta1.ValidatingAdmissionPolicyBinding{} + err = c.client.Patch(types.ApplyPatchType). + Resource("validatingadmissionpolicybindings"). + Name(*name). + VersionedParams(&patchOpts, scheme.ParameterCodec). + Body(data). + Do(ctx). 
+		Into(result)
+	return
+}
diff --git a/cluster-autoscaler/vendor/k8s.io/client-go/kubernetes/typed/authentication/v1/authentication_client.go b/cluster-autoscaler/vendor/k8s.io/client-go/kubernetes/typed/authentication/v1/authentication_client.go
index aea9d0e133e5..81be8b2e0468 100644
--- a/cluster-autoscaler/vendor/k8s.io/client-go/kubernetes/typed/authentication/v1/authentication_client.go
+++ b/cluster-autoscaler/vendor/k8s.io/client-go/kubernetes/typed/authentication/v1/authentication_client.go
@@ -28,6 +28,7 @@ import (
 
 type AuthenticationV1Interface interface {
 	RESTClient() rest.Interface
+	SelfSubjectReviewsGetter
 	TokenReviewsGetter
 }
 
@@ -36,6 +37,10 @@ type AuthenticationV1Client struct {
 	restClient rest.Interface
 }
 
+func (c *AuthenticationV1Client) SelfSubjectReviews() SelfSubjectReviewInterface {
+	return newSelfSubjectReviews(c)
+}
+
 func (c *AuthenticationV1Client) TokenReviews() TokenReviewInterface {
 	return newTokenReviews(c)
 }
diff --git a/cluster-autoscaler/vendor/k8s.io/client-go/kubernetes/typed/authentication/v1/fake/fake_authentication_client.go b/cluster-autoscaler/vendor/k8s.io/client-go/kubernetes/typed/authentication/v1/fake/fake_authentication_client.go
index ee06a6cdd6c2..865239ff6459 100644
--- a/cluster-autoscaler/vendor/k8s.io/client-go/kubernetes/typed/authentication/v1/fake/fake_authentication_client.go
+++ b/cluster-autoscaler/vendor/k8s.io/client-go/kubernetes/typed/authentication/v1/fake/fake_authentication_client.go
@@ -28,6 +28,10 @@ type FakeAuthenticationV1 struct {
 	*testing.Fake
 }
 
+func (c *FakeAuthenticationV1) SelfSubjectReviews() v1.SelfSubjectReviewInterface {
+	return &FakeSelfSubjectReviews{c}
+}
+
 func (c *FakeAuthenticationV1) TokenReviews() v1.TokenReviewInterface {
 	return &FakeTokenReviews{c}
 }
diff --git a/cluster-autoscaler/vendor/k8s.io/client-go/kubernetes/typed/authentication/v1/fake/fake_selfsubjectreview.go b/cluster-autoscaler/vendor/k8s.io/client-go/kubernetes/typed/authentication/v1/fake/fake_selfsubjectreview.go
new file mode 100644
index 000000000000..e683b3eaaa01
--- /dev/null
+++ b/cluster-autoscaler/vendor/k8s.io/client-go/kubernetes/typed/authentication/v1/fake/fake_selfsubjectreview.go
@@ -0,0 +1,46 @@
+/*
+Copyright The Kubernetes Authors.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+// Code generated by client-gen. DO NOT EDIT.
+
+package fake
+
+import (
+	"context"
+
+	v1 "k8s.io/api/authentication/v1"
+	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+	testing "k8s.io/client-go/testing"
+)
+
+// FakeSelfSubjectReviews implements SelfSubjectReviewInterface
+type FakeSelfSubjectReviews struct {
+	Fake *FakeAuthenticationV1
+}
+
+var selfsubjectreviewsResource = v1.SchemeGroupVersion.WithResource("selfsubjectreviews")
+
+var selfsubjectreviewsKind = v1.SchemeGroupVersion.WithKind("SelfSubjectReview")
+
+// Create takes the representation of a selfSubjectReview and creates it. Returns the server's representation of the selfSubjectReview, and an error, if there is any.
+func (c *FakeSelfSubjectReviews) Create(ctx context.Context, selfSubjectReview *v1.SelfSubjectReview, opts metav1.CreateOptions) (result *v1.SelfSubjectReview, err error) {
+	obj, err := c.Fake.
+ Invokes(testing.NewRootCreateAction(selfsubjectreviewsResource, selfSubjectReview), &v1.SelfSubjectReview{}) + if obj == nil { + return nil, err + } + return obj.(*v1.SelfSubjectReview), err +} diff --git a/cluster-autoscaler/vendor/k8s.io/client-go/kubernetes/typed/authentication/v1/generated_expansion.go b/cluster-autoscaler/vendor/k8s.io/client-go/kubernetes/typed/authentication/v1/generated_expansion.go index 0413fb2b6658..35f2c22b4f16 100644 --- a/cluster-autoscaler/vendor/k8s.io/client-go/kubernetes/typed/authentication/v1/generated_expansion.go +++ b/cluster-autoscaler/vendor/k8s.io/client-go/kubernetes/typed/authentication/v1/generated_expansion.go @@ -18,4 +18,6 @@ limitations under the License. package v1 +type SelfSubjectReviewExpansion interface{} + type TokenReviewExpansion interface{} diff --git a/cluster-autoscaler/vendor/k8s.io/client-go/kubernetes/typed/authentication/v1/selfsubjectreview.go b/cluster-autoscaler/vendor/k8s.io/client-go/kubernetes/typed/authentication/v1/selfsubjectreview.go new file mode 100644 index 000000000000..bfb9603d672d --- /dev/null +++ b/cluster-autoscaler/vendor/k8s.io/client-go/kubernetes/typed/authentication/v1/selfsubjectreview.go @@ -0,0 +1,64 @@ +/* +Copyright The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +// Code generated by client-gen. DO NOT EDIT. 
+ +package v1 + +import ( + "context" + + v1 "k8s.io/api/authentication/v1" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + scheme "k8s.io/client-go/kubernetes/scheme" + rest "k8s.io/client-go/rest" +) + +// SelfSubjectReviewsGetter has a method to return a SelfSubjectReviewInterface. +// A group's client should implement this interface. +type SelfSubjectReviewsGetter interface { + SelfSubjectReviews() SelfSubjectReviewInterface +} + +// SelfSubjectReviewInterface has methods to work with SelfSubjectReview resources. +type SelfSubjectReviewInterface interface { + Create(ctx context.Context, selfSubjectReview *v1.SelfSubjectReview, opts metav1.CreateOptions) (*v1.SelfSubjectReview, error) + SelfSubjectReviewExpansion +} + +// selfSubjectReviews implements SelfSubjectReviewInterface +type selfSubjectReviews struct { + client rest.Interface +} + +// newSelfSubjectReviews returns a SelfSubjectReviews +func newSelfSubjectReviews(c *AuthenticationV1Client) *selfSubjectReviews { + return &selfSubjectReviews{ + client: c.RESTClient(), + } +} + +// Create takes the representation of a selfSubjectReview and creates it. Returns the server's representation of the selfSubjectReview, and an error, if there is any. +func (c *selfSubjectReviews) Create(ctx context.Context, selfSubjectReview *v1.SelfSubjectReview, opts metav1.CreateOptions) (result *v1.SelfSubjectReview, err error) { + result = &v1.SelfSubjectReview{} + err = c.client.Post(). + Resource("selfsubjectreviews"). + VersionedParams(&opts, scheme.ParameterCodec). + Body(selfSubjectReview). + Do(ctx). 
+ Into(result) + return +} diff --git a/cluster-autoscaler/vendor/k8s.io/client-go/kubernetes/typed/extensions/v1beta1/fake/fake_networkpolicy.go b/cluster-autoscaler/vendor/k8s.io/client-go/kubernetes/typed/extensions/v1beta1/fake/fake_networkpolicy.go index 7bf8c6972aa8..a32022140a93 100644 --- a/cluster-autoscaler/vendor/k8s.io/client-go/kubernetes/typed/extensions/v1beta1/fake/fake_networkpolicy.go +++ b/cluster-autoscaler/vendor/k8s.io/client-go/kubernetes/typed/extensions/v1beta1/fake/fake_networkpolicy.go @@ -104,18 +104,6 @@ func (c *FakeNetworkPolicies) Update(ctx context.Context, networkPolicy *v1beta1 return obj.(*v1beta1.NetworkPolicy), err } -// UpdateStatus was generated because the type contains a Status member. -// Add a +genclient:noStatus comment above the type to avoid generating UpdateStatus(). -func (c *FakeNetworkPolicies) UpdateStatus(ctx context.Context, networkPolicy *v1beta1.NetworkPolicy, opts v1.UpdateOptions) (*v1beta1.NetworkPolicy, error) { - obj, err := c.Fake. - Invokes(testing.NewUpdateSubresourceAction(networkpoliciesResource, "status", c.ns, networkPolicy), &v1beta1.NetworkPolicy{}) - - if obj == nil { - return nil, err - } - return obj.(*v1beta1.NetworkPolicy), err -} - // Delete takes name of the networkPolicy and deletes it. Returns an error if one occurs. func (c *FakeNetworkPolicies) Delete(ctx context.Context, name string, opts v1.DeleteOptions) error { _, err := c.Fake. @@ -164,26 +152,3 @@ func (c *FakeNetworkPolicies) Apply(ctx context.Context, networkPolicy *extensio } return obj.(*v1beta1.NetworkPolicy), err } - -// ApplyStatus was generated because the type contains a Status member. -// Add a +genclient:noStatus comment above the type to avoid generating ApplyStatus(). 
-func (c *FakeNetworkPolicies) ApplyStatus(ctx context.Context, networkPolicy *extensionsv1beta1.NetworkPolicyApplyConfiguration, opts v1.ApplyOptions) (result *v1beta1.NetworkPolicy, err error) { - if networkPolicy == nil { - return nil, fmt.Errorf("networkPolicy provided to Apply must not be nil") - } - data, err := json.Marshal(networkPolicy) - if err != nil { - return nil, err - } - name := networkPolicy.Name - if name == nil { - return nil, fmt.Errorf("networkPolicy.Name must be provided to Apply") - } - obj, err := c.Fake. - Invokes(testing.NewPatchSubresourceAction(networkpoliciesResource, c.ns, *name, types.ApplyPatchType, data, "status"), &v1beta1.NetworkPolicy{}) - - if obj == nil { - return nil, err - } - return obj.(*v1beta1.NetworkPolicy), err -} diff --git a/cluster-autoscaler/vendor/k8s.io/client-go/kubernetes/typed/extensions/v1beta1/networkpolicy.go b/cluster-autoscaler/vendor/k8s.io/client-go/kubernetes/typed/extensions/v1beta1/networkpolicy.go index f24099b90d37..978b26db0337 100644 --- a/cluster-autoscaler/vendor/k8s.io/client-go/kubernetes/typed/extensions/v1beta1/networkpolicy.go +++ b/cluster-autoscaler/vendor/k8s.io/client-go/kubernetes/typed/extensions/v1beta1/networkpolicy.go @@ -43,7 +43,6 @@ type NetworkPoliciesGetter interface { type NetworkPolicyInterface interface { Create(ctx context.Context, networkPolicy *v1beta1.NetworkPolicy, opts v1.CreateOptions) (*v1beta1.NetworkPolicy, error) Update(ctx context.Context, networkPolicy *v1beta1.NetworkPolicy, opts v1.UpdateOptions) (*v1beta1.NetworkPolicy, error) - UpdateStatus(ctx context.Context, networkPolicy *v1beta1.NetworkPolicy, opts v1.UpdateOptions) (*v1beta1.NetworkPolicy, error) Delete(ctx context.Context, name string, opts v1.DeleteOptions) error DeleteCollection(ctx context.Context, opts v1.DeleteOptions, listOpts v1.ListOptions) error Get(ctx context.Context, name string, opts v1.GetOptions) (*v1beta1.NetworkPolicy, error) @@ -51,7 +50,6 @@ type NetworkPolicyInterface interface { 
Watch(ctx context.Context, opts v1.ListOptions) (watch.Interface, error) Patch(ctx context.Context, name string, pt types.PatchType, data []byte, opts v1.PatchOptions, subresources ...string) (result *v1beta1.NetworkPolicy, err error) Apply(ctx context.Context, networkPolicy *extensionsv1beta1.NetworkPolicyApplyConfiguration, opts v1.ApplyOptions) (result *v1beta1.NetworkPolicy, err error) - ApplyStatus(ctx context.Context, networkPolicy *extensionsv1beta1.NetworkPolicyApplyConfiguration, opts v1.ApplyOptions) (result *v1beta1.NetworkPolicy, err error) NetworkPolicyExpansion } @@ -141,22 +139,6 @@ func (c *networkPolicies) Update(ctx context.Context, networkPolicy *v1beta1.Net return } -// UpdateStatus was generated because the type contains a Status member. -// Add a +genclient:noStatus comment above the type to avoid generating UpdateStatus(). -func (c *networkPolicies) UpdateStatus(ctx context.Context, networkPolicy *v1beta1.NetworkPolicy, opts v1.UpdateOptions) (result *v1beta1.NetworkPolicy, err error) { - result = &v1beta1.NetworkPolicy{} - err = c.client.Put(). - Namespace(c.ns). - Resource("networkpolicies"). - Name(networkPolicy.Name). - SubResource("status"). - VersionedParams(&opts, scheme.ParameterCodec). - Body(networkPolicy). - Do(ctx). - Into(result) - return -} - // Delete takes name of the networkPolicy and deletes it. Returns an error if one occurs. func (c *networkPolicies) Delete(ctx context.Context, name string, opts v1.DeleteOptions) error { return c.client.Delete(). @@ -224,33 +206,3 @@ func (c *networkPolicies) Apply(ctx context.Context, networkPolicy *extensionsv1 Into(result) return } - -// ApplyStatus was generated because the type contains a Status member. -// Add a +genclient:noStatus comment above the type to avoid generating ApplyStatus(). 
-func (c *networkPolicies) ApplyStatus(ctx context.Context, networkPolicy *extensionsv1beta1.NetworkPolicyApplyConfiguration, opts v1.ApplyOptions) (result *v1beta1.NetworkPolicy, err error) { - if networkPolicy == nil { - return nil, fmt.Errorf("networkPolicy provided to Apply must not be nil") - } - patchOpts := opts.ToPatchOptions() - data, err := json.Marshal(networkPolicy) - if err != nil { - return nil, err - } - - name := networkPolicy.Name - if name == nil { - return nil, fmt.Errorf("networkPolicy.Name must be provided to Apply") - } - - result = &v1beta1.NetworkPolicy{} - err = c.client.Patch(types.ApplyPatchType). - Namespace(c.ns). - Resource("networkpolicies"). - Name(*name). - SubResource("status"). - VersionedParams(&patchOpts, scheme.ParameterCodec). - Body(data). - Do(ctx). - Into(result) - return -} diff --git a/cluster-autoscaler/vendor/k8s.io/client-go/kubernetes/typed/networking/v1/fake/fake_networkpolicy.go b/cluster-autoscaler/vendor/k8s.io/client-go/kubernetes/typed/networking/v1/fake/fake_networkpolicy.go index be7413cb8fe6..dde09774c4a2 100644 --- a/cluster-autoscaler/vendor/k8s.io/client-go/kubernetes/typed/networking/v1/fake/fake_networkpolicy.go +++ b/cluster-autoscaler/vendor/k8s.io/client-go/kubernetes/typed/networking/v1/fake/fake_networkpolicy.go @@ -104,18 +104,6 @@ func (c *FakeNetworkPolicies) Update(ctx context.Context, networkPolicy *v1.Netw return obj.(*v1.NetworkPolicy), err } -// UpdateStatus was generated because the type contains a Status member. -// Add a +genclient:noStatus comment above the type to avoid generating UpdateStatus(). -func (c *FakeNetworkPolicies) UpdateStatus(ctx context.Context, networkPolicy *v1.NetworkPolicy, opts metav1.UpdateOptions) (*v1.NetworkPolicy, error) { - obj, err := c.Fake. 
- Invokes(testing.NewUpdateSubresourceAction(networkpoliciesResource, "status", c.ns, networkPolicy), &v1.NetworkPolicy{}) - - if obj == nil { - return nil, err - } - return obj.(*v1.NetworkPolicy), err -} - // Delete takes name of the networkPolicy and deletes it. Returns an error if one occurs. func (c *FakeNetworkPolicies) Delete(ctx context.Context, name string, opts metav1.DeleteOptions) error { _, err := c.Fake. @@ -164,26 +152,3 @@ func (c *FakeNetworkPolicies) Apply(ctx context.Context, networkPolicy *networki } return obj.(*v1.NetworkPolicy), err } - -// ApplyStatus was generated because the type contains a Status member. -// Add a +genclient:noStatus comment above the type to avoid generating ApplyStatus(). -func (c *FakeNetworkPolicies) ApplyStatus(ctx context.Context, networkPolicy *networkingv1.NetworkPolicyApplyConfiguration, opts metav1.ApplyOptions) (result *v1.NetworkPolicy, err error) { - if networkPolicy == nil { - return nil, fmt.Errorf("networkPolicy provided to Apply must not be nil") - } - data, err := json.Marshal(networkPolicy) - if err != nil { - return nil, err - } - name := networkPolicy.Name - if name == nil { - return nil, fmt.Errorf("networkPolicy.Name must be provided to Apply") - } - obj, err := c.Fake. 
- Invokes(testing.NewPatchSubresourceAction(networkpoliciesResource, c.ns, *name, types.ApplyPatchType, data, "status"), &v1.NetworkPolicy{}) - - if obj == nil { - return nil, err - } - return obj.(*v1.NetworkPolicy), err -} diff --git a/cluster-autoscaler/vendor/k8s.io/client-go/kubernetes/typed/networking/v1/networkpolicy.go b/cluster-autoscaler/vendor/k8s.io/client-go/kubernetes/typed/networking/v1/networkpolicy.go index 97afd6278667..d7454ce14525 100644 --- a/cluster-autoscaler/vendor/k8s.io/client-go/kubernetes/typed/networking/v1/networkpolicy.go +++ b/cluster-autoscaler/vendor/k8s.io/client-go/kubernetes/typed/networking/v1/networkpolicy.go @@ -43,7 +43,6 @@ type NetworkPoliciesGetter interface { type NetworkPolicyInterface interface { Create(ctx context.Context, networkPolicy *v1.NetworkPolicy, opts metav1.CreateOptions) (*v1.NetworkPolicy, error) Update(ctx context.Context, networkPolicy *v1.NetworkPolicy, opts metav1.UpdateOptions) (*v1.NetworkPolicy, error) - UpdateStatus(ctx context.Context, networkPolicy *v1.NetworkPolicy, opts metav1.UpdateOptions) (*v1.NetworkPolicy, error) Delete(ctx context.Context, name string, opts metav1.DeleteOptions) error DeleteCollection(ctx context.Context, opts metav1.DeleteOptions, listOpts metav1.ListOptions) error Get(ctx context.Context, name string, opts metav1.GetOptions) (*v1.NetworkPolicy, error) @@ -51,7 +50,6 @@ type NetworkPolicyInterface interface { Watch(ctx context.Context, opts metav1.ListOptions) (watch.Interface, error) Patch(ctx context.Context, name string, pt types.PatchType, data []byte, opts metav1.PatchOptions, subresources ...string) (result *v1.NetworkPolicy, err error) Apply(ctx context.Context, networkPolicy *networkingv1.NetworkPolicyApplyConfiguration, opts metav1.ApplyOptions) (result *v1.NetworkPolicy, err error) - ApplyStatus(ctx context.Context, networkPolicy *networkingv1.NetworkPolicyApplyConfiguration, opts metav1.ApplyOptions) (result *v1.NetworkPolicy, err error) NetworkPolicyExpansion 
} @@ -141,22 +139,6 @@ func (c *networkPolicies) Update(ctx context.Context, networkPolicy *v1.NetworkP return } -// UpdateStatus was generated because the type contains a Status member. -// Add a +genclient:noStatus comment above the type to avoid generating UpdateStatus(). -func (c *networkPolicies) UpdateStatus(ctx context.Context, networkPolicy *v1.NetworkPolicy, opts metav1.UpdateOptions) (result *v1.NetworkPolicy, err error) { - result = &v1.NetworkPolicy{} - err = c.client.Put(). - Namespace(c.ns). - Resource("networkpolicies"). - Name(networkPolicy.Name). - SubResource("status"). - VersionedParams(&opts, scheme.ParameterCodec). - Body(networkPolicy). - Do(ctx). - Into(result) - return -} - // Delete takes name of the networkPolicy and deletes it. Returns an error if one occurs. func (c *networkPolicies) Delete(ctx context.Context, name string, opts metav1.DeleteOptions) error { return c.client.Delete(). @@ -224,33 +206,3 @@ func (c *networkPolicies) Apply(ctx context.Context, networkPolicy *networkingv1 Into(result) return } - -// ApplyStatus was generated because the type contains a Status member. -// Add a +genclient:noStatus comment above the type to avoid generating ApplyStatus(). -func (c *networkPolicies) ApplyStatus(ctx context.Context, networkPolicy *networkingv1.NetworkPolicyApplyConfiguration, opts metav1.ApplyOptions) (result *v1.NetworkPolicy, err error) { - if networkPolicy == nil { - return nil, fmt.Errorf("networkPolicy provided to Apply must not be nil") - } - patchOpts := opts.ToPatchOptions() - data, err := json.Marshal(networkPolicy) - if err != nil { - return nil, err - } - - name := networkPolicy.Name - if name == nil { - return nil, fmt.Errorf("networkPolicy.Name must be provided to Apply") - } - - result = &v1.NetworkPolicy{} - err = c.client.Patch(types.ApplyPatchType). - Namespace(c.ns). - Resource("networkpolicies"). - Name(*name). - SubResource("status"). - VersionedParams(&patchOpts, scheme.ParameterCodec). - Body(data). 
- Do(ctx). - Into(result) - return -} diff --git a/cluster-autoscaler/vendor/k8s.io/client-go/listers/admissionregistration/v1beta1/expansion_generated.go b/cluster-autoscaler/vendor/k8s.io/client-go/listers/admissionregistration/v1beta1/expansion_generated.go index 8960abc4f483..7148781f4213 100644 --- a/cluster-autoscaler/vendor/k8s.io/client-go/listers/admissionregistration/v1beta1/expansion_generated.go +++ b/cluster-autoscaler/vendor/k8s.io/client-go/listers/admissionregistration/v1beta1/expansion_generated.go @@ -22,6 +22,14 @@ package v1beta1 // MutatingWebhookConfigurationLister. type MutatingWebhookConfigurationListerExpansion interface{} +// ValidatingAdmissionPolicyListerExpansion allows custom methods to be added to +// ValidatingAdmissionPolicyLister. +type ValidatingAdmissionPolicyListerExpansion interface{} + +// ValidatingAdmissionPolicyBindingListerExpansion allows custom methods to be added to +// ValidatingAdmissionPolicyBindingLister. +type ValidatingAdmissionPolicyBindingListerExpansion interface{} + // ValidatingWebhookConfigurationListerExpansion allows custom methods to be added to // ValidatingWebhookConfigurationLister. type ValidatingWebhookConfigurationListerExpansion interface{} diff --git a/cluster-autoscaler/vendor/k8s.io/client-go/listers/admissionregistration/v1beta1/validatingadmissionpolicy.go b/cluster-autoscaler/vendor/k8s.io/client-go/listers/admissionregistration/v1beta1/validatingadmissionpolicy.go new file mode 100644 index 000000000000..7018b3ceec67 --- /dev/null +++ b/cluster-autoscaler/vendor/k8s.io/client-go/listers/admissionregistration/v1beta1/validatingadmissionpolicy.go @@ -0,0 +1,68 @@ +/* +Copyright The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. 
+You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +// Code generated by lister-gen. DO NOT EDIT. + +package v1beta1 + +import ( + v1beta1 "k8s.io/api/admissionregistration/v1beta1" + "k8s.io/apimachinery/pkg/api/errors" + "k8s.io/apimachinery/pkg/labels" + "k8s.io/client-go/tools/cache" +) + +// ValidatingAdmissionPolicyLister helps list ValidatingAdmissionPolicies. +// All objects returned here must be treated as read-only. +type ValidatingAdmissionPolicyLister interface { + // List lists all ValidatingAdmissionPolicies in the indexer. + // Objects returned here must be treated as read-only. + List(selector labels.Selector) (ret []*v1beta1.ValidatingAdmissionPolicy, err error) + // Get retrieves the ValidatingAdmissionPolicy from the index for a given name. + // Objects returned here must be treated as read-only. + Get(name string) (*v1beta1.ValidatingAdmissionPolicy, error) + ValidatingAdmissionPolicyListerExpansion +} + +// validatingAdmissionPolicyLister implements the ValidatingAdmissionPolicyLister interface. +type validatingAdmissionPolicyLister struct { + indexer cache.Indexer +} + +// NewValidatingAdmissionPolicyLister returns a new ValidatingAdmissionPolicyLister. +func NewValidatingAdmissionPolicyLister(indexer cache.Indexer) ValidatingAdmissionPolicyLister { + return &validatingAdmissionPolicyLister{indexer: indexer} +} + +// List lists all ValidatingAdmissionPolicies in the indexer. 
+func (s *validatingAdmissionPolicyLister) List(selector labels.Selector) (ret []*v1beta1.ValidatingAdmissionPolicy, err error) { + err = cache.ListAll(s.indexer, selector, func(m interface{}) { + ret = append(ret, m.(*v1beta1.ValidatingAdmissionPolicy)) + }) + return ret, err +} + +// Get retrieves the ValidatingAdmissionPolicy from the index for a given name. +func (s *validatingAdmissionPolicyLister) Get(name string) (*v1beta1.ValidatingAdmissionPolicy, error) { + obj, exists, err := s.indexer.GetByKey(name) + if err != nil { + return nil, err + } + if !exists { + return nil, errors.NewNotFound(v1beta1.Resource("validatingadmissionpolicy"), name) + } + return obj.(*v1beta1.ValidatingAdmissionPolicy), nil +} diff --git a/cluster-autoscaler/vendor/k8s.io/client-go/listers/admissionregistration/v1beta1/validatingadmissionpolicybinding.go b/cluster-autoscaler/vendor/k8s.io/client-go/listers/admissionregistration/v1beta1/validatingadmissionpolicybinding.go new file mode 100644 index 000000000000..5fcebfd22fbe --- /dev/null +++ b/cluster-autoscaler/vendor/k8s.io/client-go/listers/admissionregistration/v1beta1/validatingadmissionpolicybinding.go @@ -0,0 +1,68 @@ +/* +Copyright The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +// Code generated by lister-gen. DO NOT EDIT. 
+ +package v1beta1 + +import ( + v1beta1 "k8s.io/api/admissionregistration/v1beta1" + "k8s.io/apimachinery/pkg/api/errors" + "k8s.io/apimachinery/pkg/labels" + "k8s.io/client-go/tools/cache" +) + +// ValidatingAdmissionPolicyBindingLister helps list ValidatingAdmissionPolicyBindings. +// All objects returned here must be treated as read-only. +type ValidatingAdmissionPolicyBindingLister interface { + // List lists all ValidatingAdmissionPolicyBindings in the indexer. + // Objects returned here must be treated as read-only. + List(selector labels.Selector) (ret []*v1beta1.ValidatingAdmissionPolicyBinding, err error) + // Get retrieves the ValidatingAdmissionPolicyBinding from the index for a given name. + // Objects returned here must be treated as read-only. + Get(name string) (*v1beta1.ValidatingAdmissionPolicyBinding, error) + ValidatingAdmissionPolicyBindingListerExpansion +} + +// validatingAdmissionPolicyBindingLister implements the ValidatingAdmissionPolicyBindingLister interface. +type validatingAdmissionPolicyBindingLister struct { + indexer cache.Indexer +} + +// NewValidatingAdmissionPolicyBindingLister returns a new ValidatingAdmissionPolicyBindingLister. +func NewValidatingAdmissionPolicyBindingLister(indexer cache.Indexer) ValidatingAdmissionPolicyBindingLister { + return &validatingAdmissionPolicyBindingLister{indexer: indexer} +} + +// List lists all ValidatingAdmissionPolicyBindings in the indexer. +func (s *validatingAdmissionPolicyBindingLister) List(selector labels.Selector) (ret []*v1beta1.ValidatingAdmissionPolicyBinding, err error) { + err = cache.ListAll(s.indexer, selector, func(m interface{}) { + ret = append(ret, m.(*v1beta1.ValidatingAdmissionPolicyBinding)) + }) + return ret, err +} + +// Get retrieves the ValidatingAdmissionPolicyBinding from the index for a given name. 
+func (s *validatingAdmissionPolicyBindingLister) Get(name string) (*v1beta1.ValidatingAdmissionPolicyBinding, error) { + obj, exists, err := s.indexer.GetByKey(name) + if err != nil { + return nil, err + } + if !exists { + return nil, errors.NewNotFound(v1beta1.Resource("validatingadmissionpolicybinding"), name) + } + return obj.(*v1beta1.ValidatingAdmissionPolicyBinding), nil +} diff --git a/cluster-autoscaler/vendor/k8s.io/client-go/openapi/client.go b/cluster-autoscaler/vendor/k8s.io/client-go/openapi/client.go index 7b58762acfd5..6a43057187e9 100644 --- a/cluster-autoscaler/vendor/k8s.io/client-go/openapi/client.go +++ b/cluster-autoscaler/vendor/k8s.io/client-go/openapi/client.go @@ -19,6 +19,7 @@ package openapi import ( "context" "encoding/json" + "strings" "k8s.io/client-go/rest" "k8s.io/kube-openapi/pkg/handler3" @@ -58,7 +59,11 @@ func (c *client) Paths() (map[string]GroupVersion, error) { // Create GroupVersions for each element of the result result := map[string]GroupVersion{} for k, v := range discoMap.Paths { - result[k] = newGroupVersion(c, v) + // If the server returned a URL rooted at /openapi/v3, preserve any additional client-side prefix. + // If the server returned a URL not rooted at /openapi/v3, treat it as an actual server-relative URL. 
+ // See https://github.com/kubernetes/kubernetes/issues/117463 for details + useClientPrefix := strings.HasPrefix(v.ServerRelativeURL, "/openapi/v3") + result[k] = newGroupVersion(c, v, useClientPrefix) } return result, nil } diff --git a/cluster-autoscaler/vendor/k8s.io/client-go/openapi/groupversion.go b/cluster-autoscaler/vendor/k8s.io/client-go/openapi/groupversion.go index 32133a29b8a4..601dcbe3ccb2 100644 --- a/cluster-autoscaler/vendor/k8s.io/client-go/openapi/groupversion.go +++ b/cluster-autoscaler/vendor/k8s.io/client-go/openapi/groupversion.go @@ -18,6 +18,7 @@ package openapi import ( "context" + "net/url" "k8s.io/kube-openapi/pkg/handler3" ) @@ -29,18 +30,41 @@ type GroupVersion interface { } type groupversion struct { - client *client - item handler3.OpenAPIV3DiscoveryGroupVersion + client *client + item handler3.OpenAPIV3DiscoveryGroupVersion + useClientPrefix bool } -func newGroupVersion(client *client, item handler3.OpenAPIV3DiscoveryGroupVersion) *groupversion { - return &groupversion{client: client, item: item} +func newGroupVersion(client *client, item handler3.OpenAPIV3DiscoveryGroupVersion, useClientPrefix bool) *groupversion { + return &groupversion{client: client, item: item, useClientPrefix: useClientPrefix} } func (g *groupversion) Schema(contentType string) ([]byte, error) { - return g.client.restClient.Get(). - RequestURI(g.item.ServerRelativeURL). - SetHeader("Accept", contentType). - Do(context.TODO()). - Raw() + if !g.useClientPrefix { + return g.client.restClient.Get(). + RequestURI(g.item.ServerRelativeURL). + SetHeader("Accept", contentType). + Do(context.TODO()). + Raw() + } + + locator, err := url.Parse(g.item.ServerRelativeURL) + if err != nil { + return nil, err + } + + path := g.client.restClient.Get(). + AbsPath(locator.Path). + SetHeader("Accept", contentType) + + // Other than root endpoints(openapiv3/apis), resources have hash query parameter to support etags. 
+ // However, absPath does not support handling query parameters internally, + // so that hash query parameter is added manually + for k, value := range locator.Query() { + for _, v := range value { + path.Param(k, v) + } + } + + return path.Do(context.TODO()).Raw() } diff --git a/cluster-autoscaler/vendor/k8s.io/client-go/openapi/typeconverter.go b/cluster-autoscaler/vendor/k8s.io/client-go/openapi/typeconverter.go new file mode 100644 index 000000000000..4b91e66d4516 --- /dev/null +++ b/cluster-autoscaler/vendor/k8s.io/client-go/openapi/typeconverter.go @@ -0,0 +1,48 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. 
+*/ + +package openapi + +import ( + "encoding/json" + "fmt" + + "k8s.io/apimachinery/pkg/util/managedfields" + "k8s.io/kube-openapi/pkg/spec3" + "k8s.io/kube-openapi/pkg/validation/spec" +) + +func NewTypeConverter(client Client, preserveUnknownFields bool) (managedfields.TypeConverter, error) { + spec := map[string]*spec.Schema{} + paths, err := client.Paths() + if err != nil { + return nil, fmt.Errorf("failed to list paths: %w", err) + } + for _, gv := range paths { + s, err := gv.Schema("application/json") + if err != nil { + return nil, fmt.Errorf("failed to download schema: %w", err) + } + var openapi spec3.OpenAPI + if err := json.Unmarshal(s, &openapi); err != nil { + return nil, fmt.Errorf("failed to parse schema: %w", err) + } + for k, v := range openapi.Components.Schemas { + spec[k] = v + } + } + return managedfields.NewTypeConverter(spec, preserveUnknownFields) +} diff --git a/cluster-autoscaler/vendor/k8s.io/client-go/plugin/pkg/client/auth/exec/exec.go b/cluster-autoscaler/vendor/k8s.io/client-go/plugin/pkg/client/auth/exec/exec.go index 5331b237a70b..b471f5cc6486 100644 --- a/cluster-autoscaler/vendor/k8s.io/client-go/plugin/pkg/client/auth/exec/exec.go +++ b/cluster-autoscaler/vendor/k8s.io/client-go/plugin/pkg/client/auth/exec/exec.go @@ -32,12 +32,12 @@ import ( "sync" "time" - "github.com/davecgh/go-spew/spew" "golang.org/x/term" "k8s.io/apimachinery/pkg/runtime" "k8s.io/apimachinery/pkg/runtime/schema" "k8s.io/apimachinery/pkg/runtime/serializer" + "k8s.io/apimachinery/pkg/util/dump" utilnet "k8s.io/apimachinery/pkg/util/net" "k8s.io/client-go/pkg/apis/clientauthentication" "k8s.io/client-go/pkg/apis/clientauthentication/install" @@ -81,8 +81,6 @@ func newCache() *cache { return &cache{m: make(map[string]*Authenticator)} } -var spewConfig = &spew.ConfigState{DisableMethods: true, Indent: " "} - func cacheKey(conf *api.ExecConfig, cluster *clientauthentication.Cluster) string { key := struct { conf *api.ExecConfig @@ -91,7 +89,7 @@ func 
cacheKey(conf *api.ExecConfig, cluster *clientauthentication.Cluster) strin conf: conf, cluster: cluster, } - return spewConfig.Sprint(key) + return dump.Pretty(key) } type cache struct { diff --git a/cluster-autoscaler/vendor/k8s.io/client-go/rest/config.go b/cluster-autoscaler/vendor/k8s.io/client-go/rest/config.go index 81e3cbd68971..f8ff7e928cff 100644 --- a/cluster-autoscaler/vendor/k8s.io/client-go/rest/config.go +++ b/cluster-autoscaler/vendor/k8s.io/client-go/rest/config.go @@ -316,7 +316,7 @@ func RESTClientFor(config *Config) (*RESTClient, error) { // Validate config.Host before constructing the transport/client so we can fail fast. // ServerURL will be obtained later in RESTClientForConfigAndClient() - _, _, err := defaultServerUrlFor(config) + _, _, err := DefaultServerUrlFor(config) if err != nil { return nil, err } @@ -343,7 +343,7 @@ func RESTClientForConfigAndClient(config *Config, httpClient *http.Client) (*RES return nil, fmt.Errorf("NegotiatedSerializer is required when initializing a RESTClient") } - baseURL, versionedAPIPath, err := defaultServerUrlFor(config) + baseURL, versionedAPIPath, err := DefaultServerUrlFor(config) if err != nil { return nil, err } @@ -390,7 +390,7 @@ func UnversionedRESTClientFor(config *Config) (*RESTClient, error) { // Validate config.Host before constructing the transport/client so we can fail fast. 
// ServerURL will be obtained later in UnversionedRESTClientForConfigAndClient() - _, _, err := defaultServerUrlFor(config) + _, _, err := DefaultServerUrlFor(config) if err != nil { return nil, err } @@ -410,7 +410,7 @@ func UnversionedRESTClientForConfigAndClient(config *Config, httpClient *http.Cl return nil, fmt.Errorf("NegotiatedSerializer is required when initializing a RESTClient") } - baseURL, versionedAPIPath, err := defaultServerUrlFor(config) + baseURL, versionedAPIPath, err := DefaultServerUrlFor(config) if err != nil { return nil, err } @@ -548,7 +548,7 @@ func InClusterConfig() (*Config, error) { // Note: the Insecure flag is ignored when testing for this value, so MITM attacks are // still possible. func IsConfigTransportTLS(config Config) bool { - baseURL, _, err := defaultServerUrlFor(&config) + baseURL, _, err := DefaultServerUrlFor(&config) if err != nil { return false } diff --git a/cluster-autoscaler/vendor/k8s.io/client-go/rest/request.go b/cluster-autoscaler/vendor/k8s.io/client-go/rest/request.go index bb6fb4decb79..850e57daebd7 100644 --- a/cluster-autoscaler/vendor/k8s.io/client-go/rest/request.go +++ b/cluster-autoscaler/vendor/k8s.io/client-go/rest/request.go @@ -24,6 +24,7 @@ import ( "io" "mime" "net/http" + "net/http/httptrace" "net/url" "os" "path" @@ -925,15 +926,38 @@ func (r *Request) newHTTPRequest(ctx context.Context) (*http.Request, error) { } url := r.URL().String() - req, err := http.NewRequest(r.verb, url, body) + req, err := http.NewRequestWithContext(httptrace.WithClientTrace(ctx, newDNSMetricsTrace(ctx)), r.verb, url, body) if err != nil { return nil, err } - req = req.WithContext(ctx) req.Header = r.headers return req, nil } +// newDNSMetricsTrace returns an HTTP trace that tracks time spent on DNS lookups per host. +// This metric is available in client as "rest_client_dns_resolution_duration_seconds". 
+func newDNSMetricsTrace(ctx context.Context) *httptrace.ClientTrace { + type dnsMetric struct { + start time.Time + host string + sync.Mutex + } + dns := &dnsMetric{} + return &httptrace.ClientTrace{ + DNSStart: func(info httptrace.DNSStartInfo) { + dns.Lock() + defer dns.Unlock() + dns.start = time.Now() + dns.host = info.Host + }, + DNSDone: func(info httptrace.DNSDoneInfo) { + dns.Lock() + defer dns.Unlock() + metrics.ResolverLatency.Observe(ctx, dns.host, time.Since(dns.start)) + }, + } +} + // request connects to the server and invokes the provided function when a server response is // received. It handles retry behavior and up front validation of requests. It will invoke // fn at most once. It will return an error if a problem occurred prior to connecting to the diff --git a/cluster-autoscaler/vendor/k8s.io/client-go/rest/url_utils.go b/cluster-autoscaler/vendor/k8s.io/client-go/rest/url_utils.go index a56d1838d8fd..c4ce6e3b8fcd 100644 --- a/cluster-autoscaler/vendor/k8s.io/client-go/rest/url_utils.go +++ b/cluster-autoscaler/vendor/k8s.io/client-go/rest/url_utils.go @@ -77,9 +77,9 @@ func DefaultVersionedAPIPath(apiPath string, groupVersion schema.GroupVersion) s return versionedAPIPath } -// defaultServerUrlFor is shared between IsConfigTransportTLS and RESTClientFor. It +// DefaultServerUrlFor is shared between IsConfigTransportTLS and RESTClientFor. It // requires Host and Version to be set prior to being called. -func defaultServerUrlFor(config *Config) (*url.URL, string, error) { +func DefaultServerUrlFor(config *Config) (*url.URL, string, error) { // TODO: move the default to secure when the apiserver supports TLS by default // config.Insecure is taken to mean "I want HTTPS but don't bother checking the certs against a CA." 
hasCA := len(config.CAFile) != 0 || len(config.CAData) != 0 diff --git a/cluster-autoscaler/vendor/k8s.io/client-go/tools/cache/OWNERS b/cluster-autoscaler/vendor/k8s.io/client-go/tools/cache/OWNERS index 726205b3dff3..921ac2fa02b3 100644 --- a/cluster-autoscaler/vendor/k8s.io/client-go/tools/cache/OWNERS +++ b/cluster-autoscaler/vendor/k8s.io/client-go/tools/cache/OWNERS @@ -2,7 +2,6 @@ approvers: - thockin - - lavalamp - smarterclayton - wojtek-t - deads2k @@ -11,7 +10,6 @@ approvers: - ncdc reviewers: - thockin - - lavalamp - smarterclayton - wojtek-t - deads2k @@ -26,3 +24,5 @@ reviewers: - dims - ingvagabund - ncdc +emeritus_approvers: + - lavalamp diff --git a/cluster-autoscaler/vendor/k8s.io/client-go/tools/cache/controller.go b/cluster-autoscaler/vendor/k8s.io/client-go/tools/cache/controller.go index f437f286166f..8a1104bde808 100644 --- a/cluster-autoscaler/vendor/k8s.io/client-go/tools/cache/controller.go +++ b/cluster-autoscaler/vendor/k8s.io/client-go/tools/cache/controller.go @@ -18,7 +18,6 @@ package cache import ( "errors" - "os" "sync" "time" @@ -148,9 +147,6 @@ func (c *controller) Run(stopCh <-chan struct{}) { if c.config.WatchErrorHandler != nil { r.watchErrorHandler = c.config.WatchErrorHandler } - if s := os.Getenv("ENABLE_CLIENT_GO_WATCH_LIST_ALPHA"); len(s) > 0 { - r.UseWatchList = true - } c.reflectorMutex.Lock() c.reflector = r diff --git a/cluster-autoscaler/vendor/k8s.io/client-go/tools/cache/object-names.go b/cluster-autoscaler/vendor/k8s.io/client-go/tools/cache/object-names.go new file mode 100644 index 000000000000..aa8dbb199373 --- /dev/null +++ b/cluster-autoscaler/vendor/k8s.io/client-go/tools/cache/object-names.go @@ -0,0 +1,65 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. 
+You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package cache + +import ( + "k8s.io/apimachinery/pkg/types" +) + +// ObjectName is a reference to an object of some implicit kind +type ObjectName struct { + Namespace string + Name string +} + +// NewObjectName constructs a new one +func NewObjectName(namespace, name string) ObjectName { + return ObjectName{Namespace: namespace, Name: name} +} + +// Parts is the inverse of the constructor +func (objName ObjectName) Parts() (namespace, name string) { + return objName.Namespace, objName.Name +} + +// String returns the standard string encoding, +// which is designed to match the historical behavior of MetaNamespaceKeyFunc. +// Note this behavior is different from the String method of types.NamespacedName. 
+func (objName ObjectName) String() string { + if len(objName.Namespace) > 0 { + return objName.Namespace + "/" + objName.Name + } + return objName.Name +} + +// ParseObjectName tries to parse the standard encoding +func ParseObjectName(str string) (ObjectName, error) { + var objName ObjectName + var err error + objName.Namespace, objName.Name, err = SplitMetaNamespaceKey(str) + return objName, err +} + +// NamespacedNameAsObjectName rebrands the given NamespacedName as an ObjectName +func NamespacedNameAsObjectName(nn types.NamespacedName) ObjectName { + return NewObjectName(nn.Namespace, nn.Name) +} + +// AsNamespacedName rebrands as a NamespacedName +func (objName ObjectName) AsNamespacedName() types.NamespacedName { + return types.NamespacedName{Namespace: objName.Namespace, Name: objName.Name} +} diff --git a/cluster-autoscaler/vendor/k8s.io/client-go/tools/cache/reflector.go b/cluster-autoscaler/vendor/k8s.io/client-go/tools/cache/reflector.go index 2b335c104c87..45eaff52853f 100644 --- a/cluster-autoscaler/vendor/k8s.io/client-go/tools/cache/reflector.go +++ b/cluster-autoscaler/vendor/k8s.io/client-go/tools/cache/reflector.go @@ -22,6 +22,7 @@ import ( "fmt" "io" "math/rand" + "os" "reflect" "strings" "sync" @@ -69,9 +70,7 @@ type Reflector struct { listerWatcher ListerWatcher // backoff manages backoff of ListWatch backoffManager wait.BackoffManager - // initConnBackoffManager manages backoff the initial connection with the Watch call of ListAndWatch. - initConnBackoffManager wait.BackoffManager - resyncPeriod time.Duration + resyncPeriod time.Duration // clock allows tests to manipulate time clock clock.Clock // paginatedResult defines whether pagination should be forced for list calls. @@ -220,11 +219,10 @@ func NewReflectorWithOptions(lw ListerWatcher, expectedType interface{}, store S // We used to make the call every 1sec (1 QPS), the goal here is to achieve ~98% traffic reduction when // API server is not healthy. 
With these parameters, backoff will stop at [30,60) sec interval which is // 0.22 QPS. If we don't backoff for 2min, assume API server is healthy and we reset the backoff. - backoffManager: wait.NewExponentialBackoffManager(800*time.Millisecond, 30*time.Second, 2*time.Minute, 2.0, 1.0, reflectorClock), - initConnBackoffManager: wait.NewExponentialBackoffManager(800*time.Millisecond, 30*time.Second, 2*time.Minute, 2.0, 1.0, reflectorClock), - clock: reflectorClock, - watchErrorHandler: WatchErrorHandler(DefaultWatchErrorHandler), - expectedType: reflect.TypeOf(expectedType), + backoffManager: wait.NewExponentialBackoffManager(800*time.Millisecond, 30*time.Second, 2*time.Minute, 2.0, 1.0, reflectorClock), + clock: reflectorClock, + watchErrorHandler: WatchErrorHandler(DefaultWatchErrorHandler), + expectedType: reflect.TypeOf(expectedType), } if r.name == "" { @@ -239,6 +237,10 @@ func NewReflectorWithOptions(lw ListerWatcher, expectedType interface{}, store S r.expectedGVK = getExpectedGVKFromObject(expectedType) } + if s := os.Getenv("ENABLE_CLIENT_GO_WATCH_LIST_ALPHA"); len(s) > 0 { + r.UseWatchList = true + } + return r } @@ -420,7 +422,7 @@ func (r *Reflector) watch(w watch.Interface, stopCh <-chan struct{}, resyncerrc select { case <-stopCh: return nil - case <-r.initConnBackoffManager.Backoff().C(): + case <-r.backoffManager.Backoff().C(): continue } } @@ -446,7 +448,7 @@ func (r *Reflector) watch(w watch.Interface, stopCh <-chan struct{}, resyncerrc select { case <-stopCh: return nil - case <-r.initConnBackoffManager.Backoff().C(): + case <-r.backoffManager.Backoff().C(): continue } case apierrors.IsInternalError(err) && retry.ShouldRetry(): @@ -508,7 +510,7 @@ func (r *Reflector) list(stopCh <-chan struct{}) error { pager.PageSize = 0 } - list, paginatedResult, err = pager.List(context.Background(), options) + list, paginatedResult, err = pager.ListWithAlloc(context.Background(), options) if isExpiredError(err) || isTooLargeResourceVersionError(err) { 
r.setIsLastSyncResourceVersionUnavailable(true) // Retry immediately if the resource version used to list is unavailable. @@ -517,7 +519,7 @@ func (r *Reflector) list(stopCh <-chan struct{}) error { // resource version it is listing at is expired or the cache may not yet be synced to the provided // resource version. So we need to fallback to resourceVersion="" in all to recover and ensure // the reflector makes forward progress. - list, paginatedResult, err = pager.List(context.Background(), metav1.ListOptions{ResourceVersion: r.relistResourceVersion()}) + list, paginatedResult, err = pager.ListWithAlloc(context.Background(), metav1.ListOptions{ResourceVersion: r.relistResourceVersion()}) } close(listCh) }() @@ -555,7 +557,7 @@ func (r *Reflector) list(stopCh <-chan struct{}) error { } resourceVersion = listMetaInterface.GetResourceVersion() initTrace.Step("Resource version extracted") - items, err := meta.ExtractList(list) + items, err := meta.ExtractListWithAlloc(list) if err != nil { return fmt.Errorf("unable to understand list result %#v (%v)", list, err) } @@ -599,7 +601,7 @@ func (r *Reflector) watchList(stopCh <-chan struct{}) (watch.Interface, error) { isErrorRetriableWithSideEffectsFn := func(err error) bool { if canRetry := isWatchErrorRetriable(err); canRetry { klog.V(2).Infof("%s: watch-list of %v returned %v - backing off", r.name, r.typeDescription, err) - <-r.initConnBackoffManager.Backoff().C() + <-r.backoffManager.Backoff().C() return true } if isExpiredError(err) || isTooLargeResourceVersionError(err) { diff --git a/cluster-autoscaler/vendor/k8s.io/client-go/tools/cache/shared_informer.go b/cluster-autoscaler/vendor/k8s.io/client-go/tools/cache/shared_informer.go index a889fdbc36bd..be8694ddb620 100644 --- a/cluster-autoscaler/vendor/k8s.io/client-go/tools/cache/shared_informer.go +++ b/cluster-autoscaler/vendor/k8s.io/client-go/tools/cache/shared_informer.go @@ -459,29 +459,30 @@ func (s *sharedIndexInformer) Run(stopCh <-chan struct{}) { 
klog.Warningf("The sharedIndexInformer has started, run more than once is not allowed") return } - fifo := NewDeltaFIFOWithOptions(DeltaFIFOOptions{ - KnownObjects: s.indexer, - EmitDeltaTypeReplaced: true, - Transformer: s.transform, - }) - - cfg := &Config{ - Queue: fifo, - ListerWatcher: s.listerWatcher, - ObjectType: s.objectType, - ObjectDescription: s.objectDescription, - FullResyncPeriod: s.resyncCheckPeriod, - RetryOnError: false, - ShouldResync: s.processor.shouldResync, - - Process: s.HandleDeltas, - WatchErrorHandler: s.watchErrorHandler, - } func() { s.startedLock.Lock() defer s.startedLock.Unlock() + fifo := NewDeltaFIFOWithOptions(DeltaFIFOOptions{ + KnownObjects: s.indexer, + EmitDeltaTypeReplaced: true, + Transformer: s.transform, + }) + + cfg := &Config{ + Queue: fifo, + ListerWatcher: s.listerWatcher, + ObjectType: s.objectType, + ObjectDescription: s.objectDescription, + FullResyncPeriod: s.resyncCheckPeriod, + RetryOnError: false, + ShouldResync: s.processor.shouldResync, + + Process: s.HandleDeltas, + WatchErrorHandler: s.watchErrorHandler, + } + s.controller = New(cfg) s.controller.(*controller).clock = s.clock s.started = true diff --git a/cluster-autoscaler/vendor/k8s.io/client-go/tools/cache/store.go b/cluster-autoscaler/vendor/k8s.io/client-go/tools/cache/store.go index 5308ea748001..5cc3f42ec17e 100644 --- a/cluster-autoscaler/vendor/k8s.io/client-go/tools/cache/store.go +++ b/cluster-autoscaler/vendor/k8s.io/client-go/tools/cache/store.go @@ -21,6 +21,7 @@ import ( "strings" "k8s.io/apimachinery/pkg/api/meta" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" ) // Store is a generic object storage and processing interface. A @@ -99,20 +100,38 @@ type ExplicitKey string // The key uses the format / unless is empty, then // it's just . // -// TODO: replace key-as-string with a key-as-struct so that this -// packing/unpacking won't be necessary. +// Clients that want a structured alternative can use ObjectToName or MetaObjectToName. 
+// Note: this would not be a client that wants a key for a Store because those are +// necessarily strings. +// +// TODO maybe some day?: change Store to be keyed differently func MetaNamespaceKeyFunc(obj interface{}) (string, error) { if key, ok := obj.(ExplicitKey); ok { return string(key), nil } + objName, err := ObjectToName(obj) + if err != nil { + return "", err + } + return objName.String(), nil +} + +// ObjectToName returns the structured name for the given object, +// if indeed it can be viewed as a metav1.Object. +func ObjectToName(obj interface{}) (ObjectName, error) { meta, err := meta.Accessor(obj) if err != nil { - return "", fmt.Errorf("object has no meta: %v", err) + return ObjectName{}, fmt.Errorf("object has no meta: %v", err) } - if len(meta.GetNamespace()) > 0 { - return meta.GetNamespace() + "/" + meta.GetName(), nil + return MetaObjectToName(meta), nil +} + +// MetaObjectToName returns the structured name for the given object +func MetaObjectToName(obj metav1.Object) ObjectName { + if len(obj.GetNamespace()) > 0 { + return ObjectName{Namespace: obj.GetNamespace(), Name: obj.GetName()} } - return meta.GetName(), nil + return ObjectName{Namespace: "", Name: obj.GetName()} } // SplitMetaNamespaceKey returns the namespace and name that diff --git a/cluster-autoscaler/vendor/k8s.io/client-go/tools/clientcmd/api/types.go b/cluster-autoscaler/vendor/k8s.io/client-go/tools/clientcmd/api/types.go index 71fb821b1e8f..ae8b8c703863 100644 --- a/cluster-autoscaler/vendor/k8s.io/client-go/tools/clientcmd/api/types.go +++ b/cluster-autoscaler/vendor/k8s.io/client-go/tools/clientcmd/api/types.go @@ -67,7 +67,7 @@ type Preferences struct { type Cluster struct { // LocationOfOrigin indicates where this object came from. It is used for round tripping config post-merge, but never serialized. // +k8s:conversion-gen=false - LocationOfOrigin string + LocationOfOrigin string `json:"-"` // Server is the address of the kubernetes cluster (https://hostname:port). 
Server string `json:"server"` // TLSServerName is used to check server certificate. If TLSServerName is empty, the hostname used to contact the server is used. @@ -107,7 +107,7 @@ type Cluster struct { type AuthInfo struct { // LocationOfOrigin indicates where this object came from. It is used for round tripping config post-merge, but never serialized. // +k8s:conversion-gen=false - LocationOfOrigin string + LocationOfOrigin string `json:"-"` // ClientCertificate is the path to a client cert file for TLS. // +optional ClientCertificate string `json:"client-certificate,omitempty"` @@ -159,7 +159,7 @@ type AuthInfo struct { type Context struct { // LocationOfOrigin indicates where this object came from. It is used for round tripping config post-merge, but never serialized. // +k8s:conversion-gen=false - LocationOfOrigin string + LocationOfOrigin string `json:"-"` // Cluster is the name of the cluster for this context Cluster string `json:"cluster"` // AuthInfo is the name of the authInfo for this context @@ -252,7 +252,7 @@ type ExecConfig struct { // recommended as one of the prime benefits of exec plugins is that no secrets need // to be stored directly in the kubeconfig. // +k8s:conversion-gen=false - Config runtime.Object + Config runtime.Object `json:"-"` // InteractiveMode determines this plugin's relationship with standard input. Valid // values are "Never" (this exec plugin never uses standard input), "IfAvailable" (this @@ -264,7 +264,7 @@ type ExecConfig struct { // client.authentication.k8s.io/v1beta1, then this field is optional and defaults // to "IfAvailable" when unset. Otherwise, this field is required. // +optional - InteractiveMode ExecInteractiveMode + InteractiveMode ExecInteractiveMode `json:"interactiveMode,omitempty"` // StdinUnavailable indicates whether the exec authenticator can pass standard // input through to this exec plugin. 
For example, a higher level entity might be using @@ -272,14 +272,14 @@ type ExecConfig struct { // plugin to use standard input. This is kept here in order to keep all of the exec configuration // together, but it is never serialized. // +k8s:conversion-gen=false - StdinUnavailable bool + StdinUnavailable bool `json:"-"` // StdinUnavailableMessage is an optional message to be displayed when the exec authenticator // cannot successfully run this exec plugin because it needs to use standard input and // StdinUnavailable is true. For example, a process that is already using standard input to // read user instructions might set this to "used by my-program to read user instructions". // +k8s:conversion-gen=false - StdinUnavailableMessage string + StdinUnavailableMessage string `json:"-"` } var _ fmt.Stringer = new(ExecConfig) diff --git a/cluster-autoscaler/vendor/k8s.io/client-go/tools/clientcmd/loader.go b/cluster-autoscaler/vendor/k8s.io/client-go/tools/clientcmd/loader.go index 44de1d41d836..b75737f1c904 100644 --- a/cluster-autoscaler/vendor/k8s.io/client-go/tools/clientcmd/loader.go +++ b/cluster-autoscaler/vendor/k8s.io/client-go/tools/clientcmd/loader.go @@ -128,6 +128,28 @@ type ClientConfigLoadingRules struct { // WarnIfAllMissing indicates whether the configuration files pointed by KUBECONFIG environment variable are present or not. // In case of missing files, it warns the user about the missing files. WarnIfAllMissing bool + + // Warner is the warning log callback to use in case of missing files. 
+ Warner WarningHandler +} + +// WarningHandler allows to set the logging function to use +type WarningHandler func(error) + +func (handler WarningHandler) Warn(err error) { + if handler == nil { + klog.V(1).Info(err) + } else { + handler(err) + } +} + +type MissingConfigError struct { + Missing []string +} + +func (c MissingConfigError) Error() string { + return fmt.Sprintf("Config not found: %s", strings.Join(c.Missing, ", ")) } // ClientConfigLoadingRules implements the ClientConfigLoader interface. @@ -219,7 +241,7 @@ func (rules *ClientConfigLoadingRules) Load() (*clientcmdapi.Config, error) { } if rules.WarnIfAllMissing && len(missingList) > 0 && len(kubeconfigs) == 0 { - klog.Warningf("Config not found: %s", strings.Join(missingList, ", ")) + rules.Warner.Warn(MissingConfigError{Missing: missingList}) } // first merge all of our maps diff --git a/cluster-autoscaler/vendor/k8s.io/client-go/tools/leaderelection/leaderelection.go b/cluster-autoscaler/vendor/k8s.io/client-go/tools/leaderelection/leaderelection.go index 940e716175a0..c1151baf2073 100644 --- a/cluster-autoscaler/vendor/k8s.io/client-go/tools/leaderelection/leaderelection.go +++ b/cluster-autoscaler/vendor/k8s.io/client-go/tools/leaderelection/leaderelection.go @@ -99,6 +99,11 @@ func NewLeaderElector(lec LeaderElectionConfig) (*LeaderElector, error) { if lec.Lock == nil { return nil, fmt.Errorf("Lock must not be nil.") } + id := lec.Lock.Identity() + if id == "" { + return nil, fmt.Errorf("Lock identity is empty") + } + le := LeaderElector{ config: lec, clock: clock.RealClock{}, diff --git a/cluster-autoscaler/vendor/k8s.io/client-go/tools/leaderelection/resourcelock/configmaplock.go b/cluster-autoscaler/vendor/k8s.io/client-go/tools/leaderelection/resourcelock/configmaplock.go deleted file mode 100644 index e811fff03c56..000000000000 --- a/cluster-autoscaler/vendor/k8s.io/client-go/tools/leaderelection/resourcelock/configmaplock.go +++ /dev/null @@ -1,126 +0,0 @@ -/* -Copyright 2017 The 
Kubernetes Authors. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -package resourcelock - -import ( - "context" - "encoding/json" - "errors" - "fmt" - - "k8s.io/api/core/v1" - metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" - corev1client "k8s.io/client-go/kubernetes/typed/core/v1" -) - -// TODO: This is almost a exact replica of Endpoints lock. -// going forwards as we self host more and more components -// and use ConfigMaps as the means to pass that configuration -// data we will likely move to deprecate the Endpoints lock. - -type configMapLock struct { - // ConfigMapMeta should contain a Name and a Namespace of a - // ConfigMapMeta object that the LeaderElector will attempt to lead. 
- ConfigMapMeta metav1.ObjectMeta - Client corev1client.ConfigMapsGetter - LockConfig ResourceLockConfig - cm *v1.ConfigMap -} - -// Get returns the election record from a ConfigMap Annotation -func (cml *configMapLock) Get(ctx context.Context) (*LeaderElectionRecord, []byte, error) { - var record LeaderElectionRecord - cm, err := cml.Client.ConfigMaps(cml.ConfigMapMeta.Namespace).Get(ctx, cml.ConfigMapMeta.Name, metav1.GetOptions{}) - if err != nil { - return nil, nil, err - } - cml.cm = cm - if cml.cm.Annotations == nil { - cml.cm.Annotations = make(map[string]string) - } - recordStr, found := cml.cm.Annotations[LeaderElectionRecordAnnotationKey] - recordBytes := []byte(recordStr) - if found { - if err := json.Unmarshal(recordBytes, &record); err != nil { - return nil, nil, err - } - } - return &record, recordBytes, nil -} - -// Create attempts to create a LeaderElectionRecord annotation -func (cml *configMapLock) Create(ctx context.Context, ler LeaderElectionRecord) error { - recordBytes, err := json.Marshal(ler) - if err != nil { - return err - } - cml.cm, err = cml.Client.ConfigMaps(cml.ConfigMapMeta.Namespace).Create(ctx, &v1.ConfigMap{ - ObjectMeta: metav1.ObjectMeta{ - Name: cml.ConfigMapMeta.Name, - Namespace: cml.ConfigMapMeta.Namespace, - Annotations: map[string]string{ - LeaderElectionRecordAnnotationKey: string(recordBytes), - }, - }, - }, metav1.CreateOptions{}) - return err -} - -// Update will update an existing annotation on a given resource. 
-func (cml *configMapLock) Update(ctx context.Context, ler LeaderElectionRecord) error { - if cml.cm == nil { - return errors.New("configmap not initialized, call get or create first") - } - recordBytes, err := json.Marshal(ler) - if err != nil { - return err - } - if cml.cm.Annotations == nil { - cml.cm.Annotations = make(map[string]string) - } - cml.cm.Annotations[LeaderElectionRecordAnnotationKey] = string(recordBytes) - cm, err := cml.Client.ConfigMaps(cml.ConfigMapMeta.Namespace).Update(ctx, cml.cm, metav1.UpdateOptions{}) - if err != nil { - return err - } - cml.cm = cm - return nil -} - -// RecordEvent in leader election while adding meta-data -func (cml *configMapLock) RecordEvent(s string) { - if cml.LockConfig.EventRecorder == nil { - return - } - events := fmt.Sprintf("%v %v", cml.LockConfig.Identity, s) - subject := &v1.ConfigMap{ObjectMeta: cml.cm.ObjectMeta} - // Populate the type meta, so we don't have to get it from the schema - subject.Kind = "ConfigMap" - subject.APIVersion = v1.SchemeGroupVersion.String() - cml.LockConfig.EventRecorder.Eventf(subject, v1.EventTypeNormal, "LeaderElection", events) -} - -// Describe is used to convert details on current resource lock -// into a string -func (cml *configMapLock) Describe() string { - return fmt.Sprintf("%v/%v", cml.ConfigMapMeta.Namespace, cml.ConfigMapMeta.Name) -} - -// Identity returns the Identity of the lock -func (cml *configMapLock) Identity() string { - return cml.LockConfig.Identity -} diff --git a/cluster-autoscaler/vendor/k8s.io/client-go/tools/leaderelection/resourcelock/endpointslock.go b/cluster-autoscaler/vendor/k8s.io/client-go/tools/leaderelection/resourcelock/endpointslock.go deleted file mode 100644 index eb36d2210a3f..000000000000 --- a/cluster-autoscaler/vendor/k8s.io/client-go/tools/leaderelection/resourcelock/endpointslock.go +++ /dev/null @@ -1,121 +0,0 @@ -/* -Copyright 2016 The Kubernetes Authors. 
- -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -package resourcelock - -import ( - "context" - "encoding/json" - "errors" - "fmt" - - "k8s.io/api/core/v1" - metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" - corev1client "k8s.io/client-go/kubernetes/typed/core/v1" -) - -type endpointsLock struct { - // EndpointsMeta should contain a Name and a Namespace of an - // Endpoints object that the LeaderElector will attempt to lead. - EndpointsMeta metav1.ObjectMeta - Client corev1client.EndpointsGetter - LockConfig ResourceLockConfig - e *v1.Endpoints -} - -// Get returns the election record from a Endpoints Annotation -func (el *endpointsLock) Get(ctx context.Context) (*LeaderElectionRecord, []byte, error) { - var record LeaderElectionRecord - ep, err := el.Client.Endpoints(el.EndpointsMeta.Namespace).Get(ctx, el.EndpointsMeta.Name, metav1.GetOptions{}) - if err != nil { - return nil, nil, err - } - el.e = ep - if el.e.Annotations == nil { - el.e.Annotations = make(map[string]string) - } - recordStr, found := el.e.Annotations[LeaderElectionRecordAnnotationKey] - recordBytes := []byte(recordStr) - if found { - if err := json.Unmarshal(recordBytes, &record); err != nil { - return nil, nil, err - } - } - return &record, recordBytes, nil -} - -// Create attempts to create a LeaderElectionRecord annotation -func (el *endpointsLock) Create(ctx context.Context, ler LeaderElectionRecord) error { - recordBytes, err := json.Marshal(ler) - if err != nil { - return err - } - el.e, err = 
el.Client.Endpoints(el.EndpointsMeta.Namespace).Create(ctx, &v1.Endpoints{ - ObjectMeta: metav1.ObjectMeta{ - Name: el.EndpointsMeta.Name, - Namespace: el.EndpointsMeta.Namespace, - Annotations: map[string]string{ - LeaderElectionRecordAnnotationKey: string(recordBytes), - }, - }, - }, metav1.CreateOptions{}) - return err -} - -// Update will update and existing annotation on a given resource. -func (el *endpointsLock) Update(ctx context.Context, ler LeaderElectionRecord) error { - if el.e == nil { - return errors.New("endpoint not initialized, call get or create first") - } - recordBytes, err := json.Marshal(ler) - if err != nil { - return err - } - if el.e.Annotations == nil { - el.e.Annotations = make(map[string]string) - } - el.e.Annotations[LeaderElectionRecordAnnotationKey] = string(recordBytes) - e, err := el.Client.Endpoints(el.EndpointsMeta.Namespace).Update(ctx, el.e, metav1.UpdateOptions{}) - if err != nil { - return err - } - el.e = e - return nil -} - -// RecordEvent in leader election while adding meta-data -func (el *endpointsLock) RecordEvent(s string) { - if el.LockConfig.EventRecorder == nil { - return - } - events := fmt.Sprintf("%v %v", el.LockConfig.Identity, s) - subject := &v1.Endpoints{ObjectMeta: el.e.ObjectMeta} - // Populate the type meta, so we don't have to get it from the schema - subject.Kind = "Endpoints" - subject.APIVersion = v1.SchemeGroupVersion.String() - el.LockConfig.EventRecorder.Eventf(subject, v1.EventTypeNormal, "LeaderElection", events) -} - -// Describe is used to convert details on current resource lock -// into a string -func (el *endpointsLock) Describe() string { - return fmt.Sprintf("%v/%v", el.EndpointsMeta.Namespace, el.EndpointsMeta.Name) -} - -// Identity returns the Identity of the lock -func (el *endpointsLock) Identity() string { - return el.LockConfig.Identity -} diff --git a/cluster-autoscaler/vendor/k8s.io/client-go/tools/leaderelection/resourcelock/interface.go 
b/cluster-autoscaler/vendor/k8s.io/client-go/tools/leaderelection/resourcelock/interface.go index 05b5b202379b..483753d632ca 100644 --- a/cluster-autoscaler/vendor/k8s.io/client-go/tools/leaderelection/resourcelock/interface.go +++ b/cluster-autoscaler/vendor/k8s.io/client-go/tools/leaderelection/resourcelock/interface.go @@ -34,7 +34,7 @@ const ( endpointsResourceLock = "endpoints" configMapsResourceLock = "configmaps" LeasesResourceLock = "leases" - // When using EndpointsLeasesResourceLock, you need to ensure that + // When using endpointsLeasesResourceLock, you need to ensure that // API Priority & Fairness is configured with non-default flow-schema // that will catch the necessary operations on leader-election related // endpoint objects. @@ -67,8 +67,8 @@ const ( // serviceAccount: // name: '*' // namespace: kube-system - EndpointsLeasesResourceLock = "endpointsleases" - // When using ConfigMapsLeasesResourceLock, you need to ensure that + endpointsLeasesResourceLock = "endpointsleases" + // When using configMapsLeasesResourceLock, you need to ensure that // API Priority & Fairness is configured with non-default flow-schema // that will catch the necessary operations on leader-election related // configmap objects. @@ -101,7 +101,7 @@ const ( // serviceAccount: // name: '*' // namespace: kube-system - ConfigMapsLeasesResourceLock = "configmapsleases" + configMapsLeasesResourceLock = "configmapsleases" ) // LeaderElectionRecord is the record that is stored in the leader election annotation. 
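With the endpoints and configmaps locks deleted, `resourcelock.New` now only constructs Lease-based locks and returns migration errors for every removed type. A stdlib-only sketch of the resulting dispatch (types simplified, error text paraphrased from the vendored hunk):

```go
package main

import (
	"errors"
	"fmt"
)

// newLock sketches the post-v1.28 resourcelock.New switch: only "leases"
// remains constructible; removed types yield migration errors instead of
// the old MultiLock fallbacks.
func newLock(lockType string) (string, error) {
	switch lockType {
	case "endpoints":
		return "", errors.New("endpoints lock is removed, migrate to endpointsleases (using version v0.27.x)")
	case "configmaps":
		return "", errors.New("configmaps lock is removed, migrate to configmapsleases (using version v0.27.x)")
	case "endpointsleases", "configmapsleases":
		return "", fmt.Errorf("%s lock is removed, migrate to leases", lockType)
	case "leases":
		return "leases", nil
	}
	return "", fmt.Errorf("invalid lock-type %s", lockType)
}

func main() {
	if _, err := newLock("endpointsleases"); err != nil {
		fmt.Println(err) // endpointsleases lock is removed, migrate to leases
	}
	lock, _ := newLock("leases")
	fmt.Println(lock) // leases
}
```

Controllers still configured with `--leader-elect-resource-lock=endpointsleases` or `configmapsleases` will therefore fail fast at startup rather than silently degrading.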
@@ -164,22 +164,6 @@ type Interface interface { // Manufacture will create a lock of a given type according to the input parameters func New(lockType string, ns string, name string, coreClient corev1.CoreV1Interface, coordinationClient coordinationv1.CoordinationV1Interface, rlc ResourceLockConfig) (Interface, error) { - endpointsLock := &endpointsLock{ - EndpointsMeta: metav1.ObjectMeta{ - Namespace: ns, - Name: name, - }, - Client: coreClient, - LockConfig: rlc, - } - configmapLock := &configMapLock{ - ConfigMapMeta: metav1.ObjectMeta{ - Namespace: ns, - Name: name, - }, - Client: coreClient, - LockConfig: rlc, - } leaseLock := &LeaseLock{ LeaseMeta: metav1.ObjectMeta{ Namespace: ns, @@ -190,21 +174,15 @@ func New(lockType string, ns string, name string, coreClient corev1.CoreV1Interf } switch lockType { case endpointsResourceLock: - return nil, fmt.Errorf("endpoints lock is removed, migrate to %s", EndpointsLeasesResourceLock) + return nil, fmt.Errorf("endpoints lock is removed, migrate to %s (using version v0.27.x)", endpointsLeasesResourceLock) case configMapsResourceLock: - return nil, fmt.Errorf("configmaps lock is removed, migrate to %s", ConfigMapsLeasesResourceLock) + return nil, fmt.Errorf("configmaps lock is removed, migrate to %s (using version v0.27.x)", configMapsLeasesResourceLock) case LeasesResourceLock: return leaseLock, nil - case EndpointsLeasesResourceLock: - return &MultiLock{ - Primary: endpointsLock, - Secondary: leaseLock, - }, nil - case ConfigMapsLeasesResourceLock: - return &MultiLock{ - Primary: configmapLock, - Secondary: leaseLock, - }, nil + case endpointsLeasesResourceLock: + return nil, fmt.Errorf("endpointsleases lock is removed, migrate to %s", LeasesResourceLock) + case configMapsLeasesResourceLock: + return nil, fmt.Errorf("configmapsleases lock is removed, migrate to %s", LeasesResourceLock) default: return nil, fmt.Errorf("Invalid lock-type %s", lockType) } diff --git
a/cluster-autoscaler/vendor/k8s.io/client-go/tools/metrics/metrics.go b/cluster-autoscaler/vendor/k8s.io/client-go/tools/metrics/metrics.go index f36430dc3edc..99d3d8e239cc 100644 --- a/cluster-autoscaler/vendor/k8s.io/client-go/tools/metrics/metrics.go +++ b/cluster-autoscaler/vendor/k8s.io/client-go/tools/metrics/metrics.go @@ -42,6 +42,10 @@ type LatencyMetric interface { Observe(ctx context.Context, verb string, u url.URL, latency time.Duration) } +type ResolverLatencyMetric interface { + Observe(ctx context.Context, host string, latency time.Duration) +} + // SizeMetric observes client response size partitioned by verb and host. type SizeMetric interface { Observe(ctx context.Context, verb string, host string, size float64) @@ -64,6 +68,17 @@ type RetryMetric interface { IncrementRetry(ctx context.Context, code string, method string, host string) } +// TransportCacheMetric shows the number of entries in the internal transport cache +type TransportCacheMetric interface { + Observe(value int) +} + +// TransportCreateCallsMetric counts the number of times a transport is created +// partitioned by the result of the cache: hit, miss, uncacheable +type TransportCreateCallsMetric interface { + Increment(result string) +} + var ( // ClientCertExpiry is the expiry time of a client certificate ClientCertExpiry ExpiryMetric = noopExpiry{} @@ -71,6 +86,8 @@ var ( ClientCertRotationAge DurationMetric = noopDuration{} // RequestLatency is the latency metric that rest clients will update. RequestLatency LatencyMetric = noopLatency{} + // ResolverLatency is the latency metric that DNS resolver will update + ResolverLatency ResolverLatencyMetric = noopResolverLatency{} // RequestSize is the request size metric that rest clients will update. RequestSize SizeMetric = noopSize{} // ResponseSize is the response size metric that rest clients will update. @@ -85,6 +102,12 @@ var ( // RequestRetry is the retry metric that tracks the number of // retries sent to the server. 
RequestRetry RetryMetric = noopRetry{} + // TransportCacheEntries is the metric that tracks the number of entries in the + // internal transport cache. + TransportCacheEntries TransportCacheMetric = noopTransportCache{} + // TransportCreateCalls is the metric that counts the number of times a new transport + // is created + TransportCreateCalls TransportCreateCallsMetric = noopTransportCreateCalls{} ) // RegisterOpts contains all the metrics to register. Metrics may be nil. @@ -92,12 +115,15 @@ type RegisterOpts struct { ClientCertExpiry ExpiryMetric ClientCertRotationAge DurationMetric RequestLatency LatencyMetric + ResolverLatency ResolverLatencyMetric RequestSize SizeMetric ResponseSize SizeMetric RateLimiterLatency LatencyMetric RequestResult ResultMetric ExecPluginCalls CallsMetric RequestRetry RetryMetric + TransportCacheEntries TransportCacheMetric + TransportCreateCalls TransportCreateCallsMetric } // Register registers metrics for the rest client to use. This can @@ -113,6 +139,9 @@ func Register(opts RegisterOpts) { if opts.RequestLatency != nil { RequestLatency = opts.RequestLatency } + if opts.ResolverLatency != nil { + ResolverLatency = opts.ResolverLatency + } if opts.RequestSize != nil { RequestSize = opts.RequestSize } @@ -131,6 +160,12 @@ func Register(opts RegisterOpts) { if opts.RequestRetry != nil { RequestRetry = opts.RequestRetry } + if opts.TransportCacheEntries != nil { + TransportCacheEntries = opts.TransportCacheEntries + } + if opts.TransportCreateCalls != nil { + TransportCreateCalls = opts.TransportCreateCalls + } }) } @@ -146,6 +181,11 @@ type noopLatency struct{} func (noopLatency) Observe(context.Context, string, url.URL, time.Duration) {} +type noopResolverLatency struct{} + +func (n noopResolverLatency) Observe(ctx context.Context, host string, latency time.Duration) { +} + type noopSize struct{} func (noopSize) Observe(context.Context, string, string, float64) {} @@ -161,3 +201,11 @@ func (noopCalls) Increment(int, string) {} type 
noopRetry struct{} func (noopRetry) IncrementRetry(context.Context, string, string, string) {} + +type noopTransportCache struct{} + +func (noopTransportCache) Observe(int) {} + +type noopTransportCreateCalls struct{} + +func (noopTransportCreateCalls) Increment(string) {} diff --git a/cluster-autoscaler/vendor/k8s.io/client-go/tools/pager/pager.go b/cluster-autoscaler/vendor/k8s.io/client-go/tools/pager/pager.go index 9ba988f6856c..3c77cc37fa5a 100644 --- a/cluster-autoscaler/vendor/k8s.io/client-go/tools/pager/pager.go +++ b/cluster-autoscaler/vendor/k8s.io/client-go/tools/pager/pager.go @@ -73,7 +73,23 @@ func New(fn ListPageFunc) *ListPager { // List returns a single list object, but attempts to retrieve smaller chunks from the // server to reduce the impact on the server. If the chunk attempt fails, it will load // the full list instead. The Limit field on options, if unset, will default to the page size. +// +// If items in the returned list are retained for different durations, and you want to avoid +// retaining the whole slice returned by p.PageFn as long as any item is referenced, +// use ListWithAlloc instead. func (p *ListPager) List(ctx context.Context, options metav1.ListOptions) (runtime.Object, bool, error) { + return p.list(ctx, options, false) +} + +// ListWithAlloc works like List, but avoids retaining references to the items slice returned by p.PageFn. +// It does this by making a shallow copy of non-pointer items in the slice returned by p.PageFn. +// +// If the items in the returned list are not retained, or are retained for the same duration, use List instead for memory efficiency. 
+func (p *ListPager) ListWithAlloc(ctx context.Context, options metav1.ListOptions) (runtime.Object, bool, error) { + return p.list(ctx, options, true) +} + +func (p *ListPager) list(ctx context.Context, options metav1.ListOptions, allocNew bool) (runtime.Object, bool, error) { if options.Limit == 0 { options.Limit = p.PageSize } @@ -123,7 +139,11 @@ func (p *ListPager) List(ctx context.Context, options metav1.ListOptions) (runti list.ResourceVersion = m.GetResourceVersion() list.SelfLink = m.GetSelfLink() } - if err := meta.EachListItem(obj, func(obj runtime.Object) error { + eachListItemFunc := meta.EachListItem + if allocNew { + eachListItemFunc = meta.EachListItemWithAlloc + } + if err := eachListItemFunc(obj, func(obj runtime.Object) error { list.Items = append(list.Items, obj) return nil }); err != nil { @@ -156,12 +176,26 @@ func (p *ListPager) List(ctx context.Context, options metav1.ListOptions) (runti // // Items are retrieved in chunks from the server to reduce the impact on the server with up to // ListPager.PageBufferSize chunks buffered concurrently in the background. +// +// If items passed to fn are retained for different durations, and you want to avoid +// retaining the whole slice returned by p.PageFn as long as any item is referenced, +// use EachListItemWithAlloc instead. func (p *ListPager) EachListItem(ctx context.Context, options metav1.ListOptions, fn func(obj runtime.Object) error) error { return p.eachListChunkBuffered(ctx, options, func(obj runtime.Object) error { return meta.EachListItem(obj, fn) }) } +// EachListItemWithAlloc works like EachListItem, but avoids retaining references to the items slice returned by p.PageFn. +// It does this by making a shallow copy of non-pointer items in the slice returned by p.PageFn. +// +// If the items passed to fn are not retained, or are retained for the same duration, use EachListItem instead for memory efficiency. 
+func (p *ListPager) EachListItemWithAlloc(ctx context.Context, options metav1.ListOptions, fn func(obj runtime.Object) error) error { + return p.eachListChunkBuffered(ctx, options, func(obj runtime.Object) error { + return meta.EachListItemWithAlloc(obj, fn) + }) +} + // eachListChunkBuffered fetches runtimeObject list chunks using this ListPager and invokes fn on // each list chunk. If fn returns an error, processing stops and that error is returned. If fn does // not return an error, any error encountered while retrieving the list from the server is diff --git a/cluster-autoscaler/vendor/k8s.io/client-go/tools/record/event.go b/cluster-autoscaler/vendor/k8s.io/client-go/tools/record/event.go index 4899b362dff3..f176167dc80e 100644 --- a/cluster-autoscaler/vendor/k8s.io/client-go/tools/record/event.go +++ b/cluster-autoscaler/vendor/k8s.io/client-go/tools/record/event.go @@ -274,7 +274,7 @@ func recordEvent(sink EventSink, event *v1.Event, patch []byte, updateExistingEv klog.Errorf("Unable to construct event '%#v': '%v' (will not retry!)", event, err) return true case *errors.StatusError: - if errors.IsAlreadyExists(err) { + if errors.IsAlreadyExists(err) || errors.HasStatusCause(err, v1.NamespaceTerminatingCause) { klog.V(5).Infof("Server rejected event '%#v': '%v' (will not retry!)", event, err) } else { klog.Errorf("Server rejected event '%#v': '%v' (will not retry!)", event, err) @@ -357,6 +357,9 @@ func (recorder *recorderImpl) generateEvent(object runtime.Object, annotations m event := recorder.makeEvent(ref, annotations, eventtype, reason, message) event.Source = recorder.source + event.ReportingInstance = recorder.source.Host + event.ReportingController = recorder.source.Component + // NOTE: events should be a non-blocking operation, but we also need to not // put this in a goroutine, otherwise we'll race to write to a closed channel // when we go to shut down this broadcaster. 
Just drop events if we get overloaded, diff --git a/cluster-autoscaler/vendor/k8s.io/client-go/tools/watch/retrywatcher.go b/cluster-autoscaler/vendor/k8s.io/client-go/tools/watch/retrywatcher.go index e4806d2ea12b..d81dc43570dd 100644 --- a/cluster-autoscaler/vendor/k8s.io/client-go/tools/watch/retrywatcher.go +++ b/cluster-autoscaler/vendor/k8s.io/client-go/tools/watch/retrywatcher.go @@ -24,10 +24,9 @@ import ( "net/http" "time" - "github.com/davecgh/go-spew/spew" - apierrors "k8s.io/apimachinery/pkg/api/errors" metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/apimachinery/pkg/util/dump" "k8s.io/apimachinery/pkg/util/net" "k8s.io/apimachinery/pkg/util/wait" "k8s.io/apimachinery/pkg/watch" @@ -191,7 +190,7 @@ func (rw *RetryWatcher) doReceive() (bool, time.Duration) { errObject := apierrors.FromObject(event.Object) statusErr, ok := errObject.(*apierrors.StatusError) if !ok { - klog.Error(spew.Sprintf("Received an error which is not *metav1.Status but %#+v", event.Object)) + klog.Error(fmt.Sprintf("Received an error which is not *metav1.Status but %s", dump.Pretty(event.Object))) // Retry unknown errors return false, 0 } @@ -220,7 +219,7 @@ func (rw *RetryWatcher) doReceive() (bool, time.Duration) { // Log here so we have a record of hitting the unexpected error // and we can whitelist some error codes if we missed any that are expected. 
- klog.V(5).Info(spew.Sprintf("Retrying after unexpected error: %#+v", event.Object)) + klog.V(5).Info(fmt.Sprintf("Retrying after unexpected error: %s", dump.Pretty(event.Object))) // Retry return false, statusDelay diff --git a/cluster-autoscaler/vendor/k8s.io/client-go/transport/cache.go b/cluster-autoscaler/vendor/k8s.io/client-go/transport/cache.go index edcc6d1d4811..7c7f1b330f81 100644 --- a/cluster-autoscaler/vendor/k8s.io/client-go/transport/cache.go +++ b/cluster-autoscaler/vendor/k8s.io/client-go/transport/cache.go @@ -27,6 +27,7 @@ import ( utilnet "k8s.io/apimachinery/pkg/util/net" "k8s.io/apimachinery/pkg/util/wait" + "k8s.io/client-go/tools/metrics" ) // TlsTransportCache caches TLS http.RoundTrippers different configurations. The @@ -80,11 +81,16 @@ func (c *tlsTransportCache) get(config *Config) (http.RoundTripper, error) { // Ensure we only create a single transport for the given TLS options c.mu.Lock() defer c.mu.Unlock() + defer metrics.TransportCacheEntries.Observe(len(c.transports)) // See if we already have a custom transport for this config if t, ok := c.transports[key]; ok { + metrics.TransportCreateCalls.Increment("hit") return t, nil } + metrics.TransportCreateCalls.Increment("miss") + } else { + metrics.TransportCreateCalls.Increment("uncacheable") } // Get the TLS options for this client config diff --git a/cluster-autoscaler/vendor/k8s.io/client-go/util/cert/cert.go b/cluster-autoscaler/vendor/k8s.io/client-go/util/cert/cert.go index 4be1dfe49350..91e171271afc 100644 --- a/cluster-autoscaler/vendor/k8s.io/client-go/util/cert/cert.go +++ b/cluster-autoscaler/vendor/k8s.io/client-go/util/cert/cert.go @@ -25,6 +25,7 @@ import ( "crypto/x509/pkix" "encoding/pem" "fmt" + "math" "math/big" "net" "os" @@ -44,6 +45,7 @@ type Config struct { Organization []string AltNames AltNames Usages []x509.ExtKeyUsage + NotBefore time.Time } // AltNames contains the domain names and IP addresses that will be added @@ -57,14 +59,24 @@ type AltNames struct { 
// NewSelfSignedCACert creates a CA certificate func NewSelfSignedCACert(cfg Config, key crypto.Signer) (*x509.Certificate, error) { now := time.Now() + // returns a uniform random value in [0, max-1), then add 1 to serial to make it a uniform random value in [1, max). + serial, err := cryptorand.Int(cryptorand.Reader, new(big.Int).SetInt64(math.MaxInt64-1)) + if err != nil { + return nil, err + } + serial = new(big.Int).Add(serial, big.NewInt(1)) + notBefore := now.UTC() + if !cfg.NotBefore.IsZero() { + notBefore = cfg.NotBefore.UTC() + } tmpl := x509.Certificate{ - SerialNumber: new(big.Int).SetInt64(0), + SerialNumber: serial, Subject: pkix.Name{ CommonName: cfg.CommonName, Organization: cfg.Organization, }, DNSNames: []string{cfg.CommonName}, - NotBefore: now.UTC(), + NotBefore: notBefore, NotAfter: now.Add(duration365d * 10).UTC(), KeyUsage: x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature | x509.KeyUsageCertSign, BasicConstraintsValid: true, @@ -116,9 +128,14 @@ func GenerateSelfSignedCertKeyWithFixtures(host string, alternateIPs []net.IP, a if err != nil { return nil, nil, err } - + // returns a uniform random value in [0, max-1), then add 1 to serial to make it a uniform random value in [1, max). + serial, err := cryptorand.Int(cryptorand.Reader, new(big.Int).SetInt64(math.MaxInt64-1)) + if err != nil { + return nil, nil, err + } + serial = new(big.Int).Add(serial, big.NewInt(1)) caTemplate := x509.Certificate{ - SerialNumber: big.NewInt(1), + SerialNumber: serial, Subject: pkix.Name{ CommonName: fmt.Sprintf("%s-ca@%d", host, time.Now().Unix()), }, @@ -144,9 +161,14 @@ func GenerateSelfSignedCertKeyWithFixtures(host string, alternateIPs []net.IP, a if err != nil { return nil, nil, err } - + // returns a uniform random value in [0, max-1), then add 1 to serial to make it a uniform random value in [1, max). 
+ serial, err = cryptorand.Int(cryptorand.Reader, new(big.Int).SetInt64(math.MaxInt64-1)) + if err != nil { + return nil, nil, err + } + serial = new(big.Int).Add(serial, big.NewInt(1)) template := x509.Certificate{ - SerialNumber: big.NewInt(2), + SerialNumber: serial, Subject: pkix.Name{ CommonName: fmt.Sprintf("%s@%d", host, time.Now().Unix()), }, diff --git a/cluster-autoscaler/vendor/k8s.io/cloud-provider-aws/pkg/providers/v1/aws.go b/cluster-autoscaler/vendor/k8s.io/cloud-provider-aws/pkg/providers/v1/aws.go index 63a31ddd6f0b..b56d70f01905 100644 --- a/cluster-autoscaler/vendor/k8s.io/cloud-provider-aws/pkg/providers/v1/aws.go +++ b/cluster-autoscaler/vendor/k8s.io/cloud-provider-aws/pkg/providers/v1/aws.go @@ -463,7 +463,6 @@ type KMS interface { type EC2Metadata interface { // Query the EC2 metadata service (used to discover instance-id etc) GetMetadata(path string) (string, error) - Region() (string, error) } // AWS volume types @@ -612,8 +611,6 @@ type CloudConfig struct { // Maybe if we're not running on AWS, e.g. bootstrap; for now it is not very useful Zone string - Region string - // The AWS VPC flag enables the possibility to run the master components // on a different aws account, on a different cloud provider or on-premises. // If the flag is set also the KubernetesClusterTag must be provided @@ -645,6 +642,17 @@ type CloudConfig struct { //Security group for each ELB this security group will be used instead. ElbSecurityGroup string + //During the instantiation of an new AWS cloud provider, the detected region + //is validated against a known set of regions. + // + //In a non-standard, AWS like environment (e.g. Eucalyptus), this check may + //be undesirable. Setting this to true will disable the check and provide + //a warning that the check was skipped. Please note that this is an + //experimental feature and work-in-progress for the moment. 
If you find +yourself in a non-AWS cloud and open an issue, please indicate that in the +issue body. + DisableStrictZoneCheck bool + // NodeIPFamilies determines which IP addresses are added to node objects and their ordering. NodeIPFamilies []string } @@ -671,23 +679,6 @@ type CloudConfig struct { } } -// GetRegion returns the AWS region from the config, if set, or gets it from the metadata -// service if unset and sets in config -func (cfg *CloudConfig) GetRegion(metadata EC2Metadata) (string, error) { - if cfg.Global.Region != "" { - return cfg.Global.Region, nil - } - - klog.Info("Loading region from metadata service") - region, err := metadata.Region() - if err != nil { - return "", err - } - - cfg.Global.Region = region - return region, nil -} - func (cfg *CloudConfig) validateOverrides() error { if len(cfg.ServiceOverride) == 0 { return nil @@ -1271,7 +1262,7 @@ func init() { return nil, fmt.Errorf("error creating AWS metadata client: %q", err) } - regionName, err := getRegionFromMetadata(*cfg, metadata) + regionName, _, err := getRegionFromMetadata(*cfg, metadata) if err != nil { return nil, err } @@ -1317,6 +1308,28 @@ func readAWSCloudConfig(config io.Reader) (*CloudConfig, error) { return &cfg, nil } +func updateConfigZone(cfg *CloudConfig, metadata EC2Metadata) error { + if cfg.Global.Zone == "" { + if metadata != nil { + klog.Info("Zone not specified in configuration file; querying AWS metadata service") + var err error + cfg.Global.Zone, err = getAvailabilityZone(metadata) + if err != nil { + return err + } + } + if cfg.Global.Zone == "" { + return fmt.Errorf("no zone specified in configuration file") + } + } + + return nil +} + +func getAvailabilityZone(metadata EC2Metadata) (string, error) { + return metadata.GetMetadata("placement/availability-zone") +} + // Derives the region from a valid az name.
// Returns an error if the az is known invalid (empty) func azToRegion(az string) (string, error) { @@ -1345,11 +1358,19 @@ func newAWSCloud(cfg CloudConfig, awsServices Services) (*Cloud, error) { return nil, fmt.Errorf("error creating AWS metadata client: %q", err) } - regionName, err := getRegionFromMetadata(cfg, metadata) + regionName, zone, err := getRegionFromMetadata(cfg, metadata) if err != nil { return nil, err } + if !cfg.Global.DisableStrictZoneCheck { + if err := validateRegion(regionName, metadata); err != nil { + return nil, fmt.Errorf("not a valid AWS zone (unknown region): %s, %w", zone, err) + } + } else { + klog.Warningf("Strict AWS zone checking is disabled. Proceeding with zone: %s", zone) + } + ec2, err := awsServices.Compute(regionName) if err != nil { return nil, fmt.Errorf("error creating AWS EC2 client: %v", err) @@ -1438,6 +1459,48 @@ func NewAWSCloud(cfg CloudConfig, awsServices Services) (*Cloud, error) { return newAWSCloud(cfg, awsServices) } +// validateRegion accepts an AWS region name and returns if the region is a +// valid region known to the AWS SDK. Considers the region returned from the +// EC2 metadata service to be a valid region as it's only available on a host +// running in a valid AWS region. +func validateRegion(region string, metadata EC2Metadata) error { + // Does the AWS SDK know about the region? Any region known by the SDK is a + // valid one. + for _, p := range endpoints.DefaultPartitions() { + for r := range p.Regions() { + if r == region { + return nil + } + } + } + + // ap-northeast-3 is purposely excluded from the SDK because it + // requires an access request (for more details see): + // https://github.com/aws/aws-sdk-go/issues/1863 + if region == "ap-northeast-3" { + return nil + } + + // Fallback to checking if the region matches the instance metadata region + // (ignoring any user overrides). 
This just accounts for running an old + // build of Kubernetes in a new region that wasn't compiled into the SDK + // when Kubernetes was built. + az, err := getAvailabilityZone(metadata) + if err != nil { + return err + } + ec2Region, err := azToRegion(az) + if err != nil { + return err + } + if region != ec2Region { + return fmt.Errorf("region %s is not known, and does not match EC2 instance's region, %s", + region, ec2Region) + } + + return nil +} + // Initialize passes a Kubernetes clientBuilder interface to the cloud provider func (c *Cloud) Initialize(clientBuilder cloudprovider.ControllerClientBuilder, stop <-chan struct{}) { c.clientBuilder = clientBuilder @@ -5249,16 +5312,22 @@ func (c *Cloud) describeNetworkInterfaces(nodeName string) (*ec2.NetworkInterfac return eni.NetworkInterfaces[0], nil } -func getRegionFromMetadata(cfg CloudConfig, metadata EC2Metadata) (string, error) { - // For backwards compatibility reasons, keeping this check to avoid breaking possible - // cases where Zone was set to override the region configuration. Otherwise, fall back - // to getting region the standard way. - if cfg.Global.Zone != "" { - zone := cfg.Global.Zone - klog.Infof("Zone %s configured in cloud config. 
Using that to get region.", zone) +func getRegionFromMetadata(cfg CloudConfig, metadata EC2Metadata) (string, string, error) { + klog.Infof("Get AWS region from metadata client") + err := updateConfigZone(&cfg, metadata) + if err != nil { + return "", "", fmt.Errorf("unable to determine AWS zone from cloud provider config or EC2 instance metadata: %v", err) + } + + zone := cfg.Global.Zone + if len(zone) <= 1 { + return "", "", fmt.Errorf("invalid AWS zone in config file: %s", zone) + } - return azToRegion(zone) + regionName, err := azToRegion(zone) + if err != nil { + return "", "", err } - return cfg.GetRegion(metadata) + return regionName, zone, nil } diff --git a/cluster-autoscaler/vendor/k8s.io/cloud-provider-aws/pkg/providers/v1/aws_fakes.go b/cluster-autoscaler/vendor/k8s.io/cloud-provider-aws/pkg/providers/v1/aws_fakes.go index 84737b1417c4..2eb917d02786 100644 --- a/cluster-autoscaler/vendor/k8s.io/cloud-provider-aws/pkg/providers/v1/aws_fakes.go +++ b/cluster-autoscaler/vendor/k8s.io/cloud-provider-aws/pkg/providers/v1/aws_fakes.go @@ -91,12 +91,6 @@ func (s *FakeAWSServices) WithAz(az string) *FakeAWSServices { return s } -// WithRegion sets the AWS region -func (s *FakeAWSServices) WithRegion(region string) *FakeAWSServices { - s.region = region - return s -} - // Compute returns a fake EC2 client func (s *FakeAWSServices) Compute(region string) (EC2, error) { return s.ec2, nil @@ -432,11 +426,6 @@ func (m *FakeMetadata) GetMetadata(key string) (string, error) { return "", nil } -// Region returns AWS region -func (m *FakeMetadata) Region() (string, error) { - return m.aws.region, nil -} - // FakeELB is a fake ELB client used for testing type FakeELB struct { aws *FakeAWSServices diff --git a/cluster-autoscaler/vendor/k8s.io/cloud-provider/api/retry_error.go b/cluster-autoscaler/vendor/k8s.io/cloud-provider/api/retry_error.go new file mode 100644 index 000000000000..ac0e5e6e728f --- /dev/null +++ 
b/cluster-autoscaler/vendor/k8s.io/cloud-provider/api/retry_error.go @@ -0,0 +1,46 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package api + +import ( + "time" +) + +// RetryError indicates that a service reconciliation should be retried after a +// fixed duration (as opposed to backing off exponentially). +type RetryError struct { + msg string + retryAfter time.Duration +} + +// NewRetryError returns a RetryError. +func NewRetryError(msg string, retryAfter time.Duration) *RetryError { + return &RetryError{ + msg: msg, + retryAfter: retryAfter, + } +} + +// Error shows the details of the retry reason. +func (re *RetryError) Error() string { + return re.msg +} + +// RetryAfter returns the defined retry-after duration. +func (re *RetryError) RetryAfter() time.Duration { + return re.retryAfter +} diff --git a/cluster-autoscaler/vendor/k8s.io/cloud-provider/cloud.go b/cluster-autoscaler/vendor/k8s.io/cloud-provider/cloud.go index 7e7bf9dfab89..c9a04085f480 100644 --- a/cluster-autoscaler/vendor/k8s.io/cloud-provider/cloud.go +++ b/cluster-autoscaler/vendor/k8s.io/cloud-provider/cloud.go @@ -131,11 +131,11 @@ func GetInstanceProviderID(ctx context.Context, cloud Interface, nodeName types. // irrespective of the ImplementedElsewhere error. Additional finalizers for // LB services must be managed in the alternate implementation. 
type LoadBalancer interface { - // TODO: Break this up into different interfaces (LB, etc) when we have more than one type of service // GetLoadBalancer returns whether the specified load balancer exists, and // if so, what its status is. // Implementations must treat the *v1.Service parameter as read-only and not modify it. - // Parameter 'clusterName' is the name of the cluster as presented to kube-controller-manager + // Parameter 'clusterName' is the name of the cluster as presented to kube-controller-manager. + // TODO: Break this up into different interfaces (LB, etc) when we have more than one type of service GetLoadBalancer(ctx context.Context, clusterName string, service *v1.Service) (status *v1.LoadBalancerStatus, exists bool, err error) // GetLoadBalancerName returns the name of the load balancer. Implementations must treat the // *v1.Service parameter as read-only and not modify it. @@ -143,7 +143,13 @@ type LoadBalancer interface { // EnsureLoadBalancer creates a new load balancer 'name', or updates the existing one. Returns the status of the balancer // Implementations must treat the *v1.Service and *v1.Node // parameters as read-only and not modify them. - // Parameter 'clusterName' is the name of the cluster as presented to kube-controller-manager + // Parameter 'clusterName' is the name of the cluster as presented to kube-controller-manager. + // + // Implementations may return a (possibly wrapped) api.RetryError to enforce + // backing off at a fixed duration. This can be used for cases like when the + // load balancer is not ready yet (e.g., it is still being provisioned) and + // polling at a fixed rate is preferred over backing off exponentially in + // order to minimize latency. EnsureLoadBalancer(ctx context.Context, clusterName string, service *v1.Service, nodes []*v1.Node) (*v1.LoadBalancerStatus, error) // UpdateLoadBalancer updates hosts under the specified load balancer. 
// Implementations must treat the *v1.Service and *v1.Node diff --git a/cluster-autoscaler/vendor/k8s.io/cloud-provider/config/types.go b/cluster-autoscaler/vendor/k8s.io/cloud-provider/config/types.go index 133716219512..f0e8c179f222 100644 --- a/cluster-autoscaler/vendor/k8s.io/cloud-provider/config/types.go +++ b/cluster-autoscaler/vendor/k8s.io/cloud-provider/config/types.go @@ -65,7 +65,7 @@ type KubeCloudSharedConfiguration struct { AllowUntaggedCloud bool // routeReconciliationPeriod is the period for reconciling routes created for Nodes by cloud provider.. RouteReconciliationPeriod metav1.Duration - // nodeMonitorPeriod is the period for syncing NodeStatus in NodeController. + // nodeMonitorPeriod is the period for syncing NodeStatus in CloudNodeLifecycleController. NodeMonitorPeriod metav1.Duration // clusterName is the instance prefix for the cluster. ClusterName string diff --git a/cluster-autoscaler/vendor/k8s.io/cloud-provider/names/controller_names.go b/cluster-autoscaler/vendor/k8s.io/cloud-provider/names/controller_names.go new file mode 100644 index 000000000000..87e938d01e69 --- /dev/null +++ b/cluster-autoscaler/vendor/k8s.io/cloud-provider/names/controller_names.go @@ -0,0 +1,69 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package names + +// Canonical controller names +// +// NAMING CONVENTIONS +// 1. naming should be consistent across the controllers +// 2. 
use of shortcuts should be avoided, unless they are well-known non-Kubernetes shortcuts +// 3. Kubernetes' resources should be written together without a hyphen ("-") +// +// CHANGE POLICY +// The controller names should be treated as IDs. +// They can only be changed if absolutely necessary. For example if an inappropriate name was chosen in the past, or if the scope of the controller changes. +// When a name is changed, the old name should be aliased in CCMControllerAliases, while preserving all old aliases. +// This is done to achieve backwards compatibility +// +// USE CASES +// The following places should use the controller name constants, when: +// 1. registering a controller in app.DefaultInitFuncConstructors or sample main.controllerInitializers: +// 1.1. disabling a controller by default in app.ControllersDisabledByDefault +// 1.2. checking if IsControllerEnabled +// 1.3. defining an alias in CCMControllerAliases (for backwards compatibility only) +// 2. used anywhere inside the controller itself: +// 2.1. [TODO] logger component should be configured with the controller name by calling LoggerWithName +// 2.2. [TODO] logging should use a canonical controller name when referencing a controller (Eg. Starting X, Shutting down X) +// 2.3. [TODO] emitted events should have an EventSource.Component set to the controller name (usually when initializing an EventRecorder) +// 2.4. [TODO] registering ControllerManagerMetrics with ControllerStarted and ControllerStopped +// 2.5. [TODO] calling WaitForNamedCacheSync +// 3. defining controller options for "--help" command or generated documentation +// 3.1. controller name should be used to create a pflag.FlagSet when registering controller options (the name is rendered in a controller flag group header) +// 3.2. when defined flag's help mentions a controller name +// 4. defining a new service account for a new controller (old controllers may have inconsistent service accounts to stay backwards compatible) +// 5. 
anywhere these controllers are used outside of this module (kube-controller-manager, cloud-provider sample) +const ( + CloudNodeController = "cloud-node-controller" + ServiceLBController = "service-lb-controller" + NodeRouteController = "node-route-controller" + CloudNodeLifecycleController = "cloud-node-lifecycle-controller" +) + +// CCMControllerAliases returns a mapping of aliases to canonical controller names +// +// These aliases ensure backwards compatibility and should never be removed! +// Only addition of new aliases is allowed, and only when a canonical name is changed (please see CHANGE POLICY of controller names) +func CCMControllerAliases() map[string]string { + // return a new reference to achieve immutability of the mapping + return map[string]string{ + "cloud-node": CloudNodeController, + "service": ServiceLBController, + "route": NodeRouteController, + "cloud-node-lifecycle": CloudNodeLifecycleController, + } + +} diff --git a/cluster-autoscaler/vendor/k8s.io/cloud-provider/options/kubecloudshared.go b/cluster-autoscaler/vendor/k8s.io/cloud-provider/options/kubecloudshared.go index 20bb03c095c8..988eade1c408 100644 --- a/cluster-autoscaler/vendor/k8s.io/cloud-provider/options/kubecloudshared.go +++ b/cluster-autoscaler/vendor/k8s.io/cloud-provider/options/kubecloudshared.go @@ -17,8 +17,12 @@ limitations under the License. package options import ( + "fmt" + "github.com/spf13/pflag" + cpconfig "k8s.io/cloud-provider/config" + "k8s.io/cloud-provider/names" ) // KubeCloudSharedOptions holds the options shared between kube-controller-manager @@ -49,13 +53,13 @@ func (o *KubeCloudSharedOptions) AddFlags(fs *pflag.FlagSet) { } o.CloudProvider.AddFlags(fs) - fs.StringVar(&o.ExternalCloudVolumePlugin, "external-cloud-volume-plugin", o.ExternalCloudVolumePlugin, "The plugin to use when cloud provider is set to external. Can be empty, should only be set when cloud-provider is external. 
Currently used to allow node and volume controllers to work for in tree cloud providers.") + fs.StringVar(&o.ExternalCloudVolumePlugin, "external-cloud-volume-plugin", o.ExternalCloudVolumePlugin, "The plugin to use when cloud provider is set to external. Can be empty, should only be set when cloud-provider is external. Currently used to allow node-ipam-controller, persistentvolume-binder-controller, persistentvolume-expander-controller and attach-detach-controller to work for in tree cloud providers.") fs.BoolVar(&o.UseServiceAccountCredentials, "use-service-account-credentials", o.UseServiceAccountCredentials, "If true, use individual service account credentials for each controller.") fs.BoolVar(&o.AllowUntaggedCloud, "allow-untagged-cloud", false, "Allow the cluster to run without the cluster-id on cloud instances. This is a legacy mode of operation and a cluster-id will be required in the future.") fs.MarkDeprecated("allow-untagged-cloud", "This flag is deprecated and will be removed in a future release. A cluster-id will be required on cloud instances.") fs.DurationVar(&o.RouteReconciliationPeriod.Duration, "route-reconciliation-period", o.RouteReconciliationPeriod.Duration, "The period for reconciling routes created for Nodes by cloud provider.") fs.DurationVar(&o.NodeMonitorPeriod.Duration, "node-monitor-period", o.NodeMonitorPeriod.Duration, - "The period for syncing NodeStatus in NodeController.") + fmt.Sprintf("The period for syncing NodeStatus in %s.", names.CloudNodeLifecycleController)) fs.StringVar(&o.ClusterName, "cluster-name", o.ClusterName, "The instance prefix for the cluster.") fs.StringVar(&o.ClusterCIDR, "cluster-cidr", o.ClusterCIDR, "CIDR Range for Pods in cluster. 
Requires --allocate-node-cidrs to be true") fs.BoolVar(&o.AllocateNodeCIDRs, "allocate-node-cidrs", false, "Should CIDRs for Pods be allocated and set on the cloud provider.") diff --git a/cluster-autoscaler/vendor/k8s.io/cloud-provider/options/options.go b/cluster-autoscaler/vendor/k8s.io/cloud-provider/options/options.go index 35bf1737ed9d..37aeb01e1c09 100644 --- a/cluster-autoscaler/vendor/k8s.io/cloud-provider/options/options.go +++ b/cluster-autoscaler/vendor/k8s.io/cloud-provider/options/options.go @@ -38,6 +38,7 @@ import ( ccmconfig "k8s.io/cloud-provider/config" ccmconfigscheme "k8s.io/cloud-provider/config/install" ccmconfigv1alpha1 "k8s.io/cloud-provider/config/v1alpha1" + "k8s.io/cloud-provider/names" cliflag "k8s.io/component-base/cli/flag" cmoptions "k8s.io/controller-manager/options" "k8s.io/controller-manager/pkg/clientbuilder" @@ -141,12 +142,12 @@ func NewDefaultComponentConfig() (*ccmconfig.CloudControllerManagerConfiguration } // Flags returns flags for a specific CloudController by section name -func (o *CloudControllerManagerOptions) Flags(allControllers, disabledByDefaultControllers, allWebhooks, disabledByDefaultWebhooks []string) cliflag.NamedFlagSets { +func (o *CloudControllerManagerOptions) Flags(allControllers []string, disabledByDefaultControllers []string, controllerAliases map[string]string, allWebhooks, disabledByDefaultWebhooks []string) cliflag.NamedFlagSets { fss := cliflag.NamedFlagSets{} - o.Generic.AddFlags(&fss, allControllers, disabledByDefaultControllers) + o.Generic.AddFlags(&fss, allControllers, disabledByDefaultControllers, controllerAliases) o.KubeCloudShared.AddFlags(fss.FlagSet("generic")) - o.NodeController.AddFlags(fss.FlagSet("node controller")) - o.ServiceController.AddFlags(fss.FlagSet("service controller")) + o.NodeController.AddFlags(fss.FlagSet(names.CloudNodeController)) + o.ServiceController.AddFlags(fss.FlagSet(names.ServiceLBController)) if o.Webhook != nil { o.Webhook.AddFlags(fss.FlagSet("webhook"), 
allWebhooks, disabledByDefaultWebhooks) } @@ -168,7 +169,7 @@ func (o *CloudControllerManagerOptions) Flags(allControllers, disabledByDefaultC } // ApplyTo fills up cloud controller manager config with options. -func (o *CloudControllerManagerOptions) ApplyTo(c *config.Config, userAgent string) error { +func (o *CloudControllerManagerOptions) ApplyTo(c *config.Config, allControllers []string, disabledByDefaultControllers []string, controllerAliases map[string]string, userAgent string) error { var err error // Build kubeconfig first to so that if it fails, it doesn't cause leaking @@ -184,7 +185,7 @@ func (o *CloudControllerManagerOptions) ApplyTo(c *config.Config, userAgent stri c.Kubeconfig.QPS = o.Generic.ClientConnection.QPS c.Kubeconfig.Burst = int(o.Generic.ClientConnection.Burst) - if err = o.Generic.ApplyTo(&c.ComponentConfig.Generic); err != nil { + if err = o.Generic.ApplyTo(&c.ComponentConfig.Generic, allControllers, disabledByDefaultControllers, controllerAliases); err != nil { return err } if err = o.KubeCloudShared.ApplyTo(&c.ComponentConfig.KubeCloudShared); err != nil { @@ -246,10 +247,10 @@ func (o *CloudControllerManagerOptions) ApplyTo(c *config.Config, userAgent stri } // Validate is used to validate config before launching the cloud controller manager -func (o *CloudControllerManagerOptions) Validate(allControllers, disabledByDefaultControllers, allWebhooks, disabledByDefaultWebhooks []string) error { +func (o *CloudControllerManagerOptions) Validate(allControllers []string, disabledByDefaultControllers []string, controllerAliases map[string]string, allWebhooks, disabledByDefaultWebhooks []string) error { errors := []error{} - errors = append(errors, o.Generic.Validate(allControllers, disabledByDefaultControllers)...) + errors = append(errors, o.Generic.Validate(allControllers, disabledByDefaultControllers, controllerAliases)...) errors = append(errors, o.KubeCloudShared.Validate()...) errors = append(errors, o.ServiceController.Validate()...) 
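The `controllerAliases` map now threaded through `Flags`, `ApplyTo`, `Validate`, and `Config` exists so that legacy controller names keep resolving to the new canonical IDs. A minimal standalone sketch of that lookup, assuming the map literal from `CCMControllerAliases` in this patch (the `resolveControllerName` helper is illustrative only, not part of the vendored code):

```go
package main

import "fmt"

// ccmControllerAliases mirrors names.CCMControllerAliases from this patch:
// legacy names map to canonical controller IDs. A fresh map is returned on
// each call so callers cannot mutate the shared mapping.
func ccmControllerAliases() map[string]string {
	return map[string]string{
		"cloud-node":           "cloud-node-controller",
		"service":              "service-lb-controller",
		"route":                "node-route-controller",
		"cloud-node-lifecycle": "cloud-node-lifecycle-controller",
	}
}

// resolveControllerName is a hypothetical helper showing how an alias map is
// consumed: aliased names resolve to their canonical ID, canonical names
// pass through unchanged.
func resolveControllerName(name string, aliases map[string]string) string {
	if canonical, ok := aliases[name]; ok {
		return canonical
	}
	return name
}

func main() {
	aliases := ccmControllerAliases()
	fmt.Println(resolveControllerName("service", aliases))               // legacy alias
	fmt.Println(resolveControllerName("node-route-controller", aliases)) // already canonical
}
```

This is why the aliases are documented as append-only: removing one would silently break existing `--controllers` invocations that still use the old name.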
errors = append(errors, o.SecureServing.Validate()...) @@ -282,8 +283,8 @@ func resyncPeriod(c *config.Config) func() time.Duration { } // Config return a cloud controller manager config objective -func (o *CloudControllerManagerOptions) Config(allControllers, disabledByDefaultControllers, allWebhooks, disabledByDefaultWebhooks []string) (*config.Config, error) { - if err := o.Validate(allControllers, disabledByDefaultControllers, allWebhooks, disabledByDefaultWebhooks); err != nil { +func (o *CloudControllerManagerOptions) Config(allControllers []string, disabledByDefaultControllers []string, controllerAliases map[string]string, allWebhooks, disabledByDefaultWebhooks []string) (*config.Config, error) { + if err := o.Validate(allControllers, disabledByDefaultControllers, controllerAliases, allWebhooks, disabledByDefaultWebhooks); err != nil { return nil, err } @@ -298,7 +299,7 @@ func (o *CloudControllerManagerOptions) Config(allControllers, disabledByDefault } c := &config.Config{} - if err := o.ApplyTo(c, CloudControllerManagerUserAgent); err != nil { + if err := o.ApplyTo(c, allControllers, disabledByDefaultControllers, controllerAliases, CloudControllerManagerUserAgent); err != nil { return nil, err } diff --git a/cluster-autoscaler/vendor/k8s.io/component-base/logs/api/v1/options.go b/cluster-autoscaler/vendor/k8s.io/component-base/logs/api/v1/options.go index a5e11f7d8646..2db9b1f5382b 100644 --- a/cluster-autoscaler/vendor/k8s.io/component-base/logs/api/v1/options.go +++ b/cluster-autoscaler/vendor/k8s.io/component-base/logs/api/v1/options.go @@ -17,14 +17,17 @@ limitations under the License. 
package v1 import ( + "errors" "flag" "fmt" "io" "math" "os" "strings" + "sync/atomic" "time" + "github.com/google/go-cmp/cmp" "github.com/spf13/pflag" "k8s.io/klog/v2" @@ -57,6 +60,24 @@ func NewLoggingConfiguration() *LoggingConfiguration { return &c } +// Applying configurations multiple times is not safe unless it's guaranteed that there +// are no goroutines which might call logging functions. The default for ValidateAndApply +// and ValidateAndApplyWithOptions is to return an error when called more than once. +// Binaries and unit tests can override that behavior. +var ReapplyHandling = ReapplyHandlingError + +type ReapplyHandlingType int + +const ( + // ReapplyHandlingError is the default: calling ValidateAndApply or + // ValidateAndApplyWithOptions again returns an error. + ReapplyHandlingError ReapplyHandlingType = iota + // ReapplyHandlingIgnoreUnchanged silently ignores any additional calls of + // ValidateAndApply or ValidateAndApplyWithOptions if the configuration + // is unchanged, otherwise they return an error. + ReapplyHandlingIgnoreUnchanged +) + // ValidateAndApply combines validation and application of the logging configuration. // This should be invoked as early as possible because then the rest of the program // startup (including validation of other options) will already run with the final @@ -64,6 +85,10 @@ func NewLoggingConfiguration() *LoggingConfiguration { // // The optional FeatureGate controls logging features. If nil, the default for // these features is used. +// +// Logging options must be applied as early as possible during the program +// startup. Some changes are global and cannot be done safely when there are +// already goroutines running. 
func ValidateAndApply(c *LoggingConfiguration, featureGate featuregate.FeatureGate) error { return validateAndApply(c, nil, featureGate, nil) } @@ -71,6 +96,10 @@ func ValidateAndApply(c *LoggingConfiguration, featureGate featuregate.FeatureGa // ValidateAndApplyWithOptions is a variant of ValidateAndApply which accepts // additional options beyond those that can be configured through the API. This // is meant for testing. +// +// Logging options must be applied as early as possible during the program +// startup. Some changes are global and cannot be done safely when there are +// already goroutines running. func ValidateAndApplyWithOptions(c *LoggingConfiguration, options *LoggingOptions, featureGate featuregate.FeatureGate) error { return validateAndApply(c, options, featureGate, nil) } @@ -183,10 +212,30 @@ func featureEnabled(featureGate featuregate.FeatureGate, feature featuregate.Fea } func apply(c *LoggingConfiguration, options *LoggingOptions, featureGate featuregate.FeatureGate) error { - contextualLoggingEnabled := contextualLoggingDefault + p := ¶meters{ + C: c, + Options: options, + ContextualLoggingEnabled: contextualLoggingDefault, + } if featureGate != nil { - contextualLoggingEnabled = featureGate.Enabled(ContextualLogging) + p.ContextualLoggingEnabled = featureGate.Enabled(ContextualLogging) + } + + oldP := applyParameters.Load() + if oldP != nil { + switch ReapplyHandling { + case ReapplyHandlingError: + return errors.New("logging configuration was already applied earlier, changing it is not allowed") + case ReapplyHandlingIgnoreUnchanged: + if diff := cmp.Diff(oldP, p); diff != "" { + return fmt.Errorf("the logging configuration should not be changed after setting it once (- old setting, + new setting):\n%s", diff) + } + return nil + default: + return fmt.Errorf("invalid value %d for ReapplyHandling", ReapplyHandling) + } } + applyParameters.Store(p) // if log format not exists, use nil loggr format, _ := logRegistry.get(c.Format) @@ -205,7 
+254,7 @@ func apply(c *LoggingConfiguration, options *LoggingOptions, featureGate feature defer setverbositylevel.Mutex.Unlock() setverbositylevel.Callbacks = append(setverbositylevel.Callbacks, control.SetVerbosityLevel) } - klog.SetLoggerWithOptions(log, klog.ContextualLogger(contextualLoggingEnabled), klog.FlushLogger(control.Flush)) + klog.SetLoggerWithOptions(log, klog.ContextualLogger(p.ContextualLoggingEnabled), klog.FlushLogger(control.Flush)) } if err := loggingFlags.Lookup("v").Value.Set(VerbosityLevelPflag(&c.Verbosity).String()); err != nil { return fmt.Errorf("internal error while setting klog verbosity: %v", err) @@ -213,8 +262,41 @@ func apply(c *LoggingConfiguration, options *LoggingOptions, featureGate feature if err := loggingFlags.Lookup("vmodule").Value.Set(VModuleConfigurationPflag(&c.VModule).String()); err != nil { return fmt.Errorf("internal error while setting klog vmodule: %v", err) } - klog.StartFlushDaemon(c.FlushFrequency) - klog.EnableContextualLogging(contextualLoggingEnabled) + klog.StartFlushDaemon(c.FlushFrequency.Duration.Duration) + klog.EnableContextualLogging(p.ContextualLoggingEnabled) + return nil +} + +type parameters struct { + C *LoggingConfiguration + Options *LoggingOptions + ContextualLoggingEnabled bool +} + +var applyParameters atomic.Pointer[parameters] + +// ResetForTest restores the default settings. This is not thread-safe and should only +// be used when there are no goroutines running. The intended users are unit +// tests in other packages. +func ResetForTest(featureGate featuregate.FeatureGate) error { + oldP := applyParameters.Load() + if oldP == nil { + // Nothing to do. + return nil + } + + // This makes it possible to call apply again without triggering errors. + applyParameters.Store(nil) + + // Restore defaults. Shouldn't fail, but check anyway. 
+ config := NewLoggingConfiguration() + if err := ValidateAndApply(config, featureGate); err != nil { + return fmt.Errorf("apply default configuration: %v", err) + } + + // And again... + applyParameters.Store(nil) + return nil } @@ -260,7 +342,7 @@ func addFlags(c *LoggingConfiguration, fs flagSet) { // No new log formats should be added after generation is of flag options logRegistry.freeze() - fs.DurationVar(&c.FlushFrequency, LogFlushFreqFlagName, c.FlushFrequency, "Maximum number of seconds between log flushes") + fs.DurationVar(&c.FlushFrequency.Duration.Duration, LogFlushFreqFlagName, c.FlushFrequency.Duration.Duration, "Maximum number of seconds between log flushes") fs.VarP(VerbosityLevelPflag(&c.Verbosity), "v", "v", "number for the log level verbosity") fs.Var(VModuleConfigurationPflag(&c.VModule), "vmodule", "comma-separated list of pattern=N settings for file-filtered logging (only works for text log format)") @@ -282,8 +364,9 @@ func SetRecommendedLoggingConfiguration(c *LoggingConfiguration) { if c.Format == "" { c.Format = "text" } - if c.FlushFrequency == 0 { - c.FlushFrequency = LogFlushFreqDefault + if c.FlushFrequency.Duration.Duration == 0 { + c.FlushFrequency.Duration.Duration = LogFlushFreqDefault + c.FlushFrequency.SerializeAsString = true } var empty resource.QuantityValue if c.Options.JSON.InfoBufferSize == empty { diff --git a/cluster-autoscaler/vendor/k8s.io/component-base/logs/api/v1/registry.go b/cluster-autoscaler/vendor/k8s.io/component-base/logs/api/v1/registry.go index f8fc1f2cae1a..6dc23ec18264 100644 --- a/cluster-autoscaler/vendor/k8s.io/component-base/logs/api/v1/registry.go +++ b/cluster-autoscaler/vendor/k8s.io/component-base/logs/api/v1/registry.go @@ -20,6 +20,7 @@ import ( "fmt" "sort" "strings" + "sync" "github.com/go-logr/logr" @@ -30,6 +31,7 @@ var logRegistry = newLogFormatRegistry() // logFormatRegistry stores factories for all supported logging formats. 
type logFormatRegistry struct { + mutex sync.Mutex registry map[string]logFormat frozen bool } @@ -83,6 +85,8 @@ func newLogFormatRegistry() *logFormatRegistry { // register adds a new log format. It's an error to modify an existing one. func (lfr *logFormatRegistry) register(name string, format logFormat) error { + lfr.mutex.Lock() + defer lfr.mutex.Unlock() if lfr.frozen { return fmt.Errorf("log format registry is frozen, unable to register log format %s", name) } @@ -98,6 +102,8 @@ func (lfr *logFormatRegistry) register(name string, format logFormat) error { // get specified log format factory func (lfr *logFormatRegistry) get(name string) (*logFormat, error) { + lfr.mutex.Lock() + defer lfr.mutex.Unlock() format, ok := lfr.registry[name] if !ok { return nil, fmt.Errorf("log format: %s does not exists", name) @@ -107,6 +113,8 @@ func (lfr *logFormatRegistry) get(name string) (*logFormat, error) { // list names of registered log formats, including feature gates (sorted) func (lfr *logFormatRegistry) list() string { + lfr.mutex.Lock() + defer lfr.mutex.Unlock() formats := make([]string, 0, len(lfr.registry)) for name, format := range lfr.registry { item := fmt.Sprintf(`"%s"`, name) @@ -121,5 +129,7 @@ func (lfr *logFormatRegistry) list() string { // freeze prevents further modifications of the registered log formats. func (lfr *logFormatRegistry) freeze() { + lfr.mutex.Lock() + defer lfr.mutex.Unlock() lfr.frozen = true } diff --git a/cluster-autoscaler/vendor/k8s.io/component-base/logs/api/v1/types.go b/cluster-autoscaler/vendor/k8s.io/component-base/logs/api/v1/types.go index d1bf313643b9..33becd9d02f4 100644 --- a/cluster-autoscaler/vendor/k8s.io/component-base/logs/api/v1/types.go +++ b/cluster-autoscaler/vendor/k8s.io/component-base/logs/api/v1/types.go @@ -17,9 +17,11 @@ limitations under the License. 
package v1 import ( - "time" + "encoding/json" + "fmt" "k8s.io/apimachinery/pkg/api/resource" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" ) // Supported output formats. @@ -39,10 +41,11 @@ type LoggingConfiguration struct { // Format Flag specifies the structure of log messages. // default value of format is `text` Format string `json:"format,omitempty"` - // Maximum number of nanoseconds (i.e. 1s = 1000000000) between log - // flushes. Ignored if the selected logging backend writes log - // messages without buffering. - FlushFrequency time.Duration `json:"flushFrequency"` + // Maximum time between log flushes. + // If a string, parsed as a duration (i.e. "1s") + // If an int, the maximum number of nanoseconds (i.e. 1s = 1000000000). + // Ignored if the selected logging backend writes log messages without buffering. + FlushFrequency TimeOrMetaDuration `json:"flushFrequency"` // Verbosity is the threshold that determines which log messages are // logged. Default is zero which logs only the most important // messages. Higher values enable additional messages. Error messages @@ -58,6 +61,37 @@ type LoggingConfiguration struct { Options FormatOptions `json:"options,omitempty"` } +// TimeOrMetaDuration is present only for backwards compatibility for the +// flushFrequency field, and new fields should use metav1.Duration. 
+type TimeOrMetaDuration struct { + // Duration holds the duration + Duration metav1.Duration + // SerializeAsString controls whether the value is serialized as a string or an integer + SerializeAsString bool `json:"-"` +} + +func (t TimeOrMetaDuration) MarshalJSON() ([]byte, error) { + if t.SerializeAsString { + return t.Duration.MarshalJSON() + } else { + // Marshal as integer for backwards compatibility + return json.Marshal(t.Duration.Duration) + } +} + +func (t *TimeOrMetaDuration) UnmarshalJSON(b []byte) error { + if len(b) > 0 && b[0] == '"' { + // string values unmarshal as metav1.Duration + t.SerializeAsString = true + return json.Unmarshal(b, &t.Duration) + } + t.SerializeAsString = false + if err := json.Unmarshal(b, &t.Duration.Duration); err != nil { + return fmt.Errorf("invalid duration %q: %w", string(b), err) + } + return nil +} + // FormatOptions contains options for the different logging formats. type FormatOptions struct { // [Alpha] JSON contains options for logging format "json". diff --git a/cluster-autoscaler/vendor/k8s.io/component-base/logs/api/v1/zz_generated.deepcopy.go b/cluster-autoscaler/vendor/k8s.io/component-base/logs/api/v1/zz_generated.deepcopy.go index 87ca10da1a36..e90cbcb3490c 100644 --- a/cluster-autoscaler/vendor/k8s.io/component-base/logs/api/v1/zz_generated.deepcopy.go +++ b/cluster-autoscaler/vendor/k8s.io/component-base/logs/api/v1/zz_generated.deepcopy.go @@ -58,6 +58,7 @@ func (in *JSONOptions) DeepCopy() *JSONOptions { // DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. 
func (in *LoggingConfiguration) DeepCopyInto(out *LoggingConfiguration) { *out = *in + out.FlushFrequency = in.FlushFrequency if in.VModule != nil { in, out := &in.VModule, &out.VModule *out = make(VModuleConfiguration, len(*in)) @@ -77,6 +78,23 @@ func (in *LoggingConfiguration) DeepCopy() *LoggingConfiguration { return out } +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *TimeOrMetaDuration) DeepCopyInto(out *TimeOrMetaDuration) { + *out = *in + out.Duration = in.Duration + return +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new TimeOrMetaDuration. +func (in *TimeOrMetaDuration) DeepCopy() *TimeOrMetaDuration { + if in == nil { + return nil + } + out := new(TimeOrMetaDuration) + in.DeepCopyInto(out) + return out +} + // DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. func (in VModuleConfiguration) DeepCopyInto(out *VModuleConfiguration) { { diff --git a/cluster-autoscaler/vendor/k8s.io/component-base/metrics/http.go b/cluster-autoscaler/vendor/k8s.io/component-base/metrics/http.go index 3394a8f7114f..2a0d249c2058 100644 --- a/cluster-autoscaler/vendor/k8s.io/component-base/metrics/http.go +++ b/cluster-autoscaler/vendor/k8s.io/component-base/metrics/http.go @@ -19,19 +19,28 @@ package metrics import ( "io" "net/http" + "time" "github.com/prometheus/client_golang/prometheus/promhttp" ) +var ( + processStartedAt time.Time +) + +func init() { + processStartedAt = time.Now() +} + // These constants cause handlers serving metrics to behave as described if // errors are encountered. const ( - // Serve an HTTP status code 500 upon the first error + // HTTPErrorOnError serve an HTTP status code 500 upon the first error // encountered. Report the error message in the body. 
HTTPErrorOnError promhttp.HandlerErrorHandling = iota - // Ignore errors and try to serve as many metrics as possible. However, - // if no metrics can be served, serve an HTTP status code 500 and the + // ContinueOnError ignore errors and try to serve as many metrics as possible. + // However, if no metrics can be served, serve an HTTP status code 500 and the // last error message in the body. Only use this in deliberate "best // effort" metrics collection scenarios. In this case, it is highly // recommended to provide other means of detecting errors: By setting an @@ -41,7 +50,7 @@ const ( // alerts. ContinueOnError - // Panic upon the first error encountered (useful for "crash only" apps). + // PanicOnError panics upon the first error encountered (useful for "crash only" apps). PanicOnError ) @@ -50,6 +59,7 @@ const ( type HandlerOpts promhttp.HandlerOpts func (ho *HandlerOpts) toPromhttpHandlerOpts() promhttp.HandlerOpts { + ho.ProcessStartTime = processStartedAt return promhttp.HandlerOpts(*ho) } diff --git a/cluster-autoscaler/vendor/k8s.io/component-base/metrics/legacyregistry/registry.go b/cluster-autoscaler/vendor/k8s.io/component-base/metrics/legacyregistry/registry.go index 79c806d8b2aa..64a430b79644 100644 --- a/cluster-autoscaler/vendor/k8s.io/component-base/metrics/legacyregistry/registry.go +++ b/cluster-autoscaler/vendor/k8s.io/component-base/metrics/legacyregistry/registry.go @@ -18,6 +18,7 @@ package legacyregistry import ( "net/http" + "time" "github.com/prometheus/client_golang/prometheus" "github.com/prometheus/client_golang/prometheus/collectors" @@ -45,19 +46,22 @@ var ( // Registerer exposes the global registerer Registerer = defaultRegistry.Registerer + + processStart time.Time ) func init() { RawMustRegister(collectors.NewProcessCollector(collectors.ProcessCollectorOpts{})) RawMustRegister(collectors.NewGoCollector(collectors.WithGoCollectorRuntimeMetrics(collectors.MetricsAll))) defaultRegistry.RegisterMetaMetrics() + processStart = 
time.Now() } // Handler returns an HTTP handler for the DefaultGatherer. It is // already instrumented with InstrumentHandler (using "prometheus" as handler // name). func Handler() http.Handler { - return promhttp.InstrumentMetricHandler(prometheus.DefaultRegisterer, promhttp.HandlerFor(defaultRegistry, promhttp.HandlerOpts{})) + return promhttp.InstrumentMetricHandler(prometheus.DefaultRegisterer, promhttp.HandlerFor(defaultRegistry, promhttp.HandlerOpts{ProcessStartTime: processStart})) } // HandlerWithReset returns an HTTP handler for the DefaultGatherer but invokes @@ -65,7 +69,7 @@ func Handler() http.Handler { func HandlerWithReset() http.Handler { return promhttp.InstrumentMetricHandler( prometheus.DefaultRegisterer, - metrics.HandlerWithReset(defaultRegistry, metrics.HandlerOpts{})) + metrics.HandlerWithReset(defaultRegistry, metrics.HandlerOpts{ProcessStartTime: processStart})) } // CustomRegister registers a custom collector but uses the global registry. diff --git a/cluster-autoscaler/vendor/k8s.io/component-base/metrics/prometheus/feature/metrics.go b/cluster-autoscaler/vendor/k8s.io/component-base/metrics/prometheus/feature/metrics.go index d19357fde558..416e5eda2669 100644 --- a/cluster-autoscaler/vendor/k8s.io/component-base/metrics/prometheus/feature/metrics.go +++ b/cluster-autoscaler/vendor/k8s.io/component-base/metrics/prometheus/feature/metrics.go @@ -30,7 +30,7 @@ var ( Namespace: "kubernetes", Name: "feature_enabled", Help: "This metric records the data about the stage and enablement of a k8s feature.", - StabilityLevel: k8smetrics.ALPHA, + StabilityLevel: k8smetrics.BETA, }, []string{"name", "stage"}, ) diff --git a/cluster-autoscaler/vendor/k8s.io/component-base/metrics/prometheus/restclient/metrics.go b/cluster-autoscaler/vendor/k8s.io/component-base/metrics/prometheus/restclient/metrics.go index aa7cabea2b3e..d0c80de03fa9 100644 --- a/cluster-autoscaler/vendor/k8s.io/component-base/metrics/prometheus/restclient/metrics.go +++ 
b/cluster-autoscaler/vendor/k8s.io/component-base/metrics/prometheus/restclient/metrics.go @@ -41,6 +41,18 @@ var ( []string{"verb", "host"}, ) + // resolverLatency is a Prometheus Histogram metric type partitioned by + // "host" labels. It is used for the rest client DNS resolver latency metrics. + resolverLatency = k8smetrics.NewHistogramVec( + &k8smetrics.HistogramOpts{ + Name: "rest_client_dns_resolution_duration_seconds", + Help: "DNS resolver latency in seconds. Broken down by host.", + StabilityLevel: k8smetrics.ALPHA, + Buckets: []float64{0.005, 0.025, 0.1, 0.25, 0.5, 1.0, 2.0, 4.0, 8.0, 15.0, 30.0}, + }, + []string{"host"}, + ) + requestSize = k8smetrics.NewHistogramVec( &k8smetrics.HistogramOpts{ Name: "rest_client_request_size_bytes", @@ -152,6 +164,24 @@ var ( }, []string{"code", "call_status"}, ) + + transportCacheEntries = k8smetrics.NewGauge( + &k8smetrics.GaugeOpts{ + Name: "rest_client_transport_cache_entries", + StabilityLevel: k8smetrics.ALPHA, + Help: "Number of transport entries in the internal cache.", + }, + ) + + transportCacheCalls = k8smetrics.NewCounterVec( + &k8smetrics.CounterOpts{ + Name: "rest_client_transport_create_calls_total", + StabilityLevel: k8smetrics.ALPHA, + Help: "Number of calls to get a new transport, partitioned by the result of the operation " + + "hit: obtained from the cache, miss: created and added to the cache, uncacheable: created and not cached", + }, + []string{"result"}, + ) ) func init() { @@ -164,16 +194,22 @@ func init() { legacyregistry.MustRegister(requestRetry) legacyregistry.RawMustRegister(execPluginCertTTL) legacyregistry.MustRegister(execPluginCertRotation) + legacyregistry.MustRegister(execPluginCalls) + legacyregistry.MustRegister(transportCacheEntries) + legacyregistry.MustRegister(transportCacheCalls) metrics.Register(metrics.RegisterOpts{ ClientCertExpiry: execPluginCertTTLAdapter, ClientCertRotationAge: &rotationAdapter{m: execPluginCertRotation}, RequestLatency: &latencyAdapter{m: 
requestLatency}, + ResolverLatency: &resolverLatencyAdapter{m: resolverLatency}, RequestSize: &sizeAdapter{m: requestSize}, ResponseSize: &sizeAdapter{m: responseSize}, RateLimiterLatency: &latencyAdapter{m: rateLimiterLatency}, RequestResult: &resultAdapter{requestResult}, RequestRetry: &retryAdapter{requestRetry}, ExecPluginCalls: &callsAdapter{m: execPluginCalls}, + TransportCacheEntries: &transportCacheAdapter{m: transportCacheEntries}, + TransportCreateCalls: &transportCacheCallsAdapter{m: transportCacheCalls}, }) } @@ -185,6 +221,14 @@ func (l *latencyAdapter) Observe(ctx context.Context, verb string, u url.URL, la l.m.WithContext(ctx).WithLabelValues(verb, u.Host).Observe(latency.Seconds()) } +type resolverLatencyAdapter struct { + m *k8smetrics.HistogramVec +} + +func (l *resolverLatencyAdapter) Observe(ctx context.Context, host string, latency time.Duration) { + l.m.WithContext(ctx).WithLabelValues(host).Observe(latency.Seconds()) +} + type sizeAdapter struct { m *k8smetrics.HistogramVec } @@ -232,3 +276,19 @@ type retryAdapter struct { func (r *retryAdapter) IncrementRetry(ctx context.Context, code, method, host string) { r.m.WithContext(ctx).WithLabelValues(code, method, host).Inc() } + +type transportCacheAdapter struct { + m *k8smetrics.Gauge +} + +func (t *transportCacheAdapter) Observe(value int) { + t.m.Set(float64(value)) +} + +type transportCacheCallsAdapter struct { + m *k8smetrics.CounterVec +} + +func (t *transportCacheCallsAdapter) Increment(result string) { + t.m.WithLabelValues(result).Inc() +} diff --git a/cluster-autoscaler/vendor/k8s.io/component-base/metrics/prometheus/slis/metrics.go b/cluster-autoscaler/vendor/k8s.io/component-base/metrics/prometheus/slis/metrics.go index 7fb4a8e064e1..7907dfad12aa 100644 --- a/cluster-autoscaler/vendor/k8s.io/component-base/metrics/prometheus/slis/metrics.go +++ b/cluster-autoscaler/vendor/k8s.io/component-base/metrics/prometheus/slis/metrics.go @@ -37,7 +37,7 @@ var ( Namespace: "kubernetes", Name: 
"healthcheck", Help: "This metric records the result of a single healthcheck.", - StabilityLevel: k8smetrics.ALPHA, + StabilityLevel: k8smetrics.BETA, }, []string{"name", "type"}, ) @@ -48,7 +48,7 @@ var ( Namespace: "kubernetes", Name: "healthchecks_total", Help: "This metric records the results of all healthcheck.", - StabilityLevel: k8smetrics.ALPHA, + StabilityLevel: k8smetrics.BETA, }, []string{"name", "type", "status"}, ) diff --git a/cluster-autoscaler/vendor/k8s.io/component-base/metrics/registry.go b/cluster-autoscaler/vendor/k8s.io/component-base/metrics/registry.go index 9a7138c11f8e..1942f9958d23 100644 --- a/cluster-autoscaler/vendor/k8s.io/component-base/metrics/registry.go +++ b/cluster-autoscaler/vendor/k8s.io/component-base/metrics/registry.go @@ -39,26 +39,26 @@ var ( registeredMetrics = NewCounterVec( &CounterOpts{ - Name: "registered_metric_total", + Name: "registered_metrics_total", Help: "The count of registered metrics broken by stability level and deprecation version.", - StabilityLevel: ALPHA, + StabilityLevel: BETA, }, []string{"stability_level", "deprecated_version"}, ) disabledMetricsTotal = NewCounter( &CounterOpts{ - Name: "disabled_metric_total", + Name: "disabled_metrics_total", Help: "The count of disabled metrics.", - StabilityLevel: ALPHA, + StabilityLevel: BETA, }, ) hiddenMetricsTotal = NewCounter( &CounterOpts{ - Name: "hidden_metric_total", + Name: "hidden_metrics_total", Help: "The count of hidden metrics.", - StabilityLevel: ALPHA, + StabilityLevel: BETA, }, ) ) diff --git a/cluster-autoscaler/vendor/k8s.io/component-base/metrics/testutil/testutil.go b/cluster-autoscaler/vendor/k8s.io/component-base/metrics/testutil/testutil.go index 8587c752242a..26d2d5fd7154 100644 --- a/cluster-autoscaler/vendor/k8s.io/component-base/metrics/testutil/testutil.go +++ b/cluster-autoscaler/vendor/k8s.io/component-base/metrics/testutil/testutil.go @@ -19,11 +19,13 @@ package testutil import ( "fmt" "io" + "testing" 
"github.com/prometheus/client_golang/prometheus/testutil" apimachineryversion "k8s.io/apimachinery/pkg/version" "k8s.io/component-base/metrics" + "k8s.io/component-base/metrics/legacyregistry" ) // CollectAndCompare registers the provided Collector with a newly created @@ -91,3 +93,62 @@ func NewFakeKubeRegistry(ver string) metrics.KubeRegistry { return metrics.NewKubeRegistry() } + +func AssertVectorCount(t *testing.T, name string, labelFilter map[string]string, wantCount int) { + metrics, err := legacyregistry.DefaultGatherer.Gather() + if err != nil { + t.Fatalf("Failed to gather metrics: %s", err) + } + + counterSum := 0 + for _, mf := range metrics { + if mf.GetName() != name { + continue // Ignore other metrics. + } + for _, metric := range mf.GetMetric() { + if !LabelsMatch(metric, labelFilter) { + continue + } + counterSum += int(metric.GetCounter().GetValue()) + } + } + if wantCount != counterSum { + t.Errorf("Wanted count %d, got %d for metric %s with labels %#+v", wantCount, counterSum, name, labelFilter) + for _, mf := range metrics { + if mf.GetName() == name { + for _, metric := range mf.GetMetric() { + t.Logf("\tnear match: %s", metric.String()) + } + } + } + } +} + +func AssertHistogramTotalCount(t *testing.T, name string, labelFilter map[string]string, wantCount int) { + metrics, err := legacyregistry.DefaultGatherer.Gather() + if err != nil { + t.Fatalf("Failed to gather metrics: %s", err) + } + counterSum := 0 + for _, mf := range metrics { + if mf.GetName() != name { + continue // Ignore other metrics. 
+ } + for _, metric := range mf.GetMetric() { + if !LabelsMatch(metric, labelFilter) { + continue + } + counterSum += int(metric.GetHistogram().GetSampleCount()) + } + } + if wantCount != counterSum { + t.Errorf("Wanted count %d, got %d for metric %s with labels %#+v", wantCount, counterSum, name, labelFilter) + for _, mf := range metrics { + if mf.GetName() == name { + for _, metric := range mf.GetMetric() { + t.Logf("\tnear match: %s\n", metric.String()) + } + } + } + } +} diff --git a/cluster-autoscaler/vendor/k8s.io/component-base/version/dynamic.go b/cluster-autoscaler/vendor/k8s.io/component-base/version/dynamic.go new file mode 100644 index 000000000000..46ade9f5ec13 --- /dev/null +++ b/cluster-autoscaler/vendor/k8s.io/component-base/version/dynamic.go @@ -0,0 +1,77 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package version + +import ( + "fmt" + "sync/atomic" + + utilversion "k8s.io/apimachinery/pkg/util/version" +) + +var dynamicGitVersion atomic.Value + +func init() { + // initialize to static gitVersion + dynamicGitVersion.Store(gitVersion) +} + +// SetDynamicVersion overrides the version returned as the GitVersion from Get(). +// The specified version must be non-empty, a valid semantic version, and must +// match the major/minor/patch version of the default gitVersion. 
+func SetDynamicVersion(dynamicVersion string) error { + if err := ValidateDynamicVersion(dynamicVersion); err != nil { + return err + } + dynamicGitVersion.Store(dynamicVersion) + return nil +} + +// ValidateDynamicVersion ensures the given version is non-empty, a valid semantic version, +// and matched the major/minor/patch version of the default gitVersion. +func ValidateDynamicVersion(dynamicVersion string) error { + return validateDynamicVersion(dynamicVersion, gitVersion) +} + +func validateDynamicVersion(dynamicVersion, defaultVersion string) error { + if len(dynamicVersion) == 0 { + return fmt.Errorf("version must not be empty") + } + if dynamicVersion == defaultVersion { + // allow no-op + return nil + } + vRuntime, err := utilversion.ParseSemantic(dynamicVersion) + if err != nil { + return err + } + // must match major/minor/patch of default version + var vDefault *utilversion.Version + if defaultVersion == "v0.0.0-master+$Format:%H$" { + // special-case the placeholder value which doesn't parse as a semantic version + vDefault, err = utilversion.ParseSemantic("v0.0.0-master") + } else { + vDefault, err = utilversion.ParseSemantic(defaultVersion) + } + if err != nil { + return err + } + if vRuntime.Major() != vDefault.Major() || vRuntime.Minor() != vDefault.Minor() || vRuntime.Patch() != vDefault.Patch() { + return fmt.Errorf("version %q must match major/minor/patch of default version %q", dynamicVersion, defaultVersion) + } + return nil +} diff --git a/cluster-autoscaler/vendor/k8s.io/component-base/version/verflag/verflag.go b/cluster-autoscaler/vendor/k8s.io/component-base/version/verflag/verflag.go index 106e3450ff0a..46edab46f331 100644 --- a/cluster-autoscaler/vendor/k8s.io/component-base/version/verflag/verflag.go +++ b/cluster-autoscaler/vendor/k8s.io/component-base/version/verflag/verflag.go @@ -20,20 +20,22 @@ package verflag import ( "fmt" + "io" "os" "strconv" + "strings" flag "github.com/spf13/pflag" "k8s.io/component-base/version" ) -type 
versionValue int +type versionValue string const ( - VersionFalse versionValue = 0 - VersionTrue versionValue = 1 - VersionRaw versionValue = 2 + VersionFalse versionValue = "false" + VersionTrue versionValue = "true" + VersionRaw versionValue = "raw" ) const strRawVersion string = "raw" @@ -51,20 +53,28 @@ func (v *versionValue) Set(s string) error { *v = VersionRaw return nil } + + if strings.HasPrefix(s, "v") { + err := version.SetDynamicVersion(s) + if err == nil { + *v = versionValue(s) + } + return err + } + boolVal, err := strconv.ParseBool(s) - if boolVal { - *v = VersionTrue - } else { - *v = VersionFalse + if err == nil { + if boolVal { + *v = VersionTrue + } else { + *v = VersionFalse + } } return err } func (v *versionValue) String() string { - if *v == VersionRaw { - return strRawVersion - } - return fmt.Sprintf("%v", bool(*v == VersionTrue)) + return string(*v) } // The type of the flag as required by the pflag.Value interface @@ -88,7 +98,7 @@ func Version(name string, value versionValue, usage string) *versionValue { const versionFlagName = "version" var ( - versionFlag = Version(versionFlagName, VersionFalse, "Print version information and quit") + versionFlag = Version(versionFlagName, VersionFalse, "--version, --version=raw prints version information and quits; --version=vX.Y.Z... sets the reported version") programName = "Kubernetes" ) @@ -98,14 +108,20 @@ func AddFlags(fs *flag.FlagSet) { fs.AddFlag(flag.Lookup(versionFlagName)) } -// PrintAndExitIfRequested will check if the -version flag was passed +// variables for unit testing PrintAndExitIfRequested +var ( + output = io.Writer(os.Stdout) + exit = os.Exit +) + +// PrintAndExitIfRequested will check if --version or --version=raw was passed // and, if so, print the version and exit. 
func PrintAndExitIfRequested() { if *versionFlag == VersionRaw { - fmt.Printf("%#v\n", version.Get()) - os.Exit(0) + fmt.Fprintf(output, "%#v\n", version.Get()) + exit(0) } else if *versionFlag == VersionTrue { - fmt.Printf("%s %s\n", programName, version.Get()) - os.Exit(0) + fmt.Fprintf(output, "%s %s\n", programName, version.Get()) + exit(0) } } diff --git a/cluster-autoscaler/vendor/k8s.io/component-base/version/version.go b/cluster-autoscaler/vendor/k8s.io/component-base/version/version.go index d1e76dc00e05..1d268d4c6805 100644 --- a/cluster-autoscaler/vendor/k8s.io/component-base/version/version.go +++ b/cluster-autoscaler/vendor/k8s.io/component-base/version/version.go @@ -31,7 +31,7 @@ func Get() apimachineryversion.Info { return apimachineryversion.Info{ Major: gitMajor, Minor: gitMinor, - GitVersion: gitVersion, + GitVersion: dynamicGitVersion.Load().(string), GitCommit: gitCommit, GitTreeState: gitTreeState, BuildDate: buildDate, diff --git a/cluster-autoscaler/vendor/k8s.io/component-helpers/scheduling/corev1/nodeaffinity/nodeaffinity.go b/cluster-autoscaler/vendor/k8s.io/component-helpers/scheduling/corev1/nodeaffinity/nodeaffinity.go index 27caf69b920a..0e3b991636a8 100644 --- a/cluster-autoscaler/vendor/k8s.io/component-helpers/scheduling/corev1/nodeaffinity/nodeaffinity.go +++ b/cluster-autoscaler/vendor/k8s.io/component-helpers/scheduling/corev1/nodeaffinity/nodeaffinity.go @@ -200,6 +200,15 @@ func (t *nodeSelectorTerm) match(nodeLabels labels.Set, nodeFields fields.Set) ( return true, nil } +var validSelectorOperators = []string{ + string(v1.NodeSelectorOpIn), + string(v1.NodeSelectorOpNotIn), + string(v1.NodeSelectorOpExists), + string(v1.NodeSelectorOpDoesNotExist), + string(v1.NodeSelectorOpGt), + string(v1.NodeSelectorOpLt), +} + // nodeSelectorRequirementsAsSelector converts the []NodeSelectorRequirement api type into a struct that implements // labels.Selector. 
func nodeSelectorRequirementsAsSelector(nsm []v1.NodeSelectorRequirement, path *field.Path) (labels.Selector, []error) { @@ -225,7 +234,7 @@ func nodeSelectorRequirementsAsSelector(nsm []v1.NodeSelectorRequirement, path * case v1.NodeSelectorOpLt: op = selection.LessThan default: - errs = append(errs, field.NotSupported(p.Child("operator"), expr.Operator, nil)) + errs = append(errs, field.NotSupported(p.Child("operator"), expr.Operator, validSelectorOperators)) continue } r, err := labels.NewRequirement(expr.Key, op, expr.Values, field.WithPath(p)) diff --git a/cluster-autoscaler/vendor/k8s.io/controller-manager/options/generic.go b/cluster-autoscaler/vendor/k8s.io/controller-manager/options/generic.go index 45c086b11f2f..35b7cc2322d2 100644 --- a/cluster-autoscaler/vendor/k8s.io/controller-manager/options/generic.go +++ b/cluster-autoscaler/vendor/k8s.io/controller-manager/options/generic.go @@ -49,7 +49,7 @@ func NewGenericControllerManagerConfigurationOptions(cfg *cmconfig.GenericContro } // AddFlags adds flags related to generic for controller manager to the specified FlagSet. -func (o *GenericControllerManagerConfigurationOptions) AddFlags(fss *cliflag.NamedFlagSets, allControllers, disabledByDefaultControllers []string) { +func (o *GenericControllerManagerConfigurationOptions) AddFlags(fss *cliflag.NamedFlagSets, allControllers, disabledByDefaultControllers []string, controllerAliases map[string]string) { if o == nil { return } @@ -71,7 +71,7 @@ func (o *GenericControllerManagerConfigurationOptions) AddFlags(fss *cliflag.Nam } // ApplyTo fills up generic config with options. 
-func (o *GenericControllerManagerConfigurationOptions) ApplyTo(cfg *cmconfig.GenericControllerManagerConfiguration) error { +func (o *GenericControllerManagerConfigurationOptions) ApplyTo(cfg *cmconfig.GenericControllerManagerConfiguration, allControllers []string, disabledByDefaultControllers []string, controllerAliases map[string]string) error { if o == nil { return nil } @@ -88,13 +88,26 @@ func (o *GenericControllerManagerConfigurationOptions) ApplyTo(cfg *cmconfig.Gen cfg.ClientConnection = o.ClientConnection cfg.ControllerStartInterval = o.ControllerStartInterval cfg.LeaderElection = o.LeaderElection - cfg.Controllers = o.Controllers + + // copy controller names and replace aliases with canonical names + cfg.Controllers = make([]string, len(o.Controllers)) + for i, initialName := range o.Controllers { + initialNameWithoutPrefix := strings.TrimPrefix(initialName, "-") + controllerName := initialNameWithoutPrefix + if canonicalName, ok := controllerAliases[controllerName]; ok { + controllerName = canonicalName + } + if strings.HasPrefix(initialName, "-") { + controllerName = fmt.Sprintf("-%s", controllerName) + } + cfg.Controllers[i] = controllerName + } return nil } // Validate checks validation of GenericOptions. -func (o *GenericControllerManagerConfigurationOptions) Validate(allControllers []string, disabledByDefaultControllers []string) []error { +func (o *GenericControllerManagerConfigurationOptions) Validate(allControllers []string, disabledByDefaultControllers []string, controllerAliases map[string]string) []error { if o == nil { return nil } @@ -109,13 +122,17 @@ func (o *GenericControllerManagerConfigurationOptions) Validate(allControllers [ } allControllersSet := sets.NewString(allControllers...) 
- for _, controller := range o.Controllers { - if controller == "*" { + for _, initialName := range o.Controllers { + if initialName == "*" { continue } - controller = strings.TrimPrefix(controller, "-") - if !allControllersSet.Has(controller) { - errs = append(errs, fmt.Errorf("%q is not in the list of known controllers", controller)) + initialNameWithoutPrefix := strings.TrimPrefix(initialName, "-") + controllerName := initialNameWithoutPrefix + if canonicalName, ok := controllerAliases[controllerName]; ok { + controllerName = canonicalName + } + if !allControllersSet.Has(controllerName) { + errs = append(errs, fmt.Errorf("%q is not in the list of known controllers", initialNameWithoutPrefix)) } } diff --git a/cluster-autoscaler/vendor/k8s.io/controller-manager/pkg/leadermigration/config/default.go b/cluster-autoscaler/vendor/k8s.io/controller-manager/pkg/leadermigration/config/default.go index 362893b4071f..995f48ac4c68 100644 --- a/cluster-autoscaler/vendor/k8s.io/controller-manager/pkg/leadermigration/config/default.go +++ b/cluster-autoscaler/vendor/k8s.io/controller-manager/pkg/leadermigration/config/default.go @@ -26,13 +26,13 @@ func DefaultLeaderMigrationConfiguration() *internal.LeaderMigrationConfiguratio ResourceLock: ResourceLockLeases, ControllerLeaders: []internal.ControllerLeaderConfiguration{ { - Name: "route", + Name: "route-controller", Component: "*", }, { - Name: "service", + Name: "service-controller", Component: "*", }, { - Name: "cloud-node-lifecycle", + Name: "cloud-node-lifecycle-controller", Component: "*", }, }, diff --git a/cluster-autoscaler/vendor/k8s.io/cri-api/pkg/apis/runtime/v1/api.pb.go b/cluster-autoscaler/vendor/k8s.io/cri-api/pkg/apis/runtime/v1/api.pb.go index 56bc7dbae7f6..4db4fe0767fc 100644 --- a/cluster-autoscaler/vendor/k8s.io/cri-api/pkg/apis/runtime/v1/api.pb.go +++ b/cluster-autoscaler/vendor/k8s.io/cri-api/pkg/apis/runtime/v1/api.pb.go @@ -77,7 +77,7 @@ func (Protocol) EnumDescriptor() ([]byte, []int) { type 
MountPropagation int32 const ( - // No mount propagation ("private" in Linux terminology). + // No mount propagation ("rprivate" in Linux terminology). MountPropagation_PROPAGATION_PRIVATE MountPropagation = 0 // Mounts get propagated from the host to the container ("rslave" in Linux). MountPropagation_PROPAGATION_HOST_TO_CONTAINER MountPropagation = 1 @@ -271,6 +271,31 @@ func (MetricType) EnumDescriptor() ([]byte, []int) { return fileDescriptor_00212fb1f9d3bf1c, []int{6} } +type CgroupDriver int32 + +const ( + CgroupDriver_SYSTEMD CgroupDriver = 0 + CgroupDriver_CGROUPFS CgroupDriver = 1 +) + +var CgroupDriver_name = map[int32]string{ + 0: "SYSTEMD", + 1: "CGROUPFS", +} + +var CgroupDriver_value = map[string]int32{ + "SYSTEMD": 0, + "CGROUPFS": 1, +} + +func (x CgroupDriver) String() string { + return proto.EnumName(CgroupDriver_name, int32(x)) +} + +func (CgroupDriver) EnumDescriptor() ([]byte, []int) { + return fileDescriptor_00212fb1f9d3bf1c, []int{7} +} + // Available profile types. type SecurityProfile_ProfileType int32 @@ -3461,9 +3486,12 @@ type ImageSpec struct { // Unstructured key-value map holding arbitrary metadata. // ImageSpec Annotations can be used to help the runtime target specific // images in multi-arch images. - Annotations map[string]string `protobuf:"bytes,2,rep,name=annotations,proto3" json:"annotations,omitempty" protobuf_key:"bytes,1,opt,name=key,proto3" protobuf_val:"bytes,2,opt,name=value,proto3"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_sizecache int32 `json:"-"` + Annotations map[string]string `protobuf:"bytes,2,rep,name=annotations,proto3" json:"annotations,omitempty" protobuf_key:"bytes,1,opt,name=key,proto3" protobuf_val:"bytes,2,opt,name=value,proto3"` + // The container image reference specified by the user (e.g. image[:tag] or digest). + // Only set if available within the RPC context. 
+ UserSpecifiedImage string `protobuf:"bytes,18,opt,name=user_specified_image,json=userSpecifiedImage,proto3" json:"user_specified_image,omitempty"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_sizecache int32 `json:"-"` } func (m *ImageSpec) Reset() { *m = ImageSpec{} } @@ -3512,6 +3540,13 @@ func (m *ImageSpec) GetAnnotations() map[string]string { return nil } +func (m *ImageSpec) GetUserSpecifiedImage() string { + if m != nil { + return m.UserSpecifiedImage + } + return "" +} + type KeyValue struct { Key string `protobuf:"bytes,1,opt,name=key,proto3" json:"key,omitempty"` Value string `protobuf:"bytes,2,opt,name=value,proto3" json:"value,omitempty"` @@ -8370,9 +8405,11 @@ type ContainerStats struct { // Memory usage gathered from the container. Memory *MemoryUsage `protobuf:"bytes,3,opt,name=memory,proto3" json:"memory,omitempty"` // Usage of the writable layer. - WritableLayer *FilesystemUsage `protobuf:"bytes,4,opt,name=writable_layer,json=writableLayer,proto3" json:"writable_layer,omitempty"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_sizecache int32 `json:"-"` + WritableLayer *FilesystemUsage `protobuf:"bytes,4,opt,name=writable_layer,json=writableLayer,proto3" json:"writable_layer,omitempty"` + // Swap usage gathered from the container. + Swap *SwapUsage `protobuf:"bytes,5,opt,name=swap,proto3" json:"swap,omitempty"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_sizecache int32 `json:"-"` } func (m *ContainerStats) Reset() { *m = ContainerStats{} } @@ -8435,6 +8472,13 @@ func (m *ContainerStats) GetWritableLayer() *FilesystemUsage { return nil } +func (m *ContainerStats) GetSwap() *SwapUsage { + if m != nil { + return m.Swap + } + return nil +} + // WindowsContainerStats provides the resource usage statistics for a container specific for Windows type WindowsContainerStats struct { // Information of the container. 
@@ -8742,16 +8786,82 @@ func (m *MemoryUsage) GetMajorPageFaults() *UInt64Value { return nil } +type SwapUsage struct { + // Timestamp in nanoseconds at which the information were collected. Must be > 0. + Timestamp int64 `protobuf:"varint,1,opt,name=timestamp,proto3" json:"timestamp,omitempty"` + // Available swap for use. This is defined as the swap limit - swapUsageBytes. + SwapAvailableBytes *UInt64Value `protobuf:"bytes,2,opt,name=swap_available_bytes,json=swapAvailableBytes,proto3" json:"swap_available_bytes,omitempty"` + // Total memory in use. This includes all memory regardless of when it was accessed. + SwapUsageBytes *UInt64Value `protobuf:"bytes,3,opt,name=swap_usage_bytes,json=swapUsageBytes,proto3" json:"swap_usage_bytes,omitempty"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_sizecache int32 `json:"-"` +} + +func (m *SwapUsage) Reset() { *m = SwapUsage{} } +func (*SwapUsage) ProtoMessage() {} +func (*SwapUsage) Descriptor() ([]byte, []int) { + return fileDescriptor_00212fb1f9d3bf1c, []int{128} +} +func (m *SwapUsage) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *SwapUsage) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_SwapUsage.Marshal(b, m, deterministic) + } else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err + } + return b[:n], nil + } +} +func (m *SwapUsage) XXX_Merge(src proto.Message) { + xxx_messageInfo_SwapUsage.Merge(m, src) +} +func (m *SwapUsage) XXX_Size() int { + return m.Size() +} +func (m *SwapUsage) XXX_DiscardUnknown() { + xxx_messageInfo_SwapUsage.DiscardUnknown(m) +} + +var xxx_messageInfo_SwapUsage proto.InternalMessageInfo + +func (m *SwapUsage) GetTimestamp() int64 { + if m != nil { + return m.Timestamp + } + return 0 +} + +func (m *SwapUsage) GetSwapAvailableBytes() *UInt64Value { + if m != nil { + return m.SwapAvailableBytes + } + return nil +} + +func (m *SwapUsage) GetSwapUsageBytes() 
*UInt64Value { + if m != nil { + return m.SwapUsageBytes + } + return nil +} + // WindowsMemoryUsage provides the memory usage information specific to Windows type WindowsMemoryUsage struct { // Timestamp in nanoseconds at which the information were collected. Must be > 0. Timestamp int64 `protobuf:"varint,1,opt,name=timestamp,proto3" json:"timestamp,omitempty"` // The amount of working set memory in bytes. WorkingSetBytes *UInt64Value `protobuf:"bytes,2,opt,name=working_set_bytes,json=workingSetBytes,proto3" json:"working_set_bytes,omitempty"` - // Available memory for use. This is defined as the memory limit - workingSetBytes. + // Available memory for use. This is defined as the memory limit - commit_memory_bytes. AvailableBytes *UInt64Value `protobuf:"bytes,3,opt,name=available_bytes,json=availableBytes,proto3" json:"available_bytes,omitempty"` // Cumulative number of page faults. - PageFaults *UInt64Value `protobuf:"bytes,4,opt,name=page_faults,json=pageFaults,proto3" json:"page_faults,omitempty"` + PageFaults *UInt64Value `protobuf:"bytes,4,opt,name=page_faults,json=pageFaults,proto3" json:"page_faults,omitempty"` + // Total commit memory in use. Commit memory is total of physical and virtual memory in use. 
+ CommitMemoryBytes *UInt64Value `protobuf:"bytes,5,opt,name=commit_memory_bytes,json=commitMemoryBytes,proto3" json:"commit_memory_bytes,omitempty"` XXX_NoUnkeyedLiteral struct{} `json:"-"` XXX_sizecache int32 `json:"-"` } @@ -8759,7 +8869,7 @@ type WindowsMemoryUsage struct { func (m *WindowsMemoryUsage) Reset() { *m = WindowsMemoryUsage{} } func (*WindowsMemoryUsage) ProtoMessage() {} func (*WindowsMemoryUsage) Descriptor() ([]byte, []int) { - return fileDescriptor_00212fb1f9d3bf1c, []int{128} + return fileDescriptor_00212fb1f9d3bf1c, []int{129} } func (m *WindowsMemoryUsage) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -8816,6 +8926,13 @@ func (m *WindowsMemoryUsage) GetPageFaults() *UInt64Value { return nil } +func (m *WindowsMemoryUsage) GetCommitMemoryBytes() *UInt64Value { + if m != nil { + return m.CommitMemoryBytes + } + return nil +} + type ReopenContainerLogRequest struct { // ID of the container for which to reopen the log. ContainerId string `protobuf:"bytes,1,opt,name=container_id,json=containerId,proto3" json:"container_id,omitempty"` @@ -8826,7 +8943,7 @@ type ReopenContainerLogRequest struct { func (m *ReopenContainerLogRequest) Reset() { *m = ReopenContainerLogRequest{} } func (*ReopenContainerLogRequest) ProtoMessage() {} func (*ReopenContainerLogRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_00212fb1f9d3bf1c, []int{129} + return fileDescriptor_00212fb1f9d3bf1c, []int{130} } func (m *ReopenContainerLogRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -8870,7 +8987,7 @@ type ReopenContainerLogResponse struct { func (m *ReopenContainerLogResponse) Reset() { *m = ReopenContainerLogResponse{} } func (*ReopenContainerLogResponse) ProtoMessage() {} func (*ReopenContainerLogResponse) Descriptor() ([]byte, []int) { - return fileDescriptor_00212fb1f9d3bf1c, []int{130} + return fileDescriptor_00212fb1f9d3bf1c, []int{131} } func (m *ReopenContainerLogResponse) XXX_Unmarshal(b []byte) error { return 
m.Unmarshal(b) @@ -8915,7 +9032,7 @@ type CheckpointContainerRequest struct { func (m *CheckpointContainerRequest) Reset() { *m = CheckpointContainerRequest{} } func (*CheckpointContainerRequest) ProtoMessage() {} func (*CheckpointContainerRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_00212fb1f9d3bf1c, []int{131} + return fileDescriptor_00212fb1f9d3bf1c, []int{132} } func (m *CheckpointContainerRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -8973,7 +9090,7 @@ type CheckpointContainerResponse struct { func (m *CheckpointContainerResponse) Reset() { *m = CheckpointContainerResponse{} } func (*CheckpointContainerResponse) ProtoMessage() {} func (*CheckpointContainerResponse) Descriptor() ([]byte, []int) { - return fileDescriptor_00212fb1f9d3bf1c, []int{132} + return fileDescriptor_00212fb1f9d3bf1c, []int{133} } func (m *CheckpointContainerResponse) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -9010,7 +9127,7 @@ type GetEventsRequest struct { func (m *GetEventsRequest) Reset() { *m = GetEventsRequest{} } func (*GetEventsRequest) ProtoMessage() {} func (*GetEventsRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_00212fb1f9d3bf1c, []int{133} + return fileDescriptor_00212fb1f9d3bf1c, []int{134} } func (m *GetEventsRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -9057,7 +9174,7 @@ type ContainerEventResponse struct { func (m *ContainerEventResponse) Reset() { *m = ContainerEventResponse{} } func (*ContainerEventResponse) ProtoMessage() {} func (*ContainerEventResponse) Descriptor() ([]byte, []int) { - return fileDescriptor_00212fb1f9d3bf1c, []int{134} + return fileDescriptor_00212fb1f9d3bf1c, []int{135} } func (m *ContainerEventResponse) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -9129,7 +9246,7 @@ type ListMetricDescriptorsRequest struct { func (m *ListMetricDescriptorsRequest) Reset() { *m = ListMetricDescriptorsRequest{} } func (*ListMetricDescriptorsRequest) ProtoMessage() {} 
func (*ListMetricDescriptorsRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_00212fb1f9d3bf1c, []int{135} + return fileDescriptor_00212fb1f9d3bf1c, []int{136} } func (m *ListMetricDescriptorsRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -9167,7 +9284,7 @@ type ListMetricDescriptorsResponse struct { func (m *ListMetricDescriptorsResponse) Reset() { *m = ListMetricDescriptorsResponse{} } func (*ListMetricDescriptorsResponse) ProtoMessage() {} func (*ListMetricDescriptorsResponse) Descriptor() ([]byte, []int) { - return fileDescriptor_00212fb1f9d3bf1c, []int{136} + return fileDescriptor_00212fb1f9d3bf1c, []int{137} } func (m *ListMetricDescriptorsResponse) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -9220,7 +9337,7 @@ type MetricDescriptor struct { func (m *MetricDescriptor) Reset() { *m = MetricDescriptor{} } func (*MetricDescriptor) ProtoMessage() {} func (*MetricDescriptor) Descriptor() ([]byte, []int) { - return fileDescriptor_00212fb1f9d3bf1c, []int{137} + return fileDescriptor_00212fb1f9d3bf1c, []int{138} } func (m *MetricDescriptor) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -9278,7 +9395,7 @@ type ListPodSandboxMetricsRequest struct { func (m *ListPodSandboxMetricsRequest) Reset() { *m = ListPodSandboxMetricsRequest{} } func (*ListPodSandboxMetricsRequest) ProtoMessage() {} func (*ListPodSandboxMetricsRequest) Descriptor() ([]byte, []int) { - return fileDescriptor_00212fb1f9d3bf1c, []int{138} + return fileDescriptor_00212fb1f9d3bf1c, []int{139} } func (m *ListPodSandboxMetricsRequest) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -9316,7 +9433,7 @@ type ListPodSandboxMetricsResponse struct { func (m *ListPodSandboxMetricsResponse) Reset() { *m = ListPodSandboxMetricsResponse{} } func (*ListPodSandboxMetricsResponse) ProtoMessage() {} func (*ListPodSandboxMetricsResponse) Descriptor() ([]byte, []int) { - return fileDescriptor_00212fb1f9d3bf1c, []int{139} + return 
fileDescriptor_00212fb1f9d3bf1c, []int{140} } func (m *ListPodSandboxMetricsResponse) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -9363,7 +9480,7 @@ type PodSandboxMetrics struct { func (m *PodSandboxMetrics) Reset() { *m = PodSandboxMetrics{} } func (*PodSandboxMetrics) ProtoMessage() {} func (*PodSandboxMetrics) Descriptor() ([]byte, []int) { - return fileDescriptor_00212fb1f9d3bf1c, []int{140} + return fileDescriptor_00212fb1f9d3bf1c, []int{141} } func (m *PodSandboxMetrics) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -9423,7 +9540,7 @@ type ContainerMetrics struct { func (m *ContainerMetrics) Reset() { *m = ContainerMetrics{} } func (*ContainerMetrics) ProtoMessage() {} func (*ContainerMetrics) Descriptor() ([]byte, []int) { - return fileDescriptor_00212fb1f9d3bf1c, []int{141} + return fileDescriptor_00212fb1f9d3bf1c, []int{142} } func (m *ContainerMetrics) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -9486,7 +9603,7 @@ type Metric struct { func (m *Metric) Reset() { *m = Metric{} } func (*Metric) ProtoMessage() {} func (*Metric) Descriptor() ([]byte, []int) { - return fileDescriptor_00212fb1f9d3bf1c, []int{142} + return fileDescriptor_00212fb1f9d3bf1c, []int{143} } func (m *Metric) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -9550,6 +9667,143 @@ func (m *Metric) GetValue() *UInt64Value { return nil } +type RuntimeConfigRequest struct { + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_sizecache int32 `json:"-"` +} + +func (m *RuntimeConfigRequest) Reset() { *m = RuntimeConfigRequest{} } +func (*RuntimeConfigRequest) ProtoMessage() {} +func (*RuntimeConfigRequest) Descriptor() ([]byte, []int) { + return fileDescriptor_00212fb1f9d3bf1c, []int{144} +} +func (m *RuntimeConfigRequest) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *RuntimeConfigRequest) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_RuntimeConfigRequest.Marshal(b, m, 
deterministic)
+	} else {
+		b = b[:cap(b)]
+		n, err := m.MarshalToSizedBuffer(b)
+		if err != nil {
+			return nil, err
+		}
+		return b[:n], nil
+	}
+}
+func (m *RuntimeConfigRequest) XXX_Merge(src proto.Message) {
+	xxx_messageInfo_RuntimeConfigRequest.Merge(m, src)
+}
+func (m *RuntimeConfigRequest) XXX_Size() int {
+	return m.Size()
+}
+func (m *RuntimeConfigRequest) XXX_DiscardUnknown() {
+	xxx_messageInfo_RuntimeConfigRequest.DiscardUnknown(m)
+}
+
+var xxx_messageInfo_RuntimeConfigRequest proto.InternalMessageInfo
+
+type RuntimeConfigResponse struct {
+	// Configuration information for Linux-based runtimes. This field contains
+	// global runtime configuration options that are not specific to runtime
+	// handlers.
+	Linux                *LinuxRuntimeConfiguration `protobuf:"bytes,1,opt,name=linux,proto3" json:"linux,omitempty"`
+	XXX_NoUnkeyedLiteral struct{}                   `json:"-"`
+	XXX_sizecache        int32                      `json:"-"`
+}
+
+func (m *RuntimeConfigResponse) Reset()      { *m = RuntimeConfigResponse{} }
+func (*RuntimeConfigResponse) ProtoMessage() {}
+func (*RuntimeConfigResponse) Descriptor() ([]byte, []int) {
+	return fileDescriptor_00212fb1f9d3bf1c, []int{145}
+}
+func (m *RuntimeConfigResponse) XXX_Unmarshal(b []byte) error {
+	return m.Unmarshal(b)
+}
+func (m *RuntimeConfigResponse) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+	if deterministic {
+		return xxx_messageInfo_RuntimeConfigResponse.Marshal(b, m, deterministic)
+	} else {
+		b = b[:cap(b)]
+		n, err := m.MarshalToSizedBuffer(b)
+		if err != nil {
+			return nil, err
+		}
+		return b[:n], nil
+	}
+}
+func (m *RuntimeConfigResponse) XXX_Merge(src proto.Message) {
+	xxx_messageInfo_RuntimeConfigResponse.Merge(m, src)
+}
+func (m *RuntimeConfigResponse) XXX_Size() int {
+	return m.Size()
+}
+func (m *RuntimeConfigResponse) XXX_DiscardUnknown() {
+	xxx_messageInfo_RuntimeConfigResponse.DiscardUnknown(m)
+}
+
+var xxx_messageInfo_RuntimeConfigResponse proto.InternalMessageInfo
+
+func (m *RuntimeConfigResponse) GetLinux() *LinuxRuntimeConfiguration {
+	if m != nil {
+		return m.Linux
+	}
+	return nil
+}
+
+type LinuxRuntimeConfiguration struct {
+	// Cgroup driver to use
+	// Note: this field should not change for the lifecycle of the Kubelet,
+	// or while there are running containers.
+	// The Kubelet will not re-request this after startup, and will construct the cgroup
+	// hierarchy assuming it is static.
+	// If the runtime wishes to change this value, it must be accompanied by removal of
+	// all pods, and a restart of the Kubelet. The easiest way to do this is with a full node reboot.
+	CgroupDriver         CgroupDriver `protobuf:"varint,1,opt,name=cgroup_driver,json=cgroupDriver,proto3,enum=runtime.v1.CgroupDriver" json:"cgroup_driver,omitempty"`
+	XXX_NoUnkeyedLiteral struct{}     `json:"-"`
+	XXX_sizecache        int32        `json:"-"`
+}
+
+func (m *LinuxRuntimeConfiguration) Reset()      { *m = LinuxRuntimeConfiguration{} }
+func (*LinuxRuntimeConfiguration) ProtoMessage() {}
+func (*LinuxRuntimeConfiguration) Descriptor() ([]byte, []int) {
+	return fileDescriptor_00212fb1f9d3bf1c, []int{146}
+}
+func (m *LinuxRuntimeConfiguration) XXX_Unmarshal(b []byte) error {
+	return m.Unmarshal(b)
+}
+func (m *LinuxRuntimeConfiguration) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+	if deterministic {
+		return xxx_messageInfo_LinuxRuntimeConfiguration.Marshal(b, m, deterministic)
+	} else {
+		b = b[:cap(b)]
+		n, err := m.MarshalToSizedBuffer(b)
+		if err != nil {
+			return nil, err
+		}
+		return b[:n], nil
+	}
+}
+func (m *LinuxRuntimeConfiguration) XXX_Merge(src proto.Message) {
+	xxx_messageInfo_LinuxRuntimeConfiguration.Merge(m, src)
+}
+func (m *LinuxRuntimeConfiguration) XXX_Size() int {
+	return m.Size()
+}
+func (m *LinuxRuntimeConfiguration) XXX_DiscardUnknown() {
+	xxx_messageInfo_LinuxRuntimeConfiguration.DiscardUnknown(m)
+}
+
+var xxx_messageInfo_LinuxRuntimeConfiguration proto.InternalMessageInfo
+
+func (m *LinuxRuntimeConfiguration) GetCgroupDriver() CgroupDriver {
+	if m != nil {
+		return m.CgroupDriver
+	}
+	return CgroupDriver_SYSTEMD
+}
+
 func init() {
 	proto.RegisterEnum("runtime.v1.Protocol", Protocol_name, Protocol_value)
 	proto.RegisterEnum("runtime.v1.MountPropagation", MountPropagation_name, MountPropagation_value)
@@ -9558,6 +9812,7 @@ func init() {
 	proto.RegisterEnum("runtime.v1.ContainerState", ContainerState_name, ContainerState_value)
 	proto.RegisterEnum("runtime.v1.ContainerEventType", ContainerEventType_name, ContainerEventType_value)
 	proto.RegisterEnum("runtime.v1.MetricType", MetricType_name, MetricType_value)
+	proto.RegisterEnum("runtime.v1.CgroupDriver", CgroupDriver_name, CgroupDriver_value)
 	proto.RegisterEnum("runtime.v1.SecurityProfile_ProfileType", SecurityProfile_ProfileType_name, SecurityProfile_ProfileType_value)
 	proto.RegisterType((*VersionRequest)(nil), "runtime.v1.VersionRequest")
 	proto.RegisterType((*VersionResponse)(nil), "runtime.v1.VersionResponse")
@@ -9715,6 +9970,7 @@ func init() {
 	proto.RegisterType((*CpuUsage)(nil), "runtime.v1.CpuUsage")
 	proto.RegisterType((*WindowsCpuUsage)(nil), "runtime.v1.WindowsCpuUsage")
 	proto.RegisterType((*MemoryUsage)(nil), "runtime.v1.MemoryUsage")
+	proto.RegisterType((*SwapUsage)(nil), "runtime.v1.SwapUsage")
 	proto.RegisterType((*WindowsMemoryUsage)(nil), "runtime.v1.WindowsMemoryUsage")
 	proto.RegisterType((*ReopenContainerLogRequest)(nil), "runtime.v1.ReopenContainerLogRequest")
 	proto.RegisterType((*ReopenContainerLogResponse)(nil), "runtime.v1.ReopenContainerLogResponse")
@@ -9730,425 +9986,440 @@ func init() {
 	proto.RegisterType((*PodSandboxMetrics)(nil), "runtime.v1.PodSandboxMetrics")
 	proto.RegisterType((*ContainerMetrics)(nil), "runtime.v1.ContainerMetrics")
 	proto.RegisterType((*Metric)(nil), "runtime.v1.Metric")
+	proto.RegisterType((*RuntimeConfigRequest)(nil), "runtime.v1.RuntimeConfigRequest")
+	proto.RegisterType((*RuntimeConfigResponse)(nil), "runtime.v1.RuntimeConfigResponse")
+	proto.RegisterType((*LinuxRuntimeConfiguration)(nil),
"runtime.v1.LinuxRuntimeConfiguration") } func init() { proto.RegisterFile("api.proto", fileDescriptor_00212fb1f9d3bf1c) } var fileDescriptor_00212fb1f9d3bf1c = []byte{ - // 6593 bytes of a gzipped FileDescriptorProto - 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xd4, 0x7d, 0x4d, 0x6c, 0x1c, 0xc9, - 0x75, 0x30, 0x7b, 0x66, 0x48, 0xce, 0xbc, 0xe1, 0x90, 0xc3, 0x12, 0x45, 0x52, 0x43, 0x89, 0x92, - 0x7a, 0xff, 0xf4, 0xb3, 0xfa, 0x59, 0xad, 0x76, 0x57, 0x92, 0xb5, 0xbb, 0x1a, 0x91, 0x5c, 0x69, - 0xd6, 0x12, 0x39, 0x6e, 0x92, 0xb2, 0xd7, 0xfe, 0xe0, 0xfe, 0x5a, 0xd3, 0xc5, 0x61, 0xaf, 0x66, - 0xba, 0xdb, 0xdd, 0x3d, 0x92, 0xe8, 0x53, 0x8e, 0x89, 0x4f, 0x06, 0x12, 0xc7, 0x80, 0x11, 0x24, - 0xc8, 0x21, 0x3f, 0x40, 0x0e, 0x09, 0x02, 0x24, 0x70, 0x10, 0x24, 0x01, 0x8c, 0xc4, 0x70, 0x02, - 0x04, 0xc8, 0x21, 0x01, 0x7c, 0x08, 0x10, 0x7b, 0x13, 0x20, 0x40, 0x0e, 0xb9, 0xc4, 0x87, 0xdc, - 0x1c, 0xd4, 0x5f, 0x77, 0x57, 0xff, 0xcc, 0x0c, 0xb9, 0xeb, 0xdd, 0xf5, 0x89, 0xd3, 0xaf, 0xde, - 0x7b, 0xf5, 0xea, 0xd5, 0xab, 0x57, 0xaf, 0xaa, 0x5e, 0x15, 0xa1, 0x62, 0xb8, 0xd6, 0x65, 0xd7, - 0x73, 0x02, 0x07, 0x81, 0x37, 0xb0, 0x03, 0xab, 0x8f, 0x2f, 0x3f, 0x7d, 0xad, 0x71, 0xa9, 0x6b, - 0x05, 0xfb, 0x83, 0xc7, 0x97, 0x3b, 0x4e, 0xff, 0x4a, 0xd7, 0xe9, 0x3a, 0x57, 0x28, 0xca, 0xe3, - 0xc1, 0x1e, 0xfd, 0xa2, 0x1f, 0xf4, 0x17, 0x23, 0x55, 0x2f, 0xc0, 0xec, 0x23, 0xec, 0xf9, 0x96, - 0x63, 0x6b, 0xf8, 0x1b, 0x03, 0xec, 0x07, 0x68, 0x19, 0xa6, 0x9f, 0x32, 0xc8, 0xb2, 0x72, 0x46, - 0x39, 0x57, 0xd1, 0xc4, 0xa7, 0xfa, 0x07, 0x0a, 0xcc, 0x85, 0xc8, 0xbe, 0xeb, 0xd8, 0x3e, 0xce, - 0xc7, 0x46, 0x67, 0x61, 0x86, 0x8b, 0xa5, 0xdb, 0x46, 0x1f, 0x2f, 0x17, 0x68, 0x71, 0x95, 0xc3, - 0x36, 0x8d, 0x3e, 0x46, 0xaf, 0xc0, 0x9c, 0x40, 0x11, 0x4c, 0x8a, 0x14, 0x6b, 0x96, 0x83, 0x79, - 0x6d, 0xe8, 0x32, 0x1c, 0x13, 0x88, 0x86, 0x6b, 0x85, 0xc8, 0x25, 0x8a, 0x3c, 0xcf, 0x8b, 0x9a, - 0xae, 0xc5, 0xf1, 0xd5, 0xaf, 0x41, 0x65, 0x7d, 0x73, 0x7b, 0xcd, 0xb1, 0xf7, 0xac, 0x2e, 0x11, - 0xd1, 0xc7, 
0x1e, 0xa1, 0x59, 0x56, 0xce, 0x14, 0x89, 0x88, 0xfc, 0x13, 0x35, 0xa0, 0xec, 0x63, - 0xc3, 0xeb, 0xec, 0x63, 0x7f, 0xb9, 0x40, 0x8b, 0xc2, 0x6f, 0x42, 0xe5, 0xb8, 0x81, 0xe5, 0xd8, - 0xfe, 0x72, 0x91, 0x51, 0xf1, 0x4f, 0xf5, 0xb7, 0x14, 0xa8, 0xb6, 0x1d, 0x2f, 0x78, 0x68, 0xb8, - 0xae, 0x65, 0x77, 0xd1, 0x55, 0x28, 0x53, 0x5d, 0x76, 0x9c, 0x1e, 0xd5, 0xc1, 0xec, 0xb5, 0x85, - 0xcb, 0x51, 0x87, 0x5c, 0x6e, 0xf3, 0x32, 0x2d, 0xc4, 0x42, 0x2f, 0xc1, 0x6c, 0xc7, 0xb1, 0x03, - 0xc3, 0xb2, 0xb1, 0xa7, 0xbb, 0x8e, 0x17, 0x50, 0xe5, 0x4c, 0x6a, 0xb5, 0x10, 0x4a, 0xf8, 0xa3, - 0x15, 0xa8, 0xec, 0x3b, 0x7e, 0xc0, 0x30, 0x8a, 0x14, 0xa3, 0x4c, 0x00, 0xb4, 0x70, 0x09, 0xa6, - 0x69, 0xa1, 0xe5, 0x72, 0x35, 0x4c, 0x91, 0xcf, 0x96, 0xab, 0xfe, 0xa0, 0x00, 0x93, 0x0f, 0x9d, - 0x81, 0x1d, 0x24, 0xaa, 0x31, 0x82, 0x7d, 0xde, 0x45, 0xb1, 0x6a, 0x8c, 0x60, 0x3f, 0xaa, 0x86, - 0x60, 0xb0, 0x5e, 0x62, 0xd5, 0x90, 0xc2, 0x06, 0x94, 0x3d, 0x6c, 0x98, 0x8e, 0xdd, 0x3b, 0xa0, - 0x22, 0x94, 0xb5, 0xf0, 0x9b, 0x74, 0x9f, 0x8f, 0x7b, 0x96, 0x3d, 0x78, 0xae, 0x7b, 0xb8, 0x67, - 0x3c, 0xc6, 0x3d, 0x2a, 0x4a, 0x59, 0x9b, 0xe5, 0x60, 0x8d, 0x41, 0xd1, 0x3b, 0x50, 0x75, 0x3d, - 0xc7, 0x35, 0xba, 0x06, 0xd1, 0xe0, 0xf2, 0x24, 0x55, 0xd2, 0xc9, 0xb8, 0x92, 0xa8, 0xc0, 0xed, - 0x08, 0x47, 0x8b, 0x13, 0xa0, 0xb7, 0xa0, 0x3a, 0xb0, 0x4c, 0xae, 0x6f, 0x7f, 0x79, 0xea, 0x4c, - 0xf1, 0x5c, 0xf5, 0xda, 0xf1, 0x38, 0x7d, 0x6b, 0x9d, 0x97, 0x6a, 0x71, 0x4c, 0x42, 0xd8, 0x8d, - 0x11, 0x4e, 0x0f, 0x25, 0x8c, 0x61, 0xaa, 0x3a, 0x54, 0xc2, 0x92, 0x48, 0xd5, 0x26, 0x55, 0x60, - 0x8d, 0xab, 0xda, 0x24, 0x26, 0x1e, 0x29, 0xd8, 0x32, 0xa9, 0xf2, 0x6a, 0x5a, 0x35, 0x84, 0xb5, - 0x4c, 0xb4, 0x08, 0x53, 0x3d, 0x6c, 0x77, 0x83, 0x7d, 0xaa, 0xbd, 0x9a, 0xc6, 0xbf, 0xd4, 0xdf, - 0x50, 0xa0, 0xb6, 0xeb, 0x63, 0x8f, 0x8c, 0x03, 0xdf, 0x35, 0x3a, 0x18, 0x5d, 0x82, 0x52, 0xdf, - 0x31, 0x31, 0x37, 0xa1, 0x13, 0x71, 0x21, 0x43, 0xa4, 0x87, 0x8e, 0x89, 0x35, 0x8a, 0x86, 0xce, - 0x43, 0x69, 0x60, 0x99, 0xcc, 0x6e, 0x73, 0xdb, 
0x44, 0x51, 0x08, 0x6a, 0x97, 0xa0, 0x16, 0x87, - 0xa2, 0x12, 0x14, 0xf5, 0xe7, 0x0a, 0xcc, 0x85, 0xb5, 0x6d, 0x51, 0x83, 0x47, 0xaf, 0xc3, 0xb4, - 0x8d, 0x83, 0x67, 0x8e, 0xf7, 0x64, 0xb4, 0x6c, 0x02, 0x13, 0x5d, 0x84, 0xa2, 0xcb, 0x35, 0x32, - 0x94, 0x80, 0x60, 0x11, 0x64, 0xcb, 0xed, 0x50, 0x0d, 0x0d, 0x47, 0xb6, 0xdc, 0x0e, 0x31, 0xd7, - 0xc0, 0xf0, 0xba, 0x98, 0xf6, 0x07, 0x33, 0xfd, 0x32, 0x03, 0xb4, 0x4c, 0x74, 0x07, 0x66, 0x07, - 0x3e, 0xf6, 0x6c, 0x5f, 0x17, 0x83, 0x97, 0x18, 0x5b, 0x55, 0x66, 0x2a, 0xe9, 0x5d, 0xab, 0x31, - 0x82, 0x2d, 0x3e, 0xba, 0x55, 0x80, 0x96, 0x1d, 0xbc, 0x79, 0xfd, 0x91, 0xd1, 0x1b, 0x60, 0xb4, - 0x00, 0x93, 0x4f, 0xc9, 0x0f, 0xda, 0xf2, 0xa2, 0xc6, 0x3e, 0xd4, 0xbf, 0x2a, 0xc1, 0xca, 0x03, - 0x62, 0xe0, 0xdb, 0x86, 0x6d, 0x3e, 0x76, 0x9e, 0x6f, 0xe3, 0xce, 0xc0, 0xb3, 0x82, 0x83, 0x35, - 0xc7, 0x0e, 0xf0, 0xf3, 0x00, 0xdd, 0x87, 0x79, 0x5b, 0xf0, 0x0f, 0x05, 0x51, 0xa8, 0x20, 0x2b, - 0x99, 0xad, 0x63, 0x95, 0x6b, 0x75, 0x5b, 0x06, 0xf8, 0xe8, 0x6e, 0x34, 0xc4, 0x04, 0x9f, 0x42, - 0xba, 0x41, 0xdb, 0x1b, 0x54, 0x1a, 0xce, 0x45, 0x8c, 0x3e, 0xc1, 0xe3, 0x4d, 0x20, 0x4e, 0x57, - 0x37, 0x7c, 0x9d, 0xb4, 0x94, 0x6a, 0xb9, 0x7a, 0x6d, 0x51, 0xb2, 0x82, 0xb0, 0xc1, 0x5a, 0xc5, - 0x1b, 0xd8, 0x4d, 0x9f, 0x68, 0x08, 0xdd, 0xa0, 0x0e, 0x9c, 0xd0, 0x75, 0x3d, 0x67, 0xe0, 0x2e, - 0x97, 0x87, 0x12, 0x02, 0x25, 0xbc, 0x47, 0x30, 0xa9, 0x5f, 0xe7, 0x4e, 0x42, 0xf7, 0x1c, 0x27, - 0xd8, 0xf3, 0x85, 0x63, 0x10, 0x60, 0x8d, 0x42, 0xd1, 0x15, 0x38, 0xe6, 0x0f, 0x5c, 0xb7, 0x87, - 0xfb, 0xd8, 0x0e, 0x8c, 0x1e, 0xab, 0x88, 0xf4, 0x59, 0xf1, 0x5c, 0x51, 0x43, 0xf1, 0x22, 0xca, - 0xd8, 0x47, 0xab, 0x00, 0xae, 0x67, 0x3d, 0xb5, 0x7a, 0xb8, 0x8b, 0xcd, 0xe5, 0x29, 0xca, 0x34, - 0x06, 0x41, 0x6f, 0x10, 0x5f, 0xdf, 0xe9, 0x38, 0x7d, 0x77, 0xb9, 0x92, 0xd6, 0xb7, 0xe8, 0xa7, - 0xb6, 0xe7, 0xec, 0x59, 0x3d, 0xac, 0x09, 0x5c, 0xf4, 0x16, 0x94, 0x0d, 0xd7, 0x35, 0xbc, 0xbe, - 0xe3, 0x2d, 0xc3, 0x68, 0xba, 0x10, 0x19, 0x5d, 0x87, 0x05, 0xce, 0x43, 0x77, 0x59, 
0x21, 0x73, - 0xa3, 0xd3, 0xc4, 0x2e, 0xef, 0x16, 0x96, 0x15, 0x0d, 0xf1, 0x72, 0x4e, 0x4b, 0x9c, 0xaa, 0xfa, - 0xb7, 0x0a, 0xcc, 0x25, 0x78, 0xa2, 0xf7, 0x61, 0x46, 0x70, 0x08, 0x0e, 0x5c, 0xe1, 0x06, 0x5e, - 0x19, 0x22, 0xc6, 0x65, 0xfe, 0x77, 0xe7, 0xc0, 0xc5, 0xd4, 0x5f, 0x8a, 0x0f, 0xf4, 0x02, 0xd4, - 0x7a, 0x4e, 0xc7, 0xe8, 0x51, 0xaf, 0xe5, 0xe1, 0x3d, 0xee, 0xd5, 0x67, 0x42, 0xa0, 0x86, 0xf7, - 0xd4, 0x3b, 0x50, 0x8d, 0x31, 0x40, 0x08, 0x66, 0x35, 0x56, 0xd5, 0x3a, 0xde, 0x33, 0x06, 0xbd, - 0xa0, 0x3e, 0x81, 0x66, 0x01, 0x76, 0xed, 0x0e, 0x99, 0x45, 0x6d, 0x6c, 0xd6, 0x15, 0x54, 0x83, - 0xca, 0x03, 0xc1, 0xa2, 0x5e, 0x50, 0xbf, 0x57, 0x84, 0xe3, 0xd4, 0xf0, 0xda, 0x8e, 0xc9, 0x47, - 0x02, 0x9f, 0x72, 0x5f, 0x80, 0x5a, 0x87, 0xf6, 0xa5, 0xee, 0x1a, 0x1e, 0xb6, 0x03, 0x3e, 0xf1, - 0xcc, 0x30, 0x60, 0x9b, 0xc2, 0x90, 0x06, 0x75, 0x9f, 0xb7, 0x48, 0xef, 0xb0, 0x91, 0xc3, 0x8d, - 0x5b, 0x6a, 0xf5, 0x90, 0x81, 0xa6, 0xcd, 0xf9, 0xa9, 0x91, 0x37, 0xed, 0x1f, 0xf8, 0x9d, 0xa0, - 0x27, 0xbc, 0xdd, 0xe5, 0x14, 0xab, 0xa4, 0xb0, 0x97, 0xb7, 0x19, 0xc1, 0x86, 0x1d, 0x78, 0x07, - 0x9a, 0x20, 0x47, 0xef, 0x42, 0xd9, 0x79, 0x8a, 0xbd, 0x7d, 0x6c, 0x30, 0x2f, 0x53, 0xbd, 0xf6, - 0x42, 0x8a, 0xd5, 0x9a, 0x70, 0xf4, 0x1a, 0xf6, 0x9d, 0x81, 0xd7, 0xc1, 0xbe, 0x16, 0x12, 0xa1, - 0x26, 0x54, 0x3c, 0x01, 0xe6, 0x5e, 0x68, 0x2c, 0x0e, 0x11, 0x55, 0xe3, 0x16, 0xcc, 0xc4, 0x85, - 0x43, 0x75, 0x28, 0x3e, 0xc1, 0x07, 0x5c, 0x99, 0xe4, 0x67, 0xe4, 0x9f, 0x58, 0x0f, 0xb3, 0x8f, - 0x5b, 0x85, 0x1b, 0x8a, 0xea, 0x01, 0x8a, 0x5a, 0xfa, 0x10, 0x07, 0x86, 0x69, 0x04, 0x06, 0x42, - 0x50, 0xa2, 0xc1, 0x18, 0x63, 0x41, 0x7f, 0x13, 0xae, 0x03, 0xee, 0xaa, 0x2b, 0x1a, 0xf9, 0x89, - 0x4e, 0x42, 0x25, 0xf4, 0x44, 0x3c, 0x22, 0x8b, 0x00, 0x24, 0x32, 0x32, 0x82, 0x00, 0xf7, 0xdd, - 0x80, 0x2a, 0xa6, 0xa6, 0x89, 0x4f, 0xf5, 0xd7, 0x26, 0xa1, 0x9e, 0xb2, 0x85, 0x5b, 0x50, 0xee, - 0xf3, 0xea, 0xb9, 0x0f, 0x5c, 0x95, 0xc2, 0xa3, 0x94, 0x90, 0x5a, 0x88, 0x4f, 0xa2, 0x0f, 0x62, - 0x6b, 0xb1, 0xf8, 0x31, 
0xfc, 0x66, 0x46, 0xde, 0xd5, 0x4d, 0xcb, 0xc3, 0x9d, 0xc0, 0xf1, 0x0e, - 0xb8, 0xa0, 0x33, 0x3d, 0xa7, 0xbb, 0x2e, 0x60, 0xe8, 0x3a, 0x80, 0x69, 0xfb, 0x3a, 0xb5, 0xe1, - 0x2e, 0xef, 0x47, 0x69, 0x02, 0x0c, 0xc3, 0x44, 0xad, 0x62, 0xda, 0x3e, 0x17, 0xf9, 0x36, 0xd4, - 0x48, 0xcc, 0xa5, 0xf7, 0x45, 0xe0, 0x30, 0x49, 0x6d, 0x69, 0x49, 0x96, 0x3b, 0x8c, 0x00, 0xb5, - 0x19, 0x37, 0xfa, 0xf0, 0xd1, 0x1d, 0x98, 0xa2, 0x61, 0x8f, 0x08, 0x54, 0xce, 0x65, 0x37, 0x97, - 0x5b, 0xdf, 0x03, 0x8a, 0xca, 0x8c, 0x8f, 0xd3, 0xa1, 0x2d, 0xa8, 0x1a, 0xb6, 0xed, 0x04, 0x06, - 0xf3, 0xf8, 0x2c, 0x6c, 0xb9, 0x34, 0x94, 0x4d, 0x33, 0xc2, 0x67, 0xbc, 0xe2, 0x1c, 0xd0, 0x5b, - 0x30, 0x49, 0xa7, 0x04, 0xee, 0xc3, 0xcf, 0x8e, 0x1c, 0x14, 0x1a, 0xc3, 0x47, 0x6f, 0xc3, 0xf4, - 0x33, 0xcb, 0x36, 0x9d, 0x67, 0x3e, 0xf7, 0xa7, 0x92, 0x09, 0x7f, 0x99, 0x15, 0xa5, 0x88, 0x05, - 0x4d, 0xe3, 0x26, 0x54, 0x63, 0xed, 0x3b, 0x8c, 0xfd, 0x36, 0xde, 0x81, 0x7a, 0xb2, 0x4d, 0x87, - 0xb2, 0xff, 0x01, 0x2c, 0x68, 0x03, 0x3b, 0x12, 0x4d, 0x2c, 0x6f, 0xae, 0xc3, 0x14, 0xb7, 0x06, - 0x66, 0x8c, 0x27, 0x87, 0xa9, 0x55, 0xe3, 0xb8, 0xf1, 0x95, 0xca, 0xbe, 0x61, 0x9b, 0x3d, 0xec, - 0xf1, 0x1a, 0xc5, 0x4a, 0xe5, 0x3e, 0x83, 0xaa, 0x6f, 0xc3, 0xf1, 0x44, 0xb5, 0x7c, 0xa1, 0xf4, - 0x22, 0xcc, 0xba, 0x8e, 0xa9, 0xfb, 0x0c, 0x2c, 0x62, 0xc9, 0x0a, 0xb1, 0x1d, 0x81, 0xdb, 0x32, - 0x09, 0xf9, 0x76, 0xe0, 0xb8, 0x69, 0xb1, 0xc7, 0x23, 0x5f, 0x86, 0xc5, 0x24, 0x39, 0xab, 0x5e, - 0x7d, 0x17, 0x96, 0x34, 0xdc, 0x77, 0x9e, 0xe2, 0xa3, 0xb2, 0x6e, 0xc0, 0x72, 0x9a, 0x01, 0x67, - 0xfe, 0x01, 0x2c, 0x45, 0xd0, 0xed, 0xc0, 0x08, 0x06, 0xfe, 0xa1, 0x98, 0xf3, 0x55, 0xe4, 0x63, - 0xc7, 0x67, 0x1d, 0x59, 0xd6, 0xc4, 0xa7, 0xba, 0x04, 0x93, 0x6d, 0xc7, 0x6c, 0xb5, 0xd1, 0x2c, - 0x14, 0x2c, 0x97, 0x13, 0x17, 0x2c, 0x57, 0xed, 0xc4, 0xeb, 0xdc, 0x64, 0x51, 0x27, 0xab, 0x3a, - 0x89, 0x8a, 0x6e, 0xc0, 0xac, 0x61, 0x9a, 0x16, 0x31, 0x24, 0xa3, 0xa7, 0x5b, 0xae, 0x08, 0x9a, - 0xe7, 0x13, 0x5d, 0xdf, 0x6a, 0x6b, 0xb5, 0x08, 0xb1, 0xe5, 
0xfa, 0xea, 0x5d, 0xa8, 0x44, 0x01, - 0xfa, 0x1b, 0xd1, 0x8a, 0xb0, 0x30, 0x3a, 0x96, 0x0b, 0x97, 0x8b, 0x9b, 0xa9, 0x49, 0x92, 0x8b, - 0xf9, 0x06, 0x40, 0xe8, 0x54, 0x45, 0x78, 0x78, 0x3c, 0x93, 0xa5, 0x16, 0x43, 0x54, 0xff, 0xad, - 0x14, 0x77, 0xb2, 0xb1, 0x26, 0x9b, 0x61, 0x93, 0x4d, 0xc9, 0xe9, 0x16, 0x0e, 0xe9, 0x74, 0x5f, - 0x83, 0x49, 0x3f, 0x30, 0x02, 0xcc, 0xe3, 0xf1, 0x95, 0x6c, 0x42, 0x52, 0x31, 0xd6, 0x18, 0x26, - 0x3a, 0x05, 0xd0, 0xf1, 0xb0, 0x11, 0x60, 0x53, 0x37, 0xd8, 0xac, 0x50, 0xd4, 0x2a, 0x1c, 0xd2, - 0x0c, 0x88, 0x17, 0x11, 0x2b, 0x88, 0x8c, 0x89, 0x30, 0xa7, 0x1b, 0xa3, 0xb5, 0x44, 0xe8, 0xbd, - 0xa6, 0x46, 0x7a, 0x2f, 0x4e, 0xca, 0xbd, 0x57, 0xe4, 0x89, 0xa7, 0x87, 0x79, 0x62, 0x46, 0x34, - 0x8e, 0x27, 0x2e, 0x0f, 0xf3, 0xc4, 0x9c, 0xcd, 0x70, 0x4f, 0x9c, 0xe1, 0x48, 0x2a, 0x59, 0x8e, - 0xe4, 0xb3, 0x74, 0x9d, 0x7f, 0x51, 0x80, 0xe5, 0xf4, 0x78, 0xe6, 0x7e, 0xec, 0x3a, 0x4c, 0xf9, - 0x14, 0x32, 0xdc, 0x7f, 0x72, 0x2a, 0x8e, 0x8b, 0xee, 0x42, 0xc9, 0xb2, 0xf7, 0x1c, 0x3e, 0xf0, - 0x2e, 0x0f, 0xa5, 0xe1, 0x35, 0x5d, 0x6e, 0xd9, 0x7b, 0x0e, 0xd3, 0x20, 0xa5, 0x45, 0x0f, 0xe0, - 0x58, 0xb8, 0xb2, 0xf6, 0x75, 0xc6, 0x18, 0x8b, 0x38, 0x4f, 0xb2, 0xd2, 0x30, 0xaa, 0xe2, 0x1c, - 0x51, 0x44, 0xb7, 0xcd, 0xc9, 0x48, 0x8c, 0x43, 0xd0, 0xfd, 0xc0, 0xe8, 0xbb, 0xc2, 0x62, 0x43, - 0x40, 0xe3, 0x2d, 0xa8, 0x84, 0xd5, 0x1f, 0x4a, 0x77, 0x2d, 0x58, 0x48, 0x8c, 0x11, 0xb6, 0x90, - 0x0c, 0x07, 0x95, 0x32, 0xee, 0xa0, 0x52, 0x7f, 0xa6, 0xc4, 0x07, 0xfa, 0x7b, 0x56, 0x2f, 0xc0, - 0x5e, 0x6a, 0xa0, 0xbf, 0x29, 0xf8, 0xb2, 0x51, 0x7e, 0x66, 0x08, 0x5f, 0xb6, 0x4e, 0xe3, 0x23, - 0xf6, 0x11, 0xcc, 0x52, 0x13, 0xd7, 0x7d, 0xdc, 0xa3, 0xb1, 0x12, 0xd7, 0xe3, 0x95, 0x6c, 0x06, - 0xac, 0x76, 0x36, 0x44, 0xb6, 0x39, 0x05, 0xeb, 0x9b, 0x5a, 0x2f, 0x0e, 0x6b, 0xdc, 0x01, 0x94, - 0x46, 0x3a, 0x94, 0x06, 0x1f, 0x12, 0x7f, 0xe9, 0x07, 0x99, 0x33, 0xf7, 0x1e, 0x15, 0x63, 0xb8, - 0xe5, 0x31, 0x51, 0x35, 0x8e, 0xab, 0xfe, 0x4b, 0x11, 0x20, 0x2a, 0xfc, 0x9c, 0x3b, 0xca, 0x5b, - 
0xa1, 0xc3, 0x62, 0x11, 0xa7, 0x9a, 0xcd, 0x32, 0xd3, 0x55, 0xb5, 0x64, 0x57, 0xc5, 0x62, 0xcf, - 0x57, 0x72, 0x18, 0x1c, 0xda, 0x49, 0x4d, 0x7f, 0xde, 0x9c, 0xd4, 0x7b, 0xb0, 0x98, 0x34, 0x13, - 0xee, 0xa1, 0x5e, 0x85, 0x49, 0x2b, 0xc0, 0x7d, 0xb6, 0xdb, 0x9b, 0xd8, 0xb0, 0x88, 0xa1, 0x33, - 0x24, 0xf5, 0x1d, 0x58, 0x94, 0xfb, 0xea, 0x70, 0xa1, 0x8b, 0xfa, 0x20, 0x19, 0xfb, 0x44, 0xae, - 0x92, 0xdb, 0x47, 0xe6, 0xd6, 0x4f, 0x92, 0x86, 0x61, 0xaa, 0x3f, 0x54, 0xe0, 0x78, 0xa2, 0x28, - 0x67, 0xe0, 0x7f, 0x2d, 0x35, 0x80, 0x99, 0x6f, 0xbd, 0x3e, 0xa4, 0x96, 0x4f, 0x71, 0x14, 0x7f, - 0x19, 0x1a, 0x72, 0xf7, 0x48, 0xaa, 0xbd, 0x99, 0x18, 0xca, 0x67, 0x47, 0x0a, 0x1d, 0x8e, 0xe7, - 0x36, 0xac, 0x64, 0x32, 0x4e, 0xeb, 0xbc, 0x38, 0xa6, 0xce, 0xff, 0xb7, 0x10, 0xf7, 0xd9, 0xcd, - 0x20, 0xf0, 0xac, 0xc7, 0x83, 0x00, 0x7f, 0xb2, 0x41, 0xd5, 0x7a, 0x38, 0xb2, 0x99, 0x9f, 0x7d, - 0x35, 0x9b, 0x32, 0xaa, 0x3d, 0x73, 0x8c, 0x6f, 0xcb, 0x63, 0xbc, 0x44, 0x59, 0xbd, 0x36, 0x92, - 0xd5, 0xd0, 0xd1, 0xfe, 0x59, 0x0e, 0xe2, 0xbf, 0x57, 0x60, 0x2e, 0xd1, 0x2b, 0xe8, 0x0e, 0x80, - 0x11, 0x8a, 0xce, 0xed, 0xe3, 0xcc, 0xa8, 0x26, 0x6a, 0x31, 0x1a, 0x32, 0x27, 0xb2, 0x78, 0x31, - 0x63, 0x4e, 0xcc, 0x88, 0x17, 0xc3, 0x70, 0xf1, 0x76, 0xb4, 0xd8, 0x65, 0x9b, 0xa4, 0xea, 0xd0, - 0xc5, 0x2e, 0xa3, 0x15, 0x24, 0xea, 0xaf, 0x17, 0x60, 0x21, 0x8b, 0x3b, 0x7a, 0x19, 0x8a, 0x1d, - 0x77, 0xc0, 0x5b, 0x22, 0x1d, 0x0d, 0xad, 0xb9, 0x83, 0x5d, 0xdf, 0xe8, 0x62, 0x8d, 0x20, 0xa0, - 0x2b, 0x30, 0xd5, 0xc7, 0x7d, 0xc7, 0x3b, 0xe0, 0x72, 0x4b, 0xdb, 0x0d, 0x0f, 0x69, 0x09, 0xc3, - 0xe6, 0x68, 0xe8, 0x5a, 0x14, 0x56, 0x33, 0x79, 0x97, 0xa5, 0xd5, 0x03, 0x2b, 0x62, 0x24, 0x61, - 0x2c, 0x7d, 0x0d, 0xa6, 0x5d, 0xcf, 0xe9, 0x60, 0xdf, 0xe7, 0xbb, 0x21, 0xcb, 0x89, 0xb3, 0x2a, - 0x52, 0xc4, 0x69, 0x38, 0x22, 0xba, 0x05, 0x10, 0x05, 0x50, 0x7c, 0x66, 0x6a, 0xe4, 0xc6, 0x5b, - 0xbe, 0x16, 0xc3, 0x56, 0xbf, 0x5f, 0x80, 0xc5, 0x6c, 0xcd, 0xa1, 0x4b, 0x71, 0xbd, 0xac, 0x64, - 0xa8, 0x5a, 0x56, 0xcf, 0x9b, 0x09, 
0xf5, 0xac, 0x66, 0x50, 0x64, 0x69, 0xe9, 0x66, 0x52, 0x4b, - 0xa7, 0x33, 0x08, 0xb3, 0x95, 0x75, 0x33, 0xa9, 0xac, 0x2c, 0xd2, 0x6c, 0x9d, 0x35, 0x33, 0x74, - 0x76, 0x36, 0xab, 0x8d, 0xf9, 0xaa, 0xfb, 0x1b, 0x05, 0x66, 0xe2, 0x72, 0xc9, 0x21, 0xab, 0x92, - 0x08, 0x59, 0xd1, 0x26, 0xcc, 0x9b, 0x6c, 0xe7, 0x56, 0xb7, 0xec, 0x00, 0x7b, 0x7b, 0x46, 0x47, - 0x44, 0x85, 0x67, 0x33, 0xec, 0xa2, 0x25, 0x70, 0x98, 0xe0, 0x75, 0x4e, 0x1b, 0x82, 0x49, 0x0b, - 0x42, 0x3e, 0xc2, 0x6b, 0x8d, 0xc1, 0x28, 0x46, 0xa4, 0xfe, 0xb3, 0x02, 0xc7, 0x32, 0x14, 0x3c, - 0xa2, 0x21, 0xbb, 0xf9, 0x0d, 0x39, 0x97, 0xdf, 0x75, 0x23, 0xdb, 0x73, 0x3f, 0xa3, 0x3d, 0xe3, - 0xf3, 0x8b, 0x37, 0xeb, 0xe7, 0x0a, 0x1c, 0xcf, 0xc4, 0xca, 0xdc, 0x5e, 0xbd, 0x06, 0x65, 0xef, - 0xb9, 0xfe, 0xf8, 0x20, 0xc0, 0x7e, 0xd6, 0xc0, 0xde, 0x8d, 0x9d, 0xa1, 0x4c, 0x7b, 0xcf, 0xef, - 0x12, 0x3c, 0x74, 0x1d, 0x2a, 0xde, 0x73, 0x1d, 0x7b, 0x9e, 0xe3, 0x09, 0x5f, 0x94, 0x4b, 0x54, - 0xf6, 0x9e, 0x6f, 0x50, 0x44, 0x52, 0x53, 0x20, 0x6a, 0x2a, 0x8d, 0xa8, 0x29, 0x88, 0x6a, 0x0a, - 0xc2, 0x9a, 0x26, 0x47, 0xd4, 0x14, 0xf0, 0x9a, 0xd4, 0x3f, 0x2c, 0xc0, 0xc9, 0x61, 0xea, 0xfa, - 0xc4, 0x14, 0xb1, 0x01, 0xc8, 0x7b, 0xae, 0xbb, 0x46, 0xe7, 0x09, 0x0e, 0x7c, 0xdd, 0xf4, 0x1c, - 0xd7, 0xc5, 0xe6, 0x28, 0x8d, 0xd4, 0xbd, 0xe7, 0x6d, 0x46, 0xb1, 0xce, 0x08, 0x8e, 0xa4, 0x99, - 0x0d, 0x40, 0x41, 0xba, 0xea, 0x11, 0x2a, 0xaa, 0x07, 0x89, 0xaa, 0xd5, 0x0f, 0x61, 0x26, 0xee, - 0x21, 0x46, 0xd8, 0xfe, 0x6d, 0xa8, 0x71, 0x0f, 0xa2, 0x77, 0x9c, 0x81, 0x1d, 0x8c, 0x52, 0xd4, - 0x0c, 0xc7, 0x5e, 0x23, 0xc8, 0xea, 0x37, 0xc2, 0xe1, 0xf6, 0xa9, 0x55, 0xf9, 0x47, 0x0a, 0x54, - 0x5a, 0x7d, 0xa3, 0x8b, 0xb7, 0x5d, 0xdc, 0x21, 0x33, 0xbd, 0x45, 0x3e, 0x78, 0xbf, 0xb3, 0x0f, - 0x74, 0x5f, 0x8e, 0x5a, 0x58, 0x9c, 0xfa, 0xb2, 0x74, 0x8e, 0x28, 0x38, 0x8c, 0x08, 0x55, 0x3e, - 0x6e, 0xbc, 0x71, 0x0d, 0xca, 0x5f, 0xc4, 0x07, 0x6c, 0x45, 0x3e, 0x26, 0x9d, 0xfa, 0x9d, 0x12, - 0x2c, 0xe5, 0x9c, 0xd5, 0xd0, 0xe5, 0x9c, 0x3b, 0xd0, 0x5d, 0xec, 0x59, 
0x8e, 0x29, 0x54, 0xdb, - 0x71, 0x07, 0x6d, 0x0a, 0x40, 0x2b, 0x40, 0x3e, 0xf4, 0x6f, 0x0c, 0x1c, 0x1e, 0x31, 0x16, 0xb5, - 0x72, 0xc7, 0x1d, 0x7c, 0x89, 0x7c, 0x0b, 0x5a, 0x7f, 0xdf, 0xf0, 0x30, 0x1b, 0xe4, 0x8c, 0x76, - 0x9b, 0x02, 0xd0, 0x6b, 0x70, 0x9c, 0x4d, 0x60, 0x7a, 0xcf, 0xea, 0x5b, 0xc4, 0x15, 0xc6, 0xec, - 0xb7, 0xa8, 0x21, 0x56, 0xf8, 0x80, 0x94, 0xb5, 0x6c, 0x66, 0xb1, 0x2a, 0xd4, 0x1c, 0xa7, 0xaf, - 0xfb, 0x1d, 0xc7, 0xc3, 0xba, 0x61, 0x7e, 0x48, 0x8d, 0xb5, 0xa8, 0x55, 0x1d, 0xa7, 0xbf, 0x4d, - 0x60, 0x4d, 0xf3, 0x43, 0x74, 0x1a, 0xaa, 0x1d, 0x77, 0xe0, 0xe3, 0x40, 0x27, 0x7f, 0xe8, 0x8e, - 0x5a, 0x45, 0x03, 0x06, 0x5a, 0x73, 0x07, 0x7e, 0x0c, 0xa1, 0x4f, 0xd6, 0x50, 0xd3, 0x71, 0x84, - 0x87, 0xb8, 0x4f, 0x8f, 0xa4, 0xf7, 0x07, 0x5d, 0xec, 0x1a, 0x5d, 0xcc, 0x44, 0x13, 0xdb, 0x62, - 0xd2, 0x91, 0xf4, 0x7d, 0x8e, 0x42, 0x05, 0xd4, 0x66, 0xf7, 0xe3, 0x9f, 0x3e, 0x7a, 0x1f, 0xa6, - 0x07, 0xb6, 0xb5, 0x67, 0x61, 0x73, 0xb9, 0x42, 0x69, 0xaf, 0x8e, 0x71, 0x32, 0x76, 0x79, 0x97, - 0x91, 0xf0, 0x83, 0x3a, 0xce, 0x00, 0xdd, 0x82, 0x06, 0x57, 0x94, 0xff, 0xcc, 0x70, 0x93, 0xda, - 0x02, 0xaa, 0x82, 0x45, 0x86, 0xb1, 0xfd, 0xcc, 0x70, 0xe3, 0x1a, 0x6b, 0xdc, 0x82, 0x99, 0x38, - 0xd3, 0x43, 0xd9, 0xd2, 0x5d, 0xa8, 0x49, 0x8d, 0x24, 0xbd, 0x4d, 0x95, 0xe2, 0x5b, 0xdf, 0x14, - 0x03, 0xa0, 0x4c, 0x00, 0xdb, 0xd6, 0x37, 0x69, 0x22, 0x01, 0x95, 0x8c, 0xf2, 0x29, 0x69, 0xec, - 0x43, 0x35, 0xa0, 0x26, 0x9d, 0xdd, 0x13, 0xbf, 0x49, 0x0f, 0xe9, 0xb9, 0xdf, 0x24, 0xbf, 0x09, - 0xcc, 0x73, 0x7a, 0x42, 0x02, 0xfa, 0x9b, 0xc0, 0xe8, 0x29, 0x31, 0x3b, 0xf3, 0xa2, 0xbf, 0x69, - 0x15, 0xf8, 0x29, 0x4f, 0xc2, 0xa9, 0x68, 0xec, 0x43, 0xfd, 0x6d, 0x05, 0x60, 0xcd, 0x70, 0x8d, - 0xc7, 0x56, 0xcf, 0x0a, 0x0e, 0xd0, 0x79, 0xa8, 0x1b, 0xa6, 0xa9, 0x77, 0x04, 0xc4, 0xc2, 0x22, - 0x2b, 0x6a, 0xce, 0x30, 0xcd, 0xb5, 0x18, 0x18, 0x5d, 0x84, 0x79, 0xe2, 0xf5, 0x64, 0x5c, 0x96, - 0x26, 0x55, 0x27, 0x05, 0x12, 0xf2, 0x0d, 0x58, 0x26, 0x7c, 0x8d, 0xfe, 0x63, 0x0b, 0xdb, 0x81, - 0x4c, 0xc3, 
0xf2, 0xa7, 0x16, 0x0d, 0xd3, 0x6c, 0xb2, 0xe2, 0x38, 0xa5, 0xfa, 0xd7, 0x53, 0x70, - 0x4a, 0xee, 0xf1, 0x64, 0x3a, 0xc5, 0x2d, 0x98, 0x49, 0xc8, 0x9b, 0x4a, 0x44, 0x88, 0x5a, 0xa8, - 0x49, 0xb8, 0x89, 0x84, 0x81, 0x42, 0x2a, 0x61, 0x20, 0x33, 0x55, 0xa3, 0xf8, 0x09, 0xa5, 0x6a, - 0x94, 0x3e, 0x66, 0xaa, 0xc6, 0xe4, 0x51, 0x53, 0x35, 0x66, 0xc6, 0x4e, 0xd5, 0x78, 0x99, 0x6e, - 0xf5, 0x88, 0x1a, 0xe9, 0x9c, 0xcd, 0x7c, 0x42, 0x2d, 0xe4, 0x6e, 0x8b, 0x54, 0xbd, 0x44, 0x4a, - 0xc7, 0xf4, 0x61, 0x52, 0x3a, 0xca, 0xb9, 0x29, 0x1d, 0x67, 0x60, 0xc6, 0x76, 0x74, 0x1b, 0x3f, - 0xd3, 0x49, 0xb7, 0xf8, 0xcb, 0x55, 0xd6, 0x47, 0xb6, 0xb3, 0x89, 0x9f, 0xb5, 0x09, 0x04, 0x9d, - 0x85, 0x99, 0xbe, 0xe1, 0x3f, 0xc1, 0x26, 0xcd, 0xad, 0xf0, 0x97, 0x6b, 0xd4, 0x9e, 0xaa, 0x0c, - 0xd6, 0x26, 0x20, 0xf4, 0x12, 0x84, 0x72, 0x70, 0xa4, 0x59, 0x8a, 0x54, 0x13, 0x50, 0x86, 0x16, - 0x4b, 0x0f, 0x99, 0x3b, 0x62, 0x7a, 0x48, 0xfd, 0x30, 0xe9, 0x21, 0x97, 0xa0, 0x2e, 0x7e, 0x8b, - 0xfc, 0x10, 0xb6, 0xdd, 0x4f, 0x53, 0x43, 0xe6, 0x44, 0x99, 0xc8, 0x01, 0xc9, 0xcb, 0x26, 0x81, - 0xa1, 0xd9, 0x24, 0x7f, 0xac, 0xf0, 0x85, 0x67, 0x38, 0x80, 0xf8, 0x31, 0xb6, 0x94, 0x81, 0xa0, - 0x1c, 0x25, 0x03, 0x01, 0xed, 0xe4, 0xe6, 0x68, 0x9c, 0xcf, 0xe7, 0x34, 0x2a, 0x4b, 0x43, 0x7d, - 0x18, 0xae, 0x09, 0x3f, 0x89, 0x5c, 0x33, 0xf5, 0x3f, 0x14, 0x38, 0xc5, 0xf9, 0xe5, 0x24, 0x64, - 0x65, 0x58, 0xb9, 0x92, 0x63, 0xe5, 0x1d, 0x0f, 0x9b, 0xd8, 0x0e, 0x2c, 0xa3, 0xa7, 0xfb, 0x2e, - 0xee, 0x88, 0x63, 0xde, 0x08, 0x4c, 0x03, 0x9d, 0xb3, 0x30, 0xc3, 0x72, 0x26, 0xf9, 0xf2, 0x90, - 0xa5, 0x46, 0x56, 0x69, 0xda, 0x24, 0x5f, 0x01, 0x6e, 0x65, 0x79, 0x96, 0x52, 0xee, 0xbe, 0xc2, - 0x48, 0x07, 0xa3, 0x3a, 0xb0, 0x94, 0x73, 0xe0, 0x9e, 0xd9, 0x4d, 0x4a, 0xba, 0x9b, 0x86, 0x2a, - 0x29, 0xdd, 0x4d, 0xdf, 0x51, 0xe0, 0x74, 0x6a, 0x99, 0xfa, 0xd9, 0x6b, 0x56, 0xfd, 0x33, 0x25, - 0xb4, 0x9f, 0xa4, 0xc9, 0xaf, 0xa5, 0x4d, 0xfe, 0xa5, 0x61, 0xab, 0xee, 0x4c, 0xa3, 0x7f, 0x94, - 0x6b, 0xf4, 0x17, 0x87, 0xae, 0xe0, 0x47, 0xe9, 
0xf3, 0x5f, 0x15, 0x38, 0x91, 0x2b, 0x40, 0x22, - 0x1e, 0x54, 0x92, 0xf1, 0x20, 0x8f, 0x25, 0xa3, 0x10, 0x9d, 0xc5, 0x92, 0x34, 0x0a, 0xe7, 0x41, - 0x9b, 0xde, 0x37, 0x9e, 0x5b, 0xfd, 0x41, 0x9f, 0x07, 0x93, 0x84, 0xdd, 0x43, 0x06, 0x39, 0x4a, - 0x34, 0x79, 0x05, 0x16, 0x98, 0xa3, 0xa7, 0x01, 0x4d, 0x44, 0xc1, 0x82, 0xca, 0x79, 0x56, 0x46, - 0x62, 0x1b, 0x4e, 0xa0, 0x36, 0x61, 0x3e, 0x6c, 0xd6, 0xd0, 0x84, 0xa3, 0x58, 0x02, 0x51, 0x41, - 0x4e, 0x20, 0xb2, 0x61, 0x6a, 0x1d, 0x3f, 0xb5, 0x3a, 0xf8, 0x13, 0xc9, 0x5d, 0x3e, 0x03, 0x55, - 0x17, 0x7b, 0x7d, 0xcb, 0xf7, 0xc3, 0x59, 0xbd, 0xa2, 0xc5, 0x41, 0xea, 0x69, 0xa8, 0xac, 0xad, - 0xb7, 0x78, 0x95, 0x19, 0xa2, 0xaa, 0xff, 0x39, 0x05, 0x73, 0x49, 0x1b, 0xbb, 0x99, 0x4a, 0x68, - 0x3a, 0x95, 0xb9, 0x19, 0x96, 0xb1, 0x0b, 0x7c, 0x51, 0xac, 0x8f, 0x0a, 0xe9, 0xd3, 0xfe, 0x70, - 0x0d, 0x24, 0x96, 0x4d, 0xcb, 0x30, 0xdd, 0x71, 0xfa, 0x7d, 0xc3, 0x36, 0x45, 0x06, 0x3a, 0xff, - 0x24, 0x92, 0x1a, 0x5e, 0x97, 0xed, 0xff, 0x56, 0x34, 0xfa, 0x9b, 0x98, 0x00, 0x71, 0x86, 0x96, - 0x4d, 0x53, 0xa2, 0x68, 0x2f, 0x55, 0x34, 0xe0, 0xa0, 0x75, 0xcb, 0x43, 0xe7, 0xa0, 0x84, 0xed, - 0xa7, 0xe2, 0x60, 0x48, 0xda, 0x87, 0x14, 0x6b, 0x22, 0x8d, 0x62, 0xa0, 0xf3, 0x30, 0xd5, 0x27, - 0x66, 0x25, 0x8e, 0xcd, 0xe7, 0x53, 0x99, 0xda, 0x1a, 0x47, 0x40, 0xaf, 0xc2, 0xb4, 0x49, 0xb5, - 0x27, 0x16, 0x01, 0x48, 0x4a, 0xae, 0xa2, 0x45, 0x9a, 0x40, 0x41, 0xef, 0x86, 0x9b, 0xe0, 0x95, - 0xf4, 0xe9, 0x54, 0x42, 0xcd, 0x99, 0xfb, 0xdf, 0x9b, 0xf2, 0x4a, 0x12, 0xd2, 0x5b, 0xe9, 0x49, - 0x2e, 0xc3, 0x0f, 0xba, 0x4e, 0x40, 0xb9, 0xe7, 0x74, 0x99, 0xf5, 0x54, 0xd9, 0xf5, 0x85, 0x9e, - 0xd3, 0xa5, 0xc6, 0xb3, 0x00, 0x93, 0x7e, 0x60, 0x5a, 0x36, 0x8d, 0xa5, 0xca, 0x1a, 0xfb, 0x20, - 0x83, 0x94, 0xfe, 0xd0, 0x1d, 0xbb, 0x83, 0x97, 0x6b, 0xb4, 0xa8, 0x42, 0x21, 0x5b, 0x76, 0x87, - 0xae, 0x29, 0x83, 0xe0, 0x60, 0x79, 0x96, 0xc2, 0xc9, 0xcf, 0x68, 0x2f, 0x7a, 0x2e, 0x67, 0x2f, - 0x3a, 0x21, 0x70, 0xc6, 0x5e, 0x74, 0x3d, 0x77, 0xce, 0x48, 0xd2, 0x0a, 0x12, 0x12, 
0x47, 0xae, - 0xad, 0xb7, 0x74, 0xd1, 0x35, 0xf3, 0xe9, 0xc4, 0xef, 0xd0, 0xec, 0x35, 0x08, 0x7f, 0x7e, 0xa6, - 0x47, 0x01, 0xdf, 0x57, 0x60, 0x71, 0x8d, 0x1e, 0x84, 0xc6, 0x7c, 0xe3, 0x61, 0x72, 0x88, 0x5e, - 0x0f, 0x13, 0xbb, 0x32, 0xb2, 0x73, 0x92, 0x9a, 0x12, 0x79, 0x5d, 0x6b, 0x30, 0x2b, 0xd8, 0x72, - 0xe2, 0xe2, 0x18, 0x59, 0x61, 0x35, 0x3f, 0xfe, 0xa9, 0xde, 0x86, 0xa5, 0x94, 0xe4, 0xfc, 0x38, - 0x2a, 0x79, 0x43, 0x80, 0x09, 0x1e, 0xbf, 0x21, 0xa0, 0xde, 0x82, 0xe3, 0xdb, 0x81, 0xe1, 0x05, - 0xa9, 0x66, 0x8f, 0x41, 0x4b, 0xf3, 0xbd, 0x64, 0x5a, 0x9e, 0x92, 0xb5, 0x0d, 0x0b, 0xdb, 0x81, - 0xe3, 0x1e, 0x81, 0x29, 0xf1, 0x3b, 0xa4, 0xe5, 0xce, 0x40, 0xcc, 0x33, 0xe2, 0x53, 0x5d, 0x62, - 0xd9, 0x69, 0xe9, 0xda, 0xbe, 0x00, 0x8b, 0x2c, 0x39, 0xec, 0x28, 0x8d, 0x38, 0x21, 0x52, 0xd3, - 0xd2, 0x7c, 0xef, 0xc1, 0x31, 0x69, 0x83, 0x9c, 0x27, 0x53, 0x5c, 0x95, 0x93, 0x29, 0xf2, 0xcf, - 0x22, 0xc2, 0x5c, 0x8a, 0xef, 0x16, 0x62, 0x7e, 0x3c, 0xe7, 0x44, 0xf5, 0x0d, 0x39, 0x95, 0xe2, - 0x74, 0x3e, 0x57, 0x29, 0x93, 0x22, 0x6d, 0x9d, 0xc5, 0x0c, 0xeb, 0xdc, 0x4d, 0x1d, 0xd7, 0x96, - 0xd2, 0xa9, 0x30, 0x09, 0x09, 0x3f, 0x95, 0x83, 0xda, 0x07, 0x2c, 0xdd, 0x22, 0xac, 0x3a, 0x3c, - 0xa3, 0x7d, 0x3d, 0x71, 0x46, 0xbb, 0x32, 0x44, 0xd2, 0xf0, 0x74, 0xf6, 0xbb, 0x25, 0xa8, 0x84, - 0x65, 0x29, 0x0d, 0xa7, 0x55, 0x55, 0xc8, 0x50, 0x55, 0x7c, 0x7e, 0x2d, 0x1e, 0x71, 0x7e, 0x2d, - 0x8d, 0x31, 0xbf, 0xae, 0x40, 0x85, 0xfe, 0xa0, 0x19, 0xf2, 0x6c, 0xbe, 0x2c, 0x53, 0x80, 0x86, - 0xf7, 0x22, 0x13, 0x9b, 0x1a, 0xd3, 0xc4, 0x12, 0xa9, 0x1d, 0xd3, 0xc9, 0xd4, 0x8e, 0x9b, 0xe1, - 0xdc, 0x57, 0x4e, 0x1f, 0xa5, 0x84, 0x1c, 0x33, 0x67, 0xbd, 0xc4, 0xfe, 0x69, 0x25, 0xbd, 0x7f, - 0x1a, 0xd1, 0x7f, 0x6e, 0x8f, 0x7a, 0xb7, 0x58, 0xbe, 0x46, 0xdc, 0xce, 0xb8, 0x8f, 0x7c, 0x43, - 0x3a, 0x2a, 0x53, 0x32, 0xe6, 0xaa, 0xd0, 0x2f, 0xc4, 0x8f, 0xc7, 0x76, 0x61, 0x31, 0x99, 0xe7, - 0x75, 0x28, 0x1f, 0x97, 0x93, 0x70, 0xfa, 0x9b, 0xf1, 0x88, 0x2f, 0x27, 0xbb, 0xf2, 0x66, 0x2a, - 0x11, 0x60, 0x6c, 0x0b, 
0xbd, 0x2a, 0xe7, 0x0c, 0x1d, 0xda, 0xae, 0x52, 0x29, 0x43, 0x34, 0x22, - 0x31, 0x3c, 0x5e, 0xcc, 0x82, 0xf3, 0x0a, 0x87, 0x34, 0xe9, 0xca, 0x60, 0xcf, 0xb2, 0x2d, 0x7f, - 0x9f, 0x95, 0x4f, 0xb1, 0x95, 0x81, 0x00, 0x35, 0xe9, 0xae, 0x25, 0x7e, 0x6e, 0x05, 0x7a, 0xc7, - 0x31, 0x31, 0xb5, 0xda, 0x49, 0xad, 0x4c, 0x00, 0x6b, 0x8e, 0x89, 0xa3, 0xf1, 0x54, 0x3e, 0xec, - 0x78, 0xaa, 0x24, 0xc6, 0xd3, 0x22, 0x4c, 0x79, 0xd8, 0xf0, 0x1d, 0x9b, 0x6d, 0x66, 0x68, 0xfc, - 0x8b, 0x74, 0x44, 0x1f, 0xfb, 0x3e, 0xa9, 0x83, 0x07, 0x60, 0xfc, 0x33, 0x16, 0x2c, 0xce, 0x0c, - 0x09, 0x16, 0x87, 0xe4, 0x6e, 0x26, 0x82, 0xc5, 0xda, 0x90, 0x60, 0x71, 0xac, 0xd4, 0xcd, 0x28, - 0x2c, 0x9e, 0x1d, 0x15, 0x16, 0xc7, 0xe3, 0xca, 0x39, 0x39, 0xae, 0xbc, 0x1d, 0x5f, 0xa1, 0xd6, - 0xd3, 0x27, 0xd9, 0xc3, 0x6f, 0x84, 0x7c, 0x86, 0x03, 0xf8, 0x1f, 0x14, 0x58, 0x4a, 0x0d, 0x38, - 0x3e, 0x84, 0x5f, 0x4f, 0x24, 0x85, 0x0e, 0xcd, 0xc6, 0x14, 0x39, 0xa1, 0x4d, 0x29, 0x27, 0xf4, - 0xd2, 0x30, 0x92, 0x9c, 0x94, 0xd0, 0xa3, 0xa7, 0x69, 0x7e, 0x5b, 0x01, 0x94, 0xb1, 0x06, 0xbf, - 0x29, 0xa2, 0xf5, 0x43, 0xec, 0x96, 0xf1, 0x80, 0xfd, 0xdd, 0x28, 0x60, 0x2f, 0x1c, 0x66, 0xdf, - 0x21, 0xcc, 0x1f, 0xf9, 0x49, 0x01, 0x4e, 0xef, 0xba, 0x66, 0x22, 0x8c, 0xe4, 0x58, 0xe3, 0x7b, - 0xb6, 0x9b, 0x72, 0xf2, 0xcb, 0x11, 0x9b, 0x50, 0x3c, 0x4a, 0x13, 0xd0, 0xd7, 0xb3, 0xd2, 0x93, - 0x6e, 0x4b, 0x07, 0x89, 0xc3, 0x1b, 0xf8, 0x0b, 0x3e, 0xfe, 0x53, 0xe1, 0x4c, 0xbe, 0x00, 0x3c, - 0xe4, 0xfc, 0xff, 0x30, 0xb7, 0xf1, 0x1c, 0x77, 0xb6, 0x0f, 0xec, 0xce, 0x21, 0xb4, 0x5e, 0x87, - 0x62, 0xa7, 0x6f, 0xf2, 0xd3, 0x11, 0xf2, 0x33, 0x1e, 0x45, 0x17, 0xe5, 0x28, 0x5a, 0x87, 0x7a, - 0x54, 0x03, 0x1f, 0x40, 0x8b, 0x64, 0x00, 0x99, 0x04, 0x99, 0x30, 0x9f, 0xd1, 0xf8, 0x17, 0x87, - 0x63, 0x8f, 0x5d, 0x37, 0x61, 0x70, 0xec, 0x79, 0xb2, 0xd7, 0x2e, 0xca, 0x5e, 0x5b, 0xfd, 0x9e, - 0x02, 0x55, 0x52, 0xc3, 0xc7, 0x92, 0x9f, 0x2f, 0x65, 0x8b, 0xd1, 0x52, 0x36, 0x5c, 0x11, 0x97, - 0xe2, 0x2b, 0xe2, 0x48, 0xf2, 0x49, 0x0a, 0x4e, 0x4b, 0x3e, 
0x15, 0xc2, 0xb1, 0xe7, 0xa9, 0x67, - 0x60, 0x86, 0xc9, 0xc6, 0x5b, 0x5e, 0x87, 0xe2, 0xc0, 0xeb, 0x89, 0xfe, 0x1b, 0x78, 0x3d, 0xf5, - 0x5b, 0x0a, 0xd4, 0x9a, 0x41, 0x60, 0x74, 0xf6, 0x0f, 0xd1, 0x80, 0x50, 0xb8, 0x42, 0x5c, 0xb8, - 0x74, 0x23, 0x22, 0x71, 0x4b, 0x39, 0xe2, 0x4e, 0x4a, 0xe2, 0xaa, 0x30, 0x2b, 0x64, 0xc9, 0x15, - 0x78, 0x13, 0x50, 0xdb, 0xf1, 0x82, 0xf7, 0x1c, 0xef, 0x99, 0xe1, 0x99, 0x87, 0x5b, 0xb5, 0x22, - 0x28, 0xf1, 0x07, 0x00, 0x8a, 0xe7, 0x26, 0x35, 0xfa, 0x5b, 0x7d, 0x05, 0x8e, 0x49, 0xfc, 0x72, - 0x2b, 0xbe, 0x05, 0x55, 0x3a, 0x0b, 0xf3, 0x05, 0xcd, 0xc5, 0xf8, 0xe9, 0xfb, 0x88, 0xd9, 0x5a, - 0x5d, 0x87, 0x79, 0x12, 0x8f, 0x51, 0x78, 0xe8, 0x5f, 0xae, 0x24, 0x62, 0xfe, 0xa5, 0x14, 0x8b, - 0x44, 0xbc, 0xff, 0x33, 0x05, 0x26, 0x29, 0x3c, 0x15, 0x23, 0xad, 0x90, 0x79, 0xce, 0x75, 0xf4, - 0xc0, 0xe8, 0x86, 0x8f, 0x2b, 0x10, 0xc0, 0x8e, 0xd1, 0xa5, 0x27, 0x3a, 0xb4, 0xd0, 0xb4, 0xba, - 0xd8, 0x0f, 0xc4, 0x09, 0x61, 0x95, 0xc0, 0xd6, 0x19, 0x88, 0x28, 0x86, 0x1e, 0xa4, 0x96, 0xe8, - 0x79, 0x29, 0xfd, 0x8d, 0xce, 0xb1, 0x9b, 0x8a, 0xc3, 0x8f, 0xc5, 0xe8, 0x0d, 0xc6, 0x06, 0x94, - 0x13, 0xe7, 0x59, 0xe1, 0x37, 0x3a, 0x0f, 0x25, 0xba, 0xff, 0x3c, 0x3d, 0x4c, 0x4b, 0x14, 0x85, - 0x58, 0x85, 0x6b, 0xd9, 0x36, 0x36, 0x69, 0x00, 0x54, 0xd6, 0xf8, 0x97, 0xfa, 0x2e, 0xa0, 0xb8, - 0xf2, 0x78, 0x07, 0x9d, 0x87, 0x29, 0xaa, 0x5b, 0x11, 0xc4, 0xce, 0xa7, 0x58, 0x6b, 0x1c, 0x41, - 0xfd, 0x1a, 0x20, 0x56, 0x97, 0x14, 0xb8, 0x1e, 0xa6, 0x03, 0x87, 0x84, 0xb0, 0x7f, 0xae, 0xc0, - 0x31, 0x89, 0x3b, 0x97, 0xef, 0x15, 0x99, 0x7d, 0x86, 0x78, 0x9c, 0xf5, 0xdb, 0xd2, 0xcc, 0x7c, - 0x3e, 0x2d, 0xc6, 0x2f, 0x68, 0x56, 0xfe, 0x47, 0x05, 0xa0, 0x39, 0x08, 0xf6, 0xf9, 0x46, 0x6b, - 0xbc, 0x13, 0x95, 0x44, 0x27, 0x36, 0xa0, 0xec, 0x1a, 0xbe, 0xff, 0xcc, 0xf1, 0xc4, 0x22, 0x32, - 0xfc, 0xa6, 0xdb, 0xa3, 0x03, 0xfe, 0xe2, 0x42, 0x45, 0xa3, 0xbf, 0xd1, 0x4b, 0x30, 0xcb, 0x5e, - 0xfd, 0xd0, 0x0d, 0xd3, 0xf4, 0x44, 0x46, 0x5f, 0x45, 0xab, 0x31, 0x68, 0x93, 0x01, 0x09, 0x9a, - 
0x45, 0x4f, 0x23, 0x82, 0x03, 0x3d, 0x70, 0x9e, 0x60, 0x9b, 0x2f, 0x0c, 0x6b, 0x02, 0xba, 0x43, - 0x80, 0xec, 0xb8, 0xb1, 0x6b, 0xf9, 0x81, 0x27, 0xd0, 0xc4, 0xa1, 0x29, 0x87, 0x52, 0x34, 0xf5, - 0x4f, 0x14, 0xa8, 0xb7, 0x07, 0xbd, 0x1e, 0x53, 0xee, 0x51, 0x3a, 0xf9, 0x02, 0x6f, 0x4a, 0x21, - 0x6d, 0xf2, 0x91, 0xa2, 0x78, 0x13, 0x3f, 0x91, 0xbd, 0xac, 0xab, 0x30, 0x1f, 0x93, 0x98, 0x1b, - 0x8e, 0x14, 0xd9, 0x2b, 0x72, 0x64, 0xaf, 0x36, 0x01, 0xb1, 0xed, 0x9b, 0x23, 0xb7, 0x52, 0x3d, - 0x0e, 0xc7, 0x24, 0x16, 0x7c, 0x2a, 0xbe, 0x00, 0x35, 0x9e, 0x5d, 0xc6, 0x0d, 0xe2, 0x04, 0x94, - 0x89, 0x4b, 0xed, 0x58, 0xa6, 0xc8, 0x90, 0x98, 0x76, 0x1d, 0x73, 0xcd, 0x32, 0x3d, 0xf5, 0x4b, - 0x50, 0xe3, 0xd7, 0xd7, 0x39, 0xee, 0x1d, 0x98, 0xe5, 0xe7, 0x83, 0xba, 0x74, 0xdf, 0xf3, 0x44, - 0x46, 0x0a, 0xa3, 0x50, 0x85, 0x1d, 0xff, 0x54, 0xbf, 0x0e, 0x0d, 0x16, 0x2d, 0x48, 0x8c, 0x45, - 0x03, 0xef, 0x80, 0xb8, 0x0c, 0x31, 0x84, 0xbf, 0x4c, 0x59, 0xf3, 0xe2, 0x9f, 0xea, 0x29, 0x58, - 0xc9, 0xe4, 0xcf, 0x5b, 0xef, 0x42, 0x3d, 0x2a, 0x60, 0x97, 0x12, 0xc3, 0xb4, 0x0f, 0x25, 0x96, - 0xf6, 0xb1, 0x18, 0xc6, 0xde, 0x05, 0x31, 0x73, 0xd1, 0xf0, 0x3a, 0x5a, 0x71, 0x15, 0xf3, 0x56, - 0x5c, 0x25, 0x69, 0xc5, 0xa5, 0x3e, 0x0c, 0x75, 0xc8, 0xd7, 0xbd, 0xb7, 0xe9, 0xca, 0x9c, 0xd5, - 0x2d, 0x9c, 0xda, 0xc9, 0xec, 0xf6, 0x31, 0x24, 0x2d, 0x86, 0xaf, 0x9e, 0x87, 0x9a, 0xec, 0xde, - 0x62, 0x1e, 0x4b, 0x49, 0x79, 0xac, 0xd9, 0x84, 0xb3, 0x7a, 0x2d, 0xb1, 0xa4, 0xc8, 0xd2, 0x6b, - 0x62, 0x41, 0x71, 0x43, 0x72, 0x5b, 0x2f, 0x4a, 0x47, 0xf4, 0xbf, 0x20, 0x8f, 0xb5, 0xc0, 0xfd, - 0xf8, 0x7b, 0x3e, 0xa1, 0xe7, 0x0d, 0x55, 0x5f, 0x80, 0xea, 0x6e, 0xde, 0x23, 0x22, 0x25, 0x91, - 0x57, 0xf6, 0x26, 0x2c, 0xbc, 0x67, 0xf5, 0xb0, 0x7f, 0xe0, 0x07, 0xb8, 0xdf, 0xa2, 0xee, 0x65, - 0xcf, 0xc2, 0x1e, 0x5a, 0x05, 0xa0, 0xab, 0x48, 0xd7, 0xb1, 0xc2, 0x87, 0x13, 0x62, 0x10, 0xf5, - 0xc7, 0x0a, 0xcc, 0x45, 0x84, 0xe3, 0x64, 0xf8, 0xbd, 0x01, 0x93, 0x7b, 0xbe, 0xd8, 0x6d, 0x4b, - 0x9c, 0x41, 0x64, 0x89, 0xa0, 0x95, 
0xf6, 0xfc, 0x96, 0x89, 0xde, 0x04, 0x18, 0xf8, 0xd8, 0xe4, - 0xc7, 0x7e, 0x23, 0x72, 0x2e, 0x2b, 0x04, 0x95, 0x1d, 0x1c, 0xde, 0x80, 0xaa, 0x65, 0x3b, 0x26, - 0xa6, 0x47, 0xc2, 0xe6, 0xa8, 0x7c, 0x4b, 0x60, 0xb8, 0xbb, 0x3e, 0x36, 0xd5, 0xdf, 0x8b, 0x0e, - 0x76, 0x3f, 0xcf, 0x2d, 0x54, 0x75, 0x3e, 0xbf, 0x8a, 0x5e, 0xe7, 0x26, 0x7b, 0x1f, 0xe6, 0x99, - 0x9b, 0xdc, 0x0b, 0xab, 0xcc, 0xbc, 0x87, 0x92, 0x68, 0x9b, 0x56, 0xb7, 0x78, 0x64, 0x25, 0x88, - 0xd4, 0x5b, 0x70, 0x3c, 0x91, 0x18, 0x3e, 0xfe, 0x76, 0xfa, 0xfb, 0x89, 0x7d, 0xb1, 0x68, 0x48, - 0x5d, 0x95, 0xef, 0x23, 0x0d, 0x4b, 0xe1, 0xe7, 0x57, 0x63, 0x76, 0xe1, 0x84, 0xb4, 0x69, 0x27, - 0xc9, 0x72, 0x23, 0x11, 0x2c, 0x9e, 0xc9, 0xe7, 0x97, 0x88, 0x1a, 0xff, 0x4b, 0x81, 0x85, 0x2c, - 0x84, 0x23, 0x6e, 0x18, 0x7f, 0x35, 0xe7, 0x2e, 0xe3, 0xeb, 0xa3, 0x04, 0xfa, 0x54, 0x36, 0xd8, - 0x37, 0xd9, 0x4d, 0xa8, 0xd1, 0x7d, 0x52, 0x1c, 0xaf, 0x4f, 0x7e, 0x56, 0x88, 0x1d, 0x8a, 0x0c, - 0xb9, 0xad, 0xf4, 0x31, 0x36, 0x29, 0xd7, 0x12, 0x97, 0x95, 0x2e, 0x66, 0x12, 0x8e, 0xb8, 0xab, - 0xa4, 0x65, 0x6d, 0x06, 0x5c, 0x1d, 0xc5, 0xe9, 0x73, 0xbb, 0x7f, 0xfd, 0xdf, 0x0a, 0xcc, 0xca, - 0x1d, 0x82, 0xde, 0xcd, 0xb8, 0xa9, 0x74, 0x7a, 0x44, 0x03, 0xa5, 0x8b, 0x4a, 0xfc, 0x66, 0x50, - 0x61, 0xfc, 0x9b, 0x41, 0xc5, 0xf1, 0x6e, 0x06, 0xdd, 0x85, 0xd9, 0x67, 0x9e, 0x15, 0x18, 0x8f, - 0x7b, 0x58, 0xef, 0x19, 0x07, 0xd8, 0xe3, 0x5e, 0x78, 0xa8, 0x1b, 0xaa, 0x09, 0x92, 0x07, 0x84, - 0x42, 0xfd, 0x56, 0x01, 0x8e, 0x67, 0x5e, 0x52, 0xf9, 0xf8, 0xed, 0xbe, 0x14, 0x6f, 0xf7, 0x61, - 0x6e, 0xfe, 0x14, 0x0f, 0x75, 0xf3, 0xa7, 0x95, 0xa3, 0x85, 0xac, 0xa3, 0xf4, 0x11, 0xca, 0xf8, - 0x4b, 0x05, 0xca, 0x42, 0xa8, 0x91, 0xf7, 0x70, 0x96, 0x06, 0x04, 0x4d, 0xa7, 0x69, 0xd8, 0xb6, - 0x61, 0x3b, 0xba, 0x8f, 0x49, 0x58, 0x34, 0xf2, 0xd6, 0xc3, 0x02, 0xa5, 0x5b, 0x73, 0x3c, 0xbc, - 0x69, 0xd8, 0xce, 0x36, 0x23, 0x42, 0x4d, 0xa8, 0x33, 0x7e, 0x94, 0x15, 0x61, 0x3a, 0x72, 0xaa, - 0x9a, 0xa5, 0x04, 0x84, 0x09, 0x61, 0xe6, 0xab, 0x3f, 0x50, 0x60, 0x2e, 
0xa1, 0xd9, 0x5f, 0xbe, - 0x46, 0xfc, 0x6e, 0x11, 0xaa, 0xb1, 0x5e, 0x1e, 0xd1, 0x80, 0x35, 0x98, 0x17, 0xe9, 0x30, 0x3e, - 0x0e, 0xc6, 0xbb, 0x75, 0x32, 0xc7, 0x29, 0xb6, 0x71, 0xc0, 0x22, 0x99, 0x3b, 0x30, 0x67, 0x3c, - 0x35, 0xac, 0x1e, 0xb5, 0xa0, 0xb1, 0x82, 0x84, 0xd9, 0x10, 0x3f, 0x8c, 0x85, 0x58, 0xbb, 0xc7, - 0xba, 0x7b, 0x02, 0x14, 0x37, 0xba, 0x02, 0xe4, 0xfb, 0xb1, 0x9c, 0xab, 0xa1, 0x57, 0x80, 0x7c, - 0x3f, 0xac, 0x8f, 0xe6, 0xa0, 0xd3, 0xbb, 0x4f, 0x3e, 0x7f, 0x30, 0x23, 0xbf, 0x3e, 0x82, 0xfb, - 0x1e, 0x45, 0x25, 0x0a, 0xeb, 0x1b, 0x1f, 0x3a, 0x9e, 0x1e, 0xa7, 0x9f, 0x1e, 0xa1, 0x30, 0x4a, - 0xd1, 0x0e, 0x99, 0xa8, 0xff, 0xa3, 0x00, 0x4a, 0x0f, 0xc8, 0x5f, 0x9a, 0xae, 0x8a, 0x37, 0xbd, - 0x34, 0xb6, 0xea, 0xd4, 0x77, 0xe0, 0x84, 0x86, 0x1d, 0x17, 0xdb, 0xa1, 0xdf, 0x7b, 0xe0, 0x74, - 0x0f, 0x11, 0xb1, 0x9d, 0x84, 0x46, 0x16, 0x3d, 0x5f, 0x07, 0x0e, 0xa0, 0xb1, 0xb6, 0x8f, 0x3b, - 0x4f, 0x68, 0xf4, 0x7f, 0x94, 0x7c, 0x8e, 0x06, 0x94, 0x7b, 0x4e, 0x87, 0x3d, 0xbd, 0xc9, 0xb7, - 0x4a, 0xc4, 0xf7, 0x90, 0x5d, 0xea, 0x53, 0xb0, 0x92, 0x59, 0x2d, 0x97, 0x0a, 0x41, 0xfd, 0x1e, - 0x0e, 0x36, 0x9e, 0x62, 0x3b, 0x0c, 0x08, 0xd5, 0x1f, 0x16, 0x62, 0xa1, 0x27, 0x2d, 0x3a, 0x44, - 0x1e, 0x0c, 0x6a, 0xc3, 0x42, 0x84, 0x82, 0x09, 0x35, 0x7b, 0x08, 0x8f, 0x3d, 0x21, 0x99, 0x7d, - 0x46, 0x46, 0x2b, 0xa1, 0xef, 0xdf, 0x45, 0x4f, 0x7c, 0x84, 0xb0, 0xc4, 0xc9, 0x69, 0x31, 0x79, - 0x72, 0xfa, 0x3e, 0xa0, 0x78, 0x70, 0xc9, 0x57, 0x9b, 0xa5, 0x31, 0x5e, 0x35, 0xa9, 0xbb, 0xc9, - 0xf7, 0x77, 0x72, 0xde, 0x26, 0x99, 0x3c, 0xd2, 0xdb, 0x24, 0xea, 0x2a, 0x9c, 0x24, 0x21, 0xe3, - 0x43, 0x1c, 0x78, 0x56, 0x67, 0x1d, 0xfb, 0x1d, 0xcf, 0x72, 0x03, 0x27, 0x4c, 0xcd, 0x50, 0x75, - 0x38, 0x95, 0x53, 0xce, 0xd5, 0xfd, 0x0e, 0x54, 0xcd, 0x08, 0x9c, 0xb5, 0x72, 0x4f, 0xd2, 0x6a, - 0x71, 0x02, 0xf5, 0x03, 0xa8, 0x27, 0x11, 0x32, 0x33, 0x39, 0x11, 0x94, 0xf6, 0x71, 0xcf, 0x15, - 0x57, 0x53, 0xc8, 0x6f, 0xa2, 0x75, 0x16, 0x8d, 0x3f, 0xc1, 0x07, 0x62, 0x67, 0xb7, 0x42, 0x21, - 0x5f, 0xc4, 
0x07, 0x61, 0xdb, 0xa4, 0xcb, 0xf2, 0x9e, 0xd5, 0x49, 0xb6, 0x2d, 0xa3, 0x3c, 0x6a, - 0x1b, 0xe9, 0xb6, 0x3e, 0x03, 0xf3, 0xb6, 0x9d, 0xca, 0xbd, 0x88, 0x4f, 0x69, 0xc1, 0x75, 0x4c, - 0xfe, 0x5b, 0xfd, 0x53, 0x05, 0xe6, 0x53, 0x18, 0x63, 0xee, 0xd6, 0xbf, 0x0a, 0xd3, 0xa2, 0xde, - 0x42, 0x3a, 0xdd, 0x91, 0xf1, 0xd2, 0x04, 0x0a, 0x6a, 0xc1, 0x7c, 0x64, 0xd1, 0x82, 0xae, 0x98, - 0xee, 0x8b, 0x78, 0x28, 0x4e, 0xc5, 0xad, 0x77, 0x12, 0x10, 0xb5, 0x03, 0xf5, 0x24, 0xd6, 0x38, - 0x63, 0xea, 0x50, 0xf2, 0xaa, 0x7f, 0xa7, 0xc0, 0x14, 0x83, 0x65, 0x76, 0xb6, 0xe4, 0xc5, 0x0b, - 0x49, 0x2f, 0xfe, 0x16, 0x54, 0x19, 0x1f, 0x3d, 0xbc, 0x98, 0x34, 0x2b, 0x6f, 0x58, 0x32, 0xd6, - 0x74, 0xb4, 0x42, 0x3f, 0xfc, 0x4d, 0x9a, 0xc1, 0xec, 0x85, 0xc6, 0xda, 0x22, 0xa9, 0xb5, 0x4a, - 0x61, 0xd4, 0xd7, 0x92, 0x78, 0x91, 0x47, 0xe5, 0x23, 0xe6, 0x41, 0x86, 0x75, 0xe1, 0x65, 0x28, - 0x8b, 0x47, 0x97, 0xd1, 0x34, 0x14, 0x77, 0xd6, 0xda, 0xf5, 0x09, 0xf2, 0x63, 0x77, 0xbd, 0x5d, - 0x57, 0x50, 0x19, 0x4a, 0xdb, 0x6b, 0x3b, 0xed, 0x7a, 0xe1, 0x42, 0x1f, 0xea, 0xc9, 0x77, 0x87, - 0xd1, 0x12, 0x1c, 0x6b, 0x6b, 0x5b, 0xed, 0xe6, 0xbd, 0xe6, 0x4e, 0x6b, 0x6b, 0x53, 0x6f, 0x6b, - 0xad, 0x47, 0xcd, 0x9d, 0x8d, 0xfa, 0x04, 0x3a, 0x0b, 0xa7, 0xe2, 0x05, 0xf7, 0xb7, 0xb6, 0x77, - 0xf4, 0x9d, 0x2d, 0x7d, 0x6d, 0x6b, 0x73, 0xa7, 0xd9, 0xda, 0xdc, 0xd0, 0xea, 0x0a, 0x3a, 0x05, - 0x27, 0xe2, 0x28, 0x77, 0x5b, 0xeb, 0x2d, 0x6d, 0x63, 0x8d, 0xfc, 0x6e, 0x3e, 0xa8, 0x17, 0x2e, - 0xbc, 0x0d, 0x35, 0xe9, 0x02, 0x03, 0x11, 0xa9, 0xbd, 0xb5, 0x5e, 0x9f, 0x40, 0x35, 0xa8, 0xc4, - 0xf9, 0x94, 0xa1, 0xb4, 0xb9, 0xb5, 0xbe, 0x51, 0x2f, 0x20, 0x80, 0xa9, 0x9d, 0xa6, 0x76, 0x6f, - 0x63, 0xa7, 0x5e, 0xbc, 0x70, 0x2b, 0xf9, 0x54, 0x02, 0x46, 0xf3, 0x50, 0xdb, 0x6e, 0x6e, 0xae, - 0xdf, 0xdd, 0xfa, 0x8a, 0xae, 0x6d, 0x34, 0xd7, 0x3f, 0xa8, 0x4f, 0xa0, 0x05, 0xa8, 0x0b, 0xd0, - 0xe6, 0xd6, 0x0e, 0x83, 0x2a, 0x17, 0x9e, 0x24, 0xd6, 0x2e, 0x18, 0x1d, 0x87, 0xf9, 0xb0, 0x4a, - 0x7d, 0x4d, 0xdb, 0x68, 0xee, 0x6c, 0x10, 0x49, 
0x24, 0xb0, 0xb6, 0xbb, 0xb9, 0xd9, 0xda, 0xbc, - 0x57, 0x57, 0x08, 0xd7, 0x08, 0xbc, 0xf1, 0x95, 0x16, 0x41, 0x2e, 0xc8, 0xc8, 0xbb, 0x9b, 0x5f, - 0xdc, 0xdc, 0xfa, 0xf2, 0x66, 0xbd, 0x78, 0xe1, 0x57, 0xe3, 0x67, 0xeb, 0x91, 0x37, 0x5e, 0x81, - 0xa5, 0x54, 0x8d, 0xfa, 0xc6, 0xa3, 0x8d, 0xcd, 0x9d, 0xfa, 0x84, 0x5c, 0xb8, 0xbd, 0xd3, 0xd4, - 0xa2, 0x42, 0x25, 0x59, 0xb8, 0xd5, 0x6e, 0x87, 0x85, 0x05, 0xb9, 0x70, 0x7d, 0xe3, 0xc1, 0x46, - 0x44, 0x59, 0xbc, 0xf0, 0x22, 0x40, 0x64, 0x75, 0xa8, 0x0a, 0xd3, 0x6b, 0x5b, 0xbb, 0x9b, 0x3b, - 0x1b, 0x5a, 0x7d, 0x02, 0x55, 0x60, 0xf2, 0x5e, 0x73, 0xf7, 0xde, 0x46, 0x5d, 0xb9, 0xf6, 0xfb, - 0x0b, 0xe1, 0xdb, 0xa7, 0xdb, 0xd8, 0xa3, 0xb9, 0xe0, 0xeb, 0x30, 0x2d, 0xde, 0x1e, 0x97, 0x96, - 0xe4, 0xf2, 0x5b, 0xe9, 0x8d, 0x95, 0xcc, 0x32, 0x3e, 0x45, 0x4e, 0xa0, 0x47, 0x74, 0x43, 0x35, - 0xf6, 0xfa, 0xd0, 0x99, 0xc4, 0x26, 0x66, 0xea, 0x91, 0xa3, 0xc6, 0xd9, 0x21, 0x18, 0x21, 0xdf, - 0x0f, 0x60, 0x56, 0x7e, 0xe6, 0x0f, 0x9d, 0x95, 0x37, 0x3b, 0x33, 0x5e, 0x10, 0x6c, 0xa8, 0xc3, - 0x50, 0x42, 0xd6, 0x3a, 0xd4, 0x93, 0xcf, 0xfc, 0x21, 0x29, 0x87, 0x20, 0xe7, 0x15, 0xc1, 0xc6, - 0x8b, 0xc3, 0x91, 0xe2, 0x15, 0xa4, 0x5e, 0xaf, 0x7b, 0x61, 0xf8, 0x7b, 0x60, 0x19, 0x15, 0xe4, - 0x3d, 0x1a, 0xc6, 0x94, 0x23, 0x4f, 0x20, 0x28, 0xf1, 0x60, 0x5c, 0xc6, 0xdb, 0x52, 0xb2, 0x72, - 0xb2, 0xdf, 0x15, 0x52, 0x27, 0xd0, 0xff, 0x83, 0xb9, 0x44, 0xa2, 0x2f, 0x92, 0x08, 0xb3, 0xf3, - 0x97, 0x1b, 0x2f, 0x0c, 0xc5, 0x91, 0x7b, 0x35, 0x9e, 0xcc, 0x9b, 0xec, 0xd5, 0x8c, 0x24, 0xe1, - 0x64, 0xaf, 0x66, 0xe6, 0x02, 0x53, 0x43, 0x94, 0x12, 0x77, 0x65, 0x43, 0xcc, 0x4a, 0x14, 0x6e, - 0x9c, 0x1d, 0x82, 0x11, 0x57, 0x48, 0x22, 0x75, 0x57, 0x56, 0x48, 0x76, 0x52, 0x70, 0xe3, 0x85, - 0xa1, 0x38, 0xc9, 0x9e, 0x8c, 0x52, 0x06, 0xd3, 0x3d, 0x99, 0x4a, 0x5b, 0x4d, 0xf7, 0x64, 0x3a, - 0xe3, 0x90, 0xf7, 0x64, 0x22, 0xc9, 0x4f, 0x1d, 0x9a, 0x80, 0x94, 0xd5, 0x93, 0xd9, 0x49, 0x4a, - 0xea, 0x04, 0x7a, 0x06, 0xcb, 0x79, 0x79, 0x26, 0xe8, 0xe2, 0x21, 0xd2, 0x61, 0x1a, 
0xaf, 0x8e, - 0x87, 0x1c, 0x56, 0x8c, 0x01, 0xa5, 0x57, 0x12, 0xe8, 0x25, 0x59, 0xdd, 0x39, 0x2b, 0x95, 0xc6, - 0xcb, 0xa3, 0xd0, 0xc2, 0x6a, 0xee, 0x41, 0x59, 0x64, 0xb0, 0x20, 0xc9, 0x05, 0x26, 0x32, 0x67, - 0x1a, 0x27, 0xb3, 0x0b, 0x43, 0x46, 0x5f, 0x80, 0x12, 0x81, 0xa2, 0xa5, 0x24, 0x9e, 0x60, 0xb0, - 0x9c, 0x2e, 0x08, 0x89, 0x9b, 0x30, 0xc5, 0x52, 0x33, 0x90, 0x74, 0x36, 0x24, 0xa5, 0x8e, 0x34, - 0x1a, 0x59, 0x45, 0x21, 0x8b, 0x36, 0xfb, 0x4f, 0x0e, 0x3c, 0xd3, 0x02, 0xad, 0x26, 0x1f, 0xf8, - 0x95, 0x53, 0x3a, 0x1a, 0xa7, 0x73, 0xcb, 0xe3, 0x36, 0x9b, 0xd8, 0x2d, 0x3b, 0x3b, 0x64, 0x4b, - 0x37, 0xcb, 0x66, 0xb3, 0x37, 0x8a, 0x59, 0xe7, 0xa6, 0x37, 0x92, 0xe5, 0xce, 0xcd, 0xdd, 0xac, - 0x97, 0x3b, 0x37, 0x7f, 0x3f, 0x9a, 0x0d, 0x8d, 0xe4, 0x4b, 0x3d, 0xea, 0xb0, 0x57, 0xb4, 0xb2, - 0x86, 0x46, 0xce, 0xeb, 0x5c, 0xea, 0x04, 0xda, 0x87, 0x63, 0x19, 0xcf, 0x77, 0xa1, 0x97, 0xf3, - 0xfd, 0xaf, 0x54, 0xcb, 0x2b, 0x23, 0xf1, 0xe2, 0x35, 0x65, 0x1c, 0xaf, 0xca, 0x35, 0xe5, 0x9f, - 0xef, 0xca, 0x35, 0x0d, 0x3b, 0xa7, 0xa5, 0x86, 0xc8, 0x7d, 0xc8, 0x89, 0xac, 0x33, 0xc7, 0x0c, - 0x43, 0x4c, 0x79, 0x8c, 0x7d, 0x38, 0x96, 0xb1, 0xda, 0x96, 0x85, 0xcd, 0xdf, 0x05, 0x90, 0x85, - 0x1d, 0xb6, 0x6c, 0x9f, 0x40, 0x5f, 0x05, 0x74, 0x0f, 0x07, 0x72, 0x7c, 0xe6, 0x23, 0x69, 0xa0, - 0x26, 0x17, 0xf6, 0x39, 0xf6, 0x29, 0xad, 0xf0, 0xd5, 0x89, 0xab, 0x0a, 0xb2, 0xd9, 0x5d, 0x82, - 0xd4, 0xba, 0x14, 0x9d, 0x4b, 0x76, 0x5b, 0xde, 0xd2, 0xb6, 0x71, 0x7e, 0x0c, 0xcc, 0xb0, 0x2d, - 0x76, 0xf2, 0xa9, 0x48, 0xb1, 0x34, 0x3a, 0x97, 0x6f, 0x26, 0xf2, 0x72, 0x33, 0x5d, 0x5f, 0xee, - 0xc2, 0x53, 0x9d, 0xb8, 0xf6, 0x3b, 0x45, 0x98, 0x61, 0xc9, 0x0b, 0x3c, 0x4c, 0x7c, 0x08, 0x10, - 0xe5, 0x01, 0xa1, 0x53, 0x49, 0x5e, 0x52, 0x72, 0x55, 0x63, 0x35, 0xaf, 0x38, 0xee, 0x8e, 0x62, - 0xf9, 0x35, 0xb2, 0x3b, 0x4a, 0xa7, 0x0b, 0xc9, 0xee, 0x28, 0x23, 0x31, 0x47, 0x9d, 0x40, 0xef, - 0x43, 0x25, 0x4c, 0xe7, 0x90, 0x3b, 0x39, 0x99, 0x97, 0xd2, 0x38, 0x95, 0x53, 0x1a, 0x97, 0x2e, - 0x96, 0xa5, 0x21, 0x4b, 
0x97, 0xce, 0x00, 0x91, 0xa5, 0xcb, 0x4a, 0xef, 0x88, 0xda, 0xcb, 0xce, - 0x51, 0x33, 0xda, 0x2b, 0x1d, 0xab, 0x67, 0xb4, 0x57, 0x3e, 0x80, 0x55, 0x27, 0xee, 0xde, 0xf9, - 0xd1, 0x4f, 0x57, 0x95, 0x1f, 0xff, 0x74, 0x75, 0xe2, 0x57, 0x3e, 0x5a, 0x55, 0x7e, 0xf4, 0xd1, - 0xaa, 0xf2, 0x4f, 0x1f, 0xad, 0x2a, 0x3f, 0xf9, 0x68, 0x55, 0xf9, 0xf6, 0xbf, 0xaf, 0x4e, 0x7c, - 0x55, 0x7d, 0x72, 0xc3, 0xbf, 0x6c, 0x39, 0x57, 0x3a, 0x9e, 0x75, 0xc9, 0x70, 0xad, 0x2b, 0xee, - 0x93, 0xee, 0x15, 0xc3, 0xb5, 0xfc, 0x2b, 0x9c, 0xef, 0x95, 0xa7, 0xaf, 0x3d, 0x9e, 0xa2, 0xff, - 0xa5, 0xe7, 0xf5, 0xff, 0x0b, 0x00, 0x00, 0xff, 0xff, 0xdc, 0x7a, 0xda, 0x2e, 0x5f, 0x69, 0x00, - 0x00, + // 6791 bytes of a gzipped FileDescriptorProto + 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xd4, 0x3d, 0x4b, 0x6c, 0x1c, 0xc9, + 0x75, 0xec, 0x99, 0x21, 0x39, 0xf3, 0x86, 0x43, 0x0e, 0x4b, 0x14, 0x49, 0x8d, 0xfe, 0xbd, 0x3f, + 0x49, 0xbb, 0xfa, 0xac, 0xf6, 0x27, 0xc9, 0xfb, 0xd1, 0x88, 0xe4, 0x6a, 0x67, 0x2d, 0x91, 0xe3, + 0x1e, 0x72, 0xed, 0x5d, 0x07, 0xee, 0xb4, 0xa6, 0x8b, 0x64, 0xaf, 0x66, 0xba, 0xdb, 0xdd, 0x3d, + 0x92, 0xe8, 0x53, 0x8e, 0x89, 0x4f, 0x06, 0x12, 0xc7, 0x88, 0x11, 0x24, 0xc8, 0x21, 0x48, 0x6e, + 0x09, 0x02, 0x24, 0x71, 0x90, 0x1f, 0x60, 0x24, 0x86, 0x13, 0x20, 0x40, 0x0e, 0x09, 0xe0, 0x43, + 0x82, 0xd8, 0x9b, 0x00, 0x01, 0x72, 0xf6, 0x21, 0xa7, 0x38, 0xa8, 0x5f, 0x77, 0x57, 0xff, 0x66, + 0xc8, 0x5d, 0xef, 0xae, 0x4f, 0x9c, 0x7e, 0xf5, 0xde, 0xab, 0x57, 0xaf, 0x5e, 0xbd, 0x7a, 0x55, + 0xf5, 0xaa, 0x08, 0x35, 0xc3, 0xb5, 0xae, 0xb8, 0x9e, 0x13, 0x38, 0x08, 0xbc, 0x91, 0x1d, 0x58, + 0x43, 0x7c, 0xe5, 0xd1, 0x8b, 0xad, 0xcb, 0x7b, 0x56, 0xb0, 0x3f, 0x7a, 0x70, 0xa5, 0xef, 0x0c, + 0xaf, 0xee, 0x39, 0x7b, 0xce, 0x55, 0x8a, 0xf2, 0x60, 0xb4, 0x4b, 0xbf, 0xe8, 0x07, 0xfd, 0xc5, + 0x48, 0xd5, 0x4b, 0x30, 0xff, 0x1e, 0xf6, 0x7c, 0xcb, 0xb1, 0x35, 0xfc, 0xf5, 0x11, 0xf6, 0x03, + 0xb4, 0x0a, 0xb3, 0x8f, 0x18, 0x64, 0x55, 0x39, 0xa7, 0x5c, 0xa8, 0x69, 0xe2, 0x53, 0xfd, 0x03, + 0x05, 
0x16, 0x42, 0x64, 0xdf, 0x75, 0x6c, 0x1f, 0xe7, 0x63, 0xa3, 0xf3, 0x30, 0xc7, 0xc5, 0xd2, + 0x6d, 0x63, 0x88, 0x57, 0x4b, 0xb4, 0xb8, 0xce, 0x61, 0x9b, 0xc6, 0x10, 0xa3, 0xe7, 0x60, 0x41, + 0xa0, 0x08, 0x26, 0x65, 0x8a, 0x35, 0xcf, 0xc1, 0xbc, 0x36, 0x74, 0x05, 0x8e, 0x09, 0x44, 0xc3, + 0xb5, 0x42, 0xe4, 0x0a, 0x45, 0x5e, 0xe4, 0x45, 0x6d, 0xd7, 0xe2, 0xf8, 0xea, 0x57, 0xa1, 0xb6, + 0xbe, 0xd9, 0x5b, 0x73, 0xec, 0x5d, 0x6b, 0x8f, 0x88, 0xe8, 0x63, 0x8f, 0xd0, 0xac, 0x2a, 0xe7, + 0xca, 0x44, 0x44, 0xfe, 0x89, 0x5a, 0x50, 0xf5, 0xb1, 0xe1, 0xf5, 0xf7, 0xb1, 0xbf, 0x5a, 0xa2, + 0x45, 0xe1, 0x37, 0xa1, 0x72, 0xdc, 0xc0, 0x72, 0x6c, 0x7f, 0xb5, 0xcc, 0xa8, 0xf8, 0xa7, 0xfa, + 0xdb, 0x0a, 0xd4, 0xbb, 0x8e, 0x17, 0xdc, 0x37, 0x5c, 0xd7, 0xb2, 0xf7, 0xd0, 0x35, 0xa8, 0x52, + 0x5d, 0xf6, 0x9d, 0x01, 0xd5, 0xc1, 0xfc, 0xf5, 0xa5, 0x2b, 0x51, 0x87, 0x5c, 0xe9, 0xf2, 0x32, + 0x2d, 0xc4, 0x42, 0xcf, 0xc0, 0x7c, 0xdf, 0xb1, 0x03, 0xc3, 0xb2, 0xb1, 0xa7, 0xbb, 0x8e, 0x17, + 0x50, 0xe5, 0x4c, 0x6b, 0x8d, 0x10, 0x4a, 0xf8, 0xa3, 0x93, 0x50, 0xdb, 0x77, 0xfc, 0x80, 0x61, + 0x94, 0x29, 0x46, 0x95, 0x00, 0x68, 0xe1, 0x0a, 0xcc, 0xd2, 0x42, 0xcb, 0xe5, 0x6a, 0x98, 0x21, + 0x9f, 0x1d, 0x57, 0xfd, 0x7e, 0x09, 0xa6, 0xef, 0x3b, 0x23, 0x3b, 0x48, 0x54, 0x63, 0x04, 0xfb, + 0xbc, 0x8b, 0x62, 0xd5, 0x18, 0xc1, 0x7e, 0x54, 0x0d, 0xc1, 0x60, 0xbd, 0xc4, 0xaa, 0x21, 0x85, + 0x2d, 0xa8, 0x7a, 0xd8, 0x30, 0x1d, 0x7b, 0x70, 0x40, 0x45, 0xa8, 0x6a, 0xe1, 0x37, 0xe9, 0x3e, + 0x1f, 0x0f, 0x2c, 0x7b, 0xf4, 0x44, 0xf7, 0xf0, 0xc0, 0x78, 0x80, 0x07, 0x54, 0x94, 0xaa, 0x36, + 0xcf, 0xc1, 0x1a, 0x83, 0xa2, 0x37, 0xa1, 0xee, 0x7a, 0x8e, 0x6b, 0xec, 0x19, 0x44, 0x83, 0xab, + 0xd3, 0x54, 0x49, 0xa7, 0xe2, 0x4a, 0xa2, 0x02, 0x77, 0x23, 0x1c, 0x2d, 0x4e, 0x80, 0x5e, 0x83, + 0xfa, 0xc8, 0x32, 0xb9, 0xbe, 0xfd, 0xd5, 0x99, 0x73, 0xe5, 0x0b, 0xf5, 0xeb, 0xc7, 0xe3, 0xf4, + 0x9d, 0x75, 0x5e, 0xaa, 0xc5, 0x31, 0x09, 0xe1, 0x5e, 0x8c, 0x70, 0xb6, 0x90, 0x30, 0x86, 0xa9, + 0xea, 0x50, 0x0b, 0x4b, 0x22, 0x55, 0x9b, 
0x54, 0x81, 0x0d, 0xae, 0x6a, 0x93, 0x98, 0x78, 0xa4, + 0x60, 0xcb, 0xa4, 0xca, 0x6b, 0x68, 0xf5, 0x10, 0xd6, 0x31, 0xd1, 0x32, 0xcc, 0x0c, 0xb0, 0xbd, + 0x17, 0xec, 0x53, 0xed, 0x35, 0x34, 0xfe, 0xa5, 0xfe, 0x86, 0x02, 0x8d, 0x1d, 0x1f, 0x7b, 0x64, + 0x1c, 0xf8, 0xae, 0xd1, 0xc7, 0xe8, 0x32, 0x54, 0x86, 0x8e, 0x89, 0xb9, 0x09, 0x9d, 0x88, 0x0b, + 0x19, 0x22, 0xdd, 0x77, 0x4c, 0xac, 0x51, 0x34, 0x74, 0x11, 0x2a, 0x23, 0xcb, 0x64, 0x76, 0x9b, + 0xdb, 0x26, 0x8a, 0x42, 0x50, 0xf7, 0x08, 0x6a, 0xb9, 0x10, 0x95, 0xa0, 0xa8, 0x3f, 0x53, 0x60, + 0x21, 0xac, 0x6d, 0x8b, 0x1a, 0x3c, 0x7a, 0x09, 0x66, 0x6d, 0x1c, 0x3c, 0x76, 0xbc, 0x87, 0xe3, + 0x65, 0x13, 0x98, 0xe8, 0x79, 0x28, 0xbb, 0x5c, 0x23, 0x85, 0x04, 0x04, 0x8b, 0x20, 0x5b, 0x6e, + 0x9f, 0x6a, 0xa8, 0x18, 0xd9, 0x72, 0xfb, 0xc4, 0x5c, 0x03, 0xc3, 0xdb, 0xc3, 0xb4, 0x3f, 0x98, + 0xe9, 0x57, 0x19, 0xa0, 0x63, 0xa2, 0xdb, 0x30, 0x3f, 0xf2, 0xb1, 0x67, 0xfb, 0xba, 0x18, 0xbc, + 0xc4, 0xd8, 0xea, 0x32, 0x53, 0x49, 0xef, 0x5a, 0x83, 0x11, 0x6c, 0xf1, 0xd1, 0xad, 0x02, 0x74, + 0xec, 0xe0, 0xd5, 0x97, 0xdf, 0x33, 0x06, 0x23, 0x8c, 0x96, 0x60, 0xfa, 0x11, 0xf9, 0x41, 0x5b, + 0x5e, 0xd6, 0xd8, 0x87, 0xfa, 0xd7, 0x15, 0x38, 0x79, 0x8f, 0x18, 0x78, 0xcf, 0xb0, 0xcd, 0x07, + 0xce, 0x93, 0x1e, 0xee, 0x8f, 0x3c, 0x2b, 0x38, 0x58, 0x73, 0xec, 0x00, 0x3f, 0x09, 0xd0, 0x3b, + 0xb0, 0x68, 0x0b, 0xfe, 0xa1, 0x20, 0x0a, 0x15, 0xe4, 0x64, 0x66, 0xeb, 0x58, 0xe5, 0x5a, 0xd3, + 0x96, 0x01, 0x3e, 0xba, 0x13, 0x0d, 0x31, 0xc1, 0xa7, 0x94, 0x6e, 0x50, 0x6f, 0x83, 0x4a, 0xc3, + 0xb9, 0x88, 0xd1, 0x27, 0x78, 0xbc, 0x0a, 0xc4, 0xe9, 0xea, 0x86, 0xaf, 0x93, 0x96, 0x52, 0x2d, + 0xd7, 0xaf, 0x2f, 0x4b, 0x56, 0x10, 0x36, 0x58, 0xab, 0x79, 0x23, 0xbb, 0xed, 0x13, 0x0d, 0xa1, + 0x1b, 0xd4, 0x81, 0x13, 0xba, 0x3d, 0xcf, 0x19, 0xb9, 0xab, 0xd5, 0x42, 0x42, 0xa0, 0x84, 0x77, + 0x09, 0x26, 0xf5, 0xeb, 0xdc, 0x49, 0xe8, 0x9e, 0xe3, 0x04, 0xbb, 0xbe, 0x70, 0x0c, 0x02, 0xac, + 0x51, 0x28, 0xba, 0x0a, 0xc7, 0xfc, 0x91, 0xeb, 0x0e, 0xf0, 0x10, 0xdb, 0x81, 
0x31, 0x60, 0x15, + 0x91, 0x3e, 0x2b, 0x5f, 0x28, 0x6b, 0x28, 0x5e, 0x44, 0x19, 0xfb, 0xe8, 0x0c, 0x80, 0xeb, 0x59, + 0x8f, 0xac, 0x01, 0xde, 0xc3, 0xe6, 0xea, 0x0c, 0x65, 0x1a, 0x83, 0xa0, 0x57, 0x88, 0xaf, 0xef, + 0xf7, 0x9d, 0xa1, 0xbb, 0x5a, 0x4b, 0xeb, 0x5b, 0xf4, 0x53, 0xd7, 0x73, 0x76, 0xad, 0x01, 0xd6, + 0x04, 0x2e, 0x7a, 0x0d, 0xaa, 0x86, 0xeb, 0x1a, 0xde, 0xd0, 0xf1, 0x56, 0x61, 0x3c, 0x5d, 0x88, + 0x8c, 0x5e, 0x86, 0x25, 0xce, 0x43, 0x77, 0x59, 0x21, 0x73, 0xa3, 0xb3, 0xc4, 0x2e, 0xef, 0x94, + 0x56, 0x15, 0x0d, 0xf1, 0x72, 0x4e, 0x4b, 0x9c, 0xaa, 0xfa, 0x77, 0x0a, 0x2c, 0x24, 0x78, 0xa2, + 0x77, 0x61, 0x4e, 0x70, 0x08, 0x0e, 0x5c, 0xe1, 0x06, 0x9e, 0x2b, 0x10, 0xe3, 0x0a, 0xff, 0xbb, + 0x7d, 0xe0, 0x62, 0xea, 0x2f, 0xc5, 0x07, 0x7a, 0x0a, 0x1a, 0x03, 0xa7, 0x6f, 0x0c, 0xa8, 0xd7, + 0xf2, 0xf0, 0x2e, 0xf7, 0xea, 0x73, 0x21, 0x50, 0xc3, 0xbb, 0xea, 0x6d, 0xa8, 0xc7, 0x18, 0x20, + 0x04, 0xf3, 0x1a, 0xab, 0x6a, 0x1d, 0xef, 0x1a, 0xa3, 0x41, 0xd0, 0x9c, 0x42, 0xf3, 0x00, 0x3b, + 0x76, 0x9f, 0xcc, 0xa2, 0x36, 0x36, 0x9b, 0x0a, 0x6a, 0x40, 0xed, 0x9e, 0x60, 0xd1, 0x2c, 0xa9, + 0xdf, 0x2d, 0xc3, 0x71, 0x6a, 0x78, 0x5d, 0xc7, 0xe4, 0x23, 0x81, 0x4f, 0xb9, 0x4f, 0x41, 0xa3, + 0x4f, 0xfb, 0x52, 0x77, 0x0d, 0x0f, 0xdb, 0x01, 0x9f, 0x78, 0xe6, 0x18, 0xb0, 0x4b, 0x61, 0x48, + 0x83, 0xa6, 0xcf, 0x5b, 0xa4, 0xf7, 0xd9, 0xc8, 0xe1, 0xc6, 0x2d, 0xb5, 0xba, 0x60, 0xa0, 0x69, + 0x0b, 0x7e, 0x6a, 0xe4, 0xcd, 0xfa, 0x07, 0x7e, 0x3f, 0x18, 0x08, 0x6f, 0x77, 0x25, 0xc5, 0x2a, + 0x29, 0xec, 0x95, 0x1e, 0x23, 0xd8, 0xb0, 0x03, 0xef, 0x40, 0x13, 0xe4, 0xe8, 0x2d, 0xa8, 0x3a, + 0x8f, 0xb0, 0xb7, 0x8f, 0x0d, 0xe6, 0x65, 0xea, 0xd7, 0x9f, 0x4a, 0xb1, 0x5a, 0x13, 0x8e, 0x5e, + 0xc3, 0xbe, 0x33, 0xf2, 0xfa, 0xd8, 0xd7, 0x42, 0x22, 0xd4, 0x86, 0x9a, 0x27, 0xc0, 0xdc, 0x0b, + 0x4d, 0xc4, 0x21, 0xa2, 0x6a, 0xdd, 0x82, 0xb9, 0xb8, 0x70, 0xa8, 0x09, 0xe5, 0x87, 0xf8, 0x80, + 0x2b, 0x93, 0xfc, 0x8c, 0xfc, 0x13, 0xeb, 0x61, 0xf6, 0x71, 0xab, 0x74, 0x43, 0x51, 0x3d, 0x40, + 0x51, 0x4b, 0xef, 
0xe3, 0xc0, 0x30, 0x8d, 0xc0, 0x40, 0x08, 0x2a, 0x34, 0x18, 0x63, 0x2c, 0xe8, + 0x6f, 0xc2, 0x75, 0xc4, 0x5d, 0x75, 0x4d, 0x23, 0x3f, 0xd1, 0x29, 0xa8, 0x85, 0x9e, 0x88, 0x47, + 0x64, 0x11, 0x80, 0x44, 0x46, 0x46, 0x10, 0xe0, 0xa1, 0x1b, 0x50, 0xc5, 0x34, 0x34, 0xf1, 0xa9, + 0xfe, 0xda, 0x34, 0x34, 0x53, 0xb6, 0x70, 0x0b, 0xaa, 0x43, 0x5e, 0x3d, 0xf7, 0x81, 0x67, 0xa4, + 0xf0, 0x28, 0x25, 0xa4, 0x16, 0xe2, 0x93, 0xe8, 0x83, 0xd8, 0x5a, 0x2c, 0x7e, 0x0c, 0xbf, 0x99, + 0x91, 0xef, 0xe9, 0xa6, 0xe5, 0xe1, 0x7e, 0xe0, 0x78, 0x07, 0x5c, 0xd0, 0xb9, 0x81, 0xb3, 0xb7, + 0x2e, 0x60, 0xe8, 0x65, 0x00, 0xd3, 0xf6, 0x75, 0x6a, 0xc3, 0x7b, 0xbc, 0x1f, 0xa5, 0x09, 0x30, + 0x0c, 0x13, 0xb5, 0x9a, 0x69, 0xfb, 0x5c, 0xe4, 0xd7, 0xa1, 0x41, 0x62, 0x2e, 0x7d, 0x28, 0x02, + 0x87, 0x69, 0x6a, 0x4b, 0x2b, 0xb2, 0xdc, 0x61, 0x04, 0xa8, 0xcd, 0xb9, 0xd1, 0x87, 0x8f, 0x6e, + 0xc3, 0x0c, 0x0d, 0x7b, 0x44, 0xa0, 0x72, 0x21, 0xbb, 0xb9, 0xdc, 0xfa, 0xee, 0x51, 0x54, 0x66, + 0x7c, 0x9c, 0x0e, 0x6d, 0x41, 0xdd, 0xb0, 0x6d, 0x27, 0x30, 0x98, 0xc7, 0x67, 0x61, 0xcb, 0xe5, + 0x42, 0x36, 0xed, 0x08, 0x9f, 0xf1, 0x8a, 0x73, 0x40, 0xaf, 0xc1, 0x34, 0x9d, 0x12, 0xb8, 0x0f, + 0x3f, 0x3f, 0x76, 0x50, 0x68, 0x0c, 0x1f, 0xbd, 0x01, 0xb3, 0x8f, 0x2d, 0xdb, 0x74, 0x1e, 0xfb, + 0xdc, 0x9f, 0x4a, 0x26, 0xfc, 0x65, 0x56, 0x94, 0x22, 0x16, 0x34, 0xad, 0x9b, 0x50, 0x8f, 0xb5, + 0xef, 0x30, 0xf6, 0xdb, 0x7a, 0x13, 0x9a, 0xc9, 0x36, 0x1d, 0xca, 0xfe, 0x47, 0xb0, 0xa4, 0x8d, + 0xec, 0x48, 0x34, 0xb1, 0xbc, 0x79, 0x19, 0x66, 0xb8, 0x35, 0x30, 0x63, 0x3c, 0x55, 0xa4, 0x56, + 0x8d, 0xe3, 0xc6, 0x57, 0x2a, 0xfb, 0x86, 0x6d, 0x0e, 0xb0, 0xc7, 0x6b, 0x14, 0x2b, 0x95, 0x77, + 0x18, 0x54, 0x7d, 0x03, 0x8e, 0x27, 0xaa, 0xe5, 0x0b, 0xa5, 0xa7, 0x61, 0xde, 0x75, 0x4c, 0xdd, + 0x67, 0x60, 0x11, 0x4b, 0xd6, 0x88, 0xed, 0x08, 0xdc, 0x8e, 0x49, 0xc8, 0x7b, 0x81, 0xe3, 0xa6, + 0xc5, 0x9e, 0x8c, 0x7c, 0x15, 0x96, 0x93, 0xe4, 0xac, 0x7a, 0xf5, 0x2d, 0x58, 0xd1, 0xf0, 0xd0, + 0x79, 0x84, 0x8f, 0xca, 0xba, 0x05, 0xab, 0x69, 0x06, 
0x9c, 0xf9, 0xfb, 0xb0, 0x12, 0x41, 0x7b, + 0x81, 0x11, 0x8c, 0xfc, 0x43, 0x31, 0xe7, 0xab, 0xc8, 0x07, 0x8e, 0xcf, 0x3a, 0xb2, 0xaa, 0x89, + 0x4f, 0x75, 0x05, 0xa6, 0xbb, 0x8e, 0xd9, 0xe9, 0xa2, 0x79, 0x28, 0x59, 0x2e, 0x27, 0x2e, 0x59, + 0xae, 0xda, 0x8f, 0xd7, 0xb9, 0xc9, 0xa2, 0x4e, 0x56, 0x75, 0x12, 0x15, 0xdd, 0x80, 0x79, 0xc3, + 0x34, 0x2d, 0x62, 0x48, 0xc6, 0x40, 0xb7, 0x5c, 0x11, 0x34, 0x2f, 0x26, 0xba, 0xbe, 0xd3, 0xd5, + 0x1a, 0x11, 0x62, 0xc7, 0xf5, 0xd5, 0x3b, 0x50, 0x8b, 0x02, 0xf4, 0x57, 0xa2, 0x15, 0x61, 0x69, + 0x7c, 0x2c, 0x17, 0x2e, 0x17, 0x37, 0x53, 0x93, 0x24, 0x17, 0xf3, 0x15, 0x80, 0xd0, 0xa9, 0x8a, + 0xf0, 0xf0, 0x78, 0x26, 0x4b, 0x2d, 0x86, 0xa8, 0xfe, 0x47, 0x25, 0xee, 0x64, 0x63, 0x4d, 0x36, + 0xc3, 0x26, 0x9b, 0x92, 0xd3, 0x2d, 0x1d, 0xd2, 0xe9, 0xbe, 0x08, 0xd3, 0x7e, 0x60, 0x04, 0x98, + 0xc7, 0xe3, 0x27, 0xb3, 0x09, 0x49, 0xc5, 0x58, 0x63, 0x98, 0xe8, 0x34, 0x40, 0xdf, 0xc3, 0x46, + 0x80, 0x4d, 0xdd, 0x60, 0xb3, 0x42, 0x59, 0xab, 0x71, 0x48, 0x3b, 0x20, 0x5e, 0x44, 0xac, 0x20, + 0x32, 0x26, 0xc2, 0x9c, 0x6e, 0x8c, 0xd6, 0x12, 0xa1, 0xf7, 0x9a, 0x19, 0xeb, 0xbd, 0x38, 0x29, + 0xf7, 0x5e, 0x91, 0x27, 0x9e, 0x2d, 0xf2, 0xc4, 0x8c, 0x68, 0x12, 0x4f, 0x5c, 0x2d, 0xf2, 0xc4, + 0x9c, 0x4d, 0xb1, 0x27, 0xce, 0x70, 0x24, 0xb5, 0x2c, 0x47, 0xf2, 0x59, 0xba, 0xce, 0xbf, 0x28, + 0xc1, 0x6a, 0x7a, 0x3c, 0x73, 0x3f, 0xf6, 0x32, 0xcc, 0xf8, 0x14, 0x52, 0xec, 0x3f, 0x39, 0x15, + 0xc7, 0x45, 0x77, 0xa0, 0x62, 0xd9, 0xbb, 0x0e, 0x1f, 0x78, 0x57, 0x0a, 0x69, 0x78, 0x4d, 0x57, + 0x3a, 0xf6, 0xae, 0xc3, 0x34, 0x48, 0x69, 0xd1, 0x3d, 0x38, 0x16, 0xae, 0xac, 0x7d, 0x9d, 0x31, + 0xc6, 0x22, 0xce, 0x93, 0xac, 0x34, 0x8c, 0xaa, 0x38, 0x47, 0x14, 0xd1, 0xf5, 0x38, 0x19, 0x89, + 0x71, 0x08, 0xba, 0x1f, 0x18, 0x43, 0x57, 0x58, 0x6c, 0x08, 0x68, 0xbd, 0x06, 0xb5, 0xb0, 0xfa, + 0x43, 0xe9, 0xae, 0x03, 0x4b, 0x89, 0x31, 0xc2, 0x16, 0x92, 0xe1, 0xa0, 0x52, 0x26, 0x1d, 0x54, + 0xea, 0x4f, 0x95, 0xf8, 0x40, 0x7f, 0xdb, 0x1a, 0x04, 0xd8, 0x4b, 0x0d, 0xf4, 0x57, 0x05, 
0x5f, + 0x36, 0xca, 0xcf, 0x15, 0xf0, 0x65, 0xeb, 0x34, 0x3e, 0x62, 0xdf, 0x83, 0x79, 0x6a, 0xe2, 0xba, + 0x8f, 0x07, 0x34, 0x56, 0xe2, 0x7a, 0xbc, 0x9a, 0xcd, 0x80, 0xd5, 0xce, 0x86, 0x48, 0x8f, 0x53, + 0xb0, 0xbe, 0x69, 0x0c, 0xe2, 0xb0, 0xd6, 0x6d, 0x40, 0x69, 0xa4, 0x43, 0x69, 0xf0, 0x3e, 0xf1, + 0x97, 0x7e, 0x90, 0x39, 0x73, 0xef, 0x52, 0x31, 0x8a, 0x2d, 0x8f, 0x89, 0xaa, 0x71, 0x5c, 0xf5, + 0x5f, 0xcb, 0x00, 0x51, 0xe1, 0xe7, 0xdc, 0x51, 0xde, 0x0a, 0x1d, 0x16, 0x8b, 0x38, 0xd5, 0x6c, + 0x96, 0x99, 0xae, 0xaa, 0x23, 0xbb, 0x2a, 0x16, 0x7b, 0x3e, 0x97, 0xc3, 0xe0, 0xd0, 0x4e, 0x6a, + 0xf6, 0xf3, 0xe6, 0xa4, 0xde, 0x86, 0xe5, 0xa4, 0x99, 0x70, 0x0f, 0xf5, 0x02, 0x4c, 0x5b, 0x01, + 0x1e, 0xb2, 0xdd, 0xde, 0xc4, 0x86, 0x45, 0x0c, 0x9d, 0x21, 0xa9, 0x6f, 0xc2, 0xb2, 0xdc, 0x57, + 0x87, 0x0b, 0x5d, 0xd4, 0x7b, 0xc9, 0xd8, 0x27, 0x72, 0x95, 0xdc, 0x3e, 0x32, 0xb7, 0x7e, 0x92, + 0x34, 0x0c, 0x53, 0xfd, 0x81, 0x02, 0xc7, 0x13, 0x45, 0x39, 0x03, 0xff, 0xab, 0xa9, 0x01, 0xcc, + 0x7c, 0xeb, 0xcb, 0x05, 0xb5, 0x7c, 0x8a, 0xa3, 0xf8, 0xcb, 0xd0, 0x92, 0xbb, 0x47, 0x52, 0xed, + 0xcd, 0xc4, 0x50, 0x3e, 0x3f, 0x56, 0xe8, 0x70, 0x3c, 0x77, 0xe1, 0x64, 0x26, 0xe3, 0xb4, 0xce, + 0xcb, 0x13, 0xea, 0xfc, 0x7f, 0x4b, 0x71, 0x9f, 0xdd, 0x0e, 0x02, 0xcf, 0x7a, 0x30, 0x0a, 0xf0, + 0x27, 0x1b, 0x54, 0xad, 0x87, 0x23, 0x9b, 0xf9, 0xd9, 0x17, 0xb2, 0x29, 0xa3, 0xda, 0x33, 0xc7, + 0x78, 0x4f, 0x1e, 0xe3, 0x15, 0xca, 0xea, 0xc5, 0xb1, 0xac, 0x0a, 0x47, 0xfb, 0x67, 0x39, 0x88, + 0xff, 0x41, 0x81, 0x85, 0x44, 0xaf, 0xa0, 0xdb, 0x00, 0x46, 0x28, 0x3a, 0xb7, 0x8f, 0x73, 0xe3, + 0x9a, 0xa8, 0xc5, 0x68, 0xc8, 0x9c, 0xc8, 0xe2, 0xc5, 0x8c, 0x39, 0x31, 0x23, 0x5e, 0x0c, 0xc3, + 0xc5, 0xd7, 0xa3, 0xc5, 0x2e, 0xdb, 0x24, 0x55, 0x0b, 0x17, 0xbb, 0x8c, 0x56, 0x90, 0xa8, 0xbf, + 0x5e, 0x82, 0xa5, 0x2c, 0xee, 0xe8, 0x59, 0x28, 0xf7, 0xdd, 0x11, 0x6f, 0x89, 0x74, 0x34, 0xb4, + 0xe6, 0x8e, 0x76, 0x7c, 0x63, 0x0f, 0x6b, 0x04, 0x01, 0x5d, 0x85, 0x99, 0x21, 0x1e, 0x3a, 0xde, + 0x01, 0x97, 0x5b, 0xda, 0x6e, 
0xb8, 0x4f, 0x4b, 0x18, 0x36, 0x47, 0x43, 0xd7, 0xa3, 0xb0, 0x9a, + 0xc9, 0xbb, 0x2a, 0xad, 0x1e, 0x58, 0x11, 0x23, 0x09, 0x63, 0xe9, 0xeb, 0x30, 0xeb, 0x7a, 0x4e, + 0x1f, 0xfb, 0x3e, 0xdf, 0x0d, 0x59, 0x4d, 0x9c, 0x55, 0x91, 0x22, 0x4e, 0xc3, 0x11, 0xd1, 0x2d, + 0x80, 0x28, 0x80, 0xe2, 0x33, 0x53, 0x2b, 0x37, 0xde, 0xf2, 0xb5, 0x18, 0xb6, 0xfa, 0xbd, 0x12, + 0x2c, 0x67, 0x6b, 0x0e, 0x5d, 0x8e, 0xeb, 0xe5, 0x64, 0x86, 0xaa, 0x65, 0xf5, 0xbc, 0x9a, 0x50, + 0xcf, 0x99, 0x0c, 0x8a, 0x2c, 0x2d, 0xdd, 0x4c, 0x6a, 0xe9, 0x6c, 0x06, 0x61, 0xb6, 0xb2, 0x6e, + 0x26, 0x95, 0x95, 0x45, 0x9a, 0xad, 0xb3, 0x76, 0x86, 0xce, 0xce, 0x67, 0xb5, 0x31, 0x5f, 0x75, + 0x7f, 0xab, 0xc0, 0x5c, 0x5c, 0x2e, 0x39, 0x64, 0x55, 0x12, 0x21, 0x2b, 0xda, 0x84, 0x45, 0x93, + 0xed, 0xdc, 0xea, 0x96, 0x1d, 0x60, 0x6f, 0xd7, 0xe8, 0x8b, 0xa8, 0xf0, 0x7c, 0x86, 0x5d, 0x74, + 0x04, 0x0e, 0x13, 0xbc, 0xc9, 0x69, 0x43, 0x30, 0x69, 0x41, 0xc8, 0x47, 0x78, 0xad, 0x09, 0x18, + 0xc5, 0x88, 0xd4, 0x7f, 0x51, 0xe0, 0x58, 0x86, 0x82, 0xc7, 0x34, 0x64, 0x27, 0xbf, 0x21, 0x17, + 0xf2, 0xbb, 0x6e, 0x6c, 0x7b, 0xde, 0xc9, 0x68, 0xcf, 0xe4, 0xfc, 0xe2, 0xcd, 0xfa, 0x99, 0x02, + 0xc7, 0x33, 0xb1, 0x32, 0xb7, 0x57, 0xaf, 0x43, 0xd5, 0x7b, 0xa2, 0x3f, 0x38, 0x08, 0xb0, 0x9f, + 0x35, 0xb0, 0x77, 0x62, 0x67, 0x28, 0xb3, 0xde, 0x93, 0x3b, 0x04, 0x0f, 0xbd, 0x0c, 0x35, 0xef, + 0x89, 0x8e, 0x3d, 0xcf, 0xf1, 0x84, 0x2f, 0xca, 0x25, 0xaa, 0x7a, 0x4f, 0x36, 0x28, 0x22, 0xa9, + 0x29, 0x10, 0x35, 0x55, 0xc6, 0xd4, 0x14, 0x44, 0x35, 0x05, 0x61, 0x4d, 0xd3, 0x63, 0x6a, 0x0a, + 0x78, 0x4d, 0xea, 0x1f, 0x96, 0xe0, 0x54, 0x91, 0xba, 0x3e, 0x31, 0x45, 0x6c, 0x00, 0xf2, 0x9e, + 0xe8, 0xae, 0xd1, 0x7f, 0x88, 0x03, 0x5f, 0x37, 0x3d, 0xc7, 0x75, 0xb1, 0x39, 0x4e, 0x23, 0x4d, + 0xef, 0x49, 0x97, 0x51, 0xac, 0x33, 0x82, 0x23, 0x69, 0x66, 0x03, 0x50, 0x90, 0xae, 0x7a, 0x8c, + 0x8a, 0x9a, 0x41, 0xa2, 0x6a, 0xf5, 0x43, 0x98, 0x8b, 0x7b, 0x88, 0x31, 0xb6, 0xff, 0x3a, 0x34, + 0xb8, 0x07, 0xd1, 0xfb, 0xce, 0xc8, 0x0e, 0xc6, 0x29, 0x6a, 0x8e, 
0x63, 0xaf, 0x11, 0x64, 0xf5, + 0xeb, 0xe1, 0x70, 0xfb, 0xd4, 0xaa, 0xfc, 0x77, 0x05, 0x6a, 0x9d, 0xa1, 0xb1, 0x87, 0x7b, 0x2e, + 0xee, 0x93, 0x99, 0xde, 0x22, 0x1f, 0xbc, 0xdf, 0xd9, 0x07, 0x7a, 0x47, 0x8e, 0x5a, 0x58, 0x9c, + 0xfa, 0xac, 0x74, 0x8e, 0x28, 0x38, 0x8c, 0x59, 0x98, 0x5c, 0x83, 0xa5, 0x91, 0x8f, 0x3d, 0xdd, + 0x77, 0x71, 0xdf, 0xda, 0xb5, 0xb0, 0xa9, 0xb3, 0xea, 0x10, 0xad, 0x0e, 0x91, 0xb2, 0x9e, 0x28, + 0xa2, 0x3c, 0x3f, 0x76, 0x84, 0x72, 0x1d, 0xaa, 0x5f, 0xc4, 0x07, 0x6c, 0x0d, 0x3f, 0x21, 0x9d, + 0xfa, 0xed, 0x0a, 0xac, 0xe4, 0x9c, 0xee, 0xd0, 0x05, 0xa0, 0x3b, 0xd2, 0x5d, 0xec, 0x59, 0x8e, + 0x29, 0x3a, 0xa3, 0xef, 0x8e, 0xba, 0x14, 0x80, 0x4e, 0x02, 0xf9, 0xd0, 0xbf, 0x3e, 0x72, 0x78, + 0x8c, 0x59, 0xd6, 0xaa, 0x7d, 0x77, 0xf4, 0x25, 0xf2, 0x2d, 0x68, 0xfd, 0x7d, 0xc3, 0xc3, 0xcc, + 0x2d, 0x30, 0xda, 0x1e, 0x05, 0xa0, 0x17, 0xe1, 0x38, 0x9b, 0xf2, 0xf4, 0x81, 0x35, 0xb4, 0x88, + 0xf3, 0x8c, 0x59, 0x7c, 0x59, 0x43, 0xac, 0xf0, 0x1e, 0x29, 0xeb, 0xd8, 0xcc, 0xc6, 0x55, 0x68, + 0x38, 0xce, 0x50, 0xf7, 0xfb, 0x8e, 0x87, 0x75, 0xc3, 0xfc, 0x90, 0x9a, 0x77, 0x59, 0xab, 0x3b, + 0xce, 0xb0, 0x47, 0x60, 0x6d, 0xf3, 0x43, 0x74, 0x16, 0xea, 0x7d, 0x77, 0xe4, 0xe3, 0x40, 0x27, + 0x7f, 0xe8, 0x1e, 0x5c, 0x4d, 0x03, 0x06, 0x5a, 0x73, 0x47, 0x7e, 0x0c, 0x61, 0x48, 0x56, 0x5d, + 0xb3, 0x71, 0x84, 0xfb, 0x78, 0x48, 0x0f, 0xb1, 0xf7, 0x47, 0x7b, 0xd8, 0x35, 0xf6, 0x30, 0x13, + 0x4d, 0x6c, 0xa4, 0x49, 0x87, 0xd8, 0xef, 0x70, 0x14, 0x2a, 0xa0, 0x36, 0xbf, 0x1f, 0xff, 0xf4, + 0xd1, 0xbb, 0x30, 0x3b, 0xb2, 0x69, 0xbf, 0xae, 0xd6, 0x28, 0xed, 0xb5, 0x09, 0xce, 0xd2, 0xae, + 0xec, 0x30, 0x12, 0x7e, 0xb4, 0xc7, 0x19, 0xa0, 0x5b, 0xd0, 0xe2, 0x8a, 0xf2, 0x1f, 0x1b, 0x6e, + 0x52, 0x5b, 0x40, 0x55, 0xb0, 0xcc, 0x30, 0x7a, 0x8f, 0x0d, 0x37, 0xae, 0xb1, 0xd6, 0x2d, 0x98, + 0x8b, 0x33, 0x3d, 0x94, 0x2d, 0xdd, 0x81, 0x86, 0xd4, 0x48, 0xd2, 0xdb, 0x54, 0x29, 0xbe, 0xf5, + 0x0d, 0x31, 0x64, 0xaa, 0x04, 0xd0, 0xb3, 0xbe, 0x41, 0x53, 0x0f, 0xa8, 0x64, 0x94, 0x4f, 0x45, + 0x63, 
0x1f, 0xaa, 0x01, 0x0d, 0xe9, 0xb4, 0x9f, 0x78, 0x5a, 0x7a, 0xac, 0xcf, 0x3d, 0x2d, 0xf9, + 0x4d, 0x60, 0x9e, 0x33, 0x10, 0x12, 0xd0, 0xdf, 0x04, 0x46, 0xcf, 0x95, 0xd9, 0x29, 0x19, 0xfd, + 0x4d, 0xab, 0xc0, 0x8f, 0x78, 0xda, 0x4e, 0x4d, 0x63, 0x1f, 0xea, 0xef, 0x28, 0x00, 0x6b, 0x86, + 0x6b, 0x3c, 0xb0, 0x06, 0x56, 0x70, 0x80, 0x2e, 0x42, 0xd3, 0x30, 0x4d, 0xbd, 0x2f, 0x20, 0x16, + 0x16, 0x79, 0x54, 0x0b, 0x86, 0x69, 0xae, 0xc5, 0xc0, 0xe8, 0x79, 0x58, 0x24, 0x7e, 0x52, 0xc6, + 0x65, 0x89, 0x55, 0x4d, 0x52, 0x20, 0x21, 0xdf, 0x80, 0x55, 0xc2, 0xd7, 0x18, 0x3e, 0xb0, 0xb0, + 0x1d, 0xc8, 0x34, 0x2c, 0xe3, 0x6a, 0xd9, 0x30, 0xcd, 0x36, 0x2b, 0x8e, 0x53, 0xaa, 0x7f, 0x33, + 0x03, 0xa7, 0xe5, 0x1e, 0x4f, 0x26, 0x60, 0xdc, 0x82, 0xb9, 0x84, 0xbc, 0xa9, 0xd4, 0x85, 0xa8, + 0x85, 0x9a, 0x84, 0x9b, 0x48, 0x31, 0x28, 0xa5, 0x52, 0x0c, 0x32, 0x93, 0x3b, 0xca, 0x9f, 0x50, + 0x72, 0x47, 0xe5, 0x63, 0x26, 0x77, 0x4c, 0x1f, 0x35, 0xb9, 0x63, 0x6e, 0xe2, 0xe4, 0x8e, 0x67, + 0xe9, 0xe6, 0x90, 0xa8, 0x91, 0xce, 0xf2, 0xcc, 0x27, 0x34, 0x42, 0xee, 0xb6, 0x48, 0xee, 0x4b, + 0x24, 0x81, 0xcc, 0x1e, 0x26, 0x09, 0xa4, 0x9a, 0x9b, 0x04, 0x72, 0x0e, 0xe6, 0x6c, 0x47, 0xb7, + 0xf1, 0x63, 0x9d, 0x74, 0x8b, 0xbf, 0x5a, 0x67, 0x7d, 0x64, 0x3b, 0x9b, 0xf8, 0x71, 0x97, 0x40, + 0xd0, 0x79, 0x98, 0x1b, 0x1a, 0xfe, 0x43, 0x6c, 0xd2, 0x6c, 0x0c, 0x7f, 0xb5, 0x41, 0xed, 0xa9, + 0xce, 0x60, 0x5d, 0x02, 0x42, 0xcf, 0x40, 0x28, 0x07, 0x47, 0x9a, 0xa7, 0x48, 0x0d, 0x01, 0x65, + 0x68, 0xb1, 0x84, 0x92, 0x85, 0x23, 0x26, 0x94, 0x34, 0x0f, 0x93, 0x50, 0x72, 0x19, 0x9a, 0xe2, + 0xb7, 0xc8, 0x28, 0x61, 0x07, 0x04, 0x34, 0x99, 0x64, 0x41, 0x94, 0x89, 0xac, 0x91, 0xbc, 0xfc, + 0x13, 0x28, 0xcc, 0x3f, 0xf9, 0x23, 0x85, 0x2f, 0x55, 0xc3, 0x01, 0xc4, 0x0f, 0xbe, 0xa5, 0x9c, + 0x05, 0xe5, 0x28, 0x39, 0x0b, 0x68, 0x3b, 0x37, 0xab, 0xe3, 0x62, 0x3e, 0xa7, 0x71, 0x79, 0x1d, + 0xea, 0xfd, 0x70, 0x15, 0xf9, 0x49, 0x64, 0xa7, 0xa9, 0xff, 0xa5, 0xc0, 0x69, 0xce, 0x2f, 0x27, + 0x85, 0x2b, 0xc3, 0xca, 0x95, 0x1c, 0x2b, 
0xef, 0x7b, 0xd8, 0xc4, 0x76, 0x60, 0x19, 0x03, 0x1a, + 0x97, 0x88, 0x83, 0xe1, 0x08, 0x4c, 0x43, 0xa3, 0xf3, 0x30, 0xc7, 0xb2, 0x2c, 0xf9, 0x82, 0x92, + 0x25, 0x53, 0xd6, 0x69, 0xa2, 0x25, 0x5f, 0x33, 0x6e, 0x65, 0x79, 0x96, 0x4a, 0xee, 0x4e, 0xc4, + 0x58, 0x07, 0xa3, 0x3a, 0xb0, 0x92, 0x73, 0x44, 0x9f, 0xd9, 0x4d, 0x4a, 0xba, 0x9b, 0x0a, 0x95, + 0x94, 0xee, 0xa6, 0x6f, 0x2b, 0x70, 0x36, 0xb5, 0xb0, 0xfd, 0xec, 0x35, 0xab, 0xfe, 0xa9, 0x12, + 0xda, 0x4f, 0xd2, 0xe4, 0xd7, 0xd2, 0x26, 0xff, 0x4c, 0xd1, 0x3a, 0x3d, 0xd3, 0xe8, 0xdf, 0xcb, + 0x35, 0xfa, 0xe7, 0x0b, 0xd7, 0xfc, 0xe3, 0xf4, 0xf9, 0x6f, 0x0a, 0x9c, 0xc8, 0x15, 0x20, 0x11, + 0x0f, 0x2a, 0xc9, 0x78, 0x90, 0xc7, 0x92, 0x51, 0x50, 0xcf, 0x62, 0x49, 0x1a, 0xb7, 0xf3, 0xa0, + 0x4d, 0x1f, 0x1a, 0x4f, 0xac, 0xe1, 0x68, 0xc8, 0x83, 0x49, 0xc2, 0xee, 0x3e, 0x83, 0x1c, 0x25, + 0x9a, 0xbc, 0x0a, 0x4b, 0xcc, 0xd1, 0xd3, 0x80, 0x26, 0xa2, 0x60, 0x41, 0xe5, 0x22, 0x2b, 0x23, + 0xb1, 0x0d, 0x27, 0x50, 0xdb, 0xb0, 0x18, 0x36, 0xab, 0x30, 0x45, 0x29, 0x96, 0x72, 0x54, 0x92, + 0x53, 0x8e, 0x6c, 0x98, 0x59, 0xc7, 0x8f, 0xac, 0x3e, 0xfe, 0x44, 0xb2, 0x9d, 0xcf, 0x41, 0xdd, + 0xc5, 0xde, 0xd0, 0xf2, 0xfd, 0x70, 0x56, 0xaf, 0x69, 0x71, 0x90, 0x7a, 0x16, 0x6a, 0x6b, 0xeb, + 0x1d, 0x5e, 0x65, 0x86, 0xa8, 0xea, 0x7f, 0xcf, 0xc0, 0x42, 0xd2, 0xc6, 0x6e, 0xa6, 0x52, 0xa0, + 0x4e, 0x67, 0x6e, 0x9f, 0x65, 0xec, 0x1b, 0x3f, 0x2f, 0x56, 0x54, 0xa5, 0x74, 0x7e, 0x40, 0xb8, + 0x6a, 0x12, 0x0b, 0xad, 0x55, 0x98, 0xed, 0x3b, 0xc3, 0xa1, 0x61, 0x9b, 0x22, 0x67, 0x9d, 0x7f, + 0x12, 0x49, 0x0d, 0x6f, 0x8f, 0xed, 0x18, 0xd7, 0x34, 0xfa, 0x9b, 0x98, 0x00, 0x71, 0x86, 0x96, + 0x4d, 0x93, 0xa8, 0x68, 0x2f, 0xd5, 0x34, 0xe0, 0xa0, 0x75, 0xcb, 0x43, 0x17, 0xa0, 0x82, 0xed, + 0x47, 0xe2, 0x28, 0x49, 0xda, 0xb9, 0x14, 0x6b, 0x22, 0x8d, 0x62, 0xa0, 0x8b, 0x30, 0x33, 0x24, + 0x66, 0x25, 0x0e, 0xda, 0x17, 0x53, 0xb9, 0xdd, 0x1a, 0x47, 0x40, 0x2f, 0xc0, 0xac, 0x49, 0xb5, + 0x27, 0x16, 0x01, 0x48, 0x4a, 0xc7, 0xa2, 0x45, 0x9a, 0x40, 0x41, 0x6f, 0x85, 
0xdb, 0xe6, 0xb5, + 0xf4, 0x79, 0x56, 0x42, 0xcd, 0x99, 0x3b, 0xe6, 0x9b, 0xf2, 0xda, 0x13, 0xd2, 0x9b, 0xef, 0x49, + 0x2e, 0xc5, 0x2b, 0xd0, 0x13, 0x50, 0x1d, 0x38, 0x7b, 0xcc, 0x7a, 0xea, 0xec, 0xc2, 0xc3, 0xc0, + 0xd9, 0xa3, 0xc6, 0xb3, 0x04, 0xd3, 0x7e, 0x60, 0x5a, 0x36, 0x8d, 0xa5, 0xaa, 0x1a, 0xfb, 0x20, + 0x83, 0x94, 0xfe, 0xd0, 0x1d, 0xbb, 0x8f, 0x57, 0x1b, 0xb4, 0xa8, 0x46, 0x21, 0x5b, 0x76, 0x9f, + 0xae, 0x29, 0x83, 0xe0, 0x60, 0x75, 0x9e, 0xc2, 0xc9, 0xcf, 0x68, 0xf7, 0x7a, 0x21, 0x67, 0xf7, + 0x3a, 0x21, 0x70, 0xc6, 0xee, 0x75, 0x33, 0x77, 0xce, 0x48, 0xd2, 0x0a, 0x12, 0x12, 0x47, 0xae, + 0xad, 0x77, 0x74, 0xd1, 0x35, 0x8b, 0xe9, 0x54, 0xf1, 0xd0, 0xec, 0x35, 0x08, 0x7f, 0x7e, 0xa6, + 0x87, 0x07, 0xdf, 0x53, 0x60, 0x79, 0x8d, 0x1e, 0x9d, 0xc6, 0x7c, 0xe3, 0x61, 0xb2, 0x8e, 0x5e, + 0x0a, 0x53, 0xc1, 0x32, 0xf2, 0x79, 0x92, 0x9a, 0x12, 0x99, 0x60, 0x6b, 0x30, 0x2f, 0xd8, 0x72, + 0xe2, 0xf2, 0x04, 0x79, 0x64, 0x0d, 0x3f, 0xfe, 0xa9, 0xbe, 0x0e, 0x2b, 0x29, 0xc9, 0xf9, 0x01, + 0x56, 0xf2, 0x4e, 0x01, 0x13, 0x3c, 0x7e, 0xa7, 0x40, 0xbd, 0x05, 0xc7, 0x7b, 0x81, 0xe1, 0x05, + 0xa9, 0x66, 0x4f, 0x40, 0x4b, 0x33, 0xc4, 0x64, 0x5a, 0x9e, 0xc4, 0xd5, 0x83, 0xa5, 0x5e, 0xe0, + 0xb8, 0x47, 0x60, 0x4a, 0xfc, 0x0e, 0x69, 0xb9, 0x33, 0x12, 0xf3, 0x8c, 0xf8, 0x54, 0x57, 0x58, + 0x3e, 0x5b, 0xba, 0xb6, 0x2f, 0xc0, 0x32, 0x4b, 0x27, 0x3b, 0x4a, 0x23, 0x4e, 0x88, 0x64, 0xb6, + 0x34, 0xdf, 0xbb, 0x70, 0x4c, 0xda, 0x52, 0xe7, 0xe9, 0x17, 0xd7, 0xe4, 0xf4, 0x8b, 0xfc, 0xd3, + 0x8b, 0x30, 0xfb, 0xe2, 0x3b, 0xa5, 0x98, 0x1f, 0xcf, 0x39, 0x83, 0x7d, 0x45, 0x4e, 0xbe, 0x38, + 0x9b, 0xcf, 0x55, 0xca, 0xbd, 0x48, 0x5b, 0x67, 0x39, 0xc3, 0x3a, 0x77, 0x52, 0x07, 0xbc, 0x95, + 0x74, 0xf2, 0x4c, 0x42, 0xc2, 0x4f, 0xe5, 0x68, 0xf7, 0x1e, 0x4b, 0xd0, 0x08, 0xab, 0x0e, 0x4f, + 0x75, 0x5f, 0x4a, 0x9c, 0xea, 0x9e, 0x2c, 0x90, 0x34, 0x3c, 0xcf, 0xfd, 0x4e, 0x05, 0x6a, 0x61, + 0x59, 0x4a, 0xc3, 0x69, 0x55, 0x95, 0x32, 0x54, 0x15, 0x9f, 0x5f, 0xcb, 0x47, 0x9c, 0x5f, 0x2b, + 0x13, 0xcc, 0xaf, 
0x27, 0xa1, 0x46, 0x7f, 0xd0, 0x9c, 0x7a, 0x36, 0x5f, 0x56, 0x29, 0x40, 0xc3, + 0xbb, 0x91, 0x89, 0xcd, 0x4c, 0x68, 0x62, 0x89, 0x64, 0x90, 0xd9, 0x64, 0x32, 0xc8, 0xcd, 0x70, + 0xee, 0xab, 0xa6, 0x0f, 0x5f, 0x42, 0x8e, 0x99, 0xb3, 0x5e, 0x62, 0xc7, 0xb5, 0x96, 0xde, 0x71, + 0x8d, 0xe8, 0x3f, 0xb7, 0x87, 0xc3, 0x5b, 0x2c, 0xc3, 0x23, 0x6e, 0x67, 0xdc, 0x47, 0xbe, 0x22, + 0x1d, 0xae, 0x29, 0x19, 0x73, 0x55, 0xe8, 0x17, 0xe2, 0x07, 0x6a, 0x3b, 0xb0, 0x9c, 0xcc, 0x0c, + 0x3b, 0x94, 0x8f, 0xcb, 0x49, 0x51, 0xfd, 0xcd, 0x78, 0xc4, 0x97, 0x93, 0x8f, 0x79, 0x33, 0x95, + 0x3a, 0x30, 0xb1, 0x85, 0x5e, 0x93, 0xb3, 0x8c, 0x0e, 0x6d, 0x57, 0xa9, 0x24, 0x23, 0x1a, 0x91, + 0x18, 0x1e, 0x2f, 0x66, 0xc1, 0x79, 0x8d, 0x43, 0xda, 0x74, 0x65, 0xb0, 0x6b, 0xd9, 0x96, 0xbf, + 0xcf, 0xca, 0x67, 0xd8, 0xca, 0x40, 0x80, 0xda, 0x74, 0xd7, 0x12, 0x3f, 0xb1, 0x02, 0xbd, 0xef, + 0x98, 0x98, 0x5a, 0xed, 0xb4, 0x56, 0x25, 0x80, 0x35, 0xc7, 0xc4, 0xd1, 0x78, 0xaa, 0x1e, 0x76, + 0x3c, 0xd5, 0x12, 0xe3, 0x69, 0x19, 0x66, 0x3c, 0x6c, 0xf8, 0x8e, 0xcd, 0x36, 0x33, 0x34, 0xfe, + 0x45, 0x3a, 0x62, 0x88, 0x7d, 0x9f, 0xd4, 0xc1, 0x03, 0x30, 0xfe, 0x19, 0x0b, 0x16, 0xe7, 0x0a, + 0x82, 0xc5, 0x82, 0x6c, 0xcf, 0x44, 0xb0, 0xd8, 0x28, 0x08, 0x16, 0x27, 0x4a, 0xf6, 0x8c, 0xc2, + 0xe2, 0xf9, 0x71, 0x61, 0x71, 0x3c, 0xae, 0x5c, 0x90, 0xe3, 0xca, 0xd7, 0xe3, 0x2b, 0xd4, 0x66, + 0xfa, 0xec, 0xbb, 0xf8, 0x0e, 0xc9, 0x67, 0x38, 0x80, 0xff, 0x51, 0x81, 0x95, 0xd4, 0x80, 0xe3, + 0x43, 0xf8, 0xa5, 0x44, 0x1a, 0x69, 0x61, 0xfe, 0xa6, 0xc8, 0x22, 0x6d, 0x4b, 0x59, 0xa4, 0x97, + 0x8b, 0x48, 0x72, 0x92, 0x48, 0x8f, 0x9e, 0xd8, 0xf9, 0x2d, 0x05, 0x50, 0xc6, 0x1a, 0xfc, 0xa6, + 0x88, 0xd6, 0x0f, 0xb1, 0x5b, 0xc6, 0x03, 0xf6, 0xb7, 0xa2, 0x80, 0xbd, 0x74, 0x98, 0x7d, 0x87, + 0x30, 0xe3, 0xe4, 0xc7, 0x25, 0x38, 0xbb, 0xe3, 0x9a, 0x89, 0x30, 0x92, 0x63, 0x4d, 0xee, 0xd9, + 0x6e, 0xca, 0xe9, 0x32, 0x47, 0x6c, 0x42, 0xf9, 0x28, 0x4d, 0x40, 0x5f, 0xcb, 0x4a, 0x68, 0x7a, + 0x5d, 0x3a, 0x7a, 0x2c, 0x6e, 0xe0, 0x98, 0xe9, 0xeb, 
0xe3, 0x9a, 0xb0, 0x0a, 0xe7, 0xf2, 0x05, + 0xe0, 0x21, 0xe7, 0x2f, 0xc3, 0xc2, 0xc6, 0x13, 0xdc, 0xef, 0x1d, 0xd8, 0xfd, 0x43, 0x68, 0xbd, + 0x09, 0xe5, 0xfe, 0xd0, 0xe4, 0xa7, 0x23, 0xe4, 0x67, 0x3c, 0x8a, 0x2e, 0xcb, 0x51, 0xb4, 0x0e, + 0xcd, 0xa8, 0x06, 0x3e, 0x80, 0x96, 0xc9, 0x00, 0x32, 0x09, 0x32, 0x61, 0x3e, 0xa7, 0xf1, 0x2f, + 0x0e, 0xc7, 0x1e, 0xbb, 0xa0, 0xc2, 0xe0, 0xd8, 0xf3, 0x64, 0xaf, 0x5d, 0x96, 0xbd, 0xb6, 0xfa, + 0x5d, 0x05, 0xea, 0xa4, 0x86, 0x8f, 0x25, 0x3f, 0x5f, 0xca, 0x96, 0xa3, 0xa5, 0x6c, 0xb8, 0x22, + 0xae, 0xc4, 0x57, 0xc4, 0x91, 0xe4, 0xd3, 0x14, 0x9c, 0x96, 0x7c, 0x26, 0x84, 0x63, 0xcf, 0x53, + 0xcf, 0xc1, 0x1c, 0x93, 0x8d, 0xb7, 0xbc, 0x09, 0xe5, 0x91, 0x37, 0x10, 0xfd, 0x37, 0xf2, 0x06, + 0xea, 0x37, 0x15, 0x68, 0xb4, 0x83, 0xc0, 0xe8, 0xef, 0x1f, 0xa2, 0x01, 0xa1, 0x70, 0xa5, 0xb8, + 0x70, 0xe9, 0x46, 0x44, 0xe2, 0x56, 0x72, 0xc4, 0x9d, 0x96, 0xc4, 0x55, 0x61, 0x5e, 0xc8, 0x92, + 0x2b, 0xf0, 0x26, 0xa0, 0xae, 0xe3, 0x05, 0x6f, 0x3b, 0xde, 0x63, 0xc3, 0x33, 0x0f, 0xb7, 0x6a, + 0x45, 0x50, 0xe1, 0x4f, 0x06, 0x94, 0x2f, 0x4c, 0x6b, 0xf4, 0xb7, 0xfa, 0x1c, 0x1c, 0x93, 0xf8, + 0xe5, 0x56, 0x7c, 0x0b, 0xea, 0x74, 0x16, 0xe6, 0x0b, 0x9a, 0xe7, 0xe3, 0xe7, 0xf5, 0x63, 0x66, + 0x6b, 0x75, 0x1d, 0x16, 0x49, 0x3c, 0x46, 0xe1, 0xa1, 0x7f, 0xb9, 0x9a, 0x88, 0xf9, 0x57, 0x52, + 0x2c, 0x12, 0xf1, 0xfe, 0x4f, 0x15, 0x98, 0xa6, 0xf0, 0x54, 0x8c, 0x74, 0x92, 0xcc, 0x73, 0xae, + 0xa3, 0x07, 0xc6, 0x5e, 0xf8, 0x1c, 0x03, 0x01, 0x6c, 0x1b, 0x7b, 0xf4, 0x44, 0x87, 0x16, 0x9a, + 0xd6, 0x1e, 0xf6, 0x03, 0x71, 0x42, 0x58, 0x27, 0xb0, 0x75, 0x06, 0x22, 0x8a, 0xa1, 0x07, 0xa9, + 0x15, 0x7a, 0x5e, 0x4a, 0x7f, 0xa3, 0x0b, 0xec, 0x6e, 0x63, 0xf1, 0xb1, 0x18, 0xbd, 0xf3, 0xd8, + 0x82, 0x6a, 0xe2, 0x3c, 0x2b, 0xfc, 0x46, 0x17, 0xa1, 0x42, 0xf7, 0x9f, 0x67, 0x8b, 0xb4, 0x44, + 0x51, 0x88, 0x55, 0xb8, 0x96, 0x6d, 0x63, 0x93, 0x06, 0x40, 0x55, 0x8d, 0x7f, 0xa9, 0x6f, 0x01, + 0x8a, 0x2b, 0x8f, 0x77, 0xd0, 0x45, 0x98, 0xa1, 0xba, 0x15, 0x41, 0xec, 0x62, 0x8a, 0xb5, 
0xc6, + 0x11, 0xd4, 0xaf, 0x02, 0x62, 0x75, 0x49, 0x81, 0xeb, 0x61, 0x3a, 0xb0, 0x20, 0x84, 0xfd, 0x33, + 0x05, 0x8e, 0x49, 0xdc, 0xb9, 0x7c, 0xcf, 0xc9, 0xec, 0x33, 0xc4, 0xe3, 0xac, 0xdf, 0x90, 0x66, + 0xe6, 0x8b, 0x69, 0x31, 0x7e, 0x4e, 0xb3, 0xf2, 0x3f, 0x29, 0x00, 0xed, 0x51, 0xb0, 0xcf, 0x37, + 0x5a, 0xe3, 0x9d, 0xa8, 0x24, 0x3a, 0xb1, 0x05, 0x55, 0xd7, 0xf0, 0xfd, 0xc7, 0x8e, 0x27, 0x16, + 0x91, 0xe1, 0x37, 0xdd, 0x1e, 0x1d, 0xf1, 0x37, 0x1a, 0x6a, 0x1a, 0xfd, 0x8d, 0x9e, 0x81, 0x79, + 0xf6, 0x4e, 0x88, 0x6e, 0x98, 0xa6, 0x27, 0x72, 0x00, 0x6b, 0x5a, 0x83, 0x41, 0xdb, 0x0c, 0x48, + 0xd0, 0x2c, 0x7a, 0x1a, 0x11, 0x1c, 0xe8, 0x81, 0xf3, 0x10, 0xdb, 0x7c, 0x61, 0xd8, 0x10, 0xd0, + 0x6d, 0x02, 0x64, 0xc7, 0x8d, 0x7b, 0x96, 0x1f, 0x78, 0x02, 0x4d, 0x1c, 0x9a, 0x72, 0x28, 0x45, + 0x53, 0xff, 0x58, 0x81, 0x66, 0x77, 0x34, 0x18, 0x30, 0xe5, 0x1e, 0xa5, 0x93, 0x2f, 0xf1, 0xa6, + 0x94, 0xd2, 0x26, 0x1f, 0x29, 0x8a, 0x37, 0xf1, 0x13, 0xd9, 0xcb, 0xba, 0x06, 0x8b, 0x31, 0x89, + 0xb9, 0xe1, 0x48, 0x91, 0xbd, 0x22, 0x47, 0xf6, 0x6a, 0x1b, 0x10, 0xdb, 0xbe, 0x39, 0x72, 0x2b, + 0xd5, 0xe3, 0x70, 0x4c, 0x62, 0xc1, 0xa7, 0xe2, 0x4b, 0xd0, 0xe0, 0xf9, 0x68, 0xdc, 0x20, 0x4e, + 0x40, 0x95, 0xb8, 0xd4, 0xbe, 0x65, 0x8a, 0x0c, 0x89, 0x59, 0xd7, 0x31, 0xd7, 0x2c, 0xd3, 0x53, + 0xbf, 0x04, 0x0d, 0x7e, 0xe1, 0x9d, 0xe3, 0xde, 0x86, 0x79, 0x7e, 0x3e, 0xa8, 0x4b, 0x37, 0x44, + 0x4f, 0x64, 0x24, 0x3d, 0x0a, 0x55, 0xd8, 0xf1, 0x4f, 0xf5, 0x6b, 0xd0, 0x62, 0xd1, 0x82, 0xc4, + 0x58, 0x34, 0xf0, 0x36, 0x88, 0xeb, 0x13, 0x05, 0xfc, 0x65, 0xca, 0x86, 0x17, 0xff, 0x54, 0x4f, + 0xc3, 0xc9, 0x4c, 0xfe, 0xbc, 0xf5, 0x2e, 0x34, 0xa3, 0x02, 0x76, 0x8d, 0x31, 0x4c, 0xfb, 0x50, + 0x62, 0x69, 0x1f, 0xcb, 0x61, 0xec, 0x5d, 0x12, 0x33, 0x17, 0x0d, 0xaf, 0xa3, 0x15, 0x57, 0x39, + 0x6f, 0xc5, 0x55, 0x91, 0x56, 0x5c, 0xea, 0xfd, 0x50, 0x87, 0x7c, 0xdd, 0xfb, 0x3a, 0x5d, 0x99, + 0xb3, 0xba, 0x85, 0x53, 0x3b, 0x95, 0xdd, 0x3e, 0x86, 0xa4, 0xc5, 0xf0, 0xd5, 0x8b, 0xd0, 0x90, + 0xdd, 0x5b, 0xcc, 0x63, 0x29, 
0x29, 0x8f, 0x35, 0x9f, 0x70, 0x56, 0x2f, 0x26, 0x96, 0x14, 0x59, + 0x7a, 0x4d, 0x2c, 0x28, 0x6e, 0x48, 0x6e, 0xeb, 0x69, 0xe9, 0x88, 0xfe, 0xe7, 0xe4, 0xb1, 0x96, + 0xb8, 0x1f, 0x7f, 0xdb, 0x27, 0xf4, 0xbc, 0xa1, 0xea, 0x53, 0x50, 0xdf, 0xc9, 0x7b, 0x76, 0xa4, + 0x22, 0xf2, 0xca, 0x5e, 0x85, 0xa5, 0xb7, 0xad, 0x01, 0xf6, 0x0f, 0xfc, 0x00, 0x0f, 0x3b, 0xd4, + 0xbd, 0xec, 0x5a, 0xd8, 0x43, 0x67, 0x00, 0xe8, 0x2a, 0xd2, 0x75, 0xac, 0xf0, 0xa9, 0x85, 0x18, + 0x44, 0xfd, 0x91, 0x02, 0x0b, 0x11, 0xe1, 0x24, 0x39, 0x81, 0xaf, 0xc0, 0xf4, 0xae, 0x2f, 0x76, + 0xdb, 0x12, 0x67, 0x10, 0x59, 0x22, 0x68, 0x95, 0x5d, 0xbf, 0x63, 0xa2, 0x57, 0x01, 0x46, 0x3e, + 0x36, 0xf9, 0xb1, 0xdf, 0x98, 0x2c, 0xcd, 0x1a, 0x41, 0x65, 0x07, 0x87, 0x37, 0xa0, 0x6e, 0xd9, + 0x8e, 0x89, 0xe9, 0x91, 0xb0, 0x39, 0x2e, 0x43, 0x13, 0x18, 0xee, 0x8e, 0x8f, 0x4d, 0xf5, 0xf7, + 0xa3, 0x83, 0xdd, 0xcf, 0x73, 0x0b, 0x55, 0x9d, 0xcf, 0xaf, 0xa2, 0xd7, 0xb9, 0xc9, 0xbe, 0x03, + 0x8b, 0xcc, 0x4d, 0xee, 0x86, 0x55, 0x66, 0xde, 0x5c, 0x49, 0xb4, 0x4d, 0x6b, 0x5a, 0x3c, 0xb2, + 0x12, 0x44, 0xea, 0x2d, 0x38, 0x9e, 0x48, 0x25, 0x9f, 0x7c, 0x3b, 0xfd, 0xdd, 0xc4, 0xbe, 0x58, + 0x34, 0xa4, 0xae, 0xc9, 0x37, 0x98, 0x8a, 0x92, 0xfe, 0xf9, 0x65, 0x9a, 0x1d, 0x38, 0x21, 0x6d, + 0xda, 0x49, 0xb2, 0xdc, 0x48, 0x04, 0x8b, 0xe7, 0xf2, 0xf9, 0x25, 0xa2, 0xc6, 0xff, 0x51, 0x60, + 0x29, 0x0b, 0xe1, 0x88, 0x1b, 0xc6, 0x1f, 0xe4, 0xdc, 0x7e, 0x7c, 0x69, 0x9c, 0x40, 0x9f, 0xca, + 0x06, 0xfb, 0x26, 0xbb, 0x3b, 0x35, 0xbe, 0x4f, 0xca, 0x93, 0xf5, 0xc9, 0x4f, 0x4b, 0xb1, 0x43, + 0x91, 0x82, 0xfb, 0x4d, 0x1f, 0x63, 0x93, 0x72, 0x2d, 0x71, 0xbd, 0xe9, 0xf9, 0x4c, 0xc2, 0x31, + 0xb7, 0x9b, 0xb4, 0xac, 0xcd, 0x80, 0x6b, 0xe3, 0x38, 0x7d, 0x6e, 0xf7, 0xaf, 0x7f, 0xab, 0x04, + 0xf3, 0x72, 0x87, 0xa0, 0xb7, 0x32, 0xee, 0x36, 0x9d, 0x1d, 0xd3, 0x40, 0xe9, 0x6a, 0x13, 0xbf, + 0x4b, 0x54, 0x9a, 0xfc, 0x2e, 0x51, 0x79, 0xb2, 0xbb, 0x44, 0x77, 0x60, 0xfe, 0xb1, 0x67, 0x05, + 0xc6, 0x83, 0x01, 0xd6, 0x07, 0xc6, 0x01, 0xf6, 0xb8, 0x17, 0x2e, 
0x74, 0x43, 0x0d, 0x41, 0x72, + 0x8f, 0x50, 0xd0, 0x65, 0xd2, 0x63, 0xc3, 0xe5, 0xab, 0x2d, 0x29, 0x80, 0xeb, 0x3d, 0x36, 0x5c, + 0x46, 0x43, 0x51, 0xd4, 0x6f, 0x96, 0xe0, 0x78, 0xe6, 0x0d, 0x98, 0x8f, 0xaf, 0xa2, 0xcb, 0x71, + 0x15, 0x1d, 0xe6, 0x5a, 0x51, 0xf9, 0x50, 0xd7, 0x8a, 0x3a, 0x39, 0x0a, 0xcb, 0x3a, 0x75, 0x2f, + 0xd6, 0x9b, 0xfa, 0x97, 0x0a, 0x54, 0x85, 0x50, 0x63, 0x2f, 0xf9, 0xac, 0x8c, 0x08, 0x9a, 0x4e, + 0x33, 0xb6, 0x6d, 0xc3, 0x76, 0x74, 0x1f, 0x93, 0x08, 0x6a, 0xec, 0x95, 0x8a, 0x25, 0x4a, 0xb7, + 0xe6, 0x78, 0x78, 0xd3, 0xb0, 0x9d, 0x1e, 0x23, 0x42, 0x6d, 0x68, 0x32, 0x7e, 0x94, 0x15, 0x61, + 0x3a, 0x76, 0x56, 0x9b, 0xa7, 0x04, 0x84, 0x09, 0x61, 0xe6, 0xab, 0xdf, 0x57, 0x60, 0x21, 0xa1, + 0xd9, 0x5f, 0xbc, 0x46, 0xfc, 0x5e, 0x19, 0xea, 0xb1, 0x5e, 0x1e, 0xd3, 0x80, 0x35, 0x58, 0x14, + 0x99, 0x33, 0x3e, 0x0e, 0x26, 0xbb, 0xd2, 0xb2, 0xc0, 0x29, 0x7a, 0x38, 0x60, 0x41, 0xcf, 0x6d, + 0x58, 0x30, 0x1e, 0x19, 0xd6, 0x80, 0x5a, 0xd0, 0x44, 0xf1, 0xc4, 0x7c, 0x88, 0x1f, 0x86, 0x4d, + 0xac, 0xdd, 0x13, 0x5d, 0x6c, 0x01, 0x8a, 0x1b, 0xdd, 0x2f, 0xf2, 0xfd, 0x58, 0x7a, 0x56, 0xe1, + 0xfd, 0x22, 0xdf, 0x0f, 0xeb, 0xa3, 0xe9, 0xea, 0xf4, 0x62, 0x95, 0xcf, 0x5f, 0xe3, 0xc8, 0xaf, + 0x8f, 0xe0, 0xbe, 0x4d, 0x51, 0x89, 0xc2, 0x86, 0xc6, 0x87, 0x8e, 0xa7, 0xc7, 0xe9, 0x67, 0xc7, + 0x28, 0x8c, 0x52, 0x74, 0x43, 0x26, 0xea, 0x9f, 0x2b, 0x50, 0x0b, 0xfd, 0xc8, 0x98, 0x1e, 0xea, + 0xc0, 0x12, 0xcd, 0xed, 0x4f, 0x6a, 0x78, 0x4c, 0x27, 0x21, 0x42, 0xd4, 0x96, 0xb5, 0xdc, 0x86, + 0x26, 0x65, 0x15, 0x57, 0xf5, 0xb8, 0x8e, 0xf2, 0x85, 0x98, 0x2c, 0xfa, 0xfb, 0xab, 0x12, 0xa0, + 0xb4, 0x2b, 0xf9, 0x85, 0x31, 0xb2, 0x78, 0xa7, 0x55, 0x26, 0xef, 0xf4, 0xbb, 0x70, 0xac, 0xef, + 0x0c, 0x87, 0x16, 0xbd, 0x17, 0xe2, 0x78, 0x07, 0x93, 0x99, 0xdb, 0x22, 0xa3, 0x61, 0x7a, 0x62, + 0xea, 0x7b, 0x13, 0x4e, 0x68, 0xd8, 0x71, 0xb1, 0x1d, 0xba, 0xfe, 0x7b, 0xce, 0xde, 0x21, 0xe2, + 0xdb, 0x53, 0xd0, 0xca, 0xa2, 0xe7, 0xab, 0xe6, 0x11, 0xb4, 0xd6, 0xf6, 0x71, 0xff, 0x21, 0x5d, + 0x2b, 
0x1d, 0x25, 0xfb, 0xa5, 0x05, 0xd5, 0x81, 0xd3, 0x67, 0x4f, 0x9b, 0xf2, 0x8d, 0x25, 0xf1, + 0x5d, 0xb0, 0xa7, 0x7f, 0x1a, 0x4e, 0x66, 0x56, 0xcb, 0xa5, 0x42, 0xd0, 0xbc, 0x8b, 0x83, 0x8d, + 0x47, 0xd8, 0x0e, 0xc3, 0x67, 0xf5, 0x07, 0xa5, 0x58, 0xa0, 0x4e, 0x8b, 0x0e, 0x91, 0x35, 0x84, + 0xba, 0xb0, 0x14, 0xa1, 0x60, 0x42, 0xcd, 0x1e, 0x1a, 0x64, 0x4f, 0x74, 0x66, 0x9f, 0x28, 0xd2, + 0x4a, 0xe8, 0xfb, 0x82, 0xd1, 0x13, 0x2a, 0x21, 0x2c, 0x71, 0xce, 0x5c, 0x4e, 0x9e, 0x33, 0xbf, + 0x0b, 0x28, 0x1e, 0x8a, 0xf3, 0xb5, 0x79, 0x65, 0x82, 0x57, 0x63, 0x9a, 0x6e, 0xf2, 0x7d, 0xa3, + 0x9c, 0xb7, 0x5f, 0xa6, 0x8f, 0xf4, 0xf6, 0x8b, 0x7a, 0x06, 0x4e, 0x91, 0x00, 0xfb, 0x3e, 0x0e, + 0x3c, 0xab, 0xbf, 0x8e, 0xfd, 0xbe, 0x67, 0xb9, 0x81, 0x13, 0x26, 0xb2, 0xa8, 0x3a, 0x9c, 0xce, + 0x29, 0xe7, 0xea, 0x7e, 0x13, 0xea, 0x66, 0x04, 0xce, 0xda, 0xe7, 0x48, 0xd2, 0x6a, 0x71, 0x02, + 0xf5, 0x7d, 0x68, 0x26, 0x11, 0x32, 0xf3, 0x5e, 0x11, 0x54, 0xf6, 0xf1, 0xc0, 0x15, 0x17, 0x79, + 0xc8, 0x6f, 0xa2, 0x75, 0xb6, 0x76, 0x79, 0x88, 0x0f, 0xc4, 0x3e, 0x78, 0x8d, 0x42, 0xbe, 0x88, + 0x0f, 0xc2, 0xb6, 0x49, 0x8f, 0x11, 0x78, 0x56, 0x3f, 0xd9, 0xb6, 0x8c, 0xf2, 0xa8, 0x6d, 0xa4, + 0xdb, 0x86, 0x0c, 0xcc, 0xdb, 0x76, 0x3a, 0xf7, 0xa1, 0x03, 0x4a, 0x0b, 0xae, 0x63, 0xf2, 0xdf, + 0xea, 0x9f, 0x28, 0xb0, 0x98, 0xc2, 0x98, 0xf0, 0x6c, 0xe3, 0x05, 0x98, 0x15, 0xf5, 0x96, 0xd2, + 0xc9, 0xa1, 0x8c, 0x97, 0x26, 0x50, 0x50, 0x07, 0x16, 0x23, 0x8b, 0x16, 0x74, 0xe5, 0x74, 0x5f, + 0xc4, 0x17, 0x2e, 0x54, 0xdc, 0x66, 0x3f, 0x01, 0x51, 0xfb, 0xd0, 0x4c, 0x62, 0x4d, 0x32, 0xa6, + 0x0e, 0x25, 0xaf, 0xfa, 0xf7, 0x0a, 0xcc, 0x30, 0x58, 0x66, 0x67, 0x4b, 0xd3, 0x41, 0x29, 0x39, + 0x1d, 0xbc, 0x06, 0x75, 0xc6, 0x47, 0x0f, 0xaf, 0x71, 0xcd, 0xcb, 0xdb, 0xbb, 0x8c, 0x35, 0x1d, + 0xad, 0x30, 0x0c, 0x7f, 0x93, 0x66, 0x30, 0x7b, 0xa1, 0x2b, 0x13, 0x91, 0x02, 0x5c, 0xa7, 0x30, + 0xea, 0x72, 0x49, 0xc8, 0xcc, 0xd7, 0x30, 0x63, 0x7c, 0x33, 0xdf, 0x87, 0x5a, 0xa6, 0x4f, 0xeb, + 0xa5, 0x36, 0x38, 0xd5, 0x6d, 0xfa, 0xf6, 
0x5d, 0x7a, 0x63, 0x12, 0x7d, 0x41, 0x3e, 0x24, 0x7f, + 0x26, 0x75, 0xc2, 0x2c, 0x91, 0x8d, 0x3c, 0xf6, 0x04, 0x34, 0xa3, 0x51, 0x3f, 0x80, 0x13, 0xb9, + 0x38, 0xe8, 0x8d, 0xf0, 0xa1, 0x51, 0xd3, 0xb3, 0x1e, 0xf1, 0x8d, 0x85, 0x79, 0xf9, 0x51, 0x83, + 0x35, 0x8a, 0xb0, 0x4e, 0xcb, 0xc5, 0x13, 0xa4, 0xec, 0xeb, 0xd2, 0xb3, 0x50, 0x15, 0xcf, 0x73, + 0xa3, 0x59, 0x28, 0x6f, 0xaf, 0x75, 0x9b, 0x53, 0xe4, 0xc7, 0xce, 0x7a, 0xb7, 0xa9, 0xa0, 0x2a, + 0x54, 0x7a, 0x6b, 0xdb, 0xdd, 0x66, 0xe9, 0xd2, 0x10, 0x9a, 0xc9, 0x17, 0xaa, 0xd1, 0x0a, 0x1c, + 0xeb, 0x6a, 0x5b, 0xdd, 0xf6, 0xdd, 0xf6, 0x76, 0x67, 0x6b, 0x53, 0xef, 0x6a, 0x9d, 0xf7, 0xda, + 0xdb, 0x1b, 0xcd, 0x29, 0x74, 0x1e, 0x4e, 0xc7, 0x0b, 0xde, 0xd9, 0xea, 0x6d, 0xeb, 0xdb, 0x5b, + 0xfa, 0xda, 0xd6, 0xe6, 0x76, 0xbb, 0xb3, 0xb9, 0xa1, 0x35, 0x15, 0x74, 0x1a, 0x4e, 0xc4, 0x51, + 0xee, 0x74, 0xd6, 0x3b, 0xda, 0xc6, 0x1a, 0xf9, 0xdd, 0xbe, 0xd7, 0x2c, 0x5d, 0x7a, 0x03, 0x1a, + 0xd2, 0xc5, 0x15, 0x22, 0x52, 0x77, 0x6b, 0xbd, 0x39, 0x85, 0x1a, 0x50, 0x8b, 0xf3, 0xa9, 0x42, + 0x65, 0x73, 0x6b, 0x7d, 0xa3, 0x59, 0x42, 0x00, 0x33, 0xdb, 0x6d, 0xed, 0xee, 0xc6, 0x76, 0xb3, + 0x7c, 0xe9, 0x56, 0xf2, 0x51, 0x0d, 0x8c, 0x16, 0xa1, 0xd1, 0x6b, 0x6f, 0xae, 0xdf, 0xd9, 0xfa, + 0x8a, 0xae, 0x6d, 0xb4, 0xd7, 0xdf, 0x6f, 0x4e, 0xa1, 0x25, 0x68, 0x0a, 0xd0, 0xe6, 0xd6, 0x36, + 0x83, 0x2a, 0x97, 0x1e, 0x26, 0xd6, 0xac, 0x18, 0x1d, 0x87, 0xc5, 0xb0, 0x4a, 0x7d, 0x4d, 0xdb, + 0x68, 0x6f, 0x6f, 0x10, 0x49, 0x24, 0xb0, 0xb6, 0xb3, 0xb9, 0xd9, 0xd9, 0xbc, 0xdb, 0x54, 0x08, + 0xd7, 0x08, 0xbc, 0xf1, 0x95, 0x0e, 0x41, 0x2e, 0xc9, 0xc8, 0x3b, 0x9b, 0x5f, 0xdc, 0xdc, 0xfa, + 0xf2, 0x66, 0xb3, 0x7c, 0xe9, 0x57, 0xe3, 0x39, 0x15, 0xd1, 0xbc, 0x72, 0x12, 0x56, 0x52, 0x35, + 0xea, 0x1b, 0xef, 0x6d, 0x6c, 0x6e, 0x37, 0xa7, 0xe4, 0xc2, 0xde, 0x76, 0x5b, 0x8b, 0x0a, 0x95, + 0x64, 0xe1, 0x56, 0xb7, 0x1b, 0x16, 0x96, 0xe4, 0xc2, 0xf5, 0x8d, 0x7b, 0x1b, 0x11, 0x65, 0xf9, + 0xd2, 0xd3, 0x00, 0xd1, 0xf8, 0x41, 0x75, 0x98, 0x5d, 0xdb, 0xda, 0xd9, 0xdc, 
0xde, 0xd0, 0x9a, + 0x53, 0xa8, 0x06, 0xd3, 0x77, 0xdb, 0x3b, 0x77, 0x37, 0x9a, 0xca, 0xa5, 0x8b, 0x30, 0x17, 0xb7, + 0x26, 0x82, 0xd7, 0x7b, 0xbf, 0xb7, 0xbd, 0x71, 0x9f, 0x68, 0x64, 0x0e, 0xaa, 0x6b, 0x77, 0xb5, + 0xad, 0x9d, 0xee, 0xdb, 0xbd, 0xa6, 0x72, 0xfd, 0xff, 0x96, 0xc2, 0x07, 0x75, 0x7b, 0xd8, 0xa3, + 0xd7, 0x05, 0xd6, 0x61, 0x56, 0x3c, 0x68, 0x2f, 0xed, 0xda, 0xc8, 0x0f, 0xf0, 0xb7, 0x4e, 0x66, + 0x96, 0xf1, 0xb8, 0x60, 0x0a, 0xbd, 0x47, 0xf7, 0xdc, 0x63, 0x4f, 0x5a, 0x9d, 0x4b, 0xec, 0x73, + 0xa7, 0x5e, 0xce, 0x6a, 0x9d, 0x2f, 0xc0, 0x08, 0xf9, 0xbe, 0x0f, 0xf3, 0xf2, 0xdb, 0x91, 0xe8, + 0xbc, 0xbc, 0x1f, 0x9e, 0xf1, 0x2c, 0x65, 0x4b, 0x2d, 0x42, 0x09, 0x59, 0xeb, 0xd0, 0x4c, 0xbe, + 0x1d, 0x89, 0xa4, 0x34, 0x93, 0x9c, 0xa7, 0x29, 0x5b, 0x4f, 0x17, 0x23, 0xc5, 0x2b, 0x48, 0x3d, + 0x89, 0xf8, 0x54, 0xf1, 0x23, 0x73, 0x19, 0x15, 0xe4, 0xbd, 0x44, 0xc7, 0x94, 0x23, 0xcf, 0x9a, + 0x28, 0xf1, 0x0a, 0x61, 0xc6, 0x83, 0x65, 0xb2, 0x72, 0xb2, 0x1f, 0xab, 0x52, 0xa7, 0xd0, 0x2f, + 0xc1, 0x42, 0x22, 0x17, 0x1c, 0x49, 0x84, 0xd9, 0x29, 0xee, 0xad, 0xa7, 0x0a, 0x71, 0xe4, 0x5e, + 0x8d, 0xe7, 0x7b, 0x27, 0x7b, 0x35, 0x23, 0x8f, 0x3c, 0xd9, 0xab, 0x99, 0xe9, 0xe2, 0xd4, 0x10, + 0xa5, 0xdc, 0x6e, 0xd9, 0x10, 0xb3, 0x72, 0xc9, 0x5b, 0xe7, 0x0b, 0x30, 0xe2, 0x0a, 0x49, 0x64, + 0x77, 0xcb, 0x0a, 0xc9, 0xce, 0x1b, 0x6f, 0x3d, 0x55, 0x88, 0x93, 0xec, 0xc9, 0x28, 0xab, 0x34, + 0xdd, 0x93, 0xa9, 0xcc, 0xe6, 0x74, 0x4f, 0xa6, 0x93, 0x52, 0x79, 0x4f, 0x26, 0xf2, 0x40, 0xd5, + 0xc2, 0x1c, 0xb5, 0xac, 0x9e, 0xcc, 0xce, 0x63, 0x53, 0xa7, 0xd0, 0x63, 0x58, 0xcd, 0x4b, 0x45, + 0x42, 0xcf, 0x1f, 0x22, 0x63, 0xaa, 0xf5, 0xc2, 0x64, 0xc8, 0x61, 0xc5, 0x18, 0x50, 0x7a, 0xf9, + 0x84, 0x9e, 0x91, 0xd5, 0x9d, 0xb3, 0x3c, 0x6b, 0x3d, 0x3b, 0x0e, 0x2d, 0xac, 0xe6, 0x2e, 0x54, + 0x45, 0x92, 0x13, 0x92, 0x5c, 0x60, 0x22, 0xb9, 0xaa, 0x75, 0x2a, 0xbb, 0x30, 0x64, 0xf4, 0x05, + 0xa8, 0x10, 0x28, 0x5a, 0x49, 0xe2, 0x09, 0x06, 0xab, 0xe9, 0x82, 0x90, 0xb8, 0x0d, 0x33, 0x2c, + 0x7b, 0x07, 0x49, 
0xc7, 0x87, 0x52, 0x76, 0x51, 0xab, 0x95, 0x55, 0x14, 0xb2, 0xe8, 0xb2, 0x7f, + 0x0f, 0xc2, 0x93, 0x71, 0xd0, 0x99, 0xe4, 0xab, 0xd1, 0x72, 0xd6, 0x4f, 0xeb, 0x6c, 0x6e, 0x79, + 0xdc, 0x66, 0x13, 0xbb, 0xa4, 0xe7, 0x0b, 0x76, 0xfd, 0xb3, 0x6c, 0x36, 0xfb, 0x2c, 0x81, 0x75, + 0x6e, 0xfa, 0xac, 0x01, 0x3d, 0x93, 0x6b, 0xef, 0x52, 0x15, 0xcf, 0x8e, 0x43, 0x8b, 0x0f, 0x8d, + 0xe4, 0xf3, 0x4f, 0x6a, 0xd1, 0xd3, 0x6c, 0x59, 0x43, 0x23, 0xe7, 0xc9, 0x37, 0x75, 0x0a, 0xed, + 0xc3, 0xb1, 0x8c, 0x37, 0xe1, 0xd0, 0xb3, 0xf9, 0xfe, 0x57, 0xaa, 0xe5, 0xb9, 0xb1, 0x78, 0xf1, + 0x9a, 0x32, 0x4e, 0xe0, 0xe5, 0x9a, 0xf2, 0x53, 0x00, 0xe4, 0x9a, 0x8a, 0x8e, 0xf2, 0xa9, 0x21, + 0x72, 0x1f, 0x72, 0x22, 0xeb, 0x58, 0x3a, 0xc3, 0x10, 0x53, 0x1e, 0x63, 0x1f, 0x8e, 0x65, 0x6c, + 0x31, 0xc8, 0xc2, 0xe6, 0x6f, 0x7d, 0xc8, 0xc2, 0x16, 0xed, 0x55, 0x4c, 0xa1, 0x0f, 0x00, 0xdd, + 0xc5, 0x81, 0x1c, 0xca, 0xf9, 0x48, 0x1a, 0xa8, 0xc9, 0xdd, 0x8c, 0x1c, 0xfb, 0x94, 0xb6, 0x35, + 0xd4, 0xa9, 0x6b, 0x0a, 0xb2, 0xd9, 0x75, 0x93, 0xd4, 0x62, 0x1c, 0x5d, 0x48, 0x76, 0x5b, 0xde, + 0x7a, 0xbe, 0x75, 0x71, 0x02, 0xcc, 0xb0, 0x2d, 0x76, 0xf2, 0xfd, 0x51, 0xb1, 0x1e, 0xbc, 0x90, + 0x6f, 0x26, 0xf2, 0x1a, 0x3b, 0x5d, 0x5f, 0xee, 0x6a, 0x3b, 0x8c, 0xe7, 0x62, 0xc6, 0x74, 0x2e, + 0x3f, 0x1f, 0x24, 0x27, 0x9e, 0xcb, 0x32, 0xa0, 0xeb, 0xbf, 0x5b, 0x86, 0x39, 0x96, 0x37, 0xc3, + 0xc3, 0xcf, 0xfb, 0x00, 0x51, 0x0a, 0x1a, 0x3a, 0x9d, 0x94, 0x51, 0xca, 0xeb, 0x6b, 0x9d, 0xc9, + 0x2b, 0x8e, 0xbb, 0xb9, 0x58, 0x6a, 0x97, 0xec, 0xe6, 0xd2, 0x99, 0x6a, 0xb2, 0x9b, 0xcb, 0xc8, + 0x09, 0x53, 0xa7, 0xd0, 0xbb, 0x50, 0x0b, 0x33, 0x89, 0x64, 0xe3, 0x49, 0xa6, 0x44, 0xb5, 0x4e, + 0xe7, 0x94, 0xc6, 0xa5, 0x8b, 0x25, 0x08, 0xc9, 0xd2, 0xa5, 0x93, 0x8f, 0x64, 0xe9, 0xb2, 0x32, + 0x8b, 0xa2, 0xf6, 0xb2, 0x23, 0xfc, 0x8c, 0xf6, 0x4a, 0x19, 0x1d, 0x19, 0xed, 0x95, 0xcf, 0xfe, + 0xd5, 0xa9, 0x3b, 0xb7, 0x7f, 0xf8, 0x93, 0x33, 0xca, 0x8f, 0x7e, 0x72, 0x66, 0xea, 0x57, 0x3e, + 0x3a, 0xa3, 0xfc, 0xf0, 0xa3, 0x33, 0xca, 0x3f, 0x7f, 
0x74, 0x46, 0xf9, 0xf1, 0x47, 0x67, 0x94, + 0x6f, 0xfd, 0xe7, 0x99, 0xa9, 0x0f, 0xd4, 0x87, 0x37, 0xfc, 0x2b, 0x96, 0x73, 0xb5, 0xef, 0x59, + 0x97, 0x0d, 0xd7, 0xba, 0xea, 0x3e, 0xdc, 0xbb, 0x6a, 0xb8, 0x96, 0x7f, 0x95, 0xf3, 0xbd, 0xfa, + 0xe8, 0xc5, 0x07, 0x33, 0xf4, 0x5f, 0x4a, 0xbd, 0xf4, 0xff, 0x01, 0x00, 0x00, 0xff, 0xff, 0x15, + 0xf3, 0x86, 0xa5, 0x0c, 0x6c, 0x00, 0x00, } // Reference imports to suppress errors if they are not otherwise used. @@ -10251,6 +10522,14 @@ type RuntimeServiceClient interface { ListMetricDescriptors(ctx context.Context, in *ListMetricDescriptorsRequest, opts ...grpc.CallOption) (*ListMetricDescriptorsResponse, error) // ListPodSandboxMetrics gets pod sandbox metrics from CRI Runtime ListPodSandboxMetrics(ctx context.Context, in *ListPodSandboxMetricsRequest, opts ...grpc.CallOption) (*ListPodSandboxMetricsResponse, error) + // RuntimeConfig returns configuration information of the runtime. + // A couple of notes: + // - The RuntimeConfigRequest object is not to be confused with the contents of UpdateRuntimeConfigRequest. + // The former is for having runtime tell Kubelet what to do, the latter vice versa. + // - It is the expectation of the Kubelet that these fields are static for the lifecycle of the Kubelet. + // The Kubelet will not re-request the RuntimeConfiguration after startup, and CRI implementations should + // avoid updating them without a full node reboot. + RuntimeConfig(ctx context.Context, in *RuntimeConfigRequest, opts ...grpc.CallOption) (*RuntimeConfigResponse, error) } type runtimeServiceClient struct { @@ -10536,6 +10815,15 @@ func (c *runtimeServiceClient) ListPodSandboxMetrics(ctx context.Context, in *Li return out, nil } +func (c *runtimeServiceClient) RuntimeConfig(ctx context.Context, in *RuntimeConfigRequest, opts ...grpc.CallOption) (*RuntimeConfigResponse, error) { + out := new(RuntimeConfigResponse) + err := c.cc.Invoke(ctx, "/runtime.v1.RuntimeService/RuntimeConfig", in, out, opts...) 
+ if err != nil { + return nil, err + } + return out, nil +} + // RuntimeServiceServer is the server API for RuntimeService service. type RuntimeServiceServer interface { // Version returns the runtime name, runtime version, and runtime API version. @@ -10626,6 +10914,14 @@ type RuntimeServiceServer interface { ListMetricDescriptors(context.Context, *ListMetricDescriptorsRequest) (*ListMetricDescriptorsResponse, error) // ListPodSandboxMetrics gets pod sandbox metrics from CRI Runtime ListPodSandboxMetrics(context.Context, *ListPodSandboxMetricsRequest) (*ListPodSandboxMetricsResponse, error) + // RuntimeConfig returns configuration information of the runtime. + // A couple of notes: + // - The RuntimeConfigRequest object is not to be confused with the contents of UpdateRuntimeConfigRequest. + // The former is for having runtime tell Kubelet what to do, the latter vice versa. + // - It is the expectation of the Kubelet that these fields are static for the lifecycle of the Kubelet. + // The Kubelet will not re-request the RuntimeConfiguration after startup, and CRI implementations should + // avoid updating them without a full node reboot. + RuntimeConfig(context.Context, *RuntimeConfigRequest) (*RuntimeConfigResponse, error) } // UnimplementedRuntimeServiceServer can be embedded to have forward compatible implementations. 
@@ -10716,6 +11012,9 @@ func (*UnimplementedRuntimeServiceServer) ListMetricDescriptors(ctx context.Cont func (*UnimplementedRuntimeServiceServer) ListPodSandboxMetrics(ctx context.Context, req *ListPodSandboxMetricsRequest) (*ListPodSandboxMetricsResponse, error) { return nil, status.Errorf(codes.Unimplemented, "method ListPodSandboxMetrics not implemented") } +func (*UnimplementedRuntimeServiceServer) RuntimeConfig(ctx context.Context, req *RuntimeConfigRequest) (*RuntimeConfigResponse, error) { + return nil, status.Errorf(codes.Unimplemented, "method RuntimeConfig not implemented") +} func RegisterRuntimeServiceServer(s *grpc.Server, srv RuntimeServiceServer) { s.RegisterService(&_RuntimeService_serviceDesc, srv) @@ -11228,6 +11527,24 @@ func _RuntimeService_ListPodSandboxMetrics_Handler(srv interface{}, ctx context. return interceptor(ctx, in, info, handler) } +func _RuntimeService_RuntimeConfig_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { + in := new(RuntimeConfigRequest) + if err := dec(in); err != nil { + return nil, err + } + if interceptor == nil { + return srv.(RuntimeServiceServer).RuntimeConfig(ctx, in) + } + info := &grpc.UnaryServerInfo{ + Server: srv, + FullMethod: "/runtime.v1.RuntimeService/RuntimeConfig", + } + handler := func(ctx context.Context, req interface{}) (interface{}, error) { + return srv.(RuntimeServiceServer).RuntimeConfig(ctx, req.(*RuntimeConfigRequest)) + } + return interceptor(ctx, in, info, handler) +} + var _RuntimeService_serviceDesc = grpc.ServiceDesc{ ServiceName: "runtime.v1.RuntimeService", HandlerType: (*RuntimeServiceServer)(nil), @@ -11340,6 +11657,10 @@ var _RuntimeService_serviceDesc = grpc.ServiceDesc{ MethodName: "ListPodSandboxMetrics", Handler: _RuntimeService_ListPodSandboxMetrics_Handler, }, + { + MethodName: "RuntimeConfig", + Handler: _RuntimeService_RuntimeConfig_Handler, + }, }, Streams: []grpc.StreamDesc{ { @@ 
-14150,6 +14471,15 @@ func (m *ImageSpec) MarshalToSizedBuffer(dAtA []byte) (int, error) { _ = i var l int _ = l + if len(m.UserSpecifiedImage) > 0 { + i -= len(m.UserSpecifiedImage) + copy(dAtA[i:], m.UserSpecifiedImage) + i = encodeVarintApi(dAtA, i, uint64(len(m.UserSpecifiedImage))) + i-- + dAtA[i] = 0x1 + i-- + dAtA[i] = 0x92 + } if len(m.Annotations) > 0 { for k := range m.Annotations { v := m.Annotations[k] @@ -17956,6 +18286,18 @@ func (m *ContainerStats) MarshalToSizedBuffer(dAtA []byte) (int, error) { _ = i var l int _ = l + if m.Swap != nil { + { + size, err := m.Swap.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintApi(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x2a + } if m.WritableLayer != nil { { size, err := m.WritableLayer.MarshalToSizedBuffer(dAtA[:i]) @@ -18282,6 +18624,58 @@ func (m *MemoryUsage) MarshalToSizedBuffer(dAtA []byte) (int, error) { return len(dAtA) - i, nil } +func (m *SwapUsage) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *SwapUsage) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *SwapUsage) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if m.SwapUsageBytes != nil { + { + size, err := m.SwapUsageBytes.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintApi(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x1a + } + if m.SwapAvailableBytes != nil { + { + size, err := m.SwapAvailableBytes.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintApi(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x12 + } + if m.Timestamp != 0 { + i = encodeVarintApi(dAtA, i, uint64(m.Timestamp)) + i-- + dAtA[i] = 0x8 + } + return len(dAtA) - 
i, nil +} + func (m *WindowsMemoryUsage) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) @@ -18302,6 +18696,18 @@ func (m *WindowsMemoryUsage) MarshalToSizedBuffer(dAtA []byte) (int, error) { _ = i var l int _ = l + if m.CommitMemoryBytes != nil { + { + size, err := m.CommitMemoryBytes.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintApi(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x2a + } if m.PageFaults != nil { { size, err := m.PageFaults.MarshalToSizedBuffer(dAtA[:i]) @@ -18882,6 +19288,92 @@ func (m *Metric) MarshalToSizedBuffer(dAtA []byte) (int, error) { return len(dAtA) - i, nil } +func (m *RuntimeConfigRequest) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *RuntimeConfigRequest) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *RuntimeConfigRequest) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + return len(dAtA) - i, nil +} + +func (m *RuntimeConfigResponse) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *RuntimeConfigResponse) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *RuntimeConfigResponse) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if m.Linux != nil { + { + size, err := m.Linux.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintApi(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0xa + } + return len(dAtA) - i, nil +} + +func (m *LinuxRuntimeConfiguration) Marshal() (dAtA 
[]byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *LinuxRuntimeConfiguration) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *LinuxRuntimeConfiguration) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if m.CgroupDriver != 0 { + i = encodeVarintApi(dAtA, i, uint64(m.CgroupDriver)) + i-- + dAtA[i] = 0x8 + } + return len(dAtA) - i, nil +} + func encodeVarintApi(dAtA []byte, offset int, v uint64) int { offset -= sovApi(v) base := offset @@ -19956,6 +20448,10 @@ func (m *ImageSpec) Size() (n int) { n += mapEntrySize + 1 + sovApi(uint64(mapEntrySize)) } } + l = len(m.UserSpecifiedImage) + if l > 0 { + n += 2 + l + sovApi(uint64(l)) + } return n } @@ -21548,6 +22044,10 @@ func (m *ContainerStats) Size() (n int) { l = m.WritableLayer.Size() n += 1 + l + sovApi(uint64(l)) } + if m.Swap != nil { + l = m.Swap.Size() + n += 1 + l + sovApi(uint64(l)) + } return n } @@ -21652,6 +22152,26 @@ func (m *MemoryUsage) Size() (n int) { return n } +func (m *SwapUsage) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + if m.Timestamp != 0 { + n += 1 + sovApi(uint64(m.Timestamp)) + } + if m.SwapAvailableBytes != nil { + l = m.SwapAvailableBytes.Size() + n += 1 + l + sovApi(uint64(l)) + } + if m.SwapUsageBytes != nil { + l = m.SwapUsageBytes.Size() + n += 1 + l + sovApi(uint64(l)) + } + return n +} + func (m *WindowsMemoryUsage) Size() (n int) { if m == nil { return 0 @@ -21673,6 +22193,10 @@ func (m *WindowsMemoryUsage) Size() (n int) { l = m.PageFaults.Size() n += 1 + l + sovApi(uint64(l)) } + if m.CommitMemoryBytes != nil { + l = m.CommitMemoryBytes.Size() + n += 1 + l + sovApi(uint64(l)) + } return n } @@ -21909,6 +22433,40 @@ func (m *Metric) Size() (n int) { return n } +func (m *RuntimeConfigRequest) Size() (n 
int) { + if m == nil { + return 0 + } + var l int + _ = l + return n +} + +func (m *RuntimeConfigResponse) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + if m.Linux != nil { + l = m.Linux.Size() + n += 1 + l + sovApi(uint64(l)) + } + return n +} + +func (m *LinuxRuntimeConfiguration) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + if m.CgroupDriver != 0 { + n += 1 + sovApi(uint64(m.CgroupDriver)) + } + return n +} + func sovApi(x uint64) (n int) { return (math_bits.Len64(x|1) + 6) / 7 } @@ -22682,6 +23240,7 @@ func (this *ImageSpec) String() string { s := strings.Join([]string{`&ImageSpec{`, `Image:` + fmt.Sprintf("%v", this.Image) + `,`, `Annotations:` + mapStringForAnnotations + `,`, + `UserSpecifiedImage:` + fmt.Sprintf("%v", this.UserSpecifiedImage) + `,`, `}`, }, "") return s @@ -23783,6 +24342,7 @@ func (this *ContainerStats) String() string { `Cpu:` + strings.Replace(this.Cpu.String(), "CpuUsage", "CpuUsage", 1) + `,`, `Memory:` + strings.Replace(this.Memory.String(), "MemoryUsage", "MemoryUsage", 1) + `,`, `WritableLayer:` + strings.Replace(this.WritableLayer.String(), "FilesystemUsage", "FilesystemUsage", 1) + `,`, + `Swap:` + strings.Replace(this.Swap.String(), "SwapUsage", "SwapUsage", 1) + `,`, `}`, }, "") return s @@ -23840,6 +24400,18 @@ func (this *MemoryUsage) String() string { }, "") return s } +func (this *SwapUsage) String() string { + if this == nil { + return "nil" + } + s := strings.Join([]string{`&SwapUsage{`, + `Timestamp:` + fmt.Sprintf("%v", this.Timestamp) + `,`, + `SwapAvailableBytes:` + strings.Replace(this.SwapAvailableBytes.String(), "UInt64Value", "UInt64Value", 1) + `,`, + `SwapUsageBytes:` + strings.Replace(this.SwapUsageBytes.String(), "UInt64Value", "UInt64Value", 1) + `,`, + `}`, + }, "") + return s +} func (this *WindowsMemoryUsage) String() string { if this == nil { return "nil" @@ -23849,6 +24421,7 @@ func (this *WindowsMemoryUsage) String() string { `WorkingSetBytes:` + 
strings.Replace(this.WorkingSetBytes.String(), "UInt64Value", "UInt64Value", 1) + `,`, `AvailableBytes:` + strings.Replace(this.AvailableBytes.String(), "UInt64Value", "UInt64Value", 1) + `,`, `PageFaults:` + strings.Replace(this.PageFaults.String(), "UInt64Value", "UInt64Value", 1) + `,`, + `CommitMemoryBytes:` + strings.Replace(this.CommitMemoryBytes.String(), "UInt64Value", "UInt64Value", 1) + `,`, `}`, }, "") return s @@ -24033,6 +24606,35 @@ func (this *Metric) String() string { }, "") return s } +func (this *RuntimeConfigRequest) String() string { + if this == nil { + return "nil" + } + s := strings.Join([]string{`&RuntimeConfigRequest{`, + `}`, + }, "") + return s +} +func (this *RuntimeConfigResponse) String() string { + if this == nil { + return "nil" + } + s := strings.Join([]string{`&RuntimeConfigResponse{`, + `Linux:` + strings.Replace(this.Linux.String(), "LinuxRuntimeConfiguration", "LinuxRuntimeConfiguration", 1) + `,`, + `}`, + }, "") + return s +} +func (this *LinuxRuntimeConfiguration) String() string { + if this == nil { + return "nil" + } + s := strings.Join([]string{`&LinuxRuntimeConfiguration{`, + `CgroupDriver:` + fmt.Sprintf("%v", this.CgroupDriver) + `,`, + `}`, + }, "") + return s +} func valueToStringApi(v interface{}) string { rv := reflect.ValueOf(v) if rv.IsNil() { @@ -32161,6 +32763,38 @@ func (m *ImageSpec) Unmarshal(dAtA []byte) error { } m.Annotations[mapkey] = mapvalue iNdEx = postIndex + case 18: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field UserSpecifiedImage", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowApi + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthApi + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return 
ErrInvalidLengthApi + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.UserSpecifiedImage = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipApi(dAtA[iNdEx:]) @@ -43879,6 +44513,42 @@ func (m *ContainerStats) Unmarshal(dAtA []byte) error { return err } iNdEx = postIndex + case 5: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Swap", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowApi + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthApi + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthApi + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if m.Swap == nil { + m.Swap = &SwapUsage{} + } + if err := m.Swap.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipApi(dAtA[iNdEx:]) @@ -44661,7 +45331,7 @@ func (m *MemoryUsage) Unmarshal(dAtA []byte) error { } return nil } -func (m *WindowsMemoryUsage) Unmarshal(dAtA []byte) error { +func (m *SwapUsage) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -44684,10 +45354,10 @@ func (m *WindowsMemoryUsage) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: WindowsMemoryUsage: wiretype end group for non-group") + return fmt.Errorf("proto: SwapUsage: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: WindowsMemoryUsage: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: SwapUsage: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -44711,7 +45381,7 @@ func (m *WindowsMemoryUsage) Unmarshal(dAtA []byte) error { } case 2: if wireType != 
2 { - return fmt.Errorf("proto: wrong wireType = %d for field WorkingSetBytes", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SwapAvailableBytes", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -44738,52 +45408,16 @@ func (m *WindowsMemoryUsage) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if m.WorkingSetBytes == nil { - m.WorkingSetBytes = &UInt64Value{} + if m.SwapAvailableBytes == nil { + m.SwapAvailableBytes = &UInt64Value{} } - if err := m.WorkingSetBytes.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.SwapAvailableBytes.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field AvailableBytes", wireType) - } - var msglen int - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowApi - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - msglen |= int(b&0x7F) << shift - if b < 0x80 { - break - } - } - if msglen < 0 { - return ErrInvalidLengthApi - } - postIndex := iNdEx + msglen - if postIndex < 0 { - return ErrInvalidLengthApi - } - if postIndex > l { - return io.ErrUnexpectedEOF - } - if m.AvailableBytes == nil { - m.AvailableBytes = &UInt64Value{} - } - if err := m.AvailableBytes.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - iNdEx = postIndex - case 4: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field PageFaults", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SwapUsageBytes", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -44810,10 +45444,223 @@ func (m *WindowsMemoryUsage) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if m.PageFaults == nil { - m.PageFaults = &UInt64Value{} + if m.SwapUsageBytes == nil { + m.SwapUsageBytes = &UInt64Value{} } - if err := 
m.PageFaults.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.SwapUsageBytes.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipApi(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthApi + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *WindowsMemoryUsage) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowApi + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: WindowsMemoryUsage: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: WindowsMemoryUsage: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field Timestamp", wireType) + } + m.Timestamp = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowApi + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.Timestamp |= int64(b&0x7F) << shift + if b < 0x80 { + break + } + } + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field WorkingSetBytes", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowApi + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthApi + } + 
postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthApi + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if m.WorkingSetBytes == nil { + m.WorkingSetBytes = &UInt64Value{} + } + if err := m.WorkingSetBytes.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 3: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field AvailableBytes", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowApi + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthApi + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthApi + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if m.AvailableBytes == nil { + m.AvailableBytes = &UInt64Value{} + } + if err := m.AvailableBytes.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 4: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field PageFaults", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowApi + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthApi + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthApi + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if m.PageFaults == nil { + m.PageFaults = &UInt64Value{} + } + if err := m.PageFaults.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 5: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field CommitMemoryBytes", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + 
return ErrIntOverflowApi + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthApi + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthApi + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if m.CommitMemoryBytes == nil { + m.CommitMemoryBytes = &UInt64Value{} + } + if err := m.CommitMemoryBytes.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex @@ -46261,6 +47108,211 @@ func (m *Metric) Unmarshal(dAtA []byte) error { } return nil } +func (m *RuntimeConfigRequest) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowApi + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: RuntimeConfigRequest: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: RuntimeConfigRequest: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + default: + iNdEx = preIndex + skippy, err := skipApi(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthApi + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *RuntimeConfigResponse) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowApi + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= 
uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: RuntimeConfigResponse: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: RuntimeConfigResponse: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Linux", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowApi + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthApi + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthApi + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if m.Linux == nil { + m.Linux = &LinuxRuntimeConfiguration{} + } + if err := m.Linux.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipApi(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthApi + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *LinuxRuntimeConfiguration) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowApi + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: LinuxRuntimeConfiguration: wiretype end group for non-group") + } + if 
fieldNum <= 0 { + return fmt.Errorf("proto: LinuxRuntimeConfiguration: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 0 { + return fmt.Errorf("proto: wrong wireType = %d for field CgroupDriver", wireType) + } + m.CgroupDriver = 0 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowApi + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + m.CgroupDriver |= CgroupDriver(b&0x7F) << shift + if b < 0x80 { + break + } + } + default: + iNdEx = preIndex + skippy, err := skipApi(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthApi + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} func skipApi(dAtA []byte) (n int, err error) { l := len(dAtA) iNdEx := 0 diff --git a/cluster-autoscaler/vendor/k8s.io/cri-api/pkg/apis/runtime/v1/api.proto b/cluster-autoscaler/vendor/k8s.io/cri-api/pkg/apis/runtime/v1/api.proto index 3e20c42c6097..e16688d83863 100644 --- a/cluster-autoscaler/vendor/k8s.io/cri-api/pkg/apis/runtime/v1/api.proto +++ b/cluster-autoscaler/vendor/k8s.io/cri-api/pkg/apis/runtime/v1/api.proto @@ -131,6 +131,15 @@ service RuntimeService { // ListPodSandboxMetrics gets pod sandbox metrics from CRI Runtime rpc ListPodSandboxMetrics(ListPodSandboxMetricsRequest) returns (ListPodSandboxMetricsResponse) {} + + // RuntimeConfig returns configuration information of the runtime. + // A couple of notes: + // - The RuntimeConfigRequest object is not to be confused with the contents of UpdateRuntimeConfigRequest. + // The former is for having runtime tell Kubelet what to do, the latter vice versa. + // - It is the expectation of the Kubelet that these fields are static for the lifecycle of the Kubelet. 
+ // The Kubelet will not re-request the RuntimeConfiguration after startup, and CRI implementations should + // avoid updating them without a full node reboot. + rpc RuntimeConfig(RuntimeConfigRequest) returns (RuntimeConfigResponse) {} } // ImageService defines the public APIs for managing images. @@ -199,7 +208,7 @@ message PortMapping { } enum MountPropagation { - // No mount propagation ("private" in Linux terminology). + // No mount propagation ("rprivate" in Linux terminology). PROPAGATION_PRIVATE = 0; // Mounts get propagated from the host to the container ("rslave" in Linux). PROPAGATION_HOST_TO_CONTAINER = 1; @@ -770,6 +779,9 @@ message ImageSpec { // ImageSpec Annotations can be used to help the runtime target specific // images in multi-arch images. map<string, string> annotations = 2; + // The container image reference specified by the user (e.g. image[:tag] or digest). + // Only set if available within the RPC context. + string user_specified_image = 18; } message KeyValue { @@ -1627,6 +1639,8 @@ message ContainerStats { MemoryUsage memory = 3; // Usage of the writable layer. FilesystemUsage writable_layer = 4; + // Swap usage gathered from the container. + SwapUsage swap = 5; } // WindowsContainerStats provides the resource usage statistics for a container specific for Windows @@ -1681,16 +1695,27 @@ message MemoryUsage { UInt64Value major_page_faults = 7; } +message SwapUsage { + // Timestamp in nanoseconds at which the information were collected. Must be > 0. + int64 timestamp = 1; + // Available swap for use. This is defined as the swap limit - swapUsageBytes. + UInt64Value swap_available_bytes = 2; + // Total memory in use. This includes all memory regardless of when it was accessed. + UInt64Value swap_usage_bytes = 3; +} + // WindowsMemoryUsage provides the memory usage information specific to Windows message WindowsMemoryUsage { // Timestamp in nanoseconds at which the information were collected. Must be > 0.
int64 timestamp = 1; // The amount of working set memory in bytes. UInt64Value working_set_bytes = 2; - // Available memory for use. This is defined as the memory limit - workingSetBytes. + // Available memory for use. This is defined as the memory limit - commit_memory_bytes. UInt64Value available_bytes = 3; // Cumulative number of page faults. UInt64Value page_faults = 4; + // Total commit memory in use. Commit memory is total of physical and virtual memory in use. + UInt64Value commit_memory_bytes = 5; } message ReopenContainerLogRequest { @@ -1801,3 +1826,29 @@ enum MetricType { COUNTER = 0; GAUGE = 1; } + +message RuntimeConfigRequest {} + +message RuntimeConfigResponse { + // Configuration information for Linux-based runtimes. This field contains + // global runtime configuration options that are not specific to runtime + // handlers. + LinuxRuntimeConfiguration linux = 1; +} + +message LinuxRuntimeConfiguration { + // Cgroup driver to use + // Note: this field should not change for the lifecycle of the Kubelet, + // or while there are running containers. + // The Kubelet will not re-request this after startup, and will construct the cgroup + // hierarchy assuming it is static. + // If the runtime wishes to change this value, it must be accompanied by removal of + // all pods, and a restart of the Kubelet. The easiest way to do this is with a full node reboot. + CgroupDriver cgroup_driver = 1; +} + +enum CgroupDriver { + SYSTEMD = 0; + CGROUPFS = 1; +} + diff --git a/cluster-autoscaler/vendor/k8s.io/cri-api/pkg/apis/services.go b/cluster-autoscaler/vendor/k8s.io/cri-api/pkg/apis/services.go index a3a7e7ed876b..b21b11ba24c4 100644 --- a/cluster-autoscaler/vendor/k8s.io/cri-api/pkg/apis/services.go +++ b/cluster-autoscaler/vendor/k8s.io/cri-api/pkg/apis/services.go @@ -115,6 +115,8 @@ type RuntimeService interface { UpdateRuntimeConfig(ctx context.Context, runtimeConfig *runtimeapi.RuntimeConfig) error // Status returns the status of the runtime. 
Status(ctx context.Context, verbose bool) (*runtimeapi.StatusResponse, error) + // RuntimeConfig returns the configuration information of the runtime. + RuntimeConfig(ctx context.Context) (*runtimeapi.RuntimeConfigResponse, error) } // ImageManagerService interface should be implemented by a container image diff --git a/cluster-autoscaler/vendor/k8s.io/cri-api/pkg/errors/errors.go b/cluster-autoscaler/vendor/k8s.io/cri-api/pkg/errors/errors.go index 41d7b92466d8..a4538669122f 100644 --- a/cluster-autoscaler/vendor/k8s.io/cri-api/pkg/errors/errors.go +++ b/cluster-autoscaler/vendor/k8s.io/cri-api/pkg/errors/errors.go @@ -17,10 +17,20 @@ limitations under the License. package errors import ( + "errors" + "google.golang.org/grpc/codes" "google.golang.org/grpc/status" ) +var ( + // ErrRegistryUnavailable - Get http error on the PullImage RPC call. + ErrRegistryUnavailable = errors.New("RegistryUnavailable") + + // ErrSignatureValidationFailed - Unable to validate the image signature on the PullImage RPC call. + ErrSignatureValidationFailed = errors.New("SignatureValidationFailed") +) + // IsNotFound returns a boolean indicating whether the error // is grpc not found error. 
// See https://github.com/grpc/grpc/blob/master/doc/statuscodes.md diff --git a/cluster-autoscaler/vendor/k8s.io/csi-translation-lib/translate.go b/cluster-autoscaler/vendor/k8s.io/csi-translation-lib/translate.go index 9dde216299bf..9bbf17bad350 100644 --- a/cluster-autoscaler/vendor/k8s.io/csi-translation-lib/translate.go +++ b/cluster-autoscaler/vendor/k8s.io/csi-translation-lib/translate.go @@ -150,14 +150,14 @@ func (CSITranslator) GetInTreePluginNameFromSpec(pv *v1.PersistentVolume, vol *v return curPlugin.GetInTreePluginName(), nil } } - return "", fmt.Errorf("could not find in-tree plugin name from persistent volume %v", pv) + return "", fmt.Errorf("could not find in-tree plugin name from persistent volume %s", pv.Name) } else if vol != nil { for _, curPlugin := range inTreePlugins { if curPlugin.CanSupportInline(vol) { return curPlugin.GetInTreePluginName(), nil } } - return "", fmt.Errorf("could not find in-tree plugin name from volume %v", vol) + return "", fmt.Errorf("could not find in-tree plugin name from volume %s", vol.Name) } else { return "", errors.New("both persistent volume and volume are nil") } @@ -171,7 +171,7 @@ func (CSITranslator) GetCSINameFromInTreeName(pluginName string) (string, error) return csiDriverName, nil } } - return "", fmt.Errorf("could not find CSI Driver name for plugin %v", pluginName) + return "", fmt.Errorf("could not find CSI Driver name for plugin %s", pluginName) } // GetInTreeNameFromCSIName returns the name of the in-tree plugin superseded by @@ -180,7 +180,7 @@ func (CSITranslator) GetInTreeNameFromCSIName(pluginName string) (string, error) if plugin, ok := inTreePlugins[pluginName]; ok { return plugin.GetInTreePluginName(), nil } - return "", fmt.Errorf("could not find In-Tree driver name for CSI plugin %v", pluginName) + return "", fmt.Errorf("could not find In-Tree driver name for CSI plugin %s", pluginName) } // IsPVMigratable tests whether there is migration logic for the given Persistent Volume @@ -208,5 
+208,5 @@ func (CSITranslator) RepairVolumeHandle(driverName, volumeHandle, nodeID string) if plugin, ok := inTreePlugins[driverName]; ok { return plugin.RepairVolumeHandle(volumeHandle, nodeID) } - return "", fmt.Errorf("could not find In-Tree driver name for CSI plugin %v", driverName) + return "", fmt.Errorf("could not find In-Tree driver name for CSI plugin %s", driverName) } diff --git a/cluster-autoscaler/vendor/k8s.io/dynamic-resource-allocation/resourceclaim/resourceclaim.go b/cluster-autoscaler/vendor/k8s.io/dynamic-resource-allocation/resourceclaim/resourceclaim.go index cfe359889625..3fb1bced3d6f 100644 --- a/cluster-autoscaler/vendor/k8s.io/dynamic-resource-allocation/resourceclaim/resourceclaim.go +++ b/cluster-autoscaler/vendor/k8s.io/dynamic-resource-allocation/resourceclaim/resourceclaim.go @@ -24,6 +24,7 @@ limitations under the License. package resourceclaim import ( + "errors" "fmt" v1 "k8s.io/api/core/v1" @@ -31,25 +32,53 @@ import ( metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" ) +var ( + // ErrAPIUnsupported is wrapped by the actual errors returned by Name and + // indicates that none of the required fields are set. + ErrAPIUnsupported = errors.New("none of the supported fields are set") + + // ErrClaimNotFound is wrapped by the actual errors returned by Name and + // indicates that the claim has not been created yet. + ErrClaimNotFound = errors.New("ResourceClaim not created yet") +) + // Name returns the name of the ResourceClaim object that gets referenced by or -// created for the PodResourceClaim. The name is deterministic and therefore -// this function does not need any additional information and it will never -// fail. +// created for the PodResourceClaim. Three different results are possible: +// +// - An error is returned when some field is not set as expected (either the +// input is invalid or the API got extended and the library and the client +// using it need to be updated) or the claim hasn't been created yet. 
// -// Either podClaim.ResourceClaimName or podClaim.Template must be non-nil, but not -// both. This is enforced by API validation. +// The error includes pod and pod claim name and the unexpected field and +// is derived from one of the pre-defined errors in this package. +// +// - A nil string pointer and no error when the ResourceClaim intentionally +// didn't get created and the PodResourceClaim can be ignored. +// +// - A pointer to the name and no error when the ResourceClaim got created. +// In this case the boolean determines whether IsForPod must be called +// after retrieving the ResourceClaim and before using it. // // If podClaim.Template is not nil, the caller must check that the // ResourceClaim is indeed the one that was created for the Pod by calling // IsUsable. -func Name(pod *v1.Pod, podClaim *v1.PodResourceClaim) string { - if podClaim.Source.ResourceClaimName != nil { - return *podClaim.Source.ResourceClaimName +func Name(pod *v1.Pod, podClaim *v1.PodResourceClaim) (name *string, mustCheckOwner bool, err error) { + switch { + case podClaim.Source.ResourceClaimName != nil: + return podClaim.Source.ResourceClaimName, false, nil + case podClaim.Source.ResourceClaimTemplateName != nil: + for _, status := range pod.Status.ResourceClaimStatuses { + if status.Name == podClaim.Name { + return status.ResourceClaimName, true, nil + } + } + return nil, false, fmt.Errorf(`pod "%s/%s": %w`, pod.Namespace, pod.Name, ErrClaimNotFound) + default: + return nil, false, fmt.Errorf(`pod "%s/%s", spec.resourceClaim %q: %w`, pod.Namespace, pod.Name, podClaim.Name, ErrAPIUnsupported) } - return pod.Name + "-" + podClaim.Name } -// IsForPod checks that the ResourceClaim is the ephemeral volume that +// IsForPod checks that the ResourceClaim is the one that // was created for the Pod. It returns an error that is informative // enough to be returned by the caller without adding further details // about the Pod or ResourceClaim. 
diff --git a/cluster-autoscaler/vendor/k8s.io/klog/v2/format.go b/cluster-autoscaler/vendor/k8s.io/klog/v2/format.go new file mode 100644 index 000000000000..63995ca6dbf4 --- /dev/null +++ b/cluster-autoscaler/vendor/k8s.io/klog/v2/format.go @@ -0,0 +1,65 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package klog + +import ( + "encoding/json" + "fmt" + "strings" + + "github.com/go-logr/logr" +) + +// Format wraps a value of an arbitrary type and implement fmt.Stringer and +// logr.Marshaler for them. Stringer returns pretty-printed JSON. MarshalLog +// returns the original value with a type that has no special methods, in +// particular no MarshalLog or MarshalJSON. +// +// Wrapping values like that is useful when the value has a broken +// implementation of these special functions (for example, a type which +// inherits String from TypeMeta, but then doesn't re-implement String) or the +// implementation produces output that is less readable or unstructured (for +// example, the generated String functions for Kubernetes API types). 
+func Format(obj interface{}) interface{} { + return formatAny{Object: obj} +} + +type formatAny struct { + Object interface{} +} + +func (f formatAny) String() string { + var buffer strings.Builder + encoder := json.NewEncoder(&buffer) + encoder.SetIndent("", " ") + if err := encoder.Encode(&f.Object); err != nil { + return fmt.Sprintf("error marshaling %T to JSON: %v", f, err) + } + return buffer.String() +} + +func (f formatAny) MarshalLog() interface{} { + // Returning a pointer to a pointer ensures that zapr doesn't find a + // fmt.Stringer or logr.Marshaler when it checks the type of the + // value. It then falls back to reflection, which dumps the value being + // pointed to (JSON doesn't have pointers). + ptr := &f.Object + return &ptr +} + +var _ fmt.Stringer = formatAny{} +var _ logr.Marshaler = formatAny{} diff --git a/cluster-autoscaler/vendor/k8s.io/klog/v2/internal/serialize/keyvalues.go b/cluster-autoscaler/vendor/k8s.io/klog/v2/internal/serialize/keyvalues.go index 1dc81a15fa62..bcdf5f8ee12d 100644 --- a/cluster-autoscaler/vendor/k8s.io/klog/v2/internal/serialize/keyvalues.go +++ b/cluster-autoscaler/vendor/k8s.io/klog/v2/internal/serialize/keyvalues.go @@ -18,6 +18,7 @@ package serialize import ( "bytes" + "encoding/json" "fmt" "strconv" @@ -196,11 +197,11 @@ func (f Formatter) KVFormat(b *bytes.Buffer, k, v interface{}) { case textWriter: writeTextWriterValue(b, v) case fmt.Stringer: - writeStringValue(b, true, StringerToString(v)) + writeStringValue(b, StringerToString(v)) case string: - writeStringValue(b, true, v) + writeStringValue(b, v) case error: - writeStringValue(b, true, ErrorToString(v)) + writeStringValue(b, ErrorToString(v)) case logr.Marshaler: value := MarshalerToValue(v) // A marshaler that returns a string is useful for @@ -215,9 +216,9 @@ func (f Formatter) KVFormat(b *bytes.Buffer, k, v interface{}) { // value directly. 
switch value := value.(type) { case string: - writeStringValue(b, true, value) + writeStringValue(b, value) default: - writeStringValue(b, false, f.AnyToString(value)) + f.formatAny(b, value) } case []byte: // In https://github.com/kubernetes/klog/pull/237 it was decided @@ -234,7 +235,7 @@ func (f Formatter) KVFormat(b *bytes.Buffer, k, v interface{}) { b.WriteByte('=') b.WriteString(fmt.Sprintf("%+q", v)) default: - writeStringValue(b, false, f.AnyToString(v)) + f.formatAny(b, v) } } @@ -242,12 +243,25 @@ func KVFormat(b *bytes.Buffer, k, v interface{}) { Formatter{}.KVFormat(b, k, v) } -// AnyToString is the historic fallback formatter. -func (f Formatter) AnyToString(v interface{}) string { +// formatAny is the fallback formatter for a value. It supports a hook (for +// example, for YAML encoding) and itself uses JSON encoding. +func (f Formatter) formatAny(b *bytes.Buffer, v interface{}) { + b.WriteRune('=') if f.AnyToStringHook != nil { - return f.AnyToStringHook(v) + b.WriteString(f.AnyToStringHook(v)) + return + } + encoder := json.NewEncoder(b) + l := b.Len() + if err := encoder.Encode(v); err != nil { + // This shouldn't happen. We discard whatever the encoder + // wrote and instead dump an error string. + b.Truncate(l) + b.WriteString(fmt.Sprintf(`"<internal error: %v>"`, err)) + return } - return fmt.Sprintf("%+v", v) + // Remove trailing newline.
+ b.Truncate(b.Len() - 1) } // StringerToString converts a Stringer to a string, @@ -287,7 +301,7 @@ func ErrorToString(err error) (ret string) { } func writeTextWriterValue(b *bytes.Buffer, v textWriter) { - b.WriteRune('=') + b.WriteByte('=') defer func() { if err := recover(); err != nil { fmt.Fprintf(b, `"<panic: %s>"`, err) @@ -296,18 +310,13 @@ func writeTextWriterValue(b *bytes.Buffer, v textWriter) { v.WriteText(b) } -func writeStringValue(b *bytes.Buffer, quote bool, v string) { +func writeStringValue(b *bytes.Buffer, v string) { data := []byte(v) index := bytes.IndexByte(data, '\n') if index == -1 { b.WriteByte('=') - if quote { - // Simple string, quote quotation marks and non-printable characters. - b.WriteString(strconv.Quote(v)) - return - } - // Non-string with no line breaks. - b.WriteString(v) + // Simple string, quote quotation marks and non-printable characters. + b.WriteString(strconv.Quote(v)) return } diff --git a/cluster-autoscaler/vendor/k8s.io/klog/v2/k8s_references.go b/cluster-autoscaler/vendor/k8s.io/klog/v2/k8s_references.go index ecd3f8b69033..786af74bfd38 100644 --- a/cluster-autoscaler/vendor/k8s.io/klog/v2/k8s_references.go +++ b/cluster-autoscaler/vendor/k8s.io/klog/v2/k8s_references.go @@ -178,14 +178,14 @@ func (ks kobjSlice) process() (objs []interface{}, err string) { return objectRefs, "" } -var nilToken = []byte("<nil>") +var nilToken = []byte("null") func (ks kobjSlice) WriteText(out *bytes.Buffer) { s := reflect.ValueOf(ks.arg) switch s.Kind() { case reflect.Invalid: - // nil parameter, print as empty slice. - out.WriteString("[]") + // nil parameter, print as null. + out.Write(nilToken) return case reflect.Slice: // Okay, handle below.
@@ -197,15 +197,15 @@ func (ks kobjSlice) WriteText(out *bytes.Buffer) { defer out.Write([]byte{']'}) for i := 0; i < s.Len(); i++ { if i > 0 { - out.Write([]byte{' '}) + out.Write([]byte{','}) } item := s.Index(i).Interface() if item == nil { out.Write(nilToken) } else if v, ok := item.(KMetadata); ok { - KObj(v).writeUnquoted(out) + KObj(v).WriteText(out) } else { - fmt.Fprintf(out, "<KObjSlice needs a slice of values implementing KMetadata, got type %T>", item) + fmt.Fprintf(out, `"<KObjSlice needs a slice of values implementing KMetadata, got type %T>"`, item) return } } diff --git a/cluster-autoscaler/vendor/k8s.io/klog/v2/klog.go b/cluster-autoscaler/vendor/k8s.io/klog/v2/klog.go index 466eeaf265b5..152f8a6bd6d9 100644 --- a/cluster-autoscaler/vendor/k8s.io/klog/v2/klog.go +++ b/cluster-autoscaler/vendor/k8s.io/klog/v2/klog.go @@ -1228,6 +1228,19 @@ func CopyStandardLogTo(name string) { stdLog.SetOutput(logBridge(sev)) } +// NewStandardLogger returns a Logger that writes to the klog logs for the +// named and lower severities. +// +// Valid names are "INFO", "WARNING", "ERROR", and "FATAL". If the name is not +// recognized, NewStandardLogger panics. +func NewStandardLogger(name string) *stdLog.Logger { + sev, ok := severity.ByName(name) + if !ok { + panic(fmt.Sprintf("klog.NewStandardLogger(%q): unknown severity", name)) + } + return stdLog.New(logBridge(sev), "", stdLog.Lshortfile) +} + // logBridge provides the Write method that enables CopyStandardLogTo to connect // Go's standard logs to the logs provided by this package. type logBridge severity.Severity diff --git a/cluster-autoscaler/vendor/k8s.io/kms/apis/v1beta1/api.pb.go b/cluster-autoscaler/vendor/k8s.io/kms/apis/v1beta1/api.pb.go index 49c4713fb431..3361bc5f5eb5 100644 --- a/cluster-autoscaler/vendor/k8s.io/kms/apis/v1beta1/api.pb.go +++ b/cluster-autoscaler/vendor/k8s.io/kms/apis/v1beta1/api.pb.go @@ -15,7 +15,7 @@ limitations under the License. */ // Code generated by protoc-gen-gogo. DO NOT EDIT. -// source: api.proto +// api.proto is a deprecated file.
package v1beta1 @@ -40,6 +40,7 @@ var _ = math.Inf // proto package needs to be updated. const _ = proto.GoGoProtoPackageIsVersion3 // please upgrade the proto package +// Deprecated: KMSv1 is deprecated in v1.28 and will only receive security updates going forward. Use KMSv2 instead. type VersionRequest struct { // Version of the KMS plugin API. Version string `protobuf:"bytes,1,opt,name=version,proto3" json:"version,omitempty"` @@ -79,6 +80,7 @@ func (m *VersionRequest) GetVersion() string { return "" } +// Deprecated: KMSv1 is deprecated in v1.28 and will only receive security updates going forward. Use KMSv2 instead. type VersionResponse struct { // Version of the KMS plugin API. Version string `protobuf:"bytes,1,opt,name=version,proto3" json:"version,omitempty"` @@ -136,6 +138,7 @@ func (m *VersionResponse) GetRuntimeVersion() string { return "" } +// Deprecated: KMSv1 is deprecated in v1.28 and will only receive security updates going forward. Use KMSv2 instead. type DecryptRequest struct { // Version of the KMS plugin API. Version string `protobuf:"bytes,1,opt,name=version,proto3" json:"version,omitempty"` @@ -184,6 +187,7 @@ func (m *DecryptRequest) GetCipher() []byte { return nil } +// Deprecated: KMSv1 is deprecated in v1.28 and will only receive security updates going forward. Use KMSv2 instead. type DecryptResponse struct { // The decrypted data. Plain []byte `protobuf:"bytes,1,opt,name=plain,proto3" json:"plain,omitempty"` @@ -223,6 +227,7 @@ func (m *DecryptResponse) GetPlain() []byte { return nil } +// Deprecated: KMSv1 is deprecated in v1.28 and will only receive security updates going forward. Use KMSv2 instead. type EncryptRequest struct { // Version of the KMS plugin API. Version string `protobuf:"bytes,1,opt,name=version,proto3" json:"version,omitempty"` @@ -271,6 +276,7 @@ func (m *EncryptRequest) GetPlain() []byte { return nil } +// Deprecated: KMSv1 is deprecated in v1.28 and will only receive security updates going forward. 
Use KMSv2 instead. type EncryptResponse struct { // The encrypted data. Cipher []byte `protobuf:"bytes,1,opt,name=cipher,proto3" json:"cipher,omitempty"` @@ -322,27 +328,27 @@ func init() { func init() { proto.RegisterFile("api.proto", fileDescriptor_00212fb1f9d3bf1c) } var fileDescriptor_00212fb1f9d3bf1c = []byte{ - // 308 bytes of a gzipped FileDescriptorProto - 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x8c, 0x52, 0x4f, 0x4b, 0xc3, 0x30, - 0x14, 0x5f, 0x27, 0x6e, 0xec, 0x59, 0x5a, 0x08, 0xc3, 0x55, 0x4f, 0x9a, 0xcb, 0xd4, 0x43, 0xcb, - 0xf4, 0xe2, 0x49, 0x64, 0xe8, 0x49, 0xf4, 0x50, 0xc1, 0x83, 0x17, 0xc9, 0xca, 0x43, 0xc3, 0x6c, - 0x1a, 0x93, 0xac, 0xb2, 0x2f, 0xea, 0xe7, 0x11, 0xdb, 0xb4, 0xa6, 0x13, 0xd1, 0xe3, 0x7b, 0xf9, - 0xfd, 0x79, 0xbf, 0xf7, 0x02, 0x23, 0x26, 0x79, 0x2c, 0x55, 0x61, 0x0a, 0x32, 0x2c, 0x67, 0x0b, - 0x34, 0x6c, 0x46, 0x4f, 0x20, 0x78, 0x40, 0xa5, 0x79, 0x21, 0x52, 0x7c, 0x5b, 0xa1, 0x36, 0x24, - 0x82, 0x61, 0x59, 0x77, 0x22, 0xef, 0xc0, 0x3b, 0x1a, 0xa5, 0x4d, 0x49, 0xdf, 0x21, 0x6c, 0xb1, - 0x5a, 0x16, 0x42, 0xe3, 0xef, 0x60, 0x72, 0x08, 0xbe, 0x5a, 0x09, 0xc3, 0x73, 0x7c, 0x12, 0x2c, - 0xc7, 0xa8, 0x5f, 0x3d, 0xef, 0xd8, 0xde, 0x1d, 0xcb, 0x91, 0x4c, 0x21, 0x6c, 0x20, 0x8d, 0xc8, - 0x56, 0x85, 0x0a, 0x6c, 0xdb, 0xba, 0xd1, 0x39, 0x04, 0x57, 0x98, 0xa9, 0xb5, 0x34, 0x7f, 0x0e, - 0x49, 0x76, 0x61, 0x90, 0x71, 0xf9, 0x82, 0xaa, 0x72, 0xf4, 0x53, 0x5b, 0xd1, 0x29, 0x84, 0xad, - 0x86, 0x1d, 0x7e, 0x0c, 0xdb, 0xf2, 0x95, 0xf1, 0x5a, 0xc2, 0x4f, 0xeb, 0x82, 0x5e, 0x42, 0x70, - 0x2d, 0xfe, 0x69, 0xd6, 0x2a, 0xf4, 0x5d, 0x85, 0x63, 0x08, 0x5b, 0x05, 0x6b, 0xf5, 0x3d, 0x95, - 0xe7, 0x4e, 0x75, 0xfa, 0xe1, 0xc1, 0xf8, 0x06, 0xd7, 0xb7, 0x4c, 0xb0, 0x67, 0xcc, 0x51, 0x98, - 0x7b, 0x54, 0x25, 0xcf, 0x90, 0x5c, 0xc0, 0xd0, 0xa6, 0x27, 0x93, 0xd8, 0x1e, 0x2b, 0xee, 0x5e, - 0x6a, 0x3f, 0xfa, 0xf9, 0x50, 0xdb, 0xd1, 0xde, 0x17, 0xdf, 0xc6, 0x75, 0xf8, 0xdd, 0x25, 0x3a, - 0xfc, 0x8d, 0xcd, 0xd4, 0x7c, 0x9b, 0xc1, 0xe1, 0x77, 0xf7, 
0xe2, 0xf0, 0x37, 0xe2, 0xd2, 0xde, - 0x7c, 0xef, 0x71, 0xb2, 0x3c, 0xd7, 0x31, 0x2f, 0x92, 0x65, 0xae, 0x13, 0x26, 0xb9, 0x4e, 0x2c, - 0x78, 0x31, 0xa8, 0xbe, 0xe0, 0xd9, 0x67, 0x00, 0x00, 0x00, 0xff, 0xff, 0x13, 0xcb, 0x8d, 0x9b, - 0x8f, 0x02, 0x00, 0x00, + // 314 bytes of a gzipped FileDescriptorProto + 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x8c, 0x52, 0xcf, 0x4a, 0xf3, 0x40, + 0x10, 0xef, 0xf6, 0xe3, 0x6b, 0xe9, 0x58, 0x12, 0x58, 0x8a, 0x0d, 0xe2, 0x41, 0xf7, 0x52, 0xf5, + 0x90, 0x52, 0xbd, 0x78, 0x12, 0x29, 0x7a, 0x12, 0x3d, 0x44, 0xf0, 0xe0, 0x45, 0xb6, 0x61, 0xd0, + 0xa5, 0x66, 0xb3, 0xee, 0x6e, 0x23, 0x7d, 0x33, 0x9f, 0xc4, 0xe7, 0x11, 0x93, 0x4d, 0xdc, 0x54, + 0x44, 0x8f, 0x33, 0xfb, 0xfb, 0x33, 0xbf, 0x99, 0x85, 0x01, 0x57, 0x22, 0x56, 0x3a, 0xb7, 0x39, + 0xed, 0x17, 0xb3, 0x05, 0x5a, 0x3e, 0x63, 0x47, 0x10, 0xdc, 0xa1, 0x36, 0x22, 0x97, 0x09, 0xbe, + 0xac, 0xd0, 0x58, 0x1a, 0x41, 0xbf, 0xa8, 0x3a, 0x11, 0xd9, 0x23, 0x07, 0x83, 0xa4, 0x2e, 0xd9, + 0x2b, 0x84, 0x0d, 0xd6, 0xa8, 0x5c, 0x1a, 0xfc, 0x19, 0x4c, 0xf7, 0x61, 0xa8, 0x57, 0xd2, 0x8a, + 0x0c, 0x1f, 0x24, 0xcf, 0x30, 0xea, 0x96, 0xcf, 0x5b, 0xae, 0x77, 0xc3, 0x33, 0xa4, 0x13, 0x08, + 0x6b, 0x48, 0x2d, 0xf2, 0xaf, 0x44, 0x05, 0xae, 0xed, 0xdc, 0xd8, 0x1c, 0x82, 0x0b, 0x4c, 0xf5, + 0x5a, 0xd9, 0x5f, 0x87, 0xa4, 0xdb, 0xd0, 0x4b, 0x85, 0x7a, 0x42, 0x5d, 0x3a, 0x0e, 0x13, 0x57, + 0xb1, 0x09, 0x84, 0x8d, 0x86, 0x1b, 0x7e, 0x04, 0xff, 0xd5, 0x33, 0x17, 0x95, 0xc4, 0x30, 0xa9, + 0x0a, 0x76, 0x0e, 0xc1, 0xa5, 0xfc, 0xa3, 0x59, 0xa3, 0xd0, 0xf5, 0x15, 0x0e, 0x21, 0x6c, 0x14, + 0x9c, 0xd5, 0xd7, 0x54, 0xc4, 0x9f, 0xea, 0xf8, 0x9d, 0xc0, 0xe8, 0x0a, 0xd7, 0xd7, 0x5c, 0xf2, + 0x47, 0xcc, 0x50, 0xda, 0x5b, 0xd4, 0x85, 0x48, 0x91, 0x9e, 0x41, 0xdf, 0xa5, 0xa7, 0xe3, 0xd8, + 0x1d, 0x2b, 0x6e, 0x5f, 0x6a, 0x27, 0xfa, 0xfe, 0x50, 0xd9, 0xb1, 0xce, 0x27, 0xdf, 0xc5, 0xf5, + 0xf8, 0xed, 0x25, 0x7a, 0xfc, 0x8d, 0xcd, 0x54, 0x7c, 0x97, 0xc1, 0xe3, 0xb7, 0xf7, 0xe2, 0xf1, + 0x37, 0xe2, 0xb2, 0xce, 
0x7c, 0xf7, 0x7e, 0xbc, 0x3c, 0x35, 0xb1, 0xc8, 0xa7, 0xcb, 0xcc, 0x4c, + 0xb9, 0x12, 0x66, 0xea, 0xc0, 0x6f, 0x84, 0x2c, 0x7a, 0xe5, 0x2f, 0x3c, 0xf9, 0x08, 0x00, 0x00, + 0xff, 0xff, 0x18, 0x47, 0x93, 0xb2, 0x92, 0x02, 0x00, 0x00, } // Reference imports to suppress errors if they are not otherwise used. diff --git a/cluster-autoscaler/vendor/k8s.io/kms/apis/v1beta1/api.proto b/cluster-autoscaler/vendor/k8s.io/kms/apis/v1beta1/api.proto index 22450edcd876..f62abc7bff9b 100644 --- a/cluster-autoscaler/vendor/k8s.io/kms/apis/v1beta1/api.proto +++ b/cluster-autoscaler/vendor/k8s.io/kms/apis/v1beta1/api.proto @@ -19,6 +19,7 @@ syntax = "proto3"; package v1beta1; option go_package = "k8s.io/kms/apis/v1beta1"; +option deprecated = true; // This service defines the public APIs for remote KMS provider. service KeyManagementService { @@ -31,11 +32,13 @@ service KeyManagementService { rpc Encrypt(EncryptRequest) returns (EncryptResponse) {} } +// Deprecated: KMSv1 is deprecated in v1.28 and will only receive security updates going forward. Use KMSv2 instead. message VersionRequest { // Version of the KMS plugin API. string version = 1; } +// Deprecated: KMSv1 is deprecated in v1.28 and will only receive security updates going forward. Use KMSv2 instead. message VersionResponse { // Version of the KMS plugin API. string version = 1; @@ -45,6 +48,7 @@ message VersionResponse { string runtime_version = 3; } +// Deprecated: KMSv1 is deprecated in v1.28 and will only receive security updates going forward. Use KMSv2 instead. message DecryptRequest { // Version of the KMS plugin API. string version = 1; @@ -52,11 +56,13 @@ message DecryptRequest { bytes cipher = 2; } +// Deprecated: KMSv1 is deprecated in v1.28 and will only receive security updates going forward. Use KMSv2 instead. message DecryptResponse { // The decrypted data. bytes plain = 1; } +// Deprecated: KMSv1 is deprecated in v1.28 and will only receive security updates going forward. Use KMSv2 instead. 
message EncryptRequest { // Version of the KMS plugin API. string version = 1; @@ -64,8 +70,8 @@ message EncryptRequest { bytes plain = 2; } +// Deprecated: KMSv1 is deprecated in v1.28 and will only receive security updates going forward. Use KMSv2 instead. message EncryptResponse { // The encrypted data. bytes cipher = 1; } - diff --git a/cluster-autoscaler/vendor/k8s.io/kms/apis/v1beta1/v1beta1.go b/cluster-autoscaler/vendor/k8s.io/kms/apis/v1beta1/v1beta1.go index 842d0a2fdc70..aae3359ef909 100644 --- a/cluster-autoscaler/vendor/k8s.io/kms/apis/v1beta1/v1beta1.go +++ b/cluster-autoscaler/vendor/k8s.io/kms/apis/v1beta1/v1beta1.go @@ -15,6 +15,7 @@ limitations under the License. */ // Package v1beta1 contains definition of kms-plugin's gRPC service. +// Deprecated: KMSv1 is deprecated in v1.28 and will only receive security updates going forward. Use KMSv2 instead. package v1beta1 // IsVersionCheckMethod determines whether the supplied method is a version check against kms-plugin. diff --git a/cluster-autoscaler/vendor/k8s.io/kms/apis/v2/api.pb.go b/cluster-autoscaler/vendor/k8s.io/kms/apis/v2/api.pb.go index cb746a64c964..1b634f9323e5 100644 --- a/cluster-autoscaler/vendor/k8s.io/kms/apis/v2/api.pb.go +++ b/cluster-autoscaler/vendor/k8s.io/kms/apis/v2/api.pb.go @@ -288,7 +288,6 @@ type EncryptResponse struct { // This can be used to inform staleness of data updated via value.Transformer.TransformFromStorage. KeyId string `protobuf:"bytes,2,opt,name=key_id,json=keyId,proto3" json:"key_id,omitempty"` // Additional metadata to be stored with the encrypted data. - // This metadata can contain the encrypted local KEK that was used to encrypt the DEK. // This data is stored in plaintext in etcd. KMS plugin implementations are responsible for pre-encrypting any sensitive data. 
Annotations map[string][]byte `protobuf:"bytes,3,rep,name=annotations,proto3" json:"annotations,omitempty" protobuf_key:"bytes,1,opt,name=key,proto3" protobuf_val:"bytes,2,opt,name=value,proto3"` XXX_NoUnkeyedLiteral struct{} `json:"-"` diff --git a/cluster-autoscaler/vendor/k8s.io/kms/apis/v2/api.proto b/cluster-autoscaler/vendor/k8s.io/kms/apis/v2/api.proto index 09b52126f2b7..3c7d335e8b66 100644 --- a/cluster-autoscaler/vendor/k8s.io/kms/apis/v2/api.proto +++ b/cluster-autoscaler/vendor/k8s.io/kms/apis/v2/api.proto @@ -73,7 +73,6 @@ message EncryptResponse { // This can be used to inform staleness of data updated via value.Transformer.TransformFromStorage. string key_id = 2; // Additional metadata to be stored with the encrypted data. - // This metadata can contain the encrypted local KEK that was used to encrypt the DEK. // This data is stored in plaintext in etcd. KMS plugin implementations are responsible for pre-encrypting any sensitive data. map<string, bytes> annotations = 3; } diff --git a/cluster-autoscaler/vendor/k8s.io/kube-openapi/pkg/builder/openapi.go b/cluster-autoscaler/vendor/k8s.io/kube-openapi/pkg/builder/openapi.go index 98be932cb9a0..1c4cb5bf87ee 100644 --- a/cluster-autoscaler/vendor/k8s.io/kube-openapi/pkg/builder/openapi.go +++ b/cluster-autoscaler/vendor/k8s.io/kube-openapi/pkg/builder/openapi.go @@ -152,7 +152,7 @@ func (o *openAPI) finalizeSwagger() (*spec.Swagger, error) { } } - return o.swagger, nil + return deduplicateParameters(o.swagger) } func (o *openAPI) buildDefinitionRecursively(name string) error { diff --git a/cluster-autoscaler/vendor/k8s.io/kube-openapi/pkg/builder/parameters.go b/cluster-autoscaler/vendor/k8s.io/kube-openapi/pkg/builder/parameters.go new file mode 100644 index 000000000000..2bb8bd885df8 --- /dev/null +++ b/cluster-autoscaler/vendor/k8s.io/kube-openapi/pkg/builder/parameters.go @@ -0,0 +1,259 @@ +/* +Copyright 2023 The Kubernetes Authors. 
+ +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package builder + +import ( + "encoding/base64" + "encoding/json" + "fmt" + "hash/fnv" + "sort" + "strconv" + "strings" + + "k8s.io/kube-openapi/pkg/validation/spec" +) + +// deduplicateParameters finds parameters that are shared across multiple endpoints and replace them with +// references to the shared parameters in order to avoid repetition. +// +// deduplicateParameters does not mutate the source. +func deduplicateParameters(sp *spec.Swagger) (*spec.Swagger, error) { + names, parameters, err := collectSharedParameters(sp) + if err != nil { + return nil, err + } + + if sp.Parameters != nil { + return nil, fmt.Errorf("shared parameters already exist") // should not happen with the builder, but to be sure + } + + clone := *sp + clone.Parameters = parameters + return replaceSharedParameters(names, &clone) +} + +// collectSharedParameters finds parameters that show up for many endpoints. These +// are basically all parameters with the exceptions of those where we know they are +// endpoint specific, e.g. because they reference the schema of the kind, or have +// the kind or resource name in the description. 
+func collectSharedParameters(sp *spec.Swagger) (namesByJSON map[string]string, ret map[string]spec.Parameter, err error) { + if sp == nil || sp.Paths == nil { + return nil, nil, nil + } + + countsByJSON := map[string]int{} + shared := map[string]spec.Parameter{} + var keys []string + + collect := func(p *spec.Parameter) error { + if (p.In == "query" || p.In == "path") && p.Name == "name" { + return nil // ignore name parameter as they are never shared with the Kind in the description + } + if p.In == "query" && p.Name == "fieldValidation" { + return nil // keep fieldValidation parameter unshared because kubectl uses it (until 1.27) to detect server-side field validation support + } + if p.In == "query" && p.Name == "dryRun" { + return nil // keep fieldValidation parameter unshared because kubectl uses it (until 1.26) to detect dry-run support + } + if p.Schema != nil && p.In == "body" && p.Name == "body" && !strings.HasPrefix(p.Schema.Ref.String(), "#/definitions/io.k8s.apimachinery") { + return nil // ignore non-generic body parameters as they reference the custom schema of the kind + } + + bs, err := json.Marshal(p) + if err != nil { + return err + } + + k := string(bs) + countsByJSON[k]++ + if count := countsByJSON[k]; count == 1 { + shared[k] = *p + keys = append(keys, k) + } + + return nil + } + + for _, path := range sp.Paths.Paths { + // per operation parameters + for _, op := range operations(&path) { + if op == nil { + continue // shouldn't happen, but ignore if it does; tested through unit test + } + for _, p := range op.Parameters { + if p.Ref.String() != "" { + // shouldn't happen, but ignore if it does + continue + } + if err := collect(&p); err != nil { + return nil, nil, err + } + } + } + + // per path parameters + for _, p := range path.Parameters { + if p.Ref.String() != "" { + continue // shouldn't happen, but ignore if it does + } + if err := collect(&p); err != nil { + return nil, nil, err + } + } + } + + // name deterministically + 
sort.Strings(keys) + ret = map[string]spec.Parameter{} + namesByJSON = map[string]string{} + for _, k := range keys { + name := shared[k].Name + if name == "" { + // this should never happen as the name is a required field. But if it does, let's be safe. + name = "param" + } + name += "-" + base64Hash(k) + i := 0 + for { + if _, ok := ret[name]; !ok { + ret[name] = shared[k] + namesByJSON[k] = name + break + } + i++ // only on hash conflict, unlikely with our few variants + name = shared[k].Name + "-" + strconv.Itoa(i) + } + } + + return namesByJSON, ret, nil +} + +func operations(path *spec.PathItem) []*spec.Operation { + return []*spec.Operation{path.Get, path.Put, path.Post, path.Delete, path.Options, path.Head, path.Patch} +} + +func base64Hash(s string) string { + hash := fnv.New64() + hash.Write([]byte(s)) //nolint:errcheck + return base64.URLEncoding.EncodeToString(hash.Sum(make([]byte, 0, 8))[:6]) // 8 characters +} + +func replaceSharedParameters(sharedParameterNamesByJSON map[string]string, sp *spec.Swagger) (*spec.Swagger, error) { + if sp == nil || sp.Paths == nil { + return sp, nil + } + + ret := sp + + firstPathChange := true + for k, path := range sp.Paths.Paths { + pathChanged := false + + // per operation parameters + for _, op := range []**spec.Operation{&path.Get, &path.Put, &path.Post, &path.Delete, &path.Options, &path.Head, &path.Patch} { + if *op == nil { + continue + } + + firstParamChange := true + for i := range (*op).Parameters { + p := (*op).Parameters[i] + + if p.Ref.String() != "" { + // shouldn't happen, but be idem-potent if it does + continue + } + + bs, err := json.Marshal(p) + if err != nil { + return nil, err + } + + if name, ok := sharedParameterNamesByJSON[string(bs)]; ok { + if firstParamChange { + orig := *op + *op = &spec.Operation{} + **op = *orig + (*op).Parameters = make([]spec.Parameter, len(orig.Parameters)) + copy((*op).Parameters, orig.Parameters) + firstParamChange = false + } + + (*op).Parameters[i] = 
spec.Parameter{ + Refable: spec.Refable{ + Ref: spec.MustCreateRef("#/parameters/" + name), + }, + } + pathChanged = true + } + } + } + + // per path parameters + firstParamChange := true + for i := range path.Parameters { + p := path.Parameters[i] + + if p.Ref.String() != "" { + // shouldn't happen, but be idem-potent if it does + continue + } + + bs, err := json.Marshal(p) + if err != nil { + return nil, err + } + + if name, ok := sharedParameterNamesByJSON[string(bs)]; ok { + if firstParamChange { + orig := path.Parameters + path.Parameters = make([]spec.Parameter, len(orig)) + copy(path.Parameters, orig) + firstParamChange = false + } + + path.Parameters[i] = spec.Parameter{ + Refable: spec.Refable{ + Ref: spec.MustCreateRef("#/parameters/" + name), + }, + } + pathChanged = true + } + } + + if pathChanged { + if firstPathChange { + clone := *sp + ret = &clone + + pathsClone := *ret.Paths + ret.Paths = &pathsClone + + ret.Paths.Paths = make(map[string]spec.PathItem, len(sp.Paths.Paths)) + for k, v := range sp.Paths.Paths { + ret.Paths.Paths[k] = v + } + + firstPathChange = false + } + ret.Paths.Paths[k] = path + } + } + + return ret, nil +} diff --git a/cluster-autoscaler/vendor/k8s.io/kube-openapi/pkg/cached/cache.go b/cluster-autoscaler/vendor/k8s.io/kube-openapi/pkg/cached/cache.go index 16e34853af73..76415b7830bf 100644 --- a/cluster-autoscaler/vendor/k8s.io/kube-openapi/pkg/cached/cache.go +++ b/cluster-autoscaler/vendor/k8s.io/kube-openapi/pkg/cached/cache.go @@ -19,6 +19,8 @@ limitations under the License. // operations are not repeated unnecessarily. The operations can be // created as a tree, and replaced dynamically as needed. // +// All the operations in this module are thread-safe. +// // # Dependencies and types of caches // // This package uses a source/transform/sink model of caches to build @@ -34,12 +36,6 @@ limitations under the License. // replaced with a new one, and saves the previous results in case an // error pops-up. 
// -// # Atomicity -// -// Most of the operations are not atomic/thread-safe, except for -// [Replaceable.Replace] which can be performed while the objects -// are being read. -// // # Etags // // Etags in this library is a cache version identifier. It doesn't @@ -54,6 +50,7 @@ package cached import ( "fmt" + "sync" "sync/atomic" ) @@ -100,14 +97,6 @@ type Data[T any] interface { Get() Result[T] } -// T is the source type, V is the destination type. -type merger[K comparable, T, V any] struct { - mergeFn func(map[K]Result[T]) Result[V] - caches map[K]Data[T] - cacheResults map[K]Result[T] - result Result[V] -} - // NewMerger creates a new merge cache, a cache that merges the result // of other caches. The function only gets called if any of the // dependency has changed. @@ -125,27 +114,89 @@ type merger[K comparable, T, V any] struct { // function will remerge all the dependencies together everytime. Since // the list of dependencies is constant, there is no way to save some // partial merge information either. +// +// Also note that Golang map iteration is not stable. If the mergeFn +// depends on the order iteration to be stable, it will need to +// implement its own sorting or iteration order. 
func NewMerger[K comparable, T, V any](mergeFn func(results map[K]Result[T]) Result[V], caches map[K]Data[T]) Data[V] { - return &merger[K, T, V]{ + listCaches := make([]Data[T], 0, len(caches)) + // maps from index to key + indexes := make(map[int]K, len(caches)) + i := 0 + for k := range caches { + listCaches = append(listCaches, caches[k]) + indexes[i] = k + i++ + } + + return NewListMerger(func(results []Result[T]) Result[V] { + if len(results) != len(indexes) { + panic(fmt.Errorf("invalid result length %d, expected %d", len(results), len(indexes))) + } + m := make(map[K]Result[T], len(results)) + for i := range results { + m[indexes[i]] = results[i] + } + return mergeFn(m) + }, listCaches) +} + +type listMerger[T, V any] struct { + lock sync.Mutex + mergeFn func([]Result[T]) Result[V] + caches []Data[T] + cacheResults []Result[T] + result Result[V] +} + +// NewListMerger creates a new merge cache that merges the results of +// other caches in list form. The function only gets called if any of +// the dependency has changed. +// +// The benefit of ListMerger over the basic Merger is that caches are +// stored in an ordered list so the order of the cache will be +// preserved in the order of the results passed to the mergeFn. +// +// If any of the dependency returned an error before, or any of the +// dependency returned an error this time, or if the mergeFn failed +// before, then the function is reran. +// +// Note that this assumes there is no "partial" merge, the merge +// function will remerge all the dependencies together everytime. Since +// the list of dependencies is constant, there is no way to save some +// partial merge information either. 
+func NewListMerger[T, V any](mergeFn func(results []Result[T]) Result[V], caches []Data[T]) Data[V] { + return &listMerger[T, V]{ mergeFn: mergeFn, caches: caches, } } -func (c *merger[K, T, V]) prepareResults() map[K]Result[T] { - cacheResults := make(map[K]Result[T], len(c.caches)) - for key, cache := range c.caches { - cacheResults[key] = cache.Get() +func (c *listMerger[T, V]) prepareResultsLocked() []Result[T] { + cacheResults := make([]Result[T], len(c.caches)) + ch := make(chan struct { + int + Result[T] + }, len(c.caches)) + for i := range c.caches { + go func(index int) { + ch <- struct { + int + Result[T] + }{ + index, + c.caches[index].Get(), + } + }(i) + } + for i := 0; i < len(c.caches); i++ { + res := <-ch + cacheResults[res.int] = res.Result } return cacheResults } -// Rerun if: -// - The last run resulted in an error -// - Any of the dependency previously returned an error -// - Any of the dependency just returned an error -// - Any of the dependency's etag changed -func (c *merger[K, T, V]) needsRunning(results map[K]Result[T]) bool { +func (c *listMerger[T, V]) needsRunningLocked(results []Result[T]) bool { if c.cacheResults == nil { return true } @@ -155,12 +206,8 @@ func (c *merger[K, T, V]) needsRunning(results map[K]Result[T]) bool { if len(results) != len(c.cacheResults) { panic(fmt.Errorf("invalid number of results: %v (expected %v)", len(results), len(c.cacheResults))) } - for key, oldResult := range c.cacheResults { - newResult, ok := results[key] - if !ok { - panic(fmt.Errorf("unknown cache entry: %v", key)) - } - + for i, oldResult := range c.cacheResults { + newResult := results[i] if newResult.Etag != oldResult.Etag || newResult.Err != nil || oldResult.Err != nil { return true } @@ -168,17 +215,17 @@ func (c *merger[K, T, V]) needsRunning(results map[K]Result[T]) bool { return false } -func (c *merger[K, T, V]) Get() Result[V] { - cacheResults := c.prepareResults() - if c.needsRunning(cacheResults) { +func (c *listMerger[T, V]) Get() 
Result[V] { + c.lock.Lock() + defer c.lock.Unlock() + cacheResults := c.prepareResultsLocked() + if c.needsRunningLocked(cacheResults) { c.cacheResults = cacheResults c.result = c.mergeFn(c.cacheResults) } return c.result } -type transformerCacheKeyType struct{} - // NewTransformer creates a new cache that transforms the result of // another cache. The transformFn will only be called if the source // cache has updated the output, otherwise, the cached result will be @@ -188,20 +235,17 @@ type transformerCacheKeyType struct{} // this time, or if the transformerFn failed before, the function is // reran. func NewTransformer[T, V any](transformerFn func(Result[T]) Result[V], source Data[T]) Data[V] { - return NewMerger(func(caches map[transformerCacheKeyType]Result[T]) Result[V] { - cache, ok := caches[transformerCacheKeyType{}] - if len(caches) != 1 || !ok { + return NewListMerger(func(caches []Result[T]) Result[V] { + if len(caches) != 1 { panic(fmt.Errorf("invalid cache for transformer cache: %v", caches)) } - return transformerFn(cache) - }, map[transformerCacheKeyType]Data[T]{ - {}: source, - }) + return transformerFn(caches[0]) + }, []Data[T]{source}) } // NewSource creates a new cache that generates some data. This // will always be called since we don't know the origin of the data and -// if it needs to be updated or not. +// if it needs to be updated or not. sourceFn MUST be thread-safe. func NewSource[T any](sourceFn func() Result[T]) Data[T] { c := source[T](sourceFn) return &c @@ -222,25 +266,24 @@ func NewStaticSource[T any](staticFn func() Result[T]) Data[T] { } type static[T any] struct { + once sync.Once fn func() Result[T] - result *Result[T] + result Result[T] } func (c *static[T]) Get() Result[T] { - if c.result == nil { - result := c.fn() - c.result = &result - } - return *c.result + c.once.Do(func() { + c.result = c.fn() + }) + return c.result } -// Replaceable is a cache that carries the result even when the -// cache is replaced. 
The cache can be replaced atomically (without any -// lock held). This is the type that should typically be stored in +// Replaceable is a cache that carries the result even when the cache is +// replaced. This is the type that should typically be stored in // structs. type Replaceable[T any] struct { cache atomic.Pointer[Data[T]] - result *Result[T] + result atomic.Pointer[Result[T]] } // Get retrieves the data from the underlying source. [Replaceable] @@ -251,14 +294,19 @@ type Replaceable[T any] struct { // failure is returned. func (c *Replaceable[T]) Get() Result[T] { result := (*c.cache.Load()).Get() - if result.Err != nil && c.result != nil && c.result.Err == nil { - return *c.result + + for { + cResult := c.result.Load() + if result.Err != nil && cResult != nil && cResult.Err == nil { + return *cResult + } + if c.result.CompareAndSwap(cResult, &result) { + return result + } } - c.result = &result - return *c.result } -// Replace changes the cache in a thread-safe way. +// Replace changes the cache. func (c *Replaceable[T]) Replace(cache Data[T]) { c.cache.Swap(&cache) } diff --git a/cluster-autoscaler/vendor/k8s.io/kube-openapi/pkg/handler/handler.go b/cluster-autoscaler/vendor/k8s.io/kube-openapi/pkg/handler/handler.go index 37cb96f1be11..0eb3f2360d56 100644 --- a/cluster-autoscaler/vendor/k8s.io/kube-openapi/pkg/handler/handler.go +++ b/cluster-autoscaler/vendor/k8s.io/kube-openapi/pkg/handler/handler.go @@ -22,13 +22,12 @@ import ( "fmt" "net/http" "strconv" - "sync" "time" "github.com/NYTimes/gziphandler" "github.com/emicklei/go-restful/v3" "github.com/golang/protobuf/proto" - openapi_v2 "github.com/google/gnostic/openapiv2" + openapi_v2 "github.com/google/gnostic-models/openapiv2" "github.com/google/uuid" "github.com/munnerz/goautoneg" klog "k8s.io/klog/v2" @@ -119,16 +118,14 @@ func ToProtoBinary(json []byte) ([]byte, error) { // RegisterOpenAPIVersionedService registers a handler to provide access to provided swagger spec. 
// // Deprecated: use OpenAPIService.RegisterOpenAPIVersionedService instead. -func RegisterOpenAPIVersionedService(spec *spec.Swagger, servePath string, handler common.PathHandler) (*OpenAPIService, error) { +func RegisterOpenAPIVersionedService(spec *spec.Swagger, servePath string, handler common.PathHandler) *OpenAPIService { o := NewOpenAPIService(spec) - return o, o.RegisterOpenAPIVersionedService(servePath, handler) + o.RegisterOpenAPIVersionedService(servePath, handler) + return o } // RegisterOpenAPIVersionedService registers a handler to provide access to provided swagger spec. -func (o *OpenAPIService) RegisterOpenAPIVersionedService(servePath string, handler common.PathHandler) error { - // Mutex protects the cache chain - var mutex sync.Mutex - +func (o *OpenAPIService) RegisterOpenAPIVersionedService(servePath string, handler common.PathHandler) { accepted := []struct { Type string SubType string @@ -157,9 +154,7 @@ func (o *OpenAPIService) RegisterOpenAPIVersionedService(servePath string, handl continue } // serve the first matching media type in the sorted clause list - mutex.Lock() result := accepts.GetDataAndEtag.Get() - mutex.Unlock() if result.Err != nil { klog.Errorf("Error in OpenAPI handler: %s", result.Err) // only return a 503 if we have no older cache data to serve @@ -183,8 +178,6 @@ func (o *OpenAPIService) RegisterOpenAPIVersionedService(servePath string, handl return }), )) - - return nil } // BuildAndRegisterOpenAPIVersionedService builds the spec and registers a handler to provide access to it. 
@@ -203,5 +196,6 @@ func BuildAndRegisterOpenAPIVersionedServiceFromRoutes(servePath string, routeCo return nil, err } o := NewOpenAPIService(spec) - return o, o.RegisterOpenAPIVersionedService(servePath, handler) + o.RegisterOpenAPIVersionedService(servePath, handler) + return o, nil } diff --git a/cluster-autoscaler/vendor/k8s.io/kube-openapi/pkg/handler3/handler.go b/cluster-autoscaler/vendor/k8s.io/kube-openapi/pkg/handler3/handler.go index 66b7a68da671..2263e2f32b75 100644 --- a/cluster-autoscaler/vendor/k8s.io/kube-openapi/pkg/handler3/handler.go +++ b/cluster-autoscaler/vendor/k8s.io/kube-openapi/pkg/handler3/handler.go @@ -30,7 +30,7 @@ import ( "time" "github.com/golang/protobuf/proto" - openapi_v3 "github.com/google/gnostic/openapiv3" + openapi_v3 "github.com/google/gnostic-models/openapiv3" "github.com/google/uuid" "github.com/munnerz/goautoneg" "k8s.io/klog/v2" diff --git a/cluster-autoscaler/vendor/k8s.io/kube-openapi/pkg/util/proto/document.go b/cluster-autoscaler/vendor/k8s.io/kube-openapi/pkg/util/proto/document.go index 763923dfffcb..5789e67ab7d9 100644 --- a/cluster-autoscaler/vendor/k8s.io/kube-openapi/pkg/util/proto/document.go +++ b/cluster-autoscaler/vendor/k8s.io/kube-openapi/pkg/util/proto/document.go @@ -21,7 +21,7 @@ import ( "sort" "strings" - openapi_v2 "github.com/google/gnostic/openapiv2" + openapi_v2 "github.com/google/gnostic-models/openapiv2" "gopkg.in/yaml.v2" ) diff --git a/cluster-autoscaler/vendor/k8s.io/kube-openapi/pkg/util/proto/document_v3.go b/cluster-autoscaler/vendor/k8s.io/kube-openapi/pkg/util/proto/document_v3.go index 519dcf2ebaee..d9f2896e3532 100644 --- a/cluster-autoscaler/vendor/k8s.io/kube-openapi/pkg/util/proto/document_v3.go +++ b/cluster-autoscaler/vendor/k8s.io/kube-openapi/pkg/util/proto/document_v3.go @@ -21,7 +21,7 @@ import ( "reflect" "strings" - openapi_v3 "github.com/google/gnostic/openapiv3" + openapi_v3 "github.com/google/gnostic-models/openapiv3" "gopkg.in/yaml.v3" ) diff --git 
a/cluster-autoscaler/vendor/k8s.io/kube-openapi/pkg/validation/spec/gnostic.go b/cluster-autoscaler/vendor/k8s.io/kube-openapi/pkg/validation/spec/gnostic.go index 406a09d9d1e0..6a77f2ac8229 100644 --- a/cluster-autoscaler/vendor/k8s.io/kube-openapi/pkg/validation/spec/gnostic.go +++ b/cluster-autoscaler/vendor/k8s.io/kube-openapi/pkg/validation/spec/gnostic.go @@ -21,7 +21,7 @@ import ( "strconv" "github.com/go-openapi/jsonreference" - openapi_v2 "github.com/google/gnostic/openapiv2" + openapi_v2 "github.com/google/gnostic-models/openapiv2" ) // Interfaces diff --git a/cluster-autoscaler/vendor/k8s.io/kube-openapi/pkg/validation/strfmt/format.go b/cluster-autoscaler/vendor/k8s.io/kube-openapi/pkg/validation/strfmt/format.go index 75c50053b1b8..c85067a26340 100644 --- a/cluster-autoscaler/vendor/k8s.io/kube-openapi/pkg/validation/strfmt/format.go +++ b/cluster-autoscaler/vendor/k8s.io/kube-openapi/pkg/validation/strfmt/format.go @@ -16,13 +16,10 @@ package strfmt import ( "encoding" - "fmt" "reflect" "strings" "sync" - "time" - "github.com/mitchellh/mapstructure" "k8s.io/kube-openapi/pkg/validation/errors" ) @@ -50,7 +47,6 @@ type Registry interface { ContainsName(string) bool Validates(string, string) bool Parse(string, string) (interface{}, error) - MapStructureHookFunc() mapstructure.DecodeHookFunc } type knownFormat struct { @@ -92,83 +88,6 @@ func NewSeededFormats(seeds []knownFormat, normalizer NameNormalizer) Registry { } } -// MapStructureHookFunc is a decode hook function for mapstructure -func (f *defaultFormats) MapStructureHookFunc() mapstructure.DecodeHookFunc { - return func(from reflect.Type, to reflect.Type, data interface{}) (interface{}, error) { - if from.Kind() != reflect.String { - return data, nil - } - for _, v := range f.data { - tpe, _ := f.GetType(v.Name) - if to == tpe { - switch v.Name { - case "date": - d, err := time.Parse(RFC3339FullDate, data.(string)) - if err != nil { - return nil, err - } - return Date(d), nil - case "datetime": - 
input := data.(string) - if len(input) == 0 { - return nil, fmt.Errorf("empty string is an invalid datetime format") - } - return ParseDateTime(input) - case "duration": - dur, err := ParseDuration(data.(string)) - if err != nil { - return nil, err - } - return Duration(dur), nil - case "uri": - return URI(data.(string)), nil - case "email": - return Email(data.(string)), nil - case "uuid": - return UUID(data.(string)), nil - case "uuid3": - return UUID3(data.(string)), nil - case "uuid4": - return UUID4(data.(string)), nil - case "uuid5": - return UUID5(data.(string)), nil - case "hostname": - return Hostname(data.(string)), nil - case "ipv4": - return IPv4(data.(string)), nil - case "ipv6": - return IPv6(data.(string)), nil - case "cidr": - return CIDR(data.(string)), nil - case "mac": - return MAC(data.(string)), nil - case "isbn": - return ISBN(data.(string)), nil - case "isbn10": - return ISBN10(data.(string)), nil - case "isbn13": - return ISBN13(data.(string)), nil - case "creditcard": - return CreditCard(data.(string)), nil - case "ssn": - return SSN(data.(string)), nil - case "hexcolor": - return HexColor(data.(string)), nil - case "rgbcolor": - return RGBColor(data.(string)), nil - case "byte": - return Base64(data.(string)), nil - case "password": - return Password(data.(string)), nil - default: - return nil, errors.InvalidTypeName(v.Name) - } - } - } - return data, nil - } -} - // Add adds a new format, return true if this was a new item instead of a replacement func (f *defaultFormats) Add(name string, strfmt Format, validator Validator) bool { f.Lock() diff --git a/cluster-autoscaler/vendor/k8s.io/kube-proxy/config/v1alpha1/doc.go b/cluster-autoscaler/vendor/k8s.io/kube-proxy/config/v1alpha1/doc.go deleted file mode 100644 index cb62b1cc1343..000000000000 --- a/cluster-autoscaler/vendor/k8s.io/kube-proxy/config/v1alpha1/doc.go +++ /dev/null @@ -1,21 +0,0 @@ -/* -Copyright 2018 The Kubernetes Authors. 
- -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -// +k8s:deepcopy-gen=package -// +k8s:openapi-gen=true -// +groupName=kubeproxy.config.k8s.io - -package v1alpha1 // import "k8s.io/kube-proxy/config/v1alpha1" diff --git a/cluster-autoscaler/vendor/k8s.io/kube-proxy/config/v1alpha1/register.go b/cluster-autoscaler/vendor/k8s.io/kube-proxy/config/v1alpha1/register.go deleted file mode 100644 index 16ed248fae09..000000000000 --- a/cluster-autoscaler/vendor/k8s.io/kube-proxy/config/v1alpha1/register.go +++ /dev/null @@ -1,43 +0,0 @@ -/* -Copyright 2018 The Kubernetes Authors. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. 
-*/ - -package v1alpha1 - -import ( - "k8s.io/apimachinery/pkg/runtime" - "k8s.io/apimachinery/pkg/runtime/schema" -) - -// GroupName is the group name used in this package -const GroupName = "kubeproxy.config.k8s.io" - -// SchemeGroupVersion is group version used to register these objects -var SchemeGroupVersion = schema.GroupVersion{Group: GroupName, Version: "v1alpha1"} - -var ( - // SchemeBuilder is the scheme builder with scheme init functions to run for this API package - SchemeBuilder = runtime.NewSchemeBuilder(addKnownTypes) - // AddToScheme is a global function that registers this API group & version to a scheme - AddToScheme = SchemeBuilder.AddToScheme -) - -// addKnownTypes registers known types to the given scheme -func addKnownTypes(scheme *runtime.Scheme) error { - scheme.AddKnownTypes(SchemeGroupVersion, - &KubeProxyConfiguration{}, - ) - return nil -} diff --git a/cluster-autoscaler/vendor/k8s.io/kube-proxy/config/v1alpha1/types.go b/cluster-autoscaler/vendor/k8s.io/kube-proxy/config/v1alpha1/types.go deleted file mode 100644 index 3c6f3ecf6cce..000000000000 --- a/cluster-autoscaler/vendor/k8s.io/kube-proxy/config/v1alpha1/types.go +++ /dev/null @@ -1,201 +0,0 @@ -/* -Copyright 2017 The Kubernetes Authors. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. 
-*/ - -package v1alpha1 - -import ( - metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" - componentbaseconfigv1alpha1 "k8s.io/component-base/config/v1alpha1" -) - -// KubeProxyIPTablesConfiguration contains iptables-related configuration -// details for the Kubernetes proxy server. -type KubeProxyIPTablesConfiguration struct { - // masqueradeBit is the bit of the iptables fwmark space to use for SNAT if using - // the pure iptables proxy mode. Values must be within the range [0, 31]. - MasqueradeBit *int32 `json:"masqueradeBit"` - // masqueradeAll tells kube-proxy to SNAT everything if using the pure iptables proxy mode. - MasqueradeAll bool `json:"masqueradeAll"` - // LocalhostNodePorts tells kube-proxy to allow service NodePorts to be accessed via - // localhost (iptables mode only) - LocalhostNodePorts *bool `json:"localhostNodePorts"` - // syncPeriod is the period that iptables rules are refreshed (e.g. '5s', '1m', - // '2h22m'). Must be greater than 0. - SyncPeriod metav1.Duration `json:"syncPeriod"` - // minSyncPeriod is the minimum period that iptables rules are refreshed (e.g. '5s', '1m', - // '2h22m'). - MinSyncPeriod metav1.Duration `json:"minSyncPeriod"` -} - -// KubeProxyIPVSConfiguration contains ipvs-related configuration -// details for the Kubernetes proxy server. -type KubeProxyIPVSConfiguration struct { - // syncPeriod is the period that ipvs rules are refreshed (e.g. '5s', '1m', - // '2h22m'). Must be greater than 0. - SyncPeriod metav1.Duration `json:"syncPeriod"` - // minSyncPeriod is the minimum period that ipvs rules are refreshed (e.g. '5s', '1m', - // '2h22m'). - MinSyncPeriod metav1.Duration `json:"minSyncPeriod"` - // ipvs scheduler - Scheduler string `json:"scheduler"` - // excludeCIDRs is a list of CIDR's which the ipvs proxier should not touch - // when cleaning up ipvs services. 
- ExcludeCIDRs []string `json:"excludeCIDRs"` - // strict ARP configure arp_ignore and arp_announce to avoid answering ARP queries - // from kube-ipvs0 interface - StrictARP bool `json:"strictARP"` - // tcpTimeout is the timeout value used for idle IPVS TCP sessions. - // The default value is 0, which preserves the current timeout value on the system. - TCPTimeout metav1.Duration `json:"tcpTimeout"` - // tcpFinTimeout is the timeout value used for IPVS TCP sessions after receiving a FIN. - // The default value is 0, which preserves the current timeout value on the system. - TCPFinTimeout metav1.Duration `json:"tcpFinTimeout"` - // udpTimeout is the timeout value used for IPVS UDP packets. - // The default value is 0, which preserves the current timeout value on the system. - UDPTimeout metav1.Duration `json:"udpTimeout"` -} - -// KubeProxyConntrackConfiguration contains conntrack settings for -// the Kubernetes proxy server. -type KubeProxyConntrackConfiguration struct { - // maxPerCore is the maximum number of NAT connections to track - // per CPU core (0 to leave the limit as-is and ignore min). - MaxPerCore *int32 `json:"maxPerCore"` - // min is the minimum value of connect-tracking records to allocate, - // regardless of conntrackMaxPerCore (set maxPerCore=0 to leave the limit as-is). - Min *int32 `json:"min"` - // tcpEstablishedTimeout is how long an idle TCP connection will be kept open - // (e.g. '2s'). Must be greater than 0 to set. - TCPEstablishedTimeout *metav1.Duration `json:"tcpEstablishedTimeout"` - // tcpCloseWaitTimeout is how long an idle conntrack entry - // in CLOSE_WAIT state will remain in the conntrack - // table. (e.g. '60s'). Must be greater than 0 to set. - TCPCloseWaitTimeout *metav1.Duration `json:"tcpCloseWaitTimeout"` -} - -// KubeProxyWinkernelConfiguration contains Windows/HNS settings for -// the Kubernetes proxy server. 
-type KubeProxyWinkernelConfiguration struct { - // networkName is the name of the network kube-proxy will use - // to create endpoints and policies - NetworkName string `json:"networkName"` - // sourceVip is the IP address of the source VIP endoint used for - // NAT when loadbalancing - SourceVip string `json:"sourceVip"` - // enableDSR tells kube-proxy whether HNS policies should be created - // with DSR - EnableDSR bool `json:"enableDSR"` - // RootHnsEndpointName is the name of hnsendpoint that is attached to - // l2bridge for root network namespace - RootHnsEndpointName string `json:"rootHnsEndpointName"` - // ForwardHealthCheckVip forwards service VIP for health check port on - // Windows - ForwardHealthCheckVip bool `json:"forwardHealthCheckVip"` -} - -// DetectLocalConfiguration contains optional settings related to DetectLocalMode option -type DetectLocalConfiguration struct { - // BridgeInterface is a string argument which represents a single bridge interface name. - // Kube-proxy considers traffic as local if originating from this given bridge. - // This argument should be set if DetectLocalMode is set to LocalModeBridgeInterface. - BridgeInterface string `json:"bridgeInterface"` - // InterfaceNamePrefix is a string argument which represents a single interface prefix name. - // Kube-proxy considers traffic as local if originating from one or more interfaces which match - // the given prefix. This argument should be set if DetectLocalMode is set to LocalModeInterfaceNamePrefix. - InterfaceNamePrefix string `json:"interfaceNamePrefix"` -} - -// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object - -// KubeProxyConfiguration contains everything necessary to configure the -// Kubernetes proxy server. -type KubeProxyConfiguration struct { - metav1.TypeMeta `json:",inline"` - - // featureGates is a map of feature names to bools that enable or disable alpha/experimental features. 
- FeatureGates map[string]bool `json:"featureGates,omitempty"` - - // bindAddress is the IP address for the proxy server to serve on (set to 0.0.0.0 - // for all interfaces) - BindAddress string `json:"bindAddress"` - // healthzBindAddress is the IP address and port for the health check server to serve on, - // defaulting to 0.0.0.0:10256 - HealthzBindAddress string `json:"healthzBindAddress"` - // metricsBindAddress is the IP address and port for the metrics server to serve on, - // defaulting to 127.0.0.1:10249 (set to 0.0.0.0 for all interfaces) - MetricsBindAddress string `json:"metricsBindAddress"` - // bindAddressHardFail, if true, kube-proxy will treat failure to bind to a port as fatal and exit - BindAddressHardFail bool `json:"bindAddressHardFail"` - // enableProfiling enables profiling via web interface on /debug/pprof handler. - // Profiling handlers will be handled by metrics server. - EnableProfiling bool `json:"enableProfiling"` - // clusterCIDR is the CIDR range of the pods in the cluster. It is used to - // bridge traffic coming from outside of the cluster. If not provided, - // no off-cluster bridging will be performed. - ClusterCIDR string `json:"clusterCIDR"` - // hostnameOverride, if non-empty, will be used as the identity instead of the actual hostname. - HostnameOverride string `json:"hostnameOverride"` - // clientConnection specifies the kubeconfig file and client connection settings for the proxy - // server to use when communicating with the apiserver. - ClientConnection componentbaseconfigv1alpha1.ClientConnectionConfiguration `json:"clientConnection"` - // iptables contains iptables-related configuration options. - IPTables KubeProxyIPTablesConfiguration `json:"iptables"` - // ipvs contains ipvs-related configuration options. - IPVS KubeProxyIPVSConfiguration `json:"ipvs"` - // oomScoreAdj is the oom-score-adj value for kube-proxy process. 
Values must be within - // the range [-1000, 1000] - OOMScoreAdj *int32 `json:"oomScoreAdj"` - // mode specifies which proxy mode to use. - Mode ProxyMode `json:"mode"` - // portRange is the range of host ports (beginPort-endPort, inclusive) that may be consumed - // in order to proxy service traffic. If unspecified (0-0) then ports will be randomly chosen. - PortRange string `json:"portRange"` - // conntrack contains conntrack-related configuration options. - Conntrack KubeProxyConntrackConfiguration `json:"conntrack"` - // configSyncPeriod is how often configuration from the apiserver is refreshed. Must be greater - // than 0. - ConfigSyncPeriod metav1.Duration `json:"configSyncPeriod"` - // nodePortAddresses is the --nodeport-addresses value for kube-proxy process. Values must be valid - // IP blocks. These values are as a parameter to select the interfaces where nodeport works. - // In case someone would like to expose a service on localhost for local visit and some other interfaces for - // particular purpose, a list of IP blocks would do that. - // If set it to "127.0.0.0/8", kube-proxy will only select the loopback interface for NodePort. - // If set it to a non-zero IP block, kube-proxy will filter that down to just the IPs that applied to the node. - // An empty string slice is meant to select all network interfaces. - NodePortAddresses []string `json:"nodePortAddresses"` - // winkernel contains winkernel-related configuration options. - Winkernel KubeProxyWinkernelConfiguration `json:"winkernel"` - // ShowHiddenMetricsForVersion is the version for which you want to show hidden metrics. - ShowHiddenMetricsForVersion string `json:"showHiddenMetricsForVersion"` - // DetectLocalMode determines mode to use for detecting local traffic, defaults to LocalModeClusterCIDR - DetectLocalMode LocalMode `json:"detectLocalMode"` - // DetectLocal contains optional configuration settings related to DetectLocalMode. 
- DetectLocal DetectLocalConfiguration `json:"detectLocal"` -} - -// ProxyMode represents modes used by the Kubernetes proxy server. -// -// Currently, two modes of proxy are available on Linux platforms: 'iptables' and 'ipvs'. -// One mode of proxy is available on Windows platforms: 'kernelspace'. -// -// If the proxy mode is unspecified, the best-available proxy mode will be used (currently this -// is `iptables` on Linux and `kernelspace` on Windows). If the selected proxy mode cannot be -// used (due to lack of kernel support, missing userspace components, etc) then kube-proxy -// will exit with an error. -type ProxyMode string - -// LocalMode represents modes to detect local traffic from the node -type LocalMode string diff --git a/cluster-autoscaler/vendor/k8s.io/kube-proxy/config/v1alpha1/zz_generated.deepcopy.go b/cluster-autoscaler/vendor/k8s.io/kube-proxy/config/v1alpha1/zz_generated.deepcopy.go deleted file mode 100644 index 5ecdfb06c68e..000000000000 --- a/cluster-autoscaler/vendor/k8s.io/kube-proxy/config/v1alpha1/zz_generated.deepcopy.go +++ /dev/null @@ -1,198 +0,0 @@ -//go:build !ignore_autogenerated -// +build !ignore_autogenerated - -/* -Copyright The Kubernetes Authors. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -// Code generated by deepcopy-gen. DO NOT EDIT. 
- -package v1alpha1 - -import ( - v1 "k8s.io/apimachinery/pkg/apis/meta/v1" - runtime "k8s.io/apimachinery/pkg/runtime" -) - -// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. -func (in *DetectLocalConfiguration) DeepCopyInto(out *DetectLocalConfiguration) { - *out = *in - return -} - -// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new DetectLocalConfiguration. -func (in *DetectLocalConfiguration) DeepCopy() *DetectLocalConfiguration { - if in == nil { - return nil - } - out := new(DetectLocalConfiguration) - in.DeepCopyInto(out) - return out -} - -// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. -func (in *KubeProxyConfiguration) DeepCopyInto(out *KubeProxyConfiguration) { - *out = *in - out.TypeMeta = in.TypeMeta - if in.FeatureGates != nil { - in, out := &in.FeatureGates, &out.FeatureGates - *out = make(map[string]bool, len(*in)) - for key, val := range *in { - (*out)[key] = val - } - } - out.ClientConnection = in.ClientConnection - in.IPTables.DeepCopyInto(&out.IPTables) - in.IPVS.DeepCopyInto(&out.IPVS) - if in.OOMScoreAdj != nil { - in, out := &in.OOMScoreAdj, &out.OOMScoreAdj - *out = new(int32) - **out = **in - } - in.Conntrack.DeepCopyInto(&out.Conntrack) - out.ConfigSyncPeriod = in.ConfigSyncPeriod - if in.NodePortAddresses != nil { - in, out := &in.NodePortAddresses, &out.NodePortAddresses - *out = make([]string, len(*in)) - copy(*out, *in) - } - out.Winkernel = in.Winkernel - out.DetectLocal = in.DetectLocal - return -} - -// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new KubeProxyConfiguration. 
-func (in *KubeProxyConfiguration) DeepCopy() *KubeProxyConfiguration { - if in == nil { - return nil - } - out := new(KubeProxyConfiguration) - in.DeepCopyInto(out) - return out -} - -// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object. -func (in *KubeProxyConfiguration) DeepCopyObject() runtime.Object { - if c := in.DeepCopy(); c != nil { - return c - } - return nil -} - -// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. -func (in *KubeProxyConntrackConfiguration) DeepCopyInto(out *KubeProxyConntrackConfiguration) { - *out = *in - if in.MaxPerCore != nil { - in, out := &in.MaxPerCore, &out.MaxPerCore - *out = new(int32) - **out = **in - } - if in.Min != nil { - in, out := &in.Min, &out.Min - *out = new(int32) - **out = **in - } - if in.TCPEstablishedTimeout != nil { - in, out := &in.TCPEstablishedTimeout, &out.TCPEstablishedTimeout - *out = new(v1.Duration) - **out = **in - } - if in.TCPCloseWaitTimeout != nil { - in, out := &in.TCPCloseWaitTimeout, &out.TCPCloseWaitTimeout - *out = new(v1.Duration) - **out = **in - } - return -} - -// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new KubeProxyConntrackConfiguration. -func (in *KubeProxyConntrackConfiguration) DeepCopy() *KubeProxyConntrackConfiguration { - if in == nil { - return nil - } - out := new(KubeProxyConntrackConfiguration) - in.DeepCopyInto(out) - return out -} - -// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. 
-func (in *KubeProxyIPTablesConfiguration) DeepCopyInto(out *KubeProxyIPTablesConfiguration) { - *out = *in - if in.MasqueradeBit != nil { - in, out := &in.MasqueradeBit, &out.MasqueradeBit - *out = new(int32) - **out = **in - } - if in.LocalhostNodePorts != nil { - in, out := &in.LocalhostNodePorts, &out.LocalhostNodePorts - *out = new(bool) - **out = **in - } - out.SyncPeriod = in.SyncPeriod - out.MinSyncPeriod = in.MinSyncPeriod - return -} - -// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new KubeProxyIPTablesConfiguration. -func (in *KubeProxyIPTablesConfiguration) DeepCopy() *KubeProxyIPTablesConfiguration { - if in == nil { - return nil - } - out := new(KubeProxyIPTablesConfiguration) - in.DeepCopyInto(out) - return out -} - -// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. -func (in *KubeProxyIPVSConfiguration) DeepCopyInto(out *KubeProxyIPVSConfiguration) { - *out = *in - out.SyncPeriod = in.SyncPeriod - out.MinSyncPeriod = in.MinSyncPeriod - if in.ExcludeCIDRs != nil { - in, out := &in.ExcludeCIDRs, &out.ExcludeCIDRs - *out = make([]string, len(*in)) - copy(*out, *in) - } - out.TCPTimeout = in.TCPTimeout - out.TCPFinTimeout = in.TCPFinTimeout - out.UDPTimeout = in.UDPTimeout - return -} - -// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new KubeProxyIPVSConfiguration. -func (in *KubeProxyIPVSConfiguration) DeepCopy() *KubeProxyIPVSConfiguration { - if in == nil { - return nil - } - out := new(KubeProxyIPVSConfiguration) - in.DeepCopyInto(out) - return out -} - -// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. -func (in *KubeProxyWinkernelConfiguration) DeepCopyInto(out *KubeProxyWinkernelConfiguration) { - *out = *in - return -} - -// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new KubeProxyWinkernelConfiguration. 
-func (in *KubeProxyWinkernelConfiguration) DeepCopy() *KubeProxyWinkernelConfiguration { - if in == nil { - return nil - } - out := new(KubeProxyWinkernelConfiguration) - in.DeepCopyInto(out) - return out -} diff --git a/cluster-autoscaler/vendor/k8s.io/kube-scheduler/config/v1/types.go b/cluster-autoscaler/vendor/k8s.io/kube-scheduler/config/v1/types.go index 703516fb78c3..2ee4e4cddd82 100644 --- a/cluster-autoscaler/vendor/k8s.io/kube-scheduler/config/v1/types.go +++ b/cluster-autoscaler/vendor/k8s.io/kube-scheduler/config/v1/types.go @@ -89,6 +89,12 @@ type KubeSchedulerConfiguration struct { // with the extender. These extenders are shared by all scheduler profiles. // +listType=set Extenders []Extender `json:"extenders,omitempty"` + + // DelayCacheUntilActive specifies when to start caching. If this is true and leader election is enabled, + // the scheduler will wait to fill informer caches until it is the leader. Doing so will have slower + // failover with the benefit of lower memory overhead while waiting to become leader. + // Defaults to false. + DelayCacheUntilActive bool `json:"delayCacheUntilActive,omitempty"` } // DecodeNestedObjects decodes plugin args for known types. diff --git a/cluster-autoscaler/vendor/k8s.io/kube-scheduler/config/v1beta2/register.go b/cluster-autoscaler/vendor/k8s.io/kube-scheduler/config/v1beta2/register.go deleted file mode 100644 index 59fc014a9306..000000000000 --- a/cluster-autoscaler/vendor/k8s.io/kube-scheduler/config/v1beta2/register.go +++ /dev/null @@ -1,50 +0,0 @@ -/* -Copyright 2021 The Kubernetes Authors. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. 
-You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -package v1beta2 - -import ( - "k8s.io/apimachinery/pkg/runtime" - "k8s.io/apimachinery/pkg/runtime/schema" -) - -// GroupName is the group name used in this package -const GroupName = "kubescheduler.config.k8s.io" - -// SchemeGroupVersion is group version used to register these objects -var SchemeGroupVersion = schema.GroupVersion{Group: GroupName, Version: "v1beta2"} - -var ( - // SchemeBuilder is the scheme builder with scheme init functions to run for this API package - SchemeBuilder = runtime.NewSchemeBuilder(addKnownTypes) - // AddToScheme is a global function that registers this API group & version to a scheme - AddToScheme = SchemeBuilder.AddToScheme -) - -// addKnownTypes registers known types to the given scheme -func addKnownTypes(scheme *runtime.Scheme) error { - scheme.AddKnownTypes(SchemeGroupVersion, - &KubeSchedulerConfiguration{}, - &DefaultPreemptionArgs{}, - &InterPodAffinityArgs{}, - &NodeResourcesBalancedAllocationArgs{}, - &NodeResourcesFitArgs{}, - &PodTopologySpreadArgs{}, - &VolumeBindingArgs{}, - &NodeAffinityArgs{}, - ) - return nil -} diff --git a/cluster-autoscaler/vendor/k8s.io/kube-scheduler/config/v1beta2/types.go b/cluster-autoscaler/vendor/k8s.io/kube-scheduler/config/v1beta2/types.go deleted file mode 100644 index 0e47967adb4b..000000000000 --- a/cluster-autoscaler/vendor/k8s.io/kube-scheduler/config/v1beta2/types.go +++ /dev/null @@ -1,369 +0,0 @@ -/* -Copyright 2021 The Kubernetes Authors. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. 
-You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -package v1beta2 - -import ( - "bytes" - "fmt" - - metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" - "k8s.io/apimachinery/pkg/runtime" - componentbaseconfigv1alpha1 "k8s.io/component-base/config/v1alpha1" - "sigs.k8s.io/yaml" -) - -const ( - // SchedulerDefaultLockObjectNamespace defines default scheduler lock object namespace ("kube-system") - SchedulerDefaultLockObjectNamespace string = metav1.NamespaceSystem - - // SchedulerDefaultLockObjectName defines default scheduler lock object name ("kube-scheduler") - SchedulerDefaultLockObjectName = "kube-scheduler" - - // SchedulerDefaultProviderName defines the default provider names - SchedulerDefaultProviderName = "DefaultProvider" -) - -// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object - -// KubeSchedulerConfiguration configures a scheduler -type KubeSchedulerConfiguration struct { - metav1.TypeMeta `json:",inline"` - - // Parallelism defines the amount of parallelism in algorithms for scheduling a Pods. Must be greater than 0. Defaults to 16 - Parallelism *int32 `json:"parallelism,omitempty"` - - // LeaderElection defines the configuration of leader election client. - LeaderElection componentbaseconfigv1alpha1.LeaderElectionConfiguration `json:"leaderElection"` - - // ClientConnection specifies the kubeconfig file and client connection - // settings for the proxy server to use when communicating with the apiserver. - ClientConnection componentbaseconfigv1alpha1.ClientConnectionConfiguration `json:"clientConnection"` - - // Note: Both HealthzBindAddress and MetricsBindAddress fields are deprecated. 
- // Only empty address or port 0 is allowed. Anything else will fail validation. - // HealthzBindAddress is the IP address and port for the health check server to serve on. - HealthzBindAddress *string `json:"healthzBindAddress,omitempty"` - // MetricsBindAddress is the IP address and port for the metrics server to serve on. - MetricsBindAddress *string `json:"metricsBindAddress,omitempty"` - - // DebuggingConfiguration holds configuration for Debugging related features - // TODO: We might wanna make this a substruct like Debugging componentbaseconfigv1alpha1.DebuggingConfiguration - componentbaseconfigv1alpha1.DebuggingConfiguration `json:",inline"` - - // PercentageOfNodesToScore is the percentage of all nodes that once found feasible - // for running a pod, the scheduler stops its search for more feasible nodes in - // the cluster. This helps improve scheduler's performance. Scheduler always tries to find - // at least "minFeasibleNodesToFind" feasible nodes no matter what the value of this flag is. - // Example: if the cluster size is 500 nodes and the value of this flag is 30, - // then scheduler stops finding further feasible nodes once it finds 150 feasible ones. - // When the value is 0, default percentage (5%--50% based on the size of the cluster) of the - // nodes will be scored. - PercentageOfNodesToScore *int32 `json:"percentageOfNodesToScore,omitempty"` - - // PodInitialBackoffSeconds is the initial backoff for unschedulable pods. - // If specified, it must be greater than 0. If this value is null, the default value (1s) - // will be used. - PodInitialBackoffSeconds *int64 `json:"podInitialBackoffSeconds,omitempty"` - - // PodMaxBackoffSeconds is the max backoff for unschedulable pods. - // If specified, it must be greater than podInitialBackoffSeconds. If this value is null, - // the default value (10s) will be used. 
- PodMaxBackoffSeconds *int64 `json:"podMaxBackoffSeconds,omitempty"` - - // Profiles are scheduling profiles that kube-scheduler supports. Pods can - // choose to be scheduled under a particular profile by setting its associated - // scheduler name. Pods that don't specify any scheduler name are scheduled - // with the "default-scheduler" profile, if present here. - // +listType=map - // +listMapKey=schedulerName - Profiles []KubeSchedulerProfile `json:"profiles,omitempty"` - - // Extenders are the list of scheduler extenders, each holding the values of how to communicate - // with the extender. These extenders are shared by all scheduler profiles. - // +listType=set - Extenders []Extender `json:"extenders,omitempty"` -} - -// DecodeNestedObjects decodes plugin args for known types. -func (c *KubeSchedulerConfiguration) DecodeNestedObjects(d runtime.Decoder) error { - var strictDecodingErrs []error - for i := range c.Profiles { - prof := &c.Profiles[i] - for j := range prof.PluginConfig { - err := prof.PluginConfig[j].decodeNestedObjects(d) - if err != nil { - decodingErr := fmt.Errorf("decoding .profiles[%d].pluginConfig[%d]: %w", i, j, err) - if runtime.IsStrictDecodingError(err) { - strictDecodingErrs = append(strictDecodingErrs, decodingErr) - } else { - return decodingErr - } - } - } - } - if len(strictDecodingErrs) > 0 { - return runtime.NewStrictDecodingError(strictDecodingErrs) - } - return nil -} - -// EncodeNestedObjects encodes plugin args. -func (c *KubeSchedulerConfiguration) EncodeNestedObjects(e runtime.Encoder) error { - for i := range c.Profiles { - prof := &c.Profiles[i] - for j := range prof.PluginConfig { - err := prof.PluginConfig[j].encodeNestedObjects(e) - if err != nil { - return fmt.Errorf("encoding .profiles[%d].pluginConfig[%d]: %w", i, j, err) - } - } - } - return nil -} - -// KubeSchedulerProfile is a scheduling profile. -type KubeSchedulerProfile struct { - // SchedulerName is the name of the scheduler associated to this profile. 
- // If SchedulerName matches with the pod's "spec.schedulerName", then the pod - // is scheduled with this profile. - SchedulerName *string `json:"schedulerName,omitempty"` - - // Plugins specify the set of plugins that should be enabled or disabled. - // Enabled plugins are the ones that should be enabled in addition to the - // default plugins. Disabled plugins are any of the default plugins that - // should be disabled. - // When no enabled or disabled plugin is specified for an extension point, - // default plugins for that extension point will be used if there is any. - // If a QueueSort plugin is specified, the same QueueSort Plugin and - // PluginConfig must be specified for all profiles. - Plugins *Plugins `json:"plugins,omitempty"` - - // PluginConfig is an optional set of custom plugin arguments for each plugin. - // Omitting config args for a plugin is equivalent to using the default config - // for that plugin. - // +listType=map - // +listMapKey=name - PluginConfig []PluginConfig `json:"pluginConfig,omitempty"` -} - -// Plugins include multiple extension points. When specified, the list of plugins for -// a particular extension point are the only ones enabled. If an extension point is -// omitted from the config, then the default set of plugins is used for that extension point. -// Enabled plugins are called in the order specified here, after default plugins. If they need to -// be invoked before default plugins, default plugins must be disabled and re-enabled here in desired order. -type Plugins struct { - // PreEnqueue is a list of plugins that should be invoked before adding pods to the scheduling queue. - PreEnqueue PluginSet `json:"preEnqueue,omitempty"` - - // QueueSort is a list of plugins that should be invoked when sorting pods in the scheduling queue. - QueueSort PluginSet `json:"queueSort,omitempty"` - - // PreFilter is a list of plugins that should be invoked at "PreFilter" extension point of the scheduling framework. 
- PreFilter PluginSet `json:"preFilter,omitempty"` - - // Filter is a list of plugins that should be invoked when filtering out nodes that cannot run the Pod. - Filter PluginSet `json:"filter,omitempty"` - - // PostFilter is a list of plugins that are invoked after filtering phase, but only when no feasible nodes were found for the pod. - PostFilter PluginSet `json:"postFilter,omitempty"` - - // PreScore is a list of plugins that are invoked before scoring. - PreScore PluginSet `json:"preScore,omitempty"` - - // Score is a list of plugins that should be invoked when ranking nodes that have passed the filtering phase. - Score PluginSet `json:"score,omitempty"` - - // Reserve is a list of plugins invoked when reserving/unreserving resources - // after a node is assigned to run the pod. - Reserve PluginSet `json:"reserve,omitempty"` - - // Permit is a list of plugins that control binding of a Pod. These plugins can prevent or delay binding of a Pod. - Permit PluginSet `json:"permit,omitempty"` - - // PreBind is a list of plugins that should be invoked before a pod is bound. - PreBind PluginSet `json:"preBind,omitempty"` - - // Bind is a list of plugins that should be invoked at "Bind" extension point of the scheduling framework. - // The scheduler call these plugins in order. Scheduler skips the rest of these plugins as soon as one returns success. - Bind PluginSet `json:"bind,omitempty"` - - // PostBind is a list of plugins that should be invoked after a pod is successfully bound. - PostBind PluginSet `json:"postBind,omitempty"` - - // MultiPoint is a simplified config section to enable plugins for all valid extension points. - MultiPoint PluginSet `json:"multiPoint,omitempty"` -} - -// PluginSet specifies enabled and disabled plugins for an extension point. -// If an array is empty, missing, or nil, default plugins at that extension point will be used. -type PluginSet struct { - // Enabled specifies plugins that should be enabled in addition to default plugins. 
- // If the default plugin is also configured in the scheduler config file, the weight of plugin will - // be overridden accordingly. - // These are called after default plugins and in the same order specified here. - // +listType=atomic - Enabled []Plugin `json:"enabled,omitempty"` - // Disabled specifies default plugins that should be disabled. - // When all default plugins need to be disabled, an array containing only one "*" should be provided. - // +listType=map - // +listMapKey=name - Disabled []Plugin `json:"disabled,omitempty"` -} - -// Plugin specifies a plugin name and its weight when applicable. Weight is used only for Score plugins. -type Plugin struct { - // Name defines the name of plugin - Name string `json:"name"` - // Weight defines the weight of plugin, only used for Score plugins. - Weight *int32 `json:"weight,omitempty"` -} - -// PluginConfig specifies arguments that should be passed to a plugin at the time of initialization. -// A plugin that is invoked at multiple extension points is initialized once. Args can have arbitrary structure. -// It is up to the plugin to process these Args. -type PluginConfig struct { - // Name defines the name of plugin being configured - Name string `json:"name"` - // Args defines the arguments passed to the plugins at the time of initialization. Args can have arbitrary structure. - Args runtime.RawExtension `json:"args,omitempty"` -} - -func (c *PluginConfig) decodeNestedObjects(d runtime.Decoder) error { - gvk := SchemeGroupVersion.WithKind(c.Name + "Args") - // dry-run to detect and skip out-of-tree plugin args. 
- if _, _, err := d.Decode(nil, &gvk, nil); runtime.IsNotRegisteredError(err) { - return nil - } - - var strictDecodingErr error - obj, parsedGvk, err := d.Decode(c.Args.Raw, &gvk, nil) - if err != nil { - decodingArgsErr := fmt.Errorf("decoding args for plugin %s: %w", c.Name, err) - if obj != nil && runtime.IsStrictDecodingError(err) { - strictDecodingErr = runtime.NewStrictDecodingError([]error{decodingArgsErr}) - } else { - return decodingArgsErr - } - } - if parsedGvk.GroupKind() != gvk.GroupKind() { - return fmt.Errorf("args for plugin %s were not of type %s, got %s", c.Name, gvk.GroupKind(), parsedGvk.GroupKind()) - } - c.Args.Object = obj - return strictDecodingErr -} - -func (c *PluginConfig) encodeNestedObjects(e runtime.Encoder) error { - if c.Args.Object == nil { - return nil - } - var buf bytes.Buffer - err := e.Encode(c.Args.Object, &buf) - if err != nil { - return err - } - // The encoder might be a YAML encoder, but the parent encoder expects - // JSON output, so we convert YAML back to JSON. - // This is a no-op if produces JSON. - json, err := yaml.YAMLToJSON(buf.Bytes()) - if err != nil { - return err - } - c.Args.Raw = json - return nil -} - -// Extender holds the parameters used to communicate with the extender. If a verb is unspecified/empty, -// it is assumed that the extender chose not to provide that extension. -type Extender struct { - // URLPrefix at which the extender is available - URLPrefix string `json:"urlPrefix"` - // Verb for the filter call, empty if not supported. This verb is appended to the URLPrefix when issuing the filter call to extender. - FilterVerb string `json:"filterVerb,omitempty"` - // Verb for the preempt call, empty if not supported. This verb is appended to the URLPrefix when issuing the preempt call to extender. - PreemptVerb string `json:"preemptVerb,omitempty"` - // Verb for the prioritize call, empty if not supported. This verb is appended to the URLPrefix when issuing the prioritize call to extender. 
- PrioritizeVerb string `json:"prioritizeVerb,omitempty"` - // The numeric multiplier for the node scores that the prioritize call generates. - // The weight should be a positive integer - Weight int64 `json:"weight,omitempty"` - // Verb for the bind call, empty if not supported. This verb is appended to the URLPrefix when issuing the bind call to extender. - // If this method is implemented by the extender, it is the extender's responsibility to bind the pod to apiserver. Only one extender - // can implement this function. - BindVerb string `json:"bindVerb,omitempty"` - // EnableHTTPS specifies whether https should be used to communicate with the extender - EnableHTTPS bool `json:"enableHTTPS,omitempty"` - // TLSConfig specifies the transport layer security config - TLSConfig *ExtenderTLSConfig `json:"tlsConfig,omitempty"` - // HTTPTimeout specifies the timeout duration for a call to the extender. Filter timeout fails the scheduling of the pod. Prioritize - // timeout is ignored, k8s/other extenders priorities are used to select the node. - HTTPTimeout metav1.Duration `json:"httpTimeout,omitempty"` - // NodeCacheCapable specifies that the extender is capable of caching node information, - // so the scheduler should only send minimal information about the eligible nodes - // assuming that the extender already cached full details of all nodes in the cluster - NodeCacheCapable bool `json:"nodeCacheCapable,omitempty"` - // ManagedResources is a list of extended resources that are managed by - // this extender. - // - A pod will be sent to the extender on the Filter, Prioritize and Bind - // (if the extender is the binder) phases iff the pod requests at least - // one of the extended resources in this list. If empty or unspecified, - // all pods will be sent to this extender. - // - If IgnoredByScheduler is set to true for a resource, kube-scheduler - // will skip checking the resource in predicates. 
- // +optional - // +listType=atomic - ManagedResources []ExtenderManagedResource `json:"managedResources,omitempty"` - // Ignorable specifies if the extender is ignorable, i.e. scheduling should not - // fail when the extender returns an error or is not reachable. - Ignorable bool `json:"ignorable,omitempty"` -} - -// ExtenderManagedResource describes the arguments of extended resources -// managed by an extender. -type ExtenderManagedResource struct { - // Name is the extended resource name. - Name string `json:"name"` - // IgnoredByScheduler indicates whether kube-scheduler should ignore this - // resource when applying predicates. - IgnoredByScheduler bool `json:"ignoredByScheduler,omitempty"` -} - -// ExtenderTLSConfig contains settings to enable TLS with extender -type ExtenderTLSConfig struct { - // Server should be accessed without verifying the TLS certificate. For testing only. - Insecure bool `json:"insecure,omitempty"` - // ServerName is passed to the server for SNI and is used in the client to check server - // certificates against. If ServerName is empty, the hostname used to contact the - // server is used. - ServerName string `json:"serverName,omitempty"` - - // Server requires TLS client certificate authentication - CertFile string `json:"certFile,omitempty"` - // Server requires TLS client certificate authentication - KeyFile string `json:"keyFile,omitempty"` - // Trusted root certificates for server - CAFile string `json:"caFile,omitempty"` - - // CertData holds PEM-encoded bytes (typically read from a client certificate file). - // CertData takes precedence over CertFile - CertData []byte `json:"certData,omitempty"` - // KeyData holds PEM-encoded bytes (typically read from a client certificate key file). - // KeyData takes precedence over KeyFile - KeyData []byte `json:"keyData,omitempty"` - // CAData holds PEM-encoded bytes (typically read from a root certificates bundle). 
- // CAData takes precedence over CAFile - CAData []byte `json:"caData,omitempty"` -} diff --git a/cluster-autoscaler/vendor/k8s.io/kube-scheduler/config/v1beta2/types_pluginargs.go b/cluster-autoscaler/vendor/k8s.io/kube-scheduler/config/v1beta2/types_pluginargs.go deleted file mode 100644 index 602db995df2d..000000000000 --- a/cluster-autoscaler/vendor/k8s.io/kube-scheduler/config/v1beta2/types_pluginargs.go +++ /dev/null @@ -1,229 +0,0 @@ -/* -Copyright 2021 The Kubernetes Authors. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -package v1beta2 - -import ( - corev1 "k8s.io/api/core/v1" - metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" -) - -// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object - -// DefaultPreemptionArgs holds arguments used to configure the -// DefaultPreemption plugin. -type DefaultPreemptionArgs struct { - metav1.TypeMeta `json:",inline"` - - // MinCandidateNodesPercentage is the minimum number of candidates to - // shortlist when dry running preemption as a percentage of number of nodes. - // Must be in the range [0, 100]. Defaults to 10% of the cluster size if - // unspecified. - MinCandidateNodesPercentage *int32 `json:"minCandidateNodesPercentage,omitempty"` - // MinCandidateNodesAbsolute is the absolute minimum number of candidates to - // shortlist. 
The likely number of candidates enumerated for dry running - // preemption is given by the formula: - // numCandidates = max(numNodes * minCandidateNodesPercentage, minCandidateNodesAbsolute) - // We say "likely" because there are other factors such as PDB violations - // that play a role in the number of candidates shortlisted. Must be at least - // 0 nodes. Defaults to 100 nodes if unspecified. - MinCandidateNodesAbsolute *int32 `json:"minCandidateNodesAbsolute,omitempty"` -} - -// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object - -// InterPodAffinityArgs holds arguments used to configure the InterPodAffinity plugin. -type InterPodAffinityArgs struct { - metav1.TypeMeta `json:",inline"` - - // HardPodAffinityWeight is the scoring weight for existing pods with a - // matching hard affinity to the incoming pod. - HardPodAffinityWeight *int32 `json:"hardPodAffinityWeight,omitempty"` - - // IgnorePreferredTermsOfExistingPods configures the scheduler to ignore existing pods' preferred affinity - // rules when scoring candidate nodes, unless the incoming pod has inter-pod affinities. - IgnorePreferredTermsOfExistingPods bool `json:"ignorePreferredTermsOfExistingPods"` -} - -// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object - -// NodeResourcesFitArgs holds arguments used to configure the NodeResourcesFit plugin. -type NodeResourcesFitArgs struct { - metav1.TypeMeta `json:",inline"` - - // IgnoredResources is the list of resources that NodeResources fit filter - // should ignore. This doesn't apply to scoring. - // +listType=atomic - IgnoredResources []string `json:"ignoredResources,omitempty"` - // IgnoredResourceGroups defines the list of resource groups that NodeResources fit filter should ignore. - // e.g. if group is ["example.com"], it will ignore all resource names that begin - // with "example.com", such as "example.com/aaa" and "example.com/bbb". - // A resource group name can't contain '/'. This doesn't apply to scoring. 
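The `numCandidates` formula in the `DefaultPreemptionArgs` comment above can be sketched as a small helper. This is an illustrative simplification using plain ints (the real logic lives in the scheduler's `DefaultPreemption` plugin, not here); the defaults of 10% and 100 come from the doc comments above.

```go
package main

import "fmt"

// calculateNumCandidates sketches the shortlist sizing described by
// DefaultPreemptionArgs: the larger of a percentage of the cluster size and
// an absolute floor, never exceeding the number of nodes. Illustrative only.
func calculateNumCandidates(numNodes, minCandidateNodesPercentage, minCandidateNodesAbsolute int32) int32 {
	n := numNodes * minCandidateNodesPercentage / 100
	if n < minCandidateNodesAbsolute {
		n = minCandidateNodesAbsolute
	}
	if n > numNodes {
		n = numNodes // can never shortlist more candidates than there are nodes
	}
	return n
}

func main() {
	// Defaults from the comments above: 10% of the cluster, floor of 100.
	fmt.Println(calculateNumCandidates(5000, 10, 100)) // percentage dominates: 500
	fmt.Println(calculateNumCandidates(50, 10, 100))   // floor kicks in, capped at 50 nodes
}
```

As the comment notes, this is only the "likely" count: factors such as PDB violations can shrink the final shortlist at dry-run time.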
- // +listType=atomic - IgnoredResourceGroups []string `json:"ignoredResourceGroups,omitempty"` - - // ScoringStrategy selects the node resource scoring strategy. - // The default strategy is LeastAllocated with an equal "cpu" and "memory" weight. - ScoringStrategy *ScoringStrategy `json:"scoringStrategy,omitempty"` -} - -// PodTopologySpreadConstraintsDefaulting defines how to set default constraints -// for the PodTopologySpread plugin. -type PodTopologySpreadConstraintsDefaulting string - -const ( - // SystemDefaulting instructs to use the kubernetes defined default. - SystemDefaulting PodTopologySpreadConstraintsDefaulting = "System" - // ListDefaulting instructs to use the config provided default. - ListDefaulting PodTopologySpreadConstraintsDefaulting = "List" -) - -// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object - -// PodTopologySpreadArgs holds arguments used to configure the PodTopologySpread plugin. -type PodTopologySpreadArgs struct { - metav1.TypeMeta `json:",inline"` - - // DefaultConstraints defines topology spread constraints to be applied to - // Pods that don't define any in `pod.spec.topologySpreadConstraints`. - // `.defaultConstraints[*].labelSelectors` must be empty, as they are - // deduced from the Pod's membership to Services, ReplicationControllers, - // ReplicaSets or StatefulSets. - // When not empty, .defaultingType must be "List". - // +optional - // +listType=atomic - DefaultConstraints []corev1.TopologySpreadConstraint `json:"defaultConstraints,omitempty"` - - // DefaultingType determines how .defaultConstraints are deduced. Can be one - // of "System" or "List". - // - // - "System": Use kubernetes defined constraints that spread Pods among - // Nodes and Zones. - // - "List": Use constraints defined in .defaultConstraints. - // - // Defaults to "System". 
- // +optional - DefaultingType PodTopologySpreadConstraintsDefaulting `json:"defaultingType,omitempty"` -} - -// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object - -// NodeResourcesBalancedAllocationArgs holds arguments used to configure NodeResourcesBalancedAllocation plugin. -type NodeResourcesBalancedAllocationArgs struct { - metav1.TypeMeta `json:",inline"` - - // Resources to be managed, the default is "cpu" and "memory" if not specified. - // +listType=map - // +listMapKey=name - Resources []ResourceSpec `json:"resources,omitempty"` -} - -// UtilizationShapePoint represents single point of priority function shape. -type UtilizationShapePoint struct { - // Utilization (x axis). Valid values are 0 to 100. Fully utilized node maps to 100. - Utilization int32 `json:"utilization"` - // Score assigned to given utilization (y axis). Valid values are 0 to 10. - Score int32 `json:"score"` -} - -// ResourceSpec represents a single resource. -type ResourceSpec struct { - // Name of the resource. - Name string `json:"name"` - // Weight of the resource. - Weight int64 `json:"weight,omitempty"` -} - -// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object - -// VolumeBindingArgs holds arguments used to configure the VolumeBinding plugin. -type VolumeBindingArgs struct { - metav1.TypeMeta `json:",inline"` - - // BindTimeoutSeconds is the timeout in seconds in volume binding operation. - // Value must be non-negative integer. The value zero indicates no waiting. - // If this value is nil, the default value (600) will be used. - BindTimeoutSeconds *int64 `json:"bindTimeoutSeconds,omitempty"` - - // Shape specifies the points defining the score function shape, which is - // used to score nodes based on the utilization of statically provisioned - // PVs. The utilization is calculated by dividing the total requested - // storage of the pod by the total capacity of feasible PVs on each node. 
- // Each point contains utilization (ranges from 0 to 100) and its - // associated score (ranges from 0 to 10). You can turn the priority by - // specifying different scores for different utilization numbers. - // The default shape points are: - // 1) 0 for 0 utilization - // 2) 10 for 100 utilization - // All points must be sorted in increasing order by utilization. - // +featureGate=VolumeCapacityPriority - // +optional - // +listType=atomic - Shape []UtilizationShapePoint `json:"shape,omitempty"` -} - -// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object - -// NodeAffinityArgs holds arguments to configure the NodeAffinity plugin. -type NodeAffinityArgs struct { - metav1.TypeMeta `json:",inline"` - - // AddedAffinity is applied to all Pods additionally to the NodeAffinity - // specified in the PodSpec. That is, Nodes need to satisfy AddedAffinity - // AND .spec.NodeAffinity. AddedAffinity is empty by default (all Nodes - // match). - // When AddedAffinity is used, some Pods with affinity requirements that match - // a specific Node (such as Daemonset Pods) might remain unschedulable. - // +optional - AddedAffinity *corev1.NodeAffinity `json:"addedAffinity,omitempty"` -} - -// ScoringStrategyType the type of scoring strategy used in NodeResourcesFit plugin. -type ScoringStrategyType string - -const ( - // LeastAllocated strategy prioritizes nodes with least allocated resources. - LeastAllocated ScoringStrategyType = "LeastAllocated" - // MostAllocated strategy prioritizes nodes with most allocated resources. - MostAllocated ScoringStrategyType = "MostAllocated" - // RequestedToCapacityRatio strategy allows specifying a custom shape function - // to score nodes based on the request to capacity ratio. - RequestedToCapacityRatio ScoringStrategyType = "RequestedToCapacityRatio" -) - -// ScoringStrategy define ScoringStrategyType for node resource plugin -type ScoringStrategy struct { - // Type selects which strategy to run. 
- Type ScoringStrategyType `json:"type,omitempty"` - - // Resources to consider when scoring. - // The default resource set includes "cpu" and "memory" with an equal weight. - // Allowed weights go from 1 to 100. - // Weight defaults to 1 if not specified or explicitly set to 0. - // +listType=map - // +listMapKey=topologyKey - Resources []ResourceSpec `json:"resources,omitempty"` - - // Arguments specific to RequestedToCapacityRatio strategy. - RequestedToCapacityRatio *RequestedToCapacityRatioParam `json:"requestedToCapacityRatio,omitempty"` -} - -// RequestedToCapacityRatioParam define RequestedToCapacityRatio parameters -type RequestedToCapacityRatioParam struct { - // Shape is a list of points defining the scoring function shape. - // +listType=atomic - Shape []UtilizationShapePoint `json:"shape,omitempty"` -} diff --git a/cluster-autoscaler/vendor/k8s.io/kube-scheduler/config/v1beta2/zz_generated.deepcopy.go b/cluster-autoscaler/vendor/k8s.io/kube-scheduler/config/v1beta2/zz_generated.deepcopy.go deleted file mode 100644 index 7ffacf0f3da6..000000000000 --- a/cluster-autoscaler/vendor/k8s.io/kube-scheduler/config/v1beta2/zz_generated.deepcopy.go +++ /dev/null @@ -1,614 +0,0 @@ -//go:build !ignore_autogenerated -// +build !ignore_autogenerated - -/* -Copyright The Kubernetes Authors. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -// Code generated by deepcopy-gen. DO NOT EDIT. 
- -package v1beta2 - -import ( - v1 "k8s.io/api/core/v1" - runtime "k8s.io/apimachinery/pkg/runtime" -) - -// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. -func (in *DefaultPreemptionArgs) DeepCopyInto(out *DefaultPreemptionArgs) { - *out = *in - out.TypeMeta = in.TypeMeta - if in.MinCandidateNodesPercentage != nil { - in, out := &in.MinCandidateNodesPercentage, &out.MinCandidateNodesPercentage - *out = new(int32) - **out = **in - } - if in.MinCandidateNodesAbsolute != nil { - in, out := &in.MinCandidateNodesAbsolute, &out.MinCandidateNodesAbsolute - *out = new(int32) - **out = **in - } - return -} - -// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new DefaultPreemptionArgs. -func (in *DefaultPreemptionArgs) DeepCopy() *DefaultPreemptionArgs { - if in == nil { - return nil - } - out := new(DefaultPreemptionArgs) - in.DeepCopyInto(out) - return out -} - -// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object. -func (in *DefaultPreemptionArgs) DeepCopyObject() runtime.Object { - if c := in.DeepCopy(); c != nil { - return c - } - return nil -} - -// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. -func (in *Extender) DeepCopyInto(out *Extender) { - *out = *in - if in.TLSConfig != nil { - in, out := &in.TLSConfig, &out.TLSConfig - *out = new(ExtenderTLSConfig) - (*in).DeepCopyInto(*out) - } - out.HTTPTimeout = in.HTTPTimeout - if in.ManagedResources != nil { - in, out := &in.ManagedResources, &out.ManagedResources - *out = make([]ExtenderManagedResource, len(*in)) - copy(*out, *in) - } - return -} - -// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Extender. 
-func (in *Extender) DeepCopy() *Extender { - if in == nil { - return nil - } - out := new(Extender) - in.DeepCopyInto(out) - return out -} - -// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. -func (in *ExtenderManagedResource) DeepCopyInto(out *ExtenderManagedResource) { - *out = *in - return -} - -// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ExtenderManagedResource. -func (in *ExtenderManagedResource) DeepCopy() *ExtenderManagedResource { - if in == nil { - return nil - } - out := new(ExtenderManagedResource) - in.DeepCopyInto(out) - return out -} - -// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. -func (in *ExtenderTLSConfig) DeepCopyInto(out *ExtenderTLSConfig) { - *out = *in - if in.CertData != nil { - in, out := &in.CertData, &out.CertData - *out = make([]byte, len(*in)) - copy(*out, *in) - } - if in.KeyData != nil { - in, out := &in.KeyData, &out.KeyData - *out = make([]byte, len(*in)) - copy(*out, *in) - } - if in.CAData != nil { - in, out := &in.CAData, &out.CAData - *out = make([]byte, len(*in)) - copy(*out, *in) - } - return -} - -// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ExtenderTLSConfig. -func (in *ExtenderTLSConfig) DeepCopy() *ExtenderTLSConfig { - if in == nil { - return nil - } - out := new(ExtenderTLSConfig) - in.DeepCopyInto(out) - return out -} - -// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. 
-func (in *InterPodAffinityArgs) DeepCopyInto(out *InterPodAffinityArgs) { - *out = *in - out.TypeMeta = in.TypeMeta - if in.HardPodAffinityWeight != nil { - in, out := &in.HardPodAffinityWeight, &out.HardPodAffinityWeight - *out = new(int32) - **out = **in - } - return -} - -// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new InterPodAffinityArgs. -func (in *InterPodAffinityArgs) DeepCopy() *InterPodAffinityArgs { - if in == nil { - return nil - } - out := new(InterPodAffinityArgs) - in.DeepCopyInto(out) - return out -} - -// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object. -func (in *InterPodAffinityArgs) DeepCopyObject() runtime.Object { - if c := in.DeepCopy(); c != nil { - return c - } - return nil -} - -// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. -func (in *KubeSchedulerConfiguration) DeepCopyInto(out *KubeSchedulerConfiguration) { - *out = *in - out.TypeMeta = in.TypeMeta - if in.Parallelism != nil { - in, out := &in.Parallelism, &out.Parallelism - *out = new(int32) - **out = **in - } - in.LeaderElection.DeepCopyInto(&out.LeaderElection) - out.ClientConnection = in.ClientConnection - if in.HealthzBindAddress != nil { - in, out := &in.HealthzBindAddress, &out.HealthzBindAddress - *out = new(string) - **out = **in - } - if in.MetricsBindAddress != nil { - in, out := &in.MetricsBindAddress, &out.MetricsBindAddress - *out = new(string) - **out = **in - } - in.DebuggingConfiguration.DeepCopyInto(&out.DebuggingConfiguration) - if in.PercentageOfNodesToScore != nil { - in, out := &in.PercentageOfNodesToScore, &out.PercentageOfNodesToScore - *out = new(int32) - **out = **in - } - if in.PodInitialBackoffSeconds != nil { - in, out := &in.PodInitialBackoffSeconds, &out.PodInitialBackoffSeconds - *out = new(int64) - **out = **in - } - if in.PodMaxBackoffSeconds != nil { - in, out := &in.PodMaxBackoffSeconds, 
&out.PodMaxBackoffSeconds - *out = new(int64) - **out = **in - } - if in.Profiles != nil { - in, out := &in.Profiles, &out.Profiles - *out = make([]KubeSchedulerProfile, len(*in)) - for i := range *in { - (*in)[i].DeepCopyInto(&(*out)[i]) - } - } - if in.Extenders != nil { - in, out := &in.Extenders, &out.Extenders - *out = make([]Extender, len(*in)) - for i := range *in { - (*in)[i].DeepCopyInto(&(*out)[i]) - } - } - return -} - -// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new KubeSchedulerConfiguration. -func (in *KubeSchedulerConfiguration) DeepCopy() *KubeSchedulerConfiguration { - if in == nil { - return nil - } - out := new(KubeSchedulerConfiguration) - in.DeepCopyInto(out) - return out -} - -// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object. -func (in *KubeSchedulerConfiguration) DeepCopyObject() runtime.Object { - if c := in.DeepCopy(); c != nil { - return c - } - return nil -} - -// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. -func (in *KubeSchedulerProfile) DeepCopyInto(out *KubeSchedulerProfile) { - *out = *in - if in.SchedulerName != nil { - in, out := &in.SchedulerName, &out.SchedulerName - *out = new(string) - **out = **in - } - if in.Plugins != nil { - in, out := &in.Plugins, &out.Plugins - *out = new(Plugins) - (*in).DeepCopyInto(*out) - } - if in.PluginConfig != nil { - in, out := &in.PluginConfig, &out.PluginConfig - *out = make([]PluginConfig, len(*in)) - for i := range *in { - (*in)[i].DeepCopyInto(&(*out)[i]) - } - } - return -} - -// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new KubeSchedulerProfile. 
-func (in *KubeSchedulerProfile) DeepCopy() *KubeSchedulerProfile { - if in == nil { - return nil - } - out := new(KubeSchedulerProfile) - in.DeepCopyInto(out) - return out -} - -// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. -func (in *NodeAffinityArgs) DeepCopyInto(out *NodeAffinityArgs) { - *out = *in - out.TypeMeta = in.TypeMeta - if in.AddedAffinity != nil { - in, out := &in.AddedAffinity, &out.AddedAffinity - *out = new(v1.NodeAffinity) - (*in).DeepCopyInto(*out) - } - return -} - -// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new NodeAffinityArgs. -func (in *NodeAffinityArgs) DeepCopy() *NodeAffinityArgs { - if in == nil { - return nil - } - out := new(NodeAffinityArgs) - in.DeepCopyInto(out) - return out -} - -// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object. -func (in *NodeAffinityArgs) DeepCopyObject() runtime.Object { - if c := in.DeepCopy(); c != nil { - return c - } - return nil -} - -// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. -func (in *NodeResourcesBalancedAllocationArgs) DeepCopyInto(out *NodeResourcesBalancedAllocationArgs) { - *out = *in - out.TypeMeta = in.TypeMeta - if in.Resources != nil { - in, out := &in.Resources, &out.Resources - *out = make([]ResourceSpec, len(*in)) - copy(*out, *in) - } - return -} - -// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new NodeResourcesBalancedAllocationArgs. -func (in *NodeResourcesBalancedAllocationArgs) DeepCopy() *NodeResourcesBalancedAllocationArgs { - if in == nil { - return nil - } - out := new(NodeResourcesBalancedAllocationArgs) - in.DeepCopyInto(out) - return out -} - -// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object. 
-func (in *NodeResourcesBalancedAllocationArgs) DeepCopyObject() runtime.Object { - if c := in.DeepCopy(); c != nil { - return c - } - return nil -} - -// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. -func (in *NodeResourcesFitArgs) DeepCopyInto(out *NodeResourcesFitArgs) { - *out = *in - out.TypeMeta = in.TypeMeta - if in.IgnoredResources != nil { - in, out := &in.IgnoredResources, &out.IgnoredResources - *out = make([]string, len(*in)) - copy(*out, *in) - } - if in.IgnoredResourceGroups != nil { - in, out := &in.IgnoredResourceGroups, &out.IgnoredResourceGroups - *out = make([]string, len(*in)) - copy(*out, *in) - } - if in.ScoringStrategy != nil { - in, out := &in.ScoringStrategy, &out.ScoringStrategy - *out = new(ScoringStrategy) - (*in).DeepCopyInto(*out) - } - return -} - -// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new NodeResourcesFitArgs. -func (in *NodeResourcesFitArgs) DeepCopy() *NodeResourcesFitArgs { - if in == nil { - return nil - } - out := new(NodeResourcesFitArgs) - in.DeepCopyInto(out) - return out -} - -// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object. -func (in *NodeResourcesFitArgs) DeepCopyObject() runtime.Object { - if c := in.DeepCopy(); c != nil { - return c - } - return nil -} - -// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. -func (in *Plugin) DeepCopyInto(out *Plugin) { - *out = *in - if in.Weight != nil { - in, out := &in.Weight, &out.Weight - *out = new(int32) - **out = **in - } - return -} - -// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Plugin. 
-func (in *Plugin) DeepCopy() *Plugin { - if in == nil { - return nil - } - out := new(Plugin) - in.DeepCopyInto(out) - return out -} - -// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. -func (in *PluginConfig) DeepCopyInto(out *PluginConfig) { - *out = *in - in.Args.DeepCopyInto(&out.Args) - return -} - -// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PluginConfig. -func (in *PluginConfig) DeepCopy() *PluginConfig { - if in == nil { - return nil - } - out := new(PluginConfig) - in.DeepCopyInto(out) - return out -} - -// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. -func (in *PluginSet) DeepCopyInto(out *PluginSet) { - *out = *in - if in.Enabled != nil { - in, out := &in.Enabled, &out.Enabled - *out = make([]Plugin, len(*in)) - for i := range *in { - (*in)[i].DeepCopyInto(&(*out)[i]) - } - } - if in.Disabled != nil { - in, out := &in.Disabled, &out.Disabled - *out = make([]Plugin, len(*in)) - for i := range *in { - (*in)[i].DeepCopyInto(&(*out)[i]) - } - } - return -} - -// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PluginSet. -func (in *PluginSet) DeepCopy() *PluginSet { - if in == nil { - return nil - } - out := new(PluginSet) - in.DeepCopyInto(out) - return out -} - -// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. 
-func (in *Plugins) DeepCopyInto(out *Plugins) { - *out = *in - in.PreEnqueue.DeepCopyInto(&out.PreEnqueue) - in.QueueSort.DeepCopyInto(&out.QueueSort) - in.PreFilter.DeepCopyInto(&out.PreFilter) - in.Filter.DeepCopyInto(&out.Filter) - in.PostFilter.DeepCopyInto(&out.PostFilter) - in.PreScore.DeepCopyInto(&out.PreScore) - in.Score.DeepCopyInto(&out.Score) - in.Reserve.DeepCopyInto(&out.Reserve) - in.Permit.DeepCopyInto(&out.Permit) - in.PreBind.DeepCopyInto(&out.PreBind) - in.Bind.DeepCopyInto(&out.Bind) - in.PostBind.DeepCopyInto(&out.PostBind) - in.MultiPoint.DeepCopyInto(&out.MultiPoint) - return -} - -// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Plugins. -func (in *Plugins) DeepCopy() *Plugins { - if in == nil { - return nil - } - out := new(Plugins) - in.DeepCopyInto(out) - return out -} - -// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. -func (in *PodTopologySpreadArgs) DeepCopyInto(out *PodTopologySpreadArgs) { - *out = *in - out.TypeMeta = in.TypeMeta - if in.DefaultConstraints != nil { - in, out := &in.DefaultConstraints, &out.DefaultConstraints - *out = make([]v1.TopologySpreadConstraint, len(*in)) - for i := range *in { - (*in)[i].DeepCopyInto(&(*out)[i]) - } - } - return -} - -// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PodTopologySpreadArgs. -func (in *PodTopologySpreadArgs) DeepCopy() *PodTopologySpreadArgs { - if in == nil { - return nil - } - out := new(PodTopologySpreadArgs) - in.DeepCopyInto(out) - return out -} - -// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object. -func (in *PodTopologySpreadArgs) DeepCopyObject() runtime.Object { - if c := in.DeepCopy(); c != nil { - return c - } - return nil -} - -// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. 
-func (in *RequestedToCapacityRatioParam) DeepCopyInto(out *RequestedToCapacityRatioParam) { - *out = *in - if in.Shape != nil { - in, out := &in.Shape, &out.Shape - *out = make([]UtilizationShapePoint, len(*in)) - copy(*out, *in) - } - return -} - -// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new RequestedToCapacityRatioParam. -func (in *RequestedToCapacityRatioParam) DeepCopy() *RequestedToCapacityRatioParam { - if in == nil { - return nil - } - out := new(RequestedToCapacityRatioParam) - in.DeepCopyInto(out) - return out -} - -// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. -func (in *ResourceSpec) DeepCopyInto(out *ResourceSpec) { - *out = *in - return -} - -// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ResourceSpec. -func (in *ResourceSpec) DeepCopy() *ResourceSpec { - if in == nil { - return nil - } - out := new(ResourceSpec) - in.DeepCopyInto(out) - return out -} - -// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. -func (in *ScoringStrategy) DeepCopyInto(out *ScoringStrategy) { - *out = *in - if in.Resources != nil { - in, out := &in.Resources, &out.Resources - *out = make([]ResourceSpec, len(*in)) - copy(*out, *in) - } - if in.RequestedToCapacityRatio != nil { - in, out := &in.RequestedToCapacityRatio, &out.RequestedToCapacityRatio - *out = new(RequestedToCapacityRatioParam) - (*in).DeepCopyInto(*out) - } - return -} - -// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ScoringStrategy. -func (in *ScoringStrategy) DeepCopy() *ScoringStrategy { - if in == nil { - return nil - } - out := new(ScoringStrategy) - in.DeepCopyInto(out) - return out -} - -// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. 
-func (in *UtilizationShapePoint) DeepCopyInto(out *UtilizationShapePoint) { - *out = *in - return -} - -// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new UtilizationShapePoint. -func (in *UtilizationShapePoint) DeepCopy() *UtilizationShapePoint { - if in == nil { - return nil - } - out := new(UtilizationShapePoint) - in.DeepCopyInto(out) - return out -} - -// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. -func (in *VolumeBindingArgs) DeepCopyInto(out *VolumeBindingArgs) { - *out = *in - out.TypeMeta = in.TypeMeta - if in.BindTimeoutSeconds != nil { - in, out := &in.BindTimeoutSeconds, &out.BindTimeoutSeconds - *out = new(int64) - **out = **in - } - if in.Shape != nil { - in, out := &in.Shape, &out.Shape - *out = make([]UtilizationShapePoint, len(*in)) - copy(*out, *in) - } - return -} - -// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new VolumeBindingArgs. -func (in *VolumeBindingArgs) DeepCopy() *VolumeBindingArgs { - if in == nil { - return nil - } - out := new(VolumeBindingArgs) - in.DeepCopyInto(out) - return out -} - -// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object. -func (in *VolumeBindingArgs) DeepCopyObject() runtime.Object { - if c := in.DeepCopy(); c != nil { - return c - } - return nil -} diff --git a/cluster-autoscaler/vendor/k8s.io/kubelet/config/v1beta1/types.go b/cluster-autoscaler/vendor/k8s.io/kubelet/config/v1beta1/types.go index 68cafdb1a94b..b1ad1353fca1 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubelet/config/v1beta1/types.go +++ b/cluster-autoscaler/vendor/k8s.io/kubelet/config/v1beta1/types.go @@ -150,6 +150,7 @@ type KubeletConfiguration struct { // +optional TLSPrivateKeyFile string `json:"tlsPrivateKeyFile,omitempty"` // tlsCipherSuites is the list of allowed cipher suites for the server. 
+ // Note that TLS 1.3 ciphersuites are not configurable. // Values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). // Default: nil // +optional @@ -547,22 +548,20 @@ type KubeletConfiguration struct { // Default: false // +optional ProtectKernelDefaults bool `json:"protectKernelDefaults,omitempty"` - // makeIPTablesUtilChains, if true, causes the Kubelet ensures a set of iptables rules - // are present on host. - // These rules will serve as utility rules for various components, e.g. kube-proxy. - // The rules will be created based on iptablesMasqueradeBit and iptablesDropBit. + // makeIPTablesUtilChains, if true, causes the Kubelet to create the + // KUBE-IPTABLES-HINT chain in iptables as a hint to other components about the + // configuration of iptables on the system. // Default: true // +optional MakeIPTablesUtilChains *bool `json:"makeIPTablesUtilChains,omitempty"` - // iptablesMasqueradeBit is the bit of the iptables fwmark space to mark for SNAT. - // Values must be within the range [0, 31]. Must be different from other mark bits. - // Warning: Please match the value of the corresponding parameter in kube-proxy. - // TODO: clean up IPTablesMasqueradeBit in kube-proxy. + // iptablesMasqueradeBit formerly controlled the creation of the KUBE-MARK-MASQ + // chain. + // Deprecated: no longer has any effect. // Default: 14 // +optional IPTablesMasqueradeBit *int32 `json:"iptablesMasqueradeBit,omitempty"` - // iptablesDropBit is the bit of the iptables fwmark space to mark for dropping packets. - // Values must be within the range [0, 31]. Must be different from other mark bits. + // iptablesDropBit formerly controlled the creation of the KUBE-MARK-DROP chain. + // Deprecated: no longer has any effect. 
// Default: 15 // +optional IPTablesDropBit *int32 `json:"iptablesDropBit,omitempty"` diff --git a/cluster-autoscaler/vendor/k8s.io/kubelet/pkg/apis/deviceplugin/v1beta1/api.pb.go b/cluster-autoscaler/vendor/k8s.io/kubelet/pkg/apis/deviceplugin/v1beta1/api.pb.go index a24d42881f38..e15ca7ccf99f 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubelet/pkg/apis/deviceplugin/v1beta1/api.pb.go +++ b/cluster-autoscaler/vendor/k8s.io/kubelet/pkg/apis/deviceplugin/v1beta1/api.pb.go @@ -814,6 +814,56 @@ func (m *ContainerAllocateRequest) GetDevicesIDs() []string { return nil } +// CDIDevice specifies a CDI device information. +type CDIDevice struct { + // Fully qualified CDI device name + // for example: vendor.com/gpu=gpudevice1 + // see more details in the CDI specification: + // https://github.com/container-orchestrated-devices/container-device-interface/blob/main/SPEC.md + Name string `protobuf:"bytes,1,opt,name=name,proto3" json:"name,omitempty"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_sizecache int32 `json:"-"` +} + +func (m *CDIDevice) Reset() { *m = CDIDevice{} } +func (*CDIDevice) ProtoMessage() {} +func (*CDIDevice) Descriptor() ([]byte, []int) { + return fileDescriptor_00212fb1f9d3bf1c, []int{15} +} +func (m *CDIDevice) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *CDIDevice) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_CDIDevice.Marshal(b, m, deterministic) + } else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err + } + return b[:n], nil + } +} +func (m *CDIDevice) XXX_Merge(src proto.Message) { + xxx_messageInfo_CDIDevice.Merge(m, src) +} +func (m *CDIDevice) XXX_Size() int { + return m.Size() +} +func (m *CDIDevice) XXX_DiscardUnknown() { + xxx_messageInfo_CDIDevice.DiscardUnknown(m) +} + +var xxx_messageInfo_CDIDevice proto.InternalMessageInfo + +func (m *CDIDevice) GetName() string { + if m != nil { + return m.Name + } + 
return "" +} + // AllocateResponse includes the artifacts that needs to be injected into // a container for accessing 'deviceIDs' that were mentioned as part of // 'AllocateRequest'. @@ -831,7 +881,7 @@ type AllocateResponse struct { func (m *AllocateResponse) Reset() { *m = AllocateResponse{} } func (*AllocateResponse) ProtoMessage() {} func (*AllocateResponse) Descriptor() ([]byte, []int) { - return fileDescriptor_00212fb1f9d3bf1c, []int{15} + return fileDescriptor_00212fb1f9d3bf1c, []int{16} } func (m *AllocateResponse) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -875,15 +925,17 @@ type ContainerAllocateResponse struct { // Devices for the container. Devices []*DeviceSpec `protobuf:"bytes,3,rep,name=devices,proto3" json:"devices,omitempty"` // Container annotations to pass to the container runtime - Annotations map[string]string `protobuf:"bytes,4,rep,name=annotations,proto3" json:"annotations,omitempty" protobuf_key:"bytes,1,opt,name=key,proto3" protobuf_val:"bytes,2,opt,name=value,proto3"` - XXX_NoUnkeyedLiteral struct{} `json:"-"` - XXX_sizecache int32 `json:"-"` + Annotations map[string]string `protobuf:"bytes,4,rep,name=annotations,proto3" json:"annotations,omitempty" protobuf_key:"bytes,1,opt,name=key,proto3" protobuf_val:"bytes,2,opt,name=value,proto3"` + // CDI devices for the container. 
+ CDIDevices []*CDIDevice `protobuf:"bytes,5,rep,name=cdi_devices,json=cdiDevices,proto3" json:"cdi_devices,omitempty"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_sizecache int32 `json:"-"` } func (m *ContainerAllocateResponse) Reset() { *m = ContainerAllocateResponse{} } func (*ContainerAllocateResponse) ProtoMessage() {} func (*ContainerAllocateResponse) Descriptor() ([]byte, []int) { - return fileDescriptor_00212fb1f9d3bf1c, []int{16} + return fileDescriptor_00212fb1f9d3bf1c, []int{17} } func (m *ContainerAllocateResponse) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -940,6 +992,13 @@ func (m *ContainerAllocateResponse) GetAnnotations() map[string]string { return nil } +func (m *ContainerAllocateResponse) GetCDIDevices() []*CDIDevice { + if m != nil { + return m.CDIDevices + } + return nil +} + // Mount specifies a host volume to mount into a container. // where device library or tools are installed on host and container type Mount struct { @@ -956,7 +1015,7 @@ type Mount struct { func (m *Mount) Reset() { *m = Mount{} } func (*Mount) ProtoMessage() {} func (*Mount) Descriptor() ([]byte, []int) { - return fileDescriptor_00212fb1f9d3bf1c, []int{17} + return fileDescriptor_00212fb1f9d3bf1c, []int{18} } func (m *Mount) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -1024,7 +1083,7 @@ type DeviceSpec struct { func (m *DeviceSpec) Reset() { *m = DeviceSpec{} } func (*DeviceSpec) ProtoMessage() {} func (*DeviceSpec) Descriptor() ([]byte, []int) { - return fileDescriptor_00212fb1f9d3bf1c, []int{18} + return fileDescriptor_00212fb1f9d3bf1c, []int{19} } func (m *DeviceSpec) XXX_Unmarshal(b []byte) error { return m.Unmarshal(b) @@ -1090,6 +1149,7 @@ func init() { proto.RegisterType((*ContainerPreferredAllocationResponse)(nil), "v1beta1.ContainerPreferredAllocationResponse") proto.RegisterType((*AllocateRequest)(nil), "v1beta1.AllocateRequest") proto.RegisterType((*ContainerAllocateRequest)(nil), "v1beta1.ContainerAllocateRequest") + 
proto.RegisterType((*CDIDevice)(nil), "v1beta1.CDIDevice") proto.RegisterType((*AllocateResponse)(nil), "v1beta1.AllocateResponse") proto.RegisterType((*ContainerAllocateResponse)(nil), "v1beta1.ContainerAllocateResponse") proto.RegisterMapType((map[string]string)(nil), "v1beta1.ContainerAllocateResponse.AnnotationsEntry") @@ -1101,73 +1161,75 @@ func init() { func init() { proto.RegisterFile("api.proto", fileDescriptor_00212fb1f9d3bf1c) } var fileDescriptor_00212fb1f9d3bf1c = []byte{ - // 1044 bytes of a gzipped FileDescriptorProto - 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xac, 0x56, 0xdd, 0x6e, 0x1b, 0x45, - 0x14, 0xce, 0xc6, 0x4d, 0x62, 0x1f, 0xa7, 0xf9, 0x99, 0x84, 0xc8, 0xd9, 0x14, 0x37, 0x9d, 0x14, - 0x1a, 0xa4, 0xc4, 0x26, 0x2e, 0x6a, 0x2b, 0x2e, 0x10, 0x2e, 0x0e, 0x60, 0x42, 0xd3, 0x68, 0x43, - 0x85, 0x04, 0x08, 0x6b, 0xbc, 0x3b, 0xb1, 0x57, 0x59, 0xcf, 0x2c, 0x3b, 0x63, 0x4b, 0xae, 0x84, - 0xc4, 0x05, 0x0f, 0xd0, 0x77, 0x80, 0x57, 0xe0, 0x1d, 0x7a, 0xc9, 0x25, 0x57, 0x88, 0x9a, 0x17, - 0x41, 0x9e, 0xd9, 0x3f, 0x6d, 0x36, 0x56, 0x2a, 0x71, 0xe7, 0x39, 0xe7, 0x7c, 0xe7, 0xe7, 0x3b, - 0x67, 0xcf, 0x31, 0x94, 0x88, 0xef, 0xd6, 0xfc, 0x80, 0x4b, 0x8e, 0x96, 0x46, 0x47, 0x5d, 0x2a, - 0xc9, 0x91, 0x79, 0xd8, 0x73, 0x65, 0x7f, 0xd8, 0xad, 0xd9, 0x7c, 0x50, 0xef, 0xf1, 0x1e, 0xaf, - 0x2b, 0x7d, 0x77, 0x78, 0xa1, 0x5e, 0xea, 0xa1, 0x7e, 0x69, 0x1c, 0x7e, 0x65, 0xc0, 0x46, 0x8b, - 0x8e, 0x5c, 0x9b, 0x9e, 0x79, 0xc3, 0x9e, 0xcb, 0x9e, 0xfb, 0xd2, 0xe5, 0x4c, 0xa0, 0x03, 0x40, - 0x7e, 0x40, 0x3b, 0x42, 0x92, 0x40, 0x76, 0x02, 0xfa, 0xd3, 0xd0, 0x0d, 0xa8, 0x53, 0x31, 0x76, - 0x8d, 0xfd, 0xa2, 0xb5, 0xe6, 0x07, 0xf4, 0x7c, 0xaa, 0xb0, 0x42, 0x39, 0x3a, 0x01, 0xdc, 0xa3, - 0xb2, 0xe3, 0x07, 0xf4, 0x82, 0x06, 0x01, 0x75, 0x3a, 0xc4, 0xf3, 0xb8, 0x4d, 0xa6, 0xae, 0x3a, - 0x64, 0x44, 0x5c, 0x8f, 0x74, 0x3d, 0x5a, 0x99, 0x57, 0xe8, 0xbb, 0x3d, 0x2a, 0xcf, 0x22, 0xc3, - 0x66, 0x6c, 0xd7, 0x8c, 0xcc, 0xf0, 0xef, 0x06, 0xac, 0x5a, 0xb4, 0xe7, 0x0a, 0x49, 
0x83, 0x69, - 0x04, 0x2a, 0x24, 0xaa, 0xc0, 0xd2, 0x88, 0x06, 0xc2, 0xe5, 0x4c, 0xe5, 0x50, 0xb2, 0xa2, 0x27, - 0x32, 0xa1, 0x48, 0x99, 0xe3, 0x73, 0x97, 0x49, 0x15, 0xa0, 0x64, 0xc5, 0x6f, 0xb4, 0x07, 0xb7, - 0x03, 0x2a, 0xf8, 0x30, 0xb0, 0x69, 0x87, 0x91, 0x01, 0xad, 0x14, 0x94, 0xc1, 0x72, 0x24, 0x3c, - 0x25, 0x03, 0x8a, 0x1e, 0xc1, 0x12, 0xd7, 0x45, 0x57, 0x6e, 0xed, 0x1a, 0xfb, 0xe5, 0xc6, 0x9d, - 0x5a, 0xc8, 0x65, 0x2d, 0x87, 0x18, 0x2b, 0x32, 0xc6, 0x4b, 0xb0, 0x70, 0x3c, 0xf0, 0xe5, 0x18, - 0x37, 0x61, 0xf3, 0x6b, 0x57, 0xc8, 0x26, 0x73, 0xbe, 0x25, 0xd2, 0xee, 0x5b, 0x54, 0xf8, 0x9c, - 0x09, 0x8a, 0x3e, 0x80, 0x25, 0x47, 0x39, 0x10, 0x15, 0x63, 0xb7, 0xb0, 0x5f, 0x6e, 0xac, 0x66, - 0x1c, 0x5b, 0x91, 0x1e, 0x3f, 0x86, 0xe5, 0x6f, 0xb8, 0xcf, 0x3d, 0xde, 0x1b, 0xb7, 0xd9, 0x05, - 0x47, 0x0f, 0x60, 0x81, 0x71, 0x27, 0x06, 0xae, 0xc7, 0xc0, 0xd3, 0x17, 0xcf, 0x9a, 0xa7, 0xdc, - 0xa1, 0x96, 0xd6, 0x63, 0x13, 0x8a, 0x91, 0x08, 0xad, 0xc0, 0x7c, 0xbb, 0xa5, 0xe8, 0x29, 0x58, - 0xf3, 0xed, 0x16, 0xb6, 0x61, 0x51, 0xc7, 0x49, 0x69, 0x4a, 0x53, 0x0d, 0xda, 0x82, 0xc5, 0x3e, - 0x25, 0x9e, 0xec, 0x87, 0x8c, 0x85, 0x2f, 0x74, 0x04, 0x45, 0x19, 0xa6, 0xa1, 0xa8, 0x2a, 0x37, - 0xde, 0x89, 0x23, 0xa7, 0xf3, 0xb3, 0x62, 0x33, 0x7c, 0x02, 0x95, 0xb3, 0x70, 0x1a, 0x3e, 0xe3, - 0x4c, 0x12, 0x97, 0x25, 0x4d, 0xab, 0x43, 0x39, 0x2c, 0xb0, 0xe3, 0x3a, 0xba, 0x96, 0xd2, 0xd3, - 0x95, 0xc9, 0xdf, 0x77, 0x41, 0xe7, 0x25, 0xda, 0x2d, 0x61, 0x41, 0x68, 0xd2, 0x76, 0x04, 0xde, - 0x81, 0xed, 0x1c, 0x67, 0x9a, 0x4e, 0x3c, 0x06, 0x33, 0x67, 0x6c, 0xa2, 0x58, 0xdf, 0x03, 0xb2, - 0x23, 0x88, 0x9a, 0x57, 0x2a, 0x64, 0x44, 0xdf, 0x41, 0x5c, 0x44, 0xec, 0xf5, 0x7a, 0x4f, 0xd6, - 0xba, 0x9d, 0xa9, 0x43, 0xe0, 0x3f, 0x0c, 0xd8, 0xbb, 0x01, 0x14, 0xd5, 0x61, 0x23, 0x9e, 0xf6, - 0x8e, 0xae, 0xab, 0xdd, 0x0a, 0x0b, 0xb7, 0x50, 0xac, 0x6a, 0x45, 0x1a, 0xf4, 0x11, 0x6c, 0x0d, - 0x86, 0x42, 0x76, 0x5c, 0x66, 0x7b, 0x43, 0x27, 0x8d, 0x99, 0x57, 0x98, 0xcd, 0xa9, 0xb6, 0xad, - 0x95, 0x09, 0xea, 0x01, 
0xac, 0xa6, 0xbe, 0x2f, 0xe1, 0xbe, 0xd4, 0x83, 0xbd, 0x60, 0xad, 0x24, - 0xe2, 0x73, 0xf7, 0x25, 0xc5, 0x3f, 0xc3, 0x4e, 0x6e, 0xb6, 0xe1, 0x80, 0xfe, 0x08, 0x1b, 0x69, - 0xce, 0xb4, 0x34, 0x22, 0xed, 0xf0, 0x86, 0xa4, 0x69, 0x94, 0x85, 0xec, 0x6c, 0xc3, 0x04, 0x6e, - 0xc1, 0xfd, 0x9b, 0x60, 0xd1, 0x1d, 0x28, 0x65, 0xc9, 0x4a, 0x04, 0xd8, 0x86, 0xd5, 0x10, 0x43, - 0x23, 0x9e, 0xcf, 0x66, 0x34, 0xfb, 0xde, 0xd5, 0xbc, 0x33, 0xf0, 0xbc, 0x0e, 0x9f, 0x40, 0xe5, - 0x3a, 0xf3, 0xb7, 0x1f, 0xe3, 0x1e, 0xac, 0x25, 0x3e, 0xc2, 0x1a, 0xcf, 0x67, 0x71, 0x8d, 0x67, - 0xe5, 0x3c, 0x83, 0xe0, 0x5f, 0x0b, 0xb0, 0x7d, 0x2d, 0x02, 0x7d, 0x0a, 0xb7, 0x28, 0x1b, 0xcd, - 0xf8, 0x08, 0xb2, 0x88, 0xda, 0x31, 0x1b, 0x89, 0x63, 0x26, 0x83, 0xb1, 0xa5, 0x90, 0xe8, 0x7d, - 0x58, 0x1c, 0xf0, 0x21, 0x93, 0x7a, 0x1c, 0xcb, 0x8d, 0x95, 0xd8, 0xc7, 0xb3, 0xa9, 0xd8, 0x0a, - 0xb5, 0xe8, 0x30, 0xd9, 0x74, 0x05, 0x65, 0xb8, 0x91, 0xd9, 0x74, 0xe7, 0x3e, 0xb5, 0xe3, 0x6d, - 0x87, 0x5e, 0x40, 0x99, 0x30, 0xc6, 0x25, 0x89, 0xb6, 0xee, 0x14, 0xf2, 0xf0, 0x06, 0xf9, 0x35, - 0x13, 0x94, 0x4e, 0x33, 0xed, 0xc7, 0x7c, 0x0c, 0xa5, 0xb8, 0x00, 0xb4, 0x06, 0x85, 0x4b, 0x3a, - 0x0e, 0x77, 0xde, 0xf4, 0x27, 0xda, 0x84, 0x85, 0x11, 0xf1, 0x86, 0x34, 0xdc, 0x79, 0xfa, 0xf1, - 0xf1, 0xfc, 0x13, 0xc3, 0xfc, 0x04, 0xd6, 0xb2, 0x9e, 0xdf, 0x06, 0x8f, 0xfb, 0xb0, 0xa0, 0xf8, - 0x40, 0xef, 0xc1, 0x4a, 0xd2, 0x64, 0x9f, 0xc8, 0x7e, 0x88, 0xbf, 0x1d, 0x4b, 0xcf, 0x88, 0xec, - 0xa3, 0x1d, 0x28, 0xf5, 0xb9, 0x90, 0xda, 0x22, 0xbc, 0x59, 0x53, 0x41, 0xa4, 0x0c, 0x28, 0x71, - 0x3a, 0x9c, 0x79, 0x7a, 0x09, 0x17, 0xad, 0xe2, 0x54, 0xf0, 0x9c, 0x79, 0x63, 0x1c, 0x00, 0x24, - 0x84, 0xfe, 0x2f, 0xe1, 0x76, 0xa1, 0xec, 0xd3, 0x60, 0xe0, 0x0a, 0xa1, 0x7a, 0xa1, 0x0f, 0x64, - 0x5a, 0xd4, 0xf8, 0x1c, 0x96, 0xf5, 0x35, 0x0e, 0x14, 0x3f, 0xe8, 0x11, 0x14, 0xa3, 0xeb, 0x8c, - 0x2a, 0x71, 0xd3, 0x32, 0x07, 0xdb, 0x4c, 0x46, 0x45, 0x1f, 0xc9, 0xb9, 0xc6, 0x6f, 0x05, 0x58, - 0x4e, 0x1f, 0x54, 0xf4, 0x25, 0x6c, 0x7d, 0x41, 0x65, 0xde, 
0x9f, 0x8f, 0x0c, 0xd8, 0x9c, 0x79, - 0x91, 0xf1, 0x1c, 0x6a, 0xc2, 0x72, 0xfa, 0x02, 0x5f, 0xc1, 0xbf, 0x1b, 0xbf, 0xf3, 0x0e, 0x35, - 0x9e, 0xfb, 0xd0, 0x40, 0x54, 0x25, 0x93, 0xb3, 0xa5, 0xd0, 0x5e, 0x0c, 0xbe, 0x7e, 0xf3, 0x9b, - 0xf7, 0x67, 0x1b, 0x45, 0x81, 0x50, 0x13, 0x8a, 0xd1, 0x54, 0xa7, 0xc8, 0xcb, 0x6c, 0x1c, 0x73, - 0x3b, 0x47, 0x13, 0xbb, 0xf8, 0x01, 0xd6, 0xaf, 0x1c, 0x49, 0x74, 0x2f, 0x1d, 0x3f, 0xf7, 0x1a, - 0x9b, 0x78, 0x96, 0x49, 0xe4, 0xfd, 0xe9, 0x57, 0xaf, 0xdf, 0x54, 0x8d, 0xbf, 0xde, 0x54, 0xe7, - 0x7e, 0x99, 0x54, 0x8d, 0xd7, 0x93, 0xaa, 0xf1, 0xe7, 0xa4, 0x6a, 0xfc, 0x33, 0xa9, 0x1a, 0xaf, - 0xfe, 0xad, 0xce, 0x7d, 0x77, 0x70, 0xf9, 0x44, 0xd4, 0x5c, 0x5e, 0xbf, 0x1c, 0x76, 0xa9, 0x47, - 0x65, 0xdd, 0xbf, 0xec, 0xd5, 0x89, 0xef, 0x8a, 0xba, 0xfe, 0xb4, 0x7d, 0xd5, 0x97, 0x7a, 0x18, - 0xa7, 0xbb, 0xa8, 0xfe, 0x62, 0x3e, 0xfc, 0x2f, 0x00, 0x00, 0xff, 0xff, 0xe2, 0xf2, 0x09, 0x79, - 0xa7, 0x0a, 0x00, 0x00, + // 1088 bytes of a gzipped FileDescriptorProto + 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xac, 0x56, 0xef, 0x6e, 0x1b, 0x45, + 0x10, 0xcf, 0xc5, 0x71, 0x62, 0x8f, 0xd3, 0xfc, 0xd9, 0x84, 0xc8, 0x71, 0x8a, 0x93, 0x6e, 0x0a, + 0x0d, 0x52, 0x62, 0x13, 0x17, 0xb5, 0x15, 0x1f, 0x10, 0x6e, 0x1c, 0xc0, 0x84, 0xa6, 0xd1, 0x85, + 0x0a, 0x09, 0x10, 0xa7, 0xf3, 0xdd, 0xc6, 0x3e, 0xe5, 0xbc, 0x7b, 0xdc, 0xae, 0x2d, 0xb9, 0x12, + 0x12, 0x8f, 0xd0, 0x77, 0x80, 0x57, 0xe0, 0x1d, 0xca, 0x37, 0x3e, 0xf2, 0x09, 0xd1, 0xf0, 0x22, + 0xc8, 0xbb, 0xb7, 0x77, 0xa7, 0x8b, 0x63, 0xa5, 0x52, 0xbf, 0xdd, 0xce, 0xcc, 0x6f, 0x66, 0xf6, + 0x37, 0xb3, 0x33, 0x07, 0x45, 0x3b, 0xf0, 0x6a, 0x41, 0xc8, 0x04, 0x43, 0x0b, 0xc3, 0xc3, 0x0e, + 0x11, 0xf6, 0x61, 0xe5, 0xa0, 0xeb, 0x89, 0xde, 0xa0, 0x53, 0x73, 0x58, 0xbf, 0xde, 0x65, 0x5d, + 0x56, 0x97, 0xfa, 0xce, 0xe0, 0x42, 0x9e, 0xe4, 0x41, 0x7e, 0x29, 0x1c, 0x7e, 0x65, 0xc0, 0x5a, + 0x8b, 0x0c, 0x3d, 0x87, 0x9c, 0xf9, 0x83, 0xae, 0x47, 0x9f, 0x07, 0xc2, 0x63, 0x94, 0xa3, 0x7d, + 0x40, 0x41, 0x48, 0x2c, 
0x2e, 0xec, 0x50, 0x58, 0x21, 0xf9, 0x79, 0xe0, 0x85, 0xc4, 0x2d, 0x1b, + 0x3b, 0xc6, 0x5e, 0xc1, 0x5c, 0x09, 0x42, 0x72, 0x3e, 0x56, 0x98, 0x91, 0x1c, 0x9d, 0x00, 0xee, + 0x12, 0x61, 0x05, 0x21, 0xb9, 0x20, 0x61, 0x48, 0x5c, 0xcb, 0xf6, 0x7d, 0xe6, 0xd8, 0x63, 0x57, + 0x96, 0x3d, 0xb4, 0x3d, 0xdf, 0xee, 0xf8, 0xa4, 0x3c, 0x2b, 0xd1, 0xdb, 0x5d, 0x22, 0xce, 0xb4, + 0x61, 0x33, 0xb6, 0x6b, 0x6a, 0x33, 0xfc, 0xbb, 0x01, 0xcb, 0x26, 0xe9, 0x7a, 0x5c, 0x90, 0x70, + 0x1c, 0x81, 0x70, 0x81, 0xca, 0xb0, 0x30, 0x24, 0x21, 0xf7, 0x18, 0x95, 0x39, 0x14, 0x4d, 0x7d, + 0x44, 0x15, 0x28, 0x10, 0xea, 0x06, 0xcc, 0xa3, 0x42, 0x06, 0x28, 0x9a, 0xf1, 0x19, 0xed, 0xc2, + 0x9d, 0x90, 0x70, 0x36, 0x08, 0x1d, 0x62, 0x51, 0xbb, 0x4f, 0xca, 0x39, 0x69, 0xb0, 0xa8, 0x85, + 0xa7, 0x76, 0x9f, 0xa0, 0x47, 0xb0, 0xc0, 0xd4, 0xa5, 0xcb, 0x73, 0x3b, 0xc6, 0x5e, 0xa9, 0x71, + 0xb7, 0x16, 0x71, 0x59, 0x9b, 0x40, 0x8c, 0xa9, 0x8d, 0xf1, 0x02, 0xe4, 0x8f, 0xfb, 0x81, 0x18, + 0xe1, 0x26, 0xac, 0x7f, 0xe3, 0x71, 0xd1, 0xa4, 0xee, 0x77, 0xb6, 0x70, 0x7a, 0x26, 0xe1, 0x01, + 0xa3, 0x9c, 0xa0, 0x8f, 0x60, 0xc1, 0x95, 0x0e, 0x78, 0xd9, 0xd8, 0xc9, 0xed, 0x95, 0x1a, 0xcb, + 0x19, 0xc7, 0xa6, 0xd6, 0xe3, 0xc7, 0xb0, 0xf8, 0x2d, 0x0b, 0x98, 0xcf, 0xba, 0xa3, 0x36, 0xbd, + 0x60, 0xe8, 0x01, 0xe4, 0x29, 0x73, 0x63, 0xe0, 0x6a, 0x0c, 0x3c, 0x7d, 0xf1, 0xac, 0x79, 0xca, + 0x5c, 0x62, 0x2a, 0x3d, 0xae, 0x40, 0x41, 0x8b, 0xd0, 0x12, 0xcc, 0xb6, 0x5b, 0x92, 0x9e, 0x9c, + 0x39, 0xdb, 0x6e, 0x61, 0x07, 0xe6, 0x55, 0x9c, 0x94, 0xa6, 0x38, 0xd6, 0xa0, 0x0d, 0x98, 0xef, + 0x11, 0xdb, 0x17, 0xbd, 0x88, 0xb1, 0xe8, 0x84, 0x0e, 0xa1, 0x20, 0xa2, 0x34, 0x24, 0x55, 0xa5, + 0xc6, 0x7b, 0x71, 0xe4, 0x74, 0x7e, 0x66, 0x6c, 0x86, 0x4f, 0xa0, 0x7c, 0x16, 0x75, 0xc3, 0x11, + 0xa3, 0xc2, 0xf6, 0x68, 0x52, 0xb4, 0x3a, 0x94, 0xa2, 0x0b, 0x5a, 0x9e, 0xab, 0xee, 0x52, 0x7c, + 0xba, 0x74, 0xf5, 0xcf, 0x36, 0xa8, 0xbc, 0x78, 0xbb, 0xc5, 0x4d, 0x88, 0x4c, 0xda, 0x2e, 0xc7, + 0x5b, 0xb0, 0x39, 0xc1, 0x99, 0xa2, 0x13, 0x8f, 0xa0, 0x32, 
0xa1, 0x6d, 0x74, 0xac, 0x1f, 0x00, + 0x39, 0x1a, 0x22, 0xfb, 0x95, 0x70, 0xa1, 0xe9, 0xdb, 0x8f, 0x2f, 0x11, 0x7b, 0xbd, 0xd9, 0x93, + 0xb9, 0xea, 0x64, 0xee, 0xc1, 0xf1, 0x1f, 0x06, 0xec, 0xde, 0x02, 0x8a, 0xea, 0xb0, 0x16, 0x77, + 0xbb, 0xa5, 0xee, 0xd5, 0x6e, 0x45, 0x17, 0x37, 0x51, 0xac, 0x6a, 0x69, 0x0d, 0xfa, 0x04, 0x36, + 0xfa, 0x03, 0x2e, 0x2c, 0x8f, 0x3a, 0xfe, 0xc0, 0x4d, 0x63, 0x66, 0x25, 0x66, 0x7d, 0xac, 0x6d, + 0x2b, 0x65, 0x82, 0x7a, 0x00, 0xcb, 0xa9, 0xf7, 0xc5, 0xbd, 0x97, 0xaa, 0xb1, 0xf3, 0xe6, 0x52, + 0x22, 0x3e, 0xf7, 0x5e, 0x12, 0xfc, 0x0b, 0x6c, 0x4d, 0xcc, 0x36, 0x6a, 0xd0, 0x9f, 0x60, 0x2d, + 0xcd, 0x99, 0x92, 0x6a, 0xd2, 0x0e, 0x6e, 0x49, 0x9a, 0x42, 0x99, 0xc8, 0xc9, 0x16, 0x8c, 0xe3, + 0x16, 0xdc, 0xbf, 0x0d, 0x16, 0xdd, 0x85, 0x62, 0x96, 0xac, 0x44, 0x80, 0x1d, 0x58, 0x8e, 0x30, + 0x44, 0xf3, 0x7c, 0x36, 0xa5, 0xd8, 0xf7, 0xae, 0xe7, 0x9d, 0x81, 0x4f, 0xaa, 0xf0, 0x09, 0x94, + 0x6f, 0x32, 0x7f, 0xfb, 0x36, 0xde, 0x86, 0xe2, 0x51, 0xab, 0x1d, 0xbd, 0x3d, 0x04, 0x73, 0x72, + 0xf4, 0xa8, 0xd7, 0x27, 0xbf, 0x71, 0x17, 0x56, 0x92, 0x20, 0x11, 0x09, 0xe7, 0xd3, 0x8a, 0x81, + 0xa7, 0x5d, 0x6a, 0x4a, 0x05, 0xfe, 0xcc, 0xc1, 0xe6, 0x8d, 0x08, 0xf4, 0x39, 0xcc, 0x11, 0x3a, + 0x9c, 0xf2, 0x4a, 0xb2, 0x88, 0xda, 0x31, 0x1d, 0xf2, 0x63, 0x2a, 0xc2, 0x91, 0x29, 0x91, 0xe8, + 0x43, 0x98, 0xef, 0xb3, 0x01, 0x15, 0xaa, 0x5f, 0x4b, 0x8d, 0xa5, 0xd8, 0xc7, 0xb3, 0xb1, 0xd8, + 0x8c, 0xb4, 0xe8, 0x20, 0x19, 0x85, 0x39, 0x69, 0xb8, 0x96, 0x19, 0x85, 0xe7, 0x01, 0x71, 0xe2, + 0x71, 0x88, 0x5e, 0x40, 0xc9, 0xa6, 0x94, 0x09, 0x5b, 0x8f, 0xe5, 0x31, 0xe4, 0xe1, 0x2d, 0xf2, + 0x6b, 0x26, 0x28, 0x95, 0x66, 0xda, 0x0f, 0x3a, 0x82, 0x92, 0xe3, 0x7a, 0x96, 0xce, 0x24, 0x2f, + 0xdd, 0xa2, 0xc4, 0xad, 0xae, 0x99, 0x2a, 0x6e, 0x7c, 0xe4, 0x26, 0x38, 0xae, 0x17, 0x7d, 0x57, + 0x1e, 0x43, 0x31, 0x66, 0x01, 0xad, 0x40, 0xee, 0x92, 0x8c, 0xa2, 0xda, 0x8e, 0x3f, 0xd1, 0x3a, + 0xe4, 0x87, 0xb6, 0x3f, 0x20, 0xd1, 0x64, 0x55, 0x87, 0x4f, 0x67, 0x9f, 0x18, 0x95, 0xcf, 0x60, + 
0x25, 0x9b, 0xde, 0xdb, 0xe0, 0x71, 0x0f, 0xf2, 0x92, 0x54, 0xf4, 0x01, 0x2c, 0x25, 0x9d, 0x12, + 0xd8, 0xa2, 0x17, 0xe1, 0xef, 0xc4, 0xd2, 0x33, 0x5b, 0xf4, 0xd0, 0x16, 0x14, 0x7b, 0x8c, 0x0b, + 0x65, 0x11, 0x6d, 0xc6, 0xb1, 0x40, 0x2b, 0x43, 0x62, 0xbb, 0x16, 0xa3, 0xbe, 0x1a, 0xf5, 0x05, + 0xb3, 0x30, 0x16, 0x3c, 0xa7, 0xfe, 0x08, 0x87, 0x00, 0x49, 0x55, 0xde, 0x49, 0xb8, 0x1d, 0x28, + 0x05, 0x24, 0xec, 0x7b, 0x9c, 0xcb, 0x82, 0xaa, 0x35, 0x9c, 0x16, 0x35, 0xbe, 0x80, 0x45, 0xb5, + 0xf3, 0x43, 0xc9, 0x0f, 0x7a, 0x04, 0x05, 0xfd, 0x0f, 0x80, 0xca, 0x71, 0x89, 0x32, 0xbf, 0x05, + 0x95, 0xa4, 0xdf, 0xd4, 0x2a, 0x9e, 0x69, 0xfc, 0x96, 0x83, 0xc5, 0xf4, 0xda, 0x46, 0x5f, 0xc1, + 0xc6, 0x97, 0x44, 0x4c, 0xfa, 0xc5, 0xc9, 0x80, 0x2b, 0x53, 0xf7, 0x3e, 0x9e, 0x41, 0x4d, 0x58, + 0x4c, 0xef, 0xf9, 0x6b, 0xf8, 0xf7, 0xe3, 0xf3, 0xa4, 0xdf, 0x01, 0x3c, 0xf3, 0xb1, 0x81, 0x88, + 0x4c, 0x66, 0xc2, 0x2c, 0x44, 0xbb, 0x31, 0xf8, 0xe6, 0xfd, 0x52, 0xb9, 0x3f, 0xdd, 0x48, 0x07, + 0x42, 0x4d, 0x28, 0xe8, 0xa7, 0x91, 0x22, 0x2f, 0x33, 0xd7, 0x2a, 0x9b, 0x13, 0x34, 0xb1, 0x8b, + 0x1f, 0x61, 0xf5, 0xda, 0x2a, 0x46, 0xf7, 0xd2, 0xf1, 0x27, 0xee, 0xfc, 0x0a, 0x9e, 0x66, 0xa2, + 0xbd, 0x3f, 0xfd, 0xfa, 0xf5, 0x9b, 0xaa, 0xf1, 0xf7, 0x9b, 0xea, 0xcc, 0xaf, 0x57, 0x55, 0xe3, + 0xf5, 0x55, 0xd5, 0xf8, 0xeb, 0xaa, 0x6a, 0xfc, 0x7b, 0x55, 0x35, 0x5e, 0xfd, 0x57, 0x9d, 0xf9, + 0x7e, 0xff, 0xf2, 0x09, 0xaf, 0x79, 0xac, 0x7e, 0x39, 0xe8, 0x10, 0x9f, 0x88, 0x7a, 0x70, 0xd9, + 0xad, 0xdb, 0x81, 0xc7, 0xeb, 0xea, 0xe5, 0x06, 0xb2, 0x2e, 0xf5, 0x28, 0x4e, 0x67, 0x5e, 0xfe, + 0xc8, 0x3e, 0xfc, 0x3f, 0x00, 0x00, 0xff, 0xff, 0xe0, 0x26, 0x52, 0x96, 0x0d, 0x0b, 0x00, 0x00, } // Reference imports to suppress errors if they are not otherwise used. 
@@ -2075,6 +2137,36 @@ func (m *ContainerAllocateRequest) MarshalToSizedBuffer(dAtA []byte) (int, error return len(dAtA) - i, nil } +func (m *CDIDevice) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *CDIDevice) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *CDIDevice) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if len(m.Name) > 0 { + i -= len(m.Name) + copy(dAtA[i:], m.Name) + i = encodeVarintApi(dAtA, i, uint64(len(m.Name))) + i-- + dAtA[i] = 0xa + } + return len(dAtA) - i, nil +} + func (m *AllocateResponse) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) @@ -2132,6 +2224,20 @@ func (m *ContainerAllocateResponse) MarshalToSizedBuffer(dAtA []byte) (int, erro _ = i var l int _ = l + if len(m.CDIDevices) > 0 { + for iNdEx := len(m.CDIDevices) - 1; iNdEx >= 0; iNdEx-- { + { + size, err := m.CDIDevices[iNdEx].MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintApi(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x2a + } + } if len(m.Annotations) > 0 { for k := range m.Annotations { v := m.Annotations[k] @@ -2538,6 +2644,19 @@ func (m *ContainerAllocateRequest) Size() (n int) { return n } +func (m *CDIDevice) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + l = len(m.Name) + if l > 0 { + n += 1 + l + sovApi(uint64(l)) + } + return n +} + func (m *AllocateResponse) Size() (n int) { if m == nil { return 0 @@ -2587,6 +2706,12 @@ func (m *ContainerAllocateResponse) Size() (n int) { n += mapEntrySize + 1 + sovApi(uint64(mapEntrySize)) } } + if len(m.CDIDevices) > 0 { + for _, e := range m.CDIDevices { + l = e.Size() + n += 1 + l + sovApi(uint64(l)) + } + } return n } @@ -2818,6 +2943,16 @@ func 
(this *ContainerAllocateRequest) String() string { }, "") return s } +func (this *CDIDevice) String() string { + if this == nil { + return "nil" + } + s := strings.Join([]string{`&CDIDevice{`, + `Name:` + fmt.Sprintf("%v", this.Name) + `,`, + `}`, + }, "") + return s +} func (this *AllocateResponse) String() string { if this == nil { return "nil" @@ -2847,6 +2982,11 @@ func (this *ContainerAllocateResponse) String() string { repeatedStringForDevices += strings.Replace(f.String(), "DeviceSpec", "DeviceSpec", 1) + "," } repeatedStringForDevices += "}" + repeatedStringForCDIDevices := "[]*CDIDevice{" + for _, f := range this.CDIDevices { + repeatedStringForCDIDevices += strings.Replace(f.String(), "CDIDevice", "CDIDevice", 1) + "," + } + repeatedStringForCDIDevices += "}" keysForEnvs := make([]string, 0, len(this.Envs)) for k := range this.Envs { keysForEnvs = append(keysForEnvs, k) @@ -2872,6 +3012,7 @@ func (this *ContainerAllocateResponse) String() string { `Mounts:` + repeatedStringForMounts + `,`, `Devices:` + repeatedStringForDevices + `,`, `Annotations:` + mapStringForAnnotations + `,`, + `CDIDevices:` + repeatedStringForCDIDevices + `,`, `}`, }, "") return s @@ -4298,6 +4439,88 @@ func (m *ContainerAllocateRequest) Unmarshal(dAtA []byte) error { } return nil } +func (m *CDIDevice) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowApi + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: CDIDevice: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: CDIDevice: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 2 { + return 
fmt.Errorf("proto: wrong wireType = %d for field Name", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowApi + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthApi + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthApi + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Name = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipApi(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthApi + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} func (m *AllocateResponse) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 @@ -4733,6 +4956,40 @@ func (m *ContainerAllocateResponse) Unmarshal(dAtA []byte) error { } m.Annotations[mapkey] = mapvalue iNdEx = postIndex + case 5: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field CDIDevices", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowApi + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthApi + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthApi + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.CDIDevices = append(m.CDIDevices, &CDIDevice{}) + if err := m.CDIDevices[len(m.CDIDevices)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipApi(dAtA[iNdEx:]) diff 
--git a/cluster-autoscaler/vendor/k8s.io/kubelet/pkg/apis/deviceplugin/v1beta1/api.proto b/cluster-autoscaler/vendor/k8s.io/kubelet/pkg/apis/deviceplugin/v1beta1/api.proto index f44e7cebd2d8..97019bd5b939 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubelet/pkg/apis/deviceplugin/v1beta1/api.proto +++ b/cluster-autoscaler/vendor/k8s.io/kubelet/pkg/apis/deviceplugin/v1beta1/api.proto @@ -164,6 +164,15 @@ message ContainerAllocateRequest { repeated string devices_ids = 1 [(gogoproto.customname) = "DevicesIDs"]; } +// CDIDevice specifies a CDI device information. +message CDIDevice { + // Fully qualified CDI device name + // for example: vendor.com/gpu=gpudevice1 + // see more details in the CDI specification: + // https://github.com/container-orchestrated-devices/container-device-interface/blob/main/SPEC.md + string name = 1; +} + // AllocateResponse includes the artifacts that needs to be injected into // a container for accessing 'deviceIDs' that were mentioned as part of // 'AllocateRequest'. @@ -185,6 +194,8 @@ message ContainerAllocateResponse { repeated DeviceSpec devices = 3; // Container annotations to pass to the container runtime map annotations = 4; + // CDI devices for the container. + repeated CDIDevice cdi_devices = 5 [(gogoproto.customname) = "CDIDevices"]; } // Mount specifies a host volume to mount into a container. diff --git a/cluster-autoscaler/vendor/k8s.io/kubelet/pkg/apis/dra/v1alpha3/api.pb.go b/cluster-autoscaler/vendor/k8s.io/kubelet/pkg/apis/dra/v1alpha3/api.pb.go new file mode 100644 index 000000000000..6d0310cabd31 --- /dev/null +++ b/cluster-autoscaler/vendor/k8s.io/kubelet/pkg/apis/dra/v1alpha3/api.pb.go @@ -0,0 +1,2134 @@ +/* +Copyright The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. 
+You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +// Code generated by protoc-gen-gogo. DO NOT EDIT. +// source: api.proto + +package v1alpha3 + +import ( + context "context" + fmt "fmt" + _ "github.com/gogo/protobuf/gogoproto" + proto "github.com/gogo/protobuf/proto" + github_com_gogo_protobuf_sortkeys "github.com/gogo/protobuf/sortkeys" + grpc "google.golang.org/grpc" + codes "google.golang.org/grpc/codes" + status "google.golang.org/grpc/status" + io "io" + math "math" + math_bits "math/bits" + reflect "reflect" + strings "strings" +) + +// Reference imports to suppress errors if they are not otherwise used. +var _ = proto.Marshal +var _ = fmt.Errorf +var _ = math.Inf + +// This is a compile-time assertion to ensure that this generated file +// is compatible with the proto package it is being compiled against. +// A compilation error at this line likely means your copy of the +// proto package needs to be updated. +const _ = proto.GoGoProtoPackageIsVersion3 // please upgrade the proto package + +type NodePrepareResourcesRequest struct { + // The list of ResourceClaims that are to be prepared. 
+ Claims []*Claim `protobuf:"bytes,1,rep,name=claims,proto3" json:"claims,omitempty"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_sizecache int32 `json:"-"` +} + +func (m *NodePrepareResourcesRequest) Reset() { *m = NodePrepareResourcesRequest{} } +func (*NodePrepareResourcesRequest) ProtoMessage() {} +func (*NodePrepareResourcesRequest) Descriptor() ([]byte, []int) { + return fileDescriptor_00212fb1f9d3bf1c, []int{0} +} +func (m *NodePrepareResourcesRequest) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *NodePrepareResourcesRequest) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_NodePrepareResourcesRequest.Marshal(b, m, deterministic) + } else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err + } + return b[:n], nil + } +} +func (m *NodePrepareResourcesRequest) XXX_Merge(src proto.Message) { + xxx_messageInfo_NodePrepareResourcesRequest.Merge(m, src) +} +func (m *NodePrepareResourcesRequest) XXX_Size() int { + return m.Size() +} +func (m *NodePrepareResourcesRequest) XXX_DiscardUnknown() { + xxx_messageInfo_NodePrepareResourcesRequest.DiscardUnknown(m) +} + +var xxx_messageInfo_NodePrepareResourcesRequest proto.InternalMessageInfo + +func (m *NodePrepareResourcesRequest) GetClaims() []*Claim { + if m != nil { + return m.Claims + } + return nil +} + +type NodePrepareResourcesResponse struct { + // The ResourceClaims for which preparation was done + // or attempted, with claim_uid as key. + // + // It is an error if some claim listed in NodePrepareResourcesRequest + // does not get prepared. NodePrepareResources + // will be called again for those that are missing. 
+ Claims map[string]*NodePrepareResourceResponse `protobuf:"bytes,1,rep,name=claims,proto3" json:"claims,omitempty" protobuf_key:"bytes,1,opt,name=key,proto3" protobuf_val:"bytes,2,opt,name=value,proto3"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_sizecache int32 `json:"-"` +} + +func (m *NodePrepareResourcesResponse) Reset() { *m = NodePrepareResourcesResponse{} } +func (*NodePrepareResourcesResponse) ProtoMessage() {} +func (*NodePrepareResourcesResponse) Descriptor() ([]byte, []int) { + return fileDescriptor_00212fb1f9d3bf1c, []int{1} +} +func (m *NodePrepareResourcesResponse) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *NodePrepareResourcesResponse) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_NodePrepareResourcesResponse.Marshal(b, m, deterministic) + } else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err + } + return b[:n], nil + } +} +func (m *NodePrepareResourcesResponse) XXX_Merge(src proto.Message) { + xxx_messageInfo_NodePrepareResourcesResponse.Merge(m, src) +} +func (m *NodePrepareResourcesResponse) XXX_Size() int { + return m.Size() +} +func (m *NodePrepareResourcesResponse) XXX_DiscardUnknown() { + xxx_messageInfo_NodePrepareResourcesResponse.DiscardUnknown(m) +} + +var xxx_messageInfo_NodePrepareResourcesResponse proto.InternalMessageInfo + +func (m *NodePrepareResourcesResponse) GetClaims() map[string]*NodePrepareResourceResponse { + if m != nil { + return m.Claims + } + return nil +} + +type NodePrepareResourceResponse struct { + // These are the additional devices that kubelet must + // make available via the container runtime. A resource + // may have zero or more devices. + CDIDevices []string `protobuf:"bytes,1,rep,name=cdi_devices,json=cdiDevices,proto3" json:"cdi_devices,omitempty"` + // If non-empty, preparing the ResourceClaim failed. + // cdi_devices is ignored in that case. 
+ Error string `protobuf:"bytes,2,opt,name=error,proto3" json:"error,omitempty"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_sizecache int32 `json:"-"` +} + +func (m *NodePrepareResourceResponse) Reset() { *m = NodePrepareResourceResponse{} } +func (*NodePrepareResourceResponse) ProtoMessage() {} +func (*NodePrepareResourceResponse) Descriptor() ([]byte, []int) { + return fileDescriptor_00212fb1f9d3bf1c, []int{2} +} +func (m *NodePrepareResourceResponse) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *NodePrepareResourceResponse) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_NodePrepareResourceResponse.Marshal(b, m, deterministic) + } else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err + } + return b[:n], nil + } +} +func (m *NodePrepareResourceResponse) XXX_Merge(src proto.Message) { + xxx_messageInfo_NodePrepareResourceResponse.Merge(m, src) +} +func (m *NodePrepareResourceResponse) XXX_Size() int { + return m.Size() +} +func (m *NodePrepareResourceResponse) XXX_DiscardUnknown() { + xxx_messageInfo_NodePrepareResourceResponse.DiscardUnknown(m) +} + +var xxx_messageInfo_NodePrepareResourceResponse proto.InternalMessageInfo + +func (m *NodePrepareResourceResponse) GetCDIDevices() []string { + if m != nil { + return m.CDIDevices + } + return nil +} + +func (m *NodePrepareResourceResponse) GetError() string { + if m != nil { + return m.Error + } + return "" +} + +type NodeUnprepareResourcesRequest struct { + // The list of ResourceClaims that are to be unprepared. 
+ Claims []*Claim `protobuf:"bytes,1,rep,name=claims,proto3" json:"claims,omitempty"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_sizecache int32 `json:"-"` +} + +func (m *NodeUnprepareResourcesRequest) Reset() { *m = NodeUnprepareResourcesRequest{} } +func (*NodeUnprepareResourcesRequest) ProtoMessage() {} +func (*NodeUnprepareResourcesRequest) Descriptor() ([]byte, []int) { + return fileDescriptor_00212fb1f9d3bf1c, []int{3} +} +func (m *NodeUnprepareResourcesRequest) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *NodeUnprepareResourcesRequest) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_NodeUnprepareResourcesRequest.Marshal(b, m, deterministic) + } else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err + } + return b[:n], nil + } +} +func (m *NodeUnprepareResourcesRequest) XXX_Merge(src proto.Message) { + xxx_messageInfo_NodeUnprepareResourcesRequest.Merge(m, src) +} +func (m *NodeUnprepareResourcesRequest) XXX_Size() int { + return m.Size() +} +func (m *NodeUnprepareResourcesRequest) XXX_DiscardUnknown() { + xxx_messageInfo_NodeUnprepareResourcesRequest.DiscardUnknown(m) +} + +var xxx_messageInfo_NodeUnprepareResourcesRequest proto.InternalMessageInfo + +func (m *NodeUnprepareResourcesRequest) GetClaims() []*Claim { + if m != nil { + return m.Claims + } + return nil +} + +type NodeUnprepareResourcesResponse struct { + // The ResourceClaims for which preparation was reverted. + // The same rules as for NodePrepareResourcesResponse.claims + // apply. 
+ Claims map[string]*NodeUnprepareResourceResponse `protobuf:"bytes,1,rep,name=claims,proto3" json:"claims,omitempty" protobuf_key:"bytes,1,opt,name=key,proto3" protobuf_val:"bytes,2,opt,name=value,proto3"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_sizecache int32 `json:"-"` +} + +func (m *NodeUnprepareResourcesResponse) Reset() { *m = NodeUnprepareResourcesResponse{} } +func (*NodeUnprepareResourcesResponse) ProtoMessage() {} +func (*NodeUnprepareResourcesResponse) Descriptor() ([]byte, []int) { + return fileDescriptor_00212fb1f9d3bf1c, []int{4} +} +func (m *NodeUnprepareResourcesResponse) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *NodeUnprepareResourcesResponse) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_NodeUnprepareResourcesResponse.Marshal(b, m, deterministic) + } else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err + } + return b[:n], nil + } +} +func (m *NodeUnprepareResourcesResponse) XXX_Merge(src proto.Message) { + xxx_messageInfo_NodeUnprepareResourcesResponse.Merge(m, src) +} +func (m *NodeUnprepareResourcesResponse) XXX_Size() int { + return m.Size() +} +func (m *NodeUnprepareResourcesResponse) XXX_DiscardUnknown() { + xxx_messageInfo_NodeUnprepareResourcesResponse.DiscardUnknown(m) +} + +var xxx_messageInfo_NodeUnprepareResourcesResponse proto.InternalMessageInfo + +func (m *NodeUnprepareResourcesResponse) GetClaims() map[string]*NodeUnprepareResourceResponse { + if m != nil { + return m.Claims + } + return nil +} + +type NodeUnprepareResourceResponse struct { + // If non-empty, unpreparing the ResourceClaim failed. 
+ Error string `protobuf:"bytes,1,opt,name=error,proto3" json:"error,omitempty"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_sizecache int32 `json:"-"` +} + +func (m *NodeUnprepareResourceResponse) Reset() { *m = NodeUnprepareResourceResponse{} } +func (*NodeUnprepareResourceResponse) ProtoMessage() {} +func (*NodeUnprepareResourceResponse) Descriptor() ([]byte, []int) { + return fileDescriptor_00212fb1f9d3bf1c, []int{5} +} +func (m *NodeUnprepareResourceResponse) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *NodeUnprepareResourceResponse) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_NodeUnprepareResourceResponse.Marshal(b, m, deterministic) + } else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err + } + return b[:n], nil + } +} +func (m *NodeUnprepareResourceResponse) XXX_Merge(src proto.Message) { + xxx_messageInfo_NodeUnprepareResourceResponse.Merge(m, src) +} +func (m *NodeUnprepareResourceResponse) XXX_Size() int { + return m.Size() +} +func (m *NodeUnprepareResourceResponse) XXX_DiscardUnknown() { + xxx_messageInfo_NodeUnprepareResourceResponse.DiscardUnknown(m) +} + +var xxx_messageInfo_NodeUnprepareResourceResponse proto.InternalMessageInfo + +func (m *NodeUnprepareResourceResponse) GetError() string { + if m != nil { + return m.Error + } + return "" +} + +type Claim struct { + // The ResourceClaim namespace (ResourceClaim.meta.Namespace). + // This field is REQUIRED. + Namespace string `protobuf:"bytes,1,opt,name=namespace,proto3" json:"namespace,omitempty"` + // The UID of the Resource claim (ResourceClaim.meta.UUID). + // This field is REQUIRED. + Uid string `protobuf:"bytes,2,opt,name=uid,proto3" json:"uid,omitempty"` + // The name of the Resource claim (ResourceClaim.meta.Name) + // This field is REQUIRED. 
+ Name string `protobuf:"bytes,3,opt,name=name,proto3" json:"name,omitempty"` + // Resource handle (AllocationResult.ResourceHandles[*].Data) + // This field is REQUIRED. + ResourceHandle string `protobuf:"bytes,4,opt,name=resource_handle,json=resourceHandle,proto3" json:"resource_handle,omitempty"` + XXX_NoUnkeyedLiteral struct{} `json:"-"` + XXX_sizecache int32 `json:"-"` +} + +func (m *Claim) Reset() { *m = Claim{} } +func (*Claim) ProtoMessage() {} +func (*Claim) Descriptor() ([]byte, []int) { + return fileDescriptor_00212fb1f9d3bf1c, []int{6} +} +func (m *Claim) XXX_Unmarshal(b []byte) error { + return m.Unmarshal(b) +} +func (m *Claim) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) { + if deterministic { + return xxx_messageInfo_Claim.Marshal(b, m, deterministic) + } else { + b = b[:cap(b)] + n, err := m.MarshalToSizedBuffer(b) + if err != nil { + return nil, err + } + return b[:n], nil + } +} +func (m *Claim) XXX_Merge(src proto.Message) { + xxx_messageInfo_Claim.Merge(m, src) +} +func (m *Claim) XXX_Size() int { + return m.Size() +} +func (m *Claim) XXX_DiscardUnknown() { + xxx_messageInfo_Claim.DiscardUnknown(m) +} + +var xxx_messageInfo_Claim proto.InternalMessageInfo + +func (m *Claim) GetNamespace() string { + if m != nil { + return m.Namespace + } + return "" +} + +func (m *Claim) GetUid() string { + if m != nil { + return m.Uid + } + return "" +} + +func (m *Claim) GetName() string { + if m != nil { + return m.Name + } + return "" +} + +func (m *Claim) GetResourceHandle() string { + if m != nil { + return m.ResourceHandle + } + return "" +} + +func init() { + proto.RegisterType((*NodePrepareResourcesRequest)(nil), "v1alpha3.NodePrepareResourcesRequest") + proto.RegisterType((*NodePrepareResourcesResponse)(nil), "v1alpha3.NodePrepareResourcesResponse") + proto.RegisterMapType((map[string]*NodePrepareResourceResponse)(nil), "v1alpha3.NodePrepareResourcesResponse.ClaimsEntry") + proto.RegisterType((*NodePrepareResourceResponse)(nil), 
"v1alpha3.NodePrepareResourceResponse") + proto.RegisterType((*NodeUnprepareResourcesRequest)(nil), "v1alpha3.NodeUnprepareResourcesRequest") + proto.RegisterType((*NodeUnprepareResourcesResponse)(nil), "v1alpha3.NodeUnprepareResourcesResponse") + proto.RegisterMapType((map[string]*NodeUnprepareResourceResponse)(nil), "v1alpha3.NodeUnprepareResourcesResponse.ClaimsEntry") + proto.RegisterType((*NodeUnprepareResourceResponse)(nil), "v1alpha3.NodeUnprepareResourceResponse") + proto.RegisterType((*Claim)(nil), "v1alpha3.Claim") +} + +func init() { proto.RegisterFile("api.proto", fileDescriptor_00212fb1f9d3bf1c) } + +var fileDescriptor_00212fb1f9d3bf1c = []byte{ + // 500 bytes of a gzipped FileDescriptorProto + 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xa4, 0x54, 0x4d, 0x6f, 0xd3, 0x40, + 0x10, 0xcd, 0x36, 0x49, 0x45, 0x26, 0x52, 0x8b, 0x56, 0x15, 0xb2, 0x42, 0x31, 0x91, 0x45, 0x49, + 0x2e, 0xd8, 0x22, 0x05, 0xa9, 0x02, 0x71, 0x49, 0x0b, 0x2a, 0x08, 0x21, 0x64, 0x89, 0x0b, 0x97, + 0xb2, 0xb6, 0x07, 0xc7, 0x8a, 0xe3, 0x35, 0xbb, 0x76, 0xa4, 0xde, 0xf8, 0x09, 0xfc, 0xac, 0x1e, + 0x38, 0x20, 0x4e, 0x9c, 0x2a, 0x6a, 0xfe, 0x08, 0xf2, 0xda, 0x4e, 0x3f, 0xe4, 0x34, 0x95, 0x7a, + 0x9b, 0x7d, 0xbb, 0x33, 0x6f, 0xe6, 0xbd, 0xb1, 0xa1, 0xc3, 0xe2, 0xc0, 0x8c, 0x05, 0x4f, 0x38, + 0xbd, 0x33, 0x7f, 0xca, 0xc2, 0x78, 0xc2, 0x76, 0x7b, 0x4f, 0xfc, 0x20, 0x99, 0xa4, 0x8e, 0xe9, + 0xf2, 0x99, 0xe5, 0x73, 0x9f, 0x5b, 0xea, 0x81, 0x93, 0x7e, 0x55, 0x27, 0x75, 0x50, 0x51, 0x91, + 0x68, 0xbc, 0x81, 0xfb, 0x1f, 0xb8, 0x87, 0x1f, 0x05, 0xc6, 0x4c, 0xa0, 0x8d, 0x92, 0xa7, 0xc2, + 0x45, 0x69, 0xe3, 0xb7, 0x14, 0x65, 0x42, 0x07, 0xb0, 0xee, 0x86, 0x2c, 0x98, 0x49, 0x8d, 0xf4, + 0x9b, 0xc3, 0xee, 0x68, 0xd3, 0xac, 0x88, 0xcc, 0xfd, 0x1c, 0xb7, 0xcb, 0x6b, 0xe3, 0x27, 0x81, + 0xed, 0xfa, 0x42, 0x32, 0xe6, 0x91, 0x44, 0xfa, 0xee, 0x4a, 0xa5, 0xd1, 0x79, 0xa5, 0xeb, 0xf2, + 0x0a, 0x1a, 0xf9, 0x3a, 0x4a, 0xc4, 0x71, 0x45, 0xd6, 0xfb, 0x02, 0xdd, 0x0b, 0x30, 0xbd, 0x0b, + 0xcd, 
0x29, 0x1e, 0x6b, 0xa4, 0x4f, 0x86, 0x1d, 0x3b, 0x0f, 0xe9, 0x4b, 0x68, 0xcf, 0x59, 0x98, + 0xa2, 0xb6, 0xd6, 0x27, 0xc3, 0xee, 0x68, 0xe7, 0x5a, 0xae, 0x8a, 0xca, 0x2e, 0x72, 0x5e, 0xac, + 0xed, 0x11, 0xc3, 0xab, 0x95, 0x65, 0x31, 0x8c, 0x05, 0x5d, 0xd7, 0x0b, 0x8e, 0x3c, 0x9c, 0x07, + 0x2e, 0x16, 0x13, 0x75, 0xc6, 0x1b, 0xd9, 0xe9, 0x43, 0xd8, 0x3f, 0x78, 0x7b, 0x50, 0xa0, 0x36, + 0xb8, 0x5e, 0x50, 0xc6, 0x74, 0x0b, 0xda, 0x28, 0x04, 0x17, 0xaa, 0xa1, 0x8e, 0x5d, 0x1c, 0x8c, + 0x43, 0x78, 0x90, 0xb3, 0x7c, 0x8a, 0xe2, 0xdb, 0xca, 0xff, 0x9b, 0x80, 0xbe, 0xac, 0x54, 0xd9, + 0xf3, 0xfb, 0x2b, 0xb5, 0x9e, 0x5d, 0x16, 0x65, 0x79, 0x66, 0xad, 0x05, 0xce, 0x2a, 0x0b, 0x5e, + 0x5d, 0xb6, 0x60, 0xb0, 0x82, 0xad, 0xce, 0x84, 0xe7, 0x4b, 0xe4, 0x59, 0x8c, 0xb4, 0x50, 0x95, + 0x5c, 0x54, 0x35, 0x81, 0xb6, 0x6a, 0x8d, 0x6e, 0x43, 0x27, 0x62, 0x33, 0x94, 0x31, 0x73, 0xb1, + 0x7c, 0x72, 0x0e, 0xe4, 0x2d, 0xa7, 0x81, 0x57, 0x1a, 0x92, 0x87, 0x94, 0x42, 0x2b, 0xbf, 0xd6, + 0x9a, 0x0a, 0x52, 0x31, 0x1d, 0xc0, 0xa6, 0x28, 0x69, 0x8f, 0x26, 0x2c, 0xf2, 0x42, 0xd4, 0x5a, + 0xea, 0x7a, 0xa3, 0x82, 0x0f, 0x15, 0x3a, 0x3a, 0x25, 0xd0, 0xca, 0xbb, 0xa5, 0x3e, 0x6c, 0xd5, + 0x2d, 0x34, 0xdd, 0x59, 0xb5, 0xf0, 0xca, 0xf2, 0xde, 0xe3, 0x9b, 0x7d, 0x17, 0x46, 0x83, 0xce, + 0xe0, 0x5e, 0xbd, 0x71, 0x74, 0xb0, 0xda, 0xda, 0x82, 0x6c, 0x78, 0xd3, 0x1d, 0x30, 0x1a, 0xe3, + 0xf1, 0xc9, 0x99, 0x4e, 0xfe, 0x9c, 0xe9, 0x8d, 0xef, 0x99, 0x4e, 0x4e, 0x32, 0x9d, 0xfc, 0xca, + 0x74, 0xf2, 0x37, 0xd3, 0xc9, 0x8f, 0x7f, 0x7a, 0xe3, 0xf3, 0xa3, 0xe9, 0x9e, 0x34, 0x03, 0x6e, + 0x4d, 0x53, 0x07, 0x43, 0x4c, 0xac, 0x78, 0xea, 0x5b, 0x2c, 0x0e, 0xa4, 0xe5, 0x09, 0x66, 0x55, + 0x24, 0xce, 0xba, 0xfa, 0xe9, 0xec, 0xfe, 0x0f, 0x00, 0x00, 0xff, 0xff, 0x42, 0xff, 0x15, 0x6b, + 0xba, 0x04, 0x00, 0x00, +} + +// Reference imports to suppress errors if they are not otherwise used. 
+var _ context.Context +var _ grpc.ClientConn + +// This is a compile-time assertion to ensure that this generated file +// is compatible with the grpc package it is being compiled against. +const _ = grpc.SupportPackageIsVersion4 + +// NodeClient is the client API for Node service. +// +// For semantics around ctx use and closing/ending streaming RPCs, please refer to https://godoc.org/google.golang.org/grpc#ClientConn.NewStream. +type NodeClient interface { + // NodePrepareResources prepares several ResourceClaims + // for use on the node. If an error is returned, the + // response is ignored. Failures for individidual claims + // can be reported inside NodePrepareResourcesResponse. + NodePrepareResources(ctx context.Context, in *NodePrepareResourcesRequest, opts ...grpc.CallOption) (*NodePrepareResourcesResponse, error) + // NodeUnprepareResources is the opposite of NodePrepareResources. + // The same error handling rules apply, + NodeUnprepareResources(ctx context.Context, in *NodeUnprepareResourcesRequest, opts ...grpc.CallOption) (*NodeUnprepareResourcesResponse, error) +} + +type nodeClient struct { + cc *grpc.ClientConn +} + +func NewNodeClient(cc *grpc.ClientConn) NodeClient { + return &nodeClient{cc} +} + +func (c *nodeClient) NodePrepareResources(ctx context.Context, in *NodePrepareResourcesRequest, opts ...grpc.CallOption) (*NodePrepareResourcesResponse, error) { + out := new(NodePrepareResourcesResponse) + err := c.cc.Invoke(ctx, "/v1alpha3.Node/NodePrepareResources", in, out, opts...) + if err != nil { + return nil, err + } + return out, nil +} + +func (c *nodeClient) NodeUnprepareResources(ctx context.Context, in *NodeUnprepareResourcesRequest, opts ...grpc.CallOption) (*NodeUnprepareResourcesResponse, error) { + out := new(NodeUnprepareResourcesResponse) + err := c.cc.Invoke(ctx, "/v1alpha3.Node/NodeUnprepareResources", in, out, opts...) + if err != nil { + return nil, err + } + return out, nil +} + +// NodeServer is the server API for Node service. 
+type NodeServer interface { + // NodePrepareResources prepares several ResourceClaims + // for use on the node. If an error is returned, the + // response is ignored. Failures for individidual claims + // can be reported inside NodePrepareResourcesResponse. + NodePrepareResources(context.Context, *NodePrepareResourcesRequest) (*NodePrepareResourcesResponse, error) + // NodeUnprepareResources is the opposite of NodePrepareResources. + // The same error handling rules apply, + NodeUnprepareResources(context.Context, *NodeUnprepareResourcesRequest) (*NodeUnprepareResourcesResponse, error) +} + +// UnimplementedNodeServer can be embedded to have forward compatible implementations. +type UnimplementedNodeServer struct { +} + +func (*UnimplementedNodeServer) NodePrepareResources(ctx context.Context, req *NodePrepareResourcesRequest) (*NodePrepareResourcesResponse, error) { + return nil, status.Errorf(codes.Unimplemented, "method NodePrepareResources not implemented") +} +func (*UnimplementedNodeServer) NodeUnprepareResources(ctx context.Context, req *NodeUnprepareResourcesRequest) (*NodeUnprepareResourcesResponse, error) { + return nil, status.Errorf(codes.Unimplemented, "method NodeUnprepareResources not implemented") +} + +func RegisterNodeServer(s *grpc.Server, srv NodeServer) { + s.RegisterService(&_Node_serviceDesc, srv) +} + +func _Node_NodePrepareResources_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { + in := new(NodePrepareResourcesRequest) + if err := dec(in); err != nil { + return nil, err + } + if interceptor == nil { + return srv.(NodeServer).NodePrepareResources(ctx, in) + } + info := &grpc.UnaryServerInfo{ + Server: srv, + FullMethod: "/v1alpha3.Node/NodePrepareResources", + } + handler := func(ctx context.Context, req interface{}) (interface{}, error) { + return srv.(NodeServer).NodePrepareResources(ctx, req.(*NodePrepareResourcesRequest)) + } + return 
interceptor(ctx, in, info, handler) +} + +func _Node_NodeUnprepareResources_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) { + in := new(NodeUnprepareResourcesRequest) + if err := dec(in); err != nil { + return nil, err + } + if interceptor == nil { + return srv.(NodeServer).NodeUnprepareResources(ctx, in) + } + info := &grpc.UnaryServerInfo{ + Server: srv, + FullMethod: "/v1alpha3.Node/NodeUnprepareResources", + } + handler := func(ctx context.Context, req interface{}) (interface{}, error) { + return srv.(NodeServer).NodeUnprepareResources(ctx, req.(*NodeUnprepareResourcesRequest)) + } + return interceptor(ctx, in, info, handler) +} + +var _Node_serviceDesc = grpc.ServiceDesc{ + ServiceName: "v1alpha3.Node", + HandlerType: (*NodeServer)(nil), + Methods: []grpc.MethodDesc{ + { + MethodName: "NodePrepareResources", + Handler: _Node_NodePrepareResources_Handler, + }, + { + MethodName: "NodeUnprepareResources", + Handler: _Node_NodeUnprepareResources_Handler, + }, + }, + Streams: []grpc.StreamDesc{}, + Metadata: "api.proto", +} + +func (m *NodePrepareResourcesRequest) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *NodePrepareResourcesRequest) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *NodePrepareResourcesRequest) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if len(m.Claims) > 0 { + for iNdEx := len(m.Claims) - 1; iNdEx >= 0; iNdEx-- { + { + size, err := m.Claims[iNdEx].MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintApi(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0xa + } + } + return len(dAtA) - i, nil +} + +func (m *NodePrepareResourcesResponse) 
Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *NodePrepareResourcesResponse) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *NodePrepareResourcesResponse) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if len(m.Claims) > 0 { + for k := range m.Claims { + v := m.Claims[k] + baseI := i + if v != nil { + { + size, err := v.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintApi(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x12 + } + i -= len(k) + copy(dAtA[i:], k) + i = encodeVarintApi(dAtA, i, uint64(len(k))) + i-- + dAtA[i] = 0xa + i = encodeVarintApi(dAtA, i, uint64(baseI-i)) + i-- + dAtA[i] = 0xa + } + } + return len(dAtA) - i, nil +} + +func (m *NodePrepareResourceResponse) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *NodePrepareResourceResponse) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *NodePrepareResourceResponse) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if len(m.Error) > 0 { + i -= len(m.Error) + copy(dAtA[i:], m.Error) + i = encodeVarintApi(dAtA, i, uint64(len(m.Error))) + i-- + dAtA[i] = 0x12 + } + if len(m.CDIDevices) > 0 { + for iNdEx := len(m.CDIDevices) - 1; iNdEx >= 0; iNdEx-- { + i -= len(m.CDIDevices[iNdEx]) + copy(dAtA[i:], m.CDIDevices[iNdEx]) + i = encodeVarintApi(dAtA, i, uint64(len(m.CDIDevices[iNdEx]))) + i-- + dAtA[i] = 0xa + } + } + return len(dAtA) - i, nil +} + +func (m *NodeUnprepareResourcesRequest) Marshal() (dAtA []byte, err 
error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *NodeUnprepareResourcesRequest) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *NodeUnprepareResourcesRequest) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if len(m.Claims) > 0 { + for iNdEx := len(m.Claims) - 1; iNdEx >= 0; iNdEx-- { + { + size, err := m.Claims[iNdEx].MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintApi(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0xa + } + } + return len(dAtA) - i, nil +} + +func (m *NodeUnprepareResourcesResponse) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *NodeUnprepareResourcesResponse) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *NodeUnprepareResourcesResponse) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if len(m.Claims) > 0 { + for k := range m.Claims { + v := m.Claims[k] + baseI := i + if v != nil { + { + size, err := v.MarshalToSizedBuffer(dAtA[:i]) + if err != nil { + return 0, err + } + i -= size + i = encodeVarintApi(dAtA, i, uint64(size)) + } + i-- + dAtA[i] = 0x12 + } + i -= len(k) + copy(dAtA[i:], k) + i = encodeVarintApi(dAtA, i, uint64(len(k))) + i-- + dAtA[i] = 0xa + i = encodeVarintApi(dAtA, i, uint64(baseI-i)) + i-- + dAtA[i] = 0xa + } + } + return len(dAtA) - i, nil +} + +func (m *NodeUnprepareResourceResponse) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + 
return dAtA[:n], nil +} + +func (m *NodeUnprepareResourceResponse) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *NodeUnprepareResourceResponse) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if len(m.Error) > 0 { + i -= len(m.Error) + copy(dAtA[i:], m.Error) + i = encodeVarintApi(dAtA, i, uint64(len(m.Error))) + i-- + dAtA[i] = 0xa + } + return len(dAtA) - i, nil +} + +func (m *Claim) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalToSizedBuffer(dAtA[:size]) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *Claim) MarshalTo(dAtA []byte) (int, error) { + size := m.Size() + return m.MarshalToSizedBuffer(dAtA[:size]) +} + +func (m *Claim) MarshalToSizedBuffer(dAtA []byte) (int, error) { + i := len(dAtA) + _ = i + var l int + _ = l + if len(m.ResourceHandle) > 0 { + i -= len(m.ResourceHandle) + copy(dAtA[i:], m.ResourceHandle) + i = encodeVarintApi(dAtA, i, uint64(len(m.ResourceHandle))) + i-- + dAtA[i] = 0x22 + } + if len(m.Name) > 0 { + i -= len(m.Name) + copy(dAtA[i:], m.Name) + i = encodeVarintApi(dAtA, i, uint64(len(m.Name))) + i-- + dAtA[i] = 0x1a + } + if len(m.Uid) > 0 { + i -= len(m.Uid) + copy(dAtA[i:], m.Uid) + i = encodeVarintApi(dAtA, i, uint64(len(m.Uid))) + i-- + dAtA[i] = 0x12 + } + if len(m.Namespace) > 0 { + i -= len(m.Namespace) + copy(dAtA[i:], m.Namespace) + i = encodeVarintApi(dAtA, i, uint64(len(m.Namespace))) + i-- + dAtA[i] = 0xa + } + return len(dAtA) - i, nil +} + +func encodeVarintApi(dAtA []byte, offset int, v uint64) int { + offset -= sovApi(v) + base := offset + for v >= 1<<7 { + dAtA[offset] = uint8(v&0x7f | 0x80) + v >>= 7 + offset++ + } + dAtA[offset] = uint8(v) + return base +} +func (m *NodePrepareResourcesRequest) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + if len(m.Claims) > 0 { + for _, e := range 
m.Claims { + l = e.Size() + n += 1 + l + sovApi(uint64(l)) + } + } + return n +} + +func (m *NodePrepareResourcesResponse) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + if len(m.Claims) > 0 { + for k, v := range m.Claims { + _ = k + _ = v + l = 0 + if v != nil { + l = v.Size() + l += 1 + sovApi(uint64(l)) + } + mapEntrySize := 1 + len(k) + sovApi(uint64(len(k))) + l + n += mapEntrySize + 1 + sovApi(uint64(mapEntrySize)) + } + } + return n +} + +func (m *NodePrepareResourceResponse) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + if len(m.CDIDevices) > 0 { + for _, s := range m.CDIDevices { + l = len(s) + n += 1 + l + sovApi(uint64(l)) + } + } + l = len(m.Error) + if l > 0 { + n += 1 + l + sovApi(uint64(l)) + } + return n +} + +func (m *NodeUnprepareResourcesRequest) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + if len(m.Claims) > 0 { + for _, e := range m.Claims { + l = e.Size() + n += 1 + l + sovApi(uint64(l)) + } + } + return n +} + +func (m *NodeUnprepareResourcesResponse) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + if len(m.Claims) > 0 { + for k, v := range m.Claims { + _ = k + _ = v + l = 0 + if v != nil { + l = v.Size() + l += 1 + sovApi(uint64(l)) + } + mapEntrySize := 1 + len(k) + sovApi(uint64(len(k))) + l + n += mapEntrySize + 1 + sovApi(uint64(mapEntrySize)) + } + } + return n +} + +func (m *NodeUnprepareResourceResponse) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + l = len(m.Error) + if l > 0 { + n += 1 + l + sovApi(uint64(l)) + } + return n +} + +func (m *Claim) Size() (n int) { + if m == nil { + return 0 + } + var l int + _ = l + l = len(m.Namespace) + if l > 0 { + n += 1 + l + sovApi(uint64(l)) + } + l = len(m.Uid) + if l > 0 { + n += 1 + l + sovApi(uint64(l)) + } + l = len(m.Name) + if l > 0 { + n += 1 + l + sovApi(uint64(l)) + } + l = len(m.ResourceHandle) + if l > 0 { + n += 1 + l + sovApi(uint64(l)) + } + return n +} + +func 
sovApi(x uint64) (n int) { + return (math_bits.Len64(x|1) + 6) / 7 +} +func sozApi(x uint64) (n int) { + return sovApi(uint64((x << 1) ^ uint64((int64(x) >> 63)))) +} +func (this *NodePrepareResourcesRequest) String() string { + if this == nil { + return "nil" + } + repeatedStringForClaims := "[]*Claim{" + for _, f := range this.Claims { + repeatedStringForClaims += strings.Replace(f.String(), "Claim", "Claim", 1) + "," + } + repeatedStringForClaims += "}" + s := strings.Join([]string{`&NodePrepareResourcesRequest{`, + `Claims:` + repeatedStringForClaims + `,`, + `}`, + }, "") + return s +} +func (this *NodePrepareResourcesResponse) String() string { + if this == nil { + return "nil" + } + keysForClaims := make([]string, 0, len(this.Claims)) + for k := range this.Claims { + keysForClaims = append(keysForClaims, k) + } + github_com_gogo_protobuf_sortkeys.Strings(keysForClaims) + mapStringForClaims := "map[string]*NodePrepareResourceResponse{" + for _, k := range keysForClaims { + mapStringForClaims += fmt.Sprintf("%v: %v,", k, this.Claims[k]) + } + mapStringForClaims += "}" + s := strings.Join([]string{`&NodePrepareResourcesResponse{`, + `Claims:` + mapStringForClaims + `,`, + `}`, + }, "") + return s +} +func (this *NodePrepareResourceResponse) String() string { + if this == nil { + return "nil" + } + s := strings.Join([]string{`&NodePrepareResourceResponse{`, + `CDIDevices:` + fmt.Sprintf("%v", this.CDIDevices) + `,`, + `Error:` + fmt.Sprintf("%v", this.Error) + `,`, + `}`, + }, "") + return s +} +func (this *NodeUnprepareResourcesRequest) String() string { + if this == nil { + return "nil" + } + repeatedStringForClaims := "[]*Claim{" + for _, f := range this.Claims { + repeatedStringForClaims += strings.Replace(f.String(), "Claim", "Claim", 1) + "," + } + repeatedStringForClaims += "}" + s := strings.Join([]string{`&NodeUnprepareResourcesRequest{`, + `Claims:` + repeatedStringForClaims + `,`, + `}`, + }, "") + return s +} +func (this 
*NodeUnprepareResourcesResponse) String() string { + if this == nil { + return "nil" + } + keysForClaims := make([]string, 0, len(this.Claims)) + for k := range this.Claims { + keysForClaims = append(keysForClaims, k) + } + github_com_gogo_protobuf_sortkeys.Strings(keysForClaims) + mapStringForClaims := "map[string]*NodeUnprepareResourceResponse{" + for _, k := range keysForClaims { + mapStringForClaims += fmt.Sprintf("%v: %v,", k, this.Claims[k]) + } + mapStringForClaims += "}" + s := strings.Join([]string{`&NodeUnprepareResourcesResponse{`, + `Claims:` + mapStringForClaims + `,`, + `}`, + }, "") + return s +} +func (this *NodeUnprepareResourceResponse) String() string { + if this == nil { + return "nil" + } + s := strings.Join([]string{`&NodeUnprepareResourceResponse{`, + `Error:` + fmt.Sprintf("%v", this.Error) + `,`, + `}`, + }, "") + return s +} +func (this *Claim) String() string { + if this == nil { + return "nil" + } + s := strings.Join([]string{`&Claim{`, + `Namespace:` + fmt.Sprintf("%v", this.Namespace) + `,`, + `Uid:` + fmt.Sprintf("%v", this.Uid) + `,`, + `Name:` + fmt.Sprintf("%v", this.Name) + `,`, + `ResourceHandle:` + fmt.Sprintf("%v", this.ResourceHandle) + `,`, + `}`, + }, "") + return s +} +func valueToStringApi(v interface{}) string { + rv := reflect.ValueOf(v) + if rv.IsNil() { + return "nil" + } + pv := reflect.Indirect(rv).Interface() + return fmt.Sprintf("*%v", pv) +} +func (m *NodePrepareResourcesRequest) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowApi + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: NodePrepareResourcesRequest: wiretype end group for non-group") + } + if fieldNum 
<= 0 { + return fmt.Errorf("proto: NodePrepareResourcesRequest: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Claims", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowApi + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthApi + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthApi + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Claims = append(m.Claims, &Claim{}) + if err := m.Claims[len(m.Claims)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipApi(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthApi + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *NodePrepareResourcesResponse) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowApi + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: NodePrepareResourcesResponse: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: NodePrepareResourcesResponse: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field 
Claims", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowApi + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthApi + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthApi + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if m.Claims == nil { + m.Claims = make(map[string]*NodePrepareResourceResponse) + } + var mapkey string + var mapvalue *NodePrepareResourceResponse + for iNdEx < postIndex { + entryPreIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowApi + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + if fieldNum == 1 { + var stringLenmapkey uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowApi + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLenmapkey |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLenmapkey := int(stringLenmapkey) + if intStringLenmapkey < 0 { + return ErrInvalidLengthApi + } + postStringIndexmapkey := iNdEx + intStringLenmapkey + if postStringIndexmapkey < 0 { + return ErrInvalidLengthApi + } + if postStringIndexmapkey > l { + return io.ErrUnexpectedEOF + } + mapkey = string(dAtA[iNdEx:postStringIndexmapkey]) + iNdEx = postStringIndexmapkey + } else if fieldNum == 2 { + var mapmsglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowApi + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + mapmsglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if mapmsglen < 0 { + return ErrInvalidLengthApi + } + postmsgIndex := iNdEx + mapmsglen + 
if postmsgIndex < 0 { + return ErrInvalidLengthApi + } + if postmsgIndex > l { + return io.ErrUnexpectedEOF + } + mapvalue = &NodePrepareResourceResponse{} + if err := mapvalue.Unmarshal(dAtA[iNdEx:postmsgIndex]); err != nil { + return err + } + iNdEx = postmsgIndex + } else { + iNdEx = entryPreIndex + skippy, err := skipApi(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthApi + } + if (iNdEx + skippy) > postIndex { + return io.ErrUnexpectedEOF + } + iNdEx += skippy + } + } + m.Claims[mapkey] = mapvalue + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipApi(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthApi + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *NodePrepareResourceResponse) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowApi + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: NodePrepareResourceResponse: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: NodePrepareResourceResponse: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field CDIDevices", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowApi + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + 
if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthApi + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthApi + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.CDIDevices = append(m.CDIDevices, string(dAtA[iNdEx:postIndex])) + iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Error", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowApi + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthApi + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthApi + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Error = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipApi(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthApi + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *NodeUnprepareResourcesRequest) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowApi + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: NodeUnprepareResourcesRequest: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: 
NodeUnprepareResourcesRequest: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Claims", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowApi + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthApi + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthApi + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Claims = append(m.Claims, &Claim{}) + if err := m.Claims[len(m.Claims)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipApi(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthApi + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *NodeUnprepareResourcesResponse) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowApi + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: NodeUnprepareResourcesResponse: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: NodeUnprepareResourcesResponse: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Claims", wireType) + } + var 
msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowApi + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthApi + } + postIndex := iNdEx + msglen + if postIndex < 0 { + return ErrInvalidLengthApi + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + if m.Claims == nil { + m.Claims = make(map[string]*NodeUnprepareResourceResponse) + } + var mapkey string + var mapvalue *NodeUnprepareResourceResponse + for iNdEx < postIndex { + entryPreIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowApi + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + if fieldNum == 1 { + var stringLenmapkey uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowApi + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLenmapkey |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLenmapkey := int(stringLenmapkey) + if intStringLenmapkey < 0 { + return ErrInvalidLengthApi + } + postStringIndexmapkey := iNdEx + intStringLenmapkey + if postStringIndexmapkey < 0 { + return ErrInvalidLengthApi + } + if postStringIndexmapkey > l { + return io.ErrUnexpectedEOF + } + mapkey = string(dAtA[iNdEx:postStringIndexmapkey]) + iNdEx = postStringIndexmapkey + } else if fieldNum == 2 { + var mapmsglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowApi + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + mapmsglen |= int(b&0x7F) << shift + if b < 0x80 { + break + } + } + if mapmsglen < 0 { + return ErrInvalidLengthApi + } + postmsgIndex := iNdEx + mapmsglen + if postmsgIndex < 0 { + 
return ErrInvalidLengthApi + } + if postmsgIndex > l { + return io.ErrUnexpectedEOF + } + mapvalue = &NodeUnprepareResourceResponse{} + if err := mapvalue.Unmarshal(dAtA[iNdEx:postmsgIndex]); err != nil { + return err + } + iNdEx = postmsgIndex + } else { + iNdEx = entryPreIndex + skippy, err := skipApi(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthApi + } + if (iNdEx + skippy) > postIndex { + return io.ErrUnexpectedEOF + } + iNdEx += skippy + } + } + m.Claims[mapkey] = mapvalue + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipApi(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthApi + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *NodeUnprepareResourceResponse) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowApi + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: NodeUnprepareResourceResponse: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: NodeUnprepareResourceResponse: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Error", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowApi + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break 
+ } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthApi + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthApi + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Error = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipApi(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthApi + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *Claim) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowApi + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: Claim: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: Claim: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Namespace", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowApi + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthApi + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthApi + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Namespace = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 2: + if wireType != 2 { 
+ return fmt.Errorf("proto: wrong wireType = %d for field Uid", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowApi + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthApi + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthApi + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Uid = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 3: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Name", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowApi + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthApi + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthApi + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Name = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 4: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field ResourceHandle", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowApi + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= uint64(b&0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthApi + } + postIndex := iNdEx + intStringLen + if postIndex < 0 { + return ErrInvalidLengthApi + } + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.ResourceHandle = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + 
default: + iNdEx = preIndex + skippy, err := skipApi(dAtA[iNdEx:]) + if err != nil { + return err + } + if (skippy < 0) || (iNdEx+skippy) < 0 { + return ErrInvalidLengthApi + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func skipApi(dAtA []byte) (n int, err error) { + l := len(dAtA) + iNdEx := 0 + depth := 0 + for iNdEx < l { + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return 0, ErrIntOverflowApi + } + if iNdEx >= l { + return 0, io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= (uint64(b) & 0x7F) << shift + if b < 0x80 { + break + } + } + wireType := int(wire & 0x7) + switch wireType { + case 0: + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return 0, ErrIntOverflowApi + } + if iNdEx >= l { + return 0, io.ErrUnexpectedEOF + } + iNdEx++ + if dAtA[iNdEx-1] < 0x80 { + break + } + } + case 1: + iNdEx += 8 + case 2: + var length int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return 0, ErrIntOverflowApi + } + if iNdEx >= l { + return 0, io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + length |= (int(b) & 0x7F) << shift + if b < 0x80 { + break + } + } + if length < 0 { + return 0, ErrInvalidLengthApi + } + iNdEx += length + case 3: + depth++ + case 4: + if depth == 0 { + return 0, ErrUnexpectedEndOfGroupApi + } + depth-- + case 5: + iNdEx += 4 + default: + return 0, fmt.Errorf("proto: illegal wireType %d", wireType) + } + if iNdEx < 0 { + return 0, ErrInvalidLengthApi + } + if depth == 0 { + return iNdEx, nil + } + } + return 0, io.ErrUnexpectedEOF +} + +var ( + ErrInvalidLengthApi = fmt.Errorf("proto: negative length found during unmarshaling") + ErrIntOverflowApi = fmt.Errorf("proto: integer overflow") + ErrUnexpectedEndOfGroupApi = fmt.Errorf("proto: unexpected end of group") +) diff --git a/cluster-autoscaler/vendor/k8s.io/kubelet/pkg/apis/dra/v1alpha3/api.proto 
b/cluster-autoscaler/vendor/k8s.io/kubelet/pkg/apis/dra/v1alpha3/api.proto new file mode 100644 index 000000000000..567842711bef --- /dev/null +++ b/cluster-autoscaler/vendor/k8s.io/kubelet/pkg/apis/dra/v1alpha3/api.proto @@ -0,0 +1,103 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +// To regenerate api.pb.go run `hack/update-codegen.sh protobindings` + +syntax = "proto3"; + +package v1alpha3; +option go_package = "k8s.io/kubelet/pkg/apis/dra/v1alpha3"; + +import "github.com/gogo/protobuf/gogoproto/gogo.proto"; + +option (gogoproto.goproto_stringer_all) = false; +option (gogoproto.stringer_all) = true; +option (gogoproto.goproto_getters_all) = true; +option (gogoproto.marshaler_all) = true; +option (gogoproto.sizer_all) = true; +option (gogoproto.unmarshaler_all) = true; +option (gogoproto.goproto_unrecognized_all) = false; + +service Node { + // NodePrepareResources prepares several ResourceClaims + // for use on the node. If an error is returned, the + // response is ignored. Failures for individual claims + // can be reported inside NodePrepareResourcesResponse. + rpc NodePrepareResources (NodePrepareResourcesRequest) + returns (NodePrepareResourcesResponse) {} + + // NodeUnprepareResources is the opposite of NodePrepareResources. 
+ // The same error handling rules apply. + rpc NodeUnprepareResources (NodeUnprepareResourcesRequest) + returns (NodeUnprepareResourcesResponse) {} +} + +message NodePrepareResourcesRequest { + // The list of ResourceClaims that are to be prepared. + repeated Claim claims = 1; +} + +message NodePrepareResourcesResponse { + // The ResourceClaims for which preparation was done + // or attempted, with claim_uid as key. + // + // It is an error if some claim listed in NodePrepareResourcesRequest + // does not get prepared. NodePrepareResources + // will be called again for those that are missing. + map<string, NodePrepareResourceResponse> claims = 1; +} + +message NodePrepareResourceResponse { + // These are the additional devices that kubelet must + // make available via the container runtime. A resource + // may have zero or more devices. + repeated string cdi_devices = 1 [(gogoproto.customname) = "CDIDevices"]; + // If non-empty, preparing the ResourceClaim failed. + // cdi_devices is ignored in that case. + string error = 2; +} + +message NodeUnprepareResourcesRequest { + // The list of ResourceClaims that are to be unprepared. + repeated Claim claims = 1; +} + +message NodeUnprepareResourcesResponse { + // The ResourceClaims for which preparation was reverted. + // The same rules as for NodePrepareResourcesResponse.claims + // apply. + map<string, NodeUnprepareResourceResponse> claims = 1; +} + +message NodeUnprepareResourceResponse { + // If non-empty, unpreparing the ResourceClaim failed. + string error = 1; +} + +message Claim { + // The ResourceClaim namespace (ResourceClaim.meta.Namespace). + // This field is REQUIRED. + string namespace = 1; + // The UID of the Resource claim (ResourceClaim.meta.UUID). + // This field is REQUIRED. + string uid = 2; + // The name of the Resource claim (ResourceClaim.meta.Name) + // This field is REQUIRED. + string name = 3; + // Resource handle (AllocationResult.ResourceHandles[*].Data) + // This field is REQUIRED. 
+ string resource_handle = 4; +} diff --git a/cluster-autoscaler/vendor/k8s.io/kubelet/pkg/apis/stats/v1alpha1/types.go b/cluster-autoscaler/vendor/k8s.io/kubelet/pkg/apis/stats/v1alpha1/types.go index 5e75fefe53ff..f201ce361d44 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubelet/pkg/apis/stats/v1alpha1/types.go +++ b/cluster-autoscaler/vendor/k8s.io/kubelet/pkg/apis/stats/v1alpha1/types.go @@ -59,6 +59,9 @@ type NodeStats struct { // Stats about the rlimit of system. // +optional Rlimit *RlimitStats `json:"rlimit,omitempty"` + // Stats pertaining to swap resources. This is reported to non-windows systems only. + // +optional + Swap *SwapStats `json:"swap,omitempty"` } // RlimitStats are stats rlimit of OS. @@ -131,6 +134,9 @@ type PodStats struct { // ProcessStats pertaining to processes. // +optional ProcessStats *ProcessStats `json:"process_stats,omitempty"` + // Stats pertaining to swap resources. This is reported to non-windows systems only. + // +optional + Swap *SwapStats `json:"swap,omitempty"` } // ContainerStats holds container-level unprocessed sample stats. @@ -159,6 +165,9 @@ type ContainerStats struct { // +patchMergeKey=name // +patchStrategy=merge UserDefinedMetrics []UserDefinedMetric `json:"userDefinedMetrics,omitempty" patchStrategy:"merge" patchMergeKey:"name"` + // Stats pertaining to swap resources. This is reported to non-windows systems only. + // +optional + Swap *SwapStats `json:"swap,omitempty"` } // PodReference contains enough information to locate the referenced pod. @@ -237,6 +246,19 @@ type MemoryStats struct { MajorPageFaults *uint64 `json:"majorPageFaults,omitempty"` } +// SwapStats contains data about memory usage +type SwapStats struct { + // The time at which these stats were updated. + Time metav1.Time `json:"time"` + // Available swap memory for use. This is defined as the <swap-limit> - <current-swap-usage>. + // If swap limit is undefined, this value is omitted. 
+ // +optional + SwapAvailableBytes *uint64 `json:"swapAvailableBytes,omitempty"` + // Total swap memory in use. + // +optional + SwapUsageBytes *uint64 `json:"swapUsageBytes,omitempty"` +} + // AcceleratorStats contains stats for accelerators attached to the container. type AcceleratorStats struct { // Make of the accelerator (nvidia, amd, google etc.) diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cri/streaming/.import-restrictions b/cluster-autoscaler/vendor/k8s.io/kubelet/pkg/cri/streaming/.import-restrictions similarity index 100% rename from cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cri/streaming/.import-restrictions rename to cluster-autoscaler/vendor/k8s.io/kubelet/pkg/cri/streaming/.import-restrictions diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cri/streaming/errors.go b/cluster-autoscaler/vendor/k8s.io/kubelet/pkg/cri/streaming/errors.go similarity index 100% rename from cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cri/streaming/errors.go rename to cluster-autoscaler/vendor/k8s.io/kubelet/pkg/cri/streaming/errors.go diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cri/streaming/portforward/constants.go b/cluster-autoscaler/vendor/k8s.io/kubelet/pkg/cri/streaming/portforward/constants.go similarity index 100% rename from cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cri/streaming/portforward/constants.go rename to cluster-autoscaler/vendor/k8s.io/kubelet/pkg/cri/streaming/portforward/constants.go diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cri/streaming/portforward/httpstream.go b/cluster-autoscaler/vendor/k8s.io/kubelet/pkg/cri/streaming/portforward/httpstream.go similarity index 100% rename from cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cri/streaming/portforward/httpstream.go rename to cluster-autoscaler/vendor/k8s.io/kubelet/pkg/cri/streaming/portforward/httpstream.go diff --git 
a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cri/streaming/portforward/portforward.go b/cluster-autoscaler/vendor/k8s.io/kubelet/pkg/cri/streaming/portforward/portforward.go similarity index 97% rename from cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cri/streaming/portforward/portforward.go rename to cluster-autoscaler/vendor/k8s.io/kubelet/pkg/cri/streaming/portforward/portforward.go index df0fe5a8e080..7aa668ca4dc8 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cri/streaming/portforward/portforward.go +++ b/cluster-autoscaler/vendor/k8s.io/kubelet/pkg/cri/streaming/portforward/portforward.go @@ -23,8 +23,8 @@ import ( "time" "k8s.io/apimachinery/pkg/types" + "k8s.io/apimachinery/pkg/util/httpstream/wsstream" "k8s.io/apimachinery/pkg/util/runtime" - "k8s.io/apiserver/pkg/util/wsstream" ) // PortForwarder knows how to forward content from a data stream to/from a port diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cri/streaming/portforward/websocket.go b/cluster-autoscaler/vendor/k8s.io/kubelet/pkg/cri/streaming/portforward/websocket.go similarity index 99% rename from cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cri/streaming/portforward/websocket.go rename to cluster-autoscaler/vendor/k8s.io/kubelet/pkg/cri/streaming/portforward/websocket.go index cbedb5b6c985..3700a7e22ada 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cri/streaming/portforward/websocket.go +++ b/cluster-autoscaler/vendor/k8s.io/kubelet/pkg/cri/streaming/portforward/websocket.go @@ -31,9 +31,9 @@ import ( api "k8s.io/api/core/v1" "k8s.io/apimachinery/pkg/types" + "k8s.io/apimachinery/pkg/util/httpstream/wsstream" "k8s.io/apimachinery/pkg/util/runtime" "k8s.io/apiserver/pkg/endpoints/responsewriter" - "k8s.io/apiserver/pkg/util/wsstream" ) const ( diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cri/streaming/remotecommand/attach.go 
b/cluster-autoscaler/vendor/k8s.io/kubelet/pkg/cri/streaming/remotecommand/attach.go similarity index 100% rename from cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cri/streaming/remotecommand/attach.go rename to cluster-autoscaler/vendor/k8s.io/kubelet/pkg/cri/streaming/remotecommand/attach.go diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cri/streaming/remotecommand/doc.go b/cluster-autoscaler/vendor/k8s.io/kubelet/pkg/cri/streaming/remotecommand/doc.go similarity index 100% rename from cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cri/streaming/remotecommand/doc.go rename to cluster-autoscaler/vendor/k8s.io/kubelet/pkg/cri/streaming/remotecommand/doc.go diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cri/streaming/remotecommand/exec.go b/cluster-autoscaler/vendor/k8s.io/kubelet/pkg/cri/streaming/remotecommand/exec.go similarity index 100% rename from cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cri/streaming/remotecommand/exec.go rename to cluster-autoscaler/vendor/k8s.io/kubelet/pkg/cri/streaming/remotecommand/exec.go diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cri/streaming/remotecommand/httpstream.go b/cluster-autoscaler/vendor/k8s.io/kubelet/pkg/cri/streaming/remotecommand/httpstream.go similarity index 99% rename from cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cri/streaming/remotecommand/httpstream.go rename to cluster-autoscaler/vendor/k8s.io/kubelet/pkg/cri/streaming/remotecommand/httpstream.go index 8c18b2e72471..92ab045d24b5 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cri/streaming/remotecommand/httpstream.go +++ b/cluster-autoscaler/vendor/k8s.io/kubelet/pkg/cri/streaming/remotecommand/httpstream.go @@ -29,9 +29,9 @@ import ( metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" "k8s.io/apimachinery/pkg/util/httpstream" "k8s.io/apimachinery/pkg/util/httpstream/spdy" + "k8s.io/apimachinery/pkg/util/httpstream/wsstream" 
remotecommandconsts "k8s.io/apimachinery/pkg/util/remotecommand" "k8s.io/apimachinery/pkg/util/runtime" - "k8s.io/apiserver/pkg/util/wsstream" "k8s.io/client-go/tools/remotecommand" "k8s.io/klog/v2" diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cri/streaming/remotecommand/websocket.go b/cluster-autoscaler/vendor/k8s.io/kubelet/pkg/cri/streaming/remotecommand/websocket.go similarity index 98% rename from cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cri/streaming/remotecommand/websocket.go rename to cluster-autoscaler/vendor/k8s.io/kubelet/pkg/cri/streaming/remotecommand/websocket.go index a81d2259bdac..25900888522d 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cri/streaming/remotecommand/websocket.go +++ b/cluster-autoscaler/vendor/k8s.io/kubelet/pkg/cri/streaming/remotecommand/websocket.go @@ -21,9 +21,9 @@ import ( "net/http" "time" + "k8s.io/apimachinery/pkg/util/httpstream/wsstream" "k8s.io/apimachinery/pkg/util/runtime" "k8s.io/apiserver/pkg/endpoints/responsewriter" - "k8s.io/apiserver/pkg/util/wsstream" ) const ( diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cri/streaming/request_cache.go b/cluster-autoscaler/vendor/k8s.io/kubelet/pkg/cri/streaming/request_cache.go similarity index 100% rename from cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cri/streaming/request_cache.go rename to cluster-autoscaler/vendor/k8s.io/kubelet/pkg/cri/streaming/request_cache.go diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cri/streaming/server.go b/cluster-autoscaler/vendor/k8s.io/kubelet/pkg/cri/streaming/server.go similarity index 98% rename from cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cri/streaming/server.go rename to cluster-autoscaler/vendor/k8s.io/kubelet/pkg/cri/streaming/server.go index 3e989d8aee97..fe5c22b04979 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cri/streaming/server.go +++ 
b/cluster-autoscaler/vendor/k8s.io/kubelet/pkg/cri/streaming/server.go @@ -36,8 +36,8 @@ import ( remotecommandconsts "k8s.io/apimachinery/pkg/util/remotecommand" "k8s.io/client-go/tools/remotecommand" runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1" - "k8s.io/kubernetes/pkg/kubelet/cri/streaming/portforward" - remotecommandserver "k8s.io/kubernetes/pkg/kubelet/cri/streaming/remotecommand" + "k8s.io/kubelet/pkg/cri/streaming/portforward" + remotecommandserver "k8s.io/kubelet/pkg/cri/streaming/remotecommand" ) // Server is the library interface to serve the stream requests. diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/cmd/kube-proxy/app/conntrack.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/cmd/kube-proxy/app/conntrack.go deleted file mode 100644 index 4ff3c383735a..000000000000 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/cmd/kube-proxy/app/conntrack.go +++ /dev/null @@ -1,144 +0,0 @@ -/* -Copyright 2015 The Kubernetes Authors. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -package app - -import ( - "errors" - "os" - "strconv" - "strings" - - "k8s.io/klog/v2" - "k8s.io/mount-utils" - - "k8s.io/component-helpers/node/util/sysctl" -) - -// Conntracker is an interface to the global sysctl. Descriptions of the various -// sysctl fields can be found here: -// -// https://www.kernel.org/doc/Documentation/networking/nf_conntrack-sysctl.txt -type Conntracker interface { - // SetMax adjusts nf_conntrack_max. 
- SetMax(max int) error - // SetTCPEstablishedTimeout adjusts nf_conntrack_tcp_timeout_established. - SetTCPEstablishedTimeout(seconds int) error - // SetTCPCloseWaitTimeout nf_conntrack_tcp_timeout_close_wait. - SetTCPCloseWaitTimeout(seconds int) error -} - -type realConntracker struct{} - -var errReadOnlySysFS = errors.New("readOnlySysFS") - -func (rct realConntracker) SetMax(max int) error { - if err := rct.setIntSysCtl("nf_conntrack_max", max); err != nil { - return err - } - klog.InfoS("Setting nf_conntrack_max", "nfConntrackMax", max) - - // Linux does not support writing to /sys/module/nf_conntrack/parameters/hashsize - // when the writer process is not in the initial network namespace - // (https://github.com/torvalds/linux/blob/v4.10/net/netfilter/nf_conntrack_core.c#L1795-L1796). - // Usually that's fine. But in some configurations such as with github.com/kinvolk/kubeadm-nspawn, - // kube-proxy is in another netns. - // Therefore, check if writing in hashsize is necessary and skip the writing if not. - hashsize, err := readIntStringFile("/sys/module/nf_conntrack/parameters/hashsize") - if err != nil { - return err - } - if hashsize >= (max / 4) { - return nil - } - - // sysfs is expected to be mounted as 'rw'. However, it may be - // unexpectedly mounted as 'ro' by docker because of a known docker - // issue (https://github.com/docker/docker/issues/24000). Setting - // conntrack will fail when sysfs is readonly. When that happens, we - // don't set conntrack hashsize and return a special error - // errReadOnlySysFS here. The caller should deal with - // errReadOnlySysFS differently. 
- writable, err := isSysFSWritable() - if err != nil { - return err - } - if !writable { - return errReadOnlySysFS - } - // TODO: generify this and sysctl to a new sysfs.WriteInt() - klog.InfoS("Setting conntrack hashsize", "conntrackHashsize", max/4) - return writeIntStringFile("/sys/module/nf_conntrack/parameters/hashsize", max/4) -} - -func (rct realConntracker) SetTCPEstablishedTimeout(seconds int) error { - return rct.setIntSysCtl("nf_conntrack_tcp_timeout_established", seconds) -} - -func (rct realConntracker) SetTCPCloseWaitTimeout(seconds int) error { - return rct.setIntSysCtl("nf_conntrack_tcp_timeout_close_wait", seconds) -} - -func (realConntracker) setIntSysCtl(name string, value int) error { - entry := "net/netfilter/" + name - - sys := sysctl.New() - if val, _ := sys.GetSysctl(entry); val != value && val < value { - klog.InfoS("Set sysctl", "entry", entry, "value", value) - if err := sys.SetSysctl(entry, value); err != nil { - return err - } - } - return nil -} - -// isSysFSWritable checks /proc/mounts to see whether sysfs is 'rw' or not. 
-func isSysFSWritable() (bool, error) { - const permWritable = "rw" - const sysfsDevice = "sysfs" - m := mount.New("" /* default mount path */) - mountPoints, err := m.List() - if err != nil { - klog.ErrorS(err, "Failed to list mount points") - return false, err - } - - for _, mountPoint := range mountPoints { - if mountPoint.Type != sysfsDevice { - continue - } - // Check whether sysfs is 'rw' - if len(mountPoint.Opts) > 0 && mountPoint.Opts[0] == permWritable { - return true, nil - } - klog.ErrorS(nil, "Sysfs is not writable", "mountPoint", mountPoint, "mountOptions", mountPoint.Opts) - return false, errReadOnlySysFS - } - - return false, errors.New("no sysfs mounted") -} - -func readIntStringFile(filename string) (int, error) { - b, err := os.ReadFile(filename) - if err != nil { - return -1, err - } - return strconv.Atoi(strings.TrimSpace(string(b))) -} - -func writeIntStringFile(filename string, value int) error { - return os.WriteFile(filename, []byte(strconv.Itoa(value)), 0640) -} diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/cmd/kube-proxy/app/init_windows.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/cmd/kube-proxy/app/init_windows.go deleted file mode 100644 index 32ed6dc7fe0a..000000000000 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/cmd/kube-proxy/app/init_windows.go +++ /dev/null @@ -1,46 +0,0 @@ -//go:build windows -// +build windows - -/* -Copyright 2018 The Kubernetes Authors. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. 
-*/ - -package app - -import ( - "k8s.io/kubernetes/pkg/windows/service" - - "github.com/spf13/pflag" -) - -const ( - serviceName = "kube-proxy" -) - -func initForOS(windowsService bool) error { - if windowsService { - return service.InitService(serviceName) - } - return nil -} - -func (o *Options) addOSFlags(fs *pflag.FlagSet) { - fs.BoolVar(&o.WindowsService, "windows-service", o.WindowsService, "Enable Windows Service Control Manager API integration") - fs.StringVar(&o.config.Winkernel.SourceVip, "source-vip", o.config.Winkernel.SourceVip, "The IP address of the source VIP for non-DSR.") - fs.StringVar(&o.config.Winkernel.NetworkName, "network-name", o.config.Winkernel.NetworkName, "The name of the cluster network.") - fs.BoolVar(&o.config.Winkernel.EnableDSR, "enable-dsr", o.config.Winkernel.EnableDSR, "If true make kube-proxy apply DSR policies for service VIP") - fs.StringVar(&o.config.Winkernel.RootHnsEndpointName, "root-hnsendpoint-name", "cbr0", "The name of the hns endpoint name for root namespace attached to l2bridge") - fs.BoolVar(&o.config.Winkernel.ForwardHealthCheckVip, "forward-healthcheck-vip", o.config.Winkernel.ForwardHealthCheckVip, "If true forward service VIP for health check port") -} diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/cmd/kube-proxy/app/server.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/cmd/kube-proxy/app/server.go deleted file mode 100644 index 0f13dc72dc38..000000000000 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/cmd/kube-proxy/app/server.go +++ /dev/null @@ -1,842 +0,0 @@ -/* -Copyright 2014 The Kubernetes Authors. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. 
-You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -// Package app does all of the work necessary to configure and run a -// Kubernetes app process. -package app - -import ( - goflag "flag" - "fmt" - "net" - "net/http" - "os" - "strings" - "time" - - utilnode "k8s.io/kubernetes/pkg/util/node" - - "github.com/fsnotify/fsnotify" - "github.com/spf13/cobra" - "github.com/spf13/pflag" - - v1 "k8s.io/api/core/v1" - metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" - "k8s.io/apimachinery/pkg/fields" - "k8s.io/apimachinery/pkg/labels" - "k8s.io/apimachinery/pkg/runtime" - "k8s.io/apimachinery/pkg/runtime/serializer" - "k8s.io/apimachinery/pkg/selection" - utilruntime "k8s.io/apimachinery/pkg/util/runtime" - "k8s.io/apimachinery/pkg/util/wait" - "k8s.io/apiserver/pkg/server/healthz" - "k8s.io/apiserver/pkg/server/mux" - "k8s.io/apiserver/pkg/server/routes" - utilfeature "k8s.io/apiserver/pkg/util/feature" - "k8s.io/client-go/informers" - clientset "k8s.io/client-go/kubernetes" - v1core "k8s.io/client-go/kubernetes/typed/core/v1" - "k8s.io/client-go/rest" - "k8s.io/client-go/tools/clientcmd" - clientcmdapi "k8s.io/client-go/tools/clientcmd/api" - "k8s.io/client-go/tools/events" - cliflag "k8s.io/component-base/cli/flag" - componentbaseconfig "k8s.io/component-base/config" - "k8s.io/component-base/configz" - "k8s.io/component-base/logs" - logsapi "k8s.io/component-base/logs/api/v1" - metricsfeatures "k8s.io/component-base/metrics/features" - "k8s.io/component-base/metrics/legacyregistry" - "k8s.io/component-base/metrics/prometheus/slis" - "k8s.io/component-base/version" - "k8s.io/component-base/version/verflag" - "k8s.io/klog/v2" 
- "k8s.io/kube-proxy/config/v1alpha1" - api "k8s.io/kubernetes/pkg/apis/core" - "k8s.io/kubernetes/pkg/cluster/ports" - "k8s.io/kubernetes/pkg/kubelet/qos" - "k8s.io/kubernetes/pkg/proxy" - "k8s.io/kubernetes/pkg/proxy/apis" - kubeproxyconfig "k8s.io/kubernetes/pkg/proxy/apis/config" - proxyconfigscheme "k8s.io/kubernetes/pkg/proxy/apis/config/scheme" - kubeproxyconfigv1alpha1 "k8s.io/kubernetes/pkg/proxy/apis/config/v1alpha1" - "k8s.io/kubernetes/pkg/proxy/apis/config/validation" - "k8s.io/kubernetes/pkg/proxy/config" - "k8s.io/kubernetes/pkg/proxy/healthcheck" - proxyutil "k8s.io/kubernetes/pkg/proxy/util" - "k8s.io/kubernetes/pkg/util/filesystem" - utilflag "k8s.io/kubernetes/pkg/util/flag" - utilipset "k8s.io/kubernetes/pkg/util/ipset" - utiliptables "k8s.io/kubernetes/pkg/util/iptables" - utilipvs "k8s.io/kubernetes/pkg/util/ipvs" - "k8s.io/kubernetes/pkg/util/oom" - "k8s.io/utils/exec" - netutils "k8s.io/utils/net" - "k8s.io/utils/pointer" -) - -func init() { - utilruntime.Must(metricsfeatures.AddFeatureGates(utilfeature.DefaultMutableFeatureGate)) - logsapi.AddFeatureGates(utilfeature.DefaultMutableFeatureGate) -} - -// proxyRun defines the interface to run a specified ProxyServer -type proxyRun interface { - Run() error -} - -// Options contains everything necessary to create and run a proxy server. -type Options struct { - // ConfigFile is the location of the proxy server's configuration file. - ConfigFile string - // WriteConfigTo is the path where the default configuration will be written. - WriteConfigTo string - // CleanupAndExit, when true, makes the proxy server clean up iptables and ipvs rules, then exit. - CleanupAndExit bool - // WindowsService should be set to true if kube-proxy is running as a service on Windows. - // Its corresponding flag only gets registered in Windows builds - WindowsService bool - // config is the proxy server's configuration object. 
- config *kubeproxyconfig.KubeProxyConfiguration - // watcher is used to watch on the update change of ConfigFile - watcher filesystem.FSWatcher - // proxyServer is the interface to run the proxy server - proxyServer proxyRun - // errCh is the channel that errors will be sent - errCh chan error - - // The fields below here are placeholders for flags that can't be directly mapped into - // config.KubeProxyConfiguration. - // - // TODO remove these fields once the deprecated flags are removed. - - // master is used to override the kubeconfig's URL to the apiserver. - master string - // healthzPort is the port to be used by the healthz server. - healthzPort int32 - // metricsPort is the port to be used by the metrics server. - metricsPort int32 - - // hostnameOverride, if set from the command line flag, takes precedence over the `HostnameOverride` value from the config file - hostnameOverride string -} - -// AddFlags adds flags to fs and binds them to options. -func (o *Options) AddFlags(fs *pflag.FlagSet) { - o.addOSFlags(fs) - - fs.StringVar(&o.ConfigFile, "config", o.ConfigFile, "The path to the configuration file.") - fs.StringVar(&o.WriteConfigTo, "write-config-to", o.WriteConfigTo, "If set, write the default configuration values to this file and exit.") - fs.StringVar(&o.config.ClientConnection.Kubeconfig, "kubeconfig", o.config.ClientConnection.Kubeconfig, "Path to kubeconfig file with authorization information (the master location can be overridden by the master flag).") - fs.StringVar(&o.config.ClusterCIDR, "cluster-cidr", o.config.ClusterCIDR, "The CIDR range of pods in the cluster. When configured, traffic sent to a Service cluster IP from outside this range will be masqueraded and traffic sent from pods to an external LoadBalancer IP will be directed to the respective cluster IP instead. "+ - "For dual-stack clusters, a comma-separated list is accepted with at least one CIDR per IP family (IPv4 and IPv6). 
"+ - "This parameter is ignored if a config file is specified by --config.") - fs.StringVar(&o.config.ClientConnection.ContentType, "kube-api-content-type", o.config.ClientConnection.ContentType, "Content type of requests sent to apiserver.") - fs.StringVar(&o.master, "master", o.master, "The address of the Kubernetes API server (overrides any value in kubeconfig)") - fs.StringVar(&o.hostnameOverride, "hostname-override", o.hostnameOverride, "If non-empty, will use this string as identification instead of the actual hostname.") - fs.StringVar(&o.config.IPVS.Scheduler, "ipvs-scheduler", o.config.IPVS.Scheduler, "The ipvs scheduler type when proxy mode is ipvs") - fs.StringVar(&o.config.ShowHiddenMetricsForVersion, "show-hidden-metrics-for-version", o.config.ShowHiddenMetricsForVersion, - "The previous version for which you want to show hidden metrics. "+ - "Only the previous minor version is meaningful, other values will not be allowed. "+ - "The format is ., e.g.: '1.16'. "+ - "The purpose of this format is make sure you have the opportunity to notice if the next release hides additional metrics, "+ - "rather than being surprised when they are permanently removed in the release after that. "+ - "This parameter is ignored if a config file is specified by --config.") - - fs.StringSliceVar(&o.config.IPVS.ExcludeCIDRs, "ipvs-exclude-cidrs", o.config.IPVS.ExcludeCIDRs, "A comma-separated list of CIDR's which the ipvs proxier should not touch when cleaning up IPVS rules.") - fs.StringSliceVar(&o.config.NodePortAddresses, "nodeport-addresses", o.config.NodePortAddresses, - "A string slice of values which specify the addresses to use for NodePorts. Values may be valid IP blocks (e.g. 1.2.3.0/24, 1.2.3.4/32). The default empty string slice ([]) means to use all local addresses. 
This parameter is ignored if a config file is specified by --config.") - - fs.BoolVar(&o.CleanupAndExit, "cleanup", o.CleanupAndExit, "If true cleanup iptables and ipvs rules and exit.") - - fs.Var(&utilflag.IPVar{Val: &o.config.BindAddress}, "bind-address", "The IP address for the proxy server to serve on (set to '0.0.0.0' for all IPv4 interfaces and '::' for all IPv6 interfaces). This parameter is ignored if a config file is specified by --config.") - fs.Var(&utilflag.IPPortVar{Val: &o.config.HealthzBindAddress}, "healthz-bind-address", "The IP address with port for the health check server to serve on (set to '0.0.0.0:10256' for all IPv4 interfaces and '[::]:10256' for all IPv6 interfaces). Set empty to disable. This parameter is ignored if a config file is specified by --config.") - fs.Var(&utilflag.IPPortVar{Val: &o.config.MetricsBindAddress}, "metrics-bind-address", "The IP address with port for the metrics server to serve on (set to '0.0.0.0:10249' for all IPv4 interfaces and '[::]:10249' for all IPv6 interfaces). Set empty to disable. This parameter is ignored if a config file is specified by --config.") - fs.BoolVar(&o.config.BindAddressHardFail, "bind-address-hard-fail", o.config.BindAddressHardFail, "If true kube-proxy will treat failure to bind to a port as fatal and exit") - fs.Var(utilflag.PortRangeVar{Val: &o.config.PortRange}, "proxy-port-range", "Range of host ports (beginPort-endPort, single port or beginPort+offset, inclusive) that may be consumed in order to proxy service traffic. If (unspecified, 0, or 0-0) then ports will be randomly chosen.") - fs.Var(&o.config.Mode, "proxy-mode", "Which proxy mode to use: on Linux this can be 'iptables' (default) or 'ipvs'. 
On Windows the only supported value is 'kernelspace'."+ - "This parameter is ignored if a config file is specified by --config.") - fs.Var(cliflag.NewMapStringBool(&o.config.FeatureGates), "feature-gates", "A set of key=value pairs that describe feature gates for alpha/experimental features. "+ - "Options are:\n"+strings.Join(utilfeature.DefaultFeatureGate.KnownFeatures(), "\n")+"\n"+ - "This parameter is ignored if a config file is specified by --config.") - - fs.Int32Var(&o.healthzPort, "healthz-port", o.healthzPort, "The port to bind the health check server. Use 0 to disable.") - fs.MarkDeprecated("healthz-port", "This flag is deprecated and will be removed in a future release. Please use --healthz-bind-address instead.") - fs.Int32Var(&o.metricsPort, "metrics-port", o.metricsPort, "The port to bind the metrics server. Use 0 to disable.") - fs.MarkDeprecated("metrics-port", "This flag is deprecated and will be removed in a future release. Please use --metrics-bind-address instead.") - fs.Int32Var(o.config.OOMScoreAdj, "oom-score-adj", pointer.Int32Deref(o.config.OOMScoreAdj, int32(qos.KubeProxyOOMScoreAdj)), "The oom-score-adj value for kube-proxy process. Values must be within the range [-1000, 1000]. This parameter is ignored if a config file is specified by --config.") - fs.Int32Var(o.config.IPTables.MasqueradeBit, "iptables-masquerade-bit", pointer.Int32Deref(o.config.IPTables.MasqueradeBit, 14), "If using the pure iptables proxy, the bit of the fwmark space to mark packets requiring SNAT with. 
Must be within the range [0, 31].") - fs.Int32Var(o.config.Conntrack.MaxPerCore, "conntrack-max-per-core", *o.config.Conntrack.MaxPerCore, - "Maximum number of NAT connections to track per CPU core (0 to leave the limit as-is and ignore conntrack-min).") - fs.Int32Var(o.config.Conntrack.Min, "conntrack-min", *o.config.Conntrack.Min, - "Minimum number of conntrack entries to allocate, regardless of conntrack-max-per-core (set conntrack-max-per-core=0 to leave the limit as-is).") - fs.Int32Var(&o.config.ClientConnection.Burst, "kube-api-burst", o.config.ClientConnection.Burst, "Burst to use while talking with kubernetes apiserver") - - fs.DurationVar(&o.config.IPTables.SyncPeriod.Duration, "iptables-sync-period", o.config.IPTables.SyncPeriod.Duration, "The maximum interval of how often iptables rules are refreshed (e.g. '5s', '1m', '2h22m'). Must be greater than 0.") - fs.DurationVar(&o.config.IPTables.MinSyncPeriod.Duration, "iptables-min-sync-period", o.config.IPTables.MinSyncPeriod.Duration, "The minimum interval of how often the iptables rules can be refreshed as endpoints and services change (e.g. '5s', '1m', '2h22m').") - fs.DurationVar(&o.config.IPVS.SyncPeriod.Duration, "ipvs-sync-period", o.config.IPVS.SyncPeriod.Duration, "The maximum interval of how often ipvs rules are refreshed (e.g. '5s', '1m', '2h22m'). Must be greater than 0.") - fs.DurationVar(&o.config.IPVS.MinSyncPeriod.Duration, "ipvs-min-sync-period", o.config.IPVS.MinSyncPeriod.Duration, "The minimum interval of how often the ipvs rules can be refreshed as endpoints and services change (e.g. '5s', '1m', '2h22m').") - fs.DurationVar(&o.config.IPVS.TCPTimeout.Duration, "ipvs-tcp-timeout", o.config.IPVS.TCPTimeout.Duration, "The timeout for idle IPVS TCP connections, 0 to leave as-is. (e.g. 
'5s', '1m', '2h22m').") - fs.DurationVar(&o.config.IPVS.TCPFinTimeout.Duration, "ipvs-tcpfin-timeout", o.config.IPVS.TCPFinTimeout.Duration, "The timeout for IPVS TCP connections after receiving a FIN packet, 0 to leave as-is. (e.g. '5s', '1m', '2h22m').") - fs.DurationVar(&o.config.IPVS.UDPTimeout.Duration, "ipvs-udp-timeout", o.config.IPVS.UDPTimeout.Duration, "The timeout for IPVS UDP packets, 0 to leave as-is. (e.g. '5s', '1m', '2h22m').") - fs.DurationVar(&o.config.Conntrack.TCPEstablishedTimeout.Duration, "conntrack-tcp-timeout-established", o.config.Conntrack.TCPEstablishedTimeout.Duration, "Idle timeout for established TCP connections (0 to leave as-is)") - fs.DurationVar( - &o.config.Conntrack.TCPCloseWaitTimeout.Duration, "conntrack-tcp-timeout-close-wait", - o.config.Conntrack.TCPCloseWaitTimeout.Duration, - "NAT timeout for TCP connections in the CLOSE_WAIT state") - fs.DurationVar(&o.config.ConfigSyncPeriod.Duration, "config-sync-period", o.config.ConfigSyncPeriod.Duration, "How often configuration from the apiserver is refreshed. Must be greater than 0.") - - fs.BoolVar(&o.config.IPVS.StrictARP, "ipvs-strict-arp", o.config.IPVS.StrictARP, "Enable strict ARP by setting arp_ignore to 1 and arp_announce to 2") - fs.BoolVar(&o.config.IPTables.MasqueradeAll, "masquerade-all", o.config.IPTables.MasqueradeAll, "If using the pure iptables proxy, SNAT all traffic sent via Service cluster IPs (this not commonly needed)") - fs.BoolVar(o.config.IPTables.LocalhostNodePorts, "iptables-localhost-nodeports", pointer.BoolDeref(o.config.IPTables.LocalhostNodePorts, true), "If false Kube-proxy will disable the legacy behavior of allowing NodePort services to be accessed via localhost, This only applies to iptables mode and ipv4.") - fs.BoolVar(&o.config.EnableProfiling, "profiling", o.config.EnableProfiling, "If true enables profiling via web interface on /debug/pprof handler. 
This parameter is ignored if a config file is specified by --config.") - - fs.Float32Var(&o.config.ClientConnection.QPS, "kube-api-qps", o.config.ClientConnection.QPS, "QPS to use while talking with kubernetes apiserver") - fs.Var(&o.config.DetectLocalMode, "detect-local-mode", "Mode to use to detect local traffic. This parameter is ignored if a config file is specified by --config.") - fs.StringVar(&o.config.DetectLocal.BridgeInterface, "pod-bridge-interface", o.config.DetectLocal.BridgeInterface, "A bridge interface name in the cluster. Kube-proxy considers traffic as local if originating from an interface which matches the value. This argument should be set if DetectLocalMode is set to BridgeInterface.") - fs.StringVar(&o.config.DetectLocal.InterfaceNamePrefix, "pod-interface-name-prefix", o.config.DetectLocal.InterfaceNamePrefix, "An interface prefix in the cluster. Kube-proxy considers traffic as local if originating from interfaces that match the given prefix. This argument should be set if DetectLocalMode is set to InterfaceNamePrefix.") -} - -// NewOptions returns initialized Options -func NewOptions() *Options { - return &Options{ - config: new(kubeproxyconfig.KubeProxyConfiguration), - healthzPort: ports.ProxyHealthzPort, - metricsPort: ports.ProxyStatusPort, - errCh: make(chan error), - } -} - -// Complete completes all the required options. -func (o *Options) Complete() error { - if len(o.ConfigFile) == 0 && len(o.WriteConfigTo) == 0 { - klog.InfoS("Warning, all flags other than --config, --write-config-to, and --cleanup are deprecated, please begin using a config file ASAP") - o.config.HealthzBindAddress = addressFromDeprecatedFlags(o.config.HealthzBindAddress, o.healthzPort) - o.config.MetricsBindAddress = addressFromDeprecatedFlags(o.config.MetricsBindAddress, o.metricsPort) - } - - // Load the config file here in Complete, so that Validate validates the fully-resolved config. 
- if len(o.ConfigFile) > 0 { - c, err := o.loadConfigFromFile(o.ConfigFile) - if err != nil { - return err - } - o.config = c - - if err := o.initWatcher(); err != nil { - return err - } - } - - if err := o.processHostnameOverrideFlag(); err != nil { - return err - } - - return utilfeature.DefaultMutableFeatureGate.SetFromMap(o.config.FeatureGates) -} - -// Creates a new filesystem watcher and adds watches for the config file. -func (o *Options) initWatcher() error { - fswatcher := filesystem.NewFsnotifyWatcher() - err := fswatcher.Init(o.eventHandler, o.errorHandler) - if err != nil { - return err - } - err = fswatcher.AddWatch(o.ConfigFile) - if err != nil { - return err - } - o.watcher = fswatcher - return nil -} - -func (o *Options) eventHandler(ent fsnotify.Event) { - if ent.Has(fsnotify.Write) || ent.Has(fsnotify.Rename) { - // error out when ConfigFile is updated - o.errCh <- fmt.Errorf("content of the proxy server's configuration file was updated") - return - } - o.errCh <- nil -} - -func (o *Options) errorHandler(err error) { - o.errCh <- err -} - -// processHostnameOverrideFlag processes hostname-override flag -func (o *Options) processHostnameOverrideFlag() error { - // Check if hostname-override flag is set and use value since configFile always overrides - if len(o.hostnameOverride) > 0 { - hostName := strings.TrimSpace(o.hostnameOverride) - if len(hostName) == 0 { - return fmt.Errorf("empty hostname-override is invalid") - } - o.config.HostnameOverride = strings.ToLower(hostName) - } - - return nil -} - -// Validate validates all the required options. -func (o *Options) Validate() error { - if errs := validation.Validate(o.config); len(errs) != 0 { - return errs.ToAggregate() - } - - return nil -} - -// Run runs the specified ProxyServer. 
-func (o *Options) Run() error { - defer close(o.errCh) - if len(o.WriteConfigTo) > 0 { - return o.writeConfigFile() - } - - if o.CleanupAndExit { - return cleanupAndExit() - } - - proxyServer, err := NewProxyServer(o) - if err != nil { - return err - } - - o.proxyServer = proxyServer - return o.runLoop() -} - -// runLoop will watch on the update change of the proxy server's configuration file. -// Return an error when updated -func (o *Options) runLoop() error { - if o.watcher != nil { - o.watcher.Run() - } - - // run the proxy in goroutine - go func() { - err := o.proxyServer.Run() - o.errCh <- err - }() - - for { - err := <-o.errCh - if err != nil { - return err - } - } -} - -func (o *Options) writeConfigFile() (err error) { - const mediaType = runtime.ContentTypeYAML - info, ok := runtime.SerializerInfoForMediaType(proxyconfigscheme.Codecs.SupportedMediaTypes(), mediaType) - if !ok { - return fmt.Errorf("unable to locate encoder -- %q is not a supported media type", mediaType) - } - - encoder := proxyconfigscheme.Codecs.EncoderForVersion(info.Serializer, v1alpha1.SchemeGroupVersion) - - configFile, err := os.Create(o.WriteConfigTo) - if err != nil { - return err - } - - defer func() { - ferr := configFile.Close() - if ferr != nil && err == nil { - err = ferr - } - }() - - if err = encoder.Encode(o.config, configFile); err != nil { - return err - } - - klog.InfoS("Wrote configuration", "file", o.WriteConfigTo) - - return nil -} - -// addressFromDeprecatedFlags returns server address from flags -// passed on the command line based on the following rules: -// 1. If port is 0, disable the server (e.g. set address to empty). -// 2. Otherwise, set the port portion of the config accordingly. 
-func addressFromDeprecatedFlags(addr string, port int32) string { - if port == 0 { - return "" - } - return proxyutil.AppendPortIfNeeded(addr, port) -} - -// newLenientSchemeAndCodecs returns a scheme that has only v1alpha1 registered into -// it and a CodecFactory with strict decoding disabled. -func newLenientSchemeAndCodecs() (*runtime.Scheme, *serializer.CodecFactory, error) { - lenientScheme := runtime.NewScheme() - if err := kubeproxyconfig.AddToScheme(lenientScheme); err != nil { - return nil, nil, fmt.Errorf("failed to add kube-proxy config API to lenient scheme: %v", err) - } - if err := kubeproxyconfigv1alpha1.AddToScheme(lenientScheme); err != nil { - return nil, nil, fmt.Errorf("failed to add kube-proxy config v1alpha1 API to lenient scheme: %v", err) - } - lenientCodecs := serializer.NewCodecFactory(lenientScheme, serializer.DisableStrict) - return lenientScheme, &lenientCodecs, nil -} - -// loadConfigFromFile loads the contents of file and decodes it as a -// KubeProxyConfiguration object. -func (o *Options) loadConfigFromFile(file string) (*kubeproxyconfig.KubeProxyConfiguration, error) { - data, err := os.ReadFile(file) - if err != nil { - return nil, err - } - - return o.loadConfig(data) -} - -// loadConfig decodes a serialized KubeProxyConfiguration to the internal type. -func (o *Options) loadConfig(data []byte) (*kubeproxyconfig.KubeProxyConfiguration, error) { - - configObj, gvk, err := proxyconfigscheme.Codecs.UniversalDecoder().Decode(data, nil, nil) - if err != nil { - // Try strict decoding first. If that fails decode with a lenient - // decoder, which has only v1alpha1 registered, and log a warning. - // The lenient path is to be dropped when support for v1alpha1 is dropped. 
- if !runtime.IsStrictDecodingError(err) { - return nil, fmt.Errorf("failed to decode: %w", err) - } - - _, lenientCodecs, lenientErr := newLenientSchemeAndCodecs() - if lenientErr != nil { - return nil, lenientErr - } - - configObj, gvk, lenientErr = lenientCodecs.UniversalDecoder().Decode(data, nil, nil) - if lenientErr != nil { - // Lenient decoding failed with the current version, return the - // original strict error. - return nil, fmt.Errorf("failed lenient decoding: %v", err) - } - - // Continue with the v1alpha1 object that was decoded leniently, but emit a warning. - klog.InfoS("Using lenient decoding as strict decoding failed", "err", err) - } - - proxyConfig, ok := configObj.(*kubeproxyconfig.KubeProxyConfiguration) - if !ok { - return nil, fmt.Errorf("got unexpected config type: %v", gvk) - } - return proxyConfig, nil -} - -// ApplyDefaults applies the default values to Options. -func (o *Options) ApplyDefaults(in *kubeproxyconfig.KubeProxyConfiguration) (*kubeproxyconfig.KubeProxyConfiguration, error) { - external, err := proxyconfigscheme.Scheme.ConvertToVersion(in, v1alpha1.SchemeGroupVersion) - if err != nil { - return nil, err - } - - proxyconfigscheme.Scheme.Default(external) - - internal, err := proxyconfigscheme.Scheme.ConvertToVersion(external, kubeproxyconfig.SchemeGroupVersion) - if err != nil { - return nil, err - } - - out := internal.(*kubeproxyconfig.KubeProxyConfiguration) - - return out, nil -} - -// NewProxyCommand creates a *cobra.Command object with default parameters -func NewProxyCommand() *cobra.Command { - opts := NewOptions() - - cmd := &cobra.Command{ - Use: "kube-proxy", - Long: `The Kubernetes network proxy runs on each node. This -reflects services as defined in the Kubernetes API on each node and can do simple -TCP, UDP, and SCTP stream forwarding or round robin TCP, UDP, and SCTP forwarding across a set of backends. 
-Service cluster IPs and ports are currently found through Docker-links-compatible -environment variables specifying ports opened by the service proxy. There is an optional -addon that provides cluster DNS for these cluster IPs. The user must create a service -with the apiserver API to configure the proxy.`, - RunE: func(cmd *cobra.Command, args []string) error { - verflag.PrintAndExitIfRequested() - cliflag.PrintFlags(cmd.Flags()) - - if err := initForOS(opts.WindowsService); err != nil { - return fmt.Errorf("failed os init: %w", err) - } - - if err := opts.Complete(); err != nil { - return fmt.Errorf("failed complete: %w", err) - } - - if err := opts.Validate(); err != nil { - return fmt.Errorf("failed validate: %w", err) - } - // add feature enablement metrics - utilfeature.DefaultMutableFeatureGate.AddMetrics() - if err := opts.Run(); err != nil { - klog.ErrorS(err, "Error running ProxyServer") - return err - } - - return nil - }, - Args: func(cmd *cobra.Command, args []string) error { - for _, arg := range args { - if len(arg) > 0 { - return fmt.Errorf("%q does not take any arguments, got %q", cmd.CommandPath(), args) - } - } - return nil - }, - } - - var err error - opts.config, err = opts.ApplyDefaults(opts.config) - if err != nil { - klog.ErrorS(err, "Unable to create flag defaults") - // ACTION REQUIRED: Exit code changed from 255 to 1 - os.Exit(1) - } - - fs := cmd.Flags() - opts.AddFlags(fs) - fs.AddGoFlagSet(goflag.CommandLine) // for --boot-id-file and --machine-id-file - - _ = cmd.MarkFlagFilename("config", "yaml", "yml", "json") - - return cmd -} - -// ProxyServer represents all the parameters required to start the Kubernetes proxy server. All -// fields are required. 
-type ProxyServer struct { - Client clientset.Interface - EventClient v1core.EventsGetter - IptInterface utiliptables.Interface - IpvsInterface utilipvs.Interface - IpsetInterface utilipset.Interface - execer exec.Interface - Proxier proxy.Provider - Broadcaster events.EventBroadcaster - Recorder events.EventRecorder - ConntrackConfiguration kubeproxyconfig.KubeProxyConntrackConfiguration - Conntracker Conntracker // if nil, ignored - ProxyMode kubeproxyconfig.ProxyMode - NodeRef *v1.ObjectReference - MetricsBindAddress string - BindAddressHardFail bool - EnableProfiling bool - OOMScoreAdj *int32 - ConfigSyncPeriod time.Duration - HealthzServer healthcheck.ProxierHealthUpdater - localDetectorMode kubeproxyconfig.LocalMode -} - -// createClients creates a kube client and an event client from the given config and masterOverride. -// TODO remove masterOverride when CLI flags are removed. -func createClients(config componentbaseconfig.ClientConnectionConfiguration, masterOverride string) (clientset.Interface, v1core.EventsGetter, error) { - var kubeConfig *rest.Config - var err error - - if len(config.Kubeconfig) == 0 && len(masterOverride) == 0 { - klog.InfoS("Neither kubeconfig file nor master URL was specified, falling back to in-cluster config") - kubeConfig, err = rest.InClusterConfig() - } else { - // This creates a client, first loading any specified kubeconfig - // file, and then overriding the Master flag, if non-empty. 
- kubeConfig, err = clientcmd.NewNonInteractiveDeferredLoadingClientConfig( - &clientcmd.ClientConfigLoadingRules{ExplicitPath: config.Kubeconfig}, - &clientcmd.ConfigOverrides{ClusterInfo: clientcmdapi.Cluster{Server: masterOverride}}).ClientConfig() - } - if err != nil { - return nil, nil, err - } - - kubeConfig.AcceptContentTypes = config.AcceptContentTypes - kubeConfig.ContentType = config.ContentType - kubeConfig.QPS = config.QPS - kubeConfig.Burst = int(config.Burst) - - client, err := clientset.NewForConfig(kubeConfig) - if err != nil { - return nil, nil, err - } - - eventClient, err := clientset.NewForConfig(kubeConfig) - if err != nil { - return nil, nil, err - } - - return client, eventClient.CoreV1(), nil -} - -func serveHealthz(hz healthcheck.ProxierHealthUpdater, errCh chan error) { - if hz == nil { - return - } - - fn := func() { - err := hz.Run() - if err != nil { - klog.ErrorS(err, "Healthz server failed") - if errCh != nil { - errCh <- fmt.Errorf("healthz server failed: %v", err) - // if in hardfail mode, never retry again - blockCh := make(chan error) - <-blockCh - } - } else { - klog.ErrorS(nil, "Healthz server returned without error") - } - } - go wait.Until(fn, 5*time.Second, wait.NeverStop) -} - -func serveMetrics(bindAddress string, proxyMode kubeproxyconfig.ProxyMode, enableProfiling bool, errCh chan error) { - if len(bindAddress) == 0 { - return - } - - proxyMux := mux.NewPathRecorderMux("kube-proxy") - healthz.InstallHandler(proxyMux) - if utilfeature.DefaultFeatureGate.Enabled(metricsfeatures.ComponentSLIs) { - slis.SLIMetricsWithReset{}.Install(proxyMux) - } - proxyMux.HandleFunc("/proxyMode", func(w http.ResponseWriter, r *http.Request) { - w.Header().Set("Content-Type", "text/plain; charset=utf-8") - w.Header().Set("X-Content-Type-Options", "nosniff") - fmt.Fprintf(w, "%s", proxyMode) - }) - - proxyMux.Handle("/metrics", legacyregistry.Handler()) - - if enableProfiling { - routes.Profiling{}.Install(proxyMux) - 
routes.DebugFlags{}.Install(proxyMux, "v", routes.StringFlagPutHandler(logs.GlogSetter)) - } - - configz.InstallHandler(proxyMux) - - fn := func() { - err := http.ListenAndServe(bindAddress, proxyMux) - if err != nil { - err = fmt.Errorf("starting metrics server failed: %v", err) - utilruntime.HandleError(err) - if errCh != nil { - errCh <- err - // if in hardfail mode, never retry again - blockCh := make(chan error) - <-blockCh - } - } - } - go wait.Until(fn, 5*time.Second, wait.NeverStop) -} - -// Run runs the specified ProxyServer. This should never exit (unless CleanupAndExit is set). -// TODO: At the moment, Run() cannot return a nil error, otherwise its caller will never exit. Update callers of Run to handle nil errors. -func (s *ProxyServer) Run() error { - // To help debugging, immediately log version - klog.InfoS("Version info", "version", version.Get()) - - klog.InfoS("Golang settings", "GOGC", os.Getenv("GOGC"), "GOMAXPROCS", os.Getenv("GOMAXPROCS"), "GOTRACEBACK", os.Getenv("GOTRACEBACK")) - - // TODO(vmarmol): Use container config for this. - var oomAdjuster *oom.OOMAdjuster - if s.OOMScoreAdj != nil { - oomAdjuster = oom.NewOOMAdjuster() - if err := oomAdjuster.ApplyOOMScoreAdj(0, int(*s.OOMScoreAdj)); err != nil { - klog.V(2).InfoS("Failed to apply OOMScore", "err", err) - } - } - - if s.Broadcaster != nil && s.EventClient != nil { - stopCh := make(chan struct{}) - s.Broadcaster.StartRecordingToSink(stopCh) - } - - // TODO(thockin): make it possible for healthz and metrics to be on the same port. 
- - var errCh chan error - if s.BindAddressHardFail { - errCh = make(chan error) - } - - // Start up a healthz server if requested - serveHealthz(s.HealthzServer, errCh) - - // Start up a metrics server if requested - serveMetrics(s.MetricsBindAddress, s.ProxyMode, s.EnableProfiling, errCh) - - // Tune conntrack, if requested - // Conntracker is always nil for Windows - if s.Conntracker != nil { - max, err := getConntrackMax(s.ConntrackConfiguration) - if err != nil { - return err - } - if max > 0 { - err := s.Conntracker.SetMax(max) - if err != nil { - if err != errReadOnlySysFS { - return err - } - // errReadOnlySysFS is caused by a known docker issue (https://github.com/docker/docker/issues/24000), - // and the only remediation we know is to restart the docker daemon. - // Here we'll send a node event with a specific reason and message; the - // administrator should decide whether and how to handle this issue, - // e.g. whether to drain the node and restart docker. Occurs in other container runtimes - // as well. - // TODO(random-liu): Remove this when the docker bug is fixed. 
- const message = "CRI error: /sys is read-only: " + - "cannot modify conntrack limits, problems may arise later (If running Docker, see docker issue #24000)" - s.Recorder.Eventf(s.NodeRef, nil, api.EventTypeWarning, err.Error(), "StartKubeProxy", message) - } - } - - if s.ConntrackConfiguration.TCPEstablishedTimeout != nil && s.ConntrackConfiguration.TCPEstablishedTimeout.Duration > 0 { - timeout := int(s.ConntrackConfiguration.TCPEstablishedTimeout.Duration / time.Second) - if err := s.Conntracker.SetTCPEstablishedTimeout(timeout); err != nil { - return err - } - } - - if s.ConntrackConfiguration.TCPCloseWaitTimeout != nil && s.ConntrackConfiguration.TCPCloseWaitTimeout.Duration > 0 { - timeout := int(s.ConntrackConfiguration.TCPCloseWaitTimeout.Duration / time.Second) - if err := s.Conntracker.SetTCPCloseWaitTimeout(timeout); err != nil { - return err - } - } - } - - noProxyName, err := labels.NewRequirement(apis.LabelServiceProxyName, selection.DoesNotExist, nil) - if err != nil { - return err - } - - noHeadlessEndpoints, err := labels.NewRequirement(v1.IsHeadlessService, selection.DoesNotExist, nil) - if err != nil { - return err - } - - labelSelector := labels.NewSelector() - labelSelector = labelSelector.Add(*noProxyName, *noHeadlessEndpoints) - - // Make informers that filter out objects that want a non-default service proxy. - informerFactory := informers.NewSharedInformerFactoryWithOptions(s.Client, s.ConfigSyncPeriod, - informers.WithTweakListOptions(func(options *metav1.ListOptions) { - options.LabelSelector = labelSelector.String() - })) - - // Create configs (i.e. Watches for Services and EndpointSlices) - // Note: RegisterHandler() calls need to happen before creation of Sources because sources - // only notify on changes, and the initial update (on process start) may be lost if no handlers - // are registered yet. 
- serviceConfig := config.NewServiceConfig(informerFactory.Core().V1().Services(), s.ConfigSyncPeriod) - serviceConfig.RegisterEventHandler(s.Proxier) - go serviceConfig.Run(wait.NeverStop) - - endpointSliceConfig := config.NewEndpointSliceConfig(informerFactory.Discovery().V1().EndpointSlices(), s.ConfigSyncPeriod) - endpointSliceConfig.RegisterEventHandler(s.Proxier) - go endpointSliceConfig.Run(wait.NeverStop) - - // This has to start after the calls to NewServiceConfig because that - // function must configure its shared informer event handlers first. - informerFactory.Start(wait.NeverStop) - - // Make an informer that selects for our nodename. - currentNodeInformerFactory := informers.NewSharedInformerFactoryWithOptions(s.Client, s.ConfigSyncPeriod, - informers.WithTweakListOptions(func(options *metav1.ListOptions) { - options.FieldSelector = fields.OneTermEqualSelector("metadata.name", s.NodeRef.Name).String() - })) - nodeConfig := config.NewNodeConfig(currentNodeInformerFactory.Core().V1().Nodes(), s.ConfigSyncPeriod) - // https://issues.k8s.io/111321 - if s.localDetectorMode == kubeproxyconfig.LocalModeNodeCIDR { - nodeConfig.RegisterEventHandler(&proxy.NodePodCIDRHandler{}) - } - nodeConfig.RegisterEventHandler(s.Proxier) - - go nodeConfig.Run(wait.NeverStop) - - // This has to start after the calls to NewNodeConfig because that must - // configure the shared informer event handler first. 
- currentNodeInformerFactory.Start(wait.NeverStop) - - // Birth Cry after the birth is successful - s.birthCry() - - go s.Proxier.SyncLoop() - - return <-errCh -} - -func (s *ProxyServer) birthCry() { - s.Recorder.Eventf(s.NodeRef, nil, api.EventTypeNormal, "Starting", "StartKubeProxy", "") -} - -func getConntrackMax(config kubeproxyconfig.KubeProxyConntrackConfiguration) (int, error) { - if config.MaxPerCore != nil && *config.MaxPerCore > 0 { - floor := 0 - if config.Min != nil { - floor = int(*config.Min) - } - scaled := int(*config.MaxPerCore) * detectNumCPU() - if scaled > floor { - klog.V(3).InfoS("GetConntrackMax: using scaled conntrack-max-per-core") - return scaled, nil - } - klog.V(3).InfoS("GetConntrackMax: using conntrack-min") - return floor, nil - } - return 0, nil -} - -// detectNodeIP returns the nodeIP used by the proxier -// The order of precedence is: -// 1. config.bindAddress if bindAddress is not 0.0.0.0 or :: -// 2. the primary IP from the Node object, if set -// 3. if no IP is found it defaults to 127.0.0.1 and IPv4 -func detectNodeIP(client clientset.Interface, hostname, bindAddress string) net.IP { - nodeIP := netutils.ParseIPSloppy(bindAddress) - if nodeIP.IsUnspecified() { - nodeIP = utilnode.GetNodeIP(client, hostname) - } - if nodeIP == nil { - klog.InfoS("Can't determine this node's IP, assuming 127.0.0.1; if this is incorrect, please set the --bind-address flag") - nodeIP = netutils.ParseIPSloppy("127.0.0.1") - } - return nodeIP -} - -// nodeIPTuple takes an address and returns a tuple (ipv4,ipv6). -// The returned tuple is guaranteed to have the order (ipv4,ipv6). The slot for the family NOT matching the passed address -// will have the "any" address (0.0.0.0 or ::) inserted. 
-func nodeIPTuple(bindAddress string) [2]net.IP { - nodes := [2]net.IP{net.IPv4zero, net.IPv6zero} - - adr := netutils.ParseIPSloppy(bindAddress) - if netutils.IsIPv6(adr) { - nodes[1] = adr - } else { - nodes[0] = adr - } - - return nodes -} diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/cmd/kube-proxy/app/server_others.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/cmd/kube-proxy/app/server_others.go deleted file mode 100644 index 56ad20f9567e..000000000000 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/cmd/kube-proxy/app/server_others.go +++ /dev/null @@ -1,581 +0,0 @@ -//go:build !windows -// +build !windows - -/* -Copyright 2014 The Kubernetes Authors. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -// Package app does all of the work necessary to configure and run a -// Kubernetes app process. 
-package app - -import ( - "context" - "errors" - "fmt" - goruntime "runtime" - "strings" - "time" - - "github.com/google/cadvisor/machine" - "github.com/google/cadvisor/utils/sysfs" - "k8s.io/apimachinery/pkg/watch" - - "k8s.io/apimachinery/pkg/runtime" - "k8s.io/client-go/tools/cache" - "k8s.io/client-go/tools/events" - - "k8s.io/apimachinery/pkg/fields" - - v1 "k8s.io/api/core/v1" - metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" - "k8s.io/apimachinery/pkg/types" - clientset "k8s.io/client-go/kubernetes" - toolswatch "k8s.io/client-go/tools/watch" - "k8s.io/component-base/configz" - "k8s.io/component-base/metrics" - nodeutil "k8s.io/component-helpers/node/util" - utilsysctl "k8s.io/component-helpers/node/util/sysctl" - "k8s.io/kubernetes/pkg/proxy" - proxyconfigapi "k8s.io/kubernetes/pkg/proxy/apis/config" - "k8s.io/kubernetes/pkg/proxy/apis/config/scheme" - "k8s.io/kubernetes/pkg/proxy/healthcheck" - "k8s.io/kubernetes/pkg/proxy/iptables" - "k8s.io/kubernetes/pkg/proxy/ipvs" - proxymetrics "k8s.io/kubernetes/pkg/proxy/metrics" - proxyutil "k8s.io/kubernetes/pkg/proxy/util" - proxyutiliptables "k8s.io/kubernetes/pkg/proxy/util/iptables" - utilipset "k8s.io/kubernetes/pkg/util/ipset" - utiliptables "k8s.io/kubernetes/pkg/util/iptables" - utilipvs "k8s.io/kubernetes/pkg/util/ipvs" - "k8s.io/utils/exec" - netutils "k8s.io/utils/net" - - "k8s.io/klog/v2" -) - -// timeoutForNodePodCIDR is the time to wait for allocators to assign a PodCIDR to the -// node after it is registered. -var timeoutForNodePodCIDR = 5 * time.Minute - -// NewProxyServer returns a new ProxyServer. 
-func NewProxyServer(o *Options) (*ProxyServer, error) { - return newProxyServer(o.config, o.master) -} - -func newProxyServer( - config *proxyconfigapi.KubeProxyConfiguration, - master string) (*ProxyServer, error) { - - if config == nil { - return nil, errors.New("config is required") - } - - if c, err := configz.New(proxyconfigapi.GroupName); err == nil { - c.Set(config) - } else { - return nil, fmt.Errorf("unable to register configz: %s", err) - } - - var ipvsInterface utilipvs.Interface - var ipsetInterface utilipset.Interface - - if len(config.ShowHiddenMetricsForVersion) > 0 { - metrics.SetShowHidden() - } - - hostname, err := nodeutil.GetHostname(config.HostnameOverride) - if err != nil { - return nil, err - } - - client, eventClient, err := createClients(config.ClientConnection, master) - if err != nil { - return nil, err - } - - nodeIP := detectNodeIP(client, hostname, config.BindAddress) - klog.InfoS("Detected node IP", "address", nodeIP.String()) - - // Create event recorder - eventBroadcaster := events.NewBroadcaster(&events.EventSinkImpl{Interface: client.EventsV1()}) - recorder := eventBroadcaster.NewRecorder(scheme.Scheme, "kube-proxy") - - nodeRef := &v1.ObjectReference{ - Kind: "Node", - Name: hostname, - UID: types.UID(hostname), - Namespace: "", - } - - var healthzServer healthcheck.ProxierHealthUpdater - if len(config.HealthzBindAddress) > 0 { - healthzServer = healthcheck.NewProxierHealthServer(config.HealthzBindAddress, 2*config.IPTables.SyncPeriod.Duration, recorder, nodeRef) - } - - var proxier proxy.Provider - var detectLocalMode proxyconfigapi.LocalMode - - proxyMode := getProxyMode(config.Mode) - detectLocalMode, err = getDetectLocalMode(config) - if err != nil { - return nil, fmt.Errorf("cannot determine detect-local-mode: %v", err) - } - - var nodeInfo *v1.Node - if detectLocalMode == proxyconfigapi.LocalModeNodeCIDR { - klog.InfoS("Watching for node, awaiting podCIDR allocation", "hostname", hostname) - nodeInfo, err = 
waitForPodCIDR(client, hostname) - if err != nil { - return nil, err - } - klog.InfoS("NodeInfo", "podCIDR", nodeInfo.Spec.PodCIDR, "podCIDRs", nodeInfo.Spec.PodCIDRs) - } - - klog.V(2).InfoS("DetectLocalMode", "localMode", string(detectLocalMode)) - - primaryFamily := v1.IPv4Protocol - primaryProtocol := utiliptables.ProtocolIPv4 - if netutils.IsIPv6(nodeIP) { - primaryFamily = v1.IPv6Protocol - primaryProtocol = utiliptables.ProtocolIPv6 - } - execer := exec.New() - iptInterface := utiliptables.New(execer, primaryProtocol) - - var ipt [2]utiliptables.Interface - dualStack := true // While we assume that node supports, we do further checks below - - // Create iptables handlers for both families, one is already created - // Always ordered as IPv4, IPv6 - if primaryProtocol == utiliptables.ProtocolIPv4 { - ipt[0] = iptInterface - ipt[1] = utiliptables.New(execer, utiliptables.ProtocolIPv6) - } else { - ipt[0] = utiliptables.New(execer, utiliptables.ProtocolIPv4) - ipt[1] = iptInterface - } - - nodePortAddresses := config.NodePortAddresses - - if !ipt[0].Present() { - return nil, fmt.Errorf("iptables is not supported for primary IP family %q", primaryProtocol) - } else if !ipt[1].Present() { - klog.InfoS("kube-proxy running in single-stack mode: secondary ipFamily is not supported", "ipFamily", ipt[1].Protocol()) - dualStack = false - - // Validate NodePortAddresses is single-stack - npaByFamily := proxyutil.MapCIDRsByIPFamily(config.NodePortAddresses) - secondaryFamily := proxyutil.OtherIPFamily(primaryFamily) - badAddrs := npaByFamily[secondaryFamily] - if len(badAddrs) > 0 { - klog.InfoS("Ignoring --nodeport-addresses of the wrong family", "ipFamily", secondaryFamily, "addresses", badAddrs) - nodePortAddresses = npaByFamily[primaryFamily] - } - } - - if proxyMode == proxyconfigapi.ProxyModeIPTables { - klog.InfoS("Using iptables Proxier") - if config.IPTables.MasqueradeBit == nil { - // MasqueradeBit must be specified or defaulted. 
- return nil, fmt.Errorf("unable to read IPTables MasqueradeBit from config") - } - - if dualStack { - klog.InfoS("kube-proxy running in dual-stack mode", "ipFamily", iptInterface.Protocol()) - klog.InfoS("Creating dualStackProxier for iptables") - // Always ordered to match []ipt - var localDetectors [2]proxyutiliptables.LocalTrafficDetector - localDetectors, err = getDualStackLocalDetectorTuple(detectLocalMode, config, ipt, nodeInfo) - if err != nil { - return nil, fmt.Errorf("unable to create proxier: %v", err) - } - - // TODO this has side effects that should only happen when Run() is invoked. - proxier, err = iptables.NewDualStackProxier( - ipt, - utilsysctl.New(), - execer, - config.IPTables.SyncPeriod.Duration, - config.IPTables.MinSyncPeriod.Duration, - config.IPTables.MasqueradeAll, - *config.IPTables.LocalhostNodePorts, - int(*config.IPTables.MasqueradeBit), - localDetectors, - hostname, - nodeIPTuple(config.BindAddress), - recorder, - healthzServer, - nodePortAddresses, - ) - } else { - // Create a single-stack proxier if and only if the node does not support dual-stack (i.e, no iptables support). - var localDetector proxyutiliptables.LocalTrafficDetector - localDetector, err = getLocalDetector(detectLocalMode, config, iptInterface, nodeInfo) - if err != nil { - return nil, fmt.Errorf("unable to create proxier: %v", err) - } - - // TODO this has side effects that should only happen when Run() is invoked. 
- proxier, err = iptables.NewProxier( - primaryFamily, - iptInterface, - utilsysctl.New(), - execer, - config.IPTables.SyncPeriod.Duration, - config.IPTables.MinSyncPeriod.Duration, - config.IPTables.MasqueradeAll, - *config.IPTables.LocalhostNodePorts, - int(*config.IPTables.MasqueradeBit), - localDetector, - hostname, - nodeIP, - recorder, - healthzServer, - nodePortAddresses, - ) - } - - if err != nil { - return nil, fmt.Errorf("unable to create proxier: %v", err) - } - proxymetrics.RegisterMetrics() - } else if proxyMode == proxyconfigapi.ProxyModeIPVS { - kernelHandler := ipvs.NewLinuxKernelHandler() - ipsetInterface = utilipset.New(execer) - ipvsInterface = utilipvs.New() - if err := ipvs.CanUseIPVSProxier(ipvsInterface, ipsetInterface, config.IPVS.Scheduler); err != nil { - return nil, fmt.Errorf("can't use the IPVS proxier: %v", err) - } - - klog.InfoS("Using ipvs Proxier") - if dualStack { - klog.InfoS("Creating dualStackProxier for ipvs") - - nodeIPs := nodeIPTuple(config.BindAddress) - - // Always ordered to match []ipt - var localDetectors [2]proxyutiliptables.LocalTrafficDetector - localDetectors, err = getDualStackLocalDetectorTuple(detectLocalMode, config, ipt, nodeInfo) - if err != nil { - return nil, fmt.Errorf("unable to create proxier: %v", err) - } - - proxier, err = ipvs.NewDualStackProxier( - ipt, - ipvsInterface, - ipsetInterface, - utilsysctl.New(), - execer, - config.IPVS.SyncPeriod.Duration, - config.IPVS.MinSyncPeriod.Duration, - config.IPVS.ExcludeCIDRs, - config.IPVS.StrictARP, - config.IPVS.TCPTimeout.Duration, - config.IPVS.TCPFinTimeout.Duration, - config.IPVS.UDPTimeout.Duration, - config.IPTables.MasqueradeAll, - int(*config.IPTables.MasqueradeBit), - localDetectors, - hostname, - nodeIPs, - recorder, - healthzServer, - config.IPVS.Scheduler, - nodePortAddresses, - kernelHandler, - ) - } else { - var localDetector proxyutiliptables.LocalTrafficDetector - localDetector, err = getLocalDetector(detectLocalMode, config, iptInterface, 
nodeInfo) - if err != nil { - return nil, fmt.Errorf("unable to create proxier: %v", err) - } - - proxier, err = ipvs.NewProxier( - primaryFamily, - iptInterface, - ipvsInterface, - ipsetInterface, - utilsysctl.New(), - execer, - config.IPVS.SyncPeriod.Duration, - config.IPVS.MinSyncPeriod.Duration, - config.IPVS.ExcludeCIDRs, - config.IPVS.StrictARP, - config.IPVS.TCPTimeout.Duration, - config.IPVS.TCPFinTimeout.Duration, - config.IPVS.UDPTimeout.Duration, - config.IPTables.MasqueradeAll, - int(*config.IPTables.MasqueradeBit), - localDetector, - hostname, - nodeIP, - recorder, - healthzServer, - config.IPVS.Scheduler, - nodePortAddresses, - kernelHandler, - ) - } - if err != nil { - return nil, fmt.Errorf("unable to create proxier: %v", err) - } - proxymetrics.RegisterMetrics() - } - - return &ProxyServer{ - Client: client, - EventClient: eventClient, - IptInterface: iptInterface, - IpvsInterface: ipvsInterface, - IpsetInterface: ipsetInterface, - execer: execer, - Proxier: proxier, - Broadcaster: eventBroadcaster, - Recorder: recorder, - ConntrackConfiguration: config.Conntrack, - Conntracker: &realConntracker{}, - ProxyMode: proxyMode, - NodeRef: nodeRef, - MetricsBindAddress: config.MetricsBindAddress, - BindAddressHardFail: config.BindAddressHardFail, - EnableProfiling: config.EnableProfiling, - OOMScoreAdj: config.OOMScoreAdj, - ConfigSyncPeriod: config.ConfigSyncPeriod.Duration, - HealthzServer: healthzServer, - localDetectorMode: detectLocalMode, - }, nil -} - -func waitForPodCIDR(client clientset.Interface, nodeName string) (*v1.Node, error) { - // since allocators can assign the podCIDR after the node registers, we do a watch here to wait - // for podCIDR to be assigned, instead of assuming that the Get() on startup will have it. 
- ctx, cancelFunc := context.WithTimeout(context.TODO(), timeoutForNodePodCIDR) - defer cancelFunc() - - fieldSelector := fields.OneTermEqualSelector("metadata.name", nodeName).String() - lw := &cache.ListWatch{ - ListFunc: func(options metav1.ListOptions) (object runtime.Object, e error) { - options.FieldSelector = fieldSelector - return client.CoreV1().Nodes().List(ctx, options) - }, - WatchFunc: func(options metav1.ListOptions) (i watch.Interface, e error) { - options.FieldSelector = fieldSelector - return client.CoreV1().Nodes().Watch(ctx, options) - }, - } - condition := func(event watch.Event) (bool, error) { - // don't process delete events - if event.Type != watch.Modified && event.Type != watch.Added { - return false, nil - } - - n, ok := event.Object.(*v1.Node) - if !ok { - return false, fmt.Errorf("event object not of type Node") - } - // don't consider the node if it is going to be deleted; keep waiting - if !n.DeletionTimestamp.IsZero() { - return false, nil - } - return n.Spec.PodCIDR != "" && len(n.Spec.PodCIDRs) > 0, nil - } - - evt, err := toolswatch.UntilWithSync(ctx, lw, &v1.Node{}, nil, condition) - if err != nil { - return nil, fmt.Errorf("timeout waiting for PodCIDR allocation to configure detect-local-mode %v: %v", proxyconfigapi.LocalModeNodeCIDR, err) - } - if n, ok := evt.Object.(*v1.Node); ok { - return n, nil - } - return nil, fmt.Errorf("event object not of type Node") -} - -func detectNumCPU() int { - // first try to get numCPU from /sys due to a known issue (https://github.com/kubernetes/kubernetes/issues/99225) - _, numCPU, err := machine.GetTopology(sysfs.NewRealSysFs()) - if err != nil || numCPU < 1 { - return goruntime.NumCPU() - } - return numCPU -} - -func getDetectLocalMode(config *proxyconfigapi.KubeProxyConfiguration) (proxyconfigapi.LocalMode, error) { - mode := config.DetectLocalMode - switch mode { - case proxyconfigapi.LocalModeClusterCIDR, proxyconfigapi.LocalModeNodeCIDR, proxyconfigapi.LocalModeBridgeInterface, 
proxyconfigapi.LocalModeInterfaceNamePrefix: - return mode, nil - default: - if strings.TrimSpace(mode.String()) != "" { - return mode, fmt.Errorf("unknown detect-local-mode: %v", mode) - } - klog.V(4).InfoS("Defaulting detect-local-mode", "localModeClusterCIDR", string(proxyconfigapi.LocalModeClusterCIDR)) - return proxyconfigapi.LocalModeClusterCIDR, nil - } -} - -func getLocalDetector(mode proxyconfigapi.LocalMode, config *proxyconfigapi.KubeProxyConfiguration, ipt utiliptables.Interface, nodeInfo *v1.Node) (proxyutiliptables.LocalTrafficDetector, error) { - switch mode { - case proxyconfigapi.LocalModeClusterCIDR: - if len(strings.TrimSpace(config.ClusterCIDR)) == 0 { - klog.InfoS("Detect-local-mode set to ClusterCIDR, but no cluster CIDR defined") - break - } - return proxyutiliptables.NewDetectLocalByCIDR(config.ClusterCIDR, ipt) - case proxyconfigapi.LocalModeNodeCIDR: - if len(strings.TrimSpace(nodeInfo.Spec.PodCIDR)) == 0 { - klog.InfoS("Detect-local-mode set to NodeCIDR, but no PodCIDR defined at node") - break - } - return proxyutiliptables.NewDetectLocalByCIDR(nodeInfo.Spec.PodCIDR, ipt) - case proxyconfigapi.LocalModeBridgeInterface: - if len(strings.TrimSpace(config.DetectLocal.BridgeInterface)) == 0 { - return nil, fmt.Errorf("Detect-local-mode set to BridgeInterface, but no bridge-interface-name %s is defined", config.DetectLocal.BridgeInterface) - } - return proxyutiliptables.NewDetectLocalByBridgeInterface(config.DetectLocal.BridgeInterface) - case proxyconfigapi.LocalModeInterfaceNamePrefix: - if len(strings.TrimSpace(config.DetectLocal.InterfaceNamePrefix)) == 0 { - return nil, fmt.Errorf("Detect-local-mode set to InterfaceNamePrefix, but no interface-prefix %s is defined", config.DetectLocal.InterfaceNamePrefix) - } - return proxyutiliptables.NewDetectLocalByInterfaceNamePrefix(config.DetectLocal.InterfaceNamePrefix) - } - klog.InfoS("Defaulting to no-op detect-local", "detectLocalMode", string(mode)) - return 
proxyutiliptables.NewNoOpLocalDetector(), nil -} - -func getDualStackLocalDetectorTuple(mode proxyconfigapi.LocalMode, config *proxyconfigapi.KubeProxyConfiguration, ipt [2]utiliptables.Interface, nodeInfo *v1.Node) ([2]proxyutiliptables.LocalTrafficDetector, error) { - var err error - localDetectors := [2]proxyutiliptables.LocalTrafficDetector{proxyutiliptables.NewNoOpLocalDetector(), proxyutiliptables.NewNoOpLocalDetector()} - switch mode { - case proxyconfigapi.LocalModeClusterCIDR: - if len(strings.TrimSpace(config.ClusterCIDR)) == 0 { - klog.InfoS("Detect-local-mode set to ClusterCIDR, but no cluster CIDR defined") - break - } - - clusterCIDRs := cidrTuple(config.ClusterCIDR) - - if len(strings.TrimSpace(clusterCIDRs[0])) == 0 { - klog.InfoS("Detect-local-mode set to ClusterCIDR, but no IPv4 cluster CIDR defined, defaulting to no-op detect-local for IPv4") - } else { - localDetectors[0], err = proxyutiliptables.NewDetectLocalByCIDR(clusterCIDRs[0], ipt[0]) - if err != nil { // don't lose the original error - return localDetectors, err - } - } - - if len(strings.TrimSpace(clusterCIDRs[1])) == 0 { - klog.InfoS("Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, defaulting to no-op detect-local for IPv6") - } else { - localDetectors[1], err = proxyutiliptables.NewDetectLocalByCIDR(clusterCIDRs[1], ipt[1]) - } - return localDetectors, err - case proxyconfigapi.LocalModeNodeCIDR: - if nodeInfo == nil || len(strings.TrimSpace(nodeInfo.Spec.PodCIDR)) == 0 { - klog.InfoS("No node info available to configure detect-local-mode NodeCIDR") - break - } - // localDetectors, like ipt, need to be of the order [IPv4, IPv6], but PodCIDRs is set up so that PodCIDRs[0] == PodCIDR. 
- // so have to handle the case where PodCIDR can be IPv6 and set that to localDetectors[1] - if netutils.IsIPv6CIDRString(nodeInfo.Spec.PodCIDR) { - localDetectors[1], err = proxyutiliptables.NewDetectLocalByCIDR(nodeInfo.Spec.PodCIDR, ipt[1]) - if err != nil { - return localDetectors, err - } - if len(nodeInfo.Spec.PodCIDRs) > 1 { - localDetectors[0], err = proxyutiliptables.NewDetectLocalByCIDR(nodeInfo.Spec.PodCIDRs[1], ipt[0]) - } - } else { - localDetectors[0], err = proxyutiliptables.NewDetectLocalByCIDR(nodeInfo.Spec.PodCIDR, ipt[0]) - if err != nil { - return localDetectors, err - } - if len(nodeInfo.Spec.PodCIDRs) > 1 { - localDetectors[1], err = proxyutiliptables.NewDetectLocalByCIDR(nodeInfo.Spec.PodCIDRs[1], ipt[1]) - } - } - return localDetectors, err - case proxyconfigapi.LocalModeBridgeInterface, proxyconfigapi.LocalModeInterfaceNamePrefix: - localDetector, err := getLocalDetector(mode, config, ipt[0], nodeInfo) - if err == nil { - localDetectors[0] = localDetector - localDetectors[1] = localDetector - } - return localDetectors, err - default: - klog.InfoS("Unknown detect-local-mode", "detectLocalMode", mode) - } - klog.InfoS("Defaulting to no-op detect-local", "detectLocalMode", string(mode)) - return localDetectors, nil -} - -// cidrTuple takes a comma separated list of CIDRs and return a tuple (ipv4cidr,ipv6cidr) -// The returned tuple is guaranteed to have the order (ipv4,ipv6) and if no cidr from a family is found an -// empty string "" is inserted. 
-func cidrTuple(cidrList string) [2]string { - cidrs := [2]string{"", ""} - foundIPv4 := false - foundIPv6 := false - - for _, cidr := range strings.Split(cidrList, ",") { - if netutils.IsIPv6CIDRString(cidr) && !foundIPv6 { - cidrs[1] = cidr - foundIPv6 = true - } else if !foundIPv4 { - cidrs[0] = cidr - foundIPv4 = true - } - if foundIPv6 && foundIPv4 { - break - } - } - - return cidrs -} - -func getProxyMode(proxyMode proxyconfigapi.ProxyMode) proxyconfigapi.ProxyMode { - if proxyMode == "" { - klog.InfoS("Using iptables proxy") - return proxyconfigapi.ProxyModeIPTables - } else { - return proxyMode - } -} - -// cleanupAndExit remove iptables rules and ipset/ipvs rules -func cleanupAndExit() error { - execer := exec.New() - - // cleanup IPv6 and IPv4 iptables rules, regardless of current configuration - ipts := []utiliptables.Interface{ - utiliptables.New(execer, utiliptables.ProtocolIPv4), - utiliptables.New(execer, utiliptables.ProtocolIPv6), - } - - ipsetInterface := utilipset.New(execer) - ipvsInterface := utilipvs.New() - - var encounteredError bool - for _, ipt := range ipts { - encounteredError = iptables.CleanupLeftovers(ipt) || encounteredError - encounteredError = ipvs.CleanupLeftovers(ipvsInterface, ipt, ipsetInterface) || encounteredError - } - if encounteredError { - return errors.New("encountered an error while tearing down rules") - } - - return nil -} diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/cmd/kube-proxy/app/server_windows.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/cmd/kube-proxy/app/server_windows.go deleted file mode 100644 index 6336fa72cc00..000000000000 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/cmd/kube-proxy/app/server_windows.go +++ /dev/null @@ -1,172 +0,0 @@ -//go:build windows -// +build windows - -/* -Copyright 2014 The Kubernetes Authors. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. 
-You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -// Package app does all of the work necessary to configure and run a -// Kubernetes app process. -package app - -import ( - "errors" - "fmt" - "net" - goruntime "runtime" - "strconv" - - // Enable pprof HTTP handlers. - _ "net/http/pprof" - - v1 "k8s.io/api/core/v1" - "k8s.io/apimachinery/pkg/types" - "k8s.io/client-go/tools/events" - "k8s.io/component-base/configz" - "k8s.io/component-base/metrics" - nodeutil "k8s.io/component-helpers/node/util" - "k8s.io/klog/v2" - "k8s.io/kubernetes/pkg/proxy" - proxyconfigapi "k8s.io/kubernetes/pkg/proxy/apis/config" - proxyconfigscheme "k8s.io/kubernetes/pkg/proxy/apis/config/scheme" - "k8s.io/kubernetes/pkg/proxy/healthcheck" - "k8s.io/kubernetes/pkg/proxy/winkernel" -) - -// NewProxyServer returns a new ProxyServer. 
-func NewProxyServer(o *Options) (*ProxyServer, error) { - return newProxyServer(o.config, o.master) -} - -func newProxyServer(config *proxyconfigapi.KubeProxyConfiguration, master string) (*ProxyServer, error) { - if config == nil { - return nil, errors.New("config is required") - } - - if c, err := configz.New(proxyconfigapi.GroupName); err == nil { - c.Set(config) - } else { - return nil, fmt.Errorf("unable to register configz: %s", err) - } - - if len(config.ShowHiddenMetricsForVersion) > 0 { - metrics.SetShowHidden() - } - - client, eventClient, err := createClients(config.ClientConnection, master) - if err != nil { - return nil, err - } - - // Create event recorder - hostname, err := nodeutil.GetHostname(config.HostnameOverride) - if err != nil { - return nil, err - } - nodeIP := detectNodeIP(client, hostname, config.BindAddress) - klog.InfoS("Detected node IP", "IP", nodeIP.String()) - - eventBroadcaster := events.NewBroadcaster(&events.EventSinkImpl{Interface: client.EventsV1()}) - recorder := eventBroadcaster.NewRecorder(proxyconfigscheme.Scheme, "kube-proxy") - - nodeRef := &v1.ObjectReference{ - Kind: "Node", - Name: hostname, - UID: types.UID(hostname), - Namespace: "", - } - - var healthzServer healthcheck.ProxierHealthUpdater - var healthzPort int - if len(config.HealthzBindAddress) > 0 { - healthzServer = healthcheck.NewProxierHealthServer(config.HealthzBindAddress, 2*config.IPTables.SyncPeriod.Duration, recorder, nodeRef) - _, port, _ := net.SplitHostPort(config.HealthzBindAddress) - healthzPort, _ = strconv.Atoi(port) - } - - // Check if Kernel Space can be used. 
- canUseWinKernelProxy, err := winkernel.CanUseWinKernelProxier(winkernel.WindowsKernelCompatTester{}) - if !canUseWinKernelProxy && err != nil { - return nil, err - } - - var proxier proxy.Provider - proxyMode := proxyconfigapi.ProxyModeKernelspace - dualStackMode := getDualStackMode(config.Winkernel.NetworkName, winkernel.DualStackCompatTester{}) - if dualStackMode { - klog.InfoS("Creating dualStackProxier for Windows kernel.") - - proxier, err = winkernel.NewDualStackProxier( - config.IPTables.SyncPeriod.Duration, - config.IPTables.MinSyncPeriod.Duration, - config.IPTables.MasqueradeAll, - int(*config.IPTables.MasqueradeBit), - config.ClusterCIDR, - hostname, - nodeIPTuple(config.BindAddress), - recorder, - healthzServer, - config.Winkernel, - healthzPort, - ) - } else { - proxier, err = winkernel.NewProxier( - config.IPTables.SyncPeriod.Duration, - config.IPTables.MinSyncPeriod.Duration, - config.IPTables.MasqueradeAll, - int(*config.IPTables.MasqueradeBit), - config.ClusterCIDR, - hostname, - nodeIP, - recorder, - healthzServer, - config.Winkernel, - healthzPort, - ) - } - if err != nil { - return nil, fmt.Errorf("unable to create proxier: %v", err) - } - winkernel.RegisterMetrics() - - return &ProxyServer{ - Client: client, - EventClient: eventClient, - Proxier: proxier, - Broadcaster: eventBroadcaster, - Recorder: recorder, - ProxyMode: proxyMode, - NodeRef: nodeRef, - MetricsBindAddress: config.MetricsBindAddress, - BindAddressHardFail: config.BindAddressHardFail, - EnableProfiling: config.EnableProfiling, - OOMScoreAdj: config.OOMScoreAdj, - ConfigSyncPeriod: config.ConfigSyncPeriod.Duration, - HealthzServer: healthzServer, - }, nil -} - -func getDualStackMode(networkname string, compatTester winkernel.StackCompatTester) bool { - return compatTester.DualStackCompatible(networkname) -} - -func detectNumCPU() int { - return goruntime.NumCPU() -} - -// cleanupAndExit cleans up after a previous proxy run -func cleanupAndExit() error { - return 
errors.New("--cleanup-and-exit is not implemented on Windows") -} diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/cmd/kubelet/app/options/globalflags_providers.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/cmd/kubelet/app/options/globalflags_providers.go index 9fde6b7f4f2e..100c575846c2 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/cmd/kubelet/app/options/globalflags_providers.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/cmd/kubelet/app/options/globalflags_providers.go @@ -27,4 +27,5 @@ func addLegacyCloudProviderCredentialProviderFlags(global, local *pflag.FlagSet) // TODO(#58034): This is not a static file, so it's not quite as straightforward as --google-json-key. // We need to figure out how ACR users can dynamically provide pull credentials before we can deprecate this. pflagRegister(global, local, "azure-container-registry-config") + local.MarkDeprecated("azure-container-registry-config", "Use --image-credential-provider-config and --image-credential-provider-bin-dir to set up the ACR credential provider instead. Will be removed in a future release.") } diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/cmd/kubelet/app/options/options.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/cmd/kubelet/app/options/options.go index 604e985fbc06..71ea243361c6 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/cmd/kubelet/app/options/options.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/cmd/kubelet/app/options/options.go @@ -86,6 +86,10 @@ type KubeletFlags struct { // Omit this flag to use the combination of built-in default configuration values and flags. KubeletConfigFile string + // KubeletDropinConfigDirectory is a path to a drop-in directory that allows the user to optionally specify + // additional configs that overwrite what is provided by default and by the KubeletConfigFile flag. + KubeletDropinConfigDirectory string + // WindowsService should be set to true if kubelet is running as a service on Windows.
// Its corresponding flag only gets registered in Windows builds. WindowsService bool @@ -281,6 +285,7 @@ func (f *KubeletFlags) AddFlags(mainfs *pflag.FlagSet) { f.addOSFlags(fs) fs.StringVar(&f.KubeletConfigFile, "config", f.KubeletConfigFile, "The Kubelet will load its initial configuration from this file. The path may be absolute or relative; relative paths start at the Kubelet's current working directory. Omit this flag to use the built-in default configuration values. Command-line flags override configuration from this file.") + fs.StringVar(&f.KubeletDropinConfigDirectory, "config-dir", "", "Path to a directory to specify drop-ins, allows the user to optionally specify additional configs to overwrite what is provided by default and in the KubeletConfigFile flag. Note: Set the 'KUBELET_CONFIG_DROPIN_DIR_ALPHA' environment variable to specify the directory. [default='']") fs.StringVar(&f.KubeConfig, "kubeconfig", f.KubeConfig, "Path to a kubeconfig file, specifying how to connect to the API server. Providing --kubeconfig enables API server mode, omitting --kubeconfig enables standalone mode.") fs.StringVar(&f.BootstrapKubeconfig, "bootstrap-kubeconfig", f.BootstrapKubeconfig, "Path to a kubeconfig file that will be used to get client certificate for kubelet. 
"+ @@ -345,6 +350,7 @@ func AddKubeletConfigFlags(mainfs *pflag.FlagSet, c *kubeletconfig.KubeletConfig "v": true, "vmodule": true, "log-flush-frequency": true, + "provider-id": true, } fs.VisitAll(func(f *pflag.Flag) { if notDeprecated[f.Name] { @@ -364,7 +370,7 @@ func AddKubeletConfigFlags(mainfs *pflag.FlagSet, c *kubeletconfig.KubeletConfig fs.DurationVar(&c.HTTPCheckFrequency.Duration, "http-check-frequency", c.HTTPCheckFrequency.Duration, "Duration between checking http for new data") fs.StringVar(&c.StaticPodURL, "manifest-url", c.StaticPodURL, "URL for accessing additional Pod specifications to run") fs.Var(cliflag.NewColonSeparatedMultimapStringString(&c.StaticPodURLHeader), "manifest-url-header", "Comma-separated list of HTTP headers to use when accessing the url provided to --manifest-url. Multiple headers with the same name will be added in the same order provided. This flag can be repeatedly invoked. For example: --manifest-url-header 'a:hello,b:again,c:world' --manifest-url-header 'b:beautiful'") - fs.Var(&utilflag.IPVar{Val: &c.Address}, "address", "The IP address for the Kubelet to serve on (set to '0.0.0.0' or '::' for listening in all interfaces and IP families)") + fs.Var(&utilflag.IPVar{Val: &c.Address}, "address", "The IP address for the Kubelet to serve on (set to '0.0.0.0' or '::' for listening on all interfaces and IP address families)") fs.Int32Var(&c.Port, "port", c.Port, "The port for the Kubelet to serve on.") fs.Int32Var(&c.ReadOnlyPort, "read-only-port", c.ReadOnlyPort, "The read-only port for the Kubelet to serve on with no authentication/authorization (set to 0 to disable)") @@ -422,7 +428,7 @@ func AddKubeletConfigFlags(mainfs *pflag.FlagSet, c *kubeletconfig.KubeletConfig fs.BoolVar(&c.EnableDebuggingHandlers, "enable-debugging-handlers", c.EnableDebuggingHandlers, "Enables server endpoints for log collection and local running of containers and commands") fs.BoolVar(&c.EnableContentionProfiling, "contention-profiling", 
c.EnableContentionProfiling, "Enable block profiling, if profiling is enabled") fs.Int32Var(&c.HealthzPort, "healthz-port", c.HealthzPort, "The port of the localhost healthz endpoint (set to 0 to disable)") - fs.Var(&utilflag.IPVar{Val: &c.HealthzBindAddress}, "healthz-bind-address", "The IP address for the healthz server to serve on (set to '0.0.0.0' or '::' for listening in all interfaces and IP families)") + fs.Var(&utilflag.IPVar{Val: &c.HealthzBindAddress}, "healthz-bind-address", "The IP address for the healthz server to serve on (set to '0.0.0.0' or '::' for listening on all interfaces and IP address families)") fs.Int32Var(&c.OOMScoreAdj, "oom-score-adj", c.OOMScoreAdj, "The oom-score-adj value for kubelet process. Values must be within the range [-1000, 1000]") fs.StringVar(&c.ClusterDomain, "cluster-domain", c.ClusterDomain, "Domain for this cluster. If set, kubelet will configure all containers to search this domain in addition to the host's search domains") @@ -432,7 +438,7 @@ func AddKubeletConfigFlags(mainfs *pflag.FlagSet, c *kubeletconfig.KubeletConfig fs.DurationVar(&c.NodeStatusUpdateFrequency.Duration, "node-status-update-frequency", c.NodeStatusUpdateFrequency.Duration, "Specifies how often kubelet posts node status to master. Note: be cautious when changing the constant, it must work with nodeMonitorGracePeriod in nodecontroller.") fs.DurationVar(&c.ImageMinimumGCAge.Duration, "minimum-image-ttl-duration", c.ImageMinimumGCAge.Duration, "Minimum age for an unused image before it is garbage collected. Examples: '300ms', '10s' or '2h45m'.") fs.Int32Var(&c.ImageGCHighThresholdPercent, "image-gc-high-threshold", c.ImageGCHighThresholdPercent, "The percent of disk usage after which image garbage collection is always run. Values must be within the range [0, 100], To disable image garbage collection, set to 100. 
") - fs.Int32Var(&c.ImageGCLowThresholdPercent, "image-gc-low-threshold", c.ImageGCLowThresholdPercent, "The percent of disk usage before which image garbage collection is never run. Lowest disk usage to garbage collect to. Values must be within the range [0, 100] and should not be larger than that of --image-gc-high-threshold.") + fs.Int32Var(&c.ImageGCLowThresholdPercent, "image-gc-low-threshold", c.ImageGCLowThresholdPercent, "The percent of disk usage before which image garbage collection is never run. Lowest disk usage to garbage collect to. Values must be within the range [0, 100] and must be less than that of --image-gc-high-threshold.") fs.DurationVar(&c.VolumeStatsAggPeriod.Duration, "volume-stats-agg-period", c.VolumeStatsAggPeriod.Duration, "Specifies interval for kubelet to calculate and cache the volume disk usage for all pods and volumes. To disable volume calculations, set to a negative number.") fs.Var(cliflag.NewMapStringBool(&c.FeatureGates), "feature-gates", "A set of key=value pairs that describe feature gates for alpha/experimental features. 
"+ "Options are:\n"+strings.Join(utilfeature.DefaultFeatureGate.KnownFeatures(), "\n")) @@ -464,8 +470,10 @@ func AddKubeletConfigFlags(mainfs *pflag.FlagSet, c *kubeletconfig.KubeletConfig fs.DurationVar(&c.CPUCFSQuotaPeriod.Duration, "cpu-cfs-quota-period", c.CPUCFSQuotaPeriod.Duration, "Sets CPU CFS quota period value, cpu.cfs_period_us, defaults to Linux Kernel default") fs.BoolVar(&c.EnableControllerAttachDetach, "enable-controller-attach-detach", c.EnableControllerAttachDetach, "Enables the Attach/Detach controller to manage attachment/detachment of volumes scheduled to this node, and disables kubelet from executing any attach/detach operations") fs.BoolVar(&c.MakeIPTablesUtilChains, "make-iptables-util-chains", c.MakeIPTablesUtilChains, "If true, kubelet will ensure iptables utility rules are present on host.") - fs.Int32Var(&c.IPTablesMasqueradeBit, "iptables-masquerade-bit", c.IPTablesMasqueradeBit, "The bit of the fwmark space to mark packets for SNAT. Must be within the range [0, 31]. Please match this parameter with corresponding parameter in kube-proxy.") - fs.Int32Var(&c.IPTablesDropBit, "iptables-drop-bit", c.IPTablesDropBit, "The bit of the fwmark space to mark packets for dropping. Must be within the range [0, 31].") + fs.Int32Var(&c.IPTablesMasqueradeBit, "iptables-masquerade-bit", c.IPTablesMasqueradeBit, "Has no effect; use kube-proxy parameters to configure the KUBE-MARK-MASQ chain.") + fs.MarkDeprecated("iptables-masquerade-bit", "This flag has no effect and will be removed in a future version.") + fs.Int32Var(&c.IPTablesDropBit, "iptables-drop-bit", c.IPTablesDropBit, "Has no effect; kubelet no longer creates a KUBE-MARK-DROP chain") + fs.MarkDeprecated("iptables-drop-bit", "This flag has no effect and will be removed in a future version.") fs.StringVar(&c.ContainerLogMaxSize, "container-log-max-size", c.ContainerLogMaxSize, " Set the maximum size (e.g. 
10Mi) of container log file before it is rotated.") fs.Int32Var(&c.ContainerLogMaxFiles, "container-log-max-files", c.ContainerLogMaxFiles, " Set the maximum number of container log files that can be present for a container. The number must be >= 2.") fs.StringSliceVar(&c.AllowedUnsafeSysctls, "allowed-unsafe-sysctls", c.AllowedUnsafeSysctls, "Comma-separated whitelist of unsafe sysctls or unsafe sysctl patterns (ending in *). Use these at your own risk.") diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/cmd/kubelet/app/plugins_providers.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/cmd/kubelet/app/plugins_providers.go index aebb008b04cf..fd10d2464377 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/cmd/kubelet/app/plugins_providers.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/cmd/kubelet/app/plugins_providers.go @@ -31,7 +31,6 @@ import ( "k8s.io/kubernetes/pkg/volume" "k8s.io/kubernetes/pkg/volume/azure_file" "k8s.io/kubernetes/pkg/volume/csimigration" - "k8s.io/kubernetes/pkg/volume/gcepd" "k8s.io/kubernetes/pkg/volume/portworx" "k8s.io/kubernetes/pkg/volume/rbd" "k8s.io/kubernetes/pkg/volume/vsphere_volume" @@ -66,7 +65,6 @@ type pluginInfo struct { func appendLegacyProviderVolumes(allPlugins []volume.VolumePlugin, featureGate featuregate.FeatureGate) ([]volume.VolumePlugin, error) { pluginMigrationStatus := make(map[string]pluginInfo) - pluginMigrationStatus[plugins.GCEPDInTreePluginName] = pluginInfo{pluginMigrationFeature: features.CSIMigrationGCE, pluginUnregisterFeature: features.InTreePluginGCEUnregister, pluginProbeFunction: gcepd.ProbeVolumePlugins} pluginMigrationStatus[plugins.AzureFileInTreePluginName] = pluginInfo{pluginMigrationFeature: features.CSIMigrationAzureFile, pluginUnregisterFeature: features.InTreePluginAzureFileUnregister, pluginProbeFunction: azure_file.ProbeVolumePlugins} pluginMigrationStatus[plugins.VSphereInTreePluginName] = pluginInfo{pluginMigrationFeature: features.CSIMigrationvSphere, 
pluginUnregisterFeature: features.InTreePluginvSphereUnregister, pluginProbeFunction: vsphere_volume.ProbeVolumePlugins} pluginMigrationStatus[plugins.PortworxVolumePluginName] = pluginInfo{pluginMigrationFeature: features.CSIMigrationPortworx, pluginUnregisterFeature: features.InTreePluginPortworxUnregister, pluginProbeFunction: portworx.ProbeVolumePlugins} diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/cmd/kubelet/app/server.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/cmd/kubelet/app/server.go index 4c4fb27b2448..78b2d2c13947 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/cmd/kubelet/app/server.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/cmd/kubelet/app/server.go @@ -23,19 +23,22 @@ import ( "errors" "fmt" "io" + "io/fs" "math" "net" "net/http" "os" - "path" "path/filepath" "strconv" "strings" "time" "github.com/coreos/go-systemd/v22/daemon" + "github.com/imdario/mergo" "github.com/spf13/cobra" "github.com/spf13/pflag" + "google.golang.org/grpc/codes" + "google.golang.org/grpc/status" "k8s.io/klog/v2" "k8s.io/mount-utils" @@ -77,6 +80,7 @@ import ( "k8s.io/component-base/version" "k8s.io/component-base/version/verflag" nodeutil "k8s.io/component-helpers/node/util" + runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1" kubeletconfigv1beta1 "k8s.io/kubelet/config/v1beta1" "k8s.io/kubernetes/cmd/kubelet/app/options" "k8s.io/kubernetes/pkg/api/legacyscheme" @@ -92,7 +96,6 @@ import ( "k8s.io/kubernetes/pkg/kubelet/certificate/bootstrap" "k8s.io/kubernetes/pkg/kubelet/cm" "k8s.io/kubernetes/pkg/kubelet/cm/cpumanager/topology" - "k8s.io/kubernetes/pkg/kubelet/cm/cpuset" "k8s.io/kubernetes/pkg/kubelet/config" kubecontainer "k8s.io/kubernetes/pkg/kubelet/container" "k8s.io/kubernetes/pkg/kubelet/eviction" @@ -108,6 +111,7 @@ import ( "k8s.io/kubernetes/pkg/util/rlimit" "k8s.io/kubernetes/pkg/volume/util/hostutil" "k8s.io/kubernetes/pkg/volume/util/subpath" + "k8s.io/utils/cpuset" "k8s.io/utils/exec" netutils "k8s.io/utils/net" ) @@ -200,11 
+204,24 @@ is checked every 20 seconds (also configurable with a flag).`, } // load kubelet config file, if provided - if configFile := kubeletFlags.KubeletConfigFile; len(configFile) > 0 { - kubeletConfig, err = loadConfigFile(configFile) + if len(kubeletFlags.KubeletConfigFile) > 0 { + kubeletConfig, err = loadConfigFile(kubeletFlags.KubeletConfigFile) if err != nil { - return fmt.Errorf("failed to load kubelet config file, error: %w, path: %s", err, configFile) + return fmt.Errorf("failed to load kubelet config file, path: %s, error: %w", kubeletFlags.KubeletConfigFile, err) } + } + // Merge the kubelet configurations if --config-dir is set + if len(kubeletFlags.KubeletDropinConfigDirectory) > 0 { + _, ok := os.LookupEnv("KUBELET_CONFIG_DROPIN_DIR_ALPHA") + if !ok { + return fmt.Errorf("flag %s specified but environment variable KUBELET_CONFIG_DROPIN_DIR_ALPHA not set, cannot start kubelet", kubeletFlags.KubeletDropinConfigDirectory) + } + if err := mergeKubeletConfigurations(kubeletConfig, kubeletFlags.KubeletDropinConfigDirectory); err != nil { + return fmt.Errorf("failed to merge kubelet configs: %w", err) + } + } + + if len(kubeletFlags.KubeletConfigFile) > 0 || len(kubeletFlags.KubeletDropinConfigDirectory) > 0 { // We must enforce flag precedence by re-parsing the command line into the new object. // This is necessary to preserve backwards-compatibility across binary upgrades. // See issue #56171 for more details. 
@@ -256,7 +273,7 @@ is checked every 20 seconds (also configurable with a flag).`, config.StaticPodURLHeader[k] = []string{""} } // log the kubelet's config for inspection - klog.V(5).InfoS("KubeletConfiguration", "configuration", config) + klog.V(5).InfoS("KubeletConfiguration", "configuration", klog.Format(config)) // set up signal context for kubelet shutdown ctx := genericapiserver.SetupSignalContext() @@ -286,6 +303,41 @@ is checked every 20 seconds (also configurable with a flag).`, return cmd } +// mergeKubeletConfigurations merges the provided drop-in configurations with the base kubelet configuration. +// The drop-in configurations are processed in lexical order based on the file names. This means that the +// configurations in files with lower numeric prefixes are applied first, followed by higher numeric prefixes. +// For example, if the drop-in directory contains files named "10-config.conf" and "20-config.conf", +// the configurations in "10-config.conf" will be applied first, and then the configurations in "20-config.conf" will be applied, +// potentially overriding the previous values. 
+func mergeKubeletConfigurations(kubeletConfig *kubeletconfiginternal.KubeletConfiguration, kubeletDropInConfigDir string) error { + const dropinFileExtension = ".conf" + + // Walk through the drop-in directory and update the configuration for each file + err := filepath.WalkDir(kubeletDropInConfigDir, func(path string, info fs.DirEntry, err error) error { + if err != nil { + return err + } + if !info.IsDir() && filepath.Ext(info.Name()) == dropinFileExtension { + dropinConfig, err := loadConfigFile(path) + if err != nil { + return fmt.Errorf("failed to load kubelet dropin file, path: %s, error: %w", path, err) + } + + // Merge dropinConfig with kubeletConfig + if err := mergo.Merge(kubeletConfig, dropinConfig, mergo.WithOverride); err != nil { + return fmt.Errorf("failed to merge kubelet drop-in config, path: %s, error: %w", path, err) + } + } + return nil + }) + + if err != nil { + return fmt.Errorf("failed to walk through kubelet dropin directory %q: %w", kubeletDropInConfigDir, err) + } + + return nil +} + // newFlagSetWithGlobals constructs a new pflag.FlagSet with global flags registered // on it. 
func newFlagSetWithGlobals() *pflag.FlagSet { @@ -626,6 +678,17 @@ func run(ctx context.Context, s *options.KubeletServer, kubeDeps *kubelet.Depend runAuthenticatorCAReload(ctx.Done()) } + if err := kubelet.PreInitRuntimeService(&s.KubeletConfiguration, kubeDeps); err != nil { + return err + } + + // Get cgroup driver setting from CRI + if utilfeature.DefaultFeatureGate.Enabled(features.KubeletCgroupDriverFromCRI) { + if err := getCgroupDriverFromCRI(ctx, s, kubeDeps); err != nil { + return err + } + } + var cgroupRoots []string nodeAllocatableRoot := cm.NodeAllocatableRoot(s.CgroupRoot, s.CgroupsPerQOS, s.CgroupDriver) cgroupRoots = append(cgroupRoots, nodeAllocatableRoot) @@ -743,18 +806,18 @@ func run(ctx context.Context, s *options.KubeletServer, kubeDeps *kubelet.Depend ReservedSystemCPUs: reservedSystemCPUs, HardEvictionThresholds: hardEvictionThresholds, }, - QOSReserved: *experimentalQOSReserved, - CPUManagerPolicy: s.CPUManagerPolicy, - CPUManagerPolicyOptions: cpuManagerPolicyOptions, - CPUManagerReconcilePeriod: s.CPUManagerReconcilePeriod.Duration, - ExperimentalMemoryManagerPolicy: s.MemoryManagerPolicy, - ExperimentalMemoryManagerReservedMemory: s.ReservedMemory, - PodPidsLimit: s.PodPidsLimit, - EnforceCPULimits: s.CPUCFSQuota, - CPUCFSQuotaPeriod: s.CPUCFSQuotaPeriod.Duration, - TopologyManagerPolicy: s.TopologyManagerPolicy, - TopologyManagerScope: s.TopologyManagerScope, - ExperimentalTopologyManagerPolicyOptions: topologyManagerPolicyOptions, + QOSReserved: *experimentalQOSReserved, + CPUManagerPolicy: s.CPUManagerPolicy, + CPUManagerPolicyOptions: cpuManagerPolicyOptions, + CPUManagerReconcilePeriod: s.CPUManagerReconcilePeriod.Duration, + ExperimentalMemoryManagerPolicy: s.MemoryManagerPolicy, + ExperimentalMemoryManagerReservedMemory: s.ReservedMemory, + PodPidsLimit: s.PodPidsLimit, + EnforceCPULimits: s.CPUCFSQuota, + CPUCFSQuotaPeriod: s.CPUCFSQuotaPeriod.Duration, + TopologyManagerPolicy: s.TopologyManagerPolicy, + TopologyManagerScope: 
s.TopologyManagerScope, + TopologyManagerPolicyOptions: topologyManagerPolicyOptions, }, s.FailSwapOn, kubeDeps.Recorder, @@ -776,11 +839,6 @@ func run(ctx context.Context, s *options.KubeletServer, kubeDeps *kubelet.Depend klog.InfoS("Failed to ApplyOOMScoreAdj", "err", err) } - err = kubelet.PreInitRuntimeService(&s.KubeletConfiguration, kubeDeps) - if err != nil { - return err - } - if err := RunKubelet(s, kubeDeps, s.RunOnce); err != nil { return err } @@ -1011,8 +1069,8 @@ func getNodeName(cloud cloudprovider.Interface, hostname string) (types.NodeName // certificate and key file are generated. Returns a configured server.TLSOptions object. func InitializeTLS(kf *options.KubeletFlags, kc *kubeletconfiginternal.KubeletConfiguration) (*server.TLSOptions, error) { if !kc.ServerTLSBootstrap && kc.TLSCertFile == "" && kc.TLSPrivateKeyFile == "" { - kc.TLSCertFile = path.Join(kf.CertDirectory, "kubelet.crt") - kc.TLSPrivateKeyFile = path.Join(kf.CertDirectory, "kubelet.key") + kc.TLSCertFile = filepath.Join(kf.CertDirectory, "kubelet.crt") + kc.TLSPrivateKeyFile = filepath.Join(kf.CertDirectory, "kubelet.key") canReadCertAndKey, err := certutil.CanReadCertAndKey(kc.TLSCertFile, kc.TLSPrivateKeyFile) if err != nil { @@ -1061,6 +1119,12 @@ func InitializeTLS(kf *options.KubeletFlags, kc *kubeletconfiginternal.KubeletCo return nil, err } + if minTLSVersion == tls.VersionTLS13 { + if len(tlsCipherSuites) != 0 { + klog.InfoS("Warning: TLS 1.3 cipher suites are not configurable, ignoring --tls-cipher-suites") + } + } + tlsOptions := &server.TLSOptions{ Config: &tls.Config{ MinVersion: minTLSVersion, @@ -1181,9 +1245,7 @@ func startKubelet(k kubelet.Bootstrap, podCfg *config.PodConfig, kubeCfg *kubele if kubeCfg.ReadOnlyPort > 0 { go k.ListenAndServeReadOnly(netutils.ParseIPSloppy(kubeCfg.Address), uint(kubeCfg.ReadOnlyPort)) } - if utilfeature.DefaultFeatureGate.Enabled(features.KubeletPodResources) { - go k.ListenAndServePodResources() - } + go 
k.ListenAndServePodResources() } func createAndInitKubelet(kubeServer *options.KubeletServer, @@ -1279,3 +1341,51 @@ func newTracerProvider(s *options.KubeletServer) (oteltrace.TracerProvider, erro } return tp, nil } + +func getCgroupDriverFromCRI(ctx context.Context, s *options.KubeletServer, kubeDeps *kubelet.Dependencies) error { + klog.V(4).InfoS("Getting CRI runtime configuration information") + + var ( + runtimeConfig *runtimeapi.RuntimeConfigResponse + err error + ) + // Retry a couple of times, hoping that any errors are transient. + // Fail quickly on known, non transient errors. + for i := 0; i < 3; i++ { + runtimeConfig, err = kubeDeps.RemoteRuntimeService.RuntimeConfig(ctx) + if err != nil { + s, ok := status.FromError(err) + if !ok || s.Code() != codes.Unimplemented { + // We could introduce a backoff delay or jitter, but this is largely catching cases + // where the runtime is still starting up and we request too early. + // Give it a little more time. + time.Sleep(time.Second * 2) + continue + } + // CRI implementation doesn't support RuntimeConfig, fallback + klog.InfoS("CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. 
Falling back to using cgroupDriver from kubelet config.") + return nil + } + } + if err != nil { + return err + } + + // Calling GetLinux().GetCgroupDriver() won't segfault, but it will always default to systemd + // which is not intended by the fields not being populated + linuxConfig := runtimeConfig.GetLinux() + if linuxConfig == nil { + return nil + } + + switch d := linuxConfig.GetCgroupDriver(); d { + case runtimeapi.CgroupDriver_SYSTEMD: + s.CgroupDriver = "systemd" + case runtimeapi.CgroupDriver_CGROUPFS: + s.CgroupDriver = "cgroupfs" + default: + return fmt.Errorf("runtime returned an unknown cgroup driver %d", d) + } + klog.InfoS("Using cgroup driver setting received from the CRI runtime", "cgroupDriver", s.CgroupDriver) + return nil +} diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/api/service/warnings.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/api/service/warnings.go index c99553367b73..29b1097dffc2 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/api/service/warnings.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/api/service/warnings.go @@ -53,6 +53,13 @@ func GetWarningsForService(service, oldService *api.Service) []string { warnings = append(warnings, getWarningsForCIDR(field.NewPath("spec").Child("loadBalancerSourceRanges").Index(i), cidr)...) 
} + if service.Spec.Type == api.ServiceTypeExternalName && len(service.Spec.ExternalIPs) > 0 { + warnings = append(warnings, fmt.Sprintf("spec.externalIPs is ignored when spec.type is %q", api.ServiceTypeExternalName)) + } + if service.Spec.Type != api.ServiceTypeExternalName && service.Spec.ExternalName != "" { + warnings = append(warnings, fmt.Sprintf("spec.externalName is ignored when spec.type is not %q", api.ServiceTypeExternalName)) + } + return warnings } diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/api/v1/resource/helpers.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/api/v1/resource/helpers.go index 4ed0b33d0803..0e894791d14c 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/api/v1/resource/helpers.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/api/v1/resource/helpers.go @@ -65,7 +65,7 @@ func PodRequests(pod *v1.Pod, opts PodResourcesOptions) v1.ResourceList { cs, found := containerStatuses[container.Name] if found { if pod.Status.Resize == v1.PodResizeStatusInfeasible { - containerReqs = cs.AllocatedResources + containerReqs = cs.AllocatedResources.DeepCopy() } else { containerReqs = max(container.Resources.Requests, cs.AllocatedResources) } @@ -83,20 +83,44 @@ func PodRequests(pod *v1.Pod, opts PodResourcesOptions) v1.ResourceList { addResourceList(reqs, containerReqs) } + restartableInitContainerReqs := v1.ResourceList{} + initContainerReqs := v1.ResourceList{} // init containers define the minimum of any resource // Note: In-place resize is not allowed for InitContainers, so no need to check for ResizeStatus value + // + // Let's say `InitContainerUse(i)` is the resource requirements when the i-th + // init container is initializing, then + // `InitContainerUse(i) = sum(Resources of restartable init containers with index < i) + Resources of i-th init container`. 
+ // + // See https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/753-sidecar-containers#exposing-pod-resource-requirements for the detail. for _, container := range pod.Spec.InitContainers { containerReqs := container.Resources.Requests if len(opts.NonMissingContainerRequests) > 0 { containerReqs = applyNonMissing(containerReqs, opts.NonMissingContainerRequests) } + if container.RestartPolicy != nil && *container.RestartPolicy == v1.ContainerRestartPolicyAlways { + // and add them to the resulting cumulative container requests + addResourceList(reqs, containerReqs) + + // track our cumulative restartable init container resources + addResourceList(restartableInitContainerReqs, containerReqs) + containerReqs = restartableInitContainerReqs + } else { + tmp := v1.ResourceList{} + addResourceList(tmp, containerReqs) + addResourceList(tmp, restartableInitContainerReqs) + containerReqs = tmp + } + if opts.ContainerFn != nil { opts.ContainerFn(containerReqs, podutil.InitContainers) } - maxResourceList(reqs, containerReqs) + maxResourceList(initContainerReqs, containerReqs) } + maxResourceList(reqs, initContainerReqs) + // Add overhead for running a pod to the sum of requests if requested: if !opts.ExcludeOverhead && pod.Spec.Overhead != nil { addResourceList(reqs, pod.Spec.Overhead) @@ -135,14 +159,40 @@ func PodLimits(pod *v1.Pod, opts PodResourcesOptions) v1.ResourceList { } addResourceList(limits, container.Resources.Limits) } + + restartableInitContainerLimits := v1.ResourceList{} + initContainerLimits := v1.ResourceList{} // init containers define the minimum of any resource + // + // Let's say `InitContainerUse(i)` is the resource requirements when the i-th + // init container is initializing, then + // `InitContainerUse(i) = sum(Resources of restartable init containers with index < i) + Resources of i-th init container`. 
+ // + // See https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/753-sidecar-containers#exposing-pod-resource-requirements for the detail. for _, container := range pod.Spec.InitContainers { + containerLimits := container.Resources.Limits + // Is the init container marked as a restartable init container? + if container.RestartPolicy != nil && *container.RestartPolicy == v1.ContainerRestartPolicyAlways { + addResourceList(limits, containerLimits) + + // track our cumulative restartable init container resources + addResourceList(restartableInitContainerLimits, containerLimits) + containerLimits = restartableInitContainerLimits + } else { + tmp := v1.ResourceList{} + addResourceList(tmp, containerLimits) + addResourceList(tmp, restartableInitContainerLimits) + containerLimits = tmp + } + if opts.ContainerFn != nil { - opts.ContainerFn(container.Resources.Limits, podutil.InitContainers) + opts.ContainerFn(containerLimits, podutil.InitContainers) } - maxResourceList(limits, container.Resources.Limits) + maxResourceList(initContainerLimits, containerLimits) } + maxResourceList(limits, initContainerLimits) + // Add overhead to non-zero limits if requested: if !opts.ExcludeOverhead && pod.Spec.Overhead != nil { for name, quantity := range pod.Spec.Overhead { diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/apis/apps/validation/validation.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/apis/apps/validation/validation.go index 14c287c39634..8c10a956f134 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/apis/apps/validation/validation.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/apis/apps/validation/validation.go @@ -280,7 +280,7 @@ func ValidateControllerRevisionUpdate(newHistory, oldHistory *apps.ControllerRev // ValidateDaemonSet tests if required fields in the DaemonSet are set. 
func ValidateDaemonSet(ds *apps.DaemonSet, opts apivalidation.PodValidationOptions) field.ErrorList { allErrs := apivalidation.ValidateObjectMeta(&ds.ObjectMeta, true, ValidateDaemonSetName, field.NewPath("metadata")) - allErrs = append(allErrs, ValidateDaemonSetSpec(&ds.Spec, field.NewPath("spec"), opts)...) + allErrs = append(allErrs, ValidateDaemonSetSpec(&ds.Spec, nil, field.NewPath("spec"), opts)...) return allErrs } @@ -288,7 +288,7 @@ func ValidateDaemonSet(ds *apps.DaemonSet, opts apivalidation.PodValidationOptio func ValidateDaemonSetUpdate(ds, oldDS *apps.DaemonSet, opts apivalidation.PodValidationOptions) field.ErrorList { allErrs := apivalidation.ValidateObjectMetaUpdate(&ds.ObjectMeta, &oldDS.ObjectMeta, field.NewPath("metadata")) allErrs = append(allErrs, ValidateDaemonSetSpecUpdate(&ds.Spec, &oldDS.Spec, field.NewPath("spec"))...) - allErrs = append(allErrs, ValidateDaemonSetSpec(&ds.Spec, field.NewPath("spec"), opts)...) + allErrs = append(allErrs, ValidateDaemonSetSpec(&ds.Spec, &oldDS.Spec, field.NewPath("spec"), opts)...) return allErrs } @@ -344,7 +344,7 @@ func ValidateDaemonSetStatusUpdate(ds, oldDS *apps.DaemonSet) field.ErrorList { } // ValidateDaemonSetSpec tests if required fields in the DaemonSetSpec are set. -func ValidateDaemonSetSpec(spec *apps.DaemonSetSpec, fldPath *field.Path, opts apivalidation.PodValidationOptions) field.ErrorList { +func ValidateDaemonSetSpec(spec, oldSpec *apps.DaemonSetSpec, fldPath *field.Path, opts apivalidation.PodValidationOptions) field.ErrorList { allErrs := field.ErrorList{} labelSelectorValidationOpts := unversionedvalidation.LabelSelectorValidationOptions{AllowInvalidLabelValueInSelector: opts.AllowInvalidLabelValueInSelector} @@ -359,8 +359,12 @@ func ValidateDaemonSetSpec(spec *apps.DaemonSetSpec, fldPath *field.Path, opts a } allErrs = append(allErrs, apivalidation.ValidatePodTemplateSpec(&spec.Template, fldPath.Child("template"), opts)...) 
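Returning to the `PodRequests`/`PodLimits` hunks in helpers.go above: they implement the KEP-753 rule that restartable (sidecar) init containers accumulate into the long-running total, each regular init container peaks on top of the sidecars declared before it, and the pod-level value is the max of the init-phase peak and the steady state. A minimal single-resource sketch (plain millicore `int64`s instead of `v1.ResourceList`; `effectiveRequest` is my name, not a Kubernetes helper):

```go
package main

import "fmt"

// effectiveRequest computes the pod-level request for one resource under
// KEP-753 sidecar semantics: sidecars add to the long-running sum, while each
// regular init container's usage is its own request plus the sidecars started
// before it; the result is the max of the init-phase peak and steady state.
func effectiveRequest(containers, initReqs []int64, restartable []bool) int64 {
	var long int64 // regular containers + sidecars that keep running
	for _, r := range containers {
		long += r
	}
	var sidecars, initPeak int64
	for i, r := range initReqs {
		if restartable[i] {
			long += r
			sidecars += r // counts toward later init containers' usage
			continue
		}
		if use := sidecars + r; use > initPeak {
			initPeak = use
		}
	}
	if initPeak > long {
		return initPeak
	}
	return long
}

func main() {
	// One 100m app container, a 50m sidecar, then a 500m one-shot init
	// container: init phase peaks at 50+500=550m, steady state is 150m.
	fmt.Println(effectiveRequest([]int64{100}, []int64{50, 500}, []bool{true, false})) // 550
}
```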
- // Daemons typically run on more than one node, so mark Read-Write persistent disks as invalid. - allErrs = append(allErrs, apivalidation.ValidateReadOnlyPersistentDisks(spec.Template.Spec.Volumes, fldPath.Child("template", "spec", "volumes"))...) + // get rid of apivalidation.ValidateReadOnlyPersistentDisks, stop passing oldSpec to this function + var oldVols []api.Volume + if oldSpec != nil { + oldVols = oldSpec.Template.Spec.Volumes // +k8s:verify-mutation:reason=clone + } + allErrs = append(allErrs, apivalidation.ValidateReadOnlyPersistentDisks(spec.Template.Spec.Volumes, oldVols, fldPath.Child("template", "spec", "volumes"))...) // RestartPolicy has already been first-order validated as per ValidatePodTemplateSpec(). if spec.Template.Spec.RestartPolicy != api.RestartPolicyAlways { allErrs = append(allErrs, field.NotSupported(fldPath.Child("template", "spec", "restartPolicy"), spec.Template.Spec.RestartPolicy, []string{string(api.RestartPolicyAlways)})) @@ -544,7 +548,7 @@ func ValidateRollback(rollback *apps.RollbackConfig, fldPath *field.Path) field. } // ValidateDeploymentSpec validates given deployment spec. -func ValidateDeploymentSpec(spec *apps.DeploymentSpec, fldPath *field.Path, opts apivalidation.PodValidationOptions) field.ErrorList { +func ValidateDeploymentSpec(spec, oldSpec *apps.DeploymentSpec, fldPath *field.Path, opts apivalidation.PodValidationOptions) field.ErrorList { allErrs := field.ErrorList{} allErrs = append(allErrs, apivalidation.ValidateNonnegativeField(int64(spec.Replicas), fldPath.Child("replicas"))...) @@ -562,7 +566,12 @@ func ValidateDeploymentSpec(spec *apps.DeploymentSpec, fldPath *field.Path, opts if err != nil { allErrs = append(allErrs, field.Invalid(fldPath.Child("selector"), spec.Selector, "invalid label selector")) } else { - allErrs = append(allErrs, ValidatePodTemplateSpecForReplicaSet(&spec.Template, selector, spec.Replicas, fldPath.Child("template"), opts)...)
+ // oldSpec is not empty, pass oldSpec.template + var oldTemplate *api.PodTemplateSpec + if oldSpec != nil { + oldTemplate = &oldSpec.Template // +k8s:verify-mutation:reason=clone + } + allErrs = append(allErrs, ValidatePodTemplateSpecForReplicaSet(&spec.Template, oldTemplate, selector, spec.Replicas, fldPath.Child("template"), opts)...) } allErrs = append(allErrs, ValidateDeploymentStrategy(&spec.Strategy, fldPath.Child("strategy"))...) @@ -614,7 +623,7 @@ func ValidateDeploymentStatus(status *apps.DeploymentStatus, fldPath *field.Path // ValidateDeploymentUpdate tests if an update to a Deployment is valid. func ValidateDeploymentUpdate(update, old *apps.Deployment, opts apivalidation.PodValidationOptions) field.ErrorList { allErrs := apivalidation.ValidateObjectMetaUpdate(&update.ObjectMeta, &old.ObjectMeta, field.NewPath("metadata")) - allErrs = append(allErrs, ValidateDeploymentSpec(&update.Spec, field.NewPath("spec"), opts)...) + allErrs = append(allErrs, ValidateDeploymentSpec(&update.Spec, &old.Spec, field.NewPath("spec"), opts)...) return allErrs } @@ -637,7 +646,7 @@ func ValidateDeploymentStatusUpdate(update, old *apps.Deployment) field.ErrorLis // ValidateDeployment validates a given Deployment. func ValidateDeployment(obj *apps.Deployment, opts apivalidation.PodValidationOptions) field.ErrorList { allErrs := apivalidation.ValidateObjectMeta(&obj.ObjectMeta, true, ValidateDeploymentName, field.NewPath("metadata")) - allErrs = append(allErrs, ValidateDeploymentSpec(&obj.Spec, field.NewPath("spec"), opts)...) + allErrs = append(allErrs, ValidateDeploymentSpec(&obj.Spec, nil, field.NewPath("spec"), opts)...) return allErrs } @@ -660,7 +669,7 @@ var ValidateReplicaSetName = apimachineryvalidation.NameIsDNSSubdomain // ValidateReplicaSet tests if required fields in the ReplicaSet are set. 
func ValidateReplicaSet(rs *apps.ReplicaSet, opts apivalidation.PodValidationOptions) field.ErrorList { allErrs := apivalidation.ValidateObjectMeta(&rs.ObjectMeta, true, ValidateReplicaSetName, field.NewPath("metadata")) - allErrs = append(allErrs, ValidateReplicaSetSpec(&rs.Spec, field.NewPath("spec"), opts)...) + allErrs = append(allErrs, ValidateReplicaSetSpec(&rs.Spec, nil, field.NewPath("spec"), opts)...) return allErrs } @@ -668,7 +677,7 @@ func ValidateReplicaSet(rs *apps.ReplicaSet, opts apivalidation.PodValidationOpt func ValidateReplicaSetUpdate(rs, oldRs *apps.ReplicaSet, opts apivalidation.PodValidationOptions) field.ErrorList { allErrs := field.ErrorList{} allErrs = append(allErrs, apivalidation.ValidateObjectMetaUpdate(&rs.ObjectMeta, &oldRs.ObjectMeta, field.NewPath("metadata"))...) - allErrs = append(allErrs, ValidateReplicaSetSpec(&rs.Spec, field.NewPath("spec"), opts)...) + allErrs = append(allErrs, ValidateReplicaSetSpec(&rs.Spec, &oldRs.Spec, field.NewPath("spec"), opts)...) return allErrs } @@ -705,7 +714,7 @@ func ValidateReplicaSetStatus(status apps.ReplicaSetStatus, fldPath *field.Path) } // ValidateReplicaSetSpec tests if required fields in the ReplicaSet spec are set. -func ValidateReplicaSetSpec(spec *apps.ReplicaSetSpec, fldPath *field.Path, opts apivalidation.PodValidationOptions) field.ErrorList { +func ValidateReplicaSetSpec(spec, oldSpec *apps.ReplicaSetSpec, fldPath *field.Path, opts apivalidation.PodValidationOptions) field.ErrorList { allErrs := field.ErrorList{} allErrs = append(allErrs, apivalidation.ValidateNonnegativeField(int64(spec.Replicas), fldPath.Child("replicas"))...) 
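The signature changes above thread the old spec through `ValidateDeploymentSpec`, `ValidateReplicaSetSpec`, and `ValidateDaemonSetSpec` so `ValidateReadOnlyPersistentDisks` can receive the old volumes and tolerate values that already existed: ratcheting validation, where a tightened rule applies to new values without making updates of existing objects start failing. A generic sketch of the pattern (the `validateRatcheted` helper and the toy rule are illustrative, not the upstream function):

```go
package main

import "fmt"

// validateRatcheted enforces a rule on new values, but tolerates values that
// were already present in the old object, so pre-existing objects remain
// updatable after validation tightens.
func validateRatcheted(newVals, oldVals []string, valid func(string) bool) []string {
	old := make(map[string]bool, len(oldVals))
	for _, v := range oldVals {
		old[v] = true
	}
	var errs []string
	for _, v := range newVals {
		if !valid(v) && !old[v] {
			errs = append(errs, fmt.Sprintf("%q is not allowed", v))
		}
	}
	return errs
}

func main() {
	valid := func(v string) bool { return v != "rw-disk" } // toy rule
	// On update, the invalid value pre-existed, so it is tolerated:
	fmt.Println(len(validateRatcheted([]string{"rw-disk"}, []string{"rw-disk"}, valid))) // 0
	// On create (no old object), the same value is rejected:
	fmt.Println(len(validateRatcheted([]string{"rw-disk"}, nil, valid))) // 1
}
```

Passing `nil` for the old object, as the create-path call sites above do (`ValidateDeployment`, `ValidateReplicaSet`), makes every check apply unconditionally.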
@@ -725,13 +734,18 @@ func ValidateReplicaSetSpec(spec *apps.ReplicaSetSpec, fldPath *field.Path, opts if err != nil { allErrs = append(allErrs, field.Invalid(fldPath.Child("selector"), spec.Selector, "invalid label selector")) } else { - allErrs = append(allErrs, ValidatePodTemplateSpecForReplicaSet(&spec.Template, selector, spec.Replicas, fldPath.Child("template"), opts)...) + // oldSpec is not empty, pass oldSpec.template. + var oldTemplate *api.PodTemplateSpec + if oldSpec != nil { + oldTemplate = &oldSpec.Template // +k8s:verify-mutation:reason=clone + } + allErrs = append(allErrs, ValidatePodTemplateSpecForReplicaSet(&spec.Template, oldTemplate, selector, spec.Replicas, fldPath.Child("template"), opts)...) } return allErrs } // ValidatePodTemplateSpecForReplicaSet validates the given template and ensures that it is in accordance with the desired selector and replicas. -func ValidatePodTemplateSpecForReplicaSet(template *api.PodTemplateSpec, selector labels.Selector, replicas int32, fldPath *field.Path, opts apivalidation.PodValidationOptions) field.ErrorList { +func ValidatePodTemplateSpecForReplicaSet(template, oldTemplate *api.PodTemplateSpec, selector labels.Selector, replicas int32, fldPath *field.Path, opts apivalidation.PodValidationOptions) field.ErrorList { allErrs := field.ErrorList{} if template == nil { allErrs = append(allErrs, field.Required(fldPath, "")) @@ -744,8 +758,14 @@ func ValidatePodTemplateSpecForReplicaSet(template *api.PodTemplateSpec, selecto } } allErrs = append(allErrs, apivalidation.ValidatePodTemplateSpec(template, fldPath, opts)...) + // Daemons run on more than one node, Cancel verification of read and write volumes. 
+ // get rid of apivalidation.ValidateReadOnlyPersistentDisks,stop passing oldTemplate to this function + var oldVols []api.Volume + if oldTemplate != nil { + oldVols = oldTemplate.Spec.Volumes // +k8s:verify-mutation:reason=clone + } if replicas > 1 { - allErrs = append(allErrs, apivalidation.ValidateReadOnlyPersistentDisks(template.Spec.Volumes, fldPath.Child("spec", "volumes"))...) + allErrs = append(allErrs, apivalidation.ValidateReadOnlyPersistentDisks(template.Spec.Volumes, oldVols, fldPath.Child("spec", "volumes"))...) } // RestartPolicy has already been first-order validated as per ValidatePodTemplateSpec(). if template.Spec.RestartPolicy != api.RestartPolicyAlways { diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/apis/autoscaling/OWNERS b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/apis/autoscaling/OWNERS index ba7b77a5a78b..ab572136eace 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/apis/autoscaling/OWNERS +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/apis/autoscaling/OWNERS @@ -2,7 +2,6 @@ reviewers: - thockin - - lavalamp - smarterclayton - wojtek-t - deads2k diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/apis/batch/types.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/apis/batch/types.go index dbddde490a1f..a3a8caf03eaf 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/apis/batch/types.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/apis/batch/types.go @@ -44,6 +44,13 @@ const ( JobNameLabel = labelPrefix + LegacyJobNameLabel // Controller UID is used for selectors and labels for jobs ControllerUidLabel = labelPrefix + LegacyControllerUidLabel + // Annotation indicating the number of failures for the index corresponding + // to the pod, which are counted towards the backoff limit. 
+ JobIndexFailureCountAnnotation = labelPrefix + "job-index-failure-count" + // Annotation indicating the number of failures for the index corresponding + // to the pod, which don't count towards the backoff limit, according to the + // pod failure policy. When the annotation is absent zero is implied. + JobIndexIgnoredFailureCountAnnotation = labelPrefix + "job-index-ignored-failure-count" ) // +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object @@ -119,6 +126,12 @@ const ( // pod's job as Failed and terminate all running pods. PodFailurePolicyActionFailJob PodFailurePolicyAction = "FailJob" + // This is an action which might be taken on a pod failure - mark the + // Job's index as failed to avoid restarts within this index. This action + // can only be used when backoffLimitPerIndex is set. + // This value is alpha-level. + PodFailurePolicyActionFailIndex PodFailurePolicyAction = "FailIndex" + // This is an action which might be taken on a pod failure - the counter towards // .backoffLimit, represented by the job's .status.failed field, is not // incremented and a replacement pod is created. @@ -138,6 +151,19 @@ const ( PodFailurePolicyOnExitCodesOpNotIn PodFailurePolicyOnExitCodesOperator = "NotIn" ) +// PodReplacementPolicy specifies the policy for creating pod replacements. +// +enum +type PodReplacementPolicy string + +const ( + // TerminatingOrFailed means that we recreate pods + // when they are terminating (has a metadata.deletionTimestamp) or failed. + TerminatingOrFailed PodReplacementPolicy = "TerminatingOrFailed" + //Failed means to wait until a previously created Pod is fully terminated (has phase + //Failed or Succeeded) before creating a replacement Pod. + Failed PodReplacementPolicy = "Failed" +) + // PodFailurePolicyOnExitCodesRequirement describes the requirement for handling // a failed pod based on its container exit codes. 
In particular, it looks up the // .state.terminated.exitCode for each app container and init container status, @@ -195,6 +221,10 @@ type PodFailurePolicyRule struct { // // - FailJob: indicates that the pod's job is marked as Failed and all // running pods are terminated. + // - FailIndex: indicates that the pod's index is marked as Failed and will + // not be restarted. + // This value is alpha-level. It can be used when the + // `JobBackoffLimitPerIndex` feature gate is enabled (disabled by default). // - Ignore: indicates that the counter towards the .backoffLimit is not // incremented and a replacement pod is created. // - Count: indicates that the pod is handled in the default way - the @@ -251,8 +281,8 @@ type JobSpec struct { // checked against the backoffLimit. This field cannot be used in combination // with .spec.podTemplate.spec.restartPolicy=OnFailure. // - // This field is alpha-level. To use this field, you must enable the - // `JobPodFailurePolicy` feature gate (disabled by default). + // This field is beta-level. It can be used when the `JobPodFailurePolicy` + // feature gate is enabled (enabled by default). // +optional PodFailurePolicy *PodFailurePolicy @@ -269,6 +299,30 @@ type JobSpec struct { // +optional BackoffLimit *int32 + // Specifies the limit for the number of retries within an + // index before marking this index as failed. When enabled the number of + // failures per index is kept in the pod's + // batch.kubernetes.io/job-index-failure-count annotation. It can only + // be set when Job's completionMode=Indexed, and the Pod's restart + // policy is Never. The field is immutable. + // This field is alpha-level. It can be used when the `JobBackoffLimitPerIndex` + // feature gate is enabled (disabled by default). + // +optional + BackoffLimitPerIndex *int32 + + // Specifies the maximal number of failed indexes before marking the Job as + failed, when backoffLimitPerIndex is set.
Once the number of failed + indexes exceeds this number the entire Job is marked as Failed and its + execution is terminated. When left as null the job continues execution of + all of its indexes and is marked with the `Complete` Job condition. + It can only be specified when backoffLimitPerIndex is set. + It can be null or up to completions. It is required and must be + less than or equal to 10^4 when completions is greater than 10^5. + This field is alpha-level. It can be used when the `JobBackoffLimitPerIndex` + feature gate is enabled (disabled by default). + +optional + MaxFailedIndexes *int32 + // TODO: enable it when https://github.com/kubernetes/kubernetes/issues/28486 has been fixed // Optional number of failed pods to retain. // +optional @@ -340,6 +394,19 @@ type JobSpec struct { // // +optional Suspend *bool + + // podReplacementPolicy specifies when to create replacement Pods. + // Possible values are: + // - TerminatingOrFailed means that we recreate pods + // when they are terminating (has a metadata.deletionTimestamp) or failed. + // - Failed means to wait until a previously created Pod is fully terminated (has phase + // Failed or Succeeded) before creating a replacement Pod. + // + // When using podFailurePolicy, Failed is the only allowed value. + // TerminatingOrFailed and Failed are allowed values when podFailurePolicy is not in use. + // This is an alpha field. Enable JobPodReplacementPolicy to be able to use this field. + // +optional + PodReplacementPolicy *PodReplacementPolicy } // JobStatus represents the current state of a Job. @@ -372,6 +439,14 @@ type JobStatus struct { // +optional Active int32 + // The number of pods which are terminating (in phase Pending or Running + // and have a deletionTimestamp). + // + // This field is alpha-level. The job controller populates the field when + // the feature gate JobPodReplacementPolicy is enabled (disabled by default).
+ // +optional + Terminating *int32 + // The number of active pods which have a Ready condition. // // This field is beta-level. The job controller populates the field when @@ -397,6 +472,19 @@ type JobStatus struct { // +optional CompletedIndexes string + // FailedIndexes holds the failed indexes when backoffLimitPerIndex=true. + // The indexes are represented in the text format analogous as for the + // `completedIndexes` field, ie. they are kept as decimal integers + // separated by commas. The numbers are listed in increasing order. Three or + // more consecutive numbers are compressed and represented by the first and + // last element of the series, separated by a hyphen. + // For example, if the failed indexes are 1, 3, 4, 5 and 7, they are + // represented as "1,3-5,7". + // This field is alpha-level. It can be used when the `JobBackoffLimitPerIndex` + // feature gate is enabled (disabled by default). + // +optional + FailedIndexes *string + // uncountedTerminatedPods holds the UIDs of Pods that have terminated but // the job controller hasn't yet accounted for in the status counters. 
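The `FailedIndexes` field described above reuses the compressed text format of `completedIndexes`: per the doc comment, three or more consecutive numbers collapse into `first-last`, e.g. failed indexes 1, 3, 4, 5, 7 become `"1,3-5,7"`. A sketch of a formatter for that representation (`formatIndexes` is my name, not the Job controller's; it follows the comment's three-or-more rule and expects sorted, unique input):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// formatIndexes renders sorted, unique indexes in the compressed form the doc
// comment describes: runs of 3+ consecutive numbers become "first-last",
// shorter runs are listed individually, entries are comma-separated.
func formatIndexes(idx []int) string {
	var parts []string
	for i := 0; i < len(idx); {
		j := i
		for j+1 < len(idx) && idx[j+1] == idx[j]+1 {
			j++ // extend the consecutive run
		}
		switch {
		case j-i >= 2: // three or more consecutive: compress
			parts = append(parts, fmt.Sprintf("%d-%d", idx[i], idx[j]))
		case j-i == 1: // exactly two: list both
			parts = append(parts, strconv.Itoa(idx[i]), strconv.Itoa(idx[j]))
		default:
			parts = append(parts, strconv.Itoa(idx[i]))
		}
		i = j + 1
	}
	return strings.Join(parts, ",")
}

func main() {
	fmt.Println(formatIndexes([]int{1, 3, 4, 5, 7})) // the doc comment's example: 1,3-5,7
}
```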
// diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/apis/batch/zz_generated.deepcopy.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/apis/batch/zz_generated.deepcopy.go index 015128250e52..f34516f7b4ac 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/apis/batch/zz_generated.deepcopy.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/apis/batch/zz_generated.deepcopy.go @@ -267,6 +267,16 @@ func (in *JobSpec) DeepCopyInto(out *JobSpec) { *out = new(int32) **out = **in } + if in.BackoffLimitPerIndex != nil { + in, out := &in.BackoffLimitPerIndex, &out.BackoffLimitPerIndex + *out = new(int32) + **out = **in + } + if in.MaxFailedIndexes != nil { + in, out := &in.MaxFailedIndexes, &out.MaxFailedIndexes + *out = new(int32) + **out = **in + } if in.Selector != nil { in, out := &in.Selector, &out.Selector *out = new(v1.LabelSelector) @@ -293,6 +303,11 @@ func (in *JobSpec) DeepCopyInto(out *JobSpec) { *out = new(bool) **out = **in } + if in.PodReplacementPolicy != nil { + in, out := &in.PodReplacementPolicy, &out.PodReplacementPolicy + *out = new(PodReplacementPolicy) + **out = **in + } return } @@ -324,11 +339,21 @@ func (in *JobStatus) DeepCopyInto(out *JobStatus) { in, out := &in.CompletionTime, &out.CompletionTime *out = (*in).DeepCopy() } + if in.Terminating != nil { + in, out := &in.Terminating, &out.Terminating + *out = new(int32) + **out = **in + } if in.Ready != nil { in, out := &in.Ready, &out.Ready *out = new(int32) **out = **in } + if in.FailedIndexes != nil { + in, out := &in.FailedIndexes, &out.FailedIndexes + *out = new(string) + **out = **in + } if in.UncountedTerminatedPods != nil { in, out := &in.UncountedTerminatedPods, &out.UncountedTerminatedPods *out = new(UncountedTerminatedPods) diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/apis/core/helper/helpers.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/apis/core/helper/helpers.go index 4cdbae980026..a404263e78c6 100644 --- 
a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/apis/core/helper/helpers.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/apis/core/helper/helpers.go @@ -360,6 +360,28 @@ func ContainsAccessMode(modes []core.PersistentVolumeAccessMode, mode core.Persi return false } +func ClaimContainsAllocatedResources(pvc *core.PersistentVolumeClaim) bool { + if pvc == nil { + return false + } + + if pvc.Status.AllocatedResources != nil { + return true + } + return false +} + +func ClaimContainsAllocatedResourceStatus(pvc *core.PersistentVolumeClaim) bool { + if pvc == nil { + return false + } + + if pvc.Status.AllocatedResourceStatuses != nil { + return true + } + return false +} + // GetTolerationsFromPodAnnotations gets the json serialized tolerations data from Pod.Annotations // and converts it to the []Toleration type in core. func GetTolerationsFromPodAnnotations(annotations map[string]string) ([]core.Toleration, error) { @@ -453,41 +475,6 @@ func PersistentVolumeClaimHasClass(claim *core.PersistentVolumeClaim) bool { return false } -func toResourceNames(resources core.ResourceList) []core.ResourceName { - result := []core.ResourceName{} - for resourceName := range resources { - result = append(result, resourceName) - } - return result -} - -func toSet(resourceNames []core.ResourceName) sets.String { - result := sets.NewString() - for _, resourceName := range resourceNames { - result.Insert(string(resourceName)) - } - return result -} - -// toContainerResourcesSet returns a set of resources names in container resource requirements -func toContainerResourcesSet(ctr *core.Container) sets.String { - resourceNames := toResourceNames(ctr.Resources.Requests) - resourceNames = append(resourceNames, toResourceNames(ctr.Resources.Limits)...) - return toSet(resourceNames) -} - -// ToPodResourcesSet returns a set of resource names in all containers in a pod. 
-func ToPodResourcesSet(podSpec *core.PodSpec) sets.String { - result := sets.NewString() - for i := range podSpec.InitContainers { - result = result.Union(toContainerResourcesSet(&podSpec.InitContainers[i])) - } - for i := range podSpec.Containers { - result = result.Union(toContainerResourcesSet(&podSpec.Containers[i])) - } - return result -} - // GetDeletionCostFromPodAnnotations returns the integer value of pod-deletion-cost. Returns 0 // if not set or the value is invalid. func GetDeletionCostFromPodAnnotations(annotations map[string]string) (int32, error) { diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/apis/core/install/OWNERS b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/apis/core/install/OWNERS index 215733b59788..1a5f757675f9 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/apis/core/install/OWNERS +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/apis/core/install/OWNERS @@ -1,7 +1,6 @@ # See the OWNERS docs at https://go.k8s.io/owners reviewers: - - lavalamp - smarterclayton - deads2k - caesarxuchao diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/apis/core/pods/helpers.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/apis/core/pods/helpers.go index 71810c5005ce..defc69c11f54 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/apis/core/pods/helpers.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/apis/core/pods/helpers.go @@ -84,6 +84,7 @@ func ConvertDownwardAPIFieldLabel(version, label, value string) (string, string, "spec.schedulerName", "status.phase", "status.hostIP", + "status.hostIPs", "status.podIP", "status.podIPs": return label, value, nil diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/apis/core/types.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/apis/core/types.go index d8f657b74230..75c68af621a2 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/apis/core/types.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/apis/core/types.go @@ 
-380,6 +380,12 @@ type PersistentVolumeStatus struct { // Reason is a brief CamelCase string that describes any failure and is meant for machine parsing and tidy display in the CLI // +optional Reason string + // LastPhaseTransitionTime is the time the phase transitioned from one to another + // and automatically resets to current time every time a volume phase transitions. + // This is an alpha field and requires enabling PersistentVolumeLastPhaseTransitionTime feature. + // +featureGate=PersistentVolumeLastPhaseTransitionTime + // +optional + LastPhaseTransitionTime *metav1.Time } // +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object @@ -515,23 +521,27 @@ const ( ) // +enum -type PersistentVolumeClaimResizeStatus string +// When a controller receives persistentvolume claim update with ClaimResourceStatus for a resource +// that it does not recognize, it should ignore that update and let other controllers +// handle it. +type ClaimResourceStatus string const ( - // When expansion is complete, the empty string is set by resize controller or kubelet. - PersistentVolumeClaimNoExpansionInProgress PersistentVolumeClaimResizeStatus = "" - // State set when resize controller starts expanding the volume in control-plane - PersistentVolumeClaimControllerExpansionInProgress PersistentVolumeClaimResizeStatus = "ControllerExpansionInProgress" - // State set when expansion has failed in resize controller with a terminal error. - // Transient errors such as timeout should not set this status and should leave ResizeStatus + // State set when resize controller starts resizing the volume in control-plane + PersistentVolumeClaimControllerResizeInProgress ClaimResourceStatus = "ControllerResizeInProgress" + + // State set when resize has failed in resize controller with a terminal error.
+ // Transient errors such as timeout should not set this status and should leave allocatedResourceStatus // unmodified, so as resize controller can resume the volume expansion. - PersistentVolumeClaimControllerExpansionFailed PersistentVolumeClaimResizeStatus = "ControllerExpansionFailed" - // State set when resize controller has finished expanding the volume but further expansion is needed on the node. - PersistentVolumeClaimNodeExpansionPending PersistentVolumeClaimResizeStatus = "NodeExpansionPending" - // State set when kubelet starts expanding the volume. - PersistentVolumeClaimNodeExpansionInProgress PersistentVolumeClaimResizeStatus = "NodeExpansionInProgress" - // State set when expansion has failed in kubelet with a terminal error. Transient errors don't set NodeExpansionFailed. - PersistentVolumeClaimNodeExpansionFailed PersistentVolumeClaimResizeStatus = "NodeExpansionFailed" + PersistentVolumeClaimControllerResizeFailed ClaimResourceStatus = "ControllerResizeFailed" + + // State set when resize controller has finished resizing the volume but further resizing of volume + // is needed on the node. + PersistentVolumeClaimNodeResizePending ClaimResourceStatus = "NodeResizePending" + // State set when kubelet starts resizing the volume. + PersistentVolumeClaimNodeResizeInProgress ClaimResourceStatus = "NodeResizeInProgress" + // State set when resizing has failed in kubelet with a terminal error. Transient errors don't set NodeResizeFailed + PersistentVolumeClaimNodeResizeFailed ClaimResourceStatus = "NodeResizeFailed" ) // PersistentVolumeClaimCondition represents the current condition of PV claim @@ -561,24 +571,70 @@ type PersistentVolumeClaimStatus struct { Capacity ResourceList // +optional Conditions []PersistentVolumeClaimCondition - // The storage resource within AllocatedResources tracks the capacity allocated to a PVC. It may - // be larger than the actual capacity when a volume expansion operation is requested. 
+ // AllocatedResources tracks the resources allocated to a PVC including its capacity. + // Key names follow standard Kubernetes label syntax. Valid values are either: + // * Un-prefixed keys: + // - storage - the capacity of the volume. + // * Custom resources must use implementation-defined prefixed names such as "example.com/my-custom-resource" + // Apart from the above values, keys that are unprefixed or have the kubernetes.io prefix are considered + // reserved and hence may not be used. + // + // Capacity reported here may be larger than the actual capacity when a volume expansion operation + // is requested. // For storage quota, the larger value from allocatedResources and PVC.spec.resources is used. // If allocatedResources is not set, PVC.spec.resources alone is used for quota calculation. // If a volume expansion capacity request is lowered, allocatedResources is only // lowered if there are no expansion operations in progress and if the actual volume capacity // is equal or lower than the requested capacity. + // + // A controller that receives a PVC update with a previously unknown resourceName + // should ignore the update for the purpose it was designed for. For example, a controller that + // is only responsible for resizing the capacity of the volume should ignore PVC updates that change other valid + // resources associated with the PVC. + // + // This is an alpha field and requires enabling RecoverVolumeExpansionFailure feature. + // +featureGate=RecoverVolumeExpansionFailure + // +optional AllocatedResources ResourceList - // ResizeStatus stores status of resize operation. - // ResizeStatus is not set by default but when expansion is complete resizeStatus is set to empty - // string by resize controller or kubelet. + // AllocatedResourceStatuses stores status of resource being resized for the given PVC. + // Key names follow standard Kubernetes label syntax. Valid values are either: + // * Un-prefixed keys: + // - storage - the capacity of the volume.
+ // * Custom resources must use implementation-defined prefixed names such as "example.com/my-custom-resource" + // Apart from the above values, keys that are unprefixed or have the kubernetes.io prefix are considered + // reserved and hence may not be used. + // + // ClaimResourceStatus can be in any of the following states: + // - ControllerResizeInProgress: + // State set when resize controller starts resizing the volume in control-plane. + // - ControllerResizeFailed: + // State set when resize has failed in resize controller with a terminal error. + // - NodeResizePending: + // State set when resize controller has finished resizing the volume but further resizing of + // volume is needed on the node. + // - NodeResizeInProgress: + // State set when kubelet starts resizing the volume. + // - NodeResizeFailed: + // State set when resizing has failed in kubelet with a terminal error. Transient errors don't set + // NodeResizeFailed. + // For example: if expanding a PVC for more capacity, this field can be one of the following states: + // - pvc.status.allocatedResourceStatus['storage'] = "ControllerResizeInProgress" + // - pvc.status.allocatedResourceStatus['storage'] = "ControllerResizeFailed" + // - pvc.status.allocatedResourceStatus['storage'] = "NodeResizePending" + // - pvc.status.allocatedResourceStatus['storage'] = "NodeResizeInProgress" + // - pvc.status.allocatedResourceStatus['storage'] = "NodeResizeFailed" + // When this field is not set, it means that no resize operation is in progress for the given PVC. + // + // A controller that receives a PVC update with a previously unknown resourceName or ClaimResourceStatus + // should ignore the update for the purpose it was designed for. For example, a controller that + // is only responsible for resizing the capacity of the volume should ignore PVC updates that change other valid + // resources associated with the PVC. + // + // This is an alpha field and requires enabling RecoverVolumeExpansionFailure feature.
// +featureGate=RecoverVolumeExpansionFailure + // +mapType=granular // +optional - ResizeStatus *PersistentVolumeClaimResizeStatus + AllocatedResourceStatuses map[ResourceName]ClaimResourceStatus } // PersistentVolumeAccessMode defines various access modes for PV. @@ -2037,7 +2093,8 @@ type SecretEnvSource struct { // HTTPHeader describes a custom header to be used in HTTP probes type HTTPHeader struct { - // The header field name + // The header field name. + // This will be canonicalized upon output, so case-variant names will be understood as the same header. Name string // The header field value Value string @@ -2278,6 +2335,24 @@ type Container struct { // +featureGate=InPlacePodVerticalScaling // +optional ResizePolicy []ContainerResizePolicy + // RestartPolicy defines the restart behavior of individual containers in a pod. + // This field may only be set for init containers, and the only allowed value is "Always". + // For non-init containers or when this field is not specified, + // the restart behavior is defined by the Pod's restart policy and the container type. + // Setting the RestartPolicy as "Always" for the init container will have the following effect: + // this init container will be continually restarted on + // exit until all regular containers have terminated. Once all regular + // containers have completed, all init containers with restartPolicy "Always" + // will be shut down. This lifecycle differs from normal init containers and + // is often referred to as a "sidecar" container. Although this init + // container still starts in the init container sequence, it does not wait + // for the container to complete before proceeding to the next init + // container. Instead, the next init container starts immediately after this + // init container is started, or after any startupProbe has successfully + // completed. 
+ // +featureGate=SidecarContainers + // +optional + RestartPolicy *ContainerRestartPolicy // +optional VolumeMounts []VolumeMount // volumeDevices is the list of block devices to be used by the container. @@ -2596,6 +2671,14 @@ const ( RestartPolicyNever RestartPolicy = "Never" ) +// ContainerRestartPolicy is the restart policy for a single container. +// This may only be set for init containers and only allowed value is "Always". +type ContainerRestartPolicy string + +const ( + ContainerRestartPolicyAlways ContainerRestartPolicy = "Always" +) + // +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object // PodList is a list of Pods. @@ -3183,15 +3266,9 @@ type ClaimSource struct { // // The template will be used to create a new ResourceClaim, which will // be bound to this pod. When this pod is deleted, the ResourceClaim - // will also be deleted. The name of the ResourceClaim will be -, where is the - // PodResourceClaim.Name. Pod validation will reject the pod if the - // concatenated name is not valid for a ResourceClaim (e.g. too long). - // - // An existing ResourceClaim with that name that is not owned by the - // pod will not be used for the pod to avoid using an unrelated - // resource by mistake. Scheduling and pod startup are then blocked - // until the unrelated ResourceClaim is removed. + // will also be deleted. The pod name and resource name, along with a + // generated component, will be used to form a unique name for the + // ResourceClaim, which will be recorded in pod.status.resourceClaimStatuses. // // This field is immutable and no changes will be made to the // corresponding ResourceClaim by the control plane after creating the @@ -3199,6 +3276,22 @@ type ClaimSource struct { ResourceClaimTemplateName *string } +// PodResourceClaimStatus is stored in the PodStatus for each PodResourceClaim +// which references a ResourceClaimTemplate. It stores the generated name for +// the corresponding ResourceClaim. 
+type PodResourceClaimStatus struct { + // Name uniquely identifies this resource claim inside the pod. + // This must match the name of an entry in pod.spec.resourceClaims, + // which implies that the string must be a DNS_LABEL. + Name string + + // ResourceClaimName is the name of the ResourceClaim that was + // generated for the Pod in the namespace of the Pod. If this is + // unset, then generating a ResourceClaim was not necessary. The + // pod.spec.resourceClaims entry can be ignored in this case. + ResourceClaimName *string +} + // OSName is the set of OS'es that can be used in OS. type OSName string @@ -3445,12 +3538,15 @@ type PodDNSConfigOption struct { Value *string } -// PodIP represents the IP address of a pod. -// IP address information. Each entry includes: -// -// IP: An IP address allocated to the pod. Routable at least within -// the cluster. +// PodIP represents a single IP address allocated to the pod. type PodIP struct { + // IP is the IP address assigned to the pod + IP string +} + +// HostIP represents a single IP address allocated to the host. +type HostIP struct { + // IP is the IP address assigned to the host IP string } @@ -3504,6 +3600,13 @@ type EphemeralContainerCommon struct { // +featureGate=InPlacePodVerticalScaling // +optional ResizePolicy []ContainerResizePolicy + // Restart policy for the container to manage the restart behavior of each + // container within a pod. + // This may only be set for init containers. You cannot set this field on + // ephemeral containers. + // +featureGate=SidecarContainers + // +optional + RestartPolicy *ContainerRestartPolicy // Pod volumes to mount into the container's filesystem. Subpath mounts are not allowed for ephemeral containers. // +optional VolumeMounts []VolumeMount @@ -3593,9 +3696,21 @@ type PodStatus struct { // give the resources on this node to a higher priority pod that is created after preemption.
// +optional NominatedNodeName string + + // HostIP holds the IP address of the host to which the pod is assigned. Empty if the pod has not started yet. + // A pod can be assigned to a node that has a problem in kubelet, which in turn means that HostIP will + // not be updated even if a node is assigned to the pod. // +optional HostIP string + // HostIPs holds the IP addresses allocated to the host. If this field is specified, the first entry must + // match the hostIP field. This list is empty if the pod has not started yet. + // A pod can be assigned to a node that has a problem in kubelet, which in turn means that HostIPs will + // not be updated even if a node is assigned to this pod. + // +optional + HostIPs []HostIP + + // PodIPs holds all of the known IP addresses allocated to the pod. Pods may be assigned AT MOST + // one value for each of IPv4 and IPv6. // +optional @@ -3627,6 +3742,11 @@ type PodStatus struct { // +featureGate=InPlacePodVerticalScaling // +optional Resize PodResizeStatus + + // Status of resource claims. + // +featureGate=DynamicResourceAllocation + // +optional + ResourceClaimStatuses []PodResourceClaimStatus } // +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object @@ -4082,10 +4202,9 @@ type ServiceSpec struct { // This feature depends on whether the underlying cloud-provider supports specifying // the loadBalancerIP when a load balancer is created. // This field will be ignored if the cloud-provider does not support the feature. - // Deprecated: This field was under-specified and its meaning varies across implementations, - // and it cannot support dual-stack. - // As of Kubernetes v1.24, users are encouraged to use implementation-specific annotations when available. - // This field may be removed in a future API version. + // Deprecated: This field was under-specified and its meaning varies across implementations.
+ // Using it is non-portable and it may not support dual-stack. + // Users are encouraged to use implementation-specific annotations when available. // +optional LoadBalancerIP string @@ -4192,6 +4311,8 @@ type ServicePort struct { // // * Kubernetes-defined prefixed names: // * 'kubernetes.io/h2c' - HTTP/2 over cleartext as described in https://www.rfc-editor.org/rfc/rfc7540 + // * 'kubernetes.io/ws' - WebSocket over cleartext as described in https://www.rfc-editor.org/rfc/rfc6455 + // * 'kubernetes.io/wss' - WebSocket over TLS as described in https://www.rfc-editor.org/rfc/rfc6455 // // * Other protocols should use implementation-defined prefixed names such as // mycompany.com/my-custom-protocol. @@ -4346,10 +4467,19 @@ type EndpointPort struct { Protocol Protocol // The application protocol for this port. + // This is used as a hint for implementations to offer richer behavior for protocols that they understand. // This field follows standard Kubernetes label syntax. - // Un-prefixed names are reserved for IANA standard service names (as per + // Valid values are either: + // + // * Un-prefixed protocol names - reserved for IANA standard service names (as per // RFC-6335 and https://www.iana.org/assignments/service-names). - // Non-standard protocols should use prefixed names such as + // + // * Kubernetes-defined prefixed names: + // * 'kubernetes.io/h2c' - HTTP/2 over cleartext as described in https://www.rfc-editor.org/rfc/rfc7540 + // * 'kubernetes.io/ws' - WebSocket over cleartext as described in https://www.rfc-editor.org/rfc/rfc6455 + // * 'kubernetes.io/wss' - WebSocket over TLS as described in https://www.rfc-editor.org/rfc/rfc6455 + // + // * Other protocols should use implementation-defined prefixed names such as // mycompany.com/my-custom-protocol. 
// +optional AppProtocol *string @@ -5802,12 +5932,9 @@ type WindowsSecurityContextOptions struct { RunAsUserName *string // HostProcess determines if a container should be run as a 'Host Process' container. - // This field is alpha-level and will only be honored by components that enable the - // WindowsHostProcessContainers feature flag. Setting this field without the feature - // flag will result in errors when validating the Pod. All of a Pod's containers must - // have the same effective HostProcess value (it is not allowed to have a mix of HostProcess - // containers and non-HostProcess containers). In addition, if HostProcess is true - // then HostNetwork must also be set to true. + // All of a Pod's containers must have the same effective HostProcess value + // (it is not allowed to have a mix of HostProcess containers and non-HostProcess containers). + // In addition, if HostProcess is true then HostNetwork must also be set to true. // +optional HostProcess *bool } diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/apis/core/v1/OWNERS b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/apis/core/v1/OWNERS index dfcc2e714b4c..a47166dcef20 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/apis/core/v1/OWNERS +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/apis/core/v1/OWNERS @@ -2,7 +2,6 @@ reviewers: - thockin - - lavalamp - smarterclayton - wojtek-t - deads2k diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/apis/core/v1/conversion.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/apis/core/v1/conversion.go index dd92428cdf3b..6793616f3a89 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/apis/core/v1/conversion.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/apis/core/v1/conversion.go @@ -42,6 +42,7 @@ func addConversionFuncs(scheme *runtime.Scheme) error { "spec.restartPolicy", "spec.schedulerName", "spec.serviceAccountName", + "spec.hostNetwork", "status.phase", "status.podIP", "status.podIPs", 
diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/apis/core/v1/defaults.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/apis/core/v1/defaults.go index 433ae39b51f5..51337fe169cb 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/apis/core/v1/defaults.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/apis/core/v1/defaults.go @@ -118,8 +118,8 @@ func SetDefaults_Service(obj *v1.Service) { if sp.Protocol == "" { sp.Protocol = v1.ProtocolTCP } - if sp.TargetPort == intstr.FromInt(0) || sp.TargetPort == intstr.FromString("") { - sp.TargetPort = intstr.FromInt(int(sp.Port)) + if sp.TargetPort == intstr.FromInt32(0) || sp.TargetPort == intstr.FromString("") { + sp.TargetPort = intstr.FromInt32(sp.Port) } } // Defaults ExternalTrafficPolicy field for NodePort / LoadBalancer service @@ -199,6 +199,11 @@ func SetDefaults_Pod(obj *v1.Pod) { enableServiceLinks := v1.DefaultEnableServiceLinks obj.Spec.EnableServiceLinks = &enableServiceLinks } + + if obj.Spec.HostNetwork { + defaultHostNetworkPorts(&obj.Spec.Containers) + defaultHostNetworkPorts(&obj.Spec.InitContainers) + } } func SetDefaults_PodSpec(obj *v1.PodSpec) { // New fields added here will break upgrade tests: @@ -211,9 +216,11 @@ func SetDefaults_PodSpec(obj *v1.PodSpec) { if obj.RestartPolicy == "" { obj.RestartPolicy = v1.RestartPolicyAlways } - if obj.HostNetwork { - defaultHostNetworkPorts(&obj.Containers) - defaultHostNetworkPorts(&obj.InitContainers) + if utilfeature.DefaultFeatureGate.Enabled(features.DefaultHostNetworkHostPortsInPodTemplates) { + if obj.HostNetwork { + defaultHostNetworkPorts(&obj.Containers) + defaultHostNetworkPorts(&obj.InitContainers) + } } if obj.SecurityContext == nil { obj.SecurityContext = &v1.PodSecurityContext{} diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/apis/core/v1/helper/helpers.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/apis/core/v1/helper/helpers.go index 34aca4f2c528..932e3ac6921e 100644 --- 
a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/apis/core/v1/helper/helpers.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/apis/core/v1/helper/helpers.go @@ -315,12 +315,6 @@ func AddOrUpdateTolerationInPodSpec(spec *v1.PodSpec, toleration *v1.Toleration) return true } -// AddOrUpdateTolerationInPod tries to add a toleration to the pod's toleration list. -// Returns true if something was updated, false otherwise. -func AddOrUpdateTolerationInPod(pod *v1.Pod, toleration *v1.Toleration) bool { - return AddOrUpdateTolerationInPodSpec(&pod.Spec, toleration) -} - // GetMatchingTolerations returns true and list of Tolerations matching all Taints if all are tolerated, or false otherwise. func GetMatchingTolerations(taints []v1.Taint, tolerations []v1.Toleration) (bool, []v1.Toleration) { if len(taints) == 0 { diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/apis/core/v1/zz_generated.conversion.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/apis/core/v1/zz_generated.conversion.go index 685f1dac9da7..8a432e8d7057 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/apis/core/v1/zz_generated.conversion.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/apis/core/v1/zz_generated.conversion.go @@ -732,6 +732,16 @@ func RegisterConversions(s *runtime.Scheme) error { }); err != nil { return err } + if err := s.AddGeneratedConversionFunc((*v1.HostIP)(nil), (*core.HostIP)(nil), func(a, b interface{}, scope conversion.Scope) error { + return Convert_v1_HostIP_To_core_HostIP(a.(*v1.HostIP), b.(*core.HostIP), scope) + }); err != nil { + return err + } + if err := s.AddGeneratedConversionFunc((*core.HostIP)(nil), (*v1.HostIP)(nil), func(a, b interface{}, scope conversion.Scope) error { + return Convert_core_HostIP_To_v1_HostIP(a.(*core.HostIP), b.(*v1.HostIP), scope) + }); err != nil { + return err + } if err := s.AddGeneratedConversionFunc((*v1.HostPathVolumeSource)(nil), (*core.HostPathVolumeSource)(nil), func(a, b interface{}, scope 
conversion.Scope) error { return Convert_v1_HostPathVolumeSource_To_core_HostPathVolumeSource(a.(*v1.HostPathVolumeSource), b.(*core.HostPathVolumeSource), scope) }); err != nil { @@ -1382,6 +1392,16 @@ func RegisterConversions(s *runtime.Scheme) error { }); err != nil { return err } + if err := s.AddGeneratedConversionFunc((*v1.PodResourceClaimStatus)(nil), (*core.PodResourceClaimStatus)(nil), func(a, b interface{}, scope conversion.Scope) error { + return Convert_v1_PodResourceClaimStatus_To_core_PodResourceClaimStatus(a.(*v1.PodResourceClaimStatus), b.(*core.PodResourceClaimStatus), scope) + }); err != nil { + return err + } + if err := s.AddGeneratedConversionFunc((*core.PodResourceClaimStatus)(nil), (*v1.PodResourceClaimStatus)(nil), func(a, b interface{}, scope conversion.Scope) error { + return Convert_core_PodResourceClaimStatus_To_v1_PodResourceClaimStatus(a.(*core.PodResourceClaimStatus), b.(*v1.PodResourceClaimStatus), scope) + }); err != nil { + return err + } if err := s.AddGeneratedConversionFunc((*v1.PodSchedulingGate)(nil), (*core.PodSchedulingGate)(nil), func(a, b interface{}, scope conversion.Scope) error { return Convert_v1_PodSchedulingGate_To_core_PodSchedulingGate(a.(*v1.PodSchedulingGate), b.(*core.PodSchedulingGate), scope) }); err != nil { @@ -2986,6 +3006,7 @@ func autoConvert_v1_Container_To_core_Container(in *v1.Container, out *core.Cont return err } out.ResizePolicy = *(*[]core.ContainerResizePolicy)(unsafe.Pointer(&in.ResizePolicy)) + out.RestartPolicy = (*core.ContainerRestartPolicy)(unsafe.Pointer(in.RestartPolicy)) out.VolumeMounts = *(*[]core.VolumeMount)(unsafe.Pointer(&in.VolumeMounts)) out.VolumeDevices = *(*[]core.VolumeDevice)(unsafe.Pointer(&in.VolumeDevices)) out.LivenessProbe = (*core.Probe)(unsafe.Pointer(in.LivenessProbe)) @@ -3020,6 +3041,7 @@ func autoConvert_core_Container_To_v1_Container(in *core.Container, out *v1.Cont return err } out.ResizePolicy = *(*[]v1.ContainerResizePolicy)(unsafe.Pointer(&in.ResizePolicy)) + 
out.RestartPolicy = (*v1.ContainerRestartPolicy)(unsafe.Pointer(in.RestartPolicy)) out.VolumeMounts = *(*[]v1.VolumeMount)(unsafe.Pointer(&in.VolumeMounts)) out.VolumeDevices = *(*[]v1.VolumeDevice)(unsafe.Pointer(&in.VolumeDevices)) out.LivenessProbe = (*v1.Probe)(unsafe.Pointer(in.LivenessProbe)) @@ -3602,6 +3624,7 @@ func autoConvert_v1_EphemeralContainerCommon_To_core_EphemeralContainerCommon(in return err } out.ResizePolicy = *(*[]core.ContainerResizePolicy)(unsafe.Pointer(&in.ResizePolicy)) + out.RestartPolicy = (*core.ContainerRestartPolicy)(unsafe.Pointer(in.RestartPolicy)) out.VolumeMounts = *(*[]core.VolumeMount)(unsafe.Pointer(&in.VolumeMounts)) out.VolumeDevices = *(*[]core.VolumeDevice)(unsafe.Pointer(&in.VolumeDevices)) out.LivenessProbe = (*core.Probe)(unsafe.Pointer(in.LivenessProbe)) @@ -3636,6 +3659,7 @@ func autoConvert_core_EphemeralContainerCommon_To_v1_EphemeralContainerCommon(in return err } out.ResizePolicy = *(*[]v1.ContainerResizePolicy)(unsafe.Pointer(&in.ResizePolicy)) + out.RestartPolicy = (*v1.ContainerRestartPolicy)(unsafe.Pointer(in.RestartPolicy)) out.VolumeMounts = *(*[]v1.VolumeMount)(unsafe.Pointer(&in.VolumeMounts)) out.VolumeDevices = *(*[]v1.VolumeDevice)(unsafe.Pointer(&in.VolumeDevices)) out.LivenessProbe = (*v1.Probe)(unsafe.Pointer(in.LivenessProbe)) @@ -4119,6 +4143,26 @@ func Convert_core_HostAlias_To_v1_HostAlias(in *core.HostAlias, out *v1.HostAlia return autoConvert_core_HostAlias_To_v1_HostAlias(in, out, s) } +func autoConvert_v1_HostIP_To_core_HostIP(in *v1.HostIP, out *core.HostIP, s conversion.Scope) error { + out.IP = in.IP + return nil +} + +// Convert_v1_HostIP_To_core_HostIP is an autogenerated conversion function. 
+func Convert_v1_HostIP_To_core_HostIP(in *v1.HostIP, out *core.HostIP, s conversion.Scope) error { + return autoConvert_v1_HostIP_To_core_HostIP(in, out, s) +} + +func autoConvert_core_HostIP_To_v1_HostIP(in *core.HostIP, out *v1.HostIP, s conversion.Scope) error { + out.IP = in.IP + return nil +} + +// Convert_core_HostIP_To_v1_HostIP is an autogenerated conversion function. +func Convert_core_HostIP_To_v1_HostIP(in *core.HostIP, out *v1.HostIP, s conversion.Scope) error { + return autoConvert_core_HostIP_To_v1_HostIP(in, out, s) +} + func autoConvert_v1_HostPathVolumeSource_To_core_HostPathVolumeSource(in *v1.HostPathVolumeSource, out *core.HostPathVolumeSource, s conversion.Scope) error { out.Path = in.Path out.Type = (*core.HostPathType)(unsafe.Pointer(in.Type)) @@ -5318,7 +5362,7 @@ func autoConvert_v1_PersistentVolumeClaimStatus_To_core_PersistentVolumeClaimSta out.Capacity = *(*core.ResourceList)(unsafe.Pointer(&in.Capacity)) out.Conditions = *(*[]core.PersistentVolumeClaimCondition)(unsafe.Pointer(&in.Conditions)) out.AllocatedResources = *(*core.ResourceList)(unsafe.Pointer(&in.AllocatedResources)) - out.ResizeStatus = (*core.PersistentVolumeClaimResizeStatus)(unsafe.Pointer(in.ResizeStatus)) + out.AllocatedResourceStatuses = *(*map[core.ResourceName]core.ClaimResourceStatus)(unsafe.Pointer(&in.AllocatedResourceStatuses)) return nil } @@ -5333,7 +5377,7 @@ func autoConvert_core_PersistentVolumeClaimStatus_To_v1_PersistentVolumeClaimSta out.Capacity = *(*v1.ResourceList)(unsafe.Pointer(&in.Capacity)) out.Conditions = *(*[]v1.PersistentVolumeClaimCondition)(unsafe.Pointer(&in.Conditions)) out.AllocatedResources = *(*v1.ResourceList)(unsafe.Pointer(&in.AllocatedResources)) - out.ResizeStatus = (*v1.PersistentVolumeClaimResizeStatus)(unsafe.Pointer(in.ResizeStatus)) + out.AllocatedResourceStatuses = *(*map[v1.ResourceName]v1.ClaimResourceStatus)(unsafe.Pointer(&in.AllocatedResourceStatuses)) return nil } @@ -5528,6 +5572,7 @@ func 
autoConvert_v1_PersistentVolumeStatus_To_core_PersistentVolumeStatus(in *v1 out.Phase = core.PersistentVolumePhase(in.Phase) out.Message = in.Message out.Reason = in.Reason + out.LastPhaseTransitionTime = (*metav1.Time)(unsafe.Pointer(in.LastPhaseTransitionTime)) return nil } @@ -5540,6 +5585,7 @@ func autoConvert_core_PersistentVolumeStatus_To_v1_PersistentVolumeStatus(in *co out.Phase = v1.PersistentVolumePhase(in.Phase) out.Message = in.Message out.Reason = in.Reason + out.LastPhaseTransitionTime = (*metav1.Time)(unsafe.Pointer(in.LastPhaseTransitionTime)) return nil } @@ -6207,6 +6253,28 @@ func Convert_core_PodResourceClaim_To_v1_PodResourceClaim(in *core.PodResourceCl return autoConvert_core_PodResourceClaim_To_v1_PodResourceClaim(in, out, s) } +func autoConvert_v1_PodResourceClaimStatus_To_core_PodResourceClaimStatus(in *v1.PodResourceClaimStatus, out *core.PodResourceClaimStatus, s conversion.Scope) error { + out.Name = in.Name + out.ResourceClaimName = (*string)(unsafe.Pointer(in.ResourceClaimName)) + return nil +} + +// Convert_v1_PodResourceClaimStatus_To_core_PodResourceClaimStatus is an autogenerated conversion function. +func Convert_v1_PodResourceClaimStatus_To_core_PodResourceClaimStatus(in *v1.PodResourceClaimStatus, out *core.PodResourceClaimStatus, s conversion.Scope) error { + return autoConvert_v1_PodResourceClaimStatus_To_core_PodResourceClaimStatus(in, out, s) +} + +func autoConvert_core_PodResourceClaimStatus_To_v1_PodResourceClaimStatus(in *core.PodResourceClaimStatus, out *v1.PodResourceClaimStatus, s conversion.Scope) error { + out.Name = in.Name + out.ResourceClaimName = (*string)(unsafe.Pointer(in.ResourceClaimName)) + return nil +} + +// Convert_core_PodResourceClaimStatus_To_v1_PodResourceClaimStatus is an autogenerated conversion function. 
+func Convert_core_PodResourceClaimStatus_To_v1_PodResourceClaimStatus(in *core.PodResourceClaimStatus, out *v1.PodResourceClaimStatus, s conversion.Scope) error { + return autoConvert_core_PodResourceClaimStatus_To_v1_PodResourceClaimStatus(in, out, s) +} + func autoConvert_v1_PodSchedulingGate_To_core_PodSchedulingGate(in *v1.PodSchedulingGate, out *core.PodSchedulingGate, s conversion.Scope) error { out.Name = in.Name return nil @@ -6413,6 +6481,7 @@ func autoConvert_v1_PodStatus_To_core_PodStatus(in *v1.PodStatus, out *core.PodS out.Reason = in.Reason out.NominatedNodeName = in.NominatedNodeName out.HostIP = in.HostIP + out.HostIPs = *(*[]core.HostIP)(unsafe.Pointer(&in.HostIPs)) // WARNING: in.PodIP requires manual conversion: does not exist in peer-type out.PodIPs = *(*[]core.PodIP)(unsafe.Pointer(&in.PodIPs)) out.StartTime = (*metav1.Time)(unsafe.Pointer(in.StartTime)) @@ -6421,6 +6490,7 @@ func autoConvert_v1_PodStatus_To_core_PodStatus(in *v1.PodStatus, out *core.PodS out.QOSClass = core.PodQOSClass(in.QOSClass) out.EphemeralContainerStatuses = *(*[]core.ContainerStatus)(unsafe.Pointer(&in.EphemeralContainerStatuses)) out.Resize = core.PodResizeStatus(in.Resize) + out.ResourceClaimStatuses = *(*[]core.PodResourceClaimStatus)(unsafe.Pointer(&in.ResourceClaimStatuses)) return nil } @@ -6431,6 +6501,7 @@ func autoConvert_core_PodStatus_To_v1_PodStatus(in *core.PodStatus, out *v1.PodS out.Reason = in.Reason out.NominatedNodeName = in.NominatedNodeName out.HostIP = in.HostIP + out.HostIPs = *(*[]v1.HostIP)(unsafe.Pointer(&in.HostIPs)) out.PodIPs = *(*[]v1.PodIP)(unsafe.Pointer(&in.PodIPs)) out.StartTime = (*metav1.Time)(unsafe.Pointer(in.StartTime)) out.QOSClass = v1.PodQOSClass(in.QOSClass) @@ -6438,6 +6509,7 @@ func autoConvert_core_PodStatus_To_v1_PodStatus(in *core.PodStatus, out *v1.PodS out.ContainerStatuses = *(*[]v1.ContainerStatus)(unsafe.Pointer(&in.ContainerStatuses)) out.EphemeralContainerStatuses = 
*(*[]v1.ContainerStatus)(unsafe.Pointer(&in.EphemeralContainerStatuses)) out.Resize = v1.PodResizeStatus(in.Resize) + out.ResourceClaimStatuses = *(*[]v1.PodResourceClaimStatus)(unsafe.Pointer(&in.ResourceClaimStatuses)) return nil } diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/apis/core/validation/OWNERS b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/apis/core/validation/OWNERS index 054fa14fa74c..30b589bd6d8a 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/apis/core/validation/OWNERS +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/apis/core/validation/OWNERS @@ -2,7 +2,6 @@ reviewers: - thockin - - lavalamp - smarterclayton - wojtek-t - deads2k diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/apis/core/validation/validation.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/apis/core/validation/validation.go index 465c92380a94..cd9cbbb8b7e2 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/apis/core/validation/validation.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/apis/core/validation/validation.go @@ -1055,6 +1055,7 @@ func validateDownwardAPIVolumeFile(file *core.DownwardAPIVolumeFile, fldPath *fi if file.ResourceFieldRef != nil { allErrs = append(allErrs, field.Invalid(fldPath, "resource", "fieldRef and resourceFieldRef can not be specified simultaneously")) } + allErrs = append(allErrs, validateDownwardAPIHostIPs(file.FieldRef, fldPath.Child("fieldRef"), opts)...) } else if file.ResourceFieldRef != nil { localValidContainerResourceFieldPathPrefixes := validContainerResourceFieldPathPrefixesWithDownwardAPIHugePages allErrs = append(allErrs, validateContainerResourceFieldSelector(file.ResourceFieldRef, &validContainerResourceFieldPathExpressions, &localValidContainerResourceFieldPathPrefixes, fldPath.Child("resourceFieldRef"), true)...) 
@@ -2019,18 +2020,15 @@ type PersistentVolumeClaimSpecValidationOptions struct { AllowReadWriteOncePod bool // Allow users to recover from previously failing expansion operation EnableRecoverFromExpansionFailure bool - // Allow assigning StorageClass to unbound PVCs retroactively - EnableRetroactiveDefaultStorageClass bool // Allow to validate the label value of the label selector AllowInvalidLabelValueInSelector bool } func ValidationOptionsForPersistentVolumeClaim(pvc, oldPvc *core.PersistentVolumeClaim) PersistentVolumeClaimSpecValidationOptions { opts := PersistentVolumeClaimSpecValidationOptions{ - AllowReadWriteOncePod: utilfeature.DefaultFeatureGate.Enabled(features.ReadWriteOncePod), - EnableRecoverFromExpansionFailure: utilfeature.DefaultFeatureGate.Enabled(features.RecoverVolumeExpansionFailure), - EnableRetroactiveDefaultStorageClass: utilfeature.DefaultFeatureGate.Enabled(features.RetroactiveDefaultStorageClass), - AllowInvalidLabelValueInSelector: false, + AllowReadWriteOncePod: utilfeature.DefaultFeatureGate.Enabled(features.ReadWriteOncePod), + EnableRecoverFromExpansionFailure: utilfeature.DefaultFeatureGate.Enabled(features.RecoverVolumeExpansionFailure), + AllowInvalidLabelValueInSelector: false, } if oldPvc == nil { // If there's no old PVC, use the options based solely on feature enablement @@ -2048,6 +2046,11 @@ func ValidationOptionsForPersistentVolumeClaim(pvc, oldPvc *core.PersistentVolum // If the old object allowed "ReadWriteOncePod", continue to allow it in the new object opts.AllowReadWriteOncePod = true } + + if helper.ClaimContainsAllocatedResources(oldPvc) || + helper.ClaimContainsAllocatedResourceStatus(oldPvc) { + opts.EnableRecoverFromExpansionFailure = true + } return opts } @@ -2286,24 +2289,39 @@ func validateStorageClassUpgradeFromAnnotation(oldAnnotations, newAnnotations ma // Provide an upgrade path from PVC with nil storage class. 
We allow update of // StorageClassName only if following four conditions are met at the same time: -// 1. RetroactiveDefaultStorageClass FeatureGate is enabled -// 2. The new pvc's StorageClassName is not nil -// 3. The old pvc's StorageClassName is nil -// 4. The old pvc either does not have beta annotation set, or the beta annotation matches new pvc's StorageClassName +// 1. The new pvc's StorageClassName is not nil +// 2. The old pvc's StorageClassName is nil +// 3. The old pvc either does not have beta annotation set, or the beta annotation matches new pvc's StorageClassName func validateStorageClassUpgradeFromNil(oldAnnotations map[string]string, oldScName, newScName *string, opts PersistentVolumeClaimSpecValidationOptions) bool { oldAnnotation, oldAnnotationExist := oldAnnotations[core.BetaStorageClassAnnotation] - return opts.EnableRetroactiveDefaultStorageClass /* condition 1 */ && - newScName != nil /* condition 2 */ && - oldScName == nil /* condition 3 */ && - (!oldAnnotationExist || *newScName == oldAnnotation) /* condition 4 */ + return newScName != nil /* condition 1 */ && + oldScName == nil /* condition 2 */ && + (!oldAnnotationExist || *newScName == oldAnnotation) /* condition 3 */ +} + +func validatePersistentVolumeClaimResourceKey(value string, fldPath *field.Path) field.ErrorList { + allErrs := field.ErrorList{} + for _, msg := range validation.IsQualifiedName(value) { + allErrs = append(allErrs, field.Invalid(fldPath, value, msg)) + } + if len(allErrs) != 0 { + return allErrs + } + // For native resource names such as - either unprefixed names or with kubernetes.io prefix, + // only allowed value is storage + if helper.IsNativeResource(core.ResourceName(value)) { + if core.ResourceName(value) != core.ResourceStorage { + return append(allErrs, field.NotSupported(fldPath, value, []string{string(core.ResourceStorage)})) + } + } + return allErrs } -var resizeStatusSet = sets.NewString(string(core.PersistentVolumeClaimNoExpansionInProgress), - 
string(core.PersistentVolumeClaimControllerExpansionInProgress), - string(core.PersistentVolumeClaimControllerExpansionFailed), - string(core.PersistentVolumeClaimNodeExpansionPending), - string(core.PersistentVolumeClaimNodeExpansionInProgress), - string(core.PersistentVolumeClaimNodeExpansionFailed)) +var resizeStatusSet = sets.NewString(string(core.PersistentVolumeClaimControllerResizeInProgress), + string(core.PersistentVolumeClaimControllerResizeFailed), + string(core.PersistentVolumeClaimNodeResizePending), + string(core.PersistentVolumeClaimNodeResizeInProgress), + string(core.PersistentVolumeClaimNodeResizeFailed)) // ValidatePersistentVolumeClaimStatusUpdate validates an update to status of a PersistentVolumeClaim func ValidatePersistentVolumeClaimStatusUpdate(newPvc, oldPvc *core.PersistentVolumeClaim, validationOpts PersistentVolumeClaimSpecValidationOptions) field.ErrorList { @@ -2320,19 +2338,26 @@ func ValidatePersistentVolumeClaimStatusUpdate(newPvc, oldPvc *core.PersistentVo allErrs = append(allErrs, validateBasicResource(qty, capPath.Key(string(r)))...) } if validationOpts.EnableRecoverFromExpansionFailure { - resizeStatusPath := field.NewPath("status", "resizeStatus") - if newPvc.Status.ResizeStatus != nil { - resizeStatus := *newPvc.Status.ResizeStatus - if !resizeStatusSet.Has(string(resizeStatus)) { - allErrs = append(allErrs, field.NotSupported(resizeStatusPath, resizeStatus, resizeStatusSet.List())) + resizeStatusPath := field.NewPath("status", "allocatedResourceStatus") + if newPvc.Status.AllocatedResourceStatuses != nil { + resizeStatus := newPvc.Status.AllocatedResourceStatuses + for k, v := range resizeStatus { + if errs := validatePersistentVolumeClaimResourceKey(k.String(), resizeStatusPath); len(errs) > 0 { + allErrs = append(allErrs, errs...) 
+ } + if !resizeStatusSet.Has(string(v)) { + allErrs = append(allErrs, field.NotSupported(resizeStatusPath, k, resizeStatusSet.List())) + continue + } } } allocPath := field.NewPath("status", "allocatedResources") for r, qty := range newPvc.Status.AllocatedResources { - if r != core.ResourceStorage { - allErrs = append(allErrs, field.NotSupported(allocPath, r, []string{string(core.ResourceStorage)})) + if errs := validatePersistentVolumeClaimResourceKey(r.String(), allocPath); len(errs) > 0 { + allErrs = append(allErrs, errs...) continue } + if errs := validateBasicResource(qty, allocPath.Key(string(r))); len(errs) > 0 { allErrs = append(allErrs, errs...) } else { @@ -2408,8 +2433,10 @@ var validEnvDownwardAPIFieldPathExpressions = sets.NewString( "spec.nodeName", "spec.serviceAccountName", "status.hostIP", + "status.hostIPs", "status.podIP", - "status.podIPs") + "status.podIPs", +) var validContainerResourceFieldPathExpressions = sets.NewString("limits.cpu", "limits.memory", "limits.ephemeral-storage", "requests.cpu", "requests.memory", "requests.ephemeral-storage") @@ -2430,6 +2457,7 @@ func validateEnvVarValueFrom(ev core.EnvVar, fldPath *field.Path, opts PodValida if ev.ValueFrom.FieldRef != nil { numSources++ allErrs = append(allErrs, validateObjectFieldSelector(ev.ValueFrom.FieldRef, &validEnvDownwardAPIFieldPathExpressions, fldPath.Child("fieldRef"))...) + allErrs = append(allErrs, validateDownwardAPIHostIPs(ev.ValueFrom.FieldRef, fldPath.Child("fieldRef"), opts)...) 
} if ev.ValueFrom.ResourceFieldRef != nil { numSources++ @@ -2493,6 +2521,16 @@ func validateObjectFieldSelector(fs *core.ObjectFieldSelector, expressions *sets return allErrs } +func validateDownwardAPIHostIPs(fieldSel *core.ObjectFieldSelector, fldPath *field.Path, opts PodValidationOptions) field.ErrorList { + allErrs := field.ErrorList{} + if !opts.AllowHostIPsField { + if fieldSel.FieldPath == "status.hostIPs" { + allErrs = append(allErrs, field.Forbidden(fldPath, "may not be set when feature gate 'PodHostIPs' is not enabled")) + } + } + return allErrs +} + func validateContainerResourceFieldSelector(fs *core.ResourceFieldSelector, expressions *sets.String, prefixes *sets.String, fldPath *field.Path, volume bool) field.ErrorList { allErrs := field.ErrorList{} @@ -2821,6 +2859,45 @@ func validatePodResourceClaimSource(claimSource core.ClaimSource, fldPath *field return allErrs } +func validateLivenessProbe(probe *core.Probe, fldPath *field.Path) field.ErrorList { + allErrs := field.ErrorList{} + + if probe == nil { + return allErrs + } + allErrs = append(allErrs, validateProbe(probe, fldPath)...) + if probe.SuccessThreshold != 1 { + allErrs = append(allErrs, field.Invalid(fldPath.Child("successThreshold"), probe.SuccessThreshold, "must be 1")) + } + return allErrs +} + +func validateReadinessProbe(probe *core.Probe, fldPath *field.Path) field.ErrorList { + allErrs := field.ErrorList{} + + if probe == nil { + return allErrs + } + allErrs = append(allErrs, validateProbe(probe, fldPath)...) + if probe.TerminationGracePeriodSeconds != nil { + allErrs = append(allErrs, field.Invalid(fldPath.Child("terminationGracePeriodSeconds"), probe.TerminationGracePeriodSeconds, "must not be set for readinessProbes")) + } + return allErrs +} + +func validateStartupProbe(probe *core.Probe, fldPath *field.Path) field.ErrorList { + allErrs := field.ErrorList{} + + if probe == nil { + return allErrs + } + allErrs = append(allErrs, validateProbe(probe, fldPath)...) 
+ if probe.SuccessThreshold != 1 { + allErrs = append(allErrs, field.Invalid(fldPath.Child("successThreshold"), probe.SuccessThreshold, "must be 1")) + } + return allErrs +} + func validateProbe(probe *core.Probe, fldPath *field.Path) field.ErrorList { allErrs := field.ErrorList{} @@ -2840,6 +2917,23 @@ func validateProbe(probe *core.Probe, fldPath *field.Path) field.ErrorList { return allErrs } +func validateInitContainerRestartPolicy(restartPolicy *core.ContainerRestartPolicy, fldPath *field.Path) field.ErrorList { + var allErrors field.ErrorList + + if restartPolicy == nil { + return allErrors + } + switch *restartPolicy { + case core.ContainerRestartPolicyAlways: + break + default: + validValues := []string{string(core.ContainerRestartPolicyAlways)} + allErrors = append(allErrors, field.NotSupported(fldPath, *restartPolicy, validValues)) + } + + return allErrors +} + type commonHandler struct { Exec *core.ExecAction HTTPGet *core.HTTPGetAction @@ -2970,7 +3064,7 @@ func validateTCPSocketAction(tcp *core.TCPSocketAction, fldPath *field.Path) fie return ValidatePortNumOrName(tcp.Port, fldPath.Child("port")) } func validateGRPCAction(grpc *core.GRPCAction, fldPath *field.Path) field.ErrorList { - return ValidatePortNumOrName(intstr.FromInt(int(grpc.Port)), fldPath.Child("port")) + return ValidatePortNumOrName(intstr.FromInt32(grpc.Port), fldPath.Child("port")) } func validateHandler(handler commonHandler, fldPath *field.Path) field.ErrorList { numHandlers := 0 @@ -3170,6 +3264,13 @@ func validateInitContainers(containers []core.Container, regularContainers []cor // Apply the validation common to all container types allErrs = append(allErrs, validateContainerCommon(&ctr, volumes, podClaimNames, idxPath, opts)...) + restartAlways := false + // Apply the validation specific to init containers + if ctr.RestartPolicy != nil { + allErrs = append(allErrs, validateInitContainerRestartPolicy(ctr.RestartPolicy, idxPath.Child("restartPolicy"))...) 
+ restartAlways = *ctr.RestartPolicy == core.ContainerRestartPolicyAlways + } + // Names must be unique within regular and init containers. Collisions with ephemeral containers // will be detected by validateEphemeralContainers(). if allNames.Has(ctr.Name) { @@ -3181,19 +3282,31 @@ func validateInitContainers(containers []core.Container, regularContainers []cor // Check for port conflicts in init containers individually since init containers run one-by-one. allErrs = append(allErrs, checkHostPortConflicts([]core.Container{ctr}, fldPath)...) - // These fields are disallowed for init containers. - if ctr.Lifecycle != nil { - allErrs = append(allErrs, field.Forbidden(idxPath.Child("lifecycle"), "may not be set for init containers")) - } - if ctr.LivenessProbe != nil { - allErrs = append(allErrs, field.Forbidden(idxPath.Child("livenessProbe"), "may not be set for init containers")) - } - if ctr.ReadinessProbe != nil { - allErrs = append(allErrs, field.Forbidden(idxPath.Child("readinessProbe"), "may not be set for init containers")) - } - if ctr.StartupProbe != nil { - allErrs = append(allErrs, field.Forbidden(idxPath.Child("startupProbe"), "may not be set for init containers")) + switch { + case restartAlways: + if ctr.Lifecycle != nil { + allErrs = append(allErrs, validateLifecycle(ctr.Lifecycle, idxPath.Child("lifecycle"))...) + } + allErrs = append(allErrs, validateLivenessProbe(ctr.LivenessProbe, idxPath.Child("livenessProbe"))...) + allErrs = append(allErrs, validateReadinessProbe(ctr.ReadinessProbe, idxPath.Child("readinessProbe"))...) + allErrs = append(allErrs, validateStartupProbe(ctr.StartupProbe, idxPath.Child("startupProbe"))...) + + default: + // These fields are disallowed for init containers. 
+ if ctr.Lifecycle != nil { + allErrs = append(allErrs, field.Forbidden(idxPath.Child("lifecycle"), "may not be set for init containers without restartPolicy=Always")) + } + if ctr.LivenessProbe != nil { + allErrs = append(allErrs, field.Forbidden(idxPath.Child("livenessProbe"), "may not be set for init containers without restartPolicy=Always")) + } + if ctr.ReadinessProbe != nil { + allErrs = append(allErrs, field.Forbidden(idxPath.Child("readinessProbe"), "may not be set for init containers without restartPolicy=Always")) + } + if ctr.StartupProbe != nil { + allErrs = append(allErrs, field.Forbidden(idxPath.Child("startupProbe"), "may not be set for init containers without restartPolicy=Always")) + } } + if len(ctr.ResizePolicy) > 0 { allErrs = append(allErrs, field.Invalid(idxPath.Child("resizePolicy"), ctr.ResizePolicy, "must not be set for init containers")) } @@ -3256,25 +3369,6 @@ func validateHostUsers(spec *core.PodSpec, fldPath *field.Path) field.ErrorList return allErrs } - // For now only these volumes are supported: - // - configmap - // - secret - // - downwardAPI - // - emptyDir - // - projected - // So reject anything else. - for i, vol := range spec.Volumes { - switch { - case vol.EmptyDir != nil: - case vol.Secret != nil: - case vol.DownwardAPI != nil: - case vol.ConfigMap != nil: - case vol.Projected != nil: - default: - allErrs = append(allErrs, field.Forbidden(fldPath.Child("volumes").Index(i), "volume type not supported when `pod.Spec.HostUsers` is false")) - } - } - // We decided to restrict the usage of userns with other host namespaces: // https://github.com/kubernetes/kubernetes/pull/111090#discussion_r935994282 // The tl;dr is: you can easily run into permission issues that seem unexpected, we don't @@ -3318,22 +3412,20 @@ func validateContainers(containers []core.Container, volumes map[string]core.Vol allNames.Insert(ctr.Name) } - // These fields are only allowed for regular containers, so only check supported values here. 
- // Init and ephemeral container validation will return field.Forbidden() for these paths. + // These fields are allowed for regular containers and restartable init + // containers. + // Regular init container and ephemeral container validation will return + // field.Forbidden() for these paths. if ctr.Lifecycle != nil { allErrs = append(allErrs, validateLifecycle(ctr.Lifecycle, path.Child("lifecycle"))...) } - allErrs = append(allErrs, validateProbe(ctr.LivenessProbe, path.Child("livenessProbe"))...) - if ctr.LivenessProbe != nil && ctr.LivenessProbe.SuccessThreshold != 1 { - allErrs = append(allErrs, field.Invalid(path.Child("livenessProbe", "successThreshold"), ctr.LivenessProbe.SuccessThreshold, "must be 1")) - } - allErrs = append(allErrs, validateProbe(ctr.ReadinessProbe, path.Child("readinessProbe"))...) - if ctr.ReadinessProbe != nil && ctr.ReadinessProbe.TerminationGracePeriodSeconds != nil { - allErrs = append(allErrs, field.Invalid(path.Child("readinessProbe", "terminationGracePeriodSeconds"), ctr.ReadinessProbe.TerminationGracePeriodSeconds, "must not be set for readinessProbes")) - } - allErrs = append(allErrs, validateProbe(ctr.StartupProbe, path.Child("startupProbe"))...) - if ctr.StartupProbe != nil && ctr.StartupProbe.SuccessThreshold != 1 { - allErrs = append(allErrs, field.Invalid(path.Child("startupProbe", "successThreshold"), ctr.StartupProbe.SuccessThreshold, "must be 1")) + allErrs = append(allErrs, validateLivenessProbe(ctr.LivenessProbe, path.Child("livenessProbe"))...) + allErrs = append(allErrs, validateReadinessProbe(ctr.ReadinessProbe, path.Child("readinessProbe"))...) + allErrs = append(allErrs, validateStartupProbe(ctr.StartupProbe, path.Child("startupProbe"))...) 
+ + // These fields are disallowed for regular containers + if ctr.RestartPolicy != nil { + allErrs = append(allErrs, field.Forbidden(path.Child("restartPolicy"), "may not be set for non-init containers")) } } @@ -3399,14 +3491,10 @@ const ( // restrictions in Linux libc name resolution handling. // Max number of DNS name servers. MaxDNSNameservers = 3 - // Expanded max number of domains in the search path list. - MaxDNSSearchPathsExpanded = 32 - // Expanded max number of characters in the search path. - MaxDNSSearchListCharsExpanded = 2048 // Max number of domains in the search path list. - MaxDNSSearchPathsLegacy = 6 - // Max number of characters in the search path list. - MaxDNSSearchListCharsLegacy = 256 + MaxDNSSearchPaths = 32 + // Max number of characters in the search path. + MaxDNSSearchListChars = 2048 ) func validateReadinessGates(readinessGates []core.PodReadinessGate, fldPath *field.Path) field.ErrorList { @@ -3455,16 +3543,12 @@ func validatePodDNSConfig(dnsConfig *core.PodDNSConfig, dnsPolicy *core.DNSPolic } } // Validate searches. - maxDNSSearchPaths, maxDNSSearchListChars := MaxDNSSearchPathsLegacy, MaxDNSSearchListCharsLegacy - if opts.AllowExpandedDNSConfig { - maxDNSSearchPaths, maxDNSSearchListChars = MaxDNSSearchPathsExpanded, MaxDNSSearchListCharsExpanded - } - if len(dnsConfig.Searches) > maxDNSSearchPaths { - allErrs = append(allErrs, field.Invalid(fldPath.Child("searches"), dnsConfig.Searches, fmt.Sprintf("must not have more than %v search paths", maxDNSSearchPaths))) + if len(dnsConfig.Searches) > MaxDNSSearchPaths { + allErrs = append(allErrs, field.Invalid(fldPath.Child("searches"), dnsConfig.Searches, fmt.Sprintf("must not have more than %v search paths", MaxDNSSearchPaths))) } // Include the space between search paths. 
- if len(strings.Join(dnsConfig.Searches, " ")) > maxDNSSearchListChars { - allErrs = append(allErrs, field.Invalid(fldPath.Child("searches"), dnsConfig.Searches, fmt.Sprintf("must not have more than %v characters (including spaces) in the search list", maxDNSSearchListChars))) + if len(strings.Join(dnsConfig.Searches, " ")) > MaxDNSSearchListChars { + allErrs = append(allErrs, field.Invalid(fldPath.Child("searches"), dnsConfig.Searches, fmt.Sprintf("must not have more than %v characters (including spaces) in the search list", MaxDNSSearchListChars))) } for i, search := range dnsConfig.Searches { // it is fine to have a trailing dot @@ -3481,15 +3565,35 @@ func validatePodDNSConfig(dnsConfig *core.PodDNSConfig, dnsPolicy *core.DNSPolic return allErrs } -func validateHostNetwork(hostNetwork bool, containers []core.Container, fldPath *field.Path) field.ErrorList { +// validatePodHostNetworkDeps checks fields which depend on whether HostNetwork is +// true or not. It should be called on all PodSpecs, but opts can change what +// is enforce. E.g. opts.ResourceIsPod should only be set when called in the +// context of a Pod, and not on PodSpecs which are embedded in other resources +// (e.g. Deployments). +func validatePodHostNetworkDeps(spec *core.PodSpec, fldPath *field.Path, opts PodValidationOptions) field.ErrorList { + // For we keep `.HostNetwork` in .SecurityContext on the internal + // version of Pod. 
+ hostNetwork := false + if spec.SecurityContext != nil { + hostNetwork = spec.SecurityContext.HostNetwork + } + allErrors := field.ErrorList{} + if hostNetwork { - for i, container := range containers { + fldPath := fldPath.Child("containers") + for i, container := range spec.Containers { portsPath := fldPath.Index(i).Child("ports") for i, port := range container.Ports { idxPath := portsPath.Index(i) - if port.HostPort != port.ContainerPort { - allErrors = append(allErrors, field.Invalid(idxPath.Child("containerPort"), port.ContainerPort, "must match `hostPort` when `hostNetwork` is true")) + // At this point, we know that HostNetwork is true. If this + // PodSpec is in a Pod (opts.ResourceIsPod), then HostPort must + // be the same value as ContainerPort. If this PodSpec is in + // some other resource (e.g. Deployment) we allow 0 (i.e. + // unspecified) because it will be defaulted when the Pod is + // ultimately created, but we do not allow any other values. + if hp, cp := port.HostPort, port.ContainerPort; (opts.ResourceIsPod || hp != 0) && hp != cp { + allErrors = append(allErrors, field.Invalid(idxPath.Child("hostPort"), port.HostPort, "must match `containerPort` when `hostNetwork` is true")) } } } @@ -3688,25 +3792,29 @@ type PodValidationOptions struct { AllowInvalidLabelValueInSelector bool // Allow pod spec to use non-integer multiple of huge page unit size AllowIndivisibleHugePagesValues bool - // Allow more DNSSearchPaths and longer DNSSearchListChars - AllowExpandedDNSConfig bool + // Allow pod spec to use status.hostIPs in downward API if feature is enabled + AllowHostIPsField bool // Allow invalid topologySpreadConstraint labelSelector for backward compatibility AllowInvalidTopologySpreadConstraintLabelSelector bool // Allow node selector additions for gated pods. AllowMutableNodeSelectorAndNodeAffinity bool + // The top-level resource being validated is a Pod, not just a PodSpec + // embedded in some other resource. 
+ ResourceIsPod bool } // validatePodMetadataAndSpec tests if required fields in the pod.metadata and pod.spec are set, // and is called by ValidatePodCreate and ValidatePodUpdate. func validatePodMetadataAndSpec(pod *core.Pod, opts PodValidationOptions) field.ErrorList { - fldPath := field.NewPath("metadata") - allErrs := ValidateObjectMeta(&pod.ObjectMeta, true, ValidatePodName, fldPath) - allErrs = append(allErrs, ValidatePodSpecificAnnotations(pod.ObjectMeta.Annotations, &pod.Spec, fldPath.Child("annotations"), opts)...) - allErrs = append(allErrs, ValidatePodSpec(&pod.Spec, &pod.ObjectMeta, field.NewPath("spec"), opts)...) + metaPath := field.NewPath("metadata") + specPath := field.NewPath("spec") + + allErrs := ValidateObjectMeta(&pod.ObjectMeta, true, ValidatePodName, metaPath) + allErrs = append(allErrs, ValidatePodSpecificAnnotations(pod.ObjectMeta.Annotations, &pod.Spec, metaPath.Child("annotations"), opts)...) + allErrs = append(allErrs, ValidatePodSpec(&pod.Spec, &pod.ObjectMeta, specPath, opts)...) 
// we do additional validation only pertinent for pods and not pod templates // this was done to preserve backwards compatibility - specPath := field.NewPath("spec") if pod.Spec.ServiceAccountName == "" { for vi, volume := range pod.Spec.Volumes { @@ -3774,6 +3882,58 @@ func validatePodIPs(pod *core.Pod) field.ErrorList { return allErrs } +// validateHostIPs validates IPs in pod status +func validateHostIPs(pod *core.Pod) field.ErrorList { + allErrs := field.ErrorList{} + + if len(pod.Status.HostIPs) == 0 { + return allErrs + } + + hostIPsField := field.NewPath("status", "hostIPs") + + // hostIP must be equal to hostIPs[0].IP + if pod.Status.HostIP != pod.Status.HostIPs[0].IP { + allErrs = append(allErrs, field.Invalid(hostIPsField.Index(0).Child("ip"), pod.Status.HostIPs[0].IP, "must be equal to `hostIP`")) + } + + // all HostPs must be valid IPs + for i, hostIP := range pod.Status.HostIPs { + for _, msg := range validation.IsValidIP(hostIP.IP) { + allErrs = append(allErrs, field.Invalid(hostIPsField.Index(i), hostIP.IP, msg)) + } + } + + // if we have more than one Pod.HostIP then + // - validate for dual stack + // - validate for duplication + if len(pod.Status.HostIPs) > 1 { + seen := sets.String{} + hostIPs := make([]string, 0, len(pod.Status.HostIPs)) + + // There should be no duplicates in list of Pod.HostIPs + for i, hostIP := range pod.Status.HostIPs { + hostIPs = append(hostIPs, hostIP.IP) + if seen.Has(hostIP.IP) { + allErrs = append(allErrs, field.Duplicate(hostIPsField.Index(i), hostIP)) + } + seen.Insert(hostIP.IP) + } + + dualStack, err := netutils.IsDualStackIPStrings(hostIPs) + if err != nil { + allErrs = append(allErrs, field.InternalError(hostIPsField, fmt.Errorf("failed to check for dual stack with error:%v", err))) + } + + // We only support one from each IP family (i.e. max two IPs in this list). 
+ if !dualStack || len(hostIPs) > 2 { + allErrs = append(allErrs, field.Invalid(hostIPsField, pod.Status.HostIPs, "may specify no more than one IP for each IP family")) + } + } + + return allErrs +} + // ValidatePodSpec tests that the specified PodSpec has valid data. // This includes checking formatting and uniqueness. It also canonicalizes the // structure by setting default values and implementing any backwards-compatibility @@ -3790,10 +3950,11 @@ func ValidatePodSpec(spec *core.PodSpec, podMeta *metav1.ObjectMeta, fldPath *fi allErrs = append(allErrs, validateContainers(spec.Containers, vols, podClaimNames, fldPath.Child("containers"), opts)...) allErrs = append(allErrs, validateInitContainers(spec.InitContainers, spec.Containers, vols, podClaimNames, fldPath.Child("initContainers"), opts)...) allErrs = append(allErrs, validateEphemeralContainers(spec.EphemeralContainers, spec.Containers, spec.InitContainers, vols, podClaimNames, fldPath.Child("ephemeralContainers"), opts)...) + allErrs = append(allErrs, validatePodHostNetworkDeps(spec, fldPath, opts)...) allErrs = append(allErrs, validateRestartPolicy(&spec.RestartPolicy, fldPath.Child("restartPolicy"))...) allErrs = append(allErrs, validateDNSPolicy(&spec.DNSPolicy, fldPath.Child("dnsPolicy"))...) allErrs = append(allErrs, unversionedvalidation.ValidateLabels(spec.NodeSelector, fldPath.Child("nodeSelector"))...) - allErrs = append(allErrs, ValidatePodSecurityContext(spec.SecurityContext, spec, fldPath, fldPath.Child("securityContext"), opts)...) + allErrs = append(allErrs, validatePodSpecSecurityContext(spec.SecurityContext, spec, fldPath, fldPath.Child("securityContext"), opts)...) allErrs = append(allErrs, validateImagePullSecrets(spec.ImagePullSecrets, fldPath.Child("imagePullSecrets"))...) allErrs = append(allErrs, validateAffinity(spec.Affinity, opts, fldPath.Child("affinity"))...) allErrs = append(allErrs, validatePodDNSConfig(spec.DNSConfig, &spec.DNSPolicy, fldPath.Child("dnsConfig"), opts)...) 
@@ -4396,12 +4557,13 @@ func validateSysctls(sysctls []core.Sysctl, fldPath *field.Path) field.ErrorList return allErrs } -// ValidatePodSecurityContext test that the specified PodSecurityContext has valid data. -func ValidatePodSecurityContext(securityContext *core.PodSecurityContext, spec *core.PodSpec, specPath, fldPath *field.Path, opts PodValidationOptions) field.ErrorList { +// validatePodSpecSecurityContext verifies the SecurityContext of a PodSpec, +// whether that is defined in a Pod or in an embedded PodSpec (e.g. a +// Deployment's pod template). +func validatePodSpecSecurityContext(securityContext *core.PodSecurityContext, spec *core.PodSpec, specPath, fldPath *field.Path, opts PodValidationOptions) field.ErrorList { allErrs := field.ErrorList{} if securityContext != nil { - allErrs = append(allErrs, validateHostNetwork(securityContext.HostNetwork, spec.Containers, specPath.Child("containers"))...) if securityContext.FSGroup != nil { for _, msg := range validation.IsValidGroupID(*securityContext.FSGroup) { allErrs = append(allErrs, field.Invalid(fldPath.Child("fsGroup"), *(securityContext.FSGroup), msg)) @@ -4727,7 +4889,14 @@ func ValidatePodUpdate(newPod, oldPod *core.Pod, opts PodValidationOptions) fiel // already effectively nil, no change needed case mungedPodSpec.Affinity == nil && oldNodeAffinity != nil: mungedPodSpec.Affinity = &core.Affinity{NodeAffinity: oldNodeAffinity} // +k8s:verify-mutation:reason=clone + case mungedPodSpec.Affinity != nil && oldPod.Spec.Affinity == nil && + mungedPodSpec.Affinity.PodAntiAffinity == nil && mungedPodSpec.Affinity.PodAffinity == nil: + // We ensure no other fields are being changed, but the NodeAffinity. If that's the case, and the + // old pod's affinity is nil, we set the mungedPodSpec's affinity to nil. + mungedPodSpec.Affinity = nil // +k8s:verify-mutation:reason=clone default: + // The node affinity is being updated and the old pod Affinity is not nil. 
+ // We set the mungedPodSpec's node affinity to the old pod's node affinity. mungedPodSpec.Affinity.NodeAffinity = oldNodeAffinity // +k8s:verify-mutation:reason=clone } } @@ -4795,11 +4964,16 @@ func ValidatePodStatusUpdate(newPod, oldPod *core.Pod, opts PodValidationOptions allErrs = append(allErrs, ValidateContainerStateTransition(newPod.Status.InitContainerStatuses, oldPod.Status.InitContainerStatuses, fldPath.Child("initContainerStatuses"), oldPod.Spec.RestartPolicy)...) // The kubelet will never restart ephemeral containers, so treat them like they have an implicit RestartPolicyNever. allErrs = append(allErrs, ValidateContainerStateTransition(newPod.Status.EphemeralContainerStatuses, oldPod.Status.EphemeralContainerStatuses, fldPath.Child("ephemeralContainerStatuses"), core.RestartPolicyNever)...) + allErrs = append(allErrs, validatePodResourceClaimStatuses(newPod.Status.ResourceClaimStatuses, newPod.Spec.ResourceClaims, fldPath.Child("resourceClaimStatuses"))...) if newIPErrs := validatePodIPs(newPod); len(newIPErrs) > 0 { allErrs = append(allErrs, newIPErrs...) } + if newIPErrs := validateHostIPs(newPod); len(newIPErrs) > 0 { + allErrs = append(allErrs, newIPErrs...) + } + return allErrs } @@ -4816,6 +4990,42 @@ func validatePodConditions(conditions []core.PodCondition, fldPath *field.Path) return allErrs } +// validatePodResourceClaimStatuses validates the ResourceClaimStatuses slice in a pod status. +func validatePodResourceClaimStatuses(statuses []core.PodResourceClaimStatus, podClaims []core.PodResourceClaim, fldPath *field.Path) field.ErrorList { + var allErrs field.ErrorList + + claimNames := sets.New[string]() + for i, status := range statuses { + idxPath := fldPath.Index(i) + // There's no need to check the content of the name. If it matches an entry, + // then it is valid, otherwise we reject it here. 
+ if !havePodClaim(podClaims, status.Name) { + allErrs = append(allErrs, field.Invalid(idxPath.Child("name"), status.Name, "must match the name of an entry in `spec.resourceClaims`")) + } + if claimNames.Has(status.Name) { + allErrs = append(allErrs, field.Duplicate(idxPath.Child("name"), status.Name)) + } else { + claimNames.Insert(status.Name) + } + if status.ResourceClaimName != nil { + for _, detail := range ValidateResourceClaimName(*status.ResourceClaimName, false) { + allErrs = append(allErrs, field.Invalid(idxPath.Child("name"), status.ResourceClaimName, detail)) + } + } + } + + return allErrs +} + +func havePodClaim(podClaims []core.PodResourceClaim, name string) bool { + for _, podClaim := range podClaims { + if podClaim.Name == name { + return true + } + } + return false +} + // ValidatePodEphemeralContainersUpdate tests that a user update to EphemeralContainers is valid. // newPod and oldPod must only differ in their EphemeralContainers. func ValidatePodEphemeralContainersUpdate(newPod, oldPod *core.Pod, opts PodValidationOptions) field.ErrorList { @@ -5238,14 +5448,14 @@ func ValidateServiceStatusUpdate(service, oldService *core.Service) field.ErrorL // ValidateReplicationController tests if required fields in the replication controller are set. func ValidateReplicationController(controller *core.ReplicationController, opts PodValidationOptions) field.ErrorList { allErrs := ValidateObjectMeta(&controller.ObjectMeta, true, ValidateReplicationControllerName, field.NewPath("metadata")) - allErrs = append(allErrs, ValidateReplicationControllerSpec(&controller.Spec, field.NewPath("spec"), opts)...) + allErrs = append(allErrs, ValidateReplicationControllerSpec(&controller.Spec, nil, field.NewPath("spec"), opts)...) return allErrs } // ValidateReplicationControllerUpdate tests if required fields in the replication controller are set. 
func ValidateReplicationControllerUpdate(controller, oldController *core.ReplicationController, opts PodValidationOptions) field.ErrorList { allErrs := ValidateObjectMetaUpdate(&controller.ObjectMeta, &oldController.ObjectMeta, field.NewPath("metadata")) - allErrs = append(allErrs, ValidateReplicationControllerSpec(&controller.Spec, field.NewPath("spec"), opts)...) + allErrs = append(allErrs, ValidateReplicationControllerSpec(&controller.Spec, &oldController.Spec, field.NewPath("spec"), opts)...) return allErrs } @@ -5290,7 +5500,7 @@ func ValidateNonEmptySelector(selectorMap map[string]string, fldPath *field.Path } // Validates the given template and ensures that it is in accordance with the desired selector and replicas. -func ValidatePodTemplateSpecForRC(template *core.PodTemplateSpec, selectorMap map[string]string, replicas int32, fldPath *field.Path, opts PodValidationOptions) field.ErrorList { +func ValidatePodTemplateSpecForRC(template, oldTemplate *core.PodTemplateSpec, selectorMap map[string]string, replicas int32, fldPath *field.Path, opts PodValidationOptions) field.ErrorList { allErrs := field.ErrorList{} if template == nil { allErrs = append(allErrs, field.Required(fldPath, "")) @@ -5304,8 +5514,13 @@ func ValidatePodTemplateSpecForRC(template *core.PodTemplateSpec, selectorMap ma } } allErrs = append(allErrs, ValidatePodTemplateSpec(template, fldPath, opts)...) + // get rid of apivalidation.ValidateReadOnlyPersistentDisks,stop passing oldTemplate to this function + var oldVols []core.Volume + if oldTemplate != nil { + oldVols = oldTemplate.Spec.Volumes // +k8s:verify-mutation:reason=clone + } if replicas > 1 { - allErrs = append(allErrs, ValidateReadOnlyPersistentDisks(template.Spec.Volumes, fldPath.Child("spec", "volumes"))...) + allErrs = append(allErrs, ValidateReadOnlyPersistentDisks(template.Spec.Volumes, oldVols, fldPath.Child("spec", "volumes"))...) } // RestartPolicy has already been first-order validated as per ValidatePodTemplateSpec(). 
if template.Spec.RestartPolicy != core.RestartPolicyAlways { @@ -5319,12 +5534,17 @@ func ValidatePodTemplateSpecForRC(template *core.PodTemplateSpec, selectorMap ma } // ValidateReplicationControllerSpec tests if required fields in the replication controller spec are set. -func ValidateReplicationControllerSpec(spec *core.ReplicationControllerSpec, fldPath *field.Path, opts PodValidationOptions) field.ErrorList { +func ValidateReplicationControllerSpec(spec, oldSpec *core.ReplicationControllerSpec, fldPath *field.Path, opts PodValidationOptions) field.ErrorList { allErrs := field.ErrorList{} allErrs = append(allErrs, ValidateNonnegativeField(int64(spec.MinReadySeconds), fldPath.Child("minReadySeconds"))...) allErrs = append(allErrs, ValidateNonEmptySelector(spec.Selector, fldPath.Child("selector"))...) allErrs = append(allErrs, ValidateNonnegativeField(int64(spec.Replicas), fldPath.Child("replicas"))...) - allErrs = append(allErrs, ValidatePodTemplateSpecForRC(spec.Template, spec.Selector, spec.Replicas, fldPath.Child("template"), opts)...) + // oldSpec is not empty, pass oldSpec.template. + var oldTemplate *core.PodTemplateSpec + if oldSpec != nil { + oldTemplate = oldSpec.Template // +k8s:verify-mutation:reason=clone + } + allErrs = append(allErrs, ValidatePodTemplateSpecForRC(spec.Template, oldTemplate, spec.Selector, spec.Replicas, fldPath.Child("template"), opts)...) 
return allErrs } @@ -5344,17 +5564,29 @@ func ValidatePodTemplateSpec(spec *core.PodTemplateSpec, fldPath *field.Path, op return allErrs } -func ValidateReadOnlyPersistentDisks(volumes []core.Volume, fldPath *field.Path) field.ErrorList { +// ValidateReadOnlyPersistentDisks stick this AFTER the short-circuit checks +func ValidateReadOnlyPersistentDisks(volumes, oldVolumes []core.Volume, fldPath *field.Path) field.ErrorList { allErrs := field.ErrorList{} + + if utilfeature.DefaultFeatureGate.Enabled(features.SkipReadOnlyValidationGCE) { + return field.ErrorList{} + } + + isWriteablePD := func(vol *core.Volume) bool { + return vol.GCEPersistentDisk != nil && !vol.GCEPersistentDisk.ReadOnly + } + + for i := range oldVolumes { + if isWriteablePD(&oldVolumes[i]) { + return field.ErrorList{} + } + } + for i := range volumes { - vol := &volumes[i] idxPath := fldPath.Index(i) - if vol.GCEPersistentDisk != nil { - if !vol.GCEPersistentDisk.ReadOnly { - allErrs = append(allErrs, field.Invalid(idxPath.Child("gcePersistentDisk", "readOnly"), false, "must be true for replicated pods > 1; GCE PD can only be mounted on multiple machines if it is read-only")) - } + if isWriteablePD(&volumes[i]) { + allErrs = append(allErrs, field.Invalid(idxPath.Child("gcePersistentDisk", "readOnly"), false, "must be true for replicated pods > 1; GCE PD can only be mounted on multiple machines if it is read-only")) } - // TODO: What to do for AWS? 
It doesn't support replicas } return allErrs } diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/apis/core/zz_generated.deepcopy.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/apis/core/zz_generated.deepcopy.go index f8d32ea9df5e..471fdbd6f398 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/apis/core/zz_generated.deepcopy.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/apis/core/zz_generated.deepcopy.go @@ -793,6 +793,11 @@ func (in *Container) DeepCopyInto(out *Container) { *out = make([]ContainerResizePolicy, len(*in)) copy(*out, *in) } + if in.RestartPolicy != nil { + in, out := &in.RestartPolicy, &out.RestartPolicy + *out = new(ContainerRestartPolicy) + **out = **in + } if in.VolumeMounts != nil { in, out := &in.VolumeMounts, &out.VolumeMounts *out = make([]VolumeMount, len(*in)) @@ -1420,6 +1425,11 @@ func (in *EphemeralContainerCommon) DeepCopyInto(out *EphemeralContainerCommon) *out = make([]ContainerResizePolicy, len(*in)) copy(*out, *in) } + if in.RestartPolicy != nil { + in, out := &in.RestartPolicy, &out.RestartPolicy + *out = new(ContainerRestartPolicy) + **out = **in + } if in.VolumeMounts != nil { in, out := &in.VolumeMounts, &out.VolumeMounts *out = make([]VolumeMount, len(*in)) @@ -1871,6 +1881,22 @@ func (in *HostAlias) DeepCopy() *HostAlias { return out } +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *HostIP) DeepCopyInto(out *HostIP) { + *out = *in + return +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new HostIP. +func (in *HostIP) DeepCopy() *HostIP { + if in == nil { + return nil + } + out := new(HostIP) + in.DeepCopyInto(out) + return out +} + // DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. 
func (in *HostPathVolumeSource) DeepCopyInto(out *HostPathVolumeSource) { *out = *in @@ -2897,7 +2923,7 @@ func (in *PersistentVolume) DeepCopyInto(out *PersistentVolume) { out.TypeMeta = in.TypeMeta in.ObjectMeta.DeepCopyInto(&out.ObjectMeta) in.Spec.DeepCopyInto(&out.Spec) - out.Status = in.Status + in.Status.DeepCopyInto(&out.Status) return } @@ -3074,10 +3100,12 @@ func (in *PersistentVolumeClaimStatus) DeepCopyInto(out *PersistentVolumeClaimSt (*out)[key] = val.DeepCopy() } } - if in.ResizeStatus != nil { - in, out := &in.ResizeStatus, &out.ResizeStatus - *out = new(PersistentVolumeClaimResizeStatus) - **out = **in + if in.AllocatedResourceStatuses != nil { + in, out := &in.AllocatedResourceStatuses, &out.AllocatedResourceStatuses + *out = make(map[ResourceName]ClaimResourceStatus, len(*in)) + for key, val := range *in { + (*out)[key] = val + } } return } @@ -3337,6 +3365,10 @@ func (in *PersistentVolumeSpec) DeepCopy() *PersistentVolumeSpec { // DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. func (in *PersistentVolumeStatus) DeepCopyInto(out *PersistentVolumeStatus) { *out = *in + if in.LastPhaseTransitionTime != nil { + in, out := &in.LastPhaseTransitionTime, &out.LastPhaseTransitionTime + *out = (*in).DeepCopy() + } return } @@ -3809,6 +3841,27 @@ func (in *PodResourceClaim) DeepCopy() *PodResourceClaim { return out } +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *PodResourceClaimStatus) DeepCopyInto(out *PodResourceClaimStatus) { + *out = *in + if in.ResourceClaimName != nil { + in, out := &in.ResourceClaimName, &out.ResourceClaimName + *out = new(string) + **out = **in + } + return +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PodResourceClaimStatus. 
+func (in *PodResourceClaimStatus) DeepCopy() *PodResourceClaimStatus { + if in == nil { + return nil + } + out := new(PodResourceClaimStatus) + in.DeepCopyInto(out) + return out +} + // DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. func (in *PodSchedulingGate) DeepCopyInto(out *PodSchedulingGate) { *out = *in @@ -4093,6 +4146,11 @@ func (in *PodStatus) DeepCopyInto(out *PodStatus) { (*in)[i].DeepCopyInto(&(*out)[i]) } } + if in.HostIPs != nil { + in, out := &in.HostIPs, &out.HostIPs + *out = make([]HostIP, len(*in)) + copy(*out, *in) + } if in.PodIPs != nil { in, out := &in.PodIPs, &out.PodIPs *out = make([]PodIP, len(*in)) @@ -4123,6 +4181,13 @@ func (in *PodStatus) DeepCopyInto(out *PodStatus) { (*in)[i].DeepCopyInto(&(*out)[i]) } } + if in.ResourceClaimStatuses != nil { + in, out := &in.ResourceClaimStatuses, &out.ResourceClaimStatuses + *out = make([]PodResourceClaimStatus, len(*in)) + for i := range *in { + (*in)[i].DeepCopyInto(&(*out)[i]) + } + } return } diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/apis/extensions/OWNERS b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/apis/extensions/OWNERS index e559bf2aeeec..7911d9af4a1f 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/apis/extensions/OWNERS +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/apis/extensions/OWNERS @@ -2,7 +2,6 @@ reviewers: - thockin - - lavalamp - smarterclayton - wojtek-t - deads2k @@ -19,5 +18,6 @@ reviewers: - mwielgus - soltysh - dims + - jpbetz labels: - sig/apps diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/apis/networking/types.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/apis/networking/types.go index 6c9eaa7a4844..9ec17540baef 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/apis/networking/types.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/apis/networking/types.go @@ -35,11 +35,6 @@ type NetworkPolicy struct { // spec represents the 
specification of the desired behavior for this NetworkPolicy. // +optional Spec NetworkPolicySpec - - // status represents the current state of the NetworkPolicy. - // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status - // +optional - Status NetworkPolicyStatus } // PolicyType describes the NetworkPolicy type @@ -201,42 +196,6 @@ type NetworkPolicyPeer struct { IPBlock *IPBlock } -// NetworkPolicyConditionType is the type for status conditions on -// a NetworkPolicy. This type should be used with the -// NetworkPolicyStatus.Conditions field. -type NetworkPolicyConditionType string - -const ( - // NetworkPolicyConditionStatusAccepted represents status of a Network Policy that could be properly parsed by - // the Network Policy provider and will be implemented in the cluster - NetworkPolicyConditionStatusAccepted NetworkPolicyConditionType = "Accepted" - - // NetworkPolicyConditionStatusPartialFailure represents status of a Network Policy that could be partially - // parsed by the Network Policy provider and may not be completely implemented due to a lack of a feature or some - // other condition - NetworkPolicyConditionStatusPartialFailure NetworkPolicyConditionType = "PartialFailure" - - // NetworkPolicyConditionStatusFailure represents status of a Network Policy that could not be parsed by the - // Network Policy provider and will not be implemented in the cluster - NetworkPolicyConditionStatusFailure NetworkPolicyConditionType = "Failure" -) - -// NetworkPolicyConditionReason defines the set of reasons that explain why a -// particular NetworkPolicy condition type has been raised. 
-type NetworkPolicyConditionReason string - -const ( - // NetworkPolicyConditionReasonFeatureNotSupported represents a reason where the Network Policy may not have been - // implemented in the cluster due to a lack of some feature not supported by the Network Policy provider - NetworkPolicyConditionReasonFeatureNotSupported NetworkPolicyConditionReason = "FeatureNotSupported" -) - -// NetworkPolicyStatus describes the current state of the NetworkPolicy. -type NetworkPolicyStatus struct { - // conditions holds an array of metav1.Condition that describes the state of the NetworkPolicy. - Conditions []metav1.Condition -} - // +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object // NetworkPolicyList is a list of NetworkPolicy objects. diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/apis/networking/zz_generated.deepcopy.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/apis/networking/zz_generated.deepcopy.go index e6a47cc1b0c9..3a39c6cac408 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/apis/networking/zz_generated.deepcopy.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/apis/networking/zz_generated.deepcopy.go @@ -661,7 +661,6 @@ func (in *NetworkPolicy) DeepCopyInto(out *NetworkPolicy) { out.TypeMeta = in.TypeMeta in.ObjectMeta.DeepCopyInto(&out.ObjectMeta) in.Spec.DeepCopyInto(&out.Spec) - in.Status.DeepCopyInto(&out.Status) return } @@ -874,29 +873,6 @@ func (in *NetworkPolicySpec) DeepCopy() *NetworkPolicySpec { return out } -// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. 
-func (in *NetworkPolicyStatus) DeepCopyInto(out *NetworkPolicyStatus) { - *out = *in - if in.Conditions != nil { - in, out := &in.Conditions, &out.Conditions - *out = make([]v1.Condition, len(*in)) - for i := range *in { - (*in)[i].DeepCopyInto(&(*out)[i]) - } - } - return -} - -// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new NetworkPolicyStatus. -func (in *NetworkPolicyStatus) DeepCopy() *NetworkPolicyStatus { - if in == nil { - return nil - } - out := new(NetworkPolicyStatus) - in.DeepCopyInto(out) - return out -} - // DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. func (in *ParentReference) DeepCopyInto(out *ParentReference) { *out = *in diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/controller/controller_ref_manager.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/controller/controller_ref_manager.go index 740c98d32a88..68a99ea6f59e 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/controller/controller_ref_manager.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/controller/controller_ref_manager.go @@ -236,8 +236,8 @@ func (m *PodControllerRefManager) AdoptPod(ctx context.Context, pod *v1.Pod) err // ReleasePod sends a patch to free the pod from the control of the controller. // It returns the error if the patching fails. 404 and 422 errors are ignored. 
func (m *PodControllerRefManager) ReleasePod(ctx context.Context, pod *v1.Pod) error { - klog.V(2).Infof("patching pod %s_%s to remove its controllerRef to %s/%s:%s", - pod.Namespace, pod.Name, m.controllerKind.GroupVersion(), m.controllerKind.Kind, m.Controller.GetName()) + logger := klog.FromContext(ctx) + logger.V(2).Info("Patching pod to remove its controllerRef", "pod", klog.KObj(pod), "gvk", m.controllerKind, "controller", m.Controller.GetName()) patchBytes, err := GenerateDeleteOwnerRefStrategicMergeBytes(pod.UID, []types.UID{m.Controller.GetUID()}, m.finalizers...) if err != nil { return err @@ -361,8 +361,8 @@ func (m *ReplicaSetControllerRefManager) AdoptReplicaSet(ctx context.Context, rs // ReleaseReplicaSet sends a patch to free the ReplicaSet from the control of the Deployment controller. // It returns the error if the patching fails. 404 and 422 errors are ignored. func (m *ReplicaSetControllerRefManager) ReleaseReplicaSet(ctx context.Context, replicaSet *apps.ReplicaSet) error { - klog.V(2).Infof("patching ReplicaSet %s_%s to remove its controllerRef to %s/%s:%s", - replicaSet.Namespace, replicaSet.Name, m.controllerKind.GroupVersion(), m.controllerKind.Kind, m.Controller.GetName()) + logger := klog.FromContext(ctx) + logger.V(2).Info("Patching ReplicaSet to remove its controllerRef", "replicaSet", klog.KObj(replicaSet), "gvk", m.controllerKind, "controller", m.Controller.GetName()) patchBytes, err := GenerateDeleteOwnerRefStrategicMergeBytes(replicaSet.UID, []types.UID{m.Controller.GetUID()}) if err != nil { return err @@ -499,8 +499,8 @@ func (m *ControllerRevisionControllerRefManager) AdoptControllerRevision(ctx con // ReleaseControllerRevision sends a patch to free the ControllerRevision from the control of its controller. // It returns the error if the patching fails. 404 and 422 errors are ignored. 
func (m *ControllerRevisionControllerRefManager) ReleaseControllerRevision(ctx context.Context, history *apps.ControllerRevision) error { - klog.V(2).Infof("patching ControllerRevision %s_%s to remove its controllerRef to %s/%s:%s", - history.Namespace, history.Name, m.controllerKind.GroupVersion(), m.controllerKind.Kind, m.Controller.GetName()) + logger := klog.FromContext(ctx) + logger.V(2).Info("Patching ControllerRevision to remove its controllerRef", "controllerRevision", klog.KObj(history), "gvk", m.controllerKind, "controller", m.Controller.GetName()) patchBytes, err := GenerateDeleteOwnerRefStrategicMergeBytes(history.UID, []types.UID{m.Controller.GetUID()}) if err != nil { return err diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/controller/controller_utils.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/controller/controller_utils.go index 2f4bc44be019..6a44ec3c0364 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/controller/controller_utils.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/controller/controller_utils.go @@ -146,15 +146,15 @@ var ExpKeyFunc = func(obj interface{}) (string, error) { // types of controllers, because the keys might conflict across types. 
type ControllerExpectationsInterface interface { GetExpectations(controllerKey string) (*ControlleeExpectations, bool, error) - SatisfiedExpectations(controllerKey string) bool - DeleteExpectations(controllerKey string) - SetExpectations(controllerKey string, add, del int) error - ExpectCreations(controllerKey string, adds int) error - ExpectDeletions(controllerKey string, dels int) error - CreationObserved(controllerKey string) - DeletionObserved(controllerKey string) - RaiseExpectations(controllerKey string, add, del int) - LowerExpectations(controllerKey string, add, del int) + SatisfiedExpectations(logger klog.Logger, controllerKey string) bool + DeleteExpectations(logger klog.Logger, controllerKey string) + SetExpectations(logger klog.Logger, controllerKey string, add, del int) error + ExpectCreations(logger klog.Logger, controllerKey string, adds int) error + ExpectDeletions(logger klog.Logger, controllerKey string, dels int) error + CreationObserved(logger klog.Logger, controllerKey string) + DeletionObserved(logger klog.Logger, controllerKey string) + RaiseExpectations(logger klog.Logger, controllerKey string, add, del int) + LowerExpectations(logger klog.Logger, controllerKey string, add, del int) } // ControllerExpectations is a cache mapping controllers to what they expect to see before being woken up for a sync. @@ -172,10 +172,11 @@ func (r *ControllerExpectations) GetExpectations(controllerKey string) (*Control } // DeleteExpectations deletes the expectations of the given controller from the TTLStore. 
-func (r *ControllerExpectations) DeleteExpectations(controllerKey string) { +func (r *ControllerExpectations) DeleteExpectations(logger klog.Logger, controllerKey string) { if exp, exists, err := r.GetByKey(controllerKey); err == nil && exists { if err := r.Delete(exp); err != nil { - klog.V(2).Infof("Error deleting expectations for controller %v: %v", controllerKey, err) + + logger.V(2).Info("Error deleting expectations", "controller", controllerKey, "err", err) } } } @@ -183,27 +184,27 @@ func (r *ControllerExpectations) DeleteExpectations(controllerKey string) { // SatisfiedExpectations returns true if the required adds/dels for the given controller have been observed. // Add/del counts are established by the controller at sync time, and updated as controllees are observed by the controller // manager. -func (r *ControllerExpectations) SatisfiedExpectations(controllerKey string) bool { +func (r *ControllerExpectations) SatisfiedExpectations(logger klog.Logger, controllerKey string) bool { if exp, exists, err := r.GetExpectations(controllerKey); exists { if exp.Fulfilled() { - klog.V(4).Infof("Controller expectations fulfilled %#v", exp) + logger.V(4).Info("Controller expectations fulfilled", "expectations", exp) return true } else if exp.isExpired() { - klog.V(4).Infof("Controller expectations expired %#v", exp) + logger.V(4).Info("Controller expectations expired", "expectations", exp) return true } else { - klog.V(4).Infof("Controller still waiting on expectations %#v", exp) + logger.V(4).Info("Controller still waiting on expectations", "expectations", exp) return false } } else if err != nil { - klog.V(2).Infof("Error encountered while checking expectations %#v, forcing sync", err) + logger.V(2).Info("Error encountered while checking expectations, forcing sync", "err", err) } else { // When a new controller is created, it doesn't have expectations. // When it doesn't see expected watch events for > TTL, the expectations expire. 
// - In this case it wakes up, creates/deletes controllees, and sets expectations again. // When it has satisfied expectations and no controllees need to be created/destroyed > TTL, the expectations expire. // - In this case it continues without setting expectations till it needs to create/delete controllees. - klog.V(4).Infof("Controller %v either never recorded expectations, or the ttl expired.", controllerKey) + logger.V(4).Info("Controller either never recorded expectations, or the ttl expired", "controller", controllerKey) } // Trigger a sync if we either encountered and error (which shouldn't happen since we're // getting from local store) or this controller hasn't established expectations. @@ -218,46 +219,46 @@ func (exp *ControlleeExpectations) isExpired() bool { } // SetExpectations registers new expectations for the given controller. Forgets existing expectations. -func (r *ControllerExpectations) SetExpectations(controllerKey string, add, del int) error { +func (r *ControllerExpectations) SetExpectations(logger klog.Logger, controllerKey string, add, del int) error { exp := &ControlleeExpectations{add: int64(add), del: int64(del), key: controllerKey, timestamp: clock.RealClock{}.Now()} - klog.V(4).Infof("Setting expectations %#v", exp) + logger.V(4).Info("Setting expectations", "expectations", exp) return r.Add(exp) } -func (r *ControllerExpectations) ExpectCreations(controllerKey string, adds int) error { - return r.SetExpectations(controllerKey, adds, 0) +func (r *ControllerExpectations) ExpectCreations(logger klog.Logger, controllerKey string, adds int) error { + return r.SetExpectations(logger, controllerKey, adds, 0) } -func (r *ControllerExpectations) ExpectDeletions(controllerKey string, dels int) error { - return r.SetExpectations(controllerKey, 0, dels) +func (r *ControllerExpectations) ExpectDeletions(logger klog.Logger, controllerKey string, dels int) error { + return r.SetExpectations(logger, controllerKey, 0, dels) } // Decrements the 
expectation counts of the given controller. -func (r *ControllerExpectations) LowerExpectations(controllerKey string, add, del int) { +func (r *ControllerExpectations) LowerExpectations(logger klog.Logger, controllerKey string, add, del int) { if exp, exists, err := r.GetExpectations(controllerKey); err == nil && exists { exp.Add(int64(-add), int64(-del)) // The expectations might've been modified since the update on the previous line. - klog.V(4).Infof("Lowered expectations %#v", exp) + logger.V(4).Info("Lowered expectations", "expectations", exp) } } // Increments the expectation counts of the given controller. -func (r *ControllerExpectations) RaiseExpectations(controllerKey string, add, del int) { +func (r *ControllerExpectations) RaiseExpectations(logger klog.Logger, controllerKey string, add, del int) { if exp, exists, err := r.GetExpectations(controllerKey); err == nil && exists { exp.Add(int64(add), int64(del)) // The expectations might've been modified since the update on the previous line. - klog.V(4).Infof("Raised expectations %#v", exp) + logger.V(4).Info("Raised expectations", "expectations", exp) } } // CreationObserved atomically decrements the `add` expectation count of the given controller. -func (r *ControllerExpectations) CreationObserved(controllerKey string) { - r.LowerExpectations(controllerKey, 1, 0) +func (r *ControllerExpectations) CreationObserved(logger klog.Logger, controllerKey string) { + r.LowerExpectations(logger, controllerKey, 1, 0) } // DeletionObserved atomically decrements the `del` expectation count of the given controller. -func (r *ControllerExpectations) DeletionObserved(controllerKey string) { - r.LowerExpectations(controllerKey, 0, 1) +func (r *ControllerExpectations) DeletionObserved(logger klog.Logger, controllerKey string) { + r.LowerExpectations(logger, controllerKey, 0, 1) } // ControlleeExpectations track controllee creates/deletes. 
@@ -287,6 +288,20 @@ func (e *ControlleeExpectations) GetExpectations() (int64, int64) { return atomic.LoadInt64(&e.add), atomic.LoadInt64(&e.del) } +// MarshalLog makes a thread-safe copy of the values of the expectations that +// can be used for logging. +func (e *ControlleeExpectations) MarshalLog() interface{} { + return struct { + add int64 + del int64 + key string + }{ + add: atomic.LoadInt64(&e.add), + del: atomic.LoadInt64(&e.del), + key: e.key, + } +} + // NewControllerExpectations returns a store for ControllerExpectations. func NewControllerExpectations() *ControllerExpectations { return &ControllerExpectations{cache.NewStore(ExpKeyFunc)} @@ -335,47 +350,47 @@ func (u *UIDTrackingControllerExpectations) GetUIDs(controllerKey string) sets.S } // ExpectDeletions records expectations for the given deleteKeys, against the given controller. -func (u *UIDTrackingControllerExpectations) ExpectDeletions(rcKey string, deletedKeys []string) error { +func (u *UIDTrackingControllerExpectations) ExpectDeletions(logger klog.Logger, rcKey string, deletedKeys []string) error { expectedUIDs := sets.NewString() for _, k := range deletedKeys { expectedUIDs.Insert(k) } - klog.V(4).Infof("Controller %v waiting on deletions for: %+v", rcKey, deletedKeys) + logger.V(4).Info("Controller waiting on deletions", "controller", rcKey, "keys", deletedKeys) u.uidStoreLock.Lock() defer u.uidStoreLock.Unlock() if existing := u.GetUIDs(rcKey); existing != nil && existing.Len() != 0 { - klog.Errorf("Clobbering existing delete keys: %+v", existing) + logger.Error(nil, "Clobbering existing delete keys", "keys", existing) } if err := u.uidStore.Add(&UIDSet{expectedUIDs, rcKey}); err != nil { return err } - return u.ControllerExpectationsInterface.ExpectDeletions(rcKey, expectedUIDs.Len()) + return u.ControllerExpectationsInterface.ExpectDeletions(logger, rcKey, expectedUIDs.Len()) } // DeletionObserved records the given deleteKey as a deletion, for the given rc. 
-func (u *UIDTrackingControllerExpectations) DeletionObserved(rcKey, deleteKey string) { +func (u *UIDTrackingControllerExpectations) DeletionObserved(logger klog.Logger, rcKey, deleteKey string) { u.uidStoreLock.Lock() defer u.uidStoreLock.Unlock() uids := u.GetUIDs(rcKey) if uids != nil && uids.Has(deleteKey) { - klog.V(4).Infof("Controller %v received delete for pod %v", rcKey, deleteKey) - u.ControllerExpectationsInterface.DeletionObserved(rcKey) + logger.V(4).Info("Controller received delete for pod", "controller", rcKey, "key", deleteKey) + u.ControllerExpectationsInterface.DeletionObserved(logger, rcKey) uids.Delete(deleteKey) } } // DeleteExpectations deletes the UID set and invokes DeleteExpectations on the // underlying ControllerExpectationsInterface. -func (u *UIDTrackingControllerExpectations) DeleteExpectations(rcKey string) { +func (u *UIDTrackingControllerExpectations) DeleteExpectations(logger klog.Logger, rcKey string) { u.uidStoreLock.Lock() defer u.uidStoreLock.Unlock() - u.ControllerExpectationsInterface.DeleteExpectations(rcKey) + u.ControllerExpectationsInterface.DeleteExpectations(logger, rcKey) if uidExp, exists, err := u.uidStore.GetByKey(rcKey); err == nil && exists { if err := u.uidStore.Delete(uidExp); err != nil { - klog.V(2).Infof("Error deleting uid expectations for controller %v: %v", rcKey, err) + logger.V(2).Info("Error deleting uid expectations", "controller", rcKey, "err", err) } } } @@ -573,12 +588,13 @@ func (r RealPodControl) createPods(ctx context.Context, namespace string, pod *v } return err } + logger := klog.FromContext(ctx) accessor, err := meta.Accessor(object) if err != nil { - klog.Errorf("parentObject does not have ObjectMeta, %v", err) + logger.Error(err, "parentObject does not have ObjectMeta") return nil } - klog.V(4).Infof("Controller %v created pod %v", accessor.GetName(), newPod.Name) + logger.V(4).Info("Controller created pod", "controller", accessor.GetName(), "pod", klog.KObj(newPod)) 
r.Recorder.Eventf(object, v1.EventTypeNormal, SuccessfulCreatePodReason, "Created pod: %v", newPod.Name) return nil @@ -589,10 +605,11 @@ func (r RealPodControl) DeletePod(ctx context.Context, namespace string, podID s if err != nil { return fmt.Errorf("object does not have ObjectMeta, %v", err) } - klog.V(2).InfoS("Deleting pod", "controller", accessor.GetName(), "pod", klog.KRef(namespace, podID)) + logger := klog.FromContext(ctx) + logger.V(2).Info("Deleting pod", "controller", accessor.GetName(), "pod", klog.KRef(namespace, podID)) if err := r.KubeClient.CoreV1().Pods(namespace).Delete(ctx, podID, metav1.DeleteOptions{}); err != nil { if apierrors.IsNotFound(err) { - klog.V(4).Infof("pod %v/%v has already been deleted.", namespace, podID) + logger.V(4).Info("Pod has already been deleted.", "pod", klog.KRef(namespace, podID)) return err } r.Recorder.Eventf(object, v1.EventTypeWarning, FailedDeletePodReason, "Error deleting: %v", err) @@ -929,25 +946,49 @@ func maxContainerRestarts(pod *v1.Pod) int { } // FilterActivePods returns pods that have not terminated. 
-func FilterActivePods(pods []*v1.Pod) []*v1.Pod { +func FilterActivePods(logger klog.Logger, pods []*v1.Pod) []*v1.Pod { var result []*v1.Pod for _, p := range pods { if IsPodActive(p) { result = append(result, p) } else { - klog.V(4).Infof("Ignoring inactive pod %v/%v in state %v, deletion time %v", - p.Namespace, p.Name, p.Status.Phase, p.DeletionTimestamp) + logger.V(4).Info("Ignoring inactive pod", "pod", klog.KObj(p), "phase", p.Status.Phase, "deletionTime", p.DeletionTimestamp) } } return result } +func FilterTerminatingPods(pods []*v1.Pod) []*v1.Pod { + var result []*v1.Pod + for _, p := range pods { + if IsPodTerminating(p) { + result = append(result, p) + } + } + return result +} + +func CountTerminatingPods(pods []*v1.Pod) int32 { + numberOfTerminatingPods := 0 + for _, p := range pods { + if IsPodTerminating(p) { + numberOfTerminatingPods += 1 + } + } + return int32(numberOfTerminatingPods) +} + func IsPodActive(p *v1.Pod) bool { return v1.PodSucceeded != p.Status.Phase && v1.PodFailed != p.Status.Phase && p.DeletionTimestamp == nil } +func IsPodTerminating(p *v1.Pod) bool { + return !podutil.IsPodTerminal(p) && + p.DeletionTimestamp != nil +} + // FilterActiveReplicaSets returns replica sets that have (or at least ought to have) pods. 
func FilterActiveReplicaSets(replicaSets []*apps.ReplicaSet) []*apps.ReplicaSet { activeFilter := func(rs *apps.ReplicaSet) bool { diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/controller/daemon/daemon_controller.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/controller/daemon/daemon_controller.go index d8a2f67dcf82..040251918b90 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/controller/daemon/daemon_controller.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/controller/daemon/daemon_controller.go @@ -272,7 +272,7 @@ func (dsc *DaemonSetsController) deleteDaemonset(logger klog.Logger, obj interfa } // Delete expectations for the DaemonSet so if we create a new one with the same name it starts clean - dsc.expectations.DeleteExpectations(key) + dsc.expectations.DeleteExpectations(logger, key) dsc.queue.Add(key) } @@ -518,7 +518,7 @@ func (dsc *DaemonSetsController) addPod(logger klog.Logger, obj interface{}) { return } logger.V(4).Info("Pod added", "pod", klog.KObj(pod)) - dsc.expectations.CreationObserved(dsKey) + dsc.expectations.CreationObserved(logger, dsKey) dsc.enqueueDaemonSet(ds) return } @@ -635,7 +635,7 @@ func (dsc *DaemonSetsController) deletePod(logger klog.Logger, obj interface{}) return } logger.V(4).Info("Pod deleted", "pod", klog.KObj(pod)) - dsc.expectations.DeletionObserved(dsKey) + dsc.expectations.DeletionObserved(logger, dsKey) dsc.enqueueDaemonSet(ds) } @@ -752,7 +752,7 @@ func (dsc *DaemonSetsController) getDaemonPods(ctx context.Context, ds *apps.Dae // This also reconciles ControllerRef by adopting/orphaning. // Note that returned Pods are pointers to objects in the cache. // If you want to modify one, you need to deep-copy it first. 
-func (dsc *DaemonSetsController) getNodesToDaemonPods(ctx context.Context, ds *apps.DaemonSet) (map[string][]*v1.Pod, error) { +func (dsc *DaemonSetsController) getNodesToDaemonPods(ctx context.Context, ds *apps.DaemonSet, includeDeletedTerminal bool) (map[string][]*v1.Pod, error) { claimedPods, err := dsc.getDaemonPods(ctx, ds) if err != nil { return nil, err @@ -761,9 +761,15 @@ func (dsc *DaemonSetsController) getNodesToDaemonPods(ctx context.Context, ds *a nodeToDaemonPods := make(map[string][]*v1.Pod) logger := klog.FromContext(ctx) for _, pod := range claimedPods { + if !includeDeletedTerminal && podutil.IsPodTerminal(pod) && pod.DeletionTimestamp != nil { + // This Pod has a finalizer or is already scheduled for deletion from the + // store by the kubelet or the Pod GC. The DS controller doesn't have + // anything else to do with it. + continue + } nodeName, err := util.GetTargetNodeName(pod) if err != nil { - logger.Info("Failed to get target node name of Pod in DaemonSet", + logger.V(4).Info("Failed to get target node name of Pod in DaemonSet", "pod", klog.KObj(pod), "daemonset", klog.KObj(ds)) continue } @@ -928,7 +934,7 @@ func (dsc *DaemonSetsController) updateDaemonSet(ctx context.Context, ds *apps.D } // Process rolling updates if we're ready. - if dsc.expectations.SatisfiedExpectations(key) { + if dsc.expectations.SatisfiedExpectations(klog.FromContext(ctx), key) { switch ds.Spec.UpdateStrategy.Type { case apps.OnDeleteDaemonSetStrategyType: case apps.RollingUpdateDaemonSetStrategyType: @@ -953,7 +959,7 @@ func (dsc *DaemonSetsController) updateDaemonSet(ctx context.Context, ds *apps.D // syncNodes with a list of pods to remove and a list of nodes to run a Pod of ds. func (dsc *DaemonSetsController) manage(ctx context.Context, ds *apps.DaemonSet, nodeList []*v1.Node, hash string) error { // Find out the pods which are created for the nodes by DaemonSet. 
- nodeToDaemonPods, err := dsc.getNodesToDaemonPods(ctx, ds) + nodeToDaemonPods, err := dsc.getNodesToDaemonPods(ctx, ds, false) if err != nil { return fmt.Errorf("couldn't get node to daemon pod mapping for daemon set %q: %v", ds.Name, err) } @@ -1002,7 +1008,7 @@ func (dsc *DaemonSetsController) syncNodes(ctx context.Context, ds *apps.DaemonS deleteDiff = dsc.burstReplicas } - dsc.expectations.SetExpectations(dsKey, createDiff, deleteDiff) + dsc.expectations.SetExpectations(logger, dsKey, createDiff, deleteDiff) // error channel to communicate back failures. make the buffer big enough to avoid any blocking errCh := make(chan error, createDiff+deleteDiff) @@ -1051,7 +1057,7 @@ func (dsc *DaemonSetsController) syncNodes(ctx context.Context, ds *apps.DaemonS } if err != nil { logger.V(2).Info("Failed creation, decrementing expectations for daemon set", "daemonset", klog.KObj(ds)) - dsc.expectations.CreationObserved(dsKey) + dsc.expectations.CreationObserved(logger, dsKey) errCh <- err utilruntime.HandleError(err) } @@ -1062,7 +1068,7 @@ func (dsc *DaemonSetsController) syncNodes(ctx context.Context, ds *apps.DaemonS skippedPods := createDiff - (batchSize + pos) if errorCount < len(errCh) && skippedPods > 0 { logger.V(2).Info("Slow-start failure. Skipping creation pods, decrementing expectations for daemon set", "skippedPods", skippedPods, "daemonset", klog.KObj(ds)) - dsc.expectations.LowerExpectations(dsKey, skippedPods, 0) + dsc.expectations.LowerExpectations(logger, dsKey, skippedPods, 0) // The skipped pods will be retried later. The next controller resync will // retry the slow start process. 
break @@ -1076,7 +1082,7 @@ func (dsc *DaemonSetsController) syncNodes(ctx context.Context, ds *apps.DaemonS go func(ix int) { defer deleteWait.Done() if err := dsc.podControl.DeletePod(ctx, ds.Namespace, podsToDelete[ix], ds); err != nil { - dsc.expectations.DeletionObserved(dsKey) + dsc.expectations.DeletionObserved(logger, dsKey) if !apierrors.IsNotFound(err) { logger.V(2).Info("Failed deletion, decremented expectations for daemon set", "daemonset", klog.KObj(ds)) errCh <- err @@ -1154,7 +1160,7 @@ func storeDaemonSetStatus( func (dsc *DaemonSetsController) updateDaemonSetStatus(ctx context.Context, ds *apps.DaemonSet, nodeList []*v1.Node, hash string, updateObservedGen bool) error { logger := klog.FromContext(ctx) logger.V(4).Info("Updating daemon set status") - nodeToDaemonPods, err := dsc.getNodesToDaemonPods(ctx, ds) + nodeToDaemonPods, err := dsc.getNodesToDaemonPods(ctx, ds, false) if err != nil { return fmt.Errorf("couldn't get node to daemon pod mapping for daemon set %q: %v", ds.Name, err) } @@ -1226,7 +1232,7 @@ func (dsc *DaemonSetsController) syncDaemonSet(ctx context.Context, key string) ds, err := dsc.dsLister.DaemonSets(namespace).Get(name) if apierrors.IsNotFound(err) { logger.V(3).Info("Daemon set has been deleted", "daemonset", key) - dsc.expectations.DeleteExpectations(key) + dsc.expectations.DeleteExpectations(logger, key) return nil } if err != nil { @@ -1271,7 +1277,7 @@ func (dsc *DaemonSetsController) syncDaemonSet(ctx context.Context, key string) } hash := cur.Labels[apps.DefaultDaemonSetUniqueLabelKey] - if !dsc.expectations.SatisfiedExpectations(dsKey) { + if !dsc.expectations.SatisfiedExpectations(logger, dsKey) { // Only update status. Don't raise observedGeneration since controller didn't process object of that generation. 
return dsc.updateDaemonSetStatus(ctx, ds, nodeList, hash, false) } diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/controller/daemon/update.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/controller/daemon/update.go index 2665b170c04d..d7755da95d60 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/controller/daemon/update.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/controller/daemon/update.go @@ -43,7 +43,7 @@ import ( // remaining within the constraints imposed by the update strategy. func (dsc *DaemonSetsController) rollingUpdate(ctx context.Context, ds *apps.DaemonSet, nodeList []*v1.Node, hash string) error { logger := klog.FromContext(ctx) - nodeToDaemonPods, err := dsc.getNodesToDaemonPods(ctx, ds) + nodeToDaemonPods, err := dsc.getNodesToDaemonPods(ctx, ds, false) if err != nil { return fmt.Errorf("couldn't get node to daemon pod mapping for daemon set %q: %v", ds.Name, err) } @@ -294,7 +294,8 @@ func (dsc *DaemonSetsController) constructHistory(ctx context.Context, ds *apps. } func (dsc *DaemonSetsController) cleanupHistory(ctx context.Context, ds *apps.DaemonSet, old []*apps.ControllerRevision) error { - nodesToDaemonPods, err := dsc.getNodesToDaemonPods(ctx, ds) + // Include deleted terminal pods when maintaining history. 
+ nodesToDaemonPods, err := dsc.getNodesToDaemonPods(ctx, ds, true) if err != nil { return fmt.Errorf("couldn't get node to daemon pod mapping for daemon set %q: %v", ds.Name, err) } diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/controller/deployment/util/deployment_util.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/controller/deployment/util/deployment_util.go index 347c284ccb99..d071dbfed090 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/controller/deployment/util/deployment_util.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/controller/deployment/util/deployment_util.go @@ -184,12 +184,12 @@ func SetDeploymentRevision(deployment *apps.Deployment, revision string) bool { } // MaxRevision finds the highest revision in the replica sets -func MaxRevision(allRSs []*apps.ReplicaSet) int64 { +func MaxRevision(logger klog.Logger, allRSs []*apps.ReplicaSet) int64 { max := int64(0) for _, rs := range allRSs { if v, err := Revision(rs); err != nil { // Skip the replica sets when it failed to parse their revision information - klog.V(4).Info("Couldn't parse revision for replica set, deployment controller will skip it when reconciling revisions", "replicaSet", klog.KObj(rs), "err", err) + logger.V(4).Info("Couldn't parse revision for replica set, deployment controller will skip it when reconciling revisions", "replicaSet", klog.KObj(rs), "err", err) } else if v > max { max = v } @@ -198,12 +198,12 @@ func MaxRevision(allRSs []*apps.ReplicaSet) int64 { } // LastRevision finds the second max revision number in all replica sets (the last revision) -func LastRevision(allRSs []*apps.ReplicaSet) int64 { +func LastRevision(logger klog.Logger, allRSs []*apps.ReplicaSet) int64 { max, secMax := int64(0), int64(0) for _, rs := range allRSs { if v, err := Revision(rs); err != nil { // Skip the replica sets when it failed to parse their revision information - klog.V(4).Info("Couldn't parse revision for replica set, deployment controller will 
skip it when reconciling revisions", "replicaSet", klog.KObj(rs), "err", err) + logger.V(4).Info("Couldn't parse revision for replica set, deployment controller will skip it when reconciling revisions", "replicaSet", klog.KObj(rs), "err", err) } else if v >= max { secMax = max max = v @@ -849,11 +849,11 @@ func WaitForObservedDeployment(getDeploymentFunc func() (*apps.Deployment, error // 2 desired, max unavailable 0%, surge 1% - should scale new(+1), then old(-1), then new(+1), then old(-1) // 1 desired, max unavailable 0%, surge 1% - should scale new(+1), then old(-1) func ResolveFenceposts(maxSurge, maxUnavailable *intstrutil.IntOrString, desired int32) (int32, int32, error) { - surge, err := intstrutil.GetScaledValueFromIntOrPercent(intstrutil.ValueOrDefault(maxSurge, intstrutil.FromInt(0)), int(desired), true) + surge, err := intstrutil.GetScaledValueFromIntOrPercent(intstrutil.ValueOrDefault(maxSurge, intstrutil.FromInt32(0)), int(desired), true) if err != nil { return 0, 0, err } - unavailable, err := intstrutil.GetScaledValueFromIntOrPercent(intstrutil.ValueOrDefault(maxUnavailable, intstrutil.FromInt(0)), int(desired), false) + unavailable, err := intstrutil.GetScaledValueFromIntOrPercent(intstrutil.ValueOrDefault(maxUnavailable, intstrutil.FromInt32(0)), int(desired), false) if err != nil { return 0, 0, err } diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/controller/lookup_cache.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/controller/lookup_cache.go deleted file mode 100644 index 160aa6e08624..000000000000 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/controller/lookup_cache.go +++ /dev/null @@ -1,92 +0,0 @@ -/* -Copyright 2016 The Kubernetes Authors. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. 
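The `ResolveFenceposts` hunk earlier in this file (the `FromInt` → `FromInt32` change) relies on an asymmetry worth spelling out: surge percentages round *up* and unavailability percentages round *down*, and if both resolve to zero, unavailability is bumped to 1 so a rolling update can still make progress. A stripped-down sketch of that arithmetic (percent-only; the real `GetScaledValueFromIntOrPercent` also accepts plain integers via `intstr.IntOrString`):

```go
package main

import (
	"fmt"
	"math"
)

// scaledValue converts a percentage to an absolute count against desired
// replicas, rounding up for surge and down for unavailability.
func scaledValue(percent, desired int, roundUp bool) int {
	v := float64(percent*desired) / 100.0
	if roundUp {
		return int(math.Ceil(v))
	}
	return int(math.Floor(v))
}

// resolveFenceposts mirrors the fencepost rule from deployment_util.go:
// if both surge and unavailable round to zero, unavailable becomes 1 so
// the rollout is never wedged.
func resolveFenceposts(surgePct, unavailPct, desired int) (surge, unavailable int) {
	surge = scaledValue(surgePct, desired, true)
	unavailable = scaledValue(unavailPct, desired, false)
	if surge == 0 && unavailable == 0 {
		unavailable = 1
	}
	return surge, unavailable
}

func main() {
	s, u := resolveFenceposts(25, 25, 4) // 25% of 4: surge ceil(1.0)=1, unavailable floor(1.0)=1
	fmt.Println(s, u)
	s, u = resolveFenceposts(0, 10, 4) // 10% of 4 = 0.4 floors to 0, bumped to 1
	fmt.Println(s, u)
}
```

This matches the worked examples in the upstream comment block ("2 desired, max unavailable 0%, surge 1%…"): rounding surge up and unavailability down always leaves at least one lever to move.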
-You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -package controller - -import ( - "hash/fnv" - "sync" - - "github.com/golang/groupcache/lru" - metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" - hashutil "k8s.io/kubernetes/pkg/util/hash" -) - -type objectWithMeta interface { - metav1.Object -} - -// keyFunc returns the key of an object, which is used to look up in the cache for it's matching object. -// Since we match objects by namespace and Labels/Selector, so if two objects have the same namespace and labels, -// they will have the same key. -func keyFunc(obj objectWithMeta) uint64 { - hash := fnv.New32a() - hashutil.DeepHashObject(hash, &equivalenceLabelObj{ - namespace: obj.GetNamespace(), - labels: obj.GetLabels(), - }) - return uint64(hash.Sum32()) -} - -type equivalenceLabelObj struct { - namespace string - labels map[string]string -} - -// MatchingCache save label and selector matching relationship -type MatchingCache struct { - mutex sync.RWMutex - cache *lru.Cache -} - -// NewMatchingCache return a NewMatchingCache, which save label and selector matching relationship. -func NewMatchingCache(maxCacheEntries int) *MatchingCache { - return &MatchingCache{ - cache: lru.New(maxCacheEntries), - } -} - -// Add will add matching information to the cache. -func (c *MatchingCache) Add(labelObj objectWithMeta, selectorObj objectWithMeta) { - key := keyFunc(labelObj) - c.mutex.Lock() - defer c.mutex.Unlock() - c.cache.Add(key, selectorObj) -} - -// GetMatchingObject lookup the matching object for a given object. 
-// Note: the cache information may be invalid since the controller may be deleted or updated, -// we need check in the external request to ensure the cache data is not dirty. -func (c *MatchingCache) GetMatchingObject(labelObj objectWithMeta) (controller interface{}, exists bool) { - key := keyFunc(labelObj) - // NOTE: we use Lock() instead of RLock() here because lru's Get() method also modifies state( - // it need update the least recently usage information). So we can not call it concurrently. - c.mutex.Lock() - defer c.mutex.Unlock() - return c.cache.Get(key) -} - -// Update update the cached matching information. -func (c *MatchingCache) Update(labelObj objectWithMeta, selectorObj objectWithMeta) { - c.Add(labelObj, selectorObj) -} - -// InvalidateAll invalidate the whole cache. -func (c *MatchingCache) InvalidateAll() { - c.mutex.Lock() - defer c.mutex.Unlock() - c.cache = lru.New(c.cache.MaxEntries) -} diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/credentialprovider/azure/azure_acr_helper.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/credentialprovider/azure/azure_acr_helper.go index bcde4f385f2f..3051c04a12c8 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/credentialprovider/azure/azure_acr_helper.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/credentialprovider/azure/azure_acr_helper.go @@ -53,7 +53,6 @@ import ( "errors" "fmt" "io" - "io/ioutil" "net/http" "net/url" "strconv" @@ -185,7 +184,7 @@ func performTokenExchange( var content []byte limitedReader := &io.LimitedReader{R: exchange.Body, N: maxReadLength} - if content, err = ioutil.ReadAll(limitedReader); err != nil { + if content, err = io.ReadAll(limitedReader); err != nil { return "", fmt.Errorf("Www-Authenticate: error reading response from %s", authEndpoint) } diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/credentialprovider/azure/azure_credentials.go 
b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/credentialprovider/azure/azure_credentials.go index 8302ebb47e8c..d3b3f1932677 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/credentialprovider/azure/azure_credentials.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/credentialprovider/azure/azure_credentials.go @@ -23,7 +23,6 @@ import ( "context" "errors" "io" - "io/ioutil" "os" "regexp" "strings" @@ -115,7 +114,7 @@ func parseConfig(configReader io.Reader) (*auth.AzureAuthConfig, error) { } limitedReader := &io.LimitedReader{R: configReader, N: maxReadLength} - configContents, err := ioutil.ReadAll(limitedReader) + configContents, err := io.ReadAll(limitedReader) if err != nil { return nil, err } diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/credentialprovider/config.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/credentialprovider/config.go index 86ce18542c85..203ad14dfa61 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/credentialprovider/config.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/credentialprovider/config.go @@ -22,7 +22,6 @@ import ( "errors" "fmt" "io" - "io/ioutil" "net/http" "os" "path/filepath" @@ -108,7 +107,7 @@ func ReadDockercfgFile(searchPaths []string) (cfg DockerConfig, err error) { continue } klog.V(4).Infof("looking for .dockercfg at %s", absDockerConfigFileLocation) - contents, err := ioutil.ReadFile(absDockerConfigFileLocation) + contents, err := os.ReadFile(absDockerConfigFileLocation) if os.IsNotExist(err) { continue } @@ -160,7 +159,7 @@ func ReadDockerConfigJSONFile(searchPaths []string) (cfg DockerConfig, err error func ReadSpecificDockerConfigJSONFile(filePath string) (cfg DockerConfig, err error) { var contents []byte - if contents, err = ioutil.ReadFile(filePath); err != nil { + if contents, err = os.ReadFile(filePath); err != nil { return nil, err } return readDockerConfigJSONFileFromBytes(contents) @@ -211,7 +210,7 @@ func ReadURL(url string, client 
*http.Client, header *http.Header) (body []byte, } limitedReader := &io.LimitedReader{R: resp.Body, N: maxReadLength} - contents, err := ioutil.ReadAll(limitedReader) + contents, err := io.ReadAll(limitedReader) if err != nil { return nil, err } diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/credentialprovider/gcp/metadata.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/credentialprovider/gcp/metadata.go index e71cb2f24d6e..f11b09a69fd4 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/credentialprovider/gcp/metadata.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/credentialprovider/gcp/metadata.go @@ -18,8 +18,8 @@ package gcp import ( "encoding/json" - "io/ioutil" "net/http" + "os" "os/exec" "runtime" "strings" @@ -134,7 +134,7 @@ func onGCEVM() bool { } name = fields[1] } else { - data, err := ioutil.ReadFile(gceProductNameFile) + data, err := os.ReadFile(gceProductNameFile) if err != nil { klog.V(2).Infof("Error while reading product_name: %v", err) return false diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/credentialprovider/plugin/config.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/credentialprovider/plugin/config.go index c65f3bb472f7..841d5b22123f 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/credentialprovider/plugin/config.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/credentialprovider/plugin/config.go @@ -18,9 +18,10 @@ package plugin import ( "fmt" - "io/ioutil" "strings" + "os" + "k8s.io/apimachinery/pkg/util/validation/field" "k8s.io/kubernetes/pkg/credentialprovider" kubeletconfig "k8s.io/kubernetes/pkg/kubelet/apis/config" @@ -33,7 +34,7 @@ func readCredentialProviderConfigFile(configPath string) (*kubeletconfig.Credent return nil, fmt.Errorf("credential provider config path is empty") } - data, err := ioutil.ReadFile(configPath) + data, err := os.ReadFile(configPath) if err != nil { return nil, fmt.Errorf("unable to read external registry credential provider 
configuration from %q: %w", configPath, err) } diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/credentialprovider/plugin/plugin.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/credentialprovider/plugin/plugin.go index 3eccb0006205..95bb18222d06 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/credentialprovider/plugin/plugin.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/credentialprovider/plugin/plugin.go @@ -410,7 +410,7 @@ func (e *execPlugin) ExecPlugin(ctx context.Context, image string) (*credentialp cmd.Env = mergeEnvVars(e.environ(), configEnvVars) if err = e.runPlugin(ctx, cmd, image); err != nil { - return nil, err + return nil, fmt.Errorf("%w: %s", err, stderr.String()) } data = stdout.Bytes() diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/features/kube_features.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/features/kube_features.go index dae179775042..8b27be3f3078 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/features/kube_features.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/features/kube_features.go @@ -17,6 +17,7 @@ limitations under the License. package features import ( + apiextensionsfeatures "k8s.io/apiextensions-apiserver/pkg/features" "k8s.io/apimachinery/pkg/util/runtime" genericfeatures "k8s.io/apiserver/pkg/features" utilfeature "k8s.io/apiserver/pkg/util/feature" @@ -53,6 +54,7 @@ const ( // owner: @nabokihms // alpha: v1.26 // beta: v1.27 + // GA: v1.28 // // Enables API to get self subject attributes after authentication. APISelfSubjectReview featuregate.Feature = "APISelfSubjectReview" @@ -129,13 +131,11 @@ const ( // Enables the Azure File in-tree driver to Azure File Driver migration feature. CSIMigrationAzureFile featuregate.Feature = "CSIMigrationAzureFile" - // owner: @davidz627 - // alpha: v1.14 - // beta: v1.17 - // GA: 1.25 + // owner: @mfordjody + // alpha: v1.26 // - // Enables the GCE PD in-tree driver to GCE CSI Driver migration feature. 
- CSIMigrationGCE featuregate.Feature = "CSIMigrationGCE" + // Skip validation Enable in next version + SkipReadOnlyValidationGCE featuregate.Feature = "SkipReadOnlyValidationGCE" // owner: @trierra // alpha: v1.23 @@ -145,6 +145,7 @@ const ( // owner: @humblec // alpha: v1.23 + // deprecated: v1.28 // // Enables the RBD in-tree driver to RBD CSI Driver migration feature. CSIMigrationRBD featuregate.Feature = "CSIMigrationRBD" @@ -163,14 +164,6 @@ const ( // Enables SecretRef field in CSI NodeExpandVolume request. CSINodeExpandSecret featuregate.Feature = "CSINodeExpandSecret" - // owner: @pohly - // alpha: v1.19 - // beta: v1.21 - // GA: v1.24 - // - // Enables tracking of available storage capacity that CSI drivers provide. - CSIStorageCapacity featuregate.Feature = "CSIStorageCapacity" - // owner: @fengzixu // alpha: v1.21 // @@ -196,6 +189,11 @@ const ( // Normalize HttpGet URL and Header passing for lifecycle handlers with probers. ConsistentHTTPGetHandlers featuregate.Feature = "ConsistentHTTPGetHandlers" + // owner: @helayoty + // beta: v1.28 + // Set the scheduled time as an annotation in the job. + CronJobsScheduledAnnotation featuregate.Feature = "CronJobsScheduledAnnotation" + // owner: @deejross, @soltysh // kep: https://kep.k8s.io/3140 // alpha: v1.24 @@ -205,30 +203,21 @@ const ( // Enables support for time zones in CronJobs. CronJobTimeZone featuregate.Feature = "CronJobTimeZone" - // owner: @gnufied, @verult, @bertinatto - // alpha: v1.22 - // beta: v1.23 - // GA: v1.26 - // If supported by the CSI driver, delegates the role of applying FSGroup to - // the driver by passing FSGroup through the NodeStageVolume and - // NodePublishVolume calls. 
- DelegateFSGroupToCSIDriver featuregate.Feature = "DelegateFSGroupToCSIDriver" - - // owner: @jiayingz, @swatisehgal (for GA graduation) - // alpha: v1.8 - // beta: v1.10 - // GA: v1.26 + // owner: @thockin + // deprecated: v1.28 // - // Enables support for Device Plugins - DevicePlugins featuregate.Feature = "DevicePlugins" + // Changes when the default value of PodSpec.containers[].ports[].hostPort + // is assigned. The default is to only set a default value in Pods. + // Enabling this means a default will be assigned even to embeddedPodSpecs + // (e.g. in a Deployment), which is the historical default. + DefaultHostNetworkHostPortsInPodTemplates featuregate.Feature = "DefaultHostNetworkHostPortsInPodTemplates" - // owner: @RenaudWasTaken @dashpole - // alpha: v1.19 - // beta: v1.20 - // ga: v1.25 + // owner: @elezar + // kep: http://kep.k8s.io/4009 + // alpha: v1.28 // - // Disables Accelerator Metrics Collected by Kubelet - DisableAcceleratorUsageMetrics featuregate.Feature = "DisableAcceleratorUsageMetrics" + // Add support for CDI Device IDs in the Device Plugin API. + DevicePluginCDIDevices featuregate.Feature = "DevicePluginCDIDevices" // owner: @andrewsykim // alpha: v1.22 @@ -258,15 +247,6 @@ const ( // that is independent of a Pod. DynamicResourceAllocation featuregate.Feature = "DynamicResourceAllocation" - // owner: @andrewsykim - // kep: https://kep.k8s.io/1672 - // alpha: v1.20 - // beta: v1.22 - // GA: v1.26 - // - // Enable Terminating condition in Endpoint Slices. - EndpointSliceTerminatingCondition featuregate.Feature = "EndpointSliceTerminatingCondition" - // owner: @harche // kep: http://kep.k8s.io/3386 // alpha: v1.25 @@ -288,16 +268,16 @@ const ( // kep: https://kep.k8s.io/2595 // alpha: v1.22 // beta: v1.26 + // GA: v1.28 // // Enables apiserver and kubelet to allow up to 32 DNSSearchPaths and up to 2048 DNSSearchListChars. 
ExpandedDNSConfig featuregate.Feature = "ExpandedDNSConfig" // owner: @pweil- // alpha: v1.5 + // deprecated: v1.28 // - // Default userns=host for containers that are using other host namespaces, host mounts, the pod - // contains a privileged container, or specific non-namespaced capabilities (MKNOD, SYS_MODULE, - // SYS_TIME). This should only be enabled if user namespace remapping is enabled in the docker daemon. + // This flag used to be needed for dockershim CRI and currently does nothing. ExperimentalHostUserNamespaceDefaultingGate featuregate.Feature = "ExperimentalHostUserNamespaceDefaulting" // owner: @yuzhiquan, @bowei, @PxyUp, @SergeyKanzhelev @@ -382,6 +362,7 @@ const ( // owner: @humblec // alpha: v1.23 + // deprecated: v1.28 // // Disables the RBD in-tree driver. InTreePluginRBDUnregister featuregate.Feature = "InTreePluginRBDUnregister" @@ -396,18 +377,17 @@ const ( // kep: https://kep.k8s.io/3178 // alpha: v1.25 // beta: v1.27 + // stable: v1.28 // // Causes kubelet to no longer create legacy IPTables rules IPTablesOwnershipCleanup featuregate.Feature = "IPTablesOwnershipCleanup" // owner: @mimowo - // kep: https://kep.k8s.io/3329 - // alpha: v1.25 - // beta: v1.26 + // kep: https://kep.k8s.io/3850 + // alpha: v1.28 // - // Allow users to specify handling of pod failures based on container exit codes - // and pod conditions. - JobPodFailurePolicy featuregate.Feature = "JobPodFailurePolicy" + // Allows users to specify counting of failed pods per index. + JobBackoffLimitPerIndex featuregate.Feature = "JobBackoffLimitPerIndex" // owner: @ahg // beta: v1.23 @@ -418,6 +398,22 @@ const ( // that have never been unsuspended before. JobMutableNodeSchedulingDirectives featuregate.Feature = "JobMutableNodeSchedulingDirectives" + // owner: @mimowo + // kep: https://kep.k8s.io/3329 + // alpha: v1.25 + // beta: v1.26 + // + // Allow users to specify handling of pod failures based on container exit codes + // and pod conditions. 
+ JobPodFailurePolicy featuregate.Feature = "JobPodFailurePolicy" + + // owner: @kannon92 + // kep : https://kep.k8s.io/3939 + // alpha: v1.28 + // + // Allow users to specify recreating pods of a job only when + // pods have fully terminated. + JobPodReplacementPolicy featuregate.Feature = "JobPodReplacementPolicy" // owner: @alculquicondor // alpha: v1.23 // beta: v1.24 @@ -436,13 +432,16 @@ const ( // yet. JobTrackingWithFinalizers featuregate.Feature = "JobTrackingWithFinalizers" - // owner: @andrewsykim @adisky @ndixita - // alpha: v1.20 - // beta: v1.24 - // GA: v1.26 + // owner: @marquiz + // kep: http://kep.k8s.io/4033 + // alpha: v1.28 // - // Enable kubelet exec plugins for image pull credentials. - KubeletCredentialProviders featuregate.Feature = "KubeletCredentialProviders" + // Enable detection of the kubelet cgroup driver configuration option from + // the CRI. The CRI runtime also needs to support this feature in which + // case the kubelet will ignore the cgroupDriver (--cgroup-driver) + // configuration option. If runtime doesn't support it, the kubelet will + // fallback to using it's cgroupDriver option. + KubeletCgroupDriverFromCRI featuregate.Feature = "KubeletCgroupDriverFromCRI" // owner: @AkihiroSuda // alpha: v1.22 @@ -452,9 +451,10 @@ const ( // All the node components such as CRI need to be running in the same user namespace. 
KubeletInUserNamespace featuregate.Feature = "KubeletInUserNamespace" - // owner: @dashpole + // owner: @dashpole, @ffromani (only for GA graduation) // alpha: v1.13 // beta: v1.15 + // GA: v1.28 // // Enables the kubelet's pod resources grpc endpoint KubeletPodResources featuregate.Feature = "KubeletPodResources" @@ -471,9 +471,10 @@ const ( // Enable POD resources API with Get method KubeletPodResourcesGet featuregate.Feature = "KubeletPodResourcesGet" - // owner: @fromanirh + // owner: @ffromani // alpha: v1.21 // beta: v1.23 + // GA: v1.28 // Enable POD resources API to return allocatable resources KubeletPodResourcesGetAllocatable featuregate.Feature = "KubeletPodResourcesGetAllocatable" @@ -485,6 +486,14 @@ const ( // Add support for distributed tracing in the kubelet KubeletTracing featuregate.Feature = "KubeletTracing" + // owner: @alexanderConstantinescu + // kep: http://kep.k8s.io/3836 + // alpha: v1.28 + // + // Implement connection draining for terminating nodes for + // `externalTrafficPolicy: Cluster` services. + KubeProxyDrainingTerminatingNodes featuregate.Feature = "KubeProxyDrainingTerminatingNodes" + // owner: @zshihang // kep: https://kep.k8s.io/2800 // beta: v1.24 @@ -501,6 +510,13 @@ const ( // Enables tracking of secret-based service account tokens usage. LegacyServiceAccountTokenTracking featuregate.Feature = "LegacyServiceAccountTokenTracking" + // owner: @yt2985 + // kep: http://kep.k8s.io/2800 + // alpha: v1.28 + // + // Enables cleaning up of secret-based service account tokens. 
+ LegacyServiceAccountTokenCleanUp featuregate.Feature = "LegacyServiceAccountTokenCleanUp" + // owner: @RobertKrawitz // alpha: v1.15 // @@ -558,15 +574,6 @@ const ( // Enables new performance-improving code in kube-proxy iptables mode MinimizeIPTablesRestore featuregate.Feature = "MinimizeIPTablesRestore" - // owner: @janosi @bridgetkromhout - // kep: https://kep.k8s.io/1435 - // alpha: v1.20 - // beta: v1.24 - // ga: v1.26 - // - // Enables the usage of different protocols in the same Service with type=LoadBalancer - MixedProtocolLBService featuregate.Feature = "MixedProtocolLBService" - // owner: @sarveshr7 // kep: https://kep.k8s.io/2593 // alpha: v1.25 @@ -581,13 +588,6 @@ const ( // Enables the dynamic configuration of Service IP ranges MultiCIDRServiceAllocator featuregate.Feature = "MultiCIDRServiceAllocator" - // owner: @rikatz - // kep: https://kep.k8s.io/2943 - // alpha: v1.24 - // - // Enables NetworkPolicy status subresource - NetworkPolicyStatus featuregate.Feature = "NetworkPolicyStatus" - // owner: @jsafrane // kep: https://kep.k8s.io/3756 // alpha: v1.25 (as part of SELinuxMountReadWriteOncePod) @@ -606,12 +606,14 @@ const ( // kep: https://kep.k8s.io/2268 // alpha: v1.24 // beta: v1.26 + // GA: v1.28 // // Allow pods to failover to a different node in case of non graceful node shutdown NodeOutOfServiceVolumeDetach featuregate.Feature = "NodeOutOfServiceVolumeDetach" - // owner: @ehashman + // owner: @iholder101 // alpha: v1.22 + // beta1: v1.28. For more info, please look at the KEP: https://kep.k8s.io/2400. 
// // Permits kubelet to run with swap enabled NodeSwap featuregate.Feature = "NodeSwap" @@ -624,6 +626,13 @@ const ( // Enables PDBUnhealthyPodEvictionPolicy for PodDisruptionBudgets PDBUnhealthyPodEvictionPolicy featuregate.Feature = "PDBUnhealthyPodEvictionPolicy" + // owner: @RomanBednar + // kep: https://kep.k8s.io/3762 + // alpha: v1.28 + // + // Adds a new field to persistent volumes which holds a timestamp of when the volume last transitioned its phase. + PersistentVolumeLastPhaseTransitionTime featuregate.Feature = "PersistentVolumeLastPhaseTransitionTime" + // owner: @haircommander // kep: https://kep.k8s.io/2364 // alpha: v1.23 @@ -648,12 +657,26 @@ const ( // the pod is being deleted due to a disruption. PodDisruptionConditions featuregate.Feature = "PodDisruptionConditions" + // owner: @danielvegamyhre + // kep: https://kep.k8s.io/4017 + // beta: v1.28 + // + // Set pod completion index as a pod label for Indexed Jobs. + PodIndexLabel featuregate.Feature = "PodIndexLabel" + // owner: @ddebroy // alpha: v1.25 // - // Enables reporting of PodHasNetwork condition in pod status after pod + // Enables reporting of PodReadyToStartContainersCondition condition in pod status after pod // sandbox creation and network configuration completes successfully - PodHasNetworkCondition featuregate.Feature = "PodHasNetworkCondition" + PodReadyToStartContainersCondition featuregate.Feature = "PodReadyToStartContainersCondition" + + // owner: @wzshiming + // kep: http://kep.k8s.io/2681 + // alpha: v1.28 + // + // Adds pod.status.hostIPs and downward API + PodHostIPs featuregate.Feature = "PodHostIPs" // owner: @Huang-Wei // kep: https://kep.k8s.io/3521 @@ -663,17 +686,10 @@ const ( // Enable users to specify when a Pod is ready for scheduling. 
PodSchedulingReadiness featuregate.Feature = "PodSchedulingReadiness" - // owner: @liggitt, @tallclair, sig-auth - // alpha: v1.22 - // beta: v1.23 - // ga: v1.25 - // - // Enables the PodSecurity admission plugin - PodSecurity featuregate.Feature = "PodSecurity" - - // owner: @ehashman + // owner: @rphillips // alpha: v1.21 // beta: v1.22 + // ga: v1.28 // // Allows user to override pod-level terminationGracePeriod for probes ProbeTerminationGracePeriod featuregate.Feature = "ProbeTerminationGracePeriod" @@ -688,6 +704,7 @@ const ( // kep: https://kep.k8s.io/1669 // alpha: v1.22 // beta: v1.26 + // GA: v1.28 // // Enable kube-proxy to handle terminating ednpoints when externalTrafficPolicy=Local ProxyTerminatingEndpoints featuregate.Feature = "ProxyTerminatingEndpoints" @@ -717,6 +734,8 @@ const ( // owner: @RomanBednar // kep: https://kep.k8s.io/3333 // alpha: v1.25 + // beta: 1.26 + // stable: v1.28 // // Allow assigning StorageClass to unbound PVCs retroactively RetroactiveDefaultStorageClass featuregate.Feature = "RetroactiveDefaultStorageClass" @@ -739,6 +758,14 @@ const ( // equals to spec.parallelism before and after the update. ElasticIndexedJob featuregate.Feature = "ElasticIndexedJob" + // owner: @sanposhiho + // kep: http://kep.k8s.io/3063 + // beta: v1.28 + // + // Enables the scheduler's enhancement called QueueingHints, + // which benefits to reduce the useless requeueing. 
+ SchedulerQueueingHints featuregate.Feature = "SchedulerQueueingHints" + // owner: @saschagrunert // kep: https://kep.k8s.io/2413 // alpha: v1.22 @@ -756,31 +783,23 @@ const ( // https://github.com/kubernetes/kubernetes/issues/111516 SecurityContextDeny featuregate.Feature = "SecurityContextDeny" - // owner: @maplain @andrewsykim - // kep: https://kep.k8s.io/2086 - // alpha: v1.21 - // beta: v1.22 - // GA: v1.26 - // - // Enables node-local routing for Service internal traffic - ServiceInternalTrafficPolicy featuregate.Feature = "ServiceInternalTrafficPolicy" - - // owner: @aojea - // kep: https://kep.k8s.io/3070 - // alpha: v1.24 - // beta: v1.25 - // ga: v1.26 - // - // Subdivide the ClusterIP range for dynamic and static IP allocation. - ServiceIPStaticSubrange featuregate.Feature = "ServiceIPStaticSubrange" - // owner: @xuzhenglun // kep: http://kep.k8s.io/3682 // alpha: v1.27 + // beta: v1.28 // // Subdivide the NodePort range for dynamic and static port allocation. ServiceNodePortStaticSubrange featuregate.Feature = "ServiceNodePortStaticSubrange" + // owner: @gjkim42 @SergeyKanzhelev @matthyx @tzneal + // kep: http://kep.k8s.io/753 + // alpha: v1.28 + // + // Introduces sidecar containers, a new type of init container that starts + // before other containers but remains running for the full duration of the + // pod's lifecycle and will not block pod termination. + SidecarContainers featuregate.Feature = "SidecarContainers" + // owner: @derekwaynecarr // alpha: v1.20 // beta: v1.22 @@ -854,12 +873,18 @@ const ( // Allow the usage of options to fine-tune the topology manager policies. TopologyManagerPolicyOptions featuregate.Feature = "TopologyManagerPolicyOptions" + // owner: @richabanker + // alpha: v1.28 + // + // Proxies client to an apiserver capable of serving the request in the event of version skew. 
+ UnknownVersionInteroperabilityProxy featuregate.Feature = "UnknownVersionInteroperabilityProxy" + // owner: @rata, @giuseppe // kep: https://kep.k8s.io/127 // alpha: v1.25 // // Enables user namespace support for stateless pods. - UserNamespacesStatelessPodsSupport featuregate.Feature = "UserNamespacesStatelessPodsSupport" + UserNamespacesSupport featuregate.Feature = "UserNamespacesSupport" // owner: @cofyc // alpha: v1.21 @@ -885,14 +910,6 @@ const ( // Enables support for joining Windows containers to a hosts' network namespace. WindowsHostNetwork featuregate.Feature = "WindowsHostNetwork" - // owner: @marosset - // alpha: v1.22 - // beta: v1.23 - // GA: v1.26 - // - // Enables support for 'HostProcess' containers on Windows nodes. - WindowsHostProcessContainers featuregate.Feature = "WindowsHostProcessContainers" - // owner: @kerthcet // kep: https://kep.k8s.io/3094 // alpha: v1.25 @@ -934,7 +951,7 @@ var defaultKubernetesFeatureGates = map[featuregate.Feature]featuregate.FeatureS AnyVolumeDataSource: {Default: true, PreRelease: featuregate.Beta}, // on by default in 1.24 - APISelfSubjectReview: {Default: true, PreRelease: featuregate.Beta}, // on by default in 1.27 + APISelfSubjectReview: {Default: true, PreRelease: featuregate.GA, LockToDefault: true}, // GA in 1.28; remove in 1.30 AppArmor: {Default: true, PreRelease: featuregate.Beta}, @@ -954,41 +971,37 @@ var defaultKubernetesFeatureGates = map[featuregate.Feature]featuregate.FeatureS CSIMigrationAzureFile: {Default: true, PreRelease: featuregate.GA, LockToDefault: true}, // remove in 1.28 - CSIMigrationGCE: {Default: true, PreRelease: featuregate.GA, LockToDefault: true}, // remove in 1.27 - CSIMigrationPortworx: {Default: false, PreRelease: featuregate.Beta}, // Off by default (requires Portworx CSI driver) - CSIMigrationRBD: {Default: false, PreRelease: featuregate.Alpha}, // Off by default (requires RBD CSI driver) + CSIMigrationRBD: {Default: false, PreRelease: featuregate.Deprecated}, // 
deprecated in 1.28, remove in 1.31 CSIMigrationvSphere: {Default: true, PreRelease: featuregate.GA, LockToDefault: true}, // remove in 1.29 CSINodeExpandSecret: {Default: true, PreRelease: featuregate.Beta}, - CSIStorageCapacity: {Default: true, PreRelease: featuregate.GA, LockToDefault: true}, // remove in 1.26 - CSIVolumeHealth: {Default: false, PreRelease: featuregate.Alpha}, + SkipReadOnlyValidationGCE: {Default: false, PreRelease: featuregate.Alpha}, + CloudControllerManagerWebhook: {Default: false, PreRelease: featuregate.Alpha}, ContainerCheckpoint: {Default: false, PreRelease: featuregate.Alpha}, ConsistentHTTPGetHandlers: {Default: true, PreRelease: featuregate.GA}, - CronJobTimeZone: {Default: true, PreRelease: featuregate.GA, LockToDefault: true}, // remove in 1.29 - - DelegateFSGroupToCSIDriver: {Default: true, PreRelease: featuregate.GA, LockToDefault: true}, // remove in 1.28 + CronJobsScheduledAnnotation: {Default: true, PreRelease: featuregate.Beta}, - DevicePlugins: {Default: true, PreRelease: featuregate.GA, LockToDefault: true}, // GA in 1.26 + CronJobTimeZone: {Default: true, PreRelease: featuregate.GA, LockToDefault: true}, // remove in 1.29 - DisableAcceleratorUsageMetrics: {Default: true, PreRelease: featuregate.GA, LockToDefault: true}, + DefaultHostNetworkHostPortsInPodTemplates: {Default: false, PreRelease: featuregate.Deprecated}, DisableCloudProviders: {Default: false, PreRelease: featuregate.Alpha}, DisableKubeletCloudCredentialProviders: {Default: false, PreRelease: featuregate.Alpha}, - DownwardAPIHugePages: {Default: true, PreRelease: featuregate.GA, LockToDefault: true}, // remove in v1.29 + DevicePluginCDIDevices: {Default: false, PreRelease: featuregate.Alpha}, - EndpointSliceTerminatingCondition: {Default: true, PreRelease: featuregate.GA, LockToDefault: true}, // remove in v1.28 + DownwardAPIHugePages: {Default: true, PreRelease: featuregate.GA, LockToDefault: true}, // remove in v1.29 DynamicResourceAllocation: {Default: false, 
PreRelease: featuregate.Alpha}, @@ -996,9 +1009,9 @@ var defaultKubernetesFeatureGates = map[featuregate.Feature]featuregate.FeatureS ExecProbeTimeout: {Default: true, PreRelease: featuregate.GA}, // lock to default and remove after v1.22 based on KEP #1972 update - ExpandedDNSConfig: {Default: true, PreRelease: featuregate.Beta}, + ExpandedDNSConfig: {Default: true, PreRelease: featuregate.GA, LockToDefault: true}, // remove in 1.30 - ExperimentalHostUserNamespaceDefaultingGate: {Default: false, PreRelease: featuregate.Beta}, + ExperimentalHostUserNamespaceDefaultingGate: {Default: false, PreRelease: featuregate.Deprecated, LockToDefault: true}, // remove in 1.30 GRPCContainerProbe: {Default: true, PreRelease: featuregate.GA, LockToDefault: true}, //remove in 1.29 @@ -1022,37 +1035,45 @@ var defaultKubernetesFeatureGates = map[featuregate.Feature]featuregate.FeatureS InTreePluginPortworxUnregister: {Default: false, PreRelease: featuregate.Alpha}, - InTreePluginRBDUnregister: {Default: false, PreRelease: featuregate.Alpha}, + InTreePluginRBDUnregister: {Default: false, PreRelease: featuregate.Deprecated}, // deprecated in 1.28, remove in 1.31 InTreePluginvSphereUnregister: {Default: false, PreRelease: featuregate.Alpha}, - IPTablesOwnershipCleanup: {Default: true, PreRelease: featuregate.Beta}, + IPTablesOwnershipCleanup: {Default: true, PreRelease: featuregate.GA, LockToDefault: true}, // remove in 1.30 - JobPodFailurePolicy: {Default: true, PreRelease: featuregate.Beta}, + JobBackoffLimitPerIndex: {Default: false, PreRelease: featuregate.Alpha}, JobMutableNodeSchedulingDirectives: {Default: true, PreRelease: featuregate.GA, LockToDefault: true}, // remove in 1.29 + JobPodFailurePolicy: {Default: true, PreRelease: featuregate.Beta}, + + JobPodReplacementPolicy: {Default: false, PreRelease: featuregate.Alpha}, + JobReadyPods: {Default: true, PreRelease: featuregate.Beta}, JobTrackingWithFinalizers: {Default: true, PreRelease: featuregate.GA, LockToDefault: true}, 
// remove in 1.28 - KubeletCredentialProviders: {Default: true, PreRelease: featuregate.GA, LockToDefault: true}, // remove in 1.28 + KubeletCgroupDriverFromCRI: {Default: false, PreRelease: featuregate.Alpha}, KubeletInUserNamespace: {Default: false, PreRelease: featuregate.Alpha}, - KubeletPodResources: {Default: true, PreRelease: featuregate.Beta}, + KubeletPodResources: {Default: true, PreRelease: featuregate.GA, LockToDefault: true}, // GA in 1.28, remove in 1.30 KubeletPodResourcesDynamicResources: {Default: false, PreRelease: featuregate.Alpha}, KubeletPodResourcesGet: {Default: false, PreRelease: featuregate.Alpha}, - KubeletPodResourcesGetAllocatable: {Default: true, PreRelease: featuregate.Beta}, + KubeletPodResourcesGetAllocatable: {Default: true, PreRelease: featuregate.GA, LockToDefault: true}, // GA in 1.28, remove in 1.30 KubeletTracing: {Default: true, PreRelease: featuregate.Beta}, + KubeProxyDrainingTerminatingNodes: {Default: false, PreRelease: featuregate.Alpha}, + LegacyServiceAccountTokenNoAutoGeneration: {Default: true, PreRelease: featuregate.GA, LockToDefault: true}, // remove in 1.29 - LegacyServiceAccountTokenTracking: {Default: true, PreRelease: featuregate.Beta}, + LegacyServiceAccountTokenTracking: {Default: true, PreRelease: featuregate.GA, LockToDefault: true}, // remove in 1.30 + + LegacyServiceAccountTokenCleanUp: {Default: false, PreRelease: featuregate.Alpha}, LocalStorageCapacityIsolationFSQuotaMonitoring: {Default: false, PreRelease: featuregate.Alpha}, @@ -1068,43 +1089,41 @@ var defaultKubernetesFeatureGates = map[featuregate.Feature]featuregate.FeatureS MinDomainsInPodTopologySpread: {Default: true, PreRelease: featuregate.Beta}, - MinimizeIPTablesRestore: {Default: true, PreRelease: featuregate.Beta}, - - MixedProtocolLBService: {Default: true, PreRelease: featuregate.GA, LockToDefault: true}, // remove in 1.28 + MinimizeIPTablesRestore: {Default: true, PreRelease: featuregate.GA, LockToDefault: true}, // remove in 1.30 
MultiCIDRRangeAllocator: {Default: false, PreRelease: featuregate.Alpha}, MultiCIDRServiceAllocator: {Default: false, PreRelease: featuregate.Alpha}, - NetworkPolicyStatus: {Default: false, PreRelease: featuregate.Alpha}, - NewVolumeManagerReconstruction: {Default: true, PreRelease: featuregate.Beta}, NodeLogQuery: {Default: false, PreRelease: featuregate.Alpha}, - NodeOutOfServiceVolumeDetach: {Default: true, PreRelease: featuregate.Beta}, + NodeOutOfServiceVolumeDetach: {Default: true, PreRelease: featuregate.GA, LockToDefault: true}, // remove in 1.31 - NodeSwap: {Default: false, PreRelease: featuregate.Alpha}, + NodeSwap: {Default: false, PreRelease: featuregate.Beta}, PDBUnhealthyPodEvictionPolicy: {Default: true, PreRelease: featuregate.Beta}, + PersistentVolumeLastPhaseTransitionTime: {Default: false, PreRelease: featuregate.Alpha}, + PodAndContainerStatsFromCRI: {Default: false, PreRelease: featuregate.Alpha}, PodDeletionCost: {Default: true, PreRelease: featuregate.Beta}, PodDisruptionConditions: {Default: true, PreRelease: featuregate.Beta}, - PodHasNetworkCondition: {Default: false, PreRelease: featuregate.Alpha}, + PodReadyToStartContainersCondition: {Default: false, PreRelease: featuregate.Alpha}, - PodSchedulingReadiness: {Default: true, PreRelease: featuregate.Beta}, + PodHostIPs: {Default: false, PreRelease: featuregate.Alpha}, - PodSecurity: {Default: true, PreRelease: featuregate.GA, LockToDefault: true}, + PodSchedulingReadiness: {Default: true, PreRelease: featuregate.Beta}, - ProbeTerminationGracePeriod: {Default: true, PreRelease: featuregate.Beta}, // Default to true in beta 1.25 + ProbeTerminationGracePeriod: {Default: true, PreRelease: featuregate.GA, LockToDefault: true}, // remove in 1.29 ProcMountType: {Default: false, PreRelease: featuregate.Alpha}, - ProxyTerminatingEndpoints: {Default: true, PreRelease: featuregate.Beta}, + ProxyTerminatingEndpoints: {Default: true, PreRelease: featuregate.GA, LockToDefault: true}, // remove in 1.30 
QOSReserved: {Default: false, PreRelease: featuregate.Alpha}, @@ -1112,21 +1131,21 @@ var defaultKubernetesFeatureGates = map[featuregate.Feature]featuregate.FeatureS RecoverVolumeExpansionFailure: {Default: false, PreRelease: featuregate.Alpha}, - RetroactiveDefaultStorageClass: {Default: true, PreRelease: featuregate.Beta}, + RetroactiveDefaultStorageClass: {Default: true, PreRelease: featuregate.GA, LockToDefault: true}, // remove in 1.29 RotateKubeletServerCertificate: {Default: true, PreRelease: featuregate.Beta}, ElasticIndexedJob: {Default: true, PreRelease: featuregate.Beta}, + SchedulerQueueingHints: {Default: true, PreRelease: featuregate.Beta}, + SeccompDefault: {Default: true, PreRelease: featuregate.GA, LockToDefault: true}, // remove in 1.29 SecurityContextDeny: {Default: false, PreRelease: featuregate.Alpha}, - ServiceIPStaticSubrange: {Default: true, PreRelease: featuregate.GA, LockToDefault: true}, // remove in 1.28 + ServiceNodePortStaticSubrange: {Default: true, PreRelease: featuregate.Beta}, - ServiceInternalTrafficPolicy: {Default: true, PreRelease: featuregate.GA, LockToDefault: true}, // remove in 1.28 - - ServiceNodePortStaticSubrange: {Default: false, PreRelease: featuregate.Alpha}, + SidecarContainers: {Default: false, PreRelease: featuregate.Alpha}, SizeMemoryBackedVolumes: {Default: true, PreRelease: featuregate.Beta}, @@ -1142,13 +1161,15 @@ var defaultKubernetesFeatureGates = map[featuregate.Feature]featuregate.FeatureS TopologyManagerPolicyAlphaOptions: {Default: false, PreRelease: featuregate.Alpha}, - TopologyManagerPolicyBetaOptions: {Default: false, PreRelease: featuregate.Beta}, + TopologyManagerPolicyBetaOptions: {Default: true, PreRelease: featuregate.Beta}, + + TopologyManagerPolicyOptions: {Default: true, PreRelease: featuregate.Beta}, - TopologyManagerPolicyOptions: {Default: false, PreRelease: featuregate.Alpha}, + UnknownVersionInteroperabilityProxy: {Default: false, PreRelease: featuregate.Alpha}, VolumeCapacityPriority: 
{Default: false, PreRelease: featuregate.Alpha}, - UserNamespacesStatelessPodsSupport: {Default: false, PreRelease: featuregate.Alpha}, + UserNamespacesSupport: {Default: false, PreRelease: featuregate.Alpha}, WinDSR: {Default: false, PreRelease: featuregate.Alpha}, @@ -1156,18 +1177,18 @@ var defaultKubernetesFeatureGates = map[featuregate.Feature]featuregate.FeatureS WindowsHostNetwork: {Default: true, PreRelease: featuregate.Alpha}, - WindowsHostProcessContainers: {Default: true, PreRelease: featuregate.GA, LockToDefault: true}, // remove in 1.28 - NodeInclusionPolicyInPodTopologySpread: {Default: true, PreRelease: featuregate.Beta}, SELinuxMountReadWriteOncePod: {Default: true, PreRelease: featuregate.Beta}, InPlacePodVerticalScaling: {Default: false, PreRelease: featuregate.Alpha}, + PodIndexLabel: {Default: true, PreRelease: featuregate.Beta}, + // inherited features from generic apiserver, relisted here to get a conflict if it is changed // unintentionally on either side: - genericfeatures.AdmissionWebhookMatchConditions: {Default: false, PreRelease: featuregate.Alpha}, + genericfeatures.AdmissionWebhookMatchConditions: {Default: true, PreRelease: featuregate.Beta}, genericfeatures.AggregatedDiscoveryEndpoint: {Default: true, PreRelease: featuregate.Beta}, @@ -1177,14 +1198,10 @@ var defaultKubernetesFeatureGates = map[featuregate.Feature]featuregate.FeatureS genericfeatures.APIResponseCompression: {Default: true, PreRelease: featuregate.Beta}, - genericfeatures.AdvancedAuditing: {Default: true, PreRelease: featuregate.GA, LockToDefault: true}, // remove in 1.28 - - genericfeatures.ValidatingAdmissionPolicy: {Default: false, PreRelease: featuregate.Alpha}, + genericfeatures.ValidatingAdmissionPolicy: {Default: false, PreRelease: featuregate.Beta}, genericfeatures.CustomResourceValidationExpressions: {Default: true, PreRelease: featuregate.Beta}, - genericfeatures.DryRun: {Default: true, PreRelease: featuregate.GA, LockToDefault: true}, // remove in 1.28 - 
genericfeatures.OpenAPIEnums: {Default: true, PreRelease: featuregate.Beta}, genericfeatures.OpenAPIV3: {Default: true, PreRelease: featuregate.GA, LockToDefault: true}, // remove in 1.29 @@ -1193,6 +1210,11 @@ var defaultKubernetesFeatureGates = map[featuregate.Feature]featuregate.FeatureS genericfeatures.ServerSideFieldValidation: {Default: true, PreRelease: featuregate.GA, LockToDefault: true}, // remove in 1.29 + // inherited features from apiextensions-apiserver, relisted here to get a conflict if it is changed + // unintentionally on either side: + + apiextensionsfeatures.CRDValidationRatcheting: {Default: false, PreRelease: featuregate.Alpha}, + // features that enable backwards compatibility but are scheduled to be removed // ... HPAScaleToZero: {Default: false, PreRelease: featuregate.Alpha}, diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/apis/config/types.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/apis/config/types.go index 91e3b9ba5e9b..1bff0cec0cac 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/apis/config/types.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/apis/config/types.go @@ -123,6 +123,7 @@ type KubeletConfiguration struct { // tlsPrivateKeyFile is the file containing x509 private key matching tlsCertFile TLSPrivateKeyFile string // TLSCipherSuites is the list of allowed cipher suites for the server. + // Note that TLS 1.3 ciphersuites are not configurable. // Values are from tls package constants (https://golang.org/pkg/crypto/tls/#pkg-constants). TLSCipherSuites []string // TLSMinVersion is the minimum TLS version supported. @@ -318,17 +319,15 @@ type KubeletConfiguration struct { // flags are not as it expects. Otherwise the Kubelet will attempt to modify // kernel flags to match its expectation. ProtectKernelDefaults bool - // If true, Kubelet ensures a set of iptables rules are present on host. - // These rules will serve as utility for various components, e.g. 
kube-proxy. - // The rules will be created based on IPTablesMasqueradeBit and IPTablesDropBit. + // If true, Kubelet creates the KUBE-IPTABLES-HINT chain in iptables as a hint to + // other components about the configuration of iptables on the system. MakeIPTablesUtilChains bool - // iptablesMasqueradeBit is the bit of the iptables fwmark space to mark for SNAT - // Values must be within the range [0, 31]. Must be different from other mark bits. - // Warning: Please match the value of the corresponding parameter in kube-proxy. - // TODO: clean up IPTablesMasqueradeBit in kube-proxy + // iptablesMasqueradeBit formerly controlled the creation of the KUBE-MARK-MASQ + // chain. + // Deprecated: no longer has any effect. IPTablesMasqueradeBit int32 - // iptablesDropBit is the bit of the iptables fwmark space to mark for dropping packets. - // Values must be within the range [0, 31]. Must be different from other mark bits. + // iptablesDropBit formerly controlled the creation of the KUBE-MARK-DROP chain. + // Deprecated: no longer has any effect. IPTablesDropBit int32 // featureGates is a map of feature names to bools that enable or disable alpha/experimental // features. 
This field modifies piecemeal the built-in default values from diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/apis/config/validation/validation.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/apis/config/validation/validation.go index 81fdf241936c..7d9b452b4fdc 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/apis/config/validation/validation.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/apis/config/validation/validation.go @@ -30,9 +30,9 @@ import ( tracingapi "k8s.io/component-base/tracing/api/v1" "k8s.io/kubernetes/pkg/features" kubeletconfig "k8s.io/kubernetes/pkg/kubelet/apis/config" - "k8s.io/kubernetes/pkg/kubelet/cm/cpuset" kubetypes "k8s.io/kubernetes/pkg/kubelet/types" utiltaints "k8s.io/kubernetes/pkg/util/taints" + "k8s.io/utils/cpuset" ) var ( diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/apis/podresources/server_v1.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/apis/podresources/server_v1.go index ad6734ae729b..58ba9dbba9dd 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/apis/podresources/server_v1.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/apis/podresources/server_v1.go @@ -79,9 +79,10 @@ func (p *v1PodResourcesServer) List(ctx context.Context, req *v1.ListPodResource podResources[i] = &pRes } - return &v1.ListPodResourcesResponse{ + response := &v1.ListPodResourcesResponse{ PodResources: podResources, - }, nil + } + return response, nil } // GetAllocatableResources returns information about all the resources known by the server - this more like the capacity, not like the current amount of free resources. 
@@ -89,16 +90,13 @@ func (p *v1PodResourcesServer) GetAllocatableResources(ctx context.Context, req metrics.PodResourcesEndpointRequestsTotalCount.WithLabelValues("v1").Inc() metrics.PodResourcesEndpointRequestsGetAllocatableCount.WithLabelValues("v1").Inc() - if !utilfeature.DefaultFeatureGate.Enabled(kubefeatures.KubeletPodResourcesGetAllocatable) { - metrics.PodResourcesEndpointErrorsGetAllocatableCount.WithLabelValues("v1").Inc() - return nil, fmt.Errorf("PodResources API GetAllocatableResources disabled") - } - - return &v1.AllocatableResourcesResponse{ + response := &v1.AllocatableResourcesResponse{ Devices: p.devicesProvider.GetAllocatableDevices(), CpuIds: p.cpusProvider.GetAllocatableCPUs(), Memory: p.memoryProvider.GetAllocatableMemory(), - }, nil + } + + return response, nil } // Get returns information about the resources assigned to a specific pod diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/cgroup_manager_linux.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/cgroup_manager_linux.go index c4be02a45b22..f54eaa2979fd 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/cgroup_manager_linux.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/cgroup_manager_linux.go @@ -45,11 +45,12 @@ import ( const ( // systemdSuffix is the cgroup name suffix for systemd systemdSuffix string = ".slice" - // MemoryMin is memory.min for cgroup v2 - MemoryMin string = "memory.min" - // MemoryHigh is memory.high for cgroup v2 - MemoryHigh string = "memory.high" - Cgroup2MaxCpuLimit string = "max" + // Cgroup2MemoryMin is memory.min for cgroup v2 + Cgroup2MemoryMin string = "memory.min" + // Cgroup2MemoryHigh is memory.high for cgroup v2 + Cgroup2MemoryHigh string = "memory.high" + Cgroup2MaxCpuLimit string = "max" + Cgroup2MaxSwapFilename string = "memory.swap.max" ) var RootCgroupName = CgroupName([]string{}) diff --git 
a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/container_manager.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/container_manager.go index 80c1af0aa83c..527b66bf50e9 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/container_manager.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/container_manager.go @@ -31,7 +31,6 @@ import ( podresourcesapi "k8s.io/kubelet/pkg/apis/podresources/v1" kubeletconfig "k8s.io/kubernetes/pkg/kubelet/apis/config" "k8s.io/kubernetes/pkg/kubelet/apis/podresources" - "k8s.io/kubernetes/pkg/kubelet/cm/cpuset" "k8s.io/kubernetes/pkg/kubelet/cm/devicemanager" "k8s.io/kubernetes/pkg/kubelet/config" kubecontainer "k8s.io/kubernetes/pkg/kubelet/container" @@ -40,6 +39,7 @@ import ( "k8s.io/kubernetes/pkg/kubelet/pluginmanager/cache" "k8s.io/kubernetes/pkg/kubelet/status" schedulerframework "k8s.io/kubernetes/pkg/scheduler/framework" + "k8s.io/utils/cpuset" ) type ActivePodsFunc func() []*v1.Pod @@ -146,18 +146,18 @@ type NodeConfig struct { KubeletRootDir string ProtectKernelDefaults bool NodeAllocatableConfig - QOSReserved map[v1.ResourceName]int64 - CPUManagerPolicy string - CPUManagerPolicyOptions map[string]string - TopologyManagerScope string - CPUManagerReconcilePeriod time.Duration - ExperimentalMemoryManagerPolicy string - ExperimentalMemoryManagerReservedMemory []kubeletconfig.MemoryReservation - PodPidsLimit int64 - EnforceCPULimits bool - CPUCFSQuotaPeriod time.Duration - TopologyManagerPolicy string - ExperimentalTopologyManagerPolicyOptions map[string]string + QOSReserved map[v1.ResourceName]int64 + CPUManagerPolicy string + CPUManagerPolicyOptions map[string]string + TopologyManagerScope string + CPUManagerReconcilePeriod time.Duration + ExperimentalMemoryManagerPolicy string + ExperimentalMemoryManagerReservedMemory []kubeletconfig.MemoryReservation + PodPidsLimit int64 + EnforceCPULimits bool + CPUCFSQuotaPeriod time.Duration + TopologyManagerPolicy string + 
TopologyManagerPolicyOptions map[string]string } type NodeAllocatableConfig struct { diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/container_manager_linux.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/container_manager_linux.go index 02cb34ddcdc3..18001ed61f63 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/container_manager_linux.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/container_manager_linux.go @@ -51,7 +51,6 @@ import ( podresourcesapi "k8s.io/kubelet/pkg/apis/podresources/v1" kubefeatures "k8s.io/kubernetes/pkg/features" "k8s.io/kubernetes/pkg/kubelet/cadvisor" - "k8s.io/kubernetes/pkg/kubelet/cm/containermap" "k8s.io/kubernetes/pkg/kubelet/cm/cpumanager" "k8s.io/kubernetes/pkg/kubelet/cm/devicemanager" "k8s.io/kubernetes/pkg/kubelet/cm/dra" @@ -292,7 +291,7 @@ func NewContainerManager(mountUtil mount.Interface, cadvisorInterface cadvisor.I machineInfo.Topology, nodeConfig.TopologyManagerPolicy, nodeConfig.TopologyManagerScope, - nodeConfig.ExperimentalTopologyManagerPolicyOptions, + nodeConfig.TopologyManagerPolicyOptions, ) if err != nil { @@ -563,8 +562,9 @@ func (cm *containerManagerImpl) Start(node *v1.Node, localStorageCapacityIsolation bool) error { ctx := context.Background() + containerMap, containerRunningSet := buildContainerMapAndRunningSetFromRuntime(ctx, runtimeService) + // Initialize CPU manager - containerMap := buildContainerMapFromRuntime(ctx, runtimeService) err := cm.cpuManager.Start(cpumanager.ActivePodsFunc(activePods), sourcesReady, podStatusProvider, runtimeService, containerMap) if err != nil { return fmt.Errorf("start cpu manager error: %v", err) @@ -572,7 +572,7 @@ func (cm *containerManagerImpl) Start(node *v1.Node, // Initialize memory manager if utilfeature.DefaultFeatureGate.Enabled(kubefeatures.MemoryManager) { - containerMap := buildContainerMapFromRuntime(ctx, runtimeService) + containerMap, _ := 
buildContainerMapAndRunningSetFromRuntime(ctx, runtimeService) err := cm.memoryManager.Start(memorymanager.ActivePodsFunc(activePods), sourcesReady, podStatusProvider, runtimeService, containerMap) if err != nil { return fmt.Errorf("start memory manager error: %v", err) @@ -636,7 +636,7 @@ func (cm *containerManagerImpl) Start(node *v1.Node, } // Starts device manager. - if err := cm.deviceManager.Start(devicemanager.ActivePodsFunc(activePods), sourcesReady); err != nil { + if err := cm.deviceManager.Start(devicemanager.ActivePodsFunc(activePods), sourcesReady, containerMap, containerRunningSet); err != nil { return err } @@ -673,6 +673,7 @@ func (cm *containerManagerImpl) GetResources(pod *v1.Pod, container *v1.Containe opts.Mounts = append(opts.Mounts, devOpts.Mounts...) opts.Envs = append(opts.Envs, devOpts.Envs...) opts.Annotations = append(opts.Annotations, devOpts.Annotations...) + opts.CDIDevices = append(opts.CDIDevices, devOpts.CDIDevices...) return opts, nil } @@ -699,26 +700,6 @@ func (cm *containerManagerImpl) SystemCgroupsLimit() v1.ResourceList { } } -func buildContainerMapFromRuntime(ctx context.Context, runtimeService internalapi.RuntimeService) containermap.ContainerMap { - podSandboxMap := make(map[string]string) - podSandboxList, _ := runtimeService.ListPodSandbox(ctx, nil) - for _, p := range podSandboxList { - podSandboxMap[p.Id] = p.Metadata.Uid - } - - containerMap := containermap.NewContainerMap() - containerList, _ := runtimeService.ListContainers(ctx, nil) - for _, c := range containerList { - if _, exists := podSandboxMap[c.PodSandboxId]; !exists { - klog.InfoS("No PodSandBox found for the container", "podSandboxId", c.PodSandboxId, "containerName", c.Metadata.Name, "containerId", c.Id) - continue - } - containerMap.Add(podSandboxMap[c.PodSandboxId], c.Metadata.Name, c.Id) - } - - return containerMap -} - func isProcessRunningInHost(pid int) (bool, error) { // Get init pid namespace. 
initPidNs, err := os.Readlink("/proc/1/ns/pid") @@ -978,6 +959,7 @@ func (cm *containerManagerImpl) GetDynamicResources(pod *v1.Pod, container *v1.C } for _, containerClaimInfo := range containerClaimInfos { var claimResources []*podresourcesapi.ClaimResource + containerClaimInfo.RLock() // TODO: Currently we maintain a list of ClaimResources, each of which contains // a set of CDIDevices from a different kubelet plugin. In the future we may want to // include the name of the kubelet plugin and/or other types of resources that are @@ -989,6 +971,7 @@ func (cm *containerManagerImpl) GetDynamicResources(pod *v1.Pod, container *v1.C } claimResources = append(claimResources, &podresourcesapi.ClaimResource{CDIDevices: cdiDevices}) } + containerClaimInfo.RUnlock() containerDynamicResource := podresourcesapi.DynamicResource{ ClassName: containerClaimInfo.ClassName, ClaimName: containerClaimInfo.ClaimName, diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/container_manager_windows.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/container_manager_windows.go index c26a06837698..f62944aafcd0 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/container_manager_windows.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/container_manager_windows.go @@ -23,6 +23,7 @@ limitations under the License. package cm import ( + "context" "fmt" "k8s.io/klog/v2" @@ -86,8 +87,11 @@ func (cm *containerManagerImpl) Start(node *v1.Node, } } + ctx := context.Background() + containerMap, containerRunningSet := buildContainerMapAndRunningSetFromRuntime(ctx, runtimeService) + // Starts device manager. 
- if err := cm.deviceManager.Start(devicemanager.ActivePodsFunc(activePods), sourcesReady); err != nil { + if err := cm.deviceManager.Start(devicemanager.ActivePodsFunc(activePods), sourcesReady, containerMap, containerRunningSet); err != nil { return err } diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/cpumanager/cpu_assignment.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/cpumanager/cpu_assignment.go index cdf5e5197c8b..eba774e8f624 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/cpumanager/cpu_assignment.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/cpumanager/cpu_assignment.go @@ -24,7 +24,7 @@ import ( "k8s.io/klog/v2" "k8s.io/kubernetes/pkg/kubelet/cm/cpumanager/topology" - "k8s.io/kubernetes/pkg/kubelet/cm/cpuset" + "k8s.io/utils/cpuset" ) // LoopControl controls the behavior of the cpu accumulator loop logic diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/cpumanager/cpu_manager.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/cpumanager/cpu_manager.go index 443eecd2d360..8b5049d7d747 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/cpumanager/cpu_manager.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/cpumanager/cpu_manager.go @@ -32,11 +32,11 @@ import ( "k8s.io/kubernetes/pkg/kubelet/cm/containermap" "k8s.io/kubernetes/pkg/kubelet/cm/cpumanager/state" "k8s.io/kubernetes/pkg/kubelet/cm/cpumanager/topology" - "k8s.io/kubernetes/pkg/kubelet/cm/cpuset" "k8s.io/kubernetes/pkg/kubelet/cm/topologymanager" "k8s.io/kubernetes/pkg/kubelet/config" kubecontainer "k8s.io/kubernetes/pkg/kubelet/container" "k8s.io/kubernetes/pkg/kubelet/status" + "k8s.io/utils/cpuset" ) // ActivePodsFunc is a function that returns a list of pods to reconcile. 
diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/cpumanager/fake_cpu_manager.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/cpumanager/fake_cpu_manager.go index 933697051355..4a03f3dd23ff 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/cpumanager/fake_cpu_manager.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/cpumanager/fake_cpu_manager.go @@ -21,10 +21,10 @@ import ( "k8s.io/klog/v2" "k8s.io/kubernetes/pkg/kubelet/cm/containermap" "k8s.io/kubernetes/pkg/kubelet/cm/cpumanager/state" - "k8s.io/kubernetes/pkg/kubelet/cm/cpuset" "k8s.io/kubernetes/pkg/kubelet/cm/topologymanager" "k8s.io/kubernetes/pkg/kubelet/config" "k8s.io/kubernetes/pkg/kubelet/status" + "k8s.io/utils/cpuset" ) type fakeManager struct { diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/cpumanager/policy.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/cpumanager/policy.go index 31473686548c..a80da9427cef 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/cpumanager/policy.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/cpumanager/policy.go @@ -20,8 +20,8 @@ import ( "k8s.io/api/core/v1" "k8s.io/kubernetes/pkg/kubelet/cm/cpumanager/state" - "k8s.io/kubernetes/pkg/kubelet/cm/cpuset" "k8s.io/kubernetes/pkg/kubelet/cm/topologymanager" + "k8s.io/utils/cpuset" ) // Policy implements logic for pod container to CPU assignment. 
diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/cpumanager/policy_none.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/cpumanager/policy_none.go index c5c151a78b3e..ff86498fd926 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/cpumanager/policy_none.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/cpumanager/policy_none.go @@ -22,8 +22,8 @@ import ( "k8s.io/api/core/v1" "k8s.io/klog/v2" "k8s.io/kubernetes/pkg/kubelet/cm/cpumanager/state" - "k8s.io/kubernetes/pkg/kubelet/cm/cpuset" "k8s.io/kubernetes/pkg/kubelet/cm/topologymanager" + "k8s.io/utils/cpuset" ) type nonePolicy struct{} diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/cpumanager/policy_static.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/cpumanager/policy_static.go index 7a82de03da84..0f72e64dbc8a 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/cpumanager/policy_static.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/cpumanager/policy_static.go @@ -27,10 +27,10 @@ import ( "k8s.io/kubernetes/pkg/features" "k8s.io/kubernetes/pkg/kubelet/cm/cpumanager/state" "k8s.io/kubernetes/pkg/kubelet/cm/cpumanager/topology" - "k8s.io/kubernetes/pkg/kubelet/cm/cpuset" "k8s.io/kubernetes/pkg/kubelet/cm/topologymanager" "k8s.io/kubernetes/pkg/kubelet/cm/topologymanager/bitmask" "k8s.io/kubernetes/pkg/kubelet/metrics" + "k8s.io/utils/cpuset" ) const ( @@ -584,7 +584,7 @@ func (p *staticPolicy) GetPodTopologyHints(s state.State, pod *v1.Pod) map[strin } } -// generateCPUtopologyHints generates a set of TopologyHints given the set of +// generateCPUTopologyHints generates a set of TopologyHints given the set of // available CPUs and the number of CPUs being requested. 
// // It follows the convention of marking all hints that have the same number of diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/cpumanager/state/checkpoint.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/cpumanager/state/checkpoint.go index ca6d2fc90a30..eb2bfa27eaf1 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/cpumanager/state/checkpoint.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/cpumanager/state/checkpoint.go @@ -18,11 +18,11 @@ package state import ( "encoding/json" + "fmt" "hash/fnv" "strings" - "github.com/davecgh/go-spew/spew" - + "k8s.io/apimachinery/pkg/util/dump" "k8s.io/kubernetes/pkg/kubelet/checkpointmanager" "k8s.io/kubernetes/pkg/kubelet/checkpointmanager/checksum" "k8s.io/kubernetes/pkg/kubelet/checkpointmanager/errors" @@ -102,21 +102,14 @@ func (cp *CPUManagerCheckpointV1) VerifyChecksum() error { return nil } - printer := spew.ConfigState{ - Indent: " ", - SortKeys: true, - DisableMethods: true, - SpewKeys: true, - } - ck := cp.Checksum cp.Checksum = 0 - object := printer.Sprintf("%#v", cp) + object := dump.ForHash(cp) object = strings.Replace(object, "CPUManagerCheckpointV1", "CPUManagerCheckpoint", 1) cp.Checksum = ck hash := fnv.New32a() - printer.Fprintf(hash, "%v", object) + fmt.Fprintf(hash, "%v", object) if cp.Checksum != checksum.Checksum(hash.Sum32()) { return errors.ErrCorruptCheckpoint } diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/cpumanager/state/state.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/cpumanager/state/state.go index a9bd906fcb23..352fddfb9cda 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/cpumanager/state/state.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/cpumanager/state/state.go @@ -17,7 +17,7 @@ limitations under the License. 
package state import ( - "k8s.io/kubernetes/pkg/kubelet/cm/cpuset" + "k8s.io/utils/cpuset" ) // ContainerCPUAssignments type used in cpu manager state @@ -25,9 +25,9 @@ type ContainerCPUAssignments map[string]map[string]cpuset.CPUSet // Clone returns a copy of ContainerCPUAssignments func (as ContainerCPUAssignments) Clone() ContainerCPUAssignments { - ret := make(ContainerCPUAssignments) + ret := make(ContainerCPUAssignments, len(as)) for pod := range as { - ret[pod] = make(map[string]cpuset.CPUSet) + ret[pod] = make(map[string]cpuset.CPUSet, len(as[pod])) for container, cset := range as[pod] { ret[pod][container] = cset } diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/cpumanager/state/state_checkpoint.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/cpumanager/state/state_checkpoint.go index 9297f23757dd..f6acc7c42ce6 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/cpumanager/state/state_checkpoint.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/cpumanager/state/state_checkpoint.go @@ -25,7 +25,7 @@ import ( "k8s.io/kubernetes/pkg/kubelet/checkpointmanager" "k8s.io/kubernetes/pkg/kubelet/checkpointmanager/errors" "k8s.io/kubernetes/pkg/kubelet/cm/containermap" - "k8s.io/kubernetes/pkg/kubelet/cm/cpuset" + "k8s.io/utils/cpuset" ) var _ State = &stateCheckpoint{} @@ -121,7 +121,7 @@ func (sc *stateCheckpoint) restoreState() error { var tmpContainerCPUSet cpuset.CPUSet tmpAssignments := ContainerCPUAssignments{} for pod := range checkpointV2.Entries { - tmpAssignments[pod] = make(map[string]cpuset.CPUSet) + tmpAssignments[pod] = make(map[string]cpuset.CPUSet, len(checkpointV2.Entries[pod])) for container, cpuString := range checkpointV2.Entries[pod] { if tmpContainerCPUSet, err = cpuset.Parse(cpuString); err != nil { return fmt.Errorf("could not parse cpuset %q for container %q in pod %q: %v", cpuString, container, pod, err) @@ -147,7 +147,7 @@ func (sc *stateCheckpoint) storeState() 
error { assignments := sc.cache.GetCPUAssignments() for pod := range assignments { - checkpoint.Entries[pod] = make(map[string]string) + checkpoint.Entries[pod] = make(map[string]string, len(assignments[pod])) for container, cset := range assignments[pod] { checkpoint.Entries[pod][container] = cset.String() } diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/cpumanager/state/state_mem.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/cpumanager/state/state_mem.go index 8f3a10d95b2d..cb01ea92609b 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/cpumanager/state/state_mem.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/cpumanager/state/state_mem.go @@ -20,7 +20,7 @@ import ( "sync" "k8s.io/klog/v2" - "k8s.io/kubernetes/pkg/kubelet/cm/cpuset" + "k8s.io/utils/cpuset" ) type stateMemory struct { diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/cpumanager/topology/topology.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/cpumanager/topology/topology.go index 4c4147556ea0..62d91a5dee5d 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/cpumanager/topology/topology.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/cpumanager/topology/topology.go @@ -21,7 +21,7 @@ import ( cadvisorapi "github.com/google/cadvisor/info/v1" "k8s.io/klog/v2" - "k8s.io/kubernetes/pkg/kubelet/cm/cpuset" + "k8s.io/utils/cpuset" ) // NUMANodeInfo is a map from NUMANode ID to a list of CPU IDs associated with diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/devicemanager/checkpoint/checkpointv1.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/devicemanager/checkpoint/checkpointv1.go index 9014ebfc1fdc..d26b972f7be2 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/devicemanager/checkpoint/checkpointv1.go +++ 
b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/devicemanager/checkpoint/checkpointv1.go @@ -18,11 +18,11 @@ package checkpoint import ( "encoding/json" + "fmt" "hash/fnv" "strings" - "github.com/davecgh/go-spew/spew" - + "k8s.io/apimachinery/pkg/util/dump" "k8s.io/klog/v2" "k8s.io/kubernetes/pkg/kubelet/checkpointmanager/checksum" "k8s.io/kubernetes/pkg/kubelet/checkpointmanager/errors" @@ -48,18 +48,11 @@ type checkpointDataV1 struct { // We need this special code path to be able to correctly validate the checksum k8s 1.19 wrote. // credits to https://github.com/kubernetes/kubernetes/pull/102717/commits/353f93895118d2ffa2d59a29a1fbc225160ea1d6 func (cp checkpointDataV1) checksum() checksum.Checksum { - printer := spew.ConfigState{ - Indent: " ", - SortKeys: true, - DisableMethods: true, - SpewKeys: true, - } - - object := printer.Sprintf("%#v", cp) + object := dump.ForHash(cp) object = strings.Replace(object, "checkpointDataV1", "checkpointData", 1) object = strings.Replace(object, "PodDevicesEntryV1", "PodDevicesEntry", -1) hash := fnv.New32a() - printer.Fprintf(hash, "%v", object) + fmt.Fprintf(hash, "%v", object) return checksum.Checksum(hash.Sum32()) } diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/devicemanager/manager.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/devicemanager/manager.go index 8cb57aa8190f..d780ee801bdc 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/devicemanager/manager.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/devicemanager/manager.go @@ -36,6 +36,7 @@ import ( pluginapi "k8s.io/kubelet/pkg/apis/deviceplugin/v1beta1" "k8s.io/kubernetes/pkg/kubelet/checkpointmanager" "k8s.io/kubernetes/pkg/kubelet/checkpointmanager/errors" + "k8s.io/kubernetes/pkg/kubelet/cm/containermap" "k8s.io/kubernetes/pkg/kubelet/cm/devicemanager/checkpoint" plugin "k8s.io/kubernetes/pkg/kubelet/cm/devicemanager/plugin/v1beta1" 
"k8s.io/kubernetes/pkg/kubelet/cm/topologymanager" @@ -97,6 +98,15 @@ type ManagerImpl struct { // pendingAdmissionPod contain the pod during the admission phase pendingAdmissionPod *v1.Pod + + // containerMap provides a mapping from (pod, container) -> containerID + // for all containers in a pod. Used to detect pods running across a restart + containerMap containermap.ContainerMap + + // containerRunningSet identifies which container among those present in `containerMap` + // was reported running by the container runtime when `containerMap` was computed. + // Used to detect pods running across a restart + containerRunningSet sets.String } type endpointInfo struct { @@ -216,6 +226,7 @@ func (m *ManagerImpl) PluginConnected(resourceName string, p plugin.DevicePlugin defer m.mutex.Unlock() m.endpoints[resourceName] = endpointInfo{e, options} + klog.V(2).InfoS("Device plugin connected", "resourceName", resourceName) return nil } @@ -246,6 +257,7 @@ func (m *ManagerImpl) PluginListAndWatchReceiver(resourceName string, resp *plug } func (m *ManagerImpl) genericDeviceUpdateCallback(resourceName string, devices []pluginapi.Device) { + healthyCount := 0 m.mutex.Lock() m.healthyDevices[resourceName] = sets.NewString() m.unhealthyDevices[resourceName] = sets.NewString() @@ -254,6 +266,7 @@ func (m *ManagerImpl) genericDeviceUpdateCallback(resourceName string, devices [ m.allDevices[resourceName][dev.ID] = dev if dev.Health == pluginapi.Healthy { m.healthyDevices[resourceName].Insert(dev.ID) + healthyCount++ } else { m.unhealthyDevices[resourceName].Insert(dev.ID) } @@ -262,6 +275,7 @@ func (m *ManagerImpl) genericDeviceUpdateCallback(resourceName string, devices [ if err := m.writeCheckpoint(); err != nil { klog.ErrorS(err, "Writing checkpoint encountered") } + klog.V(2).InfoS("Processed device updates for resource", "resourceName", resourceName, "totalCount", len(devices), "healthyCount", healthyCount) } // GetWatcherHandler returns the plugin handler @@ -277,11 +291,13 @@ 
func (m *ManagerImpl) checkpointFile() string { // Start starts the Device Plugin Manager and start initialization of // podDevices and allocatedDevices information from checkpointed state and // starts device plugin registration service. -func (m *ManagerImpl) Start(activePods ActivePodsFunc, sourcesReady config.SourcesReady) error { +func (m *ManagerImpl) Start(activePods ActivePodsFunc, sourcesReady config.SourcesReady, initialContainers containermap.ContainerMap, initialContainerRunningSet sets.String) error { klog.V(2).InfoS("Starting Device Plugin manager") m.activePods = activePods m.sourcesReady = sourcesReady + m.containerMap = initialContainers + m.containerRunningSet = initialContainerRunningSet // Loads in allocatedDevices information from disk. err := m.readCheckpoint() @@ -544,14 +560,52 @@ func (m *ManagerImpl) devicesToAllocate(podUID, contName, resource string, requi return nil, fmt.Errorf("pod %q container %q changed request for resource %q from %d to %d", string(podUID), contName, resource, devices.Len(), required) } } - if needed == 0 { - // No change, no work. + + // We have 3 major flows to handle: + // 1. kubelet running, normal allocation (needed > 0, container being [re]created). Steady state and by far the most common case. + // 2. kubelet restart. In this scenario every other component of the stack (device plugins, app container, runtime) is still running. + // 3. node reboot. In this scenario device plugins may not be running yet when we try to allocate devices. + // note: if we get this far the runtime is surely running. This is usually enforced at the OS level by startup service dependencies. + + // First we take care of the exceptional flows (scenarios 2 and 3). In both flows, kubelet is reinitializing, and while kubelet is initializing, sources are NOT all ready. + // Is this a simple kubelet restart (scenario 2)? To distinguish, we use the information we got from the runtime.
If we are asked to allocate devices for containers reported + running, then it can only be a kubelet restart. On node reboot the runtime and the containers were also shut down. Then, if the container was running, it can only be + because it already has access to all the required devices, so we have nothing to do and we can bail out. + if !m.sourcesReady.AllReady() && m.isContainerAlreadyRunning(podUID, contName) { + klog.V(3).InfoS("container detected running, nothing to do", "deviceNumber", needed, "resourceName", resource, "podUID", string(podUID), "containerName", contName) return nil, nil } + + // We dealt with scenario 2. If we got this far it's either scenario 3 (node reboot) or scenario 1 (steady state, normal flow). klog.V(3).InfoS("Need devices to allocate for pod", "deviceNumber", needed, "resourceName", resource, "podUID", string(podUID), "containerName", contName) - // Check if resource registered with devicemanager - if _, ok := m.healthyDevices[resource]; !ok { - return nil, fmt.Errorf("can't allocate unregistered device %s", resource) + healthyDevices, hasRegistered := m.healthyDevices[resource] + + // The following checks are expected to fail only in scenario 3 (node reboot). + // The kubelet is reinitializing and got a container from sources. But there's no ordering, so an app container may attempt allocation _before_ the device plugin was created, + // has registered and reported the devices back to the kubelet. + // This can only happen in scenario 3 because at steady state (scenario 1) the scheduler prevents pods from being scheduled onto nodes which don't report enough devices. + // Note: we need to check the device health and registration status *before* we check how many devices are needed, doing otherwise caused issue #109595 + // Note: if the scheduler is bypassed, we fall back to scenario 1, so we still need these checks.
+ if !hasRegistered { + return nil, fmt.Errorf("cannot allocate unregistered device %s", resource) + } + + // Check if registered resource has healthy devices + if healthyDevices.Len() == 0 { + return nil, fmt.Errorf("no healthy devices present; cannot allocate unhealthy devices %s", resource) + } + + // Check if all the previously allocated devices are healthy + if !healthyDevices.IsSuperset(devices) { + return nil, fmt.Errorf("previously allocated devices are no longer healthy; cannot allocate unhealthy devices %s", resource) + } + + // We handled the known error paths in scenario 3 (node reboot), so from now on we can fall back to a common path. + // We cover container restarts at kubelet steady state with the same flow. + if needed == 0 { + klog.V(3).InfoS("no devices needed, nothing to do", "deviceNumber", needed, "resourceName", resource, "podUID", string(podUID), "containerName", contName) + // No change, no work. + return nil, nil } // Declare the list of allocated devices. @@ -1026,3 +1080,23 @@ func (m *ManagerImpl) setPodPendingAdmission(pod *v1.Pod) { m.pendingAdmissionPod = pod } + +func (m *ManagerImpl) isContainerAlreadyRunning(podUID, cntName string) bool { + cntID, err := m.containerMap.GetContainerID(podUID, cntName) + if err != nil { + klog.V(4).InfoS("container not found in the initial map, assumed NOT running", "podUID", podUID, "containerName", cntName, "err", err) + return false + } + + // note that if the container runtime is down when kubelet restarts, this set will be empty, + // so on kubelet restart containers will again fail admission, hitting https://github.com/kubernetes/kubernetes/issues/118559 again. + // This scenario should however be rare enough. + if !m.containerRunningSet.Has(cntID) { + klog.V(4).InfoS("container not present in the initial running set", "podUID", podUID, "containerName", cntName, "containerID", cntID) + return false + } + + // Once we make it here we know we have a running container.
+ klog.V(4).InfoS("container found in the initial set, assumed running", "podUID", podUID, "containerName", cntName, "containerID", cntID) + return true +} diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/devicemanager/pod_devices.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/devicemanager/pod_devices.go index 7a12e8de8130..fe4eb65e4055 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/devicemanager/pod_devices.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/devicemanager/pod_devices.go @@ -21,9 +21,13 @@ import ( "k8s.io/klog/v2" + "k8s.io/apimachinery/pkg/types" "k8s.io/apimachinery/pkg/util/sets" + utilfeature "k8s.io/apiserver/pkg/util/feature" pluginapi "k8s.io/kubelet/pkg/apis/deviceplugin/v1beta1" + kubefeatures "k8s.io/kubernetes/pkg/features" "k8s.io/kubernetes/pkg/kubelet/cm/devicemanager/checkpoint" + "k8s.io/kubernetes/pkg/kubelet/cm/util/cdi" kubecontainer "k8s.io/kubernetes/pkg/kubelet/container" ) @@ -244,6 +248,8 @@ func (pdev *podDevices) deviceRunContainerOptions(podUID, contName string) *Devi mountsMap := make(map[string]string) envsMap := make(map[string]string) annotationsMap := make(map[string]string) + // Keep track of all CDI devices requested for the container. + allCDIDevices := sets.New[string]() // Loops through AllocationResponses of all cached device resources. for _, devices := range resources { resp := devices.allocResp @@ -252,6 +258,7 @@ func (pdev *podDevices) deviceRunContainerOptions(podUID, contName string) *Devi // Mount points // Device files // Container annotations + // CDI device IDs // These artifacts are per resource per container. // Updates RunContainerOptions.Envs. 
for k, v := range resp.Envs { @@ -321,10 +328,78 @@ func (pdev *podDevices) deviceRunContainerOptions(podUID, contName string) *Devi annotationsMap[k] = v opts.Annotations = append(opts.Annotations, kubecontainer.Annotation{Name: k, Value: v}) } + + if utilfeature.DefaultFeatureGate.Enabled(kubefeatures.DevicePluginCDIDevices) { + // Updates for CDI devices. + cdiDevices := getCDIDeviceInfo(resp, allCDIDevices) + opts.CDIDevices = append(opts.CDIDevices, cdiDevices...) + } + } + + // Although the CDI devices are expected to be empty when this feature is disabled, we still + // guard this with a feature gate to avoid any potential issues. + if utilfeature.DefaultFeatureGate.Enabled(kubefeatures.DevicePluginCDIDevices) { + // We construct a resource ID from the pod UID and container name. + // This ID has no semantic meaning, and is only used to ensure that the generated CDI annotation key is unique + // for a given container. Since this is only called once per pod-container combination, this should be the case. + resourceID := podUID + "-" + contName + cdiAnnotations := getCDIAnnotations(resourceID, allCDIDevices, annotationsMap) + opts.Annotations = append(opts.Annotations, cdiAnnotations...) } + return opts } +// getCDIAnnotations returns the cdi annotations for a given container. +// This creates a CDI annotation with a key of the form: devicemanager_{{resourceID}}. +// The value of the annotation is a comma separated list of sorted CDI device IDs. +// If the annotation key is already defined in the provided annotations map, then the existing value is used. +func getCDIAnnotations(resourceID string, cdiDevices sets.Set[string], annotationsMap map[string]string) []kubecontainer.Annotation { + // We sort the CDI devices to ensure that the annotation value is deterministic. 
+ sortedCDIDevices := sets.List[string](cdiDevices) + annotations, err := cdi.GenerateAnnotations(types.UID(resourceID), "devicemanager", sortedCDIDevices) + if err != nil { + klog.ErrorS(err, "Failed to create CDI annotations") + return nil + } + + var cdiAnnotations []kubecontainer.Annotation + for _, annotation := range annotations { + if e, ok := annotationsMap[annotation.Name]; ok { + klog.V(4).InfoS("Skip existing annotation", "annotationKey", annotation.Name, "annotationValue", annotation.Value) + if e != annotation.Value { + klog.ErrorS(nil, "Annotation has conflicting setting", "annotationKey", annotation.Name, "expected", e, "got", annotation.Value) + } + continue + } + klog.V(4).InfoS("Add annotation", "annotationKey", annotation.Name, "annotationValue", annotation.Value) + annotationsMap[annotation.Name] = annotation.Value + cdiAnnotations = append(cdiAnnotations, kubecontainer.Annotation{Name: annotation.Name, Value: annotation.Value}) + } + + return cdiAnnotations +} + +// getCDIDeviceInfo returns CDI devices from an allocate response +func getCDIDeviceInfo(resp *pluginapi.ContainerAllocateResponse, knownCDIDevices sets.Set[string]) []kubecontainer.CDIDevice { + var cdiDevices []kubecontainer.CDIDevice + for _, cdiDevice := range resp.CDIDevices { + if knownCDIDevices.Has(cdiDevice.Name) { + klog.V(4).InfoS("Skip existing CDI Device", "name", cdiDevice.Name) + continue + } + klog.V(4).InfoS("Add CDI device", "name", cdiDevice.Name) + knownCDIDevices.Insert(cdiDevice.Name) + + device := kubecontainer.CDIDevice{ + Name: cdiDevice.Name, + } + cdiDevices = append(cdiDevices, device) + } + + return cdiDevices +} + // getContainerDevices returns the devices assigned to the provided container for all ResourceNames func (pdev *podDevices) getContainerDevices(podUID, contName string) ResourceDeviceInstances { pdev.RLock() diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/devicemanager/types.go 
b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/devicemanager/types.go index d508e8c99699..fb330568adc3 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/devicemanager/types.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/devicemanager/types.go @@ -20,6 +20,8 @@ import ( "time" v1 "k8s.io/api/core/v1" + "k8s.io/apimachinery/pkg/util/sets" + "k8s.io/kubernetes/pkg/kubelet/cm/containermap" "k8s.io/kubernetes/pkg/kubelet/cm/topologymanager" "k8s.io/kubernetes/pkg/kubelet/config" kubecontainer "k8s.io/kubernetes/pkg/kubelet/container" @@ -31,7 +33,7 @@ import ( // Manager manages all the Device Plugins running on a node. type Manager interface { // Start starts device plugin registration service. - Start(activePods ActivePodsFunc, sourcesReady config.SourcesReady) error + Start(activePods ActivePodsFunc, sourcesReady config.SourcesReady, initialContainers containermap.ContainerMap, initialContainerRunningSet sets.String) error // Allocate configures and assigns devices to a container in a pod. From // the requested device resources, Allocate will communicate with the @@ -91,6 +93,8 @@ type DeviceRunContainerOptions struct { Devices []kubecontainer.DeviceInfo // The Annotations for the container Annotations []kubecontainer.Annotation + // CDI Devices for the container + CDIDevices []kubecontainer.CDIDevice } // TODO: evaluate whether we need this error definition. 
diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/dra/claiminfo.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/dra/claiminfo.go index d1711a0771dc..7266f9e72b28 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/dra/claiminfo.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/dra/claiminfo.go @@ -20,9 +20,11 @@ import ( "fmt" "sync" + resourcev1alpha2 "k8s.io/api/resource/v1alpha2" "k8s.io/apimachinery/pkg/types" "k8s.io/apimachinery/pkg/util/sets" "k8s.io/kubernetes/pkg/kubelet/cm/dra/state" + "k8s.io/kubernetes/pkg/kubelet/cm/util/cdi" kubecontainer "k8s.io/kubernetes/pkg/kubelet/container" ) @@ -57,7 +59,7 @@ func (info *ClaimInfo) addCDIDevices(pluginName string, cdiDevices []string) err // NOTE: Passing CDI device names as annotations is a temporary solution // It will be removed after all runtimes are updated // to get CDI device names from the ContainerConfig.CDIDevices field - annotations, err := generateCDIAnnotations(info.ClaimUID, info.DriverName, cdiDevices) + annotations, err := cdi.GenerateAnnotations(info.ClaimUID, info.DriverName, cdiDevices) if err != nil { return fmt.Errorf("failed to generate container annotations, err: %+v", err) } @@ -79,14 +81,15 @@ type claimInfoCache struct { claimInfo map[string]*ClaimInfo } -func newClaimInfo(driverName, className string, claimUID types.UID, claimName, namespace string, podUIDs sets.Set[string]) *ClaimInfo { +func newClaimInfo(driverName, className string, claimUID types.UID, claimName, namespace string, podUIDs sets.Set[string], resourceHandles []resourcev1alpha2.ResourceHandle) *ClaimInfo { claimInfoState := state.ClaimInfoState{ - DriverName: driverName, - ClassName: className, - ClaimUID: claimUID, - ClaimName: claimName, - Namespace: namespace, - PodUIDs: podUIDs, + DriverName: driverName, + ClassName: className, + ClaimUID: claimUID, + ClaimName: claimName, + Namespace: namespace, + PodUIDs: podUIDs, + ResourceHandles: 
resourceHandles, } claimInfo := ClaimInfo{ ClaimInfoState: claimInfoState, @@ -119,6 +122,7 @@ func newClaimInfoCache(stateDir, checkpointName string) (*claimInfoCache, error) entry.ClaimName, entry.Namespace, entry.PodUIDs, + entry.ResourceHandles, ) for pluginName, cdiDevices := range entry.CDIDevices { err := info.addCDIDevices(pluginName, cdiDevices) diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/dra/manager.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/dra/manager.go index ea171f0b7b60..703eae58b4fc 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/dra/manager.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/dra/manager.go @@ -28,6 +28,7 @@ import ( clientset "k8s.io/client-go/kubernetes" "k8s.io/dynamic-resource-allocation/resourceclaim" "k8s.io/klog/v2" + drapb "k8s.io/kubelet/pkg/apis/dra/v1alpha3" dra "k8s.io/kubernetes/pkg/kubelet/cm/dra/plugin" kubecontainer "k8s.io/kubernetes/pkg/kubelet/container" ) @@ -62,117 +63,159 @@ func NewManagerImpl(kubeClient clientset.Interface, stateFileDirectory string) ( } // PrepareResources attempts to prepare all of the required resource -// plugin resources for the input container, issue an NodePrepareResource rpc request +// plugin resources for the input container, issue NodePrepareResources rpc requests // for each new resource requirement, process their responses and update the cached // containerResources on success. func (m *ManagerImpl) PrepareResources(pod *v1.Pod) error { - // Process resources for each resource claim referenced by container - for _, container := range append(pod.Spec.InitContainers, pod.Spec.Containers...) 
{ - for range container.Resources.Claims { - for i := range pod.Spec.ResourceClaims { - claimName := resourceclaim.Name(pod, &pod.Spec.ResourceClaims[i]) - klog.V(3).InfoS("Processing resource", "claim", claimName, "pod", pod.Name) - - // Resource is already prepared, add pod UID to it - if claimInfo := m.cache.get(claimName, pod.Namespace); claimInfo != nil { - // We delay checkpointing of this change until this call - // returns successfully. It is OK to do this because we - // will only return successfully from this call if the - // checkpoint has succeeded. That means if the kubelet is - // ever restarted before this checkpoint succeeds, the pod - // whose resources are being prepared would never have - // started, so it's OK (actually correct) to not include it - // in the cache. - claimInfo.addPodReference(pod.UID) - continue - } + batches := make(map[string][]*drapb.Claim) + claimInfos := make(map[types.UID]*ClaimInfo) + for i := range pod.Spec.ResourceClaims { + podClaim := &pod.Spec.ResourceClaims[i] + klog.V(3).InfoS("Processing resource", "podClaim", podClaim.Name, "pod", pod.Name) + claimName, mustCheckOwner, err := resourceclaim.Name(pod, podClaim) + if err != nil { + return fmt.Errorf("prepare resource claim: %v", err) + } - // Query claim object from the API server - resourceClaim, err := m.kubeClient.ResourceV1alpha2().ResourceClaims(pod.Namespace).Get( - context.TODO(), - claimName, - metav1.GetOptions{}) - if err != nil { - return fmt.Errorf("failed to fetch ResourceClaim %s referenced by pod %s: %+v", claimName, pod.Name, err) - } + if claimName == nil { + // Nothing to do. 
+ continue + } + // Query claim object from the API server + resourceClaim, err := m.kubeClient.ResourceV1alpha2().ResourceClaims(pod.Namespace).Get( + context.TODO(), + *claimName, + metav1.GetOptions{}) + if err != nil { + return fmt.Errorf("failed to fetch ResourceClaim %s referenced by pod %s: %+v", *claimName, pod.Name, err) + } - // Check if pod is in the ReservedFor for the claim - if !resourceclaim.IsReservedForPod(pod, resourceClaim) { - return fmt.Errorf("pod %s(%s) is not allowed to use resource claim %s(%s)", - pod.Name, pod.UID, claimName, resourceClaim.UID) - } + if mustCheckOwner { + if err = resourceclaim.IsForPod(pod, resourceClaim); err != nil { + return err + } + } - // Grab the allocation.resourceHandles. If there are no - // allocation.resourceHandles, create a single resourceHandle with no - // content. This will trigger processing of this claim by a single - // kubelet plugin whose name matches resourceClaim.Status.DriverName. - resourceHandles := resourceClaim.Status.Allocation.ResourceHandles - if len(resourceHandles) == 0 { - resourceHandles = make([]resourcev1alpha2.ResourceHandle, 1) - } + // Check if pod is in the ReservedFor for the claim + if !resourceclaim.IsReservedForPod(pod, resourceClaim) { + return fmt.Errorf("pod %s(%s) is not allowed to use resource claim %s(%s)", + pod.Name, pod.UID, *claimName, resourceClaim.UID) + } - // Create a claimInfo object to store the relevant claim info. 
- claimInfo := newClaimInfo( - resourceClaim.Status.DriverName, - resourceClaim.Spec.ResourceClassName, - resourceClaim.UID, - resourceClaim.Name, - resourceClaim.Namespace, - sets.New(string(pod.UID)), - ) - - // Walk through each resourceHandle - for _, resourceHandle := range resourceHandles { - // If no DriverName is provided in the resourceHandle, we - // use the DriverName from the status - pluginName := resourceHandle.DriverName - if pluginName == "" { - pluginName = resourceClaim.Status.DriverName - } - - // Call NodePrepareResource RPC for each resourceHandle - client, err := dra.NewDRAPluginClient(pluginName) - if err != nil { - return fmt.Errorf("failed to get DRA Plugin client for plugin name %s, err=%+v", pluginName, err) - } - response, err := client.NodePrepareResource( - context.Background(), - resourceClaim.Namespace, - resourceClaim.UID, - resourceClaim.Name, - resourceHandle.Data) - if err != nil { - return fmt.Errorf("NodePrepareResource failed, claim UID: %s, claim name: %s, resource handle: %s, err: %+v", - resourceClaim.UID, resourceClaim.Name, resourceHandle.Data, err) - } - klog.V(3).InfoS("NodePrepareResource succeeded", "pluginName", pluginName, "response", response) - - // Add the CDI Devices returned by NodePrepareResource to - // the claimInfo object. - err = claimInfo.addCDIDevices(pluginName, response.CdiDevices) - if err != nil { - return fmt.Errorf("failed to add CDIDevices to claimInfo %+v: %+v", claimInfo, err) - } - - // TODO: We (re)add the claimInfo object to the cache and - // sync it to the checkpoint *after* the - // NodePrepareResource call has completed. This will cause - // issues if the kubelet gets restarted between - // NodePrepareResource and syncToCheckpoint. It will result - // in not calling NodeUnprepareResource for this claim - // because no claimInfo will be synced back to the cache - // for it after the restart. We need to resolve this issue - // before moving to beta. 
- m.cache.add(claimInfo) - - // Checkpoint to reduce redundant calls to - // NodePrepareResource() after a kubelet restart. - err = m.cache.syncToCheckpoint() - if err != nil { - return fmt.Errorf("failed to checkpoint claimInfo state, err: %+v", err) - } - } + // If no container actually uses the claim, then we don't need + // to prepare it. + if !claimIsUsedByPod(podClaim, pod) { + klog.V(5).InfoS("Skipping unused resource", "claim", claimName, "pod", pod.Name) + continue + } + + // Is the resource already prepared? Then add the pod UID to it. + if claimInfo := m.cache.get(*claimName, pod.Namespace); claimInfo != nil { + // We delay checkpointing of this change until this call + // returns successfully. It is OK to do this because we + // will only return successfully from this call if the + // checkpoint has succeeded. That means if the kubelet is + // ever restarted before this checkpoint succeeds, the pod + // whose resources are being prepared would never have + // started, so it's OK (actually correct) to not include it + // in the cache. + claimInfo.addPodReference(pod.UID) + continue + } + + // Grab the allocation.resourceHandles. If there are no + // allocation.resourceHandles, create a single resourceHandle with no + // content. This will trigger processing of this claim by a single + // kubelet plugin whose name matches resourceClaim.Status.DriverName. + resourceHandles := resourceClaim.Status.Allocation.ResourceHandles + if len(resourceHandles) == 0 { + resourceHandles = make([]resourcev1alpha2.ResourceHandle, 1) + } + + // Create a claimInfo object to store the relevant claim info. + claimInfo := newClaimInfo( + resourceClaim.Status.DriverName, + resourceClaim.Spec.ResourceClassName, + resourceClaim.UID, + resourceClaim.Name, + resourceClaim.Namespace, + sets.New(string(pod.UID)), + resourceHandles, + ) + + // Loop through all plugins and prepare for calling NodePrepareResources. 
+ for _, resourceHandle := range resourceHandles { + // If no DriverName is provided in the resourceHandle, we + // use the DriverName from the status + pluginName := resourceHandle.DriverName + if pluginName == "" { + pluginName = resourceClaim.Status.DriverName + } + claim := &drapb.Claim{ + Namespace: resourceClaim.Namespace, + Uid: string(resourceClaim.UID), + Name: resourceClaim.Name, + ResourceHandle: resourceHandle.Data, } + batches[pluginName] = append(batches[pluginName], claim) + } + claimInfos[resourceClaim.UID] = claimInfo + } + + // Call NodePrepareResources for all claims in each batch. + // If there is any error, processing gets aborted. + // We could try to continue, but that would make the code more complex. + for pluginName, claims := range batches { + // Call NodePrepareResources RPC for all resource handles. + client, err := dra.NewDRAPluginClient(pluginName) + if err != nil { + return fmt.Errorf("failed to get DRA Plugin client for plugin name %s: %v", pluginName, err) + } + response, err := client.NodePrepareResources(context.Background(), &drapb.NodePrepareResourcesRequest{Claims: claims}) + if err != nil { + // General error unrelated to any particular claim. + return fmt.Errorf("NodePrepareResources failed: %v", err) + } + for claimUID, result := range response.Claims { + reqClaim := lookupClaimRequest(claims, claimUID) + if reqClaim == nil { + return fmt.Errorf("NodePrepareResources returned result for unknown claim UID %s", claimUID) + } + if result.Error != "" { + return fmt.Errorf("NodePrepareResources failed for claim %s/%s: %s", reqClaim.Namespace, reqClaim.Name, result.Error) + } + + claimInfo := claimInfos[types.UID(claimUID)] + + // Add the CDI Devices returned by NodePrepareResources to + // the claimInfo object. 
+ err = claimInfo.addCDIDevices(pluginName, result.CDIDevices) + if err != nil { + return fmt.Errorf("failed to add CDIDevices to claimInfo %+v: %+v", claimInfo, err) + } + + // TODO: We (re)add the claimInfo object to the cache and + // sync it to the checkpoint *after* the + // NodePrepareResources call has completed. This will cause + // issues if the kubelet gets restarted between + // NodePrepareResources and syncToCheckpoint. It will result + // in not calling NodeUnprepareResources for this claim + // because no claimInfo will be synced back to the cache + // for it after the restart. We need to resolve this issue + // before moving to beta. + m.cache.add(claimInfo) + } + + // Checkpoint to reduce redundant calls to + // NodePrepareResources after a kubelet restart. + err = m.cache.syncToCheckpoint() + if err != nil { + return fmt.Errorf("failed to checkpoint claimInfo state, err: %+v", err) + } + + unfinished := len(claims) - len(response.Claims) + if unfinished != 0 { + return fmt.Errorf("NodePrepareResources left out %d claims", unfinished) } } // Checkpoint to capture all of the previous addPodReference() calls. 
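The new PrepareResources flow above batches claims by plugin name so each DRA plugin receives a single NodePrepareResources call. A minimal sketch of that grouping step, using simplified stand-in types rather than the kubelet's actual drapb structs:

```go
package main

import "fmt"

// prepClaim is a simplified stand-in for drapb.Claim; it is not the
// kubelet's real type.
type prepClaim struct {
	Namespace, UID, Name, DriverName string
}

// batchByPlugin groups claims by the plugin that must prepare them,
// falling back to a default driver when the resource handle names none,
// mirroring the pluginName == "" fallback in the hunk above.
func batchByPlugin(claims []prepClaim, defaultDriver string) map[string][]prepClaim {
	batches := make(map[string][]prepClaim)
	for _, c := range claims {
		plugin := c.DriverName
		if plugin == "" {
			plugin = defaultDriver
		}
		batches[plugin] = append(batches[plugin], c)
	}
	return batches
}

func main() {
	claims := []prepClaim{
		{Namespace: "ns", UID: "u1", Name: "a", DriverName: "gpu.example.com"},
		{Namespace: "ns", UID: "u2", Name: "b"}, // no driver: falls back
	}
	b := batchByPlugin(claims, "default.example.com")
	fmt.Println(len(b), len(b["gpu.example.com"]), len(b["default.example.com"])) // 2 1 1
}
```

The driver names here are hypothetical; the point is only that one map lookup per plugin replaces the previous one-RPC-per-resource-handle loop.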
@@ -183,6 +226,43 @@ func (m *ManagerImpl) PrepareResources(pod *v1.Pod) error { return nil } +func lookupClaimRequest(claims []*drapb.Claim, claimUID string) *drapb.Claim { + for _, claim := range claims { + if claim.Uid == claimUID { + return claim + } + } + return nil +} + +func claimIsUsedByPod(podClaim *v1.PodResourceClaim, pod *v1.Pod) bool { + if claimIsUsedByContainers(podClaim, pod.Spec.InitContainers) { + return true + } + if claimIsUsedByContainers(podClaim, pod.Spec.Containers) { + return true + } + return false +} + +func claimIsUsedByContainers(podClaim *v1.PodResourceClaim, containers []v1.Container) bool { + for i := range containers { + if claimIsUsedByContainer(podClaim, &containers[i]) { + return true + } + } + return false +} + +func claimIsUsedByContainer(podClaim *v1.PodResourceClaim, container *v1.Container) bool { + for _, c := range container.Resources.Claims { + if c.Name == podClaim.Name { + return true + } + } + return false +} + // GetResources gets a ContainerInfo object from the claimInfo cache. // This information is used by the caller to update a container config. func (m *ManagerImpl) GetResources(pod *v1.Pod, container *v1.Container) (*ContainerInfo, error) { @@ -190,25 +270,35 @@ func (m *ManagerImpl) GetResources(pod *v1.Pod, container *v1.Container) (*Conta cdiDevices := []kubecontainer.CDIDevice{} for i, podResourceClaim := range pod.Spec.ResourceClaims { - claimName := resourceclaim.Name(pod, &pod.Spec.ResourceClaims[i]) - + claimName, _, err := resourceclaim.Name(pod, &pod.Spec.ResourceClaims[i]) + if err != nil { + return nil, fmt.Errorf("list resource claims: %v", err) + } + // The claim name might be nil if no underlying resource claim + // was generated for the referenced claim. There are valid use + // cases when this might happen, so we simply skip it. 
+ if claimName == nil { + continue + } for _, claim := range container.Resources.Claims { if podResourceClaim.Name != claim.Name { continue } - claimInfo := m.cache.get(claimName, pod.Namespace) + claimInfo := m.cache.get(*claimName, pod.Namespace) if claimInfo == nil { - return nil, fmt.Errorf("unable to get resource for namespace: %s, claim: %s", pod.Namespace, claimName) + return nil, fmt.Errorf("unable to get resource for namespace: %s, claim: %s", pod.Namespace, *claimName) } - klog.V(3).InfoS("Add resource annotations", "claim", claimName, "annotations", claimInfo.annotations) + claimInfo.RLock() + klog.V(3).InfoS("Add resource annotations", "claim", *claimName, "annotations", claimInfo.annotations) annotations = append(annotations, claimInfo.annotations...) for _, devices := range claimInfo.CDIDevices { for _, device := range devices { cdiDevices = append(cdiDevices, kubecontainer.CDIDevice{Name: device}) } } + claimInfo.RUnlock() } } @@ -220,10 +310,22 @@ func (m *ManagerImpl) GetResources(pod *v1.Pod, container *v1.Container) (*Conta // As such, calls to the underlying NodeUnprepareResource API are skipped for claims that have // already been successfully unprepared. func (m *ManagerImpl) UnprepareResources(pod *v1.Pod) error { - // Call NodeUnprepareResource RPC for every resource claim referenced by the pod + batches := make(map[string][]*drapb.Claim) + claimInfos := make(map[types.UID]*ClaimInfo) for i := range pod.Spec.ResourceClaims { - claimName := resourceclaim.Name(pod, &pod.Spec.ResourceClaims[i]) - claimInfo := m.cache.get(claimName, pod.Namespace) + claimName, _, err := resourceclaim.Name(pod, &pod.Spec.ResourceClaims[i]) + if err != nil { + return fmt.Errorf("unprepare resource claim: %v", err) + } + + // The claim name might be nil if no underlying resource claim + // was generated for the referenced claim. There are valid use + // cases when this might happen, so we simply skip it. 
+ if claimName == nil { + continue + } + + claimInfo := m.cache.get(*claimName, pod.Namespace) // Skip calling NodeUnprepareResource if claim info is not cached if claimInfo == nil { @@ -241,27 +343,8 @@ func (m *ManagerImpl) UnprepareResources(pod *v1.Pod) error { continue } - // Query claim object from the API server - resourceClaim, err := m.kubeClient.ResourceV1alpha2().ResourceClaims(pod.Namespace).Get( - context.TODO(), - claimName, - metav1.GetOptions{}) - if err != nil { - return fmt.Errorf("failed to fetch ResourceClaim %s referenced by pod %s: %+v", claimName, pod.Name, err) - } - - // Grab the allocation.resourceHandles. If there are no - // allocation.resourceHandles, create a single resourceHandle with no - // content. This will trigger processing of this claim by a single - // kubelet plugin whose name matches resourceClaim.Status.DriverName. - resourceHandles := resourceClaim.Status.Allocation.ResourceHandles - if len(resourceHandles) == 0 { - resourceHandles = make([]resourcev1alpha2.ResourceHandle, 1) - } - - // Loop through all plugins and call NodeUnprepareResource only for the - // last pod that references the claim - for _, resourceHandle := range resourceHandles { + // Loop through all plugins and prepare for calling NodeUnprepareResources. 
+ for _, resourceHandle := range claimInfo.ResourceHandles { // If no DriverName is provided in the resourceHandle, we // use the DriverName from the status pluginName := resourceHandle.DriverName @@ -269,38 +352,62 @@ func (m *ManagerImpl) UnprepareResources(pod *v1.Pod) error { pluginName = claimInfo.DriverName } - // Call NodeUnprepareResource RPC for each resourceHandle - client, err := dra.NewDRAPluginClient(pluginName) - if err != nil { - return fmt.Errorf("failed to get DRA Plugin client for plugin name %s, err=%+v", pluginName, err) - } - response, err := client.NodeUnprepareResource( - context.Background(), - claimInfo.Namespace, - claimInfo.ClaimUID, - claimInfo.ClaimName, - resourceHandle.Data) - if err != nil { - return fmt.Errorf( - "NodeUnprepareResource failed, pod: %s, claim UID: %s, claim name: %s, CDI devices: %s, err: %+v", - pod.Name, claimInfo.ClaimUID, claimInfo.ClaimName, claimInfo.CDIDevices, err) + claim := &drapb.Claim{ + Namespace: claimInfo.Namespace, + Uid: string(claimInfo.ClaimUID), + Name: claimInfo.ClaimName, + ResourceHandle: resourceHandle.Data, } - klog.V(3).InfoS("NodeUnprepareResource succeeded", "response", response) + batches[pluginName] = append(batches[pluginName], claim) } + claimInfos[claimInfo.ClaimUID] = claimInfo + } - // Delete last pod UID only if all NodeUnprepareResource calls succeed. - // This ensures that the status manager doesn't enter termination status - // for the pod. This logic is implemented in - // m.PodMightNeedToUnprepareResources and claimInfo.hasPodReference. - claimInfo.deletePodReference(pod.UID) - m.cache.delete(claimInfo.ClaimName, pod.Namespace) + // Call NodeUnprepareResources for all claims in each batch. + // If there is any error, processing gets aborted. + // We could try to continue, but that would make the code more complex. + for pluginName, claims := range batches { + // Call NodeUnprepareResources RPC for all resource handles. 
+ client, err := dra.NewDRAPluginClient(pluginName) + if err != nil { + return fmt.Errorf("failed to get DRA Plugin client for plugin name %s: %v", pluginName, err) + } + response, err := client.NodeUnprepareResources(context.Background(), &drapb.NodeUnprepareResourcesRequest{Claims: claims}) + if err != nil { + // General error unrelated to any particular claim. + return fmt.Errorf("NodeUnprepareResources failed: %v", err) + } - // Checkpoint to reduce redundant calls to NodeUnPrepareResource() after a kubelet restart. + for claimUID, result := range response.Claims { + reqClaim := lookupClaimRequest(claims, claimUID) + if reqClaim == nil { + return fmt.Errorf("NodeUnprepareResources returned result for unknown claim UID %s", claimUID) + } + if result.Error != "" { + return fmt.Errorf("NodeUnprepareResources failed for claim %s/%s: %s", reqClaim.Namespace, reqClaim.Name, result.Error) + } + + // Delete last pod UID only if unprepare succeeds. + // This ensures that the status manager doesn't enter termination status + // for the pod. This logic is implemented in + // m.PodMightNeedToUnprepareResources and claimInfo.hasPodReference. + claimInfo := claimInfos[types.UID(claimUID)] + claimInfo.deletePodReference(pod.UID) + m.cache.delete(claimInfo.ClaimName, pod.Namespace) + } + + // Checkpoint to reduce redundant calls to NodeUnprepareResources after a kubelet restart. err = m.cache.syncToCheckpoint() if err != nil { return fmt.Errorf("failed to checkpoint claimInfo state, err: %+v", err) } + + unfinished := len(claims) - len(response.Claims) + if unfinished != 0 { + return fmt.Errorf("NodeUnprepareResources left out %d claims", unfinished) + } } + // Checkpoint to capture all of the previous deletePodReference() calls. 
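Both Prepare and Unprepare now apply the same post-RPC bookkeeping: every per-claim result must map back to a requested claim, a per-claim error aborts processing, and any claim the plugin left without a result is reported as "unfinished". That reconciliation can be sketched in isolation (stand-in types, not the kubelet's own):

```go
package main

import "fmt"

// claimRef identifies a requested claim; a stand-in for drapb.Claim.
type claimRef struct {
	UID, Name string
}

// checkResponses mirrors the hunks above: results is a map of claim UID
// to the per-claim error string ("" on success) returned by the plugin.
func checkResponses(requested []claimRef, results map[string]string) error {
	byUID := make(map[string]claimRef, len(requested))
	for _, c := range requested {
		byUID[c.UID] = c
	}
	for uid, errMsg := range results {
		c, ok := byUID[uid]
		if !ok {
			return fmt.Errorf("result for unknown claim UID %s", uid)
		}
		if errMsg != "" {
			return fmt.Errorf("claim %s failed: %s", c.Name, errMsg)
		}
	}
	// A plugin that silently dropped claims is treated as an error too.
	if unfinished := len(requested) - len(results); unfinished != 0 {
		return fmt.Errorf("left out %d claims", unfinished)
	}
	return nil
}

func main() {
	err := checkResponses(
		[]claimRef{{UID: "u1", Name: "a"}, {UID: "u2", Name: "b"}},
		map[string]string{"u1": ""}, // u2 got no result
	)
	fmt.Println(err) // left out 1 claims
}
```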
err := m.cache.syncToCheckpoint() if err != nil { @@ -320,15 +427,18 @@ func (m *ManagerImpl) GetContainerClaimInfos(pod *v1.Pod, container *v1.Containe claimInfos := make([]*ClaimInfo, 0, len(pod.Spec.ResourceClaims)) for i, podResourceClaim := range pod.Spec.ResourceClaims { - claimName := resourceclaim.Name(pod, &pod.Spec.ResourceClaims[i]) + claimName, _, err := resourceclaim.Name(pod, &pod.Spec.ResourceClaims[i]) + if err != nil { + return nil, fmt.Errorf("determine resource claim information: %v", err) + } for _, claim := range container.Resources.Claims { if podResourceClaim.Name != claim.Name { continue } - claimInfo := m.cache.get(claimName, pod.Namespace) + claimInfo := m.cache.get(*claimName, pod.Namespace) if claimInfo == nil { - return nil, fmt.Errorf("unable to get resource for namespace: %s, claim: %s", pod.Namespace, claimName) + return nil, fmt.Errorf("unable to get resource for namespace: %s, claim: %s", pod.Namespace, *claimName) } claimInfos = append(claimInfos, claimInfo) } diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/dra/plugin/client.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/dra/plugin/client.go index 395ec5b29b6d..a18dcba2172e 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/dra/plugin/client.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/dra/plugin/client.go @@ -25,68 +25,53 @@ import ( "time" "google.golang.org/grpc" + grpccodes "google.golang.org/grpc/codes" "google.golang.org/grpc/credentials/insecure" - "k8s.io/apimachinery/pkg/types" + grpcstatus "google.golang.org/grpc/status" "k8s.io/klog/v2" - drapbv1 "k8s.io/kubelet/pkg/apis/dra/v1alpha2" + drapbv1alpha2 "k8s.io/kubelet/pkg/apis/dra/v1alpha2" + drapb "k8s.io/kubelet/pkg/apis/dra/v1alpha3" ) -const PluginClientTimeout = 10 * time.Second - -type Client interface { - NodePrepareResource( - ctx context.Context, - namespace string, - claimUID types.UID, - claimName string, - resourceHandle 
string, - ) (*drapbv1.NodePrepareResourceResponse, error) - - NodeUnprepareResource( - ctx context.Context, - namespace string, - claimUID types.UID, - claimName string, - resourceHandle string, - ) (*drapbv1.NodeUnprepareResourceResponse, error) -} +const PluginClientTimeout = 45 * time.Second // Strongly typed address. type draAddr string // draPluginClient encapsulates all dra plugin methods. type draPluginClient struct { - pluginName string - addr draAddr - nodeV1ClientCreator nodeV1ClientCreator + pluginName string + addr draAddr + nodeClientCreator nodeClientCreator } -var _ Client = &draPluginClient{} +var _ drapb.NodeClient = &draPluginClient{} -type nodeV1ClientCreator func(addr draAddr) ( - nodeClient drapbv1.NodeClient, +type nodeClientCreator func(addr draAddr) ( + nodeClient drapb.NodeClient, + nodeClientOld drapbv1alpha2.NodeClient, closer io.Closer, err error, ) -// newV1NodeClient creates a new NodeClient with the internally used gRPC +// newNodeClient creates a new NodeClient with the internally used gRPC // connection set up. It also returns a closer which must be called to close // the gRPC connection when the NodeClient is not used anymore. -// This is the default implementation for the nodeV1ClientCreator, used in +// This is the default implementation for the nodeClientCreator, used in // newDRAPluginClient. 
-func newV1NodeClient(addr draAddr) (nodeClient drapbv1.NodeClient, closer io.Closer, err error) { +func newNodeClient(addr draAddr) (nodeClient drapb.NodeClient, nodeClientOld drapbv1alpha2.NodeClient, closer io.Closer, err error) { var conn *grpc.ClientConn conn, err = newGrpcConn(addr) if err != nil { - return nil, nil, err + return nil, nil, nil, err } - return drapbv1.NewNodeClient(conn), conn, nil + return drapb.NewNodeClient(conn), drapbv1alpha2.NewNodeClient(conn), conn, nil } -func NewDRAPluginClient(pluginName string) (Client, error) { +func NewDRAPluginClient(pluginName string) (drapb.NodeClient, error) { if pluginName == "" { return nil, fmt.Errorf("plugin name is empty") } @@ -97,84 +82,114 @@ func NewDRAPluginClient(pluginName string) (Client, error) { } return &draPluginClient{ - pluginName: pluginName, - addr: draAddr(existingPlugin.endpoint), - nodeV1ClientCreator: newV1NodeClient, + pluginName: pluginName, + addr: draAddr(existingPlugin.endpoint), + nodeClientCreator: newNodeClient, }, nil } -func (r *draPluginClient) NodePrepareResource( +func (r *draPluginClient) NodePrepareResources( ctx context.Context, - namespace string, - claimUID types.UID, - claimName string, - resourceHandle string, -) (*drapbv1.NodePrepareResourceResponse, error) { - klog.V(4).InfoS( - log("calling NodePrepareResource rpc"), - "namespace", namespace, - "claimUID", claimUID, - "claimName", claimName, - "resourceHandle", resourceHandle) - - if r.nodeV1ClientCreator == nil { - return nil, errors.New("failed to call NodePrepareResource. 
nodeV1ClientCreator is nil") + req *drapb.NodePrepareResourcesRequest, + opts ...grpc.CallOption, +) (resp *drapb.NodePrepareResourcesResponse, err error) { + logger := klog.FromContext(ctx) + logger.V(4).Info(log("calling NodePrepareResources rpc"), "request", req) + defer logger.V(4).Info(log("done calling NodePrepareResources rpc"), "response", resp, "err", err) + + if r.nodeClientCreator == nil { + return nil, errors.New("failed to call NodePrepareResources. nodeClientCreator is nil") } - nodeClient, closer, err := r.nodeV1ClientCreator(r.addr) + nodeClient, nodeClientOld, closer, err := r.nodeClientCreator(r.addr) if err != nil { return nil, err } defer closer.Close() - req := &drapbv1.NodePrepareResourceRequest{ - Namespace: namespace, - ClaimUid: string(claimUID), - ClaimName: claimName, - ResourceHandle: resourceHandle, - } - ctx, cancel := context.WithTimeout(ctx, PluginClientTimeout) defer cancel() - return nodeClient.NodePrepareResource(ctx, req) + resp, err = nodeClient.NodePrepareResources(ctx, req) + if err != nil { + status, _ := grpcstatus.FromError(err) + if status.Code() == grpccodes.Unimplemented { + // Fall back to the older gRPC API. 
+ resp = &drapb.NodePrepareResourcesResponse{ + Claims: make(map[string]*drapb.NodePrepareResourceResponse), + } + err = nil + for _, claim := range req.Claims { + respOld, errOld := nodeClientOld.NodePrepareResource(ctx, + &drapbv1alpha2.NodePrepareResourceRequest{ + Namespace: claim.Namespace, + ClaimUid: claim.Uid, + ClaimName: claim.Name, + ResourceHandle: claim.ResourceHandle, + }) + result := &drapb.NodePrepareResourceResponse{} + if errOld != nil { + result.Error = errOld.Error() + } else { + result.CDIDevices = respOld.CdiDevices + } + resp.Claims[claim.Uid] = result + } + } + } + + return } -func (r *draPluginClient) NodeUnprepareResource( +func (r *draPluginClient) NodeUnprepareResources( ctx context.Context, - namespace string, - claimUID types.UID, - claimName string, - resourceHandle string, -) (*drapbv1.NodeUnprepareResourceResponse, error) { - klog.V(4).InfoS( - log("calling NodeUnprepareResource rpc"), - "namespace", namespace, - "claimUID", claimUID, - "claimname", claimName, - "resourceHandle", resourceHandle) - - if r.nodeV1ClientCreator == nil { - return nil, errors.New("nodeV1ClientCreate is nil") + req *drapb.NodeUnprepareResourcesRequest, + opts ...grpc.CallOption, +) (resp *drapb.NodeUnprepareResourcesResponse, err error) { + logger := klog.FromContext(ctx) + logger.V(4).Info(log("calling NodeUnprepareResources rpc"), "request", req) + defer logger.V(4).Info(log("done calling NodeUnprepareResources rpc"), "response", resp, "err", err) + + if r.nodeClientCreator == nil { + return nil, errors.New("failed to call NodeUnprepareResources. 
nodeClientCreator is nil") } - nodeClient, closer, err := r.nodeV1ClientCreator(r.addr) + nodeClient, nodeClientOld, closer, err := r.nodeClientCreator(r.addr) if err != nil { return nil, err } defer closer.Close() - req := &drapbv1.NodeUnprepareResourceRequest{ - Namespace: namespace, - ClaimUid: string(claimUID), - ClaimName: claimName, - ResourceHandle: resourceHandle, - } - ctx, cancel := context.WithTimeout(ctx, PluginClientTimeout) defer cancel() - return nodeClient.NodeUnprepareResource(ctx, req) + resp, err = nodeClient.NodeUnprepareResources(ctx, req) + if err != nil { + status, _ := grpcstatus.FromError(err) + if status.Code() == grpccodes.Unimplemented { + // Fall back to the older gRPC API. + resp = &drapb.NodeUnprepareResourcesResponse{ + Claims: make(map[string]*drapb.NodeUnprepareResourceResponse), + } + err = nil + for _, claim := range req.Claims { + _, errOld := nodeClientOld.NodeUnprepareResource(ctx, + &drapbv1alpha2.NodeUnprepareResourceRequest{ + Namespace: claim.Namespace, + ClaimUid: claim.Uid, + ClaimName: claim.Name, + ResourceHandle: claim.ResourceHandle, + }) + result := &drapb.NodeUnprepareResourceResponse{} + if errOld != nil { + result.Error = errOld.Error() + } + resp.Claims[claim.Uid] = result + } + } + } + + return } func newGrpcConn(addr draAddr) (*grpc.ClientConn, error) { diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/dra/state/checkpoint.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/dra/state/checkpoint.go index 7cce61181822..7c44f12eea9d 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/dra/state/checkpoint.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/dra/state/checkpoint.go @@ -18,9 +18,14 @@ package state import ( "encoding/json" + "fmt" + "hash/fnv" + "strings" + "k8s.io/apimachinery/pkg/util/dump" "k8s.io/kubernetes/pkg/kubelet/checkpointmanager" "k8s.io/kubernetes/pkg/kubelet/checkpointmanager/checksum" + 
"k8s.io/kubernetes/pkg/kubelet/checkpointmanager/errors" ) var _ checkpointmanager.Checkpoint = &DRAManagerCheckpoint{} @@ -34,9 +39,20 @@ type DRAManagerCheckpoint struct { Checksum checksum.Checksum `json:"checksum"` } +// DRAManagerCheckpointWithoutResourceHandles is an old implementation of the DRAManagerCheckpoint struct +type DRAManagerCheckpointWithoutResourceHandles struct { + Version string `json:"version"` + Entries ClaimInfoStateListWithoutResourceHandles `json:"entries,omitempty"` + Checksum checksum.Checksum `json:"checksum"` +} + // List of claim info to store in checkpoint type ClaimInfoStateList []ClaimInfoState +// List of claim info to store in checkpoint +// TODO: remove in Beta +type ClaimInfoStateListWithoutResourceHandles []ClaimInfoStateWithoutResourceHandles + // NewDRAManagerCheckpoint returns an instance of Checkpoint func NewDRAManagerCheckpoint() *DRAManagerCheckpoint { return &DRAManagerCheckpoint{ @@ -63,6 +79,44 @@ func (dc *DRAManagerCheckpoint) VerifyChecksum() error { ck := dc.Checksum dc.Checksum = 0 err := ck.Verify(dc) + if err == errors.ErrCorruptCheckpoint { + // Verify with old structs without ResourceHandles field + // TODO: remove in Beta + err = verifyChecksumWithoutResourceHandles(dc, ck) + } dc.Checksum = ck return err } + +// verifyChecksumWithoutResourceHandles is a helper function that verifies checksum of the +// checkpoint in the old format, without ResourceHandles field. +// TODO: remove in Beta. 
+func verifyChecksumWithoutResourceHandles(dc *DRAManagerCheckpoint, checkSum checksum.Checksum) error { + entries := ClaimInfoStateListWithoutResourceHandles{} + for _, entry := range dc.Entries { + entries = append(entries, ClaimInfoStateWithoutResourceHandles{ + DriverName: entry.DriverName, + ClassName: entry.ClassName, + ClaimUID: entry.ClaimUID, + ClaimName: entry.ClaimName, + Namespace: entry.Namespace, + PodUIDs: entry.PodUIDs, + CDIDevices: entry.CDIDevices, + }) + } + oldcheckpoint := &DRAManagerCheckpointWithoutResourceHandles{ + Version: checkpointVersion, + Entries: entries, + Checksum: 0, + } + // Calculate checksum for old checkpoint + object := dump.ForHash(oldcheckpoint) + object = strings.Replace(object, "DRAManagerCheckpointWithoutResourceHandles", "DRAManagerCheckpoint", 1) + object = strings.Replace(object, "ClaimInfoStateListWithoutResourceHandles", "ClaimInfoStateList", 1) + hash := fnv.New32a() + fmt.Fprintf(hash, "%v", object) + if checkSum != checksum.Checksum(hash.Sum32()) { + return errors.ErrCorruptCheckpoint + } + return nil +} diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/dra/state/state_checkpoint.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/dra/state/state_checkpoint.go index 5a4b7dce7d1e..a391f0a13cac 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/dra/state/state_checkpoint.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/dra/state/state_checkpoint.go @@ -20,6 +20,7 @@ import ( "fmt" "sync" + resourcev1alpha2 "k8s.io/api/resource/v1alpha2" "k8s.io/apimachinery/pkg/types" "k8s.io/apimachinery/pkg/util/sets" "k8s.io/kubernetes/pkg/kubelet/checkpointmanager" @@ -54,6 +55,35 @@ type ClaimInfoState struct { // PodUIDs is a set of pod UIDs that reference a resource PodUIDs sets.Set[string] + // ResourceHandles is a list of opaque resource data for processing by a specific kubelet plugin + ResourceHandles []resourcev1alpha2.ResourceHandle + + // 
CDIDevices is a map of DriverName --> CDI devices returned by the + // GRPC API call NodePrepareResource + CDIDevices map[string][]string +} + +// ClaimInfoStateWithoutResourceHandles is an old implementation of the ClaimInfoState +// TODO: remove in Beta +type ClaimInfoStateWithoutResourceHandles struct { + // Name of the DRA driver + DriverName string + + // ClassName is a resource class of the claim + ClassName string + + // ClaimUID is an UID of the resource claim + ClaimUID types.UID + + // ClaimName is a name of the resource claim + ClaimName string + + // Namespace is a claim namespace + Namespace string + + // PodUIDs is a set of pod UIDs that reference a resource + PodUIDs sets.Set[string] + // CDIDevices is a map of DriverName --> CDI devices returned by the // GRPC API call NodePrepareResource CDIDevices map[string][]string @@ -67,6 +97,10 @@ type stateCheckpoint struct { // NewCheckpointState creates new State for keeping track of claim info with checkpoint backend func NewCheckpointState(stateDir, checkpointName string) (*stateCheckpoint, error) { + if len(checkpointName) == 0 { + return nil, fmt.Errorf("received empty string instead of checkpointName") + } + checkpointManager, err := checkpointmanager.NewCheckpointManager(stateDir) if err != nil { return nil, fmt.Errorf("failed to initialize checkpoint manager: %v", err) diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/helpers.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/helpers.go index dbbea4a80404..6be3e2723072 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/helpers.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/helpers.go @@ -17,7 +17,14 @@ limitations under the License. 
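The checkpoint-compatibility hunk above verifies old checkpoints by rebuilding the legacy structs, pretty-printing them with `dump.ForHash`, renaming the compatibility types back to the original names, and hashing with FNV-32a. The renaming trick can be demonstrated on its own; the input string here is a hypothetical stand-in for a `dump.ForHash` result, not real checkpoint data:

```go
package main

import (
	"fmt"
	"hash/fnv"
	"strings"
)

// checksumForHashString mimics the verification trick in the hunk above:
// before hashing, the compatibility struct names are replaced with the
// original names so the FNV-32a checksum matches what the old on-disk
// format recorded.
func checksumForHashString(object string) uint32 {
	object = strings.Replace(object, "DRAManagerCheckpointWithoutResourceHandles", "DRAManagerCheckpoint", 1)
	object = strings.Replace(object, "ClaimInfoStateListWithoutResourceHandles", "ClaimInfoStateList", 1)
	hash := fnv.New32a()
	fmt.Fprintf(hash, "%v", object)
	return hash.Sum32()
}

func main() {
	old := checksumForHashString("DRAManagerCheckpointWithoutResourceHandles{Version: v1}")
	direct := checksumForHashString("DRAManagerCheckpoint{Version: v1}")
	fmt.Println(old == direct) // true
}
```

Because the dump of the renamed struct is byte-identical to the dump of the original struct, both checksums agree and the old checkpoint passes verification.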
package cm import ( + "context" + "k8s.io/api/core/v1" + "k8s.io/apimachinery/pkg/util/sets" + internalapi "k8s.io/cri-api/pkg/apis" + runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1" + "k8s.io/klog/v2" + "k8s.io/kubernetes/pkg/kubelet/cm/containermap" evictionapi "k8s.io/kubernetes/pkg/kubelet/eviction/api" ) @@ -44,3 +51,28 @@ func hardEvictionReservation(thresholds []evictionapi.Threshold, capacity v1.Res } return ret } + +func buildContainerMapAndRunningSetFromRuntime(ctx context.Context, runtimeService internalapi.RuntimeService) (containermap.ContainerMap, sets.String) { + podSandboxMap := make(map[string]string) + podSandboxList, _ := runtimeService.ListPodSandbox(ctx, nil) + for _, p := range podSandboxList { + podSandboxMap[p.Id] = p.Metadata.Uid + } + + runningSet := sets.NewString() + containerMap := containermap.NewContainerMap() + containerList, _ := runtimeService.ListContainers(ctx, nil) + for _, c := range containerList { + if _, exists := podSandboxMap[c.PodSandboxId]; !exists { + klog.InfoS("No PodSandBox found for the container", "podSandboxId", c.PodSandboxId, "containerName", c.Metadata.Name, "containerId", c.Id) + continue + } + podUID := podSandboxMap[c.PodSandboxId] + containerMap.Add(podUID, c.Metadata.Name, c.Id) + if c.State == runtimeapi.ContainerState_CONTAINER_RUNNING { + klog.V(4).InfoS("Container reported running", "podSandboxId", c.PodSandboxId, "podUID", podUID, "containerName", c.Metadata.Name, "containerId", c.Id) + runningSet.Insert(c.Id) + } + } + return containerMap, runningSet +} diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/helpers_linux.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/helpers_linux.go index 18b0df17bfca..8a144e7a73cc 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/helpers_linux.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/helpers_linux.go @@ -196,7 +196,7 @@ func ResourceConfigForPod(pod *v1.Pod, enforceCPULimits bool, 
cpuPeriod uint64, } if memoryMin > 0 { result.Unified = map[string]string{ - MemoryMin: strconv.FormatInt(memoryMin, 10), + Cgroup2MemoryMin: strconv.FormatInt(memoryMin, 10), } } } diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/node_container_manager_linux.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/node_container_manager_linux.go index 74221c67047b..b57403dd95ba 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/node_container_manager_linux.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/node_container_manager_linux.go @@ -147,7 +147,7 @@ func enforceExistingCgroup(cgroupManager CgroupManager, cName CgroupName, rl v1. if rp.Unified == nil { rp.Unified = make(map[string]string) } - rp.Unified[MemoryMin] = strconv.FormatInt(*rp.Memory, 10) + rp.Unified[Cgroup2MemoryMin] = strconv.FormatInt(*rp.Memory, 10) } } diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/qos_container_manager_linux.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/qos_container_manager_linux.go index 89b3adae9af6..abf4487ee5df 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/qos_container_manager_linux.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/qos_container_manager_linux.go @@ -292,7 +292,7 @@ func (m *qosContainerManagerImpl) setMemoryQoS(configs map[v1.PodQOSClass]*Cgrou if configs[v1.PodQOSBurstable].ResourceParameters.Unified == nil { configs[v1.PodQOSBurstable].ResourceParameters.Unified = make(map[string]string) } - configs[v1.PodQOSBurstable].ResourceParameters.Unified[MemoryMin] = strconv.FormatInt(burstableMin, 10) + configs[v1.PodQOSBurstable].ResourceParameters.Unified[Cgroup2MemoryMin] = strconv.FormatInt(burstableMin, 10) klog.V(4).InfoS("MemoryQoS config for qos", "qos", v1.PodQOSBurstable, "memoryMin", burstableMin) } @@ -300,7 +300,7 @@ func (m *qosContainerManagerImpl) setMemoryQoS(configs 
map[v1.PodQOSClass]*Cgrou if configs[v1.PodQOSGuaranteed].ResourceParameters.Unified == nil { configs[v1.PodQOSGuaranteed].ResourceParameters.Unified = make(map[string]string) } - configs[v1.PodQOSGuaranteed].ResourceParameters.Unified[MemoryMin] = strconv.FormatInt(guaranteedMin, 10) + configs[v1.PodQOSGuaranteed].ResourceParameters.Unified[Cgroup2MemoryMin] = strconv.FormatInt(guaranteedMin, 10) klog.V(4).InfoS("MemoryQoS config for qos", "qos", v1.PodQOSGuaranteed, "memoryMin", guaranteedMin) } } diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/topologymanager/policy_options.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/topologymanager/policy_options.go index 39fff52b7893..15f94c696d2f 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/topologymanager/policy_options.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/topologymanager/policy_options.go @@ -30,10 +30,10 @@ const ( ) var ( - alphaOptions = sets.NewString( + alphaOptions = sets.NewString() + betaOptions = sets.NewString( PreferClosestNUMANodes, ) - betaOptions = sets.NewString() stableOptions = sets.NewString() ) diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/topologymanager/scope.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/topologymanager/scope.go index ed149df5bade..db3edd63e647 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/topologymanager/scope.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/topologymanager/scope.go @@ -31,6 +31,8 @@ const ( containerTopologyScope = "container" // podTopologyScope specifies the TopologyManagerScope per pod. podTopologyScope = "pod" + // noneTopologyScope specifies the TopologyManagerScope when topologyPolicyName is none. 
+	noneTopologyScope = "none"
 )
 
 type podTopologyHints map[string]map[string]TopologyHint
diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/topologymanager/scope_container.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/topologymanager/scope_container.go
index fd90ac549fbd..857ac2e9ae00 100644
--- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/topologymanager/scope_container.go
+++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/topologymanager/scope_container.go
@@ -45,11 +45,6 @@ func NewContainerScope(policy Policy) Scope {
 }
 
 func (s *containerScope) Admit(pod *v1.Pod) lifecycle.PodAdmitResult {
-	// Exception - Policy : none
-	if s.policy.Name() == PolicyNone {
-		return s.admitPolicyNone(pod)
-	}
-
 	for _, container := range append(pod.Spec.InitContainers, pod.Spec.Containers...) {
 		bestHint, admit := s.calculateAffinity(pod, &container)
 		klog.InfoS("Best TopologyHint", "bestHint", bestHint, "pod", klog.KObj(pod), "containerName", container.Name)
diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/topologymanager/scope_none.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/topologymanager/scope_none.go
new file mode 100644
index 000000000000..c82b19e1f9cb
--- /dev/null
+++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/topologymanager/scope_none.go
@@ -0,0 +1,46 @@
+/*
+Copyright 2023 The Kubernetes Authors.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package topologymanager
+
+import (
+	"k8s.io/api/core/v1"
+	"k8s.io/kubernetes/pkg/kubelet/cm/containermap"
+	"k8s.io/kubernetes/pkg/kubelet/lifecycle"
+)
+
+type noneScope struct {
+	scope
+}
+
+// Ensure noneScope implements Scope interface
+var _ Scope = &noneScope{}
+
+// NewNoneScope returns a none scope.
+func NewNoneScope() Scope {
+	return &noneScope{
+		scope{
+			name:             noneTopologyScope,
+			podTopologyHints: podTopologyHints{},
+			policy:           NewNonePolicy(),
+			podMap:           containermap.NewContainerMap(),
+		},
+	}
+}
+
+func (s *noneScope) Admit(pod *v1.Pod) lifecycle.PodAdmitResult {
+	return s.admitPolicyNone(pod)
+}
diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/topologymanager/scope_pod.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/topologymanager/scope_pod.go
index ffcf79171676..2dc1773fb3d7 100644
--- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/topologymanager/scope_pod.go
+++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/topologymanager/scope_pod.go
@@ -45,11 +45,6 @@ func NewPodScope(policy Policy) Scope {
 }
 
 func (s *podScope) Admit(pod *v1.Pod) lifecycle.PodAdmitResult {
-	// Exception - Policy : none
-	if s.policy.Name() == PolicyNone {
-		return s.admitPolicyNone(pod)
-	}
-
 	bestHint, admit := s.calculateAffinity(pod)
 	klog.InfoS("Best TopologyHint", "bestHint", bestHint, "pod", klog.KObj(pod))
 	if !admit {
diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/topologymanager/topology_manager.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/topologymanager/topology_manager.go
index 567736e82d3b..b2bf858def61 100644
--- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/topologymanager/topology_manager.go
+++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/topologymanager/topology_manager.go
@@ -21,7 +21,7 @@ import (
 	"time"
 
 	cadvisorapi "github.com/google/cadvisor/info/v1"
-	"k8s.io/api/core/v1"
+	v1 "k8s.io/api/core/v1"
 	"k8s.io/klog/v2"
 	"k8s.io/kubernetes/pkg/kubelet/cm/topologymanager/bitmask"
 	"k8s.io/kubernetes/pkg/kubelet/lifecycle"
@@ -133,13 +133,19 @@ var _ Manager = &manager{}
 
 // NewManager creates a new TopologyManager based on provided policy and scope
 func NewManager(topology []cadvisorapi.Node, topologyPolicyName string, topologyScopeName string, topologyPolicyOptions map[string]string) (Manager, error) {
-	klog.InfoS("Creating topology manager with policy per scope", "topologyPolicyName", topologyPolicyName, "topologyScopeName", topologyScopeName)
+	// When policy is none, the scope is not relevant, so we can short circuit here.
+	if topologyPolicyName == PolicyNone {
+		klog.InfoS("Creating topology manager with none policy")
+		return &manager{scope: NewNoneScope()}, nil
+	}
 
 	opts, err := NewPolicyOptions(topologyPolicyOptions)
 	if err != nil {
 		return nil, err
 	}
 
+	klog.InfoS("Creating topology manager with policy per scope", "topologyPolicyName", topologyPolicyName, "topologyScopeName", topologyScopeName, "topologyPolicyOptions", opts)
+
 	numaInfo, err := NewNUMAInfo(topology, opts)
 	if err != nil {
 		return nil, fmt.Errorf("cannot discover NUMA topology: %w", err)
@@ -152,9 +158,6 @@ func NewManager(topology []cadvisorapi.Node, topologyPolicyName string, topology
 	var policy Policy
 
 	switch topologyPolicyName {
-	case PolicyNone:
-		policy = NewNonePolicy()
-
 	case PolicyBestEffort:
 		policy = NewBestEffortPolicy(numaInfo, opts)
 
@@ -209,7 +212,7 @@ func (m *manager) RemoveContainer(containerID string) error {
 }
 
 func (m *manager) Admit(attrs *lifecycle.PodAdmitAttributes) lifecycle.PodAdmitResult {
-	klog.InfoS("Topology Admit Handler")
+	klog.InfoS("Topology Admit Handler", "podUID", attrs.Pod.UID, "podNamespace", attrs.Pod.Namespace, "podName", attrs.Pod.Name)
 	metrics.TopologyManagerAdmissionRequestsTotal.Inc()
 
 	startTime := time.Now()
diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/dra/cdi.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/util/cdi/cdi.go
similarity index 98%
rename from cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/dra/cdi.go
rename to cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/util/cdi/cdi.go
index b6118337817b..b228b5a66d8d 100644
--- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/dra/cdi.go
+++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/util/cdi/cdi.go
@@ -23,7 +23,7 @@ limitations under the License.
 // Long term it would be good to avoid this duplication:
 // https://github.com/container-orchestrated-devices/container-device-interface/issues/97
 
-package dra
+package cdi
 
 import (
 	"errors"
@@ -39,12 +39,15 @@ const (
 	annotationPrefix = "cdi.k8s.io/"
 )
 
-// generate container annotations using CDI UpdateAnnotations API.
-func generateCDIAnnotations(
+// GenerateAnnotations generate container annotations using CDI UpdateAnnotations API.
+func GenerateAnnotations(
 	claimUID types.UID,
 	driverName string,
 	cdiDevices []string,
 ) ([]kubecontainer.Annotation, error) {
+	if len(cdiDevices) == 0 {
+		return nil, nil
+	}
 	annotations, err := updateAnnotations(map[string]string{}, driverName, string(claimUID), cdiDevices)
 	if err != nil {
 		return nil, fmt.Errorf("can't generate CDI annotations: %+v", err)
diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/config/file_linux.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/config/file_linux.go
index f672c9629825..42d86f868723 100644
--- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/config/file_linux.go
+++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/config/file_linux.go
@@ -140,14 +140,14 @@ func (s *sourceFile) consumeWatchEvent(e *watchEvent) error {
 			pod, podExist, err := s.store.GetByKey(objKey)
 			if err != nil {
 				return err
-			} else if !podExist {
+			}
+			if !podExist {
 				return fmt.Errorf("the pod with key %s doesn't exist in cache", objKey)
-			} else {
-				if err = s.store.Delete(pod); err != nil {
-					return fmt.Errorf("failed to remove deleted pod from cache: %v", err)
-				}
-				delete(s.fileKeyMapping, e.fileName)
 			}
+			if err = s.store.Delete(pod); err != nil {
+				return fmt.Errorf("failed to remove deleted pod from cache: %v", err)
+			}
+			delete(s.fileKeyMapping, e.fileName)
 		}
 	}
 	return nil
diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/container/helpers.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/container/helpers.go
index 0388d579e55a..4bffea17eea8 100644
--- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/container/helpers.go
+++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/container/helpers.go
@@ -397,3 +397,27 @@ func MakePortMappings(container *v1.Container) (ports []PortMapping) {
 	}
 	return
 }
+
+// HasAnyRegularContainerStarted returns true if any regular container has
+// started, which indicates all init containers have been initialized.
+func HasAnyRegularContainerStarted(spec *v1.PodSpec, statuses []v1.ContainerStatus) bool {
+	if len(statuses) == 0 {
+		return false
+	}
+
+	containerNames := make(map[string]struct{})
+	for _, c := range spec.Containers {
+		containerNames[c.Name] = struct{}{}
+	}
+
+	for _, status := range statuses {
+		if _, ok := containerNames[status.Name]; !ok {
+			continue
+		}
+		if status.State.Running != nil || status.State.Terminated != nil {
+			return true
+		}
+	}
+
+	return false
+}
diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/container/runtime.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/container/runtime.go
index 8a154f272c8e..7fa8f44ef736 100644
--- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/container/runtime.go
+++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/container/runtime.go
@@ -480,12 +480,6 @@ type RunContainerOptions struct {
 	ReadOnly bool
 	// hostname for pod containers
 	Hostname string
-	// EnableHostUserNamespace sets userns=host when users request host namespaces (pid, ipc, net),
-	// are using non-namespaced capabilities (mknod, sys_time, sys_module), the pod contains a privileged container,
-	// or using host path volumes.
-	// This should only be enabled when the container runtime is performing user remapping AND if the
-	// experimental behavior is desired.
-	EnableHostUserNamespace bool
 }
 
 // VolumeInfo contains information about the volume.
diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/container/testing/fake_runtime.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/container/testing/fake_runtime.go
index 3b89f5c65608..bf82205303cf 100644
--- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/container/testing/fake_runtime.go
+++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/container/testing/fake_runtime.go
@@ -115,38 +115,6 @@ func (f *FakeRuntimeCache) ForceUpdateIfOlder(context.Context, time.Time) error
 	return nil
 }
 
-// ClearCalls resets the FakeRuntime to the initial state.
-func (f *FakeRuntime) ClearCalls() {
-	f.Lock()
-	defer f.Unlock()
-
-	f.CalledFunctions = []string{}
-	f.PodList = []*FakePod{}
-	f.AllPodList = []*FakePod{}
-	f.APIPodStatus = v1.PodStatus{}
-	f.StartedPods = []string{}
-	f.KilledPods = []string{}
-	f.StartedContainers = []string{}
-	f.KilledContainers = []string{}
-	f.RuntimeStatus = nil
-	f.VersionInfo = ""
-	f.RuntimeType = ""
-	f.Err = nil
-	f.InspectErr = nil
-	f.StatusErr = nil
-	f.BlockImagePulls = false
-	if f.imagePullTokenBucket != nil {
-		for {
-			select {
-			case f.imagePullTokenBucket <- true:
-			default:
-				f.imagePullTokenBucket = nil
-				return
-			}
-		}
-	}
-}
-
 // UpdatePodCIDR fulfills the cri interface.
 func (f *FakeRuntime) UpdatePodCIDR(_ context.Context, c string) error {
 	return nil
diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cri/remote/remote_image.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cri/remote/remote_image.go
index 1deff550fd8c..a1afc80b8a28 100644
--- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cri/remote/remote_image.go
+++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cri/remote/remote_image.go
@@ -25,7 +25,10 @@ import (
 	"go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc"
 	"go.opentelemetry.io/otel/trace"
 	"google.golang.org/grpc"
+	"google.golang.org/grpc/backoff"
+	"google.golang.org/grpc/codes"
 	"google.golang.org/grpc/credentials/insecure"
+	"google.golang.org/grpc/status"
 	utilfeature "k8s.io/apiserver/pkg/util/feature"
 	tracing "k8s.io/component-base/tracing"
 	"k8s.io/klog/v2"
@@ -53,7 +56,7 @@ func NewRemoteImageService(endpoint string, connectionTimeout time.Duration, tp
 	ctx, cancel := context.WithTimeout(context.Background(), connectionTimeout)
 	defer cancel()
 
-	dialOpts := []grpc.DialOption{}
+	var dialOpts []grpc.DialOption
 	dialOpts = append(dialOpts,
 		grpc.WithTransportCredentials(insecure.NewCredentials()),
 		grpc.WithContextDialer(dialer),
@@ -70,6 +73,16 @@ func NewRemoteImageService(endpoint string, connectionTimeout time.Duration, tp
 			grpc.WithStreamInterceptor(otelgrpc.StreamClientInterceptor(tracingOpts...)))
 	}
 
+	connParams := grpc.ConnectParams{
+		Backoff: backoff.DefaultConfig,
+	}
+	connParams.MinConnectTimeout = minConnectionTimeout
+	connParams.Backoff.BaseDelay = baseBackoffDelay
+	connParams.Backoff.MaxDelay = maxBackoffDelay
+	dialOpts = append(dialOpts,
+		grpc.WithConnectParams(connParams),
+	)
+
 	conn, err := grpc.DialContext(ctx, addr, dialOpts...)
 	if err != nil {
 		klog.ErrorS(err, "Connect remote image service failed", "address", addr)
@@ -165,6 +178,17 @@ func (r *remoteImageService) pullImageV1(ctx context.Context, image *runtimeapi.
 	})
 	if err != nil {
 		klog.ErrorS(err, "PullImage from image service failed", "image", image.Image)
+
+		// We can strip the code from unknown status errors since they add no value
+		// and will make them easier to read in the logs/events.
+		//
+		// It also ensures that checking custom error types from pkg/kubelet/images/types.go
+		// works in `imageManager.EnsureImageExists` (pkg/kubelet/images/image_manager.go).
+		statusErr, ok := status.FromError(err)
+		if ok && statusErr.Code() == codes.Unknown {
+			return "", errors.New(statusErr.Message())
+		}
+
 		return "", err
 	}
diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cri/remote/remote_runtime.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cri/remote/remote_runtime.go
index 18e6bf7275fb..64571069376b 100644
--- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cri/remote/remote_runtime.go
+++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cri/remote/remote_runtime.go
@@ -27,6 +27,7 @@ import (
 	"go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc"
 	"go.opentelemetry.io/otel/trace"
 	"google.golang.org/grpc"
+	"google.golang.org/grpc/backoff"
 	"google.golang.org/grpc/codes"
 	"google.golang.org/grpc/credentials/insecure"
 	"google.golang.org/grpc/status"
@@ -55,6 +56,11 @@ type remoteRuntimeService struct {
 const (
 	// How frequently to report identical errors
 	identicalErrorDelay = 1 * time.Minute
+
+	// connection parameters
+	maxBackoffDelay = 3 * time.Second
+	baseBackoffDelay = 100 * time.Millisecond
+	minConnectionTimeout = 5 * time.Second
 )
 
 // CRIVersion is the type for valid Container Runtime Interface (CRI) API
@@ -79,7 +85,7 @@ func NewRemoteRuntimeService(endpoint string, connectionTimeout time.Duration, t
 	ctx, cancel := context.WithTimeout(context.Background(), connectionTimeout)
 	defer cancel()
 
-	dialOpts := []grpc.DialOption{}
+	var dialOpts []grpc.DialOption
 	dialOpts = append(dialOpts,
 		grpc.WithTransportCredentials(insecure.NewCredentials()),
 		grpc.WithContextDialer(dialer),
@@ -95,6 +101,17 @@ func NewRemoteRuntimeService(endpoint string, connectionTimeout time.Duration, t
 			grpc.WithUnaryInterceptor(otelgrpc.UnaryClientInterceptor(tracingOpts...)),
 			grpc.WithStreamInterceptor(otelgrpc.StreamClientInterceptor(tracingOpts...)))
 	}
+
+	connParams := grpc.ConnectParams{
+		Backoff: backoff.DefaultConfig,
+	}
+	connParams.MinConnectTimeout = minConnectionTimeout
+	connParams.Backoff.BaseDelay = baseBackoffDelay
+	connParams.Backoff.MaxDelay = maxBackoffDelay
+	dialOpts = append(dialOpts,
+		grpc.WithConnectParams(connParams),
+	)
+
 	conn, err := grpc.DialContext(ctx, addr, dialOpts...)
 	if err != nil {
 		klog.ErrorS(err, "Connect remote runtime failed", "address", addr)
@@ -848,3 +865,18 @@ func (r *remoteRuntimeService) ListPodSandboxMetrics(ctx context.Context) ([]*ru
 
 	return resp.GetPodMetrics(), nil
 }
+
+// RuntimeConfig returns the configuration information of the runtime.
+func (r *remoteRuntimeService) RuntimeConfig(ctx context.Context) (*runtimeapi.RuntimeConfigResponse, error) {
+	ctx, cancel := context.WithTimeout(ctx, r.timeout)
+	defer cancel()
+
+	resp, err := r.runtimeClient.RuntimeConfig(ctx, &runtimeapi.RuntimeConfigRequest{})
+	if err != nil {
+		klog.ErrorS(err, "RuntimeConfig from runtime service failed")
+		return nil, err
+	}
+	klog.V(10).InfoS("[RemoteRuntimeService] RuntimeConfigResponse", "linuxConfig", resp.GetLinux())
+
+	return resp, nil
+}
diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/events/event.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/events/event.go
index 1b31dc7e605a..d08253989ae8 100644
--- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/events/event.go
+++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/events/event.go
@@ -74,6 +74,7 @@ const (
 	FailedCreatePodSandBox = "FailedCreatePodSandBox"
 	FailedStatusPodSandBox = "FailedPodSandBoxStatus"
 	FailedMountOnFilesystemMismatch = "FailedMountOnFilesystemMismatch"
+	FailedPrepareDynamicResources = "FailedPrepareDynamicResources"
 )
 
 // Image manager event reason list
diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/eviction/eviction_manager.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/eviction/eviction_manager.go
index 2c23d4241d75..e47b37a0d05e 100644
--- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/eviction/eviction_manager.go
+++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/eviction/eviction_manager.go
@@ -66,8 +66,6 @@ type managerImpl struct {
 	config Config
 	// the function to invoke to kill a pod
 	killPodFunc KillPodFunc
-	// the function to get the mirror pod by a given static pod
-	mirrorPodFunc MirrorPodFunc
 	// the interface that knows how to do image gc
 	imageGC ImageGC
 	// the interface that knows how to do container gc
@@ -112,7 +110,6 @@ func NewManager(
 	summaryProvider stats.SummaryProvider,
 	config Config,
 	killPodFunc KillPodFunc,
-	mirrorPodFunc MirrorPodFunc,
 	imageGC ImageGC,
 	containerGC ContainerGC,
 	recorder record.EventRecorder,
@@ -123,7 +120,6 @@ func NewManager(
 	manager := &managerImpl{
 		clock:         clock,
 		killPodFunc:   killPodFunc,
-		mirrorPodFunc: mirrorPodFunc,
 		imageGC:       imageGC,
 		containerGC:   containerGC,
 		config:        config,
diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/images/image_manager.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/images/image_manager.go
index 8798f023e954..6090b3732a53 100644
--- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/images/image_manager.go
+++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/images/image_manager.go
@@ -19,9 +19,9 @@ package images
 import (
 	"context"
 	"fmt"
+	"strings"
 	"time"
 
-	dockerref "github.com/docker/distribution/reference"
 	v1 "k8s.io/api/core/v1"
 	"k8s.io/apimachinery/pkg/types"
 	"k8s.io/client-go/tools/record"
@@ -29,8 +29,10 @@ import (
 	"k8s.io/klog/v2"
 	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
+	crierrors "k8s.io/cri-api/pkg/errors"
 	kubecontainer "k8s.io/kubernetes/pkg/kubelet/container"
 	"k8s.io/kubernetes/pkg/kubelet/events"
+	"k8s.io/kubernetes/pkg/util/parsers"
 )
 
 type ImagePodPullingTimeRecorder interface {
@@ -157,35 +159,63 @@ func (m *imageManager) EnsureImageExists(ctx context.Context, pod *v1.Pod, conta
 	if imagePullResult.err != nil {
 		m.logIt(ref, v1.EventTypeWarning, events.FailedToPullImage, logPrefix, fmt.Sprintf("Failed to pull image %q: %v", container.Image, imagePullResult.err), klog.Warning)
 		m.backOff.Next(backOffKey, m.backOff.Clock.Now())
-		if imagePullResult.err == ErrRegistryUnavailable {
-			msg := fmt.Sprintf("image pull failed for %s because the registry is unavailable.", container.Image)
-			return "", msg, imagePullResult.err
-		}
-		return "", imagePullResult.err.Error(), ErrImagePull
+		msg, err := evalCRIPullErr(container, imagePullResult.err)
+		return "", msg, err
 	}
 	m.podPullingTimeRecorder.RecordImageFinishedPulling(pod.UID)
-	m.logIt(ref, v1.EventTypeNormal, events.PulledImage, logPrefix, fmt.Sprintf("Successfully pulled image %q in %v (%v including waiting)", container.Image, imagePullResult.pullDuration, time.Since(startTime)), klog.Info)
+	m.logIt(ref, v1.EventTypeNormal, events.PulledImage, logPrefix, fmt.Sprintf("Successfully pulled image %q in %v (%v including waiting)",
+		container.Image, imagePullResult.pullDuration.Truncate(time.Millisecond), time.Since(startTime).Truncate(time.Millisecond)), klog.Info)
 	m.backOff.GC()
 	return imagePullResult.imageRef, "", nil
 }
+
+func evalCRIPullErr(container *v1.Container, err error) (errMsg string, errRes error) {
+	// Error assertions via errors.Is is not supported by gRPC (remote runtime) errors right now.
+	// See https://github.com/grpc/grpc-go/issues/3616
+	if strings.HasPrefix(err.Error(), crierrors.ErrRegistryUnavailable.Error()) {
+		errMsg = fmt.Sprintf(
+			"image pull failed for %s because the registry is unavailable%s",
+			container.Image,
+			// Trim the error name from the message to convert errors like:
+			// "RegistryUnavailable: a more detailed explanation" to:
+			// "...because the registry is unavailable: a more detailed explanation"
+			strings.TrimPrefix(err.Error(), crierrors.ErrRegistryUnavailable.Error()),
+		)
+		return errMsg, crierrors.ErrRegistryUnavailable
+	}
+
+	if strings.HasPrefix(err.Error(), crierrors.ErrSignatureValidationFailed.Error()) {
+		errMsg = fmt.Sprintf(
+			"image pull failed for %s because the signature validation failed%s",
+			container.Image,
+			// Trim the error name from the message to convert errors like:
+			// "SignatureValidationFailed: a more detailed explanation" to:
+			// "...because the signature validation failed: a more detailed explanation"
+			strings.TrimPrefix(err.Error(), crierrors.ErrSignatureValidationFailed.Error()),
+		)
+		return errMsg, crierrors.ErrSignatureValidationFailed
+	}
+
+	// Fallback for no specific error
+	return err.Error(), ErrImagePull
+}
+
 // applyDefaultImageTag parses a docker image string, if it doesn't contain any tag or digest,
 // a default tag will be applied.
 func applyDefaultImageTag(image string) (string, error) {
-	named, err := dockerref.ParseNormalizedNamed(image)
+	_, tag, digest, err := parsers.ParseImageName(image)
 	if err != nil {
-		return "", fmt.Errorf("couldn't parse image reference %q: %v", image, err)
+		return "", err
 	}
-	_, isTagged := named.(dockerref.Tagged)
-	_, isDigested := named.(dockerref.Digested)
-	if !isTagged && !isDigested {
+	// we just concatenate the image name with the default tag here instead
+	if len(digest) == 0 && len(tag) > 0 && !strings.HasSuffix(image, ":"+tag) {
 		// we just concatenate the image name with the default tag here instead
 		// of using dockerref.WithTag(named, ...) because that would cause the
 		// image to be fully qualified as docker.io/$name if it's a short name
 		// (e.g. just busybox). We don't want that to happen to keep the CRI
 		// agnostic wrt image names and default hostnames.
-		image = image + ":latest"
+		image = image + ":" + tag
 	}
 	return image, nil
 }
diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/images/types.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/images/types.go
index 3b0397faad49..52342b28ec15 100644
--- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/images/types.go
+++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/images/types.go
@@ -37,9 +37,6 @@ var (
 	// ErrImageNeverPull - Required Image is absent on host and PullPolicy is NeverPullImage
 	ErrImageNeverPull = errors.New("ErrImageNeverPull")
 
-	// ErrRegistryUnavailable - Get http error when pulling image from registry
-	ErrRegistryUnavailable = errors.New("RegistryUnavailable")
-
 	// ErrInvalidImageName - Unable to parse the image name.
 	ErrInvalidImageName = errors.New("InvalidImageName")
 )
diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go
index 3afbd6b21e4d..e8918472ee8d 100644
--- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go
+++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go
@@ -32,6 +32,7 @@ import (
 	"time"
 
 	cadvisorapi "github.com/google/cadvisor/info/v1"
+	"github.com/google/go-cmp/cmp"
 	libcontaineruserns "github.com/opencontainers/runc/libcontainer/userns"
 	"github.com/opencontainers/selinux/go-selinux"
 	"go.opentelemetry.io/otel/attribute"
@@ -47,7 +48,6 @@ import (
 	"k8s.io/apimachinery/pkg/fields"
 	"k8s.io/apimachinery/pkg/labels"
 	"k8s.io/apimachinery/pkg/types"
-	"k8s.io/apimachinery/pkg/util/diff"
 	utilruntime "k8s.io/apimachinery/pkg/util/runtime"
 	"k8s.io/apimachinery/pkg/util/sets"
 	"k8s.io/apimachinery/pkg/util/wait"
@@ -371,18 +371,6 @@ func NewMainKubelet(kubeCfg *kubeletconfiginternal.KubeletConfiguration,
 		return nil, fmt.Errorf("invalid sync frequency %d", kubeCfg.SyncFrequency.Duration)
 	}
 
-	if kubeCfg.MakeIPTablesUtilChains {
-		if kubeCfg.IPTablesMasqueradeBit > 31 || kubeCfg.IPTablesMasqueradeBit < 0 {
-			return nil, fmt.Errorf("iptables-masquerade-bit is not valid. Must be within [0, 31]")
-		}
-		if kubeCfg.IPTablesDropBit > 31 || kubeCfg.IPTablesDropBit < 0 {
-			return nil, fmt.Errorf("iptables-drop-bit is not valid. Must be within [0, 31]")
-		}
-		if kubeCfg.IPTablesDropBit == kubeCfg.IPTablesMasqueradeBit {
-			return nil, fmt.Errorf("iptables-masquerade-bit and iptables-drop-bit must be different")
-		}
-	}
-
 	if utilfeature.DefaultFeatureGate.Enabled(features.DisableCloudProviders) && cloudprovider.IsDeprecatedInternal(cloudProvider) {
 		cloudprovider.DisableWarningForProvider(cloudProvider)
 		return nil, fmt.Errorf("cloud provider %q was specified, but built-in cloud providers are disabled. Please set --cloud-provider=external and migrate to an external cloud provider", cloudProvider)
@@ -517,56 +505,53 @@ func NewMainKubelet(kubeCfg *kubeletconfiginternal.KubeletConfiguration,
 	tracer := kubeDeps.TracerProvider.Tracer(instrumentationScope)
 
 	klet := &Kubelet{
-		hostname: hostname,
-		hostnameOverridden: hostnameOverridden,
-		nodeName: nodeName,
-		kubeClient: kubeDeps.KubeClient,
-		heartbeatClient: kubeDeps.HeartbeatClient,
-		onRepeatedHeartbeatFailure: kubeDeps.OnHeartbeatFailure,
-		rootDirectory: filepath.Clean(rootDirectory),
-		resyncInterval: kubeCfg.SyncFrequency.Duration,
-		sourcesReady: config.NewSourcesReady(kubeDeps.PodConfig.SeenAllSources),
-		registerNode: registerNode,
-		registerWithTaints: registerWithTaints,
-		registerSchedulable: registerSchedulable,
-		dnsConfigurer: dns.NewConfigurer(kubeDeps.Recorder, nodeRef, nodeIPs, clusterDNS, kubeCfg.ClusterDomain, kubeCfg.ResolverConfig),
-		serviceLister: serviceLister,
-		serviceHasSynced: serviceHasSynced,
-		nodeLister: nodeLister,
-		nodeHasSynced: nodeHasSynced,
-		streamingConnectionIdleTimeout: kubeCfg.StreamingConnectionIdleTimeout.Duration,
-		recorder: kubeDeps.Recorder,
-		cadvisor: kubeDeps.CAdvisorInterface,
-		cloud: kubeDeps.Cloud,
-		externalCloudProvider: cloudprovider.IsExternal(cloudProvider),
-		providerID: providerID,
-		nodeRef: nodeRef,
-		nodeLabels: nodeLabels,
-		nodeStatusUpdateFrequency: kubeCfg.NodeStatusUpdateFrequency.Duration,
-		nodeStatusReportFrequency: kubeCfg.NodeStatusReportFrequency.Duration,
-		os: kubeDeps.OSInterface,
-		oomWatcher: oomWatcher,
-		cgroupsPerQOS: kubeCfg.CgroupsPerQOS,
-		cgroupRoot: kubeCfg.CgroupRoot,
-		mounter: kubeDeps.Mounter,
-		hostutil: kubeDeps.HostUtil,
-		subpather: kubeDeps.Subpather,
-		maxPods: int(kubeCfg.MaxPods),
-		podsPerCore: int(kubeCfg.PodsPerCore),
-		syncLoopMonitor: atomic.Value{},
-		daemonEndpoints: daemonEndpoints,
-		containerManager: kubeDeps.ContainerManager,
-		nodeIPs: nodeIPs,
-		nodeIPValidator: validateNodeIP,
-		clock: clock.RealClock{},
-		enableControllerAttachDetach: kubeCfg.EnableControllerAttachDetach,
-		makeIPTablesUtilChains: kubeCfg.MakeIPTablesUtilChains,
-		iptablesMasqueradeBit: int(kubeCfg.IPTablesMasqueradeBit),
-		iptablesDropBit: int(kubeCfg.IPTablesDropBit),
-		experimentalHostUserNamespaceDefaulting: utilfeature.DefaultFeatureGate.Enabled(features.ExperimentalHostUserNamespaceDefaultingGate),
-		keepTerminatedPodVolumes: keepTerminatedPodVolumes,
-		nodeStatusMaxImages: nodeStatusMaxImages,
-		tracer: tracer,
+		hostname: hostname,
+		hostnameOverridden: hostnameOverridden,
+		nodeName: nodeName,
+		kubeClient: kubeDeps.KubeClient,
+		heartbeatClient: kubeDeps.HeartbeatClient,
+		onRepeatedHeartbeatFailure: kubeDeps.OnHeartbeatFailure,
+		rootDirectory: filepath.Clean(rootDirectory),
+		resyncInterval: kubeCfg.SyncFrequency.Duration,
+		sourcesReady: config.NewSourcesReady(kubeDeps.PodConfig.SeenAllSources),
+		registerNode: registerNode,
+		registerWithTaints: registerWithTaints,
+		registerSchedulable: registerSchedulable,
+		dnsConfigurer: dns.NewConfigurer(kubeDeps.Recorder, nodeRef, nodeIPs, clusterDNS, kubeCfg.ClusterDomain, kubeCfg.ResolverConfig),
+		serviceLister: serviceLister,
+		serviceHasSynced: serviceHasSynced,
+		nodeLister: nodeLister,
+		nodeHasSynced: nodeHasSynced,
+		streamingConnectionIdleTimeout: kubeCfg.StreamingConnectionIdleTimeout.Duration,
+		recorder: kubeDeps.Recorder,
+		cadvisor: kubeDeps.CAdvisorInterface,
+		cloud: kubeDeps.Cloud,
+		externalCloudProvider: cloudprovider.IsExternal(cloudProvider),
+		providerID: providerID,
+		nodeRef: nodeRef,
+		nodeLabels: nodeLabels,
+		nodeStatusUpdateFrequency: kubeCfg.NodeStatusUpdateFrequency.Duration,
+		nodeStatusReportFrequency: kubeCfg.NodeStatusReportFrequency.Duration,
+		os: kubeDeps.OSInterface,
+		oomWatcher: oomWatcher,
+		cgroupsPerQOS: kubeCfg.CgroupsPerQOS,
+		cgroupRoot: kubeCfg.CgroupRoot,
+		mounter: kubeDeps.Mounter,
+		hostutil: kubeDeps.HostUtil,
+		subpather: kubeDeps.Subpather,
+		maxPods: int(kubeCfg.MaxPods),
+		podsPerCore: int(kubeCfg.PodsPerCore),
+		syncLoopMonitor: atomic.Value{},
+		daemonEndpoints: daemonEndpoints,
+		containerManager: kubeDeps.ContainerManager,
+		nodeIPs: nodeIPs,
+		nodeIPValidator: validateNodeIP,
+		clock: clock.RealClock{},
+		enableControllerAttachDetach: kubeCfg.EnableControllerAttachDetach,
+		makeIPTablesUtilChains: kubeCfg.MakeIPTablesUtilChains,
+		keepTerminatedPodVolumes: keepTerminatedPodVolumes,
+		nodeStatusMaxImages: nodeStatusMaxImages,
+		tracer: tracer,
 	}
 
 	if klet.cloud != nil {
@@ -596,10 +581,6 @@ func NewMainKubelet(kubeCfg *kubeletconfiginternal.KubeletConfiguration,
 		klet.configMapManager = configMapManager
 	}
 
-	if klet.experimentalHostUserNamespaceDefaulting {
-		klog.InfoS("Experimental host user namespace defaulting is enabled")
-	}
-
 	machineInfo, err := klet.cadvisor.MachineInfo()
 	if err != nil {
 		return nil, err
@@ -616,9 +597,8 @@ func NewMainKubelet(kubeCfg *kubeletconfiginternal.KubeletConfiguration,
 	klet.startupManager = proberesults.NewManager()
 	klet.podCache = kubecontainer.NewCache()
 
-	// podManager is also responsible for keeping secretManager and configMapManager contents up-to-date.
-	mirrorPodClient := kubepod.NewBasicMirrorClient(klet.kubeClient, string(nodeName), nodeLister)
-	klet.podManager = kubepod.NewBasicPodManager(mirrorPodClient)
+	klet.mirrorPodClient = kubepod.NewBasicMirrorClient(klet.kubeClient, string(nodeName), nodeLister)
+	klet.podManager = kubepod.NewBasicPodManager()
 
 	klet.statusManager = status.NewManager(klet.kubeClient, klet.podManager, klet, kubeDeps.PodStartupLatencyTracker, klet.getRootDir())
 
@@ -842,7 +822,7 @@ func NewMainKubelet(kubeCfg *kubeletconfiginternal.KubeletConfiguration,
 	// setup eviction manager
 	evictionManager, evictionAdmitHandler := eviction.NewManager(klet.resourceAnalyzer, evictionConfig,
-		killPodNow(klet.podWorkers, kubeDeps.Recorder), klet.podManager.GetMirrorPodByPod, klet.imageManager, klet.containerGC, kubeDeps.Recorder, nodeRef, klet.clock, kubeCfg.LocalStorageCapacityIsolation)
+		killPodNow(klet.podWorkers, kubeDeps.Recorder), klet.imageManager, klet.containerGC, kubeDeps.Recorder, nodeRef, klet.clock, kubeCfg.LocalStorageCapacityIsolation)
 
 	klet.evictionManager = evictionManager
 	klet.admitHandlers.AddPodAdmitHandler(evictionAdmitHandler)
@@ -941,7 +921,11 @@ type Kubelet struct {
 	runtimeCache kubecontainer.RuntimeCache
 	kubeClient clientset.Interface
 	heartbeatClient clientset.Interface
-	rootDirectory string
+	// mirrorPodClient is used to create and delete mirror pods in the API for static
+	// pods.
+	mirrorPodClient kubepod.MirrorClient
+
+	rootDirectory string
 
 	lastObservedNodeAddressesMux sync.RWMutex
 	lastObservedNodeAddresses []v1.NodeAddress
@@ -949,9 +933,90 @@ type Kubelet struct {
 	// onRepeatedHeartbeatFailure is called when a heartbeat operation fails more than once. optional.
 	onRepeatedHeartbeatFailure func()
 
-	// podWorkers handle syncing Pods in response to events.
+	// podManager stores the desired set of admitted pods and mirror pods that the kubelet should be
+	// running. The actual set of running pods is stored on the podWorkers. The manager is populated
+	// by the kubelet config loops which abstracts receiving configuration from many different sources
+	// (api for regular pods, local filesystem or http for static pods). The manager may be consulted
+	// by other components that need to see the set of desired pods. Note that not all desired pods are
+	// running, and not all running pods are in the podManager - for instance, force deleting a pod
+	// from the apiserver will remove it from the podManager, but the pod may still be terminating and
+	// tracked by the podWorkers. Components that need to know the actual consumed resources of the
+	// node or are driven by podWorkers and the sync*Pod methods (status, volume, stats) should also
+	// consult the podWorkers when reconciling.
+	//
+	// TODO: review all kubelet components that need the actual set of pods (vs the desired set)
+	// and update them to use podWorkers instead of podManager. This may introduce latency in some
+	// methods, but avoids race conditions and correctly accounts for terminating pods that have
+	// been force deleted or static pods that have been updated.
+	// https://github.com/kubernetes/kubernetes/issues/116970
+	podManager kubepod.Manager
+
+	// podWorkers is responsible for driving the lifecycle state machine of each pod. The worker is
+	// notified of config changes, updates, periodic reconciliation, container runtime updates, and
+	// evictions of all desired pods and will invoke reconciliation methods per pod in separate
+	// goroutines. The podWorkers are authoritative in the kubelet for what pods are actually being
+	// run and their current state:
+	//
+	// * syncing: pod should be running (syncPod)
+	// * terminating: pod should be stopped (syncTerminatingPod)
+	// * terminated: pod should have all resources cleaned up (syncTerminatedPod)
+	//
+	// and invoke the handler methods that correspond to each state. Components within the
+	// kubelet that need to know the phase of the pod in order to correctly set up or tear down
+	// resources must consult the podWorkers.
+	//
+	// Once a pod has been accepted by the pod workers, no other pod with that same UID (and
+	// name+namespace, for static pods) will be started until the first pod has fully terminated
+	// and been cleaned up by SyncKnownPods. This means a pod may be desired (in API), admitted
+	// (in pod manager), and requested (by invoking UpdatePod) but not start for an arbitrarily
+	// long interval because a prior pod is still terminating.
+	//
+	// As an event-driven (by UpdatePod) controller, the podWorkers must periodically be resynced
+	// by the kubelet invoking SyncKnownPods with the desired state (admitted pods in podManager).
+	// Since the podManager may be unaware of some running pods due to force deletion, the
+	// podWorkers are responsible for triggering a sync of pods that are no longer desired but
+	// must still run to completion.
 	podWorkers PodWorkers
 
+	// evictionManager observes the state of the node for situations that could impact node stability
+	// and evicts pods (sets to phase Failed with reason Evicted) to reduce resource pressure. The
+	// eviction manager acts on the actual state of the node and considers the podWorker to be
+	// authoritative.
+	evictionManager eviction.Manager
+
+	// probeManager tracks the set of running pods and ensures any user-defined periodic checks are
+	// run to introspect the state of each pod. The probe manager acts on the actual state of the node
+	// and is notified of pods by the podWorker. The probe manager is the authoritative source of the
+	// most recent probe status and is responsible for notifying the status manager, which
+	// synthesizes them into the overall pod status.
+	probeManager prober.Manager
+
+	// secretManager caches the set of secrets used by running pods on this node.
The podWorkers + // notify the secretManager when pods are started and terminated, and the secretManager must + // then keep the needed secrets up-to-date as they change. + secretManager secret.Manager + + // configMapManager caches the set of config maps used by running pods on this node. The + // podWorkers notify the configMapManager when pods are started and terminated, and the + // configMapManager must then keep the needed config maps up-to-date as they change. + configMapManager configmap.Manager + + // volumeManager observes the set of running pods and is responsible for attaching, mounting, + // unmounting, and detaching as those pods move through their lifecycle. It periodically + // synchronizes the set of known volumes to the set of actually desired volumes and cleans up + // any orphaned volumes. The volume manager considers the podWorker to be authoritative for + // which pods are running. + volumeManager volumemanager.VolumeManager + + // statusManager receives updated pod status updates from the podWorker and updates the API + // status of those pods to match. The statusManager is authoritative for the synthesized + // status of the pod from the kubelet's perspective (other components own the individual + // elements of status) and should be consulted by components in preference to assembling + // that status themselves. Note that the status manager is downstream of the pod worker + // and components that need to check whether a pod is still running should instead directly + // consult the pod worker. + statusManager status.Manager + // resyncInterval is the interval between periodic full reconciliations of // pods on this node. resyncInterval time.Duration @@ -959,13 +1024,6 @@ type Kubelet struct { // sourcesReady records the sources seen by the kubelet, it is thread-safe. sourcesReady config.SourcesReady - // podManager is a facade that abstracts away the various sources of pods - // this Kubelet services. 
- podManager kubepod.Manager - - // Needed to observe and respond to situations that could impact node stability - evictionManager eviction.Manager - // Optional, defaults to /logs/ from /var/log logServer http.Handler // Optional, defaults to simple Docker implementation @@ -1006,8 +1064,6 @@ type Kubelet struct { // Volume plugins. volumePluginMgr *volume.VolumePluginMgr - // Handles container probing. - probeManager prober.Manager // Manages container health check results. livenessManager proberesults.Manager readinessManager proberesults.Manager @@ -1029,12 +1085,6 @@ type Kubelet struct { // Manager for container logs. containerLogManager logs.ContainerLogManager - // Secret manager. - secretManager secret.Manager - - // ConfigMap manager. - configMapManager configmap.Manager - // Cached MachineInfo returned by cadvisor. machineInfoLock sync.RWMutex machineInfo *cadvisorapi.MachineInfo @@ -1042,14 +1092,6 @@ type Kubelet struct { // Handles certificate rotations. serverCertificateManager certificate.Manager - // Syncs pods statuses with apiserver; also used as a cache of statuses. - statusManager status.Manager - - // VolumeManager runs a set of asynchronous loops that figure out which - // volumes need to be attached/mounted/unmounted/detached based on the pods - // scheduled on this node and makes it so. - volumeManager volumemanager.VolumeManager - // Cloud provider interface. cloud cloudprovider.Interface // Handles requests to cloud provider with timeout @@ -1115,10 +1157,12 @@ type Kubelet struct { // nodeLeaseController claims and renews the node lease for this Kubelet nodeLeaseController lease.Controller - // Generates pod events. + // pleg observes the state of the container runtime and notifies the kubelet of changes to containers, which + // notifies the podWorkers to reconcile the state of the pod (for instance, if a container dies and needs to + // be restarted). 
pleg pleg.PodLifecycleEventGenerator - // Evented PLEG + // eventedPleg supplements the pleg to deliver edge-driven container changes with low-latency. eventedPleg pleg.PodLifecycleEventGenerator // Store kubecontainer.PodStatus for all pods. @@ -1218,22 +1262,9 @@ type Kubelet struct { // config iptables util rules makeIPTablesUtilChains bool - // The bit of the fwmark space to mark packets for SNAT. - iptablesMasqueradeBit int - - // The bit of the fwmark space to mark packets for dropping. - iptablesDropBit int - // The AppArmor validator for checking whether AppArmor is supported. appArmorValidator apparmor.Validator - // experimentalHostUserNamespaceDefaulting sets userns=true when users request host namespaces (pid, ipc, net), - // are using non-namespaced capabilities (mknod, sys_time, sys_module), the pod contains a privileged container, - // or using host path volumes. - // This should only be enabled when the container runtime is performing user remapping AND if the - // experimental behavior is desired. - experimentalHostUserNamespaceDefaulting bool - // StatsProvider provides the node and the container stats. StatsProvider *stats.Provider @@ -1653,10 +1684,8 @@ func (kl *Kubelet) Run(updates <-chan kubetypes.PodUpdate) { // This operation writes all events that are dispatched in order to provide // the most accurate information possible about an error situation to aid debugging. // Callers should not write an event if this operation returns an error. -func (kl *Kubelet) SyncPod(_ context.Context, updateType kubetypes.SyncPodType, pod, mirrorPod *v1.Pod, podStatus *kubecontainer.PodStatus) (isTerminal bool, err error) { - // TODO(#113606): connect this with the incoming context parameter, which comes from the pod worker. - // Currently, using that context causes test failures. 
- ctx, otelSpan := kl.tracer.Start(context.TODO(), "syncPod", trace.WithAttributes( +func (kl *Kubelet) SyncPod(ctx context.Context, updateType kubetypes.SyncPodType, pod, mirrorPod *v1.Pod, podStatus *kubecontainer.PodStatus) (isTerminal bool, err error) { + ctx, otelSpan := kl.tracer.Start(ctx, "syncPod", trace.WithAttributes( attribute.String("k8s.pod.uid", string(pod.UID)), attribute.String("k8s.pod", klog.KObj(pod).String()), attribute.String("k8s.pod.name", pod.Name), @@ -1751,13 +1780,15 @@ func (kl *Kubelet) SyncPod(_ context.Context, updateType kubetypes.SyncPodType, var syncErr error p := kubecontainer.ConvertPodStatusToRunningPod(kl.getRuntime().Type(), podStatus) if err := kl.killPod(ctx, pod, p, nil); err != nil { - kl.recorder.Eventf(pod, v1.EventTypeWarning, events.FailedToKillPod, "error killing pod: %v", err) - syncErr = fmt.Errorf("error killing pod: %v", err) - utilruntime.HandleError(syncErr) + if !wait.Interrupted(err) { + kl.recorder.Eventf(pod, v1.EventTypeWarning, events.FailedToKillPod, "error killing pod: %v", err) + syncErr = fmt.Errorf("error killing pod: %w", err) + utilruntime.HandleError(syncErr) + } } else { // There was no error killing the pod, but the pod cannot be run. // Return an error to signal that the sync loop should back off. 
- syncErr = fmt.Errorf("pod cannot be run: %s", runnable.Message) + syncErr = fmt.Errorf("pod cannot be run: %v", runnable.Message) } return false, syncErr } @@ -1803,6 +1834,9 @@ func (kl *Kubelet) SyncPod(_ context.Context, updateType kubetypes.SyncPodType, if !pcm.Exists(pod) && !firstSync { p := kubecontainer.ConvertPodStatusToRunningPod(kl.getRuntime().Type(), podStatus) if err := kl.killPod(ctx, pod, p, nil); err == nil { + if wait.Interrupted(err) { + return false, err + } podKilled = true } else { klog.ErrorS(err, "KillPod failed", "pod", klog.KObj(pod), "podStatus", podStatus) @@ -1832,13 +1866,13 @@ func (kl *Kubelet) SyncPod(_ context.Context, updateType kubetypes.SyncPodType, if kubetypes.IsStaticPod(pod) { deleted := false if mirrorPod != nil { - if mirrorPod.DeletionTimestamp != nil || !kl.podManager.IsMirrorPodOf(mirrorPod, pod) { + if mirrorPod.DeletionTimestamp != nil || !kubepod.IsMirrorPodOf(mirrorPod, pod) { // The mirror pod is semantically different from the static pod. Remove // it. The mirror pod will get recreated later. 
klog.InfoS("Trying to delete pod", "pod", klog.KObj(pod), "podUID", mirrorPod.ObjectMeta.UID) podFullName := kubecontainer.GetPodFullName(pod) var err error - deleted, err = kl.podManager.DeleteMirrorPod(podFullName, &mirrorPod.ObjectMeta.UID) + deleted, err = kl.mirrorPodClient.DeleteMirrorPod(podFullName, &mirrorPod.ObjectMeta.UID) if deleted { klog.InfoS("Deleted mirror pod because it is outdated", "pod", klog.KObj(mirrorPod)) } else if err != nil { @@ -1852,7 +1886,7 @@ func (kl *Kubelet) SyncPod(_ context.Context, updateType kubetypes.SyncPodType, klog.V(4).InfoS("No need to create a mirror pod, since node has been removed from the cluster", "node", klog.KRef("", string(kl.nodeName))) } else { klog.V(4).InfoS("Creating a mirror pod for static pod", "pod", klog.KObj(pod)) - if err := kl.podManager.CreateMirrorPod(pod); err != nil { + if err := kl.mirrorPodClient.CreateMirrorPod(pod); err != nil { klog.ErrorS(err, "Failed creating a mirror pod for", "pod", klog.KObj(pod)) } } @@ -1866,15 +1900,13 @@ func (kl *Kubelet) SyncPod(_ context.Context, updateType kubetypes.SyncPodType, return false, err } - // Volume manager will not mount volumes for terminating pods - // TODO: once context cancellation is added this check can be removed - if !kl.podWorkers.IsPodTerminationRequested(pod.UID) { - // Wait for volumes to attach/mount - if err := kl.volumeManager.WaitForAttachAndMount(pod); err != nil { + // Wait for volumes to attach/mount + if err := kl.volumeManager.WaitForAttachAndMount(ctx, pod); err != nil { + if !wait.Interrupted(err) { kl.recorder.Eventf(pod, v1.EventTypeWarning, events.FailedMountVolume, "Unable to attach or mount volumes: %v", err) klog.ErrorS(err, "Unable to attach or mount volumes for pod; skipping pod", "pod", klog.KObj(pod)) - return false, err } + return false, err } // Fetch the pull secrets for the pod @@ -1893,8 +1925,13 @@ func (kl *Kubelet) SyncPod(_ context.Context, updateType kubetypes.SyncPodType, } } + // TODO(#113606): connect this 
with the incoming context parameter, which comes from the pod worker. + // Currently, using that context causes test failures. To remove this todoCtx, any wait.Interrupted + // errors need to be filtered from result and bypass the reasonCache - cancelling the context for + // SyncPod is a known and deliberate error, not a generic error. + todoCtx := context.TODO() // Call the container runtime's SyncPod callback - result := kl.containerRuntime.SyncPod(ctx, pod, podStatus, pullSecrets, kl.backOff) + result := kl.containerRuntime.SyncPod(todoCtx, pod, podStatus, pullSecrets, kl.backOff) kl.reasonCache.Update(pod.UID, result) if err := result.Error(); err != nil { // Do not return error if the only failures were pods in backoff @@ -2068,7 +2105,7 @@ func (kl *Kubelet) SyncTerminatingRuntimePod(_ context.Context, runningPod *kube // This typically occurs when a pod is force deleted from configuration (local disk or API) and the // kubelet restarts in the middle of the action. func (kl *Kubelet) SyncTerminatedPod(ctx context.Context, pod *v1.Pod, podStatus *kubecontainer.PodStatus) error { - _, otelSpan := kl.tracer.Start(context.Background(), "syncTerminatedPod", trace.WithAttributes( + ctx, otelSpan := kl.tracer.Start(ctx, "syncTerminatedPod", trace.WithAttributes( attribute.String("k8s.pod.uid", string(pod.UID)), attribute.String("k8s.pod", klog.KObj(pod).String()), attribute.String("k8s.pod.name", pod.Name), @@ -2086,7 +2123,7 @@ func (kl *Kubelet) SyncTerminatedPod(ctx context.Context, pod *v1.Pod, podStatus // volumes are unmounted after the pod worker reports ShouldPodRuntimeBeRemoved (which is satisfied // before syncTerminatedPod is invoked) - if err := kl.volumeManager.WaitForUnmount(pod); err != nil { + if err := kl.volumeManager.WaitForUnmount(ctx, pod); err != nil { return err } klog.V(4).InfoS("Pod termination unmounted volumes", "pod", klog.KObj(pod), "podUID", pod.UID) @@ -2140,6 +2177,15 @@ func (kl *Kubelet) SyncTerminatedPod(ctx context.Context, pod 
*v1.Pod, podStatus // Get pods which should be resynchronized. Currently, the following pod should be resynchronized: // - pod whose work is ready. // - internal modules that request sync of a pod. +// +// This method does not return orphaned pods (those known only to the pod worker that may have +// been deleted from configuration). Those pods are synced by HandlePodCleanups as a consequence +// of driving the state machine to completion. +// +// TODO: Consider synchronizing all pods which have not recently been acted on to be resilient +// to bugs that might prevent updates from being delivered (such as the previous bug with +// orphaned pods). Instead of asking the work queue for pending work, consider asking the +// PodWorker which pods should be synced. func (kl *Kubelet) getPodsToSync() []*v1.Pod { allPods := kl.podManager.GetPods() podUIDs := kl.workQueue.GetWork() @@ -2448,32 +2494,6 @@ func handleProbeSync(kl *Kubelet, update proberesults.Update, handler SyncHandle handler.HandlePodSyncs([]*v1.Pod{pod}) } -// dispatchWork starts the asynchronous sync of the pod in a pod worker. -// If the pod has completed termination, dispatchWork will perform no action. -func (kl *Kubelet) dispatchWork(pod *v1.Pod, syncType kubetypes.SyncPodType, mirrorPod *v1.Pod, start time.Time) { - // Run the sync in an async worker. - kl.podWorkers.UpdatePod(UpdatePodOptions{ - Pod: pod, - MirrorPod: mirrorPod, - UpdateType: syncType, - StartTime: start, - }) - // Note the number of containers for new pods. - if syncType == kubetypes.SyncPodCreate { - metrics.ContainersPerPodCount.Observe(float64(len(pod.Spec.Containers))) - } -} - -// TODO: handle mirror pods in a separate component (issue #17251) -func (kl *Kubelet) handleMirrorPod(mirrorPod *v1.Pod, start time.Time) { - // Mirror pod ADD/UPDATE/DELETE operations are considered an UPDATE to the - // corresponding static pod. Send update to the pod worker if the static - // pod exists. 
- if pod, ok := kl.podManager.GetPodByMirrorPod(mirrorPod); ok { - kl.dispatchWork(pod, kubetypes.SyncPodUpdate, mirrorPod, start) - } -} - // HandlePodAdditions is the callback in SyncHandler for pods being added from // a config source. func (kl *Kubelet) HandlePodAdditions(pods []*v1.Pod) { @@ -2491,8 +2511,18 @@ func (kl *Kubelet) HandlePodAdditions(pods []*v1.Pod) { // the apiserver and no action (other than cleanup) is required. kl.podManager.AddPod(pod) - if kubetypes.IsMirrorPod(pod) { - kl.handleMirrorPod(pod, start) + pod, mirrorPod, wasMirror := kl.podManager.GetPodAndMirrorPod(pod) + if wasMirror { + if pod == nil { + klog.V(2).InfoS("Unable to find pod for mirror pod, skipping", "mirrorPod", klog.KObj(mirrorPod), "mirrorPodUID", mirrorPod.UID) + continue + } + kl.podWorkers.UpdatePod(UpdatePodOptions{ + Pod: pod, + MirrorPod: mirrorPod, + UpdateType: kubetypes.SyncPodUpdate, + StartTime: start, + }) continue } @@ -2536,8 +2566,12 @@ func (kl *Kubelet) HandlePodAdditions(pods []*v1.Pod) { } } } - mirrorPod, _ := kl.podManager.GetMirrorPodByPod(pod) - kl.dispatchWork(pod, kubetypes.SyncPodCreate, mirrorPod, start) + kl.podWorkers.UpdatePod(UpdatePodOptions{ + Pod: pod, + MirrorPod: mirrorPod, + UpdateType: kubetypes.SyncPodCreate, + StartTime: start, + }) } } @@ -2547,12 +2581,21 @@ func (kl *Kubelet) HandlePodUpdates(pods []*v1.Pod) { start := kl.clock.Now() for _, pod := range pods { kl.podManager.UpdatePod(pod) - if kubetypes.IsMirrorPod(pod) { - kl.handleMirrorPod(pod, start) - continue + + pod, mirrorPod, wasMirror := kl.podManager.GetPodAndMirrorPod(pod) + if wasMirror { + if pod == nil { + klog.V(2).InfoS("Unable to find pod for mirror pod, skipping", "mirrorPod", klog.KObj(mirrorPod), "mirrorPodUID", mirrorPod.UID) + continue + } } - mirrorPod, _ := kl.podManager.GetMirrorPodByPod(pod) - kl.dispatchWork(pod, kubetypes.SyncPodUpdate, mirrorPod, start) + + kl.podWorkers.UpdatePod(UpdatePodOptions{ + Pod: pod, + MirrorPod: mirrorPod, + UpdateType: 
kubetypes.SyncPodUpdate, + StartTime: start, + }) } } @@ -2561,11 +2604,23 @@ func (kl *Kubelet) HandlePodUpdates(pods []*v1.Pod) { func (kl *Kubelet) HandlePodRemoves(pods []*v1.Pod) { start := kl.clock.Now() for _, pod := range pods { - kl.podManager.DeletePod(pod) - if kubetypes.IsMirrorPod(pod) { - kl.handleMirrorPod(pod, start) + kl.podManager.RemovePod(pod) + + pod, mirrorPod, wasMirror := kl.podManager.GetPodAndMirrorPod(pod) + if wasMirror { + if pod == nil { + klog.V(2).InfoS("Unable to find pod for mirror pod, skipping", "mirrorPod", klog.KObj(mirrorPod), "mirrorPodUID", mirrorPod.UID) + continue + } + kl.podWorkers.UpdatePod(UpdatePodOptions{ + Pod: pod, + MirrorPod: mirrorPod, + UpdateType: kubetypes.SyncPodUpdate, + StartTime: start, + }) continue } + // Deletion is allowed to fail because the periodic cleanup routine // will trigger deletion again. if err := kl.deletePod(pod); err != nil { @@ -2575,7 +2630,8 @@ func (kl *Kubelet) HandlePodRemoves(pods []*v1.Pod) { } // HandlePodReconcile is the callback in the SyncHandler interface for pods -// that should be reconciled. +// that should be reconciled. Pods are reconciled when only the status of the +// pod is updated in the API. func (kl *Kubelet) HandlePodReconcile(pods []*v1.Pod) { start := kl.clock.Now() for _, pod := range pods { @@ -2583,13 +2639,37 @@ func (kl *Kubelet) HandlePodReconcile(pods []*v1.Pod) { // to the pod manager. kl.podManager.UpdatePod(pod) + pod, mirrorPod, wasMirror := kl.podManager.GetPodAndMirrorPod(pod) + if wasMirror { + if pod == nil { + klog.V(2).InfoS("Unable to find pod for mirror pod, skipping", "mirrorPod", klog.KObj(mirrorPod), "mirrorPodUID", mirrorPod.UID) + continue + } + // Static pods should be reconciled the same way as regular pods + } + + // TODO: reconcile being calculated in the config manager is questionable, and avoiding + // extra syncs may no longer be necessary. 
Reevaluate whether Reconcile and Sync can be + // merged (after resolving the next two TODOs). + // Reconcile Pod "Ready" condition if necessary. Trigger sync pod for reconciliation. + // TODO: this should be unnecessary today - determine what is the cause for this to + // be different than Sync, or if there is a better place for it. For instance, we have + // needsReconcile in kubelet/config, here, and in status_manager. if status.NeedToReconcilePodReadiness(pod) { - mirrorPod, _ := kl.podManager.GetMirrorPodByPod(pod) - kl.dispatchWork(pod, kubetypes.SyncPodSync, mirrorPod, start) + kl.podWorkers.UpdatePod(UpdatePodOptions{ + Pod: pod, + MirrorPod: mirrorPod, + UpdateType: kubetypes.SyncPodSync, + StartTime: start, + }) } // After an evicted pod is synced, all dead containers in the pod can be removed. + // TODO: this is questionable - status read is async and during eviction we already + // expect to not have some container info. The pod worker knows whether a pod has + // been evicted, so if this is about minimizing the time to react to an eviction we + // can do better. If it's about preserving pod status info we can also do better. if eviction.PodIsEvicted(pod.Status) { if podStatus, err := kl.podCache.Get(pod.UID); err == nil { kl.containerDeletor.deleteContainersInPod("", podStatus, true) @@ -2603,8 +2683,24 @@ func (kl *Kubelet) HandlePodReconcile(pods []*v1.Pod) { func (kl *Kubelet) HandlePodSyncs(pods []*v1.Pod) { start := kl.clock.Now() for _, pod := range pods { - mirrorPod, _ := kl.podManager.GetMirrorPodByPod(pod) - kl.dispatchWork(pod, kubetypes.SyncPodSync, mirrorPod, start) + pod, mirrorPod, wasMirror := kl.podManager.GetPodAndMirrorPod(pod) + if wasMirror { + if pod == nil { + klog.V(2).InfoS("Unable to find pod for mirror pod, skipping", "mirrorPod", klog.KObj(mirrorPod), "mirrorPodUID", mirrorPod.UID) + continue + } + // Syncing a mirror pod is a programmer error since the intent of sync is to + // batch notify all pending work. 
We should make it impossible to double sync, + // but for now log a programmer error to prevent accidental introduction. + klog.V(3).InfoS("Programmer error, HandlePodSyncs does not expect to receive mirror pods", "podUID", pod.UID, "mirrorPodUID", mirrorPod.UID) + continue + } + kl.podWorkers.UpdatePod(UpdatePodOptions{ + Pod: pod, + MirrorPod: mirrorPod, + UpdateType: kubetypes.SyncPodSync, + StartTime: start, + }) } } @@ -2614,8 +2710,7 @@ func isPodResizeInProgress(pod *v1.Pod, podStatus *v1.PodStatus) bool { if cs.Resources == nil { continue } - if diff.ObjectDiff(c.Resources.Limits, cs.Resources.Limits) != "" || - diff.ObjectDiff(cs.AllocatedResources, cs.Resources.Requests) != "" { + if !cmp.Equal(c.Resources.Limits, cs.Resources.Limits) || !cmp.Equal(cs.AllocatedResources, cs.Resources.Requests) { return true } } @@ -2685,7 +2780,7 @@ func (kl *Kubelet) handlePodResourcesResize(pod *v1.Pod) *v1.Pod { klog.V(5).InfoS("ContainerStatus.AllocatedResources length mismatch", "pod", pod.Name, "container", container.Name) break } - if len(diff.ObjectDiff(container.Resources.Requests, containerStatus.AllocatedResources)) > 0 { + if !cmp.Equal(container.Resources.Requests, containerStatus.AllocatedResources) { podResized = true break } @@ -2805,7 +2900,7 @@ func (kl *Kubelet) ListenAndServeReadOnly(address net.IP, port uint) { // ListenAndServePodResources runs the kubelet podresources grpc service func (kl *Kubelet) ListenAndServePodResources() { - socket, err := util.LocalEndpoint(kl.getPodResourcesDir(), podresources.Socket) + endpoint, err := util.LocalEndpoint(kl.getPodResourcesDir(), podresources.Socket) if err != nil { klog.V(2).InfoS("Failed to get local endpoint for PodResources endpoint", "err", err) return @@ -2819,7 +2914,7 @@ func (kl *Kubelet) ListenAndServePodResources() { DynamicResources: kl.containerManager, } - server.ListenAndServePodResources(socket, providers) + server.ListenAndServePodResources(endpoint, providers) } // Delete the eligible dead 
container instances in a pod. Depending on the configuration, the latest dead containers may be kept around. diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet_network_linux.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet_network_linux.go index 8f767817a0f5..fce576890616 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet_network_linux.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet_network_linux.go @@ -20,13 +20,10 @@ limitations under the License. package kubelet import ( - "fmt" "time" "k8s.io/apimachinery/pkg/util/wait" - utilfeature "k8s.io/apiserver/pkg/util/feature" "k8s.io/klog/v2" - "k8s.io/kubernetes/pkg/features" utiliptables "k8s.io/kubernetes/pkg/util/iptables" utilexec "k8s.io/utils/exec" ) @@ -36,16 +33,6 @@ const ( // or iptables-nft indicates which version of iptables the system is using KubeIPTablesHintChain utiliptables.Chain = "KUBE-IPTABLES-HINT" - // KubeMarkMasqChain is the mark-for-masquerade chain - // TODO: clean up this logic in kube-proxy - KubeMarkMasqChain utiliptables.Chain = "KUBE-MARK-MASQ" - - // KubeMarkDropChain is the mark-for-drop chain - KubeMarkDropChain utiliptables.Chain = "KUBE-MARK-DROP" - - // KubePostroutingChain is kubernetes postrouting rules - KubePostroutingChain utiliptables.Chain = "KUBE-POSTROUTING" - // KubeFirewallChain is kubernetes firewall rules KubeFirewallChain utiliptables.Chain = "KUBE-FIREWALL" ) @@ -74,8 +61,7 @@ func (kl *Kubelet) initNetworkUtil() { } // syncIPTablesRules ensures the KUBE-IPTABLES-HINT chain exists, and the martian packet -// protection rule is installed. If the IPTablesOwnershipCleanup feature gate is disabled -// it will also synchronize additional deprecated iptables rules. +// protection rule is installed. 
func (kl *Kubelet) syncIPTablesRules(iptClient utiliptables.Interface) bool { // Create hint chain so other components can see whether we are using iptables-legacy // or iptables-nft. @@ -87,6 +73,12 @@ func (kl *Kubelet) syncIPTablesRules(iptClient utiliptables.Interface) bool { if !iptClient.IsIPv6() { // ipv6 doesn't have this issue // Set up the KUBE-FIREWALL chain and martian packet protection rule. // (See below.) + + // NOTE: kube-proxy (in iptables mode) creates an identical copy of this + // rule. If you want to change this rule in the future, you MUST do so in + // a way that will interoperate correctly with skewed versions of the rule + // created by kube-proxy. + if _, err := iptClient.EnsureChain(utiliptables.TableFilter, KubeFirewallChain); err != nil { klog.ErrorS(err, "Failed to ensure that filter table KUBE-FIREWALL chain exists") return false @@ -119,98 +111,5 @@ func (kl *Kubelet) syncIPTablesRules(iptClient utiliptables.Interface) bool { } } - if !utilfeature.DefaultFeatureGate.Enabled(features.IPTablesOwnershipCleanup) { - ok := kl.syncIPTablesRulesDeprecated(iptClient) - if !ok { - return false - } - } - return true } - -// syncIPTablesRulesDeprecated ensures deprecated iptables rules are present: -// 1. In nat table, KUBE-MARK-DROP rule to mark connections for dropping -// Marked connection will be drop on INPUT/OUTPUT Chain in filter table -// 2. 
In nat table, KUBE-MARK-MASQ rule to mark connections for SNAT -// Marked connection will get SNAT on POSTROUTING Chain in nat table -func (kl *Kubelet) syncIPTablesRulesDeprecated(iptClient utiliptables.Interface) bool { - // Setup KUBE-MARK-DROP rules - dropMark := getIPTablesMark(kl.iptablesDropBit) - if _, err := iptClient.EnsureChain(utiliptables.TableNAT, KubeMarkDropChain); err != nil { - klog.ErrorS(err, "Failed to ensure that KUBE-MARK-DROP chain exists") - return false - } - if _, err := iptClient.EnsureRule(utiliptables.Append, utiliptables.TableNAT, KubeMarkDropChain, "-j", "MARK", "--or-mark", dropMark); err != nil { - klog.ErrorS(err, "Failed to ensure that KUBE-MARK-DROP rule exists") - return false - } - if _, err := iptClient.EnsureChain(utiliptables.TableFilter, KubeFirewallChain); err != nil { - klog.ErrorS(err, "Failed to ensure that KUBE-FIREWALL chain exists") - return false - } - if _, err := iptClient.EnsureRule(utiliptables.Append, utiliptables.TableFilter, KubeFirewallChain, - "-m", "comment", "--comment", "kubernetes firewall for dropping marked packets", - "-m", "mark", "--mark", fmt.Sprintf("%s/%s", dropMark, dropMark), - "-j", "DROP"); err != nil { - klog.ErrorS(err, "Failed to ensure that KUBE-FIREWALL rule exists") - return false - } - - // Setup KUBE-MARK-MASQ rules - masqueradeMark := getIPTablesMark(kl.iptablesMasqueradeBit) - if _, err := iptClient.EnsureChain(utiliptables.TableNAT, KubeMarkMasqChain); err != nil { - klog.ErrorS(err, "Failed to ensure that KUBE-MARK-MASQ chain exists") - return false - } - if _, err := iptClient.EnsureChain(utiliptables.TableNAT, KubePostroutingChain); err != nil { - klog.ErrorS(err, "Failed to ensure that KUBE-POSTROUTING chain exists") - return false - } - if _, err := iptClient.EnsureRule(utiliptables.Append, utiliptables.TableNAT, KubeMarkMasqChain, "-j", "MARK", "--or-mark", masqueradeMark); err != nil { - klog.ErrorS(err, "Failed to ensure that KUBE-MARK-MASQ rule exists") - return false - 
} - if _, err := iptClient.EnsureRule(utiliptables.Prepend, utiliptables.TableNAT, utiliptables.ChainPostrouting, - "-m", "comment", "--comment", "kubernetes postrouting rules", "-j", string(KubePostroutingChain)); err != nil { - klog.ErrorS(err, "Failed to ensure that POSTROUTING chain jumps to KUBE-POSTROUTING") - return false - } - - // Set up KUBE-POSTROUTING to unmark and masquerade marked packets - // NB: THIS MUST MATCH the corresponding code in the iptables and ipvs - // modes of kube-proxy - if _, err := iptClient.EnsureRule(utiliptables.Append, utiliptables.TableNAT, KubePostroutingChain, - "-m", "mark", "!", "--mark", fmt.Sprintf("%s/%s", masqueradeMark, masqueradeMark), - "-j", "RETURN"); err != nil { - klog.ErrorS(err, "Failed to ensure first masquerading rule exists") - return false - } - // Clear the mark to avoid re-masquerading if the packet re-traverses the network stack. - // We know the mark bit is currently set so we can use --xor-mark to clear it (without needing - // to Sprintf another bitmask). 
- if _, err := iptClient.EnsureRule(utiliptables.Append, utiliptables.TableNAT, KubePostroutingChain, - "-j", "MARK", "--xor-mark", masqueradeMark); err != nil { - klog.ErrorS(err, "Failed to ensure second masquerading rule exists") - return false - } - masqRule := []string{ - "-m", "comment", "--comment", "kubernetes service traffic requiring SNAT", - "-j", "MASQUERADE", - } - if iptClient.HasRandomFully() { - masqRule = append(masqRule, "--random-fully") - } - if _, err := iptClient.EnsureRule(utiliptables.Append, utiliptables.TableNAT, KubePostroutingChain, masqRule...); err != nil { - klog.ErrorS(err, "Failed to ensure third masquerading rule exists") - return false - } - - return true -} - -// getIPTablesMark returns the fwmark given the bit -func getIPTablesMark(bit int) string { - value := 1 << uint(bit) - return fmt.Sprintf("%#08x", value) -} diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet_pods.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet_pods.go index fbc9104c4281..03d8dc987b2d 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet_pods.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet_pods.go @@ -29,18 +29,19 @@ import ( "sort" "strings" + "github.com/google/go-cmp/cmp" v1 "k8s.io/api/core/v1" "k8s.io/apimachinery/pkg/api/errors" metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" "k8s.io/apimachinery/pkg/labels" "k8s.io/apimachinery/pkg/types" - "k8s.io/apimachinery/pkg/util/diff" "k8s.io/apimachinery/pkg/util/sets" utilvalidation "k8s.io/apimachinery/pkg/util/validation" utilfeature "k8s.io/apiserver/pkg/util/feature" - "k8s.io/component-helpers/storage/ephemeral" runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1" "k8s.io/klog/v2" + "k8s.io/kubelet/pkg/cri/streaming/portforward" + remotecommandserver "k8s.io/kubelet/pkg/cri/streaming/remotecommand" podutil "k8s.io/kubernetes/pkg/api/v1/pod" "k8s.io/kubernetes/pkg/api/v1/resource" podshelper 
"k8s.io/kubernetes/pkg/apis/core/pods" @@ -50,8 +51,6 @@ import ( "k8s.io/kubernetes/pkg/fieldpath" "k8s.io/kubernetes/pkg/kubelet/cm" kubecontainer "k8s.io/kubernetes/pkg/kubelet/container" - "k8s.io/kubernetes/pkg/kubelet/cri/streaming/portforward" - remotecommandserver "k8s.io/kubernetes/pkg/kubelet/cri/streaming/remotecommand" "k8s.io/kubernetes/pkg/kubelet/envvars" "k8s.io/kubernetes/pkg/kubelet/images" "k8s.io/kubernetes/pkg/kubelet/metrics" @@ -345,7 +344,11 @@ func ensureHostsFile(fileName string, hostIPs []string, hostName, hostDomainName hostsFileContent = managedHostsFileContent(hostIPs, hostName, hostDomainName, hostAliases) } - return os.WriteFile(fileName, hostsFileContent, 0644) + hostsFilePerm := os.FileMode(0644) + if err := os.WriteFile(fileName, hostsFileContent, hostsFilePerm); err != nil { + return err + } + return os.Chmod(fileName, hostsFilePerm) } // nodeHostsFileContent reads the content of node's hosts file. @@ -515,11 +518,6 @@ func (kl *Kubelet) GenerateRunContainerOptions(ctx context.Context, pod *v1.Pod, } } - // only do this check if the experimental behavior is enabled, otherwise allow it to default to false - if kl.experimentalHostUserNamespaceDefaulting { - opts.EnableHostUserNamespace = kl.enableHostUserNamespace(ctx, pod) - } - return opts, cleanupAction, nil } @@ -832,6 +830,19 @@ func (kl *Kubelet) podFieldSelectorRuntimeValue(fs *v1.ObjectFieldSelector, pod return "", err } return hostIPs[0].String(), nil + case "status.hostIPs": + if !utilfeature.DefaultFeatureGate.Enabled(features.PodHostIPs) { + return "", nil + } + hostIPs, err := kl.getHostIPsAnyWay() + if err != nil { + return "", err + } + ips := make([]string, 0, len(hostIPs)) + for _, ip := range hostIPs { + ips = append(ips, ip.String()) + } + return strings.Join(ips, ","), nil case "status.podIP": return podIP, nil case "status.podIPs": @@ -882,6 +893,7 @@ func (kl *Kubelet) makePodDataDirs(pod *v1.Pod) error { // secrets. 
func (kl *Kubelet) getPullSecretsForPod(pod *v1.Pod) []v1.Secret { pullSecrets := []v1.Secret{} + failedPullSecrets := []string{} for _, secretRef := range pod.Spec.ImagePullSecrets { if len(secretRef.Name) == 0 { @@ -892,12 +904,17 @@ func (kl *Kubelet) getPullSecretsForPod(pod *v1.Pod) []v1.Secret { secret, err := kl.secretManager.GetSecret(pod.Namespace, secretRef.Name) if err != nil { klog.InfoS("Unable to retrieve pull secret, the image pull may not succeed.", "pod", klog.KObj(pod), "secret", klog.KObj(secret), "err", err) + failedPullSecrets = append(failedPullSecrets, secretRef.Name) continue } pullSecrets = append(pullSecrets, *secret) } + if len(failedPullSecrets) > 0 { + kl.recorder.Eventf(pod, v1.EventTypeWarning, "FailedToRetrieveImagePullSecret", "Unable to retrieve some image pull secrets (%s); attempting to pull the image may not succeed.", strings.Join(failedPullSecrets, ", ")) + } + return pullSecrets } @@ -986,23 +1003,6 @@ func (kl *Kubelet) removeOrphanedPodStatuses(pods []*v1.Pod, mirrorPods []*v1.Po kl.statusManager.RemoveOrphanedStatuses(podUIDs) } -// deleteOrphanedMirrorPods checks whether pod killer has done with orphaned mirror pod. -// If pod killing is done, podManager.DeleteMirrorPod() is called to delete mirror pod -// from the API server -func (kl *Kubelet) deleteOrphanedMirrorPods() { - mirrorPods := kl.podManager.GetOrphanedMirrorPodNames() - for _, podFullname := range mirrorPods { - if !kl.podWorkers.IsPodForMirrorPodTerminatingByFullName(podFullname) { - _, err := kl.podManager.DeleteMirrorPod(podFullname, nil) - if err != nil { - klog.ErrorS(err, "Encountered error when deleting mirror pod", "podName", podFullname) - } else { - klog.V(3).InfoS("Deleted mirror pod", "podName", podFullname) - } - } - } -} - // HandlePodCleanups performs a series of cleanup work, including terminating // pod workers, killing unwanted pods, and removing orphaned volumes/pod // directories. 
No config changes are sent to pod workers while this method @@ -1033,15 +1033,7 @@ func (kl *Kubelet) HandlePodCleanups(ctx context.Context) error { } } - allPods, mirrorPods := kl.podManager.GetPodsAndMirrorPods() - activePods := kl.filterOutInactivePods(allPods) - allRegularPods, allStaticPods := splitPodsByStatic(allPods) - activeRegularPods, activeStaticPods := splitPodsByStatic(activePods) - metrics.DesiredPodCount.WithLabelValues("").Set(float64(len(allRegularPods))) - metrics.DesiredPodCount.WithLabelValues("true").Set(float64(len(allStaticPods))) - metrics.ActivePodCount.WithLabelValues("").Set(float64(len(activeRegularPods))) - metrics.ActivePodCount.WithLabelValues("true").Set(float64(len(activeStaticPods))) - metrics.MirrorPodCount.Set(float64(len(mirrorPods))) + allPods, mirrorPods, orphanedMirrorPodFullnames := kl.podManager.GetPodsAndMirrorPods() // Pod phase progresses monotonically. Once a pod has reached a final state, // it should never leave regardless of the restart policy. The statuses @@ -1137,7 +1129,27 @@ func (kl *Kubelet) HandlePodCleanups(ctx context.Context) error { // Remove any orphaned mirror pods (mirror pods are tracked by name via the // pod worker) klog.V(3).InfoS("Clean up orphaned mirror pods") - kl.deleteOrphanedMirrorPods() + for _, podFullname := range orphanedMirrorPodFullnames { + if !kl.podWorkers.IsPodForMirrorPodTerminatingByFullName(podFullname) { + _, err := kl.mirrorPodClient.DeleteMirrorPod(podFullname, nil) + if err != nil { + klog.ErrorS(err, "Encountered error when deleting mirror pod", "podName", podFullname) + } else { + klog.V(3).InfoS("Deleted mirror pod", "podName", podFullname) + } + } + } + + // After pruning pod workers for terminated pods get the list of active pods for + // metrics and to determine restarts. 
+ activePods := kl.filterOutInactivePods(allPods) + allRegularPods, allStaticPods := splitPodsByStatic(allPods) + activeRegularPods, activeStaticPods := splitPodsByStatic(activePods) + metrics.DesiredPodCount.WithLabelValues("").Set(float64(len(allRegularPods))) + metrics.DesiredPodCount.WithLabelValues("true").Set(float64(len(allStaticPods))) + metrics.ActivePodCount.WithLabelValues("").Set(float64(len(activeRegularPods))) + metrics.ActivePodCount.WithLabelValues("true").Set(float64(len(activeStaticPods))) + metrics.MirrorPodCount.Set(float64(len(mirrorPods))) // At this point, the pod worker is aware of which pods are not desired (SyncKnownPods). // We now look through the set of active pods for those that the pod worker is not aware of @@ -1155,10 +1167,14 @@ func (kl *Kubelet) HandlePodCleanups(ctx context.Context) error { klog.V(3).InfoS("Pod will be restarted because it is in the desired set and not known to the pod workers (likely due to UID reuse)", "podUID", desiredPod.UID) isStatic := kubetypes.IsStaticPod(desiredPod) - mirrorPod, _ := kl.podManager.GetMirrorPodByPod(desiredPod) + pod, mirrorPod, wasMirror := kl.podManager.GetPodAndMirrorPod(desiredPod) + if pod == nil || wasMirror { + klog.V(2).InfoS("Programmer error, restartable pod was a mirror pod but activePods should never contain a mirror pod", "podUID", desiredPod.UID) + continue + } kl.podWorkers.UpdatePod(UpdatePodOptions{ UpdateType: kubetypes.SyncPodCreate, - Pod: desiredPod, + Pod: pod, MirrorPod: mirrorPod, }) @@ -1176,6 +1192,21 @@ func (kl *Kubelet) HandlePodCleanups(ctx context.Context) error { metrics.RestartedPodTotal.WithLabelValues("true").Add(float64(restartCountStatic)) metrics.RestartedPodTotal.WithLabelValues("").Add(float64(restartCount)) + // Complete termination of deleted pods that are not runtime pods (don't have + // running containers), are terminal, and are not known to pod workers. 
+ // An example is pods rejected during kubelet admission that have never + // started before (i.e. does not have an orphaned pod). + // Adding the pods with SyncPodKill to pod workers allows to proceed with + // force-deletion of such pods, yet preventing re-entry of the routine in the + // next invocation of HandlePodCleanups. + for _, pod := range kl.filterTerminalPodsToDelete(allPods, runningRuntimePods, workingPods) { + klog.V(3).InfoS("Handling termination and deletion of the pod to pod workers", "pod", klog.KObj(pod), "podUID", pod.UID) + kl.podWorkers.UpdatePod(UpdatePodOptions{ + UpdateType: kubetypes.SyncPodKill, + Pod: pod, + }) + } + // Finally, terminate any pods that are observed in the runtime but not present in the list of // known running pods from config. If we do terminate running runtime pods that will happen // asynchronously in the background and those will be processed in the next invocation of @@ -1245,10 +1276,44 @@ func (kl *Kubelet) HandlePodCleanups(ctx context.Context) error { // Cleanup any backoff entries. kl.backOff.GC() - return nil } +// filterTerminalPodsToDelete returns terminal pods which are ready to be +// deleted by the status manager, but are not in pod workers. +// First, the check for deletionTimestamp is a performance optimization as we +// don't need to do anything with terminal pods without deletionTimestamp. +// Second, the check for terminal pods is to avoid race conditions of triggering +// deletion on Pending pods which are not yet added to pod workers. +// Third, the check to skip pods known to pod workers is that the lifecycle of +// such pods is already handled by pod workers. +// Finally, we skip runtime pods as their termination is handled separately in +// the HandlePodCleanups routine. 
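The four-step filtering rationale in the comment above (deletion timestamp, terminal phase, not known to pod workers, not a running runtime pod) can be sketched standalone. This is a hedged illustration only: the `pod` struct and string UIDs below are hypothetical simplifications standing in for the real `*v1.Pod`, `*kubecontainer.Pod`, and `PodWorkerSync` types.

```go
package main

import "fmt"

// pod is a hypothetical stand-in for *v1.Pod with only the fields the
// filter consults.
type pod struct {
	uid      string
	deleting bool // stands in for DeletionTimestamp != nil
	terminal bool // stands in for podutil.IsPodPhaseTerminal(status.Phase)
}

// filterTerminalPodsToDelete mirrors the three skip checks plus the
// final runtime-pod subtraction described in the comment above.
func filterTerminalPodsToDelete(all []pod, runningUIDs []string, working map[string]bool) map[string]pod {
	out := make(map[string]pod)
	for _, p := range all {
		if !p.deleting {
			continue // no deletion timestamp: nothing to do yet
		}
		if !p.terminal {
			continue // avoid racing with Pending pods not yet in pod workers
		}
		if working[p.uid] {
			continue // pod workers already own this pod's lifecycle
		}
		out[p.uid] = p
	}
	for _, uid := range runningUIDs {
		delete(out, uid) // running runtime pods are terminated by a dedicated routine
	}
	return out
}

func main() {
	all := []pod{
		{uid: "a", deleting: true, terminal: true},  // eligible
		{uid: "b", deleting: true, terminal: false}, // not yet terminal
		{uid: "c", deleting: true, terminal: true},  // known to pod workers
		{uid: "d", deleting: true, terminal: true},  // still has running containers
	}
	got := filterTerminalPodsToDelete(all, []string{"d"}, map[string]bool{"c": true})
	fmt.Println(len(got)) // prints 1: only "a" survives all four checks
}
```

Returning a map keyed by UID (rather than a slice) is what makes the final subtraction of running runtime pods a cheap `delete` per entry.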
+func (kl *Kubelet) filterTerminalPodsToDelete(allPods []*v1.Pod, runningRuntimePods []*kubecontainer.Pod, workingPods map[types.UID]PodWorkerSync) map[types.UID]*v1.Pod { + terminalPodsToDelete := make(map[types.UID]*v1.Pod) + for _, pod := range allPods { + if pod.DeletionTimestamp == nil { + // skip pods which don't have a deletion timestamp + continue + } + if !podutil.IsPodPhaseTerminal(pod.Status.Phase) { + // skip the non-terminal pods + continue + } + if _, knownPod := workingPods[pod.UID]; knownPod { + // skip pods known to pod workers + continue + } + terminalPodsToDelete[pod.UID] = pod + } + for _, runningRuntimePod := range runningRuntimePods { + // skip running runtime pods - they are handled by a dedicated routine + // which terminates the containers + delete(terminalPodsToDelete, runningRuntimePod.ID) + } + return terminalPodsToDelete +} + // splitPodsByStatic separates a list of desired pods from the pod manager into // regular or static pods. Mirror pods are not valid config sources (a mirror pod // being created cannot cause the Kubelet to start running a static pod) and are @@ -1351,15 +1416,26 @@ func (kl *Kubelet) GetKubeletContainerLogs(ctx context.Context, podFullName, con return fmt.Errorf("pod %q cannot be found - no logs available", name) } - podUID := pod.UID - if mirrorPod, ok := kl.podManager.GetMirrorPodByPod(pod); ok { + // TODO: this should be using the podWorker's pod store as authoritative, since + // the mirrorPod might still exist, the pod may have been force deleted but + // is still terminating (users should be able to view logs of force deleted static pods + // based on full name). 
+ var podUID types.UID + pod, mirrorPod, wasMirror := kl.podManager.GetPodAndMirrorPod(pod) + if wasMirror { + if pod == nil { + return fmt.Errorf("mirror pod %q does not have a corresponding pod", name) + } podUID = mirrorPod.UID + } else { + podUID = pod.UID } + podStatus, found := kl.statusManager.GetPodStatus(podUID) if !found { // If there is no cached status, use the status from the - // apiserver. This is useful if kubelet has recently been - // restarted. + // config source (apiserver). This is useful if kubelet + // has recently been restarted. podStatus = pod.Status } @@ -1387,7 +1463,16 @@ func getPhase(pod *v1.Pod, info []v1.ContainerStatus, podIsTerminal bool) v1.Pod spec := pod.Spec pendingInitialization := 0 failedInitialization := 0 + + // regular init containers for _, container := range spec.InitContainers { + if kubetypes.IsRestartableInitContainer(&container) { + // Skip the restartable init containers here to handle them separately as + // they are slightly different from the init containers in terms of the + // pod phase. + continue + } + containerStatus, ok := podutil.GetContainerStatus(info, container.Name) if !ok { pendingInitialization++ @@ -1414,11 +1499,48 @@ func getPhase(pod *v1.Pod, info []v1.ContainerStatus, podIsTerminal bool) v1.Pod } } + // counters for restartable init and regular containers unknown := 0 running := 0 waiting := 0 stopped := 0 succeeded := 0 + + // restartable init containers + for _, container := range spec.InitContainers { + if !kubetypes.IsRestartableInitContainer(&container) { + // Skip the regular init containers, as they have been handled above. 
+ continue + } + containerStatus, ok := podutil.GetContainerStatus(info, container.Name) + if !ok { + unknown++ + continue + } + + switch { + case containerStatus.State.Running != nil: + if containerStatus.Started == nil || !*containerStatus.Started { + pendingInitialization++ + } + running++ + case containerStatus.State.Terminated != nil: + // Do nothing here, as terminated restartable init containers are not + // taken into account for the pod phase. + case containerStatus.State.Waiting != nil: + if containerStatus.LastTerminationState.Terminated != nil { + // Do nothing here, as terminated restartable init containers are not + // taken into account for the pod phase. + } else { + pendingInitialization++ + waiting++ + } + default: + pendingInitialization++ + unknown++ + } + } + for _, container := range spec.Containers { containerStatus, ok := podutil.GetContainerStatus(info, container.Name) if !ok { @@ -1450,7 +1572,11 @@ func getPhase(pod *v1.Pod, info []v1.ContainerStatus, podIsTerminal bool) v1.Pod } switch { - case pendingInitialization > 0: + case pendingInitialization > 0 && + // This is needed to handle the case where the pod has been initialized but + // the restartable init containers are restarting and the pod should not be + // placed back into v1.PodPending since the regular containers have run. + !kubecontainer.HasAnyRegularContainerStarted(&spec, info): fallthrough case waiting > 0: klog.V(5).InfoS("Pod waiting > 0, pending") @@ -1466,7 +1592,8 @@ func getPhase(pod *v1.Pod, info []v1.ContainerStatus, podIsTerminal bool) v1.Pod if podIsTerminal { // TODO(#116484): Also assign terminal phase to static pods. if !kubetypes.IsStaticPod(pod) { - // All containers are terminated in success + // All regular containers are terminated in success and all restartable + // init containers are stopped. 
if stopped == succeeded { return v1.PodSucceeded } @@ -1480,8 +1607,8 @@ func getPhase(pod *v1.Pod, info []v1.ContainerStatus, podIsTerminal bool) v1.Pod return v1.PodRunning } if stopped == succeeded { - // RestartPolicy is not Always, and all - // containers are terminated in success + // RestartPolicy is not Always, all containers are terminated in success + // and all restartable init containers are stopped. return v1.PodSucceeded } if spec.RestartPolicy == v1.RestartPolicyNever { @@ -1503,7 +1630,7 @@ func (kl *Kubelet) determinePodResizeStatus(pod *v1.Pod, podStatus *v1.PodStatus specStatusDiffer := false for _, c := range pod.Spec.Containers { if cs, ok := podutil.GetContainerStatus(podStatus.ContainerStatuses, c.Name); ok { - if cs.Resources != nil && diff.ObjectDiff(c.Resources, *cs.Resources) != "" { + if cs.Resources != nil && !cmp.Equal(c.Resources, *cs.Resources) { specStatusDiffer = true break } @@ -1586,7 +1713,7 @@ func (kl *Kubelet) generateAPIPodStatus(pod *v1.Pod, podStatus *kubecontainer.Po } // ensure the probe managers have up to date status for containers - kl.probeManager.UpdatePodStatus(pod.UID, s) + kl.probeManager.UpdatePodStatus(pod, s) // preserve all conditions not owned by the kubelet s.Conditions = make([]v1.PodCondition, 0, len(pod.Status.Conditions)+1) @@ -1608,23 +1735,38 @@ func (kl *Kubelet) generateAPIPodStatus(pod *v1.Pod, podStatus *kubecontainer.Po } // set all Kubelet-owned conditions - if utilfeature.DefaultFeatureGate.Enabled(features.PodHasNetworkCondition) { - s.Conditions = append(s.Conditions, status.GeneratePodHasNetworkCondition(pod, podStatus)) + if utilfeature.DefaultFeatureGate.Enabled(features.PodReadyToStartContainersCondition) { + s.Conditions = append(s.Conditions, status.GeneratePodReadyToStartContainersCondition(pod, podStatus)) } - s.Conditions = append(s.Conditions, status.GeneratePodInitializedCondition(&pod.Spec, s.InitContainerStatuses, s.Phase)) - s.Conditions = append(s.Conditions, 
status.GeneratePodReadyCondition(&pod.Spec, s.Conditions, s.ContainerStatuses, s.Phase)) - s.Conditions = append(s.Conditions, status.GenerateContainersReadyCondition(&pod.Spec, s.ContainerStatuses, s.Phase)) + allContainerStatuses := append(s.InitContainerStatuses, s.ContainerStatuses...) + s.Conditions = append(s.Conditions, status.GeneratePodInitializedCondition(&pod.Spec, allContainerStatuses, s.Phase)) + s.Conditions = append(s.Conditions, status.GeneratePodReadyCondition(&pod.Spec, s.Conditions, allContainerStatuses, s.Phase)) + s.Conditions = append(s.Conditions, status.GenerateContainersReadyCondition(&pod.Spec, allContainerStatuses, s.Phase)) s.Conditions = append(s.Conditions, v1.PodCondition{ Type: v1.PodScheduled, Status: v1.ConditionTrue, }) - // set HostIP and initialize PodIP/PodIPs for host network pods + // set HostIP/HostIPs and initialize PodIP/PodIPs for host network pods if kl.kubeClient != nil { hostIPs, err := kl.getHostIPsAnyWay() if err != nil { klog.V(4).InfoS("Cannot get host IPs", "err", err) } else { + if s.HostIP != "" { + if utilnet.IPFamilyOfString(s.HostIP) != utilnet.IPFamilyOf(hostIPs[0]) { + kl.recorder.Eventf(pod, v1.EventTypeWarning, "HostIPsIPFamilyMismatch", + "Kubelet detected an IPv%s node IP (%s), but the cloud provider selected an IPv%s node IP (%s); pass an explicit `--node-ip` to kubelet to fix this.", + utilnet.IPFamilyOfString(s.HostIP), s.HostIP, utilnet.IPFamilyOf(hostIPs[0]), hostIPs[0].String()) + } + } s.HostIP = hostIPs[0].String() + if utilfeature.DefaultFeatureGate.Enabled(features.PodHostIPs) { + s.HostIPs = []v1.HostIP{{IP: s.HostIP}} + if len(hostIPs) == 2 { + s.HostIPs = append(s.HostIPs, v1.HostIP{IP: hostIPs[1].String()}) + } + } + // HostNetwork Pods inherit the node IPs as PodIPs. They are immutable once set, // other than that if the node becomes dual-stack, we add the secondary IP. 
if kubecontainer.IsHostNetworkPod(pod) { @@ -1635,7 +1777,9 @@ func (kl *Kubelet) generateAPIPodStatus(pod *v1.Pod, podStatus *kubecontainer.Po } // Secondary IP is not set #105320 if len(hostIPs) == 2 && len(s.PodIPs) == 1 { - s.PodIPs = append(s.PodIPs, v1.PodIP{IP: hostIPs[1].String()}) + if utilnet.IPFamilyOfString(s.PodIPs[0].IP) != utilnet.IPFamilyOf(hostIPs[1]) { + s.PodIPs = append(s.PodIPs, v1.PodIP{IP: hostIPs[1].String()}) + } } } } @@ -2166,82 +2310,3 @@ func (kl *Kubelet) cleanupOrphanedPodCgroups(pcm cm.PodContainerManager, cgroupP go pcm.Destroy(val) } } - -// enableHostUserNamespace determines if the host user namespace should be used by the container runtime. -// Returns true if the pod is using a host pid, pic, or network namespace, the pod is using a non-namespaced -// capability, the pod contains a privileged container, or the pod has a host path volume. -// -// NOTE: when if a container shares any namespace with another container it must also share the user namespace -// or it will not have the correct capabilities in the namespace. This means that host user namespace -// is enabled per pod, not per container. -func (kl *Kubelet) enableHostUserNamespace(ctx context.Context, pod *v1.Pod) bool { - if kubecontainer.HasPrivilegedContainer(pod) || hasHostNamespace(pod) || - hasHostVolume(pod) || hasNonNamespacedCapability(pod) || kl.hasHostMountPVC(ctx, pod) { - return true - } - return false -} - -// hasNonNamespacedCapability returns true if MKNOD, SYS_TIME, or SYS_MODULE is requested for any container. -func hasNonNamespacedCapability(pod *v1.Pod) bool { - for _, c := range pod.Spec.Containers { - if c.SecurityContext != nil && c.SecurityContext.Capabilities != nil { - for _, cap := range c.SecurityContext.Capabilities.Add { - if cap == "MKNOD" || cap == "SYS_TIME" || cap == "SYS_MODULE" { - return true - } - } - } - } - - return false -} - -// hasHostVolume returns true if the pod spec has a HostPath volume. 
-func hasHostVolume(pod *v1.Pod) bool { - for _, v := range pod.Spec.Volumes { - if v.HostPath != nil { - return true - } - } - return false -} - -// hasHostNamespace returns true if hostIPC, hostNetwork, or hostPID are set to true. -func hasHostNamespace(pod *v1.Pod) bool { - if pod.Spec.SecurityContext == nil { - return false - } - return pod.Spec.HostIPC || pod.Spec.HostNetwork || pod.Spec.HostPID -} - -// hasHostMountPVC returns true if a PVC is referencing a HostPath volume. -func (kl *Kubelet) hasHostMountPVC(ctx context.Context, pod *v1.Pod) bool { - for _, volume := range pod.Spec.Volumes { - pvcName := "" - switch { - case volume.PersistentVolumeClaim != nil: - pvcName = volume.PersistentVolumeClaim.ClaimName - case volume.Ephemeral != nil: - pvcName = ephemeral.VolumeClaimName(pod, &volume) - default: - continue - } - pvc, err := kl.kubeClient.CoreV1().PersistentVolumeClaims(pod.Namespace).Get(ctx, pvcName, metav1.GetOptions{}) - if err != nil { - klog.InfoS("Unable to retrieve pvc", "pvc", klog.KRef(pod.Namespace, pvcName), "err", err) - continue - } - if pvc != nil { - referencedVolume, err := kl.kubeClient.CoreV1().PersistentVolumes().Get(ctx, pvc.Spec.VolumeName, metav1.GetOptions{}) - if err != nil { - klog.InfoS("Unable to retrieve pv", "pvName", pvc.Spec.VolumeName, "err", err) - continue - } - if referencedVolume != nil && referencedVolume.Spec.HostPath != nil { - return true - } - } - } - return false -} diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/kuberuntime/helpers.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/kuberuntime/helpers.go index 0605ab4d328d..e217b4487c12 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/kuberuntime/helpers.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/kuberuntime/helpers.go @@ -212,79 +212,40 @@ func toKubeRuntimeStatus(status *runtimeapi.RuntimeStatus) *kubecontainer.Runtim return &kubecontainer.RuntimeStatus{Conditions: conditions} } 
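The seccomp refactor in the helpers.go hunk that follows changes `fieldSeccompProfile`/`getSeccompProfile` to return an error instead of silently falling through when a `Localhost` profile has no `localhostProfile` set, while keeping the container-over-pod precedence. A minimal sketch of that precedence and the new error path, using hypothetical simplified types in place of `v1.SeccompProfile` and the CRI `runtimeapi.SecurityProfile`:

```go
package main

import (
	"errors"
	"fmt"
)

// seccompProfile is a hypothetical stand-in for v1.SeccompProfile.
type seccompProfile struct {
	typ       string // "RuntimeDefault", "Localhost", or "Unconfined"
	localhost string // profile file name, meaningful only for "Localhost"
}

// resolve mirrors fieldSeccompProfile: a nil profile defers to the
// fallback flag, and Localhost without a profile name is now an error.
func resolve(scmp *seccompProfile, fallback bool) (string, error) {
	if scmp == nil {
		if fallback {
			return "RuntimeDefault", nil
		}
		return "Unconfined", nil
	}
	switch scmp.typ {
	case "RuntimeDefault":
		return "RuntimeDefault", nil
	case "Localhost":
		if scmp.localhost == "" {
			return "", errors.New("localhostProfile must be set if seccompProfile type is Localhost")
		}
		return "Localhost:" + scmp.localhost, nil
	default:
		return "Unconfined", nil
	}
}

// getProfile mirrors getSeccompProfile's precedence: container field
// first, then the pod field, then the fallback default.
func getProfile(container, pod *seccompProfile, fallback bool) (string, error) {
	if container != nil {
		return resolve(container, fallback)
	}
	if pod != nil {
		return resolve(pod, fallback)
	}
	return resolve(nil, fallback)
}

func main() {
	got, _ := getProfile(nil, &seccompProfile{typ: "RuntimeDefault"}, false)
	fmt.Println(got) // pod-level profile applies when the container sets none
	_, err := getProfile(&seccompProfile{typ: "Localhost"}, nil, false)
	fmt.Println(err != nil) // prints true: missing localhostProfile is now an error
}
```

Surfacing the error forces callers (container and sandbox creation) to fail fast on an invalid pod spec rather than silently running the container unconfined.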
-func fieldProfile(scmp *v1.SeccompProfile, profileRootPath string, fallbackToRuntimeDefault bool) string { - if scmp == nil { - if fallbackToRuntimeDefault { - return v1.SeccompProfileRuntimeDefault - } - return "" - } - if scmp.Type == v1.SeccompProfileTypeRuntimeDefault { - return v1.SeccompProfileRuntimeDefault - } - if scmp.Type == v1.SeccompProfileTypeLocalhost && scmp.LocalhostProfile != nil && len(*scmp.LocalhostProfile) > 0 { - fname := filepath.Join(profileRootPath, *scmp.LocalhostProfile) - return v1.SeccompLocalhostProfileNamePrefix + fname - } - if scmp.Type == v1.SeccompProfileTypeUnconfined { - return v1.SeccompProfileNameUnconfined - } - - if fallbackToRuntimeDefault { - return v1.SeccompProfileRuntimeDefault - } - return "" -} - -func (m *kubeGenericRuntimeManager) getSeccompProfilePath(annotations map[string]string, containerName string, - podSecContext *v1.PodSecurityContext, containerSecContext *v1.SecurityContext, fallbackToRuntimeDefault bool) string { - // container fields are applied first - if containerSecContext != nil && containerSecContext.SeccompProfile != nil { - return fieldProfile(containerSecContext.SeccompProfile, m.seccompProfileRoot, fallbackToRuntimeDefault) - } - - // when container seccomp is not defined, try to apply from pod field - if podSecContext != nil && podSecContext.SeccompProfile != nil { - return fieldProfile(podSecContext.SeccompProfile, m.seccompProfileRoot, fallbackToRuntimeDefault) - } - - if fallbackToRuntimeDefault { - return v1.SeccompProfileRuntimeDefault - } - - return "" -} - -func fieldSeccompProfile(scmp *v1.SeccompProfile, profileRootPath string, fallbackToRuntimeDefault bool) *runtimeapi.SecurityProfile { +func fieldSeccompProfile(scmp *v1.SeccompProfile, profileRootPath string, fallbackToRuntimeDefault bool) (*runtimeapi.SecurityProfile, error) { if scmp == nil { if fallbackToRuntimeDefault { return &runtimeapi.SecurityProfile{ ProfileType: runtimeapi.SecurityProfile_RuntimeDefault, - } + }, nil } 
return &runtimeapi.SecurityProfile{ ProfileType: runtimeapi.SecurityProfile_Unconfined, - } + }, nil } if scmp.Type == v1.SeccompProfileTypeRuntimeDefault { return &runtimeapi.SecurityProfile{ ProfileType: runtimeapi.SecurityProfile_RuntimeDefault, - } + }, nil } - if scmp.Type == v1.SeccompProfileTypeLocalhost && scmp.LocalhostProfile != nil && len(*scmp.LocalhostProfile) > 0 { - fname := filepath.Join(profileRootPath, *scmp.LocalhostProfile) - return &runtimeapi.SecurityProfile{ - ProfileType: runtimeapi.SecurityProfile_Localhost, - LocalhostRef: fname, + if scmp.Type == v1.SeccompProfileTypeLocalhost { + if scmp.LocalhostProfile != nil && len(*scmp.LocalhostProfile) > 0 { + fname := filepath.Join(profileRootPath, *scmp.LocalhostProfile) + return &runtimeapi.SecurityProfile{ + ProfileType: runtimeapi.SecurityProfile_Localhost, + LocalhostRef: fname, + }, nil + } else { + return nil, fmt.Errorf("localhostProfile must be set if seccompProfile type is Localhost.") } } return &runtimeapi.SecurityProfile{ ProfileType: runtimeapi.SecurityProfile_Unconfined, - } + }, nil } func (m *kubeGenericRuntimeManager) getSeccompProfile(annotations map[string]string, containerName string, - podSecContext *v1.PodSecurityContext, containerSecContext *v1.SecurityContext, fallbackToRuntimeDefault bool) *runtimeapi.SecurityProfile { + podSecContext *v1.PodSecurityContext, containerSecContext *v1.SecurityContext, fallbackToRuntimeDefault bool) (*runtimeapi.SecurityProfile, error) { // container fields are applied first if containerSecContext != nil && containerSecContext.SeccompProfile != nil { return fieldSeccompProfile(containerSecContext.SeccompProfile, m.seccompProfileRoot, fallbackToRuntimeDefault) @@ -298,10 +259,10 @@ func (m *kubeGenericRuntimeManager) getSeccompProfile(annotations map[string]str if fallbackToRuntimeDefault { return &runtimeapi.SecurityProfile{ ProfileType: runtimeapi.SecurityProfile_RuntimeDefault, - } + }, nil } return &runtimeapi.SecurityProfile{ ProfileType: 
runtimeapi.SecurityProfile_Unconfined, - } + }, nil } diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/kuberuntime/instrumented_services.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/kuberuntime/instrumented_services.go index a83565a9c968..cfa633a99837 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/kuberuntime/instrumented_services.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/kuberuntime/instrumented_services.go @@ -361,3 +361,12 @@ func (in instrumentedRuntimeService) ListPodSandboxMetrics(ctx context.Context) recordError(operation, err) return out, err } + +func (in instrumentedRuntimeService) RuntimeConfig(ctx context.Context) (*runtimeapi.RuntimeConfigResponse, error) { + const operation = "runtime_config" + defer recordOperation(operation, time.Now()) + + out, err := in.service.RuntimeConfig(ctx) + recordError(operation, err) + return out, err +} diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/kuberuntime/kuberuntime_container.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/kuberuntime/kuberuntime_container.go index 3fd849e4ae2e..eb25241cc118 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/kuberuntime/kuberuntime_container.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/kuberuntime/kuberuntime_container.go @@ -52,6 +52,7 @@ import ( kubecontainer "k8s.io/kubernetes/pkg/kubelet/container" "k8s.io/kubernetes/pkg/kubelet/cri/remote" "k8s.io/kubernetes/pkg/kubelet/events" + proberesults "k8s.io/kubernetes/pkg/kubelet/prober/results" "k8s.io/kubernetes/pkg/kubelet/types" "k8s.io/kubernetes/pkg/kubelet/util/format" "k8s.io/kubernetes/pkg/util/tail" @@ -136,7 +137,6 @@ func calcRestartCountByLogDir(path string) (int, error) { if _, err := os.Stat(path); err != nil { return 0, nil } - restartCount := int(0) files, err := os.ReadDir(path) if err != nil { return 0, err @@ -144,6 +144,7 @@ func calcRestartCountByLogDir(path 
string) (int, error) { if len(files) == 0 { return 0, nil } + restartCount := 0 restartCountLogFileRegex := regexp.MustCompile(`^(\d+)\.log(\..*)?`) for _, file := range files { if file.IsDir() { @@ -328,7 +329,7 @@ func (m *kubeGenericRuntimeManager) generateContainerConfig(ctx context.Context, Name: container.Name, Attempt: restartCountUint32, }, - Image: &runtimeapi.ImageSpec{Image: imageRef}, + Image: &runtimeapi.ImageSpec{Image: imageRef, UserSpecifiedImage: container.Image}, Command: command, Args: args, WorkingDir: container.WorkingDir, @@ -725,19 +726,21 @@ func (m *kubeGenericRuntimeManager) killContainer(ctx context.Context, pod *v1.P } m.recordContainerEvent(pod, containerSpec, containerID.ID, v1.EventTypeNormal, events.KillingContainer, message) + if gracePeriodOverride != nil { + gracePeriod = *gracePeriodOverride + klog.V(3).InfoS("Killing container with a grace period override", "pod", klog.KObj(pod), "podUID", pod.UID, + "containerName", containerName, "containerID", containerID.String(), "gracePeriod", gracePeriod) + } + // Run the pre-stop lifecycle hooks if applicable and if there is enough time to run it if containerSpec.Lifecycle != nil && containerSpec.Lifecycle.PreStop != nil && gracePeriod > 0 { gracePeriod = gracePeriod - m.executePreStopHook(ctx, pod, containerID, containerSpec, gracePeriod) } + // always give containers a minimal shutdown window to avoid unnecessary SIGKILLs if gracePeriod < minimumGracePeriodInSeconds { gracePeriod = minimumGracePeriodInSeconds } - if gracePeriodOverride != nil { - gracePeriod = *gracePeriodOverride - klog.V(3).InfoS("Killing container with a grace period override", "pod", klog.KObj(pod), "podUID", pod.UID, - "containerName", containerName, "containerID", containerID.String(), "gracePeriod", gracePeriod) - } klog.V(2).InfoS("Killing container with a grace period", "pod", klog.KObj(pod), "podUID", pod.UID, "containerName", containerName, "containerID", containerID.String(), "gracePeriod", gracePeriod) @@ 
-846,61 +849,224 @@ func (m *kubeGenericRuntimeManager) purgeInitContainers(ctx context.Context, pod } } -// findNextInitContainerToRun returns the status of the last failed container, the -// index of next init container to start, or done if there are no further init containers. -// Status is only returned if an init container is failed, in which case next will -// point to the current container. -func findNextInitContainerToRun(pod *v1.Pod, podStatus *kubecontainer.PodStatus) (status *kubecontainer.Status, next *v1.Container, done bool) { - if len(pod.Spec.InitContainers) == 0 { - return nil, nil, true - } - - // If any of the main containers have status and are Running, then all init containers must - // have been executed at some point in the past. However, they could have been removed - // from the container runtime now, and if we proceed, it would appear as if they - // never ran and will re-execute improperly. - for i := range pod.Spec.Containers { - container := &pod.Spec.Containers[i] +// hasAnyRegularContainerCreated returns true if any regular container has been +// created, which indicates all init containers have been initialized. +func hasAnyRegularContainerCreated(pod *v1.Pod, podStatus *kubecontainer.PodStatus) bool { + for _, container := range pod.Spec.Containers { status := podStatus.FindContainerStatusByName(container.Name) - if status != nil && status.State == kubecontainer.ContainerStateRunning { - return nil, nil, true + if status == nil { + continue + } + switch status.State { + case kubecontainer.ContainerStateCreated, + kubecontainer.ContainerStateRunning, + kubecontainer.ContainerStateExited: + return true + default: + // Ignore other states } } + return false +} - // If there are failed containers, return the status of the last failed one. 
- for i := len(pod.Spec.InitContainers) - 1; i >= 0; i-- { - container := &pod.Spec.InitContainers[i] - status := podStatus.FindContainerStatusByName(container.Name) - if status != nil && isInitContainerFailed(status) { - return status, container, false - } +// computeInitContainerActions sets the actions on the given changes that need +// to be taken for the init containers. This includes actions to initialize the +// init containers and actions to keep restartable init containers running. +// computeInitContainerActions returns true if pod has been initialized. +// +// The actions include: +// - Start the first init container that has not been started. +// - Restart all restartable init containers that have started but are not running. +// - Kill the restartable init containers that are not alive or started. +func (m *kubeGenericRuntimeManager) computeInitContainerActions(pod *v1.Pod, podStatus *kubecontainer.PodStatus, changes *podActions) bool { + if len(pod.Spec.InitContainers) == 0 { + return true } - // There are no failed containers now. + // If any of the main containers have status and are Running, then all init containers must + // have been executed at some point in the past. However, they could have been removed + // from the container runtime now, and if we proceed, it would appear as if they + // never ran and will re-execute improperly except for the restartable init containers. + podHasInitialized := hasAnyRegularContainerCreated(pod, podStatus) + + // isPreviouslyInitialized indicates if the current init container is + // previously initialized. + isPreviouslyInitialized := podHasInitialized + restartOnFailure := shouldRestartOnFailure(pod) + + // Note that we iterate through the init containers in reverse order to find + // the next init container to run, as the completed init containers may get + // removed from container runtime for various reasons. Therefore the kubelet + // should rely on the minimal number of init containers - the last one. 
+ // + // Once we find the next init container to run, iterate through the rest to + // find the restartable init containers to restart. for i := len(pod.Spec.InitContainers) - 1; i >= 0; i-- { container := &pod.Spec.InitContainers[i] status := podStatus.FindContainerStatusByName(container.Name) + klog.V(4).InfoS("Computing init container action", "pod", klog.KObj(pod), "container", container.Name, "status", status) if status == nil { + // If the container is previously initialized but its status is not + // found, it means its last status is removed for some reason. + // Restart it if it is a restartable init container. + if isPreviouslyInitialized && types.IsRestartableInitContainer(container) { + changes.InitContainersToStart = append(changes.InitContainersToStart, i) + } continue } - // container is still running, return not done. - if status.State == kubecontainer.ContainerStateRunning { - return nil, nil, false + if isPreviouslyInitialized && !types.IsRestartableInitContainer(container) { + // after initialization, only restartable init containers need to be kept + // running + continue } - if status.State == kubecontainer.ContainerStateExited { - // all init containers successful - if i == (len(pod.Spec.InitContainers) - 1) { - return nil, nil, true + switch status.State { + case kubecontainer.ContainerStateCreated: + // nothing to do but wait for it to start + + case kubecontainer.ContainerStateRunning: + if !types.IsRestartableInitContainer(container) { + break } - // all containers up to i successful, go to i+1 - return nil, &pod.Spec.InitContainers[i+1], false + if types.IsRestartableInitContainer(container) { + if container.StartupProbe != nil { + startup, found := m.startupManager.Get(status.ID) + if !found { + // If the startup probe has not been run, wait for it. + break + } + if startup != proberesults.Success { + if startup == proberesults.Failure { + // If the restartable init container failed the startup probe, + // restart it. 
+ changes.ContainersToKill[status.ID] = containerToKillInfo{ + name: container.Name, + container: container, + message: fmt.Sprintf("Init container %s failed startup probe", container.Name), + reason: reasonStartupProbe, + } + changes.InitContainersToStart = append(changes.InitContainersToStart, i) + } + break + } + } + + klog.V(4).InfoS("Init container has been initialized", "pod", klog.KObj(pod), "container", container.Name) + if i == (len(pod.Spec.InitContainers) - 1) { + podHasInitialized = true + } else if !isPreviouslyInitialized { + // this init container is initialized for the first time, start the next one + changes.InitContainersToStart = append(changes.InitContainersToStart, i+1) + } + + // A restartable init container does not have to take into account its + // liveness probe when it determines to start the next init container. + if container.LivenessProbe != nil { + liveness, found := m.livenessManager.Get(status.ID) + if !found { + // If the liveness probe has not been run, wait for it. + break + } + if liveness == proberesults.Failure { + // If the restartable init container failed the liveness probe, + // restart it. + changes.ContainersToKill[status.ID] = containerToKillInfo{ + name: container.Name, + container: container, + message: fmt.Sprintf("Init container %s failed liveness probe", container.Name), + reason: reasonLivenessProbe, + } + changes.InitContainersToStart = append(changes.InitContainersToStart, i) + } + } + } else { // init container + // nothing to do but wait for it to finish + break + } + + // If the init container failed and the restart policy is Never, the pod is terminal. + // Otherwise, restart the init container.
+ case kubecontainer.ContainerStateExited: + if types.IsRestartableInitContainer(container) { + changes.InitContainersToStart = append(changes.InitContainersToStart, i) + } else { // init container + if isInitContainerFailed(status) { + if !restartOnFailure { + changes.KillPod = true + changes.InitContainersToStart = nil + return false + } + changes.InitContainersToStart = append(changes.InitContainersToStart, i) + break + } + + klog.V(4).InfoS("Init container has been initialized", "pod", klog.KObj(pod), "container", container.Name) + if i == (len(pod.Spec.InitContainers) - 1) { + podHasInitialized = true + } else { + // this init container is initialized for the first time, start the next one + changes.InitContainersToStart = append(changes.InitContainersToStart, i+1) + } + } + + default: // kubecontainer.ContainerStatusUnknown or other unknown states + if types.IsRestartableInitContainer(container) { + // If the restartable init container is in unknown state, restart it. + changes.ContainersToKill[status.ID] = containerToKillInfo{ + name: container.Name, + container: container, + message: fmt.Sprintf("Init container is in %q state, try killing it before restart", + status.State), + reason: reasonUnknown, + } + changes.InitContainersToStart = append(changes.InitContainersToStart, i) + } else { // init container + if !isInitContainerFailed(status) { + klog.V(4).InfoS("This should not happen, init container is in unknown state but not failed", "pod", klog.KObj(pod), "containerStatus", status) + } + + if !restartOnFailure { + changes.KillPod = true + changes.InitContainersToStart = nil + return false + } + + // If the init container is in unknown state, restart it. 
+ changes.ContainersToKill[status.ID] = containerToKillInfo{ + name: container.Name, + container: container, + message: fmt.Sprintf("Init container is in %q state, try killing it before restart", + status.State), + reason: reasonUnknown, + } + changes.InitContainersToStart = append(changes.InitContainersToStart, i) + } } + + if !isPreviouslyInitialized { + // the one before this init container has been initialized + isPreviouslyInitialized = true + } + } + + // this means no init containers have been started, + // start the first one + if !isPreviouslyInitialized { + changes.InitContainersToStart = append(changes.InitContainersToStart, 0) + } + + // reverse the InitContainersToStart, as the above loop iterated through the + // init containers backwards, but we want to start them as per the order in + // the pod spec. + l := len(changes.InitContainersToStart) + for i := 0; i < l/2; i++ { + changes.InitContainersToStart[i], changes.InitContainersToStart[l-1-i] = + changes.InitContainersToStart[l-1-i], changes.InitContainersToStart[i] } - return nil, &pod.Spec.InitContainers[0], false + return podHasInitialized } // GetContainerLogs returns logs of a specific container. diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/kuberuntime/kuberuntime_container_linux.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/kuberuntime/kuberuntime_container_linux.go index 4c753b466f3b..c76389c0d9ff 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/kuberuntime/kuberuntime_container_linux.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/kuberuntime/kuberuntime_container_linux.go @@ -20,18 +20,26 @@ limitations under the License. 
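The hunk above collects `InitContainersToStart` while walking the init containers backwards, then flips the slice back into pod-spec order with an in-place swap loop. A minimal standalone sketch of that reversal (`reverseInts` is an illustrative name, not a kubelet helper):

```go
package main

import "fmt"

// reverseInts reverses a slice in place using the same two-index swap
// loop the diff uses for changes.InitContainersToStart.
func reverseInts(s []int) {
	l := len(s)
	for i := 0; i < l/2; i++ {
		s[i], s[l-1-i] = s[l-1-i], s[i]
	}
}

func main() {
	toStart := []int{3, 1, 0} // indexes as collected during the reverse iteration
	reverseInts(toStart)
	fmt.Println(toStart) // [0 1 3]: start order now matches pod.Spec.InitContainers
}
```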
package kuberuntime import ( + "errors" + "fmt" "math" "os" + "path/filepath" "strconv" + "sync" "time" + "github.com/containerd/cgroups" + cadvisorv1 "github.com/google/cadvisor/info/v1" libcontainercgroups "github.com/opencontainers/runc/libcontainer/cgroups" + v1 "k8s.io/api/core/v1" "k8s.io/apimachinery/pkg/api/resource" utilfeature "k8s.io/apiserver/pkg/util/feature" runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1" "k8s.io/klog/v2" v1helper "k8s.io/kubernetes/pkg/apis/core/v1/helper" + kubeapiqos "k8s.io/kubernetes/pkg/apis/core/v1/helper/qos" kubefeatures "k8s.io/kubernetes/pkg/features" "k8s.io/kubernetes/pkg/kubelet/cm" kubecontainer "k8s.io/kubernetes/pkg/kubelet/container" @@ -46,7 +54,7 @@ func (m *kubeGenericRuntimeManager) applyPlatformSpecificContainerConfig(config enforceMemoryQoS := false // Set memory.min and memory.high if MemoryQoS enabled with cgroups v2 if utilfeature.DefaultFeatureGate.Enabled(kubefeatures.MemoryQoS) && - libcontainercgroups.IsCgroup2UnifiedMode() { + isCgroup2UnifiedMode() { enforceMemoryQoS = true } cl, err := m.generateLinuxContainerConfig(container, pod, uid, username, nsTarget, enforceMemoryQoS) @@ -55,7 +63,7 @@ func (m *kubeGenericRuntimeManager) applyPlatformSpecificContainerConfig(config } config.Linux = cl - if utilfeature.DefaultFeatureGate.Enabled(kubefeatures.UserNamespacesStatelessPodsSupport) { + if utilfeature.DefaultFeatureGate.Enabled(kubefeatures.UserNamespacesSupport) { if cl.SecurityContext.NamespaceOptions.UsernsOptions != nil { for _, mount := range config.Mounts { mount.UidMappings = cl.SecurityContext.NamespaceOptions.UsernsOptions.Uids @@ -99,21 +107,17 @@ func (m *kubeGenericRuntimeManager) generateLinuxContainerResources(pod *v1.Pod, lcr.HugepageLimits = GetHugepageLimitsFromResources(container.Resources) - if utilfeature.DefaultFeatureGate.Enabled(kubefeatures.NodeSwap) { + if swapConfigurationHelper := newSwapConfigurationHelper(*m.machineInfo); 
utilfeature.DefaultFeatureGate.Enabled(kubefeatures.NodeSwap) { // NOTE(ehashman): Behaviour is defined in the opencontainers runtime spec: // https://github.com/opencontainers/runtime-spec/blob/1c3f411f041711bbeecf35ff7e93461ea6789220/config-linux.md#memory switch m.memorySwapBehavior { - case kubelettypes.UnlimitedSwap: - // -1 = unlimited swap - lcr.MemorySwapLimitInBytes = -1 case kubelettypes.LimitedSwap: - fallthrough + swapConfigurationHelper.ConfigureLimitedSwap(lcr, pod, container) default: - // memorySwapLimit = total permitted memory+swap; if equal to memory limit, => 0 swap above memory limit - // Some swapping is still possible. - // Note that if memory limit is 0, memory swap limit is ignored. - lcr.MemorySwapLimitInBytes = lcr.MemoryLimitInBytes + swapConfigurationHelper.ConfigureUnlimitedSwap(lcr) } + } else { + swapConfigurationHelper.ConfigureNoSwap(lcr) } // Set memory.min and memory.high to enforce MemoryQoS @@ -122,7 +126,7 @@ func (m *kubeGenericRuntimeManager) generateLinuxContainerResources(pod *v1.Pod, memoryRequest := container.Resources.Requests.Memory().Value() memoryLimit := container.Resources.Limits.Memory().Value() if memoryRequest != 0 { - unified[cm.MemoryMin] = strconv.FormatInt(memoryRequest, 10) + unified[cm.Cgroup2MemoryMin] = strconv.FormatInt(memoryRequest, 10) } // Guaranteed pods by their QoS definition requires that memory request equals memory limit and cpu request must equal cpu limit. 
@@ -148,7 +152,7 @@ func (m *kubeGenericRuntimeManager) generateLinuxContainerResources(pod *v1.Pod, } } if memoryHigh != 0 && memoryHigh > memoryRequest { - unified[cm.MemoryHigh] = strconv.FormatInt(memoryHigh, 10) + unified[cm.Cgroup2MemoryHigh] = strconv.FormatInt(memoryHigh, 10) } } if len(unified) > 0 { @@ -171,7 +175,7 @@ func (m *kubeGenericRuntimeManager) generateContainerResources(pod *v1.Pod, cont enforceMemoryQoS := false // Set memory.min and memory.high if MemoryQoS enabled with cgroups v2 if utilfeature.DefaultFeatureGate.Enabled(kubefeatures.MemoryQoS) && - libcontainercgroups.IsCgroup2UnifiedMode() { + isCgroup2UnifiedMode() { enforceMemoryQoS = true } return &runtimeapi.ContainerResources{ @@ -215,6 +219,15 @@ func (m *kubeGenericRuntimeManager) calculateLinuxResources(cpuRequest, cpuLimit resources.CpuPeriod = cpuPeriod } + // runc requires cgroupv2 for unified mode + if isCgroup2UnifiedMode() { + resources.Unified = map[string]string{ + // Ask the kernel to kill all processes in the container cgroup in case of OOM. + // See memory.oom.group in https://www.kernel.org/doc/html/latest/admin-guide/cgroup-v2.html for + // more info. + "memory.oom.group": "1", + } + } return &resources } @@ -289,3 +302,126 @@ func toKubeContainerResources(statusResources *runtimeapi.ContainerResources) *k } return cStatusResources } + +// Note: this function variable is being added here so it would be possible to mock +// the cgroup version for unit tests by assigning a new mocked function into it. Without it, +// the cgroup version would solely depend on the environment running the test. 
+var isCgroup2UnifiedMode = func() bool { + return libcontainercgroups.IsCgroup2UnifiedMode() +} + +var ( + swapControllerAvailability bool + swapControllerAvailabilityOnce sync.Once +) + +func swapControllerAvailable() bool { + // See https://github.com/containerd/containerd/pull/7838/ + swapControllerAvailabilityOnce.Do(func() { + const warn = "Failed to detect the availability of the swap controller, assuming not available" + p := "/sys/fs/cgroup/memory/memory.memsw.limit_in_bytes" + if libcontainercgroups.IsCgroup2UnifiedMode() { + // memory.swap.max does not exist in the cgroup root, so we check /sys/fs/cgroup//memory.swap.max + _, unified, err := cgroups.ParseCgroupFileUnified("/proc/self/cgroup") + if err != nil { + klog.V(5).ErrorS(fmt.Errorf("failed to parse /proc/self/cgroup: %w", err), warn) + return + } + p = filepath.Join("/sys/fs/cgroup", unified, "memory.swap.max") + } + if _, err := os.Stat(p); err != nil { + if !errors.Is(err, os.ErrNotExist) { + klog.V(5).ErrorS(err, warn) + } + return + } + swapControllerAvailability = true + }) + return swapControllerAvailability +} + +type swapConfigurationHelper struct { + machineInfo cadvisorv1.MachineInfo +} + +func newSwapConfigurationHelper(machineInfo cadvisorv1.MachineInfo) *swapConfigurationHelper { + return &swapConfigurationHelper{machineInfo: machineInfo} +} + +func (m swapConfigurationHelper) ConfigureLimitedSwap(lcr *runtimeapi.LinuxContainerResources, pod *v1.Pod, container *v1.Container) { + podQos := kubeapiqos.GetPodQOS(pod) + containerDoesNotRequestMemory := container.Resources.Requests.Memory().IsZero() && container.Resources.Limits.Memory().IsZero() + memoryRequestEqualsToLimit := container.Resources.Requests.Memory().Cmp(*container.Resources.Limits.Memory()) == 0 + + if podQos != v1.PodQOSBurstable || containerDoesNotRequestMemory || !isCgroup2UnifiedMode() || memoryRequestEqualsToLimit { + m.ConfigureNoSwap(lcr) + return + } + + containerMemoryRequest := 
container.Resources.Requests.Memory() + swapLimit, err := calcSwapForBurstablePods(containerMemoryRequest.Value(), int64(m.machineInfo.MemoryCapacity), int64(m.machineInfo.SwapCapacity)) + + if err != nil { + klog.ErrorS(err, "cannot calculate swap allocation amount; disallowing swap") + m.ConfigureNoSwap(lcr) + return + } + + m.configureSwap(lcr, swapLimit) +} + +func (m swapConfigurationHelper) ConfigureNoSwap(lcr *runtimeapi.LinuxContainerResources) { + if !isCgroup2UnifiedMode() { + if swapControllerAvailable() { + // memorySwapLimit = total permitted memory+swap; if equal to memory limit, => 0 swap above memory limit + // Some swapping is still possible. + // Note that if memory limit is 0, memory swap limit is ignored. + lcr.MemorySwapLimitInBytes = lcr.MemoryLimitInBytes + } + return + } + + m.configureSwap(lcr, 0) +} + +func (m swapConfigurationHelper) ConfigureUnlimitedSwap(lcr *runtimeapi.LinuxContainerResources) { + if !isCgroup2UnifiedMode() { + m.ConfigureNoSwap(lcr) + return + } + + if lcr.Unified == nil { + lcr.Unified = map[string]string{} + } + + lcr.Unified[cm.Cgroup2MaxSwapFilename] = "max" +} + +func (m swapConfigurationHelper) configureSwap(lcr *runtimeapi.LinuxContainerResources, swapMemory int64) { + if !isCgroup2UnifiedMode() { + klog.ErrorS(fmt.Errorf("swap configuration is not supported with cgroup v1"), "swap configuration under cgroup v1 is unexpected") + return + } + + if lcr.Unified == nil { + lcr.Unified = map[string]string{} + } + + lcr.Unified[cm.Cgroup2MaxSwapFilename] = fmt.Sprintf("%d", swapMemory) +} + +// The swap limit is calculated as (<containerMemoryRequest>/<nodeTotalMemory>)*<totalPodsSwapAvailable>.
+// For more info, please look at the following KEP: https://kep.k8s.io/2400 +func calcSwapForBurstablePods(containerMemoryRequest, nodeTotalMemory, totalPodsSwapAvailable int64) (int64, error) { + if nodeTotalMemory <= 0 { + return 0, fmt.Errorf("total node memory is 0") + } + if containerMemoryRequest > nodeTotalMemory { + return 0, fmt.Errorf("container request %d is larger than total node memory %d", containerMemoryRequest, nodeTotalMemory) + } + + containerMemoryProportion := float64(containerMemoryRequest) / float64(nodeTotalMemory) + swapAllocation := containerMemoryProportion * float64(totalPodsSwapAvailable) + + return int64(swapAllocation), nil +} diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/kuberuntime/kuberuntime_gc.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/kuberuntime/kuberuntime_gc.go index a91e190ee750..35a19704b951 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/kuberuntime/kuberuntime_gc.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/kuberuntime/kuberuntime_gc.go @@ -293,7 +293,7 @@ func (cgc *containerGC) evictSandboxes(ctx context.Context, evictNonDeletedPods sandboxIDs.Insert(container.PodSandboxId) } - sandboxesByPod := make(sandboxesByPodUID) + sandboxesByPod := make(sandboxesByPodUID, len(sandboxes)) for _, sandbox := range sandboxes { podUID := types.UID(sandbox.Metadata.Uid) sandboxInfo := sandboxGCInfo{ @@ -301,13 +301,8 @@ func (cgc *containerGC) evictSandboxes(ctx context.Context, evictNonDeletedPods createTime: time.Unix(0, sandbox.CreatedAt), } - // Set ready sandboxes to be active. - if sandbox.State == runtimeapi.PodSandboxState_SANDBOX_READY { - sandboxInfo.active = true - } - - // Set sandboxes that still have containers to be active. - if sandboxIDs.Has(sandbox.Id) { + // Set ready sandboxes and sandboxes that still have containers to be active. 
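The proportional formula used by `calcSwapForBurstablePods` above can be exercised on its own; `calcSwap` below is an illustrative standalone copy of that logic (the name is mine, not a kubelet export):

```go
package main

import "fmt"

// calcSwap reproduces the KEP-2400 burstable-pod formula:
// limit = (containerMemoryRequest / nodeTotalMemory) * totalPodsSwapAvailable.
func calcSwap(containerMemoryRequest, nodeTotalMemory, totalPodsSwapAvailable int64) (int64, error) {
	if nodeTotalMemory <= 0 {
		return 0, fmt.Errorf("total node memory is 0")
	}
	if containerMemoryRequest > nodeTotalMemory {
		return 0, fmt.Errorf("container request %d is larger than total node memory %d", containerMemoryRequest, nodeTotalMemory)
	}
	proportion := float64(containerMemoryRequest) / float64(nodeTotalMemory)
	return int64(proportion * float64(totalPodsSwapAvailable)), nil
}

func main() {
	const gib = int64(1) << 30
	// A container requesting 1 GiB on a 4 GiB node with 2 GiB of swap
	// is granted a quarter of the node's swap: 512 MiB.
	limit, err := calcSwap(1*gib, 4*gib, 2*gib)
	fmt.Println(limit, err) // 536870912 <nil>
}
```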
+ if sandbox.State == runtimeapi.PodSandboxState_SANDBOX_READY || sandboxIDs.Has(sandbox.Id) { sandboxInfo.active = true } diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/kuberuntime/kuberuntime_manager.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/kuberuntime/kuberuntime_manager.go index 9c9cc56d5ae9..c8e76d41f286 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/kuberuntime/kuberuntime_manager.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/kuberuntime/kuberuntime_manager.go @@ -26,6 +26,7 @@ import ( "time" cadvisorapi "github.com/google/cadvisor/info/v1" + "github.com/google/go-cmp/cmp" "go.opentelemetry.io/otel/trace" crierror "k8s.io/cri-api/pkg/errors" "k8s.io/klog/v2" @@ -34,7 +35,6 @@ import ( "k8s.io/apimachinery/pkg/api/resource" metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" kubetypes "k8s.io/apimachinery/pkg/types" - "k8s.io/apimachinery/pkg/util/diff" utilruntime "k8s.io/apimachinery/pkg/util/runtime" utilversion "k8s.io/apimachinery/pkg/util/version" utilfeature "k8s.io/apiserver/pkg/util/feature" @@ -492,11 +492,13 @@ type podActions struct { // The attempt number of creating sandboxes for the pod. Attempt uint32 - // The next init container to start. - NextInitContainerToStart *v1.Container + // InitContainersToStart keeps a list of indexes for the init containers to + // start, where the index is the index of the specific init container in the + // pod spec (pod.Spec.InitContainers). + InitContainersToStart []int // ContainersToStart keeps a list of indexes for the containers to start, // where the index is the index of the specific container in the pod spec ( - // pod.Spec.Containers. + // pod.Spec.Containers). 
ContainersToStart []int // ContainersToKill keeps a map of containers that need to be killed, note that // the key is the container ID of the container, while @@ -553,7 +555,7 @@ func (m *kubeGenericRuntimeManager) computePodResizeAction(pod *v1.Pod, containe if !exists || apiContainerStatus.State.Running == nil || apiContainerStatus.Resources == nil || kubeContainerStatus.State != kubecontainer.ContainerStateRunning || kubeContainerStatus.ID.String() != apiContainerStatus.ContainerID || - len(diff.ObjectDiff(container.Resources.Requests, apiContainerStatus.AllocatedResources)) != 0 { + !cmp.Equal(container.Resources.Requests, apiContainerStatus.AllocatedResources) { return true } @@ -827,7 +829,7 @@ func (m *kubeGenericRuntimeManager) computePodActions(ctx context.Context, pod * if createPodSandbox { if !shouldRestartOnFailure(pod) && attempt != 0 && len(podStatus.ContainerStatuses) != 0 { // Should not restart the pod, just return. - // we should not create a sandbox for a pod if it is already done. + // we should not create a sandbox, and just kill the pod if it is already done. // if all containers are done and should not be started, there is no need to create a new sandbox. // this stops confusing logs on pods whose containers all have exit codes, but we recreate a sandbox before terminating it. // @@ -846,18 +848,22 @@ func (m *kubeGenericRuntimeManager) computePodActions(ctx context.Context, pod * } containersToStart = append(containersToStart, idx) } - // We should not create a sandbox for a Pod if initialization is done and there is no container to start. - if len(containersToStart) == 0 { - _, _, done := findNextInitContainerToRun(pod, podStatus) - if done { - changes.CreateSandbox = false - return changes - } + + // If there is any regular container, it means all init containers have + // been initialized. 
+ hasInitialized := hasAnyRegularContainerCreated(pod, podStatus) + // We should not create a sandbox, and just kill the pod if initialization + // is done and there is no container to start. + if hasInitialized && len(containersToStart) == 0 { + changes.CreateSandbox = false + return changes } + // If we are creating a pod sandbox, we should restart from the initial + // state. if len(pod.Spec.InitContainers) != 0 { // Pod has init containers, return the first one. - changes.NextInitContainerToStart = &pod.Spec.InitContainers[0] + changes.InitContainersToStart = []int{0} return changes } changes.ContainersToStart = containersToStart @@ -874,27 +880,8 @@ func (m *kubeGenericRuntimeManager) computePodActions(ctx context.Context, pod * } } - // Check initialization progress. - initLastStatus, next, done := findNextInitContainerToRun(pod, podStatus) - if !done { - if next != nil { - initFailed := initLastStatus != nil && isInitContainerFailed(initLastStatus) - if initFailed && !shouldRestartOnFailure(pod) { - changes.KillPod = true - } else { - // Always try to stop containers in unknown state first. - if initLastStatus != nil && initLastStatus.State == kubecontainer.ContainerStateUnknown { - changes.ContainersToKill[initLastStatus.ID] = containerToKillInfo{ - name: next.Name, - container: next, - message: fmt.Sprintf("Init container is in %q state, try killing it before restart", - initLastStatus.State), - reason: reasonUnknown, - } - } - changes.NextInitContainerToStart = next - } - } + hasInitialized := m.computeInitContainerActions(pod, podStatus, &changes) + if changes.KillPod || !hasInitialized { // Initialization failed or still in progress. Skip inspecting non-init // containers. 
return changes @@ -993,6 +980,9 @@ func (m *kubeGenericRuntimeManager) computePodActions(ctx context.Context, pod * if keepCount == 0 && len(changes.ContainersToStart) == 0 { changes.KillPod = true + // To prevent the restartable init containers from keeping the pod alive, + // we should not restart them. + changes.InitContainersToStart = nil } return changes @@ -1099,7 +1089,14 @@ func (m *kubeGenericRuntimeManager) SyncPod(ctx context.Context, pod *v1.Pod, po // Prepare resources allocated by the Dynamic Resource Allocation feature for the pod if utilfeature.DefaultFeatureGate.Enabled(features.DynamicResourceAllocation) { - if m.runtimeHelper.PrepareDynamicResources(pod) != nil { + if err := m.runtimeHelper.PrepareDynamicResources(pod); err != nil { + ref, referr := ref.GetReference(legacyscheme.Scheme, pod) + if referr != nil { + klog.ErrorS(referr, "Couldn't make a ref to pod", "pod", klog.KObj(pod)) + return + } + m.recorder.Eventf(ref, v1.EventTypeWarning, events.FailedPrepareDynamicResources, "Failed to prepare dynamic resources: %v", err) + klog.ErrorS(err, "Failed to prepare dynamic resources", "pod", klog.KObj(pod)) return } } @@ -1225,10 +1222,16 @@ func (m *kubeGenericRuntimeManager) SyncPod(ctx context.Context, pod *v1.Pod, po start(ctx, "ephemeral container", metrics.EphemeralContainer, ephemeralContainerStartSpec(&pod.Spec.EphemeralContainers[idx])) } - // Step 6: start the init container. - if container := podContainerChanges.NextInitContainerToStart; container != nil { + // Step 6: start init containers. + for _, idx := range podContainerChanges.InitContainersToStart { + container := &pod.Spec.InitContainers[idx] // Start the next init container.
if err := start(ctx, "init container", metrics.InitContainer, containerStartSpec(container)); err != nil { + if types.IsRestartableInitContainer(container) { + klog.V(4).InfoS("Failed to start the restartable init container for the pod, skipping", "initContainerName", container.Name, "pod", klog.KObj(pod)) + continue + } + klog.V(4).InfoS("Failed to initialize the pod, as the init container failed to start, aborting", "initContainerName", container.Name, "pod", klog.KObj(pod)) return } diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/kuberuntime/security_context.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/kuberuntime/security_context.go index 5e6f05b4e187..7db21ed74f0e 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/kuberuntime/security_context.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/kuberuntime/security_context.go @@ -34,12 +34,12 @@ func (m *kubeGenericRuntimeManager) determineEffectiveSecurityContext(pod *v1.Po ReadonlyPaths: securitycontext.ConvertToRuntimeReadonlyPaths(effectiveSc.ProcMount), } } + var err error - // TODO: Deprecated, remove after we switch to Seccomp field - // set SeccompProfilePath. - synthesized.SeccompProfilePath = m.getSeccompProfilePath(pod.Annotations, container.Name, pod.Spec.SecurityContext, container.SecurityContext, m.seccompDefault) - - synthesized.Seccomp = m.getSeccompProfile(pod.Annotations, container.Name, pod.Spec.SecurityContext, container.SecurityContext, m.seccompDefault) + synthesized.Seccomp, err = m.getSeccompProfile(pod.Annotations, container.Name, pod.Spec.SecurityContext, container.SecurityContext, m.seccompDefault) + if err != nil { + return nil, err + } // set ApparmorProfile. 
synthesized.ApparmorProfile = apparmor.GetProfileNameFromPodAnnotations(pod.Annotations, container.Name) diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/lifecycle/predicate.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/lifecycle/predicate.go index 96808cf71007..d20671749758 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/lifecycle/predicate.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/lifecycle/predicate.go @@ -21,9 +21,11 @@ import ( "runtime" v1 "k8s.io/api/core/v1" + utilfeature "k8s.io/apiserver/pkg/util/feature" "k8s.io/component-helpers/scheduling/corev1" "k8s.io/klog/v2" v1helper "k8s.io/kubernetes/pkg/apis/core/v1/helper" + "k8s.io/kubernetes/pkg/features" "k8s.io/kubernetes/pkg/kubelet/types" "k8s.io/kubernetes/pkg/scheduler" schedulerframework "k8s.io/kubernetes/pkg/scheduler/framework" @@ -72,6 +74,22 @@ func (w *predicateAdmitHandler) Admit(attrs *PodAdmitAttributes) PodAdmitResult pods := attrs.OtherPods nodeInfo := schedulerframework.NewNodeInfo(pods...) nodeInfo.SetNode(node) + + // TODO: Remove this after the SidecarContainers feature gate graduates to GA. 
+ if !utilfeature.DefaultFeatureGate.Enabled(features.SidecarContainers) { + for _, c := range admitPod.Spec.InitContainers { + if types.IsRestartableInitContainer(&c) { + message := fmt.Sprintf("Init container %q may not have a non-default restartPolicy", c.Name) + klog.InfoS("Failed to admit pod", "pod", klog.KObj(admitPod), "message", message) + return PodAdmitResult{ + Admit: false, + Reason: "InitContainerRestartPolicyForbidden", + Message: message, + } + } + } + } + // ensure the node has enough plugin resources for that required in pods if err = w.pluginResourceUpdateFunc(nodeInfo, attrs); err != nil { message := fmt.Sprintf("Update plugin resources failed due to %v, which is unexpected.", err) diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/metrics/collectors/resource_metrics.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/metrics/collectors/resource_metrics.go index ab6ae9340730..1b80b29c96a5 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/metrics/collectors/resource_metrics.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/metrics/collectors/resource_metrics.go @@ -41,6 +41,13 @@ var ( metrics.ALPHA, "") + nodeSwapUsageDesc = metrics.NewDesc("node_swap_usage_bytes", + "Current swap usage of the node in bytes. Reported only on non-windows systems", + nil, + nil, + metrics.ALPHA, + "") + containerCPUUsageDesc = metrics.NewDesc("container_cpu_usage_seconds_total", "Cumulative cpu time consumed by the container in core-seconds", []string{"container", "pod", "namespace"}, @@ -55,6 +62,13 @@ var ( metrics.ALPHA, "") + containerSwapUsageDesc = metrics.NewDesc("container_swap_usage_bytes", + "Current amount of the container swap usage in bytes. 
Reported only on non-windows systems", + []string{"container", "pod", "namespace"}, + nil, + metrics.ALPHA, + "") + podCPUUsageDesc = metrics.NewDesc("pod_cpu_usage_seconds_total", "Cumulative cpu time consumed by the pod in core-seconds", []string{"pod", "namespace"}, @@ -69,6 +83,13 @@ var ( metrics.ALPHA, "") + podSwapUsageDesc = metrics.NewDesc("pod_swap_usage_bytes", + "Current amount of the pod swap usage in bytes. Reported only on non-windows systems", + []string{"pod", "namespace"}, + nil, + metrics.ALPHA, + "") + resourceScrapeResultDesc = metrics.NewDesc("scrape_error", "1 if there was an error while getting container metrics, 0 otherwise", nil, @@ -104,11 +125,14 @@ var _ metrics.StableCollector = &resourceMetricsCollector{} func (rc *resourceMetricsCollector) DescribeWithStability(ch chan<- *metrics.Desc) { ch <- nodeCPUUsageDesc ch <- nodeMemoryUsageDesc + ch <- nodeSwapUsageDesc ch <- containerStartTimeDesc ch <- containerCPUUsageDesc ch <- containerMemoryUsageDesc + ch <- containerSwapUsageDesc ch <- podCPUUsageDesc ch <- podMemoryUsageDesc + ch <- podSwapUsageDesc ch <- resourceScrapeResultDesc } @@ -131,15 +155,18 @@ func (rc *resourceMetricsCollector) CollectWithStability(ch chan<- metrics.Metri rc.collectNodeCPUMetrics(ch, statsSummary.Node) rc.collectNodeMemoryMetrics(ch, statsSummary.Node) + rc.collectNodeSwapMetrics(ch, statsSummary.Node) for _, pod := range statsSummary.Pods { for _, container := range pod.Containers { rc.collectContainerStartTime(ch, pod, container) rc.collectContainerCPUMetrics(ch, pod, container) rc.collectContainerMemoryMetrics(ch, pod, container) + rc.collectContainerSwapMetrics(ch, pod, container) } rc.collectPodCPUMetrics(ch, pod) rc.collectPodMemoryMetrics(ch, pod) + rc.collectPodSwapMetrics(ch, pod) } } @@ -161,6 +188,15 @@ func (rc *resourceMetricsCollector) collectNodeMemoryMetrics(ch chan<- metrics.M metrics.NewLazyConstMetric(nodeMemoryUsageDesc, metrics.GaugeValue, float64(*s.Memory.WorkingSetBytes))) } +func 
(rc *resourceMetricsCollector) collectNodeSwapMetrics(ch chan<- metrics.Metric, s summary.NodeStats) { + if s.Swap == nil || s.Swap.SwapUsageBytes == nil { + return + } + + ch <- metrics.NewLazyMetricWithTimestamp(s.Memory.Time.Time, + metrics.NewLazyConstMetric(nodeSwapUsageDesc, metrics.GaugeValue, float64(*s.Swap.SwapUsageBytes))) +} + func (rc *resourceMetricsCollector) collectContainerStartTime(ch chan<- metrics.Metric, pod summary.PodStats, s summary.ContainerStats) { if s.StartTime.Unix() <= 0 { return @@ -190,6 +226,16 @@ func (rc *resourceMetricsCollector) collectContainerMemoryMetrics(ch chan<- metr float64(*s.Memory.WorkingSetBytes), s.Name, pod.PodRef.Name, pod.PodRef.Namespace)) } +func (rc *resourceMetricsCollector) collectContainerSwapMetrics(ch chan<- metrics.Metric, pod summary.PodStats, s summary.ContainerStats) { + if s.Swap == nil || s.Swap.SwapUsageBytes == nil { + return + } + + ch <- metrics.NewLazyMetricWithTimestamp(s.Swap.Time.Time, + metrics.NewLazyConstMetric(containerSwapUsageDesc, metrics.GaugeValue, + float64(*s.Swap.SwapUsageBytes), s.Name, pod.PodRef.Name, pod.PodRef.Namespace)) +} + func (rc *resourceMetricsCollector) collectPodCPUMetrics(ch chan<- metrics.Metric, pod summary.PodStats) { if pod.CPU == nil || pod.CPU.UsageCoreNanoSeconds == nil { return @@ -209,3 +255,13 @@ func (rc *resourceMetricsCollector) collectPodMemoryMetrics(ch chan<- metrics.Me metrics.NewLazyConstMetric(podMemoryUsageDesc, metrics.GaugeValue, float64(*pod.Memory.WorkingSetBytes), pod.PodRef.Name, pod.PodRef.Namespace)) } + +func (rc *resourceMetricsCollector) collectPodSwapMetrics(ch chan<- metrics.Metric, pod summary.PodStats) { + if pod.Swap == nil || pod.Swap.SwapUsageBytes == nil { + return + } + + ch <- metrics.NewLazyMetricWithTimestamp(pod.Swap.Time.Time, + metrics.NewLazyConstMetric(podSwapUsageDesc, metrics.GaugeValue, + float64(*pod.Swap.SwapUsageBytes), pod.PodRef.Name, pod.PodRef.Namespace)) +} diff --git 
a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/metrics/metrics.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/metrics/metrics.go index e0395d3292b0..0897b59eb219 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/metrics/metrics.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/metrics/metrics.go @@ -613,7 +613,7 @@ var ( &metrics.CounterOpts{ Subsystem: KubeletSubsystem, Name: StartedHostProcessContainersTotalKey, - Help: "Cumulative number of hostprocess containers started. This metric will only be collected on Windows and requires WindowsHostProcessContainers feature gate to be enabled.", + Help: "Cumulative number of hostprocess containers started. This metric will only be collected on Windows.", StabilityLevel: metrics.ALPHA, }, []string{"container_type"}, @@ -623,7 +623,7 @@ var ( &metrics.CounterOpts{ Subsystem: KubeletSubsystem, Name: StartedHostProcessContainersErrorsTotalKey, - Help: "Cumulative number of errors when starting hostprocess containers. This metric will only be collected on Windows and requires WindowsHostProcessContainers feature gate to be enabled.", + Help: "Cumulative number of errors when starting hostprocess containers. 
This metric will only be collected on Windows.", StabilityLevel: metrics.ALPHA, }, []string{"container_type", "code"}, @@ -776,19 +776,14 @@ func Register(collectors ...metrics.StableCollector) { legacyregistry.MustRegister(OrphanedRuntimePodTotal) legacyregistry.MustRegister(RestartedPodTotal) legacyregistry.MustRegister(ManagedEphemeralContainers) - if utilfeature.DefaultFeatureGate.Enabled(features.KubeletPodResources) { - legacyregistry.MustRegister(PodResourcesEndpointRequestsTotalCount) - - if utilfeature.DefaultFeatureGate.Enabled(features.KubeletPodResourcesGetAllocatable) { - legacyregistry.MustRegister(PodResourcesEndpointRequestsListCount) - legacyregistry.MustRegister(PodResourcesEndpointRequestsGetAllocatableCount) - legacyregistry.MustRegister(PodResourcesEndpointErrorsListCount) - legacyregistry.MustRegister(PodResourcesEndpointErrorsGetAllocatableCount) - } - if utilfeature.DefaultFeatureGate.Enabled(features.KubeletPodResourcesGet) { - legacyregistry.MustRegister(PodResourcesEndpointRequestsGetCount) - legacyregistry.MustRegister(PodResourcesEndpointErrorsGetCount) - } + legacyregistry.MustRegister(PodResourcesEndpointRequestsTotalCount) + legacyregistry.MustRegister(PodResourcesEndpointRequestsListCount) + legacyregistry.MustRegister(PodResourcesEndpointRequestsGetAllocatableCount) + legacyregistry.MustRegister(PodResourcesEndpointErrorsListCount) + legacyregistry.MustRegister(PodResourcesEndpointErrorsGetAllocatableCount) + if utilfeature.DefaultFeatureGate.Enabled(features.KubeletPodResourcesGet) { + legacyregistry.MustRegister(PodResourcesEndpointRequestsGetCount) + legacyregistry.MustRegister(PodResourcesEndpointErrorsGetCount) } legacyregistry.MustRegister(StartedPodsTotal) legacyregistry.MustRegister(StartedPodsErrorsTotal) diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/network/dns/dns.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/network/dns/dns.go index 0bb5ab1bb48a..1ffe9d20f8ca 100644 --- 
a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/network/dns/dns.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/network/dns/dns.go @@ -27,11 +27,9 @@ import ( v1 "k8s.io/api/core/v1" utilerrors "k8s.io/apimachinery/pkg/util/errors" utilvalidation "k8s.io/apimachinery/pkg/util/validation" - utilfeature "k8s.io/apiserver/pkg/util/feature" "k8s.io/client-go/tools/record" runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1" "k8s.io/kubernetes/pkg/apis/core/validation" - "k8s.io/kubernetes/pkg/features" kubecontainer "k8s.io/kubernetes/pkg/kubelet/container" "k8s.io/kubernetes/pkg/kubelet/util/format" @@ -103,10 +101,7 @@ func omitDuplicates(strs []string) []string { func (c *Configurer) formDNSSearchFitsLimits(composedSearch []string, pod *v1.Pod) []string { limitsExceeded := false - maxDNSSearchPaths, maxDNSSearchListChars := validation.MaxDNSSearchPathsLegacy, validation.MaxDNSSearchListCharsLegacy - if utilfeature.DefaultFeatureGate.Enabled(features.ExpandedDNSConfig) { - maxDNSSearchPaths, maxDNSSearchListChars = validation.MaxDNSSearchPathsExpanded, validation.MaxDNSSearchListCharsExpanded - } + maxDNSSearchPaths, maxDNSSearchListChars := validation.MaxDNSSearchPaths, validation.MaxDNSSearchListChars if len(composedSearch) > maxDNSSearchPaths { composedSearch = composedSearch[:maxDNSSearchPaths] @@ -195,10 +190,7 @@ func (c *Configurer) CheckLimitsForResolvConf() { return } - domainCountLimit, maxDNSSearchListChars := validation.MaxDNSSearchPathsLegacy, validation.MaxDNSSearchListCharsLegacy - if utilfeature.DefaultFeatureGate.Enabled(features.ExpandedDNSConfig) { - domainCountLimit, maxDNSSearchListChars = validation.MaxDNSSearchPathsExpanded, validation.MaxDNSSearchListCharsExpanded - } + domainCountLimit, maxDNSSearchListChars := validation.MaxDNSSearchPaths, validation.MaxDNSSearchListChars if c.ClusterDomain != "" { domainCountLimit -= 3 diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/nodestatus/setters.go 
b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/nodestatus/setters.go index e6460f14d9f1..bf3f6e05a29c 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/nodestatus/setters.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/nodestatus/setters.go @@ -109,7 +109,11 @@ func NodeAddress(nodeIPs []net.IP, // typically Kubelet.nodeIPs if node.ObjectMeta.Annotations == nil { node.ObjectMeta.Annotations = make(map[string]string) } - node.ObjectMeta.Annotations[cloudproviderapi.AnnotationAlphaProvidedIPAddr] = nodeIP.String() + annotation := nodeIP.String() + if secondaryNodeIPSpecified { + annotation += "," + secondaryNodeIP.String() + } + node.ObjectMeta.Annotations[cloudproviderapi.AnnotationAlphaProvidedIPAddr] = annotation } else if node.ObjectMeta.Annotations != nil { // Clean up stale annotations if no longer using a cloud provider or // no longer overriding node IP. diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/pleg/evented.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/pleg/evented.go index 1dd176489a5a..cbca33f394ca 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/pleg/evented.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/pleg/evented.go @@ -46,16 +46,16 @@ var ( // e.g. Streaming data issues from the runtime or the runtime does not implement the // container events stream. func isEventedPLEGInUse() bool { - eventedPLEGUsageMu.Lock() - defer eventedPLEGUsageMu.Unlock() + eventedPLEGUsageMu.RLock() + defer eventedPLEGUsageMu.RUnlock() return eventedPLEGUsage } // setEventedPLEGUsage should only be accessed from // Start/Stop of Evented PLEG. 
func setEventedPLEGUsage(enable bool) { - eventedPLEGUsageMu.RLock() - defer eventedPLEGUsageMu.RUnlock() + eventedPLEGUsageMu.Lock() + defer eventedPLEGUsageMu.Unlock() eventedPLEGUsage = enable } @@ -229,7 +229,7 @@ func (e *EventedPLEG) processCRIEvents(containerEventsResponseCh chan *runtimeap if klog.V(6).Enabled() { klog.ErrorS(err, "Evented PLEG: error generating pod status from the received event", "podUID", podID, "podStatus", status) } else { - klog.ErrorS(err, "Evented PLEG: error generating pod status from the received event", "podUID", podID, "podStatus", status) + klog.ErrorS(err, "Evented PLEG: error generating pod status from the received event", "podUID", podID) } } else { if klogV := klog.V(6); klogV.Enabled() { diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/pod/pod_manager.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/pod/pod_manager.go index 69457e6c983b..e3cc4f760804 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/pod/pod_manager.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/pod/pod_manager.go @@ -43,8 +43,6 @@ import ( // pod. When a static pod gets deleted, the associated orphaned mirror pod // will also be removed. type Manager interface { - // GetPods returns the regular pods bound to the kubelet and their spec. - GetPods() []*v1.Pod // GetPodByFullName returns the (non-mirror) pod that matches full name, as well as // whether the pod was found. GetPodByFullName(podFullName string) (*v1.Pod, bool) @@ -60,8 +58,18 @@ type Manager interface { // GetMirrorPodByPod returns the mirror pod for the given static pod and // whether it was known to the pod manager. GetMirrorPodByPod(*v1.Pod) (*v1.Pod, bool) - // GetPodsAndMirrorPods returns the both regular and mirror pods. - GetPodsAndMirrorPods() ([]*v1.Pod, []*v1.Pod) + // GetPodAndMirrorPod returns the complement for a pod - if a pod was provided + // and a mirror pod can be found, return it. 
If a mirror pod is provided and + // the pod can be found, return it and true for wasMirror. + GetPodAndMirrorPod(*v1.Pod) (pod, mirrorPod *v1.Pod, wasMirror bool) + + // GetPods returns the regular pods bound to the kubelet and their spec. + GetPods() []*v1.Pod + + // GetPodsAndMirrorPods returns the set of pods, the set of mirror pods, and + // the pod fullnames of any orphaned mirror pods. + GetPodsAndMirrorPods() (allPods []*v1.Pod, allMirrorPods []*v1.Pod, orphanedMirrorPodFullnames []string) + // SetPods replaces the internal pods with the new pods. // It is currently only used for testing. SetPods(pods []*v1.Pod) @@ -69,12 +77,11 @@ type Manager interface { AddPod(pod *v1.Pod) // UpdatePod updates the given pod in the manager. UpdatePod(pod *v1.Pod) - // DeletePod deletes the given pod from the manager. For mirror pods, + // RemovePod deletes the given pod from the manager. For mirror pods, // this means deleting the mappings related to mirror pods. For non- // mirror pods, this means deleting from indexes for all non-mirror pods. - DeletePod(pod *v1.Pod) - // GetOrphanedMirrorPodNames returns names of orphaned mirror pods - GetOrphanedMirrorPodNames() []string + RemovePod(pod *v1.Pod) + // TranslatePodUID returns the actual UID of a pod. If the UID belongs to // a mirror pod, returns the UID of its static pod. Otherwise, returns the // original UID. @@ -86,17 +93,12 @@ type Manager interface { // GetUIDTranslations returns the mappings of static pod UIDs to mirror pod // UIDs and mirror pod UIDs to static pod UIDs. GetUIDTranslations() (podToMirror map[kubetypes.ResolvedPodUID]kubetypes.MirrorPodUID, mirrorToPod map[kubetypes.MirrorPodUID]kubetypes.ResolvedPodUID) - // IsMirrorPodOf returns true if mirrorPod is a correct representation of - // pod; false otherwise. - IsMirrorPodOf(mirrorPod, pod *v1.Pod) bool - - MirrorClient } // basicManager is a functional Manager. 
// // All fields in basicManager are read-only and are updated calling SetPods, -// AddPod, UpdatePod, or DeletePod. +// AddPod, UpdatePod, or RemovePod. type basicManager struct { // Protects all internal maps. lock sync.RWMutex @@ -112,15 +114,11 @@ type basicManager struct { // Mirror pod UID to pod UID map. translationByUID map[kubetypes.MirrorPodUID]kubetypes.ResolvedPodUID - - // A mirror pod client to create/delete mirror pods. - MirrorClient } // NewBasicPodManager returns a functional Manager. -func NewBasicPodManager(client MirrorClient) Manager { +func NewBasicPodManager() Manager { pm := &basicManager{} - pm.MirrorClient = client pm.SetPods(nil) return pm } @@ -191,7 +189,7 @@ func (pm *basicManager) updatePodsInternal(pods ...*v1.Pod) { } } -func (pm *basicManager) DeletePod(pod *v1.Pod) { +func (pm *basicManager) RemovePod(pod *v1.Pod) { updateMetrics(pod, nil) pm.lock.Lock() defer pm.lock.Unlock() @@ -214,12 +212,18 @@ func (pm *basicManager) GetPods() []*v1.Pod { return podsMapToPods(pm.podByUID) } -func (pm *basicManager) GetPodsAndMirrorPods() ([]*v1.Pod, []*v1.Pod) { +func (pm *basicManager) GetPodsAndMirrorPods() (allPods []*v1.Pod, allMirrorPods []*v1.Pod, orphanedMirrorPodFullnames []string) { pm.lock.RLock() defer pm.lock.RUnlock() - pods := podsMapToPods(pm.podByUID) - mirrorPods := mirrorPodsMapToMirrorPods(pm.mirrorPodByUID) - return pods, mirrorPods + allPods = podsMapToPods(pm.podByUID) + allMirrorPods = mirrorPodsMapToMirrorPods(pm.mirrorPodByUID) + + for podFullName := range pm.mirrorPodByFullName { + if _, ok := pm.podByFullName[podFullName]; !ok { + orphanedMirrorPodFullnames = append(orphanedMirrorPodFullnames, podFullName) + } + } + return allPods, allMirrorPods, orphanedMirrorPodFullnames } func (pm *basicManager) GetPodByUID(uid types.UID) (*v1.Pod, bool) { @@ -280,19 +284,8 @@ func (pm *basicManager) GetUIDTranslations() (podToMirror map[kubetypes.Resolved return podToMirror, mirrorToPod } -func (pm *basicManager) 
GetOrphanedMirrorPodNames() []string { - pm.lock.RLock() - defer pm.lock.RUnlock() - var podFullNames []string - for podFullName := range pm.mirrorPodByFullName { - if _, ok := pm.podByFullName[podFullName]; !ok { - podFullNames = append(podFullNames, podFullName) - } - } - return podFullNames -} - -func (pm *basicManager) IsMirrorPodOf(mirrorPod, pod *v1.Pod) bool { +// IsMirrorPodOf returns true if pod and mirrorPod are associated with each other. +func IsMirrorPodOf(mirrorPod, pod *v1.Pod) bool { // Check name and namespace first. if pod.Name != mirrorPod.Name || pod.Namespace != mirrorPod.Namespace { return false @@ -333,3 +326,15 @@ func (pm *basicManager) GetPodByMirrorPod(mirrorPod *v1.Pod) (*v1.Pod, bool) { pod, ok := pm.podByFullName[kubecontainer.GetPodFullName(mirrorPod)] return pod, ok } + +func (pm *basicManager) GetPodAndMirrorPod(aPod *v1.Pod) (pod, mirrorPod *v1.Pod, wasMirror bool) { + pm.lock.RLock() + defer pm.lock.RUnlock() + + fullName := kubecontainer.GetPodFullName(aPod) + if kubetypes.IsMirrorPod(aPod) { + return pm.podByFullName[fullName], aPod, true + } + return aPod, pm.mirrorPodByFullName[fullName], false + +} diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/pod_workers.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/pod_workers.go index 8d72e1ad55ef..20e8b493a8f4 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/pod_workers.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/pod_workers.go @@ -775,16 +775,23 @@ func (p *podWorkers) UpdatePod(options UpdatePodOptions) { } // if this pod is being synced for the first time, we need to make sure it is an active pod if options.Pod != nil && (options.Pod.Status.Phase == v1.PodFailed || options.Pod.Status.Phase == v1.PodSucceeded) { - // check to see if the pod is not running and the pod is terminal. - // If this succeeds then record in the podWorker that it is terminated. 
+ // Check to see if the pod is not running and the pod is terminal; if this succeeds then record in the podWorker that it is terminated. + // This is needed because after a kubelet restart, we need to ensure terminal pods will NOT be considered active in Pod Admission. See http://issues.k8s.io/105523 + // However, `filterOutInactivePods`, considers pods that are actively terminating as active. As a result, `IsPodKnownTerminated()` needs to return true and thus `terminatedAt` needs to be set. if statusCache, err := p.podCache.Get(uid); err == nil { if isPodStatusCacheTerminal(statusCache) { + // At this point we know: + // (1) The pod is terminal based on the config source. + // (2) The pod is terminal based on the runtime cache. + // This implies that this pod had already completed `SyncTerminatingPod` sometime in the past. The pod is likely being synced for the first time due to a kubelet restart. + // These pods need to complete SyncTerminatedPod to ensure that all resources are cleaned and that the status manager makes the final status updates for the pod. + // As a result, set finished: false, to ensure a Terminated event will be sent and `SyncTerminatedPod` will run. status = &podSyncStatus{ terminatedAt: now, terminatingAt: now, syncedAt: now, startedTerminating: true, - finished: true, + finished: false, fullname: kubecontainer.BuildPodFullName(name, ns), } } @@ -1086,6 +1093,10 @@ func (p *podWorkers) cleanupUnstartedPod(pod *v1.Pod, status *podSyncStatus) { // or can be started, and updates the cached pod state so that downstream components can observe what the // pod worker goroutine is currently attempting to do. If ok is false, there is no available event. If any // of the boolean values is false, ensure the appropriate cleanup happens before returning. +// +// This method should ensure that either status.pendingUpdate is cleared and merged into status.activeUpdate, +// or when a pod cannot be started status.pendingUpdate remains the same. 
Pods that have not been started +// should never have an activeUpdate because that is exposed to downstream components on started pods. func (p *podWorkers) startPodSync(podUID types.UID) (ctx context.Context, update podWork, canStart, canEverStart, ok bool) { p.podLock.Lock() defer p.podLock.Unlock() @@ -1159,6 +1170,8 @@ func (p *podWorkers) startPodSync(podUID types.UID) (ctx context.Context, update klog.V(4).InfoS("Pod cannot start ever", "pod", klog.KObj(update.Options.Pod), "podUID", podUID, "updateType", update.WorkType) return ctx, update, canStart, canEverStart, true case !canStart: + // this is the only path we don't start the pod, so we need to put the change back in pendingUpdate + status.pendingUpdate = &update.Options status.working = false klog.V(4).InfoS("Pod cannot start yet", "pod", klog.KObj(update.Options.Pod), "podUID", podUID) return ctx, update, canStart, canEverStart, true @@ -1168,6 +1181,12 @@ func (p *podWorkers) startPodSync(podUID types.UID) (ctx context.Context, update status.startedAt = p.clock.Now() status.mergeLastUpdate(update.Options) + // If we are admitting the pod and it is new, record the count of containers + // TODO: We should probably move this into syncPod and add an execution count + // to the syncPod arguments, and this should be recorded on the first sync. + // Leaving it here complicates a particularly important loop. + metrics.ContainersPerPodCount.Observe(float64(len(update.Options.Pod.Spec.Containers))) + return ctx, update, true, true, true } @@ -1519,7 +1538,7 @@ func (p *podWorkers) SyncKnownPods(desiredPods []*v1.Pod) map[types.UID]PodWorke p.podsSynced = true for uid, status := range p.podSyncStatuses { // We retain the worker history of any pod that is still desired according to - // its UID. However, there are ]two scenarios during a sync that result in us + // its UID. However, there are two scenarios during a sync that result in us // needing to purge the history: // // 1. 
The pod is no longer desired (the local version is orphaned) @@ -1545,9 +1564,17 @@ func (p *podWorkers) SyncKnownPods(desiredPods []*v1.Pod) map[types.UID]PodWorke State: status.WorkType(), Orphan: orphan, } - if status.activeUpdate != nil && status.activeUpdate.Pod != nil { - sync.HasConfig = true - sync.Static = kubetypes.IsStaticPod(status.activeUpdate.Pod) + switch { + case status.activeUpdate != nil: + if status.activeUpdate.Pod != nil { + sync.HasConfig = true + sync.Static = kubetypes.IsStaticPod(status.activeUpdate.Pod) + } + case status.pendingUpdate != nil: + if status.pendingUpdate.Pod != nil { + sync.HasConfig = true + sync.Static = kubetypes.IsStaticPod(status.pendingUpdate.Pod) + } } workers[uid] = sync } diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/preemption/preemption.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/preemption/preemption.go index 5f0fb5e03c00..e4d0cbd931b1 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/preemption/preemption.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/preemption/preemption.go @@ -21,10 +21,13 @@ import ( "math" v1 "k8s.io/api/core/v1" + utilfeature "k8s.io/apiserver/pkg/util/feature" "k8s.io/client-go/tools/record" "k8s.io/klog/v2" + podutil "k8s.io/kubernetes/pkg/api/v1/pod" "k8s.io/kubernetes/pkg/api/v1/resource" v1qos "k8s.io/kubernetes/pkg/apis/core/v1/helper/qos" + "k8s.io/kubernetes/pkg/features" "k8s.io/kubernetes/pkg/kubelet/events" "k8s.io/kubernetes/pkg/kubelet/eviction" "k8s.io/kubernetes/pkg/kubelet/lifecycle" @@ -103,6 +106,14 @@ func (c *CriticalPodAdmissionHandler) evictPodsToFreeRequests(admitPod *v1.Pod, status.Phase = v1.PodFailed status.Reason = events.PreemptContainer status.Message = message + if utilfeature.DefaultFeatureGate.Enabled(features.PodDisruptionConditions) { + podutil.UpdatePodCondition(status, &v1.PodCondition{ + Type: v1.DisruptionTarget, + Status: v1.ConditionTrue, + Reason: 
v1.PodReasonTerminationByKubelet, + Message: "Pod was preempted by Kubelet to accommodate a critical pod.", + }) + } }) if err != nil { klog.ErrorS(err, "Failed to evict pod", "pod", klog.KObj(pod)) diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/prober/prober_manager.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/prober/prober_manager.go index 94ba0022dc12..1a92699f0758 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/prober/prober_manager.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/prober/prober_manager.go @@ -29,6 +29,8 @@ import ( kubecontainer "k8s.io/kubernetes/pkg/kubelet/container" "k8s.io/kubernetes/pkg/kubelet/prober/results" "k8s.io/kubernetes/pkg/kubelet/status" + kubetypes "k8s.io/kubernetes/pkg/kubelet/types" + kubeutil "k8s.io/kubernetes/pkg/kubelet/util" "k8s.io/utils/clock" ) @@ -84,7 +86,7 @@ type Manager interface { // UpdatePodStatus modifies the given PodStatus with the appropriate Ready state for each // container based on container running status, cached probe results and worker states. - UpdatePodStatus(types.UID, *v1.PodStatus) + UpdatePodStatus(*v1.Pod, *v1.PodStatus) } type manager struct { @@ -166,12 +168,22 @@ func (t probeType) String() string { } } +func getRestartableInitContainers(pod *v1.Pod) []v1.Container { + var restartableInitContainers []v1.Container + for _, c := range pod.Spec.InitContainers { + if kubetypes.IsRestartableInitContainer(&c) { + restartableInitContainers = append(restartableInitContainers, c) + } + } + return restartableInitContainers +} + func (m *manager) AddPod(pod *v1.Pod) { m.workerLock.Lock() defer m.workerLock.Unlock() key := probeKey{podUID: pod.UID} - for _, c := range pod.Spec.Containers { + for _, c := range append(pod.Spec.Containers, getRestartableInitContainers(pod)...) 
{ key.containerName = c.Name if c.StartupProbe != nil { @@ -233,7 +245,7 @@ func (m *manager) RemovePod(pod *v1.Pod) { defer m.workerLock.RUnlock() key := probeKey{podUID: pod.UID} - for _, c := range pod.Spec.Containers { + for _, c := range append(pod.Spec.Containers, getRestartableInitContainers(pod)...) { key.containerName = c.Name for _, probeType := range [...]probeType{readiness, liveness, startup} { key.probeType = probeType @@ -255,48 +267,92 @@ func (m *manager) CleanupPods(desiredPods map[types.UID]sets.Empty) { } } -func (m *manager) UpdatePodStatus(podUID types.UID, podStatus *v1.PodStatus) { +func (m *manager) isContainerStarted(pod *v1.Pod, containerStatus *v1.ContainerStatus) bool { + if containerStatus.State.Running == nil { + return false + } + + if result, ok := m.startupManager.Get(kubecontainer.ParseContainerID(containerStatus.ContainerID)); ok { + return result == results.Success + } + + // if there is a startup probe which hasn't run yet, the container is not + // started. + if _, exists := m.getWorker(pod.UID, containerStatus.Name, startup); exists { + return false + } + + // there is no startup probe, so the container is started. + return true +} + +func (m *manager) UpdatePodStatus(pod *v1.Pod, podStatus *v1.PodStatus) { for i, c := range podStatus.ContainerStatuses { - var started bool + started := m.isContainerStarted(pod, &podStatus.ContainerStatuses[i]) + podStatus.ContainerStatuses[i].Started = &started + + if !started { + continue + } + + var ready bool if c.State.Running == nil { - started = false - } else if result, ok := m.startupManager.Get(kubecontainer.ParseContainerID(c.ContainerID)); ok { - started = result == results.Success + ready = false + } else if result, ok := m.readinessManager.Get(kubecontainer.ParseContainerID(c.ContainerID)); ok && result == results.Success { + ready = true } else { // The check whether there is a probe which hasn't run yet. 
- _, exists := m.getWorker(podUID, c.Name, startup) - started = !exists - } - podStatus.ContainerStatuses[i].Started = &started - - if started { - var ready bool - if c.State.Running == nil { - ready = false - } else if result, ok := m.readinessManager.Get(kubecontainer.ParseContainerID(c.ContainerID)); ok && result == results.Success { - ready = true - } else { - // The check whether there is a probe which hasn't run yet. - w, exists := m.getWorker(podUID, c.Name, readiness) - ready = !exists // no readinessProbe -> always ready - if exists { - // Trigger an immediate run of the readinessProbe to update ready state - select { - case w.manualTriggerCh <- struct{}{}: - default: // Non-blocking. - klog.InfoS("Failed to trigger a manual run", "probe", w.probeType.String()) - } + w, exists := m.getWorker(pod.UID, c.Name, readiness) + ready = !exists // no readinessProbe -> always ready + if exists { + // Trigger an immediate run of the readinessProbe to update ready state + select { + case w.manualTriggerCh <- struct{}{}: + default: // Non-blocking. + klog.InfoS("Failed to trigger a manual run", "probe", w.probeType.String()) } } - podStatus.ContainerStatuses[i].Ready = ready } + podStatus.ContainerStatuses[i].Ready = ready } - // init containers are ready if they have exited with success or if a readiness probe has - // succeeded. 
+ for i, c := range podStatus.InitContainerStatuses { + started := m.isContainerStarted(pod, &podStatus.InitContainerStatuses[i]) + podStatus.InitContainerStatuses[i].Started = &started + + initContainer, ok := kubeutil.GetContainerByIndex(pod.Spec.InitContainers, podStatus.InitContainerStatuses, i) + if !ok { + klog.V(4).InfoS("Mismatch between pod spec and status, likely programmer error", "pod", klog.KObj(pod), "containerName", c.Name) + continue + } + if !kubetypes.IsRestartableInitContainer(&initContainer) { + if c.State.Terminated != nil && c.State.Terminated.ExitCode == 0 { + podStatus.InitContainerStatuses[i].Ready = true + } + continue + } + + if !started { + continue + } + var ready bool - if c.State.Terminated != nil && c.State.Terminated.ExitCode == 0 { + if c.State.Running == nil { + ready = false + } else if result, ok := m.readinessManager.Get(kubecontainer.ParseContainerID(c.ContainerID)); ok && result == results.Success { ready = true + } else { + // The check whether there is a probe which hasn't run yet. + w, exists := m.getWorker(pod.UID, c.Name, readiness) + ready = !exists // no readinessProbe -> always ready + if exists { + // Trigger an immediate run of the readinessProbe to update ready state + select { + case w.manualTriggerCh <- struct{}{}: + default: // Non-blocking. 
+ klog.InfoS("Failed to trigger a manual run", "probe", w.probeType.String()) + } + } } podStatus.InitContainerStatuses[i].Ready = ready } diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/prober/testing/fake_manager.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/prober/testing/fake_manager.go index 500cfcd99627..d3b31c9e7930 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/prober/testing/fake_manager.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/prober/testing/fake_manager.go @@ -43,7 +43,7 @@ func (FakeManager) CleanupPods(_ map[types.UID]sets.Empty) {} func (FakeManager) Start() {} // UpdatePodStatus simulates updating the Pod Status. -func (FakeManager) UpdatePodStatus(_ types.UID, podStatus *v1.PodStatus) { +func (FakeManager) UpdatePodStatus(_ *v1.Pod, podStatus *v1.PodStatus) { for i := range podStatus.ContainerStatuses { podStatus.ContainerStatuses[i].Ready = true } diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/prober/worker.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/prober/worker.go index b9ec0053de68..ea2d72433878 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/prober/worker.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/prober/worker.go @@ -18,9 +18,7 @@ package prober import ( "context" - "fmt" "math/rand" - "strings" "time" v1 "k8s.io/api/core/v1" @@ -28,7 +26,6 @@ import ( "k8s.io/component-base/metrics" "k8s.io/klog/v2" podutil "k8s.io/kubernetes/pkg/api/v1/pod" - "k8s.io/kubernetes/pkg/apis/apps" kubecontainer "k8s.io/kubernetes/pkg/kubelet/container" "k8s.io/kubernetes/pkg/kubelet/prober/results" ) @@ -115,12 +112,10 @@ func newWorker( w.initialValue = results.Unknown } - podName := getPodLabelName(w.pod) - basicMetricLabels := metrics.Labels{ "probe_type": w.probeType.String(), "container": w.container.Name, - "pod": podName, + "pod": w.pod.Name, "namespace": w.pod.Namespace, "pod_uid": 
string(w.pod.UID), } @@ -128,7 +123,7 @@ func newWorker( proberDurationLabels := metrics.Labels{ "probe_type": w.probeType.String(), "container": w.container.Name, - "pod": podName, + "pod": w.pod.Name, "namespace": w.pod.Namespace, } @@ -221,10 +216,13 @@ func (w *worker) doProbe(ctx context.Context) (keepGoing bool) { c, ok := podutil.GetContainerStatus(status.ContainerStatuses, w.container.Name) if !ok || len(c.ContainerID) == 0 { - // Either the container has not been created yet, or it was deleted. - klog.V(3).InfoS("Probe target container not found", - "pod", klog.KObj(w.pod), "containerName", w.container.Name) - return true // Wait for more information. + c, ok = podutil.GetContainerStatus(status.InitContainerStatuses, w.container.Name) + if !ok || len(c.ContainerID) == 0 { + // Either the container has not been created yet, or it was deleted. + klog.V(3).InfoS("Probe target container not found", + "pod", klog.KObj(w.pod), "containerName", w.container.Name) + return true // Wait for more information. 
+ } } if w.containerID.String() != c.ContainerID { @@ -337,15 +335,3 @@ func deepCopyPrometheusLabels(m metrics.Labels) metrics.Labels { } return ret } - -func getPodLabelName(pod *v1.Pod) string { - podName := pod.Name - if pod.GenerateName != "" { - podNameSlice := strings.Split(pod.Name, "-") - podName = strings.Join(podNameSlice[:len(podNameSlice)-1], "-") - if label, ok := pod.GetLabels()[apps.DefaultDeploymentUniqueLabelKey]; ok { - podName = strings.ReplaceAll(podName, fmt.Sprintf("-%s", label), "") - } - } - return podName -} diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/runonce.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/runonce.go index 8b63d368d075..b11442ae902c 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/runonce.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/runonce.go @@ -129,7 +129,7 @@ func (kl *Kubelet) runPod(ctx context.Context, pod *v1.Pod, retryDelay time.Dura klog.InfoS("Pod's containers not running: syncing", "pod", klog.KObj(pod)) klog.InfoS("Creating a mirror pod for static pod", "pod", klog.KObj(pod)) - if err := kl.podManager.CreateMirrorPod(pod); err != nil { + if err := kl.mirrorPodClient.CreateMirrorPod(pod); err != nil { klog.ErrorS(err, "Failed creating a mirror pod", "pod", klog.KObj(pod)) } mirrorPod, _ := kl.podManager.GetMirrorPodByPod(pod) diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/secret/fake_manager.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/secret/fake_manager.go index 4f4bf9fe4cdf..0753d3dbb0ad 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/secret/fake_manager.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/secret/fake_manager.go @@ -16,11 +16,16 @@ limitations under the License. package secret -import v1 "k8s.io/api/core/v1" +import ( + "fmt" + + v1 "k8s.io/api/core/v1" +) // fakeManager implements Manager interface for testing purposes. 
// simple operations to apiserver. type fakeManager struct { + secrets []*v1.Secret } // NewFakeManager creates empty/fake secret manager @@ -28,9 +33,27 @@ func NewFakeManager() Manager { return &fakeManager{} } -// GetSecret returns a nil secret for testing +// NewFakeManagerWithSecrets creates a fake secret manager with the provided secrets +func NewFakeManagerWithSecrets(secrets []*v1.Secret) Manager { + return &fakeManager{ + secrets: secrets, + } +} + +// GetSecret returns the requested secret if it was provided during the manager initialization; otherwise, it returns an error. +// If the manager was initialized without any secrets, it returns a nil secret. func (s *fakeManager) GetSecret(namespace, name string) (*v1.Secret, error) { - return nil, nil + if s.secrets == nil { + return nil, nil + } + + for _, secret := range s.secrets { + if secret.Name == name { + return secret, nil + } + } + + return nil, fmt.Errorf("secret %s not found", name) } // RegisterPod implements the RegisterPod method for testing purposes.
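An aside for reviewers of the fake secret manager hunk above: its lookup semantics can be sketched as a standalone program. The `Secret` type below is a simplified stand-in for `v1.Secret` and is not part of the patch; only the `fakeManager` logic mirrors the vendored change.

```go
// Sketch of the test-double pattern used by the new fake secret manager:
// a nil backing slice means "always succeed with a nil secret", while a
// non-nil slice means "look up by name or fail".
package main

import "fmt"

// Secret is a simplified stand-in for v1.Secret.
type Secret struct {
	Namespace, Name string
}

type fakeManager struct {
	secrets []*Secret
}

// GetSecret mirrors the vendored fake: with no seeded secrets it returns
// (nil, nil); with seeded secrets it returns the match or an error.
func (m *fakeManager) GetSecret(namespace, name string) (*Secret, error) {
	if m.secrets == nil {
		return nil, nil
	}
	for _, s := range m.secrets {
		if s.Name == name {
			return s, nil
		}
	}
	return nil, fmt.Errorf("secret %s not found", name)
}

func main() {
	empty := &fakeManager{}
	s, err := empty.GetSecret("ns", "token")
	fmt.Println(s == nil, err == nil) // empty manager: nil secret, no error

	seeded := &fakeManager{secrets: []*Secret{{Namespace: "ns", Name: "token"}}}
	s, err = seeded.GetSecret("ns", "token")
	fmt.Println(s != nil, err == nil) // seeded manager: secret found

	_, err = seeded.GetSecret("ns", "missing")
	fmt.Println(err != nil) // seeded manager: a miss is an error
}
```

Note the deliberate nil-versus-empty-slice distinction: it lets tests choose between the old permissive behavior (`NewFakeManager`) and strict lookups (`NewFakeManagerWithSecrets`) without changing the interface.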
diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/server/server.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/server/server.go index 0c033c9d69b5..8e21d3fa50ab 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/server/server.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/server/server.go @@ -69,6 +69,9 @@ import ( runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1" podresourcesapi "k8s.io/kubelet/pkg/apis/podresources/v1" podresourcesapiv1alpha1 "k8s.io/kubelet/pkg/apis/podresources/v1alpha1" + "k8s.io/kubelet/pkg/cri/streaming" + "k8s.io/kubelet/pkg/cri/streaming/portforward" + remotecommandserver "k8s.io/kubelet/pkg/cri/streaming/remotecommand" "k8s.io/kubernetes/pkg/api/legacyscheme" api "k8s.io/kubernetes/pkg/apis/core" "k8s.io/kubernetes/pkg/apis/core/v1/validation" @@ -77,9 +80,6 @@ import ( "k8s.io/kubernetes/pkg/kubelet/apis/podresources" podresourcesgrpc "k8s.io/kubernetes/pkg/kubelet/apis/podresources/grpc" kubecontainer "k8s.io/kubernetes/pkg/kubelet/container" - "k8s.io/kubernetes/pkg/kubelet/cri/streaming" - "k8s.io/kubernetes/pkg/kubelet/cri/streaming/portforward" - remotecommandserver "k8s.io/kubernetes/pkg/kubelet/cri/streaming/remotecommand" "k8s.io/kubernetes/pkg/kubelet/prober" servermetrics "k8s.io/kubernetes/pkg/kubelet/server/metrics" "k8s.io/kubernetes/pkg/kubelet/server/stats" @@ -218,18 +218,19 @@ type PodResourcesProviders struct { } // ListenAndServePodResources initializes a gRPC server to serve the PodResources service -func ListenAndServePodResources(socket string, providers podresources.PodResourcesProviders) { +func ListenAndServePodResources(endpoint string, providers podresources.PodResourcesProviders) { server := grpc.NewServer(podresourcesgrpc.WithRateLimiter(podresourcesgrpc.DefaultQPS, podresourcesgrpc.DefaultBurstTokens)) podresourcesapiv1alpha1.RegisterPodResourcesListerServer(server, podresources.NewV1alpha1PodResourcesServer(providers)) 
podresourcesapi.RegisterPodResourcesListerServer(server, podresources.NewV1PodResourcesServer(providers)) - l, err := util.CreateListener(socket) + l, err := util.CreateListener(endpoint) if err != nil { klog.ErrorS(err, "Failed to create listener for podResources endpoint") os.Exit(1) } + klog.InfoS("Starting to serve the podresources API", "endpoint", endpoint) if err := server.Serve(l); err != nil { klog.ErrorS(err, "Failed to serve") os.Exit(1) diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/server/stats/summary.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/server/stats/summary.go index fb0719b8ab04..cb130e6007ce 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/server/stats/summary.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/server/stats/summary.go @@ -105,6 +105,7 @@ func (sp *summaryProviderImpl) Get(ctx context.Context, updateStats bool) (*stat NodeName: node.Name, CPU: rootStats.CPU, Memory: rootStats.Memory, + Swap: rootStats.Swap, Network: networkStats, StartTime: sp.systemBootTime, Fs: rootFsStats, @@ -141,6 +142,7 @@ func (sp *summaryProviderImpl) GetCPUAndMemoryStats(ctx context.Context) (*stats NodeName: node.Name, CPU: rootStats.CPU, Memory: rootStats.Memory, + Swap: rootStats.Swap, StartTime: rootStats.StartTime, SystemContainers: sp.GetSystemContainersCPUAndMemoryStats(nodeConfig, podStats, false), } diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/stats/cadvisor_stats_provider.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/stats/cadvisor_stats_provider.go index be428561d1e1..cf900550532d 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/stats/cadvisor_stats_provider.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/stats/cadvisor_stats_provider.go @@ -152,6 +152,7 @@ func (p *cadvisorStatsProvider) ListPodStats(_ context.Context) ([]statsapi.PodS cpu, memory := cadvisorInfoToCPUandMemoryStats(podInfo) 
podStats.CPU = cpu podStats.Memory = memory + podStats.Swap = cadvisorInfoToSwapStats(podInfo) podStats.ProcessStats = cadvisorInfoToProcessStats(podInfo) } @@ -227,6 +228,7 @@ func (p *cadvisorStatsProvider) ListPodCPUAndMemoryStats(_ context.Context) ([]s cpu, memory := cadvisorInfoToCPUandMemoryStats(podInfo) podStats.CPU = cpu podStats.Memory = memory + podStats.Swap = cadvisorInfoToSwapStats(podInfo) } result = append(result, *podStats) } diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/stats/cri_stats_provider.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/stats/cri_stats_provider.go index d1e1d90b275c..ad4c3e3b7e9b 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/stats/cri_stats_provider.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/stats/cri_stats_provider.go @@ -206,6 +206,7 @@ func (p *criStatsProvider) listPodStatsPartiallyFromCRI(ctx context.Context, upd cs := p.makeContainerStats(stats, container, rootFsInfo, fsIDtoInfo, podSandbox.GetMetadata(), updateCPUNanoCoreUsage) p.addPodNetworkStats(ps, podSandboxID, caInfos, cs, containerNetworkStats[podSandboxID]) p.addPodCPUMemoryStats(ps, types.UID(podSandbox.Metadata.Uid), allInfos, cs) + p.addSwapStats(ps, types.UID(podSandbox.Metadata.Uid), allInfos, cs) p.addProcessStats(ps, types.UID(podSandbox.Metadata.Uid), allInfos, cs) // If cadvisor stats is available for the container, use it to populate @@ -247,15 +248,7 @@ func (p *criStatsProvider) listPodStatsStrictlyFromCRI(ctx context.Context, upda continue } ps := buildPodStats(podSandbox) - for _, criContainerStat := range criSandboxStat.Linux.Containers { - container, found := containerMap[criContainerStat.Attributes.Id] - if !found { - continue - } - // Fill available stats for full set of required pod stats - cs := p.makeContainerStats(criContainerStat, container, rootFsInfo, fsIDtoInfo, podSandbox.GetMetadata(), updateCPUNanoCoreUsage) - ps.Containers = append(ps.Containers, *cs) 
- } + p.addCRIPodContainerStats(criSandboxStat, ps, fsIDtoInfo, containerMap, podSandbox, rootFsInfo, updateCPUNanoCoreUsage) addCRIPodNetworkStats(ps, criSandboxStat) addCRIPodCPUStats(ps, criSandboxStat) addCRIPodMemoryStats(ps, criSandboxStat) @@ -548,6 +541,31 @@ func (p *criStatsProvider) addPodCPUMemoryStats( } } +func (p *criStatsProvider) addSwapStats( + ps *statsapi.PodStats, + podUID types.UID, + allInfos map[string]cadvisorapiv2.ContainerInfo, + cs *statsapi.ContainerStats, +) { + // Try to get the pod's swap stats from cadvisor first. + podCgroupInfo := getCadvisorPodInfoFromPodUID(podUID, allInfos) + if podCgroupInfo != nil { + ps.Swap = cadvisorInfoToSwapStats(podCgroupInfo) + return + } + + // Otherwise, sum the pod's swap stats from its containers' stats. + if cs.Swap != nil { + if ps.Swap == nil { + ps.Swap = &statsapi.SwapStats{Time: cs.Swap.Time} + } + swapAvailableBytes := getUint64Value(cs.Swap.SwapAvailableBytes) + getUint64Value(ps.Swap.SwapAvailableBytes) + swapUsageBytes := getUint64Value(cs.Swap.SwapUsageBytes) + getUint64Value(ps.Swap.SwapUsageBytes) + ps.Swap.SwapAvailableBytes = &swapAvailableBytes + ps.Swap.SwapUsageBytes = &swapUsageBytes + } +} + func (p *criStatsProvider) addProcessStats( ps *statsapi.PodStats, podUID types.UID, @@ -577,6 +595,7 @@ func (p *criStatsProvider) makeContainerStats( CPU: &statsapi.CPUStats{}, Memory: &statsapi.MemoryStats{}, Rootfs: &statsapi.FsStats{}, + Swap: &statsapi.SwapStats{}, // UserDefinedMetrics is not supported by CRI.
} if stats.Cpu != nil { @@ -607,6 +626,19 @@ func (p *criStatsProvider) makeContainerStats( result.Memory.Time = metav1.NewTime(time.Unix(0, time.Now().UnixNano())) result.Memory.WorkingSetBytes = uint64Ptr(0) } + if stats.Swap != nil { + result.Swap.Time = metav1.NewTime(time.Unix(0, stats.Swap.Timestamp)) + if stats.Swap.SwapUsageBytes != nil { + result.Swap.SwapUsageBytes = &stats.Swap.SwapUsageBytes.Value + } + if stats.Swap.SwapAvailableBytes != nil { + result.Swap.SwapAvailableBytes = &stats.Swap.SwapAvailableBytes.Value + } + } else { + result.Swap.Time = metav1.NewTime(time.Unix(0, time.Now().UnixNano())) + result.Swap.SwapUsageBytes = uint64Ptr(0) + result.Swap.SwapAvailableBytes = uint64Ptr(0) + } if stats.WritableLayer != nil { result.Rootfs.Time = metav1.NewTime(time.Unix(0, stats.WritableLayer.Timestamp)) if stats.WritableLayer.UsedBytes != nil { @@ -914,22 +946,6 @@ func extractIDFromCgroupPath(cgroupPath string) string { return id } -func addCRIPodNetworkStats(ps *statsapi.PodStats, criPodStat *runtimeapi.PodSandboxStats) { - if criPodStat == nil || criPodStat.Linux == nil || criPodStat.Linux.Network == nil { - return - } - criNetwork := criPodStat.Linux.Network - iStats := statsapi.NetworkStats{ - Time: metav1.NewTime(time.Unix(0, criNetwork.Timestamp)), - InterfaceStats: criInterfaceToSummary(criNetwork.DefaultInterface), - Interfaces: make([]statsapi.InterfaceStats, 0, len(criNetwork.Interfaces)), - } - for _, iface := range criNetwork.Interfaces { - iStats.Interfaces = append(iStats.Interfaces, criInterfaceToSummary(iface)) - } - ps.Network = &iStats -} - func criInterfaceToSummary(criIface *runtimeapi.NetworkInterfaceUsage) statsapi.InterfaceStats { return statsapi.InterfaceStats{ Name: criIface.Name, @@ -940,43 +956,6 @@ func criInterfaceToSummary(criIface *runtimeapi.NetworkInterfaceUsage) statsapi. 
} } -func addCRIPodCPUStats(ps *statsapi.PodStats, criPodStat *runtimeapi.PodSandboxStats) { - if criPodStat == nil || criPodStat.Linux == nil || criPodStat.Linux.Cpu == nil { - return - } - criCPU := criPodStat.Linux.Cpu - ps.CPU = &statsapi.CPUStats{ - Time: metav1.NewTime(time.Unix(0, criCPU.Timestamp)), - UsageNanoCores: valueOfUInt64Value(criCPU.UsageNanoCores), - UsageCoreNanoSeconds: valueOfUInt64Value(criCPU.UsageCoreNanoSeconds), - } -} - -func addCRIPodMemoryStats(ps *statsapi.PodStats, criPodStat *runtimeapi.PodSandboxStats) { - if criPodStat == nil || criPodStat.Linux == nil || criPodStat.Linux.Memory == nil { - return - } - criMemory := criPodStat.Linux.Memory - ps.Memory = &statsapi.MemoryStats{ - Time: metav1.NewTime(time.Unix(0, criMemory.Timestamp)), - AvailableBytes: valueOfUInt64Value(criMemory.AvailableBytes), - UsageBytes: valueOfUInt64Value(criMemory.UsageBytes), - WorkingSetBytes: valueOfUInt64Value(criMemory.WorkingSetBytes), - RSSBytes: valueOfUInt64Value(criMemory.RssBytes), - PageFaults: valueOfUInt64Value(criMemory.PageFaults), - MajorPageFaults: valueOfUInt64Value(criMemory.MajorPageFaults), - } -} - -func addCRIPodProcessStats(ps *statsapi.PodStats, criPodStat *runtimeapi.PodSandboxStats) { - if criPodStat == nil || criPodStat.Linux == nil || criPodStat.Linux.Process == nil { - return - } - ps.ProcessStats = &statsapi.ProcessStats{ - ProcessCount: valueOfUInt64Value(criPodStat.Linux.Process.ProcessCount), - } -} - func valueOfUInt64Value(value *runtimeapi.UInt64Value) *uint64 { if value == nil { return nil diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/stats/cri_stats_provider_linux.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/stats/cri_stats_provider_linux.go new file mode 100644 index 000000000000..2bfa0fd48f38 --- /dev/null +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/stats/cri_stats_provider_linux.go @@ -0,0 +1,105 @@ +//go:build linux +// +build linux + +/* +Copyright 2023 The 
Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package stats + +import ( + "time" + + cadvisorapiv2 "github.com/google/cadvisor/info/v2" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1" + statsapi "k8s.io/kubelet/pkg/apis/stats/v1alpha1" +) + +func (p *criStatsProvider) addCRIPodContainerStats(criSandboxStat *runtimeapi.PodSandboxStats, + ps *statsapi.PodStats, fsIDtoInfo map[runtimeapi.FilesystemIdentifier]*cadvisorapiv2.FsInfo, + containerMap map[string]*runtimeapi.Container, + podSandbox *runtimeapi.PodSandbox, + rootFsInfo *cadvisorapiv2.FsInfo, updateCPUNanoCoreUsage bool) { + for _, criContainerStat := range criSandboxStat.Linux.Containers { + container, found := containerMap[criContainerStat.Attributes.Id] + if !found { + continue + } + // Fill available stats for full set of required pod stats + cs := p.makeContainerStats(criContainerStat, container, rootFsInfo, fsIDtoInfo, podSandbox.GetMetadata(), + updateCPUNanoCoreUsage) + ps.Containers = append(ps.Containers, *cs) + } +} + +func addCRIPodNetworkStats(ps *statsapi.PodStats, criPodStat *runtimeapi.PodSandboxStats) { + if criPodStat == nil || criPodStat.Linux == nil || criPodStat.Linux.Network == nil { + return + } + criNetwork := criPodStat.Linux.Network + iStats := statsapi.NetworkStats{ + Time: metav1.NewTime(time.Unix(0, criNetwork.Timestamp)), + InterfaceStats: criInterfaceToSummary(criNetwork.DefaultInterface), + Interfaces: 
make([]statsapi.InterfaceStats, 0, len(criNetwork.Interfaces)), + } + for _, iface := range criNetwork.Interfaces { + iStats.Interfaces = append(iStats.Interfaces, criInterfaceToSummary(iface)) + } + ps.Network = &iStats +} + +func addCRIPodMemoryStats(ps *statsapi.PodStats, criPodStat *runtimeapi.PodSandboxStats) { + if criPodStat == nil || criPodStat.Linux == nil || criPodStat.Linux.Memory == nil { + return + } + criMemory := criPodStat.Linux.Memory + ps.Memory = &statsapi.MemoryStats{ + Time: metav1.NewTime(time.Unix(0, criMemory.Timestamp)), + AvailableBytes: valueOfUInt64Value(criMemory.AvailableBytes), + UsageBytes: valueOfUInt64Value(criMemory.UsageBytes), + WorkingSetBytes: valueOfUInt64Value(criMemory.WorkingSetBytes), + RSSBytes: valueOfUInt64Value(criMemory.RssBytes), + PageFaults: valueOfUInt64Value(criMemory.PageFaults), + MajorPageFaults: valueOfUInt64Value(criMemory.MajorPageFaults), + } +} + +func addCRIPodCPUStats(ps *statsapi.PodStats, criPodStat *runtimeapi.PodSandboxStats) { + if criPodStat == nil || criPodStat.Linux == nil || criPodStat.Linux.Cpu == nil { + return + } + criCPU := criPodStat.Linux.Cpu + ps.CPU = &statsapi.CPUStats{ + Time: metav1.NewTime(time.Unix(0, criCPU.Timestamp)), + UsageNanoCores: valueOfUInt64Value(criCPU.UsageNanoCores), + UsageCoreNanoSeconds: valueOfUInt64Value(criCPU.UsageCoreNanoSeconds), + } +} + +func addCRIPodProcessStats(ps *statsapi.PodStats, criPodStat *runtimeapi.PodSandboxStats) { + if criPodStat == nil || criPodStat.Linux == nil || criPodStat.Linux.Process == nil { + return + } + ps.ProcessStats = &statsapi.ProcessStats{ + ProcessCount: valueOfUInt64Value(criPodStat.Linux.Process.ProcessCount), + } +} + +// listContainerNetworkStats returns the network stats of all the running containers. +// It should return (nil, nil) for platforms other than Windows. 
+func (p *criStatsProvider) listContainerNetworkStats() (map[string]*statsapi.NetworkStats, error) { + return nil, nil +} diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/stats/cri_stats_provider_others.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/stats/cri_stats_provider_others.go index 80762538f4ca..fb18e89f16cb 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/stats/cri_stats_provider_others.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/stats/cri_stats_provider_others.go @@ -1,5 +1,5 @@ -//go:build !windows -// +build !windows +//go:build !linux && !windows +// +build !linux,!windows /* Copyright 2019 The Kubernetes Authors. @@ -20,6 +20,8 @@ limitations under the License. package stats import ( + cadvisorapiv2 "github.com/google/cadvisor/info/v2" + runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1" statsapi "k8s.io/kubelet/pkg/apis/stats/v1alpha1" ) @@ -28,3 +30,22 @@ import ( func (p *criStatsProvider) listContainerNetworkStats() (map[string]*statsapi.NetworkStats, error) { return nil, nil } + +func (p *criStatsProvider) addCRIPodContainerStats(criSandboxStat *runtimeapi.PodSandboxStats, + ps *statsapi.PodStats, fsIDtoInfo map[runtimeapi.FilesystemIdentifier]*cadvisorapiv2.FsInfo, + containerMap map[string]*runtimeapi.Container, + podSandbox *runtimeapi.PodSandbox, + rootFsInfo *cadvisorapiv2.FsInfo, updateCPUNanoCoreUsage bool) { +} + +func addCRIPodNetworkStats(ps *statsapi.PodStats, criPodStat *runtimeapi.PodSandboxStats) { +} + +func addCRIPodMemoryStats(ps *statsapi.PodStats, criPodStat *runtimeapi.PodSandboxStats) { +} + +func addCRIPodCPUStats(ps *statsapi.PodStats, criPodStat *runtimeapi.PodSandboxStats) { +} + +func addCRIPodProcessStats(ps *statsapi.PodStats, criPodStat *runtimeapi.PodSandboxStats) { +} diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/stats/cri_stats_provider_windows.go 
b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/stats/cri_stats_provider_windows.go index a32ffe9e9810..e64da34c4dbd 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/stats/cri_stats_provider_windows.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/stats/cri_stats_provider_windows.go @@ -22,10 +22,12 @@ package stats import ( "time" - "k8s.io/klog/v2" - "github.com/Microsoft/hcsshim" + cadvisorapiv2 "github.com/google/cadvisor/info/v2" metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/apimachinery/pkg/types" + runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1" + "k8s.io/klog/v2" statsapi "k8s.io/kubelet/pkg/apis/stats/v1alpha1" ) @@ -79,6 +81,101 @@ func (p *criStatsProvider) listContainerNetworkStats() (map[string]*statsapi.Net return networkStats, nil } +func (p *criStatsProvider) addCRIPodContainerStats(criSandboxStat *runtimeapi.PodSandboxStats, + ps *statsapi.PodStats, fsIDtoInfo map[runtimeapi.FilesystemIdentifier]*cadvisorapiv2.FsInfo, + containerMap map[string]*runtimeapi.Container, + podSandbox *runtimeapi.PodSandbox, + rootFsInfo *cadvisorapiv2.FsInfo, + updateCPUNanoCoreUsage bool) { + for _, criContainerStat := range criSandboxStat.Windows.Containers { + container, found := containerMap[criContainerStat.Attributes.Id] + if !found { + continue + } + // Fill available stats for full set of required pod stats + cs := p.makeWinContainerStats(criContainerStat, container, rootFsInfo, fsIDtoInfo, podSandbox.GetMetadata()) + ps.Containers = append(ps.Containers, *cs) + } +} + +func (p *criStatsProvider) makeWinContainerStats( + stats *runtimeapi.WindowsContainerStats, + container *runtimeapi.Container, + rootFsInfo *cadvisorapiv2.FsInfo, + fsIDtoInfo map[runtimeapi.FilesystemIdentifier]*cadvisorapiv2.FsInfo, + meta *runtimeapi.PodSandboxMetadata) *statsapi.ContainerStats { + result := &statsapi.ContainerStats{ + Name: stats.Attributes.Metadata.Name, + // The StartTime in the summary API is the container creation 
time. + StartTime: metav1.NewTime(time.Unix(0, container.CreatedAt)), + CPU: &statsapi.CPUStats{}, + Memory: &statsapi.MemoryStats{}, + Rootfs: &statsapi.FsStats{}, + // UserDefinedMetrics is not supported by CRI. + } + if stats.Cpu != nil { + result.CPU.Time = metav1.NewTime(time.Unix(0, stats.Cpu.Timestamp)) + if stats.Cpu.UsageCoreNanoSeconds != nil { + result.CPU.UsageCoreNanoSeconds = &stats.Cpu.UsageCoreNanoSeconds.Value + } + if stats.Cpu.UsageNanoCores != nil { + result.CPU.UsageNanoCores = &stats.Cpu.UsageNanoCores.Value + } + } else { + result.CPU.Time = metav1.NewTime(time.Unix(0, time.Now().UnixNano())) + result.CPU.UsageCoreNanoSeconds = uint64Ptr(0) + result.CPU.UsageNanoCores = uint64Ptr(0) + } + if stats.Memory != nil { + result.Memory.Time = metav1.NewTime(time.Unix(0, stats.Memory.Timestamp)) + if stats.Memory.WorkingSetBytes != nil { + result.Memory.WorkingSetBytes = &stats.Memory.WorkingSetBytes.Value + } + if stats.Memory.AvailableBytes != nil { + result.Memory.AvailableBytes = &stats.Memory.AvailableBytes.Value + } + if stats.Memory.PageFaults != nil { + result.Memory.PageFaults = &stats.Memory.PageFaults.Value + } + } else { + result.Memory.Time = metav1.NewTime(time.Unix(0, time.Now().UnixNano())) + result.Memory.WorkingSetBytes = uint64Ptr(0) + result.Memory.AvailableBytes = uint64Ptr(0) + result.Memory.PageFaults = uint64Ptr(0) + } + if stats.WritableLayer != nil { + result.Rootfs.Time = metav1.NewTime(time.Unix(0, stats.WritableLayer.Timestamp)) + if stats.WritableLayer.UsedBytes != nil { + result.Rootfs.UsedBytes = &stats.WritableLayer.UsedBytes.Value + } + } + fsID := stats.GetWritableLayer().GetFsId() + if fsID != nil { + imageFsInfo, found := fsIDtoInfo[*fsID] + if !found { + imageFsInfo = p.getFsInfo(fsID) + fsIDtoInfo[*fsID] = imageFsInfo + } + if imageFsInfo != nil { + // The image filesystem id is unknown to the local node or there's + // an error on retrieving the stats.
In these cases, we omit those stats + // and return the best-effort partial result. See + // https://github.com/kubernetes/heapster/issues/1793. + result.Rootfs.AvailableBytes = &imageFsInfo.Available + result.Rootfs.CapacityBytes = &imageFsInfo.Capacity + } + } + // NOTE: This doesn't support the old pod log path, `/var/log/pods/UID`. For containers + // using old log path, empty log stats are returned. This is fine, because we don't + // officially support in-place upgrade anyway. + var err error + result.Logs, err = p.hostStatsProvider.getPodContainerLogStats(meta.GetNamespace(), meta.GetName(), types.UID(meta.GetUid()), container.GetMetadata().GetName(), rootFsInfo) + if err != nil { + klog.ErrorS(err, "Unable to fetch container log stats", "containerName", container.GetMetadata().GetName()) + } + return result +} + // hcsStatsToNetworkStats converts hcsshim.Statistics.Network to statsapi.NetworkStats func hcsStatsToNetworkStats(timestamp time.Time, hcsStats *hcsshim.HNSEndpointStats, endpointName string) *statsapi.NetworkStats { result := &statsapi.NetworkStats{ @@ -104,6 +201,64 @@ func hcsStatToInterfaceStat(hcsStats *hcsshim.HNSEndpointStats, endpointName str return iStat } +func addCRIPodCPUStats(ps *statsapi.PodStats, criPodStat *runtimeapi.PodSandboxStats) { + if criPodStat == nil || criPodStat.Windows == nil || criPodStat.Windows.Cpu == nil { + return + } + criCPU := criPodStat.Windows.Cpu + ps.CPU = &statsapi.CPUStats{ + Time: metav1.NewTime(time.Unix(0, criCPU.Timestamp)), + UsageNanoCores: valueOfUInt64Value(criCPU.UsageNanoCores), + UsageCoreNanoSeconds: valueOfUInt64Value(criCPU.UsageCoreNanoSeconds), + } +} + +func addCRIPodMemoryStats(ps *statsapi.PodStats, criPodStat *runtimeapi.PodSandboxStats) { + if criPodStat == nil || criPodStat.Windows == nil || criPodStat.Windows.Memory == nil { + return + } + criMemory := criPodStat.Windows.Memory + ps.Memory = &statsapi.MemoryStats{ + Time: metav1.NewTime(time.Unix(0, criMemory.Timestamp)), + 
AvailableBytes: valueOfUInt64Value(criMemory.AvailableBytes), + WorkingSetBytes: valueOfUInt64Value(criMemory.WorkingSetBytes), + PageFaults: valueOfUInt64Value(criMemory.PageFaults), + } +} + +func addCRIPodProcessStats(ps *statsapi.PodStats, criPodStat *runtimeapi.PodSandboxStats) { + if criPodStat == nil || criPodStat.Windows == nil || criPodStat.Windows.Process == nil { + return + } + ps.ProcessStats = &statsapi.ProcessStats{ + ProcessCount: valueOfUInt64Value(criPodStat.Windows.Process.ProcessCount), + } +} + +func addCRIPodNetworkStats(ps *statsapi.PodStats, criPodStat *runtimeapi.PodSandboxStats) { + if criPodStat == nil || criPodStat.Windows == nil || criPodStat.Windows.Network == nil { + return + } + criNetwork := criPodStat.Windows.Network + iStats := statsapi.NetworkStats{ + Time: metav1.NewTime(time.Unix(0, criNetwork.Timestamp)), + InterfaceStats: criInterfaceToWinSummary(criNetwork.DefaultInterface), + Interfaces: make([]statsapi.InterfaceStats, 0, len(criNetwork.Interfaces)), + } + for _, iface := range criNetwork.Interfaces { + iStats.Interfaces = append(iStats.Interfaces, criInterfaceToWinSummary(iface)) + } + ps.Network = &iStats +} + +func criInterfaceToWinSummary(criIface *runtimeapi.WindowsNetworkInterfaceUsage) statsapi.InterfaceStats { + return statsapi.InterfaceStats{ + Name: criIface.Name, + RxBytes: valueOfUInt64Value(criIface.RxBytes), + TxBytes: valueOfUInt64Value(criIface.TxBytes), + } +} + // newNetworkStatsProvider uses the real windows hcsshim if not provided otherwise if the interface is provided // by the cristatsprovider in testing scenarios it uses that one func newNetworkStatsProvider(p *criStatsProvider) windowsNetworkStatsProvider { diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/stats/helper.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/stats/helper.go index dd968b06eea4..c6ca3a064e05 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/stats/helper.go +++ 
b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/stats/helper.go @@ -95,6 +95,7 @@ func cadvisorInfoToContainerStats(name string, info *cadvisorapiv2.ContainerInfo cpu, memory := cadvisorInfoToCPUandMemoryStats(info) result.CPU = cpu result.Memory = memory + result.Swap = cadvisorInfoToSwapStats(info) // NOTE: if they can be found, log stats will be overwritten // by the caller, as it knows more information about the pod, @@ -257,6 +258,29 @@ func cadvisorInfoToUserDefinedMetrics(info *cadvisorapiv2.ContainerInfo) []stats return udm } +func cadvisorInfoToSwapStats(info *cadvisorapiv2.ContainerInfo) *statsapi.SwapStats { + cstat, found := latestContainerStats(info) + if !found { + return nil + } + + var swapStats *statsapi.SwapStats + + if info.Spec.HasMemory && cstat.Memory != nil { + swapStats = &statsapi.SwapStats{ + Time: metav1.NewTime(cstat.Timestamp), + SwapUsageBytes: &cstat.Memory.Swap, + } + + if !isMemoryUnlimited(info.Spec.Memory.SwapLimit) { + swapAvailableBytes := info.Spec.Memory.SwapLimit - cstat.Memory.Swap + swapStats.SwapAvailableBytes = &swapAvailableBytes + } + } + + return swapStats +} + // latestContainerStats returns the latest container stats from cadvisor, or nil if none exist func latestContainerStats(info *cadvisorapiv2.ContainerInfo) (*cadvisorapiv2.ContainerStats, bool) { stats := info.Stats diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/stats/provider.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/stats/provider.go index 1241111236a6..09f82c609f51 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/stats/provider.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/stats/provider.go @@ -27,18 +27,24 @@ import ( statsapi "k8s.io/kubelet/pkg/apis/stats/v1alpha1" "k8s.io/kubernetes/pkg/kubelet/cadvisor" kubecontainer "k8s.io/kubernetes/pkg/kubelet/container" - kubepod "k8s.io/kubernetes/pkg/kubelet/pod" "k8s.io/kubernetes/pkg/kubelet/server/stats" 
"k8s.io/kubernetes/pkg/kubelet/stats/pidlimit" "k8s.io/kubernetes/pkg/kubelet/status" + kubetypes "k8s.io/kubernetes/pkg/kubelet/types" ) +// PodManager is the subset of methods the manager needs to observe the actual state of the kubelet. +// See pkg/k8s.io/kubernetes/pkg/kubelet/pod.Manager for method godoc. +type PodManager interface { + TranslatePodUID(uid types.UID) kubetypes.ResolvedPodUID +} + // NewCRIStatsProvider returns a Provider that provides the node stats // from cAdvisor and the container stats from CRI. func NewCRIStatsProvider( cadvisor cadvisor.Interface, resourceAnalyzer stats.ResourceAnalyzer, - podManager kubepod.Manager, + podManager PodManager, runtimeCache kubecontainer.RuntimeCache, runtimeService internalapi.RuntimeService, imageService internalapi.ImageManagerService, @@ -54,7 +60,7 @@ func NewCRIStatsProvider( func NewCadvisorStatsProvider( cadvisor cadvisor.Interface, resourceAnalyzer stats.ResourceAnalyzer, - podManager kubepod.Manager, + podManager PodManager, runtimeCache kubecontainer.RuntimeCache, imageService kubecontainer.ImageService, statusProvider status.PodStatusProvider, @@ -67,7 +73,7 @@ func NewCadvisorStatsProvider( // cAdvisor and the container stats using the containerStatsProvider. func newStatsProvider( cadvisor cadvisor.Interface, - podManager kubepod.Manager, + podManager PodManager, runtimeCache kubecontainer.RuntimeCache, containerStatsProvider containerStatsProvider, ) *Provider { @@ -82,7 +88,7 @@ func newStatsProvider( // Provider provides the stats of the node and the pod-managed containers. 
type Provider struct { cadvisor cadvisor.Interface - podManager kubepod.Manager + podManager PodManager runtimeCache kubecontainer.RuntimeCache containerStatsProvider } diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/status/generate.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/status/generate.go index 9f0a40f03cd7..c6707345b671 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/status/generate.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/status/generate.go @@ -55,6 +55,21 @@ func GenerateContainersReadyCondition(spec *v1.PodSpec, containerStatuses []v1.C } unknownContainers := []string{} unreadyContainers := []string{} + + for _, container := range spec.InitContainers { + if !kubetypes.IsRestartableInitContainer(&container) { + continue + } + + if containerStatus, ok := podutil.GetContainerStatus(containerStatuses, container.Name); ok { + if !containerStatus.Ready { + unreadyContainers = append(unreadyContainers, container.Name) + } + } else { + unknownContainers = append(unknownContainers, container.Name) + } + } + for _, container := range spec.Containers { if containerStatus, ok := podutil.GetContainerStatus(containerStatuses, container.Name); ok { if !containerStatus.Ready { @@ -143,6 +158,19 @@ func GeneratePodReadyCondition(spec *v1.PodSpec, conditions []v1.PodCondition, c } } +func isInitContainerInitialized(initContainer *v1.Container, containerStatus *v1.ContainerStatus) bool { + if kubetypes.IsRestartableInitContainer(initContainer) { + if containerStatus.Started == nil || !*containerStatus.Started { + return false + } + } else { // regular init container + if !containerStatus.Ready { + return false + } + } + return true +} + // GeneratePodInitializedCondition returns initialized condition if all init containers in a pod are ready, else it // returns an uninitialized condition. 
func GeneratePodInitializedCondition(spec *v1.PodSpec, containerStatuses []v1.ContainerStatus, podPhase v1.PodPhase) v1.PodCondition { @@ -154,15 +182,17 @@ func GeneratePodInitializedCondition(spec *v1.PodSpec, containerStatuses []v1.Co Reason: UnknownContainerStatuses, } } + unknownContainers := []string{} - unreadyContainers := []string{} + incompleteContainers := []string{} for _, container := range spec.InitContainers { - if containerStatus, ok := podutil.GetContainerStatus(containerStatuses, container.Name); ok { - if !containerStatus.Ready { - unreadyContainers = append(unreadyContainers, container.Name) - } - } else { + containerStatus, ok := podutil.GetContainerStatus(containerStatuses, container.Name) + if !ok { unknownContainers = append(unknownContainers, container.Name) + continue + } + if !isInitContainerInitialized(&container, &containerStatus) { + incompleteContainers = append(incompleteContainers, container.Name) } } @@ -175,12 +205,23 @@ func GeneratePodInitializedCondition(spec *v1.PodSpec, containerStatuses []v1.Co } } - unreadyMessages := []string{} + // If there is any regular container that has started, then the pod has + // been initialized before. + // This is needed to handle the case where the pod has been initialized but + // the restartable init containers are restarting. 
+ if kubecontainer.HasAnyRegularContainerStarted(spec, containerStatuses) { + return v1.PodCondition{ + Type: v1.PodInitialized, + Status: v1.ConditionTrue, + } + } + + unreadyMessages := make([]string, 0, len(unknownContainers)+len(incompleteContainers)) if len(unknownContainers) > 0 { unreadyMessages = append(unreadyMessages, fmt.Sprintf("containers with unknown status: %s", unknownContainers)) } - if len(unreadyContainers) > 0 { - unreadyMessages = append(unreadyMessages, fmt.Sprintf("containers with incomplete status: %s", unreadyContainers)) + if len(incompleteContainers) > 0 { + unreadyMessages = append(unreadyMessages, fmt.Sprintf("containers with incomplete status: %s", incompleteContainers)) } unreadyMessage := strings.Join(unreadyMessages, ", ") if unreadyMessage != "" { @@ -198,7 +239,7 @@ func GeneratePodInitializedCondition(spec *v1.PodSpec, containerStatuses []v1.Co } } -func GeneratePodHasNetworkCondition(pod *v1.Pod, podStatus *kubecontainer.PodStatus) v1.PodCondition { +func GeneratePodReadyToStartContainersCondition(pod *v1.Pod, podStatus *kubecontainer.PodStatus) v1.PodCondition { newSandboxNeeded, _, _ := runtimeutil.PodSandboxChanged(pod, podStatus) // if a new sandbox does not need to be created for a pod, it indicates that // a sandbox for the pod with networking configured already exists. @@ -206,12 +247,12 @@ func GeneratePodHasNetworkCondition(pod *v1.Pod, podStatus *kubecontainer.PodSta // fresh sandbox and configure networking for the sandbox. 
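The readiness and initialization hunks above special-case restartable init containers (sidecars): a sidecar counts as initialized once it has Started, while a regular init container must run to completion and be Ready. A simplified sketch of that rule, using trimmed stand-ins for the `v1.Container`/`v1.ContainerStatus` types (the struct shapes here are illustrative, not the vendored API types):

```go
package main

import "fmt"

// restartPolicyAlways stands in for v1.ContainerRestartPolicyAlways.
const restartPolicyAlways = "Always"

// container and containerStatus are trimmed stand-ins for the v1 API types.
type container struct {
	name          string
	restartPolicy *string // nil for regular init containers
}

type containerStatus struct {
	ready   bool
	started *bool
}

func isRestartableInitContainer(c *container) bool {
	return c.restartPolicy != nil && *c.restartPolicy == restartPolicyAlways
}

// isInitContainerInitialized mirrors the helper in the hunk above: a
// restartable init container (sidecar) is initialized once it has Started,
// while a regular init container must be Ready (it ran to completion).
func isInitContainerInitialized(c *container, st *containerStatus) bool {
	if isRestartableInitContainer(c) {
		return st.started != nil && *st.started
	}
	return st.ready
}

func main() {
	always := restartPolicyAlways
	started := true
	sidecar := &container{name: "log-shipper", restartPolicy: &always}
	regular := &container{name: "migrate-db"}

	// A running sidecar is initialized; an unfinished regular init container is not.
	fmt.Println(isInitContainerInitialized(sidecar, &containerStatus{started: &started})) // true
	fmt.Println(isInitContainerInitialized(regular, &containerStatus{ready: false}))      // false
}
```

This is also why `GeneratePodInitializedCondition` above switches from "unready" to "incomplete" wording: a sidecar that is mid-restart is incomplete rather than failed.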
if !newSandboxNeeded { return v1.PodCondition{ - Type: kubetypes.PodHasNetwork, + Type: kubetypes.PodReadyToStartContainers, Status: v1.ConditionTrue, } } return v1.PodCondition{ - Type: kubetypes.PodHasNetwork, + Type: kubetypes.PodReadyToStartContainers, Status: v1.ConditionFalse, } } diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/status/status_manager.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/status/status_manager.go index 728b438a465f..e43c58744811 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/status/status_manager.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/status/status_manager.go @@ -25,6 +25,7 @@ import ( "sync" "time" + "github.com/google/go-cmp/cmp" clientset "k8s.io/client-go/kubernetes" v1 "k8s.io/api/core/v1" @@ -32,7 +33,6 @@ import ( "k8s.io/apimachinery/pkg/api/errors" metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" "k8s.io/apimachinery/pkg/types" - "k8s.io/apimachinery/pkg/util/diff" "k8s.io/apimachinery/pkg/util/wait" utilfeature "k8s.io/apiserver/pkg/util/feature" "k8s.io/klog/v2" @@ -40,9 +40,9 @@ import ( "k8s.io/kubernetes/pkg/features" kubecontainer "k8s.io/kubernetes/pkg/kubelet/container" "k8s.io/kubernetes/pkg/kubelet/metrics" - kubepod "k8s.io/kubernetes/pkg/kubelet/pod" "k8s.io/kubernetes/pkg/kubelet/status/state" kubetypes "k8s.io/kubernetes/pkg/kubelet/types" + kubeutil "k8s.io/kubernetes/pkg/kubelet/util" statusutil "k8s.io/kubernetes/pkg/util/pod" ) @@ -70,7 +70,7 @@ type versionedPodStatus struct { // All methods are thread-safe. type manager struct { kubeClient clientset.Interface - podManager kubepod.Manager + podManager PodManager // Map from pod UID to sync status of the corresponding pod. podStatuses map[types.UID]versionedPodStatus podStatusesLock sync.RWMutex @@ -87,8 +87,18 @@ type manager struct { stateFileDirectory string } -// PodStatusProvider knows how to provide status for a pod. 
It's intended to be used by other components -// that need to introspect status. +// PodManager is the subset of methods the manager needs to observe the actual state of the kubelet. +// See pkg/k8s.io/kubernetes/pkg/kubelet/pod.Manager for method godoc. +type PodManager interface { + GetPodByUID(types.UID) (*v1.Pod, bool) + GetMirrorPodByPod(*v1.Pod) (*v1.Pod, bool) + TranslatePodUID(uid types.UID) kubetypes.ResolvedPodUID + GetUIDTranslations() (podToMirror map[kubetypes.ResolvedPodUID]kubetypes.MirrorPodUID, mirrorToPod map[kubetypes.MirrorPodUID]kubetypes.ResolvedPodUID) +} + +// PodStatusProvider knows how to provide status for a pod. It is intended to be used by other components +// that need to introspect the authoritative status of a pod. The PodStatusProvider represents the actual +// status of a running pod as the kubelet sees it. type PodStatusProvider interface { // GetPodStatus returns the cached status for the provided pod UID, as well as whether it // was a cache hit. @@ -149,7 +159,7 @@ type Manager interface { const syncPeriod = 10 * time.Second // NewManager returns a functional Manager. 
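The hunks above (in both the stats provider and the status manager) replace the concrete `kubepod.Manager` dependency with a locally declared `PodManager` interface listing only the methods each consumer actually calls. A minimal sketch of that interface-narrowing pattern; every name below is an illustrative stand-in, not the kubelet's real types:

```go
package main

import "fmt"

// resolvedPodUID stands in for kubetypes.ResolvedPodUID.
type resolvedPodUID string

// PodManager is the narrow, consumer-side interface: the stats provider only
// needs TranslatePodUID, so that is all it declares.
type PodManager interface {
	TranslatePodUID(uid string) resolvedPodUID
}

// provider depends on the interface, not on the kubelet's concrete pod manager.
type provider struct {
	podManager PodManager
}

func (p *provider) lookup(uid string) resolvedPodUID {
	return p.podManager.TranslatePodUID(uid)
}

// fakePodManager satisfies PodManager trivially, which is the payoff of the
// narrowing: tests no longer need the full kubepod.Manager.
type fakePodManager struct{}

func (fakePodManager) TranslatePodUID(uid string) resolvedPodUID {
	return resolvedPodUID(uid)
}

func main() {
	p := &provider{podManager: fakePodManager{}}
	fmt.Println(p.lookup("1234")) // 1234
}
```

Declaring the interface at the point of use, rather than exporting one from the pod package, is the idiomatic Go "accept interfaces, return structs" convention.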
-func NewManager(kubeClient clientset.Interface, podManager kubepod.Manager, podDeletionSafety PodDeletionSafetyProvider, podStartupLatencyHelper PodStartupLatencyStateHelper, stateFileDirectory string) Manager { +func NewManager(kubeClient clientset.Interface, podManager PodManager, podDeletionSafety PodDeletionSafetyProvider, podStartupLatencyHelper PodStartupLatencyStateHelper, stateFileDirectory string) Manager { return &manager{ kubeClient: kubeClient, podManager: podManager, @@ -339,8 +349,9 @@ func (m *manager) SetContainerReadiness(podUID types.UID, containerID kubecontai status.Conditions = append(status.Conditions, condition) } } - updateConditionFunc(v1.PodReady, GeneratePodReadyCondition(&pod.Spec, status.Conditions, status.ContainerStatuses, status.Phase)) - updateConditionFunc(v1.ContainersReady, GenerateContainersReadyCondition(&pod.Spec, status.ContainerStatuses, status.Phase)) + allContainerStatuses := append(status.InitContainerStatuses, status.ContainerStatuses...) + updateConditionFunc(v1.PodReady, GeneratePodReadyCondition(&pod.Spec, status.Conditions, allContainerStatuses, status.Phase)) + updateConditionFunc(v1.ContainersReady, GenerateContainersReadyCondition(&pod.Spec, allContainerStatuses, status.Phase)) m.updateStatusInternal(pod, status, false, false) } @@ -495,12 +506,25 @@ func hasPodInitialized(pod *v1.Pod) bool { } // if the last init container has ever completed with a zero exit code, the pod is initialized if l := len(pod.Status.InitContainerStatuses); l > 0 { - container := pod.Status.InitContainerStatuses[l-1] - if state := container.LastTerminationState; state.Terminated != nil && state.Terminated.ExitCode == 0 { - return true + container, ok := kubeutil.GetContainerByIndex(pod.Spec.InitContainers, pod.Status.InitContainerStatuses, l-1) + if !ok { + klog.V(4).InfoS("Mismatch between pod spec and status, likely programmer error", "pod", klog.KObj(pod), "containerName", container.Name) + return false } - if state := 
container.State; state.Terminated != nil && state.Terminated.ExitCode == 0 { - return true + + containerStatus := pod.Status.InitContainerStatuses[l-1] + if kubetypes.IsRestartableInitContainer(&container) { + if containerStatus.State.Running != nil && + containerStatus.Started != nil && *containerStatus.Started { + return true + } + } else { // regular init container + if state := containerStatus.LastTerminationState; state.Terminated != nil && state.Terminated.ExitCode == 0 { + return true + } + if state := containerStatus.State; state.Terminated != nil && state.Terminated.ExitCode == 0 { + return true + } } } // otherwise the pod has no record of being initialized @@ -526,26 +550,50 @@ func initializedContainers(containers []v1.ContainerStatus) []v1.ContainerStatus // checkContainerStateTransition ensures that no container is trying to transition // from a terminated to non-terminated state, which is illegal and indicates a // logical error in the kubelet. -func checkContainerStateTransition(oldStatuses, newStatuses []v1.ContainerStatus, restartPolicy v1.RestartPolicy) error { +func checkContainerStateTransition(oldStatuses, newStatuses *v1.PodStatus, podSpec *v1.PodSpec) error { // If we should always restart, containers are allowed to leave the terminated state - if restartPolicy == v1.RestartPolicyAlways { + if podSpec.RestartPolicy == v1.RestartPolicyAlways { return nil } - for _, oldStatus := range oldStatuses { + for _, oldStatus := range oldStatuses.ContainerStatuses { // Skip any container that wasn't terminated if oldStatus.State.Terminated == nil { continue } // Skip any container that failed but is allowed to restart - if oldStatus.State.Terminated.ExitCode != 0 && restartPolicy == v1.RestartPolicyOnFailure { + if oldStatus.State.Terminated.ExitCode != 0 && podSpec.RestartPolicy == v1.RestartPolicyOnFailure { continue } - for _, newStatus := range newStatuses { + for _, newStatus := range newStatuses.ContainerStatuses { if oldStatus.Name == 
newStatus.Name && newStatus.State.Terminated == nil { return fmt.Errorf("terminated container %v attempted illegal transition to non-terminated state", newStatus.Name) } } } + + for i, oldStatus := range oldStatuses.InitContainerStatuses { + initContainer, ok := kubeutil.GetContainerByIndex(podSpec.InitContainers, oldStatuses.InitContainerStatuses, i) + if !ok { + return fmt.Errorf("found mismatch between pod spec and status, container: %v", oldStatus.Name) + } + // Skip any restartable init container as it always is allowed to restart + if kubetypes.IsRestartableInitContainer(&initContainer) { + continue + } + // Skip any container that wasn't terminated + if oldStatus.State.Terminated == nil { + continue + } + // Skip any container that failed but is allowed to restart + if oldStatus.State.Terminated.ExitCode != 0 && podSpec.RestartPolicy == v1.RestartPolicyOnFailure { + continue + } + for _, newStatus := range newStatuses.InitContainerStatuses { + if oldStatus.Name == newStatus.Name && newStatus.State.Terminated == nil { + return fmt.Errorf("terminated init container %v attempted illegal transition to non-terminated state", newStatus.Name) + } + } + } return nil } @@ -571,11 +619,7 @@ func (m *manager) updateStatusInternal(pod *v1.Pod, status v1.PodStatus, forceUp } // Check for illegal state transition in containers - if err := checkContainerStateTransition(oldStatus.ContainerStatuses, status.ContainerStatuses, pod.Spec.RestartPolicy); err != nil { - klog.ErrorS(err, "Status update on pod aborted", "pod", klog.KObj(pod)) - return - } - if err := checkContainerStateTransition(oldStatus.InitContainerStatuses, status.InitContainerStatuses, pod.Spec.RestartPolicy); err != nil { + if err := checkContainerStateTransition(&oldStatus, &status, &pod.Spec); err != nil { klog.ErrorS(err, "Status update on pod aborted", "pod", klog.KObj(pod)) return } @@ -589,8 +633,8 @@ func (m *manager) updateStatusInternal(pod *v1.Pod, status v1.PodStatus, forceUp // Set 
InitializedCondition.LastTransitionTime. updateLastTransitionTime(&status, &oldStatus, v1.PodInitialized) - // Set PodHasNetwork.LastTransitionTime. - updateLastTransitionTime(&status, &oldStatus, kubetypes.PodHasNetwork) + // Set PodReadyToStartContainersCondition.LastTransitionTime. + updateLastTransitionTime(&status, &oldStatus, kubetypes.PodReadyToStartContainers) // Set PodScheduledCondition.LastTransitionTime. updateLastTransitionTime(&status, &oldStatus, v1.PodScheduled) @@ -888,14 +932,16 @@ func (m *manager) canBeDeleted(pod *v1.Pod, status v1.PodStatus, podIsFinished b if pod.DeletionTimestamp == nil || kubetypes.IsMirrorPod(pod) { return false } - // Delay deletion of pods until the phase is terminal. + // Delay deletion of pods until the phase is terminal, based on pod.Status + // which comes from pod manager. if !podutil.IsPodPhaseTerminal(pod.Status.Phase) { - klog.V(3).InfoS("Delaying pod deletion as the phase is non-terminal", "phase", status.Phase, "pod", klog.KObj(pod), "podUID", pod.UID) + // For debugging purposes we also log the kubelet's local phase, when the deletion is delayed. + klog.V(3).InfoS("Delaying pod deletion as the phase is non-terminal", "phase", pod.Status.Phase, "localPhase", status.Phase, "pod", klog.KObj(pod), "podUID", pod.UID) return false } // If this is an update completing pod termination then we know the pod termination is finished. 
if podIsFinished { - klog.V(3).InfoS("The pod termination is finished as SyncTerminatedPod completes its execution", "phase", status.Phase, "pod", klog.KObj(pod), "podUID", pod.UID) + klog.V(3).InfoS("The pod termination is finished as SyncTerminatedPod completes its execution", "phase", pod.Status.Phase, "localPhase", status.Phase, "pod", klog.KObj(pod), "podUID", pod.UID) return true } return false @@ -937,7 +983,7 @@ func (m *manager) needsReconcile(uid types.UID, status v1.PodStatus) bool { } klog.V(3).InfoS("Pod status is inconsistent with cached status for pod, a reconciliation should be triggered", "pod", klog.KObj(pod), - "statusDiff", diff.ObjectDiff(podStatus, &status)) + "statusDiff", cmp.Diff(podStatus, &status)) return true } @@ -1036,6 +1082,9 @@ func mergePodStatus(oldPodStatus, newPodStatus v1.PodStatus, couldHaveRunningCon } newPodStatus.Conditions = podConditions + // ResourceClaimStatuses is not owned and not modified by kubelet. + newPodStatus.ResourceClaimStatuses = oldPodStatus.ResourceClaimStatuses + // Delay transitioning a pod to a terminal status unless the pod is actually terminal. // The Kubelet should never transition a pod to terminal status that could have running // containers and thus actively be leveraging exclusive resources. Note that resources diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/sysctl/allowlist.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/sysctl/allowlist.go index 16bc95fff08e..07daa528eac0 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/sysctl/allowlist.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/sysctl/allowlist.go @@ -116,13 +116,8 @@ func (w *patternAllowlist) Admit(attrs *lifecycle.PodAdmitAttributes) lifecycle. 
} } - var hostNet, hostIPC bool - if pod.Spec.SecurityContext != nil { - hostNet = pod.Spec.HostNetwork - hostIPC = pod.Spec.HostIPC - } for _, s := range pod.Spec.SecurityContext.Sysctls { - if err := w.validateSysctl(s.Name, hostNet, hostIPC); err != nil { + if err := w.validateSysctl(s.Name, pod.Spec.HostNetwork, pod.Spec.HostIPC); err != nil { return lifecycle.PodAdmitResult{ Admit: false, Reason: ForbiddenReason, diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/types/constants.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/types/constants.go index 32af6d6baa46..3f085d22a11d 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/types/constants.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/types/constants.go @@ -43,8 +43,7 @@ const ( // entries here should be moved to staging/src/k8s.io.api/core/v1/types.go // once the feature managing the condition graduates to Beta. const ( - // PodHasNetwork indicates networking has been configured successfully for the - // pod and IP address(es) assigned. Images for containers specified in the pod - // spec can be pulled and containers launched after this condition is true. - PodHasNetwork = "PodHasNetwork" + // PodReadyToStartContainers indicates the pod sandbox is successfully configured and + // the pod is ready to launch containers.

+ PodReadyToStartContainers = "PodReadyToStartContainers" ) diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/types/pod_status.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/types/pod_status.go index a1894aedf7ad..f69ca822a0b2 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/types/pod_status.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/types/pod_status.go @@ -37,8 +37,8 @@ func PodConditionByKubelet(conditionType v1.PodConditionType) bool { return true } } - if utilfeature.DefaultFeatureGate.Enabled(features.PodHasNetworkCondition) { - if conditionType == PodHasNetwork { + if utilfeature.DefaultFeatureGate.Enabled(features.PodReadyToStartContainersCondition) { + if conditionType == PodReadyToStartContainers { return true } } diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/types/pod_update.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/types/pod_update.go index 6c7e236fc9a6..7f7fc5b799bc 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/types/pod_update.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/types/pod_update.go @@ -192,3 +192,13 @@ func IsCriticalPodBasedOnPriority(priority int32) bool { func IsNodeCriticalPod(pod *v1.Pod) bool { return IsCriticalPod(pod) && (pod.Spec.PriorityClassName == scheduling.SystemNodeCritical) } + +// IsRestartableInitContainer returns true if the initContainer has +// ContainerRestartPolicyAlways. 
+func IsRestartableInitContainer(initContainer *v1.Container) bool { + if initContainer.RestartPolicy == nil { + return false + } + + return *initContainer.RestartPolicy == v1.ContainerRestartPolicyAlways +} diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/userns/userns_manager.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/userns/userns_manager.go index 7d23f215adca..ffd23630f13e 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/userns/userns_manager.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/userns/userns_manager.go @@ -142,7 +142,7 @@ func MakeUserNsManager(kl userNsPodsManager) (*UsernsManager, error) { } // do not bother reading the list of pods if user namespaces are not enabled. - if !utilfeature.DefaultFeatureGate.Enabled(features.UserNamespacesStatelessPodsSupport) { + if !utilfeature.DefaultFeatureGate.Enabled(features.UserNamespacesSupport) { return &m, nil } @@ -258,7 +258,7 @@ func (m *UsernsManager) record(pod types.UID, from, length uint32) (err error) { // Release releases the user namespace allocated to the specified pod. func (m *UsernsManager) Release(podUID types.UID) { - if !utilfeature.DefaultFeatureGate.Enabled(features.UserNamespacesStatelessPodsSupport) { + if !utilfeature.DefaultFeatureGate.Enabled(features.UserNamespacesSupport) { return } @@ -367,7 +367,7 @@ func (m *UsernsManager) createUserNs(pod *v1.Pod) (userNs userNamespace, err err // GetOrCreateUserNamespaceMappings returns the configuration for the sandbox user namespace func (m *UsernsManager) GetOrCreateUserNamespaceMappings(pod *v1.Pod) (*runtimeapi.UserNamespace, error) { - if !utilfeature.DefaultFeatureGate.Enabled(features.UserNamespacesStatelessPodsSupport) { + if !utilfeature.DefaultFeatureGate.Enabled(features.UserNamespacesSupport) { return nil, nil } @@ -427,7 +427,7 @@ func (m *UsernsManager) GetOrCreateUserNamespaceMappings(pod *v1.Pod) (*runtimea // allocations with the pods actually running. 
It frees any user namespace // allocation for orphaned pods. func (m *UsernsManager) CleanupOrphanedPodUsernsAllocations(pods []*v1.Pod, runningPods []*kubecontainer.Pod) error { - if !utilfeature.DefaultFeatureGate.Enabled(features.UserNamespacesStatelessPodsSupport) { + if !utilfeature.DefaultFeatureGate.Enabled(features.UserNamespacesSupport) { return nil } diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/util/manager/cache_based_manager.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/util/manager/cache_based_manager.go index 22531a28780b..160f9ca0d295 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/util/manager/cache_based_manager.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/util/manager/cache_based_manager.go @@ -22,7 +22,7 @@ import ( "sync" "time" - "k8s.io/api/core/v1" + v1 "k8s.io/api/core/v1" "k8s.io/apiserver/pkg/storage" "k8s.io/kubernetes/pkg/kubelet/util" @@ -92,7 +92,7 @@ func isObjectOlder(newObject, oldObject runtime.Object) bool { return newVersion < oldVersion } -func (s *objectStore) AddReference(namespace, name string) { +func (s *objectStore) AddReference(namespace, name string, _ types.UID) { key := objectKey{namespace: namespace, name: name} // AddReference is called from RegisterPod, thus it needs to be efficient. 
@@ -114,7 +114,7 @@ func (s *objectStore) AddReference(namespace, name string) { item.data = nil } -func (s *objectStore) DeleteReference(namespace, name string) { +func (s *objectStore) DeleteReference(namespace, name string, _ types.UID) { key := objectKey{namespace: namespace, name: name} s.lock.Lock() @@ -225,7 +225,7 @@ func (c *cacheBasedManager) RegisterPod(pod *v1.Pod) { c.lock.Lock() defer c.lock.Unlock() for name := range names { - c.objectStore.AddReference(pod.Namespace, name) + c.objectStore.AddReference(pod.Namespace, name, pod.UID) } var prev *v1.Pod key := objectKey{namespace: pod.Namespace, name: pod.Name, uid: pod.UID} @@ -238,7 +238,7 @@ func (c *cacheBasedManager) RegisterPod(pod *v1.Pod) { // names and prev need to have their ref counts decremented. Any that // are only in prev need to be completely removed. This unconditional // call takes care of both cases. - c.objectStore.DeleteReference(prev.Namespace, name) + c.objectStore.DeleteReference(prev.Namespace, name, prev.UID) } } } @@ -252,7 +252,7 @@ func (c *cacheBasedManager) UnregisterPod(pod *v1.Pod) { delete(c.registeredPods, key) if prev != nil { for name := range c.getReferencedObjects(prev) { - c.objectStore.DeleteReference(prev.Namespace, name) + c.objectStore.DeleteReference(prev.Namespace, name, prev.UID) } } } diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/util/manager/manager.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/util/manager/manager.go index 2c983d35d22b..99f10a1dfa1e 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/util/manager/manager.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/util/manager/manager.go @@ -17,8 +17,9 @@ limitations under the License. 
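The `AddReference`/`DeleteReference` signature change below threads the referencing pod's UID through the store, and the matching `watch_based_manager.go` hunk replaces the single `refCount int` with a `refMap map[types.UID]int`, so the cache keeps one refcount per pod and stops the underlying reflector only when the last pod drops its last reference. A hypothetical sketch of that per-UID refcounting (`refStore` and its method shapes are illustrative, not the vendored implementation):

```go
package main

import "fmt"

// uid stands in for k8s.io/apimachinery/pkg/types.UID.
type uid string

type objectKey struct{ namespace, name string }

// refStore sketches the per-pod refcounting the new Store signatures enable:
// an object is released only once every referencing pod has dropped all of
// its references to it.
type refStore struct {
	refs map[objectKey]map[uid]int
}

func newRefStore() *refStore {
	return &refStore{refs: make(map[objectKey]map[uid]int)}
}

func (s *refStore) AddReference(namespace, name string, referencedFrom uid) {
	key := objectKey{namespace, name}
	if s.refs[key] == nil {
		s.refs[key] = make(map[uid]int)
	}
	s.refs[key][referencedFrom]++
}

// DeleteReference reports true when the last reference was dropped, i.e. the
// point at which the real cache would stop the underlying reflector.
func (s *refStore) DeleteReference(namespace, name string, referencedFrom uid) bool {
	key := objectKey{namespace, name}
	if m, ok := s.refs[key]; ok {
		m[referencedFrom]--
		if m[referencedFrom] <= 0 {
			delete(m, referencedFrom)
		}
		if len(m) == 0 {
			delete(s.refs, key)
			return true
		}
	}
	return false
}

func main() {
	s := newRefStore()
	s.AddReference("ns", "secret-a", "pod-1")
	s.AddReference("ns", "secret-a", "pod-2")
	fmt.Println(s.DeleteReference("ns", "secret-a", "pod-1")) // false: pod-2 still holds a reference
	fmt.Println(s.DeleteReference("ns", "secret-a", "pod-2")) // true: last reference dropped
}
```

Keying by pod UID makes the accounting robust when the same pod registers a secret or configmap several times: a double Delete from one pod can no longer release an object another pod still uses.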
package manager import ( - "k8s.io/api/core/v1" + v1 "k8s.io/api/core/v1" "k8s.io/apimachinery/pkg/runtime" + "k8s.io/apimachinery/pkg/types" ) // Manager is the interface for registering and unregistering @@ -46,15 +47,15 @@ type Manager interface { // Store is the interface for an object cache that // can be used by cacheBasedManager. type Store interface { - // AddReference adds a reference to the object to the store. + // AddReference adds a reference from referencedFrom to the object to the store. // Note that multiple additions to the store have to be allowed // in the implementations and effectively treated as refcounted. - AddReference(namespace, name string) - // DeleteReference deletes a reference to the object from the store. + AddReference(namespace, name string, referencedFrom types.UID) + // DeleteReference deletes a reference from referencedFrom to the object from the store. // Note that the object should be deleted only when there was a // corresponding Delete call for each of the Add calls (effectively - // when refcount was reduced to zero). - DeleteReference(namespace, name string) + // when the refcount of every referencedFrom was reduced to zero). + DeleteReference(namespace, name string, referencedFrom types.UID) // Get an object from a store.
Get(namespace, name string) (runtime.Object, error) } diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/util/manager/watch_based_manager.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/util/manager/watch_based_manager.go index e3a1d7e29d81..f04891fce539 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/util/manager/watch_based_manager.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/util/manager/watch_based_manager.go @@ -21,7 +21,7 @@ import ( "sync" "time" - "k8s.io/api/core/v1" + v1 "k8s.io/api/core/v1" "k8s.io/client-go/tools/cache" "k8s.io/klog/v2" @@ -31,6 +31,7 @@ import ( "k8s.io/apimachinery/pkg/fields" "k8s.io/apimachinery/pkg/runtime" "k8s.io/apimachinery/pkg/runtime/schema" + "k8s.io/apimachinery/pkg/types" "k8s.io/apimachinery/pkg/util/sets" "k8s.io/apimachinery/pkg/util/wait" "k8s.io/apimachinery/pkg/watch" @@ -44,7 +45,7 @@ type isImmutableFunc func(runtime.Object) bool // objectCacheItem is a single item stored in objectCache. type objectCacheItem struct { - refCount int + refMap map[types.UID]int store *cacheStore reflector *cache.Reflector @@ -231,7 +232,7 @@ func (c *objectCache) newReflectorLocked(namespace, name string) *objectCacheIte 0, ) item := &objectCacheItem{ - refCount: 0, + refMap: make(map[types.UID]int), store: store, reflector: reflector, hasSynced: func() (bool, error) { return store.hasSynced(), nil }, @@ -245,7 +246,7 @@ func (c *objectCache) newReflectorLocked(namespace, name string) *objectCacheIte return item } -func (c *objectCache) AddReference(namespace, name string) { +func (c *objectCache) AddReference(namespace, name string, referencedFrom types.UID) { key := objectKey{namespace: namespace, name: name} // AddReference is called from RegisterPod thus it needs to be efficient. 
@@ -260,17 +261,20 @@ func (c *objectCache) AddReference(namespace, name string) { item = c.newReflectorLocked(namespace, name) c.items[key] = item } - item.refCount++ + item.refMap[referencedFrom]++ } -func (c *objectCache) DeleteReference(namespace, name string) { +func (c *objectCache) DeleteReference(namespace, name string, referencedFrom types.UID) { key := objectKey{namespace: namespace, name: name} c.lock.Lock() defer c.lock.Unlock() if item, ok := c.items[key]; ok { - item.refCount-- - if item.refCount == 0 { + item.refMap[referencedFrom]-- + if item.refMap[referencedFrom] == 0 { + delete(item.refMap, referencedFrom) + } + if len(item.refMap) == 0 { // Stop the underlying reflector. item.stop() delete(c.items, key) diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/util/util.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/util/util.go index c2969a511264..97933afe39b0 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/util/util.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/util/util.go @@ -19,6 +19,7 @@ package util import ( "fmt" + v1 "k8s.io/api/core/v1" metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" ) @@ -43,3 +44,16 @@ func GetNodenameForKernel(hostname string, hostDomainName string, setHostnameAsF } return kernelHostname, nil } + +// GetContainerByIndex validates and extracts the container at index "idx" from +// "containers" with respect to "statuses". +// It returns true if the container is valid, else returns false. 
+func GetContainerByIndex(containers []v1.Container, statuses []v1.ContainerStatus, idx int) (v1.Container, bool) { + if idx < 0 || idx >= len(containers) || idx >= len(statuses) { + return v1.Container{}, false + } + if statuses[idx].Name != containers[idx].Name { + return v1.Container{}, false + } + return containers[idx], true +} diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/util/util_windows.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/util/util_windows.go index af51d45c6055..b8fc558d2e2f 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/util/util_windows.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/util/util_windows.go @@ -24,6 +24,8 @@ import ( "fmt" "net" "net/url" + "os" + "path/filepath" "strings" "syscall" "time" @@ -117,9 +119,27 @@ func parseEndpoint(endpoint string) (string, string, error) { } } -// LocalEndpoint empty implementation +// LocalEndpoint returns the full path to a named pipe at the given endpoint - unlike on unix, we can't use sockets. func LocalEndpoint(path, file string) (string, error) { - return "", fmt.Errorf("LocalEndpoints are unsupported in this build") + // extract the podresources config name from the path. We only need this on windows because of the preferred layout of pipes; + // this is why we have the extra logic here instead of changing the function signature. Join the file to make sure the + // last path component is a file, so the operation chain works. + podResourcesDir := filepath.Base(filepath.Dir(filepath.Join(path, file))) + if podResourcesDir == "" { + // should not happen because the user can configure a root directory, and we expect a subdirectory inside + // the user-supplied root directory named like "pod-resources" or so.
+ return "", fmt.Errorf("cannot infer the podresources directory from path %q", path) + } + // windows pipes are expected to use forward slashes: https://learn.microsoft.com/windows/win32/ipc/pipe-names + // so using `url` like we do on unix gives us unclear benefits - see https://github.com/kubernetes/kubernetes/issues/78628 + // So we just construct the path from scratch. + // Format: \\ServerName\pipe\PipeName + // where ServerName is either the name of a remote computer or a period, to specify the local computer. + // We only consider PipeName as a regular windows path, while the pipe path components are fixed, hence we use constants. + serverPart := `\\.` + pipePart := "pipe" + pipeName := "kubelet-" + podResourcesDir + return npipeProtocol + "://" + filepath.Join(serverPart, pipePart, pipeName), nil } var tickCount = syscall.NewLazyDLL("kernel32.dll").NewProc("GetTickCount64") @@ -146,6 +166,11 @@ func IsUnixDomainSocket(filePath string) (bool, error) { // does NOT work in 1809 if the socket file is created within a bind mounted directory by a container // and the FSCTL is issued in the host by the kubelet.
Err: %v", filePath, err) + } + klog.V(6).InfoS("Function IsUnixDomainSocket starts", "filePath", filePath) // As detailed in https://github.com/kubernetes/kubernetes/issues/104584 we cannot rely // on the Unix Domain socket working on the very first try, hence the potential need to diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/volume_host.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/volume_host.go index ec321b4066ed..7a9a9f871229 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/volume_host.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/volume_host.go @@ -38,7 +38,6 @@ import ( "k8s.io/kubernetes/pkg/kubelet/configmap" "k8s.io/kubernetes/pkg/kubelet/secret" "k8s.io/kubernetes/pkg/kubelet/token" - proxyutil "k8s.io/kubernetes/pkg/proxy/util" "k8s.io/kubernetes/pkg/volume" "k8s.io/kubernetes/pkg/volume/util" "k8s.io/kubernetes/pkg/volume/util/hostutil" @@ -152,11 +151,6 @@ func (kvh *kubeletVolumeHost) GetSubpather() subpath.Interface { return kvh.kubelet.subpather } -func (kvh *kubeletVolumeHost) GetFilteredDialOptions() *proxyutil.FilteredDialOptions { - // FilteredDial is not needed in the kubelet. 
- return nil -} - func (kvh *kubeletVolumeHost) GetHostUtil() hostutil.HostUtils { return kvh.kubelet.hostutil } diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/volumemanager/cache/actual_state_of_world.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/volumemanager/cache/actual_state_of_world.go index 90217a102d79..ada8c4415e44 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/volumemanager/cache/actual_state_of_world.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/volumemanager/cache/actual_state_of_world.go @@ -169,14 +169,20 @@ type ActualStateOfWorld interface { GetAttachedVolumes() []AttachedVolume // SyncReconstructedVolume check the volume.outerVolumeSpecName in asw and - // the one populated from dsw , if they do not match, update this field from the value from dsw. + // the one populated from dsw, if they do not match, update this field from the value from dsw. SyncReconstructedVolume(volumeName v1.UniqueVolumeName, podName volumetypes.UniquePodName, outerVolumeSpecName string) + // Add the specified volume to ASW as uncertainly attached. + AddAttachUncertainReconstructedVolume(volumeName v1.UniqueVolumeName, volumeSpec *volume.Spec, nodeName types.NodeName, devicePath string) error + // UpdateReconstructedDevicePath updates devicePath of a reconstructed volume // from Node.Status.VolumesAttached. The ASW is updated only when the volume is still // uncertain. If the volume got mounted in the meantime, its devicePath must have // been fixed by such an update. UpdateReconstructedDevicePath(volumeName v1.UniqueVolumeName, devicePath string) + + // UpdateReconstructedVolumeAttachability updates volume attachability from the API server. + UpdateReconstructedVolumeAttachability(volumeName v1.UniqueVolumeName, volumeAttachable bool) } // MountedVolume represents a volume that has successfully been mounted to a pod. 
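The hunks below replace the boolean `pluginIsAttachable` with the tri-state `volumeAttachability`, because during volume reconstruction after a kubelet restart the attachability of a volume may not be decidable until the API server answers; such volumes are recorded as `Uncertain`. The `UpdateReconstructedVolumeAttachability` body is truncated in this chunk, so the sketch below shows one plausible only-update-when-uncertain resolution; it is an assumption for illustration, not the vendored code:

```go
package main

import "fmt"

// volumeAttachability mirrors the tri-state introduced in the hunk below.
type volumeAttachability string

const (
	attachabilityTrue      volumeAttachability = "True"
	attachabilityFalse     volumeAttachability = "False"
	attachabilityUncertain volumeAttachability = "Uncertain"
)

// attachedVolume is a trimmed stand-in for the ASW's attachedVolume struct.
type attachedVolume struct {
	name               string
	pluginIsAttachable volumeAttachability
}

// updateReconstructedAttachability resolves the tri-state from an API server
// answer: only a volume still marked Uncertain is overwritten, so a definite
// value recorded in the meantime is never clobbered.
func updateReconstructedAttachability(v *attachedVolume, attachable bool) {
	if v.pluginIsAttachable != attachabilityUncertain {
		return
	}
	if attachable {
		v.pluginIsAttachable = attachabilityTrue
	} else {
		v.pluginIsAttachable = attachabilityFalse
	}
}

func main() {
	v := &attachedVolume{name: "vol-1", pluginIsAttachable: attachabilityUncertain}
	updateReconstructedAttachability(v, true)
	fmt.Println(v.pluginIsAttachable) // True

	w := &attachedVolume{name: "vol-2", pluginIsAttachable: attachabilityFalse}
	updateReconstructedAttachability(w, true)
	fmt.Println(w.pluginIsAttachable) // False: already definite, not overwritten
}
```

The same pattern appears in `UpdateReconstructedDevicePath` above: reconstruction-time updates only apply while the volume's state is still uncertain.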
@@ -251,6 +257,14 @@ type actualStateOfWorld struct { sync.RWMutex } +type volumeAttachability string + +const ( + volumeAttachabilityTrue volumeAttachability = "True" + volumeAttachabilityFalse volumeAttachability = "False" + volumeAttachabilityUncertain volumeAttachability = "Uncertain" +) + // attachedVolume represents a volume the kubelet volume manager believes to be // successfully attached to a node it is managing. Volume types that do not // implement an attacher are assumed to be in this state. @@ -280,7 +294,7 @@ type attachedVolume struct { // pluginIsAttachable indicates the volume plugin used to attach and mount // this volume implements the volume.Attacher interface - pluginIsAttachable bool + pluginIsAttachable volumeAttachability // deviceMountState stores information that tells us if device is mounted // globally or not @@ -361,7 +375,19 @@ type mountedPod struct { func (asw *actualStateOfWorld) MarkVolumeAsAttached( logger klog.Logger, volumeName v1.UniqueVolumeName, volumeSpec *volume.Spec, _ types.NodeName, devicePath string) error { - return asw.addVolume(volumeName, volumeSpec, devicePath) + + pluginIsAttachable := volumeAttachabilityFalse + if attachablePlugin, err := asw.volumePluginMgr.FindAttachablePluginBySpec(volumeSpec); err == nil && attachablePlugin != nil { + pluginIsAttachable = volumeAttachabilityTrue + } + + return asw.addVolume(volumeName, volumeSpec, devicePath, pluginIsAttachable) +} + +func (asw *actualStateOfWorld) AddAttachUncertainReconstructedVolume( + volumeName v1.UniqueVolumeName, volumeSpec *volume.Spec, _ types.NodeName, devicePath string) error { + + return asw.addVolume(volumeName, volumeSpec, devicePath, volumeAttachabilityUncertain) } func (asw *actualStateOfWorld) MarkVolumeAsUncertain( @@ -526,6 +552,28 @@ func (asw *actualStateOfWorld) UpdateReconstructedDevicePath(volumeName v1.Uniqu asw.attachedVolumes[volumeName] = volumeObj } +func (asw *actualStateOfWorld) UpdateReconstructedVolumeAttachability(volumeName 
v1.UniqueVolumeName, attachable bool) { + asw.Lock() + defer asw.Unlock() + + volumeObj, volumeExists := asw.attachedVolumes[volumeName] + if !volumeExists { + return + } + if volumeObj.pluginIsAttachable != volumeAttachabilityUncertain { + // Reconciler must have updated volume state, i.e. when a pod uses the volume and + // succeeded mounting the volume. Such update has fixed the device path. + return + } + + if attachable { + volumeObj.pluginIsAttachable = volumeAttachabilityTrue + } else { + volumeObj.pluginIsAttachable = volumeAttachabilityFalse + } + asw.attachedVolumes[volumeName] = volumeObj +} + func (asw *actualStateOfWorld) GetDeviceMountState(volumeName v1.UniqueVolumeName) operationexecutor.DeviceMountState { asw.RLock() defer asw.RUnlock() @@ -592,7 +640,7 @@ func (asw *actualStateOfWorld) IsVolumeMountedElsewhere(volumeName v1.UniqueVolu // volume plugin can support the given volumeSpec or more than one plugin can // support it, an error is returned. func (asw *actualStateOfWorld) addVolume( - volumeName v1.UniqueVolumeName, volumeSpec *volume.Spec, devicePath string) error { + volumeName v1.UniqueVolumeName, volumeSpec *volume.Spec, devicePath string, attachability volumeAttachability) error { asw.Lock() defer asw.Unlock() @@ -615,11 +663,6 @@ func (asw *actualStateOfWorld) addVolume( } } - pluginIsAttachable := false - if attachablePlugin, err := asw.volumePluginMgr.FindAttachablePluginBySpec(volumeSpec); err == nil && attachablePlugin != nil { - pluginIsAttachable = true - } - volumeObj, volumeExists := asw.attachedVolumes[volumeName] if !volumeExists { volumeObj = attachedVolume{ @@ -627,7 +670,7 @@ func (asw *actualStateOfWorld) addVolume( spec: volumeSpec, mountedPods: make(map[volumetypes.UniquePodName]mountedPod), pluginName: volumePlugin.GetPluginName(), - pluginIsAttachable: pluginIsAttachable, + pluginIsAttachable: attachability, deviceMountState: operationexecutor.DeviceNotMounted, devicePath: devicePath, } @@ -1094,7 +1137,7 @@ func (asw 
*actualStateOfWorld) newAttachedVolume( VolumeName: attachedVolume.volumeName, VolumeSpec: attachedVolume.spec, NodeName: asw.nodeName, - PluginIsAttachable: attachedVolume.pluginIsAttachable, + PluginIsAttachable: attachedVolume.pluginIsAttachable == volumeAttachabilityTrue, DevicePath: attachedVolume.devicePath, DeviceMountPath: attachedVolume.deviceMountPath, PluginName: attachedVolume.pluginName, diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/volumemanager/cache/desired_state_of_world.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/volumemanager/cache/desired_state_of_world.go index c8cab650ca07..2a7abe23c945 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/volumemanager/cache/desired_state_of_world.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/volumemanager/cache/desired_state_of_world.go @@ -587,7 +587,7 @@ func (dsw *desiredStateOfWorld) GetVolumesToMount() []VolumeToMount { }, } if volumeObj.persistentVolumeSize != nil { - vmt.PersistentVolumeSize = volumeObj.persistentVolumeSize.DeepCopy() + vmt.DesiredPersistentVolumeSize = volumeObj.persistentVolumeSize.DeepCopy() } volumesToMount = append(volumesToMount, vmt) } diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/volumemanager/populator/desired_state_of_world_populator.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/volumemanager/populator/desired_state_of_world_populator.go index 80ef98aafb27..8aab267d2c11 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/volumemanager/populator/desired_state_of_world_populator.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/volumemanager/populator/desired_state_of_world_populator.go @@ -40,7 +40,6 @@ import ( "k8s.io/kubernetes/pkg/features" "k8s.io/kubernetes/pkg/kubelet/config" kubecontainer "k8s.io/kubernetes/pkg/kubelet/container" - "k8s.io/kubernetes/pkg/kubelet/pod" "k8s.io/kubernetes/pkg/kubelet/volumemanager/cache" 
"k8s.io/kubernetes/pkg/volume" "k8s.io/kubernetes/pkg/volume/csimigration" @@ -70,12 +69,19 @@ type DesiredStateOfWorldPopulator interface { HasAddedPods() bool } -// podStateProvider can determine if a pod is going to be terminated. -type podStateProvider interface { +// PodStateProvider can determine if a pod is going to be terminated. +type PodStateProvider interface { ShouldPodContainersBeTerminating(types.UID) bool ShouldPodRuntimeBeRemoved(types.UID) bool } +// PodManager is the subset of methods the manager needs to observe the actual state of the kubelet. +// See pkg/k8s.io/kubernetes/pkg/kubelet/pod.Manager for method godoc. +type PodManager interface { + GetPodByUID(types.UID) (*v1.Pod, bool) + GetPods() []*v1.Pod +} + // NewDesiredStateOfWorldPopulator returns a new instance of // DesiredStateOfWorldPopulator. // @@ -90,8 +96,8 @@ type podStateProvider interface { func NewDesiredStateOfWorldPopulator( kubeClient clientset.Interface, loopSleepDuration time.Duration, - podManager pod.Manager, - podStateProvider podStateProvider, + podManager PodManager, + podStateProvider PodStateProvider, desiredStateOfWorld cache.DesiredStateOfWorld, actualStateOfWorld cache.ActualStateOfWorld, kubeContainerRuntime kubecontainer.Runtime, @@ -121,8 +127,8 @@ func NewDesiredStateOfWorldPopulator( type desiredStateOfWorldPopulator struct { kubeClient clientset.Interface loopSleepDuration time.Duration - podManager pod.Manager - podStateProvider podStateProvider + podManager PodManager + podStateProvider PodStateProvider desiredStateOfWorld cache.DesiredStateOfWorld actualStateOfWorld cache.ActualStateOfWorld pods processedPods @@ -207,7 +213,9 @@ func (dswp *desiredStateOfWorldPopulator) findAndAddNewPods() { // Iterate through all pods in desired state of world, and remove if they no // longer exist func (dswp *desiredStateOfWorldPopulator) findAndRemoveDeletedPods() { + podsFromCache := make(map[volumetypes.UniquePodName]struct{}) for _, volumeToMount := range 
dswp.desiredStateOfWorld.GetVolumesToMount() { + podsFromCache[volumetypes.UniquePodName(volumeToMount.Pod.UID)] = struct{}{} pod, podExists := dswp.podManager.GetPodByUID(volumeToMount.Pod.UID) if podExists { @@ -256,6 +264,23 @@ func (dswp *desiredStateOfWorldPopulator) findAndRemoveDeletedPods() { dswp.deleteProcessedPod(volumeToMount.PodName) } + // Clean up orphaned entries from processedPods + dswp.pods.Lock() + orphanedPods := make([]volumetypes.UniquePodName, 0, len(dswp.pods.processedPods)) + for k := range dswp.pods.processedPods { + if _, ok := podsFromCache[k]; !ok { + orphanedPods = append(orphanedPods, k) + } + } + dswp.pods.Unlock() + for _, orphanedPod := range orphanedPods { + uid := types.UID(orphanedPod) + _, podExists := dswp.podManager.GetPodByUID(uid) + if !podExists && dswp.podStateProvider.ShouldPodRuntimeBeRemoved(uid) { + dswp.deleteProcessedPod(orphanedPod) + } + } + podsWithError := dswp.desiredStateOfWorld.GetPodsWithErrors() for _, podName := range podsWithError { if _, podExists := dswp.podManager.GetPodByUID(types.UID(podName)); !podExists { diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/volumemanager/reconciler/reconciler_common.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/volumemanager/reconciler/reconciler_common.go index 7fb53f9ce94f..b895f943fd84 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/volumemanager/reconciler/reconciler_common.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/volumemanager/reconciler/reconciler_common.go @@ -104,24 +104,24 @@ func NewReconciler( volumePluginMgr *volumepkg.VolumePluginMgr, kubeletPodsDir string) Reconciler { return &reconciler{ - kubeClient: kubeClient, - controllerAttachDetachEnabled: controllerAttachDetachEnabled, - loopSleepDuration: loopSleepDuration, - waitForAttachTimeout: waitForAttachTimeout, - nodeName: nodeName, - desiredStateOfWorld: desiredStateOfWorld, - actualStateOfWorld: actualStateOfWorld, -
populatorHasAddedPods: populatorHasAddedPods, - operationExecutor: operationExecutor, - mounter: mounter, - hostutil: hostutil, - skippedDuringReconstruction: map[v1.UniqueVolumeName]*globalVolumeInfo{}, - volumePluginMgr: volumePluginMgr, - kubeletPodsDir: kubeletPodsDir, - timeOfLastSync: time.Time{}, - volumesFailedReconstruction: make([]podVolume, 0), - volumesNeedDevicePath: make([]v1.UniqueVolumeName, 0), - volumesNeedReportedInUse: make([]v1.UniqueVolumeName, 0), + kubeClient: kubeClient, + controllerAttachDetachEnabled: controllerAttachDetachEnabled, + loopSleepDuration: loopSleepDuration, + waitForAttachTimeout: waitForAttachTimeout, + nodeName: nodeName, + desiredStateOfWorld: desiredStateOfWorld, + actualStateOfWorld: actualStateOfWorld, + populatorHasAddedPods: populatorHasAddedPods, + operationExecutor: operationExecutor, + mounter: mounter, + hostutil: hostutil, + skippedDuringReconstruction: map[v1.UniqueVolumeName]*globalVolumeInfo{}, + volumePluginMgr: volumePluginMgr, + kubeletPodsDir: kubeletPodsDir, + timeOfLastSync: time.Time{}, + volumesFailedReconstruction: make([]podVolume, 0), + volumesNeedUpdateFromNodeStatus: make([]v1.UniqueVolumeName, 0), + volumesNeedReportedInUse: make([]v1.UniqueVolumeName, 0), } } @@ -141,11 +141,11 @@ type reconciler struct { skippedDuringReconstruction map[v1.UniqueVolumeName]*globalVolumeInfo kubeletPodsDir string // lock protects timeOfLastSync for updating and checking - timeOfLastSyncLock sync.Mutex - timeOfLastSync time.Time - volumesFailedReconstruction []podVolume - volumesNeedDevicePath []v1.UniqueVolumeName - volumesNeedReportedInUse []v1.UniqueVolumeName + timeOfLastSyncLock sync.Mutex + timeOfLastSync time.Time + volumesFailedReconstruction []podVolume + volumesNeedUpdateFromNodeStatus []v1.UniqueVolumeName + volumesNeedReportedInUse []v1.UniqueVolumeName } func (rc *reconciler) Run(stopCh <-chan struct{}) { @@ -178,7 +178,7 @@ func (rc *reconciler) unmountVolumes() { func (rc *reconciler) 
mountOrAttachVolumes() { // Ensure volumes that should be attached/mounted are attached/mounted. for _, volumeToMount := range rc.desiredStateOfWorld.GetVolumesToMount() { - volMounted, devicePath, err := rc.actualStateOfWorld.PodExistsInVolume(volumeToMount.PodName, volumeToMount.VolumeName, volumeToMount.PersistentVolumeSize, volumeToMount.SELinuxLabel) + volMounted, devicePath, err := rc.actualStateOfWorld.PodExistsInVolume(volumeToMount.PodName, volumeToMount.VolumeName, volumeToMount.DesiredPersistentVolumeSize, volumeToMount.SELinuxLabel) volumeToMount.DevicePath = devicePath if cache.IsSELinuxMountMismatchError(err) { // The volume is mounted, but with an unexpected SELinux context. diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/volumemanager/reconciler/reconciler_new.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/volumemanager/reconciler/reconciler_new.go index 9e9e19e9cbf8..3f8ab5396097 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/volumemanager/reconciler/reconciler_new.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/volumemanager/reconciler/reconciler_new.go @@ -56,8 +56,14 @@ func (rc *reconciler) reconcileNew() { rc.cleanOrphanVolumes() } - if len(rc.volumesNeedDevicePath) != 0 { - rc.updateReconstructedDevicePaths() + if len(rc.volumesNeedUpdateFromNodeStatus) != 0 { + rc.updateReconstructedFromNodeStatus() + } + if len(rc.volumesNeedUpdateFromNodeStatus) == 0 { + // ASW is fully populated only after both devicePaths and uncertain volume attach-ability + // were reconstructed from the API server. + // This will start reconciliation of node.status.volumesInUse. 
+ rc.updateLastSyncTime() } if len(rc.volumesNeedReportedInUse) != 0 && rc.populatorHasAddedPods() { diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/volumemanager/reconciler/reconstruct_common.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/volumemanager/reconciler/reconstruct_common.go index 220c917878e4..57e534d9a0e4 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/volumemanager/reconciler/reconstruct_common.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/volumemanager/reconciler/reconstruct_common.go @@ -93,7 +93,9 @@ func (gvi *globalVolumeInfo) addPodVolume(rcv *reconstructedVolume) { func (rc *reconciler) cleanupMounts(volume podVolume) { klog.V(2).InfoS("Reconciler sync states: could not find volume information in desired state, clean up the mount points", "podName", volume.podName, "volumeSpecName", volume.volumeSpecName) mountedVolume := operationexecutor.MountedVolume{ - PodName: volume.podName, + PodName: volume.podName, + // VolumeName should be generated by `GetUniqueVolumeNameFromSpec` or `GetUniqueVolumeNameFromSpecWithPod`. + // However, since we don't have the volume information in asw when cleaning up mounts, it doesn't matter what we put here. VolumeName: v1.UniqueVolumeName(volume.volumeSpecName), InnerVolumeSpecName: volume.volumeSpecName, PluginName: volume.pluginName, @@ -234,17 +236,28 @@ func (rc *reconciler) reconstructVolume(volume podVolume) (rvolume *reconstructe // Searching by spec checks whether the volume is actually attachable // (i.e. has a PV) whereas searching by plugin name can only tell whether // the plugin supports attachable volumes. 
- attachablePlugin, err := rc.volumePluginMgr.FindAttachablePluginBySpec(volumeSpec) - if err != nil { - return nil, err - } deviceMountablePlugin, err := rc.volumePluginMgr.FindDeviceMountablePluginBySpec(volumeSpec) if err != nil { return nil, err } + // The unique volume name used depends on whether the volume is attachable/device-mountable + // (needsNameFromSpec = true) or not. + needsNameFromSpec := deviceMountablePlugin != nil + if !needsNameFromSpec { + // Check attach-ability of a volume only as a fallback to avoid calling + // FindAttachablePluginBySpec for CSI volumes - it needs a connection to the API server, + // but it may not be available at this stage of kubelet startup. + // All CSI volumes are device-mountable, so they won't reach this code. + attachablePlugin, err := rc.volumePluginMgr.FindAttachablePluginBySpec(volumeSpec) + if err != nil { + return nil, err + } + needsNameFromSpec = attachablePlugin != nil + } + var uniqueVolumeName v1.UniqueVolumeName - if attachablePlugin != nil || deviceMountablePlugin != nil { + if needsNameFromSpec { uniqueVolumeName, err = util.GetUniqueVolumeNameFromSpec(plugin, volumeSpec) if err != nil { return nil, err @@ -256,8 +269,6 @@ func (rc *reconciler) reconstructVolume(volume podVolume) (rvolume *reconstructe var volumeMapper volumepkg.BlockVolumeMapper var volumeMounter volumepkg.Mounter var deviceMounter volumepkg.DeviceMounter - // Path to the mount or block device to check - var checkPath string if volume.volumeMode == v1.PersistentVolumeBlock { var newMapperErr error @@ -274,8 +285,6 @@ func (rc *reconciler) reconstructVolume(volume podVolume) (rvolume *reconstructe pod.UID, newMapperErr) } - mapDir, linkName := volumeMapper.GetPodDeviceMapPath() - checkPath = filepath.Join(mapDir, linkName) } else { var err error volumeMounter, err = plugin.NewMounter( @@ -291,7 +300,6 @@ func (rc *reconciler) reconstructVolume(volume podVolume) (rvolume *reconstructe pod.UID, err) } - checkPath = 
volumeMounter.GetPath() if deviceMountablePlugin != nil { deviceMounter, err = deviceMountablePlugin.NewDeviceMounter() if err != nil { @@ -305,16 +313,6 @@ func (rc *reconciler) reconstructVolume(volume podVolume) (rvolume *reconstructe } } - // Check existence of mount point for filesystem volume or symbolic link for block volume - isExist, checkErr := rc.operationExecutor.CheckVolumeExistenceOperation(volumeSpec, checkPath, volumeSpec.Name(), rc.mounter, uniqueVolumeName, volume.podName, pod.UID, attachablePlugin) - if checkErr != nil { - return nil, checkErr - } - // If mount or symlink doesn't exist, volume reconstruction should be failed - if !isExist { - return nil, fmt.Errorf("volume: %q is not mounted", uniqueVolumeName) - } - reconstructedVolume := &reconstructedVolume{ volumeName: uniqueVolumeName, podName: volume.podName, diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/volumemanager/reconciler/reconstruct_new.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/volumemanager/reconciler/reconstruct_new.go index 51d52c0b1be0..1ed487309977 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/volumemanager/reconciler/reconstruct_new.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/volumemanager/reconciler/reconstruct_new.go @@ -39,7 +39,7 @@ func (rc *reconciler) readyToUnmount() bool { // Allow unmount only when ASW device paths were corrected from node.status to prevent // calling unmount with a wrong devicePath. - if len(rc.volumesNeedDevicePath) != 0 { + if len(rc.volumesNeedUpdateFromNodeStatus) != 0 { return false } return true @@ -50,7 +50,6 @@ func (rc *reconciler) readyToUnmount() bool { // put the volumes to volumesFailedReconstruction to be cleaned up later when DesiredStateOfWorld // is populated. 
func (rc *reconciler) reconstructVolumes() { - defer rc.updateLastSyncTime() // Get volumes information by reading the pod's directory podVolumes, err := getVolumesFromPodDir(rc.kubeletPodsDir) if err != nil { @@ -98,16 +97,16 @@ func (rc *reconciler) reconstructVolumes() { // Remember to update DSW with this information. rc.volumesNeedReportedInUse = reconstructedVolumeNames // Remember to update devicePath from node.status.volumesAttached - rc.volumesNeedDevicePath = reconstructedVolumeNames + rc.volumesNeedUpdateFromNodeStatus = reconstructedVolumeNames } klog.V(2).InfoS("Volume reconstruction finished") } func (rc *reconciler) updateStatesNew(reconstructedVolumes map[v1.UniqueVolumeName]*globalVolumeInfo) { for _, gvl := range reconstructedVolumes { - err := rc.actualStateOfWorld.MarkVolumeAsAttached( + err := rc.actualStateOfWorld.AddAttachUncertainReconstructedVolume( //TODO: the devicePath might not be correct for some volume plugins: see issue #54108 - klog.TODO(), gvl.volumeName, gvl.volumeSpec, rc.nodeName, gvl.devicePath) + gvl.volumeName, gvl.volumeSpec, rc.nodeName, gvl.devicePath) if err != nil { klog.ErrorS(err, "Could not add volume information to actual state of world", "volumeName", gvl.volumeName) continue @@ -174,36 +173,40 @@ func (rc *reconciler) cleanOrphanVolumes() { rc.volumesFailedReconstruction = make([]podVolume, 0) } -// updateReconstructedDevicePaths tries to file devicePaths of reconstructed volumes from +// updateReconstructedFromNodeStatus tries to fill devicePaths of reconstructed volumes from // node.Status.VolumesAttached. This can be done only after connection to the API // server is established, i.e. it can't be part of reconstructVolumes(). -func (rc *reconciler) updateReconstructedDevicePaths() { +func (rc *reconciler) updateReconstructedFromNodeStatus() { klog.V(4).InfoS("Updating reconstructed devicePaths") if rc.kubeClient == nil { // Skip reconstructing devicePath from node objects if kubelet is in standalone mode. 
// Such kubelet is not expected to mount any attachable volume or Secrets / ConfigMap. klog.V(2).InfoS("Skipped reconstruction of DevicePaths from node.status in standalone mode") - rc.volumesNeedDevicePath = nil + rc.volumesNeedUpdateFromNodeStatus = nil return } node, fetchErr := rc.kubeClient.CoreV1().Nodes().Get(context.TODO(), string(rc.nodeName), metav1.GetOptions{}) if fetchErr != nil { // This may repeat few times per second until kubelet is able to read its own status for the first time. - klog.V(2).ErrorS(fetchErr, "Failed to get Node status to reconstruct device paths") + klog.V(4).ErrorS(fetchErr, "Failed to get Node status to reconstruct device paths") return } - for _, volumeID := range rc.volumesNeedDevicePath { + for _, volumeID := range rc.volumesNeedUpdateFromNodeStatus { + attachable := false for _, attachedVolume := range node.Status.VolumesAttached { if volumeID != attachedVolume.Name { continue } rc.actualStateOfWorld.UpdateReconstructedDevicePath(volumeID, attachedVolume.DevicePath) + attachable = true klog.V(4).InfoS("Updated devicePath from node status for volume", "volumeName", attachedVolume.Name, "path", attachedVolume.DevicePath) } + rc.actualStateOfWorld.UpdateReconstructedVolumeAttachability(volumeID, attachable) } + klog.V(2).InfoS("DevicePaths of reconstructed volumes updated") - rc.volumesNeedDevicePath = nil + rc.volumesNeedUpdateFromNodeStatus = nil } diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/volumemanager/volume_manager.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/volumemanager/volume_manager.go index 7de1d03fbbe6..b34f21b62f6f 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/volumemanager/volume_manager.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/volumemanager/volume_manager.go @@ -17,6 +17,7 @@ limitations under the License. 
package volumemanager import ( + "context" "errors" "fmt" "sort" @@ -38,7 +39,6 @@ import ( csitrans "k8s.io/csi-translation-lib" "k8s.io/kubernetes/pkg/kubelet/config" "k8s.io/kubernetes/pkg/kubelet/container" - "k8s.io/kubernetes/pkg/kubelet/pod" "k8s.io/kubernetes/pkg/kubelet/volumemanager/cache" "k8s.io/kubernetes/pkg/kubelet/volumemanager/metrics" "k8s.io/kubernetes/pkg/kubelet/volumemanager/populator" @@ -97,14 +97,14 @@ type VolumeManager interface { // actual state of the world). // An error is returned if all volumes are not attached and mounted within // the duration defined in podAttachAndMountTimeout. - WaitForAttachAndMount(pod *v1.Pod) error + WaitForAttachAndMount(ctx context.Context, pod *v1.Pod) error // WaitForUnmount processes the volumes referenced in the specified // pod and blocks until they are all unmounted (reflected in the actual // state of the world). // An error is returned if all volumes are not unmounted within // the duration defined in podAttachAndMountTimeout. - WaitForUnmount(pod *v1.Pod) error + WaitForUnmount(ctx context.Context, pod *v1.Pod) error // GetMountedVolumesForPod returns a VolumeMap containing the volumes // referenced by the specified pod that are successfully attached and @@ -150,11 +150,18 @@ type VolumeManager interface { } // podStateProvider can determine if a pod is going to be terminated -type podStateProvider interface { +type PodStateProvider interface { ShouldPodContainersBeTerminating(k8stypes.UID) bool ShouldPodRuntimeBeRemoved(k8stypes.UID) bool } +// PodManager is the subset of methods the manager needs to observe the actual state of the kubelet. +// See pkg/k8s.io/kubernetes/pkg/kubelet/pod.Manager for method godoc. +type PodManager interface { + GetPodByUID(k8stypes.UID) (*v1.Pod, bool) + GetPods() []*v1.Pod +} + // NewVolumeManager returns a new concrete instance implementing the // VolumeManager interface. 
// @@ -166,8 +173,8 @@ type podStateProvider interface { func NewVolumeManager( controllerAttachDetachEnabled bool, nodeName k8stypes.NodeName, - podManager pod.Manager, - podStateProvider podStateProvider, + podManager PodManager, + podStateProvider PodStateProvider, kubeClient clientset.Interface, volumePluginMgr *volume.VolumePluginMgr, kubeContainerRuntime container.Runtime, @@ -385,7 +392,7 @@ func (vm *volumeManager) MarkVolumesAsReportedInUse( vm.desiredStateOfWorld.MarkVolumesReportedInUse(volumesReportedAsInUse) } -func (vm *volumeManager) WaitForAttachAndMount(pod *v1.Pod) error { +func (vm *volumeManager) WaitForAttachAndMount(ctx context.Context, pod *v1.Pod) error { if pod == nil { return nil } @@ -404,9 +411,11 @@ func (vm *volumeManager) WaitForAttachAndMount(pod *v1.Pod) error { // like Downward API, depend on this to update the contents of the volume). vm.desiredStateOfWorldPopulator.ReprocessPod(uniquePodName) - err := wait.PollImmediate( + err := wait.PollUntilContextTimeout( + ctx, podAttachAndMountRetryInterval, podAttachAndMountTimeout, + true, vm.verifyVolumesMountedFunc(uniquePodName, expectedVolumes)) if err != nil { @@ -423,7 +432,7 @@ func (vm *volumeManager) WaitForAttachAndMount(pod *v1.Pod) error { } return fmt.Errorf( - "unmounted volumes=%v, unattached volumes=%v, failed to process volumes=%v: %s", + "unmounted volumes=%v, unattached volumes=%v, failed to process volumes=%v: %w", unmountedVolumes, unattachedVolumes, volumesNotInDSW, @@ -434,7 +443,7 @@ func (vm *volumeManager) WaitForAttachAndMount(pod *v1.Pod) error { return nil } -func (vm *volumeManager) WaitForUnmount(pod *v1.Pod) error { +func (vm *volumeManager) WaitForUnmount(ctx context.Context, pod *v1.Pod) error { if pod == nil { return nil } @@ -444,9 +453,11 @@ func (vm *volumeManager) WaitForUnmount(pod *v1.Pod) error { vm.desiredStateOfWorldPopulator.ReprocessPod(uniquePodName) - err := wait.PollImmediate( + err := wait.PollUntilContextTimeout( + ctx, 
podAttachAndMountRetryInterval, podAttachAndMountTimeout, + true, vm.verifyVolumesUnmountedFunc(uniquePodName)) if err != nil { @@ -461,7 +472,7 @@ func (vm *volumeManager) WaitForUnmount(pod *v1.Pod) error { } return fmt.Errorf( - "mounted volumes=%v: %s", + "mounted volumes=%v: %w", mountedVolumes, err) } @@ -493,14 +504,15 @@ func (vm *volumeManager) getUnattachedVolumes(uniquePodName types.UniquePodName) unattachedVolumes = append(unattachedVolumes, volumeToMount.OuterVolumeSpecName) } } + sort.Strings(unattachedVolumes) return unattachedVolumes } // verifyVolumesMountedFunc returns a method that returns true when all expected // volumes are mounted. -func (vm *volumeManager) verifyVolumesMountedFunc(podName types.UniquePodName, expectedVolumes []string) wait.ConditionFunc { - return func() (done bool, err error) { +func (vm *volumeManager) verifyVolumesMountedFunc(podName types.UniquePodName, expectedVolumes []string) wait.ConditionWithContextFunc { + return func(_ context.Context) (done bool, err error) { if errs := vm.desiredStateOfWorld.PopPodErrors(podName); len(errs) > 0 { return true, errors.New(strings.Join(errs, "; ")) } @@ -510,8 +522,8 @@ func (vm *volumeManager) verifyVolumesMountedFunc(podName types.UniquePodName, e // verifyVolumesUnmountedFunc returns a method that is true when there are no mounted volumes for this // pod. 
-func (vm *volumeManager) verifyVolumesUnmountedFunc(podName types.UniquePodName) wait.ConditionFunc { - return func() (done bool, err error) { +func (vm *volumeManager) verifyVolumesUnmountedFunc(podName types.UniquePodName) wait.ConditionWithContextFunc { + return func(_ context.Context) (done bool, err error) { if errs := vm.desiredStateOfWorld.PopPodErrors(podName); len(errs) > 0 { return true, errors.New(strings.Join(errs, "; ")) } @@ -540,6 +552,8 @@ func filterUnmountedVolumes(mountedVolumes sets.String, expectedVolumes []string unmountedVolumes = append(unmountedVolumes, expectedVolume) } } + sort.Strings(unmountedVolumes) + return unmountedVolumes } diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/volumemanager/volume_manager_fake.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/volumemanager/volume_manager_fake.go index e3e56b4d853c..94a7ac5b2d54 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/volumemanager/volume_manager_fake.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/volumemanager/volume_manager_fake.go @@ -17,6 +17,8 @@ limitations under the License. 
package volumemanager import ( + "context" + v1 "k8s.io/api/core/v1" "k8s.io/kubernetes/pkg/kubelet/config" "k8s.io/kubernetes/pkg/kubelet/container" @@ -46,12 +48,12 @@ func (f *FakeVolumeManager) Run(sourcesReady config.SourcesReady, stopCh <-chan } // WaitForAttachAndMount is not implemented -func (f *FakeVolumeManager) WaitForAttachAndMount(pod *v1.Pod) error { +func (f *FakeVolumeManager) WaitForAttachAndMount(ctx context.Context, pod *v1.Pod) error { return nil } // WaitForUnmount is not implemented -func (f *FakeVolumeManager) WaitForUnmount(pod *v1.Pod) error { +func (f *FakeVolumeManager) WaitForUnmount(ctx context.Context, pod *v1.Pod) error { return nil } diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubemark/.import-restrictions b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubemark/.import-restrictions index 5bcdf7301be8..6390404bd0e9 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubemark/.import-restrictions +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubemark/.import-restrictions @@ -2,5 +2,4 @@ rules: # override pkg/ import restriction on cmd/ for kubemark - selectorRegexp: k8s[.]io/kubernetes/cmd allowedPrefixes: - - k8s.io/kubernetes/cmd/kube-proxy/app - k8s.io/kubernetes/cmd/kubelet/app diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubemark/hollow_proxy.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubemark/hollow_proxy.go deleted file mode 100644 index e7ba215e57f0..000000000000 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubemark/hollow_proxy.go +++ /dev/null @@ -1,146 +0,0 @@ -/* -Copyright 2015 The Kubernetes Authors. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. 
-You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -package kubemark - -import ( - "fmt" - "time" - - v1 "k8s.io/api/core/v1" - discoveryv1 "k8s.io/api/discovery/v1" - "k8s.io/apimachinery/pkg/types" - clientset "k8s.io/client-go/kubernetes" - v1core "k8s.io/client-go/kubernetes/typed/core/v1" - "k8s.io/client-go/tools/events" - utilsysctl "k8s.io/component-helpers/node/util/sysctl" - proxyapp "k8s.io/kubernetes/cmd/kube-proxy/app" - "k8s.io/kubernetes/pkg/proxy" - proxyconfig "k8s.io/kubernetes/pkg/proxy/config" - "k8s.io/kubernetes/pkg/proxy/iptables" - proxyutiliptables "k8s.io/kubernetes/pkg/proxy/util/iptables" - utiliptables "k8s.io/kubernetes/pkg/util/iptables" - utilnode "k8s.io/kubernetes/pkg/util/node" - utilexec "k8s.io/utils/exec" - netutils "k8s.io/utils/net" - utilpointer "k8s.io/utils/pointer" - - "k8s.io/klog/v2" -) - -type HollowProxy struct { - ProxyServer *proxyapp.ProxyServer -} - -type FakeProxier struct { - proxyconfig.NoopNodeHandler -} - -func (*FakeProxier) Sync() {} -func (*FakeProxier) SyncLoop() { - select {} -} -func (*FakeProxier) OnServiceAdd(service *v1.Service) {} -func (*FakeProxier) OnServiceUpdate(oldService, service *v1.Service) {} -func (*FakeProxier) OnServiceDelete(service *v1.Service) {} -func (*FakeProxier) OnServiceSynced() {} -func (*FakeProxier) OnEndpointSliceAdd(slice *discoveryv1.EndpointSlice) {} -func (*FakeProxier) OnEndpointSliceUpdate(oldSlice, slice *discoveryv1.EndpointSlice) {} -func (*FakeProxier) OnEndpointSliceDelete(slice *discoveryv1.EndpointSlice) {} -func (*FakeProxier) OnEndpointSlicesSynced() {} - -func NewHollowProxyOrDie( - nodeName string, - client 
clientset.Interface, - eventClient v1core.EventsGetter, - iptInterface utiliptables.Interface, - sysctl utilsysctl.Interface, - execer utilexec.Interface, - broadcaster events.EventBroadcaster, - recorder events.EventRecorder, - useRealProxier bool, - proxierSyncPeriod time.Duration, - proxierMinSyncPeriod time.Duration, -) (*HollowProxy, error) { - // Create proxier and service/endpoint handlers. - var proxier proxy.Provider - var err error - - if useRealProxier { - nodeIP := utilnode.GetNodeIP(client, nodeName) - if nodeIP == nil { - klog.InfoS("Can't determine this node's IP, assuming 127.0.0.1") - nodeIP = netutils.ParseIPSloppy("127.0.0.1") - } - family := v1.IPv4Protocol - if iptInterface.IsIPv6() { - family = v1.IPv6Protocol - } - // Real proxier with fake iptables, sysctl, etc underneath it. - //var err error - proxier, err = iptables.NewProxier( - family, - iptInterface, - sysctl, - execer, - proxierSyncPeriod, - proxierMinSyncPeriod, - false, - false, - 0, - proxyutiliptables.NewNoOpLocalDetector(), - nodeName, - nodeIP, - recorder, - nil, - []string{}, - ) - if err != nil { - return nil, fmt.Errorf("unable to create proxier: %v", err) - } - } else { - proxier = &FakeProxier{} - } - - // Create a Hollow Proxy instance. 
- nodeRef := &v1.ObjectReference{ - Kind: "Node", - Name: nodeName, - UID: types.UID(nodeName), - Namespace: "", - } - return &HollowProxy{ - ProxyServer: &proxyapp.ProxyServer{ - Client: client, - EventClient: eventClient, - IptInterface: iptInterface, - Proxier: proxier, - Broadcaster: broadcaster, - Recorder: recorder, - ProxyMode: "fake", - NodeRef: nodeRef, - OOMScoreAdj: utilpointer.Int32Ptr(0), - ConfigSyncPeriod: 30 * time.Second, - }, - }, nil -} - -func (hp *HollowProxy) Run() error { - if err := hp.ProxyServer.Run(); err != nil { - return fmt.Errorf("Error while running proxy: %w", err) - } - return nil -} diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/probe/http/request.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/probe/http/request.go index 4285c0a4ccbe..fb7f818b2492 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/probe/http/request.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/probe/http/request.go @@ -113,7 +113,7 @@ func formatURL(scheme string, host string, port int, path string) *url.URL { func v1HeaderToHTTPHeader(headerList []v1.HTTPHeader) http.Header { headers := make(http.Header) for _, header := range headerList { - headers[header.Name] = append(headers[header.Name], header.Value) + headers.Add(header.Name, header.Value) } return headers } diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/OWNERS b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/OWNERS index 675d11afb440..c2ea7233a532 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/OWNERS +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/OWNERS @@ -6,3 +6,4 @@ reviewers: - sig-network-reviewers labels: - sig/network + - area/kube-proxy diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/apis/config/OWNERS b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/apis/config/OWNERS deleted file mode 100644 index 08861f7d8521..000000000000 --- 
a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/apis/config/OWNERS +++ /dev/null @@ -1,8 +0,0 @@ -# See the OWNERS docs at https://go.k8s.io/owners - -approvers: - - thockin -reviewers: - - sig-network-reviewers -labels: - - sig/network diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/apis/config/doc.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/apis/config/doc.go deleted file mode 100644 index 64cad5ced197..000000000000 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/apis/config/doc.go +++ /dev/null @@ -1,20 +0,0 @@ -/* -Copyright 2017 The Kubernetes Authors. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -// +k8s:deepcopy-gen=package -// +groupName=kubeproxy.config.k8s.io - -package config // import "k8s.io/kubernetes/pkg/proxy/apis/config" diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/apis/config/register.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/apis/config/register.go deleted file mode 100644 index fda569221cbd..000000000000 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/apis/config/register.go +++ /dev/null @@ -1,43 +0,0 @@ -/* -Copyright 2017 The Kubernetes Authors. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. 
-You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -package config - -import ( - "k8s.io/apimachinery/pkg/runtime" - "k8s.io/apimachinery/pkg/runtime/schema" -) - -// GroupName is the group name used in this package -const GroupName = "kubeproxy.config.k8s.io" - -// SchemeGroupVersion is group version used to register these objects -var SchemeGroupVersion = schema.GroupVersion{Group: GroupName, Version: runtime.APIVersionInternal} - -var ( - // SchemeBuilder is the scheme builder with scheme init functions to run for this API package - SchemeBuilder = runtime.NewSchemeBuilder(addKnownTypes) - // AddToScheme is a global function that registers this API group & version to a scheme - AddToScheme = SchemeBuilder.AddToScheme -) - -// addKnownTypes registers known types to the given scheme -func addKnownTypes(scheme *runtime.Scheme) error { - scheme.AddKnownTypes(SchemeGroupVersion, - &KubeProxyConfiguration{}, - ) - return nil -} diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/apis/config/scheme/scheme.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/apis/config/scheme/scheme.go deleted file mode 100644 index 75e8082b25e1..000000000000 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/apis/config/scheme/scheme.go +++ /dev/null @@ -1,43 +0,0 @@ -/* -Copyright 2017 The Kubernetes Authors. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. 
-You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -package scheme - -import ( - "k8s.io/apimachinery/pkg/runtime" - "k8s.io/apimachinery/pkg/runtime/serializer" - utilruntime "k8s.io/apimachinery/pkg/util/runtime" - "k8s.io/kubernetes/pkg/proxy/apis/config" - "k8s.io/kubernetes/pkg/proxy/apis/config/v1alpha1" -) - -var ( - // Scheme defines methods for serializing and deserializing API objects. - Scheme = runtime.NewScheme() - // Codecs provides methods for retrieving codecs and serializers for specific - // versions and content types. - Codecs = serializer.NewCodecFactory(Scheme, serializer.EnableStrict) -) - -func init() { - AddToScheme(Scheme) -} - -// AddToScheme adds the types of this group into the given scheme. -func AddToScheme(scheme *runtime.Scheme) { - utilruntime.Must(v1alpha1.AddToScheme(scheme)) - utilruntime.Must(config.AddToScheme(scheme)) -} diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/apis/config/types.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/apis/config/types.go deleted file mode 100644 index eec55133ee94..000000000000 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/apis/config/types.go +++ /dev/null @@ -1,281 +0,0 @@ -/* -Copyright 2015 The Kubernetes Authors. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. 
-You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -package config - -import ( - "fmt" - "sort" - "strings" - - metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" - componentbaseconfig "k8s.io/component-base/config" -) - -// KubeProxyIPTablesConfiguration contains iptables-related configuration -// details for the Kubernetes proxy server. -type KubeProxyIPTablesConfiguration struct { - // masqueradeBit is the bit of the iptables fwmark space to use for SNAT if using - // the pure iptables proxy mode. Values must be within the range [0, 31]. - MasqueradeBit *int32 - // masqueradeAll tells kube-proxy to SNAT everything if using the pure iptables proxy mode. - MasqueradeAll bool - // LocalhostNodePorts tells kube-proxy to allow service NodePorts to be accessed via - // localhost (iptables mode only) - LocalhostNodePorts *bool - // syncPeriod is the period that iptables rules are refreshed (e.g. '5s', '1m', - // '2h22m'). Must be greater than 0. - SyncPeriod metav1.Duration - // minSyncPeriod is the minimum period that iptables rules are refreshed (e.g. '5s', '1m', - // '2h22m'). - MinSyncPeriod metav1.Duration -} - -// KubeProxyIPVSConfiguration contains ipvs-related configuration -// details for the Kubernetes proxy server. -type KubeProxyIPVSConfiguration struct { - // syncPeriod is the period that ipvs rules are refreshed (e.g. '5s', '1m', - // '2h22m'). Must be greater than 0. - SyncPeriod metav1.Duration - // minSyncPeriod is the minimum period that ipvs rules are refreshed (e.g. '5s', '1m', - // '2h22m'). 
- MinSyncPeriod metav1.Duration - // ipvs scheduler - Scheduler string - // excludeCIDRs is a list of CIDR's which the ipvs proxier should not touch - // when cleaning up ipvs services. - ExcludeCIDRs []string - // strict ARP configure arp_ignore and arp_announce to avoid answering ARP queries - // from kube-ipvs0 interface - StrictARP bool - // tcpTimeout is the timeout value used for idle IPVS TCP sessions. - // The default value is 0, which preserves the current timeout value on the system. - TCPTimeout metav1.Duration - // tcpFinTimeout is the timeout value used for IPVS TCP sessions after receiving a FIN. - // The default value is 0, which preserves the current timeout value on the system. - TCPFinTimeout metav1.Duration - // udpTimeout is the timeout value used for IPVS UDP packets. - // The default value is 0, which preserves the current timeout value on the system. - UDPTimeout metav1.Duration -} - -// KubeProxyConntrackConfiguration contains conntrack settings for -// the Kubernetes proxy server. -type KubeProxyConntrackConfiguration struct { - // maxPerCore is the maximum number of NAT connections to track - // per CPU core (0 to leave the limit as-is and ignore min). - MaxPerCore *int32 - // min is the minimum value of connect-tracking records to allocate, - // regardless of maxPerCore (set maxPerCore=0 to leave the limit as-is). - Min *int32 - // tcpEstablishedTimeout is how long an idle TCP connection will be kept open - // (e.g. '2s'). Must be greater than 0 to set. - TCPEstablishedTimeout *metav1.Duration - // tcpCloseWaitTimeout is how long an idle conntrack entry - // in CLOSE_WAIT state will remain in the conntrack - // table. (e.g. '60s'). Must be greater than 0 to set. - TCPCloseWaitTimeout *metav1.Duration -} - -// KubeProxyWinkernelConfiguration contains Windows/HNS settings for -// the Kubernetes proxy server. 
-type KubeProxyWinkernelConfiguration struct { - // networkName is the name of the network kube-proxy will use - // to create endpoints and policies - NetworkName string - // sourceVip is the IP address of the source VIP endpoint used for - // NAT when loadbalancing - SourceVip string - // enableDSR tells kube-proxy whether HNS policies should be created - // with DSR - EnableDSR bool - // RootHnsEndpointName is the name of hnsendpoint that is attached to - // l2bridge for root network namespace - RootHnsEndpointName string - // ForwardHealthCheckVip forwards service VIP for health check port on - // Windows - ForwardHealthCheckVip bool -} - -// DetectLocalConfiguration contains optional settings related to DetectLocalMode option -type DetectLocalConfiguration struct { - // BridgeInterface is a string argument which represents a single bridge interface name. - // Kube-proxy considers traffic as local if originating from this given bridge. - // This argument should be set if DetectLocalMode is set to BridgeInterface. - BridgeInterface string - // InterfaceNamePrefix is a string argument which represents a single interface prefix name. - // Kube-proxy considers traffic as local if originating from one or more interfaces which match - // the given prefix. This argument should be set if DetectLocalMode is set to InterfaceNamePrefix. - InterfaceNamePrefix string -} - -// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object - -// KubeProxyConfiguration contains everything necessary to configure the -// Kubernetes proxy server. -type KubeProxyConfiguration struct { - metav1.TypeMeta - - // featureGates is a map of feature names to bools that enable or disable alpha/experimental features. 
- FeatureGates map[string]bool - - // bindAddress is the IP address for the proxy server to serve on (set to 0.0.0.0 - // for all interfaces) - BindAddress string - // healthzBindAddress is the IP address and port for the health check server to serve on, - // defaulting to 0.0.0.0:10256 - HealthzBindAddress string - // metricsBindAddress is the IP address and port for the metrics server to serve on, - // defaulting to 127.0.0.1:10249 (set to 0.0.0.0 for all interfaces) - MetricsBindAddress string - // BindAddressHardFail, if true, kube-proxy will treat failure to bind to a port as fatal and exit - BindAddressHardFail bool - // enableProfiling enables profiling via web interface on /debug/pprof handler. - // Profiling handlers will be handled by metrics server. - EnableProfiling bool - // clusterCIDR is the CIDR range of the pods in the cluster. It is used to - // bridge traffic coming from outside of the cluster. If not provided, - // no off-cluster bridging will be performed. - ClusterCIDR string - // hostnameOverride, if non-empty, will be used as the identity instead of the actual hostname. - HostnameOverride string - // clientConnection specifies the kubeconfig file and client connection settings for the proxy - // server to use when communicating with the apiserver. - ClientConnection componentbaseconfig.ClientConnectionConfiguration - // iptables contains iptables-related configuration options. - IPTables KubeProxyIPTablesConfiguration - // ipvs contains ipvs-related configuration options. - IPVS KubeProxyIPVSConfiguration - // oomScoreAdj is the oom-score-adj value for kube-proxy process. Values must be within - // the range [-1000, 1000] - OOMScoreAdj *int32 - // mode specifies which proxy mode to use. - Mode ProxyMode - // portRange is the range of host ports (beginPort-endPort, inclusive) that may be consumed - // in order to proxy service traffic. If unspecified (0-0) then ports will be randomly chosen. 
- PortRange string - // conntrack contains conntrack-related configuration options. - Conntrack KubeProxyConntrackConfiguration - // configSyncPeriod is how often configuration from the apiserver is refreshed. Must be greater - // than 0. - ConfigSyncPeriod metav1.Duration - // nodePortAddresses is the --nodeport-addresses value for kube-proxy process. Values must be valid - // IP blocks. These values are as a parameter to select the interfaces where nodeport works. - // In case someone would like to expose a service on localhost for local visit and some other interfaces for - // particular purpose, a list of IP blocks would do that. - // If set it to "127.0.0.0/8", kube-proxy will only select the loopback interface for NodePort. - // If set it to a non-zero IP block, kube-proxy will filter that down to just the IPs that applied to the node. - // An empty string slice is meant to select all network interfaces. - NodePortAddresses []string - // winkernel contains winkernel-related configuration options. - Winkernel KubeProxyWinkernelConfiguration - // ShowHiddenMetricsForVersion is the version for which you want to show hidden metrics. - ShowHiddenMetricsForVersion string - // DetectLocalMode determines mode to use for detecting local traffic, defaults to LocalModeClusterCIDR - DetectLocalMode LocalMode - // DetectLocal contains optional configuration settings related to DetectLocalMode. - DetectLocal DetectLocalConfiguration -} - -// ProxyMode represents modes used by the Kubernetes proxy server. -// -// Currently, two modes of proxy are available on Linux platforms: 'iptables' and 'ipvs'. -// One mode of proxy is available on Windows platforms: 'kernelspace'. -// -// If the proxy mode is unspecified, the best-available proxy mode will be used (currently this -// is `iptables` on Linux and `kernelspace` on Windows). 
If the selected proxy mode cannot be -// used (due to lack of kernel support, missing userspace components, etc) then kube-proxy -// will exit with an error. -type ProxyMode string - -const ( - ProxyModeIPTables ProxyMode = "iptables" - ProxyModeIPVS ProxyMode = "ipvs" - ProxyModeKernelspace ProxyMode = "kernelspace" -) - -// LocalMode represents modes to detect local traffic from the node -type LocalMode string - -// Currently supported modes for LocalMode -const ( - LocalModeClusterCIDR LocalMode = "ClusterCIDR" - LocalModeNodeCIDR LocalMode = "NodeCIDR" - LocalModeBridgeInterface LocalMode = "BridgeInterface" - LocalModeInterfaceNamePrefix LocalMode = "InterfaceNamePrefix" -) - -func (m *ProxyMode) Set(s string) error { - *m = ProxyMode(s) - return nil -} - -func (m *ProxyMode) String() string { - if m != nil { - return string(*m) - } - return "" -} - -func (m *ProxyMode) Type() string { - return "ProxyMode" -} - -func (m *LocalMode) Set(s string) error { - *m = LocalMode(s) - return nil -} - -func (m *LocalMode) String() string { - if m != nil { - return string(*m) - } - return "" -} - -func (m *LocalMode) Type() string { - return "LocalMode" -} - -type ConfigurationMap map[string]string - -func (m *ConfigurationMap) String() string { - pairs := []string{} - for k, v := range *m { - pairs = append(pairs, fmt.Sprintf("%s=%s", k, v)) - } - sort.Strings(pairs) - return strings.Join(pairs, ",") -} - -func (m *ConfigurationMap) Set(value string) error { - for _, s := range strings.Split(value, ",") { - if len(s) == 0 { - continue - } - arr := strings.SplitN(s, "=", 2) - if len(arr) == 2 { - (*m)[strings.TrimSpace(arr[0])] = strings.TrimSpace(arr[1]) - } else { - (*m)[strings.TrimSpace(arr[0])] = "" - } - } - return nil -} - -func (*ConfigurationMap) Type() string { - return "mapStringString" -} diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/apis/config/v1alpha1/defaults.go 
b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/apis/config/v1alpha1/defaults.go deleted file mode 100644 index 127913249ad4..000000000000 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/apis/config/v1alpha1/defaults.go +++ /dev/null @@ -1,137 +0,0 @@ -/* -Copyright 2015 The Kubernetes Authors. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -package v1alpha1 - -import ( - "fmt" - "time" - - metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" - kruntime "k8s.io/apimachinery/pkg/runtime" - kubeproxyconfigv1alpha1 "k8s.io/kube-proxy/config/v1alpha1" - - "k8s.io/kubernetes/pkg/cluster/ports" - "k8s.io/kubernetes/pkg/kubelet/qos" - proxyutil "k8s.io/kubernetes/pkg/proxy/util" - netutils "k8s.io/utils/net" - "k8s.io/utils/pointer" -) - -func addDefaultingFuncs(scheme *kruntime.Scheme) error { - return RegisterDefaults(scheme) -} - -func SetDefaults_KubeProxyConfiguration(obj *kubeproxyconfigv1alpha1.KubeProxyConfiguration) { - - if len(obj.BindAddress) == 0 { - obj.BindAddress = "0.0.0.0" - } - - defaultHealthzAddress, defaultMetricsAddress := getDefaultAddresses(obj.BindAddress) - - if obj.HealthzBindAddress == "" { - obj.HealthzBindAddress = fmt.Sprintf("%s:%v", defaultHealthzAddress, ports.ProxyHealthzPort) - } else { - obj.HealthzBindAddress = proxyutil.AppendPortIfNeeded(obj.HealthzBindAddress, ports.ProxyHealthzPort) - } - if obj.MetricsBindAddress == "" { - obj.MetricsBindAddress = fmt.Sprintf("%s:%v", defaultMetricsAddress, ports.ProxyStatusPort) - } else { - 
obj.MetricsBindAddress = proxyutil.AppendPortIfNeeded(obj.MetricsBindAddress, ports.ProxyStatusPort) - } - - if obj.OOMScoreAdj == nil { - temp := int32(qos.KubeProxyOOMScoreAdj) - obj.OOMScoreAdj = &temp - } - if obj.IPTables.SyncPeriod.Duration == 0 { - obj.IPTables.SyncPeriod = metav1.Duration{Duration: 30 * time.Second} - } - if obj.IPTables.MinSyncPeriod.Duration == 0 { - obj.IPTables.MinSyncPeriod = metav1.Duration{Duration: 1 * time.Second} - } - if obj.IPTables.LocalhostNodePorts == nil { - obj.IPTables.LocalhostNodePorts = pointer.Bool(true) - } - if obj.IPVS.SyncPeriod.Duration == 0 { - obj.IPVS.SyncPeriod = metav1.Duration{Duration: 30 * time.Second} - } - - if obj.Conntrack.MaxPerCore == nil { - obj.Conntrack.MaxPerCore = pointer.Int32(32 * 1024) - } - if obj.Conntrack.Min == nil { - obj.Conntrack.Min = pointer.Int32(128 * 1024) - } - - if obj.IPTables.MasqueradeBit == nil { - temp := int32(14) - obj.IPTables.MasqueradeBit = &temp - } - if obj.Conntrack.TCPEstablishedTimeout == nil { - obj.Conntrack.TCPEstablishedTimeout = &metav1.Duration{Duration: 24 * time.Hour} // 1 day (1/5 default) - } - if obj.Conntrack.TCPCloseWaitTimeout == nil { - // See https://github.com/kubernetes/kubernetes/issues/32551. - // - // CLOSE_WAIT conntrack state occurs when the Linux kernel - // sees a FIN from the remote server. Note: this is a half-close - // condition that persists as long as the local side keeps the - // socket open. The condition is rare as it is typical in most - // protocols for both sides to issue a close; this typically - // occurs when the local socket is lazily garbage collected. - // - // If the CLOSE_WAIT conntrack entry expires, then FINs from the - // local socket will not be properly SNAT'd and will not reach the - // remote server (if the connection was subject to SNAT). 
If the - // remote timeouts for FIN_WAIT* states exceed the CLOSE_WAIT - // timeout, then there will be an inconsistency in the state of - // the connection and a new connection reusing the SNAT (src, - // port) pair may be rejected by the remote side with RST. This - // can cause new calls to connect(2) to return with ECONNREFUSED. - // - // We set CLOSE_WAIT to one hour by default to better match - // typical server timeouts. - obj.Conntrack.TCPCloseWaitTimeout = &metav1.Duration{Duration: 1 * time.Hour} - } - if obj.ConfigSyncPeriod.Duration == 0 { - obj.ConfigSyncPeriod.Duration = 15 * time.Minute - } - - if len(obj.ClientConnection.ContentType) == 0 { - obj.ClientConnection.ContentType = "application/vnd.kubernetes.protobuf" - } - if obj.ClientConnection.QPS == 0.0 { - obj.ClientConnection.QPS = 5.0 - } - if obj.ClientConnection.Burst == 0 { - obj.ClientConnection.Burst = 10 - } - if obj.FeatureGates == nil { - obj.FeatureGates = make(map[string]bool) - } -} - -// getDefaultAddresses returns default address of healthz and metrics server -// based on the given bind address. IPv6 addresses are enclosed in square -// brackets for appending port. -func getDefaultAddresses(bindAddress string) (defaultHealthzAddress, defaultMetricsAddress string) { - if netutils.ParseIPSloppy(bindAddress).To4() != nil { - return "0.0.0.0", "127.0.0.1" - } - return "[::]", "[::1]" -} diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/apis/config/v1alpha1/doc.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/apis/config/v1alpha1/doc.go deleted file mode 100644 index a67f856a8d04..000000000000 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/apis/config/v1alpha1/doc.go +++ /dev/null @@ -1,24 +0,0 @@ -/* -Copyright 2017 The Kubernetes Authors. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. 
-You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -// +k8s:deepcopy-gen=package -// +k8s:conversion-gen=k8s.io/kubernetes/pkg/proxy/apis/config -// +k8s:conversion-gen-external-types=k8s.io/kube-proxy/config/v1alpha1 -// +k8s:defaulter-gen=TypeMeta -// +k8s:defaulter-gen-input=k8s.io/kube-proxy/config/v1alpha1 -// +groupName=kubeproxy.config.k8s.io - -package v1alpha1 // import "k8s.io/kubernetes/pkg/proxy/apis/config/v1alpha1" diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/apis/config/v1alpha1/register.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/apis/config/v1alpha1/register.go deleted file mode 100644 index a053b1e77839..000000000000 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/apis/config/v1alpha1/register.go +++ /dev/null @@ -1,43 +0,0 @@ -/* -Copyright 2015 The Kubernetes Authors. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. 
-*/ - -package v1alpha1 - -import ( - "k8s.io/apimachinery/pkg/runtime/schema" - kubeproxyconfigv1alpha1 "k8s.io/kube-proxy/config/v1alpha1" -) - -// GroupName is the group name used in this package -const GroupName = "kubeproxy.config.k8s.io" - -// SchemeGroupVersion is group version used to register these objects -var SchemeGroupVersion = schema.GroupVersion{Group: GroupName, Version: "v1alpha1"} - -var ( - // localSchemeBuilder extends the SchemeBuilder instance with the external types. In this package, - // defaulting and conversion init funcs are registered as well. - localSchemeBuilder = &kubeproxyconfigv1alpha1.SchemeBuilder - // AddToScheme is a global function that registers this API group & version to a scheme - AddToScheme = localSchemeBuilder.AddToScheme -) - -func init() { - // We only register manually written functions here. The registration of the - // generated functions takes place in the generated files. The separation - // makes the code compile even when the generated files are missing. - localSchemeBuilder.Register(addDefaultingFuncs) -} diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/apis/config/v1alpha1/zz_generated.conversion.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/apis/config/v1alpha1/zz_generated.conversion.go deleted file mode 100644 index ab8c42ba754f..000000000000 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/apis/config/v1alpha1/zz_generated.conversion.go +++ /dev/null @@ -1,325 +0,0 @@ -//go:build !ignore_autogenerated -// +build !ignore_autogenerated - -/* -Copyright The Kubernetes Authors. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. 
-You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -// Code generated by conversion-gen. DO NOT EDIT. - -package v1alpha1 - -import ( - unsafe "unsafe" - - v1 "k8s.io/apimachinery/pkg/apis/meta/v1" - conversion "k8s.io/apimachinery/pkg/conversion" - runtime "k8s.io/apimachinery/pkg/runtime" - configv1alpha1 "k8s.io/component-base/config/v1alpha1" - v1alpha1 "k8s.io/kube-proxy/config/v1alpha1" - config "k8s.io/kubernetes/pkg/proxy/apis/config" -) - -func init() { - localSchemeBuilder.Register(RegisterConversions) -} - -// RegisterConversions adds conversion functions to the given scheme. -// Public to allow building arbitrary schemes. -func RegisterConversions(s *runtime.Scheme) error { - if err := s.AddGeneratedConversionFunc((*v1alpha1.DetectLocalConfiguration)(nil), (*config.DetectLocalConfiguration)(nil), func(a, b interface{}, scope conversion.Scope) error { - return Convert_v1alpha1_DetectLocalConfiguration_To_config_DetectLocalConfiguration(a.(*v1alpha1.DetectLocalConfiguration), b.(*config.DetectLocalConfiguration), scope) - }); err != nil { - return err - } - if err := s.AddGeneratedConversionFunc((*config.DetectLocalConfiguration)(nil), (*v1alpha1.DetectLocalConfiguration)(nil), func(a, b interface{}, scope conversion.Scope) error { - return Convert_config_DetectLocalConfiguration_To_v1alpha1_DetectLocalConfiguration(a.(*config.DetectLocalConfiguration), b.(*v1alpha1.DetectLocalConfiguration), scope) - }); err != nil { - return err - } - if err := s.AddGeneratedConversionFunc((*v1alpha1.KubeProxyConfiguration)(nil), (*config.KubeProxyConfiguration)(nil), func(a, b interface{}, scope conversion.Scope) error 
{ - return Convert_v1alpha1_KubeProxyConfiguration_To_config_KubeProxyConfiguration(a.(*v1alpha1.KubeProxyConfiguration), b.(*config.KubeProxyConfiguration), scope) - }); err != nil { - return err - } - if err := s.AddGeneratedConversionFunc((*config.KubeProxyConfiguration)(nil), (*v1alpha1.KubeProxyConfiguration)(nil), func(a, b interface{}, scope conversion.Scope) error { - return Convert_config_KubeProxyConfiguration_To_v1alpha1_KubeProxyConfiguration(a.(*config.KubeProxyConfiguration), b.(*v1alpha1.KubeProxyConfiguration), scope) - }); err != nil { - return err - } - if err := s.AddGeneratedConversionFunc((*v1alpha1.KubeProxyConntrackConfiguration)(nil), (*config.KubeProxyConntrackConfiguration)(nil), func(a, b interface{}, scope conversion.Scope) error { - return Convert_v1alpha1_KubeProxyConntrackConfiguration_To_config_KubeProxyConntrackConfiguration(a.(*v1alpha1.KubeProxyConntrackConfiguration), b.(*config.KubeProxyConntrackConfiguration), scope) - }); err != nil { - return err - } - if err := s.AddGeneratedConversionFunc((*config.KubeProxyConntrackConfiguration)(nil), (*v1alpha1.KubeProxyConntrackConfiguration)(nil), func(a, b interface{}, scope conversion.Scope) error { - return Convert_config_KubeProxyConntrackConfiguration_To_v1alpha1_KubeProxyConntrackConfiguration(a.(*config.KubeProxyConntrackConfiguration), b.(*v1alpha1.KubeProxyConntrackConfiguration), scope) - }); err != nil { - return err - } - if err := s.AddGeneratedConversionFunc((*v1alpha1.KubeProxyIPTablesConfiguration)(nil), (*config.KubeProxyIPTablesConfiguration)(nil), func(a, b interface{}, scope conversion.Scope) error { - return Convert_v1alpha1_KubeProxyIPTablesConfiguration_To_config_KubeProxyIPTablesConfiguration(a.(*v1alpha1.KubeProxyIPTablesConfiguration), b.(*config.KubeProxyIPTablesConfiguration), scope) - }); err != nil { - return err - } - if err := s.AddGeneratedConversionFunc((*config.KubeProxyIPTablesConfiguration)(nil), (*v1alpha1.KubeProxyIPTablesConfiguration)(nil), 
func(a, b interface{}, scope conversion.Scope) error { - return Convert_config_KubeProxyIPTablesConfiguration_To_v1alpha1_KubeProxyIPTablesConfiguration(a.(*config.KubeProxyIPTablesConfiguration), b.(*v1alpha1.KubeProxyIPTablesConfiguration), scope) - }); err != nil { - return err - } - if err := s.AddGeneratedConversionFunc((*v1alpha1.KubeProxyIPVSConfiguration)(nil), (*config.KubeProxyIPVSConfiguration)(nil), func(a, b interface{}, scope conversion.Scope) error { - return Convert_v1alpha1_KubeProxyIPVSConfiguration_To_config_KubeProxyIPVSConfiguration(a.(*v1alpha1.KubeProxyIPVSConfiguration), b.(*config.KubeProxyIPVSConfiguration), scope) - }); err != nil { - return err - } - if err := s.AddGeneratedConversionFunc((*config.KubeProxyIPVSConfiguration)(nil), (*v1alpha1.KubeProxyIPVSConfiguration)(nil), func(a, b interface{}, scope conversion.Scope) error { - return Convert_config_KubeProxyIPVSConfiguration_To_v1alpha1_KubeProxyIPVSConfiguration(a.(*config.KubeProxyIPVSConfiguration), b.(*v1alpha1.KubeProxyIPVSConfiguration), scope) - }); err != nil { - return err - } - if err := s.AddGeneratedConversionFunc((*v1alpha1.KubeProxyWinkernelConfiguration)(nil), (*config.KubeProxyWinkernelConfiguration)(nil), func(a, b interface{}, scope conversion.Scope) error { - return Convert_v1alpha1_KubeProxyWinkernelConfiguration_To_config_KubeProxyWinkernelConfiguration(a.(*v1alpha1.KubeProxyWinkernelConfiguration), b.(*config.KubeProxyWinkernelConfiguration), scope) - }); err != nil { - return err - } - if err := s.AddGeneratedConversionFunc((*config.KubeProxyWinkernelConfiguration)(nil), (*v1alpha1.KubeProxyWinkernelConfiguration)(nil), func(a, b interface{}, scope conversion.Scope) error { - return Convert_config_KubeProxyWinkernelConfiguration_To_v1alpha1_KubeProxyWinkernelConfiguration(a.(*config.KubeProxyWinkernelConfiguration), b.(*v1alpha1.KubeProxyWinkernelConfiguration), scope) - }); err != nil { - return err - } - return nil -} - -func 
autoConvert_v1alpha1_DetectLocalConfiguration_To_config_DetectLocalConfiguration(in *v1alpha1.DetectLocalConfiguration, out *config.DetectLocalConfiguration, s conversion.Scope) error { - out.BridgeInterface = in.BridgeInterface - out.InterfaceNamePrefix = in.InterfaceNamePrefix - return nil -} - -// Convert_v1alpha1_DetectLocalConfiguration_To_config_DetectLocalConfiguration is an autogenerated conversion function. -func Convert_v1alpha1_DetectLocalConfiguration_To_config_DetectLocalConfiguration(in *v1alpha1.DetectLocalConfiguration, out *config.DetectLocalConfiguration, s conversion.Scope) error { - return autoConvert_v1alpha1_DetectLocalConfiguration_To_config_DetectLocalConfiguration(in, out, s) -} - -func autoConvert_config_DetectLocalConfiguration_To_v1alpha1_DetectLocalConfiguration(in *config.DetectLocalConfiguration, out *v1alpha1.DetectLocalConfiguration, s conversion.Scope) error { - out.BridgeInterface = in.BridgeInterface - out.InterfaceNamePrefix = in.InterfaceNamePrefix - return nil -} - -// Convert_config_DetectLocalConfiguration_To_v1alpha1_DetectLocalConfiguration is an autogenerated conversion function. 
-func Convert_config_DetectLocalConfiguration_To_v1alpha1_DetectLocalConfiguration(in *config.DetectLocalConfiguration, out *v1alpha1.DetectLocalConfiguration, s conversion.Scope) error { - return autoConvert_config_DetectLocalConfiguration_To_v1alpha1_DetectLocalConfiguration(in, out, s) -} - -func autoConvert_v1alpha1_KubeProxyConfiguration_To_config_KubeProxyConfiguration(in *v1alpha1.KubeProxyConfiguration, out *config.KubeProxyConfiguration, s conversion.Scope) error { - out.FeatureGates = *(*map[string]bool)(unsafe.Pointer(&in.FeatureGates)) - out.BindAddress = in.BindAddress - out.HealthzBindAddress = in.HealthzBindAddress - out.MetricsBindAddress = in.MetricsBindAddress - out.BindAddressHardFail = in.BindAddressHardFail - out.EnableProfiling = in.EnableProfiling - out.ClusterCIDR = in.ClusterCIDR - out.HostnameOverride = in.HostnameOverride - if err := configv1alpha1.Convert_v1alpha1_ClientConnectionConfiguration_To_config_ClientConnectionConfiguration(&in.ClientConnection, &out.ClientConnection, s); err != nil { - return err - } - if err := Convert_v1alpha1_KubeProxyIPTablesConfiguration_To_config_KubeProxyIPTablesConfiguration(&in.IPTables, &out.IPTables, s); err != nil { - return err - } - if err := Convert_v1alpha1_KubeProxyIPVSConfiguration_To_config_KubeProxyIPVSConfiguration(&in.IPVS, &out.IPVS, s); err != nil { - return err - } - out.OOMScoreAdj = (*int32)(unsafe.Pointer(in.OOMScoreAdj)) - out.Mode = config.ProxyMode(in.Mode) - out.PortRange = in.PortRange - if err := Convert_v1alpha1_KubeProxyConntrackConfiguration_To_config_KubeProxyConntrackConfiguration(&in.Conntrack, &out.Conntrack, s); err != nil { - return err - } - out.ConfigSyncPeriod = in.ConfigSyncPeriod - out.NodePortAddresses = *(*[]string)(unsafe.Pointer(&in.NodePortAddresses)) - if err := Convert_v1alpha1_KubeProxyWinkernelConfiguration_To_config_KubeProxyWinkernelConfiguration(&in.Winkernel, &out.Winkernel, s); err != nil { - return err - } - out.ShowHiddenMetricsForVersion = 
in.ShowHiddenMetricsForVersion - out.DetectLocalMode = config.LocalMode(in.DetectLocalMode) - if err := Convert_v1alpha1_DetectLocalConfiguration_To_config_DetectLocalConfiguration(&in.DetectLocal, &out.DetectLocal, s); err != nil { - return err - } - return nil -} - -// Convert_v1alpha1_KubeProxyConfiguration_To_config_KubeProxyConfiguration is an autogenerated conversion function. -func Convert_v1alpha1_KubeProxyConfiguration_To_config_KubeProxyConfiguration(in *v1alpha1.KubeProxyConfiguration, out *config.KubeProxyConfiguration, s conversion.Scope) error { - return autoConvert_v1alpha1_KubeProxyConfiguration_To_config_KubeProxyConfiguration(in, out, s) -} - -func autoConvert_config_KubeProxyConfiguration_To_v1alpha1_KubeProxyConfiguration(in *config.KubeProxyConfiguration, out *v1alpha1.KubeProxyConfiguration, s conversion.Scope) error { - out.FeatureGates = *(*map[string]bool)(unsafe.Pointer(&in.FeatureGates)) - out.BindAddress = in.BindAddress - out.HealthzBindAddress = in.HealthzBindAddress - out.MetricsBindAddress = in.MetricsBindAddress - out.BindAddressHardFail = in.BindAddressHardFail - out.EnableProfiling = in.EnableProfiling - out.ClusterCIDR = in.ClusterCIDR - out.HostnameOverride = in.HostnameOverride - if err := configv1alpha1.Convert_config_ClientConnectionConfiguration_To_v1alpha1_ClientConnectionConfiguration(&in.ClientConnection, &out.ClientConnection, s); err != nil { - return err - } - if err := Convert_config_KubeProxyIPTablesConfiguration_To_v1alpha1_KubeProxyIPTablesConfiguration(&in.IPTables, &out.IPTables, s); err != nil { - return err - } - if err := Convert_config_KubeProxyIPVSConfiguration_To_v1alpha1_KubeProxyIPVSConfiguration(&in.IPVS, &out.IPVS, s); err != nil { - return err - } - out.OOMScoreAdj = (*int32)(unsafe.Pointer(in.OOMScoreAdj)) - out.Mode = v1alpha1.ProxyMode(in.Mode) - out.PortRange = in.PortRange - if err := Convert_config_KubeProxyConntrackConfiguration_To_v1alpha1_KubeProxyConntrackConfiguration(&in.Conntrack, 
&out.Conntrack, s); err != nil { - return err - } - out.ConfigSyncPeriod = in.ConfigSyncPeriod - out.NodePortAddresses = *(*[]string)(unsafe.Pointer(&in.NodePortAddresses)) - if err := Convert_config_KubeProxyWinkernelConfiguration_To_v1alpha1_KubeProxyWinkernelConfiguration(&in.Winkernel, &out.Winkernel, s); err != nil { - return err - } - out.ShowHiddenMetricsForVersion = in.ShowHiddenMetricsForVersion - out.DetectLocalMode = v1alpha1.LocalMode(in.DetectLocalMode) - if err := Convert_config_DetectLocalConfiguration_To_v1alpha1_DetectLocalConfiguration(&in.DetectLocal, &out.DetectLocal, s); err != nil { - return err - } - return nil -} - -// Convert_config_KubeProxyConfiguration_To_v1alpha1_KubeProxyConfiguration is an autogenerated conversion function. -func Convert_config_KubeProxyConfiguration_To_v1alpha1_KubeProxyConfiguration(in *config.KubeProxyConfiguration, out *v1alpha1.KubeProxyConfiguration, s conversion.Scope) error { - return autoConvert_config_KubeProxyConfiguration_To_v1alpha1_KubeProxyConfiguration(in, out, s) -} - -func autoConvert_v1alpha1_KubeProxyConntrackConfiguration_To_config_KubeProxyConntrackConfiguration(in *v1alpha1.KubeProxyConntrackConfiguration, out *config.KubeProxyConntrackConfiguration, s conversion.Scope) error { - out.MaxPerCore = (*int32)(unsafe.Pointer(in.MaxPerCore)) - out.Min = (*int32)(unsafe.Pointer(in.Min)) - out.TCPEstablishedTimeout = (*v1.Duration)(unsafe.Pointer(in.TCPEstablishedTimeout)) - out.TCPCloseWaitTimeout = (*v1.Duration)(unsafe.Pointer(in.TCPCloseWaitTimeout)) - return nil -} - -// Convert_v1alpha1_KubeProxyConntrackConfiguration_To_config_KubeProxyConntrackConfiguration is an autogenerated conversion function. 
-func Convert_v1alpha1_KubeProxyConntrackConfiguration_To_config_KubeProxyConntrackConfiguration(in *v1alpha1.KubeProxyConntrackConfiguration, out *config.KubeProxyConntrackConfiguration, s conversion.Scope) error { - return autoConvert_v1alpha1_KubeProxyConntrackConfiguration_To_config_KubeProxyConntrackConfiguration(in, out, s) -} - -func autoConvert_config_KubeProxyConntrackConfiguration_To_v1alpha1_KubeProxyConntrackConfiguration(in *config.KubeProxyConntrackConfiguration, out *v1alpha1.KubeProxyConntrackConfiguration, s conversion.Scope) error { - out.MaxPerCore = (*int32)(unsafe.Pointer(in.MaxPerCore)) - out.Min = (*int32)(unsafe.Pointer(in.Min)) - out.TCPEstablishedTimeout = (*v1.Duration)(unsafe.Pointer(in.TCPEstablishedTimeout)) - out.TCPCloseWaitTimeout = (*v1.Duration)(unsafe.Pointer(in.TCPCloseWaitTimeout)) - return nil -} - -// Convert_config_KubeProxyConntrackConfiguration_To_v1alpha1_KubeProxyConntrackConfiguration is an autogenerated conversion function. -func Convert_config_KubeProxyConntrackConfiguration_To_v1alpha1_KubeProxyConntrackConfiguration(in *config.KubeProxyConntrackConfiguration, out *v1alpha1.KubeProxyConntrackConfiguration, s conversion.Scope) error { - return autoConvert_config_KubeProxyConntrackConfiguration_To_v1alpha1_KubeProxyConntrackConfiguration(in, out, s) -} - -func autoConvert_v1alpha1_KubeProxyIPTablesConfiguration_To_config_KubeProxyIPTablesConfiguration(in *v1alpha1.KubeProxyIPTablesConfiguration, out *config.KubeProxyIPTablesConfiguration, s conversion.Scope) error { - out.MasqueradeBit = (*int32)(unsafe.Pointer(in.MasqueradeBit)) - out.MasqueradeAll = in.MasqueradeAll - out.LocalhostNodePorts = (*bool)(unsafe.Pointer(in.LocalhostNodePorts)) - out.SyncPeriod = in.SyncPeriod - out.MinSyncPeriod = in.MinSyncPeriod - return nil -} - -// Convert_v1alpha1_KubeProxyIPTablesConfiguration_To_config_KubeProxyIPTablesConfiguration is an autogenerated conversion function. 
-func Convert_v1alpha1_KubeProxyIPTablesConfiguration_To_config_KubeProxyIPTablesConfiguration(in *v1alpha1.KubeProxyIPTablesConfiguration, out *config.KubeProxyIPTablesConfiguration, s conversion.Scope) error { - return autoConvert_v1alpha1_KubeProxyIPTablesConfiguration_To_config_KubeProxyIPTablesConfiguration(in, out, s) -} - -func autoConvert_config_KubeProxyIPTablesConfiguration_To_v1alpha1_KubeProxyIPTablesConfiguration(in *config.KubeProxyIPTablesConfiguration, out *v1alpha1.KubeProxyIPTablesConfiguration, s conversion.Scope) error { - out.MasqueradeBit = (*int32)(unsafe.Pointer(in.MasqueradeBit)) - out.MasqueradeAll = in.MasqueradeAll - out.LocalhostNodePorts = (*bool)(unsafe.Pointer(in.LocalhostNodePorts)) - out.SyncPeriod = in.SyncPeriod - out.MinSyncPeriod = in.MinSyncPeriod - return nil -} - -// Convert_config_KubeProxyIPTablesConfiguration_To_v1alpha1_KubeProxyIPTablesConfiguration is an autogenerated conversion function. -func Convert_config_KubeProxyIPTablesConfiguration_To_v1alpha1_KubeProxyIPTablesConfiguration(in *config.KubeProxyIPTablesConfiguration, out *v1alpha1.KubeProxyIPTablesConfiguration, s conversion.Scope) error { - return autoConvert_config_KubeProxyIPTablesConfiguration_To_v1alpha1_KubeProxyIPTablesConfiguration(in, out, s) -} - -func autoConvert_v1alpha1_KubeProxyIPVSConfiguration_To_config_KubeProxyIPVSConfiguration(in *v1alpha1.KubeProxyIPVSConfiguration, out *config.KubeProxyIPVSConfiguration, s conversion.Scope) error { - out.SyncPeriod = in.SyncPeriod - out.MinSyncPeriod = in.MinSyncPeriod - out.Scheduler = in.Scheduler - out.ExcludeCIDRs = *(*[]string)(unsafe.Pointer(&in.ExcludeCIDRs)) - out.StrictARP = in.StrictARP - out.TCPTimeout = in.TCPTimeout - out.TCPFinTimeout = in.TCPFinTimeout - out.UDPTimeout = in.UDPTimeout - return nil -} - -// Convert_v1alpha1_KubeProxyIPVSConfiguration_To_config_KubeProxyIPVSConfiguration is an autogenerated conversion function. 
-func Convert_v1alpha1_KubeProxyIPVSConfiguration_To_config_KubeProxyIPVSConfiguration(in *v1alpha1.KubeProxyIPVSConfiguration, out *config.KubeProxyIPVSConfiguration, s conversion.Scope) error { - return autoConvert_v1alpha1_KubeProxyIPVSConfiguration_To_config_KubeProxyIPVSConfiguration(in, out, s) -} - -func autoConvert_config_KubeProxyIPVSConfiguration_To_v1alpha1_KubeProxyIPVSConfiguration(in *config.KubeProxyIPVSConfiguration, out *v1alpha1.KubeProxyIPVSConfiguration, s conversion.Scope) error { - out.SyncPeriod = in.SyncPeriod - out.MinSyncPeriod = in.MinSyncPeriod - out.Scheduler = in.Scheduler - out.ExcludeCIDRs = *(*[]string)(unsafe.Pointer(&in.ExcludeCIDRs)) - out.StrictARP = in.StrictARP - out.TCPTimeout = in.TCPTimeout - out.TCPFinTimeout = in.TCPFinTimeout - out.UDPTimeout = in.UDPTimeout - return nil -} - -// Convert_config_KubeProxyIPVSConfiguration_To_v1alpha1_KubeProxyIPVSConfiguration is an autogenerated conversion function. -func Convert_config_KubeProxyIPVSConfiguration_To_v1alpha1_KubeProxyIPVSConfiguration(in *config.KubeProxyIPVSConfiguration, out *v1alpha1.KubeProxyIPVSConfiguration, s conversion.Scope) error { - return autoConvert_config_KubeProxyIPVSConfiguration_To_v1alpha1_KubeProxyIPVSConfiguration(in, out, s) -} - -func autoConvert_v1alpha1_KubeProxyWinkernelConfiguration_To_config_KubeProxyWinkernelConfiguration(in *v1alpha1.KubeProxyWinkernelConfiguration, out *config.KubeProxyWinkernelConfiguration, s conversion.Scope) error { - out.NetworkName = in.NetworkName - out.SourceVip = in.SourceVip - out.EnableDSR = in.EnableDSR - out.RootHnsEndpointName = in.RootHnsEndpointName - out.ForwardHealthCheckVip = in.ForwardHealthCheckVip - return nil -} - -// Convert_v1alpha1_KubeProxyWinkernelConfiguration_To_config_KubeProxyWinkernelConfiguration is an autogenerated conversion function. 
-func Convert_v1alpha1_KubeProxyWinkernelConfiguration_To_config_KubeProxyWinkernelConfiguration(in *v1alpha1.KubeProxyWinkernelConfiguration, out *config.KubeProxyWinkernelConfiguration, s conversion.Scope) error { - return autoConvert_v1alpha1_KubeProxyWinkernelConfiguration_To_config_KubeProxyWinkernelConfiguration(in, out, s) -} - -func autoConvert_config_KubeProxyWinkernelConfiguration_To_v1alpha1_KubeProxyWinkernelConfiguration(in *config.KubeProxyWinkernelConfiguration, out *v1alpha1.KubeProxyWinkernelConfiguration, s conversion.Scope) error { - out.NetworkName = in.NetworkName - out.SourceVip = in.SourceVip - out.EnableDSR = in.EnableDSR - out.RootHnsEndpointName = in.RootHnsEndpointName - out.ForwardHealthCheckVip = in.ForwardHealthCheckVip - return nil -} - -// Convert_config_KubeProxyWinkernelConfiguration_To_v1alpha1_KubeProxyWinkernelConfiguration is an autogenerated conversion function. -func Convert_config_KubeProxyWinkernelConfiguration_To_v1alpha1_KubeProxyWinkernelConfiguration(in *config.KubeProxyWinkernelConfiguration, out *v1alpha1.KubeProxyWinkernelConfiguration, s conversion.Scope) error { - return autoConvert_config_KubeProxyWinkernelConfiguration_To_v1alpha1_KubeProxyWinkernelConfiguration(in, out, s) -} diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/apis/config/v1alpha1/zz_generated.defaults.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/apis/config/v1alpha1/zz_generated.defaults.go deleted file mode 100644 index a4e7946ec04b..000000000000 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/apis/config/v1alpha1/zz_generated.defaults.go +++ /dev/null @@ -1,41 +0,0 @@ -//go:build !ignore_autogenerated -// +build !ignore_autogenerated - -/* -Copyright The Kubernetes Authors. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. 
-You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -// Code generated by defaulter-gen. DO NOT EDIT. - -package v1alpha1 - -import ( - runtime "k8s.io/apimachinery/pkg/runtime" - v1alpha1 "k8s.io/kube-proxy/config/v1alpha1" -) - -// RegisterDefaults adds defaulters functions to the given scheme. -// Public to allow building arbitrary schemes. -// All generated defaulters are covering - they call all nested defaulters. -func RegisterDefaults(scheme *runtime.Scheme) error { - scheme.AddTypeDefaultingFunc(&v1alpha1.KubeProxyConfiguration{}, func(obj interface{}) { - SetObjectDefaults_KubeProxyConfiguration(obj.(*v1alpha1.KubeProxyConfiguration)) - }) - return nil -} - -func SetObjectDefaults_KubeProxyConfiguration(in *v1alpha1.KubeProxyConfiguration) { - SetDefaults_KubeProxyConfiguration(in) -} diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/apis/config/validation/validation.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/apis/config/validation/validation.go deleted file mode 100644 index 607a5928aaaa..000000000000 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/apis/config/validation/validation.go +++ /dev/null @@ -1,293 +0,0 @@ -/* -Copyright 2017 The Kubernetes Authors. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. 
-You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -package validation - -import ( - "fmt" - "net" - "runtime" - "strconv" - "strings" - - utilnet "k8s.io/apimachinery/pkg/util/net" - "k8s.io/apimachinery/pkg/util/sets" - "k8s.io/apimachinery/pkg/util/validation/field" - utilfeature "k8s.io/apiserver/pkg/util/feature" - componentbaseconfig "k8s.io/component-base/config" - "k8s.io/component-base/metrics" - apivalidation "k8s.io/kubernetes/pkg/apis/core/validation" - kubeproxyconfig "k8s.io/kubernetes/pkg/proxy/apis/config" - netutils "k8s.io/utils/net" -) - -// Validate validates the configuration of kube-proxy -func Validate(config *kubeproxyconfig.KubeProxyConfiguration) field.ErrorList { - allErrs := field.ErrorList{} - - newPath := field.NewPath("KubeProxyConfiguration") - - effectiveFeatures := utilfeature.DefaultFeatureGate.DeepCopy() - if err := effectiveFeatures.SetFromMap(config.FeatureGates); err != nil { - allErrs = append(allErrs, field.Invalid(newPath.Child("featureGates"), config.FeatureGates, err.Error())) - } - - allErrs = append(allErrs, validateKubeProxyIPTablesConfiguration(config.IPTables, newPath.Child("KubeProxyIPTablesConfiguration"))...) - if config.Mode == kubeproxyconfig.ProxyModeIPVS { - allErrs = append(allErrs, validateKubeProxyIPVSConfiguration(config.IPVS, newPath.Child("KubeProxyIPVSConfiguration"))...) - } - allErrs = append(allErrs, validateKubeProxyConntrackConfiguration(config.Conntrack, newPath.Child("KubeProxyConntrackConfiguration"))...) - allErrs = append(allErrs, validateProxyMode(config.Mode, newPath.Child("Mode"))...) 
- allErrs = append(allErrs, validateClientConnectionConfiguration(config.ClientConnection, newPath.Child("ClientConnection"))...) - - if config.OOMScoreAdj != nil && (*config.OOMScoreAdj < -1000 || *config.OOMScoreAdj > 1000) { - allErrs = append(allErrs, field.Invalid(newPath.Child("OOMScoreAdj"), *config.OOMScoreAdj, "must be within the range [-1000, 1000]")) - } - - if config.ConfigSyncPeriod.Duration <= 0 { - allErrs = append(allErrs, field.Invalid(newPath.Child("ConfigSyncPeriod"), config.ConfigSyncPeriod, "must be greater than 0")) - } - - if netutils.ParseIPSloppy(config.BindAddress) == nil { - allErrs = append(allErrs, field.Invalid(newPath.Child("BindAddress"), config.BindAddress, "not a valid textual representation of an IP address")) - } - - if config.HealthzBindAddress != "" { - allErrs = append(allErrs, validateHostPort(config.HealthzBindAddress, newPath.Child("HealthzBindAddress"))...) - } - allErrs = append(allErrs, validateHostPort(config.MetricsBindAddress, newPath.Child("MetricsBindAddress"))...) - - if config.ClusterCIDR != "" { - cidrs := strings.Split(config.ClusterCIDR, ",") - switch { - case len(cidrs) > 2: - allErrs = append(allErrs, field.Invalid(newPath.Child("ClusterCIDR"), config.ClusterCIDR, "only one CIDR allowed or a valid DualStack CIDR (e.g. 10.100.0.0/16,fde4:8dba:82e1::/48)")) - // if DualStack and two cidrs validate if there is at least one of each IP family - case len(cidrs) == 2: - isDual, err := netutils.IsDualStackCIDRStrings(cidrs) - if err != nil || !isDual { - allErrs = append(allErrs, field.Invalid(newPath.Child("ClusterCIDR"), config.ClusterCIDR, "must be a valid DualStack CIDR (e.g. 10.100.0.0/16,fde4:8dba:82e1::/48)")) - } - // if we are here means that len(cidrs) == 1, we need to validate it - default: - if _, _, err := netutils.ParseCIDRSloppy(config.ClusterCIDR); err != nil { - allErrs = append(allErrs, field.Invalid(newPath.Child("ClusterCIDR"), config.ClusterCIDR, "must be a valid CIDR block (e.g. 
10.100.0.0/16 or fde4:8dba:82e1::/48)")) - } - } - } - - if _, err := utilnet.ParsePortRange(config.PortRange); err != nil { - allErrs = append(allErrs, field.Invalid(newPath.Child("PortRange"), config.PortRange, "must be a valid port range (e.g. 300-2000)")) - } - - allErrs = append(allErrs, validateKubeProxyNodePortAddress(config.NodePortAddresses, newPath.Child("NodePortAddresses"))...) - allErrs = append(allErrs, validateShowHiddenMetricsVersion(config.ShowHiddenMetricsForVersion, newPath.Child("ShowHiddenMetricsForVersion"))...) - if config.DetectLocalMode == kubeproxyconfig.LocalModeBridgeInterface { - allErrs = append(allErrs, validateInterface(config.DetectLocal.BridgeInterface, newPath.Child("InterfaceName"))...) - } - if config.DetectLocalMode == kubeproxyconfig.LocalModeInterfaceNamePrefix { - allErrs = append(allErrs, validateInterface(config.DetectLocal.InterfaceNamePrefix, newPath.Child("InterfacePrefix"))...) - } - - return allErrs -} - -func validateKubeProxyIPTablesConfiguration(config kubeproxyconfig.KubeProxyIPTablesConfiguration, fldPath *field.Path) field.ErrorList { - allErrs := field.ErrorList{} - - if config.MasqueradeBit != nil && (*config.MasqueradeBit < 0 || *config.MasqueradeBit > 31) { - allErrs = append(allErrs, field.Invalid(fldPath.Child("MasqueradeBit"), config.MasqueradeBit, "must be within the range [0, 31]")) - } - - if config.SyncPeriod.Duration <= 0 { - allErrs = append(allErrs, field.Invalid(fldPath.Child("SyncPeriod"), config.SyncPeriod, "must be greater than 0")) - } - - if config.MinSyncPeriod.Duration < 0 { - allErrs = append(allErrs, field.Invalid(fldPath.Child("MinSyncPeriod"), config.MinSyncPeriod, "must be greater than or equal to 0")) - } - - if config.MinSyncPeriod.Duration > config.SyncPeriod.Duration { - allErrs = append(allErrs, field.Invalid(fldPath.Child("SyncPeriod"), config.MinSyncPeriod, fmt.Sprintf("must be greater than or equal to %s", fldPath.Child("MinSyncPeriod").String()))) - } - - return allErrs -} - 
-func validateKubeProxyIPVSConfiguration(config kubeproxyconfig.KubeProxyIPVSConfiguration, fldPath *field.Path) field.ErrorList { - allErrs := field.ErrorList{} - - if config.SyncPeriod.Duration <= 0 { - allErrs = append(allErrs, field.Invalid(fldPath.Child("SyncPeriod"), config.SyncPeriod, "must be greater than 0")) - } - - if config.MinSyncPeriod.Duration < 0 { - allErrs = append(allErrs, field.Invalid(fldPath.Child("MinSyncPeriod"), config.MinSyncPeriod, "must be greater than or equal to 0")) - } - - if config.MinSyncPeriod.Duration > config.SyncPeriod.Duration { - allErrs = append(allErrs, field.Invalid(fldPath.Child("SyncPeriod"), config.MinSyncPeriod, fmt.Sprintf("must be greater than or equal to %s", fldPath.Child("MinSyncPeriod").String()))) - } - - allErrs = append(allErrs, validateIPVSTimeout(config, fldPath)...) - allErrs = append(allErrs, validateIPVSExcludeCIDRs(config.ExcludeCIDRs, fldPath.Child("ExcludeCidrs"))...) - - return allErrs -} - -func validateKubeProxyConntrackConfiguration(config kubeproxyconfig.KubeProxyConntrackConfiguration, fldPath *field.Path) field.ErrorList { - allErrs := field.ErrorList{} - - if config.MaxPerCore != nil && *config.MaxPerCore < 0 { - allErrs = append(allErrs, field.Invalid(fldPath.Child("MaxPerCore"), config.MaxPerCore, "must be greater than or equal to 0")) - } - - if config.Min != nil && *config.Min < 0 { - allErrs = append(allErrs, field.Invalid(fldPath.Child("Min"), config.Min, "must be greater than or equal to 0")) - } - - if config.TCPEstablishedTimeout.Duration < 0 { - allErrs = append(allErrs, field.Invalid(fldPath.Child("TCPEstablishedTimeout"), config.TCPEstablishedTimeout, "must be greater than or equal to 0")) - } - - if config.TCPCloseWaitTimeout.Duration < 0 { - allErrs = append(allErrs, field.Invalid(fldPath.Child("TCPCloseWaitTimeout"), config.TCPCloseWaitTimeout, "must be greater than or equal to 0")) - } - - return allErrs -} - -func validateProxyMode(mode kubeproxyconfig.ProxyMode, fldPath 
*field.Path) field.ErrorList { - if runtime.GOOS == "windows" { - return validateProxyModeWindows(mode, fldPath) - } - - return validateProxyModeLinux(mode, fldPath) -} - -func validateProxyModeLinux(mode kubeproxyconfig.ProxyMode, fldPath *field.Path) field.ErrorList { - validModes := sets.NewString( - string(kubeproxyconfig.ProxyModeIPTables), - string(kubeproxyconfig.ProxyModeIPVS), - ) - - if mode == "" || validModes.Has(string(mode)) { - return nil - } - - errMsg := fmt.Sprintf("must be %s or blank (blank means the best-available proxy [currently iptables])", strings.Join(validModes.List(), ",")) - return field.ErrorList{field.Invalid(fldPath.Child("ProxyMode"), string(mode), errMsg)} -} - -func validateProxyModeWindows(mode kubeproxyconfig.ProxyMode, fldPath *field.Path) field.ErrorList { - validModes := sets.NewString( - string(kubeproxyconfig.ProxyModeKernelspace), - ) - - if mode == "" || validModes.Has(string(mode)) { - return nil - } - - errMsg := fmt.Sprintf("must be %s or blank (blank means the most-available proxy [currently 'kernelspace'])", strings.Join(validModes.List(), ",")) - return field.ErrorList{field.Invalid(fldPath.Child("ProxyMode"), string(mode), errMsg)} -} - -func validateClientConnectionConfiguration(config componentbaseconfig.ClientConnectionConfiguration, fldPath *field.Path) field.ErrorList { - allErrs := field.ErrorList{} - allErrs = append(allErrs, apivalidation.ValidateNonnegativeField(int64(config.Burst), fldPath.Child("Burst"))...) 
- return allErrs -} - -func validateHostPort(input string, fldPath *field.Path) field.ErrorList { - allErrs := field.ErrorList{} - - hostIP, port, err := net.SplitHostPort(input) - if err != nil { - allErrs = append(allErrs, field.Invalid(fldPath, input, "must be IP:port")) - return allErrs - } - - if ip := netutils.ParseIPSloppy(hostIP); ip == nil { - allErrs = append(allErrs, field.Invalid(fldPath, hostIP, "must be a valid IP")) - } - - if p, err := strconv.Atoi(port); err != nil { - allErrs = append(allErrs, field.Invalid(fldPath, port, "must be a valid port")) - } else if p < 1 || p > 65535 { - allErrs = append(allErrs, field.Invalid(fldPath, port, "must be a valid port")) - } - - return allErrs -} - -func validateKubeProxyNodePortAddress(nodePortAddresses []string, fldPath *field.Path) field.ErrorList { - allErrs := field.ErrorList{} - - for i := range nodePortAddresses { - if _, _, err := netutils.ParseCIDRSloppy(nodePortAddresses[i]); err != nil { - allErrs = append(allErrs, field.Invalid(fldPath.Index(i), nodePortAddresses[i], "must be a valid CIDR")) - } - } - - return allErrs -} - -func validateIPVSTimeout(config kubeproxyconfig.KubeProxyIPVSConfiguration, fldPath *field.Path) field.ErrorList { - allErrs := field.ErrorList{} - - if config.TCPTimeout.Duration < 0 { - allErrs = append(allErrs, field.Invalid(fldPath.Child("TCPTimeout"), config.TCPTimeout, "must be greater than or equal to 0")) - } - - if config.TCPFinTimeout.Duration < 0 { - allErrs = append(allErrs, field.Invalid(fldPath.Child("TCPFinTimeout"), config.TCPFinTimeout, "must be greater than or equal to 0")) - } - - if config.UDPTimeout.Duration < 0 { - allErrs = append(allErrs, field.Invalid(fldPath.Child("UDPTimeout"), config.UDPTimeout, "must be greater than or equal to 0")) - } - - return allErrs -} - -func validateIPVSExcludeCIDRs(excludeCIDRs []string, fldPath *field.Path) field.ErrorList { - allErrs := field.ErrorList{} - - for i := range excludeCIDRs { - if _, _, err := 
netutils.ParseCIDRSloppy(excludeCIDRs[i]); err != nil { - allErrs = append(allErrs, field.Invalid(fldPath.Index(i), excludeCIDRs[i], "must be a valid CIDR")) - } - } - return allErrs -} - -func validateShowHiddenMetricsVersion(version string, fldPath *field.Path) field.ErrorList { - allErrs := field.ErrorList{} - errs := metrics.ValidateShowHiddenMetricsVersion(version) - for _, e := range errs { - allErrs = append(allErrs, field.Invalid(fldPath, version, e.Error())) - } - - return allErrs -} - -func validateInterface(iface string, fldPath *field.Path) field.ErrorList { - allErrs := field.ErrorList{} - if len(iface) == 0 { - allErrs = append(allErrs, field.Invalid(fldPath, iface, "must not be empty")) - } - return allErrs -} diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/apis/config/zz_generated.deepcopy.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/apis/config/zz_generated.deepcopy.go deleted file mode 100644 index fd83125fc2af..000000000000 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/apis/config/zz_generated.deepcopy.go +++ /dev/null @@ -1,220 +0,0 @@ -//go:build !ignore_autogenerated -// +build !ignore_autogenerated - -/* -Copyright The Kubernetes Authors. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -// Code generated by deepcopy-gen. DO NOT EDIT. 
- -package config - -import ( - v1 "k8s.io/apimachinery/pkg/apis/meta/v1" - runtime "k8s.io/apimachinery/pkg/runtime" -) - -// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. -func (in ConfigurationMap) DeepCopyInto(out *ConfigurationMap) { - { - in := &in - *out = make(ConfigurationMap, len(*in)) - for key, val := range *in { - (*out)[key] = val - } - return - } -} - -// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ConfigurationMap. -func (in ConfigurationMap) DeepCopy() ConfigurationMap { - if in == nil { - return nil - } - out := new(ConfigurationMap) - in.DeepCopyInto(out) - return *out -} - -// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. -func (in *DetectLocalConfiguration) DeepCopyInto(out *DetectLocalConfiguration) { - *out = *in - return -} - -// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new DetectLocalConfiguration. -func (in *DetectLocalConfiguration) DeepCopy() *DetectLocalConfiguration { - if in == nil { - return nil - } - out := new(DetectLocalConfiguration) - in.DeepCopyInto(out) - return out -} - -// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. 
-func (in *KubeProxyConfiguration) DeepCopyInto(out *KubeProxyConfiguration) { - *out = *in - out.TypeMeta = in.TypeMeta - if in.FeatureGates != nil { - in, out := &in.FeatureGates, &out.FeatureGates - *out = make(map[string]bool, len(*in)) - for key, val := range *in { - (*out)[key] = val - } - } - out.ClientConnection = in.ClientConnection - in.IPTables.DeepCopyInto(&out.IPTables) - in.IPVS.DeepCopyInto(&out.IPVS) - if in.OOMScoreAdj != nil { - in, out := &in.OOMScoreAdj, &out.OOMScoreAdj - *out = new(int32) - **out = **in - } - in.Conntrack.DeepCopyInto(&out.Conntrack) - out.ConfigSyncPeriod = in.ConfigSyncPeriod - if in.NodePortAddresses != nil { - in, out := &in.NodePortAddresses, &out.NodePortAddresses - *out = make([]string, len(*in)) - copy(*out, *in) - } - out.Winkernel = in.Winkernel - out.DetectLocal = in.DetectLocal - return -} - -// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new KubeProxyConfiguration. -func (in *KubeProxyConfiguration) DeepCopy() *KubeProxyConfiguration { - if in == nil { - return nil - } - out := new(KubeProxyConfiguration) - in.DeepCopyInto(out) - return out -} - -// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object. -func (in *KubeProxyConfiguration) DeepCopyObject() runtime.Object { - if c := in.DeepCopy(); c != nil { - return c - } - return nil -} - -// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. 
-func (in *KubeProxyConntrackConfiguration) DeepCopyInto(out *KubeProxyConntrackConfiguration) { - *out = *in - if in.MaxPerCore != nil { - in, out := &in.MaxPerCore, &out.MaxPerCore - *out = new(int32) - **out = **in - } - if in.Min != nil { - in, out := &in.Min, &out.Min - *out = new(int32) - **out = **in - } - if in.TCPEstablishedTimeout != nil { - in, out := &in.TCPEstablishedTimeout, &out.TCPEstablishedTimeout - *out = new(v1.Duration) - **out = **in - } - if in.TCPCloseWaitTimeout != nil { - in, out := &in.TCPCloseWaitTimeout, &out.TCPCloseWaitTimeout - *out = new(v1.Duration) - **out = **in - } - return -} - -// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new KubeProxyConntrackConfiguration. -func (in *KubeProxyConntrackConfiguration) DeepCopy() *KubeProxyConntrackConfiguration { - if in == nil { - return nil - } - out := new(KubeProxyConntrackConfiguration) - in.DeepCopyInto(out) - return out -} - -// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. -func (in *KubeProxyIPTablesConfiguration) DeepCopyInto(out *KubeProxyIPTablesConfiguration) { - *out = *in - if in.MasqueradeBit != nil { - in, out := &in.MasqueradeBit, &out.MasqueradeBit - *out = new(int32) - **out = **in - } - if in.LocalhostNodePorts != nil { - in, out := &in.LocalhostNodePorts, &out.LocalhostNodePorts - *out = new(bool) - **out = **in - } - out.SyncPeriod = in.SyncPeriod - out.MinSyncPeriod = in.MinSyncPeriod - return -} - -// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new KubeProxyIPTablesConfiguration. -func (in *KubeProxyIPTablesConfiguration) DeepCopy() *KubeProxyIPTablesConfiguration { - if in == nil { - return nil - } - out := new(KubeProxyIPTablesConfiguration) - in.DeepCopyInto(out) - return out -} - -// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. 
-func (in *KubeProxyIPVSConfiguration) DeepCopyInto(out *KubeProxyIPVSConfiguration) { - *out = *in - out.SyncPeriod = in.SyncPeriod - out.MinSyncPeriod = in.MinSyncPeriod - if in.ExcludeCIDRs != nil { - in, out := &in.ExcludeCIDRs, &out.ExcludeCIDRs - *out = make([]string, len(*in)) - copy(*out, *in) - } - out.TCPTimeout = in.TCPTimeout - out.TCPFinTimeout = in.TCPFinTimeout - out.UDPTimeout = in.UDPTimeout - return -} - -// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new KubeProxyIPVSConfiguration. -func (in *KubeProxyIPVSConfiguration) DeepCopy() *KubeProxyIPVSConfiguration { - if in == nil { - return nil - } - out := new(KubeProxyIPVSConfiguration) - in.DeepCopyInto(out) - return out -} - -// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. -func (in *KubeProxyWinkernelConfiguration) DeepCopyInto(out *KubeProxyWinkernelConfiguration) { - *out = *in - return -} - -// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new KubeProxyWinkernelConfiguration. 
-func (in *KubeProxyWinkernelConfiguration) DeepCopy() *KubeProxyWinkernelConfiguration { - if in == nil { - return nil - } - out := new(KubeProxyWinkernelConfiguration) - in.DeepCopyInto(out) - return out -} diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/config/OWNERS b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/config/OWNERS index f5f691a16621..46a45ce4e769 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/config/OWNERS +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/config/OWNERS @@ -2,7 +2,6 @@ reviewers: - sig-network-reviewers - - lavalamp - smarterclayton labels: - sig/network diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/conntrack/cleanup.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/conntrack/cleanup.go new file mode 100644 index 000000000000..ab5dc7df8976 --- /dev/null +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/conntrack/cleanup.go @@ -0,0 +1,111 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package conntrack + +import ( + v1 "k8s.io/api/core/v1" + "k8s.io/apimachinery/pkg/util/sets" + "k8s.io/klog/v2" + "k8s.io/kubernetes/pkg/proxy" + proxyutil "k8s.io/kubernetes/pkg/proxy/util" + utilexec "k8s.io/utils/exec" +) + +// CleanStaleEntries takes care of flushing stale conntrack entries for services and endpoints. 
+func CleanStaleEntries(isIPv6 bool, exec utilexec.Interface, svcPortMap proxy.ServicePortMap, + serviceUpdateResult proxy.UpdateServiceMapResult, endpointUpdateResult proxy.UpdateEndpointMapResult) { + + deleteStaleServiceConntrackEntries(isIPv6, exec, svcPortMap, serviceUpdateResult, endpointUpdateResult) + deleteStaleEndpointConntrackEntries(exec, svcPortMap, endpointUpdateResult) +} + +// deleteStaleServiceConntrackEntries takes care of flushing stale conntrack entries related +// to UDP Service IPs. When a service has no endpoints and we drop traffic to it, conntrack +// may create "black hole" entries for that IP+port. When the service gets endpoints we +// need to delete those entries so further traffic doesn't get dropped. +func deleteStaleServiceConntrackEntries(isIPv6 bool, exec utilexec.Interface, svcPortMap proxy.ServicePortMap, serviceUpdateResult proxy.UpdateServiceMapResult, endpointUpdateResult proxy.UpdateEndpointMapResult) { + conntrackCleanupServiceIPs := serviceUpdateResult.DeletedUDPClusterIPs + conntrackCleanupServiceNodePorts := sets.New[int]() + + // merge newly active services gathered from updateEndpointsMap + // a UDP service that changes from 0 to non-0 endpoints is newly active. 
+ for _, svcPortName := range endpointUpdateResult.NewlyActiveUDPServices { + if svcInfo, ok := svcPortMap[svcPortName]; ok { + klog.V(4).InfoS("Newly-active UDP service may have stale conntrack entries", "servicePortName", svcPortName) + conntrackCleanupServiceIPs.Insert(svcInfo.ClusterIP().String()) + for _, extIP := range svcInfo.ExternalIPStrings() { + conntrackCleanupServiceIPs.Insert(extIP) + } + for _, lbIP := range svcInfo.LoadBalancerIPStrings() { + conntrackCleanupServiceIPs.Insert(lbIP) + } + nodePort := svcInfo.NodePort() + if svcInfo.Protocol() == v1.ProtocolUDP && nodePort != 0 { + conntrackCleanupServiceNodePorts.Insert(nodePort) + } + } + } + + klog.V(4).InfoS("Deleting conntrack stale entries for services", "IPs", conntrackCleanupServiceIPs.UnsortedList()) + for _, svcIP := range conntrackCleanupServiceIPs.UnsortedList() { + if err := ClearEntriesForIP(exec, svcIP, v1.ProtocolUDP); err != nil { + klog.ErrorS(err, "Failed to delete stale service connections", "IP", svcIP) + } + } + klog.V(4).InfoS("Deleting conntrack stale entries for services", "nodePorts", conntrackCleanupServiceNodePorts.UnsortedList()) + for _, nodePort := range conntrackCleanupServiceNodePorts.UnsortedList() { + err := ClearEntriesForPort(exec, nodePort, isIPv6, v1.ProtocolUDP) + if err != nil { + klog.ErrorS(err, "Failed to clear udp conntrack", "nodePort", nodePort) + } + } +} + +// deleteStaleEndpointConntrackEntries takes care of flushing stale conntrack entries related +// to UDP endpoints. After a UDP endpoint is removed we must flush any conntrack entries +// for it so that if the same client keeps sending, the packets will get routed to a new endpoint. 
+func deleteStaleEndpointConntrackEntries(exec utilexec.Interface, svcPortMap proxy.ServicePortMap, endpointUpdateResult proxy.UpdateEndpointMapResult) { + for _, epSvcPair := range endpointUpdateResult.DeletedUDPEndpoints { + if svcInfo, ok := svcPortMap[epSvcPair.ServicePortName]; ok { + endpointIP := proxyutil.IPPart(epSvcPair.Endpoint) + nodePort := svcInfo.NodePort() + var err error + if nodePort != 0 { + err = ClearEntriesForPortNAT(exec, endpointIP, nodePort, v1.ProtocolUDP) + if err != nil { + klog.ErrorS(err, "Failed to delete nodeport-related endpoint connections", "servicePortName", epSvcPair.ServicePortName) + } + } + err = ClearEntriesForNAT(exec, svcInfo.ClusterIP().String(), endpointIP, v1.ProtocolUDP) + if err != nil { + klog.ErrorS(err, "Failed to delete endpoint connections", "servicePortName", epSvcPair.ServicePortName) + } + for _, extIP := range svcInfo.ExternalIPStrings() { + err := ClearEntriesForNAT(exec, extIP, endpointIP, v1.ProtocolUDP) + if err != nil { + klog.ErrorS(err, "Failed to delete endpoint connections for externalIP", "servicePortName", epSvcPair.ServicePortName, "externalIP", extIP) + } + } + for _, lbIP := range svcInfo.LoadBalancerIPStrings() { + err := ClearEntriesForNAT(exec, lbIP, endpointIP, v1.ProtocolUDP) + if err != nil { + klog.ErrorS(err, "Failed to delete endpoint connections for LoadBalancerIP", "servicePortName", epSvcPair.ServicePortName, "loadBalancerIP", lbIP) + } + } + } + } +} diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/util/conntrack/conntrack.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/conntrack/conntrack.go similarity index 97% rename from cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/util/conntrack/conntrack.go rename to cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/conntrack/conntrack.go index 4b39f61ac79e..450ca1dda3a3 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/util/conntrack/conntrack.go +++ 
b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/conntrack/conntrack.go @@ -63,12 +63,12 @@ func Exec(execer exec.Interface, parameters ...string) error { if err != nil { return fmt.Errorf("error looking for path of conntrack: %v", err) } - klog.V(4).Infof("Clearing conntrack entries %v", parameters) + klog.V(4).InfoS("Clearing conntrack entries", "parameters", parameters) output, err := execer.Command(conntrackPath, parameters...).CombinedOutput() if err != nil { return fmt.Errorf("conntrack command returned: %q, error message: %s", string(output), err) } - klog.V(4).Infof("Conntrack entries deleted %s", string(output)) + klog.V(4).InfoS("Conntrack entries deleted", "output", string(output)) return nil } diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/endpoints.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/endpoints.go index 3cecb33226cf..fd1e9485cdb6 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/endpoints.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/endpoints.go @@ -30,10 +30,10 @@ import ( "k8s.io/apimachinery/pkg/types" "k8s.io/apimachinery/pkg/util/sets" "k8s.io/kubernetes/pkg/proxy/metrics" - utilproxy "k8s.io/kubernetes/pkg/proxy/util" + proxyutil "k8s.io/kubernetes/pkg/proxy/util" ) -var supportedEndpointSliceAddressTypes = sets.NewString( +var supportedEndpointSliceAddressTypes = sets.New[string]( string(discovery.AddressTypeIPv4), string(discovery.AddressTypeIPv6), ) @@ -49,7 +49,7 @@ type BaseEndpointInfo struct { // ZoneHints represent the zone hints for the endpoint. This is based on // endpoint.hints.forZones[*].name in the EndpointSlice API. - ZoneHints sets.String + ZoneHints sets.Set[string] // Ready indicates whether this endpoint is ready and NOT terminating. // For pods, this is true if a pod has a ready status and a nil deletion timestamp. // This is only set when watching EndpointSlices. 
If using Endpoints, this is always @@ -103,18 +103,18 @@ func (info *BaseEndpointInfo) IsTerminating() bool { } // GetZoneHints returns the zone hint for the endpoint. -func (info *BaseEndpointInfo) GetZoneHints() sets.String { +func (info *BaseEndpointInfo) GetZoneHints() sets.Set[string] { return info.ZoneHints } // IP returns just the IP part of the endpoint, it's a part of proxy.Endpoint interface. func (info *BaseEndpointInfo) IP() string { - return utilproxy.IPPart(info.Endpoint) + return proxyutil.IPPart(info.Endpoint) } // Port returns just the Port part of the endpoint. func (info *BaseEndpointInfo) Port() (int, error) { - return utilproxy.PortPart(info.Endpoint) + return proxyutil.PortPart(info.Endpoint) } // Equal is part of proxy.Endpoint interface. @@ -135,7 +135,7 @@ func (info *BaseEndpointInfo) GetZone() string { } func newBaseEndpointInfo(IP, nodeName, zone string, port int, isLocal bool, - ready, serving, terminating bool, zoneHints sets.String) *BaseEndpointInfo { + ready, serving, terminating bool, zoneHints sets.Set[string]) *BaseEndpointInfo { return &BaseEndpointInfo{ Endpoint: net.JoinHostPort(IP, strconv.Itoa(port)), IsLocal: isLocal, @@ -232,7 +232,7 @@ func (ect *EndpointChangeTracker) EndpointSliceUpdate(endpointSlice *discovery.E // PendingChanges returns a set whose keys are the names of the services whose endpoints // have changed since the last time ect was used to update an EndpointsMap. (You must call // this _before_ calling em.Update(ect).) -func (ect *EndpointChangeTracker) PendingChanges() sets.String { +func (ect *EndpointChangeTracker) PendingChanges() sets.Set[string] { return ect.endpointSliceCache.pendingChanges() } @@ -361,8 +361,8 @@ func (em EndpointsMap) unmerge(other EndpointsMap) { } // getLocalEndpointIPs returns endpoints IPs if given endpoint is local - local means the endpoint is running in same host as kube-proxy. 
-func (em EndpointsMap) getLocalReadyEndpointIPs() map[types.NamespacedName]sets.String { - localIPs := make(map[types.NamespacedName]sets.String) +func (em EndpointsMap) getLocalReadyEndpointIPs() map[types.NamespacedName]sets.Set[string] { + localIPs := make(map[types.NamespacedName]sets.Set[string]) for svcPortName, epList := range em { for _, ep := range epList { // Only add ready endpoints for health checking. Terminating endpoints may still serve traffic @@ -374,7 +374,7 @@ func (em EndpointsMap) getLocalReadyEndpointIPs() map[types.NamespacedName]sets. if ep.GetIsLocal() { nsn := svcPortName.NamespacedName if localIPs[nsn] == nil { - localIPs[nsn] = sets.NewString() + localIPs[nsn] = sets.New[string]() } localIPs[nsn].Insert(ep.IP()) } diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/endpointslicecache.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/endpointslicecache.go index 6f00982d4e22..ffbab9913c4b 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/endpointslicecache.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/endpointslicecache.go @@ -31,7 +31,7 @@ import ( "k8s.io/client-go/tools/events" "k8s.io/klog/v2" "k8s.io/kubernetes/pkg/features" - utilproxy "k8s.io/kubernetes/pkg/proxy/util" + proxyutil "k8s.io/kubernetes/pkg/proxy/util" utilnet "k8s.io/utils/net" ) @@ -81,7 +81,7 @@ type endpointInfo struct { Addresses []string NodeName *string Zone *string - ZoneHints sets.String + ZoneHints sets.Set[string] Ready bool Serving bool @@ -141,7 +141,7 @@ func newEndpointSliceInfo(endpointSlice *discovery.EndpointSlice, remove bool) * if utilfeature.DefaultFeatureGate.Enabled(features.TopologyAwareHints) { if endpoint.Hints != nil && len(endpoint.Hints.ForZones) > 0 { - epInfo.ZoneHints = sets.String{} + epInfo.ZoneHints = sets.New[string]() for _, zone := range endpoint.Hints.ForZones { epInfo.ZoneHints.Insert(zone.Name) } @@ -190,11 +190,11 @@ func (cache *EndpointSliceCache) 
updatePending(endpointSlice *discovery.Endpoint // pendingChanges returns a set whose keys are the names of the services whose endpoints // have changed since the last time checkoutChanges was called -func (cache *EndpointSliceCache) pendingChanges() sets.String { +func (cache *EndpointSliceCache) pendingChanges() sets.Set[string] { cache.lock.Lock() defer cache.lock.Unlock() - changes := sets.NewString() + changes := sets.New[string]() for serviceNN, esTracker := range cache.trackerByServiceMap { if len(esTracker.pending) > 0 { changes.Insert(serviceNN.String()) @@ -290,7 +290,7 @@ func (cache *EndpointSliceCache) addEndpoints(svcPortName *ServicePortName, port if (cache.ipFamily == v1.IPv6Protocol) != utilnet.IsIPv6String(endpoint.Addresses[0]) { // Emit event on the corresponding service which had a different IP // version than the endpoint. - utilproxy.LogAndEmitIncorrectIPVersionEvent(cache.recorder, "endpointslice", endpoint.Addresses[0], svcPortName.NamespacedName.Namespace, svcPortName.NamespacedName.Name, "") + proxyutil.LogAndEmitIncorrectIPVersionEvent(cache.recorder, "endpointslice", endpoint.Addresses[0], svcPortName.NamespacedName.Namespace, svcPortName.NamespacedName.Name, "") continue } diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/healthcheck/proxier_health.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/healthcheck/proxier_health.go index 7dc5e4e4b4d4..2ffdcc2b9d2d 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/healthcheck/proxier_health.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/healthcheck/proxier_health.go @@ -26,9 +26,16 @@ import ( "k8s.io/client-go/tools/events" "k8s.io/klog/v2" api "k8s.io/kubernetes/pkg/apis/core" + "k8s.io/kubernetes/pkg/proxy/metrics" "k8s.io/utils/clock" ) +const ( + // ToBeDeletedTaint is a taint used by the Cluster Autoscaler before marking a node for deletion.
Defined in + https://github.com/kubernetes/autoscaler/blob/e80ab518340f88f364fe3ef063f8303755125971/cluster-autoscaler/utils/deletetaint/delete.go#L36 + ToBeDeletedTaint = "ToBeDeletedByClusterAutoscaler" +) + // ProxierHealthUpdater allows callers to update healthz timestamp only. type ProxierHealthUpdater interface { // QueuedUpdate should be called when the proxier receives a Service or Endpoints @@ -42,6 +49,10 @@ type ProxierHealthUpdater interface { // Run starts the healthz HTTP server and blocks until it exits. Run() error + // Sync the node and determine if it's eligible or not. Eligible is + // defined as being: not tainted by ToBeDeletedTaint and not deleted. + SyncNode(node *v1.Node) + proxierHealthChecker } @@ -62,6 +73,7 @@ type proxierHealthServer struct { lastUpdated atomic.Value oldestPendingQueued atomic.Value + nodeEligible atomic.Bool } // NewProxierHealthServer returns a proxier health http server. @@ -70,7 +82,7 @@ func NewProxierHealthServer(addr string, healthTimeout time.Duration, recorder e } func newProxierHealthServer(listener listener, httpServerFactory httpServerFactory, c clock.Clock, addr string, healthTimeout time.Duration, recorder events.EventRecorder, nodeRef *v1.ObjectReference) *proxierHealthServer { - return &proxierHealthServer{ + hs := &proxierHealthServer{ listener: listener, httpFactory: httpServerFactory, clock: c, @@ -79,6 +91,11 @@ func newProxierHealthServer(listener listener, httpServerFactory httpServerFacto recorder: recorder, nodeRef: nodeRef, } + // The node is eligible (and thus the proxy healthy) while it's starting up + // and until we've processed the first node event that indicates the + // contrary.
+ hs.nodeEligible.Store(true) + return hs } // Updated indicates that kube-proxy has successfully updated its backend, so it should @@ -96,8 +113,8 @@ func (hs *proxierHealthServer) QueuedUpdate() { hs.oldestPendingQueued.CompareAndSwap(zeroTime, hs.clock.Now()) } -// IsHealthy returns the proxier's health state, following the same definition -// the HTTP server defines. +// IsHealthy returns only the proxier's health state, following the same +// definition the HTTP server defines, but ignoring the state of the Node. func (hs *proxierHealthServer) IsHealthy() bool { isHealthy, _, _ := hs.isHealthy() return isHealthy @@ -123,14 +140,28 @@ func (hs *proxierHealthServer) isHealthy() (bool, time.Time, time.Time) { // There's an unprocessed update queued, but it's not late yet healthy = true } - return healthy, lastUpdated, currentTime } +func (hs *proxierHealthServer) SyncNode(node *v1.Node) { + if !node.DeletionTimestamp.IsZero() { + hs.nodeEligible.Store(false) + return + } + for _, taint := range node.Spec.Taints { + if taint.Key == ToBeDeletedTaint { + hs.nodeEligible.Store(false) + return + } + } + hs.nodeEligible.Store(true) +} + // Run starts the healthz HTTP server and blocks until it exits. 
func (hs *proxierHealthServer) Run() error { serveMux := http.NewServeMux() serveMux.Handle("/healthz", healthzHandler{hs: hs}) + serveMux.Handle("/livez", livezHandler{hs: hs}) server := hs.httpFactory.New(hs.addr, serveMux) listener, err := hs.listener.Listen(hs.addr) @@ -156,12 +187,40 @@ type healthzHandler struct { } func (h healthzHandler) ServeHTTP(resp http.ResponseWriter, req *http.Request) { + nodeEligible := h.hs.nodeEligible.Load() + healthy, lastUpdated, currentTime := h.hs.isHealthy() + healthy = healthy && nodeEligible + resp.Header().Set("Content-Type", "application/json") + resp.Header().Set("X-Content-Type-Options", "nosniff") + if !healthy { + metrics.ProxyHealthzTotal.WithLabelValues("503").Inc() + resp.WriteHeader(http.StatusServiceUnavailable) + } else { + metrics.ProxyHealthzTotal.WithLabelValues("200").Inc() + resp.WriteHeader(http.StatusOK) + // In older releases, the returned "lastUpdated" time indicated the last + // time the proxier sync loop ran, even if nothing had changed. To + // preserve compatibility, we use the same semantics: the returned + // lastUpdated value is "recent" if the server is healthy. The kube-proxy + // metrics provide more detailed information. 
+ lastUpdated = currentTime + } + fmt.Fprintf(resp, `{"lastUpdated": %q,"currentTime": %q, "nodeEligible": %v}`, lastUpdated, currentTime, nodeEligible) +} + +type livezHandler struct { + hs *proxierHealthServer +} + +func (h livezHandler) ServeHTTP(resp http.ResponseWriter, req *http.Request) { healthy, lastUpdated, currentTime := h.hs.isHealthy() resp.Header().Set("Content-Type", "application/json") resp.Header().Set("X-Content-Type-Options", "nosniff") if !healthy { + metrics.ProxyLivezTotal.WithLabelValues("503").Inc() resp.WriteHeader(http.StatusServiceUnavailable) } else { + metrics.ProxyLivezTotal.WithLabelValues("200").Inc() resp.WriteHeader(http.StatusOK) // In older releases, the returned "lastUpdated" time indicated the last // time the proxier sync loop ran, even if nothing had changed. To diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/healthcheck/service_health.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/healthcheck/service_health.go index a6f2afe765d4..25d48285560c 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/healthcheck/service_health.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/healthcheck/service_health.go @@ -20,20 +20,19 @@ import ( "fmt" "net" "net/http" + "strconv" "strings" "sync" "github.com/lithammer/dedent" - v1 "k8s.io/api/core/v1" - "k8s.io/klog/v2" + v1 "k8s.io/api/core/v1" "k8s.io/apimachinery/pkg/types" + utilerrors "k8s.io/apimachinery/pkg/util/errors" "k8s.io/client-go/tools/events" + "k8s.io/klog/v2" api "k8s.io/kubernetes/pkg/apis/core" - - utilerrors "k8s.io/apimachinery/pkg/util/errors" - "k8s.io/apimachinery/pkg/util/sets" - utilproxy "k8s.io/kubernetes/pkg/proxy/util" + proxyutil "k8s.io/kubernetes/pkg/proxy/util" ) // ServiceHealthServer serves HTTP endpoints for each service name, with results @@ -58,21 +57,17 @@ type proxierHealthChecker interface { IsHealthy() bool } -func newServiceHealthServer(hostname string, recorder events.EventRecorder, 
listener listener, factory httpServerFactory, nodePortAddresses *utilproxy.NodePortAddresses, healthzServer proxierHealthChecker) ServiceHealthServer { - - nodeAddresses, err := nodePortAddresses.GetNodeAddresses(utilproxy.RealNetwork{}) - if err != nil || nodeAddresses.Len() == 0 { - klog.ErrorS(err, "Failed to get node ip address matching node port addresses, health check port will listen to all node addresses", "nodePortAddresses", nodePortAddresses) - nodeAddresses = sets.NewString() - nodeAddresses.Insert(utilproxy.IPv4ZeroCIDR) - } - - // if any of the addresses is zero cidr then we listen - // to old style : - for _, addr := range nodeAddresses.List() { - if utilproxy.IsZeroCIDR(addr) { - nodeAddresses = sets.NewString("") - break +func newServiceHealthServer(hostname string, recorder events.EventRecorder, listener listener, factory httpServerFactory, nodePortAddresses *proxyutil.NodePortAddresses, healthzServer proxierHealthChecker) ServiceHealthServer { + // It doesn't matter whether we listen on "0.0.0.0", "::", or ""; go + // treats them all the same. 
+ nodeIPs := []net.IP{net.IPv4zero} + + if !nodePortAddresses.MatchAll() { + ips, err := nodePortAddresses.GetNodeIPs(proxyutil.RealNetwork{}) + if err == nil { + nodeIPs = ips + } else { + klog.ErrorS(err, "Failed to get node ip address matching node port addresses, health check port will listen to all node addresses", "nodePortAddresses", nodePortAddresses) } } @@ -83,22 +78,22 @@ func newServiceHealthServer(hostname string, recorder events.EventRecorder, list httpFactory: factory, healthzServer: healthzServer, services: map[types.NamespacedName]*hcInstance{}, - nodeAddresses: nodeAddresses, + nodeIPs: nodeIPs, } } // NewServiceHealthServer allocates a new service healthcheck server manager -func NewServiceHealthServer(hostname string, recorder events.EventRecorder, nodePortAddresses *utilproxy.NodePortAddresses, healthzServer proxierHealthChecker) ServiceHealthServer { +func NewServiceHealthServer(hostname string, recorder events.EventRecorder, nodePortAddresses *proxyutil.NodePortAddresses, healthzServer proxierHealthChecker) ServiceHealthServer { return newServiceHealthServer(hostname, recorder, stdNetListener{}, stdHTTPServerFactory{}, nodePortAddresses, healthzServer) } type server struct { hostname string // node addresses where health check port will listen on - nodeAddresses sets.String - recorder events.EventRecorder // can be nil - listener listener - httpFactory httpServerFactory + nodeIPs []net.IP + recorder events.EventRecorder // can be nil + listener listener + httpFactory httpServerFactory healthzServer proxierHealthChecker @@ -169,12 +164,11 @@ func (hcI *hcInstance) listenAndServeAll(hcs *server) error { var err error var listener net.Listener - addresses := hcs.nodeAddresses.List() - hcI.httpServers = make([]httpServer, 0, len(addresses)) + hcI.httpServers = make([]httpServer, 0, len(hcs.nodeIPs)) // for each of the node addresses start listening and serving - for _, address := range addresses { - addr := net.JoinHostPort(address, 
fmt.Sprint(hcI.port)) + for _, ip := range hcs.nodeIPs { + addr := net.JoinHostPort(ip.String(), fmt.Sprint(hcI.port)) // create http server httpSrv := hcs.httpFactory.New(addr, hcHandler{name: hcI.nsn, hcs: hcs}) // start listener @@ -235,11 +229,13 @@ func (h hcHandler) ServeHTTP(resp http.ResponseWriter, req *http.Request) { return } count := svc.endpoints - kubeProxyHealthy := h.hcs.healthzServer.IsHealthy() h.hcs.lock.RUnlock() + kubeProxyHealthy := h.hcs.healthzServer.IsHealthy() resp.Header().Set("Content-Type", "application/json") resp.Header().Set("X-Content-Type-Options", "nosniff") + resp.Header().Set("X-Load-Balancing-Endpoint-Weight", strconv.Itoa(count)) + if count != 0 && kubeProxyHealthy { resp.WriteHeader(http.StatusOK) } else { diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/iptables/OWNERS b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/iptables/OWNERS deleted file mode 100644 index 5368499adb22..000000000000 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/iptables/OWNERS +++ /dev/null @@ -1,8 +0,0 @@ -# See the OWNERS docs at https://go.k8s.io/owners - -reviewers: - - sig-network-reviewers - - smarterclayton - - justinsb -labels: - - sig/network diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/iptables/proxier.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/iptables/proxier.go deleted file mode 100644 index 937441ecaac4..000000000000 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/iptables/proxier.go +++ /dev/null @@ -1,1687 +0,0 @@ -/* -Copyright 2015 The Kubernetes Authors. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. 
-You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -package iptables - -// -// NOTE: this needs to be tested in e2e since it uses iptables for everything. -// - -import ( - "bytes" - "crypto/sha256" - "encoding/base32" - "fmt" - "net" - "reflect" - "strconv" - "strings" - "sync" - "sync/atomic" - "time" - - v1 "k8s.io/api/core/v1" - discovery "k8s.io/api/discovery/v1" - "k8s.io/apimachinery/pkg/types" - "k8s.io/apimachinery/pkg/util/sets" - "k8s.io/apimachinery/pkg/util/wait" - utilfeature "k8s.io/apiserver/pkg/util/feature" - "k8s.io/client-go/tools/events" - utilsysctl "k8s.io/component-helpers/node/util/sysctl" - "k8s.io/klog/v2" - "k8s.io/kubernetes/pkg/features" - "k8s.io/kubernetes/pkg/proxy" - "k8s.io/kubernetes/pkg/proxy/healthcheck" - "k8s.io/kubernetes/pkg/proxy/metaproxier" - "k8s.io/kubernetes/pkg/proxy/metrics" - utilproxy "k8s.io/kubernetes/pkg/proxy/util" - proxyutiliptables "k8s.io/kubernetes/pkg/proxy/util/iptables" - "k8s.io/kubernetes/pkg/util/async" - "k8s.io/kubernetes/pkg/util/conntrack" - utiliptables "k8s.io/kubernetes/pkg/util/iptables" - utilexec "k8s.io/utils/exec" - netutils "k8s.io/utils/net" -) - -const ( - // the services chain - kubeServicesChain utiliptables.Chain = "KUBE-SERVICES" - - // the external services chain - kubeExternalServicesChain utiliptables.Chain = "KUBE-EXTERNAL-SERVICES" - - // the nodeports chain - kubeNodePortsChain utiliptables.Chain = "KUBE-NODEPORTS" - - // the kubernetes postrouting chain - kubePostroutingChain utiliptables.Chain = "KUBE-POSTROUTING" - - // kubeMarkMasqChain is the mark-for-masquerade chain - kubeMarkMasqChain utiliptables.Chain = 
"KUBE-MARK-MASQ" - - // the kubernetes forward chain - kubeForwardChain utiliptables.Chain = "KUBE-FORWARD" - - // kubeProxyFirewallChain is the kube-proxy firewall chain - kubeProxyFirewallChain utiliptables.Chain = "KUBE-PROXY-FIREWALL" - - // kube proxy canary chain is used for monitoring rule reload - kubeProxyCanaryChain utiliptables.Chain = "KUBE-PROXY-CANARY" - - // kubeletFirewallChain is a duplicate of kubelet's firewall containing - // the anti-martian-packet rule. It should not be used for any other - // rules. - kubeletFirewallChain utiliptables.Chain = "KUBE-FIREWALL" - - // largeClusterEndpointsThreshold is the number of endpoints at which - // we switch into "large cluster mode" and optimize for iptables - // performance over iptables debuggability - largeClusterEndpointsThreshold = 1000 -) - -const sysctlRouteLocalnet = "net/ipv4/conf/all/route_localnet" -const sysctlBridgeCallIPTables = "net/bridge/bridge-nf-call-iptables" - -// internal struct for string service information -type servicePortInfo struct { - *proxy.BaseServicePortInfo - // The following fields are computed and stored for performance reasons. - nameString string - clusterPolicyChainName utiliptables.Chain - localPolicyChainName utiliptables.Chain - firewallChainName utiliptables.Chain - externalChainName utiliptables.Chain -} - -// returns a new proxy.ServicePort which abstracts a serviceInfo -func newServiceInfo(port *v1.ServicePort, service *v1.Service, bsvcPortInfo *proxy.BaseServicePortInfo) proxy.ServicePort { - svcPort := &servicePortInfo{BaseServicePortInfo: bsvcPortInfo} - - // Store the following for performance reasons. 
- svcName := types.NamespacedName{Namespace: service.Namespace, Name: service.Name} - svcPortName := proxy.ServicePortName{NamespacedName: svcName, Port: port.Name} - protocol := strings.ToLower(string(svcPort.Protocol())) - svcPort.nameString = svcPortName.String() - svcPort.clusterPolicyChainName = servicePortPolicyClusterChain(svcPort.nameString, protocol) - svcPort.localPolicyChainName = servicePortPolicyLocalChainName(svcPort.nameString, protocol) - svcPort.firewallChainName = serviceFirewallChainName(svcPort.nameString, protocol) - svcPort.externalChainName = serviceExternalChainName(svcPort.nameString, protocol) - - return svcPort -} - -// internal struct for endpoints information -type endpointsInfo struct { - *proxy.BaseEndpointInfo - - ChainName utiliptables.Chain -} - -// returns a new proxy.Endpoint which abstracts an endpointsInfo -func newEndpointInfo(baseInfo *proxy.BaseEndpointInfo, svcPortName *proxy.ServicePortName) proxy.Endpoint { - return &endpointsInfo{ - BaseEndpointInfo: baseInfo, - ChainName: servicePortEndpointChainName(svcPortName.String(), strings.ToLower(string(svcPortName.Protocol)), baseInfo.Endpoint), - } -} - -// Equal overrides the Equal() function implemented by proxy.BaseEndpointInfo. -func (e *endpointsInfo) Equal(other proxy.Endpoint) bool { - o, ok := other.(*endpointsInfo) - if !ok { - klog.ErrorS(nil, "Failed to cast endpointsInfo") - return false - } - return e.Endpoint == o.Endpoint && - e.IsLocal == o.IsLocal && - e.ChainName == o.ChainName && - e.Ready == o.Ready -} - -// Proxier is an iptables-based proxy for connections between a localhost:lport -// and services that provide the actual backends. -type Proxier struct { - // endpointsChanges and serviceChanges contain all changes to endpoints and - // services that happened since iptables was synced. For a single object, - // changes are accumulated, i.e. previous is state from before all of them, - // current is state after applying all of those.
- endpointsChanges *proxy.EndpointChangeTracker - serviceChanges *proxy.ServiceChangeTracker - - mu sync.Mutex // protects the following fields - svcPortMap proxy.ServicePortMap - endpointsMap proxy.EndpointsMap - nodeLabels map[string]string - // endpointSlicesSynced, and servicesSynced are set to true - // when corresponding objects are synced after startup. This is used to avoid - // updating iptables with some partial data after kube-proxy restart. - endpointSlicesSynced bool - servicesSynced bool - needFullSync bool - initialized int32 - syncRunner *async.BoundedFrequencyRunner // governs calls to syncProxyRules - syncPeriod time.Duration - lastIPTablesCleanup time.Time - - // These are effectively const and do not need the mutex to be held. - iptables utiliptables.Interface - masqueradeAll bool - masqueradeMark string - exec utilexec.Interface - localDetector proxyutiliptables.LocalTrafficDetector - hostname string - nodeIP net.IP - recorder events.EventRecorder - - serviceHealthServer healthcheck.ServiceHealthServer - healthzServer healthcheck.ProxierHealthUpdater - - // Since converting probabilities (floats) to strings is expensive - // and we are using only probabilities in the format of 1/n, we are - // precomputing some number of those and cache for future reuse. - precomputedProbabilities []string - - // The following buffers are used to reuse memory and avoid allocations - // that are significantly impacting performance. - iptablesData *bytes.Buffer - existingFilterChainsData *bytes.Buffer - filterChains utilproxy.LineBuffer - filterRules utilproxy.LineBuffer - natChains utilproxy.LineBuffer - natRules utilproxy.LineBuffer - - // largeClusterMode is set at the beginning of syncProxyRules if we are - // going to end up outputting "lots" of iptables rules and so we need to - // optimize for performance over debuggability. - largeClusterMode bool - - // localhostNodePorts indicates whether we allow NodePort services to be accessed - // via localhost. 
- localhostNodePorts bool - // nodePortAddresses selects the interfaces where nodePort works. - nodePortAddresses *utilproxy.NodePortAddresses - // networkInterfacer defines an interface for several net library functions. - // Inject for test purpose. - networkInterfacer utilproxy.NetworkInterfacer -} - -// Proxier implements proxy.Provider -var _ proxy.Provider = &Proxier{} - -// NewProxier returns a new Proxier given an iptables Interface instance. -// Because of the iptables logic, it is assumed that there is only a single Proxier active on a machine. -// An error will be returned if iptables fails to update or acquire the initial lock. -// Once a proxier is created, it will keep iptables up to date in the background and -// will not terminate if a particular iptables call fails. -func NewProxier(ipFamily v1.IPFamily, - ipt utiliptables.Interface, - sysctl utilsysctl.Interface, - exec utilexec.Interface, - syncPeriod time.Duration, - minSyncPeriod time.Duration, - masqueradeAll bool, - localhostNodePorts bool, - masqueradeBit int, - localDetector proxyutiliptables.LocalTrafficDetector, - hostname string, - nodeIP net.IP, - recorder events.EventRecorder, - healthzServer healthcheck.ProxierHealthUpdater, - nodePortAddressStrings []string, -) (*Proxier, error) { - nodePortAddresses := utilproxy.NewNodePortAddresses(nodePortAddressStrings) - - if !nodePortAddresses.ContainsIPv4Loopback() { - localhostNodePorts = false - } - if localhostNodePorts { - // Set the route_localnet sysctl we need for exposing NodePorts on loopback addresses - // Refer to https://issues.k8s.io/90259 - klog.InfoS("Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses") - if err := utilproxy.EnsureSysctl(sysctl, sysctlRouteLocalnet, 1); err != nil { - return nil, err - } - } - - // Proxy needs br_netfilter and 
bridge-nf-call-iptables=1 when containers - // are connected to a Linux bridge (but not SDN bridges). Until most - // plugins handle this, log when config is missing - if val, err := sysctl.GetSysctl(sysctlBridgeCallIPTables); err == nil && val != 1 { - klog.InfoS("Missing br-netfilter module or unset sysctl br-nf-call-iptables, proxy may not work as intended") - } - - // Generate the masquerade mark to use for SNAT rules. - masqueradeValue := 1 << uint(masqueradeBit) - masqueradeMark := fmt.Sprintf("%#08x", masqueradeValue) - klog.V(2).InfoS("Using iptables mark for masquerade", "ipFamily", ipt.Protocol(), "mark", masqueradeMark) - - serviceHealthServer := healthcheck.NewServiceHealthServer(hostname, recorder, nodePortAddresses, healthzServer) - - proxier := &Proxier{ - svcPortMap: make(proxy.ServicePortMap), - serviceChanges: proxy.NewServiceChangeTracker(newServiceInfo, ipFamily, recorder, nil), - endpointsMap: make(proxy.EndpointsMap), - endpointsChanges: proxy.NewEndpointChangeTracker(hostname, newEndpointInfo, ipFamily, recorder, nil), - needFullSync: true, - syncPeriod: syncPeriod, - iptables: ipt, - masqueradeAll: masqueradeAll, - masqueradeMark: masqueradeMark, - exec: exec, - localDetector: localDetector, - hostname: hostname, - nodeIP: nodeIP, - recorder: recorder, - serviceHealthServer: serviceHealthServer, - healthzServer: healthzServer, - precomputedProbabilities: make([]string, 0, 1001), - iptablesData: bytes.NewBuffer(nil), - existingFilterChainsData: bytes.NewBuffer(nil), - filterChains: utilproxy.LineBuffer{}, - filterRules: utilproxy.LineBuffer{}, - natChains: utilproxy.LineBuffer{}, - natRules: utilproxy.LineBuffer{}, - localhostNodePorts: localhostNodePorts, - nodePortAddresses: nodePortAddresses, - networkInterfacer: utilproxy.RealNetwork{}, - } - - burstSyncs := 2 - klog.V(2).InfoS("Iptables sync params", "ipFamily", ipt.Protocol(), "minSyncPeriod", minSyncPeriod, "syncPeriod", syncPeriod, "burstSyncs", burstSyncs) - // We pass syncPeriod to 
ipt.Monitor, which will call us only if it needs to. - // We need to pass *some* maxInterval to NewBoundedFrequencyRunner anyway though. - // time.Hour is arbitrary. - proxier.syncRunner = async.NewBoundedFrequencyRunner("sync-runner", proxier.syncProxyRules, minSyncPeriod, time.Hour, burstSyncs) - - go ipt.Monitor(kubeProxyCanaryChain, []utiliptables.Table{utiliptables.TableMangle, utiliptables.TableNAT, utiliptables.TableFilter}, - proxier.forceSyncProxyRules, syncPeriod, wait.NeverStop) - - if ipt.HasRandomFully() { - klog.V(2).InfoS("Iptables supports --random-fully", "ipFamily", ipt.Protocol()) - } else { - klog.V(2).InfoS("Iptables does not support --random-fully", "ipFamily", ipt.Protocol()) - } - - return proxier, nil -} - -// NewDualStackProxier creates a MetaProxier instance, with IPv4 and IPv6 proxies. -func NewDualStackProxier( - ipt [2]utiliptables.Interface, - sysctl utilsysctl.Interface, - exec utilexec.Interface, - syncPeriod time.Duration, - minSyncPeriod time.Duration, - masqueradeAll bool, - localhostNodePorts bool, - masqueradeBit int, - localDetectors [2]proxyutiliptables.LocalTrafficDetector, - hostname string, - nodeIP [2]net.IP, - recorder events.EventRecorder, - healthzServer healthcheck.ProxierHealthUpdater, - nodePortAddresses []string, -) (proxy.Provider, error) { - // Create an ipv4 instance of the single-stack proxier - ipFamilyMap := utilproxy.MapCIDRsByIPFamily(nodePortAddresses) - ipv4Proxier, err := NewProxier(v1.IPv4Protocol, ipt[0], sysctl, - exec, syncPeriod, minSyncPeriod, masqueradeAll, localhostNodePorts, masqueradeBit, localDetectors[0], hostname, - nodeIP[0], recorder, healthzServer, ipFamilyMap[v1.IPv4Protocol]) - if err != nil { - return nil, fmt.Errorf("unable to create ipv4 proxier: %v", err) - } - - ipv6Proxier, err := NewProxier(v1.IPv6Protocol, ipt[1], sysctl, - exec, syncPeriod, minSyncPeriod, masqueradeAll, false, masqueradeBit, localDetectors[1], hostname, - nodeIP[1], recorder, healthzServer, 
ipFamilyMap[v1.IPv6Protocol]) - if err != nil { - return nil, fmt.Errorf("unable to create ipv6 proxier: %v", err) - } - return metaproxier.NewMetaProxier(ipv4Proxier, ipv6Proxier), nil -} - -type iptablesJumpChain struct { - table utiliptables.Table - dstChain utiliptables.Chain - srcChain utiliptables.Chain - comment string - extraArgs []string -} - -var iptablesJumpChains = []iptablesJumpChain{ - {utiliptables.TableFilter, kubeExternalServicesChain, utiliptables.ChainInput, "kubernetes externally-visible service portals", []string{"-m", "conntrack", "--ctstate", "NEW"}}, - {utiliptables.TableFilter, kubeExternalServicesChain, utiliptables.ChainForward, "kubernetes externally-visible service portals", []string{"-m", "conntrack", "--ctstate", "NEW"}}, - {utiliptables.TableFilter, kubeNodePortsChain, utiliptables.ChainInput, "kubernetes health check service ports", nil}, - {utiliptables.TableFilter, kubeServicesChain, utiliptables.ChainForward, "kubernetes service portals", []string{"-m", "conntrack", "--ctstate", "NEW"}}, - {utiliptables.TableFilter, kubeServicesChain, utiliptables.ChainOutput, "kubernetes service portals", []string{"-m", "conntrack", "--ctstate", "NEW"}}, - {utiliptables.TableFilter, kubeForwardChain, utiliptables.ChainForward, "kubernetes forwarding rules", nil}, - {utiliptables.TableFilter, kubeProxyFirewallChain, utiliptables.ChainInput, "kubernetes load balancer firewall", []string{"-m", "conntrack", "--ctstate", "NEW"}}, - {utiliptables.TableFilter, kubeProxyFirewallChain, utiliptables.ChainOutput, "kubernetes load balancer firewall", []string{"-m", "conntrack", "--ctstate", "NEW"}}, - {utiliptables.TableFilter, kubeProxyFirewallChain, utiliptables.ChainForward, "kubernetes load balancer firewall", []string{"-m", "conntrack", "--ctstate", "NEW"}}, - {utiliptables.TableNAT, kubeServicesChain, utiliptables.ChainOutput, "kubernetes service portals", nil}, - {utiliptables.TableNAT, kubeServicesChain, utiliptables.ChainPrerouting, "kubernetes 
service portals", nil}, -} - -// Duplicates of chains created in pkg/kubelet/kubelet_network_linux.go; we create these -// on startup but do not delete them in CleanupLeftovers. -var iptablesKubeletJumpChains = []iptablesJumpChain{ - {utiliptables.TableFilter, kubeletFirewallChain, utiliptables.ChainInput, "", nil}, - {utiliptables.TableFilter, kubeletFirewallChain, utiliptables.ChainOutput, "", nil}, - - // Move this to iptablesJumpChains once IPTablesOwnershipCleanup is GA and kubelet - // no longer creates this chain. - {utiliptables.TableNAT, kubePostroutingChain, utiliptables.ChainPostrouting, "kubernetes postrouting rules", nil}, -} - -var iptablesCleanupOnlyChains = []iptablesJumpChain{ - // Present in kube 1.13 - 1.19. Removed by #95252 in favor of adding reject rules for incoming/forwarding packets to kubeExternalServicesChain - {utiliptables.TableFilter, kubeServicesChain, utiliptables.ChainInput, "kubernetes service portals", []string{"-m", "conntrack", "--ctstate", "NEW"}}, -} - -// CleanupLeftovers removes all iptables rules and chains created by the Proxier. -// It returns true if an error was encountered. Errors are logged. -func CleanupLeftovers(ipt utiliptables.Interface) (encounteredError bool) { - // Unlink our chains - for _, jump := range append(iptablesJumpChains, iptablesCleanupOnlyChains...) { - args := append(jump.extraArgs, - "-m", "comment", "--comment", jump.comment, - "-j", string(jump.dstChain), - ) - if err := ipt.DeleteRule(jump.table, jump.srcChain, args...); err != nil { - if !utiliptables.IsNotFoundError(err) { - klog.ErrorS(err, "Error removing pure-iptables proxy rule") - encounteredError = true - } - } - } - - // Flush and remove all of our "-t nat" chains.
- iptablesData := bytes.NewBuffer(nil) - if err := ipt.SaveInto(utiliptables.TableNAT, iptablesData); err != nil { - klog.ErrorS(err, "Failed to execute iptables-save", "table", utiliptables.TableNAT) - encounteredError = true - } else { - existingNATChains := utiliptables.GetChainsFromTable(iptablesData.Bytes()) - natChains := &utilproxy.LineBuffer{} - natRules := &utilproxy.LineBuffer{} - natChains.Write("*nat") - // Start with chains we know we need to remove. - for _, chain := range []utiliptables.Chain{kubeServicesChain, kubeNodePortsChain, kubePostroutingChain} { - if _, found := existingNATChains[chain]; found { - chainString := string(chain) - natChains.Write(utiliptables.MakeChainLine(chain)) // flush - natRules.Write("-X", chainString) // delete - } - } - // Hunt for service and endpoint chains. - for chain := range existingNATChains { - chainString := string(chain) - if isServiceChainName(chainString) { - natChains.Write(utiliptables.MakeChainLine(chain)) // flush - natRules.Write("-X", chainString) // delete - } - } - natRules.Write("COMMIT") - natLines := append(natChains.Bytes(), natRules.Bytes()...) - // Write it. - err = ipt.Restore(utiliptables.TableNAT, natLines, utiliptables.NoFlushTables, utiliptables.RestoreCounters) - if err != nil { - klog.ErrorS(err, "Failed to execute iptables-restore", "table", utiliptables.TableNAT) - metrics.IptablesRestoreFailuresTotal.Inc() - encounteredError = true - } - } - - // Flush and remove all of our "-t filter" chains. 
- iptablesData.Reset() - if err := ipt.SaveInto(utiliptables.TableFilter, iptablesData); err != nil { - klog.ErrorS(err, "Failed to execute iptables-save", "table", utiliptables.TableFilter) - encounteredError = true - } else { - existingFilterChains := utiliptables.GetChainsFromTable(iptablesData.Bytes()) - filterChains := &utilproxy.LineBuffer{} - filterRules := &utilproxy.LineBuffer{} - filterChains.Write("*filter") - for _, chain := range []utiliptables.Chain{kubeServicesChain, kubeExternalServicesChain, kubeForwardChain, kubeNodePortsChain} { - if _, found := existingFilterChains[chain]; found { - chainString := string(chain) - filterChains.Write(utiliptables.MakeChainLine(chain)) - filterRules.Write("-X", chainString) - } - } - filterRules.Write("COMMIT") - filterLines := append(filterChains.Bytes(), filterRules.Bytes()...) - // Write it. - if err := ipt.Restore(utiliptables.TableFilter, filterLines, utiliptables.NoFlushTables, utiliptables.RestoreCounters); err != nil { - klog.ErrorS(err, "Failed to execute iptables-restore", "table", utiliptables.TableFilter) - metrics.IptablesRestoreFailuresTotal.Inc() - encounteredError = true - } - } - return encounteredError -} - -func computeProbability(n int) string { - return fmt.Sprintf("%0.10f", 1.0/float64(n)) -} - -// This assumes proxier.mu is held -func (proxier *Proxier) precomputeProbabilities(numberOfPrecomputed int) { - if len(proxier.precomputedProbabilities) == 0 { - proxier.precomputedProbabilities = append(proxier.precomputedProbabilities, "") - } - for i := len(proxier.precomputedProbabilities); i <= numberOfPrecomputed; i++ { - proxier.precomputedProbabilities = append(proxier.precomputedProbabilities, computeProbability(i)) - } -} - -// This assumes proxier.mu is held -func (proxier *Proxier) probability(n int) string { - if n >= len(proxier.precomputedProbabilities) { - proxier.precomputeProbabilities(n) - } - return proxier.precomputedProbabilities[n] -} - -// Sync is called to synchronize the 
proxier state to iptables as soon as possible. -func (proxier *Proxier) Sync() { - if proxier.healthzServer != nil { - proxier.healthzServer.QueuedUpdate() - } - metrics.SyncProxyRulesLastQueuedTimestamp.SetToCurrentTime() - proxier.syncRunner.Run() -} - -// SyncLoop runs periodic work. This is expected to run as a goroutine or as the main loop of the app. It does not return. -func (proxier *Proxier) SyncLoop() { - // Update healthz timestamp at beginning in case Sync() never succeeds. - if proxier.healthzServer != nil { - proxier.healthzServer.Updated() - } - - // synthesize "last change queued" time as the informers are syncing. - metrics.SyncProxyRulesLastQueuedTimestamp.SetToCurrentTime() - proxier.syncRunner.Loop(wait.NeverStop) -} - -func (proxier *Proxier) setInitialized(value bool) { - var initialized int32 - if value { - initialized = 1 - } - atomic.StoreInt32(&proxier.initialized, initialized) -} - -func (proxier *Proxier) isInitialized() bool { - return atomic.LoadInt32(&proxier.initialized) > 0 -} - -// OnServiceAdd is called whenever creation of new service object -// is observed. -func (proxier *Proxier) OnServiceAdd(service *v1.Service) { - proxier.OnServiceUpdate(nil, service) -} - -// OnServiceUpdate is called whenever modification of an existing -// service object is observed. -func (proxier *Proxier) OnServiceUpdate(oldService, service *v1.Service) { - if proxier.serviceChanges.Update(oldService, service) && proxier.isInitialized() { - proxier.Sync() - } -} - -// OnServiceDelete is called whenever deletion of an existing service -// object is observed. -func (proxier *Proxier) OnServiceDelete(service *v1.Service) { - proxier.OnServiceUpdate(service, nil) - -} - -// OnServiceSynced is called once all the initial event handlers were -// called and the state is fully propagated to local cache. 
-func (proxier *Proxier) OnServiceSynced() { - proxier.mu.Lock() - proxier.servicesSynced = true - proxier.setInitialized(proxier.endpointSlicesSynced) - proxier.mu.Unlock() - - // Sync unconditionally - this is called once per lifetime. - proxier.syncProxyRules() -} - -// OnEndpointSliceAdd is called whenever creation of a new endpoint slice object -// is observed. -func (proxier *Proxier) OnEndpointSliceAdd(endpointSlice *discovery.EndpointSlice) { - if proxier.endpointsChanges.EndpointSliceUpdate(endpointSlice, false) && proxier.isInitialized() { - proxier.Sync() - } -} - -// OnEndpointSliceUpdate is called whenever modification of an existing endpoint -// slice object is observed. -func (proxier *Proxier) OnEndpointSliceUpdate(_, endpointSlice *discovery.EndpointSlice) { - if proxier.endpointsChanges.EndpointSliceUpdate(endpointSlice, false) && proxier.isInitialized() { - proxier.Sync() - } -} - -// OnEndpointSliceDelete is called whenever deletion of an existing endpoint slice -// object is observed. -func (proxier *Proxier) OnEndpointSliceDelete(endpointSlice *discovery.EndpointSlice) { - if proxier.endpointsChanges.EndpointSliceUpdate(endpointSlice, true) && proxier.isInitialized() { - proxier.Sync() - } -} - -// OnEndpointSlicesSynced is called once all the initial event handlers were -// called and the state is fully propagated to local cache. -func (proxier *Proxier) OnEndpointSlicesSynced() { - proxier.mu.Lock() - proxier.endpointSlicesSynced = true - proxier.setInitialized(proxier.servicesSynced) - proxier.mu.Unlock() - - // Sync unconditionally - this is called once per lifetime. - proxier.syncProxyRules() -} - -// OnNodeAdd is called whenever creation of new node object -// is observed. 
-func (proxier *Proxier) OnNodeAdd(node *v1.Node) { - if node.Name != proxier.hostname { - klog.ErrorS(nil, "Received a watch event for a node that doesn't match the current node", - "eventNode", node.Name, "currentNode", proxier.hostname) - return - } - - if reflect.DeepEqual(proxier.nodeLabels, node.Labels) { - return - } - - proxier.mu.Lock() - proxier.nodeLabels = map[string]string{} - for k, v := range node.Labels { - proxier.nodeLabels[k] = v - } - proxier.needFullSync = true - proxier.mu.Unlock() - klog.V(4).InfoS("Updated proxier node labels", "labels", node.Labels) - - proxier.Sync() -} - -// OnNodeUpdate is called whenever modification of an existing -// node object is observed. -func (proxier *Proxier) OnNodeUpdate(oldNode, node *v1.Node) { - if node.Name != proxier.hostname { - klog.ErrorS(nil, "Received a watch event for a node that doesn't match the current node", - "eventNode", node.Name, "currentNode", proxier.hostname) - return - } - - if reflect.DeepEqual(proxier.nodeLabels, node.Labels) { - return - } - - proxier.mu.Lock() - proxier.nodeLabels = map[string]string{} - for k, v := range node.Labels { - proxier.nodeLabels[k] = v - } - proxier.needFullSync = true - proxier.mu.Unlock() - klog.V(4).InfoS("Updated proxier node labels", "labels", node.Labels) - - proxier.Sync() -} - -// OnNodeDelete is called whenever deletion of an existing node -// object is observed. -func (proxier *Proxier) OnNodeDelete(node *v1.Node) { - if node.Name != proxier.hostname { - klog.ErrorS(nil, "Received a watch event for a node that doesn't match the current node", - "eventNode", node.Name, "currentNode", proxier.hostname) - return - } - proxier.mu.Lock() - proxier.nodeLabels = nil - proxier.needFullSync = true - proxier.mu.Unlock() - - proxier.Sync() -} - -// OnNodeSynced is called once all the initial event handlers were -// called and the state is fully propagated to local cache. 
-func (proxier *Proxier) OnNodeSynced() { -} - -// portProtoHash takes the ServicePortName and protocol for a service -// returns the associated 16 character hash. This is computed by hashing (sha256) -// then encoding to base32 and truncating to 16 chars. We do this because IPTables -// Chain Names must be <= 28 chars long, and the longer they are the harder they are to read. -func portProtoHash(servicePortName string, protocol string) string { - hash := sha256.Sum256([]byte(servicePortName + protocol)) - encoded := base32.StdEncoding.EncodeToString(hash[:]) - return encoded[:16] -} - -const ( - servicePortPolicyClusterChainNamePrefix = "KUBE-SVC-" - servicePortPolicyLocalChainNamePrefix = "KUBE-SVL-" - serviceFirewallChainNamePrefix = "KUBE-FW-" - serviceExternalChainNamePrefix = "KUBE-EXT-" - servicePortEndpointChainNamePrefix = "KUBE-SEP-" - - // For cleanup. This can be removed after 1.26 is released. - deprecatedServiceLBChainNamePrefix = "KUBE-XLB-" -) - -// servicePortPolicyClusterChain returns the name of the KUBE-SVC-XXXX chain for a service, which is the -// main iptables chain for that service, used for dispatching to endpoints when using `Cluster` -// traffic policy. -func servicePortPolicyClusterChain(servicePortName string, protocol string) utiliptables.Chain { - return utiliptables.Chain(servicePortPolicyClusterChainNamePrefix + portProtoHash(servicePortName, protocol)) -} - -// servicePortPolicyLocalChainName returns the name of the KUBE-SVL-XXXX chain for a service, which -// handles dispatching to local endpoints when using `Local` traffic policy. This chain only -// exists if the service has `Local` internal or external traffic policy. 
-func servicePortPolicyLocalChainName(servicePortName string, protocol string) utiliptables.Chain { - return utiliptables.Chain(servicePortPolicyLocalChainNamePrefix + portProtoHash(servicePortName, protocol)) -} - -// serviceFirewallChainName returns the name of the KUBE-FW-XXXX chain for a service, which -// is used to implement the filtering for the LoadBalancerSourceRanges feature. -func serviceFirewallChainName(servicePortName string, protocol string) utiliptables.Chain { - return utiliptables.Chain(serviceFirewallChainNamePrefix + portProtoHash(servicePortName, protocol)) -} - -// serviceExternalChainName returns the name of the KUBE-EXT-XXXX chain for a service, which -// implements "short-circuiting" for internally-originated external-destination traffic when using -// `Local` external traffic policy. It forwards traffic from local sources to the KUBE-SVC-XXXX -// chain and traffic from external sources to the KUBE-SVL-XXXX chain. -func serviceExternalChainName(servicePortName string, protocol string) utiliptables.Chain { - return utiliptables.Chain(serviceExternalChainNamePrefix + portProtoHash(servicePortName, protocol)) -} - -// servicePortEndpointChainName returns the name of the KUBE-SEP-XXXX chain for a particular -// service endpoint. 
-func servicePortEndpointChainName(servicePortName string, protocol string, endpoint string) utiliptables.Chain { - hash := sha256.Sum256([]byte(servicePortName + protocol + endpoint)) - encoded := base32.StdEncoding.EncodeToString(hash[:]) - return utiliptables.Chain(servicePortEndpointChainNamePrefix + encoded[:16]) -} - -func isServiceChainName(chainString string) bool { - prefixes := []string{ - servicePortPolicyClusterChainNamePrefix, - servicePortPolicyLocalChainNamePrefix, - servicePortEndpointChainNamePrefix, - serviceFirewallChainNamePrefix, - serviceExternalChainNamePrefix, - deprecatedServiceLBChainNamePrefix, - } - - for _, p := range prefixes { - if strings.HasPrefix(chainString, p) { - return true - } - } - return false -} - -// After a UDP endpoint has been removed, we must flush any pending conntrack entries to it, or else we -// risk sending more traffic to it, all of which will be lost (because UDP). -// This assumes the proxier mutex is held -// TODO: move it to util -func (proxier *Proxier) deleteUDPEndpointConnections(deletedUDPEndpoints []proxy.ServiceEndpoint) { - for _, epSvcPair := range deletedUDPEndpoints { - if svcInfo, ok := proxier.svcPortMap[epSvcPair.ServicePortName]; ok { - endpointIP := utilproxy.IPPart(epSvcPair.Endpoint) - nodePort := svcInfo.NodePort() - var err error - if nodePort != 0 { - err = conntrack.ClearEntriesForPortNAT(proxier.exec, endpointIP, nodePort, v1.ProtocolUDP) - if err != nil { - klog.ErrorS(err, "Failed to delete nodeport-related endpoint connections", "servicePortName", epSvcPair.ServicePortName) - } - } - err = conntrack.ClearEntriesForNAT(proxier.exec, svcInfo.ClusterIP().String(), endpointIP, v1.ProtocolUDP) - if err != nil { - klog.ErrorS(err, "Failed to delete endpoint connections", "servicePortName", epSvcPair.ServicePortName) - } - for _, extIP := range svcInfo.ExternalIPStrings() { - err := conntrack.ClearEntriesForNAT(proxier.exec, extIP, endpointIP, v1.ProtocolUDP) - if err != nil { - 
klog.ErrorS(err, "Failed to delete endpoint connections for externalIP", "servicePortName", epSvcPair.ServicePortName, "externalIP", extIP) - } - } - for _, lbIP := range svcInfo.LoadBalancerIPStrings() { - err := conntrack.ClearEntriesForNAT(proxier.exec, lbIP, endpointIP, v1.ProtocolUDP) - if err != nil { - klog.ErrorS(err, "Failed to delete endpoint connections for LoadBalancerIP", "servicePortName", epSvcPair.ServicePortName, "loadBalancerIP", lbIP) - } - } - } - } -} - -// Assumes proxier.mu is held. -func (proxier *Proxier) appendServiceCommentLocked(args []string, svcName string) []string { - // Not printing these comments, can reduce size of iptables (in case of large - // number of endpoints) even by 40%+. So if total number of endpoint chains - // is large enough, we simply drop those comments. - if proxier.largeClusterMode { - return args - } - return append(args, "-m", "comment", "--comment", svcName) -} - -// Called by the iptables.Monitor, and in response to topology changes; this calls -// syncProxyRules() and tells it to resync all services, regardless of whether the -// Service or Endpoints/EndpointSlice objects themselves have changed -func (proxier *Proxier) forceSyncProxyRules() { - proxier.mu.Lock() - proxier.needFullSync = true - proxier.mu.Unlock() - - proxier.syncProxyRules() -} - -// This is where all of the iptables-save/restore calls happen. -// The only other iptables rules are those that are setup in iptablesInit() -// This assumes proxier.mu is NOT held -func (proxier *Proxier) syncProxyRules() { - proxier.mu.Lock() - defer proxier.mu.Unlock() - - // don't sync rules till we've received services and endpoints - if !proxier.isInitialized() { - klog.V(2).InfoS("Not syncing iptables until Services and Endpoints have been received from master") - return - } - - // Keep track of how long syncs take. 
- start := time.Now() - defer func() { - metrics.SyncProxyRulesLatency.Observe(metrics.SinceInSeconds(start)) - klog.V(2).InfoS("SyncProxyRules complete", "elapsed", time.Since(start)) - }() - - tryPartialSync := !proxier.needFullSync && utilfeature.DefaultFeatureGate.Enabled(features.MinimizeIPTablesRestore) - var serviceChanged, endpointsChanged sets.String - if tryPartialSync { - serviceChanged = proxier.serviceChanges.PendingChanges() - endpointsChanged = proxier.endpointsChanges.PendingChanges() - } - serviceUpdateResult := proxier.svcPortMap.Update(proxier.serviceChanges) - endpointUpdateResult := proxier.endpointsMap.Update(proxier.endpointsChanges) - - // We need to detect stale connections to UDP Services so we - // can clean dangling conntrack entries that can blackhole traffic. - conntrackCleanupServiceIPs := serviceUpdateResult.DeletedUDPClusterIPs - conntrackCleanupServiceNodePorts := sets.NewInt() - // Merge stale services gathered from updateEndpointsMap; - // a UDP service that changes from 0 to non-0 endpoints is considered stale.
- for _, svcPortName := range endpointUpdateResult.NewlyActiveUDPServices { - if svcInfo, ok := proxier.svcPortMap[svcPortName]; ok { - klog.V(4).InfoS("Newly-active UDP service may have stale conntrack entries", "servicePortName", svcPortName) - conntrackCleanupServiceIPs.Insert(svcInfo.ClusterIP().String()) - for _, extIP := range svcInfo.ExternalIPStrings() { - conntrackCleanupServiceIPs.Insert(extIP) - } - for _, lbIP := range svcInfo.LoadBalancerIPStrings() { - conntrackCleanupServiceIPs.Insert(lbIP) - } - nodePort := svcInfo.NodePort() - if svcInfo.Protocol() == v1.ProtocolUDP && nodePort != 0 { - conntrackCleanupServiceNodePorts.Insert(nodePort) - } - } - } - - klog.V(2).InfoS("Syncing iptables rules") - - success := false - defer func() { - if !success { - klog.InfoS("Sync failed", "retryingTime", proxier.syncPeriod) - proxier.syncRunner.RetryAfter(proxier.syncPeriod) - if tryPartialSync { - metrics.IptablesPartialRestoreFailuresTotal.Inc() - } - // proxier.serviceChanges and proxier.endpointChanges have already - // been flushed, so we've lost the state needed to be able to do - // a partial sync. - proxier.needFullSync = true - } - }() - - if !tryPartialSync { - // Ensure that our jump rules (eg from PREROUTING to KUBE-SERVICES) exist. - // We can't do this as part of the iptables-restore because we don't want - // to specify/replace *all* of the rules in PREROUTING, etc. - // - // We need to create these rules when kube-proxy first starts, and we need - // to recreate them if the utiliptables Monitor detects that iptables has - // been flushed. In both of those cases, the code will force a full sync. - // In all other cases, it ought to be safe to assume that the rules - // already exist, so we'll skip this step when doing a partial sync, to - // save us from having to invoke /sbin/iptables 20 times on each sync - // (which will be very slow on hosts with lots of iptables rules). 
-		for _, jump := range append(iptablesJumpChains, iptablesKubeletJumpChains...) {
-			if _, err := proxier.iptables.EnsureChain(jump.table, jump.dstChain); err != nil {
-				klog.ErrorS(err, "Failed to ensure chain exists", "table", jump.table, "chain", jump.dstChain)
-				return
-			}
-			args := jump.extraArgs
-			if jump.comment != "" {
-				args = append(args, "-m", "comment", "--comment", jump.comment)
-			}
-			args = append(args, "-j", string(jump.dstChain))
-			if _, err := proxier.iptables.EnsureRule(utiliptables.Prepend, jump.table, jump.srcChain, args...); err != nil {
-				klog.ErrorS(err, "Failed to ensure chain jumps", "table", jump.table, "srcChain", jump.srcChain, "dstChain", jump.dstChain)
-				return
-			}
-		}
-	}
-
-	//
-	// Below this point we will not return until we try to write the iptables rules.
-	//
-
-	// Reset all buffers used later.
-	// This is to avoid memory reallocations and thus improve performance.
-	proxier.filterChains.Reset()
-	proxier.filterRules.Reset()
-	proxier.natChains.Reset()
-	proxier.natRules.Reset()
-
-	// Write chain lines for all the "top-level" chains we'll be filling in
-	for _, chainName := range []utiliptables.Chain{kubeServicesChain, kubeExternalServicesChain, kubeForwardChain, kubeNodePortsChain, kubeProxyFirewallChain} {
-		proxier.filterChains.Write(utiliptables.MakeChainLine(chainName))
-	}
-	for _, chainName := range []utiliptables.Chain{kubeServicesChain, kubeNodePortsChain, kubePostroutingChain, kubeMarkMasqChain} {
-		proxier.natChains.Write(utiliptables.MakeChainLine(chainName))
-	}
-
-	// Install the kubernetes-specific postrouting rules. We use a whole chain for
-	// this so that it is easier to flush and change, for example if the mark
-	// value should ever change.
-	// NB: THIS MUST MATCH the corresponding code in the kubelet
-	proxier.natRules.Write(
-		"-A", string(kubePostroutingChain),
-		"-m", "mark", "!", "--mark", fmt.Sprintf("%s/%s", proxier.masqueradeMark, proxier.masqueradeMark),
-		"-j", "RETURN",
-	)
-	// Clear the mark to avoid re-masquerading if the packet re-traverses the network stack.
-	proxier.natRules.Write(
-		"-A", string(kubePostroutingChain),
-		"-j", "MARK", "--xor-mark", proxier.masqueradeMark,
-	)
-	masqRule := []string{
-		"-A", string(kubePostroutingChain),
-		"-m", "comment", "--comment", `"kubernetes service traffic requiring SNAT"`,
-		"-j", "MASQUERADE",
-	}
-	if proxier.iptables.HasRandomFully() {
-		masqRule = append(masqRule, "--random-fully")
-	}
-	proxier.natRules.Write(masqRule)
-
-	// Install the kubernetes-specific masquerade mark rule. We use a whole chain for
-	// this so that it is easier to flush and change, for example if the mark
-	// value should ever change.
-	proxier.natRules.Write(
-		"-A", string(kubeMarkMasqChain),
-		"-j", "MARK", "--or-mark", proxier.masqueradeMark,
-	)
-
-	isIPv6 := proxier.iptables.IsIPv6()
-	if !isIPv6 && proxier.localhostNodePorts {
-		// Kube-proxy's use of `route_localnet` to enable NodePorts on localhost
-		// creates a security hole (https://issue.k8s.io/90259) which this
-		// iptables rule mitigates.
-		// NB: THIS MUST MATCH the corresponding code in the kubelet. (Actually,
-		// kubelet uses "--dst"/"--src" rather than "-d"/"-s" but that's just a
-		// command-line thing and results in the same rule being created.)
-		proxier.filterChains.Write(utiliptables.MakeChainLine(kubeletFirewallChain))
-		proxier.filterRules.Write(
-			"-A", string(kubeletFirewallChain),
-			"-m", "comment", "--comment", `"block incoming localnet connections"`,
-			"-d", "127.0.0.0/8",
-			"!", "-s", "127.0.0.0/8",
-			"-m", "conntrack",
-			"!", "--ctstate", "RELATED,ESTABLISHED,DNAT",
-			"-j", "DROP",
-		)
-	}
-
-	// Accumulate NAT chains to keep.
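The `KUBE-MARK-MASQ` / `KUBE-POSTROUTING` rules in the removed code manipulate a single fwmark bit: `--or-mark` sets it, the `-m mark --mark X/X` match tests it, and `--xor-mark` clears it (safe only because the preceding RETURN rule guarantees the bit is set when that rule is reached). A minimal sketch of that bit arithmetic, assuming the usual kube-proxy default mark `0x4000`; the helper names here are illustrative, not part of the patch:

```go
package main

import "fmt"

// Default kube-proxy masquerade bit (an assumption for this sketch).
const masqueradeBit = 14
const masqueradeMark uint32 = 1 << masqueradeBit // 0x4000

// markForMasq mirrors "-j MARK --or-mark 0x4000" (KUBE-MARK-MASQ).
func markForMasq(mark uint32) uint32 { return mark | masqueradeMark }

// needsMasq mirrors "-m mark --mark 0x4000/0x4000".
func needsMasq(mark uint32) bool { return mark&masqueradeMark != 0 }

// clearMasq mirrors "-j MARK --xor-mark 0x4000"; only valid when the
// bit is known to be set, as in KUBE-POSTROUTING after the mark match.
func clearMasq(mark uint32) uint32 { return mark ^ masqueradeMark }

func main() {
	m := markForMasq(0)
	fmt.Printf("marked: %#x, matches: %v\n", m, needsMasq(m))
	fmt.Printf("cleared: %#x\n", clearMasq(m))
}
```

This is also why the code comments insist the mark value must match the kubelet: both components test and clear the same bit.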
-	activeNATChains := map[utiliptables.Chain]bool{} // use a map as a set
-
-	// To avoid growing this slice, we arbitrarily set its size to 64,
-	// there is never more than that many arguments for a single line.
-	// Note that even if we go over 64, it will still be correct - it
-	// is just for efficiency, not correctness.
-	args := make([]string, 64)
-
-	// Compute total number of endpoint chains across all services
-	// to get a sense of how big the cluster is.
-	totalEndpoints := 0
-	for svcName := range proxier.svcPortMap {
-		totalEndpoints += len(proxier.endpointsMap[svcName])
-	}
-	proxier.largeClusterMode = (totalEndpoints > largeClusterEndpointsThreshold)
-
-	// These two variables are used to publish the sync_proxy_rules_no_endpoints_total
-	// metric.
-	serviceNoLocalEndpointsTotalInternal := 0
-	serviceNoLocalEndpointsTotalExternal := 0
-
-	// Build rules for each service-port.
-	for svcName, svc := range proxier.svcPortMap {
-		svcInfo, ok := svc.(*servicePortInfo)
-		if !ok {
-			klog.ErrorS(nil, "Failed to cast serviceInfo", "serviceName", svcName)
-			continue
-		}
-		protocol := strings.ToLower(string(svcInfo.Protocol()))
-		svcPortNameString := svcInfo.nameString
-
-		// Figure out the endpoints for Cluster and Local traffic policy.
-		// allLocallyReachableEndpoints is the set of all endpoints that can be routed to
-		// from this node, given the service's traffic policies. hasEndpoints is true
-		// if the service has any usable endpoints on any node, not just this one.
-		allEndpoints := proxier.endpointsMap[svcName]
-		clusterEndpoints, localEndpoints, allLocallyReachableEndpoints, hasEndpoints := proxy.CategorizeEndpoints(allEndpoints, svcInfo, proxier.nodeLabels)
-
-		// Note the endpoint chains that will be used
-		for _, ep := range allLocallyReachableEndpoints {
-			if epInfo, ok := ep.(*endpointsInfo); ok {
-				activeNATChains[epInfo.ChainName] = true
-			}
-		}
-
-		// clusterPolicyChain contains the endpoints used with "Cluster" traffic policy
-		clusterPolicyChain := svcInfo.clusterPolicyChainName
-		usesClusterPolicyChain := len(clusterEndpoints) > 0 && svcInfo.UsesClusterEndpoints()
-		if usesClusterPolicyChain {
-			activeNATChains[clusterPolicyChain] = true
-		}
-
-		// localPolicyChain contains the endpoints used with "Local" traffic policy
-		localPolicyChain := svcInfo.localPolicyChainName
-		usesLocalPolicyChain := len(localEndpoints) > 0 && svcInfo.UsesLocalEndpoints()
-		if usesLocalPolicyChain {
-			activeNATChains[localPolicyChain] = true
-		}
-
-		// internalPolicyChain is the chain containing the endpoints for
-		// "internal" (ClusterIP) traffic. internalTrafficChain is the chain that
-		// internal traffic is routed to (which is always the same as
-		// internalPolicyChain). hasInternalEndpoints is true if we should
-		// generate rules pointing to internalTrafficChain, or false if there are
-		// no available internal endpoints.
-		internalPolicyChain := clusterPolicyChain
-		hasInternalEndpoints := hasEndpoints
-		if svcInfo.InternalPolicyLocal() {
-			internalPolicyChain = localPolicyChain
-			if len(localEndpoints) == 0 {
-				hasInternalEndpoints = false
-			}
-		}
-		internalTrafficChain := internalPolicyChain
-
-		// Similarly, externalPolicyChain is the chain containing the endpoints
-		// for "external" (NodePort, LoadBalancer, and ExternalIP) traffic.
-		// externalTrafficChain is the chain that external traffic is routed to
-		// (which is always the service's "EXT" chain). hasExternalEndpoints is
-		// true if there are endpoints that will be reached by external traffic.
-		// (But we may still have to generate externalTrafficChain even if there
-		// are no external endpoints, to ensure that the short-circuit rules for
-		// local traffic are set up.)
-		externalPolicyChain := clusterPolicyChain
-		hasExternalEndpoints := hasEndpoints
-		if svcInfo.ExternalPolicyLocal() {
-			externalPolicyChain = localPolicyChain
-			if len(localEndpoints) == 0 {
-				hasExternalEndpoints = false
-			}
-		}
-		externalTrafficChain := svcInfo.externalChainName // eventually jumps to externalPolicyChain
-
-		// usesExternalTrafficChain is based on hasEndpoints, not hasExternalEndpoints,
-		// because we need the local-traffic-short-circuiting rules even when there
-		// are no externally-usable endpoints.
-		usesExternalTrafficChain := hasEndpoints && svcInfo.ExternallyAccessible()
-		if usesExternalTrafficChain {
-			activeNATChains[externalTrafficChain] = true
-		}
-
-		// Traffic to LoadBalancer IPs can go directly to externalTrafficChain
-		// unless LoadBalancerSourceRanges is in use in which case we will
-		// create a firewall chain.
-		loadBalancerTrafficChain := externalTrafficChain
-		fwChain := svcInfo.firewallChainName
-		usesFWChain := hasEndpoints && len(svcInfo.LoadBalancerIPStrings()) > 0 && len(svcInfo.LoadBalancerSourceRanges()) > 0
-		if usesFWChain {
-			activeNATChains[fwChain] = true
-			loadBalancerTrafficChain = fwChain
-		}
-
-		var internalTrafficFilterTarget, internalTrafficFilterComment string
-		var externalTrafficFilterTarget, externalTrafficFilterComment string
-		if !hasEndpoints {
-			// The service has no endpoints at all; hasInternalEndpoints and
-			// hasExternalEndpoints will also be false, and we will not
-			// generate any chains in the "nat" table for the service; only
-			// rules in the "filter" table rejecting incoming packets for
-			// the service's IPs.
-			internalTrafficFilterTarget = "REJECT"
-			internalTrafficFilterComment = fmt.Sprintf(`"%s has no endpoints"`, svcPortNameString)
-			externalTrafficFilterTarget = "REJECT"
-			externalTrafficFilterComment = internalTrafficFilterComment
-		} else {
-			if !hasInternalEndpoints {
-				// The internalTrafficPolicy is "Local" but there are no local
-				// endpoints. Traffic to the clusterIP will be dropped, but
-				// external traffic may still be accepted.
-				internalTrafficFilterTarget = "DROP"
-				internalTrafficFilterComment = fmt.Sprintf(`"%s has no local endpoints"`, svcPortNameString)
-				serviceNoLocalEndpointsTotalInternal++
-			}
-			if !hasExternalEndpoints {
-				// The externalTrafficPolicy is "Local" but there are no
-				// local endpoints. Traffic to "external" IPs from outside
-				// the cluster will be dropped, but traffic from inside
-				// the cluster may still be accepted.
-				externalTrafficFilterTarget = "DROP"
-				externalTrafficFilterComment = fmt.Sprintf(`"%s has no local endpoints"`, svcPortNameString)
-				serviceNoLocalEndpointsTotalExternal++
-			}
-		}
-
-		// Capture the clusterIP.
-		if hasInternalEndpoints {
-			proxier.natRules.Write(
-				"-A", string(kubeServicesChain),
-				"-m", "comment", "--comment", fmt.Sprintf(`"%s cluster IP"`, svcPortNameString),
-				"-m", protocol, "-p", protocol,
-				"-d", svcInfo.ClusterIP().String(),
-				"--dport", strconv.Itoa(svcInfo.Port()),
-				"-j", string(internalTrafficChain))
-		} else {
-			// No endpoints.
-			proxier.filterRules.Write(
-				"-A", string(kubeServicesChain),
-				"-m", "comment", "--comment", internalTrafficFilterComment,
-				"-m", protocol, "-p", protocol,
-				"-d", svcInfo.ClusterIP().String(),
-				"--dport", strconv.Itoa(svcInfo.Port()),
-				"-j", internalTrafficFilterTarget,
-			)
-		}
-
-		// Capture externalIPs.
-		for _, externalIP := range svcInfo.ExternalIPStrings() {
-			if hasEndpoints {
-				// Send traffic bound for external IPs to the "external
-				// destinations" chain.
-				proxier.natRules.Write(
-					"-A", string(kubeServicesChain),
-					"-m", "comment", "--comment", fmt.Sprintf(`"%s external IP"`, svcPortNameString),
-					"-m", protocol, "-p", protocol,
-					"-d", externalIP,
-					"--dport", strconv.Itoa(svcInfo.Port()),
-					"-j", string(externalTrafficChain))
-			}
-			if !hasExternalEndpoints {
-				// Either no endpoints at all (REJECT) or no endpoints for
-				// external traffic (DROP anything that didn't get
-				// short-circuited by the EXT chain.)
-				proxier.filterRules.Write(
-					"-A", string(kubeExternalServicesChain),
-					"-m", "comment", "--comment", externalTrafficFilterComment,
-					"-m", protocol, "-p", protocol,
-					"-d", externalIP,
-					"--dport", strconv.Itoa(svcInfo.Port()),
-					"-j", externalTrafficFilterTarget,
-				)
-			}
-		}
-
-		// Capture load-balancer ingress.
-		for _, lbip := range svcInfo.LoadBalancerIPStrings() {
-			if hasEndpoints {
-				proxier.natRules.Write(
-					"-A", string(kubeServicesChain),
-					"-m", "comment", "--comment", fmt.Sprintf(`"%s loadbalancer IP"`, svcPortNameString),
-					"-m", protocol, "-p", protocol,
-					"-d", lbip,
-					"--dport", strconv.Itoa(svcInfo.Port()),
-					"-j", string(loadBalancerTrafficChain))
-
-			}
-			if usesFWChain {
-				proxier.filterRules.Write(
-					"-A", string(kubeProxyFirewallChain),
-					"-m", "comment", "--comment", fmt.Sprintf(`"%s traffic not accepted by %s"`, svcPortNameString, svcInfo.firewallChainName),
-					"-m", protocol, "-p", protocol,
-					"-d", lbip,
-					"--dport", strconv.Itoa(svcInfo.Port()),
-					"-j", "DROP")
-			}
-		}
-		if !hasExternalEndpoints {
-			// Either no endpoints at all (REJECT) or no endpoints for
-			// external traffic (DROP anything that didn't get short-circuited
-			// by the EXT chain.)
-			for _, lbip := range svcInfo.LoadBalancerIPStrings() {
-				proxier.filterRules.Write(
-					"-A", string(kubeExternalServicesChain),
-					"-m", "comment", "--comment", externalTrafficFilterComment,
-					"-m", protocol, "-p", protocol,
-					"-d", lbip,
-					"--dport", strconv.Itoa(svcInfo.Port()),
-					"-j", externalTrafficFilterTarget,
-				)
-			}
-		}
-
-		// Capture nodeports.
-		if svcInfo.NodePort() != 0 {
-			if hasEndpoints {
-				// Jump to the external destination chain. For better or for
-				// worse, nodeports are not subect to loadBalancerSourceRanges,
-				// and we can't change that.
-				proxier.natRules.Write(
-					"-A", string(kubeNodePortsChain),
-					"-m", "comment", "--comment", svcPortNameString,
-					"-m", protocol, "-p", protocol,
-					"--dport", strconv.Itoa(svcInfo.NodePort()),
-					"-j", string(externalTrafficChain))
-			}
-			if !hasExternalEndpoints {
-				// Either no endpoints at all (REJECT) or no endpoints for
-				// external traffic (DROP anything that didn't get
-				// short-circuited by the EXT chain.)
-				proxier.filterRules.Write(
-					"-A", string(kubeExternalServicesChain),
-					"-m", "comment", "--comment", externalTrafficFilterComment,
-					"-m", "addrtype", "--dst-type", "LOCAL",
-					"-m", protocol, "-p", protocol,
-					"--dport", strconv.Itoa(svcInfo.NodePort()),
-					"-j", externalTrafficFilterTarget,
-				)
-			}
-		}
-
-		// Capture healthCheckNodePorts.
-		if svcInfo.HealthCheckNodePort() != 0 {
-			// no matter if node has local endpoints, healthCheckNodePorts
-			// need to add a rule to accept the incoming connection
-			proxier.filterRules.Write(
-				"-A", string(kubeNodePortsChain),
-				"-m", "comment", "--comment", fmt.Sprintf(`"%s health check node port"`, svcPortNameString),
-				"-m", "tcp", "-p", "tcp",
-				"--dport", strconv.Itoa(svcInfo.HealthCheckNodePort()),
-				"-j", "ACCEPT",
-			)
-		}
-
-		// If the SVC/SVL/EXT/FW/SEP chains have not changed since the last sync
-		// then we can omit them from the restore input. (We have already marked
-		// them in activeNATChains, so they won't get deleted.)
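The partial-sync optimization in the removed code skips re-emitting a service's per-service chains unless that service's Service or EndpointSlice objects changed since the last sync (a full sync always rewrites everything). A simplified sketch of that decision, with illustrative names and plain string-keyed sets rather than the proxier's own types:

```go
package main

import "fmt"

// needsRewrite sketches the partial-sync skip: during a partial sync a
// service's SVC/SVL/EXT/FW/SEP chains are re-emitted only if the service
// or its endpoints changed; a full sync rewrites everything.
func needsRewrite(svc string, changedServices, changedEndpoints map[string]bool, tryPartialSync bool) bool {
	if !tryPartialSync {
		return true // full sync: always rewrite
	}
	return changedServices[svc] || changedEndpoints[svc]
}

func main() {
	changedSvcs := map[string]bool{"default/web": true}
	changedEps := map[string]bool{}
	fmt.Println(needsRewrite("default/web", changedSvcs, changedEps, true)) // changed: rewrite
	fmt.Println(needsRewrite("default/db", changedSvcs, changedEps, true))  // unchanged: skip
}
```

The skipped chains stay referenced by `activeNATChains`, which is what keeps the cleanup pass from deleting them.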
-		if tryPartialSync && !serviceChanged.Has(svcName.NamespacedName.String()) && !endpointsChanged.Has(svcName.NamespacedName.String()) {
-			continue
-		}
-
-		// Set up internal traffic handling.
-		if hasInternalEndpoints {
-			args = append(args[:0],
-				"-m", "comment", "--comment", fmt.Sprintf(`"%s cluster IP"`, svcPortNameString),
-				"-m", protocol, "-p", protocol,
-				"-d", svcInfo.ClusterIP().String(),
-				"--dport", strconv.Itoa(svcInfo.Port()),
-			)
-			if proxier.masqueradeAll {
-				proxier.natRules.Write(
-					"-A", string(internalTrafficChain),
-					args,
-					"-j", string(kubeMarkMasqChain))
-			} else if proxier.localDetector.IsImplemented() {
-				// This masquerades off-cluster traffic to a service VIP. The
-				// idea is that you can establish a static route for your
-				// Service range, routing to any node, and that node will
-				// bridge into the Service for you. Since that might bounce
-				// off-node, we masquerade here.
-				proxier.natRules.Write(
-					"-A", string(internalTrafficChain),
-					args,
-					proxier.localDetector.IfNotLocal(),
-					"-j", string(kubeMarkMasqChain))
-			}
-		}
-
-		// Set up external traffic handling (if any "external" destinations are
-		// enabled). All captured traffic for all external destinations should
-		// jump to externalTrafficChain, which will handle some special cases and
-		// then jump to externalPolicyChain.
-		if usesExternalTrafficChain {
-			proxier.natChains.Write(utiliptables.MakeChainLine(externalTrafficChain))
-
-			if !svcInfo.ExternalPolicyLocal() {
-				// If we are using non-local endpoints we need to masquerade,
-				// in case we cross nodes.
-				proxier.natRules.Write(
-					"-A", string(externalTrafficChain),
-					"-m", "comment", "--comment", fmt.Sprintf(`"masquerade traffic for %s external destinations"`, svcPortNameString),
-					"-j", string(kubeMarkMasqChain))
-			} else {
-				// If we are only using same-node endpoints, we can retain the
-				// source IP in most cases.
-
-				if proxier.localDetector.IsImplemented() {
-					// Treat all locally-originated pod -> external destination
-					// traffic as a special-case. It is subject to neither
-					// form of traffic policy, which simulates going up-and-out
-					// to an external load-balancer and coming back in.
-					proxier.natRules.Write(
-						"-A", string(externalTrafficChain),
-						"-m", "comment", "--comment", fmt.Sprintf(`"pod traffic for %s external destinations"`, svcPortNameString),
-						proxier.localDetector.IfLocal(),
-						"-j", string(clusterPolicyChain))
-				}
-
-				// Locally originated traffic (not a pod, but the host node)
-				// still needs masquerade because the LBIP itself is a local
-				// address, so that will be the chosen source IP.
-				proxier.natRules.Write(
-					"-A", string(externalTrafficChain),
-					"-m", "comment", "--comment", fmt.Sprintf(`"masquerade LOCAL traffic for %s external destinations"`, svcPortNameString),
-					"-m", "addrtype", "--src-type", "LOCAL",
-					"-j", string(kubeMarkMasqChain))
-
-				// Redirect all src-type=LOCAL -> external destination to the
-				// policy=cluster chain. This allows traffic originating
-				// from the host to be redirected to the service correctly.
-				proxier.natRules.Write(
-					"-A", string(externalTrafficChain),
-					"-m", "comment", "--comment", fmt.Sprintf(`"route LOCAL traffic for %s external destinations"`, svcPortNameString),
-					"-m", "addrtype", "--src-type", "LOCAL",
-					"-j", string(clusterPolicyChain))
-			}
-
-			// Anything else falls thru to the appropriate policy chain.
-			if hasExternalEndpoints {
-				proxier.natRules.Write(
-					"-A", string(externalTrafficChain),
-					"-j", string(externalPolicyChain))
-			}
-		}
-
-		// Set up firewall chain, if needed
-		if usesFWChain {
-			proxier.natChains.Write(utiliptables.MakeChainLine(fwChain))
-
-			// The service firewall rules are created based on the
-			// loadBalancerSourceRanges field. This only works for VIP-like
-			// loadbalancers that preserve source IPs. For loadbalancers which
-			// direct traffic to service NodePort, the firewall rules will not
-			// apply.
-			args = append(args[:0],
-				"-A", string(fwChain),
-				"-m", "comment", "--comment", fmt.Sprintf(`"%s loadbalancer IP"`, svcPortNameString),
-			)
-
-			// firewall filter based on each source range
-			allowFromNode := false
-			for _, src := range svcInfo.LoadBalancerSourceRanges() {
-				proxier.natRules.Write(args, "-s", src, "-j", string(externalTrafficChain))
-				_, cidr, err := netutils.ParseCIDRSloppy(src)
-				if err != nil {
-					klog.ErrorS(err, "Error parsing CIDR in LoadBalancerSourceRanges, dropping it", "cidr", cidr)
-				} else if cidr.Contains(proxier.nodeIP) {
-					allowFromNode = true
-				}
-			}
-			// For VIP-like LBs, the VIP is often added as a local
-			// address (via an IP route rule). In that case, a request
-			// from a node to the VIP will not hit the loadbalancer but
-			// will loop back with the source IP set to the VIP. We
-			// need the following rules to allow requests from this node.
-			if allowFromNode {
-				for _, lbip := range svcInfo.LoadBalancerIPStrings() {
-					proxier.natRules.Write(
-						args,
-						"-s", lbip,
-						"-j", string(externalTrafficChain))
-				}
-			}
-			// If the packet was able to reach the end of firewall chain,
-			// then it did not get DNATed, so it will match the
-			// corresponding KUBE-PROXY-FIREWALL rule.
-			proxier.natRules.Write(
-				"-A", string(fwChain),
-				"-m", "comment", "--comment", fmt.Sprintf(`"other traffic to %s will be dropped by KUBE-PROXY-FIREWALL"`, svcPortNameString),
-			)
-		}
-
-		// If Cluster policy is in use, create the chain and create rules jumping
-		// from clusterPolicyChain to the clusterEndpoints
-		if usesClusterPolicyChain {
-			proxier.natChains.Write(utiliptables.MakeChainLine(clusterPolicyChain))
-			proxier.writeServiceToEndpointRules(svcPortNameString, svcInfo, clusterPolicyChain, clusterEndpoints, args)
-		}
-
-		// If Local policy is in use, create the chain and create rules jumping
-		// from localPolicyChain to the localEndpoints
-		if usesLocalPolicyChain {
-			proxier.natChains.Write(utiliptables.MakeChainLine(localPolicyChain))
-			proxier.writeServiceToEndpointRules(svcPortNameString, svcInfo, localPolicyChain, localEndpoints, args)
-		}
-
-		// Generate the per-endpoint chains.
-		for _, ep := range allLocallyReachableEndpoints {
-			epInfo, ok := ep.(*endpointsInfo)
-			if !ok {
-				klog.ErrorS(nil, "Failed to cast endpointsInfo", "endpointsInfo", ep)
-				continue
-			}
-
-			endpointChain := epInfo.ChainName
-
-			// Create the endpoint chain
-			proxier.natChains.Write(utiliptables.MakeChainLine(endpointChain))
-			activeNATChains[endpointChain] = true
-
-			args = append(args[:0], "-A", string(endpointChain))
-			args = proxier.appendServiceCommentLocked(args, svcPortNameString)
-			// Handle traffic that loops back to the originator with SNAT.
-			proxier.natRules.Write(
-				args,
-				"-s", epInfo.IP(),
-				"-j", string(kubeMarkMasqChain))
-			// Update client-affinity lists.
-			if svcInfo.SessionAffinityType() == v1.ServiceAffinityClientIP {
-				args = append(args, "-m", "recent", "--name", string(endpointChain), "--set")
-			}
-			// DNAT to final destination.
-			args = append(args, "-m", protocol, "-p", protocol, "-j", "DNAT", "--to-destination", epInfo.Endpoint)
-			proxier.natRules.Write(args)
-		}
-	}
-
-	// Delete chains no longer in use. Since "iptables-save" can take several seconds
-	// to run on hosts with lots of iptables rules, we don't bother to do this on
-	// every sync in large clusters. (Stale chains will not be referenced by any
-	// active rules, so they're harmless other than taking up memory.)
-	if !proxier.largeClusterMode || time.Since(proxier.lastIPTablesCleanup) > proxier.syncPeriod {
-		var existingNATChains map[utiliptables.Chain]struct{}
-
-		proxier.iptablesData.Reset()
-		if err := proxier.iptables.SaveInto(utiliptables.TableNAT, proxier.iptablesData); err == nil {
-			existingNATChains = utiliptables.GetChainsFromTable(proxier.iptablesData.Bytes())
-
-			for chain := range existingNATChains {
-				if !activeNATChains[chain] {
-					chainString := string(chain)
-					if !isServiceChainName(chainString) {
-						// Ignore chains that aren't ours.
-						continue
-					}
-					// We must (as per iptables) write a chain-line
-					// for it, which has the nice effect of flushing
-					// the chain. Then we can remove the chain.
-					proxier.natChains.Write(utiliptables.MakeChainLine(chain))
-					proxier.natRules.Write("-X", chainString)
-				}
-			}
-			proxier.lastIPTablesCleanup = time.Now()
-		} else {
-			klog.ErrorS(err, "Failed to execute iptables-save: stale chains will not be deleted")
-		}
-	}
-
-	// Finally, tail-call to the nodePorts chain. This needs to be after all
-	// other service portal rules.
-	nodeAddresses, err := proxier.nodePortAddresses.GetNodeAddresses(proxier.networkInterfacer)
-	if err != nil {
-		klog.ErrorS(err, "Failed to get node ip address matching nodeport cidrs, services with nodeport may not work as intended", "CIDRs", proxier.nodePortAddresses)
-	}
-	// nodeAddresses may contain dual-stack zero-CIDRs if proxier.nodePortAddresses is empty.
-	// Ensure nodeAddresses only contains the addresses for this proxier's IP family.
-	for addr := range nodeAddresses {
-		if utilproxy.IsZeroCIDR(addr) && isIPv6 == netutils.IsIPv6CIDRString(addr) {
-			// if any of the addresses is zero cidr of this IP family, non-zero IPs can be excluded.
-			nodeAddresses = sets.NewString(addr)
-			break
-		}
-	}
-
-	for address := range nodeAddresses {
-		if utilproxy.IsZeroCIDR(address) {
-			destinations := []string{"-m", "addrtype", "--dst-type", "LOCAL"}
-			if isIPv6 {
-				// For IPv6, Regardless of the value of localhostNodePorts is true
-				// or false, we should disable access to the nodePort via localhost. Since it never works and always
-				// cause kernel warnings.
-				destinations = append(destinations, "!", "-d", "::1/128")
-			}
-
-			if !proxier.localhostNodePorts && !isIPv6 {
-				// If set localhostNodePorts to "false"(route_localnet=0), We should generate iptables rules that
-				// disable NodePort services to be accessed via localhost. Since it doesn't work and causes
-				// the kernel to log warnings if anyone tries.
-				destinations = append(destinations, "!", "-d", "127.0.0.0/8")
-			}
-
-			proxier.natRules.Write(
-				"-A", string(kubeServicesChain),
-				"-m", "comment", "--comment", `"kubernetes service nodeports; NOTE: this must be the last rule in this chain"`,
-				destinations,
-				"-j", string(kubeNodePortsChain))
-			break
-		}
-
-		// Ignore IP addresses with incorrect version
-		if isIPv6 && !netutils.IsIPv6String(address) || !isIPv6 && netutils.IsIPv6String(address) {
-			klog.ErrorS(nil, "IP has incorrect IP version", "IP", address)
-			continue
-		}
-
-		// For ipv6, Regardless of the value of localhostNodePorts is true or false, we should disallow access
-		// to the nodePort via lookBack address.
-		if isIPv6 && utilproxy.IsLoopBack(address) {
-			klog.ErrorS(nil, "disallow nodePort services to be accessed via ipv6 localhost address", "IP", address)
-			continue
-		}
-
-		// For ipv4, When localhostNodePorts is set to false, Ignore ipv4 lookBack address
-		if !isIPv6 && utilproxy.IsLoopBack(address) && !proxier.localhostNodePorts {
-			klog.ErrorS(nil, "disallow nodePort services to be accessed via ipv4 localhost address", "IP", address)
-			continue
-		}
-
-		// create nodeport rules for each IP one by one
-		proxier.natRules.Write(
-			"-A", string(kubeServicesChain),
-			"-m", "comment", "--comment", `"kubernetes service nodeports; NOTE: this must be the last rule in this chain"`,
-			"-d", address,
-			"-j", string(kubeNodePortsChain))
-	}
-
-	// Drop the packets in INVALID state, which would potentially cause
-	// unexpected connection reset.
-	// https://github.com/kubernetes/kubernetes/issues/74839
-	proxier.filterRules.Write(
-		"-A", string(kubeForwardChain),
-		"-m", "conntrack",
-		"--ctstate", "INVALID",
-		"-j", "DROP",
-	)
-
-	// If the masqueradeMark has been added then we want to forward that same
-	// traffic, this allows NodePort traffic to be forwarded even if the default
-	// FORWARD policy is not accept.
-	proxier.filterRules.Write(
-		"-A", string(kubeForwardChain),
-		"-m", "comment", "--comment", `"kubernetes forwarding rules"`,
-		"-m", "mark", "--mark", fmt.Sprintf("%s/%s", proxier.masqueradeMark, proxier.masqueradeMark),
-		"-j", "ACCEPT",
-	)
-
-	// The following rule ensures the traffic after the initial packet accepted
-	// by the "kubernetes forwarding rules" rule above will be accepted.
-	proxier.filterRules.Write(
-		"-A", string(kubeForwardChain),
-		"-m", "comment", "--comment", `"kubernetes forwarding conntrack rule"`,
-		"-m", "conntrack",
-		"--ctstate", "RELATED,ESTABLISHED",
-		"-j", "ACCEPT",
-	)
-
-	metrics.IptablesRulesTotal.WithLabelValues(string(utiliptables.TableFilter)).Set(float64(proxier.filterRules.Lines()))
-	metrics.IptablesRulesTotal.WithLabelValues(string(utiliptables.TableNAT)).Set(float64(proxier.natRules.Lines()))
-
-	// Sync rules.
-	proxier.iptablesData.Reset()
-	proxier.iptablesData.WriteString("*filter\n")
-	proxier.iptablesData.Write(proxier.filterChains.Bytes())
-	proxier.iptablesData.Write(proxier.filterRules.Bytes())
-	proxier.iptablesData.WriteString("COMMIT\n")
-	proxier.iptablesData.WriteString("*nat\n")
-	proxier.iptablesData.Write(proxier.natChains.Bytes())
-	proxier.iptablesData.Write(proxier.natRules.Bytes())
-	proxier.iptablesData.WriteString("COMMIT\n")
-
-	klog.V(2).InfoS("Reloading service iptables data",
-		"numServices", len(proxier.svcPortMap),
-		"numEndpoints", totalEndpoints,
-		"numFilterChains", proxier.filterChains.Lines(),
-		"numFilterRules", proxier.filterRules.Lines(),
-		"numNATChains", proxier.natChains.Lines(),
-		"numNATRules", proxier.natRules.Lines(),
-	)
-	klog.V(9).InfoS("Restoring iptables", "rules", proxier.iptablesData.Bytes())
-
-	// NOTE: NoFlushTables is used so we don't flush non-kubernetes chains in the table
-	err = proxier.iptables.RestoreAll(proxier.iptablesData.Bytes(), utiliptables.NoFlushTables, utiliptables.RestoreCounters)
-	if err != nil {
-		if pErr, ok := err.(utiliptables.ParseError); ok {
-			lines := utiliptables.ExtractLines(proxier.iptablesData.Bytes(), pErr.Line(), 3)
-			klog.ErrorS(pErr, "Failed to execute iptables-restore", "rules", lines)
-		} else {
-			klog.ErrorS(err, "Failed to execute iptables-restore")
-		}
-		metrics.IptablesRestoreFailuresTotal.Inc()
-		return
-	}
-	success = true
-	proxier.needFullSync = false
-
-	for name, lastChangeTriggerTimes := range endpointUpdateResult.LastChangeTriggerTimes {
-		for _, lastChangeTriggerTime := range lastChangeTriggerTimes {
-			latency := metrics.SinceInSeconds(lastChangeTriggerTime)
-			metrics.NetworkProgrammingLatency.Observe(latency)
-			klog.V(4).InfoS("Network programming", "endpoint", klog.KRef(name.Namespace, name.Name), "elapsed", latency)
-		}
-	}
-
-	metrics.SyncProxyRulesNoLocalEndpointsTotal.WithLabelValues("internal").Set(float64(serviceNoLocalEndpointsTotalInternal))
-	metrics.SyncProxyRulesNoLocalEndpointsTotal.WithLabelValues("external").Set(float64(serviceNoLocalEndpointsTotalExternal))
-	if proxier.healthzServer != nil {
-		proxier.healthzServer.Updated()
-	}
-	metrics.SyncProxyRulesLastTimestamp.SetToCurrentTime()
-
-	// Update service healthchecks. The endpoints list might include services that are
-	// not "OnlyLocal", but the services list will not, and the serviceHealthServer
-	// will just drop those endpoints.
-	if err := proxier.serviceHealthServer.SyncServices(proxier.svcPortMap.HealthCheckNodePorts()); err != nil {
-		klog.ErrorS(err, "Error syncing healthcheck services")
-	}
-	if err := proxier.serviceHealthServer.SyncEndpoints(proxier.endpointsMap.LocalReadyEndpoints()); err != nil {
-		klog.ErrorS(err, "Error syncing healthcheck endpoints")
-	}
-
-	// Finish housekeeping.
-	// Clear stale conntrack entries for UDP Services, this has to be done AFTER the iptables rules are programmed.
-	// TODO: these could be made more consistent.
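The sync step in the removed code assembles a single `iptables-restore` payload per table: chain declarations first, then rules, then `COMMIT`. A miniature sketch of that payload shape, with illustrative chain and rule strings (the proxier's real buffers hold pre-serialized lines):

```go
package main

import (
	"bytes"
	"fmt"
)

// buildRestorePayload sketches the iptables-restore input format used
// above: a "*table" header, chain declarations, rules, then COMMIT.
func buildRestorePayload(natChains, natRules []string) string {
	var b bytes.Buffer
	b.WriteString("*nat\n")
	for _, c := range natChains {
		b.WriteString(c + "\n") // e.g. ":KUBE-SERVICES - [0:0]"
	}
	for _, r := range natRules {
		b.WriteString(r + "\n") // e.g. "-A KUBE-SERVICES ..."
	}
	b.WriteString("COMMIT\n")
	return b.String()
}

func main() {
	payload := buildRestorePayload(
		[]string{":KUBE-SERVICES - [0:0]"},
		[]string{"-A KUBE-SERVICES -j RETURN"},
	)
	fmt.Print(payload)
}
```

Writing the whole ruleset in one `iptables-restore` call (with `NoFlushTables`, so unrelated chains survive) is what makes the sync atomic from the kernel's point of view.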
-	klog.V(4).InfoS("Deleting conntrack stale entries for services", "IPs", conntrackCleanupServiceIPs.UnsortedList())
-	for _, svcIP := range conntrackCleanupServiceIPs.UnsortedList() {
-		if err := conntrack.ClearEntriesForIP(proxier.exec, svcIP, v1.ProtocolUDP); err != nil {
-			klog.ErrorS(err, "Failed to delete stale service connections", "IP", svcIP)
-		}
-	}
-	klog.V(4).InfoS("Deleting conntrack stale entries for services", "nodePorts", conntrackCleanupServiceNodePorts.UnsortedList())
-	for _, nodePort := range conntrackCleanupServiceNodePorts.UnsortedList() {
-		err := conntrack.ClearEntriesForPort(proxier.exec, nodePort, isIPv6, v1.ProtocolUDP)
-		if err != nil {
-			klog.ErrorS(err, "Failed to clear udp conntrack", "nodePort", nodePort)
-		}
-	}
-	klog.V(4).InfoS("Deleting stale endpoint connections", "endpoints", endpointUpdateResult.DeletedUDPEndpoints)
-	proxier.deleteUDPEndpointConnections(endpointUpdateResult.DeletedUDPEndpoints)
-}
-
-func (proxier *Proxier) writeServiceToEndpointRules(svcPortNameString string, svcInfo proxy.ServicePort, svcChain utiliptables.Chain, endpoints []proxy.Endpoint, args []string) {
-	// First write session affinity rules, if applicable.
-	if svcInfo.SessionAffinityType() == v1.ServiceAffinityClientIP {
-		for _, ep := range endpoints {
-			epInfo, ok := ep.(*endpointsInfo)
-			if !ok {
-				continue
-			}
-			comment := fmt.Sprintf(`"%s -> %s"`, svcPortNameString, epInfo.Endpoint)
-
-			args = append(args[:0],
-				"-A", string(svcChain),
-			)
-			args = proxier.appendServiceCommentLocked(args, comment)
-			args = append(args,
-				"-m", "recent", "--name", string(epInfo.ChainName),
-				"--rcheck", "--seconds", strconv.Itoa(svcInfo.StickyMaxAgeSeconds()), "--reap",
-				"-j", string(epInfo.ChainName),
-			)
-			proxier.natRules.Write(args)
-		}
-	}
-
-	// Now write loadbalancing rules.
- numEndpoints := len(endpoints) - for i, ep := range endpoints { - epInfo, ok := ep.(*endpointsInfo) - if !ok { - continue - } - comment := fmt.Sprintf(`"%s -> %s"`, svcPortNameString, epInfo.Endpoint) - - args = append(args[:0], "-A", string(svcChain)) - args = proxier.appendServiceCommentLocked(args, comment) - if i < (numEndpoints - 1) { - // Each rule is a probabilistic match. - args = append(args, - "-m", "statistic", - "--mode", "random", - "--probability", proxier.probability(numEndpoints-i)) - } - // The final (or only if n == 1) rule is a guaranteed match. - proxier.natRules.Write(args, "-j", string(epInfo.ChainName)) - } -} diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/ipvs/OWNERS b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/ipvs/OWNERS index cbaad91602cb..b33dc0c340dc 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/ipvs/OWNERS +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/ipvs/OWNERS @@ -3,6 +3,7 @@ reviewers: - sig-network-reviewers - andrewsykim + - aroradaman - uablrek approvers: - sig-network-approvers diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/ipvs/graceful_termination.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/ipvs/graceful_termination.go index 5cc92db57c09..5c873af52306 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/ipvs/graceful_termination.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/ipvs/graceful_termination.go @@ -23,7 +23,7 @@ import ( "k8s.io/apimachinery/pkg/util/wait" "k8s.io/klog/v2" - utilipvs "k8s.io/kubernetes/pkg/util/ipvs" + utilipvs "k8s.io/kubernetes/pkg/proxy/ipvs/util" ) const ( diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/ipvs/ipset.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/ipvs/ipset.go index f77280d6bcfb..5147c112c4f0 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/ipvs/ipset.go +++ 
b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/ipvs/ipset.go @@ -19,7 +19,7 @@ package ipvs import ( "k8s.io/apimachinery/pkg/util/sets" utilversion "k8s.io/apimachinery/pkg/util/version" - utilipset "k8s.io/kubernetes/pkg/util/ipset" + utilipset "k8s.io/kubernetes/pkg/proxy/ipvs/ipset" "fmt" "strings" diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/util/ipset/ipset.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/ipvs/ipset/ipset.go similarity index 94% rename from cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/util/ipset/ipset.go rename to cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/ipvs/ipset/ipset.go index c82fe0c310f5..f6532dc090ef 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/util/ipset/ipset.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/ipvs/ipset/ipset.go @@ -28,6 +28,8 @@ import ( netutils "k8s.io/utils/net" ) +var validationError = fmt.Errorf("failed to validate entry for ipset") + // Interface is an injectable interface for running ipset commands. Implementations must be goroutine-safe. type Interface interface { // FlushSet deletes all entries from a named set. @@ -165,7 +167,7 @@ type Entry struct { // Validate checks if a given ipset entry is valid or not. The set parameter is the ipset that entry belongs to. 
func (e *Entry) Validate(set *IPSet) bool { if e.Port < 0 { - klog.Errorf("Entry %v port number %d should be >=0 for ipset %v", e, e.Port, set) + klog.ErrorS(validationError, "port number should be >=0", "entry", e, "port", e.Port, "ipset", set) return false } switch e.SetType { @@ -187,7 +189,7 @@ func (e *Entry) Validate(set *IPSet) bool { // IP2 can not be empty for `hash:ip,port,ip` type ip set if netutils.ParseIPSloppy(e.IP2) == nil { - klog.Errorf("Error parsing entry %v second ip address %v for ipset %v", e, e.IP2, set) + klog.ErrorS(validationError, "error parsing second ip address", "entry", e, "ip", e.IP2, "ipset", set) return false } case HashIPPortNet: @@ -198,22 +200,22 @@ func (e *Entry) Validate(set *IPSet) bool { // Net can not be empty for `hash:ip,port,net` type ip set if _, ipNet, err := netutils.ParseCIDRSloppy(e.Net); ipNet == nil { - klog.Errorf("Error parsing entry %v ip net %v for ipset %v, error: %v", e, e.Net, set, err) + klog.ErrorS(err, "error parsing ip net", "entry", e, "net", e.Net, "set", set) return false } case BitmapPort: // check if port number satisfies its ipset's requirement of port range if set == nil { - klog.Errorf("Unable to reference ip set where the entry %v exists", e) + klog.ErrorS(validationError, "unable to reference ip set where the entry exists", "entry", e) return false } begin, end, err := parsePortRange(set.PortRange) if err != nil { - klog.Errorf("Failed to parse set %v port range %s for ipset %v, error: %v", set, set.PortRange, set, err) + klog.ErrorS(err, "failed to parse set port range", "ipset", set, "portRange", set.PortRange) return false } if e.Port < begin || e.Port > end { - klog.Errorf("Entry %v port number %d is not in the port range %s of its ipset %v", e, e.Port, set.PortRange, set) + klog.ErrorS(validationError, "port number is not in the port range of its ipset", "entry", e, "port", e.Port, "portRange", set.PortRange, "ipset", set) return false } } @@ -261,7 +263,7 @@ func (e *Entry) 
checkIPandProtocol(set *IPSet) bool { // checkIP checks if IP of Entry is valid. func (e *Entry) checkIP(set *IPSet) bool { if netutils.ParseIPSloppy(e.IP) == nil { - klog.Errorf("Error parsing entry %v ip address %v for ipset %v", e, e.IP, set) + klog.ErrorS(validationError, "error parsing ip address", "entry", e, "ip", e.IP, "ipset", set) return false } @@ -489,7 +491,7 @@ func validateProtocol(protocol string) bool { if protocol == ProtocolTCP || protocol == ProtocolUDP || protocol == ProtocolSCTP { return true } - klog.Errorf("Invalid entry's protocol: %s, supported protocols are [%s, %s, %s]", protocol, ProtocolTCP, ProtocolUDP, ProtocolSCTP) + klog.ErrorS(validationError, "invalid protocol", "protocol", protocol, "supportedProtocols", []string{ProtocolTCP, ProtocolUDP, ProtocolSCTP}) return false } diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/util/ipset/types.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/ipvs/ipset/types.go similarity index 100% rename from cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/util/ipset/types.go rename to cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/ipvs/ipset/types.go diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/ipvs/netlink.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/ipvs/netlink.go index ab0b9eaaa14d..cc173eae5c10 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/ipvs/netlink.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/ipvs/netlink.go @@ -40,4 +40,9 @@ type NetLinkHandle interface { // Only the addresses of the current family are returned. // IPv6 link-local and loopback addresses are excluded. GetLocalAddresses(dev string) (sets.Set[string], error) + // GetAllLocalAddressesExcept return all local addresses on the node, except from the passed dev. + // This is not the same as to take the diff between GetAllLocalAddresses and GetLocalAddresses + // since an address can be assigned to many interfaces. 
This problem was raised in + // https://github.com/kubernetes/kubernetes/issues/114815 + GetAllLocalAddressesExcept(dev string) (sets.Set[string], error) } diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/ipvs/netlink_linux.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/ipvs/netlink_linux.go index f4d2368885d9..98c96bc70319 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/ipvs/netlink_linux.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/ipvs/netlink_linux.go @@ -24,7 +24,8 @@ import ( "net" "k8s.io/apimachinery/pkg/util/sets" - utilproxy "k8s.io/kubernetes/pkg/proxy/util" + "k8s.io/klog/v2" + proxyutil "k8s.io/kubernetes/pkg/proxy/util" netutils "k8s.io/utils/net" "github.com/vishvananda/netlink" @@ -134,7 +135,7 @@ func (h *netlinkHandle) GetAllLocalAddresses() (sets.Set[string], error) { if err != nil { return nil, fmt.Errorf("Could not get addresses: %v", err) } - return utilproxy.AddressSet(h.isValidForSet, addr), nil + return proxyutil.AddressSet(h.isValidForSet, addr), nil } // GetLocalAddresses return all local addresses for an interface. @@ -149,7 +150,7 @@ func (h *netlinkHandle) GetLocalAddresses(dev string) (sets.Set[string], error) if err != nil { return nil, fmt.Errorf("Can't get addresses from %s: %v", ifi.Name, err) } - return utilproxy.AddressSet(h.isValidForSet, addr), nil + return proxyutil.AddressSet(h.isValidForSet, addr), nil } func (h *netlinkHandle) isValidForSet(ip net.IP) bool { @@ -164,3 +165,30 @@ func (h *netlinkHandle) isValidForSet(ip net.IP) bool { } return true } + +// GetAllLocalAddressesExcept return all local addresses on the node, +// except from the passed dev. This is not the same as to take the +// diff between GetAllLocalAddresses and GetLocalAddresses since an +// address can be assigned to many interfaces.
This problem was raised in +// https://github.com/kubernetes/kubernetes/issues/114815 +func (h *netlinkHandle) GetAllLocalAddressesExcept(dev string) (sets.Set[string], error) { + ifaces, err := net.Interfaces() + if err != nil { + return nil, err + } + var addr []net.Addr + for _, iface := range ifaces { + if iface.Name == dev { + continue + } + ifadr, err := iface.Addrs() + if err != nil { + // This may happen if the interface was deleted. Ignore + // but log the error. + klog.ErrorS(err, "Reading addresses", "interface", iface.Name) + continue + } + addr = append(addr, ifadr...) + } + return proxyutil.AddressSet(h.isValidForSet, addr), nil +} diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/ipvs/netlink_unsupported.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/ipvs/netlink_unsupported.go index 31f3fb7406b2..1cb38d3fb8f0 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/ipvs/netlink_unsupported.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/ipvs/netlink_unsupported.go @@ -71,6 +71,11 @@ func (h *netlinkHandle) GetLocalAddresses(dev string) (sets.Set[string], error) return nil, fmt.Errorf("netlink is not supported in this platform") } +// GetAllLocalAddressesExcept is part of the interface.
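As the comment on `GetAllLocalAddressesExcept` explains, it is deliberately not implemented as `GetAllLocalAddresses` minus `GetLocalAddresses(dev)`: an address assigned to several interfaces (e.g. a service VIP on both `eth0` and the IPVS dummy device) would be wrongly dropped by the set difference. A small illustrative sketch of the distinction (interface names and addresses are made up):

```go
package main

import "fmt"

// addrsByIface models addresses per interface; note "10.0.0.10" sits on
// both eth0 and the dummy device, as happens with IPVS service VIPs.
var addrsByIface = map[string][]string{
	"eth0":       {"192.168.1.5", "10.0.0.10"},
	"kube-ipvs0": {"10.0.0.10"},
}

// allExcept gathers addresses from every interface except dev, keeping
// addresses that also happen to be bound on dev.
func allExcept(dev string) map[string]bool {
	out := map[string]bool{}
	for iface, addrs := range addrsByIface {
		if iface == dev {
			continue
		}
		for _, a := range addrs {
			out[a] = true
		}
	}
	return out
}

// naiveDiff is the broken alternative: all addresses minus dev's addresses.
func naiveDiff(dev string) map[string]bool {
	out := map[string]bool{}
	for _, addrs := range addrsByIface {
		for _, a := range addrs {
			out[a] = true
		}
	}
	for _, a := range addrsByIface[dev] {
		delete(out, a)
	}
	return out
}

func main() {
	fmt.Println(allExcept("kube-ipvs0")["10.0.0.10"]) // true: still on eth0
	fmt.Println(naiveDiff("kube-ipvs0")["10.0.0.10"]) // false: wrongly dropped
}
```

This is exactly the discrepancy described in kubernetes/kubernetes#114815 that the new interface method fixes.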
+func (h *netlinkHandle) GetAllLocalAddressesExcept(dev string) (sets.Set[string], error) { + return nil, fmt.Errorf("netlink is not supported in this platform") +} + // Must match the one in proxier_test.go func (h *netlinkHandle) isValidForSet(ip net.IP) bool { return false diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/ipvs/proxier.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/ipvs/proxier.go index cf52b2fcdcee..0a7e37810e5b 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/ipvs/proxier.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/ipvs/proxier.go @@ -43,16 +43,16 @@ import ( "k8s.io/client-go/tools/events" utilsysctl "k8s.io/component-helpers/node/util/sysctl" "k8s.io/kubernetes/pkg/proxy" + "k8s.io/kubernetes/pkg/proxy/conntrack" "k8s.io/kubernetes/pkg/proxy/healthcheck" + utilipset "k8s.io/kubernetes/pkg/proxy/ipvs/ipset" + utilipvs "k8s.io/kubernetes/pkg/proxy/ipvs/util" "k8s.io/kubernetes/pkg/proxy/metaproxier" "k8s.io/kubernetes/pkg/proxy/metrics" - utilproxy "k8s.io/kubernetes/pkg/proxy/util" + proxyutil "k8s.io/kubernetes/pkg/proxy/util" proxyutiliptables "k8s.io/kubernetes/pkg/proxy/util/iptables" "k8s.io/kubernetes/pkg/util/async" - "k8s.io/kubernetes/pkg/util/conntrack" - utilipset "k8s.io/kubernetes/pkg/util/ipset" utiliptables "k8s.io/kubernetes/pkg/util/iptables" - utilipvs "k8s.io/kubernetes/pkg/util/ipvs" ) const ( @@ -268,19 +268,19 @@ type Proxier struct { // that are significantly impacting performance. iptablesData *bytes.Buffer filterChainsData *bytes.Buffer - natChains utilproxy.LineBuffer - filterChains utilproxy.LineBuffer - natRules utilproxy.LineBuffer - filterRules utilproxy.LineBuffer + natChains proxyutil.LineBuffer + filterChains proxyutil.LineBuffer + natRules proxyutil.LineBuffer + filterRules proxyutil.LineBuffer // Added as a member to the struct to allow injection for testing. netlinkHandle NetLinkHandle // ipsetList is the list of ipsets that ipvs proxier used. 
ipsetList map[string]*IPSet // nodePortAddresses selects the interfaces where nodePort works. - nodePortAddresses *utilproxy.NodePortAddresses + nodePortAddresses *proxyutil.NodePortAddresses // networkInterfacer defines an interface for several net library functions. // Inject for test purpose. - networkInterfacer utilproxy.NetworkInterfacer + networkInterfacer proxyutil.NetworkInterfacer gracefuldeleteManager *GracefulTerminationManager // serviceNoLocalEndpointsInternal represents the set of services that couldn't be applied // due to the absence of local endpoints when the internal traffic policy is "Local". @@ -338,7 +338,7 @@ func NewProxier(ipFamily v1.IPFamily, } // Set the conntrack sysctl we need for - if err := utilproxy.EnsureSysctl(sysctl, sysctlVSConnTrack, 1); err != nil { + if err := proxyutil.EnsureSysctl(sysctl, sysctlVSConnTrack, 1); err != nil { return nil, err } @@ -357,34 +357,34 @@ func NewProxier(ipFamily v1.IPFamily, klog.V(2).InfoS("Left as-is", "sysctl", sysctlConnReuse) } else { // Set the connection reuse mode - if err := utilproxy.EnsureSysctl(sysctl, sysctlConnReuse, 0); err != nil { + if err := proxyutil.EnsureSysctl(sysctl, sysctlConnReuse, 0); err != nil { return nil, err } } // Set the expire_nodest_conn sysctl we need for - if err := utilproxy.EnsureSysctl(sysctl, sysctlExpireNoDestConn, 1); err != nil { + if err := proxyutil.EnsureSysctl(sysctl, sysctlExpireNoDestConn, 1); err != nil { return nil, err } // Set the expire_quiescent_template sysctl we need for - if err := utilproxy.EnsureSysctl(sysctl, sysctlExpireQuiescentTemplate, 1); err != nil { + if err := proxyutil.EnsureSysctl(sysctl, sysctlExpireQuiescentTemplate, 1); err != nil { return nil, err } // Set the ip_forward sysctl we need for - if err := utilproxy.EnsureSysctl(sysctl, sysctlForward, 1); err != nil { + if err := proxyutil.EnsureSysctl(sysctl, sysctlForward, 1); err != nil { return nil, err } if strictARP { // Set the arp_ignore sysctl we need for - if err := 
utilproxy.EnsureSysctl(sysctl, sysctlArpIgnore, 1); err != nil { + if err := proxyutil.EnsureSysctl(sysctl, sysctlArpIgnore, 1); err != nil { return nil, err } // Set the arp_announce sysctl we need for - if err := utilproxy.EnsureSysctl(sysctl, sysctlArpAnnounce, 2); err != nil { + if err := proxyutil.EnsureSysctl(sysctl, sysctlArpAnnounce, 2); err != nil { return nil, err } } @@ -409,7 +409,7 @@ func NewProxier(ipFamily v1.IPFamily, scheduler = defaultScheduler } - nodePortAddresses := utilproxy.NewNodePortAddresses(nodePortAddressStrings) + nodePortAddresses := proxyutil.NewNodePortAddresses(ipFamily, nodePortAddressStrings) serviceHealthServer := healthcheck.NewServiceHealthServer(hostname, recorder, nodePortAddresses, healthzServer) @@ -440,14 +440,14 @@ func NewProxier(ipFamily v1.IPFamily, ipvsScheduler: scheduler, iptablesData: bytes.NewBuffer(nil), filterChainsData: bytes.NewBuffer(nil), - natChains: utilproxy.LineBuffer{}, - natRules: utilproxy.LineBuffer{}, - filterChains: utilproxy.LineBuffer{}, - filterRules: utilproxy.LineBuffer{}, + natChains: proxyutil.NewLineBuffer(), + natRules: proxyutil.NewLineBuffer(), + filterChains: proxyutil.NewLineBuffer(), + filterRules: proxyutil.NewLineBuffer(), netlinkHandle: NewNetLinkHandle(ipFamily == v1.IPv6Protocol), ipset: ipset, nodePortAddresses: nodePortAddresses, - networkInterfacer: utilproxy.RealNetwork{}, + networkInterfacer: proxyutil.RealNetwork{}, gracefuldeleteManager: NewGracefulTerminationManager(ipvs), } // initialize ipsetList with all sets we needed @@ -480,7 +480,7 @@ func NewDualStackProxier( masqueradeBit int, localDetectors [2]proxyutiliptables.LocalTrafficDetector, hostname string, - nodeIP [2]net.IP, + nodeIPs map[v1.IPFamily]net.IP, recorder events.EventRecorder, healthzServer healthcheck.ProxierHealthUpdater, scheduler string, @@ -490,14 +490,12 @@ func NewDualStackProxier( safeIpset := newSafeIpset(ipset) - ipFamilyMap := utilproxy.MapCIDRsByIPFamily(nodePortAddresses) - // Create an ipv4 
instance of the single-stack proxier ipv4Proxier, err := NewProxier(v1.IPv4Protocol, ipt[0], ipvs, safeIpset, sysctl, exec, syncPeriod, minSyncPeriod, filterCIDRs(false, excludeCIDRs), strictARP, tcpTimeout, tcpFinTimeout, udpTimeout, masqueradeAll, masqueradeBit, - localDetectors[0], hostname, nodeIP[0], - recorder, healthzServer, scheduler, ipFamilyMap[v1.IPv4Protocol], kernelHandler) + localDetectors[0], hostname, nodeIPs[v1.IPv4Protocol], + recorder, healthzServer, scheduler, nodePortAddresses, kernelHandler) if err != nil { return nil, fmt.Errorf("unable to create ipv4 proxier: %v", err) } @@ -505,8 +503,8 @@ func NewDualStackProxier( ipv6Proxier, err := NewProxier(v1.IPv6Protocol, ipt[1], ipvs, safeIpset, sysctl, exec, syncPeriod, minSyncPeriod, filterCIDRs(true, excludeCIDRs), strictARP, tcpTimeout, tcpFinTimeout, udpTimeout, masqueradeAll, masqueradeBit, - localDetectors[1], hostname, nodeIP[1], - nil, nil, scheduler, ipFamilyMap[v1.IPv6Protocol], kernelHandler) + localDetectors[1], hostname, nodeIPs[v1.IPv6Protocol], + recorder, healthzServer, scheduler, nodePortAddresses, kernelHandler) if err != nil { return nil, fmt.Errorf("unable to create ipv6 proxier: %v", err) } @@ -904,6 +902,7 @@ func (proxier *Proxier) OnNodeDelete(node *v1.Node) { klog.ErrorS(nil, "Received a watch event for a node that doesn't match the current node", "eventNode", node.Name, "currentNode", proxier.hostname) return } + proxier.mu.Lock() proxier.nodeLabels = nil proxier.mu.Unlock() @@ -946,29 +945,6 @@ func (proxier *Proxier) syncProxyRules() { serviceUpdateResult := proxier.svcPortMap.Update(proxier.serviceChanges) endpointUpdateResult := proxier.endpointsMap.Update(proxier.endpointsChanges) - // We need to detect stale connections to UDP Services so we - // can clean dangling conntrack entries that can blackhole traffic. 
- conntrackCleanupServiceIPs := serviceUpdateResult.DeletedUDPClusterIPs - conntrackCleanupServiceNodePorts := sets.NewInt() - // merge stale services gathered from updateEndpointsMap - // an UDP service that changes from 0 to non-0 endpoints is considered stale. - for _, svcPortName := range endpointUpdateResult.NewlyActiveUDPServices { - if svcInfo, ok := proxier.svcPortMap[svcPortName]; ok { - klog.V(4).InfoS("Newly-active UDP service may have stale conntrack entries", "servicePortName", svcPortName) - conntrackCleanupServiceIPs.Insert(svcInfo.ClusterIP().String()) - for _, extIP := range svcInfo.ExternalIPStrings() { - conntrackCleanupServiceIPs.Insert(extIP) - } - for _, lbIP := range svcInfo.LoadBalancerIPStrings() { - conntrackCleanupServiceIPs.Insert(lbIP) - } - nodePort := svcInfo.NodePort() - if svcInfo.Protocol() == v1.ProtocolUDP && nodePort != 0 { - conntrackCleanupServiceNodePorts.Insert(nodePort) - } - } - } - klog.V(3).InfoS("Syncing ipvs proxier rules") proxier.serviceNoLocalEndpointsInternal = sets.New[string]() @@ -1013,11 +989,10 @@ func (proxier *Proxier) syncProxyRules() { klog.ErrorS(err, "Error listing addresses binded to dummy interface") } // nodeAddressSet All addresses *except* those on the dummy interface - nodeAddressSet, err := proxier.netlinkHandle.GetAllLocalAddresses() + nodeAddressSet, err := proxier.netlinkHandle.GetAllLocalAddressesExcept(defaultDummyDevice) if err != nil { klog.ErrorS(err, "Error listing node addresses") } - nodeAddressSet = nodeAddressSet.Difference(alreadyBoundAddrs) hasNodePort := false for _, svc := range proxier.svcPortMap { @@ -1028,35 +1003,23 @@ func (proxier *Proxier) syncProxyRules() { } } - // Both nodeAddresses and nodeIPs can be reused for all nodePort services - // and only need to be computed if we have at least one nodePort service. - var ( - // List of node addresses to listen on if a nodePort is set. 
- nodeAddresses []string - // List of node IP addresses to be used as IPVS services if nodePort is set. - nodeIPs []net.IP - ) - + // List of node IP addresses to be used as IPVS services if nodePort is set. This + // can be reused for all nodePort services. + var nodeIPs []net.IP if hasNodePort { - nodeAddrSet, err := proxier.nodePortAddresses.GetNodeAddresses(proxier.networkInterfacer) - if err != nil { - klog.ErrorS(err, "Failed to get node IP address matching nodeport cidr") + if proxier.nodePortAddresses.MatchAll() { + for _, ipStr := range nodeAddressSet.UnsortedList() { + nodeIPs = append(nodeIPs, netutils.ParseIPSloppy(ipStr)) + } } else { - nodeAddresses = nodeAddrSet.List() - for _, address := range nodeAddresses { - a := netutils.ParseIPSloppy(address) - if a.IsLoopback() { - continue - } - if utilproxy.IsZeroCIDR(address) { - nodeIPs = nil - for _, ipStr := range nodeAddressSet.UnsortedList() { - nodeIPs = append(nodeIPs, netutils.ParseIPSloppy(ipStr)) + allNodeIPs, err := proxier.nodePortAddresses.GetNodeIPs(proxier.networkInterfacer) + if err != nil { + klog.ErrorS(err, "Failed to get node IP address matching nodeport cidr") + } else { + for _, ip := range allNodeIPs { + if !ip.IsLoopback() { + nodeIPs = append(nodeIPs, ip) } - break - } - if getIPFamily(a) == proxier.ipFamily { - nodeIPs = append(nodeIPs, a) } } } @@ -1193,9 +1156,13 @@ func (proxier *Proxier) syncProxyRules() { if proxier.ipvsScheduler == "mh" { serv.Flags |= utilipvs.FlagSourceHash } - if err := proxier.syncService(svcPortNameString, serv, true, alreadyBoundAddrs); err == nil { + // We must not add the address to the dummy device if it exists on another interface + shouldBind := !nodeAddressSet.Has(serv.Address.String()) + if err := proxier.syncService(svcPortNameString, serv, shouldBind, alreadyBoundAddrs); err == nil { activeIPVSServices.Insert(serv.String()) - activeBindAddrs.Insert(serv.Address.String()) + if shouldBind { + activeBindAddrs.Insert(serv.Address.String()) + } if
err := proxier.syncEndpoint(svcPortName, svcInfo.ExternalPolicyLocal(), serv); err != nil { klog.ErrorS(err, "Failed to sync endpoint for service", "servicePortName", svcPortName, "virtualServer", serv) } @@ -1296,9 +1263,13 @@ func (proxier *Proxier) syncProxyRules() { if proxier.ipvsScheduler == "mh" { serv.Flags |= utilipvs.FlagSourceHash } - if err := proxier.syncService(svcPortNameString, serv, true, alreadyBoundAddrs); err == nil { + // We must not add the address to the dummy device if it exists on another interface + shouldBind := !nodeAddressSet.Has(serv.Address.String()) + if err := proxier.syncService(svcPortNameString, serv, shouldBind, alreadyBoundAddrs); err == nil { activeIPVSServices.Insert(serv.String()) - activeBindAddrs.Insert(serv.Address.String()) + if shouldBind { + activeBindAddrs.Insert(serv.Address.String()) + } if err := proxier.syncEndpoint(svcPortName, svcInfo.ExternalPolicyLocal(), serv); err != nil { klog.ErrorS(err, "Failed to sync endpoint for service", "servicePortName", svcPortName, "virtualServer", serv) } @@ -1308,7 +1279,7 @@ func (proxier *Proxier) syncProxyRules() { } if svcInfo.NodePort() != 0 { - if len(nodeAddresses) == 0 || len(nodeIPs) == 0 { + if len(nodeIPs) == 0 { // Skip nodePort configuration since an error occurred when // computing nodeAddresses or nodeIPs. continue @@ -1528,24 +1499,8 @@ func (proxier *Proxier) syncProxyRules() { metrics.SyncProxyRulesNoLocalEndpointsTotal.WithLabelValues("internal").Set(float64(proxier.serviceNoLocalEndpointsInternal.Len())) metrics.SyncProxyRulesNoLocalEndpointsTotal.WithLabelValues("external").Set(float64(proxier.serviceNoLocalEndpointsExternal.Len())) - // Finish housekeeping. - // Clear stale conntrack entries for UDP Services, this has to be done AFTER the ipvs rules are programmed. - // TODO: these could be made more consistent.
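In the refactored nodePort address selection above, loopback addresses are skipped when collecting node IPs from the configured `--nodeport-addresses` CIDRs. A rough stdlib sketch of that filter (the function name is made up, and `net.ParseIP` stands in for the vendored `netutils.ParseIPSloppy`):

```go
package main

import (
	"fmt"
	"net"
)

// filterNodePortIPs mimics the selection above: parse each candidate node
// address and keep only valid, non-loopback IPs for use as IPVS nodePort
// service addresses.
func filterNodePortIPs(addrs []string) []net.IP {
	var out []net.IP
	for _, s := range addrs {
		ip := net.ParseIP(s)
		if ip == nil || ip.IsLoopback() {
			continue
		}
		out = append(out, ip)
	}
	return out
}

func main() {
	ips := filterNodePortIPs([]string{"127.0.0.1", "192.168.1.5", "::1", "fd00::5"})
	fmt.Println(len(ips)) // 2: both IPv4 and IPv6 loopbacks are dropped
}
```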
- klog.V(4).InfoS("Deleting conntrack stale entries for services", "IPs", conntrackCleanupServiceIPs.UnsortedList()) - for _, svcIP := range conntrackCleanupServiceIPs.UnsortedList() { - if err := conntrack.ClearEntriesForIP(proxier.exec, svcIP, v1.ProtocolUDP); err != nil { - klog.ErrorS(err, "Failed to delete stale service connections", "IP", svcIP) - } - } - klog.V(4).InfoS("Deleting conntrack stale entries for services", "nodePorts", conntrackCleanupServiceNodePorts.UnsortedList()) - for _, nodePort := range conntrackCleanupServiceNodePorts.UnsortedList() { - err := conntrack.ClearEntriesForPort(proxier.exec, nodePort, proxier.ipFamily == v1.IPv6Protocol, v1.ProtocolUDP) - if err != nil { - klog.ErrorS(err, "Failed to clear udp conntrack", "nodePort", nodePort) - } - } - klog.V(4).InfoS("Deleting stale endpoint connections", "endpoints", endpointUpdateResult.DeletedUDPEndpoints) - proxier.deleteUDPEndpointConnections(endpointUpdateResult.DeletedUDPEndpoints) + // Finish housekeeping, clear stale conntrack entries for UDP Services + conntrack.CleanStaleEntries(proxier.ipFamily == v1.IPv6Protocol, proxier.exec, proxier.svcPortMap, serviceUpdateResult, endpointUpdateResult) } // writeIptablesRules write all iptables rules to proxier.natRules or proxier.FilterRules that ipvs proxier needed @@ -1726,6 +1681,9 @@ func (proxier *Proxier) writeIptablesRules() { proxier.filterRules.Write( "-A", string(kubeIPVSFilterChain), "-m", "set", "--match-set", proxier.ipsetList[kubeExternalIPSet].Name, "dst,dst", "-j", "RETURN") + proxier.filterRules.Write( + "-A", string(kubeIPVSFilterChain), + "-m", "set", "--match-set", proxier.ipsetList[kubeHealthCheckNodePortSet].Name, "dst", "-j", "RETURN") proxier.filterRules.Write( "-A", string(kubeIPVSFilterChain), "-m", "conntrack", "--ctstate", "NEW", @@ -1734,7 +1692,7 @@ func (proxier *Proxier) writeIptablesRules() { // Install the kubernetes-specific postrouting rules. 
We use a whole chain for // this so that it is easier to flush and change, for example if the mark // value should ever change. - // NB: THIS MUST MATCH the corresponding code in the kubelet + proxier.natRules.Write( "-A", string(kubePostroutingChain), "-m", "mark", "!", "--mark", fmt.Sprintf("%s/%s", proxier.masqueradeMark, proxier.masqueradeMark), @@ -1812,42 +1770,6 @@ func (proxier *Proxier) createAndLinkKubeChain() { } -// After a UDP endpoint has been removed, we must flush any pending conntrack entries to it, or else we -// risk sending more traffic to it, all of which will be lost (because UDP). -// This assumes the proxier mutex is held -// TODO: move it to util -func (proxier *Proxier) deleteUDPEndpointConnections(deletedUDPEndpoints []proxy.ServiceEndpoint) { - for _, epSvcPair := range deletedUDPEndpoints { - if svcInfo, ok := proxier.svcPortMap[epSvcPair.ServicePortName]; ok { - endpointIP := utilproxy.IPPart(epSvcPair.Endpoint) - nodePort := svcInfo.NodePort() - var err error - if nodePort != 0 { - err = conntrack.ClearEntriesForPortNAT(proxier.exec, endpointIP, nodePort, v1.ProtocolUDP) - if err != nil { - klog.ErrorS(err, "Failed to delete nodeport-related endpoint connections", "servicePortName", epSvcPair.ServicePortName) - } - } - err = conntrack.ClearEntriesForNAT(proxier.exec, svcInfo.ClusterIP().String(), endpointIP, v1.ProtocolUDP) - if err != nil { - klog.ErrorS(err, "Failed to delete endpoint connections", "servicePortName", epSvcPair.ServicePortName) - } - for _, extIP := range svcInfo.ExternalIPStrings() { - err := conntrack.ClearEntriesForNAT(proxier.exec, extIP, endpointIP, v1.ProtocolUDP) - if err != nil { - klog.ErrorS(err, "Failed to delete endpoint connections for externalIP", "servicePortName", epSvcPair.ServicePortName, "externalIP", extIP) - } - } - for _, lbIP := range svcInfo.LoadBalancerIPStrings() { - err := conntrack.ClearEntriesForNAT(proxier.exec, lbIP, endpointIP, v1.ProtocolUDP) - if err != nil { - klog.ErrorS(err, 
"Failed to delete endpoint connections for LoadBalancerIP", "servicePortName", epSvcPair.ServicePortName, "loadBalancerIP", lbIP) - } - } - } - } -} - func (proxier *Proxier) syncService(svcName string, vs *utilipvs.VirtualServer, bindAddr bool, alreadyBoundAddrs sets.Set[string]) error { appliedVirtualServer, _ := proxier.ipvs.GetVirtualServer(vs) if appliedVirtualServer == nil || !appliedVirtualServer.Equal(vs) { @@ -1899,7 +1821,7 @@ func (proxier *Proxier) syncEndpoint(svcPortName proxy.ServicePortName, onlyNode } // curEndpoints represents IPVS destinations listed from current system. - curEndpoints := sets.NewString() + curEndpoints := sets.New[string]() curDests, err := proxier.ipvs.GetRealServers(appliedVirtualServer) if err != nil { klog.ErrorS(err, "Failed to list IPVS destinations") @@ -1943,13 +1865,13 @@ func (proxier *Proxier) syncEndpoint(svcPortName proxy.ServicePortName, onlyNode } } - newEndpoints := sets.NewString() + newEndpoints := sets.New[string]() for _, epInfo := range endpoints { newEndpoints.Insert(epInfo.String()) } // Create new endpoints - for _, ep := range newEndpoints.List() { + for _, ep := range sets.List(newEndpoints) { ip, port, err := net.SplitHostPort(ep) if err != nil { klog.ErrorS(err, "Failed to parse endpoint", "endpoint", ep) diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/ipvs/safe_ipset.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/ipvs/safe_ipset.go index 2be9900d96d1..1dbad0eb5cd8 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/ipvs/safe_ipset.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/ipvs/safe_ipset.go @@ -19,7 +19,7 @@ package ipvs import ( "sync" - "k8s.io/kubernetes/pkg/util/ipset" + "k8s.io/kubernetes/pkg/proxy/ipvs/ipset" ) type safeIpset struct { diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/util/ipvs/ipvs.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/ipvs/util/ipvs.go similarity index 100% rename from 
cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/util/ipvs/ipvs.go rename to cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/ipvs/util/ipvs.go diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/util/ipvs/ipvs_linux.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/ipvs/util/ipvs_linux.go similarity index 99% rename from cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/util/ipvs/ipvs_linux.go rename to cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/ipvs/util/ipvs_linux.go index d6b947415218..a783d1d428b8 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/util/ipvs/ipvs_linux.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/ipvs/util/ipvs_linux.go @@ -46,7 +46,7 @@ type Protocol uint16 func New() Interface { handle, err := libipvs.New("") if err != nil { - klog.Errorf("IPVS interface can't be initialized, error: %v", err) + klog.ErrorS(err, "IPVS interface can't be initialized") return nil } return &runner{ diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/util/ipvs/ipvs_unsupported.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/ipvs/util/ipvs_unsupported.go similarity index 100% rename from cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/util/ipvs/ipvs_unsupported.go rename to cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/ipvs/util/ipvs_unsupported.go diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/metrics/metrics.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/metrics/metrics.go index 27304214d996..19fb8923cf29 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/metrics/metrics.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/metrics/metrics.go @@ -27,7 +27,8 @@ import ( const kubeProxySubsystem = "kubeproxy" var ( - // SyncProxyRulesLatency is the latency of one round of kube-proxy syncing proxy rules. + // SyncProxyRulesLatency is the latency of one round of kube-proxy syncing proxy + // rules. 
(With the iptables proxy, this includes both full and partial syncs.) SyncProxyRulesLatency = metrics.NewHistogram( &metrics.HistogramOpts{ Subsystem: kubeProxySubsystem, @@ -38,6 +39,28 @@ var ( }, ) + // SyncFullProxyRulesLatency is the latency of one round of full rule syncing. + SyncFullProxyRulesLatency = metrics.NewHistogram( + &metrics.HistogramOpts{ + Subsystem: kubeProxySubsystem, + Name: "sync_full_proxy_rules_duration_seconds", + Help: "SyncProxyRules latency in seconds for full resyncs", + Buckets: metrics.ExponentialBuckets(0.001, 2, 15), + StabilityLevel: metrics.ALPHA, + }, + ) + + // SyncPartialProxyRulesLatency is the latency of one round of partial rule syncing. + SyncPartialProxyRulesLatency = metrics.NewHistogram( + &metrics.HistogramOpts{ + Subsystem: kubeProxySubsystem, + Name: "sync_partial_proxy_rules_duration_seconds", + Help: "SyncProxyRules latency in seconds for partial resyncs", + Buckets: metrics.ExponentialBuckets(0.001, 2, 15), + StabilityLevel: metrics.ALPHA, + }, + ) + // SyncProxyRulesLastTimestamp is the timestamp proxy rules were last // successfully synced. SyncProxyRulesLastTimestamp = metrics.NewGauge( @@ -137,17 +160,54 @@ var ( }, ) - // IptablesRulesTotal is the number of iptables rules that the iptables proxy installs. + // IptablesRulesTotal is the total number of iptables rules that the iptables + // proxy has installed. IptablesRulesTotal = metrics.NewGaugeVec( &metrics.GaugeOpts{ Subsystem: kubeProxySubsystem, Name: "sync_proxy_rules_iptables_total", - Help: "Number of proxy iptables rules programmed", + Help: "Total number of iptables rules owned by kube-proxy", + StabilityLevel: metrics.ALPHA, + }, + []string{"table"}, + ) + + // IptablesRulesLastSync is the number of iptables rules that the iptables proxy + // updated in the last sync. 
+ IptablesRulesLastSync = metrics.NewGaugeVec( + &metrics.GaugeOpts{ + Subsystem: kubeProxySubsystem, + Name: "sync_proxy_rules_iptables_last", + Help: "Number of iptables rules written by kube-proxy in last sync", StabilityLevel: metrics.ALPHA, }, []string{"table"}, ) + // ProxyHealthzTotal is the number of returned HTTP Status for each + // healthz probe. + ProxyHealthzTotal = metrics.NewCounterVec( + &metrics.CounterOpts{ + Subsystem: kubeProxySubsystem, + Name: "proxy_healthz_total", + Help: "Cumulative proxy healthz HTTP status", + StabilityLevel: metrics.ALPHA, + }, + []string{"code"}, + ) + + // ProxyLivezTotal is the number of returned HTTP Status for each + // livez probe. + ProxyLivezTotal = metrics.NewCounterVec( + &metrics.CounterOpts{ + Subsystem: kubeProxySubsystem, + Name: "proxy_livez_total", + Help: "Cumulative proxy livez HTTP status", + StabilityLevel: metrics.ALPHA, + }, + []string{"code"}, + ) + // SyncProxyRulesLastQueuedTimestamp is the last time a proxy sync was // requested. If this is much larger than // kubeproxy_sync_proxy_rules_last_timestamp_seconds, then something is hung. 
@@ -180,6 +240,8 @@ var registerMetricsOnce sync.Once func RegisterMetrics() { registerMetricsOnce.Do(func() { legacyregistry.MustRegister(SyncProxyRulesLatency) + legacyregistry.MustRegister(SyncFullProxyRulesLatency) + legacyregistry.MustRegister(SyncPartialProxyRulesLatency) legacyregistry.MustRegister(SyncProxyRulesLastTimestamp) legacyregistry.MustRegister(NetworkProgrammingLatency) legacyregistry.MustRegister(EndpointChangesPending) @@ -187,10 +249,14 @@ func RegisterMetrics() { legacyregistry.MustRegister(ServiceChangesPending) legacyregistry.MustRegister(ServiceChangesTotal) legacyregistry.MustRegister(IptablesRulesTotal) + legacyregistry.MustRegister(IptablesRulesLastSync) legacyregistry.MustRegister(IptablesRestoreFailuresTotal) legacyregistry.MustRegister(IptablesPartialRestoreFailuresTotal) legacyregistry.MustRegister(SyncProxyRulesLastQueuedTimestamp) legacyregistry.MustRegister(SyncProxyRulesNoLocalEndpointsTotal) + legacyregistry.MustRegister(ProxyHealthzTotal) + legacyregistry.MustRegister(ProxyLivezTotal) + }) } diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/node.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/node.go index f2cbf6b1f2d6..7cd24f6d4fcd 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/node.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/node.go @@ -23,6 +23,7 @@ import ( v1 "k8s.io/api/core/v1" "k8s.io/klog/v2" "k8s.io/kubernetes/pkg/proxy/config" + "k8s.io/kubernetes/pkg/proxy/healthcheck" ) // NodePodCIDRHandler handles the life cycle of kube-proxy based on the node PodCIDR assigned @@ -33,6 +34,12 @@ type NodePodCIDRHandler struct { podCIDRs []string } +func NewNodePodCIDRHandler(podCIDRs []string) *NodePodCIDRHandler { + return &NodePodCIDRHandler{ + podCIDRs: podCIDRs, + } +} + var _ config.NodeHandler = &NodePodCIDRHandler{} // OnNodeAdd is a handler for Node creates. 
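The `RegisterMetrics` hunk above guards all registrations behind `registerMetricsOnce`, so racing callers register each metric exactly once. A minimal standalone sketch of that `sync.Once` pattern (illustrative only — not the vendored `legacyregistry` code; `register` and `registered` are names invented for this sketch):

```go
package main

import (
	"fmt"
	"sync"
)

var (
	registerOnce sync.Once
	registered   []string
)

// register mimics the shape of RegisterMetrics: however many callers race
// to call it, the registration body runs exactly once.
func register(names ...string) {
	registerOnce.Do(func() {
		registered = append(registered, names...)
	})
}

func main() {
	register("sync_full_proxy_rules_duration_seconds")
	// Second call is a no-op: the Once already fired.
	register("sync_partial_proxy_rules_duration_seconds")
	fmt.Println(len(registered))
}
```

Calling `register` from multiple goroutines is also safe: `sync.Once.Do` serializes the first call and makes later callers wait until it completes.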
@@ -79,3 +86,23 @@ func (n *NodePodCIDRHandler) OnNodeDelete(node *v1.Node) { // OnNodeSynced is a handler for Node syncs. func (n *NodePodCIDRHandler) OnNodeSynced() {} + +// NodeEligibleHandler handles the life cycle of the Node's eligibility, as +// determined by the health server for directing load balancer traffic. +type NodeEligibleHandler struct { + HealthServer healthcheck.ProxierHealthUpdater +} + +var _ config.NodeHandler = &NodeEligibleHandler{} + +// OnNodeAdd is a handler for Node creates. +func (n *NodeEligibleHandler) OnNodeAdd(node *v1.Node) { n.HealthServer.SyncNode(node) } + +// OnNodeUpdate is a handler for Node updates. +func (n *NodeEligibleHandler) OnNodeUpdate(_, node *v1.Node) { n.HealthServer.SyncNode(node) } + +// OnNodeDelete is a handler for Node deletes. +func (n *NodeEligibleHandler) OnNodeDelete(node *v1.Node) { n.HealthServer.SyncNode(node) } + +// OnNodeSynced is a handler for Node syncs. +func (n *NodeEligibleHandler) OnNodeSynced() {} diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/service.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/service.go index ab8e6fa5d140..279191f57b25 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/service.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/service.go @@ -32,7 +32,7 @@ import ( "k8s.io/apimachinery/pkg/util/sets" apiservice "k8s.io/kubernetes/pkg/api/v1/service" "k8s.io/kubernetes/pkg/proxy/metrics" - utilproxy "k8s.io/kubernetes/pkg/proxy/util" + proxyutil "k8s.io/kubernetes/pkg/proxy/util" ) // BaseServicePortInfo contains base information that defines a service. 
@@ -165,7 +165,7 @@ func (sct *ServiceChangeTracker) newBaseServiceInfo(port *v1.ServicePort, servic stickyMaxAgeSeconds = int(*service.Spec.SessionAffinityConfig.ClientIP.TimeoutSeconds) } - clusterIP := utilproxy.GetClusterIPByFamily(sct.ipFamily, service) + clusterIP := proxyutil.GetClusterIPByFamily(sct.ipFamily, service) info := &BaseServicePortInfo{ clusterIP: netutils.ParseIPSloppy(clusterIP), port: int(port.Port), @@ -194,21 +194,21 @@ func (sct *ServiceChangeTracker) newBaseServiceInfo(port *v1.ServicePort, servic // services, this is actually expected. Hence we downgraded from reporting by events // to just log lines with high verbosity - ipFamilyMap := utilproxy.MapIPsByIPFamily(service.Spec.ExternalIPs) + ipFamilyMap := proxyutil.MapIPsByIPFamily(service.Spec.ExternalIPs) info.externalIPs = ipFamilyMap[sct.ipFamily] // Log the IPs not matching the ipFamily - if ips, ok := ipFamilyMap[utilproxy.OtherIPFamily(sct.ipFamily)]; ok && len(ips) > 0 { + if ips, ok := ipFamilyMap[proxyutil.OtherIPFamily(sct.ipFamily)]; ok && len(ips) > 0 { klog.V(4).InfoS("Service change tracker ignored the following external IPs for given service as they don't match IP Family", - "ipFamily", sct.ipFamily, "externalIPs", strings.Join(ips, ","), "service", klog.KObj(service)) + "ipFamily", sct.ipFamily, "externalIPs", strings.Join(ips, ", "), "service", klog.KObj(service)) } - ipFamilyMap = utilproxy.MapCIDRsByIPFamily(loadBalancerSourceRanges) + ipFamilyMap = proxyutil.MapCIDRsByIPFamily(loadBalancerSourceRanges) info.loadBalancerSourceRanges = ipFamilyMap[sct.ipFamily] // Log the CIDRs not matching the ipFamily - if cidrs, ok := ipFamilyMap[utilproxy.OtherIPFamily(sct.ipFamily)]; ok && len(cidrs) > 0 { + if cidrs, ok := ipFamilyMap[proxyutil.OtherIPFamily(sct.ipFamily)]; ok && len(cidrs) > 0 { klog.V(4).InfoS("Service change tracker ignored the following load balancer source ranges for given Service as they don't match IP Family", - "ipFamily", sct.ipFamily, 
"loadBalancerSourceRanges", strings.Join(cidrs, ","), "service", klog.KObj(service)) + "ipFamily", sct.ipFamily, "loadBalancerSourceRanges", strings.Join(cidrs, ", "), "service", klog.KObj(service)) } // Obtain Load Balancer Ingress IPs @@ -220,11 +220,11 @@ func (sct *ServiceChangeTracker) newBaseServiceInfo(port *v1.ServicePort, servic } if len(ips) > 0 { - ipFamilyMap = utilproxy.MapIPsByIPFamily(ips) + ipFamilyMap = proxyutil.MapIPsByIPFamily(ips) - if ipList, ok := ipFamilyMap[utilproxy.OtherIPFamily(sct.ipFamily)]; ok && len(ipList) > 0 { + if ipList, ok := ipFamilyMap[proxyutil.OtherIPFamily(sct.ipFamily)]; ok && len(ipList) > 0 { klog.V(4).InfoS("Service change tracker ignored the following load balancer ingress IPs for given Service as they don't match the IP Family", - "ipFamily", sct.ipFamily, "loadBalancerIngressIps", strings.Join(ipList, ","), "service", klog.KObj(service)) + "ipFamily", sct.ipFamily, "loadBalancerIngressIps", strings.Join(ipList, ", "), "service", klog.KObj(service)) } // Create the LoadBalancerStatus with the filtered IPs for _, ip := range ipFamilyMap[sct.ipFamily] { @@ -330,11 +330,11 @@ func (sct *ServiceChangeTracker) Update(previous, current *v1.Service) bool { // PendingChanges returns a set whose keys are the names of the services that have changed // since the last time sct was used to update a ServiceMap. (You must call this _before_ // calling sm.Update(sct).) -func (sct *ServiceChangeTracker) PendingChanges() sets.String { +func (sct *ServiceChangeTracker) PendingChanges() sets.Set[string] { sct.lock.Lock() defer sct.lock.Unlock() - changes := sets.NewString() + changes := sets.New[string]() for name := range sct.items { changes.Insert(name.String()) } @@ -346,12 +346,12 @@ type UpdateServiceMapResult struct { // DeletedUDPClusterIPs holds stale (no longer assigned to a Service) Service IPs // that had UDP ports. Callers can use this to abort timeout-waits or clear // connection-tracking information. 
- DeletedUDPClusterIPs sets.String + DeletedUDPClusterIPs sets.Set[string] } // Update updates ServicePortMap base on the given changes. func (sm ServicePortMap) Update(changes *ServiceChangeTracker) (result UpdateServiceMapResult) { - result.DeletedUDPClusterIPs = sets.NewString() + result.DeletedUDPClusterIPs = sets.New[string]() sm.apply(changes, result.DeletedUDPClusterIPs) return result } @@ -381,11 +381,11 @@ func (sct *ServiceChangeTracker) serviceToServiceMap(service *v1.Service) Servic return nil } - if utilproxy.ShouldSkipService(service) { + if proxyutil.ShouldSkipService(service) { return nil } - clusterIP := utilproxy.GetClusterIPByFamily(sct.ipFamily, service) + clusterIP := proxyutil.GetClusterIPByFamily(sct.ipFamily, service) if clusterIP == "" { return nil } @@ -407,7 +407,7 @@ func (sct *ServiceChangeTracker) serviceToServiceMap(service *v1.Service) Servic // apply the changes to ServicePortMap and update the deleted UDP cluster IP set. // apply triggers processServiceMapChange on every change. -func (sm *ServicePortMap) apply(changes *ServiceChangeTracker, deletedUDPClusterIPs sets.String) { +func (sm *ServicePortMap) apply(changes *ServiceChangeTracker, deletedUDPClusterIPs sets.Set[string]) { changes.lock.Lock() defer changes.lock.Unlock() for _, change := range changes.items { @@ -446,9 +446,9 @@ func (sm *ServicePortMap) apply(changes *ServiceChangeTracker, deletedUDPCluster // B{{"ns", "cluster-ip", "http"}: {"172.16.55.10", 1234, "TCP"}} // A updated to be {{"ns", "cluster-ip", "http"}: {"172.16.55.10", 1234, "TCP"}} // produce string set {"ns/cluster-ip:http"} -func (sm *ServicePortMap) merge(other ServicePortMap) sets.String { +func (sm *ServicePortMap) merge(other ServicePortMap) sets.Set[string] { // existingPorts is going to store all identifiers of all services in `other` ServicePortMap. 
- existingPorts := sets.NewString() + existingPorts := sets.New[string]() for svcPortName, info := range other { // Take ServicePortName.String() as the newly merged service's identifier and put it into existingPorts. existingPorts.Insert(svcPortName.String()) @@ -475,7 +475,7 @@ func (sm *ServicePortMap) filter(other ServicePortMap) { // unmerge deletes all other ServicePortMap's elements from current ServicePortMap and // updates deletedUDPClusterIPs with all of the newly-deleted UDP cluster IPs. -func (sm *ServicePortMap) unmerge(other ServicePortMap, deletedUDPClusterIPs sets.String) { +func (sm *ServicePortMap) unmerge(other ServicePortMap, deletedUDPClusterIPs sets.Set[string]) { for svcPortName := range other { info, exists := (*sm)[svcPortName] if exists { diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/topology.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/topology.go index e8248a93a7e7..52f539b659fb 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/topology.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/topology.go @@ -55,7 +55,7 @@ func CategorizeEndpoints(endpoints []Endpoint, svcInfo ServicePort, nodeLabels m // if there are 0 cluster-wide endpoints, we can try to fallback to any terminating endpoints that are ready. // When falling back to terminating endpoints, we do NOT consider topology aware routing since this is a best // effort attempt to avoid dropping connections. 
- if len(clusterEndpoints) == 0 && utilfeature.DefaultFeatureGate.Enabled(features.ProxyTerminatingEndpoints) { + if len(clusterEndpoints) == 0 { clusterEndpoints = filterEndpoints(endpoints, func(ep Endpoint) bool { if ep.IsServing() && ep.IsTerminating() { return true @@ -87,7 +87,7 @@ func CategorizeEndpoints(endpoints []Endpoint, svcInfo ServicePort, nodeLabels m if ep.GetIsLocal() { hasLocalReadyEndpoints = true } - } else if ep.IsServing() && ep.IsTerminating() && utilfeature.DefaultFeatureGate.Enabled(features.ProxyTerminatingEndpoints) { + } else if ep.IsServing() && ep.IsTerminating() { hasAnyEndpoints = true if ep.GetIsLocal() { hasLocalServingTerminatingEndpoints = true diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/types.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/types.go index 1b6b843a8c14..7c9d19ab8183 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/types.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/types.go @@ -126,7 +126,7 @@ type Endpoint interface { IsTerminating() bool // GetZoneHints returns the zone hint for the endpoint. This is based on // endpoint.hints.forZones[0].name in the EndpointSlice API. - GetZoneHints() sets.String + GetZoneHints() sets.Set[string] // IP returns IP part of the endpoint. IP() string // Port returns the Port part of the endpoint. 
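Several hunks above (`PendingChanges`, `DeletedUDPClusterIPs`, `GetZoneHints`) migrate from the string-specific `sets.String` to the generic `sets.Set[string]`, tracking apimachinery's move to Go generics. A minimal sketch of the generic pattern those call sites now rely on — this is not the actual `k8s.io/apimachinery` implementation, just the shape of it:

```go
package main

import "fmt"

// Set is a minimal generic set, sketching the shape of apimachinery's
// sets.Set[T] that the proxy code migrated to.
type Set[T comparable] map[T]struct{}

// New builds a Set from the given items, analogous to sets.New[string](...).
func New[T comparable](items ...T) Set[T] {
	s := Set[T]{}
	s.Insert(items...)
	return s
}

// Insert adds items to the set.
func (s Set[T]) Insert(items ...T) {
	for _, item := range items {
		s[item] = struct{}{}
	}
}

// Has reports whether item is in the set.
func (s Set[T]) Has(item T) bool {
	_, ok := s[item]
	return ok
}

func main() {
	changes := New[string]("ns/svc:http")
	changes.Insert("ns/svc:https")
	fmt.Println(len(changes), changes.Has("ns/svc:http"))
}
```

The generic form keeps the same call patterns (`New`, `Insert`, `Has`) while working for any comparable element type, which is why the diff is almost entirely mechanical type substitutions.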
diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/util/iptables/traffic.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/util/iptables/traffic.go index 4666c6c3de68..f27d89e9a57b 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/util/iptables/traffic.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/util/iptables/traffic.go @@ -19,7 +19,6 @@ package iptables import ( "fmt" - utiliptables "k8s.io/kubernetes/pkg/util/iptables" netutils "k8s.io/utils/net" ) @@ -62,10 +61,7 @@ type detectLocalByCIDR struct { // NewDetectLocalByCIDR implements the LocalTrafficDetector interface using a CIDR. This can be used when a single CIDR // range can be used to capture the notion of local traffic. -func NewDetectLocalByCIDR(cidr string, ipt utiliptables.Interface) (LocalTrafficDetector, error) { - if netutils.IsIPv6CIDRString(cidr) != ipt.IsIPv6() { - return nil, fmt.Errorf("CIDR %s has incorrect IP version: expect isIPv6=%t", cidr, ipt.IsIPv6()) - } +func NewDetectLocalByCIDR(cidr string) (LocalTrafficDetector, error) { _, _, err := netutils.ParseCIDRSloppy(cidr) if err != nil { return nil, err diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/util/linebuffer.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/util/linebuffer.go new file mode 100644 index 000000000000..2309d93c8075 --- /dev/null +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/util/linebuffer.go @@ -0,0 +1,150 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+See the License for the specific language governing permissions and +limitations under the License. +*/ + +package util + +import ( + "bytes" + "fmt" +) + +// LineBuffer is an interface for writing lines of input to a bytes.Buffer +type LineBuffer interface { + // Write takes a list of arguments, each a string or []string, joins all the + // individual strings with spaces, terminates with newline, and writes them to the + // buffer. Any other argument type will panic. + Write(args ...interface{}) + + // WriteBytes writes bytes to the buffer, and terminates with newline. + WriteBytes(bytes []byte) + + // Reset clears the buffer + Reset() + + // Bytes returns the contents of the buffer as a []byte + Bytes() []byte + + // String returns the contents of the buffer as a string + String() string + + // Lines returns the number of lines in the buffer. Note that more precisely, this + // returns the number of times Write() or WriteBytes() was called; it assumes that + // you never wrote any newlines to the buffer yourself. 
+ Lines() int +} + +type realLineBuffer struct { + b bytes.Buffer + lines int +} + +// NewLineBuffer returns a new "real" LineBuffer +func NewLineBuffer() LineBuffer { + return &realLineBuffer{} +} + +// Write is part of LineBuffer +func (buf *realLineBuffer) Write(args ...interface{}) { + for i, arg := range args { + if i > 0 { + buf.b.WriteByte(' ') + } + switch x := arg.(type) { + case string: + buf.b.WriteString(x) + case []string: + for j, s := range x { + if j > 0 { + buf.b.WriteByte(' ') + } + buf.b.WriteString(s) + } + default: + panic(fmt.Sprintf("unknown argument type: %T", x)) + } + } + buf.b.WriteByte('\n') + buf.lines++ +} + +// WriteBytes is part of LineBuffer +func (buf *realLineBuffer) WriteBytes(bytes []byte) { + buf.b.Write(bytes) + buf.b.WriteByte('\n') + buf.lines++ +} + +// Reset is part of LineBuffer +func (buf *realLineBuffer) Reset() { + buf.b.Reset() + buf.lines = 0 +} + +// Bytes is part of LineBuffer +func (buf *realLineBuffer) Bytes() []byte { + return buf.b.Bytes() +} + +// String is part of LineBuffer +func (buf *realLineBuffer) String() string { + return buf.b.String() +} + +// Lines is part of LineBuffer +func (buf *realLineBuffer) Lines() int { + return buf.lines +} + +type discardLineBuffer struct { + lines int +} + +// NewDiscardLineBuffer returns a dummy LineBuffer that counts the number of writes but +// throws away the data. (This is used for iptables proxy partial syncs, to keep track of +// how many rules we managed to avoid having to sync.) 
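The `LineBuffer` contract introduced above — join `string` and `[]string` arguments with single spaces, terminate each write with a newline, count lines — can be exercised in isolation. `lineBuffer` below is a trimmed illustrative copy of `realLineBuffer`, not the vendored type:

```go
package main

import (
	"bytes"
	"fmt"
)

// lineBuffer is a trimmed-down copy of realLineBuffer, for illustration only.
type lineBuffer struct {
	b     bytes.Buffer
	lines int
}

// Write joins string and []string arguments with single spaces and
// terminates the line with '\n', matching the LineBuffer contract.
func (buf *lineBuffer) Write(args ...interface{}) {
	for i, arg := range args {
		if i > 0 {
			buf.b.WriteByte(' ')
		}
		switch x := arg.(type) {
		case string:
			buf.b.WriteString(x)
		case []string:
			for j, s := range x {
				if j > 0 {
					buf.b.WriteByte(' ')
				}
				buf.b.WriteString(s)
			}
		default:
			panic(fmt.Sprintf("unknown argument type: %T", x))
		}
	}
	buf.b.WriteByte('\n')
	buf.lines++
}

func main() {
	buf := &lineBuffer{}
	buf.Write("-A", "KUBE-SERVICES", []string{"-j", "ACCEPT"})
	fmt.Printf("%q %d\n", buf.b.String(), buf.lines)
}
```

The companion `discardLineBuffer` in the diff implements the same interface but keeps only the line count, which is how partial iptables syncs report how many rules they avoided rewriting without buffering them.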
+func NewDiscardLineBuffer() LineBuffer { + return &discardLineBuffer{} +} + +// Write is part of LineBuffer +func (buf *discardLineBuffer) Write(args ...interface{}) { + buf.lines++ +} + +// WriteBytes is part of LineBuffer +func (buf *discardLineBuffer) WriteBytes(bytes []byte) { + buf.lines++ +} + +// Reset is part of LineBuffer +func (buf *discardLineBuffer) Reset() { + buf.lines = 0 +} + +// Bytes is part of LineBuffer +func (buf *discardLineBuffer) Bytes() []byte { + return []byte{} +} + +// String is part of LineBuffer +func (buf *discardLineBuffer) String() string { + return "" +} + +// Lines is part of LineBuffer +func (buf *discardLineBuffer) Lines() int { + return buf.lines +} diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/util/nodeport_addresses.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/util/nodeport_addresses.go index aebe5f0718cc..c5332a079586 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/util/nodeport_addresses.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/util/nodeport_addresses.go @@ -20,7 +20,7 @@ import ( "fmt" "net" - "k8s.io/apimachinery/pkg/util/sets" + "k8s.io/api/core/v1" netutils "k8s.io/utils/net" ) @@ -30,32 +30,52 @@ type NodePortAddresses struct { cidrs []*net.IPNet containsIPv4Loopback bool + matchAll bool } // RFC 5735 127.0.0.0/8 - This block is assigned for use as the Internet host loopback address var ipv4LoopbackStart = net.IPv4(127, 0, 0, 0) -// NewNodePortAddresses takes the `--nodeport-addresses` value (which is assumed to -// contain only valid CIDRs) and returns a NodePortAddresses object. If cidrStrings is -// empty, this is treated as `["0.0.0.0/0", "::/0"]`. 
-func NewNodePortAddresses(cidrStrings []string) *NodePortAddresses { - if len(cidrStrings) == 0 { - cidrStrings = []string{IPv4ZeroCIDR, IPv6ZeroCIDR} +// NewNodePortAddresses takes an IP family and the `--nodeport-addresses` value (which is +// assumed to contain only valid CIDRs, potentially of both IP families) and returns a +// NodePortAddresses object for the given family. If there are no CIDRs of the given +// family then the CIDR "0.0.0.0/0" or "::/0" will be added (even if there are CIDRs of +// the other family). +func NewNodePortAddresses(family v1.IPFamily, cidrStrings []string) *NodePortAddresses { + npa := &NodePortAddresses{} + + // Filter CIDRs to correct family + for _, str := range cidrStrings { + if (family == v1.IPv4Protocol) == netutils.IsIPv4CIDRString(str) { + npa.cidrStrings = append(npa.cidrStrings, str) + } } - - npa := &NodePortAddresses{ - cidrStrings: cidrStrings, + if len(npa.cidrStrings) == 0 { + if family == v1.IPv4Protocol { + npa.cidrStrings = []string{IPv4ZeroCIDR} + } else { + npa.cidrStrings = []string{IPv6ZeroCIDR} + } } + // Now parse for _, str := range npa.cidrStrings { _, cidr, _ := netutils.ParseCIDRSloppy(str) - npa.cidrs = append(npa.cidrs, cidr) if netutils.IsIPv4CIDR(cidr) { if cidr.IP.IsLoopback() || cidr.Contains(ipv4LoopbackStart) { npa.containsIPv4Loopback = true } } + + if IsZeroCIDR(str) { + // Ignore everything else + npa.cidrs = []*net.IPNet{cidr} + npa.matchAll = true + break + } + + npa.cidrs = append(npa.cidrs, cidr) } return npa @@ -65,32 +85,23 @@ func (npa *NodePortAddresses) String() string { return fmt.Sprintf("%v", npa.cidrStrings) } -// GetNodeAddresses return all matched node IP addresses for npa's CIDRs. -// If npa's CIDRs include "0.0.0.0/0" and/or "::/0", then those values will be returned -// verbatim in the response and no actual IPs of that family will be returned. -// If no matching IPs are found, GetNodeAddresses will return an error. -// NetworkInterfacer is injected for test purpose. 
-func (npa *NodePortAddresses) GetNodeAddresses(nw NetworkInterfacer) (sets.String, error) { - uniqueAddressList := sets.NewString() - - // First round of iteration to pick out `0.0.0.0/0` or `::/0` for the sake of excluding non-zero IPs. - for _, cidr := range npa.cidrStrings { - if IsZeroCIDR(cidr) { - uniqueAddressList.Insert(cidr) - } - } +// MatchAll returns true if npa matches all node IPs (of npa's given family) +func (npa *NodePortAddresses) MatchAll() bool { + return npa.matchAll +} +// GetNodeIPs return all matched node IP addresses for npa's CIDRs. If no matching +// IPs are found, it returns an empty list. +// NetworkInterfacer is injected for test purpose. +func (npa *NodePortAddresses) GetNodeIPs(nw NetworkInterfacer) ([]net.IP, error) { addrs, err := nw.InterfaceAddrs() if err != nil { return nil, fmt.Errorf("error listing all interfaceAddrs from host, error: %v", err) } - // Second round of iteration to parse IPs based on cidr. + // Use a map to dedup matches + addresses := make(map[string]net.IP) for _, cidr := range npa.cidrs { - if IsZeroCIDR(cidr.String()) { - continue - } - for _, addr := range addrs { var ip net.IP // nw.InterfaceAddrs may return net.IPAddr or net.IPNet on windows, and it will return net.IPNet on linux. 
@@ -104,21 +115,17 @@ func (npa *NodePortAddresses) GetNodeAddresses(nw NetworkInterfacer) (sets.Strin } if cidr.Contains(ip) { - if netutils.IsIPv6(ip) && !uniqueAddressList.Has(IPv6ZeroCIDR) { - uniqueAddressList.Insert(ip.String()) - } - if !netutils.IsIPv6(ip) && !uniqueAddressList.Has(IPv4ZeroCIDR) { - uniqueAddressList.Insert(ip.String()) - } + addresses[ip.String()] = ip } } } - if uniqueAddressList.Len() == 0 { - return nil, fmt.Errorf("no addresses found for cidrs %v", npa.cidrStrings) + ips := make([]net.IP, 0, len(addresses)) + for _, ip := range addresses { + ips = append(ips, ip) } - return uniqueAddressList, nil + return ips, nil } // ContainsIPv4Loopback returns true if npa's CIDRs contain an IPv4 loopback address. diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/util/utils.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/util/utils.go index 319daf27b4a0..3f56bb6dbf5e 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/util/utils.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/util/utils.go @@ -17,12 +17,9 @@ limitations under the License. 
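The family filtering that `NewNodePortAddresses` now performs — keep only CIDRs of the requested IP family, and fall back to that family's zero CIDR when none match — can be sketched with the standard library's `net/netip`. The vendored code uses `k8s.io/utils/net` helpers instead, so `filterCIDRsByFamily` here is a hypothetical stand-in, not the real function:

```go
package main

import (
	"fmt"
	"net/netip"
)

// filterCIDRsByFamily keeps only CIDRs of the requested family (ipv4=true
// selects IPv4), falling back to that family's zero CIDR when none match —
// the same shape as NewNodePortAddresses' filtering step.
func filterCIDRsByFamily(ipv4 bool, cidrStrings []string) []string {
	var out []string
	for _, s := range cidrStrings {
		prefix, err := netip.ParsePrefix(s)
		if err != nil {
			// The vendored code assumes callers pass only valid CIDRs;
			// this sketch just skips anything unparseable.
			continue
		}
		if prefix.Addr().Is4() == ipv4 {
			out = append(out, s)
		}
	}
	if len(out) == 0 {
		if ipv4 {
			out = []string{"0.0.0.0/0"}
		} else {
			out = []string{"::/0"}
		}
	}
	return out
}

func main() {
	mixed := []string{"10.0.0.0/8", "fd00::/8"}
	fmt.Println(filterCIDRsByFamily(true, mixed))
	fmt.Println(filterCIDRsByFamily(false, []string{"10.0.0.0/8"}))
}
```

The fallback explains the doc comment in the diff: a dual-stack `--nodeport-addresses` value with no CIDRs of one family still yields a match-all object for that family rather than an empty one.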
package util import ( - "bytes" "context" - "errors" "fmt" "net" - "net/http" "strconv" "strings" @@ -46,14 +43,6 @@ const ( IPv6ZeroCIDR = "::/0" ) -var ( - // ErrAddressNotAllowed indicates the address is not allowed - ErrAddressNotAllowed = errors.New("address not allowed") - - // ErrNoAddresses indicates there are no addresses for the hostname - ErrNoAddresses = errors.New("no addresses for hostname") -) - // isValidEndpoint checks that the given host / port pair are valid endpoint func isValidEndpoint(host string, port int) bool { return host != "" && port > 0 @@ -96,56 +85,11 @@ func IsLoopBack(ip string) bool { return false } -// IsProxyableIP checks if a given IP address is permitted to be proxied -func IsProxyableIP(ip string) error { - netIP := netutils.ParseIPSloppy(ip) - if netIP == nil { - return ErrAddressNotAllowed - } - return isProxyableIP(netIP) -} - -func isProxyableIP(ip net.IP) error { - if !ip.IsGlobalUnicast() { - return ErrAddressNotAllowed - } - return nil -} - // Resolver is an interface for net.Resolver type Resolver interface { LookupIPAddr(ctx context.Context, host string) ([]net.IPAddr, error) } -// IsProxyableHostname checks if the IP addresses for a given hostname are permitted to be proxied -func IsProxyableHostname(ctx context.Context, resolv Resolver, hostname string) error { - resp, err := resolv.LookupIPAddr(ctx, hostname) - if err != nil { - return err - } - - if len(resp) == 0 { - return ErrNoAddresses - } - - for _, host := range resp { - if err := isProxyableIP(host.IP); err != nil { - return err - } - } - return nil -} - -// IsAllowedHost checks if the given IP host address is in a network in the denied list. 
-func IsAllowedHost(host net.IP, denied []*net.IPNet) error { - for _, ipNet := range denied { - if ipNet.Contains(host) { - return ErrAddressNotAllowed - } - } - return nil -} - // GetLocalAddrs returns a list of all network addresses on the local system func GetLocalAddrs() ([]net.IP, error) { var localAddrs []net.IP @@ -238,7 +182,7 @@ func MapIPsByIPFamily(ipStrings []string) map[v1.IPFamily][]string { ipFamilyMap := map[v1.IPFamily][]string{} for _, ip := range ipStrings { // Handle only the valid IPs - if ipFamily, err := getIPFamilyFromIP(ip); err == nil { + if ipFamily := getIPFamilyFromIP(ip); ipFamily != "" { ipFamilyMap[ipFamily] = append(ipFamilyMap[ipFamily], ip) } else { // this function is called in multiple places. All of which @@ -260,7 +204,7 @@ func MapCIDRsByIPFamily(cidrStrings []string) map[v1.IPFamily][]string { ipFamilyMap := map[v1.IPFamily][]string{} for _, cidr := range cidrStrings { // Handle only the valid CIDRs - if ipFamily, err := getIPFamilyFromCIDR(cidr); err == nil { + if ipFamily := getIPFamilyFromCIDR(cidr); ipFamily != "" { ipFamilyMap[ipFamily] = append(ipFamilyMap[ipFamily], cidr) } else { klog.ErrorS(nil, "Skipping invalid CIDR", "cidr", cidr) @@ -269,27 +213,29 @@ func MapCIDRsByIPFamily(cidrStrings []string) map[v1.IPFamily][]string { return ipFamilyMap } -func getIPFamilyFromIP(ipStr string) (v1.IPFamily, error) { +// Returns the IP family of ipStr, or "" if ipStr can't be parsed as an IP +func getIPFamilyFromIP(ipStr string) v1.IPFamily { netIP := netutils.ParseIPSloppy(ipStr) if netIP == nil { - return "", ErrAddressNotAllowed + return "" } if netutils.IsIPv6(netIP) { - return v1.IPv6Protocol, nil + return v1.IPv6Protocol } - return v1.IPv4Protocol, nil + return v1.IPv4Protocol } -func getIPFamilyFromCIDR(cidrStr string) (v1.IPFamily, error) { +// Returns the IP family of cidrStr, or "" if cidrStr can't be parsed as a CIDR +func getIPFamilyFromCIDR(cidrStr string) v1.IPFamily { _, netCIDR, err := 
netutils.ParseCIDRSloppy(cidrStr) if err != nil { - return "", ErrAddressNotAllowed + return "" } if netutils.IsIPv6CIDR(netCIDR) { - return v1.IPv6Protocol, nil + return v1.IPv6Protocol } - return v1.IPv4Protocol, nil + return v1.IPv4Protocol } // OtherIPFamily returns the other ip family @@ -347,66 +293,6 @@ func EnsureSysctl(sysctl utilsysctl.Interface, name string, newVal int) error { return nil } -// DialContext is a dial function matching the signature of net.Dialer.DialContext. -type DialContext = func(context.Context, string, string) (net.Conn, error) - -// FilteredDialOptions configures how a DialContext is wrapped by NewFilteredDialContext. -type FilteredDialOptions struct { - // DialHostIPDenylist restricts hosts from being dialed. - DialHostCIDRDenylist []*net.IPNet - // AllowLocalLoopback controls connections to local loopback hosts (as defined by - // IsProxyableIP). - AllowLocalLoopback bool -} - -// NewFilteredDialContext returns a DialContext function that filters connections based on a FilteredDialOptions. -func NewFilteredDialContext(wrapped DialContext, resolv Resolver, opts *FilteredDialOptions) DialContext { - if wrapped == nil { - wrapped = http.DefaultTransport.(*http.Transport).DialContext - } - if opts == nil { - // Do no filtering - return wrapped - } - if resolv == nil { - resolv = net.DefaultResolver - } - if len(opts.DialHostCIDRDenylist) == 0 && opts.AllowLocalLoopback { - // Do no filtering. - return wrapped - } - return func(ctx context.Context, network, address string) (net.Conn, error) { - // DialContext is given host:port. LookupIPAddress expects host. 
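The change just above — `getIPFamilyFromIP` returning an empty `v1.IPFamily` instead of `(family, error)` — is a common Go simplification when the only failure mode is "unparseable input". A standalone sketch of the same contract using the standard library's `net` package (the vendored code uses `ParseIPSloppy` from `k8s.io/utils/net`, and `familyOfIP` is a name invented for this sketch):

```go
package main

import (
	"fmt"
	"net"
)

// ipFamily mirrors the v1.IPFamily string constants for this sketch.
type ipFamily string

const (
	ipv4Protocol ipFamily = "IPv4"
	ipv6Protocol ipFamily = "IPv6"
)

// familyOfIP returns the family of ipStr, or "" if it can't be parsed —
// the same contract as the reworked getIPFamilyFromIP.
func familyOfIP(ipStr string) ipFamily {
	ip := net.ParseIP(ipStr)
	if ip == nil {
		return ""
	}
	if ip.To4() == nil {
		return ipv6Protocol
	}
	return ipv4Protocol
}

func main() {
	fmt.Println(familyOfIP("192.0.2.1"), familyOfIP("2001:db8::1"), familyOfIP("not-an-ip") == "")
}
```

Dropping the error also let the diff delete `ErrAddressNotAllowed` entirely, since `MapIPsByIPFamily` and `MapCIDRsByIPFamily` only ever checked the error to skip invalid entries.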
- addressToResolve, _, err := net.SplitHostPort(address) - if err != nil { - addressToResolve = address - } - - resp, err := resolv.LookupIPAddr(ctx, addressToResolve) - if err != nil { - return nil, err - } - - if len(resp) == 0 { - return nil, ErrNoAddresses - } - - for _, host := range resp { - if !opts.AllowLocalLoopback { - if err := isProxyableIP(host.IP); err != nil { - return nil, err - } - } - if opts.DialHostCIDRDenylist != nil { - if err := IsAllowedHost(host.IP, opts.DialHostCIDRDenylist); err != nil { - return nil, err - } - } - } - return wrapped(ctx, network, address) - } -} - // GetClusterIPByFamily returns a service clusterip by family func GetClusterIPByFamily(ipFamily v1.IPFamily, service *v1.Service) string { // allowing skew @@ -434,62 +320,6 @@ func GetClusterIPByFamily(ipFamily v1.IPFamily, service *v1.Service) string { return "" } -type LineBuffer struct { - b bytes.Buffer - lines int -} - -// Write takes a list of arguments, each a string or []string, joins all the -// individual strings with spaces, terminates with newline, and writes to buf. -// Any other argument type will panic. -func (buf *LineBuffer) Write(args ...interface{}) { - for i, arg := range args { - if i > 0 { - buf.b.WriteByte(' ') - } - switch x := arg.(type) { - case string: - buf.b.WriteString(x) - case []string: - for j, s := range x { - if j > 0 { - buf.b.WriteByte(' ') - } - buf.b.WriteString(s) - } - default: - panic(fmt.Sprintf("unknown argument type: %T", x)) - } - } - buf.b.WriteByte('\n') - buf.lines++ -} - -// WriteBytes writes bytes to buffer, and terminates with newline. -func (buf *LineBuffer) WriteBytes(bytes []byte) { - buf.b.Write(bytes) - buf.b.WriteByte('\n') - buf.lines++ -} - -// Reset clears buf -func (buf *LineBuffer) Reset() { - buf.b.Reset() - buf.lines = 0 -} - -// Bytes returns the contents of buf as a []byte -func (buf *LineBuffer) Bytes() []byte { - return buf.b.Bytes() -} - -// Lines returns the number of lines in buf. 
Note that more precisely, this returns the -// number of times Write() or WriteBytes() was called; it assumes that you never wrote -// any newlines to the buffer yourself. -func (buf *LineBuffer) Lines() int { - return buf.lines -} - // RevertPorts is closing ports in replacementPortsMap but not in originalPortsMap. In other words, it only // closes the ports opened in this sync. func RevertPorts(replacementPortsMap, originalPortsMap map[netutils.LocalPort]netutils.Closeable) { diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/winkernel/OWNERS b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/winkernel/OWNERS deleted file mode 100644 index eec557c644c2..000000000000 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/winkernel/OWNERS +++ /dev/null @@ -1,17 +0,0 @@ -# See the OWNERS docs at https://go.k8s.io/owners - -approvers: - - feiskyer - - sbangari - - daschott -reviewers: - - feiskyer - - sbangari - - daschott -labels: - - sig/network -emeritus_approvers: - - dineshgovindasamy - - ksubrmnn - - kumarvin123 - - madhanrm diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/winkernel/hns.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/winkernel/hns.go deleted file mode 100644 index 1f31c94e6368..000000000000 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/winkernel/hns.go +++ /dev/null @@ -1,451 +0,0 @@ -//go:build windows -// +build windows - -/* -Copyright 2018 The Kubernetes Authors. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
-See the License for the specific language governing permissions and -limitations under the License. -*/ - -package winkernel - -import ( - "crypto/sha1" - "encoding/json" - "fmt" - - "github.com/Microsoft/hcsshim/hcn" - "k8s.io/klog/v2" - - "strings" -) - -type HostNetworkService interface { - getNetworkByName(name string) (*hnsNetworkInfo, error) - getAllEndpointsByNetwork(networkName string) (map[string]*endpointsInfo, error) - getEndpointByID(id string) (*endpointsInfo, error) - getEndpointByIpAddress(ip string, networkName string) (*endpointsInfo, error) - getEndpointByName(id string) (*endpointsInfo, error) - createEndpoint(ep *endpointsInfo, networkName string) (*endpointsInfo, error) - deleteEndpoint(hnsID string) error - getLoadBalancer(endpoints []endpointsInfo, flags loadBalancerFlags, sourceVip string, vip string, protocol uint16, internalPort uint16, externalPort uint16, previousLoadBalancers map[loadBalancerIdentifier]*loadBalancerInfo) (*loadBalancerInfo, error) - getAllLoadBalancers() (map[loadBalancerIdentifier]*loadBalancerInfo, error) - deleteLoadBalancer(hnsID string) error -} - -type hns struct{} - -var ( - // LoadBalancerFlagsIPv6 enables IPV6. - LoadBalancerFlagsIPv6 hcn.LoadBalancerFlags = 2 - // LoadBalancerPortMappingFlagsVipExternalIP enables VipExternalIP. 
- LoadBalancerPortMappingFlagsVipExternalIP hcn.LoadBalancerPortMappingFlags = 16 -) - -func (hns hns) getNetworkByName(name string) (*hnsNetworkInfo, error) { - hnsnetwork, err := hcn.GetNetworkByName(name) - if err != nil { - klog.ErrorS(err, "Error getting network by name") - return nil, err - } - - var remoteSubnets []*remoteSubnetInfo - for _, policy := range hnsnetwork.Policies { - if policy.Type == hcn.RemoteSubnetRoute { - policySettings := hcn.RemoteSubnetRoutePolicySetting{} - err = json.Unmarshal(policy.Settings, &policySettings) - if err != nil { - return nil, fmt.Errorf("failed to unmarshal Remote Subnet policy settings") - } - rs := &remoteSubnetInfo{ - destinationPrefix: policySettings.DestinationPrefix, - isolationID: policySettings.IsolationId, - providerAddress: policySettings.ProviderAddress, - drMacAddress: policySettings.DistributedRouterMacAddress, - } - remoteSubnets = append(remoteSubnets, rs) - } - } - - return &hnsNetworkInfo{ - id: hnsnetwork.Id, - name: hnsnetwork.Name, - networkType: string(hnsnetwork.Type), - remoteSubnets: remoteSubnets, - }, nil -} - -func (hns hns) getAllEndpointsByNetwork(networkName string) (map[string]*(endpointsInfo), error) { - hcnnetwork, err := hcn.GetNetworkByName(networkName) - if err != nil { - klog.ErrorS(err, "failed to get HNS network by name", "name", networkName) - return nil, err - } - endpoints, err := hcn.ListEndpointsOfNetwork(hcnnetwork.Id) - if err != nil { - return nil, fmt.Errorf("failed to list endpoints: %w", err) - } - endpointInfos := make(map[string]*(endpointsInfo)) - for _, ep := range endpoints { - // Add to map with key endpoint ID or IP address - // Storing this is expensive in terms of memory, however there is a bug in Windows Server 2019 that can cause two endpoints to be created with the same IP address. - // TODO: Store by IP only and remove any lookups by endpoint ID. 
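The deleted `getAllEndpointsByNetwork` above indexes the same `*endpointsInfo` under two map keys — the HNS endpoint ID and the IP address — so a mutation reached through either key is visible through the other. A minimal standalone sketch of that aliasing (the `epInfo` type and values here are illustrative, not the vendored struct):

```go
package main

import "fmt"

// epInfo loosely mirrors the fields the winkernel proxier tracks per endpoint.
type epInfo struct {
	ip    string
	hnsID string
	ready bool
}

func main() {
	// Store one pointer under two keys, as the deleted code does to allow
	// lookups by either endpoint ID or IP address.
	byKey := map[string]*epInfo{}
	ep := &epInfo{ip: "10.0.0.5", hnsID: "ep-1", ready: true}
	byKey[ep.hnsID] = ep
	byKey[ep.ip] = ep

	byKey["ep-1"].ready = false          // update via endpoint ID
	fmt.Println(byKey["10.0.0.5"].ready) // lookup via IP sees the change: false
}
```

Because both keys hold the same pointer, there is no risk of the two views drifting apart; the memory cost the original comment mentions comes from the extra map entries, not from copies.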
- endpointInfos[ep.Id] = &endpointsInfo{ - ip: ep.IpConfigurations[0].IpAddress, - isLocal: uint32(ep.Flags&hcn.EndpointFlagsRemoteEndpoint) == 0, - macAddress: ep.MacAddress, - hnsID: ep.Id, - hns: hns, - // only ready and not terminating endpoints were added to HNS - ready: true, - serving: true, - terminating: false, - } - endpointInfos[ep.IpConfigurations[0].IpAddress] = endpointInfos[ep.Id] - - if len(ep.IpConfigurations) == 1 { - continue - } - - // If ipFamilyPolicy is RequireDualStack or PreferDualStack, then there will be 2 IPS (iPV4 and IPV6) - // in the endpoint list - endpointDualstack := &endpointsInfo{ - ip: ep.IpConfigurations[1].IpAddress, - isLocal: uint32(ep.Flags&hcn.EndpointFlagsRemoteEndpoint) == 0, - macAddress: ep.MacAddress, - hnsID: ep.Id, - hns: hns, - // only ready and not terminating endpoints were added to HNS - ready: true, - serving: true, - terminating: false, - } - endpointInfos[ep.IpConfigurations[1].IpAddress] = endpointDualstack - } - klog.V(3).InfoS("Queried endpoints from network", "network", networkName) - klog.V(5).InfoS("Queried endpoints details", "network", networkName, "endpointInfos", endpointInfos) - return endpointInfos, nil -} - -func (hns hns) getEndpointByID(id string) (*endpointsInfo, error) { - hnsendpoint, err := hcn.GetEndpointByID(id) - if err != nil { - return nil, err - } - return &endpointsInfo{ //TODO: fill out PA - ip: hnsendpoint.IpConfigurations[0].IpAddress, - isLocal: uint32(hnsendpoint.Flags&hcn.EndpointFlagsRemoteEndpoint) == 0, //TODO: Change isLocal to isRemote - macAddress: hnsendpoint.MacAddress, - hnsID: hnsendpoint.Id, - hns: hns, - }, nil -} -func (hns hns) getEndpointByIpAddress(ip string, networkName string) (*endpointsInfo, error) { - hnsnetwork, err := hcn.GetNetworkByName(networkName) - if err != nil { - klog.ErrorS(err, "Error getting network by name") - return nil, err - } - - endpoints, err := hcn.ListEndpoints() - if err != nil { - return nil, fmt.Errorf("failed to list endpoints: 
%w", err) - } - for _, endpoint := range endpoints { - equal := false - if endpoint.IpConfigurations != nil && len(endpoint.IpConfigurations) > 0 { - equal = endpoint.IpConfigurations[0].IpAddress == ip - - if !equal && len(endpoint.IpConfigurations) > 1 { - equal = endpoint.IpConfigurations[1].IpAddress == ip - } - } - if equal && strings.EqualFold(endpoint.HostComputeNetwork, hnsnetwork.Id) { - return &endpointsInfo{ - ip: ip, - isLocal: uint32(endpoint.Flags&hcn.EndpointFlagsRemoteEndpoint) == 0, //TODO: Change isLocal to isRemote - macAddress: endpoint.MacAddress, - hnsID: endpoint.Id, - hns: hns, - }, nil - } - } - return nil, fmt.Errorf("Endpoint %v not found on network %s", ip, networkName) -} -func (hns hns) getEndpointByName(name string) (*endpointsInfo, error) { - hnsendpoint, err := hcn.GetEndpointByName(name) - if err != nil { - return nil, err - } - return &endpointsInfo{ //TODO: fill out PA - ip: hnsendpoint.IpConfigurations[0].IpAddress, - isLocal: uint32(hnsendpoint.Flags&hcn.EndpointFlagsRemoteEndpoint) == 0, //TODO: Change isLocal to isRemote - macAddress: hnsendpoint.MacAddress, - hnsID: hnsendpoint.Id, - hns: hns, - }, nil -} -func (hns hns) createEndpoint(ep *endpointsInfo, networkName string) (*endpointsInfo, error) { - hnsNetwork, err := hcn.GetNetworkByName(networkName) - if err != nil { - return nil, err - } - var flags hcn.EndpointFlags - if !ep.isLocal { - flags |= hcn.EndpointFlagsRemoteEndpoint - } - ipConfig := &hcn.IpConfig{ - IpAddress: ep.ip, - } - hnsEndpoint := &hcn.HostComputeEndpoint{ - IpConfigurations: []hcn.IpConfig{*ipConfig}, - MacAddress: ep.macAddress, - Flags: flags, - SchemaVersion: hcn.SchemaVersion{ - Major: 2, - Minor: 0, - }, - } - - var createdEndpoint *hcn.HostComputeEndpoint - if !ep.isLocal { - if len(ep.providerAddress) != 0 { - policySettings := hcn.ProviderAddressEndpointPolicySetting{ - ProviderAddress: ep.providerAddress, - } - policySettingsJson, err := json.Marshal(policySettings) - if err != nil { - 
return nil, fmt.Errorf("PA Policy creation failed: %v", err) - } - paPolicy := hcn.EndpointPolicy{ - Type: hcn.NetworkProviderAddress, - Settings: policySettingsJson, - } - hnsEndpoint.Policies = append(hnsEndpoint.Policies, paPolicy) - } - createdEndpoint, err = hnsNetwork.CreateRemoteEndpoint(hnsEndpoint) - if err != nil { - return nil, err - } - } else { - createdEndpoint, err = hnsNetwork.CreateEndpoint(hnsEndpoint) - if err != nil { - return nil, err - } - } - return &endpointsInfo{ - ip: createdEndpoint.IpConfigurations[0].IpAddress, - isLocal: uint32(createdEndpoint.Flags&hcn.EndpointFlagsRemoteEndpoint) == 0, - macAddress: createdEndpoint.MacAddress, - hnsID: createdEndpoint.Id, - providerAddress: ep.providerAddress, //TODO get from createdEndpoint - hns: hns, - }, nil -} -func (hns hns) deleteEndpoint(hnsID string) error { - hnsendpoint, err := hcn.GetEndpointByID(hnsID) - if err != nil { - return err - } - err = hnsendpoint.Delete() - if err == nil { - klog.V(3).InfoS("Remote endpoint resource deleted", "hnsID", hnsID) - } - return err -} - -// findLoadBalancerID will construct a id from the provided loadbalancer fields -func findLoadBalancerID(endpoints []endpointsInfo, vip string, protocol, internalPort, externalPort uint16) (loadBalancerIdentifier, error) { - // Compute hash from backends (endpoint IDs) - hash, err := hashEndpoints(endpoints) - if err != nil { - klog.V(2).ErrorS(err, "Error hashing endpoints", "endpoints", endpoints) - return loadBalancerIdentifier{}, err - } - if len(vip) > 0 { - return loadBalancerIdentifier{protocol: protocol, internalPort: internalPort, externalPort: externalPort, vip: vip, endpointsHash: hash}, nil - } - return loadBalancerIdentifier{protocol: protocol, internalPort: internalPort, externalPort: externalPort, endpointsHash: hash}, nil -} - -func (hns hns) getAllLoadBalancers() (map[loadBalancerIdentifier]*loadBalancerInfo, error) { - lbs, err := hcn.ListLoadBalancers() - var id loadBalancerIdentifier - if err != 
nil { - return nil, err - } - loadBalancers := make(map[loadBalancerIdentifier]*(loadBalancerInfo)) - for _, lb := range lbs { - portMap := lb.PortMappings[0] - // Compute hash from backends (endpoint IDs) - hash, err := hashEndpoints(lb.HostComputeEndpoints) - if err != nil { - klog.V(2).ErrorS(err, "Error hashing endpoints", "policy", lb) - return nil, err - } - if len(lb.FrontendVIPs) == 0 { - // Leave VIP uninitialized - id = loadBalancerIdentifier{protocol: uint16(portMap.Protocol), internalPort: portMap.InternalPort, externalPort: portMap.ExternalPort, endpointsHash: hash} - } else { - id = loadBalancerIdentifier{protocol: uint16(portMap.Protocol), internalPort: portMap.InternalPort, externalPort: portMap.ExternalPort, vip: lb.FrontendVIPs[0], endpointsHash: hash} - } - loadBalancers[id] = &loadBalancerInfo{ - hnsID: lb.Id, - } - } - klog.V(3).InfoS("Queried load balancers", "count", len(lbs)) - return loadBalancers, nil -} - -func (hns hns) getLoadBalancer(endpoints []endpointsInfo, flags loadBalancerFlags, sourceVip string, vip string, protocol uint16, internalPort uint16, externalPort uint16, previousLoadBalancers map[loadBalancerIdentifier]*loadBalancerInfo) (*loadBalancerInfo, error) { - var id loadBalancerIdentifier - vips := []string{} - // Compute hash from backends (endpoint IDs) - hash, err := hashEndpoints(endpoints) - if err != nil { - klog.V(2).ErrorS(err, "Error hashing endpoints", "endpoints", endpoints) - return nil, err - } - if len(vip) > 0 { - id = loadBalancerIdentifier{protocol: protocol, internalPort: internalPort, externalPort: externalPort, vip: vip, endpointsHash: hash} - vips = append(vips, vip) - } else { - id = loadBalancerIdentifier{protocol: protocol, internalPort: internalPort, externalPort: externalPort, endpointsHash: hash} - } - - if lb, found := previousLoadBalancers[id]; found { - klog.V(1).InfoS("Found cached Hns loadbalancer policy resource", "policies", lb) - return lb, nil - } - - lbPortMappingFlags := 
hcn.LoadBalancerPortMappingFlagsNone - if flags.isILB { - lbPortMappingFlags |= hcn.LoadBalancerPortMappingFlagsILB - } - if flags.useMUX { - lbPortMappingFlags |= hcn.LoadBalancerPortMappingFlagsUseMux - } - if flags.preserveDIP { - lbPortMappingFlags |= hcn.LoadBalancerPortMappingFlagsPreserveDIP - } - if flags.localRoutedVIP { - lbPortMappingFlags |= hcn.LoadBalancerPortMappingFlagsLocalRoutedVIP - } - if flags.isVipExternalIP { - lbPortMappingFlags |= LoadBalancerPortMappingFlagsVipExternalIP - } - - lbFlags := hcn.LoadBalancerFlagsNone - if flags.isDSR { - lbFlags |= hcn.LoadBalancerFlagsDSR - } - - if flags.isIPv6 { - lbFlags |= LoadBalancerFlagsIPv6 - } - - lbDistributionType := hcn.LoadBalancerDistributionNone - - if flags.sessionAffinity { - lbDistributionType = hcn.LoadBalancerDistributionSourceIP - } - - loadBalancer := &hcn.HostComputeLoadBalancer{ - SourceVIP: sourceVip, - PortMappings: []hcn.LoadBalancerPortMapping{ - { - Protocol: uint32(protocol), - InternalPort: internalPort, - ExternalPort: externalPort, - DistributionType: lbDistributionType, - Flags: lbPortMappingFlags, - }, - }, - FrontendVIPs: vips, - SchemaVersion: hcn.SchemaVersion{ - Major: 2, - Minor: 0, - }, - Flags: lbFlags, - } - - for _, ep := range endpoints { - loadBalancer.HostComputeEndpoints = append(loadBalancer.HostComputeEndpoints, ep.hnsID) - } - - lb, err := loadBalancer.Create() - - if err != nil { - return nil, err - } - - klog.V(1).InfoS("Created Hns loadbalancer policy resource", "loadBalancer", lb) - lbInfo := &loadBalancerInfo{ - hnsID: lb.Id, - } - // Add to map of load balancers - previousLoadBalancers[id] = lbInfo - return lbInfo, err -} - -func (hns hns) deleteLoadBalancer(hnsID string) error { - lb, err := hcn.GetLoadBalancerByID(hnsID) - if err != nil { - // Return silently - return nil - } - - err = lb.Delete() - if err != nil { - // There is a bug in Windows Server 2019, that can cause the delete call to fail sometimes. We retry one more time. 
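The `getLoadBalancer` body above assembles its HNS flag words by OR-ing option bits one feature at a time. A self-contained sketch of the same composition pattern — only the IPv6 value (2) comes from the deleted source; the other constants are assumed values for illustration:

```go
package main

import "fmt"

// lbFlags stands in for hcn.LoadBalancerFlags; the concrete values below are
// illustrative except flagsIPv6, which matches LoadBalancerFlagsIPv6 = 2.
type lbFlags uint32

const (
	flagsNone lbFlags = 0
	flagsILB  lbFlags = 1 // assumed value, for illustration only
	flagsIPv6 lbFlags = 2 // matches the deleted LoadBalancerFlagsIPv6
)

func main() {
	// Build the flag word incrementally, as getLoadBalancer does.
	f := flagsNone
	f |= flagsILB
	f |= flagsIPv6
	fmt.Println(f&flagsIPv6 != 0, uint32(f)) // true 3
}
```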
- // TODO: The logic in syncProxyRules should be rewritten in the future to better stage and handle a call like this failing using the policyApplied fields. - klog.V(1).ErrorS(err, "Error deleting Hns loadbalancer policy resource. Attempting one more time...", "loadBalancer", lb) - return lb.Delete() - } - return err -} - -// Calculates a hash from the given endpoint IDs. -func hashEndpoints[T string | endpointsInfo](endpoints []T) (hash [20]byte, err error) { - var id string - // Recover in case something goes wrong. Return error and null byte array. - defer func() { - if r := recover(); r != nil { - err = r.(error) - hash = [20]byte{} - } - }() - - // Iterate over endpoints, compute hash - for _, ep := range endpoints { - switch x := any(ep).(type) { - case endpointsInfo: - id = strings.ToUpper(x.hnsID) - case string: - id = x - } - if len(id) > 0 { - // We XOR the hashes of endpoints, since they are an unordered set. - // This can cause collisions, but is sufficient since we are using other keys to identify the load balancer. - hash = xor(hash, sha1.Sum(([]byte(id)))) - } - } - return -} - -func xor(b1 [20]byte, b2 [20]byte) (xorbytes [20]byte) { - for i := 0; i < 20; i++ { - xorbytes[i] = b1[i] ^ b2[i] - } - return xorbytes -} diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/winkernel/metrics.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/winkernel/metrics.go deleted file mode 100644 index e4357e4eceb0..000000000000 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/winkernel/metrics.go +++ /dev/null @@ -1,39 +0,0 @@ -/* -Copyright 2017 The Kubernetes Authors. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. 
-You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -package winkernel - -import ( - "sync" - - "k8s.io/component-base/metrics/legacyregistry" - "k8s.io/kubernetes/pkg/proxy/metrics" -) - -var registerMetricsOnce sync.Once - -// RegisterMetrics registers kube-proxy metrics for Windows modes. -func RegisterMetrics() { - registerMetricsOnce.Do(func() { - legacyregistry.MustRegister(metrics.SyncProxyRulesLatency) - legacyregistry.MustRegister(metrics.SyncProxyRulesLastTimestamp) - legacyregistry.MustRegister(metrics.EndpointChangesPending) - legacyregistry.MustRegister(metrics.EndpointChangesTotal) - legacyregistry.MustRegister(metrics.ServiceChangesPending) - legacyregistry.MustRegister(metrics.ServiceChangesTotal) - legacyregistry.MustRegister(metrics.SyncProxyRulesLastQueuedTimestamp) - }) -} diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/winkernel/proxier.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/winkernel/proxier.go deleted file mode 100644 index 3c14451dbdba..000000000000 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/proxy/winkernel/proxier.go +++ /dev/null @@ -1,1711 +0,0 @@ -//go:build windows -// +build windows - -/* -Copyright 2017 The Kubernetes Authors. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. 
-You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -package winkernel - -import ( - "fmt" - "net" - "os" - "strconv" - "strings" - "sync" - "sync/atomic" - "time" - - "github.com/Microsoft/hcsshim" - "github.com/Microsoft/hcsshim/hcn" - v1 "k8s.io/api/core/v1" - discovery "k8s.io/api/discovery/v1" - "k8s.io/apimachinery/pkg/types" - "k8s.io/apimachinery/pkg/util/intstr" - apiutil "k8s.io/apimachinery/pkg/util/net" - "k8s.io/apimachinery/pkg/util/sets" - "k8s.io/apimachinery/pkg/util/wait" - utilfeature "k8s.io/apiserver/pkg/util/feature" - "k8s.io/client-go/tools/events" - "k8s.io/klog/v2" - "k8s.io/kubernetes/pkg/apis/core/v1/helper" - kubefeatures "k8s.io/kubernetes/pkg/features" - "k8s.io/kubernetes/pkg/proxy" - "k8s.io/kubernetes/pkg/proxy/apis/config" - proxyconfig "k8s.io/kubernetes/pkg/proxy/config" - "k8s.io/kubernetes/pkg/proxy/healthcheck" - "k8s.io/kubernetes/pkg/proxy/metaproxier" - "k8s.io/kubernetes/pkg/proxy/metrics" - utilproxy "k8s.io/kubernetes/pkg/proxy/util" - "k8s.io/kubernetes/pkg/util/async" - netutils "k8s.io/utils/net" -) - -// KernelCompatTester tests whether the required kernel capabilities are -// present to run the windows kernel proxier. -type KernelCompatTester interface { - IsCompatible() error -} - -// CanUseWinKernelProxier returns true if we should use the Kernel Proxier -// instead of the "classic" userspace Proxier. This is determined by checking -// the windows kernel version and for the existence of kernel features. -func CanUseWinKernelProxier(kcompat KernelCompatTester) (bool, error) { - // Check that the kernel supports what we need. 
- if err := kcompat.IsCompatible(); err != nil { - return false, err - } - return true, nil -} - -type WindowsKernelCompatTester struct{} - -// IsCompatible returns true if winkernel can support this mode of proxy -func (lkct WindowsKernelCompatTester) IsCompatible() error { - _, err := hcsshim.HNSListPolicyListRequest() - if err != nil { - return fmt.Errorf("Windows kernel is not compatible for Kernel mode") - } - return nil -} - -type externalIPInfo struct { - ip string - hnsID string -} - -type loadBalancerIngressInfo struct { - ip string - hnsID string - healthCheckHnsID string -} - -type loadBalancerInfo struct { - hnsID string -} - -type loadBalancerIdentifier struct { - protocol uint16 - internalPort uint16 - externalPort uint16 - vip string - endpointsHash [20]byte -} - -type loadBalancerFlags struct { - isILB bool - isDSR bool - isVipExternalIP bool - localRoutedVIP bool - useMUX bool - preserveDIP bool - sessionAffinity bool - isIPv6 bool -} - -// internal struct for string service information -type serviceInfo struct { - *proxy.BaseServicePortInfo - targetPort int - externalIPs []*externalIPInfo - loadBalancerIngressIPs []*loadBalancerIngressInfo - hnsID string - nodePorthnsID string - policyApplied bool - remoteEndpoint *endpointsInfo - hns HostNetworkService - preserveDIP bool - localTrafficDSR bool - internalTrafficLocal bool - winProxyOptimization bool -} - -type hnsNetworkInfo struct { - name string - id string - networkType string - remoteSubnets []*remoteSubnetInfo -} - -type remoteSubnetInfo struct { - destinationPrefix string - isolationID uint16 - providerAddress string - drMacAddress string -} - -const ( - NETWORK_TYPE_OVERLAY = "overlay" - // MAX_COUNT_STALE_LOADBALANCERS is the maximum number of stale loadbalancers which cleanedup in single syncproxyrules. - // If there are more stale loadbalancers to clean, it will go to next iteration of syncproxyrules. 
- MAX_COUNT_STALE_LOADBALANCERS = 20 -) - -func newHostNetworkService() (HostNetworkService, hcn.SupportedFeatures) { - var h HostNetworkService - supportedFeatures := hcn.GetSupportedFeatures() - if supportedFeatures.Api.V2 { - h = hns{} - } else { - panic("Windows HNS Api V2 required. This version of windows does not support API V2") - } - return h, supportedFeatures -} - -// logFormattedEndpoints will log all endpoints and its states which are taking part in endpointmap change. -// This mostly for debugging purpose and verbosity is set to 5. -func logFormattedEndpoints(logMsg string, logLevel klog.Level, svcPortName proxy.ServicePortName, eps []proxy.Endpoint) { - if klog.V(logLevel).Enabled() { - var epInfo string - for _, v := range eps { - epInfo = epInfo + fmt.Sprintf("\n %s={Ready:%v,Serving:%v,Terminating:%v,IsRemote:%v}", v.String(), v.IsReady(), v.IsServing(), v.IsTerminating(), !v.GetIsLocal()) - } - klog.V(logLevel).InfoS(logMsg, "svcPortName", svcPortName, "endpoints", epInfo) - } -} - -// This will cleanup stale load balancers which are pending delete -// in last iteration. This function will act like a self healing of stale -// loadbalancer entries. 
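The cleanup loop below caps work per sync at `MAX_COUNT_STALE_LOADBALANCERS`, deferring the remainder to the next `syncProxyRules` iteration. A standalone sketch of that capped-batch pattern, with `deleteFn` standing in for `hns.deleteLoadBalancer`:

```go
package main

import "fmt"

// cleanupBatch deletes at most maxPerSync stale entries per call and leaves
// the rest for the next sync iteration, as cleanupStaleLoadbalancers does.
func cleanupBatch(stale map[string]bool, maxPerSync int, deleteFn func(string) error) {
	i := 0
	for id := range stale {
		i++
		if deleteFn(id) == nil {
			delete(stale, id) // only forget entries that actually deleted
		}
		if i == maxPerSync {
			break // remaining IDs are retried on the next sync
		}
	}
}

func main() {
	stale := map[string]bool{"lb-1": true, "lb-2": true, "lb-3": true}
	cleanupBatch(stale, 2, func(string) error { return nil })
	fmt.Println(len(stale)) // 1 — one entry deferred to the next iteration
}
```

Capping the batch keeps any one sync pass bounded even when a large backlog of stale load balancers accumulates.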
-func (proxier *Proxier) cleanupStaleLoadbalancers() { - i := 0 - countStaleLB := len(proxier.mapStaleLoadbalancers) - if countStaleLB == 0 { - return - } - klog.V(3).InfoS("Cleanup of stale loadbalancers triggered", "LB Count", countStaleLB) - for lbID := range proxier.mapStaleLoadbalancers { - i++ - if err := proxier.hns.deleteLoadBalancer(lbID); err == nil { - delete(proxier.mapStaleLoadbalancers, lbID) - } - if i == MAX_COUNT_STALE_LOADBALANCERS { - // The remaining stale loadbalancers will be cleaned up in next iteration - break - } - } - countStaleLB = len(proxier.mapStaleLoadbalancers) - if countStaleLB > 0 { - klog.V(3).InfoS("Stale loadbalancers still remaining", "LB Count", countStaleLB, "stale_lb_ids", proxier.mapStaleLoadbalancers) - } -} - -func getNetworkName(hnsNetworkName string) (string, error) { - if len(hnsNetworkName) == 0 { - klog.V(3).InfoS("Flag --network-name not set, checking environment variable") - hnsNetworkName = os.Getenv("KUBE_NETWORK") - if len(hnsNetworkName) == 0 { - return "", fmt.Errorf("Environment variable KUBE_NETWORK and network-flag not initialized") - } - } - return hnsNetworkName, nil -} - -func getNetworkInfo(hns HostNetworkService, hnsNetworkName string) (*hnsNetworkInfo, error) { - hnsNetworkInfo, err := hns.getNetworkByName(hnsNetworkName) - for err != nil { - klog.ErrorS(err, "Unable to find HNS Network specified, please check network name and CNI deployment", "hnsNetworkName", hnsNetworkName) - time.Sleep(1 * time.Second) - hnsNetworkInfo, err = hns.getNetworkByName(hnsNetworkName) - } - return hnsNetworkInfo, err -} - -func isOverlay(hnsNetworkInfo *hnsNetworkInfo) bool { - return strings.EqualFold(hnsNetworkInfo.networkType, NETWORK_TYPE_OVERLAY) -} - -// StackCompatTester tests whether the required kernel and network are dualstack capable -type StackCompatTester interface { - DualStackCompatible(networkName string) bool -} - -type DualStackCompatTester struct{} - -func (t DualStackCompatTester) 
DualStackCompatible(networkName string) bool { - // First tag of hcsshim that has a proper check for dual stack support is v0.8.22 due to a bug. - if err := hcn.IPv6DualStackSupported(); err != nil { - // Hcn *can* fail the query to grab the version of hcn itself (which this call will do internally before parsing - // to see if dual stack is supported), but the only time this can happen, at least that can be discerned, is if the host - // is pre-1803 and hcn didn't exist. hcsshim should truthfully return a known error if this happened that we can - // check against, and the case where 'err != this known error' would be the 'this feature isn't supported' case, as is being - // used here. For now, seeming as how nothing before ws2019 (1809) is listed as supported for k8s we can pretty much assume - // any error here isn't because the query failed, it's just that dualstack simply isn't supported on the host. With all - // that in mind, just log as info and not error to let the user know we're falling back. 
- klog.InfoS("This version of Windows does not support dual-stack, falling back to single-stack", "err", err.Error()) - return false - } - - // check if network is using overlay - hns, _ := newHostNetworkService() - networkName, err := getNetworkName(networkName) - if err != nil { - klog.ErrorS(err, "Unable to determine dual-stack status, falling back to single-stack") - return false - } - networkInfo, err := getNetworkInfo(hns, networkName) - if err != nil { - klog.ErrorS(err, "Unable to determine dual-stack status, falling back to single-stack") - return false - } - - if utilfeature.DefaultFeatureGate.Enabled(kubefeatures.WinOverlay) && isOverlay(networkInfo) { - // Overlay (VXLAN) networks on Windows do not support dual-stack networking today - klog.InfoS("Winoverlay does not support dual-stack, falling back to single-stack") - return false - } - - return true -} - -// internal struct for endpoints information -type endpointsInfo struct { - ip string - port uint16 - isLocal bool - macAddress string - hnsID string - refCount *uint16 - providerAddress string - hns HostNetworkService - - // conditions - ready bool - serving bool - terminating bool -} - -// String is part of proxy.Endpoint interface. -func (info *endpointsInfo) String() string { - return net.JoinHostPort(info.ip, strconv.Itoa(int(info.port))) -} - -// GetIsLocal is part of proxy.Endpoint interface. -func (info *endpointsInfo) GetIsLocal() bool { - return info.isLocal -} - -// IsReady returns true if an endpoint is ready and not terminating. -func (info *endpointsInfo) IsReady() bool { - return info.ready -} - -// IsServing returns true if an endpoint is ready, regardless of it's terminating state. -func (info *endpointsInfo) IsServing() bool { - return info.serving -} - -// IsTerminating returns true if an endpoint is terminating. -func (info *endpointsInfo) IsTerminating() bool { - return info.terminating -} - -// GetZoneHint returns the zone hint for the endpoint. 
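The `endpointsInfo` struct below carries the ready/serving/terminating triple mirroring EndpointSlice conditions. The selection policy sketched here — prefer ready endpoints, fall back to serving-but-terminating ones only when permitted — is an illustrative assumption about how such conditions are typically consumed, not the vendored proxier's exact logic:

```go
package main

import "fmt"

// epConditions mirrors the three condition booleans on endpointsInfo.
type epConditions struct{ ready, serving, terminating bool }

// usable is a hypothetical helper: ready endpoints always qualify; a
// terminating endpoint qualifies only if it is still serving and the caller
// explicitly allows falling back to terminating endpoints.
func usable(c epConditions, allowTerminating bool) bool {
	if c.ready {
		return true
	}
	return allowTerminating && c.serving && c.terminating
}

func main() {
	fmt.Println(usable(epConditions{ready: true}, false))                     // true
	fmt.Println(usable(epConditions{serving: true, terminating: true}, false)) // false
	fmt.Println(usable(epConditions{serving: true, terminating: true}, true))  // true
}
```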
-func (info *endpointsInfo) GetZoneHints() sets.String { - return sets.String{} -} - -// IP returns just the IP part of the endpoint, it's a part of proxy.Endpoint interface. -func (info *endpointsInfo) IP() string { - return info.ip -} - -// Port returns just the Port part of the endpoint. -func (info *endpointsInfo) Port() (int, error) { - return int(info.port), nil -} - -// Equal is part of proxy.Endpoint interface. -func (info *endpointsInfo) Equal(other proxy.Endpoint) bool { - return info.String() == other.String() && info.GetIsLocal() == other.GetIsLocal() -} - -// GetNodeName returns the NodeName for this endpoint. -func (info *endpointsInfo) GetNodeName() string { - return "" -} - -// GetZone returns the Zone for this endpoint. -func (info *endpointsInfo) GetZone() string { - return "" -} - -// Uses mac prefix and IPv4 address to return a mac address -// This ensures mac addresses are unique for proper load balancing -// There is a possibility of MAC collisions but this Mac address is used for remote endpoints only -// and not sent on the wire. 
-func conjureMac(macPrefix string, ip net.IP) string { - if ip4 := ip.To4(); ip4 != nil { - a, b, c, d := ip4[0], ip4[1], ip4[2], ip4[3] - return fmt.Sprintf("%v-%02x-%02x-%02x-%02x", macPrefix, a, b, c, d) - } else if ip6 := ip.To16(); ip6 != nil { - a, b, c, d := ip6[15], ip6[14], ip6[13], ip6[12] - return fmt.Sprintf("%v-%02x-%02x-%02x-%02x", macPrefix, a, b, c, d) - } - return "02-11-22-33-44-55" -} - -func (proxier *Proxier) endpointsMapChange(oldEndpointsMap, newEndpointsMap proxy.EndpointsMap) { - // This will optimize remote endpoint and loadbalancer deletion based on the annotation - var svcPortMap = make(map[proxy.ServicePortName]bool) - var logLevel klog.Level = 5 - for svcPortName, eps := range oldEndpointsMap { - logFormattedEndpoints("endpointsMapChange oldEndpointsMap", logLevel, svcPortName, eps) - svcPortMap[svcPortName] = true - proxier.onEndpointsMapChange(&svcPortName, false) - } - - for svcPortName, eps := range newEndpointsMap { - logFormattedEndpoints("endpointsMapChange newEndpointsMap", logLevel, svcPortName, eps) - // redundantCleanup true means cleanup is called second time on the same svcPort - redundantCleanup := svcPortMap[svcPortName] - proxier.onEndpointsMapChange(&svcPortName, redundantCleanup) - } -} - -func (proxier *Proxier) onEndpointsMapChange(svcPortName *proxy.ServicePortName, redundantCleanup bool) { - - svc, exists := proxier.svcPortMap[*svcPortName] - - if exists { - svcInfo, ok := svc.(*serviceInfo) - - if !ok { - klog.ErrorS(nil, "Failed to cast serviceInfo", "servicePortName", svcPortName) - return - } - - if svcInfo.winProxyOptimization && redundantCleanup { - // This is a second cleanup call. - // Second cleanup on the same svcPort will be ignored if the - // winProxyOptimization is Enabled - return - } - - klog.V(3).InfoS("Endpoints are modified. 
Service is stale", "servicePortName", svcPortName) - svcInfo.cleanupAllPolicies(proxier.endpointsMap[*svcPortName], proxier.mapStaleLoadbalancers, true) - } else { - // If no service exists, just cleanup the remote endpoints - klog.V(3).InfoS("Endpoints are orphaned, cleaning up") - // Cleanup Endpoints references - epInfos, exists := proxier.endpointsMap[*svcPortName] - - if exists { - // Cleanup Endpoints references - for _, ep := range epInfos { - epInfo, ok := ep.(*endpointsInfo) - - if ok { - epInfo.Cleanup() - } - - } - } - } -} - -func (proxier *Proxier) serviceMapChange(previous, current proxy.ServicePortMap) { - for svcPortName := range current { - proxier.onServiceMapChange(&svcPortName) - } - - for svcPortName := range previous { - if _, ok := current[svcPortName]; ok { - continue - } - proxier.onServiceMapChange(&svcPortName) - } -} - -func (proxier *Proxier) onServiceMapChange(svcPortName *proxy.ServicePortName) { - - svc, exists := proxier.svcPortMap[*svcPortName] - - if exists { - svcInfo, ok := svc.(*serviceInfo) - - if !ok { - klog.ErrorS(nil, "Failed to cast serviceInfo", "servicePortName", svcPortName) - return - } - - klog.V(3).InfoS("Updating existing service port", "servicePortName", svcPortName, "clusterIP", svcInfo.ClusterIP(), "port", svcInfo.Port(), "protocol", svcInfo.Protocol()) - svcInfo.cleanupAllPolicies(proxier.endpointsMap[*svcPortName], proxier.mapStaleLoadbalancers, false) - } -} - -// returns a new proxy.Endpoint which abstracts a endpointsInfo -func (proxier *Proxier) newEndpointInfo(baseInfo *proxy.BaseEndpointInfo, _ *proxy.ServicePortName) proxy.Endpoint { - - portNumber, err := baseInfo.Port() - - if err != nil { - portNumber = 0 - } - - info := &endpointsInfo{ - ip: baseInfo.IP(), - port: uint16(portNumber), - isLocal: baseInfo.GetIsLocal(), - macAddress: conjureMac("02-11", netutils.ParseIPSloppy(baseInfo.IP())), - refCount: new(uint16), - hnsID: "", - hns: proxier.hns, - - ready: baseInfo.Ready, - serving: 
baseInfo.Serving, - terminating: baseInfo.Terminating, - } - - return info -} - -func newSourceVIP(hns HostNetworkService, network string, ip string, mac string, providerAddress string) (*endpointsInfo, error) { - hnsEndpoint := &endpointsInfo{ - ip: ip, - isLocal: true, - macAddress: mac, - providerAddress: providerAddress, - - ready: true, - serving: true, - terminating: false, - } - ep, err := hns.createEndpoint(hnsEndpoint, network) - return ep, err -} - -func (ep *endpointsInfo) DecrementRefCount() { - klog.V(3).InfoS("Decrementing Endpoint RefCount", "endpointsInfo", ep) - if !ep.GetIsLocal() && ep.refCount != nil && *ep.refCount > 0 { - *ep.refCount-- - } -} - -func (ep *endpointsInfo) Cleanup() { - klog.V(3).InfoS("Endpoint cleanup", "endpointsInfo", ep) - if !ep.GetIsLocal() && ep.refCount != nil { - *ep.refCount-- - - // Remove the remote hns endpoint, if no service is referring it - // Never delete a Local Endpoint. Local Endpoints are already created by other entities. - // Remove only remote endpoints created by this service - if *ep.refCount <= 0 && !ep.GetIsLocal() { - klog.V(4).InfoS("Removing endpoints, since no one is referencing it", "endpoint", ep) - err := ep.hns.deleteEndpoint(ep.hnsID) - if err == nil { - ep.hnsID = "" - } else { - klog.ErrorS(err, "Endpoint deletion failed", "ip", ep.IP()) - } - } - - ep.refCount = nil - } -} - -func (refCountMap endPointsReferenceCountMap) getRefCount(hnsID string) *uint16 { - refCount, exists := refCountMap[hnsID] - if !exists { - refCountMap[hnsID] = new(uint16) - refCount = refCountMap[hnsID] - } - return refCount -} - -// returns a new proxy.ServicePort which abstracts a serviceInfo -func (proxier *Proxier) newServiceInfo(port *v1.ServicePort, service *v1.Service, bsvcPortInfo *proxy.BaseServicePortInfo) proxy.ServicePort { - info := &serviceInfo{BaseServicePortInfo: bsvcPortInfo} - preserveDIP := service.Annotations["preserve-destination"] == "true" - // Annotation introduced to enable optimized 
loadbalancing - winProxyOptimization := !(strings.ToUpper(service.Annotations["winProxyOptimization"]) == "DISABLED") - localTrafficDSR := service.Spec.ExternalTrafficPolicy == v1.ServiceExternalTrafficPolicyLocal - var internalTrafficLocal bool - if service.Spec.InternalTrafficPolicy != nil { - internalTrafficLocal = *service.Spec.InternalTrafficPolicy == v1.ServiceInternalTrafficPolicyLocal - } - err := hcn.DSRSupported() - if err != nil { - preserveDIP = false - localTrafficDSR = false - } - // targetPort is zero if it is specified as a name in port.TargetPort. - // Its real value will be obtained later from the endpoints. - targetPort := 0 - if port.TargetPort.Type == intstr.Int { - targetPort = port.TargetPort.IntValue() - } - - info.preserveDIP = preserveDIP - info.targetPort = targetPort - info.hns = proxier.hns - info.localTrafficDSR = localTrafficDSR - info.internalTrafficLocal = internalTrafficLocal - info.winProxyOptimization = winProxyOptimization - klog.V(3).InfoS("Flags enabled for service", "service", service.Name, "localTrafficDSR", localTrafficDSR, "internalTrafficLocal", internalTrafficLocal, "preserveDIP", preserveDIP, "winProxyOptimization", winProxyOptimization) - - for _, eip := range service.Spec.ExternalIPs { - info.externalIPs = append(info.externalIPs, &externalIPInfo{ip: eip}) - } - - for _, ingress := range service.Status.LoadBalancer.Ingress { - if netutils.ParseIPSloppy(ingress.IP) != nil { - info.loadBalancerIngressIPs = append(info.loadBalancerIngressIPs, &loadBalancerIngressInfo{ip: ingress.IP}) - } - } - return info -} - -func (network hnsNetworkInfo) findRemoteSubnetProviderAddress(ip string) string { - var providerAddress string - for _, rs := range network.remoteSubnets { - _, ipNet, err := netutils.ParseCIDRSloppy(rs.destinationPrefix) - if err != nil { - klog.ErrorS(err, "Failed to parse CIDR") - continue - } - if ipNet.Contains(netutils.ParseIPSloppy(ip)) { - providerAddress = rs.providerAddress - } - if ip == rs.providerAddress { - 
providerAddress = rs.providerAddress - } - } - - return providerAddress -} - -type endPointsReferenceCountMap map[string]*uint16 - -// Proxier is an hns based proxy for connections between a localhost:lport -// and services that provide the actual backends. -type Proxier struct { - // TODO(imroc): implement node handler for winkernel proxier. - proxyconfig.NoopNodeHandler - - // endpointsChanges and serviceChanges contain all changes to endpoints and - // services that happened since policies were synced. For a single object, - // changes are accumulated, i.e. previous is state from before all of them, - // current is state after applying all of those. - endpointsChanges *proxy.EndpointChangeTracker - serviceChanges *proxy.ServiceChangeTracker - endPointsRefCount endPointsReferenceCountMap - mu sync.Mutex // protects the following fields - svcPortMap proxy.ServicePortMap - endpointsMap proxy.EndpointsMap - // endpointSlicesSynced and servicesSynced are set to true when corresponding - // objects are synced after startup. This is used to avoid updating hns policies - // with some partial data after kube-proxy restart. - endpointSlicesSynced bool - servicesSynced bool - isIPv6Mode bool - initialized int32 - syncRunner *async.BoundedFrequencyRunner // governs calls to syncProxyRules - // These are effectively const and do not need the mutex to be held. - masqueradeAll bool - masqueradeMark string - clusterCIDR string - hostname string - nodeIP net.IP - recorder events.EventRecorder - - serviceHealthServer healthcheck.ServiceHealthServer - healthzServer healthcheck.ProxierHealthUpdater - - // Since converting probabilities (floats) to strings is expensive - // and we are using only probabilities in the format of 1/n, we are - // precomputing some number of those and caching them for future reuse. 
- precomputedProbabilities []string - - hns HostNetworkService - network hnsNetworkInfo - sourceVip string - hostMac string - isDSR bool - supportedFeatures hcn.SupportedFeatures - healthzPort int - - forwardHealthCheckVip bool - rootHnsEndpointName string - mapStaleLoadbalancers map[string]bool // This maintains entries of stale load balancers which are pending delete in last iteration -} - -type localPort struct { - desc string - ip string - port int - protocol string -} - -func (lp *localPort) String() string { - return fmt.Sprintf("%q (%s:%d/%s)", lp.desc, lp.ip, lp.port, lp.protocol) -} - -func Enum(p v1.Protocol) uint16 { - if p == v1.ProtocolTCP { - return 6 - } - if p == v1.ProtocolUDP { - return 17 - } - if p == v1.ProtocolSCTP { - return 132 - } - return 0 -} - -type closeable interface { - Close() error -} - -// Proxier implements proxy.Provider -var _ proxy.Provider = &Proxier{} - -// NewProxier returns a new Proxier -func NewProxier( - syncPeriod time.Duration, - minSyncPeriod time.Duration, - masqueradeAll bool, - masqueradeBit int, - clusterCIDR string, - hostname string, - nodeIP net.IP, - recorder events.EventRecorder, - healthzServer healthcheck.ProxierHealthUpdater, - config config.KubeProxyWinkernelConfiguration, - healthzPort int, -) (*Proxier, error) { - masqueradeValue := 1 << uint(masqueradeBit) - masqueradeMark := fmt.Sprintf("%#08x/%#08x", masqueradeValue, masqueradeValue) - - if nodeIP == nil { - klog.InfoS("Invalid nodeIP, initializing kube-proxy with 127.0.0.1 as nodeIP") - nodeIP = netutils.ParseIPSloppy("127.0.0.1") - } - - if len(clusterCIDR) == 0 { - klog.InfoS("ClusterCIDR not specified, unable to distinguish between internal and external traffic") - } - - // windows listens to all node addresses - nodePortAddresses := utilproxy.NewNodePortAddresses(nil) - serviceHealthServer := healthcheck.NewServiceHealthServer(hostname, recorder, nodePortAddresses, healthzServer) - - hns, supportedFeatures := newHostNetworkService() - 
hnsNetworkName, err := getNetworkName(config.NetworkName) - if err != nil { - return nil, err - } - - klog.V(3).InfoS("Cleaning up old HNS policy lists") - deleteAllHnsLoadBalancerPolicy() - - // Get HNS network information - hnsNetworkInfo, err := getNetworkInfo(hns, hnsNetworkName) - if err != nil { - return nil, err - } - - // Network could have been detected before Remote Subnet Routes are applied or ManagementIP is updated - // Sleep and update the network to include new information - if isOverlay(hnsNetworkInfo) { - time.Sleep(10 * time.Second) - hnsNetworkInfo, err = hns.getNetworkByName(hnsNetworkName) - if err != nil { - return nil, fmt.Errorf("could not find HNS network %s", hnsNetworkName) - } - } - - klog.V(1).InfoS("Hns Network loaded", "hnsNetworkInfo", hnsNetworkInfo) - isDSR := config.EnableDSR - if isDSR && !utilfeature.DefaultFeatureGate.Enabled(kubefeatures.WinDSR) { - return nil, fmt.Errorf("WinDSR feature gate not enabled") - } - err = hcn.DSRSupported() - if isDSR && err != nil { - return nil, err - } - - var sourceVip string - var hostMac string - if isOverlay(hnsNetworkInfo) { - if !utilfeature.DefaultFeatureGate.Enabled(kubefeatures.WinOverlay) { - return nil, fmt.Errorf("WinOverlay feature gate not enabled") - } - err = hcn.RemoteSubnetSupported() - if err != nil { - return nil, err - } - sourceVip = config.SourceVip - if len(sourceVip) == 0 { - return nil, fmt.Errorf("source-vip flag not set") - } - - if nodeIP.IsUnspecified() { - // attempt to get the correct ip address - klog.V(2).InfoS("Node ip was unspecified, attempting to find node ip") - nodeIP, err = apiutil.ResolveBindAddress(nodeIP) - if err != nil { - klog.InfoS("Failed to find an ip. 
You may need to set the --bind-address flag", "err", err) - } - } - - interfaces, _ := net.Interfaces() //TODO create interfaces - for _, inter := range interfaces { - addresses, _ := inter.Addrs() - for _, addr := range addresses { - addrIP, _, _ := netutils.ParseCIDRSloppy(addr.String()) - if addrIP.String() == nodeIP.String() { - klog.V(2).InfoS("Record Host MAC address", "addr", inter.HardwareAddr) - hostMac = inter.HardwareAddr.String() - } - } - } - if len(hostMac) == 0 { - return nil, fmt.Errorf("could not find host mac address for %s", nodeIP) - } - } - - isIPv6 := netutils.IsIPv6(nodeIP) - proxier := &Proxier{ - endPointsRefCount: make(endPointsReferenceCountMap), - svcPortMap: make(proxy.ServicePortMap), - endpointsMap: make(proxy.EndpointsMap), - masqueradeAll: masqueradeAll, - masqueradeMark: masqueradeMark, - clusterCIDR: clusterCIDR, - hostname: hostname, - nodeIP: nodeIP, - recorder: recorder, - serviceHealthServer: serviceHealthServer, - healthzServer: healthzServer, - hns: hns, - network: *hnsNetworkInfo, - sourceVip: sourceVip, - hostMac: hostMac, - isDSR: isDSR, - supportedFeatures: supportedFeatures, - isIPv6Mode: isIPv6, - healthzPort: healthzPort, - rootHnsEndpointName: config.RootHnsEndpointName, - forwardHealthCheckVip: config.ForwardHealthCheckVip, - mapStaleLoadbalancers: make(map[string]bool), - } - - ipFamily := v1.IPv4Protocol - if isIPv6 { - ipFamily = v1.IPv6Protocol - } - serviceChanges := proxy.NewServiceChangeTracker(proxier.newServiceInfo, ipFamily, recorder, proxier.serviceMapChange) - endPointChangeTracker := proxy.NewEndpointChangeTracker(hostname, proxier.newEndpointInfo, ipFamily, recorder, proxier.endpointsMapChange) - proxier.endpointsChanges = endPointChangeTracker - proxier.serviceChanges = serviceChanges - - burstSyncs := 2 - klog.V(3).InfoS("Record sync param", "minSyncPeriod", minSyncPeriod, "syncPeriod", syncPeriod, "burstSyncs", burstSyncs) - proxier.syncRunner = async.NewBoundedFrequencyRunner("sync-runner", 
proxier.syncProxyRules, minSyncPeriod, syncPeriod, burstSyncs) - return proxier, nil -} - -func NewDualStackProxier( - syncPeriod time.Duration, - minSyncPeriod time.Duration, - masqueradeAll bool, - masqueradeBit int, - clusterCIDR string, - hostname string, - nodeIP [2]net.IP, - recorder events.EventRecorder, - healthzServer healthcheck.ProxierHealthUpdater, - config config.KubeProxyWinkernelConfiguration, - healthzPort int, -) (proxy.Provider, error) { - - // Create an ipv4 instance of the single-stack proxier - ipv4Proxier, err := NewProxier(syncPeriod, minSyncPeriod, masqueradeAll, masqueradeBit, - clusterCIDR, hostname, nodeIP[0], recorder, healthzServer, config, healthzPort) - - if err != nil { - return nil, fmt.Errorf("unable to create ipv4 proxier: %v, hostname: %s, clusterCIDR : %s, nodeIP:%v", err, hostname, clusterCIDR, nodeIP[0]) - } - - ipv6Proxier, err := NewProxier(syncPeriod, minSyncPeriod, masqueradeAll, masqueradeBit, - clusterCIDR, hostname, nodeIP[1], recorder, healthzServer, config, healthzPort) - if err != nil { - return nil, fmt.Errorf("unable to create ipv6 proxier: %v, hostname: %s, clusterCIDR : %s, nodeIP:%v", err, hostname, clusterCIDR, nodeIP[1]) - } - - // Return a meta-proxier that dispatches calls between the two - // single-stack proxier instances - return metaproxier.NewMetaProxier(ipv4Proxier, ipv6Proxier), nil -} - -// CleanupLeftovers removes all hns rules created by the Proxier -// It returns true if an error was encountered. Errors are logged. 
-func CleanupLeftovers() (encounteredError bool) { - // Delete all Hns Load Balancer Policies - deleteAllHnsLoadBalancerPolicy() - // TODO - // Delete all Hns Remote endpoints - - return encounteredError -} - -func (svcInfo *serviceInfo) cleanupAllPolicies(endpoints []proxy.Endpoint, mapStaleLoadbalancers map[string]bool, isEndpointChange bool) { - klog.V(3).InfoS("Service cleanup", "serviceInfo", svcInfo) - // if it's an endpoint change and the winProxyOptimization annotation is enabled, skip lb deletion and remoteEndpoint deletion - winProxyOptimization := isEndpointChange && svcInfo.winProxyOptimization - if winProxyOptimization { - klog.V(3).InfoS("Skipped loadbalancer deletion.", "hnsID", svcInfo.hnsID, "nodePorthnsID", svcInfo.nodePorthnsID, "winProxyOptimization", svcInfo.winProxyOptimization, "isEndpointChange", isEndpointChange) - } else { - // Skip the svcInfo.policyApplied check to remove all the policies - svcInfo.deleteLoadBalancerPolicy(mapStaleLoadbalancers) - } - // Cleanup Endpoints references - for _, ep := range endpoints { - epInfo, ok := ep.(*endpointsInfo) - if ok { - if winProxyOptimization { - epInfo.DecrementRefCount() - } else { - epInfo.Cleanup() - } - } - } - if svcInfo.remoteEndpoint != nil { - svcInfo.remoteEndpoint.Cleanup() - } - - svcInfo.policyApplied = false -} - -func (svcInfo *serviceInfo) deleteLoadBalancerPolicy(mapStaleLoadbalancer map[string]bool) { - // Remove the Hns Policy corresponding to this service - hns := svcInfo.hns - if err := hns.deleteLoadBalancer(svcInfo.hnsID); err != nil { - mapStaleLoadbalancer[svcInfo.hnsID] = true - klog.V(1).ErrorS(err, "Error deleting Hns loadbalancer policy resource.", "hnsID", svcInfo.hnsID, "ClusterIP", svcInfo.ClusterIP()) - } else { - // On successful delete, remove hnsId - svcInfo.hnsID = "" - } - - if err := hns.deleteLoadBalancer(svcInfo.nodePorthnsID); err != nil { - mapStaleLoadbalancer[svcInfo.nodePorthnsID] = true - klog.V(1).ErrorS(err, "Error deleting Hns NodePort policy 
resource.", "hnsID", svcInfo.nodePorthnsID, "NodePort", svcInfo.NodePort()) - } else { - // On successful delete, remove hnsId - svcInfo.nodePorthnsID = "" - } - - for _, externalIP := range svcInfo.externalIPs { - mapStaleLoadbalancer[externalIP.hnsID] = true - if err := hns.deleteLoadBalancer(externalIP.hnsID); err != nil { - klog.V(1).ErrorS(err, "Error deleting Hns ExternalIP policy resource.", "hnsID", externalIP.hnsID, "IP", externalIP.ip) - } else { - // On successful delete, remove hnsId - externalIP.hnsID = "" - } - } - for _, lbIngressIP := range svcInfo.loadBalancerIngressIPs { - klog.V(3).InfoS("Loadbalancer Hns LoadBalancer delete triggered for loadBalancer Ingress resources in cleanup", "lbIngressIP", lbIngressIP) - if err := hns.deleteLoadBalancer(lbIngressIP.hnsID); err != nil { - mapStaleLoadbalancer[lbIngressIP.hnsID] = true - klog.V(1).ErrorS(err, "Error deleting Hns IngressIP policy resource.", "hnsID", lbIngressIP.hnsID, "IP", lbIngressIP.ip) - } else { - // On successful delete, remove hnsId - lbIngressIP.hnsID = "" - } - - if lbIngressIP.healthCheckHnsID != "" { - if err := hns.deleteLoadBalancer(lbIngressIP.healthCheckHnsID); err != nil { - mapStaleLoadbalancer[lbIngressIP.healthCheckHnsID] = true - klog.V(1).ErrorS(err, "Error deleting Hns IngressIP HealthCheck policy resource.", "hnsID", lbIngressIP.healthCheckHnsID, "IP", lbIngressIP.ip) - } else { - // On successful delete, remove hnsId - lbIngressIP.healthCheckHnsID = "" - } - } - } -} - -func deleteAllHnsLoadBalancerPolicy() { - plists, err := hcsshim.HNSListPolicyListRequest() - if err != nil { - return - } - for _, plist := range plists { - klog.V(3).InfoS("Remove policy", "policies", plist) - _, err = plist.Delete() - if err != nil { - klog.ErrorS(err, "Failed to delete policy list") - } - } - -} - -func getHnsNetworkInfo(hnsNetworkName string) (*hnsNetworkInfo, error) { - hnsnetwork, err := hcsshim.GetHNSNetworkByName(hnsNetworkName) - if err != nil { - klog.ErrorS(err, "Failed to 
get HNS Network by name") - return nil, err - } - - return &hnsNetworkInfo{ - id: hnsnetwork.Id, - name: hnsnetwork.Name, - networkType: hnsnetwork.Type, - }, nil -} - -// Sync is called to synchronize the proxier state to hns as soon as possible. -func (proxier *Proxier) Sync() { - if proxier.healthzServer != nil { - proxier.healthzServer.QueuedUpdate() - } - metrics.SyncProxyRulesLastQueuedTimestamp.SetToCurrentTime() - proxier.syncRunner.Run() -} - -// SyncLoop runs periodic work. This is expected to run as a goroutine or as the main loop of the app. It does not return. -func (proxier *Proxier) SyncLoop() { - // Update healthz timestamp at beginning in case Sync() never succeeds. - if proxier.healthzServer != nil { - proxier.healthzServer.Updated() - } - // synthesize "last change queued" time as the informers are syncing. - metrics.SyncProxyRulesLastQueuedTimestamp.SetToCurrentTime() - proxier.syncRunner.Loop(wait.NeverStop) -} - -func (proxier *Proxier) setInitialized(value bool) { - var initialized int32 - if value { - initialized = 1 - } - atomic.StoreInt32(&proxier.initialized, initialized) -} - -func (proxier *Proxier) isInitialized() bool { - return atomic.LoadInt32(&proxier.initialized) > 0 -} - -// OnServiceAdd is called whenever creation of new service object -// is observed. -func (proxier *Proxier) OnServiceAdd(service *v1.Service) { - proxier.OnServiceUpdate(nil, service) -} - -// OnServiceUpdate is called whenever modification of an existing -// service object is observed. -func (proxier *Proxier) OnServiceUpdate(oldService, service *v1.Service) { - if proxier.serviceChanges.Update(oldService, service) && proxier.isInitialized() { - proxier.Sync() - } -} - -// OnServiceDelete is called whenever deletion of an existing service -// object is observed. 
-func (proxier *Proxier) OnServiceDelete(service *v1.Service) { - proxier.OnServiceUpdate(service, nil) -} - -// OnServiceSynced is called once all the initial event handlers were -// called and the state is fully propagated to local cache. -func (proxier *Proxier) OnServiceSynced() { - proxier.mu.Lock() - proxier.servicesSynced = true - proxier.setInitialized(proxier.endpointSlicesSynced) - proxier.mu.Unlock() - - // Sync unconditionally - this is called once per lifetime. - proxier.syncProxyRules() -} - -func shouldSkipService(svcName types.NamespacedName, service *v1.Service) bool { - // if ClusterIP is "None" or empty, skip proxying - if !helper.IsServiceIPSet(service) { - klog.V(3).InfoS("Skipping service due to clusterIP", "serviceName", svcName, "clusterIP", service.Spec.ClusterIP) - return true - } - // Even if ClusterIP is set, ServiceTypeExternalName services don't get proxied - if service.Spec.Type == v1.ServiceTypeExternalName { - klog.V(3).InfoS("Skipping service due to Type=ExternalName", "serviceName", svcName) - return true - } - return false -} - -// OnEndpointSliceAdd is called whenever creation of a new endpoint slice object -// is observed. -func (proxier *Proxier) OnEndpointSliceAdd(endpointSlice *discovery.EndpointSlice) { - if proxier.endpointsChanges.EndpointSliceUpdate(endpointSlice, false) && proxier.isInitialized() { - proxier.Sync() - } -} - -// OnEndpointSliceUpdate is called whenever modification of an existing endpoint -// slice object is observed. -func (proxier *Proxier) OnEndpointSliceUpdate(_, endpointSlice *discovery.EndpointSlice) { - if proxier.endpointsChanges.EndpointSliceUpdate(endpointSlice, false) && proxier.isInitialized() { - proxier.Sync() - } -} - -// OnEndpointSliceDelete is called whenever deletion of an existing endpoint slice -// object is observed. 
-func (proxier *Proxier) OnEndpointSliceDelete(endpointSlice *discovery.EndpointSlice) { - if proxier.endpointsChanges.EndpointSliceUpdate(endpointSlice, true) && proxier.isInitialized() { - proxier.Sync() - } -} - -// OnEndpointSlicesSynced is called once all the initial event handlers were -// called and the state is fully propagated to local cache. -func (proxier *Proxier) OnEndpointSlicesSynced() { - proxier.mu.Lock() - proxier.endpointSlicesSynced = true - proxier.setInitialized(proxier.servicesSynced) - proxier.mu.Unlock() - - // Sync unconditionally - this is called once per lifetime. - proxier.syncProxyRules() -} - -func (proxier *Proxier) cleanupAllPolicies() { - for svcName, svc := range proxier.svcPortMap { - svcInfo, ok := svc.(*serviceInfo) - if !ok { - klog.ErrorS(nil, "Failed to cast serviceInfo", "serviceName", svcName) - continue - } - svcInfo.cleanupAllPolicies(proxier.endpointsMap[svcName], proxier.mapStaleLoadbalancers, false) - } -} - -func isNetworkNotFoundError(err error) bool { - if err == nil { - return false - } - if _, ok := err.(hcn.NetworkNotFoundError); ok { - return true - } - if _, ok := err.(hcsshim.NetworkNotFoundError); ok { - return true - } - return false -} - -// isAllEndpointsTerminating function will return true if all the endpoints are terminating. 
-// If at least one is not terminating, then return false -func (proxier *Proxier) isAllEndpointsTerminating(svcName proxy.ServicePortName, isLocalTrafficDSR bool) bool { - for _, epInfo := range proxier.endpointsMap[svcName] { - ep, ok := epInfo.(*endpointsInfo) - if !ok { - continue - } - if isLocalTrafficDSR && !ep.GetIsLocal() { - // KEP-1669: Ignore remote endpoints when the ExternalTrafficPolicy is Local (DSR Mode) - continue - } - // If Readiness Probe fails and pod is not under delete, then - // the state of the endpoint will be - Ready:False, Serving:False, Terminating:False - if !ep.IsReady() && !ep.IsTerminating() { - // Ready:false, Terminating:False, ignore - continue - } - if !ep.IsTerminating() { - return false - } - } - return true -} - -// isAllEndpointsNonServing function will return true if all the endpoints are non-serving. -// If at least one is serving, then return false -func (proxier *Proxier) isAllEndpointsNonServing(svcName proxy.ServicePortName, isLocalTrafficDSR bool) bool { - for _, epInfo := range proxier.endpointsMap[svcName] { - ep, ok := epInfo.(*endpointsInfo) - if !ok { - continue - } - if isLocalTrafficDSR && !ep.GetIsLocal() { - continue - } - if ep.IsServing() { - return false - } - } - return true -} - -// updateQueriedEndpoints updates the queriedEndpoints map with newly created endpoint details -func updateQueriedEndpoints(newHnsEndpoint *endpointsInfo, queriedEndpoints map[string]*endpointsInfo) { - // store newly created endpoints in queriedEndpoints - queriedEndpoints[newHnsEndpoint.hnsID] = newHnsEndpoint - queriedEndpoints[newHnsEndpoint.ip] = newHnsEndpoint -} - -// This is where all of the hns save/restore calls happen. 
-// assumes proxier.mu is held -func (proxier *Proxier) syncProxyRules() { - proxier.mu.Lock() - defer proxier.mu.Unlock() - - // don't sync rules till we've received services and endpoints - if !proxier.isInitialized() { - klog.V(2).InfoS("Not syncing hns until Services and Endpoints have been received from master") - return - } - - // Keep track of how long syncs take. - start := time.Now() - defer func() { - metrics.SyncProxyRulesLatency.Observe(metrics.SinceInSeconds(start)) - klog.V(4).InfoS("Syncing proxy rules complete", "elapsed", time.Since(start)) - }() - - hnsNetworkName := proxier.network.name - hns := proxier.hns - - var gatewayHnsendpoint *endpointsInfo - if proxier.forwardHealthCheckVip { - gatewayHnsendpoint, _ = hns.getEndpointByName(proxier.rootHnsEndpointName) - } - - prevNetworkID := proxier.network.id - updatedNetwork, err := hns.getNetworkByName(hnsNetworkName) - if updatedNetwork == nil || updatedNetwork.id != prevNetworkID || isNetworkNotFoundError(err) { - klog.InfoS("The HNS network is not present or has changed since the last sync, please check the CNI deployment", "hnsNetworkName", hnsNetworkName) - proxier.cleanupAllPolicies() - if updatedNetwork != nil { - proxier.network = *updatedNetwork - } - return - } - - // We assume that if this was called, we really want to sync them, - // even if nothing changed in the meantime. In other words, callers are - // responsible for detecting no-op changes and not calling this function. 
- serviceUpdateResult := proxier.svcPortMap.Update(proxier.serviceChanges) - endpointUpdateResult := proxier.endpointsMap.Update(proxier.endpointsChanges) - - deletedUDPClusterIPs := serviceUpdateResult.DeletedUDPClusterIPs - // merge stale services gathered from updateEndpointsMap - for _, svcPortName := range endpointUpdateResult.NewlyActiveUDPServices { - if svcInfo, ok := proxier.svcPortMap[svcPortName]; ok && svcInfo != nil && svcInfo.Protocol() == v1.ProtocolUDP { - klog.V(2).InfoS("Newly-active UDP service may have stale conntrack entries", "servicePortName", svcPortName) - deletedUDPClusterIPs.Insert(svcInfo.ClusterIP().String()) - } - } - // Query HNS for endpoints and load balancers - queriedEndpoints, err := hns.getAllEndpointsByNetwork(hnsNetworkName) - if err != nil { - klog.ErrorS(err, "Querying HNS for endpoints failed") - return - } - if queriedEndpoints == nil { - klog.V(4).InfoS("No existing endpoints found in HNS") - queriedEndpoints = make(map[string]*(endpointsInfo)) - } - queriedLoadBalancers, err := hns.getAllLoadBalancers() - if queriedLoadBalancers == nil { - klog.V(4).InfoS("No existing load balancers found in HNS") - queriedLoadBalancers = make(map[loadBalancerIdentifier]*(loadBalancerInfo)) - } - if err != nil { - klog.ErrorS(err, "Querying HNS for load balancers failed") - return - } - if strings.EqualFold(proxier.network.networkType, NETWORK_TYPE_OVERLAY) { - if _, ok := queriedEndpoints[proxier.sourceVip]; !ok { - _, err = newSourceVIP(hns, hnsNetworkName, proxier.sourceVip, proxier.hostMac, proxier.nodeIP.String()) - if err != nil { - klog.ErrorS(err, "Source Vip endpoint creation failed") - return - } - } - } - - klog.V(3).InfoS("Syncing Policies") - - // Program HNS by adding corresponding policies for each service. 
- for svcName, svc := range proxier.svcPortMap { - svcInfo, ok := svc.(*serviceInfo) - if !ok { - klog.ErrorS(nil, "Failed to cast serviceInfo", "serviceName", svcName) - continue - } - - if svcInfo.policyApplied { - klog.V(4).InfoS("Policy already applied", "serviceInfo", svcInfo) - continue - } - - if strings.EqualFold(proxier.network.networkType, NETWORK_TYPE_OVERLAY) { - serviceVipEndpoint := queriedEndpoints[svcInfo.ClusterIP().String()] - if serviceVipEndpoint == nil { - klog.V(4).InfoS("No existing remote endpoint", "IP", svcInfo.ClusterIP()) - hnsEndpoint := &endpointsInfo{ - ip: svcInfo.ClusterIP().String(), - isLocal: false, - macAddress: proxier.hostMac, - providerAddress: proxier.nodeIP.String(), - } - - newHnsEndpoint, err := hns.createEndpoint(hnsEndpoint, hnsNetworkName) - if err != nil { - klog.ErrorS(err, "Remote endpoint creation failed for service VIP") - continue - } - - newHnsEndpoint.refCount = proxier.endPointsRefCount.getRefCount(newHnsEndpoint.hnsID) - *newHnsEndpoint.refCount++ - svcInfo.remoteEndpoint = newHnsEndpoint - updateQueriedEndpoints(newHnsEndpoint, queriedEndpoints) - } - } - - var hnsEndpoints []endpointsInfo - var hnsLocalEndpoints []endpointsInfo - klog.V(4).InfoS("Applying Policy", "serviceInfo", svcName) - // Create Remote endpoints for every endpoint, corresponding to the service - containsPublicIP := false - containsNodeIP := false - var allEndpointsTerminating, allEndpointsNonServing bool - someEndpointsServing := true - - if utilfeature.DefaultFeatureGate.Enabled(kubefeatures.ProxyTerminatingEndpoints) && len(svcInfo.loadBalancerIngressIPs) > 0 { - // The check should be done only if the feature gate is enabled - // and Spec.Type == Loadbalancer. 
- allEndpointsTerminating = proxier.isAllEndpointsTerminating(svcName, svcInfo.localTrafficDSR) - allEndpointsNonServing = proxier.isAllEndpointsNonServing(svcName, svcInfo.localTrafficDSR) - someEndpointsServing = !allEndpointsNonServing - klog.V(4).InfoS("Terminating status checked for all endpoints", "svcClusterIP", svcInfo.ClusterIP(), "allEndpointsTerminating", allEndpointsTerminating, "allEndpointsNonServing", allEndpointsNonServing, "localTrafficDSR", svcInfo.localTrafficDSR) - } else { - klog.V(4).InfoS("Skipped terminating status check for all endpoints", "svcClusterIP", svcInfo.ClusterIP(), "proxyEndpointsFeatureGateEnabled", utilfeature.DefaultFeatureGate.Enabled(kubefeatures.ProxyTerminatingEndpoints), "ingressLBCount", len(svcInfo.loadBalancerIngressIPs)) - } - - for _, epInfo := range proxier.endpointsMap[svcName] { - ep, ok := epInfo.(*endpointsInfo) - if !ok { - klog.ErrorS(nil, "Failed to cast endpointsInfo", "serviceName", svcName) - continue - } - - if svcInfo.internalTrafficLocal && svcInfo.localTrafficDSR && !ep.GetIsLocal() { - // No need to use or create a remote endpoint when both internal and external traffic policies are local - klog.V(3).InfoS("Skipping the endpoint. Both internalTraffic and external traffic policies are local", "EpIP", ep.ip, " EpPort", ep.port) - continue - } - - if someEndpointsServing { - - if !allEndpointsTerminating && !ep.IsReady() { - klog.V(3).InfoS("Skipping the endpoint for LB creation. Endpoint is either not ready or not all endpoints are terminating", "EpIP", ep.ip, " EpPort", ep.port, "allEndpointsTerminating", allEndpointsTerminating, "IsEpReady", ep.IsReady()) - continue - } - if !ep.IsServing() { - klog.V(3).InfoS("Skipping the endpoint for LB creation. 
Endpoint is not serving", "EpIP", ep.ip, " EpPort", ep.port, "IsEpServing", ep.IsServing()) - continue - } - - } - - var newHnsEndpoint *endpointsInfo - hnsNetworkName := proxier.network.name - var err error - - // targetPort is zero if it is specified as a name in port.TargetPort, so the real port must be obtained from the endpoints. - // Note that hcsshim.AddLoadBalancer() doesn't support endpoints with different ports, so only port from first endpoint is used. - // TODO(feiskyer): add support of different endpoint ports after hcsshim.AddLoadBalancer() adds that. - if svcInfo.targetPort == 0 { - svcInfo.targetPort = int(ep.port) - } - // There is a bug in Windows Server 2019 that can cause two endpoints to be created with the same IP address, so we need to check using endpoint ID first. - // TODO: Remove lookup by endpoint ID, and use the IP address only, so we don't need to maintain multiple keys for lookup. - if len(ep.hnsID) > 0 { - newHnsEndpoint = queriedEndpoints[ep.hnsID] - } - - if newHnsEndpoint == nil { - // First check if an endpoint resource exists for this IP, on the current host - // A Local endpoint could exist here already - // A remote endpoint was already created and proxy was restarted - newHnsEndpoint = queriedEndpoints[ep.IP()] - } - - if newHnsEndpoint == nil { - if ep.GetIsLocal() { - klog.ErrorS(err, "Local endpoint not found: on network", "ip", ep.IP(), "hnsNetworkName", hnsNetworkName) - continue - } - - if strings.EqualFold(proxier.network.networkType, NETWORK_TYPE_OVERLAY) { - klog.InfoS("Updating network to check for new remote subnet policies", "networkName", proxier.network.name) - networkName := proxier.network.name - updatedNetwork, err := hns.getNetworkByName(networkName) - if err != nil { - klog.ErrorS(err, "Unable to find HNS Network specified, please check network name and CNI deployment", "hnsNetworkName", hnsNetworkName) - proxier.cleanupAllPolicies() - return - } - proxier.network = *updatedNetwork - providerAddress := 
proxier.network.findRemoteSubnetProviderAddress(ep.IP()) - if len(providerAddress) == 0 { - klog.InfoS("Could not find provider address, assuming it is a public IP", "IP", ep.IP()) - providerAddress = proxier.nodeIP.String() - } - - hnsEndpoint := &endpointsInfo{ - ip: ep.ip, - isLocal: false, - macAddress: conjureMac("02-11", netutils.ParseIPSloppy(ep.ip)), - providerAddress: providerAddress, - } - - newHnsEndpoint, err = hns.createEndpoint(hnsEndpoint, hnsNetworkName) - if err != nil { - klog.ErrorS(err, "Remote endpoint creation failed", "endpointsInfo", hnsEndpoint) - continue - } - updateQueriedEndpoints(newHnsEndpoint, queriedEndpoints) - } else { - - hnsEndpoint := &endpointsInfo{ - ip: ep.ip, - isLocal: false, - macAddress: ep.macAddress, - } - - newHnsEndpoint, err = hns.createEndpoint(hnsEndpoint, hnsNetworkName) - if err != nil { - klog.ErrorS(err, "Remote endpoint creation failed") - continue - } - updateQueriedEndpoints(newHnsEndpoint, queriedEndpoints) - } - } - // For Overlay networks 'SourceVIP' on a Load Balancer Policy can either be chosen as - // a) Source VIP configured on kube-proxy (or) - // b) Node IP of the current node - // - // For L2Bridge network the Source VIP is always the NodeIP of the current node and the same - // would be configured on kube-proxy as SourceVIP - // - // The logic for choosing the SourceVIP in Overlay networks is based on the backend endpoints: - // a) Endpoints are any IP's outside the cluster ==> Choose NodeIP as the SourceVIP - // b) Endpoints are IP addresses of a remote node => Choose NodeIP as the SourceVIP - // c) Everything else (Local POD's, Remote POD's, Node IP of current node) ==> Choose the configured SourceVIP - if strings.EqualFold(proxier.network.networkType, NETWORK_TYPE_OVERLAY) && !ep.GetIsLocal() { - providerAddress := proxier.network.findRemoteSubnetProviderAddress(ep.IP()) - - isNodeIP := (ep.IP() == providerAddress) - isPublicIP := (len(providerAddress) == 0) - klog.InfoS("Endpoint on overlay 
network", "ip", ep.IP(), "hnsNetworkName", hnsNetworkName, "isNodeIP", isNodeIP, "isPublicIP", isPublicIP) - - containsNodeIP = containsNodeIP || isNodeIP - containsPublicIP = containsPublicIP || isPublicIP - } - - // Save the hnsId for reference - klog.V(1).InfoS("Hns endpoint resource", "endpointsInfo", newHnsEndpoint) - - hnsEndpoints = append(hnsEndpoints, *newHnsEndpoint) - if newHnsEndpoint.GetIsLocal() { - hnsLocalEndpoints = append(hnsLocalEndpoints, *newHnsEndpoint) - } else { - // We only share the refCounts for remote endpoints - ep.refCount = proxier.endPointsRefCount.getRefCount(newHnsEndpoint.hnsID) - *ep.refCount++ - } - - ep.hnsID = newHnsEndpoint.hnsID - - klog.V(3).InfoS("Endpoint resource found", "endpointsInfo", ep) - } - - klog.V(3).InfoS("Associated endpoints for service", "endpointsInfo", hnsEndpoints, "serviceName", svcName) - - if len(svcInfo.hnsID) > 0 { - // This should not happen - klog.InfoS("Load Balancer already exists -- Debug ", "hnsID", svcInfo.hnsID) - } - - // In ETP:Cluster, if all endpoints are under termination, - // it will have serving and terminating, else only ready and serving - if len(hnsEndpoints) == 0 { - if svcInfo.winProxyOptimization { - // Deleting loadbalancers when there are no endpoints to serve. 
- klog.V(3).InfoS("Cleanup existing ", "endpointsInfo", hnsEndpoints, "serviceName", svcName) - svcInfo.deleteLoadBalancerPolicy(proxier.mapStaleLoadbalancers) - } - klog.ErrorS(nil, "Endpoint information not available for service, not applying any policy", "serviceName", svcName) - continue - } - - klog.V(4).InfoS("Trying to apply Policies for service", "serviceInfo", svcInfo) - var hnsLoadBalancer *loadBalancerInfo - var sourceVip = proxier.sourceVip - if containsPublicIP || containsNodeIP { - sourceVip = proxier.nodeIP.String() - } - - sessionAffinityClientIP := svcInfo.SessionAffinityType() == v1.ServiceAffinityClientIP - if sessionAffinityClientIP && !proxier.supportedFeatures.SessionAffinity { - klog.InfoS("Session Affinity is not supported on this version of Windows") - } - - endpointsAvailableForLB := !allEndpointsTerminating && !allEndpointsNonServing - proxier.deleteExistingLoadBalancer(hns, svcInfo.winProxyOptimization, &svcInfo.hnsID, sourceVip, Enum(svcInfo.Protocol()), uint16(svcInfo.targetPort), uint16(svcInfo.Port()), hnsEndpoints, queriedLoadBalancers) - - // clusterIPEndpoints is the endpoint list used for creating ClusterIP loadbalancer. - clusterIPEndpoints := hnsEndpoints - if svcInfo.internalTrafficLocal { - // Take local endpoints for clusterip loadbalancer when internal traffic policy is local. 
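The SourceVIP selection above (the overlay rules in the comment block, plus the `containsPublicIP || containsNodeIP` fallback) reduces to a small decision. A minimal sketch of that decision, assuming a hypothetical helper `chooseSourceVIP` with plain-string IPs; this is not a kube-proxy function:

```go
package main

import "fmt"

// chooseSourceVIP distills the rule from the proxier code above: on overlay
// networks the configured source VIP is used, unless any backend endpoint
// resolved to a public IP or to a remote node's IP, in which case the node IP
// of the current node becomes the source VIP. Names and types are illustrative.
func chooseSourceVIP(containsPublicIP, containsNodeIP bool, configuredVIP, nodeIP string) string {
	if containsPublicIP || containsNodeIP {
		return nodeIP
	}
	return configuredVIP
}

func main() {
	// All endpoints are local/remote pods: keep the configured VIP.
	fmt.Println(chooseSourceVIP(false, false, "192.168.255.2", "10.0.0.4")) // 192.168.255.2
	// One endpoint resolved to a public IP: fall back to the node IP.
	fmt.Println(chooseSourceVIP(true, false, "192.168.255.2", "10.0.0.4")) // 10.0.0.4
}
```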
- clusterIPEndpoints = hnsLocalEndpoints - } - - if len(clusterIPEndpoints) > 0 { - - // If all endpoints are terminating, then no need to create Cluster IP LoadBalancer - // Cluster IP LoadBalancer creation - hnsLoadBalancer, err := hns.getLoadBalancer( - clusterIPEndpoints, - loadBalancerFlags{isDSR: proxier.isDSR, isIPv6: proxier.isIPv6Mode, sessionAffinity: sessionAffinityClientIP}, - sourceVip, - svcInfo.ClusterIP().String(), - Enum(svcInfo.Protocol()), - uint16(svcInfo.targetPort), - uint16(svcInfo.Port()), - queriedLoadBalancers, - ) - if err != nil { - klog.ErrorS(err, "Policy creation failed") - continue - } - - svcInfo.hnsID = hnsLoadBalancer.hnsID - klog.V(3).InfoS("Hns LoadBalancer resource created for cluster ip resources", "clusterIP", svcInfo.ClusterIP(), "hnsID", hnsLoadBalancer.hnsID) - - } else { - klog.V(3).InfoS("Skipped creating Hns LoadBalancer for cluster ip resources. Reason : all endpoints are terminating", "clusterIP", svcInfo.ClusterIP(), "nodeport", svcInfo.NodePort(), "allEndpointsTerminating", allEndpointsTerminating) - } - - // If nodePort is specified, user should be able to use nodeIP:nodePort to reach the backend endpoints - if svcInfo.NodePort() > 0 { - // If the preserve-destination service annotation is present, we will disable routing mesh for NodePort. - // This means that health services can use Node Port without falsely getting results from a different node. 
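The ClusterIP, NodePort, external IP, and ingress branches in this function differ mainly in which endpoint list feeds the load balancer: ClusterIP narrows to node-local endpoints when `internalTrafficLocal` is set, while NodePort and ingress narrow when `preserveDIP` or local-traffic DSR is in effect. A sketch of that shared selection pattern, with an illustrative helper name and plain strings instead of `endpointsInfo`:

```go
package main

import "fmt"

// selectEndpoints mirrors the narrowing pattern above: start from every
// available endpoint and restrict to node-local ones only when the relevant
// traffic policy (internalTrafficPolicy: Local, preserve-DIP, or DSR) demands
// it. If the narrowed set is empty, the caller skips creating the load balancer.
func selectEndpoints(all, local []string, wantLocalOnly bool) []string {
	if wantLocalOnly {
		return local
	}
	return all
}

func main() {
	all := []string{"10.244.1.2", "10.244.2.3"}
	local := []string{"10.244.1.2"}

	// internalTrafficPolicy: Local -> the cluster IP uses only local endpoints.
	fmt.Println(selectEndpoints(all, local, true)) // [10.244.1.2]
	// Default policy -> load-balance across every endpoint.
	fmt.Println(selectEndpoints(all, local, false)) // [10.244.1.2 10.244.2.3]
}
```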
- nodePortEndpoints := hnsEndpoints - if svcInfo.preserveDIP || svcInfo.localTrafficDSR { - nodePortEndpoints = hnsLocalEndpoints - } - - proxier.deleteExistingLoadBalancer(hns, svcInfo.winProxyOptimization, &svcInfo.nodePorthnsID, sourceVip, Enum(svcInfo.Protocol()), uint16(svcInfo.targetPort), uint16(svcInfo.Port()), nodePortEndpoints, queriedLoadBalancers) - - if len(nodePortEndpoints) > 0 && endpointsAvailableForLB { - // If all endpoints are in terminating stage, then no need to create Node Port LoadBalancer - hnsLoadBalancer, err := hns.getLoadBalancer( - nodePortEndpoints, - loadBalancerFlags{isVipExternalIP: true, isDSR: svcInfo.localTrafficDSR, localRoutedVIP: true, sessionAffinity: sessionAffinityClientIP, isIPv6: proxier.isIPv6Mode}, - sourceVip, - "", - Enum(svcInfo.Protocol()), - uint16(svcInfo.targetPort), - uint16(svcInfo.NodePort()), - queriedLoadBalancers, - ) - if err != nil { - klog.ErrorS(err, "Policy creation failed") - continue - } - - svcInfo.nodePorthnsID = hnsLoadBalancer.hnsID - klog.V(3).InfoS("Hns LoadBalancer resource created for nodePort resources", "clusterIP", svcInfo.ClusterIP(), "nodeport", svcInfo.NodePort(), "hnsID", hnsLoadBalancer.hnsID) - } else { - klog.V(3).InfoS("Skipped creating Hns LoadBalancer for nodePort resources", "clusterIP", svcInfo.ClusterIP(), "nodeport", svcInfo.NodePort(), "allEndpointsTerminating", allEndpointsTerminating) - } - } - - // Create a Load Balancer Policy for each external IP - for _, externalIP := range svcInfo.externalIPs { - // Disable routing mesh if ExternalTrafficPolicy is set to local - externalIPEndpoints := hnsEndpoints - if svcInfo.localTrafficDSR { - externalIPEndpoints = hnsLocalEndpoints - } - - proxier.deleteExistingLoadBalancer(hns, svcInfo.winProxyOptimization, &externalIP.hnsID, sourceVip, Enum(svcInfo.Protocol()), uint16(svcInfo.targetPort), uint16(svcInfo.Port()), externalIPEndpoints, queriedLoadBalancers) - - if len(externalIPEndpoints) > 0 && endpointsAvailableForLB { - // If 
all endpoints are in terminating stage, then no need to create External IP LoadBalancer - // Try loading existing policies, if already available - hnsLoadBalancer, err = hns.getLoadBalancer( - externalIPEndpoints, - loadBalancerFlags{isVipExternalIP: true, isDSR: svcInfo.localTrafficDSR, sessionAffinity: sessionAffinityClientIP, isIPv6: proxier.isIPv6Mode}, - sourceVip, - externalIP.ip, - Enum(svcInfo.Protocol()), - uint16(svcInfo.targetPort), - uint16(svcInfo.Port()), - queriedLoadBalancers, - ) - if err != nil { - klog.ErrorS(err, "Policy creation failed") - continue - } - externalIP.hnsID = hnsLoadBalancer.hnsID - klog.V(3).InfoS("Hns LoadBalancer resource created for externalIP resources", "externalIP", externalIP, "hnsID", hnsLoadBalancer.hnsID) - } else { - klog.V(3).InfoS("Skipped creating Hns LoadBalancer for externalIP resources", "externalIP", externalIP, "allEndpointsTerminating", allEndpointsTerminating) - } - } - // Create a Load Balancer Policy for each loadbalancer ingress - for _, lbIngressIP := range svcInfo.loadBalancerIngressIPs { - // Try loading existing policies, if already available - lbIngressEndpoints := hnsEndpoints - if svcInfo.preserveDIP || svcInfo.localTrafficDSR { - lbIngressEndpoints = hnsLocalEndpoints - } - - proxier.deleteExistingLoadBalancer(hns, svcInfo.winProxyOptimization, &lbIngressIP.hnsID, sourceVip, Enum(svcInfo.Protocol()), uint16(svcInfo.targetPort), uint16(svcInfo.Port()), lbIngressEndpoints, queriedLoadBalancers) - - if len(lbIngressEndpoints) > 0 { - hnsLoadBalancer, err := hns.getLoadBalancer( - lbIngressEndpoints, - loadBalancerFlags{isVipExternalIP: true, isDSR: svcInfo.preserveDIP || svcInfo.localTrafficDSR, useMUX: svcInfo.preserveDIP, preserveDIP: svcInfo.preserveDIP, sessionAffinity: sessionAffinityClientIP, isIPv6: proxier.isIPv6Mode}, - sourceVip, - lbIngressIP.ip, - Enum(svcInfo.Protocol()), - uint16(svcInfo.targetPort), - uint16(svcInfo.Port()), - queriedLoadBalancers, - ) - if err != nil { - klog.ErrorS(err,
"Policy creation failed") - continue - } - lbIngressIP.hnsID = hnsLoadBalancer.hnsID - klog.V(3).InfoS("Hns LoadBalancer resource created for loadBalancer Ingress resources", "lbIngressIP", lbIngressIP) - } else { - klog.V(3).InfoS("Skipped creating Hns LoadBalancer for loadBalancer Ingress resources", "lbIngressIP", lbIngressIP) - } - - if proxier.forwardHealthCheckVip && gatewayHnsendpoint != nil && endpointsAvailableForLB { - // Avoid creating health check loadbalancer if all the endpoints are terminating - nodeport := proxier.healthzPort - if svcInfo.HealthCheckNodePort() != 0 { - nodeport = svcInfo.HealthCheckNodePort() - } - - proxier.deleteExistingLoadBalancer(hns, svcInfo.winProxyOptimization, &lbIngressIP.healthCheckHnsID, sourceVip, Enum(svcInfo.Protocol()), uint16(svcInfo.targetPort), uint16(svcInfo.Port()), []endpointsInfo{*gatewayHnsendpoint}, queriedLoadBalancers) - - hnsHealthCheckLoadBalancer, err := hns.getLoadBalancer( - []endpointsInfo{*gatewayHnsendpoint}, - loadBalancerFlags{isDSR: false, useMUX: svcInfo.preserveDIP, preserveDIP: svcInfo.preserveDIP}, - sourceVip, - lbIngressIP.ip, - Enum(svcInfo.Protocol()), - uint16(nodeport), - uint16(nodeport), - queriedLoadBalancers, - ) - if err != nil { - klog.ErrorS(err, "Policy creation failed") - continue - } - lbIngressIP.healthCheckHnsID = hnsHealthCheckLoadBalancer.hnsID - klog.V(3).InfoS("Hns Health Check LoadBalancer resource created for loadBalancer Ingress resources", "ip", lbIngressIP) - } else { - klog.V(3).InfoS("Skipped creating Hns Health Check LoadBalancer for loadBalancer Ingress resources", "ip", lbIngressIP, "allEndpointsTerminating", allEndpointsTerminating) - } - } - svcInfo.policyApplied = true - klog.V(2).InfoS("Policy successfully applied for service", "serviceInfo", svcInfo) - } - - if proxier.healthzServer != nil { - proxier.healthzServer.Updated() - } - metrics.SyncProxyRulesLastTimestamp.SetToCurrentTime() - - // Update service healthchecks. 
The endpoints list might include services that are - // not "OnlyLocal", but the services list will not, and the serviceHealthServer - // will just drop those endpoints. - if err := proxier.serviceHealthServer.SyncServices(proxier.svcPortMap.HealthCheckNodePorts()); err != nil { - klog.ErrorS(err, "Error syncing healthcheck services") - } - if err := proxier.serviceHealthServer.SyncEndpoints(proxier.endpointsMap.LocalReadyEndpoints()); err != nil { - klog.ErrorS(err, "Error syncing healthcheck endpoints") - } - - // Finish housekeeping. - // TODO: these could be made more consistent. - for _, svcIP := range deletedUDPClusterIPs.UnsortedList() { - // TODO : Check if this is required to cleanup stale services here - klog.V(5).InfoS("Pending delete stale service IP connections", "IP", svcIP) - } - - // remove stale endpoint refcount entries - for hnsID, referenceCount := range proxier.endPointsRefCount { - if *referenceCount <= 0 { - klog.V(3).InfoS("Deleting unreferenced remote endpoint", "hnsID", hnsID) - proxier.hns.deleteEndpoint(hnsID) - delete(proxier.endPointsRefCount, hnsID) - } - } - // This will cleanup stale load balancers which are pending delete - // in last iteration - proxier.cleanupStaleLoadbalancers() -} - -// deleteExistingLoadBalancer checks whether loadbalancer delete is needed or not. -// If it is needed, the function will delete the existing loadbalancer and return true, else false. 
-func (proxier *Proxier) deleteExistingLoadBalancer(hns HostNetworkService, winProxyOptimization bool, lbHnsID *string, sourceVip string, protocol, intPort, extPort uint16, endpoints []endpointsInfo, queriedLoadBalancers map[loadBalancerIdentifier]*loadBalancerInfo) bool { - - if !winProxyOptimization || *lbHnsID == "" { - // Loadbalancer delete not needed - return false - } - - lbID, lbIdErr := findLoadBalancerID( - endpoints, - sourceVip, - protocol, - intPort, - extPort, - ) - - if lbIdErr != nil { - return proxier.deleteLoadBalancer(hns, lbHnsID) - } - - if _, ok := queriedLoadBalancers[lbID]; ok { - // The existing loadbalancer in the system is the same as the one we would delete and recreate, so we skip deleting. - return false - } - - return proxier.deleteLoadBalancer(hns, lbHnsID) -} - -func (proxier *Proxier) deleteLoadBalancer(hns HostNetworkService, lbHnsID *string) bool { - klog.V(3).InfoS("Hns LoadBalancer delete triggered for loadBalancer resources", "lbHnsID", *lbHnsID) - if err := hns.deleteLoadBalancer(*lbHnsID); err != nil { - // This will be cleaned up by the cleanupStaleLoadbalancers function. - proxier.mapStaleLoadbalancers[*lbHnsID] = true - } - *lbHnsID = "" - return true -} diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/registry/core/service/allocator/interfaces.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/registry/core/service/allocator/interfaces.go index 41f966858b7b..1328425ffcfb 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/registry/core/service/allocator/interfaces.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/registry/core/service/allocator/interfaces.go @@ -23,11 +23,7 @@ type Interface interface { AllocateNext() (int, bool, error) Release(int) error ForEach(func(int)) - - // For testing Has(int) bool - - // For testing Free() int // Destroy shuts down all internal structures.
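The `deleteExistingLoadBalancer` helper above is essentially a three-way decision: do nothing when the Windows proxy optimization is off or no HNS ID exists, keep the load balancer when the one already in the system matches what would be recreated, and delete otherwise. A simplified model of that decision, using an assumed `lbKey` struct in place of `loadBalancerIdentifier` (the real key also hashes the endpoint list):

```go
package main

import "fmt"

// lbKey is a stand-in for loadBalancerIdentifier: the tuple that identifies a
// load balancer policy. The real implementation also folds the endpoint set in.
type lbKey struct {
	sourceVip string
	protocol  uint16
	intPort   uint16
	extPort   uint16
}

// shouldDeleteLB condenses the decision in deleteExistingLoadBalancer:
// delete only when the optimization is enabled, an HNS ID exists, and the
// load balancer we would recreate differs from the one already present.
func shouldDeleteLB(winProxyOptimization bool, lbHnsID string, want lbKey, existing map[lbKey]bool) bool {
	if !winProxyOptimization || lbHnsID == "" {
		return false // nothing to delete, or optimization disabled
	}
	if existing[want] {
		return false // identical load balancer already present; keep it
	}
	return true
}

func main() {
	existing := map[lbKey]bool{{"10.0.0.4", 6, 8080, 80}: true}
	fmt.Println(shouldDeleteLB(true, "lb-1", lbKey{"10.0.0.4", 6, 8080, 80}, existing)) // unchanged -> false
	fmt.Println(shouldDeleteLB(true, "lb-1", lbKey{"10.0.0.4", 6, 9090, 80}, existing)) // changed -> true
	fmt.Println(shouldDeleteLB(false, "lb-1", lbKey{}, existing))                       // optimization off -> false
}
```

Skipping the delete when the desired and queried load balancers match is what makes the optimization cheap: unchanged services survive a sync without an HNS round-trip.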
diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/apis/config/scheme/scheme.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/apis/config/scheme/scheme.go index 375b49b569ac..9121eff657c1 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/apis/config/scheme/scheme.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/apis/config/scheme/scheme.go @@ -22,7 +22,6 @@ import ( utilruntime "k8s.io/apimachinery/pkg/util/runtime" config "k8s.io/kubernetes/pkg/scheduler/apis/config" configv1 "k8s.io/kubernetes/pkg/scheduler/apis/config/v1" - configv1beta2 "k8s.io/kubernetes/pkg/scheduler/apis/config/v1beta2" configv1beta3 "k8s.io/kubernetes/pkg/scheduler/apis/config/v1beta3" ) @@ -41,12 +40,10 @@ func init() { // AddToScheme builds the kubescheduler scheme using all known versions of the kubescheduler api. func AddToScheme(scheme *runtime.Scheme) { utilruntime.Must(config.AddToScheme(scheme)) - utilruntime.Must(configv1beta2.AddToScheme(scheme)) utilruntime.Must(configv1beta3.AddToScheme(scheme)) utilruntime.Must(configv1.AddToScheme(scheme)) utilruntime.Must(scheme.SetVersionPriority( configv1.SchemeGroupVersion, configv1beta3.SchemeGroupVersion, - configv1beta2.SchemeGroupVersion, )) } diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/apis/config/types.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/apis/config/types.go index 8db4e35c987e..a87a8d1d6b46 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/apis/config/types.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/apis/config/types.go @@ -96,6 +96,12 @@ type KubeSchedulerConfiguration struct { // Extenders are the list of scheduler extenders, each holding the values of how to communicate // with the extender. These extenders are shared by all scheduler profiles. Extenders []Extender + + // DelayCacheUntilActive specifies when to start caching. 
If this is true and leader election is enabled, + // the scheduler will wait to fill informer caches until it is the leader. Doing so will have slower + // failover with the benefit of lower memory overhead while waiting to become leader. + // Defaults to false. + DelayCacheUntilActive bool } // KubeSchedulerProfile is a scheduling profile. @@ -247,13 +253,13 @@ func (p *Plugins) Names() []string { p.Permit, p.QueueSort, } - n := sets.NewString() + n := sets.New[string]() for _, e := range extensions { for _, pg := range e.Enabled { n.Insert(pg.Name) } } - return n.List() + return sets.List(n) } // Extender holds the parameters used to communicate with the extender. If a verb is unspecified/empty, diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/apis/config/v1/default_plugins.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/apis/config/v1/default_plugins.go index bdb8f78bf502..a7d5a602619d 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/apis/config/v1/default_plugins.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/apis/config/v1/default_plugins.go @@ -110,7 +110,7 @@ type pluginIndex struct { } func mergePluginSet(logger klog.Logger, defaultPluginSet, customPluginSet v1.PluginSet) v1.PluginSet { - disabledPlugins := sets.NewString() + disabledPlugins := sets.New[string]() enabledCustomPlugins := make(map[string]pluginIndex) // replacedPluginIndex is a set of index of plugins, which have replaced the default plugins. 
replacedPluginIndex := sets.NewInt() diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/apis/config/v1/defaults.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/apis/config/v1/defaults.go index e6ec0ab613b1..6746f23a9620 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/apis/config/v1/defaults.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/apis/config/v1/defaults.go @@ -57,13 +57,13 @@ func pluginsNames(p *configv1.Plugins) []string { p.PreEnqueue, p.QueueSort, } - n := sets.NewString() + n := sets.New[string]() for _, e := range extensions { for _, pg := range e.Enabled { n.Insert(pg.Name) } } - return n.List() + return sets.List(n) } func setDefaults_KubeSchedulerProfile(logger klog.Logger, prof *configv1.KubeSchedulerProfile) { @@ -71,7 +71,7 @@ func setDefaults_KubeSchedulerProfile(logger klog.Logger, prof *configv1.KubeSch prof.Plugins = mergePlugins(logger, getDefaultPlugins(), prof.Plugins) // Set default plugin configs. 
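The recurring `sets.NewString()` → `sets.New[string]()` and `n.List()` → `sets.List(n)` changes in these hunks come from the generics-based set API in `k8s.io/apimachinery/pkg/util/sets`. A minimal stand-in (not the apimachinery implementation) showing the semantics the vendored code relies on, namely that insertion deduplicates and `List` returns sorted members:

```go
package main

import (
	"fmt"
	"sort"
)

// Set is a tiny sketch of sets.Set[T]; the real type constrains List with
// cmp.Ordered, while this sketch sorts strings only for brevity.
type Set[T comparable] map[T]struct{}

// New builds a set from the given items, dropping duplicates.
func New[T comparable](items ...T) Set[T] {
	s := Set[T]{}
	s.Insert(items...)
	return s
}

// Insert adds items; re-inserting an existing member is a no-op.
func (s Set[T]) Insert(items ...T) {
	for _, item := range items {
		s[item] = struct{}{}
	}
}

// Has reports membership, like the old sets.String.Has.
func (s Set[T]) Has(item T) bool { _, ok := s[item]; return ok }

// List returns members in sorted order, mirroring sets.List(n).
func List(s Set[string]) []string {
	out := make([]string, 0, len(s))
	for item := range s {
		out = append(out, item)
	}
	sort.Strings(out)
	return out
}

func main() {
	n := New[string]()
	n.Insert("NodePorts", "NodeResourcesFit", "NodeAffinity")
	n.Insert("NodePorts") // duplicates collapse
	fmt.Println(List(n))  // [NodeAffinity NodePorts NodeResourcesFit]
}
```

Because the generic API preserves the old sorted-`List` behavior, the migration in `Names()` and `pluginsNames()` is mechanical and does not change output ordering.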
scheme := GetPluginArgConversionScheme() - existingConfigs := sets.NewString() + existingConfigs := sets.New[string]() for j := range prof.PluginConfig { existingConfigs.Insert(prof.PluginConfig[j].Name) args := prof.PluginConfig[j].Args.Object diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/apis/config/v1/zz_generated.conversion.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/apis/config/v1/zz_generated.conversion.go index dd8d9b231094..ca9be957b426 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/apis/config/v1/zz_generated.conversion.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/apis/config/v1/zz_generated.conversion.go @@ -429,6 +429,7 @@ func autoConvert_v1_KubeSchedulerConfiguration_To_config_KubeSchedulerConfigurat out.Profiles = nil } out.Extenders = *(*[]config.Extender)(unsafe.Pointer(&in.Extenders)) + out.DelayCacheUntilActive = in.DelayCacheUntilActive return nil } @@ -466,6 +467,7 @@ func autoConvert_config_KubeSchedulerConfiguration_To_v1_KubeSchedulerConfigurat out.Profiles = nil } out.Extenders = *(*[]v1.Extender)(unsafe.Pointer(&in.Extenders)) + out.DelayCacheUntilActive = in.DelayCacheUntilActive return nil } diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/apis/config/v1beta2/conversion.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/apis/config/v1beta2/conversion.go deleted file mode 100644 index c0d89d75ee91..000000000000 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/apis/config/v1beta2/conversion.go +++ /dev/null @@ -1,117 +0,0 @@ -/* -Copyright 2021 The Kubernetes Authors. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. 
-You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -package v1beta2 - -import ( - "fmt" - "sync" - - "k8s.io/apimachinery/pkg/conversion" - "k8s.io/apimachinery/pkg/runtime" - utilruntime "k8s.io/apimachinery/pkg/util/runtime" - "k8s.io/kube-scheduler/config/v1beta2" - "k8s.io/kubernetes/pkg/scheduler/apis/config" -) - -var ( - // pluginArgConversionScheme is a scheme with internal and v1beta2 registered, - // used for defaulting/converting typed PluginConfig Args. - // Access via getPluginArgConversionScheme() - pluginArgConversionScheme *runtime.Scheme - initPluginArgConversionScheme sync.Once -) - -func GetPluginArgConversionScheme() *runtime.Scheme { - initPluginArgConversionScheme.Do(func() { - // set up the scheme used for plugin arg conversion - pluginArgConversionScheme = runtime.NewScheme() - utilruntime.Must(AddToScheme(pluginArgConversionScheme)) - utilruntime.Must(config.AddToScheme(pluginArgConversionScheme)) - }) - return pluginArgConversionScheme -} - -func Convert_v1beta2_KubeSchedulerConfiguration_To_config_KubeSchedulerConfiguration(in *v1beta2.KubeSchedulerConfiguration, out *config.KubeSchedulerConfiguration, s conversion.Scope) error { - if err := autoConvert_v1beta2_KubeSchedulerConfiguration_To_config_KubeSchedulerConfiguration(in, out, s); err != nil { - return err - } - return convertToInternalPluginConfigArgs(out) -} - -// convertToInternalPluginConfigArgs converts PluginConfig#Args into internal -// types using a scheme, after applying defaults. 
-func convertToInternalPluginConfigArgs(out *config.KubeSchedulerConfiguration) error { - scheme := GetPluginArgConversionScheme() - for i := range out.Profiles { - prof := &out.Profiles[i] - for j := range prof.PluginConfig { - args := prof.PluginConfig[j].Args - if args == nil { - continue - } - if _, isUnknown := args.(*runtime.Unknown); isUnknown { - continue - } - internalArgs, err := scheme.ConvertToVersion(args, config.SchemeGroupVersion) - if err != nil { - return fmt.Errorf("converting .Profiles[%d].PluginConfig[%d].Args into internal type: %w", i, j, err) - } - prof.PluginConfig[j].Args = internalArgs - } - } - return nil -} - -func Convert_config_KubeSchedulerConfiguration_To_v1beta2_KubeSchedulerConfiguration(in *config.KubeSchedulerConfiguration, out *v1beta2.KubeSchedulerConfiguration, s conversion.Scope) error { - if err := autoConvert_config_KubeSchedulerConfiguration_To_v1beta2_KubeSchedulerConfiguration(in, out, s); err != nil { - return err - } - return convertToExternalPluginConfigArgs(out) -} - -// convertToExternalPluginConfigArgs converts PluginConfig#Args into -// external (versioned) types using a scheme. -func convertToExternalPluginConfigArgs(out *v1beta2.KubeSchedulerConfiguration) error { - scheme := GetPluginArgConversionScheme() - for i := range out.Profiles { - for j := range out.Profiles[i].PluginConfig { - args := out.Profiles[i].PluginConfig[j].Args - if args.Object == nil { - continue - } - if _, isUnknown := args.Object.(*runtime.Unknown); isUnknown { - continue - } - externalArgs, err := scheme.ConvertToVersion(args.Object, SchemeGroupVersion) - if err != nil { - return err - } - out.Profiles[i].PluginConfig[j].Args.Object = externalArgs - } - } - return nil -} - -// Convert_config_KubeSchedulerProfile_To_v1beta2_KubeSchedulerProfile uses auto conversion by -// ignoring per-profile PercentageOfNodesToScore.
-func Convert_config_KubeSchedulerProfile_To_v1beta2_KubeSchedulerProfile(in *config.KubeSchedulerProfile, out *v1beta2.KubeSchedulerProfile, s conversion.Scope) error { - return autoConvert_config_KubeSchedulerProfile_To_v1beta2_KubeSchedulerProfile(in, out, s) -} - -func Convert_config_Plugins_To_v1beta2_Plugins(in *config.Plugins, out *v1beta2.Plugins, s conversion.Scope) error { - return autoConvert_config_Plugins_To_v1beta2_Plugins(in, out, s) -} diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/apis/config/v1beta2/default_plugins.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/apis/config/v1beta2/default_plugins.go deleted file mode 100644 index 6bd76b704aeb..000000000000 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/apis/config/v1beta2/default_plugins.go +++ /dev/null @@ -1,189 +0,0 @@ -/* -Copyright 2021 The Kubernetes Authors. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -package v1beta2 - -import ( - "k8s.io/apimachinery/pkg/util/sets" - utilfeature "k8s.io/apiserver/pkg/util/feature" - "k8s.io/klog/v2" - "k8s.io/kube-scheduler/config/v1beta2" - "k8s.io/kubernetes/pkg/features" - "k8s.io/kubernetes/pkg/scheduler/framework/plugins/names" - "k8s.io/utils/pointer" -) - -// getDefaultPlugins returns the default set of plugins. 
-func getDefaultPlugins() *v1beta2.Plugins { - plugins := &v1beta2.Plugins{ - QueueSort: v1beta2.PluginSet{ - Enabled: []v1beta2.Plugin{ - {Name: names.PrioritySort}, - }, - }, - PreFilter: v1beta2.PluginSet{ - Enabled: []v1beta2.Plugin{ - {Name: names.NodeResourcesFit}, - {Name: names.NodePorts}, - {Name: names.VolumeRestrictions}, - {Name: names.PodTopologySpread}, - {Name: names.InterPodAffinity}, - {Name: names.VolumeBinding}, - {Name: names.VolumeZone}, - {Name: names.NodeAffinity}, - }, - }, - Filter: v1beta2.PluginSet{ - Enabled: []v1beta2.Plugin{ - {Name: names.NodeUnschedulable}, - {Name: names.NodeName}, - {Name: names.TaintToleration}, - {Name: names.NodeAffinity}, - {Name: names.NodePorts}, - {Name: names.NodeResourcesFit}, - {Name: names.VolumeRestrictions}, - {Name: names.EBSLimits}, - {Name: names.GCEPDLimits}, - {Name: names.NodeVolumeLimits}, - {Name: names.AzureDiskLimits}, - {Name: names.VolumeBinding}, - {Name: names.VolumeZone}, - {Name: names.PodTopologySpread}, - {Name: names.InterPodAffinity}, - }, - }, - PostFilter: v1beta2.PluginSet{ - Enabled: []v1beta2.Plugin{ - {Name: names.DefaultPreemption}, - }, - }, - PreScore: v1beta2.PluginSet{ - Enabled: []v1beta2.Plugin{ - {Name: names.InterPodAffinity}, - {Name: names.PodTopologySpread}, - {Name: names.TaintToleration}, - {Name: names.NodeAffinity}, - {Name: names.NodeResourcesFit}, - {Name: names.NodeResourcesBalancedAllocation}, - }, - }, - Score: v1beta2.PluginSet{ - Enabled: []v1beta2.Plugin{ - {Name: names.NodeResourcesBalancedAllocation, Weight: pointer.Int32(1)}, - {Name: names.ImageLocality, Weight: pointer.Int32(1)}, - {Name: names.InterPodAffinity, Weight: pointer.Int32(1)}, - {Name: names.NodeResourcesFit, Weight: pointer.Int32(1)}, - {Name: names.NodeAffinity, Weight: pointer.Int32(1)}, - // Weight is doubled because: - // - This is a score coming from user preference. - // - It makes its signal comparable to NodeResourcesFit.LeastAllocated. 
- {Name: names.PodTopologySpread, Weight: pointer.Int32(2)}, - {Name: names.TaintToleration, Weight: pointer.Int32(1)}, - }, - }, - Reserve: v1beta2.PluginSet{ - Enabled: []v1beta2.Plugin{ - {Name: names.VolumeBinding}, - }, - }, - PreBind: v1beta2.PluginSet{ - Enabled: []v1beta2.Plugin{ - {Name: names.VolumeBinding}, - }, - }, - Bind: v1beta2.PluginSet{ - Enabled: []v1beta2.Plugin{ - {Name: names.DefaultBinder}, - }, - }, - } - applyFeatureGates(plugins) - - return plugins -} - -func applyFeatureGates(config *v1beta2.Plugins) { - if utilfeature.DefaultFeatureGate.Enabled(features.VolumeCapacityPriority) { - config.Score.Enabled = append(config.Score.Enabled, v1beta2.Plugin{Name: names.VolumeBinding, Weight: pointer.Int32(1)}) - } - if utilfeature.DefaultFeatureGate.Enabled(features.PodSchedulingReadiness) { - config.PreEnqueue.Enabled = append(config.PreEnqueue.Enabled, v1beta2.Plugin{Name: names.SchedulingGates}) - } -} - -// mergePlugins merges the custom set into the given default one, handling disabled sets. 
-func mergePlugins(defaultPlugins, customPlugins *v1beta2.Plugins) *v1beta2.Plugins { - if customPlugins == nil { - return defaultPlugins - } - - defaultPlugins.QueueSort = mergePluginSet(defaultPlugins.QueueSort, customPlugins.QueueSort) - defaultPlugins.PreFilter = mergePluginSet(defaultPlugins.PreFilter, customPlugins.PreFilter) - defaultPlugins.Filter = mergePluginSet(defaultPlugins.Filter, customPlugins.Filter) - defaultPlugins.PostFilter = mergePluginSet(defaultPlugins.PostFilter, customPlugins.PostFilter) - defaultPlugins.PreScore = mergePluginSet(defaultPlugins.PreScore, customPlugins.PreScore) - defaultPlugins.Score = mergePluginSet(defaultPlugins.Score, customPlugins.Score) - defaultPlugins.Reserve = mergePluginSet(defaultPlugins.Reserve, customPlugins.Reserve) - defaultPlugins.Permit = mergePluginSet(defaultPlugins.Permit, customPlugins.Permit) - defaultPlugins.PreBind = mergePluginSet(defaultPlugins.PreBind, customPlugins.PreBind) - defaultPlugins.Bind = mergePluginSet(defaultPlugins.Bind, customPlugins.Bind) - defaultPlugins.PostBind = mergePluginSet(defaultPlugins.PostBind, customPlugins.PostBind) - return defaultPlugins -} - -type pluginIndex struct { - index int - plugin v1beta2.Plugin -} - -func mergePluginSet(defaultPluginSet, customPluginSet v1beta2.PluginSet) v1beta2.PluginSet { - disabledPlugins := sets.NewString() - enabledCustomPlugins := make(map[string]pluginIndex) - // replacedPluginIndex is a set of index of plugins, which have replaced the default plugins. 
- replacedPluginIndex := sets.NewInt() - for _, disabledPlugin := range customPluginSet.Disabled { - disabledPlugins.Insert(disabledPlugin.Name) - } - for index, enabledPlugin := range customPluginSet.Enabled { - enabledCustomPlugins[enabledPlugin.Name] = pluginIndex{index, enabledPlugin} - } - var enabledPlugins []v1beta2.Plugin - if !disabledPlugins.Has("*") { - for _, defaultEnabledPlugin := range defaultPluginSet.Enabled { - if disabledPlugins.Has(defaultEnabledPlugin.Name) { - continue - } - // The default plugin is explicitly re-configured, update the default plugin accordingly. - if customPlugin, ok := enabledCustomPlugins[defaultEnabledPlugin.Name]; ok { - klog.InfoS("Default plugin is explicitly re-configured; overriding", "plugin", defaultEnabledPlugin.Name) - // Update the default plugin in place to preserve order. - defaultEnabledPlugin = customPlugin.plugin - replacedPluginIndex.Insert(customPlugin.index) - } - enabledPlugins = append(enabledPlugins, defaultEnabledPlugin) - } - } - - // Append all the custom plugins which haven't replaced any default plugins. - // Note: duplicated custom plugins will still be appended here. - // If so, the instantiation of scheduler framework will detect it and abort. - for index, plugin := range customPluginSet.Enabled { - if !replacedPluginIndex.Has(index) { - enabledPlugins = append(enabledPlugins, plugin) - } - } - return v1beta2.PluginSet{Enabled: enabledPlugins} -} diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/apis/config/v1beta2/defaults.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/apis/config/v1beta2/defaults.go deleted file mode 100644 index 1bf12080673d..000000000000 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/apis/config/v1beta2/defaults.go +++ /dev/null @@ -1,241 +0,0 @@ -/* -Copyright 2021 The Kubernetes Authors. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. 
-You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -package v1beta2 - -import ( - v1 "k8s.io/api/core/v1" - "k8s.io/apimachinery/pkg/runtime" - "k8s.io/apimachinery/pkg/util/sets" - "k8s.io/apiserver/pkg/util/feature" - componentbaseconfigv1alpha1 "k8s.io/component-base/config/v1alpha1" - "k8s.io/kube-scheduler/config/v1beta2" - "k8s.io/kubernetes/pkg/features" - "k8s.io/kubernetes/pkg/scheduler/apis/config" - "k8s.io/utils/pointer" -) - -var defaultResourceSpec = []v1beta2.ResourceSpec{ - {Name: string(v1.ResourceCPU), Weight: 1}, - {Name: string(v1.ResourceMemory), Weight: 1}, -} - -func addDefaultingFuncs(scheme *runtime.Scheme) error { - return RegisterDefaults(scheme) -} - -func pluginsNames(p *v1beta2.Plugins) []string { - if p == nil { - return nil - } - extensions := []v1beta2.PluginSet{ - p.PreFilter, - p.Filter, - p.PostFilter, - p.Reserve, - p.PreScore, - p.Score, - p.PreBind, - p.Bind, - p.PostBind, - p.Permit, - p.QueueSort, - } - n := sets.NewString() - for _, e := range extensions { - for _, pg := range e.Enabled { - n.Insert(pg.Name) - } - } - return n.List() -} - -func setDefaults_KubeSchedulerProfile(prof *v1beta2.KubeSchedulerProfile) { - // Set default plugins. - prof.Plugins = mergePlugins(getDefaultPlugins(), prof.Plugins) - - // Set default plugin configs. 
- scheme := GetPluginArgConversionScheme() - existingConfigs := sets.NewString() - for j := range prof.PluginConfig { - existingConfigs.Insert(prof.PluginConfig[j].Name) - args := prof.PluginConfig[j].Args.Object - if _, isUnknown := args.(*runtime.Unknown); isUnknown { - continue - } - scheme.Default(args) - } - - // Append default configs for plugins that didn't have one explicitly set. - for _, name := range pluginsNames(prof.Plugins) { - if existingConfigs.Has(name) { - continue - } - gvk := v1beta2.SchemeGroupVersion.WithKind(name + "Args") - args, err := scheme.New(gvk) - if err != nil { - // This plugin is out-of-tree or doesn't require configuration. - continue - } - scheme.Default(args) - args.GetObjectKind().SetGroupVersionKind(gvk) - prof.PluginConfig = append(prof.PluginConfig, v1beta2.PluginConfig{ - Name: name, - Args: runtime.RawExtension{Object: args}, - }) - } -} - -// SetDefaults_KubeSchedulerConfiguration sets additional defaults -func SetDefaults_KubeSchedulerConfiguration(obj *v1beta2.KubeSchedulerConfiguration) { - if obj.Parallelism == nil { - obj.Parallelism = pointer.Int32(16) - } - - if len(obj.Profiles) == 0 { - obj.Profiles = append(obj.Profiles, v1beta2.KubeSchedulerProfile{}) - } - // Only apply a default scheduler name when there is a single profile. - // Validation will ensure that every profile has a non-empty unique name. - if len(obj.Profiles) == 1 && obj.Profiles[0].SchedulerName == nil { - obj.Profiles[0].SchedulerName = pointer.String(v1.DefaultSchedulerName) - } - - // Add the default set of plugins and apply the configuration. - for i := range obj.Profiles { - prof := &obj.Profiles[i] - setDefaults_KubeSchedulerProfile(prof) - } - - if obj.PercentageOfNodesToScore == nil { - obj.PercentageOfNodesToScore = pointer.Int32(config.DefaultPercentageOfNodesToScore) - } - - if len(obj.LeaderElection.ResourceLock) == 0 { - // Use lease-based leader election to reduce cost. 
- // We migrated for EndpointsLease lock in 1.17 and starting in 1.20 we - // migrated to Lease lock. - obj.LeaderElection.ResourceLock = "leases" - } - if len(obj.LeaderElection.ResourceNamespace) == 0 { - obj.LeaderElection.ResourceNamespace = v1beta2.SchedulerDefaultLockObjectNamespace - } - if len(obj.LeaderElection.ResourceName) == 0 { - obj.LeaderElection.ResourceName = v1beta2.SchedulerDefaultLockObjectName - } - - if len(obj.ClientConnection.ContentType) == 0 { - obj.ClientConnection.ContentType = "application/vnd.kubernetes.protobuf" - } - // Scheduler has an opinion about QPS/Burst, setting specific defaults for itself, instead of generic settings. - if obj.ClientConnection.QPS == 0.0 { - obj.ClientConnection.QPS = 50.0 - } - if obj.ClientConnection.Burst == 0 { - obj.ClientConnection.Burst = 100 - } - - // Use the default LeaderElectionConfiguration options - componentbaseconfigv1alpha1.RecommendedDefaultLeaderElectionConfiguration(&obj.LeaderElection) - - if obj.PodInitialBackoffSeconds == nil { - obj.PodInitialBackoffSeconds = pointer.Int64(1) - } - - if obj.PodMaxBackoffSeconds == nil { - obj.PodMaxBackoffSeconds = pointer.Int64(10) - } - - // Enable profiling by default in the scheduler - if obj.EnableProfiling == nil { - obj.EnableProfiling = pointer.Bool(true) - } - - // Enable contention profiling by default if profiling is enabled - if *obj.EnableProfiling && obj.EnableContentionProfiling == nil { - obj.EnableContentionProfiling = pointer.Bool(true) - } -} - -func SetDefaults_DefaultPreemptionArgs(obj *v1beta2.DefaultPreemptionArgs) { - if obj.MinCandidateNodesPercentage == nil { - obj.MinCandidateNodesPercentage = pointer.Int32(10) - } - if obj.MinCandidateNodesAbsolute == nil { - obj.MinCandidateNodesAbsolute = pointer.Int32(100) - } -} - -func SetDefaults_InterPodAffinityArgs(obj *v1beta2.InterPodAffinityArgs) { - if obj.HardPodAffinityWeight == nil { - obj.HardPodAffinityWeight = pointer.Int32(1) - } -} - -func 
SetDefaults_VolumeBindingArgs(obj *v1beta2.VolumeBindingArgs) { - if obj.BindTimeoutSeconds == nil { - obj.BindTimeoutSeconds = pointer.Int64(600) - } - if len(obj.Shape) == 0 && feature.DefaultFeatureGate.Enabled(features.VolumeCapacityPriority) { - obj.Shape = []v1beta2.UtilizationShapePoint{ - { - Utilization: 0, - Score: 0, - }, - { - Utilization: 100, - Score: int32(config.MaxCustomPriorityScore), - }, - } - } -} - -func SetDefaults_NodeResourcesBalancedAllocationArgs(obj *v1beta2.NodeResourcesBalancedAllocationArgs) { - if len(obj.Resources) == 0 { - obj.Resources = defaultResourceSpec - return - } - // If the weight is not set or it is explicitly set to 0, then apply the default weight(1) instead. - for i := range obj.Resources { - if obj.Resources[i].Weight == 0 { - obj.Resources[i].Weight = 1 - } - } -} - -func SetDefaults_PodTopologySpreadArgs(obj *v1beta2.PodTopologySpreadArgs) { - if obj.DefaultingType == "" { - obj.DefaultingType = v1beta2.SystemDefaulting - } -} - -func SetDefaults_NodeResourcesFitArgs(obj *v1beta2.NodeResourcesFitArgs) { - if obj.ScoringStrategy == nil { - obj.ScoringStrategy = &v1beta2.ScoringStrategy{ - Type: v1beta2.ScoringStrategyType(config.LeastAllocated), - Resources: defaultResourceSpec, - } - } - if len(obj.ScoringStrategy.Resources) == 0 { - // If no resources specified, use the default set. - obj.ScoringStrategy.Resources = append(obj.ScoringStrategy.Resources, defaultResourceSpec...) 
- } - for i := range obj.ScoringStrategy.Resources { - if obj.ScoringStrategy.Resources[i].Weight == 0 { - obj.ScoringStrategy.Resources[i].Weight = 1 - } - } -} diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/apis/config/v1beta2/register.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/apis/config/v1beta2/register.go deleted file mode 100644 index b8ca76de5f5a..000000000000 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/apis/config/v1beta2/register.go +++ /dev/null @@ -1,42 +0,0 @@ -/* -Copyright 2021 The Kubernetes Authors. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -package v1beta2 - -import ( - "k8s.io/kube-scheduler/config/v1beta2" -) - -// GroupName is the group name used in this package -const GroupName = v1beta2.GroupName - -// SchemeGroupVersion is group version used to register these objects -var SchemeGroupVersion = v1beta2.SchemeGroupVersion - -var ( - // localSchemeBuilder extends the SchemeBuilder instance with the external types. In this package, - // defaulting and conversion init funcs are registered as well. - localSchemeBuilder = &v1beta2.SchemeBuilder - // AddToScheme is a global function that registers this API group & version to a scheme - AddToScheme = localSchemeBuilder.AddToScheme -) - -func init() { - // We only register manually written functions here. The registration of the - // generated functions takes place in the generated files. 
The separation - // makes the code compile even when the generated files are missing. - localSchemeBuilder.Register(addDefaultingFuncs) -} diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/apis/config/v1beta2/zz_generated.conversion.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/apis/config/v1beta2/zz_generated.conversion.go deleted file mode 100644 index e5c9fb73fa65..000000000000 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/apis/config/v1beta2/zz_generated.conversion.go +++ /dev/null @@ -1,945 +0,0 @@ -//go:build !ignore_autogenerated -// +build !ignore_autogenerated - -/* -Copyright The Kubernetes Authors. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -// Code generated by conversion-gen. DO NOT EDIT. - -package v1beta2 - -import ( - unsafe "unsafe" - - corev1 "k8s.io/api/core/v1" - v1 "k8s.io/apimachinery/pkg/apis/meta/v1" - conversion "k8s.io/apimachinery/pkg/conversion" - runtime "k8s.io/apimachinery/pkg/runtime" - v1alpha1 "k8s.io/component-base/config/v1alpha1" - v1beta2 "k8s.io/kube-scheduler/config/v1beta2" - config "k8s.io/kubernetes/pkg/scheduler/apis/config" -) - -func init() { - localSchemeBuilder.Register(RegisterConversions) -} - -// RegisterConversions adds conversion functions to the given scheme. -// Public to allow building arbitrary schemes. 
-func RegisterConversions(s *runtime.Scheme) error { - if err := s.AddGeneratedConversionFunc((*v1beta2.DefaultPreemptionArgs)(nil), (*config.DefaultPreemptionArgs)(nil), func(a, b interface{}, scope conversion.Scope) error { - return Convert_v1beta2_DefaultPreemptionArgs_To_config_DefaultPreemptionArgs(a.(*v1beta2.DefaultPreemptionArgs), b.(*config.DefaultPreemptionArgs), scope) - }); err != nil { - return err - } - if err := s.AddGeneratedConversionFunc((*config.DefaultPreemptionArgs)(nil), (*v1beta2.DefaultPreemptionArgs)(nil), func(a, b interface{}, scope conversion.Scope) error { - return Convert_config_DefaultPreemptionArgs_To_v1beta2_DefaultPreemptionArgs(a.(*config.DefaultPreemptionArgs), b.(*v1beta2.DefaultPreemptionArgs), scope) - }); err != nil { - return err - } - if err := s.AddGeneratedConversionFunc((*v1beta2.Extender)(nil), (*config.Extender)(nil), func(a, b interface{}, scope conversion.Scope) error { - return Convert_v1beta2_Extender_To_config_Extender(a.(*v1beta2.Extender), b.(*config.Extender), scope) - }); err != nil { - return err - } - if err := s.AddGeneratedConversionFunc((*config.Extender)(nil), (*v1beta2.Extender)(nil), func(a, b interface{}, scope conversion.Scope) error { - return Convert_config_Extender_To_v1beta2_Extender(a.(*config.Extender), b.(*v1beta2.Extender), scope) - }); err != nil { - return err - } - if err := s.AddGeneratedConversionFunc((*v1beta2.ExtenderManagedResource)(nil), (*config.ExtenderManagedResource)(nil), func(a, b interface{}, scope conversion.Scope) error { - return Convert_v1beta2_ExtenderManagedResource_To_config_ExtenderManagedResource(a.(*v1beta2.ExtenderManagedResource), b.(*config.ExtenderManagedResource), scope) - }); err != nil { - return err - } - if err := s.AddGeneratedConversionFunc((*config.ExtenderManagedResource)(nil), (*v1beta2.ExtenderManagedResource)(nil), func(a, b interface{}, scope conversion.Scope) error { - return 
Convert_config_ExtenderManagedResource_To_v1beta2_ExtenderManagedResource(a.(*config.ExtenderManagedResource), b.(*v1beta2.ExtenderManagedResource), scope) - }); err != nil { - return err - } - if err := s.AddGeneratedConversionFunc((*v1beta2.ExtenderTLSConfig)(nil), (*config.ExtenderTLSConfig)(nil), func(a, b interface{}, scope conversion.Scope) error { - return Convert_v1beta2_ExtenderTLSConfig_To_config_ExtenderTLSConfig(a.(*v1beta2.ExtenderTLSConfig), b.(*config.ExtenderTLSConfig), scope) - }); err != nil { - return err - } - if err := s.AddGeneratedConversionFunc((*config.ExtenderTLSConfig)(nil), (*v1beta2.ExtenderTLSConfig)(nil), func(a, b interface{}, scope conversion.Scope) error { - return Convert_config_ExtenderTLSConfig_To_v1beta2_ExtenderTLSConfig(a.(*config.ExtenderTLSConfig), b.(*v1beta2.ExtenderTLSConfig), scope) - }); err != nil { - return err - } - if err := s.AddGeneratedConversionFunc((*v1beta2.InterPodAffinityArgs)(nil), (*config.InterPodAffinityArgs)(nil), func(a, b interface{}, scope conversion.Scope) error { - return Convert_v1beta2_InterPodAffinityArgs_To_config_InterPodAffinityArgs(a.(*v1beta2.InterPodAffinityArgs), b.(*config.InterPodAffinityArgs), scope) - }); err != nil { - return err - } - if err := s.AddGeneratedConversionFunc((*config.InterPodAffinityArgs)(nil), (*v1beta2.InterPodAffinityArgs)(nil), func(a, b interface{}, scope conversion.Scope) error { - return Convert_config_InterPodAffinityArgs_To_v1beta2_InterPodAffinityArgs(a.(*config.InterPodAffinityArgs), b.(*v1beta2.InterPodAffinityArgs), scope) - }); err != nil { - return err - } - if err := s.AddGeneratedConversionFunc((*v1beta2.KubeSchedulerProfile)(nil), (*config.KubeSchedulerProfile)(nil), func(a, b interface{}, scope conversion.Scope) error { - return Convert_v1beta2_KubeSchedulerProfile_To_config_KubeSchedulerProfile(a.(*v1beta2.KubeSchedulerProfile), b.(*config.KubeSchedulerProfile), scope) - }); err != nil { - return err - } - if err := 
s.AddGeneratedConversionFunc((*v1beta2.NodeAffinityArgs)(nil), (*config.NodeAffinityArgs)(nil), func(a, b interface{}, scope conversion.Scope) error { - return Convert_v1beta2_NodeAffinityArgs_To_config_NodeAffinityArgs(a.(*v1beta2.NodeAffinityArgs), b.(*config.NodeAffinityArgs), scope) - }); err != nil { - return err - } - if err := s.AddGeneratedConversionFunc((*config.NodeAffinityArgs)(nil), (*v1beta2.NodeAffinityArgs)(nil), func(a, b interface{}, scope conversion.Scope) error { - return Convert_config_NodeAffinityArgs_To_v1beta2_NodeAffinityArgs(a.(*config.NodeAffinityArgs), b.(*v1beta2.NodeAffinityArgs), scope) - }); err != nil { - return err - } - if err := s.AddGeneratedConversionFunc((*v1beta2.NodeResourcesBalancedAllocationArgs)(nil), (*config.NodeResourcesBalancedAllocationArgs)(nil), func(a, b interface{}, scope conversion.Scope) error { - return Convert_v1beta2_NodeResourcesBalancedAllocationArgs_To_config_NodeResourcesBalancedAllocationArgs(a.(*v1beta2.NodeResourcesBalancedAllocationArgs), b.(*config.NodeResourcesBalancedAllocationArgs), scope) - }); err != nil { - return err - } - if err := s.AddGeneratedConversionFunc((*config.NodeResourcesBalancedAllocationArgs)(nil), (*v1beta2.NodeResourcesBalancedAllocationArgs)(nil), func(a, b interface{}, scope conversion.Scope) error { - return Convert_config_NodeResourcesBalancedAllocationArgs_To_v1beta2_NodeResourcesBalancedAllocationArgs(a.(*config.NodeResourcesBalancedAllocationArgs), b.(*v1beta2.NodeResourcesBalancedAllocationArgs), scope) - }); err != nil { - return err - } - if err := s.AddGeneratedConversionFunc((*v1beta2.NodeResourcesFitArgs)(nil), (*config.NodeResourcesFitArgs)(nil), func(a, b interface{}, scope conversion.Scope) error { - return Convert_v1beta2_NodeResourcesFitArgs_To_config_NodeResourcesFitArgs(a.(*v1beta2.NodeResourcesFitArgs), b.(*config.NodeResourcesFitArgs), scope) - }); err != nil { - return err - } - if err := s.AddGeneratedConversionFunc((*config.NodeResourcesFitArgs)(nil), 
(*v1beta2.NodeResourcesFitArgs)(nil), func(a, b interface{}, scope conversion.Scope) error { - return Convert_config_NodeResourcesFitArgs_To_v1beta2_NodeResourcesFitArgs(a.(*config.NodeResourcesFitArgs), b.(*v1beta2.NodeResourcesFitArgs), scope) - }); err != nil { - return err - } - if err := s.AddGeneratedConversionFunc((*v1beta2.Plugin)(nil), (*config.Plugin)(nil), func(a, b interface{}, scope conversion.Scope) error { - return Convert_v1beta2_Plugin_To_config_Plugin(a.(*v1beta2.Plugin), b.(*config.Plugin), scope) - }); err != nil { - return err - } - if err := s.AddGeneratedConversionFunc((*config.Plugin)(nil), (*v1beta2.Plugin)(nil), func(a, b interface{}, scope conversion.Scope) error { - return Convert_config_Plugin_To_v1beta2_Plugin(a.(*config.Plugin), b.(*v1beta2.Plugin), scope) - }); err != nil { - return err - } - if err := s.AddGeneratedConversionFunc((*v1beta2.PluginConfig)(nil), (*config.PluginConfig)(nil), func(a, b interface{}, scope conversion.Scope) error { - return Convert_v1beta2_PluginConfig_To_config_PluginConfig(a.(*v1beta2.PluginConfig), b.(*config.PluginConfig), scope) - }); err != nil { - return err - } - if err := s.AddGeneratedConversionFunc((*config.PluginConfig)(nil), (*v1beta2.PluginConfig)(nil), func(a, b interface{}, scope conversion.Scope) error { - return Convert_config_PluginConfig_To_v1beta2_PluginConfig(a.(*config.PluginConfig), b.(*v1beta2.PluginConfig), scope) - }); err != nil { - return err - } - if err := s.AddGeneratedConversionFunc((*v1beta2.PluginSet)(nil), (*config.PluginSet)(nil), func(a, b interface{}, scope conversion.Scope) error { - return Convert_v1beta2_PluginSet_To_config_PluginSet(a.(*v1beta2.PluginSet), b.(*config.PluginSet), scope) - }); err != nil { - return err - } - if err := s.AddGeneratedConversionFunc((*config.PluginSet)(nil), (*v1beta2.PluginSet)(nil), func(a, b interface{}, scope conversion.Scope) error { - return Convert_config_PluginSet_To_v1beta2_PluginSet(a.(*config.PluginSet), 
b.(*v1beta2.PluginSet), scope) - }); err != nil { - return err - } - if err := s.AddGeneratedConversionFunc((*v1beta2.Plugins)(nil), (*config.Plugins)(nil), func(a, b interface{}, scope conversion.Scope) error { - return Convert_v1beta2_Plugins_To_config_Plugins(a.(*v1beta2.Plugins), b.(*config.Plugins), scope) - }); err != nil { - return err - } - if err := s.AddGeneratedConversionFunc((*v1beta2.PodTopologySpreadArgs)(nil), (*config.PodTopologySpreadArgs)(nil), func(a, b interface{}, scope conversion.Scope) error { - return Convert_v1beta2_PodTopologySpreadArgs_To_config_PodTopologySpreadArgs(a.(*v1beta2.PodTopologySpreadArgs), b.(*config.PodTopologySpreadArgs), scope) - }); err != nil { - return err - } - if err := s.AddGeneratedConversionFunc((*config.PodTopologySpreadArgs)(nil), (*v1beta2.PodTopologySpreadArgs)(nil), func(a, b interface{}, scope conversion.Scope) error { - return Convert_config_PodTopologySpreadArgs_To_v1beta2_PodTopologySpreadArgs(a.(*config.PodTopologySpreadArgs), b.(*v1beta2.PodTopologySpreadArgs), scope) - }); err != nil { - return err - } - if err := s.AddGeneratedConversionFunc((*v1beta2.RequestedToCapacityRatioParam)(nil), (*config.RequestedToCapacityRatioParam)(nil), func(a, b interface{}, scope conversion.Scope) error { - return Convert_v1beta2_RequestedToCapacityRatioParam_To_config_RequestedToCapacityRatioParam(a.(*v1beta2.RequestedToCapacityRatioParam), b.(*config.RequestedToCapacityRatioParam), scope) - }); err != nil { - return err - } - if err := s.AddGeneratedConversionFunc((*config.RequestedToCapacityRatioParam)(nil), (*v1beta2.RequestedToCapacityRatioParam)(nil), func(a, b interface{}, scope conversion.Scope) error { - return Convert_config_RequestedToCapacityRatioParam_To_v1beta2_RequestedToCapacityRatioParam(a.(*config.RequestedToCapacityRatioParam), b.(*v1beta2.RequestedToCapacityRatioParam), scope) - }); err != nil { - return err - } - if err := s.AddGeneratedConversionFunc((*v1beta2.ResourceSpec)(nil), 
(*config.ResourceSpec)(nil), func(a, b interface{}, scope conversion.Scope) error { - return Convert_v1beta2_ResourceSpec_To_config_ResourceSpec(a.(*v1beta2.ResourceSpec), b.(*config.ResourceSpec), scope) - }); err != nil { - return err - } - if err := s.AddGeneratedConversionFunc((*config.ResourceSpec)(nil), (*v1beta2.ResourceSpec)(nil), func(a, b interface{}, scope conversion.Scope) error { - return Convert_config_ResourceSpec_To_v1beta2_ResourceSpec(a.(*config.ResourceSpec), b.(*v1beta2.ResourceSpec), scope) - }); err != nil { - return err - } - if err := s.AddGeneratedConversionFunc((*v1beta2.ScoringStrategy)(nil), (*config.ScoringStrategy)(nil), func(a, b interface{}, scope conversion.Scope) error { - return Convert_v1beta2_ScoringStrategy_To_config_ScoringStrategy(a.(*v1beta2.ScoringStrategy), b.(*config.ScoringStrategy), scope) - }); err != nil { - return err - } - if err := s.AddGeneratedConversionFunc((*config.ScoringStrategy)(nil), (*v1beta2.ScoringStrategy)(nil), func(a, b interface{}, scope conversion.Scope) error { - return Convert_config_ScoringStrategy_To_v1beta2_ScoringStrategy(a.(*config.ScoringStrategy), b.(*v1beta2.ScoringStrategy), scope) - }); err != nil { - return err - } - if err := s.AddGeneratedConversionFunc((*v1beta2.UtilizationShapePoint)(nil), (*config.UtilizationShapePoint)(nil), func(a, b interface{}, scope conversion.Scope) error { - return Convert_v1beta2_UtilizationShapePoint_To_config_UtilizationShapePoint(a.(*v1beta2.UtilizationShapePoint), b.(*config.UtilizationShapePoint), scope) - }); err != nil { - return err - } - if err := s.AddGeneratedConversionFunc((*config.UtilizationShapePoint)(nil), (*v1beta2.UtilizationShapePoint)(nil), func(a, b interface{}, scope conversion.Scope) error { - return Convert_config_UtilizationShapePoint_To_v1beta2_UtilizationShapePoint(a.(*config.UtilizationShapePoint), b.(*v1beta2.UtilizationShapePoint), scope) - }); err != nil { - return err - } - if err := 
s.AddGeneratedConversionFunc((*v1beta2.VolumeBindingArgs)(nil), (*config.VolumeBindingArgs)(nil), func(a, b interface{}, scope conversion.Scope) error { - return Convert_v1beta2_VolumeBindingArgs_To_config_VolumeBindingArgs(a.(*v1beta2.VolumeBindingArgs), b.(*config.VolumeBindingArgs), scope) - }); err != nil { - return err - } - if err := s.AddGeneratedConversionFunc((*config.VolumeBindingArgs)(nil), (*v1beta2.VolumeBindingArgs)(nil), func(a, b interface{}, scope conversion.Scope) error { - return Convert_config_VolumeBindingArgs_To_v1beta2_VolumeBindingArgs(a.(*config.VolumeBindingArgs), b.(*v1beta2.VolumeBindingArgs), scope) - }); err != nil { - return err - } - if err := s.AddConversionFunc((*config.KubeSchedulerConfiguration)(nil), (*v1beta2.KubeSchedulerConfiguration)(nil), func(a, b interface{}, scope conversion.Scope) error { - return Convert_config_KubeSchedulerConfiguration_To_v1beta2_KubeSchedulerConfiguration(a.(*config.KubeSchedulerConfiguration), b.(*v1beta2.KubeSchedulerConfiguration), scope) - }); err != nil { - return err - } - if err := s.AddConversionFunc((*config.KubeSchedulerProfile)(nil), (*v1beta2.KubeSchedulerProfile)(nil), func(a, b interface{}, scope conversion.Scope) error { - return Convert_config_KubeSchedulerProfile_To_v1beta2_KubeSchedulerProfile(a.(*config.KubeSchedulerProfile), b.(*v1beta2.KubeSchedulerProfile), scope) - }); err != nil { - return err - } - if err := s.AddConversionFunc((*config.Plugins)(nil), (*v1beta2.Plugins)(nil), func(a, b interface{}, scope conversion.Scope) error { - return Convert_config_Plugins_To_v1beta2_Plugins(a.(*config.Plugins), b.(*v1beta2.Plugins), scope) - }); err != nil { - return err - } - if err := s.AddConversionFunc((*v1beta2.KubeSchedulerConfiguration)(nil), (*config.KubeSchedulerConfiguration)(nil), func(a, b interface{}, scope conversion.Scope) error { - return Convert_v1beta2_KubeSchedulerConfiguration_To_config_KubeSchedulerConfiguration(a.(*v1beta2.KubeSchedulerConfiguration), 
b.(*config.KubeSchedulerConfiguration), scope) - }); err != nil { - return err - } - return nil -} - -func autoConvert_v1beta2_DefaultPreemptionArgs_To_config_DefaultPreemptionArgs(in *v1beta2.DefaultPreemptionArgs, out *config.DefaultPreemptionArgs, s conversion.Scope) error { - if err := v1.Convert_Pointer_int32_To_int32(&in.MinCandidateNodesPercentage, &out.MinCandidateNodesPercentage, s); err != nil { - return err - } - if err := v1.Convert_Pointer_int32_To_int32(&in.MinCandidateNodesAbsolute, &out.MinCandidateNodesAbsolute, s); err != nil { - return err - } - return nil -} - -// Convert_v1beta2_DefaultPreemptionArgs_To_config_DefaultPreemptionArgs is an autogenerated conversion function. -func Convert_v1beta2_DefaultPreemptionArgs_To_config_DefaultPreemptionArgs(in *v1beta2.DefaultPreemptionArgs, out *config.DefaultPreemptionArgs, s conversion.Scope) error { - return autoConvert_v1beta2_DefaultPreemptionArgs_To_config_DefaultPreemptionArgs(in, out, s) -} - -func autoConvert_config_DefaultPreemptionArgs_To_v1beta2_DefaultPreemptionArgs(in *config.DefaultPreemptionArgs, out *v1beta2.DefaultPreemptionArgs, s conversion.Scope) error { - if err := v1.Convert_int32_To_Pointer_int32(&in.MinCandidateNodesPercentage, &out.MinCandidateNodesPercentage, s); err != nil { - return err - } - if err := v1.Convert_int32_To_Pointer_int32(&in.MinCandidateNodesAbsolute, &out.MinCandidateNodesAbsolute, s); err != nil { - return err - } - return nil -} - -// Convert_config_DefaultPreemptionArgs_To_v1beta2_DefaultPreemptionArgs is an autogenerated conversion function. 
-func Convert_config_DefaultPreemptionArgs_To_v1beta2_DefaultPreemptionArgs(in *config.DefaultPreemptionArgs, out *v1beta2.DefaultPreemptionArgs, s conversion.Scope) error { - return autoConvert_config_DefaultPreemptionArgs_To_v1beta2_DefaultPreemptionArgs(in, out, s) -} - -func autoConvert_v1beta2_Extender_To_config_Extender(in *v1beta2.Extender, out *config.Extender, s conversion.Scope) error { - out.URLPrefix = in.URLPrefix - out.FilterVerb = in.FilterVerb - out.PreemptVerb = in.PreemptVerb - out.PrioritizeVerb = in.PrioritizeVerb - out.Weight = in.Weight - out.BindVerb = in.BindVerb - out.EnableHTTPS = in.EnableHTTPS - out.TLSConfig = (*config.ExtenderTLSConfig)(unsafe.Pointer(in.TLSConfig)) - out.HTTPTimeout = in.HTTPTimeout - out.NodeCacheCapable = in.NodeCacheCapable - out.ManagedResources = *(*[]config.ExtenderManagedResource)(unsafe.Pointer(&in.ManagedResources)) - out.Ignorable = in.Ignorable - return nil -} - -// Convert_v1beta2_Extender_To_config_Extender is an autogenerated conversion function. -func Convert_v1beta2_Extender_To_config_Extender(in *v1beta2.Extender, out *config.Extender, s conversion.Scope) error { - return autoConvert_v1beta2_Extender_To_config_Extender(in, out, s) -} - -func autoConvert_config_Extender_To_v1beta2_Extender(in *config.Extender, out *v1beta2.Extender, s conversion.Scope) error { - out.URLPrefix = in.URLPrefix - out.FilterVerb = in.FilterVerb - out.PreemptVerb = in.PreemptVerb - out.PrioritizeVerb = in.PrioritizeVerb - out.Weight = in.Weight - out.BindVerb = in.BindVerb - out.EnableHTTPS = in.EnableHTTPS - out.TLSConfig = (*v1beta2.ExtenderTLSConfig)(unsafe.Pointer(in.TLSConfig)) - out.HTTPTimeout = in.HTTPTimeout - out.NodeCacheCapable = in.NodeCacheCapable - out.ManagedResources = *(*[]v1beta2.ExtenderManagedResource)(unsafe.Pointer(&in.ManagedResources)) - out.Ignorable = in.Ignorable - return nil -} - -// Convert_config_Extender_To_v1beta2_Extender is an autogenerated conversion function. 
-func Convert_config_Extender_To_v1beta2_Extender(in *config.Extender, out *v1beta2.Extender, s conversion.Scope) error { - return autoConvert_config_Extender_To_v1beta2_Extender(in, out, s) -} - -func autoConvert_v1beta2_ExtenderManagedResource_To_config_ExtenderManagedResource(in *v1beta2.ExtenderManagedResource, out *config.ExtenderManagedResource, s conversion.Scope) error { - out.Name = in.Name - out.IgnoredByScheduler = in.IgnoredByScheduler - return nil -} - -// Convert_v1beta2_ExtenderManagedResource_To_config_ExtenderManagedResource is an autogenerated conversion function. -func Convert_v1beta2_ExtenderManagedResource_To_config_ExtenderManagedResource(in *v1beta2.ExtenderManagedResource, out *config.ExtenderManagedResource, s conversion.Scope) error { - return autoConvert_v1beta2_ExtenderManagedResource_To_config_ExtenderManagedResource(in, out, s) -} - -func autoConvert_config_ExtenderManagedResource_To_v1beta2_ExtenderManagedResource(in *config.ExtenderManagedResource, out *v1beta2.ExtenderManagedResource, s conversion.Scope) error { - out.Name = in.Name - out.IgnoredByScheduler = in.IgnoredByScheduler - return nil -} - -// Convert_config_ExtenderManagedResource_To_v1beta2_ExtenderManagedResource is an autogenerated conversion function. 
-func Convert_config_ExtenderManagedResource_To_v1beta2_ExtenderManagedResource(in *config.ExtenderManagedResource, out *v1beta2.ExtenderManagedResource, s conversion.Scope) error { - return autoConvert_config_ExtenderManagedResource_To_v1beta2_ExtenderManagedResource(in, out, s) -} - -func autoConvert_v1beta2_ExtenderTLSConfig_To_config_ExtenderTLSConfig(in *v1beta2.ExtenderTLSConfig, out *config.ExtenderTLSConfig, s conversion.Scope) error { - out.Insecure = in.Insecure - out.ServerName = in.ServerName - out.CertFile = in.CertFile - out.KeyFile = in.KeyFile - out.CAFile = in.CAFile - out.CertData = *(*[]byte)(unsafe.Pointer(&in.CertData)) - out.KeyData = *(*[]byte)(unsafe.Pointer(&in.KeyData)) - out.CAData = *(*[]byte)(unsafe.Pointer(&in.CAData)) - return nil -} - -// Convert_v1beta2_ExtenderTLSConfig_To_config_ExtenderTLSConfig is an autogenerated conversion function. -func Convert_v1beta2_ExtenderTLSConfig_To_config_ExtenderTLSConfig(in *v1beta2.ExtenderTLSConfig, out *config.ExtenderTLSConfig, s conversion.Scope) error { - return autoConvert_v1beta2_ExtenderTLSConfig_To_config_ExtenderTLSConfig(in, out, s) -} - -func autoConvert_config_ExtenderTLSConfig_To_v1beta2_ExtenderTLSConfig(in *config.ExtenderTLSConfig, out *v1beta2.ExtenderTLSConfig, s conversion.Scope) error { - out.Insecure = in.Insecure - out.ServerName = in.ServerName - out.CertFile = in.CertFile - out.KeyFile = in.KeyFile - out.CAFile = in.CAFile - out.CertData = *(*[]byte)(unsafe.Pointer(&in.CertData)) - out.KeyData = *(*[]byte)(unsafe.Pointer(&in.KeyData)) - out.CAData = *(*[]byte)(unsafe.Pointer(&in.CAData)) - return nil -} - -// Convert_config_ExtenderTLSConfig_To_v1beta2_ExtenderTLSConfig is an autogenerated conversion function. 
-func Convert_config_ExtenderTLSConfig_To_v1beta2_ExtenderTLSConfig(in *config.ExtenderTLSConfig, out *v1beta2.ExtenderTLSConfig, s conversion.Scope) error { - return autoConvert_config_ExtenderTLSConfig_To_v1beta2_ExtenderTLSConfig(in, out, s) -} - -func autoConvert_v1beta2_InterPodAffinityArgs_To_config_InterPodAffinityArgs(in *v1beta2.InterPodAffinityArgs, out *config.InterPodAffinityArgs, s conversion.Scope) error { - if err := v1.Convert_Pointer_int32_To_int32(&in.HardPodAffinityWeight, &out.HardPodAffinityWeight, s); err != nil { - return err - } - out.IgnorePreferredTermsOfExistingPods = in.IgnorePreferredTermsOfExistingPods - return nil -} - -// Convert_v1beta2_InterPodAffinityArgs_To_config_InterPodAffinityArgs is an autogenerated conversion function. -func Convert_v1beta2_InterPodAffinityArgs_To_config_InterPodAffinityArgs(in *v1beta2.InterPodAffinityArgs, out *config.InterPodAffinityArgs, s conversion.Scope) error { - return autoConvert_v1beta2_InterPodAffinityArgs_To_config_InterPodAffinityArgs(in, out, s) -} - -func autoConvert_config_InterPodAffinityArgs_To_v1beta2_InterPodAffinityArgs(in *config.InterPodAffinityArgs, out *v1beta2.InterPodAffinityArgs, s conversion.Scope) error { - if err := v1.Convert_int32_To_Pointer_int32(&in.HardPodAffinityWeight, &out.HardPodAffinityWeight, s); err != nil { - return err - } - out.IgnorePreferredTermsOfExistingPods = in.IgnorePreferredTermsOfExistingPods - return nil -} - -// Convert_config_InterPodAffinityArgs_To_v1beta2_InterPodAffinityArgs is an autogenerated conversion function. 
-func Convert_config_InterPodAffinityArgs_To_v1beta2_InterPodAffinityArgs(in *config.InterPodAffinityArgs, out *v1beta2.InterPodAffinityArgs, s conversion.Scope) error { - return autoConvert_config_InterPodAffinityArgs_To_v1beta2_InterPodAffinityArgs(in, out, s) -} - -func autoConvert_v1beta2_KubeSchedulerConfiguration_To_config_KubeSchedulerConfiguration(in *v1beta2.KubeSchedulerConfiguration, out *config.KubeSchedulerConfiguration, s conversion.Scope) error { - if err := v1.Convert_Pointer_int32_To_int32(&in.Parallelism, &out.Parallelism, s); err != nil { - return err - } - if err := v1alpha1.Convert_v1alpha1_LeaderElectionConfiguration_To_config_LeaderElectionConfiguration(&in.LeaderElection, &out.LeaderElection, s); err != nil { - return err - } - if err := v1alpha1.Convert_v1alpha1_ClientConnectionConfiguration_To_config_ClientConnectionConfiguration(&in.ClientConnection, &out.ClientConnection, s); err != nil { - return err - } - if err := v1.Convert_Pointer_string_To_string(&in.HealthzBindAddress, &out.HealthzBindAddress, s); err != nil { - return err - } - if err := v1.Convert_Pointer_string_To_string(&in.MetricsBindAddress, &out.MetricsBindAddress, s); err != nil { - return err - } - if err := v1alpha1.Convert_v1alpha1_DebuggingConfiguration_To_config_DebuggingConfiguration(&in.DebuggingConfiguration, &out.DebuggingConfiguration, s); err != nil { - return err - } - out.PercentageOfNodesToScore = (*int32)(unsafe.Pointer(in.PercentageOfNodesToScore)) - if err := v1.Convert_Pointer_int64_To_int64(&in.PodInitialBackoffSeconds, &out.PodInitialBackoffSeconds, s); err != nil { - return err - } - if err := v1.Convert_Pointer_int64_To_int64(&in.PodMaxBackoffSeconds, &out.PodMaxBackoffSeconds, s); err != nil { - return err - } - if in.Profiles != nil { - in, out := &in.Profiles, &out.Profiles - *out = make([]config.KubeSchedulerProfile, len(*in)) - for i := range *in { - if err := Convert_v1beta2_KubeSchedulerProfile_To_config_KubeSchedulerProfile(&(*in)[i], 
&(*out)[i], s); err != nil { - return err - } - } - } else { - out.Profiles = nil - } - out.Extenders = *(*[]config.Extender)(unsafe.Pointer(&in.Extenders)) - return nil -} - -func autoConvert_config_KubeSchedulerConfiguration_To_v1beta2_KubeSchedulerConfiguration(in *config.KubeSchedulerConfiguration, out *v1beta2.KubeSchedulerConfiguration, s conversion.Scope) error { - if err := v1.Convert_int32_To_Pointer_int32(&in.Parallelism, &out.Parallelism, s); err != nil { - return err - } - if err := v1alpha1.Convert_config_LeaderElectionConfiguration_To_v1alpha1_LeaderElectionConfiguration(&in.LeaderElection, &out.LeaderElection, s); err != nil { - return err - } - if err := v1alpha1.Convert_config_ClientConnectionConfiguration_To_v1alpha1_ClientConnectionConfiguration(&in.ClientConnection, &out.ClientConnection, s); err != nil { - return err - } - if err := v1.Convert_string_To_Pointer_string(&in.HealthzBindAddress, &out.HealthzBindAddress, s); err != nil { - return err - } - if err := v1.Convert_string_To_Pointer_string(&in.MetricsBindAddress, &out.MetricsBindAddress, s); err != nil { - return err - } - if err := v1alpha1.Convert_config_DebuggingConfiguration_To_v1alpha1_DebuggingConfiguration(&in.DebuggingConfiguration, &out.DebuggingConfiguration, s); err != nil { - return err - } - out.PercentageOfNodesToScore = (*int32)(unsafe.Pointer(in.PercentageOfNodesToScore)) - if err := v1.Convert_int64_To_Pointer_int64(&in.PodInitialBackoffSeconds, &out.PodInitialBackoffSeconds, s); err != nil { - return err - } - if err := v1.Convert_int64_To_Pointer_int64(&in.PodMaxBackoffSeconds, &out.PodMaxBackoffSeconds, s); err != nil { - return err - } - if in.Profiles != nil { - in, out := &in.Profiles, &out.Profiles - *out = make([]v1beta2.KubeSchedulerProfile, len(*in)) - for i := range *in { - if err := Convert_config_KubeSchedulerProfile_To_v1beta2_KubeSchedulerProfile(&(*in)[i], &(*out)[i], s); err != nil { - return err - } - } - } else { - out.Profiles = nil - } - 
out.Extenders = *(*[]v1beta2.Extender)(unsafe.Pointer(&in.Extenders)) - return nil -} - -func autoConvert_v1beta2_KubeSchedulerProfile_To_config_KubeSchedulerProfile(in *v1beta2.KubeSchedulerProfile, out *config.KubeSchedulerProfile, s conversion.Scope) error { - if err := v1.Convert_Pointer_string_To_string(&in.SchedulerName, &out.SchedulerName, s); err != nil { - return err - } - if in.Plugins != nil { - in, out := &in.Plugins, &out.Plugins - *out = new(config.Plugins) - if err := Convert_v1beta2_Plugins_To_config_Plugins(*in, *out, s); err != nil { - return err - } - } else { - out.Plugins = nil - } - if in.PluginConfig != nil { - in, out := &in.PluginConfig, &out.PluginConfig - *out = make([]config.PluginConfig, len(*in)) - for i := range *in { - if err := Convert_v1beta2_PluginConfig_To_config_PluginConfig(&(*in)[i], &(*out)[i], s); err != nil { - return err - } - } - } else { - out.PluginConfig = nil - } - return nil -} - -// Convert_v1beta2_KubeSchedulerProfile_To_config_KubeSchedulerProfile is an autogenerated conversion function. 
-func Convert_v1beta2_KubeSchedulerProfile_To_config_KubeSchedulerProfile(in *v1beta2.KubeSchedulerProfile, out *config.KubeSchedulerProfile, s conversion.Scope) error { - return autoConvert_v1beta2_KubeSchedulerProfile_To_config_KubeSchedulerProfile(in, out, s) -} - -func autoConvert_config_KubeSchedulerProfile_To_v1beta2_KubeSchedulerProfile(in *config.KubeSchedulerProfile, out *v1beta2.KubeSchedulerProfile, s conversion.Scope) error { - if err := v1.Convert_string_To_Pointer_string(&in.SchedulerName, &out.SchedulerName, s); err != nil { - return err - } - // WARNING: in.PercentageOfNodesToScore requires manual conversion: does not exist in peer-type - if in.Plugins != nil { - in, out := &in.Plugins, &out.Plugins - *out = new(v1beta2.Plugins) - if err := Convert_config_Plugins_To_v1beta2_Plugins(*in, *out, s); err != nil { - return err - } - } else { - out.Plugins = nil - } - if in.PluginConfig != nil { - in, out := &in.PluginConfig, &out.PluginConfig - *out = make([]v1beta2.PluginConfig, len(*in)) - for i := range *in { - if err := Convert_config_PluginConfig_To_v1beta2_PluginConfig(&(*in)[i], &(*out)[i], s); err != nil { - return err - } - } - } else { - out.PluginConfig = nil - } - return nil -} - -func autoConvert_v1beta2_NodeAffinityArgs_To_config_NodeAffinityArgs(in *v1beta2.NodeAffinityArgs, out *config.NodeAffinityArgs, s conversion.Scope) error { - out.AddedAffinity = (*corev1.NodeAffinity)(unsafe.Pointer(in.AddedAffinity)) - return nil -} - -// Convert_v1beta2_NodeAffinityArgs_To_config_NodeAffinityArgs is an autogenerated conversion function. 
-func Convert_v1beta2_NodeAffinityArgs_To_config_NodeAffinityArgs(in *v1beta2.NodeAffinityArgs, out *config.NodeAffinityArgs, s conversion.Scope) error { - return autoConvert_v1beta2_NodeAffinityArgs_To_config_NodeAffinityArgs(in, out, s) -} - -func autoConvert_config_NodeAffinityArgs_To_v1beta2_NodeAffinityArgs(in *config.NodeAffinityArgs, out *v1beta2.NodeAffinityArgs, s conversion.Scope) error { - out.AddedAffinity = (*corev1.NodeAffinity)(unsafe.Pointer(in.AddedAffinity)) - return nil -} - -// Convert_config_NodeAffinityArgs_To_v1beta2_NodeAffinityArgs is an autogenerated conversion function. -func Convert_config_NodeAffinityArgs_To_v1beta2_NodeAffinityArgs(in *config.NodeAffinityArgs, out *v1beta2.NodeAffinityArgs, s conversion.Scope) error { - return autoConvert_config_NodeAffinityArgs_To_v1beta2_NodeAffinityArgs(in, out, s) -} - -func autoConvert_v1beta2_NodeResourcesBalancedAllocationArgs_To_config_NodeResourcesBalancedAllocationArgs(in *v1beta2.NodeResourcesBalancedAllocationArgs, out *config.NodeResourcesBalancedAllocationArgs, s conversion.Scope) error { - out.Resources = *(*[]config.ResourceSpec)(unsafe.Pointer(&in.Resources)) - return nil -} - -// Convert_v1beta2_NodeResourcesBalancedAllocationArgs_To_config_NodeResourcesBalancedAllocationArgs is an autogenerated conversion function. 
-func Convert_v1beta2_NodeResourcesBalancedAllocationArgs_To_config_NodeResourcesBalancedAllocationArgs(in *v1beta2.NodeResourcesBalancedAllocationArgs, out *config.NodeResourcesBalancedAllocationArgs, s conversion.Scope) error { - return autoConvert_v1beta2_NodeResourcesBalancedAllocationArgs_To_config_NodeResourcesBalancedAllocationArgs(in, out, s) -} - -func autoConvert_config_NodeResourcesBalancedAllocationArgs_To_v1beta2_NodeResourcesBalancedAllocationArgs(in *config.NodeResourcesBalancedAllocationArgs, out *v1beta2.NodeResourcesBalancedAllocationArgs, s conversion.Scope) error { - out.Resources = *(*[]v1beta2.ResourceSpec)(unsafe.Pointer(&in.Resources)) - return nil -} - -// Convert_config_NodeResourcesBalancedAllocationArgs_To_v1beta2_NodeResourcesBalancedAllocationArgs is an autogenerated conversion function. -func Convert_config_NodeResourcesBalancedAllocationArgs_To_v1beta2_NodeResourcesBalancedAllocationArgs(in *config.NodeResourcesBalancedAllocationArgs, out *v1beta2.NodeResourcesBalancedAllocationArgs, s conversion.Scope) error { - return autoConvert_config_NodeResourcesBalancedAllocationArgs_To_v1beta2_NodeResourcesBalancedAllocationArgs(in, out, s) -} - -func autoConvert_v1beta2_NodeResourcesFitArgs_To_config_NodeResourcesFitArgs(in *v1beta2.NodeResourcesFitArgs, out *config.NodeResourcesFitArgs, s conversion.Scope) error { - out.IgnoredResources = *(*[]string)(unsafe.Pointer(&in.IgnoredResources)) - out.IgnoredResourceGroups = *(*[]string)(unsafe.Pointer(&in.IgnoredResourceGroups)) - out.ScoringStrategy = (*config.ScoringStrategy)(unsafe.Pointer(in.ScoringStrategy)) - return nil -} - -// Convert_v1beta2_NodeResourcesFitArgs_To_config_NodeResourcesFitArgs is an autogenerated conversion function. 
-func Convert_v1beta2_NodeResourcesFitArgs_To_config_NodeResourcesFitArgs(in *v1beta2.NodeResourcesFitArgs, out *config.NodeResourcesFitArgs, s conversion.Scope) error { - return autoConvert_v1beta2_NodeResourcesFitArgs_To_config_NodeResourcesFitArgs(in, out, s) -} - -func autoConvert_config_NodeResourcesFitArgs_To_v1beta2_NodeResourcesFitArgs(in *config.NodeResourcesFitArgs, out *v1beta2.NodeResourcesFitArgs, s conversion.Scope) error { - out.IgnoredResources = *(*[]string)(unsafe.Pointer(&in.IgnoredResources)) - out.IgnoredResourceGroups = *(*[]string)(unsafe.Pointer(&in.IgnoredResourceGroups)) - out.ScoringStrategy = (*v1beta2.ScoringStrategy)(unsafe.Pointer(in.ScoringStrategy)) - return nil -} - -// Convert_config_NodeResourcesFitArgs_To_v1beta2_NodeResourcesFitArgs is an autogenerated conversion function. -func Convert_config_NodeResourcesFitArgs_To_v1beta2_NodeResourcesFitArgs(in *config.NodeResourcesFitArgs, out *v1beta2.NodeResourcesFitArgs, s conversion.Scope) error { - return autoConvert_config_NodeResourcesFitArgs_To_v1beta2_NodeResourcesFitArgs(in, out, s) -} - -func autoConvert_v1beta2_Plugin_To_config_Plugin(in *v1beta2.Plugin, out *config.Plugin, s conversion.Scope) error { - out.Name = in.Name - if err := v1.Convert_Pointer_int32_To_int32(&in.Weight, &out.Weight, s); err != nil { - return err - } - return nil -} - -// Convert_v1beta2_Plugin_To_config_Plugin is an autogenerated conversion function. -func Convert_v1beta2_Plugin_To_config_Plugin(in *v1beta2.Plugin, out *config.Plugin, s conversion.Scope) error { - return autoConvert_v1beta2_Plugin_To_config_Plugin(in, out, s) -} - -func autoConvert_config_Plugin_To_v1beta2_Plugin(in *config.Plugin, out *v1beta2.Plugin, s conversion.Scope) error { - out.Name = in.Name - if err := v1.Convert_int32_To_Pointer_int32(&in.Weight, &out.Weight, s); err != nil { - return err - } - return nil -} - -// Convert_config_Plugin_To_v1beta2_Plugin is an autogenerated conversion function. 
-func Convert_config_Plugin_To_v1beta2_Plugin(in *config.Plugin, out *v1beta2.Plugin, s conversion.Scope) error { - return autoConvert_config_Plugin_To_v1beta2_Plugin(in, out, s) -} - -func autoConvert_v1beta2_PluginConfig_To_config_PluginConfig(in *v1beta2.PluginConfig, out *config.PluginConfig, s conversion.Scope) error { - out.Name = in.Name - if err := runtime.Convert_runtime_RawExtension_To_runtime_Object(&in.Args, &out.Args, s); err != nil { - return err - } - return nil -} - -// Convert_v1beta2_PluginConfig_To_config_PluginConfig is an autogenerated conversion function. -func Convert_v1beta2_PluginConfig_To_config_PluginConfig(in *v1beta2.PluginConfig, out *config.PluginConfig, s conversion.Scope) error { - return autoConvert_v1beta2_PluginConfig_To_config_PluginConfig(in, out, s) -} - -func autoConvert_config_PluginConfig_To_v1beta2_PluginConfig(in *config.PluginConfig, out *v1beta2.PluginConfig, s conversion.Scope) error { - out.Name = in.Name - if err := runtime.Convert_runtime_Object_To_runtime_RawExtension(&in.Args, &out.Args, s); err != nil { - return err - } - return nil -} - -// Convert_config_PluginConfig_To_v1beta2_PluginConfig is an autogenerated conversion function. 
-func Convert_config_PluginConfig_To_v1beta2_PluginConfig(in *config.PluginConfig, out *v1beta2.PluginConfig, s conversion.Scope) error { - return autoConvert_config_PluginConfig_To_v1beta2_PluginConfig(in, out, s) -} - -func autoConvert_v1beta2_PluginSet_To_config_PluginSet(in *v1beta2.PluginSet, out *config.PluginSet, s conversion.Scope) error { - if in.Enabled != nil { - in, out := &in.Enabled, &out.Enabled - *out = make([]config.Plugin, len(*in)) - for i := range *in { - if err := Convert_v1beta2_Plugin_To_config_Plugin(&(*in)[i], &(*out)[i], s); err != nil { - return err - } - } - } else { - out.Enabled = nil - } - if in.Disabled != nil { - in, out := &in.Disabled, &out.Disabled - *out = make([]config.Plugin, len(*in)) - for i := range *in { - if err := Convert_v1beta2_Plugin_To_config_Plugin(&(*in)[i], &(*out)[i], s); err != nil { - return err - } - } - } else { - out.Disabled = nil - } - return nil -} - -// Convert_v1beta2_PluginSet_To_config_PluginSet is an autogenerated conversion function. 
-func Convert_v1beta2_PluginSet_To_config_PluginSet(in *v1beta2.PluginSet, out *config.PluginSet, s conversion.Scope) error { - return autoConvert_v1beta2_PluginSet_To_config_PluginSet(in, out, s) -} - -func autoConvert_config_PluginSet_To_v1beta2_PluginSet(in *config.PluginSet, out *v1beta2.PluginSet, s conversion.Scope) error { - if in.Enabled != nil { - in, out := &in.Enabled, &out.Enabled - *out = make([]v1beta2.Plugin, len(*in)) - for i := range *in { - if err := Convert_config_Plugin_To_v1beta2_Plugin(&(*in)[i], &(*out)[i], s); err != nil { - return err - } - } - } else { - out.Enabled = nil - } - if in.Disabled != nil { - in, out := &in.Disabled, &out.Disabled - *out = make([]v1beta2.Plugin, len(*in)) - for i := range *in { - if err := Convert_config_Plugin_To_v1beta2_Plugin(&(*in)[i], &(*out)[i], s); err != nil { - return err - } - } - } else { - out.Disabled = nil - } - return nil -} - -// Convert_config_PluginSet_To_v1beta2_PluginSet is an autogenerated conversion function. -func Convert_config_PluginSet_To_v1beta2_PluginSet(in *config.PluginSet, out *v1beta2.PluginSet, s conversion.Scope) error { - return autoConvert_config_PluginSet_To_v1beta2_PluginSet(in, out, s) -} - -func autoConvert_v1beta2_Plugins_To_config_Plugins(in *v1beta2.Plugins, out *config.Plugins, s conversion.Scope) error { - if err := Convert_v1beta2_PluginSet_To_config_PluginSet(&in.PreEnqueue, &out.PreEnqueue, s); err != nil { - return err - } - if err := Convert_v1beta2_PluginSet_To_config_PluginSet(&in.QueueSort, &out.QueueSort, s); err != nil { - return err - } - if err := Convert_v1beta2_PluginSet_To_config_PluginSet(&in.PreFilter, &out.PreFilter, s); err != nil { - return err - } - if err := Convert_v1beta2_PluginSet_To_config_PluginSet(&in.Filter, &out.Filter, s); err != nil { - return err - } - if err := Convert_v1beta2_PluginSet_To_config_PluginSet(&in.PostFilter, &out.PostFilter, s); err != nil { - return err - } - if err := 
Convert_v1beta2_PluginSet_To_config_PluginSet(&in.PreScore, &out.PreScore, s); err != nil { - return err - } - if err := Convert_v1beta2_PluginSet_To_config_PluginSet(&in.Score, &out.Score, s); err != nil { - return err - } - if err := Convert_v1beta2_PluginSet_To_config_PluginSet(&in.Reserve, &out.Reserve, s); err != nil { - return err - } - if err := Convert_v1beta2_PluginSet_To_config_PluginSet(&in.Permit, &out.Permit, s); err != nil { - return err - } - if err := Convert_v1beta2_PluginSet_To_config_PluginSet(&in.PreBind, &out.PreBind, s); err != nil { - return err - } - if err := Convert_v1beta2_PluginSet_To_config_PluginSet(&in.Bind, &out.Bind, s); err != nil { - return err - } - if err := Convert_v1beta2_PluginSet_To_config_PluginSet(&in.PostBind, &out.PostBind, s); err != nil { - return err - } - if err := Convert_v1beta2_PluginSet_To_config_PluginSet(&in.MultiPoint, &out.MultiPoint, s); err != nil { - return err - } - return nil -} - -// Convert_v1beta2_Plugins_To_config_Plugins is an autogenerated conversion function. 
-func Convert_v1beta2_Plugins_To_config_Plugins(in *v1beta2.Plugins, out *config.Plugins, s conversion.Scope) error { - return autoConvert_v1beta2_Plugins_To_config_Plugins(in, out, s) -} - -func autoConvert_config_Plugins_To_v1beta2_Plugins(in *config.Plugins, out *v1beta2.Plugins, s conversion.Scope) error { - if err := Convert_config_PluginSet_To_v1beta2_PluginSet(&in.PreEnqueue, &out.PreEnqueue, s); err != nil { - return err - } - if err := Convert_config_PluginSet_To_v1beta2_PluginSet(&in.QueueSort, &out.QueueSort, s); err != nil { - return err - } - if err := Convert_config_PluginSet_To_v1beta2_PluginSet(&in.PreFilter, &out.PreFilter, s); err != nil { - return err - } - if err := Convert_config_PluginSet_To_v1beta2_PluginSet(&in.Filter, &out.Filter, s); err != nil { - return err - } - if err := Convert_config_PluginSet_To_v1beta2_PluginSet(&in.PostFilter, &out.PostFilter, s); err != nil { - return err - } - if err := Convert_config_PluginSet_To_v1beta2_PluginSet(&in.PreScore, &out.PreScore, s); err != nil { - return err - } - if err := Convert_config_PluginSet_To_v1beta2_PluginSet(&in.Score, &out.Score, s); err != nil { - return err - } - if err := Convert_config_PluginSet_To_v1beta2_PluginSet(&in.Reserve, &out.Reserve, s); err != nil { - return err - } - if err := Convert_config_PluginSet_To_v1beta2_PluginSet(&in.Permit, &out.Permit, s); err != nil { - return err - } - if err := Convert_config_PluginSet_To_v1beta2_PluginSet(&in.PreBind, &out.PreBind, s); err != nil { - return err - } - if err := Convert_config_PluginSet_To_v1beta2_PluginSet(&in.Bind, &out.Bind, s); err != nil { - return err - } - if err := Convert_config_PluginSet_To_v1beta2_PluginSet(&in.PostBind, &out.PostBind, s); err != nil { - return err - } - if err := Convert_config_PluginSet_To_v1beta2_PluginSet(&in.MultiPoint, &out.MultiPoint, s); err != nil { - return err - } - return nil -} - -func autoConvert_v1beta2_PodTopologySpreadArgs_To_config_PodTopologySpreadArgs(in 
*v1beta2.PodTopologySpreadArgs, out *config.PodTopologySpreadArgs, s conversion.Scope) error { - out.DefaultConstraints = *(*[]corev1.TopologySpreadConstraint)(unsafe.Pointer(&in.DefaultConstraints)) - out.DefaultingType = config.PodTopologySpreadConstraintsDefaulting(in.DefaultingType) - return nil -} - -// Convert_v1beta2_PodTopologySpreadArgs_To_config_PodTopologySpreadArgs is an autogenerated conversion function. -func Convert_v1beta2_PodTopologySpreadArgs_To_config_PodTopologySpreadArgs(in *v1beta2.PodTopologySpreadArgs, out *config.PodTopologySpreadArgs, s conversion.Scope) error { - return autoConvert_v1beta2_PodTopologySpreadArgs_To_config_PodTopologySpreadArgs(in, out, s) -} - -func autoConvert_config_PodTopologySpreadArgs_To_v1beta2_PodTopologySpreadArgs(in *config.PodTopologySpreadArgs, out *v1beta2.PodTopologySpreadArgs, s conversion.Scope) error { - out.DefaultConstraints = *(*[]corev1.TopologySpreadConstraint)(unsafe.Pointer(&in.DefaultConstraints)) - out.DefaultingType = v1beta2.PodTopologySpreadConstraintsDefaulting(in.DefaultingType) - return nil -} - -// Convert_config_PodTopologySpreadArgs_To_v1beta2_PodTopologySpreadArgs is an autogenerated conversion function. -func Convert_config_PodTopologySpreadArgs_To_v1beta2_PodTopologySpreadArgs(in *config.PodTopologySpreadArgs, out *v1beta2.PodTopologySpreadArgs, s conversion.Scope) error { - return autoConvert_config_PodTopologySpreadArgs_To_v1beta2_PodTopologySpreadArgs(in, out, s) -} - -func autoConvert_v1beta2_RequestedToCapacityRatioParam_To_config_RequestedToCapacityRatioParam(in *v1beta2.RequestedToCapacityRatioParam, out *config.RequestedToCapacityRatioParam, s conversion.Scope) error { - out.Shape = *(*[]config.UtilizationShapePoint)(unsafe.Pointer(&in.Shape)) - return nil -} - -// Convert_v1beta2_RequestedToCapacityRatioParam_To_config_RequestedToCapacityRatioParam is an autogenerated conversion function. 
-func Convert_v1beta2_RequestedToCapacityRatioParam_To_config_RequestedToCapacityRatioParam(in *v1beta2.RequestedToCapacityRatioParam, out *config.RequestedToCapacityRatioParam, s conversion.Scope) error { - return autoConvert_v1beta2_RequestedToCapacityRatioParam_To_config_RequestedToCapacityRatioParam(in, out, s) -} - -func autoConvert_config_RequestedToCapacityRatioParam_To_v1beta2_RequestedToCapacityRatioParam(in *config.RequestedToCapacityRatioParam, out *v1beta2.RequestedToCapacityRatioParam, s conversion.Scope) error { - out.Shape = *(*[]v1beta2.UtilizationShapePoint)(unsafe.Pointer(&in.Shape)) - return nil -} - -// Convert_config_RequestedToCapacityRatioParam_To_v1beta2_RequestedToCapacityRatioParam is an autogenerated conversion function. -func Convert_config_RequestedToCapacityRatioParam_To_v1beta2_RequestedToCapacityRatioParam(in *config.RequestedToCapacityRatioParam, out *v1beta2.RequestedToCapacityRatioParam, s conversion.Scope) error { - return autoConvert_config_RequestedToCapacityRatioParam_To_v1beta2_RequestedToCapacityRatioParam(in, out, s) -} - -func autoConvert_v1beta2_ResourceSpec_To_config_ResourceSpec(in *v1beta2.ResourceSpec, out *config.ResourceSpec, s conversion.Scope) error { - out.Name = in.Name - out.Weight = in.Weight - return nil -} - -// Convert_v1beta2_ResourceSpec_To_config_ResourceSpec is an autogenerated conversion function. -func Convert_v1beta2_ResourceSpec_To_config_ResourceSpec(in *v1beta2.ResourceSpec, out *config.ResourceSpec, s conversion.Scope) error { - return autoConvert_v1beta2_ResourceSpec_To_config_ResourceSpec(in, out, s) -} - -func autoConvert_config_ResourceSpec_To_v1beta2_ResourceSpec(in *config.ResourceSpec, out *v1beta2.ResourceSpec, s conversion.Scope) error { - out.Name = in.Name - out.Weight = in.Weight - return nil -} - -// Convert_config_ResourceSpec_To_v1beta2_ResourceSpec is an autogenerated conversion function. 
-func Convert_config_ResourceSpec_To_v1beta2_ResourceSpec(in *config.ResourceSpec, out *v1beta2.ResourceSpec, s conversion.Scope) error { - return autoConvert_config_ResourceSpec_To_v1beta2_ResourceSpec(in, out, s) -} - -func autoConvert_v1beta2_ScoringStrategy_To_config_ScoringStrategy(in *v1beta2.ScoringStrategy, out *config.ScoringStrategy, s conversion.Scope) error { - out.Type = config.ScoringStrategyType(in.Type) - out.Resources = *(*[]config.ResourceSpec)(unsafe.Pointer(&in.Resources)) - out.RequestedToCapacityRatio = (*config.RequestedToCapacityRatioParam)(unsafe.Pointer(in.RequestedToCapacityRatio)) - return nil -} - -// Convert_v1beta2_ScoringStrategy_To_config_ScoringStrategy is an autogenerated conversion function. -func Convert_v1beta2_ScoringStrategy_To_config_ScoringStrategy(in *v1beta2.ScoringStrategy, out *config.ScoringStrategy, s conversion.Scope) error { - return autoConvert_v1beta2_ScoringStrategy_To_config_ScoringStrategy(in, out, s) -} - -func autoConvert_config_ScoringStrategy_To_v1beta2_ScoringStrategy(in *config.ScoringStrategy, out *v1beta2.ScoringStrategy, s conversion.Scope) error { - out.Type = v1beta2.ScoringStrategyType(in.Type) - out.Resources = *(*[]v1beta2.ResourceSpec)(unsafe.Pointer(&in.Resources)) - out.RequestedToCapacityRatio = (*v1beta2.RequestedToCapacityRatioParam)(unsafe.Pointer(in.RequestedToCapacityRatio)) - return nil -} - -// Convert_config_ScoringStrategy_To_v1beta2_ScoringStrategy is an autogenerated conversion function. 
-func Convert_config_ScoringStrategy_To_v1beta2_ScoringStrategy(in *config.ScoringStrategy, out *v1beta2.ScoringStrategy, s conversion.Scope) error { - return autoConvert_config_ScoringStrategy_To_v1beta2_ScoringStrategy(in, out, s) -} - -func autoConvert_v1beta2_UtilizationShapePoint_To_config_UtilizationShapePoint(in *v1beta2.UtilizationShapePoint, out *config.UtilizationShapePoint, s conversion.Scope) error { - out.Utilization = in.Utilization - out.Score = in.Score - return nil -} - -// Convert_v1beta2_UtilizationShapePoint_To_config_UtilizationShapePoint is an autogenerated conversion function. -func Convert_v1beta2_UtilizationShapePoint_To_config_UtilizationShapePoint(in *v1beta2.UtilizationShapePoint, out *config.UtilizationShapePoint, s conversion.Scope) error { - return autoConvert_v1beta2_UtilizationShapePoint_To_config_UtilizationShapePoint(in, out, s) -} - -func autoConvert_config_UtilizationShapePoint_To_v1beta2_UtilizationShapePoint(in *config.UtilizationShapePoint, out *v1beta2.UtilizationShapePoint, s conversion.Scope) error { - out.Utilization = in.Utilization - out.Score = in.Score - return nil -} - -// Convert_config_UtilizationShapePoint_To_v1beta2_UtilizationShapePoint is an autogenerated conversion function. 
-func Convert_config_UtilizationShapePoint_To_v1beta2_UtilizationShapePoint(in *config.UtilizationShapePoint, out *v1beta2.UtilizationShapePoint, s conversion.Scope) error { - return autoConvert_config_UtilizationShapePoint_To_v1beta2_UtilizationShapePoint(in, out, s) -} - -func autoConvert_v1beta2_VolumeBindingArgs_To_config_VolumeBindingArgs(in *v1beta2.VolumeBindingArgs, out *config.VolumeBindingArgs, s conversion.Scope) error { - if err := v1.Convert_Pointer_int64_To_int64(&in.BindTimeoutSeconds, &out.BindTimeoutSeconds, s); err != nil { - return err - } - out.Shape = *(*[]config.UtilizationShapePoint)(unsafe.Pointer(&in.Shape)) - return nil -} - -// Convert_v1beta2_VolumeBindingArgs_To_config_VolumeBindingArgs is an autogenerated conversion function. -func Convert_v1beta2_VolumeBindingArgs_To_config_VolumeBindingArgs(in *v1beta2.VolumeBindingArgs, out *config.VolumeBindingArgs, s conversion.Scope) error { - return autoConvert_v1beta2_VolumeBindingArgs_To_config_VolumeBindingArgs(in, out, s) -} - -func autoConvert_config_VolumeBindingArgs_To_v1beta2_VolumeBindingArgs(in *config.VolumeBindingArgs, out *v1beta2.VolumeBindingArgs, s conversion.Scope) error { - if err := v1.Convert_int64_To_Pointer_int64(&in.BindTimeoutSeconds, &out.BindTimeoutSeconds, s); err != nil { - return err - } - out.Shape = *(*[]v1beta2.UtilizationShapePoint)(unsafe.Pointer(&in.Shape)) - return nil -} - -// Convert_config_VolumeBindingArgs_To_v1beta2_VolumeBindingArgs is an autogenerated conversion function. 
-func Convert_config_VolumeBindingArgs_To_v1beta2_VolumeBindingArgs(in *config.VolumeBindingArgs, out *v1beta2.VolumeBindingArgs, s conversion.Scope) error { - return autoConvert_config_VolumeBindingArgs_To_v1beta2_VolumeBindingArgs(in, out, s) -} diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/apis/config/v1beta2/zz_generated.defaults.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/apis/config/v1beta2/zz_generated.defaults.go deleted file mode 100644 index 359ee6437660..000000000000 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/apis/config/v1beta2/zz_generated.defaults.go +++ /dev/null @@ -1,73 +0,0 @@ -//go:build !ignore_autogenerated -// +build !ignore_autogenerated - -/* -Copyright The Kubernetes Authors. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -// Code generated by defaulter-gen. DO NOT EDIT. - -package v1beta2 - -import ( - runtime "k8s.io/apimachinery/pkg/runtime" - v1beta2 "k8s.io/kube-scheduler/config/v1beta2" -) - -// RegisterDefaults adds defaulters functions to the given scheme. -// Public to allow building arbitrary schemes. -// All generated defaulters are covering - they call all nested defaulters. 
-func RegisterDefaults(scheme *runtime.Scheme) error { - scheme.AddTypeDefaultingFunc(&v1beta2.DefaultPreemptionArgs{}, func(obj interface{}) { SetObjectDefaults_DefaultPreemptionArgs(obj.(*v1beta2.DefaultPreemptionArgs)) }) - scheme.AddTypeDefaultingFunc(&v1beta2.InterPodAffinityArgs{}, func(obj interface{}) { SetObjectDefaults_InterPodAffinityArgs(obj.(*v1beta2.InterPodAffinityArgs)) }) - scheme.AddTypeDefaultingFunc(&v1beta2.KubeSchedulerConfiguration{}, func(obj interface{}) { - SetObjectDefaults_KubeSchedulerConfiguration(obj.(*v1beta2.KubeSchedulerConfiguration)) - }) - scheme.AddTypeDefaultingFunc(&v1beta2.NodeResourcesBalancedAllocationArgs{}, func(obj interface{}) { - SetObjectDefaults_NodeResourcesBalancedAllocationArgs(obj.(*v1beta2.NodeResourcesBalancedAllocationArgs)) - }) - scheme.AddTypeDefaultingFunc(&v1beta2.NodeResourcesFitArgs{}, func(obj interface{}) { SetObjectDefaults_NodeResourcesFitArgs(obj.(*v1beta2.NodeResourcesFitArgs)) }) - scheme.AddTypeDefaultingFunc(&v1beta2.PodTopologySpreadArgs{}, func(obj interface{}) { SetObjectDefaults_PodTopologySpreadArgs(obj.(*v1beta2.PodTopologySpreadArgs)) }) - scheme.AddTypeDefaultingFunc(&v1beta2.VolumeBindingArgs{}, func(obj interface{}) { SetObjectDefaults_VolumeBindingArgs(obj.(*v1beta2.VolumeBindingArgs)) }) - return nil -} - -func SetObjectDefaults_DefaultPreemptionArgs(in *v1beta2.DefaultPreemptionArgs) { - SetDefaults_DefaultPreemptionArgs(in) -} - -func SetObjectDefaults_InterPodAffinityArgs(in *v1beta2.InterPodAffinityArgs) { - SetDefaults_InterPodAffinityArgs(in) -} - -func SetObjectDefaults_KubeSchedulerConfiguration(in *v1beta2.KubeSchedulerConfiguration) { - SetDefaults_KubeSchedulerConfiguration(in) -} - -func SetObjectDefaults_NodeResourcesBalancedAllocationArgs(in *v1beta2.NodeResourcesBalancedAllocationArgs) { - SetDefaults_NodeResourcesBalancedAllocationArgs(in) -} - -func SetObjectDefaults_NodeResourcesFitArgs(in *v1beta2.NodeResourcesFitArgs) { - SetDefaults_NodeResourcesFitArgs(in) -} 
- -func SetObjectDefaults_PodTopologySpreadArgs(in *v1beta2.PodTopologySpreadArgs) { - SetDefaults_PodTopologySpreadArgs(in) -} - -func SetObjectDefaults_VolumeBindingArgs(in *v1beta2.VolumeBindingArgs) { - SetDefaults_VolumeBindingArgs(in) -} diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/apis/config/v1beta3/default_plugins.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/apis/config/v1beta3/default_plugins.go index 13d479fece57..41d1a2aa8e3a 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/apis/config/v1beta3/default_plugins.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/apis/config/v1beta3/default_plugins.go @@ -92,7 +92,7 @@ type pluginIndex struct { } func mergePluginSet(defaultPluginSet, customPluginSet v1beta3.PluginSet) v1beta3.PluginSet { - disabledPlugins := sets.NewString() + disabledPlugins := sets.New[string]() enabledCustomPlugins := make(map[string]pluginIndex) // replacedPluginIndex is a set of index of plugins, which have replaced the default plugins. 
replacedPluginIndex := sets.NewInt() diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/apis/config/v1beta3/defaults.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/apis/config/v1beta3/defaults.go index b3816d36a005..374bcb5474e2 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/apis/config/v1beta3/defaults.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/apis/config/v1beta3/defaults.go @@ -55,13 +55,13 @@ func pluginsNames(p *v1beta3.Plugins) []string { p.Permit, p.QueueSort, } - n := sets.NewString() + n := sets.New[string]() for _, e := range extensions { for _, pg := range e.Enabled { n.Insert(pg.Name) } } - return n.List() + return sets.List(n) } func setDefaults_KubeSchedulerProfile(prof *v1beta3.KubeSchedulerProfile) { @@ -69,7 +69,7 @@ func setDefaults_KubeSchedulerProfile(prof *v1beta3.KubeSchedulerProfile) { prof.Plugins = mergePlugins(getDefaultPlugins(), prof.Plugins) // Set default plugin configs. scheme := GetPluginArgConversionScheme() - existingConfigs := sets.NewString() + existingConfigs := sets.New[string]() for j := range prof.PluginConfig { existingConfigs.Insert(prof.PluginConfig[j].Name) args := prof.PluginConfig[j].Args.Object diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/apis/config/v1beta3/zz_generated.conversion.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/apis/config/v1beta3/zz_generated.conversion.go index 841c537c3f7e..a12860fe77b3 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/apis/config/v1beta3/zz_generated.conversion.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/apis/config/v1beta3/zz_generated.conversion.go @@ -466,6 +466,7 @@ func autoConvert_config_KubeSchedulerConfiguration_To_v1beta3_KubeSchedulerConfi out.Profiles = nil } out.Extenders = *(*[]v1beta3.Extender)(unsafe.Pointer(&in.Extenders)) + // WARNING: in.DelayCacheUntilActive requires manual conversion: does not 
exist in peer-type return nil } diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/apis/config/validation/validation.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/apis/config/validation/validation.go index e284101607f0..d6153ed5d9d7 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/apis/config/validation/validation.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/apis/config/validation/validation.go @@ -33,7 +33,6 @@ import ( componentbasevalidation "k8s.io/component-base/config/validation" v1helper "k8s.io/kubernetes/pkg/apis/core/v1/helper" "k8s.io/kubernetes/pkg/scheduler/apis/config" - "k8s.io/kubernetes/pkg/scheduler/apis/config/v1beta2" "k8s.io/kubernetes/pkg/scheduler/apis/config/v1beta3" ) @@ -142,10 +141,6 @@ type invalidPlugins struct { // Remember to add an entry to that list when creating a new component config // version (even if the list of invalid plugins is empty). var invalidPluginsByVersion = []invalidPlugins{ - { - schemeGroupVersion: v1beta2.SchemeGroupVersion.String(), - plugins: []string{}, - }, { schemeGroupVersion: v1beta3.SchemeGroupVersion.String(), plugins: []string{}, @@ -227,7 +222,7 @@ func validatePluginConfig(path *field.Path, apiVersion string, profile *config.K } } - seenPluginConfig := make(sets.String) + seenPluginConfig := sets.New[string]() for i := range profile.PluginConfig { pluginConfigPath := path.Child("pluginConfig").Index(i) @@ -298,7 +293,7 @@ func validateCommonQueueSort(path *field.Path, profiles []config.KubeSchedulerPr func validateExtenders(fldPath *field.Path, extenders []config.Extender) []error { var errs []error binders := 0 - extenderManagedResources := sets.NewString() + extenderManagedResources := sets.New[string]() for i, extender := range extenders { path := fldPath.Index(i) if len(extender.PrioritizeVerb) > 0 && extender.Weight <= 0 { diff --git 
a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/apis/config/validation/validation_pluginargs.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/apis/config/validation/validation_pluginargs.go index 325a6bc6779f..15c9ddfaa31a 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/apis/config/validation/validation_pluginargs.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/apis/config/validation/validation_pluginargs.go @@ -31,7 +31,8 @@ import ( "k8s.io/kubernetes/pkg/scheduler/apis/config" ) -var supportedScoringStrategyTypes = sets.NewString( +// supportedScoringStrategyTypes has to be a set of strings for use with field.Unsupported +var supportedScoringStrategyTypes = sets.New( string(config.LeastAllocated), string(config.MostAllocated), string(config.RequestedToCapacityRatio), @@ -148,13 +149,13 @@ func validateTopologyKey(p *field.Path, v string) field.ErrorList { } func validateWhenUnsatisfiable(p *field.Path, v v1.UnsatisfiableConstraintAction) *field.Error { - supportedScheduleActions := sets.NewString(string(v1.DoNotSchedule), string(v1.ScheduleAnyway)) + supportedScheduleActions := sets.New(string(v1.DoNotSchedule), string(v1.ScheduleAnyway)) if len(v) == 0 { return field.Required(p, "can not be empty") } if !supportedScheduleActions.Has(string(v)) { - return field.NotSupported(p, v, supportedScheduleActions.List()) + return field.NotSupported(p, v, sets.List(supportedScheduleActions)) } return nil } @@ -221,7 +222,7 @@ func validateResources(resources []config.ResourceSpec, p *field.Path) field.Err // ValidateNodeResourcesBalancedAllocationArgs validates that NodeResourcesBalancedAllocationArgs are set correctly. 
func ValidateNodeResourcesBalancedAllocationArgs(path *field.Path, args *config.NodeResourcesBalancedAllocationArgs) error { var allErrs field.ErrorList - seenResources := sets.NewString() + seenResources := sets.New[string]() for i, resource := range args.Resources { if seenResources.Has(resource.Name) { allErrs = append(allErrs, field.Duplicate(path.Child("resources").Index(i).Child("name"), resource.Name)) @@ -270,7 +271,7 @@ func ValidateVolumeBindingArgs(path *field.Path, args *config.VolumeBindingArgs) }) } -// ValidateVolumeBindingArgs validates that VolumeBindingArgs with scheduler features. +// ValidateVolumeBindingArgsWithOptions validates that VolumeBindingArgs and VolumeBindingArgsValidationOptions with scheduler features. func ValidateVolumeBindingArgsWithOptions(path *field.Path, args *config.VolumeBindingArgs, opts VolumeBindingArgsValidationOptions) error { var allErrs field.ErrorList @@ -313,7 +314,7 @@ func ValidateNodeResourcesFitArgs(path *field.Path, args *config.NodeResourcesFi strategyPath := path.Child("scoringStrategy") if args.ScoringStrategy != nil { if !supportedScoringStrategyTypes.Has(string(args.ScoringStrategy.Type)) { - allErrs = append(allErrs, field.NotSupported(strategyPath.Child("type"), args.ScoringStrategy.Type, supportedScoringStrategyTypes.List())) + allErrs = append(allErrs, field.NotSupported(strategyPath.Child("type"), args.ScoringStrategy.Type, sets.List(supportedScoringStrategyTypes))) } allErrs = append(allErrs, validateResources(args.ScoringStrategy.Resources, strategyPath.Child("resources"))...) 
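The hunks above follow one recurring migration: `sets.NewString()`/`sets.String` replaced by the generic `sets.New[string]()`/`sets.Set[string]`, with the `.List()` method replaced by the free function `sets.List(...)`. As a rough sketch of that API shape, here is a hand-rolled generic set that mirrors it, so the before/after reads can be tried without the `k8s.io/apimachinery` dependency; `Set`, `New`, and `List` below are stand-ins, not the real implementation.

```go
package main

import (
	"fmt"
	"sort"
)

// Set is a tiny stand-in for apimachinery's generic sets.Set[T].
type Set[T comparable] map[T]struct{}

// New mirrors sets.New: build a set from initial items.
func New[T comparable](items ...T) Set[T] {
	s := Set[T]{}
	s.Insert(items...)
	return s
}

// Insert adds items to the set.
func (s Set[T]) Insert(items ...T) {
	for _, item := range items {
		s[item] = struct{}{}
	}
}

// Has reports membership.
func (s Set[T]) Has(item T) bool {
	_, ok := s[item]
	return ok
}

// List mirrors sets.List for strings: a sorted slice of the members.
// (The real sets.List is a free function constrained to ordered types,
// which is why the diffs change n.List() into sets.List(n).)
func List(s Set[string]) []string {
	out := make([]string, 0, len(s))
	for item := range s {
		out = append(out, item)
	}
	sort.Strings(out)
	return out
}

func main() {
	// Before: n := sets.NewString(); n.Insert(...); return n.List()
	// After:  n := sets.New[string](); n.Insert(...); return sets.List(n)
	n := New[string]()
	n.Insert("NodeResourcesFit", "VolumeBinding")
	fmt.Println(List(n), n.Has("VolumeBinding"))
}
```

The mechanical part of the migration is the `.List()` call site: it becomes a package-level function because Go methods cannot add type constraints beyond the receiver's.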
if args.ScoringStrategy.RequestedToCapacityRatio != nil { diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/eventhandlers.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/eventhandlers.go index 95ed57114b56..5d1fe0dde47e 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/eventhandlers.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/eventhandlers.go @@ -17,14 +17,17 @@ limitations under the License. package scheduler import ( + "context" "fmt" "reflect" "strings" + "time" v1 "k8s.io/api/core/v1" storagev1 "k8s.io/api/storage/v1" "k8s.io/apimachinery/pkg/runtime/schema" utilruntime "k8s.io/apimachinery/pkg/util/runtime" + "k8s.io/apimachinery/pkg/util/wait" utilfeature "k8s.io/apiserver/pkg/util/feature" "k8s.io/client-go/dynamic/dynamicinformer" "k8s.io/client-go/informers" @@ -43,9 +46,10 @@ import ( ) func (sched *Scheduler) onStorageClassAdd(obj interface{}) { + logger := sched.logger sc, ok := obj.(*storagev1.StorageClass) if !ok { - klog.ErrorS(nil, "Cannot convert to *storagev1.StorageClass", "obj", obj) + logger.Error(nil, "Cannot convert to *storagev1.StorageClass", "obj", obj) return } @@ -56,42 +60,45 @@ func (sched *Scheduler) onStorageClassAdd(obj interface{}) { // We don't need to invalidate cached results because results will not be // cached for pod that has unbound immediate PVCs. 
if sc.VolumeBindingMode != nil && *sc.VolumeBindingMode == storagev1.VolumeBindingWaitForFirstConsumer { - sched.SchedulingQueue.MoveAllToActiveOrBackoffQueue(queue.StorageClassAdd, nil) + sched.SchedulingQueue.MoveAllToActiveOrBackoffQueue(logger, queue.StorageClassAdd, nil, sc, nil) } } func (sched *Scheduler) addNodeToCache(obj interface{}) { + logger := sched.logger node, ok := obj.(*v1.Node) if !ok { - klog.ErrorS(nil, "Cannot convert to *v1.Node", "obj", obj) + logger.Error(nil, "Cannot convert to *v1.Node", "obj", obj) return } - nodeInfo := sched.Cache.AddNode(node) - klog.V(3).InfoS("Add event for node", "node", klog.KObj(node)) - sched.SchedulingQueue.MoveAllToActiveOrBackoffQueue(queue.NodeAdd, preCheckForNode(nodeInfo)) + nodeInfo := sched.Cache.AddNode(logger, node) + logger.V(3).Info("Add event for node", "node", klog.KObj(node)) + sched.SchedulingQueue.MoveAllToActiveOrBackoffQueue(logger, queue.NodeAdd, nil, node, preCheckForNode(nodeInfo)) } func (sched *Scheduler) updateNodeInCache(oldObj, newObj interface{}) { + logger := sched.logger oldNode, ok := oldObj.(*v1.Node) if !ok { - klog.ErrorS(nil, "Cannot convert oldObj to *v1.Node", "oldObj", oldObj) + logger.Error(nil, "Cannot convert oldObj to *v1.Node", "oldObj", oldObj) return } newNode, ok := newObj.(*v1.Node) if !ok { - klog.ErrorS(nil, "Cannot convert newObj to *v1.Node", "newObj", newObj) + logger.Error(nil, "Cannot convert newObj to *v1.Node", "newObj", newObj) return } - nodeInfo := sched.Cache.UpdateNode(oldNode, newNode) + nodeInfo := sched.Cache.UpdateNode(logger, oldNode, newNode) // Only requeue unschedulable pods if the node became more schedulable. 
if event := nodeSchedulingPropertiesChange(newNode, oldNode); event != nil { - sched.SchedulingQueue.MoveAllToActiveOrBackoffQueue(*event, preCheckForNode(nodeInfo)) + sched.SchedulingQueue.MoveAllToActiveOrBackoffQueue(logger, *event, oldNode, newNode, preCheckForNode(nodeInfo)) } } func (sched *Scheduler) deleteNodeFromCache(obj interface{}) { + logger := sched.logger var node *v1.Node switch t := obj.(type) { case *v1.Node: @@ -100,28 +107,30 @@ func (sched *Scheduler) deleteNodeFromCache(obj interface{}) { var ok bool node, ok = t.Obj.(*v1.Node) if !ok { - klog.ErrorS(nil, "Cannot convert to *v1.Node", "obj", t.Obj) + logger.Error(nil, "Cannot convert to *v1.Node", "obj", t.Obj) return } default: - klog.ErrorS(nil, "Cannot convert to *v1.Node", "obj", t) + logger.Error(nil, "Cannot convert to *v1.Node", "obj", t) return } - klog.V(3).InfoS("Delete event for node", "node", klog.KObj(node)) - if err := sched.Cache.RemoveNode(node); err != nil { - klog.ErrorS(err, "Scheduler cache RemoveNode failed") + logger.V(3).Info("Delete event for node", "node", klog.KObj(node)) + if err := sched.Cache.RemoveNode(logger, node); err != nil { + logger.Error(err, "Scheduler cache RemoveNode failed") } } func (sched *Scheduler) addPodToSchedulingQueue(obj interface{}) { + logger := sched.logger pod := obj.(*v1.Pod) - klog.V(3).InfoS("Add event for unscheduled pod", "pod", klog.KObj(pod)) - if err := sched.SchedulingQueue.Add(pod); err != nil { + logger.V(3).Info("Add event for unscheduled pod", "pod", klog.KObj(pod)) + if err := sched.SchedulingQueue.Add(logger, pod); err != nil { utilruntime.HandleError(fmt.Errorf("unable to queue %T: %v", obj, err)) } } func (sched *Scheduler) updatePodInSchedulingQueue(oldObj, newObj interface{}) { + logger := sched.logger oldPod, newPod := oldObj.(*v1.Pod), newObj.(*v1.Pod) // Bypass update event that carries identical objects; otherwise, a duplicated // Pod may go through scheduling and cause unexpected behavior (see #96071). 
@@ -137,12 +146,13 @@ func (sched *Scheduler) updatePodInSchedulingQueue(oldObj, newObj interface{}) { return } - if err := sched.SchedulingQueue.Update(oldPod, newPod); err != nil { + if err := sched.SchedulingQueue.Update(logger, oldPod, newPod); err != nil { utilruntime.HandleError(fmt.Errorf("unable to update %T: %v", newObj, err)) } } func (sched *Scheduler) deletePodFromSchedulingQueue(obj interface{}) { + logger := sched.logger var pod *v1.Pod switch t := obj.(type) { case *v1.Pod: @@ -158,7 +168,7 @@ func (sched *Scheduler) deletePodFromSchedulingQueue(obj interface{}) { utilruntime.HandleError(fmt.Errorf("unable to handle object in %T: %T", sched, obj)) return } - klog.V(3).InfoS("Delete event for unscheduled pod", "pod", klog.KObj(pod)) + logger.V(3).Info("Delete event for unscheduled pod", "pod", klog.KObj(pod)) if err := sched.SchedulingQueue.Delete(pod); err != nil { utilruntime.HandleError(fmt.Errorf("unable to dequeue %T: %v", obj, err)) } @@ -166,53 +176,56 @@ func (sched *Scheduler) deletePodFromSchedulingQueue(obj interface{}) { if err != nil { // This shouldn't happen, because we only accept for scheduling the pods // which specify a scheduler name that matches one of the profiles. - klog.ErrorS(err, "Unable to get profile", "pod", klog.KObj(pod)) + logger.Error(err, "Unable to get profile", "pod", klog.KObj(pod)) return } // If a waiting pod is rejected, it indicates it's previously assumed and we're // removing it from the scheduler cache. In this case, signal a AssignedPodDelete // event to immediately retry some unscheduled Pods. 
if fwk.RejectWaitingPod(pod.UID) { - sched.SchedulingQueue.MoveAllToActiveOrBackoffQueue(queue.AssignedPodDelete, nil) + sched.SchedulingQueue.MoveAllToActiveOrBackoffQueue(logger, queue.AssignedPodDelete, pod, nil, nil) } } func (sched *Scheduler) addPodToCache(obj interface{}) { + logger := sched.logger pod, ok := obj.(*v1.Pod) if !ok { - klog.ErrorS(nil, "Cannot convert to *v1.Pod", "obj", obj) + logger.Error(nil, "Cannot convert to *v1.Pod", "obj", obj) return } - klog.V(3).InfoS("Add event for scheduled pod", "pod", klog.KObj(pod)) + logger.V(3).Info("Add event for scheduled pod", "pod", klog.KObj(pod)) - if err := sched.Cache.AddPod(pod); err != nil { - klog.ErrorS(err, "Scheduler cache AddPod failed", "pod", klog.KObj(pod)) + if err := sched.Cache.AddPod(logger, pod); err != nil { + logger.Error(err, "Scheduler cache AddPod failed", "pod", klog.KObj(pod)) } - sched.SchedulingQueue.AssignedPodAdded(pod) + sched.SchedulingQueue.AssignedPodAdded(logger, pod) } func (sched *Scheduler) updatePodInCache(oldObj, newObj interface{}) { + logger := sched.logger oldPod, ok := oldObj.(*v1.Pod) if !ok { - klog.ErrorS(nil, "Cannot convert oldObj to *v1.Pod", "oldObj", oldObj) + logger.Error(nil, "Cannot convert oldObj to *v1.Pod", "oldObj", oldObj) return } newPod, ok := newObj.(*v1.Pod) if !ok { - klog.ErrorS(nil, "Cannot convert newObj to *v1.Pod", "newObj", newObj) + logger.Error(nil, "Cannot convert newObj to *v1.Pod", "newObj", newObj) return } - klog.V(4).InfoS("Update event for scheduled pod", "pod", klog.KObj(oldPod)) + logger.V(4).Info("Update event for scheduled pod", "pod", klog.KObj(oldPod)) - if err := sched.Cache.UpdatePod(oldPod, newPod); err != nil { - klog.ErrorS(err, "Scheduler cache UpdatePod failed", "pod", klog.KObj(oldPod)) + if err := sched.Cache.UpdatePod(logger, oldPod, newPod); err != nil { + logger.Error(err, "Scheduler cache UpdatePod failed", "pod", klog.KObj(oldPod)) } - sched.SchedulingQueue.AssignedPodUpdated(newPod) + 
sched.SchedulingQueue.AssignedPodUpdated(logger, oldPod, newPod) } func (sched *Scheduler) deletePodFromCache(obj interface{}) { + logger := sched.logger var pod *v1.Pod switch t := obj.(type) { case *v1.Pod: @@ -221,19 +234,19 @@ func (sched *Scheduler) deletePodFromCache(obj interface{}) { var ok bool pod, ok = t.Obj.(*v1.Pod) if !ok { - klog.ErrorS(nil, "Cannot convert to *v1.Pod", "obj", t.Obj) + logger.Error(nil, "Cannot convert to *v1.Pod", "obj", t.Obj) return } default: - klog.ErrorS(nil, "Cannot convert to *v1.Pod", "obj", t) + logger.Error(nil, "Cannot convert to *v1.Pod", "obj", t) return } - klog.V(3).InfoS("Delete event for scheduled pod", "pod", klog.KObj(pod)) - if err := sched.Cache.RemovePod(pod); err != nil { - klog.ErrorS(err, "Scheduler cache RemovePod failed", "pod", klog.KObj(pod)) + logger.V(3).Info("Delete event for scheduled pod", "pod", klog.KObj(pod)) + if err := sched.Cache.RemovePod(logger, pod); err != nil { + logger.Error(err, "Scheduler cache RemovePod failed", "pod", klog.KObj(pod)) } - sched.SchedulingQueue.MoveAllToActiveOrBackoffQueue(queue.AssignedPodDelete, nil) + sched.SchedulingQueue.MoveAllToActiveOrBackoffQueue(logger, queue.AssignedPodDelete, pod, nil, nil) } // assignedPod selects pods that are assigned (scheduled and running). @@ -246,6 +259,24 @@ func responsibleForPod(pod *v1.Pod, profiles profile.Map) bool { return profiles.HandlesSchedulerName(pod.Spec.SchedulerName) } +const ( + // syncedPollPeriod controls how often you look at the status of your sync funcs + syncedPollPeriod = 100 * time.Millisecond +) + +// WaitForHandlersSync waits for EventHandlers to sync. 
+// It returns true if it was successful, false if the controller should shut down +func (sched *Scheduler) WaitForHandlersSync(ctx context.Context) error { + return wait.PollUntilContextCancel(ctx, syncedPollPeriod, true, func(ctx context.Context) (done bool, err error) { + for _, handler := range sched.registeredHandlers { + if !handler.HasSynced() { + return false, nil + } + } + return true, nil + }) +} + // addAllEventHandlers is a helper function used in tests and in Scheduler // to add event handlers for various informers. func addAllEventHandlers( @@ -253,9 +284,14 @@ func addAllEventHandlers( informerFactory informers.SharedInformerFactory, dynInformerFactory dynamicinformer.DynamicSharedInformerFactory, gvkMap map[framework.GVK]framework.ActionType, -) { +) error { + var ( + handlerRegistration cache.ResourceEventHandlerRegistration + err error + handlers []cache.ResourceEventHandlerRegistration + ) // scheduled pod cache - informerFactory.Core().V1().Pods().Informer().AddEventHandler( + if handlerRegistration, err = informerFactory.Core().V1().Pods().Informer().AddEventHandler( cache.FilteringResourceEventHandler{ FilterFunc: func(obj interface{}) bool { switch t := obj.(type) { @@ -280,9 +316,13 @@ func addAllEventHandlers( DeleteFunc: sched.deletePodFromCache, }, }, - ) + ); err != nil { + return err + } + handlers = append(handlers, handlerRegistration) + // unscheduled pod queue - informerFactory.Core().V1().Pods().Informer().AddEventHandler( + if handlerRegistration, err = informerFactory.Core().V1().Pods().Informer().AddEventHandler( cache.FilteringResourceEventHandler{ FilterFunc: func(obj interface{}) bool { switch t := obj.(type) { @@ -307,34 +347,41 @@ func addAllEventHandlers( DeleteFunc: sched.deletePodFromSchedulingQueue, }, }, - ) + ); err != nil { + return err + } + handlers = append(handlers, handlerRegistration) - informerFactory.Core().V1().Nodes().Informer().AddEventHandler( + if handlerRegistration, err = 
informerFactory.Core().V1().Nodes().Informer().AddEventHandler( cache.ResourceEventHandlerFuncs{ AddFunc: sched.addNodeToCache, UpdateFunc: sched.updateNodeInCache, DeleteFunc: sched.deleteNodeFromCache, }, - ) + ); err != nil { + return err + } + handlers = append(handlers, handlerRegistration) + logger := sched.logger buildEvtResHandler := func(at framework.ActionType, gvk framework.GVK, shortGVK string) cache.ResourceEventHandlerFuncs { funcs := cache.ResourceEventHandlerFuncs{} if at&framework.Add != 0 { evt := framework.ClusterEvent{Resource: gvk, ActionType: framework.Add, Label: fmt.Sprintf("%vAdd", shortGVK)} - funcs.AddFunc = func(_ interface{}) { - sched.SchedulingQueue.MoveAllToActiveOrBackoffQueue(evt, nil) + funcs.AddFunc = func(obj interface{}) { + sched.SchedulingQueue.MoveAllToActiveOrBackoffQueue(logger, evt, nil, obj, nil) } } if at&framework.Update != 0 { evt := framework.ClusterEvent{Resource: gvk, ActionType: framework.Update, Label: fmt.Sprintf("%vUpdate", shortGVK)} - funcs.UpdateFunc = func(_, _ interface{}) { - sched.SchedulingQueue.MoveAllToActiveOrBackoffQueue(evt, nil) + funcs.UpdateFunc = func(old, obj interface{}) { + sched.SchedulingQueue.MoveAllToActiveOrBackoffQueue(logger, evt, old, obj, nil) } } if at&framework.Delete != 0 { evt := framework.ClusterEvent{Resource: gvk, ActionType: framework.Delete, Label: fmt.Sprintf("%vDelete", shortGVK)} - funcs.DeleteFunc = func(_ interface{}) { - sched.SchedulingQueue.MoveAllToActiveOrBackoffQueue(evt, nil) + funcs.DeleteFunc = func(obj interface{}) { + sched.SchedulingQueue.MoveAllToActiveOrBackoffQueue(logger, evt, obj, nil, nil) } } return funcs @@ -345,17 +392,26 @@ func addAllEventHandlers( case framework.Node, framework.Pod: // Do nothing. 
case framework.CSINode: - informerFactory.Storage().V1().CSINodes().Informer().AddEventHandler( + if handlerRegistration, err = informerFactory.Storage().V1().CSINodes().Informer().AddEventHandler( buildEvtResHandler(at, framework.CSINode, "CSINode"), - ) + ); err != nil { + return err + } + handlers = append(handlers, handlerRegistration) case framework.CSIDriver: - informerFactory.Storage().V1().CSIDrivers().Informer().AddEventHandler( + if handlerRegistration, err = informerFactory.Storage().V1().CSIDrivers().Informer().AddEventHandler( buildEvtResHandler(at, framework.CSIDriver, "CSIDriver"), - ) + ); err != nil { + return err + } + handlers = append(handlers, handlerRegistration) case framework.CSIStorageCapacity: - informerFactory.Storage().V1().CSIStorageCapacities().Informer().AddEventHandler( + if handlerRegistration, err = informerFactory.Storage().V1().CSIStorageCapacities().Informer().AddEventHandler( buildEvtResHandler(at, framework.CSIStorageCapacity, "CSIStorageCapacity"), - ) + ); err != nil { + return err + } + handlers = append(handlers, handlerRegistration) case framework.PersistentVolume: // MaxPDVolumeCountPredicate: since it relies on the counts of PV. // @@ -370,42 +426,60 @@ func addAllEventHandlers( // bindings due to conflicts if PVs are updated by PV controller or other // parties, then scheduler will add pod back to unschedulable queue. We // need to move pods to active queue on PV update for this scenario. - informerFactory.Core().V1().PersistentVolumes().Informer().AddEventHandler( + if handlerRegistration, err = informerFactory.Core().V1().PersistentVolumes().Informer().AddEventHandler( buildEvtResHandler(at, framework.PersistentVolume, "Pv"), - ) + ); err != nil { + return err + } + handlers = append(handlers, handlerRegistration) case framework.PersistentVolumeClaim: // MaxPDVolumeCountPredicate: add/update PVC will affect counts of PV when it is bound. 
- informerFactory.Core().V1().PersistentVolumeClaims().Informer().AddEventHandler( + if handlerRegistration, err = informerFactory.Core().V1().PersistentVolumeClaims().Informer().AddEventHandler( buildEvtResHandler(at, framework.PersistentVolumeClaim, "Pvc"), - ) + ); err != nil { + return err + } + handlers = append(handlers, handlerRegistration) case framework.PodSchedulingContext: if utilfeature.DefaultFeatureGate.Enabled(features.DynamicResourceAllocation) { - _, _ = informerFactory.Resource().V1alpha2().PodSchedulingContexts().Informer().AddEventHandler( + if handlerRegistration, err = informerFactory.Resource().V1alpha2().PodSchedulingContexts().Informer().AddEventHandler( buildEvtResHandler(at, framework.PodSchedulingContext, "PodSchedulingContext"), - ) + ); err != nil { + return err + } } + handlers = append(handlers, handlerRegistration) case framework.ResourceClaim: if utilfeature.DefaultFeatureGate.Enabled(features.DynamicResourceAllocation) { - _, _ = informerFactory.Resource().V1alpha2().ResourceClaims().Informer().AddEventHandler( + if handlerRegistration, err = informerFactory.Resource().V1alpha2().ResourceClaims().Informer().AddEventHandler( buildEvtResHandler(at, framework.ResourceClaim, "ResourceClaim"), - ) + ); err != nil { + return err + } } + handlers = append(handlers, handlerRegistration) case framework.StorageClass: if at&framework.Add != 0 { - informerFactory.Storage().V1().StorageClasses().Informer().AddEventHandler( + if handlerRegistration, err = informerFactory.Storage().V1().StorageClasses().Informer().AddEventHandler( cache.ResourceEventHandlerFuncs{ AddFunc: sched.onStorageClassAdd, }, - ) + ); err != nil { + return err + } + handlers = append(handlers, handlerRegistration) } if at&framework.Update != 0 { - informerFactory.Storage().V1().StorageClasses().Informer().AddEventHandler( + if handlerRegistration, err = informerFactory.Storage().V1().StorageClasses().Informer().AddEventHandler( cache.ResourceEventHandlerFuncs{ - 
UpdateFunc: func(_, _ interface{}) { - sched.SchedulingQueue.MoveAllToActiveOrBackoffQueue(queue.StorageClassUpdate, nil) + UpdateFunc: func(old, obj interface{}) { + sched.SchedulingQueue.MoveAllToActiveOrBackoffQueue(logger, queue.StorageClassUpdate, old, obj, nil) }, }, - ) + ); err != nil { + return err + } + handlers = append(handlers, handlerRegistration) } default: // Tests may not instantiate dynInformerFactory. @@ -421,17 +495,22 @@ func addAllEventHandlers( // - foos.v1 (2 sections) // - foo.v1.example.com (the first section should be plural) if strings.Count(string(gvk), ".") < 2 { - klog.ErrorS(nil, "incorrect event registration", "gvk", gvk) + logger.Error(nil, "incorrect event registration", "gvk", gvk) continue } // Fall back to try dynamic informers. gvr, _ := schema.ParseResourceArg(string(gvk)) dynInformer := dynInformerFactory.ForResource(*gvr).Informer() - dynInformer.AddEventHandler( + if handlerRegistration, err = dynInformer.AddEventHandler( buildEvtResHandler(at, gvk, strings.Title(gvr.Resource)), - ) + ); err != nil { + return err + } + handlers = append(handlers, handlerRegistration) } } + sched.registeredHandlers = handlers + return nil } func nodeSchedulingPropertiesChange(newNode *v1.Node, oldNode *v1.Node) *framework.ClusterEvent { diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/extender.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/extender.go index 8350e04c95af..7ccb9e6ca691 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/extender.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/extender.go @@ -48,7 +48,7 @@ type HTTPExtender struct { weight int64 client *http.Client nodeCacheCapable bool - managedResources sets.String + managedResources sets.Set[string] ignorable bool } @@ -96,7 +96,7 @@ func NewHTTPExtender(config *schedulerapi.Extender) (framework.Extender, error) Transport: transport, Timeout: config.HTTPTimeout.Duration, } - managedResources := 
sets.NewString() + managedResources := sets.New[string]() for _, r := range config.ManagedResources { managedResources.Insert(string(r.Name)) } diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/cycle_state.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/cycle_state.go index 9227fe545d87..710e16daf700 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/cycle_state.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/cycle_state.go @@ -43,12 +43,12 @@ type StateKey string // StateData stored by one plugin can be read, altered, or deleted by another plugin. // CycleState does not provide any data protection, as all plugins are assumed to be // trusted. -// Note: CycleState uses a sync.Map to back the storage. It's aimed to optimize for the "write once and read many times" scenarios. -// It is the recommended pattern used in all in-tree plugins - plugin-specific state is written once in PreFilter/PreScore and afterwards read many times in Filter/Score. +// Note: CycleState uses a sync.Map to back the storage, because it is thread safe. It's aimed to optimize for the "write once and read many times" scenarios. +// It is the recommended pattern used in all in-tree plugins - plugin-specific state is written once in PreFilter/PreScore and afterward read many times in Filter/Score. type CycleState struct { // storage is keyed with StateKey, and valued with StateData. storage sync.Map - // if recordPluginMetrics is true, PluginExecutionDuration will be recorded for this cycle. + // if recordPluginMetrics is true, metrics.PluginExecutionDuration will be recorded for this cycle. recordPluginMetrics bool // SkipFilterPlugins are plugins that will be skipped in the Filter extension point. 
SkipFilterPlugins sets.Set[string] @@ -61,7 +61,7 @@ func NewCycleState() *CycleState { return &CycleState{} } -// ShouldRecordPluginMetrics returns whether PluginExecutionDuration metrics should be recorded. +// ShouldRecordPluginMetrics returns whether metrics.PluginExecutionDuration metrics should be recorded. func (c *CycleState) ShouldRecordPluginMetrics() bool { if c == nil { return false @@ -84,10 +84,12 @@ func (c *CycleState) Clone() *CycleState { return nil } copy := NewCycleState() + // Safe copy storage in case of overwriting. c.storage.Range(func(k, v interface{}) bool { copy.storage.Store(k, v.(StateData).Clone()) return true }) + // The below are not mutated, so we don't have to safe copy. copy.recordPluginMetrics = c.recordPluginMetrics copy.SkipFilterPlugins = c.SkipFilterPlugins copy.SkipScorePlugins = c.SkipScorePlugins @@ -96,8 +98,9 @@ func (c *CycleState) Clone() *CycleState { } // Read retrieves data with the given "key" from CycleState. If the key is not -// present an error is returned. -// This function is thread safe by using sync.Map. +// present, ErrNotFound is returned. +// +// See CycleState for notes on concurrency. func (c *CycleState) Read(key StateKey) (StateData, error) { if v, ok := c.storage.Load(key); ok { return v.(StateData), nil @@ -106,13 +109,15 @@ func (c *CycleState) Read(key StateKey) (StateData, error) { } // Write stores the given "val" in CycleState with the given "key". -// This function is thread safe by using sync.Map. +// +// See CycleState for notes on concurrency. func (c *CycleState) Write(key StateKey, val StateData) { c.storage.Store(key, val) } // Delete deletes data with the given key from CycleState. -// This function is thread safe by using sync.Map. +// +// See CycleState for notes on concurrency. 
func (c *CycleState) Delete(key StateKey) { c.storage.Delete(key) } diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/interface.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/interface.go index 374efa16ee81..6fc1564e3870 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/interface.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/interface.go @@ -35,6 +35,7 @@ import ( clientset "k8s.io/client-go/kubernetes" restclient "k8s.io/client-go/rest" "k8s.io/client-go/tools/events" + "k8s.io/klog/v2" "k8s.io/kubernetes/pkg/scheduler/apis/config" "k8s.io/kubernetes/pkg/scheduler/framework/parallelize" ) @@ -322,13 +323,15 @@ type QueueSortPlugin interface { // move unschedulable Pods in internal scheduling queues. Plugins // that fail pod scheduling (e.g., Filter plugins) are expected to implement this interface. type EnqueueExtensions interface { + Plugin // EventsToRegister returns a series of possible events that may cause a Pod - // failed by this plugin schedulable. + // failed by this plugin schedulable. Each event has a callback function that + // filters out events to reduce useless retry of Pod's scheduling. // The events will be registered when instantiating the internal scheduling queue, // and leveraged to build event handlers dynamically. // Note: the returned list needs to be static (not depend on configuration parameters); // otherwise it would lead to undefined behavior. - EventsToRegister() []ClusterEvent + EventsToRegister() []ClusterEventWithHint } // PreFilterExtensions is an interface that is included in plugins that allow specifying @@ -512,6 +515,9 @@ type Framework interface { // PreEnqueuePlugins returns the registered preEnqueue plugins. PreEnqueuePlugins() []PreEnqueuePlugin + // EnqueueExtensions returns the registered Enqueue extensions. 
+ EnqueueExtensions() []EnqueueExtensions + // QueueSortFunc returns the function to sort pods in scheduling queue QueueSortFunc() LessFunc @@ -641,7 +647,7 @@ type Handle interface { type PreFilterResult struct { // The set of nodes that should be considered downstream; if nil then // all nodes are eligible. - NodeNames sets.String + NodeNames sets.Set[string] } func (p *PreFilterResult) AllNodes() bool { @@ -704,11 +710,11 @@ func (ni *NominatingInfo) Mode() NominatingMode { type PodNominator interface { // AddNominatedPod adds the given pod to the nominator or // updates it if it already exists. - AddNominatedPod(pod *PodInfo, nominatingInfo *NominatingInfo) + AddNominatedPod(logger klog.Logger, pod *PodInfo, nominatingInfo *NominatingInfo) // DeleteNominatedPodIfExists deletes nominatedPod from internal cache. It's a no-op if it doesn't exist. DeleteNominatedPodIfExists(pod *v1.Pod) // UpdateNominatedPod updates the <oldPod> with <newPodInfo>. - UpdateNominatedPod(oldPod *v1.Pod, newPodInfo *PodInfo) + UpdateNominatedPod(logger klog.Logger, oldPod *v1.Pod, newPodInfo *PodInfo) // NominatedPodsForNode returns nominatedPods on the given node. 
NominatedPodsForNode(nodeName string) []*PodInfo } diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/defaultpreemption/default_preemption.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/defaultpreemption/default_preemption.go index b096f93ed147..a2d20968d110 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/defaultpreemption/default_preemption.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/defaultpreemption/default_preemption.go @@ -142,6 +142,7 @@ func (pl *DefaultPreemption) SelectVictimsOnNode( pod *v1.Pod, nodeInfo *framework.NodeInfo, pdbs []*policy.PodDisruptionBudget) ([]*v1.Pod, int, *framework.Status) { + logger := klog.FromContext(ctx) var potentialVictims []*framework.PodInfo removePod := func(rpi *framework.PodInfo) error { if err := nodeInfo.RemovePod(rpi.Pod); err != nil { @@ -207,7 +208,7 @@ func (pl *DefaultPreemption) SelectVictimsOnNode( } rpi := pi.Pod victims = append(victims, rpi) - klog.V(5).InfoS("Pod is a potential preemption victim on node", "pod", klog.KObj(rpi), "node", klog.KObj(nodeInfo.Node())) + logger.V(5).Info("Pod is a potential preemption victim on node", "pod", klog.KObj(rpi), "node", klog.KObj(nodeInfo.Node())) } return fits, nil } diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/dynamicresources/dynamicresources.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/dynamicresources/dynamicresources.go index 081b77f51ce9..471c6e2bb596 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/dynamicresources/dynamicresources.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/dynamicresources/dynamicresources.go @@ -23,8 +23,11 @@ import ( "sort" "sync" + "github.com/google/go-cmp/cmp" + v1 "k8s.io/api/core/v1" resourcev1alpha2 
"k8s.io/api/resource/v1alpha2" + apiequality "k8s.io/apimachinery/pkg/api/equality" apierrors "k8s.io/apimachinery/pkg/api/errors" metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" "k8s.io/apimachinery/pkg/runtime" @@ -38,6 +41,7 @@ import ( "k8s.io/kubernetes/pkg/scheduler/framework" "k8s.io/kubernetes/pkg/scheduler/framework/plugins/feature" "k8s.io/kubernetes/pkg/scheduler/framework/plugins/names" + schedutil "k8s.io/kubernetes/pkg/scheduler/util" ) const ( @@ -169,7 +173,7 @@ func (d *stateData) publishPodSchedulingContexts(ctx context.Context, clientset } if loggerV := logger.V(6); loggerV.Enabled() { // At a high enough log level, dump the entire object. - loggerV.Info(msg, "podSchedulingCtxDump", schedulingCtx) + loggerV.Info(msg, "podSchedulingCtxDump", klog.Format(schedulingCtx)) } else { logger.V(5).Info(msg, "podSchedulingCtx", klog.KObj(schedulingCtx)) } @@ -208,6 +212,7 @@ func statusForClaim(schedulingCtx *resourcev1alpha2.PodSchedulingContext, podCla // dynamicResources is a plugin that ensures that ResourceClaims are allocated. 
type dynamicResources struct { enabled bool + fh framework.Handle clientset kubernetes.Interface claimLister resourcev1alpha2listers.ResourceClaimLister classLister resourcev1alpha2listers.ResourceClassLister @@ -223,6 +228,7 @@ func New(plArgs runtime.Object, fh framework.Handle, fts feature.Features) (fram return &dynamicResources{ enabled: true, + fh: fh, clientset: fh.ClientSet(), claimLister: fh.SharedInformerFactory().Resource().V1alpha2().ResourceClaims().Lister(), classLister: fh.SharedInformerFactory().Resource().V1alpha2().ResourceClasses().Lister(), @@ -230,6 +236,7 @@ func New(plArgs runtime.Object, fh framework.Handle, fts feature.Features) (fram }, nil } +var _ framework.PreEnqueuePlugin = &dynamicResources{} var _ framework.PreFilterPlugin = &dynamicResources{} var _ framework.FilterPlugin = &dynamicResources{} var _ framework.PostFilterPlugin = &dynamicResources{} @@ -245,59 +252,280 @@ func (pl *dynamicResources) Name() string { // EventsToRegister returns the possible events that may make a Pod // failed by this plugin schedulable. -func (pl *dynamicResources) EventsToRegister() []framework.ClusterEvent { +func (pl *dynamicResources) EventsToRegister() []framework.ClusterEventWithHint { if !pl.enabled { return nil } - - events := []framework.ClusterEvent{ + events := []framework.ClusterEventWithHint{ // Allocation is tracked in ResourceClaims, so any changes may make the pods schedulable. - {Resource: framework.ResourceClaim, ActionType: framework.Add | framework.Update}, + {Event: framework.ClusterEvent{Resource: framework.ResourceClaim, ActionType: framework.Add | framework.Update}, QueueingHintFn: pl.isSchedulableAfterClaimChange}, // When a driver has provided additional information, a pod waiting for that information // may be schedulable. - // TODO (#113702): can we change this so that such an event does not trigger *all* pods? 
- // Yes: https://github.com/kubernetes/kubernetes/blob/abcbaed0784baf5ed2382aae9705a8918f2daa18/pkg/scheduler/eventhandlers.go#L70 - {Resource: framework.PodSchedulingContext, ActionType: framework.Add | framework.Update}, + {Event: framework.ClusterEvent{Resource: framework.PodSchedulingContext, ActionType: framework.Add | framework.Update}, QueueingHintFn: pl.isSchedulableAfterPodSchedulingContextChange}, // A resource might depend on node labels for topology filtering. // A new or updated node may make pods schedulable. - {Resource: framework.Node, ActionType: framework.Add | framework.UpdateNodeLabel}, + {Event: framework.ClusterEvent{Resource: framework.Node, ActionType: framework.Add | framework.UpdateNodeLabel}}, } return events } +// PreEnqueue checks if there are known reasons why a pod currently cannot be +// scheduled. When this fails, one of the registered events can trigger another +// attempt. +func (pl *dynamicResources) PreEnqueue(ctx context.Context, pod *v1.Pod) (status *framework.Status) { + if err := pl.foreachPodResourceClaim(pod, nil); err != nil { + return statusUnschedulable(klog.FromContext(ctx), err.Error()) + } + return nil +} + +// isSchedulableAfterClaimChange is invoked for all claim events reported by +// an informer. It checks whether that change made a previously unschedulable +// pod schedulable. It errs on the side of letting a pod scheduling attempt +// happen. +func (pl *dynamicResources) isSchedulableAfterClaimChange(logger klog.Logger, pod *v1.Pod, oldObj, newObj interface{}) framework.QueueingHint { + if newObj == nil { + // Deletes don't make a pod schedulable. + return framework.QueueSkip + } + + _, modifiedClaim, err := schedutil.As[*resourcev1alpha2.ResourceClaim](nil, newObj) + if err != nil { + // Shouldn't happen. 
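The `EventsToRegister` signature change from `[]framework.ClusterEvent` to `[]framework.ClusterEventWithHint` pairs each event with an optional `QueueingHintFn` that decides whether a given event is worth re-queueing the pod for. A simplified sketch of the shape, with toy stand-ins for the framework types (the real ones are in `k8s.io/kubernetes/pkg/scheduler/framework`):

```go
package main

import "fmt"

// Simplified stand-ins for the scheduler framework types.
type QueueingHint int

const (
	QueueSkip QueueingHint = iota
	QueueAfterBackoff
	QueueImmediately
)

type ClusterEvent struct {
	Resource   string
	ActionType string
}

// QueueingHintFn mirrors the callback attached to ClusterEventWithHint:
// given the old/new objects of an event, decide how to re-queue the pod.
type QueueingHintFn func(oldObj, newObj interface{}) QueueingHint

type ClusterEventWithHint struct {
	Event          ClusterEvent
	QueueingHintFn QueueingHintFn // nil means "always re-queue"
}

// hintOnClaimChange is a toy hint following the logic in the hunk above:
// deletes never help, creates help immediately, updates fall back to backoff.
func hintOnClaimChange(oldObj, newObj interface{}) QueueingHint {
	switch {
	case newObj == nil:
		return QueueSkip
	case oldObj == nil:
		return QueueImmediately
	default:
		return QueueAfterBackoff
	}
}

func main() {
	reg := []ClusterEventWithHint{
		{Event: ClusterEvent{Resource: "ResourceClaim", ActionType: "Add|Update"}, QueueingHintFn: hintOnClaimChange},
		// A hint is optional; registering only the event keeps the old behavior.
		{Event: ClusterEvent{Resource: "Node", ActionType: "Add|UpdateNodeLabel"}},
	}
	fmt.Println(reg[0].QueueingHintFn(nil, "claim")) // 2 (QueueImmediately)
	fmt.Println(reg[0].QueueingHintFn("old", nil))   // 0 (QueueSkip)
	fmt.Println(reg[1].QueueingHintFn == nil)        // true
}
```

This is why the plugin "errs on the side of letting a pod scheduling attempt happen": returning `QueueAfterBackoff` on anything unrecognized is equivalent to having no hint at all.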
+ logger.Error(err, "unexpected new object in isSchedulableAfterClaimChange") + return framework.QueueAfterBackoff + } + + usesClaim := false + if err := pl.foreachPodResourceClaim(pod, func(_ string, claim *resourcev1alpha2.ResourceClaim) { + if claim.UID == modifiedClaim.UID { + usesClaim = true + } + }); err != nil { + // This is not an unexpected error: we know that + // foreachPodResourceClaim only returns errors for "not + // schedulable". + logger.V(4).Info("pod is not schedulable", "pod", klog.KObj(pod), "claim", klog.KObj(modifiedClaim), "reason", err.Error()) + return framework.QueueSkip + } + + if !usesClaim { + // This was not the claim the pod was waiting for. + logger.V(6).Info("unrelated claim got modified", "pod", klog.KObj(pod), "claim", klog.KObj(modifiedClaim)) + return framework.QueueSkip + } + + if oldObj == nil { + logger.V(4).Info("claim for pod got created", "pod", klog.KObj(pod), "claim", klog.KObj(modifiedClaim)) + return framework.QueueImmediately + } + + // Modifications may or may not be relevant. If the entire + // status is as before, then something else must have changed + // and we don't care. What happens in practice is that the + // resource driver adds the finalizer. + originalClaim, ok := oldObj.(*resourcev1alpha2.ResourceClaim) + if !ok { + // Shouldn't happen. + logger.Error(nil, "unexpected old object in isSchedulableAfterClaimAddOrUpdate", "obj", oldObj) + return framework.QueueAfterBackoff + } + if apiequality.Semantic.DeepEqual(&originalClaim.Status, &modifiedClaim.Status) { + if loggerV := logger.V(7); loggerV.Enabled() { + // Log more information. 
+ loggerV.Info("claim for pod got modified where the pod doesn't care", "pod", klog.KObj(pod), "claim", klog.KObj(modifiedClaim), "diff", cmp.Diff(originalClaim, modifiedClaim)) + } else { + logger.V(6).Info("claim for pod got modified where the pod doesn't care", "pod", klog.KObj(pod), "claim", klog.KObj(modifiedClaim)) + } + return framework.QueueSkip + } + + logger.V(4).Info("status of claim for pod got updated", "pod", klog.KObj(pod), "claim", klog.KObj(modifiedClaim)) + return framework.QueueImmediately +} + +// isSchedulableAfterPodSchedulingContextChange is invoked for all +// PodSchedulingContext events reported by an informer. It checks whether that +// change made a previously unschedulable pod schedulable (updated) or a new +// attempt is needed to re-create the object (deleted). It errs on the side of +// letting a pod scheduling attempt happen. +func (pl *dynamicResources) isSchedulableAfterPodSchedulingContextChange(logger klog.Logger, pod *v1.Pod, oldObj, newObj interface{}) framework.QueueingHint { + // Deleted? That can happen because we ourselves delete the PodSchedulingContext while + // working on the pod. This can be ignored. + if oldObj != nil && newObj == nil { + logger.V(4).Info("PodSchedulingContext got deleted") + return framework.QueueSkip + } + + oldPodScheduling, newPodScheduling, err := schedutil.As[*resourcev1alpha2.PodSchedulingContext](oldObj, newObj) + if err != nil { + // Shouldn't happen. + logger.Error(nil, "isSchedulableAfterPodSchedulingChange") + return framework.QueueAfterBackoff + } + podScheduling := newPodScheduling // Never nil because deletes are handled above. 
+ + if podScheduling.Name != pod.Name || podScheduling.Namespace != pod.Namespace { + logger.V(7).Info("PodSchedulingContext for unrelated pod got modified", "pod", klog.KObj(pod), "podScheduling", klog.KObj(podScheduling)) + return framework.QueueSkip + } + + // If the drivers have provided information about all + // unallocated claims with delayed allocation, then the next + // scheduling attempt is able to pick a node, so we let it run + // immediately if this occurred for the first time, otherwise + // we allow backoff. + pendingDelayedClaims := 0 + if err := pl.foreachPodResourceClaim(pod, func(podResourceName string, claim *resourcev1alpha2.ResourceClaim) { + if claim.Spec.AllocationMode == resourcev1alpha2.AllocationModeWaitForFirstConsumer && + claim.Status.Allocation == nil && + !podSchedulingHasClaimInfo(podScheduling, podResourceName) { + pendingDelayedClaims++ + } + }); err != nil { + // This is not an unexpected error: we know that + // foreachPodResourceClaim only returns errors for "not + // schedulable". + logger.V(4).Info("pod is not schedulable, keep waiting", "pod", klog.KObj(pod), "reason", err.Error()) + return framework.QueueSkip + } + + // Some driver responses missing? + if pendingDelayedClaims > 0 { + // We could start a pod scheduling attempt to refresh the + // potential nodes list. But pod scheduling attempts are + // expensive and doing them too often causes the pod to enter + // backoff. Let's wait instead for all drivers to reply. 
+ if loggerV := logger.V(6); loggerV.Enabled() { + loggerV.Info("PodSchedulingContext with missing resource claim information, keep waiting", "pod", klog.KObj(pod), "podSchedulingDiff", cmp.Diff(oldPodScheduling, podScheduling)) + } else { + logger.V(5).Info("PodSchedulingContext with missing resource claim information, keep waiting", "pod", klog.KObj(pod)) + } + return framework.QueueSkip + } + + if oldPodScheduling == nil /* create */ || + len(oldPodScheduling.Status.ResourceClaims) < len(podScheduling.Status.ResourceClaims) /* new information and not incomplete (checked above) */ { + // This definitely is new information for the scheduler. Try again immediately. + logger.V(4).Info("PodSchedulingContext for pod has all required information, schedule immediately", "pod", klog.KObj(pod)) + return framework.QueueImmediately + } + + // The other situation where the scheduler needs to do + // something immediately is when the selected node doesn't + // work: waiting in the backoff queue only helps eventually + // resources on the selected node become available again. It's + // much more likely, in particular when trying to fill up the + // cluster, that the choice simply didn't work out. The risk + // here is that in a situation where the cluster really is + // full, backoff won't be used because the scheduler keeps + // trying different nodes. This should not happen when it has + // full knowledge about resource availability (= + // PodSchedulingContext.*.UnsuitableNodes is complete) but may happen + // when it doesn't (= PodSchedulingContext.*.UnsuitableNodes had to be + // truncated). + // + // Truncation only happens for very large clusters and then may slow + // down scheduling, but should not break it completely. This is + // acceptable while DRA is alpha and will be investigated further + // before moving DRA to beta. 
+ if podScheduling.Spec.SelectedNode != "" { + for _, claimStatus := range podScheduling.Status.ResourceClaims { + if sliceContains(claimStatus.UnsuitableNodes, podScheduling.Spec.SelectedNode) { + logger.V(5).Info("PodSchedulingContext has unsuitable selected node, schedule immediately", "pod", klog.KObj(pod), "selectedNode", podScheduling.Spec.SelectedNode, "podResourceName", claimStatus.Name) + return framework.QueueImmediately + } + } + } + + // Update with only the spec modified? + if oldPodScheduling != nil && + !apiequality.Semantic.DeepEqual(&oldPodScheduling.Spec, &podScheduling.Spec) && + apiequality.Semantic.DeepEqual(&oldPodScheduling.Status, &podScheduling.Status) { + logger.V(5).Info("PodSchedulingContext has only the scheduler spec changes, ignore the update", "pod", klog.KObj(pod)) + return framework.QueueSkip + } + + // Once we get here, all changes which are known to require special responses + // have been checked for. Whatever the change was, we don't know exactly how + // to handle it and thus return QueueAfterBackoff. This will cause the + // scheduler to treat the event as if no event hint callback had been provided. + // Developers who want to investigate this can enable a diff at log level 6. 
+ if loggerV := logger.V(6); loggerV.Enabled() { + loggerV.Info("PodSchedulingContext for pod with unknown changes, maybe schedule", "pod", klog.KObj(pod), "podSchedulingDiff", cmp.Diff(oldPodScheduling, podScheduling)) + } else { + logger.V(5).Info("PodSchedulingContext for pod with unknown changes, maybe schedule", "pod", klog.KObj(pod)) + } + return framework.QueueAfterBackoff + +} + +func podSchedulingHasClaimInfo(podScheduling *resourcev1alpha2.PodSchedulingContext, podResourceName string) bool { + for _, claimStatus := range podScheduling.Status.ResourceClaims { + if claimStatus.Name == podResourceName { + return true + } + } + return false +} + +func sliceContains(hay []string, needle string) bool { + for _, item := range hay { + if item == needle { + return true + } + } + return false +} + // podResourceClaims returns the ResourceClaims for all pod.Spec.PodResourceClaims. func (pl *dynamicResources) podResourceClaims(pod *v1.Pod) ([]*resourcev1alpha2.ResourceClaim, error) { claims := make([]*resourcev1alpha2.ResourceClaim, 0, len(pod.Spec.ResourceClaims)) + if err := pl.foreachPodResourceClaim(pod, func(_ string, claim *resourcev1alpha2.ResourceClaim) { + // We store the pointer as returned by the lister. The + // assumption is that if a claim gets modified while our code + // runs, the cache will store a new pointer, not mutate the + // existing object that we point to here. + claims = append(claims, claim) + }); err != nil { + return nil, err + } + return claims, nil +} + +// foreachPodResourceClaim checks that each ResourceClaim for the pod exists. +// It calls an optional handler for those claims that it finds. 
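The `podSchedulingHasClaimInfo` and `sliceContains` helpers added above are plain linear lookups. Since Go 1.18, a single generic helper covers the `sliceContains` case (and Go 1.21's `slices.Contains` makes even that unnecessary); a sketch, with `claimStatus` as a simplified stand-in for the `resourcev1alpha2` status type:

```go
package main

import "fmt"

// Generic replacement for the string-only sliceContains helper.
func contains[T comparable](hay []T, needle T) bool {
	for _, item := range hay {
		if item == needle {
			return true
		}
	}
	return false
}

// Simplified stand-in for the per-claim scheduling status.
type claimStatus struct {
	Name            string
	UnsuitableNodes []string
}

// hasClaimInfo mirrors podSchedulingHasClaimInfo: has the driver reported
// anything for this pod resource name yet?
func hasClaimInfo(statuses []claimStatus, podResourceName string) bool {
	for _, s := range statuses {
		if s.Name == podResourceName {
			return true
		}
	}
	return false
}

func main() {
	statuses := []claimStatus{{Name: "gpu", UnsuitableNodes: []string{"node-1"}}}
	fmt.Println(hasClaimInfo(statuses, "gpu"))                   // true
	fmt.Println(contains(statuses[0].UnsuitableNodes, "node-1")) // true
	fmt.Println(contains(statuses[0].UnsuitableNodes, "node-2")) // false
}
```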
+func (pl *dynamicResources) foreachPodResourceClaim(pod *v1.Pod, cb func(podResourceName string, claim *resourcev1alpha2.ResourceClaim)) error { for _, resource := range pod.Spec.ResourceClaims { - claimName := resourceclaim.Name(pod, &resource) - isEphemeral := resource.Source.ResourceClaimTemplateName != nil - claim, err := pl.claimLister.ResourceClaims(pod.Namespace).Get(claimName) + claimName, mustCheckOwner, err := resourceclaim.Name(pod, &resource) if err != nil { - // The error usually has already enough context ("resourcevolumeclaim "myclaim" not found"), - // but we can do better for generic ephemeral inline volumes where that situation - // is normal directly after creating a pod. - if isEphemeral && apierrors.IsNotFound(err) { - err = fmt.Errorf("waiting for dynamic resource controller to create the resourceclaim %q", claimName) - } - return nil, err + return err + } + // The claim name might be nil if no underlying resource claim + // was generated for the referenced claim. There are valid use + // cases when this might happen, so we simply skip it. + if claimName == nil { + continue + } + claim, err := pl.claimLister.ResourceClaims(pod.Namespace).Get(*claimName) + if err != nil { + return err } if claim.DeletionTimestamp != nil { - return nil, fmt.Errorf("resourceclaim %q is being deleted", claim.Name) + return fmt.Errorf("resourceclaim %q is being deleted", claim.Name) } - if isEphemeral { + if mustCheckOwner { if err := resourceclaim.IsForPod(pod, claim); err != nil { - return nil, err + return err } } - // We store the pointer as returned by the lister. The - // assumption is that if a claim gets modified while our code - // runs, the cache will store a new pointer, not mutate the - // existing object that we point to here. 
- claims = append(claims, claim) + if cb != nil { + cb(resource.Name, claim) + } } - return claims, nil + return nil } // PreFilter invoked at the prefilter extension point to check if pod has all @@ -305,7 +533,7 @@ func (pl *dynamicResources) podResourceClaims(pod *v1.Pod) ([]*resourcev1alpha2. // the pod cannot be scheduled at the moment on any node. func (pl *dynamicResources) PreFilter(ctx context.Context, state *framework.CycleState, pod *v1.Pod) (*framework.PreFilterResult, *framework.Status) { if !pl.enabled { - return nil, nil + return nil, framework.NewStatus(framework.Skip) } logger := klog.FromContext(ctx) @@ -321,10 +549,10 @@ func (pl *dynamicResources) PreFilter(ctx context.Context, state *framework.Cycl return nil, statusUnschedulable(logger, err.Error()) } logger.V(5).Info("pod resource claims", "pod", klog.KObj(pod), "resourceclaims", klog.KObjSlice(claims)) - // If the pod does not reference any claim, we don't need to do - // anything for it. + // If the pod does not reference any claim, + // DynamicResources Filter has nothing to do with the Pod. 
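The refactor above folds two concerns into `foreachPodResourceClaim`: validate every claim the pod references, and optionally hand each valid claim to a callback. A nil callback turns it into a pure validity check (how the new `PreEnqueue` uses it), while `podResourceClaims` passes a collecting callback. A sketch of that visitor-with-optional-callback pattern, with a toy `claim` type:

```go
package main

import "fmt"

// Toy stand-in for a ResourceClaim; only the fields the walk inspects.
type claim struct {
	Name     string
	Deleting bool
}

// forEachClaim validates each claim and, if cb is non-nil, hands valid
// claims to it — mirroring foreachPodResourceClaim's contract that it
// only returns errors for "not schedulable" conditions.
func forEachClaim(claims []claim, cb func(c claim)) error {
	for _, c := range claims {
		if c.Deleting {
			return fmt.Errorf("resourceclaim %q is being deleted", c.Name)
		}
		if cb != nil {
			cb(c)
		}
	}
	return nil
}

func main() {
	claims := []claim{{Name: "a"}, {Name: "b"}}

	// Validation-only walk (PreEnqueue style): nil callback.
	fmt.Println(forEachClaim(claims, nil)) // <nil>

	// Collecting walk (podResourceClaims style): append inside the callback.
	var names []string
	_ = forEachClaim(claims, func(c claim) { names = append(names, c.Name) })
	fmt.Println(names) // [a b]

	// Any invalid claim aborts the walk with an error.
	fmt.Println(forEachClaim([]claim{{Name: "x", Deleting: true}}, nil) != nil) // true
}
```

Sharing one walk keeps the "is this pod schedulable at all" logic in a single place instead of duplicating it across `PreEnqueue`, the queueing hints, and `PreFilter`.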
if len(claims) == 0 { - return nil, nil + return nil, framework.NewStatus(framework.Skip) } s.availableOnNodes = make([]*nodeaffinity.NodeSelector, len(claims)) @@ -576,7 +804,7 @@ func (pl *dynamicResources) PreScore(ctx context.Context, cs *framework.CycleSta sort.Strings(schedulingCtx.Spec.PotentialNodes) state.storePodSchedulingContexts(schedulingCtx) } - logger.V(5).Info("all potential nodes already set", "pod", klog.KObj(pod), "potentialnodes", nodes) + logger.V(5).Info("all potential nodes already set", "pod", klog.KObj(pod), "potentialnodes", klog.KObjSlice(nodes)) return nil } diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/feature/feature.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/feature/feature.go index 7859b01a1db1..4d1ee444cf76 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/feature/feature.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/feature/feature.go @@ -29,4 +29,5 @@ type Features struct { EnablePodSchedulingReadiness bool EnablePodDisruptionConditions bool EnableInPlacePodVerticalScaling bool + EnableSidecarContainers bool } diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/interpodaffinity/filtering.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/interpodaffinity/filtering.go index 02a7c7926930..d65524119964 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/interpodaffinity/filtering.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/interpodaffinity/filtering.go @@ -158,10 +158,7 @@ func (pl *InterPodAffinity) getExistingAntiAffinityCounts(ctx context.Context, p processNode := func(i int) { nodeInfo := nodes[i] node := nodeInfo.Node() - if node == nil { - klog.ErrorS(nil, "Node not found") - return - } + topoMap := 
make(topologyToMatchedTermCount) for _, existingPod := range nodeInfo.PodsWithRequiredAntiAffinity { topoMap.updateWithAntiAffinityTerms(existingPod.RequiredAntiAffinityTerms, pod, nsLabels, node, 1) @@ -197,10 +194,7 @@ func (pl *InterPodAffinity) getIncomingAffinityAntiAffinityCounts(ctx context.Co processNode := func(i int) { nodeInfo := allNodes[i] node := nodeInfo.Node() - if node == nil { - klog.ErrorS(nil, "Node not found") - return - } + affinity := make(topologyToMatchedTermCount) antiAffinity := make(topologyToMatchedTermCount) for _, existingPod := range nodeInfo.Pods { @@ -254,7 +248,8 @@ func (pl *InterPodAffinity) PreFilter(ctx context.Context, cycleState *framework return nil, framework.AsStatus(err) } } - s.namespaceLabels = GetNamespaceLabelsSnapshot(pod.Namespace, pl.nsLister) + logger := klog.FromContext(ctx) + s.namespaceLabels = GetNamespaceLabelsSnapshot(logger, pod.Namespace, pl.nsLister) s.existingAntiAffinityCounts = pl.getExistingAntiAffinityCounts(ctx, pod, s.namespaceLabels, nodesWithRequiredAntiAffinityPods) s.affinityCounts, s.antiAffinityCounts = pl.getIncomingAffinityAntiAffinityCounts(ctx, s.podInfo, allNodes) @@ -369,9 +364,6 @@ func satisfyPodAffinity(state *preFilterState, nodeInfo *framework.NodeInfo) boo // Filter invoked at the filter extension point. // It checks if a pod can be scheduled on the specified node with pod affinity/anti-affinity configuration. 
func (pl *InterPodAffinity) Filter(ctx context.Context, cycleState *framework.CycleState, pod *v1.Pod, nodeInfo *framework.NodeInfo) *framework.Status { - if nodeInfo.Node() == nil { - return framework.NewStatus(framework.Error, "node not found") - } state, err := getPreFilterState(cycleState) if err != nil { diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/interpodaffinity/plugin.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/interpodaffinity/plugin.go index f7e84ce09d96..b7131eaf48fe 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/interpodaffinity/plugin.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/interpodaffinity/plugin.go @@ -54,8 +54,8 @@ func (pl *InterPodAffinity) Name() string { // EventsToRegister returns the possible events that may make a failed Pod // schedulable -func (pl *InterPodAffinity) EventsToRegister() []framework.ClusterEvent { - return []framework.ClusterEvent{ +func (pl *InterPodAffinity) EventsToRegister() []framework.ClusterEventWithHint { + return []framework.ClusterEventWithHint{ // All ActionType includes the following events: // - Delete. An unschedulable Pod may fail due to violating an existing Pod's anti-affinity constraints, // deleting an existing Pod may make it schedulable. @@ -63,8 +63,8 @@ func (pl *InterPodAffinity) EventsToRegister() []framework.ClusterEvent { // an unschedulable Pod schedulable. // - Add. An unschedulable Pod may fail due to violating pod-affinity constraints, // adding an assigned Pod may make it schedulable. 
- {Resource: framework.Pod, ActionType: framework.All}, - {Resource: framework.Node, ActionType: framework.Add | framework.UpdateNodeLabel}, + {Event: framework.ClusterEvent{Resource: framework.Pod, ActionType: framework.All}}, + {Event: framework.ClusterEvent{Resource: framework.Node, ActionType: framework.Add | framework.UpdateNodeLabel}}, } } @@ -122,12 +122,12 @@ func (pl *InterPodAffinity) mergeAffinityTermNamespacesIfNotEmpty(at *framework. // GetNamespaceLabelsSnapshot returns a snapshot of the labels associated with // the namespace. -func GetNamespaceLabelsSnapshot(ns string, nsLister listersv1.NamespaceLister) (nsLabels labels.Set) { +func GetNamespaceLabelsSnapshot(logger klog.Logger, ns string, nsLister listersv1.NamespaceLister) (nsLabels labels.Set) { podNS, err := nsLister.Get(ns) if err == nil { // Create and return snapshot of the labels. return labels.Merge(podNS.Labels, nil) } - klog.V(3).InfoS("getting namespace, assuming empty set of namespace labels", "namespace", ns, "err", err) + logger.V(3).Info("getting namespace, assuming empty set of namespace labels", "namespace", ns, "err", err) return } diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/interpodaffinity/scoring.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/interpodaffinity/scoring.go index fd4cc24327df..19c8994d587e 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/interpodaffinity/scoring.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/interpodaffinity/scoring.go @@ -24,6 +24,7 @@ import ( v1 "k8s.io/api/core/v1" "k8s.io/apimachinery/pkg/labels" + "k8s.io/klog/v2" "k8s.io/kubernetes/pkg/scheduler/framework" ) @@ -131,7 +132,7 @@ func (pl *InterPodAffinity) PreScore( ) *framework.Status { if len(nodes) == 0 { // No nodes to score. 
- return nil + return framework.NewStatus(framework.Skip) } if pl.sharedLister == nil { @@ -141,21 +142,19 @@ func (pl *InterPodAffinity) PreScore( affinity := pod.Spec.Affinity hasPreferredAffinityConstraints := affinity != nil && affinity.PodAffinity != nil && len(affinity.PodAffinity.PreferredDuringSchedulingIgnoredDuringExecution) > 0 hasPreferredAntiAffinityConstraints := affinity != nil && affinity.PodAntiAffinity != nil && len(affinity.PodAntiAffinity.PreferredDuringSchedulingIgnoredDuringExecution) > 0 + hasConstraints := hasPreferredAffinityConstraints || hasPreferredAntiAffinityConstraints // Optionally ignore calculating preferences of existing pods' affinity rules // if the incoming pod has no inter-pod affinities. - if pl.args.IgnorePreferredTermsOfExistingPods && !hasPreferredAffinityConstraints && !hasPreferredAntiAffinityConstraints { - cycleState.Write(preScoreStateKey, &preScoreState{ - topologyScore: make(map[string]map[string]int64), - }) - return nil + if pl.args.IgnorePreferredTermsOfExistingPods && !hasConstraints { + return framework.NewStatus(framework.Skip) } // Unless the pod being scheduled has preferred affinity terms, we only // need to process nodes hosting pods with affinity. 
var allNodes []*framework.NodeInfo var err error - if hasPreferredAffinityConstraints || hasPreferredAntiAffinityConstraints { + if hasConstraints { allNodes, err = pl.sharedLister.NodeInfos().List() if err != nil { return framework.AsStatus(fmt.Errorf("failed to get all nodes from shared lister: %w", err)) @@ -186,19 +185,18 @@ func (pl *InterPodAffinity) PreScore( return framework.AsStatus(fmt.Errorf("updating PreferredAntiAffinityTerms: %w", err)) } } - state.namespaceLabels = GetNamespaceLabelsSnapshot(pod.Namespace, pl.nsLister) + logger := klog.FromContext(pCtx) + state.namespaceLabels = GetNamespaceLabelsSnapshot(logger, pod.Namespace, pl.nsLister) topoScores := make([]scoreMap, len(allNodes)) index := int32(-1) processNode := func(i int) { nodeInfo := allNodes[i] - if nodeInfo.Node() == nil { - return - } + // Unless the pod being scheduled has preferred affinity terms, we only // need to process pods with affinity in the node. podsToProcess := nodeInfo.PodsWithAffinity - if hasPreferredAffinityConstraints || hasPreferredAntiAffinityConstraints { + if hasConstraints { // We need to process all the pods. 
podsToProcess = nodeInfo.Pods } @@ -213,6 +211,10 @@ func (pl *InterPodAffinity) PreScore( } pl.parallelizer.Until(pCtx, len(allNodes), processNode, pl.Name()) + if index == -1 { + return framework.NewStatus(framework.Skip) + } + for i := 0; i <= int(index); i++ { state.topologyScore.append(topoScores[i]) } diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/nodeaffinity/node_affinity.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/nodeaffinity/node_affinity.go index d966e53d1d17..d9d431f9aebe 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/nodeaffinity/node_affinity.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/nodeaffinity/node_affinity.go @@ -81,9 +81,9 @@ func (s *preFilterState) Clone() framework.StateData { // EventsToRegister returns the possible events that may make a Pod // failed by this plugin schedulable. -func (pl *NodeAffinity) EventsToRegister() []framework.ClusterEvent { - return []framework.ClusterEvent{ - {Resource: framework.Node, ActionType: framework.Add | framework.Update}, +func (pl *NodeAffinity) EventsToRegister() []framework.ClusterEventWithHint { + return []framework.ClusterEventWithHint{ + {Event: framework.ClusterEvent{Resource: framework.Node, ActionType: framework.Add | framework.Update}}, } } @@ -107,14 +107,14 @@ func (pl *NodeAffinity) PreFilter(ctx context.Context, cycleState *framework.Cyc // Check if there is affinity to a specific node and return it. 
terms := affinity.NodeAffinity.RequiredDuringSchedulingIgnoredDuringExecution.NodeSelectorTerms - var nodeNames sets.String + var nodeNames sets.Set[string] for _, t := range terms { - var termNodeNames sets.String + var termNodeNames sets.Set[string] for _, r := range t.MatchFields { if r.Key == metav1.ObjectNameField && r.Operator == v1.NodeSelectorOpIn { // The requirements represent ANDed constraints, and so we need to // find the intersection of nodes. - s := sets.NewString(r.Values...) + s := sets.New(r.Values...) if termNodeNames == nil { termNodeNames = s } else { @@ -149,9 +149,7 @@ func (pl *NodeAffinity) PreFilterExtensions() framework.PreFilterExtensions { // the plugin's added affinity. func (pl *NodeAffinity) Filter(ctx context.Context, state *framework.CycleState, pod *v1.Pod, nodeInfo *framework.NodeInfo) *framework.Status { node := nodeInfo.Node() - if node == nil { - return framework.NewStatus(framework.Error, "node not found") - } + if pl.addedNodeSelector != nil && !pl.addedNodeSelector.Match(node) { return framework.NewStatus(framework.UnschedulableAndUnresolvable, errReasonEnforced) } diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/nodename/node_name.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/nodename/node_name.go index e02134f793c4..7adea806cb78 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/nodename/node_name.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/nodename/node_name.go @@ -41,9 +41,9 @@ const ( // EventsToRegister returns the possible events that may make a Pod // failed by this plugin schedulable. 
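The `sets.String` → `sets.Set[string]` changes here (and in `noderesources/fit.go` below) track apimachinery's move to a generics-based set type: `sets.NewString(...)` becomes `sets.New(...)` with the element type inferred. A minimal version of such a generic set, enough to mirror the ANDed-constraints intersection in `NodeAffinity.PreFilter`:

```go
package main

import "fmt"

// Minimal generic set, sketching the shape of apimachinery's sets.Set[T].
type Set[T comparable] map[T]struct{}

// New builds a set from items, like sets.New(r.Values...).
func New[T comparable](items ...T) Set[T] {
	s := Set[T]{}
	for _, it := range items {
		s[it] = struct{}{}
	}
	return s
}

func (s Set[T]) Has(item T) bool { _, ok := s[item]; return ok }

// Intersection keeps only items present in both sets — the operation
// PreFilter uses to AND the per-term node-name constraints.
func (s Set[T]) Intersection(other Set[T]) Set[T] {
	out := Set[T]{}
	for it := range s {
		if other.Has(it) {
			out[it] = struct{}{}
		}
	}
	return out
}

func main() {
	a := New("node-1", "node-2")
	b := New("node-2", "node-3")
	inter := a.Intersection(b)
	fmt.Println(len(inter), inter.Has("node-2")) // 1 true
}
```

One type now serves strings, node names, and resource names alike, which is why the migration also replaces `sets.String` fields in `Fit` with `sets.Set[string]`.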
-func (pl *NodeName) EventsToRegister() []framework.ClusterEvent { - return []framework.ClusterEvent{ - {Resource: framework.Node, ActionType: framework.Add | framework.Update}, +func (pl *NodeName) EventsToRegister() []framework.ClusterEventWithHint { + return []framework.ClusterEventWithHint{ + {Event: framework.ClusterEvent{Resource: framework.Node, ActionType: framework.Add | framework.Update}}, } } @@ -54,9 +54,7 @@ func (pl *NodeName) Name() string { // Filter invoked at the filter extension point. func (pl *NodeName) Filter(ctx context.Context, _ *framework.CycleState, pod *v1.Pod, nodeInfo *framework.NodeInfo) *framework.Status { - if nodeInfo.Node() == nil { - return framework.NewStatus(framework.Error, "node not found") - } + if !Fits(pod, nodeInfo) { return framework.NewStatus(framework.UnschedulableAndUnresolvable, ErrReason) } diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/nodeports/node_ports.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/nodeports/node_ports.go index bfd648efe4a3..515aab09eebc 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/nodeports/node_ports.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/nodeports/node_ports.go @@ -66,6 +66,10 @@ func getContainerPorts(pods ...*v1.Pod) []*v1.ContainerPort { for j := range pod.Spec.Containers { container := &pod.Spec.Containers[j] for k := range container.Ports { + // Only return ports with a host port specified. + if container.Ports[k].HostPort <= 0 { + continue + } ports = append(ports, &container.Ports[k]) } } @@ -76,6 +80,10 @@ func getContainerPorts(pods ...*v1.Pod) []*v1.ContainerPort { // PreFilter invoked at the prefilter extension point. 
func (pl *NodePorts) PreFilter(ctx context.Context, cycleState *framework.CycleState, pod *v1.Pod) (*framework.PreFilterResult, *framework.Status) { s := getContainerPorts(pod) + // Skip if a pod has no ports. + if len(s) == 0 { + return nil, framework.NewStatus(framework.Skip) + } cycleState.Write(preFilterStateKey, preFilterState(s)) return nil, nil } @@ -101,11 +109,11 @@ func getPreFilterState(cycleState *framework.CycleState) (preFilterState, error) // EventsToRegister returns the possible events that may make a Pod // failed by this plugin schedulable. -func (pl *NodePorts) EventsToRegister() []framework.ClusterEvent { - return []framework.ClusterEvent{ +func (pl *NodePorts) EventsToRegister() []framework.ClusterEventWithHint { + return []framework.ClusterEventWithHint{ // Due to immutable fields `spec.containers[*].ports`, pod update events are ignored. - {Resource: framework.Pod, ActionType: framework.Delete}, - {Resource: framework.Node, ActionType: framework.Add | framework.Update}, + {Event: framework.ClusterEvent{Resource: framework.Pod, ActionType: framework.Delete}}, + {Event: framework.ClusterEvent{Resource: framework.Node, ActionType: framework.Add | framework.Update}}, } } diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/noderesources/balanced_allocation.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/noderesources/balanced_allocation.go index ef6b8723b652..f375be2d42f6 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/noderesources/balanced_allocation.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/noderesources/balanced_allocation.go @@ -105,7 +105,7 @@ func (ba *BalancedAllocation) Score(ctx context.Context, state *framework.CycleS // Detail: score = (1 - std) * MaxNodeScore, where std is calculated by the root square of Σ((fraction(i)-mean)^2)/len(resources) // The algorithm is partly inspired by: 
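The `NodePorts` change above has two halves: `getContainerPorts` now drops ports without a `HostPort` (only host ports can conflict on a node), and `PreFilter` returns `framework.Skip` when nothing remains, so the Filter never runs for such pods. A simplified sketch of the filtering half, with a toy port type:

```go
package main

import "fmt"

// Toy stand-in for v1.ContainerPort; only the fields the filter reads.
type containerPort struct {
	ContainerPort int32
	HostPort      int32
}

// hostPorts mirrors the new getContainerPorts behavior: ports without a
// host port cannot conflict with other pods, so they are dropped up front.
func hostPorts(ports []containerPort) []containerPort {
	var out []containerPort
	for _, p := range ports {
		if p.HostPort <= 0 {
			continue // no host port, nothing to check against the node
		}
		out = append(out, p)
	}
	return out
}

func main() {
	ports := []containerPort{
		{ContainerPort: 8080},                 // no HostPort: ignored
		{ContainerPort: 9090, HostPort: 9090}, // kept
	}
	kept := hostPorts(ports)
	fmt.Println(len(kept), kept[0].HostPort) // 1 9090
	// An empty result is the signal for PreFilter to return framework.Skip.
	fmt.Println(len(hostPorts(nil)) == 0) // true
}
```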
// "Wei Huang et al. An Energy Efficient Virtual Machine Placement Algorithm with Balanced Resource Utilization" - return ba.score(pod, nodeInfo, s.podRequests) + return ba.score(ctx, pod, nodeInfo, s.podRequests) } // ScoreExtensions of the Score plugin. diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/noderesources/fit.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/noderesources/fit.go index 81edad6e2777..04e9bcbf7578 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/noderesources/fit.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/noderesources/fit.go @@ -24,7 +24,6 @@ import ( v1 "k8s.io/api/core/v1" "k8s.io/apimachinery/pkg/runtime" "k8s.io/apimachinery/pkg/util/sets" - "k8s.io/kubernetes/pkg/api/v1/resource" v1helper "k8s.io/kubernetes/pkg/apis/core/v1/helper" "k8s.io/kubernetes/pkg/scheduler/apis/config" @@ -82,9 +81,10 @@ var nodeResourceStrategyTypeMap = map[config.ScoringStrategyType]scorer{ // Fit is a plugin that checks if a node has sufficient resources. 
type Fit struct { - ignoredResources sets.String - ignoredResourceGroups sets.String + ignoredResources sets.Set[string] + ignoredResourceGroups sets.Set[string] enableInPlacePodVerticalScaling bool + enableSidecarContainers bool handle framework.Handle resourceAllocationScorer } @@ -165,9 +165,10 @@ func NewFit(plArgs runtime.Object, h framework.Handle, fts feature.Features) (fr } return &Fit{ - ignoredResources: sets.NewString(args.IgnoredResources...), - ignoredResourceGroups: sets.NewString(args.IgnoredResourceGroups...), + ignoredResources: sets.New(args.IgnoredResources...), + ignoredResourceGroups: sets.New(args.IgnoredResourceGroups...), enableInPlacePodVerticalScaling: fts.EnableInPlacePodVerticalScaling, + enableSidecarContainers: fts.EnableSidecarContainers, handle: h, resourceAllocationScorer: *scorePlugin(args), }, nil @@ -235,16 +236,16 @@ func getPreFilterState(cycleState *framework.CycleState) (*preFilterState, error // EventsToRegister returns the possible events that may make a Pod // failed by this plugin schedulable. -func (f *Fit) EventsToRegister() []framework.ClusterEvent { +func (f *Fit) EventsToRegister() []framework.ClusterEventWithHint { podActionType := framework.Delete if f.enableInPlacePodVerticalScaling { // If InPlacePodVerticalScaling (KEP 1287) is enabled, then PodUpdate event should be registered // for this plugin since a Pod update may free up resources that make other Pods schedulable. 
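The `sets.String` → `sets.Set[string]` and `sets.NewString(...)` → `sets.New(...)` changes swap the string-specialized set for the generic one in `k8s.io/apimachinery/pkg/util/sets`; `sets.New(args.IgnoredResources...)` infers the element type. As a rough illustration of what the generic type provides (real code should use the apimachinery package, not this stand-in):

```go
package main

import "fmt"

// Set is a tiny stand-in for sets.Set[T] from
// k8s.io/apimachinery/pkg/util/sets.
type Set[T comparable] map[T]struct{}

// New builds a set from items, analogous to sets.New(args.IgnoredResources...).
func New[T comparable](items ...T) Set[T] {
	s := Set[T]{}
	for _, it := range items {
		s[it] = struct{}{}
	}
	return s
}

// Has reports membership, matching the apimachinery Set's Has method.
func (s Set[T]) Has(item T) bool {
	_, ok := s[item]
	return ok
}

func main() {
	// Hypothetical ignored extended resources, for illustration only.
	ignored := New("example.com/foo", "example.com/bar")
	fmt.Println(ignored.Has("example.com/foo"), ignored.Has("example.com/baz"))
}
```

The migration is behavior-preserving: both types are maps with empty-struct values, so existing membership checks and lengths carry over unchanged.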
podActionType |= framework.Update } - return []framework.ClusterEvent{ - {Resource: framework.Pod, ActionType: podActionType}, - {Resource: framework.Node, ActionType: framework.Add | framework.Update}, + return []framework.ClusterEventWithHint{ + {Event: framework.ClusterEvent{Resource: framework.Pod, ActionType: podActionType}}, + {Event: framework.ClusterEvent{Resource: framework.Node, ActionType: framework.Add | framework.Update}}, } } @@ -252,6 +253,15 @@ func (f *Fit) EventsToRegister() []framework.ClusterEvent { // Checks if a node has sufficient resources, such as cpu, memory, gpu, opaque int resources etc to run a pod. // It returns a list of insufficient resources, if empty, then the node has all the resources requested by the pod. func (f *Fit) Filter(ctx context.Context, cycleState *framework.CycleState, pod *v1.Pod, nodeInfo *framework.NodeInfo) *framework.Status { + if !f.enableSidecarContainers && hasRestartableInitContainer(pod) { + // Scheduler will calculate resources usage for a Pod containing + // restartable init containers that will be equal or more than kubelet will + // require to run the Pod. So there will be no overbooking. However, to + // avoid the inconsistency in resource calculation between the scheduler + // and the older (before v1.28) kubelet, make the Pod unschedulable. 
+ return framework.NewStatus(framework.UnschedulableAndUnresolvable, "Pod has a restartable init container and the SidecarContainers feature is disabled") + } + s, err := getPreFilterState(cycleState) if err != nil { return framework.AsStatus(err) @@ -270,6 +280,15 @@ func (f *Fit) Filter(ctx context.Context, cycleState *framework.CycleState, pod return nil } +func hasRestartableInitContainer(pod *v1.Pod) bool { + for _, c := range pod.Spec.InitContainers { + if c.RestartPolicy != nil && *c.RestartPolicy == v1.ContainerRestartPolicyAlways { + return true + } + } + return false +} + // InsufficientResource describes what kind of resource limit is hit and caused the pod to not fit the node. type InsufficientResource struct { ResourceName v1.ResourceName @@ -286,7 +305,7 @@ func Fits(pod *v1.Pod, nodeInfo *framework.NodeInfo) []InsufficientResource { return fitsRequest(computePodResourceRequest(pod), nodeInfo, nil, nil) } -func fitsRequest(podRequest *preFilterState, nodeInfo *framework.NodeInfo, ignoredExtendedResources, ignoredResourceGroups sets.String) []InsufficientResource { +func fitsRequest(podRequest *preFilterState, nodeInfo *framework.NodeInfo, ignoredExtendedResources, ignoredResourceGroups sets.Set[string]) []InsufficientResource { insufficientResources := make([]InsufficientResource, 0, 4) allowedPodNumber := nodeInfo.Allocatable.AllowedPodNumber @@ -307,7 +326,7 @@ func fitsRequest(podRequest *preFilterState, nodeInfo *framework.NodeInfo, ignor return insufficientResources } - if podRequest.MilliCPU > (nodeInfo.Allocatable.MilliCPU - nodeInfo.Requested.MilliCPU) { + if podRequest.MilliCPU > 0 && podRequest.MilliCPU > (nodeInfo.Allocatable.MilliCPU-nodeInfo.Requested.MilliCPU) { insufficientResources = append(insufficientResources, InsufficientResource{ ResourceName: v1.ResourceCPU, Reason: "Insufficient cpu", @@ -316,7 +335,7 @@ func fitsRequest(podRequest *preFilterState, nodeInfo *framework.NodeInfo, ignor Capacity: nodeInfo.Allocatable.MilliCPU, }) } 
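The new gate rejects pods that declare restartable (sidecar-style) init containers while the SidecarContainers feature is disabled. A self-contained sketch of the detection helper, using pared-down pod types instead of the real `k8s.io/api/core/v1`:

```go
package main

import "fmt"

// Pared-down stand-ins for v1.Container / v1.Pod; the real helper walks
// pod.Spec.InitContainers and compares *c.RestartPolicy against
// v1.ContainerRestartPolicyAlways.
type ContainerRestartPolicy string

const ContainerRestartPolicyAlways ContainerRestartPolicy = "Always"

type Container struct {
	Name          string
	RestartPolicy *ContainerRestartPolicy
}

type Pod struct {
	InitContainers []Container
}

// hasRestartableInitContainer mirrors the helper added in the diff:
// a nil RestartPolicy (the default for regular init containers) does
// not count as restartable.
func hasRestartableInitContainer(pod *Pod) bool {
	for _, c := range pod.InitContainers {
		if c.RestartPolicy != nil && *c.RestartPolicy == ContainerRestartPolicyAlways {
			return true
		}
	}
	return false
}

func main() {
	always := ContainerRestartPolicyAlways
	sidecar := &Pod{InitContainers: []Container{{Name: "proxy", RestartPolicy: &always}}}
	plain := &Pod{InitContainers: []Container{{Name: "init-db"}}}
	fmt.Println(hasRestartableInitContainer(sidecar), hasRestartableInitContainer(plain))
}
```

Returning `UnschedulableAndUnresolvable` for such pods avoids a resource-accounting mismatch against kubelets older than v1.28, which would compute the pod's requests differently.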
- if podRequest.Memory > (nodeInfo.Allocatable.Memory - nodeInfo.Requested.Memory) { + if podRequest.Memory > 0 && podRequest.Memory > (nodeInfo.Allocatable.Memory-nodeInfo.Requested.Memory) { insufficientResources = append(insufficientResources, InsufficientResource{ ResourceName: v1.ResourceMemory, Reason: "Insufficient memory", @@ -325,7 +344,8 @@ func fitsRequest(podRequest *preFilterState, nodeInfo *framework.NodeInfo, ignor Capacity: nodeInfo.Allocatable.Memory, }) } - if podRequest.EphemeralStorage > (nodeInfo.Allocatable.EphemeralStorage - nodeInfo.Requested.EphemeralStorage) { + if podRequest.EphemeralStorage > 0 && + podRequest.EphemeralStorage > (nodeInfo.Allocatable.EphemeralStorage-nodeInfo.Requested.EphemeralStorage) { insufficientResources = append(insufficientResources, InsufficientResource{ ResourceName: v1.ResourceEphemeralStorage, Reason: "Insufficient ephemeral-storage", @@ -381,5 +401,5 @@ func (f *Fit) Score(ctx context.Context, state *framework.CycleState, pod *v1.Po } } - return f.score(pod, nodeInfo, s.podRequests) + return f.score(ctx, pod, nodeInfo, s.podRequests) } diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/noderesources/resource_allocation.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/noderesources/resource_allocation.go index 68e4433f9185..863647bd0af6 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/noderesources/resource_allocation.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/noderesources/resource_allocation.go @@ -17,6 +17,8 @@ limitations under the License. package noderesources import ( + "context" + v1 "k8s.io/api/core/v1" "k8s.io/apimachinery/pkg/api/resource" utilfeature "k8s.io/apiserver/pkg/util/feature" @@ -44,13 +46,13 @@ type resourceAllocationScorer struct { // score will use `scorer` function to calculate the score. 
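The added `podRequest.X > 0 &&` guards mean a pod that requests none of a resource can no longer be rejected for it. That matters on a node that is already overcommitted (`Requested > Allocatable`), where the old comparison would fail even best-effort pods. A reduced sketch of the guarded check (simplified to CPU only):

```go
package main

import "fmt"

// fitsCPU mirrors the guarded comparison from the diff: a zero request
// always fits, even when the node's remaining headroom is negative.
func fitsCPU(requestMilli, allocatableMilli, alreadyRequestedMilli int64) bool {
	if requestMilli > 0 && requestMilli > (allocatableMilli-alreadyRequestedMilli) {
		return false
	}
	return true
}

func main() {
	// Overcommitted node: 4000m allocatable, 4500m already requested.
	fmt.Println(fitsCPU(0, 4000, 4500))   // best-effort pod still fits
	fmt.Println(fitsCPU(100, 4000, 4500)) // any positive request does not
	fmt.Println(fitsCPU(100, 4000, 3000)) // fits when headroom remains
}
```

The same guard is applied uniformly to CPU, memory, ephemeral storage, and scalar resources in the hunks above.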
func (r *resourceAllocationScorer) score( + ctx context.Context, pod *v1.Pod, nodeInfo *framework.NodeInfo, podRequests []int64) (int64, *framework.Status) { + logger := klog.FromContext(ctx) node := nodeInfo.Node() - if node == nil { - return 0, framework.NewStatus(framework.Error, "node not found") - } + // resources not set, nothing scheduled, if len(r.resources) == 0 { return 0, framework.NewStatus(framework.Error, "resources not found") @@ -59,7 +61,7 @@ func (r *resourceAllocationScorer) score( requested := make([]int64, len(r.resources)) allocatable := make([]int64, len(r.resources)) for i := range r.resources { - alloc, req := r.calculateResourceAllocatableRequest(nodeInfo, v1.ResourceName(r.resources[i].Name), podRequests[i]) + alloc, req := r.calculateResourceAllocatableRequest(logger, nodeInfo, v1.ResourceName(r.resources[i].Name), podRequests[i]) // Only fill the extended resource entry when it's non-zero. if alloc == 0 { continue @@ -70,8 +72,8 @@ func (r *resourceAllocationScorer) score( score := r.scorer(requested, allocatable) - if klogV := klog.V(10); klogV.Enabled() { // Serializing these maps is costly. - klogV.InfoS("Listing internal info for allocatable resources, requested resources and score", "pod", + if loggerV := logger.V(10); loggerV.Enabled() { // Serializing these maps is costly. + loggerV.Info("Listed internal info for allocatable resources, requested resources and score", "pod", klog.KObj(pod), "node", klog.KObj(node), "resourceAllocationScorer", r.Name, "allocatableResource", allocatable, "requestedResource", requested, "resourceScore", score, ) @@ -84,7 +86,7 @@ func (r *resourceAllocationScorer) score( // - 1st param: quantity of allocatable resource on the node. // - 2nd param: aggregated quantity of requested resource on the node. // Note: if it's an extended resource, and the pod doesn't request it, (0, 0) is returned. 
-func (r *resourceAllocationScorer) calculateResourceAllocatableRequest(nodeInfo *framework.NodeInfo, resource v1.ResourceName, podRequest int64) (int64, int64) { +func (r *resourceAllocationScorer) calculateResourceAllocatableRequest(logger klog.Logger, nodeInfo *framework.NodeInfo, resource v1.ResourceName, podRequest int64) (int64, int64) { requested := nodeInfo.NonZeroRequested if r.useRequested { requested = nodeInfo.Requested @@ -107,7 +109,7 @@ func (r *resourceAllocationScorer) calculateResourceAllocatableRequest(nodeInfo return nodeInfo.Allocatable.ScalarResources[resource], (nodeInfo.Requested.ScalarResources[resource] + podRequest) } } - klog.V(10).InfoS("Requested resource is omitted for node score calculation", "resourceName", resource) + logger.V(10).Info("Requested resource is omitted for node score calculation", "resourceName", resource) return 0, 0 } diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/nodeunschedulable/node_unschedulable.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/nodeunschedulable/node_unschedulable.go index 47231837e5cb..1ba667ed75ab 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/nodeunschedulable/node_unschedulable.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/nodeunschedulable/node_unschedulable.go @@ -46,9 +46,9 @@ const ( // EventsToRegister returns the possible events that may make a Pod // failed by this plugin schedulable. 
-func (pl *NodeUnschedulable) EventsToRegister() []framework.ClusterEvent { - return []framework.ClusterEvent{ - {Resource: framework.Node, ActionType: framework.Add | framework.UpdateNodeTaint}, +func (pl *NodeUnschedulable) EventsToRegister() []framework.ClusterEventWithHint { + return []framework.ClusterEventWithHint{ + {Event: framework.ClusterEvent{Resource: framework.Node, ActionType: framework.Add | framework.UpdateNodeTaint}}, } } @@ -60,9 +60,7 @@ func (pl *NodeUnschedulable) Name() string { // Filter invoked at the filter extension point. func (pl *NodeUnschedulable) Filter(ctx context.Context, _ *framework.CycleState, pod *v1.Pod, nodeInfo *framework.NodeInfo) *framework.Status { node := nodeInfo.Node() - if node == nil { - return framework.NewStatus(framework.UnschedulableAndUnresolvable, ErrReasonUnknownCondition) - } + // If the pod tolerates the unschedulable taint, it also tolerates `node.Spec.Unschedulable`. podToleratesUnschedulable := v1helper.TolerationsTolerateTaint(pod.Spec.Tolerations, &v1.Taint{ Key: v1.TaintNodeUnschedulable, diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/nodevolumelimits/csi.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/nodevolumelimits/csi.go index ab2d335f6afc..e26401d39c2b 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/nodevolumelimits/csi.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/nodevolumelimits/csi.go @@ -60,6 +60,7 @@ type CSILimits struct { translator InTreeToCSITranslator } +var _ framework.PreFilterPlugin = &CSILimits{} var _ framework.FilterPlugin = &CSILimits{} var _ framework.EnqueueExtensions = &CSILimits{} @@ -73,13 +74,33 @@ func (pl *CSILimits) Name() string { // EventsToRegister returns the possible events that may make a Pod // failed by this plugin schedulable.
-func (pl *CSILimits) EventsToRegister() []framework.ClusterEvent { - return []framework.ClusterEvent{ - {Resource: framework.CSINode, ActionType: framework.Add}, - {Resource: framework.Pod, ActionType: framework.Delete}, +func (pl *CSILimits) EventsToRegister() []framework.ClusterEventWithHint { + return []framework.ClusterEventWithHint{ + {Event: framework.ClusterEvent{Resource: framework.CSINode, ActionType: framework.Add}}, + {Event: framework.ClusterEvent{Resource: framework.Pod, ActionType: framework.Delete}}, } } +// PreFilter invoked at the prefilter extension point +// +// If the pod doesn't have any of those types of volumes, we'll skip the Filter phase +func (pl *CSILimits) PreFilter(ctx context.Context, _ *framework.CycleState, pod *v1.Pod) (*framework.PreFilterResult, *framework.Status) { + volumes := pod.Spec.Volumes + for i := range volumes { + vol := &volumes[i] + if vol.PersistentVolumeClaim != nil || vol.Ephemeral != nil || pl.translator.IsInlineMigratable(vol) { + return nil, nil + } + } + + return nil, framework.NewStatus(framework.Skip) +} + +// PreFilterExtensions returns prefilter extensions, pod add and remove. +func (pl *CSILimits) PreFilterExtensions() framework.PreFilterExtensions { + return nil +} + // Filter invoked at the filter extension point.
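The new PreFilter lets the volume-limits plugins opt out entirely: if a pod carries no PVC, no generic ephemeral volume, and no migratable inline volume, PreFilter returns `framework.Skip` and the (comparatively expensive) Filter never runs for that pod. A reduced sketch of the scan-then-skip decision, with a simplified volume shape standing in for `v1.Volume` and the CSI translator:

```go
package main

import "fmt"

// Pared-down volume shape; the real code inspects v1.Volume fields and
// calls the CSI translator's IsInlineMigratable.
type Volume struct {
	Name                  string
	PersistentVolumeClaim bool
	Ephemeral             bool
	InlineMigratable      bool
}

// needsFilter mirrors the PreFilter loop: any volume the plugin counts
// means Filter must run; otherwise the plugin is skipped for this pod.
func needsFilter(volumes []Volume) bool {
	for i := range volumes {
		v := &volumes[i]
		if v.PersistentVolumeClaim || v.Ephemeral || v.InlineMigratable {
			return true
		}
	}
	return false
}

func main() {
	configOnly := []Volume{{Name: "cfg"}} // e.g. only a ConfigMap volume
	withPVC := []Volume{{Name: "data", PersistentVolumeClaim: true}}
	fmt.Println(needsFilter(configOnly), needsFilter(withPVC))
}
```

Iterating by index and taking `&volumes[i]` matches the diff's style of avoiding a per-iteration copy of the volume struct, the same motivation as the `checkAttachableInlineVolume` pointer change below.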
func (pl *CSILimits) Filter(ctx context.Context, _ *framework.CycleState, pod *v1.Pod, nodeInfo *framework.NodeInfo) *framework.Status { // If the new pod doesn't have any volume attached to it, the predicate will always be true @@ -88,9 +109,6 @@ func (pl *CSILimits) Filter(ctx context.Context, _ *framework.CycleState, pod *v } node := nodeInfo.Node() - if node == nil { - return framework.NewStatus(framework.Error, "node not found") - } // If CSINode doesn't exist, the predicate may read the limits from Node object csiNode, err := pl.csiNodeLister.Get(node.Name) @@ -172,7 +190,7 @@ func (pl *CSILimits) filterAttachableVolumes( // - If the volume is migratable and CSI migration is enabled, need to count it // as well. // - If the volume is not migratable, it will be count in non_csi filter. - if err := pl.checkAttachableInlineVolume(vol, csiNode, pod, result); err != nil { + if err := pl.checkAttachableInlineVolume(&vol, csiNode, pod, result); err != nil { return err } @@ -220,15 +238,15 @@ func (pl *CSILimits) filterAttachableVolumes( // checkAttachableInlineVolume takes an inline volume and add to the result map if the // volume is migratable and CSI migration for this plugin has been enabled. -func (pl *CSILimits) checkAttachableInlineVolume(vol v1.Volume, csiNode *storagev1.CSINode, +func (pl *CSILimits) checkAttachableInlineVolume(vol *v1.Volume, csiNode *storagev1.CSINode, pod *v1.Pod, result map[string]string) error { - if !pl.translator.IsInlineMigratable(&vol) { + if !pl.translator.IsInlineMigratable(vol) { return nil } // Check if the intree provisioner CSI migration has been enabled. 
- inTreeProvisionerName, err := pl.translator.GetInTreePluginNameFromSpec(nil, &vol) + inTreeProvisionerName, err := pl.translator.GetInTreePluginNameFromSpec(nil, vol) if err != nil { - return fmt.Errorf("looking up provisioner name for volume %v: %w", vol, err) + return fmt.Errorf("looking up provisioner name for volume %s: %w", vol.Name, err) } if !isCSIMigrationOn(csiNode, inTreeProvisionerName) { csiNodeName := "" @@ -240,9 +258,9 @@ func (pl *CSILimits) checkAttachableInlineVolume(vol v1.Volume, csiNode *storage return nil } // Do translation for the in-tree volume. - translatedPV, err := pl.translator.TranslateInTreeInlineVolumeToCSI(&vol, pod.Namespace) + translatedPV, err := pl.translator.TranslateInTreeInlineVolumeToCSI(vol, pod.Namespace) if err != nil || translatedPV == nil { - return fmt.Errorf("converting volume(%v) from inline to csi: %w", vol, err) + return fmt.Errorf("converting volume(%s) from inline to csi: %w", vol.Name, err) } driverName, err := pl.translator.GetCSINameFromInTreeName(inTreeProvisionerName) if err != nil { diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/nodevolumelimits/non_csi.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/nodevolumelimits/non_csi.go index 4a2535e5f26b..763c6f45ddd7 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/nodevolumelimits/non_csi.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/nodevolumelimits/non_csi.go @@ -119,6 +119,7 @@ type nonCSILimits struct { randomVolumeIDPrefix string } +var _ framework.PreFilterPlugin = &nonCSILimits{} var _ framework.FilterPlugin = &nonCSILimits{} var _ framework.EnqueueExtensions = &nonCSILimits{} @@ -201,13 +202,34 @@ func (pl *nonCSILimits) Name() string { // EventsToRegister returns the possible events that may make a Pod // failed by this plugin schedulable. 
-func (pl *nonCSILimits) EventsToRegister() []framework.ClusterEvent { - return []framework.ClusterEvent{ - {Resource: framework.Node, ActionType: framework.Add}, - {Resource: framework.Pod, ActionType: framework.Delete}, +func (pl *nonCSILimits) EventsToRegister() []framework.ClusterEventWithHint { + return []framework.ClusterEventWithHint{ + {Event: framework.ClusterEvent{Resource: framework.Node, ActionType: framework.Add}}, + {Event: framework.ClusterEvent{Resource: framework.Pod, ActionType: framework.Delete}}, } } +// PreFilter invoked at the prefilter extension point +// +// If the pod doesn't have any of those types of volumes, we'll skip the Filter phase +func (pl *nonCSILimits) PreFilter(ctx context.Context, _ *framework.CycleState, pod *v1.Pod) (*framework.PreFilterResult, *framework.Status) { + volumes := pod.Spec.Volumes + for i := range volumes { + vol := &volumes[i] + _, ok := pl.filter.FilterVolume(vol) + if ok || vol.PersistentVolumeClaim != nil || vol.Ephemeral != nil { + return nil, nil + } + } + + return nil, framework.NewStatus(framework.Skip) +} + +// PreFilterExtensions returns prefilter extensions, pod add and remove. +func (pl *nonCSILimits) PreFilterExtensions() framework.PreFilterExtensions { + return nil +} + // Filter invoked at the filter extension point. func (pl *nonCSILimits) Filter(ctx context.Context, _ *framework.CycleState, pod *v1.Pod, nodeInfo *framework.NodeInfo) *framework.Status { // If a pod doesn't have any volume attached to it, the predicate will always be true.
@@ -216,7 +238,7 @@ func (pl *nonCSILimits) Filter(ctx context.Context, _ *framework.CycleState, pod return nil } - newVolumes := make(sets.String) + newVolumes := sets.New[string]() if err := pl.filterVolumes(pod, true /* new pod */, newVolumes); err != nil { return framework.AsStatus(err) } @@ -227,9 +249,6 @@ func (pl *nonCSILimits) Filter(ctx context.Context, _ *framework.CycleState, pod } node := nodeInfo.Node() - if node == nil { - return framework.NewStatus(framework.Error, "node not found") - } var csiNode *storage.CSINode var err error @@ -248,7 +267,7 @@ func (pl *nonCSILimits) Filter(ctx context.Context, _ *framework.CycleState, pod } // count unique volumes - existingVolumes := make(sets.String) + existingVolumes := sets.New[string]() for _, existingPod := range nodeInfo.Pods { if err := pl.filterVolumes(existingPod.Pod, false /* existing pod */, existingVolumes); err != nil { return framework.AsStatus(err) @@ -274,7 +293,7 @@ func (pl *nonCSILimits) Filter(ctx context.Context, _ *framework.CycleState, pod return nil } -func (pl *nonCSILimits) filterVolumes(pod *v1.Pod, newPod bool, filteredVolumes sets.String) error { +func (pl *nonCSILimits) filterVolumes(pod *v1.Pod, newPod bool, filteredVolumes sets.Set[string]) error { volumes := pod.Spec.Volumes for i := range volumes { vol := &volumes[i] diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/nodevolumelimits/utils.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/nodevolumelimits/utils.go index 07f388cf94d7..691f2d500e50 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/nodevolumelimits/utils.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/nodevolumelimits/utils.go @@ -46,9 +46,7 @@ func isCSIMigrationOn(csiNode *storagev1.CSINode, pluginName string) bool { return false } case csilibplugins.GCEPDInTreePluginName: - if 
!utilfeature.DefaultFeatureGate.Enabled(features.CSIMigrationGCE) { - return false - } + return true case csilibplugins.AzureDiskInTreePluginName: return true case csilibplugins.CinderInTreePluginName: @@ -68,13 +66,13 @@ func isCSIMigrationOn(csiNode *storagev1.CSINode, pluginName string) bool { return false } - var mpaSet sets.String + var mpaSet sets.Set[string] mpa := csiNodeAnn[v1.MigratedPluginsAnnotationKey] if len(mpa) == 0 { - mpaSet = sets.NewString() + mpaSet = sets.New[string]() } else { tok := strings.Split(mpa, ",") - mpaSet = sets.NewString(tok...) + mpaSet = sets.New(tok...) } return mpaSet.Has(pluginName) diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/podtopologyspread/filtering.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/podtopologyspread/filtering.go index bd0d5a92f30f..68cdc2199fb3 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/podtopologyspread/filtering.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/podtopologyspread/filtering.go @@ -152,7 +152,10 @@ func (pl *PodTopologySpread) PreFilter(ctx context.Context, cycleState *framewor s, err := pl.calPreFilterState(ctx, pod) if err != nil { return nil, framework.AsStatus(err) + } else if s != nil && len(s.Constraints) == 0 { + return nil, framework.NewStatus(framework.Skip) } + cycleState.Write(preFilterStateKey, s) return nil, nil } @@ -236,11 +239,8 @@ func getPreFilterState(cycleState *framework.CycleState) (*preFilterState, error // calPreFilterState computes preFilterState describing how pods are spread on topologies. 
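Besides hard-coding GCE PD migration to on, `isCSIMigrationOn` still ends by parsing the CSINode's migrated-plugins annotation (`v1.MigratedPluginsAnnotationKey`, a comma-separated list) into a set and testing membership. A standalone sketch of that tail, with the generic-set call replaced by a plain map since the apimachinery package isn't imported here:

```go
package main

import (
	"fmt"
	"strings"
)

// migratedPluginSet parses the comma-separated annotation value; an
// empty value yields an empty set, matching the diff's
// len(mpa) == 0 branch (strings.Split("") would otherwise produce
// a one-element slice containing "").
func migratedPluginSet(annotationValue string) map[string]struct{} {
	set := map[string]struct{}{}
	if len(annotationValue) == 0 {
		return set
	}
	for _, tok := range strings.Split(annotationValue, ",") {
		set[tok] = struct{}{}
	}
	return set
}

// isMigrated reports whether a given in-tree plugin name appears in
// the annotation, as mpaSet.Has(pluginName) does in the real code.
func isMigrated(annotationValue, pluginName string) bool {
	_, ok := migratedPluginSet(annotationValue)[pluginName]
	return ok
}

func main() {
	ann := "kubernetes.io/aws-ebs,kubernetes.io/gce-pd"
	fmt.Println(isMigrated(ann, "kubernetes.io/gce-pd"), isMigrated(ann, "kubernetes.io/cinder"))
}
```

The explicit empty-string branch is the subtle part worth keeping when reimplementing this anywhere: it is what makes "no annotation" mean "no migrated plugins" rather than one empty-named plugin.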
func (pl *PodTopologySpread) calPreFilterState(ctx context.Context, pod *v1.Pod) (*preFilterState, error) { - allNodes, err := pl.sharedLister.NodeInfos().List() - if err != nil { - return nil, fmt.Errorf("listing NodeInfos: %w", err) - } var constraints []topologySpreadConstraint + var err error if len(pod.Spec.TopologySpreadConstraints) > 0 { // We have feature gating in APIServer to strip the spec // so don't need to re-check feature gate, just check length of Constraints. @@ -262,6 +262,11 @@ func (pl *PodTopologySpread) calPreFilterState(ctx context.Context, pod *v1.Pod) return &preFilterState{}, nil } + allNodes, err := pl.sharedLister.NodeInfos().List() + if err != nil { + return nil, fmt.Errorf("listing NodeInfos: %w", err) + } + s := preFilterState{ Constraints: constraints, TpKeyToCriticalPaths: make(map[string]*criticalPaths, len(constraints)), @@ -273,10 +278,6 @@ func (pl *PodTopologySpread) calPreFilterState(ctx context.Context, pod *v1.Pod) processNode := func(i int) { nodeInfo := allNodes[i] node := nodeInfo.Node() - if node == nil { - klog.ErrorS(nil, "Node not found") - return - } if !pl.enableNodeInclusionPolicyInPodTopologySpread { // spreading is applied to nodes that pass those filters. @@ -333,9 +334,6 @@ func (pl *PodTopologySpread) calPreFilterState(ctx context.Context, pod *v1.Pod) // Filter invoked at the filter extension point. 
func (pl *PodTopologySpread) Filter(ctx context.Context, cycleState *framework.CycleState, pod *v1.Pod, nodeInfo *framework.NodeInfo) *framework.Status { node := nodeInfo.Node() - if node == nil { - return framework.AsStatus(fmt.Errorf("node not found")) - } s, err := getPreFilterState(cycleState) if err != nil { @@ -347,12 +345,13 @@ func (pl *PodTopologySpread) Filter(ctx context.Context, cycleState *framework.C return nil } + logger := klog.FromContext(ctx) podLabelSet := labels.Set(pod.Labels) for _, c := range s.Constraints { tpKey := c.TopologyKey tpVal, ok := node.Labels[c.TopologyKey] if !ok { - klog.V(5).InfoS("Node doesn't have required label", "node", klog.KObj(node), "label", tpKey) + logger.V(5).Info("Node doesn't have required label", "node", klog.KObj(node), "label", tpKey) return framework.NewStatus(framework.UnschedulableAndUnresolvable, ErrReasonNodeLabelNotMatch) } @@ -360,7 +359,7 @@ func (pl *PodTopologySpread) Filter(ctx context.Context, cycleState *framework.C // 'existing matching num' + 'if self-match (1 or 0)' - 'global minimum' <= 'maxSkew' minMatchNum, err := s.minMatchNum(tpKey, c.MinDomains, pl.enableMinDomainsInPodTopologySpread) if err != nil { - klog.ErrorS(err, "Internal error occurred while retrieving value precalculated in PreFilter", "topologyKey", tpKey, "paths", s.TpKeyToCriticalPaths) + logger.Error(err, "Internal error occurred while retrieving value precalculated in PreFilter", "topologyKey", tpKey, "paths", s.TpKeyToCriticalPaths) continue } @@ -376,7 +375,7 @@ func (pl *PodTopologySpread) Filter(ctx context.Context, cycleState *framework.C } skew := matchNum + selfMatchNum - minMatchNum if skew > int(c.MaxSkew) { - klog.V(5).InfoS("Node failed spreadConstraint: matchNum + selfMatchNum - minMatchNum > maxSkew", "node", klog.KObj(node), "topologyKey", tpKey, "matchNum", matchNum, "selfMatchNum", selfMatchNum, "minMatchNum", minMatchNum, "maxSkew", c.MaxSkew) + logger.V(5).Info("Node failed spreadConstraint: matchNum + 
selfMatchNum - minMatchNum > maxSkew", "node", klog.KObj(node), "topologyKey", tpKey, "matchNum", matchNum, "selfMatchNum", selfMatchNum, "minMatchNum", minMatchNum, "maxSkew", c.MaxSkew) return framework.NewStatus(framework.Unschedulable, ErrReasonConstraintsNotMatch) } } diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/podtopologyspread/plugin.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/podtopologyspread/plugin.go index ed2333e76df2..17803e4ee0f5 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/podtopologyspread/plugin.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/podtopologyspread/plugin.go @@ -131,8 +131,8 @@ func (pl *PodTopologySpread) setListers(factory informers.SharedInformerFactory) // EventsToRegister returns the possible events that may make a Pod // failed by this plugin schedulable. -func (pl *PodTopologySpread) EventsToRegister() []framework.ClusterEvent { - return []framework.ClusterEvent{ +func (pl *PodTopologySpread) EventsToRegister() []framework.ClusterEventWithHint { + return []framework.ClusterEventWithHint{ // All ActionType includes the following events: // - Add. An unschedulable Pod may fail due to violating topology spread constraints, // adding an assigned Pod may make it schedulable. @@ -140,9 +140,9 @@ func (pl *PodTopologySpread) EventsToRegister() []framework.ClusterEvent { // an unschedulable Pod schedulable. // - Delete. An unschedulable Pod may fail due to violating an existing Pod's topology spread constraints, // deleting an existing Pod may make it schedulable. - {Resource: framework.Pod, ActionType: framework.All}, + {Event: framework.ClusterEvent{Resource: framework.Pod, ActionType: framework.All}}, // Node add|delete|updateLabel may lead to a topology key change, // and make pods in scheduling schedulable or unschedulable.
- {Resource: framework.Node, ActionType: framework.Add | framework.Delete | framework.UpdateNodeLabel}, + {Event: framework.ClusterEvent{Resource: framework.Node, ActionType: framework.Add | framework.Delete | framework.UpdateNodeLabel}}, } } diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/podtopologyspread/scoring.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/podtopologyspread/scoring.go index 7a3c0589b1b9..97770878f531 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/podtopologyspread/scoring.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/podtopologyspread/scoring.go @@ -36,7 +36,7 @@ const invalidScore = -1 type preScoreState struct { Constraints []topologySpreadConstraint // IgnoredNodes is a set of node names which miss some Constraints[*].topologyKey. - IgnoredNodes sets.String + IgnoredNodes sets.Set[string] // TopologyPairToPodCounts is keyed with topologyPair, and valued with the number of matching pods. TopologyPairToPodCounts map[topologyPair]*int64 // TopologyNormalizingWeight is the weight we give to the counts per topology. @@ -120,13 +120,13 @@ func (pl *PodTopologySpread) PreScore( return framework.AsStatus(fmt.Errorf("getting all nodes: %w", err)) } - if len(filteredNodes) == 0 || len(allNodes) == 0 { - // No nodes to score. - return nil + if len(allNodes) == 0 { + // No need to score. + return framework.NewStatus(framework.Skip) } state := &preScoreState{ - IgnoredNodes: sets.NewString(), + IgnoredNodes: sets.New[string](), TopologyPairToPodCounts: make(map[topologyPair]*int64), } // Only require that nodes have all the topology labels if using @@ -138,10 +138,9 @@ func (pl *PodTopologySpread) PreScore( return framework.AsStatus(fmt.Errorf("calculating preScoreState: %w", err)) } - // return if incoming pod doesn't have soft topology spread Constraints. 
+ // return Skip if incoming pod doesn't have soft topology spread Constraints. if len(state.Constraints) == 0 { - cycleState.Write(preScoreStateKey, state) - return nil + return framework.NewStatus(framework.Skip) } // Ignore parsing errors for backwards compatibility. @@ -149,9 +148,6 @@ func (pl *PodTopologySpread) PreScore( processAllNode := func(i int) { nodeInfo := allNodes[i] node := nodeInfo.Node() - if node == nil { - return - } if !pl.enableNodeInclusionPolicyInPodTopologySpread { // `node` should satisfy incoming pod's NodeSelector/NodeAffinity diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/registry.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/registry.go index b4c0abe7572d..718e9eb1d8ed 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/registry.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/registry.go @@ -56,6 +56,7 @@ func NewInTreeRegistry() runtime.Registry { EnablePodSchedulingReadiness: feature.DefaultFeatureGate.Enabled(features.PodSchedulingReadiness), EnablePodDisruptionConditions: feature.DefaultFeatureGate.Enabled(features.PodDisruptionConditions), EnableInPlacePodVerticalScaling: feature.DefaultFeatureGate.Enabled(features.InPlacePodVerticalScaling), + EnableSidecarContainers: feature.DefaultFeatureGate.Enabled(features.SidecarContainers), } registry := runtime.Registry{ diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/schedulinggates/scheduling_gates.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/schedulinggates/scheduling_gates.go index 249e31a3b576..5c0678cb0d84 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/schedulinggates/scheduling_gates.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/schedulinggates/scheduling_gates.go @@ -55,9 
+55,9 @@ func (pl *SchedulingGates) PreEnqueue(ctx context.Context, p *v1.Pod) *framework // EventsToRegister returns the possible events that may make a Pod // failed by this plugin schedulable. -func (pl *SchedulingGates) EventsToRegister() []framework.ClusterEvent { - return []framework.ClusterEvent{ - {Resource: framework.Pod, ActionType: framework.Update}, +func (pl *SchedulingGates) EventsToRegister() []framework.ClusterEventWithHint { + return []framework.ClusterEventWithHint{ + {Event: framework.ClusterEvent{Resource: framework.Pod, ActionType: framework.Update}}, } } diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/tainttoleration/taint_toleration.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/tainttoleration/taint_toleration.go index 4611a98158c0..9d1bbe85caf6 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/tainttoleration/taint_toleration.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/tainttoleration/taint_toleration.go @@ -54,18 +54,15 @@ func (pl *TaintToleration) Name() string { // EventsToRegister returns the possible events that may make a Pod // failed by this plugin schedulable. -func (pl *TaintToleration) EventsToRegister() []framework.ClusterEvent { - return []framework.ClusterEvent{ - {Resource: framework.Node, ActionType: framework.Add | framework.Update}, +func (pl *TaintToleration) EventsToRegister() []framework.ClusterEventWithHint { + return []framework.ClusterEventWithHint{ + {Event: framework.ClusterEvent{Resource: framework.Node, ActionType: framework.Add | framework.Update}}, } } // Filter invoked at the filter extension point. 
func (pl *TaintToleration) Filter(ctx context.Context, state *framework.CycleState, pod *v1.Pod, nodeInfo *framework.NodeInfo) *framework.Status { node := nodeInfo.Node() - if node == nil { - return framework.AsStatus(fmt.Errorf("invalid nodeInfo")) - } taint, isUntolerated := v1helper.FindMatchingUntoleratedTaint(node.Spec.Taints, pod.Spec.Tolerations, helper.DoNotScheduleTaintsFilterFunc()) if !isUntolerated { diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/volumebinding/assume_cache.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/volumebinding/assume_cache.go index 283b4083e698..5b5127762218 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/volumebinding/assume_cache.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/volumebinding/assume_cache.go @@ -43,7 +43,7 @@ type AssumeCache interface { // Get the object by name Get(objName string) (interface{}, error) - // Get the API object by name + // GetAPIObj gets the API object by name GetAPIObj(objName string) (interface{}, error) // List all the objects in the cache diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/volumebinding/binder.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/volumebinding/binder.go index b9ac4cb04273..b8afe554ca81 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/volumebinding/binder.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/volumebinding/binder.go @@ -125,7 +125,7 @@ type InTreeToCSITranslator interface { // // This integrates into the existing scheduler workflow as follows: // 1. The scheduler takes a Pod off the scheduler queue and processes it serially: -// a. Invokes all pre-filter plugins for the pod. GetPodVolumes() is invoked +// a. Invokes all pre-filter plugins for the pod. 
GetPodVolumeClaims() is invoked // here, pod volume information will be saved in current scheduling cycle state for later use. // If pod has bound immediate PVCs, GetEligibleNodes() is invoked to potentially reduce // down the list of eligible nodes based on the bound PV's NodeAffinity (if any). @@ -157,7 +157,7 @@ type SchedulerVolumeBinder interface { // // If eligibleNodes is 'nil', then it indicates that such eligible node reduction cannot be made // and all nodes should be considered. - GetEligibleNodes(boundClaims []*v1.PersistentVolumeClaim) (eligibleNodes sets.String) + GetEligibleNodes(boundClaims []*v1.PersistentVolumeClaim) (eligibleNodes sets.Set[string]) // FindPodVolumes checks if all of a Pod's PVCs can be satisfied by the // node and returns pod's volumes information. @@ -386,7 +386,7 @@ func (b *volumeBinder) FindPodVolumes(pod *v1.Pod, podVolumeClaims *PodVolumeCla // // Returning 'nil' for eligibleNodes indicates that such eligible node reduction cannot be made and all nodes // should be considered. -func (b *volumeBinder) GetEligibleNodes(boundClaims []*v1.PersistentVolumeClaim) (eligibleNodes sets.String) { +func (b *volumeBinder) GetEligibleNodes(boundClaims []*v1.PersistentVolumeClaim) (eligibleNodes sets.Set[string]) { if len(boundClaims) == 0 { return } @@ -407,13 +407,13 @@ func (b *volumeBinder) GetEligibleNodes(boundClaims []*v1.PersistentVolumeClaim) // on the first found list of eligible nodes for the local PersistentVolume, // insert to the eligible node set. if eligibleNodes == nil { - eligibleNodes = sets.NewString(nodeNames...) + eligibleNodes = sets.New(nodeNames...) } else { // for subsequent finding of eligible nodes for the local PersistentVolume, // take the intersection of the nodes with the existing eligible nodes // for cases if PV1 has node affinity to node1 and PV2 has node affinity to node2, // then the eligible node list should be empty. 
- eligibleNodes = eligibleNodes.Intersection(sets.NewString(nodeNames...)) + eligibleNodes = eligibleNodes.Intersection(sets.New(nodeNames...)) } } } @@ -1088,7 +1088,7 @@ func isCSIMigrationOnForPlugin(pluginName string) bool { case csiplugins.AWSEBSInTreePluginName: return true case csiplugins.GCEPDInTreePluginName: - return utilfeature.DefaultFeatureGate.Enabled(features.CSIMigrationGCE) + return true case csiplugins.AzureDiskInTreePluginName: return true case csiplugins.CinderInTreePluginName: @@ -1112,13 +1112,13 @@ func isPluginMigratedToCSIOnNode(pluginName string, csiNode *storagev1.CSINode) return false } - var mpaSet sets.String + var mpaSet sets.Set[string] mpa := csiNodeAnn[v1.MigratedPluginsAnnotationKey] if len(mpa) == 0 { - mpaSet = sets.NewString() + mpaSet = sets.New[string]() } else { tok := strings.Split(mpa, ",") - mpaSet = sets.NewString(tok...) + mpaSet = sets.New(tok...) } return mpaSet.Has(pluginName) diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/volumebinding/fake_binder.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/volumebinding/fake_binder.go index c77c9733789a..b8e78b8bea12 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/volumebinding/fake_binder.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/volumebinding/fake_binder.go @@ -55,7 +55,7 @@ func (b *FakeVolumeBinder) GetPodVolumeClaims(pod *v1.Pod) (podVolumeClaims *Pod } // GetEligibleNodes implements SchedulerVolumeBinder.GetEligibleNodes. 
-func (b *FakeVolumeBinder) GetEligibleNodes(boundClaims []*v1.PersistentVolumeClaim) (eligibleNodes sets.String) { +func (b *FakeVolumeBinder) GetEligibleNodes(boundClaims []*v1.PersistentVolumeClaim) (eligibleNodes sets.Set[string]) { return nil } diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/volumebinding/volume_binding.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/volumebinding/volume_binding.go index 3d251ba4a55d..66756b7af134 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/volumebinding/volume_binding.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/volumebinding/volume_binding.go @@ -87,25 +87,25 @@ func (pl *VolumeBinding) Name() string { // EventsToRegister returns the possible events that may make a Pod // failed by this plugin schedulable. -func (pl *VolumeBinding) EventsToRegister() []framework.ClusterEvent { - events := []framework.ClusterEvent{ +func (pl *VolumeBinding) EventsToRegister() []framework.ClusterEventWithHint { + events := []framework.ClusterEventWithHint{ // Pods may fail because of missing or mis-configured storage class // (e.g., allowedTopologies, volumeBindingMode), and hence may become // schedulable upon StorageClass Add or Update events. - {Resource: framework.StorageClass, ActionType: framework.Add | framework.Update}, + {Event: framework.ClusterEvent{Resource: framework.StorageClass, ActionType: framework.Add | framework.Update}}, // We bind PVCs with PVs, so any changes may make the pods schedulable. 
- {Resource: framework.PersistentVolumeClaim, ActionType: framework.Add | framework.Update}, - {Resource: framework.PersistentVolume, ActionType: framework.Add | framework.Update}, + {Event: framework.ClusterEvent{Resource: framework.PersistentVolumeClaim, ActionType: framework.Add | framework.Update}}, + {Event: framework.ClusterEvent{Resource: framework.PersistentVolume, ActionType: framework.Add | framework.Update}}, // Pods may fail to find available PVs because the node labels do not // match the storage class's allowed topologies or PV's node affinity. // A new or updated node may make pods schedulable. - {Resource: framework.Node, ActionType: framework.Add | framework.UpdateNodeLabel}, + {Event: framework.ClusterEvent{Resource: framework.Node, ActionType: framework.Add | framework.UpdateNodeLabel}}, // We rely on CSI node to translate in-tree PV to CSI. - {Resource: framework.CSINode, ActionType: framework.Add | framework.Update}, + {Event: framework.ClusterEvent{Resource: framework.CSINode, ActionType: framework.Add | framework.Update}}, // When CSIStorageCapacity is enabled, pods may become schedulable // on CSI driver & storage capacity changes. - {Resource: framework.CSIDriver, ActionType: framework.Add | framework.Update}, - {Resource: framework.CSIStorageCapacity, ActionType: framework.Add | framework.Update}, + {Event: framework.ClusterEvent{Resource: framework.CSIDriver, ActionType: framework.Add | framework.Update}}, + {Event: framework.ClusterEvent{Resource: framework.CSIStorageCapacity, ActionType: framework.Add | framework.Update}}, } return events } @@ -233,9 +233,6 @@ func getStateData(cs *framework.CycleState) (*stateData, error) { // PVCs can be matched with an available and node-compatible PV. 
func (pl *VolumeBinding) Filter(ctx context.Context, cs *framework.CycleState, pod *v1.Pod, nodeInfo *framework.NodeInfo) *framework.Status { node := nodeInfo.Node() - if node == nil { - return framework.NewStatus(framework.Error, "node not found") - } state, err := getStateData(cs) if err != nil { diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/volumerestrictions/volume_restrictions.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/volumerestrictions/volume_restrictions.go index b73752051ef3..cfe883bd83f5 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/volumerestrictions/volume_restrictions.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/volumerestrictions/volume_restrictions.go @@ -154,10 +154,26 @@ func haveOverlap(a1, a2 []string) bool { return false } +// return true if there are conflict checking targets. +func needsRestrictionsCheck(v v1.Volume) bool { + return v.GCEPersistentDisk != nil || v.AWSElasticBlockStore != nil || v.RBD != nil || v.ISCSI != nil +} + // PreFilter computes and stores cycleState containing details for enforcing ReadWriteOncePod. 
func (pl *VolumeRestrictions) PreFilter(ctx context.Context, cycleState *framework.CycleState, pod *v1.Pod) (*framework.PreFilterResult, *framework.Status) { + needsCheck := false + for i := range pod.Spec.Volumes { + if needsRestrictionsCheck(pod.Spec.Volumes[i]) { + needsCheck = true + break + } + } + if !pl.enableReadWriteOncePod { - return nil, nil + if needsCheck { + return nil, nil + } + return nil, framework.NewStatus(framework.Skip) } pvcs, err := pl.readWriteOncePodPVCsForPod(ctx, pod) @@ -172,6 +188,10 @@ func (pl *VolumeRestrictions) PreFilter(ctx context.Context, cycleState *framewo if err != nil { return nil, framework.AsStatus(err) } + + if !needsCheck && s.conflictingPVCRefCount == 0 { + return nil, framework.NewStatus(framework.Skip) + } cycleState.Write(preFilterStateKey, s) return nil, nil } @@ -257,14 +277,12 @@ func (pl *VolumeRestrictions) readWriteOncePodPVCsForPod(ctx context.Context, po // existing volumes. func satisfyVolumeConflicts(pod *v1.Pod, nodeInfo *framework.NodeInfo) bool { for i := range pod.Spec.Volumes { - v := &pod.Spec.Volumes[i] - // fast path if there is no conflict checking targets. - if v.GCEPersistentDisk == nil && v.AWSElasticBlockStore == nil && v.RBD == nil && v.ISCSI == nil { + v := pod.Spec.Volumes[i] + if !needsRestrictionsCheck(v) { continue } - for _, ev := range nodeInfo.Pods { - if isVolumeConflict(v, ev.Pod) { + if isVolumeConflict(&v, ev.Pod) { return false } } @@ -315,17 +333,17 @@ func (pl *VolumeRestrictions) Filter(ctx context.Context, cycleState *framework. // EventsToRegister returns the possible events that may make a Pod // failed by this plugin schedulable. -func (pl *VolumeRestrictions) EventsToRegister() []framework.ClusterEvent { - return []framework.ClusterEvent{ +func (pl *VolumeRestrictions) EventsToRegister() []framework.ClusterEventWithHint { + return []framework.ClusterEventWithHint{ // Pods may fail to schedule because of volumes conflicting with other pods on same node. 
// Once running pods are deleted and volumes have been released, the unschedulable pod will be schedulable. // Due to immutable fields `spec.volumes`, pod update events are ignored. - {Resource: framework.Pod, ActionType: framework.Delete}, + {Event: framework.ClusterEvent{Resource: framework.Pod, ActionType: framework.Delete}}, // A new Node may make a pod schedulable. - {Resource: framework.Node, ActionType: framework.Add}, + {Event: framework.ClusterEvent{Resource: framework.Node, ActionType: framework.Add}}, // Pods may fail to schedule because the PVC it uses has not yet been created. // This PVC is required to exist to check its access modes. - {Resource: framework.PersistentVolumeClaim, ActionType: framework.Add | framework.Update}, + {Event: framework.ClusterEvent{Resource: framework.PersistentVolumeClaim, ActionType: framework.Add | framework.Update}}, } } diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/volumezone/volume_zone.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/volumezone/volume_zone.go index f3f0f5dfe348..b0504ea7f786 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/volumezone/volume_zone.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/volumezone/volume_zone.go @@ -60,7 +60,7 @@ const ( type pvTopology struct { pvName string key string - values sets.String + values sets.Set[string] } // the state is initialized in PreFilter phase. 
because we save the pointer in @@ -107,6 +107,7 @@ func (pl *VolumeZone) PreFilter(ctx context.Context, cs *framework.CycleState, p } func (pl *VolumeZone) getPVbyPod(ctx context.Context, pod *v1.Pod) ([]pvTopology, *framework.Status) { + logger := klog.FromContext(ctx) podPVTopologies := make([]pvTopology, 0) for i := range pod.Spec.Volumes { @@ -154,13 +155,13 @@ func (pl *VolumeZone) getPVbyPod(ctx context.Context, pod *v1.Pod) ([]pvTopology if value, ok := pv.ObjectMeta.Labels[key]; ok { volumeVSet, err := volumehelpers.LabelZonesToSet(value) if err != nil { - klog.InfoS("Failed to parse label, ignoring the label", "label", fmt.Sprintf("%s:%s", key, value), "err", err) + logger.Info("Failed to parse label, ignoring the label", "label", fmt.Sprintf("%s:%s", key, value), "err", err) continue } podPVTopologies = append(podPVTopologies, pvTopology{ pvName: pv.Name, key: key, - values: volumeVSet, + values: sets.Set[string](volumeVSet), }) } } @@ -190,6 +191,7 @@ func (pl *VolumeZone) PreFilterExtensions() framework.PreFilterExtensions { // require calling out to the cloud provider. It seems that we are moving away // from inline volume declarations anyway. func (pl *VolumeZone) Filter(ctx context.Context, cs *framework.CycleState, pod *v1.Pod, nodeInfo *framework.NodeInfo) *framework.Status { + logger := klog.FromContext(ctx) // If a pod doesn't have any volume attached to it, the predicate will always be true. // Thus we make a fast path for it, to avoid unnecessary computations in this case. 
if len(pod.Spec.Volumes) == 0 { @@ -226,7 +228,7 @@ func (pl *VolumeZone) Filter(ctx context.Context, cs *framework.CycleState, pod for _, pvTopology := range podPVTopologies { v, ok := node.Labels[pvTopology.key] if !ok || !pvTopology.values.Has(v) { - klog.V(10).InfoS("Won't schedule pod onto node due to volume (mismatch on label key)", "pod", klog.KObj(pod), "node", klog.KObj(node), "PV", klog.KRef("", pvTopology.pvName), "PVLabelKey", pvTopology.key) + logger.V(10).Info("Won't schedule pod onto node due to volume (mismatch on label key)", "pod", klog.KObj(pod), "node", klog.KObj(node), "PV", klog.KRef("", pvTopology.pvName), "PVLabelKey", pvTopology.key) return framework.NewStatus(framework.UnschedulableAndUnresolvable, ErrReasonConflict) } } @@ -258,18 +260,18 @@ func getErrorAsStatus(err error) *framework.Status { // EventsToRegister returns the possible events that may make a Pod // failed by this plugin schedulable. -func (pl *VolumeZone) EventsToRegister() []framework.ClusterEvent { - return []framework.ClusterEvent{ +func (pl *VolumeZone) EventsToRegister() []framework.ClusterEventWithHint { + return []framework.ClusterEventWithHint{ // New storageClass with bind mode `VolumeBindingWaitForFirstConsumer` will make a pod schedulable. // Due to immutable field `storageClass.volumeBindingMode`, storageClass update events are ignored. - {Resource: framework.StorageClass, ActionType: framework.Add}, + {Event: framework.ClusterEvent{Resource: framework.StorageClass, ActionType: framework.Add}}, // A new node or updating a node's volume zone labels may make a pod schedulable. - {Resource: framework.Node, ActionType: framework.Add | framework.UpdateNodeLabel}, + {Event: framework.ClusterEvent{Resource: framework.Node, ActionType: framework.Add | framework.UpdateNodeLabel}}, // A new pvc may make a pod schedulable. // Due to fields are immutable except `spec.resources`, pvc update events are ignored. 
- {Resource: framework.PersistentVolumeClaim, ActionType: framework.Add}, + {Event: framework.ClusterEvent{Resource: framework.PersistentVolumeClaim, ActionType: framework.Add}}, // A new pv or updating a pv's volume zone labels may make a pod schedulable. - {Resource: framework.PersistentVolume, ActionType: framework.Add | framework.Update}, + {Event: framework.ClusterEvent{Resource: framework.PersistentVolume, ActionType: framework.Add | framework.Update}}, } } diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/preemption/preemption.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/preemption/preemption.go index 68e215dd2a5c..15d8a34f39b2 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/preemption/preemption.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/preemption/preemption.go @@ -150,6 +150,8 @@ type Evaluator struct { // - . It's the regular happy path // and the non-empty nominatedNodeName will be applied to the preemptor pod. func (ev *Evaluator) Preempt(ctx context.Context, pod *v1.Pod, m framework.NodeToStatusMap) (*framework.PostFilterResult, *framework.Status) { + logger := klog.FromContext(ctx) + // 0) Fetch the latest version of . // It's safe to directly fetch pod here. Because the informer cache has already been // initialized when creating the Scheduler obj. @@ -157,13 +159,13 @@ func (ev *Evaluator) Preempt(ctx context.Context, pod *v1.Pod, m framework.NodeT podNamespace, podName := pod.Namespace, pod.Name pod, err := ev.PodLister.Pods(pod.Namespace).Get(pod.Name) if err != nil { - klog.ErrorS(err, "Getting the updated preemptor pod object", "pod", klog.KRef(podNamespace, podName)) + logger.Error(err, "Could not get the updated preemptor pod object", "pod", klog.KRef(podNamespace, podName)) return nil, framework.AsStatus(err) } // 1) Ensure the preemptor is eligible to preempt other pods. 
if ok, msg := ev.PodEligibleToPreemptOthers(pod, m[pod.Status.NominatedNodeName]); !ok { - klog.V(5).InfoS("Pod is not eligible for preemption", "pod", klog.KObj(pod), "reason", msg) + logger.V(5).Info("Pod is not eligible for preemption", "pod", klog.KObj(pod), "reason", msg) return nil, framework.NewStatus(framework.Unschedulable, msg) } @@ -188,13 +190,13 @@ func (ev *Evaluator) Preempt(ctx context.Context, pod *v1.Pod, m framework.NodeT } // 3) Interact with registered Extenders to filter out some candidates if needed. - candidates, status := ev.callExtenders(pod, candidates) + candidates, status := ev.callExtenders(logger, pod, candidates) if !status.IsSuccess() { return nil, status } // 4) Find the best candidate. - bestCandidate := ev.SelectCandidate(candidates) + bestCandidate := ev.SelectCandidate(logger, candidates) if bestCandidate == nil || len(bestCandidate.Name()) == 0 { return nil, framework.NewStatus(framework.Unschedulable, "no candidate node for preemption") } @@ -217,12 +219,13 @@ func (ev *Evaluator) findCandidates(ctx context.Context, pod *v1.Pod, m framewor if len(allNodes) == 0 { return nil, nil, errors.New("no nodes available") } + logger := klog.FromContext(ctx) potentialNodes, unschedulableNodeStatus := nodesWherePreemptionMightHelp(allNodes, m) if len(potentialNodes) == 0 { - klog.V(3).InfoS("Preemption will not help schedule pod on any node", "pod", klog.KObj(pod)) + logger.V(3).Info("Preemption will not help schedule pod on any node", "pod", klog.KObj(pod)) // In this case, we should clean-up any existing nominated node name of the pod. if err := util.ClearNominatedNodeName(ctx, ev.Handler.ClientSet(), pod); err != nil { - klog.ErrorS(err, "Cannot clear 'NominatedNodeName' field of pod", "pod", klog.KObj(pod)) + logger.Error(err, "Could not clear the nominatedNodeName field of pod", "pod", klog.KObj(pod)) // We do not return as this error is not critical. 
} return nil, unschedulableNodeStatus, nil @@ -234,12 +237,12 @@ func (ev *Evaluator) findCandidates(ctx context.Context, pod *v1.Pod, m framewor } offset, numCandidates := ev.GetOffsetAndNumCandidates(int32(len(potentialNodes))) - if klogV := klog.V(5); klogV.Enabled() { + if loggerV := logger.V(5); logger.Enabled() { var sample []string for i := offset; i < offset+10 && i < int32(len(potentialNodes)); i++ { sample = append(sample, potentialNodes[i].Node().Name) } - klogV.InfoS("Selecting candidates from a pool of nodes", "potentialNodesCount", len(potentialNodes), "offset", offset, "sampleLength", len(sample), "sample", sample, "candidates", numCandidates) + loggerV.Info("Selected candidates from a pool of nodes", "potentialNodesCount", len(potentialNodes), "offset", offset, "sampleLength", len(sample), "sample", sample, "candidates", numCandidates) } candidates, nodeStatuses, err := ev.DryRunPreemption(ctx, pod, potentialNodes, pdbs, offset, numCandidates) for node, nodeStatus := range unschedulableNodeStatus { @@ -252,7 +255,7 @@ func (ev *Evaluator) findCandidates(ctx context.Context, pod *v1.Pod, m framewor // We will only check with extenders that support preemption. // Extenders which do not support preemption may later prevent preemptor from being scheduled on the nominated // node. In that case, scheduler will find a different host for the preemptor in subsequent scheduling cycles. 
-func (ev *Evaluator) callExtenders(pod *v1.Pod, candidates []Candidate) ([]Candidate, *framework.Status) { +func (ev *Evaluator) callExtenders(logger klog.Logger, pod *v1.Pod, candidates []Candidate) ([]Candidate, *framework.Status) { extenders := ev.Handler.Extenders() nodeLister := ev.Handler.SnapshotSharedLister().NodeInfos() if len(extenders) == 0 { @@ -272,8 +275,8 @@ func (ev *Evaluator) callExtenders(pod *v1.Pod, candidates []Candidate) ([]Candi nodeNameToVictims, err := extender.ProcessPreemption(pod, victimsMap, nodeLister) if err != nil { if extender.IsIgnorable() { - klog.InfoS("Skipping extender as it returned error and has ignorable flag set", - "extender", extender, "err", err) + logger.Info("Skipped extender as it returned error and has ignorable flag set", + "extender", extender.Name(), "err", err) continue } return nil, framework.AsStatus(err) @@ -283,7 +286,7 @@ func (ev *Evaluator) callExtenders(pod *v1.Pod, candidates []Candidate) ([]Candi if victims == nil || len(victims.Pods) == 0 { if extender.IsIgnorable() { delete(nodeNameToVictims, nodeName) - klog.InfoS("Ignoring node without victims", "node", klog.KRef("", nodeName)) + logger.Info("Ignored node for which the extender didn't report victims", "node", klog.KRef("", nodeName), "extender", extender.Name()) continue } return nil, framework.AsStatus(fmt.Errorf("expected at least one victim pod on node %q", nodeName)) @@ -312,7 +315,7 @@ func (ev *Evaluator) callExtenders(pod *v1.Pod, candidates []Candidate) ([]Candi // SelectCandidate chooses the best-fit candidate from given and return it. // NOTE: This method is exported for easier testing in default preemption. 
-func (ev *Evaluator) SelectCandidate(candidates []Candidate) Candidate { +func (ev *Evaluator) SelectCandidate(logger klog.Logger, candidates []Candidate) Candidate { if len(candidates) == 0 { return nil } @@ -321,7 +324,7 @@ func (ev *Evaluator) SelectCandidate(candidates []Candidate) Candidate { } victimsMap := ev.CandidatesToVictimsMap(candidates) - candidateNode := pickOneNodeForPreemption(victimsMap) + candidateNode := pickOneNodeForPreemption(logger, victimsMap) // Same as candidatesToVictimsMap, this logic is not applicable for out-of-tree // preemption plugins that exercise different candidates on the same nominated node. @@ -333,7 +336,7 @@ func (ev *Evaluator) SelectCandidate(candidates []Candidate) Candidate { } // We shouldn't reach here. - klog.ErrorS(errors.New("no candidate selected"), "Should not reach here", "candidates", candidates) + logger.Error(errors.New("no candidate selected"), "Should not reach here", "candidates", candidates) // To not break the whole flow, return the first candidate. return candidates[0] } @@ -348,6 +351,7 @@ func (ev *Evaluator) prepareCandidate(ctx context.Context, c Candidate, pod *v1. ctx, cancel := context.WithCancel(ctx) defer cancel() + logger := klog.FromContext(ctx) errCh := parallelize.NewErrorChannel() preemptPod := func(index int) { victim := c.Victims().Pods[index] @@ -355,6 +359,7 @@ func (ev *Evaluator) prepareCandidate(ctx context.Context, c Candidate, pod *v1. // Otherwise we should delete the victim. 
if waitingPod := fh.GetWaitingPod(victim.UID); waitingPod != nil { waitingPod.Reject(pluginName, "preempted") + klog.V(2).InfoS("Preemptor pod rejected a waiting pod", "preemptor", klog.KObj(pod), "waitingPod", klog.KObj(victim), "node", c.Name()) } else { if feature.DefaultFeatureGate.Enabled(features.PodDisruptionConditions) { victimPodApply := corev1apply.Pod(victim.Name, victim.Namespace).WithStatus(corev1apply.PodStatus()) @@ -367,17 +372,19 @@ func (ev *Evaluator) prepareCandidate(ctx context.Context, c Candidate, pod *v1. ) if _, err := cs.CoreV1().Pods(victim.Namespace).ApplyStatus(ctx, victimPodApply, metav1.ApplyOptions{FieldManager: fieldManager, Force: true}); err != nil { - klog.ErrorS(err, "Preparing pod preemption", "pod", klog.KObj(victim), "preemptor", klog.KObj(pod)) + logger.Error(err, "Could not add DisruptionTarget condition due to preemption", "pod", klog.KObj(victim), "preemptor", klog.KObj(pod)) errCh.SendErrorWithCancel(err, cancel) return } } if err := util.DeletePod(ctx, cs, victim); err != nil { - klog.ErrorS(err, "Preempting pod", "pod", klog.KObj(victim), "preemptor", klog.KObj(pod)) + logger.Error(err, "Preempted pod", "pod", klog.KObj(victim), "preemptor", klog.KObj(pod)) errCh.SendErrorWithCancel(err, cancel) return } + klog.V(2).InfoS("Preemptor Pod preempted victim Pod", "preemptor", klog.KObj(pod), "victim", klog.KObj(victim), "node", c.Name()) } + fh.EventRecorder().Eventf(victim, pod, v1.EventTypeNormal, "Preempted", "Preempting", "Preempted by a pod on node %v", c.Name()) } @@ -392,9 +399,9 @@ func (ev *Evaluator) prepareCandidate(ctx context.Context, c Candidate, pod *v1. // this node. So, we should remove their nomination. Removing their // nomination updates these pods and moves them to the active queue. It // lets scheduler find another place for them. 
- nominatedPods := getLowerPriorityNominatedPods(fh, pod, c.Name()) + nominatedPods := getLowerPriorityNominatedPods(logger, fh, pod, c.Name()) if err := util.ClearNominatedNodeName(ctx, cs, nominatedPods...); err != nil { - klog.ErrorS(err, "Cannot clear 'NominatedNodeName' field") + logger.Error(err, "Cannot clear 'NominatedNodeName' field") // We do not return as this error is not critical. } @@ -437,7 +444,7 @@ func getPodDisruptionBudgets(pdbLister policylisters.PodDisruptionBudgetLister) // 6. If there are still ties, the first such node is picked (sort of randomly). // The 'minNodes1' and 'minNodes2' are being reused here to save the memory // allocation and garbage collection time. -func pickOneNodeForPreemption(nodesToVictims map[string]*extenderv1.Victims) string { +func pickOneNodeForPreemption(logger klog.Logger, nodesToVictims map[string]*extenderv1.Victims) string { if len(nodesToVictims) == 0 { return "" } @@ -477,7 +484,7 @@ func pickOneNodeForPreemption(nodesToVictims map[string]*extenderv1.Victims) str // Get earliest start time of all pods on the current node. earliestStartTimeOnNode := util.GetEarliestPodStartTime(nodesToVictims[node]) if earliestStartTimeOnNode == nil { - klog.ErrorS(errors.New("earliestStartTime is nil for node"), "Should not reach here", "node", node) + logger.Error(errors.New("earliestStartTime is nil for node"), "Should not reach here", "node", node) return int64(math.MinInt64) } // The bigger the earliestStartTimeOnNode, the higher the score. @@ -530,7 +537,7 @@ func pickOneNodeForPreemption(nodesToVictims map[string]*extenderv1.Victims) str // manipulation of NodeInfo and PreFilter state per nominated pod. It may not be // worth the complexity, especially because we generally expect to have a very // small number of nominated pods per node. 
-func getLowerPriorityNominatedPods(pn framework.PodNominator, pod *v1.Pod, nodeName string) []*v1.Pod { +func getLowerPriorityNominatedPods(logger klog.Logger, pn framework.PodNominator, pod *v1.Pod, nodeName string) []*v1.Pod { podInfos := pn.NominatedPodsForNode(nodeName) if len(podInfos) == 0 { diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/runtime/framework.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/runtime/framework.go index d8684f5ae01c..f83b3d454721 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/runtime/framework.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/runtime/framework.go @@ -44,15 +44,6 @@ const ( maxTimeout = 15 * time.Minute ) -var allClusterEvents = []framework.ClusterEvent{ - {Resource: framework.Pod, ActionType: framework.All}, - {Resource: framework.Node, ActionType: framework.All}, - {Resource: framework.CSINode, ActionType: framework.All}, - {Resource: framework.PersistentVolume, ActionType: framework.All}, - {Resource: framework.PersistentVolumeClaim, ActionType: framework.All}, - {Resource: framework.StorageClass, ActionType: framework.All}, -} - // frameworkImpl is the component responsible for initializing and running scheduler // plugins. 
type frameworkImpl struct { @@ -61,6 +52,7 @@ type frameworkImpl struct { waitingPods *waitingPodsMap scorePluginWeight map[string]int preEnqueuePlugins []framework.PreEnqueuePlugin + enqueueExtensions []framework.EnqueueExtensions queueSortPlugins []framework.QueueSortPlugin preFilterPlugins []framework.PreFilterPlugin filterPlugins []framework.FilterPlugin @@ -77,6 +69,7 @@ type frameworkImpl struct { kubeConfig *restclient.Config eventRecorder events.EventRecorder informerFactory informers.SharedInformerFactory + logger klog.Logger metricsRecorder *metrics.MetricAsyncRecorder profileName string @@ -132,8 +125,8 @@ type frameworkOptions struct { podNominator framework.PodNominator extenders []framework.Extender captureProfile CaptureProfile - clusterEventMap map[framework.ClusterEvent]sets.String parallelizer parallelize.Parallelizer + logger *klog.Logger } // Option for the frameworkImpl. @@ -142,7 +135,7 @@ type Option func(*frameworkOptions) // WithComponentConfigVersion sets the component config version to the // KubeSchedulerConfiguration version used. The string should be the full // scheme group/version of the external type we converted from (for example -// "kubescheduler.config.k8s.io/v1beta2") +// "kubescheduler.config.k8s.io/v1") func WithComponentConfigVersion(componentConfigVersion string) Option { return func(o *frameworkOptions) { o.componentConfigVersion = componentConfigVersion @@ -215,17 +208,17 @@ func WithCaptureProfile(c CaptureProfile) Option { } } -// WithClusterEventMap sets clusterEventMap for the scheduling frameworkImpl. -func WithClusterEventMap(m map[framework.ClusterEvent]sets.String) Option { +// WithMetricsRecorder sets metrics recorder for the scheduling frameworkImpl. +func WithMetricsRecorder(r *metrics.MetricAsyncRecorder) Option { return func(o *frameworkOptions) { - o.clusterEventMap = m + o.metricsRecorder = r } } -// WithMetricsRecorder sets metrics recorder for the scheduling frameworkImpl. 
-func WithMetricsRecorder(r *metrics.MetricAsyncRecorder) Option { +// WithLogger overrides the default logger from k8s.io/klog. +func WithLogger(logger klog.Logger) Option { return func(o *frameworkOptions) { - o.metricsRecorder = r + o.logger = &logger } } @@ -233,7 +226,6 @@ func WithMetricsRecorder(r *metrics.MetricAsyncRecorder) Option { func defaultFrameworkOptions(stopCh <-chan struct{}) frameworkOptions { return frameworkOptions{ metricsRecorder: metrics.NewMetricsAsyncRecorder(1000, time.Second, stopCh), - clusterEventMap: make(map[framework.ClusterEvent]sets.String), parallelizer: parallelize.NewParallelizer(parallelize.DefaultParallelism), } } @@ -241,12 +233,17 @@ func defaultFrameworkOptions(stopCh <-chan struct{}) frameworkOptions { var _ framework.Framework = &frameworkImpl{} // NewFramework initializes plugins given the configuration and the registry. -func NewFramework(r Registry, profile *config.KubeSchedulerProfile, stopCh <-chan struct{}, opts ...Option) (framework.Framework, error) { - options := defaultFrameworkOptions(stopCh) +func NewFramework(ctx context.Context, r Registry, profile *config.KubeSchedulerProfile, opts ...Option) (framework.Framework, error) { + options := defaultFrameworkOptions(ctx.Done()) for _, opt := range opts { opt(&options) } + logger := klog.FromContext(ctx) + if options.logger != nil { + logger = *options.logger + } + f := &frameworkImpl{ registry: r, snapshotSharedLister: options.snapshotSharedLister, @@ -260,6 +257,7 @@ func NewFramework(r Registry, profile *config.KubeSchedulerProfile, stopCh <-cha extenders: options.extenders, PodNominator: options.podNominator, parallelizer: options.parallelizer, + logger: logger, } if profile == nil { @@ -310,8 +308,7 @@ func NewFramework(r Registry, profile *config.KubeSchedulerProfile, stopCh <-cha } pluginsMap[name] = p - // Update ClusterEventMap in place. 
- fillEventToPluginMap(p, options.clusterEventMap) + f.fillEnqueueExtensions(p) } // initialize plugins per individual extension points @@ -323,7 +320,7 @@ func NewFramework(r Registry, profile *config.KubeSchedulerProfile, stopCh <-cha // initialize multiPoint plugins to their expanded extension points if len(profile.Plugins.MultiPoint.Enabled) > 0 { - if err := f.expandMultiPointPlugins(profile, pluginsMap); err != nil { + if err := f.expandMultiPointPlugins(logger, profile, pluginsMap); err != nil { return nil, err } } @@ -358,9 +355,41 @@ func NewFramework(r Registry, profile *config.KubeSchedulerProfile, stopCh <-cha options.captureProfile(outputProfile) } + f.setInstrumentedPlugins() return f, nil } +// setInstrumentedPlugins initializes instrumented plugins from current plugins that frameworkImpl has. +func (f *frameworkImpl) setInstrumentedPlugins() { + // Cache metric streams for prefilter and filter plugins. + for i, pl := range f.preFilterPlugins { + f.preFilterPlugins[i] = &instrumentedPreFilterPlugin{ + PreFilterPlugin: f.preFilterPlugins[i], + metric: metrics.PluginEvaluationTotal.WithLabelValues(pl.Name(), metrics.PreFilter, f.profileName), + } + } + for i, pl := range f.filterPlugins { + f.filterPlugins[i] = &instrumentedFilterPlugin{ + FilterPlugin: f.filterPlugins[i], + metric: metrics.PluginEvaluationTotal.WithLabelValues(pl.Name(), metrics.Filter, f.profileName), + } + } + + // Cache metric streams for prescore and score plugins. 
+ for i, pl := range f.preScorePlugins { + f.preScorePlugins[i] = &instrumentedPreScorePlugin{ + PreScorePlugin: f.preScorePlugins[i], + metric: metrics.PluginEvaluationTotal.WithLabelValues(pl.Name(), metrics.PreScore, f.profileName), + } + } + for i, pl := range f.scorePlugins { + f.scorePlugins[i] = &instrumentedScorePlugin{ + ScorePlugin: f.scorePlugins[i], + metric: metrics.PluginEvaluationTotal.WithLabelValues(pl.Name(), metrics.Score, f.profileName), + } + } +} + func (f *frameworkImpl) SetPodNominator(n framework.PodNominator) { f.PodNominator = n } @@ -429,7 +458,7 @@ func (os *orderedSet) delete(s string) { } } -func (f *frameworkImpl) expandMultiPointPlugins(profile *config.KubeSchedulerProfile, pluginsMap map[string]framework.Plugin) error { +func (f *frameworkImpl) expandMultiPointPlugins(logger klog.Logger, profile *config.KubeSchedulerProfile, pluginsMap map[string]framework.Plugin) error { // initialize MultiPoint plugins for _, e := range f.getExtensionPoints(profile.Plugins) { plugins := reflect.ValueOf(e.slicePtr).Elem() @@ -441,12 +470,12 @@ func (f *frameworkImpl) expandMultiPointPlugins(profile *config.KubeSchedulerPro enabledSet.insert(plugin.Name) } - disabledSet := sets.NewString() + disabledSet := sets.New[string]() for _, disabledPlugin := range e.plugins.Disabled { disabledSet.Insert(disabledPlugin.Name) } if disabledSet.Has("*") { - klog.V(4).InfoS("all plugins disabled for extension point, skipping MultiPoint expansion", "extension", pluginType) + logger.V(4).Info("Skipped MultiPoint expansion because all plugins are disabled for extension point", "extension", pluginType) continue } @@ -467,7 +496,7 @@ func (f *frameworkImpl) expandMultiPointPlugins(profile *config.KubeSchedulerPro // a plugin that's enabled via MultiPoint can still be disabled for specific extension points if disabledSet.Has(ep.Name) { - klog.V(4).InfoS("plugin disabled for extension point", "plugin", ep.Name, "extension", pluginType) + logger.V(4).Info("Skipped disabled plugin for extension point", "plugin", ep.Name, "extension", pluginType) continue } @@ -477,7 +506,7 @@ func (f *frameworkImpl) expandMultiPointPlugins(profile *config.KubeSchedulerPro // This maintains expected behavior for overriding default plugins (see https://github.com/kubernetes/kubernetes/pull/99582) if enabledSet.has(ep.Name) { overridePlugins.insert(ep.Name) - klog.InfoS("MultiPoint plugin is explicitly re-configured; overriding", "plugin", ep.Name) + logger.Info("MultiPoint plugin is explicitly re-configured; overriding", "plugin", ep.Name) continue } @@ -516,41 +545,37 @@ func (f *frameworkImpl) expandMultiPointPlugins(profile *config.KubeSchedulerPro return nil } -func fillEventToPluginMap(p framework.Plugin, eventToPlugins map[framework.ClusterEvent]sets.String) { +func (f *frameworkImpl) fillEnqueueExtensions(p framework.Plugin) { ext, ok := p.(framework.EnqueueExtensions) if !ok { - // If interface EnqueueExtensions is not implemented, register the default events - // to the plugin. This is to ensure backward compatibility. - registerClusterEvents(p.Name(), eventToPlugins, allClusterEvents) + // If interface EnqueueExtensions is not implemented, register the default enqueue extensions + // to the plugin because we don't know which events the plugin is interested in. + // This is to ensure backward compatibility. + f.enqueueExtensions = append(f.enqueueExtensions, &defaultEnqueueExtension{pluginName: p.Name()}) return } - events := ext.EventsToRegister() - // It's rare that a plugin implements EnqueueExtensions but returns nil. - // We treat it as: the plugin is not interested in any event, and hence pod failed by that plugin - // cannot be moved by any regular cluster event. - if len(events) == 0 { - klog.InfoS("Plugin's EventsToRegister() returned nil", "plugin", p.Name()) - return - } - // The most common case: a plugin implements EnqueueExtensions and returns non-nil result.
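The instrumented plugin wrappers introduced by this diff (`setInstrumentedPlugins` and the new `instrumented_plugins.go`) use Go's struct embedding to decorate an interface with a metric counter while forwarding every other method unchanged. A minimal, self-contained sketch of that pattern follows; the `Plugin` interface, `alwaysFit`, and the plain `int` counter are hypothetical stand-ins (the real code embeds `framework.FilterPlugin` etc. and increments a `component-base` Prometheus counter):

```go
package main

import "fmt"

// Plugin is a hypothetical stand-in for a scheduler extension point interface.
type Plugin interface {
	Name() string
	Evaluate(pod string) bool
}

type alwaysFit struct{}

func (alwaysFit) Name() string             { return "AlwaysFit" }
func (alwaysFit) Evaluate(pod string) bool { return true }

// instrumentedPlugin wraps another Plugin and counts evaluations,
// mirroring how instrumentedFilterPlugin embeds framework.FilterPlugin.
type instrumentedPlugin struct {
	Plugin      // embedding forwards Name() and any other methods unchanged
	evaluations int
}

func (p *instrumentedPlugin) Evaluate(pod string) bool {
	p.evaluations++ // in the real code this is a Prometheus CounterMetric.Inc()
	return p.Plugin.Evaluate(pod)
}

func main() {
	p := &instrumentedPlugin{Plugin: alwaysFit{}}
	p.Evaluate("pod-a")
	p.Evaluate("pod-b")
	fmt.Println(p.Name(), p.evaluations) // prints "AlwaysFit 2"
}
```

Because the wrapper satisfies the same interface as the wrapped plugin, it can be swapped into the existing plugin slices in place, which is exactly what `setInstrumentedPlugins` does after `NewFramework` finishes building them.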
- registerClusterEvents(p.Name(), eventToPlugins, events) + f.enqueueExtensions = append(f.enqueueExtensions, ext) } -func registerClusterEvents(name string, eventToPlugins map[framework.ClusterEvent]sets.String, evts []framework.ClusterEvent) { - for _, evt := range evts { - if eventToPlugins[evt] == nil { - eventToPlugins[evt] = sets.NewString(name) - } else { - eventToPlugins[evt].Insert(name) - } - } +// defaultEnqueueExtension is used when a plugin does not implement EnqueueExtensions interface. +type defaultEnqueueExtension struct { + pluginName string +} + +func (p *defaultEnqueueExtension) Name() string { return p.pluginName } +func (p *defaultEnqueueExtension) EventsToRegister() []framework.ClusterEventWithHint { + // need to return all specific cluster events with framework.All action instead of wildcard event + // because the returning values are used to register event handlers. + // If we return the wildcard here, it won't affect the event handlers registered by the plugin + // and some events may not be registered in the event handlers. + return framework.UnrollWildCardResource() } func updatePluginList(pluginList interface{}, pluginSet config.PluginSet, pluginsMap map[string]framework.Plugin) error { plugins := reflect.ValueOf(pluginList).Elem() pluginType := plugins.Type().Elem() - set := sets.NewString() + set := sets.New[string]() for _, ep := range pluginSet.Enabled { pg, ok := pluginsMap[ep.Name] if !ok { @@ -573,11 +598,16 @@ func updatePluginList(pluginList interface{}, pluginSet config.PluginSet, plugin return nil } -// EnqueuePlugins returns the registered enqueue plugins. +// PreEnqueuePlugins returns the registered preEnqueue plugins. func (f *frameworkImpl) PreEnqueuePlugins() []framework.PreEnqueuePlugin { return f.preEnqueuePlugins } +// EnqueueExtensions returns the registered reenqueue plugins. 
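The `defaultEnqueueExtension` fallback above is an instance of a common Go idiom: type-assert a plugin against an optional interface, and substitute a conservative default implementation when the assertion fails. A simplified sketch, with hypothetical `basicPlugin`/`nodePlugin` types and string events in place of `framework.ClusterEventWithHint`:

```go
package main

import "fmt"

// EnqueueExtensions is an optional interface a plugin may implement.
type EnqueueExtensions interface {
	EventsToRegister() []string
}

type Plugin interface{ Name() string }

// basicPlugin does NOT implement EnqueueExtensions.
type basicPlugin struct{}

func (basicPlugin) Name() string { return "Basic" }

// nodePlugin declares the events it cares about.
type nodePlugin struct{}

func (nodePlugin) Name() string                { return "NodeAffinity" }
func (nodePlugin) EventsToRegister() []string { return []string{"Node/Add"} }

// defaultEnqueueExtension stands in for plugins that don't declare events:
// since we don't know what they are interested in, register everything,
// just as the framework's defaultEnqueueExtension returns all events.
type defaultEnqueueExtension struct{ pluginName string }

func (d defaultEnqueueExtension) EventsToRegister() []string { return []string{"*/All"} }

func extensionsFor(p Plugin) EnqueueExtensions {
	if ext, ok := p.(EnqueueExtensions); ok {
		return ext // the common case: the plugin tells us its events
	}
	return defaultEnqueueExtension{pluginName: p.Name()} // backward-compatible fallback
}

func main() {
	fmt.Println(extensionsFor(basicPlugin{}).EventsToRegister()) // [*/All]
	fmt.Println(extensionsFor(nodePlugin{}).EventsToRegister())  // [Node/Add]
}
```

The fallback keeps older plugins working at the cost of more requeue events, which is the backward-compatibility trade-off the diff's comment calls out.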
+func (f *frameworkImpl) EnqueueExtensions() []framework.EnqueueExtensions { + return f.enqueueExtensions +} + // QueueSortFunc returns the function to sort pods in scheduling queue func (f *frameworkImpl) QueueSortFunc() framework.LessFunc { if f == nil { @@ -608,13 +638,20 @@ func (f *frameworkImpl) RunPreFilterPlugins(ctx context.Context, state *framewor var result *framework.PreFilterResult var pluginsWithNodes []string skipPlugins := sets.New[string]() + logger := klog.FromContext(ctx) + logger = klog.LoggerWithName(logger, "PreFilter") + // TODO(knelasevero): Remove duplicated keys from log entry calls + // When contextualized logging hits GA + // https://github.com/kubernetes/kubernetes/issues/111672 + logger = klog.LoggerWithValues(logger, "pod", klog.KObj(pod)) for _, pl := range f.preFilterPlugins { + logger := klog.LoggerWithName(logger, pl.Name()) + ctx := klog.NewContext(ctx, logger) r, s := f.runPreFilterPlugin(ctx, pl, state, pod) if s.IsSkip() { skipPlugins.Insert(pl.Name()) continue } - metrics.PluginEvaluationTotal.WithLabelValues(pl.Name(), metrics.PreFilter, f.profileName).Inc() if !s.IsSuccess() { s.SetFailedPlugin(pl.Name()) if s.IsUnschedulable() { @@ -658,14 +695,22 @@ func (f *frameworkImpl) RunPreFilterExtensionAddPod( podInfoToAdd *framework.PodInfo, nodeInfo *framework.NodeInfo, ) (status *framework.Status) { + logger := klog.FromContext(ctx) + logger = klog.LoggerWithName(logger, "PreFilterExtension") + // TODO(knelasevero): Remove duplicated keys from log entry calls + // When contextualized logging hits GA + // https://github.com/kubernetes/kubernetes/issues/111672 + logger = klog.LoggerWithValues(logger, "pod", klog.KObj(podToSchedule), "node", klog.KObj(nodeInfo.Node()), "operation", "addPod") for _, pl := range f.preFilterPlugins { if pl.PreFilterExtensions() == nil || state.SkipFilterPlugins.Has(pl.Name()) { continue } + logger := klog.LoggerWithName(logger, pl.Name()) + ctx := klog.NewContext(ctx, logger) status = f.runPreFilterExtensionAddPod(ctx, pl, state, podToSchedule, podInfoToAdd, nodeInfo) if !status.IsSuccess() { err := status.AsError() - klog.ErrorS(err, "Failed running AddPod on PreFilter plugin", "plugin", pl.Name(), "pod", klog.KObj(podToSchedule)) + logger.Error(err, "Plugin failed", "pod", klog.KObj(podToSchedule), "node", klog.KObj(nodeInfo.Node()), "operation", "addPod", "plugin", pl.Name()) return framework.AsStatus(fmt.Errorf("running AddPod on PreFilter plugin %q: %w", pl.Name(), err)) } } @@ -693,14 +738,22 @@ func (f *frameworkImpl) RunPreFilterExtensionRemovePod( podInfoToRemove *framework.PodInfo, nodeInfo *framework.NodeInfo, ) (status *framework.Status) { + logger := klog.FromContext(ctx) + logger = klog.LoggerWithName(logger, "PreFilterExtension") + // TODO(knelasevero): Remove duplicated keys from log entry calls + // When contextualized logging hits GA + // https://github.com/kubernetes/kubernetes/issues/111672 + logger = klog.LoggerWithValues(logger, klog.KObj(podToSchedule), "node", klog.KObj(nodeInfo.Node())) for _, pl := range f.preFilterPlugins { if pl.PreFilterExtensions() == nil || state.SkipFilterPlugins.Has(pl.Name()) { continue } + logger := klog.LoggerWithName(logger, pl.Name()) + ctx := klog.NewContext(ctx, logger) status = f.runPreFilterExtensionRemovePod(ctx, pl, state, podToSchedule, podInfoToRemove, nodeInfo) if !status.IsSuccess() { err := status.AsError() - klog.ErrorS(err, "Failed running RemovePod on PreFilter plugin", "plugin", pl.Name(), "pod", klog.KObj(podToSchedule)) + logger.Error(err, "Plugin failed", "node", klog.KObj(nodeInfo.Node()), "operation", "removePod", "plugin", pl.Name(), "pod", klog.KObj(podToSchedule)) return framework.AsStatus(fmt.Errorf("running RemovePod on PreFilter plugin %q: %w", pl.Name(), err)) } } @@ -728,11 +781,19 @@ func (f *frameworkImpl) RunFilterPlugins( pod *v1.Pod, nodeInfo *framework.NodeInfo, ) *framework.Status { + logger := klog.FromContext(ctx) + logger = klog.LoggerWithName(logger, "Filter") + // TODO(knelasevero): Remove duplicated keys from log entry calls + // When contextualized logging hits GA + // https://github.com/kubernetes/kubernetes/issues/111672 + logger = klog.LoggerWithValues(logger, "pod", klog.KObj(pod), "node", klog.KObj(nodeInfo.Node())) + for _, pl := range f.filterPlugins { + logger := klog.LoggerWithName(logger, pl.Name()) + ctx := klog.NewContext(ctx, logger) if state.SkipFilterPlugins.Has(pl.Name()) { continue } - metrics.PluginEvaluationTotal.WithLabelValues(pl.Name(), metrics.Filter, f.profileName).Inc() if status := f.runFilterPlugin(ctx, pl, state, pod, nodeInfo); !status.IsSuccess() { if !status.IsUnschedulable() { // Filter plugins are not supposed to return any status other than @@ -765,11 +826,20 @@ func (f *frameworkImpl) RunPostFilterPlugins(ctx context.Context, state *framewo metrics.FrameworkExtensionPointDuration.WithLabelValues(metrics.PostFilter, status.Code().String(), f.profileName).Observe(metrics.SinceInSeconds(startTime)) }() + logger := klog.FromContext(ctx) + logger = klog.LoggerWithName(logger, "PostFilter") + // TODO(knelasevero): Remove duplicated keys from log entry calls + // When contextualized logging hits GA + // https://github.com/kubernetes/kubernetes/issues/111672 + logger = klog.LoggerWithValues(logger, "pod", klog.KObj(pod)) + // `result` records the last meaningful(non-noop) PostFilterResult. var result *framework.PostFilterResult var reasons []string var failedPlugin string for _, pl := range f.postFilterPlugins { + logger := klog.LoggerWithName(logger, pl.Name()) + ctx := klog.NewContext(ctx, logger) r, s := f.runPostFilterPlugin(ctx, pl, state, pod, filteredNodeStatusMap) if s.IsSuccess() { return r, s @@ -835,6 +905,9 @@ func (f *frameworkImpl) RunFilterPluginsWithNominatedPods(ctx context.Context, s // the nominated pods are treated as not running. We can't just assume the // nominated pods are running because they are not running right now and in fact, // they may end up getting scheduled to a different node. + logger := klog.FromContext(ctx) + logger = klog.LoggerWithName(logger, "FilterWithNominatedPods") + ctx = klog.NewContext(ctx, logger) for i := 0; i < 2; i++ { stateToUse := state nodeInfoToUse := info @@ -861,7 +934,7 @@ func (f *frameworkImpl) RunFilterPluginsWithNominatedPods(ctx context.Context, s // to run on the node. It returns 1) whether any pod was added, 2) augmented cycleState, // 3) augmented nodeInfo. func addNominatedPods(ctx context.Context, fh framework.Handle, pod *v1.Pod, state *framework.CycleState, nodeInfo *framework.NodeInfo) (bool, *framework.CycleState, *framework.NodeInfo, error) { - if fh == nil || nodeInfo.Node() == nil { + if fh == nil { // This may happen only in tests. return false, state, nodeInfo, nil } @@ -900,7 +973,15 @@ func (f *frameworkImpl) RunPreScorePlugins( metrics.FrameworkExtensionPointDuration.WithLabelValues(metrics.PreScore, status.Code().String(), f.profileName).Observe(metrics.SinceInSeconds(startTime)) }() skipPlugins := sets.New[string]() + logger := klog.FromContext(ctx) + logger = klog.LoggerWithName(logger, "PreScore") + // TODO(knelasevero): Remove duplicated keys from log entry calls + // When contextualized logging hits GA + // https://github.com/kubernetes/kubernetes/issues/111672 + logger = klog.LoggerWithValues(logger, "pod", klog.KObj(pod)) for _, pl := range f.preScorePlugins { + logger := klog.LoggerWithName(logger, pl.Name()) + ctx := klog.NewContext(ctx, logger) status = f.runPreScorePlugin(ctx, pl, state, pod, nodes) if status.IsSkip() { skipPlugins.Insert(pl.Name()) @@ -949,10 +1030,19 @@ func (f *frameworkImpl) RunScorePlugins(ctx context.Context, state *framework.Cy errCh := parallelize.NewErrorChannel() if len(plugins) > 0 { + logger := klog.FromContext(ctx) + logger = klog.LoggerWithName(logger, "Score") + // TODO(knelasevero): Remove duplicated keys from log entry calls + // When contextualized logging hits GA + // https://github.com/kubernetes/kubernetes/issues/111672 + logger = klog.LoggerWithValues(logger, "pod", klog.KObj(pod)) // Run Score method for each node in parallel. f.Parallelizer().Until(ctx, len(nodes), func(index int) { nodeName := nodes[index].Name + logger := klog.LoggerWithValues(logger, "node", klog.ObjectRef{Name: nodeName}) for _, pl := range plugins { + logger := klog.LoggerWithName(logger, pl.Name()) + ctx := klog.NewContext(ctx, logger) s, status := f.runScorePlugin(ctx, pl, state, pod, nodeName) if !status.IsSuccess() { err := fmt.Errorf("plugin %q failed with: %w", pl.Name(), status.AsError()) @@ -1050,16 +1140,24 @@ func (f *frameworkImpl) RunPreBindPlugins(ctx context.Context, state *framework. defer func() { metrics.FrameworkExtensionPointDuration.WithLabelValues(metrics.PreBind, status.Code().String(), f.profileName).Observe(metrics.SinceInSeconds(startTime)) }() + logger := klog.FromContext(ctx) + logger = klog.LoggerWithName(logger, "PreBind") + // TODO(knelasevero): Remove duplicated keys from log entry calls + // When contextualized logging hits GA + // https://github.com/kubernetes/kubernetes/issues/111672 + logger = klog.LoggerWithValues(logger, "pod", klog.KObj(pod), "node", klog.ObjectRef{Name: nodeName}) for _, pl := range f.preBindPlugins { + logger := klog.LoggerWithName(logger, pl.Name()) + ctx := klog.NewContext(ctx, logger) status = f.runPreBindPlugin(ctx, pl, state, pod, nodeName) if !status.IsSuccess() { if status.IsUnschedulable() { - klog.V(4).InfoS("Pod rejected by PreBind plugin", "pod", klog.KObj(pod), "node", nodeName, "plugin", pl.Name(), "status", status.Message()) + logger.V(4).Info("Pod rejected by PreBind plugin", "pod", klog.KObj(pod), "node", nodeName, "plugin", pl.Name(), "status", status.Message()) status.SetFailedPlugin(pl.Name()) return status } err := status.AsError() - klog.ErrorS(err, "Failed running PreBind plugin", "plugin", pl.Name(), "pod", klog.KObj(pod), "node", nodeName) + logger.Error(err, "Plugin failed", "plugin", pl.Name(), "pod", klog.KObj(pod), "node", nodeName) return framework.AsStatus(fmt.Errorf("running PreBind plugin %q: %w", pl.Name(), err)) } } @@ -1085,19 +1183,27 @@ func (f *frameworkImpl) RunBindPlugins(ctx context.Context, state *framework.Cyc if len(f.bindPlugins) == 0 { return framework.NewStatus(framework.Skip, "") } + logger := klog.FromContext(ctx) + logger = klog.LoggerWithName(logger, "Bind") + // TODO(knelasevero): Remove duplicated keys from log entry calls + // When contextualized logging hits GA + // https://github.com/kubernetes/kubernetes/issues/111672 + logger = klog.LoggerWithValues(logger, "pod", klog.KObj(pod), "node", klog.ObjectRef{Name: nodeName}) for _, pl := range f.bindPlugins { + logger := klog.LoggerWithName(logger, pl.Name()) + ctx := klog.NewContext(ctx, logger) status = f.runBindPlugin(ctx, pl, state, pod, nodeName) if status.IsSkip() { continue } if !status.IsSuccess() { if status.IsUnschedulable() { - klog.V(4).InfoS("Pod rejected by Bind plugin", "pod", klog.KObj(pod), "node", nodeName, "plugin", pl.Name(), "status", status.Message()) + logger.V(4).Info("Pod rejected by Bind plugin", "pod", klog.KObj(pod), "node", nodeName, "plugin", pl.Name(), "status", status.Message()) status.SetFailedPlugin(pl.Name()) return status } err := status.AsError() - klog.ErrorS(err, "Failed running Bind plugin", "plugin", pl.Name(), "pod", klog.KObj(pod), "node", nodeName) + logger.Error(err, "Plugin Failed", "plugin", pl.Name(), "pod", klog.KObj(pod), "node", nodeName) return framework.AsStatus(fmt.Errorf("running Bind plugin %q: %w", pl.Name(), err)) } return status @@ -1121,7 +1227,15 @@ func (f *frameworkImpl) RunPostBindPlugins(ctx context.Context, state *framework defer func() { metrics.FrameworkExtensionPointDuration.WithLabelValues(metrics.PostBind, framework.Success.String(), f.profileName).Observe(metrics.SinceInSeconds(startTime))
}() + logger := klog.FromContext(ctx) + logger = klog.LoggerWithName(logger, "PostBind") + // TODO(knelasevero): Remove duplicated keys from log entry calls + // When contextualized logging hits GA + // https://github.com/kubernetes/kubernetes/issues/111672 + logger = klog.LoggerWithValues(logger, "pod", klog.KObj(pod), "node", klog.ObjectRef{Name: nodeName}) for _, pl := range f.postBindPlugins { + logger := klog.LoggerWithName(logger, pl.Name()) + ctx := klog.NewContext(ctx, logger) f.runPostBindPlugin(ctx, pl, state, pod, nodeName) } } @@ -1146,11 +1260,24 @@ func (f *frameworkImpl) RunReservePluginsReserve(ctx context.Context, state *fra defer func() { metrics.FrameworkExtensionPointDuration.WithLabelValues(metrics.Reserve, status.Code().String(), f.profileName).Observe(metrics.SinceInSeconds(startTime)) }() + logger := klog.FromContext(ctx) + logger = klog.LoggerWithName(logger, "Reserve") + // TODO(knelasevero): Remove duplicated keys from log entry calls + // When contextualized logging hits GA + // https://github.com/kubernetes/kubernetes/issues/111672 + logger = klog.LoggerWithValues(logger, "pod", klog.KObj(pod), "node", klog.ObjectRef{Name: nodeName}) for _, pl := range f.reservePlugins { + logger := klog.LoggerWithName(logger, pl.Name()) + ctx := klog.NewContext(ctx, logger) status = f.runReservePluginReserve(ctx, pl, state, pod, nodeName) if !status.IsSuccess() { + if status.IsUnschedulable() { + logger.V(4).Info("Pod rejected by plugin", "pod", klog.KObj(pod), "plugin", pl.Name(), "status", status.Message()) + status.SetFailedPlugin(pl.Name()) + return status + } err := status.AsError() - klog.ErrorS(err, "Failed running Reserve plugin", "plugin", pl.Name(), "pod", klog.KObj(pod)) + logger.Error(err, "Plugin failed", "plugin", pl.Name(), "pod", klog.KObj(pod)) return framework.AsStatus(fmt.Errorf("running Reserve plugin %q: %w", pl.Name(), err)) } } @@ -1176,7 +1303,15 @@ func (f *frameworkImpl) RunReservePluginsUnreserve(ctx context.Context, state *f 
}() // Execute the Unreserve operation of each reserve plugin in the // *reverse* order in which the Reserve operation was executed. + logger := klog.FromContext(ctx) + logger = klog.LoggerWithName(logger, "Unreserve") + // TODO(knelasevero): Remove duplicated keys from log entry calls + // When contextualized logging hits GA + // https://github.com/kubernetes/kubernetes/issues/111672 + logger = klog.LoggerWithValues(logger, "pod", klog.KObj(pod), "node", klog.ObjectRef{Name: nodeName}) for i := len(f.reservePlugins) - 1; i >= 0; i-- { + logger := klog.LoggerWithName(logger, f.reservePlugins[i].Name()) + ctx := klog.NewContext(ctx, logger) f.runReservePluginUnreserve(ctx, f.reservePlugins[i], state, pod, nodeName) } } @@ -1204,13 +1339,20 @@ func (f *frameworkImpl) RunPermitPlugins(ctx context.Context, state *framework.C }() pluginsWaitTime := make(map[string]time.Duration) statusCode := framework.Success + logger := klog.FromContext(ctx) + logger = klog.LoggerWithName(logger, "Permit") + // TODO(knelasevero): Remove duplicated keys from log entry calls + // When contextualized logging hits GA + // https://github.com/kubernetes/kubernetes/issues/111672 + logger = klog.LoggerWithValues(logger, "pod", klog.KObj(pod), "node", klog.ObjectRef{Name: nodeName}) for _, pl := range f.permitPlugins { + logger := klog.LoggerWithName(logger, pl.Name()) + ctx := klog.NewContext(ctx, logger) status, timeout := f.runPermitPlugin(ctx, pl, state, pod, nodeName) if !status.IsSuccess() { if status.IsUnschedulable() { - klog.V(4).InfoS("Pod rejected by permit plugin", "pod", klog.KObj(pod), "plugin", pl.Name(), "status", status.Message()) - status.SetFailedPlugin(pl.Name()) - return status + logger.V(4).Info("Pod rejected by plugin", "pod", klog.KObj(pod), "plugin", pl.Name(), "status", status.Message()) + return status.WithFailedPlugin(pl.Name()) } if status.IsWait() { // Not allowed to be greater than maxTimeout. 
@@ -1221,7 +1363,7 @@ func (f *frameworkImpl) RunPermitPlugins(ctx context.Context, state *framework.C statusCode = framework.Wait } else { err := status.AsError() - klog.ErrorS(err, "Failed running Permit plugin", "plugin", pl.Name(), "pod", klog.KObj(pod)) + logger.Error(err, "Plugin failed", "plugin", pl.Name(), "pod", klog.KObj(pod)) return framework.AsStatus(fmt.Errorf("running Permit plugin %q: %w", pl.Name(), err)).WithFailedPlugin(pl.Name()) } } @@ -1230,7 +1372,7 @@ func (f *frameworkImpl) RunPermitPlugins(ctx context.Context, state *framework.C waitingPod := newWaitingPod(pod, pluginsWaitTime) f.waitingPods.add(waitingPod) msg := fmt.Sprintf("one or more plugins asked to wait and no plugin rejected pod %q", pod.Name) - klog.V(4).InfoS("One or more plugins asked to wait and no plugin rejected pod", "pod", klog.KObj(pod)) + logger.V(4).Info("One or more plugins asked to wait and no plugin rejected pod", "pod", klog.KObj(pod)) return framework.NewStatus(framework.Wait, msg) } return nil @@ -1253,7 +1395,9 @@ func (f *frameworkImpl) WaitOnPermit(ctx context.Context, pod *v1.Pod) *framewor return nil } defer f.waitingPods.remove(pod.UID) - klog.V(4).InfoS("Pod waiting on permit", "pod", klog.KObj(pod)) + + logger := klog.FromContext(ctx) + logger.V(4).Info("Pod waiting on permit", "pod", klog.KObj(pod)) startTime := time.Now() s := <-waitingPod.s @@ -1261,11 +1405,11 @@ func (f *frameworkImpl) WaitOnPermit(ctx context.Context, pod *v1.Pod) *framewor if !s.IsSuccess() { if s.IsUnschedulable() { - klog.V(4).InfoS("Pod rejected while waiting on permit", "pod", klog.KObj(pod), "status", s.Message()) + logger.V(4).Info("Pod rejected while waiting on permit", "pod", klog.KObj(pod), "status", s.Message()) return s } err := s.AsError() - klog.ErrorS(err, "Failed waiting on permit for pod", "pod", klog.KObj(pod)) + logger.Error(err, "Failed waiting on permit for pod", "pod", klog.KObj(pod)) return framework.AsStatus(fmt.Errorf("waiting on permit for pod: %w", err)).WithFailedPlugin(s.FailedPlugin()) } return nil @@ -1362,8 +1506,8 @@ func (f *frameworkImpl) SharedInformerFactory() informers.SharedInformerFactory return f.informerFactory } -func (f *frameworkImpl) pluginsNeeded(plugins *config.Plugins) sets.String { - pgSet := sets.String{} +func (f *frameworkImpl) pluginsNeeded(plugins *config.Plugins) sets.Set[string] { + pgSet := sets.Set[string]{} if plugins == nil { return pgSet diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/runtime/instrumented_plugins.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/runtime/instrumented_plugins.go new file mode 100644 index 000000000000..ee117c5e625f --- /dev/null +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/runtime/instrumented_plugins.go @@ -0,0 +1,83 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License.
+*/ + +package runtime + +import ( + "context" + + v1 "k8s.io/api/core/v1" + compbasemetrics "k8s.io/component-base/metrics" + "k8s.io/kubernetes/pkg/scheduler/framework" +) + +type instrumentedFilterPlugin struct { + framework.FilterPlugin + + metric compbasemetrics.CounterMetric +} + +var _ framework.FilterPlugin = &instrumentedFilterPlugin{} + +func (p *instrumentedFilterPlugin) Filter(ctx context.Context, state *framework.CycleState, pod *v1.Pod, nodeInfo *framework.NodeInfo) *framework.Status { + p.metric.Inc() + return p.FilterPlugin.Filter(ctx, state, pod, nodeInfo) +} + +type instrumentedPreFilterPlugin struct { + framework.PreFilterPlugin + + metric compbasemetrics.CounterMetric +} + +var _ framework.PreFilterPlugin = &instrumentedPreFilterPlugin{} + +func (p *instrumentedPreFilterPlugin) PreFilter(ctx context.Context, state *framework.CycleState, pod *v1.Pod) (*framework.PreFilterResult, *framework.Status) { + result, status := p.PreFilterPlugin.PreFilter(ctx, state, pod) + if !status.IsSkip() { + p.metric.Inc() + } + return result, status +} + +type instrumentedPreScorePlugin struct { + framework.PreScorePlugin + + metric compbasemetrics.CounterMetric +} + +var _ framework.PreScorePlugin = &instrumentedPreScorePlugin{} + +func (p *instrumentedPreScorePlugin) PreScore(ctx context.Context, state *framework.CycleState, pod *v1.Pod, nodes []*v1.Node) *framework.Status { + status := p.PreScorePlugin.PreScore(ctx, state, pod, nodes) + if !status.IsSkip() { + p.metric.Inc() + } + return status +} + +type instrumentedScorePlugin struct { + framework.ScorePlugin + + metric compbasemetrics.CounterMetric +} + +var _ framework.ScorePlugin = &instrumentedScorePlugin{} + +func (p *instrumentedScorePlugin) Score(ctx context.Context, state *framework.CycleState, pod *v1.Pod, nodeName string) (int64, *framework.Status) { + p.metric.Inc() + return p.ScorePlugin.Score(ctx, state, pod, nodeName) +} diff --git 
a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/types.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/types.go index fda3ced88af3..64b83eb33a32 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/types.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/types.go @@ -78,6 +78,61 @@ const ( WildCard GVK = "*" ) +type ClusterEventWithHint struct { + Event ClusterEvent + // QueueingHintFn is executed for a Pod rejected by this plugin when the above Event happens, + // and filters out events to reduce useless retries of the Pod's scheduling. + // It's an optional field. If not set, + // the scheduling of Pods will always be retried with backoff when this Event happens + // (the same as QueueAfterBackoff). + QueueingHintFn QueueingHintFn +} + +// QueueingHintFn returns a hint that signals whether the event can make a Pod, +// which was rejected by this plugin in the past scheduling cycle, schedulable or not. +// It's called before a Pod gets moved from unschedulableQ to backoffQ or activeQ. +// +// - `pod`: the Pod to be enqueued, which was rejected by this plugin in the past. +// - `oldObj`, `newObj`: the objects involved in that event. +// - For example, if the given event is "Node deleted", `oldObj` will be that deleted Node. +// - `oldObj` is nil if the event is an add event. +// - `newObj` is nil if the event is a delete event. +type QueueingHintFn func(logger klog.Logger, pod *v1.Pod, oldObj, newObj interface{}) QueueingHint + +type QueueingHint int + +const ( + // QueueSkip implies that the cluster event has no impact on + // scheduling of the pod. + QueueSkip QueueingHint = iota + + // QueueAfterBackoff implies that the Pod may be schedulable by the event, + // and it is worth retrying the scheduling after backoff. + QueueAfterBackoff + + // QueueImmediately is returned only when there is a high chance that the Pod gets scheduled in the next scheduling cycle. + // Otherwise, it's detrimental to scheduling throughput. + // For example, when the Pod was rejected while waiting for an external resource, directly tied to the Pod, to be provisioned, + // and the event is that the resource has been provisioned, you can return QueueImmediately. + // As a counterexample, when the Pod was rejected due to insufficient memory + // and the event is that more memory on a Node becomes available, you should return QueueAfterBackoff instead of QueueImmediately, + // because other Pods may be waiting for the same resources and only a few of them would schedule in the next scheduling cycle. + QueueImmediately +) + +func (s QueueingHint) String() string { + switch s { + case QueueSkip: + return "QueueSkip" + case QueueAfterBackoff: + return "QueueAfterBackoff" + case QueueImmediately: + return "QueueImmediately" + } + return "" +} + // ClusterEvent abstracts how a system resource's state gets changed. // Resource represents the standard API resources such as Pod, Node, etc. // ActionType denotes the specific change such as Add, Update or Delete.
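The QueueingHint contract in the hunk above can be illustrated with a self-contained sketch. The types here (`QueueingHint`, `pod`, `node`) are simplified stand-ins for the framework types — the real `QueueingHintFn` receives a `klog.Logger`, a `*v1.Pod`, and the old/new event objects — and `nodeCreatedHint` is a hypothetical hint for a "Node added" event, not an actual plugin in this patch:

```go
package main

import "fmt"

// Simplified stand-in for framework.QueueingHint.
type QueueingHint int

const (
	QueueSkip QueueingHint = iota
	QueueAfterBackoff
	QueueImmediately
)

// Hypothetical minimal Pod/Node shapes for the sketch.
type pod struct{ needsGPU bool }
type node struct{ hasGPU bool }

// nodeCreatedHint decides whether a "Node added" event can make a
// previously rejected Pod schedulable: skip the retry entirely when the
// new Node cannot help, and retry after backoff when it might (other
// Pods may compete for the same capacity, so QueueImmediately would be
// too aggressive here).
func nodeCreatedHint(p *pod, newObj interface{}) QueueingHint {
	n, ok := newObj.(*node)
	if !ok {
		// Unexpected object type: fall back to the default behavior.
		return QueueAfterBackoff
	}
	if p.needsGPU && !n.hasGPU {
		return QueueSkip // this event cannot make the Pod schedulable
	}
	return QueueAfterBackoff
}

func main() {
	p := &pod{needsGPU: true}
	fmt.Println(nodeCreatedHint(p, &node{hasGPU: false})) // 0 (QueueSkip)
	fmt.Println(nodeCreatedHint(p, &node{hasGPU: true}))  // 1 (QueueAfterBackoff)
}
```

The design point is the one the doc comment makes: a hint only filters retries of a Pod this plugin already rejected, so returning `QueueSkip` for irrelevant events avoids useless scheduling attempts without risking correctness.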
@@ -92,6 +147,20 @@ func (ce ClusterEvent) IsWildCard() bool { return ce.Resource == WildCard && ce.ActionType == All } +func UnrollWildCardResource() []ClusterEventWithHint { + return []ClusterEventWithHint{ + {Event: ClusterEvent{Resource: Pod, ActionType: All}}, + {Event: ClusterEvent{Resource: Node, ActionType: All}}, + {Event: ClusterEvent{Resource: CSINode, ActionType: All}}, + {Event: ClusterEvent{Resource: CSIDriver, ActionType: All}}, + {Event: ClusterEvent{Resource: CSIStorageCapacity, ActionType: All}}, + {Event: ClusterEvent{Resource: PersistentVolume, ActionType: All}}, + {Event: ClusterEvent{Resource: PersistentVolumeClaim, ActionType: All}}, + {Event: ClusterEvent{Resource: StorageClass, ActionType: All}}, + {Event: ClusterEvent{Resource: PodSchedulingContext, ActionType: All}}, + } +} + // QueuedPodInfo is a Pod wrapper with additional information related to // the pod's status in the scheduling queue, such as the timestamp when // it's added to the queue. @@ -106,9 +175,9 @@ type QueuedPodInfo struct { // back to the queue multiple times before it's successfully scheduled. // It shouldn't be updated once initialized. It's used to record the e2e scheduling // latency for a pod. - InitialAttemptTimestamp time.Time + InitialAttemptTimestamp *time.Time // If a Pod failed in a scheduling cycle, record the plugin names it failed by. - UnschedulablePlugins sets.String + UnschedulablePlugins sets.Set[string] // Whether the Pod is scheduling gated (by PreEnqueuePlugins) or not. Gated bool } @@ -197,7 +266,7 @@ func (pi *PodInfo) Update(pod *v1.Pod) error { // AffinityTerm is a processed version of v1.PodAffinityTerm. type AffinityTerm struct { - Namespaces sets.String + Namespaces sets.Set[string] Selector labels.Selector TopologyKey string NamespaceSelector labels.Selector @@ -220,7 +289,7 @@ type WeightedAffinityTerm struct { // Diagnosis records the details to diagnose a scheduling failure. 
type Diagnosis struct { NodeToStatusMap NodeToStatusMap - UnschedulablePlugins sets.String + UnschedulablePlugins sets.Set[string] // PreFilterMsg records the messages returned from PreFilter plugins. PreFilterMsg string // PostFilterMsg records the messages returned from PostFilter plugins. @@ -274,6 +343,7 @@ func (f *FitError) Error() string { if postFilterMsg != "" { reasonMsg += fmt.Sprintf(SeparatorFormat, postFilterMsg) } + return reasonMsg } @@ -364,8 +434,8 @@ func getPodAntiAffinityTerms(affinity *v1.Affinity) (terms []v1.PodAffinityTerm) // returns a set of names according to the namespaces indicated in podAffinityTerm. // If namespaces is empty it considers the given pod's namespace. -func getNamespacesFromPodAffinityTerm(pod *v1.Pod, podAffinityTerm *v1.PodAffinityTerm) sets.String { - names := sets.String{} +func getNamespacesFromPodAffinityTerm(pod *v1.Pod, podAffinityTerm *v1.PodAffinityTerm) sets.Set[string] { + names := sets.Set[string]{} if len(podAffinityTerm.Namespaces) == 0 && podAffinityTerm.NamespaceSelector == nil { names.Insert(pod.Namespace) } else { diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/internal/cache/cache.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/internal/cache/cache.go index 0b60a82f9c2a..4e94b4b3baaf 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/internal/cache/cache.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/internal/cache/cache.go @@ -17,6 +17,7 @@ limitations under the License. package cache import ( + "context" "fmt" "sync" "time" @@ -35,11 +36,12 @@ var ( // New returns a Cache implementation. // It automatically starts a go routine that manages expiration of assumed pods. -// "ttl" is how long the assumed pod will get expired, "0" means pod will never expire. -// "stop" is the channel that would close the background goroutine. 
-func New(ttl time.Duration, stop <-chan struct{}) Cache { - cache := newCache(ttl, cleanAssumedPeriod, stop) - cache.run() +// "ttl" is how long the assumed pod will get expired. +// "ctx" is the context that would close the background goroutine. +func New(ctx context.Context, ttl time.Duration) Cache { + logger := klog.FromContext(ctx) + cache := newCache(ctx, ttl, cleanAssumedPeriod) + cache.run(logger) return cache } @@ -61,7 +63,7 @@ type cacheImpl struct { mu sync.RWMutex // a set of assumed pod keys. // The key could further be used to get an entry in podStates. - assumedPods sets.String + assumedPods sets.Set[string] // a map from pod key to podState. podStates map[string]*podState nodes map[string]*nodeInfoListItem @@ -86,7 +88,7 @@ type imageState struct { // Size of the image size int64 // A set of node names for nodes having this image present - nodes sets.String + nodes sets.Set[string] } // createImageStateSummary returns a summarizing snapshot of the given image's state. @@ -97,15 +99,16 @@ func (cache *cacheImpl) createImageStateSummary(state *imageState) *framework.Im } } -func newCache(ttl, period time.Duration, stop <-chan struct{}) *cacheImpl { +func newCache(ctx context.Context, ttl, period time.Duration) *cacheImpl { + logger := klog.FromContext(ctx) return &cacheImpl{ ttl: ttl, period: period, - stop: stop, + stop: ctx.Done(), nodes: make(map[string]*nodeInfoListItem), - nodeTree: newNodeTree(nil), - assumedPods: make(sets.String), + nodeTree: newNodeTree(logger, nil), + assumedPods: sets.New[string](), podStates: make(map[string]*podState), imageStates: make(map[string]*imageState), } @@ -121,10 +124,10 @@ func newNodeInfoListItem(ni *framework.NodeInfo) *nodeInfoListItem { // moveNodeInfoToHead moves a NodeInfo to the head of "cache.nodes" doubly // linked list. The head is the most recently updated NodeInfo. // We assume cache lock is already acquired. 
-func (cache *cacheImpl) moveNodeInfoToHead(name string) { +func (cache *cacheImpl) moveNodeInfoToHead(logger klog.Logger, name string) { ni, ok := cache.nodes[name] if !ok { - klog.ErrorS(nil, "No node info with given name found in the cache", "node", klog.KRef("", name)) + logger.Error(nil, "No node info with given name found in the cache", "node", klog.KRef("", name)) return } // if the node info list item is already at the head, we are done. @@ -149,10 +152,10 @@ func (cache *cacheImpl) moveNodeInfoToHead(name string) { // removeNodeInfoFromList removes a NodeInfo from the "cache.nodes" doubly // linked list. // We assume cache lock is already acquired. -func (cache *cacheImpl) removeNodeInfoFromList(name string) { +func (cache *cacheImpl) removeNodeInfoFromList(logger klog.Logger, name string) { ni, ok := cache.nodes[name] if !ok { - klog.ErrorS(nil, "No node info with given name found in the cache", "node", klog.KRef("", name)) + logger.Error(nil, "No node info with given name found in the cache", "node", klog.KRef("", name)) return } @@ -194,7 +197,7 @@ func (cache *cacheImpl) Dump() *Dump { // nodeInfo.Node() is guaranteed to be not nil for all the nodes in the snapshot. // This function tracks generation number of NodeInfo and updates only the // entries of an existing snapshot that have changed after the snapshot was taken. 
-func (cache *cacheImpl) UpdateSnapshot(nodeSnapshot *Snapshot) error { +func (cache *cacheImpl) UpdateSnapshot(logger klog.Logger, nodeSnapshot *Snapshot) error { cache.mu.Lock() defer cache.mu.Unlock() @@ -271,7 +274,7 @@ func (cache *cacheImpl) UpdateSnapshot(nodeSnapshot *Snapshot) error { } if updateAllLists || updateNodesHavePodsWithAffinity || updateNodesHavePodsWithRequiredAntiAffinity || updateUsedPVCSet { - cache.updateNodeInfoSnapshotList(nodeSnapshot, updateAllLists) + cache.updateNodeInfoSnapshotList(logger, nodeSnapshot, updateAllLists) } if len(nodeSnapshot.nodeInfoList) != cache.nodeTree.numNodes { @@ -280,26 +283,26 @@ func (cache *cacheImpl) UpdateSnapshot(nodeSnapshot *Snapshot) error { ", trying to recover", len(nodeSnapshot.nodeInfoList), cache.nodeTree.numNodes, len(nodeSnapshot.nodeInfoMap), len(cache.nodes)) - klog.ErrorS(nil, errMsg) + logger.Error(nil, errMsg) // We will try to recover by re-creating the lists for the next scheduling cycle, but still return an // error to surface the problem, the error will likely cause a failure to the current scheduling cycle. 
- cache.updateNodeInfoSnapshotList(nodeSnapshot, true) + cache.updateNodeInfoSnapshotList(logger, nodeSnapshot, true) return fmt.Errorf(errMsg) } return nil } -func (cache *cacheImpl) updateNodeInfoSnapshotList(snapshot *Snapshot, updateAll bool) { +func (cache *cacheImpl) updateNodeInfoSnapshotList(logger klog.Logger, snapshot *Snapshot, updateAll bool) { snapshot.havePodsWithAffinityNodeInfoList = make([]*framework.NodeInfo, 0, cache.nodeTree.numNodes) snapshot.havePodsWithRequiredAntiAffinityNodeInfoList = make([]*framework.NodeInfo, 0, cache.nodeTree.numNodes) - snapshot.usedPVCSet = sets.NewString() + snapshot.usedPVCSet = sets.New[string]() if updateAll { // Take a snapshot of the nodes order in the tree snapshot.nodeInfoList = make([]*framework.NodeInfo, 0, cache.nodeTree.numNodes) nodesList, err := cache.nodeTree.list() if err != nil { - klog.ErrorS(err, "Error occurred while retrieving the list of names of the nodes from node tree") + logger.Error(err, "Error occurred while retrieving the list of names of the nodes from node tree") } for _, nodeName := range nodesList { if nodeInfo := snapshot.nodeInfoMap[nodeName]; nodeInfo != nil { @@ -314,7 +317,7 @@ func (cache *cacheImpl) updateNodeInfoSnapshotList(snapshot *Snapshot, updateAll snapshot.usedPVCSet.Insert(key) } } else { - klog.ErrorS(nil, "Node exists in nodeTree but not in NodeInfoMap, this should not happen", "node", klog.KRef("", nodeName)) + logger.Error(nil, "Node exists in nodeTree but not in NodeInfoMap, this should not happen", "node", klog.KRef("", nodeName)) } } } else { @@ -369,7 +372,7 @@ func (cache *cacheImpl) PodCount() (int, error) { return count, nil } -func (cache *cacheImpl) AssumePod(pod *v1.Pod) error { +func (cache *cacheImpl) AssumePod(logger klog.Logger, pod *v1.Pod) error { key, err := framework.GetPodKey(pod) if err != nil { return err @@ -381,15 +384,15 @@ func (cache *cacheImpl) AssumePod(pod *v1.Pod) error { return fmt.Errorf("pod %v(%v) is in the cache, so can't be 
assumed", key, klog.KObj(pod)) } - return cache.addPod(pod, true) + return cache.addPod(logger, pod, true) } -func (cache *cacheImpl) FinishBinding(pod *v1.Pod) error { - return cache.finishBinding(pod, time.Now()) +func (cache *cacheImpl) FinishBinding(logger klog.Logger, pod *v1.Pod) error { + return cache.finishBinding(logger, pod, time.Now()) } // finishBinding exists to make tests deterministic by injecting now as an argument -func (cache *cacheImpl) finishBinding(pod *v1.Pod, now time.Time) error { +func (cache *cacheImpl) finishBinding(logger klog.Logger, pod *v1.Pod, now time.Time) error { key, err := framework.GetPodKey(pod) if err != nil { return err @@ -398,7 +401,7 @@ func (cache *cacheImpl) finishBinding(pod *v1.Pod, now time.Time) error { cache.mu.RLock() defer cache.mu.RUnlock() - klog.V(5).InfoS("Finished binding for pod, can be expired", "podKey", key, "pod", klog.KObj(pod)) + logger.V(5).Info("Finished binding for pod, can be expired", "podKey", key, "pod", klog.KObj(pod)) currState, ok := cache.podStates[key] if ok && cache.assumedPods.Has(key) { if cache.ttl == time.Duration(0) { @@ -412,7 +415,7 @@ func (cache *cacheImpl) finishBinding(pod *v1.Pod, now time.Time) error { return nil } -func (cache *cacheImpl) ForgetPod(pod *v1.Pod) error { +func (cache *cacheImpl) ForgetPod(logger klog.Logger, pod *v1.Pod) error { key, err := framework.GetPodKey(pod) if err != nil { return err @@ -428,13 +431,13 @@ func (cache *cacheImpl) ForgetPod(pod *v1.Pod) error { // Only assumed pod can be forgotten. if ok && cache.assumedPods.Has(key) { - return cache.removePod(pod) + return cache.removePod(logger, pod) } return fmt.Errorf("pod %v(%v) wasn't assumed so cannot be forgotten", key, klog.KObj(pod)) } // Assumes that lock is already acquired. 
-func (cache *cacheImpl) addPod(pod *v1.Pod, assumePod bool) error { +func (cache *cacheImpl) addPod(logger klog.Logger, pod *v1.Pod, assumePod bool) error { key, err := framework.GetPodKey(pod) if err != nil { return err @@ -445,7 +448,7 @@ func (cache *cacheImpl) addPod(pod *v1.Pod, assumePod bool) error { cache.nodes[pod.Spec.NodeName] = n } n.info.AddPod(pod) - cache.moveNodeInfoToHead(pod.Spec.NodeName) + cache.moveNodeInfoToHead(logger, pod.Spec.NodeName) ps := &podState{ pod: pod, } @@ -457,18 +460,18 @@ func (cache *cacheImpl) addPod(pod *v1.Pod, assumePod bool) error { } // Assumes that lock is already acquired. -func (cache *cacheImpl) updatePod(oldPod, newPod *v1.Pod) error { - if err := cache.removePod(oldPod); err != nil { +func (cache *cacheImpl) updatePod(logger klog.Logger, oldPod, newPod *v1.Pod) error { + if err := cache.removePod(logger, oldPod); err != nil { return err } - return cache.addPod(newPod, false) + return cache.addPod(logger, newPod, false) } // Assumes that lock is already acquired. // Removes a pod from the cached node info. If the node information was already // removed and there are no more pods left in the node, cleans up the node from // the cache. 
-func (cache *cacheImpl) removePod(pod *v1.Pod) error { +func (cache *cacheImpl) removePod(logger klog.Logger, pod *v1.Pod) error { key, err := framework.GetPodKey(pod) if err != nil { return err @@ -476,16 +479,15 @@ func (cache *cacheImpl) removePod(pod *v1.Pod) error { n, ok := cache.nodes[pod.Spec.NodeName] if !ok { - klog.ErrorS(nil, "Node not found when trying to remove pod", "node", klog.KRef("", pod.Spec.NodeName), "podKey", key, "pod", klog.KObj(pod)) - + logger.Error(nil, "Node not found when trying to remove pod", "node", klog.KRef("", pod.Spec.NodeName), "podKey", key, "pod", klog.KObj(pod)) } else { if err := n.info.RemovePod(pod); err != nil { return err } if len(n.info.Pods) == 0 && n.info.Node() == nil { - cache.removeNodeInfoFromList(pod.Spec.NodeName) + cache.removeNodeInfoFromList(logger, pod.Spec.NodeName) } else { - cache.moveNodeInfoToHead(pod.Spec.NodeName) + cache.moveNodeInfoToHead(logger, pod.Spec.NodeName) } } @@ -494,7 +496,7 @@ func (cache *cacheImpl) removePod(pod *v1.Pod) error { return nil } -func (cache *cacheImpl) AddPod(pod *v1.Pod) error { +func (cache *cacheImpl) AddPod(logger klog.Logger, pod *v1.Pod) error { key, err := framework.GetPodKey(pod) if err != nil { return err @@ -508,18 +510,18 @@ func (cache *cacheImpl) AddPod(pod *v1.Pod) error { case ok && cache.assumedPods.Has(key): // When assuming, we've already added the Pod to cache, // Just update here to make sure the Pod's status is up-to-date. - if err = cache.updatePod(currState.pod, pod); err != nil { - klog.ErrorS(err, "Error occurred while updating pod") + if err = cache.updatePod(logger, currState.pod, pod); err != nil { + logger.Error(err, "Error occurred while updating pod") } if currState.pod.Spec.NodeName != pod.Spec.NodeName { // The pod was added to a different node than it was assumed to. 
- klog.InfoS("Pod was added to a different node than it was assumed", "podKey", key, "pod", klog.KObj(pod), "assumedNode", klog.KRef("", pod.Spec.NodeName), "currentNode", klog.KRef("", currState.pod.Spec.NodeName)) + logger.Info("Pod was added to a different node than it was assumed", "podKey", key, "pod", klog.KObj(pod), "assumedNode", klog.KRef("", pod.Spec.NodeName), "currentNode", klog.KRef("", currState.pod.Spec.NodeName)) return nil } case !ok: // Pod was expired. We should add it back. - if err = cache.addPod(pod, false); err != nil { - klog.ErrorS(err, "Error occurred while adding pod") + if err = cache.addPod(logger, pod, false); err != nil { + logger.Error(err, "Error occurred while adding pod") } default: return fmt.Errorf("pod %v(%v) was already in added state", key, klog.KObj(pod)) @@ -527,7 +529,7 @@ func (cache *cacheImpl) AddPod(pod *v1.Pod) error { return nil } -func (cache *cacheImpl) UpdatePod(oldPod, newPod *v1.Pod) error { +func (cache *cacheImpl) UpdatePod(logger klog.Logger, oldPod, newPod *v1.Pod) error { key, err := framework.GetPodKey(oldPod) if err != nil { return err @@ -548,14 +550,14 @@ func (cache *cacheImpl) UpdatePod(oldPod, newPod *v1.Pod) error { } if currState.pod.Spec.NodeName != newPod.Spec.NodeName { - klog.ErrorS(nil, "Pod updated on a different node than previously added to", "podKey", key, "pod", klog.KObj(oldPod)) - klog.ErrorS(nil, "scheduler cache is corrupted and can badly affect scheduling decisions") + logger.Error(nil, "Pod updated on a different node than previously added to", "podKey", key, "pod", klog.KObj(oldPod)) + logger.Error(nil, "scheduler cache is corrupted and can badly affect scheduling decisions") klog.FlushAndExit(klog.ExitFlushTimeout, 1) } - return cache.updatePod(oldPod, newPod) + return cache.updatePod(logger, oldPod, newPod) } -func (cache *cacheImpl) RemovePod(pod *v1.Pod) error { +func (cache *cacheImpl) RemovePod(logger klog.Logger, pod *v1.Pod) error { key, err := framework.GetPodKey(pod) if 
err != nil { return err @@ -569,15 +571,15 @@ func (cache *cacheImpl) RemovePod(pod *v1.Pod) error { return fmt.Errorf("pod %v(%v) is not found in scheduler cache, so cannot be removed from it", key, klog.KObj(pod)) } if currState.pod.Spec.NodeName != pod.Spec.NodeName { - klog.ErrorS(nil, "Pod was added to a different node than it was assumed", "podKey", key, "pod", klog.KObj(pod), "assumedNode", klog.KRef("", pod.Spec.NodeName), "currentNode", klog.KRef("", currState.pod.Spec.NodeName)) + logger.Error(nil, "Pod was added to a different node than it was assumed", "podKey", key, "pod", klog.KObj(pod), "assumedNode", klog.KRef("", pod.Spec.NodeName), "currentNode", klog.KRef("", currState.pod.Spec.NodeName)) if pod.Spec.NodeName != "" { // An empty NodeName is possible when the scheduler misses a Delete // event and it gets the last known state from the informer cache. - klog.ErrorS(nil, "scheduler cache is corrupted and can badly affect scheduling decisions") + logger.Error(nil, "scheduler cache is corrupted and can badly affect scheduling decisions") klog.FlushAndExit(klog.ExitFlushTimeout, 1) } } - return cache.removePod(currState.pod) + return cache.removePod(logger, currState.pod) } func (cache *cacheImpl) IsAssumedPod(pod *v1.Pod) (bool, error) { @@ -611,7 +613,7 @@ func (cache *cacheImpl) GetPod(pod *v1.Pod) (*v1.Pod, error) { return podState.pod, nil } -func (cache *cacheImpl) AddNode(node *v1.Node) *framework.NodeInfo { +func (cache *cacheImpl) AddNode(logger klog.Logger, node *v1.Node) *framework.NodeInfo { cache.mu.Lock() defer cache.mu.Unlock() @@ -622,15 +624,15 @@ func (cache *cacheImpl) AddNode(node *v1.Node) *framework.NodeInfo { } else { cache.removeNodeImageStates(n.info.Node()) } - cache.moveNodeInfoToHead(node.Name) + cache.moveNodeInfoToHead(logger, node.Name) - cache.nodeTree.addNode(node) + cache.nodeTree.addNode(logger, node) cache.addNodeImageStates(node, n.info) n.info.SetNode(node) return n.info.Clone() } -func (cache *cacheImpl) 
UpdateNode(oldNode, newNode *v1.Node) *framework.NodeInfo { +func (cache *cacheImpl) UpdateNode(logger klog.Logger, oldNode, newNode *v1.Node) *framework.NodeInfo { cache.mu.Lock() defer cache.mu.Unlock() @@ -638,13 +640,13 @@ func (cache *cacheImpl) UpdateNode(oldNode, newNode *v1.Node) *framework.NodeInf if !ok { n = newNodeInfoListItem(framework.NewNodeInfo()) cache.nodes[newNode.Name] = n - cache.nodeTree.addNode(newNode) + cache.nodeTree.addNode(logger, newNode) } else { cache.removeNodeImageStates(n.info.Node()) } - cache.moveNodeInfoToHead(newNode.Name) + cache.moveNodeInfoToHead(logger, newNode.Name) - cache.nodeTree.updateNode(oldNode, newNode) + cache.nodeTree.updateNode(logger, oldNode, newNode) cache.addNodeImageStates(newNode, n.info) n.info.SetNode(newNode) return n.info.Clone() @@ -656,7 +658,7 @@ func (cache *cacheImpl) UpdateNode(oldNode, newNode *v1.Node) *framework.NodeInf // the source of truth. // However, we keep a ghost node with the list of pods until all pod deletion // events have arrived. A ghost node is skipped from snapshots. -func (cache *cacheImpl) RemoveNode(node *v1.Node) error { +func (cache *cacheImpl) RemoveNode(logger klog.Logger, node *v1.Node) error { cache.mu.Lock() defer cache.mu.Unlock() @@ -670,11 +672,11 @@ func (cache *cacheImpl) RemoveNode(node *v1.Node) error { // in a different watch, and thus can potentially be observed later, even though // they happened before node removal. 
if len(n.info.Pods) == 0 { - cache.removeNodeInfoFromList(node.Name) + cache.removeNodeInfoFromList(logger, node.Name) } else { - cache.moveNodeInfoToHead(node.Name) + cache.moveNodeInfoToHead(logger, node.Name) } - if err := cache.nodeTree.removeNode(node); err != nil { + if err := cache.nodeTree.removeNode(logger, node); err != nil { return err } cache.removeNodeImageStates(node) @@ -693,7 +695,7 @@ func (cache *cacheImpl) addNodeImageStates(node *v1.Node, nodeInfo *framework.No if !ok { state = &imageState{ size: image.SizeBytes, - nodes: sets.NewString(node.Name), + nodes: sets.New(node.Name), } cache.imageStates[name] = state } else { @@ -732,17 +734,15 @@ func (cache *cacheImpl) removeNodeImageStates(node *v1.Node) { } } -func (cache *cacheImpl) run() { - go wait.Until(cache.cleanupExpiredAssumedPods, cache.period, cache.stop) -} - -func (cache *cacheImpl) cleanupExpiredAssumedPods() { - cache.cleanupAssumedPods(time.Now()) +func (cache *cacheImpl) run(logger klog.Logger) { + go wait.Until(func() { + cache.cleanupAssumedPods(logger, time.Now()) + }, cache.period, cache.stop) } // cleanupAssumedPods exists for making test deterministic by taking time as input argument. // It also reports metrics on the cache size for nodes, pods, and assumed pods. 
-func (cache *cacheImpl) cleanupAssumedPods(now time.Time) { +func (cache *cacheImpl) cleanupAssumedPods(logger klog.Logger, now time.Time) { cache.mu.Lock() defer cache.mu.Unlock() defer cache.updateMetrics() @@ -751,17 +751,17 @@ func (cache *cacheImpl) cleanupAssumedPods(now time.Time) { for key := range cache.assumedPods { ps, ok := cache.podStates[key] if !ok { - klog.ErrorS(nil, "Key found in assumed set but not in podStates, potentially a logical error") + logger.Error(nil, "Key found in assumed set but not in podStates, potentially a logical error") klog.FlushAndExit(klog.ExitFlushTimeout, 1) } if !ps.bindingFinished { - klog.V(5).InfoS("Could not expire cache for pod as binding is still in progress", "podKey", key, "pod", klog.KObj(ps.pod)) + logger.V(5).Info("Could not expire cache for pod as binding is still in progress", "podKey", key, "pod", klog.KObj(ps.pod)) continue } if cache.ttl != 0 && now.After(*ps.deadline) { - klog.InfoS("Pod expired", "podKey", key, "pod", klog.KObj(ps.pod)) - if err := cache.removePod(ps.pod); err != nil { - klog.ErrorS(err, "ExpirePod failed", "podKey", key, "pod", klog.KObj(ps.pod)) + logger.Info("Pod expired", "podKey", key, "pod", klog.KObj(ps.pod)) + if err := cache.removePod(logger, ps.pod); err != nil { + logger.Error(err, "ExpirePod failed", "podKey", key, "pod", klog.KObj(ps.pod)) } } } diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/internal/cache/debugger/comparer.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/internal/cache/debugger/comparer.go index bf8cafb7844d..bc7648dce6bf 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/internal/cache/debugger/comparer.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/internal/cache/debugger/comparer.go @@ -38,9 +38,9 @@ type CacheComparer struct { } // Compare compares the nodes and pods of NodeLister with Cache.Snapshot. 
-func (c *CacheComparer) Compare() error { - klog.V(3).InfoS("Cache comparer started") - defer klog.V(3).InfoS("Cache comparer finished") +func (c *CacheComparer) Compare(logger klog.Logger) error { + logger.V(3).Info("Cache comparer started") + defer logger.V(3).Info("Cache comparer finished") nodes, err := c.NodeLister.List(labels.Everything()) if err != nil { @@ -57,11 +57,11 @@ func (c *CacheComparer) Compare() error { pendingPods, _ := c.PodQueue.PendingPods() if missed, redundant := c.CompareNodes(nodes, dump.Nodes); len(missed)+len(redundant) != 0 { - klog.InfoS("Cache mismatch", "missedNodes", missed, "redundantNodes", redundant) + logger.Info("Cache mismatch", "missedNodes", missed, "redundantNodes", redundant) } if missed, redundant := c.ComparePods(pods, pendingPods, dump.Nodes); len(missed)+len(redundant) != 0 { - klog.InfoS("Cache mismatch", "missedPods", missed, "redundantPods", redundant) + logger.Info("Cache mismatch", "missedPods", missed, "redundantPods", redundant) } return nil diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/internal/cache/debugger/debugger.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/internal/cache/debugger/debugger.go index d8839ec67e83..8a1f80c88b42 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/internal/cache/debugger/debugger.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/internal/cache/debugger/debugger.go @@ -17,10 +17,12 @@ limitations under the License. package debugger import ( + "context" "os" "os/signal" corelisters "k8s.io/client-go/listers/core/v1" + "k8s.io/klog/v2" internalcache "k8s.io/kubernetes/pkg/scheduler/internal/cache" internalqueue "k8s.io/kubernetes/pkg/scheduler/internal/queue" ) @@ -54,7 +56,9 @@ func New( // ListenForSignal starts a goroutine that will trigger the CacheDebugger's // behavior when the process receives SIGINT (Windows) or SIGUSR2 (non-Windows).
-func (d *CacheDebugger) ListenForSignal(stopCh <-chan struct{}) { +func (d *CacheDebugger) ListenForSignal(ctx context.Context) { + logger := klog.FromContext(ctx) + stopCh := ctx.Done() ch := make(chan os.Signal, 1) signal.Notify(ch, compareSignal) @@ -64,8 +68,8 @@ func (d *CacheDebugger) ListenForSignal(stopCh <-chan struct{}) { case <-stopCh: return case <-ch: - d.Comparer.Compare() - d.Dumper.DumpAll() + d.Comparer.Compare(logger) + d.Dumper.DumpAll(logger) } } }() diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/internal/cache/debugger/dumper.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/internal/cache/debugger/dumper.go index d95c234eed7c..618be63d989d 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/internal/cache/debugger/dumper.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/internal/cache/debugger/dumper.go @@ -36,30 +36,30 @@ type CacheDumper struct { } // DumpAll writes cached nodes and scheduling queue information to the scheduler logs. -func (d *CacheDumper) DumpAll() { - d.dumpNodes() - d.dumpSchedulingQueue() +func (d *CacheDumper) DumpAll(logger klog.Logger) { + d.dumpNodes(logger) + d.dumpSchedulingQueue(logger) } // dumpNodes writes NodeInfo to the scheduler logs. -func (d *CacheDumper) dumpNodes() { +func (d *CacheDumper) dumpNodes(logger klog.Logger) { dump := d.cache.Dump() nodeInfos := make([]string, 0, len(dump.Nodes)) for name, nodeInfo := range dump.Nodes { nodeInfos = append(nodeInfos, d.printNodeInfo(name, nodeInfo)) } // Extra blank line added between node entries for readability. - klog.InfoS("Dump of cached NodeInfo", "nodes", strings.Join(nodeInfos, "\n\n")) + logger.Info("Dump of cached NodeInfo", "nodes", strings.Join(nodeInfos, "\n\n")) } // dumpSchedulingQueue writes pods in the scheduling queue to the scheduler logs. 
-func (d *CacheDumper) dumpSchedulingQueue() { +func (d *CacheDumper) dumpSchedulingQueue(logger klog.Logger) { pendingPods, s := d.podQueue.PendingPods() var podData strings.Builder for _, p := range pendingPods { podData.WriteString(printPod(p)) } - klog.InfoS("Dump of scheduling queue", "summary", s, "pods", podData.String()) + logger.Info("Dump of scheduling queue", "summary", s, "pods", podData.String()) } // printNodeInfo writes parts of NodeInfo to a string. diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/internal/cache/interface.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/internal/cache/interface.go index f6298bd346b3..24b7fd49066d 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/internal/cache/interface.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/internal/cache/interface.go @@ -19,6 +19,7 @@ package cache import ( v1 "k8s.io/api/core/v1" "k8s.io/apimachinery/pkg/util/sets" + "k8s.io/klog/v2" "k8s.io/kubernetes/pkg/scheduler/framework" ) @@ -68,23 +69,23 @@ type Cache interface { // AssumePod assumes a pod scheduled and aggregates the pod's information into its node. // The implementation also decides the policy to expire pod before being confirmed (receiving Add event). // After expiration, its information would be subtracted. - AssumePod(pod *v1.Pod) error + AssumePod(logger klog.Logger, pod *v1.Pod) error // FinishBinding signals that cache for assumed pod can be expired - FinishBinding(pod *v1.Pod) error + FinishBinding(logger klog.Logger, pod *v1.Pod) error // ForgetPod removes an assumed pod from cache. - ForgetPod(pod *v1.Pod) error + ForgetPod(logger klog.Logger, pod *v1.Pod) error // AddPod either confirms a pod if it's assumed, or adds it back if it's expired. // If added back, the pod's information would be added again. 
- AddPod(pod *v1.Pod) error + AddPod(logger klog.Logger, pod *v1.Pod) error // UpdatePod removes oldPod's information and adds newPod's information. - UpdatePod(oldPod, newPod *v1.Pod) error + UpdatePod(logger klog.Logger, oldPod, newPod *v1.Pod) error // RemovePod removes a pod. The pod's information would be subtracted from assigned node. - RemovePod(pod *v1.Pod) error + RemovePod(logger klog.Logger, pod *v1.Pod) error // GetPod returns the pod from the cache with the same namespace and the // same name of the specified pod. @@ -95,21 +96,21 @@ type Cache interface { // AddNode adds overall information about node. // It returns a clone of added NodeInfo object. - AddNode(node *v1.Node) *framework.NodeInfo + AddNode(logger klog.Logger, node *v1.Node) *framework.NodeInfo // UpdateNode updates overall information about node. // It returns a clone of updated NodeInfo object. - UpdateNode(oldNode, newNode *v1.Node) *framework.NodeInfo + UpdateNode(logger klog.Logger, oldNode, newNode *v1.Node) *framework.NodeInfo // RemoveNode removes overall information about node. - RemoveNode(node *v1.Node) error + RemoveNode(logger klog.Logger, node *v1.Node) error // UpdateSnapshot updates the passed infoSnapshot to the current contents of Cache. // The node info contains aggregated information of pods scheduled (including assumed to be) // on this node. // The snapshot only includes Nodes that are not deleted at the time this function is called. // nodeinfo.Node() is guaranteed to be not nil for all the nodes in the snapshot. - UpdateSnapshot(nodeSnapshot *Snapshot) error + UpdateSnapshot(logger klog.Logger, nodeSnapshot *Snapshot) error // Dump produces a dump of the current cache. Dump() *Dump @@ -117,6 +118,6 @@ type Cache interface { // Dump is a dump of the cache state. 
type Dump struct { - AssumedPods sets.String + AssumedPods sets.Set[string] Nodes map[string]*framework.NodeInfo } diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/internal/cache/node_tree.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/internal/cache/node_tree.go index 2463e3a95bd3..f344f8494fdc 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/internal/cache/node_tree.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/internal/cache/node_tree.go @@ -36,24 +36,24 @@ type nodeTree struct { } // newNodeTree creates a NodeTree from nodes. -func newNodeTree(nodes []*v1.Node) *nodeTree { +func newNodeTree(logger klog.Logger, nodes []*v1.Node) *nodeTree { nt := &nodeTree{ tree: make(map[string][]string, len(nodes)), } for _, n := range nodes { - nt.addNode(n) + nt.addNode(logger, n) } return nt } // addNode adds a node and its corresponding zone to the tree. If the zone already exists, the node // is added to the array of nodes in that zone. -func (nt *nodeTree) addNode(n *v1.Node) { +func (nt *nodeTree) addNode(logger klog.Logger, n *v1.Node) { zone := utilnode.GetZoneKey(n) if na, ok := nt.tree[zone]; ok { for _, nodeName := range na { if nodeName == n.Name { - klog.InfoS("Node already exists in the NodeTree", "node", klog.KObj(n)) + logger.Info("Did not add to the NodeTree because it already exists", "node", klog.KObj(n)) return } } @@ -62,12 +62,12 @@ func (nt *nodeTree) addNode(n *v1.Node) { nt.zones = append(nt.zones, zone) nt.tree[zone] = []string{n.Name} } - klog.V(2).InfoS("Added node in listed group to NodeTree", "node", klog.KObj(n), "zone", zone) + logger.V(2).Info("Added node in listed group to NodeTree", "node", klog.KObj(n), "zone", zone) nt.numNodes++ } // removeNode removes a node from the NodeTree. 
-func (nt *nodeTree) removeNode(n *v1.Node) error { +func (nt *nodeTree) removeNode(logger klog.Logger, n *v1.Node) error { zone := utilnode.GetZoneKey(n) if na, ok := nt.tree[zone]; ok { for i, nodeName := range na { @@ -76,13 +76,13 @@ func (nt *nodeTree) removeNode(n *v1.Node) error { if len(nt.tree[zone]) == 0 { nt.removeZone(zone) } - klog.V(2).InfoS("Removed node in listed group from NodeTree", "node", klog.KObj(n), "zone", zone) + logger.V(2).Info("Removed node in listed group from NodeTree", "node", klog.KObj(n), "zone", zone) nt.numNodes-- return nil } } } - klog.ErrorS(nil, "Node in listed group was not found", "node", klog.KObj(n), "zone", zone) + logger.Error(nil, "Did not remove Node in NodeTree because it was not found", "node", klog.KObj(n), "zone", zone) return fmt.Errorf("node %q in group %q was not found", n.Name, zone) } @@ -99,7 +99,7 @@ func (nt *nodeTree) removeZone(zone string) { } // updateNode updates a node in the NodeTree. -func (nt *nodeTree) updateNode(old, new *v1.Node) { +func (nt *nodeTree) updateNode(logger klog.Logger, old, new *v1.Node) { var oldZone string if old != nil { oldZone = utilnode.GetZoneKey(old) @@ -110,8 +110,8 @@ func (nt *nodeTree) updateNode(old, new *v1.Node) { if oldZone == newZone { return } - nt.removeNode(old) // No error checking. We ignore whether the old node exists or not. - nt.addNode(new) + nt.removeNode(logger, old) // No error checking. We ignore whether the old node exists or not. + nt.addNode(logger, new) } // list returns the list of names of the node. 
NodeTree iterates over zones and in each zone iterates diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/internal/cache/snapshot.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/internal/cache/snapshot.go index 78b67322a41f..abd79312a2df 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/internal/cache/snapshot.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/internal/cache/snapshot.go @@ -38,7 +38,7 @@ type Snapshot struct { havePodsWithRequiredAntiAffinityNodeInfoList []*framework.NodeInfo // usedPVCSet contains a set of PVC names that have one or more scheduled pods using them, // keyed in the format "namespace/name". - usedPVCSet sets.String + usedPVCSet sets.Set[string] generation int64 } @@ -48,7 +48,7 @@ var _ framework.SharedLister = &Snapshot{} func NewEmptySnapshot() *Snapshot { return &Snapshot{ nodeInfoMap: make(map[string]*framework.NodeInfo), - usedPVCSet: sets.NewString(), + usedPVCSet: sets.New[string](), } } @@ -103,8 +103,8 @@ func createNodeInfoMap(pods []*v1.Pod, nodes []*v1.Node) map[string]*framework.N return nodeNameToInfo } -func createUsedPVCSet(pods []*v1.Pod) sets.String { - usedPVCSet := sets.NewString() +func createUsedPVCSet(pods []*v1.Pod) sets.Set[string] { + usedPVCSet := sets.New[string]() for _, pod := range pods { if pod.Spec.NodeName == "" { continue @@ -123,7 +123,7 @@ func createUsedPVCSet(pods []*v1.Pod) sets.String { } // getNodeImageStates returns the given node's image states based on the given imageExistence map. 
-func getNodeImageStates(node *v1.Node, imageExistenceMap map[string]sets.String) map[string]*framework.ImageStateSummary { +func getNodeImageStates(node *v1.Node, imageExistenceMap map[string]sets.Set[string]) map[string]*framework.ImageStateSummary { imageStates := make(map[string]*framework.ImageStateSummary) for _, image := range node.Status.Images { @@ -138,13 +138,13 @@ func getNodeImageStates(node *v1.Node, imageExistenceMap map[string]sets.String) } // createImageExistenceMap returns a map recording on which nodes the images exist, keyed by the images' names. -func createImageExistenceMap(nodes []*v1.Node) map[string]sets.String { - imageExistenceMap := make(map[string]sets.String) +func createImageExistenceMap(nodes []*v1.Node) map[string]sets.Set[string] { + imageExistenceMap := make(map[string]sets.Set[string]) for _, node := range nodes { for _, image := range node.Status.Images { for _, name := range image.Names { if _, ok := imageExistenceMap[name]; !ok { - imageExistenceMap[name] = sets.NewString(node.Name) + imageExistenceMap[name] = sets.New(node.Name) } else { imageExistenceMap[name].Insert(node.Name) } diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/internal/queue/scheduling_queue.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/internal/queue/scheduling_queue.go index 9e699614098b..ba3c00edd214 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/internal/queue/scheduling_queue.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/internal/queue/scheduling_queue.go @@ -27,6 +27,7 @@ limitations under the License. package queue import ( + "container/list" "context" "fmt" "math/rand" @@ -59,12 +60,9 @@ const ( // backoffQ or activeQ. If this value is empty, the default value (5min) // will be used. 
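The `snapshot.go` hunks above swap the string-specific `sets.String`/`sets.NewString` for the generic `sets.Set[string]`/`sets.New[string]` from `k8s.io/apimachinery`. The sketch below is a stdlib-only stand-in for that generic set (not the apimachinery implementation), applied to the `createImageExistenceMap` shape from the diff, to show that the call sites change while the set API stays the same.

```go
package main

import "fmt"

// Set is an illustrative stand-in for apimachinery's sets.Set[T]: the same
// map-of-empty-struct representation, with the element type now a type
// parameter instead of being hard-coded to string.
type Set[T comparable] map[T]struct{}

// New mirrors sets.New: build a set from an initial list of items.
func New[T comparable](items ...T) Set[T] {
	s := Set[T]{}
	s.Insert(items...)
	return s
}

// Insert adds items to the set, ignoring duplicates.
func (s Set[T]) Insert(items ...T) {
	for _, item := range items {
		s[item] = struct{}{}
	}
}

// Has reports whether item is in the set.
func (s Set[T]) Has(item T) bool { _, ok := s[item]; return ok }

func main() {
	// Mirrors createImageExistenceMap: image name -> set of node names.
	imageExistenceMap := map[string]Set[string]{}
	for _, rec := range []struct{ node, image string }{
		{"node-a", "nginx"}, {"node-b", "nginx"}, {"node-b", "redis"},
	} {
		if _, ok := imageExistenceMap[rec.image]; !ok {
			imageExistenceMap[rec.image] = New(rec.node)
		} else {
			imageExistenceMap[rec.image].Insert(rec.node)
		}
	}
	fmt.Println(len(imageExistenceMap["nginx"])) // nginx exists on two nodes
}
```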
DefaultPodMaxInUnschedulablePodsDuration time.Duration = 5 * time.Minute - - queueClosed = "scheduling queue is closed" - // Scheduling queue names - activeQName = "Active" - backoffQName = "Backoff" + activeQ = "Active" + backoffQ = "Backoff" unschedulablePods = "Unschedulable" preEnqueue = "PreEnqueue" @@ -91,15 +89,15 @@ type PreEnqueueCheck func(pod *v1.Pod) bool // makes it easy to use those data structures as a SchedulingQueue. type SchedulingQueue interface { framework.PodNominator - Add(pod *v1.Pod) error + Add(logger klog.Logger, pod *v1.Pod) error // Activate moves the given pods to activeQ iff they're in unschedulablePods or backoffQ. // The passed-in pods are originally compiled from plugins that want to activate Pods, // by injecting the pods through a reserved CycleState struct (PodsToActivate). - Activate(pods map[string]*v1.Pod) + Activate(logger klog.Logger, pods map[string]*v1.Pod) // AddUnschedulableIfNotPresent adds an unschedulable pod back to scheduling queue. // The podSchedulingCycle represents the current scheduling cycle number which can be // returned by calling SchedulingCycle(). - AddUnschedulableIfNotPresent(pod *framework.QueuedPodInfo, podSchedulingCycle int64) error + AddUnschedulableIfNotPresent(logger klog.Logger, pod *framework.QueuedPodInfo, podSchedulingCycle int64) error // SchedulingCycle returns the current number of scheduling cycle which is // cached by scheduling queue. Normally, incrementing this number whenever // a pod is popped (e.g. called Pop()) is enough. @@ -107,17 +105,24 @@ type SchedulingQueue interface { // Pop removes the head of the queue and returns it. It blocks if the // queue is empty and waits until a new item is added to the queue. Pop() (*framework.QueuedPodInfo, error) - Update(oldPod, newPod *v1.Pod) error + // Done must be called for pod returned by Pop. This allows the queue to + // keep track of which pods are currently being processed. 
+ Done(types.UID) + Update(logger klog.Logger, oldPod, newPod *v1.Pod) error Delete(pod *v1.Pod) error - MoveAllToActiveOrBackoffQueue(event framework.ClusterEvent, preCheck PreEnqueueCheck) - AssignedPodAdded(pod *v1.Pod) - AssignedPodUpdated(pod *v1.Pod) + // TODO(sanposhiho): move all PreEnqueueCheck to Requeue and delete it from this parameter eventually. + // Some PreEnqueueChecks include event filtering logic based on some in-tree plugins, + // and it affects other plugins badly. + // See https://github.com/kubernetes/kubernetes/issues/110175 + MoveAllToActiveOrBackoffQueue(logger klog.Logger, event framework.ClusterEvent, oldObj, newObj interface{}, preCheck PreEnqueueCheck) + AssignedPodAdded(logger klog.Logger, pod *v1.Pod) + AssignedPodUpdated(logger klog.Logger, oldPod, newPod *v1.Pod) PendingPods() ([]*v1.Pod, string) // Close closes the SchedulingQueue so that the goroutine which is // waiting to pop items can exit gracefully. Close() // Run starts the goroutines managing the queue. - Run() + Run(logger klog.Logger) } // NewSchedulingQueue initializes a priority queue as a new scheduling queue. @@ -157,6 +162,13 @@ type PriorityQueue struct { cond sync.Cond + // inFlightPods holds the UID of all pods which have been popped out for which Done + // hasn't been called yet - in other words, all pods that are currently being + // processed (being scheduled, in permit, or in the binding cycle). + inFlightPods map[types.UID]inFlightPod + // receivedEvents holds the events received by the scheduling queue. + receivedEvents *list.List + // activeQ is heap structure that scheduler actively looks at to find pods to // schedule. Head of heap is the highest priority pod. activeQ *heap.Heap @@ -172,11 +184,13 @@ type PriorityQueue struct { // received a move request. Unschedulable pods in and before this scheduling // cycle will be put back to activeQueue if we were trying to schedule them // when we received move request.
+ // TODO: this will be removed after SchedulingQueueHint goes to stable and the feature gate is removed. moveRequestCycle int64 - clusterEventMap map[framework.ClusterEvent]sets.String // preEnqueuePluginMap is keyed with profile name, valued with registered preEnqueue plugins. preEnqueuePluginMap map[string][]framework.PreEnqueuePlugin + // queueingHintMap is keyed with profile name, valued with registered queueing hint functions. + queueingHintMap QueueingHintMapPerProfile // closed indicates that the queue is closed. // It is mainly used to let Pop() exit its control loop while waiting for an item. @@ -187,6 +201,34 @@ type PriorityQueue struct { metricsRecorder metrics.MetricAsyncRecorder // pluginMetricsSamplePercent is the percentage of plugin metrics to be sampled. pluginMetricsSamplePercent int + + // isSchedulingQueueHintEnabled indicates whether the feature gate for the scheduling queue is enabled. + isSchedulingQueueHintEnabled bool +} + +// QueueingHintFunction is the wrapper of QueueingHintFn that has PluginName. +type QueueingHintFunction struct { + PluginName string + QueueingHintFn framework.QueueingHintFn +} + +type inFlightPod struct { + // previousEvent is the latest observed event when the pod is popped. + previousEvent *list.Element +} + +// clusterEvent has the event and involved objects. +type clusterEvent struct { + event framework.ClusterEvent + // oldObj is the object that involved this event. + oldObj interface{} + // newObj is the object that involved this event. + newObj interface{} + + // inFlightPodsNum is the counter of pods referring to this cluster event. + // It is initialized with the number of Pods being scheduled when the event is received, + // and is decremented when the scheduling for those Pods are Done(). 
+ inFlightPodsNum int } type priorityQueueOptions struct { @@ -197,8 +239,8 @@ type priorityQueueOptions struct { podLister listersv1.PodLister metricsRecorder metrics.MetricAsyncRecorder pluginMetricsSamplePercent int - clusterEventMap map[framework.ClusterEvent]sets.String preEnqueuePluginMap map[string][]framework.PreEnqueuePlugin + queueingHintMap QueueingHintMapPerProfile } // Option configures a PriorityQueue @@ -232,17 +274,23 @@ func WithPodLister(pl listersv1.PodLister) Option { } } -// WithClusterEventMap sets clusterEventMap for PriorityQueue. -func WithClusterEventMap(m map[framework.ClusterEvent]sets.String) Option { +// WithPodMaxInUnschedulablePodsDuration sets podMaxInUnschedulablePodsDuration for PriorityQueue. +func WithPodMaxInUnschedulablePodsDuration(duration time.Duration) Option { return func(o *priorityQueueOptions) { - o.clusterEventMap = m + o.podMaxInUnschedulablePodsDuration = duration } } -// WithPodMaxInUnschedulablePodsDuration sets podMaxInUnschedulablePodsDuration for PriorityQueue. -func WithPodMaxInUnschedulablePodsDuration(duration time.Duration) Option { +// QueueingHintMapPerProfile is keyed with profile name, valued with queueing hint map registered for the profile. +type QueueingHintMapPerProfile map[string]QueueingHintMap + +// QueueingHintMap is keyed with ClusterEvent, valued with queueing hint functions registered for the event. +type QueueingHintMap map[framework.ClusterEvent][]*QueueingHintFunction + +// WithQueueingHintMapPerProfile sets queueingHintMap for PriorityQueue. +func WithQueueingHintMapPerProfile(m QueueingHintMapPerProfile) Option { return func(o *priorityQueueOptions) { - o.podMaxInUnschedulablePodsDuration = duration + o.queueingHintMap = m } } @@ -283,7 +331,7 @@ func newQueuedPodInfoForLookup(pod *v1.Pod, plugins ...string) *framework.Queued // and so we avoid creating a full PodInfo, which is expensive to instantiate frequently.
return &framework.QueuedPodInfo{ PodInfo: &framework.PodInfo{Pod: pod}, - UnschedulablePlugins: sets.NewString(plugins...), + UnschedulablePlugins: sets.New(plugins...), } } @@ -316,11 +364,14 @@ func NewPriorityQueue( podMaxInUnschedulablePodsDuration: options.podMaxInUnschedulablePodsDuration, activeQ: heap.NewWithRecorder(podInfoKeyFunc, comp, metrics.NewActivePodsRecorder()), unschedulablePods: newUnschedulablePods(metrics.NewUnschedulablePodsRecorder(), metrics.NewGatedPodsRecorder()), - moveRequestCycle: -1, - clusterEventMap: options.clusterEventMap, + inFlightPods: make(map[types.UID]inFlightPod), + receivedEvents: list.New(), preEnqueuePluginMap: options.preEnqueuePluginMap, + queueingHintMap: options.queueingHintMap, metricsRecorder: options.metricsRecorder, pluginMetricsSamplePercent: options.pluginMetricsSamplePercent, + moveRequestCycle: -1, + isSchedulingQueueHintEnabled: utilfeature.DefaultFeatureGate.Enabled(features.SchedulerQueueingHints), } pq.cond.L = &pq.lock pq.podBackoffQ = heap.NewWithRecorder(podInfoKeyFunc, pq.podsCompareBackoffCompleted, metrics.NewBackoffPodsRecorder()) @@ -330,9 +381,68 @@ func NewPriorityQueue( } // Run starts the goroutine to pump from podBackoffQ to activeQ -func (p *PriorityQueue) Run() { - go wait.Until(p.flushBackoffQCompleted, 1.0*time.Second, p.stop) - go wait.Until(p.flushUnschedulablePodsLeftover, 30*time.Second, p.stop) +func (p *PriorityQueue) Run(logger klog.Logger) { + go wait.Until(func() { + p.flushBackoffQCompleted(logger) + }, 1.0*time.Second, p.stop) + go wait.Until(func() { + p.flushUnschedulablePodsLeftover(logger) + }, 30*time.Second, p.stop) +} + +// isPodWorthRequeuing calls QueueingHintFn of only plugins registered in pInfo.unschedulablePlugins. +// If any QueueingHintFn returns QueueImmediately, the scheduling queue is supposed to enqueue this Pod to activeQ. 
+// If no QueueingHintFn returns QueueImmediately, but some return QueueAfterBackoff, +// the scheduling queue is supposed to enqueue this Pod to activeQ/backoffQ depending on the remaining backoff time of the Pod. +// If all QueueingHintFn returns QueueSkip, the scheduling queue enqueues the Pod back to unschedulable Pod pool +// because no plugin changes the scheduling result via the event. +func (p *PriorityQueue) isPodWorthRequeuing(logger klog.Logger, pInfo *framework.QueuedPodInfo, event framework.ClusterEvent, oldObj, newObj interface{}) framework.QueueingHint { + if pInfo.UnschedulablePlugins.Len() == 0 { + logger.V(6).Info("Worth requeuing because no unschedulable plugins", "pod", klog.KObj(pInfo.Pod)) + return framework.QueueAfterBackoff + } + + if event.IsWildCard() { + logger.V(6).Info("Worth requeuing because the event is wildcard", "pod", klog.KObj(pInfo.Pod)) + return framework.QueueAfterBackoff + } + + hintMap, ok := p.queueingHintMap[pInfo.Pod.Spec.SchedulerName] + if !ok { + // shouldn't reach here unless bug. + logger.Error(nil, "No QueueingHintMap is registered for this profile", "profile", pInfo.Pod.Spec.SchedulerName, "pod", klog.KObj(pInfo.Pod)) + return framework.QueueAfterBackoff + } + + pod := pInfo.Pod + queueHint := framework.QueueSkip + for eventToMatch, hintfns := range hintMap { + if eventToMatch.Resource != event.Resource || eventToMatch.ActionType&event.ActionType == 0 { + continue + } + + for _, hintfn := range hintfns { + if !pInfo.UnschedulablePlugins.Has(hintfn.PluginName) { + continue + } + + switch h := hintfn.QueueingHintFn(logger, pod, oldObj, newObj); h { + case framework.QueueSkip: + continue + case framework.QueueImmediately: + return h + case framework.QueueAfterBackoff: + // replace queueHint with the returned value, + // but continue to other queueHintFn to check because other plugins may want to return QueueImmediately. 
+ queueHint = h + } + + } + } + + // No queueing hint function is registered for this event + // or no queueing hint fn returns the value other than QueueSkip. + return queueHint } // runPreEnqueuePlugins iterates PreEnqueue function in each registered PreEnqueuePlugin. @@ -341,9 +451,10 @@ func (p *PriorityQueue) Run() { // Note: we need to associate the failed plugin to `pInfo`, so that the pod can be moved back // to activeQ by related cluster event. func (p *PriorityQueue) runPreEnqueuePlugins(ctx context.Context, pInfo *framework.QueuedPodInfo) bool { + logger := klog.FromContext(ctx) var s *framework.Status pod := pInfo.Pod - startTime := time.Now() + startTime := p.clock.Now() defer func() { metrics.FrameworkExtensionPointDuration.WithLabelValues(preEnqueue, s.Code().String(), pod.Spec.SchedulerName).Observe(metrics.SinceInSeconds(startTime)) }() @@ -357,9 +468,9 @@ func (p *PriorityQueue) runPreEnqueuePlugins(ctx context.Context, pInfo *framewo pInfo.UnschedulablePlugins.Insert(pl.Name()) metrics.UnschedulableReason(pl.Name(), pod.Spec.SchedulerName).Inc() if s.Code() == framework.Error { - klog.ErrorS(s.AsError(), "Unexpected error running PreEnqueue plugin", "pod", klog.KObj(pod), "plugin", pl.Name()) + logger.Error(s.AsError(), "Unexpected error running PreEnqueue plugin", "pod", klog.KObj(pod), "plugin", pl.Name()) } else { - klog.V(5).InfoS("Status after running PreEnqueue plugin", "pod", klog.KObj(pod), "plugin", pl.Name(), "status", s) + logger.Info("Status after running PreEnqueue plugin", "pod", klog.KObj(pod), "plugin", pl.Name(), "status", s) } return false } @@ -379,15 +490,19 @@ func (p *PriorityQueue) runPreEnqueuePlugin(ctx context.Context, pl framework.Pr // addToActiveQ tries to add pod to active queue. It returns 2 parameters: // 1. a boolean flag to indicate whether the pod is added successfully. // 2. an error for the caller to act on. 
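The aggregation rule in `isPodWorthRequeuing` above is: `QueueImmediately` short-circuits, `QueueAfterBackoff` is remembered while the remaining hint functions are still consulted, and `QueueSkip` wins only when every hint function returns it. A small self-contained sketch of just that precedence logic; the constant names mirror `framework.QueueingHint` from the diff, but the types and `combineHints` helper are otherwise hypothetical.

```go
package main

import "fmt"

// QueueingHint mimics the three framework.QueueingHint values used above.
type QueueingHint int

const (
	QueueSkip QueueingHint = iota
	QueueAfterBackoff
	QueueImmediately
)

// combineHints reproduces the loop's aggregation: QueueImmediately returns
// right away, QueueAfterBackoff is kept as the running result (later hints
// may still upgrade it), and QueueSkip is the default when nothing matched.
func combineHints(hints []QueueingHint) QueueingHint {
	result := QueueSkip
	for _, h := range hints {
		switch h {
		case QueueImmediately:
			return h
		case QueueAfterBackoff:
			result = QueueAfterBackoff
		}
	}
	return result
}

func main() {
	fmt.Println(combineHints([]QueueingHint{QueueSkip, QueueAfterBackoff, QueueImmediately}))
	fmt.Println(combineHints([]QueueingHint{QueueSkip, QueueSkip}))
}
```

The asymmetry is deliberate: a single plugin claiming the event resolves its failure is enough to requeue the pod past backoff, whereas requeueing to the unschedulable pool requires unanimity.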
-func (p *PriorityQueue) addToActiveQ(pInfo *framework.QueuedPodInfo) (bool, error) { +func (p *PriorityQueue) addToActiveQ(logger klog.Logger, pInfo *framework.QueuedPodInfo) (bool, error) { pInfo.Gated = !p.runPreEnqueuePlugins(context.Background(), pInfo) if pInfo.Gated { // Add the Pod to unschedulablePods if it's not passing PreEnqueuePlugins. p.unschedulablePods.addOrUpdate(pInfo) return false, nil } + if pInfo.InitialAttemptTimestamp == nil { + now := p.clock.Now() + pInfo.InitialAttemptTimestamp = &now + } if err := p.activeQ.Add(pInfo); err != nil { - klog.ErrorS(err, "Error adding pod to the active queue", "pod", klog.KObj(pInfo.Pod)) + logger.Error(err, "Error adding pod to the active queue", "pod", klog.KObj(pInfo.Pod)) return false, err } return true, nil @@ -395,39 +510,39 @@ func (p *PriorityQueue) addToActiveQ(pInfo *framework.QueuedPodInfo) (bool, erro // Add adds a pod to the active queue. It should be called only when a new pod // is added so there is no chance the pod is already in active/unschedulable/backoff queues -func (p *PriorityQueue) Add(pod *v1.Pod) error { +func (p *PriorityQueue) Add(logger klog.Logger, pod *v1.Pod) error { p.lock.Lock() defer p.lock.Unlock() pInfo := p.newQueuedPodInfo(pod) gated := pInfo.Gated - if added, err := p.addToActiveQ(pInfo); !added { + if added, err := p.addToActiveQ(logger, pInfo); !added { return err } if p.unschedulablePods.get(pod) != nil { - klog.ErrorS(nil, "Error: pod is already in the unschedulable queue", "pod", klog.KObj(pod)) + logger.Error(nil, "Error: pod is already in the unschedulable queue", "pod", klog.KObj(pod)) p.unschedulablePods.delete(pod, gated) } // Delete pod from backoffQ if it is backing off if err := p.podBackoffQ.Delete(pInfo); err == nil { - klog.ErrorS(nil, "Error: pod is already in the podBackoff queue", "pod", klog.KObj(pod)) + logger.Error(nil, "Error: pod is already in the podBackoff queue", "pod", klog.KObj(pod)) } - klog.V(5).InfoS("Pod moved to an internal scheduling 
queue", "pod", klog.KObj(pod), "event", PodAdd, "queue", activeQName) + logger.V(5).Info("Pod moved to an internal scheduling queue", "pod", klog.KObj(pod), "event", PodAdd, "queue", activeQ) metrics.SchedulerQueueIncomingPods.WithLabelValues("active", PodAdd).Inc() - p.addNominatedPodUnlocked(pInfo.PodInfo, nil) + p.addNominatedPodUnlocked(logger, pInfo.PodInfo, nil) p.cond.Broadcast() return nil } // Activate moves the given pods to activeQ iff they're in unschedulablePods or backoffQ. -func (p *PriorityQueue) Activate(pods map[string]*v1.Pod) { +func (p *PriorityQueue) Activate(logger klog.Logger, pods map[string]*v1.Pod) { p.lock.Lock() defer p.lock.Unlock() activated := false for _, pod := range pods { - if p.activate(pod) { + if p.activate(logger, pod) { activated = true } } @@ -437,7 +552,7 @@ func (p *PriorityQueue) Activate(pods map[string]*v1.Pod) { } } -func (p *PriorityQueue) activate(pod *v1.Pod) bool { +func (p *PriorityQueue) activate(logger klog.Logger, pod *v1.Pod) bool { // Verify if the pod is present in activeQ. if _, exists, _ := p.activeQ.Get(newQueuedPodInfoForLookup(pod)); exists { // No need to activate if it's already present in activeQ. @@ -448,7 +563,7 @@ func (p *PriorityQueue) activate(pod *v1.Pod) bool { if pInfo = p.unschedulablePods.get(pod); pInfo == nil { // If the pod doesn't belong to unschedulablePods or backoffQ, don't activate it. if obj, exists, _ := p.podBackoffQ.Get(newQueuedPodInfoForLookup(pod)); !exists { - klog.ErrorS(nil, "To-activate pod does not exist in unschedulablePods or backoffQ", "pod", klog.KObj(pod)) + logger.Error(nil, "To-activate pod does not exist in unschedulablePods or backoffQ", "pod", klog.KObj(pod)) return false } else { pInfo = obj.(*framework.QueuedPodInfo) @@ -457,18 +572,18 @@ func (p *PriorityQueue) activate(pod *v1.Pod) bool { if pInfo == nil { // Redundant safe check. We shouldn't reach here. 
- klog.ErrorS(nil, "Internal error: cannot obtain pInfo") + logger.Error(nil, "Internal error: cannot obtain pInfo") return false } gated := pInfo.Gated - if added, _ := p.addToActiveQ(pInfo); !added { + if added, _ := p.addToActiveQ(logger, pInfo); !added { return false } p.unschedulablePods.delete(pInfo.Pod, gated) p.podBackoffQ.Delete(pInfo) metrics.SchedulerQueueIncomingPods.WithLabelValues("active", ForceActivate).Inc() - p.addNominatedPodUnlocked(pInfo.PodInfo, nil) + p.addNominatedPodUnlocked(logger, pInfo.PodInfo, nil) return true } @@ -489,13 +604,102 @@ func (p *PriorityQueue) SchedulingCycle() int64 { return p.schedulingCycle } +// determineSchedulingHintForInFlightPod looks at the unschedulable plugins of the given Pod +// and determines the scheduling hint for this Pod while checking the events that happened while it was in flight. +func (p *PriorityQueue) determineSchedulingHintForInFlightPod(logger klog.Logger, pInfo *framework.QueuedPodInfo, podSchedulingCycle int64) framework.QueueingHint { + if len(pInfo.UnschedulablePlugins) == 0 { + // When there is no unschedulable plugin, we cannot guess which event makes this Pod schedulable. + // Here, we use the latest requestCycle so that this Pod won't be stuck in the unschedulable pod pool for a long time. + if p.receivedEvents.Len() != 0 { + return framework.QueueAfterBackoff + } + return framework.QueueSkip + } + + inFlightPod, ok := p.inFlightPods[pInfo.Pod.UID] + if !ok { + // It shouldn't reach here unless there is a bug somewhere. + // But, set podSchedulingCycle to moveRequestCycle + // so that this Pod won't get stuck in the unschedulable pod pool. + logger.Error(nil, "In flight Pod isn't found in the scheduling queue. If you see this error log, it's likely a bug in the scheduler.", "pod", klog.KObj(pInfo.Pod)) + return framework.QueueAfterBackoff + } + + // AddUnschedulableIfNotPresent is called with the Pod at the end of scheduling or binding.
+ // So, given pInfo should have been Pop()ed before, + // we can assume pInfo must be recorded in inFlightPods. + // check if there is an event that makes this Pod schedulable based on pInfo.UnschedulablePlugins. + event := p.receivedEvents.Front() + if inFlightPod.previousEvent != nil { + // only check events that happened after the Pod was popped. + event = inFlightPod.previousEvent.Next() + } + schedulingHint := framework.QueueSkip + for ; event != nil; event = event.Next() { + e := event.Value.(*clusterEvent) + + hint := p.isPodWorthRequeuing(logger, pInfo, e.event, e.oldObj, e.newObj) + if hint == framework.QueueSkip { + continue + } + + if hint == framework.QueueImmediately { + // QueueImmediately is the strongest opinion, we don't need to check other events. + schedulingHint = framework.QueueImmediately + break + } + if hint == framework.QueueAfterBackoff { + // replace schedulingHint with QueueAfterBackoff, + // but continue to check other events because we may find it QueueImmediately with other events. + schedulingHint = framework.QueueAfterBackoff + } + } + return schedulingHint +} + +// addUnschedulableIfNotPresentWithoutQueueingHint inserts a pod that cannot be scheduled into +// the queue, unless it is already in the queue. Normally, PriorityQueue puts +// unschedulable pods in `unschedulablePods`. But if there has been a recent move +// request, then the pod is put in `podBackoffQ`. +// TODO: This function is called only when p.isSchedulingQueueHintEnabled is false, +// and this will be removed after SchedulingQueueHint goes to stable and the feature gate is removed. +func (p *PriorityQueue) addUnschedulableWithoutQueueingHint(logger klog.Logger, pInfo *framework.QueuedPodInfo, podSchedulingCycle int64) error { + pod := pInfo.Pod + // Refresh the timestamp since the pod is re-added. + pInfo.Timestamp = p.clock.Now() + + // If a move request has been received, move it to the BackoffQ, otherwise move + // it to unschedulablePods. 
+ for plugin := range pInfo.UnschedulablePlugins { + metrics.UnschedulableReason(plugin, pInfo.Pod.Spec.SchedulerName).Inc() + } + if p.moveRequestCycle >= podSchedulingCycle { + if err := p.podBackoffQ.Add(pInfo); err != nil { + return fmt.Errorf("error adding pod %v to the backoff queue: %v", klog.KObj(pod), err) + } + logger.V(5).Info("Pod moved to an internal scheduling queue", "pod", klog.KObj(pod), "event", ScheduleAttemptFailure, "queue", backoffQ) + metrics.SchedulerQueueIncomingPods.WithLabelValues("backoff", ScheduleAttemptFailure).Inc() + } else { + p.unschedulablePods.addOrUpdate(pInfo) + logger.V(5).Info("Pod moved to an internal scheduling queue", "pod", klog.KObj(pod), "event", ScheduleAttemptFailure, "queue", unschedulablePods) + metrics.SchedulerQueueIncomingPods.WithLabelValues("unschedulable", ScheduleAttemptFailure).Inc() + } + + p.addNominatedPodUnlocked(logger, pInfo.PodInfo, nil) + return nil +} + // AddUnschedulableIfNotPresent inserts a pod that cannot be scheduled into // the queue, unless it is already in the queue. Normally, PriorityQueue puts // unschedulable pods in `unschedulablePods`. But if there has been a recent move // request, then the pod is put in `podBackoffQ`. -func (p *PriorityQueue) AddUnschedulableIfNotPresent(pInfo *framework.QueuedPodInfo, podSchedulingCycle int64) error { +func (p *PriorityQueue) AddUnschedulableIfNotPresent(logger klog.Logger, pInfo *framework.QueuedPodInfo, podSchedulingCycle int64) error { p.lock.Lock() defer p.lock.Unlock() + + // In any case, this Pod will be moved back to the queue and we should call Done. 
+ defer p.done(pInfo.Pod.UID) + pod := pInfo.Pod if p.unschedulablePods.get(pod) != nil { return fmt.Errorf("Pod %v is already present in unschedulable queue", klog.KObj(pod)) @@ -508,6 +712,11 @@ func (p *PriorityQueue) AddUnschedulableIfNotPresent(pInfo *framework.QueuedPodI return fmt.Errorf("Pod %v is already present in the backoff queue", klog.KObj(pod)) } + if !p.isSchedulingQueueHintEnabled { + // fall back to the old behavior which doesn't depend on the queueing hint. + return p.addUnschedulableWithoutQueueingHint(logger, pInfo, podSchedulingCycle) + } + // Refresh the timestamp since the pod is re-added. pInfo.Timestamp = p.clock.Now() @@ -516,25 +725,24 @@ func (p *PriorityQueue) AddUnschedulableIfNotPresent(pInfo *framework.QueuedPodI for plugin := range pInfo.UnschedulablePlugins { metrics.UnschedulableReason(plugin, pInfo.Pod.Spec.SchedulerName).Inc() } - if p.moveRequestCycle >= podSchedulingCycle { - if err := p.podBackoffQ.Add(pInfo); err != nil { - return fmt.Errorf("error adding pod %v to the backoff queue: %v", klog.KObj(pod), err) - } - klog.V(5).InfoS("Pod moved to an internal scheduling queue", "pod", klog.KObj(pod), "event", ScheduleAttemptFailure, "queue", backoffQName) - metrics.SchedulerQueueIncomingPods.WithLabelValues("backoff", ScheduleAttemptFailure).Inc() - } else { - p.unschedulablePods.addOrUpdate(pInfo) - klog.V(5).InfoS("Pod moved to an internal scheduling queue", "pod", klog.KObj(pod), "event", ScheduleAttemptFailure, "queue", unschedulablePods) - metrics.SchedulerQueueIncomingPods.WithLabelValues("unschedulable", ScheduleAttemptFailure).Inc() + // Based on isPodWorthRequeuing(), we check whether this Pod may change its scheduling result by any of events that happened during scheduling. + schedulingHint := p.determineSchedulingHintForInFlightPod(logger, pInfo, podSchedulingCycle) + + // In this case, we try to requeue this Pod to activeQ/backoffQ. 
+ queue := p.requeuePodViaQueueingHint(logger, pInfo, schedulingHint, ScheduleAttemptFailure) + logger.V(6).Info("Pod moved to an internal scheduling queue", "pod", klog.KObj(pod), "event", ScheduleAttemptFailure, "queue", queue, "schedulingCycle", podSchedulingCycle) + if queue == activeQ { + // When the Pod is moved to activeQ, need to let p.cond know so that the Pod will be pop()ed out. + p.cond.Broadcast() } - p.addNominatedPodUnlocked(pInfo.PodInfo, nil) + p.addNominatedPodUnlocked(logger, pInfo.PodInfo, nil) return nil } // flushBackoffQCompleted Moves all pods from backoffQ which have completed backoff in to activeQ -func (p *PriorityQueue) flushBackoffQCompleted() { +func (p *PriorityQueue) flushBackoffQCompleted(logger klog.Logger) { p.lock.Lock() defer p.lock.Unlock() activated := false @@ -550,11 +758,11 @@ func (p *PriorityQueue) flushBackoffQCompleted() { } _, err := p.podBackoffQ.Pop() if err != nil { - klog.ErrorS(err, "Unable to pop pod from backoff queue despite backoff completion", "pod", klog.KObj(pod)) + logger.Error(err, "Unable to pop pod from backoff queue despite backoff completion", "pod", klog.KObj(pod)) break } - if added, _ := p.addToActiveQ(pInfo); added { - klog.V(5).InfoS("Pod moved to an internal scheduling queue", "pod", klog.KObj(pod), "event", BackoffComplete, "queue", activeQName) + if added, _ := p.addToActiveQ(logger, pInfo); added { + logger.V(5).Info("Pod moved to an internal scheduling queue", "pod", klog.KObj(pod), "event", BackoffComplete, "queue", activeQ) metrics.SchedulerQueueIncomingPods.WithLabelValues("active", BackoffComplete).Inc() activated = true } @@ -567,7 +775,7 @@ func (p *PriorityQueue) flushBackoffQCompleted() { // flushUnschedulablePodsLeftover moves pods which stay in unschedulablePods // longer than podMaxInUnschedulablePodsDuration to backoffQ or activeQ. 
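As background for the fallback path shown above (`addUnschedulableWithoutQueueingHint`): when queueing hints are disabled, the only signal is whether a move request was observed at or after the pod's scheduling cycle. A minimal, self-contained sketch of that decision follows; `targetQueue` is an illustrative name, not an identifier from the vendored code.

```go
package main

import "fmt"

// targetQueue mirrors the pre-queueing-hint routing: if a cluster-event move
// request happened at or after this pod's scheduling cycle, the pod may have
// become schedulable again, so it goes to backoffQ; otherwise it waits in the
// unschedulable pod pool.
func targetQueue(moveRequestCycle, podSchedulingCycle int64) string {
	if moveRequestCycle >= podSchedulingCycle {
		return "backoffQ"
	}
	return "unschedulablePods"
}

func main() {
	fmt.Println(targetQueue(10, 7)) // move request seen after scheduling started
	fmt.Println(targetQueue(3, 7))  // no relevant move request
}
```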
-func (p *PriorityQueue) flushUnschedulablePodsLeftover() { +func (p *PriorityQueue) flushUnschedulablePodsLeftover(logger klog.Logger) { p.lock.Lock() defer p.lock.Unlock() @@ -581,7 +789,7 @@ func (p *PriorityQueue) flushUnschedulablePodsLeftover() { } if len(podsToMove) > 0 { - p.movePodsToActiveOrBackoffQueue(podsToMove, UnschedulableTimeout) + p.movePodsToActiveOrBackoffQueue(logger, podsToMove, UnschedulableTimeout, nil, nil) } } @@ -596,7 +804,8 @@ func (p *PriorityQueue) Pop() (*framework.QueuedPodInfo, error) { // When Close() is called, the p.closed is set and the condition is broadcast, // which causes this loop to continue and return from the Pop(). if p.closed { - return nil, fmt.Errorf(queueClosed) + klog.V(2).InfoS("Scheduling queue is closed") + return nil, nil } p.cond.Wait() } @@ -607,17 +816,86 @@ func (p *PriorityQueue) Pop() (*framework.QueuedPodInfo, error) { pInfo := obj.(*framework.QueuedPodInfo) pInfo.Attempts++ p.schedulingCycle++ + // In flight, no move request yet. + if p.isSchedulingQueueHintEnabled { + p.inFlightPods[pInfo.Pod.UID] = inFlightPod{ + previousEvent: p.receivedEvents.Back(), + } + } + + for plugin := range pInfo.UnschedulablePlugins { + metrics.UnschedulableReason(plugin, pInfo.Pod.Spec.SchedulerName).Dec() + } + return pInfo, nil } +// Done must be called for pod returned by Pop. This allows the queue to +// keep track of which pods are currently being processed. +func (p *PriorityQueue) Done(pod types.UID) { + p.lock.Lock() + defer p.lock.Unlock() + + p.done(pod) +} + +func (p *PriorityQueue) done(pod types.UID) { + if !p.isSchedulingQueueHintEnabled { + // do nothing if schedulingQueueHint is disabled. + // In that case, we don't have inFlightPods and receivedEvents. + return + } + inFlightPod, ok := p.inFlightPods[pod] + if !ok { + // This Pod is already done()ed. 
+ return + } + delete(p.inFlightPods, pod) + + // Remove events that are referred to only by this Pod, + // so that the receivedEvents list doesn't grow indefinitely. + + // Find the event from which we should start. + // case1. If the previousEvent is nil, there were no receivedEvents when this Pod's scheduling started. + // We start from the first event in the receivedEvents. + // case2. If the previousEvent is not nil, but its inFlightPodsNum is 0, + // the previousEvent has already been removed from the list. + // We start from the first event in the receivedEvents. + event := p.receivedEvents.Front() + if inFlightPod.previousEvent != nil && inFlightPod.previousEvent.Value.(*clusterEvent).inFlightPodsNum != 0 { + // case3. If the previousEvent is not nil, and its inFlightPodsNum is not 0, + // we can start from the event after the previousEvent. + event = inFlightPod.previousEvent.Next() + } + + for event != nil { + e := event.Value.(*clusterEvent) + // Decrement inFlightPodsNum on events that happened after the Pod was popped. + e.inFlightPodsNum-- + if e.inFlightPodsNum <= 0 { + // Remove the event from the list if no Pod refers to it anymore. + eventToDelete := event + // We need to take the next event before removal. + event = event.Next() + p.receivedEvents.Remove(eventToDelete) + continue + } + event = event.Next() + } +} + // isPodUpdated checks if the pod is updated in a way that it may have become -// schedulable. It drops status of the pod and compares it with old version. +// schedulable. It drops status of the pod and compares it with old version, +// except for pod.status.resourceClaimStatuses: changing that may have an +// effect on scheduling.
func isPodUpdated(oldPod, newPod *v1.Pod) bool { strip := func(pod *v1.Pod) *v1.Pod { p := pod.DeepCopy() p.ResourceVersion = "" p.Generation = 0 - p.Status = v1.PodStatus{} + p.Status = v1.PodStatus{ + ResourceClaimStatuses: pod.Status.ResourceClaimStatuses, + } p.ManagedFields = nil p.Finalizers = nil return p @@ -629,7 +907,7 @@ func isPodUpdated(oldPod, newPod *v1.Pod) bool { // the item from the unschedulable queue if pod is updated in a way that it may // become schedulable and adds the updated one to the active queue. // If pod is not present in any of the queues, it is added to the active queue. -func (p *PriorityQueue) Update(oldPod, newPod *v1.Pod) error { +func (p *PriorityQueue) Update(logger klog.Logger, oldPod, newPod *v1.Pod) error { p.lock.Lock() defer p.lock.Unlock() @@ -638,14 +916,14 @@ func (p *PriorityQueue) Update(oldPod, newPod *v1.Pod) error { // If the pod is already in the active queue, just update it there. if oldPodInfo, exists, _ := p.activeQ.Get(oldPodInfo); exists { pInfo := updatePod(oldPodInfo, newPod) - p.updateNominatedPodUnlocked(oldPod, pInfo.PodInfo) + p.updateNominatedPodUnlocked(logger, oldPod, pInfo.PodInfo) return p.activeQ.Update(pInfo) } // If the pod is in the backoff queue, update it there. if oldPodInfo, exists, _ := p.podBackoffQ.Get(oldPodInfo); exists { pInfo := updatePod(oldPodInfo, newPod) - p.updateNominatedPodUnlocked(oldPod, pInfo.PodInfo) + p.updateNominatedPodUnlocked(logger, oldPod, pInfo.PodInfo) return p.podBackoffQ.Update(pInfo) } } @@ -653,7 +931,7 @@ func (p *PriorityQueue) Update(oldPod, newPod *v1.Pod) error { // If the pod is in the unschedulable queue, updating it may make it schedulable. 
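`isPodUpdated` above works by zeroing scheduling-irrelevant fields on deep copies and comparing what remains (now preserving `Status.ResourceClaimStatuses`, since that field does affect scheduling). A toy version of the same strip-and-compare pattern, using a made-up miniature pod type rather than the real `v1.Pod`:

```go
package main

import (
	"fmt"
	"reflect"
)

// pod is a hypothetical miniature of v1.Pod with one ignored field per kind:
// metadata churn (ResourceVersion), status, and a field that matters (Labels).
type pod struct {
	ResourceVersion string
	Labels          map[string]string
	Status          string
}

// updatedInRelevantWay zeroes the fields that never make a pod schedulable and
// reports whether anything that matters differs between the two versions.
func updatedInRelevantWay(oldPod, newPod pod) bool {
	strip := func(p pod) pod {
		p.ResourceVersion = ""
		p.Status = ""
		return p
	}
	return !reflect.DeepEqual(strip(oldPod), strip(newPod))
}

func main() {
	a := pod{ResourceVersion: "1", Labels: map[string]string{"app": "web"}}
	b := pod{ResourceVersion: "2", Labels: map[string]string{"app": "web"}, Status: "Pending"}
	fmt.Println(updatedInRelevantWay(a, b)) // only ignored fields changed
	b.Labels = map[string]string{"app": "db"}
	fmt.Println(updatedInRelevantWay(a, b)) // a label changed
}
```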
if usPodInfo := p.unschedulablePods.get(newPod); usPodInfo != nil { pInfo := updatePod(usPodInfo, newPod) - p.updateNominatedPodUnlocked(oldPod, pInfo.PodInfo) + p.updateNominatedPodUnlocked(logger, oldPod, pInfo.PodInfo) if isPodUpdated(oldPod, newPod) { gated := usPodInfo.Gated if p.isPodBackingoff(usPodInfo) { @@ -661,13 +939,13 @@ func (p *PriorityQueue) Update(oldPod, newPod *v1.Pod) error { return err } p.unschedulablePods.delete(usPodInfo.Pod, gated) - klog.V(5).InfoS("Pod moved to an internal scheduling queue", "pod", klog.KObj(pInfo.Pod), "event", PodUpdate, "queue", backoffQName) + logger.V(5).Info("Pod moved to an internal scheduling queue", "pod", klog.KObj(pInfo.Pod), "event", PodUpdate, "queue", backoffQ) } else { - if added, err := p.addToActiveQ(pInfo); !added { + if added, err := p.addToActiveQ(logger, pInfo); !added { return err } p.unschedulablePods.delete(usPodInfo.Pod, gated) - klog.V(5).InfoS("Pod moved to an internal scheduling queue", "pod", klog.KObj(pInfo.Pod), "event", BackoffComplete, "queue", activeQName) + logger.V(5).Info("Pod moved to an internal scheduling queue", "pod", klog.KObj(pInfo.Pod), "event", BackoffComplete, "queue", activeQ) p.cond.Broadcast() } } else { @@ -679,11 +957,11 @@ func (p *PriorityQueue) Update(oldPod, newPod *v1.Pod) error { } // If pod is not in any of the queues, we put it in the active queue. 
pInfo := p.newQueuedPodInfo(newPod) - if added, err := p.addToActiveQ(pInfo); !added { + if added, err := p.addToActiveQ(logger, pInfo); !added { return err } - p.addNominatedPodUnlocked(pInfo.PodInfo, nil) - klog.V(5).InfoS("Pod moved to an internal scheduling queue", "pod", klog.KObj(pInfo.Pod), "event", PodUpdate, "queue", activeQName) + p.addNominatedPodUnlocked(logger, pInfo.PodInfo, nil) + logger.V(5).Info("Pod moved to an internal scheduling queue", "pod", klog.KObj(pInfo.Pod), "event", PodUpdate, "queue", activeQ) p.cond.Broadcast() return nil } @@ -707,9 +985,9 @@ func (p *PriorityQueue) Delete(pod *v1.Pod) error { // AssignedPodAdded is called when a bound pod is added. Creation of this pod // may make pending pods with matching affinity terms schedulable. -func (p *PriorityQueue) AssignedPodAdded(pod *v1.Pod) { +func (p *PriorityQueue) AssignedPodAdded(logger klog.Logger, pod *v1.Pod) { p.lock.Lock() - p.movePodsToActiveOrBackoffQueue(p.getUnschedulablePodsWithMatchingAffinityTerm(pod), AssignedPodAdd) + p.movePodsToActiveOrBackoffQueue(logger, p.getUnschedulablePodsWithMatchingAffinityTerm(logger, pod), AssignedPodAdd, nil, pod) p.lock.Unlock() } @@ -729,12 +1007,12 @@ func isPodResourcesResizedDown(pod *v1.Pod) bool { // AssignedPodUpdated is called when a bound pod is updated. Change of labels // may make pending pods with matching affinity terms schedulable. 
-func (p *PriorityQueue) AssignedPodUpdated(pod *v1.Pod) { +func (p *PriorityQueue) AssignedPodUpdated(logger klog.Logger, oldPod, newPod *v1.Pod) { p.lock.Lock() - if isPodResourcesResizedDown(pod) { - p.moveAllToActiveOrBackoffQueue(AssignedPodUpdate, nil) + if isPodResourcesResizedDown(newPod) { + p.moveAllToActiveOrBackoffQueue(logger, AssignedPodUpdate, oldPod, newPod, nil) } else { - p.movePodsToActiveOrBackoffQueue(p.getUnschedulablePodsWithMatchingAffinityTerm(pod), AssignedPodUpdate) + p.movePodsToActiveOrBackoffQueue(logger, p.getUnschedulablePodsWithMatchingAffinityTerm(logger, newPod), AssignedPodUpdate, oldPod, newPod) } p.lock.Unlock() } @@ -744,57 +1022,103 @@ func (p *PriorityQueue) AssignedPodUpdated(pod *v1.Pod) { // This function adds all pods and then signals the condition variable to ensure that // if Pop() is waiting for an item, it receives the signal after all the pods are in the // queue and the head is the highest priority pod. -func (p *PriorityQueue) moveAllToActiveOrBackoffQueue(event framework.ClusterEvent, preCheck PreEnqueueCheck) { +func (p *PriorityQueue) moveAllToActiveOrBackoffQueue(logger klog.Logger, event framework.ClusterEvent, oldObj, newObj interface{}, preCheck PreEnqueueCheck) { unschedulablePods := make([]*framework.QueuedPodInfo, 0, len(p.unschedulablePods.podInfoMap)) for _, pInfo := range p.unschedulablePods.podInfoMap { if preCheck == nil || preCheck(pInfo.Pod) { unschedulablePods = append(unschedulablePods, pInfo) } } - p.movePodsToActiveOrBackoffQueue(unschedulablePods, event) + p.movePodsToActiveOrBackoffQueue(logger, unschedulablePods, event, oldObj, newObj) } // MoveAllToActiveOrBackoffQueue moves all pods from unschedulablePods to activeQ or backoffQ. // This function adds all pods and then signals the condition variable to ensure that // if Pop() is waiting for an item, it receives the signal after all the pods are in the // queue and the head is the highest priority pod. 
-func (p *PriorityQueue) MoveAllToActiveOrBackoffQueue(event framework.ClusterEvent, preCheck PreEnqueueCheck) { +func (p *PriorityQueue) MoveAllToActiveOrBackoffQueue(logger klog.Logger, event framework.ClusterEvent, oldObj, newObj interface{}, preCheck PreEnqueueCheck) { p.lock.Lock() defer p.lock.Unlock() - p.moveAllToActiveOrBackoffQueue(event, preCheck) + p.moveAllToActiveOrBackoffQueue(logger, event, oldObj, newObj, preCheck) +} + +// requeuePodViaQueueingHint tries to requeue Pod to activeQ, backoffQ or unschedulable pod pool based on schedulingHint. +// It returns the name of the queue the Pod goes to. +// +// NOTE: this function assumes lock has been acquired in caller +func (p *PriorityQueue) requeuePodViaQueueingHint(logger klog.Logger, pInfo *framework.QueuedPodInfo, schedulingHint framework.QueueingHint, event string) string { + if schedulingHint == framework.QueueSkip { + p.unschedulablePods.addOrUpdate(pInfo) + metrics.SchedulerQueueIncomingPods.WithLabelValues("unschedulable", event).Inc() + return unschedulablePods + } + + pod := pInfo.Pod + if schedulingHint == framework.QueueAfterBackoff && p.isPodBackingoff(pInfo) { + if err := p.podBackoffQ.Add(pInfo); err != nil { + logger.Error(err, "Error adding pod to the backoff queue, queue this Pod to unschedulable pod pool", "pod", klog.KObj(pod)) + p.unschedulablePods.addOrUpdate(pInfo) + return unschedulablePods + } + + metrics.SchedulerQueueIncomingPods.WithLabelValues("backoff", event).Inc() + return backoffQ + } + + // We reach here if schedulingHint is QueueImmediately, or if it is QueueAfterBackoff but the pod is not backing off.
+ + added, err := p.addToActiveQ(logger, pInfo) + if err != nil { + logger.Error(err, "Error adding pod to the active queue, queue this Pod to unschedulable pod pool", "pod", klog.KObj(pod)) + } + if added { + metrics.SchedulerQueueIncomingPods.WithLabelValues("active", event).Inc() + return activeQ + } + if pInfo.Gated { + // In case the pod is gated, the Pod is pushed back to unschedulable Pods pool in addToActiveQ. + return unschedulablePods + } + + p.unschedulablePods.addOrUpdate(pInfo) + metrics.SchedulerQueueIncomingPods.WithLabelValues("unschedulable", ScheduleAttemptFailure).Inc() + return unschedulablePods } // NOTE: this function assumes lock has been acquired in caller -func (p *PriorityQueue) movePodsToActiveOrBackoffQueue(podInfoList []*framework.QueuedPodInfo, event framework.ClusterEvent) { +func (p *PriorityQueue) movePodsToActiveOrBackoffQueue(logger klog.Logger, podInfoList []*framework.QueuedPodInfo, event framework.ClusterEvent, oldObj, newObj interface{}) { activated := false for _, pInfo := range podInfoList { - // If the event doesn't help making the Pod schedulable, continue. - // Note: we don't run the check if pInfo.UnschedulablePlugins is nil, which denotes - // either there is some abnormal error, or scheduling the pod failed by plugins other than PreFilter, Filter and Permit. - // In that case, it's desired to move it anyways. - if len(pInfo.UnschedulablePlugins) != 0 && !p.podMatchesEvent(pInfo, event) { + schedulingHint := p.isPodWorthRequeuing(logger, pInfo, event, oldObj, newObj) + if schedulingHint == framework.QueueSkip { + // QueueingHintFn determined that this Pod isn't worth putting to activeQ or backoffQ by this event. 
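Stripped of locking, metrics, and error paths, `requeuePodViaQueueingHint` above reduces to a three-way routing decision. A hypothetical distillation follows; `routeForHint` and the local hint constants are illustrative stand-ins for `framework.QueueingHint`, not the vendored API.

```go
package main

import "fmt"

// hint is a stand-in for framework.QueueingHint.
type hint int

const (
	queueSkip hint = iota // event doesn't help; keep in unschedulable pool
	queueAfterBackoff
	queueImmediately
)

// routeForHint picks the internal queue: skip -> unschedulable pool; after
// backoff while still backing off -> backoffQ; everything else (immediately,
// or after-backoff with the backoff already expired) -> activeQ.
func routeForHint(h hint, backingOff bool) string {
	switch {
	case h == queueSkip:
		return "unschedulablePods"
	case h == queueAfterBackoff && backingOff:
		return "backoffQ"
	default:
		return "activeQ"
	}
}

func main() {
	fmt.Println(routeForHint(queueAfterBackoff, true))
	fmt.Println(routeForHint(queueAfterBackoff, false))
	fmt.Println(routeForHint(queueImmediately, true))
	fmt.Println(routeForHint(queueSkip, false))
}
```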
+ logger.V(5).Info("Event is not making pod schedulable", "pod", klog.KObj(pInfo.Pod), "event", event.Label) continue } - pod := pInfo.Pod - if p.isPodBackingoff(pInfo) { - if err := p.podBackoffQ.Add(pInfo); err != nil { - klog.ErrorS(err, "Error adding pod to the backoff queue", "pod", klog.KObj(pod)) - } else { - klog.V(5).InfoS("Pod moved to an internal scheduling queue", "pod", klog.KObj(pInfo.Pod), "event", event, "queue", backoffQName) - metrics.SchedulerQueueIncomingPods.WithLabelValues("backoff", event.Label).Inc() - p.unschedulablePods.delete(pod, pInfo.Gated) - } - } else { - gated := pInfo.Gated - if added, _ := p.addToActiveQ(pInfo); added { - klog.V(5).InfoS("Pod moved to an internal scheduling queue", "pod", klog.KObj(pInfo.Pod), "event", event, "queue", activeQName) - activated = true - metrics.SchedulerQueueIncomingPods.WithLabelValues("active", event.Label).Inc() - p.unschedulablePods.delete(pod, gated) - } + + p.unschedulablePods.delete(pInfo.Pod, pInfo.Gated) + queue := p.requeuePodViaQueueingHint(logger, pInfo, schedulingHint, event.Label) + logger.V(4).Info("Pod moved to an internal scheduling queue", "pod", klog.KObj(pInfo.Pod), "event", event.Label, "queue", queue, "hint", schedulingHint) + if queue == activeQ { + activated = true + } } + p.moveRequestCycle = p.schedulingCycle + + // (No need to check the feature gate: p.inFlightPods is always empty when the feature is disabled.) + if len(p.inFlightPods) != 0 { + // AddUnschedulableIfNotPresent might get called for in-flight Pods later, and in + // AddUnschedulableIfNotPresent we need to know whether events were + // observed while scheduling them. + p.receivedEvents.PushBack(&clusterEvent{ + event: event, + inFlightPodsNum: len(p.inFlightPods), + oldObj: oldObj, + newObj: newObj, + }) + } + if activated { p.cond.Broadcast() } @@ -803,8 +1127,8 @@ func (p *PriorityQueue) movePodsToActiveOrBackoffQueue(podInfoList []*framework.
// getUnschedulablePodsWithMatchingAffinityTerm returns unschedulable pods which have // any affinity term that matches "pod". // NOTE: this function assumes lock has been acquired in caller. -func (p *PriorityQueue) getUnschedulablePodsWithMatchingAffinityTerm(pod *v1.Pod) []*framework.QueuedPodInfo { - nsLabels := interpodaffinity.GetNamespaceLabelsSnapshot(pod.Namespace, p.nsLister) +func (p *PriorityQueue) getUnschedulablePodsWithMatchingAffinityTerm(logger klog.Logger, pod *v1.Pod) []*framework.QueuedPodInfo { + nsLabels := interpodaffinity.GetNamespaceLabelsSnapshot(logger, pod.Namespace, p.nsLister) var podsToMove []*framework.QueuedPodInfo for _, pInfo := range p.unschedulablePods.podInfoMap { @@ -864,9 +1188,9 @@ func (npm *nominator) deleteNominatedPodIfExistsUnlocked(pod *v1.Pod) { // This is called during the preemption process after a node is nominated to run // the pod. We update the structure before sending a request to update the pod // object to avoid races with the following scheduling cycles. 
-func (npm *nominator) AddNominatedPod(pi *framework.PodInfo, nominatingInfo *framework.NominatingInfo) { +func (npm *nominator) AddNominatedPod(logger klog.Logger, pi *framework.PodInfo, nominatingInfo *framework.NominatingInfo) { npm.lock.Lock() - npm.addNominatedPodUnlocked(pi, nominatingInfo) + npm.addNominatedPodUnlocked(logger, pi, nominatingInfo) npm.lock.Unlock() } @@ -900,8 +1224,8 @@ func (p *PriorityQueue) newQueuedPodInfo(pod *v1.Pod, plugins ...string) *framew return &framework.QueuedPodInfo{ PodInfo: podInfo, Timestamp: now, - InitialAttemptTimestamp: now, - UnschedulablePlugins: sets.NewString(plugins...), + InitialAttemptTimestamp: nil, + UnschedulablePlugins: sets.New(plugins...), } } @@ -1019,7 +1343,7 @@ type nominator struct { lock sync.RWMutex } -func (npm *nominator) addNominatedPodUnlocked(pi *framework.PodInfo, nominatingInfo *framework.NominatingInfo) { +func (npm *nominator) addNominatedPodUnlocked(logger klog.Logger, pi *framework.PodInfo, nominatingInfo *framework.NominatingInfo) { // Always delete the pod if it already exists, to ensure we never store more than // one instance of the pod. npm.delete(pi.Pod) @@ -1038,11 +1362,11 @@ func (npm *nominator) addNominatedPodUnlocked(pi *framework.PodInfo, nominatingI // If the pod was removed or if it was already scheduled, don't nominate it. 
updatedPod, err := npm.podLister.Pods(pi.Pod.Namespace).Get(pi.Pod.Name) if err != nil { - klog.V(4).InfoS("Pod doesn't exist in podLister, aborted adding it to the nominator", "pod", klog.KObj(pi.Pod)) + logger.V(4).Info("Pod doesn't exist in podLister, aborted adding it to the nominator", "pod", klog.KObj(pi.Pod)) return } if updatedPod.Spec.NodeName != "" { - klog.V(4).InfoS("Pod is already scheduled to a node, aborted adding it to the nominator", "pod", klog.KObj(pi.Pod), "node", updatedPod.Spec.NodeName) + logger.V(4).Info("Pod is already scheduled to a node, aborted adding it to the nominator", "pod", klog.KObj(pi.Pod), "node", updatedPod.Spec.NodeName) return } } @@ -1050,7 +1374,7 @@ func (npm *nominator) addNominatedPodUnlocked(pi *framework.PodInfo, nominatingI npm.nominatedPodToNode[pi.Pod.UID] = nodeName for _, npi := range npm.nominatedPods[nodeName] { if npi.Pod.UID == pi.Pod.UID { - klog.V(4).InfoS("Pod already exists in the nominator", "pod", klog.KObj(npi.Pod)) + logger.V(4).Info("Pod already exists in the nominator", "pod", klog.KObj(npi.Pod)) return } } @@ -1075,13 +1399,13 @@ func (npm *nominator) delete(p *v1.Pod) { } // UpdateNominatedPod updates the with . -func (npm *nominator) UpdateNominatedPod(oldPod *v1.Pod, newPodInfo *framework.PodInfo) { +func (npm *nominator) UpdateNominatedPod(logger klog.Logger, oldPod *v1.Pod, newPodInfo *framework.PodInfo) { npm.lock.Lock() defer npm.lock.Unlock() - npm.updateNominatedPodUnlocked(oldPod, newPodInfo) + npm.updateNominatedPodUnlocked(logger, oldPod, newPodInfo) } -func (npm *nominator) updateNominatedPodUnlocked(oldPod *v1.Pod, newPodInfo *framework.PodInfo) { +func (npm *nominator) updateNominatedPodUnlocked(logger klog.Logger, oldPod *v1.Pod, newPodInfo *framework.PodInfo) { // In some cases, an Update event with no "NominatedNode" present is received right // after a node("NominatedNode") is reserved for this pod in memory. 
// In this case, we need to keep reserving the NominatedNode when updating the pod pointer. @@ -1102,7 +1426,7 @@ func (npm *nominator) updateNominatedPodUnlocked(oldPod *v1.Pod, newPodInfo *fra // We update irrespective of the nominatedNodeName changed or not, to ensure // that pod pointer is updated. npm.delete(oldPod) - npm.addNominatedPodUnlocked(newPodInfo, nominatingInfo) + npm.addNominatedPodUnlocked(logger, newPodInfo, nominatingInfo) } // NewPodNominator creates a nominator as a backing of framework.PodNominator. @@ -1120,62 +1444,6 @@ func newPodNominator(podLister listersv1.PodLister) *nominator { } } -// MakeNextPodFunc returns a function to retrieve the next pod from a given -// scheduling queue -func MakeNextPodFunc(queue SchedulingQueue) func() *framework.QueuedPodInfo { - return func() *framework.QueuedPodInfo { - podInfo, err := queue.Pop() - if err == nil { - klog.V(4).InfoS("About to try and schedule pod", "pod", klog.KObj(podInfo.Pod)) - for plugin := range podInfo.UnschedulablePlugins { - metrics.UnschedulableReason(plugin, podInfo.Pod.Spec.SchedulerName).Dec() - } - return podInfo - } - klog.ErrorS(err, "Error while retrieving next pod from scheduling queue") - return nil - } -} - func podInfoKeyFunc(obj interface{}) (string, error) { return cache.MetaNamespaceKeyFunc(obj.(*framework.QueuedPodInfo).Pod) } - -// Checks if the Pod may become schedulable upon the event. -// This is achieved by looking up the global clusterEventMap registry. -func (p *PriorityQueue) podMatchesEvent(podInfo *framework.QueuedPodInfo, clusterEvent framework.ClusterEvent) bool { - if clusterEvent.IsWildCard() { - return true - } - - for evt, nameSet := range p.clusterEventMap { - // Firstly verify if the two ClusterEvents match: - // - either the registered event from plugin side is a WildCardEvent, - // - or the two events have identical Resource fields and *compatible* ActionType. - // Note the ActionTypes don't need to be *identical*. 
We check if the ANDed value - // is zero or not. In this way, it's easy to tell Update&Delete is not compatible, - // but Update&All is. - evtMatch := evt.IsWildCard() || - (evt.Resource == clusterEvent.Resource && evt.ActionType&clusterEvent.ActionType != 0) - - // Secondly verify the plugin name matches. - // Note that if it doesn't match, we shouldn't continue to search. - if evtMatch && intersect(nameSet, podInfo.UnschedulablePlugins) { - return true - } - } - - return false -} - -func intersect(x, y sets.String) bool { - if len(x) > len(y) { - x, y = y, x - } - for v := range x { - if y.Has(v) { - return true - } - } - return false -} diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/metrics/metrics.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/metrics/metrics.go index 5ee59a430f0c..c76e1a28d642 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/metrics/metrics.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/metrics/metrics.go @@ -106,16 +106,6 @@ var ( Help: "Number of pending pods, by the queue type. 'active' means number of pods in activeQ; 'backoff' means number of pods in backoffQ; 'unschedulable' means number of pods in unschedulablePods that the scheduler attempted to schedule and failed; 'gated' is the number of unschedulable pods that the scheduler never attempted to schedule because they are gated.", StabilityLevel: metrics.STABLE, }, []string{"queue"}) - // SchedulerGoroutines isn't called in some parts where goroutines start. - // Goroutines metric replaces SchedulerGoroutines metric. Goroutine metric tracks all goroutines. - SchedulerGoroutines = metrics.NewGaugeVec( - &metrics.GaugeOpts{ - Subsystem: SchedulerSubsystem, - DeprecatedVersion: "1.26.0", - Name: "scheduler_goroutines", - Help: "Number of running goroutines split by the work they do such as binding. 
This metric is replaced by the \"goroutines\" metric.", - StabilityLevel: metrics.ALPHA, - }, []string{"work"}) Goroutines = metrics.NewGaugeVec( &metrics.GaugeOpts{ Subsystem: SchedulerSubsystem, @@ -220,7 +210,6 @@ var ( FrameworkExtensionPointDuration, PluginExecutionDuration, SchedulerQueueIncomingPods, - SchedulerGoroutines, Goroutines, PermitWaitDuration, CacheSize, diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/metrics/profile_metrics.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/metrics/profile_metrics.go index cf7fa4249c84..d8e36e9df0a8 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/metrics/profile_metrics.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/metrics/profile_metrics.go @@ -19,27 +19,27 @@ package metrics // This file contains helpers for metrics that are associated to a profile. var ( - scheduledResult = "scheduled" - unschedulableResult = "unschedulable" - errorResult = "error" + ScheduledResult = "scheduled" + UnschedulableResult = "unschedulable" + ErrorResult = "error" ) // PodScheduled can records a successful scheduling attempt and the duration // since `start`. func PodScheduled(profile string, duration float64) { - observeScheduleAttemptAndLatency(scheduledResult, profile, duration) + observeScheduleAttemptAndLatency(ScheduledResult, profile, duration) } // PodUnschedulable can records a scheduling attempt for an unschedulable pod // and the duration since `start`. func PodUnschedulable(profile string, duration float64) { - observeScheduleAttemptAndLatency(unschedulableResult, profile, duration) + observeScheduleAttemptAndLatency(UnschedulableResult, profile, duration) } // PodScheduleError can records a scheduling attempt that had an error and the // duration since `start`. 
func PodScheduleError(profile string, duration float64) { - observeScheduleAttemptAndLatency(errorResult, profile, duration) + observeScheduleAttemptAndLatency(ErrorResult, profile, duration) } func observeScheduleAttemptAndLatency(result, profile string, duration float64) { diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/profile/profile.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/profile/profile.go index b023b03f628d..3846d9720df9 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/profile/profile.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/profile/profile.go @@ -18,6 +18,7 @@ limitations under the License. package profile import ( + "context" "errors" "fmt" @@ -34,24 +35,24 @@ import ( type RecorderFactory func(string) events.EventRecorder // newProfile builds a Profile for the given configuration. -func newProfile(cfg config.KubeSchedulerProfile, r frameworkruntime.Registry, recorderFact RecorderFactory, - stopCh <-chan struct{}, opts ...frameworkruntime.Option) (framework.Framework, error) { +func newProfile(ctx context.Context, cfg config.KubeSchedulerProfile, r frameworkruntime.Registry, recorderFact RecorderFactory, + opts ...frameworkruntime.Option) (framework.Framework, error) { recorder := recorderFact(cfg.SchedulerName) opts = append(opts, frameworkruntime.WithEventRecorder(recorder)) - return frameworkruntime.NewFramework(r, &cfg, stopCh, opts...) + return frameworkruntime.NewFramework(ctx, r, &cfg, opts...) } // Map holds frameworks indexed by scheduler name. type Map map[string]framework.Framework // NewMap builds the frameworks given by the configuration, indexed by name. 
-func NewMap(cfgs []config.KubeSchedulerProfile, r frameworkruntime.Registry, recorderFact RecorderFactory, - stopCh <-chan struct{}, opts ...frameworkruntime.Option) (Map, error) { +func NewMap(ctx context.Context, cfgs []config.KubeSchedulerProfile, r frameworkruntime.Registry, recorderFact RecorderFactory, + opts ...frameworkruntime.Option) (Map, error) { m := make(Map) v := cfgValidator{m: m} for _, cfg := range cfgs { - p, err := newProfile(cfg, r, recorderFact, stopCh, opts...) + p, err := newProfile(ctx, cfg, r, recorderFact, opts...) if err != nil { return nil, fmt.Errorf("creating profile for scheduler name %s: %v", cfg.SchedulerName, err) } diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/schedule_one.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/schedule_one.go index 90e2d30e9c99..525e1af86320 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/schedule_one.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/schedule_one.go @@ -17,7 +17,9 @@ limitations under the License. package scheduler import ( + "container/heap" "context" + "errors" "fmt" "math/rand" "strconv" @@ -56,28 +58,39 @@ const ( // to ensure that a certain minimum of nodes are checked for feasibility. // This in turn helps ensure a minimum level of spreading. minFeasibleNodesPercentageToFind = 5 + // numberOfHighestScoredNodesToReport is the number of node scores + // to be included in ScheduleResult. + numberOfHighestScoredNodesToReport = 3 ) // scheduleOne does the entire scheduling workflow for a single pod. It is serialized on the scheduling algorithm's host fitting. 
func (sched *Scheduler) scheduleOne(ctx context.Context) { - podInfo := sched.NextPod() + logger := klog.FromContext(ctx) + podInfo, err := sched.NextPod() + if err != nil { + logger.Error(err, "Error while retrieving next pod from scheduling queue") + return + } // pod could be nil when schedulerQueue is closed if podInfo == nil || podInfo.Pod == nil { return } + pod := podInfo.Pod + logger.V(4).Info("About to try and schedule pod", "pod", klog.KObj(pod)) + fwk, err := sched.frameworkForPod(pod) if err != nil { // This shouldn't happen, because we only accept for scheduling the pods // which specify a scheduler name that matches one of the profiles. - klog.ErrorS(err, "Error occurred") + logger.Error(err, "Error occurred") return } - if sched.skipPodSchedule(fwk, pod) { + if sched.skipPodSchedule(ctx, fwk, pod) { return } - klog.V(3).InfoS("Attempting to schedule pod", "pod", klog.KObj(pod)) + logger.V(3).Info("Attempting to schedule pod", "pod", klog.KObj(pod)) // Synchronously attempt to find a fit for the pod. start := time.Now() @@ -102,15 +115,17 @@ func (sched *Scheduler) scheduleOne(ctx context.Context) { bindingCycleCtx, cancel := context.WithCancel(ctx) defer cancel() - metrics.SchedulerGoroutines.WithLabelValues(metrics.Binding).Inc() - defer metrics.SchedulerGoroutines.WithLabelValues(metrics.Binding).Dec() metrics.Goroutines.WithLabelValues(metrics.Binding).Inc() defer metrics.Goroutines.WithLabelValues(metrics.Binding).Dec() status := sched.bindingCycle(bindingCycleCtx, state, fwk, scheduleResult, assumedPodInfo, start, podsToActivate) if !status.IsSuccess() { sched.handleBindingCycleError(bindingCycleCtx, state, fwk, assumedPodInfo, start, scheduleResult, status) + return } + // Usually, DonePod is called inside the scheduling queue, + // but in this case, we need to call it here because this Pod won't go back to the scheduling queue. 
+ sched.SchedulingQueue.Done(assumedPodInfo.Pod.UID) }() } @@ -125,6 +140,7 @@ func (sched *Scheduler) schedulingCycle( start time.Time, podsToActivate *framework.PodsToActivate, ) (ScheduleResult, *framework.QueuedPodInfo, *framework.Status) { + logger := klog.FromContext(ctx) pod := podInfo.Pod scheduleResult, err := sched.SchedulePod(ctx, fwk, state, pod) if err != nil { @@ -135,7 +151,7 @@ func (sched *Scheduler) schedulingCycle( fitError, ok := err.(*framework.FitError) if !ok { - klog.ErrorS(err, "Error selecting node for pod", "pod", klog.KObj(pod)) + logger.Error(err, "Error selecting node for pod", "pod", klog.KObj(pod)) return ScheduleResult{nominatingInfo: clearNominatedNode}, podInfo, framework.AsStatus(err) } @@ -145,7 +161,7 @@ func (sched *Scheduler) schedulingCycle( // into the resources that were preempted, but this is harmless. if !fwk.HasPostFilterPlugins() { - klog.V(3).InfoS("No PostFilter plugins are registered, so no preemption will be performed") + logger.V(3).Info("No PostFilter plugins are registered, so no preemption will be performed") return ScheduleResult{}, podInfo, framework.NewStatus(framework.Unschedulable).WithError(err) } @@ -154,9 +170,9 @@ func (sched *Scheduler) schedulingCycle( msg := status.Message() fitError.Diagnosis.PostFilterMsg = msg if status.Code() == framework.Error { - klog.ErrorS(nil, "Status after running PostFilter plugins for pod", "pod", klog.KObj(pod), "status", msg) + logger.Error(nil, "Status after running PostFilter plugins for pod", "pod", klog.KObj(pod), "status", msg) } else { - klog.V(5).InfoS("Status after running PostFilter plugins for pod", "pod", klog.KObj(pod), "status", msg) + logger.V(5).Info("Status after running PostFilter plugins for pod", "pod", klog.KObj(pod), "status", msg) } var nominatingInfo *framework.NominatingInfo @@ -172,29 +188,36 @@ func (sched *Scheduler) schedulingCycle( assumedPodInfo := podInfo.DeepCopy() assumedPod := assumedPodInfo.Pod // assume modifies `assumedPod` by 
setting NodeName=scheduleResult.SuggestedHost - err = sched.assume(assumedPod, scheduleResult.SuggestedHost) + err = sched.assume(logger, assumedPod, scheduleResult.SuggestedHost) if err != nil { // This is most probably result of a BUG in retrying logic. // We report an error here so that pod scheduling can be retried. // This relies on the fact that Error will check if the pod has been bound // to a node and if so will not add it back to the unscheduled pods queue // (otherwise this would cause an infinite loop). - return ScheduleResult{nominatingInfo: clearNominatedNode}, - assumedPodInfo, - framework.AsStatus(err) + return ScheduleResult{nominatingInfo: clearNominatedNode}, assumedPodInfo, framework.AsStatus(err) } // Run the Reserve method of reserve plugins. if sts := fwk.RunReservePluginsReserve(ctx, state, assumedPod, scheduleResult.SuggestedHost); !sts.IsSuccess() { // trigger un-reserve to clean up state associated with the reserved Pod fwk.RunReservePluginsUnreserve(ctx, state, assumedPod, scheduleResult.SuggestedHost) - if forgetErr := sched.Cache.ForgetPod(assumedPod); forgetErr != nil { - klog.ErrorS(forgetErr, "Scheduler cache ForgetPod failed") + if forgetErr := sched.Cache.ForgetPod(logger, assumedPod); forgetErr != nil { + logger.Error(forgetErr, "Scheduler cache ForgetPod failed") } - return ScheduleResult{nominatingInfo: clearNominatedNode}, - assumedPodInfo, - sts + if sts.IsUnschedulable() { + fitErr := &framework.FitError{ + NumAllNodes: 1, + Pod: pod, + Diagnosis: framework.Diagnosis{ + NodeToStatusMap: framework.NodeToStatusMap{scheduleResult.SuggestedHost: sts}, + UnschedulablePlugins: sets.New(sts.FailedPlugin()), + }, + } + return ScheduleResult{nominatingInfo: clearNominatedNode}, assumedPodInfo, framework.NewStatus(sts.Code()).WithError(fitErr) + } + return ScheduleResult{nominatingInfo: clearNominatedNode}, assumedPodInfo, sts } // Run "permit" plugins. 
@@ -202,18 +225,28 @@ func (sched *Scheduler) schedulingCycle( if !runPermitStatus.IsWait() && !runPermitStatus.IsSuccess() { // trigger un-reserve to clean up state associated with the reserved Pod fwk.RunReservePluginsUnreserve(ctx, state, assumedPod, scheduleResult.SuggestedHost) - if forgetErr := sched.Cache.ForgetPod(assumedPod); forgetErr != nil { - klog.ErrorS(forgetErr, "Scheduler cache ForgetPod failed") + if forgetErr := sched.Cache.ForgetPod(logger, assumedPod); forgetErr != nil { + logger.Error(forgetErr, "Scheduler cache ForgetPod failed") + } + + if runPermitStatus.IsUnschedulable() { + fitErr := &framework.FitError{ + NumAllNodes: 1, + Pod: pod, + Diagnosis: framework.Diagnosis{ + NodeToStatusMap: framework.NodeToStatusMap{scheduleResult.SuggestedHost: runPermitStatus}, + UnschedulablePlugins: sets.New(runPermitStatus.FailedPlugin()), + }, + } + return ScheduleResult{nominatingInfo: clearNominatedNode}, assumedPodInfo, framework.NewStatus(runPermitStatus.Code()).WithError(fitErr) } - return ScheduleResult{nominatingInfo: clearNominatedNode}, - assumedPodInfo, - runPermitStatus + return ScheduleResult{nominatingInfo: clearNominatedNode}, assumedPodInfo, runPermitStatus } // At the end of a successful scheduling cycle, pop and move up Pods if needed. if len(podsToActivate.Map) != 0 { - sched.SchedulingQueue.Activate(podsToActivate.Map) + sched.SchedulingQueue.Activate(logger, podsToActivate.Map) // Clear the entries after activation. podsToActivate.Map = make(map[string]*v1.Pod) } @@ -230,6 +263,7 @@ func (sched *Scheduler) bindingCycle( assumedPodInfo *framework.QueuedPodInfo, start time.Time, podsToActivate *framework.PodsToActivate) *framework.Status { + logger := klog.FromContext(ctx) assumedPod := assumedPodInfo.Pod @@ -249,17 +283,18 @@ func (sched *Scheduler) bindingCycle( } // Calculating nodeResourceString can be heavy. Avoid it if klog verbosity is below 2. 
- klog.V(2).InfoS("Successfully bound pod to node", "pod", klog.KObj(assumedPod), "node", scheduleResult.SuggestedHost, "evaluatedNodes", scheduleResult.EvaluatedNodes, "feasibleNodes", scheduleResult.FeasibleNodes) + logger.V(2).Info("Successfully bound pod to node", "pod", klog.KObj(assumedPod), "node", scheduleResult.SuggestedHost, "evaluatedNodes", scheduleResult.EvaluatedNodes, "feasibleNodes", scheduleResult.FeasibleNodes) metrics.PodScheduled(fwk.ProfileName(), metrics.SinceInSeconds(start)) metrics.PodSchedulingAttempts.Observe(float64(assumedPodInfo.Attempts)) - metrics.PodSchedulingDuration.WithLabelValues(getAttemptsLabel(assumedPodInfo)).Observe(metrics.SinceInSeconds(assumedPodInfo.InitialAttemptTimestamp)) - + if assumedPodInfo.InitialAttemptTimestamp != nil { + metrics.PodSchedulingDuration.WithLabelValues(getAttemptsLabel(assumedPodInfo)).Observe(metrics.SinceInSeconds(*assumedPodInfo.InitialAttemptTimestamp)) + } // Run "postbind" plugins. fwk.RunPostBindPlugins(ctx, state, assumedPod, scheduleResult.SuggestedHost) // At the end of a successful binding cycle, move up Pods if needed. if len(podsToActivate.Map) != 0 { - sched.SchedulingQueue.Activate(podsToActivate.Map) + sched.SchedulingQueue.Activate(logger, podsToActivate.Map) // Unlike the logic in schedulingCycle(), we don't bother deleting the entries // as `podsToActivate.Map` is no longer consumed. 
} @@ -275,12 +310,13 @@ func (sched *Scheduler) handleBindingCycleError( start time.Time, scheduleResult ScheduleResult, status *framework.Status) { + logger := klog.FromContext(ctx) assumedPod := podInfo.Pod // trigger un-reserve plugins to clean up state associated with the reserved Pod fwk.RunReservePluginsUnreserve(ctx, state, assumedPod, scheduleResult.SuggestedHost) - if forgetErr := sched.Cache.ForgetPod(assumedPod); forgetErr != nil { - klog.ErrorS(forgetErr, "scheduler cache ForgetPod failed") + if forgetErr := sched.Cache.ForgetPod(logger, assumedPod); forgetErr != nil { + logger.Error(forgetErr, "scheduler cache ForgetPod failed") } else { // "Forget"ing an assumed Pod in binding cycle should be treated as a PodDelete event, // as the assumed Pod had occupied a certain amount of resources in scheduler cache. @@ -289,11 +325,11 @@ func (sched *Scheduler) handleBindingCycleError( // It's intentional to "defer" this operation; otherwise MoveAllToActiveOrBackoffQueue() would // update `q.moveRequest` and thus move the assumed pod to backoffQ anyways. if status.IsUnschedulable() { - defer sched.SchedulingQueue.MoveAllToActiveOrBackoffQueue(internalqueue.AssignedPodDelete, func(pod *v1.Pod) bool { + defer sched.SchedulingQueue.MoveAllToActiveOrBackoffQueue(logger, internalqueue.AssignedPodDelete, assumedPod, nil, func(pod *v1.Pod) bool { return assumedPod.UID != pod.UID }) } else { - sched.SchedulingQueue.MoveAllToActiveOrBackoffQueue(internalqueue.AssignedPodDelete, nil) + sched.SchedulingQueue.MoveAllToActiveOrBackoffQueue(logger, internalqueue.AssignedPodDelete, assumedPod, nil, nil) } } @@ -309,11 +345,11 @@ func (sched *Scheduler) frameworkForPod(pod *v1.Pod) (framework.Framework, error } // skipPodSchedule returns true if we could skip scheduling the pod for specified cases. 
-func (sched *Scheduler) skipPodSchedule(fwk framework.Framework, pod *v1.Pod) bool { +func (sched *Scheduler) skipPodSchedule(ctx context.Context, fwk framework.Framework, pod *v1.Pod) bool { // Case 1: pod is being deleted. if pod.DeletionTimestamp != nil { fwk.EventRecorder().Eventf(pod, nil, v1.EventTypeWarning, "FailedScheduling", "Scheduling", "skip schedule deleting pod: %v/%v", pod.Namespace, pod.Name) - klog.V(3).InfoS("Skip schedule deleting pod", "pod", klog.KObj(pod)) + klog.FromContext(ctx).V(3).Info("Skip schedule deleting pod", "pod", klog.KObj(pod)) return true } @@ -322,6 +358,7 @@ func (sched *Scheduler) skipPodSchedule(fwk framework.Framework, pod *v1.Pod) bo // during its previous scheduling cycle but before getting assumed. isAssumed, err := sched.Cache.IsAssumedPod(pod) if err != nil { + // TODO(91633): pass ctx into a revised HandleError utilruntime.HandleError(fmt.Errorf("failed to check whether pod %s/%s is assumed: %v", pod.Namespace, pod.Name, err)) return false } @@ -334,8 +371,7 @@ func (sched *Scheduler) skipPodSchedule(fwk framework.Framework, pod *v1.Pod) bo func (sched *Scheduler) schedulePod(ctx context.Context, fwk framework.Framework, state *framework.CycleState, pod *v1.Pod) (result ScheduleResult, err error) { trace := utiltrace.New("Scheduling", utiltrace.Field{Key: "namespace", Value: pod.Namespace}, utiltrace.Field{Key: "name", Value: pod.Name}) defer trace.LogIfLong(100 * time.Millisecond) - - if err := sched.Cache.UpdateSnapshot(sched.nodeInfoSnapshot); err != nil { + if err := sched.Cache.UpdateSnapshot(klog.FromContext(ctx), sched.nodeInfoSnapshot); err != nil { return result, err } trace.Step("Snapshotting scheduler cache and node infos done") @@ -372,7 +408,7 @@ func (sched *Scheduler) schedulePod(ctx context.Context, fwk framework.Framework return result, err } - host, err := selectHost(priorityList) + host, _, err := selectHost(priorityList, numberOfHighestScoredNodesToReport) trace.Step("Prioritizing done") return 
ScheduleResult{ @@ -385,9 +421,10 @@ func (sched *Scheduler) schedulePod(ctx context.Context, fwk framework.Framework // Filters the nodes to find the ones that fit the pod based on the framework // filter plugins and filter extenders. func (sched *Scheduler) findNodesThatFitPod(ctx context.Context, fwk framework.Framework, state *framework.CycleState, pod *v1.Pod) ([]*v1.Node, framework.Diagnosis, error) { + logger := klog.FromContext(ctx) diagnosis := framework.Diagnosis{ NodeToStatusMap: make(framework.NodeToStatusMap), - UnschedulablePlugins: sets.NewString(), + UnschedulablePlugins: sets.New[string](), } allNodes, err := sched.nodeInfoSnapshot.NodeInfos().List() @@ -403,7 +440,7 @@ func (sched *Scheduler) findNodesThatFitPod(ctx context.Context, fwk framework.F // Record the messages from PreFilter in Diagnosis.PreFilterMsg. msg := s.Message() diagnosis.PreFilterMsg = msg - klog.V(5).InfoS("Status after running PreFilter plugins for pod", "pod", klog.KObj(pod), "status", msg) + logger.V(5).Info("Status after running PreFilter plugins for pod", "pod", klog.KObj(pod), "status", msg) // Status satisfying IsUnschedulable() gets injected into diagnosis.UnschedulablePlugins. if s.FailedPlugin() != "" { diagnosis.UnschedulablePlugins.Insert(s.FailedPlugin()) @@ -416,7 +453,7 @@ func (sched *Scheduler) findNodesThatFitPod(ctx context.Context, fwk framework.F if len(pod.Status.NominatedNodeName) > 0 { feasibleNodes, err := sched.evaluateNominatedNode(ctx, pod, fwk, state, diagnosis) if err != nil { - klog.ErrorS(err, "Evaluation failed on nominated node", "pod", klog.KObj(pod), "node", pod.Status.NominatedNodeName) + logger.Error(err, "Evaluation failed on nominated node", "pod", klog.KObj(pod), "node", pod.Status.NominatedNodeName) } // Nominated node passes all the filters, scheduler is good to assign this node to the pod. 
if len(feasibleNodes) != 0 { @@ -444,7 +481,7 @@ func (sched *Scheduler) findNodesThatFitPod(ctx context.Context, fwk framework.F return nil, diagnosis, err } - feasibleNodes, err = findNodesThatPassExtenders(sched.Extenders, pod, feasibleNodes, diagnosis.NodeToStatusMap) + feasibleNodes, err = findNodesThatPassExtenders(ctx, sched.Extenders, pod, feasibleNodes, diagnosis.NodeToStatusMap) if err != nil { return nil, diagnosis, err } @@ -463,7 +500,7 @@ func (sched *Scheduler) evaluateNominatedNode(ctx context.Context, pod *v1.Pod, return nil, err } - feasibleNodes, err = findNodesThatPassExtenders(sched.Extenders, pod, feasibleNodes, diagnosis.NodeToStatusMap) + feasibleNodes, err = findNodesThatPassExtenders(ctx, sched.Extenders, pod, feasibleNodes, diagnosis.NodeToStatusMap) if err != nil { return nil, err } @@ -573,7 +610,8 @@ func (sched *Scheduler) numFeasibleNodesToFind(percentageOfNodesToScore *int32, return numNodes } -func findNodesThatPassExtenders(extenders []framework.Extender, pod *v1.Pod, feasibleNodes []*v1.Node, statuses framework.NodeToStatusMap) ([]*v1.Node, error) { +func findNodesThatPassExtenders(ctx context.Context, extenders []framework.Extender, pod *v1.Pod, feasibleNodes []*v1.Node, statuses framework.NodeToStatusMap) ([]*v1.Node, error) { + logger := klog.FromContext(ctx) // Extenders are called sequentially. // Nodes in original feasibleNodes can be excluded in one extender, and pass on to the next // extender in a decreasing manner. 
@@ -593,7 +631,7 @@ func findNodesThatPassExtenders(extenders []framework.Extender, pod *v1.Pod, fea feasibleList, failedMap, failedAndUnresolvableMap, err := extender.Filter(pod, feasibleNodes) if err != nil { if extender.IsIgnorable() { - klog.InfoS("Skipping extender as it returned error and has ignorable flag set", "extender", extender, "err", err) + logger.Info("Skipping extender as it returned error and has ignorable flag set", "extender", extender, "err", err) continue } return nil, err @@ -639,6 +677,7 @@ func prioritizeNodes( pod *v1.Pod, nodes []*v1.Node, ) ([]framework.NodePluginScores, error) { + logger := klog.FromContext(ctx) // If no priority configs are provided, then all nodes will have a score of one. // This is required to generate the priority list in the required format if len(extenders) == 0 && !fwk.HasScorePlugins() { @@ -665,11 +704,11 @@ func prioritizeNodes( } // Additional details logged at level 10 if enabled. - klogV := klog.V(10) - if klogV.Enabled() { + loggerVTen := logger.V(10) + if loggerVTen.Enabled() { for _, nodeScore := range nodesScores { for _, pluginScore := range nodeScore.Scores { - klogV.InfoS("Plugin scored node for pod", "pod", klog.KObj(pod), "plugin", pluginScore.Name, "node", nodeScore.Name, "score", pluginScore.Score) + loggerVTen.Info("Plugin scored node for pod", "pod", klog.KObj(pod), "plugin", pluginScore.Name, "node", nodeScore.Name, "score", pluginScore.Score) } } } @@ -686,17 +725,15 @@ func prioritizeNodes( } wg.Add(1) go func(extIndex int) { - metrics.SchedulerGoroutines.WithLabelValues(metrics.PrioritizingExtender).Inc() metrics.Goroutines.WithLabelValues(metrics.PrioritizingExtender).Inc() defer func() { - metrics.SchedulerGoroutines.WithLabelValues(metrics.PrioritizingExtender).Dec() metrics.Goroutines.WithLabelValues(metrics.PrioritizingExtender).Dec() wg.Done() }() prioritizedList, weight, err := extenders[extIndex].Prioritize(pod, nodes) if err != nil { // Prioritization errors from extender can be 
ignored, let k8s/other extenders determine the priorities - klog.V(5).InfoS("Failed to run extender's priority function. No score given by this extender.", "error", err, "pod", klog.KObj(pod), "extender", extenders[extIndex].Name()) + logger.V(5).Info("Failed to run extender's priority function. No score given by this extender.", "error", err, "pod", klog.KObj(pod), "extender", extenders[extIndex].Name()) return } mu.Lock() @@ -704,8 +741,8 @@ func prioritizeNodes( for i := range *prioritizedList { nodename := (*prioritizedList)[i].Host score := (*prioritizedList)[i].Score - if klogV.Enabled() { - klogV.InfoS("Extender scored node for pod", "pod", klog.KObj(pod), "extender", extenders[extIndex].Name(), "node", nodename, "score", score) + if loggerVTen.Enabled() { + loggerVTen.Info("Extender scored node for pod", "pod", klog.KObj(pod), "extender", extenders[extIndex].Name(), "node", nodename, "score", score) } // MaxExtenderPriority may diverge from the max priority used in the scheduler and defined by MaxNodeScore, @@ -736,50 +773,102 @@ func prioritizeNodes( } } - if klogV.Enabled() { + if loggerVTen.Enabled() { for i := range nodesScores { - klogV.InfoS("Calculated node's final score for pod", "pod", klog.KObj(pod), "node", nodesScores[i].Name, "score", nodesScores[i].TotalScore) + loggerVTen.Info("Calculated node's final score for pod", "pod", klog.KObj(pod), "node", nodesScores[i].Name, "score", nodesScores[i].TotalScore) } } return nodesScores, nil } +var errEmptyPriorityList = errors.New("empty priorityList") + // selectHost takes a prioritized list of nodes and then picks one // in a reservoir sampling manner from the nodes that had the highest score. -func selectHost(nodeScores []framework.NodePluginScores) (string, error) { - if len(nodeScores) == 0 { - return "", fmt.Errorf("empty priorityList") +// It also returns the top {count} Nodes, +// and the top of the list will be always the selected host. 
+func selectHost(nodeScoreList []framework.NodePluginScores, count int) (string, []framework.NodePluginScores, error) { + if len(nodeScoreList) == 0 { + return "", nil, errEmptyPriorityList } - maxScore := nodeScores[0].TotalScore - selected := nodeScores[0].Name + + var h nodeScoreHeap = nodeScoreList + heap.Init(&h) cntOfMaxScore := 1 - for _, ns := range nodeScores[1:] { - if ns.TotalScore > maxScore { - maxScore = ns.TotalScore - selected = ns.Name - cntOfMaxScore = 1 - } else if ns.TotalScore == maxScore { + selectedIndex := 0 + // The top of the heap is the NodeScoreResult with the highest score. + sortedNodeScoreList := make([]framework.NodePluginScores, 0, count) + sortedNodeScoreList = append(sortedNodeScoreList, heap.Pop(&h).(framework.NodePluginScores)) + + // This for-loop will continue until all Nodes with the highest scores get checked for a reservoir sampling, + // and sortedNodeScoreList gets (count - 1) elements. + for ns := heap.Pop(&h).(framework.NodePluginScores); ; ns = heap.Pop(&h).(framework.NodePluginScores) { + if ns.TotalScore != sortedNodeScoreList[0].TotalScore && len(sortedNodeScoreList) == count { + break + } + + if ns.TotalScore == sortedNodeScoreList[0].TotalScore { cntOfMaxScore++ if rand.Intn(cntOfMaxScore) == 0 { // Replace the candidate with probability of 1/cntOfMaxScore - selected = ns.Name + selectedIndex = cntOfMaxScore - 1 } } + + sortedNodeScoreList = append(sortedNodeScoreList, ns) + + if h.Len() == 0 { + break + } + } + + if selectedIndex != 0 { + // replace the first one with selected one + previous := sortedNodeScoreList[0] + sortedNodeScoreList[0] = sortedNodeScoreList[selectedIndex] + sortedNodeScoreList[selectedIndex] = previous } - return selected, nil + + if len(sortedNodeScoreList) > count { + sortedNodeScoreList = sortedNodeScoreList[:count] + } + + return sortedNodeScoreList[0].Name, sortedNodeScoreList, nil +} + +// nodeScoreHeap is a heap of framework.NodePluginScores. 
+type nodeScoreHeap []framework.NodePluginScores + +// nodeScoreHeap implements heap.Interface. +var _ heap.Interface = &nodeScoreHeap{} + +func (h nodeScoreHeap) Len() int { return len(h) } +func (h nodeScoreHeap) Less(i, j int) bool { return h[i].TotalScore > h[j].TotalScore } +func (h nodeScoreHeap) Swap(i, j int) { h[i], h[j] = h[j], h[i] } + +func (h *nodeScoreHeap) Push(x interface{}) { + *h = append(*h, x.(framework.NodePluginScores)) +} + +func (h *nodeScoreHeap) Pop() interface{} { + old := *h + n := len(old) + x := old[n-1] + *h = old[0 : n-1] + return x } // assume signals to the cache that a pod is already in the cache, so that binding can be asynchronous. // assume modifies `assumed`. -func (sched *Scheduler) assume(assumed *v1.Pod, host string) error { +func (sched *Scheduler) assume(logger klog.Logger, assumed *v1.Pod, host string) error { // Optimistically assume that the binding will succeed and send it to apiserver // in the background. // If the binding fails, scheduler will release resources allocated to assumed pod // immediately. assumed.Spec.NodeName = host - if err := sched.Cache.AssumePod(assumed); err != nil { - klog.ErrorS(err, "Scheduler cache AssumePod failed") + if err := sched.Cache.AssumePod(logger, assumed); err != nil { + logger.Error(err, "Scheduler cache AssumePod failed") return err } // if "assumed" is a nominated pod, we should remove it from internal cache @@ -794,8 +883,9 @@ func (sched *Scheduler) assume(assumed *v1.Pod, host string) error { // The precedence for binding is: (1) extenders and (2) framework plugins. // We expect this to run asynchronously, so we handle binding metrics internally. 
func (sched *Scheduler) bind(ctx context.Context, fwk framework.Framework, assumed *v1.Pod, targetNode string, state *framework.CycleState) (status *framework.Status) { + logger := klog.FromContext(ctx) defer func() { - sched.finishBinding(fwk, assumed, targetNode, status) + sched.finishBinding(logger, fwk, assumed, targetNode, status) }() bound, err := sched.extendersBinding(assumed, targetNode) @@ -819,12 +909,12 @@ func (sched *Scheduler) extendersBinding(pod *v1.Pod, node string) (bool, error) return false, nil } -func (sched *Scheduler) finishBinding(fwk framework.Framework, assumed *v1.Pod, targetNode string, status *framework.Status) { - if finErr := sched.Cache.FinishBinding(assumed); finErr != nil { - klog.ErrorS(finErr, "Scheduler cache FinishBinding failed") +func (sched *Scheduler) finishBinding(logger klog.Logger, fwk framework.Framework, assumed *v1.Pod, targetNode string, status *framework.Status) { + if finErr := sched.Cache.FinishBinding(logger, assumed); finErr != nil { + logger.Error(finErr, "Scheduler cache FinishBinding failed") } if !status.IsSuccess() { - klog.V(1).InfoS("Failed to bind pod", "pod", klog.KObj(assumed)) + logger.V(1).Info("Failed to bind pod", "pod", klog.KObj(assumed)) return } @@ -843,6 +933,17 @@ func getAttemptsLabel(p *framework.QueuedPodInfo) string { // handleSchedulingFailure records an event for the pod that indicates the // pod has failed to schedule. Also, update the pod condition and nominated node name if set. func (sched *Scheduler) handleSchedulingFailure(ctx context.Context, fwk framework.Framework, podInfo *framework.QueuedPodInfo, status *framework.Status, nominatingInfo *framework.NominatingInfo, start time.Time) { + calledDone := false + defer func() { + if !calledDone { + // Basically, AddUnschedulableIfNotPresent calls DonePod internally. + // But, AddUnschedulableIfNotPresent isn't called in some corner cases. + // Here, we call DonePod explicitly to avoid leaking the pod. 
+ sched.SchedulingQueue.Done(podInfo.Pod.UID) + } + }() + + logger := klog.FromContext(ctx) reason := v1.PodReasonSchedulerError if status.IsUnschedulable() { reason = v1.PodReasonUnschedulable @@ -860,13 +961,12 @@ func (sched *Scheduler) handleSchedulingFailure(ctx context.Context, fwk framewo errMsg := status.Message() if err == ErrNoNodesAvailable { - klog.V(2).InfoS("Unable to schedule pod; no nodes are registered to the cluster; waiting", "pod", klog.KObj(pod), "err", err) - } else if fitError, ok := err.(*framework.FitError); ok { - // Inject UnschedulablePlugins to PodInfo, which will be used later for moving Pods between queues efficiently. + logger.V(2).Info("Unable to schedule pod; no nodes are registered to the cluster; waiting", "pod", klog.KObj(pod)) + } else if fitError, ok := err.(*framework.FitError); ok { // Inject UnschedulablePlugins to PodInfo, which will be used later for moving Pods between queues efficiently. podInfo.UnschedulablePlugins = fitError.Diagnosis.UnschedulablePlugins - klog.V(2).InfoS("Unable to schedule pod; no fit; waiting", "pod", klog.KObj(pod), "err", errMsg) + logger.V(2).Info("Unable to schedule pod; no fit; waiting", "pod", klog.KObj(pod), "err", errMsg) } else if apierrors.IsNotFound(err) { - klog.V(2).InfoS("Unable to schedule pod, possibly due to node not found; waiting", "pod", klog.KObj(pod), "err", errMsg) + logger.V(2).Info("Unable to schedule pod, possibly due to node not found; waiting", "pod", klog.KObj(pod), "err", errMsg) if errStatus, ok := err.(apierrors.APIStatus); ok && errStatus.Status().Details.Kind == "node" { nodeName := errStatus.Status().Details.Name // when node is not found, We do not remove the node right away. 
Trying again to get @@ -874,33 +974,36 @@ func (sched *Scheduler) handleSchedulingFailure(ctx context.Context, fwk framewo _, err := fwk.ClientSet().CoreV1().Nodes().Get(context.TODO(), nodeName, metav1.GetOptions{}) if err != nil && apierrors.IsNotFound(err) { node := v1.Node{ObjectMeta: metav1.ObjectMeta{Name: nodeName}} - if err := sched.Cache.RemoveNode(&node); err != nil { - klog.V(4).InfoS("Node is not found; failed to remove it from the cache", "node", node.Name) + if err := sched.Cache.RemoveNode(logger, &node); err != nil { + logger.V(4).Info("Node is not found; failed to remove it from the cache", "node", node.Name) } } } } else { - klog.ErrorS(err, "Error scheduling pod; retrying", "pod", klog.KObj(pod)) + logger.Error(err, "Error scheduling pod; retrying", "pod", klog.KObj(pod)) } // Check if the Pod exists in informer cache. podLister := fwk.SharedInformerFactory().Core().V1().Pods().Lister() cachedPod, e := podLister.Pods(pod.Namespace).Get(pod.Name) if e != nil { - klog.InfoS("Pod doesn't exist in informer cache", "pod", klog.KObj(pod), "err", e) + logger.Info("Pod doesn't exist in informer cache", "pod", klog.KObj(pod), "err", e) + // We need to call DonePod here because we don't call AddUnschedulableIfNotPresent in this case. } else { // In the case of extender, the pod may have been bound successfully, but timed out returning its response to the scheduler. // It could result in the live version to carry .spec.nodeName, and that's inconsistent with the internal-queued version. if len(cachedPod.Spec.NodeName) != 0 { - klog.InfoS("Pod has been assigned to node. Abort adding it back to queue.", "pod", klog.KObj(pod), "node", cachedPod.Spec.NodeName) + logger.Info("Pod has been assigned to node. Abort adding it back to queue.", "pod", klog.KObj(pod), "node", cachedPod.Spec.NodeName) + // We need to call DonePod here because we don't call AddUnschedulableIfNotPresent in this case. } else { // As is from SharedInformer, we need to do a DeepCopy() here. 
// ignore this err since apiserver doesn't properly validate affinity terms // and we can't fix the validation for backwards compatibility. podInfo.PodInfo, _ = framework.NewPodInfo(cachedPod.DeepCopy()) - if err := sched.SchedulingQueue.AddUnschedulableIfNotPresent(podInfo, sched.SchedulingQueue.SchedulingCycle()); err != nil { - klog.ErrorS(err, "Error occurred") + if err := sched.SchedulingQueue.AddUnschedulableIfNotPresent(logger, podInfo, sched.SchedulingQueue.SchedulingCycle()); err != nil { + logger.Error(err, "Error occurred") } + calledDone = true } } @@ -909,7 +1012,8 @@ func (sched *Scheduler) handleSchedulingFailure(ctx context.Context, fwk framewo // and the time the scheduler receives a Pod Update for the nominated pod. // Here we check for nil only for tests. if sched.SchedulingQueue != nil { - sched.SchedulingQueue.AddNominatedPod(podInfo.PodInfo, nominatingInfo) + logger := klog.FromContext(ctx) + sched.SchedulingQueue.AddNominatedPod(logger, podInfo.PodInfo, nominatingInfo) } if err == nil { @@ -925,7 +1029,7 @@ func (sched *Scheduler) handleSchedulingFailure(ctx context.Context, fwk framewo Reason: reason, Message: errMsg, }, nominatingInfo); err != nil { - klog.ErrorS(err, "Error updating pod", "pod", klog.KObj(pod)) + klog.FromContext(ctx).Error(err, "Error updating pod", "pod", klog.KObj(pod)) } } @@ -940,7 +1044,8 @@ func truncateMessage(message string) string { } func updatePod(ctx context.Context, client clientset.Interface, pod *v1.Pod, condition *v1.PodCondition, nominatingInfo *framework.NominatingInfo) error { - klog.V(3).InfoS("Updating pod condition", "pod", klog.KObj(pod), "conditionType", condition.Type, "conditionStatus", condition.Status, "conditionReason", condition.Reason) + logger := klog.FromContext(ctx) + logger.V(3).Info("Updating pod condition", "pod", klog.KObj(pod), "conditionType", condition.Type, "conditionStatus", condition.Status, "conditionReason", condition.Reason) podStatusCopy := pod.Status.DeepCopy() // 
NominatedNodeName is updated only if we are trying to set it, and the value is // different from the existing one. diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/scheduler.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/scheduler.go index 540d46f038b2..60d345af046d 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/scheduler.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/scheduler.go @@ -24,8 +24,8 @@ import ( v1 "k8s.io/api/core/v1" metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" - "k8s.io/apimachinery/pkg/util/sets" "k8s.io/apimachinery/pkg/util/wait" + utilfeature "k8s.io/apiserver/pkg/util/feature" "k8s.io/client-go/dynamic/dynamicinformer" "k8s.io/client-go/informers" coreinformers "k8s.io/client-go/informers/core/v1" @@ -34,6 +34,7 @@ import ( "k8s.io/client-go/tools/cache" "k8s.io/klog/v2" configv1 "k8s.io/kube-scheduler/config/v1" + "k8s.io/kubernetes/pkg/features" schedulerapi "k8s.io/kubernetes/pkg/scheduler/apis/config" "k8s.io/kubernetes/pkg/scheduler/apis/config/scheme" "k8s.io/kubernetes/pkg/scheduler/framework" @@ -70,7 +71,7 @@ type Scheduler struct { // is available. We don't use a channel for this, because scheduling // a pod may take some amount of time and we don't want pods to get // stale while they sit in a channel. - NextPod func() *framework.QueuedPodInfo + NextPod func() (*framework.QueuedPodInfo, error) // FailureHandler is called upon a scheduling failure. FailureHandler FailureHandlerFn @@ -96,11 +97,19 @@ type Scheduler struct { percentageOfNodesToScore int32 nextStartNodeIndex int + + // logger *must* be initialized when creating a Scheduler, + // otherwise logging functions will access a nil sink and + // panic. + logger klog.Logger + + // registeredHandlers contains the registrations of all handlers. It's used to check if all handlers have finished syncing before the scheduling cycles start. 
+ registeredHandlers []cache.ResourceEventHandlerRegistration } -func (s *Scheduler) applyDefaultHandlers() { - s.SchedulePod = s.schedulePod - s.FailureHandler = s.handleSchedulingFailure +func (sched *Scheduler) applyDefaultHandlers() { + sched.SchedulePod = sched.schedulePod + sched.FailureHandler = sched.handleSchedulingFailure } type schedulerOptions struct { @@ -239,17 +248,15 @@ var defaultSchedulerOptions = schedulerOptions{ } // New returns a Scheduler -func New(client clientset.Interface, +func New(ctx context.Context, + client clientset.Interface, informerFactory informers.SharedInformerFactory, dynInformerFactory dynamicinformer.DynamicSharedInformerFactory, recorderFactory profile.RecorderFactory, - stopCh <-chan struct{}, opts ...Option) (*Scheduler, error) { - stopEverything := stopCh - if stopEverything == nil { - stopEverything = wait.NeverStop - } + logger := klog.FromContext(ctx) + stopEverything := ctx.Done() options := defaultSchedulerOptions for _, opt := range opts { @@ -273,7 +280,7 @@ func New(client clientset.Interface, metrics.Register() - extenders, err := buildExtenders(options.extenders, options.profiles) + extenders, err := buildExtenders(logger, options.extenders, options.profiles) if err != nil { return nil, fmt.Errorf("couldn't build extenders: %w", err) } @@ -282,18 +289,15 @@ func New(client clientset.Interface, nodeLister := informerFactory.Core().V1().Nodes().Lister() snapshot := internalcache.NewEmptySnapshot() - clusterEventMap := make(map[framework.ClusterEvent]sets.String) - metricsRecorder := metrics.NewMetricsAsyncRecorder(1000, time.Second, stopCh) + metricsRecorder := metrics.NewMetricsAsyncRecorder(1000, time.Second, stopEverything) - profiles, err := profile.NewMap(options.profiles, registry, recorderFactory, stopCh, + profiles, err := profile.NewMap(ctx, options.profiles, registry, recorderFactory, frameworkruntime.WithComponentConfigVersion(options.componentConfigVersion), frameworkruntime.WithClientSet(client), 
 		frameworkruntime.WithKubeConfig(options.kubeConfig),
 		frameworkruntime.WithInformerFactory(informerFactory),
 		frameworkruntime.WithSnapshotSharedLister(snapshot),
 		frameworkruntime.WithCaptureProfile(frameworkruntime.CaptureProfile(options.frameworkCapturer)),
-		frameworkruntime.WithClusterEventMap(clusterEventMap),
-		frameworkruntime.WithClusterEventMap(clusterEventMap),
 		frameworkruntime.WithParallelism(int(options.parallelism)),
 		frameworkruntime.WithExtenders(extenders),
 		frameworkruntime.WithMetricsRecorder(metricsRecorder),
@@ -307,18 +311,21 @@ func New(client clientset.Interface,
 	}
 
 	preEnqueuePluginMap := make(map[string][]framework.PreEnqueuePlugin)
+	queueingHintsPerProfile := make(internalqueue.QueueingHintMapPerProfile)
 	for profileName, profile := range profiles {
 		preEnqueuePluginMap[profileName] = profile.PreEnqueuePlugins()
+		queueingHintsPerProfile[profileName] = buildQueueingHintMap(profile.EnqueueExtensions())
 	}
+
 	podQueue := internalqueue.NewSchedulingQueue(
 		profiles[options.profiles[0].SchedulerName].QueueSortFunc(),
 		informerFactory,
 		internalqueue.WithPodInitialBackoffDuration(time.Duration(options.podInitialBackoffSeconds)*time.Second),
 		internalqueue.WithPodMaxBackoffDuration(time.Duration(options.podMaxBackoffSeconds)*time.Second),
 		internalqueue.WithPodLister(podLister),
-		internalqueue.WithClusterEventMap(clusterEventMap),
 		internalqueue.WithPodMaxInUnschedulablePodsDuration(options.podMaxInUnschedulablePodsDuration),
 		internalqueue.WithPreEnqueuePluginMap(preEnqueuePluginMap),
+		internalqueue.WithQueueingHintMapPerProfile(queueingHintsPerProfile),
 		internalqueue.WithPluginMetricsSamplePercent(pluginMetricsSamplePercent),
 		internalqueue.WithMetricsRecorder(*metricsRecorder),
 	)
@@ -327,11 +334,11 @@ func New(client clientset.Interface,
 		fwk.SetPodNominator(podQueue)
 	}
 
-	schedulerCache := internalcache.New(durationToExpireAssumedPod, stopEverything)
+	schedulerCache := internalcache.New(ctx, durationToExpireAssumedPod)
 
 	// Setup cache debugger.
 	debugger := cachedebugger.New(nodeLister, podLister, schedulerCache, podQueue)
-	debugger.ListenForSignal(stopEverything)
+	debugger.ListenForSignal(ctx)
 
 	sched := &Scheduler{
 		Cache:                    schedulerCache,
@@ -339,21 +346,56 @@ func New(client clientset.Interface,
 		nodeInfoSnapshot:         snapshot,
 		percentageOfNodesToScore: options.percentageOfNodesToScore,
 		Extenders:                extenders,
-		NextPod:                  internalqueue.MakeNextPodFunc(podQueue),
 		StopEverything:           stopEverything,
 		SchedulingQueue:          podQueue,
 		Profiles:                 profiles,
+		logger:                   logger,
 	}
+	sched.NextPod = podQueue.Pop
 	sched.applyDefaultHandlers()
 
-	addAllEventHandlers(sched, informerFactory, dynInformerFactory, unionedGVKs(clusterEventMap))
+	if err = addAllEventHandlers(sched, informerFactory, dynInformerFactory, unionedGVKs(queueingHintsPerProfile)); err != nil {
+		return nil, fmt.Errorf("adding event handlers: %w", err)
+	}
 
 	return sched, nil
 }
 
+// defaultQueueingHintFn is the default queueing hint function.
+// It always returns QueueAfterBackoff as the queueing hint.
+var defaultQueueingHintFn = func(_ klog.Logger, _ *v1.Pod, _, _ interface{}) framework.QueueingHint {
+	return framework.QueueAfterBackoff
+}
+
+func buildQueueingHintMap(es []framework.EnqueueExtensions) internalqueue.QueueingHintMap {
+	queueingHintMap := make(internalqueue.QueueingHintMap)
+	for _, e := range es {
+		events := e.EventsToRegister()
+
+		// Note: Rarely, a plugin implements EnqueueExtensions but returns nil.
+		// We treat it as: the plugin is not interested in any event, and hence pod failed by that plugin
+		// cannot be moved by any regular cluster event.
+		// So, we can just ignore such EventsToRegister here.
+
+		for _, event := range events {
+			fn := event.QueueingHintFn
+			if fn == nil || !utilfeature.DefaultFeatureGate.Enabled(features.SchedulerQueueingHints) {
+				fn = defaultQueueingHintFn
+			}
+
+			queueingHintMap[event.Event] = append(queueingHintMap[event.Event], &internalqueue.QueueingHintFunction{
+				PluginName:     e.Name(),
+				QueueingHintFn: fn,
+			})
+		}
+	}
+	return queueingHintMap
+}
+
 // Run begins watching and scheduling. It starts scheduling and blocked until the context is done.
 func (sched *Scheduler) Run(ctx context.Context) {
-	sched.SchedulingQueue.Run()
+	logger := klog.FromContext(ctx)
+	sched.SchedulingQueue.Run(logger)
 
 	// We need to start scheduleOne loop in a dedicated goroutine,
 	// because scheduleOne function hangs on getting the next item
@@ -375,7 +417,7 @@ func NewInformerFactory(cs clientset.Interface, resyncPeriod time.Duration) info
 	return informerFactory
 }
 
-func buildExtenders(extenders []schedulerapi.Extender, profiles []schedulerapi.KubeSchedulerProfile) ([]framework.Extender, error) {
+func buildExtenders(logger klog.Logger, extenders []schedulerapi.Extender, profiles []schedulerapi.KubeSchedulerProfile) ([]framework.Extender, error) {
 	var fExtenders []framework.Extender
 	if len(extenders) == 0 {
 		return nil, nil
@@ -384,7 +426,7 @@ func buildExtenders(extenders []schedulerapi.Extender, profiles []schedulerapi.K
 	var ignoredExtendedResources []string
 	var ignorableExtenders []framework.Extender
 	for i := range extenders {
-		klog.V(2).InfoS("Creating extender", "extender", extenders[i])
+		logger.V(2).Info("Creating extender", "extender", extenders[i])
 		extender, err := NewHTTPExtender(&extenders[i])
 		if err != nil {
 			return nil, err
@@ -435,13 +477,15 @@ func buildExtenders(extenders []schedulerapi.Extender, profiles []schedulerapi.K
 type FailureHandlerFn func(ctx context.Context, fwk framework.Framework, podInfo *framework.QueuedPodInfo, status *framework.Status, nominatingInfo *framework.NominatingInfo, start time.Time)
 
-func unionedGVKs(m map[framework.ClusterEvent]sets.String) map[framework.GVK]framework.ActionType {
+func unionedGVKs(queueingHintsPerProfile internalqueue.QueueingHintMapPerProfile) map[framework.GVK]framework.ActionType {
 	gvkMap := make(map[framework.GVK]framework.ActionType)
-	for evt := range m {
-		if _, ok := gvkMap[evt.Resource]; ok {
-			gvkMap[evt.Resource] |= evt.ActionType
-		} else {
-			gvkMap[evt.Resource] = evt.ActionType
+	for _, queueingHints := range queueingHintsPerProfile {
+		for evt := range queueingHints {
+			if _, ok := gvkMap[evt.Resource]; ok {
+				gvkMap[evt.Resource] |= evt.ActionType
+			} else {
+				gvkMap[evt.Resource] = evt.ActionType
+			}
 		}
 	}
 	return gvkMap
diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/util/utils.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/util/utils.go
index 3b083878b410..967c248355d8 100644
--- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/util/utils.go
+++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/util/utils.go
@@ -25,11 +25,13 @@ import (
 	v1 "k8s.io/api/core/v1"
 	apierrors "k8s.io/apimachinery/pkg/api/errors"
 	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+	"k8s.io/apimachinery/pkg/runtime"
 	"k8s.io/apimachinery/pkg/types"
 	utilerrors "k8s.io/apimachinery/pkg/util/errors"
 	"k8s.io/apimachinery/pkg/util/net"
 	"k8s.io/apimachinery/pkg/util/strategicpatch"
 	"k8s.io/client-go/kubernetes"
+	"k8s.io/client-go/tools/cache"
 	"k8s.io/client-go/util/retry"
 	corev1helpers "k8s.io/component-helpers/scheduling/corev1"
 	"k8s.io/klog/v2"
@@ -67,12 +69,12 @@ func GetEarliestPodStartTime(victims *extenderv1.Victims) *metav1.Time {
 	maxPriority := corev1helpers.PodPriority(victims.Pods[0])
 	for _, pod := range victims.Pods {
-		if corev1helpers.PodPriority(pod) == maxPriority {
-			if GetPodStartTime(pod).Before(earliestPodStartTime) {
-				earliestPodStartTime = GetPodStartTime(pod)
+		if podPriority := corev1helpers.PodPriority(pod); podPriority == maxPriority {
+			if podStartTime := GetPodStartTime(pod); podStartTime.Before(earliestPodStartTime) {
+				earliestPodStartTime = podStartTime
 			}
-		} else if corev1helpers.PodPriority(pod) > maxPriority {
-			maxPriority = corev1helpers.PodPriority(pod)
+		} else if podPriority > maxPriority {
+			maxPriority = podPriority
 			earliestPodStartTime = GetPodStartTime(pod)
 		}
 	}
@@ -159,3 +161,31 @@ func IsScalarResourceName(name v1.ResourceName) bool {
 	return v1helper.IsExtendedResourceName(name) || v1helper.IsHugePageResourceName(name) ||
 		v1helper.IsPrefixedNativeResource(name) || v1helper.IsAttachableVolumeResourceName(name)
 }
+
+// As converts two objects to the given type.
+// Both objects must be of the same type. If not, an error is returned.
+// nil objects are allowed and will be converted to nil.
+// For oldObj, cache.DeletedFinalStateUnknown is handled and the
+// object stored in it will be converted instead.
+func As[T runtime.Object](oldObj, newobj interface{}) (T, T, error) {
+	var oldTyped T
+	var newTyped T
+	var ok bool
+	if newobj != nil {
+		newTyped, ok = newobj.(T)
+		if !ok {
+			return oldTyped, newTyped, fmt.Errorf("expected %T, but got %T", newTyped, newobj)
+		}
+	}
+
+	if oldObj != nil {
+		if realOldObj, ok := oldObj.(cache.DeletedFinalStateUnknown); ok {
+			oldObj = realOldObj.Obj
+		}
+		oldTyped, ok = oldObj.(T)
+		if !ok {
+			return oldTyped, newTyped, fmt.Errorf("expected %T, but got %T", oldTyped, oldObj)
+		}
+	}
+	return oldTyped, newTyped, nil
+}
diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/util/conntrack/OWNERS b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/util/conntrack/OWNERS
deleted file mode 100644
index 675d11afb440..000000000000
--- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/util/conntrack/OWNERS
+++ /dev/null
@@ -1,8 +0,0 @@
-# See the OWNERS docs at https://go.k8s.io/owners
-
-approvers:
-  - sig-network-approvers
-reviewers:
-  - sig-network-reviewers
-labels:
-  - sig/network
diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/util/filesystem/defaultfs.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/util/filesystem/defaultfs.go
index a7eb01ffe361..0ddd2248fa9d 100644
--- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/util/filesystem/defaultfs.go
+++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/util/filesystem/defaultfs.go
@@ -85,7 +85,7 @@ func (fs *DefaultFs) RemoveAll(path string) error {
 	return os.RemoveAll(fs.prefix(path))
 }
 
-// Remove via os.RemoveAll
+// Remove via os.Remove
 func (fs *DefaultFs) Remove(name string) error {
 	return os.Remove(fs.prefix(name))
 }
diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/util/filesystem/filesystem.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/util/filesystem/filesystem.go
index 43cd4aa7e291..6408e0fa838c 100644
--- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/util/filesystem/filesystem.go
+++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/util/filesystem/filesystem.go
@@ -33,7 +33,7 @@ type Filesystem interface {
 	RemoveAll(path string) error
 	Remove(name string) error
 
-	// from "io/ioutil"
+	// from "os"
 	ReadFile(filename string) ([]byte, error)
 	TempDir(dir, prefix string) (string, error)
 	TempFile(dir, prefix string) (File, error)
diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/util/hash/hash.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/util/hash/hash.go
index 803f066a440b..0962e5cfb5b3 100644
--- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/util/hash/hash.go
+++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/util/hash/hash.go
@@ -17,9 +17,10 @@ limitations under the License.
 package hash
 
 import (
+	"fmt"
 	"hash"
 
-	"github.com/davecgh/go-spew/spew"
+	"k8s.io/apimachinery/pkg/util/dump"
 )
 
 // DeepHashObject writes specified object to hash using the spew library
@@ -27,11 +28,5 @@ import (
 // ensuring the hash does not change when a pointer changes.
 func DeepHashObject(hasher hash.Hash, objectToWrite interface{}) {
 	hasher.Reset()
-	printer := spew.ConfigState{
-		Indent:         " ",
-		SortKeys:       true,
-		DisableMethods: true,
-		SpewKeys:       true,
-	}
-	printer.Fprintf(hasher, "%#v", objectToWrite)
+	fmt.Fprintf(hasher, "%v", dump.ForHash(objectToWrite))
 }
diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/util/ipset/OWNERS b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/util/ipset/OWNERS
deleted file mode 100644
index c70337b9a47b..000000000000
--- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/util/ipset/OWNERS
+++ /dev/null
@@ -1,11 +0,0 @@
-# See the OWNERS docs at https://go.k8s.io/owners
-
-reviewers:
-  - sig-network-reviewers
-approvers:
-  - sig-network-approvers
-labels:
-  - sig/network
-emeritus_approvers:
-  - brendandburns
-  - m1093782566
diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/util/iptables/OWNERS b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/util/iptables/OWNERS
index 80a5e13e3085..548730fa9528 100644
--- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/util/iptables/OWNERS
+++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/util/iptables/OWNERS
@@ -1,14 +1,8 @@
 # See the OWNERS docs at https://go.k8s.io/owners
 
 reviewers:
-  - dcbw
-  - thockin
-  - danwinship
+  - sig-network-reviewers
 approvers:
-  - dcbw
-  - thockin
-  - danwinship
-emeritus_approvers:
-  - eparis
+  - sig-network-approvers
 labels:
   - sig/network
diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/util/ipvs/OWNERS b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/util/ipvs/OWNERS
deleted file mode 100644
index 44fd07aeee53..000000000000
--- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/util/ipvs/OWNERS
+++ /dev/null
@@ -1,16 +0,0 @@
-# See the OWNERS docs at https://go.k8s.io/owners
-
-reviewers:
-  - thockin
-  - andrewsykim
-  - uablrek
-approvers:
-  - thockin
-  - andrewsykim
-  - uablrek
-labels:
-  - sig/network
-  - area/ipvs
-emeritus_approvers:
-  - lbernail
-  - m1093782566
diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/util/parsers/parsers.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/util/parsers/parsers.go
index ef869cd768e0..75130a8628c9 100644
--- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/util/parsers/parsers.go
+++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/util/parsers/parsers.go
@@ -31,7 +31,7 @@ import (
 func ParseImageName(image string) (string, string, string, error) {
 	named, err := dockerref.ParseNormalizedNamed(image)
 	if err != nil {
-		return "", "", "", fmt.Errorf("couldn't parse image name: %v", err)
+		return "", "", "", fmt.Errorf("couldn't parse image name %q: %v", image, err)
 	}
 
 	repoToPull := named.Name()
diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/configmap/configmap.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/configmap/configmap.go
index 7a1e5e58178b..ae7151149785 100644
--- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/configmap/configmap.go
+++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/configmap/configmap.go
@@ -252,7 +252,7 @@ func (b *configMapVolumeMounter) SetUpAt(dir string, mounterArgs volume.MounterA
 	setPerms := func(_ string) error {
 		// This may be the first time writing and new files get created outside the timestamp subdirectory:
 		// change the permissions on the whole volume and not only in the timestamp directory.
-		return volume.SetVolumeOwnership(b, mounterArgs.FsGroup, nil /*fsGroupChangePolicy*/, volumeutil.FSGroupCompleteHook(b.plugin, nil))
+		return volume.SetVolumeOwnership(b, dir, mounterArgs.FsGroup, nil /*fsGroupChangePolicy*/, volumeutil.FSGroupCompleteHook(b.plugin, nil))
 	}
 	err = writer.Write(payload, setPerms)
 	if err != nil {
diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/csi/csi_attacher.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/csi/csi_attacher.go
index 8ffb3acf49cf..ef3c98258ac8 100644
--- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/csi/csi_attacher.go
+++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/csi/csi_attacher.go
@@ -595,14 +595,13 @@ func (c *csiAttacher) UnmountDevice(deviceMountPath string) error {
 		driverName = data[volDataKey.driverName]
 		volID = data[volDataKey.volHandle]
 	} else {
-		klog.Error(log("UnmountDevice failed to load volume data file [%s]: %v", dataDir, err))
-
-		// The volume might have been mounted by old CSI volume plugin. Fall back to the old behavior: read PV from API server
-		driverName, volID, err = getDriverAndVolNameFromDeviceMountPath(c.k8s, deviceMountPath)
-		if err != nil {
-			klog.Errorf(log("attacher.UnmountDevice failed to get driver and volume name from device mount path: %v", err))
-			return err
+		if errors.Is(err, os.ErrNotExist) {
+			klog.V(4).Info(log("attacher.UnmountDevice skipped because volume data file [%s] does not exist", dataDir))
+			return nil
 		}
+
+		klog.Errorf(log("attacher.UnmountDevice failed to get driver and volume name from device mount path: %v", err))
+		return err
 	}
 
 	if c.csiClient == nil {
@@ -682,36 +681,6 @@ func makeDeviceMountPath(plugin *csiPlugin, spec *volume.Spec) (string, error) {
 	return filepath.Join(plugin.host.GetPluginDir(plugin.GetPluginName()), driver, volSha, globalMountInGlobalPath), nil
 }
 
-func getDriverAndVolNameFromDeviceMountPath(k8s kubernetes.Interface, deviceMountPath string) (string, string, error) {
-	// deviceMountPath structure: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/{pvname}/globalmount
-	dir := filepath.Dir(deviceMountPath)
-	if file := filepath.Base(deviceMountPath); file != globalMountInGlobalPath {
-		return "", "", errors.New(log("getDriverAndVolNameFromDeviceMountPath failed, path did not end in %s", globalMountInGlobalPath))
-	}
-	// dir is now /var/lib/kubelet/plugins/kubernetes.io/csi/pv/{pvname}
-	pvName := filepath.Base(dir)
-
-	// Get PV and check for errors
-	pv, err := k8s.CoreV1().PersistentVolumes().Get(context.TODO(), pvName, metav1.GetOptions{})
-	if err != nil {
-		return "", "", err
-	}
-	if pv == nil || pv.Spec.CSI == nil {
-		return "", "", errors.New(log("getDriverAndVolNameFromDeviceMountPath could not find CSI Persistent Volume Source for pv: %s", pvName))
-	}
-
-	// Get VolumeHandle and PluginName from pv
-	csiSource := pv.Spec.CSI
-	if csiSource.Driver == "" {
-		return "", "", errors.New(log("getDriverAndVolNameFromDeviceMountPath failed, driver name empty"))
-	}
-	if csiSource.VolumeHandle == "" {
-		return "", "", errors.New(log("getDriverAndVolNameFromDeviceMountPath failed, VolumeHandle empty"))
-	}
-
-	return csiSource.Driver, csiSource.VolumeHandle, nil
-}
-
 func verifyAttachmentStatus(attachment *storage.VolumeAttachment, volumeHandle string) (bool, error) {
 	// when we received a deleted event during attachment, fail fast
 	if attachment == nil {
diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/csi/csi_mounter.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/csi/csi_mounter.go
index 1974b0367531..468f882b8845 100644
--- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/csi/csi_mounter.go
+++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/csi/csi_mounter.go
@@ -333,7 +333,7 @@ func (c *csiMountMgr) SetUpAt(dir string, mounterArgs volume.MounterArgs) error
 		// Driver doesn't support applying FSGroup. Kubelet must apply it instead.
 
 		// fullPluginName helps to distinguish different driver from csi plugin
-		err := volume.SetVolumeOwnership(c, mounterArgs.FsGroup, mounterArgs.FSGroupChangePolicy, util.FSGroupCompleteHook(c.plugin, c.spec))
+		err := volume.SetVolumeOwnership(c, dir, mounterArgs.FsGroup, mounterArgs.FSGroupChangePolicy, util.FSGroupCompleteHook(c.plugin, c.spec))
 		if err != nil {
 			// At this point mount operation is successful:
 			//   1. Since volume can not be used by the pod because of invalid permissions, we must return error
diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/csi/csi_plugin.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/csi/csi_plugin.go
index 4c26662797c4..2556517276e6 100644
--- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/csi/csi_plugin.go
+++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/csi/csi_plugin.go
@@ -217,7 +217,7 @@ func (p *csiPlugin) Init(host volume.VolumeHost) error {
 	var migratedPlugins = map[string](func() bool){
 		csitranslationplugins.GCEPDInTreePluginName: func() bool {
-			return utilfeature.DefaultFeatureGate.Enabled(features.CSIMigrationGCE)
+			return true
 		},
 		csitranslationplugins.AWSEBSInTreePluginName: func() bool {
 			return true
diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/csi/csi_util.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/csi/csi_util.go
index ee2bdc193b32..bb4d799ff3c8 100644
--- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/csi/csi_util.go
+++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/csi/csi_util.go
@@ -79,7 +79,7 @@ func loadVolumeData(dir string, fileName string) (map[string]string, error) {
 
 	file, err := os.Open(dataFileName)
 	if err != nil {
-		return nil, errors.New(log("failed to open volume data file [%s]: %v", dataFileName, err))
+		return nil, fmt.Errorf("%s: %w", log("failed to open volume data file [%s]", dataFileName), err)
 	}
 	defer file.Close()
 	data := map[string]string{}
diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/csi/nodeinfomanager/nodeinfomanager.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/csi/nodeinfomanager/nodeinfomanager.go
index 9a77fb715662..89b022587024 100644
--- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/csi/nodeinfomanager/nodeinfomanager.go
+++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/csi/nodeinfomanager/nodeinfomanager.go
@@ -526,10 +526,7 @@ func (nim *nodeInfoManager) installDriverToCSINode(
 		return fmt.Errorf("error getting CSI client")
 	}
 
-	topologyKeys := make(sets.String)
-	for k := range topology {
-		topologyKeys.Insert(k)
-	}
+	topologyKeys := sets.StringKeySet(topology)
 
 	specModified := true
 	// Clone driver list, omitting the driver that matches the given driverName
diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/csimigration/plugin_manager.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/csimigration/plugin_manager.go
index 2eacf54cb308..c2749ea37ec8 100644
--- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/csimigration/plugin_manager.go
+++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/csimigration/plugin_manager.go
@@ -91,7 +91,7 @@ func (pm PluginManager) IsMigrationEnabledForPlugin(pluginName string) bool {
 	case csilibplugins.AWSEBSInTreePluginName:
 		return true
 	case csilibplugins.GCEPDInTreePluginName:
-		return pm.featureGate.Enabled(features.CSIMigrationGCE)
+		return true
 	case csilibplugins.AzureFileInTreePluginName:
 		return pm.featureGate.Enabled(features.CSIMigrationAzureFile)
 	case csilibplugins.AzureDiskInTreePluginName:
diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/downwardapi/downwardapi.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/downwardapi/downwardapi.go
index b13e6ea6015c..54364009d018 100644
--- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/downwardapi/downwardapi.go
+++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/downwardapi/downwardapi.go
@@ -223,7 +223,7 @@ func (b *downwardAPIVolumeMounter) SetUpAt(dir string, mounterArgs volume.Mounte
 	setPerms := func(_ string) error {
 		// This may be the first time writing and new files get created outside the timestamp subdirectory:
 		// change the permissions on the whole volume and not only in the timestamp directory.
-		return volume.SetVolumeOwnership(b, mounterArgs.FsGroup, nil /*fsGroupChangePolicy*/, volumeutil.FSGroupCompleteHook(b.plugin, nil))
+		return volume.SetVolumeOwnership(b, dir, mounterArgs.FsGroup, nil /*fsGroupChangePolicy*/, volumeutil.FSGroupCompleteHook(b.plugin, nil))
 	}
 	err = writer.Write(data, setPerms)
 	if err != nil {
diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/emptydir/empty_dir.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/emptydir/empty_dir.go
index 9ad981c54bd4..192db424e676 100644
--- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/emptydir/empty_dir.go
+++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/emptydir/empty_dir.go
@@ -280,7 +280,7 @@ func (ed *emptyDir) SetUpAt(dir string, mounterArgs volume.MounterArgs) error {
 		err = fmt.Errorf("unknown storage medium %q", ed.medium)
 	}
 
-	volume.SetVolumeOwnership(ed, mounterArgs.FsGroup, nil /*fsGroupChangePolicy*/, volumeutil.FSGroupCompleteHook(ed.plugin, nil))
+	volume.SetVolumeOwnership(ed, dir, mounterArgs.FsGroup, nil /*fsGroupChangePolicy*/, volumeutil.FSGroupCompleteHook(ed.plugin, nil))
 
 	// If setting up the quota fails, just log a message but don't actually error out.
 	// We'll use the old du mechanism in this case, at least until we support
@@ -519,10 +519,12 @@ func (ed *emptyDir) TearDownAt(dir string) error {
 }
 
 func (ed *emptyDir) teardownDefault(dir string) error {
-	// Remove any quota
-	err := fsquota.ClearQuota(ed.mounter, dir)
-	if err != nil {
-		klog.Warningf("Warning: Failed to clear quota on %s: %v", dir, err)
+	if utilfeature.DefaultFeatureGate.Enabled(features.LocalStorageCapacityIsolationFSQuotaMonitoring) {
+		// Remove any quota
+		err := fsquota.ClearQuota(ed.mounter, dir)
+		if err != nil {
+			klog.Warningf("Warning: Failed to clear quota on %s: %v", dir, err)
+		}
 	}
 	// Renaming the directory is not required anymore because the operation executor
 	// now handles duplicate operations on the same volume
diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/fc/disk_manager.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/fc/disk_manager.go
index bb054ea16618..02e15c4f85c5 100644
--- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/fc/disk_manager.go
+++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/fc/disk_manager.go
@@ -91,7 +91,7 @@ func diskSetUp(manager diskManager, b fcDiskMounter, volPath string, mounter mou
 	}
 
 	if !b.readOnly {
-		volume.SetVolumeOwnership(&b, fsGroup, fsGroupChangePolicy, util.FSGroupCompleteHook(b.plugin, nil))
+		volume.SetVolumeOwnership(&b, volPath, fsGroup, fsGroupChangePolicy, util.FSGroupCompleteHook(b.plugin, nil))
 	}
 
 	return nil
diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/flexvolume/mounter.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/flexvolume/mounter.go
index 8098cfdb66ee..3821af7e9235 100644
--- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/flexvolume/mounter.go
+++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/flexvolume/mounter.go
@@ -95,7 +95,7 @@ func (f *flexVolumeMounter) SetUpAt(dir string, mounterArgs volume.MounterArgs)
 	if !f.readOnly {
 		if f.plugin.capabilities.FSGroup {
 			// fullPluginName helps to distinguish different driver from flex volume plugin
-			volume.SetVolumeOwnership(f, mounterArgs.FsGroup, mounterArgs.FSGroupChangePolicy, util.FSGroupCompleteHook(f.plugin, f.spec))
+			volume.SetVolumeOwnership(f, dir, mounterArgs.FsGroup, mounterArgs.FSGroupChangePolicy, util.FSGroupCompleteHook(f.plugin, f.spec))
 		}
 	}
diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/gcepd/OWNERS b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/gcepd/OWNERS
deleted file mode 100644
index a966206b2997..000000000000
--- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/gcepd/OWNERS
+++ /dev/null
@@ -1,13 +0,0 @@
-# See the OWNERS docs at https://go.k8s.io/owners
-
-approvers:
-  - saad-ali
-  - thockin
-reviewers:
-  - saad-ali
-  - jsafrane
-  - jingxu97
-  - gnufied
-  - msau42
-emeritus_approvers:
-  - davidz627
diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/gcepd/attacher.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/gcepd/attacher.go
deleted file mode 100644
index c97be9edcde0..000000000000
--- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/gcepd/attacher.go
+++ /dev/null
@@ -1,410 +0,0 @@
-//go:build !providerless
-// +build !providerless
-
-/*
-Copyright 2016 The Kubernetes Authors.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
-    http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-*/
-
-package gcepd
-
-import (
-	"encoding/json"
-	"errors"
-	"fmt"
-	"os"
-	"path"
-	"path/filepath"
-	"runtime"
-	"strconv"
-	"time"
-
-	"k8s.io/klog/v2"
-	"k8s.io/mount-utils"
-	utilexec "k8s.io/utils/exec"
-
-	v1 "k8s.io/api/core/v1"
-	"k8s.io/apimachinery/pkg/types"
-	"k8s.io/apimachinery/pkg/util/sets"
-	"k8s.io/kubernetes/pkg/volume"
-	volumeutil "k8s.io/kubernetes/pkg/volume/util"
-	"k8s.io/legacy-cloud-providers/gce"
-)
-
-type gcePersistentDiskAttacher struct {
-	host     volume.VolumeHost
-	gceDisks gce.Disks
-}
-
-var _ volume.Attacher = &gcePersistentDiskAttacher{}
-
-var _ volume.DeviceMounter = &gcePersistentDiskAttacher{}
-
-var _ volume.AttachableVolumePlugin = &gcePersistentDiskPlugin{}
-
-var _ volume.DeviceMountableVolumePlugin = &gcePersistentDiskPlugin{}
-
-func (plugin *gcePersistentDiskPlugin) NewAttacher() (volume.Attacher, error) {
-	gceCloud, err := getCloudProvider(plugin.host.GetCloudProvider())
-	if err != nil {
-		return nil, err
-	}
-
-	return &gcePersistentDiskAttacher{
-		host:     plugin.host,
-		gceDisks: gceCloud,
-	}, nil
-}
-
-func (plugin *gcePersistentDiskPlugin) NewDeviceMounter() (volume.DeviceMounter, error) {
-	return plugin.NewAttacher()
-}
-
-func (plugin *gcePersistentDiskPlugin) GetDeviceMountRefs(deviceMountPath string) ([]string, error) {
-	mounter := plugin.host.GetMounter(plugin.GetPluginName())
-	return mounter.GetMountRefs(deviceMountPath)
-}
-
-// Attach checks with the GCE cloud provider if the specified volume is already
-// attached to the node with the specified Name.
-// If the volume is attached, it succeeds (returns nil).
-// If it is not, Attach issues a call to the GCE cloud provider to attach it.
-// Callers are responsible for retrying on failure.
-// Callers are responsible for thread safety between concurrent attach and
-// detach operations.
-func (attacher *gcePersistentDiskAttacher) Attach(spec *volume.Spec, nodeName types.NodeName) (string, error) { - volumeSource, readOnly, err := getVolumeSource(spec) - if err != nil { - return "", err - } - - pdName := volumeSource.PDName - - attached, err := attacher.gceDisks.DiskIsAttached(pdName, nodeName) - if err != nil { - // Log error and continue with attach - klog.Errorf( - "Error checking if PD (%q) is already attached to current node (%q). Will continue and try attach anyway. err=%v", - pdName, nodeName, err) - } - - if err == nil && attached { - // Volume is already attached to node. - klog.Infof("Attach operation is successful. PD %q is already attached to node %q.", pdName, nodeName) - } else { - if err := attacher.gceDisks.AttachDisk(pdName, nodeName, readOnly, isRegionalPD(spec)); err != nil { - klog.Errorf("Error attaching PD %q to node %q: %+v", pdName, nodeName, err) - return "", err - } - } - - return filepath.Join(diskByIDPath, diskGooglePrefix+pdName), nil -} - -func (attacher *gcePersistentDiskAttacher) VolumesAreAttached(specs []*volume.Spec, nodeName types.NodeName) (map[*volume.Spec]bool, error) { - volumesAttachedCheck := make(map[*volume.Spec]bool) - volumePdNameMap := make(map[string]*volume.Spec) - pdNameList := []string{} - for _, spec := range specs { - volumeSource, _, err := getVolumeSource(spec) - // If error is occurred, skip this volume and move to the next one - if err != nil { - klog.Errorf("Error getting volume (%q) source : %v", spec.Name(), err) - continue - } - pdNameList = append(pdNameList, volumeSource.PDName) - volumesAttachedCheck[spec] = true - volumePdNameMap[volumeSource.PDName] = spec - } - attachedResult, err := attacher.gceDisks.DisksAreAttached(pdNameList, nodeName) - if err != nil { - // Log error and continue with attach - klog.Errorf( - "Error checking if PDs (%v) are already attached to current node (%q). 
err=%v", - pdNameList, nodeName, err) - return volumesAttachedCheck, err - } - - for pdName, attached := range attachedResult { - if !attached { - spec := volumePdNameMap[pdName] - volumesAttachedCheck[spec] = false - klog.V(2).Infof("VolumesAreAttached: check volume %q (specName: %q) is no longer attached", pdName, spec.Name()) - } - } - return volumesAttachedCheck, nil -} - -func (attacher *gcePersistentDiskAttacher) BulkVerifyVolumes(volumesByNode map[types.NodeName][]*volume.Spec) (map[types.NodeName]map[*volume.Spec]bool, error) { - volumesAttachedCheck := make(map[types.NodeName]map[*volume.Spec]bool) - diskNamesByNode := make(map[types.NodeName][]string) - volumeSpecToDiskName := make(map[*volume.Spec]string) - - for nodeName, volumeSpecs := range volumesByNode { - diskNames := []string{} - for _, spec := range volumeSpecs { - volumeSource, _, err := getVolumeSource(spec) - if err != nil { - klog.Errorf("Error getting volume (%q) source : %v", spec.Name(), err) - continue - } - diskNames = append(diskNames, volumeSource.PDName) - volumeSpecToDiskName[spec] = volumeSource.PDName - } - diskNamesByNode[nodeName] = diskNames - } - - attachedDisksByNode, err := attacher.gceDisks.BulkDisksAreAttached(diskNamesByNode) - if err != nil { - return nil, err - } - - for nodeName, volumeSpecs := range volumesByNode { - volumesAreAttachedToNode := make(map[*volume.Spec]bool) - for _, spec := range volumeSpecs { - diskName := volumeSpecToDiskName[spec] - volumesAreAttachedToNode[spec] = attachedDisksByNode[nodeName][diskName] - } - volumesAttachedCheck[nodeName] = volumesAreAttachedToNode - } - return volumesAttachedCheck, nil -} - -// search Windows disk number by LUN -func getDiskID(pdName string, exec utilexec.Interface) (string, error) { - // TODO: replace Get-GcePdName with native windows support of Get-Disk, see issue #74674 - cmd := `Get-GcePdName | select Name, DeviceId | ConvertTo-Json` - output, err := exec.Command("powershell", "/c", cmd).CombinedOutput() - if 
err != nil { - klog.Errorf("Get-GcePdName failed, error: %v, output: %q", err, string(output)) - err = errors.New(err.Error() + " " + string(output)) - return "", err - } - - var data []map[string]interface{} - if err = json.Unmarshal(output, &data); err != nil { - klog.Errorf("Get-Disk output is not a json array, output: %q", string(output)) - return "", err - } - - for _, pd := range data { - if jsonName, ok := pd["Name"]; ok { - if name, ok := jsonName.(string); ok { - if name == pdName { - klog.Infof("found the disk %q", name) - if diskNum, ok := pd["DeviceId"]; ok { - switch v := diskNum.(type) { - case int: - return strconv.Itoa(v), nil - case float64: - return strconv.Itoa(int(v)), nil - case string: - return v, nil - default: - // diskNum isn't one of the types above - klog.Warningf("Disk %q found, but disk number (%q) is not one of the recognized types", name, diskNum) - } - } - } - } - } - } - - return "", fmt.Errorf("could not find disk number for disk %q", pdName) -} - -func (attacher *gcePersistentDiskAttacher) WaitForAttach(spec *volume.Spec, devicePath string, _ *v1.Pod, timeout time.Duration) (string, error) { - ticker := time.NewTicker(checkSleepDuration) - defer ticker.Stop() - timer := time.NewTimer(timeout) - defer timer.Stop() - - volumeSource, _, err := getVolumeSource(spec) - if err != nil { - return "", err - } - - pdName := volumeSource.PDName - - if runtime.GOOS == "windows" { - exec := attacher.host.GetExec(gcePersistentDiskPluginName) - id, err := getDiskID(pdName, exec) - if err != nil { - klog.Errorf("WaitForAttach (windows) failed with error %s", err) - return "", err - } - return id, nil - } - - partition := "" - if volumeSource.Partition != 0 { - partition = strconv.Itoa(int(volumeSource.Partition)) - } - - sdBefore, err := filepath.Glob(diskSDPattern) - if err != nil { - klog.Errorf("Error filepath.Glob(\"%s\"): %v\r\n", diskSDPattern, err) - } - sdBeforeSet := sets.NewString(sdBefore...) 
- - devicePaths := getDiskByIDPaths(pdName, partition) - for { - select { - case <-ticker.C: - klog.V(5).Infof("Checking GCE PD %q is attached.", pdName) - path, err := verifyDevicePath(devicePaths, sdBeforeSet, pdName) - if err != nil { - // Log error, if any, and continue checking periodically. See issue #11321 - klog.Errorf("Error verifying GCE PD (%q) is attached: %v", pdName, err) - } else if path != "" { - // A device path has successfully been created for the PD - klog.Infof("Successfully found attached GCE PD %q.", pdName) - return path, nil - } else { - klog.V(4).Infof("could not verify GCE PD (%q) is attached, device path does not exist", pdName) - } - case <-timer.C: - return "", fmt.Errorf("could not find attached GCE PD %q. Timeout waiting for mount paths to be created", pdName) - } - } -} - -func (attacher *gcePersistentDiskAttacher) GetDeviceMountPath( - spec *volume.Spec) (string, error) { - volumeSource, _, err := getVolumeSource(spec) - if err != nil { - return "", err - } - - return makeGlobalPDName(attacher.host, volumeSource.PDName), nil -} - -func (attacher *gcePersistentDiskAttacher) MountDevice(spec *volume.Spec, devicePath string, deviceMountPath string, _ volume.DeviceMounterArgs) error { - // Only mount the PD globally once. 
- mounter := attacher.host.GetMounter(gcePersistentDiskPluginName) - notMnt, err := mounter.IsLikelyNotMountPoint(deviceMountPath) - if err != nil { - if os.IsNotExist(err) { - dir := deviceMountPath - if runtime.GOOS == "windows" { - // in windows, as we use mklink, only need to MkdirAll for parent directory - dir = filepath.Dir(deviceMountPath) - } - if err := os.MkdirAll(dir, 0750); err != nil { - return fmt.Errorf("MountDevice:CreateDirectory failed with %s", err) - } - notMnt = true - } else { - return err - } - } - - volumeSource, readOnly, err := getVolumeSource(spec) - if err != nil { - return err - } - - options := []string{} - if readOnly { - options = append(options, "ro") - } - if notMnt { - diskMounter := volumeutil.NewSafeFormatAndMountFromHost(gcePersistentDiskPluginName, attacher.host) - mountOptions := volumeutil.MountOptionFromSpec(spec, options...) - err = diskMounter.FormatAndMount(devicePath, deviceMountPath, volumeSource.FSType, mountOptions) - if err != nil { - os.Remove(deviceMountPath) - return err - } - klog.V(4).Infof("formatting spec %v devicePath %v deviceMountPath %v fs %v with options %+v", spec.Name(), devicePath, deviceMountPath, volumeSource.FSType, options) - } - return nil -} - -type gcePersistentDiskDetacher struct { - host volume.VolumeHost - gceDisks gce.Disks -} - -var _ volume.Detacher = &gcePersistentDiskDetacher{} - -var _ volume.DeviceUnmounter = &gcePersistentDiskDetacher{} - -func (plugin *gcePersistentDiskPlugin) NewDetacher() (volume.Detacher, error) { - gceCloud, err := getCloudProvider(plugin.host.GetCloudProvider()) - if err != nil { - return nil, err - } - - return &gcePersistentDiskDetacher{ - host: plugin.host, - gceDisks: gceCloud, - }, nil -} - -func (plugin *gcePersistentDiskPlugin) NewDeviceUnmounter() (volume.DeviceUnmounter, error) { - return plugin.NewDetacher() -} - -// Detach checks with the GCE cloud provider if the specified volume is already -// attached to the specified node. 
If the volume is not attached, it succeeds -// (returns nil). If it is attached, Detach issues a call to the GCE cloud -// provider to detach it. -// Callers are responsible for retrying on failure. -// Callers are responsible for thread safety between concurrent attach and detach -// operations. -func (detacher *gcePersistentDiskDetacher) Detach(volumeName string, nodeName types.NodeName) error { - pdName := path.Base(volumeName) - - attached, err := detacher.gceDisks.DiskIsAttached(pdName, nodeName) - if err != nil { - // Log error and continue with detach - klog.Errorf( - "Error checking if PD (%q) is already attached to current node (%q). Will continue and try detach anyway. err=%v", - pdName, nodeName, err) - } - - if err == nil && !attached { - // Volume is not attached to node. Success! - klog.Infof("Detach operation is successful. PD %q was not attached to node %q.", pdName, nodeName) - return nil - } - - if err = detacher.gceDisks.DetachDisk(pdName, nodeName); err != nil { - klog.Errorf("Error detaching PD %q from node %q: %v", pdName, nodeName, err) - return err - } - - return nil -} - -func (detacher *gcePersistentDiskDetacher) UnmountDevice(deviceMountPath string) error { - if runtime.GOOS == "windows" { - // Flush data cache for windows because it does not do so automatically during unmount device - exec := detacher.host.GetExec(gcePersistentDiskPluginName) - err := volumeutil.WriteVolumeCache(deviceMountPath, exec) - if err != nil { - return err - } - } - return mount.CleanupMountPoint(deviceMountPath, detacher.host.GetMounter(gcePersistentDiskPluginName), false) -} - -func (plugin *gcePersistentDiskPlugin) CanAttach(spec *volume.Spec) (bool, error) { - return true, nil -} - -func (plugin *gcePersistentDiskPlugin) CanDeviceMount(spec *volume.Spec) (bool, error) { - return true, nil -} diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/gcepd/doc.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/gcepd/doc.go deleted file mode 
100644 index 94a41bc3adf0..000000000000 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/gcepd/doc.go +++ /dev/null @@ -1,19 +0,0 @@ -/* -Copyright 2015 The Kubernetes Authors. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -// Package gcepd contains the internal representation of GCE PersistentDisk -// volumes. -package gcepd // import "k8s.io/kubernetes/pkg/volume/gcepd" diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/gcepd/gce_pd.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/gcepd/gce_pd.go deleted file mode 100644 index 7bbeade0ef08..000000000000 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/gcepd/gce_pd.go +++ /dev/null @@ -1,568 +0,0 @@ -//go:build !providerless -// +build !providerless - -/* -Copyright 2014 The Kubernetes Authors. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. 
-*/ - -package gcepd - -import ( - "context" - "fmt" - "os" - "path/filepath" - "runtime" - "strconv" - - "k8s.io/klog/v2" - "k8s.io/mount-utils" - utilstrings "k8s.io/utils/strings" - - v1 "k8s.io/api/core/v1" - "k8s.io/apimachinery/pkg/api/resource" - metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" - "k8s.io/apimachinery/pkg/types" - volumehelpers "k8s.io/cloud-provider/volume/helpers" - "k8s.io/kubernetes/pkg/volume" - "k8s.io/kubernetes/pkg/volume/util" - gcecloud "k8s.io/legacy-cloud-providers/gce" -) - -// ProbeVolumePlugins is the primary entrypoint for volume plugins. -func ProbeVolumePlugins() []volume.VolumePlugin { - return []volume.VolumePlugin{&gcePersistentDiskPlugin{nil}} -} - -type gcePersistentDiskPlugin struct { - host volume.VolumeHost -} - -var _ volume.VolumePlugin = &gcePersistentDiskPlugin{} -var _ volume.PersistentVolumePlugin = &gcePersistentDiskPlugin{} -var _ volume.DeletableVolumePlugin = &gcePersistentDiskPlugin{} -var _ volume.ProvisionableVolumePlugin = &gcePersistentDiskPlugin{} -var _ volume.ExpandableVolumePlugin = &gcePersistentDiskPlugin{} -var _ volume.VolumePluginWithAttachLimits = &gcePersistentDiskPlugin{} - -const ( - gcePersistentDiskPluginName = "kubernetes.io/gce-pd" -) - -// The constants are used to map from the machine type (number of CPUs) to the limit of -// persistent disks that can be attached to an instance. Please refer to gcloud doc -// https://cloud.google.com/compute/docs/machine-types -// These constants are all the documented attach limit minus one because the -// node boot disk is considered an attachable disk so effective attach limit is -// one less. 
-const ( - volumeLimitSmall = 15 - volumeLimitBig = 127 -) - -func getPath(uid types.UID, volName string, host volume.VolumeHost) string { - return host.GetPodVolumeDir(uid, utilstrings.EscapeQualifiedName(gcePersistentDiskPluginName), volName) -} - -func (plugin *gcePersistentDiskPlugin) Init(host volume.VolumeHost) error { - plugin.host = host - return nil -} - -func (plugin *gcePersistentDiskPlugin) GetPluginName() string { - return gcePersistentDiskPluginName -} - -func (plugin *gcePersistentDiskPlugin) GetVolumeName(spec *volume.Spec) (string, error) { - volumeSource, _, err := getVolumeSource(spec) - if err != nil { - return "", err - } - - return volumeSource.PDName, nil -} - -func (plugin *gcePersistentDiskPlugin) CanSupport(spec *volume.Spec) bool { - return (spec.PersistentVolume != nil && spec.PersistentVolume.Spec.GCEPersistentDisk != nil) || - (spec.Volume != nil && spec.Volume.GCEPersistentDisk != nil) -} - -func (plugin *gcePersistentDiskPlugin) RequiresRemount(spec *volume.Spec) bool { - return false -} - -func (plugin *gcePersistentDiskPlugin) SupportsMountOption() bool { - return true -} - -func (plugin *gcePersistentDiskPlugin) SupportsBulkVolumeVerification() bool { - return true -} - -func (plugin *gcePersistentDiskPlugin) SupportsSELinuxContextMount(spec *volume.Spec) (bool, error) { - return false, nil -} - -func (plugin *gcePersistentDiskPlugin) GetAccessModes() []v1.PersistentVolumeAccessMode { - return []v1.PersistentVolumeAccessMode{ - v1.ReadWriteOnce, - v1.ReadOnlyMany, - } -} - -func (plugin *gcePersistentDiskPlugin) GetVolumeLimits() (map[string]int64, error) { - volumeLimits := map[string]int64{ - util.GCEVolumeLimitKey: volumeLimitSmall, - } - cloud := plugin.host.GetCloudProvider() - - // if we can't fetch cloudprovider we return an error - // hoping external CCM or admin can set it. Returning - // default values from here will mean, no one can - // override them. 
- if cloud == nil { - return nil, fmt.Errorf("no cloudprovider present") - } - - if cloud.ProviderName() != gcecloud.ProviderName { - return nil, fmt.Errorf("expected gce cloud got %s", cloud.ProviderName()) - } - - instances, ok := cloud.Instances() - if !ok { - klog.Warning("Failed to get instances from cloud provider") - return volumeLimits, nil - } - - instanceType, err := instances.InstanceType(context.TODO(), plugin.host.GetNodeName()) - if err != nil { - klog.Errorf("Failed to get instance type from GCE cloud provider") - return volumeLimits, nil - } - smallMachineTypes := []string{"f1-micro", "g1-small", "e2-micro", "e2-small", "e2-medium"} - for _, small := range smallMachineTypes { - if instanceType == small { - volumeLimits[util.GCEVolumeLimitKey] = volumeLimitSmall - return volumeLimits, nil - } - } - volumeLimits[util.GCEVolumeLimitKey] = volumeLimitBig - return volumeLimits, nil -} - -func (plugin *gcePersistentDiskPlugin) VolumeLimitKey(spec *volume.Spec) string { - return util.GCEVolumeLimitKey -} - -func (plugin *gcePersistentDiskPlugin) NewMounter(spec *volume.Spec, pod *v1.Pod, _ volume.VolumeOptions) (volume.Mounter, error) { - // Inject real implementations here, test through the internal function. 
- return plugin.newMounterInternal(spec, pod.UID, &GCEDiskUtil{}, plugin.host.GetMounter(plugin.GetPluginName())) -} - -func getVolumeSource( - spec *volume.Spec) (*v1.GCEPersistentDiskVolumeSource, bool, error) { - if spec.Volume != nil && spec.Volume.GCEPersistentDisk != nil { - return spec.Volume.GCEPersistentDisk, spec.Volume.GCEPersistentDisk.ReadOnly, nil - } else if spec.PersistentVolume != nil && - spec.PersistentVolume.Spec.GCEPersistentDisk != nil { - return spec.PersistentVolume.Spec.GCEPersistentDisk, spec.ReadOnly, nil - } - - return nil, false, fmt.Errorf("spec does not reference a GCE volume type") -} - -func (plugin *gcePersistentDiskPlugin) newMounterInternal(spec *volume.Spec, podUID types.UID, manager pdManager, mounter mount.Interface) (volume.Mounter, error) { - // GCEPDs used directly in a pod have a ReadOnly flag set by the pod author. - // GCEPDs used as a PersistentVolume gets the ReadOnly flag indirectly through the persistent-claim volume used to mount the PV - volumeSource, readOnly, err := getVolumeSource(spec) - if err != nil { - return nil, err - } - - pdName := volumeSource.PDName - partition := "" - if volumeSource.Partition != 0 { - partition = strconv.Itoa(int(volumeSource.Partition)) - } - - return &gcePersistentDiskMounter{ - gcePersistentDisk: &gcePersistentDisk{ - podUID: podUID, - volName: spec.Name(), - pdName: pdName, - partition: partition, - mounter: mounter, - manager: manager, - plugin: plugin, - MetricsProvider: volume.NewMetricsStatFS(getPath(podUID, spec.Name(), plugin.host)), - }, - mountOptions: util.MountOptionFromSpec(spec), - readOnly: readOnly}, nil -} - -func (plugin *gcePersistentDiskPlugin) NewUnmounter(volName string, podUID types.UID) (volume.Unmounter, error) { - // Inject real implementations here, test through the internal function. 
- return plugin.newUnmounterInternal(volName, podUID, &GCEDiskUtil{}, plugin.host.GetMounter(plugin.GetPluginName())) -} - -func (plugin *gcePersistentDiskPlugin) newUnmounterInternal(volName string, podUID types.UID, manager pdManager, mounter mount.Interface) (volume.Unmounter, error) { - return &gcePersistentDiskUnmounter{&gcePersistentDisk{ - podUID: podUID, - volName: volName, - manager: manager, - mounter: mounter, - plugin: plugin, - MetricsProvider: volume.NewMetricsStatFS(getPath(podUID, volName, plugin.host)), - }}, nil -} - -func (plugin *gcePersistentDiskPlugin) NewDeleter(logger klog.Logger, spec *volume.Spec) (volume.Deleter, error) { - return plugin.newDeleterInternal(spec, &GCEDiskUtil{}) -} - -func (plugin *gcePersistentDiskPlugin) newDeleterInternal(spec *volume.Spec, manager pdManager) (volume.Deleter, error) { - if spec.PersistentVolume != nil && spec.PersistentVolume.Spec.GCEPersistentDisk == nil { - return nil, fmt.Errorf("spec.PersistentVolumeSource.GCEPersistentDisk is nil") - } - return &gcePersistentDiskDeleter{ - gcePersistentDisk: &gcePersistentDisk{ - volName: spec.Name(), - pdName: spec.PersistentVolume.Spec.GCEPersistentDisk.PDName, - manager: manager, - plugin: plugin, - }}, nil -} - -func (plugin *gcePersistentDiskPlugin) NewProvisioner(logger klog.Logger, options volume.VolumeOptions) (volume.Provisioner, error) { - return plugin.newProvisionerInternal(options, &GCEDiskUtil{}) -} - -func (plugin *gcePersistentDiskPlugin) newProvisionerInternal(options volume.VolumeOptions, manager pdManager) (volume.Provisioner, error) { - return &gcePersistentDiskProvisioner{ - gcePersistentDisk: &gcePersistentDisk{ - manager: manager, - plugin: plugin, - }, - options: options, - }, nil -} - -func (plugin *gcePersistentDiskPlugin) RequiresFSResize() bool { - return true -} - -func (plugin *gcePersistentDiskPlugin) ExpandVolumeDevice( - spec *volume.Spec, - newSize resource.Quantity, - oldSize resource.Quantity) (resource.Quantity, error) { - 
cloud, err := getCloudProvider(plugin.host.GetCloudProvider()) - - if err != nil { - return oldSize, err - } - pdName := spec.PersistentVolume.Spec.GCEPersistentDisk.PDName - updatedQuantity, err := cloud.ResizeDisk(pdName, oldSize, newSize) - - if err != nil { - return oldSize, err - } - return updatedQuantity, nil -} - -func (plugin *gcePersistentDiskPlugin) NodeExpand(resizeOptions volume.NodeResizeOptions) (bool, error) { - fsVolume, err := util.CheckVolumeModeFilesystem(resizeOptions.VolumeSpec) - if err != nil { - return false, fmt.Errorf("error checking VolumeMode: %v", err) - } - // if volume is not a fs file system, there is nothing for us to do here. - if !fsVolume { - return true, nil - } - _, err = util.GenericResizeFS(plugin.host, plugin.GetPluginName(), resizeOptions.DevicePath, resizeOptions.DeviceMountPath) - if err != nil { - return false, err - } - return true, nil -} - -var _ volume.NodeExpandableVolumePlugin = &gcePersistentDiskPlugin{} - -func (plugin *gcePersistentDiskPlugin) ConstructVolumeSpec(volumeName, mountPath string) (volume.ReconstructedVolume, error) { - mounter := plugin.host.GetMounter(plugin.GetPluginName()) - kvh, ok := plugin.host.(volume.KubeletVolumeHost) - if !ok { - return volume.ReconstructedVolume{}, fmt.Errorf("plugin volume host does not implement KubeletVolumeHost interface") - } - hu := kvh.GetHostUtil() - pluginMntDir := util.GetPluginMountDir(plugin.host, plugin.GetPluginName()) - sourceName, err := hu.GetDeviceNameFromMount(mounter, mountPath, pluginMntDir) - if err != nil { - return volume.ReconstructedVolume{}, err - } - gceVolume := &v1.Volume{ - Name: volumeName, - VolumeSource: v1.VolumeSource{ - GCEPersistentDisk: &v1.GCEPersistentDiskVolumeSource{ - PDName: sourceName, - }, - }, - } - return volume.ReconstructedVolume{ - Spec: volume.NewSpecFromVolume(gceVolume), - }, nil -} - -// Abstract interface to PD operations. 
-type pdManager interface { - // Creates a volume - CreateVolume(provisioner *gcePersistentDiskProvisioner, node *v1.Node, allowedTopologies []v1.TopologySelectorTerm) (volumeID string, volumeSizeGB int, labels map[string]string, fstype string, err error) - // Deletes a volume - DeleteVolume(deleter *gcePersistentDiskDeleter) error -} - -// gcePersistentDisk volumes are disk resources provided by Google Compute Engine -// that are attached to the kubelet's host machine and exposed to the pod. -type gcePersistentDisk struct { - volName string - podUID types.UID - // Unique identifier of the PD, used to find the disk resource in the provider. - pdName string - // Specifies the partition to mount - partition string - // Utility interface to provision and delete disks - manager pdManager - // Mounter interface that provides system calls to mount the global path to the pod local path. - mounter mount.Interface - plugin *gcePersistentDiskPlugin - volume.MetricsProvider -} - -type gcePersistentDiskMounter struct { - *gcePersistentDisk - // Specifies whether the disk will be mounted as read-only. - readOnly bool - mountOptions []string -} - -var _ volume.Mounter = &gcePersistentDiskMounter{} - -func (b *gcePersistentDiskMounter) GetAttributes() volume.Attributes { - return volume.Attributes{ - ReadOnly: b.readOnly, - Managed: !b.readOnly, - SELinuxRelabel: true, - } -} - -// SetUp bind mounts the disk global mount to the volume path. -func (b *gcePersistentDiskMounter) SetUp(mounterArgs volume.MounterArgs) error { - return b.SetUpAt(b.GetPath(), mounterArgs) -} - -// SetUpAt bind mounts the disk global mount to the given volume path. -func (b *gcePersistentDiskMounter) SetUpAt(dir string, mounterArgs volume.MounterArgs) error { - // TODO: handle failed mounts here. 
- notMnt, err := b.mounter.IsLikelyNotMountPoint(dir) - klog.V(4).Infof("GCE PersistentDisk set up: Dir (%s) PD name (%q) Mounted (%t) Error (%v), ReadOnly (%t)", dir, b.pdName, !notMnt, err, b.readOnly) - if err != nil && !os.IsNotExist(err) { - return fmt.Errorf("cannot validate mount point: %s %v", dir, err) - } - if !notMnt { - return nil - } - - if runtime.GOOS != "windows" { - // in windows, we will use mklink to mount, will MkdirAll in Mount func - if err := os.MkdirAll(dir, 0750); err != nil { - return fmt.Errorf("mkdir failed on disk %s (%v)", dir, err) - } - } - - // Perform a bind mount to the full path to allow duplicate mounts of the same PD. - options := []string{"bind"} - if b.readOnly { - options = append(options, "ro") - } - - globalPDPath := makeGlobalPDName(b.plugin.host, b.pdName) - klog.V(4).Infof("attempting to mount %s", dir) - - mountOptions := util.JoinMountOptions(b.mountOptions, options) - - err = b.mounter.MountSensitiveWithoutSystemd(globalPDPath, dir, "", mountOptions, nil) - if err != nil { - notMnt, mntErr := b.mounter.IsLikelyNotMountPoint(dir) - if mntErr != nil { - return fmt.Errorf("failed to mount: %v. Cleanup IsLikelyNotMountPoint check failed: %v", err, mntErr) - } - if !notMnt { - if mntErr = b.mounter.Unmount(dir); mntErr != nil { - return fmt.Errorf("failed to mount: %v. Cleanup failed to unmount: %v", err, mntErr) - } - notMnt, mntErr := b.mounter.IsLikelyNotMountPoint(dir) - if mntErr != nil { - return fmt.Errorf("failed to mount: %v. Cleanup IsLikelyNotMountPoint check failed: %v", err, mntErr) - } - if !notMnt { - // This is very odd, we don't expect it. We'll try again next sync loop. - return fmt.Errorf("%s is still mounted, despite call to unmount(). Will try again next sync loop", dir) - } - } - mntErr = os.Remove(dir) - if mntErr != nil { - return fmt.Errorf("failed to mount: %v. 
Cleanup os Remove(%s) failed: %v", err, dir, mntErr) - } - - return fmt.Errorf("mount of disk %s failed: %v", dir, err) - } - - klog.V(4).Infof("mount of disk %s succeeded", dir) - if !b.readOnly { - if err := volume.SetVolumeOwnership(b, mounterArgs.FsGroup, mounterArgs.FSGroupChangePolicy, util.FSGroupCompleteHook(b.plugin, nil)); err != nil { - klog.Errorf("SetVolumeOwnership returns error %v", err) - } - } - return nil -} - -func makeGlobalPDName(host volume.VolumeHost, devName string) string { - return filepath.Join(host.GetPluginDir(gcePersistentDiskPluginName), util.MountsInGlobalPDPath, devName) -} - -func (b *gcePersistentDiskMounter) GetPath() string { - return getPath(b.podUID, b.volName, b.plugin.host) -} - -type gcePersistentDiskUnmounter struct { - *gcePersistentDisk -} - -var _ volume.Unmounter = &gcePersistentDiskUnmounter{} - -func (c *gcePersistentDiskUnmounter) GetPath() string { - return getPath(c.podUID, c.volName, c.plugin.host) -} - -// Unmounts the bind mount, and detaches the disk only if the PD -// resource was the last reference to that disk on the kubelet. 
-func (c *gcePersistentDiskUnmounter) TearDown() error { - return c.TearDownAt(c.GetPath()) -} - -// TearDownAt unmounts the bind mount -func (c *gcePersistentDiskUnmounter) TearDownAt(dir string) error { - return mount.CleanupMountPoint(dir, c.mounter, false) -} - -type gcePersistentDiskDeleter struct { - *gcePersistentDisk -} - -var _ volume.Deleter = &gcePersistentDiskDeleter{} - -func (d *gcePersistentDiskDeleter) GetPath() string { - return getPath(d.podUID, d.volName, d.plugin.host) -} - -func (d *gcePersistentDiskDeleter) Delete() error { - return d.manager.DeleteVolume(d) -} - -type gcePersistentDiskProvisioner struct { - *gcePersistentDisk - options volume.VolumeOptions -} - -var _ volume.Provisioner = &gcePersistentDiskProvisioner{} - -func (c *gcePersistentDiskProvisioner) Provision(selectedNode *v1.Node, allowedTopologies []v1.TopologySelectorTerm) (*v1.PersistentVolume, error) { - if !util.ContainsAllAccessModes(c.plugin.GetAccessModes(), c.options.PVC.Spec.AccessModes) { - return nil, fmt.Errorf("invalid AccessModes %v: only AccessModes %v are supported", c.options.PVC.Spec.AccessModes, c.plugin.GetAccessModes()) - } - - volumeID, sizeGB, labels, fstype, err := c.manager.CreateVolume(c, selectedNode, allowedTopologies) - if err != nil { - return nil, err - } - - if fstype == "" { - fstype = "ext4" - } - - volumeMode := c.options.PVC.Spec.VolumeMode - if volumeMode != nil && *volumeMode == v1.PersistentVolumeBlock { - // Block volumes should not have any FSType - fstype = "" - } - - pv := &v1.PersistentVolume{ - ObjectMeta: metav1.ObjectMeta{ - Name: c.options.PVName, - Labels: map[string]string{}, - Annotations: map[string]string{ - util.VolumeDynamicallyCreatedByKey: "gce-pd-dynamic-provisioner", - }, - }, - Spec: v1.PersistentVolumeSpec{ - PersistentVolumeReclaimPolicy: c.options.PersistentVolumeReclaimPolicy, - AccessModes: c.options.PVC.Spec.AccessModes, - Capacity: v1.ResourceList{ - v1.ResourceName(v1.ResourceStorage): 
resource.MustParse(fmt.Sprintf("%dGi", sizeGB)), - }, - VolumeMode: volumeMode, - PersistentVolumeSource: v1.PersistentVolumeSource{ - GCEPersistentDisk: &v1.GCEPersistentDiskVolumeSource{ - PDName: volumeID, - Partition: 0, - ReadOnly: false, - FSType: fstype, - }, - }, - MountOptions: c.options.MountOptions, - }, - } - if len(c.options.PVC.Spec.AccessModes) == 0 { - pv.Spec.AccessModes = c.plugin.GetAccessModes() - } - - requirements := make([]v1.NodeSelectorRequirement, 0) - if len(labels) != 0 { - if pv.Labels == nil { - pv.Labels = make(map[string]string) - } - for k, v := range labels { - pv.Labels[k] = v - var values []string - if k == v1.LabelTopologyZone { - values, err = volumehelpers.LabelZonesToList(v) - if err != nil { - return nil, fmt.Errorf("failed to convert label string for Zone: %s to a List: %v", v, err) - } - } else { - values = []string{v} - } - requirements = append(requirements, v1.NodeSelectorRequirement{Key: k, Operator: v1.NodeSelectorOpIn, Values: values}) - } - } - - if len(requirements) > 0 { - pv.Spec.NodeAffinity = new(v1.VolumeNodeAffinity) - pv.Spec.NodeAffinity.Required = new(v1.NodeSelector) - pv.Spec.NodeAffinity.Required.NodeSelectorTerms = make([]v1.NodeSelectorTerm, 1) - pv.Spec.NodeAffinity.Required.NodeSelectorTerms[0].MatchExpressions = requirements - } - - return pv, nil -} diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/gcepd/gce_pd_block.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/gcepd/gce_pd_block.go deleted file mode 100644 index bde10beb16c2..000000000000 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/gcepd/gce_pd_block.go +++ /dev/null @@ -1,183 +0,0 @@ -//go:build !providerless -// +build !providerless - -/* -Copyright 2018 The Kubernetes Authors. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. 
-You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -package gcepd - -import ( - "fmt" - "path/filepath" - "strconv" - - "k8s.io/klog/v2" - "k8s.io/mount-utils" - utilstrings "k8s.io/utils/strings" - - v1 "k8s.io/api/core/v1" - metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" - "k8s.io/apimachinery/pkg/types" - "k8s.io/kubernetes/pkg/volume" - "k8s.io/kubernetes/pkg/volume/util/volumepathhandler" -) - -var _ volume.VolumePlugin = &gcePersistentDiskPlugin{} -var _ volume.PersistentVolumePlugin = &gcePersistentDiskPlugin{} -var _ volume.BlockVolumePlugin = &gcePersistentDiskPlugin{} -var _ volume.DeletableVolumePlugin = &gcePersistentDiskPlugin{} -var _ volume.ProvisionableVolumePlugin = &gcePersistentDiskPlugin{} -var _ volume.ExpandableVolumePlugin = &gcePersistentDiskPlugin{} - -func (plugin *gcePersistentDiskPlugin) ConstructBlockVolumeSpec(podUID types.UID, volumeName, mapPath string) (*volume.Spec, error) { - pluginDir := plugin.host.GetVolumeDevicePluginDir(gcePersistentDiskPluginName) - blkutil := volumepathhandler.NewBlockVolumePathHandler() - globalMapPathUUID, err := blkutil.FindGlobalMapPathUUIDFromPod(pluginDir, mapPath, podUID) - if err != nil { - return nil, err - } - klog.V(5).Infof("globalMapPathUUID: %v, err: %v", globalMapPathUUID, err) - - globalMapPath := filepath.Dir(globalMapPathUUID) - if len(globalMapPath) <= 1 { - return nil, fmt.Errorf("failed to get volume plugin information from globalMapPathUUID: %v", globalMapPathUUID) - } - - return getVolumeSpecFromGlobalMapPath(volumeName, globalMapPath) -} - -func getVolumeSpecFromGlobalMapPath(volumeName, globalMapPath string) (*volume.Spec, error) { - 
// Get volume spec information from globalMapPath - // globalMapPath example: - // plugins/kubernetes.io/{PluginName}/{DefaultKubeletVolumeDevicesDirName}/{volumeID} - // plugins/kubernetes.io/gce-pd/volumeDevices/vol-XXXXXX - pdName := filepath.Base(globalMapPath) - if len(pdName) <= 1 { - return nil, fmt.Errorf("failed to get pd name from global path=%s", globalMapPath) - } - block := v1.PersistentVolumeBlock - gceVolume := &v1.PersistentVolume{ - ObjectMeta: metav1.ObjectMeta{ - Name: volumeName, - }, - Spec: v1.PersistentVolumeSpec{ - PersistentVolumeSource: v1.PersistentVolumeSource{ - GCEPersistentDisk: &v1.GCEPersistentDiskVolumeSource{ - PDName: pdName, - }, - }, - VolumeMode: &block, - }, - } - - return volume.NewSpecFromPersistentVolume(gceVolume, true), nil -} - -// NewBlockVolumeMapper creates a new volume.BlockVolumeMapper from an API specification. -func (plugin *gcePersistentDiskPlugin) NewBlockVolumeMapper(spec *volume.Spec, pod *v1.Pod, _ volume.VolumeOptions) (volume.BlockVolumeMapper, error) { - // If this is called via GenerateUnmapDeviceFunc(), pod is nil. - // Pass empty string as dummy uid since uid isn't used in the case. 
- var uid types.UID - if pod != nil { - uid = pod.UID - } - - return plugin.newBlockVolumeMapperInternal(spec, uid, &GCEDiskUtil{}, plugin.host.GetMounter(plugin.GetPluginName())) -} - -func (plugin *gcePersistentDiskPlugin) newBlockVolumeMapperInternal(spec *volume.Spec, podUID types.UID, manager pdManager, mounter mount.Interface) (volume.BlockVolumeMapper, error) { - volumeSource, readOnly, err := getVolumeSource(spec) - if err != nil { - return nil, err - } - pdName := volumeSource.PDName - partition := "" - if volumeSource.Partition != 0 { - partition = strconv.Itoa(int(volumeSource.Partition)) - } - - mapper := &gcePersistentDiskMapper{ - gcePersistentDisk: &gcePersistentDisk{ - volName: spec.Name(), - podUID: podUID, - pdName: pdName, - partition: partition, - manager: manager, - mounter: mounter, - plugin: plugin, - }, - readOnly: readOnly, - } - - blockPath, err := mapper.GetGlobalMapPath(spec) - if err != nil { - return nil, fmt.Errorf("failed to get device path: %v", err) - } - mapper.MetricsProvider = volume.NewMetricsBlock(filepath.Join(blockPath, string(podUID))) - - return mapper, nil -} - -func (plugin *gcePersistentDiskPlugin) NewBlockVolumeUnmapper(volName string, podUID types.UID) (volume.BlockVolumeUnmapper, error) { - return plugin.newUnmapperInternal(volName, podUID, &GCEDiskUtil{}) -} - -func (plugin *gcePersistentDiskPlugin) newUnmapperInternal(volName string, podUID types.UID, manager pdManager) (volume.BlockVolumeUnmapper, error) { - return &gcePersistentDiskUnmapper{ - gcePersistentDisk: &gcePersistentDisk{ - volName: volName, - podUID: podUID, - pdName: volName, - manager: manager, - plugin: plugin, - }}, nil -} - -type gcePersistentDiskUnmapper struct { - *gcePersistentDisk -} - -var _ volume.BlockVolumeUnmapper = &gcePersistentDiskUnmapper{} - -type gcePersistentDiskMapper struct { - *gcePersistentDisk - readOnly bool -} - -var _ volume.BlockVolumeMapper = &gcePersistentDiskMapper{} - -// GetGlobalMapPath returns global map path and 
error -// path: plugins/kubernetes.io/{PluginName}/volumeDevices/pdName -func (pd *gcePersistentDisk) GetGlobalMapPath(spec *volume.Spec) (string, error) { - volumeSource, _, err := getVolumeSource(spec) - if err != nil { - return "", err - } - return filepath.Join(pd.plugin.host.GetVolumeDevicePluginDir(gcePersistentDiskPluginName), string(volumeSource.PDName)), nil -} - -// GetPodDeviceMapPath returns pod device map path and volume name -// path: pods/{podUid}/volumeDevices/kubernetes.io~aws -func (pd *gcePersistentDisk) GetPodDeviceMapPath() (string, string) { - name := gcePersistentDiskPluginName - return pd.plugin.host.GetPodVolumeDeviceDir(pd.podUID, utilstrings.EscapeQualifiedName(name)), pd.volName -} - -// SupportsMetrics returns true for gcePersistentDisk as it initializes the -// MetricsProvider. -func (pd *gcePersistentDisk) SupportsMetrics() bool { - return true -} diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/gcepd/gce_util.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/gcepd/gce_util.go deleted file mode 100644 index 03d6f7683025..000000000000 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/gcepd/gce_util.go +++ /dev/null @@ -1,368 +0,0 @@ -//go:build !providerless -// +build !providerless - -/* -Copyright 2014 The Kubernetes Authors. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. 
-*/ - -package gcepd - -import ( - "fmt" - "path/filepath" - "regexp" - "strings" - "time" - - "k8s.io/klog/v2" - "k8s.io/mount-utils" - "k8s.io/utils/exec" - utilpath "k8s.io/utils/path" - - v1 "k8s.io/api/core/v1" - "k8s.io/apimachinery/pkg/util/sets" - cloudprovider "k8s.io/cloud-provider" - cloudvolume "k8s.io/cloud-provider/volume" - volumehelpers "k8s.io/cloud-provider/volume/helpers" - "k8s.io/kubernetes/pkg/volume" - volumeutil "k8s.io/kubernetes/pkg/volume/util" - gcecloud "k8s.io/legacy-cloud-providers/gce" -) - -const ( - diskByIDPath = "/dev/disk/by-id/" - diskGooglePrefix = "google-" - diskScsiGooglePrefix = "scsi-0Google_PersistentDisk_" - diskPartitionSuffix = "-part" - diskSDPath = "/dev/sd" - diskSDPattern = "/dev/sd*" - maxRetries = 10 - checkSleepDuration = time.Second - maxRegionalPDZones = 2 - - // Replication type constants must be lower case. - replicationTypeNone = "none" - replicationTypeRegionalPD = "regional-pd" - - // scsi_id output should be in the form of: - // 0Google PersistentDisk - scsiPattern = `^0Google\s+PersistentDisk\s+([\S]+)\s*$` -) - -var ( - // errorSleepDuration is modified only in unit tests and should be constant - // otherwise. - errorSleepDuration = 5 * time.Second - - // regex to parse scsi_id output and extract the serial - scsiRegex = regexp.MustCompile(scsiPattern) -) - -// GCEDiskUtil provides operation for GCE PD -type GCEDiskUtil struct{} - -// DeleteVolume deletes a GCE PD -// Returns: error -func (util *GCEDiskUtil) DeleteVolume(d *gcePersistentDiskDeleter) error { - cloud, err := getCloudProvider(d.gcePersistentDisk.plugin.host.GetCloudProvider()) - if err != nil { - return err - } - - if err = cloud.DeleteDisk(d.pdName); err != nil { - klog.V(2).Infof("Error deleting GCE PD volume %s: %v", d.pdName, err) - // GCE cloud provider returns volume.deletedVolumeInUseError when - // necessary, no handling needed here. 
- return err - } - klog.V(2).Infof("Successfully deleted GCE PD volume %s", d.pdName) - return nil -} - -// CreateVolume creates a GCE PD. -// Returns: gcePDName, volumeSizeGB, labels, fsType, error -func (util *GCEDiskUtil) CreateVolume(c *gcePersistentDiskProvisioner, node *v1.Node, allowedTopologies []v1.TopologySelectorTerm) (string, int, map[string]string, string, error) { - cloud, err := getCloudProvider(c.gcePersistentDisk.plugin.host.GetCloudProvider()) - if err != nil { - return "", 0, nil, "", err - } - - name := volumeutil.GenerateVolumeName(c.options.ClusterName, c.options.PVName, 63) // GCE PD name can have up to 63 characters - capacity := c.options.PVC.Spec.Resources.Requests[v1.ResourceName(v1.ResourceStorage)] - // GCE PDs are allocated in chunks of GiBs - requestGB, err := volumehelpers.RoundUpToGiB(capacity) - if err != nil { - return "", 0, nil, "", err - } - - // Apply Parameters. - // Values for parameter "replication-type" are canonicalized to lower case. - // Values for other parameters are case-insensitive, and we leave validation of these values - // to the cloud provider. 
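The parameter handling described above (keys matched case-insensitively, `replication-type` values canonicalized to lower case, unknown keys rejected) can be sketched as follows. `parseParams` is a hypothetical stand-in for illustration, not the upstream `CreateVolume` signature:

```go
package main

import (
	"fmt"
	"strings"
)

// parseParams is a hypothetical reduction of the StorageClass parameter loop:
// keys are compared case-insensitively, and only "replication-type" values are
// canonicalized; other values are passed through for the cloud provider to validate.
func parseParams(params map[string]string) (diskType, replicationType string, err error) {
	replicationType = "none" // mirrors the replicationTypeNone default
	for k, v := range params {
		switch strings.ToLower(k) {
		case "type":
			diskType = v
		case "replication-type":
			replicationType = strings.ToLower(v)
		default:
			return "", "", fmt.Errorf("invalid option %q", k)
		}
	}
	return diskType, replicationType, nil
}

func main() {
	dt, rt, err := parseParams(map[string]string{"Type": "pd-ssd", "Replication-Type": "Regional-PD"})
	fmt.Println(dt, rt, err) // pd-ssd regional-pd <nil>
}
```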
- diskType := "" - configuredZone := "" - var configuredZones sets.String - zonePresent := false - zonesPresent := false - replicationType := replicationTypeNone - fstype := "" - for k, v := range c.options.Parameters { - switch strings.ToLower(k) { - case "type": - diskType = v - case "zone": - zonePresent = true - configuredZone = v - case "zones": - zonesPresent = true - configuredZones, err = volumehelpers.ZonesToSet(v) - if err != nil { - return "", 0, nil, "", err - } - case "replication-type": - replicationType = strings.ToLower(v) - case volume.VolumeParameterFSType: - fstype = v - default: - return "", 0, nil, "", fmt.Errorf("invalid option %q for volume plugin %s", k, c.plugin.GetPluginName()) - } - } - - // TODO: implement PVC.Selector parsing - if c.options.PVC.Spec.Selector != nil { - return "", 0, nil, "", fmt.Errorf("claim.Spec.Selector is not supported for dynamic provisioning on GCE") - } - - var activezones sets.String - activezones, err = cloud.GetAllCurrentZones() - if err != nil { - return "", 0, nil, "", err - } - - var disk *gcecloud.Disk - switch replicationType { - case replicationTypeRegionalPD: - selectedZones, err := volumehelpers.SelectZonesForVolume(zonePresent, zonesPresent, configuredZone, configuredZones, activezones, node, allowedTopologies, c.options.PVC.Name, maxRegionalPDZones) - if err != nil { - klog.V(2).Infof("Error selecting zones for regional GCE PD volume: %v", err) - return "", 0, nil, "", err - } - disk, err = cloud.CreateRegionalDisk( - name, - diskType, - selectedZones, - requestGB, - *c.options.CloudTags) - if err != nil { - klog.V(2).Infof("Error creating regional GCE PD volume: %v", err) - return "", 0, nil, "", err - } - klog.V(2).Infof("Successfully created Regional GCE PD volume %s", name) - - case replicationTypeNone: - selectedZone, err := volumehelpers.SelectZoneForVolume(zonePresent, zonesPresent, configuredZone, configuredZones, activezones, node, allowedTopologies, c.options.PVC.Name) - if err != nil { - 
return "", 0, nil, "", err - } - disk, err = cloud.CreateDisk( - name, - diskType, - selectedZone, - requestGB, - *c.options.CloudTags) - if err != nil { - klog.V(2).Infof("Error creating single-zone GCE PD volume: %v", err) - return "", 0, nil, "", err - } - klog.V(2).Infof("Successfully created single-zone GCE PD volume %s", name) - - default: - return "", 0, nil, "", fmt.Errorf("replication-type of '%s' is not supported", replicationType) - } - - labels, err := cloud.GetAutoLabelsForPD(disk) - if err != nil { - // We don't really want to leak the volume here... - klog.Errorf("error getting labels for volume %q: %v", name, err) - } - - return name, int(requestGB), labels, fstype, nil -} - -// Returns the first path that exists, or empty string if none exist. -func verifyDevicePath(devicePaths []string, sdBeforeSet sets.String, diskName string) (string, error) { - if err := udevadmChangeToNewDrives(sdBeforeSet); err != nil { - // It's possible udevadm was called on other disks so it should not block this - // call. If it did fail on this disk, then the devicePath will either - // not exist or be wrong. If it's wrong, then the scsi_id check below will fail. 
- klog.Errorf("udevadmChangeToNewDrives failed with: %v", err) - } - - for _, path := range devicePaths { - if pathExists, err := mount.PathExists(path); err != nil { - return "", fmt.Errorf("error checking if path exists: %v", err) - } else if pathExists { - // validate that the path actually resolves to the correct disk - serial, err := getScsiSerial(path, diskName) - if err != nil { - return "", fmt.Errorf("failed to get scsi serial %v", err) - } - if serial != diskName { - // The device link is not pointing to the correct device - // Trigger udev on this device to try to fix the link - if udevErr := udevadmChangeToDrive(path); udevErr != nil { - klog.Errorf("udevadmChangeToDrive %q failed with: %v", path, err) - } - - // Return error to retry WaitForAttach and verifyDevicePath - return "", fmt.Errorf("scsi_id serial %q for device %q doesn't match disk %q", serial, path, diskName) - } - // The device link is correct - return path, nil - } - } - - return "", nil -} - -// Calls scsi_id on the given devicePath to get the serial number reported by that device. 
-func getScsiSerial(devicePath, diskName string) (string, error) { - exists, err := utilpath.Exists(utilpath.CheckFollowSymlink, "/lib/udev/scsi_id") - if err != nil { - return "", fmt.Errorf("failed to check scsi_id existence: %v", err) - } - - if !exists { - klog.V(6).Infof("scsi_id doesn't exist; skipping check for %v", devicePath) - return diskName, nil - } - - out, err := exec.New().Command( - "/lib/udev/scsi_id", - "--page=0x83", - "--whitelisted", - fmt.Sprintf("--device=%v", devicePath)).CombinedOutput() - if err != nil { - return "", fmt.Errorf("scsi_id failed for device %q with %v", devicePath, err) - } - - return parseScsiSerial(string(out)) -} - -// Parse the output returned by scsi_id and extract the serial number -func parseScsiSerial(output string) (string, error) { - substrings := scsiRegex.FindStringSubmatch(output) - if substrings == nil { - return "", fmt.Errorf("scsi_id output cannot be parsed: %q", output) - } - - return substrings[1], nil -} - -// Returns list of all /dev/disk/by-id/* paths for given PD. -func getDiskByIDPaths(pdName string, partition string) []string { - devicePaths := []string{ - filepath.Join(diskByIDPath, diskGooglePrefix+pdName), - filepath.Join(diskByIDPath, diskScsiGooglePrefix+pdName), - } - - if partition != "" { - for i, path := range devicePaths { - devicePaths[i] = path + diskPartitionSuffix + partition - } - } - - return devicePaths -} - -// Return cloud provider -func getCloudProvider(cloudProvider cloudprovider.Interface) (*gcecloud.Cloud, error) { - var err error - for numRetries := 0; numRetries < maxRetries; numRetries++ { - gceCloudProvider, ok := cloudProvider.(*gcecloud.Cloud) - if !ok || gceCloudProvider == nil { - // Retry on error. See issue #11321 - klog.Errorf("Failed to get GCE Cloud Provider. 
plugin.host.GetCloudProvider returned %v instead", cloudProvider) - time.Sleep(errorSleepDuration) - continue - } - - return gceCloudProvider, nil - } - - return nil, fmt.Errorf("failed to get GCE GCECloudProvider with error %v", err) -} - -// Triggers the application of udev rules by calling "udevadm trigger -// --action=change" for newly created "/dev/sd*" drives (exist only in -// after set). This is workaround for Issue #7972. Once the underlying -// issue has been resolved, this may be removed. -func udevadmChangeToNewDrives(sdBeforeSet sets.String) error { - sdAfter, err := filepath.Glob(diskSDPattern) - if err != nil { - return fmt.Errorf("error filepath.Glob(\"%s\"): %v\r", diskSDPattern, err) - } - - for _, sd := range sdAfter { - if !sdBeforeSet.Has(sd) { - return udevadmChangeToDrive(sd) - } - } - - return nil -} - -// Calls "udevadm trigger --action=change" on the specified drive. -// drivePath must be the block device path to trigger on, in the format "/dev/sd*", or a symlink to it. -// This is workaround for Issue #7972. Once the underlying issue has been resolved, this may be removed. -func udevadmChangeToDrive(drivePath string) error { - klog.V(5).Infof("udevadmChangeToDrive: drive=%q", drivePath) - - // Evaluate symlink, if any - drive, err := filepath.EvalSymlinks(drivePath) - if err != nil { - return fmt.Errorf("udevadmChangeToDrive: filepath.EvalSymlinks(%q) failed with %v", drivePath, err) - } - klog.V(5).Infof("udevadmChangeToDrive: symlink path is %q", drive) - - // Check to make sure input is "/dev/sd*" - if !strings.Contains(drive, diskSDPath) { - return fmt.Errorf("udevadmChangeToDrive: expected input in the form \"%s\" but drive is %q", diskSDPattern, drive) - } - - // Call "udevadm trigger --action=change --property-match=DEVNAME=/dev/sd..." 
- _, err = exec.New().Command( - "udevadm", - "trigger", - "--action=change", - fmt.Sprintf("--property-match=DEVNAME=%s", drive)).CombinedOutput() - if err != nil { - return fmt.Errorf("udevadmChangeToDrive: udevadm trigger failed for drive %q with %v", drive, err) - } - return nil -} - -// Checks whether the given GCE PD volume spec is associated with a regional PD. -func isRegionalPD(spec *volume.Spec) bool { - if spec.PersistentVolume != nil { - zonesLabel := spec.PersistentVolume.Labels[v1.LabelTopologyZone] - if zonesLabel == "" { - zonesLabel = spec.PersistentVolume.Labels[v1.LabelFailureDomainBetaZone] - } - zones := strings.Split(zonesLabel, cloudvolume.LabelMultiZoneDelimiter) - return len(zones) > 1 - } - return false -} diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/git_repo/git_repo.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/git_repo/git_repo.go index fe890032e27d..995018d90072 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/git_repo/git_repo.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/git_repo/git_repo.go @@ -235,7 +235,7 @@ func (b *gitRepoVolumeMounter) SetUpAt(dir string, mounterArgs volume.MounterArg return fmt.Errorf("failed to exec 'git reset --hard': %s: %v", output, err) } - volume.SetVolumeOwnership(b, mounterArgs.FsGroup, nil /*fsGroupChangePolicy*/, volumeutil.FSGroupCompleteHook(b.plugin, nil)) + volume.SetVolumeOwnership(b, dir, mounterArgs.FsGroup, nil /*fsGroupChangePolicy*/, volumeutil.FSGroupCompleteHook(b.plugin, nil)) volumeutil.SetReady(b.getMetaDir()) return nil diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/iscsi/disk_manager.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/iscsi/disk_manager.go index 6d60e44efaf0..6aa8652bd6b0 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/iscsi/disk_manager.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/iscsi/disk_manager.go @@ -96,7 +96,7 @@ func 
diskSetUp(manager diskManager, b iscsiDiskMounter, volPath string, mounter } if !b.readOnly { - volume.SetVolumeOwnership(&b, fsGroup, fsGroupChangePolicy, util.FSGroupCompleteHook(b.plugin, nil)) + volume.SetVolumeOwnership(&b, volPath, fsGroup, fsGroupChangePolicy, util.FSGroupCompleteHook(b.plugin, nil)) } return nil diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/local/local.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/local/local.go index ca0bc3040021..0c8fe0753967 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/local/local.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/local/local.go @@ -615,7 +615,7 @@ func (m *localVolumeMounter) SetUpAt(dir string, mounterArgs volume.MounterArgs) if !m.readOnly { // Volume owner will be written only once on the first volume mount if len(refs) == 0 { - return volume.SetVolumeOwnership(m, mounterArgs.FsGroup, mounterArgs.FSGroupChangePolicy, util.FSGroupCompleteHook(m.plugin, nil)) + return volume.SetVolumeOwnership(m, dir, mounterArgs.FsGroup, mounterArgs.FSGroupChangePolicy, util.FSGroupCompleteHook(m.plugin, nil)) } } return nil diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/plugins.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/plugins.go index e56d410a5055..0b7b4e87e1cb 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/plugins.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/plugins.go @@ -40,7 +40,6 @@ import ( "k8s.io/client-go/tools/cache" "k8s.io/client-go/tools/record" cloudprovider "k8s.io/cloud-provider" - proxyutil "k8s.io/kubernetes/pkg/proxy/util" "k8s.io/kubernetes/pkg/volume/util/hostutil" "k8s.io/kubernetes/pkg/volume/util/recyclerclient" "k8s.io/kubernetes/pkg/volume/util/subpath" @@ -443,9 +442,6 @@ type VolumeHost interface { // Returns an interface that should be used to execute subpath operations GetSubpather() subpath.Interface - - // Returns options to pass for 
proxyutil filtered dialers. - GetFilteredDialOptions() *proxyutil.FilteredDialOptions } // VolumePluginMgr tracks registered plugins. @@ -694,13 +690,11 @@ func (pm *VolumePluginMgr) FindPluginBySpec(spec *Spec) (VolumePlugin, error) { return match, nil } -// FindPluginByName fetches a plugin by name or by legacy name. If no plugin -// is found, returns error. +// FindPluginByName fetches a plugin by name. If no plugin is found, returns error. func (pm *VolumePluginMgr) FindPluginByName(name string) (VolumePlugin, error) { pm.mutex.RLock() defer pm.mutex.RUnlock() - // Once we can get rid of legacy names we can reduce this to a map lookup. var match VolumePlugin if v, found := pm.plugins[name]; found { match = v @@ -1065,7 +1059,7 @@ func NewPersistentVolumeRecyclerPodTemplate() *v1.Pod { Name: "pv-recycler", Image: "registry.k8s.io/debian-base:v2.0.0", Command: []string{"/bin/sh"}, - Args: []string{"-c", "test -e /scrub && rm -rf /scrub/..?* /scrub/.[!.]* /scrub/* && test -z \"$(ls -A /scrub)\" || exit 1"}, + Args: []string{"-c", "test -e /scrub && find /scrub -mindepth 1 -delete && test -z \"$(ls -A /scrub)\" || exit 1"}, VolumeMounts: []v1.VolumeMount{ { Name: "vol", diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/portworx/portworx.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/portworx/portworx.go index e0eaf94495d3..6b9243f52341 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/portworx/portworx.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/portworx/portworx.go @@ -335,7 +335,7 @@ func (b *portworxVolumeMounter) SetUpAt(dir string, mounterArgs volume.MounterAr return err } if !b.readOnly { - volume.SetVolumeOwnership(b, mounterArgs.FsGroup, mounterArgs.FSGroupChangePolicy, util.FSGroupCompleteHook(b.plugin, nil)) + volume.SetVolumeOwnership(b, dir, mounterArgs.FsGroup, mounterArgs.FSGroupChangePolicy, util.FSGroupCompleteHook(b.plugin, nil)) } klog.Infof("Portworx Volume %s setup at %s", 
b.volumeID, dir) return nil diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/projected/projected.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/projected/projected.go index c82b38653e37..deb7728168a8 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/projected/projected.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/projected/projected.go @@ -233,7 +233,7 @@ func (s *projectedVolumeMounter) SetUpAt(dir string, mounterArgs volume.MounterA setPerms := func(_ string) error { // This may be the first time writing and new files get created outside the timestamp subdirectory: // change the permissions on the whole volume and not only in the timestamp directory. - return volume.SetVolumeOwnership(s, mounterArgs.FsGroup, nil /*fsGroupChangePolicy*/, volumeutil.FSGroupCompleteHook(s.plugin, nil)) + return volume.SetVolumeOwnership(s, dir, mounterArgs.FsGroup, nil /*fsGroupChangePolicy*/, volumeutil.FSGroupCompleteHook(s.plugin, nil)) } err = writer.Write(data, setPerms) if err != nil { diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/rbd/disk_manager.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/rbd/disk_manager.go index edff33540f4d..2131c7ecedd8 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/rbd/disk_manager.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/rbd/disk_manager.go @@ -96,7 +96,7 @@ func diskSetUp(manager diskManager, b rbdMounter, volPath string, mounter mount. 
klog.V(3).Infof("rbd: successfully bind mount %s to %s with options %v", globalPDPath, volPath, mountOptions) if !b.ReadOnly { - volume.SetVolumeOwnership(&b, fsGroup, fsGroupChangePolicy, util.FSGroupCompleteHook(b.plugin, nil)) + volume.SetVolumeOwnership(&b, volPath, fsGroup, fsGroupChangePolicy, util.FSGroupCompleteHook(b.plugin, nil)) } return nil diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/secret/secret.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/secret/secret.go index f43f1bffa3b9..f1d2c9c59ffd 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/secret/secret.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/secret/secret.go @@ -247,7 +247,7 @@ func (b *secretVolumeMounter) SetUpAt(dir string, mounterArgs volume.MounterArgs setPerms := func(_ string) error { // This may be the first time writing and new files get created outside the timestamp subdirectory: // change the permissions on the whole volume and not only in the timestamp directory. 
- return volume.SetVolumeOwnership(b, mounterArgs.FsGroup, nil /*fsGroupChangePolicy*/, volumeutil.FSGroupCompleteHook(b.plugin, nil)) + return volume.SetVolumeOwnership(b, dir, mounterArgs.FsGroup, nil /*fsGroupChangePolicy*/, volumeutil.FSGroupCompleteHook(b.plugin, nil)) } err = writer.Write(payload, setPerms) if err != nil { diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/util/atomic_writer.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/util/atomic_writer.go index 91ee77a9f690..7a1f0515e9ec 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/util/atomic_writer.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/util/atomic_writer.go @@ -19,7 +19,6 @@ package util import ( "bytes" "fmt" - "io/ioutil" "os" "path" "path/filepath" @@ -327,7 +326,7 @@ func shouldWriteFile(path string, content []byte) (bool, error) { return true, nil } - contentOnFs, err := ioutil.ReadFile(path) + contentOnFs, err := os.ReadFile(path) if err != nil { return false, err } @@ -379,7 +378,7 @@ func (w *AtomicWriter) pathsToRemove(payload map[string]FileProjection, oldTsDir // newTimestampDir creates a new timestamp directory func (w *AtomicWriter) newTimestampDir() (string, error) { - tsDir, err := ioutil.TempDir(w.targetDir, time.Now().UTC().Format("..2006_01_02_15_04_05.")) + tsDir, err := os.MkdirTemp(w.targetDir, time.Now().UTC().Format("..2006_01_02_15_04_05.")) if err != nil { klog.Errorf("%s: unable to create new temp directory: %v", w.logContext, err) return "", err @@ -411,11 +410,11 @@ func (w *AtomicWriter) writePayloadToDir(payload map[string]FileProjection, dir return err } - if err := ioutil.WriteFile(fullPath, content, mode); err != nil { + if err := os.WriteFile(fullPath, content, mode); err != nil { klog.Errorf("%s: unable to write file %s with mode %v: %v", w.logContext, fullPath, mode, err) return err } - // Chmod is needed because ioutil.WriteFile() ends up calling + // Chmod is needed because os.WriteFile() 
ends up calling // open(2) to create the file, so the final mode used is "mode & // ~umask". But we want to make sure the specified mode is used // in the file no matter what the umask is. diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/util/fsquota/common/quota_common_linux_impl.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/util/fsquota/common/quota_common_linux_impl.go index 7f24ca1cd7db..5e8e3850cfe5 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/util/fsquota/common/quota_common_linux_impl.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/util/fsquota/common/quota_common_linux_impl.go @@ -22,7 +22,6 @@ package common import ( "bufio" "fmt" - "io/ioutil" "os" "os/exec" "regexp" @@ -144,7 +143,7 @@ func doRunXFSQuotaCommand(mountpoint string, mountsFile, command string) (string // See https://bugzilla.redhat.com/show_bug.cgi?id=237120 for an example // of the problem that could be caused if this were to happen. func runXFSQuotaCommand(mountpoint string, command string) (string, error) { - tmpMounts, err := ioutil.TempFile("", "mounts") + tmpMounts, err := os.CreateTemp("", "mounts") if err != nil { return "", fmt.Errorf("cannot create temporary mount file: %v", err) } diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/util/fsquota/project.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/util/fsquota/project.go index 8ebc00687405..16d25d6c3f57 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/util/fsquota/project.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/util/fsquota/project.go @@ -22,7 +22,6 @@ package fsquota import ( "bufio" "fmt" - "io/ioutil" "os" "path/filepath" "regexp" @@ -267,7 +266,7 @@ func writeProjectFile(base *os.File, projects []projectType) (string, error) { return "", err } mode := stat.Mode() & os.ModePerm - f, err := ioutil.TempFile(filepath.Dir(oname), filepath.Base(oname)) + f, err := 
os.CreateTemp(filepath.Dir(oname), filepath.Base(oname)) if err != nil { return "", err } diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/util/fsquota/quota_linux.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/util/fsquota/quota_linux.go index 240cc356ee88..33b74ab3b5fa 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/util/fsquota/quota_linux.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/util/fsquota/quota_linux.go @@ -353,10 +353,11 @@ func AssignQuota(m mount.Interface, path string, poduid types.UID, bytes *resour } // When enforcing quotas are enabled, we'll condition this // on their being disabled also. - if ibytes > 0 { - ibytes = -1 + fsbytes := ibytes + if fsbytes > 0 { + fsbytes = -1 } - if err = setQuotaOnDir(path, id, ibytes); err == nil { + if err = setQuotaOnDir(path, id, fsbytes); err == nil { quotaPodMap[id] = internalPodUid quotaSizeMap[id] = ibytes podQuotaMap[internalPodUid] = id @@ -364,7 +365,7 @@ func AssignQuota(m mount.Interface, path string, poduid types.UID, bytes *resour dirPodMap[path] = internalPodUid podUidMap[internalPodUid] = externalPodUid podDirCountMap[internalPodUid]++ - klog.V(4).Infof("Assigning quota ID %d (%d) to %s", id, ibytes, path) + klog.V(4).Infof("Assigning quota ID %d (request limit %d, actual limit %d) to %s", id, ibytes, fsbytes, path) return nil } removeProjectID(path, id) diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/util/io_util.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/util/io_util.go index 8d65a6e48d24..d16adffa5b58 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/util/io_util.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/util/io_util.go @@ -38,7 +38,7 @@ func NewIOHandler() IoUtil { } func (handler *osIOHandler) ReadFile(filename string) ([]byte, error) { - return ioutil.ReadFile(filename) + return os.ReadFile(filename) } func (handler *osIOHandler) ReadDir(dirname 
string) ([]os.FileInfo, error) { return ioutil.ReadDir(dirname) diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/util/operationexecutor/node_expander.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/util/operationexecutor/node_expander.go index a1beedec65e1..cf9c57504e85 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/util/operationexecutor/node_expander.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/util/operationexecutor/node_expander.go @@ -37,7 +37,7 @@ type NodeExpander struct { // computed via precheck pvcStatusCap resource.Quantity pvCap resource.Quantity - resizeStatus *v1.PersistentVolumeClaimResizeStatus + resizeStatus v1.ClaimResourceStatus // pvcAlreadyUpdated if true indicates that although we are calling NodeExpandVolume on the kubelet // PVC has already been updated - possibly because expansion already succeeded on different node. @@ -68,29 +68,37 @@ type testResponseData struct { } // runPreCheck performs some sanity checks before expansion can be performed on the PVC. +// This function returns true only if node expansion is allowed to proceed otherwise +// it returns false. func (ne *NodeExpander) runPreCheck() bool { ne.pvcStatusCap = ne.pvc.Status.Capacity[v1.ResourceStorage] ne.pvCap = ne.pv.Spec.Capacity[v1.ResourceStorage] - ne.resizeStatus = ne.pvc.Status.ResizeStatus + allocatedResourceStatus := ne.pvc.Status.AllocatedResourceStatuses + if currentStatus, ok := allocatedResourceStatus[v1.ResourceStorage]; ok { + ne.resizeStatus = currentStatus + } // PVC is already expanded but we are still trying to expand the volume because // last recorded size in ASOW is older. This can happen for RWX volume types. 
- if ne.pvcStatusCap.Cmp(ne.pluginResizeOpts.NewSize) >= 0 && (ne.resizeStatus == nil || *ne.resizeStatus == v1.PersistentVolumeClaimNoExpansionInProgress) { + if ne.pvcStatusCap.Cmp(ne.pluginResizeOpts.NewSize) >= 0 && ne.resizeStatus == "" { ne.pvcAlreadyUpdated = true + return true + } + + // recovery features will only work for newer version of resize controller + if ne.resizeStatus == "" { + return false } + resizeStatusVal := ne.resizeStatus + // if resizestatus is nil or NodeExpansionInProgress or NodeExpansionPending then we - // should allow volume expansion on the node to proceed. We are making an exception for - // resizeStatus being nil because it will support use cases where - // resizeStatus may not be set (old control-plane expansion controller etc). - if ne.resizeStatus == nil || - ne.pvcAlreadyUpdated || - *ne.resizeStatus == v1.PersistentVolumeClaimNodeExpansionPending || - *ne.resizeStatus == v1.PersistentVolumeClaimNodeExpansionInProgress { + // should allow volume expansion on the node to proceed. 
+ if resizeStatusVal == v1.PersistentVolumeClaimNodeResizePending || + resizeStatusVal == v1.PersistentVolumeClaimNodeResizeInProgress { return true } - return false } diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/util/operationexecutor/operation_executor.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/util/operationexecutor/operation_executor.go index b11183550b4a..f4d257b005ac 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/util/operationexecutor/operation_executor.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/util/operationexecutor/operation_executor.go @@ -28,7 +28,6 @@ import ( "github.com/go-logr/logr" "k8s.io/klog/v2" - "k8s.io/mount-utils" v1 "k8s.io/api/core/v1" "k8s.io/apimachinery/pkg/api/resource" @@ -38,7 +37,6 @@ import ( "k8s.io/kubernetes/pkg/volume/util/hostutil" "k8s.io/kubernetes/pkg/volume/util/nestedpendingoperations" volumetypes "k8s.io/kubernetes/pkg/volume/util/types" - "k8s.io/kubernetes/pkg/volume/util/volumepathhandler" ) // OperationExecutor defines a set of operations for attaching, detaching, @@ -151,8 +149,6 @@ type OperationExecutor interface { ExpandInUseVolume(volumeToMount VolumeToMount, actualStateOfWorld ActualStateOfWorldMounterUpdater, currentSize resource.Quantity) error // ReconstructVolumeOperation construct a new volumeSpec and returns it created by plugin ReconstructVolumeOperation(volumeMode v1.PersistentVolumeMode, plugin volume.VolumePlugin, mapperPlugin volume.BlockVolumePlugin, uid types.UID, podName volumetypes.UniquePodName, volumeSpecName string, volumePath string, pluginName string) (volume.ReconstructedVolume, error) - // CheckVolumeExistenceOperation checks volume existence - CheckVolumeExistenceOperation(volumeSpec *volume.Spec, mountPath, volumeName string, mounter mount.Interface, uniqueVolumeName v1.UniqueVolumeName, podName volumetypes.UniquePodName, podUID types.UID, attachable volume.AttachableVolumePlugin) (bool, error) } // 
NewOperationExecutor returns a new instance of OperationExecutor. @@ -444,9 +440,9 @@ type VolumeToMount struct { // time at which volume was requested to be mounted MountRequestTime time.Time - // PersistentVolumeSize stores desired size of the volume. + // DesiredPersistentVolumeSize stores desired size of the volume. // usually this is the size if pv.Spec.Capacity - PersistentVolumeSize resource.Quantity + DesiredPersistentVolumeSize resource.Quantity // SELinux label that should be used to mount. SELinuxLabel string @@ -1092,61 +1088,3 @@ func (oe *operationExecutor) ReconstructVolumeOperation( Spec: volumeSpec, }, nil } - -// CheckVolumeExistenceOperation checks mount path directory if volume still exists -func (oe *operationExecutor) CheckVolumeExistenceOperation( - volumeSpec *volume.Spec, - mountPath, volumeName string, - mounter mount.Interface, - uniqueVolumeName v1.UniqueVolumeName, - podName volumetypes.UniquePodName, - podUID types.UID, - attachable volume.AttachableVolumePlugin) (bool, error) { - fsVolume, err := util.CheckVolumeModeFilesystem(volumeSpec) - if err != nil { - return false, err - } - - // Filesystem Volume case - // For attachable volume case, check mount path directory if volume is still existing and mounted. - // Return true if volume is mounted. - if fsVolume { - if attachable != nil { - var isNotMount bool - var mountCheckErr error - if mounter == nil { - return false, fmt.Errorf("mounter was not set for a filesystem volume") - } - if isNotMount, mountCheckErr = mount.IsNotMountPoint(mounter, mountPath); mountCheckErr != nil { - return false, fmt.Errorf("could not check whether the volume %q (spec.Name: %q) pod %q (UID: %q) is mounted with: %v", - uniqueVolumeName, - volumeName, - podName, - podUID, - mountCheckErr) - } - return !isNotMount, nil - } - // For non-attachable volume case, skip check and return true without mount point check - // since plugins may not have volume mount point. 
- return true, nil - } - - // Block Volume case - // Check mount path directory if volume still exists, then return true if volume - // is there. Either plugin is attachable or non-attachable, the plugin should - // have symbolic link associated to raw block device under pod device map - // if volume exists. - blkutil := volumepathhandler.NewBlockVolumePathHandler() - var islinkExist bool - var checkErr error - if islinkExist, checkErr = blkutil.IsSymlinkExist(mountPath); checkErr != nil { - return false, fmt.Errorf("could not check whether the block volume %q (spec.Name: %q) pod %q (UID: %q) is mapped to: %v", - uniqueVolumeName, - volumeName, - podName, - podUID, - checkErr) - } - return islinkExist, nil -} diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/util/operationexecutor/operation_generator.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/util/operationexecutor/operation_generator.go index af6633ed5903..bd59e865cbe0 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/util/operationexecutor/operation_generator.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/util/operationexecutor/operation_generator.go @@ -1781,12 +1781,14 @@ func (og *operationGenerator) expandAndRecoverFunction(resizeOpts inTreeResizeOp resizeCalled: false, } - // by default we are expanding to full-fill size requested in pvc.Spec.Resources + // by default we are expanding to fulfill size requested in pvc.Spec.Resources newSize := pvcSpecSize - resizeStatus := v1.PersistentVolumeClaimNoExpansionInProgress - if pvc.Status.ResizeStatus != nil { - resizeStatus = *pvc.Status.ResizeStatus + + var resizeStatus v1.ClaimResourceStatus + if status, ok := pvc.Status.AllocatedResourceStatuses[v1.ResourceStorage]; ok { + resizeStatus = status } + var allocatedSize *resource.Quantity t, ok := pvc.Status.AllocatedResources[v1.ResourceStorage] if ok { @@ -1798,10 +1800,10 @@ func (og *operationGenerator) expandAndRecoverFunction(resizeOpts 
inTreeResizeOp // pv is not of requested size yet and hence will require expanding switch resizeStatus { - case v1.PersistentVolumeClaimControllerExpansionInProgress: - case v1.PersistentVolumeClaimNodeExpansionPending: - case v1.PersistentVolumeClaimNodeExpansionInProgress: - case v1.PersistentVolumeClaimNodeExpansionFailed: + case v1.PersistentVolumeClaimControllerResizeInProgress, + v1.PersistentVolumeClaimNodeResizePending, + v1.PersistentVolumeClaimNodeResizeInProgress, + v1.PersistentVolumeClaimNodeResizeFailed: if allocatedSize != nil { newSize = *allocatedSize } @@ -1820,30 +1822,29 @@ func (og *operationGenerator) expandAndRecoverFunction(resizeOpts inTreeResizeOp // safe to do so. // 4. While expansion was still pending on the node, user reduced the pvc size. switch resizeStatus { - case v1.PersistentVolumeClaimNodeExpansionInProgress: - case v1.PersistentVolumeClaimNodeExpansionPending: + case v1.PersistentVolumeClaimNodeResizeInProgress, + v1.PersistentVolumeClaimNodeResizePending: // we don't need to do any work. We could be here because of a spurious update event. // This is case #1 return resizeResponse - case v1.PersistentVolumeClaimNodeExpansionFailed: + case v1.PersistentVolumeClaimNodeResizeFailed: // This is case#3 pvc, err = og.markForPendingNodeExpansion(pvc, pv) resizeResponse.pvc = pvc resizeResponse.err = err return resizeResponse - case v1.PersistentVolumeClaimControllerExpansionInProgress: - case v1.PersistentVolumeClaimControllerExpansionFailed: - case v1.PersistentVolumeClaimNoExpansionInProgress: + case v1.PersistentVolumeClaimControllerResizeInProgress, + v1.PersistentVolumeClaimControllerResizeFailed: // This is case#2 or it could also be case#4 when user manually shrunk the PVC // after expanding it. 
if allocatedSize != nil { newSize = *allocatedSize } default: - // It is impossible for ResizeStatus to be nil and allocatedSize to be not nil but somehow + // It is impossible for ResizeStatus to be "" and allocatedSize to be not nil but somehow // if we do end up in this state, it is safest to resume expansion to last recorded size in // allocatedSize variable. - if pvc.Status.ResizeStatus == nil && allocatedSize != nil { + if resizeStatus == "" && allocatedSize != nil { newSize = *allocatedSize } else { newSize = pvcSpecSize @@ -1938,7 +1939,7 @@ func (og *operationGenerator) GenerateExpandInUseVolumeFunc( var eventErr, detailedErr error migrated := false - if currentSize.IsZero() || volumeToMount.PersistentVolumeSize.IsZero() { + if currentSize.IsZero() || volumeToMount.DesiredPersistentVolumeSize.IsZero() { err := fmt.Errorf("current or new size of the volume is not set") eventErr, detailedErr = volumeToMount.GenerateError("NodeExpandvolume.expansion failed", err) return volumetypes.NewOperationContext(eventErr, detailedErr, migrated) @@ -1948,7 +1949,7 @@ func (og *operationGenerator) GenerateExpandInUseVolumeFunc( VolumeSpec: volumeToMount.VolumeSpec, DevicePath: volumeToMount.DevicePath, OldSize: currentSize, - NewSize: volumeToMount.PersistentVolumeSize, + NewSize: volumeToMount.DesiredPersistentVolumeSize, } fsVolume, err := util.CheckVolumeModeFilesystem(volumeToMount.VolumeSpec) if err != nil { @@ -2168,7 +2169,7 @@ func (og *operationGenerator) nodeExpandVolume( } func (og *operationGenerator) checkForRecoveryFromExpansion(pvc *v1.PersistentVolumeClaim, volumeToMount VolumeToMount) bool { - resizeStatus := pvc.Status.ResizeStatus + resizeStatus := pvc.Status.AllocatedResourceStatuses[v1.ResourceStorage] allocatedResource := pvc.Status.AllocatedResources featureGateStatus := utilfeature.DefaultFeatureGate.Enabled(features.RecoverVolumeExpansionFailure) @@ -2179,7 +2180,7 @@ func (og *operationGenerator) checkForRecoveryFromExpansion(pvc *v1.PersistentVo 
// Even though RecoverVolumeExpansionFailure feature gate is enabled, it appears that we are running with older version // of resize controller, which will not populate allocatedResource and resizeStatus. This can happen because of version skew // and hence we are going to keep expanding using older logic. - if resizeStatus == nil && allocatedResource == nil { + if resizeStatus == "" && allocatedResource == nil { _, detailedMsg := volumeToMount.GenerateMsg("MountVolume.NodeExpandVolume running with", "older external resize controller") klog.Warningf(detailedMsg) return false diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/util/resize_util.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/util/resize_util.go index d6d028b0d928..0f1495f7b346 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/util/resize_util.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/util/resize_util.go @@ -152,12 +152,11 @@ func MarkControllerReisizeInProgress(pvc *v1.PersistentVolumeClaim, resizerName Status: v1.ConditionTrue, LastTransitionTime: metav1.Now(), } - controllerExpansionInProgress := v1.PersistentVolumeClaimControllerExpansionInProgress conditions := []v1.PersistentVolumeClaimCondition{progressCondition} newPVC := pvc.DeepCopy() newPVC = MergeResizeConditionOnPVC(newPVC, conditions) - newPVC.Status.ResizeStatus = &controllerExpansionInProgress - newPVC.Status.AllocatedResources = v1.ResourceList{v1.ResourceStorage: newSize} + newPVC = mergeStorageResourceStatus(newPVC, v1.PersistentVolumeClaimControllerResizeInProgress) + newPVC = mergeStorageAllocatedResources(newPVC, newSize) newPVC = setResizer(newPVC, resizerName) return PatchPVCStatus(pvc /*oldPVC*/, newPVC, kubeClient) } @@ -192,10 +191,11 @@ func MarkForFSResize( } conditions := []v1.PersistentVolumeClaimCondition{pvcCondition} newPVC := pvc.DeepCopy() + if utilfeature.DefaultFeatureGate.Enabled(features.RecoverVolumeExpansionFailure) { - expansionPendingOnNode 
:= v1.PersistentVolumeClaimNodeExpansionPending - newPVC.Status.ResizeStatus = &expansionPendingOnNode + newPVC = mergeStorageResourceStatus(newPVC, v1.PersistentVolumeClaimNodeResizePending) } + newPVC = MergeResizeConditionOnPVC(newPVC, conditions) updatedPVC, err := PatchPVCStatus(pvc /*oldPVC*/, newPVC, kubeClient) return updatedPVC, err @@ -220,8 +220,13 @@ func MarkFSResizeFinished( // if RecoverVolumeExpansionFailure is enabled, we need to reset ResizeStatus back to nil if utilfeature.DefaultFeatureGate.Enabled(features.RecoverVolumeExpansionFailure) { - expansionFinished := v1.PersistentVolumeClaimNoExpansionInProgress - newPVC.Status.ResizeStatus = &expansionFinished + allocatedResourceStatusMap := newPVC.Status.AllocatedResourceStatuses + delete(allocatedResourceStatusMap, v1.ResourceStorage) + if len(allocatedResourceStatusMap) == 0 { + newPVC.Status.AllocatedResourceStatuses = nil + } else { + newPVC.Status.AllocatedResourceStatuses = allocatedResourceStatusMap + } } newPVC = MergeResizeConditionOnPVC(newPVC, []v1.PersistentVolumeClaimCondition{}) @@ -232,9 +237,9 @@ func MarkFSResizeFinished( // MarkNodeExpansionFailed marks a PVC for node expansion as failed. Kubelet should not retry expansion // of volumes which are in failed state. 
func MarkNodeExpansionFailed(pvc *v1.PersistentVolumeClaim, kubeClient clientset.Interface) (*v1.PersistentVolumeClaim, error) { - expansionFailedOnNode := v1.PersistentVolumeClaimNodeExpansionFailed newPVC := pvc.DeepCopy() - newPVC.Status.ResizeStatus = &expansionFailedOnNode + newPVC = mergeStorageResourceStatus(newPVC, v1.PersistentVolumeClaimNodeResizeFailed) + patchBytes, err := createPVCPatch(pvc, newPVC, false /* addResourceVersionCheck */) if err != nil { return pvc, fmt.Errorf("patchPVCStatus failed to patch PVC %q: %v", pvc.Name, err) @@ -250,9 +255,8 @@ func MarkNodeExpansionFailed(pvc *v1.PersistentVolumeClaim, kubeClient clientset // MarkNodeExpansionInProgress marks pvc expansion in progress on node func MarkNodeExpansionInProgress(pvc *v1.PersistentVolumeClaim, kubeClient clientset.Interface) (*v1.PersistentVolumeClaim, error) { - nodeExpansionInProgress := v1.PersistentVolumeClaimNodeExpansionInProgress newPVC := pvc.DeepCopy() - newPVC.Status.ResizeStatus = &nodeExpansionInProgress + newPVC = mergeStorageResourceStatus(newPVC, v1.PersistentVolumeClaimNodeResizeInProgress) updatedPVC, err := PatchPVCStatus(pvc /* oldPVC */, newPVC, kubeClient) return updatedPVC, err } @@ -365,6 +369,32 @@ func MergeResizeConditionOnPVC( return pvc } +func mergeStorageResourceStatus(pvc *v1.PersistentVolumeClaim, status v1.ClaimResourceStatus) *v1.PersistentVolumeClaim { + allocatedResourceStatusMap := pvc.Status.AllocatedResourceStatuses + if allocatedResourceStatusMap == nil { + pvc.Status.AllocatedResourceStatuses = map[v1.ResourceName]v1.ClaimResourceStatus{ + v1.ResourceStorage: status, + } + return pvc + } + allocatedResourceStatusMap[v1.ResourceStorage] = status + pvc.Status.AllocatedResourceStatuses = allocatedResourceStatusMap + return pvc +} + +func mergeStorageAllocatedResources(pvc *v1.PersistentVolumeClaim, size resource.Quantity) *v1.PersistentVolumeClaim { + allocatedResourcesMap := pvc.Status.AllocatedResources + if allocatedResourcesMap == nil { + 
pvc.Status.AllocatedResources = map[v1.ResourceName]resource.Quantity{ + v1.ResourceStorage: size, + } + return pvc + } + allocatedResourcesMap[v1.ResourceStorage] = size + pvc.Status.AllocatedResources = allocatedResourcesMap + return pvc +} + // GenericResizeFS : call generic filesystem resizer for plugins that don't have any special filesystem resize requirements func GenericResizeFS(host volume.VolumeHost, pluginName, devicePath, deviceMountPath string) (bool, error) { resizer := mount.NewResizeFs(host.GetExec(pluginName)) diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/util/storageclass.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/util/storageclass.go index d2098c25800d..223eb9dc21c1 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/util/storageclass.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/util/storageclass.go @@ -64,7 +64,7 @@ func GetDefaultClass(lister storagev1listers.StorageClassLister) (*storagev1.Sto return defaultClasses[i].CreationTimestamp.UnixNano() > defaultClasses[j].CreationTimestamp.UnixNano() }) if len(defaultClasses) > 1 { - klog.V(4).Infof("%d default StorageClasses were found, choosing the newest: %s", len(defaultClasses), defaultClasses[0].Name) + klog.V(4).Infof("%d default StorageClasses were found, choosing: %s", len(defaultClasses), defaultClasses[0].Name) } return defaultClasses[0], nil diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/util/util.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/util/util.go index bc33f5f2d424..05415215b16e 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/util/util.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/util/util.go @@ -19,7 +19,6 @@ package util import ( "context" "fmt" - "io/ioutil" "os" "path/filepath" "reflect" @@ -173,7 +172,7 @@ func LoadPodFromFile(filePath string) (*v1.Pod, error) { if filePath == "" { return nil, fmt.Errorf("file path not 
specified") } - podDef, err := ioutil.ReadFile(filePath) + podDef, err := os.ReadFile(filePath) if err != nil { return nil, fmt.Errorf("failed to read file path %s: %+v", filePath, err) } @@ -688,9 +687,9 @@ func HasMountRefs(mountPath string, mountRefs []string) bool { // Neither of the above should be counted as a mount ref as those are handled // by the kubelet. What we're concerned about is a path like // /data/local/some/manual/mount - // As unmonting could interrupt usage from that mountpoint. + // As unmounting could interrupt usage from that mountpoint. // - // So instead of looking for the entire /var/lib/... path, the plugins/kuberentes.io/ + // So instead of looking for the entire /var/lib/... path, the plugins/kubernetes.io/ // suffix is trimmed off and searched for. // // If there isn't a /plugins/... path, the whole mountPath is used instead. @@ -706,7 +705,7 @@ func HasMountRefs(mountPath string, mountRefs []string) bool { return false } -// WriteVolumeCache flush disk data given the spcified mount path +// WriteVolumeCache flush disk data given the specified mount path func WriteVolumeCache(deviceMountPath string, exec utilexec.Interface) error { // If runtime os is windows, execute Write-VolumeCache powershell command on the disk if runtime.GOOS == "windows" { diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/util/volumepathhandler/volume_path_handler.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/util/volumepathhandler/volume_path_handler.go index 1de7c52a55d6..e632843d1e13 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/util/volumepathhandler/volume_path_handler.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/util/volumepathhandler/volume_path_handler.go @@ -18,7 +18,6 @@ package volumepathhandler import ( "fmt" - "io/ioutil" "os" "path/filepath" @@ -279,12 +278,12 @@ func (v VolumePathHandler) IsDeviceBindMountExist(mapPath string) (bool, error) // GetDeviceBindMountRefs searches 
bind mounts under global map path func (v VolumePathHandler) GetDeviceBindMountRefs(devPath string, mapPath string) ([]string, error) { var refs []string - files, err := ioutil.ReadDir(mapPath) + files, err := os.ReadDir(mapPath) if err != nil { return nil, err } for _, file := range files { - if file.Mode()&os.ModeDevice != os.ModeDevice { + if file.Type()&os.ModeDevice != os.ModeDevice { continue } filename := file.Name() diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/util/volumepathhandler/volume_path_handler_linux.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/util/volumepathhandler/volume_path_handler_linux.go index 2e55df4cca80..541d0b65d354 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/util/volumepathhandler/volume_path_handler_linux.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/util/volumepathhandler/volume_path_handler_linux.go @@ -22,7 +22,6 @@ package volumepathhandler import ( "errors" "fmt" - "io/ioutil" "os" "os/exec" "path/filepath" @@ -133,7 +132,7 @@ func getLoopDeviceFromSysfs(path string) (string, error) { backingFile := fmt.Sprintf("%s/loop/backing_file", device) // The contents of this file is the absolute path of "path". - data, err := ioutil.ReadFile(backingFile) + data, err := os.ReadFile(backingFile) if err != nil { continue } diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/volume_linux.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/volume_linux.go index 57c02815029a..ec7f6da4bfe9 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/volume_linux.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/volume_linux.go @@ -40,22 +40,22 @@ const ( // SetVolumeOwnership modifies the given volume to be owned by // fsGroup, and sets SetGid so that newly created files are owned by // fsGroup. If fsGroup is nil nothing is done. 
-func SetVolumeOwnership(mounter Mounter, fsGroup *int64, fsGroupChangePolicy *v1.PodFSGroupChangePolicy, completeFunc func(types.CompleteFuncParam)) error { +func SetVolumeOwnership(mounter Mounter, dir string, fsGroup *int64, fsGroupChangePolicy *v1.PodFSGroupChangePolicy, completeFunc func(types.CompleteFuncParam)) error { if fsGroup == nil { return nil } timer := time.AfterFunc(30*time.Second, func() { - klog.Warningf("Setting volume ownership for %s and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699", mounter.GetPath()) + klog.Warningf("Setting volume ownership for %s and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699", dir) }) defer timer.Stop() - if skipPermissionChange(mounter, fsGroup, fsGroupChangePolicy) { - klog.V(3).InfoS("Skipping permission and ownership change for volume", "path", mounter.GetPath()) + if skipPermissionChange(mounter, dir, fsGroup, fsGroupChangePolicy) { + klog.V(3).InfoS("Skipping permission and ownership change for volume", "path", dir) return nil } - err := walkDeep(mounter.GetPath(), func(path string, info os.FileInfo, err error) error { + err := walkDeep(dir, func(path string, info os.FileInfo, err error) error { if err != nil { return err } @@ -104,14 +104,12 @@ func changeFilePermission(filename string, fsGroup *int64, readonly bool, info o return nil } -func skipPermissionChange(mounter Mounter, fsGroup *int64, fsGroupChangePolicy *v1.PodFSGroupChangePolicy) bool { - dir := mounter.GetPath() - +func skipPermissionChange(mounter Mounter, dir string, fsGroup *int64, fsGroupChangePolicy *v1.PodFSGroupChangePolicy) bool { if fsGroupChangePolicy == nil || *fsGroupChangePolicy != v1.FSGroupChangeOnRootMismatch { klog.V(4).InfoS("Perform recursive ownership change for directory", "path", dir) return false } - return 
!requiresPermissionChange(mounter.GetPath(), fsGroup, mounter.GetAttributes().ReadOnly) + return !requiresPermissionChange(dir, fsGroup, mounter.GetAttributes().ReadOnly) } func requiresPermissionChange(rootDir string, fsGroup *int64, readonly bool) bool { diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/volume_unsupported.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/volume_unsupported.go index 20c56d4b63e2..3b5a200a6160 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/volume_unsupported.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/volume_unsupported.go @@ -24,6 +24,6 @@ import ( "k8s.io/kubernetes/pkg/volume/util/types" ) -func SetVolumeOwnership(mounter Mounter, fsGroup *int64, fsGroupChangePolicy *v1.PodFSGroupChangePolicy, completeFunc func(types.CompleteFuncParam)) error { +func SetVolumeOwnership(mounter Mounter, dir string, fsGroup *int64, fsGroupChangePolicy *v1.PodFSGroupChangePolicy, completeFunc func(types.CompleteFuncParam)) error { return nil } diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/vsphere_volume/vsphere_volume.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/vsphere_volume/vsphere_volume.go index 0660eed66bb3..9d5dd3a4a7c2 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/vsphere_volume/vsphere_volume.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/vsphere_volume/vsphere_volume.go @@ -277,7 +277,7 @@ func (b *vsphereVolumeMounter) SetUpAt(dir string, mounterArgs volume.MounterArg os.Remove(dir) return err } - volume.SetVolumeOwnership(b, mounterArgs.FsGroup, mounterArgs.FSGroupChangePolicy, util.FSGroupCompleteHook(b.plugin, nil)) + volume.SetVolumeOwnership(b, dir, mounterArgs.FsGroup, mounterArgs.FSGroupChangePolicy, util.FSGroupCompleteHook(b.plugin, nil)) klog.V(3).Infof("vSphere volume %s mounted to %s", b.volPath, dir) return nil diff --git 
a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/vsphere_volume/vsphere_volume_util_windows.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/vsphere_volume/vsphere_volume_util_windows.go index b9f2b6c590b6..ae85c73f4315 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/vsphere_volume/vsphere_volume_util_windows.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/vsphere_volume/vsphere_volume_util_windows.go @@ -40,6 +40,10 @@ func verifyDevicePath(path string) (string, error) { klog.V(4).Infof("Found vSphere disk attached with disk number %v", path) return path, nil } + // NOTE: If a powershell command that would return an array (e.g.: Get-Disk) would return an array of + // one element, powershell will in fact return that object directly, and **not an array containing + // that element, which means piping it to ConvertTo-Json would not result in array as expected below. + // The following syntax forces it to always be an array. cmd := exec.Command("powershell", "/c", "Get-Disk | Select Number, SerialNumber | ConvertTo-JSON") output, err := cmd.Output() if err != nil { return "", err } diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/test/utils/deployment.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/test/utils/deployment.go index 84687f36b83e..b7bb7b7fe595 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/test/utils/deployment.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/test/utils/deployment.go @@ -21,11 +21,10 @@ import ( "fmt" "time" - "github.com/davecgh/go-spew/spew" - apps "k8s.io/api/apps/v1" - "k8s.io/api/core/v1" + v1 "k8s.io/api/core/v1" metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/apimachinery/pkg/util/dump" "k8s.io/apimachinery/pkg/util/wait" clientset "k8s.io/client-go/kubernetes" podutil "k8s.io/kubernetes/pkg/api/v1/pod" @@ -37,7 +36,7 @@ type LogfFn func(format string, args ...interface{}) func LogReplicaSetsOfDeployment(deployment *apps.Deployment, allOldRSs []*apps.ReplicaSet,
newRS *apps.ReplicaSet, logf LogfFn) { if newRS != nil { - logf(spew.Sprintf("New ReplicaSet %q of Deployment %q:\n%+v", newRS.Name, deployment.Name, *newRS)) + logf("New ReplicaSet %q of Deployment %q:\n%s", newRS.Name, deployment.Name, dump.Pretty(*newRS)) } else { logf("New ReplicaSet of Deployment %q is nil.", deployment.Name) } @@ -45,7 +44,7 @@ func LogReplicaSetsOfDeployment(deployment *apps.Deployment, allOldRSs []*apps.R logf("All old ReplicaSets of Deployment %q:", deployment.Name) } for i := range allOldRSs { - logf(spew.Sprintf("%+v", *allOldRSs[i])) + logf(dump.Pretty(*allOldRSs[i])) } } @@ -65,7 +64,7 @@ func LogPodsOfDeployment(c clientset.Interface, deployment *apps.Deployment, rsL if podutil.IsPodAvailable(&pod, minReadySeconds, metav1.Now()) { availability = "available" } - logf(spew.Sprintf("Pod %q is %s:\n%+v", pod.Name, availability, pod)) + logf("Pod %q is %s:\n%s", pod.Name, availability, dump.Pretty(pod)) } } diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/test/utils/paths.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/test/utils/paths.go index 9f1f6f5da34c..eaedb02aed25 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/test/utils/paths.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/test/utils/paths.go @@ -27,6 +27,11 @@ import ( // GetK8sRootDir returns the root directory for kubernetes, if present in the gopath. 
func GetK8sRootDir() (string, error) { + dir := os.Getenv("KUBE_ROOT") + if len(dir) > 0 { + return dir, nil + } + dir, err := RootDir() if err != nil { return "", err @@ -59,12 +64,17 @@ func RootDir() (string, error) { } // GetK8sBuildOutputDir returns the build output directory for k8s -func GetK8sBuildOutputDir() (string, error) { +// For dockerized build, targetArch (eg: 'linux/arm64', 'linux/amd64') must be explicitly specified +// For non dockerized build, targetArch is ignored +func GetK8sBuildOutputDir(isDockerizedBuild bool, targetArch string) (string, error) { k8sRoot, err := GetK8sRootDir() if err != nil { return "", err } buildOutputDir := filepath.Join(k8sRoot, "_output/local/go/bin") + if isDockerizedBuild { + buildOutputDir = filepath.Join(k8sRoot, "_output/dockerized/bin/", targetArch) + } if _, err := os.Stat(buildOutputDir); err != nil { return "", err } diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/test/utils/pki_helpers.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/test/utils/pki_helpers.go index 06c3290493df..c96e5855cac4 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/test/utils/pki_helpers.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/test/utils/pki_helpers.go @@ -53,10 +53,12 @@ func EncodeCertPEM(cert *x509.Certificate) []byte { // NewSignedCert creates a signed certificate using the given CA certificate and key func NewSignedCert(cfg *certutil.Config, key crypto.Signer, caCert *x509.Certificate, caKey crypto.Signer) (*x509.Certificate, error) { - serial, err := cryptorand.Int(cryptorand.Reader, new(big.Int).SetInt64(math.MaxInt64)) + // returns a uniform random value in [0, max-1), then add 1 to serial to make it a uniform random value in [1, max). 
+ serial, err := cryptorand.Int(cryptorand.Reader, new(big.Int).SetInt64(math.MaxInt64-1)) if err != nil { return nil, err } + serial = new(big.Int).Add(serial, big.NewInt(1)) if len(cfg.CommonName) == 0 { return nil, fmt.Errorf("must specify a CommonName") } diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/test/utils/runners.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/test/utils/runners.go index 6e18e8b9cd25..3cbc8a7bba20 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/test/utils/runners.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/test/utils/runners.go @@ -183,8 +183,11 @@ type RCConfig struct { ServiceAccountTokenProjections int - //Additional containers to run in the pod + // Additional containers to run in the pod AdditionalContainers []v1.Container + + // Security context for created pods + SecurityContext *v1.SecurityContext } func (rc *RCConfig) RCConfigLog(fmt string, args ...interface{}) { @@ -335,11 +338,12 @@ func (config *DeploymentConfig) create() error { TerminationGracePeriodSeconds: config.getTerminationGracePeriodSeconds(nil), Containers: []v1.Container{ { - Name: config.Name, - Image: config.Image, - Command: config.Command, - Ports: []v1.ContainerPort{{ContainerPort: 80}}, - Lifecycle: config.Lifecycle, + Name: config.Name, + Image: config.Image, + Command: config.Command, + Ports: []v1.ContainerPort{{ContainerPort: 80}}, + Lifecycle: config.Lifecycle, + SecurityContext: config.SecurityContext, }, }, }, @@ -421,11 +425,12 @@ func (config *ReplicaSetConfig) create() error { TerminationGracePeriodSeconds: config.getTerminationGracePeriodSeconds(nil), Containers: []v1.Container{ { - Name: config.Name, - Image: config.Image, - Command: config.Command, - Ports: []v1.ContainerPort{{ContainerPort: 80}}, - Lifecycle: config.Lifecycle, + Name: config.Name, + Image: config.Image, + Command: config.Command, + Ports: []v1.ContainerPort{{ContainerPort: 80}}, + Lifecycle: config.Lifecycle, + SecurityContext: 
config.SecurityContext, }, }, }, @@ -499,10 +504,11 @@ func (config *JobConfig) create() error { TerminationGracePeriodSeconds: config.getTerminationGracePeriodSeconds(nil), Containers: []v1.Container{ { - Name: config.Name, - Image: config.Image, - Command: config.Command, - Lifecycle: config.Lifecycle, + Name: config.Name, + Image: config.Image, + Command: config.Command, + Lifecycle: config.Lifecycle, + SecurityContext: config.SecurityContext, }, }, RestartPolicy: v1.RestartPolicyOnFailure, @@ -612,12 +618,13 @@ func (config *RCConfig) create() error { Affinity: config.Affinity, Containers: []v1.Container{ { - Name: config.Name, - Image: config.Image, - Command: config.Command, - Ports: []v1.ContainerPort{{ContainerPort: 80}}, - ReadinessProbe: config.ReadinessProbe, - Lifecycle: config.Lifecycle, + Name: config.Name, + Image: config.Image, + Command: config.Command, + Ports: []v1.ContainerPort{{ContainerPort: 80}}, + ReadinessProbe: config.ReadinessProbe, + Lifecycle: config.Lifecycle, + SecurityContext: config.SecurityContext, }, }, DNSPolicy: *config.DNSPolicy, diff --git a/cluster-autoscaler/vendor/k8s.io/legacy-cloud-providers/azure/azure.go b/cluster-autoscaler/vendor/k8s.io/legacy-cloud-providers/azure/azure.go index 461ef3171ccd..928532ceb4cc 100644 --- a/cluster-autoscaler/vendor/k8s.io/legacy-cloud-providers/azure/azure.go +++ b/cluster-autoscaler/vendor/k8s.io/legacy-cloud-providers/azure/azure.go @@ -41,7 +41,6 @@ import ( "k8s.io/client-go/tools/cache" "k8s.io/client-go/tools/record" cloudprovider "k8s.io/cloud-provider" - cloudproviderapi "k8s.io/cloud-provider/api" "k8s.io/klog/v2" "k8s.io/legacy-cloud-providers/azure/auth" azcache "k8s.io/legacy-cloud-providers/azure/cache" @@ -858,14 +857,6 @@ func (az *Cloud) updateNodeCaches(prevNode, newNode *v1.Node) { case hasExcludeBalancerLabel: az.excludeLoadBalancerNodes.Insert(newNode.ObjectMeta.Name) - case !isNodeReady(newNode) && getCloudTaint(newNode.Spec.Taints) == nil: - // If not in ready state 
and not a newly created node, add to excludeLoadBalancerNodes cache. - // New nodes (tainted with "node.cloudprovider.kubernetes.io/uninitialized") should not be - // excluded from load balancers regardless of their state, so as to reduce the number of - // VMSS API calls and not provoke VMScaleSetActiveModelsCountLimitReached. - // (https://github.com/kubernetes-sigs/cloud-provider-azure/issues/851) - az.excludeLoadBalancerNodes.Insert(newNode.ObjectMeta.Name) - default: // Nodes not falling into the three cases above are valid backends and // should not appear in excludeLoadBalancerNodes cache. @@ -995,21 +986,3 @@ func (az *Cloud) ShouldNodeExcludedFromLoadBalancer(nodeName string) (bool, erro return az.excludeLoadBalancerNodes.Has(nodeName), nil } - -func isNodeReady(node *v1.Node) bool { - for _, cond := range node.Status.Conditions { - if cond.Type == v1.NodeReady && cond.Status == v1.ConditionTrue { - return true - } - } - return false -} - -func getCloudTaint(taints []v1.Taint) *v1.Taint { - for _, taint := range taints { - if taint.Key == cloudproviderapi.TaintExternalCloudProvider { - return &taint - } - } - return nil -} diff --git a/cluster-autoscaler/vendor/k8s.io/legacy-cloud-providers/gce/gce_healthchecks.go b/cluster-autoscaler/vendor/k8s.io/legacy-cloud-providers/gce/gce_healthchecks.go index f905a6e8afe6..6409e4979e4c 100644 --- a/cluster-autoscaler/vendor/k8s.io/legacy-cloud-providers/gce/gce_healthchecks.go +++ b/cluster-autoscaler/vendor/k8s.io/legacy-cloud-providers/gce/gce_healthchecks.go @@ -20,8 +20,6 @@ limitations under the License. 
package gce import ( - "k8s.io/klog/v2" - computealpha "google.golang.org/api/compute/v0.alpha" computebeta "google.golang.org/api/compute/v0.beta" compute "google.golang.org/api/compute/v1" @@ -29,8 +27,6 @@ import ( "github.com/GoogleCloudPlatform/k8s-cloud-provider/pkg/cloud" "github.com/GoogleCloudPlatform/k8s-cloud-provider/pkg/cloud/filter" "github.com/GoogleCloudPlatform/k8s-cloud-provider/pkg/cloud/meta" - v1 "k8s.io/api/core/v1" - utilversion "k8s.io/apimachinery/pkg/util/version" ) const ( @@ -42,18 +38,6 @@ const ( lbNodesHealthCheckPort = 10256 ) -var ( - minNodesHealthCheckVersion *utilversion.Version -) - -func init() { - if v, err := utilversion.ParseGeneric("1.7.2"); err != nil { - klog.Fatalf("Failed to parse version for minNodesHealthCheckVersion: %v", err) - } else { - minNodesHealthCheckVersion = v - } -} - func newHealthcheckMetricContext(request string) *metricContext { return newHealthcheckMetricContextWithVersion(request, computeV1Version) } @@ -274,25 +258,3 @@ func GetNodesHealthCheckPort() int32 { func GetNodesHealthCheckPath() string { return nodesHealthCheckPath } - -// isAtLeastMinNodesHealthCheckVersion checks if a version is higher than -// `minNodesHealthCheckVersion`. -func isAtLeastMinNodesHealthCheckVersion(vstring string) bool { - version, err := utilversion.ParseGeneric(vstring) - if err != nil { - klog.Errorf("vstring (%s) is not a valid version string: %v", vstring, err) - return false - } - return version.AtLeast(minNodesHealthCheckVersion) -} - -// supportsNodesHealthCheck returns false if anyone of the nodes has version -// lower than `minNodesHealthCheckVersion`. 
-func supportsNodesHealthCheck(nodes []*v1.Node) bool { - for _, node := range nodes { - if !isAtLeastMinNodesHealthCheckVersion(node.Status.NodeInfo.KubeProxyVersion) { - return false - } - } - return true -} diff --git a/cluster-autoscaler/vendor/k8s.io/legacy-cloud-providers/gce/gce_loadbalancer_external.go b/cluster-autoscaler/vendor/k8s.io/legacy-cloud-providers/gce/gce_loadbalancer_external.go index b95ad4166c03..b4f451cbd7b2 100644 --- a/cluster-autoscaler/vendor/k8s.io/legacy-cloud-providers/gce/gce_loadbalancer_external.go +++ b/cluster-autoscaler/vendor/k8s.io/legacy-cloud-providers/gce/gce_loadbalancer_external.go @@ -63,7 +63,6 @@ func (g *Cloud) ensureExternalLoadBalancer(clusterName string, clusterID string, } hostNames := nodeNames(nodes) - supportsNodesHealthCheck := supportsNodesHealthCheck(nodes) hosts, err := g.getInstancesByNames(hostNames) if err != nil { return nil, err @@ -229,9 +228,7 @@ func (g *Cloud) ensureExternalLoadBalancer(clusterName string, clusterID string, // turn on the tpNeedsRecreation flag to delete/recreate fwdrule/tpool updating the // target pool to use local traffic health check. 
klog.V(2).Infof("ensureExternalLoadBalancer(%s): Updating from nodes health checks to local traffic health checks.", lbRefStr) - if supportsNodesHealthCheck { - hcToDelete = makeHTTPHealthCheck(MakeNodesHealthCheckName(clusterID), GetNodesHealthCheckPath(), GetNodesHealthCheckPort()) - } + hcToDelete = makeHTTPHealthCheck(MakeNodesHealthCheckName(clusterID), GetNodesHealthCheckPath(), GetNodesHealthCheckPort()) tpNeedsRecreation = true } hcToCreate = makeHTTPHealthCheck(loadBalancerName, path, healthCheckNodePort) @@ -245,9 +242,7 @@ func (g *Cloud) ensureExternalLoadBalancer(clusterName string, clusterID string, hcToDelete = hcLocalTrafficExisting tpNeedsRecreation = true } - if supportsNodesHealthCheck { - hcToCreate = makeHTTPHealthCheck(MakeNodesHealthCheckName(clusterID), GetNodesHealthCheckPath(), GetNodesHealthCheckPort()) - } + hcToCreate = makeHTTPHealthCheck(MakeNodesHealthCheckName(clusterID), GetNodesHealthCheckPath(), GetNodesHealthCheckPort()) } // Now we get to some slightly more interesting logic. 
// First, neither target pools nor forwarding rules can be updated in place - diff --git a/cluster-autoscaler/vendor/k8s.io/legacy-cloud-providers/vsphere/nodemanager.go b/cluster-autoscaler/vendor/k8s.io/legacy-cloud-providers/vsphere/nodemanager.go index 0fe96a7b79d5..5bd3eea6e369 100644 --- a/cluster-autoscaler/vendor/k8s.io/legacy-cloud-providers/vsphere/nodemanager.go +++ b/cluster-autoscaler/vendor/k8s.io/legacy-cloud-providers/vsphere/nodemanager.go @@ -27,11 +27,14 @@ import ( "github.com/vmware/govmomi/object" "github.com/vmware/govmomi/vim25/mo" - "k8s.io/klog/v2" - v1 "k8s.io/api/core/v1" + "k8s.io/apimachinery/pkg/api/errors" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" k8stypes "k8s.io/apimachinery/pkg/types" + coreclients "k8s.io/client-go/kubernetes/typed/core/v1" + corelisters "k8s.io/client-go/listers/core/v1" cloudprovider "k8s.io/cloud-provider" + "k8s.io/klog/v2" "k8s.io/legacy-cloud-providers/vsphere/vclib" ) @@ -61,6 +64,9 @@ type NodeManager struct { //CredentialsManager credentialManager *SecretCredentialManager + nodeLister corelisters.NodeLister + nodeGetter coreclients.NodesGetter + // Mutexes registeredNodesLock sync.RWMutex nodeInfoLock sync.RWMutex @@ -271,10 +277,43 @@ func (nm *NodeManager) GetNode(nodeName k8stypes.NodeName) (v1.Node, error) { nm.registeredNodesLock.RLock() node := nm.registeredNodes[convertToString(nodeName)] nm.registeredNodesLock.RUnlock() - if node == nil { - return v1.Node{}, vclib.ErrNoVMFound + if node != nil { + klog.V(4).Infof("Node %s found in vSphere cloud provider cache", nodeName) + return *node, nil + } + + if nm.nodeLister != nil { + klog.V(4).Infof("Node %s missing in vSphere cloud provider cache, trying node informer", nodeName) + node, err := nm.nodeLister.Get(convertToString(nodeName)) + if err != nil { + if !errors.IsNotFound(err) { + return v1.Node{}, err + } + // Fall through with IsNotFound error and try to get the node from the API server + } else { + node := node.DeepCopy() + nm.addNode(node) + 
klog.V(4).Infof("Node %s found in vSphere cloud provider node informer", nodeName) + return *node, nil + } } - return *node, nil + + if nm.nodeGetter != nil { + klog.V(4).Infof("Node %s missing in vSphere cloud provider caches, trying the API server", nodeName) + node, err := nm.nodeGetter.Nodes().Get(context.TODO(), convertToString(nodeName), metav1.GetOptions{}) + if err != nil { + if !errors.IsNotFound(err) { + return v1.Node{}, err + } + // Fall through with IsNotFound error to keep the code consistent with the above + } else { + nm.addNode(node) + klog.V(4).Infof("Node %s found in the API server", nodeName) + return *node, nil + } + } + klog.V(4).Infof("Node %s not found in vSphere cloud provider", nodeName) + return v1.Node{}, vclib.ErrNoVMFound } func (nm *NodeManager) getNodes() map[string]*v1.Node { @@ -515,3 +554,11 @@ func (nm *NodeManager) GetHostsInZone(ctx context.Context, zoneFailureDomain str klog.V(4).Infof("GetHostsInZone %v returning: %v", zoneFailureDomain, hosts) return hosts, nil } + +func (nm *NodeManager) SetNodeLister(nodeLister corelisters.NodeLister) { + nm.nodeLister = nodeLister +} + +func (nm *NodeManager) SetNodeGetter(nodeGetter coreclients.NodesGetter) { + nm.nodeGetter = nodeGetter +} diff --git a/cluster-autoscaler/vendor/k8s.io/legacy-cloud-providers/vsphere/vsphere.go b/cluster-autoscaler/vendor/k8s.io/legacy-cloud-providers/vsphere/vsphere.go index 756c3f6c9993..615094db6611 100644 --- a/cluster-autoscaler/vendor/k8s.io/legacy-cloud-providers/vsphere/vsphere.go +++ b/cluster-autoscaler/vendor/k8s.io/legacy-cloud-providers/vsphere/vsphere.go @@ -276,6 +276,7 @@ func init() { // Initialize passes a Kubernetes clientBuilder interface to the cloud provider func (vs *VSphere) Initialize(clientBuilder cloudprovider.ControllerClientBuilder, stop <-chan struct{}) { vs.kubeClient = clientBuilder.ClientOrDie("vsphere-legacy-cloud-provider") + vs.nodeManager.SetNodeGetter(vs.kubeClient.CoreV1()) } // Initialize Node Informers @@ -318,6 +319,9 @@ 
func (vs *VSphere) SetInformers(informerFactory informers.SharedInformerFactory) cache.ResourceEventHandlerFuncs{UpdateFunc: vs.syncNodeZoneLabels}, zoneLabelsResyncPeriod, ) + + nodeLister := informerFactory.Core().V1().Nodes().Lister() + vs.nodeManager.SetNodeLister(nodeLister) klog.V(4).Infof("Node informers in vSphere cloud provider initialized") } diff --git a/cluster-autoscaler/vendor/k8s.io/mount-utils/mount_helper_unix.go b/cluster-autoscaler/vendor/k8s.io/mount-utils/mount_helper_unix.go index cb8732fce74c..82210aaa23a3 100644 --- a/cluster-autoscaler/vendor/k8s.io/mount-utils/mount_helper_unix.go +++ b/cluster-autoscaler/vendor/k8s.io/mount-utils/mount_helper_unix.go @@ -20,14 +20,17 @@ limitations under the License. package mount import ( + "bytes" "errors" "fmt" "io/fs" "os" "strconv" "strings" + "sync" "syscall" + "golang.org/x/sys/unix" "k8s.io/klog/v2" utilio "k8s.io/utils/io" ) @@ -91,7 +94,7 @@ type MountInfo struct { // nolint: golint // ParseMountInfo parses /proc/xxx/mountinfo. func ParseMountInfo(filename string) ([]MountInfo, error) { - content, err := utilio.ConsistentRead(filename, maxListTries) + content, err := readMountInfo(filename) if err != nil { return []MountInfo{}, err } @@ -173,8 +176,7 @@ func splitMountOptions(s string) []string { // isMountPointMatch returns true if the path in mp is the same as dir. // Handles case where mountpoint dir has been renamed due to stale NFS mount. func isMountPointMatch(mp MountPoint, dir string) bool { - deletedDir := fmt.Sprintf("%s\\040(deleted)", dir) - return ((mp.Path == dir) || (mp.Path == deletedDir)) + return strings.TrimSuffix(mp.Path, "\\040(deleted)") == dir } // PathExists returns true if the specified path exists. @@ -199,3 +201,50 @@ func PathExists(path string) (bool, error) { } return false, err } + +// These variables are used solely by kernelHasMountinfoBug. 
+var ( + hasMountinfoBug bool + checkMountinfoBugOnce sync.Once +) + +// kernelHasMountinfoBug checks if the kernel bug that can lead to incomplete +// mountinfo being read is fixed. It does so by checking the kernel version. +// +// The bug was fixed by the kernel commit 9f6c61f96f2d97 (since Linux 5.8). +// Alas, there is no better way to check if the bug is fixed other than to +// rely on the kernel version returned by uname. +func kernelHasMountinfoBug() bool { + checkMountinfoBugOnce.Do(func() { + // Assume old kernel. + hasMountinfoBug = true + + uname := unix.Utsname{} + err := unix.Uname(&uname) + if err != nil { + return + } + + end := bytes.IndexByte(uname.Release[:], 0) + v := bytes.SplitN(uname.Release[:end], []byte{'.'}, 3) + if len(v) != 3 { + return + } + major, _ := strconv.Atoi(string(v[0])) + minor, _ := strconv.Atoi(string(v[1])) + + if major > 5 || (major == 5 && minor >= 8) { + hasMountinfoBug = false + } + }) + + return hasMountinfoBug +} + +func readMountInfo(path string) ([]byte, error) { + if kernelHasMountinfoBug() { + return utilio.ConsistentRead(path, maxListTries) + } + + return os.ReadFile(path) +} diff --git a/cluster-autoscaler/vendor/k8s.io/mount-utils/mount_linux.go b/cluster-autoscaler/vendor/k8s.io/mount-utils/mount_linux.go index f0125fcb489e..7d1807230476 100644 --- a/cluster-autoscaler/vendor/k8s.io/mount-utils/mount_linux.go +++ b/cluster-autoscaler/vendor/k8s.io/mount-utils/mount_linux.go @@ -24,7 +24,6 @@ import ( "errors" "fmt" "io/fs" - "io/ioutil" "os" "os/exec" "path/filepath" @@ -37,7 +36,6 @@ import ( "k8s.io/klog/v2" utilexec "k8s.io/utils/exec" - utilio "k8s.io/utils/io" ) const ( @@ -271,7 +269,7 @@ func detectSafeNotMountedBehavior() bool { // detectSafeNotMountedBehaviorWithExec is for testing with FakeExec. 
func detectSafeNotMountedBehaviorWithExec(exec utilexec.Interface) bool { // create a temp dir and try to umount it - path, err := ioutil.TempDir("", "kubelet-detect-safe-umount") + path, err := os.MkdirTemp("", "kubelet-detect-safe-umount") if err != nil { klog.V(4).Infof("Cannot create temp dir to detect safe 'not mounted' behavior: %v", err) return false @@ -633,7 +631,7 @@ func (mounter *SafeFormatAndMount) GetDiskFormat(disk string) (string, error) { // ListProcMounts is shared with NsEnterMounter func ListProcMounts(mountFilePath string) ([]MountPoint, error) { - content, err := utilio.ConsistentRead(mountFilePath, maxListTries) + content, err := readMountInfo(mountFilePath) if err != nil { return nil, err } @@ -766,7 +764,7 @@ func (mounter *Mounter) IsMountPoint(file string) (bool, error) { // Resolve any symlinks in file, kernel would do the same and use the resolved path in /proc/mounts. resolvedFile, err := filepath.EvalSymlinks(file) if err != nil { - if errors.Is(isMntErr, fs.ErrNotExist) { + if errors.Is(err, fs.ErrNotExist) { return false, fs.ErrNotExist } return false, err @@ -810,7 +808,6 @@ func tryUnmount(target string, withSafeNotMountedBehavior bool, unmountTimeout t func forceUmount(target string, withSafeNotMountedBehavior bool) error { command := exec.Command("umount", "-f", target) output, err := command.CombinedOutput() - if err != nil { return checkUmountError(target, command, output, err, withSafeNotMountedBehavior) } diff --git a/cluster-autoscaler/vendor/k8s.io/mount-utils/mount_windows.go b/cluster-autoscaler/vendor/k8s.io/mount-utils/mount_windows.go index 7b2a2d1a15e0..e5a03ecff839 100644 --- a/cluster-autoscaler/vendor/k8s.io/mount-utils/mount_windows.go +++ b/cluster-autoscaler/vendor/k8s.io/mount-utils/mount_windows.go @@ -82,11 +82,11 @@ func (mounter *Mounter) MountSensitive(source string, target string, fstype stri if source == "tmpfs" { klog.V(3).Infof("mounting source (%q), target (%q), with options (%q)", source, target, 
sanitizedOptionsForLogging) - return os.MkdirAll(target, 0755) + return os.MkdirAll(target, 0o755) } parentDir := filepath.Dir(target) - if err := os.MkdirAll(parentDir, 0755); err != nil { + if err := os.MkdirAll(parentDir, 0o755); err != nil { return err } @@ -299,26 +299,20 @@ func (mounter *SafeFormatAndMount) formatAndMountSensitive(source string, target } klog.V(4).Infof("diskMount: Disk successfully formatted, disk: %q, fstype: %q", source, fstype) - volumeIds, err := listVolumesOnDisk(source) + volumeIds, err := ListVolumesOnDisk(source) if err != nil { return err } driverPath := volumeIds[0] - target = NormalizeWindowsPath(target) - output, err := mounter.Exec.Command("cmd", "/c", "mklink", "/D", target, driverPath).CombinedOutput() - if err != nil { - klog.Errorf("mklink(%s, %s) failed: %v, output: %q", target, driverPath, err, string(output)) - return err - } - klog.V(2).Infof("formatAndMount disk(%s) fstype(%s) on(%s) with output(%s) successfully", driverPath, fstype, target, string(output)) - return nil + return mounter.MountSensitive(driverPath, target, fstype, options, sensitiveOptions) } // ListVolumesOnDisk - returns back list of volumes(volumeIDs) in the disk (requested in diskID). -func listVolumesOnDisk(diskID string) (volumeIDs []string, err error) { - cmd := fmt.Sprintf("(Get-Disk -DeviceId %s | Get-Partition | Get-Volume).UniqueId", diskID) +func ListVolumesOnDisk(diskID string) (volumeIDs []string, err error) { + // If a Disk has multiple volumes, Get-Volume may not return items in the same order. + cmd := fmt.Sprintf("(Get-Disk -DeviceId %s | Get-Partition | Get-Volume | Sort-Object -Property UniqueId).UniqueId", diskID) output, err := exec.Command("powershell", "/c", cmd).CombinedOutput() - klog.V(4).Infof("listVolumesOnDisk id from %s: %s", diskID, string(output)) + klog.V(4).Infof("ListVolumesOnDisk id from %s: %s", diskID, string(output)) if err != nil { return []string{}, fmt.Errorf("error list volumes on disk. 
cmd: %s, output: %s, error: %v", cmd, string(output), err) } diff --git a/cluster-autoscaler/vendor/k8s.io/mount-utils/resizefs_linux.go b/cluster-autoscaler/vendor/k8s.io/mount-utils/resizefs_linux.go index 81386fef878c..3a5fa1be7cb3 100644 --- a/cluster-autoscaler/vendor/k8s.io/mount-utils/resizefs_linux.go +++ b/cluster-autoscaler/vendor/k8s.io/mount-utils/resizefs_linux.go @@ -45,7 +45,6 @@ func NewResizeFs(exec utilexec.Interface) *ResizeFs { // Resize perform resize of file system func (resizefs *ResizeFs) Resize(devicePath string, deviceMountPath string) (bool, error) { format, err := getDiskFormat(resizefs.exec, devicePath) - if err != nil { formatErr := fmt.Errorf("ResizeFS.Resize - error checking format for device %s: %v", devicePath, err) return false, formatErr @@ -78,7 +77,6 @@ func (resizefs *ResizeFs) extResize(devicePath string) (bool, error) { resizeError := fmt.Errorf("resize of device %s failed: %v. resize2fs output: %s", devicePath, err, string(output)) return false, resizeError - } func (resizefs *ResizeFs) xfsResize(deviceMountPath string) (bool, error) { @@ -161,6 +159,7 @@ func (resizefs *ResizeFs) NeedResize(devicePath string, deviceMountPath string) } return true, nil } + func (resizefs *ResizeFs) getDeviceSize(devicePath string) (uint64, error) { output, err := resizefs.exec.Command(blockDev, "--getsize64", devicePath).CombinedOutput() outStr := strings.TrimSpace(string(output)) diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/cpuset/OWNERS b/cluster-autoscaler/vendor/k8s.io/utils/cpuset/OWNERS similarity index 57% rename from cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/cpuset/OWNERS rename to cluster-autoscaler/vendor/k8s.io/utils/cpuset/OWNERS index d4c6257fd184..0ec2b0852722 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/cpuset/OWNERS +++ b/cluster-autoscaler/vendor/k8s.io/utils/cpuset/OWNERS @@ -1,7 +1,8 @@ # See the OWNERS docs at https://go.k8s.io/owners approvers: + - 
dchen1107 - derekwaynecarr -emeritus_approvers: - - ConnorDoyle - - vishh + - ffromani + - klueska + - SergeyKanzhelev diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/cpuset/cpuset.go b/cluster-autoscaler/vendor/k8s.io/utils/cpuset/cpuset.go similarity index 98% rename from cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/cpuset/cpuset.go rename to cluster-autoscaler/vendor/k8s.io/utils/cpuset/cpuset.go index f20a3fa029c4..52912d95bcdc 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/cm/cpuset/cpuset.go +++ b/cluster-autoscaler/vendor/k8s.io/utils/cpuset/cpuset.go @@ -23,6 +23,8 @@ limitations under the License. // See http://man7.org/linux/man-pages/man7/cpuset.7.html#FORMATS for details. // // Future work can migrate this to use a 'set' library, and relax the dubious 'immutable' property. +// +// This package was originally developed in the 'kubernetes' repository. package cpuset import ( diff --git a/cluster-autoscaler/vendor/modules.txt b/cluster-autoscaler/vendor/modules.txt index d303cb8fd245..5060179ab7c4 100644 --- a/cluster-autoscaler/vendor/modules.txt +++ b/cluster-autoscaler/vendor/modules.txt @@ -4,7 +4,7 @@ cloud.google.com/go/compute/internal # cloud.google.com/go/compute/metadata v0.2.3 ## explicit; go 1.19 cloud.google.com/go/compute/metadata -# github.com/Azure/azure-sdk-for-go v67.2.0+incompatible +# github.com/Azure/azure-sdk-for-go v68.0.0+incompatible ## explicit github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2019-07-01/compute github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2019-12-01/compute @@ -24,11 +24,11 @@ github.com/Azure/azure-sdk-for-go/version # github.com/Azure/go-autorest v14.2.0+incompatible ## explicit github.com/Azure/go-autorest -# github.com/Azure/go-autorest/autorest v0.11.28 +# github.com/Azure/go-autorest/autorest v0.11.29 ## explicit; go 1.15 github.com/Azure/go-autorest/autorest github.com/Azure/go-autorest/autorest/azure -# 
github.com/Azure/go-autorest/autorest/adal v0.9.21 +# github.com/Azure/go-autorest/autorest/adal v0.9.23 ## explicit; go 1.15 github.com/Azure/go-autorest/autorest/adal # github.com/Azure/go-autorest/autorest/azure/auth v0.5.8 @@ -67,9 +67,10 @@ github.com/GoogleCloudPlatform/k8s-cloud-provider/pkg/cloud/mock # github.com/JeffAshton/win_pdh v0.0.0-20161109143554-76bb4ee9f0ab ## explicit github.com/JeffAshton/win_pdh -# github.com/Microsoft/go-winio v0.4.17 -## explicit; go 1.12 +# github.com/Microsoft/go-winio v0.6.0 +## explicit; go 1.17 github.com/Microsoft/go-winio +github.com/Microsoft/go-winio/internal/socket github.com/Microsoft/go-winio/pkg/guid github.com/Microsoft/go-winio/pkg/security github.com/Microsoft/go-winio/vhd @@ -77,8 +78,6 @@ github.com/Microsoft/go-winio/vhd ## explicit; go 1.13 github.com/Microsoft/hcsshim github.com/Microsoft/hcsshim/computestorage -github.com/Microsoft/hcsshim/hcn -github.com/Microsoft/hcsshim/internal/cni github.com/Microsoft/hcsshim/internal/cow github.com/Microsoft/hcsshim/internal/hcs github.com/Microsoft/hcsshim/internal/hcs/schema1 @@ -91,8 +90,6 @@ github.com/Microsoft/hcsshim/internal/logfields github.com/Microsoft/hcsshim/internal/longpath github.com/Microsoft/hcsshim/internal/mergemaps github.com/Microsoft/hcsshim/internal/oc -github.com/Microsoft/hcsshim/internal/regstate -github.com/Microsoft/hcsshim/internal/runhcs github.com/Microsoft/hcsshim/internal/safefile github.com/Microsoft/hcsshim/internal/timeout github.com/Microsoft/hcsshim/internal/vmcompute @@ -166,7 +163,7 @@ github.com/beorn7/perks/quantile # github.com/blang/semver/v4 v4.0.0 ## explicit; go 1.14 github.com/blang/semver/v4 -# github.com/cenkalti/backoff/v4 v4.2.0 +# github.com/cenkalti/backoff/v4 v4.2.1 ## explicit; go 1.18 github.com/cenkalti/backoff/v4 # github.com/cespare/xxhash/v2 v2.2.0 @@ -176,24 +173,26 @@ github.com/cespare/xxhash/v2 ## explicit; go 1.13 github.com/checkpoint-restore/go-criu/v5 github.com/checkpoint-restore/go-criu/v5/rpc 
-# github.com/cilium/ebpf v0.7.0 -## explicit; go 1.16 +# github.com/cilium/ebpf v0.9.1 +## explicit; go 1.17 github.com/cilium/ebpf github.com/cilium/ebpf/asm +github.com/cilium/ebpf/btf github.com/cilium/ebpf/internal -github.com/cilium/ebpf/internal/btf +github.com/cilium/ebpf/internal/sys github.com/cilium/ebpf/internal/unix github.com/cilium/ebpf/link -# github.com/container-storage-interface/spec v1.7.0 +# github.com/container-storage-interface/spec v1.8.0 ## explicit; go 1.18 github.com/container-storage-interface/spec/lib/go/csi -# github.com/containerd/cgroups v1.0.1 -## explicit; go 1.13 +# github.com/containerd/cgroups v1.1.0 +## explicit; go 1.17 +github.com/containerd/cgroups github.com/containerd/cgroups/stats/v1 # github.com/containerd/console v1.0.3 ## explicit; go 1.13 github.com/containerd/console -# github.com/containerd/ttrpc v1.1.0 +# github.com/containerd/ttrpc v1.2.2 ## explicit; go 1.13 github.com/containerd/ttrpc # github.com/coreos/go-semver v0.3.1 @@ -216,7 +215,7 @@ github.com/digitalocean/godo # github.com/dimchansky/utfbom v1.1.1 ## explicit github.com/dimchansky/utfbom -# github.com/docker/distribution v2.8.1+incompatible +# github.com/docker/distribution v2.8.2+incompatible ## explicit github.com/docker/distribution/digestset github.com/docker/distribution/reference @@ -239,7 +238,7 @@ github.com/felixge/httpsnoop # github.com/fsnotify/fsnotify v1.6.0 ## explicit; go 1.16 github.com/fsnotify/fsnotify -# github.com/gardener/machine-controller-manager v0.50.0 +# github.com/gardener/machine-controller-manager v0.50.1 ## explicit; go 1.20 github.com/gardener/machine-controller-manager/pkg/apis/machine github.com/gardener/machine-controller-manager/pkg/apis/machine/v1alpha1 @@ -255,15 +254,12 @@ github.com/gardener/machine-controller-manager/pkg/client/informers/externalvers github.com/gardener/machine-controller-manager/pkg/client/listers/machine/v1alpha1 github.com/gardener/machine-controller-manager/pkg/util/provider/cache 
github.com/gardener/machine-controller-manager/pkg/util/provider/machinecodes/codes -# github.com/gardener/machine-controller-manager-provider-aws v0.17.0 -## explicit; go 1.19 +# github.com/gardener/machine-controller-manager-provider-aws v0.19.2 +## explicit; go 1.20 github.com/gardener/machine-controller-manager-provider-aws/pkg/aws/apis -# github.com/gardener/machine-controller-manager-provider-azure v0.10.0 -## explicit; go 1.19 +# github.com/gardener/machine-controller-manager-provider-azure v0.11.1 +## explicit; go 1.20 github.com/gardener/machine-controller-manager-provider-azure/pkg/azure/apis -# github.com/ghodss/yaml v1.0.0 -## explicit -github.com/ghodss/yaml # github.com/go-logr/logr v1.2.4 ## explicit; go 1.16 github.com/go-logr/logr @@ -287,7 +283,7 @@ github.com/go-task/slim-sprig # github.com/godbus/dbus/v5 v5.0.6 ## explicit; go 1.12 github.com/godbus/dbus/v5 -# github.com/gofrs/uuid v4.0.0+incompatible +# github.com/gofrs/uuid v4.4.0+incompatible ## explicit github.com/gofrs/uuid # github.com/gogo/protobuf v1.3.2 @@ -297,7 +293,7 @@ github.com/gogo/protobuf/proto github.com/gogo/protobuf/protoc-gen-gogo/descriptor github.com/gogo/protobuf/sortkeys github.com/gogo/protobuf/types -# github.com/golang-jwt/jwt/v4 v4.4.2 +# github.com/golang-jwt/jwt/v4 v4.5.0 ## explicit; go 1.16 github.com/golang-jwt/jwt/v4 # github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da @@ -317,7 +313,7 @@ github.com/golang/protobuf/ptypes/any github.com/golang/protobuf/ptypes/duration github.com/golang/protobuf/ptypes/timestamp github.com/golang/protobuf/ptypes/wrappers -# github.com/google/cadvisor v0.47.1 +# github.com/google/cadvisor v0.47.3 ## explicit; go 1.16 github.com/google/cadvisor/cache/memory github.com/google/cadvisor/collector @@ -366,7 +362,7 @@ github.com/google/cadvisor/utils/sysfs github.com/google/cadvisor/utils/sysinfo github.com/google/cadvisor/version github.com/google/cadvisor/watcher -# github.com/google/cel-go v0.14.0 +# 
github.com/google/cel-go v0.16.0 ## explicit; go 1.18 github.com/google/cel-go/cel github.com/google/cel-go/checker @@ -386,13 +382,13 @@ github.com/google/cel-go/interpreter github.com/google/cel-go/interpreter/functions github.com/google/cel-go/parser github.com/google/cel-go/parser/gen -# github.com/google/gnostic v0.6.9 -## explicit; go 1.12 -github.com/google/gnostic/compiler -github.com/google/gnostic/extensions -github.com/google/gnostic/jsonschema -github.com/google/gnostic/openapiv2 -github.com/google/gnostic/openapiv3 +# github.com/google/gnostic-models v0.6.8 +## explicit; go 1.18 +github.com/google/gnostic-models/compiler +github.com/google/gnostic-models/extensions +github.com/google/gnostic-models/jsonschema +github.com/google/gnostic-models/openapiv2 +github.com/google/gnostic-models/openapiv3 # github.com/google/go-cmp v0.5.9 ## explicit; go 1.13 github.com/google/go-cmp/cmp @@ -476,9 +472,6 @@ github.com/mistifyio/go-zfs # github.com/mitchellh/go-homedir v1.1.0 ## explicit github.com/mitchellh/go-homedir -# github.com/mitchellh/mapstructure v1.5.0 -## explicit; go 1.14 -github.com/mitchellh/mapstructure # github.com/moby/ipvs v1.1.0 ## explicit; go 1.17 github.com/moby/ipvs @@ -546,8 +539,8 @@ github.com/onsi/gomega/types # github.com/opencontainers/go-digest v1.0.0 ## explicit; go 1.13 github.com/opencontainers/go-digest -# github.com/opencontainers/runc v1.1.4 -## explicit; go 1.16 +# github.com/opencontainers/runc v1.1.7 +## explicit; go 1.17 github.com/opencontainers/runc/libcontainer github.com/opencontainers/runc/libcontainer/apparmor github.com/opencontainers/runc/libcontainer/capabilities @@ -588,7 +581,7 @@ github.com/pkg/errors # github.com/pmezard/go-difflib v1.0.0 ## explicit github.com/pmezard/go-difflib/difflib -# github.com/prometheus/client_golang v1.14.0 +# github.com/prometheus/client_golang v1.16.0 ## explicit; go 1.17 github.com/prometheus/client_golang/prometheus github.com/prometheus/client_golang/prometheus/collectors @@ 
-596,16 +589,16 @@ github.com/prometheus/client_golang/prometheus/internal github.com/prometheus/client_golang/prometheus/promhttp github.com/prometheus/client_golang/prometheus/testutil github.com/prometheus/client_golang/prometheus/testutil/promlint -# github.com/prometheus/client_model v0.3.0 -## explicit; go 1.9 +# github.com/prometheus/client_model v0.4.0 +## explicit; go 1.18 github.com/prometheus/client_model/go -# github.com/prometheus/common v0.42.0 +# github.com/prometheus/common v0.44.0 ## explicit; go 1.18 github.com/prometheus/common/expfmt github.com/prometheus/common/internal/bitbucket.org/ww/goautoneg github.com/prometheus/common/model -# github.com/prometheus/procfs v0.9.0 -## explicit; go 1.18 +# github.com/prometheus/procfs v0.10.1 +## explicit; go 1.19 github.com/prometheus/procfs github.com/prometheus/procfs/internal/fs github.com/prometheus/procfs/internal/util @@ -615,7 +608,7 @@ github.com/rubiojr/go-vhd/vhd # github.com/satori/go.uuid v1.2.0 ## explicit github.com/satori/go.uuid -# github.com/seccomp/libseccomp-golang v0.9.2-0.20220502022130-f33da4d89646 +# github.com/seccomp/libseccomp-golang v0.10.0 ## explicit; go 1.14 github.com/seccomp/libseccomp-golang # github.com/sirupsen/logrus v1.9.0 @@ -646,8 +639,8 @@ github.com/syndtr/gocapability/capability ## explicit; go 1.12 github.com/vishvananda/netlink github.com/vishvananda/netlink/nl -# github.com/vishvananda/netns v0.0.2 -## explicit; go 1.12 +# github.com/vishvananda/netns v0.0.4 +## explicit; go 1.17 github.com/vishvananda/netns # github.com/vmware/govmomi v0.30.0 ## explicit; go 1.17 @@ -682,24 +675,24 @@ github.com/vmware/govmomi/vim25/progress github.com/vmware/govmomi/vim25/soap github.com/vmware/govmomi/vim25/types github.com/vmware/govmomi/vim25/xml -# go.etcd.io/etcd/api/v3 v3.5.7 -## explicit; go 1.17 +# go.etcd.io/etcd/api/v3 v3.5.9 +## explicit; go 1.19 go.etcd.io/etcd/api/v3/authpb go.etcd.io/etcd/api/v3/etcdserverpb go.etcd.io/etcd/api/v3/membershippb 
go.etcd.io/etcd/api/v3/mvccpb go.etcd.io/etcd/api/v3/v3rpc/rpctypes go.etcd.io/etcd/api/v3/version -# go.etcd.io/etcd/client/pkg/v3 v3.5.7 -## explicit; go 1.17 +# go.etcd.io/etcd/client/pkg/v3 v3.5.9 +## explicit; go 1.19 go.etcd.io/etcd/client/pkg/v3/fileutil go.etcd.io/etcd/client/pkg/v3/logutil go.etcd.io/etcd/client/pkg/v3/systemd go.etcd.io/etcd/client/pkg/v3/tlsutil go.etcd.io/etcd/client/pkg/v3/transport go.etcd.io/etcd/client/pkg/v3/types -# go.etcd.io/etcd/client/v3 v3.5.7 -## explicit; go 1.17 +# go.etcd.io/etcd/client/v3 v3.5.9 +## explicit; go 1.19 go.etcd.io/etcd/client/v3 go.etcd.io/etcd/client/v3/credentials go.etcd.io/etcd/client/v3/internal/endpoint @@ -803,6 +796,7 @@ go.uber.org/zap/zapgrpc ## explicit; go 1.17 golang.org/x/crypto/cryptobyte golang.org/x/crypto/cryptobyte/asn1 +golang.org/x/crypto/hkdf golang.org/x/crypto/internal/alias golang.org/x/crypto/internal/poly1305 golang.org/x/crypto/nacl/secretbox @@ -813,6 +807,9 @@ golang.org/x/crypto/salsa20/salsa ## explicit; go 1.18 golang.org/x/exp/constraints golang.org/x/exp/slices +# golang.org/x/mod v0.12.0 +## explicit; go 1.17 +golang.org/x/mod/semver # golang.org/x/net v0.14.0 ## explicit; go 1.17 golang.org/x/net/bpf @@ -829,7 +826,7 @@ golang.org/x/net/internal/timeseries golang.org/x/net/proxy golang.org/x/net/trace golang.org/x/net/websocket -# golang.org/x/oauth2 v0.7.0 +# golang.org/x/oauth2 v0.8.0 ## explicit; go 1.17 golang.org/x/oauth2 golang.org/x/oauth2/authhandler @@ -845,6 +842,7 @@ golang.org/x/sync/singleflight # golang.org/x/sys v0.12.0 ## explicit; go 1.17 golang.org/x/sys/cpu +golang.org/x/sys/execabs golang.org/x/sys/internal/unsafeheader golang.org/x/sys/plan9 golang.org/x/sys/unix @@ -890,8 +888,24 @@ golang.org/x/text/width golang.org/x/time/rate # golang.org/x/tools v0.12.0 ## explicit; go 1.18 +golang.org/x/tools/cmd/stringer golang.org/x/tools/go/ast/inspector +golang.org/x/tools/go/gcexportdata +golang.org/x/tools/go/internal/packagesdriver 
+golang.org/x/tools/go/packages +golang.org/x/tools/go/types/objectpath +golang.org/x/tools/internal/event +golang.org/x/tools/internal/event/core +golang.org/x/tools/internal/event/keys +golang.org/x/tools/internal/event/label +golang.org/x/tools/internal/event/tag +golang.org/x/tools/internal/gcimporter +golang.org/x/tools/internal/gocommand +golang.org/x/tools/internal/packagesinternal +golang.org/x/tools/internal/pkgbits +golang.org/x/tools/internal/tokeninternal golang.org/x/tools/internal/typeparams +golang.org/x/tools/internal/typesinternal # google.golang.org/api v0.114.0 ## explicit; go 1.19 google.golang.org/api/compute/v0.alpha @@ -922,16 +936,21 @@ google.golang.org/appengine/internal/modules google.golang.org/appengine/internal/remote_api google.golang.org/appengine/internal/urlfetch google.golang.org/appengine/urlfetch -# google.golang.org/genproto v0.0.0-20230410155749-daa745c078e1 +# google.golang.org/genproto v0.0.0-20230526161137-0005af68ea54 +## explicit; go 1.19 +google.golang.org/genproto/internal +google.golang.org/genproto/protobuf/field_mask +# google.golang.org/genproto/googleapis/api v0.0.0-20230525234035-dd9d682886f9 ## explicit; go 1.19 google.golang.org/genproto/googleapis/api google.golang.org/genproto/googleapis/api/annotations google.golang.org/genproto/googleapis/api/expr/v1alpha1 google.golang.org/genproto/googleapis/api/httpbody +# google.golang.org/genproto/googleapis/rpc v0.0.0-20230525234030-28d5490b6b19 +## explicit; go 1.19 google.golang.org/genproto/googleapis/rpc/code google.golang.org/genproto/googleapis/rpc/errdetails google.golang.org/genproto/googleapis/rpc/status -google.golang.org/genproto/protobuf/field_mask # google.golang.org/grpc v1.54.0 ## explicit; go 1.17 google.golang.org/grpc @@ -1045,7 +1064,7 @@ gopkg.in/yaml.v2 # gopkg.in/yaml.v3 v3.0.1 ## explicit gopkg.in/yaml.v3 -# k8s.io/api v0.27.2 => k8s.io/api v0.27.1 +# k8s.io/api v0.28.0 => k8s.io/api v0.28.0 ## explicit; go 1.20 k8s.io/api/admission/v1 
k8s.io/api/admission/v1beta1 @@ -1101,7 +1120,10 @@ k8s.io/api/scheduling/v1beta1 k8s.io/api/storage/v1 k8s.io/api/storage/v1alpha1 k8s.io/api/storage/v1beta1 -# k8s.io/apimachinery v0.27.2 => k8s.io/apimachinery v0.28.0-alpha.0 +# k8s.io/apiextensions-apiserver v0.27.2 => k8s.io/apiextensions-apiserver v0.28.0 +## explicit; go 1.20 +k8s.io/apiextensions-apiserver/pkg/features +# k8s.io/apimachinery v0.28.0 => k8s.io/apimachinery v0.28.0 ## explicit; go 1.20 k8s.io/apimachinery/pkg/api/equality k8s.io/apimachinery/pkg/api/errors @@ -1133,10 +1155,12 @@ k8s.io/apimachinery/pkg/selection k8s.io/apimachinery/pkg/types k8s.io/apimachinery/pkg/util/cache k8s.io/apimachinery/pkg/util/diff +k8s.io/apimachinery/pkg/util/dump k8s.io/apimachinery/pkg/util/errors k8s.io/apimachinery/pkg/util/framer k8s.io/apimachinery/pkg/util/httpstream k8s.io/apimachinery/pkg/util/httpstream/spdy +k8s.io/apimachinery/pkg/util/httpstream/wsstream k8s.io/apimachinery/pkg/util/intstr k8s.io/apimachinery/pkg/util/json k8s.io/apimachinery/pkg/util/managedfields @@ -1162,7 +1186,7 @@ k8s.io/apimachinery/pkg/watch k8s.io/apimachinery/third_party/forked/golang/json k8s.io/apimachinery/third_party/forked/golang/netutil k8s.io/apimachinery/third_party/forked/golang/reflect -# k8s.io/apiserver v0.27.2 => k8s.io/apiserver v0.27.1 +# k8s.io/apiserver v0.28.0 => k8s.io/apiserver v0.28.0 ## explicit; go 1.20 k8s.io/apiserver/pkg/admission k8s.io/apiserver/pkg/admission/cel @@ -1223,6 +1247,8 @@ k8s.io/apiserver/pkg/authorization/path k8s.io/apiserver/pkg/authorization/union k8s.io/apiserver/pkg/cel k8s.io/apiserver/pkg/cel/common +k8s.io/apiserver/pkg/cel/environment +k8s.io/apiserver/pkg/cel/lazy k8s.io/apiserver/pkg/cel/library k8s.io/apiserver/pkg/cel/openapi k8s.io/apiserver/pkg/cel/openapi/resolver @@ -1259,6 +1285,7 @@ k8s.io/apiserver/pkg/server/mux k8s.io/apiserver/pkg/server/options k8s.io/apiserver/pkg/server/options/encryptionconfig 
k8s.io/apiserver/pkg/server/options/encryptionconfig/controller +k8s.io/apiserver/pkg/server/options/encryptionconfig/metrics k8s.io/apiserver/pkg/server/resourceconfig k8s.io/apiserver/pkg/server/routes k8s.io/apiserver/pkg/server/storage @@ -1293,9 +1320,9 @@ k8s.io/apiserver/pkg/util/flowcontrol/format k8s.io/apiserver/pkg/util/flowcontrol/metrics k8s.io/apiserver/pkg/util/flowcontrol/request k8s.io/apiserver/pkg/util/flushwriter +k8s.io/apiserver/pkg/util/peerproxy/metrics k8s.io/apiserver/pkg/util/shufflesharding k8s.io/apiserver/pkg/util/webhook -k8s.io/apiserver/pkg/util/wsstream k8s.io/apiserver/pkg/util/x509metrics k8s.io/apiserver/pkg/warning k8s.io/apiserver/plugin/pkg/audit/buffered @@ -1304,7 +1331,7 @@ k8s.io/apiserver/plugin/pkg/audit/truncate k8s.io/apiserver/plugin/pkg/audit/webhook k8s.io/apiserver/plugin/pkg/authenticator/token/webhook k8s.io/apiserver/plugin/pkg/authorizer/webhook -# k8s.io/client-go v0.27.2 => k8s.io/client-go v0.27.1 +# k8s.io/client-go v0.28.0 => k8s.io/client-go v0.28.0 ## explicit; go 1.20 k8s.io/client-go/applyconfigurations/admissionregistration/v1 k8s.io/client-go/applyconfigurations/admissionregistration/v1alpha1 @@ -1630,7 +1657,7 @@ k8s.io/client-go/util/homedir k8s.io/client-go/util/keyutil k8s.io/client-go/util/retry k8s.io/client-go/util/workqueue -# k8s.io/cloud-provider v0.27.1 => k8s.io/cloud-provider v0.27.1 +# k8s.io/cloud-provider v0.28.0 => k8s.io/cloud-provider v0.28.0 ## explicit; go 1.20 k8s.io/cloud-provider k8s.io/cloud-provider/api @@ -1643,16 +1670,17 @@ k8s.io/cloud-provider/controllers/node/config/v1alpha1 k8s.io/cloud-provider/controllers/service/config k8s.io/cloud-provider/controllers/service/config/v1alpha1 k8s.io/cloud-provider/credentialconfig +k8s.io/cloud-provider/names k8s.io/cloud-provider/node/helpers k8s.io/cloud-provider/options k8s.io/cloud-provider/service/helpers k8s.io/cloud-provider/volume k8s.io/cloud-provider/volume/errors k8s.io/cloud-provider/volume/helpers -# 
k8s.io/cloud-provider-aws v1.27.1 +# k8s.io/cloud-provider-aws v1.27.0 ## explicit; go 1.20 k8s.io/cloud-provider-aws/pkg/providers/v1 -# k8s.io/component-base v0.27.2 => k8s.io/component-base v0.27.1 +# k8s.io/component-base v0.28.0 => k8s.io/component-base v0.28.0 ## explicit; go 1.20 k8s.io/component-base/cli/flag k8s.io/component-base/codec @@ -1680,7 +1708,7 @@ k8s.io/component-base/tracing k8s.io/component-base/tracing/api/v1 k8s.io/component-base/version k8s.io/component-base/version/verflag -# k8s.io/component-helpers v0.27.1 => k8s.io/component-helpers v0.27.1 +# k8s.io/component-helpers v0.28.0 => k8s.io/component-helpers v0.28.0 ## explicit; go 1.20 k8s.io/component-helpers/apimachinery/lease k8s.io/component-helpers/node/topology @@ -1690,7 +1718,7 @@ k8s.io/component-helpers/scheduling/corev1 k8s.io/component-helpers/scheduling/corev1/nodeaffinity k8s.io/component-helpers/storage/ephemeral k8s.io/component-helpers/storage/volume -# k8s.io/controller-manager v0.27.1 => k8s.io/controller-manager v0.27.1 +# k8s.io/controller-manager v0.28.0 => k8s.io/controller-manager v0.28.0 ## explicit; go 1.20 k8s.io/controller-manager/config k8s.io/controller-manager/config/v1 @@ -1702,19 +1730,19 @@ k8s.io/controller-manager/pkg/features k8s.io/controller-manager/pkg/features/register k8s.io/controller-manager/pkg/leadermigration/config k8s.io/controller-manager/pkg/leadermigration/options -# k8s.io/cri-api v0.0.0 => k8s.io/cri-api v0.28.0-alpha.0 +# k8s.io/cri-api v0.28.0 => k8s.io/cri-api v0.28.0 ## explicit; go 1.20 k8s.io/cri-api/pkg/apis k8s.io/cri-api/pkg/apis/runtime/v1 k8s.io/cri-api/pkg/errors -# k8s.io/csi-translation-lib v0.27.0 => k8s.io/csi-translation-lib v0.27.1 +# k8s.io/csi-translation-lib v0.27.0 => k8s.io/csi-translation-lib v0.28.0 ## explicit; go 1.20 k8s.io/csi-translation-lib k8s.io/csi-translation-lib/plugins -# k8s.io/dynamic-resource-allocation v0.0.0 => k8s.io/dynamic-resource-allocation v0.27.1 +# k8s.io/dynamic-resource-allocation v0.0.0 
=> k8s.io/dynamic-resource-allocation v0.28.0 ## explicit; go 1.20 k8s.io/dynamic-resource-allocation/resourceclaim -# k8s.io/klog/v2 v2.90.1 +# k8s.io/klog/v2 v2.100.1 ## explicit; go 1.13 k8s.io/klog/v2 k8s.io/klog/v2/internal/buffer @@ -1722,13 +1750,13 @@ k8s.io/klog/v2/internal/clock k8s.io/klog/v2/internal/dbg k8s.io/klog/v2/internal/serialize k8s.io/klog/v2/internal/severity -# k8s.io/kms v0.27.1 => k8s.io/kms v0.27.1 +# k8s.io/kms v0.28.0 => k8s.io/kms v0.28.0 ## explicit; go 1.20 k8s.io/kms/apis/v1beta1 k8s.io/kms/apis/v2 k8s.io/kms/pkg/service k8s.io/kms/pkg/util -# k8s.io/kube-openapi v0.0.0-20230501164219-8b0f38b5fd1f +# k8s.io/kube-openapi v0.0.0-20230717233707-2695361300d9 ## explicit; go 1.19 k8s.io/kube-openapi/pkg/builder k8s.io/kube-openapi/pkg/builder3 @@ -1750,19 +1778,15 @@ k8s.io/kube-openapi/pkg/validation/errors k8s.io/kube-openapi/pkg/validation/spec k8s.io/kube-openapi/pkg/validation/strfmt k8s.io/kube-openapi/pkg/validation/strfmt/bson -# k8s.io/kube-proxy v0.0.0 => k8s.io/kube-proxy v0.27.1 -## explicit; go 1.20 -k8s.io/kube-proxy/config/v1alpha1 -# k8s.io/kube-scheduler v0.0.0 => k8s.io/kube-scheduler v0.27.1 +# k8s.io/kube-scheduler v0.0.0 => k8s.io/kube-scheduler v0.28.0 ## explicit; go 1.20 k8s.io/kube-scheduler/config/v1 -k8s.io/kube-scheduler/config/v1beta2 k8s.io/kube-scheduler/config/v1beta3 k8s.io/kube-scheduler/extender/v1 -# k8s.io/kubectl v0.0.0 => k8s.io/kubectl v0.27.1 +# k8s.io/kubectl v0.0.0 => k8s.io/kubectl v0.28.0 ## explicit; go 1.20 k8s.io/kubectl/pkg/scale -# k8s.io/kubelet v0.27.1 => k8s.io/kubelet v0.27.1 +# k8s.io/kubelet v0.28.0 => k8s.io/kubelet v0.28.0 ## explicit; go 1.20 k8s.io/kubelet/config/v1 k8s.io/kubelet/config/v1alpha1 @@ -1775,13 +1799,16 @@ k8s.io/kubelet/pkg/apis/credentialprovider/v1alpha1 k8s.io/kubelet/pkg/apis/credentialprovider/v1beta1 k8s.io/kubelet/pkg/apis/deviceplugin/v1beta1 k8s.io/kubelet/pkg/apis/dra/v1alpha2 +k8s.io/kubelet/pkg/apis/dra/v1alpha3 
k8s.io/kubelet/pkg/apis/pluginregistration/v1 k8s.io/kubelet/pkg/apis/podresources/v1 k8s.io/kubelet/pkg/apis/podresources/v1alpha1 k8s.io/kubelet/pkg/apis/stats/v1alpha1 -# k8s.io/kubernetes v1.27.1 +k8s.io/kubelet/pkg/cri/streaming +k8s.io/kubelet/pkg/cri/streaming/portforward +k8s.io/kubelet/pkg/cri/streaming/remotecommand +# k8s.io/kubernetes v1.28.0 ## explicit; go 1.20 -k8s.io/kubernetes/cmd/kube-proxy/app k8s.io/kubernetes/cmd/kubelet/app k8s.io/kubernetes/cmd/kubelet/app/options k8s.io/kubernetes/pkg/api/legacyscheme @@ -1844,7 +1871,6 @@ k8s.io/kubernetes/pkg/kubelet/cm/containermap k8s.io/kubernetes/pkg/kubelet/cm/cpumanager k8s.io/kubernetes/pkg/kubelet/cm/cpumanager/state k8s.io/kubernetes/pkg/kubelet/cm/cpumanager/topology -k8s.io/kubernetes/pkg/kubelet/cm/cpuset k8s.io/kubernetes/pkg/kubelet/cm/devicemanager k8s.io/kubernetes/pkg/kubelet/cm/devicemanager/checkpoint k8s.io/kubernetes/pkg/kubelet/cm/devicemanager/plugin/v1beta1 @@ -1856,14 +1882,12 @@ k8s.io/kubernetes/pkg/kubelet/cm/memorymanager/state k8s.io/kubernetes/pkg/kubelet/cm/topologymanager k8s.io/kubernetes/pkg/kubelet/cm/topologymanager/bitmask k8s.io/kubernetes/pkg/kubelet/cm/util +k8s.io/kubernetes/pkg/kubelet/cm/util/cdi k8s.io/kubernetes/pkg/kubelet/config k8s.io/kubernetes/pkg/kubelet/configmap k8s.io/kubernetes/pkg/kubelet/container k8s.io/kubernetes/pkg/kubelet/container/testing k8s.io/kubernetes/pkg/kubelet/cri/remote -k8s.io/kubernetes/pkg/kubelet/cri/streaming -k8s.io/kubernetes/pkg/kubelet/cri/streaming/portforward -k8s.io/kubernetes/pkg/kubelet/cri/streaming/remotecommand k8s.io/kubernetes/pkg/kubelet/envvars k8s.io/kubernetes/pkg/kubelet/events k8s.io/kubernetes/pkg/kubelet/eviction @@ -1933,27 +1957,22 @@ k8s.io/kubernetes/pkg/probe/grpc k8s.io/kubernetes/pkg/probe/http k8s.io/kubernetes/pkg/probe/tcp k8s.io/kubernetes/pkg/proxy -k8s.io/kubernetes/pkg/proxy/apis -k8s.io/kubernetes/pkg/proxy/apis/config -k8s.io/kubernetes/pkg/proxy/apis/config/scheme 
-k8s.io/kubernetes/pkg/proxy/apis/config/v1alpha1 -k8s.io/kubernetes/pkg/proxy/apis/config/validation k8s.io/kubernetes/pkg/proxy/config +k8s.io/kubernetes/pkg/proxy/conntrack k8s.io/kubernetes/pkg/proxy/healthcheck -k8s.io/kubernetes/pkg/proxy/iptables k8s.io/kubernetes/pkg/proxy/ipvs +k8s.io/kubernetes/pkg/proxy/ipvs/ipset +k8s.io/kubernetes/pkg/proxy/ipvs/util k8s.io/kubernetes/pkg/proxy/metaproxier k8s.io/kubernetes/pkg/proxy/metrics k8s.io/kubernetes/pkg/proxy/util k8s.io/kubernetes/pkg/proxy/util/iptables -k8s.io/kubernetes/pkg/proxy/winkernel k8s.io/kubernetes/pkg/registry/core/service/allocator k8s.io/kubernetes/pkg/scheduler k8s.io/kubernetes/pkg/scheduler/apis/config k8s.io/kubernetes/pkg/scheduler/apis/config/latest k8s.io/kubernetes/pkg/scheduler/apis/config/scheme k8s.io/kubernetes/pkg/scheduler/apis/config/v1 -k8s.io/kubernetes/pkg/scheduler/apis/config/v1beta2 k8s.io/kubernetes/pkg/scheduler/apis/config/v1beta3 k8s.io/kubernetes/pkg/scheduler/apis/config/validation k8s.io/kubernetes/pkg/scheduler/framework @@ -1995,16 +2014,13 @@ k8s.io/kubernetes/pkg/security/apparmor k8s.io/kubernetes/pkg/securitycontext k8s.io/kubernetes/pkg/util/async k8s.io/kubernetes/pkg/util/config -k8s.io/kubernetes/pkg/util/conntrack k8s.io/kubernetes/pkg/util/filesystem k8s.io/kubernetes/pkg/util/flag k8s.io/kubernetes/pkg/util/flock k8s.io/kubernetes/pkg/util/goroutinemap k8s.io/kubernetes/pkg/util/goroutinemap/exponentialbackoff k8s.io/kubernetes/pkg/util/hash -k8s.io/kubernetes/pkg/util/ipset k8s.io/kubernetes/pkg/util/iptables -k8s.io/kubernetes/pkg/util/ipvs k8s.io/kubernetes/pkg/util/labels k8s.io/kubernetes/pkg/util/node k8s.io/kubernetes/pkg/util/oom @@ -2025,7 +2041,6 @@ k8s.io/kubernetes/pkg/volume/downwardapi k8s.io/kubernetes/pkg/volume/emptydir k8s.io/kubernetes/pkg/volume/fc k8s.io/kubernetes/pkg/volume/flexvolume -k8s.io/kubernetes/pkg/volume/gcepd k8s.io/kubernetes/pkg/volume/git_repo k8s.io/kubernetes/pkg/volume/hostpath k8s.io/kubernetes/pkg/volume/iscsi 
@@ -2051,7 +2066,7 @@ k8s.io/kubernetes/pkg/volume/vsphere_volume k8s.io/kubernetes/pkg/windows/service k8s.io/kubernetes/test/utils k8s.io/kubernetes/third_party/forked/golang/expansion -# k8s.io/legacy-cloud-providers v0.0.0 => k8s.io/legacy-cloud-providers v0.27.1 +# k8s.io/legacy-cloud-providers v0.0.0 => k8s.io/legacy-cloud-providers v0.28.0 ## explicit; go 1.20 k8s.io/legacy-cloud-providers/azure k8s.io/legacy-cloud-providers/azure/auth @@ -2093,7 +2108,7 @@ k8s.io/legacy-cloud-providers/gce/gcpcredential k8s.io/legacy-cloud-providers/vsphere k8s.io/legacy-cloud-providers/vsphere/vclib k8s.io/legacy-cloud-providers/vsphere/vclib/diskmanagers -# k8s.io/mount-utils v0.26.0-alpha.0 => k8s.io/mount-utils v0.27.3 +# k8s.io/mount-utils v0.26.0-alpha.0 => k8s.io/mount-utils v0.28.0 ## explicit; go 1.20 k8s.io/mount-utils # k8s.io/utils v0.0.0-20230406110748-d93618cff8a2 @@ -2101,6 +2116,7 @@ k8s.io/mount-utils k8s.io/utils/buffer k8s.io/utils/clock k8s.io/utils/clock/testing +k8s.io/utils/cpuset k8s.io/utils/exec k8s.io/utils/inotify k8s.io/utils/integer @@ -2193,33 +2209,34 @@ sigs.k8s.io/yaml # github.com/aws/aws-sdk-go/service/eks => github.com/aws/aws-sdk-go/service/eks v1.38.49 # github.com/digitalocean/godo => github.com/digitalocean/godo v1.27.0 # github.com/rancher/go-rancher => github.com/rancher/go-rancher v0.1.0 -# k8s.io/api => k8s.io/api v0.27.1 -# k8s.io/apiextensions-apiserver => k8s.io/apiextensions-apiserver v0.27.1 -# k8s.io/apimachinery => k8s.io/apimachinery v0.28.0-alpha.0 -# k8s.io/apiserver => k8s.io/apiserver v0.27.1 -# k8s.io/cli-runtime => k8s.io/cli-runtime v0.27.1 -# k8s.io/client-go => k8s.io/client-go v0.27.1 -# k8s.io/cloud-provider => k8s.io/cloud-provider v0.27.1 -# k8s.io/cluster-bootstrap => k8s.io/cluster-bootstrap v0.27.1 -# k8s.io/code-generator => k8s.io/code-generator v0.27.1 -# k8s.io/component-base => k8s.io/component-base v0.27.1 -# k8s.io/component-helpers => k8s.io/component-helpers v0.27.1 -# k8s.io/controller-manager => 
k8s.io/controller-manager v0.27.1 -# k8s.io/cri-api => k8s.io/cri-api v0.28.0-alpha.0 -# k8s.io/csi-translation-lib => k8s.io/csi-translation-lib v0.27.1 -# k8s.io/kube-aggregator => k8s.io/kube-aggregator v0.27.1 -# k8s.io/kube-controller-manager => k8s.io/kube-controller-manager v0.27.1 -# k8s.io/kube-proxy => k8s.io/kube-proxy v0.27.1 -# k8s.io/kube-scheduler => k8s.io/kube-scheduler v0.27.1 -# k8s.io/kubectl => k8s.io/kubectl v0.27.1 -# k8s.io/kubelet => k8s.io/kubelet v0.27.1 -# k8s.io/legacy-cloud-providers => k8s.io/legacy-cloud-providers v0.27.1 -# k8s.io/metrics => k8s.io/metrics v0.27.1 -# k8s.io/mount-utils => k8s.io/mount-utils v0.27.3 -# k8s.io/sample-apiserver => k8s.io/sample-apiserver v0.27.1 -# k8s.io/sample-cli-plugin => k8s.io/sample-cli-plugin v0.27.1 -# k8s.io/sample-controller => k8s.io/sample-controller v0.27.1 -# k8s.io/pod-security-admission => k8s.io/pod-security-admission v0.27.1 -# k8s.io/dynamic-resource-allocation => k8s.io/dynamic-resource-allocation v0.27.1 -# k8s.io/kms => k8s.io/kms v0.27.1 -# k8s.io/noderesourcetopology-api => k8s.io/noderesourcetopology-api v0.26.3 +# k8s.io/api => k8s.io/api v0.28.0 +# k8s.io/apiextensions-apiserver => k8s.io/apiextensions-apiserver v0.28.0 +# k8s.io/apimachinery => k8s.io/apimachinery v0.28.0 +# k8s.io/apiserver => k8s.io/apiserver v0.28.0 +# k8s.io/cli-runtime => k8s.io/cli-runtime v0.28.0 +# k8s.io/client-go => k8s.io/client-go v0.28.0 +# k8s.io/cloud-provider => k8s.io/cloud-provider v0.28.0 +# k8s.io/cluster-bootstrap => k8s.io/cluster-bootstrap v0.28.0 +# k8s.io/code-generator => k8s.io/code-generator v0.28.0 +# k8s.io/component-base => k8s.io/component-base v0.28.0 +# k8s.io/component-helpers => k8s.io/component-helpers v0.28.0 +# k8s.io/controller-manager => k8s.io/controller-manager v0.28.0 +# k8s.io/cri-api => k8s.io/cri-api v0.28.0 +# k8s.io/csi-translation-lib => k8s.io/csi-translation-lib v0.28.0 +# k8s.io/kube-aggregator => k8s.io/kube-aggregator v0.28.0 +# 
k8s.io/kube-controller-manager => k8s.io/kube-controller-manager v0.28.0 +# k8s.io/kube-proxy => k8s.io/kube-proxy v0.28.0 +# k8s.io/kube-scheduler => k8s.io/kube-scheduler v0.28.0 +# k8s.io/kubectl => k8s.io/kubectl v0.28.0 +# k8s.io/kubelet => k8s.io/kubelet v0.28.0 +# k8s.io/legacy-cloud-providers => k8s.io/legacy-cloud-providers v0.28.0 +# k8s.io/metrics => k8s.io/metrics v0.28.0 +# k8s.io/mount-utils => k8s.io/mount-utils v0.28.0 +# k8s.io/sample-apiserver => k8s.io/sample-apiserver v0.28.0 +# k8s.io/sample-cli-plugin => k8s.io/sample-cli-plugin v0.28.0 +# k8s.io/sample-controller => k8s.io/sample-controller v0.28.0 +# k8s.io/pod-security-admission => k8s.io/pod-security-admission v0.28.0 +# k8s.io/dynamic-resource-allocation => k8s.io/dynamic-resource-allocation v0.28.0 +# k8s.io/kms => k8s.io/kms v0.28.0 +# k8s.io/noderesourcetopology-api => k8s.io/noderesourcetopology-api v0.27.0 +# k8s.io/endpointslice => k8s.io/endpointslice v0.28.0 diff --git a/cluster-autoscaler/version/version.go b/cluster-autoscaler/version/version.go index fbbb44bacd80..92a5f905b7cd 100644 --- a/cluster-autoscaler/version/version.go +++ b/cluster-autoscaler/version/version.go @@ -17,4 +17,4 @@ limitations under the License. package version // ClusterAutoscalerVersion contains version of CA. 
-const ClusterAutoscalerVersion = "1.27.1" +const ClusterAutoscalerVersion = "1.28.0" diff --git a/hack/boilerplate/boilerplate.py b/hack/boilerplate/boilerplate.py index 7051b3751da4..e9c0a53d2bae 100755 --- a/hack/boilerplate/boilerplate.py +++ b/hack/boilerplate/boilerplate.py @@ -160,6 +160,7 @@ def file_extension(filename): "cluster-autoscaler/cloudprovider/ionoscloud/ionos-cloud-sdk-go", "cluster-autoscaler/cloudprovider/hetzner/hcloud-go", "cluster-autoscaler/cloudprovider/oci", + "cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk", "cluster-autoscaler/cloudprovider/mcm", "cluster-autoscaler/integration", "cluster-autoscaler/hack", diff --git a/hack/verify-golint.sh b/hack/verify-golint.sh index 803b370b09e6..a32965a99d43 100755 --- a/hack/verify-golint.sh +++ b/hack/verify-golint.sh @@ -40,6 +40,8 @@ excluded_packages=( 'cluster-autoscaler/cloudprovider/hetzner/hcloud-go' 'cluster-autoscaler/expander/grpcplugin/protos' 'cluster-autoscaler/cloudprovider/tencentcloud/tencentcloud-sdk-go' + 'cluster-autoscaler/cloudprovider/volcengine/volc-sdk-golang' + 'cluster-autoscaler/cloudprovider/volcengine/volcengine-go-sdk' ) FIND_PACKAGES='go list ./... 
' diff --git a/multidimensional-pod-autoscaler/AEP.md b/multidimensional-pod-autoscaler/AEP.md new file mode 100644 index 000000000000..b06b2110c5ac --- /dev/null +++ b/multidimensional-pod-autoscaler/AEP.md @@ -0,0 +1,542 @@ +# AEP-5342: Multi-dimensional Pod Autoscaler + +AEP - Autoscaler Enhancement Proposal + + +- [Release Signoff Checklist](#release-signoff-checklist) +- [Summary](#summary) +- [Motivation](#motivation) + - [Goals](#goals) + - [Non-Goals](#non-goals) +- [Proposal](#proposal) + - [User Stories](#user-stories-optional) + - [A New MPA Framework with Reinforcement Learning](#a-new-mpa-framework-with-reinforcement-learning) + - [Different Scaling Actions for Different Types of Resources](#different-scaling-actions-for-different-types-of-resources) +- [Design Details](#design-details) + - [Test Plan](#test-plan) + - [Unit Tests](#unit-tests) + - [Integration Tests](#integration-tests) + - [End-to-end Tests](#end-to-end-tests) + - [Graduation Criteria](#graduation-criteria) +- [Production Readiness Review Questionnaire](#production-readiness-review-questionnaire) + - [Feature Enablement and Rollback](#feature-enablement-and-rollback) + - [Rollout, Upgrade and Rollback Planning](#rollout-upgrade-and-rollback-planning) + - [Monitoring Requirements](#monitoring-requirements) + - [Dependencies](#dependencies) + - [Scalability](#scalability) + - [Troubleshooting](#troubleshooting) +- [Implementation History](#implementation-history) +- [Drawbacks](#drawbacks) +- [Alternatives](#alternatives) + + +## Release Signoff Checklist + +Items marked with (R) are required *prior to targeting to a milestone / release*. 
+ +- [ ] (R) AEP approvers have approved the AEP status as `implementable` +- [ ] (R) Design details are appropriately documented +- [ ] (R) Test plan is in place, giving consideration to SIG Architecture and SIG Testing input (including test refactors) + - [ ] e2e Tests for all Beta API Operations (endpoints) + - [ ] (R) Ensure GA e2e tests meet requirements for [Conformance Tests](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/conformance-tests.md) + - [ ] (R) Minimum Two Week Window for GA e2e tests to prove flake free +- [ ] (R) Graduation criteria is in place + - [ ] (R) [all GA Endpoints](https://github.com/kubernetes/community/pull/1806) must be hit by [Conformance Tests](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/conformance-tests.md) +- [ ] (R) Production readiness review completed +- [ ] (R) Production readiness review approved +- [ ] "Implementation History" section is up-to-date for milestone +- [ ] User-facing documentation has been created in [kubernetes/website], for publication to [kubernetes.io] +- [ ] Supporting documentation—e.g., additional design documents, links to mailing list discussions/SIG meetings, relevant PRs/issues, release notes + + + +[kubernetes.io]: https://kubernetes.io/ +[kubernetes/enhancements]: https://git.k8s.io/enhancements +[kubernetes/kubernetes]: https://git.k8s.io/kubernetes +[kubernetes/website]: https://git.k8s.io/website + +## Summary + +Currently, Horizontal Pod Autoscaler (HPA) and Vertical Pod Autoscaler (VPA) control the scaling actions separately as independent controllers to determine the resource allocation for a containerized application. 
+Due to the independence of these two controllers, when they are configured to optimize the same target, e.g., CPU usage, they can lead to an awkward situation where HPA tries to spin up more pods based on the higher-than-threshold CPU usage while VPA tries to squeeze the size of each pod based on the lower CPU usage (after scaling out by HPA).
+The final outcome would be a large number of small pods created for the workloads.
+Manually fine-tuning the timing of vertical/horizontal scaling and their prioritization is usually needed to keep HPA and VPA in sync.
+
+We propose a Multi-dimensional Pod Autoscaling (MPA) framework that combines the actions of vertical and horizontal autoscaling in a single action but separates the actuation completely from the controlling algorithms.
+It consists of three controllers (i.e., a recommender, an updater, and an admission controller) and an MPA API (i.e., a CRD object or CR) that connects the autoscaling recommendations to actuation.
+The multidimensional scaling algorithm is implemented in the recommender.
+The scaling decisions derived from the recommender are stored in the MPA object.
+The updater and the admission controller retrieve those decisions from the MPA object and actuate the vertical and horizontal actions.
+Our proposed MPA (with the separation of recommendations from actuation) allows developers to replace the default recommender with their own customized recommender, so developers can provide a recommender implementing advanced algorithms that control both scaling actions across different resource dimensions.
+
+## Motivation
+
+To scale application Deployments, Kubernetes supports both horizontal and vertical scaling with a Horizontal Pod Autoscaler (HPA) and a Vertical Pod Autoscaler (VPA), respectively.
+Currently, [HPA] and [VPA] work separately as independent controllers to determine the resource allocation of a containerized application.
+- HPA determines the number of replicas for each Deployment of an application with the aim of automatically scaling the workload to match demand. The HPA controller, running within the Kubernetes control plane, periodically adjusts the desired scale of its target (e.g., a Deployment) to match observed metrics such as average CPU utilization, average memory utilization, or any other custom metric the users specify (e.g., the rate of client requests per second or I/O writes per second). The autoscaling algorithm that the HPA controller uses is based on the equation `desired_replicas = current_replicas * (current_metric_value / desired_metric_value)`.
+- VPA determines the size of containers, namely the CPU and memory Request and Limit. The primary goal of VPA is to reduce maintenance costs and improve the utilization of cluster resources. When configured, it sets the Request and Limit automatically based on historical usage, allowing proper scheduling onto nodes so that the appropriate resource amount is available for each replica. It also maintains the ratios between Limits and Requests that were specified in the initial container configuration.
+
+When using HPA and VPA together to both reduce resource usage and guarantee application performance, VPA resizes pods based on their measured resource usage while HPA scales in/out based on the customer application performance metric, and their logic is entirely ignorant of each other.
+Due to the independence of these two controllers, they can end up in an awkward situation where VPA tries to squeeze the pods into smaller sizes based on their measured utilization, while HPA tries to scale out the application to improve the customized performance metrics.
+It is also [not recommended] to use HPA together with VPA for CPU or memory metrics.
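The HPA equation quoted above can be illustrated with a minimal, hypothetical Python sketch (the real controller additionally applies a tolerance band and min/max replica bounds, which are omitted here):

```python
import math

def desired_replicas(current_replicas: int,
                     current_metric_value: float,
                     desired_metric_value: float) -> int:
    """Core HPA scaling equation; the result is rounded up so the
    workload is never under-provisioned."""
    ratio = current_metric_value / desired_metric_value
    return math.ceil(current_replicas * ratio)

# 4 replicas at 90% average CPU utilization against a 60% target:
print(desired_replicas(4, 90, 60))  # → 6
```

Note that rounding up means scale-in is more conservative than scale-out: a metric only slightly below target may not reduce the replica count at all.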
+Therefore, there is a need to combine the two controllers so that horizontal and vertical scaling decisions are made in combination for an application, achieving both resource efficiency and the application's service-level objectives (SLOs)/performance goals.
+However, existing VPA/HPA designs cannot accommodate such requirements.
+Manually fine-tuning the timing or frequency of vertical/horizontal scaling and their prioritization is usually needed to keep HPA and VPA in sync.
+
+[HPA]: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/
+[VPA]: https://github.com/kubernetes/autoscaler/tree/master/vertical-pod-autoscaler
+[not recommended]: https://cloud.google.com/kubernetes-engine/docs/concepts/horizontalpodautoscaler
+
+### Goals
+
+- Design and implement a holistic framework with a set of controllers to achieve multi-dimensional pod autoscaling (MPA).
+- Separate the decision actuation from recommendations for both horizontal and vertical autoscaling, which enables users to replace the default recommender with their customized recommender.
+- Re-use existing HPA and VPA libraries as much as possible in MPA.
+
+### Non-Goals
+
+- Design of new multi-dimensional pod autoscaling algorithms. Although this proposal will enable alternate recommenders, no alternate recommenders will be created as part of this proposal.
+- Rewriting functionality that has already been implemented in existing HPA and VPA.
+- This proposal will not support running multiple recommenders for the same MPA object. Each MPA object is supposed to use only one recommender.
+
+## Proposal
+### User Stories
+#### A New MPA Framework with Reinforcement Learning
+
+Many research studies show that combined horizontal and vertical scaling can guarantee application performance with better resource efficiency using advanced algorithms such as reinforcement learning [1, 2]. These algorithms cannot be used with existing HPA and VPA frameworks.
A new framework (MPA) is needed to combine horizontal and vertical scaling actions and separate the actuation of scaling actions from the autoscaling algorithms. The new MPA framework will work for all workloads on Kubernetes.
+
+[1] Haoran Qiu, Subho S. Banerjee, Saurabh Jha, Zbigniew T. Kalbarczyk, Ravishankar K. Iyer (2020). FIRM: An Intelligent Fine-Grained Resource Management Framework for SLO-Oriented Microservices. In Proceedings of the 14th USENIX Symposium on Operating Systems Design and Implementation (OSDI 2020).
+
+[2] Haoran Qiu, Weichao Mao, Archit Patke, Chen Wang, Hubertus Franke, Zbigniew T. Kalbarczyk, Tamer Başar, Ravishankar K. Iyer (2022). SIMPPO: A Scalable and Incremental Online Learning Framework for Serverless Resource Management. In Proceedings of the 13th ACM Symposium on Cloud Computing (SoCC 2022).
+
+#### Different Scaling Actions for Different Types of Resources
+
+For certain workloads, to ensure a custom metric (e.g., throughput or request-serving latency), horizontal scaling typically controls the CPU resources effectively, while vertical scaling is typically effective in increasing or decreasing the allocated memory capacity per pod. Thus, there is a need to control different types of resources at the same time using different scaling actions. Existing VPA and HPA can control these separately, but they cannot achieve the same objective, e.g., guaranteeing a custom metric within an SLO target, by controlling both dimensions with different resource types independently. For example, they can lead to an awkward situation where HPA tries to spin up more pods based on the higher-than-threshold CPU usage while VPA tries to squeeze the size of each pod based on the lower memory usage (after scaling out by HPA). In the end, there will be a large number of small pods created for the workloads.
+
+## Design Details
+
+Our proposed MPA framework consists of three controllers (i.e., a recommender, an updater, and an admission controller) and an MPA API (i.e., a CRD object or CR) that connects the autoscaling recommendations to actuation. The figure below describes the architectural overview of the proposed MPA framework.
+
+![MPA Design Overview](./kep-imgs/mpa-design.png "MPA Design Overview")
+
+**MPA API.** Application owners specify the autoscaling configurations, which include:
+
+1. whether they only want to know the recommendations from MPA or they want MPA to directly actuate the autoscaling decisions;
+2. application SLOs (e.g., in terms of latency or throughput), if any;
+3. any custom metrics, if any; and
+4. other autoscaling configurations that exist in HPA and VPA (e.g., desired resource utilizations, container update policies, min and max number of replicas).
+
+The MPA API is also responsible for connecting the autoscaling actions generated by the MPA Recommender to the MPA Admission Controller and Updater, which actually execute the scaling actions. The MPA API is created based on the [multidimensional Pod scaling service] (not open-sourced) provided by Google. The MPA API is a Custom Resource Definition (CRD) in Kubernetes, and each MPA instance is a CR. The MPA CR keeps track of recommendations on target requests and target replica numbers.
+
+[multidimensional Pod scaling service]: https://cloud.google.com/kubernetes-engine/docs/how-to/multidimensional-pod-autoscaling
+
+**Metrics APIs.** The Metrics APIs serve both default metrics and custom metrics associated with any Kubernetes objects. Custom metrics could be the application latency, throughput, or any other application-specific metrics.
HPA already consumes metrics from such [a variety of metric APIs] (e.g., the `metrics.k8s.io` API for resource metrics provided by metrics-server, the `custom.metrics.k8s.io` API for custom metrics provided by "adapter" API servers from metrics solution vendors, and the `external.metrics.k8s.io` API for external metrics provided by the custom metrics adapters as well). A popular choice for the metrics collector is Prometheus. The metrics are then used by the MPA Recommender for making autoscaling decisions.
+
+[a variety of metric APIs]: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-metrics-apis
+
+**MPA Recommender.** The MPA Recommender retrieves the time-indexed measurement data from the Metrics APIs and generates the vertical and horizontal scaling actions. The actions from the MPA Recommender are then updated in the MPA API object. The autoscaling behavior is based on user-defined configurations. Users can implement their own recommenders as well.
+
+**MPA Updater.** The MPA Updater updates the number of replicas in the deployment and evicts the pods eligible for vertical scaling.
+
+**MPA Admission-Controller.** If users intend to directly execute the autoscaling recommendations generated by the MPA Recommender, the MPA Admission-Controller updates the deployment configuration (i.e., the size of each replica) and configures the rolling update of the application Deployment.
+
+### Action Actuation Implementation
+
+To actuate the decisions without losing availability, we plan to:
+
+1. evict pods with min-replicas configured and update Pod sizes with the web-hooked admission controller (for vertical scaling), and
+2. add or remove replicas (for horizontal scaling).
+
+We use a web-hooked admission controller to manage vertical scaling because if the actuator directly updated the vertical scaling configurations through the deployment, it would potentially overload etcd (as vertical scaling might be quite frequent).
+The MPA Admission Controller intercepts Pod creation requests and rewrites each request by applying the recommended resources to the Pod spec.
+We do not use the web-hooked admission controller to manage horizontal scaling, as it could slow down the pod creation process.
+In the future, when [in-place vertical resizing](https://github.com/kubernetes/enhancements/issues/1287) is enabled, we can offer in-place vertical resizing while keeping the web-hooked admission controller for eviction-based vertical resizing as an option as well.
+
+![MPA Action Actuation](./kep-imgs/mpa-action-actuation.png "MPA Action Actuation")
+
+Pros:
+- Vertical scaling is handled by webhooks to avoid overloading etcd
+- Horizontal scaling is handled through the deployment to avoid the extra overhead of webhooks
+- Authentication and authorization for vertical scaling are handled by admission webhooks
+- Recommendation and actuation are completely separated
+
+Cons:
+- Webhooks introduce extra overhead for vertical scaling operations (can be avoided once in-place pod resizing is enabled, without eviction)
+- Vertical and horizontal scaling executions are separated (can be avoided once in-place pod resizing is enabled, without eviction)
+- State changes in pod sizes are not persisted (too much to keep in etcd; Prometheus could be used to store pod state changes)
+
+### Action Recommendation Implementation
+
+To generate the vertical scaling recommendation, we reuse the VPA libraries as much as possible to implement the scaling algorithm, integrated with the newly generated MPA API code.
+To do that, we need to update the code that reads and updates VPA objects so that it interacts with MPA objects instead.
+To generate the horizontal scaling recommendation, we reuse the HPA libraries, integrated with the MPA API code, to read and update the MPA objects.
+We integrate vertical and horizontal scaling in a single feedback cycle.
+As an initial solution, vertical scaling and horizontal scaling are performed alternately (vertical scaling first).
+Vertical scaling scales the CPU and memory allocations based on historical usage, and horizontal scaling scales the number of replicas based on either CPU utilization or a custom metric.
+In the future, we can consider more sophisticated prioritization and conflict resolution.
+The separation of recommendation and actuation allows a customized recommender to replace the default recommender.
+For example, users can plug in their RL-based controller in place of the MPA Recommender, receiving measurements from the Metrics Server and modifying the MPA objects directly to give recommendations.
+
+The implementation of the MPA framework (the backend) is based on the existing HPA and VPA codebases so that it requires only minimal code maintenance.
+Reused Codebase References:
+- HPA: https://github.com/kubernetes/kubernetes/tree/master/pkg/controller/podautoscaler
+- VPA: https://github.com/kubernetes/autoscaler/tree/master/vertical-pod-autoscaler
+
+### MPA API Object
+
+We reuse the CR definitions from the [MultidimPodAutoscaler](https://cloud.google.com/kubernetes-engine/docs/how-to/multidimensional-pod-autoscaling) object developed by Google.
+`MultidimPodAutoscaler` is the configuration for multi-dimensional Pod autoscaling, which automatically manages Pod resources and their count based on historical and real-time resource utilization.
+`MultidimPodAutoscaler` has two main fields: `spec` and `status`.
+ +#### MPA Object + +``` +apiVersion: autoscaling.gke.io/v1beta1 +kind: MultidimPodAutoscaler +metadata: + name: my-autoscaler +# MultidimPodAutoscalerSpec +spec: + scaleTargetRef: + apiVersion: apps/v1 + kind: Deployment + name: my-target + policy: + updateMode: Auto + goals: + metrics: + - type: Resource + resource: + # Define the target CPU utilization request here + name: cpu + target: + type: Utilization + averageUtilization: target-cpu-util + constraints: + global: + minReplicas: min-num-replicas + maxReplicas: max-num-replicas + containerControlledResources: [ memory, cpu ] # Added cpu here as well + container: + - name: '*' # either a literal name, or "*" to match all containers + # this is not a general wildcard match + # Define boundaries for the memory request here + requests: + minAllowed: + memory: min-allowed-memory + maxAllowed: + memory: max-allowed-memory + # Define the recommender to use here + recommenders: + - name: my-recommender + +# MultidimPodAutoscalerStatus +status: + lastScaleTime: timestamp + currentReplicas: number-of-replicas + desiredReplicas: number-of-recommended-replicas + recommendation: + containerRecommendations: + - containerName: name + lowerBound: lower-bound + target: target-value + upperBound: upper-bound + conditions: + - lastTransitionTime: timestamp + message: message + reason: reason + status: status + type: condition-type + currentMetrics: + - type: metric-type + value: metric-value +``` + +### Test Plan + + + +[ ] I/we understand the owners of the involved components may require updates to +existing tests to make this code solid enough prior to committing the changes necessary +to implement this enhancement. + +#### Unit Tests + + + + + + + +Unit tests are located at each controller package. + +#### Integration Tests + + + + + +Integration tests are to be added in the beta version. + +#### End-to-End Tests + + + + + +End-to-end tests are to be added in the beta version. 
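+
+Before moving to the production readiness review, the alternating recommendation cycle described in the Action Recommendation Implementation section can be summarized as the following sketch (illustrative pseudocode only; the actual implementation reuses the VPA recommender and HPA replica-calculation libraries, and the percentile-based sizing stands in for the VPA-style estimator):
+
+```
+every recommendation interval:
+    usage <- per-container CPU/memory samples from the Metrics APIs
+    mpa   <- the MultidimPodAutoscaler object being reconciled
+
+    # Vertical first: size each container from historical usage,
+    # clipped to the per-container constraints.
+    requests <- percentile(usage history), clipped to [minAllowed, maxAllowed]
+
+    # Then horizontal: replicas from current utilization vs. the target,
+    # clipped to the global constraints.
+    replicas <- ceil(currentReplicas * currentUtilization / targetUtilization)
+    replicas <- clip(replicas, minReplicas, maxReplicas)
+
+    # Record both in the MPA object; the MPA Updater and the
+    # Admission Controller actuate them in a separate step.
+    mpa.status.recommendation  <- requests
+    mpa.status.desiredReplicas <- replicas
+```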
+
+## Production Readiness Review Questionnaire
+
+### Feature Enablement and Rollback
+
+#### How can this feature be enabled / disabled in a live cluster?
+
+MPA can be enabled by checking the prerequisites and executing `./deploy/mpa-up.sh`.
+
+#### Does enabling the feature change any default behavior?
+
+No.
+
+#### Can the feature be disabled once it has been enabled (i.e. can we roll back the enablement)?
+
+MPA can be disabled by executing `./deploy/mpa-down.sh`.
+
+#### What happens if we reenable the feature if it was previously rolled back?
+
+There is no impact, because every time MPA is enabled it starts from a fresh deployment of MPA.
+
+#### Are there any tests for feature enablement/disablement?
+
+End-to-end tests of MPA will be included in the beta version.
+
+### Dependencies
+
+#### Does this feature depend on any specific services running in the cluster?
+
+MPA relies on the cluster-level `metrics.k8s.io` API (for example, from [metrics-server](https://github.com/kubernetes-sigs/metrics-server)).
+For the evict-and-replace mechanism, the API server needs to support the MutatingAdmissionWebhook API.
+
+### Scalability
+
+#### Will enabling / using this feature result in any new API calls?
+No. Replacing HPA/VPA with MPA only changes how recommendations are generated (separating recommendation from actuation).
+The original API calls used by HPA/VPA are reused by MPA, and no new API calls are introduced.
+
+#### Will enabling / using this feature result in introducing new API types?
+Yes, MPA introduces a new Custom Resource `MultidimPodAutoscaler`, similar to `VerticalPodAutoscaler`.
+
+#### Will enabling / using this feature result in any new calls to the cloud provider?
+No.
+
+#### Will enabling / using this feature result in increasing size or count of the existing API objects?
+No. It will not affect any existing API objects.
+
+#### Will enabling / using this feature result in increasing time taken by any operations covered by existing SLIs/SLOs?
+No. To the best of our knowledge, it will not increase the time taken by any operations covered by [existing SLIs/SLOs](https://github.com/kubernetes/community/blob/master/sig-scalability/slos/slos.md).
+
+#### Will enabling / using this feature result in non-negligible increase of resource usage (CPU, RAM, disk, IO, ...) in any components?
+No.
+
+#### Can enabling / using this feature result in resource exhaustion of some node resources (PIDs, sockets, inodes, etc.)?
+No.
+
+### Troubleshooting
+
+#### How does this feature react if the API server and/or etcd is unavailable?
+
+#### What are other known failure modes?
+
+#### What steps should be taken if SLOs are not being met to determine the problem?
+
+## Alternatives
+
+### MPA as a Recommender Only
+
+An alternative option is to have MPA act just as a recommender.
+For VPA, given its existing support for customized recommenders, MPA can be implemented as a recommender that writes to a VPA object. The VPA updater and admission controller will then actuate the recommendation.
+For HPA, additional support for alternative recommenders is needed so MPA can write scaling recommendations to the HPA object as well.
+
+- Pros:
+  - Less work and easier maintenance in the future
+  - Simple, especially when vertical and horizontal scaling are two completely independent control loops
+- Cons:
+  - Additional support from HPA (enabling customized recommenders) is needed, which requires an update in upstream Kubernetes
+  - Hard to coordinate/synchronize when horizontal and vertical scaling states and decisions are kept in different places (i.e., the HPA and VPA objects)
+
+### Google GKE's Approach to MPA
+
+In this [alternative approach](https://cloud.google.com/kubernetes-engine/docs/how-to/multidimensional-pod-autoscaling) (not open-sourced), a `MultidimPodAutoscaler` object modifies memory and/or CPU requests and adds replicas so that the average utilization of each replica matches the target utilization.
+The MPA object is translated into VPA and HPA objects, so in the end there are two *independent* controllers managing the vertical and horizontal scaling of the application deployment.
diff --git a/multidimensional-pod-autoscaler/kep-imgs/mpa-action-actuation.png b/multidimensional-pod-autoscaler/kep-imgs/mpa-action-actuation.png
new file mode 100644
index 0000000000000000000000000000000000000000..95e2d687b6aa926c5cc8d8e6a1084c47909338bc
GIT binary patch
literal 221661
zcmeEucT`i`_AUYnf`CdB5W#{-7pc-MfYN&>2uLR&U1|UeN>}MfQJQq=ouEkXq1VuR
zD>RrsqtOcZFfC|Jy!4Ij+35*_ihESgiO)3Letx$3oA>Rr_s&yZdqDGu78baBRkM}x z^Sy#d5gO80_xaT%e!L=%yktNGeodBo{|hnai=0m;sV=>2%ZvjQlKy8tsV`IPGRS?q zp?7M?pW+F5+?oCv0@2H$GcGf(q9om~DjJ)f-Sr`K@aOv`GJWMaf!|rxR00LEjF&LE z_)pfiRJG1?(nY_y_-W|Vwz}*WXDyx_hQYVhA&fF%pQJt&S(7CM3qKG~!}MOTrW0eX zqpYJa4lKAYk}BTo(f4B6X;CH`- z`%>tK*b~fG{*fE&w(G*{_w1R=X~)BuzpG3?6V?96(S4f!rTOd~C(lYuC3R&;rFf<3 zgnTUxh}9t}w9Z~1{r)}dopTh~yIDnWBxj5mo13<++A8(q#G@+?wu5_Sx%qM~{lBooViJRk2oSRe_yb77E%VV_Vvo zTD?0;7F-r0+ABJ!)YWK=gV|rnjw;=it5RanY2duXd7I`v4H=CJrz=M&S3B1jryQpx zR}3de?Xuc^4u)I&Q5sS1w~};LwGZ?1J}K!g-g49~(5_EAzGo2-Ttg<$W|EA@}v8um4>Zz z2aRx<$U@>3K?LZH{&J<-*RMV^z5?$VCb>KEn(cdy+9Y~?K6s{~xyU6$^g@j3lIY;v z2yRX82z{yI_Z8a}V^0U|_nOCkBv=^R`u4oGpEEC+%4}OK%g^R%*^dOLFgH0AP14l} z*A(tk?q54HI8rB(40;|!M2WH?D}6F1JfbI9Yxi0elPVR@8dD^mvP+W?^P2Pz~#;EKEY$!gZuWP5j}!(4%t{ zXF}AwE}=X&P1fKSWXMR#Z=HS4XnpY(nLX$n-@Zw$w-fFnhAogS>>lfz3#H=APG3!* zd2JS69f_Qdghg6?()na}U%c^_DcAS#pgJlPDf|NV*7sZYFZa=LP&3f$Go!DzbLHH& zWgWZb1>Oz2RCdWvNt2sb_)hsXce891tWGg#Zhdxp)+?5Qw*%x3t{U5pTAD_3mVcJ(i%_BW4JesELWXolPP!~{d zQ4};JbRvl1ai|L}_9E!9YPP)14#8X1GgDIrqdrD3{@hC^x& zY9Ln{Z^5EI>CYR&j@vI*Q2`d$nb)W zQ?Z4htt!~mV`S~~!G%?{oyu^xah9=TttljJCUdgu5Weo0i!4UkRxUitT`8YKEN;k7 z@_0;a7*(&iq1q*z>zE`Jy{f&hdmU}GOtLJSqa-1b+tYiEc|ji{gm=vkcUU^4c)dPj z(~mvo;A>~rNqEnBQB3(T?(g=Uw_9FdVSxBBbo$kZ%?0HxgkUL>G9d~q~}Q^6|`k!Sp8q!^b=k`O{=cOeJl9$XFUR@!0BAQ^XJvl zUyyO~gn>TvhP))!cvJ_+>UvDpdGt)xiBzml9d#mOJ$kx3;{}%P9?eSbNLyv9YVxwZ zB{)S$OnRPG#{WNESU(CVV=NvPQ~r;4{Car)G%*$f^z+y6|NcUTh@A9TWV5N??H?Lu zW!*%I{O5lC*$Shc%rrEBFS&sFe{YuHQ~(6}Ph(~6Z6GAXQTS}RCjBRK{Az{t_=w`) z5Ac7U3^+q~{=jiMBx~(|H!-V>%rxBbAMGpn)xQBkWFleA{!dmR#Q72ZQ{-axN9eD*)y@<)&S8=&z2G0}f$+xp3=0Nk1S=hZ6z zWEH^W&isGka=Q=yn0GQla(Z4RRmnZAnOs$15e;zPe%cyY+C*nKU}Pfu`>N{gBivc+ zQVgNz+gP?NllPb5J3sRDgEH)^X`|P-c(JBr?%phpSg-A~Vj}nlzbHRJZ!fIWA$=VU z%51g4;$LL_2?8t$y%A-)hpqBbFz#X96AhpVMJ8P}#ZVJ2;y9Ub8Go4wnq@c?;%8*H zI*={7ef&`idH(hw1XPy|%j6Qihz(c5=;kGD)b4ceVvrE&r0275^Ec86e|Np*TLG}5{McdN 
zJHvTx;o|K&;>2FnGAWt$Hc7g|kNF=TqUup~yIT!t=aW{NtJgQ-(b#a=)V|X(6b5l}^|yuh9@}MI8>(D~IIHr=cmbZ7hq_KUC-LRG2r@;D6_Hj#M`qRp(A*T}#=ckV8kCA3>K__MzaJ*&G zfW>dPk{&7XvsjOpdBWBA0a)jXVt(-_RP=@r6XU%5n%r|v;#$C!n`(^55t`J;NK^Vs zl9ccU2vvQhO4zRljR^yBlWfS!lh2(ZwBuAG#TIsVIWRolgec1*;Ishv8-H5A^yjSs zpj&Dj2RMWyL_FVMFNG-+^k_Ef@9@U>kTnpuBQF_AjANP)T(P`(fV%z{TrLX++~0(F z;$+e06TYa)$D7dpN+G4cfm&|>A=eq+3&%(qRb{sV0OpqtZGTZv@7n3@Z#ar8N`UFH zSpQnHRukvzTBF1{}F1R`Wa@*$fx6a^5 z>OUekF(`NnZ{04edASz!n;B&)$ah!zmmt2&{ku@GA@JlfVE`$%2R{9+w7Lfuh*Oou zxYA!9y-IqlDl?6P(3SrM8rAA}cOQm`=7ZLxMIQA?<9tN+n?L>Xw!f?g@DlG=vxfJj zPkxv;F(>=(kIt4;5E@?vMoQZr{+Bt`5*rM1`&UajO*{YwC_|*t{WjuVuYkvfa3Kw+3X0$N&{PE!%JLPfe?xKYzgY=-|f5hJJe;&?|~tfMxvYm1F<0 z>3hJEWS#l&Pa_db#mhyIpz+OrYWlxM{Ers;uMz(jHtn1~*Q9TT*Nj#J1nMUdnBX8U5_kYfb~TrD!QwA#ik<8$d!<*;sNVbo|; z0&I0Kw?Ahv7nY;V1%oXp_QNbf`V~X4Np!p*HJe!XC2$4$K+ri!ofc(Md641SGqFP3 zvC3A$bqG%yc1ZfRmbqo9T+qr4ojZ_JJb^(n=Hz+Msn<5^nc_pP7jf0N#p5`2{m)md-7|rS0IfC~#;Gb699@)yW6N#}pdHfV#rK zJO+o(4)%6_2HUHj6>3Vk99vV}V7!K^Z4;>EDi5~W9q%M803Z>Dq&AVHBDq9 z^Kc!cQzCt2S@0Y?jQ%S&RBUN z=4f9PF0rv>Q&$$GJFzuLtJr@b3(@yUDSa{e{`c7eX zmr%(&74GF=tOu%W+`C02o$AVQd~mSz;f{zi^o5BQ_lL*+GDpNpdvGZcZ$Q=i%e){{ z8nY}E-t(D_lN{56mT(nHK6{izZn@h?xh&szFL(oS4}k&~(Sk zsFG5LjaSM0nKmCH3aZzxeuz-;9bPpoQ}Lh^Ho%E*kGW;lm{Tve0lX%OUi~9_v%Y%U zA;h@fA4_C7uiJh}%Ju1(b0e6Gc2IUohbc_isvd`P*34J7)`d_eX#h&9Vodt&hH2~nhnWBD@jyz$4}R~ViEQKYDapN+{;b7AV`C>2VqaPMpjX!BhuB&L zx_7R8gbiHuEiUIM%*g6R=@I3TPJX}ftFS4YP?h_n#YQEu*9(A*94p(q-{J8jm=JSB z`H0`wfXC*$aLuqrzTw&rgC|zC$Us5&lBBVL2#O%+%|Yk1OQ%*5BJe6@u!rn|91?P= z3^nHVs?hu-b?$L|#aN1GeIpt!$aeWqo?RHv&E@9hx{;B{;&UKkvpns_MQgRFoICR> zW(56J%r`ru?}()wQRiM%_|RWw2_NZ%=jz~jH#Pk9%^}fA+)J^ZlH!4(1sZ!77@L};oLu%u>gC}I$VNs?n^4m8v`{cu%-L1U z7CC1tmi0wYw1Q?)YA$O~3*zi>`8J3(J8NJW!&hd%R>n0sEPRTPnXk8SE^*-6PZ9LR zG2rp|dc2>(1NxfT#bx}eDs(tjFB0L$7)&(m8Rwe#Jr2B3<<=wTSgFidvbEnN7A@Qy zk^J#y=qQgeWZon?xMO>0wM0kkIqaQ<=V1MM#g6I9M=n*>5EI`FgEC&O+*qtfOWE3; zrz9u!`KrznvkhhJ_>}{@L9Fao?++Ieb@Tf#ymCqQzBPG!r`=4G-+|iP5OcV%_~J}n 
zl3alRCpb1S<4L1wBF&oCn5bEgM~3tcroXuK@_lAXBgYTrt2 zE>u@>k2hPB4}%Q18)j>uyq`*sk6mU%kORP6^ctQnpR%qgor*NInVCHuPn;M!!3`G7 z+etYCs`hz&xEChfwl4qa$qq%iQi}LtC{2HrMMlNOPCx=7j=Ew}0Q_~TS--2am1P8? zHU{}_y0Q&18u*+ierGiUH|d0^ZXD-7p|k=HFUDy#+Q(4WSfA|xK*XqjWN5y~d!f0! zK1uEzv-Smnn1>Sf73Tf2B4ya>?oXOi21Q|cozG7#2HN!~Jh?T^vvS>vB|e7R@Z%_wKhZu`U+30{a~jy0uQ`;@ zhv&$se0H!H`HJuQ@F**yzOnvQ;pE}y8w~P`-EjUUrAlfnus~4n$q|Bo%!;LyyJ3@N z1V>*^MqXkXLxhe)p@AW@cDdPLuxodUN{79}a|eC@=i*QUMq$^7W9CQ0x_R3b-UW~K zL55&KVKCHl-=2t;Y5irV90DS5RHHIIaN{@44@8wqg!<+}HH$Gmc`=-IyBF`FZ26+K z>?T4jp?%P;H#`QComUz7N`Hh)*Usj~AF&AUjdT_la9a;xJ${s}DOj~! z1H1@8bHl>g9gul|cCJpwXo*#pRkG*Jrt*hCb_L|SH(1)iTa)gMtgGl#&B()f^i*B1 zGyR^Z2`_8Mu{*-zBs*54{oXR1x@kYT{mq}PgKohwJX2Y!y3{My0WRXGV-W96z&fOm z8zsz8Ws|V8emy#P+ml4aM{}`$SOv;0GC6i=Gn~%_7rjzHZrBf~lXm+WE;7a{oShe< z`ui~diFgI?gj>tf{Z~JP+xBJSm%l^YRu&!lUD06kc;?Ilx$;|ylBn*9N)1%FMUUV5 zJ{^Yhsr?vc;VVMw;XyvzC&LZnJInTGXRDA@4gjDN;pHMSMybI%LTp_lvz@E zAZKvA%GFd`Fl9ZDEnF#Ev$bnEnJua|nNF?}*q&4GGO>+UTgZqx%g}$^7@@lua=%DDPsDBEjU(nBJ3HqLn6wQE;#=LqC7Y_T$V1gVu3=U==FZdb>{(D3KH z+&heQI6G%Vb^C}D>8|@JO$=(^#DB-JqG2wg!bqfcGHmo6L4XSFo5TbKoBJDW%L;hn z(rs4}a8cFs@EEh~cZ$D1OE0@+m`VmR5t^ z%oqZQZe9iz<#F9%`lHEz=L6Jw2`K^F$#l%)fKKeUOp~i-G) zG|$aGBd4>9u#-iXhT34w-8&O4v3GOQ?OXzId9JPVX$GBS3EC9`+kZ48)~J~XNI)QE z>1cB;NJg2EOSd2|J6VC4)D_wg8k(4pw}LH6j_Wqi%;)NP2Jm~fS0%j(*25D0i9tlZ zsL{;>Q&b5T^1v*eT``kulk+GD8d}Dfe6Z7XK5PG5ZUG?pIwjVljF=cj?dDDbamTwE zuhh6r(XE=OidN)m=w!t>-%cc7{q}uJeWiZsqVKSeMU7x8tWaI;Q&T^=Jwm9h`2=`J6XIt2jy`2$?aD;|98z5Iy~)tLlx3dR~&Ow~2P z&S0GFULk51=IJi-*I^n}r-rXVxdcEW*i_d}J9vs`E4#_Z*ce_zqavTphO0IyTm76d zU7j1Cc3b;#HCu`3WyDI=X;t04nn5}9eoDhAzC^VmGZHIdFWm6_nA`1)#ghy640YT) ze)5HMe(y5T4+vp6M}86$iEXlFSJidtar2$yg2Q^9lroq%l^8#wcyOTefz+;6M?3{t zG*!IM(-5cH`=e$3IMjW`d1K!twpX9&Qi71~_93+et$(fnGLsfonQ3W4j0`}EQb0N+ zlTLi223c>@8W{u8D*JNlg$y&91dxAU0jk-QJ7W1%vx6|9=7)zQ$+hBP<37W8&~C$5 zVw!6$gN$TwQ)XDo#0b?Fhg7_NeDj)3-Jp_h&%bSo=ucpj^VRVM?9$2nL{IV?2%uyThfmfyYlpg^y9-uaN*VNNftTA zClVs5{KOma;@5E+inU!P3GSm_w!x|PD{+HA_<5{Oz40x!q__nx6euUzWr@RwYF<47 
zDN6djT$e@c?76zPQs2tcx&~B{FjifNP`>+++p72&Whh)J)>CEC4 znM=a1PZ>uXpXhsL@D;`AmkD)JCT$NX)GRojD@<{c4Joz_!1=bYBy`u$GI3bmTh7sn zz#wz%hP9i+ADp1&b0s0nJqF*Mo6{<2*IehDUrvk|tc>V{ zCdxNRN|y^gLEs}%bP%S{T7$mk)(HU35fJuxoq}-{^?Xg!M(r4Hk!xdEiIai#3cN4eo zD?KrrcN1Kqdx+>-qT68)A1$dguIAq<8drq6kGM-@*UiyU4^`HYJBo5OF1ve)VKL0DiUU&u8?0hV&vP9aC@=&x+EI$dC3L=VjqMTV zN%HcFXy^a=<$pZyIK?5SpuRFm-e$<(uDNt;*a`QDy=BUd?&Pbx^1Nn5x za(JQHr~sejJG0J0jRfo_6h_Wh;WNs%8h%NP6Tx5LoESZ3@Szq6knEN~h_SNrLT5r^ zF%S^!&ncHZ!uy=tt*`b?xhnzeNQb_XI5|iQ5BuUyi{st9LMB{1WUJ`K<;+otN{TcN zQ^g*oTP9l3`X-2xGH!D**rKpNF9GY`Q%2iyZD&y*(xx0;o@a)}hm+#=ecNZZgQJI8 zOZtvX5Ceua?t}Z$j-;T5 z8#&eTx>q_JF2o<@I~XK!6-qG4tOLeu?=Z6z9_bDQhBjm~?2IVhk5&Z&wzELMmYg4h z_ny>lVl#zOh3wmPxF6vCFIpd0w-%R2Ppf8V@*O@rTrE_6#Kqb9R890DwhGra%A-;b zfuG`-+Kr&;$LcERmz>th8Q1po=&9lIU0xT-0wSKkoS=`w=;h9ByWGyF4C2Nv!x`~M zDvTw%W7a=1;}@84!UcRp^Po03N`&6R9)MArBQlmm79J3>gg&;3z!V%mkZ zP=yxgR!Em>#ro0LtJgc~WBo=wj!Rv;qv4OX{AH{%{*c|>cLsS+$nM@a!Y$`^6~kkz z#des$n-wgJ*LHI0p4@n}Q`sfKd*kT3CZdtb62PD@NkiLy?dA6^cyR*YD3uR;;yB_f`_|3p`J7V2i zDO$c&CB`a<_|U4BCdCJ-N|q;im)9HCp`ZeV#%p4>C(|Ky%Bh;(Di`+47gtmP(A+!uhaXTsOetggNSj z#I_gD9~8cQ%=S}caS(1jb@uk?z~+m1 zi*_zPrJ|NaSVU)Fs9_~dI~h}7ny~?d;#ulz4kIzfW5%f7Y)3GvUSAxLWhLE;fF%4q z3-d&oxEN%cJ+RoJNSA)MM@lRE%f=36l8;jCt`m^S)8rNz?aKz5?YlB|T_)8IXtt(# zR2}aNsVV>sh+NOtE33_%65Q?Tiz=wPOV%3a4hGUlc6aj2V4oSm}W9qpGsMZ{yeA=ooJBVQ5ZE`u7Xd<*b!IzFbg6pzKs z;b4HEPt;G=zKx5fhpBrr$IdRqEZQ9O?)4|nfl{#UD50*9vEw?x3X9a`fMMO(dbgp^ z+y*6+uKt8{X1?1yMUcUP<-7?Z!mv2NC0s2l3hUO2JBx?N3aaO*;dk5qZnItKd zS!Vin@qD~KQj^$TxIiwm@Lqnr1(?Hn7hADlG{`2CiUk~R4Vkcf0Y6=?#SL{XR`Xd8 zbKTO0_J&r}|U7tdz%=$GG`^g`~@cG$N0Y`(4O|G0FeJRYU|CBLuVt6^9=Hk3!{sbDZ;(yjLPfthDG7SZdanTC;SH8!;?Q14sS?z#EhckiQSVZ^~Zn{lA zgS0_cp=4<)9-oC$kQG{LiW@5HFNyso0pwS+kp>uBheFS$|D|(ph1dF;z2_iM=F{6B{Dlcg-zq)_(T`Nhp-m8R5$=(hjtYdxhEJ?2^ zn1-$*>|V;~l6NMG@Fn?dx1k&B1&{mibymicr}5b#8PJmf%{fizAW?r5hbHVS`$w5q zzfWf==Vq=6hSS(>P#f*lWAA09;QLa_Kvpw=Ab^s78R}|nfP;@OjbnDRq?2wc#r4j# z^s)wUpK?65YHR(+YD)^BQWZ`>|Cs&+?VZG6f0-Cy6C@_o=j5FJ-dDhp3Gmel&)&KI 
z+$_Pf(r;??Ul!C;Knnl2i~nMff3eknt@xk325+SQ`r^O$nx9MUzbpFxRnWmOrbbl% z%xM^N-#*UB*9?YS{2R^^1e~EBIcR%KF=RgZM>U`wFY6|B@wEZo@gvS(61LZi5T#=8 z^CcVyR~R#jK6|21ve+5_0py2XD3&lG9pJp}bFgWUncl32(oPG{`t#Ny{y5$;S;fh4py&%pDhtL36eOg16;oa-Kd4oeT$Z{K zr>y2^svkU=`TojgFh|=WXv)5%3+8`{sD0b#s#-`x>b)1RE8gqsirQ|!&wEUAI5lTZ;{s*lxvJ3V3x7q1~X%((bQ<+yBT_=f41xU@|Gs@%Q=q{V!ka zfcU?;IeenuAYXqSO_Bo27qq->LzS3@hM^=sua)IeBluD!{yhzELMPH2!v9 zH6Wu3*E-T%PX5pDLjk708sL04mW+SAuK<{>wZrY@%O~G(a&Z4O<4KGBi}R-b*Np#p z0shw;{}<2sSy=u5<)ThCF-EB@{bCzt*4r+Novux^y6K)fiy7=GK6h|PI1cN6Q2z?ZvTE+|)gR23?sSW1 zfh#7LSSa;p^svX zoF5z1K6jFR!Jhp5S_))uP(IuIKK{uq0aI+}Mdaq1!xVv6npyTrE6yLT6{>1x zDz%Y3)h#fJ*Us|^ZB_kHJ+B*kw=@PQZN^)1o`sl0>RGwR8|FftC ze=sPFSzH)0mu#$g_5^|SGM$d$HBJZ@->57_)*mgJ*tU@!>K4aKL;S?h^B?--_|0MY z6;lS;3bBiBi=vOL$|o}(QMi9SO-`3S7a=XBoGft{FK{S^j*{WVj{kDFM5ge9wXLlJ z{BWoOMh`t`qJbR8W(E+EWY(dk#95ZZZF+}5-UwH?FDgUt3@i3gSYpooVE1luWG#*A zxt7HJ7iTCqY1~$u?yP>VwzkIJmfc-Ubo>JYAN-=gVR2G3oYd9mpeK zIDD}?DXKuNktKg2mxx(ic{k}{%Uy~65p!ct&hR~xQyYF{Csct9TL9^lYLZx4Q)mSK z$eq~IM;F+^)*KfirYD4r&%COmUqw8>68D}>5q3fT^1Ca~rvqrH$X(@E|8l&UxDYy? zi@Sw8EByfBUEpn(TI`a}=uR2xX2tN)REE@(T)*nqG zW0-GeT8MzP8Q$`p=o?@PX6{Yk5M1G<(uhp+{dR$F(d(GKLP$R2ZbbEU`S4CYPtP7n zR61}zj<0#zUI7gikv*WL zn`#Y&XDyf0Kd5l(7*{!49F7gw>l{1$Is*1Dye1=0b_|$UdF#Hy)uxNl6>vZALwxFH z$RKxLZR%#C{d)4xuQl2P1t>66I~~S!0`@>=#r(7_u243D?U~sh!;?uP|DlDA zkUF&95;Qw3X2EU-p+(Rk@!*kz?O5nX;i>0yvsd^Ra9oA88vmgyq@5|!(k#<_;58t{ zWSi@VjveBg4Cl|kG|s6EMRz$WK*w&%qy~}gu6bc*Nr+La zl%yfRaf)lpeHiXMedGmTQlbex&%R;3kEh?h{|hRq%0bH*9>W+D#wS3WVl$Md&ylG& zZk-ke16I>V_fd;jL8sXtf)j2H2ZN_-zv$q z;S@cDn#ti2AN1H+N<6fD%r>{wbp^H? 
z3q5AT04FAu@G$5XTx}j9DU7H;+UWA&^NYwDEv(t7rdg|z1dfB+1Tnj}4o56pX%1&j zETk8KnO@>JM|F2TQt^CQ#mmP9`4Qu#8WgIOb(T%esB9wwM&xdty%?&0-(r5TA(q>V=5j}#o5^WkVw0YQX>G9yH0(|OwO`)?@M|h+zeF_Q zwU(b-exXD8N2Nh1PfRAhUeO*Xkdco05(%8h^7sa9(~Vu-EJ?xSafo14Mxn7FuWV+X zQG6a$@&T&|SZ@?UJqLXUQ2Wa6hYrllXw;Er&VeH^qwA68QpcN(%2kQ!nWBXgEagIF z@#Hxo3xvZ>^-=q|Uz3jJDpmlJ1Znb)fQq^%UJz-c_)0$*eovjn4&EuOT1-e7!#0pe zsTIXnxhx4UTzO-?3LF4dM$~S&724rZxv88i9-um|@@VfTX*S;iPb*h3C7OX|eCBrW zz-n4CyRTA&C0iWG;VyEAo7e9Kl+aK~2qbpS(t9#I);?tx%;GCw^Cib3dF zS8T>7$`x{mFq`O5D+3zEc@RU6FmmmajR)WQmvG=GfQzY4zoG|O^B1^b(MujvrxXpQ zJm9_^%@odj!ckG>kexLXoLB!!R?3thAWu4-zAWYsABnamgcto7nZzC;O%{zW#oA!w z$-_k!VlL*3@!}N;{kQ=Hv$ylHiasjAxkG?vha5r4B%H^AN|kZh&b&m`(FGd&>}LvH zZUI(ZG@P%%>Z&w-o@1ZRPAJgb;>?{c$*A6_L_{fiqX}6RL>1h^a~TS=%Glu&HcL|k zjv%BQSpda(KT~ycWP&L<>j3MR2Yu|s({mH>X+mG`a#B@Pzj5zj{+Ty4KKpa z6zYgr5+BU7OUF}6Y⪻XsT-G8t7U8f(^RuSf1iB<;jtJzksc&zoCkRKUnqJaB&ZC zpgMMTWl_Oqd%n$L6d>S2*gX>FsDz$SH60EVo8`+9Aos9cLL$i5afn4iaN>0UuBmLj z^jXnFMX$E>Pj0<*Ga8;CIuo7K`u9z!^KbZHU6STV-y1UITL7is@-a^9!O!K2OUjXv zpH2%<3!nwg6$yQDe9$7T0(iDZd)-5g!Ov=!=6vZv&rSX8xte!uA6Er&YlMQE))Trk zJGb?!ORWdcz=`zt;nLWH9P^ib5fQ%Y<{YTcDqZHzPU{mLN_d@=v*P1IRW)x#&{`+h zUprne5(iXdGpz@%{8{>-#(Q4#du-+dD#_AayI6bZ95eC}6oYUKX=#@nOuWVyI0dLC zbH7#exFb|jx=GjJW`Iv!W!oE@#tkSATWm9Xw2w$MW3G6_U+1HA+T_s|iJvLw!Mp47 zbj5bq&>#jfaEq%sf1Pf6+|*i>rV2+nwPt&CW}JgG;rJ^KbQ*UiEMnoj8RyqHhxI0U z6QwGKQ!v2ims|(6b!k0%BzZnMt6tAC0j930HZM$cIF2ws^1B(3HbNH`y=*WIV%E9U zUNb*6&&R8J{f{wIGJ_Z?SOxM6M z4#9`-%&Pnkl4$UXggo)&5y=1$Yh>cRuiTF=&+%2o)79*DGTB^CuQTW7*P*?)52I1E zz5?i2bWwTStTd?i?E*7j<#Zs8<_Nt;=&2?}Ho&Wn$n|NZ&ChYI(li%;*1#9OnMY@; z`mSnnT#>t_?C9BHsNrzt4UPFj{(K1W>|9f<-fKaST8>3qyEUH(wcTNFkd)S#tDk=N zCBu;Nvc!yzsau?g3tW6HUeoj^szwhlvUQHgD(dNi-MZp&V6f~9MS zkh`|k5&~xsNXX(=p(G%!tcU&$jAvb2v~X3@8i~#j8Tzb*c8pV%+){6Hl8aq4eMHJp zNdX7Fkz=q;BWH%ubnVKU>*2R@Q^+fIa}s)vn)v+GbwucEyYt3}0^@SH51DXhC>XWe zP!k{8o2u1uX)GA)!NaO)Urh(&l36K!w$WP89nK9+u|dcu>7vkk4weC#Mw?Pl_}!(C z{WLf-j*3MCO^2{Dv!fytZlXwU!fLgQ0|w1d1R&ik+{jzn5>|0sCTInH%*6URWqmgn 
z5mV)^o~zx`T;QT6x~6iM656NQA5>37z^^}*!~ToR337?}ZvLqQ0Eg1_qzgJmip;rqW7UBEn$~hixll0(Hi?3` zB^eY&K0f@q_NNI^Y16cjZO!1 z72N7BRcqz*X*N9KFOlgaRD0yKkpbN`RzC~`4w_d%DlO4f}9(TG5^SA$?>LQUNH+T zz4e2-!_wlH|PlW4BF7>3CaO3(S{GwQ_xGbaWe_T9o za`2rV253;@&M*c7vv`7Re*BkVcB)r|{)i+asjdB7Xf(Skgcg$PI#v6xtl2g}Nk^KY zq&h87y`K8bLeKFc?BM!nNyw-Pz(E=tXQyXTd0`X%UpY{6KJ}u6YbJ-V*y0>_Z)~1^ z2@D8yQ=ltvyMGLT_?7kTvRxAjzcAS~4ZfE;8tqw(mH(ns;X3Lc7bxw}oRgu~K8>g6 zGR~%AYp#OF>6PfWWfQYE^9|Z1qmyRG=wXu{@FaNQ<|O-xuvaGaeM3Sxcu=f>R&7srjDSI&X71SvH#H;MMVBN)^M6j$&JeaYkT2K>vJQUI5PTE!$~lfj4v^X4$<% zF}z{)=6HIUmC|P_;{K-UxBbD)snoE#^J!7)s2n$6H6VcLT~x(Z(u)^_LQV^>;%hE9 z_})?esXO%vGgJ%Zl7)Ug9$7^?)Ms~TF_`H+j7B7`EPc3c z!%aI^5(sJ0LfvH1vzFlb)_1Q8<5T)Gl@)=6e>*oHO%$jfP^$%*zGm==i4LF93Wb|Ir-cbsmfqvL;gVm!$3jID)qP)Iu~#~r+5EFSvzt~C6F*&? zp;XDiq}&`(JtWdy2Gjgw4?HzqW@iPc_TQ+o6cR((tq7X(4|?xlwZso!o}~_M0`^O~ zdN|cOzchwb-V2p^UONfmY3iTDr&j2VUDfr=;v_%C&}c5{D)NINHe+tX#)?p#$>cj0 z3WM8~HhzHI3vqJ?(hsxlc1$w@4ilATWiYrLfB%`k`O3pow4R1;IC!tzDKbvT8K6Y& zzQS4tfqPQ`(4Zk1LV%mtf{nPw?2scmwFes215a(>WaC{eX=8WFwZ(K85JHZ*&(5Xn z@$?qf*JOG889WEr2Plh7mga2&%Zuk|YScslM-ZNv@jSP7NYJYLquF4k)0=4sj|fRq zQnjb~XtBjgQOlJMIPniLD=NudwcwuT&R&3C=JBG}>?w;e%P(FWE=5aqr3i+)WB-<%>pP+))zEDPA5dXa9tSD*(nN{5~jRp?inGm{m4%# z_jUoTxgl_LKsDH@a4|ckD4*ZFSrDGzgaW>WbBWvzyD1;StZ>K7ehmoQxY{`2+;y4! 
zjW+omj1ReH)Du4E17CH>66|m%!p~}hEbPyZ52Y5%(?&sl>sexl<8C zDSf=Tn=o9PXc0= zX-r>DaT@AsZx_N-ZtcdMSIPxu7!TIJJM5ZQygO3P^Z;1t`~@F=eN}^&_}ZgdfI~9% zP?8!Fp5Aj|BAGyT^viw3cHVQdva&3l9?`kb;%Pq{a#e|~hbzNEQ+@#F4poMA+fV0} zbpOofHC3P$BWm`??6QFGc1TMjFhEEGG8kCh7hB8gw@m`Rz!Ei>yHz6Fsf99`NDl$eFJCULFoEvoPWcpNuv)+>_E79V=VTUm(QQ?X`=9iXJ_|y}(p@b)UaO*yh=<`vQ$Ts_5fsNo9paSyd`_yl+zeO{&9vsK{K{Co zL?~{67`D7>r_|YI8%q&!1W9G(Q<)6hDW~nAdpHKf3#I}esZH|YY4sl`-C9}cD3#>!myfAAf za8tyv;sbu32@>gpZ^avzJtmx1fIQg$;qJfVx!(W(alDigM|7+r~WGdpzy&y*+xJ&Uu|vzt8*i_wS!BF6TTwJ;wch8`tag zcD+kL+u^9{G#5NQ|FQscX1b$LqA0vdh??^*6#FchekB^ksf}qLAif8YU=|09#5Fr} z{Wt!leIlH_r9O#OCsYFS%-~w zamhSxy|N-`9}l;xSl7#@!;IHh^A+)S>(63t6_QL?0OrBS6FjXYQGazJ_GR^2O7r~A zhy~O*_Dcn*C*Qw1rWdDXa3+y)WV>qIgGnN9pkR1F9i8YbRp0y(jBFyPIIfY zH_^IcUSke=xqFE7P4k|WUM-AFme>@BOfTqA?>$(xFYpFWv&4+zc&sC~rjILQpFK_M z?NBh6#p={L;2v{?G)$afhQfWR5 z-TfX&Yf6pZH2x!+Jvl)Cbn?A;QixL}dj&L9GOMSb*>?c16EFhMe<8V-Z)%@J=W0E5 zv|3rJ$F48m=7aCZ^(YiG4%G3{01h*oUajIYXp6}$N5Yt+mOpEvsfzIyl2)jmyu!bg z($}i(dSq6Ufrqr;O}WNrT7nnftzPM=@C^4A_DDT!1=gUt_iL+y3}1oC^vev7#NG`^ zs?tconVS|gRS)wKm!Cm1loWlipl=8$sos<;Ye6X)dhg$yfnA}XR4vs6gK>6+)#@GS z0yf!!ejwH%TD%vm3kC)uMcif<$pfQlP~7SN^sP1`qUqJ@pXk( z5LNa)XzNhtp9PUWb_;}i#a%^XUP?;yPVr;TOG70b zlzdTP-p5uE><964%XJ(#-I`{Mdf{EPiT52P+l95SwXL?g-mRh~y=>cR-xxJ;I4dro z_N*eagQC(PgH|7#!IbB)xYbVk_Q(g;gyMcLH`263gLwoA@Ici_C`hsP$T2>lbax}f zWrW90F(ASCgxvj`JPMpK=2x^xDZ)t}WAa$RfX!Lds zePbnAWk9L*B!)SE`QO5B16~)T4y_H5FMmjAo>DJzyx`OWa|EV8I^0XfnXOu>#?vDV ztScn9y_gWh7W8xx%Di@(IveetI-@xR9m&}jUJthCes01vgvaPZ#L6-3e01QNk=ttX ziLUOF@rBWge0G+*3;F`;Ljs%L&+2=e1irU6w7$IZL(2%&*c|-Qy;^>K+suTC7|m30 z@l#gz;T-;-#b6Ab3ysK|1j%b9mrYBig7h%7FRHqg)f0OwHw)&#*zVaO%N}@IX_nuv z9UNc1Y&=N~X=aWi+g5tFQKhou4;+3dFuxj(1KPdNm914Z&IYNzX5;8^^hgqql$l|g zYzF57FZ+Rus21#G@QLI|W`V7hh$HkJY7>qs*lF&o)* zW#1(|Y_xasIXU zmOxS_7YliY$J%J`#=JV)$eJiS4-n9Lb#K)nM!0i_B7S?(VLg0QDsaHv_yT8FEAP70 zv*=nSQQrSAel^Q$aXBNE6dEsRbU2vv!z znhNP|Zaw?`#-~xSegzZY`Y8Dw~dG_DQWBpN`!~9k3SJVgZH3=DG6{B=p%?xYj*;uz*$oz!ir$1g$F;e;T+0H~ZWi3K25cRRE6XuXQ()u({-JE4eQ6B 
zB>Bm#M~=I^0}7a110e0%xDa%P<@5_-p?tDaiKuvHbX~L72A9+PGZ71Di-@h7oa?;R zAI=o#WCYxY)_331FtG8yz`+LT&C`l$^uGTRGtWM7u{nPP$%4h09V;U6<&<6^_gGW( z;YRZQGoEp5mk`SJYCKjcz)zKpx}LpLtAB9S+ojed`kamLYMc$uH^?fjRT~PWlX?N- zZRPLA^MHTkbT-?iaLE@fKkb)eO>Dgos{ zxc*F4y-7;9lQD_4Uw|OfAKwi9BHe@`HP00OP2K}GTrJhnLsM9!zAV`+X7;t{bCwM4ctG_`io>UguUws zk3-ID^E78Jq>`D+-F@`Ver4F{1Hg%G8u^*vf@*(1rdrh`zcns|h!_8X7FI}H((H0C z+NFJ}&fTP|PB_l8g?Fv^(owFcu}hyA<>^$Em)=0WAtTlA4AN|wj^$HSHy_pc`RPYI z6Jv9yn&4igvp9?gZfWLnx%X_gUJjUO^`lL1Xe zF#y@q)6x4`!xzs{Q2_p#u@Z%jY)WS61$OR1(a289nib$fL`v{(@Oq(AT07q$*3t09 zQp%s42|eDOZ~RHi9!1%sm@MnW5RWu9k4Mf@bB73de^g_9(uyflFR)weWQWe~LGfev z7$;?qTpPkmQElFW&x{$9@_m!vGGY6J1YJ{zT)P;E2G+0alTtt4a zNv5->d?f5)>$K_4zb%p3vjmy(FHGD*41u|UOkK}oE3~@BHcdS77%69*F4EV14)fj7 zEogV2xMX@kX9sbirKTN%P{%>QeM z@=;dVk_2=m1RV%3hm=AI59%%jnYv#g*A8uX@bX5G2?OO|8tJz0VNJ#bm@#@7 zqnWhZ%AGUS6r7A>P+t`i#%~rePB4h`z&yWQY=bmm?Z=XePWod)ndhlh>NiI;@( z2od`3dtpOKqd?AKq*%L)39$XjR!o zm`qfjawdBmjUwua+Fbl;oS#lsM&Upob2~`Ui!F1TOb3_EhiHO=t&oGeD9K*xVB~%3 z`#K>bN}ad;S7%xx!<}j9H5=bP3pwBG!MLK+-(n~8EK^6U@ovtBrMhPB+nib(8=AVS zI?UIDxEQ=to_L>7UdL%VpYF9<5c2j?9WN70s%j>_9wjSGpR6XSEnETVYQ>~5k2H*K zlBR5ZbZ(}P(;mj-MX2V})M*ttT634Rjmiei%2>3B=R>8V`77i(Z2#R^u@V`){^-nF zcA0O}QfH?HC#YVMFH?F|%i{x3`umbnuH^^}x>K}+H|qF14!pfR%D>_Sdw?Lb3}GHe zjekROl%0{ZBhodOLNN72Z9ylC_<#QE&q1=#pdmcIZ_|mi-$^b^S;cWlpX)jZ5*zxI zQ{ODUgg!*5`Y(J#Pz0xj*aUB483@QWt9ih^1ma%>BFYgeWE$ zj8pS_6oq;DGqpk!)lyJFbsz+!vTvC|d)x*z#d#01p(ZelcHCriDiVr;fgY`QEBAgx zzcxZFHffLyls;Luspsi9bN$?PY(@ zBDS}FNc{fm3M~UahU!O!npI$W@}G`X@6!lum9!=z#o8Q;xveBQRMh+_M@ZCrbpVnA zL$w@j(Hvu)IOe&T^#ED89Vc7OQG%35{{>(yz3`+)LLcTP6=E0nKN&rJnGuMaa_ z-IeA@%^T-=O=6(`nmf2%l+^X~PAZeH2Cvy=Xo=F-=eg_fljlPA_y zGmiQ*Jr>IcUr)~D&s=(O3=8K!|HxXoRBR430n()>PVH#v`JR!Z@d$id3cy(}99?r4 zhxqqLHO@2_3TglP9mV{Q-#)!EZ46+vYGh3A`%EBU0zIqMEad?x-v+?QU5a*}WcR2^ z>+rcJc8Z6Stl8P)CkX|Q{qvugrZZj_Zq{W%nB;+4VgZ(}`+OlgHUdzH4EIcjo12l( z$q2`sKTdw=DK~!puWxYiAd$7nYw)L6SvYv6^XF~+b#dSCium`oZ-KytWVe|W(#qx^ 
zi#_yA8W;TQ8~%AA|G2KJGB4g7HSS3j)BZ1?_8^TJt1`HH_Rc@PTn1^*ku21IAv2D@@Dt?%=WS z2jlpe{CKG@&+()}Z`5YNrg<8wc$MTxR!Y+viW3kqmeqy>_e#ez8}n)$Mf&3x{;}8| zkYGVmD$e^z540e;NZTWAHUTXChYL5Kyk3NEvjF7899(bk7s+gvN@fELZANKi2>i69f{GAkZ?Q>fSVRrU3*`6)MzU+#zh+V3v}RnE!2 zS_e^`>W6l%RxSXVuCE<6&_^#^^&Mu3diCbWY?W>O-T(D9ZIRgQ!_~kI85a~3xWty0 z4&1*zcCu`>j$#pz%bhb<_C40?-Rv*EuHk<^TNiC? z(rXV5P$<;c3cyIsb{I5C5at91DuoXO0{d+sbu{*;mnnp!AZ~?c+LZ&Aa#R3GvIQ6q ze06BVk+hCCqoP8#m1j8B7+7etI=;;fsnsV~V0V@_oEw!^0U%AR^&(m2itAHvP`dYY zfa6bb9v4 zP^#@#cW>_?K&(2TWR$mQ4!v1_>g+`WGzUUz4J3t|8HGt^aNMJ*>uTBty$K#`^Sz{2 z6~2IauM0%ZhnWrjI_G|h6D~?L?=r;Y-7zVTl~GqgVw-XRm`gr1s= zU5uJtb2w)!qO_GQO5~2x_Ft1d@IwavGpv_K9;>2m01n8>L7HA^+V|mc?1%vjx8(x& z?4~2JN^k4fO@V*@dQ6H5${F`wHrJd|QQ~ejA~YL{zo=;5H;_;&+$|)EQcrV}*BG7Q z-{OL}u_%SI&y)sk*?YbaB*HA}I^G<@6+!620CO&X-VIep(7TS^08w#S1;n_*Pp2q! zH8#O2SP>m*rx?iq16<|lOu{^Zl0RacHr<-FG2b$`|`QSHBDu=D9Obv|IXEF%t`(wzOE@*~ckQ7?u-@;UDS z3dfcr9?Ofjr$clWkT%;Bn~Nmv5LnngAPRI@%U50kG{$TmHIUMJC%T(4IYJxGKxMUw zuu6$ZEU9e~INF!(Yqb7=`wEn>vB;>tRn0l^{J|Te9DRSqp}%)mfP^4^HX?2lu`;^b zD@e*QkZxwUxzMi~+z+^xG=_KQl>QLF%&}(oyt~po8>P{*G2A&?a6K$%2|@C#A$+2A z?e^Y6-lD$1p6dB8aoj~9*tyARJl`Gw6`Y5T zi#MJf%|vTrb)kHqW{}J+HaE_CRZRiN}Gbtsf23tHo%Veq`Gz%&AxA<0S zr|q$F??dkU0xy*a6_y{7B&hkbH)`=kXG0n-F0x$OL95yWj5+9Xn#ureI3RcL{h713 zza8U&RxwjHzG_L-3_;k03FaWcWPe+nb6W?=-{Q(=0s`!sY4U@>xBUJHi&Y3UpW92* zSTBrfqgW(C*XL4Mc_FDt{7Ouwc|Q!XK9&r=#$w4Z_>0)!K_r`BgADaZlOLesJ>*k` zh#L}0DZ7ImFzH$xqR+qf&Sp=L^vij-sGC=ycwf+k%2+g|@eIO-SiiM9&x;=?R}n1c zYAh%rPKUCTz1)lpMm&N`pYfOB&!Ik;6@UEk3rI;-R!B+M1kVX_Y>P%UbL=t^bs?B8`xy#2Nr5D0Xr_Vyh_ zbz5znpLmwJeRa_>+syn_gzt}>qUiBL*gG}mJuBnT0c^{Nf_c#Ic^r%$9ccC3;^;81 z+?l3P40d`qce)B!S>o#DluPhz4jNnF0Hus|+{^_yEhm;R<^Z=hk!lb@cNw1Yt$m@5 z6%p#{%Z`dgUfnP}t}A#4TCy$O%l~-wD&%+CTt^hzexMC4OkxMRSoXsk_Y=M z(21S6POQc&5r$<^{rb#fwPUE?ZFc99MP4^C*u985W3XEbiK&x4w<^&RMjBjKrsUjS zMp_Ltu)kErh^2YTQ7RL^RXE40M37+N(NLCXT$HbcL8T4KC%<7ZWUoSRDi$RY?ve|} z?cN#=_d}N|sw5x3ez1%2ypOk>cDgI?O7eawKJDS6o+J>T_cL@u=plalsX@%5{Oib`7a>)7SDNzQFs+fY3CR*9omVx2|Obb9pF 
z7#gG^`Qw}^xXm&PdT>YYb%&$#Ix*etcQCOYYT@O+!aIfbWut^=j-MR3^mHmv3LaWec`54D*-g^X5-CLAF{X`Bk{KfA~8#@;oHTt zHWhn!x4pIoTDX;&MqaT`c#o^j*vzKV)f>J#3i0w?7uzkb-*(^MKCjVhA2rR_VlS*u zIrDh@#z=ZmL(U=m0$*I759a!l3)HUBs0=rcChj^s%?n45!%(AhL@3wnDIjC?7@Cm5!VPPcGicdnxt z%kydXkl_h!$UMD9xO;)xc4YE_>B_1042QrXxAoTTZyt2@Ouj>VD{s(VX#cgu_IUAn zTYLJK4sN>TF6{hbTcNBwclTv`lk7b;`QOn`co#~a-8xQt!8eYOI(edMJ#v|ytA)Zn z&xZ@}`x_%?Vj7+ee51s7N?$TyXsOpeJXf zop$0x@N)wE`Q2q#YupBI%~7)Dn)j&k%H)!x%|G;)zxbm}Qk8Kq%hi|ve3-N3SaLSb z&Dc%_L1SIEfO< znhy+7^|y0*B*Yj}#U(s7LF@ghk-4gIDXq7|0Oc*Lylv+lvGs{G$+m_ATQ49#%IkSb zDw&cBq6D0Mx2++ERZb+7!sL0&=<@!R;D^6w7H((RQAuZpgqMrF*tv8Mg6hP$hKFFx z-0{ZHjfIA%3m$i3Fp!(G8}yW9rYM<(16~>=i_Ov}r*w6_C$FVWL@KK;)Q(_`fkIDN zoa}DQAL{y%sPVrDz>0H4;j{KPcIwLbnyH6Z+=pd5RzXpIukRjpMF-3bzlZ{cw5Y+;37Q zi_HsrcggYNwr@;OAKg~NS2%eY@8qfvFJ2fP6PLB}5)Cq_iyNMJeY-?~a5Abh3PmDS zUSthrW{q9VD;vJimrl&|{%h`jxXKM}2Ul@m{)Kr!V?~3e$?qP`o`pYdshnSIl8M1J zTT8d=_%7UR7oGQ4uMoKP3>EbMqS?w97sODJfAt%D5fYx9BXR@yh0zF;rKUoO=>H%!7TSbr(RzzxmJQwH{$K-Iwt3FibQn2WO;WV`jc70H<}7T zSHD)Cx*mALaG>=q5&Zy&$o)ChrFq&S^hW$8igSoDDZ83lpjKSfm>c~9?J$pmc;K|( zDIG&{Plh@BpoK?qi&Rt92ud*fPWH4~$}2(3Hl%MyX2bH;daTJ0n;Ipj|LphSbVjPA zv|7ole+UM2eRJ-6>?WGtTxtJ>Zk=#%m93V&CH-lFW-rT?13{_76lLF$=ZTY!Ot>!O zU(_YSWEMaxCKPfcDG=dbQC`&lghIjGj6ms0u!dFWT)hd~JqwR33YE55ya5^x?$Zw- zg|3G!&Sg-qp%WK7Nnf`gm0*9kee)N%i3=_fazx~s&w$KG(lnkkkSY_>3I<}$>p(-x_cRQtay9eGmrZ&d$74AO5Ni|+Uzm8wK-ZT{Q2f9C~jF@d2d0L zd^h3utl>@ZZT#N6z5AODts3tIId1St@N3+ebs)^Np0rOz%aJ$=;cv*vN192#sQ_=&qfP!hW0;-TFX?NbqV7{bIZg+5=GSnAA;QzTWl zzGc{vkkQ!$ovzq0!+^`R5OwPS6w%wj!-Ku>R(0y6XkJJkCV=FuYT{p(%WsA0$X*!R zUDo4xU4MFt*&yv$cOI;aQWlOU;`ic@;PlSUwjDvaYfU6|0;QU32hN8x&eeH_Mqe3H zZ@#YHrKcz>km*b^dhKw7(WEl&P6|!=L)7!cG;yR&_AfHCYti=S?zqe&Y|z-L+jyW! 
zws&CM%y&7vcVG(L6ZLWR2ec)ah|7?mx+ZR4x!3h&Lw5{uzR2y3XPD;hxvaA|2>54{nryY^W_!at=of8>1lbo(amOXT znI3V&gk9+E!iPSuWu-5Pj!nk5x0TpVMRg6>WlruixNtTsE^0U}S8V0x&bq_1TM@^o zrA>6W@55v$vz3<_gVu1*dvU4@!k^RQU-%_mkQFJ@Bsa}=VTVHW^zqDKS`BE?ldf;d z$sefRh)WiW7XYdjnAigr?ixf|MM!xG-L?uC(6C=SHKHH5F37+idlE16ppqm_Tw&@&+83NzxQ#a#M}*aUX3>CkULEXt(|& z?2tpXH8u5k&qIsmxIATgm!W0Bc6u_~OF&?}9>4x)ag^fVqHR|@y}51Ds=&yuELjok zxrj;KLrkQ47AJk&TL=2<0w)Qy^kTXCBm!Z`O>Phd5k$J>&lXYM@{UGC z*mrueP`GLLTqai=d%c^vg{*w23wz%%C;0#u!JIbOrz2(>TlFwpizU3bXoAJ!* z9k3SLFH-iJ__6pD@O~p_XP7^%TJ<*X0}TU(xg@FVxOX%2r-2x*e)M!E(s}?qCyddkh=@{}#(CI?Y|ZM)2FGDuTRK0~`sbb@MgecKM>G=F zN|F$f1WR}3B>y%@ZJu-LWztjd=2=au%1t)vrtHC!p3B`q7fP^kA0kLQVu;b|ABqoR zt}f$LGf7r&t5F!km-0rZ5qZj69Hu!dG8}-&*6oy4w=pu9Bh7t-AsaNWeX!L{?LqTj zEf_Mp?w!>cwgq6fE@c+sp{pPFO#|JkB+9lI^jEq+Gv+?YJv0TV=7x6yD+=I;H1fXq zPgjQm);-`NWd8jR_3}4yj1)Zs1C&_J9-~{|v?}2ZZF(G$jKH zf*v<@kN>=aAJk$&wdHp4xw|QdJ@rb`8HQ}xLh0uVEFGSAS+dQL*-Jq#>J}-WXUwlU z6@IxBrhyC*!1BDTJi|#xb#x)Pt6UTtGVZD%@8nQ=qL`g{?xPoGXL(RMxjM6KyWAKSnUIj!Y2PsG@?JQB>{(CZC0&a=@5RLd?Ro zj%i^+o6u}&cftmuq)ubF*l?DzVYz_YzySSrvT(bG7q7u#ec@UIo$x{M>-SQBa!fIK zI4S42L)o>iXUpy>bpn6-b-*ax9745Uc*c29-k-58x!i-Lx32MJZX(GMfmeP;iOcB z7px~qes}>?=Nop0HMsnObrPI8H~XbDQ&&StQn;j^)QH)fF*;~-#(3L&F2djl;*maL z2!PnQY=R8oPWU2Z8pj@YEBsp}ph8>#$~=EMD!u}xj!RSp4bC9=s8%Z;L&yO4lnbLF ztOjaLn+bYNLAA;0*Rn(8W)MNp0sw-(t&-Oq7LB#=IvwbW0*$z~c)A`|Ht1N)%UST$ zr#OA+OSpTwVYg#wjr0RV~5zS&}Fw38NS@ z=~wHZ&al{vKH8z)16o_VBsMpW*MAwEsxH+*AmAO@LmBj<9jHM#=*x?m}sAPrzJ-)z@UIS+xAe4B@w8VYxD1Pzg%CG%~rjj1x7lXNZaEz z+saL&FG_g@0WDpt<)8P0EIjC;-i4o6@@zg<3I0(SPi5Z*Oy-1mNk$o13q2 z_{>LWcOYe7#^*-1)8*IR4Xzw*Px$I<6t#Jvs}!pe-XC~8w5>iv|5~J*Fp6CCmDF|dzsViFT&UWI^yoM28 zJDu)i33N#&i>B~a(Ej@jpEI}wNeL}Pa`FV~&8j`}^0Zac9Fj>xRP{#NqX6KG+`j30 zz6{1G{M^VMw%{)-GmHy<`mN99NB#Xb5uljz5r&CPp_o`5HoQ}ew)VJ3-k;SAEa}Bl zRtEfA(|*GJ$o_7o{C$)0tMhZ+-QdcND+*l=&l7+M(x1v?co(*mlbCR7S+}M!WoZ%i zeyhCY{@y+)Qm9n^^Rdm0C9m+6!%ruM-G;{NhN&wZAi|!dv&n^Jh zkS7{qk%S)i(rz|KRtn9ePSk)z=YbSyCV!G5CPm$}K#WKnk=_*D7AiF8_=J~oX3OmcT8tyDTr(Hvhs&%Z 
z0-bEc9`8+fbK!Yd#_W8EBPQ|5kNWKy2JDhUr(28|4`6iKP`*!2#Q8?5^*W?=QgcLO z#KOg@T28A^Q`E)*_(ufo1JNH1$KTvJck&BzD30c}>N&IPo_K4TGseZEqy4oGPIVWVBp%hdmx+0>}fYRaxm`%Lm)=jqGyRq)hNplViAQc`m02fT&$4v9PG zq^j)676jo{Yz+=4u6+bm_bn7Ivw+}@8;0QD&Zy^@JXV!auIuyX+6!d(0NkgZYp=YyR&z4u3$5>D zg~q`wFH}q4g)v)9x?M1uLXpi`^j#Z#=gb#;y=;rFyA=TN=#ltzow!XdJ^HZHbU0b| z2;EP41#_$CiycO>C7Ae}(77{lmjy7&J{_d$Cg)h*?6k2{3dUZ3g=ne}LW-U1vR3OL zVx7iswzD>sboO)uSKoM~Hy*BA5OWP$6Owkc{rUMHZPVYoMKGBJA?9+cKQKwepL&0J zhE9e*0xh2}a3OFF=I3s5m=p#mvL?;GZ>XD{y~ttd4C4OZI@!er3qcyZw`5&Kkr_0l zfe~$Rk4lqNm~qpcc{N*|Ce!gP5;;4*olc>67mR&gqeQb6iCq{B)xILptlh}p8s7_c zisRb-d!gz;(B|p(*-q_2wGj91*u_ZaZ!V{wh+=o^4@cz#dR*U0erG2oY8_Z*ALR*a z(S1d}m{a4ItC)Z%^!078hGnmZ2m*j$rZ=`{RF(iT>~1E)B~7Ty4^JaCQs)*8XDY25 zxBq2|4b)d)`GPoXn7L(MrT86Lu|GoRcizXt#I>`v??EmUruT5bRJXGwI%Hqddot?5KzMaQF+y}~*$3*n^f2(jLIV&+XjuAP8+ww(>z*PmT-0&$#?dp;`Yqw$* zTPI+&>Lc+VT4OC;Jj#^##7_h+j32!dIeW&{?Kw}{n1;ZlerpuvwZ!UHFu)^?(!ypZ zaU$-Jta!muD-^(vtF=`Q#z&&z3lQ}=G49>Gk0sVRN=AVx5g8M!zseqBhsTur3D_(W z9||y>wNW`P&sZA#p<{anfp@nfEU6{XsLVPC(^2?(q7VV>}r`dO` zr>*j*H6bF~lnKw4CY7Q@Mnd*)bm)NVdX2=L8z)QO@Y8nROmmX=G4kqlKVnF=ctt)p z7~a({hQ@o*uU2jtoQJg1?gtQ2YFF=|t( zKYl==IGp`ZaVx#F4JP-l>{BRpVIPz=kcrZST9*Y^ke&EWyL>yE2wW|BlczZ%*!3pj zHa64;0Gp4jgkwhU9B_6xUk^GMD?fL8LcqPc36CYSG|KmB>-HDS_C7=O^hm`R&x8OD zo@PDXG1Z+NcB~8LcUlLO8Jw3xBUXhUd@A<^GoRbk=3iUvhoD3wqmG|P4DW1xJH=SN zbjweh_pG`uNe?2%yQ|dew(C5Adw8~p+wKs2^CG)-15uGWM6GRIe=NPWm3rdF$T?d6 ztP*>#%^lx4ufw9SzaX!_Q*Qp{Z|#@DyNorU4q}PySvo*SlUy_pRQ6HIQ>ol(y2B3? 
zSv!Wx&<`9Mn_l+HMM_&9LXy78PbpYVW!k!xwVMqPYw4rPfw2>}$IE#G97c5bZai`D z${KyR)^a)&(O~fNk2+o{vL*g1OC1=VY)$pWX2PL;he5`d{M~BF@J;?SUjFR1o3<0q z4pM7QVd|T88we3q(S0kMDH}u$RZ3eqAUD@7R}glPUcS^m=|J$9df9wYm}_1x-oCwx zF}fwT!I3OiB{I%;w$i_HKd}vzFAq)R35+loyX|B1$KFu9v6p?2 zI~k$w>ciS?<&BO8_03#q7Z+ll zTOWiDU=Kck0l153DMb28{8p@)X{ba~ttFU!ri4KGj77XLyQ4Ur_jWIF5Q{WRj9mr7 zo_M9HI4YXSzs8cLPHo@jE-XoKKh3{V2h(VK&PnGc8cJEb=~^k-vdh^iU!n`k$BIfa zKJ$wXb~$X2yNt*?>UoA#q958VVQ} zk4DurMISpw@1@2yfL{{RBeHMqnYORLH<}+byG65v&z+THvkjKQ9vQU#yqoWZS8KvfmBfxQIUp=sVu_WI4 z=&IqNt?QObMNn33Iiko)BE5U-tlKNReL0(L10PgD;w0X=>^J)i0hQ!qN~}s!Ybp3i z+$_V{nrZV2pnJ+EzERa@uUPEFv0!X>qT5tSMdh?xiWw7^dHIrL{d^z8oE`3{7Q@^K z`&|ZL)~p5V=MNQoXXH*uMKU&qg10C+`nXm*_n6ACLmCeMloeWWuj1q0R`j${r&4g|zTR zf11&D_MJUp{G&jd>>-62(ip0OjcDrFLrHT9pPvh?AJxO(QPMKx82%BZShMSoZTWLk z__yqhp#naf$azKlrx*SAXa4$$TMDF-XXB@Z|Ch@M(1Z_ng!|F{;wFE(Xi7T4&(3ag z7Bv6MWk|5fYV0yMocnQKvz1s{`{QJbppNoBj<_A#gkt~ld#7bpM(lpV6FvAo+hk;5Kh;L`;kfFmRH7YyTO9KYo%s9b73dwyBy&?k(wJb;SoA zo}le=o|FFjy8e1@Kfefk?ps#E#jrD}AEq0GgOCAY zy&j|yAh{-(a)f*IqL}ZO>-S%G7ly1MF3QdS>E6Z=rSA%X`=H3r4fKD1Z_IfYusrPn zR)6^^U>hQyT&_Bh**tMzRf+Q7r8@D~i}=;|#Mt9qgwlPO;zhD}3;q+kf#n7_%cUh+Iq z!*(i=C5rRe_?uD8{6zrN3VSu}o8ejUQQXjV_om>yRt}?;B7^V< zkx_!?Cn&K04qKB~gNk(#%#OZtAA)YkpARSv)xot)_TCtNyA6ZAF#_k1<$Rk#s8F=Z zAju%+y5D8hz?WU2xMdQ8j8r1}<9?7sn?oHfdG1# zrm&j4k>j8qfyPCIt^e6?SQ_^)O&vX*!|j~nSEfp#J(*RCPNU;%0HaMo3U&io@rO~y z$gZWl`V?V!;h>JD=gsmYK8?>UbUpsLQNQoywp8p9=Of250EL>?e&e&>%tt85#YT0? z-+Z{H&m%h1q4AXMw6v=T(X{|7nOrc-oUF(q??Qw#o2fz$(6y=7BVyMvHSnz{$pw#v zfv=%A*P{~UwtP^s`Q<+}fcKU=()`wt!>fr44NhgjH z)PK3gStWH{LN2cbt$)AN#*D{WHKI>!I1Sb1k#_oWRHRh7b6^SiCXBShQkZ zADj+$1OG^;vjx2sw{&5F)#sv03`-ff5rwnwra#s~={F=i-9h|a5$i%}9~ml@B|Y!l z)?jc4q8tRnq6O&jjXP#X%dD5_l82o377H8b!ux>V8PwH!{wfNY!8Plc&EA_GEHYnf z`;fsvfK<|m7+!ia=eSE2^SdLhCn`?bQ9rIxjU|Rq8ySyHZ#AAfN#gSy?>bB5XQy8V z1~u9kA(p9>K_kWmC&?|u>2(kc3ka!`-lLIz$MDXKU1R-u`r*~Uz?2*CeUq(WYx7NZj+FG+O5oof54iZ>=VtVIk+E1BJe2VM5HSk*tccxS z2>|9G*g?z__Bey-asg0BgG4gC=avpYbdT$XRH=pFk1+^rhY8|WVrjI02#Y^tiA$n! 
z1|BeOBsDSPr9g?$q^sQl@c`2lR_vnF$7H1gVJwtl!>#b1jbY%={CTPHJU9h( zu5Ka@@$99@DLNy!EN9!=<^(ATV5WzMXDUEJN`aDDqqOTjys%Eo_>y~_>|H%QZ<@hu zoy~i?b@5V+!*@y%=8}y7PANck>zK_{atjIij*xH+4-SM%9*0KWKdYrHgnfVk;!XQC z=n5Z?DFORUrQ3010xGVkU%Y=8dAvKX)Sj=KW?f?{Vcp`*EGp7WPfDtL(h_k&A2B=U zOK%q#X)nJ1xcc?=IR1e{z{xqzI8rY%)$YM=m-gPm9i>i$_N)FZJOBd@C#?^UTq13! zLsv`5r%4(v_0Li-n>@S?fQg9P8-qOn>&fil6~}jP9{?fc5lm7kPy~Nq4$tFQjNz%c zgw%5|?|CkB%||d83F=MGHN2Pp6VwyWorjsL5idm?dGpaA%@X>eqNf4j)W5(D;YAYv z{)ICoF>yD;z`2rleJXglSa^`VLj%84&JIxBxZ0P?aPUA52-usDD5>yyAE;1Vs=+AA zQdW)?dhr$NXEo@0&R(U=N=(mlHFY2!0lTvLOlNixcD2e zNfE^`Vuwirq82m!yon`=5eu1xLF5<~Zg~VQ*RDW~8oYbk_RlfHM*6lqeHjV>iTv?W zj&%hklyu+yWlphtEokvdP0ilXHA02hWitVWXP&qY^}25@gO5tAP?c;Y_)O>VZ4GrQ zW+#=FKEE>T#(n#Fc6z}zXTGVsW(Rt4$UTO$>i_$;M9`0XD)Aa`c7*JDyo z6EnspG8|LAko!`Q44IR}b1vp3|8uR7%Gw+{RJ>K^0npAhUS5ED=U0V}rG|?zO5XKU z0r7w)Lv&B0Bf!E%fWrvP=L7TS+P;YxaeKoMX^*-&U*}p^s)BUqiGZ8PwkQBfg|G7U zspTQKiRQE4+VhVA3MuB`BeuvCCX~a9V!xXN>7Av-2zm0dI4;RcT+`NIakHq5 zpX4UU=b$9)IcwCw*(a*-c$(sTXVBh4XR#8X1mRtP3!Z)A_M%hx1&*fK7)Q6jf-Hu^^EfBFu= zGuXsgOq^3pC+9*O29<6bEnhS!>DB**{zLGXi}uMi{JU?-%lv0%29pkUT4bU?T4l)% zGMPp5MKCu1fLP!rmF5ZG!yh8ka(H$gTdVPn=@cd~+o4uDo&KV-oI-^84gKK*gj6>JY}=bUv{)L7(0pFo z{j~PrKRYr&5;+-6w7Iie&NcoyAN|g+%5zIT7wyvplg1Bv7`EkIcUl975s}2>7 z3g<8m(f~H^aX?iZ)+F8Hs8M?60!)av40oe3V4>T$34W9c$uAke5lGbve8Gt*nk<5*i*FQUy1)Os!&zazsnMT}@G7NCOrfH3I9!;^&)DhzY^EFHYvuxKlpq{e6l3^Pe`3lOB{^e#?fEd;+sEN}ao_C#I4t3f57z3NTg=QbB*pGy%DA0L zUux@yCU6-YeZBQR%2~k+C)L6R)7!4_q&-!f-8B*aWeX7^#Ih4Gphh3Wnr#qH4yV(v ze1;%szdTZ53YB}T%O0fMCfsL7K!CAxRrcXyFc_#L-TLEw`N^Zdmy)0y2u69^=-aRkn5j_6CXn?{s7e@>wwaVBM>i0!KJ@w z?yI0ImT^gT40>dh?b9`j{|TA0saVa;ZawE*<0VjImH*xD{dfac)ngLjxmPy_bNeNp z>V|2Zn}wEh@<<0X46+2#*eg6ZDLLgVbD&}>L=1HJQQ%c+g^ae;VQh4mNRg9z2wMcW zr`!bgV^dEEN*W^5(`_OIsu~)R70@fd*tHI5Q+;LB;jkG~kn21yJz*+krRYx3nD>so zW2umPsv%4oM=>87Xp)1~rxyQt#EMxj>{_QKjW(9utK}+<^L~`a|F{VVH@Nr^I%7n4 zrg$nt=83tVAnhK+xZ1Ms+YHFqaf|~V$VUAu;6bfsA(5r2_L_m>5n`ITy#gJ|>-(o6 z4|^6BSUpA#oq^@V&A$jcrU0?4urzY#oMtXZ!p;{OH(gUcaJtOxLCD5IK%9uo5eL*o 
z5COgIU1vdt%&kD(RaXWAFe0SIDsREU(ol^lWfHj9`eUYWif9(f@mZF1>bZu1>~^g^ zNsZeQn*MsfDDaM-{9fRbn`%mR%sPmkVbCML{^$iIs!Pra^b&3+Fj^Z=*mF1|a8!ac zRmw_ABwA*mR85lGMUF)nH%$vy(0A!cfL?%`y|A~pw?3J1vZ^Qfek3_LI+#VBv}fW3 zF0%>b?x5dRkARvUfKf2ga5!t}wn!v#lpUH8YUCypSfakkgl)5jAPfKGwA&dy0Pfin zLnPBjFHW3w?~TVs$=?Rj>T}u>ayUilToj4Yqi-Kw+8@&xy~u)cQa`Lb&JxJvJD!ju zr?BNmEEqedqWw=C9e=^F1KFvPUg z@b`@Nk97;nz$NS2Q+U5>OEEzXt-N5*n){oPoWUMgwDwckPQK4*5<6iA2Qrh2ZKu1G z7Z%>)7dv5&OoF!wT^i{tjuEqRtydxEvC3!;<4Awz+mB3(D_fJg%e1H#Px?tNye7K; zWgzYSCZg`T7$z_P>zQ&%bn5`n`cQ7N%bkb`1vppVM7{YEj8y-7h~Y^AQ{??YaPg3m zI}XWAMoM~tNJ*oBg3{fMAdPg#0;z?7NK1FD|GD0R<9O%`uXAU=ycOJ9#|*OADtRYXR(o3FQ;Qh3LNE$ge<{L426LrF zL!X6s+?5shFUmo$1~IMu+wo|m--@JU8x$^x@xzq7Ckl@i;9$hl4ACvn$!--!`D+ep z3lL9HX`%vsn$%P}K=`%Uqdl^aqhnK$hBTIf6+F^%M*! zQp=q)b?z4j8M!S$R@qD|5uag5jv5B2%9bywgj}tr9BtjNfPzm|9Z7Z8)>>ZelmXjo zO+EPhnz#*O?Rf#TSrMubZ!zFtYOD{DM4Z^?+CO8N<7TBk)Ak{Je z5lv)y+`2)|*w`3zzZ~OA!y{)@YY=e0V0~e{INha*N{J*_OS;3LiN>lNNB}1}333tn z+bg>B2LVkPgVum+hyCR)Z)an%{Aag@B(1ErZVG3kiX~IJ6wfyT{B3YP|m+k&R?Anp`aP7Z1^p zY<4`I{}ponL6TVYXkV)-{qaFx6s|{jb~KPZoYVjFA|;=p`J8Oqf&l*RFw2Elv*OmL zSVHN0%8@dzp^k(oA0=%FK^?^7Z!pjH_x+MQ2I9CBZ$)xkw3cXy>u=dXqK$hPM>$Cd zePvD6hK(pB{ww;AenKOLnT9Z8%YWI>Trs)LwJ_d;+RE-0AnyCiGmttDAKAVt|NBk3 zqagF$XbNtkfq#Bm10wEWh7W>){cRA3zKGsa;FyJjBt4kE$~*l}gGM;M4V+uyv4z_# z0lYK6M|*&vV56EO3E|CE7~8_(Do5t8ZZVtxOmji}1enk{H{i{ ze>diOU?Z!fk07^ttN;4zer8}14_@_=-vS-}M-ck!w}AHroB}8FgIg2Z|N86yf8Mg1 zad*@IyW_?x$ww1s?qj}36~|+5TJyVy+L zFcjaN*Fok z;^Q%;gDpfKN((tYv?PAghNEH&@ut3LRnv<+a2$la4&}H)xR?KaP3tHqMUe*>*B9rp z-l_1^qyss)mPhF{hqtj#slc-DwT@i&8DFEsq@ne+qb;p+K{EESc8k_-xy*vL2D8lI zT@PK{VD^hbGIuCj8rKgPhobNeSHSdAt98rw2G*L76^r$uxdi@Pqo& zva(Eb31VNVK6&&knR9x`Bga|tsu8yrKgzg=u&>A9TdJLGi6=40N@9KnwE`zLq8gI@ zI5O0@wAx~$BFF_ifvu-|9hxI_u=d$x?NJRozoZIEY?m~eTkXdUhq);r_9n^Xs0Du= z81w0|x?4a2z6O;paYZoq`rz!Be=N zGDJ!%x)Fny-G*4DRM%9(kuO%0t4*VM`hXVM-j^$okHY7Xl`HMs$*VoYJr&sI+Jl@*Q^f3O&L|-eYu%VHoJQ{~Zb4&Bp6uWIfp5n;K zCn3|y!F(828-rn<4`f&bsl@hEn*H0Sk5lNYmvXvODociuaKD{JBB2-~2(ij8XAZ3X 
zP5^48+kDCLef4K4ZX65A9&wu#6hyzH*y*i9RT*;3(+#sZ_R&!jNdGwG>vJjU#HS?2 zJDdN4?S=2R7C7bR%(Sm6e)TLh13UEfb!gfI>-~*i_wWMokf`>?USa*~EK6qaqL5E6 zXYOy^*)h|Gotb`f4_cf1ao;3`?E^>hd}Aq>+(TvEpUEYMy+~~KO)#g*){)u|suQ*V z&$f?)=7*irCdVgj)|-2sxC957bbd?%3rAH8!Vee%+gJg>3?YFa8BzZotlyFBp=cEO z#N~#n)E&v!8yfHQ1QdVC4*bCDsG*0Q`*0WKC)I{c5mL(H#-lNlNDA zJRwghQYWzFaeyfUA3xl-Ao?ezb$1TxM$yO)IJNcXos~aMEBRg zOBbci^(1ZhlUH{@TFMQQ1|_*?a{uI&vO%FJ8jbMZ)YOX<9rn#_qtB#n>Q^T)pPy61 zTFc6&G;XSnxQmNEIe(%q*I(#h64jcOqsB5rRXW|1(k$xdq@yurxw0L2D0JuFF+noJ z0fl<3s^?y@fxolTP{Yh-Qo!;K>b){%gX1^D6;6);=y*>Y%GSrJHaMf)vX{f4E z)?U8|{wPU@7aaXRE9#9^0fo z?|b+45=uhN$_u|(G&SAU@q84$b+IfmU+0Zjj@njx0%bjhA@scKUtBq%qz z)%y6I(~ONY&ClgyL+y7?g+3f#2&TgmcUpSyBct`Rfk@=--}g5{A2iH)&*xf4zj1`Q znvQ7j_kuH!_i(>ivl}9M`z8SxTL%T)KH0#h>oQ6If+n5Qk*JuwDls|o;>^fayplX5 zXR5%J{U3Y#e}y9WssYGQ$6OyhpN#3H$-xuPKf3-hP4Yh5k68YoRaA9v+Uxo(B&pu1 zxO!e}ctE3spJKKFmGh2HlCM*i4K`Ru8hzeEF8YFu7}5L*C=ez5`#)F{?9sprTUJCx zf20Cm+Kz+mVcnpdYf~gNvAJpQ#~O$)W;BaZTmpK7g4wEp_WY;D$XG%5Q>kI)q{W@>>&(V|QoPwM z!JwsZfQNL~;)~c^{&6Pj??EFbJy=rhCqPH-AXJ|872%E%k;ViX1`?YJ3lab|OpG3oB_e(aN zk_yD6getEyO!Tj&)ZvgoJpc!0f$qjO(2uACl&!qG`wo8{Z{})A1N0nd^Xns-;?2yl zodtibZ^B4koExi39M$Wx#p#E-#g|6}yF$&Q0}9j^i#Y1UN&Qz}7=3T?)c292EmU>A zWd@#z+ppYL(iMnt;`}{d<`Xq-Y&#o#k?Y1$?UDx_Zs^)fZrX+#z+PlGatI^4Lsiq( zHnR}P!We!XkLVQGh;RVnD;4d*cjf?NNb39n7+G}yz;Z33Bc0}dqOZvm2nH{pPA%sz zC?o=!z5vij$Fv;2ME6%nbT0@DyHY%*vd^o4ixrd4)`=WpuF=MmSU?VE+YM;Ag|{_h4*N_idgCcrY)h7}IePlod|s};>!sITRDCV4Or0N; z!BD04;+)5?A~N;{KN`M38P+48L4=sKix4jZQ%e&Sj_Bw7+HV0g zIzIq8+Ju#F%@PoCt|U?sq%&Ck>p*5aL{79gXxa34>n#47)`Wdtjxz3DHSG7ZUTB#A5u7^~KSw-4#m|01a`#cer4T7P7ULQV89r`OLAS8t+JtI_9Enj~MC zKgeKkLhZCc_f1b*HTGqn-^u=3Vxh5ZtVd$IedE%&S$YllC+h$lD6qBfN^#@de5`|La8||WgK;;9PF#T_IPML5$q`tOH%ExF zO}#2XxP%2mNr{eYl8!<98t-Kife`^}qVlcQ;+9gU2Q ze1Su|G!CAl*S68I4qvTq4m)brY29N{vYzKXiHW^!Emx9Osf|r1;aIT20kcrwEp>Q9jmRWH1OA+o}tH`DqmE_>D)(lxrov!zbTw z)=90a@BOFF>0Frra`g~LOZQ;tm9w#GMqS+$LZRc>3^7y$kaRkG#8P~6Z2&+IXArz^ z6+2+Ox*9YAMqpsVXHVkiYyi-rwxSwAz%HW?(X1E#)-W{VOSV 
z=KBwIFLU>e2aX4tkZN+w@h;Tv8<^#tHp;@LXV(4{0lIhSsQ@^TpHrT((rO!Ox8e@! zI5-ua#~LgsN~9B`b@b@y+IT2Q)D$&dmezSzYU8Gv6}p#WWXdq{_x^W11Td7xEThT` zIt88Bwfa7;P-DI;Tn|=r4g&s7 zji5PiJX~E&{nhL2@T6k>MDQX@y(Tuf#r-Y62ouGTqxl&Oq^$nAc=t(I^X|gjcyZ}U zk$BoiAC4AAOsx_`>*jLIeTc#rj3Dhq3&6r@{vaAgh(QO?{0HL{_w$g#`C}-B#sh8+ z)8&in!17{B7KZXiE;@%xu#spTNyCCFt4x4F#E) z5Ts@iQ&%unx4*IrdIEr-u=WrzZPllwjL1b(1IZsmPbfVp3nB31S7Q_YLF|L?IcYh5DKO@yF6M^@*IS%>_bKp7KA^E@Q%Ax} zpxD4Wa>TjyJCC+`%eKrx?pQn9)5dyy6b-b|yA1GeH%uc$o^l|M6E+pUD8Bd^EetgOK%O*%!W!dM{)SFRsUcbwOj8iiMyhHV>+k#!1?EC+Tf=T0_&HU{v?4K<~ae&7OxToDE>Dj(bTLY(^jxq(;Yvvc7mc}_u+vMXENczUhuMH? zny9{UWbVq>0OYE0<>rXWms?gbptO=~023*k$?WI9Nfh|6VyVT1cb@=F!RUJY`&@0^ zu{1R0Ok{FVXC-Sert~xT&p2>igH-4<-l)GA%f1OdN8KIW#=Wi`Xri3ZlYHmHxXiJU&F`crVt#@z?OH5N<=FMIE7SW_MhyAIvb@mhgPg6D$T(=xZ z$lO^VPQC@aGGO@mUUh0%z>vE$`*7rt;7Z9q?pA5&=o>BT#z*?~W>)4_>RNoFT zhh6uTiZ?D~6A^!H&8l`-YGc048__PnD;_-|eXP~7P*hxEdJ!*g`*AaUKP_pqS|!_~ zxI)QG-U@Va8bFG+_O8N2CkU}GG2AMTPqXpyy@}Tnda9VOXOjnLq1E~IYy?_a!U3KC zvYQ?vsbw{6!yzn8ODZ(!q=2XIb6FggnxNC=cHgtcWy6IS8micCjckv))%Un?65HA( z*8N%@0k~EX#`B87@z;K|--LN&WNUvB`PWhIo(ycfOBb{6JmzCCV!k>nx)6E0TBAdL z0JZq)yc4eY>?W<~K>cXrbNrFh=fVl^Xx;r7l74x8>#NUP_+L1~9BMe&?IFCJ#+`*N zLM%_eSJWRYw6eX;VQ>2~v(#cAPewVh2+Wbc?^FQqwN3uQP`>Gp70VdJVP7P{m z(_l*SO>M2rgfDJ2vo1QWrZr3rrREJAmCs{s8)ZF2nmeuzP248jUlSN=h+_h!SP!wR zRZkt$!$8$<8^P`d3PlqA%|@M2iklZu$MZ__;>|Mh%<-amjdxp>e#Y>{Y}*Or7rA*7 zqwP2HWIAG5t@iIX>pH47M~H(=zxl_)JY-h1J8b*uCLy+&xX-ABK|<}UbL`xz8e5iu znMIer&g$6vK@;G`Rw&QmTk2IMeV9RcCQ@~~Dl=5sPY1$Mb z1`}}o)%*9NOvW<$I8?kMz<-yjc~XOoW-o0l@+zzzjyZC8DL9fhE|F&XwMZ4RJF|@o z&vYN_F7^>Ggk`gfStZO|-H~qR{Pez`5qIQA1s}0Y&C?J{NN#P3ZmF=8*PXYloRT`H z2d5@2Ga`}R_n(9i;)vCYbp*bqKegsNI6X?HA$unrfWP!{I~4ANFOsaGh5m3s;Y--( zLl~Mi)P$a!Q_!{%UDg6qZ1UQF6?>OiEnEL?hnV%Gw6R-%$S`WO_gQHjOUQulSCP~g z8=G`ab#j*cjzbmCU$@`V4V9!oYr5&!vMWiK=QgSw)!-Ys%pfg)4b(nb9KG@E;g4w0 z5;l3}HN=|f3<@(*ip(3)W)xK zb#J_K6SD4o+^o~mUG>m?5G$AA!&d&*zQEaF#!Dqp{-+jIazg9Yk6#|OS%rMi&ol|F zT}KDU?zN_DlQvK<<7DUJ!xdh~m!&cL7iD%M)6Pa7hAidN{W7Sr9i6;Z)3dD)%D)wx 
z9;K{Zsur90zitg(lmOr(s}#uA-<{oW9C=<+5A)gY1YCgGb%5BmYs=s(NKvb&aQLo@ z#~@$~XTOs7s)v{zG#B&l?U`-ehyj4AO`s6)vRo^HeUS32@-qDOSZa=gwC0o4P zbIIn5>hIG=N=>-)ytv4>3+Ev$d>5TG`bYVJ+|dUzeho3d%>0pWb>DFqk4-R({%oQ1 z#Lkb~S0o2xcxRgPf%xUfdi1s<4qFzsS02Al%|QoMD>^7cRu57TnEgumN2vHEaUJjW z@f$S-pHp4EV~UBPkcJI+i5nrSrodn5c~K+MHV^Q3bVNqdd-ARI_A5<&8?=v)@i5s$ z-$*|HqWWw)PItYqbRbleqHKKIl+yvqneJGqdJP%SS?qb`Bj772^6x%iP(OjfWl5|w z2ah{hInF)-9LIEDII&x9L;KbH2Q--D_Ew!p1W>nHA@5ZcsH779d^^pSqmnOGJb8a- zum4DO8nU&lWD%mJFm}>VvCSr!!Rb^(0tz9ygX{mU@fik z)nl{FvVmmj1M*_E=){ug@tBIrrMT`^lW$3S&IPd=!2tzP*vAGbcI^}wo$gsCJ*&}h zoi!>R?#O19Er#k+{ zl`md4O``{}h#LNR0tQPZdq59zasT(i#XCcXO4+L`#AMAE_wi>Z4I}5D$I}p#J&5V; zm21EYk{AtQs7Wko5~ryXVluk4t}Zj)VTf;fKl+k9^X${3FKAVgtZ?b>}m)N;jE+lQ%c9@ayHE5r=-RE2P6(1X71iVph0jWj)b_H|e z8c$eG+iS{Hu$DAT$|v*p?QqXS#4??VhsR-b879F-n_`5LU*xHcg+^lGZS^Ck!?=;n z?>qfQdhRqZlLv0rNtF#PzAtLEOPsBzE#t-IJ=D;q*4&dUi1*CaD5E6qPCxCP8Q*Ddus-iF8GuZ~DlNDe- zg&X!7X!hvEQi~IQKr~6+g^}Us<=zz-Diufl=rE^khF|_IWii*@+owOxd&d2zPifW~ znj_~hH_vl?V{0zpBd||*`d&@FhM&5RY;XH-wy^glInvD~aQwme6)oMp zci9?m)iR~t<-C(5G>M^S>7^0Eysuz#r_=E5K==I_vj;eab$$a?C396JhQ1@)gWK+> zm)9O6QJ%WKL&IX1{v|UTheG12zV(o`3B2}^jUd{$$|(0jl`-Yu?r0MH;~hi2V;Cqn z!mfEs1gSH@Ek8djbas1@MT<)8dro=NT9%v^kJD@j^9=+RXi7CVGM>Ta;@-r4kmNya z^A$!nXYp2)8h*8(rGT0&h@IApb}y_VXXEFijg&OkpMx*vI5S$@=(HX~N%a&0(Qdgi z6fs{OUD6ZC^v?B7^m5~iVMW2;7f8S3d5rcsJpH$6H0*q{e$>}0jkk2`dsQd_=9@Mu znZx4D#SxXl0riUu`t?bM6(R?-NAb20Yj-P~jaoI-D%vw` zd(+I=>Yp{VC}HT$b;ymSr0aD?>>)x?v}EgbjK_Ii!I!T}M({&l5f)vNLu@s6(#sD! 
zyai$$P_9+8Bc2bwEHu+o$QVSsn>BM(e-8)iLsT1rF!6js;(FrQ+&8e7vkEAB5-%=z z3S$xCBQroRyWq01vGEtscN(9c7v1z?@#M8^{{i>^({kx#>nxvQM@mSGOxPO~xMpEo zaz?kWA`yPN!w}$e)pxgBNK5QES0UAzVW_2fn<2;?)@u3Ih1&xcjHO%!yj?P!SkDj1 zl$>o_%`T#;<#94j^x*Px>~9h~XSspm7*wMOl3(M~3Di*(4r{4x_B2TaYB<}ja;c2@ zl&C3`oE+;B;i4q8cBh)?IU038pu~?y!ixLXH7SnFEFfAUu!~1qk-4VM&KYaX@v20t zkz=5nFn)+EqIpjYbAA-5y<=VL>dDL5s}d8Xd~V~@f=6;YTJ3)6V~*BQr)ow|V{->g zI$kmC=Lw>UWKKxODWNo$65m%&K$ocQx9PEz-y<>H z(#+lMPt%-CgUQ=THT6x%M;ZVXc^_Fkc!AtHkmy7fFQ^w|&7e9|pi|<-tJCNnw zWR$O-Jvf3Kak(f@G(32II7P=_MdhU@JwzeQ=;>GPg<5Xjj%0O5p=cjcgBm!M9YaGX z+Yir!Do}+D;@r62S-np&$B2yhmj8VPQDPMo$uRUAJ|SDv;FGJ$e3Ollb`|ipfdp5?#M(3i$?Q3)x6WWQ=s30xS zS6*9my^f%eHq5&j>r4@V2s zf}@%MV;Yt3gVSf=&}M96u3y#$36w-9TTawHW%ue!D)HLuh;CDh#^6!kMt2lp5IqaN zix&B2%Jigvyd(adPznr}#!{B{*C5HaV~xshPGr9AVip3%NZI1QNe=x;Z75wdd+&~3 z3!*Xm#S&!ni_3W4>Kp>m9x-OH>}o&E>V^;8Gpi!>A%L`h$J%B7fyKKmjM@FtNS)GO zQ71hYRf|fkl!R&`JhpsVDlTpT?fRF7Rl)az9Jd?EZK2eR>69=_kDq@0cZ~mxN74{I z@EaOA;o{eHEmdFfQpiZa;#zJ^u*vVf^aLZ#;Q>dnlBTAMA6O26Ov4L_O*sC|PkgHl zjj~{)mDoFB@+j4jfd+5lSz)qozY3gTPfaO)cyX2TTVi~-;04UoQw`E<59P{Goh;Py9b+Ke?p4eXr~fp);T2aHS&SaywQ4k(W7{a zheUhL85WbZKL2mLtfljDnODt9b>`$M7e_Dlvb?3gdiQxRG{5dPjAX}0Um&X@9 zYWHTZ-lCtlP0X3?WZ$2kr1AAMG6qKMNH)Ti;s|)Ed3ihxLsH46WKbp;Bs<|pg8N1E zgrk@@CnqN?hF`&kh^z!Vqa+HWnjzY~McgHY=#KGk4`7u{w9L^ecT8*ci#nfvHCFyQ z6f0H!tY#a#JYD4~R13ewE-8z8jn0pg zH}QEfq3}Z5L(yA&PMl-|mSpQMA*@$E5GITbSl0tq<5;b=H+NCfFsci%A7#g|7~|W0 z&AqD}Sli)(BXUlEVBt7{T>|f zE2NwB@$Kh3E67eo=iC^bH;lXZgOLN*bw1)Lgo}}lpiZeACghKEYZ|x|dB{-G-1@2vYo78sSDCcU|U)bfQm-4R4J`+})WCU0DqScqD)__EHPT>M%%4KRm)4 zERAeRK{&w!Mx^%M;IYTwGLT}fW9J1D%M(o_>(q2w%*8%c}F1;_28K=DsjDT z$rm&Qv%3H&>_J*`+Q1TwVE|-y<$k{yXyU>0n+hqX3H#8{kSkbW8UPK0PdlSPu$}Kb zTjGKNF(_Zv^A&tnNfhm8l42%&;}9ea*iBFTL34hM97J*g6EYpN)M_h)o}C~rb1!ZZ ze*yccV54419Yt_2WB~g+-ScaSuS6r#5h1Gy+-8;tbN|-`Rg&bmA>UYJ7~U|A5ICd1 z+gzD|-!ttv<91JNOlCo+Z$1Cmzy)PAS9)hSH6D7PQAr=&Vxkm%owA^{T)y())hHV| zT#290AslTSO^Wd;$|~a?82UF5KQ2`51AD)s)&jh!ZD7KB4FYxob-#r;NH;H7uI;Fx#8XAT@n0 
zbv^a5kF~vwd%72n_k?O!MrSN>uU#^OAf{1Tp5SycS;t5(Q4>v3)E;g^#OTumMZC_&+ z&Azig_+jHgq2oFw)osrwse+b~u0|EY^Hu?#-$mi2^V#b>>L$rCve1y(b3rufjU4=Q zUKJ*pp~9te?IL^TMMJ=j2`dKGIdVcg0K{Mai5rR`4R_{3X2p_N_c&1Yz3`wm4%?MLQ z{=3LIQjTHJ!~vC17Dfx{fC^g`9IuBe}@jlmvP6 z1NC1i6C#P?TW&;cbYI==w>z5t#FdFjKghY~!TkA?E0J0kse5#Mh=svpGPd!+Tv^)ywk7)2a)`BOZ=h()6t7^VA(^SBAifub z-#b-mDEkGSdlkp@OzOq^#lIdT2<_+8yAL}~2Q>=2q)oeNp--n->?JrTfC}EZMTzii z;(##Gr*3V@j#l9uIs4|jGxs-tQjT*bY`3Ftl0KhazsvGJvE_d!(;rfhjtHF;=i3*S zkHs;l$fle)^UFF$j3QPEQ-^Vn3`PqpKBOYug@cVo#|E?k&XRk;b7X~8cG%vf$~GyU zukJjtAp=g*X4)^=zXr5YKT`_1e&3ZhG)%oZ>yb#x%gg(GQESnK-2l92_36ugIAFK1 zK5h8RIVQ@rDdE1GyA+ISxcvSsuYXSL`5}!haRj@R#1UWMxVkqJHp4^0`Vb~*R^zea{B90^L8r>psE0#Ucf7PHNIy;k_I5TyVC)8* zoGGjaV_z#Nf}@f$bt;<#-x#PYJ9wuj6D+Je!WzT|kom|v+p^D3TlP9|W`>&pty?mX z-?l#Gp?E=V{~WRWnv6@qZ{-vZC=0FY|9K8+q>eD6o<^II2pQ8iE+0k*byqn}#HQYx z-fbu#6xIx&B$+@}4tv~cHe2?c+(RKUNfdiVVOlEN#=UEhKt$>m?_M2&N`lx2xJ`S}*dWyS;& z^ESRE;r-2|yyh%6R1)7O2}o~sOCo;zm`gxg!;TMW`J$Tw=Wch}YJNbw^L5tp(G}iP zj&eE{b9AeRZ>NJ60t*f>Q49(=S#EgZ+;ECI-^`5tk$MC1!#}l1N9GW+{DsRHcL&_? zVPKD;8&iev>7VWA2&rwS5v%O%YzALLfRP6e{`HPXWRP|nz!C5gi)~+zO`gZzciU{X7`*p{f2uHeY4N#dXK?_ne^ymvKkVN%B>2gsAo6;2Q&4i3 zOuD>Ohm7@I(hELIcdP+)wTuG#kp9yCk00GR{U{`Ue=iD~z)Va3h4o#Q?*6+8E%k}- z|M`~Lbjt=$SQeSDsYE=jwi?M^s!vRYYpB>yv?A&4yhMI)rzeLQ1T2y90`! zDR*)&wke0rRY|a9i)2XM6Ynkx-fvZ5rx?pXRRxLX4G zyTl?W%iI(G@t#zbN-4+DR=KbyZ5Zb$tH>cc#vC(L0)rMtv@oU;VsHV~D+#hw|} z6i`x@A@`6u4CA8rnBAX0To8x)!WjLLS=?_LI}SAI z_Vx5lPM4jM7?6o^gOjN$7q*72Vg?Gt8DaQ}#Emo_jriw<5$hG8PgnF=@Kr}^ZgaCS zDd!ZQ|6l20#ylD=W;?^I5@ndn(OuE3JJ)#UoQ<;EDI{|4UK6gBNr?5VY#M z+iiqz3zv&yGt#RV96_288M2Et(Xk>tqE7XrsK%uDjQ59B45D5#%JMKW^y*FgLJ! 
zXY-Hqv1US2)p0fx5zQhf6#EqE8YFBNvei{nMr04$jr{cO*>yPbve&K$#m3E9juaz=d42pYNNRu%{L0s%8R;9%s32ijVIVY643G@HJ0Hy1$ z>f&V&(Xo#jE-uQPHdsUbk{u*Ipcho8^%+@I$%P(4$MGn2rBDsU2s2lTDKF_;uKEo~ zSONh{v&4Q_gWH-)1K2%Y1%jF%fJ~_3$PPu&&if$8>u}5YPWsw zeH@l%Esdt{3;VwRCFf0xzNen3y*PUx$JZF@G1=5q6s_%td-cOFeISEGlt^e&0{=&_ zC5}Zpl5g}|9i^a09l4zAvd1Bh70%fX!yDNAO09mS*FF6*s%YQCNawP4)R9ii&n}FH z@6Rc!4<{SiweCWeS^dW^mJ8m5fsffj?j|vJYfv%%*1&~7a{0Zdx_1W@l4FQMlDDSB ziyLSur^8u5D9fT0bZKiH4BmnP&dg1S987>a7v8(l=nphy`+#t4Uu9T>D#H0#(<5=} z$XpJL()B*GtKsldCu+&tucwF~ZrxhA7nigVeu;0A#A(0QJqW_nb0J~v+h+NY3hi-O} zg8FSH|0oklqjxE^C^8&mE z6mo)7tJdlfs{hc@){O8d%jAIACs#;*f6+yFS)%l`DCbWVC!<;rJ_AU_WYB{L}BeSmh^sbH96 z5wiI`bRU-D#zcRUoR!!{aZ&QSB>cyYZad)M{A|1Q9LK!Z!~D!ykNSSxd6qwERli{1 z8m8^dLvKncb!_7|w2{Z7lV>CUmoKD4x<++3baKZ>F*u1#m2&rdf*sZ2X-qVJVanl2 zcK34BG@ARuX6|jg#C%Xja_YUKtQj^A3D@~U;YvzLwqS;;&#aEp3&MNMxTlh;v&pzo z2O8=S?@7fij!S8jTE#$}34<>)toM`2ui$@nZ&W;YW*_eQE?!yzR;l+VK7hUDU_TaO zRIo(yP((JsV-@(v}DFUaHgaqV*pIX>k|VXbF>5RGo2P{xsQL35#VAoJY_$L?x( zx)q!L5E>Y^%GUg&c$tx4$22NS4z5tGXDsk;;T6Z-Gre~j*_}NcqxMgCt)ZM<$Mp&Y zEvR^?xTB6=f_*=J_c5|cbW!t&xp*KPUMTxWX^1?NhAAkxt1rd?tMdhoLdH1eUcFWB zP%r!T2$-?`1<2wLas3&7rSfA6X`b&C?we}ndqm86r+Vd6-z&C#Xwt}o zqW$~ZsF<}s4^zenVFI*kF%UH(Z&GO*fYwpTd#gFyM?*~nuxGbLd`T1rOi5(qh2T4u zr4$Q>9|B2(OPY0Zw3T~~Eox0LYLilO^F|$?Vp&9STnl0<%-XF2R8gQFvLRc+ydO*u9deRtYrEXTQu9QGOynoh~=sg%r6W zZS!KqwQVzGq!mBOk)jZfcs+;W3&<0GlsOaIwh+QbouwJSK8{X^bU}r$V`^j#HBcRV z$Wyf})I;%#5#&7d_AbK~QQM)qzCP7*xu zmodL^a4;+hMc0cVw7pCn4mQA1(ojM0)^p7-x<-7Z%ej%ifFxl*3bg5orjbS(U||>X zdC`DLc}&IcI`&d$k9mfVu)quCIJ#HFmBB;=qaWa%olo6&-jT57R1$PQ`TI0K2$H<# z8s;01y~CFy)j7Kz!rn4QV>^<p8JS!DY0kABQ4nX71tJt2x$ z5|=J)FYsfgfE><@%cFfGOjrv8(wZ}2xTx%Hj~7Q2Bw3$3bEo;c`70dC(PfHj3iG4g zc}AU;hNmMhGUmot*>`>)zLGem7kcjej8je$XZA4kBr+$<#@)~bgA0#ak#=*j%{v&_ zUd9=s^cA7!THK4Y*N ztN8zwzR+P7v^@1Ne0JadG(2_O;|GqO6rIc&=Tjf}YuaUq_v{PRPJd)9T|I5DkGw7q zU)^h0XBfY_jQw=eyIyy8SL?_3N)bMVz@)e2pC=Wm6Uc8`uU0v2zMTPgTYnG>KZ~95 zcS0milG+3=S<5G_NaawJJgLsha^uCj*>Dew+R{5QgXx=oCQ*kC^+hjo>_oP5EtZd+ 
z{L+W6u_^TU5f#h?x%LGJ@A&yBezdtE3geT~PtBIm0{ zvu*>+ur51&^lbRI?{CHy3rq&49ak4Db=2C?M9DU;?xc0O@pcct*Re$}Lz;Bf-XpHE z=aT`aojhgGj;bH9y`VM#R>#%v5kx|`?GW4E4LlQN02-!Z$z-d011V_hkukRf7wpu57wR}cx2--(; z1eB|6b9%h1_xt?J`A5GNyg&QR&u*qap;W}O;l}pWa^*< zNlJeFP4pM83(JM-B&DC2RC1`B=t`ulZv#c6__%(Dyl5 z92WH{Zo^L`S+E2ZdKMQJIOggc-vdrZ9UmW`7x(A08adXhmz`~SMmZ>$P#XZXqF*B1N4rYhMb z7p?KzSfQV4Ksr&_ua$y({ShM<2B;u!!t0 z{WPe=2V0@%>R0TDK-;+KVe2mHWN2SA7v&d~L+PT8gF7Ev9KG`x#zp}oRzLhGCRvJc{OE=_M(#<`7}bzuCQbdRZTyme*;%@zphk8p7}8(=h#4q zu%v&0$A<&%`&0i_!f1R91x3hO`}jlloi>^rR`~&y_mUn2nA*CpXXXs(Gg#^v;=}y_ zOJDNpK=OcY%K-_*tMTNdNBraogz8Uuz;W>rMse=5H87PB1yJ->fc`E1ScJjSrgur5 z?-v$_o-B8faF!2$b*`Pb&oX!wPx{1>qN~O=$`+;|h>s+b2;s zyTqEUf~C`0uJ-(5xh8P8W!Y2u3~Q7-grayR1q34)mT@){Ztayx$w+W#Hcq(yf{$oS zxIXdo!-G{$i2edh&E3HV8goRYvAcQdya^oJP$LdM*(J)6cZEE6m>(d zcj|DKlfEgD&NsU4f-<;;{Z8G#;f(!VeKwW?W>eEI^g@CVOp`{zBlU7 zWK^R>>Fn&xW}RY&r(bW|fT57jVmE+zQwHdM8VLCiQ-@pjgR+1(_wHZCpt4+B75kWK zfb2xIC2q|b9o*j|&qLoM+9G9Iso;7iNS%zcot3L{5Cf%sk&UU2reh3eFI+`M_@!kC z!SQ0-nx%%K=xr%ut*RvM4THxr4^`#X-FL~NqLw1fK>-|-qmOjvjxC^^(gFq3DH}#w zYyP9LjWJ*S=Q0^&`IX3?Y7i0g&3)wJP9tZe$=`HCQ502133=G21rJ&6n{}n-N~;V& zoJGKJH{e*mki>U_5O8$`#9ITX_@cD-(`7?-YNn>?3%@ZCe(^(8AS$QM^RVB;gG&k_ z3_wt!BxcKObal5fVaO1j@7E(=8I^04t#tFg@MqKviDC1MNpHC}!r@q-r4e}kgDqv< zr=8Zi0Kc>QAgQaH%x>D!Bq!fv{=cdP$!NmBhf_+Pkz^9$g(?=~V__*34y)0l&ZR+5 z$R7AHjp`~J+)>qG36hLo?LLX1eV;4x=&Rdy#@(a*v}Bdz^7`>J2~vX7XBj^c9#v37 z=Obs35@)j(g6eDWtktpyueA|ywG99d%Ye|vff(WQ^V_|<|92kAfy@aFdp#NON<(EB zmm^P(NLT!wk1KAe{OI7?B1&?Oa^c0Ov8@IS?M(pvhj6hzvzI+FZsAld(=7uBusn zhpz+Q2hYVSHyagaEbFuh(8j69#+tdVBshnOsmB76rF-$z^E(K}uT|;gW0Rle(}xbK z01KED4PuawQBcLN`dER;>;B`?LPv&R4P<_@uHyB;wU~VzvINJzfBj&?l!VA+RE7Ry z*R$}sYl>HdO!r2(y(J_09Xs^W=t~+onCl*mIg-?*mPDbw5JL1z&QaAim)5E<)Iq02 z^k`K2fC+NIG~*#y9h7sAQ4ETI&Fc%p|4KZUq1(h2*Zd-&x#BI9`NFBzu9 zQ)9`DugYZ9It%JRO|@<~}<6xk0Hl%5bscpw;Y|No=xEuf-o*S29my1TojrBjeb zI;2aGZb|9xR0-)81f;uL1*E&XyKCmVNB6Vdy`RVT-T%MVEX7-$nXAt0jN?3xW#-f| zRjEN5A)wach#!zc#S+l{x42z}3^tuV@UfDu`Ga0fdu6u?PdZJ`SNFnd?XemSG91Hf 
zL-BIDOf0@8y@j^q915~b&ECGQvqU#~X8@!z9@rNpOQ{T_aGAv*wq&Rev+Ahb2B;wY zINKcXG4(5~1=JMJu_IRE;C~s7kCFag1CkV+kK}=^C(sbLO(BH zX$O>p+&*dBkx(+M4ztC8EK+`kmenLIANonB2N+;heBQC8^N&8G0V} zUR{-kd2eVKF4)7q?Y>SLdPxB2mdk{f(G;X;6+-*jYhm)kO@;LL{Or7=+GZkqoE5{u{5p!vw zE@DTuj=mGZ>yzVK)+#9lQL;Lu)!7KjA-^KHvznJ%ILT=aZSU4T2unaNKsZ`QfUQxE zo0!!&rZng)Gi<7wUTVVGUet_?#Q|7DIMdi zp^=Nl|8U8Wj!qGTt&-fLUTgn(O;8+8Rf$8(fr98rm88S*J&{}T@=UkIqZ0f{lwx0h zTiYJs`-PZ#Z(?XB;m{rrMcpF&xIVnT2||Gp%^_#m%%4Cq(t9&&^rug@7VDo?N! zx|w$B*n$8=1?bkr|5}q7y56fq_Cm_(v)#pX0yQCg6RSEO!TjCI$*L+73#|$=1cU%* zR+m6GGBrgH`13!HhTp6M^_KT`^L5{iK$}M9G_e3G(_9Vj_*Xg?Si0m_2wkIAErr&& z*BVKFG;Tgzz>(1`?Q{ol(;G6 zP5$``_=F?7|1_QI89FuSvl<|gh`3UnMi9O$hA-btgkaf_Apo{+;o?|m4ib`rr;v&C z8+zc)K-eDCZhgH|x0k#!`pFws!lEt3k%XM$x#;9IW<~!bL?YASeU^PTQ1`+vEi{ia=S@D6k0Yv zj2o;?4kIsh6NncB=DF~91gPi!kxfxCff=zW{@UrBut%W^R!?%&E}e?%wQl71dFraC z&RJ57BJA{jagDsyC|$BX1%QG?cKYc~H0E0gxW#{0yZquEjEe?9dV`_5zr76n6!UvJ zGJ}jul7`kG*O_iF<7p{IBOiUWvSM^M!}?e(_Dp6U@-tV;FMneI)H? zH(7Ps@#!mNB6(}B1qW8kuO!RgON>BoHeZvqn>dRce>C9Fpf>t0pj=?q0bPzRR#$HJ zgZMBWFqk82=+~5E;1>xRMn9#&{u$;@9HsGE(7V!(Ih;P*f^9rugi)Gs??fumAxi5E*R z7h)cAqBidxplz4D>f2P6iB?$`=fOmJ`J8U-TF*n025k;C@)(v3DPpn#w@a0f=R_bC zT6*bkx7+X4-~bdbbVzq<)gmZiye1lrWlS+#c3+I2X%gBIZx_cM+^aoU_EG#UT5kHr-5y0SE%AIC##KdRJVnraMr^&8)ewZ)Bz-o(@5 zlpj7>9*9!x;e0p2{7!E|Q+5PCOW5cG%sI z7h9KNanG)JO=+O0s`N|UChcbV6rE8nxK2zi2Y)=HO^%@4*BcoGuv3!~5oNA%` zKzIHy98c{)1K0szFk1&6+uVpdC~&{7XbnE-2l8&AT}w2j&l&253 zPkhG&tUg-z^K(w&J@}X_40me0IkS^=bI&VPTWdFj;Xv>(4&Pm>?C_H{)XQzNi)+ASL|6P=MF~{^`vNR-78d#Uv~Jr-i;wx)d)$dSLUdcO0eb zH^ZIRlE0ZWiYVu`=Z*>{u;%*0wVk3`-G~D7D<%hUO&F;y)Gp5e90bgI_4XN#*#M&M zLuS^AbSA`92fW1|ao0eU!D%f?RSA+E5S+2DbW0?*(J@DZ=DG7)#E8IQ^&h3cNO>hn z&^7uNQQAW)Z6_d&$3;Npk2|FW>84H#p{a}!2CX7!sIoVf^4$1bP|`}GUutnmTbQST zLZND#zGv4LRvLh6^>Y5a#m7O92GK`vs(kv+X_VfQ1#1pTyb--#SNv_;s$)AO(ZAHu zX|9_*8zO*{rEjvIf-c#FM(9f5YQRYVi3W`Ni3t$hqKa!uEj*fxL&K(nv`Q9%%tN>3rJLfQjNTDfq1JvBZdnOWmDdz5_~{Jx~7l ziQyXgmbhOxJkZ;U6kq>4;uyq+$rL&gi|I@~xZI`^Yov+e$4eXDS 
z%XWlZk)-kfni(>ID&4T$mw1}a6e8k2Id&;Dg}cHngQ2H6Z@7t7$_}ZhDb5mvDkNGA z^RMGBZ=}KE(PLFC6N{*;mO8p^Th$tfUrT``+CQ@(P_V)X;LxAque0Mg6t}~R({R%B z?DtuuYn>@aY<4};Gy8i;Jn*H#tAS|Bu6kOv^2y@JI{JS6m4|9Rpo^iRHfrbrwL!6))oI?9G;_G$J2r^royw{GJ7x0s zt5O9K#(G3%>W3;^e<|kJ!!*r(cCo4O03&~5ojT=WQj{$C$S4X`*r4J91pSnzFh{D? z99E{-FbbVE@d~O^v^pGs3szfJG20atYrF&mW1rv74*SvrZL+P976bm0sAbSq{bnA; zV^PD=4&yw#_bSR6prWnYcGJtg=J3r@VnHYI+k;zYl3s@I%YCh!!mh=QCL^! zi@!gzoavUMVCxiz-_Z3))P==kyL7%^7NH1l(3d_tbS1jJBcQ~D=(;vps&W}SPly4r zLjQOTK8Ml%g1!IMVz3`{{CCbz zj74@!;$l(3Nz_z(FfQ!$Bg3Oz%i>*EF5SjK&Xt)y}egeHeHh zgQp}8Wi=RjAK!EkJkvxQl(dUXX%*BS#&eQ>#A;%q9F8+tD1CLuy&J|yQAn_2>YMp94)JcF(9HuK3#>2rQv8uC1_AP~u84fRN(R5ni=i*TxT zBYQs39JTQ=@4e?aSX}0+(Bpeo!8L~96U9&K5sRJ$$NJ(@R-wr4a@oSLgt^+}N7YS} zZEepXNZx8`zmQ6YW# zJkzXw&9?N4#4)vbGa6m7#7CD#YegQ5qNgIMPx4wtbH;b5Bxy?hw$nYe^ZiQl;Y}MX zny!`d?k?!ue44dcOU8$>v+{O7tSA0`AOG?>)Ui)q5smOsW`K!m;n7p!2|?^BXKscn z(lBlIC_0SyS@`7$qj%klyFV2qL=(|1opAbl1wayn56g^^qed(n;&#mw!=zLowPLf4 z-a;UvNl8l?7TZO7I2Q$1lM1>p&R4_~%>n>li)+_)pzElZVkk?X;a|BC$qaZ1GYcfq z%=vkv9JE*d7Z@!UtD@WDOQ=>^q0EP-5PoTiLn4mE#!d9rfQyILwM!8e`v;ETh2mfu z!UwF)<3=YZrRRuwV0GyQTr++bgDLL~(~*e4WIQFzX*LAhVGNuR}7RUFb^P zC+;s_q^{7qSEmi2jo}MW!wH0fi@*|VZVu8~8Cd9~IG6hC533xQqSQ;3$25fIqPdSw zMx7{}GLpKlZ;aSwWmHb)bf|!9^9YvkG)stUDS;ut>$sIquUm@1fEFltCs2S+!e&jI zPDR1zqIp+5RbV>@ue6%Fx*q^X3LB)H=*Z8}MaJ(a22fDVs!ltHLB2AwjF?ESYMQ5% z#lz;xcWYuw(n(h*w3@B}Hb=z(r)BUb;R6dX=iEi@fdhh**rOo-joj8#QYc4M@d^YC z>uvrj*o$^b^rU1W@SJdSX)l{&w=C=~nh@ZFeX!d)_~# zLX*^vHXW(F-^1FY<+Nnl78=oOjQGIuRcPc@0br(e)EInM5lW)VOUAr3>O=gKI<ZhbH!n3=ewHTxX}gF(?~n;w`7_blrMR+I zcAcAxBt;@*-%-CQcQH|Q!NYn8pS!-x!m7XC&EB`zt-IjVQWH^rJv)#U^^O zD`u{0zsI^A=96lrEiBMC^kg7amJ$PIf^!=>u6|3&7!b#qONHLZpvIL!3NBLkRng%?zB$WSLGvH*%^!PBloLg-7ZM4=d~AfGif(GT@YNFU_i~hv%&Roji8;-^&v^mo~Sl!R;S6u%3M8`OepDJnQz=6 z|H}A1(S_*C5lT`3L@1u>hb+@eX^<`WidUSN%Uz7@=U5-{i^S)L}Y89Jf(}x6e6;Wf5#tVGZhvXm2kpltWZDL zNr$JtfY_a935(>7&j1P?AD`A10MSkHPOw>olUdA^#!&4yaI^^agn?-D^U;08vxV?P}Ug^vXK0(`Zp2p{*8A{O5g 
z_0Za?caV#SEd1I3Wr5~_(HpS{NqD~)xXZr~>_cpeg3{<%3s>mfQ`1xFO^14CL3CP~ zUcfcMU^vWPk#2_JV&t<%eow*jRJbdh%F&qv&Q zZnZ457R2M-CsqgnmD4dTpP}Y=QY!U&2U|rjD{5{2mvown2F#Z@6$4Ei7;<5jD8kSq z$z<|3#gZ^>O^BjuBEw>S+yQ5ZvrHMCc_SW=urDhZNulG3I-VL@gHahwNmv4!PzSWW zBB77&WWDHiz%D|BM29`xgPBU{fHvZ6=MfN`17r0#=|td416>O(-~Lp+r*W4yljvp>q# zn{~;aF8lIa=pn|cwc-O@+(avUNE(oaI%abwa~YYR0^6jG0D0U)XQA5D#@nqlshAre zCQbxI+GHTGS2!nt#o`+fDW@$?qxCy&=q=wRL43Q}bwF*bQs3|K@-Nhl6XJ^5l)ZHR z6{$u!^SADKO`{fuceiXxqT2f#uwyuDJv~uv5x3t9EmI!qOtL~K-$+#)=}g;cgGId)sXq#oBV8=%1TrX9UEpY{07$s*ew|_(&V#{>6Yh4*+7tSGclnqlE!jfUg12*N~`~#+&h~+;0A?aoERx z;uNC4d4>T#!5CXy6xq93ei&NrsEmgHsH)NcWW!W&VfrG&EqJ16y(B{steRPoH-(uM zQA&?GlvslC2db;O7VSewdiGkNV`-vJ7ET;ASHD%!{~^q#46woxQ;Fi|m@8c40DUgX zP2p|8HY{@UV_-ACQIw-&P&zm2B9DMGNP1C+*t~E4%#FrHFZwAPonM_4Oe5wBIHPf7 ztrm_dTNPwsn2bE-@8NUe<5fQNA38`pPHJu6F&v=XN19((>uy8X;_u=$V6 z@FmQjscuBCbl^zt35@Ptul(g2lq+r`PuL5|sBKgeUrY}TtWFy3rm8l@(#AC7hvOA4 zbsY>;F9{e6!19AxXoZ2?dlu;8^CrNjmz>C~6>Duj`Q_D_GWbII3{a7DnQ&Tc1iF2y z`&kCwL(mIn8OPo-E@{7UXyP5g2~)^cbFPb}8_!SQ_mA2N_O#HF8Plj^`?u_&T}Sj_ zL<;HqmRsTkj<%obS7IGH*dk8hQB^3)B@q105B&Yd2})=*YpiBGly4Z}1oI(Sd)N^G zrg1`}gkAa}yg|ZoP=)M`P83!%zywiO(qpX;*P5ibsHw9Z6!QT{Yo+Y@SxdL%%EEH1 z8T~?h+Ag;l0040YfJQc@qPD(ell3hi=yP|y01#hRx$KT1m2Vg7udpjzy20v?4oiI&f5Jfzm!?eVuXWqyfie#WT+8M z-1P&ybaJie=9|vCKd#ab1t?u4lozZqD7}ZXG@K`q{FltV_R<|0ec++QE^?Tp+>S%isA4dg{UY;bj;V~G9gEJsR7aMedYSytT3UlIH zM{{U0?1zzw#~&JQ9?U4?@B_+U@sBA44HoE0)s%1EmHR?IB1wEFO#locd+a5=C+l9_ z4`KIAeu=Q2Dx9?>E4GJTxcJ2hYeRa?X~l0Yk(*S}sn|fy=e)afOU_sq$V+A6(;NQI zONqfyW*^s|MDt!oP4)q9Pw3Vv4rY8^7m7P3!{y|RpGY5{ycLqa5bBX3@_rVbQnE6x zYM!4&RLakNDoO*WILog-nseL!W&Z(j_$&LNASqU>*=LVmY_w5Qp--T?IhwJC-V)5U z8N-oPG+e#yk*#%++x!e~6UkC_s;{o{q1%MT;MVIK>@xxcOX+lxVM-ZY6xbU-Th>A9 z{@*!$6>6A@2NJ@!`^Go*sV7S|J!DFepKWq4pF_()qT&Exp2;=)+x$)p|D@8eQzt0iD#f1u{2^qAEEch7~YJ*m`}XE7MJO(U=eFH|0((D z9XXD>BSFnmZS~BivNMGQctWZV1$-Ax{Bna55mBOt9B1=GD6cQtpUgfItlefKRrZW` zY{^nOz&y0`Is}*gj+Tviu{bz3Qk#^XDVBJreN9bX?1}ei9R@%HjD{mvx607PwsFCH 
zH3We+M1~IBW9^j&3jISS?)o7X5;36v&6N^>&2IBIAHW{3kW^?uV`8zaJ8{Zg9I<(P z+0vP@!$J*~r%+ZEi2pHzzr5j^ex5~O7>VQtB zFQFCKC`3s<23!M5w!QP7N=fU%I$is;DJefC$eBhRWrj0$Km7Pa@{T>kUS2V52LqB_ zpM)q*?-fRq{chx86ovMXmcd_@^5`sbH|7%v{s^A$5x8{bhe?;@8yClhE5KGDN@s_6 zifd(=2M!E<;47A4nS-wI=n#%8 zA&Jjex8UrC$Kj5Xto$b(3Nu?yaY8l9q^?bjub}{B-KaZ9Z3#+8P7E=30ChY zl+DhDj1>=%=?4^oU$?Ihi6jicN2vb%Sjv z6lbLgPh9iLBa)di9N)0Wmp0K$-Yo;o*7B=lXzcHPNSS#d&v5+7&ULsTQu-B`Gu!%C z5ebsIdv27gkVHIUlFE*2vqXky+c}1zaBSos@dodI029C4z5~7h zJp`jS6GM38b?C>vQ#!{Rnfug>e%t^S8=S+xs5g!d^$8Vw7T&cL@^! zLO6c?(SuBjTD?PW<<%n0c7V&r8`-={u&+^5@1)^4nB%u)@$!Krc?wQFqJG;l*Y6#H~lJ zQzDDpMruoFn^amvr2MR>Vto)837WVMc&5#4eBsmPnIGyxyS`V4%>R6+u4Mv>llef} zJVfKWiyraHF;J$(e(nzPj~0s{m?` z>91V`N6}*3Rr_sx4eMTUB=YLtihF%G;AS#Zk@lVycl&OIRHw6>Yix5n1fJ|)*@jwh zr}L+>^(e-tFj6YwkhA|e;AaQ=D(GUiBTx@8Z9FJYhfM=@ImspllOkx7bi55(Ol5l1 zC7ik&IS6SesW~Zl*XfHSC!odlxktc7ZPJspAtR4BuM)4qu0&%PED7&{=9%s=0~WB( zNX-o*=}XcNDk2Z;xdkNV@ntp2`PYiG`l<4Urv_cvSrhx;pd7VZnmmh4iB}LEW3?4W z>?YYmbxW`=zu@@$sw7Wsm%QY>dq3z-ry%B8n%7tQIPNP^54hL9`jQ(1BQJL9t|BMv z3>D7dI(csQ{KpPGohlSixApXXiiseYMjeKJIkPYsZpBd#ZG?bpfZk9b=WuyZdH#Fn zZ_+`%=%CyaonW&kQM+M??Y==(o>lGY;uq<8w`@roxBiBd_vD3BrN006oVFGk`FH{x z=KC!w8nI-=-0znK^V+gmg?(?AD`MS5?DXKUSU(*bmm7<6O_~Q|B8qTSBTU|7J?sGh zrj!piO5#TneNVhwCbB)c7onq8Kdhl=T*ReSgg8m{9Lgj;!Fzue6R&(d9Ch^K{s-q` zeIRs-w3P9BPi%zinGCh_@fnv``ln>8g<<)B8YRe+Y+G2=3Br--4#(qKXkHf_ z#8bn#!BLCf(?J6KfCxyH-DJfUXc~u%WC~F%uwug4FVo{tD{9vlwW^EXCRdTu)?K@` zKoxbYkm{{L-ZT$hDqd(M1%)h*wlY9Jhm3nu#8B5>rUdPXSm7N8_4wgy=7g?>vD$a( zD0>9Gb&vwd<)bsmjS{i=Vx8pj^~|*fRG^t3uE(=yNj1^g89MNnB#*3)0bZ4Q1d464t*J2R^V*aeExC zkfflntaDuR`uWm_W%{L+2K*s^4vY_30_u)LD3CG+&4ezF>I)e(z5r295FOiNo0!p? 
zyGqU0dXl}W`11KN(yL6IP$!}R=$oc8CDJene|wvXsNx`XF*1#(U#7b_BXz5kpg})- ztnrOv5=YHebwyRAjynqrH{Ell&n7CertDsXZBls}(cE0>sR0Wni`q##Vu03tgYlTO zy@MdgB7cvwQ}?im($Sm0ge!2$^$ca1grDoCZ0G`E#%m-GE$1gEKEa*f_&IlB_N#-s zfoJxUTuOy1ET$H-kfuEdLrNAdhVs_S?B-@Y(6t|!Z+iZ$4FqVqCc~j%(hdB|A8%3{ zcgdc+R;J9nKtw!wKUo9Xkyk4JW=*P0QgLb6-`Qp8xiF@V~<9SqW3)J-jb3T7425q`vY-Tlpk^z zs&w8Hwa>8L-a&1|tyCBJCVNyzT6sKXynVNtaVO){pM9U2cN!Ea!8ISmtm}(@r{Z3x zSDik2hG>ep^+WsNmVANEp71GHs9f2ra@bMdj1(d5j}7K28L+6Fx~V{*-xDW392oAt zWAJv$7RZISq86fvd=NqnYVPPjH%K50G-QT*#Gg6$BxO$dR%;b_-L>^?fqLWC17fu^YoBpc>5V&_k!_`Bd}4 zYiT#n>cA*=zWfjb7x=dlD*X+MDz9be$XM?R?mZiqg5m@Tck0tp^U=X|--p-V+d3tP z4|Iz(%j%!SD~R&M^(DTMxAUy3PCgc z4$A)hS7Usf5r1?N=o{wV!FLFpy7r-DgijC8d8M;P32uuI`oGf}M@w&Z8)pS7G1+~* z4pQW-ewJu2pLaI?r7Svj0HbuY(5XDYaKPxTGxnlt_L9K(w8HqhZ(f@Q3metB&in`48nr@av*utpm!ZYrZUxQ~E7Suz{#cgBHXmUmY&F zTHQmt)pQg@+J;ufsWIbA@1*#1Wt^Wf#T4=J3YVIGQ;V;s6nrs1$GYsSmj=biZSx|# zrlK${n0r3ZGjm?{`&Kp|mXol@=Y2J_p+^tWEd`aQ82^Ft|0~Lm7Yo5Ui8=a4J6om6 z0-F*%w9esJ^0S};0MZpy!wwb|1i~(DOjVW7zs$r1oG)#ZqX2tZMLU7Lx{J4O7#;5o z1KNtiHU0bybNbH!-m>71w!J!qnb0F9)W;-q&5B;4v4aBbq}PO|5oCetPh8T)aZK!$ z;mhd*&Tv-1wVPQ%se7G0Pu<^An9IvYp>}iN#`X8N3c#Am8)QhUcTxT9qhR=j%PWF~ z@v!1Ulb4z+PmVw5;EcTNcy3@a;7RA0B2BTt2ur)KWwNzQmhUnkC8p+Jmv6^)ghV($ ziDYcv3vMnT!q8YzebRO;728zvGM~G;YX~zq-cV^e&-u~;JxQe^t#Uc|nEtc^Kf^ce ztLHa4&#z0@@xwYZX*$>M2>uk40YL1J{-cA@Wyv%*LQI-_?^vpf?ruNr8zRp&B7Yeq z={tiNo$lF{+VH!d9APeQWJOpPz0k3Lh@ZiFQ$?Ek)aZ@%32^s@#`K(zv{7v2J$5|X zsn#er>uQP?2gp0UQ7`nZ7LyP%@W+I2C`h)^s+EWsVQAE%oZ>v4tRt z*orfhgB9IT{VTkkS-C%D#)0O6EdMX+**KMg4*og(5z#U1QOPX68{nIa;)6UTCc2!R z^R-=Uq)Yj?2P~-IXFI+0KislC>5bY$*kSRAVg|Jq@mx{^%)P(I#QWF2vGfp3;?b>t1SPXL_$mq zp+ET$%NimY3aL&LKLT|Duu2=yXgpA5mMXY)!089_9vu!vBiZPt09APJ>n*>B03VWZ z7?i##@-5OrsHx0u&o(WVztKP(QWZ9sJDAwZ#j+{a)M&$lw*CaRlC%Wy`e0a4DVzDq zQzxJ$7LD1dmng?PePBF{GJsI>0fWrnF|Y{S#E}j~3AMY3>8*eW6C=GGM)M-y7h4G? 
[git binary patch payload (base85-encoded delta data) omitted — not human-readable]
zsm;X+R*^H+0><lipJ;_3n9H;RRFmfsd7Htxno%R!dJ$Kao5Ka3gCcfF7WX+P}IA%4Pb{#x_(0WyaVU+&gsCBcTn) zH<T8*6(AQ{45bp*lNThq3 ziD2vzoh1kp_!QvlNNKDcSxms%&hp~kqMjfOe~^N_JTjUF3X^CwRa`FUMM2=n6_jAX z@!CP*Mo*E8MQ_QjRM-Cs5&vx!NV86d65&w=UydKx=SavBVo;A1ay3eNAuT8YTV+Ce z(`fID1c*UTUls0Q!U)vGlVD*$Ys1yK^-{GZh_{8=$l3vA4Ji~l44W`N1`{<5^#fld z@)?%r&!)-r{{8!s^K%~_!*UJTz~L{-K@oxDM8GDsyx&7ohzI~Zy%{% zAHPHIQ0EJn*hjOC5R z5`-@UsNI-S31G?;P;3WeB`RD?WfC{Yi8_ zc2mF%Hh<9mp|Ku1bj<~LPT)+;12n6zY-Jx^-cc(|L_f|yJ13VCxQT3%cr3q z%V!?Q3mFZ}VO3R42fGLLiug)Hgh}H}y%wj!7R3DjuC)>43bF0^4BW3BE{cMXlDjl2 z3xoajAGd%?YD%1&n_DVt+-EpoXufsJ9E6aK2mx?I+vWS(**DhOTi+fmmXfS*-ew7P z6F|kDjRREf^(viHMNRSeB+OdnnL{DqhSOLlpwogh{Qv+~KL=N)<&W=x5bb}KWkre} z5kX5exu6}|1!&#}>h}h~v=*?YC1!zm;LAxvpcvnQL8BW1)(vvFFA~wDiwKyJMSl{O z%5N`eO5@#tGh5y{0wVj9-vEO#1k%W$z1B_h36{wK3T#8-;Wj($K)`Wq7*Z{B)!n;y zbJ>uV0XI)ukYwEtvEmZ!y8|w6`SBujq%mWwhXJ=Y;X8EV=J|zTOCP4Vu4c%sFw~*86b^c0lRd=wjcN}v^@7hks_nb+=tK_ zfD-w9Y(lz%{VqG`BwC)5ucsNBYyY#eMc^VpMGB3<{Z!XF?cB|7b(N&s#MlL1mexKsAm)@^|GJmm&2oE5*^ei@oA z?Z$eHQxt+Sw1!T7_TQb+@5UcK+7IF*^K8wU$6K%VyFFR281~G{6!5a)oYy|fI*MDAQi{79n|5n8R;|)k7;1xnVAH&}M zOZ4^EpT&_u(L}?vq6(%ZuSy^QQKwAQ)+^r=3?bIScOD85nz{`C!uIH^O-<3 z>ylZ}S4q;vk42EV8-P5Gdd)dTMzKaOQ>33mOk#k;r~stEueDXJqoHiePV)MEMl)F-7O9Bx~ACPM-P0*}#n=!c%AyZYG>$jn)7H%^9*Zcn0)yO0Ud(6fqH*JKb z61{MDukvMi`uOp4hh*&5s9tms`+Q*Q-@S^UFWjrpF*P7wk>G5p7eBiRfBkRkQ*aEr z=G^1^!e^63z(I>8tkw?gKYTc_LA##*e=Vq*7OeeO4B;E+5+d4!8bDg@F>ACqn?37s<-a}h-+#snEq~!?v7uUX z#LNPk_Q}o%jVAc2YvQG3t42{J043b&BSKF)hI_Gkhqm|_Y$^Oh z$9L~2z~}@*P}upn_re`gfDK52ymG^9>;1E;tZQw*x{XB-ZJEl#ieTN~-)j?xXU`F* z98NPzFPw8Wdb4F#2&X1^KjdJNriuNpiD-d;oq?-~fL$Yb-P`Gpdd3Bdj zsQ1VI3eU*c^nq$D^q~Pg)Q+M};E%pc?8<;Eex-F!rOM+?BHDHuFv8OCb!5Xt#PK3MUT;-aO>flv`CQlu3jNuMj?o$!!EL0A%pnd#T$P~O!Vg=q;hc}x3WZAgg=BJ}78+4S1a;PRxURM?PR#5^ME#z{f zldVSZM8=sUuCA`rc*%`jvqI|i&y%|lE*xW!sav!8R(z)>-L;E4-7zVSC49#=yPh^> zNpKV*W(cw(tGzt$NH~L!ow**;-EG43SMyd)v}ko9f%UcZ?2dDDT{3eFgqtnnBp2b; z+=cA`?&g!ZT56xK9;FYUIf!)ONp}ot)nkaOFbtEs>G9doRUo(jhf_;V!7fc`s_s0p 
zMpzZJgH$V<$#k>PuqClDSN`Sw7P{T_8nD5JjcLK*&DLEPs?++zG1D z^oIqD|0Gwck{ltyDbsu6TKi+nbsqIgJmK4D&nB4&snb+h@vmRK8hzS7=;#%`<2~7O z$45mt*Jt;X`4$x}Q-%k;y_hXOT*AfS2Ka9X$`;=gxjf44$KW`>(xi`Jx}EMG+6hta z`u@G!uYg%~b$9oHdRA-D%Xu$tPaH#(90dKP;(wbWa1wdfMNAg}nkbhkUb;Y+*~)`F zraFTz`-1$!ru7GQDwB8Jux~sa(RIq5F*|mk8vEZ-F*cKehrBD=si&>(NaIK~V4;s5 z1r9keRhbf8G5dC7ZM=V0@J*S?VCoXc+6C?+PIVfO-v2Cv2o9>qR!Lm_)_oX%F zu_H&ERHZtea=3^Irr>qB;C*A&g%lf~CTlNg04;R6D4<*La zIoQnhaCSS^E5bx@BAt`)iEd`wFK_16@D^WvKxgu#oXk)N$8m#w<*Hlw?}JJizu-eYL1t!qJaTetRC9h^9>C;ngR*gdbX zOIOd%5r|y5@>qJHesf)9l7z6Mv-<8#!8w}}Sj+!XQi>+2@;$pcRcXwz=^|pi$YLh= zhi?u*j+U{s@X@jK-k#(f5$`6#H^LiMgZEf6Fd9HyF(0D9zALt~F4z`wR2xREq9x$K zn8a1H;m8w1Q8k(4&0Sr61rCFyXl=>1oEM13}yiL z&r|<-*uJbHB>%G`*fLL#Z+*`sm_zB(-hvo)j>BoHG6`tzalU&0x2^#JAu|ccr$~S6 zmSkOLa`7HJ2Zc@GeT&LjiP8V(8uUCYf<~?OZCNF>e=eh>Xig1)cY=f1d0K9PR#j%} zE~A{k?@Tu9xy67wj@fK~i-qH}2z0A@p2yHeWdW2~Fx{qyFJ5=IaqV1bx>y0Jg|H-Y{UA~83AZ{r!&=gYY8 zdT65}@HDi3JhTeA4o;fFq&WOx`4fgmB*_%)J;moG< zOmdUm8VlW>Z7Ix;b1if%x`z4RSt{gbldXQUFbm6BB(sRopaIq2Sdz zAZf$>$`cEHeY+}q)Thm}rX|18UC;x@g$wb*wtGVEKyW#B8KByWmznU3{_i0Zb z9!^Qp&zM+D5?C4ycgTXipKpkYX_En_!aZd6?Pyo3w7l{6xYrR^TMBYyHrE0m4tj|1 zvf`$p*l^=W`<{wO+*W*u&n6s+c1md~r0i}rKTvg_XPMX1h%Y{Rlh_d2Zp_|ukMLiM zKYJL*`r^?>N3(Do+^`x{$yr$1Rih4!8~!-Yc_$*g9D}_c!l+baRu&@^53>& zfp2pbwSJ^=Cq*W^L&ljbH@W1iM8x=`u?qewkZ^14p=ZG?6mvj(=pmiy_5KfhNgVh9 z?#e`&O>x)yb5b)bP9)j-*>5Z)U|HZBe0n%fxP-dzR+#7{>OQlW*-3yuzr4%-uEbgG z%zIN>_Vcj^BNg6%&hc~~A@V&DSJqP+(#-sHKUtkUrqXI;)%%^0rG|W*;arCml7re1 zr}c~x8BStETDUC6C$1=V;B=8JyyMtKb9%ns%`E13Xc4{UZnHYtnEJyWN&zNKA3z;@ z_g`skVYwn~_Qf1a+4UV7xX;^bx@FEQ-;rp@)o|6A!wgP3hcAErI+J4cRdlF1JG0GQ zo}2iCxGFbY=)KjZ#^ic~mM49B?-R^MTl!r6)q(T`9 zkH!Upl-fd@{rSmJ;QRsAO0cV#+wGSGf~Aai%1&IIHq}UPus&UEgSR=dNu>c?{uzcF z+wpGCo*YgM0tUs$ixIzxIsup8^Lk#^ zIGps98+BT_d${h{cqy87C%S2`c?K0$!rd4{9GbslV)PRz^B7JF=Uq3}`#_9eTl3@; z-f$4(nZ3Cdk-F}_)y%FWmrnP#;cn036^g^$YbTj;%Sx1K_uhZ5@0sWKNNM-OWxTc4 zS6#Jn(n?tN%1;PpH{!MgI-w+w1JB%mw^o6LdWGK$ZQyy>g-m#^^i@UD1SC#S9CF*o`0k;QN$8O(J}yQ2Ey<@6qE4`+_N8@uTM0si!U 
z;oiNw%cu60M%)-)BWI;!f{wHQX~m`#r${J31rahaYRfzIe<9?(cP~jR-$wa!u~bw| z=EE4(YsAFFSq%nJ12Nsjf{u9ahG_h&nS(XbFRph#ZyafgSM1PR;B6qZ*d#JAeFbCy zj2_J7&rZ~9?3EAMOTDW4!W2K#XLC<+!(vgDk-*jnXW3qDOE-0U^9hr_ji3WJK$PWg zE7{XsX|B-zqgr>;JoiJ1$6LvJxEtCn@10b0i}3aR6n#$ybY6l59pW0-&oJvgt}C;0 z#aS(1v@vm8h9pUEAeZqm`CXVxx9+rL3i=p0kL4&&|60Th+7xdWW)9+;Jy)TLTXDNz zoi2=LfT;~bIM)OmxfF$PE&5CL#U{M-Mxr?@ zk#Z_FY;QCDO6h2sluxTxrwPy8Xz%vA}s-MB!H>D3W#Ao z?AYC&=B)5^x3h{=q|of8O`zHlvg^;?k1v|%chtCV$lQ{Bu;e)Tpwuhd=N-Iz3M0oK zESIM=VI1})CGgtf;`i8P9=)OU&gn*mb_e_3{j-=_%NXon7~_1>_GCffxYhJV>JE9K zm6+M#4!HvSP4%46LOxwV(qywWlc9y%2g7AmoZ9IMU*%}5U~H)ra!HT)0ZfMGnSYYj z$!&BKOu~t4H8-i$wJoiOKjOTsC4;&JIwRX(_5=*O8dAB9lFPTvvcTs=EdSvMCyCn07VNhLVlwHbJ4herViHri~7AIFc0PvSFjcJ16{*^jZX;NUxeD9V?)S2`kf zJ4mFVA|5%Hf8T~vA@0M3nM0Cx=ex!+T&Eh(NLTw`h&P^}4`~pwXj{Ym+Bq`5? zFN!pdlO{%hfJ5((C-3Ej-VLc)G09frtOB?1Pk^3ElulAf4gOCJmOwgg;`b!u+oRa_&W#D1k1^doA=G z1`EO3Hrs~Z9HO%I(a5;X_SNlKdA22Hq}cLTjTOse;RS1oxoS#-kR>u+Ep=m6wIrRv+1yCDaDRUb^MmI6|)DWS?g>(cY36;y7V1vOMc> z<*8s(H+z{Ge{;o4<-Y3s6Vi{CJ!Z4313Q^17mn%^<96(JDlJ#_#wRY0*(Ah83PnueB5F_ONXg_Zv&T|yn;o7?7T~-UdW6qHgueUbp%{96P26-v zrTIus6^G$BVT^+FP`7#V&cmzmF!=jQNPS*gKF8fM(%v7~f^B>izKtulnC8Acb0B3J zTdeB+qqO&*C<2S8pjutPJ67%7Xo7PHK4C_1+W|%3^?k+sK+VU)vE*hQ+cQilw+~SI z?dY)in81-SqNYM=&l}c_D~(~#nGWaeG<_S9xaf=vz_bQg{!9`c(|Lcx_FMTwjeN5e zlZOXvGc9OlGshBThIf;^W?F5?kn#-SF(i&2JFP^K-HB-*F;MBk-0(W1!`GeoqFwgZ z_^3g^_NKVf##i>1DpH-T9b+{?oaRrD2;_Aqvd4Xf=@Vr$wYXmVN81BE%6hUi!D6n| zckbW0&o(~3`)jAc{bynaTfTPomv)-E<>orR{3+%}-MYqkbDxDp{%luQMz?@`*_7op zQ<3biD)-RYmicm@I=2$90Ljs%ORRUA*%}OAZYQK7*zk+=8S(!9LdR9&>88|vOp;Fi z=VN4A9Hdx+4ZlRx7(-9tIi>0H)FBoW16X!gk}lh`w%u_upNp+uXndT!_x1hcp+brA zADL3@HR;z9Otej#8LG!|>7UJNTM~2=**F4(9#{!~#*TIq$1Ms)Hfb{^6P}T z^!|(}OZ`gOE@hLzF+a9D?BPqxXr!<}JN^_hM+Z%$QL5!ZriKU2>@K(z7Kk2hGIC1DtU@NioAh%i-_)#elQh**XN zU-)2Y@Ajkv{={D~r|h=Rn&H}9+g>U2>#9_n%sq~qnQ;`^LFp#t^!^hjU>o08Ov0gE z&l7SIe3?8nG&>q^=f2~ere$k62)d*ar*1tFdUhmP{Fa2@fW}7^$)vGx*=X6{& zh8vIHv2^;G?G}6xsefL)dUV6VhP@TxIRR&T-$|+Cy1YL4AESgt{SP;teIAAzGD{#= 
zZx1g&+WFuVE@(A)S(1IwD~RW0e2PXBpJ>Fjx?C zeE;Fp_=ATAIbyFxeKZcoCh|v01*quiU!cEea^bx|_8v+C(QUFN8gBPx52luXej;_~ zZi(G$-=6flO8!!Qm9m$gE<|-Y+A4??Ii~i$`H%*i`m4zwWVOY0_2<^=ZTgI^oGmG7 z&iwJ9{!6(#^K=vKMh2^HR67Z zT28TWB8e-8xiI=LS3(2^zox3q$eL?uSbI4>G}CJJ#9Za;Tx`4lzPnSkfn+i-@Nxdz zmC5!pah>8+?fJR)$@iZ*fp>1DcqW~-(7&Jhu)HM4^$XXj$EH&^LIX=~b6ty&xDbOs z_4C|LhabBLSXJ;|d^VGQEYeW^PV&emL%L+j$)dbr|a(fl0&{W^Xb?=Ep5GF z4fUE5k6y!X^1>M(741$)uP%~#DATevm_Mk)X>+s_KKbp9f4mZTrOy(&O6hM)cW~F{ z!~6^zwp#1QU-+O-0rxE5(L5${r(+9^Nz(J)So7924K+LrAC9s5`rvxjV}Iq+y51d6 za00*Hrb|EJFfGw-R|xh0LWao_X!H-P)ZFJZS($#AhxAb>b$zXE*pA`*z?zM(Hyp z;bGC@r}U??uAv)x&zUHGH6;JRzW%bA;(Kp=@gVwRfqSC&L}d^5XZLBDV-3%>R|8XP zD2G2Zo2kj^7CK*SUjx;BQC`jD-0Tz{0A;vh4f-5a`dpV&ZwE(;~wz_9X2J=mLA zvxJiUoQ7xIDxD|4`5RUROZdhe3~w5dF)vftp0co5ArMtFv8{_tnd~R?-suee@7A@x z7tIz=Tyei$x&JoBrVW9Yc&I|r(b+D4c^|W#8G}Q3y{itT`_k~Jy8gpJFMDnV7Plv@|{|)aJ^DjTFp~Nl!Tx$ zv9}_=`~e=<`PT4eifsMKa^te1k5+aQVsCkDxRm3rJ^Ptm%I%mCKTF3G!My8GyBiVP zm-m?qi@y#W64bDIzua^ObIe@&*H^7 zeRMe|hAljpy1LJ^5dSpxu&Wy(Ar)yRahRCbr9CH}#xsxIqv-ZL=*e$#HLH+gw!_x^ zWdEwo&F6P@cv-aD-9-Z~2Ea(_x~J7~kE2*s&qlfS$B`MlyQwOF>>ej3V%Y^tG$4`Fjz~skkAB?fZ2UD#|MH=4qYKVyA~!Q<`q2@f>wC_?go_>Vesg69=@jtW->$r)L#&EEkv}<$UGXdapb<;fiOLUp*xqbyt*M znl}Gh%e9qc9em*anbUet&rH{S5W7mgNF}mDx%e%Gnqu(#lZMazNq#=~qwVtyEqJYX zQeP&Hc{<$yuHamn`A_&JG{EeHSc5&0LVF-z;Y(4e2i`Qd853*f3Nv;aZ*o!_L!Qk) zHxTy>cV7skN>I=JVI*@&N8)Yc(4+X-0o=t8Ed#F^;+oIwnSOV((m7vq`Ks6W;WSRw z$?;CjZIOxD^EVSb1<$DMZJ|lLo_%YVMu8ngQ#?~wASJ!Cbsx5Ea>L za{8Xk#5u%`dIYLgHn+EbY_&jPQ2X(pvZPX@Oz%HmDKV<$+IR70>S83G;t=;un)H#z z#}&%#BD~X)VJ+r5>ee5ta`je%x0{9t%_&um3KsCaq$W$MYAgH7==#m#>jyE>uD4Yw zz7$Uk@82qoJkmBzYEbrjXteQ6Z1`F2DuF9zgoih+Y1xDI$&wncR`#`B@2i@3=6amf zjdJ}q*13|IW?H)ay*sCYldu*;j`7s|RHZzlE z(7|`p{Td@H$jdz!I4*MOA9wb*SniMC#-AkkTId^kr0LALNjH~k7tF^W`tT^$>*|Pl z4A1Q>Q8WJ1DO~G+JUjZE&98W}?OlYAx8n_(P1WR_ohjXwpR{rKm$6^}U}yhh`2c&c zScouYZ-tD3XAsX7&_YX62s&+`anXEYs_8|%%mwpyx!X%~9jb?fjURuyNa}&BOdcRL zi(6n&{Fs;d@z<$s8=~~hM|RW6PJT!rb2kCoSH_gPFdoWnu1le$604v!J 
zSqZU%g^3TBp)?xIF8R!#$%bE3hmw}Z?`>vcLPoAFe)q-O8op7o8WNJPPpVd?zlf|H zsC#q&a+=r=ChdZ|2JKk}=??iV=34KCcPxbNDQ4QkIxbt@%j|W#Y_dz#`o%2+7Uw)- z(SZEYgG{O3<-RA*QnOrtxcg{ZbjJ6gRP`!MT@RPavb8V)n@>`3B`rFoFqd-|wWQIwy|hb_eE6wVH382Xn>Kx2F`tU{2x3Qy@IKALa{H1Pk(uc^qmQ za@?&K+VwiDol$zW!fHc_=V`ooO&t_Xtgg33kxqP_1k|_NzSBp1_=ryAoV>$ zlRSH#h;@_(GywZF^3e_$*V+N{n6CSnJvqp3Jo@k`jG@%DE1 zWsE`Mp{}{xpM73_rUKi{lyt1^Nea=Dt3vu|Yhr_WxTvuzYtxPIWx0VqCk)fFU-D=b zeU&ycZtzP_a~cr0-l-Brqm&Y}>$6&z#E)y&vu@nydtg=QV4#_i6rFldPARup`;=Fo z*fGhU`ulk;^wlrbkmD0Nnj3Bzr+#@xNoW!RvVhleyK_I-Gv1t-^0ki_^rAiQ`=N*~5^R*%VO?sRQzj^Y40OPhO6cWul@= zeoDZK%e5N1C99Nbs->lM3T7^&Ntz(D@v<%BeZn+wmK#B5#kvgy0e0HD;z6si4glQO zw`VY2i;|I)4dn8rl$V#+%zbuy1yo@aL2{z$A`k^=T`|7f_x@MI1{RJggG^*<;>v=* z?lgRj0;zKpw5bwe}Kwu$e8{W8rmZY`IOiFMR*f##(8mlBmqp7V>f=TmNBCUAvK z(=ADB#*2f57Qx(%G?qrS?}^ktIrX&?rt7N7Cx3SS*~)u8`leXUfTL!<)^5jhA}Omz zFQ|?YWtTi>;Z#06xOdd}Se&YyHp|QWQtmh3L|pRn?CQBb{R8znS+Yev8e0Bb{ zBWUEP3t65b(EC5W-U6!1w2c}@L@5c8kdOvJN*bg=K)PGH8|glvgo1Q;gLH#*gES%y z0@B?rob%s|I`hta-~X-UnuR!v#krsRxvqW1-g_y6oJuC;bL(Oq>n$+DOupr;Zvx8E z5i{Uo-8wVuppkN|x2k8vwE{|^8IUw*Gx~yQF;zx3m?n_sT8V`42h;o;e*YbYk`n|G zOm$wnYRz`b6h1IWQ$wTXgrW5NA!A9{%t$@cxV#2>kG%%4yd?LWNSvQV@x{7%`HGau zA+$HsqaH^Wb=aE%Ii(|HA59IqKZ=6EamE0CB|(#7jj9O-ayMn@3Nhs5fL6KC{jTA- zJ$PR^xywHUN#i$}4OTpfX7wP>Vo?8KE4ewmcQaXh644~~c=w|)CWtP!&-)VD3+({U zI^;wg=v&cfR+)*38`W+&ZH+zT5W%;8-)<-F(=|LSS=Q60d~D!6?(q^K0lnu`e8_-1UiV^0Atdv zL8-Qx{;(w32az(^KV%pzEtQLZ&di|O=gw}c?pzqvJf$dKiznGdYS?F@iy4AnkAZQ z?)&$Au>HDK@Q6w>JGaBKeRw!*sXN5-=xF+GCnzX3fuNHkun|ant`C2CbycuiYduS? 
zbpw2!6&XbSM!WxiHzTCgHWKT6VNnOWp%?hPweVV1_np!vsm(8@!Y}({-f8K<0MsSR zlf?|CrW1(7;oPzUio~?`bBIDOj(s$yX3Vkd&z9!g+v*JrhdTX0kiIm*NeT(;Mp*8SM zk!LP@I(a#TCjEtgAc?mMlrmnb0##K`eZz>rz`*qMYG;L_4dc}d-@5`G5!tdSiMKAS zfh%RsL{$#5@PMS1L8n&oDcIxYs#@mF-a?bdQvs5vvdQv2uYf^uqs9tgyD9Z2GcyCH zzb#;rSYHKn#1w$8{1`9R8CC0WV$y7wjHh+|S%&j>h+lL0e$nG6f4y9-#0bO}^BD!A zkn6P+7Go4}Lwkw8=}G@im`E`QQ%s)HufDvWH3Mj*3mQI3U$CLnKHF_{UX1<$P(7v{Axu6=XQN8 z*)<;=J(PcC^Zzr^m!$|I%r>MW{>J5gHtc@Zakc-&Ci7`EI(J6_QEL$JZJI7JHC9FFGGm{Nw+VXSVu<()PmBLQl8Lj?kC*# ze$h1ifQU<_YCzXsz750~pPI&SNIppmiNU8URmcjME<-vq$kr}w`ZnMJ*uh(D6TBpV z&2z-p=)6my4Kxm#!)O(=Nr^$jvM3se_5iW6h!?pG$lhzJz)?zo#FOMwL+?BYP-$4F zIt_L|0|jCXpz{&})^-X*lSqZ&FBAIXnX^2HlDR7}4LWAZzoP#?LvL$2sAXK}Z3piE z6}1*A5pHFMi#or3Q~E;KUv^Xrh=)}`4ddMwurnw$24_tXifAdJF#H6agkKu?S;lGi z7T&#%y8vD}i$B1jJ`^ASrU6ko8s{N5B+~fGr(Ngv=T=q?ulS4!?D)x0M~?)By`xn$axAk_0S9 zNbEq9gh7Pd@zLYC%DdLDnP`V>NVe7nkih5I3? zMX=|Ut#)0hQp2qQ$X`}<*3z4$`?Y3Y$$`2NAu#(;)}>l;L%UTOC^*3*kW4t=U-$+V zw(3u5Queo?+KudegeWZVlmJwS#X3|Kzr^>63&U! zmq1x~{#g#cA%Da1+g}~C59z@617JKn@_>K)cge#Q+sdD|D}Tj<%WVQMBT^p1g$E6! z)Fg}Qy#wZR>*h70?T%-^(>bX+a#{5YE&0kM2hph}?gkl7Dj=tu3(UMw#)0HlCeS_o z!Z-)?v1E4&(il!Kh`2~=4gG;-k?5+uGpsZYEW{~*Q&FL61}xp3nv{O5r?T`~rvjcA z^2AZam3QynT_xipYK><$G@`S*iH|)3>u7ng3|a+)z>kHKy_k~x<$KcMLEQbuT%gE? 
zf?{05$Z4``iy_vRNaO`Q+fCZ}AjDJ3+$BIaML!1` zs`TqwJOLqc$%tn%G)*Y9xYS8Q!r) z(xci2f@Jr2%5VG+ow*`ET*n$-iixnW!o z&5#cDi_4tqCxVh0W<%gnrzTy0Mj=EE##?<&hxbVcZi?`uqFo891 zrpoR|dj%&P`d#@_9X=-!AwfYnr`;J9)w#kD_79+xicsAgmUNlt#e6DDRILIo)rx5- z!_lRiBEQuqw2zcir?I0$p!xw>LG}X{6_#d2+@1Nx=Z4)6wt!lQW=|i42C71o!Aa(y zt4FI?K!irb%6_7QsEU3Y5oBfc4tILl^gGjz_W9-?^!-S2eMJ2?fNgz)5Seq3eOU+z zFmaq4h)2GCTJ4Cw%YkQ#JtozgNk-D03*R*^hZpF{$Ur9m7H$}p@T=?W?F&aI`cyhz z&k-TCZm4}i74*2MRHf6n9Rh~Z1eoH?dCL9y=!|5LdyLkhLDFgb-;Ej_YTEZ^Yni%2 z2;+c2%u^SAsU0xIRRDH{F=Q*1OZYylK=77L{>qZ@_&!`Hs({%DX0QsyG z&|i=P_I+xBra2riJ67-#Fo|ugpi7p;+zQc{?akG-f4l{CkC_dSjEQo6wGZg91iU?` zBh&Vqm?aX~Xv=Hw*IX$H3*IUe)f50*&Bl>iHKI=aN7z5g9?sTSXMtkundsK^_;`#f zQD{~d@zVihSks7&us##6drnIYF+ntp%=2Bw1;!6TpUq##`c`Za+}<%D zjFIjX>t2o(Q(Q1zKO1c)JpoAK`Qtk?EkNnsVqQ7ZX?ubVSO&A_Zj(T+g#bPxE4K7H z<4^wr1kThs_2AK zZ04hwfse~-UpKlcIQMtXW2uvihtg|Bn|=vz+Za1DGIA6qL+XhpI~$XuD`S9$hB>fd8L~jt~XhDrDqy2HwBUz(;U{3~sgM zc&-p>s1m#A&zoBG6q8IC!QR1v!o|hqbkP%pvJmuJD*sG_3sZR>T+PRt?2~mNp!-7g z(O8Bg=N-pTLwPxuNrUQC3a{2-r4wi1oe`~6%h?&a!K6oA_a6mf$r}Nk6MGtXs#1Yj za2FWGS1+dvbeigkh{*wTxc_aw*(<3^hhU0)UL6VHT__C4i~(A=0W=gXzJX6T0&eNJ z=h@Emhe^YbvZaMcvos#`xVBUpS3a;q2^K6FHMT*= z_gaUK?ls|%vz>&w0EVh*_e7lhiD%~3lGdut6+bvQ!dpoLUH zbqe+!$60Jx!6x|Sdwf=EFanpXK{r)90HIxecABEC&s+7Xa@IqX>!lFTBa12eTFHlx zpYl8&RU0%^Ko{$1?V&e^{rV7t2k*hb!E^3Mw&5c|d3kvpmdu008LH}EORccK+R!X= zXfi#V;qM-us4^Z#09#ym9MyJ>`urE(PbDH>qVu)tAOmM6ib`euScmON;qCynRYD!x)tg#uov0(5UDTMFHw|Lu~%z<0&h zn15X;!M6y=M*Ww#44U!)nr{M?^on+G5F>$PH~)(_OMW{Htyr|hia*mJz!ebHS<375 zbf)}sD9+GzRvysS?tHUx+8*Ruvi{D$xDN<9z5?PR90Zj*3k$Kl`>jSU`$YDF5#Sa4 zLWmjNm|uBT|28QipWB+&RMZ{VV)-*lCcA@f%U!Lmg$3P>qrxB-M5Y?;pRSV-MY;;K67Lxe z>|OW6roYeZ*B8lgV6+bdFQqX?;ZVa?StZN1XHP)Nc2~@=7~@9)4|-nFhWVBZMbk1o z4V-EQQ%N0%2R}p{#yQ5zg;khc9oxj^qoKU#AQSje z*}X^B@@wKYTLsC>f&&n?WgN%>M^||MH8n84T`L%WfgaXI{QP2i*~40rYpxT3KO+pJ z#?2LgJ#rH>iGE*w!B$Cl&01K{5s3|~*e}rJ?^ycli;TBm8mcx2t^7I-BvypGW<-9H z%UC`Z{u!fx^ZS1#eWaxBEཽi2t>QX7#j7S7Afr5=a z_c5nRcM_)bhuT`s!!{C#%lhtz`Hg}QKc>%> 
z-3o$3@!7LQ8nG?-6w-)iD$n$BYHYk+4;Fn#($W+}mwLiVDOW4`i+tqs`y1U)CDjaG z^b?ZL)Y>wU5V5>V+`wg7`Eb?EG+v(TZfY3pO+zmAb~1=()y?AASyc_3!I ztlA-#UgHxeIVB_7xPcOM!BQH-SuhTrm{pD~lYZRE_1*UvxHZrjVEA$U?tRB2n)Z@4 zkP;+7-$*&=t)q$Y9uO!VYqt8(?6j?VTit4!(EczJi=9}0e+5buly7BAbGUU7So6s1 z!kx*=$y%HjTP#;-_v$U8G|*K>a%BiF_2eECq4=aV+%~VTHvsrJs{GGoN zAgE&Wp7^>C&U``3xULPw)4VO){^yS$KYY+pi$icdHjSHt|Iq^U{Q29Pyw(T54+irH z0RUezts6_V_L5`Z{F=*vHCi;L>i0bv%!##oU#tM$@-i@`&_4cf#_Q_M3 zL}hf5bb6ISSG7Lr6l-%!iwEeOZK{Ed3P5(2r@b3PI2Y_huNVkRDs#iNh_#Tzz>9o$rA{{{__A$+y%JqAO`uHCi zLK1-BvvBd!dO3jj7W@E7DTAEMK0I+g5r8#SZ5cN+tZo3}9MDALyyUN}MWs6NFRyvs zog@U_QL#pRp@?*T2T4fs--{eC0Rjr<4gb#|1IYuy9*+iowdI`+9X1RdKE(XjK?$eA z(d8qe5m3_K8e#wqN%{=w-e^GMoT6!E97^N!$W>0_Js#>J6AAc&asza@Mggyi79=qX z4^=4eC5yZjuJ4gq0Dfqiri^VR7u8r`Mq@ouz*gD<>NI;^jhwq4{J^K+U1as96_XeT z133c0aW0xDKLkp@U`u7Y#|v8KZC?DgHFi!;IQ3%%DpG@xh}U}Mr%hDTpg~fOhvd*c z&j6}(%7(-B=U6+ZZ3G{1=wfH+X@-+$@R9%EgF9IW>DQb=_x`*6q5mE(8WviYAb-sh zz-safFxX+i-)d=t9xZYq2-p~xmzQ9i{$pw#H0xFS0nv<(?hAv4C}4DnGw=@^xa@nYKl@7EVW0N~(vX|!Sp|L*nK zF@k$A3-cB{w1srX-}x(awSGXdd*D!0Z!!rBDpPC%^BGB1uF9pfnAspu(1aeX56fBc z6~}=+6#3hx(#{k~xn>=|{mK_0dMMzfWHD2fui1T@++KnzMybCFS#_+^zm6lo1GvLf z%h2b-F4svcfcy@oYC1qlRYtv7S7*C~hXkN?CB4(9&F#u>*aPGh{VsR9Ud4CzvM@3- z-WDKEDg$NUSyVVdfkAO$EZv9kLNz6TGd97qk$lK*KKgS0eFpAzoZc`xAfKrOyJbm$ zBGwEL-YgO#7Y7kQe-TGmg2w5};b^6I^-c5v5ZDyTZd`NTT&3rm~35L~? 
z{$}=)5d-gd6#?I|myZU_g|RtHJYF&4IPko0eHYm5xV)y~R5Mk+{N-GiVT6+LHLu99 z?zN-&-eyG5Xl`%1xi2M>pANj+(Mkr0Y5g|CF@;k=-lkm3yEw_IWI2AD2qhgjF?Qk} z)87s$fyFwQttHR^mR{%HFxc}E0Yyd}&W1h3CdT~hgBI9L7z!@^C+vLFnvu1sgPqJ2 zf1J-n?@fa(h&e%*Y-N96uDQASC;(2Co-UF}mmidLQIXYKo}i~Y8i>7kaR{#5Q*;@< z;OALD_U)54csePq5|4=1d>GDv^d(JcI5#?Q6+26Bm4wo3RD7Ao0XKiBt5(z1e*NY7Q&`yt!>(D z1_NnZOO>XBqI7XKO2;#uU_;G(GIiJD>plaI(m7**YQM!@N9~!D|r15)I7$a3$@3wu}@sm~TwYj(?J`%$^OKd&b8mH4p7(#I} z9IbbqQ!8ks-cgHgFsAZx8_DGoTeBF)3yIptmBCm%jD&JO zk;;5f-6CkcjNIBl-{WmqUVq21Bhw`*7eyUnWzDYNjv$%F z&nNov@z;$LUK_BUY$WBSdXV^NUaxg;Vj9g8D9A{2tZp0AS)UQxEwN3WOB%4Y8Rf>slT>?18(l)!KY@vaW-BSPNAnmK^o4m;nAz7g0KVYQ zA|`;hx^I3i02yPqdEkS`Z3HdjK*ok;T!+rrc|^^)*hqp!OxDklQ{_wYyNMw*O5OC) zya|X@Qi*R3I_;=hdcrn-Q0Pi%IQhmFsgu@Gdeffpxw>N9e9gNX0<(;j2v*GydE7ss z%O;|76<99Su?o@&0>vHGy<_po#p{ExQxZloZ~2R;Q-C4c_mSNobDWO>8| zKHoyvqT%L21iu9C)su#r=kiw#bzNDh-|vs4w#fCbgMGmWwOFFa@r(v5hVW5Vb{vM+ zK{M5xvJJIq;j?8pzN*f9Gr|z`y$l9nftL@#fbo2MNZw4fckExRrcf*`XA+%jP;-541j z&9(lgWh$^OXw1fNGX%_Vx@2D2WiA>=7i(G-pS~o)}$Yc(LZ$!DceK6 z9059*+aoyJ`-?JnFn`|&q!0x4%LPa$g4^cKSqsodb_6v25m#Du);|&bPs#WA_9Ql? 
zCX|3xsc|g~q@TRoA(?fsaD*TwbMR|3QoE(GhwfK91OZ8Cmk4L6J>dsw>Mu&X?`VbFT-ay~=c8)M@# zI1iiFfD)LY+%b+`TXqvsoNx4!F&$2!jLZ^0J1S)Z#VDd=>3?JO%?^a zQrKwBvE$Dnw4f)f9B#SNBKf@E{Kkw8b$%VyQ_5D*9J~=ADJdz5=J#@!jAKkWf!Rwx zea#Sy@cSutBahk4NkT8!?FBKwAfFE?mVXRDe`3+dEcyGvk`g^a+OrM&@tBJ7uZ;sD zSyHuTVK&3f`&%Q~)Lzh&2q3T|IzK;;t}AYqgc$@Iaqh`JwfSD7qwduC{$ajIL#nWb zJZp}WDk(bt0T>4KrO*ki_C<)lz$lSP>Qmi$(2{7mV8`t9LM*B#x4W(SE z=@o?CTQ6RwGV2c3JW6?qd(NgBZPJp_uL)jB#SLbmC8>QYo}ENnGn7M`reeA;Ls*?_ zZ(Q(rjP4#q&$Yx)jJ?Q8u-l;+w0gYIP&>L=*)v{289D0pwXxbF^$}W<+c#A6th>Ql zuU$0j9FvlDE?N@Utk<}%4i4WXuM4;aDx&SXk5y>!NX8C`+^+JqoW)&*tpA3O{+2ayUYl@H0?t8O_EsDO8^6hFoa;0q{P;pab9oM*0*a zdwY8;tE-}^si`YVOPOqK_`l;*a;)I^i;(v?F#5)FH26PVkYD)#4qiV5Bn~Cl+{~m& z#^t7-z9h?(iSfR+kpU?t!(ZaBC3 z`l;pqXaIkgYXwr_$q%eohH+y6iv%ES^oCJHPgq)?B z_0@^;X*%zyj772EIfpW;lhR)24HGa~IX|ztuLIk1l3JA^WT29Jl)`gX^C=l4eDB>E z{~PG^ao$+AD4N?lKkroVQ%0wdMLzHufEx6#<7~yg<9str2B)rmIjjSB@!(GqGNl3* z#;_OLqh*Z0V~`yI`~_^(+~ywcUuP~v-RjPdd!1zSL^!@xv!~Zw^9i4`p#AE(th$^I zA^nkmC^buv!S|QdLWfH^4qqBfB!VR&UgFMo7LLyAt5g+rW}gx9r^*O9>?2&3ImzYa{) zIyKC=1sCdXQh8$;*3e$uQ)*kY-ZmSX61vZ0Z4byaT~wSWpiGNvdN|kQlEkRXFXxdh z4~NH;qGAer*5yg^{1Z=Cq(Ma-gSG?%XWYdzjjr0=sracU&E!};!RX>iA^LGZOA4Sud@>?^H7wM!Otz0S8d^4p>;c|LeJj@uft(_B_ z81{oeN`_F%GYb3g?)`I8V?3-{eL0ke6fY^8b#QfTchujku)*4bjyQG+Z}ZR?i|orV z+>`<-{X1ZhR!k?MvOk$S^d?7Yol5FbfzCKWdt5HhKz{kcId;Y&2wssVu!j&vs@dmY;^CL%jm%L~(~9Kdmwh zpQ0o0x+;n4E#g%VG+0=(hamiw$OJDa(>t92_?4W&LUvB_+#IF41id2_kvp9q@xLjm zjr|9R{HwA2l8wOeW7_;Czd`eG^Gn@n5Cv0Zkey)ZkbTX8U<`L+CQgHl84Sgw;= zo3#g6&(D|lHL!0yrTqrW3;9Y!tozvs2;kU}+u=8`j9duMUzS4mJ$%zhYh=apHA9kNbg!+?R{6H`}mI z+Lu=Q9N4TSc|qviARYQ zub!syd%WK5Cd1sCu#q-b%v0+hpsJFDt;x(_h4l^-1WItdS$h5?g4gJ|_^X45APv=& zJpcbIMnIGO@f258q_%Da76ByJpS3ma92YJGx?Rc)dSS$7DPZ zo=|^wtzC>BUUZTtNeJH^nwk--A=&w1Su7Z@D37arInnBc!NyDX>oK{SBQ=8EnWhk( z7B|s1InXognu#=aYh-gy)-tRXwJ_iIs@naz^ugH)6k|6ByzJ?3aecT)!v549L8t6= zu-JC0u?$oKCKS5K>o5X#D+vG^%rs}2@M-F{NLX#kp5LK6k7Jm5SyGZ)Ouh`@hUoEx zqMW%yw`|&=J_DG&^KHf7!dVI`HhKI;20YSkfOAcM|vOSqmrV@tpg!I3$#$ 
zT*a$gIp{k7oNtnX=s$L@-S>c%$eV;om+nDOBq2%;1F4*%W$9}_dnK)$ga3ys%hlO> z?LT3gj`;$StkSDTY?ME)(RVTB0JQ*5-c998`<0H!?SzL+L?9 z4fo9Nip}TFon2en;-?hv0>H43)Amn91?{B#LeW3Wwg23fUwo#F` zf4~=>fFfe-;=Vqu1c%T$OI{Q{>1_r;InPa#N1=*HnW$SzVn~t2b__RRdETAk#)&s$ z6Zgauai&kV*ebcO={7NlXDF)Z)N+G-Jv=jf8KWq)Yyg+JQN9oNWZ>Jw3jHXv6y2g% z=Fh@Q-;XOjl@5Q;<>;$AQKDNim?%?O4f(jiJWwM<^>@u7_;a>%hTqrM*GT5m9m^gO zp8wY~EQ~rD#|7M~zK)G-pLIj%7(;amS~5FP8$v z-~av^7M+^^fs;_3!s@hifwXOJbC;Gj(hVtzKzh*W^ooj`xT&UD^jpnLk;*3S;0)^S zMcrcgE&~|Q;IuzD@T{|6sdk$o;yyz1@+6sibhO4YBagB+lksO8Qu<+8SD!9VKvEJR z4xQ>pU?eSe>qeHjBc<{=2`~c61sO}MKxyr}G_3|bb z(^@B*q#4~N=Lj$%pgieK<*N;^rH;~=LCA~=RQ!T?rk;L2mTsMuU2I%j)pI$Clr0ez zpqNv!cL}Xd82M{g%!;BF_k72F0axE&K8K+=N?Aj@6chX^maQ%_ zdKj!O)kywGqWmx=_qg*_HmteX3P1MhO z2#snD8$GIYZ{`)m%BZAep zp?mvtj~S2*p-tL`dc*iJD+E$_Gr-f@UNIYBV2y{7`G#b$UGFvNvO5ZEnGR zPUQl-S{GU15PJI(n?cjji}~pHFN8^tz{|^Pv<(&}ME3V$e(VvVQdjZV3kg1FZge{SZQ63T7MA7fRxN^#)YCVV{a%(UUadSC8gDN? z_%$-%yL>l6u1Tz|ZGM?_g`KmY0F_stY#hE@(axz!gpGF|Mg``}!G@wa<%BN3 z#N}a9O@4mOX`&An-67$*UYAP=LM@-)e1Fi_ZXx8DQhilOYH!5Pnm2dwSxCj(KO1RUusYV|GRB~fa;5CDjN<9UT;^|uwThVKz@X3`~{P*5fW#k z{d3ce(>eKbG#x}g%gjcnK{{-2L1`O{T0%0Ra!vX8)ir5rcY5I24z}G2lj7UV9y^@A za8N{)NrRx?|1r7XUXRak3-l+JHe1}FRR-NTb`oN&(i`2;Z8d20 zGi&##IFHL~^DU@{NHu@+$q%Q=d_oK2eiAJ!S>S#8;sZE@u<|x*b?8hCD4}PXvC_<4ppmh9$?2!F=c!|yn z@oD>2R=2t|0dLHCkIz5W?wZny_OVg_Ob1MoRxV$p?qG$j0R+hyny2*pbw-=Fw5SaF zPYGZS6g9k8EbJ1RrSnPfP4!2d*1386{5h>b9Qn_HZyf$+yJNQM;6SopjH9cwMZe}a85y&!)Od@Uh&pDv3~_D zJtp{{F4nB_SI4hsIf^pF>TQS4H1Bq!B_YTX&4^d`S-fw zY^-voEB6&p{qQp}>_($jt%nxu%s0F{c{6LNQpyyFHxin{E`@F#OU!?Qb2MSHCvdU3 z7Vwi&GJ0&LL_bq|ZD1O$Wcf$d&-=!)D8w6F_CGs6M5=F{^`8Pl4WSi$o%78RQ&{kw z-vNr03mF6s1;C;raY2tCy=WjjeU%zXW)pMXA`kNq^ z%|ZiIg1&vZ!?$rW%V{#3iS@g4@f+_5>2>r`iRE2l9_Xk-35#EnA;ps&TyGj)WqGk$ zd>cvf+@E9lhNS`ys!axL)h<$R`5Z<*Jx&Ltd=%F2jZPsGMXWXvlx%EKZbxHbVxgBx zW)GPSUKVIqM#y(3yyiYaG5(3Z$6`0j#P&S0ZrwxtNV!b+6Rkp3RXD~rK4AVn5GE{A zul};uuTx{6G4fP)=W{fL7IDUa4t`SujPED3#;rixqef|7cr)KUWppe6To8~(PzMyJMDpxeR^yjp`t^W>Or(zY{|mmk36BJX@JPk^6qQkiwjl`YKW 
zE_dF9CkuOkq%NYTp2q07JL+bcn2GkU-sMzEt5R>&{)djAV{Rw=e3l3ESrtOE2FPx>Z8oRyiRB_-F|&fW~RbAWh8oEF8fvJ<+gT# zt46)&>UvNAZ8vw`iznac!%=(UfU(I?!mIuHmK>MK1uTyT2d^R`ub|9hukptIfy=5p z9`5duadCLBF9=73e2Wn{N^m*H&xnuksx6hZpP2Mt;f&=ElLp~3)N2O#ogx%WT1ztl z)+bN)@J>!{#DzqwNRat-g@{Wvsn~$J1;PIjgpwxdpQ4NAyebUkGavorJ8FAsYX5bx zFm9tg;gQL5C7stB4$AOSJLVt)h62q9#z*rW%8wKMq@q8ydtj(_1wB&x1}_2PnI$rp z7^~T?#mJBa{Ek5rsGr#^c1Wv-1H(u@Y3EU&n6@Y=mpVVHo>N>pwS*8n|BR^J>@H;1 zr@Gwa{s6l#S~}n)Azu%gd5acv-gDUKh=veWQI=Fvf;LFbE)OTlpgm9plE8ZUs85*u z5uYZT{)`KO6jKIy{B2kI5vZ@f>Ro3)WP7t?^)4L!iuO_*zjhqTwakIj@3EaU!!VqL zJz@P?BJ7&@lfwfNKA)p?I^nNbJ@Wn1RH`K!(rTsJSOtXRw)%%N{^(zR#VWCFM;(t=wZ#zpt$ok)wgr5{VScYP<);xCTU?p6`0R%M0WPTybSO)ooB8>_V zzhVNcKV;_6VUl1TBV@h|2|6b}elL#-ulne{o|6=t;|qAS?b_@j9d3W8@_RRn`Vjek z4kz=L74J0IF$a}ZjsWDV<1?)~xZMc)s_ORGjc?Qp*~+QrRb9gM@2U=cMDEbEJ8)X9 z$?!nuv$65{Z0GA^6RQiTiWuyT@CZg+vcH~%GLC6kr|>|6z=Yhk*4S>!kKA7pX?g!- zTH2b2BNqq;Ag|60(Dm|&6xN)Y?R$T%_P(r8xowopsE^e$!YN@IHFK#a{3d z5XPB#ICa-Y9(wt4l$0>%yH{TcQN<}S3z{oJ*X4tkh&_syUeU5fR>D*Ey4 zQg|qC|1Abxg?(0WVxEg~UUjV9vZr#c)nY1Hkb#)Q}jL~FVa+bHMgHM)L z9WqEvT}>B0=||DHlw99;nt@kUcd>vW$|Sy!WPl!!`P83U09)kGf>ct`6d+L2>bG~E zoJO&N;%T*ZHk+B_%i>HxnG>El z#1S#}6)H~7z7qxkeJlXK{r;~c?!Yi2+6vD#dAM#rPB-b4&pu$HZ(TIAXWVf=ENfww zomBOL?nq^ZlG$O8@^X=-W(@<$wNx3^)Eid}JxRJN7+Ut}(X;>MRU$s*BD2=>y1IyX zp~bd&?mX0ZZ0H;t(F~9FGtefA{9k_o^3=- zODmK~*+z`Dd0F|?@qCTloh)Vg5#Dfzc=G4HEvFZTP(+WCbmbN=rq2q@NPz|E!EB7jKnK(vDHypj1vsP=i%rIdJ?GUnwv}6%lDSW>iwzP z;STNh1}~ACt)Pdj)O8X+mPd^T4C1B|cF;nX+xVmAO?8@Fg?M#6%|%>ui@0ell$onE z-`{EL{QZi8S|2kM*>0oa`>g12Q&1l3*^yRT13E(jj|x?l$uzmewa2<11n&|oduhi# zxSz)7Mh{z)ji=1NW{l)AN7?+o(sOR-^spr!sd%5SkvnBU;j$@k0dkSTPK7sFh~aFf?YjPX`!YPl`Zn1?b9x#gP% zZKIElH0^cWlZJB=e%Hn#^~rMwWqyTr$|}!ZRf6Ybgu8RqT^m)o(NG|YlufLS_-a^B zChJBX>7SNFD0!LwDwcjhV`R0I5guN3k>cr#w@Z+o-0HC=nB8U8`K(LA=@JW&9+x5 z`pNJ`2X}%=_vHXIYvqkJ({*S&N#2a(CPN+VY0c*H3MhZ7wjd502hG}}Ui_v3@WLKHJet>j|2_(6;H$n;#Tvqv83o8iKAz8M?u_oK(b22oISe8XkyFMb*T>4 zx<)u@InA7;uO5B1)VjA@WwvHi&tSc?7}ntaBdlH{`}|7*`*xJe_wLa^=KeIKlVRo5 
zx1wf?@gyYvy8IC8LCz1(Ryrb3rUy;^qA?u)m53n|Dek>VW+&T1|J#{vWkzte(UG$1 zEqz75sl^9lRQ6l&s0xAwHoLs%lvrm{R5Ujjck90RQDwNSS-ty()*;6wlDEQi$xR8G zDPtL%pL%ahOtD>s#inrfGZnkp*VMV3h1S@YFv4+-d5<;8$xo*=YK_vOTg)T%;XOq= z3!g2Jt<~$)=QpfA-tjOon=80GZavYRi&mq1_im08L(~G#a2mgC8l`ayEzm@BL&c3* z?@i6~$v0thwj+7**);DpOcRg5;5<~u@oYt*+IGpP*L)}_?S&s)??HukSEdEuQxm2I zqcc9K*^9G~LN;ypux3x_$0DD}3EItQ^HBx-&6_fX!cbM)_>tyX@etgQi3_5rKtpdj z+@a{yg;hs!_sLPz?b(X80jUtpn!M4Ng3ZDR#5)aGW!I{Cf}u`e(Ao~uvrEzE*V;&llw`u3wPt(9=3 zKjI9AB>HvMU}+)$bwVFjzZh-@i=Br!Qo}|Xn>S-hgPo=~ZYZ$;jjQZPsV@$+I8jPK z+JZBpR9yF;r8YPuuE`I^!GOUz{1AWYvu0?1dyXLpxYa~?Ua_QCP@|t5MNmJEvMb{uGh|Rwr`H$6|lSY-uO+V|2$4c&TKRxpI zq*-o0g5*=K9_NLT+>-@0Cf1gW@t#D|vu^ZF5pj$ij`5d_>mfsU*%)4)*S15-1Z*ZK zt9`mtRwOs9F)T9fb$knPZ4gIgPVUEqez4}9?{%WhfijfAq_hn(`0zC806%JyrXGb9+h1!#y+1Q>?QCAF|8_Bi*)m>T7h8*49Hgn@cYiUT=+U z%N2afuCAFe)BHJ4eA9B!36f_Xe~MNg@z6)-NR&&yBu?dnh*CS`#OJ28UEz&nB7&_C zZ-oI6j1h<02;o2pNlQYKt*t3XH5nvFb)GJl$`?CF&mZeN>Xo??xh>_4nl?(oPW3AM z3pMIvhd3;`is=0XQ?An+LmKTmcV8~t8@k>KD#42O4@`AR63JZnIpJ$EBz|c3XE})3 zIIFwrpJ;`uTV>g{{y={gTPjwx0%5$#>mr$+D&Tvwlzra@dVZPw)diNrYsLCrW%b@5H=koY9nY-6y5=>Sq9VAmH_J=zX7OIs*gPJu8HTIK}Jme6yE1Icd zr@IE6K{ZH$#v3EqxD3-(TF(1sMNC?jC)=Zm&ep6CQcQ}Pnm=;-NXi$uDWeM`s*m78WGYi5il!c0 za{0WZDaPO*4h87J5D6}>Rrfop9A)Oj6jVQGsdJyS5nQIaS5A)TvY);l8S4AJ1ySX* z0PJinq*e0+r5n3R?T&k?=}wHES=!K1?JDjr#Jzp=j6DWDWS?&*w;2+*;D0WoG?~Y+ ze%>L_P#aSW9n^?tO5qIY#@J@EWgI!H&tVO_Df}K}YWHe5q>A&oP`%FC< z#eV{|wf^trwRRj|v`9kH;^;!fj!z9Q_g|p-V=!Cgx7!>8hJgDoejzb>hFnpoe_7#qWnzO?~zUULpx|`=Yl0Q4PG>h3HK6_unr|>(! zDwcU>jbbxio>z4yY(M87dbd;^a-1Ag)}F#iP&Dh5{5tccLeLX?lm76UNZ~{QEO4o~ zR10U{v$(C2uAQ7(l9r!#jb5_)SsKZqKS`1(k)&FMSZ$tH>#WJ%^b2kd?^;aQO1h!q zMH#SBXMVOhe9xdxELBbjdL%M*RCEuUy848ByP8~5#O@z5HJsqnfA1iPy*kbf!7ItK z54nmI_fOLyRLmJn*G-kV+$B_%h=qEg@vj*7r=$hNJQD}qZOX7Q^HtY4HOAyDugLb) zQt^l0_Yu#xUNIYXWhmklJnJ$UKF>kB>%~*67z4dVZ;ch)WHF`&o%qc*zFvl%h`Ww! 
zoe%HY;s)nngoS9jkaoFu@4WopGK6U>$KceR&Fw^QJTj59(MG*I ze#UV-%htll+b`R0+=ip%dzt?Z3ZG?3W5p7zI|kL!Xu&3t$GYgsyYrn~#Key^;^8f{ zf{eqaz|ttbWOL302mBVuxYbB>KTfme_~XZ#{49g{t{|-*z$A7+$=2yI|WdwSSmU zxtjT4l!v^fL%Yr{R#s|%d#7BG&N3VAIzl^hghZsl*BPc>y&zE0u$tsXU<-|Yb^yY1x#%tgXv%WL?6L; z($?4?oTWNgB{@OJoF9lL`D9;n{(p?UbyQVr+dV8wH_{!_DGic>ba!t`xV?9;5` zVz&ip<+eU84%2Z`OJbecF(RP^h$IfNwxQVLld0SUjuN#Lub|zG`U?{dPfyA0!AR+( zQpSN`qUqYe6Q!7KgW>ChH!K)xR@}-1`mNSE_Dr{kxcT(;1SOZ92`50m1bb!8Li^yZ z;<9A>P^4Cv)g2P_PAXw=Or_RE&!WG{u+E_WT@IMc;^T+(8b!xgRh-nk%kC~~4-^79 zsf2vRyI)Av9=Dy0b^COlhsD=?2BGVvyb;8D!g|%Ee+^R&gq|N9S^}!x-Oy@Q_^xPU zoxVueEHZE|b{b2SI@Mv?(w$5;r|TsyDKCdS(HK5H5pjmsbtX3xhq7(Fa{Wp%oh$e1 z?ch)*y0qF!3RiLeEUEij^46>YVSqF}y8m_A==3w|rz{nErVDX(NM23H#`Fc3@#Q{iGiG4qp3 zC3t?z*}w6AKR1iwXwG&Bd$fsUCOvd>4~RcIv$H=ICG@D?<7QFVgnXZNy>S~#za9{W z%ddTDnUh_|>T%aUBVFwUYBop3dU}olybojn?Xdk#|Mc2_vJ}y^;jOjXRW<2-4F+wE zxCMRlN)*du#(j>#I{Horz%h4N(!hs^}R3qYF!!>Ob+ za-m73Cd`n5s@xjX`f=X7h~oa#>g+xlR@yfo#5<5&77bKhwm;q)bjIG~$XgtTotK5@ z!j#fxC01@;GyB3z`yOiZ#8?%s2JTkwM%MM!)Fav@Rf(NpG1A*nP5`Wwhuas2b-`BO z2cs8Iy$obN{rgr1N;zruIK2;8HGD>cVS^=AucZHZtm_d^J@3jw8zHPStdVH2rQQ^wgywyab0%DThBG0)rjFo^-rEs znKncqgZ(!Pj&Fr*g$udMLCjiT&ihp|Z989v{&b`+Z+-k?ty+9IPqjhn=~|?<*=@hGKxNu4x2oj0Fgl@0&*U$U>FLoM3?(r__-UCr^3LhSj{)7OdmUMH(3 zd$+HtW2IeAeVSdTVW*Lz?aJu-*2A6P-98Db*=wsW2Yu1)TUsbWU*YZvkt)qR+-bWB ztg+XS%A}W$ylqhZ2=C6cGetXggL_I=IvnKmf&Bw$L_js@1q=f$%)j?9;OU@UkgNdR zDUkV;v*~3wib~U&aUbW+2Xo8GmReQsiytIL zjWOz!h4u&*AEeAHNn71Gv>%RFnnytA&se~@P7FhV8f72_$5+QYAxdAw3%!D6Xbtgs zOzB#`B2&%p=121ojn^M5mdsQIzZCqeTn-DdEVPJy7oN{W%GB|qSmoLKtK&s#y0Kfn zWqE7KW{Nh}d?CfJ%>1s_AS(;_#~4bP^nT?fWGzR@!+Dc}RqRsLaOB@oqp%-_+x>}P zb>r)H)z_LA`};g2aCR~paqXXod8;yy!d@FZCaZtiIo$m6%kR-AP@m6box9S;oUSPtp|Qlh^cnW_Q!*~ag2<=0F{*q*w|wE18QoIVo? 
zVZ<((%ZY#CQLMG@b_(Ze*Jy20?u->qUr(smD;a+&hB|c+<79svlTibZ)(23J>OUMq zMi$=?EolxO2{i67lF1X_iG~+`%RzXqQlZyjVC(>@mw%ON%WE{`6?!UxgvE3?>DJ$W zu%*O8WXDGPbLZ>Bq|G_}K5@gkgV@b$8Trb9Uc%-QA|b~x0{F)C(I9T>a>RiMPDGu~ zzi8Kg%sZZ^cEMt7NmrI_UP|O7Y<^r#niC*G(!*|O0oA*aI73Y0wBO%m2s*hC;_k$} zp^?f;n2Zd}HS@YRnmxvZi?7`r-7m^i##NpbxR<{=O2U3>6a3lJrk?K+xTUJ%HakiB zbLM)pr?LM<6U4)8mo~&1t2}c2nIIoWsg|*L2A9)a|fWA6gpLtpBcl zM`+woYb9swC0TDH464+7W5-U^8sRE7592M2L|Vf`kE$h$&l#r9qC%gXc{ zjjB0}s@brC9;Z+`HD0-3gzR#YXaAsiP)*8qLMF*)`3B+r{-sRF%_XD(@ zMqI{TV@{x~ssUiDvpZ0Yz`bP75)eZ&ll@#R>Qlc9xdwxM51qDtI#`e(^g`kU)|=@M z7UWqD{11Ru6GAan=lL9M%F#1;4JC)5>Zn=*aT7|4?1$^y=Z-FTNdJ!i&!C^jcX=H5 zWz6t{`$w3FOmJ-1+|TLXtgq2Tl|qx9qp>^BFzZmGFRP zJvBlQFB?a2V(uOQ((n`}$Wb$D;l@6-bG>nT&?Pw1yiDdnYPmtF~mM+5Yt-`UKbiW*>;>EU!SL#Uc6URN;-dy{{OgDUcS$*P}WA=VD+@89}2?VShDtO z*1;=1&V9bRfv&}7I9N~Q7AwXL4Qztv-0if9qW~CKkbiC(Hx?gw@$*zP5$9{|H@6?W zKF@VvI+D|C3!mlU4RFmm395f=GVP;%;cp=Jtl}|nX20wGe`qX!5w<&UpdLe_;&j6=N15^U>btjjC87y{IcO`*2Wki1x+{%=NTa4XLZoZuJIFO;uMer%3R%?r zMe!LS`zn%|QuBR!u|%eL1RNX#BX5Pk04<88``uRZ2$oS_xS5J+K3Pk6062ngC_BhX z={LB^PxyqgtJ4NZRfInhjWN${x%v;Ll*pJ5unuy}GIA)kVHGJWvh%$|)uYQ^$%mU)(yiQNNR*9ox7j{ck^uTJ`;89hMNN7Q?l!{_W9eyu_hRI|sF?P~qsG!Up z^>Bs!8WV25voQ#X6x6aAx&5f~>TL0BczbM8bF}TFRCGL)zX3qqdos9pe9ie!==?>Z z^zS0?IvRGhSt3KpB9}omaT2Aki@QC$MJm^)3q{r>=js9$qi%7~NSB?oPM;(%CL*yT z-8y1RyEEm*ha=Lee~!>;4Af1ozH(45Uf2k|&i4|2Kg;;ZuH-@?la*nUN%I};ExS?# z$1WCCXS(v!)IjVcM*xMX@q5E^BdjNn8aFrnD?bcY<@bn~TxAY!KmW$DfDaM0_{lKve52gr=tz^W@1j@{-7h4 z1iEPbX|j20y4jtWFu$`k%vt}lIh}MWYoz9NUSZy#iqg(&!;72bI){~9tcVu}U>ypH z`1aM6PKHYTveE4|fu-Y|7%jMP$#n%_doH2zgg;Tz*y6uZ_S0CR;GvhxogO7v@L8&rleP4+*SxfP! 
zrrv*g)B^w^$PXs%d%f z&V=Yj%RsZ|bzI$1I3`Pn4v3fCoUl0%~$Q3O?4^SyFD=veWFLj9*&P7h~P;H^mG~5w$`mrnvX63 zg(zfMY3+_ZWC_NLlHEUyfgx)hIvFc=dz%$&pLX|&@{gBW$kl+nD*#){@$4`9%O5xL zt+c_)nb_om)}T&_h;#AxWZ7VXhJ6yd)LDTBL=CfR?#Bi>V3u9WEbR@H8uu4d3Rpsh zM!V@GLUHXxlF8Kgf`0KVq9XKhtgJK4Dt;~XC6&ZK1fAB4c#?di_S71s2=~tR(PgB4 zk8ZwK9j(;>IxCM>*~sae&SUL-+z-yr){e^e@8L5}3@tV9t$5<#*M4=KYb@wff2ySZ zhaL^s2t{{oo|}c~5%1+&Cucy)hIfHk06IhOWsTvbGUR;h8bZ;!%e1e)Sk7LmS+6Om z+LeL^IJI$ES*kRMY5r`YSdHA49%)C!q2g||SJpCkKfm6+lWwpNnPFbpJ9!is5+BDj zT=CtzG4y~;YwY`OcJnfgAatZ^F=OXJg5#1SQPxtO_X^{1O{hskEmLnEGRuf7? zrEI}X(xKOfN&SQRc|l3P6I=ihSo9l4qcl6+TGiDpytiF?_ z^P%9{58C&$4HThQM|m174Dg4})+}w>Gtb0F-_JfI1mHZThLceb4~g79 z*Yga^SJ4&a7`Mkpo9phAhvz>Tj0o0+ErXEZgV2G{5E_RYZ1FGGF+fTH*}tCQeJSp0 z60P3ElR67OgG)WA4Uom`KB8*WR^6#phJ1TmNg;{T;~C9nRkGBn(zF+ZmRc#9{ zk9BCo%k6qC8f!0Sxfb)}AHV-(U}VU`9m^6q2DjI95HPG=A1~6^Gksa2)W8=kLweFnXL!xD*QKH^tvU)bGkYBfi1K#gZ&r-M)qqdg>rPYbg{~>bzXDGInE|5ERR} zxT;!6a@uJG-op8$+UF}-K?sE6C6Kc|AD!ZyMA`l_UOMrzdF4{N+|Wfg$3y}=wtaO* z$}6nTsYWLBI{@w^90n|T_=fp-r~NH@@<-w2=f?o8N#5cfZGD=>@-o+$#Lr=6@EODXi)y7!N4w*&`@Tt)9-BpaLq$i<#{JgXm#!Pv(lySFD-y7 zk2ONG;Y~4wM9S1H`?c=~Z&(BYz{z@%t4P`u2Y*>#sW%pnEa@~bZw-J8B=Lcv*U(3k zEm>k2%#h1k&v8HQcZ-}Q^1gVwov!E;}VBE zys*A2aK1PVd_ZBBFvj24cHX0N_cqvn=vR|@c1oZ zy+KCzLb!=g)VR;oWC@l@!h};Y2`=kcwagfxRjR0&{+QggkCHW$2it=DWLlwLO?!H5 z=~-iZ{>DdUof18qb6E>p7ri@i>PhuSj&b-ces3&Fh{m z_kH9)$DM)X$!AN+l!l-7KxDk99UqeC@(&5;KEHzdE-=M=24H+`L=G?v!LCJ4%awGX%_xi>fFmo2Aujp;a2;*6^Y}?en%IskOTazKXLnq-`0k67 zKsvRk=^m@F`upusoiqxu@RI1^aG>fMj{^~hrxOMnN21=2M=npa)l{f4y>+FrnwlvyGJ_IF?9@!8MUQfKp&*BCY zhF65W-!y#I?-eH}^%$3dTeJ9T9jFL~wfL=q_WvPckbCl&hqlqn&ZM)sv~pjnpqt~N z$62h`jysl!nrMkl9p2h#M?3W7zAnt>nQhc~3iRd$J$J5joz1t`)u8I{piu}i@xHP- z^JN3M$x0%F-CH*8taH1YMH9sF3=<@P)af*&y>VWs@ZseQ~o;NLLw1i@XhTM z4Wukd-dywg-Q96^PwF4_y0ru*<{KsX zz5CiA?lW)RywxP7z@oRCMlTsUU_5I}L7nM=tvQI$A|(EbYIh)p-+-h5q3 z4Svz+TOn5$of0sf>e5Ma`MrdKj-=try$Gzmdl`>I@v+@Tv5r;eYlJq#pY3j_<+^OC zM*R_@r$GZyB-ayxo|Em(LZP8}{4OmVEUUVkJ#bnZ;R3_p0+;_P*2$&O3y-y^9O;(s8{%FIs+yQz^ 
z+)|-E3J9^zZSow6$mveBp=cL5991^^xXRC#~a1?mj~@}G_%Kr};V zgttBc8P&#PDcU~Hz(p}}P4oA`27ojj#?;|r^3-#;IIJg$s5(A(d=khPvNK}RY@MP; z&FiRa2ND(44 z!*l_cMguLYaV1w96+Z$VQ|;cLlD&UX7BnZb8!@w6nEhOxSatCGp6>+FgN@B3e(ldZ_tHiNIP1&gHdniX zjWC>7)FYZLpdg&MmB;!wjVqP1V>SYV%{oALItKL8Dm$<_#__49)zham?VT+96#3u&2@ zk(}#wp^MxBlpMe+PC4NC-ur`Li#g(7^j~yIXt%GOSNf4=mpE89MY*=Ek8EP`CUuJzacJPbO=&MvbyhcDp+uC9OE{Xc^GYn!(h1jB&Acm z+e41rk9=f|38w~5#ol%AgVn9QT7ho8&`vgc?2aJxt~Y3f7aI5Of^W5zO!qEQ6q-V; zrW;ac@i2@`gIoW>VfWoTa@u-6AR@V#QbaWB3)6nvF3>x=gvw-4aH4f%QTPEg$mtpaidL)GVihFkVQJ8d-s3n55l`=N(oKS6S2+k?Y>~ zkeICW9FAZ;E_d1{s;f4;yz=!lM=A!Ud=gw*wE4Wq%ij@R?beN9sE8w{31oBHu1N9f zmbo6=!FJjYIwnyS7Q$nk5SC-@tJJ=LUXGnhKbY{J`ven_G8;1VN#W{e*U`mo*RxPH z_QYRQjkr^Gi|!Msiysx`4z#2mE;EsVr-IhEoj%7>+ST?2H{YvyPrlfVg<{aFmWn1@ zaw^WQK;GoN{UJdC^cb(bGhYoL=M8kAB`>HkYJ8f^A1NB)aJALBrlScyTM&xy^_<42B$Ikpa(wr!~$9=$3p z73`}4!#1Fl{KWqVC*Zp-QweJOtm&|!H2Jd_D>XNUi{Nsj*kbFA$(p4HqZ_#0S@iZ7 zEx$X+A=jMMNOAo=9bGNYGmJN~c=eGk3rEmqiy!Fv#NXB|v}I<{d0!~%W`pz;D|8z0!|G#y^Ed!i zt==pF5VmvRDOB~8F--mE1W$wWI~m}*a{*sp0l4Td>UtgwhW&1G&c|0wUmstI#bU_t z3XcqZ5cD#@>L@Jj+_<&p^R#MmexMh?CYhN>6l$m%kENLv@Zf&(64Dq@-yR@e+2gb- z^y0Bz-QAFYrqOq}x#bm$0?f6=!uSR8jMWFWv07*v7_oLUJ(Sy^w(MMrSaH z>{5WSY)b=P#tUclVt2lx@IztJpo)s@Y3#|AnL?c-=~_UIdbGIqs`UEIe(DEpO`Xia zQ%ni0#)q|FwMYg>Rkv5;G8IIi?hF`xb^*-1P9dl(P_@@4#pdggoSr*#z?4t5Uit7? 
z2xA%JpV<`SO9KYHN9cyCZeESmfFYWyV{AarOEM|w1~abPmg2UD`Tm8@JOz!~o6Z-j z{?0-^0{|UPMbO_FE$ZJ+(2wL@Ig>+bD9t3rUbR!r^6I`lR0ijf1m3TrT3N%pji z(8b)uW`k}>hn9KI_ab+C>E;wj9>}pi0}DL&q(W*pABbBIUBy=0eI;<2hnS_#J#g$z z;Ok)gg~_|%%mK;7f@lGw>1Ss^nK1k>&eLCA&hN$9pV#E+e%S@wltO)I9M=em662p* z8o~kymYYqGlHj^Wh5;i~S-@Lk+1EZ+RjtO&ozAW-m0pew(5FpCeQA$5 z#t$Q>{RJMnLkJ2reJ!3W-f;t1!<*Xx538`a z`^}CZlP13kVJYaa0E?=|h7sE~jwZG2W}A(4Eenjj;A@{c%kP*+J6#2GVj4s6ZdWy= z+1PN>EHR|AqQEeQeg6Eb|Fh7K<`ZRsT*-juWYax&uJb=;-T(E`fBzqVivSGZj0R2& zYgU<418#X@=@_jmmB425%0i#(_2*s+P&i0F>rKVj8(IAhM}IGGQVW|JKwNlt%A~-Y z0u)8>dc0DQ*;rAkRXCR`=qqV+=7E!R_RUXN5wl3KA>%`u>F9m&TLG$N(ti%-?;rm^ zt~-9YDubh58^F4qZ*i7u)=LsHdd@CmF_>C2*`<6-Wav~qaGGrC>zv3C(m+*M?}5!~ z-eg7FX?rfo&Hi4{>n3qDi;pVVV~dV#6BvzrrbY*N9`wpZd0f#eo{g;^n#A_Q{%d{i ze}El-ehdBTc}~ED?&Fk-R<$MhFEw&w#Tqv)CL5M`8dpK>7Nab6@6A0i4~G6QJn}E< zrYzM@KS+w{$2t*x$M|mnM_Eqom5vv!#h_27+N+@TX?#-1{cWQ9i0zEi_V?1wa>^`0 zPZ2<@`-|CFHvJd|ljR4%cn~#T?i0WTbgpF!lCc4`xryU4XXOtlkSG;piz-<=R3_0) zDqifc>Jbb`8w=t&9Nk--VPCIbe-C{F=+YN1$7;a@-9KD|KO8?2xA;7;vC3SnqN^;2 z-UcIJa*^5QM4tk*EHOi$hkk5`3cn8WS4L19bP}7pB?SERSXhq~6aI4D&g!zoWN12V z;XZh(_^I+$z=kR-&2Me-zezWLqtU=c)jKi+j+hkNVDT$>z;ZAhhcP2xt#;G9`pYQx z6Yo>~q}e9bu)gnuSYhz<)#32`ht})ko!-IR#fSC#QopT$wyfYfM|mR5qdNe+`k{Jy z1f1}_R90^sK$*Hm)ilKr=Bx($+@fcTQ-swg;t>$+%rsr=38oQD8?W*~8j5F%m1{4B z0k)KL9WX;Dz6xEL=U`8>`aIs68QCF4_X0>aZ3WKV-B85o;8g!OiJ{ixS#edVH>s>e%Ap9@F1f8Uk`Q~wB9gR7M_RKgE?;xwoVIUb;Ij_GJx8K>}A0_yvj zFJh~YyO;@H=h3X-s_(@FVQ=xd+NlYOA*Fu#`l6q9?5HTVy(PVtX7XmR#|G;WFc>f; z<#kcK<*=C#dE>kz=Y`iL#OA(7n?ap4mEGypu=J^&5Rlvr1T>l6cA^kS1UuXA9J)Xn z18yIi>EADZFW2IWxMt(mNjS<63PK5O#^mKM0z;c+=kdbs8S$y@Pq0eHJLqx!w;VZq(GWW#YC>~W-k6w;G&d!PM{}v8= zo-O=!-6TxNpGor?n#=GP2MJvnUig%>*Dj*Gc;QnAJrU_jM5b@Och>T)dH(@|WeTtn zlS4a2tW|srI1XEl8N%?RS<7A;G5^=OOKP2ti>o`lkJ_H&TL-&Dj(B8 z#rC5|z7V)nI)5>V+WCzbl#hZNPm>!!;svtRdOta#$HSsOSNW2Hb+RpB8Rudu{iIAq zANCrCm#*Dlj2xOdEDrYGmJ4we(CM8E@jRPO&8RDqDowGYL4ymW;h}sNIqPhl^hosP zvvP0-Ac;%8Dd6N5QcEG*16b~wM^{~#A$h3DX9>zPEzK16ozL-lQP;QE?zh2Ang7JQ 
zZT$v^XDX*7twbT=JgZl_ee4EZjqrH1qLUvb%622@{!0#cVv+3sS8wa@x{3VPJmBqp zfo9GFuup7*bGjm+Tf;n-!3!6RvM+9F?W#TtYV(Sjj6{EC>L^M(qIFt^$gsov-Vam* zi1XXpu5rK|jV#UWuC_0^FdcX!+deKJ`L_qcH3I7ikAW@wc7>W2*SoxFLrEo+$+g|f zD!s;b%T=5xgbT4aESb48`dHF7$1{<^e77Fj=`+x^>A-GbU-U+Wvth8EE$cDC%$~~K zsNC*Nc#rnVCIfC*4=dhwA*X$u`9$4|dW#t`WsmJ|DWUYqm2aIbISWwrGA8qfx9c&E z5Jv@RVrZFf>qYbej2F-Sq1vmu?;5{uU|(NC9IwI@+syGTCyVM&(8pu0j=(u;oj&K% z&DS+znR(3?DDR8_IQo|$t~W+C--AQ0sbWnYoy`zKWY06s(z(JIy$lRrGgZS_tRC@; z+>Xv~6}K}UrE|fNW|LJ)5BXd75813H2x3t?+sx9I1~iz|vYMNI(4p;$n^lsjEGl9n z>whB;|9esZo)CG6pR?00`ne>z)fb7F8#)sAExKT%8EnIRUoOM?iqu4!*gaqv`Ap|{ zpR!Wpt=^GU!oYV!d+tvCrn^QOk64)H{wee~!eG0Do1Bbf&eTvu>-Q65};&) z)L$#^t$2U-p#|&$c-nm)r0Jbf2B}dw88aHyl_C3g?{HJjoXk8_fUX2$l)>R#H*ARD z*70fax~5c9A?Y;&${zPne_|JuC@rir1(C90Oy!obNmXlyIL%xXw5#1}qbK#jVeOo5EKB0)Z0~1qr`V2fdqd^# z8Dx5MV##}alUvTcOyza9Y9{rGDb*r4ER~x{cs&${Uj5%v>tDxven^vr4tYq@`%^oI zPTzjt_D@!Cb!i~_NDO(p<0PC}QdimsSK#)obawmza|}2Nk?a24?;8g0Ub%546rYQo zQteQ^*wmFvJEkOxSOHM+b?f^U*fCUUJwdHb5Jv_~YI6&SG{q0{7D>^M9FRY>S1BhkjYd1ZZ3@uX$aB(5csF=J z|7FhRB&BTY_K_*mnGMx~du+%|koI$GvQ^4XE$S2?#hm*czn0fJSG@)|7-$Gf0Ft}u ziX+`ItxB;73ZaaT=b;J`gL`|M*QIjy%w+5JV?{nM22e(nsO9FUZqfE2-l~thk(A@P z=yu1j6=5mM12_5>c7#b|F-YuAnkEeqnbf==k{3fvPRg)e=>T~=Dd&Ge4&}sqomZ$j z3`Ye>fY|M#2TV`INm=l`chV2)vY!bIGL(G(`|SLGbSwXS3s8oDoJKj{xq302%5t$+ zU-naucD<&R-a^HU%w@YGqVo7v3r$j*IIqV&8sub6p*&3h=KlVCOdFIfnZ=m=6ZPpK z`*=6+YfbhN={0ZJv3UQB7m%S13YT^eo5^=+YrW%$_k)a~oIgb26tV@XIo<_&j4;)! 
z5UN#bYsk@SG(Z~Z-5m!*S&1&4VvotyI=Q@Pni2#UcF=6>wLWP_hZ)bbQgD`Xe?EaN zeK214%t=K`9bNQdA>=kMwlktz0X9>Mi8~ocmP*RBTay4;U2(v@J9akTd`+3dVVP*% z9t%uw>T5No&wn>-IPrh%<@kP~AAF9TgdJ|Ty2_T0a(R;R4+NUB4*+R~Yj&4Leo^~@ zl&49DU}Ct|FNxdq%wy2-Sqw>Sw7;z?f#+s?4{DKd%{t@)pOmvycY%IcEgX}Nxi1Ps zX3s@Ovs@>hGCWbsqM-itHBtA8_gNYM6co-Z$(-ETyQU`JUM)H{pSoX=U`_jQ+k*1B z?Lehkn;9HZh;(*TJ#a8lIZ*Q1yyVUT#%>C_yV;&patXQwsqCi4OB^p@ti&_}_3vEc z9|d{7ca8gR(8}(-Hr3N*jx;}Ogzf7b#+&#{R_mV=@ZU`~86G-3HuBa<_NRdK1XhtC z$;b70J`KIQ<%0!L^l~5k1p!T4K!}&axnj{|v#(_)6{ZW*m6wcI+Z`8!vw2{UG z5mqWkMPxu{$2HSvtLM6RCiulIwCui-{;78SWZ}G!t9-rHzDXFIj{O8E8c#+<855oZ zo_#qBdP#S^-#JaR? zPTMI{v#0AVt$9(~P{!|K=|nJsa>lZSEH=O`Hg8Xl$6Sk6dMQu$W{@(KeHk86vXDUp z{XVzKs6rTE^oTA~3RJ|KxUave1DyWqdPKeT^VG4Hj~{hgj7YFq)cgB;f|rLpn2mzM z64sxJ2f=xl6XX9DiTmFULI9AaYkz)D2cvn<0hZ`eW7oetqUJv}`%(fDBzYRp`mH|c z7aN2t_uH3@I~QC^!0--vm(A)>6Us-#?WN*oQ!EkT>O@*4itXME>+^LFeIiSH>at!} zwHt4|JN$N(wCGzQ+_HxeNMu9Y?9Ra2ZY;hGwykXhx89tRRD0T5o6U2SC(lL;2p5Qui_J2EaILdH)?5OClUu@>k8$PIC!aK(#8J^c|7rkC*rG% zUByF2B3`-ORuoFqnXlZ>vNuv#6?iLB{rDABgkKBL2F7P1eL&W^@1n&Q9Je{GBL2{M zfOq*t_UqPo$)`?~+3FPK0Gw5)%^9pG{*&hOLhQb$U+@W_oi%;b=P=^*ez<&-n5vB* zY2v4aaUC@T{rP|b(dJ5AFHtb<0SlN-3+BfS&=;*vx%o>X_c42#`T? 
z6tV@SANY}N&+>Lo0 zr8EY_0M)iXVI%*~uTD;TvH|k$C%cSip)C=ZEP|V(8ezXyk$Cw#LpmV^Z++uNoeE=e z^>nz+6{eb;X)J+>M2f_7=;x7sV{0$wtx*`8@NoZrZ0}y;vFD-b@q6xNZjmsF%MPVc z(Il5B1|woI$gLr-Ds@_%pfBHcjoNUu`Jx>nu$hiXAH`iSb7|~q5WGFZ^B=Q!R z{JV)05`=p6J~?^R)2_9KsxQ*Z8geuF{J_`vc!L&01MP1h`6EWtwV_x)*ynJ48;TK6 z9#iG{-yokqy3~)p+QK^50n@#O3Iy!YjE`Figc8>8-~r9pn-1vJ2*rh0kju}DCTEGP z2j|zH5RR^1p;uKFKVW1rrwNI^Uy}|kKfXL;ig>~kG0dPh(Y9%F+V0G{;q$piE04l- zXKK0KZYZmo(@{GYdUIg8B4KtocF2GJb30O0>LYEjs_z;fKvIVr2HME{JHFxNW>+1{ z1fU)c4JFL$hx#Z5)J`LY(XFm_&0e@$2K=wlSt?pNHWtvlhreH6Cj<1vU()A~gwQwW!hon# zvBJRC#c5CIQ@`j2-CMFS1TXYeK46bPRxNU~AkfnXZGGX#)!%oU>|_J;MkR=hgs+Ze zrcyHr*t)#j5PF@34jq8tFISqX)F-lFKDb=WtNQ?VN*IOB-ta|IqtLJ* zN_D?Xcj4@DaooJgn4h7A_q9IDYoA$}x>h z>dSj&K#dv=UB|lh<)UA9v2OnJ-cX|TRvL6tv=Yr6Sd-B-If5m|T9aXI%c+u5tz>Wa)+4DJ_%Clc-csf3c*aEiWw+-9iVv?on$P&p~9mkEv9TtF83? zqFe{n-8&CmGprMT3qFzEV4R?bDy#^+T=;~?fCol8);#VLpR1+mxcC}Vm>X@xItr>2 zSJ2NRimB%#qX&{}M|xL|0H1mP4F;GW74{$4+5ZXA|C`Yx-^Ebv*<#6ab-O~mYHIMZKhmhz?vc88!;QD_f6p=e}IQ`qiwx_*(#RDB_#i>+2OpTW_Pj9LB~c`B0O5e zV!6cxmyRf){VEjo7!!mNBRK;g2dQUZK9xL01gTTs%SEciDhq5iT8)uORYC8^cPfRk z$&1t%lCpnvGMc;DLUx9W)mD+}9gdeTEtdhew1F_KGB5s zXd)kQN6@P?h>!|Fw3aS+=h9~@S2rf1US_3EoI{1;8KB|*Wm@&jpEOdqxum|>J@hDr zWDn?LehI(6aWQ8Z$>0b9wI>NcA1-u+!(Q(RKZwiLaF7uE?f&+k2};*BS=<|M@r;i5 zpEh+?+PwqiNrgmy1@nCkX6c8f3b{af4y#-0h$5sHjGQ8W3Llbb{t6UpU!iInmcoNIy5b3sNL*p>V+(|J8sG1l%y zvc=NMnPM_#Q(}?KPb|9pGS+M5sogL#=&e5*DS(ii3j}XIn<2!rQ_N7Q!}%|5pTDw! 
zEKi8p)ad7OfWB4IHG{__Z=LfkeO;nfom)zDP3wnn#Da1{&NyRppU1Ow@mNBUUk#Ck zr#JaP-)4i?jLdGr*7angLyCL?4}Lwnn~fWgJ)`kvz*-uakVKlz&U)*RPO(f>+P{zL z%^Jc9dEfLdyHfksp~d0RhH8T&Q>GaRh|2CP88#|0zimGL`Hz5&=6QDQ1^RmPg_IZ5 z6`n%B7HK#xWwzy~K4%f<{bfWJ0NBnx2Hp^3Gm^Dx3Asa>qPwS)=|a#*-#U7BT8(iP zdx9zhlvgdqeGPuZzPiij^>Ei#xj%1jaXpg3=p|BHOFNIwP|F2es!GKQG#ZZV7E5rd z8PwaSE0<)moij+dE%3_qI=gLK0WUyB0J7M<(CSo#J$T+BG56?l=T(WgHT(U?%8%6t zKGk|thZe?emU+wuT|!omXVG}CJk<@Xo(kj^(_MyBsAx9N=_zMU`79S2NbvQ3?h^{& zf`~F2%B1>_^#G@g!96I92RGB*kV_rqMGpMD66{i(iLaubd2D9**@-n0p&~GmeER_& ze^(Pv=>7naVe*H=RI&C7HJ5?_Ah?$a&gY6&{4eP2>&IuIX*QD$NGL>y+hUrZ&Hzoe zaI|Ot+3yuW{o@wuIaosg-#g3NA5A$L6+;!yb<&XtW6QU7DITR(F2-3jsr%LA?yuV& z^@XTggUw`oN?+2*^IOieB~IVxhtW(|TRH8`*4BS{ZHi=z}!8j6-KNfaT%;(2}id1xkbBEiIICyIyYRH`>*9jy?TDP1i29dW4CIFe@+-n?sG z$4=&?bvSp^6JBN&lsCjxY5hnW|0Dh>pGNQ#7f&11Z?v>1=oU1*w^~jf(tQeEq!VZE zQ~7@##D97qvb>kH02PkYmg;z^_IhqG7dCWSqPy)El=4GJifr2VtT=E&w3gG|zfH`1 z1hTnUCJ!(36VOMpaM!?W4%UI3CQ8hI7)}#J5{qGMjcBxZUq}XEf>c3WW^1teEL!Z_ zp8<>R{m736gws}Uw}Y1+nU-dyY`CpKa`%Se=;RdbzE7hSR<{Id`%2rW?fH)}Yyc3W zDDvETsanaR0nmkAM>}^#+7~~5^0HeL(3#bp&Xp0GYWYn2^EQS+Fj>3Nj*M8*Kmk6n zAV$3a$fe86-4od^S>X$)_^(6QOuy}&VB81~)rBJ!o3wiTWZ-I4`!&QzA%Y|e0DbYA z;xp3(TEmwaJT5VvovxM}h$rXg)==zTXZ-Y==`kH)qPZpZjNcdK7Q?{b^nKBaH`Ob~vTo=dII@xK zRE}5Ma*8I56~E6r0G1cD{7hwLfG5hsu!>>XwUhPQ0 zT^%XH<_2)r5Op` z=!_Qli;iBS-sHQTa&|vq-1`OK-l_a7BDZ770B_I8Z0`b8;ETr()QEpxTwuGb-N{JFmib$s*D2;R@ARrCW3@{?yN;e47F?0@$ zGKA!SbPnAx)G+jXZ#?IC&hx&{^*-M};Q8&ouGw?nd+)XO+ABXx9tc}JcG!*StY^p= z2D{oCWPv{JROB`pPn}D7B8*>CmvNX37rf3Jkg4zckd6gvt~*b>fXd=>0H_M-tCHPC zdEZKaBIjey{Wp>T%qbNE5|;7F9s|)lJqa3}ud?Djz`Ou7DtVHJ(&c5|y!E!Oi#Mnx zK1{*rL_NgUmeTt!Lj?rQTAs;%W@A)H6)@16>I%d?yn$*FOEGK4+pPbnup9wPP@iBw zNmouFq7!`m=n&a|?%Po10$qKlv}%~6#whi?7jsLo8E?&-Rk5xDAm+TV*;+&Xon#>(9Z3zMt=k#b)itfGas(g;<{2D4(9sTH8?#vP z)0UlH-oBi~WyC-uRH=zKl+C+T$7eegyi3mUh5Y(>tcVSJGG91vk&ZNgx2p~X-L-2& zq^tUo6b|JHEcPysQVjsPhVL3Nq&7kJBoF%L2tazGK|AeDsn7P~ImY=tFPUty!0at< zwn#5;dy7;aFVQrp*0EH@o_3gk`=I&rtKWdsd0CgnZJ#M90JYsj^VpG4m&ihQxFmzo 
zF_#rd?e+{Rx7x2`$2HrSYA?7N1Y zr<*gagMKb5mDvu2qTnZL9O=-pZd5Rm;vd2oKx)C+wi^YKQG=1NsqD)@@z_?>92jw+ zP-*<-(8kS8Y)A0NqYW<}$vA4YIt?!tR&U)suT&YJa&g-c!2H+NX4bnM2tndxm%awd z2SS%Q4YSGBiJ@cgVi5r-n1Wz+O1prT$Hph~O&nh8TQ_}Wh{WsTmn)KLiDlMT%gU2l ze~eU!ldSO^7Wz#2f?VHiEpSz8pBjLfg*B)cosZ?!YGkZ-IIQj#t^m28Nu0w>Coa~n zpY>owm=o8bLt)l7aIgqPwG21Ousq5J;mOyE+`+VIvn^gtKhuthu-UaXX};I(_6td}`ST7~k)n zGs-8T)R6#O8z~SoyWbjN3ib_RpEXy~mCo`v^|DtzE=05->bm7KBq8xGs-GDJg|FPwHi)L5j2zOmacJ)N)P!piEg ziq~Y5=^l5uYxg1_(XCl+Emnzf1@&ykLA0xt?=h=Z>#67-NaVl}6C;pVj|kUJPxmJ2 zY%#u>t8FJPwqJ61I{SbMttu{L$S2h)9i9w>&EzDOvpV`cZGJ0Kb&jMiTPhoI{f|dz z)OdhpZ12nDVK&#pvpV-pJOvt|n9fCnJkXfrXSlH#Q0eX%eBEwtSK16SOptF?c@Nw_5%y!db6o&V{&hTj~fo)1I`e`_1$dgM#Se5ktd$g-^dz9WE}V zOINa4G_rsyx>pZr$gS{L-!Bh4TdHMWbGFuzJLN?h73?oN9AftJ=L7SB3Ouz6HG@{x z4;4F-&Gms`lbRCEM8RhzGM=wS`K{*D^2A(PG%laKn@ zVnpAj4>Z^lbBo%U!znX*5dAaNR^Q@f#Y8;~V8$fl4+zB*0Jf7C4J!r8U|0Epg2j$>D68UC8hy2Og{ zVpUYI8EZ;o(ac*#xyvVRJF|g_-ux`H$f|Bh|8NK6hWqXf?DZgagv9B**#hX#aix)X zSQTOb%hZ~4iV1va+$sfo`SAsZ|D7PJ_Ee749Y-ZdZ({5Wy7lr$!K$Rb&n7ol+X!X@ zR8Hp+sQcBW3m3Uy@5QPbW3BQ@$%4er$utC$dxkpQE_y8Lt?n(&0Lis05&UBZr$?&C z4CnJCoFzha_^!Mi(YUT(KJL}X_BUq-iE*~9elr0Y#XOvS8iO1{eDTh4Gd?}sqHUok z(<5O1ZgPw{I`YyNK67@QSlgq)2(Y>1{@|3bVNi@2v~;cfjbd>aNxIFZ6?V8U%DJF! 
z)Cw-_HI%%cZ`xK^Q~Htiz)Dkfv9zpM6b71*Ze?6pI4We*Wy(7<52I2XXeUsu80X2v zIYDcj_VRWn-hP??Wi%tX+O4?q$i#ZtGL3Kl*vY!YI;?gallMlq#*_Y@80cZi zQ2d2++JPcp!&awJ!p-bav017C5{DcFMzU408lnq%U~2Dv6lqs~xPi8|vls~CtM%TT zj1negR+qTD{Urb}+Nq?UA$9|lkQ~K7Kqp3l8sd}5WQ&_r>bdylN=Za!xiK(=$oR8N zP>tQ|=xXmYqK#+qA!5NiuXG)hrYkF1h_F0!h=R=^!4^kkvmA|O{D;Et*E}6+Yw|3pK>pu> zB7;5Mr{;L!cXtc#(DqJ4(Thc4cCT}^Gm|{e|($XD*6%PUlD_T`*2p>_ANH889t`QIu(N&?i1C2 zq`L)On0=d`7P|ygG-o(Zhq8qi^Bx0fWPmFqsU%qP2b`(L;oYnJgMP#I5Z4C#r59eG z;m-oEtPC~bN(GarzV{%rA0>~yPuzFm(0`YCP7RUC;$2{$j45PJ*C^ILRtN!u6(9bn zFVl^`wOa8g9mR!v9+_#rb=tWMn&d1+eIG^{xZ^!a|JI~gnb)81R#xugl`KvLUbGa8 zFZGO$>k+y!-uUFnU!j_Bk&+)8Ez5NGvXn5EM9zv@sEXK+A|o*yyk&#+gI zS=2Jag7^{?TYNDYRu2^C>IRr&dUzV=;)=ft)#S;^@oiRsh@9ooox|iCw)$idi_?`h znq|Rr*)7`3l+i_bsbVIXZxy^x+SDx#3WbZ0&z@XYOKJTAb9$jBdzm*|*YB|WWzi$aZDUOyTc`{?fvyA9xitIUmJ_b_6z-Cb3zk*IUn3On=?-T7i>ya2h}IcKd2~@O-Cq;`HX)3!}5TGxwRV_xBOz^(d%sCe?gkKEO}pjJS=K1JvN7;6(tHX>UCrPUecvK zmQQhB+`V~uG#6tKzh%W026wk*nDuVr^7sB3dh)_Y3{V`W+up!FOxE{1+j8^ATycOJi28$ix4 z7LI(rg9VKE$Fq|u{x9dtv=8ks(;aB|kAN6EHD6Z%zMy?O`r$xmld)0!5%rCaFe;J< zQYwV`0C>h{pceHJI-MJ6C;9L^SMzue!xXRiyo^bPSwa~0<)QJE<2xkbsLY; zXTHy6FL~HQZat0PSYNYJ?RwdkjDo}Dd}vkCph~d;gIfu^7GjBi(R~;qJ&_gU%#@_Q<1BHF5eD4fk z9?z}GOzuqU-L+Oeei=^d>p@5@!`P)6o3zIy3!q}=d50c6^LWFl!wU}30Bb*L9brpV zQeS)avy+-}F#N7kg`(^TEz*Zynh%JT?PDkqM^IvUpP)Q;;$=;r2RIoPec zazLHbr+WL1)+B-?)?r0z+2>?u0a}3BZ!weC5hL)DwkqjWOLfsyZ52*WY2f%Eq)6l* z%VX8QumEoJX+5zjmu$u6T^6y4Iq$B(%y4zyNJT0HMdiGbS}YhQ30qT+Cr7XBz)y70 z3G`l%APFx4XuIO>*AnUg`_u!ORCuljr8yz@0^bkK)rC;0R;56uLzk2vl9)#e6g1OP zAHtg*1koYbZIXwt(Z@Z5Iko@9=D7CYvWEOUDb$g?VLRT0Z84RsEciUU-Jw6ZYMtXO zVA)ye2FXN|8JxUj-Jiah2kJo?OO`PoaFC5;>K!Js zayIK_Z`RJ^p`b-YcB2Ik6#!7XC3zCrT`P@d^IzqYr`y(2i3qppL#*OC-wBUS zB&xT{DL2EC zBrKl3jZ1Q*ylQZJ$l|B8pH3mfuTGW94IgoBXziJc$BM2Eqy!}4PpE)Ss{XP>m z6YA&sZHWf-DDdjN;JzChG?HWoZ(ZltOigKW;-pA#CJa^eSdp}S&pYH)tz!aQBYC{- zWY_eVW@R1J@J@YYB81GL)ck#r3E^J}atKsmvddy2nbnU$28qg+80USd2xuDW= z__@OG6voTaAM}fo>$HhVsorGVe@G0^I9v`cWtHP@6=VI1E<1GFv+Ddf6a!SoHS7_w 
z^OAZH-n&4P$R=s#$z+Lc$x+@Zj}79(0kTTZ%-N>y+w`uS?B?_U{@&FK6qGPm7x0e| zBhmXahDNM`7AAo*Q%Bq1=gfe$tfG=N*=_rx z`F-vpqFL-YkVF=D2MeHOtIgRK&?LT;nUxqQ*ryXn?9|{Ad-M&JKfvjmXW;*U7hYHA ze;Z}mxV`^|HiQ-MUkpBf)c=xbx55_82ED%>kmsGHK$B3sDlna6fEUm95`dWfNip_y zz`VD_EovtGadn@u5zxA>sobK$mUokeEG(^B*`dnx4H>34HDpEpV~Gik`jjuKXBaUaj%Z$6UxP?nj=%4ORi_O~kUdy7s zv696qU+Ce1&f1AR^zCY_77?iiS>CEgABw)Db5X^t)287=K!@VB2K;*3q9h#O(@Oc~ zTttppzkI8&IuLr*9e6hu_Nxx0L|^QJ4>$}d%SL>2YLBqiYGlJ9k@)_XFX8qLZ1 zo2+=OSW$Rep*Ap#8gSLI5=L`oX+97xeeKqxR$1vDv@q&0*Izp(mv3;?oFp1{f+Pv- z@416LrJ0qy;bF9Ro)W7|>ujl`>%k=Y1DTCd22CjwUb(C|lY5DddN; zBdapHgtbof_N*=P^G`#uC%TFNHr#903(OsU`KNkax-ylhF{IjS>y3#5N6rpB|0zdX zeP+8`cTf1cDx^99)Q4;{1;5tFu^SPCuilOtL-ixQTQn9t68XG7R=Q^n@7zc*;P1Z7 z8k^dYGt`yMq)}urNO-qmvAl3!bE1?{mDEVz?oRapPnpwuK2t2!dx&0ci zoxWT=N4n;{9U|X|O_0FW2c18!a{JHU#;dGWXTJMJ6^c>>;_NR;2Hg6r7j*NO09;B% z8QXvA2JtHdXIbBIxQ0Dm=PXWO|E6D-xrAg<@Ne*HM7Jj`;^$exkd<6CjPLOZ9Y8yL z;b(4H*)Qr?T84QFfG#fKYdm%kGySHmu`;POc*xcoI!VSf;wg!rF2<~xovFP;&0msP zPZ$*C=BUhREd!svc4&VM-%iw4yFtRb(`k^eLZ(7J#6YjiVzlv1Brfu=BvY7zhBW<(TxS_q*=Dm0s+kU4cQAl zZ5}=2+V?V@HN_CuA_PnM@Lgc;yQ6eE6%>f>;K&8(C|K*fuLcwHXr zXrHtJgql_kY{X2>?UQt0C4?rvtkC2%9G5@)ieux9IUvYSDL!)-F;P#d+{|q8)zqIh z1ibb9lZpM#4P$Q48@FbOP1m6A-qx3qB_P9!{Bid6`-IjNq&hT#BcFwMAywLlZM##b zzuc=%%L5mUatWRW6GwlCDHCDwnLKVf*}lN=iV@zXmlDYZz#>Lt63G}8<0NXEF+#-F zE-OAK#kyr$YrSlvx=HFrL{|YQ;p|0xGfVwpHr-d2ZcmH*6hDqEYe;6Ab|&fj*-sRQ z?@O#4-M_yr0<^Klidg8@Y64|j`I5<7&y2gpv(ZI$(Rm)Rf_Fsf4fPc9;TgCGxoY_D9*%rdbj#;_#-Rn6e`#8X(54@6qGl;BM+5w-cSL=mn z`>An*qv?%Vja0Jql2620 z&vRHHL*?gG87WW?o_v^PwmJCJr6NBb$P-)p2QEASjCPmX zjN$y6`B;TH5m2kS{ZW2I#Ma10TA2P0)&rM9^y6HUf#DKc51vhAOGk@cpOucP3&vJz zhwkD6+UWN^sSBHK*HS6d>7IUYyJ(BvVK0@tI$hwFiD^~1aQ;kr3LbVrs zxSeZn$JtQ9saX}bxN6E%B(v!pMl&*%!(x>AmesxU{a zc^N;S7qbCog$(Ozd*rpezuEm$MgdxvST$e1!bPx7OHxg6hF&(4iprKZ$xa+@6Y;VB@|}wa9rC95i+@Y; zy5`E}x~583ONS{Y{2~jzy&@9z`Bj04$>_#w)-I~aGs8N!2%w)dL}Nm4?4hrLY^d+m zjr0Bj1{vG+W@M3_7W~VnYq-$~JlyOR#tQG(Mvd}0x#gloE7Z&w;mP41bv8R7@{uVx 
zqZP+|&a}A$KgI7G;o?#_#eurpSWVe2shkb&Jp8!;XerXvuxWJ}lS2`YWwUKV%rISa6UR5E0{ukxSX;5RM4^t_wuG^)*e0F&bD3hyodh(sk zDX~Z4#d9sbqbI^3PkW_Dkj3s}hs5OM*ST`vkWwA9@ z?NUb(tc_Cdek;&&h&%?;?=1LTh;(xGp67SV1`s8$FP5~154+)7!}yLJ!Ym&04W;In z(d_pTc+xC4XHOP=X_Z50DS8wOI7IxllrG# zMF-A&wVhy~Az9`pAWWeQO;s<+apXCQBupd1kq-kO=FdF=FP1iI#+#oy?8z8_2*%%gtl7mRM@T~%)G(gVWFHl3Sf zKys{f3F-PX92`$2AZrRZLmZQ=4w+vEwOj7f31)J70r@g*iihEC5h+{EI2Sl7->#YU zq{?#|dY_SKKwMO758Fq8hUkAJ%zr~(3!@?|O5v&BnOC@%1z^la9_|$zx9&2!Ko2=| z$4Z?A8Uj55C5*RqS;?!ak`;G;i2}ppai>(!;0pXkw;a4J$vXvaXzi214$O2))^a2Z zMry8#w7q~Rp}bnvtzH<9^ZBi2%o@K&iF9jCtZ2kICCm!y&^k@nq<+}Yxuu`#P7_2U zG+i@|clOU(kB{^|flOBMlx3!vMd8hnfGWhgCg)ds^>eRhth9L_J*M`Kv71x4lS8V_ z{M)LsEcCHBQjg^P11LX#jk?8!*Q!+>7UtU46mDa1!-VGE1oD<|8PzSO6TyR2wz1|{ z$!bAt{n)gV-bP21*OpfrDE&lu%KRt|k&8KxbL>T>pcFX3EFgzo@9+N3!70uki#g|m*Z3lRa2Ui9yUZ-!y z_ZUo-RufvH+NeOo#H{9haAl<8t8iaM`*{-706WK)~(Korq@Zg zE>U4lx^E4!f37|2HIcCHmh~TXKbzxL`l>jPURPltJ2wB+EUF>QRB#@TdLcPjouV~} zAe=jQ8F+VK`=w~8ij^Rl1;v0j6!2Zm+3!^S5eo=3IPZ)9*2c3@w>TJ5Cp`{t?U zZnTx?ynTr4W5!?+H+5G@sCaN(ZTOd6A)SGJwzps!k_XjB*&Ja`##gVro$$k zT2nCVFpKd1rcUkZ#w*e8X8aAGtP{Wm4n+6x{8glP5b*xxU=X^GCv={79QyR1c&H;5+ag|`tCs_~><2Ge(Vr;L@4g-hOTFJxhI5%c0c{BJVK z0w!-X+ZJMQsD_LoU7l#W2~m(?r>BG7!(UVccJf`)6$tduTBEg|NrPy&X(iwR%ldkKXg*n0j#19tZh=#)u5e(Rdh}3~7w< zZu$D69qac{@l|@e@8o4fHpgCithT?>d!)iV?-fbu4K3zuo#OT+=cq_|oM#M|X|M#V z6UyX&k`v&mYxll>gVEMQWb1fOJ3g?C%y}P_NeXZ@q~U=!KyMCLGi+3I(Z`8)+Pz7% zl;+B8KN4q=b#g=5!d9Kk(}uQ31t4(mt0J{N$JFUo21H17L-X_Y7K6`2{nD=k2a6AT^i_K;z=y>zp#%e~Eu8Bx1}Ajo(d^4I>{@6)<2 znsjV?$X|BqZZ=x_ZRRJf$Vl(nA<}v#W5Go?(PDkQ)~E?HC=3!#Zx2%{A87R<%6Coh z+LagX|wWghTw}PVH8w9Ji8W;czj$@3WA(ekilU(cWa^ zYA*C#C)N$^JloY`5S`uY-I02$vr?yV`YO;B=^=7@I3Ky{c=L~QcR?sw zvf3@qF_Z|`*^euQZ_z)-G`%~iu$$H#=(_AS$UaDEBLShG_~y8B<3fh1nSOI5C=}<6 zAI-5zo)zorJf8(&PfGT-MO;m%sOt8%zNiLD$^37S=p(81$;z3+BTYmiLsz40hR0mCSo?`5qtOG~60vpxzBl4@A;|wca1K6@}ra zOggsrT;E!Nd3HH#nod#jyS}chxcGoGY?+3sl^{$58pf)h{Uu5n}6sm!Z^n#%vR7-jYQ=RKaM5F#8T+o|CCQi2Euf5-0B 
zliu~drKfAs>F(Z9CONd)ZD;O@ZD29{hmRNhydnG+GaVPbeEnI@#~p4vwcSSMxGZdr zSPz?`+vyh&JWNTnw_amFW>RN1v7N3ngf}k;lh#vnk5ogaPFHXWCg#Q|?`1BVAmZrZ z=D6Fny1sre_GDiNqI3)`qg@~@0q>u0d?#ofT%QG@@O06ev@S^RF8k5TCX${+A+Ryo z8z_<3LA!dy@D9=mrpxnNg5DR;xaA{IVeX$rG{)>H2r|>)&?Ta}%RBMjez>YT`nlxG zw`l?Vj!QrHdVc`(>)?quywh~(X>d5x=-R9rDMD%6u(Q@z_hS)bceswlPe-Kd4m^y) zAl=Zd+Y1jd=B+g4`UmBowfRE%@0L#yYL`uemy0v0P{z$D*a?T8_w1Oqd(m{3xfTQ^ ztv#}vT2!V1>EkpZ`uTR=#2D7~^8LL47iw+}!(D0TV*L|S_y-?d^Vp-1;N`cQ6-mzA z2$|Dx?AdXl#A#S~C1whZiP8-F5p6RTufy5sq(K|ngcV(%#Qn{KQ@AaSbBLoUhuJY^ z+K$d=wpmpuKX^rnX_|GyVFSoNg4N6b&pgddO~%)o4rH%Vp5<}ePUlaN?*5=`?whkz zE+8HdGbxy~6}KZgMCWea>cY=G#4X@Vs0Gn{KdO$An^N+swjO(mvNVre>^`6av_u%D zMUY4l>T4drU)nJE9Dzk;R!B}hnA>a#@#NZ^Cqh!p$=b)>D!#2Ny$~Q33O!j!;^$NV z1c-%!@$EnKm5US9*K*yseNR1Jk64;ui_XB~prdHKt#>#}v83rXmN}W=)#T#g?xEy<=4DIDQDk=dN zUTxUi%gk@`6I;L!kFBhO_OUjppb)r`~5-tq;0!?h)hygg=23K z_5{`RP{8d$>~qybH0iqI2CBDr9J03#A!nenQFSif;_fXGL{;8y=0mb|hw$2-`W5ZT zrc~ODh_096H=RCc*~R-UlAT$%`$Sv=^sA?yP1+Bcsy~fDiR_%^DhGjj=sdbRv}@+A z_mUxX57u?QM33TvX0>z8rnaCeBc)Rj4Q9ZqR8}!*)9V<8TR2FjLN`~&YFW3Fo(}C1 zyj19UCw_-dO(JIg{*tcv3Fnk|zfh%LMy!Y>; z;(kjx;DJDX2G|y+cfpi?=hn10yskaM ztwheQ6SFUd#W0&1D0-k=e~@U86bRjInn%E&+Dv3;O`g_>1&%tT;0f{Uig2i+gVL?g zI-DD22u^NJ|C`Kb52+pc(u}4j5#=T3X)TKTwUvt^*jmQD_oGgs&k2kl_f_9T8nV4P zMe%X;JpNPD26S+WAQaWnw7bg4T*#Y&+rC!F)Lm(}2V4V3yZF6$C#Rln{JZfW|HB#& z+fmF6JY?+y_Gr`#e(c59D@&_MRJ(u5U0yxj8S;rgc=33s$)x!vswvgks-}?;r>;=U zRO*L7{sKsetZ6xs2@(iCGVnvt@ z_n{*m4Alt}@1lDKFw>r!+1nd{0)eQxSFXQOYn;39ZX)`wqWdX*Q=wTvZL!2;Nl}gu z`3}u#7}5-;#4%x1S2|z|GBwjIo9}iV(F$s?W3?>|4Ivx#rGknV-Y}H7&hT6R0yKU0 z>m>*} zy-SA&$#%^!F;B5cmoq~+((ceINbZGMz)m1F?HtRM>;Tr=h2&6yqoF5ekex4v*vs_2 zX-C^a3PF&<3lma7ttAn-?sp&>;+*LU1O%=gR=6mH>- z@|V#q9iQM)eEb|41(Qdy6R6*X?%pzmcmavJX5?%Y!f ztnD_n-tW2P@OVTOajv?Q9YO;Ebn}IcQY6kR_g^VwU}C4GB+UXh&z!q0zPJJ5@JvhkE{DwswHHXnsi65bUb zbv*rUG)2IBOJrDwInhvWZ2j{IicnMkxP2k+T_VxgLawgA7vZ_WQg&tp>(O+@>$yg= zW)fxw4HY4F{bBxZrwNh=3fB1tZ;W}WK6k*{=TByf?xy?k8F)SB%0O?mG;|WI9_ITZ 
z8M^1!-e&u~4(?soJl`WE!@BfNKCn{jm3^1JFhM-u7Q1?bkd}PzDK>w5dcrQC(Pf-z zBUy{&%>a8fVI$uPRh*$w>q68yiecKdq-i5_`%n_Af1N(ZYN$Y+e=qarG?yx(FcJYM z?1Xj--i)cvdS3JXx!73hPmT9?bVHEypGbsqP(t^Wqd!Doll&%`|K_?3ZTuF->tNU3 zR(jqSv7`rr+kM*_#sc3b$7rLsY6V)rTrFd0hiW;qo90X0(vcIK{VN9z-Eus%?RWVc zf_o^_jz-`e;2r*6hC{R2k)2Z6Anry)(_sP2h!u#wPOv%soYb4!JJLx~t(%pD6|8MM zhNYY<53wOl?f37zV`Kc%DLtH5rj2tE+hWS8N`p;YP5N0Buhuvx9w>IYT43*H90y)1 z|Kw=>a7(s#FeT&cSz_sYY23Q*y z!Dd*l;G=_bXO*k4mVM$m|9XM4qzQ`P*k2CPAW}31I#Y8YnA=N{gAk_H7pax9sUlNV zw$rG7U-7N@MbTO`Yc1SZN?JEt*^bon{M-%aZ(n9lSxfNC-~xt0S3hu@XXVbdVEp)_ zPv)%bSS?~92~A(1fciuYK6an2ZFzcD<|}6SXurnI(>w%~0uw!Vbw?)dF2=gE6UNc< zzF+Q^>zJn@Elg>}^yiS*my}!99v|FnE~P0L7Em9NNIlor&=E*+IX5+0;2^NNUMm?X zB2f3hpy0GF9R)&lFgh6cez=R}xddPw|K5A?0kWWzM=pxbF12J+Xv(rVH@(lKH_BGm zz}01r+ebAmjhku;^IoBG8Z9$w=bCR$XHJVA1)ud2$DeZEF%HD8h_o}u!z1PPMwlw) zySGH=S_W>XexqWjzQ@nQb@Em^)_N}Nv^X$ao+G%{sIBRf`hz%kqg7s;<_>TIr3>=p zoBfeL7)MFu_|5xaaFcb`P|npF!mRd9s_MJv6(4;Wce8sM$@j)B_G|aouIzZLrlO+? zPVQNu@%X0qUeuy`b>UUuCI&96g+-{Ztjf;()6AQ6E_%DJ#EtiKR+`#$g4J=haIl~a z2z(s^RnB&!Yf5iamx{&V_`&kBg0(lns{t3U@>O@amfi^Zc*d8g*$#X%@wQqBYi&JRiuzFR?RR=% zF2BLWg^D@L9!@C4GpD=k_rDAb=bLuJ6&W+R_%);CIuMMdAm`9aNb#W&GVy2RB#v(i zqbWy0(b5Tgu8Qw14)_ZNIJ9Xo9FY@vl?rSfhSmu}%L7loVs*6pnMKeZE7dxfg{nj7 za=M4Gyw`H^_UcAr$KAm)N$1p-u;bVUAjjTTCdKolhi`UqIUSD<%+FJnyPUE;<%;x+ z)zlX$iaM;(fS{w&oz)`>IZsS#XTqWf#x`TM1o!e(&+U?Z_FI`epK$i~QDQvKoR;<% z?24tjvYQJEU6oFUJ(#?$tQe>~sN~S*is|HLp1Tf*b9?SzRF`8(VN*lVu*CrWH%2mv z9P-u^I|I|K%yh0BW$ghKy@FOUBIAy~{0ZW|6qLel_HMur5-AQ@e`#^UylxDGtQ@#!EI`I8mj7L>KEv3eb( znI#xAdngnNtFaBv-qo$Lq;|}JB>;)gSlcB=Z|+UftJE4Y&+dA)-X`d?OoWzyT~Xv z0|i77$i=p4=+go$X)X?gErB5uVp};(Fuj6yYs@s78+QUCsD~H|?%vE-GtQ8W{-%oi zmjlHA+T4fvLpvG2fP?7@2IR?BuRH4Kh_+|Fk-0Yk3pfblTM$Ll>1C66G?KvenU7g^ z6CWVYi;uu|zXtT0GiA4xGe?SkI^CUE*u5=0c9t&)N@Ud zSN`%BT-*Zpvc9n*#AwE=t>`q?%ghcJ<|CV_AW%vU0c_azYSG=*wOf?6&Z8NI*!{Yz zSFR=HiZ?o+@+m1+G5xKMeYoVq$Y?(f^raZ%&ZY32WGO76J(vVbd_S@M#{|VQzTjWD zu;5~Lu>3b%05_8n3Y~-Tr(Jz7`7Ax`D?d_GSNMz6zm}Hq7`Lcg2lH{rqCZoLe{m!D 
z%UL-MUYmpO@9PVAAkcpktFBpC9W+lQAY6pBo41^=mH8mPN8GoA6mBxF`b&knn2C7w zZLMtG^Bbia#kzsq9MA{W0H-r)nv*^FdodaAxJ4;i9YxBN4ao0C&7Yac>o)~2U+Izm zrh@I+z}M9Olix4qkQDI^Ff9Z48y<2yt(595SwxG{3R|B%aXhZv27Hi#5SR|JhE|#C zsBvGl5fV4h-(VeW@zYsT;O}?zNM+Q)F#A((1EsCQlJFQl0|Lio&wJxlhP>9LM%eAp ztKMW67FFHpt-RU^TnhVaM+H|8&X*Rj7<9I`5>0M52m9-=Td-O2$z4eWuDGkPO4d5D|pVj zj#~YDR&fd6ROHxWJ%@Fp4RLqUf6+DE@ID&084hEz{DaJE()ZC=s^#7zHGl%2eo=so z;@yyJybG7%$L#aa*Q9~J_gefmF{P-d+wDAmmS2J(=Ub@}{HYSnFALnqjI(sgWr7C; z1a<%^YkB42%Lr>r^@8~0O05(ebX0}>Kg7~I`zuN!Xtl#FA-NZPwVcvE07aDXebPig z0b5fIBM-6}eyt+I0$)B&SitjO%v_-%%JL;3Bh4Q40T_*axk~9~DigZ}`SUPgvHX9$ z`QIJ~ajENUXv5duME@{<|23!w@?&81a*Ng2^{x2{7d#?E<;r@G~|Buc2uh98Fl%4+yo&Rfd z{ws9;k8}0^FLZn~$h_(U|InbA9$k2D{DBkv+p_ur2kH1xcGjb zGGNQ_#n?#*TwIqKukl)X&vMf|SmcwQJ)Q((&t`pzL~#HtCPYEkPdQE+56l)#z^%Ka z)qti}dUh}os8M_hn_**M{@-gMBRgElifndYPSsq03ba(I9k0Ae3pO;GiIJnkZjlF-mw~@VBb#T z>i0TJnvBXTxP9Qfyro0r(c#8tC4TGSL#F7JV~cC1SsE5NVW;m`kTCFtgR8Sw?uLTW4A9b$@am0QuS#xlSq&HE zXg=rphh6-~C9b9=ej?`&yA;VRo}qZ%>2&0AZuc+pJ;s+$;pYP0MLP?4CyFa9d(*|l z!e~VV<&$}fdv{}A&yy!`nMjX

t|!K3C!RoY1~n)~x;iNj;w%k221(FMB@@dGSGf z?26!d7gNmT3l@5I(}$lGyA3BIa#)2VNvsbSOTxU*BqC`=BtpsAEkkIRx)X1^OxVzY+%++5j zb@0iG6cvB2s2& z`iN%{m{9lEve`Q_|6AS4Xu2-wvXL+uSK5^R5PY!>d5s^k&#mm38Gn87s!uI)1Py>} zl;+%|wikC>`22F$<=C7lB=zVewZjYkt=jpgoaQtz?D@&;mq6Oi`X#^Lmev1!0K#=Z zkO8;u*MaEKnNb2Cs~&DOpmy)=uMM$YFiEAQ`YBdOWIl|$S>JVvY}#!;9O`c1*u!JA z()UGui(RYqemB58?p`}{t=gCd=XLTp9n1V8t2$<`v&lL~M^0vh$+4$%(8Qe!>#Bo4 z5fL^)SycH`b`_eVn0jxk5i`BLQGp0Z81Yyyrf>n6pbuzT=ak~Ux8x6LIxdhk=wh$Y z)ABxAj2;04G~5Qn3*(*#su3fBNE4C7o}a-FChQDN&Ww-Jz~@H`_rq!h;koJF_r`1N z(%F{Btstp-&h&A`F#t1)`SEW*4Cp>?-y7026!HEF&y)&n1{g4XvKL5;5L2|pFWYIO zV(xsEoF9{t2F!S(uL+?Z=x|PCkAv6kl(7~MUhk{{6y}-b12^?A)|JS*G z7j~sA)nvFxyCZYC8s@Y%nBT@5JIbW`F1wdZQ;(~qI$ABS>mYtqI-139ja!tt`ep0# z@Ezmh(SsN&V=9$Nm1l+t0*+=u=y3YI#IjV5s?8NXq-XS`%d_gC*6P|(4qmtInbOsq zpI;PB@-j%v@?E}e2ytzod&Y~>D0M?jTFq^Zg2p_+SbknT*)thrD*Mnr5XZ# zvnYU*=&|EZVNlE8zuL&Za}zw>vYhH_AgEyLPD+DvA9!m3U5Jvy%Tc-|@gporvWO!j-?|tx0(DGhS1y5HoKzSTz450nX1*9ZV)bn?+v7%J2@DCcrguKukd0=UDan0 z&zQazlAf9H&XoMu48$Et6P}s^IdLGDX_j{FH6aw#aT>&DBzF1)^=j1>>BSdzkB0?{9@agI zGS!zmWh6@jZPh(7ZVB41NIzrKbDeUgFX2^I3|gEvi4;)Lnh@H#O1%0x5y-=y<= zG6^iT~yoguI)W$JmEZ9BE2{C7pHo0zRg>B&|@ zJKR!Iua~_k=FZxFWbn!U>g{9ndVH*so(o&quK4TJb8Vp6E3iV>QQ*W6bp2SI=3Ejc zln#RVBnh5z?}g|}Z&Xj%8jV+4B*y>Y^7&UP=T(VY=L)W5_~N-7w}+Vs>2GPI$J!be zI{PyMRLKC`ARnLoA*UTF@=E5xZ~{K-YT;nUbvRJXSy+q(Nd5W9Hh$lvNHQJ=ETlvd!0$ ztz9c9YzCySzAUMO9C2R0S1|x-YP8n_^dc#vuY{PA{Qz`|5s!;sw&C0o>y2w9SS&5N;9vSf*hjIw2!k$qpnnD_qb{I0ju@9OXOx?b1iA7;k) zex7^zd_MR6c&_!yunJygipRm2Dn!r%8L8MDZl8P`=Oy=Zk)lOvZwTZRYOU%9FdTf> z@XdtsKdEK$m?&;_!2uTVJ^z{{Z4wti;l-sW$W3>kWF89cE($+Vd=QuIZ{M0iFGzqGG#}HY5pCUyj|WML z*5*5M3|LI|k3clom=_7Y+%KTM@`L0L#o|Ew-YzVX5>|43D;M*0z1 z?dSmZt~8ZG#$JeZv0n6 zm6g6~*Q$nz+G%QL08ZQmPPtC?R1nz5>a?s13777fOMOH)9dWMiszChif)-IM%m&k* zs`_v|8bz1G$~1(i<;;d*`{kVI{oLY3E)^gA%h-qWOZ+t3_qiIGPS`nk^gh%l8G&+` z`Mef~l7fFBm8=nCo+r^M9$6;146Or}P9vw+Nw3&opERjhgSPpG1KW(MJ$7Of@bdV! 
z#z~{Hr+}U5&zLAyXd0)v+BKEWH3<;7_4`iNk^Z#rtEdVFDXW-O!)OGG9={Rn;iB1%5KG-dKb;RfR6WAvfZ4)dF z6R#@t5>woRHs^3&pHK9mY*3-sR>$*D6?*NV2!Dp7%eUp$2N(z9T>CQkJVKWdoXR%T zL2UlZtBXz`TVqHpBvixJWU;ie27<=e7W-f}cyz@cQn|8w@b|4d#G7YKeJ>Qclb1(} zr1?`@j*pbtW(nZ!gk+21nd;_fU?X#orqf~s`{wd+`Hn5sd%#qP;>7bjRmf`8HIPbB zVbW72Z^~V#nU+K3Mw1`ZsEAZvik%4d(Bv(;TAu1xXy0Fz%B$2bBA$nkmCrINp_eo7 zeOOPS3;B^9t0q2boz?2TaY5kBXnqKvQ%jL>v$1Wrffq*6#x#Vu7buG1b`>%qC#?tZ z!9-eG+x$9KIEVE_8d zXsIuxSbLk`eO2po&1^#v`<^^HFkMH|vo(1?sl!UGHO$^?G-2YDQngTMdwt?set(|Z z*5LR3NOLLab(T1><)Ze*=xR_N5Q1sNEihivg)dsUf2X5LS2yJtxBTHeSi%ZH;mx+D zFVkS3#mfLj#s}Q0P0dg?tZq*h_scSOmvV?ji`yBL424P0kx(LG0Li8+$Kq#!J`hzO zxMKm~N2ghEqRQ*JdXoSopanc+E9BzLiMO* zU2O<6>nXvOHinNtaC!BO&zc+I5>sb@3=x=9lGo~7ItdZiy_ESEk|JX9WqTX*sy93^ z3;ZJqJ{#_VM>5$x(p&Le#6dQXgAea*rG-pX{O?IRm^GCrjuHzlWu$jiq-xlDlj7?E?rAnR+^7^f+`8PeTEbc!#mhYAc%Vd~&0$kD`8NMOOWxUZeLi>+ zn{SsU{-kFqN`dGNq|wIqY1!&n7Fv7Q?|NY#OEUJBe8KN9d@xbv$bz)ae6^^AOFsE& z^fo_Fi85is1EFWq+jbcQgohKHo)h`cfFK5_2$v=|Rd)DGj^k=>(`K~uCxGsd>nP+r z08EylJxyIb&JrgK%$URB`F1V_ne^soaQ;TBF*m}?JdI3QZ=($3FdXX+eeNT~N|Z>| z(ns_FkZuNS{W{Qw-!b5~4oGO^Uv1#lGvj^qY=?vaTt+pM0luL8!t+P9f-P45@hQ{v z-btwIdqA>Kf5Bh(P%2F=&S1vdS)$3#hp$6-8JN&fo>*p&CA;EY*4*liRfS@-v+tU|0q)O=F%;%4YluQ{lVJLmWuaR=Tl7xP~QRV%-!|N~}4{ZpRD{nJ-R&ihM_&&J#7G2)dWe z{QW*0^j;Yl@TER80?lI}2AxYwUR|U=?YXp@(`)988;B?cyZ*YyeRsD0?tusH8G{V~Up!u% zr~sYQY-MI(bPeE{U%@2w@qncw5AKN5 zzPkjj)cf2b+c4inasrITbKtz%RV9v5tn->3_kgHu1R5w8$yA%SumxvXre9!rjC%Vp z5I6Fq^o`H=4-~yUl;AcP^aQD@&78%CfpKJs9tTCoXJ?;ic3_q9lIIimulX@%1+ZgN z0!vzqk8>@13Ui0TdFpRzKg8{wqfFg7gZwdYMzv-LOo{KQIbP6S4OOm@MI=ilcG{#A z0BpSvN1%prprM*9M9IKIJ(EV@sCG9H4ZvA>oNS_PZ<^&Q&v^KaJQ5FYM9$`K#=`)u zU{Qw_S1YgF+C;%igaf&3FYd;aGc7M>V$}0n0&0n4pg=~sXQU4S8aNt-LR%l~QDTu* zPwwLL?#C_r?-B1h+}1lNoCC_&0N_amc1(#GmNxQAwnd~qH!z5@be^#lkznPcTQ)6m zJAF3(qYagwYj$qeX^#aOs8Mw9g}P07abQq>oTwU!ukcSR=eAIR;QAZ|fderfYX-;c zPRGh00~7SL`U?;=4}v`h5~fWNd}b3}`9(28vWN;nLj=obF{lL^ z()0j+P+}D3qHSQmshCYk;E~4aaGbf5s&0!S={Q;?njy3;)e@tV=-md-9}TQ 
zYLGb(5xt3$3=krAciy7Cq>-pYl`#2s^TQM0(q)&DVg{gAFkuQbSQ{N2MNDE1W7Tjp=N zK8WI~60I9aRq(=8sSPeocBds>!?}#UJ63DaHs`N(PHP!dP@x;^2QY}OpVrUTv|0+i>y8%Gj<*XMxqtvIiLIj%ibBFO}EH?C`aPeKGbSS5s z>~0Q8Q6sO}x`DFsTzYJcqGh1n8SKN_-xwqh+m;d|M-8!#4SrPg??y6i#q6T$P+DuY zjJ1p8xXGkPTM1zKCG2&Shu54K9#p8f82(q^*^JVW$3_nGPF%-gc;I z4>9|YM*k$b)WTWqraT#jlu!O4lG}go)z>#*DpsQ^G`r?xeeBPg_)AIVX9Y}-4j+p$-anr9H>3hG0Aw~Jb zrSWm`n>7O84PXB7EZ~&ZQ_u>CT@#Tf|HzrYdwrrN{pS_?4%391|2szX-LL=iAo5Nq zfFnIV+sA%@;aglJ)Bx;xy0OX}zZJf1;X5(qfOSb5^ctJZge{-)ZP;3OfHmxkgtVD& zfBKU*yy-#G1T!Y{00D*|iIk{9`8HVg4j zp6(y>Pr3rxe@#XuZv4ucbi|smJoSNJZDJ|1iBr|v7Jp^US>({UrVozpSDX0S7{SyO zs&u(uSz`m%gszTjyKN~D|8bqK3->>l{Qq47iFEn@5|`YNJ12c&`alLJ&o=l{R#N*b JSK*4^zXAJwPQ3sC literal 0 HcmV?d00001 diff --git a/multidimensional-pod-autoscaler/kep-imgs/mpa-design.png b/multidimensional-pod-autoscaler/kep-imgs/mpa-design.png new file mode 100644 index 0000000000000000000000000000000000000000..2090f97b5dd331c11ba05eb1c80eb4b7eb876a6e GIT binary patch literal 463736 zcmeFZc{tR2A3q#Iap)97HAyFB8CywYS0ZBUW8e2>hRVJSsiZ~-A^Vnn8O9*XASKy% zV+=#K!5FfQ7|S!~bf5b^-M{->&vQN3^WSqWm-&9o^_}m0-`nf;e!oBOua~;o_t;p@ zvm85ij7>xRuKuxOr{s?vW9m4;M0+PD^7R$kfzC_+-koFRy%&~fe>`(E(QpETkAZ0G z6UP|o&KzU-{S?~IF}m}|82?xwJ9eKA`0uqoo#4ODp+9yk!u8nkf1P7SJN~}B|NG0o zj|^FK|G8oo{hw!_lFwrJbIsK8`{z79$1l(hCm*PrdL26^(enE*ouGpm{@5|)V;XmF z8~W3&OfolKQLWoq=P$Ub9^G|@3-IghCy$?5N~lX-LS!Xzi*w(N6&4J{pe_jpwjHC- zd90r3t@lKIoRRVN?XORsToP%x(jil@bCh-i{&uKfd1*B{qo26H{PAPAA;j1?w4lLR zM+c%4_xe}mV|2&=dHF{N|M6K>!54Yvy&cD!2hM{ zFtZb54#Zk#a6UGt%KOY8r-W6CUQM#d+r;Gyk^f)aqB51fs&?wQ=zsL{N-xY5 zJ?R-U06PVvUVKef{mj7)aF)Q3)Avc7=n>=llh{jQO3wQlB*A0uyJKV&SEptPj51k^UCjx z6Lul2WXm)+z65?TYRFP*Wz!|iZd%WeP|;q8)QGc`(u|v0lsWld+N1q>wujy~9_+C% zl`>^p!xpy6^5DkQYG9Sr-?13&dvZ7=9;IZT{r9zOI9WvUVSDeZEIHCqmAVei58#hy z1lugGN`LJlCPrHyi99(*55N4tF|IUEDPP*y&&Z%V z`@M?5C)TYSYdc#S-1p3-FZSRsfuDtlA=&3oyfUIoT?yXLZipJ~v2XZ+^NVs}?U{j! 
zxU~xFZOi7xU;hxG@buaG_0%fWiSIohvQN+*x1jS_AZ_Tr_|vfeaj+k&es?;4`JFu; zn}a{YQu&DG+E_)uXA3v^L z=?OgijJ~p4tD*6TAXg_}a%Y-+W!KnB&hu(VRe+D+z5g@1+%CNinw5sB2up{UYj&|U zgk@S;XH7GO+`07IwdcoWeQsFyUblAfFtzgwI+a%`>*}Xy zn^yXw?pbNdrUx|SgV*t23u9VxG#))>-$yXcV zp8Rd%y!d~ildomKZ{e%+360tQRqNjo7u%P&m0+?b!!l0J5x#>JguQq^MX&P#_&lZy z8a)LM!5`T6pDEw5`Q?Q7j8jvZ{pXhGkNp33n?FtQFQZLz+O&E}SDSMGVPO}*_j)O) zrSq(|E-pknph0y_pTSu+YN8C~CpQn_goC4?vy$kD&fW)|Ld!aIzJ=T>P|Ky3hSeG#@3w0JraX9rKe%y>bf6Lq7z9q;yM>Cfc^xL`-TxQ~m}w zr;af6%mN=Gh|>?WGLD6_&;rj;Sa&;Wid}c-iS##vkU%dTxs19n>*_VWvfuhW>6g~a zHH$>^fP9({qhGeX{x>51AC8-Hj3dL`%UWdnlM{)!4>WY>L)H;LmXeYA9wTS znQo@=`MiV;kgk=LGKCVrnf`6x$oVMUpy{JX*iS-lMLY((jT_m<$JQ9WNuQG}NHl$a zI=k5~#t3)&4?zWNu!-c)9s{IHQ5%=tEtC@;c}^F|PP8Cj@caB)SldDzJ}CBF=#t4N zP>*B&N2L4_=X?5(PxirgYx@6?G%V$$Me*VI<4TjjOcpK!`HLm}PKWQ(=A9vR2X^fV zGTUaN7Yi7TA8>`3I@aEr_Ysn~7sN}(1}1qtJDB-!6)}+3Xx$wQm-O^^z^@oTSiySU zS-R-B`pyRBr5+)FTvp`TRSl;}b)sw3gq51WO5T*w;_kW7m)ri&ow4SsR5}Yi^}`VwUYI`Suy`b<(z2R5W4SPumI<$m8IM zjLnzun8rT)U~pV5Te-l4W^gCYRT1c)9Q5DP(;%7Jn)(N^(-c8IIp#o!1O(mcftH}O zwfwCN$>royY0>dJI!zf(h2IC7uIvmtTG_I7F6i;vH!o5R+4iP9xG`Z6pUf`=L==x^ z_SQ3rP+ribQ*_5=g&%mt{gPGyH2@0-1C{5}}+0H9N<0nLFdCOg4{b0YkwfvqR-$5==cq!iJ)qJWQrNjJMe*IS( z-knhukoRI}zFv*5s626((QXQcqbE;vCwXbTNu&Q;`ZQvcg&VIKQQIE<__|^6bYmL3 zXW1jq4DH^Tff<;g{@msSzpD5dSRUqyhD@Y&oB?62ZBvHl0&DXEiLLa!J4XggKkpzX zKyWfkt0&^hZ;zF08MmtL4{V|p{b|B_7FoxR)*Q6r6~Ubf0gvJ)Rz|W*9m5uhu@h5c z0veyko%Oq3i~K~ExkfGJN=NzwWgWeHULtkQ!*1+T6=A7mLc-oR);m3Fq=FL~&(G4+ zVw^20M>?bdyiGRQj!W=jP||rJaeP_)*VJzlt3AOo?Zcr{HmrYBp@lVZoPZ1A{nFp5 z{7ZD@X}{%MHf7UmgV?r&=8Tzq^?(u_uh{!6FoJMjZyTxAS0tr0xAoO10qoaF(&bE7 zFX+W9KOV?v41*w}C{caTDVD%8UVpayh9`9nmme#)GkO<yxu`2{*=e*WAZrXhYVH96SI=*p!jB6Kc9P^fwF(p3M<++V%^7=5m4OGx&5JfoD; z^JNR9|J`-u=h?_Wv^$Z-d`2FcbWP#1|f>|Ci}bpPJkpk{&V8FgE{ zlO!iFZJ89RQX(2STf4;+5+q}(VCp#0(nptbA}ob<3@rD9|8GvoEGqs4E&s-ZOZ_qQ zTK=o;e&LFL+}f~S3xCKVa;8d~SZi#WKm!4kx&7dFm4U+HaInx(e8 zDGTReZ6>UzRUJrGxQXNe(xVO{4Af53pYYaI=ExvMf&E9X;O_Sg;avyIq#SF7d|Kz>7}mk%Jup-7NImb 
z1wbnc>3d%F4^UJcjIk~D33(xjivwW46!=#9Yu#XKSx3UN$0z-<^Ie4RWaVmS>bDob z??P)qEFRMW$IsB`n%FSH%va6?0;D(RFWf-*U6f+`cLMx#X0;ejeJAcsA@QzWBa5QPTUib^s3obF!~* z*p^{@C0E+NxD#H~TBdNJ%Y{hkeJFCeh%N}VN7X z5gWSXpYdVr^q6s~Tl1F-_kD+2(*a4(Sy<%uXFQSn9`dg|=?$y@6b$lk^Gxsc++b$A&A*Ey zkKYSD{7fkg>wZkQNNVe@g*AD3nXfsAUDSI0hQ4 z)W)$dRw^H4L_-e>yP`ubF2QSnO+RnVg;k1tb9J8G(K9WHPq*(G_jwRm8Q19k$^L8p zTl;zVY5_TTU`eY)I%Jk9gxxJ%RkNf_fwM%IFzJFV*Bh^!@n};;nA7U4`m57m|MCr| zCLj6s2aUe&;tmQw>H9xPiSkKWHlNgz1Nsp0?D_*52~tXqK8GqT6|v57#$dTiTfWSq$?hZtgzZI_c+~SjIY>8MO(0P5)9L&s|6zyTo7#?;~G` zfVPo;e0o`?IPOUOgtuM`UZ8!MMgdk0?-@{_zh3Px{fMm6W6j?zj9%^ikoh~kbfN$A z%aZ4CiD~all-JX8($ShWU!@bA8OAQ^UjPW!mO?lwk-UDmNU0&;-i=TQPATmX(uEz6 zGVmmJzFlIYAf3z3z>B`_0ho^h+nl*>13tgUN-K!tpN z^y$!nSw|K*tG%nZbtv%)Rh5;rX@I!Lc}f_1 zeigYmz|^l`o6SJ6F5V29Y@1##W8OLCxFhBoVWG@C(yy4+q?o6ulyj;oqWXr`RQ0rC z*8K9L3=bgHPeUT}@P)O$b|eXN;e-OOES9v}QfKwt&&65LtK5(3YZupDsXtp~N;1)f zoAh6-?tijTYme*QM!H~jgpPR4YSZ1f(1V-@@`H#Kukb(J=ZC&uLFgI?K+VvXfY;Xd zTN)@WJ)`&a^y{*ew`5-=u8mYLKC$GDAK}6p@fttqqAs0>e-JY$TcGznxRv;M#`$c! 
zJJj zfuYQ_vvxJNX3eVA)gB+;YKe#yPf91lT_$={)*i|07FJ zi58d9r?`dotEWIvdTJda9sp?ybt^~Bv9T;UWx^hP3qkMz+-1#c?XtRiPx&6osOhyS zCq8s@o>$Wv&Q;ypD+mmEe>ru;O`?B zdSYG0!2M~Z8Xx#H&H4s3Tw|rv)Pv5J)-Q@)Q;^8oR zyb=i`qOa6J1fKF*q+W3bYs4kO=4H^QT;Lod{ddcpZFu`PGd;M*n9B!btbLd%bPDWv z<}KM|%k0zlkVRyrB3e;96oP!QZP0ZW4l&vdFpH1o#TrGHx+A{?zgUf@C^zr&{*=P#8qhUxacla< zf07lTZhA4aC9*V7Dh>I#267L2lpe%OD~xYnuvAnpHH z3W*&pAHg5yXqtAEL{&7m8Q6EN`JxU*iYgTIZLu!ahGW&>D$jZTOf!*`1>KjE6J=Ow zhwbvGW7R{l{P?Mrl<~o+irF#3N9y_Ytc&LL+QswZ;Cjx)SH@$zl58ZTsX6aa(cAJD z+iR>Kb3?r=JtF!=5k7$rC)%X^WB^d43D}lMA6Y-~4Pwcoeo!$*YRVljY$p}_$XZbk zyrk;wb^ij;|6wHBYD@nTPs~J@Vui;K=u=B=NgmHqNsB_AY?u04F@i>Lp~ZKqG`zk% zA8ZSBVi39G%I$a&01}L?Em|JW%G4daP%uHRn%B{P^;Q|+c1@s<=@#ytPUIx#QCS69 zPF2r)x~~D^rxVU*p}~9~hwK{XK9H&tNrcj;z>o*2olJLdEz-bW>}(J2XX2l=$N{GG z(Wxt4>u^9#3~o9a)xmVKeBy=hwm)2(0eNfQ)3FnJsXqz8EC@}FyaIe%5zNO=S{FtL zvB*W+J&iCl3_#I&-2E-Jn9eTXC}8n5WOvu**Nu_}F7ZcG(TCsj{$E=Q=w zdciLjfUCPT5DDm47su~!7Khy&dD7-lvRD;RteDD+l~ERWoRisuW53%6F-uAh{kh&e zM$gA6Wt4c>U218jqCn7G*9-Rpmj`@YNhvz?*71PsD?d!Ulf(wn@GS!+S|I>iu7?iW z?#^w@0E%CO*o}F9FS!2tsWxliI=4cKPeO;-dTDM2X_W?#c1dtU$v35;J0K6e3ngcL zG53-79sD;#l`CsnN8Mu_k&$E6(ApWXHV9&f6Jk;D!iw3o@}RAH0>OfBT4l~BW3dX$m0pBE)C?McQg*ntlIGoQ5C^$+~A~G9o~OfKf|V4Fc|3O9~5p$ zteLePyQzff<)>y$Ukp#SEcu?-+GK!2&rYjkA1Ub;8ts5^LHM3oG0(0<-S7iWCpTiD z3F_ch>ZnCWTWr#78&9B{3c$Zcm>j>d6nMnQqic`_IZ& zZ`)`GN%^-jwRv9_MXKrO?kCOIgtK{k(JU!$#cqNaJ9qRmRAaYFLGTjwHuZGVH%CQk zK;gT~Z+dFrH^nhahU(0}^ybT2>jgt>pMDzGsDU*bUdZ`=Dx9n{sV?=xX-&$=fk7}b zKeK3tH`2&sxN7#$KYY6ifi6718#XfqkQ(b|5>5mMFjNtShNq7!)M{YdCFcR~z-CaB z1^SV;0`P5e(#pQWQx3eg0TZ)BJyDj&W+%ZkR<>q-<*D?mvz`LsEQ3_h@1PQIEe&F>HV}|X;D{+G z7zs0n#w9D5jol2F15l(x#Rm(j2azBnPuuy0dKW&$VG}}muE?-IVf8888ii~vKZ*Z@ zLv`#qV@Q=WH&V?j$Ge*I zU8n6iLc&!HQZw5=GmG#PUV+!F&30PsWEAZkysn2&%)_e4xzht5%Ou!7gXVe;0fhgA|_&(mxxJ| zg)Pyy5tE23C8+Oy5>YZ;AD|e0_M&`5d0ohz_<2 zXXzxf?pJY2otRIOA>+8er)x{;mXDybxa6%XRZG~8Csy6|4`rQmu4)Xs>shohh^PXj zI&ecbw}Yzp^H_kvlq4xU2<2bG;DSJ>mu>1xeGGuLlcWI=pmCmHZnG$`I!42m;kwvK 
z$b*fGhRI~?^@f+ADmo!w5kjHKoq_v%Fk#m4!~l7X^3g?J;L7Yl+?NPQ2*hO<6)+xv z?N4GM@ai%Lg=CES64LF5dIJb*vtorx{e9R#kOH;@bD0#jaB1@|L<)1rC_K;~k=a0q^+yH_hUG_l#UDp+cc-?m6;q09v zo`!MY?MH^mZZv*p;fTu}`yO)p8}~TY;k=anElc z$+BLuUjuX$j-u_Bw9--HWQD!F*61!cCl)lW;S1l>YFi*n4x&rpSZ5-4BowuYf-7ws zhg%-OwQNVlL7k1E#a02!BM*tN7x1-!##TZuXfVfZju{bZyi)2i{~T#1QuHyz(<#W@ z*<7n8lM75S-DuaEr#WV_6Eh{e{sQt~UnbRO{^s9%=9vFiGw%%TADcz}?exwio`gAI z!_#2l4yLYBpvkxM^O(^$o=pJVF%9Lp!Ku18M{dU#vH{0xUgmIgFxgoaf8#GO zski0U?^DX}LD=+rEBJPJZZxWXeR=Y;ux^QWIUxy6!SRb(8B#+w3e&tpBOEpv-f; zk#uA9NL*+S-nz-YoNLK^0a+-4^ZVK^*kq8U*T98c8s4aW$|aZI#n$1-V#!q^KRN6a z=?mUqX*-7ij6;PjEs{>KOn!&$dWW$qvQKP*$)WjeKK@wQM%2pkYOj-n75AjT=6L1F zKB@I}ZiF=`(a11?Q1)C*uXIT5DbUrFDwbQ^(Q9SgguH`qKQwq)PfI~KoHGsXB0Y;r zfkjE}mM&E94$o;Pr0-7E==&O@Hje&agLB}xYVT?Q z5IG~<-Lb>ZFrF3f{3x;34!EMnkICtDWLlT3h4migR)ld>4kMr9M)e~+4NJL5{#TBw z)kC}}JU!xR#0;vPS#OTIo6n`bIJe;D%fc>I+S2@x0?S|6STwo@@8LaRulw2uUwxTn z=v2C2nQ!M2j^TB3>ND00R$13kwL;C#`iG|jJKr_ZRdbhI$H$n1rEc$z8*#IX7VxZh z>xQHVyf~Q>Jg%KNl)v(hgSKPm8`rq5_Tgk7jnVn4eX`GIconX9b5!pjX0AAf>fztl zttwVNv+7$9j4Lf}wA|jmrf{fGHTELOtnZ}p)eZz7%gnh|#Vn0+*c!v~P&FbWv%p89 z8`!vQmnLLoM@{7qpDKi{%-0cQ-v>7)@oA!UFgX@$H|*x_tj%>7$6FAbH|46gwYGCf zaq0CccYbQselBgvte*E}kQe7EYZXE)Rx^5jWlw??P{^_-?H(KXHK+7e%D)W;TiL2Q zmTB#~IgdoRer3bEwwR8Vo7bj%cK<*En4`y)9qP_?irdNmh6*0R)jQ}dJ}=GQ4#W=S zV%6!{CON*i_kv~|w>QfpVI{KCf{G=JcTpPYNuU+ugs%J@S2}awjx@%eeXBM{1P8!( zw!2wqMXgyi9xlBps+#PGsG0P@Skbymhxl!R7r=nix0F*ur!WpqNRVNI#`y_L2Ui~s z+cwwQ=b8&`O&URnZ@q!bG+reUq01eA%d46+Ud6e~{@B!2Ryf|PD=i)AJ;G#Eimaf! 
zqPWYP5dMRD$M3TOf_AEU_iBx{u$?(40Khhs8F3tl21obJMk zNI^T#=4apdR0Wt%Mj22i?~8Mp4EPzgGreq!!-(GlLBG9O=!sVM?H*I+D6y~1Y-Oq$ zzq@FXqQXcN9yQR4fppGIB?qL#U7s2>I<(i8b@5B#o0?v=?#1T225#Kn%R#hHxlg6N zxpHH~XfwdYWJneltbfi|+fByRFpaH=`g*8E)tv1R6=C3LQ&YhsUEuZ!YoXl+l*D_9 z!>9N8blVAhk(>GK+R;Z%<0yOa#04ycx8RQF5zx$tgfSoobWAd%_gfURq^w3a^R{l; z@xCpEE5pSfm4DmQd$$svcK=4>My=VHakZtUTPkZNsXE%4G~K-GlTNA~lF_T`Af_7- z12N8;AZR9+F{*kzt$)#dzzfrJ_9AkgDK0~)gjlUK*cG z=mt;mm?bUev=-VT8uksTEq@4SIS2rPePLh9<&H9EJXxsb*z%X!hNa~u;Pr=X#Vq3K zS$7(Y0<7P~Sw3Cz?rfmw&hORg+DFVux=>pb5bc~X!rgn82vtDj&5QO}2M_%KVOLR6 z!#Okc8d$3OGFiZ=dC}+fd{q>&QHtAqr&MP9PFqd)k)UFspjUSQR<;uzFKxTordz-e zP?PEJ%7MIQ;DjGV(UtkXXR@@2n}%Ix{hQunVN%P=_Ocj8^$!TjGNR`$ ztfeppLZ5sU12trS;u6iR2OeLZ%5P+5*p)gt$P{@HMB815Zfps~qD&~q0j1}J(6^nE zR69|)Akasqkcr~=I(iO|qru;T(Wh%k>!5J(lj;udOkUND!B|ptf+7@GYe&ZLhxBdPiS>A5_$&BMXuF{oS^5xzY2(P>j zsiQx@tgE{S>nq=zN2J_(kTL2d$9y-Qed69DwHO>#?-qEZ#x0%b7B=B^P(|yVXPtdm zK4tqYSi%x!vafgjL3#6iV&E<;n=BGoUYic@d)`x)%DoRI_YG{ZJlC&ij5ZE85Z?bxY^rn;9yiPuQ;GuvAIA685e&ICdXYMGjPxj#4`bGz*$0_Cb9`k<(_fy)jy?#GrSxRUar z2XPO8q(PPG9b57%V=W_-OnyO;$?M-=Q$x+A!1XJGn?O`P+%;sYTr<$I2R62O4)>~? 
zl!L8W?G-WJnm;J^H|1M7(3tAV+YZiCgBG_da8@2bR8^Lg6{ul3F3seLii!TgkC|_0 zj|7{_D%@8qY0XxHQsnn%GYyD!gvp)`VBg}pf>CzYu5aVk$A{QP9A&o(=B2h!2N&Zh z`kpuwrIm~cwtN`Vx@@viaLddV8({Cs4oE|q=;&=>XhnjOSj`8XFQM1_ogl=s?)BR3 z?#&^+OoYp*pVa-Iun*fi*{$C!He~;!_2YK+L3E%y!=*b*nVo58K`-XW$bEYTM@smS zG{bIUb@I%zdNTNoxrHr5AR=G9Xyz1Of{tzz8)y--mt(YV?>v74QqyQ~QMlfKaqT2$ zQ>i@AX5^& z-F)Qf&hu*W7WQ5s&Z?$nJ#t)arz$}IL!T2*t-GsrB>bXfzOP-oc@Xf!;EAo5Eh%zG z4_fh6&za}MYQDXn`>x_giq9DQ*Z8PfbIYubLgtF!7}5?XJ6pqgD(zR{E4*RstYZ%1 zsV>I+lLei=m>q6J|J{5ESV9?joqnmBl5w(|n?%?pkn_6x3*{y1X^MSt{ zl)C^9d$d2;r=A&Q2EGO@uIhxvG8{ThpWCk&R%WJm*)d; zw5FXM!hI9u)ai_Mw*;w6_0CB*OSey}Z1;dHr z3i}p9yU`F8Jxv5q=Ok^~$E;+0QX;XVk*TRSQZlT|iFhdtC1X6qo$4i$u!Ozg=J3Hp zC3(Mjd7HD%Q)0I=X~Bhf&61@H9B-KW<*+t?c`~PezQ{(_qAr3-bHDo~sOA8gX3*2C zlIh#o02$V4x2*5U9OT%1d6jzc17!_p2e`buW_)-R;m9}dO4$yyBW~psP3=&(tAA@Y zTGW5k?zzzLe$|2Q6D_9GQ#JPD95x76TOuw zO-H8jwsH1(&#Z0+6~?H>bAjmFw%NA>dhJBH=V8P;HymYl6HvzfOVtNaski-_3xg!d z$RRppw<+}Qfbko9hP!r4B#?^_s;b(Rp^{P~AXBucj0`PKmwq&kvIAalBc^V0maPhy zp;OR~aovUj<=@PP)GKs?=B;qUS4`FAo*8jy%NB-kik-_SiSM)RJel!y_{jXC_*m~Z zBW7!iSGX0sfdC4xYS-PfGF;VzZ_1mu>roa z6tqmZ0xTS&C3;%UjYVD<-yDy066J!zvfpEZTK^j3-&)liC`1}}oX zm3w|l)Z9EO7r^F=-3@xg>B5)g zvppqF<~g)oe0SFs eG6vW5kh=CG7YV# z9bmTasE~XU9vhIBkBxBNKr}Vp11M!01jMIpcmQ^W9y;S2-?dW37QqSl0Mv?uv#7*3 zO|sn5KvU$m!3aYaBIyrz1y=v2DVPY+GAzwD(Xpjw!3yANlX)+$QVCAL@bp~hIqFK) ztl!PHPQ*NCq%fC$WB!+XTA#>A&I;34rSNPxvJbZs^wItA8vQl&mE!(x4Y$27{6P_6 ziX87P4hGWDF5@z05{)e(*MHlZUwYbED>vL8#}+IjXH>W!BkPE}OnS<8{KQpi>r|8B znh)ys11psHd{@5dW!A=X9iGy7po9I>lhDk%+Q|e%^!|Yxkpm$h?9#_X@PFEsSY*RP zmO72+F119KE(XFX`a*9d@EYJMwF3rnstgC@JWb@JTtvKkD^Uwu^;Q$uwA20`#OxTs z5MDu&X`*ip$0MV|O0Uh(lX7Fve4ks=dfy2r@**lS@GHh|lwOJE@G^-zfxO21^SuwqWGI#+4`8m^eS9l|u_Op2k1m&4w`?F)aJ za?aIj327VQ7n?swzc@|_f3pi|J3wR#8|isV@Z^8y3{tuG=l=q*5ADMF2=91yq(vW? 
z%&&q8hKl#Jfw&nH>uqP4b^h41{-hDSc~Ji>8@iVveY;mT*)pkLjWL)PdQC)#D{elBUvKQrg zpd|!=K|$pzvE&Te=lBu>1?+(e>1*jpt5rOP=a-*~Iy{xz`wa{cXuJ)IME7@$H76Bcc<{!z zi^+#k;Fc69*dPXT_H`mfoNAKyL22YW8xA0eu_{~&KxPCoC^e#T4iroP#D^ff4H)>w zL{+w4nXi)07N?roO>3-K7h-)kBbx4~cn25BZap3@KixD0NZP%tV)*-Pr- z_|f$ma>Lwq07KvXIhI~VS=0LXCZEDez@TKP7|=Tpfw_;s+8)?yB}UMK{#Oy9eC$N8zcYE zE&^?JrUJMC%Xwd^N_)xyPe8~4xCM~O=ni~OD$IH_vRzcF9`j2QD{@he<8T{7&n2kG7LEhjlH z*Upwl(|R~NNNp+{8oJplU}aj0frc~?-^yM@9#CXb;X3|*H+w&9mbH>(TTL*L{Sh9@ zF~_z2M%@`o5`8*{E>nGmbx|*$1uRKHD(Eg`JH!juvH^~7__EKLCN5mRN{~|HBB|Ga z?`}4kSicXi{(OiTe)-ZnLiQlF9JMAlpAIUvt56we3ShWC1wqHMkCM>g2gP?q6p&A5 z?ZZXqG7B7?7DQ@IX)p|hZw~rlTCEbuvbP>kh=jfE=WFu`5uIC`99^z;9oZ7;BG$Tx zFSAh#^3~Tf$H#JIkh{C7PZ^?9tIPt+W7~9!b^1PY8R5=XqO?MEcIZqQ`-VE#J_U^9 zN3l1=J|v~6hqzSOT=UwS#mHdgEZvam%ulJf(t}dk0La9XwW@9HwQ9tFZ&kB}MQpt< znh}7#7@M;31@9ek`hG-9aNLAUf(KF?U4On-0mg?fHeGy5|nO<0(Q$=QlZePWJv7F{$gGNrjY&8Wa(FIKgQLvVvZk{HcgPXoS6%{5nQ@ZuENbX8|7?s5vbl1 zj#3-YX)~)79YD=1!<78}Ny=Z6z(#R14_Oi3XbC~DP8uF zEdl;+^pCpsGCGZ6u$og8*#Isp)7TJX`F#p2bEQMP1Lihg<-Fal)NAs+%2u^Q=B@X- z+C0J({>S5&np0WP1Mwi_l@5*bQFCm|sos#dQj>hI^Yf^PW#&HMxJHOnT?@4^6NgiV zbpy?id(;`F__>&Dl+ng;!MVzL^l4wKPVjR4?-(Fgf6C?Hwx>_yhL=-sO(1pf@vLET zr^S`8?rPG7D5cKlALYC}*{64&<*&t<%`28M5d4(oG9$FFFD&mo0PBav1n~ z^(SlZp|8EQC1nchCpzIBh;>8U$?ZcD$e@D_(E&q@%7RVav zwuKTptW0shH)u_@JhjZ2m;S>JuG7KTHw!7M_Fw({CoR0_Sa;#3LjRm7wwlyO7xUg5 zU2lr=&7om>syGPL(2+Wx@^#BguRz4m0o!q>$Qo%%;9Z9D^?B6$N7?ZdXhT#N;6G7T-g7pJhVFg^nH1_6VP<(6YQ1 z;L3DD^`Nk)?lbW56`ze7tg!H)=h|7cHB+Vi`(+0Lc0tl?4r|r8Q)ZczwpwjndOAUA z-E83Gbd|x3G~d~pMaWZ*PsZmGasJr@p3d6Zam*@?VU1A?0Su-oH;YGJX&DT+7u8>H zL;7epTW}<72X{Y$g0tjU=}{iKR2Ksf4n@RL%m>${IyGZ!u~LeW!sC^r*mS z;-fDHp7W(T7fq^!_jY#ChLT83A+8ZyZ+CdYvBZ2Q7KQEd+;JIE@c`*vxHX}bv5Ooe z^>OjcDo29*^4~q3Sbh2}MoJ+s_ebh8Y(A~%bHvp4Jgldp!-?W}n>;%gTgKV4jJySoT=?5}MW1-XHVeA^W0nn&J7FH_ z1!QN0brjFa@-yiypU1ju*adB8u}0qKHMh5E?oyI*$SMHvClbHinfqC__OvG}ZtuFZJw#vt)_P+;qo^fIvzBt! 
z&^r%rcZB;SUri}Tycx@!*}EZG*#+cU+T5z7B{p@IeldBAQ*UD#?UEQSj|64Fq=gP* zTN`TZ@muHG3IIf*(kAeeatYY1)Lp_ zm^qg=VVl|-#aX%t&O&!=1T38{Xg#lOnw$fH;Cf}U9j2VMP651sL(&|H$}X8G3;&4a zu@^Qv9Mm>7GyG9l3{bl_J5l6MR<%LVMYDf6QRU9F;J>R@N9nJAuEz$-E&YpR4lX@- zE?a5)j!#ZdN%AX>=`l_@8Crk$oTI?w@g$_wTNnXq9|*Jljz5;t?@+e=ruqHlH`=U@ zHObEShCDmDwi?=Sy9Fe$F8sNjHN{=g*s+62Fn~dLwGaxP2O?n2)pH36)&_BnOCp9Y zchahY0jdOL38aYUQ5EMe+U5? zr0NDB*y%?)C$`f|H0qWOnlnh2gVr=Um2QvH!G#e=Yy`O8tfor;@=+`7QOz;v6kh^T%HOF`sz}>4p8Qs<>Q;)bHcO z`SHo*fzcmjxZJ0&xgZ4NXBR9m-REiTsONGx6AWG#)9h1`3{TkQQ1EkIfG;OlyQNe13~&r2)e+9hJ}S?M**FK__|(uurA_s3S5D&&9w zLjB5x#H7O|Lla9!%Ao>jSV$ioZQz>H$zr+rXxyc`kXDLTr$=>AqgzcSYZsf+&PUF@ zF|Vu@Hw#cA?~G>ZqBes?AnWJwt1|h~v_5xm?`MMX#@{{uZN_T&aJ_!?nD@mWoeeQs z)^+6*|NXYr_f7D3q1<;jFi@(#V$Fmdh^p&gn>xXH|CG??vhMB30C(5ANPv!DuklJ= z#fQ6){L2isC{iSwQQ@O&sk`l?_6GqH_#&|;%HBMUpj6kK252|looGa?#WkDVfOZZoZ9JI57Mw;g)NF9!tY@E zUmnS2@9So$ovkOG6dn2?@0m5&?>0X~f?ul1gal-0*Gf}lJgU;SDQ)rETdQ@4J%kL> zQ=VoCjnS0izJ-8oTxldT<`5d4A>cUGD|72Tnf5r}UrF0uSvl!$oVB2l@z|$%AiyyW ztX4znnPnA|)ZX3_$WKi)EV2Y9TW1G(`Z%DfQhbnYW&u(@o8v0JlVwdcC&}Ci7GvcD zneW~f-7>VrN|=EuwR?lTt$w#0F(N1+D#Y+1YUZn&)=+N1#dl*G>vV1RSvHe3^&}IL zkCCEg1%r651S)9qU>y@$i&8n1?Az0PYQF5q@7yxK8Pf?>@)W>})kQv#N6E>EGf7g`U9e zSUq)1>Iccvx?BEt4(cus7C0s}n`J?%$^^~Mp%~GcH`-o_<5%^$XcGj@8`IjW2y{}9 zmFc-={fV3ka=OYuk)gvSE#FB(OA2(-$QnO=R$NuUE|3*sxNs0u=>}ZGtnHla%P=6A z$L<{J^toq6ZBQR2RyPb2uX>`~2`IQ;lTTb@Vf5IsDVxD)#U#S?TycV@$uAdrfrI5C$Kw#QdK~G{HjEeI}Z;Ce8o-}RLm*=6wfc&&|WWf;Wq1 zVpxq<3*0Y;du)uNEi2tmtyVwIpAT9e-lLpU#cA}78aF(@u6o_TmZP3uu#-RamkVB1 zHGu)~ef>41d0KMpD?w?SfFPm30F_Z(^`Qyqhi!!gH+giVk#vE0>$#wdaxu7@Qzmr6wPAt_$cB4wq<91-)sG`+VtCC<1J>0p_A|CAbN zT%5yUD6X@@TKGiXkV^8^%~8DrB_Usk|I^I-Qou5+1#ZqkHM@giT5!vq)aL1+&c~yX z>Hb@Xn6tE!gO}h?6O#FDqBrYT;Ip|r&`$MApp(-L31qvYRfJXG8hC`N?sAYY=gJpt ztfJ=*`(jlI0E5poxVCX+KZ-iLuPvyQSisQB^!Wc`?9Jn$Uf=)m6O|&VkRr6%sSsjp zC8=cJ_mF*tY-1f#Atc$WvG4mfjC~YI*2y}T*=d@f-<0KhbNMB5G^m|lrqI=kSDR}B%kY$>-LhsiDFOF#}On|^PI0PXvrg6m`b<%oX^nemY@wz9^! 
z-9>+mw)vLbE-H~NST9iZ@))vd&u*{h2uqmTr)$O+Zm^k2fUb0gJ(_-BdqQL zWr1gG;EVXd=nAXT6*sS(mcAxL?PgLHFer=|ALZI7176#nQ*+EBc-?ad2z z34dt)ixg#q(-EB}I7D7)pDQ@yKB3~*>txlu3Dly$`r*9`-d{|#n@11<q8@%Y-*c zl_~&S@in=dgH_Ez&X~aC2Ig565${MsUHpyra6asfB75mm_Nta$@EI%%Fu_HQ%RO|; z!jDT&$1l_6e6+V8rlt!BNPHx08n!G!h62!(obF`5wK95kg{3_;k!nx9MNmP#K2^Ov znLS*v$@5J!Z-7XiV3PL7f&IewuMn2GfQX&${vfy=G>zx?Ah;e$4lYUx+AB8RAV3c48F=8k~XyDCme3mPvk*6IlfI<(L5dS>}+$D;r7ffuDa`k zZs}TY((>UTUpqfiH!buD`zohnHh>&D7^}%-hTZQdcI@*^Vd*nsUP~GxfreVB1k_#L zLl3tu#obv6bhvG=8J6TXuA;X!3}?Y_UdYc~VlUQ0eA+HKT3(su>Q<x~3dlQ;Lw~ znn`T~=T+12&W8Z|A^^m?VPnZm!O|`PeQD<35q{2bO>=eZyWm&bW3+*2?nv*y0{tIP zwh!L#S%U7mr%suNZulTA)GU>CW-t;4VsEHAWu=4s$C<}VO6SEBjPhRXc`e?J+$$Z6 zt*OC$ORw2vUMm9xGJrkF(6Gs@GRdZ?RUIk@Q&a7C zv9L5t6(+U2%UJb&dhl7uE&m4P4%aTU4I@mYy-waRndRonMBefz1I zl>bEq{*1*>C}*6e!-tYh!M~;6{{3Y}7#*r z7>ot@Mcj@b58Iv_{#+Y!#q96`w%j^}{qxm#xm)_3g^X@htJA^+EBBDr5hH%Tb+faM zr?|e`Qj83Q+_c+2nfAvNo0@b&n?4*DhsPD4DOzHlEAY2a5fNMdh^W|Y7+$;G_f%vq zp;oU+x3ATYmM6BftI_{TT@zDu6=g_|E+>BP_{cXiE^W~3va{y| z<-}pBiq6l`gdan;a>}C=7hW;P_z7^Ve>j|tH>+;GG90x;zu?vZu%%V@55=pUHP2Aa z^Wk|RXWR(nSeAznp1W+#T4NP@!x|e16ISd7<|v77_tGa?iGgHWJaVGg>KnoY0$>k3 zO`=J%>DQds2|p?Z02;vI7#budaF=+PyaIZnVLL8gXwyFX`f? 
zeeFh}Hs#IIO}4T)yd%(Xy)Fx1pZ-74hMeMN;LzR53qNgxzQ9BJHkwyu<3S7R_a^;}@l9?m5TX;cZO*%5Jl9eyvvY`6ZM)NjuRkyUI5aY)vtg`)^5>?d@7?_jQ4sQ7uK%3 zk?6nfvyyPdH%M|U`FZ;Nn4Q%C|J_66@I>= zF70<}XiYt=t*8R0h~tb~iXOhurqR&5=;GS0dV^-8;>OW-#z>0bk`bE*-iOl;mHxdf zF}eBp)NqWY+fi@@MxdAnq)DeSYwt%AZcNjwkq%H=Eo&n}g>@$n&*H8s6V3=x?4dwh z0o52%7x52;B$Xy00@ISkmdH_tmeAm8JDW~3BM@o1?*LR2EaOK`mhCWB(lmG(HeJJ& zj+j&ZFTws#Oa7f)Wp<8oHNIn9p+DPz7!HB8yW3-_0#Yji3|)|n2jF^TZyD0L#4vu| z(H3T;v#4z0@*CI<5xCCEIwobC1;8qDy^j}W*h7&b7W_Aau86&qN3nI*emFEg^ZjmO z>!@47oL4@zRT%eKi1H|QgQDtoLPZ3&a@f;%*U`?RRCIbBEz)gfgjW{9q49~ByHT?u za^0grTO9fFVpaEYD~I~drx?SzdyT@i`tK$fr6w+>CC!%|Lll-Vqs}HJ$f&qeD|((@ zdQ*nN1_D zAHc?YHXr+u+zjUR*G%t)*8-sk75I#~!(WpBbv}Dr++kb(`OvNE&;wMamoy%{-H4Y%jc5~#Fl(Im|%j6JV1cDB2w4!)RLYWo%&Y5IBJ zqEEtpFluHupkKr#w;Z4Gq6jm5f4`{{ivO*o$$M0h_E<9)J(EzY3x-kojqo`u7dx_L zzUK4dYjvgN?u}+ttlBa;4dGwO`MpAT^yW<5Do_4AG8{3-;M@V?u;VyOHHF+kGYFeK zPP@FdR&mIG|00lrY5^?547Y+-cTv}Bb@Yq1!;XH$iV&ZBh@fmrse$&ig%ZXp_rNFr z^l|iDYGf)?lY+Rv`~xAyFJ4Pn#x+zAhsdS>A_ZhfgkXjT<~3cIWbF(e1#|UzEd<`S z>vuWU=P@A|X%X-xeJGItF6F|pns!Kb$(vSI*<5YUj%-?U(>xT=k`J0~Zs;UpQWO<( zs)Fh-6#IxZEV@rwAXd7n^!-SsD{>i_a>Ef@8tyG4$Q~X);aKQ5%YJG_7_xq^1gtzUGlR+DxMy}>gRmxG=Rl8}rS%r~55nisBT)Vt-zYplG=wX;1WHnT77@D^f# zf@-E}vIRUlS+rN1BU6#J;9X-g%Pg^)5?SJ6s+oJWuxY~kkx?aFBB|S6x*3%exJ7LL*K7!+cMZFs&ldC4XGXOsiXQ%O)-*q4(;lY4+lxWUIQGWzuCPU-kx* zJQ-P_hWYe&zn=jVFl^Ufsg#4iiuLcs29&krbrbzHs$K<27q>|A`)ne%rY;L@kDg2Y zUINQ-dbT;dAbruKFY!P}Vx# zh2JbK<=T{5$CBa=idQ&M8x|o~4kg})a(frqr1t144Z6TTHs=8X^C8kr2z+W)i%m^f z)XlB^Rb(zZD_gq0dCr$yH0r@FbbM5N`SGhE2ahf@AtcQ4lEVkMzTu2-IVp(HG~Z9! 
zD4OAR>4o^Bi{(17a-Uk4O=pQE8z}9t|19xu0sy=lGR)|@n(7>--0=q)Fywu)Y5rXC zp?HlPvy0j*aAix735T;a&EOT-)ch=5I0=3F+qy|elY|LtN>^?1A|$`bvEM~g6<%O0 ztw0uiNN-(q|J`&FHfS=J)!@yB2qS)_G{Nr~chk*dr6t{5HwH+-wPV}O7k{8Y7a8i0qJ;()KD3y~+yvsB@QJ31 zZJ1w0u&%Q_cTFP+9P_%e({f)CUenu;w%;}P2Yzly+BnDg&Q6wI;`q}v{y_jJdC%HM zSAN6JN#_T(sh=OTZniMjJF;zuOV?9d3ILeW~+F&3*tHsrKpZ66uC^lj4VesJu^e^P)eh1Z78{!4qrCTlsnLd7GgC zH{4D5xOU*pW#-V?)RH>mTvMjt|`Iqr-e=`Lyg+r>W4o4S7JJ|lA{*_NOZXC~z zVUXA_YI+;81FZr-H3r|B4hjh)M`4;mPKC6G*=d}iI&VfXgtdUT5@j+9zU36nC2F|^ z#ro}K_K={KsDe<4c62iMJBFp1ky~)}@Fic?CuZ|qPy22)@A(wVyzgpAHCh6XDaUF& z?p0@)H4xN6r7n~$4b^s{WMpT1Rq?+yTDG|L?uycOh?R$~U+^xDj(-%_#o zxsxt;0tHP^NOTryu0H5ij}|g9P}&6!j9_?o&~gni%57f2&IS=c*3bOcIN=#4(FJ|K zA%JVJ$cZJ*N4=ju)diAxO(2OjCh@2n;MpXRuPoK2UyKLRcl#g7_`S4bbrki^ z;%W4ru00y<9meBoly@bxIyFlMrq^v(X5ovux#$l!N}D)B;RJ};5I{UCbSk?@Pw>=4 z6LjkWTqCcSzj5p7()0~sy%uhtFfeRoU`4oOn%N2h?^=vW3rRGnA;^?>Y!fYNT{jxV z%#wY0mb>23oX{OpK!#wVk60q6^FPN9AoX9~?<37&yb0Cm%_@OoWSNFadUA;x0 z6Ev8KO5t+yAu8{wc6D>5uo;7mlVr}@?^L`iZTY@!Y_F^{_AFpfAfzKYfv7RSEIMV+ zojLGDvA^jH%HRE==bi9$z~VPm88*zC_|r?6ni!4RHG#gGnURk()>}b*=@}s0(vz?_ zg~IbNsr=|ReM_(R$v)-=%`r1xjSV)bcQr9bIALI81oBZA0r*pHcN*T@lSF@wV_(}dfYUup4(zjM_WL)g+Dpa| z;TA7W_VeI_Ke3zzo9|81fO*ZW`KenqT}D;V_36k;1be!!0~6O|a;hMR5_>qDHUi%f(pu&j3!C`!-CH=dvvhd?CnezENW%RYK2E-4czbR(yc?a@- zIXLyr&s1=FD5R@z(D`aZsfXb8N$Bcjzq@2MptYx2fjZvD)sWsxE(2;BLDHn%sCV{M zN1u#x%~n-h=k;#&Uy~ytPwA-LQ2}z^!IT6k4(>9JA7CYY*>VSj&eG5I`EEnnXvZN@~&*nP(FR=OMn?$T`0VhZsU}1gMX6p zB`a7rZ>~~xuvUMi-qC66shdkI1EVOiWxBBr}OI}A08SJ^D9IuUrqbc+!#K`d!Q42RJz4}I+* zHI%ln=L(D6EqeCd49GG|v@ghdgAC41Fw6`9<{L*z%lEDbbTRcN z&buGaGLi;Y%EaY-skg)-_osm4rmwRl`j=h;^1xMm)q6=waA5Sh#&8%wRouu2G#xa# z)jLH#7I4+~*YU;YweP;)Ln*19gunDo?{XCxiUXRD{eM4tQlqvtrf@Ca4jnHuxwVh* z;eF_;Pvq=~37{|^IidF;{LGQ&GhAG@w%}uVKDu{3LY}kLy&gZIdsm?T!qM@|&z+xr z@X~uA*?V}7_P<62iI@fcTlU%feUJbk}N9=j~mhke9F(~~NjQ6o? 
zl$*7+%zKpHF_BDy$n`7NjAAN-HZ^t`cQ2)@w?c2Nw^N^$;3+q&o84QqDlMq%z5MQD zGOtPnFq9I=XwNBVX=Tu836be7y$lLD4N-!Qf^wRNilgRX$yH7A>mOHrYsMM^#-^`J z5m``U)JL9*cOb#KMA)`})*f{DO0(xHGz=34E)!4O4w9kFz^Bg}MOs58T;gHwXjQN6q_N=qZyfzshru$aXd^+6 zk_%ftaDA1}w&kz=t=8$Jwp~XM@I!lfb$2G!BWsj4X|9Iao+>6q`(ui#z5dpzEIK{< z}dst zo_lv?mZ9T^OjgS`*_B@hNd>#^04p|59VwVZT=W z2IEh5@mW{8Pun$9&|Fyz0LGwv+DuJ~Ssw)xqx(KAo6hN2)UAI-c8>3_VLV>YMi(^% zF`Z+wF!QuuVF^3`;}oh-G97s}iPLDm+Ut`1w3&y8^sI-sC0pea81JTJ=-$0C8>N+` zv5&;{9L4pEU;B}(E(g}d42CU5MvT}a60A9AKfjJG(%;v-Yv#Ys?!WBhuljj)Ro{)k z$F>ko2<#lCuf{8df{juZE-b7yTrA{$v z6`c=!eAldK+gza%Zi6?%io?vTBfw@NRD;D#UqknTi$zU2#|hfzyu+21uxcJLQK5;E z@n+TiI)>9EQk)B?p?dyxXbJ=bS>xWgfm$^kq1WoG>{)o6CjR*I@>T_pwj-t6G zI6g1etcXq6MW){o4SqScVTcWA$G3GwF^fSLg9~n0O*WczCc5n5vhn*eBYBwtwL_=DG*rPs(Xc{54DwuM>Ims$*HI}&X$EBEQ z)0*`!w!!4KGXHAW){bd85n6e?X{3C3amHggNZMtKGM^$xaR){NZi%LZTn@!-`vV!vivsdQAgDjfT^fMJ_YA$9uo>1~CgFpU4_ z!o`f-KE0IE@4Ikt=vcY_BRznEz08RFRM29?Tc~)6eM5LJ=S}7j+7&$wjYt^5nh=0} zSw@AugQ0zrS~pAV_jBZc!sSQ@Hf!6$!b~g_tuLRk88vmUol{kRzU(-1 z(K_;~;9^s+{Bd+)pAjnelXs+*$Ra4Yz=i;OWw4WM^QIO53^mY=c>{2x&?ZKp3T zGF_+ELXgips3*J4Rw!4ZPYxMu8CP~QFF45nVfSR?)+Ly-*?wkptR*GDRc2w+wE|%~ zDPn($7$!+LQz%W)cmQ2BR!^F%6S`aztTf09iqX7p@Lb+CcRPS>l^uEsT^uRP2;!k< zGZXnFct<#EuVo*yUb#LjTqTBrQx;Q1=d)zUV*d1FG^Ofqw_wqn2*2R)SL{Q8;a5_c zOyQhax!IW0 zdkdL77Q_pJ$K|5Wk`N}lO)?x^6`teX1HX@L{!gt=P%>n8;z|AT6Sh>19Tk1Ws#K`WJ291 zxu1Rwq{vai(K8o=~|vA(1ALevF_Nh%BgOv8l>?jMmp`*&#;di85=r%UqU0650l8QA?N0_o7`ZY`R*D`B7!txfdY!3~|znXCyLeN|~IxbH`lmZ|GRdXt8Hu zneUoCKh|V%7pA}7vG7o)0NcyH#nxDO4ek*t*OFbQ6Wh!DV6>#o6}*;M0Ot;r3E%p} z?2O?0)Ouf9Mm#h+Rtq*%KCUKf*SLhJc`~6bCGqJgyco@5v&KZSkvTgW;NfQ+{=%QY zEOIZ6_at2B&75_5Z!4{GDS`7$RhaVx*%x-#%w{-z{*yf4CsZ!E;^R;bx;BHH!mWQ+ zm&5(j)r(a2`8CHw87%+vcXJR(ak`}JPA8f-&ZN(1t!D}M@K$2CrOoU zn3gvNUma8}!z#g>Q4QH8tNayBzXmeGYF~dtmn~&UN8Msqzm1@1c!~v&JT()lx_1jf znO{TChka2QX+IUaP-VG**sXqqEuhGa6~>?~_MeM6jy}kSJNfq;(KC0BJsT8R4Gc<7 z!eGQr61j$GOpz%;n_^K?p}U_Qgm%xn4Efw14P+VUwI%MN_-yM7P^FUF#8rvH7IrLn 
zgI5^Y>tI`DJaUR62CzzL(+<%w{*ILc1C+)dZW-3bJ821{ub0QlY&QhwL|hG!HEEcmyE?ECl6 zcPhyx4!uHD{(-Gy-+QJ({JI8{9>u#Uu<+MkjJCUH1f_4Sdfs8sXWb@Rl2pNNxKdv@ ztHwFBhfUKmdEM6PXpS4jzxhn* zrk%gtfwX-c+gHHt5t8oYn>>p#`j!s!ON^>n zd&{`XV}wW;l`WgK6-(Cg@NuqT($ivG114#mZV1d_Pf4G?U-rd=l`W~-xL}GlS=(Jb z&>m`9`6-onHUOk$W*@&1aKgK9F5m>g7=;Aa7JARo`88=SxE)IR>i^Wnf(I8z%2KKM zIvq)wlT&`$U9(2J;%U@p*k0Y!^%?^&#s_`(&H}uA6eF!Nx|m%}C@U_N%wWZ$D{3^O zn}xsB{^O_}h6vxY_71;MG{pWi8BMx=EvIQjaGpr{ztMT)SnsZMZ^YsJq<`M+6>{;U zaSnO&`Xe*TBZXQG+kEIJg9?y(>E^6M;eu&>-cMf6@(jADvo-2YsvpDi+t!0E%d{QT zk2MR!LnXH7{3YSNYt4u$5@X|A(at!xR?qx{GW><`H>bvP(BY9|=*zB*TY+}bS_uN* zt{9xlzq`#Z4~ltZWL{7Kzq=xY;xZf~NEvC1|ys40oEy+Bw-j)K-jkjiqhTVi6 zU`;oZl>tSOS^hZpODQgOnk3)Zz|;}ym6uq`7uc`|3i-l1m1}97w9Wf8by$4arMWM$ zz@<5!AadWICu?0`WwgXwG0)aLqix`+kc-6+&=fY0G=8>k4iX z_g35vQ8&u_8DR#)7iTw2Nm{0ldf{c&d;}l3YlLU+8W6bn1A$BS9+ExrL@H;;#+xCj zPa`^v6pZd>$~yPevA<_z{^J9Ho8q!K?ykgWs~uHw>3i7jZT%pM$e)DsK7NFb5yRJcE~4kdF&PRn!$D3g0OYS!CR> zSk7s6wTUznwH`jz>?wvUYY`FcUk119MS@4lm*8x0al6Q=4)~BXY_f5=+tt;4*ef>0 zWyC*e2Rk@&P9y2=yhwfD@vmHx!}c;4A;lG3!mAcBso?~}#hi-i^xMm+^iX>1PLADd zp|$4XLdU?)FNa1qSg&8|w7$PE{9#;^qsFr)?rmg_>op|$DaTs6+S-}g=V|nu`6lQC z6OcxCtV8G5e3uCmtNfY=-zr@{8gqwb*^yTT%Cl_EU~kISIb9f3@fiQ;%A4Y<9Hasl zSQ}Jiev7$uUwa)503+h;L++?axzo^R??AVt$6mch+Ye(DPEZiCYw`&1>J@axBMZS_ z!bD|R#>Jx41^i1;UT|x@0-KWdxAfqMBooeF6s<{F=krJ6s2|w7EvD3It9a$irG<`> zl`{?2{jgQ;^N;*!S2ly$sjxuBu);~R6*WaNE&W2|?VvS{Y3#-`2H`UOz=9(pxaU#_ zht&8?phu1QOS)u|09?>ifZFDisvKyO)aCkNXnKzaHcsiN#^I9d(+V)jg>A{%a#AV* z>L`kY4_Z;>ht_>Dnuth>xo+yY^R*b_6-P7x@-TAtfTK_hmeeTi@PT}<$xYHyNa~{1 z=qoztmC64~dH-w8d;%E(Plf*ctQ^HuJ$U-1(PA0a#JckhJzJ8oz09J4R9G*kbp#B0 zX`nl@;$X|}P~JJ=T0`rIe(IaZ3cDdb01mK^M(cE)^lTJbH0pr_knb?72O@lp#5Jt% zSs^v{b{H;5-QvAqh=cr8>Xg7fJ>51m2{N{>zYQYiPv z1DDvbKW71~O_wBxwBwan1eaU_k7;0!nZN6bJMl#|FKTHGR8i^~K7R{QzL$xyifLYV z#60Jhkt*yBdT*U+6SCbiOp;mKF|cS#Q;=^t>wo>hGMi^VL(d8FG!f&ZKq=dW0H((s zW!X|JmL^?&4Kv>)?mD^h= zgo(Gkajse;3cDq#x4cH%1}n;uvME1w9~Ryy?G^}+mZ#N#uAVzHa#w-w&K4_^Od 
z^kV{hz%-ls0!ZDOFw9ovu{_r0m|O{DZN8nIs2ao|G6fhCS<#kEq?!*U_ZHw-*=~&!$L(Zhz$(xA0G95T-&K;Sm z)BdS?!^c?mCNpyHhN{(Q9a%*VRqRg-c#z^zDwjGv?}(#?e?WFJquuE1LDoh>1=P^(i&ZVY5}kKUEN+fu3R;E+53}ub4lXHC=%v z*Mq2A>yY*hU>Ngj^C%13;Ig_qv1pdV`QU$QR-A0LTSh zJ!x17>znfoG&%h*@Y)d?61aDM5QqpE|8lW5Oju?EsAr#W`xsWezcS(A;u%UYR=3|u zT-jRYu$Y0UzNj{qIig(9nLUA#z2dov-6tq!$qk&%B}~vu;fhNz&AiRmh-=U*&Bl)_ zc!iUTvg=2Z7u_GVoqTagxz+O=-}cW)_=`Nc)IOM0+*wcRXIfDJ7yDqE3zpQChBx!szPcBKc!fZ#oV_}H%**|V}0(^P0( z)_Uril>s&gOUj9Y5KgooX?KluWu)=sO?Hf|0RhUr+(EXhLhl9#PAT{R!(Xs>tZ{a@ zWX1ppLBO<<5-VmXtf<|DKQo6(!GSy00F71k-2I%-SNjUE2o)Q#e81s*N~vwJ_WFSQ z?Dgo&m-?Dy5Jy7+UkLJSab{( zm79>HEYxD9zU+yt?mLeBmva=>ee#?bDZjXQ{g)ijPj-TG;gQ$h&4tw>$-k3U3l1JQ zl_oMNm^`6q`ZSF%)?~_Rp+C+`Mo?6|C(c|iW_4}+l(%}WaaQczh?S{`B>4w7&)4@Oq+P)!<JjqD5U{`&kc58nvzOcH?QRR0wb%D?8uA&YshrU60El zRCA{Lc1N(sW=^vm45UlaKbNow#@vKQD9k@{z$KlHwpGgO_+WTa7s*t3b#5WapD^m5 zE^L1WDR%MzzC7ZZ#F2kE#2<_kZA(C1k}=DV-2NFRoo5JPLb*4lf>P-R9w?{!2^|#x+12-8e^2U&%ga3pS?y>dQWbkrMYe>P`wyf zWXH+%8$oIK-a#*XeSZC0Ev=ZEDkfAwA&AndZnG{%)XcRpyh8LI3Nmh8onYyE{0ur? 
zpP~h#{&~Hau+i7>3yncF3rt&WEC!r}Q$<#W;M^m^W()?p;mU`Yz6X_leTFXkH4p!% z8~%;Xny$2s^IrozVb%BcynT71rq$QrOA+feqh^SEP$WA}wCi5dac;TGvgY56qwID* z+OpIa?|oPInG$cChz%!`_kz}WErti$GrQTUKG^>jOitvmEuU)VNQ<7RH3ucdnbfCX zQssQ8X`0Xnh9$;xWEI;^phs%kKVjSu@!{uxclqkIyDCdu5{rae3-KkESs!?)$Bd_) zvX6$`XZ^Zs>ZF#k`*T+2->~Z+l5E5H`ujQ7NY}rX`S&Mn6dhOM!8aOjOV7R%bFZlH zZovmCkPKXVB;JCY4{?LGScS8b&^5bl!7ap~fcE{m$?#*)c-NhPsfckM-1?3ux=yOt zNY$Ixv>R$7IsQ(nSSi>dLhF8KiFqICibcbnH0M;1PsE~1c-#rc?9wK4Ysvdpi~@6R zSl;}3n|FrpAtdwYvh#58nSms)by4(_D0#^B3))U}O;#f07l@X>|F?f*Jt;o`tcmTd z{y&fYza+EE=;6eelnFV7?_|q99%mvJf17Az6{JYAE|Muh`=2GyrOJ7)NO3el){7FC z1eA8lKC2L8y>u3?MC-Rp zT3Texvq)YMT?B|_XtgskmXYzN`XmZoyY?ubY!tO{Yb>rTjePd?a|f2=8tekV>lnY5 zu=m>kR`=JB{Ou?@(IaP6>z`A)N7#+y?OGAcYb#9X&{swUVGikj7fG2ims7rv>q<`A zFKxP%qrPdTUK<$gL5E%$zHb@yEu;})*-O7M5xXuiTcuq1HnH1pLiW=!bgGeF~zK+bR`)lh6mZ#hbyA1wgj(;pD< zt+9eT8K(~Uwir(7IHdmtM1B#bkTY};+qF!le>09Fbj5~cpZ)jKQkN5o%gcRAO-G-< z3%!b#-+EJASlH>Wnk1O|Xnv?OU9`U2)ym@;6g#@kRq>WY2$sau5`I%S{usE^2TU4) zjEAk0#uucDCrsO)Zmo_pP@2hjqb6z>lWTwLArr{b0AJLEDq*XkaTh;droq*1cNaD8 zj_b{Q2Bq{%UO(G#M^Z(}ci&>^qh8-}3zRw6{oWr+Smti;yL2v<&7zEG7eT?uRPP!I zW$ht1?nF=o;D3xlDx*VL3Cfv1J54o;OF;ym36gp=O<^j`=M1^e4w#ohwY1$|r$hX; zK^G5ZxkkP)PO<-IkyXy9J|ueKM)K_`I|KD`z_|1R_WrB=0bj4Np_CB>RHF(guw6Qp z*0TC+-#wg}>+K-Yly0m0sv?mm#XlMGe6&u|tz~OlWGN3=hy!fB#ly7R?Sgti&{SnO zU1Okicj-hyc8vFa-F8d}+0btz!A$tdWIbDN*KE_$mrA{F^^VM?X0=fASj{Rmv+~=) z#pH8~7d_28G4%mzqgPlZ1HDR5`%%La1}*QN2mc8;T5dycT<08{Y=Y>sd7w zMX1Nd_Gyjjjq4;2D<7;h!IM~8h8*!=dv{tLs%SHW&0}@~?ydP=LYtXp8FcR2cM^K{ zL+>+f)hI_tGM}|fG+v7qyYNL%1g3pbVAL;mJ6gM`DC3VRQ3RcvKh$5K2G%i{<~DeN z^Q+H=XAksu87@HdFKHM4Fw5(o5rlu%?I+650TXso`T9S*SYK z)BIjd9P@g2I?%XZ@!^xLBHB%51%o5{4Zl$^_m$|e!T%!j=Whe3_{5o_e>Nx?5(kTt z4ii`j_l}~&kDXmo^#u^YIH26kusFx2eY{Xe+eycv{lPv_Mb1umfsF}m$I!_hc8}|+d z>OBLzAMLi0kx{b0$w*AD^X~xaSRWb3FguyBE@V;!a%e${t5@2-M+2C%g2M6CNH9mO^2Gckna~hgNTwvxJzciS4 zm?uO>jh6)ABJ&z&)di=EG29?;>MMl1ZFH?d@+Xz!TuYjuRZ?t};{Lc~EUaq& zn%|Un=0pr=at)39#O>gBY|?J>so&CD0SI03%GE!&0j+bie>ONQ=_5~&ikZ{k@J3{1 
zO^Z(Ya^$f1_Q225P{{ogb><*hG$I=^1bsX8BN7Hm8$t@hTem`Yg$H&cFR|Zjk4tUI z@)a(Kl{N=jNs zZ)Fz5v5Q?_814X7^dc?HTH7FdWkS*rP zwcGsg)hnhF(YfFsB0@WUGoERW$mUEywFO&#FToxZJwVcmEM}Lc&n?$4UKVX{kL-G* zXbHOi6UUtod$K#-5hNtEz(m{=P*8Yu&`JR}SD>0tN2;aDQEF|T(XShqY~d(j$~ zLlh~~2bUia;Tn2qJAPs$#Q*=tzMWqPi{NL{{qdL? z-hw)OD7VFTak8-QRFzl+%3NIt4wzf+#4qEPo!bdnY!#yy-Sl3*sPv|0dY9PTd5QA+ z1N)vjw1m-2FRimYDRop?WPP+$eC@SQWsalzyPzwS*fYO{bOFL-dbZV_n!Wp=%|6hl2e(ctZ3WMxGmv(_#r!ONJ*{yUux zQchfwOTG@0O5usoTrv$HxAZe3=BL+x%1|S8k5C^yhOWACQr%79uv5I4kn`=A8ntHy zHw=1S{Kw4bSH0I5+lpt8f*<`5v_8)fsla|2B*Q9KEW`!GrNAJj(UbpXXoGY`>T(Y4 zTiXUbre=7AejeoN@092jqPcQeIP--#gQwXK<&%Ra8vym^~?N}mRCq99cEtH=keL_SzX=`MT(Z>8! zF{@%gIH>mR1Tew5t;Ts+xMH`CRglK>C|wel^KhHVRfXv9PwIFidP#x`4S*x*q4Yg6N&wdD{AE;?}Z3 zyV;K@YewKbO-e~A>EAGE*RA)#M0JzJi=clq7SEYPP1%5`snI7L&B5tD)4xPDbKTtW zo{Rs(=QJ#{YIcIVw${1In-}pgflbsF;Dcyeb!3p&V{SR97uQJne$$oRdw@2XHA16F z`}bLruxy{InTPWIpu)8dyr4KAX(v*Nce z)~<-&manIK(z)uI3^da=wQ30m!k;6PZB%66{MfE5Hen8+a3QfP4;dz7B3Es|_iDBS zEl%zoV6#2Z@W)vEC4>d|xD$>rW@;2H^M?T;8UZo}w zZHKfT!E#(^vSPCBF8Iai{N<08`_8OZr?=}dMeErXfLh)|S4LU`m_PCp_`Kdm*oL+k=%eseq{ND3d4MnYz z%v(Kgw?4A4u-GnQg;^S~2bSoH%;(yUNTbh!#V*uKKT2B2Y*VY>qnB1ME$htY>*ev8 znqRsaykiiL1n%i&SaL`5HV2rws`@2(TywGD2U0@E&^qK<_dBQ8Esg1hA(1D=ksh(z z*mASK*_rbvtlpn`n}7YzPVP70l02~s?`buBLnZ=$gDHRCmoHE0_6l#9K5jGqw^nRz z$B_052oTO+%um_?e0nysE#^V)-qB$5FN`y&;fXW0NaC@_}l6JGGED%2(u98u;}H1aa%rJ!p{1 zF4f@mqUgatvJ!t^**}02{jeXY2ltfG|03D{8S5N5VSFvvRhjHy(Coi2Fgfb{iW799 zx?IY5im0MTS)oSYHs23{CIECo-x$Mb%Npcr>Q7y4c2fN$0-uyE9LPK5&-wQZpIoqTQ1!uIwy|9Y;m z&L7K^mu&NlYVHnm>P4`DK4G5*3=@^x7tg@v0SNm4Sm5vKk`N87^8auspA8>QKHbWU zT3f7fI<`je2bsrOXGh0qE?}3d5&xE^w{iXJld<&tb=xc7kFai?%-71 z+EkHQeJjueyPsQW-U9@!uI%ow?(jjAyZnr7a9hm4Rlugt_HXTuYVO`)nPx9m3g`^} z=cwYpp+QL3$rnC8hO@pE`Tx8#+BfN}CFOnQ$9ijAFs`f-R5Gsdq#952J&U=hWJ}+X z-mU8`_hbnHM60F5Wbnl=d3I~q>34=j{J>R%PG3Fh^^p;G3q*%{+BbyWjyp8|K$6X< z6xuQL++M3E`e2-K+_&ef=p$gJqUzPl&LdGw%YabG*6LhNn(GP{ zF!maj`Iaxwf-bCIUt2~suEJ-xMj|F7Dy9WePSxSlPD;uf8So{4KPScNdf`->(#}S} 
zQb%$i=rKVrF+XLwW@s>l2PVnR!X-Y&j0LY^r%gR}#91||)L;Pn{YcW_72EN~nm<+Y zx*FeY4w|gdEgVJo1ot6KOh%5fEGC(;+llc@X7xJL&ppECl!CF%RP%;UGaESyvR8}p>1 zq$@|kwOs*}ltq-^9?LBpjF~i2>b>E(l~DzFpSsePIt^{TnPcegQvl0v6qq_4xUw9+ zWM(BhT58eA?^R=4VG&{QOgWJ0W&1yTa@8uxgEkGcSb zj+!OpJxu-Tm8Q$pTH-PZ@E@tV(o6Oa-Prprdx+*oA-MAp5-jotA=RvEbu`j0(t_=O z$gEvAABIA0TxA!J(Eh<}o&VDGqmoxHPCjojlO#7PiD36aSh+hC zf0}8C-W7Y9w`tLPES~zndiH6J+~TN^D(c;}{h^hEcZh{QvswV@HunCsLY9GK{7>5L zKec*i__>0I+bXWRG5kfq1Pr=97kH_*S@8-(?=Xy9#&<*{)(W;_`CG-U+${$GKvFB| zX@O-z$@tittXB+f6uQqg~NCH@1s=`yj5@!Vgr9DiEzE>ob&_EM}Tv;X(ke15{fW8`bRh$+X&1lpJI z!M||Kc6GuXr~$&N&0F@jv^*gY$b@2Iln0G;dJ88gYG)+XP;Htm2VGzBn{tQIm9E&HhpV59IBRvxX7j|REwS$711?F5sXhZrNk->j4`sf$>^LQ!c^kmr zr1`7X!6+1ndrzTnZ5>L>vjl^D`z=#m!;SU7zu%uC#M`I0@p>cotIz(C&i}M2?weU3 zG{|Hq?>hKZhqQCTGpPgu?W)gC$jLc0F%90n6!bMC196l6TG;1hxBY0jg^kb4$qmb+ zgD_*h*Rp(K^oYY6*5_2gcjpu@#;0DUMTB+kBYkc+U z$cyhC6&1!2KnLWF`i3sarDxOYM^jI6D(qa$yZm~Y-YPv4?4s&QT{d^OP{>1j_P?1X zGtYfo(JIZHkdS2j3*#I=|4wdeps!u=ta^!>VE9l1Fzk&0$Igr}^u`|#=uNFT>fQ@| z2K9QjHO2?@Ge;{NQG){?ItmJAn2}k`*E|g9%0hUr*0^FPhMYGCPLr_5+>J}(Xp~eM zlnC+*cLnO|66IS$Tm4}x=-f2FTRtyiwP`L+=3}9NtKfmra?@j8169j0+_am^uZ3|a zi|@_t=p<82NeM=^H_gieR&z+M4$LHr?M6Zx3yV#SqtPD~|MNzh|D9WYPnO+RUlYcy zyUW9i=>OvN?^{_f=T;~mTN=;xa7vbN+iwGpA>k{;rS-mjUo=oIH6Zk&n6w%{6`M8+ zP`qM00F(-j#oiJ1-`wb3bz|so9WG)tyG51l9_Gk(8>=m|xku!QP4&gWqEL>a(KlD4F?_>=ttpm)a9VZPVli%pc)@v67%!$*aF%L@BH*?eM4Zk;)_iaQ?OM@ zHse5@wYP+^cRqQ3emLlul+%1j2_ivJVzSZcXLse#i0FU6-zvk6mJ01KN%}J%)jlkz zz@^d-nfz~|15aN81kGqkJ&kM4G%S2*_3f(EA;PKNM(SoH$HqGbJ~TjGvEe)b9?^Yw znyKzhR(r^%y2JyZm>wK>$c3+To(9@P_ObQp=Yf``*AlMA7a$hzbf1P8Y9`bI6lrL6 z2lz$O9m!?s(G9%+*J=%~gE#Er?oVCg`gWSk80LKW`=yIg0&vF7N#!L<3xnQcBov0V zek@}4)7T(9w^q7yDdX8KWo4N^noi8r(Nnn&xhJe}O6MHo|RxX6dS1;;x1qVIc zKVy89|74ogDm3t6#BM;iiG839*OKJ;zYo~DR7Hb==JQEA;T1AvB;z}amdY&EzX z$?=Bx`Ld2Wki`KZgH-XGd))%JYTBwr&mVBC`#s!1J_=$XvqUW+(Z@l^G1f=Pq%>Nn*BOFLlV9|LFM>*FPe)OeL-RvjA@Hk zck_xBbK`c$9MF;mbB*P1PLIV2F8~=MF zCOG2EyY&U~hkY`x80|{HAgs+Dvb*WXnttDN1AMB?*?sWcT4JPRL-kB6e7TmlwA%s7 
zb<@qx+FU2QdzCDxnIN<&(c!KsAXcf|5$#UBBUi@>9cu#vW24=}QlXycC5$8aVzI5q zq0d(M>+YAoC?x(IL*Hh9FD79oqu2j?_i>eBaB#2| zNTEld3qb6&M(E6buPBF`y2Rn%p6Rcgc<5Kn^QA5F+|koZ-y6wQV0N0IMY1Q(I9}u- z|K#Wp)o5DCW-lyZzsnG7Aea2G+8fXTMXF-mqh48BVoSy?X@`>QhKtSN9Rdc^QCL&1 zM{9BUzJz6woiss}WStm#?daw*r_$sobIdEnIeOogs-V;3Px-jh=75(W$wC$Z&AX-kHOYKU zLk@VebAE0aSF&p;QMOxSN$?oAf2|yV!V#8cl}oXs+%j_4+DzOW7SK6kQ$5l|cGW6a zoVC#+5wiF^9QP`3b5L6Tnt=5{=L)L!q1e?&GYW$T4UZZgIjDe={w6>uXLd?l5buh3 zo%*lfzm6$#SmYdazS4SJsFNAXjSf3`9&TYP1f7*8*4rAyXb z5)0uy4Gh~=6MOJe_mZa*ERzaPm*yMY%<9)z6VGC0q zAok`L6EB#75|&rbU5KtgreUH=T>vk#X2Fg*Y*N!MQ?ja8;7~$`Z$J= zisyP+&X>Avtg{ge++JR2e1==l8e)7Fxw=P*oV0Xm_qze@@CEL~=K{Gn_V|xc{Fq?3fRR~_iIsXWR0D~;!t;jHr zRz&QEREJ1#c#mDTd3c(ltrgTdG&tz?#c@%g#^akTCDWTIvYc1Oq9pe z?=tkV@5B7tQgmPh%bL14!FYITLgRV3c5^r*w7_XU{f93EUu7*#R1^K=iMIOUmF%Dy zAgRNyuy=Ruueo_c**pM0S#r_80snXu#u{rip| z!v-{#?3g&&{}?C9E7GTm_t1w{b$KcwPns^{3!0?g3s-5O)uC4Rxxcgh!CuTSH2fSw ziC(fe@^}Awo&CY0_f#)XFZOudBJMJceCG6-zp=H!@+wq>I?}QggnU4qucRu+W@*c} z1?KWozZ9GMuH21xN>&L^VyMZQ4HlxX@?q0j?#>_jTHnNK5j zH@yM0T$)TF4`j3q85pDGJd)=zUdu-^5J_1XF`qRk4L$NqizL#`LP;3s5Juh6iq-5Z zqS!j94d{`d6~rqq^$NBNI^5gHzW9Icmmk}QUp@G3@O6^=vmCY1fuB8d{Sq{_Lj$*egCC|YH;}m8frj|8YE?sM)P28+TF)%=vH;V; z8*i2*qfvvb3#WQnJhqcPD#p)dwV#t8m8m+U#_fG~(RqG)CL9}5En%$8n;5qUONEIH z4)3xc6DMW#i`qc=CiQ#Q4>)W!cMD&2pX%O{6)TA|s|*jRmerDO#m>ltug&af~&wZ4rEqm{)FTaO}IlB>3jp7fF*a`$RGy z`Uhluo&AJ&M#SQVsL%D}1*wFa0W-QKfZjJFGH_#(>i*kaoqKoE(DLX-ZTd~Q$rPyw5SpZuIA)iH#{hD9g40texM9ss~UGjd`t zImj!FcPH(g~!rUYLN{e|30VQgR*hoQQX?o7iz! 
zUBaW9@GDQ0A1wtjr;<73{64OJ%(&)?HZL$pGQEz(-XXS!o<2aaNV>h)1d1?90)t?Qt>ihmrYU{ko zH!oTNTngvJh5nn%Be4w%Rxr=_7$i3nS=?(-8e0s1^`TliZoFPq6z$_Y79{x`B_@zj zp7W6z37(rU_c(CeX3Vz-fOZLlG`C;B2gv2t-D4$iMT4Ib3{^KZ>y>$v;&3od=w~cy zizfPbFfcSq6H&o1a;!Bl!q>LQvuO9TL4epj&>xKBpK|8^dQ=tdYw&%{rf`h?Uo-1& zMcxJ!+swuL3-Z-pC4|`(AC5a7Hm54Xik_(^gVJM-oW-BkoHl{1J1Ww<(HkCTExSFq z0U<^B;nzoRm9X#3mkH4wH+cSGOxqE@RRMp6oPdI+500Jolr(>A0-L5Jj2DVFof-LF zA08dO1N43E0dK~R6`pVVGn@v@YO&hjMgq3q5?}(OO8KwF(0 zEBvV%RSwgQ#F9>o$Pwp4l?pvMR14$YF6DsM#yctVpd@dRfE^~I{CL`pmP!~Yi3nV7y85I zrkLlaMpG;t4{x1PXQ8v_{o32^)suEwub$OHN@HuJZ1(u&z6W&P@&I+SI5=A6K~N+L zI}hYMY;pU|=I|c%neVuqN7Lwg*r#I5{N_0A>ov-#fe+bz3#j9R6LxQYxG3|Fb=^n3JUV!;yN- zbf2o?vRH$n%c?d7*(Jp{F8%|^VFm~`A~*X=ZGNw&baLqMPEEMyWIRC1j2eLQdhQGa znO>!!SP5MYH+Ij3JQ4L2qb=-2JUTymoc_eMub<{jP*rp5zp@;`&(j~BtAF#$Rb{|+ zU?;a7{CXUr#J4zITfX;5Nm%7YxGNoVXvg=;%4MiPo#rOswv+dsSWvJ>JAc#V+89Ok zR?9m1)xnYj&{_N>J@ruxhKCz}b>@H#iXn%4lO_;E;jMO8QMFCY0_E(axG%6JxnW)L zB7oPSw&xU&9&xE^{|Pls;K!=)s9*Q4W^#A`xk~)^z3@M-a=RZ;n%%;-`TlFerU$^Rkxl#2IrrFta*^L*en zeq3&G@VnYFiq2aBfL=xDop^<KGL~Obxe6up&^na(@p%`eZCwmb zY7N?zzJ$BSAtPC?ogsUQGw{9ZdE0wyc#p<1o#Z~dF%1d9?DhK2JDPdp-w;8{w&O+Hu~T>20#wTq zge!y33%cvc6D|yFIEfq49+cH&GDnKq+QWFCmi8XEy8^6zUeBWxJyI7WYcTo(KJpj3 zZjrB=e#>8~Tqp3h;=->u6htX$4>(VE`CVB&w9XNiagcT{bBkB~3FRb4>*P4IA>H5F(YsQTU?x%@kN;`a^? 
zM(&$0Ity;)cvn92xE`1-{jf~imLsEsg;hx}0Hhn|jffND(vN1|eKh>e>M7g=yVx}a$QhKI z=BHqFJN%SiZds)uqRg0=GS|2$!zikV+!_q1_=##!OHkXWoBo5lJSBjSgDv5C$m06CCf@>*jkNckA zW(n3volGR%i`5T40W9lRC4h(9rPbWbzlanLhRjdK$IgjTxo@>n$U}I-^~m99PB|D{ z6l63DpnKqA#Y@-{OO~OwV#^BIa_uW}0RU67yhB8;m1I6lmRx@FjA?z-Il5gDL9H8a zsx+P{K*AQqjN?9>8n+QF@Tt&~Mzus+`Xf^8=A_m87i_;Z%+tG6sS*V-`6KVjAnTOf z@|Ut!e|4PsPj=zG(pSw>9TW9aLzM1yq}~ZDrEj9=Bp_rnyev|qqK1Ap<^YB1>&_=y z)qIg{=Z6}ONScoL*l!*dh-5v*s<7L&5#Fdbu$IyS^P+9$^WC-CnlcvbX1G$`VUG?lUr3{`a7$NniMA%& zNmgj=9yJ0M@rZvDIezcW!Onmo3M*hFK(1H?w{qR?IJeoJpwdo$*XF#g z1uL z6C}i9GWUO-!TJ-yEWaKbv0Y8D#6EY}h_*0Q2~!T^nzRa{@Op^KLBF#zss3}=#(ghv z(T~NR=D<@(S&r6xlQPkE>Kk?U%w;q^k9s)4fKW!%p+xw~G&YRmRa;jlYp87pmV%3Azis5700CK!ik{$UxclMu%UaK zJOR#{7na_aZWSpcL9WCE;U!91hHZ5Vw@uC(W|o9QUw%P>Ha(gcBHlz?i2$>DcC9b^ zgiS%))I87KFMYKy9nu&ie3E=aH6^GC2iY)ft2%*q(7eScSZCoaoBayjs8&M{ zSL?M3Lq!^M{nXa-rtYbV;4|CqYKmD(ZT1-j?C<|lcu5wmIqJXw9c-#I{{9N2>!?kvD zw=h7{TUw@~T_X?y+nQpZms4=f&loQ*9_)pIa;!oX6?M{|JAU5y*U%AIkJ6?J2%@c&jTHI9}m*n59j6RxP?m0az!_@Qi_jqx}`~!2W zP#v*}ExsWcT&7ZJz`Xjazv8d>LpOKL=}23w1Y;H63J0OxoloN%WFm0iMv0C^@hUGe zf7IM9ZF6g*^m&Ev7o6~&HGD)Qo5H{%LR+}v%B+)maDqT!&bz!)p4PlCAkM&y`)R>& z!)aIabgkK>=H{hhsHT6#QfcXoT5`xpbg+V%Npm$gaw)mnCCSQt;N++H<8lrlV1U%@ z`tYKMdKkfw@)l`9;H{r-{BuGvv>wuh)<0gkOHAirIXr8E~p=!L_V|?_98jE}CsmdOO z-95yBh*a}7DEOme(3gZN(yB#>rJN8sMOzg5fV}TYyo;GdUHWPe!uhKlbfIm60McuS zv%0t%5V{zEZ3Pdxt5->4Qq7fst1PSSLBiH+C);QOftfcV@5YNdtt#&rH_Dycw0;>Eu*wI$*tzPH^8eLlWJG@-{WpWu|h2?d&6t9M^Nn0rJej{ zMHH@eJijtH^m zURw#Z^~5rA%GCorIb3R?4LDQ!&E62kTXd}W7xDeWMIV23FbvK+h61NU4%UU7EEfN* z`2JIa^G}cbmAB`*DIr$e)4!ii`jA5x|JgfY$}s{g3GM=}tCA7j8GYKZIEjil1dEO} zYd|4K$S*<099gBN-rm=BeG6CK-yt}2jKoj_^G7n*ASFzA*WcJ#69jq;V%O$%)m3vB-G>PhjhFButyNvI6s10Nc2aP^D2d;3E6v~2A}Dses>`lq1Uh&>z1sQBCFa?YmsOFAWcg3kDOpE^rdTxxt6U$JWSgK)*DY($ zc?}hmdRHJEl&4+c9z!9O_U7YvPL9++)nGR?yX$6<$lhvG=5ODIzx@T=@(9&I$UQD0Ef87p|9 zAF)#}c2~}tVG4RGh9B+v`WV~hsHY+G;hRT+g1q{}-izl%Ks&{I;+7u@ahC#H&bHk6gGUS%J8vPbS+R-Ok}6u1N8&d?GSFeRmd*){9zw~;0@dpTaKwOXc_<-m 
z3s5s)CKTkR)umiM`*}3Ev5QqaCr&A%rnv@Ai^U`YeY=ut`+s+>ll0Re-X$*~zpEfV zT%(JB=geR$B^aXttxmKe=*{SvIRgi>^;zS;Ci#>-w@V&&dk}rN>3msAlm|l-_PV={ zTgLqYlPT+z_jf?$o_hG0OMFfmRs2~NHyMQvx5OY~HK1DHpuXu91LyD{$;gd|C(Kd2 z^(cc@^#_%oqmQww^U1k>4E4rxoiv={shvYFTSPY%SB#)e@6^8+Z4%m7ZTCiF=3L&1 z87R{QIauzJqz8&s8wnjQx2d|%MRS=_KCBntYBy-BrEk~ZZk{;&3YcWW^M;(F2p#d3 z%khk~Og>>*J$QnAjJxM$UFM@kcaGwvCeRy$a_@*1OcPL4CFGyr`rdG~IAT^+PHSWk zU3%-6WH;)7zLU|iRJ-n;0ZgZ=bhd4}G9}tb-Jtlsa9Y9@!&R7w^*+ueuQqa$Zw5`I z8%E^3(xsY?ELJ>!W`rfnRJ>^W(#S|}bcQ9?Yp1BpKCo~^ zdlKLN2<<*XInWAO@--1lNM=I`PUwx$YDQ#$ORx#+KJqiz|m>;@Z`oOoxKNTx^GO~;V*`NWp)QqX!f>gI{rt`4e)Kq!U_aOiDI zf5Se>+?rganu|^~8mRklcZTWsm%bs7?9C8!`lPM5kAm95zt7%nI4wvpY$33D_J$8i zZ_+BI1e&9P@JNk(RNl9o zTf$p3ViaEUaz<5+BxL0V?s(hVxtL=J3>DJq%OA)-)R;Ur} zvl~0!%l?Z$4U zZ}Npwc%s4*d2ZM}EegzvVA}`MWMgu(qU)xfN*rR)rf7;wyW>&4`v@(~0yY12bRyA3 zPHoNoc3O?0qbIsqF(@#1TU6DgjQc_vseXk9YL3tTc~AKNbD94n;{qNdbUwH{<^CRj zZ?7L8AufYYfXXX)>#}3&pT>}e3&vya2n*SCGfNkf(=U|x^;p^I3S}%^)eh2hsIt4N zsTW~>P!KK7s?TQbf3BRTG@11neD}SS@E2hFklt7=z}&-~lyt|~zc`jG3M2I_a!Yx@ zk!MeU%9g5UFnot)8*^pxTADuQ*bDJ?ZZ;jmQ@2$|6y*yfHSf9sUp;WLVX#}Ufj~v< z;&&3+E~2Kd$ukXmIob&&AABSFdz$*IwyyhE5eod&*UGO7iH&WpIYtbyj=ohGk|84~ zqE)7J#Ha5Lix?`w)67gpRr^tsrZ%+ooKR4a$-sp|iWBj5s`w_?LeWQ|x!u>zF#`HF z*u>A?Ia!OJp>Kfk%Bq~2v$!+9X;T~Ka|?DIJo%40hKw_DU{g6`FdRO5QoC%ck@wYs z-dUgV`yihzs|T(xT{pIw+O&WdoEeW~Yg&bzaF)AI!Mryz|2)CYC#8rZNw@8h!>*HY z2d-^c);nzz%0O;rS!x8N;oJgKLax~Lkf^m#E^fmS$Gznf_!Vz9%uTH*fSv+&gGB~- zT*S_i%+a8-#;0m|KW)Q>6oj0*(x#&7tEwZT7w6zy{ojN0|AB+}Jr8NV`29Tp6JQ(c z-7A!7tti-WVbA}_>q%4)c6{@2^g$|Ea=i3WxEUY6Lnvf*uqj7UbI|`@f9F7N%y_7v z5QC5wFg4v&6k#87sVV#242iYRVOb5hhTcRk9>r1Ap8lLjG*J3bla4Fu4ZMPA&kIz( zZ=T|!>0psPf9{LS@(5wzn~gYHF{3&y^d-09K1S9ghHx4g9p;Z~P6}8l9H{DhT6)BG zkS2I9ji=s3&(`G8G~NY&-I_O^Ra!P(kx-!5AUT8X&7UpQ)28EEYK$`|du{>BUz5nG z>K<6Paz=@#;V6B~2cRyu7i<2ZyCam%wRSbaRVBS?0x(dNyT-?$!FDI6ipc=irS0Y3Tp0Pi=f((RB^m)$zi=4<+$A!q}Gs znjhU8b*Xs&Q1huviPsyW!O@Cx%^}I|0j?oC@tr`4f>@krWIx|f8?yt#KK>FII^`Cd 
zI?&OeA?kKd9aJ`*=;}Oc+J~Jpb&I_Q@3122j+JJL^^x&#f(Xv{2{!(yAiyi=5!vCv+%`+EXPCCv#$eUQuUSbMGI|a`oi$w&%cX`I`#A|nB2HYecYVLK{m`r&Zl&A6*nw9&eFvqP1 zG#uTL?zw#y5y;h#fvKQTqX~m#*Y&8xiR#M$AYYeeNQMgeI|wEC@8lp&m>*&6y;^@k z->Y<#TuCxs_!iQDH=|M(dy5>((`G;JU^;j`m_s4iAadbItI-+?~c!3mZ zi3$sI%St6~w(#u~4Fo>_+Vy&^7#CPO`w0n6`ac~0PnGV^$8;6?`+g#C4!1e{XxzT7 ze8T%Wl=tvx^uEbjfr4}Ux5LGSv5pAItjuo4B$XY+%R9{utUVTk%qs!e6WDjEc)3OD z%iJJQZF2*C6XqmgqHE+OsUeI!h3Bi7K;2rBsi0@r|!*r~=I?nnwxa1-V9-`@1Lq zbe+b(1Kj`IGyh(W+;%?ke4O4g3EpD#6WQvd&?UemNxxR@_Y7F&<><>FM(HK@RxqN-X zS7@5YAs5vvRY*}9@{HW*;oX)e(?(CeS>2D+y^~`w#9nUM7{L6HK>48_7H&x#nZqlEjw`Y6JNk@N%?hsvh$aC6C zV`AOL5*|{3&aHR!pOe}BVn{Sdm8A?WEQZ#3VCyH?L+gv=u1wa&?a9KBg&KY()76Tf zBs(Beb9Z}P1=Xve`;EwbqO&`%yl-R8+(J+xe)yF^<+6>MsNa(kZ}x8(U3H4m#07$q z1a8i`fLm@y&o+!Cg}Zh3^6e+T9HOH?e&%m|D7{8!NjvWUFx2DSkDpj~oaBf-TaCOy z9jWY%9g}Qj07<1*B8zrvoIMnH{U*c=zAb_%WS^``P^M$Di#zYS5%o1guSX%HT$ifc zqGP^$RN0WECZX!#C#QIoDZyKe8%y`_@(&UFl`+z}oWu?1ICh8x78F~nZkwDsF-S>8 z%)>im34l35Nx&&MjklcN7LPQJb*j#W1e=K(fsX z_;SN^V)g#{97Fd(&M!``62(&(&-GL3r1etfZd#+n2N}HL@CXLudj0QO;&-zZ)!oJ_ zJ+)dNxEb)nh^4ml?g?*v5EDj^#$&5(-q$S2%uV~_&O0Aa$%Fo~wigYj3_#8bY_*%5 zH(^UJ+pR;jZjH+io|#sHS5#a_Rq)7jOWHSe7m&arZflh0(I!ZW5i_w;xyjmW)?{3w zVsEZh=~_vQNzE4t(a-=r*TkarMw>j>-R^s8%9s^%L>2B@*w$!gL>VQ61zAYycu7$L zmmw?RGnJub_2yB++U5b@D!YV-Aqul~q;*(iSTcNT7!Hf4ZdM*HnZoqf$D=-_*E4tL zM%AE}r=w1vL!5wSX+Hd?Rxz{A-8;Num{v3WV}k`7GK`gZrqk}5gXlbLw?dN5)T1bN zL~Wrl@2zOc;3O6*wc=VApbeKmeJPBS9`irpMsW7w%@_(SuEqpP;cIrE6SX~{w`Qwi zE{5@AwfMrIJ{y$5pvtV`!d;lNQ>yQhF##TQb?vuSaTY1fB2CjF-%SWc>J&uSoFPfs zD$GdFTW|;RP2PMf-=0Ul$=>T17LvVz_G9-r^Xf|jP#10L)owYl$-2Pp4CI4=xK>_v zzM-VpsBzgKx9qCk25|9nqA|qB$b05$cqE^I?oN5QAa6;0tx;y6$#i?T{|P?|?Qr<6 zh#}P{?uw!ude=z%pLo^nCm$l{8BO~=IYNKP)$1bTPf9>Ju4a_Sy;RV=IvaLr8tsmY)~G}dkB=aN1|DvT zayC3RE5JY>Ln?A&6uINKljn?xQt>o|)YKAezbQPI*Ua5{V)%;$s!3rrZ+K_o2CHfU zghGgW-}JGp7finOiR$CMHH=tXbu%D}&&UG~^5kWaH-k^ z8@D=EKVdF2Gh4nc(LVKD$XL%~T#3Xm!d9F+mJ17VR)7T1?&+Va?*V_@>D!%rq15|s z5o58Etg$>1z&YhDGR~+T~w@~E97w|=8c25T@U$cdz$O?;I)86dp 
zet)LT#vn(9p30@&UH2$gvmo>Q!q(3vH)n!;U3evWy)CK0fJuWfSkUcs(w-EFq;}R! zy}P(t1KK$3pr&snI120MyQIC~9%N|TynbeDO!hwS4T6X#u{9MyIr0Xk8@s|(jHim$ zYNi4oY!tbfk%aDn7lg)Mtf8|HW|6q2UBU!gP=JDe7#6l4@;*;X{ zjhNgU9`+3gL`}?$?5nfZ%;TkE5ie<4HScDz2bp*3+x&>Dz>~!NMfE30z z$Y^M`(grJWB{iDa0xp)J37BQkq<9TjX|GN)TywavTo~%xO<{>e`NZV@U=g|sRkM$0CwG(eAco~<_Dr@)1NiY~Js0FN>5kBV* z!S$2VuqVx#@r%Ys)cd|%o(W{Fex{*-`Ua7iTSGptY%(ajm^rh?Y;lh}u~v0T=^VLe zA~`4}&#paLXr$vc8q!TNMG=K;fgRE!M3nv{SD_oSf6#R!KD$=n2nESTa<#Y5;r-Md*5W_#dieP|uhB_Mup_BEf-av+aT{bSlNCZlBQ)j9=V9wext%W7ijaVKpmR7OmMM-O(fx+(Yc3y z$YJB5(=fuk?7bnS8@gK?_@6A();=u`OGpS7jdy0p?$b<@g`$^zaxkcw}CS@>#V6-DPm|ubmVUiZ@n+($b~Q# zSDUt5;462XQU~m4w3vuvJO@|$eKFWBNw&zhz4p0iKE>*fJ>q12;@VRnbT+-KMP#l7 zly@HJg?|uLPkHONet=6Y)6vFp6%(OfO^SC>xw2b&&qxaWyp-@*i<{i=z-^NHpznhK<0;y;UFLv_&I2#McrEoo67+4!4)o2T zV--M1%HTI!NxKs3X|A1@x|QETQ5h?_=>f9be~AV82G~EU>md!WK&A^3TQQ$rdB}hL zR>z5;AadN6jke~39#qDp3LdImPB2#A)1BIAEy{KcsG?;8ryemL8~gV=Ar1;K{6DNb zZl}|S-Y>YK{xjV@IG;z~Y@~-*cLq@$JwfH8hPi27m#Y9|L=$h(-9xMsjrCG@=G0-rmr=A#uto5Hd z2scIEIM5$;m+LZ?V>H9QpQuoN3u%%%Wut@GP}re%9p=o7<2Tz%fCZ+cQ>-|48F~q+ zWmQO_gI%t)3V$5w`Xc0Ih9|&?#3)7HXm)6x$GGkZ`ABd$~GVhO}lD9pBohM?riX@;srr%D* ztZa9^oefIXld_^j6*c%aJnHfq7Y`M&RG*Bq9yiwNItoFN1Sm3!pNPiT14BZ6D8f6t zkT7h)wTS88jrKM;x=Dka)>8O~mZQIRB=o_FBr;z1=MQgO+IMAYPW=)%jRc3fD;DRr zfV8L$;bjVz(E{qr8c|VJ{Z8y2xa$nP<~xi-8h%{OjYDdM;4zr@j&a4~{qDFZY|RYi zPPoS90LRRj5Sq{$Ivk|2cXz6W#c=C=D**Jg$QxeJqbA->$lWo4j;c!ugPT?NnvC5l zyYW6|y^L}@6Mf=K zMPQj-G#UD}tX|D==VKWp@*kpwGxMw?8s%D!xtD(C$l!gtxV<3_yMog+Nil6N^QDTo zFA1-W7*drQ;toF`n={xmrR61CAZG1UJN!XY*zZ26{y35=chglN81^E!teW|VIXHGk zEjN?df=`pTHdV^BE^?UEIT$xMZs?+jst?HU{J?$-HwQK&!dn`MxT<~Q`b(0FDC~fu z!Mc2Smt|*zM7-*X;Xu!{njS3C`7}F3^fQT%eJS}N3U4v*SIpPUv~$rQ%$YZ23sJ%W z;`Djzn~f2LHQHhi7c{{>;DQCuD?EnGRT+*+S*7-;3|!FdPSEf-`iB_ca-ZOh8KoyH z3h#g`oPo^vfr~XSH$<_H*g#_F4H~WJe%&3HK4=_R?D0$nw~_)z;YSV>oZ;SCh=zR8 zHP??Tl4?=hevj(nD&@--4izF8x)Dpqe1^-uTmo)OEl2yukSh7Sdj>k}(nzINosq>` z_g$@oNPCL7`EBYY@VaDG54-}E>hCeaK`;WD4-ai|R{KA>xRpj!yU_#QkQr1ci+RdD 
zAgv{_57DMNcz0JAJ@o(^Yrkmn6Fy~Nraoz|?sw37d)jsXh6aIXg*{{D4a=jpkIHNh zppTa7(q16}-_dT&vot;a9ViSOIk-FoN}H%`+Pq`xKYvg)E0Tg-HrFmK z&N61+46nLzdxwHl3>00eP!y{LOp-T)x#T5S2I-&PP^f|c`W+)~LEA_e+b;8195!ZaEI z53uj@A~CGTCfCHk&NmuA!S%*zc`Yx;GE`mP}mu9oy4Ruj?&6ct0{=-FjpGv zt9KkOQ^ZPJk2BTmCR(mcl5R!!OJPbZjUj|QK6Ryh*E8%C`;EO;4?dAnU1G>TXubZh zz_H_})|kFB&G-Fo${U7Xne?gQ`NGGH6nP4Buek?WQ@*;2YQ6GUSfH3W2RS_IUpX0f zYj*}|Yw%5<5UbUJp4S|)^eLL|7_q>#&NaWqsD4asX;a}hiO>SBoX8iio5 zvWa2JuP04K!G8N&_>zX5U`<9eB6$PS(OT(cBtVcVw&{e>-W-^(#1gcr zy{B%4LK>w``Rr2b^SK{6#U;@quO(s;Rm*FX#6E!p7m1#J$zFJ%Sxt?GJ%X`v8AdsL zJnplsHViHqID1NN&$K%KJyDfF=x`wTf7_^HvCJ0DsHOaREI~c@6=~5gbl2Q5nWizg zUs1Rbg>r2&oRA6>eYWj}y)C&&TCZC+6n$vno7Hpj@B7`)jkS65R3d9-lm5w{LH@%X zx-{)>1s2LiA4ZSC!Q(*#k0((D6yqT|YP{(D*;jqfdFo;~W(Mc~$Wbu-W;ZpOrVUHo za>ms3DvD|h#!8^OK<_6*ETcFnh;zz;H8KUZrwkkCJsCYVOdkrPkd|9gpa9yJ-%b`+ zcNQy2n2MDJJ{Vp}$m4@!6m_+4D)H&6|HiFuaCf?iub`M&Ua+ogu2r8UC^av4=D=*K zBz)0TFnf+l<3Yx6uc`1}=`m;QWK>GZ+p@$$cCnSI$tI9|Z@A-h`gtupf%o8Azdylv z%Ffj)7X~uzfVa$zg!v(UJC*tt(w6NwQayp)(r6G`BZRZCAncJ_Dx^RI+kk*f<{;vA ziCnHkKf_E2NLFr1Db3{lo&|Kb>y(_)mvvVbf<#Moa-;@7xe{Aens7{0ZMRxFFL)?3|izOGFw z%l}8(dj~X?rT@dDI3nW+3M$eq1QZaI-cdoS5CsA0C|#NmdL0Hwj7ldUEm4pr5Tu13 zP!Xgh^w1+UKnMXs2qcu>b$6Zb&aCXtyzhUPaPK|mJmvG$b8hikyEJmN;kB=t(jCgItXK~A9g7kSJHOfr z-Al=*shieVaxbUm3(mRvyGxtwJvZC!AEGNir+z%brl;)nB+~=t{?Guo9qKWq#(^*h zIpVoy(!1X7oA|VXq>}YHhpEZ%&TY=J?s%JIzRYsL-<63ywvKJ#(?qppka5- z)XcbQ>j&HB8Hv<1{NFVmuHt7W1G^+%O>%toF-BOFK^M?BfNV$YwAVwo{)HAwKt%3N`iE>O}*S` z|0+hm(;{Ym_%&=3S5Ca%1+yP8P6lex@kl1j9UGkoRykPDkj5aoGf2?dx857t0sCyS zng})GkdY%+mRQR2^l$S@P$z2NYrepFI&2}zDPq&Z?N`kiyH?zy1>VX%o+BH+jrlYm z)3U{?Jd$*d$N2IwBZK>sZ_CRF&)I2ducK-?$2=}L4u#C=KMDFknL9v4-$p2%q7JXT zZf52X(CZJ)o6s`V856up46O*On7RZlPHJq!P;0zmxfBa%z931{pUvp(a4hEi;tnki z9?74)A%K4h*C#xjXjcgsphY)gQU`Xb=UR$#=J4*a4+cY9su&t37@m@MxBgya+7$HD zI!LgXV8$i$4^$o+J{bOTvO&U2Hw%8N=7k|kaCtJK{RwrZSyz!LN?peWohM|?!iHB> zp%UUl*KIy57eRx{d3W<$G+{H<+6#Kl$j3Fs$;o#L-PfvNabe_UB1gIECEv2xn>cWL 
z%qiFx2B%79f8z5LE7dfn?GI&ay?uLty#_6m<)!yH(PhuL+T;a)9D2uW7UCRKpfNYd zcEoM=8g=(=-mpp5YK3=!3yjZ~GC^CM@SBdFpD(MT-x;0)2Un%JN78QvcuaYeyQ8~J z_7WRvix}*1;xDnc`sHJbVW1>%4z|O8J-9N4ZuS<8)htANZU@Dffv1tvtgETX9%-|7 zT_(|gGm0pRZmQpt4|6x@y!H=|vWb;5AgsmzxrWMhf38^y#D`9G3nnxRbd_Syr|%m3 z)f!D5eZEv*3I^x~L=egGD`P40V=2c~j#`OcFUNY8?N#Mw&9(Tcs8Fzwr9_V6vLzsc z?T4Hpf>DP{vru@9*J9hK#UI z5XXp7d?Kl}g_f0t_gm=3c)5r7@&r9H<+DwaEf*W!{=N4b=T?`Snb~1qtikF?S1-S;IeQ=w_V%_T81~ScLSy>p@GYszWO3AkodAb^0DU`ePWFI3p~b#JD#9LDDHVuno(nXwc!+w zJ)PWBWnWY1=FvBxY8^hm+1|WnRkr)N*^Gl)jx_lY+*OXy&UhIR{=h6JhAIbj6ZSx(-h!H6LTI(yy1fle27N zMJcpoEsRFpMiJ3>2o`GHk1Cs)IAHVP1`nPZ;+EAoN_exRLjNv@b`Ek5?VdYx;J`sw zu}5w#yTdW7riKGKN+WkU&UT-cSMHJnXCzgdWliW&E3nxdoceCby3Iwx_dGN~p{jCM z!D}Y6WU&D8=gz}+9r^+S4t4$5`jG0^9gnRSLOZT`~S zsP957R6NgPDekq^Jir0dFo#CMj;rYvgsP5}PlEcwNX@?NraHW-)tmSFgQBzBQ{=KH zv;=gwahmm_B{1zO@!1Dl^TixhdUq52GCeb$KsqMu4mYi;QiMfnyq`iw3u#ZDHlqbhJew1RyWu~SnG?S{mBJ9U3XDnK-+PC*I6yTYY>v+sD*UVz7+21($y1td5-+hy4C+DV?NIM-qcbCI5HIFtx#FQ5D1z)r34x1TDC z-n6&IzYGYSa)%-xdmb1>Q^cjSbV9!pjX(vW^e&9F(hWwc+0*9bIPJPmMI#9LkP8qcRy!Q^~jf z@t^U@JKhK>EPZYMeIsoNiBa3Y5Y*N;k($&$+6Y@tmN5y!T2c%p@Zn;i)prWyS5lgO zoEsTw0iKh;eeDA!f`9Nx(T!K^M3EEJ4yf)EbK`<(lLx)0IA9rfb*y0W=1|9Us5auW zw~C4%T%*AaxpX#y?y-r{0IldJ({iQ;p;!Z1H?JK=+1@m}+~NX(xo>w0?k-(o-PCYj zdSsh?7gkiuH*2Ke3}OO{`s>W<<;l498yk9!cMy3+uKP>gkvCR)dw0qk)H+D$pRt@Xj)Vqj!hq!#hP*T#PD0=AqD8 zJ(CL2vigTM%9`#+Cz69dK!&c%b?}&WhUT8@~HuUf2HxBv5{1W~{BLw+d{5Q}<8GYG|i40eu5_~I; zS?!c+MzO+{q? 
zIwruQ^~_s3=^tc!+jK%j82ADmv$oXVO)DH(`Z>ZFRI>D?}juf57OEBftfwH-Bkz?x#H_I5c`%G5y{396x*IDR4&Z^)tRV{>9mQ z!4dv)m@nooj9E>FNxBD&b+;%xF)wddozML(%aZQ_7Hn|nVfpaCJJJ$6@MPW04aH;s zda~*CkJgzxYPfmeQ+s}7O-7dfo{e9~&8ZJYikm)*{IWULrw$w>9OWHmnqIO( z-fOl_U#ZB2dw=nFDqdbG{`pH%%UM>PQ7Lv|Vey0^-I{BDXvmZe2U!0Wkv!Iqms=fv zeB+lUpNQ-{Q*sHg{TuNCXdnGw9Q%`6EZGB#RoYrS!}G64Z7UBprhE*2>-G6)4k;c= z6AHWRmD8*|%RapM8p$ z`=-}Nj$e!@=TJ)rLhXd~y7pC<&Aw)Y;6&nPNpl?sLMKGw4){6x*3?f>(U?!GX7CMZ z!d=mdwqw)GtHNI^u*Bg!IVXXJ@w!bnZFG+KSc1I>rQDdyKva$`0v~j zJr(dU#sqAp3c0COEPO*v(d*-5d%~&LN}b5#)%I93U~LC_|7NHStW8P1i39H6;(O&A z-ZU9-1Q;A%f7YDM&SqeF@v6_(L&Xrs=G$Y`Z)X8aNpO(4Y9Tu(hL{dg)en$8GYCW; z_q1=4`TLCj=S5S@2>?><#D9zU#$&&XCQhj=n_JuNKqtbeum>R2YY>&LEfO^7+4?1@ z1PCy#r#bRzhQEE&@xKA%gXVzqU~Z!t`ORt;X}p4?x9fPk+Uj?&O$9;i*i+J;>e=sP zG;_90LClbEI^{qUX`ALZ{?9c3Eu--ceC)PGwC5S}2%i(r2N8(!&si9;U#ldt=KVAXWxAR&y% zlO5YQyO4c!W>1q-dr6crsL823^R5Tx5rV(Z`iAo_f?v^Vz>vt)J3rY+pNOCp5(UGT zxl`b;&$s>?tBU0Z%3Utlp+SQ$cCK;{n7(?a{{v{3jz=6_j`KQmc%S=*J{`n+WQ@*n zAb8T9|04UE#80n0CRl}dWpe+=Tbqz4&V)ah@lq&UrCLD}3?4UmoZj3B#3KdHHuSDT zwl``D)a-)N%>mTCc~;H!5xw@sfc!Vd;D1QKyYxdbU2Xl@F1eqa{QIDa-y^fdNqHd# zn1SzkyWDws!~J{SO5w?B8`eQ;qi<$edxZSy|6IcIiF{$*i~CKc$@@q1z2NP@4wZ_D zzdG@9;E|?>PO)S2`=D17{fOnOtNlUTF@F(oB2+{HV}Z2}5;pk042YftFz<$2)L-MN zZNnj;T^UD}1Q_$ZDc%DE)|nVM{vz#v3EV#*HsTcEM_3;yQ~w6Ie|%>XoD-NFA@7GD zzCYTZ!L}BdoPS&XH$46GOFyzy?9w(VPn2H3|2WRRbA12*yWU-`>}%!2$`IR8och|I4H$+I~(ec`*a`7@zxN54{> zr-AAFURx7CJ1|_7ILc^Wa{PhgwyJ=*Py&bdQ%|w;!0b-QFupIGf0nU#!e7M&+OZ_~ zeeT(&{}Td7m1U#$Q_4RnN)zg*irt6-6l>@E@4xexCUN$!R>kLP=3lI+ef&u~fPc`) z^G_pvTa0ZQU|d~97XLt}{ds*HAYd+#+5GO??xBHq&GIPy#jwUz1faIQM8^Gnm}?G1 z2-bxi4)4A@*nbFEks%{=12{U9c z;e;=^i+IyoihG>q*aPwquX#3jE;IS<}$PeDbqh6|IZZ#g@{AQ`ERjh;%5yk`7e* z0*MX}j-mRf!N6#|nlOg)jDu*hkNSO`Rk9c$w{1S=T`chWCY`X$jCx4K&XQ!@HC1`lak_2W{SEpUzjhF#E^yY_=i~OBJ!Z)S zq9Ch&y%Hx4u_nL4=CL>b0w{w26q4&Xqw`p`&Xb4qUv%PsYW%h7NiXVIgWrCL0DTR@ zpELxg@ehxWxdpnKgZgEm$)f?U1(mB};@J3bc|TShcE@r>izl@Ps43#s?sF|4*dnzB z?UMR5vk{73(m~7I8PCqK4|yHjTH8?zjcjMr3n}>jIlm?Yg8;PF;gD6A$cg;mU~Bxi 
zvq08@R-euqEjL-ph*4H>`J-`Ic&#F`XI!suV@tCxH&gUB_)(F-srewb<$*Wd&p*xR z97~aw=}1p2=1vh`iRhV^!bw-9EZ1i;%R5TbbH=K}LPyRRHj-7gpY_Zi3+$!j<+^9b z{1D+0vHj~cG6%f8>M_CN#^rs<9Fg`wgI&jdcExGaz$j-wlP=htBujL2$ zOBA!*NY#i>o6pL+rf?gP2UOx&Fe}%`;N#KST&++k?m@Z`e!6=D0xA^tfAny;kOiy8 z5v`|xN}+MBJaDV3nTH*5%krGpwx*>*NuE%W`^f-X_?L|5D_bGew=ze!Y=;X0tt)AN z6on~q<4tvC)_;UsB}vscbN>UCbHj;S{$(T4AE=rc_FgeC(UpZvHl zA~rn)j9f{}@SX1)2wTsif=*fGf)ef)9XybAlMfWybEyEm?`!~w@zEc;^@A`SKa2=o6oO(5yR4ySD% zdPM&|7vHf*WM-^vCmh)}RG$oTQIhp?v`a6R&KxSI{MfhK05#okt8%8C7+w{L&TLFZ zRmqOv3JS!C=7W7f5{)Mi?ZoFCbu|_6?rKiNuffkPmqdLnSJ-lQms#2{eAko;6a~cQ zHVkUdGR1&!sIVowWbb7$p5Z4=)lF^LJwJM;l9^JSOh0o*b`Vnh#SydM%nC*$+Y71w zHBO*&4<{kb9qJo0F1Ziv0k*1X4cI#=fgd{YZ%h7n6Lt-h<_A;0=3&v{q7VcMvCOhN z*%mQOtH3$=Rd`m`!Uac76C%qi9utB-_z>y!vZT6mmo}yqVmm-Wd$M=I7*)yA^yhE7 znMyo}Dm1C#bqceGhdc989}G$G%JLJ&AZxD8a<})6<4%0MRI(C#16ei#z=T+*Fe`DJ z<&AKGe2Jh-!IO-pEhl>B-V%VWLR+7BzPhGm`@5$1pn%C;DYE~8bw8uJ@k^7Ycw4`e z{zMxpS@&7+9qf_i{ywxbwirO-=%gcU1K1iG9O(_30X z2fy~xx~aVPk`VgMuK}T&!m>giS7h;<4xGhFc9O;e?n4Y}P~=WMJT3BJ_^>$Wk-6US z0E}}5nb8zbMwz=WnY!_T7SYu=PD_J$2q}j@u)$Cc)ZP0pDV2L=fKLQsBErA#l^oHq z6&6;;{TSqdHLg|1Dy%O}42QHm*^}DqreEtABA~oU{TLovwmt`VI$T3zDH&vb08jh?j>C$)*NaP+6w+5JzlLwhA!x;y6~oF z{&KN<#fdYV@_y2b+p2NFE!LZ=S1@-vGHKoQjcDpMT91wd@%yR*IO`EPM?2SIpC+pw zVg7e1`}bwlS)L6P%mgW%8nw2DpYRb1eM9PnlFdF`Bum;MsI(WO*lLn;x|KXnxQ$(u z{~J|LKP;IYeXv9UV}wy^PJamYcP%BNJoJh^m5J%{$-#c;#L1bt$LufEJQR zJZTYHWi`8Hb=wPoQrv!oAIT}uR{kDz;EjAM%&d)&*i&#yDcc1Eu}%W z%Kyu$k&(FNkTA))P#Xnq6-9xU_)KD`&s1cf4^3%pa%-WO>OpR}826a)+;3;<=G-8_ zut!Y}*qw4}_KB^o1N+5nz(Q;XzLxtgF9&>|pYJ$-Tx9uV1=xRmGu7_r4pI(xbjW`6>iVIz_q_^LiJz-*a~o%Kl}ZywqczFx)gmVkTCjgk$#2*V zwe2N`3yaF)AZ$A=b7iLcTAv>x27s(Z#K5BFoKFrClw#ata6|hCt=d#bgi#+6QH2=Ha1j?Axy6GqQ{~HfJ zedWQ=%a8Wqf$)!m=O(u-Y9z6w8b!6CK2r0xW zwKBd!^0OM~3J>A(rT`_5v|gOG-cOk!Z|U*`GhnkJ?#3&K2SMElQ2okk+rv7xN}J(QLQY{3q1QUq2@3T>dW_{y)=E_o5yXYPGTRHHLi&Nx^=pBbLyXR(+Q( zT7QF|?vCTY?bedr)OhX${|qA;#`oC#i)dL=cfU~?R#T=MaStmb|aVvA$p zEhAz+vwSN9|H&9eWD0ld7LDuZxKMp&V?i;*v&vVgT0)p*XGA3(ZX~NL=L=ieZfmy7 
ztuP&sVaMs#mzbdLiA>fM)~g&NlOHY>s7aF?eEO5}=|@2#@8>Q+6cUWro((^TnxQUz`WP?tMU^}4x>rx#?kBDhjGI#7odGT`P+da{n z0oFLs)y0(;m4$9N1xlyp=7$9j&_C z#CN62MwQ!Y;az9^3*s*d38mIod>M1v$!dwSRkP7sy-G{36cSj1?I(N~1nrhUla{T8 zkT0PkiL==>>s?;6xowL;JW-ibwsZ|$w^AC2pH=bOs;7w~=z1V@g2HNiXKZXpn6%;r zN*0>qn04=7%%zK2r$uN z6b;bQ$|fz0Oria-&Y+;FO<=o2>|Iqy^N+l~zWi05ZhJ2@M+1yNgt3cW)6M#_)`=hc zLcD#_*e=?1jWDy5_3A~ubRWz&_Z`i{A`M&dNJH{-@47zCG3kxDY+c4ZzY9^d+hHB>1DnQ*BOHLq^C6a-#DGCJGA&8*B z#n?K=4-ml}65@hF3rmE`Ilr$mvo|MeH22wcttQT$))imRe?{wIhC&tHA<8qCy!TlF zo+eMCLKB6N7VIGsnLGiR(*tFA$CAe_X9AZrv-`Ezo zbf%~;Q<~EN4WH&%ez{1V4GMRDknrBV0KHVQ?7k$c@LRnXLUI* zA%&d8WFq@kPT!zoXzJZwo#APN)Vb}<2}kS2`l#F}<1itNvMta4XduFOGvi#boYZZ&86z=S)dfonjsbCFW6gg0tuqCpbzL}|!iB}U8x$iyb-Q^b0 z*lfxUJ7iu$^%>S3D$tlh8bVEdF;uy1-Dh>$=eOb*SuLmNPm^rcDnMPZJ2%lxcP#C!c+&;Uhw=hNmIgctLQ2bCLVHTyI2YmCWl?{z1t zGYOd}N(G^0(vNd5 zFAVP`pW>SSSfV1UWuW(K;No%;ifKaF3@>InWH}$GA|pp#yJxH*#az37oO#p*@*1-+ zn0mD4Fn-iGZ0lDe{LzW3;oXd$`4u~Cqs?y6qIjtpo)Y97Q@X|FALYh)T2BrOAEa5K z8XV{5cgpall2MCNws=~E7y@FAW_y9fuX@eA^lwMwF~oDg&hlSE@yuoTPli=w%By6f za1D9@8LEP3?!^_C^7_y%x;hNt%2s#J@FP$Lur8dI&tk~4+j zcarPa;ht$I-F5oR&;~_((8!^#o!I+G({x&MM}N({O}pb_Yfhi;m+j)_kL+08PQwII zxu9k-9s@VXUXf|1Av!5KEIj#`5~_gqKGtL13l`%(SB%^Oj)j1xw(X6Wij-0&p}M+C z4{f@p_U;filA1jQ!^H1E9-f+&RQSwSArH`-J~ z15};ERD3Pqqnw>%2H}nI8eb5MhzNooyC48(FCcLW31J)9`1R1(QL!spYxl;%{avr` z0KW*@@+0iTm9>@ux$$mb1=}zSDuG{2igFOG3ZwjG#IjYVt!JW+vvRi^WE_?GY?%n= zlOAFceW(h2R2ic%CP7Y7Yp2h$joG6sc`1v~JVCEoc7(vo!0izewwl90#BblY;}8-U zJ{xhAh7BNx+YUsewx@^AoM`|Agrv`|w}ng!i|vp{oJqth%HUxJX*+ej2l7%V^1`9| zmtBbUjRxb_LH}+!|7cX3&K}%KhwT?l9}zQqC5p&mP0x77X9}MqCY}^*zGA|0qcw_RWfrJ2r8y1Lv-ALI zv4#!-hi%+%a1sE{S9uVpF$lFY{dF>(!#^`0;Xr-i)}MwGe$%iE51MUGXMZlJR{EoNJlf$6cSVsDPF9qTD^oyc2CUJ zIrxRZ`em+CH%xq@l$N!^3U6D0TDO#qx!p*kE8Xmq)5qOuhso#DqDRHZLp8Sv2{qmR zR!Eotm%K8B)Ru|`w~14?B(>Wr>m8>)EiiwuB-I#Z&r@{d@iohf8Rsvni76Ykn1r^;3%7Jmy*gd$!b;MrP$Y?dg{$==gSmf$PhBrLYH~W`#1IyXU-! 
zT&#~uxlqH4LJceie9YC}i=wZ0P<*84CgEaA7|Gm0OSByw*e}K4Mq|agY4}Sn=zc$Q)8G<%I8}Hboia>b82?71fAdjmv#+U zTW&!FIY|&`&78KPs>tdGUe|7bdu(;abDv+QUNEx`h?xmteR;ub&lWrLa(2R$rM6Hz zj89;Zv&D4I%sR5>Hb+J3O2&o~pAvV&)e=zl(!z#y2}H}=+KGNYO8ZBdQuJ$Q6uTqy_wzgvD&lb&&oBvN zNqnZ5&rc0pU%Z9K)qlpH3CqvM>K@4yg9#2Bdt^S*^Of1?uY$AgOf4yO^j(T2Dzw;B za7I0;FXd}in}ZbgT(V@M9;O%zR(2)XnnrSo%5{3HaM2X)e9OgRgq!@`_m+`Qp!DQW z?>@~_4Z0)c1_k$hQLO-IW3U3ah9u!SrumCE!08~aO_5#7oc5|t{*7p+cIQm!2X(r=1LAN z;kDzqo)b{rum|2PrV?k;0+fMI^@TB)+1y^6yipT2q%j|YFD{kBDzHJf&LKH5-be85Dt$)}Cb#9_;9ug#&vVIP!!aj!n;2 z6s<=GhcHr8nB-|=28V4___oZpzHBZ0ygt%qu1tT^Noo>^r}N~70ie1jhN}*PPMWS& z4F5K_0nNSxiLw6~hzP7p(ys2#xl3v-E6Zx}pCOJq#k?W)Ksh8g5Li^`V3cDgHh7B{ zA{gr75@Ww70onkf1~~j^NH=fWrf};{>7d5v8Uv5;?zy}e&8RnJ6C8zNHkLEC!pJsD z?O#J}&eQw*@YT!U&%Sl8Ley0=HDyAGc5=zfYG1$F@I`n6&aKx*GD;w5aSH+Klbtw$ zXZBV#o#3@hCaP}L)~m{{8bL=v9%=IDyp|Xgt4rjFk80^S&)L?W*ACb0w#c$|g-L~* z2I=ql=X9MT1t0Uv^N+y~`&|?MZ^vOAK9#TLEwUd@03A;W>AhwNMx_@z)Ew?oEBbX+ zGfkFL+vN;g=kx1sEPATukTMID>~urDtu?XW?WH8t+qW}A;q%VKx2megiVY=Py#}x)?q3O+GaQ%W|qbNSuT6m?=Hb_=$@OIl~=gVXvANei@YHW@K|p zo?A*(O^j;oi9I;Z$x37ZPvA6npGsGM$a1B`)@+wqKe@w1N zB*kEwCTNWxaenR&duocyPq;)JF}-VOf78r#?7Y`qpw(Exj8efaDEg597Us7!PGeJC zu5p-jG@GZ}Bfk+5E_!sdyHGkj?wOpRbw>4+{qzw1s#(~>thpVr+0b$~=~8`A)9(E# z5LDt;#lTpjZ`MS)zfS$d(j@Qq3hZlZj^X~QPdBWLNyyHc&I`p0%sp96s59t{AL36y zSvVPvso)+h^0x6T^##&>3;P+hW@`~EBs;kY2?atB)_96K4E0f?2gU4Eig`{5W7@pO zmC!JHXvZ*NtbcY-3y)a~_o!uM(1>* zrxbU?qq5h7QjKbo_2fwGsW{Bf3~Ijbs#Bl{XgT@(p~y`{EyYeKGO4 z#>uaMdhg!ZJ%cO?+sKyx$!tA|v2woels%WkBXn7jGvh9^2&2Ffmt#t=md&_6hAOtK zzV1hK_sAC0&lMESbhoWB3O2{v+M}s6N20CAdrY88)W+*!1=*qIREfMbi9DG%%TOq@ ze?@T?of8t*D$BaQTET~N_LOOmi^ipuy~i2KU0!u*&xM%o+}sFowkrYWkyxSrGAORM z3r`m2-BmJ;e>Lvv(OtI@Y+Gn8!}qZluRWFCy#v^m3gd*$F2xuls2hIXIX#bFHoW-} zimWkaafHomtu>#Y)f`^Hvq#aEELkE;7Kd}w8bw#}Qf{W{cYEiV*r_KMZqDiW@gN0R zR-Wj@xM4^?d*jDcUEbG(E;_H}ZPZ__qv!OR-76RTRbYr4sfOLmXMn;oxW z_uf5ECsyU!$}W40dVD%jj>>3)j?J8j!Gj-?5}18rmKAf*#=cn1=}XZlV`<|DeMD~NTJI5LH^64!9hLd|K?4it zy1)&0kW#o9sYAaL6LfubVR&y3WS(P*s;gMOS&Ry7*;#QLgDqL^ 
z+?}G`fEHCkbU~Y{ws=E^8vn4z1m0Y2Vx@S;b^Ium>3}ab(BpyDSXE;Gs8Us;Raj;s z>Pl5PxepR|o#3~U1`#C1R(GAA`^;t^W8WFs6(1|uCTEuhSh4InxcnIJ7NWBf*~yp?3`1jw^)qxeEMQNWg1*!Xl%Yf_P_* z5G|IZIq4$t{bVLC2)^NsObo2@Qw6QgFl@zkj;t#+rf%xz!McBi^z0*aQ@@(?`+YfZ z_>^{xj*||&E#|UB$7hV&i)H?3)M~C>xgiTOarVXCdfzf{w+8k{a0QfV%@9mOrN*il zcLwcguD+-mmoJR>@ZTzgAF!;=YbdO7-34)aUz)uWZB1+&lkrp?8*+r{=MijaIw4`d zMDFU)of{BN1|PU>#jLD+x_%o$4-5|A?L$?@w+KR_t%GXmv#qhk61DfBkBft~sKFi0 z6%2#Q=oCm5HtInAC4BTZKnUB1IUz~jmP^8QY3KI=eDl@#mDQn?&us+g2c8>~VJgk2DC0<#M zq(GyKW!4l;TjnQr`0m;g1u5rf7ldbba3hW_C^cvI6otSVlVB|iKc(t7k|~c=bPtbg zBa_mpop_k{gprEwd7Ofg^}aL)0I~8HwcaB0vDH6D+uy38wh}-eA8k|j&1eCL&5c*3 z@M^Ug2i&zzS)2T7?L*{4R%F1)dt-*{^2?m(BpX+WZQHcG=+_&HISs^i^TfS@TZ(CP z`z|g`oXlSGRhkEERghd$ft|}$^UtEy zVt#v3r@$#bwmbP;yd`o`3NgVs8m^Q-sdOr38#Cpmx8S)*i|A+bYkcW0_}ml5Aulsc zy48&d4g!QckYm={FWG1R1l6Gi=hC_PFqs*H^R7L)4i?+acD6PKqa0t0qbhPrA@xpq zqw5u(vbZ7h*;tF{(ShXWHA_T=u!oVIq_Ico5J6$-A>OvqAc8h(xDE0~MuONjqbCur z#LbuHy0+AOwU705qnfIVbWnQ*3SBD$KWgKWX#rp1Mc&s{T-<&gvDvkgny{N2CZN3| ziDvsb@=8yKPWBy2ouQ@UE;7mHed^+!s24hEr}jxie=XG;ANAQi;55MGrM%Bg@1FlN zvyfhz-4X8SBjEgEg_*~Fz{mc9`Hf+-f^c{DE9!yfcIim_#Z{}tOWjpGC1)&DFLwwC zSm2jGNk|`lH{W--ZkNA*>v40-8FPKu@f6E9n6;sbpF=w_k58eykUUdg$HF{& zDL%cgTgzu__&yUI6MQl8A!l|;I!u`O^#LUq#8iR85?er&F=Knl$tMI<=T+?W6Q2TZ9k%2*D#I6Z7^p)!o^#Qgu zt4mLcT|tw9`Gq+G`w$Cgwc$Nz72cheeKdXFT#m}Y;>XWs5KEiAy7FjO>^zaBlM`On z_1s9#^~0c4nR|*6-`=}RJF%w5Wt5vOqAkf_E{h44=f!zqm_?aDek)3eZ&917qnQK& z*{BkF!5y5IlX%Lr&!K$&3h-$IFQtSAo{tLoE`_J7RPtMyMEU){ES_}@M_8l#=L_bh zyS|j1llG3KsJm!IkSLX6xQAlW8rERSMFgkXmN?NOI&u~t= zC3NEe=VTM?rWzaTQcmecTF<<7Ad^akUE&_=Vr*vpW?}>`P+S_Z!dw1+A~==F$!F7) zI)=E!HPl}^E3Y0d;QVtiQC|T%7~mA`O4?C`3vg7y7-yI)$oA_`=7j>q$T zA2@eD+sz)8c{FX7+uG*0hH=4Q*cfCSnctJV||&ESawSOp-(Ydrx3 ztOR+Ae~xX1GSGZSxO?^CexeB+pwDVXS>;jPCj)$sIl|<|P2weArP(pngoyg9Ue7Nv zy%iQVo$5WBSHYK7s{}46v41^N)M*S)Ct}vEW*+vcXOzh8URO{cgFvI13@Ttv1GV5T z$#ot%S^A@F_dpERA>CyylSs6N0Dga#)LwGTx&Z) z8_gS%P1H}Nn2q>Hna)uz?KGa1rgHgSgAUZ_d}sixuB-W6`?xLYs>JF2wwQLqp&OMu 
zYH3*Wx!CUVT57Kh>UASqr!QQBCt5tkCagwt2^k;J@1M&d*)8f&GdC3STVL^L8;)sY$6!?+QxO6WAXdd=tD==fASH!Ymar;mt5f6@|O)T zKb;3s#+{}fnss5pR_X3sXE7=W_Ifa(kr~8J+0(xjoVGr0XcKt9kG%;kOieMkC7O4u zaO{CqCYi(zn-9&?04cK>qfxe6Rus?AHKx$(DV{ILMm>oX$M)$0>7l_rop|{I;h2y8Uxj4|s@sqW!3H z)(DQ&tvW-rwDr5y_c&*6wJeAI#2HZkb3j)kTS0-uKDS)4dIc*i?G;R1_?TX>Q8T>g z1$7CxsJ7E$vjUs!=EaaK24HR`jGfJ-=otNR8pv8L$9;q&*R>ohWUb=BxLN4&l1~6u ztv=O#Q)yw?b9p8=dbf5C9|tKxUdMSoJcGcW>F}$sXc>5DVPDiYA&|b4QC{;dy5ye} zj*G<{E{5!B+b~-=lZkb-g$4yb;zENuHvV$PVnm&zZKj=TCErkoo9jXeg+8a*t5jtp z!_7C|r-k>;aDQclYTrx)2N?7^ka=|kY?hmM_lE1(0$j?<47M*{{}2bcedh4-lP6Ce zx^C-u{Pyu5PsaP2-u2rlue~Uu*$=b*#rVqe+QYv-JzsYA+1dL?p8b00#GR9=TxYyr zo^yJ#M-Hsn>FqP}eOp^cpBSa>z}c`qG=)5aO#0wLZC5b4Z$O@nug@&!E%3D@%dUkE zm-68Yl8ZS1Xx>OQl~7s6CnU;v(J%CJh@$IjTGzXunyb;pR>hRmSDsvib&p0tqumb3 zH8!juYll;gHU1a3H7PH~%Z>e^UOwyK)=u$vLdbvWkek^niT7{(>Lhf8IC$~Fk}df4 z7nC0h8UgSDKB1x2DKQ4ZhwQC&YwwEm?NEf7SBB`JL;bSF6t$NpYuRK0-X-^S_dmhxnW;0mb*m;_ zWJH{>t73<1xrJS^@;>02S+0Zg^Q-6?!LZfNaqBP!2er?pxe78=i(KExO)3yB^ubXX zSJTIpE~vh7>JP_M#f&xPx#uk23dzIW5YH0u;CnP8989j9@E>k#S2KE`Haup2xWnDd zLN4IGs_5+bwf0+IxH)_4WOBAOWRX+w&3IF}L=OJy2rZx>s~|+b~JO za*pQyL<=$4k&us{o;47#z`?+Zi>-GX&Sp;Z_O0^sQE}Sira;_1z!AJ?T7Y&?Y*h9_ zXb^czBpnOMS{BjKD0Gc}%|;rFTk2>I4`tL3_}b_FP+20T0H$2&?LAVP=@W8incVcYWi!=Q`kUpN1s;0#+n1<@>ukXfGa+nDz@& zR{3ahzAZ=bNy%eEt60B;@HK2YTrs??#cu7MBUpF)UMni^&9WZ*UTwD@L77~(m)GCu z@fxoP_RNfiCsuENj>f~o_Fe|qO?5VpvJ*%*$E{Ea3nGgKJ-AywFyn-PT9gfP6pQDu zatI_aHWjyg8**Gj*Zuf=uK0S7Jhejh=lEjebe0ueY)=cbRe1J>zA zC#yJN$E&`b1<(iLY7^_=@YPd>V?W{jAkxU@WcKKWLtG(Q`zoi=3*5Z`I`*|L#JFG>lMUlx{ z#^AKwec2I_eeoU4#***m#n$~Y9cOuGRzx;^*jb5QcMn-7P>o63GmN9YQM%1 z>Qc9BbZ!1p?EB~p#)5FpzPEvbc@sRU$jpNr^KqBMX|wZyx!@myRMyltk_M+RXl9%? 
z-SUm+Is0vNE>)<$%6KENr=_Wway6(CoZ6+jM{z38b5O&PMlM1EHgTPf;o}3S>*EF` zzX^_^l8qWxx%1o|DhC7M-fBi+{PKQrSVWfx&+{GxtZWb-LjR1_b?S6PIP-3cAYehE zQi!=g>#bWO`}rqdaV4p#zj2i*m@-xxW4PKKTc4+J6=Q&Oez!8Vs* zET_P(ki?=#OL~&7GqUciKTJ0Oo+4SIq%4s#Bhs}SP~-g@j#>w61(~;DVB3sh;AF0_ zb6$QA+zPg+Uxoa1>%2v{FD6$+-M^<{=dO~uU4}`oA)<@BP64Gu^oTbiyJjk&#ALS` zCW8+PLHlTUI^#%yz)xvt&i2_v z4m8c%BZww57g`kpm$cr+ndNRPUkP@e;4rI??SQ8tU~3Ks@M)er)e|mTMLzVO5HRoK zi2#nvH=t94i90Ed|E>F%=in%PmMA;-@YX)A%)7#ap)5ka%57%D1&01XOM2$2zGQ9m z`B*C*ad7&)Hlkk&Q*0CtYd}e0rU#VhDr&yu`FsHjU$RgA{cMVB?*GMkrrARifCmh zR8WzPphBU9rR*IQ84|({AwWQCl|}><4V$2{g)n5vN>Gpp2w^77fUF1s0t6D00C{eD zj_34P(|=z)@AxDkxv$^#+v_VJ0cj@+RA|5R=p=$t36R9{O*H(SXD+vR#o^>g4QXbF z&Bxu&3~iiO>r^<+zYILlB4W=@zAAq(A)B8+|LAKOzh^Zves1|7*+iWR&({3Kfhld3j9#$BvtD*klJXbNE9q^n1iraf8uhG)hrL;koHUEm)i-k?sE+5U zT)9aZ4Bh<2!U@}f_%+lbxo02?M68f(EPY%(qB>kN)W742>@~yYbdz+d5u34YHM3-I zr^DGagi7?8L;YIy*U5K9`!jpRvq>{WK#D03KzyNb&W~?jI8yN>jBFiQW?EN$ySrYu zm!zGbX)4#}d|YBmA*wc4RV0?8npkZyb4X_oplQ}i9=^nl=2P6ZLMPMlzG~VvMaaIo?1qEwas>5_Nc&f1EFJ1$Y(rKCgF+M_rY$ipG#(4n zQ*@{JZC%h+M0W1ULZ6MKd*+I*%fR5No=T128xOV{>Fkp7em?!yJ7K8l@v>FrS9wEN zJ~Lytn)b>*7`;zPjEK!tgs~<|R&9W~P{v5eIXS`H%w| z>4yd;&-GPob!xX}+HHkm*2cEND4rk6Mb2^C>ma$xuL_w#i?~ee6<$U(w4Hhs1o$!7 zYS`>>dPdT#JPKKh0g;8T%T>k?Ucrvk2(5ABqnWTYNEgEn8~aFvE<@)>+9OV3 zL?imaa3NxPC+?@966FJP+J1TwgrNnr{#5Y@js`jAmFy!GjvLD*R#iE8I(1pm>M~{r zKqPBAHQK~%4RMr4TQecMoY@(el5Ik-eRs50hHrlm;LdPQogPvi zVTMmK`7}!J4*BKDI(?-PUZ~wdU@g4EDx%6njC>Pc!U&^0u+f;Djm6T?hEn)CI-=1z zP@ty^q0FL%Z@D9d3ZtwhjpLZ*X3~Darn_9Aya=?s z1~Ia#cE8?;F(#!4H@brxA=*$k*81z{DPFXSB;i~c)RPhCQ86ot>MKx9bX#ya&OU~j z#XhDm(i^LW!{kNBAvc;mJzgiz&8>;$6Fu-feQNIO`vmuBS3-_4mdp~dZfUCrhz?Fx z4fJ>`YJ=Yv<3c>1Rbff-^K0h}&wJ*2c6k&<*v_9%E_Za#m}4f28U8#tGA!n0%9A23R|*G7jV$`=4O-rr9Q&@R}xOU zF7<-EeXC9-1TCyv1@2F!sjN?R9!Z=qP*Ain~WQW zn^1lAbcHh-Dv58BckCl)ZmamQ>9&oo;yTBpS4=beJwEg@=5Gqkrt)Q2FwR_E|2QAL z5&; zP?wA9#wSC7#g~;a!-rx*is#t+QoY;F$2s>G18agCZe}xRz04Pe-0N0~(JN1gBa=GG zSwP^k**#QKm5Gc~9VV+LF3i`v`vP`RF9OV}IZpnib_@qTaHYnQWDAQm 
zxwpY<^yl+tp(fNWZI2M4vzN(fQ*|hHKc59g$W4f>wW`{p4`--91RMLNYy6Rz-b?%P z8ET{P(;>eiXy(nTpX|)dojiBNL3Qyxwr1&aI$4=#T(1bIf@B-ANM7LFIl>+-y`L7_ z-X8{#Cb5O&w*`CS7W&fwpqT09ELxJZ^`X+mKF^8@74x9D>6n>QT`691(_-&CRTGit z7oGy7=PYq0+a?#079P6#bSg2kK}Q>=79L#9p515dy8Om3qSJ>txz83#ADGi_IQZhO z6XdCqmesb9ArNm;gJ!mFq|Z{LBVtYI+qU*XjezUT!$#S*-zh~zCdFq19fxd8W&gMd zAOCKr5_4)_@uo@HDI#H+pYAyE4KP4mBLnx4?}#}|6T6?mRxx5aRXN3_r?XuOf@Oc* zYP%DLbzQ-<%GH>T*z$`LpE@(gprOn< z4HurL_Uo)~JGEUG+zPKPVZC_<_6wZ3o*zvWKIBz8OOR7ie+Dki%L zR}X~`G}6v8Br<i zk7C$yIEoc^jG*mUIloAAVz-FbR(M39N=WtQ%^1h{u$esV;}i@-arzc?A+^I7n#ofGN!!K5@s3da~>fK)10Egq> zp>CEuybqspUchX2tCD}gq1^oArUmIYb~HI>L(GEmI~6@RWakJa)S%vMF(!KmidvW( z+rrz!LkrDPW{NiFW#O4!!o!MLXVR-mZW9Qj@2k2^)riB6tl^MRkN3uEIFYkH+6Y)ZlG1N(du)JKp$-* z=PO72Zu&TgFP=-;y$h=xt$%wdpad?^)B*}84t$%80C*}w{7 zm!+`*Ol*P$uZ}EPZA65Xwv5ZAwHe$jtzW~Pdw8g>D^g|USxZVweH|A94R@RzM$tiQ zo%OS$3Ip3H+n67Q!|1K4XA)yzPjb=yPNIjj?A+wR@Owz}XH^(_5>wA&lVyMmcr_T9 z9K4xr04Yf@4sWE|oy6{L)#q~izfdPLt$HJ~P^1v0$z;0r)0<8=q4KEZV;7~IkLP>D z>1acyX@DF7Vf|8K+y0V=+DlE6D3&|-U_cQxAl-g5#Fdp~p$SEz7 zZ7-P(Xl}Xwg3OwRd0w}jccLnG5)uvb0odc9Q9Ffz<)a;!Uo&Lo(nf<+x`;AIbWc;x z_eI7UG=k@m8KVS>C3PF{QIow~ENJIB*(jS1cgaJC*ATQ!M5}>(&31dZwG4E4i`gjtq0W+VQgI!3m6`^ZbyF9W>2qxZz(mCmFE2Z|H|pv-fwfmEP^9py z0rFKm?S(hjku-LG@5Z)ajFvRkQcl_zX8#n+^39NN!%kQJz&zg%JMmt#Hun+Q=thpC z5lc#ZWnbU+eSpiN9&hBgIElb?OE53ZCznvAX8X5XD@4|S`n!3tqoc9k?Y-GWOFqxH z5h||F)>}E#p5t^U@JMQNJA_*~wD6thjkp^luT&Fr2%gTTAk$xk14?@<$p`KeHytpe z+6&6?%}Bp(e;g(_gX>KrmO}&2ctgk4mt@wdGY4>Pf?A7S*gNeZT|7C(M)0%pE^dhj zdXcoP^8lKFZX^Jy6L@;3?gJK+o<|g5)xmXh8cV{Wb&WY1k2rDW9mi-5s8m+cuz-O28YG0G;?qX zdU=I-Yd}qF<+8 zL^+AV>W;^y+a+tc2T8xo>g6&f({5rrY)8t^tj2sHn7Et~#F>*pM_pgJV5p-f0(lzG z*t{u`oT?Kqi4@rMs|CNtI}k8Ff4O<-t5r3hPZT*8YaS)By?uj@Vro%#prigEgcDs zCxGDanUl-$)bY{=MTksW7o_yz;V8Of+lG_Fafupq*{ErM@f)<>ZZ9Z)g#+2Y++*nE z*{6iPbNE7^?h;f{m?i4OV*kZ1xY0|kWUO|pj_1Wygyt@m(C zS0jkNRs4Mg5HP9|-CKc7ZP1SZSO&k^Vzum!zgMk#af(knPW17k-5l1&wC_T&@sXj0 z49xKj0)|sq1RfG!2Iju&%u2iN=F7RhSo*?bWWc3B@z};0luGjOL9nae${R`IcotOQ 
z%o>?I7=kmVOG#0s!j8~<$H5sJJQ2_JHHVt8zTc=O*!rpX#jrP(!i$@kasRn0s zP7kTV9mzE@GC0kZB~`+Qkzx`6To!qHWe#V4oaSWg=Va_5rCI7xvw8IIBx|&-2u&=@}knkRheJJy@FObnDxfd-C7Iu6o{mbhh9o-j| zA%=~k=&A=Sn?7BLri+{+FsMIEgSf!QF1Dk#~JuplyddMX+3KQ1Y{VHB?5?pzfQtf}tLr*gCxMwApSmo^7-7(; z#uaMRy3PWuJmE^k^G=nUQ%R`RePwD%LqK96O)9@NG#AM`ZgE_E(o{R z1dML1A?=T+(yK{tV>srPOb0#1G+AU=ji2w!Yi`W?RVj=ZytV^S6~`fdFO=cG&yXl^ zgkOXLX*>%V5q^D!C@|!)%(#$<0f##Rm0-a|!z}dmFvdHUc`Ch}2!gk9h}+tFin-gH zo^ib5{szNHiVw5;P1T$xAA7lDQkh?^YD@0uamYSrRVUJzQypmQEYBOjqxh@x`8wm) zF1;kT$Q#4}J%x2WN?%@>`fer6O^V`5E=t+l`o~!SF2*OM)-;0ab9WD<4srY+iIQmq)qVA5fSbx&DD9pBDZG=MtZ8%ZwN6Q zPP#dNsOY6cc6Ckxlpi9Sh$^?$v^C6sc|Hs6`hd89j*%KW&A1}B(G#J;dHb`dryQqx zNY5<%U;7&6?>B7)<|NQHhvoBd!=KaUihHNwhC6i$(m`NaP%TsFuX1I6IvV7)eFx)K zV(=mB!%Zu*34?<#ZyeZOMb!zo$wnv|Sv=`l0sPbl1vglnXSX_JDCwa9#PTT7xIfc? z<~!QHSI)oHC!|eAg`$A(-~P4bE7C8nc$L^mDdSKb`SI7>d9M9&qcE8LJ|oVWmd3yG z5?9rMjvD$bLIv_^o}%gL!Iy(iF_~?AV(aMaW8Iuxy`xXek6n18KjBtbbUD7{Lgu)b z^jLGyjr4Cawud0MR9~E&{%(-ssIg1FhtF2uf909PzkMs_%6H1LDG6AXdL%2|j_-Wc zC)p~EfUg{*uEHjtYXp9Ju77{>H1?LZjs@-JIX>@yVqW4O04-j}gU?FwsVyCL1a_as zF#A))`HSYCpHcAC(f8~p#9LdRH)Ktyqrm$H8XfTu|NLKk94{c1RJF5+?~TWCKBp;5 z1Rfv#GWZ{CI|sD;gaNQQH{!=kD1pynUqRU&3(64owq)|(u=hU}I=UG!x&lSbr}cvV z=k-5P%jja06ei)dL-2mS$NbZ$cYp1(vwRrHyzzO)-AX$o+;!OXqq+Kw`up=+6G1>M zqUZTu;(vbRzn>UP0!RpLlT7|+7m~TNtD2GD{pEkZ;m-$eIN-;UdOlC$ zPw(mU4pPA0GeZ29k%s{!WHlB1GZDFU>o-rxC#muOJcbV#;7n%R;y;5?iI&>gteY+0 zRryz&uygXO`MUw-7qx9Jg=+g}w|n90^=Oi`y;*}|Ge6}mY6}RP#h?|v>GMDO>Cwai zji4iwlldu8|E(r{b~bkh=^y_O3I6sEe*?hiNcV`+%J|&u<(-?g&Tqc)`S(A4I1N)k zbcOqdPJa%~0FW<@=fL|3^QkcBe@1}<>qAIe+34Dg7_T}ic{=ZK~fKz1ktDxQ?jKF?nS(8mSf zSQ2XSQ_6n);5!)|r``YTtG@jJ>_Zq8$mZ9?XnF#8@=K=rd^qzHMde=!NToEEG3WJp z!u?-L;?o0Z>|~sTUm2g~v(#2&6+kse(Br+8ZWQ=uCfp34W^$&q0s&^2U zoL;o@GaXp~_S&ZAKH~pA#ixh-k9+0+CEz%TO)T@PH~Rd%)1g51b5GdA?-xJe%cl6R zE4Q1xjSjpz$49p{Jr(nhq3TE7Gkw7Kx&HHjz#q}Kcl@EttNffD`5o^{bk0ZHmXGQC z^Ope(ZVP-}gGh4MClCJdS+1S}NCGJz`u*bzpEB`Bsr*AfMT=epWEbV1B*h0w;&A1T 
z>!2&Z{AXxdO;3LS{Qk^T6d!hu&OW~5_cIc@u4yHTT2ClwF8P^97*9F^NUPqRc-4Te zOAVSt-l>&W0HE1FIKjkA;PIxu=htHqjok|f56Md?;?^$pP8`skZ*qNsp0R|+Sy=p$ zcj!K~ELw;Qn6wNJn7c!@VKt(}Wd09S$HOP1%cCL!a}Er@;-}(~J7F4eBBaJ9+L0&> z(m5M|`hm5U^aGa$V8uj>noS(hoN!VWJ#q4#u)7DqThivyt4AB~N((Y_dP<|Lp4spNR|O zdfg?de@|iU9oS8Ii*#c?VMaWlYAEHR9(9eoVAP(hIVe=U;VjUR@*%LNf6IB&NJPxR zn^)MdUDA?uY9QPgIC{3i@=CNo`1pl-kH2U3g`B2U>9~dIiz(+m(di$hL1TJH7qj=C zZM?EeK9@|Fccj&-cI&O3(><}Sm-KM8UMFm;j|g`fo}11(8##lw$cg-ShONfe*;?$HN0Ny ztn`C0xWqXOIn;myPd+qRcLfyq#EEYJ+CLe6kND#Mz>KWlp}jgW-GLA7efL%_{mxAt z)B3*R{Kc=7FjL!4Tg&Yd8JQybsIU(GQJo)PV$`RBTTc%FtId!48|=Qjw7eO8JTT+H z?Z3ZtF#u*I;>!cde8|*f3$UBtoq;FZ6aFI$w0+@B*4_T`J*JE&QGAH9Zxz-=6waU;UyeKV%J~3xN9kP6H*saSTA{ z#LFuw4>irI4lz|wPg!p()CXhCgbgPwhoIvwuZZVUP7WzXXMQVQ*UT`2PoA*N%7L`b zdsKfs(C4&?f6i{Dz?}G2zN)7Af_FPO##uMOWgLUQl*XQTtsb~WJx_@`~6oj z4c$fK7M+Sp) zAHK(rdF&60*M%pbvx!B>+U@Du7i6{q-UCUJRyF*6>y##lW8765nc&EFipZ&MDLaZFvV zk@rn|*+ZEoc1&?acn?FvCebKSY-OTYhar#$b9w?ajadZy1

chYh*_}jm|lbsv^ zF7wpq&BCoFd%zEH%U#pHze`?SZk=K{g6awevc>Ak@nQOD-B!6`~vL_r3>Y za|{?Vrdovj|N8Cd*mGK#aRBsp;`G;*i3sBP{h~TgG)LvRTU+08x8MG*YgBHch*;S& zTDc`98jaemOA@CVG&Wa24Bf8$Eef8$_3oYAoOOIp03SAqzI_nz&vslEJnDjC!20d< z{SD0{uBU={oevp$o;UE>q{YNI0j(=!v)cxu(NnwR=cRUJK>14xK05CYR?A4)wX_^@ z9RoKq`@CbqRrLQ+RwwKGVTg>vvEU=<(_CP=u4d$k6&$d!jyV+{2Ts`uX7gv(ND#Y)?!Dnw#VX|6L=@~uPwWvNIl-%lKkt(jiwLqU^FRQK?1WG|jpcRBZ{ z-hA@jzAGdctABNi*y4cF9if|y*oe5brxSv`*9s8>;e|_;NVY*EK^I++^Y&idp@nef zvw2~Yo3hGDD;JYp@M?!E$1g76yN2RK*Kv)#uKnwCjVQPb1rw?d0Gl^op5F>JU^s>)>v-^hV}56jrtbrT#+=q`br!sU40MMU~f2acUeTyi8_(HibD%DQI_drn!6}6w>rw|*-yN4D=jVk#-->oiXAYn zNv^R0?`;GM^7u9Twp+r#m|pQkUsPakRim8b6;{e@6z0kjO33S1jor3Z5(e{CgzGi>uAgSE$ zwU_J{1XIaV&Qv^s7|i?n>#zdYlQ z3imJ8>HfF&Gs!8(@Ad%-T-0>pxh4g^ewa~j6;*{%8~BXEGCvnBHhoJmy1Z!g4B9hT z_3&yP$n<#&M%u!9bxCd}>c$jkZMf2U7wxvFUWPcxLu!~D-us>^nqXF$NOYXC-Zf&U zsudA1?27Mp-Fr_M7>k?yX5V^|&K5B^(qPgU1-f)tVV(X`FWJ69fF;OswR`P12CE$Q z%lM!$p41KsE86|`9w<`hx$;CBS5aBsy%Sr{L(XYN{w&rFKWdt z&C;OJkgY`fS#mQ6@l|B%MI8hI}bi``gNo#ckiAW+D>N8mhozQ2WovP2DhJXiIsF_hQ}SK2c=v#j zx&wICE9Cflj`Y~vE+biQlGb45rtO}MQk0fnhjGLDHqK!ZhO(8YlOu&a&l#4)lLjp0 z%#DkkbkRalsw>?F4n7sm+P8UulN6L$q#$qBhquvUAz#NgTiSQa*p0{;M z>AMunn-wB%Jfn212VM6Wx&twE2|t1<$Z`;q3cq@O%Qc-NZN6z{QU7rdK$FtGeorQ6 zVb}(f((cV~k9resul4I7qzS^V0W$7!&FC}tegJ<#_8fdm+i@T>WFtQ+TEg~6JK4s} z)DPNuHZ+2qpUc%BO_XsbOUKW=&zL1#Zc9QxsWLCrzTZWd3oEX_)n&4E%wJY1WZfyh zM5NFnQ;&#?FF36}mq@%u;|>G3R|T){xg)M?Gro_DHKFm~k5XngulOlkW-%&+SSli-}f7(Dn)Doxo>Z zbmX$zztcv{H;IowV1|uvPhG&1bn1o`P#a1p=;N>i-qmNgZ=XWOt;mhRcWxKR@CsRY z9g5}<0cn@gcMb0`s!6bH8|@&}w$V73Zqc1y{qiy8-c<5+Ki+K(X&&~v|2m^PSke@GHey^lnsTka9uvFg8=u;U5S&a<^70$u15&|0iI`ecKU1m8DanSWDx#*fV)P^$rAsBj?L_ zc-o+c>A+&|+@gS+1@TUD1#20MXep{`kZ1Ry{Fn$X=q+ zoT>J~PlJUf-EW+4NVDASKEr)GrhLKx*hZZ(Abor3Qmwh11IiYj7W%@kJyhB;=p9Hc z95ne9l@ySckj$DM(iTNO+0o=Iu*^1LP>9LdB3$6r=QS>o*qH)bBxs4jDjw9Id6(NR z*EWlEV-7^+WWN`Zg2mdhf$UMH}co9X&2m z zOk_NTTz7FRC08uw4TurHh)OERA)N#v9o~TD%+s%z{m^PZ<4JB)ykBoj=CHYfC@Z7= zf%c^m@Q%@}v;x=*tR*?;(fz@J0J=Bbbo3~& 
zIrKo3B7kF|>8i4IGN#atx%#5`$F6@8Br>d)1kkvx(2FXNZuZBq2w!6EjhXu zG*)l1NM8s*=@wiERG6@JB7DrIF;WhB1Q)nx{Av+hz}fi<#X=5etdA;J32IuO-!IQ&M`Jx)MZN^ z72K#)K#y7*JJ%RTKqhNMJ!NM{wk}~xt&X7MX5E6%M=C4|n>hPSPmHHWN}C18RoUC2 zyxkIey=%}u4KEi5sh;ljJXjS$}JwC=nL0a2D@3X@(-L*!?)VE>e`_n2u$U9B%U7 zw}{1yZ?w+^Wf{l@*d<1F)*PvEi9D5wE34*F z&&sogqg?2BLD)WAZO8ywavNwt$#Ff`V>6Fjt3t`kwYrLeQ*mxlYB{t!?WMVS5L_%Q z%QOpY>b!l)32tZoVo_LD0TmLEF+){0#Xr0TdFkGqye( ziVnnuO`3=fPz6P1RHZ#XtQ?seQHWVgyOMQfTct;<1aHEPa}=y1+v<5n9q*AW8r}{o z#S8vhk2RI-1L~yj_tr=H&+^Hx*6)Optgcjvzh;3B-@hQ+u_oIz%DkF)&bt)-EpxJU zl$N54DcIfIA!mCM{J0B!WAle1pGq=oWOb10unv>&hB^BsEp|zW*SGRUS_UnRJ+QS( z)r|^Be857}oZa4=QIG5oXVck*=cgiV=4tWSe#Q;QR*0x%`T6#V?wcQX0W|0-uM`{5 z#4+hiPkF$+PYgF2b1 zn@$OeViCZ`%v+f1QlI+{k>n3UzNYKRIacGE(iK~Sy691RDytX>ndUdM9)_AvHlvg( z7*=4~b56fx(YCU2!%r&~s3?<+)Pft!EICY~xKqg7@M&hsU5?F<7n6eqY@W?q&%BZ- z@YA!2y7N$567&IEKHnjKp4KDfpx{l;jNnRDIjz-tYU9#P( zN3nI1rHS^}y8V6`_cM3aTR|xLkO!(FDVrhJ5xO0vF(pI5?XD=sxjlz%2%utRY)`^1 zsGP$ud@45TbUwE302e!|ELgT7;kkm?;(gDrEq0-($)=*_O;oXKglp79&VOb4zb^N( zLo}Y)Jc-XbrCa)2U$f(pN2pU>Fg8L~W(nSFE*s61wIpo<(Ox~W#7W^x?s3$Nn@c1z z|Dkp$`}cB6GVo5VFB|9c5`e}a)8tqK844&EqJd2_v5pby5PY!}(J(_Gl$>UY9-7K{ zSRGq(niNPTNe7ZM#v2-5{Z7_lQGgxh5HI;lmax5D9arRWhU>BB` z^%Kwnka@!Duig{s<1oP8RpVqe|5ljr1sK#^T8THbls({tE<&IW}bVRIOZa8bj1T-aRkc z-DK7gb>kLpB?>V;A!dPRgtfBm=SwvMGi+<*S@-p_xHFE2-~i~RX9MVK*Y)^kQq9t# z19;Zq@tEp!6)yE^rf-&)k5C-(>kt*cCz;LNdzgY)b zr3o*`VXBmt=JeZukO3_bBP9fm$OA~}*{c@g%H9k4n+NI|PKbWc+D44k?cpLjLw%<- zTpw1O;bgKSk7SI9<8d!@BYi(k`9{0!)FnDRiKCy*8Ts$tDXhPGH2>hQd64QKyFucO z1GdbU>?c@*CkdV#>3+Cav{8ZfOWRAMI)&csmCT#<{q5NyewW+KJEoNJI3z1itN{6; zNdp`jO`+jmt~5ou)^2xq*5{hP-vV@%v}Pa$?sNYNHSk^_asP{kJqN+#9&a)may=U? 
zaw6{;tp1R%f?(kcm@Eri{*aGYsgTu~%^Q0honvB&>g;E?XV_b;)=2NHteS?4M4F!O z{UAf48qGRuc#t?+Ao5r*YI*{l!C^+9!NaEQ^~V&hh3XP!2x9x84Z@2o?{}rMPVfTP z`~|A&lw9h>i^a`#`98Hbo*7x*3)#%7dE$MaXCEc<02(MV>BF>4?|y2I{m<==e_hio z9jEgC#9xW_hYxyUU!}+^g}IqBG-$ff%FU9e4dwNFjobDm+?cTab+I&WD_eANv*&ng zE%$d_y3_}i0d^FU$*74#x|H{*_4hjh8@8SXP#5XxdUK(h)0T)tWP#1$lv6!P%Z; z*TOSjWRdUFA*RbUy*q>=s4=gYYJ1cgFEdP(tr@lV zc8~#mg=u<4L1epDIUm=2zgzwI>Yi$l6?Tin_L2Q9WRWW6f)0m8Js?0-hv82P>8uAM zK-uaIq}iIfd|F1QCJ~&R5QG=OK=9ZscLpL;E$Zy@Am{{@;MjAm4z&R z*)Pb?$e}BGk_RR&?W5-=W)^-ov)WzL5P@mVIV@^)PtnxN$hm{FUP3Pnn8{ zb-O)koO_kvYfmf`Q)|>R&%S5d#ML_2?Ql+-8?a<%6cOIx6(=_VpX>07ww z3*o9@^%a}ANM2Gk9#UC4Yz9qTvF zW4M@|;R(MJq=g<%UyeY5!%I>R!zxD;H zGJocmPh_Uaci-V8@b632&xZc$5dNX9qN^`}MJ%AOK@kkTB;bUy#mMDder}AoG)Ln& zm?t$zO|mh3s*bT7TbCYBcBO4pEXi}{20({>$G)l#pUo5~VDetrwj zL?rW1fDI}&o2+QEne{UOW(1t_GIFvN#kn5mBr|V5p}I|$ znP*r-`|R>I*hA{oVSgui+9&>nMrlbtScQW_rgzxP|a?OEoaK|9aq-Adlg`iN(+ znx)oRLb@+vbLdW1!)Y7oMt@jEo`_zQ%*sfoYZkOP$bY;s6m}#^WyJ`U-k{!7XInG;Vy=8QDQrPu z{YZqFT{FOAdx=;^W1CWGY8>9{m~RRd-R(r~suKwLnRbA6Q`glIE~Q5)oH8CSH>*A3 zU4vDQ4-Fos)j88MLCY9D^OXvvW|t?{tCfG-bkVfa_xaU+eiGW0w40L{>J7J&U-b$6 z8f`sP=eC0PIlHlEBx&46a>P&NJ;G~&)ZpEYoyK|lVdK2Pxs|QMwt%LQ2wND{8~C8X z+`0n4u%w7KV27jp=4hv%ZF|o#$_(*%@5-vQNcA|FSa*1})~PBa!j-(T88KPIoE$^w z_~iPXk3XO97N5`T2?Hj^qX-MxrNr(*EMh&b)*LD8&P1P#5*r11yUt`Ba1o`&zbuMG zTo_*=(rEj>b2RS6R)c~YS0bLo6F|UVY<#O!vsCH4NhoGA%}Kj(_pG-3QqoJvGZ9R_ z0u6)TdgvkT9ME<#FR$?tBTuxmGWLWS$YuTgGv=>t0jXT0&!qc^CEz9VOc2Y#d4CeHjw z>{xxKe6fr(roGa*ek0BjZ^6wr7*GDPpF6o1a^ESvpUZO+wI~%+Oai@*^qTBMfiiS8 z15!ZaRX5rp6eaMXJS{uGz}0ClTIa62I?rTOQ}R{GzyhwiN+8e497yB+&`059az^qn z3zSf#c{-+#^ZGYWIm)9@@5yb{$w!$gqdsF<>fC@%aZaiWW4zQ7F32xu{4aXgf9tgs z81ghd{}bKzAGO!CXQzqIPxc|StfLB8u-I;kh$;$*h3Yf3AhFydKnFon6ONlmnoVDq z4ecDUL-@$Ph6W1ulB9lif`l)O=D%zXlBI8Pnhr=mVsd3|Q-j8Y*3?B{^8L8gx99d_ zZ{wgv+oC|GMMtOS#VZn*VsY6i*s|V+7^iu$^&)` zWZ7VUdp|QLH1@uB>8&nIA}or;Z7AI@`xaN~*DOTiMut?DFt9F^qj6Dbq3*ySCydaR zciSB>46me0y03kRC^_BV>JmfQcZlBf1;3N;kRZhDDq8QuV3nH+m=Cx3+wXQnMHR{1 
zgK|UfCJHJyF792sA2opCAlB7{#&yPxiEfHlO4KAkSE1NAj0@AYMr2U!jJK>#4Y1v& zuD|=NqdNtk65wRaMvHScT8c#BW|bh>d)vg8C^=ky&q`LG?cSdBq_Z2TEl^Qqa8OxT zl&->+y>T_pLHAFp)0{?%GWwpf7gnpTeHd)5t^taoy@MAG*U|xGTjhsv-8)dAPaDg4 z6@J8t9H+SMqf)v(0ifU}Aicm_(-e(va`H`KQwb@B<4s?)^3q?Q+=9RsOrBjn+7M?6|yHKm?X`f!0`|GZSRB zl8MTfb7XK{RO%@LB{aMoxL+03upNr1v+R-U5?b4ev>@NOKEqKUrOB= zYB2+HSXwwSIoz=T!^tuXA$iqe~H$O&$ zpQTuQUQ^_D;Ih{@S_+>h=+V=_lF(rt)4iwZM>HNofUS{5h!62k@_ir#3Aon+C=80W zorFTkj)$-T^}~*M*3b<-teaOAth2nRMAeX%Qr8+5roc0r6c*7ZL`@vLN zCGthkToB}d%SgYJFBeeJ&e9B|Ff!n0#rKF0`-Jk&VTd(H6a%V@idWwpgb_A8<&{QM zyDyDtBL1_n(c+wzQ0mSynY2K&{m3pvq{nIAuKQhW1%KPXdo_HQ*bt@LkZUw6nC>;Nmq1stUL)oom#Lr9TFlIIVC54aeW zR1G3tQ?r&NieQ=G(VUXb1e40dOhtB037TxOd1-Q?Ww2KrKO8ys8x=5n`;o?NX05wE ze9XxFH%s-WQgPGTF00M&+Se-1{`gt`0@~>v|4{E^szLZ46cAs53LMy>o)C!K0V5aN13z@axZE zctE`uzU5{>Halb-$#)88PX}i@Q7p#CVvEN68Q7Kiecy1}X?@E$Tx&m>_{O%dJRcLZ zSgQOqbOE=NdB)dXVEr*UXIoL!!gy`5$g|IF083P&$?34LGbe$-aC(eW>rW(Igz!U{ z_(NA+t>2Zt8PE~p=@HCF*8GEk{S`=TOZwKq0-4GRN$p0J$Gsug>I3mz&q)nVz|Ofv8P zvj`UNvVwJRs<0Z1{&nbBFDP`$#QDfDInr6BeZ4ad6=5~pDye?Ahu@OcKa_ft)*qD8 z;#LUpq7aB~Y7&usmHxgg>=6M43 z4gQpEuuduMO@FTP$p(?)nD)83%j8;|r5+(BGGra9UFw(F{Iu$Us;Dk;CtqhHr4 zD@ZAe$*a~m^LJy{|1y%%&C31*rsX?jBMNLI-|y~cj3?7e1AZgCQg^OQg;tz-(iK7sp9(s-Z6r#6TsnL z^o8GaGlysS>+Zxv_a^Xa8MIM7|7TK}T4h;^C1qJr(EF0f%iPTy&|vfUhiGUEQlzr6P^MVJTd z6&4$ijyaOI`m1T)lGl&|{!y#V((I5V#K)<#$fhK{^J9i)tCgkl?eWec( zhjoHyL48O=l||&cK<{Mss=mNT#N1A3g4KSdOwWeFCt#&E9oz!)c;d@2yJpuh@N{`1 zI;K5VP^*7zB7c)@RZpvL4yqE`a@MTX0`yv5F(cH7${N)PJ3#%-=mReUU^7wAM zurRPUWy^Oiyw8wF&RukmVTO6i>+oe zV-t9%l)y`|gx-GqCfUf#98UeP|i2%2Cn<(|I#18jsEwSxmR)$Y!jpQYqqDqG2 zmzEmaldne91$0}vh$^1kco?^3-RHNzl9&#Os}hMQ8dsK8a)q+(7p$90B7zp@eyghmCofXPS^Q$qe4zY>#K(Po@#8Ky4oI`%djJ@s(kKF zT9(sw{yNGh`m8M85xBn zzMDBFz+lfsGiPSO4kaH#x6T)a`x|XNw?XSrIEi;jsuj$2OzQ!f`jZ$0&CTXM4fEG0B3g@@B>c4uW>Ebues(`Lvqu+x|O}8wh|0hcR z&mT?4L^b3YbAQzLf7Mqbv;QA$Zypcz+Q*HTv`ADcYOIw>_EU^~E3y`nea)6-EZN37 zBt_C7GO}mOG8nt;6^&5FJ{U{Z!5I5KmfzRubk4cc{hV_@uix{J*L=+|pX;-}Kg)Gp 
zu6gksj;hOFy$g0eO}WQy1RD5mHJoSO%Wmcjel97!PulIE~%Pxchh%2-b z<0+ql^)#+^c)WoSSiNu=Q){sKf_FDObjg!fy4%|&(wS^0;YzdX2c7-qSU9lgQ)#Tz zzEm+s+8n^E99`|$bPli+^Im?xTC11P&6Dw|MvaAAOylw9OyBx0M#%50n2(0DrJKzmCGOv7fwi ztavW8IB58x<4dIz;85UJZZJrv9j|m5IK0hlXS1IF%8rOraJ_mKaD73t<`JM#l6aS zFsfWiP6ZA+$xGBNG2~(NF-TdF%3U~_{ff})xlUTPsb1qxSZxjyc?2{dUx=i&8F49j zd@k>6-n6|eUv07Jb8iHI$~ZizGJ+X{jLtA0@{xX7YNFOT_ePs@tA!CREogc}aF?X4c0G=>kmK?OX-|u+g8iqd|t2-L2rxW|iS?h;v zDtXj*S?@0_wT~=u%q;rHi0=b{?>|uGkiWJc6*l+2BJ=Gxs!rCMovgcVQSKSeUs%7o z+M`1Mw3R7nC8k(rfP?RK;6 z%Hcb}>2B{#M=k|>mfuX&yxyn2G3Dy9uAWhx6zOpvxH&gTbvY`wL+wfpf7|9jBXAkO zN^rNXO^uAFqVabBVOzfS5!BLRIzx^=xy$51bF7I@mdkHK<@QeGxQS0tpfs0=V-?XE z>AsTQf*tTW;^8!{MzSGh@APSEM@;xe30o-mMQ2i$>Mx;Q31=)Z@Y$*RG3~_GUkKK( zT$!6&%3N#Xpo^+EArwTU&c&}!RO37+dEIZTAUscv6N);L&KHbUJa{!uq}xTV>yC&8 z8-S~cCgm=E5Ckq$_d5fx4ap{UirjMc#q^eHGJu@dCz2XbEVUgu0g0$w6 z^_wlX2j#kqgzow-T`gn0QX^R4AvGPVW5V;$_vVk#Gn4&(`}#r+wtHocvg^OQfHRrW z+1b+Flh3~@c1%w-Px6>ts-SlXR)vW zN3ipDQhgbCHat_VvJ%YYlpFB`IwWhp4c*kF=*UiwDAP9|a_Bn9xV>9ifOcWTm&Dc- zUVFfdb1xHO>D5;H3YSMSiDIhox0c2bhy`E<;MO)ZaNR87%mt5}we7QHCQxeBS<&A< zDaER_T#eEMrec#<8FGJ%Gb$I#d8n9Lv9m$hB1=f_+HpyJu+giE)_2OAQ`uGK$#o%r zu1B-)u-^@?h&N5xTGoWWXsmCn|Alt_Vy+YiVB$TOK^dG$Q}wX{F=fW zw~TJ1t3$7pe6O~-GsSh+y4-rA@^-hVOeE>k=tkU(RaB(`AB*z9xU*BvI3enhq?V`s z5q;o9gUxN|Y}%}Z*q&dVeNk>>1D4()g93DJk`?HTmIlHqSV7kaj{QL!2{CX(?|Q@1 z0n6w0%@x$7o5yCd!l7XM+yynuneJ%!t<|33w?oqQPKoe_-TZSC!ks0v zo#Yq1tg_9UMOcm7Q zHZfzsd29|gg>yzu{mRPOm4dq;pW}1;s%JGlarK@khj+N$>xb}ru&_HeSDY1nj}MZ6 z7aZ-N&xb#M|1%oeQNit+i&&}zttHR4^)|>JU3EeEuilBnHXU#)n^IPmm|VOU^L|Mv zUJh?0*F6_q*6Z_gw5NZxn=#2WRTu)!f3L zs()N9oLDUZpDuDQFk?7(w$kjz4a!?L3NC5MdGR+>H1Eb+GRS^T@Q%DAWa+oG!4&!B zOM>`PbuajS3p3>@nF9xZ{9p^En3-o%xE*-&uiyRoN|DN6z?B}!ZlUxU3Q25#rf`$6 z)qc&se?H&OiraB2{m_dJ-IRqA%rGdJtF$N`Y^FzHxUS!{wAt)@d;Pqm(4e?p-zHCT zZk6RT+HcNN#)D^`FgaF=cqc&J6l<*$S;o(Iw7A_74DeibBUyk*EuD8(n9w%5?HSRx zZgtH~7Z{^Tjq1)^J84h`p2D>&rbpyPddghxhBRzSsf-WN1XoBIxqh2#GG+EcAc-RG zHm?Rez8N;WOR_O&y{;r4n8VxWm;su^IQ_ERZ&=Z5}ii 
z8o}~mr$et2K778J$nhA>#UF)#nE&jyJ{^r0u!8k0p@3;Pyl%ivo+Zdk~)FFR<1Vl750%dtq%A z^#PMqiBEKkJe=*Yq8Afe!*E`w&6_kTGocCvX&dt{PikhF!XjUV4l6V5fH6z19iul1 zj5}LZ=No7w?FvVkY6D@*o;B!FZ!k+A_H#3+3@VYe@D_6w!3y#1$fsP>1Hra=8~X;M zhoyCfp=;jTp}-l^JGnHh&@bKdj2w#O+sq9SYY14Co zy0|lABIIqKlE8sUw$Egas<7&$y!Oo0)B1K+i}N*>=pkCt1fS%W1YPdBI+rvcCili{x7IEk4zac& zQ&%1hK`tStvWv#XKY+w-pU@Yrc=_eFmR7a4jC&-$7`kBDQGz{qNKhVn*5+OO(0 z#*BinBY%m}{UG@ZX+`we6M}1P#TV8e;{D4~6eDDAZL1Bk^M-X?dL;As@C-H6>aA2o z%NlK==bbXlzmz*C*L-pdDyX5~p>2~W+4oq}U_@IoEOe!0FhbCr6ZuW_W?h8y$7Xl^ z0u>L-$HoQ`+B%4B$ZBi8A7imwZ!EQ7Q`2?$bq;pBS>#NW+-2iWbjzyk`DvnYYLgo) zmi1;kb2e2*o7Ro`ua@b0lrqRM@}y%OrDIJ3f;=BY`^*{(??6i1ty2psF|~q^uh8SK z-4irS&l7@m_86z8%kv^)FPft5S+tSSf!jRW$qTYJTbW_2x0YNvyWCR9HcV-4N_E{( zVzGscN`F~CaWxs^f$+%5y}sk{S*Ma&5JpAiJ*QtCqwl;+K|>a38qBy1k;3KI_b7r5 zxa}n>FTT$X&#%`}5v0|BYtul_l5nzX^Q7Pmwj>cW0GY8$4G7$hF>|@f4QnX}t@_5K zFTBL4BSb-sSCM1Tq|2dgzj2TUO#N6M*yt8)^xYBtf`1{+O+ywfUGufh%H}Wa&`mY1 zdEgMXD%Ks5dQ^YJ#foOBc=FrMdLBJLc{*m3gbaIq180E?4Dr9QStY^KO$Mxi3>9uBD?_{2%|Y z=(K7X6Qd9J!?YyOE8VI${&bXosfi35mp|9tmA3+bLgbeppu)y^&mZ5fO)_lNhp$X8 z^WJDgF`;Q>t3dE+-r&y>pv&h}UKdbiTB~qN-p`Rg6n2Prc! 
zv{C(>-b_F8b?5E6h)57rcO(-W7X`kworSH%J-ec(R(VfD@%@fp^!AoMor%YW7WZn$ zYlcDf7MwS6;iUDI~bk=r|wMnO_B!JzjB@JpxPM83Zofs;ZUHE>G8!{Du9<+&U(a%MrZnXk_Vz z%z;)~U^IA=%kARl74U9?eP&ek($FVi=QjIbXVTczFHnYd-U_T)YfdzXs0LSZWb4wU zf}5ig_}fQs1&fJ}UJA!hl%ibIk{D1$Ppppr{p-?d0Ajr1d47B=8Q2r2|0c^H^3}Ne ze)j5}UyA#WkGwQEYnpZFBWxa+{t8YF)4A8{5c{%^)STS*iicz;eh0WYcoLQbi#`4M ztUpwOelfEfyJ#JvP(tGK4i6;J&1#h{*uG)$SLO*P>Wc5~_W8@6`8$cfj49!b6k8`p zZ`V}h(ELKdekY6#|_QzOC!L8-l?osM{@}wyeGrWpFReI&ndQ>#LS(2E}l7DlZ zdQ60VZgZ*K>HM9a*72*yDL$ExE)^y>Cc^goLHH^E2nzz2S*6(BKNWwOKJ$`*>p@<+ z;$teMVthBu@%c!~?Ocp72yJ`2_04xNs-LDk#!ra!a`rtCo_yEQ#!=kpusE#s zc<@^t%&oM%yh*KZvP`9f=+SpdQP`iJ<&1ION?ZT!_vWujTGsPOq2uq@p2Tt1^& z`Oab-;}s^VTLKSFXM6NsngX)p30jCDVw;~GRS*70QrW^UP@G%2751go z?A$(CoZ2*=@^);!3!5!Y-_^ugjFDY(`tzQEPFmV+4nfocFrlzXtMRNN$ z^ADH%(=LVv4xFe*<+KH+Lf(410S0&Cob^lrU!~}K7>oTv-e8yc``Y3`m{G>>6#G(P zw|0qsIUa%kNUkQ=!JQzd^oo8S`k!pk|J`v-*N-l}WA1GnrK1nXg~wUGyBNjr_Y%x& zR}oR63s4*UL&RpNRQFlvN0&Snam~~ynvttkwsAwDouPF$H`(kDBSjuKh$nJX^k7#n zuyA$y!DIrNjv_l z-`{6fzn>Bce;x>BnCR}?iIp!6)Cqu`$OT-ncW&Wqoa zJ&f}Nwr-57A@o2?kmIr+9X>uVaV@h>Zp#w1C7QRIiQ5dOI;poFnhg4#8!|5_>XzRT zAr4MVwY8;^Z=CWJSaCFpA(EjNGFP@G`kJzbS~+l0)!$D0D_{Msv0!4g4_1KLWr8<# zSO{N!*KGcXp=9S|m{9iV%r6wH?lL+ThkMgy8UNWWf7oO6{3lDB-`5s=<}AxOBQ#Go=6^DPdP{hxRWFP2>OZT+ul3qQVvP>!s#JI*n_#4?EGg($ncq zl~RLfr+0xSj6VywzdbyDK7BmtLm)BW71i;OLiGnBjgom$Snhv1O!bKH4>56ErwP9WxkTYp0h?}G=RAMSYc@~6@l5J`l0HDax z10br9y5w&b@@vN~lg^OOZ~SKv`fGjux|D`gSkHnJ=V33vehH~tzs){W5RP<-F3vSmbhd&|M_F+Sz4~}}AB`+_A&c^a&~Nt71xi7SvB97^ zLBOkg<5kC2W~Du4{=ZSUzbT}DJ@q@fM3+&I%_UbJJOT@O0JY~B77d*|@w))Oojs4r zwX&9&o9w!aH*U{QcpU=SqD#!(7DX6A499KlnWm>@7%8GO-@}Y&(F-~9S~NNZNoA_( z!_y4KU!_oqZx`J5oQMpVK69=J_OH$3{TwxGOohWQv}hdAB8i42m1*Eo@ykAM@$DH2 z55B{p5#IU7v>rj&H$hM`34}D9(z0uKz%2}2=JTA`P9hXHVp7>2$ygk$r1B>fN=Q4q z^h50xCJi94cD7#8*{Tz-lPTQb-RH2iDx2}o|M;Ijz{p>E@YgGUO}hI*1kDQ2mp`rc z54h{UOn24!w(y{d25TNc?2D6>&ZU^q{P%Z+e02~3g0VJ&yfpX(yC}@#_SdOlCbm8) z*VF<=c<+nDgn@kiu0d)=AebhXlSUz}<>>`h|1W3r2ashrk&gvc+{Gj- 
zg-YPem>j3v)G!F5U8DoZL5{JpU6(s8)t(=BEFSeT{q&q!(ynp$JDt%CkCQDP$1L8x z&#kB#VuBXes{KO2|614|ueg$dj4NtBrO4t3-29<8BkbRlZz9}3F3m}~7>{+ZK$mE} zzS)2pYz3)BK8yBYBXC!&`%@PpvXvOOE_-VOQh=WAr3TD<{Wwn>3xjPBc5neCDG2Mjsaw4|F8&O(&sG8)yzy>|i-GKiwt z4WwZEq9b$e8klS}%az9{!_#rd{^*p)t{x%wr2R3A=a1kHxvRC*gT!V}4WAMB$E9uRik8s0I{${z4POSf)FaJ~^wFc)rKXKOLP&h!Ji{EdKp zvGNaghdTrCC!-&1dh=)gry(9eg9siR60Fne8=Y(bUPv7sAnEo9i}_g!J4t zt*yT1P)YTq6FlC-3)V_~k&p?SFSz^yqOD-7weF^_5$okBU6_aJ3Y5tvq4VM+8L#6# zd0&_XE27OeS&0Pi0RSo6K(%ipXZi%bDN>Gl~mTlC3TetoNjSPTi|ZWEs|ti|JxhwZwO^qGBg_QZEZ z0!=0D)I&_k$GtS}s}+{a7>-(&GQ!(!qiul_IoC?b!T|&e=qOaM<8}HqgUhiIv^bI2 zl~#s>lo|%_9gdj^zdxVRCKi~rlijeD6o`=hKb2K}7ADuzM>qyG?oT6n?E6&mmCAeA z|rm`s|&VxZ@TRCM|_w?82E7B&Z8JNPa`a*tA z&uV6>JfoJPfYqIAxtt1JRrtnTZM5G#>**-19R&sbo{+zELy_~-P_MrGpbEP!->HY3{U*!7n5ug$S z`0;ya9^%inGqwea;M*RhPp0A^iEcb~bT93##ur~zFMp$EI%J5lU$c1!D++a87uUB@ zs?KvRM_tRwO@Vo=s#qH{3ZWJGc}4qC@hLPeRiHzi1|OF{AK1QGnbr)UO%T8WR0~kf zU^CA^9`9T*s|^Tid!6z}KK*~vv>zk`<-Dl9usiKsr2DhLM!3Eye|zp0WF@uy}+1EbfqzxWHQXYW$;sN?3^TnTIdf21bB(Vpv14 zox&^?i+ym8YBV%n@eoG-fd3d(RYpb)*S9Hbx33B1jEeBjU;7tn`(Cl7QdZTp#@!9E zxc0My0!lgu0n2TR&quu81?1@81DY z(Z|2i^>_gMty4y+^k_%WzkqFjY73kB;hB)PXZJ6cjsBp{M;9aJc?TMmHzn+P-o=b) zy$DA)1PFw;waaMJy{s6MoWMinDHq;yaMSSHf z-@E+&T*fr|j#OWx#rjfh^xq^ncIA(m>WnxFn(Mc(aE2Zld{}&|qgY*>zbljE)Jn^; zGTfEtcC^<`)JFo1%VSOqn1x9y9CaLXdYn<>%Yf)*AuZ&nmO+`YB??T<5edSYQ8uR; zv*Zr=KRMoM*J&umwkN$~3_Nmo>M$LM@`{LZc0JA{0#x*i@PJ60ewey`?UDProX{&PjU@6`;tx$t| zPp_tW6y1=Dqgo&j$dj(h?nGNEY)9P$%VuG4&AO&JuC<9$QmqUc&vqp6)&YOb)1MpB zoV*o@d|HKeb>1&$w)EQzi7_dSEm_xZP5A$}@co~({=O#mJ8v%4ypAva+Ncof3GIhei&+MfVk(V%Se% zC8zq&W}>DETN^9C8Xns)-+2MTh)3mBc%_?_gum9c!^Q1Jr8=D!`wIQ z@atFLuSx14Wa~~oh8hGFY%&1&9qnGt1k~kh*kuY`Kt?=o|7-htzgC>P1k}Qx{^mmW zC)IyAsp}f=OX`ZiEjtEvEVwACJ{M~8+A$TjyyP}rb10$(WG`qKT-|5bzViy$FTMVm z%j_y_kRoX`cD=X()$3SoNw~(3ro;LTIgv)HK0ql>Pam}zxCpAgRP3omN~2M;?SH$V z2mq)CU1c%jQQhvw{a$=OvHxL_GV;gjm#7Oa9Q=4`pG?^D0pQn5XSK`Vk%nOA-6#*k zlz`lXv~O*pNyT>Jl8)#CL9>(YmT-5Mr=>>*8hfJVnXt@>qfzsAjcpCks_fHu+KYyT 
z+uz+ae}h>9ArD9z0RSj;4*-GF9P7N09Dg(o*@`G9`1N1vQJX>jgSh`{mNJ3I!{_6Z z8IYD2e_EgVK?Lnf~+L%35JMg^$1thUZ`m2xi|fvmjT9(HDET(@gI5iil; zHL@`$4nU;RDS8{G45tK47pSd8z$C1I=v|1p16s50aTva_$5}R_vM#F%%y%sC44shF zq3?~Ka324S5M*H}VqYRQ3MG5)sKxu?^{#Kg4Ri*Mo2VxL%vmw~Siy(x&sh#+*8 z{aIXtfY!9pA=+?&)!*7D_t%gAfyow$GR5LVi^K!mKclV#{TOH&ek9kPl$j~do$UjTXMKK1(9*8h>lQ9!=eEVoAV z*nS1q`40-k%4ws6B{FD+ydk#LUx$4Zywl!#zT}dfHtU6pjLkohaMNoL814l;3N(7k zn%Ro2Z6LT$g!`9sZQa?$__0G)m)||ep99_^HW%? zocPdEXo&{vyOqf^B07MQ?RwNsDD9L7g1I@F>Q9*YldAo<$Io^ER1+X?7Ve{xj`Ld? z>R}boRiZh$&ND`lv^xJlC1sThDKpT3x~8;(@@c_$XGmHX0p1p)=z3}0OR_-#R2s0f z4azfvI?PRmJK%Z>&c^wVtg zpJ=*n0A%A|YHD(5fBK}q+;L5dk44}u+6BEY2a;A>C)Acd^|e0FCm>czz>*otdcy)Q zTck(}wk-&p%)y58n`Ag~(%@4Y^8W#6HlzX?#6?#*la`oKBpRR1R}ehyM}B1WyoZDd zF;yvL_52mSt_~l0(Vh%R;>P1aKmGs z4;4wNl2ua=avzcw?Y3FlM4A%W%eV3f{ID;k2KK};>yv#JRG7Sx=&@Nbq3+U8p@#_K zeL0*7c?DRFf6?3eNzJb7>#|F@ z$f%#gXSlV$=53EbRfcMlv(&Xp$%lSa83Lw?Nw@Lm!FV{U7Q;2 z>NjST>Uj4RC2IqRC&8el38kY^OoK)ZV)aK5>hv}P=+o16sP-vP5HeN)MN7 z7(Sr?xwS8IlQQDvyo8p{b&M4xV*|*Nv26?uHaXO}%N`EFWzyDFS*F47JV9+|RVW8< z)kQ=cwUI!ZiYp%Fbv!(|{(7~*`s^FWnpIKx(L;zNNQKRM)Dd6i;%zXE3;f3DSp)`% zrgoCw*b(lKszIwcDBg>vHbNEG&u>PC+6psY-&c z@=w1pVm`Fi4Gd@^#W$Y_!ISy<+B|+MZXYct8QO_+8l1KDNngmt_#mEwv~}*Ksw#GN z5$qviP~CPHJw24#W>odDdF$YAzZo-PHRs~{f-BVTC)gx=JKvP># z-24FinHF0~JdJz^ceb%>Z3qS&@z+Tf+UBq9)mWP*od3V;!{55V~iy@i&t z6^5g{R;o1nqU|ph_4mcAQ&9rrsgw*c;GYZ4;cN$MpG-_siOoU9X177cm6sW;@e3d| zTT0%OHVqRXog1*dP9NaWH$Qqzpi7YuON|A^ie%)=ptL!hD6 zOlibO`EqfM1U6{u15$?hj#3bm>8sad$UcVuowohrIA-xert5_s74^P^(EmC8Jbqa8 zE)}*^e4Zc`LYnJf$=cw`oyjP;qG3)k{h=8CXOc6&7f**J4aUMK>Yon~>Ew z=|83c_I>$(<(~|OO7rWxa|LYsU_|Cse~T*;C%bOtt>{hKNRf=cnwOZ0Bd8&CG3!jt zv{)f4m7ak+qu$=9g9>jpyfZLYK%fPA7J5~^EkWEPLqb7q5qYH z|MG@K6;Lv~;1|3f8B(3m9T8Z(p0v6s$lo5IuzsSoH)EiyN}rLn(>m7)83?8zgLR?N zGL`|WoARThGS!#eW8ZX@#w|iXPhVE9ckg6L9 zoT=C%bzN-oiW^M;B?$6$KO=Z^b)gIRqW>K5{I{&tr1hU^D!8rvv$s1yL3K(*r=eI; zTQrA4)i8~4UK7-{;>3?|03q%A6;l%fNnT7gWkff9X&rN*X9JS9)lFG^-HzK-&odhx 
zdg~O1p#@bwk@JQpqD>pSR3P-$Je{vou4#^$;^Ukc@=~{2qOE@7}uI#SS84lym1o2s$BQQjwYBV%Lj3$+yN+^{uI#u=}|)- zh=i}o&)%lo_el-I`@kM9m}O|OKoMBNfC3G^h~`9;Z0er=L>Ji*B|o{5{i^1fkZDi% zeI@KTZ;0EYw)DG$1zgkXn$?(dCB7{I|3x$Ys|eU{0mP)`yJ+lF=Fh(!#BcXpic&5P zZpji{(~MW)G9J}*HB#IDQTOs)R&SH@p6>}dn2*3m48(}o%?@?*y`nb!P*cP z*Y#Jh)}-?vn*RBK+k>n_frkl4a}NUh|?wpa}u#h|%Fl0Ew8 zp8;%tNIf<&Q6X*({Tf^1LvB6zKS%-?!ef`vnLuIt{z~&Tl;7k(L~M~UL1Q}##z-1d z0hE6Fb3->J>GGAprL^Z#YuxM-WU98mMJE{_%EiS+_|a@F{BEFJO~C&|!{_^Z3Gezy zq2TvLJwDE(U#!`kYce-uT&Ydd*t|4U(j=-V$_N}Cz{4o?B*TA4z#1`qSVR5WPRho| zrz#e3rv>SwPbCa$YjmGK?{;`m0=4`K3c1ADPJrU_xN}W7h~2r7BO^4RPv9^ul=e$~ z2udZP50VZef!gzL7}<2@8-M z>&lhNc;7+Z#davGT)cci&DyRvwy6EBmWyca+}+dUC)!7pJXD`Tf?8q-KeR-K>llx5 zmHP9Qxg*A44i1Hgn5v40AAHcj#Rj;{yZ06s1x8(}T#`9E0Zhv2dhPWuy5_$CYQOvP zv|PZ3Y`mvli+oq|g)<_{J=B){>q+s4q!zxFILp@>w@3xs6_1U)wWnRZtzGKEiX+Ng zSnMaY*vz=K+ANY;1rp#RZ9TNE7aI17IOqd zc#FbD5RWDTN1|uDFILXd5OFWFq-oV`+m{rQd`K}fOw-;gcTP6&4vR`R3ST>B!~k?Z zzeh!&VEvC+XhN&nsp9q#0Rr zpTo{BOFl<)^iZXatYZH)Z2jVmD7`nfmc|m$KYCpL2nu@x;kESj@iLnlI9;#l=8N7I z3!uhHx=GUE;`*>OS57^aa8kYzvCGVkw8NW54=&b{6VKq&#q>VGvRKHZwD%dTkOw!v z)npA8=+}rJG_9H4vR7JZJM zaj+j@Tvyx6rYqm(9Q)nc{WA~4COf;=UZXLp%cRK9w7%l%`0}->zEAGQ4pj+m|Av|BT%( zR?*XQmT{Whw!m{^$1ka|sYSJO>HLUC1LL6QMaMG zGPN6!UCM()H=5c^vp|X7Q>5$b>F#6c89SGA^@X?_tlOByEG8T7GUvaw&YjiRPLFRT zf6EM2`P9-yoh_PSDK4Xv8{6-Fu-E&tbz|eg?s-=0AOh2_MdBNm6js3Y-cZ0Yo?ZkU zeQI6YNjl@{ zTkK-Q-9x{`EtP$VGoW`FdvZ@T+BYL1@_|m=)L(Jv7s~#_!~72SWKt-%1i*A|4={hs zwD`X|Jy|F#Zd0mtIR@gLNQei#DgR|DR*nXy-hm>uON>&R%TU6+_iVRPscm0tZv!Y` zW|=6i$Rze+8y2xe3>O%WEMtv=;Y*&?J)6DwT+nl4rA*4n{dw!=G$X5(+{dCJH}u_s zgj3A^S7x;UGm9R&f}LkB15W21Ed116v;CM|f^fJexMfYwbx)W=$BWy7+y(n?9mq;? 
zi?#v4+rc0%z&IrR*{JGn&>5a7DBSYd#pa66%*P+|BD}sac(N;d!$h2*)SmBD7f9pm zgFYl_wbPsyoMt%M zy``GRUQt}bh)rDP<3nTL2$%=$!@-#Y;+h$o5XG{M@rj8|RZ)JK!iqUIo9^WNhb1l; zx9M(|(vD|;t?Hj%MX5YuPUOa|JosbC+2c5qx2C)2R!njtsr$FB0e*|^Lqsq)cXKcQ zAi}YRT<-EYJ9cZsuYvNmNB4k}ktKl{-@MAKuhfB|NwbMO(w^qDjR-?bN6*mOLrto^ z5IiOY8rnIAyhS^)+|t|EwalOQwooCEdv+3SeuiKb1Fg zO?PKa#hj>?cMtR>eLd|wk);srrq-+gR0)=rA|mtYo_h}Wa(p&dtG?J_wZp7m-Mf_Q z#;_EJ7WFO?Lo<}vc7^xH+i~hwwxB{RtaN6ug#P+sg|$(cfz&fh71p z*x5MUJo%>cc>Dc7SgUE~KT9TEK`?foQF*_@MNQB9@~-)n&%WfDkuLDXmAb*5FQT&t z2Vl{;Wwv88_qr#`I_7wBiM&CFdq@>_Rjmy)j6$Dv%sHB%b@HDD_^WuEk(IUmMzEQ( zq1{U-rB<^IeFl{*wVdGb8MscU`z$?P@Eo3*+(J%Tk|4P7SI4aalju_7dE?{b9(lDX zplq$?=#xQ`>>LTch!;uoMEVTLT;+KkSO?M{H9z$o1;x9-<$n zfHgcJ3$~ned^YbD)jdF$oCsK^3>)~cbK$l$PQUAZngT4WM_XHTnn-osayq9l z+GgW*9I*zg=8)&7p>m7Ir?M2^^ja-3ieJj(CQAiTxqNCG*!Y~`7^P%x616tyA>+>E zc&bYf)QKLQNz#g9PE3#ibmvj@qittTwabcyOTgaNa17<%E(5l3dWsu#fi7@|%*f2A zM}Q;x8ex8`cRKsI$nQKW>WWZfM-{c*nUg~8Q2=IaXvW0UD-V*bU0+zau&&k@IeNh+fy)yM8yphS#!C%6R|^8IjL*;7 zsjbKlF(*d8%BO4ORT$zY&z3~S3xC@k+w`|;qN+ZLB#9S?c6ODu2oJ{qTM=lc0I3=r z11>E@L$>B3E-j|6;*CbRzw<_jLQiieX~@OdIkGU3jS8#BF2OP0*F%fV8^Hmc1|$4& zY8~+mV-%8nhcDovUeNu&U1m zuoisB;WWFccsjpKtZEB?vgqfQVVeq9tPR?@x~b?Z$hIq+WS4RJAC=@!0L2yk$Zz7# z_~M@q%gcWU9=Z(i7EG>D;c|;tlMH6$$%YDM{gy1m}_AxI_i7>(Et+jmb03PAb@u=+^k?=y>K$ zy4;w!QPr{1Cl~fi;gtF}JJMRReD#6txlXBSL2b%O6^1Jm1(BJW{>i&&H-%Vg29*}9G2CBx3(a)&9 zyLR;g3OHn;B-_`2NXyvUG-$g9 zn|Sj!kx#EuJlb?(qWlbWo36AOGkmRv!Z&zwFIauN>Hg7W1q2!KK7Iaw$Pn7~fefLz z#^?6;%8dUywIIHtfTG|!Os8Wxbns>Sr{JK@Q;5p|4OuQ71<8)_xk?FY818K(9eQ}Uw@u_ED zje}}EhyyRplzOLh3yTKCFIZVxPBe`48@d;Ul2jGi>&VEs&h+uk{x2MOs~x2TpCd{A!o#BC905A4o;J|#oZxPY z6}rR_*OsE4_=`hftlUGPWaXXLTPPlV5$QZ(4wsbHe+`#_{YVe3WHimrBfpn3WGv}L zNUu`I>eOmrbX6i_@F~xp+IV31qR1>>X)k4#49YafcpKkZ8Dv$dBdZ!p9>w;#5vyo< z-M8B$*P=5L*9O6&C|j-G{N(y>=!ok=F-^&OOQ)AyzLp65e2SO$FoxB$kxgZq2gH|qvorCqTB_@gU3FQ z)uS;2?*2UpS|N-EXZxCB-4VBR0HuN3%;Iv>2+3q+6!L3IjB)v9UdP(#ix`(QUB|Pn zrC)^-&3p5vSf}QSSp9ZncU~`M+>X0ukUfpnkHoB4qOQ=WOw^haN(%`N31EKy(lKzQ 
z&>4CM`&3{jTNI?9bp?YPx`Sa{jcjfn@9*=5j5`$ltoH@@D-0sGWjFfAgk z;cS*Fvwx{rR2ii2@ay)lzKjgaiv#efy;eY4)uSO=Y`Kz}N8W4Z#Ase4jfnN$6fH*1 ztC_Q&1N2{#IQ9wxk-l5T^!)l)Rib$Y&#N#%BdiMqty|SWLJpSzu!MZ@)j+ZA5G`^7 zUp&vme}U#HN~bxKU+QJnJ>Te!rHQ7;8!jp=419)cu&_5Eh^-PLcCXAMR%k>frJH9k zTL$q}Dh^bv2;ULLS`kPdSDN@uN;Q2qHX0)2`p27*jf>;W7N4Gvo2`JcD$~g4@YUv) z7GxkIRuLtHS3-5$L;93Lkhz~6Gv@D5b2mh$K%OQl;H?&`wtTRw&#m)ot?tCn=laaI zbKdB7#i52u_W_;N!7^_pL7R#w9o)N4{7Rf8^zB*4tKHCUjLU7Zcv!Wc z-|jd^%r$uSPQPk2*ECex1pyM~-Pj+nIf(3~L z0=iUX|3*o4uJp^Wr%$2E5l$JZWRi&nSZNVJ%3=+fSti zjl)OhF6$VztCbFBw;?$~3$MgDAg?;^_sb^i{a^1DR~j2OBb5m>?HcOV010n(}T%k zuC&NWG9E46c>7L7{5AOV+et#{?56F;{Ki%JX+*%4a1ERk<5|v+*uhGzQjOT`#4%+3R!3U!;%NQc7{?v1q~*~kbbjca z8`C6=!q-?wg9snr3DIcf6gqcYUWktXZ=AWY@v>ji$-+`LIFiShJ%FWLVMc#P(px)R zR(bPFoVdcE=Tb>T{`2kCxr@=XRd>Z)XP7)*C^GN1r%iB1`f+b;#9df)u?}uVeyQr< z^Ztx-4DA_d1uFZEjOZFRti-~bEm4!1%F_N@KRv++7u@T)0H*p&F2J=Hq?1yFB!9ehb_*Wfd-IR$csc) zFP1z;yGGHBwN*QJ$U3;RkJciX3wglS#T_i)jnWb4%kW!PYQ7g@sEUJ5loUDcN-2n) zN#r2XmM8gb_D z=P-I2x<8PvcI3a{F_l$#XX(np(9nLcoR=DCLtYbA^aauS&_nwdX<Pxi8;IOSnDU!+q*)6uwC` zMMVR`QA*1`+jMyVu8}1OPoQ2pcV{&`)wW;o>Fbmh>iXLHWKw~3E2#>y92Ft=${GLlkemC=;qeP^$9i5S!P)79 zSUqzLT}Bhy|3AvUJDlzIeY@4_pruPytuB-rwMVR0Rn@8zsz%hPO|6&_s-<>Yd&H<( zQBiv*Mr&45BZ44ytXM%1;Z2|C`}~ggc;Dmw{hs$92M2K^pZmVA`@GKUyv{58m9wm9 z{XzzEv+dnRCVB7nO8)Vo4IGG{zOlH)l7zV{uaubFw(bZtxzw;$r`P3*eGr%Thhd~g zjDPWk?dFA9=3i{$tmuE*5Cr)xBO(#+?9vgW)(pA_*VI~!*QzLFJ;XfEyZE(ugh^|9 z@o+NXGH`{N5ESv*dwEcpvUrqPp*j&rB!8H(Sr)ZP8e9RgEFr_C2=!$LdpNUFQHh1r zR}RVRD>W50pay}*8Q0UK5Np(l?*`aHeG$0iJUykHIoRsCBEK>mWi$UxUO$GkOlSux;X z;MrHb)Ar{M3nK044_bBpjA7BVZet98j@(6|5tCJD|8_%{!9qsbvM4IZb~GUdAcsG_H7-kC5K!b?Romyf323{?l2&;N&LP(V+C>1%#$^Pl&vNx-WJJ zVsn!2a67eDkY)q7*@%mX*{sLj=o>JX0{Z|rrR~=H4t?c1AJt@*D|)L%V04L259RG1 zS%%LLx_fBqF{geaFn&cApxGJaOath3pv@mT`u^*N1Xaf~wWy}#>z+1O{|T2TLt;s$ zUCIIVsAJTh3u`uyuK)^9nYYuqE7FSpVekx$gm{b+J7Q<2=G|#{>{BE1qV~r~&Vwce zE0Ch78^_kKXC?&=G3r7ZT;-)wF8w#28)(8HXHn3sOgcHa-fo)pf$%cp#GRZzS;4RH 
zxjnwpN*8Y7n{>;Ew}t$v>B~VXF$SW$8+_-L3#1FKOeQY6YhC%*P*Xh<>A`4X3)$k*W`l z(fjb%sen*|UC*Op8|v6#K3uOAyl_zM&^PL1H|Jm=(37pP`l&}$Q3=t7&LQu*Py2Xy ztf|>xrrC2?&h*2L=d2+Mr^xf-sxZ@ zq(Bc}JFx#TIWv*@)K-N2$cSzw84Co)Mv=Ys%y2+ETyJDjplsJ~Rk0Sl|h z<(&s%?mJ~-MW(G4#v2X!;0)3+(&%YCI2$s*H)T=qq{|<*oC$-C(Ptq_GktZiUVF49 zVYtk$Lmzs#Giu7ihQBHQX^vjD`-*>M*HtRcuT(=oykFbOUv_QHMP=R~T6hH1`B*Af zc351*)EL5&M76#Fzh0(u_r@OGsW^2?M^V{?_r^vnKy-RWcJv$_Gu@M=F@Trpe!~RJe&@ z7qhr)x2=M(k)5~K{Vk;0N(QAJC?BSad(22DKa%wNfuxdnXRSd$)acE2|B@lP@yEW1 zPPnxEO+P+y3z>;kezr8^zP2w(FOp5yKjFkAR=#ckXi(KRexKdvL-Kos6Di6bYp)#v zJxMj^0;ry;q_UpH>fZFumeb&ECG{wmn$IWdO_bApIJQ=Pf8}TP#Rz*u5xm7P0YaM` zxuxTX7(m~T$KUjuUK*Yt?ynrOdwpMG+?coIX~H*z#8`Ol)J(jwotzS$ zg48JOBr%Kb)rKvOx z0-LKJ+&5=E)#kdL4sUBvlrPM58iM=3UI!?V& z4Oj#41`6IYW_x`7b(DgZKhK+~l0U?5jlCzZ(xoZJ#-TrA`SBT6#d#W_TN><(H`2M@ z$pX72aPTXILw=EdMfswQrHNg6CNSIeDm|IPXMMFU4~I_k+h+SG7Uj=HwD zxnXB*yB7(ZGFT6UV7FO2YcW}5XI8f-)p$m`tQ&ggf+Pc9rZi@fX!8V~qDx{9xN0TdaRhRhn{eM5PuWA71X`?2bn%Ln9d zp4q`M)l67T!arUBJNDh!_91C=qg0*-s6~qN14!h_r-uT$k?Ws$jfTcy#{~xGKxkCG1KoCH2av+b!pdIB!Cyw z?|9Mwz?lCTQ5!a@dZ1v&8_RRi|9Q=YyyoD;4!O&jF12hY{~rfN1Va0op;hU8^(LC@BNntsjyM z_QKG#z+2|E_C-YTIHm2!r|6EEq7|*z(_~yH6^%3b#Wd3Id`jm$m^sXB@cv9)t2F}? 
zu+*NlkuCUtu7uB(})=7fl;GhdPo zuht*>gbzTib}aMmlGC}4P3F@w?)*05Iakvq`Sf!&z0Rf?xovlJK__xsXnPr#RN_s7 zDO4wk!th(?&>=b-9(y?OiI9*%kPahR(Z=6_Y>A>I^KqFf zK70vz$K@*(1y0n1fb)YEt5~~X>&|C^blX1x9-no7&deuC*N0)JMbB93e}$Z~KgaSr z`OmVbwx8J@zav--rwQp;Y-rUYp7YgX_)A`DAg7{mOg>`{vUNN80NV~2czfyAPF>+OF8}$a_rKlYMK>=fp137F-y{5=>RI+%MuloOJ}RlH!tPdyj$P;X zxVexc^P7x}dvj?pJ{|u=q0u)jCLICNrx6q_izkneWIF0jt`I}^%6E8T6wxC*UpJ;6 zl=^{}{T7gX5%5gz*6Yr8miTchozq})+9S@HuEi#xh{7N9>+iY3*-`I;?Jq}ME}P(w z8E@zuiCykX8sj?R&SdtY%xkQ2+=b{VD6X^9dgk>teD3pX(IsvDN8GnMcIuvA6|pcH zn%(o=Xl{@$t1Xt=&|GKXZU(2t%1qTQufpa=@9ENe;wJ>W3Ux4)k1-MB#-O0I)yBqW z6Vx|enwQA=EU)x-CMu$S?UQKVM~6Y9Ydy*WTt=5>kgz)%Ts~#LK(#;Qvq^9sIw^DN zX*FJ*n6Z$)?(CE%>GQyFE&R>b()Pm}TFHKEM#F^<%;`lE*Qs#e5YF3cs?>4o%(44= zRIXfch5wL!9DMZny+j}07A<521#F52L5XKoJc?iA-x;e1(y3D&*G7_sd_6@|AL{x| zzanYufIyx7kn7*Wcw)bE`X}@4%-fZlL!70bp$;CeL&IVo7N1Z#!kjnmw$kwjCpu!8 zW{cRvx)2Qkgxz-U#;Ot5I!xnkl9?|URK}gx^t2L69JZ}NY|hD|0?xC0HbfQ+ow#csU9XT7~EnH&A%RXHQCHX&RrxruDwSV zWZnC@jO+0~j~^fA!68IGf01_jT3GmG$o`?X{wHNnuJ`@|-Q-KD>%hK1EtzbD`&;tu)BAMEq{hu3@#ffBvM14vszH798{c5sERLeIc>>WE?V5 z{HNA!0rh;xugx=s{aG^sj)csJtZ|pIsMzp2;Q5EyH_QckB3}s@;e3N!47`j-Ed(q? 
z?-?O>R_a~I{QW4*r%cj=>q$$oRT$JJ!VXuVw?-P7$G-WawR}hP-nZQn$mHh7u__KY zeMl%OokpLHPS0t%+RY+z&U5vm-SEdbMduvMY$cFCIYP<<2{4;Ow zP@93m;@cq_pig*LMFQ5PqM8L?bp_9@U|&ZHBhM0&w^>KcuP-AoG(ABA#&R-VeZjuL z5xP6CR*pZa@Zp`*2RfYS0QGZ*s6JfNImuY|<*rcw#k{)=LSMezGBW%5vmQaio8zji zep(zClsUYv=RGDdwhNBK~=jlphlS>OMu&sZjdD;u1}VX2>>itElu+0BRmy7f<&~}h z*0|$0Z?`BcA^>=o-Oy=qp*6DpOP8|uH`|*N(8!mi?jLYcA_ccEMQNWnSkW?^G2NH% zz^7(PC6o7fyQq>Ru`_w>gAeNTP`lt~ zEodaCR*@Yt_!H_l2I&;Ok3`caq;024I6u?V%Pt$=Cj`;zx`QCO<~0KGl=sPI4l6aw z&cDBxae3ajfpgW)CQ-&AQ7hxtQ1P3vL_o^Qls^bJw(gU9xa0IW+3v{X=qHiKhj)GK z@JwRF-fdh#;6RB_&#sdjSjhq;XSvJB-oS%V{ckmsA#kzpZ*0bmfY9WX3;$$+_n9*e zjM`rkE(R{+jYFybIY*X)_r2Cr`=2nLbOo9`BXw()qIeCP$((CJ!gYRo;GUizP7JJJ zWOFGAiB`(1Ix6*H%Cz4+mDy%x7kW)HZt1)tjnMTu<(io`E~*s$c_pLKgP6-XTImR9 zc&S&uA}PJJGNGiqBt!^2Dkk0MUJs;-BuAi3W}Vs38H5|W2{C}CR0ADH-q`rYMn(#I zN0+ls_--6#hr*hiGT)flrP6k8dkl>jYelqS`XmkPI=SQ9>(ESuGPQeauej(GyZbj# z3p$qB=y$NIIiP#a8l7l|#Uz;T&DG3-&67xpr2Nn_M8gt8pGlL>9JWUJoY3T6XdNg$ zDcRjR>>8Cd9KCk*QO-M)Pz(=M9lyf8_ZrF;vw2UkI1M30;4JiTa%zx@`JKggs|@8VIEw61jbnN~-b?;aWJk9p^-GvJ|PNW405 zOoi%SxZz35m~q@iY!f07$h`4I*(X*sBc#rl&xJQka^MsmuNs3e2Pgqw-<5r^cBI7X z^}T}fBW~6Io51hg&!6m~28g9K76JPFW-6LiGh0~_KI)}=%}Qug>ndbHTz+m)a!h2& z%JAXCx9$ljOq6@s1uyXP)F2G6UqmCV-4KsW5~1~0V|2n7_s-)mef!={hWVt_YP=I3 znX6q-Q^p6?aPJdeZ>6|zPFKX?bxo?K^+sx6!K9p41J6fQy8wm}2Guj&XJz-;R?NW4 zasRzc362V80gFZyR^{(!owfTXSclg&D|jE41fKtOQ3ZLs?jOtI@RDG3wBOI+oOUv3 z9UE5;9)CS^&45T2!$WV@X%WHw%UhSV2h1t*jPq&8kEER&APBXBo>a$3cy0B@wxAU* zxT%7;2QK;c1q3bmQCDn@Efl0C&$a7fsO`OY+nsRGDdZkg+Tsv$_^xHx?I)=Rwgzvy0*}TnDvP(rSR(cAqlkin zg6<85?_)SUChpFZ?bv*!ZXZ^xSmk@hez*_HC|Y^x%8T0Rmuy!e`1WvT{u*?}Nyaox zFGXBpw?q1luo8-8two>q47r#`U($QMlrBJ+aTE^yM^N=o0QoOH(f|32LRW^9ipDi| z(X{_aZ8RB-qq%Q=5;83??|YRfhT1@y*OWmv#GsKZz7_Xmb{Sl`xYzKuxr+%EE_{) zC5d={j*nK$3JOXqLkW2)Fy`%YT)vcK05|AZ;o@d{(p({5*eZBK;p63|HT2p3MhCTy zA_GwRj9fb)8xb1AQvh0~5WHi2o>ny~E$-%i1%HOTZvHZDHPRr9->w5P){F`bo19hx zBpOew3#6{+U?+}ti`7_#SLMwsTdbbOI0zZ393B~*%lTs@% z7iLM=* 
zo1eySJNZn1ZD3WNXZhE7N?NB`*|8`eCP@6dyHNEQ1Bb~ia?q7MXZK_APQwZFz1NG5 zjc2x3^dHBB1V-Q!#_L|aRN_!-Jz%t&$?vAGkl2;tGiBw(Y+mB*EWH5FktsX`3Z%t1 z?sf~;k$CkwKZrw<+e!PH$?+Q91Xrjb@@Ru1p-~w$jG}aHqI+21->Dd!!3^6ea)Ncm zG_|Bpi@jKT19f`%FvDzNQss!eW;+Mp&bNgJW(_QVxux5DWdRCYw7*zYX)Q;)!{H$+IN-hB^5yPQ>#5-V>v{KR3nu#9rE=HG(nGlO!+ z88V&547$($)$n&xFkA8lxM7l~ZYNDcJ#1WXv)SnuNbIZ!_PPY5m5s=OvFWYv{Sq2N z6Re}p=xT4^#p~AjkNI8VCpu^^JM+jp9RgRf1YMd zH<*9)XBYfFD`!@i`~CY3uMQ^5ytfSGjT3%u#gf>xr>?pSl8VbS=Q)(0Th07hSVRuI+q1u zP{O$(560xGj_E5XUyHsg z_{=nh_sfze-1WKl#?HAtJg*b*6hC)YQ8x?atTaWhFz1K_nb+(`x_HivG;Qru+YifO z19lMpfSuQgE<1{`z!1lrGsD~FV_WZ?2=4FI)jyY%D-h@BdP0&z+t;GAUCj_Pl*5GQOT?{DH0_qQcFfjw;rqtg)E@1d9B<##3^* z(JE3!E92F5XBPMykF#4`@$vCCv+F^z1E_;}yAE-GRnqAEIgE&@R8lPW%et4vaMs9( z2npA|ECM&Mi`4G)m=mMrgFFa>LRDTPQL2$z-#{~cLZ(x@J>dYJihNc%BfiqWeg04U z+kER!^3q~|?EQH#R~dBI&NFZJ9`CrS6;|0L*yia~dBh(5RL9A3%-Z#ugyQlEeq7PL z6J{rK4%cb2?KD3Cti6g()#@tjNv*AaoKSe}yK{I#?MdAA1FJsqd0dprno7@V`;}Sl zsW-1e%AM-`4rXpqahl6Aq&)SWA=ji0C++zB-sl;M%~wup6D&e=KP!JlYv8rWSC@v{ zE2n?|>wa(5^B3+z(~+k#ZWR0t%z|T71F*^qs-z;<7{dB^74CIn+cy*F#?ev_CO+Mj zT1raQGvJ8JI>3~CPFWDmn;pcx#v{CQx-a2xlw`U7C6I>hzoS zqx}%{ZFLYK9RB*{+!xXg=1XV^GA`0CiVAw^=yP`jhBTf?+lh#7vj0v>>0Wumv~qrq zeCd}Te{u7~R`7Tby1*3QR{u+?I`S)?MYj6PihwPo~cc+Zb|GN*t2+oh<37l`qisdL`n-X!k&lGXj6I*Sg{wKTQn zvI6*wCV-I8I^kkI=2cD*D{8-#tvmpyiyu&iJIUgamI`Twsx}6FDuDeej+qNDc(}Pm z6m}bb^M?i$Hfaasv;Mc}E_)14W()#H{JyADmVI7vo|V*E+Um zG!#VKUu+l8Zcqo$vOe3^2bOxfZ|*1ewZW0oR?f9yX4eq!@7a%QF3Hs!;l64`)?VpgsMKY`jS(_bW#z2@{zwR%n+X;I0Qgbh88nC zJv{@%!?zB_J2EG0E%sNx*$f^E9qjkGN)0d2`l_*Ls`x}RQ^(BShpJZ61XEI-?4P1@C zvvhCEQ5IA3kD>(h%Z?Rj9Uf(@&_Pp?lpE^Z6AqL$Ejf z{Th?szG9~D4$Ol{#IK_R$br0JG$i^MZ2;3VTkEP3MtgC{)4b?cDl}5AO>YUk<+v9x zI(_B0nW5ndCCyc!(thH}by@915j=QQcf|H`v$J$gb>#5aVux?Bv=H2-t-VBoGO#vF zQ8DU@1THH9L6bRhCy(>Gmx%|#9s@lF*mYC9h}P}6AH{A#lR}ez^gQ-y4%4$Tp$_f= zeO=|EC?IZXA|E`1>o0?`B^h+Rv}0eTZmtau?A)d@ZV9DVe^E|~m6|s_9r|X%moD?o^FVRZ${`{K>&DD*uO0L1 zCUR*o4SO#p{lE$9QAa_f$5^-a6@_*)1K)T>Gk5&>dPh^QhQ*hqW8%f-|8`saZ{mdz 
zIKSooTTdYV9`$dlk7iBP0u?1q?1R%w;<}d)7I7w-XMHEcWL}N!wFeXR*o8m$nu4Fi zw>wn-*pI>}sFj)brK+~+M}um2W`Z}|%leC~o#f#LfW@)0n0lAxvs*47jWBUQ2401= ziZ$E4ZtrU5+1dH7L>$2l&x?pH$jai!sQB5{Cb9aO7Z970c6qhA)?c5ytdrV&;%v95 zs7%M*t4qc5@5-c{Mw_pVErG)9sL_797`JPBeHC8r?qz}7B+O=?0HHfPD{v+yPd=N&}ILRib~#|QUZ?a8CALG8b*BhExwazzyqBx>6ZWArccO_F+L1>sGq=Y66YY;d*i4INhx>WF%?*z*1;Vm@6fQWgJ;gFJ?~ z-0iNgV7dQIZw?($(3wS_2kqg2ubQ42PhV7rr{Q;d!g^>&{`wv+Z}YcG7jP4Y&sI~U z&_g$F-!_q+dXNQc#*jJPHSs){${DtVUd@H83%5>tDSr~8oIgGzvm-o`BOYZ;= zC#T&C4i+HMA7i~F$JHh`qF!JSM*MQY%=roGD1$ns?Y9M@bS%@ls^bQ zk^P8~&5w~&I?At`S{WcN%GfK4cO9*)OpO7Y{!ja@f4IH>E32fde(8izrCi5fR*`4k zH}^SSbh}E7$3m!QTAp!{ny)KNrg}V4wezcH_8rQ#fGn$7H~g0dwQ>!VDBkZY~6E1is52PUve1ffGTm3pFSnB zH?C5J(-<;u-#WlNED7Dw$@_lS%VhprY<#f=YyRidM3*H|Y;wXC-)!}w_Yb!wd)%(} z=*07W$_m<(d~{Di>~gNe9V%xA2s^ffE4{}ad+i1WB{8t`Zkxbr&`kcuihRbN_dSSY zk;ee1^}cm!_>@K{dk(Ym0TT49;@TF$`m`uRZun9Si}m6kTm)|lHkgDfx47V|4WBlX z>3WqIq{6WLN6hmLU5-q0@7i*7x<_AXmYoGbu+x;cRgmoM0lUHuI2-uE!19g0{;SET z;?yW4^(D^%(Z-~HHaUA}!>$}{wYo;tBW%Nz!}ns(?;r=ryes=5Xdg25M+5k+8rW}5 z&h@`%X#Y3L#9?;^tB<==S!n@ye~I9g!asV#a}{i5a}?FB5XY(xm`fdGGmi)&lqtK5bLQ^wVpbD~l zwL^UB+V|e6T8H!38NFT#mHDuOZom8-77~aQFG-#HIE9yzO;LvM^offvHTl0C z)J5DD6x`-3d7$PnSX(6JH1V9?0I}k~*Lh~ZE)!05JBHBv81jD|r2p3mlRJ}wdOvqR z)%EW(Q0v{_k4u!E0byNx=YX`h&FAr4k!a__fpg3IdGLLFhis957<{gqg}!#F3(XDU zxu?30GJ}c@8|cB3Ua_MP#?duho=~&~sB1!d`4=L&Xr)_IQ=wf+F|V*QV`@b3wyQ6A zE(n{Eq3E}PHCYufZ0*d0B9cmxG!tDwo+%&K`2I)Tkd^%GXUlAD1CHI9S38~PIbn`Y z(%3VYx&OY!%=7JKtn>~Qte_N*v~Ke|f^m>%TYJ_%-U@f#Kn4^RFrebrhBQa39Qa*c zO2;ROT-R($P=NCnGi%%_p8W8lucf|ED3n|T5laR^anQ`}H}u+n zfhQjZ+O0qnAa|Xl36PAvGkEN;&INA9G-;>Bv+Yv01>;B;r@V|bdi1;Nx1#mboES|1zL$5Dyy-z=z3?{kNkK62n4}_9&MxeLRNwO>i{kCQ zl;5duUa#3>(v8(X*-j8gg-za+045}&NdR~RBb=h8Smn|Fk<&4e1{sE zKgOkXS_#j_eU0Q+#N=uefSg*=r(+-P7l&D=`ICAMWT|apUz_=#9qn(-|LQZMW|t&1 zU)i>Q4<9hp5yToCw%oyfB@~+%!uL)$9hklR!{-Z$T_}F5m|o*7vW!Dik1x;bEW1!! 
zN6P#LPR(p`#|`op)5Ok3M6_$}?d`Q6S^_95nzEu({B54m-Q`X~0j(LwhZA7NYdwn< zh1dx@jISxbPUG{jMuN-O*VJepN!|S|USYcaL5!l@?Fbpq5A`QSvP82O3!b?(Q-#f3 z!b*a6=NGJF<0bvKE7D%tj%s5nib}gPj&c5L=erwEj#J$RPS@QZ`1@v9H<-Lc9~ui( zqt$4hz7H*gvv?dD0t>P;dL5%}M;vtJY~>5`WqJG_Imy#&S(`s)qeS$)=?w4Y_!qu@ z+K*h;9lJuy@wj=2=X%*1>;X+LQKlq)Pv-prl72?s*fpNtd6(%s?=o}A4e5*8QlOoj zNwEa@tY{~Tj%z&aLPDywbh5UK6nCJqTD1LD$=y-Wt8ZBfj$u1HKV0DA+vSUlBvoYW z#p@3@TrN0vIM%q%h*&sA!aB+N*I4FDj#Wt70C=uKEnC*hB6G`1zI|K@^o*nbnuf?i^u zodhgPA((+pYFJnJ(!$6+AS@4V@V(;tEd|xl@|8Pn6?r}M<6PlPkwF4HkrV_G+`fWw zuJE+*!7}Exhs}f-Gfr1{gcJPpf23w%0Lu5a~bSdvn&!24il*4xGO91bPk*>j6|K z#|NDVKY3;48%Fk4>tV%5gCS=Dv4Nj~OiT}{CU>MuLfip_z@pOS=T%xcH}7Vi!vLvu z);|9h8RT5NF)1$<`|U5+)e9NT4vzkT7qq?>*SW#{%i#N*2!)0DARts9OR!3xl2Vf=hBHwyu5Z?#6b@Ip2OQO=`T-Q zocBT`$8p>$6Yr~Tn@F!{zC1j~y}tOCS2%QmQ@T*>cG&g?%Z!W^1BYYkak%o7l{{@i zOXEU}yLi$~`U6v~x@eDU;aWHQm?frV_}f!GDUmJ`MB5ZuA)Cgu0O9QNkK=ichg zy4{*O8^rh*!ARosSL&1j#-{=S>tpL{$**6G{QXuTM@Sqz(O++e-e?{)g@Qf5E)}I^i#y5AoW&fB*Q*onaNV9bm7gH8>t_h;8Xt zP}{Nxf(*Qq;b3)*oS5=u#ovhBBQU)j-BoELYm(X1w{Wa6N`plhx9aku=2h`Q?Jg^( zO(~#!c7SmRbRV_9N&ae7@^ZrARf9LvF_F8Km|L9HI(096m#bkC=C-R6iF7Va(mTA; zt(M6v&ixF+qRg==aKroOsgPmV*8wumlN&|jWp5p1NdtmfRakd2xff;O$t7{$v}Cbb z=X62ZQ+i%5U+)PR`1l->?|Igzx&o@QlXm6m$dT)BcIVnP#M!(Z&V>{7t7GA==Z(); z@p}Wkjw(o(qV47GJ+%O;OkzDsoi;%|3|9*wdn)gj8=@#Jf22Ow)Fvx^xN3r!uM~6O ziu|kKJ1WPh$uU4#vEu#j@B=_w(B(tA12)zy8xnq-8#7tJdsQL!>|8J}JH*;-b$EsW z6>RwE>mJTW|C)-xqN<@m5zI_Ofvc~JE(F1SHSwe;2SMmt=htvvVGcEO?*)(IZ?#b| zMkV;JAEv7w4H@nkf|A~bm9(D@(NR1LQjFGagzBHYmk=FXKCNBtW&7ko4No{)z3!V# zvDjBj<5pT~$7=1rH&1`|iDBNY95|~h_ZKhPJSDtdr?AJ(a!ei$jfKwuv4X|*g$gFv zQXuxYylXea@i!-YNeZTO{<3q0;@UYEhFfh{;l?;N>+tKc2NPVhD^Y>>DNbcfcXgj} z=yzDi4c9&{RAA;t7J`71iYiB&+Sbi^=r{iD4ju0!yWC+32{wy3{K0#VrcS!c6tUJw z{oEU-tFbW=MIT(`c-pQES~7)In+_+88)Le9D~=vE{VA_*|2$7Gr8UcKCB)ZXtLRFE znLV@Qb>modwkt(hqgM|)PsqL(Zm1W;`|6i9Junelo9Xr&#CD4Ipcv|hBLF9&dA-4fOp9ZC1KPHfk zDr9pi`u^6hbSlyv+oGm%jj98qr`gO*z3lzsB`!+?9K}TFW{AUw1yxsrt5$0^1;NHj 
zuB+h{cL0DM0N-rQr;DGeo#yA}Mt2fPDf9t@e+o01ft1iCn%$e0&I!ica}P%IF#;2N z`S{)4llavVQ|F__pWxV{ncxN?NjDR6`wj%_H(ck6($`&rU$*b$6_*y=P zdKVi!w(yvAA4%7-i^&|M#D<3@SC`cB|#jJ^WFQ=hW z%z}(8Mf_NfX9C41O-?T>lfTGcPDFoR;y-iazVxyQ`?+PO@u>lL#q+fZ^F0+g-Ims} zvYfM5*X6f^SFaWkM-eIeEAz8NJoJ?X=B#lTQMLfGa!p%_E+howtDR*T-ICvJ;Ez&M zN6LHFuOnayH$wNb9P5ax*L6nPPxv8)dU{E9l!c&Zu92KX(i=$5IrAi8bH4PNRbg3I zI-S>Fu0G>Il6q&RS}-x_pqAY5!UP2xfbN>sU+cN>hY9OuTA&QkPSFJlgsh&RWH*$B zCD&y*nhkn*lYVzXxlP%*UyK$&C)qzmeyC&}#A*l&NuWwAClWoEx!9yEURbi^ud3L8 zAU||`gtLxKP8ohN^MZRt(Jo&;Ww|0(BkwKISUJ>7EQx|(?>KE)YUmB4XD`(}~Nii$5O!U8L-dgM<8JPpjCCC_ZTZf-KqR@L;5xZktFF@|ZI z?C*+;18+~()eawOt>`|$8^!K*6AUFP9%hYe+l2%KcH{Hgo<`O;x4gf-RV6%@JZ9FG zeAISry%9{iI>QeD+uN>CY9!k~7KsVqj#S}kB1wg*Tjss2YJii?z6DiYuXqB4wESfv^~{CwR0iq5Xa1)hPw_```hFwwY6?WY zm~cb$$&*Un$=}x9o5clT68EtUY_Q&)ME0y5<;unq)0)f-2m+~`=GoVRrt7FC<<{re zGmVaqsmIPRa9nUW!{x)N*yzyaKa+#6T(Vz<~xh^2<`zT*C=hzh}M9Zjw}IEFx~vRAx_ky@{RzgJSwA8t0a8N1S!f<78Fr z2lYUvQ+;fRtI-~`4lwC%hxfn+iOnE`LG?se_Vrbm3hh8@=4P6-!;Nvh!7^JmQ%9U` z!f{B3^@xS9?dzYezozYVQyZ+$gEnhkh&t=!XV0s?mB?T?pse&V?bR)0ZL1`eiX)0o zm|b=b0uAd(F;deQev*bf@klU?Bz<2|OUfZ(9xP})qfI~8do)86SA_0PbGCRcCMjj% zE?zK8^u-;z_l0%c;k*uKcXNTmxOVQ@t4959E5HgxmQoGt!9EFkP;&(Q$ ztE;PaVrOT^@AUm+uJAv?a#bBVAg{|;jL$gz2kZsInjPS3b`jP0_BnH96BjFKL(94n zipe<&TlWP8AD}t=sn66>1L!(M^L*Q*wtHgSjw3vX_5%jbc04?3$`1?Tv^gXaIOW4K zAYp@aqenS@GZ^u_e2@DgAqnxBm;Go;PBh~L6Voq_v4iu-^EeBg2fgp#&wG9!jW-BB z%2diZ?>kWqNj{iqu`8+e_%8CW$w(mDGdXW<@{V`Ml@(63*wM1^T>d65EhgU;MYXcF zb|$90O(yXJlhVh7zKt3z;}Sz4dR0rCl@LnXrM*r3&$<25YXC!SlPf(s**6Ut_M75^ z+`Oycx~LQ@EWISz1Oom=r;$$Hx^?U8QTEMPYNp()Z#K5J3bD*K!9h^pfvV)EIa8k6 zfAlOJ4owUuhm#qR^aR10D&sfZ7tX>%Sc*N2_MA$&_pRO;kLHzseCcw$Gqs6(e^|F= zXrkxwg`B-4YicjjI2znxogL55DPSiUEH6psGu znpb^)_742E4iLLVSv@j5`XT`+?_Jk!_T(s%U=?e!f0Ih@SCw@e_giQQ2~fDbwLqpO zJF(iSS!Bxlip$E1WrR}#NOt+U?68}7!!Q=*fkEyeWCb;gpPmH8cKA*un#5YS%4cVP z|6SW?4)&P&dfzLJ;bn#A0n~;50qs`Knp?&mj>sQPoBkAZ+Iq1a@CDezAGElrnsuv< z{Xi$RjHdAMrzLl^lTeSLjFZ8$G6w{Za5ajMIN^TCQy?+k7yu3cysm`EwCgETb-@K| 
zB2|W8nN%ABf7RuU`aYYYqp^$BqO{fN&YB`|Yx8L9GxUA^Z0=-x3`(C~LyDokq~VzM zs6RI7`bDsEAc0iA&`=sPuy#O1{SdSqsSRF=zJGFQsg94Dp`0kkJ@t&v-bpwW-@-Sb!sS)(s1e#+&4o3d%lOv&fk6ryT)W zVkTNwCqFlO!etmaKGA$7czNEJ-u@}hV!FQA@dX<6(ByG654|2O~0q;JA0j%~y_How!}0@OK6`#qRrp`%B5x zeJvnC=_<{}w_4SGV6cL&0x$QwEvu$vjpiMt$KE0LMK)u4aAr2VYCe!0eb2SvmVYVz ztXnJEG|{p4NN&2jwj5xaj4Qxt#)kVk?VDltQ*vY>>9uX`EobP&9jagbA2!wO7O9+0~eesP$f)HUn<~Z zSaloD&UANm*NhmG+PgS&O%`TaLrU-zO}R@x$1zT$(F{{vUC|>xzp{f0A`c<2fktp` zimeu*qF#$$Ot($WFngdv>8U5)~xotfrCMBZ>#$Rjc>PVwH z1obgLLn+!aJUu!E=yC`)@AfRdRJn%RqEM8J?H6$dt>IH;j>o$b32qm-gL&YWE?>Sa zt9MvXJSX!FbniBr#77^1_3#;SamkV;G~;bg*ZRt_C$j%CFQ;pv=)wK5Vob)#zk>Vw zU|l~z%a?KdSJeBU2)GdeUWHjVQ)v0>BjU<>=1;i(U#z`(Jk)LbH(r*CN<~sxvlgMO z*%d7kvhP%O!-%nuB~g-4WM`1F8~eV7?2LV9$TG}eFc^$6&qrN7-}}Cv>$<+(_wV;l zFY}tuH0PY>vAoY?c^`T2x}x97RyC|_Xq_$4Z0joV)W6k5?M-?&n$!lLguXj@hNVY% z(GJ{9UcSiWw0$WwRcPD1trO31EA-^>k(L*zfSzO0m?QMe`L%8iQOhH-0*vDJ5p_Ba zAHQ@QY-nZUIy*YVEpHC_l^j^Ixlyl?3@EQ6eGm~aZ-bRKm3Y}Qh(Nj0g4br`>>3Yi zg|>#>O9Pb%mDs$k>L)L6JZc&A29ht40Xw&5^R3dl-}}n!y>)zNj4EBJnyV#2O(c_U zHAJRdyIQ+^lea&2*(8CbCF9XL*eBY3_>yqDEJv~1sWN)*d=2JI0J|}TTC3_V{Y+6M z3sDMGZFt&@X64RaaDGup12Zal7CWog{{kRQZquWGlTDtX!X+zvd$k-zdZb_kw0nq< zwlQiOt&R8xI+oL{Z_B`=oU__5R!}%Or-@->{DEOOX26BXN;DZLkQ6$oG|Z`NFCms} zs9ka&3fefxu>sg4MXbZDVEbY3E;>coO%;E=#Rc5+fU0P_sZS+lI_qU7zUnaOo;vn? z9oXdpnumnXm&DQ;$uLL=;O$F-fJ?p-?m})(B~2+YStv&BqWHs7a6!%X#Mc5a_7sa# z?K_s)RRccBk-gP6@rQN7p^Z0EA3>gJ?|6A{=3^txNaHc*QoQ! 
zF~7I~68k&cpm3~}73xw8&_7EfG|XUp!X+p;I8xRM##to0-%!5!xr!ws8tHBdudxj< zl6HE}Ic9!(m|#@u72;K99ptq!>b5bJfM7NitUYd0_B7Zzvwy0rZQIA9aunR&lVZvt zSSvYxg=6z|{-lx9hB~l^yuM zW=UXeO{5xD3r8$?U+suGSh(elhbKA#z$r^cHFAc*=(<(ryHTZ*2z%5LY9acYT?D+7+$R*2|rsi*lU5|79YO*&%NyFrB?a+Rc=-?I;E))$b7Zp`d)a({P zjd^5c5?39D%0LLbiZaW+ES@R98rtTPQ)yJPC0!I&eXs#(E-)2{;UTJG)Aj|!4=Yy9 zDiZ;Ilr;A#Bx-)49bKoBK@`-8{mO92lsO;ANCF$FxlJ8}1p?P4G-!ftemTs`R4?eJ*t_>H^GC9p9doVW^>NN5@;r3TRPIC zkU9NJ+QpVvsr}b_hGhVwk~EXc@Ck0q?e8%>L_?PNji?0S zHD?=SX+;*lbcufdG1_zIQeJ{L%z+^7Mn^ckkoS<)#zBi|n+XGSsO47e_=k;sIycWV zD7?LnYv?Q!a{voHJuFN-wHu4NxRxVczKIxg+G+}%i;HKuK!o&uc8n9T)iYh;(-0C8 z;xZoGVk~QCiQJXyDfC?7UUK?a^#axIio-KOv+Fk`+mu3OQD>G`bf@d;2v5A~$@Wh7 zxn!Z`j)i5T3FH-bWoiw7f_{5S=bi>iWPuUe=c6d;LQNYnO&!3v1B$6XaoxkVL== zgG-+=rTQ0;wiY2m&eNH2e`uX1!VlidD^FY$Xn6>9)pOh^&MnRUZLCCYieeW2o8(z20_Gg);c50b zwn99|kb^l)`XQ@8vXDGwo)lmJUu|I)N}^#~T136NsAtcfaS7&aTI~R#Vl3~pe%&$P zgtJaD4rrFzyClgN)HTZ`2sdhO$vXFr4q(y-BCeKLM(vGGR*5$bg5;AFvOHxcKH3Go zu&TLtOAc2)%3<~|H9db%1my!$VfnUmy`C`Bg881i&zg5>zW=H?TXeNG5;>pXqQI=(wVBS%V)tHYn70YBm zPv6J;F!vfQAWgo*M3-7Sg8golKF~ydrZQ`Kq5>;(#Rbi&?^0z8p({6h{P=w73o9Z4 z?WN(#Uu!apm-4Uk4o>`*Jt&exD%4%jD z$^xB~92wV^Wiq;-NNrW{RG*P4ldKn!lx)^^MQhNVk+wc;p7l118lYj1&&O6WOS)=c z?M*)h4IA;^y?eLTxS8;ZmtzYkrxgyR`Yg-`$-~%?A3qZTu-3Fk432bhChoeSLRVmD z%W63@0E=P}Yffe^QMgrJ!$KNveEQ2xa(SH3QSa zRQeus-TdWTGJuP3Gcb=6rB6e~`+7TJ~GokB@oo?Zl(ihTKU8mJY$2u%J6PbW#~+wyRY zoTgHOdKu+Vw_dV0Qnqq;HRK)dT9Nf`*}`}iVjG?C@#6MZ(-Zr0jigR%t4>>$q;_bp zq@y{tsa$nVoCl8l5}B@%cj}{B%-4rfmzWos&`+$qg$Ub}B#E;8{2S9xPz(Hq9UTtc z>*il+xWILO3Uj^bCh+%cy&O`^{4GiDZ&$X7ZEHD5-znU5>L=lVz#_o0N?1}O^Lao)T9@-|H*)9IPA>`{Ehy`w+NYuUGG(LF1#<4?^Q7$B2aSRK ztPxEC<;jQ^Tffv$J3cwD^|uqymO-7#ElD0Kw6wHmN?+}D$JQKgWfYeE5SkcTXe`Mm++T{wQaJut4&Uw^6m|x2 z*KLf|@pGp;Nx5SE0;GB4G-k-a9+6iHpA3!moglq2g)AQg_rN7 zf2~j)WN(b|H+HR!O5eY*HQ0Gz(OAh-{(=|R!91; zq(&Qig(_zANe2`bgGUt?YViTp2Mylg26f;EQLcvv2sEL=nlgDUrqGWV84ID$3)qtY}( z%WsO<9Tg@^PzP5Lu4HP8@0_e2NqlEQfI~?fMJR!fEHcp1(`f7@W~(M!d5POrmFBM(V$A8ZtQ+o 
z#T}M?yk@wT)M%cE#qo5?b;;`eER4lhH{aG0Iv!=G#2zo5YgKZ{mIDfDtE#p&BWu!b zH@{9-TU3ZxIc}djx?Y5GL37XvZjzLuG^P#f~F6sVe#B0}FR{W`&jfE6V+&2W&l2?{w zLL=X>6)m`8oC2ix5tunV7Wtvp$^~x~A8G~~7PBe|4hq$XjqZq-6kw`1ns*v|84M0I zeO`dNb2L?eh2e3%BhV$IePJbalBwPGUG1CtE%8b2lNSUc>xe{FC#`7k@g%u%B_G$t z;YKB%ByOwiRjjAwv+!Nv?1PGvx}^@o@#bv5eer*Kp(kka2N7j5nWTM}YkZrO7Svjk zG{W@8lHy~%mUhc!P9H6Ao@}(_!yJ%Q#p1B`M%G}Q@z~vtykG_cm)^taqgsZ3{DyAr zoBB!Z`v``%jY!N~o+#s&9`0K*0%FZ{AIAVHgi9lNfeCJ42b=LU{Sq0xyf|F=O075w zvg^1mvDW*g zv9+CYy+{wi)#+KFO@b1AF3Te~?DAoe-LWxwe$_cZ0ihI14>Az$ zVl~Bjxd~Bg>A510#B7d_lP#j?W%o;v#|?$qw{oQSMT7)% zsXNEHF?OAmpVP77cq`S2B%d9iS5%K1bYO7Jl|i&GHic2dnp1LQr&!oC`^D0gzDkv8 zhLD2ECr6lpR+(@$**Fdys`~>_o%cRbsx(+T4aD38)5xo3a&Z=ySo#R-Gadx47X_!n zC#!fAO6@1v3((bOv)^UP9QSl&5G60G>)*@Xt*P^1MMQDc9W0OH(As?F#L7Wl1y_Y$ zRBCW{vXL(3R@yZ@xFzvwOXBlFCb#cx8i@DbcEi5ZDGg1@@Sc$TN~e{L&-Mq+YCkRp zpKKu@UhaGCF7Oc{8%5U%WQ%Z*eZ}qB^q2ah zm=uVtL1>LI23pzH|5#t4wxNUur5WbMGMy^= zir=wvDVFy~k5Vmy8;||CGr6|3H@;C*P@03XY%ZMV9pv0i^W{ zyWF6<&ZJ+at)$4`K_aon_dTeb5|ep!-%|hABlU8A`juyQBg40HFWmVF+4uM2-K&@*D>YoB11K;C!nJc*nZ2}#ngk`Da z04xEnD^sa2Ax>OUJi5RbQ7#~{<{Y(D=>i(gfSlPhcU~x+LBZxTr$n8PE$r`;wP}jp zm7oltU%wd1Pk1f>wDudq4(e;u4Svnrlh>y0`I_dvcV=4)>Af`F?M%OuqzPD<1lKc%)wFPM*;ei z%;dnUYR}!v!?fv!hrf0M^_}yoDpnn|efA7F9j{4HI6 zpAR!fBxUCGw8>Rl5)4{=wz>@6#@zaTwfY~W@{;yVnXf+Tty=Iu!@%xvJf1$kR_$PF zP1D5tGkpBJ3mG|acFm4l;m0gK`N#`B*I3eTEw9lGlF}P5nw4Ff_UK<&X|DK%e_!~L z84i3?obq<4o&S8)o65a3Am!&oGQYD_=P$$wOGhtrG(z5F=pXyv(bNC*de+~2@xqGn zV%(3t&MK~_vvmtOCAF`!MI7$IW_18R&7*@x7j)z2yxXG`6I9YWNtpmurYZRO?CVUY zjUZ#7#Nb!GRSMnJt5-?lBW_#X=I-0`)4wnAhZ1~|;)FX5Ij`~$#2=u|W{?ZsBSdEJ zwE$BS_c8~}OWvxCD%KZ;oa*|y(K~|qYXsz6EJM1XD$~uJGDXFhgH1!#)L(Z5X{EOzcjd``}9bsB4(AzmNVuV!}kG81=)vXPozQ8&sFdmTW{Vnr0EgcBN_1U~d5EzLSZ)o!e#p$Nzj* zzwT-FCZkiX-KeF@X}S+O{FjySbYZWl{}7Fa>(f9)DBu+0sn0RJ-)5;k{4gLbCgeZ1 ze^CEP)!6=X?fco4w+B14Cev*nJPcge?qC9X7znDKf4HN1_33e}Pb3}UB8T#Uo`sa_ z@+*IP!s0Jq1#HWlIU@rfFR##cf6#B9%ui{P&j(r_kMp&99sXv&EVi|@F-1mg67{@g z+?SwDS@{0jt#A(W{dvjmxL=h82h=(GRZ0TY`!o_I*!e>H7V!8IkmYyUD#zIomi}c; 
[GIT binary patch payload — base85-encoded delta data, not human-readable; elided]
z@o2wAa@}z+x!@e3w)=uh5>utq@_TL1SlSnvisk{imPf1QrlD8v=3SwP4gZ&Z{vVl{ z`OV8*cgHkzGyEtQ%@3diaIke$9PPUwZVF3c1}Kf1gsS|DlPC6o0C^%rla*NtM{=MI zp1<0Boqbs_f#&`|nau_h-ClwULV{UH6(QY5^h3wd$;spXB5X&b;15fG{K%06H{lzt(Z~upO2MX$ZXjZ-nP;O29?vc| zuSU}Z_%w`-X?_rJiPv_#@DqO zVnDg@Wp7<@IR8j)xJ2JgSJA}PHTKEVr&5hiCE9G7%c7>$VY$I}_S5NY%E?iW`iSxX ziUr!;%+0enCyuv`dDcygp`Q`1zmV`DlH}^%Y_3_#oA&`wcE^FHSN|NSe|B0>e?JJPLN3+o-A^PsTgy|7;2Y* zu70AL$=(qaZ!do|QYuawOUF`BU{m^bqST@23w~B&I8CI>&2c#bKso%6#>7?g*h6gc2h*5}6C~LEuiAzqGwT~c z0!+|%nBvc$2j7E??ZYptqV7N|<7Z#+LX~rtYhu;`0A{)@Eb_zRi`tl&=O)N>XkgWp zw(ZK^A?>n0ac}c@y1D(M1z=KQ5Qp{+L6`8fD;f3G#==RFqcQ0Pw0+x%M|%;H(+0y~ z;Y)P)oxE@*2?h*xTCx)ZZVh zye=~w?nRySJw29AHidPHU-_8mEOtDe!B`=b2rpFk7x~zG-0~G2$R&lo%oS~q!|&b0 z4A$&)QR|C@Ua!UxYJRLa(fDOC+;Cu!^V~cUr9puLleISA(t`?lXb7nMr%LP3G5RmF zJaFwr-iTu&8m|CR<*t;)s>8gJaisszporC}mA-gMJA5U+ZZ~A2WXz`J^R0XEt&7(k zD>$aqn2r|FjTufWru<%a&e*@2DJMaN0#Xs0ib`RxMIu1EhaiTV=!r@ceXBguLzeYH zVXVn+_mBmzCL2qX8on$U^pb7Fab*{N-((Zsl|fExyCuNNEzYp%sW7AgH<)>w=nevj zloZX@2s}JM$pwXp4(65y7EpDA1UUAA{mQ#SP1PhKcWLDWI+(}G4>Ra$BF!Yu^E55g zZ*3`$Qo?_MB=71+%It4rx+qlIOa1K-!h$UDH5@M2CfZW?708*5#Ed1#Ovez8dfO#( z_&ys_)XV&7-4W^|gig>nrhY_>u6ky!aYA~fW;W*!Zx3qo_m{s3q8^Y8xp}$L(K0$Z z!A)wsN+*6{6|OV#2J3ZYq-rg(dB)$&GR$ino1`4#M`Xmj{hy-5e_NG&eYDPxZr%`C{!n<4IOjR{- zQZ8+mscX&Tuu~n2wH6&{a@)Opkq77=ul6)m&3RGf=ULmK3QjF!iM68UZu#t2*<;xo z*%ofhW3S!m652svKA{LNQ1J`co10gCEKhj|a9G-^oyO<|vUsn^sy?wAS&B(YXJa=z zhvh!&Q{>cTS+T!uW;hW~ck)8?i_tk*%xV@uvrwB`OH$SpOmp5%^&q@Qq{So;p)n@)%Up7(B+X5meqDckG~C`hPU3Q zUfeerriuOdLtd4)Iqq*ho28M3IS@>XEdvuGB#B<|SuW*rS*GoaH4C;b%b$CqIF1Rf zK&F-)M_^g?tFG@(jt|a!XkKNPA>BT{65N+x#u@a&Gk)1#;BFCU+|?EZ;cc1kV;XI$ zywEKf79#8%?MgD!4p4>ejCl<3CbkPZX=^$)pu1oKE3_k84KGFb6A* zZ&TA}e*aC9kus+cO|Ja6=A(#VucC}DL3<&F%)Pc>^U}$wY|W#}G3SoY3Cg1vxoWLO zLyBsn>JJg^7=6ZP0s0|37LM4VVl}%Sa0)hvD0`#{>%}C52TI>D+wrtQ*VzZ~Zc+z8^(WofAFtXJR(uLluy>PXQir0kEIDQy1cewFU+~*1-I0Q(HzU1<( zuZ1Wnccw-+`fu&SQNoeObu zdqTJ}Xx*k@y=gt=HM_{ZPqunN@q*DbW@w1dFF2R2we<`u{38W4381529XHrV2<5U~ 
z8BDtK3CKvH5)$KT_I7oHK*<<-ju3LsVl?p%z?~?}r*f523|ks0c|3rxcxogAy@N|X z0a8~t^&fLzW_VyY7xEDNbRcnl*~_9tNEx80*e<`TkDAsZS&-@lXlbgs)$5LV8AksX z%t7YlYG!>ftQL;*QZ73R&7n&Dp=pp$I4<+k;F|?fmr6~KF%E$nLz2`22U7YCn*$xx z=b9_+g-2WMAAWW}{LJODR9{K63=c21IvT9D1?7n-W`Kl8Jsg+EZm=Ki5PjfriT~Kv z17Z=Pn+Ax>#e5#u0e1F#{SdRXFyWA}cP-C`E2i0plf&4`tWb>9gZM?hVI{g|wya8^ zY_|2Rec1a*m`7kEvtdU9CPg})OUj~}r_L5|^$VEjD@ImYgexi>$_jwDkQKyg&*Rah z1TKImC(y*+bRS|eXri&xOm%JlD6VM@2-D|+_P6QoHJ6$*IA8{%7tqLaJAiEY`-qv_ zk}_jVh?EuYPltt1`M>O$l-}F-2%VXqr;DxhXbvY4cG^g-%I(0I$@lBkS0&E+y~W$+^2i0bm?8;{wjrku&=6HwZwgSjkcNEp>uICYmOiNFm-AU=3OvYh}>DK9sa-eW#zuw|4^ zhR2tjTAFXJTyHeqoUhV&wIOw;PXKDNr?uJfk!%U&Jge42-n-oSc=aRMy0O-+3a`1-Pw{leqmQ1+@&wS-E8w~i7e(z`n*EVcWJmN<5A9s&);)7SXReM54I#L zlVzIGvG@JkXO;E5h0&|!T&GW-6;d|P;Kt(Ak$H+9RcJZ*(c7QkS7skzJd9BL{<`a8t zKL?#$xDrUPk2at~A9opV>F{~>g8Q2A_j;qplWcdc^>Z$g48pZX zy|n>B`&L7h#$^k)RUug;lJGp(M69~~7Ai7YmZOFP(t*RUr|)ZBbz5>c`o(3cWQUds zc@-+g^eA!ASVxy#z-R{{l-JR2Ho5vfi}@NVlbMB%FCXP)S1>OZ=&ANbbLqh7{1F+j zZsZV_#4HMIKuqZ(%MSJ3yS(BLOQcj18GwP@mPMgQzg?xtco)B%Wu;lfM{0`QIV7)S zxp9BEZR;F$jeqR%>(|4S~z`zcgpAi@rZKOA&}VHS{|OYcOo{$rgz>yOKv3 zS}vo(@)Q!$U_vMhqA@HHHaG4*st z9t??cysCOga#xhIm7ZT1^l(rm!%`R@{W;;-&1^Ia4o_K7y7$aJFlpM!1zw(CM7q5U zbnTCSxkQ&2K9S*t4S*XMqmFO~R`}X8$CQ5exO;S7+Ev}Q*$ZNgPv?A$q0XztIs0}& z1vmHBF5h|$a&$xx!pk?Rc-Zs2@p~b3`?&`Nj*<9uy{^2<#m=N~s7E-|MTMkd=i#B% zt?lJVii84^jeut_j+`Q`YGqF(qT|?wJ~B7e9!_N(eHV@LKo*Xu5cRU6o4lXx4+wvNNdGy-99yvQ$&Ps6%V59L0-1J;Q(xU1l$U&A>+6P}L|?a7zU-SSdf+FOAL8f=$oUm3l=^d`}BR|O!< z>5NtPT^l#(elH!!VR=U1ob>$W<|eJ&@E67t6|Pvy9->$+Zl zd-v2C(nBW>kvyO%w5f^>AnvMkT)E#_;?T}G?R@7<`{qbh6`{yu$hZr@+Ng;>K}Y9o zP|UO{iJI9F-iqLJsU2d_+n?(TIbOc7*Waw|zyi0t0$Z9}sGKKVW`kKQrz~nvTTY&e zz4fHDZ=wJC#4?|q^wYR1`Ote4wGK|T)6e-WG(_lo8=6`mHbzJuW)~!m7<;2pnVv?M z_KSWL-@O4)$M_A3Zvxa<%v>GP!oi{U1_k8$R*%~b^=*LDLbWs48l+WHuv%*pFO}H! 
zQ7(?9nTjc&vKXETBRdZ*9Vx-^UBKSX)!4DKnD+||EDo}v>c-yNltn0pg~x*!xJ_eG z8pxX+NCy%@`Iwp;F%J8C&8yjG9t$f556&%e%LjvCf+Upop~b7tg&F8$9>`Nohu6#T z$N3uYk5ZeOmjG{NFM-hA{;1=U_0CIbsB!>EO5l>UHqRe9>+c=%xBhAP^tMIJtcb2j3`!*5AnU5xw8Z)Rh&-XzA$`}jU+hIa@iD>%Aoy~)2B7n|`z{`l{w zr+mjdLfETid$#Y~V3pH2Nm#j|uCKaiz;8nafJ*yCY2LO5odFY3awrq^U(MbAkb~0% z*q=W#M%rZRDmMxZ9f@Z*>l`H;8rkX8zWQq0FM=N&eRK-kl%W)SddwWu4;-Hiyqh@W z+mb(}I?{BS>=GHZ@fd3@J&JEdl#<#wphbM(wJ31oc@~WR`IWM1D9GlG+ zyK!SwC1G%Gmb?2Uj__>y01QjiJLhvT{e4n1Fq2@J-&(qba^Ks**Xwfk05Gs*v?cDT z-8O6KdC~+22Djy6yp*o%c~JDM zr#!YE_BLy!7@B;tRuR&*nTk%#%AL^q@#v-G>Rbl&7IU%`X-vk>SZO3k1Wc>9nm{{? zX`BSQ)xp#jsC5Ii8vBA-w&Qu3;Ht|L`wfG9i=I53k?cT@$s;>{fa{5QNg6n$;>>1! zgpzk8x0BA7X@_72vW%f?OE*o1Hh%Fu<z!aWF-C`dZAi^GL)y;4 zPDpsMW{78MZ_q_lP(;}TRaaM*53E835_RcLt%w-cxVZcadXF_zzCDgM4tbquW&0-8 z@ie^dl(&FsFaeBC%8@O;h+fouI(`sax&pKMV-eKwR=y0aV(=)7U-^(i8k94#)}C8HWm|ptRXUGn zB^=;fx*rW^G*R~H0P^DbT28`<_8<_Lcp-h1ugIwk`8o)A_fwAE={Sl{rUK?WB0KXt^xuu&~96ST@ zLKY(O)*m_E=s$JU1WdQZ(1_1)la%ttgeFi0ejX+D03E4RJX(Y=A!tn-`F(StC zn{biXU7~EI<_an7w6d25)*LxrjAM2V_<5>J|NMkUnQ-jelLDff+UV+~hYm6G5J36Rd@SITyGW>+(Z4-~y z%pwJKkUO$66;a;#^{|$PYLH)-@DpR(KEX`}F4LWCgjlC;fcwC@)16p9llmIykslXR2t6F)kr7O``p&EhPYc~Vn=->&be2U%(zD<|jU>xiAw zy4N?x1hl{|p19#bO*&m*qTJ+RkDN4WF23IsVbGo1U}ve73tyc6I?YO!0G52NYR6C1 z2}b(osQWYn;u$tF&318XG?z_wUC*8?5@2m#Ze*`~$^%UGL|j)dv{S;pp@K(YUZVU` zS{_r^t_d6vAMXRsEHugHG(fD_^Ny`;2~{@u!nVIB2zZJYO8JvtFDoP3io$*7pU3%6 zC6{}9Yq=+MIDv}WrO>7Hr~iA#{^3>m?UTYtQZ4(6z)J7|Gg8#Pf^O>f%zsJUyS9I; z40aTO#&uz+l|#V7@`6b`2)RdRxyVL;cm~$=msaTCT%tHhSB*~(Sm-@>=#Ddxlmyv& z)TO@0)hc9bX+Sh5GxYN%PDIeHYC*)j@h2d`(J* z+YlzU1WmIopv;aQfLjSVxT?om1YUdxlt@hf`y}&reK^cuDb0tu(OoE&)$SaqUUBawdJ@*2Aau z9q|X)%6>pUmvH|`xD^;f5{`0r7tJvO^Zc(lpZLASY zG#B>}=^(9gINM-+@Z%k)X}2%A59r-x&A&-&bRe-x$+8rMt#E%}8HiyR0tk7-%4!ZC zbvFfLn8(oF@k>iL-D0$3!|px~TQd*$G1tneR48jjITlGgdGh>?k1v77^xo=SH|%wK z?M|&b6c;OHq{4)tp`mQYb7i)rJ}{u(XIY`edurCiLJRWxlgBYiAf`pt0smxQJ+O9v zlTJH%R|qqW$YIXgN&W$z{)gavg1}zf7&5G;>dYo-mgIOOgWV02X{h<;IJVxW&}Tz3 
zbIB4Vj(O-cGsf76Z_x<&8L}Go?0A6IG(rSVi(OvS+8pGU( z%S`wADh3BGzPeCcVXbk8^mY(9`W*4SLDDZT*PpvSif-m)cs@ekbQEX}prlLyLYIz-SvrE)wJhC7*sE;udD%1gkCd_D5$T^;KKYr!3EF7s-4zCRVR8ZF>}<`>We zwwHr!&HzBTh!!AN#KAW7@K`kgPBlOGoJr_R<SZ!4 z;R!D?$zIqS@dz%n99FiEeqSYGP2a<31w~*PZZ^1wpF*83^=sBU6kdqC6(cc9ea>g0 zygob9vOEJkmhAPM+%Ty7Z(>{N^ zr_kl3hN3fpzyc`~aJ^A+e3r_xfUQ>Ixm2w~!W+xYv?z5D#y)*kU`q_-SSkbM>ZF_& zeLl*_A;7|^#FMRosSU{JS`MRI4ksbe2C1mbO;P(-*~VSbG}zH&mzB)YViFovD;L&E zNTw<^$)YC(06wbO!4^|H?4_ z@_RmFNolcmVd!){fl8!v)YOn8m~BF3-;raf>(cM2gR++QXn&T4{GkIf4m{B}^W~2t zdj;@h`JeOG9yWPo;tx9z$7{LAIsvqkQ_t|;jpoL20R3>3Qd^8Rj<`!;W#{ubkEt6j z=Sj*u<3<>SuR`7f1>s<;KYs>_D=Dx z@|! z)uBOS1$5%Y`}f>DHcz4omPtFRk1np$MoWALr~kT}_~So(bWdKQLsZG(@CDL7Ws@E$ z8!53xQcaq}C_nm9IWgV!)pt}iQ{dz}T6tAoSRG3Z*3Q6H!ph`-B+~;DH4cn-#ez?7 zuM7i<_xz2NqpPoyn{3$6I53!0&Y%GWoMSE_4ed6dS!O;BFg>iOmBL=WdxnVagT-O_Gd)oYCU&d2J7EMwvsEk@)U< z$FkyLs^`QL&jv+jB)HeCM;RRfOZFn(vwXuPhE~uGBz@L9!x;h()obtnHw}a#w>Hq z95zAyV%c%j)=ma5l1-Yh2~lzG`eEwdq%-?t3NyXleb3?H923q zrju-< z{}K)OEqEoJ2YRpO(*3C^PG+O*M((4euS=dYNYLj%RVb=J;0eR|Uyak7_}-lx2tG*-Py2)DjJ!C(?%NbnO=oC6uC;mfn(@N8)q9_}(QJ08qt`)Zy{`o% z_a?ocjFw-dl?w&f_4@H3#{_-|YK()cLwSn7Pu$noquO=}R^%F=eJqi7;WvpiXf9x4 zs~DaW{ShrzQlCmRj3t|~^NTfa6QV>8*||U^`uo~V>+Mq_f0YvK;x#=pipwmHp|y)%Q(6Whrk=zZ!QTKPS}QCKTPSoCw27 z{CU?G_---Dzvw%`b1Q{8->aP~4IC zE! 
zOxn@=rUmf5&S)zTlP)QhglXW#?te(UiJ#%LV2-VnU1JxhMPeeZYz7f* zY(7#_8!;6&|BJolZ8V4>NIeC_k~)K>JeY># z`S#L;AvK0e2Q?v_QVDLi4PA?mvh)gBI;S z%uplmQ(P#3->H@8Wz7PebtF*DsG4nVPyr+wE!Tv>uHJUuwZ`l`16%}AO*DgRKRJx9 zW>q^ZI>V};s@&Jtx0SUFMd93?k2Lt`WVi3hOM}i&299=Kpcb%*#yV&3f7vX{Pn`bN zDDsq;A(L`57sUIBgdZz+am%E8tzkc)w~=b+!@nOaAfM%}4$(%+DR>~z1i-$=xN?5( zb~ZjZ1um+ZdNt$feh``o$BICTj-vN0rtf|mKgp*ZOaQ}f_^Hb_^2@F$>pWyZNL9qX$_0Ce-p`@;yAkD44+wYOJLX4GaB`70sZ4^q9H z^kX2sAMu@O$8v!jy%DF;wU!xO;o&4ykH;D9GBVQCJB_J|##*Pn7ZZb{&p(btInshQ zcHeboY?@jy8u!H@rK~EZJ>kiXr6GvJ*qnsk(|6BAaLPQXEJq zUIas(jNgjTBq`#xkVg+Sx2#5RFCaja17Es2F302$OIqllXl(TKY0vn{ zngMb|UWNPns-q1rgU^Kw3Ca^tz9SEpwF!+=*X69TfuXzoqEX_S_OD~BwTFw$T>!5N zwbQ%7A;GXcA!G|g?usuDnz)isgL&0_>RS#EN`6<6bYLV@own`q*~shg6cf~1_V=m0cN=@YCiLrl zEn-TK|F;7iBRFAr^!}uv?1lQdCupbqwTjT8=ooI+c zD~5HT^nojeKmLLe`XAsM!jlNtrZi78vh@*Ac|aIKzMiRiY=JA?oog&zK;KiPDfOlJ z))ssq7=VRW%3xJs7=s`hVB+A&?3{vzD~k?x1gBB%$4x!QA}PIgIDX*(QbeRSy`Z23 zDOtEd`vA1i7VG)}kFk}MmCZX~>b!RIr5J~IA?rg;bN*ScMqnOpIQ2?9@LbTlckerm z9eJA~nsfj`hg5)+XRz1%xvBARW7Ax6tXB1SlP9nbY?tV#T zTibA<&_a>6P)aG<;@;v86^dJdK+v`j9D=)+Qi^MWYjJ`F*8nXp!Ci|x1P}J7`<%Vc zdFOr4clJN?Oy-#oCVAGq?zOIUExAwTAN(=)Oe-Wf`hU(n`?CIJ0W-BNK(546s|1ft zZJ;<9euJN#Gr=sjwz6`u_dautRX}de9W^tP+-mWYQDi_;l$n{4BG*pcO@MP$Vo_L0 zv$x6vxsFTbbwe#wiYP@_mtfaiH;fc%8E)pE8tNaXj_&EE;OMGGa^EF6Eku-V$K7W0 z^PWw!q+7vH#B;yysdWEhV271Gq_2?(2OKEgqg;pY!3gLrmdL$$k#* z8*Z9sn7EmZ-*WUMZFk_1f0I|ts2gW6Xj?>hwpje1^Wya5S8DXzPE<8qtLpQ84Xm0% z2gOLby zDkcx086K_TMY?Qk+7ms0givvx-e!}zpylN3lL#8`VUT0SN4n7fT$SY{)_v;QQrA{Q zRaUDMm_}nng9hz-pouo`j$Nen^vSecxThHD-*q1pO^YNJU~2e+N@C)r9$0H+pO_lD z%h=Q^&xejS6=4PrI{L^nN>*EVz5WDR?HOQ=`bOfu*Sf-BNwrPpyzWm4S@ z+>bW5SmxMdnP-x8n&jh8>x9vY^?6yl$LRR`{}Xoo=b~B+z;GI=`U8+bv+vx=ZyIH5 zUfPkfXu2-ADR9>hsdcg+<{MSf^a_zC{4C_Fgkh6YlCyocxv4f+ zpJu?C#>G8kT|ucQ6 zmpPBSQJ=`kvgyC!-NPFSc|>sXIT*M88j|)Imq6MbJe2hLk=g@|SOaNym35q(J&)xD z`^kjK#%TpE%rvf4?t0O^-tAsf!JgT<23~?yZ4FN5PwXFBX#R!VCmy)0Q2vGwBmYu3CMxG^z!8@E3Uimi!5Re_`v_X3V+EIW2FP2A7cD 
zQu?%R{t}nVqB>X8^Wjr*l7==$*V~iZy-putcPKC)AHtlfD<)d=0dd(7O1wSp)zqK| z?3>=RSG6=N2Nc@TOPlsth;?D6xNn@5eQ)=iBP8JvUN2M&|IFA3+Oc7Db?S{h*Qk}( z1Vx<*IeaLt-e;x6LfYJ6|Ek+u4qLV!--c0HF(dn_by;__gk?u;>K)IY3O3F=sJxSw z#+*-i5a2MK73F+3)#Z@=e=I_p{lT5D zY;&eJvTssVYkd}iX$3V5!mf<8pX^!O!WG2xz4iObGQz%F14fq1m(PJxjL)Ynl;gM3 zo|try*NFS`Sw~C7Qu2G({9boDP5fkWkMPvpibtGV5Xt{LEeqF@jHV> zPNKs-V9!-x@_;h^!+onj5|po0JHrDsqxIFba0v%^HTQS(_a z!c2v^aJE>l=}Y2_2SOmNq}=;^4&P}9|Ix`7_t2&Au$TU;owcr{GUrI0e96Z(a6FcXyb+_E zf{bk^hG7x>ZVT904{rT&$==`)JCGYfj5YcY*d zDT5w|ER$bdF$a#&3!{!t4#wZ(pNK1n9)Ed5Gs_=$0$TE3UAX@0@8q-1`IL9i-6lwZ zD(+;?cTuCs3V(AMS4*_Y_;EwzfHKwLhv=K7-xk%T9#FGy`>-6dY_h@Q%lJV_{# zdIbuoJ)F3SgoxK~ingq5($mwQzmbBW^pvSzP9D*+WM+P^O+$+*_nYf%#yakjboP!m{%}uSk^E8Kz_Q() z*!kHHvs-EW9KNd43Z5b%50M?Ox9O&Z~_q0Mr4*1313N%N#67Ms%SHW z!bgMp45ylWc=Kq*XKDikR9I3-^z^bWXS+Xugk82C6PdMsJFV=iY5XWS)!jq_koF8s=n;$;0PTJVd@J|JPyhL(h6_WVsR!$u|Ztb|JDUI=u z15RP<+i9n%9<=F{;JPr=>7X1lV*A)498q~+a1Q2N)pGU-!|hYzp!eMdZcDdGvSY7~ zwwT71H4B}+rY=?V29EVzuKE~IiHc841N_sft>&a|w&&yy8a@@{yh;m>IJ8}0u?g8JX~afno8P=&e56*wws7W5YAT)EqmZ10LA_5_lrnRXnvo}QgdRv2z1 zT5j|_TQa+%wKN|Km8)0|Jg;y%e5YQ}hPo~W56Xl;vI>Z+FssFB>(;O_=-S5T8I5B}WK4@;&;~HvEOx?(n6=?E)mAG3 zlkQ_C0HMpXjy_*Tl1L!sN6MfV4fxwmDwi8&~mV7vN%%}Yr=@N z@;rg=!ye*hpnF9@EQW$g*fTPmN@DpuDLgnh=du9<2C)P3q*`8UG`wX+F5}u1N2d;5 zwVx01W4W=8hY>583LDp8*7w&E=)v;o}I441H0NME}t5~op&+5TxuOJnP0v% zpBexKX;XLGI@ky5%X=olq*;~Ao0{wc>-Y|peFkNlE=7_f%ER|Y?5}mI^-ZnbRawDp z)VrtKIYl=D#_tS?(h_B#&bcPLJ9#wQzxttCY1IYicjW26_Ec1eyLoqTkgM)BY9#17 zQM)Q^Z|tz7@oMy-!%){MPST;B?`^D5=^PdFW9l$p)9Hbzhd4u+{x7lI?;#Z7`a$2m zrF)(exIPNsd6R^}DMvWLkJ(>w(r%|Cak%lmNMZpKgkzy0>?LV=O%3`#(ciLt%3CQu zJbOk5_J>dB)h36Dbz)=FJrKdpAxl;Hx;v6b@wWuXgM*d8(ZE_5sR9N8ON(I;3G>B@ zdI%-GpQM9!Qxpx!`93zmzX_35j*d2MTlSms*~{_5d#=y`Lxy!Xsc-!$s|D5?B}|QH zQ*_q#S~pdG%Bi8b$pQeiF~FmAhc5VhC?n-q{b)S1Zs=ae2P5Zk-sBHoBP0$byaY{{ z!YEJz5Ki@(V+U%w}7+fXpQN7WN+br7yuP@lYYjK_~+ z>34}g$gGR@se*h>>wBZ_;|!7djrjb^W=>ou_$Maxl@cN;z1itb8B^FVzuadkl*sL^ 
z_H>y_F|c3-m($}596qDOsGQkvi=Yy?T9HTg_e4hMr#~~Cq@7j&i8$aAQnh%iKr8c?G=as1 z=gVh)kFU={etV%q@+~fAwZ5dn_k+)ILzPO)r%K95$a!Eojj6j9GyVeBa~rR%(QN$= z`>3o9_*^V6rQ{?qvZ8!SovV5yajmoX{x{`(D1b$)bu1r#$hHEKC+A(QB;BW?rUt?8 z{t;Qw{??+pR3z2>0+hqy&Co%f?`pp6? zLkCBU@>UDczqZt0@z4Lh5y0yNFL`NKYFledmJNNrY6;tJt@9sEsf@z?m+AG3Wf+K1 zdEB?zH`N*~r@hV9*$T8yfT3XU_uFPg_9j1%a$I_3bQ=2-s}<9py`AgRbsjuPA)zg) zy1NI?b4I(q_Xsde%Nami5UcGliJSk(p>FCb-a`UQlf$Va9P~|ie1l-ey*8pTr2<$74y+`kU5c)nm7fV+vKEhZjuhESA4-?uj7 zbSDdg$U9O6wq&g9p9`HGw%oP4BX+o1BT}F74peuN2-nC`^=5l7BuT~8-2EkD`{8T?Ks}r?g|5X}vBB18SQihx}-_gZ)T5Y5~-R z@!B!dau;x3~wDfb*V<2 zhkaiXCe8t*)&@`?vwc2wU?|`N7c#MtgLiZ!`h|Q!drASbI#DXp9|T3emeu9+*rFed zS%lGYVf@?xsZ_JWI>i#<|JKL<$2*HE=>mdkkH_`}gZ6_^YJ__VkCJ7dr!f#qwO0oj z+u>0#yL@)yvGYi--U228;Ziu5AKfhBrzKQ4n#qoGnM^%?V9B#(4Gn!oG(u9dzzkC% zBb({naDjWjKjYP~Z=4cT&%kz*!?x9QK3libsj{aYEq40OrXV|%>wV*2HHUVx$@qk^ z0*9_CIiue5o8w8lWo1*9)e`eNJ~vk~?0_p}(p;{rkKdbeW9)K0XS=UPEd>YCTb4HR zJEm99iV0t@J5f5K{q6kjX%a)n<(L^I*6@G1Ch6@M}y^ z^V>{FwbR?`41*F#6b&SVU%GwNj2r#RSbn^L0NLNEx&zaJf$VP)2>a!p6HWr{!#sCL zXgxdK`vf0eV1%FlT8ez1BSyJ%p8c;i|9cx`NYt;R#=q#zgdF~7XxlSyP?%|vQ48kx zjo`4A_V{dP_}BzB#~`KwFbWQQPh!W1& zT0=}qy0Mv9Ncd+|fg#8nztYKkUUBE>FdlFr?f?)x>M!9z{7`HdI-=$;Ib2^^t5ZV= zBGRAH=L|Cve-W6p+#}2}n5Ch^cXt|o7nY<%PRej1Su0n1c>8p8_>LI&g&Vc?bW>nA zgfq3_GhZ|bS@!*%kl=d}zNRg}ve&FSwPr$Bms@E^9cWI+<#qf5?Is4z`YN#!F@ye> z2bbrBSjdMwR0-OLcV=*ufZIr+g;~4I5FGS~rzn`o7v+Nc2y)_@~YPkAeIjAIN(0 z7v?~2_UQCPmx058FEGEw5}VrrK8q&`#H|xnV0skaSC_IfvP`iL$%%a1E&o6u6u-jK z+S^mE`7m37wR-L=3<@J4?{6!dotuY+?`)CBXi8R`Z%-apZ4H4qItxKQ;(4@6PmiBT zOK;w`$xf{~8|L%4w%3PXoi|ZPC;^Mr>vJ5p`ZV(u^E11}emxP8U!|6%^s(&-;;`kLN2U=T=XcJ!#(@UDIQbz|#Gyc(Ggcv_64w}mR1^rSLhk(XJB za)^C9EPQMqWtXcW3gezNiCY*)eT8_Q(u%#tfJ^8n!gi(95^=3S0VS%UU@M))^KIeT z0a0spj+9fuLIvLTn+P76_&_0r7qv+z!@-pmc0I~HY~Vb#bNL55Z>UrY%lF!Q6$j~D z^mVZh5&*WCp>>2=Q+#Sn0s=`@DM0u{iqC6ZqGZfNn1}kF`(*KrJ>w()(-Gdnx%_RP zzh9y5*t3w9i?=3PvT&@inRMR%gMl-Cgk4u%To89k5kQ2-|62gzpTdf?!^?7PE|1#| zYoKpQfHdMyV2q6`9>gTiD|(^d!u}QpFv-6DKt?MUo2^LvB8)L)NvKRG8JKCq 
zcu$?S$gOq^D^3u4(@l~Ge{nw)-|^SN_&3Fb3JxY^UD*sg!V}4jb$oPXv`u3^n`Vhl z3nKnxqN3Fy4}E-67@>OIP+Ms&>blOt$iQHBIN?Mi>Kv`X{!T_l`hERaSc!&a?I-iP zm3GS#D%~AIikGu&!b(}vG8=*AGA3QoOz4NQ^h$-gGH=&!#%>EUcVnI#6RY5e1>W$# zCin|ReWw4t*;EKtwi~8}FJ8Zi;<+O^^IIN~YIq+4)ekLfrbmqLJJXw)mM%ucC-Y zzq@tp1~&)B{5pqvj~J>8TE(==FAQ{|oH|erv}P!;sWKwNM@XHQG}CY-nQ<4NElS|I=+^Tsy-WtF*=c3DKwKF*&Kc5{ zV^zaCoO5@9`<9U_fUD_VrHHIxA%iEUXJ*RFQK8I}e>Gn8L;if7N_EjJKwj4ujej()C z?>#-mQbe4BF+Uq;s1>}MuQlQ zm=wo+c}G5R@`Aq5;_$q{qPQ7>9*b;|0Ysq^@=?5J6UPmb35>@LKN6c!_fxOqhd0Y$ zitVOd!ht|xULBq4@ILdo%(Ti`8u5?B{Pk;G)~O$o5ikRlr0QJ&Ad1RlOyVLwDG5n!4Psf|0-5gVcr&4SE zy7##sam|ON${{5gZ;rD$d2z*(E*A%;Z_e6wU8 zzPBM_lJ{A2P|QKMU3VsyC%!ZBqn;~Nk3$=4!HVmyJ&)F z>!i4cjlRKuWA|?4->f^Ug4AiB!3}4Zo2VdV@E42C(oN4f8yJ+Me*P!TVgqXGl?g6Y z^V1(FAGdb7;`j!sxa#{n*>g@DS4R%Xeo9l$Un+wt1=L8ES$8UpNHp}Jn*CsQzqm$i zrXD24Op~PJ+M^Sa-c4x0h4{znw1>%RpMBX&yP3P|xw&l#$~ZXNf#&Peg)m+re@Nrkt%PQpl!3#tMp&JYP!?nJcVbYmA>=#BS z?^CI+9(8rMWt2WNGBq!Rq;JPRS&Oh6wrC=WeZp2>>@ICR3U57>BMy!A?fhVM5N&8a z48_PfmQz1;bG()@f&0E878H?NHIrjzJl$6MHJC)aT44Lu)<#9o~xieY5Ce~6j z?Q)4Jx82UbBqsFcP-QrnO|XREq?Fby?6fuXcubb<?!_oFWBpF^g5OwDUkMxPDg*nHKxv9upZy_s`xleJO#HFw4QBZw(Ifh5lz6a z-aFAP97f^3iAh;M+9>C<8X8VdlEm7PX%c%<}a7wayQVTb$bDW@vPvsH~SN;n&8b86V zYZkr|e7p8-WImB=oROWp-W>IKRS5&TrO?qqJzX$AdLVacs!^Y5Er_}%lN^Zp+@CKu zy&U#!J^W`Yvuo1jG%%Cbj=r2m;A7XlasZ&0ZSeWdU1fr-*~glb@*+QLXU^_C***Qj z`9vZCdg!f`W^|)}F2G7k)Oq^U&F=CZdn7m><*ax_tl+)u=1+#am)#|f=1{oa)i^EO zxU`RG>T?f1=Gsz#Uh4{(;2svZ4#R5YJ`axf*Cq@^yl z*B=LC;n}fKr^he0lmLbRK>%M*vF5MPYPjd^)RntCSTL|QfN801*QBr9-Xx0;xKd#> z?Xf=0?RCJ&X#3Tlmswd~c2KU20I%n6PS4FmoW=H!{l>1#g5J^+QOWH! 
z9ta(8qkU1{<%M7;H=@GYQ^hIbE~(xN7rGktV?;2@ zm`6M4PRdlxekUJ<+`S30nx95R40Ij?T1y-31__NGxsOLq6Sy?sZ$BpSYnv(%*+_>h z=hAo?=he(@&N`8DCJJk(b!D8EnM$TZ5SRHJjD9P*_qkLCM=Ky<__qT+S^fS}PhTl_ zru3xl-W}xYoIN~UJ)GIdEp!qi?+Gr+_V#?NMe>Wl(T@3t&1DUJPms}47H%!8(4ulR zHNv&7j4tw$gY&wq)Yys`u4Inny&$Rj(~^wzh9l0OI$cj&*?R7^GuY~Uj$8p3s=Xki z=$2@{E`kYa*Ihtnt2iXrqRD41fvrESV@!L8VMMSoj@*`hJ%%92@^`J6ECz8|S$DGw z7XCExD$1bHf9O^Qy=v))S4JykWo7MR1%zAUCAlAed>tgGHbwF}7-v&H#uAUd5foxnAG3|g@k`_ek1boD8`r`DLq6O+yM<;AWSzWB~TQO^`& zJrQ;y+TnH+u>$@sw3zObi7X2R`@h7(*WgPqCo@rZgZQ*>Gfs_W@7dT=LfCcnH1(3W zUV%mGe-?5Cc0Z>te{x&raGBV1CZNJ&{`v=>^=brEzq6H{T0ynE`{`IcME+6Cv+ouc z?h_i7yo%yEnvF`=vFm`v*XN<9Un}Tkh)sW#e1A?K)^}t1L!baq&e?d2O^D6o+1eug-#8MQrywA*6~4WfKkt%q{&|_YRefp&X8H!n&K} zQm-lzV7OCQG?`Ti0A$U2>anJtbd7jBiYv`TU8J{!0Q0AFTOuL-`RmyDe%VS3+$T^ z3}JlbZKUqmj4$rg0AI6K(qz`o_ zb*U&5OU9m^jk9I28&jchk5Y_aR7tt9`|B&+>gIkn@QdZW<+W#ATo8BVB#6hEaRqTu<` z9!{-*8QGrum56dk7u+bVEj;yd%n^9&RKBhOw*Kh9upnO&|A^-D0y}Ex*N9SBP4|Cb z2=8>J(l~vUGWl|b$Vonc9m`r2&ovYB3-XH8D;)GO)NIuDt$$6q zV_*8Ypo)iftyFFPf{Jn8tHCsBIY>ZVkzLHN?osEyhzGNz=Ud5GexJFS*Z_Y{|FV)m zrHl~#G0V)E=`&>vZ;ODVF~#7--=K?dizG3xhY9Emt`w0}j)Y@HUYQ#4nOQeU_SkYk zV{sYh-wKs^W~y4Nr$Ab)we%& z)^ug_v~qJFJ4K}E;%rTQ;L~Y%B0r0OfCrCCI@U-R8$8N(@s~6@_y$UAb|cmmeCnrd zVcP>?hth1!bx+JV%;W+Op%jOceYUH;K&0{gx=X=%Rq3^UTL}zhSfdb5{LhLJsju|5 zYQsj=2rPlYWXrPlwVzJ+O`#v;4r2kz7D|cL+_Y6HiN+Usl4}jK&NikLK2=d(Ap@BP3LxCXJUSE+cJmd5v7z2n&h5v+JHT$G?dRJM<(3Bsl}m#K8_1L)Ia z*^gv7Cixm;W`|FydJQQpBmylugjyI(%Zxb^2}Y)u>dj!nfnB4;Gj{wA z{8wQ6J4NunuVzNWPsAh#1Z~Gm+TcC4U#|4;=iNZ)Zd}7v1 zPk$>AV%cl99IH@_xpi~$*qhd>%Y~88by)~#*eM=E7QDt8>!brK5-ZvEM9#Wsi&|$T z0uGf_aR{dKrs6@TlIZjyU7NZw?Le*{(!|lRzNV0b;@sF6vUI20FF?3cAL&7OpsAOb z^hb2t&f;$WJ^3);uo9dd z);C>i$<2A=MlFs$x_DL6I+LY_`iN$Z9}J4gSTP>$5+(qscF>gQ)P|bpw-tHhu0D$4 zpPFb2Qdgo)3e(i<+V4q3uq;MmJfZ0U7HP(Q~JTjKgFvX$M#al}-lc|VSZs5pU@ zmvJ>p0U;Q8gRV;uU11!rhB0d&0|T^Czv}NP0Q8~h%?Kb<%n6hqggO*lSZeY%zYauQ z%YN35E6ahm!&l*ea_2JTG7Tp7@xC-XZ4^g`v@!nf!|N>))?*L#)jQ(q9Q!N>?2zHY+zfSiwB4s==5rIj1XznUbL4 
z>SVfItTVC+f5AX*ew@7VU)i$%vtq&T0ZF7CIhzCT0~b<#+?FKs|gAc_9uH%-iJC#E_1x=``%R(bp%$4c_& zc(mQFB|{vwYZ34UfzK|^9``w_0)>=NdD|H;L&S}GG_DC_=Fr!7VwtPGshVAkmerP0 z<-F>e`GK>->N(A{xj75ts>S|Km4%EEJ&fjVK}6%AHEIm$FMWOj%(kQPpp8fhI4w4 z9xlw!UHi-AaorW)TU#*AbymtAYJZNnJ`t?6y20-1?rhOnx`R?<%io!46f}Q*k49;s zd4U#M?CjY-@(UWb7ej2F_+(lX!!HMilv6sVm)KzZp(U^Etty!9{d#%2Y>5T~ar%>g zZE89Y{(8<2{L8S{C<)JAxycx2kN2zlh%GXS`YsY*(d6u;_hz^k^<}WnZv}1Z%$_S? zvq^rEv(4;YCK_1dIAlim@OQ4syR|&FJAW~W5EYi*q(nxiE2pd&<>FZO5NP8=W8>f+ z+Rja3JHI?dL<=35&{^EOp8i#K@Shxh7$OZ5`XWv@laofd6e4Dr?P2G*C80q*F<5}kJf6MH*`r3Wt(JxsNSe_V@Q{{ zSP&!l#5s@0d`QyR@OGG?Uh5L+*faZ$7z2}u3PR~WWiyOU;E zY$_z?aXB4egvqec;p^4d`i;SWM)gq5LcnByNa#kKFQ)nplZ;d*z7X(2JL)EY_)+2> zC#na(+N@V(a*kv$t`}XOv;OeExZbP>ShKE=C~g*W654;A-7&6no7>a_3IY_Mv$P`^ zK7pjDDxBY!|LcmeBEG3sVrXs9$F4K>$yAd`ilp{Qssuer;Xo~(cV^7&PYQbibW4pn zhMiz$(5^2_522y#dP#ZnuIyH`rMC4=Hb-53BH<1oF#Nj1J@%MEp*5jOtN*Ps*XPDv6Cm!t=`K3v5%`2O)sl33B}M&Wzdf<;eB#*nl7eJ z#C3}9P2jccVb+5`eVotvsmHArjCOmuIZ`(W&@WI-^AUfwlKDI9{0(o|w!f#1aj%uu zM{SI8-1FfDV?TO=vyO9_%(J7lm7;D5HMu?DgL()B-&c0H5 zI`<0W#NXhe)^g(Gdw!UM$PWAi!w0(!ujzmpE~_(X>m%l(Z#se))@#>D_Xa`_P&HLx z%ztO~dhTs9y?!F3eH9A#Vam)5I8;9_Ad4oOtn@I1^T|ctI(n+f{(h3D&DH)}a;&Hp zF<={XVEphGxy0T3)Y-T1Mz~jboKk1vO2RU0T+dyNkp`aDbeMtcge#Tc{H`LEy{e-= zm*I*qagSXwrG$gqPBuOz0%_E_tP8brKc>4jHrJi|lc0y43$_?A{$~Oz%M`ngwGL*0 zgWH-(N*+zF0C4_F%Gk-R?GeY$lAHIoDX8)1(@yx|do7g0z@pxm89*5d;a~k}0h26B z_BJ0sw0nr)+gqG(pf4Fa@gODCb*p6q^EZG63iiFR-cXcGqbZZd6^z$(3ukzFveO#$ zB)deSY9L^n_XfrJvL}oWG#MntTu)c5qeq@aZ#|nffQfKGs5OQ8HWs~A%o1&;8BbeZ z0%+dg*FL5#Tb3@}psdm>bPIagI}HXxfFkvNDctb6LD2g%hCDQGfJ*cVjb*FOv%3O2 z1pU>?D~&3eZK!f1hC(Qt|MM$i<#DBNbZy2Yfq^8|_EsJJ4oq$jB}0#~6pm+N5ft^0 zxsWvVJJJ#uXp)2j<&L8uS+ZJP4O|Z%JZ%ZVTqd!yz&N+mre zc71&TwhWZsOyEU5d8VREt;$UQdQ}fj^7zi583!Wr;$Cfk;78^V>1)WZMj1j z|0wnrQVRWi>B89RTc}UefgrN1fpqV%*gE7_@(rA!2DnVCMvgRZ12Sv3B!+I8BfHyC z5u*>T`L%t@l6CHK(JjRlh;VkE2Ye5uC+Vu8d%0n0gEUyU`Oh* zKyvG3cha?lwz_e)y+I)hmsP6yfkD7rC}a$iyP5Je&3NfKa$1Zd#=Y_Z$vJ(khP^w~ 
zM6uFKF*89nY=OwosderBQu~|h1kD%cYbV;={}ixx>;u-k(_^wow`W70#E#tAt3d$u zkrqud;RUqQ{(*|hGFZrKocC)rvd_-JZ>H*3j@is(v}9|8M5SgR{Q z%x0UaT*;r)7aaWbqkF`Nsh>z>0r+`e2@83@vgv~D$+jEbgb&ZHf8S z?r=sh`8SjmAa%ZXeYckz*zbe21?g%^$i*SbOMrpP%{gSNp7Um??#2_k9=X!;>biM1 znYn#Vqptfe*Dy-_|bbnaIoe2W&Z`E0@=Qm>90F` z8@orl+E$|WI`ZNk9Lf<47zgzMPcbPc?bRS+_izn4e0hGj;dhs2-(^wg_QTS0cL&AL zT}0naF+q{2Zl zi*|lVI7$VIcOIQ?jJ{i`O0m&dOeksGsxm)R-n<&ApK_<&8PwSl6qdYk&*V`kXL51> ztO238q=K`R!nUsv$c{6&Vj5*k&gx`GQc>Dfdd5~zU8-Q1?w9IuwQYvyQ~{{XDfcnR z2&N=p(0Cq#<&}@g)Q8_@HsGrMa~42>!y#L{;r&gS9U3b|PSFM_nTUiAEb-A(BVvxG zuzQr)4PF;)>`J0Ut{7U^Vr1`p*AY2e=~-9@_p$s%#mp@^9;Rg|@w$AC^Av~2uxp{* zL8foZ49RpJzZ~9G4u3Q4A%+GWyVq48t2?fD+F7r_3+iHgP4hYUYsJqy-?aOhZVX@F z4y2wXxQ+zC(EiHFp{FWM_IfXTXfUa}eu&Yw9@D}}9GxxhU)b8umOkC%A%$6Ud8}oZ zP7h{If593Nrd9Ud7}^210BKFKd@lEZ`7E|aem|pIr|gIF^4H_)q8|2VcQ5QUMg=tM zxSG`f*vjUVoy;0uJ(8_W?Hsv&-^~|OZ9@*Rl%!23T<)_+elP`BqB!CBmi~gT${>O% zjbsv zXd55S&O(go-e9C_>ESi$-KUwFQ%(^Xbv7_oOvBm6#NmBxH`&wGa8S=wcQU800Gnn( zY*9n!BX?|`?@mkYVfnh%HbsrRKjM_tcs$pvlA(>2W<2SsR44F*e51Eh$-GM8%rHhcXwa5t-nk$CJ99Y6@X04#pn0k%xnkW_D$Km6G&Bb3d%XnrIxnLh*JFr=p zskVI55xvaj*1JwNg&fI1)+d5o*L+HeRWI_wd*sVva=8IxgB5n11*#Hf9R{m-qqZMc z^8M|tmNd;F;$qTX|?RKj(GzQm4!|o@Zhbk@LD?(_uXPbkM26!a+U>mM0FL zGBtJ=;9Oq#kPA(m*A?F(NPQF!=C%2!crIk>$d=r3#)ym zCWR#n3@h6=VI?-8`gWPZ=C|tF>X)h^DhmY(FR$Qr&NYVW!@^nqHUzhU43yv zSCt$VK3q=))%_HKemZbCjo`9J;?>Pl)P&uZ_sq+NCWZH2LgF6Le1vx~{I4IWc8%?` z6d}7-J6u1ggXi-{c)Gu7y_Dj8y>b&J-K(96vhPbkV{6S&MXZNH4)3nsz%=5SRZ@L7 zN6A_nGr!-a{g*iYT_OK<@B7s^6iYo}8VjkU!Tih7=}}rvWWW}Ua!i>M8jmUDGjTN9 ze(r=Ar7-qA@scOJt;p_cF8|uYxa&YzRsiU7aYrsjPZTZOoY;DdzB7@c!3l`Ml!)*L zAK$kWRrs7zU)o(7S`{bL$Y}`fg+Ak%nTK(at@=u(5YdXOuL+>r6_YW{QbAz4#8v!o zNAxi(CaV_c)`hcRGg*-!ITYsNlpdz*Y*BcL^5IrH*?u`CCbBjVHM}umzI9I14dr@A zv^876ZnF^sY22&Plmruf;@HudwcsMs)}4 z!?CL}xvgsZ940{^uu-K%0MI8NEnq*h&CsEI-=t4BhZAMXe6>t8)w(f}f0Oot+J#F0 zFIJ-S1&-tVHYoO@19o2D#<*!hTzFsMl*gR>VkLXd7D~yq zrano&ZEJLFyiwS1v6UPoF-l;8=?DMbkOU03WE;McN@sl>Stt~T)=`HuUw99lX%<)5 
ztK?1X@m=}mNI8cg^DXho&stRT@Gg>`qFiT@hvBSZ|D283M_`w6qi-{fN58H%6wW zEQ2cglkbR~ZcoO@$FgJ;Mh8D)eeHG89e+=|=7Erxj$}B4p~ThwS-+#(r}A_os(1b` z2Jqi;f-nAhJMV_irTtfKX!(2z>!+u0a#-ElG(%2vN;8YI80MqCRfB;lrJLpV%8V~% z$j1C(*Ra2st^?Y`$eFR^7&@QJT)hZI3!pgvHMNte*O z*|TKLZ5NA0Pp^~{#_U7WU!>iynIquFmyl37NaqU*Ll!l78xU&}XZTySuZ5CFfU#3A zjlN@qP$niZri5upA*ykh_ZCzFR-rKf(_kkF;>{f1n{bSkZi6FsQ}Ky&b{frEhnRI> zD4g6>{CtqeSNA4PdMoC{`(WoSC^c1o@4oA2w_*PI@>9CLc+MZqKc*G2`j~v=kBe2Y zuCzn)@;Jhi>^Gt>wQ{PyI)ODsZ|`@?BfZ?KUqZ#)x3ioz?QiaF7w^smXdQZ$+$>$U z4ga*ubw)1^xAtxvvR2kQEoqiJw402h414@$e5@yO`o7!mUYkl5jmH*wbr5M_$}2Sw z2vc@fC!QV1(hia5}r69+`=7Whm8YE;HjXlZWy(;3w$}dpo`eyYc3F? zO`w!ujb0&jlu?@NdYVQ${;=QkwR(D8`wU5qyM?b?jzVL(rtqcAzzfcCu+{wRWN zfky?V`yura&s2Q;=Yw=l{exrn@-gn*ZpddDgF{pw2sy4l=u4BFA7X!%i>aG|AGwis zgi)@|wh8!?d<%ovJ+*Lny3cXWtd#hc-(sW`R^fext}q*X4ll2(0O86K;$x{dJIpn> z=}U6&l8r}YzWy(DHk~XuT;?JLlovUyCL8E@bxkAd)l%*Wr%&Z})IWfO9FU?99ZoPB z*eh5Y%pVp4BjTXAeRyZ<-PjsOO4oh=9I>VqCw8%|_N4Q%kEVu=ms=}CYt|ai9~#kX z7rf*Xojes&cy4UmkQr*{>C3nTNi4N7ftYli$xqDP;EScnPBA$5)nwUtj zcYke`|5k1Nh6n|))I}ej+2Ik)!4E!8bvxh zLx6TbCLoflY)&P-qgTUzNgZP%M`k5Q+mYY=CVQ^f=E%uPb<9Zri-UOVL~9P&aa_&N17u+8~}33r;^yvbTi!bfOxeL&&n?s*cUj~mPmdiQUxs>QEJx$3l6*Q z)A3_!=V&3&!ep$hp5@!m6R4W!lP|TaV+aC!$~zFWLKq?f&$sz!5Gkvlp|=h&OsIzJU4=C1cL3SR8VgKwW{;{5f^6k_Bz&UEg9HYe*?#CAB ztrEog`r!A}3Jiu-#c=4uf&+Y0q8K$#6?1z%#5=?OtR-0fzocd88c8Fch(UxY-eeTK z84@jH=$z>8xBXQPAPzL&v&S&PJf^}|jAK$z3HXLRZuE`a*#BajIxfS#?k{=W|2!iJ zW&xuV4z_^OkB%?FGON!9;lzg#z}tid_g ztz8tUgC+5X=(L~4g)x|X_10{?*%XWxLKN-0HjX~AZOAOOaawzgi+W#w zcg1cwJmIq;>DlKVwt}Yf<+I)d2S;=-FgQX|ss4IWW?HDzb?d}Otfd{Ed3f!Q5-J6U zQO>+0ZSqKi<2?Ae=>dIZ7t0SH~S!vP)_zr(-T9p&xz#MrNLEJvr5id z(Mm(jFNF6S$MPo`^h*mX*dsmK9_?>CwtW!ww6`6wnK!K)FhX{#f-8%F^?sN@P%h6UGuL-{ zG1}$#KC0)0bNp0Thne%i0S<=o_{hI?i5goydhNmjV z#tgI0)7CWk4lSfyIQ_{{cET1t5QXC?IsNwHyQjhW?>q1QDT4o0HXJD{MXmqpkrgIf zjVsf+;;R59$0&ZG4ee@u;KfJQc`iA4rpW~iKBgZ_Dj-b0rYbCN-*ciq#;_(!97EVj zJh*(*6o=|-Lv^Lsyss7(ARRWa%X{*<6&ii z(lkwd8uTmri+Rztj7P=L9kIQM$~Y*8?i;7w*d9}4zwLvA_#mE}&z0gRTlU~^sBSAY 
zFRm*wm00*&`2R=PTL8t?WnIGwfnWgw1V}=V;7)=&B)GdY9^47;9taZLU4qkSL*pK- z35~nEyZe7L^S;kBQ{TM*R8hsHDZ2Vz&N+MUwbxoZ#YRO1e-Nz>=-ZRe`%uYgQa#!= z-K=z;-AMEInSafEbmOp{7Cpr;o^-}>Zr2rY4PvXgw; z2pJHQ@O*}3)>!bGr7@_Ai=@g{vwiy2OY3n=UUQ-uwrm33`#8Du1a393>`lGF_~BTE zuI%m$fuel0`0`&U?g$0S*OwDndtNiA4J(vsF-^9@ zG5yed6%TbXvHlWnJP$()+@4w9ON~37KwbS3YS6>g%q=}BAUaInXTF_LH?l?Q;^^0P z6P({B*EFLbbFiuu#Mj`vxvG6ZB77}0XRp`SZ7d%inui6D z34?F^*?^n?8BwiFTT%9?U)(=2Y!Gvn-5UAw!a{osr%%Gl1M6ns^U#XK3x?Igqx6@J^h_*lJEdF8a@$YEL zF|p+ofdl|)$Y!2I{Ag547U`e9##Z2bzMWQl@?>Nl#Z@hQH5%kIi8Pe2AC?=qOK3$R zyT1N?YX&^25vIZNTn4{4I==kSB#>^(jdk)KqvTl$VhKYDz2%i5<7bxV;U#jj6IlnP zD%ojSO0$5&$G9B&;M#WB_OCuybSyRqjR+N<%@%Z{$F4E+ICrsuqqg_?f& zzpwmnT1R-d^i9n-Hy348o*SqasRSJ~tUkAOL0Fh5RExH$Ki0IqUjErijf#e)4)iX+ zqPpUHS5{VuCDkC|AAxUQi)4)dE*-?zwt%N`7g|fZdG1z{)lk=_LGbeK<78{3QxYY!;kvQO=Nu!wk%$V4L7NG+ ziRTj}45(Y$eVQ;$K;ANfWY%4^Xk_2ADVvOQpX@f9aLy{7&$I&%HK{jlsKvRl?QX9j zB_K!~H!(if^NF<_4WgWU|GH6CnQQIW_LxZB`AB240&AEDq0w0vE{{~I@ z3TeJS%9O^4|0o>}dIn+<5T%MPZ?CFCPow!1dtaJ{IZ6SOVZYcTyCpX1n<^5L?GAb| z^eUmZB$ds(T$(dq#TMhcttdmx_|OYNED z26(NDofNnGZt+w8NGDX1=of9(E7ld~TJg+!P_93M=u#blpq)uYet)o3@#xBmTr}d_ zlHUXRt-4K0#qbDXUddIefk|UYRXt5KS9lTXUigo=$NaSEyvH+=02yynQ|XjOMK2Jq zjIGauvF%jtnGThZ0*DK#v}DGNnqNToanI2m>l->}vgdNp+xQw33g7EogA?=;|JZ!S zDyfUy8Y|t%P<+~~v(8UzcjgEdfAc}3pePbwOJQrGJJJl(urRS{_jNDsq<~Sh%FxQ< zN*IKt(vj-nJ-8Vx)cU^PZG2R}GFDdCEbm+L&8~IlWD_~_fR-z+l~wsB-(uB*abFnE z@O!ln`5xxrN0WC~Aj%3|T{<-tmG_s^2!Vq(_L@!9dl#~a@_=}7y{hVLG0Uk&>vjzz zkcmrUGqyGHZerXW5Vv`@H^?}Z%ursIAEY)Q^3GmYcA4L*J{@e>gpPA}tWO(`bL0Xn z$tjAjcUN{PA3JZ5a|#P-EXH?sA{FuI)p7}K>ioQKWjACip^87n6z^}&OVcG}ClM<; z92#1hpuDEQGmaJ((Cqwc)BT?pqzouP!N0zY4#bt}Nhx@r(R9Up zlxFE3h}d6|!Ghq=^jVS1lgJXkL5Iq%*6!ZKu(ss&X{UNsJqd<)yk7#9)f~8c%$A2H z#hf*Oxfg!WR>;x==GioBhnA=%GBk9XJjTZff*SUi!%7p__zV&VIy0O<)fevkjn)n6 zd7W+;WPxT9kHFX%EScR6O)!U^ zjPjzC?Q22fj1d&W1{x+)=uYFa%*)GX0uT#BU|E81me~ew!Y9WyKVu&?1(vj)@6g8V zZ+pA;_SZWtLZH+o1I>ua^Ti$#-tR`6&1ZIXabw4F8=%D`)-kUxxKR>KBjsZ&0H3U- zxVEWE!|gfl-N<*Ba}HSRedLkFB@<9LW 
zlx$mPWvV&)FnNjzP{+?BA#iyL!3M{%SE=A*Zac}Bg+JQ+^Jr9bq7PERyO31$r(4}+ zlvf-Qkk(8umJB$|NM!@##{qkpg72=!dTP>p+GtZ#%U4qs4& zJO>V=Oyat7rg$3S6sN>otV}QdPs5%4eYp8dLpSrz4^vHpg?=&hJOd97O4(2-J>3!Nc&L>_2BF*K(jbvc=)|XhYL! zG}nxe9<(}jX7|xY-Me(7*0nR&TNdA4Zw7u624{MEg#Rr%HKIUhAQjkL1P5b&Xo>_G zYAAymvU&~n9)rgN*2h%*LgaSs$Xm?=brv1cHUlQW>L6xFn)gE7lIrq!#yDVUWPBfz zNrX1KuHNjSq+(1DN=S=q(MlJ+E`qkM{S>Ybp+l?++IK%4|A$T$Urg%s6C&*N-9#K7`;OP1OV#JP{W?J*2r3` z>6n}IUGdGq{!AbH#$M(Z00brOb#o?p@Zt+B1ee-uvP=ii!f=lusYop?hvLzj%{Mq@ zJo#nV6A>+;){sDOjT5XjC&I9#HX!ciG>*Ar5*`mk@**7#bFR4n$M=DTdLo zkz&~!u5Y`22z*BV&A9uzI<82gMRfuu?V~jL14DCR06dUXGiBC-G9>$t1O)xUL!pib zOxL2e1r*~k*!U?A@!VQ|r*aOGyLI=jVV%fSSEX?~j5tl|h+`0Psl7A+M&2vTYSL=+E5D{8Z9MlAC$TBTaMe9q zWv+Hvd{3t-!XMpo2d6K2RU)JI3?mT05cV@zsVv!4i?oA^)oA4>)tLHc?KAWjfdpix zlx)cp9ltzBR%@-&+PYvcEiXg9$#DP0mz~5#xg1-;#N~ly<9y4E5SBpM04I7sWSpe* zsIQW7zF~R#iIKExdb5kW^={P^ zqRh4eAxF}ugWMefsfee76A<=u59y8PRUFGS?aKfuum&EL#ABCoL++BPYh7c!+$7Ux z{sgV=UnCBiR^18hZL%RP` zQkVN~9I0FFzMgPkgX-+Ktq`;N&ISM0?AUw@GX8C`ko9OYo~dFYq2x(gWtBvU z2%$MZF0LFFF#_~76=bCCmOyv4#@@}}({i#b{WtX|>#CE}YV*;U>n2oku^%XaJU*G# zbPJG{@EH6QkIe55C-g{NnU>0E4?rFT#1Msh<+W}3vZ+mt8P>oG zEOI~J83Th@Ef!mvsbBL*20g(VQPyihk!i1%3-G4=s^-riYLDSnTgB{`YcETg#iasu z*p$<(`>u-)VsT<{d1CD@^sQ1EV11ap#%0Bi4(~7Neb-g?m zW7trvynrMY|JloJ@7!uH1i0BIH><@Mjm1cAh;fY;)G8a`xsHNC5WYd(ewk1a)GFJ_ zGNKv)HerqaMn)L(-%z|7pr2A(KhE(ctj4z{jnc3(=7F>i_32D{ioNu7!CS%{+I+{p zy4R|05c9n-6}t8Uy4YiTo8&U4^uf!!(d#9FMa0&7DbJ=*_L%WZytznMZNH9@k#kkY z_k~6UKGZDq5bZ?DkQiT|O~C{E_ZmltMtF-_Hcbges(SQ|zuJHQ>>147%-k_rzsPk% zn04+~SYA^r`b=vvvK6t0NuX3`uGtTG72Y2=_Pwxo&62jq<0zE{e0`y6KcCg)MZ%OW-DbIS*4f62P4V zZYm)2ZT6*)7N@F@RGs{K>deR??%W7crpQDD$!D|#>v~y!u%4@86@|aSHOx$`{j~h6 z)TK7z4E~6b$7(w0mYQfpAGxWxYO35yKr9z?;PLg&^&R8gCjm@~TL0@)g;_NmZv<9s zp*Bd!%e%jhF7tHXgIV_qW!-cmu2PLwh)RD$A7`UC7|y_1N}x`~+)L3WulwTG#_RIl zYr#;5xN{drAeH|%9@2+4g>!bxv8$P&9Gf^MSd~)gtv~%y=Qf~uqh&KO-gAAuYMfKW zTo`!*Q%wq_Sx1fV<^5=_p6yXUesiNqM9W)D7*u 
zIPWw8teL{;xyR9{yr7%^Q>SyQS?GSK*33u!wkIhczWAFWEzo0r=k>WeMhuLsR&`j>SOE-*vx`~VQ6O;mm+7hrCx#&H zJ--!rxLRd%-kZuA+|~2DOCs}IS@b-`=z|@ca6XJF%PRt;*n5aYqf8689}Lby%Ol*Y8p(E@ZVnR6S(tj+>M?qM9Y{el!~*b-!LuEOI;DfSFdPx|Oi;4!$Xo zWPZ56)Vo;RjE%W+CsmJ52dHM%G)0S(!1i*mpT* zESu2lb6f-0g#1rCL$q2v>x=FKWVdFqN8>~QFk7eX^b92J`WbEa7D|Auf?S0Lt@^k< z<>$NU=xI01{?|jsdODB(1;`6ygy*{?<*S0IU34VP2R&>n(hysT896Q!tW9C8?fk#M_Mc2!+z2bSXG^K-t-HaAZgXc z$#0uR5?YH9kwz_}>@Bt~LM#eYdst23d!Xb|7`3_6&e$;+v(e>HR~7%}pz%-}%Qa!# zT$YMHCR8?sErjB$3Ig4H6_Ay=F9#Cvee%updEizpR-s)g9t8r`OQ0`ouQy%ek7Vpk z2$8HPP2CX7Dp2)Eo1@3kCb5WZMNd8w@{cq|2yj{SnsgR|f@K(d^!!o~&v`zgdMDsT zsV2O=g!w%~-WnDsIR}3zvry04W-%8AGDq{ftK%~u5Glgb@~?fzOzlt7qBSYD?(Q!a z3(8v`QYc90F=Fvr*WGAKMEd#Fo86r_i$#h4I`IDU#S*?BN-bR#C{kqgh3s$MIg>Z4 z9H1A%#w3tcw*=dvsvq@eqCQT3?F$#%-6fKQ^DH{KIrF}zpkUD{4CpY;_0wwW9EzlS zhY6YlLc!|Xyxo2s=Xs-d8rBn!8UTT(v-O9bc1$6Ah)A)5)p3r} zy}{Pn5qGt5y7kz4N)-E*>cGAc%1>$<2XhCy(wT%IsEKEhXv#v>gw`ne5zH;=B31s* zbu;S11>XRNjL-fLDZBnZlv5l`28x%9MHwBf*<^i*Pw+BMw|a!v`ctJR?!i$yR3jo4UIvR%xdd_Svs)<`yC)C)u@gGP@c4|h|Oi2^=ixcC@-T^ zaW~K7%4uxn{960|vUcD`Yik|S+uS7Y(9!uevkvz5uD!1R4H}8UmDAHXrQE_C?TWr) z64$vqY+A}p{`M0gx0k(9h!mutbK&3fE{n2(=mcu^(Qn>UN+_8w`aJRzaT~b*-nNi9 zpx1(e?Kq@BrDs5_I0w`0m{dS)Ehox!q=8(A9E8NBAT958%<^I2mBoq2ZglBUrQtNB z|BmlqYOF|kJ3-4Pd-$ighZ?DK!9;(JwnMn*i&|4)r$7^j1GRB}&<*+ny|ZYrh;@$< z`$im_vVw=yBK__}=}(~O_(m}`owy6s5k-I!HX}a9Yczc?kyg2z3sMjM#Xxnb6H6t_ zM=#Tex3@PtkXjJ}qXb{%t3Y^`& zr5}w)ICaKS(k)llPuUN9jUMIp+~IeO(%yR>>8ieXg?Tl*gl2-W>ksQMu&OA?35Gk(8MulZ-243^pW#m2HatjXkaqYe@n|F0uf5$BoDcEmeO{v>d z_*TuOzRp}!TZey$S4r0O=OP!p zL#ZNMGx5D1DNA)YQk=^kGcc`E<;7hX^Xrg<&@KHAu#3dndz%C#N&=_%H>QxNZIiw z{V=|J-j9W%(}e6jtK-QKt=>Q|=`CtSlE3NEfw-JQ6_DCqY5%rJUiYCyCI=Q#l9px@ zT1;O;O9(DW%l5xVd;B8WzzK4#8vyO7H78?q)vtMGdtdZ&>jS49CW=JVYA`l=93_v- z0r<>~Xn&^W^gz>wa?160(#M1rA>5xiIfWu2I5R8s>Sei{i=N7n~({GH?i{I!G+^b)fXa9ne*0#js+Kmn%8 z&U|BS=_o9;pVmCzGA-sZ6tRobdM3`XWVW$SjOwe|SRpneP`YPX{QUW|$kpz8rvM84 zJk$}0USu=h@M>qg3yVZR26!p=>dMtA;D6jY{%yhkFZ<`ui+*vGS%iZWhEWfZfPN^7 zGEC-$iC52hH2eO6G#=6P_<_+8*Ey!i4Yv^ 
zZD`=l*L|FK6?$ZnSl|GgKVTB}AaM^8g6A1f)0hVAEzI|Cc=kz7OkW~2H}w5fv^Jz3 zfH+1Zf0$(9CsU?!Zz}lJ_8o4-7Y=8-FYx%iB^9*5X|kfd9y*gP*m%$Sf|zC6d*fAa zasuF4R>L0EHa@1ie9eVixbwRYnA@;qtt{wdU9n5Z?~Pfrdx54Sqku8ctf}0$61d&W zNn8+KLOr~k9%i;sJ-BuH)N6a1nn$4oAEOXpacwv3#22dck^S*Ghjo4$W*HVX%5@L+ zmXXnaI#@KYPZwX}Aun(;H2y4%AC=*8&|v!D4J2{s8hv8$h}yljrVv$<4gs4EdF1cH zj5Ph_Q1i5(o^^ zP^nTfShXAK)Ld1tFs!4TF0ZqyDzCh7f3))@hTqMO$+jngO48|&lza}CPP4&wv4MnK z<$Qa@vVygQs#6{~;ANf;UUSP63lsS@^jo08?oU%YY#U4s>(aj3&4OFUzdual7n?@T zdMc2h*Jg(O8EiZ{RM2!u_?ELEd1`ouY!R z(%OX+!RC@4z%oMF-pTY4Q!Y~=-TS8R%r1;3Pu&&j5U#wU`-aD{a6iknegVincx01V z0*!EQ+~yJ)?X`OdFs!Qm}l}f88&6`bV&N<9XlfLSk^M26IK)8EdJI-U0Co#2DPVe5E(V z0Kcw}D1?^Rc|R}H=bAO-@#>f>1)}d3!*=*-w7aIzDtTq0Gs+5Z1pV&bHLK5aKCQl5 z)769N)dSH4gI^QUmFlr)P<+ra1?bv2`M_#r0Y=B=3V1`TjURL=WrihjN3}g>A6j@1x~O$c3%@oYh6}_-sIg9J8uNYd*W(OC7S9m z%JQE)Q%mb$w*AIYQKooBc`X3=oC>~7q>JLGih?1~tX@Eie)7QJsvafEsz zh{F9!7CK^F`eG;-I*n!lXvy^sNY*7c(AhruUmCG%7N|k3Ko(OK@7c|UeCgD{5&%;A z6VMVy@}s)*yB?9=Up(%mBfGvDpX`oAc#449EEP{Do02sWO+_Pti_iRDhR|w}-tHT6?RZRUQd=E~MdbF^~z4x2H_QAxt7e40# zXKoz=WRlz8NxZYBrZlv4-toJ~5qn>KN#Rj_P_av38PPV{Xdv8Oalu&z!4jEtQUHPi zZ7Pfc+e%pSa^8i+>wH`tn$nxb%{pGLmub^-ZK8L!4NR7~LA^z>A1Xt;n4=R#zcH^y%exh|17$Rk=&09m z7lGHXHzRU%lKEw0wq<9?V8qh&u7l{eC3%7CT)U}1MGtC9iJI(nmPi@5JyDE9paZ8Hw*2$wPiaI^7>calySJIjiE=d;<)hto zO}`J_pxjo6knt}LDP;8>k5dhhb3+K^IZZQ>UIso6iU#iMvD#;hpzKh*&w$5?KgH+j zHxPU}a!1TJ6n>M_rP$xsa^916@jsy0i$ zQ6d<8LG3BfLohHAE+A|z42Wl-tfnLPMK3K5;XAu<`E>d5Fs6p`F>W8-gj)C5hqa!z{irAhS1w+UOGFgqUgDXi`;IFVPq7*jS2vyltLv~k7eDl$ERhjXR41yB?R0~ za;sq9P;q;$+aXn~=-)+@{%jmC1Ag>cX6bKW(J=h;Y3thg4cvo}Bjs)zYJhB<2~3Fi z>@x3^n7)z-OCl@RrAw;AhC4*RPeB}{@;SN2r&FPEQ;WF=97%e}cG{kcg}*hH-qX&N zz|Nckuv;kfS3be@ek)mq9)L2-o6(V(VTa5}0ss`{cDX@UC`Hb+#dz_xfGEvwT;>@y z*%r6M#uehVbRrYG7(l1+<_BC4Y#r~P%a1A?*SWt4JR zx;)?UH)hQ`t69mDmB2xP+aGI4uYhk1f1E2zB;&RKR_S#a`;32InsbgGb;-)C^>kI9 ze4KW4PC;^ip2CkZG34s=Pwg#36-ZsmB01JB?o%7}K$T6~sa?Hn4=qQ*AAEeX^0WN^QVk zN;m8+-gI+Bxv1V`8xKol{4|i#J2^CK6dFTV=D&$0=2L_;x_ptJ5te@cY|Gh{zH|u$ 
z3(xpK{sVEa!NvHiN)|&qlh*~V-Qj7b&zK>G7%K>*%poNMW`ppS@4W5$jbF$u0e zV(B zdwgsbTnUlh?e^Bbi_P^r!U*@P!5Z=F(kZp&2TZ+}& zBYUAjiT#0DjXQR|UkCS^o?E)g^RcXuIS7JrY-$uW*yBbVK4n3_Vb~B|eoLsnsq83j zkLlH^_E}M%Wb;z-Y7X9_G>Gcvn=Q4YH(303=MKIXa~Mw-QsLDKdlSW+W0t;DPjS2c zWn|2kC)*00|*lEuDGw}j9Tjc16J|LWC6|w<*CT$H+cgRoI;}vMA^x6l%xK5@~wli((OlqGy_S)%LU=;gG9k1=K-l#xDE;4 z+D#KxbItcWIyg`e{#V*jiYKylSz2=2Bc$&*9JY==raXi*q^cA&^3RdP)Mc;Zw zHkJJtW1PRQvBqKqJwTY>R`<{I+Q1!u2jIo<5C?~&HB|2Cu9R8u00b+83x^<--^I@R zYhgmFiT0wKxyAAQJE0WE+ZzU&VbQtozw_EQbA~Y8m9{YT_P^D2v#ID*D-GKv<1<}8 z=V{LccHUdd2LMraxHVzhMz{QU{i)@k^xFK{yhcHBdC|0k@f&B@Sh*>0Xt09hC#3{& zxDVpoD`a2VzlCw%$e)Jb6iZ|Cn7(YO4B{c0&RkUJK1|@W@;+Ep^jsLcCd+|KZ0lwG zm^)IV9G?}tUw7gRauV>lZIVxYa@A3_md>rh|#{oCyD< z*{m(k#)=bnr>4hQoCXC!o#$yt2hgcFiFmy`*60FQ8Bk49oWv6D02^tndg-?@wNe>M zC8d3_h_*9b6~?=P0nWv})p!7hZnNsQ{&L$#ovCr7lKjS;iA7x5NUNZy<5dl(M*Asn z$a`UXJWlX^jj9ih(i0hWCVNp5VaK&TS*+SmFg!ycqgQ1V;yguWf&ZmMn^{DLF?u1D z&HOXxyd&=X*G`O?Qz2bX($7acPCHkyNg&k(*b5>6`gdAGj_tGVLC^g5HP=_tdBB~2 ztm2FCzU!~;@p=~TJJ*}jLBX1PL4d^g2{@QB!hAu9oz5eV50`?xT}<(n^se~BH@(s1 zUc+ta!RgFSzX&O%;<8Y2sits@%5|H0D(Ikp7uo-+A^Yo^v=k+2f|VN^QoKQmB|Qm1 zJY$&!p6m=eeqS^!6i()2=$ShtaMrUKShj7v_lq7>T6X zvmAh#u)OhVYeh%%?{@a`j;URmf4w1yF{PS)jJlkq?3X@lq-Mwc(9ExN`Ie%f^+x=N z%du5W-#y3CJ`X9x3VrYgFdQHx<&Z;F6>gzuekEzS#@@2GjtnSsOT0kv@$;ZbdP-7wjau zMLlf3pZ|l6AQ?g=<~%LIT`mH)Z9Y$6-l?--V2QL_?Y<_%D}m)sdWIxexl6W__}pQS zoH=5J${6hG^IoiIM{mIon=dn*CCKKyFQ!dvjQ4nuv4~~%XOew1%d~1E@ELCo{aVwZ zlrf$O3=Ns)B6xHvN1_o26Q!x?R5iwHuYo`?bCvT1J4X7IE#R*3ub|rhb1h+rkYB^x z9@sCE`oCTPJ@9u82z{PaLDDm-dAuqSY_Lfrgs58iT@sTzR#WmP(%zb8X#nX5=A z!W=hb7>Fj*Iu*Iei)2e~%A88WIEeQurO<7h{l*$Ad34lU;;QPJD1=8y2y2J^gnN|Y z8f!%P=|DLWp`J!bs7c;UA?7K>pV z!f#*F1qkinehArUstdWQ|B~_y<3Z?BZs84K(w9m`+swE|q|T{1*PjKV1VtID>Z7Y) z!sWH2-?ql4o?tofHl#K*PCr)XGnfBkJ+}ydUhB4f>(*w#lOU%`LB_(77g;A%T@=;6 z7=Mq!W-P*Z7T-;>b<^5L)OFFE_E^}lNx@4;kApByp zo?$cO_VN3(ieu%rEP4(%S033=J1ApnNB=OX41XQJ7Y-WwQ;t+`J1VzWs$b)ZrORdT zne!#=%-rQChV8gCC*}C=&Wv9-aj8=@Kt!HR<}G)ANA{Ypwz$ko)FtEHtPwGQrhcmz 
z(Vd{g)=Z>Vms9^0%`Bh6HHEt2Zv{@pod(&r+7+qb|L%GazAn8|U zLiGk6NcSHjgHcG@PX!+fCk>On&8q!axqiJhk_?&n#O~-MBQQMqYBQU#I3Y{Ssr=_e?N~P4I#r>yas2U<=BKk9*%I$4WQ6TtA?Hqw^Hvj#cs!ff zrFyyA2`=ZGFIXQY!Eov@c_Sxnje0#Ct zYU~;ztY22I{wY|R?1pU~Wpe!T_M46Xbnf8=VdnO7TE#~U_0TE){9?EW17D!*<%cg+ zx98=gGSfHj;F^gnypnRF3$ZN$_c~N7bH|dJGWF!lpNIG*P{Eaetsk-p4RmqEllw8? z9!{{-fIdxESrntw?!cwvq2llf@M92q;)jHeKLjM&&+M6OrOpNc`(!=%*E&jr`a+uf zSw~U6EwPx&4S4BexfWg*PuxSM_l>#XuP^3$IiF@_|D-LZ=|kg8v2FIl!#6ImNjVeG zq3esp9}N&GDGEuZREm@WGu(G*eU4vTgG$sGVadw(rfE%2ZvYVO)1b~}R&PnISff|j zRLB9Ki7+h&K}?3yz;yWZYCoR8c&)!K6;o@!E@f|@4yei|B=T&ka2enFZ;tU(Z2^-I z|H>);-;1^wU&*euge4G<&!9bGBI|eUh3XngG7Ht1@6Y&E;B38|wE3`lHmXq`BIXJD z=a@PO(!Bw8pr2PZS)B%28w|UNDzyAWdgJe+un-ba=BBf`*MomMn+tvwZmBzryorfO z=fX?cpDlCq5-nC;ZFqI9Yn+6b)h?!ZZqY;%el0cYSeb*g-OZy~W8CyrOmsavTytQUG4`Nd27~1*4`G?=@!!M zAhz;k*A?^gUN<_E4B^vjY9#D7Bsvav2Q+i!A#qZRxH~t7lT{TyzI(Jg*3_(KTem-6 znS7Nq!MMx4jb{7|1+lS0APd#%pHe48>7}(0!)n_@8y>e4F3Mk8t;6dpqNB}`xy>HK zCL_cZz>1C#(nQH{`H-jjT#2l(CKc1J9d709jS5|{Z-w+RALU!g*+zxBR^ZQ)LWiv~ zE4RO8?Oo*c-If1P9%FZb;2AiitW64WLT#s2dSrF;s5p(CBnuozt`TPMl3ZjA;Jfv| zrL8)rA9^_a_aCPgOmBntaiMlrxYHsG^q-Zunb%~C8mA&4eO-EcuT#@FoqH0A;iwrA zmfIGx()M7nD(-t#rp=pEEA*+V%Ayf-=eYv=wBd7oCs-*DXqV3ko4`e$a> zryV?!Y_mWETZ`9vO-xKICiv0O9H2Z^d)+$gV9yFD8yQi57HUin%{s~cPRxQ2KtIjo z?+wIK!sNE-meSC2oN~0y2hFROnmsN@HZy&gRO2PdE@nb1N~}nUxL8c#GL+H@V-YGj z@yMx;J=658Sz<28qNshurq3{7p2khw=)SB?X(OV zcIZa!wV5SwNDj4rU_7?i5tiPr%Nuetqf=09C4T3{TX5%AOE0_zA^{(`+ZgAn=hqYn zZZTfvw6ILx15B(O+nvmv824h zh>cTFIP>eochAx*INeg0z2zuKY-w(JnC!u8jTD>Pk5eLMOq;26cwfXlIIuseE8(H_ z`uNsjA@1=`zly#+wPJolH{brCX~hz0JMHMgvF35{V)iA+yiS|j9tmuNE#&CIjstr? 
zRp}X1MYQa@e3)h)6D5H#)#dw{3r>)!^o zo;6$U2t;F61i;OBfyM~kX7}MbyOS1drx=>S@}`r}247iuKLn&LplhgCyt!cseD-2fB^;^ivy*04bmJ@}6H@KDl#Ol~><{ys{PPExzTQUm1y;IPp&5gl1nu#j;fu zoU$}_JKGnWDjoi+j)R%~Ce}&lWT_6C!#aQS^UTo`Z&a8Ag@SUZplAPVdyQL?755VJ z9Zv$89@%G?-8nkZmc%!9cJ5DCuTOW*$#f(7i}w{*3kb3)wHyP8~)mTm|q9U6PyK7Eb23 zJuh*MLf_7(mgDA0;j?qTR`@)Da9@hezDs6^%(m*P{Hv~^I)oe)mNmsVeD^7AH@DR% z{p++I)wQXnowNYvsP~|awb@2(7-QId#aH%pE;pVK=FxF=<<<`2Jn6Osu4HmoZb{wh?OdNYB)<<7`Z#R6>N?1R+2-2kI#8KVDl+a?^x4;ei*&bM za0d8>$S;~x+Z40AWv&p7>zwQal()v(ev&rd)a0w%?i8S{8?Z1dwwHOdhT=!yl5+DI zyh*q3rQdeBm%qO4Ug2qnv{t?2R4fnas<1Ykg+WtgTafKD5=>%KFP&#r4O$Lo3aYR_ zyC?6$(kse>IM#U2Fj7&1s4MSb&`4Qgw!Y3x9IM*O8l(Q-`O>LVwD2r_0ehX}i zf1ApuSaiZ>39C=uyL(E}!v+Ko8#sxzpHL9R2j@MF^(H!%LLtt@cQ<(=Ic|W*Y^?Y7 zI)SmCa=uJ9z=yj228Mm+ct?gakvFAL|E)0-SgoP}UF9cp%hl)ED`1VkL(Oms&nu|k zBD;mXll7*@#7S1eLz*aBy6z$Isoq&pTFCkoz+`QDhJpQaM~Nmn$*E_$pHsZitt<&~ zP^U5Fp=oTG{q_LBSbr|L0e#VpARKAnH&8-*4F$50UBtocG*is){wx1&k(23bE^a`D z)QIPH5C?Ip$pp2e*7aa-f*V#HpbNA*iS`GZrGispdN9e<@bn-X<5C%mTyW8_q!7vD z36w84j$K)eHc{2YK-Zn8CB+;leiKpI!0NG5_{=hmm&6TeMbkTTEpJ3c-D-YMH5kS| zVe|XB1(Tj_cAu_2FI3uqarMZjq>o@n#d6BCv4hpgNvWMU*QeNRH4`^H%YC5yPB(lv zt&jXt4O=z35q$4IJ$}9dm{*5l$M%krvjQG!&A);9d=8T6vs1CBphTG!Bb|AnOccw; zGw;xZ(wMyZ9hEwL0r%bb?_)@>h-0TNGks*FkMS@FtI+X&moOd=c7>hwD%+0s914^& zn%Wcdc>Z`U@F8u(lB@yQ!CH#mCjXl1veq1wx#pgrJ;ksNwa>j>E9zq81P0#T)CBhJ z>I>379sPWrn^&2*fq1b$FWXwGC*BiDl!r?lF&wgL^>ntf(v(I#f(DjO&Abjv|9*0` zS!3LvK6xtH~?prJE(G`EDYgvT{F1DTs=r<7$!`J+;yCnq|CohF;@!6I4S+QSj) zd)L~9=(DuEf$YY)7;bjLTC&LsI`)$qBK>N85lh^2W&CQVLd5^@kN}{~D3oy3BA3Gj zy2Mq_kP|R2UnVd+%j9>`CqQ83Fz2j~giX8~qmG@CH)t}D@&;*;kQ0{q<3b6@p=hPR zeYTKkDq_T>u?#|s3WZs5>@Q59?*@aViT~kXv^?9UZ)*1?{ z2hl_X{>pcJ@Z?+nO>37p!ZeUn6BUseJdC+?p+y)oO3-IX_@cwD$2vURe^dFRsrg1) z*I1rm7^yo-bEx9cJM$am_UYGa*G*@%IZh$o`?o(@h3t-xTY~8H)Vr(N>*`#sUm0Rg zO1)a&o8Es^H^sSPHuXDUW%d@5=bH=5`MJ;VRG$0H>lW+wSA)am?&O}t0M>%HzmuzT zLOIpGIKj216igbeNn$dgA&4xww(Jp5U??PybZs4fcjqDQ_>-i?qUw$ElE^?`-BQ^q zARgD-PYcR5*h$ao0XAhT{ctEP-`C 
zG4>fELdHvAC-g5%mOrxd#0xt5Z>CN&b4HxgPd{4HA1#K2IJK+{8u(#iK`*DwMlFZ4 z!8x`a(8X?z++Z8g#&kW!h8vFWFI7I)#I((6*QMibVDr-3)j(F>$0zV|3|2N{E742R z_Z-_;Le-}-q;~9`4j=1kdZy;;Sjk*uc`LozCo%^1&oDXbqGZoQ#d#Myzcyaf4WR4; z@qp}u(~b(xFhmvSQ+jH&)NHNoVb~IQN51=m_uW+#(je7wMxy`bOpPk@VvEex)$wYg z%L^@_o~9bkWjZNGQ#Q)Id7ll;SHQvGeE;cQzX~9J7=^woduzzG-v7>Je?}A_S&k{YRnUD%V!sG7y$JjT3)O zIq>^=mF+hGfQzZoF`Pa?{VfAc54kk0;2-2j8=yejASA51s+%x$BqwdoZiyyI$)+`QRBAw#5x8sR*^hK#M*z(3~m7DN_ z`X#rstMYJt5z?dyNs&o)$k07mn5UEZN}TPe%n?J5q0SEX5x3r%c?b>zg2Livm(P{k zeQV<7y#PMZZGZ1f$m5Q7F8Vu3?uYCo_pWp9S<5wxHMm$X^Ui+ve)itK=TgGA9%e+}8`Y7Dt0pmX7;tnjZXG0XKUY$Ke6nKi%sOslb`phk(lN83=%mb z^A_;Hm=)bns&B9*#;CNpUWa{thloq5e7{S6@}qt9EitC;y7Tcd7K zH)9Z5)SP@Q5vuRhNyMg>S|mEH!c!6mcZ?p7X~ZTCwL13Z!AeWzc}mqO;rf8V;j>)|v*+2~QwUE-HEN`bcXlqO8R%_rSNdBn zlHlO{u;=BuO|YsTMCO+;<&Jg`i8=0{;D7v2K09~c`XG#ya|*SlbdPhf&K&twV95sG zmz>*a)jDw03T4%~J?2b8^(VKSZjGDTe709wnS*fMA^yoDr!KV`RWs`IbL@>K1&3%_ z8~YwAk4{aN_PVUBlTELrE@NoQ!qvcIBV$)^-}sq!@}M%HY2>@JY8L3n`OMcU>PV4b zqfPa#_vH^t_Cw^bC9A&fh1%LsNZgJ=J*^Ip(yEHg24h0(tepd@6>Rv>@w?@#Y_&40 zMD0gh&wfM{Y;KfAYf!XYT+~!&P$bhuUUG&zCMU|`xt#SUyjgxRSYA4TCLTR8J-;}# z4BPO_upjaySI(|3B37;A?3k20FPz{;ksaGd?GpbKF;wDvC5dR!>5goQAF?JkjT5mM zuMO{Pye(`I>`JpVg%fTEW6p+Ek-53pS%Y@q0*~F?t22=+f!VX}TbX78$uQh>U=Vru zjq{yY0GD$V6y7Y3>-4Aa9a}yU81aPJXzrk=BBGLPF!HY~SXkg4s-22n0`o)tksC&% z^RcPV+MK?n+%M2#$-pWRRc^V>zlDoVytNDoPFKtBd#Fcx3dBWyy07%ghNfYY)G5{+ zO;!j#8Qgn*2Xqd-FNEk{>x!#0&NzxY+@CID6pH7{H=J{Bq;=z#D(`tZHV0{%SlRoa zWISM}CuTCd%dY8G>BDulZ&z}&Za3zdXnCr|h;7Z3kglfn^IAo+w@V?^1i z1v*bi?=Bt>mP)}7$%oT_d`|--Dn(PEeA5Py_aDg#N@X7LDQ8+lSRAKqe`4*GjU#!0 zsM6~NA3W(!Tsdsbanx!N$k7X@2T95F=#yW?C=77kRD$UzOtn}rRH6J#^s2OXSEpIR z!v{<-2H4Hx7L9-~@}&MvnaOvM+woEQhEBv&&jzO?ETns*!N!pS|6 zYATq?wszdD8!-lXFsWPGLnNTyYtPhr(*t=rVYWLb#G_;TUDxslTOUo=qEdAkgj1>D zWjxoegFLvE0Q;1%_`2UUjs`aT3X{QPsN> z()KO49#KqKB!~P|ROpk0Mp2<5g5wopZgb=N>c#qPEMKlmumsy@0|Q%xv4r}nwq_%8 zeldmjLg-~HH^rsHFZ-1|=*?YsX4b#+k#ika^)uUrKkWTT3p#*5-#(OX8p}?dHPaDP z%SBcYWq1uGKZRmX99+JCANFyOIet^wD`|i+ES6QTEgJoMxj3KN$JRe- 
z9m7mLPZdj|ucmOYGQQ1_)+Q8j=#f_YL#U7jx-48?VRd|(c@moxbvd~*9^Uv^!Op6$ zIpu_i06RM7QxUeFdG#>FF(d5pi^;QFDxn|KCa_5flvB^lp63z=&ml|%kdB}4w=WQb z1M8R#hn4Sj({SvRmB=)5qfi}e)V1Wd{y+(ylv$P zP#nA+QLwD5ScH(5x|l|9OW;(?@Td#4p){K&rJOTDv!w)jQ@Vd7P(qGA9RdGvJo}!nkc;CvFdPPg5#^F3RC<85^$&*Q$SLe~(wZAq>2BGc$o#_~0Z zqs)nwjNExMgTzAgf?oW#HRYoZrSZO92mI03R91r~47|bD*OU0=Jrib7)h9`Xp3#Rf zW(9q&!by3oY*93;^5vFI%SLD8DYGWG<=35bwhq6`Nzq=sjcbjIFS@5kS-l1!#W}kW zPcSPAosA(2FPW`NCqY&DRIRvip)JiQD|0fjT@a`Jr^$H&3p#~`_MW>FI9Cn-WSl2T zYRQh&?`@~PlNk)Xu5p%)o0B<2p~!HO1ioaoaFQ|KMDQ`(F=IPCJeqg6Vq9#Wr@>dq z*T{G|kakMBI*`QtZ7Y6DX2eBZPNr5YGIH-$hkhH#4C{G)Gb>QDh&94*d^^~C{k8-@j~#wei?X4^L1=JkXRj_Z@sc5 zkv3DCmTKkKkzFhV`TB^K3SWt>ezowWJgj#88IIzfNjGV;2?S+wWijBh9@!F!Mf4;3 zqgE5)q~K5lQ;&qM!`2KdLglot(< zBk?^So|=-$85(R_(6S%-A;uA0#tI-!_pkh8THQ-6}`MG_UZ}VHL1P)*=4i(!Ec?|h4VCb_l@zTgz!I; z2S6?AAk+HYt7oxPxS<2~?v+k-`upc}&m0R{;H@YVeq6Q(EVa37`cgPFKV^rSM!6TITUZ3 z;nM16F{bzSxjG3Fg&rbGaiaN%^4RUopCRAAc9d+96$-B}x62=p=mxGkjYVs$bkcVZ z&sWOM3e;q}&AWd-T90;12#nil3~cSI>qRGOPP!qhvhfw|03fOoM+3q{fkjOw&CJ{e ze@kMl)TWiSlicx!!}riwkmE(g`#rBrv3vo$#`7SAEG@oBiKdTGFKMt-<}Rz~C-h%; z;5*t2r({#QeFWVT$`pR}SQ|-|hbn~si`9x`(qiwkWhC&jDu~@6zBX`Cr3>{vTqF^% zZC8KKPfzu-OV(~CH=UY^UbUbYx<$$U;DTcS=b=*p5Ed+-aR;%<12 z(L$$_M(I@d01|qT$XjwPFLkFywO91myOg^4%h2=Yf$JF*B5z#May&psA_q)e!}2F| z=yuFU4CuT18NF%-V*&QWrXh~k0w(2HK_2~Ls^FH{|6&mKQ19fw*pmLQ@$^12P4GuH z4_bHpswoqLuAObu1HHPmg?>ZGd(8yRfrTSi!NT7!;?IuSk_&euTOa1fD6h)&$A3UG zgr&^*70N%R0|(}*g;K85M{KL9jfboK2u^`_kV2O25qiNTIdAkf&z8ixR;|_EaRb`| zeH+}+;O^I>1+l!YE|CFPH?=GDru;SN$;V>pM7_^7RfFNlvFGxCK%djY82#|jrmcbn z*w?%1^+rw3zA4F;$ItdKYEe#$@b16^?`_whJSt6lMSR|k;;8lRQi!wdagxDyj!eeY z%d5J!nLU_Bf+QUKy5WNq#hzxMI>4}agW;FDlCyGitTDychu5`pn}YG-W=8dI03M~^nA4^x(l%GDJvxG2m7C8>DJgO67$+U zDfF?P-t3!a)2uAR>gPAk(lNrS8N{pe*^1h{EGgLjNr07c$#vBd_ZXMKxIh-+u@bv5 z*|!X}U1IoGF2fc`*0k<%k&5OGl3Wt1{Tjy8nT)2ZiQnF=GvBaSXts7V3S&%19i3Ht zF?Z6Fr)>YwFiGQok_FM~!CjoRZL2_iV)nGxr=V@JAANn<*6A_BLnMh&rg%-0T(!K< z!|PSl0=-wMVY$g`oQ7Z2@$dvoTl9zc^NjCbvTr>r+#YqkD5td^of7>#0PEmd+sIAs 
ze3xzEM-q8=zw+-(P;Aqz_K+$*!jNrDiOKUV5wtq#e*4l)h2Z^02o?=itz--tBww`% z$z^xMe~*0PqekYQ@iegg7}5ID9hIP=Uxi01z^vIgMzHB|10e z*Z%%!Z&MCbm9j_eA6hl)N4T9&YESE{&|Cnx zz0RjL+1=7o4#+b%)WWr@`fsboiVS1}E@vfSN|Sm&8PHer#|y=u67gn9De9`M=8r-@ z@`-k~uU8esUqS}*%`3gPEhO9rKddKO->0Dx`gHpWPKc)nyZDHbAH&*kO8zuiTRK;cl?fdOXV2v&8qa$MCx7~5=9e=%g}@>(&4o&OdTUeUojnmxFq z+M4jNE1((!3NzTdohV`HGFI8I1dlRWMk6@4*n4YwmiaJMmMRLve{ko;qS`+pTfUff4e zsj_QAxoU5pO^!UX|H)6j+xXV?SoA1%ZqL&w`!UM??Go*+oo3Bh3*e@87Iwmg%A*fv2(-D2uU0Mx;FM`SB%*bS?jsy(n=u<~4~OOgN^D?p|93 zHv!x+p4Cw7>}2~j0feQnuN~{<`%_|t2YcSZS?S+=roWRo-XNjmo$iu)(aix5G2qSe zr^As^@0C(54xfGSIZ~TbE%~cVhUkUKw}F-kWdzV5QQ6O!e+^VoJiz%StWhWM08Xsl zP!8r%5wV}-e>_19cHG!cg24xC26SH3?~Hz6pHO1D<(&AXRXQt9{xLXx8ZAEF!)Q|$ z@Ki-?opD(SGs1YrBkLFKx(&y%#IOdp?X>CXFZGE-YbMydiC0=(VlG*_y#m&7X9b@> zoAd>H>caMb1yTKTFqT<6r;E(9pWZo+_nUxXy;Zm2_^13*RoHZ>mrB2x>MohhWo}6P_rhQEmgz~RMY-DT|>Hdq77~-Xo)*GanRg_)y z3A?92y*A*UY>u8i)_*g`6a2iESC74#c*5D%Zvx6J8!WAAFi@i@V4h*bFaNWHs?#Tq zl{g7e8PDw1N_UJd6fVI14K>rn8cR?zKqH|jxHw%g?en|G)9%#+Jlhkj_=8>Jdn76t ze|w!%Fu!)!pE*@pa>$lrES`rv5B(yn45aP~3$|3pwa+vmoPds@A&?O3_9MoHpLBY5 ziOZKA+mByF#BXWh=mm9yt;p5ssygz9wxhuN{`ahk+xVv?W-P^UQh&?w$b90N21|mL z`#B}7WyjE6fuStrW|?tZ<_x=E8IMdXKeZ0a9@g?*6y+)n4b_b*xwrd^nPFV_ot0aR zqEuC;`V&BP&jR)TBMUxA{?OQ=Ww%riH3-WO@WMouMmnNUB1^ySp|i&v6PCfzob-eo zIoP63qwvsRqB0|am71I{#Jo&hF@kP(^o=3d7Ol23a9yF zfHCpz<| zxDEIx4AM-n2lPlw{&e?j&J+H8vd;VR_Z@Z0r!Mm<4_|*6)VS=pFHRYt1)1uego~3! 
zP8~GvH(H$~(wl*yxVS?3o8pOk%_Zr)alLoO$8jB(12 z@_%~YpbE%Nmy|aRXniTt`k=W2)oGyI<~NxLFC(>_9;^<8asb6ZYm;Cwrg~{&5apZD z%JnD3ctcazx9>ub*agck zELRx;_I=b;@pj7=M=yPnS}(d`?ewTtXfi8y2BB+C zj~N@$%mzS>foVRtbS3Euiq1J)4uQXpjo%Z6R$p~PZb{TsO|!3QhG+_letkd6>?agL zYhLN+2mz#a; zEhDbHX5E-~45z!DKo|dYW5XN#2^eabsj2CM3N~oh&+hlQ_E8|T$R4n?`(ROz7wRCA zLTIXO>c?W(*?og1LVA2}M*~VGBAE zjZ_kGHF=@go$?Ekb>%!m=gVqGCGw6;&*Khn9rI!rcqQCRsfPURXwgXl6@S(=yr7Q zK7zC9_U3ZRsYW4z^Q4~(Udt(AVD>H=b^hf&gB9zL%w0`K;IkeNzrvItsGIw8(NhP- z7UH!T>2}r5+0W7Tw^Sc-EBRCP^dZW>*fY9^jD#kGyz&ErEqlSDXA_)KM{JIrQK4Q| zQO?1~FcZ#>f%cCiWrFVD+Fvuy+#YIYh-0CHl^}mTmTNhN(6^R&--k}P{U-YzZat`Z(+WWWPQxm?xeN55MZr7O_BT`r(%KB zk0dB3rG@b4FNzroOgiM>hkRQViZvXm<@y%fdh*2Ps>sMf^aqocY1SywKpy}8=XCW% zDdjGB+DujEl~n*zB~c$T{tGnD04y+HT{0!&vW0uMw6OYnbv#e~lLFOR>q8)m0CvuC z5J4A;`vcgRD zmJnuIOlq8|PVX+_u<#*;&YvlZxF*^uTbKJ-*hHOXFP#okMb@y;0TePHl4FuUbXQ8q z>*g@PX$q}`EbVEwJbu#sFFbqy$IOL{B{Bw?fE1$H(3pQqPfzRsf<_mp07B3QOW4+& zaKpyxI@vh>K&o&Qoe#$+IU_wKJvL!MTD8s)oUVSz5N9pPQ|`S?8{_+m)ND zcHv6l^NDc|^Mk$&`;VYcnD${ag0M=BZn>iE373eUlTgPXq-5e-KEp<{xo4}C4(|Q? zaMbSkvmDeaFuvtP)9GSbx;tIN%R>!Tbim>V%Q|~{m=d$Of~w^8F5mUlRzb<>*2MXWd*FPk6~tl=S_;RQYq*2AIyLfQK7lQDiL@(Wd1%bQ;MUZH5Xj4^3;`KB0< zubEVwmB^g7O%6c{3jw#DDQsYYecEH|Vmw#F8m&9_-Edi!wAGM*UFBEkkNuWkho;i8 zLn)b=PO7o_qj{=`I0$R5!sO$;$4k!rjA^J9BUR>%oWr*`nA1DbhmO{VUy=0QaOB?`z9 zVb=eio1e>kvALS;VqR{Ae3#Y@i-H>zam(@Z$!qEZc5UCdU1c!aUN3sURd#DoEe8>w zU@N8c6U)c`+WjMEBM(GYg;5S!%Oc&XOvu&c8~PwT%0dYR(wA0Wiw+YqsrvS*YNCU4 z0TI0j$^*{TO3b^vd+%4F;3ExVmrRS4*3tYXAW95_ZHglO_@g*>Bc&{ke4*2!MOkx@&-CdleLWsY2bUixKmlB$;kc3F?wFU*uZ{ij`l|Jf(_DuPz1do1{c zhVgGM3hs6 zUS)}TB{ipTr(lpxz?c(%x~&{gCjOY`;E;8Mdad*jY}CtXM#nhSN0Y&IU``SqcOTeV zBvc9FwT-I*ilB#!5B*-yQi!tEQ~<8P#0x1^+c?YZpRVmH_;#2Mv3EK^)h-^ZZpS?oDCcm>j(_huGOEDHgx{b4oZZ4^6uuik zdR5Vr7KOoRv_x;7@)%%XmNAG#y!e3Nc{^}5;{;a)r&aCrvfL6*R*nhH z2M?b(jQHrHMVOkHR4$(^=@#u`{;M^&{}H5|Uo9+Tn2wZnsmRqH)U3KUxI(NGBKm)! 
ziXD&;i9&ZH_Zy|^QPhX(91#Wf-nmffCce80+P?V@3v{lMa)mMs^&%nT3_(wRO)h+L zUg5kA-QCPDD1MwbtH~VyD}64d3c54V9BU`5wFEAklfB@@lhaCgRTrmD6-~o*4fL?g zWBm(3o_0&y$wJl-2EE@qvb`IwgDGLzmT6ues4CXeN%)=A%)*mn0$AnaP^1lyJA;E9>K*C3JNA@?U~fI! zw=lJzaV#4OR#aQ6A?S!4B{KcpG3{H^OJn;kJ z>=ytLRcQilIO&nS7xWR4<#aU^y!WEFR|xtSg{nVJ@TiyNr<~6MU;faj#{dL8`{s+B zVRno<-{BG?Q79oZZ1bbjrZpQH8k&}RV~i#CJ42@DrHfi4hSq8l&EQ(denjR@k^U(r zi7I>-0ehwhcxAou@osP2(-a`Cdo!C`6wd6nf?a2YuVu64Vu!WTcP-RcR4@%o4RmWhRo+28rQ$)8mA<3xI>RJ70Nql5f;E z`1SAZh5s|O{jbk_WkcPW^bEG!zQQCGf)qu9o(byg8~RF$h)pC3q9@q?mJ#Xpm$ry( zne2SF`ZE$Sl*jmLbqL9uA?iNOA3Cohrc_OWtcH;UP*kq*yJR?S)P*`mZ{ zH&okIIFZ4%^bL&&%W|71Q^M);{`q>*-Pctntz5-rC--dQQ8>2n(K0+&Ep{7P+>La)h%2~pzJ5(Voun&wd5Ie@nyyvh@;1rxPDBb+Fb0m@ojg;O z)}}1Wntu4Ky5~Q0=w1~{cvMrn^IU5=sY}pDs(PMoNm>pj4=sUB@QyHo@whU_yHyb? zzN9NHFXz_-fWlS4G}Tya(C{Y^ob%a77$oT+#!h-k*G93Rn&+|XnPn;xQJ2h06X2};%}yD;FtiBU)B)&5U3N@U6f0^ zU_9Ty-Z}DiPh8&d_6QJrX;U2Z3=0p=mm>uCx}6nf8!aAUVEC_#5l#HNIq(atH+6q_ zw|-r}6Yx3-EVtUp8yuni>4v2GN_#0&zV=Ezll9hj`?F2a ze}2~nxbDx8eJh#?M^|Ghg#20v{@kC(qM8Fi$%gM^X4HRYr&ms=eBg=9>KeymGu9jl zw5TY>X-46c_#0UR@2VR$$kF9A+SSWZj*AD=!_csaI!~BZYH-T+_1lxZk-z|Aoei=b{L66BIvbntD<6NXPMp?`2n&qR4S@#DQ^DW+Wi@y)acC z{`M9o=7%(e?MU{7m`6dcGh97z5Dcdyqyk$U7X7uHZMR@xc_@eKq?Gbj*Oeg zhGVD&)qk=8y!@cZcJePB1UJDJV5ZA<@;tBXTs3NfcHx^yA;BIWpihxVyTv_J@VA_8 zI-z;sP)f~S`@-9ProNf~IezHBbe{WfvUwn-UrNJo^)sIzxjtb&x@Asd!7R@9w@PpM z0zPxlH1)EX_H&F{QlhI8u$~)E@9(l zB>XX^^eAf?m(y5}z6$Tws~%Sn^w$vvO&4 zl(12($o+s|ja{0$@$+_cb`_A`Sk2VFGYeHKwozYec36C^+w{K5JW?%N4hZR!tll>Y zAv~;0+s4D-F#Sc)Y&vfpGE=C= z8qvuo#83YoWCty`Hw|Tln)k7YTQ1g;UcOtBU_Q$hddoDPXdK1%r>fGj9s>mdOW7{j zyChXe%YE=~rrek}$l%RG#I?8%MXib=w(XVEgIF)s>$Y~Sk1F?NNs-Q4JY4o2nekYR zLpSkmp0v9JS~v8*)Xxj#D?Sx1$*8bYvenz1v@aba9~|t#5q-bO!VxU>#C6EghQ0;KYhA-(2i7L@w2BJQV2`-oT%)+iF}+l zxuC|A{aKrj>$FAlbYoI=)m7L?A~TvkJmdb^7C}wU`@nUQ3`zH!W$or)EmtplOlDNn z1Ly{zype`*`Gf^cZ9-&t5b7_jui$8vrkP4ds3_wL?oU=xznFGxP^b&lXnCt%pJ6A= z^d|SkYB!c7HsPNVR^MSQj%F~&`cMH~Z&mWXI60Y&sp#d@^Rih{OhK$v#8&cY1XkML 
zbQ&onk0-Pgb1(1t3;w|Q*3LCE`1`5e+aHiePMY@Npsb55OT_z+N*v4Yn`!LBOw?ya zJ{RU-h~=uhQqR6A7T7buNJ|D2RM$+TDQsa16wHFF*`H+@NG5W8FB2?N579Iv10nRJ z0?xrSNF@@ViUFyL-NYl{;nQ3&dKm)zS9bNAXlYu7^%yX}<>vt1Q9uWM`B1xzXqL#0~ z)9Q1LZNh~dha3i9>7Fmp8id- zr}~4fIa>fW~r+A@Uk@3 zSJ3mfOnsh@X??R4lu0EWbG6T?wdUX*e2=u!@=Pi1zzLD&`zry#al_OJFUy+CZt#4T z2}#`G&)J~wR3gC-{#`!FAad!FR43cGzc*R*k_x*pTjTNSM zk!|ebyoNE5d@b|pi8z@&R~@O?7rtM&HQ*p@IZ7S$H|#~Lf2+gbv`OSXTz750l^#{U z4Oybf^}#etHPJ&1#fg@nUDxV?yN2@zRwSx?{W6n})rD75)?nr8uOV?<1~K5^o(>MS z<)T+WnM=; z^dpTPPE%#3(&_RItm~b?`DdTYw+(rv042C04BoEM<5 zHdn8@Ck%t#=8gL8VlF)OKZ^6p4dayNkMJ>O&svRU!hEiw%Q$!U-@oyXLI`bV+LtER zPx#b|JYS_*L&Zh4hYToAyYCz|Y&K1rKt`XO(?x?7Lf#0YzXq+Y-o-woHOdGf#ehA} z5r(A`UMj3Nq9j}&=@g$UmNZ@A8Z0nK)zcttN(Y@}4dLW$hGrkcGO_ygC&TCl>`p_X zwrAd?QMs38v4u=rWJTH!!{R4Boy zr|Mnw(3KSZ`lIDLJ3t$ZifJ*X6d35oAfHhdthUdVL~n-8kKL|>RgR{=b1I-yTgg5< zR->aqw|40P_I0-Trbl{JoAAp&1wEDSi$Zt0^#d=`+WicAs5DO0_Rljl< zrl{#UCwlLbA!*sE=wLXN3OCyQXpP4cc`6x5cSUGsR!S2tTRPgjg;YWqX}GIIo}brF zr8P{!ILU7sTk*}$J`h%OfH?B7o&+kdaR3|+r}DGHetCA0y8QyXy*ptx5Ew_u5{rp9 zM-2i%Sj27AGL7Ytcg*{eo5-b`pJC7Pdtkq+VVmTCwi8pfC2k0z7>@wzuTGu%${l)? zhEDY-tz31Z{wu>~3UfpqdB%@9Ld-cvz&JNls;Pb)frvQOtc%>gYZw|JZQ8GGMEUf#bk=!2gyxQ8!G z`kXnMtpfc`Z?Grr;%Iwc6;>>K-4HI~c|x`kMlsW3`3Q>q*4p_nVjfZ6fu8$n*`oUL zt4xWK+Hwx-rk`|L2Ggj*{b9BmBD|EaAd%z8F5CPc3_Uj)6K}6~`jk4x>Z_P5{VlO> zMBqc^C2JiYvArklsa9e>4tdwlzbdmV$`Y9!E5Zk8or-?3B2La-;-*j_`us?@y%uz-l z?&NH=Fzb>6$0qZk*8a~9hP__JoT#ZK^}QZ}ARxz-^>W*tqbqmka5mli?i$H(a?3Va z4IYe5n!C25S{*n;^}gAp&o^myG8_AxS6#xr=sd_okMigrS6!|60tqi7VVAc0Z4jMP z*{xo+^u!!j%tvD67~~bV90^qJY_NHh5c)l_1yIvYfO_`)GKt?QN9%y)HC!cH!W0jk z?#`tUtl2JxZ4oP6w+axN12HeVe^MzyjK>vuSZVqR#HgBSt0^;QR1g(#5qVbKO4KSsBkvHiB6-6! 
z=9*boVo;fd2R$J)xfF`?^abkiS;A-UQyY)@kn0$FWildBpGIXqyOuZ?S;QGl5i;EH z#8&ZM-GA+s{$IXcd`$rg^YEbIY8UZDfg4Q>ho}x!VO&?Gd;XT|{gvE47n6-v3REbt znwJ+99F^wBQoE&(FnoSi-%}k`xt6N=h4Ulm@@IQK=gk$60oN1`yo!)HxL`Yz)SLYH z&)pg;KGl_GN^pc53OSC@5dA91QvO!Nu-13uvCEW_bjNF%^?a%khtl16*o~x51-XrHzw5;n%44-IHn(Y}G^|1S1go3^F6(t$Y<4|O(0STi zk6kB+Avj=#^(U*Rd%i|0pd^2|?|C4bc+)`>8YWMu*kfcjLr)DgvTxr6StILSNJIPLquXi`Kb&Sjsf^w?xVcX&VfO~t(Cov9r%23Nh3O9-VLX=~ zDzFN01Z2|UDBeI6I`LST;1Ep;Kc8y zG0*T_`~ZESraxbBz2=?A5o zkK1pSa0SHi2J0>iOLiKCxj1X1a0Hn3%ronW%E^>!Y5XwcJaD=sfQy4;Y76On&G5ep zZU5xSuNkN-q1E*6gVBPaWOXfqcUNZb&o+g~M*4Iux%cyKK|6~Zi$6L&V%gN^-q0mK zmcgA>7I`l%_&u@8{=0OV*Q+}q9Q?zj^)=jf^U}U$OzYw_zN*q>^L74sXugC2-_;U;Q!A+FlNxY=}0S(x!+s_VAvB5!^l|dUu&_j4t=GG3SBgi? zPXkgeE!J~g+Gsv@=G#GdEHeQqzvIvR_bMKD%Y>d&qTj3cSX>%V;?ok{Uo0_00wU|U=9top6VIyb9LAyru}Q0 z>iOP$i|DN9ep3wa{xuTBp4H{Q26Lbo2MKxZS8P7h-=Q^X&!m(Q;jLq&<{4Xv!?K~GM%yoxmzTLgnbl5 zOJHffvzkL|(QphH`MR$>0XBA}kYq4a`+&t>wKnICEa(s?l@I7mSGq~=?Cevxr8YWC z!|YyqEYaT-;--~dx|F$uC$JsJ(_Zp-I!xuBJgAF5-&5OCg8iV!|s z%J#0|BuMf037o;T98UhU4nd@`B9N)MxsXjYMkDk1d<6$eIE+m2SLtQ zZP%aSz^p(ORHfFfAMSe{zm(H{$&h9Z*)(V4cS@YEw+_G*r$IyxomA@{t9lLnc6KrSjO#Sm_ML zFsc@qvr(HY-Zf@@?<{Z#Z_+aRLs+kC;EshrX=HS4ZA!OkO)b38#iLC7AfTQH$Q(?>|Tf|E`2*#)p(GZzvwiY$)&HD6PIX!|$DTXX2c+rIq!b*H9Vd zXuZe5Nr@c?Ag5QU&9NVb-Qy9a6;ySeuHJt=nm_L7qAOX#s3d$jBW^0)1cPcLN36rs zmQkG6I&6nkUP@~X)2^4R8Q9)l2+U22GrPyBad6NF;0xV)9%edX8cMi_hYtcXw*5OL zSy<@A%48*tG<8IE5 zwD1lhz4JdGlXN+Eg{(U&Wf2{DLH);0Yhw!7Jy~YflBB!Pxd!msv$YykTsUq z%YZNyg<9F90J0dc#-KT-NN)D|3FubPL60T=GO~J6Qa2 zM}=#ygDl*h{ob414!dF*N?UFCD(dSqPx9RhsEA(kKOC=ovlz2G>ZsVozrD1MvInQt zu+;cIz?O8p9Fg~_`=kwqW^INtQL3yFLB(}ziSi=6yenQaHTG#qic>c7$TKD~|Cb!; z|NB3fG7Dvo_USmjA5ep7;l8ew^D^ow{NwLjZO6TP%P;r^~nS z%MnW$rtgJy)h+{^-*JQb8kw4fd!WQ5J3r=Q*7!!IX4)Dmyp1lUbQHbK34?ejYRYv( z?&D(uG*`LK_uvR!#h<8o&T?qAIW`RF82P@kgePmcNFOads8KIlqODZ4n>ue8+jk54 z!M{z8F7l7LEs7a5nUh~B4X&!rqxoQLCC7oBt@9C*wHyT{<*RAm1>qpjO8=9|B_C)Z zi_Nb~)t@TpHc0;_7EZ$Nr~=?)jPZ-+&?%(=XV4Hr4x!j72rsS#w 
zLJ!}SsD29Rs$@O;CgH`t2FP%1A|i8^br+I^ZC5R-v9~R0R={NeuGU~*@}~eW ztV5cBk*LVq!{Kas@62s>uK)N;{?BDRC(m8Q zB+{IN3Sb-NZgtZ3NqTjV$m)R5am@@l&EoMmv@yn^(j*>R#pC9rJgCWNC~Y+gJkpxN zJx+$PeKTU-M|E&nA5u7M>4?mhu4MSz<{9#1x}n<78MngE=*UA0F2NhJf;p%wQg2#V zxZ#+@9Jf`1QZ!Y0>-&4}fk-H)_rke2Dl3Z2>-d~p3N?w2pmYYL&|rg|TvYr*xAz-) z^zY*ujvEC}cUJ9DR5e)K@sW{{C2%*>Q8S%oPy^#sZjSrwW-Ka>n=6uL&D@SXMbj>J z1{?PqR|P8yRx02^sp=usnZO9_!@B>~m*#q`_adQIn{7LU!G zwuzXZSXOh5b^6Xjk|A7_7XJVFbN~9`|NP+`SLS#!+(HBea^F3P7w>$Bkt5NY+a>cI zm1QdNJUKkYV^D1QT$g{jfV~Kc^prChA{(o!TM2=ZR3}E$(odCJ?vugQ5&4?Og<;iH zT+Gw%TRsDNa!C)SiKb&}I7j#7^&VOxxPO;_c1fG}5+1c>n7sWE@ZG%NT6jCF$g0}N zX*utsNQ*MgVk$RVpd2!$*uDGUf#MZ%a(}XYrTw^)4WHjPTx85r^a=k!=q+1CgvvWs zHEUKpSf|Y2UcG<*2mY7uV#?q5=^&9;gq46c7NIBTXEc?GL^6#R8BmL|>6j%CmvyCC z6|&o#N2E4oS4ZP{c7^Yz8Wf#-Bv=|)Nm>NaE`O$FwzKl}Ijbn&B1*F1Nwx}OytJ?4 zLl$b7^i1A)McdKw7Df`m5%5T^<8s*f>1>tsJEP8Wcv>7=$~^Q-5j++BG}rNGpL1WGxsfh*Ol{HqN$|gM0`6WQ4lIfV ze!9=M+cdl-ZMUxNjff||{sfG>Kl&I%D6k~Xm7~a$s$Sp<<=-%FX-VUCUDe6+3;pCO zq+l_Syf$fp$nhE~l(iNqre}w=)|p%@?5?sI{jHx?WKvS`JCsc)VNy+{kz=Ry zkn_otxX`m~*6s0BVb2GBXV*l1A}tArvF~P3oLb_wjowSS?lMJ2MVVMKznj~43|P8# zgN*xbr`%h;C3D(JsowKE`?hy`<|SNRU>3h|mYrv2A@Jn0_0?fj8aKr?9T&Y`|2Mow@t`+1;u9>uCPp9!#jkTAd&yXQw9U?qKAs z#_EHelG0qW6ZYvqek}86b-~IXZ=+JlZuN4-VQifrET5Ln>etF{t*;QR7sT3q_+F*C zL;M|<#KmWO;9lAB+?eFk6Sx4pzmSEI;l-mYsRpuXk34rHqLPdUCI!zdEd;x?$Y<8e z(|l0pSdV=+Bf?g1JbjGT+#8=kW_4+3l^mQ0bgK%*HP?h~Q}nTUbJa@{Hh$|(?jZ|B zJl#16{g2}kOZRJr@WR`2d&%^nr~vej;4~FE)Ea~nN~G$-Zps0i_q?%VQNMmEVEmh} zh7iiIkZo?T90l)q@}iDl%GtG>>Z!pHJ#?mYj?b*(`o!(zkbPIsxf&z$)ryZ?kVPPO zjEuk7)7hB3sU}bD1k<)hHW#Q9PwdFA=3cJ5n`85$geR?me!kT_Y{|RM5^hKRY4--r z&5`Ajh2h}r;xTW8J9)udk8fheSG|pv51bb8{_{7^z2}#e%+c)1Bic5Ilk6%jNZdAv zfrR2`g$!~l3nA77P9fN^?q`CJe~(4~w>KP9=FNSiR8uc&$D^J5aP?*q@@3U!guzYc z)6Veiu|AHmuKeLRZoRI_YL|kNMNUqxFS|>4g?eggCL}dlMdxPj;d-r3TGj7_4+|AK zE*-T)_$lUGZv&6R&C@<6J#K%hy4iJd^E-N0hdxm!|qUgpMI39AoJ72EZHVhc+^4QQX?T)?oLGM?#NW2oe8}weJpSGHceCZi}cWDqT@f 
zno5<9B8pN4=~d|+=`BD+!9uUnk=_FYLT`d}2)#o=1(4Qjg5B!nOE-NtE|SP89zg0qxp4q9(>fw(3J~CWUMlh!k1|O}RXf4U+`M=R2hvqHR1Ydd;keW6j$PgwM@S9Ipf zM|fyd(Nb+iga0?x$e_>7Un*K>+h-Fjv0RKv+`+cKtEst@!K4h2JTeeB12WpZj z)%5kwtsjpqHhq)q{$bZpJy`>BT6<|nIQF;QXCQ<tP-l_)QK@`#9^BGxlaD2;y! z5elz#FMD6o&SB;~x8qw5s$<^GEzZ3WE|-Zcm_mftR4A-hjaqcnSB@yq=bdmbiq*>^ zUtxvcubUX-#73~=Key#S6qJv|nJ22T#zLCMM(bQ{TSD-qjGdethG` zjoR{*@dzXK6oK`@Cjh}sXPidvT0ckHns3!Yd5s5eIu*C=W$3s+n^jGuy_3T?{@>qI z9cJ3mhIK^p#pECjF7CcbjE=ELM>rp%%2OyZ&xzHr<(c#_>3NMc_`JO$m$TXe(EQt} zy9|Hrvwrp3$Lk-~2r6(gjj6QDsv||jhLM83Wjj1sDlM$HMGP!d)F6vQDwnYu|=mV zq_c6rx|a{`kYGqPAL!@}8F_j8iI?9l6188y6dYguxi|meNJg70Zx`Dq@*nX_ikLSS zrZ{GqgB~&9LA+^;Y;-G3IYD?lRJ?}rI!$}^Ov~L^o7kwQX16X8Gc*2AZu;j}7E3vC zc9n*C*UYM7Oa&EiFryubUmmC$R<>i*VA6_3`NFGwF@@aJ-J|^ID~>y%sb z&ql(SSMNE+bmGcSUkNI*aDM&9Y+&t-lXkI#{rvNn1v(Wyg0^-Qr->-!z;$x;K{{Qw#Y! zHLa(sy)Qk(ezIJ4E(Mj`Xay_NfpRIkdDlz7I`~3x;+_nxyypHVX z|JwMJ>7IqZ15!d{xSN=O_$7n9yu9v`#NliR+YNvj*=E}){-FnZdg|<|<}jn7tBHw; z%`&9;`=V9*VwlZ{pGpL4D;y5HW^UJexA9|c-ScoZb%s;3sJlG3RL07!DlLVoC0hHt z3$_Rvcx%IuHC>giN#sIj=$JS0maPFXxKO>(;ResSuvg9M7A^NQ<0h8ni>>znnx@2Z z8##@a)(Iv#++!4`Y!&GNSN6!7aHY3R00#GoRG8WH=-@w-`E=>W!LO)!+i{wiUxR|4 z3)*ke(>T<&UQU4FXdUMFq4+e*%3T}d#=7hc54-{10=gvSen9$tzbC+# z))DinAYY`gI~>NVeI?G(oX=>U9aU2540#YyYUSY*a%IQo>nYrF49Lhr9(e-lwql0T(Y00}}-`=rtqQz*zFa@XOjS%O3o==l|YPz)n(1 z!Y9@;d#(GjR3b!2w~R`+GhtQz_>TDj{juS`^}!zaQ)Oi&-1ghp{GcUs%P6**=M2|u zyUNUmET3i=SrHfVW8wOH520!O3?3t7rCs1bF4|+r5+dX;m$w4$_iVYD6Hb;xrXG>} z&=1UCiV>+&BjRVavIA1TOqaM8H)z%uCl5nQd(L-O==tyy+1*=K#SI;{ZOp_6jNZl! 
z$afAxQoYYZj<3QgqFec=i9;>SIi8hf82vlVW*Ay;rB-PMGWDGO(iu$ zB5PgYOkj=l2=LJ;8NP)L(h87j(yCykNC$y@uvoS<$v-rjYZm}me~>2`9Y0zgugO16 z>by482#pmm=yodIi4j>}YSl}A6_Q%j@!lw@r{mt^IOzJbsk2nsd&OkfAR}!m3-e1) zX5^$InwnH}?!?)sA0lQFkUp13-YBmORVg21TAx^wuC*ltu zb5u;TD2BYlNc+38fG6l%Y5n%nR=1kniGP#?Fh50@-XJrp60Mc;J;zD!5l8doC?(G6 zb=&EpViC&rhS-}a1=yj7uXO2tca?#ignqF@&?R&CYuABu-Eu@qbUeC$4EFSFQ%6TQ zsG3VV|HvO72+q?Wrjzj>UZBrI#ndGr&%+wO<7shEA{hO*W%8_RHTlbR9GzWzSryI= z*~n0sc^ES<^v&5RQWRT$nWJxY)qA8q^4%m42rZNv?Z`9~sepme-eW?ea*ip(p9(5W z>(=$vIe6roIzVxuxPR#Q7@}B+07VwM!lrNaZRhYEX<%dK{MOyEh;j$)hd%i@{BI{J zxLGL5bIxhg`9Q)e*#t2gH*iBf-*{4pA5*wepnDKg^Z6-M=8Y>-f#5RD??)k6VnW6! z(fxHS^S$Lw`cK4y%&lPtasTj+`RxkUy~r>z^z``w>B7RoIy_A~JSIj69h;+5th`dO z?809;&;_6|c2DFs9mHqM#s_YB=(Kky5N;^XdcvZbw>UV$V+9@FWDPrJ3&Hw*9F{8J^l&~$L4xV8;R)(+Tk!x52rEM! zqT^P|N-OP%c%;Tt zxL~vKE2nwWR~zlaxb+5B)O*r%;c3RY%def$t*y6(ZdI4Bv8WZTUovxylEH;m;d7Rs z#?*{s$aLnntt@C#YXSQ+gFbXq9e0~bxl?o~*qdlzR=QVJjmL@+V~;oob1kr7aQ~0U z3oc5_f>miW_rK)eMCE(ypBoAgOb_r67`87er5aW(nyVGa48=ns6NqP-qY*Bsq1Ro- z7VPU8>uAHZd9FLK1EV>;taU! zyd4-1q4w6k7JZ6Og1Z!J7_GQlZ09y(o~OoD-D~O;AAX)#ZpAv?Q;=#9!!eadOUNTpIViso`M#7$6$R z5myGMaHGN$7;GF?RPJ(E1zHEhr8Qx0i_Y~`y6(5WkAD(RQP#wNN%NYp%XWBOnO&I6 zmhGrnq28VYj_wGDu8(EIQ??E)rgx)#e3q@sZFM3)kmh1$al_HO1$`-Bpc(Os8bgao zGS>89k4g?j$M-I&RanQh*;L8xhhA*;INyHku&EB$l0W9s)YZs8l0TN>yrHZBI@CF} z=J)w7QGOuhvl`6nfWG@u!H;gyoD-DZI~mFtAfUYY;yf}~5|?d}+xt7D77M%hX)fFC zBf`MvTqpBQtqJq^HyKlB|Dk{UF9s#>JV2o;s|G^IkrSLQOHNlf=PMLou0hgMb2Cyi zbB7u%a6B53BHP-Miym~jYMmUscL9VcsCR%R!c)pg1{#3Ihd<1)R=^tkWxr`1aV$kP?=eB z7uH{rq6JK_AJ(u7+ti7>LF|lBuODEdW z9Pa8H!sEEBq7xMrmeF$z+o@}0(OYxE8pq=P+Dhu_2iEHoYIC2Ka5G2?3GLoJe+<#- zYq!AZwsV8<4q)@m;8gSO9pV`LirLw!<*O$rcTGo=BeAY*T`NP@n90jOj^o5}A*{Jn ztVJGTMqq^7XwL^>GHZ15$N&DLATh9;(BxKS8L_setZdvqx`|wWm51+1&TtW{MXEcO z5E;|pUN2WnTP|Gk(Idl&z^$U>64hDv!yLEmc=FL>SjQOi4n-zE9PN^3wR>*GX2Txm z>NnOLy&lmXOClFlnJBHLpk zg?a_(px*wU_hh10m>C=0SkmI1wM!OD=tkhqQfU0%TKc?eKW^IOM|MU@M)x&B7y6|) zqr)K{ac*geLH<8FFuoPP zhKCH>d$Wtq?g_*3GM?_!sO@75l6#wFaMOVd#sHCn#Qn)! 
zvcz4HwZ#F4iXnG)bD8bF5jwjB2^(g%aAVl=^=rfP030@EMzn9fyOWXG-y z7`9I7?#Y;#xHyi8(^S`{(mKMZEh`ndSNWAw)YYSSJ~~)xd#OT-;q>jEW~}YoW|n5( z$`4{rafAg_?>yqDu(N&+OKaCC40c!+bn%afH0U^~4YU(Zo*E|SYt}EnZ4p=2@W?Of zjYj5NzSHIczpA`MgRi zl@avz8}Gk(-~W*h`I9FQ`*1QjZN4nnMJ8ECYhb*?wSQnj)zp{PNAfC;@mPfMT)AqP0Hf2wu~m9A|r3ux~pDlzIE5cb@GPF zd&`SzRD32ZjCA`x`)hLKt-Z}UB?E?&uS*No2QDf+FO>LzG!N1$e|nv-7{+Zn`mnHa zoO(=d=>}OL|KL5PO1_Bwg0F*2=U75!s)Q0T`FC-?kK=I955Zj*ecPBCm(?@oBZXk^9)q2{4% zXL8$oCH#`glXgIYNo~%Bd~?Y@v;4+TuO6?~O6#X*kNSd0MdO_m=p9e3B!Fo>=sIfd zozh3`Q5`rh%c4=~CH_+9L~`+j|Eo7So?Hb|0Z-ihco{e z_stA7(8zl@Pf|XNIJsTY|M|{_b)A-+nIa7NEcl7P)x;k9+-N*OfFG zDnG#EXZ|5W``;fkaFbB*gL12XqSBMh?T{Rgqd@4+5m z#O{)$28b};Wp2p+*LC?(EGi|=^gjYzzj_n~%OeWbFWmn>ih}<2qr^)cQ4&u->8e~$BRh1ft8{-q`E8q z%Vj_RK({mD{yH9q#-6en2a}v5odG11O_hyRpS?%+kGJeL__9jK6eD4vG9s@QY=;y71Wd zhZexEpYr!zJIe+fR~{+$*XC8DSR+?s3zP?hQE7z#_7MN;Uym~f(ycHw{%UoDr5Hp- zbsVtcTBhBu+e@zkfJ~VbXym;r=(u91AU{4l$0kfE@j3J<3N5)a;m}a1|Oda z7(%I>Dd&)h1h<9K-Y&xBm;3#z@BHoGzTE(?TcN*j>F1{h(p{jgzXg<4>Io#0ikW$4 zk&0pZLEq)VcxAcW%IFXz(0@VKF8l(aJ7FA;aoj(t?fdNrKfDIM`?`9e#LuR7{y769 zBjYSrPa6u%pt0dwb^|4GE0?zx`$8CcuN!v6VCJ`RzDvV*!0DwmX^%TX1Pyy?%QF;%8P}zdKDEwqvg|h3W|3*p7EYw zFrHbXgS?A4*vi+$puU!!so%ceVZ^OpH0l5T>6db|pZ{NOM$*Mx3P!Y}>v&~>B z7^y@H*9r_;ljY(?)~7?{o`SV{kToQyKv0PST#!C0PB@MgFx(;DCyCF#gB?w0_N|Aj$d;9A2bV-i#B~IrF7}+$C7tcqg-M2YzoL52G?0WHs zf6qUgM;mnz9XD%_&xUg+b@?(Tv2a2OMOfPZMj{i>J{y1O% zc5D6qR}eOVa5C{yxWUzbKbT^D=fQWG@y>199eSKJ9V;uE=e3>?yJXXqD6u}M7XUgI z^hVn{gXYKW2efQJze4L1=GY6(CWSKcFwh=#el;dxz}w`i{KccKNlE%K9T+%3sHK+kkgVK`t}#Ebc;fa!-^BrD&Bq9Bp??wi-fz zzVmZ%P|##~IcOwq0nHB*h)0USIt*M~JtfCSDvgCAkE-};uH^xZR=tldTZ_N#{m)+b z*E}7p{wSMv7k&!yi;U$pux^pzp`j0Z<5i{rJJ{+)E^-qL<13xX5p&0tqnn$87@=F0y0wJGctttknJi(y z^NKZ;n(=WR@iY`0Eq7Uo6+oHBDxkjby{b2}z=5+Sta=@Y`@lx-XmhO6@j`$>cIrGf92hNGFt|FV9>H-X00th{a3=t6N9hirvS8^ zsGN;5OEzqevKg@`COLCOhu(eXh4YjjpRdKYXP`|?R;}D2r`)6g1N>ogYpbmeQ2V9} zxT;tnr(q2>05t1Zxyt&+h5-OID`w!f<2E1c9NRodW{_7Sbi>svYYWRri0%Q#_+3to 
zWf1`l|LEludG~54ZyQ*|S%M1K3Nd5((q_B(OKedJXZrTi@*C2XZu2&Im5u;hzarOjdzwANd#`j;t0e zM*P^(obX$>ZUuRr5ontKY>_4%dChjNBMk7^fWQV)?iIkz6f7$DUU}M)qo7CcpC| z){NwCM>$XV>a?o;b*TX@I1XI#58?0KFU@-~Zn|J_0qzZ(u9f$_NMVXo>t%Y4XHf22 zA_(YX=QtX#58$?Hj^Ve}2Uu~%!*vRST?v$^Mbh{rjO)a#zK z``cA+V%;Y{d7OwYY<>Cd7q;GaK7Gm4sLy>m=&?SKEnP9}8DK*l$p+@A)o)Q9-$!{I zD2_vMT6}l9t;ncq_`uLm#~xemo7*ep*zhy70z1ps%mpEK8hN@fl;x3 z8r!Cxb9-ug+32Zprz?2*Rz7|7$eYHt?XE#4Pr7q+@QuBduK@p*;nGNpOlzDP{b|Sc za@*c_z#_SfT7XZR0Ufn=L;`h1Cmh)X1vUId4?vDe{~b_OVbl!aY~)T?Z*-oEb({u@ zAD0vZ%~qSsR>aALpIeqpk}eFub*z;%(ackhC^L8^6KK>lRlNk^7{JyCi0n4u&zu@* zno1CnpGf@35e)k0%I;UvwI_-!ZRrv%=Ouc@+H-)twiUgEcH8jVlmB2$nTpN=O2>|( z9=mwxO6lAOcS}&+`0rQkudEmR+w&X1fU~;$40C?^mB0c}cVCuzEMN6=Z-(MyZReSb ze2N96yHOuV@0c=RVPtg1rj*ZONxvLj&J7G;Aj#ssoggL~F-16KvcY#!2SXa{HkECvoG`KG-7axt4fUH)2`@q@~_JYre4(Vy6u?n@; z392{*8$3PkiVXwmHIHr72a#0r?m&aC6&wf-$8qPh)0gisWM_~@S|^I zr3%Tc%9nY&j!D52)%_U7>`E(-s`g=YG8~A#4a@5AC>D`UV(_$2x}?WZqK6fGwB?}J zUs4t_qCA%w-&$h?CSAo53;Be7WB6M)BDk1Yj>!BLm58FB9J*N8Ie;RRAJ!mV7Mu1x z&8=8}`W;t&Xi-Ej>h2~D0zNT0#Fp%*JGBCa$q;VE&0@4W#_;0NOC+aB0Uje~&X`Gw z-KrXo1d4rakWNJ0Gy+B%HSWH*Y`L1K9ReWlu24^Tcz~gQS&L3(1vesj)tR7Y&;)TMb?1R*AEvPX1NU8&Q?<9w+)(}D(H27 zODc%DzQsj#Jo7Z434{>T{}%z7pXc1^ie6uM#D=!ddRar?3Z|+}pzckb#@#10rPt&> zt3_0*x22}+*dCWVWwPqQhPNNjSRYF=>(ZS7vPP%hzfVMBu8Nz2di z9tT5^s2CnI<`F~(7BY9 z$irmgEc8H703SFy;Xs|{>y1HvrybyQpUd*kTUCHLiFeJd6Z z8vtGCXo&f3x&BQt9oWFb!<8YALwt@}D-o`obSWOlPFY%)8&dh(tATMNF~Ee+bGSN{ z!B4>| z`ld-o^U8pGOGGbP^||F}N!J#ORVRYo%N8`4y}YZ{wKNQn1ce6E)Th(4c8Rb#s-}Ae zEurtqDyD;_l0lnuzb(+plQv~{lRQixfRZ1Yr0CzUBj(R zU~}cll|+~?m^>2?P6|k%jPcCO%shGV;xqNlN~g7#SA<;(jgP_Ok34_rQbFq|gM53(a@zFt zbjgsJ4iFpAvu$6^#$)g*(IMfTjh-igkt_IM^N;c&BA!-pLR-5w&k& zLqc}rQO|Zk%X$RBnQxW9kINykaX@nItF6VZuuUz=C8z$zvIksGj@-$L2fZ{@#m?KP zngB&Eycy^Q@K|a_L`F~5mY_y;i1eZGX^}OcYI=HnM)>E0`xh6v(j$BvQ)2_gd4Rhd4Dv%5A08QvO1zw%ZtfTAkG-~IdHtF&lQIzGE0Kx1HGJ3Z> zV+JcAv^Z+X?Wsj{o2Lg04l(l)b7Q>yOw<4vhD|GbaTCy7+vmRB(?Hp2UVf?7?y`;U z`gAj=+s~9}T%$?9ZWAr*+W5l4;Z+NOA5vlRh?JnvC}{W3U&150yp*1qELm7MSa72Pr%R 
z2MY|O_`yMypugUkt;W^mi{3p(0LjLKMbR>=CYT*gVITaS;C04^D+H6}pN5Mk^*3A1 z7D*QqEeHG`kBqa97VwNh8*N-RVAeKPka8hfe|M#8!U5m$0Xa`Zxw)rU-O3}WSv|mW zS&g1M-|f2%&VFUI^ay{V#f8@DDL9>0C!lD9N!iUkEy|TqWB>0bochW1jB6?~!0oaN z+_h1xefZw@PV8r~ZAYlsTY&{}W+13BZ)uAAdP%1VPz_Iot*2J&$oU}Rv_Eb4UqQWF ztblvvznybXvKh%t7j!pCC!|>P69Xu~ZXIGw`z_0MJzDHp7v&YZ7ak_v`l46dQC{5@ z5?T5hgd2Ylwvq${8i5(?>38Ueg1Ecu__(+gQV-K( z>7?Y?(xs=K``uOxE1cW#LEkZr$|^pz8-m*0 zZ1J{y8o?A*;Qsyws=PLrYR8m?iXn$!)pR44=nQ~Cm0%p?*WR|A zo`kt0q>w5d`O+e5fP2_;PJeG@Eu>wLMm1gpI+`#I*y_@u_aG9W>A}WTAL6l=ck-5P zQ|63oupS2+^n5q(bv~ZY5}`@B_Awr`+B)zG-_r+|&U&w+oU_Y<$44NxF=q+G*Z@_= z79j4lV(1YEgej=^vqhqK{yyUliYP(#D2)mflmKw<_nYZY?(rqcGZRd$tiALSV-B zSX+((3xv#^pgpKV_)e&ye_vY*vQ%_;KB;P%2uh~SgUB{?ubOy$>Z!qVNH1BnX}@`U zl}q;%W)k{sH#k8zZ~+k0Ue^+!$rgtnWZ{0N%0=TEMhc;7GD(Y_Ra&MS@RS0c82Bw4 zulFG0GS-W)4Qr&a^>qWD$MTU`yF2ZZW0-?IEY{s>zH*^>p6C>JPbsOe=z0@lLr<>` z5r#+6{`!q8)vkoFBKLueD>FRhW$_8yV)46UN#q)x(u=BGlRv{mDJ6%`!hwY zU6#7YW@6lhw@C!e(0b!;N`@YL_SxQ_e`}kAT|et&T~9yV`u?z$O=W4{Y5S;zcz0Xq zfyfn{_W5kjTYs>m$sat~kN1T}a%z(TGbu5?6O&=g+bVe!1x*^(Z3@f*V$F|bkiSH< zoh7TbF%^)f2lOs!Lv|Jt2njRNmylk6UR~A3dn+YVqq^9X2%eu#=EzWp7;q7ZdOY$qtJ^y%o%?FQ`AhU(AV%W zO`=(|-t-qVU#$9HLBtLL-E=)zcWwg4bM`qRY>z#v&l$+vTQXFkZ3&k*t9rmal2?q3 z=eISS;6}rncgCjpCe*y%(|hc{>N?#YL@Y=)Wpb?4@Dpys^H}>aOBSl$Y=YjS3=Du_ zI$#JI^X@SS#2h?ytn^5d^BDSt7`VwA|J5 zh0LkX{IQ)3K&C6xT<2bSS&~q0geA*$o#1o#kc{$?ll2^W0uugo#+D3h{^S@4seNXm4qBa zNB5=tM#AzJIy|0h09v2XxU^6U#+1pDp1!Oryc}&+^@_+k2NFxV+X(}a^6>C;gjYnG z@vMnrDL_*)eU?XG1mwlgl}uE=I&q2jv!aTIlr1Z_lyOz2I+gBVs`xns@)YTDABdCmuhxy&BM2wZC** z1KBnxU2S{oCsyPy;Mj2nd3V61Fy3*qYLnzIK73!`)a8aktn*o(Z9gV+zlKR(Qo7+N?ADpwk5b%8D`Q7z441}LKjG;j6!unmgH zrJ7ex&PHaVg?Bqov{#t3Q0;hh-Y1KUdiXpdB!uyjrZsGh2Jb&uXcCg|m!ZIw;(Aeo z)CImF-KdhIZjec-Ay&Ai|5p_QzpxtJcwjJlH`}Zls<@sWShva#0dWkYIz0J3$w!rY zk#PE_E)EYQT27v(wt2rPv@Oq{smID+C=+LAjZ6|KvI&TFZ|Ra1)4%O(UB8;FdYG4W z+hK03NKeY7+QZ*YDE%{y?I1ySma7pz!io=c^pHp1wXjjhe*1(ePHZ0EH(5h^SXu3l zy-u`C!#o%ND%f{#BKFOvop(M5w=QWJqQ;9}-~~$OL8Darz`^UQ=U={({-XeaVS(6O 
z%~8gq=iS^(Fyd8rNd%;sRO!>%SXgWj)uh|vI^_J*XBW`b9`lpwI38A8P=w((w+C_R zKb-}r3cl?kz_g8Cf-F3DA zf`^_zamkV~9NJmYvfBCGT|t?>GqDEKg7K0Fy!so=YrW%54G_$zc}FzwjMV#( zF@DXVN;8#`x)YAsHELdlsT`xv8OnubE1<4R9_g{xXQgj)5R#Bk=iml%gM&+aX-(3i zc;?5+=)yoV2inCHw6@xO{Ys<{y0A1xy8l$g03-l|*|&6JlVW9>RYRt@O7K)MIaE8RiU zVR(MzG8MJl8O0%_g+|hQ($L}8PwVV3&gpB7$qjoh-YPpRBGLj2~{nJ^bbc6Q#*$en|-Y>7D;%Q7WFEMy-CUwdF}O+Zt`P z9a;hW+S=5@vNju3>MluTET^WYXVoHfm>KJana)vho9d&^C^1$(x+WdQfBmy(JhuDc zUbZH5xqThQtP@VfV@CDWab@u80HjKO9J&Y56a2GLzk5`3;aDpEJA}>O(7cBqDK=gq zv{xNbbO?qpMtQH%&YLVBfsazRyHxcu5oa;SH6^3vXl1yf^r6+pn=ikqCz$C5anW zkd|Yn+2py>+gQ4hBLB?|D)Xr=ZFriyrHllZ(PIs1+kT&(uVqVsRc+wdf(1rt;en(p zH!|t`pO*Q5VHVp!p&+%Jh!d8*ba^(*Z%El?TR=(MZA+`UCD&~JJqz)}eG(xfCK9E( zN3+>XW$!jGAIkIG(*2|_O-!Zw`j}myfc<&w_}<= z^ue=F@CP4jn{cP6=N~0gm$y`6-xiCp8aHvYzqbeYK zUCmP_-cNsLC7(p7v0J$0wxXwN@&H4L4%BMb|Gh!#YE21X+rvjkM{6txRiF&DYDTd) zj>aQ^;N$j-5$m)6Y|UMFq<8l9N|#e>8|dZi1(uFG??(M&IRsUC)qUmgoJWOH`0c&BR|jg7|3Hbsibv+b%EG~s z8~S!;ZjQRKh=gge8F+Fp<2#Wm{7ZRt6`3veqv4ZkUZMCh(~10yde%CG|4^37?Ve7{ zF))Ds=)9YIqyhPu-7b}Oy9%eZZxiAM<2aBQZ_8JulaCj$U%Xtdi8PM{g(d4HQ~n6U zl>TF&?tsrHZm|?ZCs^11Ei76miq~RMjb8o5sn>qfRNmXmMOIW?{c6rKdJFsehM}@= zv2@WALb*j$Mn*=Mk4AtQNHvPh`2sYRh=F2_4IrOKQfK-><(K+VE|qWGDGAiA)DQ4; z|DpookN5v?e}WyJtmn0V^dX%RRN{+{i?m(-t*b_8do%0XoR6A{j(e`#;h1z%NBqIQ zW9iQN)dt@DOqz{;v-LDVOI`=uN=Y+lXW!XZy=OP^_|OJkvD?k{*+*)*T4+(6A2g$- z_}>`uk3Ws#G3x_L4Ls)Ns8!9RN1LMR(kF?TuJFMsy#2DF%%(L6+5L3A@>VE0Uh1fw zbxHtfg=Q)~Q?4-XW>tZuSEm+C{=?1_T22iT#z{SW%JQ{|autj<$PS4UftI~m0TTj) zfI_dVogYkwOWo{_jjoQ4R{%*hdEp6>R4gCRT%}P#Oh*x7U&|p$XdAdDz8}$uaZd+v z$Rg5seGSC2_URR~#omnCa)WU*?^VD@hw;<35>d4cW+*a8g2Eu9ViFRP74%8Q=F{1c z+V&ePf(}c8`Ff2lDG}O>efsEEv|y@x^WL^N`ZTQo_h#TLw1X&>oPPl)BV*l$v?%$R z4IzH%x7v!!ilU8^AJ{kCCSo#I$Mp7u$k?tyL5J4>OJm*AweSA#?Z&m2tw8qk2uFZg zSZAIgN6o|Nb-cv$}|2C$JrS;x^O9ZHe4sxkm#~(6I05UO$*yCPJDQFu6?V;h9 z;+)$g?^ty>uJ0s~x8!*2gyhryxe#sV+#rCK`H?CU@oUMKpAQqp;SHU5Wh4;BV8q^N z?{yA)s-0Arg*zdR`yR+)?eJ))fJ}ZOZ01O_o#p|NDV|cpGdV}ZDF%?rH-PesE99ZS 
z{T=hj`!PfyD|44r2orVk)TtZH%*>+;F9Kri`_*i)mADpAnO4`2sRb-qk4nwP(G-I# z6<|z)5nodF9K#D1Zfgyqz)Hep#!~MWW#q0+7ua$MTJkwk*Y@2E?L`2Xd9E;7-dI?* z-aC|=flk=4w!JeJzt*&xZF?l~=zIC@gQKFJx6#3K`Jh zWo0CbaNsf8=IiIpH(jm!_O>CHJH%{6T;TG^b#VhrZ6XXeUjKBlg8~5r@M%`GDUVzL zzB8o_CB1cZVh-_gugmxCz8f)+(Vro-fvs+ByP}x0Z2YIt$y25#d`*GKmbJ_{ zaO=z4Sr439LPpNR^S)*n*8VN`L+H!abZXGosf8fLM1EkuBfK-JN;&}gW3QHAII$KR z9JUWa$pjphjD@+*GcHSG{jze)S_ZL&CO!H``AlB*jL3w_mOd4oWhDb4%adRDNJ6Ou zd7ejAd3`(AYe|K&%Clf`!+pv{Xy-a-)-mD1~NlFpa!J$e(UFr zHAJ<|%+pff8Xi`gt(0Ln@`aab<>LR)u=>M3e*GtgKH}p}XUY;KBs8X1CizHdZ|!aG zP{@X3Vm;?1pBB?Gzb9ReYH86?Y5FiD8W~le%yAT-Rv1?m!Qp92(#N*L+us&^GukL4 z$8;+N-#9x+932v3l<9y08uum^A;%#Fh-Ed&ZnI?OFV7>z&!5yc8bEXiTp?WK3F%Fe ztGG#4Y-Xhya3G08M(0xEMUwx-VawAIXq?TwwF4KLit303#XUx(YQl0v4+4a@RTVE; zY9*4=n(D@)!9t#K%GE*duX0vnI#0bUk1&V%*IpV0{xjOqT4%-?odCn%w+BTxeopUL zb4wnW*A@XH^ZQV{Zw8I-_AEXk5lDLy&@WuUwSDNr3v}O4*N#B_hYZ z6p~isT$OfK?($2#Jsf$Giph~Nk3ZEjn9ASC1$t07C%nj8@?O+Q_i*)J1l9eZyzRyj z1$LTFNYx#`rK0EhXAt{8W9#&%0LhVQEmsRW1!QZ_;E9shn430&JcOC4>M5Tzx3vXN zxyG0a8>}azax7X#bQh=JSCw@p%)fr^S2Y}-))g~^@D&y5t&*D`Di;#J-d4(Ay6p*u zi?_qp-WPmYDia}$YhI1H8Ns?R8x$665o_xg)x2hX41aLgnuL0{+=$FlKDg$wJH_5_ z-}7g{AL7YAF+}NRxF|SW2Cv#}aas&|Om_<;=fGjUVq&Oe=5yCn_vB>=?H6Q+6OgZ| zg1XL!K3z6D8;3sM>e>$K`UV=&4Er4;y}8=Ldv(Rv@X-nW5z*1C;OOR>7z4BxK+V^| z)uhSQu?rc8#J*WL;2{b1-kx$qU${w+IgLVM1*@l&0C1=txZUl5G?R~Gz&VY^2W4B; zzdN@Mz?+nI93ES#jW?A)03~G@k8nf93n) zMuVB_RuP|+rM9>pHM~V64H9@;gnAB8$MM{gT>pg{W^(pzzNA%^@3froUvX^UuNryW zW8Kf!lam%}tw3)Ut#2HNVVvL}Yc>x%eWO*5bo_d=a_s4GB@6#5&Uehu#PFYK$wOcf zng)2QHXm%|4Wk%)Sj?wuXlJG6Duc=&5wX~{xCMNJ(2f@ zZIUY=jd#gKn3eTXzBmRf2xO@N6t_*CWtISyP%tNt{g`}PaRgK)*49`#mtrxF!z~7rB$`e8gwoLZNGsjliF>*4_UD zw}=uVmjY8g0(KWfyLqGetSB{pO8_JDLBWiJ2^j1`{U*D%N&9{aY)adL+mpMbgYu?v zw2E~l2#nGu(Sx=4hkN6cC09V<3A=3;Kmrh|B_kH3b99Gavg~313swmABOF8h_VS1n zZe`2>ptLw~NPqyR>xU#D1Wg)xK@ewyW-AeK#_NbYm3#(T^I1miF-$kY=@Yg@wtVNwBx|^A zZ8m%n-C`*HTSG%zzKSx;b%-vXrwB2Jv9bozzK}Z!zjcY%oe^y*;mC3O{*^h$;?KBE z>>h!Jd^G#9JDTxbbUk9lM)A;or(IBxKjz!k@Tgj8m{7)JXOc{G?71nm2I-`eHhm=@ 
zAjOTQb{U(s%}?nU_L0fehpV}Oj)~g#sqLzSgqlG|ZSA;d{K#OF{7{-4y-8Q%TceMl z^tW|$p{KPMe43Ae{{jr8I6?&(ID;`49L}Gk;~twrlJP#g7itvfln^gT7)Lbs##IIw z+K31BzWN)D>V;1WE*R%ED!y^@fIB&*KyC)46a$(-06s=`xzK#5yI06Omh>rx=sePL z%l)kGAO-&qwTA!2qClM=P;LM2IR{UbE;Mwkvfqhs+45?rf!f%_yDMZlECbqO07zvq zFRiK4&>RjuHYm$CnB*}?=IBv-Lze2sKkoFX?vY$%6&BiA(yWc~Rk_}SkR#+hlX z%oG-~9Bu`h7aKij=i}lUxfj2Ay>&JFV;7qiPGmKGP=p#=y_b67f;o3?{GVluzku@f zVHVDkHE^6Gz?e1f=v3EiP`qm?7GdNS_~EE-hUQkK;Hz)--DKyjQQLuB8rHhb;f0_$ zXs8NWo_`9G04P*jziDj*iO_A;JTaDQdKln}d|Qi2#;D~SShmowRWx6{&5ty+hj37p zEZAE`@RcL{^-i7kvlq);sCs8#gD_#!Do3w@pvDN~Cd%|t-v+)QhyqTI@Z3b<4R%MK zO1SAbu{onB%R0RFk17+7F2aiS7S`?@jbf<1`}jN&3!TgSRZ$reufK3+e}b<66-tOb zJ@NeeyZUGF$p1&#dqy?6uIs`IA_@XFL^>8^N>zGC#Re!SNUusSq4z2(pwhbt1VM_@ zdnZ9qLJz%#5ReieKxmI`bSKF#?_UaMSz<7x^QM6J&1r z;Qn1=Y6;xJ?%tnVk(WT!U|!8LT$=vS<=NYwqoZ9oJF*en(;kQ|90&!zVU%)wvO-Hc z&Lzu&-x$L)PPqPyYXpEDNH{PWIX(Hs1$q+eHPwN@h*SrN-OTutnJ|w@Yna;nqGnE)DzLZZ+(j00p7XgHk+-%pV?EnzJDDQoWHNhBY!LXjzUy^A z=j;~{vPC^=8xRao$5^#x6aL#c9XT#v4<=!+R2;6UryVDYIMw2Tg**qR5D znk34N+S>$VQt+-jT2c*FBtXZgWQ3oRCTs&9EE$8{qiPXx3xH8|M}ZSS>JoE#(!|~|c(M7dgzdmj z0^SLJxEcQQ=g$*smqrLci!do|nxmRH_l92hR@BDC_m_xp;re!$y$IGIa?c7-WmOlw zK-GKLgY?xmvqvU!+juj_k~(MFmrsNEPVZvoTRb58Gfjw9`co|d8KjaQ&LO>|EgHpC z*w{U?(BmR|I{C1KLL3>p4)k6W-B-lm%2-+JmvU3z+hv4Ckbd90SmQIPKnzdOo~@Lo-mbIV^1YZPucRK?J$f}kJX zgBgW9rc5DYQBy#}veSOdzetE_b2*r4`$fNS8S(A8^d$zTV#5a2+^0tmFA05uggo<7 z@i!`*9pY6>OzL8nO1$yZ@PI3)E5U;!H7)8?a&txErf1p?5C`cv2YfHAeltPbU5gzX zQ$Ek)*XPTz8E~h{rtG(7mc>onbAwLZZ`Xwkykb5kwrssLYKV6&XvWQVQEdkb8p~?{ zo~|%)cQrNuC{8n#X|G?sdXJ7Js+#M$W})}#?>;KV-wpAiH;r@C(BHFtP)7lNmTQII zMG?LZdT;~A6%2N7>w>MZ@~YahidaFtUquom$F3@T@X-G$ZTGj)I4M&_#GohrY3a33 z!LqU*tGpUYffouUNXS>)xc{EoOnLBxh(ZB>CDJX4!aZB2v9iS9kg``X=cCQ zUKt;hzEL|U)VH!@a3)QJpI;SFeFcCQ=V=G^@*}Vr<*0rk0a|-8bE#paP3)Y%=D;n$ zF!fo?40FgEt9AJ*ve*dv5$Xt#iF&IQh8~hqJPcbn*(9|48NH^*h342hk%m@5c$bP( zy8%1!V$H2ufXi-i-{`2|(hZ6d{SBuM<|{DW*$22%SGG#LKQo$|c!)DOf-Q4*iT(2j z@h%VQBvgsK7cfgIFr#^Z8?ra!h@b2gC~ll!d4jeM(MeBA8RQEST9*%4_NorEwMY*W 
z4!KI-HLq*Lt9Gep3qnhNByd%vlwY(4KcrAAppOzr7TTowmhM^nqAuS^b5}FZQDe0% zjImSB33gq{f4Aze%04VFz`buup}}&q#`J)z9$omrkTn4RRrBitZ+*&d-mB3vwU>>C z4?vgN_ny>C5i|ej@Lu1%@LrSUR>_B6r|z%2k0T8{3W+06i&wqUMWoYg?$Ala83yYX z@M!EG*_)c}C1)rX^k&~(GYOzoD^Yy=pmoW%ZZS*=PPA(DmtEsDeLsaOV?rSGGuQTE z2r-RG5oQ71SoTLzNo!5CU{Gkr$Bn>rXZn59z3Xm>JhMy-|o&B(OnX} zevrd9t3>Q`nTY38Z%49^sGtv$Wuh{v|2@GR3%+x_CdSr|1`2%hDa!kD*ZI!oL6vev zKr3AWJ_K=|B@7^d*_2~@d$8O(dJCf^{=NVUWboS-v;|F2jc>D)fypY7eVL)@H4fvi zz!l!-e{bsV;thYi0@mdVHOF=VMfr_}%mdT` zTL2)6ata7ic9In;Kk%&KO?Xy{GO|lcVulg|Ix637ZY7Ol{4~Y9 z9Yh(i3a1pKbc{^PEmWMh_E(t&bS(CoE1(V_IgJt}oW+b@O?rk=TV43_54Ui@?d!fC zBPN#aP}{4mWR%AYeX53ik-t`m7-Fie*N!%USp?gRN#u0JUJJ9_xJR_>j7$}mQwj)| z4TG#R^*mtRzu$nE$GaQb9;~Yk>R?~*o8>@jxMCa8vWRfyn!AOrJ~(cO!0os<~|=@PKW;V z($yo@G(HUz1bRjBXA|gyb%fj>u1`}RZZ6m(RldEnRu}_)axM{@5n~mZ-=!6e7C=Nd z1WAu>TvuMylXgmSx5wOHgONgGT6#Uh_S|sz&1STe8X}3IXnOZ&>C}aWS(+NlB>Rr| z(Yh8lMx26pPL~|nPbHLLWnt;up(WK#h3X)(J<$|v>fvu+&Y5E4e!$~cpif&wYZ5s0bTY_V}ewi5>S`%*RiB%ja-nqNNb zeDk&|jO|PigO6!IjaI#(KvXSi@SMhJ$cS7BiaRdruYNkXBWBB&f3PCnsPXpD2Q+Sqj z8O|`z|JO|WH(*Efz*9a)lPPVt zO=Ph>P)EO_2jR@gpQ~`tj4VqWBX)FE$A3!?dojwgxBGjWt*jRgvf{Noq}r=Y>z>%) z*kHoQHHH@#)L%8YS+7x}Q%;EP>6pO4;|-_v5s(7{%hFJNY~#W2%O9e{3HZ-h*6G+- zqGPgsL+&BlAzz$8ja#sHkh39zYYeWjR!{I<8d`Qu*=dzJLThaoa;tLgPLZ?SP;HwP zgsn7#aGjU%;vg1Q#cvG`hnC8EOkYcKP>&d6*~3iOlSHJP@{3Q&0qGq9~S`|;*Ouapcs+nyi@%ElD~p;UaOP5-@DuU;7;a)*!A ze6dq)3HmYmr6dIKyWShNzbKH?{r2pNx7R47#<|HbX#Kcw-Q4-5vHn9#|N7|A%ZO-h z?nb}z>I>)3&-KdT#ZN*Acy=H)2#S}0qtp%N$8A&Oyo!Ie-8f2(39L(-DsdJ94cQ}| zKaClRZYl9qYN@NGfhv=Hy1Kz|O_qFTH$zaYLvJN;=nBZECb5cHy${U1`_Ob7Gf%*! 
zldFN+dLtt6bQzMocq=Ad~`0xE6Jy)FUwK2wWHF9gZ4e9}P zUphmgNa~zyo?LweO@bz7racq3@MjY>G9=i%RNfUR3J@t~q(@9=eD()I26Bwe_t#Q4zH4^q74R}u+*=EBB(`oC z#i7c;un%SvJNx zI}v#9e)BC%{*l~zi#?u3D$IlU@kUR?R*^~7(BG%mU@FHegU&Q`c~4VsP%JV~PHeV^47bcOyCSOuhSLVX)E%CB zOr@cLuh4;*)VW%kxy2FUc-3w+yej`!59{dAW1v`D7`w$u2B2+e<`RV2EKC}`%UnKl zDfqr?s;?lw2FPRL;-U@NDhrDVA}jP)JKDU4KS>c)nTSVC>kxFY45Rj%SXX*@sHTfP z)(y;1-XL2>!4cG>lPJ>egM#LUCZ_B;f_9!dCpnWN13YV-u2nvW4$beg;UciJgUDC4 zN2^bA2CWVL{mT5OiU0RswFt)uQ6q7qpOr3-!^~?+l~GT}a>CqrSvZw~de8JTh4FgH zywu)GQL?UIUDK6viqCE`$8xDmP_bL76BV7s^l@%hhh~!{_R+Amz=mFh26Eip0ND=E zTUP2@CvsnQu!rp=k%&7t>`h3{*|t9O`+RfG{%N+4KN2ek@Xb@4njEEsu+Qo0i$Nud z0?G#_*L)j?j4D6xt_P!wm}u*DZ*2N<8T2=pBCE@-cZEwDvTY{0;ZAWT>4xu5{%mcs z7_r|t|BY!_)%2RwnI*{K01Y`UkzW@$p`=`Y-v8k6>Gw?|_}y!>jeff;z?%L@JSy|g zP!f1^M?4>?pc>gxv|jkK7Fe})0{@olV6uokPJOZ+nEKfHMKreothfuHVmBztTnF_H zUqGF)1Y!c{!t)D7zfmAc*Yv@1!eRQapgRnX!eBnRp`Nzrw7I=s8tU^fD=TXPH^PP)BzD&nw4V)aE-QplkVFr-V z7t9hJ>c<%ld?|GD`LEH6bT&`x4ZdsgW;`E&u2(K%-ZvVRkZl5pp+{0;WSA$?WGxnD zKjLVTCs|`R>ucJ=+UP+q&&Ow$u8;A15^T_9I@5>*%&|+(qb?nFv`72Etb_kR%+4T5 zQ2#18r^FdIXddl00r{sYpfb+`A>74)62`C;7%kcb!5NkO#q{z(qmLUR@K5C9kCpgu zt?j?Sk@k;NbrruK)EUPchD@m~OE{-q6(kwg@1A3*m8I0n*qm^zO2s8edgx^oOLyu` zP<7Qr9~OL-giOh*_=VZ~w$p3K>8gB|A^G`ie^34;wD{TL_v=tF%hNjWnhg8Mc`PN? 
z_FJmGMZ7I4t^Q;T?75p8%>Z z7kTeDYV1XcuP=gMyMGzG?PYgLuhq>r0N{ulQgVu3i+y(i6>)jYX?gv80ztHG?K0c_ zkz1LDcU-`Wm&F`cHBkf0B{&(ITtYj6leehXn$he!0Ejw)4u=20E%tMGAnR)`1f)NM zugQAGXp*YkSFmQ(;Von0k&|!aA1^pqXSVN8os)dpi(Jf0`gzyT23f*%hsH z^cQG%8w%SJU+@fU4@Ww2K+C^HGF+Yy_zPfpKuqSh4>TA7@P9(@MfZQQ5&))8IUJzm?s(z3%^*YC4wVtIiXE6rP#H!%~vhBbl8 zIA#4`@!Q(p^z?`vKWZ9%ak?=Q6k z_vLB`oLn56HSF>=KUr0La-z?;g_7wF6kLa|9!0 zxa(fwa46GOv+8@5e8xLyr9jlVogogf@OIe*t5Dn>0}aN;-nt70xaE|zr~_E1(nAzN?rgvniMaL_TNWy+`OnihDj^Lb_WXOjwUBF z#hlDqqP4--d(i%?nkX#5dI_q_3S52c|srRef`{yo_1y8<$DLiT}1H_{sjp3AwM z|7oznTT~#^T^jCKlXZ6tJn;j?0)-H^6GR~X$p*UBm+4Z@q6`lx3ToJ~R6}V=XC($V z`@SdKM>($vM4Q;F5B%?{F)Z2kjjNs22y(iq-APP-@58{nEHed0j7>0!J_d`^1A!E~ zvl07r zFm$WJa8Ivw$>6p^S~%)9LC++Ru@M@de5`N^j8{;tuPqfO_WydFQV0iAa|9Edv{Vph z{J`uoG6d`z(O>b&fGG*ZXQB?c@=e{(9r9jbU=w#fBVS^YzK5SbnCH2`r@pP=-V3UQ z3b^S9M4#tX-GhcR3oJA|BqzUL!N5%^>TIH4@tV28HU~Gj4-TF^_yd3cAG%N+)wu9_ zW8i9Z8@oP0?M(N&rT>aZH}>7WeG+PoJiN=LM6_$yPSnuX4i-POs}Xk^?kyf4&DDmc zQz2!|+7eVc80Ay*<#3b6e%r%*$Sx_K{qx4!uU0j+?FxXDKXS`9PAq`uZX?sUduq&@9PNk4;8rm2v6o93Wt52*uw$fE}=x7Uif^GPdTcjty# zGPaGAT}p@RobX&J3T_|WYjYb=&rnC4sG1Il6oX{oCtamR?{LEBdY7Se7?bFP`o$r= zs)=8l3w{}0e6v%rc^S1D>W11iPr^@TKf;I_O|d2)p1r42d#TFP;2x-l$y7FD`tY;O z5L@&bo6iz~=_fga%5k=J*c3nTP}F?+I$~+$aIr|WcJ_q(e_%=r_TcGt9bY4FFkj4p zqvQsSyB2YeEh=3S^7#y#_+wx>lDyJtds&@3Qy4gRoFg?&;?CREY6z&n!Byi!?r3Sw z5oe1~^Cz3U*O6ej)q@=!*xxjg-Zw3upC9XBkjH(u<1sAp13n4It=Ag!cTKpN4Tks!XNq%I+@CaMu=zseeaZ*Vn)(4Pc?79r zY1=jWUI1_!Z&jPt4+>)%{e7wSsmdq0il*IKQg`To(~lNn@hq* zi%Nxg)=ef2K_Y5N?nhU9r<>i@gSQPT-!v+=lmTy^Cw3!+tWn=T{Q;!?r$qdBUj;wm zJ?s1GwwAHUK)f;BQ+uH8brPE-vmteTD{P{l?s6)BvJGKsxF9LSe-h-fwfA2OgY4Rm z=YEAWNY;lu$kQOH_omBXYvJF`bE`@QHNV`sYl}W&%Uip?%I3(qwWy#nLCp^JS%TE< zM-k=2(n-WiQ6H0ZF$eAWy>cxW3t}O296WU|1d^d;e*c|c2-bgpGaO2Mr&1C5(z?NX zu}*1!61TbW?&j}Hf9>E)qLx{gcn2Kew-()^NTmRv;rD_(u9_w z>A97l-Zcf{nN4?J&~$dhg}wVe%p&HMbS&qU!x&F^@lFK(P8qeBEAsMVl%rjfU$1Ak zSfRZjAK>!?Y=IUtjrze2ekMr%(O13!KH!H=n}ct({Pbi$FslGFohjA9j`#n(0Klv= 
z-(*Q~C}v6_h?Tf>|35=$v&b|6&6I>CpdyXy_e^QffIkhWS>r(rpn1mwL%$!o=uu3P z4w9Mw#i^xd9V9>1>ARIw#fILuo&BeM!7mCJeq6O|qk2gMu`Al<&Ioiek>ChVt66Gx zcd+z+KwhE9Wy*|?CzLvji<(Mcfia*jqnv%B6T@3S1_M4>Z#qeXY6sg%!vbHR92k?n zA(L*)>8+0YH~{v#giwF6LBD7e(p*(&#U#jt3OOU~C|$r~Z)w%=0Mme`LV6N;YlM9#M9R2zPj)FAUS9c33pbunCfq*elZ%Smi5nk--BslkfipiZAYnI{6ptG7ZuypaNmt{fYA25z%^!X~u?UPJg@G^B zj&?Y~)p3Sn@LTK_W@cO303yp9bj?*(SLFDd^Sf;lV)y*(;X6UKmui~bR14^FK31W- zTutgVBCU@jcW+1q5$F*LHho`$ZM1is1r6#@jubVdw-0GzmEA5Ch6i?x>YKHP3^GGL ztp7NE;EvU~$NE`<6-JiLKE10z#EP$#yiq4wVth`?%EVhV~^eU51#D+)&)2t6@%@wAxLj z^^>IL_$ZQz+{fgv%&yXep8@yJL8vQ&Gj!)dpnO z2RA}!BAfjG+6mWCikU@3%XC{{RwSKLchcv>#s#uH8thm2PL+9dZ;|8MTb{QyNgi=r zGw8pW@Ic@yoH)|eW-MM9fNhFq1D8h*+22;SnC}g^^{N8h_TMyaMhF_lcGc1I-o`(0 zGS?r06aFgY76R6>qTLTp=c{O)Yd`78#8doEj)wqoY2A!<~wwflPVe$_J z9^-Hi-VP`7Yt#9VomajL_e)6QDl46wj56DGQ5W2WIc6aE0pzwqupRd=4>`&L3bfQ|J7ma&|Jvkz_1MItVup zazL5$I~++*adQ#exr{t zw2@fuTZy>k_vFbLgL$9%&0ob^d$ehCej{)@;$*h1cT4Y4N7caAmrSXBDb5T6VRLmf zLvDvJ&wMYk=A(`^ zwuve7T#X$@c2ej)=dBzk2?67-6tP}zSILl7pp(0LEp$7ft|Is3*7b>E<2;zZm}RE8 z&6xCaKVJtM55F3_%`!&((vk~|<3CUn&n#WGRfsji8wL*PXB6yu9H{G`0GB@&Q31{D z$DvdoqM|4vmybq@jRh!nRYTxP;ks}Jp8VnFoOmBAKt(J zPTXsK8pvcj0o{NHI5)*Q(^@sX==aP|VDpEXRMWEv3sQ^%1&wO^bXUi^6dldQzXSmf zrtp(H1VYvi?6v{8QCI&DXI>BBaPY7WFXSM|J zk4AC*2IPga>81rLem>-i^q;xUgLZ&EM2J#C1x;Z_zz~LoHw}5(OnuTwRR|iU;$KN+! 
zmHc?|CP0E~T+W0S>8=|E@|O39NKp$S);AVJWW(lQ3s$&mK?MAZOwCaYP*VA!{x1Y2|~Je zPa59c-bKKy(sWw454OJIpE|1P)L&}SF+a-_77&6RQ>L~2ISmCFZ~WJEN4Wy#JFHEU z8}~O^5jmFD3Cx|+WyIRqE3>n&asXK>iuA`!&hv$e&2c!tD2$0)k=zFys8P(L34c+b)3Z7dhsFR*RmQpbFyu`K zbh7koXsy!z+y&E50l=D86foT=-f9ef*VhJ)m4t=?tR6>y$?m~$M7gwfJ+K7G$=0bF z1%=mWJ65L)c$welwygq`n(X1s^PGHNn|5vPc!BTxL+@=NNecP2|B(S@1&IYlwLf-c z2SuX$L|_2ieGAO)Nr&(c2a1K{+$F!(J+fzmpy{hn&{0<{Iq4vm6rr^K}%4?n9K zZQg3}shG9F)RsGH&&UTumw0|I5M))m@hL!&LZ;yFT~aNyEJf-q~LVlpeJf} z@7~h`E`U@Nn^3fZcivK8`4y{3!p!bhutgkbONc_h4C|xm#&PyzOUhIA@RQ0-!hkd3 zQM4w%5^C6}&N5KT{s?`OJ7BLmt-6IqJv{J~8==#3V9sv06QjRmFx2{DX1v!PfK9HX zX|8h6?>*5l$s#-VU1|+gJG2&p+HbK>BzvraD1wM}aS7ufF8;C3H;a4;R%iHqgD(P? zaZNzc{#Y5fJ=bG4r3M$g z%gvTx^~b6Ee;C(;*Xg{~;Gw&pmLyXRL$EC&Uk3oysA+}nrfFu}${4I?heNW^)LeTJ z%?seLs?}Yo;(eDo^Vb!SFyW(@#mJ%w_k=egSLYXFTdU0_*8PdbOcOgaQg(VM+wX?( zOrvMN>fStD4-9ANW^h!kM8mXp@1thM{xa<4@K$v1^V7ao(_CQ~@{MGMC@tPkTM!o+gU|l7)!Hh zGTQ&R2`+akfQfmb$~qzP+qbK2TBmv2oznie_kvYNTI37d+%8cgAp2L=ar?nn2MUJp zLpAEL1G9bIFJM0%fCrI-*hOb1j!49SyPgeNe9vt2;`UJk2Tgoi;Vv|POlSWAC;r>7 zVvJ9?S|!sD=#y%P@!(cM6O!!pf5vAhgy_hoVE7ltAe*M1gw#P0mZCo}$bJSC+kSwG- zC9P{zxZ7EPo#dDAl9?@wL+SrjUh2L)(dDt5*p(h&?4|zoc%tw(D|t=Ef{!JmmL0!+ z+A`6;Ics6L4f=^_bke-qrCNp!f_094KfS8-?7s+2*OxFA>9<$>BMB2^$Ah>t?4BdI zVLD_kujIR3)@J0Uf&x}I7 zx43&8-_b1Syb(70J3{xIZOFYrx21uv(HrY-R_YQv_SkOQo#BL3NvCW3n^-TjlJjdF z8c89s+(7H)9Rj%5K6z!%G!Yg4{(X$P@uYI?QF;(&GnGTW6a!D0tMf!3s7$o`j16lWm_Y)n;M5}mi$HI;*e zb*Jq#9b%Kn6Fd9t*j`sg+Zr2me^vhXAg%zDDu0z}t2jc2`A7$XJ1X(ryFu53?VC84 z2?LFo!8~=EO5Ve!Kv>UmP20iFM%B^|hG=itvUEbmef>_HIgO;vw*l4NlaeyU846w_ zWg2q5P=aK%%lWWgfXBEm?$^ld!~{r_Fz3j{>2BX2rWrcEV$qAa) z_kq#9P`g*8@mnaww3ba`+Y}e)yE)sL6Q#Axa?<9RVqT`J2bh19ezGcuSd&s5Tr7pg z;Z7w%-f>tq36a=QMBr7xcvBfjKF6MA%rB4l^~>t#;6mD;APLWj<0QYgwl2gt@j=+O zrFt;%-J&oJ9WomqUQ)_l%7&D0J6l|1=V1OG!c0Q-(u>@>s+j5|9oJ1lurR2 zL+@F>4wdZh&0xU0%2x>Ar4a%2WTdDfb-|+7T`HxQi_y5=4O*JmC&f!!X z14^KCJ*G?D*tcUa8iOrZwDS^mvwrFZrkxD5>+o>V^}3gJYdU|8-&`+5e3pTfp8@%= z=$G@zB6zsgj;g-BO0i5St^nIzz}!lY~6GvQ&JH^7g~1qmeJ!F#A0 
zbg9~js9d}{p=z>^k;XOUMg~osA^IPVJ)f;hmk#XyJLBY(g@3R)qMO_Rr9-lkxk9(j z=oTB@1RCLr^m4HHWSQP?|5u{dw6PhU3oCMP25O0%YWnf?ij$KbTyZX7?D#)~_?iWQ zsRYu+?y&p`L2A+ANK>a6&zS zy|Z3I01}2w+etT{_E}3?swUc)V~d;a6+_us{Mub$+bbYzAuMmX*EhAeXu7J zZ0|0QryBPoz`*FpoQ6K2#?o0T%v8!;T$At*TyGAXeM$d_`Swm&K?NXH7i$3qH)E?+ zj{S!DyRQ@bv8@}ftsYHtWqGEtilnfEyxtgOSdp8AP;5w#QbTUQ(x13yaKshop?R+c zP()?TjTEfU{Z;C>>qrTDNb}rde+0BB)L|8rNIZ+bf}71;%$%i5(LpVq5WhEXI{Y|W z5S~QYNCJ2CKb^IMyMIO=;VD@ydYzg zFU#3}Z}fLiWXhV`eW?6EvAJ%Fjr>$Ux)`| z&v*?=U-@IfxOeeOi`#oMefK@kIVmj4^Bj9yrj8+W%-JtnLyw+9!P^>aar#7P2q|-} zxw_edOnh$kNxD?2R6(CFKGL)Eilz*^v2gM~)EB=q<}sv!Msteks`Mly{X9^mm=CW{ zqdXR`Mo$Jcq{;e-ej#9nL$<8coTq(`8d=GVr}PHx`W>|cs6$c3o-31DQ2#kcR6dAY zON-rivwjSg+Y76HhkLj6%N(9(Ns=E9+p(yv$`VMa&ku6i`%wn0C?BQ1nBKU!Ha^=9-H`*s9A4$zl&&u!8v4YibsmvqI|Zw`h%#C z+`58DVeQY5kD@+NBSYw|0o9@XnN8ZC?)NEAnu}U0w5l*qJX6w&@0tO=JFufR8?Qm` zJDmrP0Ak&DSis}!aXhF`S~`2hX%xrA_!p6Q>V`Ezmv%JlUqYGx@jn&;6rjzs;3sb9 z$6%$e%*As4E=i|;b*m@D-klhFY)`_ER!3QkS1(+WFd5`=p};k?y6LE3Q^sOWK$_W|r+%MnlMPMNu# zJ=i(?s1Dc;<_qxH&PZjyt_6(bftQDW;C>xqKAZ61d<};1W%kqdvh;7w8JW0dH(xYc z)PEe5U*8!!5yc_%Dy?hVFUIAOtXLx@3yxy@ze(|BM@ncw9}z`Dv67dimE&I%bi0aj1OM@*8T2&%^%Whoq~cnPz5s)toELue zvbPD$Ju^+#Lv>p@rv9+;WjZ88i($FiZuF+0HM)oHrW_3v7>p(^`ftAuUqE`wJ0`CM za1ftxg1dv1M;9-F?eSL9YJ~Yf!=S`^nzO97#%;wZ4sXVsmI_o=w!!o`@5MDjEe+kp zTLumKtX_S>$#f2ES+sysAnqGvKHu>!ZRCQO|5Ydp`pJN_w`Y{4q3+KcWt3zg(jP%-R}l? 
z_@%M1a1r=#ei!=F4NZ3rNj-%;F0ogC&GltcALVW#A4PK{CF(rD-q5X)$5h({2mD$J z9g#^C(w#Z1eWvq~St@9HD_SU3)PH)sh<_6p9p1Zg+I9tej~2x0qPBp^H&wxM845(N zL^LRu1UGBsNyku#!Vigyj*24W549l;7a80iy2+{4;>B42def; z2Hi8>DqSxgRa&dSTiz+z*+k$|54EHf;d|X*M~L>>XByW|B5vEFUG|*QHewu2$6ijR z{D=DtW`g#!R41TfKOv6^@wFfl#S$2;U*YBhS|rE1`J|X#bJ`PUPZ{4)?)E&s@afZK zjKze8_xR%Hzw}m^n6Rba9UVrkfH|!F^oq`7>Hg{+Uf}Oa1i=i|%VPBZkZ*iv|kEd21*6D@bAL6Rr4%k@?ui!GTIqo=FwPUBhSgotr zeNw1O>VZQNz_o`puf!OuX&z*jXZU4&)qpa4khX1cvyP#^K>NJxM8h^lR3@L4x$(BKqFx^GHgLNPi7{&Z@S+(zo2(p${yy z8SBRK`Vh8gyx|S2W985)uP#)C~4M%gVV-qJe z5?E3vgdL^mwT^kJQ3eN?mQ;hV1_ThhM57EFoufG9Yn2?diy~>&tgtC3-1K%W_)kJd zON{4zN0KG#2RtHzfVz2qrEci^y8lANc;9N{A(k7WEhr^)P?7U;PL{y-DKw{6#0nkV zCX@f7X>6<{o$}eT*w-V~hcncYZ{p|`Oi@)i9Aj{#QsV3u$tyD9UdYuP09Qqs4`j^_ zb;Q3qD1S@IE)Axl8T|c1`fo9j*;%gvS15?dvg42nsAR90{-o$-*}MUw7MxAR_M+En z_*1~$`RSPQn_obYzVJl-;2`tQ%k7^<>;GIkc-&4~6g0cde6yKFe+U=Tow&W0Mwbpa z&DUs(ZcPC)MSM5Rci7c*Y(I~|9#XI~QiOS?nkXD^0Z$M&@*QW^EcSmeUN?)pK6qG1 z@EYGiT>9>QS6d(E{xU8QVl!2qUW=wiaguEv03q+YVdeROAv|B_>nl>15joaa;9nXM zrW7bo;FlLMmPdN8P_)axm#ZSospQa9Jtl7v!EV0#8qWL2yleU8adm<<`FVL()B2_r zw{fGFMH*BF7uo_>-3!(W%|*Ma7VDPd%)ZS*@{f!%UsqTdl^!L}C ziH|Pq>^nG0GU;0AfwM7j>W>{dCI6rMTGptFHV>@E znCPJ&@OtmR|sK*#1>j{pV2#KTxKEclB~0?H1%1$sGA<)AVXLR{DJ8j*3EYrjI# zk2=9O-hjJsW#^WezGx7ahqi## zVkFi)f75QVKW7V2&b!kT$EBACPDWn0R>yty>TQQb65) z=y8|}g-~Z;yq}vHU~!nO?|L_pp44M|aM%^<(KgF0%wf5b_n&JZSc65wnlqIb;BqOj z$(wFQSFFENnNTC2KYuRjI``Pr5)01Y!C`ljF5C)9sd55L?3DS>$gpIRb`l90{KK-S z{E3ZB#iVBM-~PxyA&G0D+GiND|YV@mINIylbSZg^~qf2 zJ!+(HH(0=BPC}CfKMMfTPQp|8QI_%rA+{He{rSpS)z73{vAsf1i|ww6LZ~5g@HsiByYqf*lS?m*@x->(za9iTJM%x)2pG(rKN$tP3d=38>x8%$r+6 z;I}lE_Bh!6o`o{42%82<>3(DunUNN*k_{{~G0)9WtqOmF{yWrp<7^*B0l@wd)~X@a z*EDjMA|fKhq4pp~Dm#dLM413x*zGBp^MF3jtv-%M=;@Yj*x8B)(i*I*RNdi8LR$n| zrGMHVL0!*|q?~OYJz}|??}(~@PrE9gjSIL3u^A~Zy1ibw)wH`K zFp#O`)$i70)Ac<*FCx!??A+3?Q>eKu0 z)SQVcGERv(gwOWIni{W0iWp&)_fZ)~7&*~}I;rzjA?CBwKS-kXZA@+7KkF%xO_C_M zC{Drr8>l$v3so_>%@t9sfH<-5hQd?4E(%iHED<{p(N~W(_UF33Q^;nsW6oZ7-=->u 
zih@$Urz-wgi`!37ZES2*2Z@}P_n<-sv!!@70qV5t#xE8%clSygX6pc9?fe9gkopXG zd0hV$&Vp=YL8MsA9cK_l{QS9BEaw~mDi7)D^8KOBIG zaxtVXppOi|+AeDO6Y<>sD?Z}~h)ep7X6jc(8X_5RqwSnuLpv!ntb!wm)Qw39@Zza} z)&9maG0OEA1Acg(@Wh{Z9Z&qJxz0p25bGs5J3dlkiW$?eUN~F3zmWQXTkw>{$>R*S z@#77;+v7FG$Pv$z^y0)f-APz^2|`#333XcMn-8EL8CFIAf9=@7mT~BRdjb5DpO0t_ z<;7vSY48Lz1^Y9s#l}rnm1&@a`Qd717F6@rLxJnoEOZXT)#mvyemSE-wB&jrZMuTF zYn(ZF+bGh1aGO_ud$O7A)`i~OQ}$x>qA=Me*EOniiUL!gCsi`a<1sb5CVopd0R;y5 zVH_}Vz2q<-UD64bN|-J~AF4w(Hp+5)A2 zNDC+QVrmX=WCR_;Tv$l2g8!wpyD*jlPF*Eks{}VyY9!Ys2Ij_1xXVC+8~wP|7R|XI zTHWaq+9uuyFYAcGd~X{xE&YhqNoifVEJc6g;6&m#&c z;^^F|!tNE*U7EBaGq;m^aqm({%qgcutzS2^-u!P*S$~+T{(IDnY5UhOdG6emf(SS~YBf+2Rcd@yu@Zs_lRF~W*1(j;8{Ajo0Kq5OZ_ zTv#|AH}zO}t=e<@_U#~Dh&uV%mL#s=bd8<%!gMF4AEYKX;n{TmF`w=j)eou}wak;q zw7WGp(~)+71k_k1rpbA!i#vaMeu<$$!e&e_NyK>3SGC8WG8SYC*GH)>9b>p1O~Qmj zbC=At^BJ&ld>UMJI2dBS*D$nBy@7-*a)pd0Z8m>Jc2<#!FBI*3hIZK|=T|;r4C4bb zwtmz@##q4Iyev`!b}Lu>Tk{C>G}#Ni*BPg2quuE5(?;)>7cuu}q)9t+pQ|` z6-S#ZdUq)j?|J^=Y!gyw=93JI6?3MDB<8M5pg}n-4pk+KIJ1u}SSkn@tKDm{o#hQX z?>+J%-&oHW+{UQuY!|CwyWX>NI#}i1$*X{pv(dC$ANf>;ETZ)5vR8>AS=3x9{r@^G z#weT+A1Vx2>MlgqJM3Y{YhriE{d7HBr^NCU)SgqE#m3gKgJfL%e9TY~8 zSCkvpwjj*>I!R$%K}Uz4e+huSYm#a!Pb0-N$kS1%a2Fwm4|oBJ%(;fO`w+HbZlC|f z-g|~MwFPa%N9+X=P*G_L0xAO1dr)c8Risy?OOqOUj17n~1Jc|c8)fxOg*(aKY2*=KtSRfL^Ky;uF>kt&H1JeK!qD3J}oKJr%6gU=&hpz9Tv zwX&x!=GnzJ)XFFG5ec;i3_een2CV`U2a2y0afsj&c&u!*dGecG)bCHukFd(X3!V3R z653(FVep)dP4vJh&{0_fjIF4Cv6#$c49pG6TdOq+;SX*$s#)RpL&s}h5iu87o#NL` zznL*)*lLNtRdVAK69Yq;KzxZZmX0{)sj5%7BM9W>OSYBV9y!NJpM?XJ zw@GGt@x`Co*B@y$JOjomt>xvuOJOh|-Umc3Q_3A-|O_H#~S-97>+W2wZ z7Du{w*vFgOvY(NLY%pW!3)CIiRy@%PwADx}zWC{g`5UtX68Dijt#?fc!1WcmwlH?4 zoCmm$ z{dWr`E)#i>zzHUNo${+7Xr&4=5<}J>2pqM@0@1N7&<(i)=haTF6!lmOH4Rt_L3GqE zTn;jvPBF6uLSN#^3M|H9yjB@qa-U|Xeaw-~3Tm3OGDRSx_s=m`Y;|0+Fg*o#z)~>D zRd>@X26*&#FKbgOoXL9EvDj*WuX*ncc#GnvsYm*YVjtbJZ%ioRhpY%nF}ttlxrq)? 
zdi#hP3NxqAeeUQF0g3>RSsZZ|W!3fxK@?H4_LY5uoN9U%8XeRsck>KB@t=r`}h z-1u#-X*}&CCwG=$z0G2hXtn>6*z&Lz;7H#@_M&I(FtI=x6>-J;ruy7zClGa+FY$P) zMzX&MalD}TWib&Ylr}b@KyQ#Tn;HJ_#Cqe$y>~NgPjwYwIaXSZv>Z?+0mEf30tcn@ zRiPmgtkIqZ)sr28U+#)(``ujEIAY({u`yvF_R72fwZMaZhOU%ZKM*2^20bN-mzg5~ z5&2ncf=U4#dt|1<`|B@W1A33@&~d&Al&Ym-4AY%hQPJ_&?VC=rH(9X@n4AWk(Z4M;pR#_3+pk{=&f|9??`OHC;;) z-67Tp8DMYWXHeFZc%ubf0>6Oo7nNm(tQrZ#Er>|x&7Ts5wL#;xZSW3OOy)DPtpKf5X#W#aPfh3(uWQE6;3V`oSnA-p3X}!w!ImSCmQ%t zdKEklbY-p;4BThqy;(|ke1h6mA*XNtivjw1sx7cZ%iSMc1P6b~Wt^LXMxZ84(9CIN znUD`lQTcLh58I4}7uS8?d|4k-o_8Dg%sV_q#(WDnE>Y(48!8T4)GWN`1hN@?Hss(s zb>^DkGkL^(>SpDN0~M)yKR1DSWg*!tdM4+xj>qz}`OrzAUgHg2(ZMs7jo!bw%|w?5*=!2jIy%_jG^^$c2MT zGj@fiw1}OPM?dm@NV;P3I##yL=b~Qm+5@$&EFdfL9&klw9!b2Zk293Ntjt%KY1wFU zcmPH5vcN>CVIs&$4)lR;&Sv@)RQbv2=~6r3hYYU zs`w?erEy^}H{BHxgh<$YsU2`IH=Ib-_xd;+M-UKl<@f2OR;8#67c6`o5^A?B9hMn= zxG*K`cvOzbF$edS{ueK{LCO>FklytH5x=o7LIt|p1TcW#4+YmzKut+i>P+dtJTOLy zvyV83@Hl@KC@WEs1p4#MwH#3M``dgw0yU!?Qj=PM6H-|&ccgaSd(88;psaXF@q6~< zd7BNN7UlKxW70F4r?gqBH-~1)vJwx#3;KLs_=Vr{X`k8~q0@0xT2kxOeaCM90#v`l z{IFeGEzp=y9V$KZ3TRTS&9hS#4KZZrec1NOJbxh&DsH`*y1Zz28rv)15$!4|b=t8{ zFicdly4wRs|jS z*rRb71CzBD9f2K$2M6Ohz_SIV z0qEsF`!?+ksX2!RkJ?7(2bqnYr2>NytcWvCwmL92R=#V#ImJk;X*_+ic&vJ$i~pAX zCj#&MC$L`iS{}C{&YJqEOBDa+gc#f8yKX+JwC^oF8}8=I3t~T-wF0NqOhD0%ODGlG zL!!?a$5bh}sf%%Sx?`OPIPL0jy5LtZK~>Ryl+_^Zn-<)f0N;-zw)gU1fLD$tM9NVR zb@iGFI$w*(#N@lmC;h&2Tx1*yqcNI)-&^#PSAu4p z@X9_;G4klGX|ia;JDBl~nHHM;AB>)xv|Z9j%Cb*Sa`C|X89b6xL#js+Nmf5vE?}Qn~S-1HK1a^IJGmyr{p2`sR-iImy+Ka&0oK`{&L?3%P$&j2gp`o zKyuZfqIfQP;j8F*@q0#Xxci6G9>0j-4NZdi7H=lTHZASv_YzWKZC4>4Mztyd@Q&kk z6dge7hl#rSIBht0pCBy`~zKcz*(%d|_#!@pSh z-x?RLnCmDqk({YUNgzaJb#MNO(^vl8wfPt)E0t zT7qZV^u}Aw0}}j5w`<9V@ayMZM7XO&v~hS=sZN;D4H&oS@3uT)6NX%In-l^0Rjn8P zxf*ciL60F?458B48JJNqkI>pMr;+}VYWhgRA+r8xv%_$ZNw#JkCK9v*J;0f{Z>>@Cn!pe=Ua1n!R1`Y%^TYYV4e(e62_m85Z zF33@#@zxlSn9!(^_e=H%-O`NPD*?i8Jbq+noGTak%Qt_3ZU9hsFQSEK-VEnAi~K`S zej~sXR)6qBx%bd~f45XsF1jQ{#Ccx8Kjuew{He{>k38^#f|T!OOxAr|XD8S)9zJ2! 
za0~p@?!O4jy`O;0CoSZ0x_ianaBk`s~@h zJon)fg6R+hqr_$;T~5gH+#X;+eXj&a4UAd>6MB`O(lBVH#yf9j22DgWcb$&i4CD+c z_mH6X@aMaxZE#sH@q`JjL_BVPpG?W(#z3WePy9u{lX5?&$bBSb0?ncJ9hhNfaBEO4^F`LGy3W?9$Y)9n~AzD3Cfw3fKuom>4eH z=GZUEuV>-_zy(f*?RrYXCUNJp0f2WO;(yq4&nj;Sw&RTz(mF%gGzrO-g*%w7IqEj55GJi@Z$J(YZStS&B+u3cUeVYXPSXJ&n1VkE|Y*Ef$vkqKE#y*-op^sgbgpx8TeBr+{eOL#~3Zte;VG5G-J?_fs9fLDE44!IECUXWpbB z8M}UM_yeQdF`p`i^XH#3d^milimz-<#rWi#k2jv}xp)1=!FvJsH+(0nPT#mKaXnb- z4z(d!gvHptLl>a|0e%7YE`2$9Z|)T%g%;bu6RXpyGaC)K!bC!Ypu>2io{An_{>&;Z z$-YA;&R>80*B%mThVui?N0wH5@K9C`Nc}5TBCq32;n(P+K5A>95g=st@u1`q5AvQ~Y8)?t#A-yi)C4ZrzAUxb7L9`B|K&YKWb?ORyFr>>#nL1s*pJ(!yN zp0vfbne-(pTA41Jk5gxWPNelc8u4O;X$sEt%V!DEBv*tXgI!qWpnRu$^C|F|E{t0>%NO3 zcsE0!PgcvhQJgN%c4oE5dV~o=D16gVJa5G63iK%P*KiY;pR^o8l}i_TK}M@)PuwXcs*y$<@m@CX&aruC@V)A z*GUiedjQ_`!}fD!tkf-QgjQauc5W!(V~HOgHS$PI@%uBK|8n}={)-~~ALR9%ha!61 zMn=*=&V-{2-_#kZm z$K7to)H~N+po~Z(A1Y!v7~s^bfPgJAiWDCGsv5#ojG`U-Dl4I5x>&UDEL&ZCcoEN5zJ zrmIBysBB82==-wGv88xXfeKgC+R%QpM73S>W1rMzz?i;<)xSUd!@7P)L1{V(&?<~^ zuN3Z^J1|GJKRHBt`0J}GJol?cwWxm4iPLwu?q{%?WneDn%laZ&B(XFw9l3Dba<|^6 ze?eCc-Se&!ba5#D@_zK;{rJz-4a;G`Z;*uLMvlBzR_p@Tp-qrejcZ-m`R=4Xt7ryC3*DZFJ>GvJWQ&-6kz3cBzfXBl2$sO2e;#JVwqeH(-dq~f34 z)aQe`AFj*Oa%JXJy2x*R`=(uXK+B+^iH8?Lu3#v=iZ%j=IAf8<@`dBE21kvz5S ziblP+7cRJz+~B$_w86zd`A#j9lj8V9frLqKSx~ecU7KQ48|MV3Cab#1dEMUtW=Sn2 z3cYhgo|%K+9X^U~iljow)N42z-?dvS?4s2xO<(;40HjB40vK~%Kc8`F78Qr?~7f_u?b@M$?$VxBBTs((vejBH>-aoArvM3JydQSSb|8H{9s zvG~i&N^c6?Mm0=Q?_t~K>NE%EndKkA-JH~99{N{Ab$wKCT=7{%Wmy#ue*lCR4v&#_HHhfInm>(OvN2H zF8n0DKeM$Kh%lU;;jwSFFN_zeSvEYa=`dv0?cdZT2=;Fix~c_Tz~9|&X6h^y?yDzL7^QKVQP~Ng(uPN3|qzX z2X2+VIDE^g@0trg^EU2QTxvAIe%02~dSZ_nnQo#KzGbp_$?;W#_Z2y&bdFbos#)A> zDok8z2o50p!xSKN{=+ICm+Aw{JQ%}sV*C2_AyCc0uhZ*V5$5AE&0UO+XdU1chuH?A zcZ4Pw1s!-{cP`2itL`Tg@mc%4@P^<*>H=J$kg71JNz}#Bd$)JJ?7yJ(fMsTIt2 zi#%HQ8OsDBbv)ka#AX)DhBn|_n(aLrhQ~xynJ5g@IvN1@Uh)xztM(${>5&d|7LIC` zfF;MyiQs9OIN~ds-#G2plq)>Gdn>6*vtE9n;Zeqr?>u3LF=~65J!0lmm&;FaFeqPo0J70wXf#QgBudxPC6@f4TmhmH*52zwx&V 
z1Ni?-tkdLrokh4e>}NGW*ljRajnIA9;L^x_4KInJYX4OY|6#+mtkRnlaJyzDx1|VA zc){F`H%d<-aXospTh+8>*V&+bQY`82_S}5`V-vP7c+<%?i^v}R9&uA&c~thIuebjH zs~`DYqT#<<#PioYfH-Lg@^ zm;cuw;NWqQA;PpjOL^xadT)QtqaTo3w6eeeTC|3nF;caQWit2(54X-UzX zoa^$+k#ZBfslmPn?UDzNZ~N65@Ai|@Tl!fo{crr0_XCx&q3i87l6U{LxN%39)0WkjX_s14Yi8@A(*^o1+6dsg!}waYpS!^ z!EKU+{&N~Ww4k**bQL@BwA2J;5E$UrH|(n-yd0bo6)=rQGOyG`Xooj{)dqRy|{ zN;qsW)1mWs{`4Qge*Z)L@pWb=)Ei4n(xWM236(`wjWIJW1$|kSe;r)-nmkc)DDq@Z z{V)+}8-qNK;V*h@QZz>Y>hRVy&M*Amy!9ySqc3ir{6af^dFX1`b!K6_%xRDfbfUx^ z=Bkq_t*65;^`%0&&k_B4ow;Bnq>PTzrpRlkeb85+nER$iN@*ct@qyZq)y_iQs#--r z8W+!Ow`a$Pqq0f|MAaab@O6_Z4^wR?PdY04=9n}jHRGeD*E#DIuh++W6=7r@exYmN zKwQkUmy0y^D0wJ0Q@4tw=a(JO)(3whPT!p*&lUQ+U^MR3*NE9jnOOcv$61Gk0&{KG zN3aPd5qv4;EfV3sIz5#OMwJxApbc?tF@wuq)rQzfu;PZaMs8^#e6fsb@WpNzkO_o~ z@cMd{9e>-IL?T_EKI>ZlNTFYDx&!y*lgmOSTUGP5%&(x)$;s$r}>@1bFb3Yy})~YQGAJuG&Dts(xQfBB-V&>na*R@6> zU4F9vBJ#fpBf8fbCI zQ)@gE7UhOr$Q!7 z+?PP4ocw6{SEyWjRw@GdXyLVT_UX23eo)O9Qktz5hvUBk@d-x7L~q11I-;`DatI;R z2dZcv>rvW-yg;SY^cudII}}NwTz@8d`1rUb6GTI;OC5Rvu~>mLwCi8b4eg6zv^r&z znVmQqqCAX%a!3}64i1t32Me{Gbc;LC`od2*<)N@vaCvHjb@yGp>=IK&w5^t}tIMY% zu@eqrEw}aRSzu!+7NXFCiol7P8|!O%TCh(qbKaJwRfKcrU|)%8NzJrH;>o0>G7lIrAw?0k^c&Ei;SUGP+C$Qc38fWlw01r)FhYd z4qUUG;76e1`*k@Iz0RbIi&skAG&(wku(7m2WoKAwjA3l;mDzOlPF4p0arG~fv{T`` zh$WsL0k&kXTl9C-F?APc+%xP&5XhyJjTcRhci*|U$5rxZ=dz6Voqi=;|N7-K zY+%DYiuSHLW@U3PU%?(AR-z{~!cYt{66^idkpJ?@(?c`@RhJHI^8`QWfnM0p<(!*p zVQ#K|d)?vOG_EK+AN+YP=Nu+EN$+Klk_!C+0pr1Oaq5bd+pSTjZ0cX2TvR%a$1fI( zM`9IQ@jkgm5<it2CB-3%^{QgyVLZp|qWF)!G!}8DM5k(eD;I+qp6?#Z)SR^xG*FJf< zflDY}wOc}tL%1mIU<$nz9m z8f<(4(pU`eGRexSzw1uSO}K2f!e;(7imFnc&RULQM%GK0+E~X^e$DNHM8o3yvRh%d zL^MFur>tK(*_6A0=^EaT%fBy@kzjsrd=INKzZiRmUxUkgk&1!t%p8uq!^@P_dEJa$ z&(F!Oe1d`9N8P$w$E*b_MOn#Fj@9$|s>O99D|L9afZQmvkj1ldaAHP!(ciF_=)L^u zKxDUMuQJ0t8q7S*_eQ>I-L(85qa*V3h)v|x3FmeLk4hHWam36C`gtFUX_`4@@)cj% zkwDWmTZxU8DnxafckX!xxec0dl$$%opRnsh!q0sC#=lbABg4nD!Vpk6EygcMU2b!NuIczReVEntbHkp)T}GkRhWiPZka_^{W@bzbuKko|VSxJ(1z 
ze(y_D@-+3I=dGJG^YQg8$YK{akB1V~X*{|pqvjC<227xSOrkoMP0?_Ocbnyiano$1 zS3hjonjGe{JmPgw+K|M#yGM`QvgG6|cluLv&%9V9Mc4*mj2k?IBjnazPEzGAwZ<+= zT<=izOcP3Q^~w8lf|T8I9X#3V=aJR;WZiaLO0rGGvcH-gLd1C9vDlMH(PY>e&t9^8 zp}*U8iH4PU?DAGkeXDT3~LU7mK}ZO4=La!)qy72*(#WgC3UXyS)F`uvWyK^Z zYeV^=kB%R1fn(9yycU1|LkdZ_jEIbPom_~PWW z2qW2gsf#-3F>cN2IikWbLPoHMU`V{ug3e5(C1)u&BRkC0_FObOBy7Qh+`Sez_@sH= zjvw3OfLEbimo<2eX@J>S(>KKzUK0wbsOz=vu1S*+AH!V06gN5I{0a30_;BI8V0PDV zLtP$jc9F3?s1Z>>sua8^GVw5lvv{bjCwWMzvO0$W1OFJX$l~7h9qiGT4G#UXIORmPRqde_EpIGHYpFNIVtgikb$jl?*3mCDcruf<= z5UBuQ651XaxK$?SVdvV(v6A*HBj~&|i8c3fqPLS)xe4PuggPIYisA4scHg)v1>P$0 zL)wzSj#)*(?N2M;>St>&r)yBc>FIh@%Cy&)-+Nci<^n9>g@Bq$W-5uTTprB)O2M(r zS6}`;3xHo*(xJb4$ecwOy(|bWa#tGwwYe+7Kxm(3Z!fnB6~Rl2Ruppw$@Tmb2qFGr zz8+v$C~CMNBX*9EwW!dDgso?pvCynfcX49n7c5vyXh|1W8-{>^%QMOqv+zRp)Av3Q zMPx_9Q+^R>#Q8Dxj< zS)z*uWm&eVehHuf8^*}pvm=JGGvzQ^>3H=O4Wx%#>$BNcbAsR#srW<$7FlsnUb_W5 zq@`7{!9n`8FeTQy0UJfSZj$KYF+k^xsAYiD+US?_jCePV4SMBkv^XG4rX}~1cI=6j0>a;+s?+W`2hUUXw??~CG^?~l7G1>#k>g@ z;#J__Z*(4q=3Oo>inVrdZ{a2VG^yqgm5`Tpe^S0r*rlY8l@)FFRjdBKnmDmo1$NaJ zffwW$Y;Zs5w~=>rp@T6g4@E21V=&pw5v}M?^ge^%5UnV7Sa7#49~MwR%z#F6Jso}V zHDv;1UZ_k4MZaYW3D5+YR3Xb?nOB^%4&7o{K!kfA)vI-)AOpB={a!g0L-A=Fsd6x5 z9|fo7*;?26FY3306cVR#HVPhNA9<;-aa9v_$vOO&AC_B7@Vo(x3&F|2wbE{yMe=p= zwJ%+Z`pF$cj6o}ENC^moK)GgV)d5nP6Ath1JDNx}|z0$(brG zR=Zx9M7WNdm1_yfADj38qiqwSm1<{i| zW{!x5lzZuA326Hn5wv zp2Au$Zu$6kNZ7nWavz^LFhbiCqWqw5jug!VwqToLce#;ef&dx0+Jktb&5m1oe}-uLj{;=9x|1H)vT!1k>x&{pV%XIC3%&XmV4k)aA3hwu%jM`8auj1_R;kY%%kAo$b3Hv80h#u^MCN-V{AyCZFsThC@OF`( zL{%ERxy%u|y#2o{bCjcT>8L)gBvW|H7616t($Iaj9b>ellnpwOb}Y?tfx_#-HEFNU zQDZz(-?1liWf10rQ8TDkwYH8VMHe~*#k$<#dI;SV6`m$Db@gcXxrdARDj_v&Oe@^#(AJYW4#-JtAYwVLw^1vag@PhG$^=Vi60DPi*snwcx^T|B!tDbGxqR(CzMe$vPEY~&OPKCJCsG04sM+RgL zO=W&`v-SzvDiJ*{N^X5Mi~fuy8EaNYga+1)d%-}H4tZLt2j5P5mHg5Wr+f>xnwUY!Kb5ALg(x{1WXzP zkU;JB&%pc#W3R+h{+qi^m0acRgYlrn zWp@gnl722JRrIEHZg>ZFqMLBND|-&E_Q4K>{tm*Yb^~l9?QDzg$!VvlUQgquiuaBy z{ombWs3+i2ofFRO9CA-kUYe@v3Qox@7IxHgzbO}je1=u3a6nk*@g>htWQ4Okw?&&4 
z+z@wVzO~^X?p`$BynI{s3ysi#AlPxk#hqkt6!sjbm9yQ3c1oxEEo}(?Av;r=ZnMGr z=1gF#;(Uk5cURV_uMU8o8(a(`i;r5;fKAa7RR_>z-p^4Wg&ePjgW1}%PZPwlV-S&I6Ppr~iq%=p|Eb0LCAO3eBk+my9G!ABD_-WIC^2e%3< zI(rn{EJ-_!Y42Xh_TPhSYwCM*OvsSSd`vgYP|)%+lQcS?(XLxaPLJo>l-tC51p)q= z0yZ&0O>pxv(Q%ovEw)5^1ZTgV=A9DWcvI$Qeace$G$!6M-~HWCgK0r-BXSqziRssX z1Dm5Uj*MG<`VV{Qs|lcMspX;wQAN8TeuR+XVft<=!PND(+$yWqI7_mpyi>!oYICK& zw@c3O!)P6YkkL2FSD8$WR|Bfw#%9DR9H-KwW0QzIw7Lb#k8a#`WO1RBcRq<83|JCs7X4qZc~5 zu8p{>7*9B1TLNLPER)?Ta#R z`^8263ibn3UvahMPy2fw|3mqfKN>IHmL1lX_%J&B9!Y=75`u7RH^c^`tzm?_w!Pxi z$O494BlrW$t)Nwd-)wt8ufhd z72vRd^GbalTEHO$<pnT|2I%(X3q|!vO}vXg0IudUf;ktkVy(g(PeMDK_vb#1 znc^i!_%5&9EU~DGYJ-RlvXYmOH>rXsNyAR0{;b-+F=c#y*Ea^A{>F*!lO;)hUSS4y zDi){^5FDIiGw8iA&Yh7DZ(a2YE}nTw;MT%~CwcY^tfa?Lw~nitW5p44y`1_X)q7|= z$hx#;4@}u1?o!MLR3{6@Um(_(yEbDl6BA}mpj~`toIEak8e-TGism6w#)v|e+&Kqv zz$S9}p+xYStyhF(4>JSAKUmg7=JPtE$L;a=yQuX~H-%n+IGZ%tfyH)IpcCr8r4I-A zT9`$!C-PWvLq`qPA7m9}wQp2moYxmPCg|e?#Bv(3Ri*`+~TEGjn6VPA0CyNG zs+I0snsf#Ax%bN9!$&}A{&ygPg9WQn^c!NtbK;}N8w+00W4yL#$xi|5zs3N%c(e{j zrQ9wi%(yc^ON~#cTZWYjvMpM+xyB4ETKe6Az`Q=SW9WODXH@Q##K)$l5(-f-vI#)Y zHc_ur?L{3nyEa9Tpl&E|yas2I$sFKYO_HsC>%L(b{n|}SHo!7MtvzRIGzuhs)^|m; znikzHi!kZnlhRuM4#nP~;pInIuQd(9dn=m97T$XmNV^#%-0<7QKKYEMKD;E9YeCyj zdS?}*W=zM}b{8K;3(8*}5Ts<|zRV$~TxQ3-r}%-Mc?t1Y-X}XXhKX#0haHprMTxE< zfr%VGcR)>?ch$JQ1`v&J%jyD3)GQE-qWCPy<`*^@iMICokTO$!`|yB5MJ!h1VKQtp zTLL1+41Lm*lo!IU2YR+V@a2~jxirWO?0>5eyEzcjZ#g2rdXyo;aS!virk(9#gfgAH zG&L<4b|}YdBn{PL82k*&QK3$xn+swy*sy3tP}n5DSi(#8CXz1dsE9L*m^6PmOI~@( ziqL7gnt5~eszRQ@yb=}r+O4c8JyUd?xDaObA_MqX_Fc?C9$Hu%^Vzu_GGr>})WEC5 zF*u3*tk47+#lL^eBQh+^RV@JQ_lAUZ1!>4x=zRq;lGKHZMf{a~s}SBfez~d=OQKv1 z2==Rd;otZ3pj!H7i zy!doUCY)_uSSm49)2?N1Fs>#$pWbIE{>YSjxT6%7vlz$%7pE@Bs9ICUq!urT$XPDV zZDzc25Zg$3W?_^0d20!6y1B7zuiqN#pB$Jv2ns5>m*tk6!?3dv{7O-Eu^Uw)*FY<2 za@qQ&4<-=GJust7`g9aoFk_m4h`R#5&h~~jw&DefjHk6NS<#D(EKw~5ClVwNNi*<; zOq!xclfetX{CL_BUcL|trPZGJ*IzXsnn$*38jTf+CfeUeqm0~?2qFb=ZPsrH?Kc7J zeFO2U*IM+m>iOw}DnpZ9q*c-vNN{VPU&mHvUf*#g7JHy=W#>$K2Lyvcx&v>Jy+nEP 
z_TWhG;6J?}m?g1){o2Yuu(D1&WQ2ntqS0A8)6}%qDqkaoA3)wW>hjWRMrF}>uJJ!* z)S7C8S z{0Ttx z4H3e~R%4#a;mt9$ytPYk`GS^tk2A{mj?JZ1F8j08*K4{>_uft0Mek&~J&C(VqUqRa z>%V*Bwm7I0o$rB)F&swKG(x)N;r+%LDolR(2D6ZMdz)G6I?iUT{Iz>TzE00QjpT`f zPj-x-*K%Nl9BG=~X5?o4b%GY@{Vs*A z-^n6|ObT2-JLGD+SeCykQc!lyPvM)QknvT3MX-8=d9Q;Ks}PAo=0P<^TdMgXhSTdK zR5o(EP?~FE(7jmxtXID=X@~cz9yM22DziJj8tjE&^+hOzNUy0#Bh11%UMgA&e`pAt zjl{cYmb)^@dVAvimU5&-Kny{J@odD)Lcm5KplmJ4mJ%Nu*V(LkA#5v42H{+g zPl7vpc3b`wm@t*FVqY$^n!T1V_|V45xqONNe$qxpLbQmEMVWkz(ZiY#*bL_l|Bz-qGFaeh-!T=^E#(SA85?(f$AhC&a6g9O6;_a=S^%56^6i*J=Lt`_sY8>>~&Ah+8ymj zK3<++Z6~*^62?I-a-iWlo}rlB=6-nwc?T?4Pl6fbxZb-?6|L0A5%hWW^(SE=>lxFT z7Pouv?9`gU>&?v##X1v-FmC6K`2;81@-}8_W)Svx!-^Ysxa`J*86G2zzMzFrH6YOm zE~1Q>qg6uVCEeHtU##5UGs0{G^R1M0yT;Iu7P<24B6t1Cmm^S1n|74A?DSee!iRJG z4i-A7e~+5A+NO_Sk~DBO6ys1#&8AWhifz5q83ZcF`pY6t-VY{foLo` zUnf%gWRtPj452@jd2m~wZQP0UPq>q}^0&B%Z@68cJTo6Lx!SAoo3g&nNfH)&`PpXw zC9`niyT;4l8+|$e&+Rp&u|O;=S1;yOu_`5!)@t?`CFaBF0UC7W_Rp^BwT4}-v*Dhc zjcjpsk(f>?fX`3SQNcz;1yj=FkhB>`Lmc`1EIqOB`sR*|4grU5(x+EqPq!7}mdm_M zM!bKwJNzl==sW}@E~wv4!*`U-8PDr#WjL~f>Qc_^vr{An0FWk z4z@6OTppa%Tt5*XUvZwyZ#l|~FWGFGyW$zPIa1ED#c5cvCG0v}SQM?2=)JSKo*4}G_W8_%AEPP^3`5232 zkH}!!s&;2Hel3`gEb)_b{8 zmC(NuQ!4PVk`8&`teT;+i#}|9MUw7O9_0PsxClq+T0wWuVN{|aR|H3#j=wa6S&ae}ok=w4^|2w%n|nH0~|>8P15N(SSuF%USv*s9zHs42=7FYjfTuceH; z>|C!FKbAnY#dce0K-Z#6;)r4q=U2eA*c}o1cC%Rsq)FJZgjJstXhoqKDy%A?q2LE9 z0O)->$!#non3+8I2kX47rQJk(<^o$a7Ky4r@)dk4q;9ZRq<|@ zq-hGO5`jS!?bEdfStXe@2h_E*y{p()^p{*kY<} z3)~xP31|+fP&GlJ0}ULirukqM{Cnr#p4=@sPUj&Bo7G+?R#h8xH=yM*oqklzZ~4(q zdmg~ZXvyf$Q|+HX`ij+GF?V=$18} z$pKhwBPN!)t#Nny($1vUY8}q`jHX-eLM$uWG|X^V_>gpEGV|&>MYmY*cN~4!0TD<(x&Z&t6Rz1q_|ERNQt^>`^o-+KW}C9 zvlF?-{a>oLmh?`Ug^8;|KJ1b=P|5Y?a&2bo+grr7#mMa;VWbp585JLtSXa=(yg(Ma z*FNs|C7`o+mC~(K0O;>ACH>RMoW2;;4xgO&VUQ|KB&xd$b!O!-TMSW0uIS;PsQNQv zJD-ra8q`~6>=(Qm21rJZaduc;rW=jtkm$Z)r#ohi>LGqeD4C0#v2@z<5cT8GHLf%i z(b{#`_G;6WpB&XTrfh#a^rY$P^N%Utnr!q7N$}o@As!WCM*eaI=DzMSwd;D^m8;tZ z&IHQ=ys;HkO5Q~_?*}qcNkXLMv~KpG+LL)UKPok*61c|fdmtuniIz0-3kUg8Yx 
z%JWCLrm5%1$Sv`;uDxtGw=JXvsE38C%Kj#$x&uS6*#gSS2zeX_C?51lDS6XUy%*}@ za1w9|itA_ehkaG{f7wt%IXB|D%L;eW8CTfg#^02JsJ-D|T=Z*(0he{18Dmk#-af=5Gd zq}aK+2y$tuOIvByzXSkGT;`nT)*At=+l=?sA_cc0_{Y`I`ilXF(U!u3^d~GstmCS~ zk8Upg82h98GjqD+IsUnh<=M;(;O7nRM`zjwg>5#fz2rkToPZ+dZ58!rG|yAQyNk71 ziNtYd@&4a6B<(n!g5S#wSd>X$@bz)80iazJo^o zObOHjh*Vac$F|S=_y#PpShF2+ zLAzAoAM2Ic#QHs0?Qc#)cF(S>o5A#m3tV*<$7eh6`d^jS{}Yp-Y!?BpxBoBKfAH&S zNGY^7n`Y-Qw6tvh{EP-Ec=`N~QmfxdkYae` z7m7uC6=e%BHdvEq^x4WByG@_t;V~<8u*6K8=;u8rMvnle*lCCCRDqGc1~iDl7E97n z5cI$+fpTu{+IXku?bAKcl1Svzb0`)bhiPE%K^nOnfid`p(bAQRdRAvw-`q*tF|c}j z5f~HpK4urS1%Go{{fnJV2{46letE#O03$TY4SR;&YXG9ZRJRf_5t=6VtM5K^qVwEl za-hNvCA)n|r8oDZI2Np|c0O|*q3BdrZ_tUJBA&+ZM~^Ytw78zMTKjY}f_eLW{<9I` z3oGR|00jvPD|u|tb2m7DrG*o_PujU}_}V#~-O2uFNtYf+Yhg9CF7Nd1PL{^^`j;=E zL(&Y)L6H*(JF2e->(YaP`obAnd`XG{9O1PjZW0R{ zuLnzD5mbUB=%-kgv}-><=Oi2BY*&bx!5=Xg(Jnb<RSYqjSVwILI5bg#;_W!jb zAa}5`rN*b69?4J6dsROhBw!)lOHbpqOI;}8WWK9~X$By{9KA4GQ2vg4!Sg_cw=3CB zEZ%ut%%`0+>r&xgdhCG{FKkKQGi-<=X9!kakZR|)#oK`IL0+WUJ>!^9Il0#*UvA3N z#jh&YxdRHS6#3vdgwh?Qz)7}4k2^s6ZD81_cE=;d9XPWy>iOQ4Uf_&}_$06hD8cpU zrpP8kS0$fwB6V|@bekN_1~8Ge@eId(wjHhfxL2dv^)fp>&h_0d>A)cxYP^W(1FojL zjg=>+MDMa5V;QGn8DFqsrF#kOR0l_o+RnkWEP!%3AYtJV>S? 
z2%*=14GlbnBW#ckZJz77^Wpl!DWLsJO&+pa`*1$}@IGBCyHI&s_}W^Qx3jK#?r5WC zzaGaz^7P!&*UTbEKjj_oRd)plic|$Vc2uu_nfosifyt}_0@Leh$8|##U0U=zB!F;qREbJC#$g|7dx**X>Dd&7{UXilJk#7ufNY0 zbYv3g7a+Lj7!-2HtUt@1quCK%*Pa0p+J%#TH+EXVj*tI8?7e3|Q|q=aEGU8qVnd{( z2uN3uUacr1y#)xU^xi`aO$CuAy%*^ainLHd6{R<60RjXB3@xFC7RonSXYcc^y>RX0 zx%d9u|HL_&%sJjspYgoo6@EJPwQ6Pel7=6^VA(WlP<+HWy<25=^q^H#=KYq0++Qm6 ze?0fPD&XrrCbZoAl|ugiGW*kCq-U`^s=}na&o;v3+h~=WesRD9q4v`Q-J$Kf-{pQ; z*8@{QE~f-s6Jq+)V}2>F|Mqz!r+m>t9-4ve8=}(Qr-x@_es#MQC9oX2Gh(`b83yv- zPdV`qfRBRwN{0zw-mkoi+v1%v4gBSEg8|ux*t`5?ztO*u(4DHrrn^4^Sp5~;4;6x4 zcBTY{Sbq7fMZl&=&#ZU9W)GjRIY2Y+xcf4e_)T1TcuOjkdZyrCaF%_UE!=78d@h5j zozz>ImmWNzd{J{1I2w_i@l58qbYgPIV8!wnSPQI$#N{tJVvRMFdEWP*xvDIid5)Yc zcXKlH?X67Agdpx~dzX}{MaQj%4e5=alLNSFw1;WS(EiGwPKomUag}ANTGTn7{^UgX z+%#ep>djg)>wlf&PIN|I@8#Y5W^6V+M8Kg0zIGQ~)o1tK-5e-h-N) zJ)UxZHMP;faeW)mj(?fSFZ+~4-5+C- zKpR=G7nvZO|I3%^BY$M3YbN`5XD!^mQ_>~yv(BfiUjw$r?xQ66#aE^+Xw%{V11A5iOB!%6Sy1EU!q5!H-Gu{m!_qkQ->?= zTw03$G4~Y)!iIs0UHK2pp8TCie<>*zhBIWY7Xn~k>3+FNvT0w)Zh=hm{o-ftnKBp; z!P4ey5+J``(O*9Q_+`jMi13y6S5JKWZDZg0tG3mW&)s|(n2whz?GcF%NX|o4Pz@(- z|A2fv)Klwqx>G1eWf}c>#2P5i9}_x zbVgDtTTa07a!(b@>RQTfk*;*2_4gaTd=d$x=h&WD$9#bb|CjhbUul!JlNAnTG(hnyH{}9|vHHrd)$o+B z0MA8mb!n|#Z=JF0J08k^`>4Mvr|;C6r=*vYm@|H@et*6Fe||O#8St)e8Vzs%S{{EE z-tS5J3+q8R^?td1}-@nzLf7kCK z%ajjHmYlCpNhXCz56t!5`Yz~8!ze}94OfsYS{W%c@p-&?`IdDDZys`LMx z1@I?jX*zdtmEk`M`F|nW|0v}Du`U18;{KzA{}=lAhY$0`c5i=F+`V*Hj%QMLloJsNu6D}Mk8e@(tF*2?PZr|i!RY0d6#=g8l`e?RfQ=!y#tMl{eD1tHP5uwg=o&>ChrReM@KdR0o9wVH*a;ghlK=&cc|S@m<8x-{t=<>OuMs1`T?)GcGhxpv;n+-@LLE3XcON`8^*y8Xb@N$J zh5eg?!v&5}kDZT6qR#UD0s`faxTeHz@aacfhRG){3f;|k%K zBC@T=&)UXDSTQ(qU_r!SwuL6AmyrDQYqn<#l4x zP=;_z=EDFbQk%(DiUo&ZkTjkWK46*Zc{3GSCpuytJOkrk@2EDdjJAIh(6%~R$Hp!` zHiFijSsKXc4E`2WK2b#P7%u72nmh)f`qik`tbn z{aJ9C#+2dOrAx1@axVS5cmA`}_LV)e1B?3g2PguF2nK^8c&?Bovq848?YgPnKV7LJ=TLl(gvmNM9v2;cCY3s3Rws2X zz`)}%U)65*Yf6YFsnz$-ucqxs0i?2@I<9V(K9%N<%Yteu;VkNI{TbpEFw*tMAg7zx z;?)JuQP=pnfWRdNjUO{$7b*D6>(`QtxqtqEfBT^<+C&19q}*+;C+f`GG{ok#v|h=r 
z`KiKtLo$6mT!@u~7m1!sDbdMuj_CEllA(DOht*2-E&gIN|D`kBW=oB+=bN|GE!o+2 z%eUMsED3|fwJxeud{vt%h}lkRZuwbt-K6iik3yU8UzBa+u^aEOTkgep72;XKlL_0; zXWa2!R|3+6;8A{Wwi%^)t{l1-k9$uT$-zy%71^4e`!j@!k@PF2ZY}L!05w%eY+*Mn65SvFVE42$v!<}!lQp{(U_+r5;6ZzKR!U`;P)-*OgHHe=mm~tueX(a@E>~Zr90UudQJsz77 z!_M`5xnwZ%$U`YQqgAQR-@rS1CP!83`jRfFV#d%=YB3K`*9e+8ahFguX|nLj`D(|0 zYOk;j7g@dFLyw!n;!k=kh80vTbkK4`N88EjW!Z!BiZLVeX6?QFA2$bRBTNq9%wk@8 z@a2&kiG|2)ENY*8)$8YkOQvIDnUQv}zBI4Q4e7}d{`#m=O?sNgzk8;&?8tclo!E-~ zc;$pu_$rez$jo^+Zlr5)&*iIRJg|4gGC0QYeTR<<<)-C+&=e5BWAs=#Yfq;PRe1jB z^$vGhVvqq>eQwJtRL_eThm6iS963h@4QhR5q$b*TjolDc{Gf&bPrr&bsp?`Z^t zx?;srB|f8ad7P75q#%}hvBIm)3WdK5)7@I;NrnRo! z7VX{L(Yd)lcC~o8E?T0eo0$S|!|-I*)&7j8{I)wr67VGhrbPjPvhtj8Js}m;H9pEf zO}jA(sv|XA{SSF=)T*W&eAQ^JU^@Mn;(m9zUMZ z_Gg&NS}Oa_6(-)ZuPtKOC7iyX?&IuE;#nmt>qGG%Esi9+E8g1UEs$|_NNMTw7JKfO z%>JAk-HPA)HvXE*HB`{ZueYNI4vwwtusC;4z^hHU)pB91!eP51H8MoEk}d!%E4!F2-ik=p6mJd!Xa6|55^|@tmG*;2`c~MkoTz5zG%gi&r(5`pnD_3F!|_L zYZBZjOZ(`B^9}29I>y<_cQ%Q`+)W=_JgIRk`%;_Q(>0i4sXCND2Dh^mg@GJtMU38$ zyRNZ003i1n34~rF6{gp5l)Gm4%f$8>hGjjcY?&1XZGMk!uWyJZu2d)aidKd z8ban6!z%K7+sAz&i~`|O zS1Er|^oo5Feft$e)HaafD(_xyYnx;{5yM+bm?AP{9=c9%eZiHzxlHSU_+OLlU*&Pf z|96H3z_d@9QJgru-glkjCCwd;JXJPxZC&h`lK$W1(P%EVMW5C z*)$HMKD*k5F|BuSFng3sa7!P#my}CvC}3>*R^mY*0mJV%!`yn7>_P|Si}+N+^O&rQ zsM)Aj^#DcY7O}BAlXUD(E-*=Pk@B`0awvW1Faxo2KC@7N9w9E2|2%AqG1C^)6N*O^ z30*~jM*V#-RaMFiH|NDv2L_Cna1*6aw7!__x(L;>n0rN9NOGni%DwB~ z#P07c45Ul5E;gO&iR~z0WTxlVn1MZw-Vx=#D03euK7YFM_f%B%LV>JsJO9f8h2_Mw zPDcl()1bkw0eK*0fH>5XkcjR>*5OAfP|t>Jx8F}fKa_)|t>S70JEbGbYV=pSk~pap z>g(I$l(ApcLx&ZkBk5}>dVAgk&kx9Dcc)^;@vbt)2TX3<-=n*a7H*5@msWC`$^F&Z z|0cx9Hrf(x7!>YEF`hnIIQXt!6*Y=n>x+K|BB6s>`R5eM8X?w4e|)<^8I=G;x)BoI zFxpiQe|c(c3oZ`D_|xex458+zDrZ=&ghx!StS$9?0WT0~h3oFlY&VueczdFBTS8)= z_sAHiw=610Esrx`@7Zh0+T_aXJn^?eBedY4`<#ZewHn^4LrpX$}5A0E9K7_?MG>-&j zc_6mxDLCQM=);|i;I#;B`N##PsjtS<8yXSYHRNf?3EgNejZ&sKWmef|gjWq9Dx|Z*xhl6vk z=r~TebXO1=cp$`*ox{WT&^1Z5eWa2w^hGt5RBl_gj!x4?C&kI1*^Ga;d?%JU8R(yh 
z(*g$lmn&%x&H)A$*Sae%C<$xK^93W?3)s-En<~@|%ad$eec0un-PWjsLQmTshOir4 zpRJ5i>qFfu3jI>Be(4&dYAw30XNV4EqUW|#ZEM;2p_}2tu1M72lJ~*mB9B?-_3=T@ z^^xw%HlHGJ9i@Hw(Q6VV?>j5!^IjWU711%<*0oCb^<@;id_^4$<2KMCLPQrFb#``M ziGC(1DDhlCK=5&LP=|7om-xV#46iwDPL}aZi^GbB7D}<=b+bFI(;I05CfgeIAy{n# zy)%-bnT>n*co7R3tf|*+m|Rok^T6=m?%I)DIBaMq=AP2-Hq{Cy!az>f4(73Z5i#-P zI&xnD%duPVPgo~vT&!}(uQ=EQo{>bB3j{88M64i1aYOwSU4w<)b@lgT42mo{+dU5A zQMR$Vsh<7#RVI|jPu`0dml8R{Q`8S_YE*zs_!Hx9(~MFFP>fe#A(Yf>W7S?tqw!Z= zCQO59+P`1nu%dFDYYT_tW2xx*sOr{Btx-)Ga>kSEa{>ig{HU#$T&$E-+sX|uKJm2& z23`hQ&bcd&h#$Ohpb!V@Vk-B3Pt%E5bSXgN`D{s4yols3W#13)qDUo^PShb@gO)6#4FR54W{@~QE`%;YGF zA>J)RzvpW>3|t8d)G@f-0c8G0n~at1YTk!I<92Io`Z?pJjscPzI0=11wTm*XtTRY< zg21ksmrm(wql^gk*uF`SK^T=yI#2oNx%YzCBXuu2g%c$tKXG2Vc2)w$#|5k68v#|o zPOU0mjefPAL9#Iyjx@MZai|<6{37w)Yr0~`!6JPy%Cxg5c^!G=aX+OTS5~SgPuQ5V zP>a}hmk&uO3V=`On;O%tn3D7cq~~_caXm-m-ODrR9bEpdsZtEqQz!^{{7f^KRB$A^ z$9k*{)V<6s?J(MRes&tEKP3Hm2HPvco$X@^wfoG6sGg|aRm(XK(RAEdKjqu@bi-R* zfW)3qrnvXp?c*t~i_Nmf|AbIknpu8q2&pWP`Ktt+-M zAn6I)?@XN=3plz|3fxpM{32og0&V}EAia=q&vw5Yb4SpC^qnov?v~QkT?eV7h*~58 zg5x=-*#c2h0l7TX5Vo1f?{KLG@jMX~xq1E^3bFYsRhX>0h{qZN~_33(j=p3#}`LdF~g zHCHBTR4WMnihE3QEo6Qo=c%)*29OMwL~;aeQkt071CZS=8W%ZUTl+HmZYUJe;juow z@75x{z;0)tjj2@6!ij}FbV(B!5yXyj|+N(TP!dY9xED3Iw*9QxA!}+@D;!dZ) zt`F-O-Ruq}$I-hs8|}Jx1M{gH^Lw8@Tcc~E8|g~P5U;QS-y&3-QrR21c`bK~1UD5y zW6w)V ziHCvM8!(-w4MWlkrPhv`igb6sR4=R*dA&K3*l-Nnd&zzC=4Mm#F3Be<*GadR4UqQ5 zRE{3=A$b(e>ix~;vPeId-*h&Q zphAl6vF_m#Xh}_e4AUGn8Np7JM9RtLa5g*U5~lzNPliI(iar1X52ZX&a4HX`mweP? 
zGpwK^V$Ee;pjAp8T<5%^KUxb-w3;k=Xu_j&TNi>G?D<4@xZXA3fE%^W0#Il8fjm$* zNKb)y{}V*>K|S>*h;jlKw^Q!y~X3fi@t1g!Qzd#`ESUl=!apk)VZPqqH{C0b><|)y|JY1 zN-;raDXvnF=q6n`-&TW0k8KP~;t2+ZF^t-D|@nHETm1)MKB* z!^0h_!-#tI5b0<&ox#^;%CrkIdUwze6TI$KW_sYnkphOXjGSYKcY#^rX93sxVb3yX z9SJKAzrEzar_R>@ky1=XW|OEll}|4hyayV#bq0Wia%_+P?WK*ET-gOf+4ki({>D~L zB;_wJlT8?4<3b<&!rqkqNw%0vs9Yi4s|7JK=}OGi_&kpCrj!91FE*v|JyR85N8hMQ z>C{&eRtiT%O!%R~;DH(kNg)m7tRvJvT4mQ?j`^<1SF8oe?FJ__+DeeQ(qdeD(6Ze5 zD~dn$_IBNl_xfJWKt-!kXWK{A5k9!fbxRB-y^telz4Q&)7;)<9gIaM6#cW*xu)@8cZmrn`@9_Pc@Znf=BAUcz>o(By|xP&su@C9iW zM*wj{0_N%Eau!rg^}a|C^aQ`Oq|cj8(c6$LIsF}t{t%$=27)Z3OeGXZY7b}p>E~L4 z_%B6+k-94PYkMc{bLRSA6uHf>ICPNPz`40zZsl0(&J~z+OC@NWvVOrm1MirmWv1ac zH~hRC=2>4Rppx3tv3@;+Yw)gyEJ!~E-_ct(xlzD^(*v49G0V#gtsW*@E2xTyNNbFp zX37HF)aYn0zAh`8davi1R>u^Rlw=X8C|b;8lxoQc?kv-7a9Buz9qn|^HI&FQc@ZX# z{4jEg7?>x;A+;)EXkr@uZ8!UQYx{(jo_QjrIyj(3a~1CHY>m?7qZ92bDKKzZiTP%! z*;#E$lCm(4-}jub8B6oklYtZ*V}MEzg;z%Skoz+?+-C8}S(7D6er934eC=)ZXVIlw zyACodK)@VwX?xUZU_3P(9oC@7LWSPpNUgVZa_j=te7$~AX0*oJlk*Q1F&KaFJtr+q zmM|5}H|(N_y2k4%-j#WYwSuVE!cRnoa1ThIt?Bo`Nhl-XHTZ&Dxyy z+a)5AYyc`c*6T>Hj`6T_@4do3AhIddD9s=bKTs~sMJq`Uhq~FV<3nF+Ta&(M?n}Y6 z7hd6ulta6M2OiF%R~q<>rB>7A2td00xNccKXKiLSb^;-paO{=`yhlxgx8|20K7P(^ zSW%p1;aR(nDc){>cd!@4XQyCHhecHf&EUJND^ZGt#6edIs@2b0Aud1l#}Ewm!gwnE zzB*?qhS06=I`*Q@rYyCHF6k!O`{!zzM-hsXc{;s!REBMU-p{lP4$bc(yka@cTkG0tjn=+;D7KR1$gmv+Cj1zJ z8KWkp`U~zqM3^)zDKs_JcucOm(LODGv}d`C^jNV7nDpAc)w&W0fAjne&ZTpdt@O;B zs^r$$-!1vl_RqU2HK`p^iatCMj&rAoJ?_z_UBRDp`Ix z`0kSILWVSjN95snUrph%QZmBj*(~Bh8LvG9ftsUJ)}PB{EOXWBYT$?lqF!zhtB*&X zxxIelpA|pwxGqHd_$5~a@q9WP{b@JSG{xR1d29P4$3Q;=tpQ5!o{o*z4||pP{2YWsVqk%CkkA>q+pw3cOdlUUKk8G@tF3 zHs~kmPK7`0NvObi{Ch}h81#hn{#ru+v(oYe9P20z+ujL)Qnvcq2<)6|QGC>_{Up~64gglA7L%#D zM%;S(*?PdKlN!Sv6Qoq`n}r4l_~wd3kz8H=9b;F;7fRa_W+QGNyHS+@cR^FaC8!qm z5`8r8k*ajR;qlp;(*JyLoIv3B+XKv&fS9bW*3vS8DS-Q}<2 z0r!Y=r>V8IcSpOJbVn&(X=yj!p5RV&Stc`Mnn<(T^^9(*^TL`r-pO{q{$+vZh>3l= 
z_Ci^VXU2h|I&EF+1%}z>d?;qj++05M70^2^efiA=-kJcQ=z&3j71je9`RKLT{JYC?va+3E9<@!kC9%*#0qwBms{Viekmau`lGIwqXlM!OB}gkje*syZeTEXN)2Lq$gr0pNgcIjxDVc~jbx-_aiKuLB z0UF;M;sKTc`+mg8i7(d4;8y{sg!8pvCJaIy!_%VT-t%19p+MU&#_h-r#BOj;c#1WbWiMV6zGV~17uz6t5QS&7=b5V3 zBlREn!xC;txxaUBdsCFu(e|;I#=Fo`csQ{>(zbTza^Ti<6Oo<-t9^+`{nmmLwwbr6 z-mVCjPA4uoIw(z)8_HYB#y9NW>{Uv>3Fgq}qnU6+ z6pUBhM}h|*il#Iy*{Ix~KD=s6o5aesmKw9a_WSK!TjG6<54MV%CmRk9T%YA4C#p3a zO1FXuPK&p)3lIh%+QhIF(NBykV|G(*mv&$8Ru*ZzLjkCXeigPpJjv`rt(5y0!S^WW zh3$jQftH6yPdjn+is^~JogJlDxma3h1}juGWStTHk4dGc!NsxD`O9@$2~Tal2daqkEQNj zlc$8|d@-guwH$74ZbZG*HmUpr(4318pv5qXb*)ea<4veb504=NJfpuP;nhC2f=$Sj z*GtlqT6};KM#>i@!=AL;avyOznxL4fObHkI^X&~$!nV;`Zq?|?xV5}WPEQLF5-QkA z93x??y}mhskxEk(K$w>S6kq+x+KPy61b<<{aPfyy-+M#{I<=Mb=b0$VZtL-%Z zft;{4+C;K`kI8d8Ty8Gm6!8}gU?f>wPy4;l-v^5ni0h9YXPRFdJCOsN^1VYn;<)OA z?EvTkqdn`7Cq3K8jd~t3M00IXuCAIXd$|xE{1O2(bEYY?RJ~MRk@| z9!j=oBDCGhU@Hg;jS5UXeduV?kzaD_?tdFFJSf%o$&XuZNV*nVP3EA;w8l_CTk z*Z~}2R}ng;Jm8W+a8{wR?<29*7qW=Dr7>ix?9%hC!cjh|hfuz|F>9ogfNd&f6l{S{ z#&!(9NEG)%I?+ZTt;TN2?AJaPFSnZObQ+VcAt+HEdrnB~nUoNH7iIiz^43?iew4fH zbCz%ACPD`KUQ<-wWJXfKC%RLAea>+KY3ibB7@5zWFJiqW)Lg!Frqg@>yK`k5r1t`2ri1!? 
zN;2|usPxNCR||`9Lk|?^MS>6JN1Lp9hdZi7WByI@P_>qjQFG{r51TDBW(KwH(Zcg; z#ze{YJl`s9yfx&&`}ooG)j#q|kbcNiw|gxQT<6}HQvyJ=3-)uj;(_KL+VL$41G|UA z157PrqZR2Vs60o^bs*59k6ssfL>tO<)bQKXJJnhzdaxmTz?nwd2D`I{sMEs=@-gAi zy)pQ%jvM~nPYSecSKRcHy1&;}gNi@VeW4H>(jZ2g%AM@q2#8T{yN?L-#fgf}9=;KJ z68t$4nE$v%mWB8efqO_cwCDQCP@fh>0^nw52tS?npJ{Ddrtl~L6zc#;gx0OF#;4Q@ z69C~11<}~;-nL|bj!Wcr1=HvdJ|u13XtneH8M&YM<9Ln9RtyJ_ zE&>#CoEx13BKL~qzS5H8STAO6dTW;}!~^_Fkx4=mV83KKC%-0=5~%uY`%ocAU98(?g^2>M*onUmh-*INHY2_QPG zB1{UdpSb?u_GjHIKt;mYzNEs3m?Ew|lHs=?Kxd0(VIf1I zj>5Mz3iBylr7z1iZX~37IVfV+ywE}XZ5t~}yHY-*m#X$XlpXEQ>}DHhdN82XKb>di z4yEYjx?Y$0JorW$f4VALYRm>cL1@g2%fsa{7tjQCI2Z{6C@3@?0h)Ja&aWs%^Gdkv zJx=pGDDs-W&#k;~u5#bMKFW@8oF)}9^a72@a`Q2TV<+92?>9(Otj~M822x?|QaEVr zq6s~8h(AP(I$x;hu5hH`tUl0^hWFwM>d`}{q9d1Su5i!zQ$%YCfijg$jRR}KY|2Z& zlInbzap|v)$;Q*S<<>)uyL^aloO6QoJZ~&h`+dYN;&|`wtqQSgC#u9KF<&4j9Vu#$ z^}#&vW#*x(-;bCVV9e*qs_`+6W`hhz*aEx-`w=pf`?fswYVCI4jctq8dLXywQY$B4 z7Km!pWa+Xqt3+r~I+^p`~;XpSUCFC|}kCl8XU+~~Sy0=0)5%QSi zl1sv!o}%`Zn&l*hnK%*YgYvF3k|MckaylaBHicOC=(uNQbsuTN~1C+i*AbD{a!St;LjM4hfb!9kOR>|9e# zojb0?hponD2dYhtOgbidWjL56*D_1IMZN0fXY^feh60c)ljnj=M1oB!77COr>!|if z)r^G0Ne!wf(fF93Nu_NB#z=PRQ4y>iRzdt5YO99Y%!<4ge=V0Gf9RmpBbEn@&;;1U z*ZZP?3bDn$P684!5@*)l*%`YtqRs8-Sv!*@{l)?TWPjAN(Wln*D47 zssI|&9`MZ-L38u*VX)>!_^yJ4oU&GGq8k@<^Fm1HiyQlwNL__Q^R)i<*!Rwht!c$b#+$4mnN7fw#odVF4`m{^yC zqOLJRUn|_It;=I}=kCNE)ZR>99vsi6l2>}Q^^sf_b@D@{$F!44OeH6m9%26Qv{CmU)ohIY%(qyD zX2+&@N8*=pYK_#*#Lwh-ah~SHVbp|@nI4Vf}@w5Qp#$M$GZi3`YWu* zO!CcTyHEx9Dckf?@}&--o680e(=Wr$#PA|`{>#4c){~zA9({SEtRjy7-ZQOBWAz=+ABN-!Y2#3gi0D;S(O~hujF=7*p$(Z zwlw7c)jD$v7sTQl4%4Ncrcqt%Q`~7di-#tJ$LC?U(DA9A*jS~g_wUK9nGt7g(F57N z2@{@dDzUM#wvq~Zs#<8unv#=OLNvh)>t4oMNUR$qd-{(yTDW|`M=XPw#mqYT&~uS{Y^nE0S3iQ5Gf z4?eEG5{J<+uZcQc)U{BcQm!+u;iw=Zx|h5173lJRz)fFe_O6yat`+3qZkSD&_ueyr zIJh*<>uEcb8JIie?0p{DV^A_hNGz3bu|+CgtKGL)B`t}co)5gz?>TFSeM312tun)N z(T-FkcAG(?are*lToS9U?&%iT!11Pup`Xh-#fx3KcCyX!YyWsrllU;}CK*i4u;k_OnA|hDn5227Z|hy(6jh%q6(U>PVr1hkL48mWGmX>Smearmy+# 
z4e1!i!0g6>T2IGoib8V7yI0gk>-l%R3Zv9d!LY<%GS_k3#h2AqcL3H%h3P*$;slWf z`-CMQ16ChgRZOhjpi`Z0XJGAF{?aQPiV5xLg4M$gc=aT_zxgeB&sU;n%YdyRk@Kaw zmZUr1(~6uMdceV~lY7g80Cr($V*?!2{3^NQqQUToO7t~bqVL97Pwih2NPZbqWL<$C z0s|8=P~&8gdY5coK$3%Fj|S)+LKdcOj8W$=hjb1XI+lAM;*szJc1mV*UEBNuI)iV6 zJY3CX#cW($9pPCzNCOfJ*r^|-+CK~pD>_w)!awnhin+g0Ce@NaT9dm9SBSO3EKybZ zQpZW)np$3|=Rp_~Ui~U{u9a9Qvp~67R_A13xf-9xW0B^y4Gp?&ca;kCHv0wU8{eb3 zQ9@#i(n&{hK4l%563}TaCMf;zv+y9&3+uVtS$>w7Al3Q3TB!}k@6h7BQ#;=e2m!d4 z!h&GQHqX?q14w)vlg&%PtS!$ZXqjW@%f_-;iXJ!pv1Clk|0Yuex-MbYpeLDGdgqqK z)7n=V05k7=5^}t~NhkHV`!xA5z-S{B%LLtXUU*IM6n}lS;lo$1Pv>1V)h;ZLX}H{e z_ChHZ0LINx&&zJ~TwL{>Y776!$J@zz^h)ZVvj9E@mNf=gb^7u=A+cZlkk{=TyfBvU z)my(^Wa$DIA_k-2L*?%uJsvZqbGdK_{HAp0xMG|zpW7=c#p#bIG0%#-jH*8CY_p?K zL6A()1G14-EAbg12%Bp$x-tlKwOtJ(%+&F<4C*pZ&6WToO$@ch-5yMz1-R zTeGc(S2q}Hb~R6h_gUvfOR@;{H(#*FnpSgRd+d9L^9u)Y?&m-G4RB^hPSX#pe|}6*zCN9uTCtIZ#z+ zY_Wf2dgjm~1vWvHr($-=?|1dnuCPpjI37x!kW-W;QM5IP=5v&Kg@r7}E1Wu*nwHjF z0x~mPxDM32Ox9c;3qWL)T7>33RalLg-i_lQxVp6fwBDk#hP@8*BRi7v{Zcx0?t+Vf zphwp~m*Cg9z5%83*)UaU@xkapw@i_Sn+t=WbZ#w3;rd8Dz!ZgZzM2Tx7nf zjHnUYumTXd<9ee4GJ%^csw=nX|5}mXwiW#CJR7>>^WnG3PJlF$+)fh`?fUSVYZ#F* z&_}0_dY$WTy~i;I;?||Dk=t5;SyLG|kKD)5=b!*$8Q!CQY^1f~u7vJU8pRO5AZ!`H zR!Psr$!pT;m6_N5w&>mhbINWi6S4`{$CM$+?;{M6KeL=(22pcQ)%9O2- z+{494K+j&16L<2rQ5ne>8>+M96wT6(>9wi{21N!o`y({ntd3SUdL35I_j(MftpzlS zpuLY;Ls6Ol0og|Au}kwGO6RUplJ21u!k|(9#n^T_U!&A%LPyL!-AvX!INI4`{V`4T zCWTd`B~4tSOorEJ#q*egA@BOLNhl2ks>)T2_T=7$h40OlANM87OxzN)KTqfi8x1s* zkaKWOQ`%BBtU?soYV|JsF;3{0A>d!FR9O)i1jENc6ESeIYj$)--Tw(V@7ha$Qz3NR z;c2Gx4H%dJlZcCWOIj0zgR*d znJ@^vOhZ7rY=@cHm=VBFy|!d(U0d3(kzL+tFGd$WIAHiUZS z%6ZXNJ+;`oPt}?#1{&Yyj-lS%d$NV4PiJ*YWe3vyvo0&ImPIYwMbhi9cyf-4{iNv0 z4miIf;BCLT2G1_Aay&c#L;H&3iKzIm()w?KMtKAKP&=`?c3K84VvNI#Ay(vm0od*-rd(m(+m)Lu9?-h z1Y9)75dnVn>eUfv&hItsczUz2Sg*bET7YPNEJa$-o{ymdf|t4#Zv7hP%H@d(D$b;|HAc8K{lBT0A$u+D|AO9sp$X zCl>5%^n_eljCcKELU8pAWV9TbSYm500F_dtDIp$U6 z_;EEm+#^$}WB{Py534-?aE+A?=u}awDvxN0_e-Qrf2oS0wFe8O9zu4oHf$FND`NJG zQE6N~;tWuYio 
zKFZkeq{3nypkd1cq(QJ_c+H&5;qBA5aXiP(%r1j^Jq2H6(P@oqvu>RY(G39U%wY)4 zd@lhJdftiPYLTx{dn7Hky?(C$^&naNep~Y~d2hBcoYhxx;rceP^}_+<*ejoNZsseW z$7jd&uUs0(C`B?tAKg6nhvJLW9)f$K0WN{t(oeI7h5qNzMT`Cq{r161SA0v) zWGOE_Dd|YAH1eP(j#+;l7*0UgY@`}!K0UausM{cAT9l1ZF4P-oBStC9ZTDX%KRN2Ew14 z%NCD006~=7^sDO#+bnCZ94EKk*)K}%QJR)m)6V@^{qH@IHf^~{sjdDN>Jd2be;`-ODe*vuR$-zyyPwFO#HUOwMN{U_PcAIv`|~E&>wDGCDQ{3nJGp zRGV-ItpH5?biY9*QK{cd`i~K&cNP2x$$1TfvX<<`8eWdJa!wB0wrl3?%x za2l-{Al+g_L$h~f;zX2~O6pH5uuVLdbxgqPf`{cbAu;yPCjF{H-k6`5^zb(ZEGx?@ z+gNti-{OP;-) zj@g8c*Gi6-I_At6BlVwgjRJ8go_N6F17~v?XHc2B)Vm<~Xw|act5S4_&;F`q@mTqo z7NO8>FY7A86Lau1-Q1!hedE)ovb_v$PaLZKWdOr z`&!q&UH#K3#npDfGoP;1chNj$E&{VyY#B}_Tle6`fdWybxc~NPggaHj5zwk~+i8)Z zU8$ttC+Y?wbySNR%+S4aHOc&7SnWi#_$$o!Pf3$i_#Aml(JfNnVLoYc%3p{H1+u;S zo>1Y*5wjz~n~2me0HYG2MLh}#Rw?{I)C0j>yF%{Ej8$phDd>vfC_E`#UoAz=sKYeI0#xEU) z9`Q^0+3Hu@8?Hv4xzh}68JK;;k#g^>xO*a;$NM;lZ2!y})9_yxq@c3TWuRt@r9?Ew z=J_o)SDgRQdskb^(`c-Awm&0PdP~7q?n@Ij)TZ`9mDb-MsQcL5mX88j zPZ=D3Kw)c{1n`F9w^L)Y^b2)9X_wlE`a2VMJ}G2}0_4Jek+!L-F(HfVNKnlisP*Wr zIAMRF%g_pjVOS-6>GOvBGa!2z+fo*`!}<;E1Jrsk0LZbAs85!lM+G5Dyki~fm|p)B zF`y?HCH35c=XkWAvS!dz2$>WR_2@hWR_J`KbHd_ICc($I_&V?bHNIFwV-nK|HF70i z7IW<@>(62rKXX4ATaac#ecfEErqa?@i%~hblA^a&F@?fRfBQZ_*Lv=?v&!nW$6rZp zIaS8#2qPqnZy)n`gT1A=0h=AEYsVanwd)H=(gPOg5X9NMu|R>{8xY*I9=mTtX)0pN zhi;I0p}U?F0z?>+G0XyYaBthWid#&dDDCoI#pumV8Sj#pB^& z0BSH-El$DOkLeI)f8v@HkVNLjT+M&@4A}c~yiKWedS>BxFK&2|X(hF7>o#WU*z_OH>APX z#ZNavgc>qhd~aA;*|*T@n|8L}m07-7^5IZI?}ffl{RgCBlv6-v{<)Ti!!Qlhh~n?g zT{uPTOBQ!jBl3}!_GE(1L>%;(PWOn_GeGW*#--$lo7OqKpDoib_zLi+l{OZQf4(16 zq|0q7Y#jk?b!i7SDTzS@Gz7}ZB!`OyRAD?F&KBw}MjxbMAVC#c;AnX$yn=b}6jw#P z@zQX6`Vh=$U<(@&6V8{@Kadqc{WMlJBO9N8gSW@*7BG?_+EJ(CvTf5U6Nh>zx1y%Gg<=h^JFUxvwCrgMx8rgNS@h-n+^B z`<}aQ7{y($4HFRJ8xYh{+qme+1@rVVPu|7*B&Na^w+iV*?9T{qFOGb_G-9B${TPTX zyNf!P2=-X69~zjXOSg4_!MqpR{7Vd-g2slBduWcDCnq*kuHK(|JCu`DdeV zpA>CipJQhdfRyI^_men^;T74$SY_!*D4H&odn> z@`7chw}!|Wv_L+<2#?Oqz-(h+n`!yIXBGM!{feNKg@|xBopf$F-K8+cq=(epL2&Z% 
zB1>aOO^|W(r`28Z0GLt3lcMCh^A%()7j=$@9WAw6{pB5*fWZJBAC-N7VSN`1WYs@k zx%0;$;}bzq!nZ0;TZLbK_saW_z)N`YfONmoA($7o&-5aH_XcJ|BVn(JG0~Fde>HbB z&6E7CTusEfc`xWU+jC_5G9E6(9FOh!?P^712vPJTF%KU<`g-~f7oEylFsa2ox)&*H zu6Nn*K8cKrl{6e2zsnyOC|6tjC?j1ZqvgEP>a~m1%DCs+J#*SGUEGe2KA$ct1C9z8 zltSW)rBh`D)il5p2iVEUc;iFxUnKX}AMd#lT~0~d=REZbQT}!Df9sZaSAY|5?6wMM z|CxdOI-vaS)&GyZ_Y7!qUDk%}hy^q#hzPip1f_SRqoOpaDj4dL7BCbkp$9}!!~)Vu zs7mi8v`~W5YiNN0p-3+wgc_Q__qf*HXYaGsv)6mh`S+cFSjdxSX6`B1T=$G0L=ae* z5vFkT!2j%X{^NOxPY-d)vf$&z|Dr4ZSAqR2@Y^|gRK?wXCGEh0fA>HCqBzfK0`vQ8 z1~68CF*N_joBd$ze|`YE@&J8@#@@LDzxemRoJ#&p0e&^~z`~RpUVT3j82(!lPz+-M zET%c((1Cvy?Oznwzy6x`4Zx&N!aC4D08{@x30SWH79+WJ|G!i7BUSD4!S~+fA2I%`kqCYJ#~1tWTGlx=AS&Ey#+dvsMXfVJT(Zuf|EZe( z{x*tVzQ>pga=iabG3Ede^QGwinl^tCRsJc8CH{Vh|73}k-j6BH9l_2_!;Md)cO}mJ z<7521PufDEA=HP3KbZ&hv!o+Y=X*p4(6pMA4`^LKs}AQy6m{Qt%ZfKuUI{P}2gI@QlC9Yv z%W=snfBy7`Fcq))m2_t%D0voQ1S-|`KmHdkC;K~gLJoOc@$Y;IU`e)zjBO(hB9xC6 zHFkNM)!iIaUeNm6n4afHysHiMj`837FF1%V&jF~whwbd2tnB-zK>dfsUd~h`rk_;V zUuNZFLWG1}6*z-Poqt{1^T!;5MEVn(pm<(%B3;Fg(fn_+)4m0~S5)U0v-^L+m%ig2 z+qxRuuG8Ju-4k!u<0qo1(0Vz~a6a@QF1g5pljF~iczvZ4-{TRH_8#(U7jxo!Qu&4P zAlrZVrC*Ew%+17#B&w2fm7ZJ34Z`;aqaqJJNnW&9M;2z@U!f2F^Hu7&zN)*MF!e}Z zIQDD1i9P;dDV9m^*bjFDe*BfD5JkquvS8`?uwHI(LfyCidkq+{Z1t6Oj2yX}HI3vs zX!EDZU(S3cIOc&Me3|{p`&Z^q8m0e4oMm{VeG;+*heTeF5~lhF|z_1I-QQUr&#`ECa+#%&NJ~PZZ*J_e#Fw zsujj%{Rzf{+;_K7dEuY;ExgzJ9Jy4fNpp><>xr-8;Ig|aM?p!%!Jzqf=Up1+#3M@x zJfQgd;_iQRdp_hSvUMDYh?hYB`jlKNQ*3?i@C(f=#vc{S*c!08-(Gt^HJbG;Gujf~ z7mbLKp!C04WS2n?mm@?gD23|7=8~e1*Q|T;3^&-3?z*yvdd0<43o~16Kd8vs=ZYYK zwnTr5yF@7eKW^~8zMoiXncs^*>_;|I9}R{*OluBf3>7qUC)5>+@=a9oAzH!Z27|g; z<;l3Fu~jP(8VgrYWWUBIEn}Ck#`mD}h=O|@rV;KpZHRG>7%0UFjsurp*ufI$&`NACm4^^%s)AmMAhSjzCJYD-fs9-?R~92^{1MnTW?y_l9sK$fWwLc{_e(?3%;?;{j=}=XFFFuFq{2jsC zow*I;>-Cql3v#>L@;U>1hAc@E?ZS!wc!HSjK@DkjL1}VEBu^j=tl7&FDe_=m?4vz8 zg+DBFzmP9kH303MekV*!d=C2-mmh8fmL)ILDnxDODrt8oTVg<}O*^VDAmh7=<0k_5 zRV;KFqF&mEUEu5D3p=PA*_AvLf^dwK-Fq;u4{}qn4z)F!$`a{h1iyj_B`%t8l;23U 
z_ZXG>VDu7lT_Cb^((1HELeI&k^TxD4KW==nxdoAS*5r2)OZ2A+I{YR|wyTa0`@cYl% zFxG@aynQ*Mje9FBR$ZA9tQ1&Gp$JAxVV6NT<-!q9*TqX^tZ{?$8!kWw)k!#-3XCHd z`Q}oxWxad_qhsYiis{ImUV7|>+NEQ7GB38E0K^*YmP0v~6K(|yit{nMf!#itDO!bU zOqLTW?u*qc*^He;l4B{(5V`3qqi%vzS++&oCIL*Wy*KO!EBEMmj2u_K7 ziGfJ$wZj&jF_}%)0Y!%9C6%b{3s8goC zn)56$&FsGKB4b$DqfcsAppd381r-ZtjZWvgz74qFmXU!KT;n|TPD3ufkw+H7Inpd@ zRy|g}MS+lqBk7;?ktc>h3Xz@W8-+^RtPu7hUxE{kFY#<@c7*0*rudT{lv5bNwV%4s z?|Ip^vp~RApHwVB{VYXA(LW#(qF48v%p02V<$b#?bm&O_HJQ}oW9So+iRll4f17-&K<5nvJX5{^;dNKM6(%C?C=~W-rnMT(fVUWXW;t@oT1x`%t9?x6wPh9qW$CZJVzQHiqY#c^>uvcB!6}Q<9N^NcftGELFiL5!mn<=ohzGLY&+4< zt>NA`v8^ANr(Or8g@0MwAf5$LkDR(RoqFWp?IRKaRU_wWi$$NpW`CJQYF|9GtqeQy zlfp^y+2@w^h}06jYQsv*_n3#aE4G3o-e(M!_u58wHfH7+&U2Ki85U>cW{9`PNQ&~u zlnTXh(PP|Z1XA^*1X2wH7`O~5lWhZ-QTb)T)xoyONf}g*72)d_6n~Sh)Y;kkQytKX z+=2Wrwh=K0{rQ0vO=5ARsfiuJ>C*I1h26F!z5Q)cJ`s-=D2%f#tO`%%f7;Am8sn>p zvc1!}I=7)EXLD?Q-V!gf@D$F;^G9y|#y~NQ+s@KbEd}x(ciHitn(_R74T($q%X&{N z>t@=M_kE#}U6Rl_)o0~$x!vu(A8zh07q-=c9JyuE>|25->jOX=X?n+G1sSb_v!KC& z5Hoz3JM_vE>|N%jJB1kg0LZsj*U^`#YNz_Rrj{Ju?ZQd34h-T+h4ycK`sF)?_23y) z?vDzLxkx8IgS>5&XWoL~8_JM=j24(y;9!4c0Lo+~_@Q%EV$rGmRJuYI`d;i?XM@r$ z`M}4ZRj>$uz3Y&R!gEl3aF{$y?HvIxbZ{Yxc(&M6{FYVD-OYJ3 zzpAb2;8f`4oUqF}h36lcKD3M*7DW#E$$6p=A3H6AzJ~Q4mKLoxdkNLnd90_dn;xn2 zA|P3?hJ5IRzq-4tM)JeYrv*PfOyI@DYi(QKo#mJI?iX5}WL(=>*%I_J2EsoC^qBaq z?cu&T!+>)~{_q^}_)(M1r{$spQV2I+h^S)NQY=r6Gw{6=EsnS%Y;Eos>?%wa^Nv9) z;;$}8$~Wg<;%_}&#J3Yl$Wh56a_Q*@0yUR5^h**b$A~v{n}=RQHx}~I;h4B1v{ZwM z{7QX+-|>|{M0mSJqx%NbEGX~38tGa1%;WfZ*Lfd{l`oH?x>)TRRjj*h`klgEPI=a5 zO|(Zo68PBos1%c>5UXUG?@2pUXF;|L+Q?#RS^9f`SWDNT#Hx66! 
zyY~+o{d>hPOK-7~aK&R#Oz%hf)A4h`iWPm^yWQ^U#cjFe?!B2x(bxvrtr4%l^VZ#z zkGBKRf(#_mB`I%C(PTHOyZ(^>-3s!Qs#W{ljk|4$9AF7nZ7&fu12=X=?Pnr8TH9r}MIK zCnU?shtmv9w?Xl41#7_uDS-}ds+;mee7D&#gy1Vtdz@c>_mPz13?TN~_hH za=E1T2&MR{r4_GuXV&e2I8}iBUoxh-la>SG=UR&@g6yq_cKRx4lSs|Qsrvp@)%dg2~0aHSf_rSluwm^atOP3NgndC z6<(9E$CuuEAn+z=H2lKV5SuP|%E*~Zx3=sX^GU{jO)tSqkAKz2GCJvrdw(UD&tl^^xQA{Y?-tR;8K^(ycEIO$n}ioJ=Z@qS z`Agg8WCRrzC;04P2MmnuS`4N)v|4(mJA#Yi2C?N5J+UcGS!RA$&Tl>)tLZc>v1#q= zM9#hbj_hiS9ooHQs2ie}7Z|Rk9viEd;GLmbq^vF*mi>CE-*EfMEj!Py$r9ym-l<~2 zz~YJ&UqYs7(;>7rj2F%47EzbLRWrnTcgK|Hqrir`OK5(Cd5UfrG>vqzXC5C$L~C#P z)m;j-1KqqcO0cO(e3d)l5x5`LiFhJHs|VpeHSPj4y249#Cp{9vi5Xf7p)AU3eCj!C zvP6sSfcaY$r^8-5(dG1(e@Hq4=9Ff}=+S}2=!&`NcDc5=YLXzTWT~BSrR}|WItGms zy%TQF>zF_|-L@GVgfD=9`^yX+$a687Wf%HLn@QX>~TBL1HO`Mman0S(n&k^(x)AMmwx+#@^nnk;B_z!I7xc1l4b?U6us| zc;%Ikxz%|mcXuavw<~TkkxX4(wzcQ=Rb~qao#a{wy7u`JcfCz{R!$;6G0s6nFZzjjLBnj&=6 zv7KC-$0Y?9h}+Gf^&-w`KExrDTm>@khy`p%m%#i&G0oNPuw;@Q?{2)k>r}fIEC!sV zA7hzCj81{;soVMzh@&^mVz;I@mj`kZpQ$_aXJc=Sy~Q8jT^_Hus(W^DjOES88DQce zvqyMl4p)9^lK8Z<6w-8ky+Vc1<#J0Qz+nwLmd*PEwsi@j=9B+iqu@kdd zsv$TKO`{kh7{E_Xo;V6(RJR^a$(fBqnWVp^_Dd%d&hKCqL@f=GS`Oh2yx>glxb8uY z2x5n7<<$aZgBZ$GfUXC+j?XYDMKd^Rt-d&vRf?+OyCIlioTZmqsmw?^L-+&AIajLa z+c-U7^@+eLNmO@v%IBO(Ajh;WxwQ(VV%MrWg-6(fJU8N^-_0#Pg4nK$ck7fk>xtRMfDiZjKJsU&mPg{=j60pBOWbSYvuGM-!kkP z&_(FYdoydCaE+LC7&pFZmvFRi_~Kr)56=thP^V5eEy9XYpgtHVpnMkTz${s$6i}(f zaoy{!#(c}VpI4Ytc4g_SnD6qJx7`7S$({_N6%C-@?3T?*bVM!zAnJ7rOtmenspuP& z`O;O#AlLy7NJny+4>!R|BpQY?D|hvj2>T^^?57KQx*JjvPV=+-35@F`(1}CPF!c?^ zAszR2KYK5uvzRbB4dPbjmu6B}7nPcml-~(i^1r|Z4_|nAu~8oJgatAtRgo~DVOZi` z003NFB$I{r5f9N0XL2g#;Z{XQ3m0-Q+SS)p`*d2ZtlV>z`}#YkJY@_w#!q@JJkGM$ z-YBlEwnmG?`x4_%d{1BeKC#Ts^xq<#>OePTi+(JBGjgXz@^F!zR2&L3)Sa(AACDox zBOLlh@ZIrq!=AKA%i!+wkKRD;KM`?ypTch&0g3G=Tl{?%!05==xon3v#ZK`%o^dUE7UYng28@ovUQcPBBUALe9$zK8xtnT7hJ`U$j#KfVP8(c?e@|{ zxu)mB_*xMMTQ}&f-NY;8CoNWYK9}SIx*049mi}d$20GWdpP2M73Hu*^JaW8oRo|_T zl+GzPtP%H06`a$FqCg7nB`EQY($@B~it2@BmNi81#qsNx1nk1(C|KKWl->Gi^OKEg 
z1o3;0;8^(XS&&u$D_5#Kj`0fG=#u_2;Ku0qHv)2A0h?ER$QpFQiM$<9ZyUGds9Jd8(H(1^H64 zQUvZrz*pXu;jxnvK+YtJ)sX!uF8D{fS_ZVQ!9E8De$2(2f)%w_CSOEhafVMuaoEs& zSq~bYs7D_y8tW2Rm6Ya+d>A3M)8Q*tEx&Wgxp~uERxZXQ3=#zo#bzyZardY?8C}&) zaPZRvZXVi0z8W}?^m2?fKF&)XQ3K)4dMi`pQ(%nxVD#C*PSU|9SZd z%-v~P(l6mW<hdCg88Y{YqJt@eQ%{YIQfazXA|L?cs}pe~2_76l(poYO*z%ECuE zKsro7(l_snc&zMEr^8unz_h;>WGu~8obxXK;dF-m`)uW5S46f0i!#!)IOwr@GSSCP z<8#k=TpATQW-C3e3erJm0cqpoz8hQ zmeL>R56jcKF6R1#C~$dfK)g%4tcp}t9|imU-gRzS^67)NDL^DjlGN?2v(oo4u`QVPJ}mVkm8D586+}9&tO{CAL7(Kb^MG5MD|NWBax-jS+yO~Lp$eXUgko{s z*25#auA)GrAjU!GPuAEQ!{48ouX=MHSUcP|3O2%o&F`app|@a43_LUHbfovp0^j#k zkAbxNJ{)HcG-4U03n>fLM{s7!$$%)qpoTI9#S5#U+1&P}y3!j+2}f|d&!ki&ZejtN zeKX8~J(sM#efj-V*_=;3V!Zb(N&Z<_Uq!x{J%AFDQq*PQVl%~BGuLU& znS-VF<_)Jg7l$*_Mtw&~9QqULqE?S!Hk@3NZ*Ew0$W>)^CEK-V`Zah9P>0laFolu) zOf#)_l-VtFrKcV^Z{MT~S+Gs}#QYt|1p7HPHf$>#xZs>%jRn7 zK11FKb3)#Mxd$0Ecrte$S-25q&f?y8-j&(1;|x^1*E5nKW?`!gcFL|>GSNY82rR69 zC8~boCKuk~&gdhY1PQkj>N%n5+P4mz5{LaI%lmDWcsC^dj_8>p*HP)3c)|B`W%a0~ z%BB~beGew!2Ma)FfDZZ*22ji&)zyDO2Vb&SGksX9 z6d%uS%axla=2UF8OPPvue%mgc{(SR=(eOn=-auKG_mq~e7#nIp?6K&mXst7^>N7aY z2R_n)=r^C3j-wG(*?0PzhZZVIgn{Q^+JuZVQ$9JxA$r6Ezon1uJR*fgICN~aWj}(M z$`5b{H3nFgco|9|2-ZCxHP{S$g31Am!Buu9Q270p(e&mY2VP9|An{bkP5KT`VkD!S zS%8WmHqAJuLA5{Bc)h)=cJRx6v9127a~IueR#$`Ux~O!sF&RN1bb2<1WdrD;2tT+r zdWz$+;|G5TH^+6yY=7$MXx;)%=2mEzjQ?C)x2J5mXCFvn&M+8Q-#*JjScHQx;5`B1 zyH{B1OWVs&JDxYaVo69=NiY_`Gs;OCr}X1@z56#qGbhyCX*so$ydLw8JSh{?q)`I$ z%>r}$tSETdE;{dN6%bqe1vfk>X z7u6sjTFCPq3W!_9{7|f({ehbU*E^#-rsBMsMgK_RAY5P7+)d8be6^ojHPosWR3|ye zQbwN^$z5B2Jo0SzR0i7GmRHUWpn2f@Nu#ZW`fA~-)=p1XgjrOd<|0FTtsZPQ>3SvC zEv{51D8T|X5nToxwdeL@w&Si^*L?IGuEex#-BYlReaM1V*=Gsd;S$lf7k-C)-t>7s zM`M{94irBzI6`n#n;Ys-9=CnSqdB-*28u_Lmx@#DHO-~%aS1&HB*W^ETabUThAFVW zswBm&Z3O|u5X{i|UvmMjG5Z;P3$Q9!U?ctH3RxZGOt=x0z2{U%23xjjpQP#2kgfHB znds?uqtS==jBBEOE0@cG*le=f0@=)L5tVU>a!rXzA3VaC;vDtD{?-RX&8nWSy;3J% zE&jCN^H*=wcDT%?;fP?(xHg}u!&Qb`d2XZ2B{0&agKD}743*;XG|%LFc?+=y1{oI) z*x$b{SeG!mj>G)g5OVxeoS-AG=_fRiP3)Z(UuhlVFlV-;oOv 
zAA?fI33%~G#z+T`1;|JGX6f6`xj42ni?x}y&yuV`t2Urg_1QQ(F^K9AkK@a_X3&oM zsmR&i&PohG%h4L!T1}tGUVIj#7NxtB&32%ZT)B~*l-81>l~ZSCPE(L>c5l(==G6Pw z7Buf_q_{4U6uj+5*KK`WL!aR)kXu%#v;;V(yGnzERIw?pL(lPZWkGR+I}Iny;W9qC z!by9!qz27A-bgrX+jaRT!#w}=S&2$Voz@f3;Fd2|KZz)=eu@4w=xCU`=I|T!22OJy zw?&VPha)pYEpGxTKI?8=1Y+;a`RVVlf}~ZuSOOo^)&(+PG+0KnOyVBHBA~IOXWi@$ zu49>xZ=?9&b35{`OMP=&3%u+G%FURi_u=~`Q$ehq6@Bdn$Tr-Lr{sJ+qe2mc%Zuer z=jH~{6ho6o5N4H-Pivf`>6vq~UKbZW$_)g(SJ3)1;S1lZX2p!-;W83p_i9#kp){$+ z6$50V5>~z-Tu?m@ayl(4IK@oOGi73(2Vn6iYlQbIO`Bau;cc^v0TZKSJHYz?;6q5x;Z^R^u!m>KPUf=wSy0Ggi%L zMxNf8l@T0Aj}2@DgbOLa03CSGxb#neN=!>iO4iHpIYt0;tF^diT9N(7`u@+@pa?Lf z_MVzNuc(FmMEcmX)&n&Ove;rsF7?A;>3)r)GTKdkr%m>&%8`4<^K*>yLe%y+a~kK3 zYsJ!iGo%tbQDBgWIRv*G=`mJ2XVskf5I~yULLXBFjNAG@dZ8S>7$qlOQSG)}hrxXm zRZiP5?lCD6%D0C&_fwY^w`2%KUNi0Dqmbmm5s&(B}%(G7L> za$7W`&Gs}d2o%5)aCc#t)9CvC;>A>L&gi9~D-{YzJr95h(y%U(&=uR*U%PmHi z)iB}?*r3AhGIE42uj>Rym?TsXz(CmsX?Vr66({CF%dXp-mBM|}y~TkjK@86@jN>YE64lwDwcMch z^9R^Ljd@aU0bYo{qkL8l)F|E1Io|{Jf!XYr zr)Xv^x&WopMC=$gI+PAcA14v;o32My9V;j|S(fC4BD_?syT=Y}bm^pQXVe9QKkZ%V zOWr&Q#)p*M9X}SB?#s6{481;2YnqP(o6>jSPdDX%B;b*w9m+;CzVd&#=}~^$9Y2BM z3FftrBDv;i(gTBoKH=j-@}$i{n0GkcIO4$z;9{O}`RykZlJ%+V#J9PW!3qgpoxB+XoZ59Grd1CC!Sr+? 
z(I`IQ1SGD_8xmNYz`x_IJYL8_6u$i`dd7F9`ZPnZN${fA`oq;3Sb`?kQOVg4dLI?` z(QC4C5o3gj^*W=bc~mM0zwqqwvvfP@K=-)!+Em; zdE&Ypck|9ZO1`&pWw|ksY)m+By|TRIv)hKz-_>d$T4+f4u2gNb?dDZI8QEL7kJ0rB zhUpT#9hL{(J|7VMw12>PnuQ_q2&Dvds$S%~*OWN4J@yf6{w&`Cx1K9N=8~3b=&~TL z`Oo~OyKz+ASHq(B$b5diWGx1N=2FTejX`&hze}@D}H#*3+c5EE{8?={FdL_#>g!9{j ztX6DuKWV}J4K_b?XlpY#81(z^Oh3s6s{#1H0+Ayl)ve&P*;gWxv~YNIFPC*!LH{U3 za~C%FK-6nM7vn{hAJae6x%x$NL`Y$KXLp3F>Dh0tVq}_4tc2COt@Z&^aXCvGK>E-N zC1~yqJ7~xh%xX(AS?%=YFo~LHYLO!?McAy~Dm{;b%f?x*en)h~Lz8nwo#7&1?`nLG z@RxaiIxt;LHa9O*M0#;I%BK<4zdl+z%oAvlht-R6tis>Q4D=|K-H*lIF4=$wu-gkL zEGvDd8op~J9Ce6X_F4I=P-$)iOof=1X=JBTm{OLJss|LtXNc}BU8`SXCK|nVk|jz6 zR`+f0cN{kDn)nu1u_#39NGK7hou{p;Fr7lA_5g9(Ng#Koby%?4_DM-Enm_b1E~R{M zman||h`q+NiD~h!w?Q?wC42ONCGLi8^wt2YscR6b<5<-}zZ@=@ru|4IR_bx}Zu@Xv zr7bx!-OGM^O=;XaEt-jg>M?bmEUm>}CECG-ingO%i|{=B+&kQ?3^MP-+=KNVF|`+? z0^`&t_mx6F_Ou3^WQi8t-}Jp3mQu@HrCO7f;2rz=Hh9isp|i@B54}0;R7-WMbd~D@ z_F1+7P(loxV=v8Pes`mg!z7O%6D81oZ2;;UFtlB>8WmOCqOFlVBixqg@RGZM(jZc%`Ev?*?Exz&< zOMN~h`ujk7J4@@K?N&{u&d}!OW@SU;LjJ8;a+poc(1n3W7`O(xDQcCmhBO=(rANN_ z=)gtLPSU+q`f$RSo=7)(4-{`%rqqQXOC(%zjH?>~>9ih){vKKv1#tL|-#74(Ml2WI zv1?hoG1xZ&l%9rH+z|CZPGLK)hF%A^O`EejUTW8sws%}`n7@FVh#C|On%PrGt}A#& zAyRG`$xIA*-u*TGr%X|PFWPu1&f;hL*=>&+lq_jgh6S_6BRE^U)`VDsWV*l2q0$b! 
z&28EAn5(**6gj1A^#NQE7MS;iGJHC_OX{rKOq(!MwQmY*cZJ*6aWYsH(XA|cXy+)R zY+q#CaD$l})< z9j_<`#S9i9^f^}}676ie+V({&G(*ie(XvwA()VRdJ!dExf`=2}AeRDeSvPX0W22Av z+_S@#07Um~6DXh17)B|VMv(z|4=s=-jL*QD-RhQF4F)fkg|P+DzMR>Pw~&Ky5ibEX zVPhu5{+l@?j($v<M01Rh>>i#p{mIZlb;#vPn0bmM_XVlM%8^f1w?-6` zZpTQhN4C=H#~)_3tut|Iv?UwyHa9p%RJjiGuW0j5y7l>v(n{;z-?<`q$;>a8LSmEa zI|oz?i3bi%#UpZ;ic%zpub$8Khf%VcGkm+a5mO73VMUFkZWFqyqEUp^jxPF0c3`gI z^J5us#=%jRSP0j>S4@{?3x^5@7;~YZv;|R z-bKA$nM<``s1<^Ey<4T{ctFZ|*CI8~uT70};R3JL zD~M^btU$<&bggrEp+KT@c!eq-;{`bN8O>XQ%(hy=<=u=&pYlkpQw%f(_@KuPy*V2I zu45mivjR1{EU#xKNU!mF^zzw~qKKdkm05G`Ww1d5YH6Y3_S>Dq`A9O7K4y>6ini=s zoZ_-b_6ffwQrICgFii>;YAh3GmlO{a9UFkf%y}BM7OQe}%F0cz4x-YG&BT9c2SDYv z%E^cwIo9wKn*SS<#5<0%Ktuw>)`Ik>tX!-4YATsTwZJoP)OqG>N$@hUiiw)j&3knxdJGDaP@phYOL|@nF^W7)2XL*_O&fq)wt0%!Bt%5g$ip1^M;q9S zDSO8QCu64RgM8f7kOW3QIJ-GG%Zn?)-<|NbO7QV|_!;xbtk${0Ps4J&F+6rjTpXdV zpJdi6%c5cy;hvE|EmkA2g$0&dyedhK+o@wpsh|b;J=D8}`olk%b*0 zmJK&u0Kd`a9;N$IX6rrUFX$(~dt=*VCv?!bxj17YV$kvAHtTUrN5&4-Yf;7r6@N9C z{CZJ8mb6I9TPra#mFXci!ha2No^M(Uo&c9u{H#j&fKr5>+;>yTf_sk^;BDh@DK3#S z%)eRX$BB4FI$VIo;Ll_<=R8QxSeIIjFeLLX{ zY)?c1zMin4FW-5jy&;S369#XxR%+N@Vz)@8Ad%YD;$T@@`DjVO?(OxBn7swMsx%dLmvf@BX--RkuU+BQUVId{b2`ZUU=+DHHM9pWquQS34_F`xWmTx$N(BbwcBnn@_#HZ`DB)pl!FI zrt_;_dF=FG6vR$Q0O`5Ncli2yah0HXWB8JM_SmLZ^Sf~6;0pywdG-N#l}hX07Is7l zx1jfp;h~jp&ePSw91d`amw@l&$}fzhDJ1bfvPsxAU}^%&&e8CVSsNF~pa5ks22UBJ zyk-y~<>BkFpPY=%^vLJQeXMaF8}+Q5cLy625m^{DUN5;ANn@roDsk? 
zF;i9HB3!#Ou!=lJTd*;S*$Q-9NSB1Fa4w9K?8QCz0 zsgpwxqVE=azP;TQ<{TrNH8)GoLZ5#BnP3({5y@7nmgyX!Bu5HJt=etO8=BhHT*Zei z^)@7OO0=5NCE2!weF8(# z!fZ=6-DQ%0Q3A#u#7%1NbKVcVs^rD@(pdqk{NC~v%(TY2zmC%2G_=;=;9=7EI7W6f zG#HmKY@8B?FEgG6Z4_fVcWSqPQXc;4>C8Z{Y5Xhwt$ILXXIFX+Am1~R2maZeYNz&P+BKOQ5 zck<28@g9#IllnUJeRnakDHgYQW;J^NWuFay1&iyIkt;CuJyu<`e{dQ{G(qIr7q(qd z@U1@~nyAbOOGtbb%*-$1;Io}kOtLX&DC~1P4aB!F@d=)KZw&&HFVz|9+t)f}6piur zxclwuHltAE`%9=YXx%f8VU0YzF-Wz{a|CZvLXchkLN`s3N&x4+M}MI5pdT#odfEr< z?pm_Vr0##?eo%8R~dCpShj79i1=GnI`U6atv^|0uT_Cd^wF1Df4pgWu~f0s zwxG9IT3FTRd+{q=?QZ+YE#Ki$*fx=GwL8>R0jJfYENEv2b~42rPTt>|VD8gG^)ctk z%FK>&Dd!q95!s9O|0w+=^ppI&RFRY;gI$hHz*=m#bH?DiOWOAAyrU6sxf}_4<<0l< zN}Pb^vj@PxKn0}UlhF5><06fJoVWL#y92BKo`noDhnH15?Rt`(ba>zHqSJTo+s>@S zBX{mA-kdb0Gj?qkpDJzr&68N18@{m>-i6n$76q1(8oO`2-p zxsnZ?Ze-jW?u|_+k~=nn6mHVyChHLOB&ie~hwzU(nI8ii9=GZNEOJ_jrtT9#>8#bb z(Mm4ID$K?bah^)OfA_tbe>Hc`cMwnW=yvmL?h#BcYwi|j;m~PNiR34Ax>vHLQQY}e zM|EU$X7Rx|&tpYtlH>rMs<*+s(A=mZW5H#+z*`Yc9d8iFD2d629pf@+Z@2m<+*K

)QX(9|vW+2S#%hi2Zkk!Pu``HEC39)ds2iVrW<5oaeplS%D8-oL zu6`PS>zMzfw!96nB1;|LwO;s%8%_cpSIf=L&1m1*;}Ej`FgKKIY(`Tjw($1diJ4kc zQ*|}LbnPOS98^PL0LLtYw2>61DZEj&ow?lJ_iY9i540gl!<7hcgJgLSH^{3&@&3cZ zJ8QyE^cSXpCVacOzL5>X&OmRfYm5xz*A!OKHp|dQEGgE7)^}2NhC1s;%icoH?u2HG z>&U6u0NBOucp_`!3q07)EkZt2EaBc~m-Xv?+jm6FvvXYBxMVqTM4==*-0Bp7C^p>y zsswT0rz>nF8Y1Qaim0P9}S8(b(CBZ^Kdl5?TQD*5lgwQ}&X1kfNI?QFN)|r+N z6fPrhM+{r9$Yc(&kSWFs4&JVOBVPkYhUa|(^CYbYppF@`y>;V;ZhBMeP%O;uq@0}& z+8bal)i~qc`ndPh;SaO(0?W+mqp?9#p}mb<3Bc~ZmVjHPPU-tE46S5zz8g%3H>c5f z;d0#lMaGh5{!<&XUN6{P*;8DGp69YVvdi>#)Gx0ZD>Kl$HLZ^DlHjlr$>{UHP{9e>vN839v!4zC%zncr`N11aUA9Zn?y zcAXZ*CmY!3D0={Z+G_7ayFyd8?JC#tok0;p!^Ox!H%08$;8=i{zL1?C5&|E+B^_3j zw;75=wr&kR2c1d+T7zOQFb;+_MO*l=M2mT$36{)`PY-GSGaPZ!P)XH}qj6IM`c?wQ zS41;ARuPURaw~l?_!x;;8P{xQtTAvWiCDq39rq$tTr3i9hU=J;+g{TMbE&TU=FbFN z^QchH90keNI%jR4^!HxRYUC7c?aOwg^^v>fM$5(WfNOlh&XXIZdEd(<;ypH?|D??W zU{R+Lrm=D-d&T7@=gic{D;j$g(wmX642nAK_{GoNd(Ig>(0JrjUblN;-faewcVPam zxq(9gK>xGPp)DA8(hWlkZF68-2fHPPdN@i{!%iaWCCRqiPPoa=wnecp+^bi4Z+C~Z z>>YO|nOe<_L}hnCRud~5U|Hp@5uJ=n-|K|{krsJF24oe_=yI!zTtctm?Om3D7WGkS z@&=xoWt$tfHLTnBOB(zKQNMfpH{vm;&3^x}4o$pp^f~BKX&>R71*XESi2FK364G+2 z<2!P`3~t!9sIdt|DTSHCRyXBPA!T;FzBL16jFna88@qpmo`|>*z@C{#p_mwQw*4; zt=zv8zJ21?Uw)-e-q26tk0Od7ifx~QHWp$?Cn2OJw?!Wf=vOwSsN zWi^1ac9~-E#$#Hq@lU4szK_$G)Jhz}@WhFW^@!*&XER`zl)I@_ACPv#5(7N<=RH$p z8FyTAT`J`gF}N3rT81NW>0g8#~~Ku;E5N|;t91$Wm}7bD0zC~5U^@a}02TWOV*v@8Ol4~>XFY|UaiejhNA+~?gv;}7N3Z3c?Vv+s!mGJ06!azI0tY17oFSsJ1u|!mamaRF|>`WI-&BHmzi*-EHQiQ^lrl)ckeABHKLd zaGKow7W4Fr#kHcTZqs$Aqqey{FQ+w5xQaAWAkfCn#--|Nmp}z2n(z+qm(k`_`qX zYO83?wu+Xbc8AfLHKPbst9FT45xI4uMQc=Ttr``C+A*R^trA7Viri*M5Japb^1Iy6 zc%Qa+pXd2~-hbc!BCh1Ra-7HUJ-){}&j`;5msP~d+NUS0IXBCj{pVklE&2U&@42&v zVOQ9{*`>l-E&gKvh)ciqdC?@6Jqvv#t~D12la2r#YxR=_ebDt@+fc(}jk@5>BVynoXw1G;!7#|VEmZm)Ra!L>ZO#|+)>L&tj(!N6SS!qUi5 zO$K{YlJv$xiiPiA@tFuF)I68bX?$Y`lx>n!-7Rd84yU&33AX_Z=XHr&EcmQGc|v9I zz0hw|qx#lypDdR8haK;4QAN3@Z+s9RcZ*$g;7)$Szi{|YPf5{z-{BqH$sRCd 
zQyq8x-|frG}H%fm)i%R!Pr00KtM&HBY-IjjyY+B`y`-+nhHLIr-W12ejmNuSP2JIA-*&F zuX^?Wd@mk-RhkX?(C0Z2`ZM<5eYV{S-k*GtcxUvIjfatf*XW!;hoZYX`Jqhb`bE0a zBT{bkNkXRdaO4(&NzAalb3!UY5elm?YzH0 zIIiQ+@%xGN6vdcTHCy6^Q8dRWq@g701FpVTlVdB{G#`Rl6ehGpCD%Hu{^x)n{ z6bW_meAcbTXY=x3hb0E{@I^q6TJW+{UCUcI<7C?42?Xu&;tf=FX79mSN+>g}(x9BQ#th%u^>v_bRoCu7DfX*%QWz&fzbm!hmE>*Iup8b*})pV(UP1_Z;lTW`3E>vd4`9aRb0X z#muIJ8t519SFzN^fF?KAM7L5+NWnw;Ck=}Zfbsm?)SCq_EziA$Ccpm>(0B8x_+hGa zNz7yMcCqJOmaOv01055kCY4&k(4-YFVq&D8m|$$6>Ai?16ArK;9z_|o9?wK~E%n%8a;_ULRDa{UjB>tXw z!Po+W{v>Ijin%(j#i16RJXvLN9V=GOfBKU4J-dj^5?}zYr*Kq~eE^(+ydN7Ztq08g zw!ybMzp0;XY=|5^v^Mh)ydhq;ITYx*DqAoWXUu(kz%uOK8e}qO4jt1*TUdcm_?I*> zXT7g4*DbFj_N^p#Q-H!*8~3wY1G*((70=}&#!0&;_=({bmd8UN)eAexMNcd&J=?2geNCl@S_LCG-23 z|Htd-I+3>#)oFDU-yL{Yd=wvvbJz zy*%7J*ZulAA0$60j(h<8n&i=WDpu~(30OQ(B&U5WXv zIb>TwOq#aa>1}`mNPwYCe%*DiD{omVr@qw)S1-JHulRd$!o6Bxx+v~HOc(Kl(_$9I zDky)c5Vn~W{d89Syo1MwqQe@NFVsLmu_=D?1eMKt>l*kCXDP7TWv+JfRsQ4tCKvF( zt!{1+SUlREFHhW+gA$xaS~~E~#GXdH*MNu7K?AQlcbY_|MoFusgnesWm;5|QM4r{ndgL)PGpjk5y770@E_c6n15ku|Xf zitd+~?pr(uKz+=zOZ%P-{VnYde3BD&IdLR3C8=6dqWk?eM2np%PVvZ=dE3QyRvM8e z>%XaAeuKJn-P+(P@$9XfinpO>6YS!C(=DiwO1VW~9ytJ(g8Nxa+VB!(%08X3AnW;Y zmIUTWL$dLeH|QgSv#{kp)-|{bMn7Y=I#2F;&EK6&NZZ?MZLjG%?M(Aa>5O^DfDn8U zP%)g)3ygN>erzK)b!@_?x3oLJ&>d#eV%V}Qph4Wo>ey^ob?tjk^5`H%?}(%i%?{Y) z|1#sNt981FJ#sdI<)rfSH9*hELtyHzh+}g!SQ~lB=DvBug;t=%x&PEe)SZRFH0am| z+H{I_I-+alLVxn_z_w19HqvGusw_t2`g{_o$pdNjxlro)z)}KiKY$GZDLeL@f0UJKb&!wvB6sfe) zU~IeJ;B2Tv{{{C9hcyA$Mvv#xBiV3RX~?`4<%(7HJ%Y07Xx6h*0w3r+XsPih<*`N^tkIZ3(jF35gS4rRk#WU18wZk?Y^3Q ztZU*aj{Ro7c8wl3B*P-^^yf>!G^qU3$1a4JjGG4&uISo1TG}XTpY3;dd0NwU@8_h` zKN=#Mg^%D7UPn8&iUoeY=BnXtpqDDRrPDk%FCe?;c>Lv&I*0Bx_oyVPwrBOSkXdd> zGkk3mJ^9{%2aYR;-nlcmo&6wKIB28SspG{LW+bn8v}04A zc5>>1WO;dcPW9I^eishi_s^BxeQ#ff+=g}6Z@i8{7ha>ZgBXpG`$_reCvOhbBSnvA z6ce|c$A^Z8ZW)9-I;v~MpXYX2y9IFE3@pmCak+E8D!&BG zbHGQMa!AsK1N%wmbh@CFnjBH5`cq@yo~;PUmV<+8>4&;}<3kDP6iSzBhmHRl_?33B z&zEP~4wYbZlD8ndY-5uz=3<&^se{hR)UIv+5V8{>=*jEJp5NX(*F8^=+ObsbvCrtv 
- [Summary](#summary)
  - [Goals](#goals)
  - [Non-Goals](#non-goals)
- [Proposal](#proposal)
- [Context](#context)
- [Design Details](#design-details)
  - [Applying Updates During Pod Admission](#applying-updates-during-pod-admission)
  - [Applying Disruption-free Updates](#applying-disruption-free-updates)
  - [Applying Disruptive Updates](#applying-disruptive-updates)
  - [Comparison of `UpdateMode`s](#comparison-of-updatemodes)
  - [Test Plan](#test-plan)
- [Implementation History](#implementation-history)

## Summary

VPA applies its recommendations with a mutating webhook during pod creation. It can also evict
pods, expecting to apply the recommendation when the pod is recreated. Because this is a
disruptive process, VPA has mechanisms to avoid overly frequent disruptive updates.

This proposal allows VPA to apply its recommendations more frequently, and with less disruption, by
using the [in-place update feature], which is an alpha feature [available in Kubernetes 1.27.] This
proposal enables only core uses of in-place updates in VPA, with the intention of gathering more
feedback. More advanced uses of in-place updates in VPA (like applying different recommendations
during pod initialization) will be introduced as separate enhancement proposals.

[in-place update feature]: https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/1287-in-place-update-pod-resources
[available in Kubernetes 1.27.]: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.27.md#api-change-3

### Goals

* Allow VPA to actuate without disruption.
* Allow VPA to actuate more frequently when it can actuate without disruption.
* Allow VPA to actuate in situations where actuation by eviction doesn't work.

### Non-Goals

* Improved handling of injected sidecars
  * A separate AEP will improve VPA's handling of injected sidecars.
+ +## Proposal + +Add two new supported values of [`UpdateMode`]: + +* `InPlaceOnly` and +* `InPlaceOrRecreate`. + +[`UpdateMode`]: https://github.com/kubernetes/autoscaler/blob/71b489f5aec3899157b37472cdf36a1de223d011/vertical-pod-autoscaler/pkg/apis/autoscaling.k8s.io/v1/types.go#L124 + +## Context + +The [In-place update of pod resources KEP] is available in alpha in [Kubernetes 1.27]. The feature allows changing container +resources while the container is running. It also adds a [`ResizePolicy`] field to Container. This field indicates, for an +individual resource, whether a container needs to be restarted by the kubelet when that resource is changed. For example, a +Container may automatically adapt to a change in CPU, but need to be restarted for a change in Memory to +take effect. + +[In-place update of pod resources KEP]: https://github.com/kubernetes/enhancements/issues/1287 +[Kubernetes 1.27]: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.27.md#api-change-3 +[`ResizePolicy`]: https://github.com/kubernetes/api/blob/8360d82aecbc72aa039281a394ebed2eaf0c0ccc/core/v1/types.go#L2448 + +## Design Details + +In the previously existing update modes (`Initial` and `Recreate`) only the admission controller changed the pod spec (the updater +was responsible for evicting pods so that the admission controller could change them during admission). + +In the newly added `InPlaceOnly` and `InPlaceOrRecreate` modes the VPA Updater will execute in-place updates (which change +the pod spec without involving the admission controller). + +### Applying Updates During Pod Admission + +For VPAs in `InPlaceOnly` and `InPlaceOrRecreate` modes the VPA Admission Controller will apply updates to starting pods, +just as it does for VPAs in `Initial`, `Auto`, and `Recreate` modes.
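For illustration, a VPA object opting into one of the proposed modes could look as follows. This is a sketch: only the `updateMode` value is new in this proposal; the other fields are the existing v1 VPA API, and the object/workload names are hypothetical.

```yaml
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: my-app-vpa        # hypothetical name
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app          # hypothetical workload
  updatePolicy:
    # Proposed new value; the updater resizes pods in place and falls
    # back to eviction only when an in-place attempt fails.
    updateMode: "InPlaceOrRecreate"
```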
+ +### Applying Disruption-free Updates + +When an update only changes resources for which a container indicates that it doesn't require a restart, then VPA can +attempt to actuate the change without disrupting the pod. The VPA Updater will: +* attempt to actuate such updates in place, +* attempt to actuate them if the difference between the recommendation and the request is at least 10% + * even if the pod has been running for less than 12h, +* if necessary, execute only partial updates (if some changes would require a restart). + +### Applying Disruptive Updates + +In both `InPlaceOnly` and `InPlaceOrRecreate` modes the VPA updater will attempt to apply updates that require a container +restart in place. It will apply them under the conditions that would trigger an update in `Recreate` mode. That is, it will +apply disruptive updates if: + +* Any container has a request below the corresponding `LowerBound` or +* Any container has a request above the corresponding `UpperBound` or +* The difference between the sum of the pod's requests and the sum of the recommendation `Target`s is more than 10% and the pod has been + running undisrupted for at least 12h. + * A successful disruptive update counts as a disruption here (and prevents further disruptive updates to the pod for 12h). + +In `InPlaceOrRecreate` mode (but not in `InPlaceOnly` mode) the VPA updater will evict a pod to actuate a recommendation if it +attempted to apply the recommendation in place and failed. + +The VPA updater will consider that the update failed if: +* The pod has `.status.resize: Infeasible` or +* The pod has `.status.resize: Deferred` and more than 1 minute elapsed since the update or +* The pod has `.status.resize: InProgress` and more than 1 hour elapsed since the update: + * There seems to be a bug where containers that say they need to be restarted get stuck in update; hopefully it gets + fixed and we don't have to worry about this by beta. +* The patch attempt returned an error.
+ * If the attempt fails because it would change the pod's QoS: + * `InPlaceOrRecreate` will treat it as any other failure and consider evicting the pod. + * `InPlaceOnly` will consider applying a request slightly lower than the limit. + +These failure modes shouldn't disrupt pod operation, only the update. If there are problems that can disrupt pod operation, +we should consider not implementing the `InPlaceOnly` mode. + +### Comparison of `UpdateMode`s + +The VPA updater considers the following conditions when deciding if it should apply an update: +- [`CanEvict`]: + - Pod is `Pending` or + - There are enough running pods in the controller. +- Quick OOM: + - Any container in the pod had a quick OOM ([by default less than 10 minutes] after the container started) and + - There is a difference between the sum of recommendations and the sum of current requests over containers (see + [`defaultPriorityProcessor.GetUpdatePriority`]). +- Long-lived pod: started enough time ago ([by default 12h]). +- Significant change: the difference between the sum of requests over containers and the sum of recommendations is large + enough ([by default 10%]). +- [Outside recommended range]: + - At least one container has at least one resource request lower than the lower bound of the corresponding + recommendation or + - At least one container has at least one resource request higher than the upper bound of the corresponding + recommendation. +- Disruption-free update: doesn't change any resources for which the relevant container specifies + `RestartPolicy: RestartContainer`. + +`Auto` / `Recreate` evicts a pod if: + * [`CanEvict`] returns true for the pod, and it meets at least one of the following conditions: + * Quick OOM, + * Outside recommended range, + * Long-lived pod with significant change. + +`InPlaceOnly` and `InPlaceOrRecreate` will attempt to apply a disruption-free update in place if the pod meets at least one +of the following conditions: +* Quick OOM, +* Outside recommended range, +* Significant change.
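The decision rules above can be condensed into a small sketch. Function and parameter names here are illustrative only, not the actual updater API; the real updater derives each flag from pod status, recommendations, and the defaults linked below.

```go
package main

import "fmt"

// attemptDisruptionFreeInPlace sketches when InPlaceOnly / InPlaceOrRecreate
// try a disruption-free in-place update: any single trigger suffices, and
// neither CanEvict nor pod lifetime is checked.
func attemptDisruptionFreeInPlace(quickOOM, outsideRange, significantChange bool) bool {
	return quickOOM || outsideRange || significantChange
}

// attemptDisruptiveUpdate mirrors the Recreate conditions: quick OOM, a
// request outside the recommended range, or a long-lived pod (12h by
// default) with a significant (10% by default) change.
func attemptDisruptiveUpdate(quickOOM, outsideRange, longLived, significantChange bool) bool {
	return quickOOM || outsideRange || (longLived && significantChange)
}

func main() {
	// A pod running for less than 12h with a significant change is eligible
	// for a disruption-free in-place update, but not for a disruptive one.
	fmt.Println(attemptDisruptionFreeInPlace(false, false, true))
	fmt.Println(attemptDisruptiveUpdate(false, false, false, true))
}
```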
+ +`InPlaceOnly` and `InPlaceOrRecreate`, when considering a disruption-free update in place, ignore some conditions that +influence the eviction decision in the `Recreate` mode: +* [`CanEvict`] won't be checked and +* Pods with a significant change can be updated even if they are not long-lived. + +`InPlaceOnly` and `InPlaceOrRecreate` will attempt to apply updates that are **not** disruption-free in place under +the same conditions that apply to updates in the `Recreate` mode. + +`InPlaceOrRecreate` will attempt to apply updates by eviction when: +* VPA already attempted to apply the update in place and failed, and +* the pod meets the conditions for an update in the `Recreate` mode. + +[`CanEvict`]: https://github.com/kubernetes/autoscaler/blob/114a35961a85efdf3f36859350764e5e2c0c7013/vertical-pod-autoscaler/pkg/updater/eviction/pods_eviction_restriction.go#LL100C10-L100C37 +[by default less than 10 minutes]: https://github.com/kubernetes/autoscaler/blob/114a35961a85efdf3f36859350764e5e2c0c7013/vertical-pod-autoscaler/pkg/updater/priority/update_priority_calculator.go#L37 +[`UpdatePriorityCalculator.AddPod`]: https://github.com/kubernetes/autoscaler/blob/114a35961a85efdf3f36859350764e5e2c0c7013/vertical-pod-autoscaler/pkg/updater/priority/update_priority_calculator.go#L81 +[by default 12h]: https://github.com/kubernetes/autoscaler/blob/114a35961a85efdf3f36859350764e5e2c0c7013/vertical-pod-autoscaler/pkg/updater/priority/update_priority_calculator.go#L35 +[by default 10%]: https://github.com/kubernetes/autoscaler/blob/114a35961a85efdf3f36859350764e5e2c0c7013/vertical-pod-autoscaler/pkg/updater/priority/update_priority_calculator.go#L33 +[Outside recommended range]: https://github.com/kubernetes/autoscaler/blob/114a35961a85efdf3f36859350764e5e2c0c7013/vertical-pod-autoscaler/pkg/updater/priority/priority_processor.go#L73 + +### Test Plan + +The following test scenarios will be added to e2e tests.
Both `InPlaceOnly` and `InPlaceOrRecreate` modes will be tested, +and they should behave the same: + +* Admission controller applies the recommendation to a pod controlled by VPA. +* Disruption-free in-place update applied to all containers of a pod (request in recommendation bounds). +* Partial disruption-free update applied to some containers of a pod, some disruptive changes skipped (request in + recommendation bounds). +* Disruptive in-place update applied to all containers of a pod (request out of recommendation bounds). + +There will also be scenarios testing the differences between `InPlaceOnly` and `InPlaceOrRecreate` modes: +* A disruptive in-place update fails. In `InPlaceOnly` the pod should not be evicted; in `InPlaceOrRecreate` the pod + should be evicted and the recommendation applied. +* VPA attempts an update that would change the Pod's QoS (`RequestsOnly` scaling, request initially < limit, recommendation + equal to limit). In `InPlaceOnly` the pod should not be evicted, and a request slightly lower than the recommendation will be + applied. In `InPlaceOrRecreate` the pod should be evicted and the recommendation applied. + +### Details still to consider + +#### Ensure in-place resize request doesn't cause restarts + +Currently the container [resize policy](https://kubernetes.io/docs/tasks/configure-pod-container/resize-container-resources/#container-resize-policies) +can be either `NotRequired` or `RestartContainer`. With `NotRequired`, an in-place update could still end up +restarting the container if resizing in place is not possible, depending on the kubelet and container +runtime implementation. However, in the proposed design it should be VPA's decision whether to fall back +to restarts or not. + +Extending or changing the existing API for in-place updates is possible, e.g. adding a new +`MustNotRestart` container resize policy. + +#### Should `InPlaceOnly` mode be dropped + +The use case for `InPlaceOnly` is not well understood yet.
Unless we have a strong signal that it solves real +needs, we should not implement it. Also, VPA cannot ensure that no restart happens unless +*Ensure in-place resize request doesn't cause restarts* (see above) is solved. + +#### Careful with memory scale down + +Downsizing memory may have to be done slowly to prevent OOMs if the application starts to allocate rapidly. +More research is needed on how to scale down memory safely. + +## Implementation History + +- 2023-05-10: initial version diff --git a/vertical-pod-autoscaler/enhancements/4831-control-eviction-behavior/README.md b/vertical-pod-autoscaler/enhancements/4831-control-eviction-behavior/README.md index 29c0c4f9501b..cd32fef68610 100644 --- a/vertical-pod-autoscaler/enhancements/4831-control-eviction-behavior/README.md +++ b/vertical-pod-autoscaler/enhancements/4831-control-eviction-behavior/README.md @@ -21,9 +21,9 @@ For some workloads, each eviction introduces disruptions for users. Examples inc * Allow for resource-specific decisions: The desired policy may be different for CPU and Memory ## Proposal -Add a new field `EvictionRequirements` to [`PodUpdatePolicy`](https://github.com/kubernetes/autoscaler/blob/2f4385b72e304216cf745893747da45ef314898f/vertical-pod-autoscaler/pkg/apis/autoscaling.k8s.io/v1/types.go#L109) of type `[]*EvictionRequirement`. A single `EvictionRequirement` defines a condition which must be `true` to allow eviction for the corresponding `Pod`. When multiple `EvictionRequirements` are specified for a `Pod`, all of them must evaluate to `true` to allow eviction. +Add a new field `EvictionRequirements` to [`PodUpdatePolicy`](https://github.com/kubernetes/autoscaler/blob/2f4385b72e304216cf745893747da45ef314898f/vertical-pod-autoscaler/pkg/apis/autoscaling.k8s.io/v1/types.go#L109) of type `[]*EvictionRequirement`. A single `EvictionRequirement` defines a condition which must be `true` to allow eviction for the corresponding `Pod`.
When multiple `EvictionRequirements` are specified for a `Pod`, all of them must evaluate to `true` to allow eviction. For Pods with multiple Containers, an `EvictionRequirement` evaluates to `true` when _at least one Container that is under VPA control_ fulfills the `EvictionRequirement`. -A single `EvictionRequirement` specifies `Resources` and a `ChangeRequirement` comparing the new recommendation (`Target`) with the existing requests on a Pod (`Requests`). Possible values for `Resources` are `[CPU]` and `[Memory]` or both `[CPU,Memory]`. If `Resources: [CPU, Memory]`, the condition must be true for either of the two resources to allow for eviction. Possible values for `ChangeRequirement` are `TargetHigherThanRequests`, `TargetHigherThanOrEqualToRequests`, `TargetLowerThanRequests` and `TargetLowerThanOrEqualToRequests`. +A single `EvictionRequirement` specifies `Resources` and a `ChangeRequirement` comparing the new recommendation (`Target`) with the existing requests on a Pod (`Requests`). Possible values for `Resources` are `[CPU]` and `[Memory]` or both `[CPU,Memory]`. If `Resources: [CPU, Memory]`, the condition must be true for either of the two resources to allow for eviction. Possible values for `ChangeRequirement` are `TargetHigherThanRequests` and `TargetLowerThanRequests`. 
Add validation to prevent users from adding `EvictionRequirements` which can never evaluate to `true`: * Reject if more than one `EvictionRequirement` for a single resource is found diff --git a/vertical-pod-autoscaler/go.mod b/vertical-pod-autoscaler/go.mod index 91e91a9a4907..43614a03cb80 100644 --- a/vertical-pod-autoscaler/go.mod +++ b/vertical-pod-autoscaler/go.mod @@ -50,11 +50,11 @@ require ( github.com/spf13/cobra v1.6.0 // indirect github.com/spf13/pflag v1.0.5 // indirect github.com/stretchr/objx v0.4.0 // indirect - golang.org/x/net v0.3.1-0.20221206200815-1e63c2f08a10 // indirect + golang.org/x/net v0.8.0 // indirect golang.org/x/oauth2 v0.0.0-20220223155221-ee480838109b // indirect - golang.org/x/sys v0.3.0 // indirect - golang.org/x/term v0.3.0 // indirect - golang.org/x/text v0.5.0 // indirect + golang.org/x/sys v0.6.0 // indirect + golang.org/x/term v0.6.0 // indirect + golang.org/x/text v0.8.0 // indirect google.golang.org/appengine v1.6.7 // indirect google.golang.org/protobuf v1.28.1 // indirect gopkg.in/inf.v0 v0.9.1 // indirect diff --git a/vertical-pod-autoscaler/go.sum b/vertical-pod-autoscaler/go.sum index 7e4f9620df23..bffb8ca8fcc4 100644 --- a/vertical-pod-autoscaler/go.sum +++ b/vertical-pod-autoscaler/go.sum @@ -344,8 +344,8 @@ golang.org/x/net v0.0.0-20210405180319-a5a99cb37ef4/go.mod h1:p54w0d4576C0XHj96b golang.org/x/net v0.0.0-20210525063256-abc453219eb5/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y= golang.org/x/net v0.0.0-20220127200216-cd36cc0744dd/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk= golang.org/x/net v0.0.0-20220225172249-27dd8689420f/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk= -golang.org/x/net v0.3.1-0.20221206200815-1e63c2f08a10 h1:Frnccbp+ok2GkUS2tC84yAq/U9Vg+0sIO7aRL3T4Xnc= -golang.org/x/net v0.3.1-0.20221206200815-1e63c2f08a10/go.mod h1:MBQ8lrhLObU/6UmLb4fmbmk5OcyYmqtbGd/9yIeKjEE= +golang.org/x/net v0.8.0 h1:Zrh2ngAOFYneWTAIAPethzeaQLuHwhuBkuV6ZiRnUaQ= +golang.org/x/net v0.8.0/go.mod 
h1:QVkue5JL9kW//ek3r6jTKnTFis1tRmNAW2P1shuFdJc= golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U= golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw= golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw= @@ -406,12 +406,12 @@ golang.org/x/sys v0.0.0-20210603081109-ebe580a85c40/go.mod h1:oPkhp1MJrh7nUepCBc golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.0.0-20211216021012-1d35b9e2eb4e/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.0.0-20220114195835-da31bd327af9/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= -golang.org/x/sys v0.3.0 h1:w8ZOecv6NaNa/zC8944JTU3vz4u6Lagfk4RPQxv92NQ= -golang.org/x/sys v0.3.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.6.0 h1:MVltZSvRTcU2ljQOhs94SXPftV6DCNnZViHeQps87pQ= +golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo= golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8= -golang.org/x/term v0.3.0 h1:qoo4akIqOcDME5bhc/NgxUdovd6BSS2uMsVjB56q1xI= -golang.org/x/term v0.3.0/go.mod h1:q750SLmJuPmVoN1blW3UFBPREJfb1KmY3vwxfr+nFDA= +golang.org/x/term v0.6.0 h1:clScbb1cHjoCkyRbWwBEUZ5H/tIFu5TAXIqaZD0Gcjw= +golang.org/x/term v0.6.0/go.mod h1:m6U89DPEgQRMq3DNkDClhWw02AUbt2daBVO4cn4Hv9U= golang.org/x/text v0.0.0-20170915032832-14c0d48ead0c/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= @@ -419,8 +419,8 @@ golang.org/x/text v0.3.2/go.mod 
h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk= golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ= golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ= golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ= -golang.org/x/text v0.5.0 h1:OLmvp0KP+FVG99Ct/qFiL/Fhk4zp4QQnZ7b2U+5piUM= -golang.org/x/text v0.5.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8= +golang.org/x/text v0.8.0 h1:57P1ETyNKtuIjB4SRd15iJxuhj8Gc416Y78H3qgMh68= +golang.org/x/text v0.8.0/go.mod h1:e1OnstbJyHTd6l/uOt8jFFHp6TRDWZR/bV3emEE/zU8= golang.org/x/time v0.0.0-20181108054448-85acf8d2951c/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ= golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ= golang.org/x/time v0.0.0-20191024005414-555d28b269f0/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ= diff --git a/vertical-pod-autoscaler/hack/generate-crd-yaml.sh b/vertical-pod-autoscaler/hack/generate-crd-yaml.sh index 268105eaed19..d1f091c08a45 100755 --- a/vertical-pod-autoscaler/hack/generate-crd-yaml.sh +++ b/vertical-pod-autoscaler/hack/generate-crd-yaml.sh @@ -40,7 +40,7 @@ else fi # The following commands always returns an error because controller-gen does not accept keys other than strings. -${CONTROLLER_GEN} ${CRD_OPTS} paths="${APIS_PATH}/..." output:crd:dir=${WORKSPACE} >& ${WORKSPACE}/errors.log ||: +${CONTROLLER_GEN} ${CRD_OPTS} paths="${APIS_PATH}/..." 
output:crd:dir="\"${WORKSPACE}\"" >& ${WORKSPACE}/errors.log ||: grep -v -e 'map keys must be strings, not int' -e 'not all generators ran successfully' -e 'usage' ${WORKSPACE}/errors.log \ && { echo "Failed to generate CRD YAMLs."; exit 1; } diff --git a/vertical-pod-autoscaler/hack/vpa-process-yaml.sh b/vertical-pod-autoscaler/hack/vpa-process-yaml.sh index f843d11a8e95..328f8aaf3f30 100755 --- a/vertical-pod-autoscaler/hack/vpa-process-yaml.sh +++ b/vertical-pod-autoscaler/hack/vpa-process-yaml.sh @@ -32,7 +32,7 @@ if [ $# -eq 0 ]; then fi DEFAULT_REGISTRY="registry.k8s.io/autoscaling" -DEFAULT_TAG="0.13.0" +DEFAULT_TAG="0.14.0" REGISTRY_TO_APPLY=${REGISTRY-$DEFAULT_REGISTRY} TAG_TO_APPLY=${TAG-$DEFAULT_TAG} diff --git a/vertical-pod-autoscaler/hack/vpa-process-yamls.sh b/vertical-pod-autoscaler/hack/vpa-process-yamls.sh index d30552efcbe7..aa7aafef04c4 100755 --- a/vertical-pod-autoscaler/hack/vpa-process-yamls.sh +++ b/vertical-pod-autoscaler/hack/vpa-process-yamls.sh @@ -42,7 +42,7 @@ fi ACTION=$1 COMPONENTS="vpa-v1-crd-gen vpa-rbac updater-deployment recommender-deployment admission-controller-deployment" case ${ACTION} in -delete|diff|print) COMPONENTS+=" vpa-beta2-crd" ;; +delete|diff) COMPONENTS+=" vpa-beta2-crd" ;; esac if [ $# -gt 1 ]; then diff --git a/vertical-pod-autoscaler/pkg/admission-controller/Makefile b/vertical-pod-autoscaler/pkg/admission-controller/Makefile index cb4a6d345e64..b540b3fba2dc 100644 --- a/vertical-pod-autoscaler/pkg/admission-controller/Makefile +++ b/vertical-pod-autoscaler/pkg/admission-controller/Makefile @@ -42,7 +42,7 @@ ifndef TAG ERR = $(error TAG is undefined) $(ERR) endif - docker build --pull -t ${REGISTRY}/${FULL_COMPONENT}-$*:${TAG} --build-arg ARCH=$* . + docker buildx build --pull --load --platform linux/$* -t ${REGISTRY}/${FULL_COMPONENT}-$*:${TAG} --build-arg ARCH=$* . 
.PHONY: docker-push docker-push: $(addprefix sub-push-,$(ALL_ARCHITECTURES)) push-multi-arch; @@ -76,11 +76,23 @@ build-in-docker: $(addprefix build-in-docker-,$(ALL_ARCHITECTURES)) .PHONY: build-in-docker-* build-in-docker-%: clean docker-builder + echo '=============== local git status ===============' + git status + echo '=============== last commit ===============' + git log -1 + echo '=============== building from the above ===============' docker run -v `pwd`/../..:/gopath/src/k8s.io/autoscaler/vertical-pod-autoscaler vpa-autoscaling-builder:latest bash -c 'cd /gopath/src/k8s.io/autoscaler/vertical-pod-autoscaler && make build-binary-with-vendor-$* -C pkg/admission-controller' +.PHONY: create-buildx-builder +create-buildx-builder: + BUILDER=$(shell docker buildx create --driver=docker-container --use) + +.PHONY: remove-buildx-builder +remove-buildx-builder: + docker buildx rm ${BUILDER} .PHONY: release -release: build-in-docker docker-build docker-push +release: build-in-docker create-buildx-builder docker-build remove-buildx-builder docker-push @echo "Full in-docker release ${FULL_COMPONENT}:${TAG} completed" clean: $(addprefix clean-,$(ALL_ARCHITECTURES)) diff --git a/vertical-pod-autoscaler/pkg/admission-controller/config.go b/vertical-pod-autoscaler/pkg/admission-controller/config.go index 949909876ef4..fc6f143d0d4a 100644 --- a/vertical-pod-autoscaler/pkg/admission-controller/config.go +++ b/vertical-pod-autoscaler/pkg/admission-controller/config.go @@ -37,6 +37,7 @@ func configTLS(serverCert, serverKey []byte) *tls.Config { klog.Fatal(err) } return &tls.Config{ + MinVersion: tls.VersionTLS12, Certificates: []tls.Certificate{sCert}, } } diff --git a/vertical-pod-autoscaler/pkg/apis/autoscaling.k8s.io/v1/types.go b/vertical-pod-autoscaler/pkg/apis/autoscaling.k8s.io/v1/types.go index aff68d6fe406..cd3eba2ea125 100644 --- a/vertical-pod-autoscaler/pkg/apis/autoscaling.k8s.io/v1/types.go +++ b/vertical-pod-autoscaler/pkg/apis/autoscaling.k8s.io/v1/types.go
@@ -40,6 +40,7 @@ type VerticalPodAutoscalerList struct { // +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object // +kubebuilder:storageversion // +kubebuilder:resource:shortName=vpa +// +kubebuilder:subresource:status // +kubebuilder:printcolumn:name="Mode",type="string",JSONPath=".spec.updatePolicy.updateMode" // +kubebuilder:printcolumn:name="CPU",type="string",JSONPath=".status.recommendation.containerRecommendations[0].target.cpu" // +kubebuilder:printcolumn:name="Mem",type="string",JSONPath=".status.recommendation.containerRecommendations[0].target.memory" @@ -93,8 +94,11 @@ type VerticalPodAutoscalerSpec struct { // Controls how the autoscaler computes recommended resources. // The resource policy may be used to set constraints on the recommendations - // for individual containers. If not specified, the autoscaler computes recommended - // resources for all containers in the pod, without additional constraints. + // for individual containers. + // If any individual containers need to be excluded from getting the VPA recommendations, then + // it must be disabled explicitly by setting mode to "Off" under containerPolicies. + // If not specified, the autoscaler computes recommended resources for all containers in the pod, + // without additional constraints. 
// +optional ResourcePolicy *PodResourcePolicy `json:"resourcePolicy,omitempty" protobuf:"bytes,3,opt,name=resourcePolicy"` @@ -105,6 +109,27 @@ type VerticalPodAutoscalerSpec struct { Recommenders []*VerticalPodAutoscalerRecommenderSelector `json:"recommenders,omitempty" protobuf:"bytes,4,opt,name=recommenders"` } +// EvictionChangeRequirement refers to the relationship between the new target recommendation for a Pod and its current requests, what kind of change is necessary for the Pod to be evicted +// +kubebuilder:validation:Enum:=TargetHigherThanRequests;TargetLowerThanRequests +type EvictionChangeRequirement string + +const ( + // TargetHigherThanRequests means the new target recommendation for a Pod is higher than its current requests, i.e. the Pod is scaled up + TargetHigherThanRequests EvictionChangeRequirement = "TargetHigherThanRequests" + // TargetLowerThanRequests means the new target recommendation for a Pod is lower than its current requests, i.e. the Pod is scaled down + TargetLowerThanRequests EvictionChangeRequirement = "TargetLowerThanRequests" +) + +// EvictionRequirement defines a single condition which needs to be true in +// order to evict a Pod +type EvictionRequirement struct { + // Resources is a list of one or more resources that the condition applies + // to. If more than one resource is given, the EvictionRequirement is fulfilled + // if at least one resource meets `changeRequirement`. + Resources []v1.ResourceName `json:"resource" protobuf:"bytes,1,name=resources"` + ChangeRequirement EvictionChangeRequirement `json:"changeRequirement" protobuf:"bytes,2,name=changeRequirement"` +} + // PodUpdatePolicy describes the rules on how changes are applied to the pods. type PodUpdatePolicy struct { // Controls when autoscaler applies changes to the pod resources. @@ -117,6 +142,12 @@ type PodUpdatePolicy struct { // allowed. Overrides global '--min-replicas' flag. 
// +optional MinReplicas *int32 `json:"minReplicas,omitempty" protobuf:"varint,2,opt,name=minReplicas"` + + // EvictionRequirements is a list of EvictionRequirements that need to + // evaluate to true in order for a Pod to be evicted. If more than one + // EvictionRequirement is specified, all of them need to be fulfilled to allow eviction. + // +optional + EvictionRequirements []*EvictionRequirement `json:"evictionRequirements,omitempty" protobuf:"bytes,3,opt,name=evictionRequirements"` } // UpdateMode controls when autoscaler applies changes to the pod resources. diff --git a/vertical-pod-autoscaler/pkg/apis/autoscaling.k8s.io/v1beta2/types.go b/vertical-pod-autoscaler/pkg/apis/autoscaling.k8s.io/v1beta2/types.go index de4c0843a575..d2fca95a6dfc 100644 --- a/vertical-pod-autoscaler/pkg/apis/autoscaling.k8s.io/v1beta2/types.go +++ b/vertical-pod-autoscaler/pkg/apis/autoscaling.k8s.io/v1beta2/types.go @@ -39,6 +39,7 @@ type VerticalPodAutoscalerList struct { // +genclient // +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object // +kubebuilder:resource:shortName=vpa +// +kubebuilder:subresource:status // +k8s:prerelease-lifecycle-gen=true // VerticalPodAutoscaler is the configuration for a vertical pod diff --git a/vertical-pod-autoscaler/pkg/recommender/Makefile b/vertical-pod-autoscaler/pkg/recommender/Makefile index bc870bc9efce..c51d0e944045 100644 --- a/vertical-pod-autoscaler/pkg/recommender/Makefile +++ b/vertical-pod-autoscaler/pkg/recommender/Makefile @@ -42,7 +42,7 @@ ifndef TAG ERR = $(error TAG is undefined) $(ERR) endif - docker build --pull -t ${REGISTRY}/${FULL_COMPONENT}-$*:${TAG} --build-arg ARCH=$* . + docker buildx build --pull --load --platform linux/$* -t ${REGISTRY}/${FULL_COMPONENT}-$*:${TAG} --build-arg ARCH=$* . 
.PHONY: docker-push docker-push: $(addprefix sub-push-,$(ALL_ARCHITECTURES)) push-multi-arch; @@ -76,9 +76,22 @@ build-in-docker: $(addprefix build-in-docker-,$(ALL_ARCHITECTURES)) .PHONY: build-in-docker-* build-in-docker-%: clean docker-builder + echo '=============== local git status ===============' + git status + echo '=============== last commit ===============' + git log -1 + echo '=============== building from the above ===============' docker run -v `pwd`/../..:/gopath/src/k8s.io/autoscaler/vertical-pod-autoscaler vpa-autoscaling-builder:latest bash -c 'cd /gopath/src/k8s.io/autoscaler/vertical-pod-autoscaler && make build-binary-with-vendor-$* -C pkg/recommender' -release: build-in-docker docker-build docker-push +.PHONY: create-buildx-builder +create-buildx-builder: + BUILDER=$(shell docker buildx create --driver=docker-container --use) + +.PHONY: remove-buildx-builder +remove-buildx-builder: + docker buildx rm ${BUILDER} + +release: build-in-docker create-buildx-builder docker-build remove-buildx-builder docker-push @echo "Full in-docker release ${FULL_COMPONENT}:${TAG} completed" clean: $(addprefix clean-,$(ALL_ARCHITECTURES)) diff --git a/vertical-pod-autoscaler/pkg/recommender/input/cluster_feeder.go b/vertical-pod-autoscaler/pkg/recommender/input/cluster_feeder.go index 2698268c508a..316ab1ec9f70 100644 --- a/vertical-pod-autoscaler/pkg/recommender/input/cluster_feeder.go +++ b/vertical-pod-autoscaler/pkg/recommender/input/cluster_feeder.go @@ -27,8 +27,13 @@ import ( "k8s.io/apimachinery/pkg/labels" "k8s.io/apimachinery/pkg/util/wait" "k8s.io/apimachinery/pkg/watch" + kube_client "k8s.io/client-go/kubernetes" + corev1 "k8s.io/client-go/kubernetes/typed/core/v1" + v1lister "k8s.io/client-go/listers/core/v1" + "k8s.io/client-go/tools/cache" + "k8s.io/klog/v2" + vpa_types "k8s.io/autoscaler/vertical-pod-autoscaler/pkg/apis/autoscaling.k8s.io/v1" - vpa_clientset "k8s.io/autoscaler/vertical-pod-autoscaler/pkg/client/clientset/versioned" vpa_api
"k8s.io/autoscaler/vertical-pod-autoscaler/pkg/client/clientset/versioned/typed/autoscaling.k8s.io/v1" vpa_lister "k8s.io/autoscaler/vertical-pod-autoscaler/pkg/client/listers/autoscaling.k8s.io/v1" controllerfetcher "k8s.io/autoscaler/vertical-pod-autoscaler/pkg/recommender/input/controller_fetcher" @@ -39,27 +44,12 @@ import ( "k8s.io/autoscaler/vertical-pod-autoscaler/pkg/recommender/model" "k8s.io/autoscaler/vertical-pod-autoscaler/pkg/target" metrics_recommender "k8s.io/autoscaler/vertical-pod-autoscaler/pkg/utils/metrics/recommender" - vpa_api_util "k8s.io/autoscaler/vertical-pod-autoscaler/pkg/utils/vpa" - "k8s.io/client-go/informers" - kube_client "k8s.io/client-go/kubernetes" - corev1 "k8s.io/client-go/kubernetes/typed/core/v1" - v1lister "k8s.io/client-go/listers/core/v1" - "k8s.io/client-go/rest" - "k8s.io/client-go/tools/cache" - "k8s.io/klog/v2" - resourceclient "k8s.io/metrics/pkg/client/clientset/versioned/typed/metrics/v1beta1" ) const ( - evictionWatchRetryWait = 10 * time.Second - evictionWatchJitterFactor = 0.5 - scaleCacheLoopPeriod = 7 * time.Second - scaleCacheEntryLifetime = time.Hour - scaleCacheEntryFreshnessTime = 10 * time.Minute - scaleCacheEntryJitterFactor float64 = 1. - defaultResyncPeriod = 10 * time.Minute - // DefaultRecommenderName designates the recommender that will handle VPA objects which don't specify - // recommender name explicitly (and so implicitly specify that the default recommender should handle them) + evictionWatchRetryWait = 10 * time.Second + evictionWatchJitterFactor = 0.5 + // DefaultRecommenderName recommender name explicitly (and so implicitly specify that the default recommender should handle them) DefaultRecommenderName = "default" ) @@ -116,34 +106,6 @@ func (m ClusterStateFeederFactory) Make() *clusterStateFeeder { } } -// NewClusterStateFeeder creates new ClusterStateFeeder with internal data providers, based on kube client config. -// Deprecated; Use ClusterStateFeederFactory instead. 
-func NewClusterStateFeeder(config *rest.Config, clusterState *model.ClusterState, memorySave bool, namespace, metricsClientName string, recommenderName string) ClusterStateFeeder { - kubeClient := kube_client.NewForConfigOrDie(config) - podLister, oomObserver := NewPodListerAndOOMObserver(kubeClient, namespace) - factory := informers.NewSharedInformerFactoryWithOptions(kubeClient, defaultResyncPeriod, informers.WithNamespace(namespace)) - controllerFetcher := controllerfetcher.NewControllerFetcher(config, kubeClient, factory, scaleCacheEntryFreshnessTime, scaleCacheEntryLifetime, scaleCacheEntryJitterFactor) - controllerFetcher.Start(context.TODO(), scaleCacheLoopPeriod) - return ClusterStateFeederFactory{ - PodLister: podLister, - OOMObserver: oomObserver, - KubeClient: kubeClient, - MetricsClient: newMetricsClient(config, namespace, metricsClientName), - VpaCheckpointClient: vpa_clientset.NewForConfigOrDie(config).AutoscalingV1(), - VpaLister: vpa_api_util.NewVpasLister(vpa_clientset.NewForConfigOrDie(config), make(chan struct{}), namespace), - ClusterState: clusterState, - SelectorFetcher: target.NewVpaTargetSelectorFetcher(config, kubeClient, factory), - MemorySaveMode: memorySave, - ControllerFetcher: controllerFetcher, - RecommenderName: recommenderName, - }.Make() -} - -func newMetricsClient(config *rest.Config, namespace, clientName string) metrics.MetricsClient { - metricsGetter := resourceclient.NewForConfigOrDie(config) - return metrics.NewMetricsClient(metricsGetter, namespace, clientName) -} - // WatchEvictionEventsWithRetries watches new Events with reason=Evicted and passes them to the observer. 
func WatchEvictionEventsWithRetries(kubeClient kube_client.Interface, observer oom.Observer, namespace string) { go func() { diff --git a/vertical-pod-autoscaler/pkg/recommender/input/history/history_provider.go b/vertical-pod-autoscaler/pkg/recommender/input/history/history_provider.go index 0c4573af0e02..9ed8bf03e5d4 100644 --- a/vertical-pod-autoscaler/pkg/recommender/input/history/history_provider.go +++ b/vertical-pod-autoscaler/pkg/recommender/input/history/history_provider.go @@ -19,6 +19,7 @@ package history import ( "context" "fmt" + "net/http" "sort" "strings" "time" @@ -32,6 +33,18 @@ import ( "k8s.io/autoscaler/vertical-pod-autoscaler/pkg/recommender/model" ) +// PrometheusBasicAuthTransport contains the username and password of prometheus server +type PrometheusBasicAuthTransport struct { + Username string + Password string +} + +// RoundTrip function injects the username and password in the request's basic auth header +func (t *PrometheusBasicAuthTransport) RoundTrip(req *http.Request) (*http.Response, error) { + req.SetBasicAuth(t.Username, t.Password) + return http.DefaultTransport.RoundTrip(req) +} + // PrometheusHistoryProviderConfig allow to select which metrics // should be queried to get real resource utilization. type PrometheusHistoryProviderConfig struct { @@ -43,6 +56,7 @@ type PrometheusHistoryProviderConfig struct { CtrNamespaceLabel, CtrPodNameLabel, CtrNameLabel string CadvisorMetricsJobName string Namespace string + PrometheusBasicAuthTransport } // PodHistory represents history of usage and labels for a given pod. @@ -76,9 +90,19 @@ type prometheusHistoryProvider struct { // NewPrometheusHistoryProvider constructs a history provider that gets data from Prometheus. 
func NewPrometheusHistoryProvider(config PrometheusHistoryProviderConfig) (HistoryProvider, error) { - promClient, err := promapi.NewClient(promapi.Config{ + promConfig := promapi.Config{ Address: config.Address, - }) + } + + if config.Username != "" && config.Password != "" { + transport := &PrometheusBasicAuthTransport{ + Username: config.Username, + Password: config.Password, + } + promConfig.RoundTripper = transport + } + + promClient, err := promapi.NewClient(promConfig) if err != nil { return &prometheusHistoryProvider{}, err } diff --git a/vertical-pod-autoscaler/pkg/recommender/input/metrics/metrics_client.go b/vertical-pod-autoscaler/pkg/recommender/input/metrics/metrics_client.go index d9ae2791d5ec..7a55d9641649 100644 --- a/vertical-pod-autoscaler/pkg/recommender/input/metrics/metrics_client.go +++ b/vertical-pod-autoscaler/pkg/recommender/input/metrics/metrics_client.go @@ -22,11 +22,13 @@ import ( k8sapiv1 "k8s.io/api/core/v1" metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" - "k8s.io/autoscaler/vertical-pod-autoscaler/pkg/recommender/model" - recommender_metrics "k8s.io/autoscaler/vertical-pod-autoscaler/pkg/utils/metrics/recommender" + "k8s.io/client-go/rest" "k8s.io/klog/v2" "k8s.io/metrics/pkg/apis/metrics/v1beta1" resourceclient "k8s.io/metrics/pkg/client/clientset/versioned/typed/metrics/v1beta1" + + "k8s.io/autoscaler/vertical-pod-autoscaler/pkg/recommender/model" + recommender_metrics "k8s.io/autoscaler/vertical-pod-autoscaler/pkg/utils/metrics/recommender" ) // ContainerMetricsSnapshot contains information about usage of certain container within defined time window. @@ -55,9 +57,13 @@ type metricsClient struct { } // NewMetricsClient creates new instance of MetricsClient, which is used by recommender. -// It requires an instance of PodMetricsesGetter, which is used for underlying communication with metrics server. // namespace limits queries to particular namespace, use k8sapiv1.NamespaceAll to select all namespaces. 
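The basic-auth support above works by handing the Prometheus API client a custom `http.RoundTripper` that stamps the `Authorization` header onto every outgoing request before delegating to `http.DefaultTransport`. A minimal, self-contained sketch of the same pattern (the type name mirrors the patch; `authHeader` and the example URL are illustrative helpers, not part of the patch):

```go
package main

import (
	"fmt"
	"net/http"
)

// basicAuthTransport mirrors the PrometheusBasicAuthTransport added in this
// patch: it injects basic-auth credentials into each request, then delegates
// the actual round trip to http.DefaultTransport.
type basicAuthTransport struct {
	Username string
	Password string
}

func (t *basicAuthTransport) RoundTrip(req *http.Request) (*http.Response, error) {
	req.SetBasicAuth(t.Username, t.Password)
	return http.DefaultTransport.RoundTrip(req)
}

// authHeader shows what SetBasicAuth writes into the request; it is a helper
// for this sketch only.
func authHeader(user, pass string) string {
	req, _ := http.NewRequest("GET", "http://prometheus.example:9090", nil)
	req.SetBasicAuth(user, pass)
	return req.Header.Get("Authorization")
}

func main() {
	// SetBasicAuth produces "Basic " + base64(user + ":" + pass).
	fmt.Println(authHeader("prom", "secret")) // Basic cHJvbTpzZWNyZXQ=
}
```

One design caveat worth noting: the `net/http` documentation says a `RoundTripper` should not modify the request it is given; a stricter implementation would clone the request before calling `SetBasicAuth`.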
-func NewMetricsClient(metricsGetter resourceclient.PodMetricsesGetter, namespace, clientName string) MetricsClient { +func NewMetricsClient(config *rest.Config, namespace, clientName string) MetricsClient { + metricsGetter := resourceclient.NewForConfigOrDie(config) + return newMetricsClient(metricsGetter, namespace, clientName) +} + +func newMetricsClient(metricsGetter resourceclient.PodMetricsesGetter, namespace, clientName string) MetricsClient { return &metricsClient{ metricsGetter: metricsGetter, namespace: namespace, diff --git a/vertical-pod-autoscaler/pkg/recommender/input/metrics/metrics_client_test_util.go b/vertical-pod-autoscaler/pkg/recommender/input/metrics/metrics_client_test_util.go index e03912cd4410..f2a623fb162a 100644 --- a/vertical-pod-autoscaler/pkg/recommender/input/metrics/metrics_client_test_util.go +++ b/vertical-pod-autoscaler/pkg/recommender/input/metrics/metrics_client_test_util.go @@ -29,9 +29,10 @@ import ( "k8s.io/apimachinery/pkg/runtime" core "k8s.io/client-go/testing" - "k8s.io/autoscaler/vertical-pod-autoscaler/pkg/recommender/model" metricsapi "k8s.io/metrics/pkg/apis/metrics/v1beta1" "k8s.io/metrics/pkg/client/clientset/versioned/fake" + + "k8s.io/autoscaler/vertical-pod-autoscaler/pkg/recommender/model" ) type metricsClientTestCase struct { @@ -84,7 +85,7 @@ func (tc *metricsClientTestCase) createFakeMetricsClient() MetricsClient { fakeMetricsGetter.AddReactor("list", "pods", func(action core.Action) (handled bool, ret runtime.Object, err error) { return true, tc.getFakePodMetricsList(), nil }) - return NewMetricsClient(fakeMetricsGetter.MetricsV1beta1(), "", "fake") + return newMetricsClient(fakeMetricsGetter.MetricsV1beta1(), "", "fake") } func (tc *metricsClientTestCase) getFakePodMetricsList() *metricsapi.PodMetricsList { diff --git a/vertical-pod-autoscaler/pkg/recommender/main.go b/vertical-pod-autoscaler/pkg/recommender/main.go index ddaaf27e5f19..c03d4959764d 100644 --- 
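The `NewMetricsClient` change above splits the constructor in two: the exported function now takes a `*rest.Config` and builds the real `PodMetricsesGetter` itself, while an unexported `newMetricsClient` keeps accepting the getter directly so the test util can inject a fake. A sketch of that "exported wiring + unexported seam" pattern, with hypothetical names standing in for the real types:

```go
package main

import "fmt"

// podMetricsGetter stands in for resourceclient.PodMetricsesGetter.
type podMetricsGetter interface {
	Get(pod string) int
}

// realGetter is a placeholder for the client built from *rest.Config.
type realGetter struct{}

func (realGetter) Get(pod string) int { return 0 } // would call the metrics API

type client struct {
	getter    podMetricsGetter
	namespace string
}

// NewClient is the production entry point: it wires up the real getter,
// the way NewMetricsClient now builds its getter from the rest config.
func NewClient(namespace string) *client {
	return newClient(realGetter{}, namespace)
}

// newClient is the unexported seam: tests hand in a fake getter here.
func newClient(g podMetricsGetter, namespace string) *client {
	return &client{getter: g, namespace: namespace}
}

// fakeGetter plays the role of the fake clientset in the test util.
type fakeGetter struct{ cpu int }

func (f fakeGetter) Get(pod string) int { return f.cpu }

func main() {
	c := newClient(fakeGetter{cpu: 250}, "default")
	fmt.Println(c.getter.Get("test-pod")) // 250
}
```

The payoff is visible in `metrics_client_test_util.go` above: the fake test path switches from the exported constructor to the unexported one, and production callers no longer need to know how the getter is built.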
a/vertical-pod-autoscaler/pkg/recommender/main.go +++ b/vertical-pod-autoscaler/pkg/recommender/main.go @@ -17,21 +17,31 @@ limitations under the License. package main import ( + "context" "flag" "time" - "k8s.io/autoscaler/vertical-pod-autoscaler/pkg/recommender/input" - apiv1 "k8s.io/api/core/v1" + "k8s.io/client-go/informers" + kube_client "k8s.io/client-go/kubernetes" + kube_flag "k8s.io/component-base/cli/flag" + klog "k8s.io/klog/v2" + "k8s.io/autoscaler/vertical-pod-autoscaler/common" + vpa_clientset "k8s.io/autoscaler/vertical-pod-autoscaler/pkg/client/clientset/versioned" + "k8s.io/autoscaler/vertical-pod-autoscaler/pkg/recommender/checkpoint" + "k8s.io/autoscaler/vertical-pod-autoscaler/pkg/recommender/input" + controllerfetcher "k8s.io/autoscaler/vertical-pod-autoscaler/pkg/recommender/input/controller_fetcher" "k8s.io/autoscaler/vertical-pod-autoscaler/pkg/recommender/input/history" + input_metrics "k8s.io/autoscaler/vertical-pod-autoscaler/pkg/recommender/input/metrics" + "k8s.io/autoscaler/vertical-pod-autoscaler/pkg/recommender/logic" "k8s.io/autoscaler/vertical-pod-autoscaler/pkg/recommender/model" "k8s.io/autoscaler/vertical-pod-autoscaler/pkg/recommender/routines" + "k8s.io/autoscaler/vertical-pod-autoscaler/pkg/target" "k8s.io/autoscaler/vertical-pod-autoscaler/pkg/utils/metrics" metrics_quality "k8s.io/autoscaler/vertical-pod-autoscaler/pkg/utils/metrics/quality" metrics_recommender "k8s.io/autoscaler/vertical-pod-autoscaler/pkg/utils/metrics/recommender" - kube_flag "k8s.io/component-base/cli/flag" - klog "k8s.io/klog/v2" + vpa_api_util "k8s.io/autoscaler/vertical-pod-autoscaler/pkg/utils/vpa" ) var ( @@ -58,6 +68,9 @@ var ( ctrPodNameLabel = flag.String("container-pod-name-label", "pod_name", `Label name to look for container pod names`) ctrNameLabel = flag.String("container-name-label", "name", `Label name to look for container names`) vpaObjectNamespace = flag.String("vpa-object-namespace", apiv1.NamespaceAll, "Namespace to search for VPA 
objects and pod stats. Empty means all namespaces will be used.") + username = flag.String("username", "", "The username used in the prometheus server basic auth") + password = flag.String("password", "", "The password used in the prometheus server basic auth") + memorySaver = flag.Bool("memory-saver", false, `If true, only track pods which have an associated VPA`) ) // Aggregation configuration flags @@ -76,12 +89,27 @@ var ( postProcessorCPUasInteger = flag.Bool("cpu-integer-post-processor-enabled", false, "Enable the cpu-integer recommendation post processor. The post processor will round up CPU recommendations to a whole CPU for pods which were opted in by setting an appropriate label on VPA object (experimental)") ) +const ( + // aggregateContainerStateGCInterval defines how often expired AggregateContainerStates are garbage collected. + aggregateContainerStateGCInterval = 1 * time.Hour + scaleCacheEntryLifetime time.Duration = time.Hour + scaleCacheEntryFreshnessTime time.Duration = 10 * time.Minute + scaleCacheEntryJitterFactor float64 = 1. 
+ scaleCacheLoopPeriod = 7 * time.Second + defaultResyncPeriod time.Duration = 10 * time.Minute +) + func main() { klog.InitFlags(nil) kube_flag.InitFlags() klog.V(1).Infof("Vertical Pod Autoscaler %s Recommender: %v", common.VerticalPodAutoscalerVersion, recommenderName) config := common.CreateKubeConfigOrDie(*kubeconfig, float32(*kubeApiQps), int(*kubeApiBurst)) + kubeClient := kube_client.NewForConfigOrDie(config) + clusterState := model.NewClusterState(aggregateContainerStateGCInterval) + factory := informers.NewSharedInformerFactoryWithOptions(kubeClient, defaultResyncPeriod, informers.WithNamespace(*vpaObjectNamespace)) + controllerFetcher := controllerfetcher.NewControllerFetcher(config, kubeClient, factory, scaleCacheEntryFreshnessTime, scaleCacheEntryLifetime, scaleCacheEntryJitterFactor) + podLister, oomObserver := input.NewPodListerAndOOMObserver(kubeClient, *vpaObjectNamespace) model.InitializeAggregationsConfig(model.NewAggregationsConfig(*memoryAggregationInterval, *memoryAggregationIntervalCount, *memoryHistogramDecayHalfLife, *cpuHistogramDecayHalfLife, *oomBumpUpRatio, *oomMinBumpUp)) @@ -99,7 +127,32 @@ func main() { // CappingPostProcessor, should always come in the last position for post-processing postProcessors = append(postProcessors, &routines.CappingPostProcessor{}) - recommender := routines.NewRecommender(config, *checkpointsGCInterval, useCheckpoints, *vpaObjectNamespace, *recommenderName, postProcessors) + clusterStateFeeder := input.ClusterStateFeederFactory{ + PodLister: podLister, + OOMObserver: oomObserver, + KubeClient: kubeClient, + MetricsClient: input_metrics.NewMetricsClient(config, *vpaObjectNamespace, "default-metrics-client"), + VpaCheckpointClient: vpa_clientset.NewForConfigOrDie(config).AutoscalingV1(), + VpaLister: vpa_api_util.NewVpasLister(vpa_clientset.NewForConfigOrDie(config), make(chan struct{}), *vpaObjectNamespace), + ClusterState: clusterState, + SelectorFetcher: target.NewVpaTargetSelectorFetcher(config, 
kubeClient, factory), + MemorySaveMode: *memorySaver, + ControllerFetcher: controllerFetcher, + RecommenderName: *recommenderName, + }.Make() + controllerFetcher.Start(context.Background(), scaleCacheLoopPeriod) + + recommender := routines.RecommenderFactory{ + ClusterState: clusterState, + ClusterStateFeeder: clusterStateFeeder, + ControllerFetcher: controllerFetcher, + CheckpointWriter: checkpoint.NewCheckpointWriter(clusterState, vpa_clientset.NewForConfigOrDie(config).AutoscalingV1()), + VpaClient: vpa_clientset.NewForConfigOrDie(config).AutoscalingV1(), + PodResourceRecommender: logic.CreatePodResourceRecommender(), + RecommendationPostProcessors: postProcessors, + CheckpointsGCInterval: *checkpointsGCInterval, + UseCheckpoints: useCheckpoints, + }.Make() promQueryTimeout, err := time.ParseDuration(*queryTimeout) if err != nil { @@ -123,6 +176,10 @@ func main() { CtrNameLabel: *ctrNameLabel, CadvisorMetricsJobName: *prometheusJobName, Namespace: *vpaObjectNamespace, + PrometheusBasicAuthTransport: history.PrometheusBasicAuthTransport{ + Username: *username, + Password: *password, + }, } provider, err := history.NewPrometheusHistoryProvider(config) if err != nil { diff --git a/vertical-pod-autoscaler/pkg/recommender/model/cluster.go b/vertical-pod-autoscaler/pkg/recommender/model/cluster.go index 45a49d529c75..e2abc4f010b6 100644 --- a/vertical-pod-autoscaler/pkg/recommender/model/cluster.go +++ b/vertical-pod-autoscaler/pkg/recommender/model/cluster.go @@ -279,6 +279,7 @@ func (cluster *ClusterState) AddOrUpdateVpa(apiObject *vpa_types.VerticalPodAuto vpa.Recommendation = currentRecommendation vpa.SetUpdateMode(apiObject.Spec.UpdatePolicy) vpa.SetResourcePolicy(apiObject.Spec.ResourcePolicy) + vpa.SetAPIVersion(apiObject.GetObjectKind().GroupVersionKind().Version) return nil } diff --git a/vertical-pod-autoscaler/pkg/recommender/model/cluster_test.go b/vertical-pod-autoscaler/pkg/recommender/model/cluster_test.go index 520a91bfdc83..067bc3953e08 100644 --- 
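The wiring above replaces the deprecated `NewRecommender`/`NewClusterStateFeeder` convenience constructors with explicit factory structs whose `Make()` assembles the object: the caller in `main.go` now builds every dependency itself and hands them over in one literal. The shape of that pattern, reduced to a toy with hypothetical fields:

```go
package main

import "fmt"

// feeder and recommender are stand-ins for the real ClusterStateFeeder and
// Recommender types; only the construction pattern is the point here.
type feeder struct{ name string }

type recommender struct {
	feeder         *feeder
	useCheckpoints bool
}

// RecommenderFactory mimics routines.RecommenderFactory: a bag of
// explicitly supplied dependencies plus a Make() that wires them together.
type RecommenderFactory struct {
	Feeder         *feeder
	UseCheckpoints bool
}

func (f RecommenderFactory) Make() *recommender {
	return &recommender{feeder: f.Feeder, useCheckpoints: f.UseCheckpoints}
}

func main() {
	// Dependencies are visible at the call site instead of being created
	// inside a constructor, which is what the patch's main() now does.
	r := RecommenderFactory{
		Feeder:         &feeder{name: "default"},
		UseCheckpoints: true,
	}.Make()
	fmt.Println(r.feeder.name, r.useCheckpoints) // default true
}
```

Moving construction to the caller is what lets this patch share one `kubeClient`, `factory`, and `controllerFetcher` between the feeder and the recommender instead of each constructor building its own.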
a/vertical-pod-autoscaler/pkg/recommender/model/cluster_test.go +++ b/vertical-pod-autoscaler/pkg/recommender/model/cluster_test.go @@ -450,6 +450,7 @@ func TestAddOrUpdateVPAPolicies(t *testing.T) { resourcePolicy *vpa_types.PodResourcePolicy expectedScalingMode *vpa_types.ContainerScalingMode expectedUpdateMode *vpa_types.UpdateMode + expectedAPIVersion string }{ { name: "Defaults to auto", @@ -459,6 +460,7 @@ func TestAddOrUpdateVPAPolicies(t *testing.T) { // hence the UpdateModeOff does not influence container scaling mode here. expectedScalingMode: &scalingModeAuto, expectedUpdateMode: &updateModeOff, + expectedAPIVersion: "v1", }, { name: "Default scaling mode set to Off", oldVpa: nil, @@ -473,6 +475,7 @@ func TestAddOrUpdateVPAPolicies(t *testing.T) { }, expectedScalingMode: &scalingModeOff, expectedUpdateMode: &updateModeAuto, + expectedAPIVersion: "v1", }, { name: "Explicit scaling mode set to Off", oldVpa: nil, @@ -487,6 +490,7 @@ func TestAddOrUpdateVPAPolicies(t *testing.T) { }, expectedScalingMode: &scalingModeOff, expectedUpdateMode: &updateModeAuto, + expectedAPIVersion: "v1", }, { name: "Other container has explicit scaling mode Off", oldVpa: nil, @@ -501,6 +505,7 @@ func TestAddOrUpdateVPAPolicies(t *testing.T) { }, expectedScalingMode: &scalingModeAuto, expectedUpdateMode: &updateModeAuto, + expectedAPIVersion: "v1", }, { name: "Scaling mode to default Off", oldVpa: testVpaBuilder.WithUpdateMode(vpa_types.UpdateModeAuto).Get(), @@ -515,6 +520,7 @@ func TestAddOrUpdateVPAPolicies(t *testing.T) { }, expectedScalingMode: &scalingModeOff, expectedUpdateMode: &updateModeAuto, + expectedAPIVersion: "v1", }, { name: "Scaling mode to explicit Off", oldVpa: testVpaBuilder.WithUpdateMode(vpa_types.UpdateModeAuto).Get(), @@ -529,6 +535,7 @@ func TestAddOrUpdateVPAPolicies(t *testing.T) { }, expectedScalingMode: &scalingModeOff, expectedUpdateMode: &updateModeAuto, + expectedAPIVersion: "v1", }, // Tests checking changes to UpdateMode. 
{ @@ -537,12 +544,49 @@ func TestAddOrUpdateVPAPolicies(t *testing.T) { newVpa: testVpaBuilder.WithUpdateMode(vpa_types.UpdateModeAuto).Get(), expectedScalingMode: &scalingModeAuto, expectedUpdateMode: &updateModeAuto, + expectedAPIVersion: "v1", }, { name: "UpdateMode from Auto to Off", oldVpa: testVpaBuilder.WithUpdateMode(vpa_types.UpdateModeAuto).Get(), newVpa: testVpaBuilder.WithUpdateMode(vpa_types.UpdateModeOff).Get(), expectedScalingMode: &scalingModeAuto, expectedUpdateMode: &updateModeOff, + expectedAPIVersion: "v1", + }, + // Test different API versions being recorded. + // Note that this path for testing the apiVersions is not actively exercised + // in a running recommender. The GroupVersion is cleared before it reaches + // the recommenders code. These tests only test the propagation of version + // changes. When introducing new api versions that need to be differentiated + // in logic and/or metrics a dedicated detection mechanism is needed for + // those new versions. We can not get this information from the api request: + // https://github.com/kubernetes/kubernetes/pull/59264#issuecomment-362579495 + { + name: "Record APIVersion v1", + oldVpa: nil, + newVpa: testVpaBuilder.WithGroupVersion(metav1.GroupVersion(vpa_types.SchemeGroupVersion)).Get(), + expectedScalingMode: &scalingModeAuto, + expectedAPIVersion: "v1", + }, + { + name: "Record APIVersion v1beta2", + oldVpa: nil, + newVpa: testVpaBuilder.WithGroupVersion(metav1.GroupVersion{ + Group: vpa_types.SchemeGroupVersion.Group, + Version: "v1beta2", + }).Get(), + expectedScalingMode: &scalingModeAuto, + expectedAPIVersion: "v1beta2", + }, + { + name: "Record APIVersion v1beta1", + oldVpa: nil, + newVpa: testVpaBuilder.WithGroupVersion(metav1.GroupVersion{ + Group: vpa_types.SchemeGroupVersion.Group, + Version: "v1beta1", + }).Get(), + expectedScalingMode: &scalingModeAuto, + expectedAPIVersion: "v1beta1", }, } for _, tc := range cases { @@ -572,6 +616,7 @@ func TestAddOrUpdateVPAPolicies(t 
*testing.T) { assert.Equal(t, tc.expectedUpdateMode, aggregation.UpdateMode, "Unexpected update mode for container %s", containerName) assert.Equal(t, tc.expectedScalingMode, aggregation.GetScalingMode(), "Unexpected scaling mode for container %s", containerName) } + assert.Equal(t, tc.expectedAPIVersion, vpa.APIVersion) }) } } diff --git a/vertical-pod-autoscaler/pkg/recommender/model/vpa.go b/vertical-pod-autoscaler/pkg/recommender/model/vpa.go index a3f6232aafa2..4a97c4c5ba2b 100644 --- a/vertical-pod-autoscaler/pkg/recommender/model/vpa.go +++ b/vertical-pod-autoscaler/pkg/recommender/model/vpa.go @@ -104,8 +104,8 @@ type Vpa struct { Created time.Time // CheckpointWritten indicates when last checkpoint for the VPA object was stored. CheckpointWritten time.Time - // IsV1Beta1API is set to true if VPA object has labelSelector defined as in v1beta1 api. - IsV1Beta1API bool + // APIVersion of the VPA object. + APIVersion string // TargetRef points to the controller managing the set of pods. TargetRef *autoscaling.CrossVersionObjectReference // PodCount contains number of live Pods matching a given VPA object. @@ -123,12 +123,27 @@ func NewVpa(id VpaID, selector labels.Selector, created time.Time) *Vpa { Created: created, Annotations: make(vpaAnnotationsMap), Conditions: make(vpaConditionsMap), - IsV1Beta1API: false, - PodCount: 0, + // APIVersion defaults to the version of the client used to read resources. + // If a new version is introduced that needs to be differentiated beyond the + // client conversion, this needs to be done based on the resource content. + // The K8s client will not return the resource apiVersion as it's converted + // to the version requested by the client server side. + APIVersion: vpa_types.SchemeGroupVersion.Version, + PodCount: 0, } return vpa } +// SetAPIVersion to the version of the VPA API object. +// Default API Version is the API version of the VPA client. +// If the provided version is empty, no change is made. 
+func (vpa *Vpa) SetAPIVersion(to string) { + if to == "" { + return + } + vpa.APIVersion = to +} + // UseAggregationIfMatching checks if the given aggregation matches (contributes to) this VPA // and adds it to the set of VPA's aggregations if that is the case. func (vpa *Vpa) UseAggregationIfMatching(aggregationKey AggregateStateKey, aggregation *AggregateContainerState) { diff --git a/vertical-pod-autoscaler/pkg/recommender/routines/recommender.go b/vertical-pod-autoscaler/pkg/recommender/routines/recommender.go index 46a167f04d75..564b9b9845a2 100644 --- a/vertical-pod-autoscaler/pkg/recommender/routines/recommender.go +++ b/vertical-pod-autoscaler/pkg/recommender/routines/recommender.go @@ -21,7 +21,8 @@ import ( "flag" "time" - vpa_clientset "k8s.io/autoscaler/vertical-pod-autoscaler/pkg/client/clientset/versioned" + "k8s.io/klog/v2" + vpa_api "k8s.io/autoscaler/vertical-pod-autoscaler/pkg/client/clientset/versioned/typed/autoscaling.k8s.io/v1" "k8s.io/autoscaler/vertical-pod-autoscaler/pkg/recommender/checkpoint" "k8s.io/autoscaler/vertical-pod-autoscaler/pkg/recommender/input" @@ -30,25 +31,11 @@ import ( "k8s.io/autoscaler/vertical-pod-autoscaler/pkg/recommender/model" metrics_recommender "k8s.io/autoscaler/vertical-pod-autoscaler/pkg/utils/metrics/recommender" vpa_utils "k8s.io/autoscaler/vertical-pod-autoscaler/pkg/utils/vpa" - "k8s.io/client-go/informers" - kube_client "k8s.io/client-go/kubernetes" - "k8s.io/client-go/rest" - klog "k8s.io/klog/v2" -) - -const ( - // AggregateContainerStateGCInterval defines how often expired AggregateContainerStates are garbage collected. - AggregateContainerStateGCInterval = 1 * time.Hour - scaleCacheEntryLifetime time.Duration = time.Hour - scaleCacheEntryFreshnessTime time.Duration = 10 * time.Minute - scaleCacheEntryJitterFactor float64 = 1. 
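`SetAPIVersion` above is a guarded setter: `NewVpa` seeds `APIVersion` with the client's version (`v1`), and an empty argument, which is what a server-side-cleared `GroupVersionKind` yields, leaves that default intact. A standalone sketch of the behavior (the constant stands in for `vpa_types.SchemeGroupVersion.Version`):

```go
package main

import "fmt"

// defaultAPIVersion plays the role of vpa_types.SchemeGroupVersion.Version.
const defaultAPIVersion = "v1"

type vpa struct{ APIVersion string }

// newVpa defaults APIVersion to the client's version, as NewVpa does.
func newVpa() *vpa { return &vpa{APIVersion: defaultAPIVersion} }

// SetAPIVersion mirrors the guard in the patch: an empty version is a no-op
// so the client-side default survives a cleared GroupVersionKind.
func (v *vpa) SetAPIVersion(to string) {
	if to == "" {
		return
	}
	v.APIVersion = to
}

func main() {
	v := newVpa()
	v.SetAPIVersion("") // cleared by the API server: keep the default
	fmt.Println(v.APIVersion) // v1
	v.SetAPIVersion("v1beta2")
	fmt.Println(v.APIVersion) // v1beta2
}
```

This matches the test table above, where only an explicitly set `GroupVersion` produces a non-`v1` recorded version.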
- defaultResyncPeriod time.Duration = 10 * time.Minute ) var ( checkpointsWriteTimeout = flag.Duration("checkpoints-timeout", time.Minute, `Timeout for writing checkpoints since the start of the recommender's main loop`) minCheckpointsPerRun = flag.Int("min-checkpoints", 10, "Minimum number of checkpoints to write per recommender's main loop") - memorySaver = flag.Bool("memory-saver", false, `If true, only track pods which have an associated VPA`) ) // Recommender recommend resources for certain containers, based on utilization periodically got from metrics api. @@ -223,25 +210,3 @@ func (c RecommenderFactory) Make() Recommender { klog.V(3).Infof("New Recommender created %+v", recommender) return recommender } - -// NewRecommender creates a new recommender instance. -// Dependencies are created automatically. -// Deprecated; use RecommenderFactory instead. -func NewRecommender(config *rest.Config, checkpointsGCInterval time.Duration, useCheckpoints bool, namespace string, recommenderName string, recommendationPostProcessors []RecommendationPostProcessor) Recommender { - clusterState := model.NewClusterState(AggregateContainerStateGCInterval) - kubeClient := kube_client.NewForConfigOrDie(config) - factory := informers.NewSharedInformerFactoryWithOptions(kubeClient, defaultResyncPeriod, informers.WithNamespace(namespace)) - controllerFetcher := controllerfetcher.NewControllerFetcher(config, kubeClient, factory, scaleCacheEntryFreshnessTime, scaleCacheEntryLifetime, scaleCacheEntryJitterFactor) - - return RecommenderFactory{ - ClusterState: clusterState, - ClusterStateFeeder: input.NewClusterStateFeeder(config, clusterState, *memorySaver, namespace, "default-metrics-client", recommenderName), - ControllerFetcher: controllerFetcher, - CheckpointWriter: checkpoint.NewCheckpointWriter(clusterState, vpa_clientset.NewForConfigOrDie(config).AutoscalingV1()), - VpaClient: vpa_clientset.NewForConfigOrDie(config).AutoscalingV1(), - PodResourceRecommender: 
logic.CreatePodResourceRecommender(), - RecommendationPostProcessors: recommendationPostProcessors, - CheckpointsGCInterval: checkpointsGCInterval, - UseCheckpoints: useCheckpoints, - }.Make() -} diff --git a/vertical-pod-autoscaler/pkg/updater/Makefile b/vertical-pod-autoscaler/pkg/updater/Makefile index 484fe1d15711..8f7a4a35d2bd 100644 --- a/vertical-pod-autoscaler/pkg/updater/Makefile +++ b/vertical-pod-autoscaler/pkg/updater/Makefile @@ -42,7 +42,7 @@ ifndef TAG ERR = $(error TAG is undefined) $(ERR) endif - docker build --pull -t ${REGISTRY}/${FULL_COMPONENT}-$*:${TAG} --build-arg ARCH=$* . + docker buildx build --pull --load --platform linux/$* -t ${REGISTRY}/${FULL_COMPONENT}-$*:${TAG} --build-arg ARCH=$* . .PHONY: docker-push docker-push: $(addprefix sub-push-,$(ALL_ARCHITECTURES)) push-multi-arch; @@ -76,10 +76,23 @@ build-in-docker: $(addprefix build-in-docker-,$(ALL_ARCHITECTURES)) .PHONY: build-in-docker-* build-in-docker-%: clean docker-builder + echo '=============== local git status ===============' + git status + echo '=============== last commit ===============' + git log -1 + echo '=============== building from the above ===============' docker run -v `pwd`/../..:/gopath/src/k8s.io/autoscaler/vertical-pod-autoscaler vpa-autoscaling-builder:latest bash -c 'cd /gopath/src/k8s.io/autoscaler/vertical-pod-autoscaler && make build-binary-with-vendor-$* -C pkg/updater' +.PHONY: create-buildx-builder +create-buildx-builder: + BUILDER=$(shell docker buildx create --driver=docker-container --use) + +.PHONY: remove-buildx-builder +remove-buildx-builder: + docker buildx rm ${BUILDER} + .PHONY: release -release: build-in-docker docker-build docker-push +release: build-in-docker create-buildx-builder docker-build remove-buildx-builder docker-push @echo "Full in-docker release ${FULL_COMPONENT}:${TAG} completed" clean: $(addprefix clean-,$(ALL_ARCHITECTURES)) diff --git a/vertical-pod-autoscaler/pkg/updater/logic/updater_test.go 
b/vertical-pod-autoscaler/pkg/updater/logic/updater_test.go index a53fe1ab8a24..bc7fc3ca5ca1 100644 --- a/vertical-pod-autoscaler/pkg/updater/logic/updater_test.go +++ b/vertical-pod-autoscaler/pkg/updater/logic/updater_test.go @@ -27,6 +27,7 @@ import ( "github.com/golang/mock/gomock" "github.com/stretchr/testify/assert" apiv1 "k8s.io/api/core/v1" + "k8s.io/apimachinery/pkg/api/resource" metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" "k8s.io/apimachinery/pkg/labels" vpa_types "k8s.io/autoscaler/vertical-pod-autoscaler/pkg/apis/autoscaling.k8s.io/v1" @@ -144,7 +145,7 @@ func testRunOnceBase( for i := range pods { pods[i] = test.Pod().WithName("test_"+strconv.Itoa(i)). - AddContainer(test.BuildTestContainer(containerName, "1", "100M")). + AddContainer(test.Container().WithName(containerName).WithCPURequest(resource.MustParse("1")).WithMemRequest(resource.MustParse("100M")).Get()). WithCreator(&rc.ObjectMeta, &rc.TypeMeta). Get() diff --git a/vertical-pod-autoscaler/pkg/updater/priority/priority_processor_test.go b/vertical-pod-autoscaler/pkg/updater/priority/priority_processor_test.go index adff2d6e6000..ceda7c340e14 100644 --- a/vertical-pod-autoscaler/pkg/updater/priority/priority_processor_test.go +++ b/vertical-pod-autoscaler/pkg/updater/priority/priority_processor_test.go @@ -20,6 +20,7 @@ import ( "testing" corev1 "k8s.io/api/core/v1" + "k8s.io/apimachinery/pkg/api/resource" vpa_types "k8s.io/autoscaler/vertical-pod-autoscaler/pkg/apis/autoscaling.k8s.io/v1" "k8s.io/autoscaler/vertical-pod-autoscaler/pkg/utils/annotations" "k8s.io/autoscaler/vertical-pod-autoscaler/pkg/utils/test" @@ -37,7 +38,7 @@ func TestGetUpdatePriority(t *testing.T) { }{ { name: "simple scale up", - pod: test.Pod().WithName("POD1").AddContainer(test.BuildTestContainer(containerName, "2", "")).Get(), + pod: test.Pod().WithName("POD1").AddContainer(test.Container().WithName(containerName).WithCPURequest(resource.MustParse("2")).Get()).Get(), vpa: 
test.VerticalPodAutoscaler().WithContainer(containerName).WithTarget("10", "").Get(), expectedPrio: PodPriority{ OutsideRecommendedRange: false, @@ -46,7 +47,7 @@ func TestGetUpdatePriority(t *testing.T) { }, }, { name: "simple scale down", - pod: test.Pod().WithName("POD1").AddContainer(test.BuildTestContainer(containerName, "4", "")).Get(), + pod: test.Pod().WithName("POD1").AddContainer(test.Container().WithName(containerName).WithCPURequest(resource.MustParse("4")).Get()).Get(), vpa: test.VerticalPodAutoscaler().WithContainer(containerName).WithTarget("2", "").Get(), expectedPrio: PodPriority{ OutsideRecommendedRange: false, @@ -55,7 +56,7 @@ func TestGetUpdatePriority(t *testing.T) { }, }, { name: "no resource diff", - pod: test.Pod().WithName("POD1").AddContainer(test.BuildTestContainer(containerName, "2", "")).Get(), + pod: test.Pod().WithName("POD1").AddContainer(test.Container().WithName(containerName).WithCPURequest(resource.MustParse("2")).Get()).Get(), vpa: test.VerticalPodAutoscaler().WithContainer(containerName).WithTarget("2", "").Get(), expectedPrio: PodPriority{ OutsideRecommendedRange: false, @@ -64,7 +65,7 @@ func TestGetUpdatePriority(t *testing.T) { }, }, { name: "scale up on milliquanitites", - pod: test.Pod().WithName("POD1").AddContainer(test.BuildTestContainer(containerName, "10m", "")).Get(), + pod: test.Pod().WithName("POD1").AddContainer(test.Container().WithName(containerName).WithCPURequest(resource.MustParse("10m")).Get()).Get(), vpa: test.VerticalPodAutoscaler().WithContainer(containerName).WithTarget("900m", "").Get(), expectedPrio: PodPriority{ OutsideRecommendedRange: false, @@ -73,7 +74,7 @@ func TestGetUpdatePriority(t *testing.T) { }, }, { name: "scale up outside recommended range", - pod: test.Pod().WithName("POD1").AddContainer(test.BuildTestContainer(containerName, "4", "")).Get(), + pod: test.Pod().WithName("POD1").AddContainer(test.Container().WithName(containerName).WithCPURequest(resource.MustParse("4")).Get()).Get(), 
vpa: test.VerticalPodAutoscaler().WithContainer(containerName). WithTarget("10", ""). WithLowerBound("6", ""). @@ -85,7 +86,7 @@ func TestGetUpdatePriority(t *testing.T) { }, }, { name: "scale down outside recommended range", - pod: test.Pod().WithName("POD1").AddContainer(test.BuildTestContainer(containerName, "8", "")).Get(), + pod: test.Pod().WithName("POD1").AddContainer(test.Container().WithName(containerName).WithCPURequest(resource.MustParse("8")).Get()).Get(), vpa: test.VerticalPodAutoscaler().WithContainer(containerName). WithTarget("2", ""). WithLowerBound("1", ""). @@ -97,7 +98,7 @@ func TestGetUpdatePriority(t *testing.T) { }, }, { name: "scale up with multiple quantities", - pod: test.Pod().WithName("POD1").AddContainer(test.BuildTestContainer(containerName, "2", "")).Get(), + pod: test.Pod().WithName("POD1").AddContainer(test.Container().WithName(containerName).WithCPURequest(resource.MustParse("2")).Get()).Get(), vpa: test.VerticalPodAutoscaler().WithContainer(containerName).WithTarget("10", "").Get(), expectedPrio: PodPriority{ OutsideRecommendedRange: false, @@ -106,7 +107,7 @@ func TestGetUpdatePriority(t *testing.T) { }, }, { name: "multiple resources, both scale up", - pod: test.Pod().WithName("POD1").AddContainer(test.BuildTestContainer(containerName, "3", "10M")).Get(), + pod: test.Pod().WithName("POD1").AddContainer(test.Container().WithName(containerName).WithCPURequest(resource.MustParse("3")).WithMemRequest(resource.MustParse("10M")).Get()).Get(), vpa: test.VerticalPodAutoscaler().WithContainer(containerName).WithTarget("6", "20M").Get(), expectedPrio: PodPriority{ OutsideRecommendedRange: false, @@ -115,7 +116,7 @@ func TestGetUpdatePriority(t *testing.T) { }, }, { name: "multiple resources, only one scale up", - pod: test.Pod().WithName("POD1").AddContainer(test.BuildTestContainer(containerName, "4", "10M")).Get(), + pod: 
test.Pod().WithName("POD1").AddContainer(test.Container().WithName(containerName).WithCPURequest(resource.MustParse("4")).WithMemRequest(resource.MustParse("10M")).Get()).Get(), vpa: test.VerticalPodAutoscaler().WithContainer(containerName).WithTarget("2", "20M").Get(), expectedPrio: PodPriority{ OutsideRecommendedRange: false, @@ -124,7 +125,7 @@ func TestGetUpdatePriority(t *testing.T) { }, }, { name: "multiple resources, both scale down", - pod: test.Pod().WithName("POD1").AddContainer(test.BuildTestContainer(containerName, "4", "20M")).Get(), + pod: test.Pod().WithName("POD1").AddContainer(test.Container().WithName(containerName).WithCPURequest(resource.MustParse("4")).WithMemRequest(resource.MustParse("20M")).Get()).Get(), vpa: test.VerticalPodAutoscaler().WithContainer(containerName).WithTarget("2", "10M").Get(), expectedPrio: PodPriority{ OutsideRecommendedRange: false, @@ -133,7 +134,7 @@ func TestGetUpdatePriority(t *testing.T) { }, }, { name: "multiple resources, one outside recommended range", - pod: test.Pod().WithName("POD1").AddContainer(test.BuildTestContainer(containerName, "4", "20M")).Get(), + pod: test.Pod().WithName("POD1").AddContainer(test.Container().WithName(containerName).WithCPURequest(resource.MustParse("4")).WithMemRequest(resource.MustParse("20M")).Get()).Get(), vpa: test.VerticalPodAutoscaler().WithContainer(containerName). WithTarget("2", "10M"). WithLowerBound("1", "5M"). @@ -145,8 +146,8 @@ func TestGetUpdatePriority(t *testing.T) { }, }, { name: "multiple containers, both scale up", - pod: test.Pod().WithName("POD1").AddContainer(test.BuildTestContainer(containerName, "1", "")). - AddContainer(test.BuildTestContainer("test-container-2", "2", "")).Get(), + pod: test.Pod().WithName("POD1").AddContainer(test.Container().WithName(containerName).WithCPURequest(resource.MustParse("1")).Get()). 
+ AddContainer(test.Container().WithName("test-container-2").WithCPURequest(resource.MustParse("2")).Get()).Get(), vpa: test.VerticalPodAutoscaler().WithContainer(containerName). WithTarget("4", "").AppendRecommendation( test.Recommendation(). @@ -159,8 +160,8 @@ func TestGetUpdatePriority(t *testing.T) { }, }, { name: "multiple containers, both scale down", - pod: test.Pod().WithName("POD1").AddContainer(test.BuildTestContainer(containerName, "3", "")). - AddContainer(test.BuildTestContainer("test-container-2", "7", "")).Get(), + pod: test.Pod().WithName("POD1").AddContainer(test.Container().WithName(containerName).WithCPURequest(resource.MustParse("3")).Get()). + AddContainer(test.Container().WithName("test-container-2").WithCPURequest(resource.MustParse("7")).Get()).Get(), vpa: test.VerticalPodAutoscaler().WithContainer(containerName). WithTarget("1", "").AppendRecommendation( test.Recommendation(). @@ -173,8 +174,8 @@ func TestGetUpdatePriority(t *testing.T) { }, }, { name: "multiple containers, both scale up, one outside range", - pod: test.Pod().WithName("POD1").AddContainer(test.BuildTestContainer(containerName, "1", "")). - AddContainer(test.BuildTestContainer("test-container-2", "2", "")).Get(), + pod: test.Pod().WithName("POD1").AddContainer(test.Container().WithName(containerName).WithCPURequest(resource.MustParse("1")).Get()). + AddContainer(test.Container().WithName("test-container-2").WithCPURequest(resource.MustParse("2")).Get()).Get(), vpa: test.VerticalPodAutoscaler().WithContainer(containerName). WithTarget("4", ""). WithLowerBound("1", "").AppendRecommendation( @@ -193,8 +194,8 @@ func TestGetUpdatePriority(t *testing.T) { // container1: request={6 CPU, 10 MB}, recommended={8 CPU, 20 MB} // container2: request={4 CPU, 30 MB}, recommended={7 CPU, 30 MB} // total: request={10 CPU, 40 MB}, recommended={15 CPU, 50 MB} - pod: test.Pod().WithName("POD1").AddContainer(test.BuildTestContainer(containerName, "6", "10M")). 
- AddContainer(test.BuildTestContainer("test-container-2", "4", "30M")).Get(), + pod: test.Pod().WithName("POD1").AddContainer(test.Container().WithName(containerName).WithCPURequest(resource.MustParse("6")).WithMemRequest(resource.MustParse("10M")).Get()). + AddContainer(test.Container().WithName("test-container-2").WithCPURequest(resource.MustParse("4")).WithMemRequest(resource.MustParse("30M")).Get()).Get(), vpa: test.VerticalPodAutoscaler().WithContainer(containerName). WithTarget("8", "20M").AppendRecommendation( test.Recommendation(). @@ -221,7 +222,7 @@ func TestGetUpdatePriority(t *testing.T) { // recommendation for a container. func TestGetUpdatePriority_NoRecommendationForContainer(t *testing.T) { p := NewProcessor() - pod := test.Pod().WithName("POD1").AddContainer(test.BuildTestContainer("test-container", "5", "10")).Get() + pod := test.Pod().WithName("POD1").AddContainer(test.Container().WithName("test-container").WithCPURequest(resource.MustParse("5")).WithMemRequest(resource.MustParse("10")).Get()).Get() vpa := test.VerticalPodAutoscaler().WithName("test-vpa").WithContainer("test-container").Get() result := p.GetUpdatePriority(pod, vpa, nil) assert.NotNil(t, result) @@ -245,28 +246,28 @@ func TestGetUpdatePriority_VpaObservedContainers(t *testing.T) { }{ { name: "with no VpaObservedContainers annotation", - pod: test.Pod().WithName("POD1").AddContainer(test.BuildTestContainer(containerName, "1", "")).Get(), + pod: test.Pod().WithName("POD1").AddContainer(test.Container().WithName(containerName).WithCPURequest(resource.MustParse("1")).Get()).Get(), recommendation: test.Recommendation().WithContainer(containerName).WithTarget("10", "").Get(), want: optedInContainerDiff, }, { name: "with container listed in VpaObservedContainers annotation", pod: test.Pod().WithAnnotations(map[string]string{annotations.VpaObservedContainersLabel: containerName}). 
- WithName("POD1").AddContainer(test.BuildTestContainer(containerName, "1", "")).Get(), + WithName("POD1").AddContainer(test.Container().WithName(containerName).WithCPURequest(resource.MustParse("1")).Get()).Get(), recommendation: test.Recommendation().WithContainer(containerName).WithTarget("10", "").Get(), want: optedInContainerDiff, }, { name: "with container not listed in VpaObservedContainers annotation", pod: test.Pod().WithAnnotations(map[string]string{annotations.VpaObservedContainersLabel: ""}). - WithName("POD1").AddContainer(test.BuildTestContainer(containerName, "1", "")).Get(), + WithName("POD1").AddContainer(test.Container().WithName(containerName).WithCPURequest(resource.MustParse("1")).Get()).Get(), recommendation: test.Recommendation().WithContainer(containerName).WithTarget("10", "").Get(), want: optedOutContainerDiff, }, { name: "with incorrect VpaObservedContainers annotation", pod: test.Pod().WithAnnotations(map[string]string{annotations.VpaObservedContainersLabel: "abcd;';"}). 
- WithName("POD1").AddContainer(test.BuildTestContainer(containerName, "1", "")).Get(), + WithName("POD1").AddContainer(test.Container().WithName(containerName).WithCPURequest(resource.MustParse("1")).Get()).Get(), recommendation: test.Recommendation().WithContainer(containerName).WithTarget("10", "").Get(), want: optedInContainerDiff, }, diff --git a/vertical-pod-autoscaler/pkg/updater/priority/update_priority_calculator_test.go b/vertical-pod-autoscaler/pkg/updater/priority/update_priority_calculator_test.go index 3fc48b7efd1a..94b149f8c877 100644 --- a/vertical-pod-autoscaler/pkg/updater/priority/update_priority_calculator_test.go +++ b/vertical-pod-autoscaler/pkg/updater/priority/update_priority_calculator_test.go @@ -26,6 +26,7 @@ import ( "k8s.io/autoscaler/vertical-pod-autoscaler/pkg/utils/test" apiv1 "k8s.io/api/core/v1" + "k8s.io/apimachinery/pkg/api/resource" metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" "github.com/stretchr/testify/assert" @@ -37,10 +38,10 @@ const ( // TODO(bskiba): Refactor the SortPriority tests as a testcase list test. 
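The expectedPrio values in the test cases above come down to simple arithmetic on request and recommendation totals, as the inline comment spells out (request={10 CPU, 40 MB} vs. recommended={15 CPU, 50 MB}). A self-contained sketch of that fractional-difference calculation, as an illustration of the idea rather than the updater's actual implementation:

```go
package main

import "fmt"

// resourceDiff sums, per resource, |request - recommended| / request over
// the pod-level totals, mirroring the worked example in the test comment:
// request={10 CPU, 40 MB}, recommended={15 CPU, 50 MB}.
func resourceDiff(requests, recommended map[string]float64) float64 {
	total := 0.0
	for name, req := range requests {
		diff := recommended[name] - req
		if diff < 0 {
			diff = -diff
		}
		total += diff / req
	}
	return total
}

func main() {
	requests := map[string]float64{"cpu": 10, "memory": 40}
	recommended := map[string]float64{"cpu": 15, "memory": 50}
	fmt.Println(resourceDiff(requests, recommended)) // 5/10 + 10/40 = 0.75
}
```

For the totals in the comment this yields 0.5 + 0.25 = 0.75, the kind of fractional ResourceDiff the priority processor reports for "multiple containers" cases.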
func TestSortPriority(t *testing.T) { - pod1 := test.Pod().WithName("POD1").AddContainer(test.BuildTestContainer(containerName, "2", "")).Get() - pod2 := test.Pod().WithName("POD2").AddContainer(test.BuildTestContainer(containerName, "4", "")).Get() - pod3 := test.Pod().WithName("POD3").AddContainer(test.BuildTestContainer(containerName, "1", "")).Get() - pod4 := test.Pod().WithName("POD4").AddContainer(test.BuildTestContainer(containerName, "3", "")).Get() + pod1 := test.Pod().WithName("POD1").AddContainer(test.Container().WithName(containerName).WithCPURequest(resource.MustParse("2")).Get()).Get() + pod2 := test.Pod().WithName("POD2").AddContainer(test.Container().WithName(containerName).WithCPURequest(resource.MustParse("4")).Get()).Get() + pod3 := test.Pod().WithName("POD3").AddContainer(test.Container().WithName(containerName).WithCPURequest(resource.MustParse("1")).Get()).Get() + pod4 := test.Pod().WithName("POD4").AddContainer(test.Container().WithName(containerName).WithCPURequest(resource.MustParse("3")).Get()).Get() vpa := test.VerticalPodAutoscaler().WithContainer(containerName).WithTarget("10", "").Get() @@ -63,9 +64,9 @@ func TestSortPriority(t *testing.T) { } func TestSortPriorityResourcesDecrease(t *testing.T) { - pod1 := test.Pod().WithName("POD1").AddContainer(test.BuildTestContainer(containerName, "4", "")).Get() - pod2 := test.Pod().WithName("POD2").AddContainer(test.BuildTestContainer(containerName, "8", "")).Get() - pod3 := test.Pod().WithName("POD3").AddContainer(test.BuildTestContainer(containerName, "10", "")).Get() + pod1 := test.Pod().WithName("POD1").AddContainer(test.Container().WithName(containerName).WithCPURequest(resource.MustParse("4")).Get()).Get() + pod2 := test.Pod().WithName("POD2").AddContainer(test.Container().WithName(containerName).WithCPURequest(resource.MustParse("8")).Get()).Get() + pod3 := 
test.Pod().WithName("POD3").AddContainer(test.Container().WithName(containerName).WithCPURequest(resource.MustParse("10")).Get()).Get() vpa := test.VerticalPodAutoscaler().WithContainer(containerName).WithTarget("5", "").Get() @@ -90,7 +91,7 @@ func TestSortPriorityResourcesDecrease(t *testing.T) { } func TestUpdateNotRequired(t *testing.T) { - pod1 := test.Pod().WithName("POD1").AddContainer(test.BuildTestContainer(containerName, "4", "")).Get() + pod1 := test.Pod().WithName("POD1").AddContainer(test.Container().WithName(containerName).WithCPURequest(resource.MustParse("4")).Get()).Get() vpa := test.VerticalPodAutoscaler().WithContainer(containerName).WithTarget("4", "").Get() priorityProcessor := NewFakeProcessor(map[string]PodPriority{"POD1": { @@ -113,7 +114,7 @@ func TestUseProcessor(t *testing.T) { recommendationProcessor.On("Apply").Return(processedRecommendation, nil) vpa := test.VerticalPodAutoscaler().WithContainer(containerName).WithTarget("5", "5M").Get() - pod1 := test.Pod().WithName("POD1").AddContainer(test.BuildTestContainer(containerName, "4", "10M")).Get() + pod1 := test.Pod().WithName("POD1").AddContainer(test.Container().WithName(containerName).WithCPURequest(resource.MustParse("4")).WithMemRequest(resource.MustParse("10M")).Get()).Get() priorityProcessor := NewFakeProcessor(map[string]PodPriority{ "POD1": {ResourceDiff: 0.0}, @@ -134,9 +135,9 @@ func TestUseProcessor(t *testing.T) { // 2. diverging from the target by more than MinChangePriority. 
func TestUpdateLonglivedPods(t *testing.T) { pods := []*apiv1.Pod{ - test.Pod().WithName("POD1").AddContainer(test.BuildTestContainer(containerName, "4", "")).Get(), - test.Pod().WithName("POD2").AddContainer(test.BuildTestContainer(containerName, "1", "")).Get(), - test.Pod().WithName("POD3").AddContainer(test.BuildTestContainer(containerName, "8", "")).Get(), + test.Pod().WithName("POD1").AddContainer(test.Container().WithName(containerName).WithCPURequest(resource.MustParse("4")).Get()).Get(), + test.Pod().WithName("POD2").AddContainer(test.Container().WithName(containerName).WithCPURequest(resource.MustParse("1")).Get()).Get(), + test.Pod().WithName("POD3").AddContainer(test.Container().WithName(containerName).WithCPURequest(resource.MustParse("8")).Get()).Get(), } // Both pods are within the recommended range. @@ -168,9 +169,9 @@ func TestUpdateLonglivedPods(t *testing.T) { // range for at least one container. func TestUpdateShortlivedPods(t *testing.T) { pods := []*apiv1.Pod{ - test.Pod().WithName("POD1").AddContainer(test.BuildTestContainer(containerName, "4", "")).Get(), - test.Pod().WithName("POD2").AddContainer(test.BuildTestContainer(containerName, "1", "")).Get(), - test.Pod().WithName("POD3").AddContainer(test.BuildTestContainer(containerName, "10", "")).Get(), + test.Pod().WithName("POD1").AddContainer(test.Container().WithName(containerName).WithCPURequest(resource.MustParse("4")).Get()).Get(), + test.Pod().WithName("POD2").AddContainer(test.Container().WithName(containerName).WithCPURequest(resource.MustParse("1")).Get()).Get(), + test.Pod().WithName("POD3").AddContainer(test.Container().WithName(containerName).WithCPURequest(resource.MustParse("10")).Get()).Get(), } // Pods 1 and 2 are within the recommended range. 
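Every one of these call-site rewrites applies the same mechanical translation: the removed stringly-typed BuildTestContainer(name, cpu, mem) helper becomes an explicit builder chain, with quantities parsed up front via resource.MustParse. A minimal sketch of that builder idiom, using plain strings as stand-ins for resource.Quantity so it runs standalone (the real builder lives in pkg/utils/test):

```go
package main

import "fmt"

// container is a simplified stand-in for the test package's container type;
// the real one carries resource.Quantity requests instead of strings.
type container struct{ name, cpu, mem string }

type containerBuilder struct{ c container }

// Container starts an empty builder, like test.Container() in the diff.
func Container() containerBuilder { return containerBuilder{} }

// Value receivers return modified copies, so a partially configured
// builder can be shared between test cases without one mutating another.
func (b containerBuilder) WithName(n string) containerBuilder       { b.c.name = n; return b }
func (b containerBuilder) WithCPURequest(q string) containerBuilder { b.c.cpu = q; return b }
func (b containerBuilder) WithMemRequest(q string) containerBuilder { b.c.mem = q; return b }
func (b containerBuilder) Get() container                           { return b.c }

func main() {
	// Equivalent of the removed BuildTestContainer("test-container", "4", "10M"):
	c := Container().WithName("test-container").WithCPURequest("4").WithMemRequest("10M").Get()
	fmt.Printf("%+v\n", c) // prints the populated container
}
```

The explicit chain trades a little verbosity for type safety: a malformed quantity now fails loudly at MustParse instead of being silently dropped by the old helper's ignored ParseQuantity error.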
@@ -198,7 +199,7 @@ func TestUpdateShortlivedPods(t *testing.T) { } func TestUpdatePodWithQuickOOM(t *testing.T) { - pod := test.Pod().WithName("POD1").AddContainer(test.BuildTestContainer(containerName, "4", "")).Get() + pod := test.Pod().WithName("POD1").AddContainer(test.Container().WithName(containerName).WithCPURequest(resource.MustParse("4")).Get()).Get() // Pretend that the test pod started 11 hours ago. timestampNow := pod.Status.StartTime.Time.Add(time.Hour * 11) @@ -234,7 +235,7 @@ func TestUpdatePodWithQuickOOM(t *testing.T) { } func TestDontUpdatePodWithQuickOOMNoResourceChange(t *testing.T) { - pod := test.Pod().WithName("POD1").AddContainer(test.BuildTestContainer(containerName, "4", "8Gi")).Get() + pod := test.Pod().WithName("POD1").AddContainer(test.Container().WithName(containerName).WithCPURequest(resource.MustParse("4")).WithMemRequest(resource.MustParse("8Gi")).Get()).Get() // Pretend that the test pod started 11 hours ago. timestampNow := pod.Status.StartTime.Time.Add(time.Hour * 11) @@ -270,7 +271,7 @@ func TestDontUpdatePodWithQuickOOMNoResourceChange(t *testing.T) { } func TestDontUpdatePodWithOOMAfterLongRun(t *testing.T) { - pod := test.Pod().WithName("POD1").AddContainer(test.BuildTestContainer(containerName, "4", "")).Get() + pod := test.Pod().WithName("POD1").AddContainer(test.Container().WithName(containerName).WithCPURequest(resource.MustParse("4")).Get()).Get() // Pretend that the test pod started 11 hours ago. timestampNow := pod.Status.StartTime.Time.Add(time.Hour * 11) @@ -331,7 +332,7 @@ func TestQuickOOM_VpaOvservedContainers(t *testing.T) { for _, tc := range tests { t.Run(fmt.Sprintf("test case: %s", tc.name), func(t *testing.T) { pod := test.Pod().WithAnnotations(tc.annotation). 
- WithName("POD1").AddContainer(test.BuildTestContainer(containerName, "4", "")).Get() + WithName("POD1").AddContainer(test.Container().WithName(containerName).WithCPURequest(resource.MustParse("4")).Get()).Get() // Pretend that the test pod started 11 hours ago. timestampNow := pod.Status.StartTime.Time.Add(time.Hour * 11) @@ -416,7 +417,7 @@ func TestQuickOOM_ContainerResourcePolicy(t *testing.T) { for _, tc := range tests { t.Run(fmt.Sprintf("test case: %s", tc.name), func(t *testing.T) { pod := test.Pod().WithAnnotations(map[string]string{annotations.VpaObservedContainersLabel: containerName}). - WithName("POD1").AddContainer(test.BuildTestContainer(containerName, "4", "")).Get() + WithName("POD1").AddContainer(test.Container().WithName(containerName).WithCPURequest(resource.MustParse("4")).Get()).Get() // Pretend that the test pod started 11 hours ago. timestampNow := pod.Status.StartTime.Time.Add(time.Hour * 11) @@ -475,10 +476,10 @@ func (p *pod1Admission) CleanUp() {} func TestAdmission(t *testing.T) { - pod1 := test.Pod().WithName("POD1").AddContainer(test.BuildTestContainer(containerName, "2", "")).Get() - pod2 := test.Pod().WithName("POD2").AddContainer(test.BuildTestContainer(containerName, "4", "")).Get() - pod3 := test.Pod().WithName("POD3").AddContainer(test.BuildTestContainer(containerName, "1", "")).Get() - pod4 := test.Pod().WithName("POD4").AddContainer(test.BuildTestContainer(containerName, "3", "")).Get() + pod1 := test.Pod().WithName("POD1").AddContainer(test.Container().WithName(containerName).WithCPURequest(resource.MustParse("2")).Get()).Get() + pod2 := test.Pod().WithName("POD2").AddContainer(test.Container().WithName(containerName).WithCPURequest(resource.MustParse("4")).Get()).Get() + pod3 := test.Pod().WithName("POD3").AddContainer(test.Container().WithName(containerName).WithCPURequest(resource.MustParse("1")).Get()).Get() + pod4 := 
test.Pod().WithName("POD4").AddContainer(test.Container().WithName(containerName).WithCPURequest(resource.MustParse("3")).Get()).Get() vpa := test.VerticalPodAutoscaler().WithContainer(containerName).WithTarget("10", "").Get() diff --git a/vertical-pod-autoscaler/pkg/utils/metrics/recommender/recommender.go b/vertical-pod-autoscaler/pkg/utils/metrics/recommender/recommender.go index f48f3ca5c3f8..c01c2f79bff7 100644 --- a/vertical-pod-autoscaler/pkg/utils/metrics/recommender/recommender.go +++ b/vertical-pod-autoscaler/pkg/utils/metrics/recommender/recommender.go @@ -33,7 +33,14 @@ const ( ) var ( - modes = []string{string(vpa_types.UpdateModeOff), string(vpa_types.UpdateModeInitial), string(vpa_types.UpdateModeRecreate), string(vpa_types.UpdateModeAuto)} + // TODO: unify this list with the types defined in the VPA handler to avoid + // drift if one file is changed and the other one is missed. + modes = []string{ + string(vpa_types.UpdateModeOff), + string(vpa_types.UpdateModeInitial), + string(vpa_types.UpdateModeRecreate), + string(vpa_types.UpdateModeAuto), + } ) type apiVersion string @@ -150,20 +157,15 @@ func NewObjectCounter() *ObjectCounter { // Add updates the helper state to include the given VPA object func (oc *ObjectCounter) Add(vpa *model.Vpa) { - mode := string(vpa_types.UpdateModeAuto) + mode := vpa_types.UpdateModeAuto if vpa.UpdateMode != nil && string(*vpa.UpdateMode) != "" { - mode = string(*vpa.UpdateMode) - } - // TODO: Maybe report v1 version as well. 
- api := v1beta2 - if vpa.IsV1Beta1API { - api = v1beta1 + mode = *vpa.UpdateMode } key := objectCounterKey{ - mode: mode, + mode: string(mode), has: vpa.HasRecommendation(), - apiVersion: api, + apiVersion: apiVersion(vpa.APIVersion), matchesPods: vpa.HasMatchedPods(), unsupportedConfig: vpa.Conditions.ConditionActive(vpa_types.ConfigUnsupported), } diff --git a/vertical-pod-autoscaler/pkg/utils/metrics/recommender/recommender_test.go b/vertical-pod-autoscaler/pkg/utils/metrics/recommender/recommender_test.go new file mode 100644 index 000000000000..8cd8fe1792ef --- /dev/null +++ b/vertical-pod-autoscaler/pkg/utils/metrics/recommender/recommender_test.go @@ -0,0 +1,322 @@ +/* +Copyright 2023 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package recommender + +import ( + "strings" + "testing" + + "github.com/prometheus/client_golang/prometheus" + dto "github.com/prometheus/client_model/go" + apiv1 "k8s.io/api/core/v1" + vpa_types "k8s.io/autoscaler/vertical-pod-autoscaler/pkg/apis/autoscaling.k8s.io/v1" + "k8s.io/autoscaler/vertical-pod-autoscaler/pkg/recommender/model" +) + +func TestObjectCounter(t *testing.T) { + updateModeOff := vpa_types.UpdateModeOff + updateModeInitial := vpa_types.UpdateModeInitial + updateModeRecreate := vpa_types.UpdateModeRecreate + updateModeAuto := vpa_types.UpdateModeAuto + // We verify that other update modes are handled correctly as validation + // may not happen if there are issues with the admission controller. 
+ updateModeUserDefined := vpa_types.UpdateMode("userDefined") + + cases := []struct { + name string + add []*model.Vpa + wantMetrics map[string]float64 + }{ + { + name: "set empty api on metric if it is missing on the VPA", + add: []*model.Vpa{ + { + APIVersion: "", + }, + }, + wantMetrics: map[string]float64{ + "api=,has_recommendation=false,matches_pods=true,unsupported_config=false,update_mode=Auto,": 1, + }, + }, + { + name: "report api version v1beta1", + add: []*model.Vpa{ + { + APIVersion: "v1beta1", + }, + }, + wantMetrics: map[string]float64{ + "api=v1beta1,has_recommendation=false,matches_pods=true,unsupported_config=false,update_mode=Auto,": 1, + }, + }, + { + name: "report api version v1beta2", + add: []*model.Vpa{ + { + APIVersion: "v1beta2", + }, + }, + wantMetrics: map[string]float64{ + "api=v1beta2,has_recommendation=false,matches_pods=true,unsupported_config=false,update_mode=Auto,": 1, + }, + }, + { + name: "report api version v1", + add: []*model.Vpa{ + { + APIVersion: "v1", + }, + }, + wantMetrics: map[string]float64{ + "api=v1,has_recommendation=false,matches_pods=true,unsupported_config=false,update_mode=Auto,": 1, + }, + }, + { + name: "default update mode to auto", + add: []*model.Vpa{ + { + APIVersion: "v1", + UpdateMode: nil, + }, + }, + wantMetrics: map[string]float64{ + "api=v1,has_recommendation=false,matches_pods=true,unsupported_config=false,update_mode=Auto,": 1, + }, + }, + { + name: "report update mode auto", + add: []*model.Vpa{ + { + APIVersion: "v1", + UpdateMode: &updateModeAuto, + }, + }, + wantMetrics: map[string]float64{ + "api=v1,has_recommendation=false,matches_pods=true,unsupported_config=false,update_mode=Auto,": 1, + }, + }, + { + name: "report update mode initial", + add: []*model.Vpa{ + { + APIVersion: "v1", + UpdateMode: &updateModeInitial, + }, + }, + wantMetrics: map[string]float64{ + "api=v1,has_recommendation=false,matches_pods=true,unsupported_config=false,update_mode=Initial,": 1, + }, + }, + { + name: "report 
update mode recreate", + add: []*model.Vpa{ + { + APIVersion: "v1", + UpdateMode: &updateModeRecreate, + }, + }, + wantMetrics: map[string]float64{ + "api=v1,has_recommendation=false,matches_pods=true,unsupported_config=false,update_mode=Recreate,": 1, + }, + }, + { + name: "report update mode off", + add: []*model.Vpa{ + { + APIVersion: "v1", + UpdateMode: &updateModeOff, + }, + }, + wantMetrics: map[string]float64{ + "api=v1,has_recommendation=false,matches_pods=true,unsupported_config=false,update_mode=Off,": 1, + }, + }, + { + name: "report update mode user defined", + add: []*model.Vpa{ + { + APIVersion: "v1", + UpdateMode: &updateModeUserDefined, + }, + }, + wantMetrics: map[string]float64{ + "api=v1,has_recommendation=false,matches_pods=true,unsupported_config=false,update_mode=userDefined,": 1, + }, + }, + { + name: "report has recommendation as false on missing recommendations", + add: []*model.Vpa{ + { + APIVersion: "v1", + Recommendation: nil, + }, + }, + wantMetrics: map[string]float64{ + "api=v1,has_recommendation=false,matches_pods=true,unsupported_config=false,update_mode=Auto,": 1, + }, + }, + { + name: "report has recommendation as false on missing container recommendations", + add: []*model.Vpa{ + { + APIVersion: "v1", + Recommendation: &vpa_types.RecommendedPodResources{ + ContainerRecommendations: nil, + }, + }, + }, + wantMetrics: map[string]float64{ + "api=v1,has_recommendation=false,matches_pods=true,unsupported_config=false,update_mode=Auto,": 1, + }, + }, + { + name: "report has recommendation as true on existing container recommendations", + add: []*model.Vpa{ + { + APIVersion: "v1", + Recommendation: &vpa_types.RecommendedPodResources{ + ContainerRecommendations: []vpa_types.RecommendedContainerResources{{}}, + }, + }, + }, + wantMetrics: map[string]float64{ + "api=v1,has_recommendation=true,matches_pods=true,unsupported_config=false,update_mode=Auto,": 1, + }, + }, + { + name: "report has matches pods as true on missing condition", + 
add: []*model.Vpa{ + { + APIVersion: "v1", + Conditions: nil, + }, + }, + wantMetrics: map[string]float64{ + "api=v1,has_recommendation=false,matches_pods=true,unsupported_config=false,update_mode=Auto,": 1, + }, + }, + { + name: "report has matches pods as false on NoPodsMatched condition", + add: []*model.Vpa{ + { + APIVersion: "v1", + Conditions: map[vpa_types.VerticalPodAutoscalerConditionType]vpa_types.VerticalPodAutoscalerCondition{ + vpa_types.NoPodsMatched: { + Status: apiv1.ConditionTrue, + }, + }, + }, + }, + wantMetrics: map[string]float64{ + "api=v1,has_recommendation=false,matches_pods=false,unsupported_config=false,update_mode=Auto,": 1, + }, + }, + { + name: "report unsupported config as false on missing condition", + add: []*model.Vpa{ + { + APIVersion: "v1", + Conditions: nil, + }, + }, + wantMetrics: map[string]float64{ + "api=v1,has_recommendation=false,matches_pods=true,unsupported_config=false,update_mode=Auto,": 1, + }, + }, + { + name: "report unsupported config as true on ConfigUnsupported condition", + add: []*model.Vpa{ + { + APIVersion: "v1", + Conditions: map[vpa_types.VerticalPodAutoscalerConditionType]vpa_types.VerticalPodAutoscalerCondition{ + vpa_types.ConfigUnsupported: { + Status: apiv1.ConditionTrue, + }, + }, + }, + }, + wantMetrics: map[string]float64{ + "api=v1,has_recommendation=false,matches_pods=true,unsupported_config=true,update_mode=Auto,": 1, + }, + }, + } + + for _, tc := range cases { + t.Run(tc.name, func(t *testing.T) { + counter := NewObjectCounter() + for _, add := range tc.add { + counter.Add(add) + } + + t.Cleanup(func() { + // Reset the metric after the test to avoid collisions. 
+ vpaObjectCount.Reset() + }) + counter.Observe() + + metrics := make(chan prometheus.Metric) + + go func() { + vpaObjectCount.Collect(metrics) + close(metrics) + }() + + gotMetrics := make(map[string]float64) + for metric := range metrics { + var metricProto dto.Metric + if err := metric.Write(&metricProto); err != nil { + t.Errorf("failed to write metric: %v", err) + } + + key := labelsToKey(metricProto.GetLabel()) + gotMetrics[key] = *metricProto.GetGauge().Value + } + + for wantKey, wantValue := range tc.wantMetrics { + gotValue, gotKey := gotMetrics[wantKey] + if !gotKey { + t.Errorf("missing metrics sample %q, want value %f", wantKey, wantValue) + } + if gotValue != wantValue { + t.Errorf("incorrect metrics sample %q, want value %f, got value %f", wantKey, wantValue, gotValue) + } + } + + // If a test case only covers specific metric cases we expect all other metrics to be 0. + for gotKey, gotValue := range gotMetrics { + _, wantKey := tc.wantMetrics[gotKey] + if wantKey { + continue + } + if gotValue != 0 { + t.Errorf("incorrect metrics sample %q, want value %f, got value %f", gotKey, 0.0, gotValue) + } + } + }) + } +} + +func labelsToKey(labels []*dto.LabelPair) string { + key := strings.Builder{} + for _, label := range labels { + key.WriteString(*label.Name) + key.WriteRune('=') + key.WriteString(*label.Value) + key.WriteRune(',') + } + return key.String() +} diff --git a/vertical-pod-autoscaler/pkg/utils/test/test_utils.go b/vertical-pod-autoscaler/pkg/utils/test/test_utils.go index c21974873cce..598879ffbb70 100644 --- a/vertical-pod-autoscaler/pkg/utils/test/test_utils.go +++ b/vertical-pod-autoscaler/pkg/utils/test/test_utils.go @@ -30,7 +30,7 @@ import ( vpa_types_v1beta1 "k8s.io/autoscaler/vertical-pod-autoscaler/pkg/apis/autoscaling.k8s.io/v1beta1" vpa_lister "k8s.io/autoscaler/vertical-pod-autoscaler/pkg/client/listers/autoscaling.k8s.io/v1" vpa_lister_v1beta1 
"k8s.io/autoscaler/vertical-pod-autoscaler/pkg/client/listers/autoscaling.k8s.io/v1beta1" - "k8s.io/client-go/listers/core/v1" + v1 "k8s.io/client-go/listers/core/v1" "k8s.io/client-go/tools/record" ) @@ -39,22 +39,6 @@ var ( testTimestamp, _ = time.Parse(timeLayout, "2017-04-18 17:35:05") ) -// BuildTestContainer creates container with specified resources -func BuildTestContainer(containerName, cpu, mem string) apiv1.Container { - // TODO: Use builder directly, remove this function. - builder := Container().WithName(containerName) - - if len(cpu) > 0 { - cpuVal, _ := resource.ParseQuantity(cpu) - builder = builder.WithCPURequest(cpuVal) - } - if len(mem) > 0 { - memVal, _ := resource.ParseQuantity(mem) - builder = builder.WithMemRequest(memVal) - } - return builder.Get() -} - // BuildTestPolicy creates ResourcesPolicy with specified constraints func BuildTestPolicy(containerName, minCPU, maxCPU, minMemory, maxMemory string) *vpa_types.PodResourcePolicy { minCPUVal, _ := resource.ParseQuantity(minCPU) diff --git a/vertical-pod-autoscaler/pkg/utils/test/test_vpa.go b/vertical-pod-autoscaler/pkg/utils/test/test_vpa.go index 3401487d7b9d..92e7d3fd4ab2 100644 --- a/vertical-pod-autoscaler/pkg/utils/test/test_vpa.go +++ b/vertical-pod-autoscaler/pkg/utils/test/test_vpa.go @@ -41,6 +41,7 @@ type VerticalPodAutoscalerBuilder interface { WithUpperBound(cpu, memory string) VerticalPodAutoscalerBuilder WithAnnotations(map[string]string) VerticalPodAutoscalerBuilder WithRecommender(string2 string) VerticalPodAutoscalerBuilder + WithGroupVersion(gv meta.GroupVersion) VerticalPodAutoscalerBuilder AppendCondition(conditionType vpa_types.VerticalPodAutoscalerConditionType, status core.ConditionStatus, reason, message string, lastTransitionTime time.Time) VerticalPodAutoscalerBuilder AppendRecommendation(vpa_types.RecommendedContainerResources) VerticalPodAutoscalerBuilder @@ -50,6 +51,7 @@ type VerticalPodAutoscalerBuilder interface { // VerticalPodAutoscaler returns a new 
VerticalPodAutoscalerBuilder. func VerticalPodAutoscaler() VerticalPodAutoscalerBuilder { return &verticalPodAutoscalerBuilder{ + groupVersion: meta.GroupVersion(vpa_types.SchemeGroupVersion), recommendation: Recommendation(), appendedRecommendations: []vpa_types.RecommendedContainerResources{}, namespace: "default", @@ -58,6 +60,7 @@ func VerticalPodAutoscaler() VerticalPodAutoscalerBuilder { } type verticalPodAutoscalerBuilder struct { + groupVersion meta.GroupVersion vpaName string containerName string namespace string @@ -161,6 +164,12 @@ func (b *verticalPodAutoscalerBuilder) WithRecommender(recommender string) Verti return &c } +func (b *verticalPodAutoscalerBuilder) WithGroupVersion(gv meta.GroupVersion) VerticalPodAutoscalerBuilder { + c := *b + c.groupVersion = gv + return &c +} + func (b *verticalPodAutoscalerBuilder) AppendCondition(conditionType vpa_types.VerticalPodAutoscalerConditionType, status core.ConditionStatus, reason, message string, lastTransitionTime time.Time) VerticalPodAutoscalerBuilder { c := *b @@ -198,6 +207,10 @@ func (b *verticalPodAutoscalerBuilder) Get() *vpa_types.VerticalPodAutoscaler { recommendation.ContainerRecommendations = append(recommendation.ContainerRecommendations, b.appendedRecommendations...) 
return &vpa_types.VerticalPodAutoscaler{ + TypeMeta: meta.TypeMeta{ + APIVersion: b.groupVersion.String(), + Kind: "VerticalPodAutoscaler", + }, ObjectMeta: meta.ObjectMeta{ Name: b.vpaName, Namespace: b.namespace, diff --git a/vertical-pod-autoscaler/pkg/utils/vpa/api.go b/vertical-pod-autoscaler/pkg/utils/vpa/api.go index 7fa869390be4..891c0e0bb3de 100644 --- a/vertical-pod-autoscaler/pkg/utils/vpa/api.go +++ b/vertical-pod-autoscaler/pkg/utils/vpa/api.go @@ -49,14 +49,14 @@ type patchRecord struct { Value interface{} `json:"value"` } -func patchVpa(vpaClient vpa_api.VerticalPodAutoscalerInterface, vpaName string, patches []patchRecord) (result *vpa_types.VerticalPodAutoscaler, err error) { +func patchVpaStatus(vpaClient vpa_api.VerticalPodAutoscalerInterface, vpaName string, patches []patchRecord) (result *vpa_types.VerticalPodAutoscaler, err error) { bytes, err := json.Marshal(patches) if err != nil { klog.Errorf("Cannot marshal VPA status patches %+v. Reason: %+v", patches, err) return } - return vpaClient.Patch(context.TODO(), vpaName, types.JSONPatchType, bytes, meta.PatchOptions{}) + return vpaClient.Patch(context.TODO(), vpaName, types.JSONPatchType, bytes, meta.PatchOptions{}, "status") } // UpdateVpaStatusIfNeeded updates the status field of the VPA API object. 
@@ -69,7 +69,7 @@ func UpdateVpaStatusIfNeeded(vpaClient vpa_api.VerticalPodAutoscalerInterface, v }} if !apiequality.Semantic.DeepEqual(*oldStatus, *newStatus) { - return patchVpa(vpaClient, vpaName, patches) + return patchVpaStatus(vpaClient, vpaName, patches) } return nil, nil } diff --git a/vertical-pod-autoscaler/pkg/utils/vpa/api_test.go b/vertical-pod-autoscaler/pkg/utils/vpa/api_test.go index bd13dc8c7595..df3dc4e615ad 100644 --- a/vertical-pod-autoscaler/pkg/utils/vpa/api_test.go +++ b/vertical-pod-autoscaler/pkg/utils/vpa/api_test.go @@ -113,7 +113,7 @@ func TestPodMatchesVPA(t *testing.T) { result bool } - pod := test.Pod().WithName("test-pod").AddContainer(test.BuildTestContainer(containerName, "1", "100M")).Get() + pod := test.Pod().WithName("test-pod").AddContainer(test.Container().WithName(containerName).WithCPURequest(resource.MustParse("1")).WithMemRequest(resource.MustParse("100M")).Get()).Get() pod.Labels = map[string]string{"app": "testingApp"} vpaBuilder := test.VerticalPodAutoscaler(). @@ -137,7 +137,7 @@ func TestPodMatchesVPA(t *testing.T) { } func TestGetControllingVPAForPod(t *testing.T) { - pod := test.Pod().WithName("test-pod").AddContainer(test.BuildTestContainer(containerName, "1", "100M")).Get() + pod := test.Pod().WithName("test-pod").AddContainer(test.Container().WithName(containerName).WithCPURequest(resource.MustParse("1")).WithMemRequest(resource.MustParse("100M")).Get()).Get() pod.Labels = map[string]string{"app": "testingApp"} vpaBuilder := test.VerticalPodAutoscaler(). 
diff --git a/vertical-pod-autoscaler/pkg/utils/vpa/capping_test.go b/vertical-pod-autoscaler/pkg/utils/vpa/capping_test.go index 75a1540466d7..e02b28c33852 100644 --- a/vertical-pod-autoscaler/pkg/utils/vpa/capping_test.go +++ b/vertical-pod-autoscaler/pkg/utils/vpa/capping_test.go @@ -27,7 +27,7 @@ import ( ) func TestRecommendationNotAvailable(t *testing.T) { - pod := test.Pod().WithName("pod1").AddContainer(test.BuildTestContainer("ctr-name", "", "")).Get() + pod := test.Pod().WithName("pod1").AddContainer(test.Container().WithName("ctr-name").Get()).Get() podRecommendation := vpa_types.RecommendedPodResources{ ContainerRecommendations: []vpa_types.RecommendedContainerResources{ { @@ -48,7 +48,7 @@ func TestRecommendationNotAvailable(t *testing.T) { } func TestRecommendationToLimitCapping(t *testing.T) { - pod := test.Pod().WithName("pod1").AddContainer(test.BuildTestContainer("ctr-name", "", "")).Get() + pod := test.Pod().WithName("pod1").AddContainer(test.Container().WithName("ctr-name").Get()).Get() pod.Spec.Containers[0].Resources.Limits = apiv1.ResourceList{ apiv1.ResourceCPU: *resource.NewScaledQuantity(3, 1), @@ -143,7 +143,7 @@ func TestRecommendationToLimitCapping(t *testing.T) { } func TestRecommendationCappedToMinMaxPolicy(t *testing.T) { - pod := test.Pod().WithName("pod1").AddContainer(test.BuildTestContainer("ctr-name", "", "")).Get() + pod := test.Pod().WithName("pod1").AddContainer(test.Container().WithName("ctr-name").Get()).Get() podRecommendation := vpa_types.RecommendedPodResources{ ContainerRecommendations: []vpa_types.RecommendedContainerResources{ { @@ -238,7 +238,7 @@ var applyTestCases = []struct { } func TestApply(t *testing.T) { - pod := test.Pod().WithName("pod1").AddContainer(test.BuildTestContainer("ctr-name", "", "")).Get() + pod := test.Pod().WithName("pod1").AddContainer(test.Container().WithName("ctr-name").Get()).Get() for _, testCase := range applyTestCases { res, _, err := 
NewCappingRecommendationProcessor(&fakeLimitRangeCalculator{}).Apply( diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/net/http2/flow.go b/vertical-pod-autoscaler/vendor/golang.org/x/net/http2/flow.go index b51f0e0cf1f5..b7dbd186957e 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/net/http2/flow.go +++ b/vertical-pod-autoscaler/vendor/golang.org/x/net/http2/flow.go @@ -6,23 +6,91 @@ package http2 -// flow is the flow control window's size. -type flow struct { +// inflowMinRefresh is the minimum number of bytes we'll send for a +// flow control window update. +const inflowMinRefresh = 4 << 10 + +// inflow accounts for an inbound flow control window. +// It tracks both the latest window sent to the peer (used for enforcement) +// and the accumulated unsent window. +type inflow struct { + avail int32 + unsent int32 +} + +// init sets the initial window. +func (f *inflow) init(n int32) { + f.avail = n +} + +// add adds n bytes to the window, with a maximum window size of max, +// indicating that the peer can now send us more data. +// For example, the user read from a {Request,Response} body and consumed +// some of the buffered data, so the peer can now send more. +// It returns the number of bytes to send in a WINDOW_UPDATE frame to the peer. +// Window updates are accumulated and sent when the unsent capacity +// is at least inflowMinRefresh or will at least double the peer's available window. +func (f *inflow) add(n int) (connAdd int32) { + if n < 0 { + panic("negative update") + } + unsent := int64(f.unsent) + int64(n) + // "A sender MUST NOT allow a flow-control window to exceed 2^31-1 octets." + // RFC 7540 Section 6.9.1. 
+ const maxWindow = 1<<31 - 1 + if unsent+int64(f.avail) > maxWindow { + panic("flow control update exceeds maximum window size") + } + f.unsent = int32(unsent) + if f.unsent < inflowMinRefresh && f.unsent < f.avail { + // If there aren't at least inflowMinRefresh bytes of window to send, + // and this update won't at least double the window, buffer the update for later. + return 0 + } + f.avail += f.unsent + f.unsent = 0 + return int32(unsent) +} + +// take attempts to take n bytes from the peer's flow control window. +// It reports whether the window has available capacity. +func (f *inflow) take(n uint32) bool { + if n > uint32(f.avail) { + return false + } + f.avail -= int32(n) + return true +} + +// takeInflows attempts to take n bytes from two inflows, +// typically connection-level and stream-level flows. +// It reports whether both windows have available capacity. +func takeInflows(f1, f2 *inflow, n uint32) bool { + if n > uint32(f1.avail) || n > uint32(f2.avail) { + return false + } + f1.avail -= int32(n) + f2.avail -= int32(n) + return true +} + +// outflow is the outbound flow control window's size. +type outflow struct { _ incomparable // n is the number of DATA bytes we're allowed to send. - // A flow is kept both on a conn and a per-stream. + // An outflow is kept both on a conn and a per-stream. n int32 - // conn points to the shared connection-level flow that is - // shared by all streams on that conn. It is nil for the flow + // conn points to the shared connection-level outflow that is + // shared by all streams on that conn. It is nil for the outflow // that's on the conn directly. 
- conn *flow + conn *outflow } -func (f *flow) setConnFlow(cf *flow) { f.conn = cf } +func (f *outflow) setConnFlow(cf *outflow) { f.conn = cf } -func (f *flow) available() int32 { +func (f *outflow) available() int32 { n := f.n if f.conn != nil && f.conn.n < n { n = f.conn.n @@ -30,7 +98,7 @@ func (f *flow) available() int32 { return n } -func (f *flow) take(n int32) { +func (f *outflow) take(n int32) { if n > f.available() { panic("internal error: took too much") } @@ -42,7 +110,7 @@ func (f *flow) take(n int32) { // add adds n bytes (positive or negative) to the flow control window. // It returns false if the sum would exceed 2^31-1. -func (f *flow) add(n int32) bool { +func (f *outflow) add(n int32) bool { sum := f.n + n if (sum > n) == (f.n > 0) { f.n = sum diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/net/http2/frame.go b/vertical-pod-autoscaler/vendor/golang.org/x/net/http2/frame.go index 184ac45feb70..c1f6b90dc32f 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/net/http2/frame.go +++ b/vertical-pod-autoscaler/vendor/golang.org/x/net/http2/frame.go @@ -662,6 +662,15 @@ func (f *Framer) WriteData(streamID uint32, endStream bool, data []byte) error { // It is the caller's responsibility not to violate the maximum frame size // and to not call other Write methods concurrently. func (f *Framer) WriteDataPadded(streamID uint32, endStream bool, data, pad []byte) error { + if err := f.startWriteDataPadded(streamID, endStream, data, pad); err != nil { + return err + } + return f.endWrite() +} + +// startWriteDataPadded is WriteDataPadded, but only writes the frame to the Framer's internal buffer. +// The caller should call endWrite to flush the frame to the underlying writer. 
+func (f *Framer) startWriteDataPadded(streamID uint32, endStream bool, data, pad []byte) error { if !validStreamID(streamID) && !f.AllowIllegalWrites { return errStreamID } @@ -691,7 +700,7 @@ func (f *Framer) WriteDataPadded(streamID uint32, endStream bool, data, pad []by } f.wbuf = append(f.wbuf, data...) f.wbuf = append(f.wbuf, pad...) - return f.endWrite() + return nil } // A SettingsFrame conveys configuration parameters that affect how diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/net/http2/hpack/hpack.go b/vertical-pod-autoscaler/vendor/golang.org/x/net/http2/hpack/hpack.go index ebdfbee964ae..7a1d976696a7 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/net/http2/hpack/hpack.go +++ b/vertical-pod-autoscaler/vendor/golang.org/x/net/http2/hpack/hpack.go @@ -211,7 +211,7 @@ func (d *Decoder) at(i uint64) (hf HeaderField, ok bool) { return dt.ents[dt.len()-(int(i)-staticTable.len())], true } -// Decode decodes an entire block. +// DecodeFull decodes an entire block. // // TODO: remove this method and make it incremental later? This is // easier for debugging now. 
@@ -359,6 +359,7 @@ func (d *Decoder) parseFieldLiteral(n uint8, it indexType) error { var hf HeaderField wantStr := d.emitEnabled || it.indexed() + var undecodedName undecodedString if nameIdx > 0 { ihf, ok := d.at(nameIdx) if !ok { @@ -366,15 +367,27 @@ func (d *Decoder) parseFieldLiteral(n uint8, it indexType) error { } hf.Name = ihf.Name } else { - hf.Name, buf, err = d.readString(buf, wantStr) + undecodedName, buf, err = d.readString(buf) if err != nil { return err } } - hf.Value, buf, err = d.readString(buf, wantStr) + undecodedValue, buf, err := d.readString(buf) if err != nil { return err } + if wantStr { + if nameIdx <= 0 { + hf.Name, err = d.decodeString(undecodedName) + if err != nil { + return err + } + } + hf.Value, err = d.decodeString(undecodedValue) + if err != nil { + return err + } + } d.buf = buf if it.indexed() { d.dynTab.add(hf) @@ -459,46 +472,52 @@ func readVarInt(n byte, p []byte) (i uint64, remain []byte, err error) { return 0, origP, errNeedMore } -// readString decodes an hpack string from p. +// readString reads an hpack string from p. // -// wantStr is whether s will be used. If false, decompression and -// []byte->string garbage are skipped if s will be ignored -// anyway. This does mean that huffman decoding errors for non-indexed -// strings past the MAX_HEADER_LIST_SIZE are ignored, but the server -// is returning an error anyway, and because they're not indexed, the error -// won't affect the decoding state. -func (d *Decoder) readString(p []byte, wantStr bool) (s string, remain []byte, err error) { +// It returns a reference to the encoded string data to permit deferring decode costs +// until after the caller verifies all data is present. 
+func (d *Decoder) readString(p []byte) (u undecodedString, remain []byte, err error) { if len(p) == 0 { - return "", p, errNeedMore + return u, p, errNeedMore } isHuff := p[0]&128 != 0 strLen, p, err := readVarInt(7, p) if err != nil { - return "", p, err + return u, p, err } if d.maxStrLen != 0 && strLen > uint64(d.maxStrLen) { - return "", nil, ErrStringLength + // Returning an error here means Huffman decoding errors + // for non-indexed strings past the maximum string length + // are ignored, but the server is returning an error anyway + // and because the string is not indexed the error will not + // affect the decoding state. + return u, nil, ErrStringLength } if uint64(len(p)) < strLen { - return "", p, errNeedMore - } - if !isHuff { - if wantStr { - s = string(p[:strLen]) - } - return s, p[strLen:], nil + return u, p, errNeedMore } + u.isHuff = isHuff + u.b = p[:strLen] + return u, p[strLen:], nil +} - if wantStr { - buf := bufPool.Get().(*bytes.Buffer) - buf.Reset() // don't trust others - defer bufPool.Put(buf) - if err := huffmanDecode(buf, d.maxStrLen, p[:strLen]); err != nil { - buf.Reset() - return "", nil, err - } +type undecodedString struct { + isHuff bool + b []byte +} + +func (d *Decoder) decodeString(u undecodedString) (string, error) { + if !u.isHuff { + return string(u.b), nil + } + buf := bufPool.Get().(*bytes.Buffer) + buf.Reset() // don't trust others + var s string + err := huffmanDecode(buf, d.maxStrLen, u.b) + if err == nil { s = buf.String() - buf.Reset() // be nice to GC } - return s, p[strLen:], nil + buf.Reset() // be nice to GC + bufPool.Put(buf) + return s, err } diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/net/http2/server.go b/vertical-pod-autoscaler/vendor/golang.org/x/net/http2/server.go index 4eb7617fa0db..8cb14f3c97f5 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/net/http2/server.go +++ b/vertical-pod-autoscaler/vendor/golang.org/x/net/http2/server.go @@ -448,7 +448,7 @@ func (s *Server) ServeConn(c 
net.Conn, opts *ServeConnOpts) { // configured value for inflow, that will be updated when we send a // WINDOW_UPDATE shortly after sending SETTINGS. sc.flow.add(initialWindowSize) - sc.inflow.add(initialWindowSize) + sc.inflow.init(initialWindowSize) sc.hpackEncoder = hpack.NewEncoder(&sc.headerWriteBuf) sc.hpackEncoder.SetMaxDynamicTableSizeLimit(s.maxEncoderHeaderTableSize()) @@ -563,8 +563,8 @@ type serverConn struct { wroteFrameCh chan frameWriteResult // from writeFrameAsync -> serve, tickles more frame writes bodyReadCh chan bodyReadMsg // from handlers -> serve serveMsgCh chan interface{} // misc messages & code to send to / run on the serve loop - flow flow // conn-wide (not stream-specific) outbound flow control - inflow flow // conn-wide inbound flow control + flow outflow // conn-wide (not stream-specific) outbound flow control + inflow inflow // conn-wide inbound flow control tlsState *tls.ConnectionState // shared by all handlers, like net/http remoteAddrStr string writeSched WriteScheduler @@ -641,10 +641,10 @@ type stream struct { cancelCtx func() // owned by serverConn's serve loop: - bodyBytes int64 // body bytes seen so far - declBodyBytes int64 // or -1 if undeclared - flow flow // limits writing from Handler to client - inflow flow // what the client is allowed to POST/etc to us + bodyBytes int64 // body bytes seen so far + declBodyBytes int64 // or -1 if undeclared + flow outflow // limits writing from Handler to client + inflow inflow // what the client is allowed to POST/etc to us state streamState resetQueued bool // RST_STREAM queued for write; set by sc.resetStream gotTrailerHeader bool // HEADER frame for trailers was seen @@ -843,8 +843,13 @@ type frameWriteResult struct { // and then reports when it's done. // At most one goroutine can be running writeFrameAsync at a time per // serverConn. 
-func (sc *serverConn) writeFrameAsync(wr FrameWriteRequest) { - err := wr.write.writeFrame(sc) +func (sc *serverConn) writeFrameAsync(wr FrameWriteRequest, wd *writeData) { + var err error + if wd == nil { + err = wr.write.writeFrame(sc) + } else { + err = sc.framer.endWrite() + } sc.wroteFrameCh <- frameWriteResult{wr: wr, err: err} } @@ -1251,9 +1256,16 @@ func (sc *serverConn) startFrameWrite(wr FrameWriteRequest) { sc.writingFrameAsync = false err := wr.write.writeFrame(sc) sc.wroteFrame(frameWriteResult{wr: wr, err: err}) + } else if wd, ok := wr.write.(*writeData); ok { + // Encode the frame in the serve goroutine, to ensure we don't have + // any lingering asynchronous references to data passed to Write. + // See https://go.dev/issue/58446. + sc.framer.startWriteDataPadded(wd.streamID, wd.endStream, wd.p, nil) + sc.writingFrameAsync = true + go sc.writeFrameAsync(wr, wd) } else { sc.writingFrameAsync = true - go sc.writeFrameAsync(wr) + go sc.writeFrameAsync(wr, nil) } } @@ -1503,7 +1515,7 @@ func (sc *serverConn) processFrame(f Frame) error { if sc.inGoAway && (sc.goAwayCode != ErrCodeNo || f.Header().StreamID > sc.maxClientStreamID) { if f, ok := f.(*DataFrame); ok { - if sc.inflow.available() < int32(f.Length) { + if !sc.inflow.take(f.Length) { return sc.countError("data_flow", streamError(f.Header().StreamID, ErrCodeFlowControl)) } sc.sendWindowUpdate(nil, int(f.Length)) // conn-level @@ -1775,14 +1787,9 @@ func (sc *serverConn) processData(f *DataFrame) error { // But still enforce their connection-level flow control, // and return any flow control bytes since we're not going // to consume them. - if sc.inflow.available() < int32(f.Length) { + if !sc.inflow.take(f.Length) { return sc.countError("data_flow", streamError(id, ErrCodeFlowControl)) } - // Deduct the flow control from inflow, since we're - // going to immediately add it back in - // sendWindowUpdate, which also schedules sending the - // frames. 
- sc.inflow.take(int32(f.Length)) sc.sendWindowUpdate(nil, int(f.Length)) // conn-level if st != nil && st.resetQueued { @@ -1797,10 +1804,9 @@ func (sc *serverConn) processData(f *DataFrame) error { // Sender sending more than they'd declared? if st.declBodyBytes != -1 && st.bodyBytes+int64(len(data)) > st.declBodyBytes { - if sc.inflow.available() < int32(f.Length) { + if !sc.inflow.take(f.Length) { return sc.countError("data_flow", streamError(id, ErrCodeFlowControl)) } - sc.inflow.take(int32(f.Length)) sc.sendWindowUpdate(nil, int(f.Length)) // conn-level st.body.CloseWithError(fmt.Errorf("sender tried to send more than declared Content-Length of %d bytes", st.declBodyBytes)) @@ -1811,10 +1817,9 @@ func (sc *serverConn) processData(f *DataFrame) error { } if f.Length > 0 { // Check whether the client has flow control quota. - if st.inflow.available() < int32(f.Length) { + if !takeInflows(&sc.inflow, &st.inflow, f.Length) { return sc.countError("flow_on_data_length", streamError(id, ErrCodeFlowControl)) } - st.inflow.take(int32(f.Length)) if len(data) > 0 { wrote, err := st.body.Write(data) @@ -1830,10 +1835,12 @@ func (sc *serverConn) processData(f *DataFrame) error { // Return any padded flow control now, since we won't // refund it later on body reads. - if pad := int32(f.Length) - int32(len(data)); pad > 0 { - sc.sendWindowUpdate32(nil, pad) - sc.sendWindowUpdate32(st, pad) - } + // Call sendWindowUpdate even if there is no padding, + // to return buffered flow control credit if the sent + // window has shrunk. 
+ pad := int32(f.Length) - int32(len(data)) + sc.sendWindowUpdate32(nil, pad) + sc.sendWindowUpdate32(st, pad) } if f.StreamEnded() { st.endStream() @@ -2105,8 +2112,7 @@ func (sc *serverConn) newStream(id, pusherID uint32, state streamState) *stream st.cw.Init() st.flow.conn = &sc.flow // link to conn-level counter st.flow.add(sc.initialStreamSendWindowSize) - st.inflow.conn = &sc.inflow // link to conn-level counter - st.inflow.add(sc.srv.initialStreamRecvWindowSize()) + st.inflow.init(sc.srv.initialStreamRecvWindowSize()) if sc.hs.WriteTimeout != 0 { st.writeDeadline = time.AfterFunc(sc.hs.WriteTimeout, st.onWriteTimeout) } @@ -2198,7 +2204,7 @@ func (sc *serverConn) newWriterAndRequestNoBody(st *stream, rp requestParam) (*r tlsState = sc.tlsState } - needsContinue := rp.header.Get("Expect") == "100-continue" + needsContinue := httpguts.HeaderValuesContainsToken(rp.header["Expect"], "100-continue") if needsContinue { rp.header.Del("Expect") } @@ -2388,47 +2394,28 @@ func (sc *serverConn) noteBodyRead(st *stream, n int) { } // st may be nil for conn-level -func (sc *serverConn) sendWindowUpdate(st *stream, n int) { - sc.serveG.check() - // "The legal range for the increment to the flow control - // window is 1 to 2^31-1 (2,147,483,647) octets." - // A Go Read call on 64-bit machines could in theory read - // a larger Read than this. Very unlikely, but we handle it here - // rather than elsewhere for now. 
- const maxUint31 = 1<<31 - 1 - for n > maxUint31 { - sc.sendWindowUpdate32(st, maxUint31) - n -= maxUint31 - } - sc.sendWindowUpdate32(st, int32(n)) +func (sc *serverConn) sendWindowUpdate32(st *stream, n int32) { + sc.sendWindowUpdate(st, int(n)) } // st may be nil for conn-level -func (sc *serverConn) sendWindowUpdate32(st *stream, n int32) { +func (sc *serverConn) sendWindowUpdate(st *stream, n int) { sc.serveG.check() - if n == 0 { - return - } - if n < 0 { - panic("negative update") - } var streamID uint32 - if st != nil { + var send int32 + if st == nil { + send = sc.inflow.add(n) + } else { streamID = st.id + send = st.inflow.add(n) + } + if send == 0 { + return } sc.writeFrame(FrameWriteRequest{ - write: writeWindowUpdate{streamID: streamID, n: uint32(n)}, + write: writeWindowUpdate{streamID: streamID, n: uint32(send)}, stream: st, }) - var ok bool - if st == nil { - ok = sc.inflow.add(n) - } else { - ok = st.inflow.add(n) - } - if !ok { - panic("internal error; sent too many window updates without decrements?") - } } // requestBody is the Handler's Request.Body type. diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/net/http2/transport.go b/vertical-pod-autoscaler/vendor/golang.org/x/net/http2/transport.go index 30f706e6cb81..05ba23d3d988 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/net/http2/transport.go +++ b/vertical-pod-autoscaler/vendor/golang.org/x/net/http2/transport.go @@ -47,10 +47,6 @@ const ( // we buffer per stream. transportDefaultStreamFlow = 4 << 20 - // transportDefaultStreamMinRefresh is the minimum number of bytes we'll send - // a stream-level WINDOW_UPDATE for at a time. 
- transportDefaultStreamMinRefresh = 4 << 10 - defaultUserAgent = "Go-http-client/2.0" // initialMaxConcurrentStreams is a connections maxConcurrentStreams until @@ -310,8 +306,8 @@ type ClientConn struct { mu sync.Mutex // guards following cond *sync.Cond // hold mu; broadcast on flow/closed changes - flow flow // our conn-level flow control quota (cs.flow is per stream) - inflow flow // peer's conn-level flow control + flow outflow // our conn-level flow control quota (cs.outflow is per stream) + inflow inflow // peer's conn-level flow control doNotReuse bool // whether conn is marked to not be reused for any future requests closing bool closed bool @@ -376,10 +372,10 @@ type clientStream struct { respHeaderRecv chan struct{} // closed when headers are received res *http.Response // set if respHeaderRecv is closed - flow flow // guarded by cc.mu - inflow flow // guarded by cc.mu - bytesRemain int64 // -1 means unknown; owned by transportResponseBody.Read - readErr error // sticky read error; owned by transportResponseBody.Read + flow outflow // guarded by cc.mu + inflow inflow // guarded by cc.mu + bytesRemain int64 // -1 means unknown; owned by transportResponseBody.Read + readErr error // sticky read error; owned by transportResponseBody.Read reqBody io.ReadCloser reqBodyContentLength int64 // -1 means unknown @@ -811,7 +807,7 @@ func (t *Transport) newClientConn(c net.Conn, singleUse bool) (*ClientConn, erro cc.bw.Write(clientPreface) cc.fr.WriteSettings(initialSettings...) cc.fr.WriteWindowUpdate(0, transportDefaultConnFlow) - cc.inflow.add(transportDefaultConnFlow + initialWindowSize) + cc.inflow.init(transportDefaultConnFlow + initialWindowSize) cc.bw.Flush() if cc.werr != nil { cc.Close() @@ -1573,7 +1569,7 @@ func (cs *clientStream) cleanupWriteRequest(err error) { close(cs.donec) } -// awaitOpenSlotForStream waits until len(streams) < maxConcurrentStreams. +// awaitOpenSlotForStreamLocked waits until len(streams) < maxConcurrentStreams. 
// Must hold cc.mu. func (cc *ClientConn) awaitOpenSlotForStreamLocked(cs *clientStream) error { for { @@ -2073,8 +2069,7 @@ type resAndError struct { func (cc *ClientConn) addStreamLocked(cs *clientStream) { cs.flow.add(int32(cc.initialWindowSize)) cs.flow.setConnFlow(&cc.flow) - cs.inflow.add(transportDefaultStreamFlow) - cs.inflow.setConnFlow(&cc.inflow) + cs.inflow.init(transportDefaultStreamFlow) cs.ID = cc.nextStreamID cc.nextStreamID += 2 cc.streams[cs.ID] = cs @@ -2533,21 +2528,10 @@ func (b transportResponseBody) Read(p []byte) (n int, err error) { } cc.mu.Lock() - var connAdd, streamAdd int32 - // Check the conn-level first, before the stream-level. - if v := cc.inflow.available(); v < transportDefaultConnFlow/2 { - connAdd = transportDefaultConnFlow - v - cc.inflow.add(connAdd) - } + connAdd := cc.inflow.add(n) + var streamAdd int32 if err == nil { // No need to refresh if the stream is over or failed. - // Consider any buffered body data (read from the conn but not - // consumed by the client) when computing flow control for this - // stream. - v := int(cs.inflow.available()) + cs.bufPipe.Len() - if v < transportDefaultStreamFlow-transportDefaultStreamMinRefresh { - streamAdd = int32(transportDefaultStreamFlow - v) - cs.inflow.add(streamAdd) - } + streamAdd = cs.inflow.add(n) } cc.mu.Unlock() @@ -2575,17 +2559,15 @@ func (b transportResponseBody) Close() error { if unread > 0 { cc.mu.Lock() // Return connection-level flow control. - if unread > 0 { - cc.inflow.add(int32(unread)) - } + connAdd := cc.inflow.add(unread) cc.mu.Unlock() // TODO(dneil): Acquiring this mutex can block indefinitely. // Move flow control return to a goroutine? cc.wmu.Lock() // Return connection-level flow control. 
- if unread > 0 { - cc.fr.WriteWindowUpdate(0, uint32(unread)) + if connAdd > 0 { + cc.fr.WriteWindowUpdate(0, uint32(connAdd)) } cc.bw.Flush() cc.wmu.Unlock() @@ -2628,13 +2610,18 @@ func (rl *clientConnReadLoop) processData(f *DataFrame) error { // But at least return their flow control: if f.Length > 0 { cc.mu.Lock() - cc.inflow.add(int32(f.Length)) + ok := cc.inflow.take(f.Length) + connAdd := cc.inflow.add(int(f.Length)) cc.mu.Unlock() - - cc.wmu.Lock() - cc.fr.WriteWindowUpdate(0, uint32(f.Length)) - cc.bw.Flush() - cc.wmu.Unlock() + if !ok { + return ConnectionError(ErrCodeFlowControl) + } + if connAdd > 0 { + cc.wmu.Lock() + cc.fr.WriteWindowUpdate(0, uint32(connAdd)) + cc.bw.Flush() + cc.wmu.Unlock() + } } return nil } @@ -2665,9 +2652,7 @@ func (rl *clientConnReadLoop) processData(f *DataFrame) error { } // Check connection-level flow control. cc.mu.Lock() - if cs.inflow.available() >= int32(f.Length) { - cs.inflow.take(int32(f.Length)) - } else { + if !takeInflows(&cc.inflow, &cs.inflow, f.Length) { cc.mu.Unlock() return ConnectionError(ErrCodeFlowControl) } @@ -2689,19 +2674,20 @@ func (rl *clientConnReadLoop) processData(f *DataFrame) error { } } - if refund > 0 { - cc.inflow.add(int32(refund)) - if !didReset { - cs.inflow.add(int32(refund)) - } + sendConn := cc.inflow.add(refund) + var sendStream int32 + if !didReset { + sendStream = cs.inflow.add(refund) } cc.mu.Unlock() - if refund > 0 { + if sendConn > 0 || sendStream > 0 { cc.wmu.Lock() - cc.fr.WriteWindowUpdate(0, uint32(refund)) - if !didReset { - cc.fr.WriteWindowUpdate(cs.ID, uint32(refund)) + if sendConn > 0 { + cc.fr.WriteWindowUpdate(0, uint32(sendConn)) + } + if sendStream > 0 { + cc.fr.WriteWindowUpdate(cs.ID, uint32(sendStream)) } cc.bw.Flush() cc.wmu.Unlock() diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/gccgo.go b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/gccgo.go index 0dee23222ca8..b06f52d748f6 100644 --- 
a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/gccgo.go +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/gccgo.go @@ -2,8 +2,8 @@ // Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file. -//go:build gccgo && !aix -// +build gccgo,!aix +//go:build gccgo && !aix && !hurd +// +build gccgo,!aix,!hurd package unix diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/gccgo_c.c b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/gccgo_c.c index 2cb1fefac640..f98a1c542f05 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/gccgo_c.c +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/gccgo_c.c @@ -2,8 +2,8 @@ // Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file. -// +build gccgo -// +build !aix +//go:build gccgo && !aix && !hurd +// +build gccgo,!aix,!hurd #include #include diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ioctl.go b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ioctl.go index 6c7ad052e6b3..7ce8dd406fff 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ioctl.go +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ioctl.go @@ -2,13 +2,12 @@ // Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file. -//go:build aix || darwin || dragonfly || freebsd || linux || netbsd || openbsd || solaris -// +build aix darwin dragonfly freebsd linux netbsd openbsd solaris +//go:build aix || darwin || dragonfly || freebsd || hurd || linux || netbsd || openbsd || solaris +// +build aix darwin dragonfly freebsd hurd linux netbsd openbsd solaris package unix import ( - "runtime" "unsafe" ) @@ -27,7 +26,7 @@ func IoctlSetInt(fd int, req uint, value int) error { // passing the integer value directly. 
func IoctlSetPointerInt(fd int, req uint, value int) error { v := int32(value) - return ioctl(fd, req, uintptr(unsafe.Pointer(&v))) + return ioctlPtr(fd, req, unsafe.Pointer(&v)) } // IoctlSetWinsize performs an ioctl on fd with a *Winsize argument. @@ -36,9 +35,7 @@ func IoctlSetPointerInt(fd int, req uint, value int) error { func IoctlSetWinsize(fd int, req uint, value *Winsize) error { // TODO: if we get the chance, remove the req parameter and // hardcode TIOCSWINSZ. - err := ioctl(fd, req, uintptr(unsafe.Pointer(value))) - runtime.KeepAlive(value) - return err + return ioctlPtr(fd, req, unsafe.Pointer(value)) } // IoctlSetTermios performs an ioctl on fd with a *Termios. @@ -46,9 +43,7 @@ func IoctlSetWinsize(fd int, req uint, value *Winsize) error { // The req value will usually be TCSETA or TIOCSETA. func IoctlSetTermios(fd int, req uint, value *Termios) error { // TODO: if we get the chance, remove the req parameter. - err := ioctl(fd, req, uintptr(unsafe.Pointer(value))) - runtime.KeepAlive(value) - return err + return ioctlPtr(fd, req, unsafe.Pointer(value)) } // IoctlGetInt performs an ioctl operation which gets an integer value @@ -58,18 +53,18 @@ func IoctlSetTermios(fd int, req uint, value *Termios) error { // for those, IoctlRetInt should be used instead of this function. 
func IoctlGetInt(fd int, req uint) (int, error) { var value int - err := ioctl(fd, req, uintptr(unsafe.Pointer(&value))) + err := ioctlPtr(fd, req, unsafe.Pointer(&value)) return value, err } func IoctlGetWinsize(fd int, req uint) (*Winsize, error) { var value Winsize - err := ioctl(fd, req, uintptr(unsafe.Pointer(&value))) + err := ioctlPtr(fd, req, unsafe.Pointer(&value)) return &value, err } func IoctlGetTermios(fd int, req uint) (*Termios, error) { var value Termios - err := ioctl(fd, req, uintptr(unsafe.Pointer(&value))) + err := ioctlPtr(fd, req, unsafe.Pointer(&value)) return &value, err } diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ioctl_zos.go b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ioctl_zos.go index 5384e7d91d79..6532f09af2e3 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ioctl_zos.go +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ioctl_zos.go @@ -27,9 +27,7 @@ func IoctlSetInt(fd int, req uint, value int) error { func IoctlSetWinsize(fd int, req uint, value *Winsize) error { // TODO: if we get the chance, remove the req parameter and // hardcode TIOCSWINSZ. - err := ioctl(fd, req, uintptr(unsafe.Pointer(value))) - runtime.KeepAlive(value) - return err + return ioctlPtr(fd, req, unsafe.Pointer(value)) } // IoctlSetTermios performs an ioctl on fd with a *Termios. @@ -51,13 +49,13 @@ func IoctlSetTermios(fd int, req uint, value *Termios) error { // for those, IoctlRetInt should be used instead of this function. 
func IoctlGetInt(fd int, req uint) (int, error) { var value int - err := ioctl(fd, req, uintptr(unsafe.Pointer(&value))) + err := ioctlPtr(fd, req, unsafe.Pointer(&value)) return value, err } func IoctlGetWinsize(fd int, req uint) (*Winsize, error) { var value Winsize - err := ioctl(fd, req, uintptr(unsafe.Pointer(&value))) + err := ioctlPtr(fd, req, unsafe.Pointer(&value)) return &value, err } diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/mkall.sh b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/mkall.sh index 727cba212704..247158f1ce29 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/mkall.sh +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/mkall.sh @@ -174,10 +174,28 @@ openbsd_arm64) mktypes="GOARCH=$GOARCH go tool cgo -godefs -- -fsigned-char" ;; openbsd_mips64) + mkasm="go run mkasm.go" + mkerrors="$mkerrors -m64" + mksyscall="go run mksyscall.go -openbsd -libc" + mksysctl="go run mksysctl_openbsd.go" + # Let the type of C char be signed for making the bare syscall + # API consistent across platforms. + mktypes="GOARCH=$GOARCH go tool cgo -godefs -- -fsigned-char" + ;; +openbsd_ppc64) + mkasm="go run mkasm.go" mkerrors="$mkerrors -m64" - mksyscall="go run mksyscall.go -openbsd" + mksyscall="go run mksyscall.go -openbsd -libc" + mksysctl="go run mksysctl_openbsd.go" + # Let the type of C char be signed for making the bare syscall + # API consistent across platforms. + mktypes="GOARCH=$GOARCH go tool cgo -godefs -- -fsigned-char" + ;; +openbsd_riscv64) + mkasm="go run mkasm.go" + mkerrors="$mkerrors -m64" + mksyscall="go run mksyscall.go -openbsd -libc" mksysctl="go run mksysctl_openbsd.go" - mksysnum="go run mksysnum.go 'https://cvsweb.openbsd.org/cgi-bin/cvsweb/~checkout~/src/sys/kern/syscalls.master'" # Let the type of C char be signed for making the bare syscall # API consistent across platforms. 
mktypes="GOARCH=$GOARCH go tool cgo -godefs -- -fsigned-char" diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ptrace_darwin.go b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ptrace_darwin.go index 463c3eff7fd2..39dba6ca6a34 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ptrace_darwin.go +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ptrace_darwin.go @@ -7,6 +7,12 @@ package unix +import "unsafe" + func ptrace(request int, pid int, addr uintptr, data uintptr) error { return ptrace1(request, pid, addr, data) } + +func ptracePtr(request int, pid int, addr uintptr, data unsafe.Pointer) error { + return ptrace1Ptr(request, pid, addr, data) +} diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ptrace_ios.go b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ptrace_ios.go index ed0509a0117c..9ea66330a968 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ptrace_ios.go +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ptrace_ios.go @@ -7,6 +7,12 @@ package unix +import "unsafe" + func ptrace(request int, pid int, addr uintptr, data uintptr) (err error) { return ENOTSUP } + +func ptracePtr(request int, pid int, addr uintptr, data unsafe.Pointer) (err error) { + return ENOTSUP +} diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/syscall_aix.go b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/syscall_aix.go index 2db1b51e99f0..d9f5544ccf45 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/syscall_aix.go +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/syscall_aix.go @@ -292,9 +292,7 @@ func anyToSockaddr(fd int, rsa *RawSockaddrAny) (Sockaddr, error) { break } } - - bytes := (*[len(pp.Path)]byte)(unsafe.Pointer(&pp.Path[0]))[0:n] - sa.Name = string(bytes) + sa.Name = string(unsafe.Slice((*byte)(unsafe.Pointer(&pp.Path[0])), n)) return sa, nil case AF_INET: @@ -411,6 +409,7 @@ func (w WaitStatus) CoreDump() bool { return w&0x80 == 
0x80 } func (w WaitStatus) TrapCause() int { return -1 } //sys ioctl(fd int, req uint, arg uintptr) (err error) +//sys ioctlPtr(fd int, req uint, arg unsafe.Pointer) (err error) = ioctl // fcntl must never be called with cmd=F_DUP2FD because it doesn't work on AIX // There is no way to create a custom fcntl and to keep //sys fcntl easily, diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/syscall_bsd.go b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/syscall_bsd.go index eda42671f195..7705c3270b5e 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/syscall_bsd.go +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/syscall_bsd.go @@ -245,8 +245,7 @@ func anyToSockaddr(fd int, rsa *RawSockaddrAny) (Sockaddr, error) { break } } - bytes := (*[len(pp.Path)]byte)(unsafe.Pointer(&pp.Path[0]))[0:n] - sa.Name = string(bytes) + sa.Name = string(unsafe.Slice((*byte)(unsafe.Pointer(&pp.Path[0])), n)) return sa, nil case AF_INET: diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/syscall_darwin.go b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/syscall_darwin.go index 1f63382182f3..7064d6ebab6a 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/syscall_darwin.go +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/syscall_darwin.go @@ -14,7 +14,6 @@ package unix import ( "fmt" - "runtime" "syscall" "unsafe" ) @@ -230,6 +229,7 @@ func direntNamlen(buf []byte) (uint64, bool) { func PtraceAttach(pid int) (err error) { return ptrace(PT_ATTACH, pid, 0, 0) } func PtraceDetach(pid int) (err error) { return ptrace(PT_DETACH, pid, 0, 0) } +func PtraceDenyAttach() (err error) { return ptrace(PT_DENY_ATTACH, 0, 0, 0) } //sysnb pipe(p *[2]int32) (err error) @@ -375,11 +375,10 @@ func Flistxattr(fd int, dest []byte) (sz int, err error) { func Kill(pid int, signum syscall.Signal) (err error) { return kill(pid, int(signum), 1) } //sys ioctl(fd int, req uint, arg uintptr) (err error) +//sys ioctlPtr(fd int, 
req uint, arg unsafe.Pointer) (err error) = SYS_IOCTL func IoctlCtlInfo(fd int, ctlInfo *CtlInfo) error { - err := ioctl(fd, CTLIOCGINFO, uintptr(unsafe.Pointer(ctlInfo))) - runtime.KeepAlive(ctlInfo) - return err + return ioctlPtr(fd, CTLIOCGINFO, unsafe.Pointer(ctlInfo)) } // IfreqMTU is struct ifreq used to get or set a network device's MTU. @@ -393,16 +392,14 @@ type IfreqMTU struct { func IoctlGetIfreqMTU(fd int, ifname string) (*IfreqMTU, error) { var ifreq IfreqMTU copy(ifreq.Name[:], ifname) - err := ioctl(fd, SIOCGIFMTU, uintptr(unsafe.Pointer(&ifreq))) + err := ioctlPtr(fd, SIOCGIFMTU, unsafe.Pointer(&ifreq)) return &ifreq, err } // IoctlSetIfreqMTU performs the SIOCSIFMTU ioctl operation on fd to set the MTU // of the network device specified by ifreq.Name. func IoctlSetIfreqMTU(fd int, ifreq *IfreqMTU) error { - err := ioctl(fd, SIOCSIFMTU, uintptr(unsafe.Pointer(ifreq))) - runtime.KeepAlive(ifreq) - return err + return ioctlPtr(fd, SIOCSIFMTU, unsafe.Pointer(ifreq)) } //sys sysctl(mib []_C_int, old *byte, oldlen *uintptr, new *byte, newlen uintptr) (err error) = SYS_SYSCTL diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/syscall_darwin_amd64.go b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/syscall_darwin_amd64.go index b37310ce9b40..9fa879806bcb 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/syscall_darwin_amd64.go +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/syscall_darwin_amd64.go @@ -47,5 +47,6 @@ func Syscall9(num, a1, a2, a3, a4, a5, a6, a7, a8, a9 uintptr) (r1, r2 uintptr, //sys getfsstat(buf unsafe.Pointer, size uintptr, flags int) (n int, err error) = SYS_GETFSSTAT64 //sys Lstat(path string, stat *Stat_t) (err error) = SYS_LSTAT64 //sys ptrace1(request int, pid int, addr uintptr, data uintptr) (err error) = SYS_ptrace +//sys ptrace1Ptr(request int, pid int, addr unsafe.Pointer, data uintptr) (err error) = SYS_ptrace //sys Stat(path string, stat *Stat_t) (err error) = SYS_STAT64 //sys 
Statfs(path string, stat *Statfs_t) (err error) = SYS_STATFS64 diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/syscall_darwin_arm64.go b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/syscall_darwin_arm64.go index d51ec996304e..f17b8c526a53 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/syscall_darwin_arm64.go +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/syscall_darwin_arm64.go @@ -47,5 +47,6 @@ func Syscall9(num, a1, a2, a3, a4, a5, a6, a7, a8, a9 uintptr) (r1, r2 uintptr, //sys getfsstat(buf unsafe.Pointer, size uintptr, flags int) (n int, err error) = SYS_GETFSSTAT //sys Lstat(path string, stat *Stat_t) (err error) //sys ptrace1(request int, pid int, addr uintptr, data uintptr) (err error) = SYS_ptrace +//sys ptrace1Ptr(request int, pid int, addr unsafe.Pointer, data uintptr) (err error) = SYS_ptrace //sys Stat(path string, stat *Stat_t) (err error) //sys Statfs(path string, stat *Statfs_t) (err error) diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/syscall_dragonfly.go b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/syscall_dragonfly.go index 61c0d0de15d5..221efc26bcdc 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/syscall_dragonfly.go +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/syscall_dragonfly.go @@ -172,6 +172,7 @@ func Getfsstat(buf []Statfs_t, flags int) (n int, err error) { } //sys ioctl(fd int, req uint, arg uintptr) (err error) +//sys ioctlPtr(fd int, req uint, arg unsafe.Pointer) (err error) = SYS_IOCTL //sys sysctl(mib []_C_int, old *byte, oldlen *uintptr, new *byte, newlen uintptr) (err error) = SYS___SYSCTL @@ -255,6 +256,7 @@ func Sendfile(outfd int, infd int, offset *int64, count int) (written int, err e //sys Chmod(path string, mode uint32) (err error) //sys Chown(path string, uid int, gid int) (err error) //sys Chroot(path string) (err error) +//sys ClockGettime(clockid int32, time *Timespec) (err error) //sys Close(fd int) (err 
error) //sys Dup(fd int) (nfd int, err error) //sys Dup2(from int, to int) (err error) diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/syscall_freebsd.go b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/syscall_freebsd.go index de7c23e0648a..5bdde03e4a84 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/syscall_freebsd.go +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/syscall_freebsd.go @@ -161,7 +161,8 @@ func Getfsstat(buf []Statfs_t, flags int) (n int, err error) { return } -//sys ioctl(fd int, req uint, arg uintptr) (err error) +//sys ioctl(fd int, req uint, arg uintptr) (err error) = SYS_IOCTL +//sys ioctlPtr(fd int, req uint, arg unsafe.Pointer) (err error) = SYS_IOCTL //sys sysctl(mib []_C_int, old *byte, oldlen *uintptr, new *byte, newlen uintptr) (err error) = SYS___SYSCTL @@ -253,6 +254,7 @@ func Sendfile(outfd int, infd int, offset *int64, count int) (written int, err e } //sys ptrace(request int, pid int, addr uintptr, data int) (err error) +//sys ptracePtr(request int, pid int, addr unsafe.Pointer, data int) (err error) = SYS_PTRACE func PtraceAttach(pid int) (err error) { return ptrace(PT_ATTACH, pid, 0, 0) @@ -267,19 +269,36 @@ func PtraceDetach(pid int) (err error) { } func PtraceGetFpRegs(pid int, fpregsout *FpReg) (err error) { - return ptrace(PT_GETFPREGS, pid, uintptr(unsafe.Pointer(fpregsout)), 0) + return ptracePtr(PT_GETFPREGS, pid, unsafe.Pointer(fpregsout), 0) } func PtraceGetRegs(pid int, regsout *Reg) (err error) { - return ptrace(PT_GETREGS, pid, uintptr(unsafe.Pointer(regsout)), 0) + return ptracePtr(PT_GETREGS, pid, unsafe.Pointer(regsout), 0) +} + +func PtraceIO(req int, pid int, offs uintptr, out []byte, countin int) (count int, err error) { + ioDesc := PtraceIoDesc{ + Op: int32(req), + Offs: offs, + } + if countin > 0 { + _ = out[:countin] // check bounds + ioDesc.Addr = &out[0] + } else if out != nil { + ioDesc.Addr = (*byte)(unsafe.Pointer(&_zero)) + } + ioDesc.SetLen(countin) + 
+ err = ptracePtr(PT_IO, pid, unsafe.Pointer(&ioDesc), 0) + return int(ioDesc.Len), err } func PtraceLwpEvents(pid int, enable int) (err error) { return ptrace(PT_LWP_EVENTS, pid, 0, enable) } -func PtraceLwpInfo(pid int, info uintptr) (err error) { - return ptrace(PT_LWPINFO, pid, info, int(unsafe.Sizeof(PtraceLwpInfoStruct{}))) +func PtraceLwpInfo(pid int, info *PtraceLwpInfoStruct) (err error) { + return ptracePtr(PT_LWPINFO, pid, unsafe.Pointer(info), int(unsafe.Sizeof(*info))) } func PtracePeekData(pid int, addr uintptr, out []byte) (count int, err error) { @@ -299,13 +318,25 @@ func PtracePokeText(pid int, addr uintptr, data []byte) (count int, err error) { } func PtraceSetRegs(pid int, regs *Reg) (err error) { - return ptrace(PT_SETREGS, pid, uintptr(unsafe.Pointer(regs)), 0) + return ptracePtr(PT_SETREGS, pid, unsafe.Pointer(regs), 0) } func PtraceSingleStep(pid int) (err error) { return ptrace(PT_STEP, pid, 1, 0) } +func Dup3(oldfd, newfd, flags int) error { + if oldfd == newfd || flags&^O_CLOEXEC != 0 { + return EINVAL + } + how := F_DUP2FD + if flags&O_CLOEXEC != 0 { + how = F_DUP2FD_CLOEXEC + } + _, err := fcntl(oldfd, how, newfd) + return err +} + /* * Exposed directly */ @@ -319,6 +350,7 @@ func PtraceSingleStep(pid int) (err error) { //sys Chmod(path string, mode uint32) (err error) //sys Chown(path string, uid int, gid int) (err error) //sys Chroot(path string) (err error) +//sys ClockGettime(clockid int32, time *Timespec) (err error) //sys Close(fd int) (err error) //sys Dup(fd int) (nfd int, err error) //sys Dup2(from int, to int) (err error) diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/syscall_freebsd_386.go b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/syscall_freebsd_386.go index b11ede89a960..b8da510043cb 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/syscall_freebsd_386.go +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/syscall_freebsd_386.go @@ -42,6 +42,10 @@ func (cmsg *Cmsghdr) 
SetLen(length int) { cmsg.Len = uint32(length) } +func (d *PtraceIoDesc) SetLen(length int) { + d.Len = uint32(length) +} + func sendfile(outfd int, infd int, offset *int64, count int) (written int, err error) { var writtenOut uint64 = 0 _, _, e1 := Syscall9(SYS_SENDFILE, uintptr(infd), uintptr(outfd), uintptr(*offset), uintptr((*offset)>>32), uintptr(count), 0, uintptr(unsafe.Pointer(&writtenOut)), 0, 0) @@ -57,11 +61,5 @@ func sendfile(outfd int, infd int, offset *int64, count int) (written int, err e func Syscall9(num, a1, a2, a3, a4, a5, a6, a7, a8, a9 uintptr) (r1, r2 uintptr, err syscall.Errno) func PtraceGetFsBase(pid int, fsbase *int64) (err error) { - return ptrace(PT_GETFSBASE, pid, uintptr(unsafe.Pointer(fsbase)), 0) -} - -func PtraceIO(req int, pid int, addr uintptr, out []byte, countin int) (count int, err error) { - ioDesc := PtraceIoDesc{Op: int32(req), Offs: uintptr(unsafe.Pointer(addr)), Addr: uintptr(unsafe.Pointer(&out[0])), Len: uint32(countin)} - err = ptrace(PT_IO, pid, uintptr(unsafe.Pointer(&ioDesc)), 0) - return int(ioDesc.Len), err + return ptracePtr(PT_GETFSBASE, pid, unsafe.Pointer(fsbase), 0) } diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/syscall_freebsd_amd64.go b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/syscall_freebsd_amd64.go index 9ed8eec6c287..47155c48390b 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/syscall_freebsd_amd64.go +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/syscall_freebsd_amd64.go @@ -42,6 +42,10 @@ func (cmsg *Cmsghdr) SetLen(length int) { cmsg.Len = uint32(length) } +func (d *PtraceIoDesc) SetLen(length int) { + d.Len = uint64(length) +} + func sendfile(outfd int, infd int, offset *int64, count int) (written int, err error) { var writtenOut uint64 = 0 _, _, e1 := Syscall9(SYS_SENDFILE, uintptr(infd), uintptr(outfd), uintptr(*offset), uintptr(count), 0, uintptr(unsafe.Pointer(&writtenOut)), 0, 0, 0) @@ -57,11 +61,5 @@ func sendfile(outfd int, infd 
int, offset *int64, count int) (written int, err e func Syscall9(num, a1, a2, a3, a4, a5, a6, a7, a8, a9 uintptr) (r1, r2 uintptr, err syscall.Errno) func PtraceGetFsBase(pid int, fsbase *int64) (err error) { - return ptrace(PT_GETFSBASE, pid, uintptr(unsafe.Pointer(fsbase)), 0) -} - -func PtraceIO(req int, pid int, addr uintptr, out []byte, countin int) (count int, err error) { - ioDesc := PtraceIoDesc{Op: int32(req), Offs: uintptr(unsafe.Pointer(addr)), Addr: uintptr(unsafe.Pointer(&out[0])), Len: uint64(countin)} - err = ptrace(PT_IO, pid, uintptr(unsafe.Pointer(&ioDesc)), 0) - return int(ioDesc.Len), err + return ptracePtr(PT_GETFSBASE, pid, unsafe.Pointer(fsbase), 0) } diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/syscall_freebsd_arm.go b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/syscall_freebsd_arm.go index f8ac98247905..08932093fa24 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/syscall_freebsd_arm.go +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/syscall_freebsd_arm.go @@ -42,6 +42,10 @@ func (cmsg *Cmsghdr) SetLen(length int) { cmsg.Len = uint32(length) } +func (d *PtraceIoDesc) SetLen(length int) { + d.Len = uint32(length) +} + func sendfile(outfd int, infd int, offset *int64, count int) (written int, err error) { var writtenOut uint64 = 0 _, _, e1 := Syscall9(SYS_SENDFILE, uintptr(infd), uintptr(outfd), uintptr(*offset), uintptr((*offset)>>32), uintptr(count), 0, uintptr(unsafe.Pointer(&writtenOut)), 0, 0) @@ -55,9 +59,3 @@ func sendfile(outfd int, infd int, offset *int64, count int) (written int, err e } func Syscall9(num, a1, a2, a3, a4, a5, a6, a7, a8, a9 uintptr) (r1, r2 uintptr, err syscall.Errno) - -func PtraceIO(req int, pid int, addr uintptr, out []byte, countin int) (count int, err error) { - ioDesc := PtraceIoDesc{Op: int32(req), Offs: uintptr(unsafe.Pointer(addr)), Addr: uintptr(unsafe.Pointer(&out[0])), Len: uint32(countin)} - err = ptrace(PT_IO, pid, 
uintptr(unsafe.Pointer(&ioDesc)), 0) - return int(ioDesc.Len), err -} diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/syscall_freebsd_arm64.go b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/syscall_freebsd_arm64.go index 8e932036ec37..d151a0d0e53a 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/syscall_freebsd_arm64.go +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/syscall_freebsd_arm64.go @@ -42,6 +42,10 @@ func (cmsg *Cmsghdr) SetLen(length int) { cmsg.Len = uint32(length) } +func (d *PtraceIoDesc) SetLen(length int) { + d.Len = uint64(length) +} + func sendfile(outfd int, infd int, offset *int64, count int) (written int, err error) { var writtenOut uint64 = 0 _, _, e1 := Syscall9(SYS_SENDFILE, uintptr(infd), uintptr(outfd), uintptr(*offset), uintptr(count), 0, uintptr(unsafe.Pointer(&writtenOut)), 0, 0, 0) @@ -55,9 +59,3 @@ func sendfile(outfd int, infd int, offset *int64, count int) (written int, err e } func Syscall9(num, a1, a2, a3, a4, a5, a6, a7, a8, a9 uintptr) (r1, r2 uintptr, err syscall.Errno) - -func PtraceIO(req int, pid int, addr uintptr, out []byte, countin int) (count int, err error) { - ioDesc := PtraceIoDesc{Op: int32(req), Offs: uintptr(unsafe.Pointer(addr)), Addr: uintptr(unsafe.Pointer(&out[0])), Len: uint64(countin)} - err = ptrace(PT_IO, pid, uintptr(unsafe.Pointer(&ioDesc)), 0) - return int(ioDesc.Len), err -} diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/syscall_freebsd_riscv64.go b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/syscall_freebsd_riscv64.go index cbe12227896b..d5cd64b37874 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/syscall_freebsd_riscv64.go +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/syscall_freebsd_riscv64.go @@ -42,6 +42,10 @@ func (cmsg *Cmsghdr) SetLen(length int) { cmsg.Len = uint32(length) } +func (d *PtraceIoDesc) SetLen(length int) { + d.Len = uint64(length) +} + func sendfile(outfd int, infd int, 
offset *int64, count int) (written int, err error) { var writtenOut uint64 = 0 _, _, e1 := Syscall9(SYS_SENDFILE, uintptr(infd), uintptr(outfd), uintptr(*offset), uintptr(count), 0, uintptr(unsafe.Pointer(&writtenOut)), 0, 0, 0) @@ -55,9 +59,3 @@ func sendfile(outfd int, infd int, offset *int64, count int) (written int, err e } func Syscall9(num, a1, a2, a3, a4, a5, a6, a7, a8, a9 uintptr) (r1, r2 uintptr, err syscall.Errno) - -func PtraceIO(req int, pid int, addr uintptr, out []byte, countin int) (count int, err error) { - ioDesc := PtraceIoDesc{Op: int32(req), Offs: uintptr(unsafe.Pointer(addr)), Addr: uintptr(unsafe.Pointer(&out[0])), Len: uint64(countin)} - err = ptrace(PT_IO, pid, uintptr(unsafe.Pointer(&ioDesc)), 0) - return int(ioDesc.Len), err -} diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/syscall_hurd.go b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/syscall_hurd.go new file mode 100644 index 000000000000..381fd4673bec --- /dev/null +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/syscall_hurd.go @@ -0,0 +1,30 @@ +// Copyright 2022 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. 
+ +//go:build hurd +// +build hurd + +package unix + +/* +#include +int ioctl(int, unsigned long int, uintptr_t); +*/ +import "C" + +func ioctl(fd int, req uint, arg uintptr) (err error) { + r0, er := C.ioctl(C.int(fd), C.ulong(req), C.uintptr_t(arg)) + if r0 == -1 && er != nil { + err = er + } + return +} + +func ioctlPtr(fd int, req uint, arg unsafe.Pointer) (err error) { + r0, er := C.ioctl(C.int(fd), C.ulong(req), C.uintptr_t(uintptr(arg))) + if r0 == -1 && er != nil { + err = er + } + return +} diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/syscall_hurd_386.go b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/syscall_hurd_386.go new file mode 100644 index 000000000000..7cf54a3e4f10 --- /dev/null +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/syscall_hurd_386.go @@ -0,0 +1,29 @@ +// Copyright 2022 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +//go:build 386 && hurd +// +build 386,hurd + +package unix + +const ( + TIOCGETA = 0x62251713 +) + +type Winsize struct { + Row uint16 + Col uint16 + Xpixel uint16 + Ypixel uint16 +} + +type Termios struct { + Iflag uint32 + Oflag uint32 + Cflag uint32 + Lflag uint32 + Cc [20]uint8 + Ispeed int32 + Ospeed int32 +} diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/syscall_linux.go b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/syscall_linux.go index c5a98440eca1..9735331530ac 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/syscall_linux.go +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/syscall_linux.go @@ -1015,8 +1015,7 @@ func anyToSockaddr(fd int, rsa *RawSockaddrAny) (Sockaddr, error) { for n < len(pp.Path) && pp.Path[n] != 0 { n++ } - bytes := (*[len(pp.Path)]byte)(unsafe.Pointer(&pp.Path[0]))[0:n] - sa.Name = string(bytes) + sa.Name = string(unsafe.Slice((*byte)(unsafe.Pointer(&pp.Path[0])), n)) return sa, nil case AF_INET: @@ 
(Note: the `#include` line in the new `syscall_hurd.go` above reads `#include <stdint.h>` upstream, for `uintptr_t`; the header name was eaten during extraction.)
-1365,6 +1364,10 @@ func SetsockoptTCPRepairOpt(fd, level, opt int, o []TCPRepairOpt) (err error) { return setsockopt(fd, level, opt, unsafe.Pointer(&o[0]), uintptr(SizeofTCPRepairOpt*len(o))) } +func SetsockoptTCPMD5Sig(fd, level, opt int, s *TCPMD5Sig) error { + return setsockopt(fd, level, opt, unsafe.Pointer(s), unsafe.Sizeof(*s)) +} + // Keyctl Commands (http://man7.org/linux/man-pages/man2/keyctl.2.html) // KeyctlInt calls keyctl commands in which each argument is an int. @@ -1579,6 +1582,7 @@ func BindToDevice(fd int, device string) (err error) { } //sys ptrace(request int, pid int, addr uintptr, data uintptr) (err error) +//sys ptracePtr(request int, pid int, addr uintptr, data unsafe.Pointer) (err error) = SYS_PTRACE func ptracePeek(req int, pid int, addr uintptr, out []byte) (count int, err error) { // The peek requests are machine-size oriented, so we wrap it @@ -1596,7 +1600,7 @@ func ptracePeek(req int, pid int, addr uintptr, out []byte) (count int, err erro // boundary. n := 0 if addr%SizeofPtr != 0 { - err = ptrace(req, pid, addr-addr%SizeofPtr, uintptr(unsafe.Pointer(&buf[0]))) + err = ptracePtr(req, pid, addr-addr%SizeofPtr, unsafe.Pointer(&buf[0])) if err != nil { return 0, err } @@ -1608,7 +1612,7 @@ func ptracePeek(req int, pid int, addr uintptr, out []byte) (count int, err erro for len(out) > 0 { // We use an internal buffer to guarantee alignment. // It's not documented if this is necessary, but we're paranoid. 
- err = ptrace(req, pid, addr+uintptr(n), uintptr(unsafe.Pointer(&buf[0]))) + err = ptracePtr(req, pid, addr+uintptr(n), unsafe.Pointer(&buf[0])) if err != nil { return n, err } @@ -1640,7 +1644,7 @@ func ptracePoke(pokeReq int, peekReq int, pid int, addr uintptr, data []byte) (c n := 0 if addr%SizeofPtr != 0 { var buf [SizeofPtr]byte - err = ptrace(peekReq, pid, addr-addr%SizeofPtr, uintptr(unsafe.Pointer(&buf[0]))) + err = ptracePtr(peekReq, pid, addr-addr%SizeofPtr, unsafe.Pointer(&buf[0])) if err != nil { return 0, err } @@ -1667,7 +1671,7 @@ func ptracePoke(pokeReq int, peekReq int, pid int, addr uintptr, data []byte) (c // Trailing edge. if len(data) > 0 { var buf [SizeofPtr]byte - err = ptrace(peekReq, pid, addr+uintptr(n), uintptr(unsafe.Pointer(&buf[0]))) + err = ptracePtr(peekReq, pid, addr+uintptr(n), unsafe.Pointer(&buf[0])) if err != nil { return n, err } @@ -1696,11 +1700,11 @@ func PtracePokeUser(pid int, addr uintptr, data []byte) (count int, err error) { } func PtraceGetRegs(pid int, regsout *PtraceRegs) (err error) { - return ptrace(PTRACE_GETREGS, pid, 0, uintptr(unsafe.Pointer(regsout))) + return ptracePtr(PTRACE_GETREGS, pid, 0, unsafe.Pointer(regsout)) } func PtraceSetRegs(pid int, regs *PtraceRegs) (err error) { - return ptrace(PTRACE_SETREGS, pid, 0, uintptr(unsafe.Pointer(regs))) + return ptracePtr(PTRACE_SETREGS, pid, 0, unsafe.Pointer(regs)) } func PtraceSetOptions(pid int, options int) (err error) { @@ -1709,7 +1713,7 @@ func PtraceSetOptions(pid int, options int) (err error) { func PtraceGetEventMsg(pid int) (msg uint, err error) { var data _C_long - err = ptrace(PTRACE_GETEVENTMSG, pid, 0, uintptr(unsafe.Pointer(&data))) + err = ptracePtr(PTRACE_GETEVENTMSG, pid, 0, unsafe.Pointer(&data)) msg = uint(data) return } @@ -1800,6 +1804,7 @@ func Sendfile(outfd int, infd int, offset *int64, count int) (written int, err e //sysnb Capset(hdr *CapUserHeader, data *CapUserData) (err error) //sys Chdir(path string) (err error) //sys Chroot(path 
string) (err error) +//sys ClockAdjtime(clockid int32, buf *Timex) (state int, err error) //sys ClockGetres(clockid int32, res *Timespec) (err error) //sys ClockGettime(clockid int32, time *Timespec) (err error) //sys ClockNanosleep(clockid int32, flags int, request *Timespec, remain *Timespec) (err error) @@ -1973,36 +1978,46 @@ func Signalfd(fd int, sigmask *Sigset_t, flags int) (newfd int, err error) { //sys preadv2(fd int, iovs []Iovec, offs_l uintptr, offs_h uintptr, flags int) (n int, err error) = SYS_PREADV2 //sys pwritev2(fd int, iovs []Iovec, offs_l uintptr, offs_h uintptr, flags int) (n int, err error) = SYS_PWRITEV2 -func bytes2iovec(bs [][]byte) []Iovec { - iovecs := make([]Iovec, len(bs)) - for i, b := range bs { - iovecs[i].SetLen(len(b)) +// minIovec is the size of the small initial allocation used by +// Readv, Writev, etc. +// +// This small allocation gets stack allocated, which lets the +// common use case of len(iovs) <= minIovs avoid more expensive +// heap allocations. +const minIovec = 8 + +// appendBytes converts bs to Iovecs and appends them to vecs. +func appendBytes(vecs []Iovec, bs [][]byte) []Iovec { + for _, b := range bs { + var v Iovec + v.SetLen(len(b)) if len(b) > 0 { - iovecs[i].Base = &b[0] + v.Base = &b[0] } else { - iovecs[i].Base = (*byte)(unsafe.Pointer(&_zero)) + v.Base = (*byte)(unsafe.Pointer(&_zero)) } + vecs = append(vecs, v) } - return iovecs + return vecs } -// offs2lohi splits offs into its lower and upper unsigned long. On 64-bit -// systems, hi will always be 0. On 32-bit systems, offs will be split in half. -// preadv/pwritev chose this calling convention so they don't need to add a -// padding-register for alignment on ARM. +// offs2lohi splits offs into its low and high order bits. 
func offs2lohi(offs int64) (lo, hi uintptr) { - return uintptr(offs), uintptr(uint64(offs) >> SizeofLong) + const longBits = SizeofLong * 8 + return uintptr(offs), uintptr(uint64(offs) >> (longBits - 1) >> 1) // two shifts to avoid false positive in vet } func Readv(fd int, iovs [][]byte) (n int, err error) { - iovecs := bytes2iovec(iovs) + iovecs := make([]Iovec, 0, minIovec) + iovecs = appendBytes(iovecs, iovs) n, err = readv(fd, iovecs) readvRacedetect(iovecs, n, err) return n, err } func Preadv(fd int, iovs [][]byte, offset int64) (n int, err error) { - iovecs := bytes2iovec(iovs) + iovecs := make([]Iovec, 0, minIovec) + iovecs = appendBytes(iovecs, iovs) lo, hi := offs2lohi(offset) n, err = preadv(fd, iovecs, lo, hi) readvRacedetect(iovecs, n, err) @@ -2010,7 +2025,8 @@ func Preadv(fd int, iovs [][]byte, offset int64) (n int, err error) { } func Preadv2(fd int, iovs [][]byte, offset int64, flags int) (n int, err error) { - iovecs := bytes2iovec(iovs) + iovecs := make([]Iovec, 0, minIovec) + iovecs = appendBytes(iovecs, iovs) lo, hi := offs2lohi(offset) n, err = preadv2(fd, iovecs, lo, hi, flags) readvRacedetect(iovecs, n, err) @@ -2037,7 +2053,8 @@ func readvRacedetect(iovecs []Iovec, n int, err error) { } func Writev(fd int, iovs [][]byte) (n int, err error) { - iovecs := bytes2iovec(iovs) + iovecs := make([]Iovec, 0, minIovec) + iovecs = appendBytes(iovecs, iovs) if raceenabled { raceReleaseMerge(unsafe.Pointer(&ioSync)) } @@ -2047,7 +2064,8 @@ func Writev(fd int, iovs [][]byte) (n int, err error) { } func Pwritev(fd int, iovs [][]byte, offset int64) (n int, err error) { - iovecs := bytes2iovec(iovs) + iovecs := make([]Iovec, 0, minIovec) + iovecs = appendBytes(iovecs, iovs) if raceenabled { raceReleaseMerge(unsafe.Pointer(&ioSync)) } @@ -2058,7 +2076,8 @@ func Pwritev(fd int, iovs [][]byte, offset int64) (n int, err error) { } func Pwritev2(fd int, iovs [][]byte, offset int64, flags int) (n int, err error) { - iovecs := bytes2iovec(iovs) + iovecs := 
make([]Iovec, 0, minIovec) + iovecs = appendBytes(iovecs, iovs) if raceenabled { raceReleaseMerge(unsafe.Pointer(&ioSync)) } @@ -2139,6 +2158,14 @@ func isGroupMember(gid int) bool { return false } +func isCapDacOverrideSet() bool { + hdr := CapUserHeader{Version: LINUX_CAPABILITY_VERSION_3} + data := [2]CapUserData{} + err := Capget(&hdr, &data[0]) + + return err == nil && data[0].Effective&(1< 0 { @@ -346,13 +359,9 @@ func Recvmsg(fd int, p, oob []byte, flags int) (n, oobn int, recvflags int, from return } -// RecvmsgBuffers receives a message from a socket using the recvmsg -// system call. The flags are passed to recvmsg. Any non-control data -// read is scattered into the buffers slices. The results are: -// - n is the number of non-control data read into bufs -// - oobn is the number of control data read into oob; this may be interpreted using [ParseSocketControlMessage] -// - recvflags is flags returned by recvmsg -// - from is the address of the sender +// RecvmsgBuffers receives a message from a socket using the recvmsg system +// call. This function is equivalent to Recvmsg, but non-control data read is +// scattered into the buffers slices. func RecvmsgBuffers(fd int, buffers [][]byte, oob []byte, flags int) (n, oobn int, recvflags int, from Sockaddr, err error) { iov := make([]Iovec, len(buffers)) for i := range buffers { @@ -371,11 +380,38 @@ func RecvmsgBuffers(fd int, buffers [][]byte, oob []byte, flags int) (n, oobn in return } +// Sendmsg sends a message on a socket to an address using the sendmsg system +// call. This function is equivalent to SendmsgN, but does not return the +// number of bytes actually sent. func Sendmsg(fd int, p, oob []byte, to Sockaddr, flags int) (err error) { _, err = SendmsgN(fd, p, oob, to, flags) return } +// SendmsgN sends a message on a socket to an address using the sendmsg system +// call. p contains the non-control data to send, and oob contains the "out of +// band" control data. The flags are passed to sendmsg. 
The number of +// non-control bytes actually written to the socket is returned. +// +// Some socket types do not support sending control data without accompanying +// non-control data. If p is empty, and oob contains control data, and the +// underlying socket type is not SOCK_DGRAM, p will be treated as containing a +// single '\0' and the return value will indicate zero bytes sent. +// +// The Go function Recvmsg, if called with an empty p and a non-empty oob, +// will read and ignore this additional '\0'. If the message is received by +// code that does not use Recvmsg, or that does not use Go at all, that code +// will need to be written to expect and ignore the additional '\0'. +// +// If you need to send non-empty oob with p actually empty, and if the +// underlying socket type supports it, you can do so via a raw system call as +// follows: +// +// msg := &unix.Msghdr{ +// Control: &oob[0], +// } +// msg.SetControllen(len(oob)) +// n, _, errno := unix.Syscall(unix.SYS_SENDMSG, uintptr(fd), uintptr(unsafe.Pointer(msg)), flags) func SendmsgN(fd int, p, oob []byte, to Sockaddr, flags int) (n int, err error) { var iov [1]Iovec if len(p) > 0 { @@ -394,9 +430,8 @@ func SendmsgN(fd int, p, oob []byte, to Sockaddr, flags int) (n int, err error) } // SendmsgBuffers sends a message on a socket to an address using the sendmsg -// system call. The flags are passed to sendmsg. Any non-control data written -// is gathered from buffers. The function returns the number of bytes written -// to the socket. +// system call. This function is equivalent to SendmsgN, but the non-control +// data is gathered from buffers. 
func SendmsgBuffers(fd int, buffers [][]byte, oob []byte, to Sockaddr, flags int) (n int, err error) { iov := make([]Iovec, len(buffers)) for i := range buffers { @@ -543,7 +578,7 @@ func Lutimes(path string, tv []Timeval) error { return UtimesNanoAt(AT_FDCWD, path, ts, AT_SYMLINK_NOFOLLOW) } -// emptyIovec reports whether there are no bytes in the slice of Iovec. +// emptyIovecs reports whether there are no bytes in the slice of Iovec. func emptyIovecs(iov []Iovec) bool { for i := range iov { if iov[i].Len > 0 { diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/syscall_zos_s390x.go b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/syscall_zos_s390x.go index 68b2f3e1cd0a..b295497ae476 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/syscall_zos_s390x.go +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/syscall_zos_s390x.go @@ -139,8 +139,7 @@ func anyToSockaddr(_ int, rsa *RawSockaddrAny) (Sockaddr, error) { for n < int(pp.Len) && pp.Path[n] != 0 { n++ } - bytes := (*[len(pp.Path)]byte)(unsafe.Pointer(&pp.Path[0]))[0:n] - sa.Name = string(bytes) + sa.Name = string(unsafe.Slice((*byte)(unsafe.Pointer(&pp.Path[0])), n)) return sa, nil case AF_INET: @@ -214,6 +213,7 @@ func (cmsg *Cmsghdr) SetLen(length int) { //sys mmap(addr uintptr, length uintptr, prot int, flag int, fd int, pos int64) (ret uintptr, err error) = SYS_MMAP //sys munmap(addr uintptr, length uintptr) (err error) = SYS_MUNMAP //sys ioctl(fd int, req uint, arg uintptr) (err error) = SYS_IOCTL +//sys ioctlPtr(fd int, req uint, arg unsafe.Pointer) (err error) = SYS_IOCTL //sys Access(path string, mode uint32) (err error) = SYS___ACCESS_A //sys Chdir(path string) (err error) = SYS___CHDIR_A diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/timestruct.go b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/timestruct.go index 3d893040553b..616b1b284858 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/timestruct.go +++ 
b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/timestruct.go @@ -9,7 +9,7 @@ package unix import "time" -// TimespecToNSec returns the time stored in ts as nanoseconds. +// TimespecToNsec returns the time stored in ts as nanoseconds. func TimespecToNsec(ts Timespec) int64 { return ts.Nano() } // NsecToTimespec converts a number of nanoseconds into a Timespec. diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/xattr_bsd.go b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/xattr_bsd.go index 663b3779de2d..f5f8e9f3665e 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/xattr_bsd.go +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/xattr_bsd.go @@ -36,9 +36,14 @@ func xattrnamespace(fullattr string) (ns int, attr string, err error) { func initxattrdest(dest []byte, idx int) (d unsafe.Pointer) { if len(dest) > idx { return unsafe.Pointer(&dest[idx]) - } else { - return unsafe.Pointer(_zero) } + if dest != nil { + // extattr_get_file and extattr_list_file treat NULL differently from + // a non-NULL pointer of length zero. Preserve the property of nilness, + // even if we can't use dest directly. 
+ return unsafe.Pointer(&_zero) + } + return nil } // FreeBSD and NetBSD implement their own syscalls to handle extended attributes diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux.go b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux.go index 785d693eb328..398c37e52d6b 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux.go +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux.go @@ -70,6 +70,7 @@ const ( ALG_SET_DRBG_ENTROPY = 0x6 ALG_SET_IV = 0x2 ALG_SET_KEY = 0x1 + ALG_SET_KEY_BY_KEY_SERIAL = 0x7 ALG_SET_OP = 0x3 ANON_INODE_FS_MAGIC = 0x9041934 ARPHRD_6LOWPAN = 0x339 @@ -457,7 +458,6 @@ const ( B600 = 0x8 B75 = 0x2 B9600 = 0xd - BALLOON_KVM_MAGIC = 0x13661366 BDEVFS_MAGIC = 0x62646576 BINDERFS_SUPER_MAGIC = 0x6c6f6f70 BINFMTFS_MAGIC = 0x42494e4d @@ -563,6 +563,7 @@ const ( BUS_USB = 0x3 BUS_VIRTUAL = 0x6 CAN_BCM = 0x2 + CAN_BUS_OFF_THRESHOLD = 0x100 CAN_CTRLMODE_3_SAMPLES = 0x4 CAN_CTRLMODE_BERR_REPORTING = 0x10 CAN_CTRLMODE_CC_LEN8_DLC = 0x100 @@ -577,9 +578,12 @@ const ( CAN_EFF_FLAG = 0x80000000 CAN_EFF_ID_BITS = 0x1d CAN_EFF_MASK = 0x1fffffff + CAN_ERROR_PASSIVE_THRESHOLD = 0x80 + CAN_ERROR_WARNING_THRESHOLD = 0x60 CAN_ERR_ACK = 0x20 CAN_ERR_BUSERROR = 0x80 CAN_ERR_BUSOFF = 0x40 + CAN_ERR_CNT = 0x200 CAN_ERR_CRTL = 0x4 CAN_ERR_CRTL_ACTIVE = 0x40 CAN_ERR_CRTL_RX_OVERFLOW = 0x1 @@ -771,6 +775,8 @@ const ( DEVLINK_GENL_MCGRP_CONFIG_NAME = "config" DEVLINK_GENL_NAME = "devlink" DEVLINK_GENL_VERSION = 0x1 + DEVLINK_PORT_FN_CAP_MIGRATABLE = 0x2 + DEVLINK_PORT_FN_CAP_ROCE = 0x1 DEVLINK_SB_THRESHOLD_TO_ALPHA_MAX = 0x14 DEVLINK_SUPPORTED_FLASH_OVERWRITE_SECTIONS = 0x3 DEVMEM_MAGIC = 0x454d444d @@ -820,9 +826,9 @@ const ( DM_UUID_FLAG = 0x4000 DM_UUID_LEN = 0x81 DM_VERSION = 0xc138fd00 - DM_VERSION_EXTRA = "-ioctl (2022-02-22)" + DM_VERSION_EXTRA = "-ioctl (2022-07-28)" DM_VERSION_MAJOR = 0x4 - DM_VERSION_MINOR = 0x2e + DM_VERSION_MINOR = 0x2f DM_VERSION_PATCHLEVEL = 0x0 
DT_BLK = 0x6 DT_CHR = 0x2 @@ -1049,6 +1055,7 @@ const ( ETH_P_CAIF = 0xf7 ETH_P_CAN = 0xc ETH_P_CANFD = 0xd + ETH_P_CANXL = 0xe ETH_P_CFM = 0x8902 ETH_P_CONTROL = 0x16 ETH_P_CUST = 0x6006 @@ -1060,6 +1067,7 @@ const ( ETH_P_DNA_RT = 0x6003 ETH_P_DSA = 0x1b ETH_P_DSA_8021Q = 0xdadb + ETH_P_DSA_A5PSW = 0xe001 ETH_P_ECONET = 0x18 ETH_P_EDSA = 0xdada ETH_P_ERSPAN = 0x88be @@ -1194,8 +1202,10 @@ const ( FAN_MARK_EVICTABLE = 0x200 FAN_MARK_FILESYSTEM = 0x100 FAN_MARK_FLUSH = 0x80 + FAN_MARK_IGNORE = 0x400 FAN_MARK_IGNORED_MASK = 0x20 FAN_MARK_IGNORED_SURV_MODIFY = 0x40 + FAN_MARK_IGNORE_SURV = 0x440 FAN_MARK_INODE = 0x0 FAN_MARK_MOUNT = 0x10 FAN_MARK_ONLYDIR = 0x8 @@ -1253,7 +1263,10 @@ const ( FSCRYPT_MODE_AES_128_CBC = 0x5 FSCRYPT_MODE_AES_128_CTS = 0x6 FSCRYPT_MODE_AES_256_CTS = 0x4 + FSCRYPT_MODE_AES_256_HCTR2 = 0xa FSCRYPT_MODE_AES_256_XTS = 0x1 + FSCRYPT_MODE_SM4_CTS = 0x8 + FSCRYPT_MODE_SM4_XTS = 0x7 FSCRYPT_POLICY_FLAGS_PAD_16 = 0x2 FSCRYPT_POLICY_FLAGS_PAD_32 = 0x3 FSCRYPT_POLICY_FLAGS_PAD_4 = 0x0 @@ -1272,8 +1285,6 @@ const ( FS_ENCRYPTION_MODE_AES_256_GCM = 0x2 FS_ENCRYPTION_MODE_AES_256_XTS = 0x1 FS_ENCRYPTION_MODE_INVALID = 0x0 - FS_ENCRYPTION_MODE_SPECK128_256_CTS = 0x8 - FS_ENCRYPTION_MODE_SPECK128_256_XTS = 0x7 FS_IOC_ADD_ENCRYPTION_KEY = 0xc0506617 FS_IOC_GET_ENCRYPTION_KEY_STATUS = 0xc080661a FS_IOC_GET_ENCRYPTION_POLICY_EX = 0xc0096616 @@ -1430,6 +1441,7 @@ const ( IFF_NOARP = 0x80 IFF_NOFILTER = 0x1000 IFF_NOTRAILERS = 0x20 + IFF_NO_CARRIER = 0x40 IFF_NO_PI = 0x1000 IFF_ONE_QUEUE = 0x2000 IFF_PERSIST = 0x800 @@ -1761,6 +1773,7 @@ const ( LANDLOCK_ACCESS_FS_REFER = 0x2000 LANDLOCK_ACCESS_FS_REMOVE_DIR = 0x10 LANDLOCK_ACCESS_FS_REMOVE_FILE = 0x20 + LANDLOCK_ACCESS_FS_TRUNCATE = 0x4000 LANDLOCK_ACCESS_FS_WRITE_FILE = 0x2 LANDLOCK_CREATE_RULESET_VERSION = 0x1 LINUX_REBOOT_CMD_CAD_OFF = 0x0 @@ -1800,11 +1813,13 @@ const ( LWTUNNEL_IP_OPT_GENEVE_MAX = 0x3 LWTUNNEL_IP_OPT_VXLAN_MAX = 0x1 MADV_COLD = 0x14 + MADV_COLLAPSE = 0x19 MADV_DODUMP = 0x11 MADV_DOFORK 
= 0xb MADV_DONTDUMP = 0x10 MADV_DONTFORK = 0xa MADV_DONTNEED = 0x4 + MADV_DONTNEED_LOCKED = 0x18 MADV_FREE = 0x8 MADV_HUGEPAGE = 0xe MADV_HWPOISON = 0x64 @@ -1846,7 +1861,7 @@ const ( MFD_ALLOW_SEALING = 0x2 MFD_CLOEXEC = 0x1 MFD_HUGETLB = 0x4 - MFD_HUGE_16GB = -0x78000000 + MFD_HUGE_16GB = 0x88000000 MFD_HUGE_16MB = 0x60000000 MFD_HUGE_1GB = 0x78000000 MFD_HUGE_1MB = 0x50000000 @@ -2153,6 +2168,7 @@ const ( PACKET_FANOUT_DATA = 0x16 PACKET_FANOUT_EBPF = 0x7 PACKET_FANOUT_FLAG_DEFRAG = 0x8000 + PACKET_FANOUT_FLAG_IGNORE_OUTGOING = 0x4000 PACKET_FANOUT_FLAG_ROLLOVER = 0x1000 PACKET_FANOUT_FLAG_UNIQUEID = 0x2000 PACKET_FANOUT_HASH = 0x0 @@ -2212,6 +2228,11 @@ const ( PERF_AUX_FLAG_PARTIAL = 0x4 PERF_AUX_FLAG_PMU_FORMAT_TYPE_MASK = 0xff00 PERF_AUX_FLAG_TRUNCATED = 0x1 + PERF_BR_ARM64_DEBUG_DATA = 0x7 + PERF_BR_ARM64_DEBUG_EXIT = 0x5 + PERF_BR_ARM64_DEBUG_HALT = 0x4 + PERF_BR_ARM64_DEBUG_INST = 0x6 + PERF_BR_ARM64_FIQ = 0x3 PERF_FLAG_FD_CLOEXEC = 0x8 PERF_FLAG_FD_NO_GROUP = 0x1 PERF_FLAG_FD_OUTPUT = 0x2 @@ -2232,6 +2253,8 @@ const ( PERF_MEM_LOCK_NA = 0x1 PERF_MEM_LOCK_SHIFT = 0x18 PERF_MEM_LVLNUM_ANY_CACHE = 0xb + PERF_MEM_LVLNUM_CXL = 0x9 + PERF_MEM_LVLNUM_IO = 0xa PERF_MEM_LVLNUM_L1 = 0x1 PERF_MEM_LVLNUM_L2 = 0x2 PERF_MEM_LVLNUM_L3 = 0x3 @@ -2265,6 +2288,7 @@ const ( PERF_MEM_REMOTE_REMOTE = 0x1 PERF_MEM_REMOTE_SHIFT = 0x25 PERF_MEM_SNOOPX_FWD = 0x1 + PERF_MEM_SNOOPX_PEER = 0x2 PERF_MEM_SNOOPX_SHIFT = 0x26 PERF_MEM_SNOOP_HIT = 0x4 PERF_MEM_SNOOP_HITM = 0x10 @@ -2301,7 +2325,6 @@ const ( PERF_SAMPLE_BRANCH_PLM_ALL = 0x7 PERF_SAMPLE_WEIGHT_TYPE = 0x1004000 PIPEFS_MAGIC = 0x50495045 - PPC_CMM_MAGIC = 0xc7571590 PPPIOCGNPMODE = 0xc008744c PPPIOCNEWUNIT = 0xc004743e PRIO_PGRP = 0x1 @@ -2999,6 +3022,7 @@ const ( STATX_BLOCKS = 0x400 STATX_BTIME = 0x800 STATX_CTIME = 0x80 + STATX_DIOALIGN = 0x2000 STATX_GID = 0x10 STATX_INO = 0x100 STATX_MNT_ID = 0x1000 @@ -3392,9 +3416,7 @@ const ( XDP_ZEROCOPY = 0x4 XENFS_SUPER_MAGIC = 0xabba1974 XFS_SUPER_MAGIC = 0x58465342 - 
Z3FOLD_MAGIC = 0x33 ZONEFS_MAGIC = 0x5a4f4653 - ZSMALLOC_MAGIC = 0x58295829 _HIDIOCGRAWNAME_LEN = 0x80 _HIDIOCGRAWPHYS_LEN = 0x40 _HIDIOCGRAWUNIQ_LEN = 0x40 diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_386.go b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_386.go index 36c0dfc7c4cf..a46df0f1e57a 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_386.go +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_386.go @@ -133,6 +133,7 @@ const ( MEMGETREGIONCOUNT = 0x80044d07 MEMISLOCKED = 0x80084d17 MEMLOCK = 0x40084d05 + MEMREAD = 0xc03c4d1a MEMREADOOB = 0xc00c4d04 MEMSETBADBLOCK = 0x40084d0c MEMUNLOCK = 0x40084d06 diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_amd64.go b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_amd64.go index 4ff942703b7b..6cd4a3ea9d33 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_amd64.go +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_amd64.go @@ -133,6 +133,7 @@ const ( MEMGETREGIONCOUNT = 0x80044d07 MEMISLOCKED = 0x80084d17 MEMLOCK = 0x40084d05 + MEMREAD = 0xc0404d1a MEMREADOOB = 0xc0104d04 MEMSETBADBLOCK = 0x40084d0c MEMUNLOCK = 0x40084d06 diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_arm.go b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_arm.go index 3eaa0fb78e30..c7ebee24df3f 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_arm.go +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_arm.go @@ -131,6 +131,7 @@ const ( MEMGETREGIONCOUNT = 0x80044d07 MEMISLOCKED = 0x80084d17 MEMLOCK = 0x40084d05 + MEMREAD = 0xc0404d1a MEMREADOOB = 0xc00c4d04 MEMSETBADBLOCK = 0x40084d0c MEMUNLOCK = 0x40084d06 diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_arm64.go 
b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_arm64.go index d7995bdc3a21..9d5352c3e45e 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_arm64.go +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_arm64.go @@ -134,6 +134,7 @@ const ( MEMGETREGIONCOUNT = 0x80044d07 MEMISLOCKED = 0x80084d17 MEMLOCK = 0x40084d05 + MEMREAD = 0xc0404d1a MEMREADOOB = 0xc0104d04 MEMSETBADBLOCK = 0x40084d0c MEMUNLOCK = 0x40084d06 diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_loong64.go b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_loong64.go index 928e24c20535..f26a164f4aab 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_loong64.go +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_loong64.go @@ -132,6 +132,7 @@ const ( MEMGETREGIONCOUNT = 0x80044d07 MEMISLOCKED = 0x80084d17 MEMLOCK = 0x40084d05 + MEMREAD = 0xc0404d1a MEMREADOOB = 0xc0104d04 MEMSETBADBLOCK = 0x40084d0c MEMUNLOCK = 0x40084d06 diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_mips.go b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_mips.go index 179bffb474b4..890bc3c9b706 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_mips.go +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_mips.go @@ -131,6 +131,7 @@ const ( MEMGETREGIONCOUNT = 0x40044d07 MEMISLOCKED = 0x40084d17 MEMLOCK = 0x80084d05 + MEMREAD = 0xc0404d1a MEMREADOOB = 0xc00c4d04 MEMSETBADBLOCK = 0x80084d0c MEMUNLOCK = 0x80084d06 diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_mips64.go b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_mips64.go index 1fba17bd75cb..549f26ac6466 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_mips64.go +++ 
b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_mips64.go @@ -131,6 +131,7 @@ const ( MEMGETREGIONCOUNT = 0x40044d07 MEMISLOCKED = 0x40084d17 MEMLOCK = 0x80084d05 + MEMREAD = 0xc0404d1a MEMREADOOB = 0xc0104d04 MEMSETBADBLOCK = 0x80084d0c MEMUNLOCK = 0x80084d06 diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_mips64le.go b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_mips64le.go index b77dde31537e..e0365e32c174 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_mips64le.go +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_mips64le.go @@ -131,6 +131,7 @@ const ( MEMGETREGIONCOUNT = 0x40044d07 MEMISLOCKED = 0x40084d17 MEMLOCK = 0x80084d05 + MEMREAD = 0xc0404d1a MEMREADOOB = 0xc0104d04 MEMSETBADBLOCK = 0x80084d0c MEMUNLOCK = 0x80084d06 diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_mipsle.go b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_mipsle.go index 78c6c751bfa5..fdccce15ca20 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_mipsle.go +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_mipsle.go @@ -131,6 +131,7 @@ const ( MEMGETREGIONCOUNT = 0x40044d07 MEMISLOCKED = 0x40084d17 MEMLOCK = 0x80084d05 + MEMREAD = 0xc0404d1a MEMREADOOB = 0xc00c4d04 MEMSETBADBLOCK = 0x80084d0c MEMUNLOCK = 0x80084d06 diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_ppc.go b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_ppc.go index 1c0d31f0b4c2..b2205c83faa1 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_ppc.go +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_ppc.go @@ -131,6 +131,7 @@ const ( MEMGETREGIONCOUNT = 0x40044d07 MEMISLOCKED = 0x40084d17 MEMLOCK = 0x80084d05 + MEMREAD = 0xc0404d1a MEMREADOOB = 0xc00c4d04 MEMSETBADBLOCK = 0x80084d0c MEMUNLOCK 
= 0x80084d06 diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_ppc64.go b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_ppc64.go index 959dd9bb8fcc..81aa5ad0f695 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_ppc64.go +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_ppc64.go @@ -131,6 +131,7 @@ const ( MEMGETREGIONCOUNT = 0x40044d07 MEMISLOCKED = 0x40084d17 MEMLOCK = 0x80084d05 + MEMREAD = 0xc0404d1a MEMREADOOB = 0xc0104d04 MEMSETBADBLOCK = 0x80084d0c MEMUNLOCK = 0x80084d06 diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_ppc64le.go b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_ppc64le.go index 5a873cdbc9d2..76807a1fd4f7 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_ppc64le.go +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_ppc64le.go @@ -131,6 +131,7 @@ const ( MEMGETREGIONCOUNT = 0x40044d07 MEMISLOCKED = 0x40084d17 MEMLOCK = 0x80084d05 + MEMREAD = 0xc0404d1a MEMREADOOB = 0xc0104d04 MEMSETBADBLOCK = 0x80084d0c MEMUNLOCK = 0x80084d06 diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_riscv64.go b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_riscv64.go index e336d141e1f1..d4a5ab9e4e06 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_riscv64.go +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_riscv64.go @@ -131,6 +131,7 @@ const ( MEMGETREGIONCOUNT = 0x80044d07 MEMISLOCKED = 0x80084d17 MEMLOCK = 0x40084d05 + MEMREAD = 0xc0404d1a MEMREADOOB = 0xc0104d04 MEMSETBADBLOCK = 0x40084d0c MEMUNLOCK = 0x40084d06 diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_s390x.go b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_s390x.go index 390c01d92a53..66e65db95192 100644 --- 
a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_s390x.go +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_s390x.go @@ -131,6 +131,7 @@ const ( MEMGETREGIONCOUNT = 0x80044d07 MEMISLOCKED = 0x80084d17 MEMLOCK = 0x40084d05 + MEMREAD = 0xc0404d1a MEMREADOOB = 0xc0104d04 MEMSETBADBLOCK = 0x40084d0c MEMUNLOCK = 0x40084d06 diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_sparc64.go b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_sparc64.go index 98a6e5f11f50..f619252691e2 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_sparc64.go +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_sparc64.go @@ -136,6 +136,7 @@ const ( MEMGETREGIONCOUNT = 0x40044d07 MEMISLOCKED = 0x40084d17 MEMLOCK = 0x80084d05 + MEMREAD = 0xc0404d1a MEMREADOOB = 0xc0104d04 MEMSETBADBLOCK = 0x80084d0c MEMUNLOCK = 0x80084d06 diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zerrors_openbsd_386.go b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zerrors_openbsd_386.go index 6d56edc05ac3..af20e474b388 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zerrors_openbsd_386.go +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zerrors_openbsd_386.go @@ -46,6 +46,7 @@ const ( AF_SNA = 0xb AF_UNIX = 0x1 AF_UNSPEC = 0x0 + ALTWERASE = 0x200 ARPHRD_ETHER = 0x1 ARPHRD_FRELAY = 0xf ARPHRD_IEEE1394 = 0x18 @@ -108,6 +109,15 @@ const ( BPF_DIRECTION_IN = 0x1 BPF_DIRECTION_OUT = 0x2 BPF_DIV = 0x30 + BPF_FILDROP_CAPTURE = 0x1 + BPF_FILDROP_DROP = 0x2 + BPF_FILDROP_PASS = 0x0 + BPF_F_DIR_IN = 0x10 + BPF_F_DIR_MASK = 0x30 + BPF_F_DIR_OUT = 0x20 + BPF_F_DIR_SHIFT = 0x4 + BPF_F_FLOWID = 0x8 + BPF_F_PRI_MASK = 0x7 BPF_H = 0x8 BPF_IMM = 0x0 BPF_IND = 0x40 @@ -136,6 +146,7 @@ const ( BPF_OR = 0x40 BPF_RELEASE = 0x30bb6 BPF_RET = 0x6 + BPF_RND = 0xc0 BPF_RSH = 0x70 BPF_ST = 0x2 BPF_STX = 0x3 @@ -147,6 +158,12 @@ const ( BRKINT = 0x2 CFLUSH 
= 0xf CLOCAL = 0x8000 + CLOCK_BOOTTIME = 0x6 + CLOCK_MONOTONIC = 0x3 + CLOCK_PROCESS_CPUTIME_ID = 0x2 + CLOCK_REALTIME = 0x0 + CLOCK_THREAD_CPUTIME_ID = 0x4 + CLOCK_UPTIME = 0x5 CPUSTATES = 0x6 CP_IDLE = 0x5 CP_INTR = 0x4 @@ -170,7 +187,65 @@ const ( CTL_KERN = 0x1 CTL_MAXNAME = 0xc CTL_NET = 0x4 + DIOCADDQUEUE = 0xc100445d + DIOCADDRULE = 0xccc84404 + DIOCADDSTATE = 0xc1084425 + DIOCCHANGERULE = 0xccc8441a + DIOCCLRIFFLAG = 0xc024445a + DIOCCLRSRCNODES = 0x20004455 + DIOCCLRSTATES = 0xc0d04412 + DIOCCLRSTATUS = 0xc0244416 + DIOCGETLIMIT = 0xc0084427 + DIOCGETQSTATS = 0xc1084460 + DIOCGETQUEUE = 0xc100445f + DIOCGETQUEUES = 0xc100445e + DIOCGETRULE = 0xccc84407 + DIOCGETRULES = 0xccc84406 + DIOCGETRULESET = 0xc444443b + DIOCGETRULESETS = 0xc444443a + DIOCGETSRCNODES = 0xc0084454 + DIOCGETSTATE = 0xc1084413 + DIOCGETSTATES = 0xc0084419 + DIOCGETSTATUS = 0xc1e84415 + DIOCGETSYNFLWATS = 0xc0084463 + DIOCGETTIMEOUT = 0xc008441e + DIOCIGETIFACES = 0xc0244457 + DIOCKILLSRCNODES = 0xc068445b + DIOCKILLSTATES = 0xc0d04429 + DIOCNATLOOK = 0xc0504417 + DIOCOSFPADD = 0xc084444f DIOCOSFPFLUSH = 0x2000444e + DIOCOSFPGET = 0xc0844450 + DIOCRADDADDRS = 0xc44c4443 + DIOCRADDTABLES = 0xc44c443d + DIOCRCLRADDRS = 0xc44c4442 + DIOCRCLRASTATS = 0xc44c4448 + DIOCRCLRTABLES = 0xc44c443c + DIOCRCLRTSTATS = 0xc44c4441 + DIOCRDELADDRS = 0xc44c4444 + DIOCRDELTABLES = 0xc44c443e + DIOCRGETADDRS = 0xc44c4446 + DIOCRGETASTATS = 0xc44c4447 + DIOCRGETTABLES = 0xc44c443f + DIOCRGETTSTATS = 0xc44c4440 + DIOCRINADEFINE = 0xc44c444d + DIOCRSETADDRS = 0xc44c4445 + DIOCRSETTFLAGS = 0xc44c444a + DIOCRTSTADDRS = 0xc44c4449 + DIOCSETDEBUG = 0xc0044418 + DIOCSETHOSTID = 0xc0044456 + DIOCSETIFFLAG = 0xc0244459 + DIOCSETLIMIT = 0xc0084428 + DIOCSETREASS = 0xc004445c + DIOCSETSTATUSIF = 0xc0244414 + DIOCSETSYNCOOKIES = 0xc0014462 + DIOCSETSYNFLWATS = 0xc0084461 + DIOCSETTIMEOUT = 0xc008441d + DIOCSTART = 0x20004401 + DIOCSTOP = 0x20004402 + DIOCXBEGIN = 0xc00c4451 + DIOCXCOMMIT = 0xc00c4452 + DIOCXROLLBACK = 
0xc00c4453 DLT_ARCNET = 0x7 DLT_ATM_RFC1483 = 0xb DLT_AX25 = 0x3 @@ -186,6 +261,7 @@ const ( DLT_LOOP = 0xc DLT_MPLS = 0xdb DLT_NULL = 0x0 + DLT_OPENFLOW = 0x10b DLT_PFLOG = 0x75 DLT_PFSYNC = 0x12 DLT_PPP = 0x9 @@ -196,6 +272,23 @@ const ( DLT_RAW = 0xe DLT_SLIP = 0x8 DLT_SLIP_BSDOS = 0xf + DLT_USBPCAP = 0xf9 + DLT_USER0 = 0x93 + DLT_USER1 = 0x94 + DLT_USER10 = 0x9d + DLT_USER11 = 0x9e + DLT_USER12 = 0x9f + DLT_USER13 = 0xa0 + DLT_USER14 = 0xa1 + DLT_USER15 = 0xa2 + DLT_USER2 = 0x95 + DLT_USER3 = 0x96 + DLT_USER4 = 0x97 + DLT_USER5 = 0x98 + DLT_USER6 = 0x99 + DLT_USER7 = 0x9a + DLT_USER8 = 0x9b + DLT_USER9 = 0x9c DT_BLK = 0x6 DT_CHR = 0x2 DT_DIR = 0x4 @@ -215,6 +308,8 @@ const ( EMUL_ENABLED = 0x1 EMUL_NATIVE = 0x2 ENDRUNDISC = 0x9 + ETH64_8021_RSVD_MASK = 0xfffffffffff0 + ETH64_8021_RSVD_PREFIX = 0x180c2000000 ETHERMIN = 0x2e ETHERMTU = 0x5dc ETHERTYPE_8023 = 0x4 @@ -267,6 +362,7 @@ const ( ETHERTYPE_DN = 0x6003 ETHERTYPE_DOGFIGHT = 0x1989 ETHERTYPE_DSMD = 0x8039 + ETHERTYPE_EAPOL = 0x888e ETHERTYPE_ECMA = 0x803 ETHERTYPE_ENCRYPT = 0x803d ETHERTYPE_ES = 0x805d @@ -298,6 +394,7 @@ const ( ETHERTYPE_LLDP = 0x88cc ETHERTYPE_LOGICRAFT = 0x8148 ETHERTYPE_LOOPBACK = 0x9000 + ETHERTYPE_MACSEC = 0x88e5 ETHERTYPE_MATRA = 0x807a ETHERTYPE_MAX = 0xffff ETHERTYPE_MERIT = 0x807c @@ -326,15 +423,17 @@ const ( ETHERTYPE_NCD = 0x8149 ETHERTYPE_NESTAR = 0x8006 ETHERTYPE_NETBEUI = 0x8191 + ETHERTYPE_NHRP = 0x2001 ETHERTYPE_NOVELL = 0x8138 ETHERTYPE_NS = 0x600 ETHERTYPE_NSAT = 0x601 ETHERTYPE_NSCOMPAT = 0x807 + ETHERTYPE_NSH = 0x984f ETHERTYPE_NTRAILER = 0x10 ETHERTYPE_OS9 = 0x7007 ETHERTYPE_OS9NET = 0x7009 ETHERTYPE_PACER = 0x80c6 - ETHERTYPE_PAE = 0x888e + ETHERTYPE_PBB = 0x88e7 ETHERTYPE_PCS = 0x4242 ETHERTYPE_PLANNING = 0x8044 ETHERTYPE_PPP = 0x880b @@ -409,28 +508,40 @@ const ( ETHER_CRC_POLY_LE = 0xedb88320 ETHER_HDR_LEN = 0xe ETHER_MAX_DIX_LEN = 0x600 + ETHER_MAX_HARDMTU_LEN = 0xff9b ETHER_MAX_LEN = 0x5ee ETHER_MIN_LEN = 0x40 ETHER_TYPE_LEN = 0x2 ETHER_VLAN_ENCAP_LEN = 0x4 
EVFILT_AIO = -0x3 + EVFILT_DEVICE = -0x8 + EVFILT_EXCEPT = -0x9 EVFILT_PROC = -0x5 EVFILT_READ = -0x1 EVFILT_SIGNAL = -0x6 - EVFILT_SYSCOUNT = 0x7 + EVFILT_SYSCOUNT = 0x9 EVFILT_TIMER = -0x7 EVFILT_VNODE = -0x4 EVFILT_WRITE = -0x2 + EVL_ENCAPLEN = 0x4 + EVL_PRIO_BITS = 0xd + EVL_PRIO_MAX = 0x7 + EVL_VLID_MASK = 0xfff + EVL_VLID_MAX = 0xffe + EVL_VLID_MIN = 0x1 + EVL_VLID_NULL = 0x0 EV_ADD = 0x1 EV_CLEAR = 0x20 EV_DELETE = 0x2 EV_DISABLE = 0x8 + EV_DISPATCH = 0x80 EV_ENABLE = 0x4 EV_EOF = 0x8000 EV_ERROR = 0x4000 EV_FLAG1 = 0x2000 EV_ONESHOT = 0x10 - EV_SYSFLAGS = 0xf000 + EV_RECEIPT = 0x40 + EV_SYSFLAGS = 0xf800 EXTA = 0x4b00 EXTB = 0x9600 EXTPROC = 0x800 @@ -443,6 +554,7 @@ const ( F_GETFL = 0x3 F_GETLK = 0x7 F_GETOWN = 0x5 + F_ISATTY = 0xb F_OK = 0x0 F_RDLCK = 0x1 F_SETFD = 0x2 @@ -460,7 +572,6 @@ const ( IEXTEN = 0x400 IFAN_ARRIVAL = 0x0 IFAN_DEPARTURE = 0x1 - IFA_ROUTE = 0x1 IFF_ALLMULTI = 0x200 IFF_BROADCAST = 0x2 IFF_CANTCHANGE = 0x8e52 @@ -471,12 +582,12 @@ const ( IFF_LOOPBACK = 0x8 IFF_MULTICAST = 0x8000 IFF_NOARP = 0x80 - IFF_NOTRAILERS = 0x20 IFF_OACTIVE = 0x400 IFF_POINTOPOINT = 0x10 IFF_PROMISC = 0x100 IFF_RUNNING = 0x40 IFF_SIMPLEX = 0x800 + IFF_STATICARP = 0x20 IFF_UP = 0x1 IFNAMSIZ = 0x10 IFT_1822 = 0x2 @@ -605,6 +716,7 @@ const ( IFT_LINEGROUP = 0xd2 IFT_LOCALTALK = 0x2a IFT_LOOP = 0x18 + IFT_MBIM = 0xfa IFT_MEDIAMAILOVERIP = 0x8b IFT_MFSIGLINK = 0xa7 IFT_MIOX25 = 0x26 @@ -695,6 +807,7 @@ const ( IFT_VOICEOVERCABLE = 0xc6 IFT_VOICEOVERFRAMERELAY = 0x99 IFT_VOICEOVERIP = 0x68 + IFT_WIREGUARD = 0xfb IFT_X213 = 0x5d IFT_X25 = 0x5 IFT_X25DDN = 0x4 @@ -729,8 +842,6 @@ const ( IPPROTO_AH = 0x33 IPPROTO_CARP = 0x70 IPPROTO_DIVERT = 0x102 - IPPROTO_DIVERT_INIT = 0x2 - IPPROTO_DIVERT_RESP = 0x1 IPPROTO_DONE = 0x101 IPPROTO_DSTOPTS = 0x3c IPPROTO_EGP = 0x8 @@ -762,9 +873,11 @@ const ( IPPROTO_RAW = 0xff IPPROTO_ROUTING = 0x2b IPPROTO_RSVP = 0x2e + IPPROTO_SCTP = 0x84 IPPROTO_TCP = 0x6 IPPROTO_TP = 0x1d IPPROTO_UDP = 0x11 + IPPROTO_UDPLITE = 0x88 
IPV6_AUTH_LEVEL = 0x35 IPV6_AUTOFLOWLABEL = 0x3b IPV6_CHECKSUM = 0x1a @@ -787,6 +900,7 @@ const ( IPV6_LEAVE_GROUP = 0xd IPV6_MAXHLIM = 0xff IPV6_MAXPACKET = 0xffff + IPV6_MINHOPCOUNT = 0x41 IPV6_MMTU = 0x500 IPV6_MULTICAST_HOPS = 0xa IPV6_MULTICAST_IF = 0x9 @@ -826,12 +940,12 @@ const ( IP_DEFAULT_MULTICAST_LOOP = 0x1 IP_DEFAULT_MULTICAST_TTL = 0x1 IP_DF = 0x4000 - IP_DIVERTFL = 0x1022 IP_DROP_MEMBERSHIP = 0xd IP_ESP_NETWORK_LEVEL = 0x16 IP_ESP_TRANS_LEVEL = 0x15 IP_HDRINCL = 0x2 IP_IPCOMP_LEVEL = 0x1d + IP_IPDEFTTL = 0x25 IP_IPSECFLOWINFO = 0x24 IP_IPSEC_LOCAL_AUTH = 0x1b IP_IPSEC_LOCAL_CRED = 0x19 @@ -865,10 +979,15 @@ const ( IP_RETOPTS = 0x8 IP_RF = 0x8000 IP_RTABLE = 0x1021 + IP_SENDSRCADDR = 0x7 IP_TOS = 0x3 IP_TTL = 0x4 ISIG = 0x80 ISTRIP = 0x20 + ITIMER_PROF = 0x2 + ITIMER_REAL = 0x0 + ITIMER_VIRTUAL = 0x1 + IUCLC = 0x1000 IXANY = 0x800 IXOFF = 0x400 IXON = 0x200 @@ -900,10 +1019,11 @@ const ( MAP_INHERIT_COPY = 0x1 MAP_INHERIT_NONE = 0x2 MAP_INHERIT_SHARE = 0x0 - MAP_NOEXTEND = 0x100 - MAP_NORESERVE = 0x40 + MAP_INHERIT_ZERO = 0x3 + MAP_NOEXTEND = 0x0 + MAP_NORESERVE = 0x0 MAP_PRIVATE = 0x2 - MAP_RENAME = 0x20 + MAP_RENAME = 0x0 MAP_SHARED = 0x1 MAP_STACK = 0x4000 MAP_TRYFIXED = 0x0 @@ -922,6 +1042,7 @@ const ( MNT_NOATIME = 0x8000 MNT_NODEV = 0x10 MNT_NOEXEC = 0x4 + MNT_NOPERM = 0x20 MNT_NOSUID = 0x8 MNT_NOWAIT = 0x2 MNT_QUOTA = 0x2000 @@ -929,13 +1050,29 @@ const ( MNT_RELOAD = 0x40000 MNT_ROOTFS = 0x4000 MNT_SOFTDEP = 0x4000000 + MNT_STALLED = 0x100000 + MNT_SWAPPABLE = 0x200000 MNT_SYNCHRONOUS = 0x2 MNT_UPDATE = 0x10000 MNT_VISFLAGMASK = 0x400ffff MNT_WAIT = 0x1 MNT_WANTRDWR = 0x2000000 MNT_WXALLOWED = 0x800 + MOUNT_AFS = "afs" + MOUNT_CD9660 = "cd9660" + MOUNT_EXT2FS = "ext2fs" + MOUNT_FFS = "ffs" + MOUNT_FUSEFS = "fuse" + MOUNT_MFS = "mfs" + MOUNT_MSDOS = "msdos" + MOUNT_NCPFS = "ncpfs" + MOUNT_NFS = "nfs" + MOUNT_NTFS = "ntfs" + MOUNT_TMPFS = "tmpfs" + MOUNT_UDF = "udf" + MOUNT_UFS = "ffs" MSG_BCAST = 0x100 + MSG_CMSG_CLOEXEC = 0x800 MSG_CTRUNC = 
0x20 MSG_DONTROUTE = 0x4 MSG_DONTWAIT = 0x80 @@ -946,6 +1083,7 @@ const ( MSG_PEEK = 0x2 MSG_TRUNC = 0x10 MSG_WAITALL = 0x40 + MSG_WAITFORONE = 0x1000 MS_ASYNC = 0x1 MS_INVALIDATE = 0x4 MS_SYNC = 0x2 @@ -953,12 +1091,16 @@ const ( NET_RT_DUMP = 0x1 NET_RT_FLAGS = 0x2 NET_RT_IFLIST = 0x3 - NET_RT_MAXID = 0x6 + NET_RT_IFNAMES = 0x6 + NET_RT_MAXID = 0x8 + NET_RT_SOURCE = 0x7 NET_RT_STATS = 0x4 NET_RT_TABLE = 0x5 NFDBITS = 0x20 NOFLSH = 0x80000000 + NOKERNINFO = 0x2000000 NOTE_ATTRIB = 0x8 + NOTE_CHANGE = 0x1 NOTE_CHILD = 0x4 NOTE_DELETE = 0x1 NOTE_EOF = 0x2 @@ -968,6 +1110,7 @@ const ( NOTE_FORK = 0x40000000 NOTE_LINK = 0x10 NOTE_LOWAT = 0x1 + NOTE_OOB = 0x4 NOTE_PCTRLMASK = 0xf0000000 NOTE_PDATAMASK = 0xfffff NOTE_RENAME = 0x20 @@ -977,11 +1120,13 @@ const ( NOTE_TRUNCATE = 0x80 NOTE_WRITE = 0x2 OCRNL = 0x10 + OLCUC = 0x20 ONLCR = 0x2 ONLRET = 0x80 ONOCR = 0x40 ONOEOT = 0x8 OPOST = 0x1 + OXTABS = 0x4 O_ACCMODE = 0x3 O_APPEND = 0x8 O_ASYNC = 0x40 @@ -1015,7 +1160,6 @@ const ( PROT_NONE = 0x0 PROT_READ = 0x1 PROT_WRITE = 0x2 - PT_MASK = 0x3ff000 RLIMIT_CORE = 0x4 RLIMIT_CPU = 0x0 RLIMIT_DATA = 0x2 @@ -1027,19 +1171,25 @@ const ( RLIMIT_STACK = 0x3 RLIM_INFINITY = 0x7fffffffffffffff RTAX_AUTHOR = 0x6 + RTAX_BFD = 0xb RTAX_BRD = 0x7 + RTAX_DNS = 0xc RTAX_DST = 0x0 RTAX_GATEWAY = 0x1 RTAX_GENMASK = 0x3 RTAX_IFA = 0x5 RTAX_IFP = 0x4 RTAX_LABEL = 0xa - RTAX_MAX = 0xb + RTAX_MAX = 0xf RTAX_NETMASK = 0x2 + RTAX_SEARCH = 0xe RTAX_SRC = 0x8 RTAX_SRCMASK = 0x9 + RTAX_STATIC = 0xd RTA_AUTHOR = 0x40 + RTA_BFD = 0x800 RTA_BRD = 0x80 + RTA_DNS = 0x1000 RTA_DST = 0x1 RTA_GATEWAY = 0x2 RTA_GENMASK = 0x8 @@ -1047,49 +1197,57 @@ const ( RTA_IFP = 0x10 RTA_LABEL = 0x400 RTA_NETMASK = 0x4 + RTA_SEARCH = 0x4000 RTA_SRC = 0x100 RTA_SRCMASK = 0x200 + RTA_STATIC = 0x2000 RTF_ANNOUNCE = 0x4000 + RTF_BFD = 0x1000000 RTF_BLACKHOLE = 0x1000 + RTF_BROADCAST = 0x400000 + RTF_CACHED = 0x20000 RTF_CLONED = 0x10000 RTF_CLONING = 0x100 + RTF_CONNECTED = 0x800000 RTF_DONE = 0x40 RTF_DYNAMIC = 0x10 - 
RTF_FMASK = 0x10f808 + RTF_FMASK = 0x110fc08 RTF_GATEWAY = 0x2 RTF_HOST = 0x4 RTF_LLINFO = 0x400 - RTF_MASK = 0x80 + RTF_LOCAL = 0x200000 RTF_MODIFIED = 0x20 RTF_MPATH = 0x40000 RTF_MPLS = 0x100000 + RTF_MULTICAST = 0x200 RTF_PERMANENT_ARP = 0x2000 RTF_PROTO1 = 0x8000 RTF_PROTO2 = 0x4000 RTF_PROTO3 = 0x2000 RTF_REJECT = 0x8 - RTF_SOURCE = 0x20000 RTF_STATIC = 0x800 - RTF_TUNNEL = 0x100000 RTF_UP = 0x1 RTF_USETRAILERS = 0x8000 - RTF_XRESOLVE = 0x200 + RTM_80211INFO = 0x15 RTM_ADD = 0x1 + RTM_BFD = 0x12 RTM_CHANGE = 0x3 + RTM_CHGADDRATTR = 0x14 RTM_DELADDR = 0xd RTM_DELETE = 0x2 RTM_DESYNC = 0x10 RTM_GET = 0x4 RTM_IFANNOUNCE = 0xf RTM_IFINFO = 0xe - RTM_LOCK = 0x8 + RTM_INVALIDATE = 0x11 RTM_LOSING = 0x5 RTM_MAXSIZE = 0x800 RTM_MISS = 0x7 RTM_NEWADDR = 0xc + RTM_PROPOSAL = 0x13 RTM_REDIRECT = 0x6 RTM_RESOLVE = 0xb - RTM_RTTUNIT = 0xf4240 + RTM_SOURCE = 0x16 RTM_VERSION = 0x5 RTV_EXPIRE = 0x4 RTV_HOPCOUNT = 0x2 @@ -1099,67 +1257,74 @@ const ( RTV_RTTVAR = 0x80 RTV_SPIPE = 0x10 RTV_SSTHRESH = 0x20 + RT_TABLEID_BITS = 0x8 + RT_TABLEID_MASK = 0xff RT_TABLEID_MAX = 0xff RUSAGE_CHILDREN = -0x1 RUSAGE_SELF = 0x0 RUSAGE_THREAD = 0x1 SCM_RIGHTS = 0x1 SCM_TIMESTAMP = 0x4 + SEEK_CUR = 0x1 + SEEK_END = 0x2 + SEEK_SET = 0x0 SHUT_RD = 0x0 SHUT_RDWR = 0x2 SHUT_WR = 0x1 SIOCADDMULTI = 0x80206931 SIOCAIFADDR = 0x8040691a SIOCAIFGROUP = 0x80246987 - SIOCALIFADDR = 0x8218691c SIOCATMARK = 0x40047307 - SIOCBRDGADD = 0x8054693c - SIOCBRDGADDS = 0x80546941 - SIOCBRDGARL = 0x806e694d + SIOCBRDGADD = 0x805c693c + SIOCBRDGADDL = 0x805c6949 + SIOCBRDGADDS = 0x805c6941 + SIOCBRDGARL = 0x808c694d SIOCBRDGDADDR = 0x81286947 - SIOCBRDGDEL = 0x8054693d - SIOCBRDGDELS = 0x80546942 - SIOCBRDGFLUSH = 0x80546948 - SIOCBRDGFRL = 0x806e694e + SIOCBRDGDEL = 0x805c693d + SIOCBRDGDELS = 0x805c6942 + SIOCBRDGFLUSH = 0x805c6948 + SIOCBRDGFRL = 0x808c694e SIOCBRDGGCACHE = 0xc0146941 SIOCBRDGGFD = 0xc0146952 SIOCBRDGGHT = 0xc0146951 - SIOCBRDGGIFFLGS = 0xc054693e + SIOCBRDGGIFFLGS = 0xc05c693e SIOCBRDGGMA = 
0xc0146953 SIOCBRDGGPARAM = 0xc03c6958 SIOCBRDGGPRI = 0xc0146950 SIOCBRDGGRL = 0xc028694f - SIOCBRDGGSIFS = 0xc054693c SIOCBRDGGTO = 0xc0146946 - SIOCBRDGIFS = 0xc0546942 + SIOCBRDGIFS = 0xc05c6942 SIOCBRDGRTS = 0xc0186943 SIOCBRDGSADDR = 0xc1286944 SIOCBRDGSCACHE = 0x80146940 SIOCBRDGSFD = 0x80146952 SIOCBRDGSHT = 0x80146951 - SIOCBRDGSIFCOST = 0x80546955 - SIOCBRDGSIFFLGS = 0x8054693f - SIOCBRDGSIFPRIO = 0x80546954 + SIOCBRDGSIFCOST = 0x805c6955 + SIOCBRDGSIFFLGS = 0x805c693f + SIOCBRDGSIFPRIO = 0x805c6954 + SIOCBRDGSIFPROT = 0x805c694a SIOCBRDGSMA = 0x80146953 SIOCBRDGSPRI = 0x80146950 SIOCBRDGSPROTO = 0x8014695a SIOCBRDGSTO = 0x80146945 SIOCBRDGSTXHC = 0x80146959 + SIOCDELLABEL = 0x80206997 SIOCDELMULTI = 0x80206932 SIOCDIFADDR = 0x80206919 SIOCDIFGROUP = 0x80246989 + SIOCDIFPARENT = 0x802069b4 SIOCDIFPHYADDR = 0x80206949 - SIOCDLIFADDR = 0x8218691e + SIOCDPWE3NEIGHBOR = 0x802069de + SIOCDVNETID = 0x802069af SIOCGETKALIVE = 0xc01869a4 SIOCGETLABEL = 0x8020699a + SIOCGETMPWCFG = 0xc02069ae SIOCGETPFLOW = 0xc02069fe SIOCGETPFSYNC = 0xc02069f8 SIOCGETSGCNT = 0xc0147534 SIOCGETVIFCNT = 0xc0147533 SIOCGETVLAN = 0xc0206990 - SIOCGHIWAT = 0x40047301 SIOCGIFADDR = 0xc0206921 - SIOCGIFASYNCMAP = 0xc020697c SIOCGIFBRDADDR = 0xc0206923 SIOCGIFCONF = 0xc0086924 SIOCGIFDATA = 0xc020691b @@ -1168,40 +1333,53 @@ const ( SIOCGIFFLAGS = 0xc0206911 SIOCGIFGATTR = 0xc024698b SIOCGIFGENERIC = 0xc020693a + SIOCGIFGLIST = 0xc024698d SIOCGIFGMEMB = 0xc024698a SIOCGIFGROUP = 0xc0246988 SIOCGIFHARDMTU = 0xc02069a5 - SIOCGIFMEDIA = 0xc0286936 + SIOCGIFLLPRIO = 0xc02069b6 + SIOCGIFMEDIA = 0xc0386938 SIOCGIFMETRIC = 0xc0206917 SIOCGIFMTU = 0xc020697e SIOCGIFNETMASK = 0xc0206925 - SIOCGIFPDSTADDR = 0xc0206948 + SIOCGIFPAIR = 0xc02069b1 + SIOCGIFPARENT = 0xc02069b3 SIOCGIFPRIORITY = 0xc020699c - SIOCGIFPSRCADDR = 0xc0206947 SIOCGIFRDOMAIN = 0xc02069a0 SIOCGIFRTLABEL = 0xc0206983 - SIOCGIFTIMESLOT = 0xc0206986 + SIOCGIFRXR = 0x802069aa + SIOCGIFSFFPAGE = 0xc1126939 SIOCGIFXFLAGS = 0xc020699e 
- SIOCGLIFADDR = 0xc218691d SIOCGLIFPHYADDR = 0xc218694b + SIOCGLIFPHYDF = 0xc02069c2 + SIOCGLIFPHYECN = 0xc02069c8 SIOCGLIFPHYRTABLE = 0xc02069a2 SIOCGLIFPHYTTL = 0xc02069a9 - SIOCGLOWAT = 0x40047303 SIOCGPGRP = 0x40047309 + SIOCGPWE3 = 0xc0206998 + SIOCGPWE3CTRLWORD = 0xc02069dc + SIOCGPWE3FAT = 0xc02069dd + SIOCGPWE3NEIGHBOR = 0xc21869de + SIOCGRXHPRIO = 0xc02069db SIOCGSPPPPARAMS = 0xc0206994 + SIOCGTXHPRIO = 0xc02069c6 + SIOCGUMBINFO = 0xc02069be + SIOCGUMBPARAM = 0xc02069c0 SIOCGVH = 0xc02069f6 + SIOCGVNETFLOWID = 0xc02069c4 SIOCGVNETID = 0xc02069a7 + SIOCIFAFATTACH = 0x801169ab + SIOCIFAFDETACH = 0x801169ac SIOCIFCREATE = 0x8020697a SIOCIFDESTROY = 0x80206979 SIOCIFGCLONERS = 0xc00c6978 SIOCSETKALIVE = 0x801869a3 SIOCSETLABEL = 0x80206999 + SIOCSETMPWCFG = 0x802069ad SIOCSETPFLOW = 0x802069fd SIOCSETPFSYNC = 0x802069f7 SIOCSETVLAN = 0x8020698f - SIOCSHIWAT = 0x80047300 SIOCSIFADDR = 0x8020690c - SIOCSIFASYNCMAP = 0x8020697d SIOCSIFBRDADDR = 0x80206913 SIOCSIFDESCR = 0x80206980 SIOCSIFDSTADDR = 0x8020690e @@ -1209,25 +1387,37 @@ const ( SIOCSIFGATTR = 0x8024698c SIOCSIFGENERIC = 0x80206939 SIOCSIFLLADDR = 0x8020691f - SIOCSIFMEDIA = 0xc0206935 + SIOCSIFLLPRIO = 0x802069b5 + SIOCSIFMEDIA = 0xc0206937 SIOCSIFMETRIC = 0x80206918 SIOCSIFMTU = 0x8020697f SIOCSIFNETMASK = 0x80206916 - SIOCSIFPHYADDR = 0x80406946 + SIOCSIFPAIR = 0x802069b0 + SIOCSIFPARENT = 0x802069b2 SIOCSIFPRIORITY = 0x8020699b SIOCSIFRDOMAIN = 0x8020699f SIOCSIFRTLABEL = 0x80206982 - SIOCSIFTIMESLOT = 0x80206985 SIOCSIFXFLAGS = 0x8020699d SIOCSLIFPHYADDR = 0x8218694a + SIOCSLIFPHYDF = 0x802069c1 + SIOCSLIFPHYECN = 0x802069c7 SIOCSLIFPHYRTABLE = 0x802069a1 SIOCSLIFPHYTTL = 0x802069a8 - SIOCSLOWAT = 0x80047302 SIOCSPGRP = 0x80047308 + SIOCSPWE3CTRLWORD = 0x802069dc + SIOCSPWE3FAT = 0x802069dd + SIOCSPWE3NEIGHBOR = 0x821869de + SIOCSRXHPRIO = 0x802069db SIOCSSPPPPARAMS = 0x80206993 + SIOCSTXHPRIO = 0x802069c5 + SIOCSUMBPARAM = 0x802069bf SIOCSVH = 0xc02069f5 + SIOCSVNETFLOWID = 0x802069c3 
SIOCSVNETID = 0x802069a6 + SOCK_CLOEXEC = 0x8000 SOCK_DGRAM = 0x2 + SOCK_DNS = 0x1000 + SOCK_NONBLOCK = 0x4000 SOCK_RAW = 0x3 SOCK_RDM = 0x4 SOCK_SEQPACKET = 0x5 @@ -1238,6 +1428,7 @@ const ( SO_BINDANY = 0x1000 SO_BROADCAST = 0x20 SO_DEBUG = 0x1 + SO_DOMAIN = 0x1024 SO_DONTROUTE = 0x10 SO_ERROR = 0x1007 SO_KEEPALIVE = 0x8 @@ -1245,6 +1436,7 @@ const ( SO_NETPROC = 0x1020 SO_OOBINLINE = 0x100 SO_PEERCRED = 0x1022 + SO_PROTOCOL = 0x1025 SO_RCVBUF = 0x1002 SO_RCVLOWAT = 0x1004 SO_RCVTIMEO = 0x1006 @@ -1258,6 +1450,7 @@ const ( SO_TIMESTAMP = 0x800 SO_TYPE = 0x1008 SO_USELOOPBACK = 0x40 + SO_ZEROIZE = 0x2000 S_BLKSIZE = 0x200 S_IEXEC = 0x40 S_IFBLK = 0x6000 @@ -1287,9 +1480,24 @@ const ( S_IXOTH = 0x1 S_IXUSR = 0x40 TCIFLUSH = 0x1 + TCIOFF = 0x3 TCIOFLUSH = 0x3 + TCION = 0x4 TCOFLUSH = 0x2 - TCP_MAXBURST = 0x4 + TCOOFF = 0x1 + TCOON = 0x2 + TCPOPT_EOL = 0x0 + TCPOPT_MAXSEG = 0x2 + TCPOPT_NOP = 0x1 + TCPOPT_SACK = 0x5 + TCPOPT_SACK_HDR = 0x1010500 + TCPOPT_SACK_PERMITTED = 0x4 + TCPOPT_SACK_PERMIT_HDR = 0x1010402 + TCPOPT_SIGNATURE = 0x13 + TCPOPT_TIMESTAMP = 0x8 + TCPOPT_TSTAMP_HDR = 0x101080a + TCPOPT_WINDOW = 0x3 + TCP_INFO = 0x9 TCP_MAXSEG = 0x2 TCP_MAXWIN = 0xffff TCP_MAX_SACK = 0x3 @@ -1298,11 +1506,15 @@ const ( TCP_MSS = 0x200 TCP_NODELAY = 0x1 TCP_NOPUSH = 0x10 - TCP_NSTATES = 0xb + TCP_SACKHOLE_LIMIT = 0x80 TCP_SACK_ENABLE = 0x8 TCSAFLUSH = 0x2 + TIMER_ABSTIME = 0x1 + TIMER_RELTIME = 0x0 TIOCCBRK = 0x2000747a TIOCCDTR = 0x20007478 + TIOCCHKVERAUTH = 0x2000741e + TIOCCLRVERAUTH = 0x2000741d TIOCCONS = 0x80047462 TIOCDRAIN = 0x2000745e TIOCEXCL = 0x2000740d @@ -1357,17 +1569,21 @@ const ( TIOCSETAF = 0x802c7416 TIOCSETAW = 0x802c7415 TIOCSETD = 0x8004741b + TIOCSETVERAUTH = 0x8004741c TIOCSFLAGS = 0x8004745c TIOCSIG = 0x8004745f TIOCSPGRP = 0x80047476 TIOCSTART = 0x2000746e - TIOCSTAT = 0x80047465 - TIOCSTI = 0x80017472 + TIOCSTAT = 0x20007465 TIOCSTOP = 0x2000746f TIOCSTSTAMP = 0x8008745a TIOCSWINSZ = 0x80087467 TIOCUCNTL = 0x80047466 + TIOCUCNTL_CBRK = 0x7a + 
TIOCUCNTL_SBRK = 0x7b TOSTOP = 0x400000 + UTIME_NOW = -0x2 + UTIME_OMIT = -0x1 VDISCARD = 0xf VDSUSP = 0xb VEOF = 0x0 @@ -1378,6 +1594,19 @@ const ( VKILL = 0x5 VLNEXT = 0xe VMIN = 0x10 + VM_ANONMIN = 0x7 + VM_LOADAVG = 0x2 + VM_MALLOC_CONF = 0xc + VM_MAXID = 0xd + VM_MAXSLP = 0xa + VM_METER = 0x1 + VM_NKMEMPAGES = 0x6 + VM_PSSTRINGS = 0x3 + VM_SWAPENCRYPT = 0x5 + VM_USPACE = 0xb + VM_UVMEXP = 0x4 + VM_VNODEMIN = 0x9 + VM_VTEXTMIN = 0x8 VQUIT = 0x9 VREPRINT = 0x6 VSTART = 0xc @@ -1390,8 +1619,8 @@ const ( WCONTINUED = 0x8 WCOREFLAG = 0x80 WNOHANG = 0x1 - WSTOPPED = 0x7f WUNTRACED = 0x2 + XCASE = 0x1000000 ) // Errors @@ -1405,6 +1634,7 @@ const ( EALREADY = syscall.Errno(0x25) EAUTH = syscall.Errno(0x50) EBADF = syscall.Errno(0x9) + EBADMSG = syscall.Errno(0x5c) EBADRPC = syscall.Errno(0x48) EBUSY = syscall.Errno(0x10) ECANCELED = syscall.Errno(0x58) @@ -1431,7 +1661,7 @@ const ( EIPSEC = syscall.Errno(0x52) EISCONN = syscall.Errno(0x38) EISDIR = syscall.Errno(0x15) - ELAST = syscall.Errno(0x5b) + ELAST = syscall.Errno(0x5f) ELOOP = syscall.Errno(0x3e) EMEDIUMTYPE = syscall.Errno(0x56) EMFILE = syscall.Errno(0x18) @@ -1459,12 +1689,14 @@ const ( ENOTCONN = syscall.Errno(0x39) ENOTDIR = syscall.Errno(0x14) ENOTEMPTY = syscall.Errno(0x42) + ENOTRECOVERABLE = syscall.Errno(0x5d) ENOTSOCK = syscall.Errno(0x26) ENOTSUP = syscall.Errno(0x5b) ENOTTY = syscall.Errno(0x19) ENXIO = syscall.Errno(0x6) EOPNOTSUPP = syscall.Errno(0x2d) EOVERFLOW = syscall.Errno(0x57) + EOWNERDEAD = syscall.Errno(0x5e) EPERM = syscall.Errno(0x1) EPFNOSUPPORT = syscall.Errno(0x2e) EPIPE = syscall.Errno(0x20) @@ -1472,6 +1704,7 @@ const ( EPROCUNAVAIL = syscall.Errno(0x4c) EPROGMISMATCH = syscall.Errno(0x4b) EPROGUNAVAIL = syscall.Errno(0x4a) + EPROTO = syscall.Errno(0x5f) EPROTONOSUPPORT = syscall.Errno(0x2b) EPROTOTYPE = syscall.Errno(0x29) ERANGE = syscall.Errno(0x22) @@ -1568,7 +1801,7 @@ var errorList = [...]struct { {32, "EPIPE", "broken pipe"}, {33, "EDOM", "numerical argument out of 
domain"}, {34, "ERANGE", "result too large"}, - {35, "EWOULDBLOCK", "resource temporarily unavailable"}, + {35, "EAGAIN", "resource temporarily unavailable"}, {36, "EINPROGRESS", "operation now in progress"}, {37, "EALREADY", "operation already in progress"}, {38, "ENOTSOCK", "socket operation on non-socket"}, @@ -1624,7 +1857,11 @@ var errorList = [...]struct { {88, "ECANCELED", "operation canceled"}, {89, "EIDRM", "identifier removed"}, {90, "ENOMSG", "no message of desired type"}, - {91, "ELAST", "not supported"}, + {91, "ENOTSUP", "not supported"}, + {92, "EBADMSG", "bad message"}, + {93, "ENOTRECOVERABLE", "state not recoverable"}, + {94, "EOWNERDEAD", "previous owner died"}, + {95, "ELAST", "protocol error"}, } // Signal table @@ -1638,7 +1875,7 @@ var signalList = [...]struct { {3, "SIGQUIT", "quit"}, {4, "SIGILL", "illegal instruction"}, {5, "SIGTRAP", "trace/BPT trap"}, - {6, "SIGABRT", "abort trap"}, + {6, "SIGIOT", "abort trap"}, {7, "SIGEMT", "EMT trap"}, {8, "SIGFPE", "floating point exception"}, {9, "SIGKILL", "killed"}, @@ -1665,4 +1902,5 @@ var signalList = [...]struct { {30, "SIGUSR1", "user defined signal 1"}, {31, "SIGUSR2", "user defined signal 2"}, {32, "SIGTHR", "thread AST"}, + {28672, "SIGSTKSZ", "unknown signal"}, } diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zerrors_openbsd_amd64.go b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zerrors_openbsd_amd64.go index 25cb6094813c..6015fcb2bf69 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zerrors_openbsd_amd64.go +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zerrors_openbsd_amd64.go @@ -109,6 +109,15 @@ const ( BPF_DIRECTION_IN = 0x1 BPF_DIRECTION_OUT = 0x2 BPF_DIV = 0x30 + BPF_FILDROP_CAPTURE = 0x1 + BPF_FILDROP_DROP = 0x2 + BPF_FILDROP_PASS = 0x0 + BPF_F_DIR_IN = 0x10 + BPF_F_DIR_MASK = 0x30 + BPF_F_DIR_OUT = 0x20 + BPF_F_DIR_SHIFT = 0x4 + BPF_F_FLOWID = 0x8 + BPF_F_PRI_MASK = 0x7 BPF_H = 0x8 BPF_IMM = 0x0 BPF_IND = 0x40 @@ -137,6 
+146,7 @@ const ( BPF_OR = 0x40 BPF_RELEASE = 0x30bb6 BPF_RET = 0x6 + BPF_RND = 0xc0 BPF_RSH = 0x70 BPF_ST = 0x2 BPF_STX = 0x3 @@ -177,7 +187,65 @@ const ( CTL_KERN = 0x1 CTL_MAXNAME = 0xc CTL_NET = 0x4 + DIOCADDQUEUE = 0xc110445d + DIOCADDRULE = 0xcd604404 + DIOCADDSTATE = 0xc1084425 + DIOCCHANGERULE = 0xcd60441a + DIOCCLRIFFLAG = 0xc028445a + DIOCCLRSRCNODES = 0x20004455 + DIOCCLRSTATES = 0xc0e04412 + DIOCCLRSTATUS = 0xc0284416 + DIOCGETLIMIT = 0xc0084427 + DIOCGETQSTATS = 0xc1204460 + DIOCGETQUEUE = 0xc110445f + DIOCGETQUEUES = 0xc110445e + DIOCGETRULE = 0xcd604407 + DIOCGETRULES = 0xcd604406 + DIOCGETRULESET = 0xc444443b + DIOCGETRULESETS = 0xc444443a + DIOCGETSRCNODES = 0xc0104454 + DIOCGETSTATE = 0xc1084413 + DIOCGETSTATES = 0xc0104419 + DIOCGETSTATUS = 0xc1e84415 + DIOCGETSYNFLWATS = 0xc0084463 + DIOCGETTIMEOUT = 0xc008441e + DIOCIGETIFACES = 0xc0284457 + DIOCKILLSRCNODES = 0xc080445b + DIOCKILLSTATES = 0xc0e04429 + DIOCNATLOOK = 0xc0504417 + DIOCOSFPADD = 0xc088444f DIOCOSFPFLUSH = 0x2000444e + DIOCOSFPGET = 0xc0884450 + DIOCRADDADDRS = 0xc4504443 + DIOCRADDTABLES = 0xc450443d + DIOCRCLRADDRS = 0xc4504442 + DIOCRCLRASTATS = 0xc4504448 + DIOCRCLRTABLES = 0xc450443c + DIOCRCLRTSTATS = 0xc4504441 + DIOCRDELADDRS = 0xc4504444 + DIOCRDELTABLES = 0xc450443e + DIOCRGETADDRS = 0xc4504446 + DIOCRGETASTATS = 0xc4504447 + DIOCRGETTABLES = 0xc450443f + DIOCRGETTSTATS = 0xc4504440 + DIOCRINADEFINE = 0xc450444d + DIOCRSETADDRS = 0xc4504445 + DIOCRSETTFLAGS = 0xc450444a + DIOCRTSTADDRS = 0xc4504449 + DIOCSETDEBUG = 0xc0044418 + DIOCSETHOSTID = 0xc0044456 + DIOCSETIFFLAG = 0xc0284459 + DIOCSETLIMIT = 0xc0084428 + DIOCSETREASS = 0xc004445c + DIOCSETSTATUSIF = 0xc0284414 + DIOCSETSYNCOOKIES = 0xc0014462 + DIOCSETSYNFLWATS = 0xc0084461 + DIOCSETTIMEOUT = 0xc008441d + DIOCSTART = 0x20004401 + DIOCSTOP = 0x20004402 + DIOCXBEGIN = 0xc0104451 + DIOCXCOMMIT = 0xc0104452 + DIOCXROLLBACK = 0xc0104453 DLT_ARCNET = 0x7 DLT_ATM_RFC1483 = 0xb DLT_AX25 = 0x3 @@ -240,6 +308,8 @@ const ( 
EMUL_ENABLED = 0x1 EMUL_NATIVE = 0x2 ENDRUNDISC = 0x9 + ETH64_8021_RSVD_MASK = 0xfffffffffff0 + ETH64_8021_RSVD_PREFIX = 0x180c2000000 ETHERMIN = 0x2e ETHERMTU = 0x5dc ETHERTYPE_8023 = 0x4 @@ -292,6 +362,7 @@ const ( ETHERTYPE_DN = 0x6003 ETHERTYPE_DOGFIGHT = 0x1989 ETHERTYPE_DSMD = 0x8039 + ETHERTYPE_EAPOL = 0x888e ETHERTYPE_ECMA = 0x803 ETHERTYPE_ENCRYPT = 0x803d ETHERTYPE_ES = 0x805d @@ -323,6 +394,7 @@ const ( ETHERTYPE_LLDP = 0x88cc ETHERTYPE_LOGICRAFT = 0x8148 ETHERTYPE_LOOPBACK = 0x9000 + ETHERTYPE_MACSEC = 0x88e5 ETHERTYPE_MATRA = 0x807a ETHERTYPE_MAX = 0xffff ETHERTYPE_MERIT = 0x807c @@ -351,15 +423,17 @@ const ( ETHERTYPE_NCD = 0x8149 ETHERTYPE_NESTAR = 0x8006 ETHERTYPE_NETBEUI = 0x8191 + ETHERTYPE_NHRP = 0x2001 ETHERTYPE_NOVELL = 0x8138 ETHERTYPE_NS = 0x600 ETHERTYPE_NSAT = 0x601 ETHERTYPE_NSCOMPAT = 0x807 + ETHERTYPE_NSH = 0x984f ETHERTYPE_NTRAILER = 0x10 ETHERTYPE_OS9 = 0x7007 ETHERTYPE_OS9NET = 0x7009 ETHERTYPE_PACER = 0x80c6 - ETHERTYPE_PAE = 0x888e + ETHERTYPE_PBB = 0x88e7 ETHERTYPE_PCS = 0x4242 ETHERTYPE_PLANNING = 0x8044 ETHERTYPE_PPP = 0x880b @@ -441,10 +515,11 @@ const ( ETHER_VLAN_ENCAP_LEN = 0x4 EVFILT_AIO = -0x3 EVFILT_DEVICE = -0x8 + EVFILT_EXCEPT = -0x9 EVFILT_PROC = -0x5 EVFILT_READ = -0x1 EVFILT_SIGNAL = -0x6 - EVFILT_SYSCOUNT = 0x8 + EVFILT_SYSCOUNT = 0x9 EVFILT_TIMER = -0x7 EVFILT_VNODE = -0x4 EVFILT_WRITE = -0x2 @@ -466,7 +541,7 @@ const ( EV_FLAG1 = 0x2000 EV_ONESHOT = 0x10 EV_RECEIPT = 0x40 - EV_SYSFLAGS = 0xf000 + EV_SYSFLAGS = 0xf800 EXTA = 0x4b00 EXTB = 0x9600 EXTPROC = 0x800 @@ -732,6 +807,7 @@ const ( IFT_VOICEOVERCABLE = 0xc6 IFT_VOICEOVERFRAMERELAY = 0x99 IFT_VOICEOVERIP = 0x68 + IFT_WIREGUARD = 0xfb IFT_X213 = 0x5d IFT_X25 = 0x5 IFT_X25DDN = 0x4 @@ -797,9 +873,11 @@ const ( IPPROTO_RAW = 0xff IPPROTO_ROUTING = 0x2b IPPROTO_RSVP = 0x2e + IPPROTO_SCTP = 0x84 IPPROTO_TCP = 0x6 IPPROTO_TP = 0x1d IPPROTO_UDP = 0x11 + IPPROTO_UDPLITE = 0x88 IPV6_AUTH_LEVEL = 0x35 IPV6_AUTOFLOWLABEL = 0x3b IPV6_CHECKSUM = 0x1a @@ -906,6 +984,9 @@ 
const ( IP_TTL = 0x4 ISIG = 0x80 ISTRIP = 0x20 + ITIMER_PROF = 0x2 + ITIMER_REAL = 0x0 + ITIMER_VIRTUAL = 0x1 IUCLC = 0x1000 IXANY = 0x800 IXOFF = 0x400 @@ -970,12 +1051,26 @@ const ( MNT_ROOTFS = 0x4000 MNT_SOFTDEP = 0x4000000 MNT_STALLED = 0x100000 + MNT_SWAPPABLE = 0x200000 MNT_SYNCHRONOUS = 0x2 MNT_UPDATE = 0x10000 MNT_VISFLAGMASK = 0x400ffff MNT_WAIT = 0x1 MNT_WANTRDWR = 0x2000000 MNT_WXALLOWED = 0x800 + MOUNT_AFS = "afs" + MOUNT_CD9660 = "cd9660" + MOUNT_EXT2FS = "ext2fs" + MOUNT_FFS = "ffs" + MOUNT_FUSEFS = "fuse" + MOUNT_MFS = "mfs" + MOUNT_MSDOS = "msdos" + MOUNT_NCPFS = "ncpfs" + MOUNT_NFS = "nfs" + MOUNT_NTFS = "ntfs" + MOUNT_TMPFS = "tmpfs" + MOUNT_UDF = "udf" + MOUNT_UFS = "ffs" MSG_BCAST = 0x100 MSG_CMSG_CLOEXEC = 0x800 MSG_CTRUNC = 0x20 @@ -988,6 +1083,7 @@ const ( MSG_PEEK = 0x2 MSG_TRUNC = 0x10 MSG_WAITALL = 0x40 + MSG_WAITFORONE = 0x1000 MS_ASYNC = 0x1 MS_INVALIDATE = 0x4 MS_SYNC = 0x2 @@ -996,7 +1092,8 @@ const ( NET_RT_FLAGS = 0x2 NET_RT_IFLIST = 0x3 NET_RT_IFNAMES = 0x6 - NET_RT_MAXID = 0x7 + NET_RT_MAXID = 0x8 + NET_RT_SOURCE = 0x7 NET_RT_STATS = 0x4 NET_RT_TABLE = 0x5 NFDBITS = 0x20 @@ -1013,6 +1110,7 @@ const ( NOTE_FORK = 0x40000000 NOTE_LINK = 0x10 NOTE_LOWAT = 0x1 + NOTE_OOB = 0x4 NOTE_PCTRLMASK = 0xf0000000 NOTE_PDATAMASK = 0xfffff NOTE_RENAME = 0x20 @@ -1130,9 +1228,11 @@ const ( RTF_STATIC = 0x800 RTF_UP = 0x1 RTF_USETRAILERS = 0x8000 + RTM_80211INFO = 0x15 RTM_ADD = 0x1 RTM_BFD = 0x12 RTM_CHANGE = 0x3 + RTM_CHGADDRATTR = 0x14 RTM_DELADDR = 0xd RTM_DELETE = 0x2 RTM_DESYNC = 0x10 @@ -1140,7 +1240,6 @@ const ( RTM_IFANNOUNCE = 0xf RTM_IFINFO = 0xe RTM_INVALIDATE = 0x11 - RTM_LOCK = 0x8 RTM_LOSING = 0x5 RTM_MAXSIZE = 0x800 RTM_MISS = 0x7 @@ -1148,7 +1247,7 @@ const ( RTM_PROPOSAL = 0x13 RTM_REDIRECT = 0x6 RTM_RESOLVE = 0xb - RTM_RTTUNIT = 0xf4240 + RTM_SOURCE = 0x16 RTM_VERSION = 0x5 RTV_EXPIRE = 0x4 RTV_HOPCOUNT = 0x2 @@ -1166,6 +1265,9 @@ const ( RUSAGE_THREAD = 0x1 SCM_RIGHTS = 0x1 SCM_TIMESTAMP = 0x4 + SEEK_CUR = 0x1 + SEEK_END = 0x2 
+ SEEK_SET = 0x0 SHUT_RD = 0x0 SHUT_RDWR = 0x2 SHUT_WR = 0x1 @@ -1182,35 +1284,37 @@ const ( SIOCBRDGDELS = 0x80606942 SIOCBRDGFLUSH = 0x80606948 SIOCBRDGFRL = 0x808c694e - SIOCBRDGGCACHE = 0xc0186941 - SIOCBRDGGFD = 0xc0186952 - SIOCBRDGGHT = 0xc0186951 + SIOCBRDGGCACHE = 0xc0146941 + SIOCBRDGGFD = 0xc0146952 + SIOCBRDGGHT = 0xc0146951 SIOCBRDGGIFFLGS = 0xc060693e - SIOCBRDGGMA = 0xc0186953 + SIOCBRDGGMA = 0xc0146953 SIOCBRDGGPARAM = 0xc0406958 - SIOCBRDGGPRI = 0xc0186950 + SIOCBRDGGPRI = 0xc0146950 SIOCBRDGGRL = 0xc030694f - SIOCBRDGGTO = 0xc0186946 + SIOCBRDGGTO = 0xc0146946 SIOCBRDGIFS = 0xc0606942 SIOCBRDGRTS = 0xc0206943 SIOCBRDGSADDR = 0xc1286944 - SIOCBRDGSCACHE = 0x80186940 - SIOCBRDGSFD = 0x80186952 - SIOCBRDGSHT = 0x80186951 + SIOCBRDGSCACHE = 0x80146940 + SIOCBRDGSFD = 0x80146952 + SIOCBRDGSHT = 0x80146951 SIOCBRDGSIFCOST = 0x80606955 SIOCBRDGSIFFLGS = 0x8060693f SIOCBRDGSIFPRIO = 0x80606954 SIOCBRDGSIFPROT = 0x8060694a - SIOCBRDGSMA = 0x80186953 - SIOCBRDGSPRI = 0x80186950 - SIOCBRDGSPROTO = 0x8018695a - SIOCBRDGSTO = 0x80186945 - SIOCBRDGSTXHC = 0x80186959 + SIOCBRDGSMA = 0x80146953 + SIOCBRDGSPRI = 0x80146950 + SIOCBRDGSPROTO = 0x8014695a + SIOCBRDGSTO = 0x80146945 + SIOCBRDGSTXHC = 0x80146959 + SIOCDELLABEL = 0x80206997 SIOCDELMULTI = 0x80206932 SIOCDIFADDR = 0x80206919 SIOCDIFGROUP = 0x80286989 SIOCDIFPARENT = 0x802069b4 SIOCDIFPHYADDR = 0x80206949 + SIOCDPWE3NEIGHBOR = 0x802069de SIOCDVNETID = 0x802069af SIOCGETKALIVE = 0xc01869a4 SIOCGETLABEL = 0x8020699a @@ -1229,6 +1333,7 @@ const ( SIOCGIFFLAGS = 0xc0206911 SIOCGIFGATTR = 0xc028698b SIOCGIFGENERIC = 0xc020693a + SIOCGIFGLIST = 0xc028698d SIOCGIFGMEMB = 0xc028698a SIOCGIFGROUP = 0xc0286988 SIOCGIFHARDMTU = 0xc02069a5 @@ -1243,13 +1348,21 @@ const ( SIOCGIFRDOMAIN = 0xc02069a0 SIOCGIFRTLABEL = 0xc0206983 SIOCGIFRXR = 0x802069aa + SIOCGIFSFFPAGE = 0xc1126939 SIOCGIFXFLAGS = 0xc020699e SIOCGLIFPHYADDR = 0xc218694b SIOCGLIFPHYDF = 0xc02069c2 + SIOCGLIFPHYECN = 0xc02069c8 SIOCGLIFPHYRTABLE = 
0xc02069a2 SIOCGLIFPHYTTL = 0xc02069a9 SIOCGPGRP = 0x40047309 + SIOCGPWE3 = 0xc0206998 + SIOCGPWE3CTRLWORD = 0xc02069dc + SIOCGPWE3FAT = 0xc02069dd + SIOCGPWE3NEIGHBOR = 0xc21869de + SIOCGRXHPRIO = 0xc02069db SIOCGSPPPPARAMS = 0xc0206994 + SIOCGTXHPRIO = 0xc02069c6 SIOCGUMBINFO = 0xc02069be SIOCGUMBPARAM = 0xc02069c0 SIOCGVH = 0xc02069f6 @@ -1287,19 +1400,20 @@ const ( SIOCSIFXFLAGS = 0x8020699d SIOCSLIFPHYADDR = 0x8218694a SIOCSLIFPHYDF = 0x802069c1 + SIOCSLIFPHYECN = 0x802069c7 SIOCSLIFPHYRTABLE = 0x802069a1 SIOCSLIFPHYTTL = 0x802069a8 SIOCSPGRP = 0x80047308 + SIOCSPWE3CTRLWORD = 0x802069dc + SIOCSPWE3FAT = 0x802069dd + SIOCSPWE3NEIGHBOR = 0x821869de + SIOCSRXHPRIO = 0x802069db SIOCSSPPPPARAMS = 0x80206993 + SIOCSTXHPRIO = 0x802069c5 SIOCSUMBPARAM = 0x802069bf SIOCSVH = 0xc02069f5 SIOCSVNETFLOWID = 0x802069c3 SIOCSVNETID = 0x802069a6 - SIOCSWGDPID = 0xc018695b - SIOCSWGMAXFLOW = 0xc0186960 - SIOCSWGMAXGROUP = 0xc018695d - SIOCSWSDPID = 0x8018695c - SIOCSWSPORTNO = 0xc060695f SOCK_CLOEXEC = 0x8000 SOCK_DGRAM = 0x2 SOCK_DNS = 0x1000 @@ -1314,6 +1428,7 @@ const ( SO_BINDANY = 0x1000 SO_BROADCAST = 0x20 SO_DEBUG = 0x1 + SO_DOMAIN = 0x1024 SO_DONTROUTE = 0x10 SO_ERROR = 0x1007 SO_KEEPALIVE = 0x8 @@ -1321,6 +1436,7 @@ const ( SO_NETPROC = 0x1020 SO_OOBINLINE = 0x100 SO_PEERCRED = 0x1022 + SO_PROTOCOL = 0x1025 SO_RCVBUF = 0x1002 SO_RCVLOWAT = 0x1004 SO_RCVTIMEO = 0x1006 @@ -1370,7 +1486,18 @@ const ( TCOFLUSH = 0x2 TCOOFF = 0x1 TCOON = 0x2 - TCP_MAXBURST = 0x4 + TCPOPT_EOL = 0x0 + TCPOPT_MAXSEG = 0x2 + TCPOPT_NOP = 0x1 + TCPOPT_SACK = 0x5 + TCPOPT_SACK_HDR = 0x1010500 + TCPOPT_SACK_PERMITTED = 0x4 + TCPOPT_SACK_PERMIT_HDR = 0x1010402 + TCPOPT_SIGNATURE = 0x13 + TCPOPT_TIMESTAMP = 0x8 + TCPOPT_TSTAMP_HDR = 0x101080a + TCPOPT_WINDOW = 0x3 + TCP_INFO = 0x9 TCP_MAXSEG = 0x2 TCP_MAXWIN = 0xffff TCP_MAX_SACK = 0x3 @@ -1379,8 +1506,11 @@ const ( TCP_MSS = 0x200 TCP_NODELAY = 0x1 TCP_NOPUSH = 0x10 + TCP_SACKHOLE_LIMIT = 0x80 TCP_SACK_ENABLE = 0x8 TCSAFLUSH = 0x2 + TIMER_ABSTIME 
= 0x1 + TIMER_RELTIME = 0x0 TIOCCBRK = 0x2000747a TIOCCDTR = 0x20007478 TIOCCHKVERAUTH = 0x2000741e @@ -1445,7 +1575,6 @@ const ( TIOCSPGRP = 0x80047476 TIOCSTART = 0x2000746e TIOCSTAT = 0x20007465 - TIOCSTI = 0x80017472 TIOCSTOP = 0x2000746f TIOCSTSTAMP = 0x8008745a TIOCSWINSZ = 0x80087467 @@ -1467,7 +1596,8 @@ const ( VMIN = 0x10 VM_ANONMIN = 0x7 VM_LOADAVG = 0x2 - VM_MAXID = 0xc + VM_MALLOC_CONF = 0xc + VM_MAXID = 0xd VM_MAXSLP = 0xa VM_METER = 0x1 VM_NKMEMPAGES = 0x6 @@ -1745,7 +1875,7 @@ var signalList = [...]struct { {3, "SIGQUIT", "quit"}, {4, "SIGILL", "illegal instruction"}, {5, "SIGTRAP", "trace/BPT trap"}, - {6, "SIGABRT", "abort trap"}, + {6, "SIGIOT", "abort trap"}, {7, "SIGEMT", "EMT trap"}, {8, "SIGFPE", "floating point exception"}, {9, "SIGKILL", "killed"}, @@ -1772,4 +1902,5 @@ var signalList = [...]struct { {30, "SIGUSR1", "user defined signal 1"}, {31, "SIGUSR2", "user defined signal 2"}, {32, "SIGTHR", "thread AST"}, + {28672, "SIGSTKSZ", "unknown signal"}, } diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zerrors_openbsd_arm.go b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zerrors_openbsd_arm.go index aef6c085609a..8d44955e44d8 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zerrors_openbsd_arm.go +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zerrors_openbsd_arm.go @@ -46,6 +46,7 @@ const ( AF_SNA = 0xb AF_UNIX = 0x1 AF_UNSPEC = 0x0 + ALTWERASE = 0x200 ARPHRD_ETHER = 0x1 ARPHRD_FRELAY = 0xf ARPHRD_IEEE1394 = 0x18 @@ -82,7 +83,7 @@ const ( BIOCGFILDROP = 0x40044278 BIOCGHDRCMPLT = 0x40044274 BIOCGRSIG = 0x40044273 - BIOCGRTIMEOUT = 0x400c426e + BIOCGRTIMEOUT = 0x4010426e BIOCGSTATS = 0x4008426f BIOCIMMEDIATE = 0x80044270 BIOCLOCK = 0x20004276 @@ -96,7 +97,7 @@ const ( BIOCSFILDROP = 0x80044279 BIOCSHDRCMPLT = 0x80044275 BIOCSRSIG = 0x80044272 - BIOCSRTIMEOUT = 0x800c426d + BIOCSRTIMEOUT = 0x8010426d BIOCVERSION = 0x40044271 BPF_A = 0x10 BPF_ABS = 0x20 @@ -108,6 +109,15 @@ const ( 
BPF_DIRECTION_IN = 0x1 BPF_DIRECTION_OUT = 0x2 BPF_DIV = 0x30 + BPF_FILDROP_CAPTURE = 0x1 + BPF_FILDROP_DROP = 0x2 + BPF_FILDROP_PASS = 0x0 + BPF_F_DIR_IN = 0x10 + BPF_F_DIR_MASK = 0x30 + BPF_F_DIR_OUT = 0x20 + BPF_F_DIR_SHIFT = 0x4 + BPF_F_FLOWID = 0x8 + BPF_F_PRI_MASK = 0x7 BPF_H = 0x8 BPF_IMM = 0x0 BPF_IND = 0x40 @@ -136,6 +146,7 @@ const ( BPF_OR = 0x40 BPF_RELEASE = 0x30bb6 BPF_RET = 0x6 + BPF_RND = 0xc0 BPF_RSH = 0x70 BPF_ST = 0x2 BPF_STX = 0x3 @@ -147,6 +158,12 @@ const ( BRKINT = 0x2 CFLUSH = 0xf CLOCAL = 0x8000 + CLOCK_BOOTTIME = 0x6 + CLOCK_MONOTONIC = 0x3 + CLOCK_PROCESS_CPUTIME_ID = 0x2 + CLOCK_REALTIME = 0x0 + CLOCK_THREAD_CPUTIME_ID = 0x4 + CLOCK_UPTIME = 0x5 CPUSTATES = 0x6 CP_IDLE = 0x5 CP_INTR = 0x4 @@ -170,7 +187,65 @@ const ( CTL_KERN = 0x1 CTL_MAXNAME = 0xc CTL_NET = 0x4 + DIOCADDQUEUE = 0xc100445d + DIOCADDRULE = 0xcce04404 + DIOCADDSTATE = 0xc1084425 + DIOCCHANGERULE = 0xcce0441a + DIOCCLRIFFLAG = 0xc024445a + DIOCCLRSRCNODES = 0x20004455 + DIOCCLRSTATES = 0xc0d04412 + DIOCCLRSTATUS = 0xc0244416 + DIOCGETLIMIT = 0xc0084427 + DIOCGETQSTATS = 0xc1084460 + DIOCGETQUEUE = 0xc100445f + DIOCGETQUEUES = 0xc100445e + DIOCGETRULE = 0xcce04407 + DIOCGETRULES = 0xcce04406 + DIOCGETRULESET = 0xc444443b + DIOCGETRULESETS = 0xc444443a + DIOCGETSRCNODES = 0xc0084454 + DIOCGETSTATE = 0xc1084413 + DIOCGETSTATES = 0xc0084419 + DIOCGETSTATUS = 0xc1e84415 + DIOCGETSYNFLWATS = 0xc0084463 + DIOCGETTIMEOUT = 0xc008441e + DIOCIGETIFACES = 0xc0244457 + DIOCKILLSRCNODES = 0xc068445b + DIOCKILLSTATES = 0xc0d04429 + DIOCNATLOOK = 0xc0504417 + DIOCOSFPADD = 0xc088444f DIOCOSFPFLUSH = 0x2000444e + DIOCOSFPGET = 0xc0884450 + DIOCRADDADDRS = 0xc44c4443 + DIOCRADDTABLES = 0xc44c443d + DIOCRCLRADDRS = 0xc44c4442 + DIOCRCLRASTATS = 0xc44c4448 + DIOCRCLRTABLES = 0xc44c443c + DIOCRCLRTSTATS = 0xc44c4441 + DIOCRDELADDRS = 0xc44c4444 + DIOCRDELTABLES = 0xc44c443e + DIOCRGETADDRS = 0xc44c4446 + DIOCRGETASTATS = 0xc44c4447 + DIOCRGETTABLES = 0xc44c443f + DIOCRGETTSTATS = 0xc44c4440 + 
DIOCRINADEFINE = 0xc44c444d + DIOCRSETADDRS = 0xc44c4445 + DIOCRSETTFLAGS = 0xc44c444a + DIOCRTSTADDRS = 0xc44c4449 + DIOCSETDEBUG = 0xc0044418 + DIOCSETHOSTID = 0xc0044456 + DIOCSETIFFLAG = 0xc0244459 + DIOCSETLIMIT = 0xc0084428 + DIOCSETREASS = 0xc004445c + DIOCSETSTATUSIF = 0xc0244414 + DIOCSETSYNCOOKIES = 0xc0014462 + DIOCSETSYNFLWATS = 0xc0084461 + DIOCSETTIMEOUT = 0xc008441d + DIOCSTART = 0x20004401 + DIOCSTOP = 0x20004402 + DIOCXBEGIN = 0xc00c4451 + DIOCXCOMMIT = 0xc00c4452 + DIOCXROLLBACK = 0xc00c4453 DLT_ARCNET = 0x7 DLT_ATM_RFC1483 = 0xb DLT_AX25 = 0x3 @@ -186,6 +261,7 @@ const ( DLT_LOOP = 0xc DLT_MPLS = 0xdb DLT_NULL = 0x0 + DLT_OPENFLOW = 0x10b DLT_PFLOG = 0x75 DLT_PFSYNC = 0x12 DLT_PPP = 0x9 @@ -196,6 +272,23 @@ const ( DLT_RAW = 0xe DLT_SLIP = 0x8 DLT_SLIP_BSDOS = 0xf + DLT_USBPCAP = 0xf9 + DLT_USER0 = 0x93 + DLT_USER1 = 0x94 + DLT_USER10 = 0x9d + DLT_USER11 = 0x9e + DLT_USER12 = 0x9f + DLT_USER13 = 0xa0 + DLT_USER14 = 0xa1 + DLT_USER15 = 0xa2 + DLT_USER2 = 0x95 + DLT_USER3 = 0x96 + DLT_USER4 = 0x97 + DLT_USER5 = 0x98 + DLT_USER6 = 0x99 + DLT_USER7 = 0x9a + DLT_USER8 = 0x9b + DLT_USER9 = 0x9c DT_BLK = 0x6 DT_CHR = 0x2 DT_DIR = 0x4 @@ -215,6 +308,8 @@ const ( EMUL_ENABLED = 0x1 EMUL_NATIVE = 0x2 ENDRUNDISC = 0x9 + ETH64_8021_RSVD_MASK = 0xfffffffffff0 + ETH64_8021_RSVD_PREFIX = 0x180c2000000 ETHERMIN = 0x2e ETHERMTU = 0x5dc ETHERTYPE_8023 = 0x4 @@ -267,6 +362,7 @@ const ( ETHERTYPE_DN = 0x6003 ETHERTYPE_DOGFIGHT = 0x1989 ETHERTYPE_DSMD = 0x8039 + ETHERTYPE_EAPOL = 0x888e ETHERTYPE_ECMA = 0x803 ETHERTYPE_ENCRYPT = 0x803d ETHERTYPE_ES = 0x805d @@ -298,6 +394,7 @@ const ( ETHERTYPE_LLDP = 0x88cc ETHERTYPE_LOGICRAFT = 0x8148 ETHERTYPE_LOOPBACK = 0x9000 + ETHERTYPE_MACSEC = 0x88e5 ETHERTYPE_MATRA = 0x807a ETHERTYPE_MAX = 0xffff ETHERTYPE_MERIT = 0x807c @@ -326,15 +423,17 @@ const ( ETHERTYPE_NCD = 0x8149 ETHERTYPE_NESTAR = 0x8006 ETHERTYPE_NETBEUI = 0x8191 + ETHERTYPE_NHRP = 0x2001 ETHERTYPE_NOVELL = 0x8138 ETHERTYPE_NS = 0x600 ETHERTYPE_NSAT = 0x601 
ETHERTYPE_NSCOMPAT = 0x807 + ETHERTYPE_NSH = 0x984f ETHERTYPE_NTRAILER = 0x10 ETHERTYPE_OS9 = 0x7007 ETHERTYPE_OS9NET = 0x7009 ETHERTYPE_PACER = 0x80c6 - ETHERTYPE_PAE = 0x888e + ETHERTYPE_PBB = 0x88e7 ETHERTYPE_PCS = 0x4242 ETHERTYPE_PLANNING = 0x8044 ETHERTYPE_PPP = 0x880b @@ -409,28 +508,40 @@ const ( ETHER_CRC_POLY_LE = 0xedb88320 ETHER_HDR_LEN = 0xe ETHER_MAX_DIX_LEN = 0x600 + ETHER_MAX_HARDMTU_LEN = 0xff9b ETHER_MAX_LEN = 0x5ee ETHER_MIN_LEN = 0x40 ETHER_TYPE_LEN = 0x2 ETHER_VLAN_ENCAP_LEN = 0x4 EVFILT_AIO = -0x3 + EVFILT_DEVICE = -0x8 + EVFILT_EXCEPT = -0x9 EVFILT_PROC = -0x5 EVFILT_READ = -0x1 EVFILT_SIGNAL = -0x6 - EVFILT_SYSCOUNT = 0x7 + EVFILT_SYSCOUNT = 0x9 EVFILT_TIMER = -0x7 EVFILT_VNODE = -0x4 EVFILT_WRITE = -0x2 + EVL_ENCAPLEN = 0x4 + EVL_PRIO_BITS = 0xd + EVL_PRIO_MAX = 0x7 + EVL_VLID_MASK = 0xfff + EVL_VLID_MAX = 0xffe + EVL_VLID_MIN = 0x1 + EVL_VLID_NULL = 0x0 EV_ADD = 0x1 EV_CLEAR = 0x20 EV_DELETE = 0x2 EV_DISABLE = 0x8 + EV_DISPATCH = 0x80 EV_ENABLE = 0x4 EV_EOF = 0x8000 EV_ERROR = 0x4000 EV_FLAG1 = 0x2000 EV_ONESHOT = 0x10 - EV_SYSFLAGS = 0xf000 + EV_RECEIPT = 0x40 + EV_SYSFLAGS = 0xf800 EXTA = 0x4b00 EXTB = 0x9600 EXTPROC = 0x800 @@ -443,6 +554,8 @@ const ( F_GETFL = 0x3 F_GETLK = 0x7 F_GETOWN = 0x5 + F_ISATTY = 0xb + F_OK = 0x0 F_RDLCK = 0x1 F_SETFD = 0x2 F_SETFL = 0x4 @@ -459,7 +572,6 @@ const ( IEXTEN = 0x400 IFAN_ARRIVAL = 0x0 IFAN_DEPARTURE = 0x1 - IFA_ROUTE = 0x1 IFF_ALLMULTI = 0x200 IFF_BROADCAST = 0x2 IFF_CANTCHANGE = 0x8e52 @@ -470,12 +582,12 @@ const ( IFF_LOOPBACK = 0x8 IFF_MULTICAST = 0x8000 IFF_NOARP = 0x80 - IFF_NOTRAILERS = 0x20 IFF_OACTIVE = 0x400 IFF_POINTOPOINT = 0x10 IFF_PROMISC = 0x100 IFF_RUNNING = 0x40 IFF_SIMPLEX = 0x800 + IFF_STATICARP = 0x20 IFF_UP = 0x1 IFNAMSIZ = 0x10 IFT_1822 = 0x2 @@ -604,6 +716,7 @@ const ( IFT_LINEGROUP = 0xd2 IFT_LOCALTALK = 0x2a IFT_LOOP = 0x18 + IFT_MBIM = 0xfa IFT_MEDIAMAILOVERIP = 0x8b IFT_MFSIGLINK = 0xa7 IFT_MIOX25 = 0x26 @@ -694,6 +807,7 @@ const ( IFT_VOICEOVERCABLE = 0xc6 
IFT_VOICEOVERFRAMERELAY = 0x99 IFT_VOICEOVERIP = 0x68 + IFT_WIREGUARD = 0xfb IFT_X213 = 0x5d IFT_X25 = 0x5 IFT_X25DDN = 0x4 @@ -728,8 +842,6 @@ const ( IPPROTO_AH = 0x33 IPPROTO_CARP = 0x70 IPPROTO_DIVERT = 0x102 - IPPROTO_DIVERT_INIT = 0x2 - IPPROTO_DIVERT_RESP = 0x1 IPPROTO_DONE = 0x101 IPPROTO_DSTOPTS = 0x3c IPPROTO_EGP = 0x8 @@ -761,9 +873,11 @@ const ( IPPROTO_RAW = 0xff IPPROTO_ROUTING = 0x2b IPPROTO_RSVP = 0x2e + IPPROTO_SCTP = 0x84 IPPROTO_TCP = 0x6 IPPROTO_TP = 0x1d IPPROTO_UDP = 0x11 + IPPROTO_UDPLITE = 0x88 IPV6_AUTH_LEVEL = 0x35 IPV6_AUTOFLOWLABEL = 0x3b IPV6_CHECKSUM = 0x1a @@ -786,6 +900,7 @@ const ( IPV6_LEAVE_GROUP = 0xd IPV6_MAXHLIM = 0xff IPV6_MAXPACKET = 0xffff + IPV6_MINHOPCOUNT = 0x41 IPV6_MMTU = 0x500 IPV6_MULTICAST_HOPS = 0xa IPV6_MULTICAST_IF = 0x9 @@ -825,12 +940,12 @@ const ( IP_DEFAULT_MULTICAST_LOOP = 0x1 IP_DEFAULT_MULTICAST_TTL = 0x1 IP_DF = 0x4000 - IP_DIVERTFL = 0x1022 IP_DROP_MEMBERSHIP = 0xd IP_ESP_NETWORK_LEVEL = 0x16 IP_ESP_TRANS_LEVEL = 0x15 IP_HDRINCL = 0x2 IP_IPCOMP_LEVEL = 0x1d + IP_IPDEFTTL = 0x25 IP_IPSECFLOWINFO = 0x24 IP_IPSEC_LOCAL_AUTH = 0x1b IP_IPSEC_LOCAL_CRED = 0x19 @@ -864,10 +979,15 @@ const ( IP_RETOPTS = 0x8 IP_RF = 0x8000 IP_RTABLE = 0x1021 + IP_SENDSRCADDR = 0x7 IP_TOS = 0x3 IP_TTL = 0x4 ISIG = 0x80 ISTRIP = 0x20 + ITIMER_PROF = 0x2 + ITIMER_REAL = 0x0 + ITIMER_VIRTUAL = 0x1 + IUCLC = 0x1000 IXANY = 0x800 IXOFF = 0x400 IXON = 0x200 @@ -922,6 +1042,7 @@ const ( MNT_NOATIME = 0x8000 MNT_NODEV = 0x10 MNT_NOEXEC = 0x4 + MNT_NOPERM = 0x20 MNT_NOSUID = 0x8 MNT_NOWAIT = 0x2 MNT_QUOTA = 0x2000 @@ -929,12 +1050,27 @@ const ( MNT_RELOAD = 0x40000 MNT_ROOTFS = 0x4000 MNT_SOFTDEP = 0x4000000 + MNT_STALLED = 0x100000 + MNT_SWAPPABLE = 0x200000 MNT_SYNCHRONOUS = 0x2 MNT_UPDATE = 0x10000 MNT_VISFLAGMASK = 0x400ffff MNT_WAIT = 0x1 MNT_WANTRDWR = 0x2000000 MNT_WXALLOWED = 0x800 + MOUNT_AFS = "afs" + MOUNT_CD9660 = "cd9660" + MOUNT_EXT2FS = "ext2fs" + MOUNT_FFS = "ffs" + MOUNT_FUSEFS = "fuse" + MOUNT_MFS = "mfs" + MOUNT_MSDOS = 
"msdos" + MOUNT_NCPFS = "ncpfs" + MOUNT_NFS = "nfs" + MOUNT_NTFS = "ntfs" + MOUNT_TMPFS = "tmpfs" + MOUNT_UDF = "udf" + MOUNT_UFS = "ffs" MSG_BCAST = 0x100 MSG_CMSG_CLOEXEC = 0x800 MSG_CTRUNC = 0x20 @@ -947,6 +1083,7 @@ const ( MSG_PEEK = 0x2 MSG_TRUNC = 0x10 MSG_WAITALL = 0x40 + MSG_WAITFORONE = 0x1000 MS_ASYNC = 0x1 MS_INVALIDATE = 0x4 MS_SYNC = 0x2 @@ -954,12 +1091,16 @@ const ( NET_RT_DUMP = 0x1 NET_RT_FLAGS = 0x2 NET_RT_IFLIST = 0x3 - NET_RT_MAXID = 0x6 + NET_RT_IFNAMES = 0x6 + NET_RT_MAXID = 0x8 + NET_RT_SOURCE = 0x7 NET_RT_STATS = 0x4 NET_RT_TABLE = 0x5 NFDBITS = 0x20 NOFLSH = 0x80000000 + NOKERNINFO = 0x2000000 NOTE_ATTRIB = 0x8 + NOTE_CHANGE = 0x1 NOTE_CHILD = 0x4 NOTE_DELETE = 0x1 NOTE_EOF = 0x2 @@ -969,6 +1110,7 @@ const ( NOTE_FORK = 0x40000000 NOTE_LINK = 0x10 NOTE_LOWAT = 0x1 + NOTE_OOB = 0x4 NOTE_PCTRLMASK = 0xf0000000 NOTE_PDATAMASK = 0xfffff NOTE_RENAME = 0x20 @@ -978,11 +1120,13 @@ const ( NOTE_TRUNCATE = 0x80 NOTE_WRITE = 0x2 OCRNL = 0x10 + OLCUC = 0x20 ONLCR = 0x2 ONLRET = 0x80 ONOCR = 0x40 ONOEOT = 0x8 OPOST = 0x1 + OXTABS = 0x4 O_ACCMODE = 0x3 O_APPEND = 0x8 O_ASYNC = 0x40 @@ -1027,19 +1171,25 @@ const ( RLIMIT_STACK = 0x3 RLIM_INFINITY = 0x7fffffffffffffff RTAX_AUTHOR = 0x6 + RTAX_BFD = 0xb RTAX_BRD = 0x7 + RTAX_DNS = 0xc RTAX_DST = 0x0 RTAX_GATEWAY = 0x1 RTAX_GENMASK = 0x3 RTAX_IFA = 0x5 RTAX_IFP = 0x4 RTAX_LABEL = 0xa - RTAX_MAX = 0xb + RTAX_MAX = 0xf RTAX_NETMASK = 0x2 + RTAX_SEARCH = 0xe RTAX_SRC = 0x8 RTAX_SRCMASK = 0x9 + RTAX_STATIC = 0xd RTA_AUTHOR = 0x40 + RTA_BFD = 0x800 RTA_BRD = 0x80 + RTA_DNS = 0x1000 RTA_DST = 0x1 RTA_GATEWAY = 0x2 RTA_GENMASK = 0x8 @@ -1047,24 +1197,29 @@ const ( RTA_IFP = 0x10 RTA_LABEL = 0x400 RTA_NETMASK = 0x4 + RTA_SEARCH = 0x4000 RTA_SRC = 0x100 RTA_SRCMASK = 0x200 + RTA_STATIC = 0x2000 RTF_ANNOUNCE = 0x4000 + RTF_BFD = 0x1000000 RTF_BLACKHOLE = 0x1000 RTF_BROADCAST = 0x400000 + RTF_CACHED = 0x20000 RTF_CLONED = 0x10000 RTF_CLONING = 0x100 + RTF_CONNECTED = 0x800000 RTF_DONE = 0x40 RTF_DYNAMIC = 0x10 - 
RTF_FMASK = 0x70f808 + RTF_FMASK = 0x110fc08 RTF_GATEWAY = 0x2 RTF_HOST = 0x4 RTF_LLINFO = 0x400 RTF_LOCAL = 0x200000 - RTF_MASK = 0x80 RTF_MODIFIED = 0x20 RTF_MPATH = 0x40000 RTF_MPLS = 0x100000 + RTF_MULTICAST = 0x200 RTF_PERMANENT_ARP = 0x2000 RTF_PROTO1 = 0x8000 RTF_PROTO2 = 0x4000 @@ -1073,23 +1228,26 @@ const ( RTF_STATIC = 0x800 RTF_UP = 0x1 RTF_USETRAILERS = 0x8000 - RTF_XRESOLVE = 0x200 + RTM_80211INFO = 0x15 RTM_ADD = 0x1 + RTM_BFD = 0x12 RTM_CHANGE = 0x3 + RTM_CHGADDRATTR = 0x14 RTM_DELADDR = 0xd RTM_DELETE = 0x2 RTM_DESYNC = 0x10 RTM_GET = 0x4 RTM_IFANNOUNCE = 0xf RTM_IFINFO = 0xe - RTM_LOCK = 0x8 + RTM_INVALIDATE = 0x11 RTM_LOSING = 0x5 RTM_MAXSIZE = 0x800 RTM_MISS = 0x7 RTM_NEWADDR = 0xc + RTM_PROPOSAL = 0x13 RTM_REDIRECT = 0x6 RTM_RESOLVE = 0xb - RTM_RTTUNIT = 0xf4240 + RTM_SOURCE = 0x16 RTM_VERSION = 0x5 RTV_EXPIRE = 0x4 RTV_HOPCOUNT = 0x2 @@ -1099,67 +1257,74 @@ const ( RTV_RTTVAR = 0x80 RTV_SPIPE = 0x10 RTV_SSTHRESH = 0x20 + RT_TABLEID_BITS = 0x8 + RT_TABLEID_MASK = 0xff RT_TABLEID_MAX = 0xff RUSAGE_CHILDREN = -0x1 RUSAGE_SELF = 0x0 RUSAGE_THREAD = 0x1 SCM_RIGHTS = 0x1 SCM_TIMESTAMP = 0x4 + SEEK_CUR = 0x1 + SEEK_END = 0x2 + SEEK_SET = 0x0 SHUT_RD = 0x0 SHUT_RDWR = 0x2 SHUT_WR = 0x1 SIOCADDMULTI = 0x80206931 SIOCAIFADDR = 0x8040691a SIOCAIFGROUP = 0x80246987 - SIOCALIFADDR = 0x8218691c SIOCATMARK = 0x40047307 - SIOCBRDGADD = 0x8054693c - SIOCBRDGADDS = 0x80546941 - SIOCBRDGARL = 0x806e694d + SIOCBRDGADD = 0x8060693c + SIOCBRDGADDL = 0x80606949 + SIOCBRDGADDS = 0x80606941 + SIOCBRDGARL = 0x808c694d SIOCBRDGDADDR = 0x81286947 - SIOCBRDGDEL = 0x8054693d - SIOCBRDGDELS = 0x80546942 - SIOCBRDGFLUSH = 0x80546948 - SIOCBRDGFRL = 0x806e694e + SIOCBRDGDEL = 0x8060693d + SIOCBRDGDELS = 0x80606942 + SIOCBRDGFLUSH = 0x80606948 + SIOCBRDGFRL = 0x808c694e SIOCBRDGGCACHE = 0xc0146941 SIOCBRDGGFD = 0xc0146952 SIOCBRDGGHT = 0xc0146951 - SIOCBRDGGIFFLGS = 0xc054693e + SIOCBRDGGIFFLGS = 0xc060693e SIOCBRDGGMA = 0xc0146953 - SIOCBRDGGPARAM = 0xc03c6958 + 
SIOCBRDGGPARAM = 0xc0406958 SIOCBRDGGPRI = 0xc0146950 SIOCBRDGGRL = 0xc028694f - SIOCBRDGGSIFS = 0xc054693c SIOCBRDGGTO = 0xc0146946 - SIOCBRDGIFS = 0xc0546942 + SIOCBRDGIFS = 0xc0606942 SIOCBRDGRTS = 0xc0186943 SIOCBRDGSADDR = 0xc1286944 SIOCBRDGSCACHE = 0x80146940 SIOCBRDGSFD = 0x80146952 SIOCBRDGSHT = 0x80146951 - SIOCBRDGSIFCOST = 0x80546955 - SIOCBRDGSIFFLGS = 0x8054693f - SIOCBRDGSIFPRIO = 0x80546954 + SIOCBRDGSIFCOST = 0x80606955 + SIOCBRDGSIFFLGS = 0x8060693f + SIOCBRDGSIFPRIO = 0x80606954 + SIOCBRDGSIFPROT = 0x8060694a SIOCBRDGSMA = 0x80146953 SIOCBRDGSPRI = 0x80146950 SIOCBRDGSPROTO = 0x8014695a SIOCBRDGSTO = 0x80146945 SIOCBRDGSTXHC = 0x80146959 + SIOCDELLABEL = 0x80206997 SIOCDELMULTI = 0x80206932 SIOCDIFADDR = 0x80206919 SIOCDIFGROUP = 0x80246989 + SIOCDIFPARENT = 0x802069b4 SIOCDIFPHYADDR = 0x80206949 - SIOCDLIFADDR = 0x8218691e + SIOCDPWE3NEIGHBOR = 0x802069de + SIOCDVNETID = 0x802069af SIOCGETKALIVE = 0xc01869a4 SIOCGETLABEL = 0x8020699a + SIOCGETMPWCFG = 0xc02069ae SIOCGETPFLOW = 0xc02069fe SIOCGETPFSYNC = 0xc02069f8 SIOCGETSGCNT = 0xc0147534 SIOCGETVIFCNT = 0xc0147533 SIOCGETVLAN = 0xc0206990 - SIOCGHIWAT = 0x40047301 SIOCGIFADDR = 0xc0206921 - SIOCGIFASYNCMAP = 0xc020697c SIOCGIFBRDADDR = 0xc0206923 SIOCGIFCONF = 0xc0086924 SIOCGIFDATA = 0xc020691b @@ -1168,41 +1333,53 @@ const ( SIOCGIFFLAGS = 0xc0206911 SIOCGIFGATTR = 0xc024698b SIOCGIFGENERIC = 0xc020693a + SIOCGIFGLIST = 0xc024698d SIOCGIFGMEMB = 0xc024698a SIOCGIFGROUP = 0xc0246988 SIOCGIFHARDMTU = 0xc02069a5 - SIOCGIFMEDIA = 0xc0286936 + SIOCGIFLLPRIO = 0xc02069b6 + SIOCGIFMEDIA = 0xc0386938 SIOCGIFMETRIC = 0xc0206917 SIOCGIFMTU = 0xc020697e SIOCGIFNETMASK = 0xc0206925 - SIOCGIFPDSTADDR = 0xc0206948 + SIOCGIFPAIR = 0xc02069b1 + SIOCGIFPARENT = 0xc02069b3 SIOCGIFPRIORITY = 0xc020699c - SIOCGIFPSRCADDR = 0xc0206947 SIOCGIFRDOMAIN = 0xc02069a0 SIOCGIFRTLABEL = 0xc0206983 SIOCGIFRXR = 0x802069aa - SIOCGIFTIMESLOT = 0xc0206986 + SIOCGIFSFFPAGE = 0xc1126939 SIOCGIFXFLAGS = 0xc020699e - 
SIOCGLIFADDR = 0xc218691d SIOCGLIFPHYADDR = 0xc218694b + SIOCGLIFPHYDF = 0xc02069c2 + SIOCGLIFPHYECN = 0xc02069c8 SIOCGLIFPHYRTABLE = 0xc02069a2 SIOCGLIFPHYTTL = 0xc02069a9 - SIOCGLOWAT = 0x40047303 SIOCGPGRP = 0x40047309 + SIOCGPWE3 = 0xc0206998 + SIOCGPWE3CTRLWORD = 0xc02069dc + SIOCGPWE3FAT = 0xc02069dd + SIOCGPWE3NEIGHBOR = 0xc21869de + SIOCGRXHPRIO = 0xc02069db SIOCGSPPPPARAMS = 0xc0206994 + SIOCGTXHPRIO = 0xc02069c6 + SIOCGUMBINFO = 0xc02069be + SIOCGUMBPARAM = 0xc02069c0 SIOCGVH = 0xc02069f6 + SIOCGVNETFLOWID = 0xc02069c4 SIOCGVNETID = 0xc02069a7 + SIOCIFAFATTACH = 0x801169ab + SIOCIFAFDETACH = 0x801169ac SIOCIFCREATE = 0x8020697a SIOCIFDESTROY = 0x80206979 SIOCIFGCLONERS = 0xc00c6978 SIOCSETKALIVE = 0x801869a3 SIOCSETLABEL = 0x80206999 + SIOCSETMPWCFG = 0x802069ad SIOCSETPFLOW = 0x802069fd SIOCSETPFSYNC = 0x802069f7 SIOCSETVLAN = 0x8020698f - SIOCSHIWAT = 0x80047300 SIOCSIFADDR = 0x8020690c - SIOCSIFASYNCMAP = 0x8020697d SIOCSIFBRDADDR = 0x80206913 SIOCSIFDESCR = 0x80206980 SIOCSIFDSTADDR = 0x8020690e @@ -1210,26 +1387,36 @@ const ( SIOCSIFGATTR = 0x8024698c SIOCSIFGENERIC = 0x80206939 SIOCSIFLLADDR = 0x8020691f - SIOCSIFMEDIA = 0xc0206935 + SIOCSIFLLPRIO = 0x802069b5 + SIOCSIFMEDIA = 0xc0206937 SIOCSIFMETRIC = 0x80206918 SIOCSIFMTU = 0x8020697f SIOCSIFNETMASK = 0x80206916 - SIOCSIFPHYADDR = 0x80406946 + SIOCSIFPAIR = 0x802069b0 + SIOCSIFPARENT = 0x802069b2 SIOCSIFPRIORITY = 0x8020699b SIOCSIFRDOMAIN = 0x8020699f SIOCSIFRTLABEL = 0x80206982 - SIOCSIFTIMESLOT = 0x80206985 SIOCSIFXFLAGS = 0x8020699d SIOCSLIFPHYADDR = 0x8218694a + SIOCSLIFPHYDF = 0x802069c1 + SIOCSLIFPHYECN = 0x802069c7 SIOCSLIFPHYRTABLE = 0x802069a1 SIOCSLIFPHYTTL = 0x802069a8 - SIOCSLOWAT = 0x80047302 SIOCSPGRP = 0x80047308 + SIOCSPWE3CTRLWORD = 0x802069dc + SIOCSPWE3FAT = 0x802069dd + SIOCSPWE3NEIGHBOR = 0x821869de + SIOCSRXHPRIO = 0x802069db SIOCSSPPPPARAMS = 0x80206993 + SIOCSTXHPRIO = 0x802069c5 + SIOCSUMBPARAM = 0x802069bf SIOCSVH = 0xc02069f5 + SIOCSVNETFLOWID = 0x802069c3 SIOCSVNETID 
= 0x802069a6 SOCK_CLOEXEC = 0x8000 SOCK_DGRAM = 0x2 + SOCK_DNS = 0x1000 SOCK_NONBLOCK = 0x4000 SOCK_RAW = 0x3 SOCK_RDM = 0x4 @@ -1241,6 +1428,7 @@ const ( SO_BINDANY = 0x1000 SO_BROADCAST = 0x20 SO_DEBUG = 0x1 + SO_DOMAIN = 0x1024 SO_DONTROUTE = 0x10 SO_ERROR = 0x1007 SO_KEEPALIVE = 0x8 @@ -1248,6 +1436,7 @@ const ( SO_NETPROC = 0x1020 SO_OOBINLINE = 0x100 SO_PEERCRED = 0x1022 + SO_PROTOCOL = 0x1025 SO_RCVBUF = 0x1002 SO_RCVLOWAT = 0x1004 SO_RCVTIMEO = 0x1006 @@ -1261,6 +1450,7 @@ const ( SO_TIMESTAMP = 0x800 SO_TYPE = 0x1008 SO_USELOOPBACK = 0x40 + SO_ZEROIZE = 0x2000 S_BLKSIZE = 0x200 S_IEXEC = 0x40 S_IFBLK = 0x6000 @@ -1290,9 +1480,24 @@ const ( S_IXOTH = 0x1 S_IXUSR = 0x40 TCIFLUSH = 0x1 + TCIOFF = 0x3 TCIOFLUSH = 0x3 + TCION = 0x4 TCOFLUSH = 0x2 - TCP_MAXBURST = 0x4 + TCOOFF = 0x1 + TCOON = 0x2 + TCPOPT_EOL = 0x0 + TCPOPT_MAXSEG = 0x2 + TCPOPT_NOP = 0x1 + TCPOPT_SACK = 0x5 + TCPOPT_SACK_HDR = 0x1010500 + TCPOPT_SACK_PERMITTED = 0x4 + TCPOPT_SACK_PERMIT_HDR = 0x1010402 + TCPOPT_SIGNATURE = 0x13 + TCPOPT_TIMESTAMP = 0x8 + TCPOPT_TSTAMP_HDR = 0x101080a + TCPOPT_WINDOW = 0x3 + TCP_INFO = 0x9 TCP_MAXSEG = 0x2 TCP_MAXWIN = 0xffff TCP_MAX_SACK = 0x3 @@ -1301,11 +1506,15 @@ const ( TCP_MSS = 0x200 TCP_NODELAY = 0x1 TCP_NOPUSH = 0x10 - TCP_NSTATES = 0xb + TCP_SACKHOLE_LIMIT = 0x80 TCP_SACK_ENABLE = 0x8 TCSAFLUSH = 0x2 + TIMER_ABSTIME = 0x1 + TIMER_RELTIME = 0x0 TIOCCBRK = 0x2000747a TIOCCDTR = 0x20007478 + TIOCCHKVERAUTH = 0x2000741e + TIOCCLRVERAUTH = 0x2000741d TIOCCONS = 0x80047462 TIOCDRAIN = 0x2000745e TIOCEXCL = 0x2000740d @@ -1321,7 +1530,7 @@ const ( TIOCGFLAGS = 0x4004745d TIOCGPGRP = 0x40047477 TIOCGSID = 0x40047463 - TIOCGTSTAMP = 0x400c745b + TIOCGTSTAMP = 0x4010745b TIOCGWINSZ = 0x40087468 TIOCMBIC = 0x8004746b TIOCMBIS = 0x8004746c @@ -1360,17 +1569,21 @@ const ( TIOCSETAF = 0x802c7416 TIOCSETAW = 0x802c7415 TIOCSETD = 0x8004741b + TIOCSETVERAUTH = 0x8004741c TIOCSFLAGS = 0x8004745c TIOCSIG = 0x8004745f TIOCSPGRP = 0x80047476 TIOCSTART = 0x2000746e - 
TIOCSTAT = 0x80047465 - TIOCSTI = 0x80017472 + TIOCSTAT = 0x20007465 TIOCSTOP = 0x2000746f TIOCSTSTAMP = 0x8008745a TIOCSWINSZ = 0x80087467 TIOCUCNTL = 0x80047466 + TIOCUCNTL_CBRK = 0x7a + TIOCUCNTL_SBRK = 0x7b TOSTOP = 0x400000 + UTIME_NOW = -0x2 + UTIME_OMIT = -0x1 VDISCARD = 0xf VDSUSP = 0xb VEOF = 0x0 @@ -1381,6 +1594,19 @@ const ( VKILL = 0x5 VLNEXT = 0xe VMIN = 0x10 + VM_ANONMIN = 0x7 + VM_LOADAVG = 0x2 + VM_MALLOC_CONF = 0xc + VM_MAXID = 0xd + VM_MAXSLP = 0xa + VM_METER = 0x1 + VM_NKMEMPAGES = 0x6 + VM_PSSTRINGS = 0x3 + VM_SWAPENCRYPT = 0x5 + VM_USPACE = 0xb + VM_UVMEXP = 0x4 + VM_VNODEMIN = 0x9 + VM_VTEXTMIN = 0x8 VQUIT = 0x9 VREPRINT = 0x6 VSTART = 0xc @@ -1394,6 +1620,7 @@ const ( WCOREFLAG = 0x80 WNOHANG = 0x1 WUNTRACED = 0x2 + XCASE = 0x1000000 ) // Errors @@ -1407,6 +1634,7 @@ const ( EALREADY = syscall.Errno(0x25) EAUTH = syscall.Errno(0x50) EBADF = syscall.Errno(0x9) + EBADMSG = syscall.Errno(0x5c) EBADRPC = syscall.Errno(0x48) EBUSY = syscall.Errno(0x10) ECANCELED = syscall.Errno(0x58) @@ -1433,7 +1661,7 @@ const ( EIPSEC = syscall.Errno(0x52) EISCONN = syscall.Errno(0x38) EISDIR = syscall.Errno(0x15) - ELAST = syscall.Errno(0x5b) + ELAST = syscall.Errno(0x5f) ELOOP = syscall.Errno(0x3e) EMEDIUMTYPE = syscall.Errno(0x56) EMFILE = syscall.Errno(0x18) @@ -1461,12 +1689,14 @@ const ( ENOTCONN = syscall.Errno(0x39) ENOTDIR = syscall.Errno(0x14) ENOTEMPTY = syscall.Errno(0x42) + ENOTRECOVERABLE = syscall.Errno(0x5d) ENOTSOCK = syscall.Errno(0x26) ENOTSUP = syscall.Errno(0x5b) ENOTTY = syscall.Errno(0x19) ENXIO = syscall.Errno(0x6) EOPNOTSUPP = syscall.Errno(0x2d) EOVERFLOW = syscall.Errno(0x57) + EOWNERDEAD = syscall.Errno(0x5e) EPERM = syscall.Errno(0x1) EPFNOSUPPORT = syscall.Errno(0x2e) EPIPE = syscall.Errno(0x20) @@ -1474,6 +1704,7 @@ const ( EPROCUNAVAIL = syscall.Errno(0x4c) EPROGMISMATCH = syscall.Errno(0x4b) EPROGUNAVAIL = syscall.Errno(0x4a) + EPROTO = syscall.Errno(0x5f) EPROTONOSUPPORT = syscall.Errno(0x2b) EPROTOTYPE = syscall.Errno(0x29) 
ERANGE = syscall.Errno(0x22) @@ -1570,7 +1801,7 @@ var errorList = [...]struct { {32, "EPIPE", "broken pipe"}, {33, "EDOM", "numerical argument out of domain"}, {34, "ERANGE", "result too large"}, - {35, "EWOULDBLOCK", "resource temporarily unavailable"}, + {35, "EAGAIN", "resource temporarily unavailable"}, {36, "EINPROGRESS", "operation now in progress"}, {37, "EALREADY", "operation already in progress"}, {38, "ENOTSOCK", "socket operation on non-socket"}, @@ -1626,7 +1857,11 @@ var errorList = [...]struct { {88, "ECANCELED", "operation canceled"}, {89, "EIDRM", "identifier removed"}, {90, "ENOMSG", "no message of desired type"}, - {91, "ELAST", "not supported"}, + {91, "ENOTSUP", "not supported"}, + {92, "EBADMSG", "bad message"}, + {93, "ENOTRECOVERABLE", "state not recoverable"}, + {94, "EOWNERDEAD", "previous owner died"}, + {95, "ELAST", "protocol error"}, } // Signal table @@ -1640,7 +1875,7 @@ var signalList = [...]struct { {3, "SIGQUIT", "quit"}, {4, "SIGILL", "illegal instruction"}, {5, "SIGTRAP", "trace/BPT trap"}, - {6, "SIGABRT", "abort trap"}, + {6, "SIGIOT", "abort trap"}, {7, "SIGEMT", "EMT trap"}, {8, "SIGFPE", "floating point exception"}, {9, "SIGKILL", "killed"}, @@ -1667,4 +1902,5 @@ var signalList = [...]struct { {30, "SIGUSR1", "user defined signal 1"}, {31, "SIGUSR2", "user defined signal 2"}, {32, "SIGTHR", "thread AST"}, + {28672, "SIGSTKSZ", "unknown signal"}, } diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zerrors_openbsd_arm64.go b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zerrors_openbsd_arm64.go index 90de7dfc33a3..ae16fe7542ae 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zerrors_openbsd_arm64.go +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zerrors_openbsd_arm64.go @@ -112,6 +112,12 @@ const ( BPF_FILDROP_CAPTURE = 0x1 BPF_FILDROP_DROP = 0x2 BPF_FILDROP_PASS = 0x0 + BPF_F_DIR_IN = 0x10 + BPF_F_DIR_MASK = 0x30 + BPF_F_DIR_OUT = 0x20 + BPF_F_DIR_SHIFT = 0x4 + BPF_F_FLOWID 
= 0x8 + BPF_F_PRI_MASK = 0x7 BPF_H = 0x8 BPF_IMM = 0x0 BPF_IND = 0x40 @@ -140,6 +146,7 @@ const ( BPF_OR = 0x40 BPF_RELEASE = 0x30bb6 BPF_RET = 0x6 + BPF_RND = 0xc0 BPF_RSH = 0x70 BPF_ST = 0x2 BPF_STX = 0x3 @@ -180,7 +187,65 @@ const ( CTL_KERN = 0x1 CTL_MAXNAME = 0xc CTL_NET = 0x4 + DIOCADDQUEUE = 0xc110445d + DIOCADDRULE = 0xcd604404 + DIOCADDSTATE = 0xc1084425 + DIOCCHANGERULE = 0xcd60441a + DIOCCLRIFFLAG = 0xc028445a + DIOCCLRSRCNODES = 0x20004455 + DIOCCLRSTATES = 0xc0e04412 + DIOCCLRSTATUS = 0xc0284416 + DIOCGETLIMIT = 0xc0084427 + DIOCGETQSTATS = 0xc1204460 + DIOCGETQUEUE = 0xc110445f + DIOCGETQUEUES = 0xc110445e + DIOCGETRULE = 0xcd604407 + DIOCGETRULES = 0xcd604406 + DIOCGETRULESET = 0xc444443b + DIOCGETRULESETS = 0xc444443a + DIOCGETSRCNODES = 0xc0104454 + DIOCGETSTATE = 0xc1084413 + DIOCGETSTATES = 0xc0104419 + DIOCGETSTATUS = 0xc1e84415 + DIOCGETSYNFLWATS = 0xc0084463 + DIOCGETTIMEOUT = 0xc008441e + DIOCIGETIFACES = 0xc0284457 + DIOCKILLSRCNODES = 0xc080445b + DIOCKILLSTATES = 0xc0e04429 + DIOCNATLOOK = 0xc0504417 + DIOCOSFPADD = 0xc088444f DIOCOSFPFLUSH = 0x2000444e + DIOCOSFPGET = 0xc0884450 + DIOCRADDADDRS = 0xc4504443 + DIOCRADDTABLES = 0xc450443d + DIOCRCLRADDRS = 0xc4504442 + DIOCRCLRASTATS = 0xc4504448 + DIOCRCLRTABLES = 0xc450443c + DIOCRCLRTSTATS = 0xc4504441 + DIOCRDELADDRS = 0xc4504444 + DIOCRDELTABLES = 0xc450443e + DIOCRGETADDRS = 0xc4504446 + DIOCRGETASTATS = 0xc4504447 + DIOCRGETTABLES = 0xc450443f + DIOCRGETTSTATS = 0xc4504440 + DIOCRINADEFINE = 0xc450444d + DIOCRSETADDRS = 0xc4504445 + DIOCRSETTFLAGS = 0xc450444a + DIOCRTSTADDRS = 0xc4504449 + DIOCSETDEBUG = 0xc0044418 + DIOCSETHOSTID = 0xc0044456 + DIOCSETIFFLAG = 0xc0284459 + DIOCSETLIMIT = 0xc0084428 + DIOCSETREASS = 0xc004445c + DIOCSETSTATUSIF = 0xc0284414 + DIOCSETSYNCOOKIES = 0xc0014462 + DIOCSETSYNFLWATS = 0xc0084461 + DIOCSETTIMEOUT = 0xc008441d + DIOCSTART = 0x20004401 + DIOCSTOP = 0x20004402 + DIOCXBEGIN = 0xc0104451 + DIOCXCOMMIT = 0xc0104452 + DIOCXROLLBACK = 0xc0104453 
DLT_ARCNET = 0x7 DLT_ATM_RFC1483 = 0xb DLT_AX25 = 0x3 @@ -243,6 +308,8 @@ const ( EMUL_ENABLED = 0x1 EMUL_NATIVE = 0x2 ENDRUNDISC = 0x9 + ETH64_8021_RSVD_MASK = 0xfffffffffff0 + ETH64_8021_RSVD_PREFIX = 0x180c2000000 ETHERMIN = 0x2e ETHERMTU = 0x5dc ETHERTYPE_8023 = 0x4 @@ -295,6 +362,7 @@ const ( ETHERTYPE_DN = 0x6003 ETHERTYPE_DOGFIGHT = 0x1989 ETHERTYPE_DSMD = 0x8039 + ETHERTYPE_EAPOL = 0x888e ETHERTYPE_ECMA = 0x803 ETHERTYPE_ENCRYPT = 0x803d ETHERTYPE_ES = 0x805d @@ -326,6 +394,7 @@ const ( ETHERTYPE_LLDP = 0x88cc ETHERTYPE_LOGICRAFT = 0x8148 ETHERTYPE_LOOPBACK = 0x9000 + ETHERTYPE_MACSEC = 0x88e5 ETHERTYPE_MATRA = 0x807a ETHERTYPE_MAX = 0xffff ETHERTYPE_MERIT = 0x807c @@ -354,15 +423,16 @@ const ( ETHERTYPE_NCD = 0x8149 ETHERTYPE_NESTAR = 0x8006 ETHERTYPE_NETBEUI = 0x8191 + ETHERTYPE_NHRP = 0x2001 ETHERTYPE_NOVELL = 0x8138 ETHERTYPE_NS = 0x600 ETHERTYPE_NSAT = 0x601 ETHERTYPE_NSCOMPAT = 0x807 + ETHERTYPE_NSH = 0x984f ETHERTYPE_NTRAILER = 0x10 ETHERTYPE_OS9 = 0x7007 ETHERTYPE_OS9NET = 0x7009 ETHERTYPE_PACER = 0x80c6 - ETHERTYPE_PAE = 0x888e ETHERTYPE_PBB = 0x88e7 ETHERTYPE_PCS = 0x4242 ETHERTYPE_PLANNING = 0x8044 @@ -445,10 +515,11 @@ const ( ETHER_VLAN_ENCAP_LEN = 0x4 EVFILT_AIO = -0x3 EVFILT_DEVICE = -0x8 + EVFILT_EXCEPT = -0x9 EVFILT_PROC = -0x5 EVFILT_READ = -0x1 EVFILT_SIGNAL = -0x6 - EVFILT_SYSCOUNT = 0x8 + EVFILT_SYSCOUNT = 0x9 EVFILT_TIMER = -0x7 EVFILT_VNODE = -0x4 EVFILT_WRITE = -0x2 @@ -470,7 +541,7 @@ const ( EV_FLAG1 = 0x2000 EV_ONESHOT = 0x10 EV_RECEIPT = 0x40 - EV_SYSFLAGS = 0xf000 + EV_SYSFLAGS = 0xf800 EXTA = 0x4b00 EXTB = 0x9600 EXTPROC = 0x800 @@ -736,6 +807,7 @@ const ( IFT_VOICEOVERCABLE = 0xc6 IFT_VOICEOVERFRAMERELAY = 0x99 IFT_VOICEOVERIP = 0x68 + IFT_WIREGUARD = 0xfb IFT_X213 = 0x5d IFT_X25 = 0x5 IFT_X25DDN = 0x4 @@ -801,9 +873,11 @@ const ( IPPROTO_RAW = 0xff IPPROTO_ROUTING = 0x2b IPPROTO_RSVP = 0x2e + IPPROTO_SCTP = 0x84 IPPROTO_TCP = 0x6 IPPROTO_TP = 0x1d IPPROTO_UDP = 0x11 + IPPROTO_UDPLITE = 0x88 IPV6_AUTH_LEVEL = 0x35 
IPV6_AUTOFLOWLABEL = 0x3b IPV6_CHECKSUM = 0x1a @@ -910,6 +984,9 @@ const ( IP_TTL = 0x4 ISIG = 0x80 ISTRIP = 0x20 + ITIMER_PROF = 0x2 + ITIMER_REAL = 0x0 + ITIMER_VIRTUAL = 0x1 IUCLC = 0x1000 IXANY = 0x800 IXOFF = 0x400 @@ -981,6 +1058,19 @@ const ( MNT_WAIT = 0x1 MNT_WANTRDWR = 0x2000000 MNT_WXALLOWED = 0x800 + MOUNT_AFS = "afs" + MOUNT_CD9660 = "cd9660" + MOUNT_EXT2FS = "ext2fs" + MOUNT_FFS = "ffs" + MOUNT_FUSEFS = "fuse" + MOUNT_MFS = "mfs" + MOUNT_MSDOS = "msdos" + MOUNT_NCPFS = "ncpfs" + MOUNT_NFS = "nfs" + MOUNT_NTFS = "ntfs" + MOUNT_TMPFS = "tmpfs" + MOUNT_UDF = "udf" + MOUNT_UFS = "ffs" MSG_BCAST = 0x100 MSG_CMSG_CLOEXEC = 0x800 MSG_CTRUNC = 0x20 @@ -993,6 +1083,7 @@ const ( MSG_PEEK = 0x2 MSG_TRUNC = 0x10 MSG_WAITALL = 0x40 + MSG_WAITFORONE = 0x1000 MS_ASYNC = 0x1 MS_INVALIDATE = 0x4 MS_SYNC = 0x2 @@ -1001,7 +1092,8 @@ const ( NET_RT_FLAGS = 0x2 NET_RT_IFLIST = 0x3 NET_RT_IFNAMES = 0x6 - NET_RT_MAXID = 0x7 + NET_RT_MAXID = 0x8 + NET_RT_SOURCE = 0x7 NET_RT_STATS = 0x4 NET_RT_TABLE = 0x5 NFDBITS = 0x20 @@ -1018,6 +1110,7 @@ const ( NOTE_FORK = 0x40000000 NOTE_LINK = 0x10 NOTE_LOWAT = 0x1 + NOTE_OOB = 0x4 NOTE_PCTRLMASK = 0xf0000000 NOTE_PDATAMASK = 0xfffff NOTE_RENAME = 0x20 @@ -1154,7 +1247,7 @@ const ( RTM_PROPOSAL = 0x13 RTM_REDIRECT = 0x6 RTM_RESOLVE = 0xb - RTM_RTTUNIT = 0xf4240 + RTM_SOURCE = 0x16 RTM_VERSION = 0x5 RTV_EXPIRE = 0x4 RTV_HOPCOUNT = 0x2 @@ -1172,6 +1265,9 @@ const ( RUSAGE_THREAD = 0x1 SCM_RIGHTS = 0x1 SCM_TIMESTAMP = 0x4 + SEEK_CUR = 0x1 + SEEK_END = 0x2 + SEEK_SET = 0x0 SHUT_RD = 0x0 SHUT_RDWR = 0x2 SHUT_WR = 0x1 @@ -1188,30 +1284,30 @@ const ( SIOCBRDGDELS = 0x80606942 SIOCBRDGFLUSH = 0x80606948 SIOCBRDGFRL = 0x808c694e - SIOCBRDGGCACHE = 0xc0186941 - SIOCBRDGGFD = 0xc0186952 - SIOCBRDGGHT = 0xc0186951 + SIOCBRDGGCACHE = 0xc0146941 + SIOCBRDGGFD = 0xc0146952 + SIOCBRDGGHT = 0xc0146951 SIOCBRDGGIFFLGS = 0xc060693e - SIOCBRDGGMA = 0xc0186953 + SIOCBRDGGMA = 0xc0146953 SIOCBRDGGPARAM = 0xc0406958 - SIOCBRDGGPRI = 0xc0186950 + SIOCBRDGGPRI 
= 0xc0146950 SIOCBRDGGRL = 0xc030694f - SIOCBRDGGTO = 0xc0186946 + SIOCBRDGGTO = 0xc0146946 SIOCBRDGIFS = 0xc0606942 SIOCBRDGRTS = 0xc0206943 SIOCBRDGSADDR = 0xc1286944 - SIOCBRDGSCACHE = 0x80186940 - SIOCBRDGSFD = 0x80186952 - SIOCBRDGSHT = 0x80186951 + SIOCBRDGSCACHE = 0x80146940 + SIOCBRDGSFD = 0x80146952 + SIOCBRDGSHT = 0x80146951 SIOCBRDGSIFCOST = 0x80606955 SIOCBRDGSIFFLGS = 0x8060693f SIOCBRDGSIFPRIO = 0x80606954 SIOCBRDGSIFPROT = 0x8060694a - SIOCBRDGSMA = 0x80186953 - SIOCBRDGSPRI = 0x80186950 - SIOCBRDGSPROTO = 0x8018695a - SIOCBRDGSTO = 0x80186945 - SIOCBRDGSTXHC = 0x80186959 + SIOCBRDGSMA = 0x80146953 + SIOCBRDGSPRI = 0x80146950 + SIOCBRDGSPROTO = 0x8014695a + SIOCBRDGSTO = 0x80146945 + SIOCBRDGSTXHC = 0x80146959 SIOCDELLABEL = 0x80206997 SIOCDELMULTI = 0x80206932 SIOCDIFADDR = 0x80206919 @@ -1264,6 +1360,7 @@ const ( SIOCGPWE3CTRLWORD = 0xc02069dc SIOCGPWE3FAT = 0xc02069dd SIOCGPWE3NEIGHBOR = 0xc21869de + SIOCGRXHPRIO = 0xc02069db SIOCGSPPPPARAMS = 0xc0206994 SIOCGTXHPRIO = 0xc02069c6 SIOCGUMBINFO = 0xc02069be @@ -1310,17 +1407,13 @@ const ( SIOCSPWE3CTRLWORD = 0x802069dc SIOCSPWE3FAT = 0x802069dd SIOCSPWE3NEIGHBOR = 0x821869de + SIOCSRXHPRIO = 0x802069db SIOCSSPPPPARAMS = 0x80206993 SIOCSTXHPRIO = 0x802069c5 SIOCSUMBPARAM = 0x802069bf SIOCSVH = 0xc02069f5 SIOCSVNETFLOWID = 0x802069c3 SIOCSVNETID = 0x802069a6 - SIOCSWGDPID = 0xc018695b - SIOCSWGMAXFLOW = 0xc0186960 - SIOCSWGMAXGROUP = 0xc018695d - SIOCSWSDPID = 0x8018695c - SIOCSWSPORTNO = 0xc060695f SOCK_CLOEXEC = 0x8000 SOCK_DGRAM = 0x2 SOCK_DNS = 0x1000 @@ -1335,6 +1428,7 @@ const ( SO_BINDANY = 0x1000 SO_BROADCAST = 0x20 SO_DEBUG = 0x1 + SO_DOMAIN = 0x1024 SO_DONTROUTE = 0x10 SO_ERROR = 0x1007 SO_KEEPALIVE = 0x8 @@ -1342,6 +1436,7 @@ const ( SO_NETPROC = 0x1020 SO_OOBINLINE = 0x100 SO_PEERCRED = 0x1022 + SO_PROTOCOL = 0x1025 SO_RCVBUF = 0x1002 SO_RCVLOWAT = 0x1004 SO_RCVTIMEO = 0x1006 @@ -1391,7 +1486,18 @@ const ( TCOFLUSH = 0x2 TCOOFF = 0x1 TCOON = 0x2 - TCP_MAXBURST = 0x4 + TCPOPT_EOL = 0x0 + 
TCPOPT_MAXSEG = 0x2 + TCPOPT_NOP = 0x1 + TCPOPT_SACK = 0x5 + TCPOPT_SACK_HDR = 0x1010500 + TCPOPT_SACK_PERMITTED = 0x4 + TCPOPT_SACK_PERMIT_HDR = 0x1010402 + TCPOPT_SIGNATURE = 0x13 + TCPOPT_TIMESTAMP = 0x8 + TCPOPT_TSTAMP_HDR = 0x101080a + TCPOPT_WINDOW = 0x3 + TCP_INFO = 0x9 TCP_MAXSEG = 0x2 TCP_MAXWIN = 0xffff TCP_MAX_SACK = 0x3 @@ -1400,6 +1506,7 @@ const ( TCP_MSS = 0x200 TCP_NODELAY = 0x1 TCP_NOPUSH = 0x10 + TCP_SACKHOLE_LIMIT = 0x80 TCP_SACK_ENABLE = 0x8 TCSAFLUSH = 0x2 TIMER_ABSTIME = 0x1 @@ -1768,7 +1875,7 @@ var signalList = [...]struct { {3, "SIGQUIT", "quit"}, {4, "SIGILL", "illegal instruction"}, {5, "SIGTRAP", "trace/BPT trap"}, - {6, "SIGABRT", "abort trap"}, + {6, "SIGIOT", "abort trap"}, {7, "SIGEMT", "EMT trap"}, {8, "SIGFPE", "floating point exception"}, {9, "SIGKILL", "killed"}, @@ -1795,4 +1902,5 @@ var signalList = [...]struct { {30, "SIGUSR1", "user defined signal 1"}, {31, "SIGUSR2", "user defined signal 2"}, {32, "SIGTHR", "thread AST"}, + {28672, "SIGSTKSZ", "unknown signal"}, } diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zerrors_openbsd_mips64.go b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zerrors_openbsd_mips64.go index f1154ff56f6c..03d90fe35501 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zerrors_openbsd_mips64.go +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zerrors_openbsd_mips64.go @@ -112,6 +112,12 @@ const ( BPF_FILDROP_CAPTURE = 0x1 BPF_FILDROP_DROP = 0x2 BPF_FILDROP_PASS = 0x0 + BPF_F_DIR_IN = 0x10 + BPF_F_DIR_MASK = 0x30 + BPF_F_DIR_OUT = 0x20 + BPF_F_DIR_SHIFT = 0x4 + BPF_F_FLOWID = 0x8 + BPF_F_PRI_MASK = 0x7 BPF_H = 0x8 BPF_IMM = 0x0 BPF_IND = 0x40 @@ -140,6 +146,7 @@ const ( BPF_OR = 0x40 BPF_RELEASE = 0x30bb6 BPF_RET = 0x6 + BPF_RND = 0xc0 BPF_RSH = 0x70 BPF_ST = 0x2 BPF_STX = 0x3 @@ -301,6 +308,8 @@ const ( EMUL_ENABLED = 0x1 EMUL_NATIVE = 0x2 ENDRUNDISC = 0x9 + ETH64_8021_RSVD_MASK = 0xfffffffffff0 + ETH64_8021_RSVD_PREFIX = 0x180c2000000 ETHERMIN = 0x2e 
ETHERMTU = 0x5dc ETHERTYPE_8023 = 0x4 @@ -353,6 +362,7 @@ const ( ETHERTYPE_DN = 0x6003 ETHERTYPE_DOGFIGHT = 0x1989 ETHERTYPE_DSMD = 0x8039 + ETHERTYPE_EAPOL = 0x888e ETHERTYPE_ECMA = 0x803 ETHERTYPE_ENCRYPT = 0x803d ETHERTYPE_ES = 0x805d @@ -413,15 +423,16 @@ const ( ETHERTYPE_NCD = 0x8149 ETHERTYPE_NESTAR = 0x8006 ETHERTYPE_NETBEUI = 0x8191 + ETHERTYPE_NHRP = 0x2001 ETHERTYPE_NOVELL = 0x8138 ETHERTYPE_NS = 0x600 ETHERTYPE_NSAT = 0x601 ETHERTYPE_NSCOMPAT = 0x807 + ETHERTYPE_NSH = 0x984f ETHERTYPE_NTRAILER = 0x10 ETHERTYPE_OS9 = 0x7007 ETHERTYPE_OS9NET = 0x7009 ETHERTYPE_PACER = 0x80c6 - ETHERTYPE_PAE = 0x888e ETHERTYPE_PBB = 0x88e7 ETHERTYPE_PCS = 0x4242 ETHERTYPE_PLANNING = 0x8044 @@ -504,10 +515,11 @@ const ( ETHER_VLAN_ENCAP_LEN = 0x4 EVFILT_AIO = -0x3 EVFILT_DEVICE = -0x8 + EVFILT_EXCEPT = -0x9 EVFILT_PROC = -0x5 EVFILT_READ = -0x1 EVFILT_SIGNAL = -0x6 - EVFILT_SYSCOUNT = 0x8 + EVFILT_SYSCOUNT = 0x9 EVFILT_TIMER = -0x7 EVFILT_VNODE = -0x4 EVFILT_WRITE = -0x2 @@ -529,7 +541,7 @@ const ( EV_FLAG1 = 0x2000 EV_ONESHOT = 0x10 EV_RECEIPT = 0x40 - EV_SYSFLAGS = 0xf000 + EV_SYSFLAGS = 0xf800 EXTA = 0x4b00 EXTB = 0x9600 EXTPROC = 0x800 @@ -795,6 +807,7 @@ const ( IFT_VOICEOVERCABLE = 0xc6 IFT_VOICEOVERFRAMERELAY = 0x99 IFT_VOICEOVERIP = 0x68 + IFT_WIREGUARD = 0xfb IFT_X213 = 0x5d IFT_X25 = 0x5 IFT_X25DDN = 0x4 @@ -860,6 +873,7 @@ const ( IPPROTO_RAW = 0xff IPPROTO_ROUTING = 0x2b IPPROTO_RSVP = 0x2e + IPPROTO_SCTP = 0x84 IPPROTO_TCP = 0x6 IPPROTO_TP = 0x1d IPPROTO_UDP = 0x11 @@ -970,6 +984,9 @@ const ( IP_TTL = 0x4 ISIG = 0x80 ISTRIP = 0x20 + ITIMER_PROF = 0x2 + ITIMER_REAL = 0x0 + ITIMER_VIRTUAL = 0x1 IUCLC = 0x1000 IXANY = 0x800 IXOFF = 0x400 @@ -1041,6 +1058,19 @@ const ( MNT_WAIT = 0x1 MNT_WANTRDWR = 0x2000000 MNT_WXALLOWED = 0x800 + MOUNT_AFS = "afs" + MOUNT_CD9660 = "cd9660" + MOUNT_EXT2FS = "ext2fs" + MOUNT_FFS = "ffs" + MOUNT_FUSEFS = "fuse" + MOUNT_MFS = "mfs" + MOUNT_MSDOS = "msdos" + MOUNT_NCPFS = "ncpfs" + MOUNT_NFS = "nfs" + MOUNT_NTFS = "ntfs" + 
MOUNT_TMPFS = "tmpfs" + MOUNT_UDF = "udf" + MOUNT_UFS = "ffs" MSG_BCAST = 0x100 MSG_CMSG_CLOEXEC = 0x800 MSG_CTRUNC = 0x20 @@ -1053,6 +1083,7 @@ const ( MSG_PEEK = 0x2 MSG_TRUNC = 0x10 MSG_WAITALL = 0x40 + MSG_WAITFORONE = 0x1000 MS_ASYNC = 0x1 MS_INVALIDATE = 0x4 MS_SYNC = 0x2 @@ -1061,7 +1092,8 @@ const ( NET_RT_FLAGS = 0x2 NET_RT_IFLIST = 0x3 NET_RT_IFNAMES = 0x6 - NET_RT_MAXID = 0x7 + NET_RT_MAXID = 0x8 + NET_RT_SOURCE = 0x7 NET_RT_STATS = 0x4 NET_RT_TABLE = 0x5 NFDBITS = 0x20 @@ -1078,6 +1110,7 @@ const ( NOTE_FORK = 0x40000000 NOTE_LINK = 0x10 NOTE_LOWAT = 0x1 + NOTE_OOB = 0x4 NOTE_PCTRLMASK = 0xf0000000 NOTE_PDATAMASK = 0xfffff NOTE_RENAME = 0x20 @@ -1214,7 +1247,7 @@ const ( RTM_PROPOSAL = 0x13 RTM_REDIRECT = 0x6 RTM_RESOLVE = 0xb - RTM_RTTUNIT = 0xf4240 + RTM_SOURCE = 0x16 RTM_VERSION = 0x5 RTV_EXPIRE = 0x4 RTV_HOPCOUNT = 0x2 @@ -1232,6 +1265,9 @@ const ( RUSAGE_THREAD = 0x1 SCM_RIGHTS = 0x1 SCM_TIMESTAMP = 0x4 + SEEK_CUR = 0x1 + SEEK_END = 0x2 + SEEK_SET = 0x0 SHUT_RD = 0x0 SHUT_RDWR = 0x2 SHUT_WR = 0x1 @@ -1248,30 +1284,30 @@ const ( SIOCBRDGDELS = 0x80606942 SIOCBRDGFLUSH = 0x80606948 SIOCBRDGFRL = 0x808c694e - SIOCBRDGGCACHE = 0xc0186941 - SIOCBRDGGFD = 0xc0186952 - SIOCBRDGGHT = 0xc0186951 + SIOCBRDGGCACHE = 0xc0146941 + SIOCBRDGGFD = 0xc0146952 + SIOCBRDGGHT = 0xc0146951 SIOCBRDGGIFFLGS = 0xc060693e - SIOCBRDGGMA = 0xc0186953 + SIOCBRDGGMA = 0xc0146953 SIOCBRDGGPARAM = 0xc0406958 - SIOCBRDGGPRI = 0xc0186950 + SIOCBRDGGPRI = 0xc0146950 SIOCBRDGGRL = 0xc030694f - SIOCBRDGGTO = 0xc0186946 + SIOCBRDGGTO = 0xc0146946 SIOCBRDGIFS = 0xc0606942 SIOCBRDGRTS = 0xc0206943 SIOCBRDGSADDR = 0xc1286944 - SIOCBRDGSCACHE = 0x80186940 - SIOCBRDGSFD = 0x80186952 - SIOCBRDGSHT = 0x80186951 + SIOCBRDGSCACHE = 0x80146940 + SIOCBRDGSFD = 0x80146952 + SIOCBRDGSHT = 0x80146951 SIOCBRDGSIFCOST = 0x80606955 SIOCBRDGSIFFLGS = 0x8060693f SIOCBRDGSIFPRIO = 0x80606954 SIOCBRDGSIFPROT = 0x8060694a - SIOCBRDGSMA = 0x80186953 - SIOCBRDGSPRI = 0x80186950 - SIOCBRDGSPROTO = 0x8018695a 
- SIOCBRDGSTO = 0x80186945 - SIOCBRDGSTXHC = 0x80186959 + SIOCBRDGSMA = 0x80146953 + SIOCBRDGSPRI = 0x80146950 + SIOCBRDGSPROTO = 0x8014695a + SIOCBRDGSTO = 0x80146945 + SIOCBRDGSTXHC = 0x80146959 SIOCDELLABEL = 0x80206997 SIOCDELMULTI = 0x80206932 SIOCDIFADDR = 0x80206919 @@ -1378,11 +1414,6 @@ const ( SIOCSVH = 0xc02069f5 SIOCSVNETFLOWID = 0x802069c3 SIOCSVNETID = 0x802069a6 - SIOCSWGDPID = 0xc018695b - SIOCSWGMAXFLOW = 0xc0186960 - SIOCSWGMAXGROUP = 0xc018695d - SIOCSWSDPID = 0x8018695c - SIOCSWSPORTNO = 0xc060695f SOCK_CLOEXEC = 0x8000 SOCK_DGRAM = 0x2 SOCK_DNS = 0x1000 @@ -1455,7 +1486,18 @@ const ( TCOFLUSH = 0x2 TCOOFF = 0x1 TCOON = 0x2 - TCP_MAXBURST = 0x4 + TCPOPT_EOL = 0x0 + TCPOPT_MAXSEG = 0x2 + TCPOPT_NOP = 0x1 + TCPOPT_SACK = 0x5 + TCPOPT_SACK_HDR = 0x1010500 + TCPOPT_SACK_PERMITTED = 0x4 + TCPOPT_SACK_PERMIT_HDR = 0x1010402 + TCPOPT_SIGNATURE = 0x13 + TCPOPT_TIMESTAMP = 0x8 + TCPOPT_TSTAMP_HDR = 0x101080a + TCPOPT_WINDOW = 0x3 + TCP_INFO = 0x9 TCP_MAXSEG = 0x2 TCP_MAXWIN = 0xffff TCP_MAX_SACK = 0x3 @@ -1833,7 +1875,7 @@ var signalList = [...]struct { {3, "SIGQUIT", "quit"}, {4, "SIGILL", "illegal instruction"}, {5, "SIGTRAP", "trace/BPT trap"}, - {6, "SIGABRT", "abort trap"}, + {6, "SIGIOT", "abort trap"}, {7, "SIGEMT", "EMT trap"}, {8, "SIGFPE", "floating point exception"}, {9, "SIGKILL", "killed"}, @@ -1860,4 +1902,5 @@ var signalList = [...]struct { {30, "SIGUSR1", "user defined signal 1"}, {31, "SIGUSR2", "user defined signal 2"}, {32, "SIGTHR", "thread AST"}, + {81920, "SIGSTKSZ", "unknown signal"}, } diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zptrace_armnn_linux.go b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zptrace_armnn_linux.go index bd001a6e1cc7..97f20ca282f5 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zptrace_armnn_linux.go +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zptrace_armnn_linux.go @@ -15,12 +15,12 @@ type PtraceRegsArm struct { // PtraceGetRegsArm fetches the 
registers used by arm binaries. func PtraceGetRegsArm(pid int, regsout *PtraceRegsArm) error { - return ptrace(PTRACE_GETREGS, pid, 0, uintptr(unsafe.Pointer(regsout))) + return ptracePtr(PTRACE_GETREGS, pid, 0, unsafe.Pointer(regsout)) } // PtraceSetRegsArm sets the registers used by arm binaries. func PtraceSetRegsArm(pid int, regs *PtraceRegsArm) error { - return ptrace(PTRACE_SETREGS, pid, 0, uintptr(unsafe.Pointer(regs))) + return ptracePtr(PTRACE_SETREGS, pid, 0, unsafe.Pointer(regs)) } // PtraceRegsArm64 is the registers used by arm64 binaries. @@ -33,10 +33,10 @@ type PtraceRegsArm64 struct { // PtraceGetRegsArm64 fetches the registers used by arm64 binaries. func PtraceGetRegsArm64(pid int, regsout *PtraceRegsArm64) error { - return ptrace(PTRACE_GETREGS, pid, 0, uintptr(unsafe.Pointer(regsout))) + return ptracePtr(PTRACE_GETREGS, pid, 0, unsafe.Pointer(regsout)) } // PtraceSetRegsArm64 sets the registers used by arm64 binaries. func PtraceSetRegsArm64(pid int, regs *PtraceRegsArm64) error { - return ptrace(PTRACE_SETREGS, pid, 0, uintptr(unsafe.Pointer(regs))) + return ptracePtr(PTRACE_SETREGS, pid, 0, unsafe.Pointer(regs)) } diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zptrace_linux_arm64.go b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zptrace_linux_arm64.go index 6cb6d688aa46..834d2856dd41 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zptrace_linux_arm64.go +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zptrace_linux_arm64.go @@ -7,11 +7,11 @@ import "unsafe" // PtraceGetRegSetArm64 fetches the registers used by arm64 binaries. 
func PtraceGetRegSetArm64(pid, addr int, regsout *PtraceRegsArm64) error { iovec := Iovec{(*byte)(unsafe.Pointer(regsout)), uint64(unsafe.Sizeof(*regsout))} - return ptrace(PTRACE_GETREGSET, pid, uintptr(addr), uintptr(unsafe.Pointer(&iovec))) + return ptracePtr(PTRACE_GETREGSET, pid, uintptr(addr), unsafe.Pointer(&iovec)) } // PtraceSetRegSetArm64 sets the registers used by arm64 binaries. func PtraceSetRegSetArm64(pid, addr int, regs *PtraceRegsArm64) error { iovec := Iovec{(*byte)(unsafe.Pointer(regs)), uint64(unsafe.Sizeof(*regs))} - return ptrace(PTRACE_SETREGSET, pid, uintptr(addr), uintptr(unsafe.Pointer(&iovec))) + return ptracePtr(PTRACE_SETREGSET, pid, uintptr(addr), unsafe.Pointer(&iovec)) } diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zptrace_mipsnn_linux.go b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zptrace_mipsnn_linux.go index c34d0639be3a..0b5f7943054b 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zptrace_mipsnn_linux.go +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zptrace_mipsnn_linux.go @@ -21,12 +21,12 @@ type PtraceRegsMips struct { // PtraceGetRegsMips fetches the registers used by mips binaries. func PtraceGetRegsMips(pid int, regsout *PtraceRegsMips) error { - return ptrace(PTRACE_GETREGS, pid, 0, uintptr(unsafe.Pointer(regsout))) + return ptracePtr(PTRACE_GETREGS, pid, 0, unsafe.Pointer(regsout)) } // PtraceSetRegsMips sets the registers used by mips binaries. func PtraceSetRegsMips(pid int, regs *PtraceRegsMips) error { - return ptrace(PTRACE_SETREGS, pid, 0, uintptr(unsafe.Pointer(regs))) + return ptracePtr(PTRACE_SETREGS, pid, 0, unsafe.Pointer(regs)) } // PtraceRegsMips64 is the registers used by mips64 binaries. @@ -42,10 +42,10 @@ type PtraceRegsMips64 struct { // PtraceGetRegsMips64 fetches the registers used by mips64 binaries. 
func PtraceGetRegsMips64(pid int, regsout *PtraceRegsMips64) error { - return ptrace(PTRACE_GETREGS, pid, 0, uintptr(unsafe.Pointer(regsout))) + return ptracePtr(PTRACE_GETREGS, pid, 0, unsafe.Pointer(regsout)) } // PtraceSetRegsMips64 sets the registers used by mips64 binaries. func PtraceSetRegsMips64(pid int, regs *PtraceRegsMips64) error { - return ptrace(PTRACE_SETREGS, pid, 0, uintptr(unsafe.Pointer(regs))) + return ptracePtr(PTRACE_SETREGS, pid, 0, unsafe.Pointer(regs)) } diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zptrace_mipsnnle_linux.go b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zptrace_mipsnnle_linux.go index 3ccf0c0c4a80..2807f7e64602 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zptrace_mipsnnle_linux.go +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zptrace_mipsnnle_linux.go @@ -21,12 +21,12 @@ type PtraceRegsMipsle struct { // PtraceGetRegsMipsle fetches the registers used by mipsle binaries. func PtraceGetRegsMipsle(pid int, regsout *PtraceRegsMipsle) error { - return ptrace(PTRACE_GETREGS, pid, 0, uintptr(unsafe.Pointer(regsout))) + return ptracePtr(PTRACE_GETREGS, pid, 0, unsafe.Pointer(regsout)) } // PtraceSetRegsMipsle sets the registers used by mipsle binaries. func PtraceSetRegsMipsle(pid int, regs *PtraceRegsMipsle) error { - return ptrace(PTRACE_SETREGS, pid, 0, uintptr(unsafe.Pointer(regs))) + return ptracePtr(PTRACE_SETREGS, pid, 0, unsafe.Pointer(regs)) } // PtraceRegsMips64le is the registers used by mips64le binaries. @@ -42,10 +42,10 @@ type PtraceRegsMips64le struct { // PtraceGetRegsMips64le fetches the registers used by mips64le binaries. func PtraceGetRegsMips64le(pid int, regsout *PtraceRegsMips64le) error { - return ptrace(PTRACE_GETREGS, pid, 0, uintptr(unsafe.Pointer(regsout))) + return ptracePtr(PTRACE_GETREGS, pid, 0, unsafe.Pointer(regsout)) } // PtraceSetRegsMips64le sets the registers used by mips64le binaries. 
func PtraceSetRegsMips64le(pid int, regs *PtraceRegsMips64le) error { - return ptrace(PTRACE_SETREGS, pid, 0, uintptr(unsafe.Pointer(regs))) + return ptracePtr(PTRACE_SETREGS, pid, 0, unsafe.Pointer(regs)) } diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zptrace_x86_linux.go b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zptrace_x86_linux.go index 7d65857004c4..281ea64e34ac 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zptrace_x86_linux.go +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zptrace_x86_linux.go @@ -31,12 +31,12 @@ type PtraceRegs386 struct { // PtraceGetRegs386 fetches the registers used by 386 binaries. func PtraceGetRegs386(pid int, regsout *PtraceRegs386) error { - return ptrace(PTRACE_GETREGS, pid, 0, uintptr(unsafe.Pointer(regsout))) + return ptracePtr(PTRACE_GETREGS, pid, 0, unsafe.Pointer(regsout)) } // PtraceSetRegs386 sets the registers used by 386 binaries. func PtraceSetRegs386(pid int, regs *PtraceRegs386) error { - return ptrace(PTRACE_SETREGS, pid, 0, uintptr(unsafe.Pointer(regs))) + return ptracePtr(PTRACE_SETREGS, pid, 0, unsafe.Pointer(regs)) } // PtraceRegsAmd64 is the registers used by amd64 binaries. @@ -72,10 +72,10 @@ type PtraceRegsAmd64 struct { // PtraceGetRegsAmd64 fetches the registers used by amd64 binaries. func PtraceGetRegsAmd64(pid int, regsout *PtraceRegsAmd64) error { - return ptrace(PTRACE_GETREGS, pid, 0, uintptr(unsafe.Pointer(regsout))) + return ptracePtr(PTRACE_GETREGS, pid, 0, unsafe.Pointer(regsout)) } // PtraceSetRegsAmd64 sets the registers used by amd64 binaries. 
func PtraceSetRegsAmd64(pid int, regs *PtraceRegsAmd64) error { - return ptrace(PTRACE_SETREGS, pid, 0, uintptr(unsafe.Pointer(regs))) + return ptracePtr(PTRACE_SETREGS, pid, 0, unsafe.Pointer(regs)) } diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_aix_ppc.go b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_aix_ppc.go index 870215d2c479..ef9dcd1bef8c 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_aix_ppc.go +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_aix_ppc.go @@ -223,6 +223,16 @@ func ioctl(fd int, req uint, arg uintptr) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func ioctlPtr(fd int, req uint, arg unsafe.Pointer) (err error) { + r0, er := C.ioctl(C.int(fd), C.int(req), C.uintptr_t(uintptr(arg))) + if r0 == -1 && er != nil { + err = er + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func FcntlInt(fd uintptr, cmd int, arg int) (r int, err error) { r0, er := C.fcntl(C.uintptr_t(fd), C.int(cmd), C.uintptr_t(arg)) r = int(r0) diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_aix_ppc64.go b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_aix_ppc64.go index a89b0bfa53ca..f86a94592348 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_aix_ppc64.go +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_aix_ppc64.go @@ -103,6 +103,16 @@ func ioctl(fd int, req uint, arg uintptr) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func ioctlPtr(fd int, req uint, arg unsafe.Pointer) (err error) { + _, e1 := callioctl_ptr(fd, int(req), arg) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func FcntlInt(fd uintptr, cmd int, arg int) (r int, err error) { r0, e1 := callfcntl(fd, cmd, uintptr(arg)) r = int(r0) diff --git 
a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_aix_ppc64_gc.go b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_aix_ppc64_gc.go index 2caa5adf9509..d32a84cae27c 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_aix_ppc64_gc.go +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_aix_ppc64_gc.go @@ -423,6 +423,13 @@ func callioctl(fd int, req int, arg uintptr) (r1 uintptr, e1 Errno) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func callioctl_ptr(fd int, req int, arg unsafe.Pointer) (r1 uintptr, e1 Errno) { + r1, _, e1 = syscall6(uintptr(unsafe.Pointer(&libc_ioctl)), 3, uintptr(fd), uintptr(req), uintptr(arg), 0, 0, 0) + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func callfcntl(fd uintptr, cmd int, arg uintptr) (r1 uintptr, e1 Errno) { r1, _, e1 = syscall6(uintptr(unsafe.Pointer(&libc_fcntl)), 3, fd, uintptr(cmd), arg, 0, 0, 0) return diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_aix_ppc64_gccgo.go b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_aix_ppc64_gccgo.go index 944a714b1ad4..d7d8baf819c0 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_aix_ppc64_gccgo.go +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_aix_ppc64_gccgo.go @@ -191,6 +191,14 @@ func callioctl(fd int, req int, arg uintptr) (r1 uintptr, e1 Errno) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func callioctl_ptr(fd int, req int, arg unsafe.Pointer) (r1 uintptr, e1 Errno) { + r1 = uintptr(C.ioctl(C.int(fd), C.int(req), C.uintptr_t(uintptr(arg)))) + e1 = syscall.GetErrno() + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func callfcntl(fd uintptr, cmd int, arg uintptr) (r1 uintptr, e1 Errno) { r1 = uintptr(C.fcntl(C.uintptr_t(fd), C.int(cmd), C.uintptr_t(arg))) e1 = syscall.GetErrno() diff --git 
a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_darwin_amd64.go b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_darwin_amd64.go index c2461c496797..a29ffdd566db 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_darwin_amd64.go +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_darwin_amd64.go @@ -725,6 +725,14 @@ func ioctl(fd int, req uint, arg uintptr) (err error) { return } +func ioctlPtr(fd int, req uint, arg unsafe.Pointer) (err error) { + _, _, e1 := syscall_syscall(libc_ioctl_trampoline_addr, uintptr(fd), uintptr(req), uintptr(arg)) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + var libc_ioctl_trampoline_addr uintptr //go:cgo_import_dynamic libc_ioctl ioctl "/usr/lib/libSystem.B.dylib" @@ -2502,6 +2510,14 @@ func ptrace1(request int, pid int, addr uintptr, data uintptr) (err error) { return } +func ptrace1Ptr(request int, pid int, addr uintptr, data unsafe.Pointer) (err error) { + _, _, e1 := syscall_syscall6(libc_ptrace_trampoline_addr, uintptr(request), uintptr(pid), addr, uintptr(data), 0, 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + var libc_ptrace_trampoline_addr uintptr //go:cgo_import_dynamic libc_ptrace ptrace "/usr/lib/libSystem.B.dylib" diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_darwin_arm64.go b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_darwin_arm64.go index 26a0fdc505bb..2fd4590bb786 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_darwin_arm64.go +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_darwin_arm64.go @@ -725,6 +725,14 @@ func ioctl(fd int, req uint, arg uintptr) (err error) { return } +func ioctlPtr(fd int, req uint, arg unsafe.Pointer) (err error) { + _, _, e1 := syscall_syscall(libc_ioctl_trampoline_addr, uintptr(fd), uintptr(req), uintptr(arg)) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + var libc_ioctl_trampoline_addr uintptr 
//go:cgo_import_dynamic libc_ioctl ioctl "/usr/lib/libSystem.B.dylib" @@ -2502,6 +2510,14 @@ func ptrace1(request int, pid int, addr uintptr, data uintptr) (err error) { return } +func ptrace1Ptr(request int, pid int, addr uintptr, data unsafe.Pointer) (err error) { + _, _, e1 := syscall_syscall6(libc_ptrace_trampoline_addr, uintptr(request), uintptr(pid), addr, uintptr(data), 0, 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + var libc_ptrace_trampoline_addr uintptr //go:cgo_import_dynamic libc_ptrace ptrace "/usr/lib/libSystem.B.dylib" diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_dragonfly_amd64.go b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_dragonfly_amd64.go index 1b6eedfa6115..3b85134707ef 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_dragonfly_amd64.go +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_dragonfly_amd64.go @@ -436,6 +436,16 @@ func ioctl(fd int, req uint, arg uintptr) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func ioctlPtr(fd int, req uint, arg unsafe.Pointer) (err error) { + _, _, e1 := Syscall(SYS_IOCTL, uintptr(fd), uintptr(req), uintptr(arg)) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func sysctl(mib []_C_int, old *byte, oldlen *uintptr, new *byte, newlen uintptr) (err error) { var _p0 unsafe.Pointer if len(mib) > 0 { @@ -552,6 +562,16 @@ func Chroot(path string) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func ClockGettime(clockid int32, time *Timespec) (err error) { + _, _, e1 := Syscall(SYS_CLOCK_GETTIME, uintptr(clockid), uintptr(unsafe.Pointer(time)), 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func Close(fd int) (err error) { _, _, e1 := Syscall(SYS_CLOSE, uintptr(fd), 0, 0) if e1 != 0 { diff --git 
a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_freebsd_386.go b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_freebsd_386.go index 039c4aa06c2c..1129065624e5 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_freebsd_386.go +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_freebsd_386.go @@ -388,6 +388,16 @@ func ioctl(fd int, req uint, arg uintptr) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func ioctlPtr(fd int, req uint, arg unsafe.Pointer) (err error) { + _, _, e1 := Syscall(SYS_IOCTL, uintptr(fd), uintptr(req), uintptr(arg)) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func sysctl(mib []_C_int, old *byte, oldlen *uintptr, new *byte, newlen uintptr) (err error) { var _p0 unsafe.Pointer if len(mib) > 0 { @@ -414,6 +424,16 @@ func ptrace(request int, pid int, addr uintptr, data int) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func ptracePtr(request int, pid int, addr unsafe.Pointer, data int) (err error) { + _, _, e1 := Syscall6(SYS_PTRACE, uintptr(request), uintptr(pid), uintptr(addr), uintptr(data), 0, 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func Access(path string, mode uint32) (err error) { var _p0 *byte _p0, err = BytePtrFromString(path) @@ -544,6 +564,16 @@ func Chroot(path string) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func ClockGettime(clockid int32, time *Timespec) (err error) { + _, _, e1 := Syscall(SYS_CLOCK_GETTIME, uintptr(clockid), uintptr(unsafe.Pointer(time)), 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func Close(fd int) (err error) { _, _, e1 := Syscall(SYS_CLOSE, uintptr(fd), 0, 0) if e1 != 0 { diff --git 
a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_freebsd_amd64.go b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_freebsd_amd64.go index 0535d3cfdf2b..55f5abfe599c 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_freebsd_amd64.go +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_freebsd_amd64.go @@ -388,6 +388,16 @@ func ioctl(fd int, req uint, arg uintptr) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func ioctlPtr(fd int, req uint, arg unsafe.Pointer) (err error) { + _, _, e1 := Syscall(SYS_IOCTL, uintptr(fd), uintptr(req), uintptr(arg)) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func sysctl(mib []_C_int, old *byte, oldlen *uintptr, new *byte, newlen uintptr) (err error) { var _p0 unsafe.Pointer if len(mib) > 0 { @@ -414,6 +424,16 @@ func ptrace(request int, pid int, addr uintptr, data int) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func ptracePtr(request int, pid int, addr unsafe.Pointer, data int) (err error) { + _, _, e1 := Syscall6(SYS_PTRACE, uintptr(request), uintptr(pid), uintptr(addr), uintptr(data), 0, 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func Access(path string, mode uint32) (err error) { var _p0 *byte _p0, err = BytePtrFromString(path) @@ -544,6 +564,16 @@ func Chroot(path string) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func ClockGettime(clockid int32, time *Timespec) (err error) { + _, _, e1 := Syscall(SYS_CLOCK_GETTIME, uintptr(clockid), uintptr(unsafe.Pointer(time)), 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func Close(fd int) (err error) { _, _, e1 := Syscall(SYS_CLOSE, uintptr(fd), 0, 0) if e1 != 0 { diff --git 
a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_freebsd_arm.go b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_freebsd_arm.go index 1018b5221704..d39651c2b586 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_freebsd_arm.go +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_freebsd_arm.go @@ -388,6 +388,16 @@ func ioctl(fd int, req uint, arg uintptr) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func ioctlPtr(fd int, req uint, arg unsafe.Pointer) (err error) { + _, _, e1 := Syscall(SYS_IOCTL, uintptr(fd), uintptr(req), uintptr(arg)) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func sysctl(mib []_C_int, old *byte, oldlen *uintptr, new *byte, newlen uintptr) (err error) { var _p0 unsafe.Pointer if len(mib) > 0 { @@ -414,6 +424,16 @@ func ptrace(request int, pid int, addr uintptr, data int) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func ptracePtr(request int, pid int, addr unsafe.Pointer, data int) (err error) { + _, _, e1 := Syscall6(SYS_PTRACE, uintptr(request), uintptr(pid), uintptr(addr), uintptr(data), 0, 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func Access(path string, mode uint32) (err error) { var _p0 *byte _p0, err = BytePtrFromString(path) @@ -544,6 +564,16 @@ func Chroot(path string) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func ClockGettime(clockid int32, time *Timespec) (err error) { + _, _, e1 := Syscall(SYS_CLOCK_GETTIME, uintptr(clockid), uintptr(unsafe.Pointer(time)), 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func Close(fd int) (err error) { _, _, e1 := Syscall(SYS_CLOSE, uintptr(fd), 0, 0) if e1 != 0 { diff --git 
a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_freebsd_arm64.go b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_freebsd_arm64.go index 3802f4b379a5..ddb740868011 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_freebsd_arm64.go +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_freebsd_arm64.go @@ -388,6 +388,16 @@ func ioctl(fd int, req uint, arg uintptr) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func ioctlPtr(fd int, req uint, arg unsafe.Pointer) (err error) { + _, _, e1 := Syscall(SYS_IOCTL, uintptr(fd), uintptr(req), uintptr(arg)) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func sysctl(mib []_C_int, old *byte, oldlen *uintptr, new *byte, newlen uintptr) (err error) { var _p0 unsafe.Pointer if len(mib) > 0 { @@ -414,6 +424,16 @@ func ptrace(request int, pid int, addr uintptr, data int) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func ptracePtr(request int, pid int, addr unsafe.Pointer, data int) (err error) { + _, _, e1 := Syscall6(SYS_PTRACE, uintptr(request), uintptr(pid), uintptr(addr), uintptr(data), 0, 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func Access(path string, mode uint32) (err error) { var _p0 *byte _p0, err = BytePtrFromString(path) @@ -544,6 +564,16 @@ func Chroot(path string) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func ClockGettime(clockid int32, time *Timespec) (err error) { + _, _, e1 := Syscall(SYS_CLOCK_GETTIME, uintptr(clockid), uintptr(unsafe.Pointer(time)), 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func Close(fd int) (err error) { _, _, e1 := Syscall(SYS_CLOSE, uintptr(fd), 0, 0) if e1 != 0 { diff --git 
a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_freebsd_riscv64.go b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_freebsd_riscv64.go index 8a2db7da9f3e..09a53a616c05 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_freebsd_riscv64.go +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_freebsd_riscv64.go @@ -388,6 +388,16 @@ func ioctl(fd int, req uint, arg uintptr) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func ioctlPtr(fd int, req uint, arg unsafe.Pointer) (err error) { + _, _, e1 := Syscall(SYS_IOCTL, uintptr(fd), uintptr(req), uintptr(arg)) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func sysctl(mib []_C_int, old *byte, oldlen *uintptr, new *byte, newlen uintptr) (err error) { var _p0 unsafe.Pointer if len(mib) > 0 { @@ -414,6 +424,16 @@ func ptrace(request int, pid int, addr uintptr, data int) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func ptracePtr(request int, pid int, addr unsafe.Pointer, data int) (err error) { + _, _, e1 := Syscall6(SYS_PTRACE, uintptr(request), uintptr(pid), uintptr(addr), uintptr(data), 0, 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func Access(path string, mode uint32) (err error) { var _p0 *byte _p0, err = BytePtrFromString(path) @@ -544,6 +564,16 @@ func Chroot(path string) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func ClockGettime(clockid int32, time *Timespec) (err error) { + _, _, e1 := Syscall(SYS_CLOCK_GETTIME, uintptr(clockid), uintptr(unsafe.Pointer(time)), 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func Close(fd int) (err error) { _, _, e1 := Syscall(SYS_CLOSE, uintptr(fd), 0, 0) if e1 != 0 { diff --git 
a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_linux.go b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_linux.go index 293cf36804e9..430cb24de7e0 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_linux.go +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_linux.go @@ -379,6 +379,16 @@ func ptrace(request int, pid int, addr uintptr, data uintptr) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func ptracePtr(request int, pid int, addr uintptr, data unsafe.Pointer) (err error) { + _, _, e1 := Syscall6(SYS_PTRACE, uintptr(request), uintptr(pid), uintptr(addr), uintptr(data), 0, 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func reboot(magic1 uint, magic2 uint, cmd int, arg string) (err error) { var _p0 *byte _p0, err = BytePtrFromString(arg) @@ -537,6 +547,17 @@ func Chroot(path string) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func ClockAdjtime(clockid int32, buf *Timex) (state int, err error) { + r0, _, e1 := Syscall(SYS_CLOCK_ADJTIME, uintptr(clockid), uintptr(unsafe.Pointer(buf)), 0) + state = int(r0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func ClockGetres(clockid int32, res *Timespec) (err error) { _, _, e1 := Syscall(SYS_CLOCK_GETRES, uintptr(clockid), uintptr(unsafe.Pointer(res)), 0) if e1 != 0 { diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_netbsd_386.go b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_netbsd_386.go index 4af561a48d8c..8e1d9c8f6663 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_netbsd_386.go +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_netbsd_386.go @@ -405,6 +405,16 @@ func ioctl(fd int, req uint, arg uintptr) (err error) { // THIS FILE IS 
GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func ioctlPtr(fd int, req uint, arg unsafe.Pointer) (err error) { + _, _, e1 := Syscall(SYS_IOCTL, uintptr(fd), uintptr(req), uintptr(arg)) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func sysctl(mib []_C_int, old *byte, oldlen *uintptr, new *byte, newlen uintptr) (err error) { var _p0 unsafe.Pointer if len(mib) > 0 { @@ -521,6 +531,16 @@ func Chroot(path string) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func ClockGettime(clockid int32, time *Timespec) (err error) { + _, _, e1 := Syscall(SYS_CLOCK_GETTIME, uintptr(clockid), uintptr(unsafe.Pointer(time)), 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func Close(fd int) (err error) { _, _, e1 := Syscall(SYS_CLOSE, uintptr(fd), 0, 0) if e1 != 0 { diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_netbsd_amd64.go b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_netbsd_amd64.go index 3b90e9448add..21c6950400e3 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_netbsd_amd64.go +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_netbsd_amd64.go @@ -405,6 +405,16 @@ func ioctl(fd int, req uint, arg uintptr) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func ioctlPtr(fd int, req uint, arg unsafe.Pointer) (err error) { + _, _, e1 := Syscall(SYS_IOCTL, uintptr(fd), uintptr(req), uintptr(arg)) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func sysctl(mib []_C_int, old *byte, oldlen *uintptr, new *byte, newlen uintptr) (err error) { var _p0 unsafe.Pointer if len(mib) > 0 { @@ -521,6 +531,16 @@ func Chroot(path string) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func 
ClockGettime(clockid int32, time *Timespec) (err error) { + _, _, e1 := Syscall(SYS_CLOCK_GETTIME, uintptr(clockid), uintptr(unsafe.Pointer(time)), 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func Close(fd int) (err error) { _, _, e1 := Syscall(SYS_CLOSE, uintptr(fd), 0, 0) if e1 != 0 { diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_netbsd_arm.go b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_netbsd_arm.go index 890f4ccd131c..298168f90a17 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_netbsd_arm.go +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_netbsd_arm.go @@ -405,6 +405,16 @@ func ioctl(fd int, req uint, arg uintptr) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func ioctlPtr(fd int, req uint, arg unsafe.Pointer) (err error) { + _, _, e1 := Syscall(SYS_IOCTL, uintptr(fd), uintptr(req), uintptr(arg)) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func sysctl(mib []_C_int, old *byte, oldlen *uintptr, new *byte, newlen uintptr) (err error) { var _p0 unsafe.Pointer if len(mib) > 0 { @@ -521,6 +531,16 @@ func Chroot(path string) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func ClockGettime(clockid int32, time *Timespec) (err error) { + _, _, e1 := Syscall(SYS_CLOCK_GETTIME, uintptr(clockid), uintptr(unsafe.Pointer(time)), 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func Close(fd int) (err error) { _, _, e1 := Syscall(SYS_CLOSE, uintptr(fd), 0, 0) if e1 != 0 { diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_netbsd_arm64.go b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_netbsd_arm64.go index c79f071fc6a8..68b8bd492fec 100644 --- 
a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_netbsd_arm64.go +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_netbsd_arm64.go @@ -405,6 +405,16 @@ func ioctl(fd int, req uint, arg uintptr) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func ioctlPtr(fd int, req uint, arg unsafe.Pointer) (err error) { + _, _, e1 := Syscall(SYS_IOCTL, uintptr(fd), uintptr(req), uintptr(arg)) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func sysctl(mib []_C_int, old *byte, oldlen *uintptr, new *byte, newlen uintptr) (err error) { var _p0 unsafe.Pointer if len(mib) > 0 { @@ -521,6 +531,16 @@ func Chroot(path string) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func ClockGettime(clockid int32, time *Timespec) (err error) { + _, _, e1 := Syscall(SYS_CLOCK_GETTIME, uintptr(clockid), uintptr(unsafe.Pointer(time)), 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func Close(fd int) (err error) { _, _, e1 := Syscall(SYS_CLOSE, uintptr(fd), 0, 0) if e1 != 0 { diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_openbsd_386.go b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_openbsd_386.go index 2925fe0a7b73..0b0f910e1ab9 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_openbsd_386.go +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_openbsd_386.go @@ -527,6 +527,14 @@ func ioctl(fd int, req uint, arg uintptr) (err error) { return } +func ioctlPtr(fd int, req uint, arg unsafe.Pointer) (err error) { + _, _, e1 := syscall_syscall(libc_ioctl_trampoline_addr, uintptr(fd), uintptr(req), uintptr(arg)) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + var libc_ioctl_trampoline_addr uintptr //go:cgo_import_dynamic libc_ioctl ioctl "libc.so" @@ -696,6 +704,20 @@ 
var libc_chroot_trampoline_addr uintptr // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func ClockGettime(clockid int32, time *Timespec) (err error) { + _, _, e1 := syscall_syscall(libc_clock_gettime_trampoline_addr, uintptr(clockid), uintptr(unsafe.Pointer(time)), 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +var libc_clock_gettime_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_clock_gettime clock_gettime "libc.so" + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func Close(fd int) (err error) { _, _, e1 := syscall_syscall(libc_close_trampoline_addr, uintptr(fd), 0, 0) if e1 != 0 { diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_openbsd_386.s b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_openbsd_386.s index 75eb2f5f3f72..087444250c9a 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_openbsd_386.s +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_openbsd_386.s @@ -5,792 +5,665 @@ TEXT libc_getgroups_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_getgroups(SB) - GLOBL ·libc_getgroups_trampoline_addr(SB), RODATA, $4 DATA ·libc_getgroups_trampoline_addr(SB)/4, $libc_getgroups_trampoline<>(SB) TEXT libc_setgroups_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_setgroups(SB) - GLOBL ·libc_setgroups_trampoline_addr(SB), RODATA, $4 DATA ·libc_setgroups_trampoline_addr(SB)/4, $libc_setgroups_trampoline<>(SB) TEXT libc_wait4_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_wait4(SB) - GLOBL ·libc_wait4_trampoline_addr(SB), RODATA, $4 DATA ·libc_wait4_trampoline_addr(SB)/4, $libc_wait4_trampoline<>(SB) TEXT libc_accept_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_accept(SB) - GLOBL ·libc_accept_trampoline_addr(SB), RODATA, $4 DATA ·libc_accept_trampoline_addr(SB)/4, $libc_accept_trampoline<>(SB) TEXT libc_bind_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_bind(SB) - GLOBL ·libc_bind_trampoline_addr(SB), RODATA, $4 DATA ·libc_bind_trampoline_addr(SB)/4, 
$libc_bind_trampoline<>(SB) TEXT libc_connect_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_connect(SB) - GLOBL ·libc_connect_trampoline_addr(SB), RODATA, $4 DATA ·libc_connect_trampoline_addr(SB)/4, $libc_connect_trampoline<>(SB) TEXT libc_socket_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_socket(SB) - GLOBL ·libc_socket_trampoline_addr(SB), RODATA, $4 DATA ·libc_socket_trampoline_addr(SB)/4, $libc_socket_trampoline<>(SB) TEXT libc_getsockopt_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_getsockopt(SB) - GLOBL ·libc_getsockopt_trampoline_addr(SB), RODATA, $4 DATA ·libc_getsockopt_trampoline_addr(SB)/4, $libc_getsockopt_trampoline<>(SB) TEXT libc_setsockopt_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_setsockopt(SB) - GLOBL ·libc_setsockopt_trampoline_addr(SB), RODATA, $4 DATA ·libc_setsockopt_trampoline_addr(SB)/4, $libc_setsockopt_trampoline<>(SB) TEXT libc_getpeername_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_getpeername(SB) - GLOBL ·libc_getpeername_trampoline_addr(SB), RODATA, $4 DATA ·libc_getpeername_trampoline_addr(SB)/4, $libc_getpeername_trampoline<>(SB) TEXT libc_getsockname_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_getsockname(SB) - GLOBL ·libc_getsockname_trampoline_addr(SB), RODATA, $4 DATA ·libc_getsockname_trampoline_addr(SB)/4, $libc_getsockname_trampoline<>(SB) TEXT libc_shutdown_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_shutdown(SB) - GLOBL ·libc_shutdown_trampoline_addr(SB), RODATA, $4 DATA ·libc_shutdown_trampoline_addr(SB)/4, $libc_shutdown_trampoline<>(SB) TEXT libc_socketpair_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_socketpair(SB) - GLOBL ·libc_socketpair_trampoline_addr(SB), RODATA, $4 DATA ·libc_socketpair_trampoline_addr(SB)/4, $libc_socketpair_trampoline<>(SB) TEXT libc_recvfrom_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_recvfrom(SB) - GLOBL ·libc_recvfrom_trampoline_addr(SB), RODATA, $4 DATA ·libc_recvfrom_trampoline_addr(SB)/4, $libc_recvfrom_trampoline<>(SB) TEXT libc_sendto_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_sendto(SB) - GLOBL ·libc_sendto_trampoline_addr(SB), RODATA, 
$4 DATA ·libc_sendto_trampoline_addr(SB)/4, $libc_sendto_trampoline<>(SB) TEXT libc_recvmsg_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_recvmsg(SB) - GLOBL ·libc_recvmsg_trampoline_addr(SB), RODATA, $4 DATA ·libc_recvmsg_trampoline_addr(SB)/4, $libc_recvmsg_trampoline<>(SB) TEXT libc_sendmsg_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_sendmsg(SB) - GLOBL ·libc_sendmsg_trampoline_addr(SB), RODATA, $4 DATA ·libc_sendmsg_trampoline_addr(SB)/4, $libc_sendmsg_trampoline<>(SB) TEXT libc_kevent_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_kevent(SB) - GLOBL ·libc_kevent_trampoline_addr(SB), RODATA, $4 DATA ·libc_kevent_trampoline_addr(SB)/4, $libc_kevent_trampoline<>(SB) TEXT libc_utimes_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_utimes(SB) - GLOBL ·libc_utimes_trampoline_addr(SB), RODATA, $4 DATA ·libc_utimes_trampoline_addr(SB)/4, $libc_utimes_trampoline<>(SB) TEXT libc_futimes_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_futimes(SB) - GLOBL ·libc_futimes_trampoline_addr(SB), RODATA, $4 DATA ·libc_futimes_trampoline_addr(SB)/4, $libc_futimes_trampoline<>(SB) TEXT libc_poll_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_poll(SB) - GLOBL ·libc_poll_trampoline_addr(SB), RODATA, $4 DATA ·libc_poll_trampoline_addr(SB)/4, $libc_poll_trampoline<>(SB) TEXT libc_madvise_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_madvise(SB) - GLOBL ·libc_madvise_trampoline_addr(SB), RODATA, $4 DATA ·libc_madvise_trampoline_addr(SB)/4, $libc_madvise_trampoline<>(SB) TEXT libc_mlock_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_mlock(SB) - GLOBL ·libc_mlock_trampoline_addr(SB), RODATA, $4 DATA ·libc_mlock_trampoline_addr(SB)/4, $libc_mlock_trampoline<>(SB) TEXT libc_mlockall_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_mlockall(SB) - GLOBL ·libc_mlockall_trampoline_addr(SB), RODATA, $4 DATA ·libc_mlockall_trampoline_addr(SB)/4, $libc_mlockall_trampoline<>(SB) TEXT libc_mprotect_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_mprotect(SB) - GLOBL ·libc_mprotect_trampoline_addr(SB), RODATA, $4 DATA ·libc_mprotect_trampoline_addr(SB)/4, 
$libc_mprotect_trampoline<>(SB) TEXT libc_msync_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_msync(SB) - GLOBL ·libc_msync_trampoline_addr(SB), RODATA, $4 DATA ·libc_msync_trampoline_addr(SB)/4, $libc_msync_trampoline<>(SB) TEXT libc_munlock_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_munlock(SB) - GLOBL ·libc_munlock_trampoline_addr(SB), RODATA, $4 DATA ·libc_munlock_trampoline_addr(SB)/4, $libc_munlock_trampoline<>(SB) TEXT libc_munlockall_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_munlockall(SB) - GLOBL ·libc_munlockall_trampoline_addr(SB), RODATA, $4 DATA ·libc_munlockall_trampoline_addr(SB)/4, $libc_munlockall_trampoline<>(SB) TEXT libc_pipe2_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_pipe2(SB) - GLOBL ·libc_pipe2_trampoline_addr(SB), RODATA, $4 DATA ·libc_pipe2_trampoline_addr(SB)/4, $libc_pipe2_trampoline<>(SB) TEXT libc_getdents_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_getdents(SB) - GLOBL ·libc_getdents_trampoline_addr(SB), RODATA, $4 DATA ·libc_getdents_trampoline_addr(SB)/4, $libc_getdents_trampoline<>(SB) TEXT libc_getcwd_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_getcwd(SB) - GLOBL ·libc_getcwd_trampoline_addr(SB), RODATA, $4 DATA ·libc_getcwd_trampoline_addr(SB)/4, $libc_getcwd_trampoline<>(SB) TEXT libc_ioctl_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_ioctl(SB) - GLOBL ·libc_ioctl_trampoline_addr(SB), RODATA, $4 DATA ·libc_ioctl_trampoline_addr(SB)/4, $libc_ioctl_trampoline<>(SB) TEXT libc_sysctl_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_sysctl(SB) - GLOBL ·libc_sysctl_trampoline_addr(SB), RODATA, $4 DATA ·libc_sysctl_trampoline_addr(SB)/4, $libc_sysctl_trampoline<>(SB) TEXT libc_ppoll_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_ppoll(SB) - GLOBL ·libc_ppoll_trampoline_addr(SB), RODATA, $4 DATA ·libc_ppoll_trampoline_addr(SB)/4, $libc_ppoll_trampoline<>(SB) TEXT libc_access_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_access(SB) - GLOBL ·libc_access_trampoline_addr(SB), RODATA, $4 DATA ·libc_access_trampoline_addr(SB)/4, $libc_access_trampoline<>(SB) TEXT 
libc_adjtime_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_adjtime(SB) - GLOBL ·libc_adjtime_trampoline_addr(SB), RODATA, $4 DATA ·libc_adjtime_trampoline_addr(SB)/4, $libc_adjtime_trampoline<>(SB) TEXT libc_chdir_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_chdir(SB) - GLOBL ·libc_chdir_trampoline_addr(SB), RODATA, $4 DATA ·libc_chdir_trampoline_addr(SB)/4, $libc_chdir_trampoline<>(SB) TEXT libc_chflags_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_chflags(SB) - GLOBL ·libc_chflags_trampoline_addr(SB), RODATA, $4 DATA ·libc_chflags_trampoline_addr(SB)/4, $libc_chflags_trampoline<>(SB) TEXT libc_chmod_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_chmod(SB) - GLOBL ·libc_chmod_trampoline_addr(SB), RODATA, $4 DATA ·libc_chmod_trampoline_addr(SB)/4, $libc_chmod_trampoline<>(SB) TEXT libc_chown_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_chown(SB) - GLOBL ·libc_chown_trampoline_addr(SB), RODATA, $4 DATA ·libc_chown_trampoline_addr(SB)/4, $libc_chown_trampoline<>(SB) TEXT libc_chroot_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_chroot(SB) - GLOBL ·libc_chroot_trampoline_addr(SB), RODATA, $4 DATA ·libc_chroot_trampoline_addr(SB)/4, $libc_chroot_trampoline<>(SB) +TEXT libc_clock_gettime_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_clock_gettime(SB) +GLOBL ·libc_clock_gettime_trampoline_addr(SB), RODATA, $4 +DATA ·libc_clock_gettime_trampoline_addr(SB)/4, $libc_clock_gettime_trampoline<>(SB) + TEXT libc_close_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_close(SB) - GLOBL ·libc_close_trampoline_addr(SB), RODATA, $4 DATA ·libc_close_trampoline_addr(SB)/4, $libc_close_trampoline<>(SB) TEXT libc_dup_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_dup(SB) - GLOBL ·libc_dup_trampoline_addr(SB), RODATA, $4 DATA ·libc_dup_trampoline_addr(SB)/4, $libc_dup_trampoline<>(SB) TEXT libc_dup2_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_dup2(SB) - GLOBL ·libc_dup2_trampoline_addr(SB), RODATA, $4 DATA ·libc_dup2_trampoline_addr(SB)/4, $libc_dup2_trampoline<>(SB) TEXT libc_dup3_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_dup3(SB) - GLOBL 
·libc_dup3_trampoline_addr(SB), RODATA, $4 DATA ·libc_dup3_trampoline_addr(SB)/4, $libc_dup3_trampoline<>(SB) TEXT libc_exit_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_exit(SB) - GLOBL ·libc_exit_trampoline_addr(SB), RODATA, $4 DATA ·libc_exit_trampoline_addr(SB)/4, $libc_exit_trampoline<>(SB) TEXT libc_faccessat_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_faccessat(SB) - GLOBL ·libc_faccessat_trampoline_addr(SB), RODATA, $4 DATA ·libc_faccessat_trampoline_addr(SB)/4, $libc_faccessat_trampoline<>(SB) TEXT libc_fchdir_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_fchdir(SB) - GLOBL ·libc_fchdir_trampoline_addr(SB), RODATA, $4 DATA ·libc_fchdir_trampoline_addr(SB)/4, $libc_fchdir_trampoline<>(SB) TEXT libc_fchflags_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_fchflags(SB) - GLOBL ·libc_fchflags_trampoline_addr(SB), RODATA, $4 DATA ·libc_fchflags_trampoline_addr(SB)/4, $libc_fchflags_trampoline<>(SB) TEXT libc_fchmod_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_fchmod(SB) - GLOBL ·libc_fchmod_trampoline_addr(SB), RODATA, $4 DATA ·libc_fchmod_trampoline_addr(SB)/4, $libc_fchmod_trampoline<>(SB) TEXT libc_fchmodat_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_fchmodat(SB) - GLOBL ·libc_fchmodat_trampoline_addr(SB), RODATA, $4 DATA ·libc_fchmodat_trampoline_addr(SB)/4, $libc_fchmodat_trampoline<>(SB) TEXT libc_fchown_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_fchown(SB) - GLOBL ·libc_fchown_trampoline_addr(SB), RODATA, $4 DATA ·libc_fchown_trampoline_addr(SB)/4, $libc_fchown_trampoline<>(SB) TEXT libc_fchownat_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_fchownat(SB) - GLOBL ·libc_fchownat_trampoline_addr(SB), RODATA, $4 DATA ·libc_fchownat_trampoline_addr(SB)/4, $libc_fchownat_trampoline<>(SB) TEXT libc_flock_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_flock(SB) - GLOBL ·libc_flock_trampoline_addr(SB), RODATA, $4 DATA ·libc_flock_trampoline_addr(SB)/4, $libc_flock_trampoline<>(SB) TEXT libc_fpathconf_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_fpathconf(SB) - GLOBL ·libc_fpathconf_trampoline_addr(SB), RODATA, $4 DATA 
·libc_fpathconf_trampoline_addr(SB)/4, $libc_fpathconf_trampoline<>(SB) TEXT libc_fstat_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_fstat(SB) - GLOBL ·libc_fstat_trampoline_addr(SB), RODATA, $4 DATA ·libc_fstat_trampoline_addr(SB)/4, $libc_fstat_trampoline<>(SB) TEXT libc_fstatat_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_fstatat(SB) - GLOBL ·libc_fstatat_trampoline_addr(SB), RODATA, $4 DATA ·libc_fstatat_trampoline_addr(SB)/4, $libc_fstatat_trampoline<>(SB) TEXT libc_fstatfs_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_fstatfs(SB) - GLOBL ·libc_fstatfs_trampoline_addr(SB), RODATA, $4 DATA ·libc_fstatfs_trampoline_addr(SB)/4, $libc_fstatfs_trampoline<>(SB) TEXT libc_fsync_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_fsync(SB) - GLOBL ·libc_fsync_trampoline_addr(SB), RODATA, $4 DATA ·libc_fsync_trampoline_addr(SB)/4, $libc_fsync_trampoline<>(SB) TEXT libc_ftruncate_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_ftruncate(SB) - GLOBL ·libc_ftruncate_trampoline_addr(SB), RODATA, $4 DATA ·libc_ftruncate_trampoline_addr(SB)/4, $libc_ftruncate_trampoline<>(SB) TEXT libc_getegid_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_getegid(SB) - GLOBL ·libc_getegid_trampoline_addr(SB), RODATA, $4 DATA ·libc_getegid_trampoline_addr(SB)/4, $libc_getegid_trampoline<>(SB) TEXT libc_geteuid_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_geteuid(SB) - GLOBL ·libc_geteuid_trampoline_addr(SB), RODATA, $4 DATA ·libc_geteuid_trampoline_addr(SB)/4, $libc_geteuid_trampoline<>(SB) TEXT libc_getgid_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_getgid(SB) - GLOBL ·libc_getgid_trampoline_addr(SB), RODATA, $4 DATA ·libc_getgid_trampoline_addr(SB)/4, $libc_getgid_trampoline<>(SB) TEXT libc_getpgid_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_getpgid(SB) - GLOBL ·libc_getpgid_trampoline_addr(SB), RODATA, $4 DATA ·libc_getpgid_trampoline_addr(SB)/4, $libc_getpgid_trampoline<>(SB) TEXT libc_getpgrp_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_getpgrp(SB) - GLOBL ·libc_getpgrp_trampoline_addr(SB), RODATA, $4 DATA ·libc_getpgrp_trampoline_addr(SB)/4, 
$libc_getpgrp_trampoline<>(SB) TEXT libc_getpid_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_getpid(SB) - GLOBL ·libc_getpid_trampoline_addr(SB), RODATA, $4 DATA ·libc_getpid_trampoline_addr(SB)/4, $libc_getpid_trampoline<>(SB) TEXT libc_getppid_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_getppid(SB) - GLOBL ·libc_getppid_trampoline_addr(SB), RODATA, $4 DATA ·libc_getppid_trampoline_addr(SB)/4, $libc_getppid_trampoline<>(SB) TEXT libc_getpriority_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_getpriority(SB) - GLOBL ·libc_getpriority_trampoline_addr(SB), RODATA, $4 DATA ·libc_getpriority_trampoline_addr(SB)/4, $libc_getpriority_trampoline<>(SB) TEXT libc_getrlimit_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_getrlimit(SB) - GLOBL ·libc_getrlimit_trampoline_addr(SB), RODATA, $4 DATA ·libc_getrlimit_trampoline_addr(SB)/4, $libc_getrlimit_trampoline<>(SB) TEXT libc_getrtable_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_getrtable(SB) - GLOBL ·libc_getrtable_trampoline_addr(SB), RODATA, $4 DATA ·libc_getrtable_trampoline_addr(SB)/4, $libc_getrtable_trampoline<>(SB) TEXT libc_getrusage_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_getrusage(SB) - GLOBL ·libc_getrusage_trampoline_addr(SB), RODATA, $4 DATA ·libc_getrusage_trampoline_addr(SB)/4, $libc_getrusage_trampoline<>(SB) TEXT libc_getsid_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_getsid(SB) - GLOBL ·libc_getsid_trampoline_addr(SB), RODATA, $4 DATA ·libc_getsid_trampoline_addr(SB)/4, $libc_getsid_trampoline<>(SB) TEXT libc_gettimeofday_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_gettimeofday(SB) - GLOBL ·libc_gettimeofday_trampoline_addr(SB), RODATA, $4 DATA ·libc_gettimeofday_trampoline_addr(SB)/4, $libc_gettimeofday_trampoline<>(SB) TEXT libc_getuid_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_getuid(SB) - GLOBL ·libc_getuid_trampoline_addr(SB), RODATA, $4 DATA ·libc_getuid_trampoline_addr(SB)/4, $libc_getuid_trampoline<>(SB) TEXT libc_issetugid_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_issetugid(SB) - GLOBL ·libc_issetugid_trampoline_addr(SB), RODATA, $4 DATA 
·libc_issetugid_trampoline_addr(SB)/4, $libc_issetugid_trampoline<>(SB) TEXT libc_kill_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_kill(SB) - GLOBL ·libc_kill_trampoline_addr(SB), RODATA, $4 DATA ·libc_kill_trampoline_addr(SB)/4, $libc_kill_trampoline<>(SB) TEXT libc_kqueue_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_kqueue(SB) - GLOBL ·libc_kqueue_trampoline_addr(SB), RODATA, $4 DATA ·libc_kqueue_trampoline_addr(SB)/4, $libc_kqueue_trampoline<>(SB) TEXT libc_lchown_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_lchown(SB) - GLOBL ·libc_lchown_trampoline_addr(SB), RODATA, $4 DATA ·libc_lchown_trampoline_addr(SB)/4, $libc_lchown_trampoline<>(SB) TEXT libc_link_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_link(SB) - GLOBL ·libc_link_trampoline_addr(SB), RODATA, $4 DATA ·libc_link_trampoline_addr(SB)/4, $libc_link_trampoline<>(SB) TEXT libc_linkat_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_linkat(SB) - GLOBL ·libc_linkat_trampoline_addr(SB), RODATA, $4 DATA ·libc_linkat_trampoline_addr(SB)/4, $libc_linkat_trampoline<>(SB) TEXT libc_listen_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_listen(SB) - GLOBL ·libc_listen_trampoline_addr(SB), RODATA, $4 DATA ·libc_listen_trampoline_addr(SB)/4, $libc_listen_trampoline<>(SB) TEXT libc_lstat_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_lstat(SB) - GLOBL ·libc_lstat_trampoline_addr(SB), RODATA, $4 DATA ·libc_lstat_trampoline_addr(SB)/4, $libc_lstat_trampoline<>(SB) TEXT libc_mkdir_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_mkdir(SB) - GLOBL ·libc_mkdir_trampoline_addr(SB), RODATA, $4 DATA ·libc_mkdir_trampoline_addr(SB)/4, $libc_mkdir_trampoline<>(SB) TEXT libc_mkdirat_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_mkdirat(SB) - GLOBL ·libc_mkdirat_trampoline_addr(SB), RODATA, $4 DATA ·libc_mkdirat_trampoline_addr(SB)/4, $libc_mkdirat_trampoline<>(SB) TEXT libc_mkfifo_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_mkfifo(SB) - GLOBL ·libc_mkfifo_trampoline_addr(SB), RODATA, $4 DATA ·libc_mkfifo_trampoline_addr(SB)/4, $libc_mkfifo_trampoline<>(SB) TEXT 
libc_mkfifoat_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_mkfifoat(SB) - GLOBL ·libc_mkfifoat_trampoline_addr(SB), RODATA, $4 DATA ·libc_mkfifoat_trampoline_addr(SB)/4, $libc_mkfifoat_trampoline<>(SB) TEXT libc_mknod_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_mknod(SB) - GLOBL ·libc_mknod_trampoline_addr(SB), RODATA, $4 DATA ·libc_mknod_trampoline_addr(SB)/4, $libc_mknod_trampoline<>(SB) TEXT libc_mknodat_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_mknodat(SB) - GLOBL ·libc_mknodat_trampoline_addr(SB), RODATA, $4 DATA ·libc_mknodat_trampoline_addr(SB)/4, $libc_mknodat_trampoline<>(SB) TEXT libc_nanosleep_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_nanosleep(SB) - GLOBL ·libc_nanosleep_trampoline_addr(SB), RODATA, $4 DATA ·libc_nanosleep_trampoline_addr(SB)/4, $libc_nanosleep_trampoline<>(SB) TEXT libc_open_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_open(SB) - GLOBL ·libc_open_trampoline_addr(SB), RODATA, $4 DATA ·libc_open_trampoline_addr(SB)/4, $libc_open_trampoline<>(SB) TEXT libc_openat_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_openat(SB) - GLOBL ·libc_openat_trampoline_addr(SB), RODATA, $4 DATA ·libc_openat_trampoline_addr(SB)/4, $libc_openat_trampoline<>(SB) TEXT libc_pathconf_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_pathconf(SB) - GLOBL ·libc_pathconf_trampoline_addr(SB), RODATA, $4 DATA ·libc_pathconf_trampoline_addr(SB)/4, $libc_pathconf_trampoline<>(SB) TEXT libc_pread_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_pread(SB) - GLOBL ·libc_pread_trampoline_addr(SB), RODATA, $4 DATA ·libc_pread_trampoline_addr(SB)/4, $libc_pread_trampoline<>(SB) TEXT libc_pwrite_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_pwrite(SB) - GLOBL ·libc_pwrite_trampoline_addr(SB), RODATA, $4 DATA ·libc_pwrite_trampoline_addr(SB)/4, $libc_pwrite_trampoline<>(SB) TEXT libc_read_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_read(SB) - GLOBL ·libc_read_trampoline_addr(SB), RODATA, $4 DATA ·libc_read_trampoline_addr(SB)/4, $libc_read_trampoline<>(SB) TEXT libc_readlink_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_readlink(SB) - GLOBL 
·libc_readlink_trampoline_addr(SB), RODATA, $4 DATA ·libc_readlink_trampoline_addr(SB)/4, $libc_readlink_trampoline<>(SB) TEXT libc_readlinkat_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_readlinkat(SB) - GLOBL ·libc_readlinkat_trampoline_addr(SB), RODATA, $4 DATA ·libc_readlinkat_trampoline_addr(SB)/4, $libc_readlinkat_trampoline<>(SB) TEXT libc_rename_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_rename(SB) - GLOBL ·libc_rename_trampoline_addr(SB), RODATA, $4 DATA ·libc_rename_trampoline_addr(SB)/4, $libc_rename_trampoline<>(SB) TEXT libc_renameat_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_renameat(SB) - GLOBL ·libc_renameat_trampoline_addr(SB), RODATA, $4 DATA ·libc_renameat_trampoline_addr(SB)/4, $libc_renameat_trampoline<>(SB) TEXT libc_revoke_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_revoke(SB) - GLOBL ·libc_revoke_trampoline_addr(SB), RODATA, $4 DATA ·libc_revoke_trampoline_addr(SB)/4, $libc_revoke_trampoline<>(SB) TEXT libc_rmdir_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_rmdir(SB) - GLOBL ·libc_rmdir_trampoline_addr(SB), RODATA, $4 DATA ·libc_rmdir_trampoline_addr(SB)/4, $libc_rmdir_trampoline<>(SB) TEXT libc_lseek_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_lseek(SB) - GLOBL ·libc_lseek_trampoline_addr(SB), RODATA, $4 DATA ·libc_lseek_trampoline_addr(SB)/4, $libc_lseek_trampoline<>(SB) TEXT libc_select_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_select(SB) - GLOBL ·libc_select_trampoline_addr(SB), RODATA, $4 DATA ·libc_select_trampoline_addr(SB)/4, $libc_select_trampoline<>(SB) TEXT libc_setegid_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_setegid(SB) - GLOBL ·libc_setegid_trampoline_addr(SB), RODATA, $4 DATA ·libc_setegid_trampoline_addr(SB)/4, $libc_setegid_trampoline<>(SB) TEXT libc_seteuid_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_seteuid(SB) - GLOBL ·libc_seteuid_trampoline_addr(SB), RODATA, $4 DATA ·libc_seteuid_trampoline_addr(SB)/4, $libc_seteuid_trampoline<>(SB) TEXT libc_setgid_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_setgid(SB) - GLOBL ·libc_setgid_trampoline_addr(SB), RODATA, $4 DATA 
·libc_setgid_trampoline_addr(SB)/4, $libc_setgid_trampoline<>(SB) TEXT libc_setlogin_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_setlogin(SB) - GLOBL ·libc_setlogin_trampoline_addr(SB), RODATA, $4 DATA ·libc_setlogin_trampoline_addr(SB)/4, $libc_setlogin_trampoline<>(SB) TEXT libc_setpgid_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_setpgid(SB) - GLOBL ·libc_setpgid_trampoline_addr(SB), RODATA, $4 DATA ·libc_setpgid_trampoline_addr(SB)/4, $libc_setpgid_trampoline<>(SB) TEXT libc_setpriority_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_setpriority(SB) - GLOBL ·libc_setpriority_trampoline_addr(SB), RODATA, $4 DATA ·libc_setpriority_trampoline_addr(SB)/4, $libc_setpriority_trampoline<>(SB) TEXT libc_setregid_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_setregid(SB) - GLOBL ·libc_setregid_trampoline_addr(SB), RODATA, $4 DATA ·libc_setregid_trampoline_addr(SB)/4, $libc_setregid_trampoline<>(SB) TEXT libc_setreuid_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_setreuid(SB) - GLOBL ·libc_setreuid_trampoline_addr(SB), RODATA, $4 DATA ·libc_setreuid_trampoline_addr(SB)/4, $libc_setreuid_trampoline<>(SB) TEXT libc_setresgid_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_setresgid(SB) - GLOBL ·libc_setresgid_trampoline_addr(SB), RODATA, $4 DATA ·libc_setresgid_trampoline_addr(SB)/4, $libc_setresgid_trampoline<>(SB) TEXT libc_setresuid_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_setresuid(SB) - GLOBL ·libc_setresuid_trampoline_addr(SB), RODATA, $4 DATA ·libc_setresuid_trampoline_addr(SB)/4, $libc_setresuid_trampoline<>(SB) TEXT libc_setrlimit_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_setrlimit(SB) - GLOBL ·libc_setrlimit_trampoline_addr(SB), RODATA, $4 DATA ·libc_setrlimit_trampoline_addr(SB)/4, $libc_setrlimit_trampoline<>(SB) TEXT libc_setrtable_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_setrtable(SB) - GLOBL ·libc_setrtable_trampoline_addr(SB), RODATA, $4 DATA ·libc_setrtable_trampoline_addr(SB)/4, $libc_setrtable_trampoline<>(SB) TEXT libc_setsid_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_setsid(SB) - GLOBL 
·libc_setsid_trampoline_addr(SB), RODATA, $4 DATA ·libc_setsid_trampoline_addr(SB)/4, $libc_setsid_trampoline<>(SB) TEXT libc_settimeofday_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_settimeofday(SB) - GLOBL ·libc_settimeofday_trampoline_addr(SB), RODATA, $4 DATA ·libc_settimeofday_trampoline_addr(SB)/4, $libc_settimeofday_trampoline<>(SB) TEXT libc_setuid_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_setuid(SB) - GLOBL ·libc_setuid_trampoline_addr(SB), RODATA, $4 DATA ·libc_setuid_trampoline_addr(SB)/4, $libc_setuid_trampoline<>(SB) TEXT libc_stat_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_stat(SB) - GLOBL ·libc_stat_trampoline_addr(SB), RODATA, $4 DATA ·libc_stat_trampoline_addr(SB)/4, $libc_stat_trampoline<>(SB) TEXT libc_statfs_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_statfs(SB) - GLOBL ·libc_statfs_trampoline_addr(SB), RODATA, $4 DATA ·libc_statfs_trampoline_addr(SB)/4, $libc_statfs_trampoline<>(SB) TEXT libc_symlink_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_symlink(SB) - GLOBL ·libc_symlink_trampoline_addr(SB), RODATA, $4 DATA ·libc_symlink_trampoline_addr(SB)/4, $libc_symlink_trampoline<>(SB) TEXT libc_symlinkat_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_symlinkat(SB) - GLOBL ·libc_symlinkat_trampoline_addr(SB), RODATA, $4 DATA ·libc_symlinkat_trampoline_addr(SB)/4, $libc_symlinkat_trampoline<>(SB) TEXT libc_sync_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_sync(SB) - GLOBL ·libc_sync_trampoline_addr(SB), RODATA, $4 DATA ·libc_sync_trampoline_addr(SB)/4, $libc_sync_trampoline<>(SB) TEXT libc_truncate_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_truncate(SB) - GLOBL ·libc_truncate_trampoline_addr(SB), RODATA, $4 DATA ·libc_truncate_trampoline_addr(SB)/4, $libc_truncate_trampoline<>(SB) TEXT libc_umask_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_umask(SB) - GLOBL ·libc_umask_trampoline_addr(SB), RODATA, $4 DATA ·libc_umask_trampoline_addr(SB)/4, $libc_umask_trampoline<>(SB) TEXT libc_unlink_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_unlink(SB) - GLOBL ·libc_unlink_trampoline_addr(SB), RODATA, $4 DATA 
·libc_unlink_trampoline_addr(SB)/4, $libc_unlink_trampoline<>(SB)

TEXT libc_unlinkat_trampoline<>(SB),NOSPLIT,$0-0
	JMP	libc_unlinkat(SB)
-
GLOBL	·libc_unlinkat_trampoline_addr(SB), RODATA, $4
DATA	·libc_unlinkat_trampoline_addr(SB)/4, $libc_unlinkat_trampoline<>(SB)

TEXT libc_unmount_trampoline<>(SB),NOSPLIT,$0-0
	JMP	libc_unmount(SB)
-
GLOBL	·libc_unmount_trampoline_addr(SB), RODATA, $4
DATA	·libc_unmount_trampoline_addr(SB)/4, $libc_unmount_trampoline<>(SB)

TEXT libc_write_trampoline<>(SB),NOSPLIT,$0-0
	JMP	libc_write(SB)
-
GLOBL	·libc_write_trampoline_addr(SB), RODATA, $4
DATA	·libc_write_trampoline_addr(SB)/4, $libc_write_trampoline<>(SB)

TEXT libc_mmap_trampoline<>(SB),NOSPLIT,$0-0
	JMP	libc_mmap(SB)
-
GLOBL	·libc_mmap_trampoline_addr(SB), RODATA, $4
DATA	·libc_mmap_trampoline_addr(SB)/4, $libc_mmap_trampoline<>(SB)

TEXT libc_munmap_trampoline<>(SB),NOSPLIT,$0-0
	JMP	libc_munmap(SB)
-
GLOBL	·libc_munmap_trampoline_addr(SB), RODATA, $4
DATA	·libc_munmap_trampoline_addr(SB)/4, $libc_munmap_trampoline<>(SB)

TEXT libc_utimensat_trampoline<>(SB),NOSPLIT,$0-0
	JMP	libc_utimensat(SB)
-
GLOBL	·libc_utimensat_trampoline_addr(SB), RODATA, $4
DATA	·libc_utimensat_trampoline_addr(SB)/4, $libc_utimensat_trampoline<>(SB)

diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_openbsd_amd64.go b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_openbsd_amd64.go
index 98446d2b9540..48ff5de75b55 100644
--- a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_openbsd_amd64.go
+++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_openbsd_amd64.go
@@ -527,6 +527,14 @@ func ioctl(fd int, req uint, arg uintptr) (err error) {
 	return
 }
 
+func ioctlPtr(fd int, req uint, arg unsafe.Pointer) (err error) {
+	_, _, e1 := syscall_syscall(libc_ioctl_trampoline_addr, uintptr(fd), uintptr(req), uintptr(arg))
+	if e1 != 0 {
+		err = errnoErr(e1)
+	}
+	return
+}
+
 var libc_ioctl_trampoline_addr uintptr
 
 //go:cgo_import_dynamic libc_ioctl ioctl "libc.so"
@@ -696,6 +704,20 @@ var libc_chroot_trampoline_addr uintptr
 
 // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT
 
+func ClockGettime(clockid int32, time *Timespec) (err error) {
+	_, _, e1 := syscall_syscall(libc_clock_gettime_trampoline_addr, uintptr(clockid), uintptr(unsafe.Pointer(time)), 0)
+	if e1 != 0 {
+		err = errnoErr(e1)
+	}
+	return
+}
+
+var libc_clock_gettime_trampoline_addr uintptr
+
+//go:cgo_import_dynamic libc_clock_gettime clock_gettime "libc.so"
+
+// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT
+
 func Close(fd int) (err error) {
 	_, _, e1 := syscall_syscall(libc_close_trampoline_addr, uintptr(fd), 0, 0)
 	if e1 != 0 {

diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_openbsd_amd64.s b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_openbsd_amd64.s
index 243a6663ce67..5782cd108447 100644
--- a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_openbsd_amd64.s
+++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_openbsd_amd64.s
@@ -5,792 +5,665 @@
TEXT libc_getgroups_trampoline<>(SB),NOSPLIT,$0-0
	JMP	libc_getgroups(SB)
-
GLOBL	·libc_getgroups_trampoline_addr(SB), RODATA, $8
DATA	·libc_getgroups_trampoline_addr(SB)/8, $libc_getgroups_trampoline<>(SB)

TEXT libc_setgroups_trampoline<>(SB),NOSPLIT,$0-0
	JMP	libc_setgroups(SB)
-
GLOBL	·libc_setgroups_trampoline_addr(SB), RODATA, $8
DATA	·libc_setgroups_trampoline_addr(SB)/8, $libc_setgroups_trampoline<>(SB)

TEXT libc_wait4_trampoline<>(SB),NOSPLIT,$0-0
	JMP	libc_wait4(SB)
-
GLOBL	·libc_wait4_trampoline_addr(SB), RODATA, $8
DATA	·libc_wait4_trampoline_addr(SB)/8, $libc_wait4_trampoline<>(SB)

TEXT libc_accept_trampoline<>(SB),NOSPLIT,$0-0
	JMP	libc_accept(SB)
-
GLOBL	·libc_accept_trampoline_addr(SB), RODATA, $8
DATA	·libc_accept_trampoline_addr(SB)/8, $libc_accept_trampoline<>(SB)

TEXT libc_bind_trampoline<>(SB),NOSPLIT,$0-0
	JMP	libc_bind(SB)
-
GLOBL	·libc_bind_trampoline_addr(SB), RODATA, $8
DATA
·libc_bind_trampoline_addr(SB)/8, $libc_bind_trampoline<>(SB) TEXT libc_connect_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_connect(SB) - GLOBL ·libc_connect_trampoline_addr(SB), RODATA, $8 DATA ·libc_connect_trampoline_addr(SB)/8, $libc_connect_trampoline<>(SB) TEXT libc_socket_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_socket(SB) - GLOBL ·libc_socket_trampoline_addr(SB), RODATA, $8 DATA ·libc_socket_trampoline_addr(SB)/8, $libc_socket_trampoline<>(SB) TEXT libc_getsockopt_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_getsockopt(SB) - GLOBL ·libc_getsockopt_trampoline_addr(SB), RODATA, $8 DATA ·libc_getsockopt_trampoline_addr(SB)/8, $libc_getsockopt_trampoline<>(SB) TEXT libc_setsockopt_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_setsockopt(SB) - GLOBL ·libc_setsockopt_trampoline_addr(SB), RODATA, $8 DATA ·libc_setsockopt_trampoline_addr(SB)/8, $libc_setsockopt_trampoline<>(SB) TEXT libc_getpeername_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_getpeername(SB) - GLOBL ·libc_getpeername_trampoline_addr(SB), RODATA, $8 DATA ·libc_getpeername_trampoline_addr(SB)/8, $libc_getpeername_trampoline<>(SB) TEXT libc_getsockname_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_getsockname(SB) - GLOBL ·libc_getsockname_trampoline_addr(SB), RODATA, $8 DATA ·libc_getsockname_trampoline_addr(SB)/8, $libc_getsockname_trampoline<>(SB) TEXT libc_shutdown_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_shutdown(SB) - GLOBL ·libc_shutdown_trampoline_addr(SB), RODATA, $8 DATA ·libc_shutdown_trampoline_addr(SB)/8, $libc_shutdown_trampoline<>(SB) TEXT libc_socketpair_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_socketpair(SB) - GLOBL ·libc_socketpair_trampoline_addr(SB), RODATA, $8 DATA ·libc_socketpair_trampoline_addr(SB)/8, $libc_socketpair_trampoline<>(SB) TEXT libc_recvfrom_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_recvfrom(SB) - GLOBL ·libc_recvfrom_trampoline_addr(SB), RODATA, $8 DATA ·libc_recvfrom_trampoline_addr(SB)/8, $libc_recvfrom_trampoline<>(SB) TEXT libc_sendto_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_sendto(SB) - GLOBL 
·libc_sendto_trampoline_addr(SB), RODATA, $8 DATA ·libc_sendto_trampoline_addr(SB)/8, $libc_sendto_trampoline<>(SB) TEXT libc_recvmsg_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_recvmsg(SB) - GLOBL ·libc_recvmsg_trampoline_addr(SB), RODATA, $8 DATA ·libc_recvmsg_trampoline_addr(SB)/8, $libc_recvmsg_trampoline<>(SB) TEXT libc_sendmsg_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_sendmsg(SB) - GLOBL ·libc_sendmsg_trampoline_addr(SB), RODATA, $8 DATA ·libc_sendmsg_trampoline_addr(SB)/8, $libc_sendmsg_trampoline<>(SB) TEXT libc_kevent_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_kevent(SB) - GLOBL ·libc_kevent_trampoline_addr(SB), RODATA, $8 DATA ·libc_kevent_trampoline_addr(SB)/8, $libc_kevent_trampoline<>(SB) TEXT libc_utimes_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_utimes(SB) - GLOBL ·libc_utimes_trampoline_addr(SB), RODATA, $8 DATA ·libc_utimes_trampoline_addr(SB)/8, $libc_utimes_trampoline<>(SB) TEXT libc_futimes_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_futimes(SB) - GLOBL ·libc_futimes_trampoline_addr(SB), RODATA, $8 DATA ·libc_futimes_trampoline_addr(SB)/8, $libc_futimes_trampoline<>(SB) TEXT libc_poll_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_poll(SB) - GLOBL ·libc_poll_trampoline_addr(SB), RODATA, $8 DATA ·libc_poll_trampoline_addr(SB)/8, $libc_poll_trampoline<>(SB) TEXT libc_madvise_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_madvise(SB) - GLOBL ·libc_madvise_trampoline_addr(SB), RODATA, $8 DATA ·libc_madvise_trampoline_addr(SB)/8, $libc_madvise_trampoline<>(SB) TEXT libc_mlock_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_mlock(SB) - GLOBL ·libc_mlock_trampoline_addr(SB), RODATA, $8 DATA ·libc_mlock_trampoline_addr(SB)/8, $libc_mlock_trampoline<>(SB) TEXT libc_mlockall_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_mlockall(SB) - GLOBL ·libc_mlockall_trampoline_addr(SB), RODATA, $8 DATA ·libc_mlockall_trampoline_addr(SB)/8, $libc_mlockall_trampoline<>(SB) TEXT libc_mprotect_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_mprotect(SB) - GLOBL ·libc_mprotect_trampoline_addr(SB), RODATA, $8 DATA 
·libc_mprotect_trampoline_addr(SB)/8, $libc_mprotect_trampoline<>(SB) TEXT libc_msync_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_msync(SB) - GLOBL ·libc_msync_trampoline_addr(SB), RODATA, $8 DATA ·libc_msync_trampoline_addr(SB)/8, $libc_msync_trampoline<>(SB) TEXT libc_munlock_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_munlock(SB) - GLOBL ·libc_munlock_trampoline_addr(SB), RODATA, $8 DATA ·libc_munlock_trampoline_addr(SB)/8, $libc_munlock_trampoline<>(SB) TEXT libc_munlockall_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_munlockall(SB) - GLOBL ·libc_munlockall_trampoline_addr(SB), RODATA, $8 DATA ·libc_munlockall_trampoline_addr(SB)/8, $libc_munlockall_trampoline<>(SB) TEXT libc_pipe2_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_pipe2(SB) - GLOBL ·libc_pipe2_trampoline_addr(SB), RODATA, $8 DATA ·libc_pipe2_trampoline_addr(SB)/8, $libc_pipe2_trampoline<>(SB) TEXT libc_getdents_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_getdents(SB) - GLOBL ·libc_getdents_trampoline_addr(SB), RODATA, $8 DATA ·libc_getdents_trampoline_addr(SB)/8, $libc_getdents_trampoline<>(SB) TEXT libc_getcwd_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_getcwd(SB) - GLOBL ·libc_getcwd_trampoline_addr(SB), RODATA, $8 DATA ·libc_getcwd_trampoline_addr(SB)/8, $libc_getcwd_trampoline<>(SB) TEXT libc_ioctl_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_ioctl(SB) - GLOBL ·libc_ioctl_trampoline_addr(SB), RODATA, $8 DATA ·libc_ioctl_trampoline_addr(SB)/8, $libc_ioctl_trampoline<>(SB) TEXT libc_sysctl_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_sysctl(SB) - GLOBL ·libc_sysctl_trampoline_addr(SB), RODATA, $8 DATA ·libc_sysctl_trampoline_addr(SB)/8, $libc_sysctl_trampoline<>(SB) TEXT libc_ppoll_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_ppoll(SB) - GLOBL ·libc_ppoll_trampoline_addr(SB), RODATA, $8 DATA ·libc_ppoll_trampoline_addr(SB)/8, $libc_ppoll_trampoline<>(SB) TEXT libc_access_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_access(SB) - GLOBL ·libc_access_trampoline_addr(SB), RODATA, $8 DATA ·libc_access_trampoline_addr(SB)/8, $libc_access_trampoline<>(SB) TEXT 
libc_adjtime_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_adjtime(SB) - GLOBL ·libc_adjtime_trampoline_addr(SB), RODATA, $8 DATA ·libc_adjtime_trampoline_addr(SB)/8, $libc_adjtime_trampoline<>(SB) TEXT libc_chdir_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_chdir(SB) - GLOBL ·libc_chdir_trampoline_addr(SB), RODATA, $8 DATA ·libc_chdir_trampoline_addr(SB)/8, $libc_chdir_trampoline<>(SB) TEXT libc_chflags_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_chflags(SB) - GLOBL ·libc_chflags_trampoline_addr(SB), RODATA, $8 DATA ·libc_chflags_trampoline_addr(SB)/8, $libc_chflags_trampoline<>(SB) TEXT libc_chmod_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_chmod(SB) - GLOBL ·libc_chmod_trampoline_addr(SB), RODATA, $8 DATA ·libc_chmod_trampoline_addr(SB)/8, $libc_chmod_trampoline<>(SB) TEXT libc_chown_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_chown(SB) - GLOBL ·libc_chown_trampoline_addr(SB), RODATA, $8 DATA ·libc_chown_trampoline_addr(SB)/8, $libc_chown_trampoline<>(SB) TEXT libc_chroot_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_chroot(SB) - GLOBL ·libc_chroot_trampoline_addr(SB), RODATA, $8 DATA ·libc_chroot_trampoline_addr(SB)/8, $libc_chroot_trampoline<>(SB) +TEXT libc_clock_gettime_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_clock_gettime(SB) +GLOBL ·libc_clock_gettime_trampoline_addr(SB), RODATA, $8 +DATA ·libc_clock_gettime_trampoline_addr(SB)/8, $libc_clock_gettime_trampoline<>(SB) + TEXT libc_close_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_close(SB) - GLOBL ·libc_close_trampoline_addr(SB), RODATA, $8 DATA ·libc_close_trampoline_addr(SB)/8, $libc_close_trampoline<>(SB) TEXT libc_dup_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_dup(SB) - GLOBL ·libc_dup_trampoline_addr(SB), RODATA, $8 DATA ·libc_dup_trampoline_addr(SB)/8, $libc_dup_trampoline<>(SB) TEXT libc_dup2_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_dup2(SB) - GLOBL ·libc_dup2_trampoline_addr(SB), RODATA, $8 DATA ·libc_dup2_trampoline_addr(SB)/8, $libc_dup2_trampoline<>(SB) TEXT libc_dup3_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_dup3(SB) - GLOBL 
·libc_dup3_trampoline_addr(SB), RODATA, $8 DATA ·libc_dup3_trampoline_addr(SB)/8, $libc_dup3_trampoline<>(SB) TEXT libc_exit_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_exit(SB) - GLOBL ·libc_exit_trampoline_addr(SB), RODATA, $8 DATA ·libc_exit_trampoline_addr(SB)/8, $libc_exit_trampoline<>(SB) TEXT libc_faccessat_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_faccessat(SB) - GLOBL ·libc_faccessat_trampoline_addr(SB), RODATA, $8 DATA ·libc_faccessat_trampoline_addr(SB)/8, $libc_faccessat_trampoline<>(SB) TEXT libc_fchdir_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_fchdir(SB) - GLOBL ·libc_fchdir_trampoline_addr(SB), RODATA, $8 DATA ·libc_fchdir_trampoline_addr(SB)/8, $libc_fchdir_trampoline<>(SB) TEXT libc_fchflags_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_fchflags(SB) - GLOBL ·libc_fchflags_trampoline_addr(SB), RODATA, $8 DATA ·libc_fchflags_trampoline_addr(SB)/8, $libc_fchflags_trampoline<>(SB) TEXT libc_fchmod_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_fchmod(SB) - GLOBL ·libc_fchmod_trampoline_addr(SB), RODATA, $8 DATA ·libc_fchmod_trampoline_addr(SB)/8, $libc_fchmod_trampoline<>(SB) TEXT libc_fchmodat_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_fchmodat(SB) - GLOBL ·libc_fchmodat_trampoline_addr(SB), RODATA, $8 DATA ·libc_fchmodat_trampoline_addr(SB)/8, $libc_fchmodat_trampoline<>(SB) TEXT libc_fchown_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_fchown(SB) - GLOBL ·libc_fchown_trampoline_addr(SB), RODATA, $8 DATA ·libc_fchown_trampoline_addr(SB)/8, $libc_fchown_trampoline<>(SB) TEXT libc_fchownat_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_fchownat(SB) - GLOBL ·libc_fchownat_trampoline_addr(SB), RODATA, $8 DATA ·libc_fchownat_trampoline_addr(SB)/8, $libc_fchownat_trampoline<>(SB) TEXT libc_flock_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_flock(SB) - GLOBL ·libc_flock_trampoline_addr(SB), RODATA, $8 DATA ·libc_flock_trampoline_addr(SB)/8, $libc_flock_trampoline<>(SB) TEXT libc_fpathconf_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_fpathconf(SB) - GLOBL ·libc_fpathconf_trampoline_addr(SB), RODATA, $8 DATA 
·libc_fpathconf_trampoline_addr(SB)/8, $libc_fpathconf_trampoline<>(SB) TEXT libc_fstat_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_fstat(SB) - GLOBL ·libc_fstat_trampoline_addr(SB), RODATA, $8 DATA ·libc_fstat_trampoline_addr(SB)/8, $libc_fstat_trampoline<>(SB) TEXT libc_fstatat_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_fstatat(SB) - GLOBL ·libc_fstatat_trampoline_addr(SB), RODATA, $8 DATA ·libc_fstatat_trampoline_addr(SB)/8, $libc_fstatat_trampoline<>(SB) TEXT libc_fstatfs_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_fstatfs(SB) - GLOBL ·libc_fstatfs_trampoline_addr(SB), RODATA, $8 DATA ·libc_fstatfs_trampoline_addr(SB)/8, $libc_fstatfs_trampoline<>(SB) TEXT libc_fsync_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_fsync(SB) - GLOBL ·libc_fsync_trampoline_addr(SB), RODATA, $8 DATA ·libc_fsync_trampoline_addr(SB)/8, $libc_fsync_trampoline<>(SB) TEXT libc_ftruncate_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_ftruncate(SB) - GLOBL ·libc_ftruncate_trampoline_addr(SB), RODATA, $8 DATA ·libc_ftruncate_trampoline_addr(SB)/8, $libc_ftruncate_trampoline<>(SB) TEXT libc_getegid_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_getegid(SB) - GLOBL ·libc_getegid_trampoline_addr(SB), RODATA, $8 DATA ·libc_getegid_trampoline_addr(SB)/8, $libc_getegid_trampoline<>(SB) TEXT libc_geteuid_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_geteuid(SB) - GLOBL ·libc_geteuid_trampoline_addr(SB), RODATA, $8 DATA ·libc_geteuid_trampoline_addr(SB)/8, $libc_geteuid_trampoline<>(SB) TEXT libc_getgid_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_getgid(SB) - GLOBL ·libc_getgid_trampoline_addr(SB), RODATA, $8 DATA ·libc_getgid_trampoline_addr(SB)/8, $libc_getgid_trampoline<>(SB) TEXT libc_getpgid_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_getpgid(SB) - GLOBL ·libc_getpgid_trampoline_addr(SB), RODATA, $8 DATA ·libc_getpgid_trampoline_addr(SB)/8, $libc_getpgid_trampoline<>(SB) TEXT libc_getpgrp_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_getpgrp(SB) - GLOBL ·libc_getpgrp_trampoline_addr(SB), RODATA, $8 DATA ·libc_getpgrp_trampoline_addr(SB)/8, 
$libc_getpgrp_trampoline<>(SB) TEXT libc_getpid_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_getpid(SB) - GLOBL ·libc_getpid_trampoline_addr(SB), RODATA, $8 DATA ·libc_getpid_trampoline_addr(SB)/8, $libc_getpid_trampoline<>(SB) TEXT libc_getppid_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_getppid(SB) - GLOBL ·libc_getppid_trampoline_addr(SB), RODATA, $8 DATA ·libc_getppid_trampoline_addr(SB)/8, $libc_getppid_trampoline<>(SB) TEXT libc_getpriority_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_getpriority(SB) - GLOBL ·libc_getpriority_trampoline_addr(SB), RODATA, $8 DATA ·libc_getpriority_trampoline_addr(SB)/8, $libc_getpriority_trampoline<>(SB) TEXT libc_getrlimit_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_getrlimit(SB) - GLOBL ·libc_getrlimit_trampoline_addr(SB), RODATA, $8 DATA ·libc_getrlimit_trampoline_addr(SB)/8, $libc_getrlimit_trampoline<>(SB) TEXT libc_getrtable_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_getrtable(SB) - GLOBL ·libc_getrtable_trampoline_addr(SB), RODATA, $8 DATA ·libc_getrtable_trampoline_addr(SB)/8, $libc_getrtable_trampoline<>(SB) TEXT libc_getrusage_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_getrusage(SB) - GLOBL ·libc_getrusage_trampoline_addr(SB), RODATA, $8 DATA ·libc_getrusage_trampoline_addr(SB)/8, $libc_getrusage_trampoline<>(SB) TEXT libc_getsid_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_getsid(SB) - GLOBL ·libc_getsid_trampoline_addr(SB), RODATA, $8 DATA ·libc_getsid_trampoline_addr(SB)/8, $libc_getsid_trampoline<>(SB) TEXT libc_gettimeofday_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_gettimeofday(SB) - GLOBL ·libc_gettimeofday_trampoline_addr(SB), RODATA, $8 DATA ·libc_gettimeofday_trampoline_addr(SB)/8, $libc_gettimeofday_trampoline<>(SB) TEXT libc_getuid_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_getuid(SB) - GLOBL ·libc_getuid_trampoline_addr(SB), RODATA, $8 DATA ·libc_getuid_trampoline_addr(SB)/8, $libc_getuid_trampoline<>(SB) TEXT libc_issetugid_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_issetugid(SB) - GLOBL ·libc_issetugid_trampoline_addr(SB), RODATA, $8 DATA 
·libc_issetugid_trampoline_addr(SB)/8, $libc_issetugid_trampoline<>(SB) TEXT libc_kill_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_kill(SB) - GLOBL ·libc_kill_trampoline_addr(SB), RODATA, $8 DATA ·libc_kill_trampoline_addr(SB)/8, $libc_kill_trampoline<>(SB) TEXT libc_kqueue_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_kqueue(SB) - GLOBL ·libc_kqueue_trampoline_addr(SB), RODATA, $8 DATA ·libc_kqueue_trampoline_addr(SB)/8, $libc_kqueue_trampoline<>(SB) TEXT libc_lchown_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_lchown(SB) - GLOBL ·libc_lchown_trampoline_addr(SB), RODATA, $8 DATA ·libc_lchown_trampoline_addr(SB)/8, $libc_lchown_trampoline<>(SB) TEXT libc_link_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_link(SB) - GLOBL ·libc_link_trampoline_addr(SB), RODATA, $8 DATA ·libc_link_trampoline_addr(SB)/8, $libc_link_trampoline<>(SB) TEXT libc_linkat_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_linkat(SB) - GLOBL ·libc_linkat_trampoline_addr(SB), RODATA, $8 DATA ·libc_linkat_trampoline_addr(SB)/8, $libc_linkat_trampoline<>(SB) TEXT libc_listen_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_listen(SB) - GLOBL ·libc_listen_trampoline_addr(SB), RODATA, $8 DATA ·libc_listen_trampoline_addr(SB)/8, $libc_listen_trampoline<>(SB) TEXT libc_lstat_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_lstat(SB) - GLOBL ·libc_lstat_trampoline_addr(SB), RODATA, $8 DATA ·libc_lstat_trampoline_addr(SB)/8, $libc_lstat_trampoline<>(SB) TEXT libc_mkdir_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_mkdir(SB) - GLOBL ·libc_mkdir_trampoline_addr(SB), RODATA, $8 DATA ·libc_mkdir_trampoline_addr(SB)/8, $libc_mkdir_trampoline<>(SB) TEXT libc_mkdirat_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_mkdirat(SB) - GLOBL ·libc_mkdirat_trampoline_addr(SB), RODATA, $8 DATA ·libc_mkdirat_trampoline_addr(SB)/8, $libc_mkdirat_trampoline<>(SB) TEXT libc_mkfifo_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_mkfifo(SB) - GLOBL ·libc_mkfifo_trampoline_addr(SB), RODATA, $8 DATA ·libc_mkfifo_trampoline_addr(SB)/8, $libc_mkfifo_trampoline<>(SB) TEXT 
libc_mkfifoat_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_mkfifoat(SB) - GLOBL ·libc_mkfifoat_trampoline_addr(SB), RODATA, $8 DATA ·libc_mkfifoat_trampoline_addr(SB)/8, $libc_mkfifoat_trampoline<>(SB) TEXT libc_mknod_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_mknod(SB) - GLOBL ·libc_mknod_trampoline_addr(SB), RODATA, $8 DATA ·libc_mknod_trampoline_addr(SB)/8, $libc_mknod_trampoline<>(SB) TEXT libc_mknodat_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_mknodat(SB) - GLOBL ·libc_mknodat_trampoline_addr(SB), RODATA, $8 DATA ·libc_mknodat_trampoline_addr(SB)/8, $libc_mknodat_trampoline<>(SB) TEXT libc_nanosleep_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_nanosleep(SB) - GLOBL ·libc_nanosleep_trampoline_addr(SB), RODATA, $8 DATA ·libc_nanosleep_trampoline_addr(SB)/8, $libc_nanosleep_trampoline<>(SB) TEXT libc_open_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_open(SB) - GLOBL ·libc_open_trampoline_addr(SB), RODATA, $8 DATA ·libc_open_trampoline_addr(SB)/8, $libc_open_trampoline<>(SB) TEXT libc_openat_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_openat(SB) - GLOBL ·libc_openat_trampoline_addr(SB), RODATA, $8 DATA ·libc_openat_trampoline_addr(SB)/8, $libc_openat_trampoline<>(SB) TEXT libc_pathconf_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_pathconf(SB) - GLOBL ·libc_pathconf_trampoline_addr(SB), RODATA, $8 DATA ·libc_pathconf_trampoline_addr(SB)/8, $libc_pathconf_trampoline<>(SB) TEXT libc_pread_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_pread(SB) - GLOBL ·libc_pread_trampoline_addr(SB), RODATA, $8 DATA ·libc_pread_trampoline_addr(SB)/8, $libc_pread_trampoline<>(SB) TEXT libc_pwrite_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_pwrite(SB) - GLOBL ·libc_pwrite_trampoline_addr(SB), RODATA, $8 DATA ·libc_pwrite_trampoline_addr(SB)/8, $libc_pwrite_trampoline<>(SB) TEXT libc_read_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_read(SB) - GLOBL ·libc_read_trampoline_addr(SB), RODATA, $8 DATA ·libc_read_trampoline_addr(SB)/8, $libc_read_trampoline<>(SB) TEXT libc_readlink_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_readlink(SB) - GLOBL 
·libc_readlink_trampoline_addr(SB), RODATA, $8 DATA ·libc_readlink_trampoline_addr(SB)/8, $libc_readlink_trampoline<>(SB) TEXT libc_readlinkat_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_readlinkat(SB) - GLOBL ·libc_readlinkat_trampoline_addr(SB), RODATA, $8 DATA ·libc_readlinkat_trampoline_addr(SB)/8, $libc_readlinkat_trampoline<>(SB) TEXT libc_rename_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_rename(SB) - GLOBL ·libc_rename_trampoline_addr(SB), RODATA, $8 DATA ·libc_rename_trampoline_addr(SB)/8, $libc_rename_trampoline<>(SB) TEXT libc_renameat_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_renameat(SB) - GLOBL ·libc_renameat_trampoline_addr(SB), RODATA, $8 DATA ·libc_renameat_trampoline_addr(SB)/8, $libc_renameat_trampoline<>(SB) TEXT libc_revoke_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_revoke(SB) - GLOBL ·libc_revoke_trampoline_addr(SB), RODATA, $8 DATA ·libc_revoke_trampoline_addr(SB)/8, $libc_revoke_trampoline<>(SB) TEXT libc_rmdir_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_rmdir(SB) - GLOBL ·libc_rmdir_trampoline_addr(SB), RODATA, $8 DATA ·libc_rmdir_trampoline_addr(SB)/8, $libc_rmdir_trampoline<>(SB) TEXT libc_lseek_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_lseek(SB) - GLOBL ·libc_lseek_trampoline_addr(SB), RODATA, $8 DATA ·libc_lseek_trampoline_addr(SB)/8, $libc_lseek_trampoline<>(SB) TEXT libc_select_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_select(SB) - GLOBL ·libc_select_trampoline_addr(SB), RODATA, $8 DATA ·libc_select_trampoline_addr(SB)/8, $libc_select_trampoline<>(SB) TEXT libc_setegid_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_setegid(SB) - GLOBL ·libc_setegid_trampoline_addr(SB), RODATA, $8 DATA ·libc_setegid_trampoline_addr(SB)/8, $libc_setegid_trampoline<>(SB) TEXT libc_seteuid_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_seteuid(SB) - GLOBL ·libc_seteuid_trampoline_addr(SB), RODATA, $8 DATA ·libc_seteuid_trampoline_addr(SB)/8, $libc_seteuid_trampoline<>(SB) TEXT libc_setgid_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_setgid(SB) - GLOBL ·libc_setgid_trampoline_addr(SB), RODATA, $8 DATA 
·libc_setgid_trampoline_addr(SB)/8, $libc_setgid_trampoline<>(SB) TEXT libc_setlogin_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_setlogin(SB) - GLOBL ·libc_setlogin_trampoline_addr(SB), RODATA, $8 DATA ·libc_setlogin_trampoline_addr(SB)/8, $libc_setlogin_trampoline<>(SB) TEXT libc_setpgid_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_setpgid(SB) - GLOBL ·libc_setpgid_trampoline_addr(SB), RODATA, $8 DATA ·libc_setpgid_trampoline_addr(SB)/8, $libc_setpgid_trampoline<>(SB) TEXT libc_setpriority_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_setpriority(SB) - GLOBL ·libc_setpriority_trampoline_addr(SB), RODATA, $8 DATA ·libc_setpriority_trampoline_addr(SB)/8, $libc_setpriority_trampoline<>(SB) TEXT libc_setregid_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_setregid(SB) - GLOBL ·libc_setregid_trampoline_addr(SB), RODATA, $8 DATA ·libc_setregid_trampoline_addr(SB)/8, $libc_setregid_trampoline<>(SB) TEXT libc_setreuid_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_setreuid(SB) - GLOBL ·libc_setreuid_trampoline_addr(SB), RODATA, $8 DATA ·libc_setreuid_trampoline_addr(SB)/8, $libc_setreuid_trampoline<>(SB) TEXT libc_setresgid_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_setresgid(SB) - GLOBL ·libc_setresgid_trampoline_addr(SB), RODATA, $8 DATA ·libc_setresgid_trampoline_addr(SB)/8, $libc_setresgid_trampoline<>(SB) TEXT libc_setresuid_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_setresuid(SB) - GLOBL ·libc_setresuid_trampoline_addr(SB), RODATA, $8 DATA ·libc_setresuid_trampoline_addr(SB)/8, $libc_setresuid_trampoline<>(SB) TEXT libc_setrlimit_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_setrlimit(SB) - GLOBL ·libc_setrlimit_trampoline_addr(SB), RODATA, $8 DATA ·libc_setrlimit_trampoline_addr(SB)/8, $libc_setrlimit_trampoline<>(SB) TEXT libc_setrtable_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_setrtable(SB) - GLOBL ·libc_setrtable_trampoline_addr(SB), RODATA, $8 DATA ·libc_setrtable_trampoline_addr(SB)/8, $libc_setrtable_trampoline<>(SB) TEXT libc_setsid_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_setsid(SB) - GLOBL 
·libc_setsid_trampoline_addr(SB), RODATA, $8 DATA ·libc_setsid_trampoline_addr(SB)/8, $libc_setsid_trampoline<>(SB) TEXT libc_settimeofday_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_settimeofday(SB) - GLOBL ·libc_settimeofday_trampoline_addr(SB), RODATA, $8 DATA ·libc_settimeofday_trampoline_addr(SB)/8, $libc_settimeofday_trampoline<>(SB) TEXT libc_setuid_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_setuid(SB) - GLOBL ·libc_setuid_trampoline_addr(SB), RODATA, $8 DATA ·libc_setuid_trampoline_addr(SB)/8, $libc_setuid_trampoline<>(SB) TEXT libc_stat_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_stat(SB) - GLOBL ·libc_stat_trampoline_addr(SB), RODATA, $8 DATA ·libc_stat_trampoline_addr(SB)/8, $libc_stat_trampoline<>(SB) TEXT libc_statfs_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_statfs(SB) - GLOBL ·libc_statfs_trampoline_addr(SB), RODATA, $8 DATA ·libc_statfs_trampoline_addr(SB)/8, $libc_statfs_trampoline<>(SB) TEXT libc_symlink_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_symlink(SB) - GLOBL ·libc_symlink_trampoline_addr(SB), RODATA, $8 DATA ·libc_symlink_trampoline_addr(SB)/8, $libc_symlink_trampoline<>(SB) TEXT libc_symlinkat_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_symlinkat(SB) - GLOBL ·libc_symlinkat_trampoline_addr(SB), RODATA, $8 DATA ·libc_symlinkat_trampoline_addr(SB)/8, $libc_symlinkat_trampoline<>(SB) TEXT libc_sync_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_sync(SB) - GLOBL ·libc_sync_trampoline_addr(SB), RODATA, $8 DATA ·libc_sync_trampoline_addr(SB)/8, $libc_sync_trampoline<>(SB) TEXT libc_truncate_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_truncate(SB) - GLOBL ·libc_truncate_trampoline_addr(SB), RODATA, $8 DATA ·libc_truncate_trampoline_addr(SB)/8, $libc_truncate_trampoline<>(SB) TEXT libc_umask_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_umask(SB) - GLOBL ·libc_umask_trampoline_addr(SB), RODATA, $8 DATA ·libc_umask_trampoline_addr(SB)/8, $libc_umask_trampoline<>(SB) TEXT libc_unlink_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_unlink(SB) - GLOBL ·libc_unlink_trampoline_addr(SB), RODATA, $8 DATA 
·libc_unlink_trampoline_addr(SB)/8, $libc_unlink_trampoline<>(SB) TEXT libc_unlinkat_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_unlinkat(SB) - GLOBL ·libc_unlinkat_trampoline_addr(SB), RODATA, $8 DATA ·libc_unlinkat_trampoline_addr(SB)/8, $libc_unlinkat_trampoline<>(SB) TEXT libc_unmount_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_unmount(SB) - GLOBL ·libc_unmount_trampoline_addr(SB), RODATA, $8 DATA ·libc_unmount_trampoline_addr(SB)/8, $libc_unmount_trampoline<>(SB) TEXT libc_write_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_write(SB) - GLOBL ·libc_write_trampoline_addr(SB), RODATA, $8 DATA ·libc_write_trampoline_addr(SB)/8, $libc_write_trampoline<>(SB) TEXT libc_mmap_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_mmap(SB) - GLOBL ·libc_mmap_trampoline_addr(SB), RODATA, $8 DATA ·libc_mmap_trampoline_addr(SB)/8, $libc_mmap_trampoline<>(SB) TEXT libc_munmap_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_munmap(SB) - GLOBL ·libc_munmap_trampoline_addr(SB), RODATA, $8 DATA ·libc_munmap_trampoline_addr(SB)/8, $libc_munmap_trampoline<>(SB) TEXT libc_utimensat_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_utimensat(SB) - GLOBL ·libc_utimensat_trampoline_addr(SB), RODATA, $8 DATA ·libc_utimensat_trampoline_addr(SB)/8, $libc_utimensat_trampoline<>(SB) diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_openbsd_arm.go b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_openbsd_arm.go index 8da6791d1e33..2452a641dae7 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_openbsd_arm.go +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_openbsd_arm.go @@ -527,6 +527,14 @@ func ioctl(fd int, req uint, arg uintptr) (err error) { return } +func ioctlPtr(fd int, req uint, arg unsafe.Pointer) (err error) { + _, _, e1 := syscall_syscall(libc_ioctl_trampoline_addr, uintptr(fd), uintptr(req), uintptr(arg)) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + var libc_ioctl_trampoline_addr uintptr //go:cgo_import_dynamic libc_ioctl ioctl "libc.so" 
@@ -696,6 +704,20 @@ var libc_chroot_trampoline_addr uintptr // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func ClockGettime(clockid int32, time *Timespec) (err error) { + _, _, e1 := syscall_syscall(libc_clock_gettime_trampoline_addr, uintptr(clockid), uintptr(unsafe.Pointer(time)), 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +var libc_clock_gettime_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_clock_gettime clock_gettime "libc.so" + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func Close(fd int) (err error) { _, _, e1 := syscall_syscall(libc_close_trampoline_addr, uintptr(fd), 0, 0) if e1 != 0 { diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_openbsd_arm.s b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_openbsd_arm.s index 9ad116d9fbdd..cf310420c942 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_openbsd_arm.s +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_openbsd_arm.s @@ -5,792 +5,665 @@ TEXT libc_getgroups_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_getgroups(SB) - GLOBL ·libc_getgroups_trampoline_addr(SB), RODATA, $4 DATA ·libc_getgroups_trampoline_addr(SB)/4, $libc_getgroups_trampoline<>(SB) TEXT libc_setgroups_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_setgroups(SB) - GLOBL ·libc_setgroups_trampoline_addr(SB), RODATA, $4 DATA ·libc_setgroups_trampoline_addr(SB)/4, $libc_setgroups_trampoline<>(SB) TEXT libc_wait4_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_wait4(SB) - GLOBL ·libc_wait4_trampoline_addr(SB), RODATA, $4 DATA ·libc_wait4_trampoline_addr(SB)/4, $libc_wait4_trampoline<>(SB) TEXT libc_accept_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_accept(SB) - GLOBL ·libc_accept_trampoline_addr(SB), RODATA, $4 DATA ·libc_accept_trampoline_addr(SB)/4, $libc_accept_trampoline<>(SB) TEXT libc_bind_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_bind(SB) - GLOBL ·libc_bind_trampoline_addr(SB), RODATA, $4 DATA 
·libc_bind_trampoline_addr(SB)/4, $libc_bind_trampoline<>(SB) TEXT libc_connect_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_connect(SB) - GLOBL ·libc_connect_trampoline_addr(SB), RODATA, $4 DATA ·libc_connect_trampoline_addr(SB)/4, $libc_connect_trampoline<>(SB) TEXT libc_socket_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_socket(SB) - GLOBL ·libc_socket_trampoline_addr(SB), RODATA, $4 DATA ·libc_socket_trampoline_addr(SB)/4, $libc_socket_trampoline<>(SB) TEXT libc_getsockopt_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_getsockopt(SB) - GLOBL ·libc_getsockopt_trampoline_addr(SB), RODATA, $4 DATA ·libc_getsockopt_trampoline_addr(SB)/4, $libc_getsockopt_trampoline<>(SB) TEXT libc_setsockopt_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_setsockopt(SB) - GLOBL ·libc_setsockopt_trampoline_addr(SB), RODATA, $4 DATA ·libc_setsockopt_trampoline_addr(SB)/4, $libc_setsockopt_trampoline<>(SB) TEXT libc_getpeername_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_getpeername(SB) - GLOBL ·libc_getpeername_trampoline_addr(SB), RODATA, $4 DATA ·libc_getpeername_trampoline_addr(SB)/4, $libc_getpeername_trampoline<>(SB) TEXT libc_getsockname_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_getsockname(SB) - GLOBL ·libc_getsockname_trampoline_addr(SB), RODATA, $4 DATA ·libc_getsockname_trampoline_addr(SB)/4, $libc_getsockname_trampoline<>(SB) TEXT libc_shutdown_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_shutdown(SB) - GLOBL ·libc_shutdown_trampoline_addr(SB), RODATA, $4 DATA ·libc_shutdown_trampoline_addr(SB)/4, $libc_shutdown_trampoline<>(SB) TEXT libc_socketpair_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_socketpair(SB) - GLOBL ·libc_socketpair_trampoline_addr(SB), RODATA, $4 DATA ·libc_socketpair_trampoline_addr(SB)/4, $libc_socketpair_trampoline<>(SB) TEXT libc_recvfrom_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_recvfrom(SB) - GLOBL ·libc_recvfrom_trampoline_addr(SB), RODATA, $4 DATA ·libc_recvfrom_trampoline_addr(SB)/4, $libc_recvfrom_trampoline<>(SB) TEXT libc_sendto_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_sendto(SB) - GLOBL 
·libc_sendto_trampoline_addr(SB), RODATA, $4 DATA ·libc_sendto_trampoline_addr(SB)/4, $libc_sendto_trampoline<>(SB) TEXT libc_recvmsg_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_recvmsg(SB) - GLOBL ·libc_recvmsg_trampoline_addr(SB), RODATA, $4 DATA ·libc_recvmsg_trampoline_addr(SB)/4, $libc_recvmsg_trampoline<>(SB) TEXT libc_sendmsg_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_sendmsg(SB) - GLOBL ·libc_sendmsg_trampoline_addr(SB), RODATA, $4 DATA ·libc_sendmsg_trampoline_addr(SB)/4, $libc_sendmsg_trampoline<>(SB) TEXT libc_kevent_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_kevent(SB) - GLOBL ·libc_kevent_trampoline_addr(SB), RODATA, $4 DATA ·libc_kevent_trampoline_addr(SB)/4, $libc_kevent_trampoline<>(SB) TEXT libc_utimes_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_utimes(SB) - GLOBL ·libc_utimes_trampoline_addr(SB), RODATA, $4 DATA ·libc_utimes_trampoline_addr(SB)/4, $libc_utimes_trampoline<>(SB) TEXT libc_futimes_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_futimes(SB) - GLOBL ·libc_futimes_trampoline_addr(SB), RODATA, $4 DATA ·libc_futimes_trampoline_addr(SB)/4, $libc_futimes_trampoline<>(SB) TEXT libc_poll_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_poll(SB) - GLOBL ·libc_poll_trampoline_addr(SB), RODATA, $4 DATA ·libc_poll_trampoline_addr(SB)/4, $libc_poll_trampoline<>(SB) TEXT libc_madvise_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_madvise(SB) - GLOBL ·libc_madvise_trampoline_addr(SB), RODATA, $4 DATA ·libc_madvise_trampoline_addr(SB)/4, $libc_madvise_trampoline<>(SB) TEXT libc_mlock_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_mlock(SB) - GLOBL ·libc_mlock_trampoline_addr(SB), RODATA, $4 DATA ·libc_mlock_trampoline_addr(SB)/4, $libc_mlock_trampoline<>(SB) TEXT libc_mlockall_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_mlockall(SB) - GLOBL ·libc_mlockall_trampoline_addr(SB), RODATA, $4 DATA ·libc_mlockall_trampoline_addr(SB)/4, $libc_mlockall_trampoline<>(SB) TEXT libc_mprotect_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_mprotect(SB) - GLOBL ·libc_mprotect_trampoline_addr(SB), RODATA, $4 DATA 
·libc_mprotect_trampoline_addr(SB)/4, $libc_mprotect_trampoline<>(SB) TEXT libc_msync_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_msync(SB) - GLOBL ·libc_msync_trampoline_addr(SB), RODATA, $4 DATA ·libc_msync_trampoline_addr(SB)/4, $libc_msync_trampoline<>(SB) TEXT libc_munlock_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_munlock(SB) - GLOBL ·libc_munlock_trampoline_addr(SB), RODATA, $4 DATA ·libc_munlock_trampoline_addr(SB)/4, $libc_munlock_trampoline<>(SB) TEXT libc_munlockall_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_munlockall(SB) - GLOBL ·libc_munlockall_trampoline_addr(SB), RODATA, $4 DATA ·libc_munlockall_trampoline_addr(SB)/4, $libc_munlockall_trampoline<>(SB) TEXT libc_pipe2_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_pipe2(SB) - GLOBL ·libc_pipe2_trampoline_addr(SB), RODATA, $4 DATA ·libc_pipe2_trampoline_addr(SB)/4, $libc_pipe2_trampoline<>(SB) TEXT libc_getdents_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_getdents(SB) - GLOBL ·libc_getdents_trampoline_addr(SB), RODATA, $4 DATA ·libc_getdents_trampoline_addr(SB)/4, $libc_getdents_trampoline<>(SB) TEXT libc_getcwd_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_getcwd(SB) - GLOBL ·libc_getcwd_trampoline_addr(SB), RODATA, $4 DATA ·libc_getcwd_trampoline_addr(SB)/4, $libc_getcwd_trampoline<>(SB) TEXT libc_ioctl_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_ioctl(SB) - GLOBL ·libc_ioctl_trampoline_addr(SB), RODATA, $4 DATA ·libc_ioctl_trampoline_addr(SB)/4, $libc_ioctl_trampoline<>(SB) TEXT libc_sysctl_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_sysctl(SB) - GLOBL ·libc_sysctl_trampoline_addr(SB), RODATA, $4 DATA ·libc_sysctl_trampoline_addr(SB)/4, $libc_sysctl_trampoline<>(SB) TEXT libc_ppoll_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_ppoll(SB) - GLOBL ·libc_ppoll_trampoline_addr(SB), RODATA, $4 DATA ·libc_ppoll_trampoline_addr(SB)/4, $libc_ppoll_trampoline<>(SB) TEXT libc_access_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_access(SB) - GLOBL ·libc_access_trampoline_addr(SB), RODATA, $4 DATA ·libc_access_trampoline_addr(SB)/4, $libc_access_trampoline<>(SB) TEXT 
libc_adjtime_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_adjtime(SB) - GLOBL ·libc_adjtime_trampoline_addr(SB), RODATA, $4 DATA ·libc_adjtime_trampoline_addr(SB)/4, $libc_adjtime_trampoline<>(SB) TEXT libc_chdir_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_chdir(SB) - GLOBL ·libc_chdir_trampoline_addr(SB), RODATA, $4 DATA ·libc_chdir_trampoline_addr(SB)/4, $libc_chdir_trampoline<>(SB) TEXT libc_chflags_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_chflags(SB) - GLOBL ·libc_chflags_trampoline_addr(SB), RODATA, $4 DATA ·libc_chflags_trampoline_addr(SB)/4, $libc_chflags_trampoline<>(SB) TEXT libc_chmod_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_chmod(SB) - GLOBL ·libc_chmod_trampoline_addr(SB), RODATA, $4 DATA ·libc_chmod_trampoline_addr(SB)/4, $libc_chmod_trampoline<>(SB) TEXT libc_chown_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_chown(SB) - GLOBL ·libc_chown_trampoline_addr(SB), RODATA, $4 DATA ·libc_chown_trampoline_addr(SB)/4, $libc_chown_trampoline<>(SB) TEXT libc_chroot_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_chroot(SB) - GLOBL ·libc_chroot_trampoline_addr(SB), RODATA, $4 DATA ·libc_chroot_trampoline_addr(SB)/4, $libc_chroot_trampoline<>(SB) +TEXT libc_clock_gettime_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_clock_gettime(SB) +GLOBL ·libc_clock_gettime_trampoline_addr(SB), RODATA, $4 +DATA ·libc_clock_gettime_trampoline_addr(SB)/4, $libc_clock_gettime_trampoline<>(SB) + TEXT libc_close_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_close(SB) - GLOBL ·libc_close_trampoline_addr(SB), RODATA, $4 DATA ·libc_close_trampoline_addr(SB)/4, $libc_close_trampoline<>(SB) TEXT libc_dup_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_dup(SB) - GLOBL ·libc_dup_trampoline_addr(SB), RODATA, $4 DATA ·libc_dup_trampoline_addr(SB)/4, $libc_dup_trampoline<>(SB) TEXT libc_dup2_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_dup2(SB) - GLOBL ·libc_dup2_trampoline_addr(SB), RODATA, $4 DATA ·libc_dup2_trampoline_addr(SB)/4, $libc_dup2_trampoline<>(SB) TEXT libc_dup3_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_dup3(SB) - GLOBL 
·libc_dup3_trampoline_addr(SB), RODATA, $4 DATA ·libc_dup3_trampoline_addr(SB)/4, $libc_dup3_trampoline<>(SB) TEXT libc_exit_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_exit(SB) - GLOBL ·libc_exit_trampoline_addr(SB), RODATA, $4 DATA ·libc_exit_trampoline_addr(SB)/4, $libc_exit_trampoline<>(SB) TEXT libc_faccessat_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_faccessat(SB) - GLOBL ·libc_faccessat_trampoline_addr(SB), RODATA, $4 DATA ·libc_faccessat_trampoline_addr(SB)/4, $libc_faccessat_trampoline<>(SB) TEXT libc_fchdir_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_fchdir(SB) - GLOBL ·libc_fchdir_trampoline_addr(SB), RODATA, $4 DATA ·libc_fchdir_trampoline_addr(SB)/4, $libc_fchdir_trampoline<>(SB) TEXT libc_fchflags_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_fchflags(SB) - GLOBL ·libc_fchflags_trampoline_addr(SB), RODATA, $4 DATA ·libc_fchflags_trampoline_addr(SB)/4, $libc_fchflags_trampoline<>(SB) TEXT libc_fchmod_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_fchmod(SB) - GLOBL ·libc_fchmod_trampoline_addr(SB), RODATA, $4 DATA ·libc_fchmod_trampoline_addr(SB)/4, $libc_fchmod_trampoline<>(SB) TEXT libc_fchmodat_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_fchmodat(SB) - GLOBL ·libc_fchmodat_trampoline_addr(SB), RODATA, $4 DATA ·libc_fchmodat_trampoline_addr(SB)/4, $libc_fchmodat_trampoline<>(SB) TEXT libc_fchown_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_fchown(SB) - GLOBL ·libc_fchown_trampoline_addr(SB), RODATA, $4 DATA ·libc_fchown_trampoline_addr(SB)/4, $libc_fchown_trampoline<>(SB) TEXT libc_fchownat_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_fchownat(SB) - GLOBL ·libc_fchownat_trampoline_addr(SB), RODATA, $4 DATA ·libc_fchownat_trampoline_addr(SB)/4, $libc_fchownat_trampoline<>(SB) TEXT libc_flock_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_flock(SB) - GLOBL ·libc_flock_trampoline_addr(SB), RODATA, $4 DATA ·libc_flock_trampoline_addr(SB)/4, $libc_flock_trampoline<>(SB) TEXT libc_fpathconf_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_fpathconf(SB) - GLOBL ·libc_fpathconf_trampoline_addr(SB), RODATA, $4 DATA 
·libc_fpathconf_trampoline_addr(SB)/4, $libc_fpathconf_trampoline<>(SB) TEXT libc_fstat_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_fstat(SB) - GLOBL ·libc_fstat_trampoline_addr(SB), RODATA, $4 DATA ·libc_fstat_trampoline_addr(SB)/4, $libc_fstat_trampoline<>(SB) TEXT libc_fstatat_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_fstatat(SB) - GLOBL ·libc_fstatat_trampoline_addr(SB), RODATA, $4 DATA ·libc_fstatat_trampoline_addr(SB)/4, $libc_fstatat_trampoline<>(SB) TEXT libc_fstatfs_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_fstatfs(SB) - GLOBL ·libc_fstatfs_trampoline_addr(SB), RODATA, $4 DATA ·libc_fstatfs_trampoline_addr(SB)/4, $libc_fstatfs_trampoline<>(SB) TEXT libc_fsync_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_fsync(SB) - GLOBL ·libc_fsync_trampoline_addr(SB), RODATA, $4 DATA ·libc_fsync_trampoline_addr(SB)/4, $libc_fsync_trampoline<>(SB) TEXT libc_ftruncate_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_ftruncate(SB) - GLOBL ·libc_ftruncate_trampoline_addr(SB), RODATA, $4 DATA ·libc_ftruncate_trampoline_addr(SB)/4, $libc_ftruncate_trampoline<>(SB) TEXT libc_getegid_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_getegid(SB) - GLOBL ·libc_getegid_trampoline_addr(SB), RODATA, $4 DATA ·libc_getegid_trampoline_addr(SB)/4, $libc_getegid_trampoline<>(SB) TEXT libc_geteuid_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_geteuid(SB) - GLOBL ·libc_geteuid_trampoline_addr(SB), RODATA, $4 DATA ·libc_geteuid_trampoline_addr(SB)/4, $libc_geteuid_trampoline<>(SB) TEXT libc_getgid_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_getgid(SB) - GLOBL ·libc_getgid_trampoline_addr(SB), RODATA, $4 DATA ·libc_getgid_trampoline_addr(SB)/4, $libc_getgid_trampoline<>(SB) TEXT libc_getpgid_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_getpgid(SB) - GLOBL ·libc_getpgid_trampoline_addr(SB), RODATA, $4 DATA ·libc_getpgid_trampoline_addr(SB)/4, $libc_getpgid_trampoline<>(SB) TEXT libc_getpgrp_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_getpgrp(SB) - GLOBL ·libc_getpgrp_trampoline_addr(SB), RODATA, $4 DATA ·libc_getpgrp_trampoline_addr(SB)/4, 
$libc_getpgrp_trampoline<>(SB) TEXT libc_getpid_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_getpid(SB) - GLOBL ·libc_getpid_trampoline_addr(SB), RODATA, $4 DATA ·libc_getpid_trampoline_addr(SB)/4, $libc_getpid_trampoline<>(SB) TEXT libc_getppid_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_getppid(SB) - GLOBL ·libc_getppid_trampoline_addr(SB), RODATA, $4 DATA ·libc_getppid_trampoline_addr(SB)/4, $libc_getppid_trampoline<>(SB) TEXT libc_getpriority_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_getpriority(SB) - GLOBL ·libc_getpriority_trampoline_addr(SB), RODATA, $4 DATA ·libc_getpriority_trampoline_addr(SB)/4, $libc_getpriority_trampoline<>(SB) TEXT libc_getrlimit_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_getrlimit(SB) - GLOBL ·libc_getrlimit_trampoline_addr(SB), RODATA, $4 DATA ·libc_getrlimit_trampoline_addr(SB)/4, $libc_getrlimit_trampoline<>(SB) TEXT libc_getrtable_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_getrtable(SB) - GLOBL ·libc_getrtable_trampoline_addr(SB), RODATA, $4 DATA ·libc_getrtable_trampoline_addr(SB)/4, $libc_getrtable_trampoline<>(SB) TEXT libc_getrusage_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_getrusage(SB) - GLOBL ·libc_getrusage_trampoline_addr(SB), RODATA, $4 DATA ·libc_getrusage_trampoline_addr(SB)/4, $libc_getrusage_trampoline<>(SB) TEXT libc_getsid_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_getsid(SB) - GLOBL ·libc_getsid_trampoline_addr(SB), RODATA, $4 DATA ·libc_getsid_trampoline_addr(SB)/4, $libc_getsid_trampoline<>(SB) TEXT libc_gettimeofday_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_gettimeofday(SB) - GLOBL ·libc_gettimeofday_trampoline_addr(SB), RODATA, $4 DATA ·libc_gettimeofday_trampoline_addr(SB)/4, $libc_gettimeofday_trampoline<>(SB) TEXT libc_getuid_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_getuid(SB) - GLOBL ·libc_getuid_trampoline_addr(SB), RODATA, $4 DATA ·libc_getuid_trampoline_addr(SB)/4, $libc_getuid_trampoline<>(SB) TEXT libc_issetugid_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_issetugid(SB) - GLOBL ·libc_issetugid_trampoline_addr(SB), RODATA, $4 DATA 
·libc_issetugid_trampoline_addr(SB)/4, $libc_issetugid_trampoline<>(SB) TEXT libc_kill_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_kill(SB) - GLOBL ·libc_kill_trampoline_addr(SB), RODATA, $4 DATA ·libc_kill_trampoline_addr(SB)/4, $libc_kill_trampoline<>(SB) TEXT libc_kqueue_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_kqueue(SB) - GLOBL ·libc_kqueue_trampoline_addr(SB), RODATA, $4 DATA ·libc_kqueue_trampoline_addr(SB)/4, $libc_kqueue_trampoline<>(SB) TEXT libc_lchown_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_lchown(SB) - GLOBL ·libc_lchown_trampoline_addr(SB), RODATA, $4 DATA ·libc_lchown_trampoline_addr(SB)/4, $libc_lchown_trampoline<>(SB) TEXT libc_link_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_link(SB) - GLOBL ·libc_link_trampoline_addr(SB), RODATA, $4 DATA ·libc_link_trampoline_addr(SB)/4, $libc_link_trampoline<>(SB) TEXT libc_linkat_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_linkat(SB) - GLOBL ·libc_linkat_trampoline_addr(SB), RODATA, $4 DATA ·libc_linkat_trampoline_addr(SB)/4, $libc_linkat_trampoline<>(SB) TEXT libc_listen_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_listen(SB) - GLOBL ·libc_listen_trampoline_addr(SB), RODATA, $4 DATA ·libc_listen_trampoline_addr(SB)/4, $libc_listen_trampoline<>(SB) TEXT libc_lstat_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_lstat(SB) - GLOBL ·libc_lstat_trampoline_addr(SB), RODATA, $4 DATA ·libc_lstat_trampoline_addr(SB)/4, $libc_lstat_trampoline<>(SB) TEXT libc_mkdir_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_mkdir(SB) - GLOBL ·libc_mkdir_trampoline_addr(SB), RODATA, $4 DATA ·libc_mkdir_trampoline_addr(SB)/4, $libc_mkdir_trampoline<>(SB) TEXT libc_mkdirat_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_mkdirat(SB) - GLOBL ·libc_mkdirat_trampoline_addr(SB), RODATA, $4 DATA ·libc_mkdirat_trampoline_addr(SB)/4, $libc_mkdirat_trampoline<>(SB) TEXT libc_mkfifo_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_mkfifo(SB) - GLOBL ·libc_mkfifo_trampoline_addr(SB), RODATA, $4 DATA ·libc_mkfifo_trampoline_addr(SB)/4, $libc_mkfifo_trampoline<>(SB) TEXT 
libc_mkfifoat_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_mkfifoat(SB) - GLOBL ·libc_mkfifoat_trampoline_addr(SB), RODATA, $4 DATA ·libc_mkfifoat_trampoline_addr(SB)/4, $libc_mkfifoat_trampoline<>(SB) TEXT libc_mknod_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_mknod(SB) - GLOBL ·libc_mknod_trampoline_addr(SB), RODATA, $4 DATA ·libc_mknod_trampoline_addr(SB)/4, $libc_mknod_trampoline<>(SB) TEXT libc_mknodat_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_mknodat(SB) - GLOBL ·libc_mknodat_trampoline_addr(SB), RODATA, $4 DATA ·libc_mknodat_trampoline_addr(SB)/4, $libc_mknodat_trampoline<>(SB) TEXT libc_nanosleep_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_nanosleep(SB) - GLOBL ·libc_nanosleep_trampoline_addr(SB), RODATA, $4 DATA ·libc_nanosleep_trampoline_addr(SB)/4, $libc_nanosleep_trampoline<>(SB) TEXT libc_open_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_open(SB) - GLOBL ·libc_open_trampoline_addr(SB), RODATA, $4 DATA ·libc_open_trampoline_addr(SB)/4, $libc_open_trampoline<>(SB) TEXT libc_openat_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_openat(SB) - GLOBL ·libc_openat_trampoline_addr(SB), RODATA, $4 DATA ·libc_openat_trampoline_addr(SB)/4, $libc_openat_trampoline<>(SB) TEXT libc_pathconf_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_pathconf(SB) - GLOBL ·libc_pathconf_trampoline_addr(SB), RODATA, $4 DATA ·libc_pathconf_trampoline_addr(SB)/4, $libc_pathconf_trampoline<>(SB) TEXT libc_pread_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_pread(SB) - GLOBL ·libc_pread_trampoline_addr(SB), RODATA, $4 DATA ·libc_pread_trampoline_addr(SB)/4, $libc_pread_trampoline<>(SB) TEXT libc_pwrite_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_pwrite(SB) - GLOBL ·libc_pwrite_trampoline_addr(SB), RODATA, $4 DATA ·libc_pwrite_trampoline_addr(SB)/4, $libc_pwrite_trampoline<>(SB) TEXT libc_read_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_read(SB) - GLOBL ·libc_read_trampoline_addr(SB), RODATA, $4 DATA ·libc_read_trampoline_addr(SB)/4, $libc_read_trampoline<>(SB) TEXT libc_readlink_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_readlink(SB) - GLOBL 
·libc_readlink_trampoline_addr(SB), RODATA, $4 DATA ·libc_readlink_trampoline_addr(SB)/4, $libc_readlink_trampoline<>(SB) TEXT libc_readlinkat_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_readlinkat(SB) - GLOBL ·libc_readlinkat_trampoline_addr(SB), RODATA, $4 DATA ·libc_readlinkat_trampoline_addr(SB)/4, $libc_readlinkat_trampoline<>(SB) TEXT libc_rename_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_rename(SB) - GLOBL ·libc_rename_trampoline_addr(SB), RODATA, $4 DATA ·libc_rename_trampoline_addr(SB)/4, $libc_rename_trampoline<>(SB) TEXT libc_renameat_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_renameat(SB) - GLOBL ·libc_renameat_trampoline_addr(SB), RODATA, $4 DATA ·libc_renameat_trampoline_addr(SB)/4, $libc_renameat_trampoline<>(SB) TEXT libc_revoke_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_revoke(SB) - GLOBL ·libc_revoke_trampoline_addr(SB), RODATA, $4 DATA ·libc_revoke_trampoline_addr(SB)/4, $libc_revoke_trampoline<>(SB) TEXT libc_rmdir_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_rmdir(SB) - GLOBL ·libc_rmdir_trampoline_addr(SB), RODATA, $4 DATA ·libc_rmdir_trampoline_addr(SB)/4, $libc_rmdir_trampoline<>(SB) TEXT libc_lseek_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_lseek(SB) - GLOBL ·libc_lseek_trampoline_addr(SB), RODATA, $4 DATA ·libc_lseek_trampoline_addr(SB)/4, $libc_lseek_trampoline<>(SB) TEXT libc_select_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_select(SB) - GLOBL ·libc_select_trampoline_addr(SB), RODATA, $4 DATA ·libc_select_trampoline_addr(SB)/4, $libc_select_trampoline<>(SB) TEXT libc_setegid_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_setegid(SB) - GLOBL ·libc_setegid_trampoline_addr(SB), RODATA, $4 DATA ·libc_setegid_trampoline_addr(SB)/4, $libc_setegid_trampoline<>(SB) TEXT libc_seteuid_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_seteuid(SB) - GLOBL ·libc_seteuid_trampoline_addr(SB), RODATA, $4 DATA ·libc_seteuid_trampoline_addr(SB)/4, $libc_seteuid_trampoline<>(SB) TEXT libc_setgid_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_setgid(SB) - GLOBL ·libc_setgid_trampoline_addr(SB), RODATA, $4 DATA 
·libc_setgid_trampoline_addr(SB)/4, $libc_setgid_trampoline<>(SB) TEXT libc_setlogin_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_setlogin(SB) - GLOBL ·libc_setlogin_trampoline_addr(SB), RODATA, $4 DATA ·libc_setlogin_trampoline_addr(SB)/4, $libc_setlogin_trampoline<>(SB) TEXT libc_setpgid_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_setpgid(SB) - GLOBL ·libc_setpgid_trampoline_addr(SB), RODATA, $4 DATA ·libc_setpgid_trampoline_addr(SB)/4, $libc_setpgid_trampoline<>(SB) TEXT libc_setpriority_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_setpriority(SB) - GLOBL ·libc_setpriority_trampoline_addr(SB), RODATA, $4 DATA ·libc_setpriority_trampoline_addr(SB)/4, $libc_setpriority_trampoline<>(SB) TEXT libc_setregid_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_setregid(SB) - GLOBL ·libc_setregid_trampoline_addr(SB), RODATA, $4 DATA ·libc_setregid_trampoline_addr(SB)/4, $libc_setregid_trampoline<>(SB) TEXT libc_setreuid_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_setreuid(SB) - GLOBL ·libc_setreuid_trampoline_addr(SB), RODATA, $4 DATA ·libc_setreuid_trampoline_addr(SB)/4, $libc_setreuid_trampoline<>(SB) TEXT libc_setresgid_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_setresgid(SB) - GLOBL ·libc_setresgid_trampoline_addr(SB), RODATA, $4 DATA ·libc_setresgid_trampoline_addr(SB)/4, $libc_setresgid_trampoline<>(SB) TEXT libc_setresuid_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_setresuid(SB) - GLOBL ·libc_setresuid_trampoline_addr(SB), RODATA, $4 DATA ·libc_setresuid_trampoline_addr(SB)/4, $libc_setresuid_trampoline<>(SB) TEXT libc_setrlimit_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_setrlimit(SB) - GLOBL ·libc_setrlimit_trampoline_addr(SB), RODATA, $4 DATA ·libc_setrlimit_trampoline_addr(SB)/4, $libc_setrlimit_trampoline<>(SB) TEXT libc_setrtable_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_setrtable(SB) - GLOBL ·libc_setrtable_trampoline_addr(SB), RODATA, $4 DATA ·libc_setrtable_trampoline_addr(SB)/4, $libc_setrtable_trampoline<>(SB) TEXT libc_setsid_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_setsid(SB) - GLOBL 
·libc_setsid_trampoline_addr(SB), RODATA, $4 DATA ·libc_setsid_trampoline_addr(SB)/4, $libc_setsid_trampoline<>(SB) TEXT libc_settimeofday_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_settimeofday(SB) - GLOBL ·libc_settimeofday_trampoline_addr(SB), RODATA, $4 DATA ·libc_settimeofday_trampoline_addr(SB)/4, $libc_settimeofday_trampoline<>(SB) TEXT libc_setuid_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_setuid(SB) - GLOBL ·libc_setuid_trampoline_addr(SB), RODATA, $4 DATA ·libc_setuid_trampoline_addr(SB)/4, $libc_setuid_trampoline<>(SB) TEXT libc_stat_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_stat(SB) - GLOBL ·libc_stat_trampoline_addr(SB), RODATA, $4 DATA ·libc_stat_trampoline_addr(SB)/4, $libc_stat_trampoline<>(SB) TEXT libc_statfs_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_statfs(SB) - GLOBL ·libc_statfs_trampoline_addr(SB), RODATA, $4 DATA ·libc_statfs_trampoline_addr(SB)/4, $libc_statfs_trampoline<>(SB) TEXT libc_symlink_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_symlink(SB) - GLOBL ·libc_symlink_trampoline_addr(SB), RODATA, $4 DATA ·libc_symlink_trampoline_addr(SB)/4, $libc_symlink_trampoline<>(SB) TEXT libc_symlinkat_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_symlinkat(SB) - GLOBL ·libc_symlinkat_trampoline_addr(SB), RODATA, $4 DATA ·libc_symlinkat_trampoline_addr(SB)/4, $libc_symlinkat_trampoline<>(SB) TEXT libc_sync_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_sync(SB) - GLOBL ·libc_sync_trampoline_addr(SB), RODATA, $4 DATA ·libc_sync_trampoline_addr(SB)/4, $libc_sync_trampoline<>(SB) TEXT libc_truncate_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_truncate(SB) - GLOBL ·libc_truncate_trampoline_addr(SB), RODATA, $4 DATA ·libc_truncate_trampoline_addr(SB)/4, $libc_truncate_trampoline<>(SB) TEXT libc_umask_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_umask(SB) - GLOBL ·libc_umask_trampoline_addr(SB), RODATA, $4 DATA ·libc_umask_trampoline_addr(SB)/4, $libc_umask_trampoline<>(SB) TEXT libc_unlink_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_unlink(SB) - GLOBL ·libc_unlink_trampoline_addr(SB), RODATA, $4 DATA 
·libc_unlink_trampoline_addr(SB)/4, $libc_unlink_trampoline<>(SB) TEXT libc_unlinkat_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_unlinkat(SB) - GLOBL ·libc_unlinkat_trampoline_addr(SB), RODATA, $4 DATA ·libc_unlinkat_trampoline_addr(SB)/4, $libc_unlinkat_trampoline<>(SB) TEXT libc_unmount_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_unmount(SB) - GLOBL ·libc_unmount_trampoline_addr(SB), RODATA, $4 DATA ·libc_unmount_trampoline_addr(SB)/4, $libc_unmount_trampoline<>(SB) TEXT libc_write_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_write(SB) - GLOBL ·libc_write_trampoline_addr(SB), RODATA, $4 DATA ·libc_write_trampoline_addr(SB)/4, $libc_write_trampoline<>(SB) TEXT libc_mmap_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_mmap(SB) - GLOBL ·libc_mmap_trampoline_addr(SB), RODATA, $4 DATA ·libc_mmap_trampoline_addr(SB)/4, $libc_mmap_trampoline<>(SB) TEXT libc_munmap_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_munmap(SB) - GLOBL ·libc_munmap_trampoline_addr(SB), RODATA, $4 DATA ·libc_munmap_trampoline_addr(SB)/4, $libc_munmap_trampoline<>(SB) TEXT libc_utimensat_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_utimensat(SB) - GLOBL ·libc_utimensat_trampoline_addr(SB), RODATA, $4 DATA ·libc_utimensat_trampoline_addr(SB)/4, $libc_utimensat_trampoline<>(SB) diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_openbsd_arm64.go b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_openbsd_arm64.go index 800aab6e3e79..5e35600a60c3 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_openbsd_arm64.go +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_openbsd_arm64.go @@ -527,6 +527,14 @@ func ioctl(fd int, req uint, arg uintptr) (err error) { return } +func ioctlPtr(fd int, req uint, arg unsafe.Pointer) (err error) { + _, _, e1 := syscall_syscall(libc_ioctl_trampoline_addr, uintptr(fd), uintptr(req), uintptr(arg)) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + var libc_ioctl_trampoline_addr uintptr //go:cgo_import_dynamic libc_ioctl ioctl 
"libc.so" @@ -696,6 +704,20 @@ var libc_chroot_trampoline_addr uintptr // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func ClockGettime(clockid int32, time *Timespec) (err error) { + _, _, e1 := syscall_syscall(libc_clock_gettime_trampoline_addr, uintptr(clockid), uintptr(unsafe.Pointer(time)), 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +var libc_clock_gettime_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_clock_gettime clock_gettime "libc.so" + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func Close(fd int) (err error) { _, _, e1 := syscall_syscall(libc_close_trampoline_addr, uintptr(fd), 0, 0) if e1 != 0 { diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_openbsd_arm64.s b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_openbsd_arm64.s index 4efeff9abbf4..484bb42e0a89 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_openbsd_arm64.s +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_openbsd_arm64.s @@ -5,792 +5,665 @@ TEXT libc_getgroups_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_getgroups(SB) - GLOBL ·libc_getgroups_trampoline_addr(SB), RODATA, $8 DATA ·libc_getgroups_trampoline_addr(SB)/8, $libc_getgroups_trampoline<>(SB) TEXT libc_setgroups_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_setgroups(SB) - GLOBL ·libc_setgroups_trampoline_addr(SB), RODATA, $8 DATA ·libc_setgroups_trampoline_addr(SB)/8, $libc_setgroups_trampoline<>(SB) TEXT libc_wait4_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_wait4(SB) - GLOBL ·libc_wait4_trampoline_addr(SB), RODATA, $8 DATA ·libc_wait4_trampoline_addr(SB)/8, $libc_wait4_trampoline<>(SB) TEXT libc_accept_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_accept(SB) - GLOBL ·libc_accept_trampoline_addr(SB), RODATA, $8 DATA ·libc_accept_trampoline_addr(SB)/8, $libc_accept_trampoline<>(SB) TEXT libc_bind_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_bind(SB) - GLOBL ·libc_bind_trampoline_addr(SB), RODATA, $8 DATA 
·libc_bind_trampoline_addr(SB)/8, $libc_bind_trampoline<>(SB) TEXT libc_connect_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_connect(SB) - GLOBL ·libc_connect_trampoline_addr(SB), RODATA, $8 DATA ·libc_connect_trampoline_addr(SB)/8, $libc_connect_trampoline<>(SB) TEXT libc_socket_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_socket(SB) - GLOBL ·libc_socket_trampoline_addr(SB), RODATA, $8 DATA ·libc_socket_trampoline_addr(SB)/8, $libc_socket_trampoline<>(SB) TEXT libc_getsockopt_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_getsockopt(SB) - GLOBL ·libc_getsockopt_trampoline_addr(SB), RODATA, $8 DATA ·libc_getsockopt_trampoline_addr(SB)/8, $libc_getsockopt_trampoline<>(SB) TEXT libc_setsockopt_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_setsockopt(SB) - GLOBL ·libc_setsockopt_trampoline_addr(SB), RODATA, $8 DATA ·libc_setsockopt_trampoline_addr(SB)/8, $libc_setsockopt_trampoline<>(SB) TEXT libc_getpeername_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_getpeername(SB) - GLOBL ·libc_getpeername_trampoline_addr(SB), RODATA, $8 DATA ·libc_getpeername_trampoline_addr(SB)/8, $libc_getpeername_trampoline<>(SB) TEXT libc_getsockname_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_getsockname(SB) - GLOBL ·libc_getsockname_trampoline_addr(SB), RODATA, $8 DATA ·libc_getsockname_trampoline_addr(SB)/8, $libc_getsockname_trampoline<>(SB) TEXT libc_shutdown_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_shutdown(SB) - GLOBL ·libc_shutdown_trampoline_addr(SB), RODATA, $8 DATA ·libc_shutdown_trampoline_addr(SB)/8, $libc_shutdown_trampoline<>(SB) TEXT libc_socketpair_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_socketpair(SB) - GLOBL ·libc_socketpair_trampoline_addr(SB), RODATA, $8 DATA ·libc_socketpair_trampoline_addr(SB)/8, $libc_socketpair_trampoline<>(SB) TEXT libc_recvfrom_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_recvfrom(SB) - GLOBL ·libc_recvfrom_trampoline_addr(SB), RODATA, $8 DATA ·libc_recvfrom_trampoline_addr(SB)/8, $libc_recvfrom_trampoline<>(SB) TEXT libc_sendto_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_sendto(SB) - GLOBL 
·libc_sendto_trampoline_addr(SB), RODATA, $8 DATA ·libc_sendto_trampoline_addr(SB)/8, $libc_sendto_trampoline<>(SB) TEXT libc_recvmsg_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_recvmsg(SB) - GLOBL ·libc_recvmsg_trampoline_addr(SB), RODATA, $8 DATA ·libc_recvmsg_trampoline_addr(SB)/8, $libc_recvmsg_trampoline<>(SB) TEXT libc_sendmsg_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_sendmsg(SB) - GLOBL ·libc_sendmsg_trampoline_addr(SB), RODATA, $8 DATA ·libc_sendmsg_trampoline_addr(SB)/8, $libc_sendmsg_trampoline<>(SB) TEXT libc_kevent_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_kevent(SB) - GLOBL ·libc_kevent_trampoline_addr(SB), RODATA, $8 DATA ·libc_kevent_trampoline_addr(SB)/8, $libc_kevent_trampoline<>(SB) TEXT libc_utimes_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_utimes(SB) - GLOBL ·libc_utimes_trampoline_addr(SB), RODATA, $8 DATA ·libc_utimes_trampoline_addr(SB)/8, $libc_utimes_trampoline<>(SB) TEXT libc_futimes_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_futimes(SB) - GLOBL ·libc_futimes_trampoline_addr(SB), RODATA, $8 DATA ·libc_futimes_trampoline_addr(SB)/8, $libc_futimes_trampoline<>(SB) TEXT libc_poll_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_poll(SB) - GLOBL ·libc_poll_trampoline_addr(SB), RODATA, $8 DATA ·libc_poll_trampoline_addr(SB)/8, $libc_poll_trampoline<>(SB) TEXT libc_madvise_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_madvise(SB) - GLOBL ·libc_madvise_trampoline_addr(SB), RODATA, $8 DATA ·libc_madvise_trampoline_addr(SB)/8, $libc_madvise_trampoline<>(SB) TEXT libc_mlock_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_mlock(SB) - GLOBL ·libc_mlock_trampoline_addr(SB), RODATA, $8 DATA ·libc_mlock_trampoline_addr(SB)/8, $libc_mlock_trampoline<>(SB) TEXT libc_mlockall_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_mlockall(SB) - GLOBL ·libc_mlockall_trampoline_addr(SB), RODATA, $8 DATA ·libc_mlockall_trampoline_addr(SB)/8, $libc_mlockall_trampoline<>(SB) TEXT libc_mprotect_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_mprotect(SB) - GLOBL ·libc_mprotect_trampoline_addr(SB), RODATA, $8 DATA 
·libc_mprotect_trampoline_addr(SB)/8, $libc_mprotect_trampoline<>(SB) TEXT libc_msync_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_msync(SB) - GLOBL ·libc_msync_trampoline_addr(SB), RODATA, $8 DATA ·libc_msync_trampoline_addr(SB)/8, $libc_msync_trampoline<>(SB) TEXT libc_munlock_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_munlock(SB) - GLOBL ·libc_munlock_trampoline_addr(SB), RODATA, $8 DATA ·libc_munlock_trampoline_addr(SB)/8, $libc_munlock_trampoline<>(SB) TEXT libc_munlockall_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_munlockall(SB) - GLOBL ·libc_munlockall_trampoline_addr(SB), RODATA, $8 DATA ·libc_munlockall_trampoline_addr(SB)/8, $libc_munlockall_trampoline<>(SB) TEXT libc_pipe2_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_pipe2(SB) - GLOBL ·libc_pipe2_trampoline_addr(SB), RODATA, $8 DATA ·libc_pipe2_trampoline_addr(SB)/8, $libc_pipe2_trampoline<>(SB) TEXT libc_getdents_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_getdents(SB) - GLOBL ·libc_getdents_trampoline_addr(SB), RODATA, $8 DATA ·libc_getdents_trampoline_addr(SB)/8, $libc_getdents_trampoline<>(SB) TEXT libc_getcwd_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_getcwd(SB) - GLOBL ·libc_getcwd_trampoline_addr(SB), RODATA, $8 DATA ·libc_getcwd_trampoline_addr(SB)/8, $libc_getcwd_trampoline<>(SB) TEXT libc_ioctl_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_ioctl(SB) - GLOBL ·libc_ioctl_trampoline_addr(SB), RODATA, $8 DATA ·libc_ioctl_trampoline_addr(SB)/8, $libc_ioctl_trampoline<>(SB) TEXT libc_sysctl_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_sysctl(SB) - GLOBL ·libc_sysctl_trampoline_addr(SB), RODATA, $8 DATA ·libc_sysctl_trampoline_addr(SB)/8, $libc_sysctl_trampoline<>(SB) TEXT libc_ppoll_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_ppoll(SB) - GLOBL ·libc_ppoll_trampoline_addr(SB), RODATA, $8 DATA ·libc_ppoll_trampoline_addr(SB)/8, $libc_ppoll_trampoline<>(SB) TEXT libc_access_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_access(SB) - GLOBL ·libc_access_trampoline_addr(SB), RODATA, $8 DATA ·libc_access_trampoline_addr(SB)/8, $libc_access_trampoline<>(SB) TEXT 
libc_adjtime_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_adjtime(SB) - GLOBL ·libc_adjtime_trampoline_addr(SB), RODATA, $8 DATA ·libc_adjtime_trampoline_addr(SB)/8, $libc_adjtime_trampoline<>(SB) TEXT libc_chdir_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_chdir(SB) - GLOBL ·libc_chdir_trampoline_addr(SB), RODATA, $8 DATA ·libc_chdir_trampoline_addr(SB)/8, $libc_chdir_trampoline<>(SB) TEXT libc_chflags_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_chflags(SB) - GLOBL ·libc_chflags_trampoline_addr(SB), RODATA, $8 DATA ·libc_chflags_trampoline_addr(SB)/8, $libc_chflags_trampoline<>(SB) TEXT libc_chmod_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_chmod(SB) - GLOBL ·libc_chmod_trampoline_addr(SB), RODATA, $8 DATA ·libc_chmod_trampoline_addr(SB)/8, $libc_chmod_trampoline<>(SB) TEXT libc_chown_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_chown(SB) - GLOBL ·libc_chown_trampoline_addr(SB), RODATA, $8 DATA ·libc_chown_trampoline_addr(SB)/8, $libc_chown_trampoline<>(SB) TEXT libc_chroot_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_chroot(SB) - GLOBL ·libc_chroot_trampoline_addr(SB), RODATA, $8 DATA ·libc_chroot_trampoline_addr(SB)/8, $libc_chroot_trampoline<>(SB) +TEXT libc_clock_gettime_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_clock_gettime(SB) +GLOBL ·libc_clock_gettime_trampoline_addr(SB), RODATA, $8 +DATA ·libc_clock_gettime_trampoline_addr(SB)/8, $libc_clock_gettime_trampoline<>(SB) + TEXT libc_close_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_close(SB) - GLOBL ·libc_close_trampoline_addr(SB), RODATA, $8 DATA ·libc_close_trampoline_addr(SB)/8, $libc_close_trampoline<>(SB) TEXT libc_dup_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_dup(SB) - GLOBL ·libc_dup_trampoline_addr(SB), RODATA, $8 DATA ·libc_dup_trampoline_addr(SB)/8, $libc_dup_trampoline<>(SB) TEXT libc_dup2_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_dup2(SB) - GLOBL ·libc_dup2_trampoline_addr(SB), RODATA, $8 DATA ·libc_dup2_trampoline_addr(SB)/8, $libc_dup2_trampoline<>(SB) TEXT libc_dup3_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_dup3(SB) - GLOBL 
·libc_dup3_trampoline_addr(SB), RODATA, $8 DATA ·libc_dup3_trampoline_addr(SB)/8, $libc_dup3_trampoline<>(SB) TEXT libc_exit_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_exit(SB) - GLOBL ·libc_exit_trampoline_addr(SB), RODATA, $8 DATA ·libc_exit_trampoline_addr(SB)/8, $libc_exit_trampoline<>(SB) TEXT libc_faccessat_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_faccessat(SB) - GLOBL ·libc_faccessat_trampoline_addr(SB), RODATA, $8 DATA ·libc_faccessat_trampoline_addr(SB)/8, $libc_faccessat_trampoline<>(SB) TEXT libc_fchdir_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_fchdir(SB) - GLOBL ·libc_fchdir_trampoline_addr(SB), RODATA, $8 DATA ·libc_fchdir_trampoline_addr(SB)/8, $libc_fchdir_trampoline<>(SB) TEXT libc_fchflags_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_fchflags(SB) - GLOBL ·libc_fchflags_trampoline_addr(SB), RODATA, $8 DATA ·libc_fchflags_trampoline_addr(SB)/8, $libc_fchflags_trampoline<>(SB) TEXT libc_fchmod_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_fchmod(SB) - GLOBL ·libc_fchmod_trampoline_addr(SB), RODATA, $8 DATA ·libc_fchmod_trampoline_addr(SB)/8, $libc_fchmod_trampoline<>(SB) TEXT libc_fchmodat_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_fchmodat(SB) - GLOBL ·libc_fchmodat_trampoline_addr(SB), RODATA, $8 DATA ·libc_fchmodat_trampoline_addr(SB)/8, $libc_fchmodat_trampoline<>(SB) TEXT libc_fchown_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_fchown(SB) - GLOBL ·libc_fchown_trampoline_addr(SB), RODATA, $8 DATA ·libc_fchown_trampoline_addr(SB)/8, $libc_fchown_trampoline<>(SB) TEXT libc_fchownat_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_fchownat(SB) - GLOBL ·libc_fchownat_trampoline_addr(SB), RODATA, $8 DATA ·libc_fchownat_trampoline_addr(SB)/8, $libc_fchownat_trampoline<>(SB) TEXT libc_flock_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_flock(SB) - GLOBL ·libc_flock_trampoline_addr(SB), RODATA, $8 DATA ·libc_flock_trampoline_addr(SB)/8, $libc_flock_trampoline<>(SB) TEXT libc_fpathconf_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_fpathconf(SB) - GLOBL ·libc_fpathconf_trampoline_addr(SB), RODATA, $8 DATA 
·libc_fpathconf_trampoline_addr(SB)/8, $libc_fpathconf_trampoline<>(SB) TEXT libc_fstat_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_fstat(SB) - GLOBL ·libc_fstat_trampoline_addr(SB), RODATA, $8 DATA ·libc_fstat_trampoline_addr(SB)/8, $libc_fstat_trampoline<>(SB) TEXT libc_fstatat_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_fstatat(SB) - GLOBL ·libc_fstatat_trampoline_addr(SB), RODATA, $8 DATA ·libc_fstatat_trampoline_addr(SB)/8, $libc_fstatat_trampoline<>(SB) TEXT libc_fstatfs_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_fstatfs(SB) - GLOBL ·libc_fstatfs_trampoline_addr(SB), RODATA, $8 DATA ·libc_fstatfs_trampoline_addr(SB)/8, $libc_fstatfs_trampoline<>(SB) TEXT libc_fsync_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_fsync(SB) - GLOBL ·libc_fsync_trampoline_addr(SB), RODATA, $8 DATA ·libc_fsync_trampoline_addr(SB)/8, $libc_fsync_trampoline<>(SB) TEXT libc_ftruncate_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_ftruncate(SB) - GLOBL ·libc_ftruncate_trampoline_addr(SB), RODATA, $8 DATA ·libc_ftruncate_trampoline_addr(SB)/8, $libc_ftruncate_trampoline<>(SB) TEXT libc_getegid_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_getegid(SB) - GLOBL ·libc_getegid_trampoline_addr(SB), RODATA, $8 DATA ·libc_getegid_trampoline_addr(SB)/8, $libc_getegid_trampoline<>(SB) TEXT libc_geteuid_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_geteuid(SB) - GLOBL ·libc_geteuid_trampoline_addr(SB), RODATA, $8 DATA ·libc_geteuid_trampoline_addr(SB)/8, $libc_geteuid_trampoline<>(SB) TEXT libc_getgid_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_getgid(SB) - GLOBL ·libc_getgid_trampoline_addr(SB), RODATA, $8 DATA ·libc_getgid_trampoline_addr(SB)/8, $libc_getgid_trampoline<>(SB) TEXT libc_getpgid_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_getpgid(SB) - GLOBL ·libc_getpgid_trampoline_addr(SB), RODATA, $8 DATA ·libc_getpgid_trampoline_addr(SB)/8, $libc_getpgid_trampoline<>(SB) TEXT libc_getpgrp_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_getpgrp(SB) - GLOBL ·libc_getpgrp_trampoline_addr(SB), RODATA, $8 DATA ·libc_getpgrp_trampoline_addr(SB)/8, 
$libc_getpgrp_trampoline<>(SB) TEXT libc_getpid_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_getpid(SB) - GLOBL ·libc_getpid_trampoline_addr(SB), RODATA, $8 DATA ·libc_getpid_trampoline_addr(SB)/8, $libc_getpid_trampoline<>(SB) TEXT libc_getppid_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_getppid(SB) - GLOBL ·libc_getppid_trampoline_addr(SB), RODATA, $8 DATA ·libc_getppid_trampoline_addr(SB)/8, $libc_getppid_trampoline<>(SB) TEXT libc_getpriority_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_getpriority(SB) - GLOBL ·libc_getpriority_trampoline_addr(SB), RODATA, $8 DATA ·libc_getpriority_trampoline_addr(SB)/8, $libc_getpriority_trampoline<>(SB) TEXT libc_getrlimit_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_getrlimit(SB) - GLOBL ·libc_getrlimit_trampoline_addr(SB), RODATA, $8 DATA ·libc_getrlimit_trampoline_addr(SB)/8, $libc_getrlimit_trampoline<>(SB) TEXT libc_getrtable_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_getrtable(SB) - GLOBL ·libc_getrtable_trampoline_addr(SB), RODATA, $8 DATA ·libc_getrtable_trampoline_addr(SB)/8, $libc_getrtable_trampoline<>(SB) TEXT libc_getrusage_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_getrusage(SB) - GLOBL ·libc_getrusage_trampoline_addr(SB), RODATA, $8 DATA ·libc_getrusage_trampoline_addr(SB)/8, $libc_getrusage_trampoline<>(SB) TEXT libc_getsid_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_getsid(SB) - GLOBL ·libc_getsid_trampoline_addr(SB), RODATA, $8 DATA ·libc_getsid_trampoline_addr(SB)/8, $libc_getsid_trampoline<>(SB) TEXT libc_gettimeofday_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_gettimeofday(SB) - GLOBL ·libc_gettimeofday_trampoline_addr(SB), RODATA, $8 DATA ·libc_gettimeofday_trampoline_addr(SB)/8, $libc_gettimeofday_trampoline<>(SB) TEXT libc_getuid_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_getuid(SB) - GLOBL ·libc_getuid_trampoline_addr(SB), RODATA, $8 DATA ·libc_getuid_trampoline_addr(SB)/8, $libc_getuid_trampoline<>(SB) TEXT libc_issetugid_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_issetugid(SB) - GLOBL ·libc_issetugid_trampoline_addr(SB), RODATA, $8 DATA 
·libc_issetugid_trampoline_addr(SB)/8, $libc_issetugid_trampoline<>(SB) TEXT libc_kill_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_kill(SB) - GLOBL ·libc_kill_trampoline_addr(SB), RODATA, $8 DATA ·libc_kill_trampoline_addr(SB)/8, $libc_kill_trampoline<>(SB) TEXT libc_kqueue_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_kqueue(SB) - GLOBL ·libc_kqueue_trampoline_addr(SB), RODATA, $8 DATA ·libc_kqueue_trampoline_addr(SB)/8, $libc_kqueue_trampoline<>(SB) TEXT libc_lchown_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_lchown(SB) - GLOBL ·libc_lchown_trampoline_addr(SB), RODATA, $8 DATA ·libc_lchown_trampoline_addr(SB)/8, $libc_lchown_trampoline<>(SB) TEXT libc_link_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_link(SB) - GLOBL ·libc_link_trampoline_addr(SB), RODATA, $8 DATA ·libc_link_trampoline_addr(SB)/8, $libc_link_trampoline<>(SB) TEXT libc_linkat_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_linkat(SB) - GLOBL ·libc_linkat_trampoline_addr(SB), RODATA, $8 DATA ·libc_linkat_trampoline_addr(SB)/8, $libc_linkat_trampoline<>(SB) TEXT libc_listen_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_listen(SB) - GLOBL ·libc_listen_trampoline_addr(SB), RODATA, $8 DATA ·libc_listen_trampoline_addr(SB)/8, $libc_listen_trampoline<>(SB) TEXT libc_lstat_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_lstat(SB) - GLOBL ·libc_lstat_trampoline_addr(SB), RODATA, $8 DATA ·libc_lstat_trampoline_addr(SB)/8, $libc_lstat_trampoline<>(SB) TEXT libc_mkdir_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_mkdir(SB) - GLOBL ·libc_mkdir_trampoline_addr(SB), RODATA, $8 DATA ·libc_mkdir_trampoline_addr(SB)/8, $libc_mkdir_trampoline<>(SB) TEXT libc_mkdirat_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_mkdirat(SB) - GLOBL ·libc_mkdirat_trampoline_addr(SB), RODATA, $8 DATA ·libc_mkdirat_trampoline_addr(SB)/8, $libc_mkdirat_trampoline<>(SB) TEXT libc_mkfifo_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_mkfifo(SB) - GLOBL ·libc_mkfifo_trampoline_addr(SB), RODATA, $8 DATA ·libc_mkfifo_trampoline_addr(SB)/8, $libc_mkfifo_trampoline<>(SB) TEXT 
libc_mkfifoat_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_mkfifoat(SB) - GLOBL ·libc_mkfifoat_trampoline_addr(SB), RODATA, $8 DATA ·libc_mkfifoat_trampoline_addr(SB)/8, $libc_mkfifoat_trampoline<>(SB) TEXT libc_mknod_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_mknod(SB) - GLOBL ·libc_mknod_trampoline_addr(SB), RODATA, $8 DATA ·libc_mknod_trampoline_addr(SB)/8, $libc_mknod_trampoline<>(SB) TEXT libc_mknodat_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_mknodat(SB) - GLOBL ·libc_mknodat_trampoline_addr(SB), RODATA, $8 DATA ·libc_mknodat_trampoline_addr(SB)/8, $libc_mknodat_trampoline<>(SB) TEXT libc_nanosleep_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_nanosleep(SB) - GLOBL ·libc_nanosleep_trampoline_addr(SB), RODATA, $8 DATA ·libc_nanosleep_trampoline_addr(SB)/8, $libc_nanosleep_trampoline<>(SB) TEXT libc_open_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_open(SB) - GLOBL ·libc_open_trampoline_addr(SB), RODATA, $8 DATA ·libc_open_trampoline_addr(SB)/8, $libc_open_trampoline<>(SB) TEXT libc_openat_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_openat(SB) - GLOBL ·libc_openat_trampoline_addr(SB), RODATA, $8 DATA ·libc_openat_trampoline_addr(SB)/8, $libc_openat_trampoline<>(SB) TEXT libc_pathconf_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_pathconf(SB) - GLOBL ·libc_pathconf_trampoline_addr(SB), RODATA, $8 DATA ·libc_pathconf_trampoline_addr(SB)/8, $libc_pathconf_trampoline<>(SB) TEXT libc_pread_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_pread(SB) - GLOBL ·libc_pread_trampoline_addr(SB), RODATA, $8 DATA ·libc_pread_trampoline_addr(SB)/8, $libc_pread_trampoline<>(SB) TEXT libc_pwrite_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_pwrite(SB) - GLOBL ·libc_pwrite_trampoline_addr(SB), RODATA, $8 DATA ·libc_pwrite_trampoline_addr(SB)/8, $libc_pwrite_trampoline<>(SB) TEXT libc_read_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_read(SB) - GLOBL ·libc_read_trampoline_addr(SB), RODATA, $8 DATA ·libc_read_trampoline_addr(SB)/8, $libc_read_trampoline<>(SB) TEXT libc_readlink_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_readlink(SB) - GLOBL 
·libc_readlink_trampoline_addr(SB), RODATA, $8 DATA ·libc_readlink_trampoline_addr(SB)/8, $libc_readlink_trampoline<>(SB) TEXT libc_readlinkat_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_readlinkat(SB) - GLOBL ·libc_readlinkat_trampoline_addr(SB), RODATA, $8 DATA ·libc_readlinkat_trampoline_addr(SB)/8, $libc_readlinkat_trampoline<>(SB) TEXT libc_rename_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_rename(SB) - GLOBL ·libc_rename_trampoline_addr(SB), RODATA, $8 DATA ·libc_rename_trampoline_addr(SB)/8, $libc_rename_trampoline<>(SB) TEXT libc_renameat_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_renameat(SB) - GLOBL ·libc_renameat_trampoline_addr(SB), RODATA, $8 DATA ·libc_renameat_trampoline_addr(SB)/8, $libc_renameat_trampoline<>(SB) TEXT libc_revoke_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_revoke(SB) - GLOBL ·libc_revoke_trampoline_addr(SB), RODATA, $8 DATA ·libc_revoke_trampoline_addr(SB)/8, $libc_revoke_trampoline<>(SB) TEXT libc_rmdir_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_rmdir(SB) - GLOBL ·libc_rmdir_trampoline_addr(SB), RODATA, $8 DATA ·libc_rmdir_trampoline_addr(SB)/8, $libc_rmdir_trampoline<>(SB) TEXT libc_lseek_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_lseek(SB) - GLOBL ·libc_lseek_trampoline_addr(SB), RODATA, $8 DATA ·libc_lseek_trampoline_addr(SB)/8, $libc_lseek_trampoline<>(SB) TEXT libc_select_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_select(SB) - GLOBL ·libc_select_trampoline_addr(SB), RODATA, $8 DATA ·libc_select_trampoline_addr(SB)/8, $libc_select_trampoline<>(SB) TEXT libc_setegid_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_setegid(SB) - GLOBL ·libc_setegid_trampoline_addr(SB), RODATA, $8 DATA ·libc_setegid_trampoline_addr(SB)/8, $libc_setegid_trampoline<>(SB) TEXT libc_seteuid_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_seteuid(SB) - GLOBL ·libc_seteuid_trampoline_addr(SB), RODATA, $8 DATA ·libc_seteuid_trampoline_addr(SB)/8, $libc_seteuid_trampoline<>(SB) TEXT libc_setgid_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_setgid(SB) - GLOBL ·libc_setgid_trampoline_addr(SB), RODATA, $8 DATA 
·libc_setgid_trampoline_addr(SB)/8, $libc_setgid_trampoline<>(SB) TEXT libc_setlogin_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_setlogin(SB) - GLOBL ·libc_setlogin_trampoline_addr(SB), RODATA, $8 DATA ·libc_setlogin_trampoline_addr(SB)/8, $libc_setlogin_trampoline<>(SB) TEXT libc_setpgid_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_setpgid(SB) - GLOBL ·libc_setpgid_trampoline_addr(SB), RODATA, $8 DATA ·libc_setpgid_trampoline_addr(SB)/8, $libc_setpgid_trampoline<>(SB) TEXT libc_setpriority_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_setpriority(SB) - GLOBL ·libc_setpriority_trampoline_addr(SB), RODATA, $8 DATA ·libc_setpriority_trampoline_addr(SB)/8, $libc_setpriority_trampoline<>(SB) TEXT libc_setregid_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_setregid(SB) - GLOBL ·libc_setregid_trampoline_addr(SB), RODATA, $8 DATA ·libc_setregid_trampoline_addr(SB)/8, $libc_setregid_trampoline<>(SB) TEXT libc_setreuid_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_setreuid(SB) - GLOBL ·libc_setreuid_trampoline_addr(SB), RODATA, $8 DATA ·libc_setreuid_trampoline_addr(SB)/8, $libc_setreuid_trampoline<>(SB) TEXT libc_setresgid_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_setresgid(SB) - GLOBL ·libc_setresgid_trampoline_addr(SB), RODATA, $8 DATA ·libc_setresgid_trampoline_addr(SB)/8, $libc_setresgid_trampoline<>(SB) TEXT libc_setresuid_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_setresuid(SB) - GLOBL ·libc_setresuid_trampoline_addr(SB), RODATA, $8 DATA ·libc_setresuid_trampoline_addr(SB)/8, $libc_setresuid_trampoline<>(SB) TEXT libc_setrlimit_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_setrlimit(SB) - GLOBL ·libc_setrlimit_trampoline_addr(SB), RODATA, $8 DATA ·libc_setrlimit_trampoline_addr(SB)/8, $libc_setrlimit_trampoline<>(SB) TEXT libc_setrtable_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_setrtable(SB) - GLOBL ·libc_setrtable_trampoline_addr(SB), RODATA, $8 DATA ·libc_setrtable_trampoline_addr(SB)/8, $libc_setrtable_trampoline<>(SB) TEXT libc_setsid_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_setsid(SB) - GLOBL 
·libc_setsid_trampoline_addr(SB), RODATA, $8 DATA ·libc_setsid_trampoline_addr(SB)/8, $libc_setsid_trampoline<>(SB) TEXT libc_settimeofday_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_settimeofday(SB) - GLOBL ·libc_settimeofday_trampoline_addr(SB), RODATA, $8 DATA ·libc_settimeofday_trampoline_addr(SB)/8, $libc_settimeofday_trampoline<>(SB) TEXT libc_setuid_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_setuid(SB) - GLOBL ·libc_setuid_trampoline_addr(SB), RODATA, $8 DATA ·libc_setuid_trampoline_addr(SB)/8, $libc_setuid_trampoline<>(SB) TEXT libc_stat_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_stat(SB) - GLOBL ·libc_stat_trampoline_addr(SB), RODATA, $8 DATA ·libc_stat_trampoline_addr(SB)/8, $libc_stat_trampoline<>(SB) TEXT libc_statfs_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_statfs(SB) - GLOBL ·libc_statfs_trampoline_addr(SB), RODATA, $8 DATA ·libc_statfs_trampoline_addr(SB)/8, $libc_statfs_trampoline<>(SB) TEXT libc_symlink_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_symlink(SB) - GLOBL ·libc_symlink_trampoline_addr(SB), RODATA, $8 DATA ·libc_symlink_trampoline_addr(SB)/8, $libc_symlink_trampoline<>(SB) TEXT libc_symlinkat_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_symlinkat(SB) - GLOBL ·libc_symlinkat_trampoline_addr(SB), RODATA, $8 DATA ·libc_symlinkat_trampoline_addr(SB)/8, $libc_symlinkat_trampoline<>(SB) TEXT libc_sync_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_sync(SB) - GLOBL ·libc_sync_trampoline_addr(SB), RODATA, $8 DATA ·libc_sync_trampoline_addr(SB)/8, $libc_sync_trampoline<>(SB) TEXT libc_truncate_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_truncate(SB) - GLOBL ·libc_truncate_trampoline_addr(SB), RODATA, $8 DATA ·libc_truncate_trampoline_addr(SB)/8, $libc_truncate_trampoline<>(SB) TEXT libc_umask_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_umask(SB) - GLOBL ·libc_umask_trampoline_addr(SB), RODATA, $8 DATA ·libc_umask_trampoline_addr(SB)/8, $libc_umask_trampoline<>(SB) TEXT libc_unlink_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_unlink(SB) - GLOBL ·libc_unlink_trampoline_addr(SB), RODATA, $8 DATA 
·libc_unlink_trampoline_addr(SB)/8, $libc_unlink_trampoline<>(SB) TEXT libc_unlinkat_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_unlinkat(SB) - GLOBL ·libc_unlinkat_trampoline_addr(SB), RODATA, $8 DATA ·libc_unlinkat_trampoline_addr(SB)/8, $libc_unlinkat_trampoline<>(SB) TEXT libc_unmount_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_unmount(SB) - GLOBL ·libc_unmount_trampoline_addr(SB), RODATA, $8 DATA ·libc_unmount_trampoline_addr(SB)/8, $libc_unmount_trampoline<>(SB) TEXT libc_write_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_write(SB) - GLOBL ·libc_write_trampoline_addr(SB), RODATA, $8 DATA ·libc_write_trampoline_addr(SB)/8, $libc_write_trampoline<>(SB) TEXT libc_mmap_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_mmap(SB) - GLOBL ·libc_mmap_trampoline_addr(SB), RODATA, $8 DATA ·libc_mmap_trampoline_addr(SB)/8, $libc_mmap_trampoline<>(SB) TEXT libc_munmap_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_munmap(SB) - GLOBL ·libc_munmap_trampoline_addr(SB), RODATA, $8 DATA ·libc_munmap_trampoline_addr(SB)/8, $libc_munmap_trampoline<>(SB) TEXT libc_utimensat_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_utimensat(SB) - GLOBL ·libc_utimensat_trampoline_addr(SB), RODATA, $8 DATA ·libc_utimensat_trampoline_addr(SB)/8, $libc_utimensat_trampoline<>(SB) diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_openbsd_mips64.go b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_openbsd_mips64.go index 016d959bc664..b04cef1a1988 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_openbsd_mips64.go +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_openbsd_mips64.go @@ -1,4 +1,4 @@ -// go run mksyscall.go -openbsd -tags openbsd,mips64 syscall_bsd.go syscall_openbsd.go syscall_openbsd_mips64.go +// go run mksyscall.go -openbsd -libc -tags openbsd,mips64 syscall_bsd.go syscall_openbsd.go syscall_openbsd_mips64.go // Code generated by the command above; see README.md. DO NOT EDIT. 
//go:build openbsd && mips64 @@ -16,7 +16,7 @@ var _ syscall.Errno // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func getgroups(ngid int, gid *_Gid_t) (n int, err error) { - r0, _, e1 := RawSyscall(SYS_GETGROUPS, uintptr(ngid), uintptr(unsafe.Pointer(gid)), 0) + r0, _, e1 := syscall_rawSyscall(libc_getgroups_trampoline_addr, uintptr(ngid), uintptr(unsafe.Pointer(gid)), 0) n = int(r0) if e1 != 0 { err = errnoErr(e1) @@ -24,20 +24,28 @@ func getgroups(ngid int, gid *_Gid_t) (n int, err error) { return } +var libc_getgroups_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_getgroups getgroups "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func setgroups(ngid int, gid *_Gid_t) (err error) { - _, _, e1 := RawSyscall(SYS_SETGROUPS, uintptr(ngid), uintptr(unsafe.Pointer(gid)), 0) + _, _, e1 := syscall_rawSyscall(libc_setgroups_trampoline_addr, uintptr(ngid), uintptr(unsafe.Pointer(gid)), 0) if e1 != 0 { err = errnoErr(e1) } return } +var libc_setgroups_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_setgroups setgroups "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func wait4(pid int, wstatus *_C_int, options int, rusage *Rusage) (wpid int, err error) { - r0, _, e1 := Syscall6(SYS_WAIT4, uintptr(pid), uintptr(unsafe.Pointer(wstatus)), uintptr(options), uintptr(unsafe.Pointer(rusage)), 0, 0) + r0, _, e1 := syscall_syscall6(libc_wait4_trampoline_addr, uintptr(pid), uintptr(unsafe.Pointer(wstatus)), uintptr(options), uintptr(unsafe.Pointer(rusage)), 0, 0) wpid = int(r0) if e1 != 0 { err = errnoErr(e1) @@ -45,10 +53,14 @@ func wait4(pid int, wstatus *_C_int, options int, rusage *Rusage) (wpid int, err return } +var libc_wait4_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_wait4 wait4 "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func accept(s int, rsa *RawSockaddrAny, addrlen *_Socklen) (fd int, err error) { - r0, _, e1 := Syscall(SYS_ACCEPT, 
uintptr(s), uintptr(unsafe.Pointer(rsa)), uintptr(unsafe.Pointer(addrlen))) + r0, _, e1 := syscall_syscall(libc_accept_trampoline_addr, uintptr(s), uintptr(unsafe.Pointer(rsa)), uintptr(unsafe.Pointer(addrlen))) fd = int(r0) if e1 != 0 { err = errnoErr(e1) @@ -56,30 +68,42 @@ func accept(s int, rsa *RawSockaddrAny, addrlen *_Socklen) (fd int, err error) { return } +var libc_accept_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_accept accept "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func bind(s int, addr unsafe.Pointer, addrlen _Socklen) (err error) { - _, _, e1 := Syscall(SYS_BIND, uintptr(s), uintptr(addr), uintptr(addrlen)) + _, _, e1 := syscall_syscall(libc_bind_trampoline_addr, uintptr(s), uintptr(addr), uintptr(addrlen)) if e1 != 0 { err = errnoErr(e1) } return } +var libc_bind_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_bind bind "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func connect(s int, addr unsafe.Pointer, addrlen _Socklen) (err error) { - _, _, e1 := Syscall(SYS_CONNECT, uintptr(s), uintptr(addr), uintptr(addrlen)) + _, _, e1 := syscall_syscall(libc_connect_trampoline_addr, uintptr(s), uintptr(addr), uintptr(addrlen)) if e1 != 0 { err = errnoErr(e1) } return } +var libc_connect_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_connect connect "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func socket(domain int, typ int, proto int) (fd int, err error) { - r0, _, e1 := RawSyscall(SYS_SOCKET, uintptr(domain), uintptr(typ), uintptr(proto)) + r0, _, e1 := syscall_rawSyscall(libc_socket_trampoline_addr, uintptr(domain), uintptr(typ), uintptr(proto)) fd = int(r0) if e1 != 0 { err = errnoErr(e1) @@ -87,66 +111,94 @@ func socket(domain int, typ int, proto int) (fd int, err error) { return } +var libc_socket_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_socket socket "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE 
TOP; DO NOT EDIT func getsockopt(s int, level int, name int, val unsafe.Pointer, vallen *_Socklen) (err error) { - _, _, e1 := Syscall6(SYS_GETSOCKOPT, uintptr(s), uintptr(level), uintptr(name), uintptr(val), uintptr(unsafe.Pointer(vallen)), 0) + _, _, e1 := syscall_syscall6(libc_getsockopt_trampoline_addr, uintptr(s), uintptr(level), uintptr(name), uintptr(val), uintptr(unsafe.Pointer(vallen)), 0) if e1 != 0 { err = errnoErr(e1) } return } +var libc_getsockopt_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_getsockopt getsockopt "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func setsockopt(s int, level int, name int, val unsafe.Pointer, vallen uintptr) (err error) { - _, _, e1 := Syscall6(SYS_SETSOCKOPT, uintptr(s), uintptr(level), uintptr(name), uintptr(val), uintptr(vallen), 0) + _, _, e1 := syscall_syscall6(libc_setsockopt_trampoline_addr, uintptr(s), uintptr(level), uintptr(name), uintptr(val), uintptr(vallen), 0) if e1 != 0 { err = errnoErr(e1) } return } +var libc_setsockopt_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_setsockopt setsockopt "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func getpeername(fd int, rsa *RawSockaddrAny, addrlen *_Socklen) (err error) { - _, _, e1 := RawSyscall(SYS_GETPEERNAME, uintptr(fd), uintptr(unsafe.Pointer(rsa)), uintptr(unsafe.Pointer(addrlen))) + _, _, e1 := syscall_rawSyscall(libc_getpeername_trampoline_addr, uintptr(fd), uintptr(unsafe.Pointer(rsa)), uintptr(unsafe.Pointer(addrlen))) if e1 != 0 { err = errnoErr(e1) } return } +var libc_getpeername_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_getpeername getpeername "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func getsockname(fd int, rsa *RawSockaddrAny, addrlen *_Socklen) (err error) { - _, _, e1 := RawSyscall(SYS_GETSOCKNAME, uintptr(fd), uintptr(unsafe.Pointer(rsa)), uintptr(unsafe.Pointer(addrlen))) + _, _, e1 := 
syscall_rawSyscall(libc_getsockname_trampoline_addr, uintptr(fd), uintptr(unsafe.Pointer(rsa)), uintptr(unsafe.Pointer(addrlen))) if e1 != 0 { err = errnoErr(e1) } return } +var libc_getsockname_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_getsockname getsockname "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Shutdown(s int, how int) (err error) { - _, _, e1 := Syscall(SYS_SHUTDOWN, uintptr(s), uintptr(how), 0) + _, _, e1 := syscall_syscall(libc_shutdown_trampoline_addr, uintptr(s), uintptr(how), 0) if e1 != 0 { err = errnoErr(e1) } return } +var libc_shutdown_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_shutdown shutdown "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func socketpair(domain int, typ int, proto int, fd *[2]int32) (err error) { - _, _, e1 := RawSyscall6(SYS_SOCKETPAIR, uintptr(domain), uintptr(typ), uintptr(proto), uintptr(unsafe.Pointer(fd)), 0, 0) + _, _, e1 := syscall_rawSyscall6(libc_socketpair_trampoline_addr, uintptr(domain), uintptr(typ), uintptr(proto), uintptr(unsafe.Pointer(fd)), 0, 0) if e1 != 0 { err = errnoErr(e1) } return } +var libc_socketpair_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_socketpair socketpair "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func recvfrom(fd int, p []byte, flags int, from *RawSockaddrAny, fromlen *_Socklen) (n int, err error) { @@ -156,7 +208,7 @@ func recvfrom(fd int, p []byte, flags int, from *RawSockaddrAny, fromlen *_Sockl } else { _p0 = unsafe.Pointer(&_zero) } - r0, _, e1 := Syscall6(SYS_RECVFROM, uintptr(fd), uintptr(_p0), uintptr(len(p)), uintptr(flags), uintptr(unsafe.Pointer(from)), uintptr(unsafe.Pointer(fromlen))) + r0, _, e1 := syscall_syscall6(libc_recvfrom_trampoline_addr, uintptr(fd), uintptr(_p0), uintptr(len(p)), uintptr(flags), uintptr(unsafe.Pointer(from)), uintptr(unsafe.Pointer(fromlen))) n = int(r0) if e1 != 0 { err = errnoErr(e1) @@ -164,6 +216,10 @@ func 
recvfrom(fd int, p []byte, flags int, from *RawSockaddrAny, fromlen *_Sockl return } +var libc_recvfrom_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_recvfrom recvfrom "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func sendto(s int, buf []byte, flags int, to unsafe.Pointer, addrlen _Socklen) (err error) { @@ -173,17 +229,21 @@ func sendto(s int, buf []byte, flags int, to unsafe.Pointer, addrlen _Socklen) ( } else { _p0 = unsafe.Pointer(&_zero) } - _, _, e1 := Syscall6(SYS_SENDTO, uintptr(s), uintptr(_p0), uintptr(len(buf)), uintptr(flags), uintptr(to), uintptr(addrlen)) + _, _, e1 := syscall_syscall6(libc_sendto_trampoline_addr, uintptr(s), uintptr(_p0), uintptr(len(buf)), uintptr(flags), uintptr(to), uintptr(addrlen)) if e1 != 0 { err = errnoErr(e1) } return } +var libc_sendto_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_sendto sendto "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func recvmsg(s int, msg *Msghdr, flags int) (n int, err error) { - r0, _, e1 := Syscall(SYS_RECVMSG, uintptr(s), uintptr(unsafe.Pointer(msg)), uintptr(flags)) + r0, _, e1 := syscall_syscall(libc_recvmsg_trampoline_addr, uintptr(s), uintptr(unsafe.Pointer(msg)), uintptr(flags)) n = int(r0) if e1 != 0 { err = errnoErr(e1) @@ -191,10 +251,14 @@ func recvmsg(s int, msg *Msghdr, flags int) (n int, err error) { return } +var libc_recvmsg_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_recvmsg recvmsg "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func sendmsg(s int, msg *Msghdr, flags int) (n int, err error) { - r0, _, e1 := Syscall(SYS_SENDMSG, uintptr(s), uintptr(unsafe.Pointer(msg)), uintptr(flags)) + r0, _, e1 := syscall_syscall(libc_sendmsg_trampoline_addr, uintptr(s), uintptr(unsafe.Pointer(msg)), uintptr(flags)) n = int(r0) if e1 != 0 { err = errnoErr(e1) @@ -202,10 +266,14 @@ func sendmsg(s int, msg *Msghdr, flags int) (n int, err error) { return } +var 
libc_sendmsg_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_sendmsg sendmsg "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func kevent(kq int, change unsafe.Pointer, nchange int, event unsafe.Pointer, nevent int, timeout *Timespec) (n int, err error) { - r0, _, e1 := Syscall6(SYS_KEVENT, uintptr(kq), uintptr(change), uintptr(nchange), uintptr(event), uintptr(nevent), uintptr(unsafe.Pointer(timeout))) + r0, _, e1 := syscall_syscall6(libc_kevent_trampoline_addr, uintptr(kq), uintptr(change), uintptr(nchange), uintptr(event), uintptr(nevent), uintptr(unsafe.Pointer(timeout))) n = int(r0) if e1 != 0 { err = errnoErr(e1) @@ -213,6 +281,10 @@ func kevent(kq int, change unsafe.Pointer, nchange int, event unsafe.Pointer, ne return } +var libc_kevent_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_kevent kevent "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func utimes(path string, timeval *[2]Timeval) (err error) { @@ -221,27 +293,35 @@ func utimes(path string, timeval *[2]Timeval) (err error) { if err != nil { return } - _, _, e1 := Syscall(SYS_UTIMES, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(timeval)), 0) + _, _, e1 := syscall_syscall(libc_utimes_trampoline_addr, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(timeval)), 0) if e1 != 0 { err = errnoErr(e1) } return } +var libc_utimes_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_utimes utimes "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func futimes(fd int, timeval *[2]Timeval) (err error) { - _, _, e1 := Syscall(SYS_FUTIMES, uintptr(fd), uintptr(unsafe.Pointer(timeval)), 0) + _, _, e1 := syscall_syscall(libc_futimes_trampoline_addr, uintptr(fd), uintptr(unsafe.Pointer(timeval)), 0) if e1 != 0 { err = errnoErr(e1) } return } +var libc_futimes_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_futimes futimes "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT 
func poll(fds *PollFd, nfds int, timeout int) (n int, err error) { - r0, _, e1 := Syscall(SYS_POLL, uintptr(unsafe.Pointer(fds)), uintptr(nfds), uintptr(timeout)) + r0, _, e1 := syscall_syscall(libc_poll_trampoline_addr, uintptr(unsafe.Pointer(fds)), uintptr(nfds), uintptr(timeout)) n = int(r0) if e1 != 0 { err = errnoErr(e1) @@ -249,6 +329,10 @@ func poll(fds *PollFd, nfds int, timeout int) (n int, err error) { return } +var libc_poll_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_poll poll "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Madvise(b []byte, behav int) (err error) { @@ -258,13 +342,17 @@ func Madvise(b []byte, behav int) (err error) { } else { _p0 = unsafe.Pointer(&_zero) } - _, _, e1 := Syscall(SYS_MADVISE, uintptr(_p0), uintptr(len(b)), uintptr(behav)) + _, _, e1 := syscall_syscall(libc_madvise_trampoline_addr, uintptr(_p0), uintptr(len(b)), uintptr(behav)) if e1 != 0 { err = errnoErr(e1) } return } +var libc_madvise_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_madvise madvise "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Mlock(b []byte) (err error) { @@ -274,23 +362,31 @@ func Mlock(b []byte) (err error) { } else { _p0 = unsafe.Pointer(&_zero) } - _, _, e1 := Syscall(SYS_MLOCK, uintptr(_p0), uintptr(len(b)), 0) + _, _, e1 := syscall_syscall(libc_mlock_trampoline_addr, uintptr(_p0), uintptr(len(b)), 0) if e1 != 0 { err = errnoErr(e1) } return } +var libc_mlock_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_mlock mlock "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Mlockall(flags int) (err error) { - _, _, e1 := Syscall(SYS_MLOCKALL, uintptr(flags), 0, 0) + _, _, e1 := syscall_syscall(libc_mlockall_trampoline_addr, uintptr(flags), 0, 0) if e1 != 0 { err = errnoErr(e1) } return } +var libc_mlockall_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_mlockall mlockall "libc.so" + // THIS FILE IS GENERATED BY THE 
COMMAND AT THE TOP; DO NOT EDIT func Mprotect(b []byte, prot int) (err error) { @@ -300,13 +396,17 @@ func Mprotect(b []byte, prot int) (err error) { } else { _p0 = unsafe.Pointer(&_zero) } - _, _, e1 := Syscall(SYS_MPROTECT, uintptr(_p0), uintptr(len(b)), uintptr(prot)) + _, _, e1 := syscall_syscall(libc_mprotect_trampoline_addr, uintptr(_p0), uintptr(len(b)), uintptr(prot)) if e1 != 0 { err = errnoErr(e1) } return } +var libc_mprotect_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_mprotect mprotect "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Msync(b []byte, flags int) (err error) { @@ -316,13 +416,17 @@ func Msync(b []byte, flags int) (err error) { } else { _p0 = unsafe.Pointer(&_zero) } - _, _, e1 := Syscall(SYS_MSYNC, uintptr(_p0), uintptr(len(b)), uintptr(flags)) + _, _, e1 := syscall_syscall(libc_msync_trampoline_addr, uintptr(_p0), uintptr(len(b)), uintptr(flags)) if e1 != 0 { err = errnoErr(e1) } return } +var libc_msync_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_msync msync "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Munlock(b []byte) (err error) { @@ -332,33 +436,45 @@ func Munlock(b []byte) (err error) { } else { _p0 = unsafe.Pointer(&_zero) } - _, _, e1 := Syscall(SYS_MUNLOCK, uintptr(_p0), uintptr(len(b)), 0) + _, _, e1 := syscall_syscall(libc_munlock_trampoline_addr, uintptr(_p0), uintptr(len(b)), 0) if e1 != 0 { err = errnoErr(e1) } return } +var libc_munlock_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_munlock munlock "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Munlockall() (err error) { - _, _, e1 := Syscall(SYS_MUNLOCKALL, 0, 0, 0) + _, _, e1 := syscall_syscall(libc_munlockall_trampoline_addr, 0, 0, 0) if e1 != 0 { err = errnoErr(e1) } return } +var libc_munlockall_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_munlockall munlockall "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE 
TOP; DO NOT EDIT func pipe2(p *[2]_C_int, flags int) (err error) { - _, _, e1 := RawSyscall(SYS_PIPE2, uintptr(unsafe.Pointer(p)), uintptr(flags), 0) + _, _, e1 := syscall_rawSyscall(libc_pipe2_trampoline_addr, uintptr(unsafe.Pointer(p)), uintptr(flags), 0) if e1 != 0 { err = errnoErr(e1) } return } +var libc_pipe2_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_pipe2 pipe2 "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Getdents(fd int, buf []byte) (n int, err error) { @@ -368,7 +484,7 @@ func Getdents(fd int, buf []byte) (n int, err error) { } else { _p0 = unsafe.Pointer(&_zero) } - r0, _, e1 := Syscall(SYS_GETDENTS, uintptr(fd), uintptr(_p0), uintptr(len(buf))) + r0, _, e1 := syscall_syscall(libc_getdents_trampoline_addr, uintptr(fd), uintptr(_p0), uintptr(len(buf))) n = int(r0) if e1 != 0 { err = errnoErr(e1) @@ -376,6 +492,10 @@ func Getdents(fd int, buf []byte) (n int, err error) { return } +var libc_getdents_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_getdents getdents "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Getcwd(buf []byte) (n int, err error) { @@ -385,7 +505,7 @@ func Getcwd(buf []byte) (n int, err error) { } else { _p0 = unsafe.Pointer(&_zero) } - r0, _, e1 := Syscall(SYS___GETCWD, uintptr(_p0), uintptr(len(buf)), 0) + r0, _, e1 := syscall_syscall(libc_getcwd_trampoline_addr, uintptr(_p0), uintptr(len(buf)), 0) n = int(r0) if e1 != 0 { err = errnoErr(e1) @@ -393,16 +513,32 @@ func Getcwd(buf []byte) (n int, err error) { return } +var libc_getcwd_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_getcwd getcwd "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func ioctl(fd int, req uint, arg uintptr) (err error) { - _, _, e1 := Syscall(SYS_IOCTL, uintptr(fd), uintptr(req), uintptr(arg)) + _, _, e1 := syscall_syscall(libc_ioctl_trampoline_addr, uintptr(fd), uintptr(req), uintptr(arg)) if e1 != 0 { err = errnoErr(e1) } return } 
+func ioctlPtr(fd int, req uint, arg unsafe.Pointer) (err error) { + _, _, e1 := syscall_syscall(libc_ioctl_trampoline_addr, uintptr(fd), uintptr(req), uintptr(arg)) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +var libc_ioctl_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_ioctl ioctl "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func sysctl(mib []_C_int, old *byte, oldlen *uintptr, new *byte, newlen uintptr) (err error) { @@ -412,17 +548,21 @@ func sysctl(mib []_C_int, old *byte, oldlen *uintptr, new *byte, newlen uintptr) } else { _p0 = unsafe.Pointer(&_zero) } - _, _, e1 := Syscall6(SYS___SYSCTL, uintptr(_p0), uintptr(len(mib)), uintptr(unsafe.Pointer(old)), uintptr(unsafe.Pointer(oldlen)), uintptr(unsafe.Pointer(new)), uintptr(newlen)) + _, _, e1 := syscall_syscall6(libc_sysctl_trampoline_addr, uintptr(_p0), uintptr(len(mib)), uintptr(unsafe.Pointer(old)), uintptr(unsafe.Pointer(oldlen)), uintptr(unsafe.Pointer(new)), uintptr(newlen)) if e1 != 0 { err = errnoErr(e1) } return } +var libc_sysctl_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_sysctl sysctl "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func ppoll(fds *PollFd, nfds int, timeout *Timespec, sigmask *Sigset_t) (n int, err error) { - r0, _, e1 := Syscall6(SYS_PPOLL, uintptr(unsafe.Pointer(fds)), uintptr(nfds), uintptr(unsafe.Pointer(timeout)), uintptr(unsafe.Pointer(sigmask)), 0, 0) + r0, _, e1 := syscall_syscall6(libc_ppoll_trampoline_addr, uintptr(unsafe.Pointer(fds)), uintptr(nfds), uintptr(unsafe.Pointer(timeout)), uintptr(unsafe.Pointer(sigmask)), 0, 0) n = int(r0) if e1 != 0 { err = errnoErr(e1) @@ -430,6 +570,10 @@ func ppoll(fds *PollFd, nfds int, timeout *Timespec, sigmask *Sigset_t) (n int, return } +var libc_ppoll_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_ppoll ppoll "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Access(path string, mode uint32) (err error) 
{ @@ -438,23 +582,31 @@ func Access(path string, mode uint32) (err error) { if err != nil { return } - _, _, e1 := Syscall(SYS_ACCESS, uintptr(unsafe.Pointer(_p0)), uintptr(mode), 0) + _, _, e1 := syscall_syscall(libc_access_trampoline_addr, uintptr(unsafe.Pointer(_p0)), uintptr(mode), 0) if e1 != 0 { err = errnoErr(e1) } return } +var libc_access_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_access access "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Adjtime(delta *Timeval, olddelta *Timeval) (err error) { - _, _, e1 := Syscall(SYS_ADJTIME, uintptr(unsafe.Pointer(delta)), uintptr(unsafe.Pointer(olddelta)), 0) + _, _, e1 := syscall_syscall(libc_adjtime_trampoline_addr, uintptr(unsafe.Pointer(delta)), uintptr(unsafe.Pointer(olddelta)), 0) if e1 != 0 { err = errnoErr(e1) } return } +var libc_adjtime_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_adjtime adjtime "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Chdir(path string) (err error) { @@ -463,13 +615,17 @@ func Chdir(path string) (err error) { if err != nil { return } - _, _, e1 := Syscall(SYS_CHDIR, uintptr(unsafe.Pointer(_p0)), 0, 0) + _, _, e1 := syscall_syscall(libc_chdir_trampoline_addr, uintptr(unsafe.Pointer(_p0)), 0, 0) if e1 != 0 { err = errnoErr(e1) } return } +var libc_chdir_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_chdir chdir "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Chflags(path string, flags int) (err error) { @@ -478,13 +634,17 @@ func Chflags(path string, flags int) (err error) { if err != nil { return } - _, _, e1 := Syscall(SYS_CHFLAGS, uintptr(unsafe.Pointer(_p0)), uintptr(flags), 0) + _, _, e1 := syscall_syscall(libc_chflags_trampoline_addr, uintptr(unsafe.Pointer(_p0)), uintptr(flags), 0) if e1 != 0 { err = errnoErr(e1) } return } +var libc_chflags_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_chflags chflags "libc.so" + // THIS FILE IS 
GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Chmod(path string, mode uint32) (err error) { @@ -493,13 +653,17 @@ func Chmod(path string, mode uint32) (err error) { if err != nil { return } - _, _, e1 := Syscall(SYS_CHMOD, uintptr(unsafe.Pointer(_p0)), uintptr(mode), 0) + _, _, e1 := syscall_syscall(libc_chmod_trampoline_addr, uintptr(unsafe.Pointer(_p0)), uintptr(mode), 0) if e1 != 0 { err = errnoErr(e1) } return } +var libc_chmod_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_chmod chmod "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Chown(path string, uid int, gid int) (err error) { @@ -508,13 +672,17 @@ func Chown(path string, uid int, gid int) (err error) { if err != nil { return } - _, _, e1 := Syscall(SYS_CHOWN, uintptr(unsafe.Pointer(_p0)), uintptr(uid), uintptr(gid)) + _, _, e1 := syscall_syscall(libc_chown_trampoline_addr, uintptr(unsafe.Pointer(_p0)), uintptr(uid), uintptr(gid)) if e1 != 0 { err = errnoErr(e1) } return } +var libc_chown_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_chown chown "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Chroot(path string) (err error) { @@ -523,27 +691,49 @@ func Chroot(path string) (err error) { if err != nil { return } - _, _, e1 := Syscall(SYS_CHROOT, uintptr(unsafe.Pointer(_p0)), 0, 0) + _, _, e1 := syscall_syscall(libc_chroot_trampoline_addr, uintptr(unsafe.Pointer(_p0)), 0, 0) if e1 != 0 { err = errnoErr(e1) } return } +var libc_chroot_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_chroot chroot "libc.so" + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func ClockGettime(clockid int32, time *Timespec) (err error) { + _, _, e1 := syscall_syscall(libc_clock_gettime_trampoline_addr, uintptr(clockid), uintptr(unsafe.Pointer(time)), 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +var libc_clock_gettime_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_clock_gettime 
clock_gettime "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Close(fd int) (err error) { - _, _, e1 := Syscall(SYS_CLOSE, uintptr(fd), 0, 0) + _, _, e1 := syscall_syscall(libc_close_trampoline_addr, uintptr(fd), 0, 0) if e1 != 0 { err = errnoErr(e1) } return } +var libc_close_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_close close "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Dup(fd int) (nfd int, err error) { - r0, _, e1 := Syscall(SYS_DUP, uintptr(fd), 0, 0) + r0, _, e1 := syscall_syscall(libc_dup_trampoline_addr, uintptr(fd), 0, 0) nfd = int(r0) if e1 != 0 { err = errnoErr(e1) @@ -551,33 +741,49 @@ func Dup(fd int) (nfd int, err error) { return } +var libc_dup_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_dup dup "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Dup2(from int, to int) (err error) { - _, _, e1 := Syscall(SYS_DUP2, uintptr(from), uintptr(to), 0) + _, _, e1 := syscall_syscall(libc_dup2_trampoline_addr, uintptr(from), uintptr(to), 0) if e1 != 0 { err = errnoErr(e1) } return } +var libc_dup2_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_dup2 dup2 "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Dup3(from int, to int, flags int) (err error) { - _, _, e1 := Syscall(SYS_DUP3, uintptr(from), uintptr(to), uintptr(flags)) + _, _, e1 := syscall_syscall(libc_dup3_trampoline_addr, uintptr(from), uintptr(to), uintptr(flags)) if e1 != 0 { err = errnoErr(e1) } return } +var libc_dup3_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_dup3 dup3 "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Exit(code int) { - Syscall(SYS_EXIT, uintptr(code), 0, 0) + syscall_syscall(libc_exit_trampoline_addr, uintptr(code), 0, 0) return } +var libc_exit_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_exit exit "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE 
TOP; DO NOT EDIT func Faccessat(dirfd int, path string, mode uint32, flags int) (err error) { @@ -586,43 +792,59 @@ func Faccessat(dirfd int, path string, mode uint32, flags int) (err error) { if err != nil { return } - _, _, e1 := Syscall6(SYS_FACCESSAT, uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), uintptr(mode), uintptr(flags), 0, 0) + _, _, e1 := syscall_syscall6(libc_faccessat_trampoline_addr, uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), uintptr(mode), uintptr(flags), 0, 0) if e1 != 0 { err = errnoErr(e1) } return } +var libc_faccessat_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_faccessat faccessat "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Fchdir(fd int) (err error) { - _, _, e1 := Syscall(SYS_FCHDIR, uintptr(fd), 0, 0) + _, _, e1 := syscall_syscall(libc_fchdir_trampoline_addr, uintptr(fd), 0, 0) if e1 != 0 { err = errnoErr(e1) } return } +var libc_fchdir_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_fchdir fchdir "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Fchflags(fd int, flags int) (err error) { - _, _, e1 := Syscall(SYS_FCHFLAGS, uintptr(fd), uintptr(flags), 0) + _, _, e1 := syscall_syscall(libc_fchflags_trampoline_addr, uintptr(fd), uintptr(flags), 0) if e1 != 0 { err = errnoErr(e1) } return } +var libc_fchflags_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_fchflags fchflags "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Fchmod(fd int, mode uint32) (err error) { - _, _, e1 := Syscall(SYS_FCHMOD, uintptr(fd), uintptr(mode), 0) + _, _, e1 := syscall_syscall(libc_fchmod_trampoline_addr, uintptr(fd), uintptr(mode), 0) if e1 != 0 { err = errnoErr(e1) } return } +var libc_fchmod_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_fchmod fchmod "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Fchmodat(dirfd int, path string, mode uint32, flags int) (err error) { @@ -631,23 +853,31 
@@ func Fchmodat(dirfd int, path string, mode uint32, flags int) (err error) { if err != nil { return } - _, _, e1 := Syscall6(SYS_FCHMODAT, uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), uintptr(mode), uintptr(flags), 0, 0) + _, _, e1 := syscall_syscall6(libc_fchmodat_trampoline_addr, uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), uintptr(mode), uintptr(flags), 0, 0) if e1 != 0 { err = errnoErr(e1) } return } +var libc_fchmodat_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_fchmodat fchmodat "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Fchown(fd int, uid int, gid int) (err error) { - _, _, e1 := Syscall(SYS_FCHOWN, uintptr(fd), uintptr(uid), uintptr(gid)) + _, _, e1 := syscall_syscall(libc_fchown_trampoline_addr, uintptr(fd), uintptr(uid), uintptr(gid)) if e1 != 0 { err = errnoErr(e1) } return } +var libc_fchown_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_fchown fchown "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Fchownat(dirfd int, path string, uid int, gid int, flags int) (err error) { @@ -656,27 +886,35 @@ func Fchownat(dirfd int, path string, uid int, gid int, flags int) (err error) { if err != nil { return } - _, _, e1 := Syscall6(SYS_FCHOWNAT, uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), uintptr(uid), uintptr(gid), uintptr(flags), 0) + _, _, e1 := syscall_syscall6(libc_fchownat_trampoline_addr, uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), uintptr(uid), uintptr(gid), uintptr(flags), 0) if e1 != 0 { err = errnoErr(e1) } return } +var libc_fchownat_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_fchownat fchownat "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Flock(fd int, how int) (err error) { - _, _, e1 := Syscall(SYS_FLOCK, uintptr(fd), uintptr(how), 0) + _, _, e1 := syscall_syscall(libc_flock_trampoline_addr, uintptr(fd), uintptr(how), 0) if e1 != 0 { err = errnoErr(e1) } return } +var libc_flock_trampoline_addr uintptr 
+ +//go:cgo_import_dynamic libc_flock flock "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Fpathconf(fd int, name int) (val int, err error) { - r0, _, e1 := Syscall(SYS_FPATHCONF, uintptr(fd), uintptr(name), 0) + r0, _, e1 := syscall_syscall(libc_fpathconf_trampoline_addr, uintptr(fd), uintptr(name), 0) val = int(r0) if e1 != 0 { err = errnoErr(e1) @@ -684,16 +922,24 @@ func Fpathconf(fd int, name int) (val int, err error) { return } +var libc_fpathconf_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_fpathconf fpathconf "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Fstat(fd int, stat *Stat_t) (err error) { - _, _, e1 := Syscall(SYS_FSTAT, uintptr(fd), uintptr(unsafe.Pointer(stat)), 0) + _, _, e1 := syscall_syscall(libc_fstat_trampoline_addr, uintptr(fd), uintptr(unsafe.Pointer(stat)), 0) if e1 != 0 { err = errnoErr(e1) } return } +var libc_fstat_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_fstat fstat "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Fstatat(fd int, path string, stat *Stat_t, flags int) (err error) { @@ -702,71 +948,99 @@ func Fstatat(fd int, path string, stat *Stat_t, flags int) (err error) { if err != nil { return } - _, _, e1 := Syscall6(SYS_FSTATAT, uintptr(fd), uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(stat)), uintptr(flags), 0, 0) + _, _, e1 := syscall_syscall6(libc_fstatat_trampoline_addr, uintptr(fd), uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(stat)), uintptr(flags), 0, 0) if e1 != 0 { err = errnoErr(e1) } return } +var libc_fstatat_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_fstatat fstatat "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Fstatfs(fd int, stat *Statfs_t) (err error) { - _, _, e1 := Syscall(SYS_FSTATFS, uintptr(fd), uintptr(unsafe.Pointer(stat)), 0) + _, _, e1 := syscall_syscall(libc_fstatfs_trampoline_addr, uintptr(fd), 
uintptr(unsafe.Pointer(stat)), 0) if e1 != 0 { err = errnoErr(e1) } return } +var libc_fstatfs_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_fstatfs fstatfs "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Fsync(fd int) (err error) { - _, _, e1 := Syscall(SYS_FSYNC, uintptr(fd), 0, 0) + _, _, e1 := syscall_syscall(libc_fsync_trampoline_addr, uintptr(fd), 0, 0) if e1 != 0 { err = errnoErr(e1) } return } +var libc_fsync_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_fsync fsync "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Ftruncate(fd int, length int64) (err error) { - _, _, e1 := Syscall(SYS_FTRUNCATE, uintptr(fd), 0, uintptr(length)) + _, _, e1 := syscall_syscall(libc_ftruncate_trampoline_addr, uintptr(fd), uintptr(length), 0) if e1 != 0 { err = errnoErr(e1) } return } +var libc_ftruncate_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_ftruncate ftruncate "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Getegid() (egid int) { - r0, _, _ := RawSyscall(SYS_GETEGID, 0, 0, 0) + r0, _, _ := syscall_rawSyscall(libc_getegid_trampoline_addr, 0, 0, 0) egid = int(r0) return } +var libc_getegid_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_getegid getegid "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Geteuid() (uid int) { - r0, _, _ := RawSyscall(SYS_GETEUID, 0, 0, 0) + r0, _, _ := syscall_rawSyscall(libc_geteuid_trampoline_addr, 0, 0, 0) uid = int(r0) return } +var libc_geteuid_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_geteuid geteuid "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Getgid() (gid int) { - r0, _, _ := RawSyscall(SYS_GETGID, 0, 0, 0) + r0, _, _ := syscall_rawSyscall(libc_getgid_trampoline_addr, 0, 0, 0) gid = int(r0) return } +var libc_getgid_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_getgid getgid "libc.so" + // THIS FILE IS 
GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Getpgid(pid int) (pgid int, err error) { - r0, _, e1 := RawSyscall(SYS_GETPGID, uintptr(pid), 0, 0) + r0, _, e1 := syscall_rawSyscall(libc_getpgid_trampoline_addr, uintptr(pid), 0, 0) pgid = int(r0) if e1 != 0 { err = errnoErr(e1) @@ -774,34 +1048,50 @@ func Getpgid(pid int) (pgid int, err error) { return } +var libc_getpgid_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_getpgid getpgid "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Getpgrp() (pgrp int) { - r0, _, _ := RawSyscall(SYS_GETPGRP, 0, 0, 0) + r0, _, _ := syscall_rawSyscall(libc_getpgrp_trampoline_addr, 0, 0, 0) pgrp = int(r0) return } +var libc_getpgrp_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_getpgrp getpgrp "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Getpid() (pid int) { - r0, _, _ := RawSyscall(SYS_GETPID, 0, 0, 0) + r0, _, _ := syscall_rawSyscall(libc_getpid_trampoline_addr, 0, 0, 0) pid = int(r0) return } +var libc_getpid_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_getpid getpid "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Getppid() (ppid int) { - r0, _, _ := RawSyscall(SYS_GETPPID, 0, 0, 0) + r0, _, _ := syscall_rawSyscall(libc_getppid_trampoline_addr, 0, 0, 0) ppid = int(r0) return } +var libc_getppid_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_getppid getppid "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Getpriority(which int, who int) (prio int, err error) { - r0, _, e1 := Syscall(SYS_GETPRIORITY, uintptr(which), uintptr(who), 0) + r0, _, e1 := syscall_syscall(libc_getpriority_trampoline_addr, uintptr(which), uintptr(who), 0) prio = int(r0) if e1 != 0 { err = errnoErr(e1) @@ -809,20 +1099,28 @@ func Getpriority(which int, who int) (prio int, err error) { return } +var libc_getpriority_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_getpriority 
getpriority "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Getrlimit(which int, lim *Rlimit) (err error) { - _, _, e1 := RawSyscall(SYS_GETRLIMIT, uintptr(which), uintptr(unsafe.Pointer(lim)), 0) + _, _, e1 := syscall_rawSyscall(libc_getrlimit_trampoline_addr, uintptr(which), uintptr(unsafe.Pointer(lim)), 0) if e1 != 0 { err = errnoErr(e1) } return } +var libc_getrlimit_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_getrlimit getrlimit "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Getrtable() (rtable int, err error) { - r0, _, e1 := RawSyscall(SYS_GETRTABLE, 0, 0, 0) + r0, _, e1 := syscall_rawSyscall(libc_getrtable_trampoline_addr, 0, 0, 0) rtable = int(r0) if e1 != 0 { err = errnoErr(e1) @@ -830,20 +1128,28 @@ func Getrtable() (rtable int, err error) { return } +var libc_getrtable_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_getrtable getrtable "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Getrusage(who int, rusage *Rusage) (err error) { - _, _, e1 := RawSyscall(SYS_GETRUSAGE, uintptr(who), uintptr(unsafe.Pointer(rusage)), 0) + _, _, e1 := syscall_rawSyscall(libc_getrusage_trampoline_addr, uintptr(who), uintptr(unsafe.Pointer(rusage)), 0) if e1 != 0 { err = errnoErr(e1) } return } +var libc_getrusage_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_getrusage getrusage "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Getsid(pid int) (sid int, err error) { - r0, _, e1 := RawSyscall(SYS_GETSID, uintptr(pid), 0, 0) + r0, _, e1 := syscall_rawSyscall(libc_getsid_trampoline_addr, uintptr(pid), 0, 0) sid = int(r0) if e1 != 0 { err = errnoErr(e1) @@ -851,46 +1157,66 @@ func Getsid(pid int) (sid int, err error) { return } +var libc_getsid_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_getsid getsid "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Gettimeofday(tv 
*Timeval) (err error) { - _, _, e1 := RawSyscall(SYS_GETTIMEOFDAY, uintptr(unsafe.Pointer(tv)), 0, 0) + _, _, e1 := syscall_rawSyscall(libc_gettimeofday_trampoline_addr, uintptr(unsafe.Pointer(tv)), 0, 0) if e1 != 0 { err = errnoErr(e1) } return } +var libc_gettimeofday_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_gettimeofday gettimeofday "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Getuid() (uid int) { - r0, _, _ := RawSyscall(SYS_GETUID, 0, 0, 0) + r0, _, _ := syscall_rawSyscall(libc_getuid_trampoline_addr, 0, 0, 0) uid = int(r0) return } +var libc_getuid_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_getuid getuid "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Issetugid() (tainted bool) { - r0, _, _ := Syscall(SYS_ISSETUGID, 0, 0, 0) + r0, _, _ := syscall_syscall(libc_issetugid_trampoline_addr, 0, 0, 0) tainted = bool(r0 != 0) return } +var libc_issetugid_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_issetugid issetugid "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Kill(pid int, signum syscall.Signal) (err error) { - _, _, e1 := Syscall(SYS_KILL, uintptr(pid), uintptr(signum), 0) + _, _, e1 := syscall_syscall(libc_kill_trampoline_addr, uintptr(pid), uintptr(signum), 0) if e1 != 0 { err = errnoErr(e1) } return } +var libc_kill_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_kill kill "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Kqueue() (fd int, err error) { - r0, _, e1 := Syscall(SYS_KQUEUE, 0, 0, 0) + r0, _, e1 := syscall_syscall(libc_kqueue_trampoline_addr, 0, 0, 0) fd = int(r0) if e1 != 0 { err = errnoErr(e1) @@ -898,6 +1224,10 @@ func Kqueue() (fd int, err error) { return } +var libc_kqueue_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_kqueue kqueue "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Lchown(path string, uid int, gid int) 
(err error) { @@ -906,13 +1236,17 @@ func Lchown(path string, uid int, gid int) (err error) { if err != nil { return } - _, _, e1 := Syscall(SYS_LCHOWN, uintptr(unsafe.Pointer(_p0)), uintptr(uid), uintptr(gid)) + _, _, e1 := syscall_syscall(libc_lchown_trampoline_addr, uintptr(unsafe.Pointer(_p0)), uintptr(uid), uintptr(gid)) if e1 != 0 { err = errnoErr(e1) } return } +var libc_lchown_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_lchown lchown "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Link(path string, link string) (err error) { @@ -926,13 +1260,17 @@ func Link(path string, link string) (err error) { if err != nil { return } - _, _, e1 := Syscall(SYS_LINK, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(_p1)), 0) + _, _, e1 := syscall_syscall(libc_link_trampoline_addr, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(_p1)), 0) if e1 != 0 { err = errnoErr(e1) } return } +var libc_link_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_link link "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Linkat(pathfd int, path string, linkfd int, link string, flags int) (err error) { @@ -946,23 +1284,31 @@ func Linkat(pathfd int, path string, linkfd int, link string, flags int) (err er if err != nil { return } - _, _, e1 := Syscall6(SYS_LINKAT, uintptr(pathfd), uintptr(unsafe.Pointer(_p0)), uintptr(linkfd), uintptr(unsafe.Pointer(_p1)), uintptr(flags), 0) + _, _, e1 := syscall_syscall6(libc_linkat_trampoline_addr, uintptr(pathfd), uintptr(unsafe.Pointer(_p0)), uintptr(linkfd), uintptr(unsafe.Pointer(_p1)), uintptr(flags), 0) if e1 != 0 { err = errnoErr(e1) } return } +var libc_linkat_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_linkat linkat "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Listen(s int, backlog int) (err error) { - _, _, e1 := Syscall(SYS_LISTEN, uintptr(s), uintptr(backlog), 0) + _, _, e1 := 
syscall_syscall(libc_listen_trampoline_addr, uintptr(s), uintptr(backlog), 0) if e1 != 0 { err = errnoErr(e1) } return } +var libc_listen_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_listen listen "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Lstat(path string, stat *Stat_t) (err error) { @@ -971,13 +1317,17 @@ func Lstat(path string, stat *Stat_t) (err error) { if err != nil { return } - _, _, e1 := Syscall(SYS_LSTAT, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(stat)), 0) + _, _, e1 := syscall_syscall(libc_lstat_trampoline_addr, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(stat)), 0) if e1 != 0 { err = errnoErr(e1) } return } +var libc_lstat_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_lstat lstat "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Mkdir(path string, mode uint32) (err error) { @@ -986,13 +1336,17 @@ func Mkdir(path string, mode uint32) (err error) { if err != nil { return } - _, _, e1 := Syscall(SYS_MKDIR, uintptr(unsafe.Pointer(_p0)), uintptr(mode), 0) + _, _, e1 := syscall_syscall(libc_mkdir_trampoline_addr, uintptr(unsafe.Pointer(_p0)), uintptr(mode), 0) if e1 != 0 { err = errnoErr(e1) } return } +var libc_mkdir_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_mkdir mkdir "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Mkdirat(dirfd int, path string, mode uint32) (err error) { @@ -1001,13 +1355,17 @@ func Mkdirat(dirfd int, path string, mode uint32) (err error) { if err != nil { return } - _, _, e1 := Syscall(SYS_MKDIRAT, uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), uintptr(mode)) + _, _, e1 := syscall_syscall(libc_mkdirat_trampoline_addr, uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), uintptr(mode)) if e1 != 0 { err = errnoErr(e1) } return } +var libc_mkdirat_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_mkdirat mkdirat "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT 
EDIT func Mkfifo(path string, mode uint32) (err error) { @@ -1016,13 +1374,17 @@ func Mkfifo(path string, mode uint32) (err error) { if err != nil { return } - _, _, e1 := Syscall(SYS_MKFIFO, uintptr(unsafe.Pointer(_p0)), uintptr(mode), 0) + _, _, e1 := syscall_syscall(libc_mkfifo_trampoline_addr, uintptr(unsafe.Pointer(_p0)), uintptr(mode), 0) if e1 != 0 { err = errnoErr(e1) } return } +var libc_mkfifo_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_mkfifo mkfifo "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Mkfifoat(dirfd int, path string, mode uint32) (err error) { @@ -1031,13 +1393,17 @@ func Mkfifoat(dirfd int, path string, mode uint32) (err error) { if err != nil { return } - _, _, e1 := Syscall(SYS_MKFIFOAT, uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), uintptr(mode)) + _, _, e1 := syscall_syscall(libc_mkfifoat_trampoline_addr, uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), uintptr(mode)) if e1 != 0 { err = errnoErr(e1) } return } +var libc_mkfifoat_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_mkfifoat mkfifoat "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Mknod(path string, mode uint32, dev int) (err error) { @@ -1046,13 +1412,17 @@ func Mknod(path string, mode uint32, dev int) (err error) { if err != nil { return } - _, _, e1 := Syscall(SYS_MKNOD, uintptr(unsafe.Pointer(_p0)), uintptr(mode), uintptr(dev)) + _, _, e1 := syscall_syscall(libc_mknod_trampoline_addr, uintptr(unsafe.Pointer(_p0)), uintptr(mode), uintptr(dev)) if e1 != 0 { err = errnoErr(e1) } return } +var libc_mknod_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_mknod mknod "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Mknodat(dirfd int, path string, mode uint32, dev int) (err error) { @@ -1061,23 +1431,31 @@ func Mknodat(dirfd int, path string, mode uint32, dev int) (err error) { if err != nil { return } - _, _, e1 := Syscall6(SYS_MKNODAT, uintptr(dirfd), 
uintptr(unsafe.Pointer(_p0)), uintptr(mode), uintptr(dev), 0, 0) + _, _, e1 := syscall_syscall6(libc_mknodat_trampoline_addr, uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), uintptr(mode), uintptr(dev), 0, 0) if e1 != 0 { err = errnoErr(e1) } return } +var libc_mknodat_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_mknodat mknodat "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Nanosleep(time *Timespec, leftover *Timespec) (err error) { - _, _, e1 := Syscall(SYS_NANOSLEEP, uintptr(unsafe.Pointer(time)), uintptr(unsafe.Pointer(leftover)), 0) + _, _, e1 := syscall_syscall(libc_nanosleep_trampoline_addr, uintptr(unsafe.Pointer(time)), uintptr(unsafe.Pointer(leftover)), 0) if e1 != 0 { err = errnoErr(e1) } return } +var libc_nanosleep_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_nanosleep nanosleep "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Open(path string, mode int, perm uint32) (fd int, err error) { @@ -1086,7 +1464,7 @@ func Open(path string, mode int, perm uint32) (fd int, err error) { if err != nil { return } - r0, _, e1 := Syscall(SYS_OPEN, uintptr(unsafe.Pointer(_p0)), uintptr(mode), uintptr(perm)) + r0, _, e1 := syscall_syscall(libc_open_trampoline_addr, uintptr(unsafe.Pointer(_p0)), uintptr(mode), uintptr(perm)) fd = int(r0) if e1 != 0 { err = errnoErr(e1) @@ -1094,6 +1472,10 @@ func Open(path string, mode int, perm uint32) (fd int, err error) { return } +var libc_open_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_open open "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Openat(dirfd int, path string, mode int, perm uint32) (fd int, err error) { @@ -1102,7 +1484,7 @@ func Openat(dirfd int, path string, mode int, perm uint32) (fd int, err error) { if err != nil { return } - r0, _, e1 := Syscall6(SYS_OPENAT, uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), uintptr(mode), uintptr(perm), 0, 0) + r0, _, e1 := 
syscall_syscall6(libc_openat_trampoline_addr, uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), uintptr(mode), uintptr(perm), 0, 0) fd = int(r0) if e1 != 0 { err = errnoErr(e1) @@ -1110,6 +1492,10 @@ func Openat(dirfd int, path string, mode int, perm uint32) (fd int, err error) { return } +var libc_openat_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_openat openat "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Pathconf(path string, name int) (val int, err error) { @@ -1118,7 +1504,7 @@ func Pathconf(path string, name int) (val int, err error) { if err != nil { return } - r0, _, e1 := Syscall(SYS_PATHCONF, uintptr(unsafe.Pointer(_p0)), uintptr(name), 0) + r0, _, e1 := syscall_syscall(libc_pathconf_trampoline_addr, uintptr(unsafe.Pointer(_p0)), uintptr(name), 0) val = int(r0) if e1 != 0 { err = errnoErr(e1) @@ -1126,6 +1512,10 @@ func Pathconf(path string, name int) (val int, err error) { return } +var libc_pathconf_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_pathconf pathconf "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func pread(fd int, p []byte, offset int64) (n int, err error) { @@ -1135,7 +1525,7 @@ func pread(fd int, p []byte, offset int64) (n int, err error) { } else { _p0 = unsafe.Pointer(&_zero) } - r0, _, e1 := Syscall6(SYS_PREAD, uintptr(fd), uintptr(_p0), uintptr(len(p)), 0, uintptr(offset), 0) + r0, _, e1 := syscall_syscall6(libc_pread_trampoline_addr, uintptr(fd), uintptr(_p0), uintptr(len(p)), uintptr(offset), 0, 0) n = int(r0) if e1 != 0 { err = errnoErr(e1) @@ -1143,6 +1533,10 @@ func pread(fd int, p []byte, offset int64) (n int, err error) { return } +var libc_pread_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_pread pread "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func pwrite(fd int, p []byte, offset int64) (n int, err error) { @@ -1152,7 +1546,7 @@ func pwrite(fd int, p []byte, offset int64) (n int, err error) { } else 
{ _p0 = unsafe.Pointer(&_zero) } - r0, _, e1 := Syscall6(SYS_PWRITE, uintptr(fd), uintptr(_p0), uintptr(len(p)), 0, uintptr(offset), 0) + r0, _, e1 := syscall_syscall6(libc_pwrite_trampoline_addr, uintptr(fd), uintptr(_p0), uintptr(len(p)), uintptr(offset), 0, 0) n = int(r0) if e1 != 0 { err = errnoErr(e1) @@ -1160,6 +1554,10 @@ func pwrite(fd int, p []byte, offset int64) (n int, err error) { return } +var libc_pwrite_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_pwrite pwrite "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func read(fd int, p []byte) (n int, err error) { @@ -1169,7 +1567,7 @@ func read(fd int, p []byte) (n int, err error) { } else { _p0 = unsafe.Pointer(&_zero) } - r0, _, e1 := Syscall(SYS_READ, uintptr(fd), uintptr(_p0), uintptr(len(p))) + r0, _, e1 := syscall_syscall(libc_read_trampoline_addr, uintptr(fd), uintptr(_p0), uintptr(len(p))) n = int(r0) if e1 != 0 { err = errnoErr(e1) @@ -1177,6 +1575,10 @@ func read(fd int, p []byte) (n int, err error) { return } +var libc_read_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_read read "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Readlink(path string, buf []byte) (n int, err error) { @@ -1191,7 +1593,7 @@ func Readlink(path string, buf []byte) (n int, err error) { } else { _p1 = unsafe.Pointer(&_zero) } - r0, _, e1 := Syscall(SYS_READLINK, uintptr(unsafe.Pointer(_p0)), uintptr(_p1), uintptr(len(buf))) + r0, _, e1 := syscall_syscall(libc_readlink_trampoline_addr, uintptr(unsafe.Pointer(_p0)), uintptr(_p1), uintptr(len(buf))) n = int(r0) if e1 != 0 { err = errnoErr(e1) @@ -1199,6 +1601,10 @@ func Readlink(path string, buf []byte) (n int, err error) { return } +var libc_readlink_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_readlink readlink "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Readlinkat(dirfd int, path string, buf []byte) (n int, err error) { @@ -1213,7 +1619,7 @@ 
func Readlinkat(dirfd int, path string, buf []byte) (n int, err error) { } else { _p1 = unsafe.Pointer(&_zero) } - r0, _, e1 := Syscall6(SYS_READLINKAT, uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), uintptr(_p1), uintptr(len(buf)), 0, 0) + r0, _, e1 := syscall_syscall6(libc_readlinkat_trampoline_addr, uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), uintptr(_p1), uintptr(len(buf)), 0, 0) n = int(r0) if e1 != 0 { err = errnoErr(e1) @@ -1221,6 +1627,10 @@ func Readlinkat(dirfd int, path string, buf []byte) (n int, err error) { return } +var libc_readlinkat_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_readlinkat readlinkat "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Rename(from string, to string) (err error) { @@ -1234,13 +1644,17 @@ func Rename(from string, to string) (err error) { if err != nil { return } - _, _, e1 := Syscall(SYS_RENAME, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(_p1)), 0) + _, _, e1 := syscall_syscall(libc_rename_trampoline_addr, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(_p1)), 0) if e1 != 0 { err = errnoErr(e1) } return } +var libc_rename_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_rename rename "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Renameat(fromfd int, from string, tofd int, to string) (err error) { @@ -1254,13 +1668,17 @@ func Renameat(fromfd int, from string, tofd int, to string) (err error) { if err != nil { return } - _, _, e1 := Syscall6(SYS_RENAMEAT, uintptr(fromfd), uintptr(unsafe.Pointer(_p0)), uintptr(tofd), uintptr(unsafe.Pointer(_p1)), 0, 0) + _, _, e1 := syscall_syscall6(libc_renameat_trampoline_addr, uintptr(fromfd), uintptr(unsafe.Pointer(_p0)), uintptr(tofd), uintptr(unsafe.Pointer(_p1)), 0, 0) if e1 != 0 { err = errnoErr(e1) } return } +var libc_renameat_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_renameat renameat "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func 
Revoke(path string) (err error) { @@ -1269,13 +1687,17 @@ func Revoke(path string) (err error) { if err != nil { return } - _, _, e1 := Syscall(SYS_REVOKE, uintptr(unsafe.Pointer(_p0)), 0, 0) + _, _, e1 := syscall_syscall(libc_revoke_trampoline_addr, uintptr(unsafe.Pointer(_p0)), 0, 0) if e1 != 0 { err = errnoErr(e1) } return } +var libc_revoke_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_revoke revoke "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Rmdir(path string) (err error) { @@ -1284,17 +1706,21 @@ func Rmdir(path string) (err error) { if err != nil { return } - _, _, e1 := Syscall(SYS_RMDIR, uintptr(unsafe.Pointer(_p0)), 0, 0) + _, _, e1 := syscall_syscall(libc_rmdir_trampoline_addr, uintptr(unsafe.Pointer(_p0)), 0, 0) if e1 != 0 { err = errnoErr(e1) } return } +var libc_rmdir_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_rmdir rmdir "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Seek(fd int, offset int64, whence int) (newoffset int64, err error) { - r0, _, e1 := Syscall6(SYS_LSEEK, uintptr(fd), 0, uintptr(offset), uintptr(whence), 0, 0) + r0, _, e1 := syscall_syscall(libc_lseek_trampoline_addr, uintptr(fd), uintptr(offset), uintptr(whence)) newoffset = int64(r0) if e1 != 0 { err = errnoErr(e1) @@ -1302,10 +1728,14 @@ func Seek(fd int, offset int64, whence int) (newoffset int64, err error) { return } +var libc_lseek_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_lseek lseek "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Select(nfd int, r *FdSet, w *FdSet, e *FdSet, timeout *Timeval) (n int, err error) { - r0, _, e1 := Syscall6(SYS_SELECT, uintptr(nfd), uintptr(unsafe.Pointer(r)), uintptr(unsafe.Pointer(w)), uintptr(unsafe.Pointer(e)), uintptr(unsafe.Pointer(timeout)), 0) + r0, _, e1 := syscall_syscall6(libc_select_trampoline_addr, uintptr(nfd), uintptr(unsafe.Pointer(r)), uintptr(unsafe.Pointer(w)), 
uintptr(unsafe.Pointer(e)), uintptr(unsafe.Pointer(timeout)), 0) n = int(r0) if e1 != 0 { err = errnoErr(e1) @@ -1313,36 +1743,52 @@ func Select(nfd int, r *FdSet, w *FdSet, e *FdSet, timeout *Timeval) (n int, err return } +var libc_select_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_select select "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Setegid(egid int) (err error) { - _, _, e1 := RawSyscall(SYS_SETEGID, uintptr(egid), 0, 0) + _, _, e1 := syscall_rawSyscall(libc_setegid_trampoline_addr, uintptr(egid), 0, 0) if e1 != 0 { err = errnoErr(e1) } return } +var libc_setegid_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_setegid setegid "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Seteuid(euid int) (err error) { - _, _, e1 := RawSyscall(SYS_SETEUID, uintptr(euid), 0, 0) + _, _, e1 := syscall_rawSyscall(libc_seteuid_trampoline_addr, uintptr(euid), 0, 0) if e1 != 0 { err = errnoErr(e1) } return } +var libc_seteuid_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_seteuid seteuid "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Setgid(gid int) (err error) { - _, _, e1 := RawSyscall(SYS_SETGID, uintptr(gid), 0, 0) + _, _, e1 := syscall_rawSyscall(libc_setgid_trampoline_addr, uintptr(gid), 0, 0) if e1 != 0 { err = errnoErr(e1) } return } +var libc_setgid_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_setgid setgid "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Setlogin(name string) (err error) { @@ -1351,97 +1797,133 @@ func Setlogin(name string) (err error) { if err != nil { return } - _, _, e1 := Syscall(SYS_SETLOGIN, uintptr(unsafe.Pointer(_p0)), 0, 0) + _, _, e1 := syscall_syscall(libc_setlogin_trampoline_addr, uintptr(unsafe.Pointer(_p0)), 0, 0) if e1 != 0 { err = errnoErr(e1) } return } +var libc_setlogin_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_setlogin setlogin "libc.so" + // 
THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Setpgid(pid int, pgid int) (err error) { - _, _, e1 := RawSyscall(SYS_SETPGID, uintptr(pid), uintptr(pgid), 0) + _, _, e1 := syscall_rawSyscall(libc_setpgid_trampoline_addr, uintptr(pid), uintptr(pgid), 0) if e1 != 0 { err = errnoErr(e1) } return } +var libc_setpgid_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_setpgid setpgid "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Setpriority(which int, who int, prio int) (err error) { - _, _, e1 := Syscall(SYS_SETPRIORITY, uintptr(which), uintptr(who), uintptr(prio)) + _, _, e1 := syscall_syscall(libc_setpriority_trampoline_addr, uintptr(which), uintptr(who), uintptr(prio)) if e1 != 0 { err = errnoErr(e1) } return } +var libc_setpriority_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_setpriority setpriority "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Setregid(rgid int, egid int) (err error) { - _, _, e1 := RawSyscall(SYS_SETREGID, uintptr(rgid), uintptr(egid), 0) + _, _, e1 := syscall_rawSyscall(libc_setregid_trampoline_addr, uintptr(rgid), uintptr(egid), 0) if e1 != 0 { err = errnoErr(e1) } return } +var libc_setregid_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_setregid setregid "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Setreuid(ruid int, euid int) (err error) { - _, _, e1 := RawSyscall(SYS_SETREUID, uintptr(ruid), uintptr(euid), 0) + _, _, e1 := syscall_rawSyscall(libc_setreuid_trampoline_addr, uintptr(ruid), uintptr(euid), 0) if e1 != 0 { err = errnoErr(e1) } return } +var libc_setreuid_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_setreuid setreuid "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Setresgid(rgid int, egid int, sgid int) (err error) { - _, _, e1 := RawSyscall(SYS_SETRESGID, uintptr(rgid), uintptr(egid), uintptr(sgid)) + _, _, e1 := 
syscall_rawSyscall(libc_setresgid_trampoline_addr, uintptr(rgid), uintptr(egid), uintptr(sgid)) if e1 != 0 { err = errnoErr(e1) } return } +var libc_setresgid_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_setresgid setresgid "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Setresuid(ruid int, euid int, suid int) (err error) { - _, _, e1 := RawSyscall(SYS_SETRESUID, uintptr(ruid), uintptr(euid), uintptr(suid)) + _, _, e1 := syscall_rawSyscall(libc_setresuid_trampoline_addr, uintptr(ruid), uintptr(euid), uintptr(suid)) if e1 != 0 { err = errnoErr(e1) } return } +var libc_setresuid_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_setresuid setresuid "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Setrlimit(which int, lim *Rlimit) (err error) { - _, _, e1 := RawSyscall(SYS_SETRLIMIT, uintptr(which), uintptr(unsafe.Pointer(lim)), 0) + _, _, e1 := syscall_rawSyscall(libc_setrlimit_trampoline_addr, uintptr(which), uintptr(unsafe.Pointer(lim)), 0) if e1 != 0 { err = errnoErr(e1) } return } +var libc_setrlimit_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_setrlimit setrlimit "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Setrtable(rtable int) (err error) { - _, _, e1 := RawSyscall(SYS_SETRTABLE, uintptr(rtable), 0, 0) + _, _, e1 := syscall_rawSyscall(libc_setrtable_trampoline_addr, uintptr(rtable), 0, 0) if e1 != 0 { err = errnoErr(e1) } return } +var libc_setrtable_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_setrtable setrtable "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Setsid() (pid int, err error) { - r0, _, e1 := RawSyscall(SYS_SETSID, 0, 0, 0) + r0, _, e1 := syscall_rawSyscall(libc_setsid_trampoline_addr, 0, 0, 0) pid = int(r0) if e1 != 0 { err = errnoErr(e1) @@ -1449,26 +1931,38 @@ func Setsid() (pid int, err error) { return } +var libc_setsid_trampoline_addr uintptr + 
+//go:cgo_import_dynamic libc_setsid setsid "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Settimeofday(tp *Timeval) (err error) { - _, _, e1 := RawSyscall(SYS_SETTIMEOFDAY, uintptr(unsafe.Pointer(tp)), 0, 0) + _, _, e1 := syscall_rawSyscall(libc_settimeofday_trampoline_addr, uintptr(unsafe.Pointer(tp)), 0, 0) if e1 != 0 { err = errnoErr(e1) } return } +var libc_settimeofday_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_settimeofday settimeofday "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Setuid(uid int) (err error) { - _, _, e1 := RawSyscall(SYS_SETUID, uintptr(uid), 0, 0) + _, _, e1 := syscall_rawSyscall(libc_setuid_trampoline_addr, uintptr(uid), 0, 0) if e1 != 0 { err = errnoErr(e1) } return } +var libc_setuid_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_setuid setuid "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Stat(path string, stat *Stat_t) (err error) { @@ -1477,13 +1971,17 @@ func Stat(path string, stat *Stat_t) (err error) { if err != nil { return } - _, _, e1 := Syscall(SYS_STAT, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(stat)), 0) + _, _, e1 := syscall_syscall(libc_stat_trampoline_addr, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(stat)), 0) if e1 != 0 { err = errnoErr(e1) } return } +var libc_stat_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_stat stat "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Statfs(path string, stat *Statfs_t) (err error) { @@ -1492,13 +1990,17 @@ func Statfs(path string, stat *Statfs_t) (err error) { if err != nil { return } - _, _, e1 := Syscall(SYS_STATFS, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(stat)), 0) + _, _, e1 := syscall_syscall(libc_statfs_trampoline_addr, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(stat)), 0) if e1 != 0 { err = errnoErr(e1) } return } +var libc_statfs_trampoline_addr uintptr + 
+//go:cgo_import_dynamic libc_statfs statfs "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Symlink(path string, link string) (err error) { @@ -1512,13 +2014,17 @@ func Symlink(path string, link string) (err error) { if err != nil { return } - _, _, e1 := Syscall(SYS_SYMLINK, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(_p1)), 0) + _, _, e1 := syscall_syscall(libc_symlink_trampoline_addr, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(_p1)), 0) if e1 != 0 { err = errnoErr(e1) } return } +var libc_symlink_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_symlink symlink "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Symlinkat(oldpath string, newdirfd int, newpath string) (err error) { @@ -1532,23 +2038,31 @@ func Symlinkat(oldpath string, newdirfd int, newpath string) (err error) { if err != nil { return } - _, _, e1 := Syscall(SYS_SYMLINKAT, uintptr(unsafe.Pointer(_p0)), uintptr(newdirfd), uintptr(unsafe.Pointer(_p1))) + _, _, e1 := syscall_syscall(libc_symlinkat_trampoline_addr, uintptr(unsafe.Pointer(_p0)), uintptr(newdirfd), uintptr(unsafe.Pointer(_p1))) if e1 != 0 { err = errnoErr(e1) } return } +var libc_symlinkat_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_symlinkat symlinkat "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Sync() (err error) { - _, _, e1 := Syscall(SYS_SYNC, 0, 0, 0) + _, _, e1 := syscall_syscall(libc_sync_trampoline_addr, 0, 0, 0) if e1 != 0 { err = errnoErr(e1) } return } +var libc_sync_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_sync sync "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Truncate(path string, length int64) (err error) { @@ -1557,21 +2071,29 @@ func Truncate(path string, length int64) (err error) { if err != nil { return } - _, _, e1 := Syscall(SYS_TRUNCATE, uintptr(unsafe.Pointer(_p0)), 0, uintptr(length)) + _, _, e1 := 
syscall_syscall(libc_truncate_trampoline_addr, uintptr(unsafe.Pointer(_p0)), uintptr(length), 0) if e1 != 0 { err = errnoErr(e1) } return } +var libc_truncate_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_truncate truncate "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Umask(newmask int) (oldmask int) { - r0, _, _ := Syscall(SYS_UMASK, uintptr(newmask), 0, 0) + r0, _, _ := syscall_syscall(libc_umask_trampoline_addr, uintptr(newmask), 0, 0) oldmask = int(r0) return } +var libc_umask_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_umask umask "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Unlink(path string) (err error) { @@ -1580,13 +2102,17 @@ func Unlink(path string) (err error) { if err != nil { return } - _, _, e1 := Syscall(SYS_UNLINK, uintptr(unsafe.Pointer(_p0)), 0, 0) + _, _, e1 := syscall_syscall(libc_unlink_trampoline_addr, uintptr(unsafe.Pointer(_p0)), 0, 0) if e1 != 0 { err = errnoErr(e1) } return } +var libc_unlink_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_unlink unlink "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Unlinkat(dirfd int, path string, flags int) (err error) { @@ -1595,13 +2121,17 @@ func Unlinkat(dirfd int, path string, flags int) (err error) { if err != nil { return } - _, _, e1 := Syscall(SYS_UNLINKAT, uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), uintptr(flags)) + _, _, e1 := syscall_syscall(libc_unlinkat_trampoline_addr, uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), uintptr(flags)) if e1 != 0 { err = errnoErr(e1) } return } +var libc_unlinkat_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_unlinkat unlinkat "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Unmount(path string, flags int) (err error) { @@ -1610,13 +2140,17 @@ func Unmount(path string, flags int) (err error) { if err != nil { return } - _, _, e1 := Syscall(SYS_UNMOUNT, 
uintptr(unsafe.Pointer(_p0)), uintptr(flags), 0) + _, _, e1 := syscall_syscall(libc_unmount_trampoline_addr, uintptr(unsafe.Pointer(_p0)), uintptr(flags), 0) if e1 != 0 { err = errnoErr(e1) } return } +var libc_unmount_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_unmount unmount "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func write(fd int, p []byte) (n int, err error) { @@ -1626,7 +2160,7 @@ func write(fd int, p []byte) (n int, err error) { } else { _p0 = unsafe.Pointer(&_zero) } - r0, _, e1 := Syscall(SYS_WRITE, uintptr(fd), uintptr(_p0), uintptr(len(p))) + r0, _, e1 := syscall_syscall(libc_write_trampoline_addr, uintptr(fd), uintptr(_p0), uintptr(len(p))) n = int(r0) if e1 != 0 { err = errnoErr(e1) @@ -1634,10 +2168,14 @@ func write(fd int, p []byte) (n int, err error) { return } +var libc_write_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_write write "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func mmap(addr uintptr, length uintptr, prot int, flag int, fd int, pos int64) (ret uintptr, err error) { - r0, _, e1 := Syscall9(SYS_MMAP, uintptr(addr), uintptr(length), uintptr(prot), uintptr(flag), uintptr(fd), 0, uintptr(pos), 0, 0) + r0, _, e1 := syscall_syscall6(libc_mmap_trampoline_addr, uintptr(addr), uintptr(length), uintptr(prot), uintptr(flag), uintptr(fd), uintptr(pos)) ret = uintptr(r0) if e1 != 0 { err = errnoErr(e1) @@ -1645,20 +2183,28 @@ func mmap(addr uintptr, length uintptr, prot int, flag int, fd int, pos int64) ( return } +var libc_mmap_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_mmap mmap "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func munmap(addr uintptr, length uintptr) (err error) { - _, _, e1 := Syscall(SYS_MUNMAP, uintptr(addr), uintptr(length), 0) + _, _, e1 := syscall_syscall(libc_munmap_trampoline_addr, uintptr(addr), uintptr(length), 0) if e1 != 0 { err = errnoErr(e1) } return } +var 
libc_munmap_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_munmap munmap "libc.so" + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func readlen(fd int, buf *byte, nbuf int) (n int, err error) { - r0, _, e1 := Syscall(SYS_READ, uintptr(fd), uintptr(unsafe.Pointer(buf)), uintptr(nbuf)) + r0, _, e1 := syscall_syscall(libc_read_trampoline_addr, uintptr(fd), uintptr(unsafe.Pointer(buf)), uintptr(nbuf)) n = int(r0) if e1 != 0 { err = errnoErr(e1) @@ -1669,7 +2215,7 @@ func readlen(fd int, buf *byte, nbuf int) (n int, err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func writelen(fd int, buf *byte, nbuf int) (n int, err error) { - r0, _, e1 := Syscall(SYS_WRITE, uintptr(fd), uintptr(unsafe.Pointer(buf)), uintptr(nbuf)) + r0, _, e1 := syscall_syscall(libc_write_trampoline_addr, uintptr(fd), uintptr(unsafe.Pointer(buf)), uintptr(nbuf)) n = int(r0) if e1 != 0 { err = errnoErr(e1) @@ -1685,9 +2231,13 @@ func utimensat(dirfd int, path string, times *[2]Timespec, flags int) (err error if err != nil { return } - _, _, e1 := Syscall6(SYS_UTIMENSAT, uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(times)), uintptr(flags), 0, 0) + _, _, e1 := syscall_syscall6(libc_utimensat_trampoline_addr, uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(times)), uintptr(flags), 0, 0) if e1 != 0 { err = errnoErr(e1) } return } + +var libc_utimensat_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_utimensat utimensat "libc.so" diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_openbsd_mips64.s b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_openbsd_mips64.s new file mode 100644 index 000000000000..55af27263ad7 --- /dev/null +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_openbsd_mips64.s @@ -0,0 +1,669 @@ +// go run mkasm.go openbsd mips64 +// Code generated by the command above; DO NOT EDIT. 
+ +#include "textflag.h" + +TEXT libc_getgroups_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_getgroups(SB) +GLOBL ·libc_getgroups_trampoline_addr(SB), RODATA, $8 +DATA ·libc_getgroups_trampoline_addr(SB)/8, $libc_getgroups_trampoline<>(SB) + +TEXT libc_setgroups_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_setgroups(SB) +GLOBL ·libc_setgroups_trampoline_addr(SB), RODATA, $8 +DATA ·libc_setgroups_trampoline_addr(SB)/8, $libc_setgroups_trampoline<>(SB) + +TEXT libc_wait4_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_wait4(SB) +GLOBL ·libc_wait4_trampoline_addr(SB), RODATA, $8 +DATA ·libc_wait4_trampoline_addr(SB)/8, $libc_wait4_trampoline<>(SB) + +TEXT libc_accept_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_accept(SB) +GLOBL ·libc_accept_trampoline_addr(SB), RODATA, $8 +DATA ·libc_accept_trampoline_addr(SB)/8, $libc_accept_trampoline<>(SB) + +TEXT libc_bind_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_bind(SB) +GLOBL ·libc_bind_trampoline_addr(SB), RODATA, $8 +DATA ·libc_bind_trampoline_addr(SB)/8, $libc_bind_trampoline<>(SB) + +TEXT libc_connect_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_connect(SB) +GLOBL ·libc_connect_trampoline_addr(SB), RODATA, $8 +DATA ·libc_connect_trampoline_addr(SB)/8, $libc_connect_trampoline<>(SB) + +TEXT libc_socket_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_socket(SB) +GLOBL ·libc_socket_trampoline_addr(SB), RODATA, $8 +DATA ·libc_socket_trampoline_addr(SB)/8, $libc_socket_trampoline<>(SB) + +TEXT libc_getsockopt_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_getsockopt(SB) +GLOBL ·libc_getsockopt_trampoline_addr(SB), RODATA, $8 +DATA ·libc_getsockopt_trampoline_addr(SB)/8, $libc_getsockopt_trampoline<>(SB) + +TEXT libc_setsockopt_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_setsockopt(SB) +GLOBL ·libc_setsockopt_trampoline_addr(SB), RODATA, $8 +DATA ·libc_setsockopt_trampoline_addr(SB)/8, $libc_setsockopt_trampoline<>(SB) + +TEXT libc_getpeername_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_getpeername(SB) +GLOBL ·libc_getpeername_trampoline_addr(SB), RODATA, $8 +DATA 
·libc_getpeername_trampoline_addr(SB)/8, $libc_getpeername_trampoline<>(SB) + +TEXT libc_getsockname_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_getsockname(SB) +GLOBL ·libc_getsockname_trampoline_addr(SB), RODATA, $8 +DATA ·libc_getsockname_trampoline_addr(SB)/8, $libc_getsockname_trampoline<>(SB) + +TEXT libc_shutdown_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_shutdown(SB) +GLOBL ·libc_shutdown_trampoline_addr(SB), RODATA, $8 +DATA ·libc_shutdown_trampoline_addr(SB)/8, $libc_shutdown_trampoline<>(SB) + +TEXT libc_socketpair_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_socketpair(SB) +GLOBL ·libc_socketpair_trampoline_addr(SB), RODATA, $8 +DATA ·libc_socketpair_trampoline_addr(SB)/8, $libc_socketpair_trampoline<>(SB) + +TEXT libc_recvfrom_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_recvfrom(SB) +GLOBL ·libc_recvfrom_trampoline_addr(SB), RODATA, $8 +DATA ·libc_recvfrom_trampoline_addr(SB)/8, $libc_recvfrom_trampoline<>(SB) + +TEXT libc_sendto_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_sendto(SB) +GLOBL ·libc_sendto_trampoline_addr(SB), RODATA, $8 +DATA ·libc_sendto_trampoline_addr(SB)/8, $libc_sendto_trampoline<>(SB) + +TEXT libc_recvmsg_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_recvmsg(SB) +GLOBL ·libc_recvmsg_trampoline_addr(SB), RODATA, $8 +DATA ·libc_recvmsg_trampoline_addr(SB)/8, $libc_recvmsg_trampoline<>(SB) + +TEXT libc_sendmsg_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_sendmsg(SB) +GLOBL ·libc_sendmsg_trampoline_addr(SB), RODATA, $8 +DATA ·libc_sendmsg_trampoline_addr(SB)/8, $libc_sendmsg_trampoline<>(SB) + +TEXT libc_kevent_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_kevent(SB) +GLOBL ·libc_kevent_trampoline_addr(SB), RODATA, $8 +DATA ·libc_kevent_trampoline_addr(SB)/8, $libc_kevent_trampoline<>(SB) + +TEXT libc_utimes_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_utimes(SB) +GLOBL ·libc_utimes_trampoline_addr(SB), RODATA, $8 +DATA ·libc_utimes_trampoline_addr(SB)/8, $libc_utimes_trampoline<>(SB) + +TEXT libc_futimes_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_futimes(SB) +GLOBL 
·libc_futimes_trampoline_addr(SB), RODATA, $8 +DATA ·libc_futimes_trampoline_addr(SB)/8, $libc_futimes_trampoline<>(SB) + +TEXT libc_poll_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_poll(SB) +GLOBL ·libc_poll_trampoline_addr(SB), RODATA, $8 +DATA ·libc_poll_trampoline_addr(SB)/8, $libc_poll_trampoline<>(SB) + +TEXT libc_madvise_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_madvise(SB) +GLOBL ·libc_madvise_trampoline_addr(SB), RODATA, $8 +DATA ·libc_madvise_trampoline_addr(SB)/8, $libc_madvise_trampoline<>(SB) + +TEXT libc_mlock_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_mlock(SB) +GLOBL ·libc_mlock_trampoline_addr(SB), RODATA, $8 +DATA ·libc_mlock_trampoline_addr(SB)/8, $libc_mlock_trampoline<>(SB) + +TEXT libc_mlockall_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_mlockall(SB) +GLOBL ·libc_mlockall_trampoline_addr(SB), RODATA, $8 +DATA ·libc_mlockall_trampoline_addr(SB)/8, $libc_mlockall_trampoline<>(SB) + +TEXT libc_mprotect_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_mprotect(SB) +GLOBL ·libc_mprotect_trampoline_addr(SB), RODATA, $8 +DATA ·libc_mprotect_trampoline_addr(SB)/8, $libc_mprotect_trampoline<>(SB) + +TEXT libc_msync_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_msync(SB) +GLOBL ·libc_msync_trampoline_addr(SB), RODATA, $8 +DATA ·libc_msync_trampoline_addr(SB)/8, $libc_msync_trampoline<>(SB) + +TEXT libc_munlock_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_munlock(SB) +GLOBL ·libc_munlock_trampoline_addr(SB), RODATA, $8 +DATA ·libc_munlock_trampoline_addr(SB)/8, $libc_munlock_trampoline<>(SB) + +TEXT libc_munlockall_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_munlockall(SB) +GLOBL ·libc_munlockall_trampoline_addr(SB), RODATA, $8 +DATA ·libc_munlockall_trampoline_addr(SB)/8, $libc_munlockall_trampoline<>(SB) + +TEXT libc_pipe2_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_pipe2(SB) +GLOBL ·libc_pipe2_trampoline_addr(SB), RODATA, $8 +DATA ·libc_pipe2_trampoline_addr(SB)/8, $libc_pipe2_trampoline<>(SB) + +TEXT libc_getdents_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_getdents(SB) +GLOBL 
·libc_getdents_trampoline_addr(SB), RODATA, $8 +DATA ·libc_getdents_trampoline_addr(SB)/8, $libc_getdents_trampoline<>(SB) + +TEXT libc_getcwd_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_getcwd(SB) +GLOBL ·libc_getcwd_trampoline_addr(SB), RODATA, $8 +DATA ·libc_getcwd_trampoline_addr(SB)/8, $libc_getcwd_trampoline<>(SB) + +TEXT libc_ioctl_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_ioctl(SB) +GLOBL ·libc_ioctl_trampoline_addr(SB), RODATA, $8 +DATA ·libc_ioctl_trampoline_addr(SB)/8, $libc_ioctl_trampoline<>(SB) + +TEXT libc_sysctl_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_sysctl(SB) +GLOBL ·libc_sysctl_trampoline_addr(SB), RODATA, $8 +DATA ·libc_sysctl_trampoline_addr(SB)/8, $libc_sysctl_trampoline<>(SB) + +TEXT libc_ppoll_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_ppoll(SB) +GLOBL ·libc_ppoll_trampoline_addr(SB), RODATA, $8 +DATA ·libc_ppoll_trampoline_addr(SB)/8, $libc_ppoll_trampoline<>(SB) + +TEXT libc_access_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_access(SB) +GLOBL ·libc_access_trampoline_addr(SB), RODATA, $8 +DATA ·libc_access_trampoline_addr(SB)/8, $libc_access_trampoline<>(SB) + +TEXT libc_adjtime_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_adjtime(SB) +GLOBL ·libc_adjtime_trampoline_addr(SB), RODATA, $8 +DATA ·libc_adjtime_trampoline_addr(SB)/8, $libc_adjtime_trampoline<>(SB) + +TEXT libc_chdir_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_chdir(SB) +GLOBL ·libc_chdir_trampoline_addr(SB), RODATA, $8 +DATA ·libc_chdir_trampoline_addr(SB)/8, $libc_chdir_trampoline<>(SB) + +TEXT libc_chflags_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_chflags(SB) +GLOBL ·libc_chflags_trampoline_addr(SB), RODATA, $8 +DATA ·libc_chflags_trampoline_addr(SB)/8, $libc_chflags_trampoline<>(SB) + +TEXT libc_chmod_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_chmod(SB) +GLOBL ·libc_chmod_trampoline_addr(SB), RODATA, $8 +DATA ·libc_chmod_trampoline_addr(SB)/8, $libc_chmod_trampoline<>(SB) + +TEXT libc_chown_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_chown(SB) +GLOBL ·libc_chown_trampoline_addr(SB), RODATA, $8 
+DATA ·libc_chown_trampoline_addr(SB)/8, $libc_chown_trampoline<>(SB) + +TEXT libc_chroot_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_chroot(SB) +GLOBL ·libc_chroot_trampoline_addr(SB), RODATA, $8 +DATA ·libc_chroot_trampoline_addr(SB)/8, $libc_chroot_trampoline<>(SB) + +TEXT libc_clock_gettime_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_clock_gettime(SB) +GLOBL ·libc_clock_gettime_trampoline_addr(SB), RODATA, $8 +DATA ·libc_clock_gettime_trampoline_addr(SB)/8, $libc_clock_gettime_trampoline<>(SB) + +TEXT libc_close_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_close(SB) +GLOBL ·libc_close_trampoline_addr(SB), RODATA, $8 +DATA ·libc_close_trampoline_addr(SB)/8, $libc_close_trampoline<>(SB) + +TEXT libc_dup_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_dup(SB) +GLOBL ·libc_dup_trampoline_addr(SB), RODATA, $8 +DATA ·libc_dup_trampoline_addr(SB)/8, $libc_dup_trampoline<>(SB) + +TEXT libc_dup2_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_dup2(SB) +GLOBL ·libc_dup2_trampoline_addr(SB), RODATA, $8 +DATA ·libc_dup2_trampoline_addr(SB)/8, $libc_dup2_trampoline<>(SB) + +TEXT libc_dup3_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_dup3(SB) +GLOBL ·libc_dup3_trampoline_addr(SB), RODATA, $8 +DATA ·libc_dup3_trampoline_addr(SB)/8, $libc_dup3_trampoline<>(SB) + +TEXT libc_exit_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_exit(SB) +GLOBL ·libc_exit_trampoline_addr(SB), RODATA, $8 +DATA ·libc_exit_trampoline_addr(SB)/8, $libc_exit_trampoline<>(SB) + +TEXT libc_faccessat_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_faccessat(SB) +GLOBL ·libc_faccessat_trampoline_addr(SB), RODATA, $8 +DATA ·libc_faccessat_trampoline_addr(SB)/8, $libc_faccessat_trampoline<>(SB) + +TEXT libc_fchdir_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_fchdir(SB) +GLOBL ·libc_fchdir_trampoline_addr(SB), RODATA, $8 +DATA ·libc_fchdir_trampoline_addr(SB)/8, $libc_fchdir_trampoline<>(SB) + +TEXT libc_fchflags_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_fchflags(SB) +GLOBL ·libc_fchflags_trampoline_addr(SB), RODATA, $8 +DATA 
·libc_fchflags_trampoline_addr(SB)/8, $libc_fchflags_trampoline<>(SB) + +TEXT libc_fchmod_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_fchmod(SB) +GLOBL ·libc_fchmod_trampoline_addr(SB), RODATA, $8 +DATA ·libc_fchmod_trampoline_addr(SB)/8, $libc_fchmod_trampoline<>(SB) + +TEXT libc_fchmodat_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_fchmodat(SB) +GLOBL ·libc_fchmodat_trampoline_addr(SB), RODATA, $8 +DATA ·libc_fchmodat_trampoline_addr(SB)/8, $libc_fchmodat_trampoline<>(SB) + +TEXT libc_fchown_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_fchown(SB) +GLOBL ·libc_fchown_trampoline_addr(SB), RODATA, $8 +DATA ·libc_fchown_trampoline_addr(SB)/8, $libc_fchown_trampoline<>(SB) + +TEXT libc_fchownat_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_fchownat(SB) +GLOBL ·libc_fchownat_trampoline_addr(SB), RODATA, $8 +DATA ·libc_fchownat_trampoline_addr(SB)/8, $libc_fchownat_trampoline<>(SB) + +TEXT libc_flock_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_flock(SB) +GLOBL ·libc_flock_trampoline_addr(SB), RODATA, $8 +DATA ·libc_flock_trampoline_addr(SB)/8, $libc_flock_trampoline<>(SB) + +TEXT libc_fpathconf_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_fpathconf(SB) +GLOBL ·libc_fpathconf_trampoline_addr(SB), RODATA, $8 +DATA ·libc_fpathconf_trampoline_addr(SB)/8, $libc_fpathconf_trampoline<>(SB) + +TEXT libc_fstat_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_fstat(SB) +GLOBL ·libc_fstat_trampoline_addr(SB), RODATA, $8 +DATA ·libc_fstat_trampoline_addr(SB)/8, $libc_fstat_trampoline<>(SB) + +TEXT libc_fstatat_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_fstatat(SB) +GLOBL ·libc_fstatat_trampoline_addr(SB), RODATA, $8 +DATA ·libc_fstatat_trampoline_addr(SB)/8, $libc_fstatat_trampoline<>(SB) + +TEXT libc_fstatfs_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_fstatfs(SB) +GLOBL ·libc_fstatfs_trampoline_addr(SB), RODATA, $8 +DATA ·libc_fstatfs_trampoline_addr(SB)/8, $libc_fstatfs_trampoline<>(SB) + +TEXT libc_fsync_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_fsync(SB) +GLOBL ·libc_fsync_trampoline_addr(SB), RODATA, $8 +DATA 
·libc_fsync_trampoline_addr(SB)/8, $libc_fsync_trampoline<>(SB) + +TEXT libc_ftruncate_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_ftruncate(SB) +GLOBL ·libc_ftruncate_trampoline_addr(SB), RODATA, $8 +DATA ·libc_ftruncate_trampoline_addr(SB)/8, $libc_ftruncate_trampoline<>(SB) + +TEXT libc_getegid_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_getegid(SB) +GLOBL ·libc_getegid_trampoline_addr(SB), RODATA, $8 +DATA ·libc_getegid_trampoline_addr(SB)/8, $libc_getegid_trampoline<>(SB) + +TEXT libc_geteuid_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_geteuid(SB) +GLOBL ·libc_geteuid_trampoline_addr(SB), RODATA, $8 +DATA ·libc_geteuid_trampoline_addr(SB)/8, $libc_geteuid_trampoline<>(SB) + +TEXT libc_getgid_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_getgid(SB) +GLOBL ·libc_getgid_trampoline_addr(SB), RODATA, $8 +DATA ·libc_getgid_trampoline_addr(SB)/8, $libc_getgid_trampoline<>(SB) + +TEXT libc_getpgid_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_getpgid(SB) +GLOBL ·libc_getpgid_trampoline_addr(SB), RODATA, $8 +DATA ·libc_getpgid_trampoline_addr(SB)/8, $libc_getpgid_trampoline<>(SB) + +TEXT libc_getpgrp_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_getpgrp(SB) +GLOBL ·libc_getpgrp_trampoline_addr(SB), RODATA, $8 +DATA ·libc_getpgrp_trampoline_addr(SB)/8, $libc_getpgrp_trampoline<>(SB) + +TEXT libc_getpid_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_getpid(SB) +GLOBL ·libc_getpid_trampoline_addr(SB), RODATA, $8 +DATA ·libc_getpid_trampoline_addr(SB)/8, $libc_getpid_trampoline<>(SB) + +TEXT libc_getppid_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_getppid(SB) +GLOBL ·libc_getppid_trampoline_addr(SB), RODATA, $8 +DATA ·libc_getppid_trampoline_addr(SB)/8, $libc_getppid_trampoline<>(SB) + +TEXT libc_getpriority_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_getpriority(SB) +GLOBL ·libc_getpriority_trampoline_addr(SB), RODATA, $8 +DATA ·libc_getpriority_trampoline_addr(SB)/8, $libc_getpriority_trampoline<>(SB) + +TEXT libc_getrlimit_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_getrlimit(SB) +GLOBL 
·libc_getrlimit_trampoline_addr(SB), RODATA, $8 +DATA ·libc_getrlimit_trampoline_addr(SB)/8, $libc_getrlimit_trampoline<>(SB) + +TEXT libc_getrtable_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_getrtable(SB) +GLOBL ·libc_getrtable_trampoline_addr(SB), RODATA, $8 +DATA ·libc_getrtable_trampoline_addr(SB)/8, $libc_getrtable_trampoline<>(SB) + +TEXT libc_getrusage_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_getrusage(SB) +GLOBL ·libc_getrusage_trampoline_addr(SB), RODATA, $8 +DATA ·libc_getrusage_trampoline_addr(SB)/8, $libc_getrusage_trampoline<>(SB) + +TEXT libc_getsid_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_getsid(SB) +GLOBL ·libc_getsid_trampoline_addr(SB), RODATA, $8 +DATA ·libc_getsid_trampoline_addr(SB)/8, $libc_getsid_trampoline<>(SB) + +TEXT libc_gettimeofday_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_gettimeofday(SB) +GLOBL ·libc_gettimeofday_trampoline_addr(SB), RODATA, $8 +DATA ·libc_gettimeofday_trampoline_addr(SB)/8, $libc_gettimeofday_trampoline<>(SB) + +TEXT libc_getuid_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_getuid(SB) +GLOBL ·libc_getuid_trampoline_addr(SB), RODATA, $8 +DATA ·libc_getuid_trampoline_addr(SB)/8, $libc_getuid_trampoline<>(SB) + +TEXT libc_issetugid_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_issetugid(SB) +GLOBL ·libc_issetugid_trampoline_addr(SB), RODATA, $8 +DATA ·libc_issetugid_trampoline_addr(SB)/8, $libc_issetugid_trampoline<>(SB) + +TEXT libc_kill_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_kill(SB) +GLOBL ·libc_kill_trampoline_addr(SB), RODATA, $8 +DATA ·libc_kill_trampoline_addr(SB)/8, $libc_kill_trampoline<>(SB) + +TEXT libc_kqueue_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_kqueue(SB) +GLOBL ·libc_kqueue_trampoline_addr(SB), RODATA, $8 +DATA ·libc_kqueue_trampoline_addr(SB)/8, $libc_kqueue_trampoline<>(SB) + +TEXT libc_lchown_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_lchown(SB) +GLOBL ·libc_lchown_trampoline_addr(SB), RODATA, $8 +DATA ·libc_lchown_trampoline_addr(SB)/8, $libc_lchown_trampoline<>(SB) + +TEXT 
libc_link_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_link(SB) +GLOBL ·libc_link_trampoline_addr(SB), RODATA, $8 +DATA ·libc_link_trampoline_addr(SB)/8, $libc_link_trampoline<>(SB) + +TEXT libc_linkat_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_linkat(SB) +GLOBL ·libc_linkat_trampoline_addr(SB), RODATA, $8 +DATA ·libc_linkat_trampoline_addr(SB)/8, $libc_linkat_trampoline<>(SB) + +TEXT libc_listen_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_listen(SB) +GLOBL ·libc_listen_trampoline_addr(SB), RODATA, $8 +DATA ·libc_listen_trampoline_addr(SB)/8, $libc_listen_trampoline<>(SB) + +TEXT libc_lstat_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_lstat(SB) +GLOBL ·libc_lstat_trampoline_addr(SB), RODATA, $8 +DATA ·libc_lstat_trampoline_addr(SB)/8, $libc_lstat_trampoline<>(SB) + +TEXT libc_mkdir_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_mkdir(SB) +GLOBL ·libc_mkdir_trampoline_addr(SB), RODATA, $8 +DATA ·libc_mkdir_trampoline_addr(SB)/8, $libc_mkdir_trampoline<>(SB) + +TEXT libc_mkdirat_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_mkdirat(SB) +GLOBL ·libc_mkdirat_trampoline_addr(SB), RODATA, $8 +DATA ·libc_mkdirat_trampoline_addr(SB)/8, $libc_mkdirat_trampoline<>(SB) + +TEXT libc_mkfifo_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_mkfifo(SB) +GLOBL ·libc_mkfifo_trampoline_addr(SB), RODATA, $8 +DATA ·libc_mkfifo_trampoline_addr(SB)/8, $libc_mkfifo_trampoline<>(SB) + +TEXT libc_mkfifoat_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_mkfifoat(SB) +GLOBL ·libc_mkfifoat_trampoline_addr(SB), RODATA, $8 +DATA ·libc_mkfifoat_trampoline_addr(SB)/8, $libc_mkfifoat_trampoline<>(SB) + +TEXT libc_mknod_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_mknod(SB) +GLOBL ·libc_mknod_trampoline_addr(SB), RODATA, $8 +DATA ·libc_mknod_trampoline_addr(SB)/8, $libc_mknod_trampoline<>(SB) + +TEXT libc_mknodat_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_mknodat(SB) +GLOBL ·libc_mknodat_trampoline_addr(SB), RODATA, $8 +DATA ·libc_mknodat_trampoline_addr(SB)/8, $libc_mknodat_trampoline<>(SB) + +TEXT libc_nanosleep_trampoline<>(SB),NOSPLIT,$0-0 
+ JMP libc_nanosleep(SB) +GLOBL ·libc_nanosleep_trampoline_addr(SB), RODATA, $8 +DATA ·libc_nanosleep_trampoline_addr(SB)/8, $libc_nanosleep_trampoline<>(SB) + +TEXT libc_open_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_open(SB) +GLOBL ·libc_open_trampoline_addr(SB), RODATA, $8 +DATA ·libc_open_trampoline_addr(SB)/8, $libc_open_trampoline<>(SB) + +TEXT libc_openat_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_openat(SB) +GLOBL ·libc_openat_trampoline_addr(SB), RODATA, $8 +DATA ·libc_openat_trampoline_addr(SB)/8, $libc_openat_trampoline<>(SB) + +TEXT libc_pathconf_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_pathconf(SB) +GLOBL ·libc_pathconf_trampoline_addr(SB), RODATA, $8 +DATA ·libc_pathconf_trampoline_addr(SB)/8, $libc_pathconf_trampoline<>(SB) + +TEXT libc_pread_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_pread(SB) +GLOBL ·libc_pread_trampoline_addr(SB), RODATA, $8 +DATA ·libc_pread_trampoline_addr(SB)/8, $libc_pread_trampoline<>(SB) + +TEXT libc_pwrite_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_pwrite(SB) +GLOBL ·libc_pwrite_trampoline_addr(SB), RODATA, $8 +DATA ·libc_pwrite_trampoline_addr(SB)/8, $libc_pwrite_trampoline<>(SB) + +TEXT libc_read_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_read(SB) +GLOBL ·libc_read_trampoline_addr(SB), RODATA, $8 +DATA ·libc_read_trampoline_addr(SB)/8, $libc_read_trampoline<>(SB) + +TEXT libc_readlink_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_readlink(SB) +GLOBL ·libc_readlink_trampoline_addr(SB), RODATA, $8 +DATA ·libc_readlink_trampoline_addr(SB)/8, $libc_readlink_trampoline<>(SB) + +TEXT libc_readlinkat_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_readlinkat(SB) +GLOBL ·libc_readlinkat_trampoline_addr(SB), RODATA, $8 +DATA ·libc_readlinkat_trampoline_addr(SB)/8, $libc_readlinkat_trampoline<>(SB) + +TEXT libc_rename_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_rename(SB) +GLOBL ·libc_rename_trampoline_addr(SB), RODATA, $8 +DATA ·libc_rename_trampoline_addr(SB)/8, $libc_rename_trampoline<>(SB) + +TEXT libc_renameat_trampoline<>(SB),NOSPLIT,$0-0 + JMP 
libc_renameat(SB) +GLOBL ·libc_renameat_trampoline_addr(SB), RODATA, $8 +DATA ·libc_renameat_trampoline_addr(SB)/8, $libc_renameat_trampoline<>(SB) + +TEXT libc_revoke_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_revoke(SB) +GLOBL ·libc_revoke_trampoline_addr(SB), RODATA, $8 +DATA ·libc_revoke_trampoline_addr(SB)/8, $libc_revoke_trampoline<>(SB) + +TEXT libc_rmdir_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_rmdir(SB) +GLOBL ·libc_rmdir_trampoline_addr(SB), RODATA, $8 +DATA ·libc_rmdir_trampoline_addr(SB)/8, $libc_rmdir_trampoline<>(SB) + +TEXT libc_lseek_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_lseek(SB) +GLOBL ·libc_lseek_trampoline_addr(SB), RODATA, $8 +DATA ·libc_lseek_trampoline_addr(SB)/8, $libc_lseek_trampoline<>(SB) + +TEXT libc_select_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_select(SB) +GLOBL ·libc_select_trampoline_addr(SB), RODATA, $8 +DATA ·libc_select_trampoline_addr(SB)/8, $libc_select_trampoline<>(SB) + +TEXT libc_setegid_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_setegid(SB) +GLOBL ·libc_setegid_trampoline_addr(SB), RODATA, $8 +DATA ·libc_setegid_trampoline_addr(SB)/8, $libc_setegid_trampoline<>(SB) + +TEXT libc_seteuid_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_seteuid(SB) +GLOBL ·libc_seteuid_trampoline_addr(SB), RODATA, $8 +DATA ·libc_seteuid_trampoline_addr(SB)/8, $libc_seteuid_trampoline<>(SB) + +TEXT libc_setgid_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_setgid(SB) +GLOBL ·libc_setgid_trampoline_addr(SB), RODATA, $8 +DATA ·libc_setgid_trampoline_addr(SB)/8, $libc_setgid_trampoline<>(SB) + +TEXT libc_setlogin_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_setlogin(SB) +GLOBL ·libc_setlogin_trampoline_addr(SB), RODATA, $8 +DATA ·libc_setlogin_trampoline_addr(SB)/8, $libc_setlogin_trampoline<>(SB) + +TEXT libc_setpgid_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_setpgid(SB) +GLOBL ·libc_setpgid_trampoline_addr(SB), RODATA, $8 +DATA ·libc_setpgid_trampoline_addr(SB)/8, $libc_setpgid_trampoline<>(SB) + +TEXT libc_setpriority_trampoline<>(SB),NOSPLIT,$0-0 + JMP 
libc_setpriority(SB) +GLOBL ·libc_setpriority_trampoline_addr(SB), RODATA, $8 +DATA ·libc_setpriority_trampoline_addr(SB)/8, $libc_setpriority_trampoline<>(SB) + +TEXT libc_setregid_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_setregid(SB) +GLOBL ·libc_setregid_trampoline_addr(SB), RODATA, $8 +DATA ·libc_setregid_trampoline_addr(SB)/8, $libc_setregid_trampoline<>(SB) + +TEXT libc_setreuid_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_setreuid(SB) +GLOBL ·libc_setreuid_trampoline_addr(SB), RODATA, $8 +DATA ·libc_setreuid_trampoline_addr(SB)/8, $libc_setreuid_trampoline<>(SB) + +TEXT libc_setresgid_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_setresgid(SB) +GLOBL ·libc_setresgid_trampoline_addr(SB), RODATA, $8 +DATA ·libc_setresgid_trampoline_addr(SB)/8, $libc_setresgid_trampoline<>(SB) + +TEXT libc_setresuid_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_setresuid(SB) +GLOBL ·libc_setresuid_trampoline_addr(SB), RODATA, $8 +DATA ·libc_setresuid_trampoline_addr(SB)/8, $libc_setresuid_trampoline<>(SB) + +TEXT libc_setrlimit_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_setrlimit(SB) +GLOBL ·libc_setrlimit_trampoline_addr(SB), RODATA, $8 +DATA ·libc_setrlimit_trampoline_addr(SB)/8, $libc_setrlimit_trampoline<>(SB) + +TEXT libc_setrtable_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_setrtable(SB) +GLOBL ·libc_setrtable_trampoline_addr(SB), RODATA, $8 +DATA ·libc_setrtable_trampoline_addr(SB)/8, $libc_setrtable_trampoline<>(SB) + +TEXT libc_setsid_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_setsid(SB) +GLOBL ·libc_setsid_trampoline_addr(SB), RODATA, $8 +DATA ·libc_setsid_trampoline_addr(SB)/8, $libc_setsid_trampoline<>(SB) + +TEXT libc_settimeofday_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_settimeofday(SB) +GLOBL ·libc_settimeofday_trampoline_addr(SB), RODATA, $8 +DATA ·libc_settimeofday_trampoline_addr(SB)/8, $libc_settimeofday_trampoline<>(SB) + +TEXT libc_setuid_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_setuid(SB) +GLOBL ·libc_setuid_trampoline_addr(SB), RODATA, $8 +DATA 
·libc_setuid_trampoline_addr(SB)/8, $libc_setuid_trampoline<>(SB) + +TEXT libc_stat_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_stat(SB) +GLOBL ·libc_stat_trampoline_addr(SB), RODATA, $8 +DATA ·libc_stat_trampoline_addr(SB)/8, $libc_stat_trampoline<>(SB) + +TEXT libc_statfs_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_statfs(SB) +GLOBL ·libc_statfs_trampoline_addr(SB), RODATA, $8 +DATA ·libc_statfs_trampoline_addr(SB)/8, $libc_statfs_trampoline<>(SB) + +TEXT libc_symlink_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_symlink(SB) +GLOBL ·libc_symlink_trampoline_addr(SB), RODATA, $8 +DATA ·libc_symlink_trampoline_addr(SB)/8, $libc_symlink_trampoline<>(SB) + +TEXT libc_symlinkat_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_symlinkat(SB) +GLOBL ·libc_symlinkat_trampoline_addr(SB), RODATA, $8 +DATA ·libc_symlinkat_trampoline_addr(SB)/8, $libc_symlinkat_trampoline<>(SB) + +TEXT libc_sync_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_sync(SB) +GLOBL ·libc_sync_trampoline_addr(SB), RODATA, $8 +DATA ·libc_sync_trampoline_addr(SB)/8, $libc_sync_trampoline<>(SB) + +TEXT libc_truncate_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_truncate(SB) +GLOBL ·libc_truncate_trampoline_addr(SB), RODATA, $8 +DATA ·libc_truncate_trampoline_addr(SB)/8, $libc_truncate_trampoline<>(SB) + +TEXT libc_umask_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_umask(SB) +GLOBL ·libc_umask_trampoline_addr(SB), RODATA, $8 +DATA ·libc_umask_trampoline_addr(SB)/8, $libc_umask_trampoline<>(SB) + +TEXT libc_unlink_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_unlink(SB) +GLOBL ·libc_unlink_trampoline_addr(SB), RODATA, $8 +DATA ·libc_unlink_trampoline_addr(SB)/8, $libc_unlink_trampoline<>(SB) + +TEXT libc_unlinkat_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_unlinkat(SB) +GLOBL ·libc_unlinkat_trampoline_addr(SB), RODATA, $8 +DATA ·libc_unlinkat_trampoline_addr(SB)/8, $libc_unlinkat_trampoline<>(SB) + +TEXT libc_unmount_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_unmount(SB) +GLOBL ·libc_unmount_trampoline_addr(SB), RODATA, $8 +DATA 
·libc_unmount_trampoline_addr(SB)/8, $libc_unmount_trampoline<>(SB) + +TEXT libc_write_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_write(SB) +GLOBL ·libc_write_trampoline_addr(SB), RODATA, $8 +DATA ·libc_write_trampoline_addr(SB)/8, $libc_write_trampoline<>(SB) + +TEXT libc_mmap_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_mmap(SB) +GLOBL ·libc_mmap_trampoline_addr(SB), RODATA, $8 +DATA ·libc_mmap_trampoline_addr(SB)/8, $libc_mmap_trampoline<>(SB) + +TEXT libc_munmap_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_munmap(SB) +GLOBL ·libc_munmap_trampoline_addr(SB), RODATA, $8 +DATA ·libc_munmap_trampoline_addr(SB)/8, $libc_munmap_trampoline<>(SB) + +TEXT libc_utimensat_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_utimensat(SB) +GLOBL ·libc_utimensat_trampoline_addr(SB), RODATA, $8 +DATA ·libc_utimensat_trampoline_addr(SB)/8, $libc_utimensat_trampoline<>(SB) diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_openbsd_ppc64.go b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_openbsd_ppc64.go index c85de2d9766b..47a07ee0c274 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_openbsd_ppc64.go +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_openbsd_ppc64.go @@ -527,6 +527,14 @@ func ioctl(fd int, req uint, arg uintptr) (err error) { return } +func ioctlPtr(fd int, req uint, arg unsafe.Pointer) (err error) { + _, _, e1 := syscall_syscall(libc_ioctl_trampoline_addr, uintptr(fd), uintptr(req), uintptr(arg)) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + var libc_ioctl_trampoline_addr uintptr //go:cgo_import_dynamic libc_ioctl ioctl "libc.so" @@ -696,6 +704,20 @@ var libc_chroot_trampoline_addr uintptr // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func ClockGettime(clockid int32, time *Timespec) (err error) { + _, _, e1 := syscall_syscall(libc_clock_gettime_trampoline_addr, uintptr(clockid), uintptr(unsafe.Pointer(time)), 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +var 
libc_clock_gettime_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_clock_gettime clock_gettime "libc.so" + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func Close(fd int) (err error) { _, _, e1 := syscall_syscall(libc_close_trampoline_addr, uintptr(fd), 0, 0) if e1 != 0 { diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_openbsd_ppc64.s b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_openbsd_ppc64.s index 7c9223b64187..4028255b0d5b 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_openbsd_ppc64.s +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_openbsd_ppc64.s @@ -249,6 +249,12 @@ TEXT libc_chroot_trampoline<>(SB),NOSPLIT,$0-0 GLOBL ·libc_chroot_trampoline_addr(SB), RODATA, $8 DATA ·libc_chroot_trampoline_addr(SB)/8, $libc_chroot_trampoline<>(SB) +TEXT libc_clock_gettime_trampoline<>(SB),NOSPLIT,$0-0 + CALL libc_clock_gettime(SB) + RET +GLOBL ·libc_clock_gettime_trampoline_addr(SB), RODATA, $8 +DATA ·libc_clock_gettime_trampoline_addr(SB)/8, $libc_clock_gettime_trampoline<>(SB) + TEXT libc_close_trampoline<>(SB),NOSPLIT,$0-0 CALL libc_close(SB) RET diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_openbsd_riscv64.go b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_openbsd_riscv64.go index 8e3e7873f893..573378fdb96f 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_openbsd_riscv64.go +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_openbsd_riscv64.go @@ -527,6 +527,14 @@ func ioctl(fd int, req uint, arg uintptr) (err error) { return } +func ioctlPtr(fd int, req uint, arg unsafe.Pointer) (err error) { + _, _, e1 := syscall_syscall(libc_ioctl_trampoline_addr, uintptr(fd), uintptr(req), uintptr(arg)) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + var libc_ioctl_trampoline_addr uintptr //go:cgo_import_dynamic libc_ioctl ioctl "libc.so" @@ -696,6 +704,20 @@ var 
libc_chroot_trampoline_addr uintptr // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func ClockGettime(clockid int32, time *Timespec) (err error) { + _, _, e1 := syscall_syscall(libc_clock_gettime_trampoline_addr, uintptr(clockid), uintptr(unsafe.Pointer(time)), 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +var libc_clock_gettime_trampoline_addr uintptr + +//go:cgo_import_dynamic libc_clock_gettime clock_gettime "libc.so" + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func Close(fd int) (err error) { _, _, e1 := syscall_syscall(libc_close_trampoline_addr, uintptr(fd), 0, 0) if e1 != 0 { diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_openbsd_riscv64.s b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_openbsd_riscv64.s index 7dba789271ca..e1fbd4dfa8c8 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_openbsd_riscv64.s +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_openbsd_riscv64.s @@ -5,792 +5,665 @@ TEXT libc_getgroups_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_getgroups(SB) - GLOBL ·libc_getgroups_trampoline_addr(SB), RODATA, $8 DATA ·libc_getgroups_trampoline_addr(SB)/8, $libc_getgroups_trampoline<>(SB) TEXT libc_setgroups_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_setgroups(SB) - GLOBL ·libc_setgroups_trampoline_addr(SB), RODATA, $8 DATA ·libc_setgroups_trampoline_addr(SB)/8, $libc_setgroups_trampoline<>(SB) TEXT libc_wait4_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_wait4(SB) - GLOBL ·libc_wait4_trampoline_addr(SB), RODATA, $8 DATA ·libc_wait4_trampoline_addr(SB)/8, $libc_wait4_trampoline<>(SB) TEXT libc_accept_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_accept(SB) - GLOBL ·libc_accept_trampoline_addr(SB), RODATA, $8 DATA ·libc_accept_trampoline_addr(SB)/8, $libc_accept_trampoline<>(SB) TEXT libc_bind_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_bind(SB) - GLOBL ·libc_bind_trampoline_addr(SB), RODATA, $8 DATA ·libc_bind_trampoline_addr(SB)/8, 
$libc_bind_trampoline<>(SB) TEXT libc_connect_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_connect(SB) - GLOBL ·libc_connect_trampoline_addr(SB), RODATA, $8 DATA ·libc_connect_trampoline_addr(SB)/8, $libc_connect_trampoline<>(SB) TEXT libc_socket_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_socket(SB) - GLOBL ·libc_socket_trampoline_addr(SB), RODATA, $8 DATA ·libc_socket_trampoline_addr(SB)/8, $libc_socket_trampoline<>(SB) TEXT libc_getsockopt_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_getsockopt(SB) - GLOBL ·libc_getsockopt_trampoline_addr(SB), RODATA, $8 DATA ·libc_getsockopt_trampoline_addr(SB)/8, $libc_getsockopt_trampoline<>(SB) TEXT libc_setsockopt_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_setsockopt(SB) - GLOBL ·libc_setsockopt_trampoline_addr(SB), RODATA, $8 DATA ·libc_setsockopt_trampoline_addr(SB)/8, $libc_setsockopt_trampoline<>(SB) TEXT libc_getpeername_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_getpeername(SB) - GLOBL ·libc_getpeername_trampoline_addr(SB), RODATA, $8 DATA ·libc_getpeername_trampoline_addr(SB)/8, $libc_getpeername_trampoline<>(SB) TEXT libc_getsockname_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_getsockname(SB) - GLOBL ·libc_getsockname_trampoline_addr(SB), RODATA, $8 DATA ·libc_getsockname_trampoline_addr(SB)/8, $libc_getsockname_trampoline<>(SB) TEXT libc_shutdown_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_shutdown(SB) - GLOBL ·libc_shutdown_trampoline_addr(SB), RODATA, $8 DATA ·libc_shutdown_trampoline_addr(SB)/8, $libc_shutdown_trampoline<>(SB) TEXT libc_socketpair_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_socketpair(SB) - GLOBL ·libc_socketpair_trampoline_addr(SB), RODATA, $8 DATA ·libc_socketpair_trampoline_addr(SB)/8, $libc_socketpair_trampoline<>(SB) TEXT libc_recvfrom_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_recvfrom(SB) - GLOBL ·libc_recvfrom_trampoline_addr(SB), RODATA, $8 DATA ·libc_recvfrom_trampoline_addr(SB)/8, $libc_recvfrom_trampoline<>(SB) TEXT libc_sendto_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_sendto(SB) - GLOBL ·libc_sendto_trampoline_addr(SB), RODATA, 
$8 DATA ·libc_sendto_trampoline_addr(SB)/8, $libc_sendto_trampoline<>(SB) TEXT libc_recvmsg_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_recvmsg(SB) - GLOBL ·libc_recvmsg_trampoline_addr(SB), RODATA, $8 DATA ·libc_recvmsg_trampoline_addr(SB)/8, $libc_recvmsg_trampoline<>(SB) TEXT libc_sendmsg_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_sendmsg(SB) - GLOBL ·libc_sendmsg_trampoline_addr(SB), RODATA, $8 DATA ·libc_sendmsg_trampoline_addr(SB)/8, $libc_sendmsg_trampoline<>(SB) TEXT libc_kevent_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_kevent(SB) - GLOBL ·libc_kevent_trampoline_addr(SB), RODATA, $8 DATA ·libc_kevent_trampoline_addr(SB)/8, $libc_kevent_trampoline<>(SB) TEXT libc_utimes_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_utimes(SB) - GLOBL ·libc_utimes_trampoline_addr(SB), RODATA, $8 DATA ·libc_utimes_trampoline_addr(SB)/8, $libc_utimes_trampoline<>(SB) TEXT libc_futimes_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_futimes(SB) - GLOBL ·libc_futimes_trampoline_addr(SB), RODATA, $8 DATA ·libc_futimes_trampoline_addr(SB)/8, $libc_futimes_trampoline<>(SB) TEXT libc_poll_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_poll(SB) - GLOBL ·libc_poll_trampoline_addr(SB), RODATA, $8 DATA ·libc_poll_trampoline_addr(SB)/8, $libc_poll_trampoline<>(SB) TEXT libc_madvise_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_madvise(SB) - GLOBL ·libc_madvise_trampoline_addr(SB), RODATA, $8 DATA ·libc_madvise_trampoline_addr(SB)/8, $libc_madvise_trampoline<>(SB) TEXT libc_mlock_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_mlock(SB) - GLOBL ·libc_mlock_trampoline_addr(SB), RODATA, $8 DATA ·libc_mlock_trampoline_addr(SB)/8, $libc_mlock_trampoline<>(SB) TEXT libc_mlockall_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_mlockall(SB) - GLOBL ·libc_mlockall_trampoline_addr(SB), RODATA, $8 DATA ·libc_mlockall_trampoline_addr(SB)/8, $libc_mlockall_trampoline<>(SB) TEXT libc_mprotect_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_mprotect(SB) - GLOBL ·libc_mprotect_trampoline_addr(SB), RODATA, $8 DATA ·libc_mprotect_trampoline_addr(SB)/8, 
$libc_mprotect_trampoline<>(SB) TEXT libc_msync_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_msync(SB) - GLOBL ·libc_msync_trampoline_addr(SB), RODATA, $8 DATA ·libc_msync_trampoline_addr(SB)/8, $libc_msync_trampoline<>(SB) TEXT libc_munlock_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_munlock(SB) - GLOBL ·libc_munlock_trampoline_addr(SB), RODATA, $8 DATA ·libc_munlock_trampoline_addr(SB)/8, $libc_munlock_trampoline<>(SB) TEXT libc_munlockall_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_munlockall(SB) - GLOBL ·libc_munlockall_trampoline_addr(SB), RODATA, $8 DATA ·libc_munlockall_trampoline_addr(SB)/8, $libc_munlockall_trampoline<>(SB) TEXT libc_pipe2_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_pipe2(SB) - GLOBL ·libc_pipe2_trampoline_addr(SB), RODATA, $8 DATA ·libc_pipe2_trampoline_addr(SB)/8, $libc_pipe2_trampoline<>(SB) TEXT libc_getdents_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_getdents(SB) - GLOBL ·libc_getdents_trampoline_addr(SB), RODATA, $8 DATA ·libc_getdents_trampoline_addr(SB)/8, $libc_getdents_trampoline<>(SB) TEXT libc_getcwd_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_getcwd(SB) - GLOBL ·libc_getcwd_trampoline_addr(SB), RODATA, $8 DATA ·libc_getcwd_trampoline_addr(SB)/8, $libc_getcwd_trampoline<>(SB) TEXT libc_ioctl_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_ioctl(SB) - GLOBL ·libc_ioctl_trampoline_addr(SB), RODATA, $8 DATA ·libc_ioctl_trampoline_addr(SB)/8, $libc_ioctl_trampoline<>(SB) TEXT libc_sysctl_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_sysctl(SB) - GLOBL ·libc_sysctl_trampoline_addr(SB), RODATA, $8 DATA ·libc_sysctl_trampoline_addr(SB)/8, $libc_sysctl_trampoline<>(SB) TEXT libc_ppoll_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_ppoll(SB) - GLOBL ·libc_ppoll_trampoline_addr(SB), RODATA, $8 DATA ·libc_ppoll_trampoline_addr(SB)/8, $libc_ppoll_trampoline<>(SB) TEXT libc_access_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_access(SB) - GLOBL ·libc_access_trampoline_addr(SB), RODATA, $8 DATA ·libc_access_trampoline_addr(SB)/8, $libc_access_trampoline<>(SB) TEXT 
libc_adjtime_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_adjtime(SB) - GLOBL ·libc_adjtime_trampoline_addr(SB), RODATA, $8 DATA ·libc_adjtime_trampoline_addr(SB)/8, $libc_adjtime_trampoline<>(SB) TEXT libc_chdir_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_chdir(SB) - GLOBL ·libc_chdir_trampoline_addr(SB), RODATA, $8 DATA ·libc_chdir_trampoline_addr(SB)/8, $libc_chdir_trampoline<>(SB) TEXT libc_chflags_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_chflags(SB) - GLOBL ·libc_chflags_trampoline_addr(SB), RODATA, $8 DATA ·libc_chflags_trampoline_addr(SB)/8, $libc_chflags_trampoline<>(SB) TEXT libc_chmod_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_chmod(SB) - GLOBL ·libc_chmod_trampoline_addr(SB), RODATA, $8 DATA ·libc_chmod_trampoline_addr(SB)/8, $libc_chmod_trampoline<>(SB) TEXT libc_chown_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_chown(SB) - GLOBL ·libc_chown_trampoline_addr(SB), RODATA, $8 DATA ·libc_chown_trampoline_addr(SB)/8, $libc_chown_trampoline<>(SB) TEXT libc_chroot_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_chroot(SB) - GLOBL ·libc_chroot_trampoline_addr(SB), RODATA, $8 DATA ·libc_chroot_trampoline_addr(SB)/8, $libc_chroot_trampoline<>(SB) +TEXT libc_clock_gettime_trampoline<>(SB),NOSPLIT,$0-0 + JMP libc_clock_gettime(SB) +GLOBL ·libc_clock_gettime_trampoline_addr(SB), RODATA, $8 +DATA ·libc_clock_gettime_trampoline_addr(SB)/8, $libc_clock_gettime_trampoline<>(SB) + TEXT libc_close_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_close(SB) - GLOBL ·libc_close_trampoline_addr(SB), RODATA, $8 DATA ·libc_close_trampoline_addr(SB)/8, $libc_close_trampoline<>(SB) TEXT libc_dup_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_dup(SB) - GLOBL ·libc_dup_trampoline_addr(SB), RODATA, $8 DATA ·libc_dup_trampoline_addr(SB)/8, $libc_dup_trampoline<>(SB) TEXT libc_dup2_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_dup2(SB) - GLOBL ·libc_dup2_trampoline_addr(SB), RODATA, $8 DATA ·libc_dup2_trampoline_addr(SB)/8, $libc_dup2_trampoline<>(SB) TEXT libc_dup3_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_dup3(SB) - GLOBL 
·libc_dup3_trampoline_addr(SB), RODATA, $8 DATA ·libc_dup3_trampoline_addr(SB)/8, $libc_dup3_trampoline<>(SB) TEXT libc_exit_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_exit(SB) - GLOBL ·libc_exit_trampoline_addr(SB), RODATA, $8 DATA ·libc_exit_trampoline_addr(SB)/8, $libc_exit_trampoline<>(SB) TEXT libc_faccessat_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_faccessat(SB) - GLOBL ·libc_faccessat_trampoline_addr(SB), RODATA, $8 DATA ·libc_faccessat_trampoline_addr(SB)/8, $libc_faccessat_trampoline<>(SB) TEXT libc_fchdir_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_fchdir(SB) - GLOBL ·libc_fchdir_trampoline_addr(SB), RODATA, $8 DATA ·libc_fchdir_trampoline_addr(SB)/8, $libc_fchdir_trampoline<>(SB) TEXT libc_fchflags_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_fchflags(SB) - GLOBL ·libc_fchflags_trampoline_addr(SB), RODATA, $8 DATA ·libc_fchflags_trampoline_addr(SB)/8, $libc_fchflags_trampoline<>(SB) TEXT libc_fchmod_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_fchmod(SB) - GLOBL ·libc_fchmod_trampoline_addr(SB), RODATA, $8 DATA ·libc_fchmod_trampoline_addr(SB)/8, $libc_fchmod_trampoline<>(SB) TEXT libc_fchmodat_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_fchmodat(SB) - GLOBL ·libc_fchmodat_trampoline_addr(SB), RODATA, $8 DATA ·libc_fchmodat_trampoline_addr(SB)/8, $libc_fchmodat_trampoline<>(SB) TEXT libc_fchown_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_fchown(SB) - GLOBL ·libc_fchown_trampoline_addr(SB), RODATA, $8 DATA ·libc_fchown_trampoline_addr(SB)/8, $libc_fchown_trampoline<>(SB) TEXT libc_fchownat_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_fchownat(SB) - GLOBL ·libc_fchownat_trampoline_addr(SB), RODATA, $8 DATA ·libc_fchownat_trampoline_addr(SB)/8, $libc_fchownat_trampoline<>(SB) TEXT libc_flock_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_flock(SB) - GLOBL ·libc_flock_trampoline_addr(SB), RODATA, $8 DATA ·libc_flock_trampoline_addr(SB)/8, $libc_flock_trampoline<>(SB) TEXT libc_fpathconf_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_fpathconf(SB) - GLOBL ·libc_fpathconf_trampoline_addr(SB), RODATA, $8 DATA 
·libc_fpathconf_trampoline_addr(SB)/8, $libc_fpathconf_trampoline<>(SB) TEXT libc_fstat_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_fstat(SB) - GLOBL ·libc_fstat_trampoline_addr(SB), RODATA, $8 DATA ·libc_fstat_trampoline_addr(SB)/8, $libc_fstat_trampoline<>(SB) TEXT libc_fstatat_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_fstatat(SB) - GLOBL ·libc_fstatat_trampoline_addr(SB), RODATA, $8 DATA ·libc_fstatat_trampoline_addr(SB)/8, $libc_fstatat_trampoline<>(SB) TEXT libc_fstatfs_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_fstatfs(SB) - GLOBL ·libc_fstatfs_trampoline_addr(SB), RODATA, $8 DATA ·libc_fstatfs_trampoline_addr(SB)/8, $libc_fstatfs_trampoline<>(SB) TEXT libc_fsync_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_fsync(SB) - GLOBL ·libc_fsync_trampoline_addr(SB), RODATA, $8 DATA ·libc_fsync_trampoline_addr(SB)/8, $libc_fsync_trampoline<>(SB) TEXT libc_ftruncate_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_ftruncate(SB) - GLOBL ·libc_ftruncate_trampoline_addr(SB), RODATA, $8 DATA ·libc_ftruncate_trampoline_addr(SB)/8, $libc_ftruncate_trampoline<>(SB) TEXT libc_getegid_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_getegid(SB) - GLOBL ·libc_getegid_trampoline_addr(SB), RODATA, $8 DATA ·libc_getegid_trampoline_addr(SB)/8, $libc_getegid_trampoline<>(SB) TEXT libc_geteuid_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_geteuid(SB) - GLOBL ·libc_geteuid_trampoline_addr(SB), RODATA, $8 DATA ·libc_geteuid_trampoline_addr(SB)/8, $libc_geteuid_trampoline<>(SB) TEXT libc_getgid_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_getgid(SB) - GLOBL ·libc_getgid_trampoline_addr(SB), RODATA, $8 DATA ·libc_getgid_trampoline_addr(SB)/8, $libc_getgid_trampoline<>(SB) TEXT libc_getpgid_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_getpgid(SB) - GLOBL ·libc_getpgid_trampoline_addr(SB), RODATA, $8 DATA ·libc_getpgid_trampoline_addr(SB)/8, $libc_getpgid_trampoline<>(SB) TEXT libc_getpgrp_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_getpgrp(SB) - GLOBL ·libc_getpgrp_trampoline_addr(SB), RODATA, $8 DATA ·libc_getpgrp_trampoline_addr(SB)/8, 
$libc_getpgrp_trampoline<>(SB) TEXT libc_getpid_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_getpid(SB) - GLOBL ·libc_getpid_trampoline_addr(SB), RODATA, $8 DATA ·libc_getpid_trampoline_addr(SB)/8, $libc_getpid_trampoline<>(SB) TEXT libc_getppid_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_getppid(SB) - GLOBL ·libc_getppid_trampoline_addr(SB), RODATA, $8 DATA ·libc_getppid_trampoline_addr(SB)/8, $libc_getppid_trampoline<>(SB) TEXT libc_getpriority_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_getpriority(SB) - GLOBL ·libc_getpriority_trampoline_addr(SB), RODATA, $8 DATA ·libc_getpriority_trampoline_addr(SB)/8, $libc_getpriority_trampoline<>(SB) TEXT libc_getrlimit_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_getrlimit(SB) - GLOBL ·libc_getrlimit_trampoline_addr(SB), RODATA, $8 DATA ·libc_getrlimit_trampoline_addr(SB)/8, $libc_getrlimit_trampoline<>(SB) TEXT libc_getrtable_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_getrtable(SB) - GLOBL ·libc_getrtable_trampoline_addr(SB), RODATA, $8 DATA ·libc_getrtable_trampoline_addr(SB)/8, $libc_getrtable_trampoline<>(SB) TEXT libc_getrusage_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_getrusage(SB) - GLOBL ·libc_getrusage_trampoline_addr(SB), RODATA, $8 DATA ·libc_getrusage_trampoline_addr(SB)/8, $libc_getrusage_trampoline<>(SB) TEXT libc_getsid_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_getsid(SB) - GLOBL ·libc_getsid_trampoline_addr(SB), RODATA, $8 DATA ·libc_getsid_trampoline_addr(SB)/8, $libc_getsid_trampoline<>(SB) TEXT libc_gettimeofday_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_gettimeofday(SB) - GLOBL ·libc_gettimeofday_trampoline_addr(SB), RODATA, $8 DATA ·libc_gettimeofday_trampoline_addr(SB)/8, $libc_gettimeofday_trampoline<>(SB) TEXT libc_getuid_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_getuid(SB) - GLOBL ·libc_getuid_trampoline_addr(SB), RODATA, $8 DATA ·libc_getuid_trampoline_addr(SB)/8, $libc_getuid_trampoline<>(SB) TEXT libc_issetugid_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_issetugid(SB) - GLOBL ·libc_issetugid_trampoline_addr(SB), RODATA, $8 DATA 
·libc_issetugid_trampoline_addr(SB)/8, $libc_issetugid_trampoline<>(SB) TEXT libc_kill_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_kill(SB) - GLOBL ·libc_kill_trampoline_addr(SB), RODATA, $8 DATA ·libc_kill_trampoline_addr(SB)/8, $libc_kill_trampoline<>(SB) TEXT libc_kqueue_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_kqueue(SB) - GLOBL ·libc_kqueue_trampoline_addr(SB), RODATA, $8 DATA ·libc_kqueue_trampoline_addr(SB)/8, $libc_kqueue_trampoline<>(SB) TEXT libc_lchown_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_lchown(SB) - GLOBL ·libc_lchown_trampoline_addr(SB), RODATA, $8 DATA ·libc_lchown_trampoline_addr(SB)/8, $libc_lchown_trampoline<>(SB) TEXT libc_link_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_link(SB) - GLOBL ·libc_link_trampoline_addr(SB), RODATA, $8 DATA ·libc_link_trampoline_addr(SB)/8, $libc_link_trampoline<>(SB) TEXT libc_linkat_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_linkat(SB) - GLOBL ·libc_linkat_trampoline_addr(SB), RODATA, $8 DATA ·libc_linkat_trampoline_addr(SB)/8, $libc_linkat_trampoline<>(SB) TEXT libc_listen_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_listen(SB) - GLOBL ·libc_listen_trampoline_addr(SB), RODATA, $8 DATA ·libc_listen_trampoline_addr(SB)/8, $libc_listen_trampoline<>(SB) TEXT libc_lstat_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_lstat(SB) - GLOBL ·libc_lstat_trampoline_addr(SB), RODATA, $8 DATA ·libc_lstat_trampoline_addr(SB)/8, $libc_lstat_trampoline<>(SB) TEXT libc_mkdir_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_mkdir(SB) - GLOBL ·libc_mkdir_trampoline_addr(SB), RODATA, $8 DATA ·libc_mkdir_trampoline_addr(SB)/8, $libc_mkdir_trampoline<>(SB) TEXT libc_mkdirat_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_mkdirat(SB) - GLOBL ·libc_mkdirat_trampoline_addr(SB), RODATA, $8 DATA ·libc_mkdirat_trampoline_addr(SB)/8, $libc_mkdirat_trampoline<>(SB) TEXT libc_mkfifo_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_mkfifo(SB) - GLOBL ·libc_mkfifo_trampoline_addr(SB), RODATA, $8 DATA ·libc_mkfifo_trampoline_addr(SB)/8, $libc_mkfifo_trampoline<>(SB) TEXT 
libc_mkfifoat_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_mkfifoat(SB) - GLOBL ·libc_mkfifoat_trampoline_addr(SB), RODATA, $8 DATA ·libc_mkfifoat_trampoline_addr(SB)/8, $libc_mkfifoat_trampoline<>(SB) TEXT libc_mknod_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_mknod(SB) - GLOBL ·libc_mknod_trampoline_addr(SB), RODATA, $8 DATA ·libc_mknod_trampoline_addr(SB)/8, $libc_mknod_trampoline<>(SB) TEXT libc_mknodat_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_mknodat(SB) - GLOBL ·libc_mknodat_trampoline_addr(SB), RODATA, $8 DATA ·libc_mknodat_trampoline_addr(SB)/8, $libc_mknodat_trampoline<>(SB) TEXT libc_nanosleep_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_nanosleep(SB) - GLOBL ·libc_nanosleep_trampoline_addr(SB), RODATA, $8 DATA ·libc_nanosleep_trampoline_addr(SB)/8, $libc_nanosleep_trampoline<>(SB) TEXT libc_open_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_open(SB) - GLOBL ·libc_open_trampoline_addr(SB), RODATA, $8 DATA ·libc_open_trampoline_addr(SB)/8, $libc_open_trampoline<>(SB) TEXT libc_openat_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_openat(SB) - GLOBL ·libc_openat_trampoline_addr(SB), RODATA, $8 DATA ·libc_openat_trampoline_addr(SB)/8, $libc_openat_trampoline<>(SB) TEXT libc_pathconf_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_pathconf(SB) - GLOBL ·libc_pathconf_trampoline_addr(SB), RODATA, $8 DATA ·libc_pathconf_trampoline_addr(SB)/8, $libc_pathconf_trampoline<>(SB) TEXT libc_pread_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_pread(SB) - GLOBL ·libc_pread_trampoline_addr(SB), RODATA, $8 DATA ·libc_pread_trampoline_addr(SB)/8, $libc_pread_trampoline<>(SB) TEXT libc_pwrite_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_pwrite(SB) - GLOBL ·libc_pwrite_trampoline_addr(SB), RODATA, $8 DATA ·libc_pwrite_trampoline_addr(SB)/8, $libc_pwrite_trampoline<>(SB) TEXT libc_read_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_read(SB) - GLOBL ·libc_read_trampoline_addr(SB), RODATA, $8 DATA ·libc_read_trampoline_addr(SB)/8, $libc_read_trampoline<>(SB) TEXT libc_readlink_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_readlink(SB) - GLOBL 
·libc_readlink_trampoline_addr(SB), RODATA, $8 DATA ·libc_readlink_trampoline_addr(SB)/8, $libc_readlink_trampoline<>(SB) TEXT libc_readlinkat_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_readlinkat(SB) - GLOBL ·libc_readlinkat_trampoline_addr(SB), RODATA, $8 DATA ·libc_readlinkat_trampoline_addr(SB)/8, $libc_readlinkat_trampoline<>(SB) TEXT libc_rename_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_rename(SB) - GLOBL ·libc_rename_trampoline_addr(SB), RODATA, $8 DATA ·libc_rename_trampoline_addr(SB)/8, $libc_rename_trampoline<>(SB) TEXT libc_renameat_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_renameat(SB) - GLOBL ·libc_renameat_trampoline_addr(SB), RODATA, $8 DATA ·libc_renameat_trampoline_addr(SB)/8, $libc_renameat_trampoline<>(SB) TEXT libc_revoke_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_revoke(SB) - GLOBL ·libc_revoke_trampoline_addr(SB), RODATA, $8 DATA ·libc_revoke_trampoline_addr(SB)/8, $libc_revoke_trampoline<>(SB) TEXT libc_rmdir_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_rmdir(SB) - GLOBL ·libc_rmdir_trampoline_addr(SB), RODATA, $8 DATA ·libc_rmdir_trampoline_addr(SB)/8, $libc_rmdir_trampoline<>(SB) TEXT libc_lseek_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_lseek(SB) - GLOBL ·libc_lseek_trampoline_addr(SB), RODATA, $8 DATA ·libc_lseek_trampoline_addr(SB)/8, $libc_lseek_trampoline<>(SB) TEXT libc_select_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_select(SB) - GLOBL ·libc_select_trampoline_addr(SB), RODATA, $8 DATA ·libc_select_trampoline_addr(SB)/8, $libc_select_trampoline<>(SB) TEXT libc_setegid_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_setegid(SB) - GLOBL ·libc_setegid_trampoline_addr(SB), RODATA, $8 DATA ·libc_setegid_trampoline_addr(SB)/8, $libc_setegid_trampoline<>(SB) TEXT libc_seteuid_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_seteuid(SB) - GLOBL ·libc_seteuid_trampoline_addr(SB), RODATA, $8 DATA ·libc_seteuid_trampoline_addr(SB)/8, $libc_seteuid_trampoline<>(SB) TEXT libc_setgid_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_setgid(SB) - GLOBL ·libc_setgid_trampoline_addr(SB), RODATA, $8 DATA 
·libc_setgid_trampoline_addr(SB)/8, $libc_setgid_trampoline<>(SB) TEXT libc_setlogin_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_setlogin(SB) - GLOBL ·libc_setlogin_trampoline_addr(SB), RODATA, $8 DATA ·libc_setlogin_trampoline_addr(SB)/8, $libc_setlogin_trampoline<>(SB) TEXT libc_setpgid_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_setpgid(SB) - GLOBL ·libc_setpgid_trampoline_addr(SB), RODATA, $8 DATA ·libc_setpgid_trampoline_addr(SB)/8, $libc_setpgid_trampoline<>(SB) TEXT libc_setpriority_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_setpriority(SB) - GLOBL ·libc_setpriority_trampoline_addr(SB), RODATA, $8 DATA ·libc_setpriority_trampoline_addr(SB)/8, $libc_setpriority_trampoline<>(SB) TEXT libc_setregid_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_setregid(SB) - GLOBL ·libc_setregid_trampoline_addr(SB), RODATA, $8 DATA ·libc_setregid_trampoline_addr(SB)/8, $libc_setregid_trampoline<>(SB) TEXT libc_setreuid_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_setreuid(SB) - GLOBL ·libc_setreuid_trampoline_addr(SB), RODATA, $8 DATA ·libc_setreuid_trampoline_addr(SB)/8, $libc_setreuid_trampoline<>(SB) TEXT libc_setresgid_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_setresgid(SB) - GLOBL ·libc_setresgid_trampoline_addr(SB), RODATA, $8 DATA ·libc_setresgid_trampoline_addr(SB)/8, $libc_setresgid_trampoline<>(SB) TEXT libc_setresuid_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_setresuid(SB) - GLOBL ·libc_setresuid_trampoline_addr(SB), RODATA, $8 DATA ·libc_setresuid_trampoline_addr(SB)/8, $libc_setresuid_trampoline<>(SB) TEXT libc_setrlimit_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_setrlimit(SB) - GLOBL ·libc_setrlimit_trampoline_addr(SB), RODATA, $8 DATA ·libc_setrlimit_trampoline_addr(SB)/8, $libc_setrlimit_trampoline<>(SB) TEXT libc_setrtable_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_setrtable(SB) - GLOBL ·libc_setrtable_trampoline_addr(SB), RODATA, $8 DATA ·libc_setrtable_trampoline_addr(SB)/8, $libc_setrtable_trampoline<>(SB) TEXT libc_setsid_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_setsid(SB) - GLOBL 
·libc_setsid_trampoline_addr(SB), RODATA, $8 DATA ·libc_setsid_trampoline_addr(SB)/8, $libc_setsid_trampoline<>(SB) TEXT libc_settimeofday_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_settimeofday(SB) - GLOBL ·libc_settimeofday_trampoline_addr(SB), RODATA, $8 DATA ·libc_settimeofday_trampoline_addr(SB)/8, $libc_settimeofday_trampoline<>(SB) TEXT libc_setuid_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_setuid(SB) - GLOBL ·libc_setuid_trampoline_addr(SB), RODATA, $8 DATA ·libc_setuid_trampoline_addr(SB)/8, $libc_setuid_trampoline<>(SB) TEXT libc_stat_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_stat(SB) - GLOBL ·libc_stat_trampoline_addr(SB), RODATA, $8 DATA ·libc_stat_trampoline_addr(SB)/8, $libc_stat_trampoline<>(SB) TEXT libc_statfs_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_statfs(SB) - GLOBL ·libc_statfs_trampoline_addr(SB), RODATA, $8 DATA ·libc_statfs_trampoline_addr(SB)/8, $libc_statfs_trampoline<>(SB) TEXT libc_symlink_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_symlink(SB) - GLOBL ·libc_symlink_trampoline_addr(SB), RODATA, $8 DATA ·libc_symlink_trampoline_addr(SB)/8, $libc_symlink_trampoline<>(SB) TEXT libc_symlinkat_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_symlinkat(SB) - GLOBL ·libc_symlinkat_trampoline_addr(SB), RODATA, $8 DATA ·libc_symlinkat_trampoline_addr(SB)/8, $libc_symlinkat_trampoline<>(SB) TEXT libc_sync_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_sync(SB) - GLOBL ·libc_sync_trampoline_addr(SB), RODATA, $8 DATA ·libc_sync_trampoline_addr(SB)/8, $libc_sync_trampoline<>(SB) TEXT libc_truncate_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_truncate(SB) - GLOBL ·libc_truncate_trampoline_addr(SB), RODATA, $8 DATA ·libc_truncate_trampoline_addr(SB)/8, $libc_truncate_trampoline<>(SB) TEXT libc_umask_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_umask(SB) - GLOBL ·libc_umask_trampoline_addr(SB), RODATA, $8 DATA ·libc_umask_trampoline_addr(SB)/8, $libc_umask_trampoline<>(SB) TEXT libc_unlink_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_unlink(SB) - GLOBL ·libc_unlink_trampoline_addr(SB), RODATA, $8 DATA 
·libc_unlink_trampoline_addr(SB)/8, $libc_unlink_trampoline<>(SB) TEXT libc_unlinkat_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_unlinkat(SB) - GLOBL ·libc_unlinkat_trampoline_addr(SB), RODATA, $8 DATA ·libc_unlinkat_trampoline_addr(SB)/8, $libc_unlinkat_trampoline<>(SB) TEXT libc_unmount_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_unmount(SB) - GLOBL ·libc_unmount_trampoline_addr(SB), RODATA, $8 DATA ·libc_unmount_trampoline_addr(SB)/8, $libc_unmount_trampoline<>(SB) TEXT libc_write_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_write(SB) - GLOBL ·libc_write_trampoline_addr(SB), RODATA, $8 DATA ·libc_write_trampoline_addr(SB)/8, $libc_write_trampoline<>(SB) TEXT libc_mmap_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_mmap(SB) - GLOBL ·libc_mmap_trampoline_addr(SB), RODATA, $8 DATA ·libc_mmap_trampoline_addr(SB)/8, $libc_mmap_trampoline<>(SB) TEXT libc_munmap_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_munmap(SB) - GLOBL ·libc_munmap_trampoline_addr(SB), RODATA, $8 DATA ·libc_munmap_trampoline_addr(SB)/8, $libc_munmap_trampoline<>(SB) TEXT libc_utimensat_trampoline<>(SB),NOSPLIT,$0-0 JMP libc_utimensat(SB) - GLOBL ·libc_utimensat_trampoline_addr(SB), RODATA, $8 DATA ·libc_utimensat_trampoline_addr(SB)/8, $libc_utimensat_trampoline<>(SB) diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_solaris_amd64.go b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_solaris_amd64.go index 91f5a2bde282..4873a1e5d3e9 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_solaris_amd64.go +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_solaris_amd64.go @@ -38,6 +38,7 @@ import ( //go:cgo_import_dynamic libc_chmod chmod "libc.so" //go:cgo_import_dynamic libc_chown chown "libc.so" //go:cgo_import_dynamic libc_chroot chroot "libc.so" +//go:cgo_import_dynamic libc_clockgettime clockgettime "libc.so" //go:cgo_import_dynamic libc_close close "libc.so" //go:cgo_import_dynamic libc_creat creat "libc.so" //go:cgo_import_dynamic libc_dup dup 
"libc.so" @@ -177,6 +178,7 @@ import ( //go:linkname procChmod libc_chmod //go:linkname procChown libc_chown //go:linkname procChroot libc_chroot +//go:linkname procClockGettime libc_clockgettime //go:linkname procClose libc_close //go:linkname procCreat libc_creat //go:linkname procDup libc_dup @@ -317,6 +319,7 @@ var ( procChmod, procChown, procChroot, + procClockGettime, procClose, procCreat, procDup, @@ -654,6 +657,17 @@ func ioctlRet(fd int, req uint, arg uintptr) (ret int, err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func ioctlPtrRet(fd int, req uint, arg unsafe.Pointer) (ret int, err error) { + r0, _, e1 := sysvicall6(uintptr(unsafe.Pointer(&procioctl)), 3, uintptr(fd), uintptr(req), uintptr(arg), 0, 0, 0) + ret = int(r0) + if e1 != 0 { + err = e1 + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func poll(fds *PollFd, nfds int, timeout int) (n int, err error) { r0, _, e1 := sysvicall6(uintptr(unsafe.Pointer(&procpoll)), 3, uintptr(unsafe.Pointer(fds)), uintptr(nfds), uintptr(timeout), 0, 0, 0) n = int(r0) @@ -750,6 +764,16 @@ func Chroot(path string) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func ClockGettime(clockid int32, time *Timespec) (err error) { + _, _, e1 := sysvicall6(uintptr(unsafe.Pointer(&procClockGettime)), 2, uintptr(clockid), uintptr(unsafe.Pointer(time)), 0, 0, 0, 0) + if e1 != 0 { + err = e1 + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func Close(fd int) (err error) { _, _, e1 := sysvicall6(uintptr(unsafe.Pointer(&procClose)), 1, uintptr(fd), 0, 0, 0, 0, 0) if e1 != 0 { diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_zos_s390x.go b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_zos_s390x.go index f2079457c6b2..07bfe2ef9ad0 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_zos_s390x.go +++ 
b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_zos_s390x.go @@ -267,6 +267,16 @@ func ioctl(fd int, req uint, arg uintptr) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func ioctlPtr(fd int, req uint, arg unsafe.Pointer) (err error) { + _, _, e1 := syscall_syscall(SYS_IOCTL, uintptr(fd), uintptr(req), uintptr(arg)) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func Access(path string, mode uint32) (err error) { var _p0 *byte _p0, err = BytePtrFromString(path) diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsysctl_openbsd_386.go b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsysctl_openbsd_386.go index 9e9d0b2a9c45..55e0484719c4 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsysctl_openbsd_386.go +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsysctl_openbsd_386.go @@ -17,6 +17,7 @@ var sysctlMib = []mibentry{ {"ddb.max_line", []_C_int{9, 3}}, {"ddb.max_width", []_C_int{9, 2}}, {"ddb.panic", []_C_int{9, 5}}, + {"ddb.profile", []_C_int{9, 9}}, {"ddb.radix", []_C_int{9, 1}}, {"ddb.tab_stop_width", []_C_int{9, 4}}, {"ddb.trigger", []_C_int{9, 8}}, @@ -33,29 +34,37 @@ var sysctlMib = []mibentry{ {"hw.ncpufound", []_C_int{6, 21}}, {"hw.ncpuonline", []_C_int{6, 25}}, {"hw.pagesize", []_C_int{6, 7}}, + {"hw.perfpolicy", []_C_int{6, 23}}, {"hw.physmem", []_C_int{6, 19}}, + {"hw.power", []_C_int{6, 26}}, {"hw.product", []_C_int{6, 15}}, {"hw.serialno", []_C_int{6, 17}}, {"hw.setperf", []_C_int{6, 13}}, + {"hw.smt", []_C_int{6, 24}}, {"hw.usermem", []_C_int{6, 20}}, {"hw.uuid", []_C_int{6, 18}}, {"hw.vendor", []_C_int{6, 14}}, {"hw.version", []_C_int{6, 16}}, - {"kern.arandom", []_C_int{1, 37}}, + {"kern.allowdt", []_C_int{1, 65}}, + {"kern.allowkmem", []_C_int{1, 52}}, {"kern.argmax", []_C_int{1, 8}}, + {"kern.audio", []_C_int{1, 84}}, {"kern.boottime", []_C_int{1, 21}}, 
{"kern.bufcachepercent", []_C_int{1, 72}}, {"kern.ccpu", []_C_int{1, 45}}, {"kern.clockrate", []_C_int{1, 12}}, + {"kern.consbuf", []_C_int{1, 83}}, + {"kern.consbufsize", []_C_int{1, 82}}, {"kern.consdev", []_C_int{1, 75}}, {"kern.cp_time", []_C_int{1, 40}}, {"kern.cp_time2", []_C_int{1, 71}}, - {"kern.cryptodevallowsoft", []_C_int{1, 53}}, + {"kern.cpustats", []_C_int{1, 85}}, {"kern.domainname", []_C_int{1, 22}}, {"kern.file", []_C_int{1, 73}}, {"kern.forkstat", []_C_int{1, 42}}, {"kern.fscale", []_C_int{1, 46}}, {"kern.fsync", []_C_int{1, 33}}, + {"kern.global_ptrace", []_C_int{1, 81}}, {"kern.hostid", []_C_int{1, 11}}, {"kern.hostname", []_C_int{1, 10}}, {"kern.intrcnt.nintrcnt", []_C_int{1, 63, 1}}, @@ -78,17 +87,16 @@ var sysctlMib = []mibentry{ {"kern.ngroups", []_C_int{1, 18}}, {"kern.nosuidcoredump", []_C_int{1, 32}}, {"kern.nprocs", []_C_int{1, 47}}, - {"kern.nselcoll", []_C_int{1, 43}}, {"kern.nthreads", []_C_int{1, 26}}, {"kern.numvnodes", []_C_int{1, 58}}, {"kern.osrelease", []_C_int{1, 2}}, {"kern.osrevision", []_C_int{1, 3}}, {"kern.ostype", []_C_int{1, 1}}, {"kern.osversion", []_C_int{1, 27}}, + {"kern.pfstatus", []_C_int{1, 86}}, {"kern.pool_debug", []_C_int{1, 77}}, {"kern.posix1version", []_C_int{1, 17}}, {"kern.proc", []_C_int{1, 66}}, - {"kern.random", []_C_int{1, 31}}, {"kern.rawpartition", []_C_int{1, 24}}, {"kern.saved_ids", []_C_int{1, 20}}, {"kern.securelevel", []_C_int{1, 9}}, @@ -106,21 +114,20 @@ var sysctlMib = []mibentry{ {"kern.timecounter.hardware", []_C_int{1, 69, 3}}, {"kern.timecounter.tick", []_C_int{1, 69, 1}}, {"kern.timecounter.timestepwarnings", []_C_int{1, 69, 2}}, - {"kern.tty.maxptys", []_C_int{1, 44, 6}}, - {"kern.tty.nptys", []_C_int{1, 44, 7}}, + {"kern.timeout_stats", []_C_int{1, 87}}, {"kern.tty.tk_cancc", []_C_int{1, 44, 4}}, {"kern.tty.tk_nin", []_C_int{1, 44, 1}}, {"kern.tty.tk_nout", []_C_int{1, 44, 2}}, {"kern.tty.tk_rawcc", []_C_int{1, 44, 3}}, {"kern.tty.ttyinfo", []_C_int{1, 44, 5}}, {"kern.ttycount", 
[]_C_int{1, 57}}, - {"kern.userasymcrypto", []_C_int{1, 60}}, - {"kern.usercrypto", []_C_int{1, 52}}, - {"kern.usermount", []_C_int{1, 30}}, + {"kern.utc_offset", []_C_int{1, 88}}, {"kern.version", []_C_int{1, 4}}, - {"kern.vnode", []_C_int{1, 13}}, + {"kern.video", []_C_int{1, 89}}, {"kern.watchdog.auto", []_C_int{1, 64, 2}}, {"kern.watchdog.period", []_C_int{1, 64, 1}}, + {"kern.witnesswatch", []_C_int{1, 53}}, + {"kern.wxabort", []_C_int{1, 74}}, {"net.bpf.bufsize", []_C_int{4, 31, 1}}, {"net.bpf.maxbufsize", []_C_int{4, 31, 2}}, {"net.inet.ah.enable", []_C_int{4, 2, 51, 1}}, @@ -148,7 +155,9 @@ var sysctlMib = []mibentry{ {"net.inet.icmp.stats", []_C_int{4, 2, 1, 7}}, {"net.inet.icmp.tstamprepl", []_C_int{4, 2, 1, 6}}, {"net.inet.igmp.stats", []_C_int{4, 2, 2, 1}}, + {"net.inet.ip.arpdown", []_C_int{4, 2, 0, 40}}, {"net.inet.ip.arpqueued", []_C_int{4, 2, 0, 36}}, + {"net.inet.ip.arptimeout", []_C_int{4, 2, 0, 39}}, {"net.inet.ip.encdebug", []_C_int{4, 2, 0, 12}}, {"net.inet.ip.forwarding", []_C_int{4, 2, 0, 1}}, {"net.inet.ip.ifq.congestion", []_C_int{4, 2, 0, 30, 4}}, @@ -157,8 +166,10 @@ var sysctlMib = []mibentry{ {"net.inet.ip.ifq.maxlen", []_C_int{4, 2, 0, 30, 2}}, {"net.inet.ip.maxqueue", []_C_int{4, 2, 0, 11}}, {"net.inet.ip.mforwarding", []_C_int{4, 2, 0, 31}}, + {"net.inet.ip.mrtmfc", []_C_int{4, 2, 0, 37}}, {"net.inet.ip.mrtproto", []_C_int{4, 2, 0, 34}}, {"net.inet.ip.mrtstats", []_C_int{4, 2, 0, 35}}, + {"net.inet.ip.mrtvif", []_C_int{4, 2, 0, 38}}, {"net.inet.ip.mtu", []_C_int{4, 2, 0, 4}}, {"net.inet.ip.mtudisc", []_C_int{4, 2, 0, 27}}, {"net.inet.ip.mtudisctimeout", []_C_int{4, 2, 0, 28}}, @@ -175,9 +186,7 @@ var sysctlMib = []mibentry{ {"net.inet.ipcomp.stats", []_C_int{4, 2, 108, 2}}, {"net.inet.ipip.allow", []_C_int{4, 2, 4, 1}}, {"net.inet.ipip.stats", []_C_int{4, 2, 4, 2}}, - {"net.inet.mobileip.allow", []_C_int{4, 2, 55, 1}}, {"net.inet.pfsync.stats", []_C_int{4, 2, 240, 1}}, - {"net.inet.pim.stats", []_C_int{4, 2, 103, 1}}, 
{"net.inet.tcp.ackonpush", []_C_int{4, 2, 6, 13}}, {"net.inet.tcp.always_keepalive", []_C_int{4, 2, 6, 22}}, {"net.inet.tcp.baddynamic", []_C_int{4, 2, 6, 6}}, @@ -191,6 +200,7 @@ var sysctlMib = []mibentry{ {"net.inet.tcp.reasslimit", []_C_int{4, 2, 6, 18}}, {"net.inet.tcp.rfc1323", []_C_int{4, 2, 6, 1}}, {"net.inet.tcp.rfc3390", []_C_int{4, 2, 6, 17}}, + {"net.inet.tcp.rootonly", []_C_int{4, 2, 6, 24}}, {"net.inet.tcp.rstppslimit", []_C_int{4, 2, 6, 12}}, {"net.inet.tcp.sack", []_C_int{4, 2, 6, 10}}, {"net.inet.tcp.sackholelimit", []_C_int{4, 2, 6, 20}}, @@ -198,9 +208,12 @@ var sysctlMib = []mibentry{ {"net.inet.tcp.stats", []_C_int{4, 2, 6, 21}}, {"net.inet.tcp.synbucketlimit", []_C_int{4, 2, 6, 16}}, {"net.inet.tcp.syncachelimit", []_C_int{4, 2, 6, 15}}, + {"net.inet.tcp.synhashsize", []_C_int{4, 2, 6, 25}}, + {"net.inet.tcp.synuselimit", []_C_int{4, 2, 6, 23}}, {"net.inet.udp.baddynamic", []_C_int{4, 2, 17, 2}}, {"net.inet.udp.checksum", []_C_int{4, 2, 17, 1}}, {"net.inet.udp.recvspace", []_C_int{4, 2, 17, 3}}, + {"net.inet.udp.rootonly", []_C_int{4, 2, 17, 6}}, {"net.inet.udp.sendspace", []_C_int{4, 2, 17, 4}}, {"net.inet.udp.stats", []_C_int{4, 2, 17, 5}}, {"net.inet6.divert.recvspace", []_C_int{4, 24, 86, 1}}, @@ -213,13 +226,8 @@ var sysctlMib = []mibentry{ {"net.inet6.icmp6.nd6_delay", []_C_int{4, 24, 30, 8}}, {"net.inet6.icmp6.nd6_maxnudhint", []_C_int{4, 24, 30, 15}}, {"net.inet6.icmp6.nd6_mmaxtries", []_C_int{4, 24, 30, 10}}, - {"net.inet6.icmp6.nd6_prune", []_C_int{4, 24, 30, 6}}, {"net.inet6.icmp6.nd6_umaxtries", []_C_int{4, 24, 30, 9}}, - {"net.inet6.icmp6.nd6_useloopback", []_C_int{4, 24, 30, 11}}, - {"net.inet6.icmp6.nodeinfo", []_C_int{4, 24, 30, 13}}, - {"net.inet6.icmp6.rediraccept", []_C_int{4, 24, 30, 2}}, {"net.inet6.icmp6.redirtimeout", []_C_int{4, 24, 30, 3}}, - {"net.inet6.ip6.accept_rtadv", []_C_int{4, 24, 17, 12}}, {"net.inet6.ip6.auto_flowlabel", []_C_int{4, 24, 17, 17}}, {"net.inet6.ip6.dad_count", []_C_int{4, 24, 17, 16}}, 
{"net.inet6.ip6.dad_pending", []_C_int{4, 24, 17, 49}}, @@ -232,20 +240,19 @@ var sysctlMib = []mibentry{ {"net.inet6.ip6.maxdynroutes", []_C_int{4, 24, 17, 48}}, {"net.inet6.ip6.maxfragpackets", []_C_int{4, 24, 17, 9}}, {"net.inet6.ip6.maxfrags", []_C_int{4, 24, 17, 41}}, - {"net.inet6.ip6.maxifdefrouters", []_C_int{4, 24, 17, 47}}, - {"net.inet6.ip6.maxifprefixes", []_C_int{4, 24, 17, 46}}, {"net.inet6.ip6.mforwarding", []_C_int{4, 24, 17, 42}}, + {"net.inet6.ip6.mrtmfc", []_C_int{4, 24, 17, 53}}, + {"net.inet6.ip6.mrtmif", []_C_int{4, 24, 17, 52}}, {"net.inet6.ip6.mrtproto", []_C_int{4, 24, 17, 8}}, {"net.inet6.ip6.mtudisctimeout", []_C_int{4, 24, 17, 50}}, {"net.inet6.ip6.multicast_mtudisc", []_C_int{4, 24, 17, 44}}, {"net.inet6.ip6.multipath", []_C_int{4, 24, 17, 43}}, {"net.inet6.ip6.neighborgcthresh", []_C_int{4, 24, 17, 45}}, {"net.inet6.ip6.redirect", []_C_int{4, 24, 17, 2}}, - {"net.inet6.ip6.rr_prune", []_C_int{4, 24, 17, 22}}, + {"net.inet6.ip6.soiikey", []_C_int{4, 24, 17, 54}}, {"net.inet6.ip6.sourcecheck", []_C_int{4, 24, 17, 10}}, {"net.inet6.ip6.sourcecheck_logint", []_C_int{4, 24, 17, 11}}, {"net.inet6.ip6.use_deprecated", []_C_int{4, 24, 17, 21}}, - {"net.inet6.ip6.v6only", []_C_int{4, 24, 17, 24}}, {"net.key.sadb_dump", []_C_int{4, 30, 1}}, {"net.key.spd_dump", []_C_int{4, 30, 2}}, {"net.mpls.ifq.congestion", []_C_int{4, 33, 3, 4}}, @@ -254,12 +261,12 @@ var sysctlMib = []mibentry{ {"net.mpls.ifq.maxlen", []_C_int{4, 33, 3, 2}}, {"net.mpls.mapttl_ip", []_C_int{4, 33, 5}}, {"net.mpls.mapttl_ip6", []_C_int{4, 33, 6}}, - {"net.mpls.maxloop_inkernel", []_C_int{4, 33, 4}}, {"net.mpls.ttl", []_C_int{4, 33, 2}}, {"net.pflow.stats", []_C_int{4, 34, 1}}, {"net.pipex.enable", []_C_int{4, 35, 1}}, {"vm.anonmin", []_C_int{2, 7}}, {"vm.loadavg", []_C_int{2, 2}}, + {"vm.malloc_conf", []_C_int{2, 12}}, {"vm.maxslp", []_C_int{2, 10}}, {"vm.nkmempages", []_C_int{2, 6}}, {"vm.psstrings", []_C_int{2, 3}}, diff --git 
a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsysctl_openbsd_amd64.go b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsysctl_openbsd_amd64.go index adecd09667d0..d2243cf83f5b 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsysctl_openbsd_amd64.go +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsysctl_openbsd_amd64.go @@ -36,23 +36,29 @@ var sysctlMib = []mibentry{ {"hw.pagesize", []_C_int{6, 7}}, {"hw.perfpolicy", []_C_int{6, 23}}, {"hw.physmem", []_C_int{6, 19}}, + {"hw.power", []_C_int{6, 26}}, {"hw.product", []_C_int{6, 15}}, {"hw.serialno", []_C_int{6, 17}}, {"hw.setperf", []_C_int{6, 13}}, + {"hw.smt", []_C_int{6, 24}}, {"hw.usermem", []_C_int{6, 20}}, {"hw.uuid", []_C_int{6, 18}}, {"hw.vendor", []_C_int{6, 14}}, {"hw.version", []_C_int{6, 16}}, + {"kern.allowdt", []_C_int{1, 65}}, {"kern.allowkmem", []_C_int{1, 52}}, {"kern.argmax", []_C_int{1, 8}}, + {"kern.audio", []_C_int{1, 84}}, {"kern.boottime", []_C_int{1, 21}}, {"kern.bufcachepercent", []_C_int{1, 72}}, {"kern.ccpu", []_C_int{1, 45}}, {"kern.clockrate", []_C_int{1, 12}}, + {"kern.consbuf", []_C_int{1, 83}}, + {"kern.consbufsize", []_C_int{1, 82}}, {"kern.consdev", []_C_int{1, 75}}, {"kern.cp_time", []_C_int{1, 40}}, {"kern.cp_time2", []_C_int{1, 71}}, - {"kern.dnsjackport", []_C_int{1, 13}}, + {"kern.cpustats", []_C_int{1, 85}}, {"kern.domainname", []_C_int{1, 22}}, {"kern.file", []_C_int{1, 73}}, {"kern.forkstat", []_C_int{1, 42}}, @@ -81,13 +87,13 @@ var sysctlMib = []mibentry{ {"kern.ngroups", []_C_int{1, 18}}, {"kern.nosuidcoredump", []_C_int{1, 32}}, {"kern.nprocs", []_C_int{1, 47}}, - {"kern.nselcoll", []_C_int{1, 43}}, {"kern.nthreads", []_C_int{1, 26}}, {"kern.numvnodes", []_C_int{1, 58}}, {"kern.osrelease", []_C_int{1, 2}}, {"kern.osrevision", []_C_int{1, 3}}, {"kern.ostype", []_C_int{1, 1}}, {"kern.osversion", []_C_int{1, 27}}, + {"kern.pfstatus", []_C_int{1, 86}}, {"kern.pool_debug", []_C_int{1, 77}}, {"kern.posix1version", []_C_int{1, 17}}, 
{"kern.proc", []_C_int{1, 66}}, @@ -108,15 +114,19 @@ var sysctlMib = []mibentry{ {"kern.timecounter.hardware", []_C_int{1, 69, 3}}, {"kern.timecounter.tick", []_C_int{1, 69, 1}}, {"kern.timecounter.timestepwarnings", []_C_int{1, 69, 2}}, + {"kern.timeout_stats", []_C_int{1, 87}}, {"kern.tty.tk_cancc", []_C_int{1, 44, 4}}, {"kern.tty.tk_nin", []_C_int{1, 44, 1}}, {"kern.tty.tk_nout", []_C_int{1, 44, 2}}, {"kern.tty.tk_rawcc", []_C_int{1, 44, 3}}, {"kern.tty.ttyinfo", []_C_int{1, 44, 5}}, {"kern.ttycount", []_C_int{1, 57}}, + {"kern.utc_offset", []_C_int{1, 88}}, {"kern.version", []_C_int{1, 4}}, + {"kern.video", []_C_int{1, 89}}, {"kern.watchdog.auto", []_C_int{1, 64, 2}}, {"kern.watchdog.period", []_C_int{1, 64, 1}}, + {"kern.witnesswatch", []_C_int{1, 53}}, {"kern.wxabort", []_C_int{1, 74}}, {"net.bpf.bufsize", []_C_int{4, 31, 1}}, {"net.bpf.maxbufsize", []_C_int{4, 31, 2}}, @@ -176,7 +186,6 @@ var sysctlMib = []mibentry{ {"net.inet.ipcomp.stats", []_C_int{4, 2, 108, 2}}, {"net.inet.ipip.allow", []_C_int{4, 2, 4, 1}}, {"net.inet.ipip.stats", []_C_int{4, 2, 4, 2}}, - {"net.inet.mobileip.allow", []_C_int{4, 2, 55, 1}}, {"net.inet.pfsync.stats", []_C_int{4, 2, 240, 1}}, {"net.inet.tcp.ackonpush", []_C_int{4, 2, 6, 13}}, {"net.inet.tcp.always_keepalive", []_C_int{4, 2, 6, 22}}, @@ -252,12 +261,12 @@ var sysctlMib = []mibentry{ {"net.mpls.ifq.maxlen", []_C_int{4, 33, 3, 2}}, {"net.mpls.mapttl_ip", []_C_int{4, 33, 5}}, {"net.mpls.mapttl_ip6", []_C_int{4, 33, 6}}, - {"net.mpls.maxloop_inkernel", []_C_int{4, 33, 4}}, {"net.mpls.ttl", []_C_int{4, 33, 2}}, {"net.pflow.stats", []_C_int{4, 34, 1}}, {"net.pipex.enable", []_C_int{4, 35, 1}}, {"vm.anonmin", []_C_int{2, 7}}, {"vm.loadavg", []_C_int{2, 2}}, + {"vm.malloc_conf", []_C_int{2, 12}}, {"vm.maxslp", []_C_int{2, 10}}, {"vm.nkmempages", []_C_int{2, 6}}, {"vm.psstrings", []_C_int{2, 3}}, diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsysctl_openbsd_arm.go 
b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsysctl_openbsd_arm.go index 8ea52a4a1810..82dc51bd8b57 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsysctl_openbsd_arm.go +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsysctl_openbsd_arm.go @@ -17,6 +17,7 @@ var sysctlMib = []mibentry{ {"ddb.max_line", []_C_int{9, 3}}, {"ddb.max_width", []_C_int{9, 2}}, {"ddb.panic", []_C_int{9, 5}}, + {"ddb.profile", []_C_int{9, 9}}, {"ddb.radix", []_C_int{9, 1}}, {"ddb.tab_stop_width", []_C_int{9, 4}}, {"ddb.trigger", []_C_int{9, 8}}, @@ -33,29 +34,37 @@ var sysctlMib = []mibentry{ {"hw.ncpufound", []_C_int{6, 21}}, {"hw.ncpuonline", []_C_int{6, 25}}, {"hw.pagesize", []_C_int{6, 7}}, + {"hw.perfpolicy", []_C_int{6, 23}}, {"hw.physmem", []_C_int{6, 19}}, + {"hw.power", []_C_int{6, 26}}, {"hw.product", []_C_int{6, 15}}, {"hw.serialno", []_C_int{6, 17}}, {"hw.setperf", []_C_int{6, 13}}, + {"hw.smt", []_C_int{6, 24}}, {"hw.usermem", []_C_int{6, 20}}, {"hw.uuid", []_C_int{6, 18}}, {"hw.vendor", []_C_int{6, 14}}, {"hw.version", []_C_int{6, 16}}, - {"kern.arandom", []_C_int{1, 37}}, + {"kern.allowdt", []_C_int{1, 65}}, + {"kern.allowkmem", []_C_int{1, 52}}, {"kern.argmax", []_C_int{1, 8}}, + {"kern.audio", []_C_int{1, 84}}, {"kern.boottime", []_C_int{1, 21}}, {"kern.bufcachepercent", []_C_int{1, 72}}, {"kern.ccpu", []_C_int{1, 45}}, {"kern.clockrate", []_C_int{1, 12}}, + {"kern.consbuf", []_C_int{1, 83}}, + {"kern.consbufsize", []_C_int{1, 82}}, {"kern.consdev", []_C_int{1, 75}}, {"kern.cp_time", []_C_int{1, 40}}, {"kern.cp_time2", []_C_int{1, 71}}, - {"kern.cryptodevallowsoft", []_C_int{1, 53}}, + {"kern.cpustats", []_C_int{1, 85}}, {"kern.domainname", []_C_int{1, 22}}, {"kern.file", []_C_int{1, 73}}, {"kern.forkstat", []_C_int{1, 42}}, {"kern.fscale", []_C_int{1, 46}}, {"kern.fsync", []_C_int{1, 33}}, + {"kern.global_ptrace", []_C_int{1, 81}}, {"kern.hostid", []_C_int{1, 11}}, {"kern.hostname", []_C_int{1, 10}}, {"kern.intrcnt.nintrcnt", 
[]_C_int{1, 63, 1}}, @@ -78,17 +87,16 @@ var sysctlMib = []mibentry{ {"kern.ngroups", []_C_int{1, 18}}, {"kern.nosuidcoredump", []_C_int{1, 32}}, {"kern.nprocs", []_C_int{1, 47}}, - {"kern.nselcoll", []_C_int{1, 43}}, {"kern.nthreads", []_C_int{1, 26}}, {"kern.numvnodes", []_C_int{1, 58}}, {"kern.osrelease", []_C_int{1, 2}}, {"kern.osrevision", []_C_int{1, 3}}, {"kern.ostype", []_C_int{1, 1}}, {"kern.osversion", []_C_int{1, 27}}, + {"kern.pfstatus", []_C_int{1, 86}}, {"kern.pool_debug", []_C_int{1, 77}}, {"kern.posix1version", []_C_int{1, 17}}, {"kern.proc", []_C_int{1, 66}}, - {"kern.random", []_C_int{1, 31}}, {"kern.rawpartition", []_C_int{1, 24}}, {"kern.saved_ids", []_C_int{1, 20}}, {"kern.securelevel", []_C_int{1, 9}}, @@ -106,21 +114,20 @@ var sysctlMib = []mibentry{ {"kern.timecounter.hardware", []_C_int{1, 69, 3}}, {"kern.timecounter.tick", []_C_int{1, 69, 1}}, {"kern.timecounter.timestepwarnings", []_C_int{1, 69, 2}}, - {"kern.tty.maxptys", []_C_int{1, 44, 6}}, - {"kern.tty.nptys", []_C_int{1, 44, 7}}, + {"kern.timeout_stats", []_C_int{1, 87}}, {"kern.tty.tk_cancc", []_C_int{1, 44, 4}}, {"kern.tty.tk_nin", []_C_int{1, 44, 1}}, {"kern.tty.tk_nout", []_C_int{1, 44, 2}}, {"kern.tty.tk_rawcc", []_C_int{1, 44, 3}}, {"kern.tty.ttyinfo", []_C_int{1, 44, 5}}, {"kern.ttycount", []_C_int{1, 57}}, - {"kern.userasymcrypto", []_C_int{1, 60}}, - {"kern.usercrypto", []_C_int{1, 52}}, - {"kern.usermount", []_C_int{1, 30}}, + {"kern.utc_offset", []_C_int{1, 88}}, {"kern.version", []_C_int{1, 4}}, - {"kern.vnode", []_C_int{1, 13}}, + {"kern.video", []_C_int{1, 89}}, {"kern.watchdog.auto", []_C_int{1, 64, 2}}, {"kern.watchdog.period", []_C_int{1, 64, 1}}, + {"kern.witnesswatch", []_C_int{1, 53}}, + {"kern.wxabort", []_C_int{1, 74}}, {"net.bpf.bufsize", []_C_int{4, 31, 1}}, {"net.bpf.maxbufsize", []_C_int{4, 31, 2}}, {"net.inet.ah.enable", []_C_int{4, 2, 51, 1}}, @@ -148,7 +155,9 @@ var sysctlMib = []mibentry{ {"net.inet.icmp.stats", []_C_int{4, 2, 1, 7}}, 
{"net.inet.icmp.tstamprepl", []_C_int{4, 2, 1, 6}}, {"net.inet.igmp.stats", []_C_int{4, 2, 2, 1}}, + {"net.inet.ip.arpdown", []_C_int{4, 2, 0, 40}}, {"net.inet.ip.arpqueued", []_C_int{4, 2, 0, 36}}, + {"net.inet.ip.arptimeout", []_C_int{4, 2, 0, 39}}, {"net.inet.ip.encdebug", []_C_int{4, 2, 0, 12}}, {"net.inet.ip.forwarding", []_C_int{4, 2, 0, 1}}, {"net.inet.ip.ifq.congestion", []_C_int{4, 2, 0, 30, 4}}, @@ -157,8 +166,10 @@ var sysctlMib = []mibentry{ {"net.inet.ip.ifq.maxlen", []_C_int{4, 2, 0, 30, 2}}, {"net.inet.ip.maxqueue", []_C_int{4, 2, 0, 11}}, {"net.inet.ip.mforwarding", []_C_int{4, 2, 0, 31}}, + {"net.inet.ip.mrtmfc", []_C_int{4, 2, 0, 37}}, {"net.inet.ip.mrtproto", []_C_int{4, 2, 0, 34}}, {"net.inet.ip.mrtstats", []_C_int{4, 2, 0, 35}}, + {"net.inet.ip.mrtvif", []_C_int{4, 2, 0, 38}}, {"net.inet.ip.mtu", []_C_int{4, 2, 0, 4}}, {"net.inet.ip.mtudisc", []_C_int{4, 2, 0, 27}}, {"net.inet.ip.mtudisctimeout", []_C_int{4, 2, 0, 28}}, @@ -175,9 +186,7 @@ var sysctlMib = []mibentry{ {"net.inet.ipcomp.stats", []_C_int{4, 2, 108, 2}}, {"net.inet.ipip.allow", []_C_int{4, 2, 4, 1}}, {"net.inet.ipip.stats", []_C_int{4, 2, 4, 2}}, - {"net.inet.mobileip.allow", []_C_int{4, 2, 55, 1}}, {"net.inet.pfsync.stats", []_C_int{4, 2, 240, 1}}, - {"net.inet.pim.stats", []_C_int{4, 2, 103, 1}}, {"net.inet.tcp.ackonpush", []_C_int{4, 2, 6, 13}}, {"net.inet.tcp.always_keepalive", []_C_int{4, 2, 6, 22}}, {"net.inet.tcp.baddynamic", []_C_int{4, 2, 6, 6}}, @@ -191,6 +200,7 @@ var sysctlMib = []mibentry{ {"net.inet.tcp.reasslimit", []_C_int{4, 2, 6, 18}}, {"net.inet.tcp.rfc1323", []_C_int{4, 2, 6, 1}}, {"net.inet.tcp.rfc3390", []_C_int{4, 2, 6, 17}}, + {"net.inet.tcp.rootonly", []_C_int{4, 2, 6, 24}}, {"net.inet.tcp.rstppslimit", []_C_int{4, 2, 6, 12}}, {"net.inet.tcp.sack", []_C_int{4, 2, 6, 10}}, {"net.inet.tcp.sackholelimit", []_C_int{4, 2, 6, 20}}, @@ -198,9 +208,12 @@ var sysctlMib = []mibentry{ {"net.inet.tcp.stats", []_C_int{4, 2, 6, 21}}, {"net.inet.tcp.synbucketlimit", 
[]_C_int{4, 2, 6, 16}}, {"net.inet.tcp.syncachelimit", []_C_int{4, 2, 6, 15}}, + {"net.inet.tcp.synhashsize", []_C_int{4, 2, 6, 25}}, + {"net.inet.tcp.synuselimit", []_C_int{4, 2, 6, 23}}, {"net.inet.udp.baddynamic", []_C_int{4, 2, 17, 2}}, {"net.inet.udp.checksum", []_C_int{4, 2, 17, 1}}, {"net.inet.udp.recvspace", []_C_int{4, 2, 17, 3}}, + {"net.inet.udp.rootonly", []_C_int{4, 2, 17, 6}}, {"net.inet.udp.sendspace", []_C_int{4, 2, 17, 4}}, {"net.inet.udp.stats", []_C_int{4, 2, 17, 5}}, {"net.inet6.divert.recvspace", []_C_int{4, 24, 86, 1}}, @@ -213,13 +226,8 @@ var sysctlMib = []mibentry{ {"net.inet6.icmp6.nd6_delay", []_C_int{4, 24, 30, 8}}, {"net.inet6.icmp6.nd6_maxnudhint", []_C_int{4, 24, 30, 15}}, {"net.inet6.icmp6.nd6_mmaxtries", []_C_int{4, 24, 30, 10}}, - {"net.inet6.icmp6.nd6_prune", []_C_int{4, 24, 30, 6}}, {"net.inet6.icmp6.nd6_umaxtries", []_C_int{4, 24, 30, 9}}, - {"net.inet6.icmp6.nd6_useloopback", []_C_int{4, 24, 30, 11}}, - {"net.inet6.icmp6.nodeinfo", []_C_int{4, 24, 30, 13}}, - {"net.inet6.icmp6.rediraccept", []_C_int{4, 24, 30, 2}}, {"net.inet6.icmp6.redirtimeout", []_C_int{4, 24, 30, 3}}, - {"net.inet6.ip6.accept_rtadv", []_C_int{4, 24, 17, 12}}, {"net.inet6.ip6.auto_flowlabel", []_C_int{4, 24, 17, 17}}, {"net.inet6.ip6.dad_count", []_C_int{4, 24, 17, 16}}, {"net.inet6.ip6.dad_pending", []_C_int{4, 24, 17, 49}}, @@ -232,20 +240,19 @@ var sysctlMib = []mibentry{ {"net.inet6.ip6.maxdynroutes", []_C_int{4, 24, 17, 48}}, {"net.inet6.ip6.maxfragpackets", []_C_int{4, 24, 17, 9}}, {"net.inet6.ip6.maxfrags", []_C_int{4, 24, 17, 41}}, - {"net.inet6.ip6.maxifdefrouters", []_C_int{4, 24, 17, 47}}, - {"net.inet6.ip6.maxifprefixes", []_C_int{4, 24, 17, 46}}, {"net.inet6.ip6.mforwarding", []_C_int{4, 24, 17, 42}}, + {"net.inet6.ip6.mrtmfc", []_C_int{4, 24, 17, 53}}, + {"net.inet6.ip6.mrtmif", []_C_int{4, 24, 17, 52}}, {"net.inet6.ip6.mrtproto", []_C_int{4, 24, 17, 8}}, {"net.inet6.ip6.mtudisctimeout", []_C_int{4, 24, 17, 50}}, 
{"net.inet6.ip6.multicast_mtudisc", []_C_int{4, 24, 17, 44}}, {"net.inet6.ip6.multipath", []_C_int{4, 24, 17, 43}}, {"net.inet6.ip6.neighborgcthresh", []_C_int{4, 24, 17, 45}}, {"net.inet6.ip6.redirect", []_C_int{4, 24, 17, 2}}, - {"net.inet6.ip6.rr_prune", []_C_int{4, 24, 17, 22}}, + {"net.inet6.ip6.soiikey", []_C_int{4, 24, 17, 54}}, {"net.inet6.ip6.sourcecheck", []_C_int{4, 24, 17, 10}}, {"net.inet6.ip6.sourcecheck_logint", []_C_int{4, 24, 17, 11}}, {"net.inet6.ip6.use_deprecated", []_C_int{4, 24, 17, 21}}, - {"net.inet6.ip6.v6only", []_C_int{4, 24, 17, 24}}, {"net.key.sadb_dump", []_C_int{4, 30, 1}}, {"net.key.spd_dump", []_C_int{4, 30, 2}}, {"net.mpls.ifq.congestion", []_C_int{4, 33, 3, 4}}, @@ -254,12 +261,12 @@ var sysctlMib = []mibentry{ {"net.mpls.ifq.maxlen", []_C_int{4, 33, 3, 2}}, {"net.mpls.mapttl_ip", []_C_int{4, 33, 5}}, {"net.mpls.mapttl_ip6", []_C_int{4, 33, 6}}, - {"net.mpls.maxloop_inkernel", []_C_int{4, 33, 4}}, {"net.mpls.ttl", []_C_int{4, 33, 2}}, {"net.pflow.stats", []_C_int{4, 34, 1}}, {"net.pipex.enable", []_C_int{4, 35, 1}}, {"vm.anonmin", []_C_int{2, 7}}, {"vm.loadavg", []_C_int{2, 2}}, + {"vm.malloc_conf", []_C_int{2, 12}}, {"vm.maxslp", []_C_int{2, 10}}, {"vm.nkmempages", []_C_int{2, 6}}, {"vm.psstrings", []_C_int{2, 3}}, diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsysctl_openbsd_arm64.go b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsysctl_openbsd_arm64.go index 154b57ae3e2a..cbdda1a4ae24 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsysctl_openbsd_arm64.go +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsysctl_openbsd_arm64.go @@ -36,6 +36,7 @@ var sysctlMib = []mibentry{ {"hw.pagesize", []_C_int{6, 7}}, {"hw.perfpolicy", []_C_int{6, 23}}, {"hw.physmem", []_C_int{6, 19}}, + {"hw.power", []_C_int{6, 26}}, {"hw.product", []_C_int{6, 15}}, {"hw.serialno", []_C_int{6, 17}}, {"hw.setperf", []_C_int{6, 13}}, @@ -44,6 +45,7 @@ var sysctlMib = []mibentry{ {"hw.uuid", 
[]_C_int{6, 18}}, {"hw.vendor", []_C_int{6, 14}}, {"hw.version", []_C_int{6, 16}}, + {"kern.allowdt", []_C_int{1, 65}}, {"kern.allowkmem", []_C_int{1, 52}}, {"kern.argmax", []_C_int{1, 8}}, {"kern.audio", []_C_int{1, 84}}, @@ -51,6 +53,8 @@ var sysctlMib = []mibentry{ {"kern.bufcachepercent", []_C_int{1, 72}}, {"kern.ccpu", []_C_int{1, 45}}, {"kern.clockrate", []_C_int{1, 12}}, + {"kern.consbuf", []_C_int{1, 83}}, + {"kern.consbufsize", []_C_int{1, 82}}, {"kern.consdev", []_C_int{1, 75}}, {"kern.cp_time", []_C_int{1, 40}}, {"kern.cp_time2", []_C_int{1, 71}}, @@ -83,13 +87,13 @@ var sysctlMib = []mibentry{ {"kern.ngroups", []_C_int{1, 18}}, {"kern.nosuidcoredump", []_C_int{1, 32}}, {"kern.nprocs", []_C_int{1, 47}}, - {"kern.nselcoll", []_C_int{1, 43}}, {"kern.nthreads", []_C_int{1, 26}}, {"kern.numvnodes", []_C_int{1, 58}}, {"kern.osrelease", []_C_int{1, 2}}, {"kern.osrevision", []_C_int{1, 3}}, {"kern.ostype", []_C_int{1, 1}}, {"kern.osversion", []_C_int{1, 27}}, + {"kern.pfstatus", []_C_int{1, 86}}, {"kern.pool_debug", []_C_int{1, 77}}, {"kern.posix1version", []_C_int{1, 17}}, {"kern.proc", []_C_int{1, 66}}, @@ -110,13 +114,16 @@ var sysctlMib = []mibentry{ {"kern.timecounter.hardware", []_C_int{1, 69, 3}}, {"kern.timecounter.tick", []_C_int{1, 69, 1}}, {"kern.timecounter.timestepwarnings", []_C_int{1, 69, 2}}, + {"kern.timeout_stats", []_C_int{1, 87}}, {"kern.tty.tk_cancc", []_C_int{1, 44, 4}}, {"kern.tty.tk_nin", []_C_int{1, 44, 1}}, {"kern.tty.tk_nout", []_C_int{1, 44, 2}}, {"kern.tty.tk_rawcc", []_C_int{1, 44, 3}}, {"kern.tty.ttyinfo", []_C_int{1, 44, 5}}, {"kern.ttycount", []_C_int{1, 57}}, + {"kern.utc_offset", []_C_int{1, 88}}, {"kern.version", []_C_int{1, 4}}, + {"kern.video", []_C_int{1, 89}}, {"kern.watchdog.auto", []_C_int{1, 64, 2}}, {"kern.watchdog.period", []_C_int{1, 64, 1}}, {"kern.witnesswatch", []_C_int{1, 53}}, @@ -179,7 +186,6 @@ var sysctlMib = []mibentry{ {"net.inet.ipcomp.stats", []_C_int{4, 2, 108, 2}}, {"net.inet.ipip.allow", []_C_int{4, 
2, 4, 1}}, {"net.inet.ipip.stats", []_C_int{4, 2, 4, 2}}, - {"net.inet.mobileip.allow", []_C_int{4, 2, 55, 1}}, {"net.inet.pfsync.stats", []_C_int{4, 2, 240, 1}}, {"net.inet.tcp.ackonpush", []_C_int{4, 2, 6, 13}}, {"net.inet.tcp.always_keepalive", []_C_int{4, 2, 6, 22}}, @@ -255,7 +261,6 @@ var sysctlMib = []mibentry{ {"net.mpls.ifq.maxlen", []_C_int{4, 33, 3, 2}}, {"net.mpls.mapttl_ip", []_C_int{4, 33, 5}}, {"net.mpls.mapttl_ip6", []_C_int{4, 33, 6}}, - {"net.mpls.maxloop_inkernel", []_C_int{4, 33, 4}}, {"net.mpls.ttl", []_C_int{4, 33, 2}}, {"net.pflow.stats", []_C_int{4, 34, 1}}, {"net.pipex.enable", []_C_int{4, 35, 1}}, diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsysctl_openbsd_mips64.go b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsysctl_openbsd_mips64.go index d96bb2ba4db6..f55eae1a8211 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsysctl_openbsd_mips64.go +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsysctl_openbsd_mips64.go @@ -36,6 +36,7 @@ var sysctlMib = []mibentry{ {"hw.pagesize", []_C_int{6, 7}}, {"hw.perfpolicy", []_C_int{6, 23}}, {"hw.physmem", []_C_int{6, 19}}, + {"hw.power", []_C_int{6, 26}}, {"hw.product", []_C_int{6, 15}}, {"hw.serialno", []_C_int{6, 17}}, {"hw.setperf", []_C_int{6, 13}}, @@ -86,7 +87,6 @@ var sysctlMib = []mibentry{ {"kern.ngroups", []_C_int{1, 18}}, {"kern.nosuidcoredump", []_C_int{1, 32}}, {"kern.nprocs", []_C_int{1, 47}}, - {"kern.nselcoll", []_C_int{1, 43}}, {"kern.nthreads", []_C_int{1, 26}}, {"kern.numvnodes", []_C_int{1, 58}}, {"kern.osrelease", []_C_int{1, 2}}, @@ -123,6 +123,7 @@ var sysctlMib = []mibentry{ {"kern.ttycount", []_C_int{1, 57}}, {"kern.utc_offset", []_C_int{1, 88}}, {"kern.version", []_C_int{1, 4}}, + {"kern.video", []_C_int{1, 89}}, {"kern.watchdog.auto", []_C_int{1, 64, 2}}, {"kern.watchdog.period", []_C_int{1, 64, 1}}, {"kern.witnesswatch", []_C_int{1, 53}}, diff --git 
a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsysnum_openbsd_mips64.go b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsysnum_openbsd_mips64.go index a37f77375636..01c43a01fda7 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsysnum_openbsd_mips64.go +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/zsysnum_openbsd_mips64.go @@ -6,6 +6,7 @@ package unix +// Deprecated: Use libc wrappers instead of direct syscalls. const ( SYS_EXIT = 1 // { void sys_exit(int rval); } SYS_FORK = 2 // { int sys_fork(void); } diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_freebsd_386.go b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_freebsd_386.go index d9c78cdcbc45..29dc483378ae 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_freebsd_386.go +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_freebsd_386.go @@ -362,7 +362,7 @@ type FpExtendedPrecision struct{} type PtraceIoDesc struct { Op int32 Offs uintptr - Addr uintptr + Addr *byte Len uint32 } diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_freebsd_amd64.go b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_freebsd_amd64.go index 26991b165596..0a89b28906a6 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_freebsd_amd64.go +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_freebsd_amd64.go @@ -367,7 +367,7 @@ type FpExtendedPrecision struct{} type PtraceIoDesc struct { Op int32 Offs uintptr - Addr uintptr + Addr *byte Len uint64 } diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_freebsd_arm.go b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_freebsd_arm.go index f8324e7e7f49..c8666bb15288 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_freebsd_arm.go +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_freebsd_arm.go @@ -350,7 +350,7 @@ type FpExtendedPrecision 
struct { type PtraceIoDesc struct { Op int32 Offs uintptr - Addr uintptr + Addr *byte Len uint32 } diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_freebsd_arm64.go b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_freebsd_arm64.go index 4220411f341a..88fb48a887b1 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_freebsd_arm64.go +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_freebsd_arm64.go @@ -347,7 +347,7 @@ type FpExtendedPrecision struct{} type PtraceIoDesc struct { Op int32 Offs uintptr - Addr uintptr + Addr *byte Len uint64 } diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_freebsd_riscv64.go b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_freebsd_riscv64.go index 0660fd45c7c6..698dc975e92b 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_freebsd_riscv64.go +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_freebsd_riscv64.go @@ -348,7 +348,7 @@ type FpExtendedPrecision struct{} type PtraceIoDesc struct { Op int32 Offs uintptr - Addr uintptr + Addr *byte Len uint64 } diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_linux.go b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_linux.go index ff6881167d97..ca84727cfe80 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_linux.go +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_linux.go @@ -29,6 +29,41 @@ type Itimerval struct { Value Timeval } +const ( + ADJ_OFFSET = 0x1 + ADJ_FREQUENCY = 0x2 + ADJ_MAXERROR = 0x4 + ADJ_ESTERROR = 0x8 + ADJ_STATUS = 0x10 + ADJ_TIMECONST = 0x20 + ADJ_TAI = 0x80 + ADJ_SETOFFSET = 0x100 + ADJ_MICRO = 0x1000 + ADJ_NANO = 0x2000 + ADJ_TICK = 0x4000 + ADJ_OFFSET_SINGLESHOT = 0x8001 + ADJ_OFFSET_SS_READ = 0xa001 +) + +const ( + STA_PLL = 0x1 + STA_PPSFREQ = 0x2 + STA_PPSTIME = 0x4 + STA_FLL = 0x8 + STA_INS = 0x10 + STA_DEL = 0x20 + STA_UNSYNC = 0x40 + 
STA_FREQHOLD = 0x80 + STA_PPSSIGNAL = 0x100 + STA_PPSJITTER = 0x200 + STA_PPSWANDER = 0x400 + STA_PPSERROR = 0x800 + STA_CLOCKERR = 0x1000 + STA_NANO = 0x2000 + STA_MODE = 0x4000 + STA_CLK = 0x8000 +) + const ( TIME_OK = 0x0 TIME_INS = 0x1 @@ -53,29 +88,30 @@ type StatxTimestamp struct { } type Statx_t struct { - Mask uint32 - Blksize uint32 - Attributes uint64 - Nlink uint32 - Uid uint32 - Gid uint32 - Mode uint16 - _ [1]uint16 - Ino uint64 - Size uint64 - Blocks uint64 - Attributes_mask uint64 - Atime StatxTimestamp - Btime StatxTimestamp - Ctime StatxTimestamp - Mtime StatxTimestamp - Rdev_major uint32 - Rdev_minor uint32 - Dev_major uint32 - Dev_minor uint32 - Mnt_id uint64 - _ uint64 - _ [12]uint64 + Mask uint32 + Blksize uint32 + Attributes uint64 + Nlink uint32 + Uid uint32 + Gid uint32 + Mode uint16 + _ [1]uint16 + Ino uint64 + Size uint64 + Blocks uint64 + Attributes_mask uint64 + Atime StatxTimestamp + Btime StatxTimestamp + Ctime StatxTimestamp + Mtime StatxTimestamp + Rdev_major uint32 + Rdev_minor uint32 + Dev_major uint32 + Dev_minor uint32 + Mnt_id uint64 + Dio_mem_align uint32 + Dio_offset_align uint32 + _ [12]uint64 } type Fsid struct { @@ -420,36 +456,60 @@ type Ucred struct { } type TCPInfo struct { - State uint8 - Ca_state uint8 - Retransmits uint8 - Probes uint8 - Backoff uint8 - Options uint8 - Rto uint32 - Ato uint32 - Snd_mss uint32 - Rcv_mss uint32 - Unacked uint32 - Sacked uint32 - Lost uint32 - Retrans uint32 - Fackets uint32 - Last_data_sent uint32 - Last_ack_sent uint32 - Last_data_recv uint32 - Last_ack_recv uint32 - Pmtu uint32 - Rcv_ssthresh uint32 - Rtt uint32 - Rttvar uint32 - Snd_ssthresh uint32 - Snd_cwnd uint32 - Advmss uint32 - Reordering uint32 - Rcv_rtt uint32 - Rcv_space uint32 - Total_retrans uint32 + State uint8 + Ca_state uint8 + Retransmits uint8 + Probes uint8 + Backoff uint8 + Options uint8 + Rto uint32 + Ato uint32 + Snd_mss uint32 + Rcv_mss uint32 + Unacked uint32 + Sacked uint32 + Lost uint32 + Retrans uint32 + 
Fackets uint32 + Last_data_sent uint32 + Last_ack_sent uint32 + Last_data_recv uint32 + Last_ack_recv uint32 + Pmtu uint32 + Rcv_ssthresh uint32 + Rtt uint32 + Rttvar uint32 + Snd_ssthresh uint32 + Snd_cwnd uint32 + Advmss uint32 + Reordering uint32 + Rcv_rtt uint32 + Rcv_space uint32 + Total_retrans uint32 + Pacing_rate uint64 + Max_pacing_rate uint64 + Bytes_acked uint64 + Bytes_received uint64 + Segs_out uint32 + Segs_in uint32 + Notsent_bytes uint32 + Min_rtt uint32 + Data_segs_in uint32 + Data_segs_out uint32 + Delivery_rate uint64 + Busy_time uint64 + Rwnd_limited uint64 + Sndbuf_limited uint64 + Delivered uint32 + Delivered_ce uint32 + Bytes_sent uint64 + Bytes_retrans uint64 + Dsack_dups uint32 + Reord_seen uint32 + Rcv_ooopack uint32 + Snd_wnd uint32 + Rcv_wnd uint32 + Rehash uint32 } type CanFilter struct { @@ -492,7 +552,7 @@ const ( SizeofIPv6MTUInfo = 0x20 SizeofICMPv6Filter = 0x20 SizeofUcred = 0xc - SizeofTCPInfo = 0x68 + SizeofTCPInfo = 0xf0 SizeofCanFilter = 0x8 SizeofTCPRepairOpt = 0x8 ) @@ -1007,6 +1067,7 @@ const ( PerfBitCommExec = CBitFieldMaskBit24 PerfBitUseClockID = CBitFieldMaskBit25 PerfBitContextSwitch = CBitFieldMaskBit26 + PerfBitWriteBackward = CBitFieldMaskBit27 ) const ( @@ -1099,7 +1160,8 @@ const ( PERF_SAMPLE_BRANCH_NO_CYCLES_SHIFT = 0xf PERF_SAMPLE_BRANCH_TYPE_SAVE_SHIFT = 0x10 PERF_SAMPLE_BRANCH_HW_INDEX_SHIFT = 0x11 - PERF_SAMPLE_BRANCH_MAX_SHIFT = 0x12 + PERF_SAMPLE_BRANCH_PRIV_SAVE_SHIFT = 0x12 + PERF_SAMPLE_BRANCH_MAX_SHIFT = 0x13 PERF_SAMPLE_BRANCH_USER = 0x1 PERF_SAMPLE_BRANCH_KERNEL = 0x2 PERF_SAMPLE_BRANCH_HV = 0x4 @@ -1118,7 +1180,8 @@ const ( PERF_SAMPLE_BRANCH_NO_CYCLES = 0x8000 PERF_SAMPLE_BRANCH_TYPE_SAVE = 0x10000 PERF_SAMPLE_BRANCH_HW_INDEX = 0x20000 - PERF_SAMPLE_BRANCH_MAX = 0x40000 + PERF_SAMPLE_BRANCH_PRIV_SAVE = 0x40000 + PERF_SAMPLE_BRANCH_MAX = 0x80000 PERF_BR_UNKNOWN = 0x0 PERF_BR_COND = 0x1 PERF_BR_UNCOND = 0x2 @@ -1132,7 +1195,10 @@ const ( PERF_BR_COND_RET = 0xa PERF_BR_ERET = 0xb PERF_BR_IRQ = 0xc - 
PERF_BR_MAX = 0xd + PERF_BR_SERROR = 0xd + PERF_BR_NO_TX = 0xe + PERF_BR_EXTEND_ABI = 0xf + PERF_BR_MAX = 0x10 PERF_SAMPLE_REGS_ABI_NONE = 0x0 PERF_SAMPLE_REGS_ABI_32 = 0x1 PERF_SAMPLE_REGS_ABI_64 = 0x2 @@ -1151,7 +1217,8 @@ const ( PERF_FORMAT_TOTAL_TIME_RUNNING = 0x2 PERF_FORMAT_ID = 0x4 PERF_FORMAT_GROUP = 0x8 - PERF_FORMAT_MAX = 0x10 + PERF_FORMAT_LOST = 0x10 + PERF_FORMAT_MAX = 0x20 PERF_IOC_FLAG_GROUP = 0x1 PERF_RECORD_MMAP = 0x1 PERF_RECORD_LOST = 0x2 @@ -1197,7 +1264,7 @@ type TCPMD5Sig struct { Flags uint8 Prefixlen uint8 Keylen uint16 - _ uint32 + Ifindex int32 Key [80]uint8 } @@ -1897,7 +1964,11 @@ const ( NFT_MSG_GETOBJ = 0x13 NFT_MSG_DELOBJ = 0x14 NFT_MSG_GETOBJ_RESET = 0x15 - NFT_MSG_MAX = 0x19 + NFT_MSG_NEWFLOWTABLE = 0x16 + NFT_MSG_GETFLOWTABLE = 0x17 + NFT_MSG_DELFLOWTABLE = 0x18 + NFT_MSG_GETRULE_RESET = 0x19 + NFT_MSG_MAX = 0x1a NFTA_LIST_UNSPEC = 0x0 NFTA_LIST_ELEM = 0x1 NFTA_HOOK_UNSPEC = 0x0 @@ -2401,9 +2472,11 @@ const ( SOF_TIMESTAMPING_OPT_STATS = 0x1000 SOF_TIMESTAMPING_OPT_PKTINFO = 0x2000 SOF_TIMESTAMPING_OPT_TX_SWHW = 0x4000 + SOF_TIMESTAMPING_BIND_PHC = 0x8000 + SOF_TIMESTAMPING_OPT_ID_TCP = 0x10000 - SOF_TIMESTAMPING_LAST = 0x8000 - SOF_TIMESTAMPING_MASK = 0xffff + SOF_TIMESTAMPING_LAST = 0x10000 + SOF_TIMESTAMPING_MASK = 0x1ffff SCM_TSTAMP_SND = 0x0 SCM_TSTAMP_SCHED = 0x1 @@ -2979,7 +3052,16 @@ const ( DEVLINK_CMD_TRAP_POLICER_NEW = 0x47 DEVLINK_CMD_TRAP_POLICER_DEL = 0x48 DEVLINK_CMD_HEALTH_REPORTER_TEST = 0x49 - DEVLINK_CMD_MAX = 0x51 + DEVLINK_CMD_RATE_GET = 0x4a + DEVLINK_CMD_RATE_SET = 0x4b + DEVLINK_CMD_RATE_NEW = 0x4c + DEVLINK_CMD_RATE_DEL = 0x4d + DEVLINK_CMD_LINECARD_GET = 0x4e + DEVLINK_CMD_LINECARD_SET = 0x4f + DEVLINK_CMD_LINECARD_NEW = 0x50 + DEVLINK_CMD_LINECARD_DEL = 0x51 + DEVLINK_CMD_SELFTESTS_GET = 0x52 + DEVLINK_CMD_MAX = 0x53 DEVLINK_PORT_TYPE_NOTSET = 0x0 DEVLINK_PORT_TYPE_AUTO = 0x1 DEVLINK_PORT_TYPE_ETH = 0x2 @@ -3208,7 +3290,13 @@ const ( DEVLINK_ATTR_RATE_NODE_NAME = 0xa8 DEVLINK_ATTR_RATE_PARENT_NODE_NAME = 
0xa9 DEVLINK_ATTR_REGION_MAX_SNAPSHOTS = 0xaa - DEVLINK_ATTR_MAX = 0xae + DEVLINK_ATTR_LINECARD_INDEX = 0xab + DEVLINK_ATTR_LINECARD_STATE = 0xac + DEVLINK_ATTR_LINECARD_TYPE = 0xad + DEVLINK_ATTR_LINECARD_SUPPORTED_TYPES = 0xae + DEVLINK_ATTR_NESTED_DEVLINK = 0xaf + DEVLINK_ATTR_SELFTESTS = 0xb0 + DEVLINK_ATTR_MAX = 0xb3 DEVLINK_DPIPE_FIELD_MAPPING_TYPE_NONE = 0x0 DEVLINK_DPIPE_FIELD_MAPPING_TYPE_IFINDEX = 0x1 DEVLINK_DPIPE_MATCH_TYPE_FIELD_EXACT = 0x0 @@ -3224,7 +3312,8 @@ const ( DEVLINK_PORT_FUNCTION_ATTR_HW_ADDR = 0x1 DEVLINK_PORT_FN_ATTR_STATE = 0x2 DEVLINK_PORT_FN_ATTR_OPSTATE = 0x3 - DEVLINK_PORT_FUNCTION_ATTR_MAX = 0x3 + DEVLINK_PORT_FN_ATTR_CAPS = 0x4 + DEVLINK_PORT_FUNCTION_ATTR_MAX = 0x4 ) type FsverityDigest struct { @@ -3317,7 +3406,8 @@ const ( LWTUNNEL_ENCAP_SEG6_LOCAL = 0x7 LWTUNNEL_ENCAP_RPL = 0x8 LWTUNNEL_ENCAP_IOAM6 = 0x9 - LWTUNNEL_ENCAP_MAX = 0x9 + LWTUNNEL_ENCAP_XFRM = 0xa + LWTUNNEL_ENCAP_MAX = 0xa MPLS_IPTUNNEL_UNSPEC = 0x0 MPLS_IPTUNNEL_DST = 0x1 @@ -3512,7 +3602,10 @@ const ( ETHTOOL_MSG_PHC_VCLOCKS_GET = 0x21 ETHTOOL_MSG_MODULE_GET = 0x22 ETHTOOL_MSG_MODULE_SET = 0x23 - ETHTOOL_MSG_USER_MAX = 0x23 + ETHTOOL_MSG_PSE_GET = 0x24 + ETHTOOL_MSG_PSE_SET = 0x25 + ETHTOOL_MSG_RSS_GET = 0x26 + ETHTOOL_MSG_USER_MAX = 0x26 ETHTOOL_MSG_KERNEL_NONE = 0x0 ETHTOOL_MSG_STRSET_GET_REPLY = 0x1 ETHTOOL_MSG_LINKINFO_GET_REPLY = 0x2 @@ -3550,7 +3643,9 @@ const ( ETHTOOL_MSG_PHC_VCLOCKS_GET_REPLY = 0x22 ETHTOOL_MSG_MODULE_GET_REPLY = 0x23 ETHTOOL_MSG_MODULE_NTF = 0x24 - ETHTOOL_MSG_KERNEL_MAX = 0x24 + ETHTOOL_MSG_PSE_GET_REPLY = 0x25 + ETHTOOL_MSG_RSS_GET_REPLY = 0x26 + ETHTOOL_MSG_KERNEL_MAX = 0x26 ETHTOOL_A_HEADER_UNSPEC = 0x0 ETHTOOL_A_HEADER_DEV_INDEX = 0x1 ETHTOOL_A_HEADER_DEV_NAME = 0x2 @@ -3609,7 +3704,8 @@ const ( ETHTOOL_A_LINKMODES_MASTER_SLAVE_CFG = 0x7 ETHTOOL_A_LINKMODES_MASTER_SLAVE_STATE = 0x8 ETHTOOL_A_LINKMODES_LANES = 0x9 - ETHTOOL_A_LINKMODES_MAX = 0x9 + ETHTOOL_A_LINKMODES_RATE_MATCHING = 0xa + ETHTOOL_A_LINKMODES_MAX = 0xa 
ETHTOOL_A_LINKSTATE_UNSPEC = 0x0 ETHTOOL_A_LINKSTATE_HEADER = 0x1 ETHTOOL_A_LINKSTATE_LINK = 0x2 @@ -3617,7 +3713,8 @@ const ( ETHTOOL_A_LINKSTATE_SQI_MAX = 0x4 ETHTOOL_A_LINKSTATE_EXT_STATE = 0x5 ETHTOOL_A_LINKSTATE_EXT_SUBSTATE = 0x6 - ETHTOOL_A_LINKSTATE_MAX = 0x6 + ETHTOOL_A_LINKSTATE_EXT_DOWN_CNT = 0x7 + ETHTOOL_A_LINKSTATE_MAX = 0x7 ETHTOOL_A_DEBUG_UNSPEC = 0x0 ETHTOOL_A_DEBUG_HEADER = 0x1 ETHTOOL_A_DEBUG_MSGMASK = 0x2 @@ -4201,6 +4298,9 @@ const ( NL80211_ACL_POLICY_DENY_UNLESS_LISTED = 0x1 NL80211_AC_VI = 0x1 NL80211_AC_VO = 0x0 + NL80211_AP_SETTINGS_EXTERNAL_AUTH_SUPPORT = 0x1 + NL80211_AP_SETTINGS_SA_QUERY_OFFLOAD_SUPPORT = 0x2 + NL80211_AP_SME_SA_QUERY_OFFLOAD = 0x1 NL80211_ATTR_4ADDR = 0x53 NL80211_ATTR_ACK = 0x5c NL80211_ATTR_ACK_SIGNAL = 0x107 @@ -4209,6 +4309,7 @@ const ( NL80211_ATTR_AIRTIME_WEIGHT = 0x112 NL80211_ATTR_AKM_SUITES = 0x4c NL80211_ATTR_AP_ISOLATE = 0x60 + NL80211_ATTR_AP_SETTINGS_FLAGS = 0x135 NL80211_ATTR_AUTH_DATA = 0x9c NL80211_ATTR_AUTH_TYPE = 0x35 NL80211_ATTR_BANDS = 0xef @@ -4240,6 +4341,9 @@ const ( NL80211_ATTR_COALESCE_RULE_DELAY = 0x1 NL80211_ATTR_COALESCE_RULE_MAX = 0x3 NL80211_ATTR_COALESCE_RULE_PKT_PATTERN = 0x3 + NL80211_ATTR_COLOR_CHANGE_COLOR = 0x130 + NL80211_ATTR_COLOR_CHANGE_COUNT = 0x12f + NL80211_ATTR_COLOR_CHANGE_ELEMS = 0x131 NL80211_ATTR_CONN_FAILED_REASON = 0x9b NL80211_ATTR_CONTROL_PORT = 0x44 NL80211_ATTR_CONTROL_PORT_ETHERTYPE = 0x66 @@ -4266,6 +4370,7 @@ const ( NL80211_ATTR_DEVICE_AP_SME = 0x8d NL80211_ATTR_DFS_CAC_TIME = 0x7 NL80211_ATTR_DFS_REGION = 0x92 + NL80211_ATTR_DISABLE_EHT = 0x137 NL80211_ATTR_DISABLE_HE = 0x12d NL80211_ATTR_DISABLE_HT = 0x93 NL80211_ATTR_DISABLE_VHT = 0xaf @@ -4273,6 +4378,8 @@ const ( NL80211_ATTR_DONT_WAIT_FOR_ACK = 0x8e NL80211_ATTR_DTIM_PERIOD = 0xd NL80211_ATTR_DURATION = 0x57 + NL80211_ATTR_EHT_CAPABILITY = 0x136 + NL80211_ATTR_EML_CAPABILITY = 0x13d NL80211_ATTR_EXT_CAPA = 0xa9 NL80211_ATTR_EXT_CAPA_MASK = 0xaa NL80211_ATTR_EXTERNAL_AUTH_ACTION = 0x104 @@ -4337,10 
+4444,11 @@ const ( NL80211_ATTR_MAC_HINT = 0xc8 NL80211_ATTR_MAC_MASK = 0xd7 NL80211_ATTR_MAX_AP_ASSOC_STA = 0xca - NL80211_ATTR_MAX = 0x137 + NL80211_ATTR_MAX = 0x141 NL80211_ATTR_MAX_CRIT_PROT_DURATION = 0xb4 NL80211_ATTR_MAX_CSA_COUNTERS = 0xce NL80211_ATTR_MAX_MATCH_SETS = 0x85 + NL80211_ATTR_MAX_NUM_AKM_SUITES = 0x13c NL80211_ATTR_MAX_NUM_PMKIDS = 0x56 NL80211_ATTR_MAX_NUM_SCAN_SSIDS = 0x2b NL80211_ATTR_MAX_NUM_SCHED_SCAN_PLANS = 0xde @@ -4350,6 +4458,8 @@ const ( NL80211_ATTR_MAX_SCAN_PLAN_INTERVAL = 0xdf NL80211_ATTR_MAX_SCAN_PLAN_ITERATIONS = 0xe0 NL80211_ATTR_MAX_SCHED_SCAN_IE_LEN = 0x7c + NL80211_ATTR_MBSSID_CONFIG = 0x132 + NL80211_ATTR_MBSSID_ELEMS = 0x133 NL80211_ATTR_MCAST_RATE = 0x6b NL80211_ATTR_MDID = 0xb1 NL80211_ATTR_MEASUREMENT_DURATION = 0xeb @@ -4359,6 +4469,11 @@ const ( NL80211_ATTR_MESH_PEER_AID = 0xed NL80211_ATTR_MESH_SETUP = 0x70 NL80211_ATTR_MGMT_SUBTYPE = 0x29 + NL80211_ATTR_MLD_ADDR = 0x13a + NL80211_ATTR_MLD_CAPA_AND_OPS = 0x13e + NL80211_ATTR_MLO_LINK_ID = 0x139 + NL80211_ATTR_MLO_LINKS = 0x138 + NL80211_ATTR_MLO_SUPPORT = 0x13b NL80211_ATTR_MNTR_FLAGS = 0x17 NL80211_ATTR_MPATH_INFO = 0x1b NL80211_ATTR_MPATH_NEXT_HOP = 0x1a @@ -4371,6 +4486,7 @@ const ( NL80211_ATTR_NETNS_FD = 0xdb NL80211_ATTR_NOACK_MAP = 0x95 NL80211_ATTR_NSS = 0x106 + NL80211_ATTR_OBSS_COLOR_BITMAP = 0x12e NL80211_ATTR_OFFCHANNEL_TX_OK = 0x6c NL80211_ATTR_OPER_CLASS = 0xd6 NL80211_ATTR_OPMODE_NOTIF = 0xc2 @@ -4397,6 +4513,7 @@ const ( NL80211_ATTR_PROTOCOL_FEATURES = 0xad NL80211_ATTR_PS_STATE = 0x5d NL80211_ATTR_QOS_MAP = 0xc7 + NL80211_ATTR_RADAR_BACKGROUND = 0x134 NL80211_ATTR_RADAR_EVENT = 0xa8 NL80211_ATTR_REASON_CODE = 0x36 NL80211_ATTR_RECEIVE_MULTICAST = 0x121 @@ -4412,6 +4529,7 @@ const ( NL80211_ATTR_RESP_IE = 0x4e NL80211_ATTR_ROAM_SUPPORT = 0x83 NL80211_ATTR_RX_FRAME_TYPES = 0x64 + NL80211_ATTR_RX_HW_TIMESTAMP = 0x140 NL80211_ATTR_RXMGMT_FLAGS = 0xbc NL80211_ATTR_RX_SIGNAL_DBM = 0x97 NL80211_ATTR_S1G_CAPABILITY = 0x128 @@ -4469,6 +4587,7 @@ const ( 
NL80211_ATTR_SUPPORT_MESH_AUTH = 0x73 NL80211_ATTR_SURVEY_INFO = 0x54 NL80211_ATTR_SURVEY_RADIO_STATS = 0xda + NL80211_ATTR_TD_BITMAP = 0x141 NL80211_ATTR_TDLS_ACTION = 0x88 NL80211_ATTR_TDLS_DIALOG_TOKEN = 0x89 NL80211_ATTR_TDLS_EXTERNAL_SETUP = 0x8c @@ -4484,6 +4603,7 @@ const ( NL80211_ATTR_TSID = 0xd2 NL80211_ATTR_TWT_RESPONDER = 0x116 NL80211_ATTR_TX_FRAME_TYPES = 0x63 + NL80211_ATTR_TX_HW_TIMESTAMP = 0x13f NL80211_ATTR_TX_NO_CCK_RATE = 0x87 NL80211_ATTR_TXQ_LIMIT = 0x10a NL80211_ATTR_TXQ_MEMORY_LIMIT = 0x10b @@ -4557,6 +4677,10 @@ const ( NL80211_BAND_ATTR_RATES = 0x2 NL80211_BAND_ATTR_VHT_CAPA = 0x8 NL80211_BAND_ATTR_VHT_MCS_SET = 0x7 + NL80211_BAND_IFTYPE_ATTR_EHT_CAP_MAC = 0x8 + NL80211_BAND_IFTYPE_ATTR_EHT_CAP_MCS_SET = 0xa + NL80211_BAND_IFTYPE_ATTR_EHT_CAP_PHY = 0x9 + NL80211_BAND_IFTYPE_ATTR_EHT_CAP_PPE = 0xb NL80211_BAND_IFTYPE_ATTR_HE_6GHZ_CAPA = 0x6 NL80211_BAND_IFTYPE_ATTR_HE_CAP_MAC = 0x2 NL80211_BAND_IFTYPE_ATTR_HE_CAP_MCS_SET = 0x4 @@ -4564,6 +4688,8 @@ const ( NL80211_BAND_IFTYPE_ATTR_HE_CAP_PPE = 0x5 NL80211_BAND_IFTYPE_ATTR_IFTYPES = 0x1 NL80211_BAND_IFTYPE_ATTR_MAX = 0xb + NL80211_BAND_IFTYPE_ATTR_VENDOR_ELEMS = 0x7 + NL80211_BAND_LC = 0x5 NL80211_BAND_S1GHZ = 0x4 NL80211_BITRATE_ATTR_2GHZ_SHORTPREAMBLE = 0x2 NL80211_BITRATE_ATTR_MAX = 0x2 @@ -4584,7 +4710,9 @@ const ( NL80211_BSS_FREQUENCY_OFFSET = 0x14 NL80211_BSS_INFORMATION_ELEMENTS = 0x6 NL80211_BSS_LAST_SEEN_BOOTTIME = 0xf - NL80211_BSS_MAX = 0x14 + NL80211_BSS_MAX = 0x16 + NL80211_BSS_MLD_ADDR = 0x16 + NL80211_BSS_MLO_LINK_ID = 0x15 NL80211_BSS_PAD = 0x10 NL80211_BSS_PARENT_BSSID = 0x12 NL80211_BSS_PARENT_TSF = 0x11 @@ -4612,6 +4740,7 @@ const ( NL80211_CHAN_WIDTH_20 = 0x1 NL80211_CHAN_WIDTH_20_NOHT = 0x0 NL80211_CHAN_WIDTH_2 = 0x9 + NL80211_CHAN_WIDTH_320 = 0xd NL80211_CHAN_WIDTH_40 = 0x2 NL80211_CHAN_WIDTH_4 = 0xa NL80211_CHAN_WIDTH_5 = 0x6 @@ -4621,8 +4750,11 @@ const ( NL80211_CMD_ABORT_SCAN = 0x72 NL80211_CMD_ACTION = 0x3b NL80211_CMD_ACTION_TX_STATUS = 0x3c + 
NL80211_CMD_ADD_LINK = 0x94 + NL80211_CMD_ADD_LINK_STA = 0x96 NL80211_CMD_ADD_NAN_FUNCTION = 0x75 NL80211_CMD_ADD_TX_TS = 0x69 + NL80211_CMD_ASSOC_COMEBACK = 0x93 NL80211_CMD_ASSOCIATE = 0x26 NL80211_CMD_AUTHENTICATE = 0x25 NL80211_CMD_CANCEL_REMAIN_ON_CHANNEL = 0x38 @@ -4630,6 +4762,10 @@ const ( NL80211_CMD_CHANNEL_SWITCH = 0x66 NL80211_CMD_CH_SWITCH_NOTIFY = 0x58 NL80211_CMD_CH_SWITCH_STARTED_NOTIFY = 0x6e + NL80211_CMD_COLOR_CHANGE_ABORTED = 0x90 + NL80211_CMD_COLOR_CHANGE_COMPLETED = 0x91 + NL80211_CMD_COLOR_CHANGE_REQUEST = 0x8e + NL80211_CMD_COLOR_CHANGE_STARTED = 0x8f NL80211_CMD_CONNECT = 0x2e NL80211_CMD_CONN_FAILED = 0x5b NL80211_CMD_CONTROL_PORT_FRAME = 0x81 @@ -4678,8 +4814,9 @@ const ( NL80211_CMD_LEAVE_IBSS = 0x2c NL80211_CMD_LEAVE_MESH = 0x45 NL80211_CMD_LEAVE_OCB = 0x6d - NL80211_CMD_MAX = 0x93 + NL80211_CMD_MAX = 0x98 NL80211_CMD_MICHAEL_MIC_FAILURE = 0x29 + NL80211_CMD_MODIFY_LINK_STA = 0x97 NL80211_CMD_NAN_MATCH = 0x78 NL80211_CMD_NEW_BEACON = 0xf NL80211_CMD_NEW_INTERFACE = 0x7 @@ -4692,6 +4829,7 @@ const ( NL80211_CMD_NEW_WIPHY = 0x3 NL80211_CMD_NOTIFY_CQM = 0x40 NL80211_CMD_NOTIFY_RADAR = 0x86 + NL80211_CMD_OBSS_COLOR_COLLISION = 0x8d NL80211_CMD_PEER_MEASUREMENT_COMPLETE = 0x85 NL80211_CMD_PEER_MEASUREMENT_RESULT = 0x84 NL80211_CMD_PEER_MEASUREMENT_START = 0x83 @@ -4707,6 +4845,8 @@ const ( NL80211_CMD_REGISTER_FRAME = 0x3a NL80211_CMD_RELOAD_REGDB = 0x7e NL80211_CMD_REMAIN_ON_CHANNEL = 0x37 + NL80211_CMD_REMOVE_LINK = 0x95 + NL80211_CMD_REMOVE_LINK_STA = 0x98 NL80211_CMD_REQ_SET_REG = 0x1b NL80211_CMD_ROAM = 0x2f NL80211_CMD_SCAN_ABORTED = 0x23 @@ -4717,6 +4857,7 @@ const ( NL80211_CMD_SET_CHANNEL = 0x41 NL80211_CMD_SET_COALESCE = 0x65 NL80211_CMD_SET_CQM = 0x3f + NL80211_CMD_SET_FILS_AAD = 0x92 NL80211_CMD_SET_INTERFACE = 0x6 NL80211_CMD_SET_KEY = 0xa NL80211_CMD_SET_MAC_ACL = 0x5d @@ -4791,6 +4932,8 @@ const ( NL80211_EDMG_BW_CONFIG_MIN = 0x4 NL80211_EDMG_CHANNELS_MAX = 0x3c NL80211_EDMG_CHANNELS_MIN = 0x1 + NL80211_EHT_MAX_CAPABILITY_LEN 
= 0x33 + NL80211_EHT_MIN_CAPABILITY_LEN = 0xd NL80211_EXTERNAL_AUTH_ABORT = 0x1 NL80211_EXTERNAL_AUTH_START = 0x0 NL80211_EXT_FEATURE_4WAY_HANDSHAKE_AP_PSK = 0x32 @@ -4807,6 +4950,7 @@ const ( NL80211_EXT_FEATURE_BEACON_RATE_HT = 0x7 NL80211_EXT_FEATURE_BEACON_RATE_LEGACY = 0x6 NL80211_EXT_FEATURE_BEACON_RATE_VHT = 0x8 + NL80211_EXT_FEATURE_BSS_COLOR = 0x3a NL80211_EXT_FEATURE_BSS_PARENT_TSF = 0x4 NL80211_EXT_FEATURE_CAN_REPLACE_PTK0 = 0x1f NL80211_EXT_FEATURE_CONTROL_PORT_NO_PREAUTH = 0x2a @@ -4818,6 +4962,7 @@ const ( NL80211_EXT_FEATURE_DFS_OFFLOAD = 0x19 NL80211_EXT_FEATURE_ENABLE_FTM_RESPONDER = 0x20 NL80211_EXT_FEATURE_EXT_KEY_ID = 0x24 + NL80211_EXT_FEATURE_FILS_CRYPTO_OFFLOAD = 0x3b NL80211_EXT_FEATURE_FILS_DISCOVERY = 0x34 NL80211_EXT_FEATURE_FILS_MAX_CHANNEL_TIME = 0x11 NL80211_EXT_FEATURE_FILS_SK_OFFLOAD = 0xe @@ -4833,8 +4978,10 @@ const ( NL80211_EXT_FEATURE_OCE_PROBE_REQ_DEFERRAL_SUPPRESSION = 0x14 NL80211_EXT_FEATURE_OCE_PROBE_REQ_HIGH_TX_RATE = 0x13 NL80211_EXT_FEATURE_OPERATING_CHANNEL_VALIDATION = 0x31 + NL80211_EXT_FEATURE_POWERED_ADDR_CHANGE = 0x3d NL80211_EXT_FEATURE_PROTECTED_TWT = 0x2b NL80211_EXT_FEATURE_PROT_RANGE_NEGO_AND_MEASURE = 0x39 + NL80211_EXT_FEATURE_RADAR_BACKGROUND = 0x3c NL80211_EXT_FEATURE_RRM = 0x1 NL80211_EXT_FEATURE_SAE_OFFLOAD_AP = 0x33 NL80211_EXT_FEATURE_SAE_OFFLOAD = 0x26 @@ -4906,7 +5053,9 @@ const ( NL80211_FREQUENCY_ATTR_NO_10MHZ = 0x11 NL80211_FREQUENCY_ATTR_NO_160MHZ = 0xc NL80211_FREQUENCY_ATTR_NO_20MHZ = 0x10 + NL80211_FREQUENCY_ATTR_NO_320MHZ = 0x1a NL80211_FREQUENCY_ATTR_NO_80MHZ = 0xb + NL80211_FREQUENCY_ATTR_NO_EHT = 0x1b NL80211_FREQUENCY_ATTR_NO_HE = 0x13 NL80211_FREQUENCY_ATTR_NO_HT40_MINUS = 0x9 NL80211_FREQUENCY_ATTR_NO_HT40_PLUS = 0xa @@ -5006,6 +5155,12 @@ const ( NL80211_MAX_SUPP_HT_RATES = 0x4d NL80211_MAX_SUPP_RATES = 0x20 NL80211_MAX_SUPP_REG_RULES = 0x80 + NL80211_MBSSID_CONFIG_ATTR_EMA = 0x5 + NL80211_MBSSID_CONFIG_ATTR_INDEX = 0x3 + NL80211_MBSSID_CONFIG_ATTR_MAX = 0x5 + 
NL80211_MBSSID_CONFIG_ATTR_MAX_EMA_PROFILE_PERIODICITY = 0x2 + NL80211_MBSSID_CONFIG_ATTR_MAX_INTERFACES = 0x1 + NL80211_MBSSID_CONFIG_ATTR_TX_IFINDEX = 0x4 NL80211_MESHCONF_ATTR_MAX = 0x1f NL80211_MESHCONF_AUTO_OPEN_PLINKS = 0x7 NL80211_MESHCONF_AWAKE_WINDOW = 0x1b @@ -5168,6 +5323,7 @@ const ( NL80211_PMSR_FTM_FAILURE_UNSPECIFIED = 0x0 NL80211_PMSR_FTM_FAILURE_WRONG_CHANNEL = 0x3 NL80211_PMSR_FTM_REQ_ATTR_ASAP = 0x1 + NL80211_PMSR_FTM_REQ_ATTR_BSS_COLOR = 0xd NL80211_PMSR_FTM_REQ_ATTR_BURST_DURATION = 0x5 NL80211_PMSR_FTM_REQ_ATTR_BURST_PERIOD = 0x4 NL80211_PMSR_FTM_REQ_ATTR_FTMS_PER_BURST = 0x6 @@ -5244,12 +5400,36 @@ const ( NL80211_RADAR_PRE_CAC_EXPIRED = 0x4 NL80211_RATE_INFO_10_MHZ_WIDTH = 0xb NL80211_RATE_INFO_160_MHZ_WIDTH = 0xa + NL80211_RATE_INFO_320_MHZ_WIDTH = 0x12 NL80211_RATE_INFO_40_MHZ_WIDTH = 0x3 NL80211_RATE_INFO_5_MHZ_WIDTH = 0xc NL80211_RATE_INFO_80_MHZ_WIDTH = 0x8 NL80211_RATE_INFO_80P80_MHZ_WIDTH = 0x9 NL80211_RATE_INFO_BITRATE32 = 0x5 NL80211_RATE_INFO_BITRATE = 0x1 + NL80211_RATE_INFO_EHT_GI_0_8 = 0x0 + NL80211_RATE_INFO_EHT_GI_1_6 = 0x1 + NL80211_RATE_INFO_EHT_GI_3_2 = 0x2 + NL80211_RATE_INFO_EHT_GI = 0x15 + NL80211_RATE_INFO_EHT_MCS = 0x13 + NL80211_RATE_INFO_EHT_NSS = 0x14 + NL80211_RATE_INFO_EHT_RU_ALLOC_106 = 0x3 + NL80211_RATE_INFO_EHT_RU_ALLOC_106P26 = 0x4 + NL80211_RATE_INFO_EHT_RU_ALLOC_242 = 0x5 + NL80211_RATE_INFO_EHT_RU_ALLOC_26 = 0x0 + NL80211_RATE_INFO_EHT_RU_ALLOC_2x996 = 0xb + NL80211_RATE_INFO_EHT_RU_ALLOC_2x996P484 = 0xc + NL80211_RATE_INFO_EHT_RU_ALLOC_3x996 = 0xd + NL80211_RATE_INFO_EHT_RU_ALLOC_3x996P484 = 0xe + NL80211_RATE_INFO_EHT_RU_ALLOC_484 = 0x6 + NL80211_RATE_INFO_EHT_RU_ALLOC_484P242 = 0x7 + NL80211_RATE_INFO_EHT_RU_ALLOC_4x996 = 0xf + NL80211_RATE_INFO_EHT_RU_ALLOC_52 = 0x1 + NL80211_RATE_INFO_EHT_RU_ALLOC_52P26 = 0x2 + NL80211_RATE_INFO_EHT_RU_ALLOC_996 = 0x8 + NL80211_RATE_INFO_EHT_RU_ALLOC_996P484 = 0x9 + NL80211_RATE_INFO_EHT_RU_ALLOC_996P484P242 = 0xa + NL80211_RATE_INFO_EHT_RU_ALLOC = 0x16 
NL80211_RATE_INFO_HE_1XLTF = 0x0 NL80211_RATE_INFO_HE_2XLTF = 0x1 NL80211_RATE_INFO_HE_4XLTF = 0x2 @@ -5292,6 +5472,7 @@ const ( NL80211_RRF_GO_CONCURRENT = 0x1000 NL80211_RRF_IR_CONCURRENT = 0x1000 NL80211_RRF_NO_160MHZ = 0x10000 + NL80211_RRF_NO_320MHZ = 0x40000 NL80211_RRF_NO_80MHZ = 0x8000 NL80211_RRF_NO_CCK = 0x2 NL80211_RRF_NO_HE = 0x20000 @@ -5607,3 +5788,25 @@ const ( AUDIT_NLGRP_NONE = 0x0 AUDIT_NLGRP_READLOG = 0x1 ) + +const ( + TUN_F_CSUM = 0x1 + TUN_F_TSO4 = 0x2 + TUN_F_TSO6 = 0x4 + TUN_F_TSO_ECN = 0x8 + TUN_F_UFO = 0x10 +) + +const ( + VIRTIO_NET_HDR_F_NEEDS_CSUM = 0x1 + VIRTIO_NET_HDR_F_DATA_VALID = 0x2 + VIRTIO_NET_HDR_F_RSC_INFO = 0x4 +) + +const ( + VIRTIO_NET_HDR_GSO_NONE = 0x0 + VIRTIO_NET_HDR_GSO_TCPV4 = 0x1 + VIRTIO_NET_HDR_GSO_UDP = 0x3 + VIRTIO_NET_HDR_GSO_TCPV6 = 0x4 + VIRTIO_NET_HDR_GSO_ECN = 0x80 +) diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_linux_386.go b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_linux_386.go index 89c516a29acf..4ecc1495cd0a 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_linux_386.go +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_linux_386.go @@ -414,7 +414,7 @@ const ( type SockaddrStorage struct { Family uint16 - _ [122]int8 + Data [122]byte _ uint32 } diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_linux_amd64.go b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_linux_amd64.go index 62b4fb269963..34fddff964e9 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_linux_amd64.go +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_linux_amd64.go @@ -427,7 +427,7 @@ const ( type SockaddrStorage struct { Family uint16 - _ [118]int8 + Data [118]byte _ uint64 } diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_linux_arm.go b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_linux_arm.go index e86b35893ece..3b14a6031f3f 100644 --- 
a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_linux_arm.go +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_linux_arm.go @@ -405,7 +405,7 @@ const ( type SockaddrStorage struct { Family uint16 - _ [122]uint8 + Data [122]byte _ uint32 } diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_linux_arm64.go b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_linux_arm64.go index 6c6be4c911d8..0517651ab3f9 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_linux_arm64.go +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_linux_arm64.go @@ -406,7 +406,7 @@ const ( type SockaddrStorage struct { Family uint16 - _ [118]int8 + Data [118]byte _ uint64 } diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_linux_loong64.go b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_linux_loong64.go index 4982ea355a28..3b0c51813452 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_linux_loong64.go +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_linux_loong64.go @@ -407,7 +407,7 @@ const ( type SockaddrStorage struct { Family uint16 - _ [118]int8 + Data [118]byte _ uint64 } diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_linux_mips.go b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_linux_mips.go index 173141a67032..fccdf4dd0f46 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_linux_mips.go +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_linux_mips.go @@ -410,7 +410,7 @@ const ( type SockaddrStorage struct { Family uint16 - _ [122]int8 + Data [122]byte _ uint32 } diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_linux_mips64.go b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_linux_mips64.go index 93ae4c51673d..500de8fc07db 100644 --- 
a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_linux_mips64.go +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_linux_mips64.go @@ -409,7 +409,7 @@ const ( type SockaddrStorage struct { Family uint16 - _ [118]int8 + Data [118]byte _ uint64 } diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_linux_mips64le.go b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_linux_mips64le.go index 4e4e510ca519..d0434cd2c6db 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_linux_mips64le.go +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_linux_mips64le.go @@ -409,7 +409,7 @@ const ( type SockaddrStorage struct { Family uint16 - _ [118]int8 + Data [118]byte _ uint64 } diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_linux_mipsle.go b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_linux_mipsle.go index 3f5ba013d995..84206ba5347a 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_linux_mipsle.go +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_linux_mipsle.go @@ -410,7 +410,7 @@ const ( type SockaddrStorage struct { Family uint16 - _ [122]int8 + Data [122]byte _ uint32 } diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_linux_ppc.go b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_linux_ppc.go index 71dfe7cdb47a..ab078cf1f51d 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_linux_ppc.go +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_linux_ppc.go @@ -417,7 +417,7 @@ const ( type SockaddrStorage struct { Family uint16 - _ [122]uint8 + Data [122]byte _ uint32 } diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_linux_ppc64.go b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_linux_ppc64.go index 3a2b7f0a666e..42eb2c4cefd6 100644 --- 
a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_linux_ppc64.go +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_linux_ppc64.go @@ -416,7 +416,7 @@ const ( type SockaddrStorage struct { Family uint16 - _ [118]uint8 + Data [118]byte _ uint64 } diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_linux_ppc64le.go b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_linux_ppc64le.go index a52d62756328..31304a4e8bb5 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_linux_ppc64le.go +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_linux_ppc64le.go @@ -416,7 +416,7 @@ const ( type SockaddrStorage struct { Family uint16 - _ [118]uint8 + Data [118]byte _ uint64 } diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_linux_riscv64.go b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_linux_riscv64.go index dfc007d8a691..c311f9612d88 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_linux_riscv64.go +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_linux_riscv64.go @@ -434,7 +434,7 @@ const ( type SockaddrStorage struct { Family uint16 - _ [118]uint8 + Data [118]byte _ uint64 } diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_linux_s390x.go b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_linux_s390x.go index b53cb9103d30..bba3cefac1dd 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_linux_s390x.go +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_linux_s390x.go @@ -429,7 +429,7 @@ const ( type SockaddrStorage struct { Family uint16 - _ [118]int8 + Data [118]byte _ uint64 } diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_linux_sparc64.go b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_linux_sparc64.go index fe0aa3547280..ad8a01380461 100644 --- 
a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_linux_sparc64.go +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_linux_sparc64.go @@ -411,7 +411,7 @@ const ( type SockaddrStorage struct { Family uint16 - _ [118]int8 + Data [118]byte _ uint64 } diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_netbsd_386.go b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_netbsd_386.go index 2fd2060e617a..9bc4c8f9d889 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_netbsd_386.go +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_netbsd_386.go @@ -491,6 +491,90 @@ type Utsname struct { Machine [256]byte } +const SizeofUvmexp = 0x278 + +type Uvmexp struct { + Pagesize int64 + Pagemask int64 + Pageshift int64 + Npages int64 + Free int64 + Active int64 + Inactive int64 + Paging int64 + Wired int64 + Zeropages int64 + Reserve_pagedaemon int64 + Reserve_kernel int64 + Freemin int64 + Freetarg int64 + Inactarg int64 + Wiredmax int64 + Nswapdev int64 + Swpages int64 + Swpginuse int64 + Swpgonly int64 + Nswget int64 + Unused1 int64 + Cpuhit int64 + Cpumiss int64 + Faults int64 + Traps int64 + Intrs int64 + Swtch int64 + Softs int64 + Syscalls int64 + Pageins int64 + Swapins int64 + Swapouts int64 + Pgswapin int64 + Pgswapout int64 + Forks int64 + Forks_ppwait int64 + Forks_sharevm int64 + Pga_zerohit int64 + Pga_zeromiss int64 + Zeroaborts int64 + Fltnoram int64 + Fltnoanon int64 + Fltpgwait int64 + Fltpgrele int64 + Fltrelck int64 + Fltrelckok int64 + Fltanget int64 + Fltanretry int64 + Fltamcopy int64 + Fltnamap int64 + Fltnomap int64 + Fltlget int64 + Fltget int64 + Flt_anon int64 + Flt_acow int64 + Flt_obj int64 + Flt_prcopy int64 + Flt_przero int64 + Pdwoke int64 + Pdrevs int64 + Unused4 int64 + Pdfreed int64 + Pdscans int64 + Pdanscan int64 + Pdobscan int64 + Pdreact int64 + Pdbusy int64 + Pdpageouts int64 + Pdpending int64 + Pddeact int64 + Anonpages int64 + Filepages int64 + 
Execpages int64 + Colorhit int64 + Colormiss int64 + Ncolors int64 + Bootpages int64 + Poolpages int64 +} + const SizeofClockinfo = 0x14 type Clockinfo struct { diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_netbsd_amd64.go b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_netbsd_amd64.go index 6a5a1a8ae556..bb05f655d225 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_netbsd_amd64.go +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_netbsd_amd64.go @@ -499,6 +499,90 @@ type Utsname struct { Machine [256]byte } +const SizeofUvmexp = 0x278 + +type Uvmexp struct { + Pagesize int64 + Pagemask int64 + Pageshift int64 + Npages int64 + Free int64 + Active int64 + Inactive int64 + Paging int64 + Wired int64 + Zeropages int64 + Reserve_pagedaemon int64 + Reserve_kernel int64 + Freemin int64 + Freetarg int64 + Inactarg int64 + Wiredmax int64 + Nswapdev int64 + Swpages int64 + Swpginuse int64 + Swpgonly int64 + Nswget int64 + Unused1 int64 + Cpuhit int64 + Cpumiss int64 + Faults int64 + Traps int64 + Intrs int64 + Swtch int64 + Softs int64 + Syscalls int64 + Pageins int64 + Swapins int64 + Swapouts int64 + Pgswapin int64 + Pgswapout int64 + Forks int64 + Forks_ppwait int64 + Forks_sharevm int64 + Pga_zerohit int64 + Pga_zeromiss int64 + Zeroaborts int64 + Fltnoram int64 + Fltnoanon int64 + Fltpgwait int64 + Fltpgrele int64 + Fltrelck int64 + Fltrelckok int64 + Fltanget int64 + Fltanretry int64 + Fltamcopy int64 + Fltnamap int64 + Fltnomap int64 + Fltlget int64 + Fltget int64 + Flt_anon int64 + Flt_acow int64 + Flt_obj int64 + Flt_prcopy int64 + Flt_przero int64 + Pdwoke int64 + Pdrevs int64 + Unused4 int64 + Pdfreed int64 + Pdscans int64 + Pdanscan int64 + Pdobscan int64 + Pdreact int64 + Pdbusy int64 + Pdpageouts int64 + Pdpending int64 + Pddeact int64 + Anonpages int64 + Filepages int64 + Execpages int64 + Colorhit int64 + Colormiss int64 + Ncolors int64 + Bootpages int64 + Poolpages int64 +} + 
const SizeofClockinfo = 0x14 type Clockinfo struct { diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_netbsd_arm.go b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_netbsd_arm.go index 84cc8d01e656..db40e3a19c66 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_netbsd_arm.go +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_netbsd_arm.go @@ -496,6 +496,90 @@ type Utsname struct { Machine [256]byte } +const SizeofUvmexp = 0x278 + +type Uvmexp struct { + Pagesize int64 + Pagemask int64 + Pageshift int64 + Npages int64 + Free int64 + Active int64 + Inactive int64 + Paging int64 + Wired int64 + Zeropages int64 + Reserve_pagedaemon int64 + Reserve_kernel int64 + Freemin int64 + Freetarg int64 + Inactarg int64 + Wiredmax int64 + Nswapdev int64 + Swpages int64 + Swpginuse int64 + Swpgonly int64 + Nswget int64 + Unused1 int64 + Cpuhit int64 + Cpumiss int64 + Faults int64 + Traps int64 + Intrs int64 + Swtch int64 + Softs int64 + Syscalls int64 + Pageins int64 + Swapins int64 + Swapouts int64 + Pgswapin int64 + Pgswapout int64 + Forks int64 + Forks_ppwait int64 + Forks_sharevm int64 + Pga_zerohit int64 + Pga_zeromiss int64 + Zeroaborts int64 + Fltnoram int64 + Fltnoanon int64 + Fltpgwait int64 + Fltpgrele int64 + Fltrelck int64 + Fltrelckok int64 + Fltanget int64 + Fltanretry int64 + Fltamcopy int64 + Fltnamap int64 + Fltnomap int64 + Fltlget int64 + Fltget int64 + Flt_anon int64 + Flt_acow int64 + Flt_obj int64 + Flt_prcopy int64 + Flt_przero int64 + Pdwoke int64 + Pdrevs int64 + Unused4 int64 + Pdfreed int64 + Pdscans int64 + Pdanscan int64 + Pdobscan int64 + Pdreact int64 + Pdbusy int64 + Pdpageouts int64 + Pdpending int64 + Pddeact int64 + Anonpages int64 + Filepages int64 + Execpages int64 + Colorhit int64 + Colormiss int64 + Ncolors int64 + Bootpages int64 + Poolpages int64 +} + const SizeofClockinfo = 0x14 type Clockinfo struct { diff --git 
a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_netbsd_arm64.go b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_netbsd_arm64.go index c844e7096ff5..11121151ccf0 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_netbsd_arm64.go +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_netbsd_arm64.go @@ -499,6 +499,90 @@ type Utsname struct { Machine [256]byte } +const SizeofUvmexp = 0x278 + +type Uvmexp struct { + Pagesize int64 + Pagemask int64 + Pageshift int64 + Npages int64 + Free int64 + Active int64 + Inactive int64 + Paging int64 + Wired int64 + Zeropages int64 + Reserve_pagedaemon int64 + Reserve_kernel int64 + Freemin int64 + Freetarg int64 + Inactarg int64 + Wiredmax int64 + Nswapdev int64 + Swpages int64 + Swpginuse int64 + Swpgonly int64 + Nswget int64 + Unused1 int64 + Cpuhit int64 + Cpumiss int64 + Faults int64 + Traps int64 + Intrs int64 + Swtch int64 + Softs int64 + Syscalls int64 + Pageins int64 + Swapins int64 + Swapouts int64 + Pgswapin int64 + Pgswapout int64 + Forks int64 + Forks_ppwait int64 + Forks_sharevm int64 + Pga_zerohit int64 + Pga_zeromiss int64 + Zeroaborts int64 + Fltnoram int64 + Fltnoanon int64 + Fltpgwait int64 + Fltpgrele int64 + Fltrelck int64 + Fltrelckok int64 + Fltanget int64 + Fltanretry int64 + Fltamcopy int64 + Fltnamap int64 + Fltnomap int64 + Fltlget int64 + Fltget int64 + Flt_anon int64 + Flt_acow int64 + Flt_obj int64 + Flt_prcopy int64 + Flt_przero int64 + Pdwoke int64 + Pdrevs int64 + Unused4 int64 + Pdfreed int64 + Pdscans int64 + Pdanscan int64 + Pdobscan int64 + Pdreact int64 + Pdbusy int64 + Pdpageouts int64 + Pdpending int64 + Pddeact int64 + Anonpages int64 + Filepages int64 + Execpages int64 + Colorhit int64 + Colormiss int64 + Ncolors int64 + Bootpages int64 + Poolpages int64 +} + const SizeofClockinfo = 0x14 type Clockinfo struct { diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_openbsd_386.go 
b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_openbsd_386.go index 2ed718ca06a7..26eba23b729f 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_openbsd_386.go +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_openbsd_386.go @@ -58,22 +58,22 @@ type Rlimit struct { type _Gid_t uint32 type Stat_t struct { - Mode uint32 - Dev int32 - Ino uint64 - Nlink uint32 - Uid uint32 - Gid uint32 - Rdev int32 - Atim Timespec - Mtim Timespec - Ctim Timespec - Size int64 - Blocks int64 - Blksize uint32 - Flags uint32 - Gen uint32 - X__st_birthtim Timespec + Mode uint32 + Dev int32 + Ino uint64 + Nlink uint32 + Uid uint32 + Gid uint32 + Rdev int32 + Atim Timespec + Mtim Timespec + Ctim Timespec + Size int64 + Blocks int64 + Blksize int32 + Flags uint32 + Gen uint32 + _ Timespec } type Statfs_t struct { @@ -98,7 +98,7 @@ type Statfs_t struct { F_mntonname [90]byte F_mntfromname [90]byte F_mntfromspec [90]byte - Pad_cgo_0 [2]byte + _ [2]byte Mount_info [160]byte } @@ -111,13 +111,13 @@ type Flock_t struct { } type Dirent struct { - Fileno uint64 - Off int64 - Reclen uint16 - Type uint8 - Namlen uint8 - X__d_padding [4]uint8 - Name [256]int8 + Fileno uint64 + Off int64 + Reclen uint16 + Type uint8 + Namlen uint8 + _ [4]uint8 + Name [256]int8 } type Fsid struct { @@ -262,8 +262,8 @@ type FdSet struct { } const ( - SizeofIfMsghdr = 0xec - SizeofIfData = 0xd4 + SizeofIfMsghdr = 0xa0 + SizeofIfData = 0x88 SizeofIfaMsghdr = 0x18 SizeofIfAnnounceMsghdr = 0x1a SizeofRtMsghdr = 0x60 @@ -292,7 +292,7 @@ type IfData struct { Link_state uint8 Mtu uint32 Metric uint32 - Pad uint32 + Rdomain uint32 Baudrate uint64 Ipackets uint64 Ierrors uint64 @@ -304,10 +304,10 @@ type IfData struct { Imcasts uint64 Omcasts uint64 Iqdrops uint64 + Oqdrops uint64 Noproto uint64 Capabilities uint32 Lastchange Timeval - Mclpool [7]Mclpool } type IfaMsghdr struct { @@ -368,20 +368,12 @@ type RtMetrics struct { Pad uint32 } -type Mclpool struct { - Grown int32 - 
Alive uint16 - Hwm uint16 - Cwm uint16 - Lwm uint16 -} - const ( SizeofBpfVersion = 0x4 SizeofBpfStat = 0x8 SizeofBpfProgram = 0x8 SizeofBpfInsn = 0x8 - SizeofBpfHdr = 0x14 + SizeofBpfHdr = 0x18 ) type BpfVersion struct { @@ -407,11 +399,14 @@ type BpfInsn struct { } type BpfHdr struct { - Tstamp BpfTimeval - Caplen uint32 - Datalen uint32 - Hdrlen uint16 - Pad_cgo_0 [2]byte + Tstamp BpfTimeval + Caplen uint32 + Datalen uint32 + Hdrlen uint16 + Ifidx uint16 + Flowid uint16 + Flags uint8 + Drops uint8 } type BpfTimeval struct { @@ -488,7 +483,7 @@ type Uvmexp struct { Zeropages int32 Reserve_pagedaemon int32 Reserve_kernel int32 - Anonpages int32 + Unused01 int32 Vnodepages int32 Vtextpages int32 Freemin int32 @@ -507,8 +502,8 @@ type Uvmexp struct { Swpgonly int32 Nswget int32 Nanon int32 - Nanonneeded int32 - Nfreeanon int32 + Unused05 int32 + Unused06 int32 Faults int32 Traps int32 Intrs int32 @@ -516,8 +511,8 @@ type Uvmexp struct { Softs int32 Syscalls int32 Pageins int32 - Obsolete_swapins int32 - Obsolete_swapouts int32 + Unused07 int32 + Unused08 int32 Pgswapin int32 Pgswapout int32 Forks int32 @@ -525,7 +520,7 @@ type Uvmexp struct { Forks_sharevm int32 Pga_zerohit int32 Pga_zeromiss int32 - Zeroaborts int32 + Unused09 int32 Fltnoram int32 Fltnoanon int32 Fltnoamap int32 @@ -557,9 +552,9 @@ type Uvmexp struct { Pdpageouts int32 Pdpending int32 Pddeact int32 - Pdreanon int32 - Pdrevnode int32 - Pdrevtext int32 + Unused11 int32 + Unused12 int32 + Unused13 int32 Fpswtch int32 Kmapent int32 } diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_openbsd_amd64.go b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_openbsd_amd64.go index b4fb97ebe650..5a5479886989 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_openbsd_amd64.go +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_openbsd_amd64.go @@ -73,7 +73,6 @@ type Stat_t struct { Blksize int32 Flags uint32 Gen uint32 - _ [4]byte _ Timespec } 
@@ -81,7 +80,6 @@ type Statfs_t struct { F_flags uint32 F_bsize uint32 F_iosize uint32 - _ [4]byte F_blocks uint64 F_bfree uint64 F_bavail int64 @@ -200,10 +198,8 @@ type IPv6Mreq struct { type Msghdr struct { Name *byte Namelen uint32 - _ [4]byte Iov *Iovec Iovlen uint32 - _ [4]byte Control *byte Controllen uint32 Flags int32 @@ -311,7 +307,6 @@ type IfData struct { Oqdrops uint64 Noproto uint64 Capabilities uint32 - _ [4]byte Lastchange Timeval } @@ -373,14 +368,12 @@ type RtMetrics struct { Pad uint32 } -type Mclpool struct{} - const ( SizeofBpfVersion = 0x4 SizeofBpfStat = 0x8 SizeofBpfProgram = 0x10 SizeofBpfInsn = 0x8 - SizeofBpfHdr = 0x14 + SizeofBpfHdr = 0x18 ) type BpfVersion struct { @@ -395,7 +388,6 @@ type BpfStat struct { type BpfProgram struct { Len uint32 - _ [4]byte Insns *BpfInsn } @@ -411,7 +403,10 @@ type BpfHdr struct { Caplen uint32 Datalen uint32 Hdrlen uint16 - _ [2]byte + Ifidx uint16 + Flowid uint16 + Flags uint8 + Drops uint8 } type BpfTimeval struct { @@ -488,7 +483,7 @@ type Uvmexp struct { Zeropages int32 Reserve_pagedaemon int32 Reserve_kernel int32 - Anonpages int32 + Unused01 int32 Vnodepages int32 Vtextpages int32 Freemin int32 @@ -507,8 +502,8 @@ type Uvmexp struct { Swpgonly int32 Nswget int32 Nanon int32 - Nanonneeded int32 - Nfreeanon int32 + Unused05 int32 + Unused06 int32 Faults int32 Traps int32 Intrs int32 @@ -516,8 +511,8 @@ type Uvmexp struct { Softs int32 Syscalls int32 Pageins int32 - Obsolete_swapins int32 - Obsolete_swapouts int32 + Unused07 int32 + Unused08 int32 Pgswapin int32 Pgswapout int32 Forks int32 @@ -525,7 +520,7 @@ type Uvmexp struct { Forks_sharevm int32 Pga_zerohit int32 Pga_zeromiss int32 - Zeroaborts int32 + Unused09 int32 Fltnoram int32 Fltnoanon int32 Fltnoamap int32 @@ -557,9 +552,9 @@ type Uvmexp struct { Pdpageouts int32 Pdpending int32 Pddeact int32 - Pdreanon int32 - Pdrevnode int32 - Pdrevtext int32 + Unused11 int32 + Unused12 int32 + Unused13 int32 Fpswtch int32 Kmapent int32 } diff --git 
a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_openbsd_arm.go b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_openbsd_arm.go index 2c4675040ef3..be58c4e1ff8b 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_openbsd_arm.go +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_openbsd_arm.go @@ -375,14 +375,12 @@ type RtMetrics struct { Pad uint32 } -type Mclpool struct{} - const ( SizeofBpfVersion = 0x4 SizeofBpfStat = 0x8 SizeofBpfProgram = 0x8 SizeofBpfInsn = 0x8 - SizeofBpfHdr = 0x14 + SizeofBpfHdr = 0x18 ) type BpfVersion struct { @@ -412,7 +410,10 @@ type BpfHdr struct { Caplen uint32 Datalen uint32 Hdrlen uint16 - _ [2]byte + Ifidx uint16 + Flowid uint16 + Flags uint8 + Drops uint8 } type BpfTimeval struct { diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_openbsd_arm64.go b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_openbsd_arm64.go index ddee04514708..52338266cb3e 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_openbsd_arm64.go +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_openbsd_arm64.go @@ -368,14 +368,12 @@ type RtMetrics struct { Pad uint32 } -type Mclpool struct{} - const ( SizeofBpfVersion = 0x4 SizeofBpfStat = 0x8 SizeofBpfProgram = 0x10 SizeofBpfInsn = 0x8 - SizeofBpfHdr = 0x14 + SizeofBpfHdr = 0x18 ) type BpfVersion struct { @@ -405,7 +403,10 @@ type BpfHdr struct { Caplen uint32 Datalen uint32 Hdrlen uint16 - _ [2]byte + Ifidx uint16 + Flowid uint16 + Flags uint8 + Drops uint8 } type BpfTimeval struct { diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_openbsd_mips64.go b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_openbsd_mips64.go index eb13d4e8bfc2..605cfdb12b1d 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_openbsd_mips64.go +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/unix/ztypes_openbsd_mips64.go @@ -368,14 +368,12 @@ 
type RtMetrics struct { Pad uint32 } -type Mclpool struct{} - const ( SizeofBpfVersion = 0x4 SizeofBpfStat = 0x8 SizeofBpfProgram = 0x10 SizeofBpfInsn = 0x8 - SizeofBpfHdr = 0x14 + SizeofBpfHdr = 0x18 ) type BpfVersion struct { @@ -405,7 +403,10 @@ type BpfHdr struct { Caplen uint32 Datalen uint32 Hdrlen uint16 - _ [2]byte + Ifidx uint16 + Flowid uint16 + Flags uint8 + Drops uint8 } type BpfTimeval struct { diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/sys/windows/syscall_windows.go b/vertical-pod-autoscaler/vendor/golang.org/x/sys/windows/syscall_windows.go index a49853e9d3af..3723b2c224c8 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/sys/windows/syscall_windows.go +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/windows/syscall_windows.go @@ -10,7 +10,6 @@ import ( errorspkg "errors" "fmt" "runtime" - "strings" "sync" "syscall" "time" @@ -87,22 +86,13 @@ func StringToUTF16(s string) []uint16 { // s, with a terminating NUL added. If s contains a NUL byte at any // location, it returns (nil, syscall.EINVAL). func UTF16FromString(s string) ([]uint16, error) { - if strings.IndexByte(s, 0) != -1 { - return nil, syscall.EINVAL - } - return utf16.Encode([]rune(s + "\x00")), nil + return syscall.UTF16FromString(s) } // UTF16ToString returns the UTF-8 encoding of the UTF-16 sequence s, // with a terminating NUL and any bytes after the NUL removed. func UTF16ToString(s []uint16) string { - for i, v := range s { - if v == 0 { - s = s[:i] - break - } - } - return string(utf16.Decode(s)) + return syscall.UTF16ToString(s) } // StringToUTF16Ptr is deprecated. Use UTF16PtrFromString instead. 
@@ -834,6 +824,9 @@ const socket_error = uintptr(^uint32(0)) //sys WSAStartup(verreq uint32, data *WSAData) (sockerr error) = ws2_32.WSAStartup //sys WSACleanup() (err error) [failretval==socket_error] = ws2_32.WSACleanup //sys WSAIoctl(s Handle, iocc uint32, inbuf *byte, cbif uint32, outbuf *byte, cbob uint32, cbbr *uint32, overlapped *Overlapped, completionRoutine uintptr) (err error) [failretval==socket_error] = ws2_32.WSAIoctl +//sys WSALookupServiceBegin(querySet *WSAQUERYSET, flags uint32, handle *Handle) (err error) [failretval==socket_error] = ws2_32.WSALookupServiceBeginW +//sys WSALookupServiceNext(handle Handle, flags uint32, size *int32, querySet *WSAQUERYSET) (err error) [failretval==socket_error] = ws2_32.WSALookupServiceNextW +//sys WSALookupServiceEnd(handle Handle) (err error) [failretval==socket_error] = ws2_32.WSALookupServiceEnd //sys socket(af int32, typ int32, protocol int32) (handle Handle, err error) [failretval==InvalidHandle] = ws2_32.socket //sys sendto(s Handle, buf []byte, flags int32, to unsafe.Pointer, tolen int32) (err error) [failretval==socket_error] = ws2_32.sendto //sys recvfrom(s Handle, buf []byte, flags int32, from *RawSockaddrAny, fromlen *int32) (n int32, err error) [failretval==-1] = ws2_32.recvfrom @@ -1029,8 +1022,7 @@ func (rsa *RawSockaddrAny) Sockaddr() (Sockaddr, error) { for n < len(pp.Path) && pp.Path[n] != 0 { n++ } - bytes := (*[len(pp.Path)]byte)(unsafe.Pointer(&pp.Path[0]))[0:n] - sa.Name = string(bytes) + sa.Name = string(unsafe.Slice((*byte)(unsafe.Pointer(&pp.Path[0])), n)) return sa, nil case AF_INET: diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/sys/windows/types_windows.go b/vertical-pod-autoscaler/vendor/golang.org/x/sys/windows/types_windows.go index 0c4add974106..857acf1032d9 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/sys/windows/types_windows.go +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/windows/types_windows.go @@ -1243,6 +1243,51 @@ const ( DnsSectionAdditional = 
0x0003 ) +const ( + // flags of WSALookupService + LUP_DEEP = 0x0001 + LUP_CONTAINERS = 0x0002 + LUP_NOCONTAINERS = 0x0004 + LUP_NEAREST = 0x0008 + LUP_RETURN_NAME = 0x0010 + LUP_RETURN_TYPE = 0x0020 + LUP_RETURN_VERSION = 0x0040 + LUP_RETURN_COMMENT = 0x0080 + LUP_RETURN_ADDR = 0x0100 + LUP_RETURN_BLOB = 0x0200 + LUP_RETURN_ALIASES = 0x0400 + LUP_RETURN_QUERY_STRING = 0x0800 + LUP_RETURN_ALL = 0x0FF0 + LUP_RES_SERVICE = 0x8000 + + LUP_FLUSHCACHE = 0x1000 + LUP_FLUSHPREVIOUS = 0x2000 + + LUP_NON_AUTHORITATIVE = 0x4000 + LUP_SECURE = 0x8000 + LUP_RETURN_PREFERRED_NAMES = 0x10000 + LUP_DNS_ONLY = 0x20000 + + LUP_ADDRCONFIG = 0x100000 + LUP_DUAL_ADDR = 0x200000 + LUP_FILESERVER = 0x400000 + LUP_DISABLE_IDN_ENCODING = 0x00800000 + LUP_API_ANSI = 0x01000000 + + LUP_RESOLUTION_HANDLE = 0x80000000 +) + +const ( + // values of WSAQUERYSET's namespace + NS_ALL = 0 + NS_DNS = 12 + NS_NLA = 15 + NS_BTH = 16 + NS_EMAIL = 37 + NS_PNRPNAME = 38 + NS_PNRPCLOUD = 39 +) + type DNSSRVData struct { Target *uint16 Priority uint16 @@ -3258,3 +3303,43 @@ const ( DWMWA_TEXT_COLOR = 36 DWMWA_VISIBLE_FRAME_BORDER_THICKNESS = 37 ) + +type WSAQUERYSET struct { + Size uint32 + ServiceInstanceName *uint16 + ServiceClassId *GUID + Version *WSAVersion + Comment *uint16 + NameSpace uint32 + NSProviderId *GUID + Context *uint16 + NumberOfProtocols uint32 + AfpProtocols *AFProtocols + QueryString *uint16 + NumberOfCsAddrs uint32 + SaBuffer *CSAddrInfo + OutputFlags uint32 + Blob *BLOB +} + +type WSAVersion struct { + Version uint32 + EnumerationOfComparison int32 +} + +type AFProtocols struct { + AddressFamily int32 + Protocol int32 +} + +type CSAddrInfo struct { + LocalAddr SocketAddress + RemoteAddr SocketAddress + SocketType int32 + Protocol int32 +} + +type BLOB struct { + Size uint32 + BlobData *byte +} diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/sys/windows/zsyscall_windows.go b/vertical-pod-autoscaler/vendor/golang.org/x/sys/windows/zsyscall_windows.go index 
ac60052e44a7..6d2a268534d7 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/sys/windows/zsyscall_windows.go +++ b/vertical-pod-autoscaler/vendor/golang.org/x/sys/windows/zsyscall_windows.go @@ -474,6 +474,9 @@ var ( procWSAEnumProtocolsW = modws2_32.NewProc("WSAEnumProtocolsW") procWSAGetOverlappedResult = modws2_32.NewProc("WSAGetOverlappedResult") procWSAIoctl = modws2_32.NewProc("WSAIoctl") + procWSALookupServiceBeginW = modws2_32.NewProc("WSALookupServiceBeginW") + procWSALookupServiceEnd = modws2_32.NewProc("WSALookupServiceEnd") + procWSALookupServiceNextW = modws2_32.NewProc("WSALookupServiceNextW") procWSARecv = modws2_32.NewProc("WSARecv") procWSARecvFrom = modws2_32.NewProc("WSARecvFrom") procWSASend = modws2_32.NewProc("WSASend") @@ -4067,6 +4070,30 @@ func WSAIoctl(s Handle, iocc uint32, inbuf *byte, cbif uint32, outbuf *byte, cbo return } +func WSALookupServiceBegin(querySet *WSAQUERYSET, flags uint32, handle *Handle) (err error) { + r1, _, e1 := syscall.Syscall(procWSALookupServiceBeginW.Addr(), 3, uintptr(unsafe.Pointer(querySet)), uintptr(flags), uintptr(unsafe.Pointer(handle))) + if r1 == socket_error { + err = errnoErr(e1) + } + return +} + +func WSALookupServiceEnd(handle Handle) (err error) { + r1, _, e1 := syscall.Syscall(procWSALookupServiceEnd.Addr(), 1, uintptr(handle), 0, 0) + if r1 == socket_error { + err = errnoErr(e1) + } + return +} + +func WSALookupServiceNext(handle Handle, flags uint32, size *int32, querySet *WSAQUERYSET) (err error) { + r1, _, e1 := syscall.Syscall6(procWSALookupServiceNextW.Addr(), 4, uintptr(handle), uintptr(flags), uintptr(unsafe.Pointer(size)), uintptr(unsafe.Pointer(querySet)), 0, 0) + if r1 == socket_error { + err = errnoErr(e1) + } + return +} + func WSARecv(s Handle, bufs *WSABuf, bufcnt uint32, recvd *uint32, flags *uint32, overlapped *Overlapped, croutine *byte) (err error) { r1, _, e1 := syscall.Syscall9(procWSARecv.Addr(), 7, uintptr(s), uintptr(unsafe.Pointer(bufs)), uintptr(bufcnt), 
uintptr(unsafe.Pointer(recvd)), uintptr(unsafe.Pointer(flags)), uintptr(unsafe.Pointer(overlapped)), uintptr(unsafe.Pointer(croutine)), 0, 0) if r1 == socket_error { diff --git a/vertical-pod-autoscaler/vendor/golang.org/x/text/unicode/norm/forminfo.go b/vertical-pod-autoscaler/vendor/golang.org/x/text/unicode/norm/forminfo.go index d69ccb4f9761..487335d14d36 100644 --- a/vertical-pod-autoscaler/vendor/golang.org/x/text/unicode/norm/forminfo.go +++ b/vertical-pod-autoscaler/vendor/golang.org/x/text/unicode/norm/forminfo.go @@ -13,7 +13,7 @@ import "encoding/binary" // a rune to a uint16. The values take two forms. For v >= 0x8000: // bits // 15: 1 (inverse of NFD_QC bit of qcInfo) -// 13..7: qcInfo (see below). isYesD is always true (no decompostion). +// 13..7: qcInfo (see below). isYesD is always true (no decomposition). // 6..0: ccc (compressed CCC value). // For v < 0x8000, the respective rune has a decomposition and v is an index // into a byte array of UTF-8 decomposition sequences and additional info and diff --git a/vertical-pod-autoscaler/vendor/modules.txt b/vertical-pod-autoscaler/vendor/modules.txt index 90282d7b8179..b218be2a9417 100644 --- a/vertical-pod-autoscaler/vendor/modules.txt +++ b/vertical-pod-autoscaler/vendor/modules.txt @@ -133,7 +133,7 @@ github.com/stretchr/objx ## explicit; go 1.13 github.com/stretchr/testify/assert github.com/stretchr/testify/mock -# golang.org/x/net v0.3.1-0.20221206200815-1e63c2f08a10 +# golang.org/x/net v0.8.0 ## explicit; go 1.17 golang.org/x/net/context golang.org/x/net/context/ctxhttp @@ -145,16 +145,16 @@ golang.org/x/net/idna ## explicit; go 1.11 golang.org/x/oauth2 golang.org/x/oauth2/internal -# golang.org/x/sys v0.3.0 +# golang.org/x/sys v0.6.0 ## explicit; go 1.17 golang.org/x/sys/internal/unsafeheader golang.org/x/sys/plan9 golang.org/x/sys/unix golang.org/x/sys/windows -# golang.org/x/term v0.3.0 +# golang.org/x/term v0.6.0 ## explicit; go 1.17 golang.org/x/term -# golang.org/x/text v0.5.0 +# 
golang.org/x/text v0.8.0 ## explicit; go 1.17 golang.org/x/text/secure/bidirule golang.org/x/text/transform